Oct 14 03:17:03 np0005486808 kernel: Linux version 5.14.0-621.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Tue Sep 30 07:37:35 UTC 2025
Oct 14 03:17:03 np0005486808 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct 14 03:17:03 np0005486808 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 root=UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 14 03:17:03 np0005486808 kernel: BIOS-provided physical RAM map:
Oct 14 03:17:03 np0005486808 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 14 03:17:03 np0005486808 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 14 03:17:03 np0005486808 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 14 03:17:03 np0005486808 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Oct 14 03:17:03 np0005486808 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Oct 14 03:17:03 np0005486808 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 14 03:17:03 np0005486808 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 14 03:17:03 np0005486808 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Oct 14 03:17:03 np0005486808 kernel: NX (Execute Disable) protection: active
Oct 14 03:17:03 np0005486808 kernel: APIC: Static calls initialized
Oct 14 03:17:03 np0005486808 kernel: SMBIOS 2.8 present.
Oct 14 03:17:03 np0005486808 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Oct 14 03:17:03 np0005486808 kernel: Hypervisor detected: KVM
Oct 14 03:17:03 np0005486808 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 14 03:17:03 np0005486808 kernel: kvm-clock: using sched offset of 4315048700 cycles
Oct 14 03:17:03 np0005486808 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 14 03:17:03 np0005486808 kernel: tsc: Detected 2800.000 MHz processor
Oct 14 03:17:03 np0005486808 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Oct 14 03:17:03 np0005486808 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 14 03:17:03 np0005486808 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct 14 03:17:03 np0005486808 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Oct 14 03:17:03 np0005486808 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Oct 14 03:17:03 np0005486808 kernel: Using GB pages for direct mapping
Oct 14 03:17:03 np0005486808 kernel: RAMDISK: [mem 0x2d858000-0x32c23fff]
Oct 14 03:17:03 np0005486808 kernel: ACPI: Early table checksum verification disabled
Oct 14 03:17:03 np0005486808 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Oct 14 03:17:03 np0005486808 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 14 03:17:03 np0005486808 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 14 03:17:03 np0005486808 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 14 03:17:03 np0005486808 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Oct 14 03:17:03 np0005486808 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 14 03:17:03 np0005486808 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 14 03:17:03 np0005486808 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Oct 14 03:17:03 np0005486808 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Oct 14 03:17:03 np0005486808 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Oct 14 03:17:03 np0005486808 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Oct 14 03:17:03 np0005486808 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Oct 14 03:17:03 np0005486808 kernel: No NUMA configuration found
Oct 14 03:17:03 np0005486808 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Oct 14 03:17:03 np0005486808 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Oct 14 03:17:03 np0005486808 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Oct 14 03:17:03 np0005486808 kernel: Zone ranges:
Oct 14 03:17:03 np0005486808 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct 14 03:17:03 np0005486808 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Oct 14 03:17:03 np0005486808 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Oct 14 03:17:03 np0005486808 kernel:  Device   empty
Oct 14 03:17:03 np0005486808 kernel: Movable zone start for each node
Oct 14 03:17:03 np0005486808 kernel: Early memory node ranges
Oct 14 03:17:03 np0005486808 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct 14 03:17:03 np0005486808 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Oct 14 03:17:03 np0005486808 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Oct 14 03:17:03 np0005486808 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Oct 14 03:17:03 np0005486808 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 14 03:17:03 np0005486808 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 14 03:17:03 np0005486808 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct 14 03:17:03 np0005486808 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 14 03:17:03 np0005486808 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 14 03:17:03 np0005486808 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 14 03:17:03 np0005486808 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 14 03:17:03 np0005486808 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 14 03:17:03 np0005486808 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 14 03:17:03 np0005486808 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 14 03:17:03 np0005486808 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 14 03:17:03 np0005486808 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 14 03:17:03 np0005486808 kernel: TSC deadline timer available
Oct 14 03:17:03 np0005486808 kernel: CPU topo: Max. logical packages:   8
Oct 14 03:17:03 np0005486808 kernel: CPU topo: Max. logical dies:       8
Oct 14 03:17:03 np0005486808 kernel: CPU topo: Max. dies per package:   1
Oct 14 03:17:03 np0005486808 kernel: CPU topo: Max. threads per core:   1
Oct 14 03:17:03 np0005486808 kernel: CPU topo: Num. cores per package:     1
Oct 14 03:17:03 np0005486808 kernel: CPU topo: Num. threads per package:   1
Oct 14 03:17:03 np0005486808 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Oct 14 03:17:03 np0005486808 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 14 03:17:03 np0005486808 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct 14 03:17:03 np0005486808 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct 14 03:17:03 np0005486808 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct 14 03:17:03 np0005486808 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct 14 03:17:03 np0005486808 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Oct 14 03:17:03 np0005486808 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Oct 14 03:17:03 np0005486808 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct 14 03:17:03 np0005486808 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct 14 03:17:03 np0005486808 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct 14 03:17:03 np0005486808 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Oct 14 03:17:03 np0005486808 kernel: Booting paravirtualized kernel on KVM
Oct 14 03:17:03 np0005486808 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 14 03:17:03 np0005486808 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Oct 14 03:17:03 np0005486808 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Oct 14 03:17:03 np0005486808 kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 14 03:17:03 np0005486808 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 root=UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 14 03:17:03 np0005486808 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64", will be passed to user space.
Oct 14 03:17:03 np0005486808 kernel: random: crng init done
Oct 14 03:17:03 np0005486808 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct 14 03:17:03 np0005486808 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 14 03:17:03 np0005486808 kernel: Fallback order for Node 0: 0 
Oct 14 03:17:03 np0005486808 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Oct 14 03:17:03 np0005486808 kernel: Policy zone: Normal
Oct 14 03:17:03 np0005486808 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 14 03:17:03 np0005486808 kernel: software IO TLB: area num 8.
Oct 14 03:17:03 np0005486808 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Oct 14 03:17:03 np0005486808 kernel: ftrace: allocating 49162 entries in 193 pages
Oct 14 03:17:03 np0005486808 kernel: ftrace: allocated 193 pages with 3 groups
Oct 14 03:17:03 np0005486808 kernel: Dynamic Preempt: voluntary
Oct 14 03:17:03 np0005486808 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 14 03:17:03 np0005486808 kernel: rcu: 	RCU event tracing is enabled.
Oct 14 03:17:03 np0005486808 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Oct 14 03:17:03 np0005486808 kernel: 	Trampoline variant of Tasks RCU enabled.
Oct 14 03:17:03 np0005486808 kernel: 	Rude variant of Tasks RCU enabled.
Oct 14 03:17:03 np0005486808 kernel: 	Tracing variant of Tasks RCU enabled.
Oct 14 03:17:03 np0005486808 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 14 03:17:03 np0005486808 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Oct 14 03:17:03 np0005486808 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 14 03:17:03 np0005486808 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 14 03:17:03 np0005486808 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 14 03:17:03 np0005486808 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Oct 14 03:17:03 np0005486808 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 14 03:17:03 np0005486808 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct 14 03:17:03 np0005486808 kernel: Console: colour VGA+ 80x25
Oct 14 03:17:03 np0005486808 kernel: printk: console [ttyS0] enabled
Oct 14 03:17:03 np0005486808 kernel: ACPI: Core revision 20230331
Oct 14 03:17:03 np0005486808 kernel: APIC: Switch to symmetric I/O mode setup
Oct 14 03:17:03 np0005486808 kernel: x2apic enabled
Oct 14 03:17:03 np0005486808 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 14 03:17:03 np0005486808 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 14 03:17:03 np0005486808 kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Oct 14 03:17:03 np0005486808 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 14 03:17:03 np0005486808 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 14 03:17:03 np0005486808 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 14 03:17:03 np0005486808 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 14 03:17:03 np0005486808 kernel: Spectre V2 : Mitigation: Retpolines
Oct 14 03:17:03 np0005486808 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 14 03:17:03 np0005486808 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 14 03:17:03 np0005486808 kernel: RETBleed: Mitigation: untrained return thunk
Oct 14 03:17:03 np0005486808 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 14 03:17:03 np0005486808 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 14 03:17:03 np0005486808 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 14 03:17:03 np0005486808 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 14 03:17:03 np0005486808 kernel: x86/bugs: return thunk changed
Oct 14 03:17:03 np0005486808 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 14 03:17:03 np0005486808 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 14 03:17:03 np0005486808 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 14 03:17:03 np0005486808 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 14 03:17:03 np0005486808 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct 14 03:17:03 np0005486808 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 14 03:17:03 np0005486808 kernel: Freeing SMP alternatives memory: 40K
Oct 14 03:17:03 np0005486808 kernel: pid_max: default: 32768 minimum: 301
Oct 14 03:17:03 np0005486808 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Oct 14 03:17:03 np0005486808 kernel: landlock: Up and running.
Oct 14 03:17:03 np0005486808 kernel: Yama: becoming mindful.
Oct 14 03:17:03 np0005486808 kernel: SELinux:  Initializing.
Oct 14 03:17:03 np0005486808 kernel: LSM support for eBPF active
Oct 14 03:17:03 np0005486808 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 14 03:17:03 np0005486808 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 14 03:17:03 np0005486808 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 14 03:17:03 np0005486808 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 14 03:17:03 np0005486808 kernel: ... version:                0
Oct 14 03:17:03 np0005486808 kernel: ... bit width:              48
Oct 14 03:17:03 np0005486808 kernel: ... generic registers:      6
Oct 14 03:17:03 np0005486808 kernel: ... value mask:             0000ffffffffffff
Oct 14 03:17:03 np0005486808 kernel: ... max period:             00007fffffffffff
Oct 14 03:17:03 np0005486808 kernel: ... fixed-purpose events:   0
Oct 14 03:17:03 np0005486808 kernel: ... event mask:             000000000000003f
Oct 14 03:17:03 np0005486808 kernel: signal: max sigframe size: 1776
Oct 14 03:17:03 np0005486808 kernel: rcu: Hierarchical SRCU implementation.
Oct 14 03:17:03 np0005486808 kernel: rcu: 	Max phase no-delay instances is 400.
Oct 14 03:17:03 np0005486808 kernel: smp: Bringing up secondary CPUs ...
Oct 14 03:17:03 np0005486808 kernel: smpboot: x86: Booting SMP configuration:
Oct 14 03:17:03 np0005486808 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Oct 14 03:17:03 np0005486808 kernel: smp: Brought up 1 node, 8 CPUs
Oct 14 03:17:03 np0005486808 kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Oct 14 03:17:03 np0005486808 kernel: node 0 deferred pages initialised in 10ms
Oct 14 03:17:03 np0005486808 kernel: Memory: 7765956K/8388068K available (16384K kernel code, 5784K rwdata, 13864K rodata, 4188K init, 7196K bss, 616208K reserved, 0K cma-reserved)
Oct 14 03:17:03 np0005486808 kernel: devtmpfs: initialized
Oct 14 03:17:03 np0005486808 kernel: x86/mm: Memory block size: 128MB
Oct 14 03:17:03 np0005486808 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 14 03:17:03 np0005486808 kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Oct 14 03:17:03 np0005486808 kernel: pinctrl core: initialized pinctrl subsystem
Oct 14 03:17:03 np0005486808 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 14 03:17:03 np0005486808 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Oct 14 03:17:03 np0005486808 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 14 03:17:03 np0005486808 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 14 03:17:03 np0005486808 kernel: audit: initializing netlink subsys (disabled)
Oct 14 03:17:03 np0005486808 kernel: audit: type=2000 audit(1760426220.883:1): state=initialized audit_enabled=0 res=1
Oct 14 03:17:03 np0005486808 kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct 14 03:17:03 np0005486808 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 14 03:17:03 np0005486808 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 14 03:17:03 np0005486808 kernel: cpuidle: using governor menu
Oct 14 03:17:03 np0005486808 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 14 03:17:03 np0005486808 kernel: PCI: Using configuration type 1 for base access
Oct 14 03:17:03 np0005486808 kernel: PCI: Using configuration type 1 for extended access
Oct 14 03:17:03 np0005486808 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 14 03:17:03 np0005486808 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 14 03:17:03 np0005486808 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 14 03:17:03 np0005486808 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 14 03:17:03 np0005486808 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 14 03:17:03 np0005486808 kernel: Demotion targets for Node 0: null
Oct 14 03:17:03 np0005486808 kernel: cryptd: max_cpu_qlen set to 1000
Oct 14 03:17:03 np0005486808 kernel: ACPI: Added _OSI(Module Device)
Oct 14 03:17:03 np0005486808 kernel: ACPI: Added _OSI(Processor Device)
Oct 14 03:17:03 np0005486808 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 14 03:17:03 np0005486808 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 14 03:17:03 np0005486808 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 14 03:17:03 np0005486808 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 14 03:17:03 np0005486808 kernel: ACPI: Interpreter enabled
Oct 14 03:17:03 np0005486808 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Oct 14 03:17:03 np0005486808 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 14 03:17:03 np0005486808 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 14 03:17:03 np0005486808 kernel: PCI: Using E820 reservations for host bridge windows
Oct 14 03:17:03 np0005486808 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct 14 03:17:03 np0005486808 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 14 03:17:03 np0005486808 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct 14 03:17:03 np0005486808 kernel: acpiphp: Slot [3] registered
Oct 14 03:17:03 np0005486808 kernel: acpiphp: Slot [4] registered
Oct 14 03:17:03 np0005486808 kernel: acpiphp: Slot [5] registered
Oct 14 03:17:03 np0005486808 kernel: acpiphp: Slot [6] registered
Oct 14 03:17:03 np0005486808 kernel: acpiphp: Slot [7] registered
Oct 14 03:17:03 np0005486808 kernel: acpiphp: Slot [8] registered
Oct 14 03:17:03 np0005486808 kernel: acpiphp: Slot [9] registered
Oct 14 03:17:03 np0005486808 kernel: acpiphp: Slot [10] registered
Oct 14 03:17:03 np0005486808 kernel: acpiphp: Slot [11] registered
Oct 14 03:17:03 np0005486808 kernel: acpiphp: Slot [12] registered
Oct 14 03:17:03 np0005486808 kernel: acpiphp: Slot [13] registered
Oct 14 03:17:03 np0005486808 kernel: acpiphp: Slot [14] registered
Oct 14 03:17:03 np0005486808 kernel: acpiphp: Slot [15] registered
Oct 14 03:17:03 np0005486808 kernel: acpiphp: Slot [16] registered
Oct 14 03:17:03 np0005486808 kernel: acpiphp: Slot [17] registered
Oct 14 03:17:03 np0005486808 kernel: acpiphp: Slot [18] registered
Oct 14 03:17:03 np0005486808 kernel: acpiphp: Slot [19] registered
Oct 14 03:17:03 np0005486808 kernel: acpiphp: Slot [20] registered
Oct 14 03:17:03 np0005486808 kernel: acpiphp: Slot [21] registered
Oct 14 03:17:03 np0005486808 kernel: acpiphp: Slot [22] registered
Oct 14 03:17:03 np0005486808 kernel: acpiphp: Slot [23] registered
Oct 14 03:17:03 np0005486808 kernel: acpiphp: Slot [24] registered
Oct 14 03:17:03 np0005486808 kernel: acpiphp: Slot [25] registered
Oct 14 03:17:03 np0005486808 kernel: acpiphp: Slot [26] registered
Oct 14 03:17:03 np0005486808 kernel: acpiphp: Slot [27] registered
Oct 14 03:17:03 np0005486808 kernel: acpiphp: Slot [28] registered
Oct 14 03:17:03 np0005486808 kernel: acpiphp: Slot [29] registered
Oct 14 03:17:03 np0005486808 kernel: acpiphp: Slot [30] registered
Oct 14 03:17:03 np0005486808 kernel: acpiphp: Slot [31] registered
Oct 14 03:17:03 np0005486808 kernel: PCI host bridge to bus 0000:00
Oct 14 03:17:03 np0005486808 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct 14 03:17:03 np0005486808 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct 14 03:17:03 np0005486808 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 14 03:17:03 np0005486808 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 14 03:17:03 np0005486808 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Oct 14 03:17:03 np0005486808 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 14 03:17:03 np0005486808 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Oct 14 03:17:03 np0005486808 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Oct 14 03:17:03 np0005486808 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Oct 14 03:17:03 np0005486808 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Oct 14 03:17:03 np0005486808 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Oct 14 03:17:03 np0005486808 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Oct 14 03:17:03 np0005486808 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Oct 14 03:17:03 np0005486808 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Oct 14 03:17:03 np0005486808 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct 14 03:17:03 np0005486808 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Oct 14 03:17:03 np0005486808 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Oct 14 03:17:03 np0005486808 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Oct 14 03:17:03 np0005486808 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Oct 14 03:17:03 np0005486808 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct 14 03:17:03 np0005486808 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Oct 14 03:17:03 np0005486808 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Oct 14 03:17:03 np0005486808 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Oct 14 03:17:03 np0005486808 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Oct 14 03:17:03 np0005486808 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 14 03:17:03 np0005486808 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 14 03:17:03 np0005486808 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Oct 14 03:17:03 np0005486808 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Oct 14 03:17:03 np0005486808 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Oct 14 03:17:03 np0005486808 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Oct 14 03:17:03 np0005486808 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 14 03:17:03 np0005486808 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Oct 14 03:17:03 np0005486808 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Oct 14 03:17:03 np0005486808 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct 14 03:17:03 np0005486808 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Oct 14 03:17:03 np0005486808 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Oct 14 03:17:03 np0005486808 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct 14 03:17:03 np0005486808 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 14 03:17:03 np0005486808 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Oct 14 03:17:03 np0005486808 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Oct 14 03:17:03 np0005486808 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 14 03:17:03 np0005486808 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 14 03:17:03 np0005486808 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 14 03:17:03 np0005486808 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 14 03:17:03 np0005486808 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct 14 03:17:03 np0005486808 kernel: iommu: Default domain type: Translated
Oct 14 03:17:03 np0005486808 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 14 03:17:03 np0005486808 kernel: SCSI subsystem initialized
Oct 14 03:17:03 np0005486808 kernel: ACPI: bus type USB registered
Oct 14 03:17:03 np0005486808 kernel: usbcore: registered new interface driver usbfs
Oct 14 03:17:03 np0005486808 kernel: usbcore: registered new interface driver hub
Oct 14 03:17:03 np0005486808 kernel: usbcore: registered new device driver usb
Oct 14 03:17:03 np0005486808 kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 14 03:17:03 np0005486808 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct 14 03:17:03 np0005486808 kernel: PTP clock support registered
Oct 14 03:17:03 np0005486808 kernel: EDAC MC: Ver: 3.0.0
Oct 14 03:17:03 np0005486808 kernel: NetLabel: Initializing
Oct 14 03:17:03 np0005486808 kernel: NetLabel:  domain hash size = 128
Oct 14 03:17:03 np0005486808 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Oct 14 03:17:03 np0005486808 kernel: NetLabel:  unlabeled traffic allowed by default
Oct 14 03:17:03 np0005486808 kernel: PCI: Using ACPI for IRQ routing
Oct 14 03:17:03 np0005486808 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct 14 03:17:03 np0005486808 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct 14 03:17:03 np0005486808 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 14 03:17:03 np0005486808 kernel: vgaarb: loaded
Oct 14 03:17:03 np0005486808 kernel: clocksource: Switched to clocksource kvm-clock
Oct 14 03:17:03 np0005486808 kernel: VFS: Disk quotas dquot_6.6.0
Oct 14 03:17:03 np0005486808 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 14 03:17:03 np0005486808 kernel: pnp: PnP ACPI init
Oct 14 03:17:03 np0005486808 kernel: pnp: PnP ACPI: found 5 devices
Oct 14 03:17:03 np0005486808 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 14 03:17:03 np0005486808 kernel: NET: Registered PF_INET protocol family
Oct 14 03:17:03 np0005486808 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 14 03:17:03 np0005486808 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct 14 03:17:03 np0005486808 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 14 03:17:03 np0005486808 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 14 03:17:03 np0005486808 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct 14 03:17:03 np0005486808 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct 14 03:17:03 np0005486808 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Oct 14 03:17:03 np0005486808 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 14 03:17:03 np0005486808 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 14 03:17:03 np0005486808 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 14 03:17:03 np0005486808 kernel: NET: Registered PF_XDP protocol family
Oct 14 03:17:03 np0005486808 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct 14 03:17:03 np0005486808 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct 14 03:17:03 np0005486808 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 14 03:17:03 np0005486808 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Oct 14 03:17:03 np0005486808 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Oct 14 03:17:03 np0005486808 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct 14 03:17:03 np0005486808 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct 14 03:17:03 np0005486808 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct 14 03:17:03 np0005486808 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 96568 usecs
Oct 14 03:17:03 np0005486808 kernel: PCI: CLS 0 bytes, default 64
Oct 14 03:17:03 np0005486808 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct 14 03:17:03 np0005486808 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Oct 14 03:17:03 np0005486808 kernel: ACPI: bus type thunderbolt registered
Oct 14 03:17:03 np0005486808 kernel: Trying to unpack rootfs image as initramfs...
Oct 14 03:17:03 np0005486808 kernel: Initialise system trusted keyrings
Oct 14 03:17:03 np0005486808 kernel: Key type blacklist registered
Oct 14 03:17:03 np0005486808 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Oct 14 03:17:03 np0005486808 kernel: zbud: loaded
Oct 14 03:17:03 np0005486808 kernel: integrity: Platform Keyring initialized
Oct 14 03:17:03 np0005486808 kernel: integrity: Machine keyring initialized
Oct 14 03:17:03 np0005486808 kernel: Freeing initrd memory: 85808K
Oct 14 03:17:03 np0005486808 kernel: NET: Registered PF_ALG protocol family
Oct 14 03:17:03 np0005486808 kernel: xor: automatically using best checksumming function   avx       
Oct 14 03:17:03 np0005486808 kernel: Key type asymmetric registered
Oct 14 03:17:03 np0005486808 kernel: Asymmetric key parser 'x509' registered
Oct 14 03:17:03 np0005486808 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct 14 03:17:03 np0005486808 kernel: io scheduler mq-deadline registered
Oct 14 03:17:03 np0005486808 kernel: io scheduler kyber registered
Oct 14 03:17:03 np0005486808 kernel: io scheduler bfq registered
Oct 14 03:17:03 np0005486808 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct 14 03:17:03 np0005486808 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct 14 03:17:03 np0005486808 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct 14 03:17:03 np0005486808 kernel: ACPI: button: Power Button [PWRF]
Oct 14 03:17:03 np0005486808 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct 14 03:17:03 np0005486808 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct 14 03:17:03 np0005486808 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct 14 03:17:03 np0005486808 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 14 03:17:03 np0005486808 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 14 03:17:03 np0005486808 kernel: Non-volatile memory driver v1.3
Oct 14 03:17:03 np0005486808 kernel: rdac: device handler registered
Oct 14 03:17:03 np0005486808 kernel: hp_sw: device handler registered
Oct 14 03:17:03 np0005486808 kernel: emc: device handler registered
Oct 14 03:17:03 np0005486808 kernel: alua: device handler registered
Oct 14 03:17:03 np0005486808 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct 14 03:17:03 np0005486808 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct 14 03:17:03 np0005486808 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct 14 03:17:03 np0005486808 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Oct 14 03:17:03 np0005486808 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct 14 03:17:03 np0005486808 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct 14 03:17:03 np0005486808 kernel: usb usb1: Product: UHCI Host Controller
Oct 14 03:17:03 np0005486808 kernel: usb usb1: Manufacturer: Linux 5.14.0-621.el9.x86_64 uhci_hcd
Oct 14 03:17:03 np0005486808 kernel: usb usb1: SerialNumber: 0000:00:01.2
Oct 14 03:17:03 np0005486808 kernel: hub 1-0:1.0: USB hub found
Oct 14 03:17:03 np0005486808 kernel: hub 1-0:1.0: 2 ports detected
Oct 14 03:17:03 np0005486808 kernel: usbcore: registered new interface driver usbserial_generic
Oct 14 03:17:03 np0005486808 kernel: usbserial: USB Serial support registered for generic
Oct 14 03:17:03 np0005486808 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 14 03:17:03 np0005486808 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 14 03:17:03 np0005486808 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 14 03:17:03 np0005486808 kernel: mousedev: PS/2 mouse device common for all mice
Oct 14 03:17:03 np0005486808 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 14 03:17:03 np0005486808 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct 14 03:17:03 np0005486808 kernel: rtc_cmos 00:04: registered as rtc0
Oct 14 03:17:03 np0005486808 kernel: rtc_cmos 00:04: setting system clock to 2025-10-14T07:17:02 UTC (1760426222)
Oct 14 03:17:03 np0005486808 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct 14 03:17:03 np0005486808 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 14 03:17:03 np0005486808 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct 14 03:17:03 np0005486808 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 14 03:17:03 np0005486808 kernel: usbcore: registered new interface driver usbhid
Oct 14 03:17:03 np0005486808 kernel: usbhid: USB HID core driver
Oct 14 03:17:03 np0005486808 kernel: drop_monitor: Initializing network drop monitor service
Oct 14 03:17:03 np0005486808 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct 14 03:17:03 np0005486808 kernel: Initializing XFRM netlink socket
Oct 14 03:17:03 np0005486808 kernel: NET: Registered PF_INET6 protocol family
Oct 14 03:17:03 np0005486808 kernel: Segment Routing with IPv6
Oct 14 03:17:03 np0005486808 kernel: NET: Registered PF_PACKET protocol family
Oct 14 03:17:03 np0005486808 kernel: mpls_gso: MPLS GSO support
Oct 14 03:17:03 np0005486808 kernel: IPI shorthand broadcast: enabled
Oct 14 03:17:03 np0005486808 kernel: AVX2 version of gcm_enc/dec engaged.
Oct 14 03:17:03 np0005486808 kernel: AES CTR mode by8 optimization enabled
Oct 14 03:17:03 np0005486808 kernel: sched_clock: Marking stable (1371002070, 142622460)->(1603771730, -90147200)
Oct 14 03:17:03 np0005486808 kernel: registered taskstats version 1
Oct 14 03:17:03 np0005486808 kernel: Loading compiled-in X.509 certificates
Oct 14 03:17:03 np0005486808 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 72f99a463516b0dfb027e50caab189f607ef1bc9'
Oct 14 03:17:03 np0005486808 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct 14 03:17:03 np0005486808 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct 14 03:17:03 np0005486808 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Oct 14 03:17:03 np0005486808 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Oct 14 03:17:03 np0005486808 kernel: Demotion targets for Node 0: null
Oct 14 03:17:03 np0005486808 kernel: page_owner is disabled
Oct 14 03:17:03 np0005486808 kernel: Key type .fscrypt registered
Oct 14 03:17:03 np0005486808 kernel: Key type fscrypt-provisioning registered
Oct 14 03:17:03 np0005486808 kernel: Key type big_key registered
Oct 14 03:17:03 np0005486808 kernel: Key type encrypted registered
Oct 14 03:17:03 np0005486808 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 14 03:17:03 np0005486808 kernel: Loading compiled-in module X.509 certificates
Oct 14 03:17:03 np0005486808 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 72f99a463516b0dfb027e50caab189f607ef1bc9'
Oct 14 03:17:03 np0005486808 kernel: ima: Allocated hash algorithm: sha256
Oct 14 03:17:03 np0005486808 kernel: ima: No architecture policies found
Oct 14 03:17:03 np0005486808 kernel: evm: Initialising EVM extended attributes:
Oct 14 03:17:03 np0005486808 kernel: evm: security.selinux
Oct 14 03:17:03 np0005486808 kernel: evm: security.SMACK64 (disabled)
Oct 14 03:17:03 np0005486808 kernel: evm: security.SMACK64EXEC (disabled)
Oct 14 03:17:03 np0005486808 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct 14 03:17:03 np0005486808 kernel: evm: security.SMACK64MMAP (disabled)
Oct 14 03:17:03 np0005486808 kernel: evm: security.apparmor (disabled)
Oct 14 03:17:03 np0005486808 kernel: evm: security.ima
Oct 14 03:17:03 np0005486808 kernel: evm: security.capability
Oct 14 03:17:03 np0005486808 kernel: evm: HMAC attrs: 0x1
Oct 14 03:17:03 np0005486808 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct 14 03:17:03 np0005486808 kernel: Running certificate verification RSA selftest
Oct 14 03:17:03 np0005486808 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct 14 03:17:03 np0005486808 kernel: Running certificate verification ECDSA selftest
Oct 14 03:17:03 np0005486808 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Oct 14 03:17:03 np0005486808 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct 14 03:17:03 np0005486808 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct 14 03:17:03 np0005486808 kernel: usb 1-1: Product: QEMU USB Tablet
Oct 14 03:17:03 np0005486808 kernel: usb 1-1: Manufacturer: QEMU
Oct 14 03:17:03 np0005486808 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Oct 14 03:17:03 np0005486808 kernel: clk: Disabling unused clocks
Oct 14 03:17:03 np0005486808 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct 14 03:17:03 np0005486808 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Oct 14 03:17:03 np0005486808 kernel: Freeing unused decrypted memory: 2028K
Oct 14 03:17:03 np0005486808 kernel: Freeing unused kernel image (initmem) memory: 4188K
Oct 14 03:17:03 np0005486808 kernel: Write protecting the kernel read-only data: 30720k
Oct 14 03:17:03 np0005486808 kernel: Freeing unused kernel image (rodata/data gap) memory: 472K
Oct 14 03:17:03 np0005486808 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct 14 03:17:03 np0005486808 kernel: Run /init as init process
Oct 14 03:17:03 np0005486808 systemd: systemd 252-57.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 14 03:17:03 np0005486808 systemd: Detected virtualization kvm.
Oct 14 03:17:03 np0005486808 systemd: Detected architecture x86-64.
Oct 14 03:17:03 np0005486808 systemd: Running in initrd.
Oct 14 03:17:03 np0005486808 systemd: No hostname configured, using default hostname.
Oct 14 03:17:03 np0005486808 systemd: Hostname set to <localhost>.
Oct 14 03:17:03 np0005486808 systemd: Initializing machine ID from VM UUID.
Oct 14 03:17:03 np0005486808 systemd: Queued start job for default target Initrd Default Target.
Oct 14 03:17:03 np0005486808 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct 14 03:17:03 np0005486808 systemd: Reached target Local Encrypted Volumes.
Oct 14 03:17:03 np0005486808 systemd: Reached target Initrd /usr File System.
Oct 14 03:17:03 np0005486808 systemd: Reached target Local File Systems.
Oct 14 03:17:03 np0005486808 systemd: Reached target Path Units.
Oct 14 03:17:03 np0005486808 systemd: Reached target Slice Units.
Oct 14 03:17:03 np0005486808 systemd: Reached target Swaps.
Oct 14 03:17:03 np0005486808 systemd: Reached target Timer Units.
Oct 14 03:17:03 np0005486808 systemd: Listening on D-Bus System Message Bus Socket.
Oct 14 03:17:03 np0005486808 systemd: Listening on Journal Socket (/dev/log).
Oct 14 03:17:03 np0005486808 systemd: Listening on Journal Socket.
Oct 14 03:17:03 np0005486808 systemd: Listening on udev Control Socket.
Oct 14 03:17:03 np0005486808 systemd: Listening on udev Kernel Socket.
Oct 14 03:17:03 np0005486808 systemd: Reached target Socket Units.
Oct 14 03:17:03 np0005486808 systemd: Starting Create List of Static Device Nodes...
Oct 14 03:17:03 np0005486808 systemd: Starting Journal Service...
Oct 14 03:17:03 np0005486808 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct 14 03:17:03 np0005486808 systemd: Starting Apply Kernel Variables...
Oct 14 03:17:03 np0005486808 systemd: Starting Create System Users...
Oct 14 03:17:03 np0005486808 systemd: Starting Setup Virtual Console...
Oct 14 03:17:03 np0005486808 systemd: Finished Create List of Static Device Nodes.
Oct 14 03:17:03 np0005486808 systemd: Finished Apply Kernel Variables.
Oct 14 03:17:03 np0005486808 systemd: Finished Create System Users.
Oct 14 03:17:03 np0005486808 systemd-journald[306]: Journal started
Oct 14 03:17:03 np0005486808 systemd-journald[306]: Runtime Journal (/run/log/journal/1a1d621ed70142a8a9a32d332c90e100) is 8.0M, max 153.6M, 145.6M free.
Oct 14 03:17:03 np0005486808 systemd-sysusers[310]: Creating group 'users' with GID 100.
Oct 14 03:17:03 np0005486808 systemd-sysusers[310]: Creating group 'dbus' with GID 81.
Oct 14 03:17:03 np0005486808 systemd-sysusers[310]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct 14 03:17:03 np0005486808 systemd: Started Journal Service.
Oct 14 03:17:03 np0005486808 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 14 03:17:03 np0005486808 systemd[1]: Starting Create Volatile Files and Directories...
Oct 14 03:17:03 np0005486808 systemd[1]: Finished Create Volatile Files and Directories.
Oct 14 03:17:03 np0005486808 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 14 03:17:03 np0005486808 systemd[1]: Finished Setup Virtual Console.
Oct 14 03:17:03 np0005486808 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct 14 03:17:03 np0005486808 systemd[1]: Starting dracut cmdline hook...
Oct 14 03:17:03 np0005486808 dracut-cmdline[327]: dracut-9 dracut-057-102.git20250818.el9
Oct 14 03:17:03 np0005486808 dracut-cmdline[327]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 root=UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 14 03:17:03 np0005486808 systemd[1]: Finished dracut cmdline hook.
Oct 14 03:17:03 np0005486808 systemd[1]: Starting dracut pre-udev hook...
Oct 14 03:17:03 np0005486808 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 14 03:17:03 np0005486808 kernel: device-mapper: uevent: version 1.0.3
Oct 14 03:17:03 np0005486808 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Oct 14 03:17:04 np0005486808 kernel: RPC: Registered named UNIX socket transport module.
Oct 14 03:17:04 np0005486808 kernel: RPC: Registered udp transport module.
Oct 14 03:17:04 np0005486808 kernel: RPC: Registered tcp transport module.
Oct 14 03:17:04 np0005486808 kernel: RPC: Registered tcp-with-tls transport module.
Oct 14 03:17:04 np0005486808 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct 14 03:17:04 np0005486808 rpc.statd[443]: Version 2.5.4 starting
Oct 14 03:17:04 np0005486808 rpc.statd[443]: Initializing NSM state
Oct 14 03:17:04 np0005486808 rpc.idmapd[448]: Setting log level to 0
Oct 14 03:17:04 np0005486808 systemd[1]: Finished dracut pre-udev hook.
Oct 14 03:17:04 np0005486808 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 14 03:17:04 np0005486808 systemd-udevd[461]: Using default interface naming scheme 'rhel-9.0'.
Oct 14 03:17:04 np0005486808 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 14 03:17:04 np0005486808 systemd[1]: Starting dracut pre-trigger hook...
Oct 14 03:17:04 np0005486808 systemd[1]: Finished dracut pre-trigger hook.
Oct 14 03:17:04 np0005486808 systemd[1]: Starting Coldplug All udev Devices...
Oct 14 03:17:04 np0005486808 systemd[1]: Created slice Slice /system/modprobe.
Oct 14 03:17:04 np0005486808 systemd[1]: Starting Load Kernel Module configfs...
Oct 14 03:17:04 np0005486808 systemd[1]: Finished Coldplug All udev Devices.
Oct 14 03:17:04 np0005486808 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 14 03:17:04 np0005486808 systemd[1]: Finished Load Kernel Module configfs.
Oct 14 03:17:04 np0005486808 systemd[1]: Mounting Kernel Configuration File System...
Oct 14 03:17:04 np0005486808 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 14 03:17:04 np0005486808 systemd[1]: Reached target Network.
Oct 14 03:17:04 np0005486808 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 14 03:17:04 np0005486808 systemd[1]: Starting dracut initqueue hook...
Oct 14 03:17:04 np0005486808 systemd[1]: Mounted Kernel Configuration File System.
Oct 14 03:17:04 np0005486808 systemd[1]: Reached target System Initialization.
Oct 14 03:17:04 np0005486808 systemd[1]: Reached target Basic System.
Oct 14 03:17:04 np0005486808 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Oct 14 03:17:04 np0005486808 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Oct 14 03:17:04 np0005486808 kernel: vda: vda1
Oct 14 03:17:04 np0005486808 kernel: scsi host0: ata_piix
Oct 14 03:17:04 np0005486808 kernel: scsi host1: ata_piix
Oct 14 03:17:04 np0005486808 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Oct 14 03:17:04 np0005486808 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Oct 14 03:17:04 np0005486808 systemd[1]: Found device /dev/disk/by-uuid/9839e2e1-98a2-4594-b609-79d514deb0a3.
Oct 14 03:17:04 np0005486808 systemd[1]: Reached target Initrd Root Device.
Oct 14 03:17:04 np0005486808 kernel: ata1: found unknown device (class 0)
Oct 14 03:17:04 np0005486808 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 14 03:17:04 np0005486808 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct 14 03:17:04 np0005486808 systemd-udevd[466]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 03:17:04 np0005486808 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct 14 03:17:04 np0005486808 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 14 03:17:04 np0005486808 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 14 03:17:04 np0005486808 systemd[1]: Finished dracut initqueue hook.
Oct 14 03:17:04 np0005486808 systemd[1]: Reached target Preparation for Remote File Systems.
Oct 14 03:17:04 np0005486808 systemd[1]: Reached target Remote Encrypted Volumes.
Oct 14 03:17:04 np0005486808 systemd[1]: Reached target Remote File Systems.
Oct 14 03:17:04 np0005486808 systemd[1]: Starting dracut pre-mount hook...
Oct 14 03:17:04 np0005486808 systemd[1]: Finished dracut pre-mount hook.
Oct 14 03:17:04 np0005486808 systemd[1]: Starting File System Check on /dev/disk/by-uuid/9839e2e1-98a2-4594-b609-79d514deb0a3...
Oct 14 03:17:04 np0005486808 systemd-fsck[557]: /usr/sbin/fsck.xfs: XFS file system.
Oct 14 03:17:04 np0005486808 systemd[1]: Finished File System Check on /dev/disk/by-uuid/9839e2e1-98a2-4594-b609-79d514deb0a3.
Oct 14 03:17:04 np0005486808 systemd[1]: Mounting /sysroot...
Oct 14 03:17:05 np0005486808 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct 14 03:17:05 np0005486808 kernel: XFS (vda1): Mounting V5 Filesystem 9839e2e1-98a2-4594-b609-79d514deb0a3
Oct 14 03:17:05 np0005486808 kernel: XFS (vda1): Ending clean mount
Oct 14 03:17:05 np0005486808 systemd[1]: Mounted /sysroot.
Oct 14 03:17:05 np0005486808 systemd[1]: Reached target Initrd Root File System.
Oct 14 03:17:05 np0005486808 systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct 14 03:17:05 np0005486808 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 14 03:17:05 np0005486808 systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct 14 03:17:05 np0005486808 systemd[1]: Reached target Initrd File Systems.
Oct 14 03:17:05 np0005486808 systemd[1]: Reached target Initrd Default Target.
Oct 14 03:17:05 np0005486808 systemd[1]: Starting dracut mount hook...
Oct 14 03:17:05 np0005486808 systemd[1]: Finished dracut mount hook.
Oct 14 03:17:05 np0005486808 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct 14 03:17:05 np0005486808 rpc.idmapd[448]: exiting on signal 15
Oct 14 03:17:05 np0005486808 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct 14 03:17:05 np0005486808 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct 14 03:17:05 np0005486808 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct 14 03:17:05 np0005486808 systemd[1]: Stopped target Network.
Oct 14 03:17:05 np0005486808 systemd[1]: Stopped target Remote Encrypted Volumes.
Oct 14 03:17:05 np0005486808 systemd[1]: Stopped target Timer Units.
Oct 14 03:17:05 np0005486808 systemd[1]: dbus.socket: Deactivated successfully.
Oct 14 03:17:05 np0005486808 systemd[1]: Closed D-Bus System Message Bus Socket.
Oct 14 03:17:05 np0005486808 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 14 03:17:05 np0005486808 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct 14 03:17:05 np0005486808 systemd[1]: Stopped target Initrd Default Target.
Oct 14 03:17:05 np0005486808 systemd[1]: Stopped target Basic System.
Oct 14 03:17:05 np0005486808 systemd[1]: Stopped target Initrd Root Device.
Oct 14 03:17:05 np0005486808 systemd[1]: Stopped target Initrd /usr File System.
Oct 14 03:17:05 np0005486808 systemd[1]: Stopped target Path Units.
Oct 14 03:17:05 np0005486808 systemd[1]: Stopped target Remote File Systems.
Oct 14 03:17:05 np0005486808 systemd[1]: Stopped target Preparation for Remote File Systems.
Oct 14 03:17:05 np0005486808 systemd[1]: Stopped target Slice Units.
Oct 14 03:17:05 np0005486808 systemd[1]: Stopped target Socket Units.
Oct 14 03:17:05 np0005486808 systemd[1]: Stopped target System Initialization.
Oct 14 03:17:05 np0005486808 systemd[1]: Stopped target Local File Systems.
Oct 14 03:17:05 np0005486808 systemd[1]: Stopped target Swaps.
Oct 14 03:17:05 np0005486808 systemd[1]: dracut-mount.service: Deactivated successfully.
Oct 14 03:17:05 np0005486808 systemd[1]: Stopped dracut mount hook.
Oct 14 03:17:05 np0005486808 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 14 03:17:05 np0005486808 systemd[1]: Stopped dracut pre-mount hook.
Oct 14 03:17:05 np0005486808 systemd[1]: Stopped target Local Encrypted Volumes.
Oct 14 03:17:05 np0005486808 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 14 03:17:05 np0005486808 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct 14 03:17:05 np0005486808 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 14 03:17:05 np0005486808 systemd[1]: Stopped dracut initqueue hook.
Oct 14 03:17:05 np0005486808 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 14 03:17:05 np0005486808 systemd[1]: Stopped Apply Kernel Variables.
Oct 14 03:17:05 np0005486808 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 14 03:17:05 np0005486808 systemd[1]: Stopped Create Volatile Files and Directories.
Oct 14 03:17:05 np0005486808 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 14 03:17:05 np0005486808 systemd[1]: Stopped Coldplug All udev Devices.
Oct 14 03:17:05 np0005486808 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 14 03:17:05 np0005486808 systemd[1]: Stopped dracut pre-trigger hook.
Oct 14 03:17:05 np0005486808 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct 14 03:17:05 np0005486808 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 14 03:17:05 np0005486808 systemd[1]: Stopped Setup Virtual Console.
Oct 14 03:17:05 np0005486808 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct 14 03:17:05 np0005486808 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 14 03:17:05 np0005486808 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 14 03:17:05 np0005486808 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct 14 03:17:05 np0005486808 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 14 03:17:05 np0005486808 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct 14 03:17:05 np0005486808 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 14 03:17:05 np0005486808 systemd[1]: Closed udev Control Socket.
Oct 14 03:17:05 np0005486808 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 14 03:17:05 np0005486808 systemd[1]: Closed udev Kernel Socket.
Oct 14 03:17:05 np0005486808 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 14 03:17:05 np0005486808 systemd[1]: Stopped dracut pre-udev hook.
Oct 14 03:17:05 np0005486808 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 14 03:17:05 np0005486808 systemd[1]: Stopped dracut cmdline hook.
Oct 14 03:17:05 np0005486808 systemd[1]: Starting Cleanup udev Database...
Oct 14 03:17:05 np0005486808 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 14 03:17:05 np0005486808 systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct 14 03:17:05 np0005486808 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 14 03:17:05 np0005486808 systemd[1]: Stopped Create List of Static Device Nodes.
Oct 14 03:17:05 np0005486808 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct 14 03:17:05 np0005486808 systemd[1]: Stopped Create System Users.
Oct 14 03:17:05 np0005486808 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Oct 14 03:17:05 np0005486808 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Oct 14 03:17:05 np0005486808 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 14 03:17:05 np0005486808 systemd[1]: Finished Cleanup udev Database.
Oct 14 03:17:05 np0005486808 systemd[1]: Reached target Switch Root.
Oct 14 03:17:05 np0005486808 systemd[1]: Starting Switch Root...
Oct 14 03:17:05 np0005486808 systemd[1]: Switching root.
Oct 14 03:17:05 np0005486808 systemd-journald[306]: Journal stopped
Oct 14 03:17:06 np0005486808 systemd-journald: Received SIGTERM from PID 1 (systemd).
Oct 14 03:17:06 np0005486808 kernel: audit: type=1404 audit(1760426226.059:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Oct 14 03:17:06 np0005486808 kernel: SELinux:  policy capability network_peer_controls=1
Oct 14 03:17:06 np0005486808 kernel: SELinux:  policy capability open_perms=1
Oct 14 03:17:06 np0005486808 kernel: SELinux:  policy capability extended_socket_class=1
Oct 14 03:17:06 np0005486808 kernel: SELinux:  policy capability always_check_network=0
Oct 14 03:17:06 np0005486808 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 14 03:17:06 np0005486808 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 14 03:17:06 np0005486808 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 14 03:17:06 np0005486808 kernel: audit: type=1403 audit(1760426226.246:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 14 03:17:06 np0005486808 systemd: Successfully loaded SELinux policy in 192.195ms.
Oct 14 03:17:06 np0005486808 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 29.470ms.
Oct 14 03:17:06 np0005486808 systemd: systemd 252-57.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 14 03:17:06 np0005486808 systemd: Detected virtualization kvm.
Oct 14 03:17:06 np0005486808 systemd: Detected architecture x86-64.
Oct 14 03:17:06 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 03:17:06 np0005486808 systemd: initrd-switch-root.service: Deactivated successfully.
Oct 14 03:17:06 np0005486808 systemd: Stopped Switch Root.
Oct 14 03:17:06 np0005486808 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 14 03:17:06 np0005486808 systemd: Created slice Slice /system/getty.
Oct 14 03:17:06 np0005486808 systemd: Created slice Slice /system/serial-getty.
Oct 14 03:17:06 np0005486808 systemd: Created slice Slice /system/sshd-keygen.
Oct 14 03:17:06 np0005486808 systemd: Created slice User and Session Slice.
Oct 14 03:17:06 np0005486808 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct 14 03:17:06 np0005486808 systemd: Started Forward Password Requests to Wall Directory Watch.
Oct 14 03:17:06 np0005486808 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Oct 14 03:17:06 np0005486808 systemd: Reached target Local Encrypted Volumes.
Oct 14 03:17:06 np0005486808 systemd: Stopped target Switch Root.
Oct 14 03:17:06 np0005486808 systemd: Stopped target Initrd File Systems.
Oct 14 03:17:06 np0005486808 systemd: Stopped target Initrd Root File System.
Oct 14 03:17:06 np0005486808 systemd: Reached target Local Integrity Protected Volumes.
Oct 14 03:17:06 np0005486808 systemd: Reached target Path Units.
Oct 14 03:17:06 np0005486808 systemd: Reached target rpc_pipefs.target.
Oct 14 03:17:06 np0005486808 systemd: Reached target Slice Units.
Oct 14 03:17:06 np0005486808 systemd: Reached target Swaps.
Oct 14 03:17:06 np0005486808 systemd: Reached target Local Verity Protected Volumes.
Oct 14 03:17:06 np0005486808 systemd: Listening on RPCbind Server Activation Socket.
Oct 14 03:17:06 np0005486808 systemd: Reached target RPC Port Mapper.
Oct 14 03:17:06 np0005486808 systemd: Listening on Process Core Dump Socket.
Oct 14 03:17:06 np0005486808 systemd: Listening on initctl Compatibility Named Pipe.
Oct 14 03:17:06 np0005486808 systemd: Listening on udev Control Socket.
Oct 14 03:17:06 np0005486808 systemd: Listening on udev Kernel Socket.
Oct 14 03:17:06 np0005486808 systemd: Mounting Huge Pages File System...
Oct 14 03:17:06 np0005486808 systemd: Mounting POSIX Message Queue File System...
Oct 14 03:17:06 np0005486808 systemd: Mounting Kernel Debug File System...
Oct 14 03:17:06 np0005486808 systemd: Mounting Kernel Trace File System...
Oct 14 03:17:06 np0005486808 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 14 03:17:06 np0005486808 systemd: Starting Create List of Static Device Nodes...
Oct 14 03:17:06 np0005486808 systemd: Starting Load Kernel Module configfs...
Oct 14 03:17:06 np0005486808 systemd: Starting Load Kernel Module drm...
Oct 14 03:17:06 np0005486808 systemd: Starting Load Kernel Module efi_pstore...
Oct 14 03:17:06 np0005486808 systemd: Starting Load Kernel Module fuse...
Oct 14 03:17:06 np0005486808 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Oct 14 03:17:06 np0005486808 systemd: systemd-fsck-root.service: Deactivated successfully.
Oct 14 03:17:06 np0005486808 systemd: Stopped File System Check on Root Device.
Oct 14 03:17:06 np0005486808 systemd: Stopped Journal Service.
Oct 14 03:17:06 np0005486808 systemd: Starting Journal Service...
Oct 14 03:17:06 np0005486808 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct 14 03:17:06 np0005486808 systemd: Starting Generate network units from Kernel command line...
Oct 14 03:17:06 np0005486808 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 14 03:17:06 np0005486808 systemd: Starting Remount Root and Kernel File Systems...
Oct 14 03:17:06 np0005486808 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 14 03:17:06 np0005486808 systemd: Starting Apply Kernel Variables...
Oct 14 03:17:06 np0005486808 systemd: Starting Coldplug All udev Devices...
Oct 14 03:17:06 np0005486808 systemd: Mounted Huge Pages File System.
Oct 14 03:17:06 np0005486808 systemd-journald[679]: Journal started
Oct 14 03:17:06 np0005486808 systemd-journald[679]: Runtime Journal (/run/log/journal/a1727ec20198bc6caf436a6e13c4ff5e) is 8.0M, max 153.6M, 145.6M free.
Oct 14 03:17:06 np0005486808 systemd[1]: Queued start job for default target Multi-User System.
Oct 14 03:17:06 np0005486808 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 14 03:17:06 np0005486808 systemd: Started Journal Service.
Oct 14 03:17:06 np0005486808 systemd[1]: Mounted POSIX Message Queue File System.
Oct 14 03:17:07 np0005486808 systemd[1]: Mounted Kernel Debug File System.
Oct 14 03:17:07 np0005486808 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Oct 14 03:17:07 np0005486808 systemd[1]: Mounted Kernel Trace File System.
Oct 14 03:17:07 np0005486808 systemd[1]: Finished Create List of Static Device Nodes.
Oct 14 03:17:07 np0005486808 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 14 03:17:07 np0005486808 systemd[1]: Finished Load Kernel Module configfs.
Oct 14 03:17:07 np0005486808 kernel: ACPI: bus type drm_connector registered
Oct 14 03:17:07 np0005486808 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 14 03:17:07 np0005486808 systemd[1]: Finished Load Kernel Module efi_pstore.
Oct 14 03:17:07 np0005486808 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Oct 14 03:17:07 np0005486808 kernel: fuse: init (API version 7.37)
Oct 14 03:17:07 np0005486808 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 14 03:17:07 np0005486808 systemd[1]: Finished Load Kernel Module drm.
Oct 14 03:17:07 np0005486808 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 14 03:17:07 np0005486808 systemd[1]: Finished Load Kernel Module fuse.
Oct 14 03:17:07 np0005486808 systemd[1]: Finished Generate network units from Kernel command line.
Oct 14 03:17:07 np0005486808 systemd[1]: Finished Remount Root and Kernel File Systems.
Oct 14 03:17:07 np0005486808 systemd[1]: Finished Apply Kernel Variables.
Oct 14 03:17:07 np0005486808 systemd[1]: Mounting FUSE Control File System...
Oct 14 03:17:07 np0005486808 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 14 03:17:07 np0005486808 systemd[1]: Starting Rebuild Hardware Database...
Oct 14 03:17:07 np0005486808 systemd[1]: Starting Flush Journal to Persistent Storage...
Oct 14 03:17:07 np0005486808 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 14 03:17:07 np0005486808 systemd[1]: Starting Load/Save OS Random Seed...
Oct 14 03:17:07 np0005486808 systemd[1]: Starting Create System Users...
Oct 14 03:17:07 np0005486808 systemd[1]: Mounted FUSE Control File System.
Oct 14 03:17:07 np0005486808 systemd-journald[679]: Runtime Journal (/run/log/journal/a1727ec20198bc6caf436a6e13c4ff5e) is 8.0M, max 153.6M, 145.6M free.
Oct 14 03:17:07 np0005486808 systemd-journald[679]: Received client request to flush runtime journal.
Oct 14 03:17:07 np0005486808 systemd[1]: Finished Flush Journal to Persistent Storage.
Oct 14 03:17:07 np0005486808 systemd[1]: Finished Load/Save OS Random Seed.
Oct 14 03:17:07 np0005486808 systemd[1]: Finished Coldplug All udev Devices.
Oct 14 03:17:07 np0005486808 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 14 03:17:07 np0005486808 systemd[1]: Finished Create System Users.
Oct 14 03:17:07 np0005486808 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 14 03:17:07 np0005486808 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 14 03:17:07 np0005486808 systemd[1]: Reached target Preparation for Local File Systems.
Oct 14 03:17:07 np0005486808 systemd[1]: Reached target Local File Systems.
Oct 14 03:17:07 np0005486808 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Oct 14 03:17:07 np0005486808 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Oct 14 03:17:07 np0005486808 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 14 03:17:07 np0005486808 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Oct 14 03:17:07 np0005486808 systemd[1]: Starting Automatic Boot Loader Update...
Oct 14 03:17:07 np0005486808 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Oct 14 03:17:07 np0005486808 systemd[1]: Starting Create Volatile Files and Directories...
Oct 14 03:17:07 np0005486808 bootctl[697]: Couldn't find EFI system partition, skipping.
Oct 14 03:17:07 np0005486808 systemd[1]: Finished Automatic Boot Loader Update.
Oct 14 03:17:07 np0005486808 systemd[1]: Finished Create Volatile Files and Directories.
Oct 14 03:17:07 np0005486808 systemd[1]: Starting Security Auditing Service...
Oct 14 03:17:07 np0005486808 systemd[1]: Starting RPC Bind...
Oct 14 03:17:07 np0005486808 systemd[1]: Starting Rebuild Journal Catalog...
Oct 14 03:17:07 np0005486808 auditd[703]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Oct 14 03:17:07 np0005486808 auditd[703]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Oct 14 03:17:07 np0005486808 systemd[1]: Started RPC Bind.
Oct 14 03:17:07 np0005486808 systemd[1]: Finished Rebuild Journal Catalog.
Oct 14 03:17:07 np0005486808 augenrules[708]: /sbin/augenrules: No change
Oct 14 03:17:07 np0005486808 augenrules[723]: No rules
Oct 14 03:17:07 np0005486808 augenrules[723]: enabled 1
Oct 14 03:17:07 np0005486808 augenrules[723]: failure 1
Oct 14 03:17:07 np0005486808 augenrules[723]: pid 703
Oct 14 03:17:07 np0005486808 augenrules[723]: rate_limit 0
Oct 14 03:17:07 np0005486808 augenrules[723]: backlog_limit 8192
Oct 14 03:17:07 np0005486808 augenrules[723]: lost 0
Oct 14 03:17:07 np0005486808 augenrules[723]: backlog 0
Oct 14 03:17:07 np0005486808 augenrules[723]: backlog_wait_time 60000
Oct 14 03:17:07 np0005486808 augenrules[723]: backlog_wait_time_actual 0
Oct 14 03:17:07 np0005486808 augenrules[723]: enabled 1
Oct 14 03:17:07 np0005486808 augenrules[723]: failure 1
Oct 14 03:17:07 np0005486808 augenrules[723]: pid 703
Oct 14 03:17:07 np0005486808 augenrules[723]: rate_limit 0
Oct 14 03:17:07 np0005486808 augenrules[723]: backlog_limit 8192
Oct 14 03:17:07 np0005486808 augenrules[723]: lost 0
Oct 14 03:17:07 np0005486808 augenrules[723]: backlog 1
Oct 14 03:17:07 np0005486808 augenrules[723]: backlog_wait_time 60000
Oct 14 03:17:07 np0005486808 augenrules[723]: backlog_wait_time_actual 0
Oct 14 03:17:07 np0005486808 augenrules[723]: enabled 1
Oct 14 03:17:07 np0005486808 augenrules[723]: failure 1
Oct 14 03:17:07 np0005486808 augenrules[723]: pid 703
Oct 14 03:17:07 np0005486808 augenrules[723]: rate_limit 0
Oct 14 03:17:07 np0005486808 augenrules[723]: backlog_limit 8192
Oct 14 03:17:07 np0005486808 augenrules[723]: lost 0
Oct 14 03:17:07 np0005486808 augenrules[723]: backlog 2
Oct 14 03:17:07 np0005486808 augenrules[723]: backlog_wait_time 60000
Oct 14 03:17:07 np0005486808 augenrules[723]: backlog_wait_time_actual 0
Oct 14 03:17:07 np0005486808 systemd[1]: Started Security Auditing Service.
Oct 14 03:17:07 np0005486808 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Oct 14 03:17:07 np0005486808 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Oct 14 03:17:07 np0005486808 systemd[1]: Finished Rebuild Hardware Database.
Oct 14 03:17:07 np0005486808 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 14 03:17:07 np0005486808 systemd-udevd[731]: Using default interface naming scheme 'rhel-9.0'.
Oct 14 03:17:07 np0005486808 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Oct 14 03:17:07 np0005486808 systemd[1]: Starting Update is Completed...
Oct 14 03:17:07 np0005486808 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 14 03:17:07 np0005486808 systemd[1]: Finished Update is Completed.
Oct 14 03:17:07 np0005486808 systemd[1]: Starting Load Kernel Module configfs...
Oct 14 03:17:07 np0005486808 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Oct 14 03:17:07 np0005486808 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 14 03:17:07 np0005486808 systemd[1]: Finished Load Kernel Module configfs.
Oct 14 03:17:07 np0005486808 systemd-udevd[737]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 03:17:07 np0005486808 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Oct 14 03:17:07 np0005486808 systemd[1]: Reached target System Initialization.
Oct 14 03:17:07 np0005486808 systemd[1]: Started dnf makecache --timer.
Oct 14 03:17:07 np0005486808 systemd[1]: Started Daily rotation of log files.
Oct 14 03:17:07 np0005486808 systemd[1]: Started Daily Cleanup of Temporary Directories.
Oct 14 03:17:07 np0005486808 systemd[1]: Reached target Timer Units.
Oct 14 03:17:08 np0005486808 systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 14 03:17:08 np0005486808 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Oct 14 03:17:08 np0005486808 systemd[1]: Reached target Socket Units.
Oct 14 03:17:08 np0005486808 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Oct 14 03:17:08 np0005486808 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct 14 03:17:08 np0005486808 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 14 03:17:08 np0005486808 systemd[1]: Starting D-Bus System Message Bus...
Oct 14 03:17:08 np0005486808 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 14 03:17:08 np0005486808 systemd[1]: Started D-Bus System Message Bus.
Oct 14 03:17:08 np0005486808 systemd[1]: Reached target Basic System.
Oct 14 03:17:08 np0005486808 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Oct 14 03:17:08 np0005486808 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Oct 14 03:17:08 np0005486808 dbus-broker-lau[774]: Ready
Oct 14 03:17:08 np0005486808 kernel: Console: switching to colour dummy device 80x25
Oct 14 03:17:08 np0005486808 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct 14 03:17:08 np0005486808 kernel: [drm] features: -context_init
Oct 14 03:17:08 np0005486808 kernel: [drm] number of scanouts: 1
Oct 14 03:17:08 np0005486808 kernel: [drm] number of cap sets: 0
Oct 14 03:17:08 np0005486808 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Oct 14 03:17:08 np0005486808 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct 14 03:17:08 np0005486808 kernel: Console: switching to colour frame buffer device 128x48
Oct 14 03:17:08 np0005486808 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct 14 03:17:08 np0005486808 systemd[1]: Starting NTP client/server...
Oct 14 03:17:08 np0005486808 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Oct 14 03:17:08 np0005486808 systemd[1]: Starting Restore /run/initramfs on shutdown...
Oct 14 03:17:08 np0005486808 systemd[1]: Starting IPv4 firewall with iptables...
Oct 14 03:17:08 np0005486808 systemd[1]: Started irqbalance daemon.
Oct 14 03:17:08 np0005486808 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Oct 14 03:17:08 np0005486808 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 14 03:17:08 np0005486808 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 14 03:17:08 np0005486808 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 14 03:17:08 np0005486808 systemd[1]: Reached target sshd-keygen.target.
Oct 14 03:17:08 np0005486808 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Oct 14 03:17:08 np0005486808 systemd[1]: Reached target User and Group Name Lookups.
Oct 14 03:17:08 np0005486808 systemd[1]: Starting User Login Management...
Oct 14 03:17:08 np0005486808 systemd[1]: Finished Restore /run/initramfs on shutdown.
Oct 14 03:17:08 np0005486808 systemd-logind[799]: New seat seat0.
Oct 14 03:17:08 np0005486808 systemd-logind[799]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 14 03:17:08 np0005486808 systemd-logind[799]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 14 03:17:08 np0005486808 systemd[1]: Started User Login Management.
Oct 14 03:17:08 np0005486808 kernel: kvm_amd: TSC scaling supported
Oct 14 03:17:08 np0005486808 kernel: kvm_amd: Nested Virtualization enabled
Oct 14 03:17:08 np0005486808 kernel: kvm_amd: Nested Paging enabled
Oct 14 03:17:08 np0005486808 kernel: kvm_amd: LBR virtualization supported
Oct 14 03:17:08 np0005486808 chronyd[809]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 14 03:17:08 np0005486808 chronyd[809]: Loaded 0 symmetric keys
Oct 14 03:17:08 np0005486808 chronyd[809]: Using right/UTC timezone to obtain leap second data
Oct 14 03:17:08 np0005486808 chronyd[809]: Loaded seccomp filter (level 2)
Oct 14 03:17:08 np0005486808 systemd[1]: Started NTP client/server.
Oct 14 03:17:08 np0005486808 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Oct 14 03:17:08 np0005486808 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Oct 14 03:17:08 np0005486808 iptables.init[793]: iptables: Applying firewall rules: [  OK  ]
Oct 14 03:17:08 np0005486808 systemd[1]: Finished IPv4 firewall with iptables.
Oct 14 03:17:08 np0005486808 cloud-init[839]: Cloud-init v. 24.4-7.el9 running 'init-local' at Tue, 14 Oct 2025 07:17:08 +0000. Up 7.58 seconds.
Oct 14 03:17:09 np0005486808 systemd[1]: run-cloud\x2dinit-tmp-tmpfac2bbg8.mount: Deactivated successfully.
Oct 14 03:17:09 np0005486808 systemd[1]: Starting Hostname Service...
Oct 14 03:17:09 np0005486808 systemd[1]: Started Hostname Service.
Oct 14 03:17:09 np0005486808 systemd-hostnamed[853]: Hostname set to <np0005486808.novalocal> (static)
Oct 14 03:17:09 np0005486808 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Oct 14 03:17:09 np0005486808 systemd[1]: Reached target Preparation for Network.
Oct 14 03:17:09 np0005486808 systemd[1]: Starting Network Manager...
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.5037] NetworkManager (version 1.54.1-1.el9) is starting... (boot:9f631222-b49c-47d2-a156-3705510cf21d)
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.5041] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.5197] manager[0x55a4a5f00080]: monitoring kernel firmware directory '/lib/firmware'.
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.5261] hostname: hostname: using hostnamed
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.5262] hostname: static hostname changed from (none) to "np0005486808.novalocal"
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.5265] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.5397] manager[0x55a4a5f00080]: rfkill: Wi-Fi hardware radio set enabled
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.5397] manager[0x55a4a5f00080]: rfkill: WWAN hardware radio set enabled
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.5494] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.5494] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 14 03:17:09 np0005486808 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.5494] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.5495] manager: Networking is enabled by state file
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.5497] settings: Loaded settings plugin: keyfile (internal)
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.5536] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.5562] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.5588] dhcp: init: Using DHCP client 'internal'
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.5591] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.5604] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.5617] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.5627] device (lo): Activation: starting connection 'lo' (047addc7-63d3-4a5b-85cf-f898174b7a4c)
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.5637] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.5643] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.5689] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.5697] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.5702] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.5705] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.5711] device (eth0): carrier: link connected
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.5715] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.5728] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.5742] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.5750] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.5751] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 14 03:17:09 np0005486808 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.5756] manager: NetworkManager state is now CONNECTING
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.5757] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.5767] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.5771] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 14 03:17:09 np0005486808 systemd[1]: Started Network Manager.
Oct 14 03:17:09 np0005486808 systemd[1]: Reached target Network.
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.5839] dhcp4 (eth0): state changed new lease, address=38.102.83.202
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.5847] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 14 03:17:09 np0005486808 systemd[1]: Starting Network Manager Wait Online...
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.5869] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 14 03:17:09 np0005486808 systemd[1]: Starting GSSAPI Proxy Daemon...
Oct 14 03:17:09 np0005486808 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.6064] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.6066] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.6072] device (lo): Activation: successful, device activated.
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.6079] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.6080] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.6083] manager: NetworkManager state is now CONNECTED_SITE
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.6085] device (eth0): Activation: successful, device activated.
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.6094] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 14 03:17:09 np0005486808 NetworkManager[857]: <info>  [1760426229.6096] manager: startup complete
Oct 14 03:17:09 np0005486808 systemd[1]: Started GSSAPI Proxy Daemon.
Oct 14 03:17:09 np0005486808 systemd[1]: Finished Network Manager Wait Online.
Oct 14 03:17:09 np0005486808 systemd[1]: Starting Cloud-init: Network Stage...
Oct 14 03:17:09 np0005486808 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 14 03:17:09 np0005486808 systemd[1]: Reached target NFS client services.
Oct 14 03:17:09 np0005486808 systemd[1]: Reached target Preparation for Remote File Systems.
Oct 14 03:17:09 np0005486808 systemd[1]: Reached target Remote File Systems.
Oct 14 03:17:09 np0005486808 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 14 03:17:10 np0005486808 cloud-init[918]: Cloud-init v. 24.4-7.el9 running 'init' at Tue, 14 Oct 2025 07:17:09 +0000. Up 8.78 seconds.
Oct 14 03:17:10 np0005486808 cloud-init[918]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Oct 14 03:17:10 np0005486808 cloud-init[918]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 14 03:17:10 np0005486808 cloud-init[918]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Oct 14 03:17:10 np0005486808 cloud-init[918]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 14 03:17:10 np0005486808 cloud-init[918]: ci-info: |  eth0  | True |        38.102.83.202         | 255.255.255.0 | global | fa:16:3e:0b:4d:c3 |
Oct 14 03:17:10 np0005486808 cloud-init[918]: ci-info: |  eth0  | True | fe80::f816:3eff:fe0b:4dc3/64 |       .       |  link  | fa:16:3e:0b:4d:c3 |
Oct 14 03:17:10 np0005486808 cloud-init[918]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Oct 14 03:17:10 np0005486808 cloud-init[918]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Oct 14 03:17:10 np0005486808 cloud-init[918]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 14 03:17:10 np0005486808 cloud-init[918]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Oct 14 03:17:10 np0005486808 cloud-init[918]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 14 03:17:10 np0005486808 cloud-init[918]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Oct 14 03:17:10 np0005486808 cloud-init[918]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 14 03:17:10 np0005486808 cloud-init[918]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Oct 14 03:17:10 np0005486808 cloud-init[918]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Oct 14 03:17:10 np0005486808 cloud-init[918]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Oct 14 03:17:10 np0005486808 cloud-init[918]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 14 03:17:10 np0005486808 cloud-init[918]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Oct 14 03:17:10 np0005486808 cloud-init[918]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 14 03:17:10 np0005486808 cloud-init[918]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Oct 14 03:17:10 np0005486808 cloud-init[918]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 14 03:17:10 np0005486808 cloud-init[918]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Oct 14 03:17:10 np0005486808 cloud-init[918]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Oct 14 03:17:10 np0005486808 cloud-init[918]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 14 03:17:11 np0005486808 cloud-init[918]: Generating public/private rsa key pair.
Oct 14 03:17:11 np0005486808 cloud-init[918]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Oct 14 03:17:11 np0005486808 cloud-init[918]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Oct 14 03:17:11 np0005486808 cloud-init[918]: The key fingerprint is:
Oct 14 03:17:11 np0005486808 cloud-init[918]: SHA256:KSnJm3EvascK9nyPj0yqWSUU9uF95iPnpnU+8ATkRiU root@np0005486808.novalocal
Oct 14 03:17:11 np0005486808 cloud-init[918]: The key's randomart image is:
Oct 14 03:17:11 np0005486808 cloud-init[918]: +---[RSA 3072]----+
Oct 14 03:17:11 np0005486808 cloud-init[918]: |    o .   E..    |
Oct 14 03:17:11 np0005486808 cloud-init[918]: |   . + o  o.     |
Oct 14 03:17:11 np0005486808 cloud-init[918]: |    . o .+o      |
Oct 14 03:17:11 np0005486808 cloud-init[918]: |   o . . =+      |
Oct 14 03:17:11 np0005486808 cloud-init[918]: |    * = S.+.     |
Oct 14 03:17:11 np0005486808 cloud-init[918]: |     O o +...    |
Oct 14 03:17:11 np0005486808 cloud-init[918]: |  o +.o . ++.    |
Oct 14 03:17:11 np0005486808 cloud-init[918]: | . *.=++ + oo    |
Oct 14 03:17:11 np0005486808 cloud-init[918]: |  oo*++o+   ..   |
Oct 14 03:17:11 np0005486808 cloud-init[918]: +----[SHA256]-----+
Oct 14 03:17:11 np0005486808 cloud-init[918]: Generating public/private ecdsa key pair.
Oct 14 03:17:11 np0005486808 cloud-init[918]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Oct 14 03:17:11 np0005486808 cloud-init[918]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Oct 14 03:17:11 np0005486808 cloud-init[918]: The key fingerprint is:
Oct 14 03:17:11 np0005486808 cloud-init[918]: SHA256:D6MyqAhvBfNEDB5aAlgPSwaCO8csEzFVlB/CgUcBDkU root@np0005486808.novalocal
Oct 14 03:17:11 np0005486808 cloud-init[918]: The key's randomart image is:
Oct 14 03:17:11 np0005486808 cloud-init[918]: +---[ECDSA 256]---+
Oct 14 03:17:11 np0005486808 cloud-init[918]: |%BEX*+           |
Oct 14 03:17:11 np0005486808 cloud-init[918]: |=@.=B .          |
Oct 14 03:17:11 np0005486808 cloud-init[918]: |.=+o.o .         |
Oct 14 03:17:11 np0005486808 cloud-init[918]: |= * . .          |
Oct 14 03:17:11 np0005486808 cloud-init[918]: | = =    S        |
Oct 14 03:17:11 np0005486808 cloud-init[918]: |   .o  . +       |
Oct 14 03:17:11 np0005486808 cloud-init[918]: |. ..o .   .      |
Oct 14 03:17:11 np0005486808 cloud-init[918]: |oo.  o           |
Oct 14 03:17:11 np0005486808 cloud-init[918]: |o..              |
Oct 14 03:17:11 np0005486808 cloud-init[918]: +----[SHA256]-----+
Oct 14 03:17:11 np0005486808 cloud-init[918]: Generating public/private ed25519 key pair.
Oct 14 03:17:11 np0005486808 cloud-init[918]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Oct 14 03:17:11 np0005486808 cloud-init[918]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Oct 14 03:17:11 np0005486808 cloud-init[918]: The key fingerprint is:
Oct 14 03:17:11 np0005486808 cloud-init[918]: SHA256:5L9eade61v2aN6BSUF/qZpxbRyX3kuj4/v7zXw48RMA root@np0005486808.novalocal
Oct 14 03:17:11 np0005486808 cloud-init[918]: The key's randomart image is:
Oct 14 03:17:11 np0005486808 cloud-init[918]: +--[ED25519 256]--+
Oct 14 03:17:11 np0005486808 cloud-init[918]: |           ..    |
Oct 14 03:17:11 np0005486808 cloud-init[918]: |           .E...o|
Oct 14 03:17:11 np0005486808 cloud-init[918]: |        . . ..++o|
Oct 14 03:17:11 np0005486808 cloud-init[918]: |       o .  .+o o|
Oct 14 03:17:11 np0005486808 cloud-init[918]: |        S .oo oo |
Oct 14 03:17:11 np0005486808 cloud-init[918]: |         ....@ o.|
Oct 14 03:17:11 np0005486808 cloud-init[918]: |          o.B O.=|
Oct 14 03:17:11 np0005486808 cloud-init[918]: |         . =.o.O=|
Oct 14 03:17:11 np0005486808 cloud-init[918]: |         .+..+B=%|
Oct 14 03:17:11 np0005486808 cloud-init[918]: +----[SHA256]-----+
Oct 14 03:17:11 np0005486808 systemd[1]: Finished Cloud-init: Network Stage.
Oct 14 03:17:11 np0005486808 systemd[1]: Reached target Cloud-config availability.
Oct 14 03:17:11 np0005486808 systemd[1]: Reached target Network is Online.
Oct 14 03:17:11 np0005486808 systemd[1]: Starting Cloud-init: Config Stage...
Oct 14 03:17:11 np0005486808 systemd[1]: Starting Notify NFS peers of a restart...
Oct 14 03:17:11 np0005486808 systemd[1]: Starting System Logging Service...
Oct 14 03:17:11 np0005486808 sm-notify[1001]: Version 2.5.4 starting
Oct 14 03:17:11 np0005486808 systemd[1]: Starting OpenSSH server daemon...
Oct 14 03:17:11 np0005486808 systemd[1]: Starting Permit User Sessions...
Oct 14 03:17:11 np0005486808 systemd[1]: Started Notify NFS peers of a restart.
Oct 14 03:17:11 np0005486808 systemd[1]: Finished Permit User Sessions.
Oct 14 03:17:11 np0005486808 systemd[1]: Started Command Scheduler.
Oct 14 03:17:11 np0005486808 systemd[1]: Started Getty on tty1.
Oct 14 03:17:11 np0005486808 systemd[1]: Started Serial Getty on ttyS0.
Oct 14 03:17:11 np0005486808 systemd[1]: Reached target Login Prompts.
Oct 14 03:17:11 np0005486808 systemd[1]: Started OpenSSH server daemon.
Oct 14 03:17:11 np0005486808 rsyslogd[1002]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1002" x-info="https://www.rsyslog.com"] start
Oct 14 03:17:11 np0005486808 rsyslogd[1002]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Oct 14 03:17:11 np0005486808 systemd[1]: Started System Logging Service.
Oct 14 03:17:11 np0005486808 systemd[1]: Reached target Multi-User System.
Oct 14 03:17:11 np0005486808 systemd[1]: Starting Record Runlevel Change in UTMP...
Oct 14 03:17:11 np0005486808 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct 14 03:17:11 np0005486808 systemd[1]: Finished Record Runlevel Change in UTMP.
Oct 14 03:17:11 np0005486808 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 03:17:11 np0005486808 cloud-init[1022]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Tue, 14 Oct 2025 07:17:11 +0000. Up 10.75 seconds.
Oct 14 03:17:12 np0005486808 systemd[1]: Finished Cloud-init: Config Stage.
Oct 14 03:17:12 np0005486808 systemd[1]: Starting Cloud-init: Final Stage...
Oct 14 03:17:12 np0005486808 cloud-init[1037]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Tue, 14 Oct 2025 07:17:12 +0000. Up 11.14 seconds.
Oct 14 03:17:12 np0005486808 cloud-init[1039]: #############################################################
Oct 14 03:17:12 np0005486808 cloud-init[1040]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Oct 14 03:17:12 np0005486808 cloud-init[1042]: 256 SHA256:D6MyqAhvBfNEDB5aAlgPSwaCO8csEzFVlB/CgUcBDkU root@np0005486808.novalocal (ECDSA)
Oct 14 03:17:12 np0005486808 cloud-init[1044]: 256 SHA256:5L9eade61v2aN6BSUF/qZpxbRyX3kuj4/v7zXw48RMA root@np0005486808.novalocal (ED25519)
Oct 14 03:17:12 np0005486808 cloud-init[1046]: 3072 SHA256:KSnJm3EvascK9nyPj0yqWSUU9uF95iPnpnU+8ATkRiU root@np0005486808.novalocal (RSA)
Oct 14 03:17:12 np0005486808 cloud-init[1047]: -----END SSH HOST KEY FINGERPRINTS-----
Oct 14 03:17:12 np0005486808 cloud-init[1048]: #############################################################
Oct 14 03:17:12 np0005486808 cloud-init[1037]: Cloud-init v. 24.4-7.el9 finished at Tue, 14 Oct 2025 07:17:12 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 11.33 seconds
Oct 14 03:17:12 np0005486808 systemd[1]: Finished Cloud-init: Final Stage.
Oct 14 03:17:12 np0005486808 systemd[1]: Reached target Cloud-init target.
Oct 14 03:17:12 np0005486808 systemd[1]: Startup finished in 1.857s (kernel) + 3.003s (initrd) + 6.561s (userspace) = 11.422s.
Oct 14 03:17:14 np0005486808 chronyd[809]: Selected source 138.197.164.54 (2.centos.pool.ntp.org)
Oct 14 03:17:14 np0005486808 chronyd[809]: System clock TAI offset set to 37 seconds
Oct 14 03:17:16 np0005486808 chronyd[809]: Selected source 54.39.196.172 (2.centos.pool.ntp.org)
Oct 14 03:17:18 np0005486808 irqbalance[794]: Cannot change IRQ 25 affinity: Operation not permitted
Oct 14 03:17:18 np0005486808 irqbalance[794]: IRQ 25 affinity is now unmanaged
Oct 14 03:17:18 np0005486808 irqbalance[794]: Cannot change IRQ 31 affinity: Operation not permitted
Oct 14 03:17:18 np0005486808 irqbalance[794]: IRQ 31 affinity is now unmanaged
Oct 14 03:17:18 np0005486808 irqbalance[794]: Cannot change IRQ 28 affinity: Operation not permitted
Oct 14 03:17:18 np0005486808 irqbalance[794]: IRQ 28 affinity is now unmanaged
Oct 14 03:17:18 np0005486808 irqbalance[794]: Cannot change IRQ 32 affinity: Operation not permitted
Oct 14 03:17:18 np0005486808 irqbalance[794]: IRQ 32 affinity is now unmanaged
Oct 14 03:17:18 np0005486808 irqbalance[794]: Cannot change IRQ 30 affinity: Operation not permitted
Oct 14 03:17:18 np0005486808 irqbalance[794]: IRQ 30 affinity is now unmanaged
Oct 14 03:17:18 np0005486808 irqbalance[794]: Cannot change IRQ 29 affinity: Operation not permitted
Oct 14 03:17:18 np0005486808 irqbalance[794]: IRQ 29 affinity is now unmanaged
Oct 14 03:17:19 np0005486808 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 14 03:17:39 np0005486808 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 14 03:32:45 np0005486808 systemd[1]: Starting Cleanup of Temporary Directories...
Oct 14 03:32:45 np0005486808 systemd[1]: Starting dnf makecache...
Oct 14 03:32:45 np0005486808 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Oct 14 03:32:45 np0005486808 systemd[1]: Finished Cleanup of Temporary Directories.
Oct 14 03:32:45 np0005486808 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Oct 14 03:32:46 np0005486808 dnf[1064]: Failed determining last makecache time.
Oct 14 03:32:46 np0005486808 dnf[1064]: CentOS Stream 9 - BaseOS                         17 kB/s | 5.1 kB     00:00
Oct 14 03:32:47 np0005486808 dnf[1064]: CentOS Stream 9 - BaseOS                         18 MB/s | 8.8 MB     00:00
Oct 14 03:32:48 np0005486808 dnf[1064]: CentOS Stream 9 - AppStream                      47 kB/s | 5.2 kB     00:00
Oct 14 03:32:50 np0005486808 dnf[1064]: CentOS Stream 9 - AppStream                      16 MB/s |  25 MB     00:01
Oct 14 03:32:55 np0005486808 dnf[1064]: CentOS Stream 9 - CRB                            50 kB/s | 5.0 kB     00:00
Oct 14 03:32:57 np0005486808 dnf[1064]: CentOS Stream 9 - CRB                           6.2 MB/s | 7.2 MB     00:01
Oct 14 03:32:58 np0005486808 dnf[1064]: CentOS Stream 9 - Extras packages                86 kB/s | 8.0 kB     00:00
Oct 14 03:32:59 np0005486808 dnf[1064]: Metadata cache created.
Oct 14 03:32:59 np0005486808 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct 14 03:32:59 np0005486808 systemd[1]: Finished dnf makecache.
Oct 14 03:32:59 np0005486808 systemd[1]: dnf-makecache.service: Consumed 10.731s CPU time.
Oct 14 03:47:19 np0005486808 systemd[1]: Created slice User Slice of UID 1000.
Oct 14 03:47:19 np0005486808 systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct 14 03:47:19 np0005486808 systemd-logind[799]: New session 1 of user zuul.
Oct 14 03:47:19 np0005486808 systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct 14 03:47:19 np0005486808 systemd[1]: Starting User Manager for UID 1000...
Oct 14 03:47:19 np0005486808 systemd[1097]: Queued start job for default target Main User Target.
Oct 14 03:47:19 np0005486808 systemd[1097]: Created slice User Application Slice.
Oct 14 03:47:19 np0005486808 systemd[1097]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 14 03:47:19 np0005486808 systemd[1097]: Started Daily Cleanup of User's Temporary Directories.
Oct 14 03:47:19 np0005486808 systemd[1097]: Reached target Paths.
Oct 14 03:47:19 np0005486808 systemd[1097]: Reached target Timers.
Oct 14 03:47:19 np0005486808 systemd[1097]: Starting D-Bus User Message Bus Socket...
Oct 14 03:47:19 np0005486808 systemd[1097]: Starting Create User's Volatile Files and Directories...
Oct 14 03:47:19 np0005486808 systemd[1097]: Listening on D-Bus User Message Bus Socket.
Oct 14 03:47:19 np0005486808 systemd[1097]: Reached target Sockets.
Oct 14 03:47:19 np0005486808 systemd[1097]: Finished Create User's Volatile Files and Directories.
Oct 14 03:47:19 np0005486808 systemd[1097]: Reached target Basic System.
Oct 14 03:47:19 np0005486808 systemd[1097]: Reached target Main User Target.
Oct 14 03:47:19 np0005486808 systemd[1097]: Startup finished in 153ms.
Oct 14 03:47:19 np0005486808 systemd[1]: Started User Manager for UID 1000.
Oct 14 03:47:19 np0005486808 systemd[1]: Started Session 1 of User zuul.
Oct 14 03:47:19 np0005486808 python3[1182]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 03:47:22 np0005486808 python3[1210]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 03:47:28 np0005486808 python3[1268]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 03:47:29 np0005486808 python3[1308]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Oct 14 03:47:31 np0005486808 python3[1334]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDZMUxzmGzMTVFKKZYY3qNPJbKpOIyjPz3y5DgtPheQLkIvtNri9PRWfD7dV85nXAlENJaNzyNarVKV14jzjNyTul7DhFjenkyjhlGJAnj5uH7kJ1LMRT+SKzVWlZJn7ZqbHZM4Go7YYOBBiYD7hoBUwZwa1npCIzOgaPf4TWUTr5QPQ/vizpc/vcNjmj0L7iCnEZFH6UOzpdRRCp/CMsGW5rnvR7j3lMEna3vaDz14S7JeTwTfhiCavSO27rwhB8O4ttLPh3sqHZLPuNqedcO5ofRXJDUR0i2bo8iPHuTxMRBTziNj+lEMnTYPc/Vm4vjpK4XCDdtUb3uwHsqkZ1wFsjmzKkx9UhsqgisTFvgohltmMKOp90Q5BhwRSkwvILL8fI5aGQXSXEp7bLcNx+PyMNNBbN/RWFAotff/1dXShMlCI6xZey8HrV7wgqDdNgmSuduDKWy7rys44RuyLT5GVxOm2H8MPAShtGwdoZY2YESyG3Nyft02qFwrkrldUfc= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 03:47:31 np0005486808 python3[1358]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:47:32 np0005486808 python3[1457]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 03:47:32 np0005486808 python3[1528]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760428052.043777-207-10871870957679/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=b9a50d2f73cc46ee90b8dfc666ad4db9_id_rsa follow=False checksum=7a44efc42b32d7ad3379286d062501ae6d501eb8 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:47:33 np0005486808 python3[1651]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 03:47:33 np0005486808 python3[1722]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760428053.071688-240-273898562719752/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=b9a50d2f73cc46ee90b8dfc666ad4db9_id_rsa.pub follow=False checksum=381d8b93d7af08a6121e2882dd9c9e65345364fe backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:47:35 np0005486808 python3[1770]: ansible-ping Invoked with data=pong
Oct 14 03:47:36 np0005486808 python3[1794]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 03:47:38 np0005486808 python3[1852]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Oct 14 03:47:39 np0005486808 python3[1884]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:47:40 np0005486808 python3[1908]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:47:40 np0005486808 python3[1932]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:47:40 np0005486808 python3[1956]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:47:40 np0005486808 python3[1980]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:47:41 np0005486808 python3[2004]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:47:42 np0005486808 python3[2030]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:47:43 np0005486808 python3[2108]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 03:47:43 np0005486808 python3[2181]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760428062.936967-21-207878248377631/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:47:44 np0005486808 python3[2229]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 03:47:44 np0005486808 python3[2253]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 03:47:45 np0005486808 python3[2277]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 03:47:45 np0005486808 python3[2301]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 03:47:45 np0005486808 python3[2325]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 03:47:46 np0005486808 python3[2349]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 03:47:46 np0005486808 python3[2373]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 03:47:46 np0005486808 python3[2397]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 03:47:46 np0005486808 python3[2421]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 03:47:47 np0005486808 python3[2445]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 03:47:47 np0005486808 python3[2469]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 03:47:47 np0005486808 python3[2493]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 03:47:48 np0005486808 python3[2517]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 03:47:48 np0005486808 python3[2541]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 03:47:48 np0005486808 python3[2565]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 03:47:49 np0005486808 python3[2589]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 03:47:49 np0005486808 python3[2613]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 03:47:49 np0005486808 python3[2637]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 03:47:50 np0005486808 python3[2661]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 03:47:50 np0005486808 python3[2685]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 03:47:50 np0005486808 python3[2709]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 03:47:50 np0005486808 python3[2733]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 03:47:51 np0005486808 python3[2757]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 03:47:51 np0005486808 python3[2781]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 03:47:51 np0005486808 python3[2805]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 03:47:52 np0005486808 python3[2829]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 03:47:54 np0005486808 python3[2855]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 14 03:47:54 np0005486808 systemd[1]: Starting Time & Date Service...
Oct 14 03:47:54 np0005486808 systemd[1]: Started Time & Date Service.
Oct 14 03:47:54 np0005486808 systemd-timedated[2857]: Changed time zone to 'UTC' (UTC).
Oct 14 03:47:55 np0005486808 python3[2886]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:47:55 np0005486808 python3[2962]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 03:47:56 np0005486808 python3[3033]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1760428075.4381955-153-278448771091398/source _original_basename=tmpne06rgu2 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:47:56 np0005486808 python3[3133]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 03:47:57 np0005486808 python3[3204]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1760428076.3746967-183-126963539622212/source _original_basename=tmppp6a5ls6 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:47:57 np0005486808 python3[3306]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 03:47:58 np0005486808 python3[3379]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1760428077.4304926-231-133757372578588/source _original_basename=tmpo7b8ulbf follow=False checksum=a70bebf2c8ca4f48b35db326a6da932097539870 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:47:58 np0005486808 python3[3427]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 03:47:58 np0005486808 python3[3453]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 03:47:59 np0005486808 python3[3534]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 03:47:59 np0005486808 python3[3607]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1760428079.189213-273-69644275919718/source _original_basename=tmp2guznt7i follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:48:00 np0005486808 python3[3658]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ec2-ffbe-8438-24fb-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 03:48:01 np0005486808 python3[3686]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-8438-24fb-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Oct 14 03:48:02 np0005486808 python3[3714]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:48:19 np0005486808 python3[3740]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:48:24 np0005486808 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 14 03:48:53 np0005486808 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 14 03:48:53 np0005486808 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Oct 14 03:48:53 np0005486808 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Oct 14 03:48:53 np0005486808 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Oct 14 03:48:53 np0005486808 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Oct 14 03:48:53 np0005486808 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Oct 14 03:48:53 np0005486808 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Oct 14 03:48:53 np0005486808 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Oct 14 03:48:53 np0005486808 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Oct 14 03:48:53 np0005486808 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Oct 14 03:48:53 np0005486808 NetworkManager[857]: <info>  [1760428133.9076] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 14 03:48:53 np0005486808 systemd-udevd[3743]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 03:48:53 np0005486808 NetworkManager[857]: <info>  [1760428133.9295] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 14 03:48:53 np0005486808 NetworkManager[857]: <info>  [1760428133.9333] settings: (eth1): created default wired connection 'Wired connection 1'
Oct 14 03:48:53 np0005486808 NetworkManager[857]: <info>  [1760428133.9338] device (eth1): carrier: link connected
Oct 14 03:48:53 np0005486808 NetworkManager[857]: <info>  [1760428133.9340] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 14 03:48:53 np0005486808 NetworkManager[857]: <info>  [1760428133.9347] policy: auto-activating connection 'Wired connection 1' (4f8df825-5230-3ccd-ac25-c891458e3587)
Oct 14 03:48:53 np0005486808 NetworkManager[857]: <info>  [1760428133.9351] device (eth1): Activation: starting connection 'Wired connection 1' (4f8df825-5230-3ccd-ac25-c891458e3587)
Oct 14 03:48:53 np0005486808 NetworkManager[857]: <info>  [1760428133.9353] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 14 03:48:53 np0005486808 NetworkManager[857]: <info>  [1760428133.9355] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 14 03:48:53 np0005486808 NetworkManager[857]: <info>  [1760428133.9360] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 14 03:48:53 np0005486808 NetworkManager[857]: <info>  [1760428133.9365] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 14 03:48:55 np0005486808 python3[3770]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ec2-ffbe-8ca2-366c-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 03:49:05 np0005486808 python3[3853]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 03:49:05 np0005486808 python3[3926]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760428144.8672793-102-153074403969119/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=74d7ef16b25228340f846578bd4ad5493b108dd4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:49:06 np0005486808 python3[3976]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 14 03:49:06 np0005486808 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct 14 03:49:06 np0005486808 systemd[1]: Stopped Network Manager Wait Online.
Oct 14 03:49:06 np0005486808 systemd[1]: Stopping Network Manager Wait Online...
Oct 14 03:49:06 np0005486808 systemd[1]: Stopping Network Manager...
Oct 14 03:49:06 np0005486808 NetworkManager[857]: <info>  [1760428146.6132] caught SIGTERM, shutting down normally.
Oct 14 03:49:06 np0005486808 NetworkManager[857]: <info>  [1760428146.6149] dhcp4 (eth0): canceled DHCP transaction
Oct 14 03:49:06 np0005486808 NetworkManager[857]: <info>  [1760428146.6149] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 14 03:49:06 np0005486808 NetworkManager[857]: <info>  [1760428146.6150] dhcp4 (eth0): state changed no lease
Oct 14 03:49:06 np0005486808 NetworkManager[857]: <info>  [1760428146.6155] manager: NetworkManager state is now CONNECTING
Oct 14 03:49:06 np0005486808 NetworkManager[857]: <info>  [1760428146.6301] dhcp4 (eth1): canceled DHCP transaction
Oct 14 03:49:06 np0005486808 NetworkManager[857]: <info>  [1760428146.6301] dhcp4 (eth1): state changed no lease
Oct 14 03:49:06 np0005486808 NetworkManager[857]: <info>  [1760428146.6359] exiting (success)
Oct 14 03:49:06 np0005486808 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 14 03:49:06 np0005486808 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 14 03:49:06 np0005486808 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct 14 03:49:06 np0005486808 systemd[1]: Stopped Network Manager.
Oct 14 03:49:06 np0005486808 systemd[1]: NetworkManager.service: Consumed 11.682s CPU time, 10.0M memory peak.
Oct 14 03:49:06 np0005486808 systemd[1]: Starting Network Manager...
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.7286] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:9f631222-b49c-47d2-a156-3705510cf21d)
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.7288] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.7387] manager[0x55693da7e070]: monitoring kernel firmware directory '/lib/firmware'.
Oct 14 03:49:06 np0005486808 systemd[1]: Starting Hostname Service...
Oct 14 03:49:06 np0005486808 systemd[1]: Started Hostname Service.
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.8551] hostname: hostname: using hostnamed
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.8553] hostname: static hostname changed from (none) to "np0005486808.novalocal"
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.8562] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.8572] manager[0x55693da7e070]: rfkill: Wi-Fi hardware radio set enabled
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.8572] manager[0x55693da7e070]: rfkill: WWAN hardware radio set enabled
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.8623] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.8624] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.8625] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.8625] manager: Networking is enabled by state file
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.8629] settings: Loaded settings plugin: keyfile (internal)
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.8636] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.8676] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.8691] dhcp: init: Using DHCP client 'internal'
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.8695] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.8705] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.8714] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.8727] device (lo): Activation: starting connection 'lo' (047addc7-63d3-4a5b-85cf-f898174b7a4c)
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.8739] device (eth0): carrier: link connected
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.8746] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.8754] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.8755] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.8766] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.8777] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.8789] device (eth1): carrier: link connected
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.8797] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.8806] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (4f8df825-5230-3ccd-ac25-c891458e3587) (indicated)
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.8807] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.8818] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.8834] device (eth1): Activation: starting connection 'Wired connection 1' (4f8df825-5230-3ccd-ac25-c891458e3587)
Oct 14 03:49:06 np0005486808 systemd[1]: Started Network Manager.
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.8844] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.8854] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.8858] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.8863] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.8868] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.8884] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.8890] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.8895] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.8904] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.8916] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.8923] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.8941] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.8946] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.8971] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.8978] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.8990] device (lo): Activation: successful, device activated.
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.9003] dhcp4 (eth0): state changed new lease, address=38.102.83.202
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.9014] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 14 03:49:06 np0005486808 systemd[1]: Starting Network Manager Wait Online...
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.9109] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.9137] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.9139] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.9145] manager: NetworkManager state is now CONNECTED_SITE
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.9150] device (eth0): Activation: successful, device activated.
Oct 14 03:49:06 np0005486808 NetworkManager[3987]: <info>  [1760428146.9159] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 14 03:49:07 np0005486808 python3[4063]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ec2-ffbe-8ca2-366c-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 03:49:08 np0005486808 irqbalance[794]: Cannot change IRQ 26 affinity: Operation not permitted
Oct 14 03:49:08 np0005486808 irqbalance[794]: IRQ 26 affinity is now unmanaged
Oct 14 03:49:17 np0005486808 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 14 03:49:36 np0005486808 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 14 03:49:52 np0005486808 NetworkManager[3987]: <info>  [1760428192.1859] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 14 03:49:52 np0005486808 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 14 03:49:52 np0005486808 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 14 03:49:52 np0005486808 NetworkManager[3987]: <info>  [1760428192.2286] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 14 03:49:52 np0005486808 NetworkManager[3987]: <info>  [1760428192.2290] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 14 03:49:52 np0005486808 NetworkManager[3987]: <info>  [1760428192.2302] device (eth1): Activation: successful, device activated.
Oct 14 03:49:52 np0005486808 NetworkManager[3987]: <info>  [1760428192.2313] manager: startup complete
Oct 14 03:49:52 np0005486808 NetworkManager[3987]: <info>  [1760428192.2316] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Oct 14 03:49:52 np0005486808 NetworkManager[3987]: <warn>  [1760428192.2324] device (eth1): Activation: failed for connection 'Wired connection 1'
Oct 14 03:49:52 np0005486808 NetworkManager[3987]: <info>  [1760428192.2337] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Oct 14 03:49:52 np0005486808 systemd[1]: Finished Network Manager Wait Online.
Oct 14 03:49:52 np0005486808 NetworkManager[3987]: <info>  [1760428192.2471] dhcp4 (eth1): canceled DHCP transaction
Oct 14 03:49:52 np0005486808 NetworkManager[3987]: <info>  [1760428192.2473] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 14 03:49:52 np0005486808 NetworkManager[3987]: <info>  [1760428192.2474] dhcp4 (eth1): state changed no lease
Oct 14 03:49:52 np0005486808 NetworkManager[3987]: <info>  [1760428192.2501] policy: auto-activating connection 'ci-private-network' (bc870d37-615b-5421-ae37-b2ab437e826e)
Oct 14 03:49:52 np0005486808 NetworkManager[3987]: <info>  [1760428192.2510] device (eth1): Activation: starting connection 'ci-private-network' (bc870d37-615b-5421-ae37-b2ab437e826e)
Oct 14 03:49:52 np0005486808 NetworkManager[3987]: <info>  [1760428192.2516] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 14 03:49:52 np0005486808 NetworkManager[3987]: <info>  [1760428192.2524] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 14 03:49:52 np0005486808 NetworkManager[3987]: <info>  [1760428192.2541] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 14 03:49:52 np0005486808 NetworkManager[3987]: <info>  [1760428192.2558] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 14 03:49:52 np0005486808 NetworkManager[3987]: <info>  [1760428192.2614] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 14 03:49:52 np0005486808 NetworkManager[3987]: <info>  [1760428192.2618] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 14 03:49:52 np0005486808 NetworkManager[3987]: <info>  [1760428192.2632] device (eth1): Activation: successful, device activated.
Oct 14 03:49:55 np0005486808 systemd[1097]: Starting Mark boot as successful...
Oct 14 03:49:55 np0005486808 systemd[1097]: Finished Mark boot as successful.
Oct 14 03:50:02 np0005486808 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 14 03:50:06 np0005486808 systemd-logind[799]: Session 1 logged out. Waiting for processes to exit.
Oct 14 03:50:07 np0005486808 systemd-logind[799]: New session 3 of user zuul.
Oct 14 03:50:07 np0005486808 systemd[1]: Started Session 3 of User zuul.
Oct 14 03:50:07 np0005486808 python3[4174]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 03:50:07 np0005486808 python3[4247]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760428207.174438-267-197447175610387/source _original_basename=tmpux_ptn_4 follow=False checksum=79fc925f60db06e658c99798c3329287fca4fe9e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:50:10 np0005486808 systemd[1]: session-3.scope: Deactivated successfully.
Oct 14 03:50:10 np0005486808 systemd-logind[799]: Session 3 logged out. Waiting for processes to exit.
Oct 14 03:50:10 np0005486808 systemd-logind[799]: Removed session 3.
Oct 14 03:52:55 np0005486808 systemd[1097]: Created slice User Background Tasks Slice.
Oct 14 03:52:55 np0005486808 systemd[1097]: Starting Cleanup of User's Temporary Files and Directories...
Oct 14 03:52:55 np0005486808 systemd[1097]: Finished Cleanup of User's Temporary Files and Directories.
Oct 14 03:55:13 np0005486808 systemd-logind[799]: New session 4 of user zuul.
Oct 14 03:55:13 np0005486808 systemd[1]: Started Session 4 of User zuul.
Oct 14 03:55:13 np0005486808 python3[4306]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-7623-dfb2-000000001ce8-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 03:55:14 np0005486808 python3[4335]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:55:14 np0005486808 python3[4361]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:55:14 np0005486808 python3[4387]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:55:15 np0005486808 python3[4413]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:55:15 np0005486808 python3[4439]: ansible-ansible.builtin.lineinfile Invoked with path=/etc/systemd/system.conf regexp=^#DefaultIOAccounting=no line=DefaultIOAccounting=yes state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:55:15 np0005486808 python3[4439]: ansible-ansible.builtin.lineinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Oct 14 03:55:16 np0005486808 python3[4465]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 14 03:55:16 np0005486808 systemd[1]: Reloading.
Oct 14 03:55:16 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 03:55:18 np0005486808 python3[4520]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Oct 14 03:55:18 np0005486808 python3[4546]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 03:55:18 np0005486808 python3[4574]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 03:55:18 np0005486808 python3[4602]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 03:55:19 np0005486808 python3[4630]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 03:55:19 np0005486808 python3[4657]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-7623-dfb2-000000001cee-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 03:55:20 np0005486808 python3[4687]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 03:55:22 np0005486808 systemd[1]: session-4.scope: Deactivated successfully.
Oct 14 03:55:22 np0005486808 systemd[1]: session-4.scope: Consumed 3.738s CPU time.
Oct 14 03:55:22 np0005486808 systemd-logind[799]: Session 4 logged out. Waiting for processes to exit.
Oct 14 03:55:22 np0005486808 systemd-logind[799]: Removed session 4.
Oct 14 03:55:24 np0005486808 systemd-logind[799]: New session 5 of user zuul.
Oct 14 03:55:24 np0005486808 systemd[1]: Started Session 5 of User zuul.
Oct 14 03:55:24 np0005486808 python3[4723]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 14 03:55:38 np0005486808 kernel: SELinux:  Converting 364 SID table entries...
Oct 14 03:55:38 np0005486808 kernel: SELinux:  policy capability network_peer_controls=1
Oct 14 03:55:38 np0005486808 kernel: SELinux:  policy capability open_perms=1
Oct 14 03:55:38 np0005486808 kernel: SELinux:  policy capability extended_socket_class=1
Oct 14 03:55:38 np0005486808 kernel: SELinux:  policy capability always_check_network=0
Oct 14 03:55:38 np0005486808 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 14 03:55:38 np0005486808 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 14 03:55:38 np0005486808 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 14 03:55:46 np0005486808 kernel: SELinux:  Converting 364 SID table entries...
Oct 14 03:55:46 np0005486808 kernel: SELinux:  policy capability network_peer_controls=1
Oct 14 03:55:46 np0005486808 kernel: SELinux:  policy capability open_perms=1
Oct 14 03:55:46 np0005486808 kernel: SELinux:  policy capability extended_socket_class=1
Oct 14 03:55:46 np0005486808 kernel: SELinux:  policy capability always_check_network=0
Oct 14 03:55:46 np0005486808 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 14 03:55:46 np0005486808 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 14 03:55:46 np0005486808 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 14 03:55:55 np0005486808 kernel: SELinux:  Converting 364 SID table entries...
Oct 14 03:55:55 np0005486808 kernel: SELinux:  policy capability network_peer_controls=1
Oct 14 03:55:55 np0005486808 kernel: SELinux:  policy capability open_perms=1
Oct 14 03:55:55 np0005486808 kernel: SELinux:  policy capability extended_socket_class=1
Oct 14 03:55:55 np0005486808 kernel: SELinux:  policy capability always_check_network=0
Oct 14 03:55:55 np0005486808 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 14 03:55:55 np0005486808 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 14 03:55:55 np0005486808 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 14 03:55:56 np0005486808 setsebool[4782]: The virt_use_nfs policy boolean was changed to 1 by root
Oct 14 03:55:56 np0005486808 setsebool[4782]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Oct 14 03:56:06 np0005486808 kernel: SELinux:  Converting 367 SID table entries...
Oct 14 03:56:06 np0005486808 kernel: SELinux:  policy capability network_peer_controls=1
Oct 14 03:56:06 np0005486808 kernel: SELinux:  policy capability open_perms=1
Oct 14 03:56:06 np0005486808 kernel: SELinux:  policy capability extended_socket_class=1
Oct 14 03:56:06 np0005486808 kernel: SELinux:  policy capability always_check_network=0
Oct 14 03:56:06 np0005486808 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 14 03:56:06 np0005486808 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 14 03:56:06 np0005486808 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 14 03:56:25 np0005486808 dbus-broker-launch[782]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct 14 03:56:25 np0005486808 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 14 03:56:25 np0005486808 systemd[1]: Starting man-db-cache-update.service...
Oct 14 03:56:25 np0005486808 systemd[1]: Reloading.
Oct 14 03:56:25 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 03:56:25 np0005486808 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 14 03:56:26 np0005486808 systemd[1]: Starting PackageKit Daemon...
Oct 14 03:56:26 np0005486808 systemd[1]: Starting Authorization Manager...
Oct 14 03:56:26 np0005486808 polkitd[6428]: Started polkitd version 0.117
Oct 14 03:56:26 np0005486808 systemd[1]: Started Authorization Manager.
Oct 14 03:56:26 np0005486808 systemd[1]: Started PackageKit Daemon.
Oct 14 03:56:38 np0005486808 irqbalance[794]: Cannot change IRQ 27 affinity: Operation not permitted
Oct 14 03:56:38 np0005486808 irqbalance[794]: IRQ 27 affinity is now unmanaged
Oct 14 03:56:43 np0005486808 python3[15187]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-2a42-1cd4-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 03:56:44 np0005486808 kernel: evm: overlay not supported
Oct 14 03:56:44 np0005486808 systemd[1097]: Starting D-Bus User Message Bus...
Oct 14 03:56:44 np0005486808 dbus-broker-launch[15548]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Oct 14 03:56:44 np0005486808 dbus-broker-launch[15548]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Oct 14 03:56:44 np0005486808 systemd[1097]: Started D-Bus User Message Bus.
Oct 14 03:56:44 np0005486808 dbus-broker-lau[15548]: Ready
Oct 14 03:56:44 np0005486808 systemd[1097]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct 14 03:56:44 np0005486808 systemd[1097]: Created slice Slice /user.
Oct 14 03:56:44 np0005486808 systemd[1097]: podman-15490.scope: unit configures an IP firewall, but not running as root.
Oct 14 03:56:44 np0005486808 systemd[1097]: (This warning is only shown for the first unit using IP firewalling.)
Oct 14 03:56:44 np0005486808 systemd[1097]: Started podman-15490.scope.
Oct 14 03:56:44 np0005486808 systemd[1097]: Started podman-pause-6a00c625.scope.
Oct 14 03:56:45 np0005486808 python3[15829]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.233:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.233:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:56:45 np0005486808 systemd[1]: session-5.scope: Deactivated successfully.
Oct 14 03:56:45 np0005486808 systemd[1]: session-5.scope: Consumed 59.037s CPU time.
Oct 14 03:56:45 np0005486808 systemd-logind[799]: Session 5 logged out. Waiting for processes to exit.
Oct 14 03:56:45 np0005486808 systemd-logind[799]: Removed session 5.
Oct 14 03:57:10 np0005486808 systemd-logind[799]: New session 6 of user zuul.
Oct 14 03:57:10 np0005486808 systemd[1]: Started Session 6 of User zuul.
Oct 14 03:57:10 np0005486808 python3[24129]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCNs3xmiAs++gsyMqV8LSHGUxpDSNCW/85omX77426IPks6O3czlsCFIU6FIho2AM3nB/Il5od3UCdon4iLmw0c= zuul@np0005487132.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 03:57:10 np0005486808 python3[24270]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCNs3xmiAs++gsyMqV8LSHGUxpDSNCW/85omX77426IPks6O3czlsCFIU6FIho2AM3nB/Il5od3UCdon4iLmw0c= zuul@np0005487132.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 03:57:11 np0005486808 python3[24607]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005486808.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Oct 14 03:57:12 np0005486808 python3[24801]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCNs3xmiAs++gsyMqV8LSHGUxpDSNCW/85omX77426IPks6O3czlsCFIU6FIho2AM3nB/Il5od3UCdon4iLmw0c= zuul@np0005487132.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 14 03:57:12 np0005486808 python3[25023]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 03:57:13 np0005486808 python3[25313]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1760428632.1818092-135-229015122398412/source _original_basename=tmpmep6tkal follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 03:57:13 np0005486808 python3[25605]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Oct 14 03:57:13 np0005486808 systemd[1]: Starting Hostname Service...
Oct 14 03:57:13 np0005486808 systemd[1]: Started Hostname Service.
Oct 14 03:57:13 np0005486808 systemd-hostnamed[25706]: Changed pretty hostname to 'compute-0'
Oct 14 03:57:13 np0005486808 systemd-hostnamed[25706]: Hostname set to <compute-0> (static)
Oct 14 03:57:13 np0005486808 NetworkManager[3987]: <info>  [1760428633.9957] hostname: static hostname changed from "np0005486808.novalocal" to "compute-0"
Oct 14 03:57:14 np0005486808 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 14 03:57:14 np0005486808 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 14 03:57:14 np0005486808 systemd[1]: session-6.scope: Deactivated successfully.
Oct 14 03:57:14 np0005486808 systemd[1]: session-6.scope: Consumed 2.407s CPU time.
Oct 14 03:57:14 np0005486808 systemd-logind[799]: Session 6 logged out. Waiting for processes to exit.
Oct 14 03:57:14 np0005486808 systemd-logind[799]: Removed session 6.
Oct 14 03:57:16 np0005486808 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 14 03:57:16 np0005486808 systemd[1]: Finished man-db-cache-update.service.
Oct 14 03:57:16 np0005486808 systemd[1]: man-db-cache-update.service: Consumed 1min 2.438s CPU time.
Oct 14 03:57:16 np0005486808 systemd[1]: run-rab8a5f13eaff43e18b04391617ecd1f8.service: Deactivated successfully.
Oct 14 03:57:24 np0005486808 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 14 03:57:44 np0005486808 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 14 04:00:48 np0005486808 systemd-logind[799]: New session 7 of user zuul.
Oct 14 04:00:48 np0005486808 systemd[1]: Started Session 7 of User zuul.
Oct 14 04:00:48 np0005486808 python3[26643]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 04:00:50 np0005486808 python3[26759]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:00:50 np0005486808 python3[26832]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760428849.9442744-30220-118277591302670/source mode=0755 _original_basename=delorean.repo follow=False checksum=56ac791d04f1e01ef1ce64d672fcd569241a176f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:00:51 np0005486808 python3[26858]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:00:51 np0005486808 python3[26931]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760428849.9442744-30220-118277591302670/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:00:51 np0005486808 python3[26957]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:00:52 np0005486808 python3[27030]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760428849.9442744-30220-118277591302670/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:00:52 np0005486808 python3[27056]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:00:52 np0005486808 python3[27129]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760428849.9442744-30220-118277591302670/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:00:53 np0005486808 python3[27155]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:00:53 np0005486808 python3[27228]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760428849.9442744-30220-118277591302670/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:00:53 np0005486808 python3[27254]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:00:54 np0005486808 python3[27327]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760428849.9442744-30220-118277591302670/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:00:54 np0005486808 python3[27353]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:00:55 np0005486808 python3[27426]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760428849.9442744-30220-118277591302670/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2093c458f7853874ca1ddd3de93319068540cb3b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:01:10 np0005486808 python3[27499]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:01:31 np0005486808 systemd[1]: packagekit.service: Deactivated successfully.
Oct 14 04:06:09 np0005486808 systemd[1]: session-7.scope: Deactivated successfully.
Oct 14 04:06:09 np0005486808 systemd[1]: session-7.scope: Consumed 5.957s CPU time.
Oct 14 04:06:09 np0005486808 systemd-logind[799]: Session 7 logged out. Waiting for processes to exit.
Oct 14 04:06:09 np0005486808 systemd-logind[799]: Removed session 7.
Oct 14 04:12:06 np0005486808 systemd-logind[799]: New session 8 of user zuul.
Oct 14 04:12:06 np0005486808 systemd[1]: Started Session 8 of User zuul.
Oct 14 04:12:07 np0005486808 python3.9[27658]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 04:12:08 np0005486808 python3.9[27839]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:12:16 np0005486808 systemd[1]: session-8.scope: Deactivated successfully.
Oct 14 04:12:16 np0005486808 systemd[1]: session-8.scope: Consumed 8.016s CPU time.
Oct 14 04:12:16 np0005486808 systemd-logind[799]: Session 8 logged out. Waiting for processes to exit.
Oct 14 04:12:16 np0005486808 systemd-logind[799]: Removed session 8.
Oct 14 04:12:32 np0005486808 systemd-logind[799]: New session 9 of user zuul.
Oct 14 04:12:32 np0005486808 systemd[1]: Started Session 9 of User zuul.
Oct 14 04:12:33 np0005486808 python3.9[28051]: ansible-ansible.legacy.ping Invoked with data=pong
Oct 14 04:12:34 np0005486808 python3.9[28225]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 04:12:35 np0005486808 python3.9[28377]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:12:36 np0005486808 python3.9[28530]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:12:37 np0005486808 python3.9[28682]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:12:38 np0005486808 python3.9[28834]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:12:39 np0005486808 python3.9[28957]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1760429557.9540186-73-247085295754873/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:12:40 np0005486808 python3.9[29109]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 04:12:41 np0005486808 python3.9[29265]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:12:42 np0005486808 python3.9[29415]: ansible-ansible.builtin.service_facts Invoked
Oct 14 04:12:47 np0005486808 python3.9[29670]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:12:48 np0005486808 python3.9[29820]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 04:12:49 np0005486808 python3.9[29974]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 04:12:51 np0005486808 python3.9[30132]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 14 04:12:51 np0005486808 python3.9[30216]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 14 04:13:35 np0005486808 systemd[1]: Reloading.
Oct 14 04:13:35 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:13:35 np0005486808 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Oct 14 04:13:36 np0005486808 systemd[1]: Reloading.
Oct 14 04:13:36 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:13:36 np0005486808 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Oct 14 04:13:36 np0005486808 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Oct 14 04:13:36 np0005486808 systemd[1]: Reloading.
Oct 14 04:13:36 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:13:36 np0005486808 systemd[1]: Listening on LVM2 poll daemon socket.
Oct 14 04:13:36 np0005486808 dbus-broker-launch[774]: Noticed file-system modification, trigger reload.
Oct 14 04:13:36 np0005486808 dbus-broker-launch[774]: Noticed file-system modification, trigger reload.
Oct 14 04:13:36 np0005486808 dbus-broker-launch[774]: Noticed file-system modification, trigger reload.
Oct 14 04:14:38 np0005486808 kernel: SELinux:  Converting 2713 SID table entries...
Oct 14 04:14:38 np0005486808 kernel: SELinux:  policy capability network_peer_controls=1
Oct 14 04:14:38 np0005486808 kernel: SELinux:  policy capability open_perms=1
Oct 14 04:14:38 np0005486808 kernel: SELinux:  policy capability extended_socket_class=1
Oct 14 04:14:38 np0005486808 kernel: SELinux:  policy capability always_check_network=0
Oct 14 04:14:38 np0005486808 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 14 04:14:38 np0005486808 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 14 04:14:38 np0005486808 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 14 04:14:39 np0005486808 dbus-broker-launch[782]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Oct 14 04:14:39 np0005486808 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 14 04:14:39 np0005486808 systemd[1]: Starting man-db-cache-update.service...
Oct 14 04:14:39 np0005486808 systemd[1]: Reloading.
Oct 14 04:14:39 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:14:39 np0005486808 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 14 04:14:39 np0005486808 systemd[1]: Starting PackageKit Daemon...
Oct 14 04:14:40 np0005486808 systemd[1]: Started PackageKit Daemon.
Oct 14 04:14:40 np0005486808 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 14 04:14:40 np0005486808 systemd[1]: Finished man-db-cache-update.service.
Oct 14 04:14:40 np0005486808 systemd[1]: man-db-cache-update.service: Consumed 1.446s CPU time.
Oct 14 04:14:40 np0005486808 systemd[1]: run-r9e0ea618dc8d4d7096590ebb441c0e01.service: Deactivated successfully.
Oct 14 04:14:41 np0005486808 python3.9[31734]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:14:43 np0005486808 python3.9[32015]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct 14 04:14:44 np0005486808 python3.9[32167]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct 14 04:14:46 np0005486808 python3.9[32320]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:14:47 np0005486808 python3.9[32472]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct 14 04:14:48 np0005486808 python3.9[32624]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:14:49 np0005486808 python3.9[32776]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:14:50 np0005486808 python3.9[32899]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760429688.9470966-227-41015085363952/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=24131129ee325d825c164cc94da98bd455903352 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:14:53 np0005486808 python3.9[33051]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct 14 04:14:54 np0005486808 python3.9[33204]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 14 04:14:54 np0005486808 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 04:14:55 np0005486808 python3.9[33363]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 14 04:14:56 np0005486808 python3.9[33523]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct 14 04:14:57 np0005486808 python3.9[33676]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 14 04:14:58 np0005486808 python3.9[33834]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct 14 04:14:59 np0005486808 python3.9[33986]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 14 04:15:01 np0005486808 python3.9[34139]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:15:02 np0005486808 python3.9[34291]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:15:03 np0005486808 python3.9[34414]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760429701.736112-322-96849784331948/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:15:04 np0005486808 python3.9[34566]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 14 04:15:04 np0005486808 systemd[1]: Starting Load Kernel Modules...
Oct 14 04:15:04 np0005486808 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 14 04:15:04 np0005486808 kernel: Bridge firewalling registered
Oct 14 04:15:04 np0005486808 systemd-modules-load[34570]: Inserted module 'br_netfilter'
Oct 14 04:15:04 np0005486808 systemd[1]: Finished Load Kernel Modules.
Oct 14 04:15:05 np0005486808 python3.9[34725]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:15:06 np0005486808 python3.9[34848]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760429704.7491586-345-245975614006690/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:15:07 np0005486808 python3.9[35000]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 14 04:15:10 np0005486808 dbus-broker-launch[774]: Noticed file-system modification, trigger reload.
Oct 14 04:15:10 np0005486808 dbus-broker-launch[774]: Noticed file-system modification, trigger reload.
Oct 14 04:15:10 np0005486808 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 14 04:15:10 np0005486808 systemd[1]: Starting man-db-cache-update.service...
Oct 14 04:15:10 np0005486808 systemd[1]: Reloading.
Oct 14 04:15:11 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:15:11 np0005486808 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 14 04:15:12 np0005486808 python3.9[36399]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:15:13 np0005486808 python3.9[37257]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct 14 04:15:14 np0005486808 python3.9[38028]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:15:15 np0005486808 python3.9[38931]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:15:15 np0005486808 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 14 04:15:15 np0005486808 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 14 04:15:15 np0005486808 systemd[1]: Finished man-db-cache-update.service.
Oct 14 04:15:15 np0005486808 systemd[1]: man-db-cache-update.service: Consumed 5.771s CPU time.
Oct 14 04:15:15 np0005486808 systemd[1]: run-rd8d8ca042450469580c851a615d1af93.service: Deactivated successfully.
Oct 14 04:15:15 np0005486808 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 14 04:15:17 np0005486808 python3.9[39545]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:15:17 np0005486808 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct 14 04:15:17 np0005486808 systemd[1]: tuned.service: Deactivated successfully.
Oct 14 04:15:17 np0005486808 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct 14 04:15:17 np0005486808 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 14 04:15:17 np0005486808 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 14 04:15:18 np0005486808 python3.9[39707]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct 14 04:15:20 np0005486808 python3.9[39859]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:15:20 np0005486808 systemd[1]: Reloading.
Oct 14 04:15:21 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:15:22 np0005486808 python3.9[40048]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:15:22 np0005486808 systemd[1]: Reloading.
Oct 14 04:15:22 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:15:23 np0005486808 python3.9[40237]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:15:24 np0005486808 python3.9[40390]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:15:24 np0005486808 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Oct 14 04:15:24 np0005486808 python3.9[40543]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:15:27 np0005486808 python3.9[40705]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:15:28 np0005486808 python3.9[40858]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 14 04:15:28 np0005486808 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 14 04:15:28 np0005486808 systemd[1]: Stopped Apply Kernel Variables.
Oct 14 04:15:28 np0005486808 systemd[1]: Stopping Apply Kernel Variables...
Oct 14 04:15:28 np0005486808 systemd[1]: Starting Apply Kernel Variables...
Oct 14 04:15:28 np0005486808 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 14 04:15:28 np0005486808 systemd[1]: Finished Apply Kernel Variables.
Oct 14 04:15:28 np0005486808 systemd[1]: session-9.scope: Deactivated successfully.
Oct 14 04:15:28 np0005486808 systemd[1]: session-9.scope: Consumed 2min 18.196s CPU time.
Oct 14 04:15:28 np0005486808 systemd-logind[799]: Session 9 logged out. Waiting for processes to exit.
Oct 14 04:15:28 np0005486808 systemd-logind[799]: Removed session 9.
Oct 14 04:15:35 np0005486808 systemd-logind[799]: New session 10 of user zuul.
Oct 14 04:15:35 np0005486808 systemd[1]: Started Session 10 of User zuul.
Oct 14 04:15:36 np0005486808 python3.9[41041]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 04:15:37 np0005486808 python3.9[41197]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct 14 04:15:38 np0005486808 python3.9[41350]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 14 04:15:39 np0005486808 python3.9[41508]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 14 04:15:40 np0005486808 python3.9[41668]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 14 04:15:41 np0005486808 python3.9[41752]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 14 04:15:44 np0005486808 python3.9[41915]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 14 04:15:55 np0005486808 kernel: SELinux:  Converting 2723 SID table entries...
Oct 14 04:15:55 np0005486808 kernel: SELinux:  policy capability network_peer_controls=1
Oct 14 04:15:55 np0005486808 kernel: SELinux:  policy capability open_perms=1
Oct 14 04:15:55 np0005486808 kernel: SELinux:  policy capability extended_socket_class=1
Oct 14 04:15:55 np0005486808 kernel: SELinux:  policy capability always_check_network=0
Oct 14 04:15:55 np0005486808 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 14 04:15:55 np0005486808 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 14 04:15:55 np0005486808 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 14 04:15:55 np0005486808 dbus-broker-launch[782]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Oct 14 04:15:55 np0005486808 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Oct 14 04:15:57 np0005486808 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 14 04:15:57 np0005486808 systemd[1]: Starting man-db-cache-update.service...
Oct 14 04:15:57 np0005486808 systemd[1]: Reloading.
Oct 14 04:15:57 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:15:57 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:15:57 np0005486808 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 14 04:15:58 np0005486808 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 14 04:15:58 np0005486808 systemd[1]: Finished man-db-cache-update.service.
Oct 14 04:15:58 np0005486808 systemd[1]: run-rb7a6bda8c9d34c79bb0fccc81deb44ba.service: Deactivated successfully.
Oct 14 04:15:59 np0005486808 python3.9[43018]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 14 04:15:59 np0005486808 systemd[1]: Reloading.
Oct 14 04:15:59 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:15:59 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:15:59 np0005486808 systemd[1]: Starting Open vSwitch Database Unit...
Oct 14 04:15:59 np0005486808 chown[43060]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Oct 14 04:16:00 np0005486808 ovs-ctl[43065]: /etc/openvswitch/conf.db does not exist ... (warning).
Oct 14 04:16:00 np0005486808 ovs-ctl[43065]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Oct 14 04:16:00 np0005486808 ovs-ctl[43065]: Starting ovsdb-server [  OK  ]
Oct 14 04:16:00 np0005486808 ovs-vsctl[43114]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Oct 14 04:16:00 np0005486808 ovs-vsctl[43134]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"bb42e45d-8149-4fcf-a722-37b1def68e20\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Oct 14 04:16:00 np0005486808 ovs-ctl[43065]: Configuring Open vSwitch system IDs [  OK  ]
Oct 14 04:16:00 np0005486808 ovs-ctl[43065]: Enabling remote OVSDB managers [  OK  ]
Oct 14 04:16:00 np0005486808 systemd[1]: Started Open vSwitch Database Unit.
Oct 14 04:16:00 np0005486808 ovs-vsctl[43140]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct 14 04:16:00 np0005486808 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Oct 14 04:16:00 np0005486808 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Oct 14 04:16:00 np0005486808 systemd[1]: Starting Open vSwitch Forwarding Unit...
Oct 14 04:16:00 np0005486808 kernel: openvswitch: Open vSwitch switching datapath
Oct 14 04:16:00 np0005486808 ovs-ctl[43184]: Inserting openvswitch module [  OK  ]
Oct 14 04:16:00 np0005486808 ovs-ctl[43153]: Starting ovs-vswitchd [  OK  ]
Oct 14 04:16:00 np0005486808 ovs-vsctl[43202]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct 14 04:16:00 np0005486808 ovs-ctl[43153]: Enabling remote OVSDB managers [  OK  ]
Oct 14 04:16:00 np0005486808 systemd[1]: Started Open vSwitch Forwarding Unit.
Oct 14 04:16:00 np0005486808 systemd[1]: Starting Open vSwitch...
Oct 14 04:16:00 np0005486808 systemd[1]: Finished Open vSwitch.
Oct 14 04:16:01 np0005486808 python3.9[43353]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 04:16:02 np0005486808 python3.9[43505]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct 14 04:16:03 np0005486808 kernel: SELinux:  Converting 2737 SID table entries...
Oct 14 04:16:03 np0005486808 kernel: SELinux:  policy capability network_peer_controls=1
Oct 14 04:16:03 np0005486808 kernel: SELinux:  policy capability open_perms=1
Oct 14 04:16:03 np0005486808 kernel: SELinux:  policy capability extended_socket_class=1
Oct 14 04:16:03 np0005486808 kernel: SELinux:  policy capability always_check_network=0
Oct 14 04:16:03 np0005486808 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 14 04:16:03 np0005486808 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 14 04:16:03 np0005486808 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 14 04:16:04 np0005486808 python3.9[43660]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 04:16:05 np0005486808 dbus-broker-launch[782]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Oct 14 04:16:06 np0005486808 python3.9[43819]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 14 04:16:08 np0005486808 python3.9[43972]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:16:10 np0005486808 python3.9[44259]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 14 04:16:11 np0005486808 python3.9[44409]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:16:11 np0005486808 python3.9[44563]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 14 04:16:13 np0005486808 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 14 04:16:13 np0005486808 systemd[1]: Starting man-db-cache-update.service...
Oct 14 04:16:13 np0005486808 systemd[1]: Reloading.
Oct 14 04:16:13 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:16:13 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:16:14 np0005486808 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 14 04:16:14 np0005486808 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 14 04:16:14 np0005486808 systemd[1]: Finished man-db-cache-update.service.
Oct 14 04:16:14 np0005486808 systemd[1]: run-rdaab52f180ee46ccb06f082fde570be3.service: Deactivated successfully.
Oct 14 04:16:15 np0005486808 python3.9[44879]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 14 04:16:15 np0005486808 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct 14 04:16:15 np0005486808 systemd[1]: Stopped Network Manager Wait Online.
Oct 14 04:16:15 np0005486808 systemd[1]: Stopping Network Manager Wait Online...
Oct 14 04:16:15 np0005486808 NetworkManager[3987]: <info>  [1760429775.4242] caught SIGTERM, shutting down normally.
Oct 14 04:16:15 np0005486808 systemd[1]: Stopping Network Manager...
Oct 14 04:16:15 np0005486808 NetworkManager[3987]: <info>  [1760429775.4254] dhcp4 (eth0): canceled DHCP transaction
Oct 14 04:16:15 np0005486808 NetworkManager[3987]: <info>  [1760429775.4254] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 14 04:16:15 np0005486808 NetworkManager[3987]: <info>  [1760429775.4254] dhcp4 (eth0): state changed no lease
Oct 14 04:16:15 np0005486808 NetworkManager[3987]: <info>  [1760429775.4257] manager: NetworkManager state is now CONNECTED_SITE
Oct 14 04:16:15 np0005486808 NetworkManager[3987]: <info>  [1760429775.4320] exiting (success)
Oct 14 04:16:15 np0005486808 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 14 04:16:15 np0005486808 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct 14 04:16:15 np0005486808 systemd[1]: Stopped Network Manager.
Oct 14 04:16:15 np0005486808 systemd[1]: NetworkManager.service: Consumed 10.114s CPU time, 4.1M memory peak, read 0B from disk, written 31.0K to disk.
Oct 14 04:16:15 np0005486808 systemd[1]: Starting Network Manager...
Oct 14 04:16:15 np0005486808 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.5088] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:9f631222-b49c-47d2-a156-3705510cf21d)
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.5089] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.5160] manager[0x55d3d38e7090]: monitoring kernel firmware directory '/lib/firmware'.
Oct 14 04:16:15 np0005486808 systemd[1]: Starting Hostname Service...
Oct 14 04:16:15 np0005486808 systemd[1]: Started Hostname Service.
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.6246] hostname: hostname: using hostnamed
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.6246] hostname: static hostname changed from (none) to "compute-0"
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.6255] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.6262] manager[0x55d3d38e7090]: rfkill: Wi-Fi hardware radio set enabled
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.6262] manager[0x55d3d38e7090]: rfkill: WWAN hardware radio set enabled
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.6295] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.6311] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.6312] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.6313] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.6314] manager: Networking is enabled by state file
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.6317] settings: Loaded settings plugin: keyfile (internal)
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.6322] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.6367] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.6382] dhcp: init: Using DHCP client 'internal'
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.6386] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.6395] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.6403] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.6415] device (lo): Activation: starting connection 'lo' (047addc7-63d3-4a5b-85cf-f898174b7a4c)
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.6425] device (eth0): carrier: link connected
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.6432] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.6438] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.6439] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.6448] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.6457] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.6467] device (eth1): carrier: link connected
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.6474] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.6480] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (bc870d37-615b-5421-ae37-b2ab437e826e) (indicated)
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.6481] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.6488] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.6498] device (eth1): Activation: starting connection 'ci-private-network' (bc870d37-615b-5421-ae37-b2ab437e826e)
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.6508] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 14 04:16:15 np0005486808 systemd[1]: Started Network Manager.
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.6863] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.6869] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.6874] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.6878] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.6884] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.6888] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.6893] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.6902] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.6914] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.6918] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.6934] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.6966] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 14 04:16:15 np0005486808 systemd[1]: Starting Network Manager Wait Online...
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.6986] dhcp4 (eth0): state changed new lease, address=38.102.83.202
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.6991] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.7012] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.7103] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.7113] device (lo): Activation: successful, device activated.
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.7122] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.7130] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.7133] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.7138] manager: NetworkManager state is now CONNECTED_LOCAL
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.7143] device (eth1): Activation: successful, device activated.
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.7163] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.7165] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.7170] manager: NetworkManager state is now CONNECTED_SITE
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.7175] device (eth0): Activation: successful, device activated.
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.7182] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 14 04:16:15 np0005486808 NetworkManager[44885]: <info>  [1760429775.7186] manager: startup complete
Oct 14 04:16:15 np0005486808 systemd[1]: Finished Network Manager Wait Online.
Oct 14 04:16:16 np0005486808 python3.9[45105]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 14 04:16:20 np0005486808 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 14 04:16:20 np0005486808 systemd[1]: Starting man-db-cache-update.service...
Oct 14 04:16:20 np0005486808 systemd[1]: Reloading.
Oct 14 04:16:21 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:16:21 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:16:21 np0005486808 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 14 04:16:21 np0005486808 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 14 04:16:21 np0005486808 systemd[1]: Finished man-db-cache-update.service.
Oct 14 04:16:21 np0005486808 systemd[1]: run-rc033b2e95c0c4649a6d7e3ac55596a83.service: Deactivated successfully.
Oct 14 04:16:23 np0005486808 python3.9[45570]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:16:24 np0005486808 python3.9[45722]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:16:24 np0005486808 python3.9[45876]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:16:25 np0005486808 python3.9[46028]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:16:25 np0005486808 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 14 04:16:26 np0005486808 python3.9[46180]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:16:27 np0005486808 python3.9[46332]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:16:28 np0005486808 python3.9[46484]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:16:28 np0005486808 python3.9[46607]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1760429787.5385358-229-220793846710372/.source _original_basename=.lfeaph4k follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:16:29 np0005486808 python3.9[46759]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:16:30 np0005486808 python3.9[46911]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Oct 14 04:16:31 np0005486808 python3.9[47063]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:16:34 np0005486808 python3.9[47490]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Oct 14 04:16:35 np0005486808 ansible-async_wrapper.py[47665]: Invoked with j660913395303 300 /home/zuul/.ansible/tmp/ansible-tmp-1760429794.7541473-295-80754545344334/AnsiballZ_edpm_os_net_config.py _
Oct 14 04:16:35 np0005486808 ansible-async_wrapper.py[47668]: Starting module and watcher
Oct 14 04:16:35 np0005486808 ansible-async_wrapper.py[47668]: Start watching 47669 (300)
Oct 14 04:16:35 np0005486808 ansible-async_wrapper.py[47669]: Start module (47669)
Oct 14 04:16:35 np0005486808 ansible-async_wrapper.py[47665]: Return async_wrapper task started.
Oct 14 04:16:36 np0005486808 python3.9[47670]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Oct 14 04:16:36 np0005486808 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Oct 14 04:16:36 np0005486808 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Oct 14 04:16:36 np0005486808 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Oct 14 04:16:36 np0005486808 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Oct 14 04:16:36 np0005486808 kernel: cfg80211: failed to load regulatory.db
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.0593] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47671 uid=0 result="success"
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.0611] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47671 uid=0 result="success"
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1232] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1234] audit: op="connection-add" uuid="61b2bb8b-d909-432b-bdb0-7b12bb14ea46" name="br-ex-br" pid=47671 uid=0 result="success"
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1253] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1254] audit: op="connection-add" uuid="4639acf0-9998-4264-915c-a2369dfcd16b" name="br-ex-port" pid=47671 uid=0 result="success"
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1268] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1269] audit: op="connection-add" uuid="47107c94-616d-40d9-8aa7-cb20638d66c5" name="eth1-port" pid=47671 uid=0 result="success"
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1285] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1286] audit: op="connection-add" uuid="1dfe247b-d3a9-465f-bfa1-8af40ad86b65" name="vlan20-port" pid=47671 uid=0 result="success"
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1300] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1302] audit: op="connection-add" uuid="1bb500e5-b837-477c-86d7-abea6e0f780d" name="vlan21-port" pid=47671 uid=0 result="success"
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1315] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1316] audit: op="connection-add" uuid="8009e833-5f8d-4652-a430-781dc3688e3d" name="vlan22-port" pid=47671 uid=0 result="success"
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1331] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1332] audit: op="connection-add" uuid="8d48b584-cf75-4b58-a9d8-436da7589a51" name="vlan23-port" pid=47671 uid=0 result="success"
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1356] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv4.dhcp-timeout,ipv4.dhcp-client-id,802-3-ethernet.mtu,connection.timestamp,connection.autoconnect-priority,ipv6.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode" pid=47671 uid=0 result="success"
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1376] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1377] audit: op="connection-add" uuid="e98a5fed-8822-4dfa-ad60-76dd63d1597b" name="br-ex-if" pid=47671 uid=0 result="success"
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1417] audit: op="connection-update" uuid="bc870d37-615b-5421-ae37-b2ab437e826e" name="ci-private-network" args="ovs-external-ids.data,ipv4.addresses,ipv4.never-default,ipv4.routing-rules,ipv4.dns,ipv4.method,ipv4.routes,ovs-interface.type,connection.controller,connection.master,connection.timestamp,connection.slave-type,connection.port-type,ipv6.addresses,ipv6.method,ipv6.routing-rules,ipv6.dns,ipv6.addr-gen-mode,ipv6.routes" pid=47671 uid=0 result="success"
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1437] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1438] audit: op="connection-add" uuid="3e377e31-94f8-4b90-b64c-6e4ac4c60aa5" name="vlan20-if" pid=47671 uid=0 result="success"
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1457] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1458] audit: op="connection-add" uuid="78667420-cf1c-40b3-88e5-f8b0dd4bd731" name="vlan21-if" pid=47671 uid=0 result="success"
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1477] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1478] audit: op="connection-add" uuid="d1292836-92b6-4a66-a1d2-2aaf9d96d192" name="vlan22-if" pid=47671 uid=0 result="success"
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1499] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1500] audit: op="connection-add" uuid="5f835f8e-ee27-4a10-b5eb-f7eb4766cd73" name="vlan23-if" pid=47671 uid=0 result="success"
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1515] audit: op="connection-delete" uuid="4f8df825-5230-3ccd-ac25-c891458e3587" name="Wired connection 1" pid=47671 uid=0 result="success"
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1530] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1539] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1543] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (61b2bb8b-d909-432b-bdb0-7b12bb14ea46)
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1544] audit: op="connection-activate" uuid="61b2bb8b-d909-432b-bdb0-7b12bb14ea46" name="br-ex-br" pid=47671 uid=0 result="success"
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1545] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1551] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1555] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (4639acf0-9998-4264-915c-a2369dfcd16b)
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1556] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1562] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1565] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (47107c94-616d-40d9-8aa7-cb20638d66c5)
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1566] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1572] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1576] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (1dfe247b-d3a9-465f-bfa1-8af40ad86b65)
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1577] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1583] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1586] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (1bb500e5-b837-477c-86d7-abea6e0f780d)
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1588] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1594] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1598] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (8009e833-5f8d-4652-a430-781dc3688e3d)
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1600] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1606] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1610] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (8d48b584-cf75-4b58-a9d8-436da7589a51)
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1611] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1613] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1614] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1620] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1624] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1627] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (e98a5fed-8822-4dfa-ad60-76dd63d1597b)
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1628] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1632] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1633] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1634] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1635] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1645] device (eth1): disconnecting for new activation request.
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1646] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1649] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1651] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1653] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1656] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1661] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1666] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (3e377e31-94f8-4b90-b64c-6e4ac4c60aa5)
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1666] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1669] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1672] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1673] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1676] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1682] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1685] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (78667420-cf1c-40b3-88e5-f8b0dd4bd731)
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1686] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1688] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1689] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1691] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1693] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1696] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1700] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (d1292836-92b6-4a66-a1d2-2aaf9d96d192)
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1701] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1703] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1705] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1705] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1708] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1712] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1715] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (5f835f8e-ee27-4a10-b5eb-f7eb4766cd73)
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1716] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1718] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1720] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1721] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1723] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1735] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv4.dhcp-timeout,ipv4.dhcp-client-id,802-3-ethernet.mtu,connection.autoconnect-priority,ipv6.method,ipv6.addr-gen-mode" pid=47671 uid=0 result="success"
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1736] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1739] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1741] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1747] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1751] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1754] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1757] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1759] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1764] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1768] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1771] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 kernel: ovs-system: entered promiscuous mode
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1782] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1791] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1797] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1801] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1803] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1811] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1816] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 kernel: Timeout policy base is empty
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1819] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1821] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1827] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 systemd-udevd[47675]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1832] dhcp4 (eth0): canceled DHCP transaction
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1832] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1832] dhcp4 (eth0): state changed no lease
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1834] dhcp4 (eth0): activation: beginning transaction (no timeout)
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1847] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1850] audit: op="device-reapply" interface="eth1" ifindex=3 pid=47671 uid=0 result="fail" reason="Device is not activated"
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1897] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Oct 14 04:16:38 np0005486808 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1903] dhcp4 (eth0): state changed new lease, address=38.102.83.202
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1973] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1986] device (eth1): disconnecting for new activation request.
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1987] audit: op="connection-activate" uuid="bc870d37-615b-5421-ae37-b2ab437e826e" name="ci-private-network" pid=47671 uid=0 result="success"
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1989] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.1998] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Oct 14 04:16:38 np0005486808 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2025] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2141] device (eth1): Activation: starting connection 'ci-private-network' (bc870d37-615b-5421-ae37-b2ab437e826e)
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2145] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2169] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2177] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2184] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2194] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2205] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47671 uid=0 result="success"
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2206] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2209] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2213] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2215] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2218] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2221] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2229] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2242] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2249] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2255] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2263] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2268] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2275] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2281] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2288] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2294] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2300] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2306] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2312] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2320] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2326] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2388] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 kernel: br-ex: entered promiscuous mode
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2391] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2397] device (eth1): Activation: successful, device activated.
Oct 14 04:16:38 np0005486808 kernel: vlan22: entered promiscuous mode
Oct 14 04:16:38 np0005486808 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2549] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2571] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2590] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2592] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2597] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 14 04:16:38 np0005486808 systemd-udevd[47676]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 04:16:38 np0005486808 kernel: vlan20: entered promiscuous mode
Oct 14 04:16:38 np0005486808 kernel: vlan23: entered promiscuous mode
Oct 14 04:16:38 np0005486808 systemd-udevd[47677]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2767] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2779] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2816] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2822] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2824] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2834] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2857] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2913] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2914] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2917] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2926] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 14 04:16:38 np0005486808 kernel: vlan21: entered promiscuous mode
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2952] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2993] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.2994] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.3000] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.3051] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.3061] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.3103] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.3104] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 14 04:16:38 np0005486808 NetworkManager[44885]: <info>  [1760429798.3109] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 14 04:16:39 np0005486808 NetworkManager[44885]: <info>  [1760429799.4426] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47671 uid=0 result="success"
Oct 14 04:16:39 np0005486808 NetworkManager[44885]: <info>  [1760429799.6253] checkpoint[0x55d3d38bd950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Oct 14 04:16:39 np0005486808 NetworkManager[44885]: <info>  [1760429799.6257] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47671 uid=0 result="success"
Oct 14 04:16:39 np0005486808 python3.9[48032]: ansible-ansible.legacy.async_status Invoked with jid=j660913395303.47665 mode=status _async_dir=/root/.ansible_async
Oct 14 04:16:39 np0005486808 NetworkManager[44885]: <info>  [1760429799.9903] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47671 uid=0 result="success"
Oct 14 04:16:39 np0005486808 NetworkManager[44885]: <info>  [1760429799.9917] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47671 uid=0 result="success"
Oct 14 04:16:40 np0005486808 NetworkManager[44885]: <info>  [1760429800.2689] audit: op="networking-control" arg="global-dns-configuration" pid=47671 uid=0 result="success"
Oct 14 04:16:40 np0005486808 NetworkManager[44885]: <info>  [1760429800.2723] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Oct 14 04:16:40 np0005486808 NetworkManager[44885]: <info>  [1760429800.2750] audit: op="networking-control" arg="global-dns-configuration" pid=47671 uid=0 result="success"
Oct 14 04:16:40 np0005486808 NetworkManager[44885]: <info>  [1760429800.3247] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47671 uid=0 result="success"
Oct 14 04:16:40 np0005486808 NetworkManager[44885]: <info>  [1760429800.5378] checkpoint[0x55d3d38bda20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Oct 14 04:16:40 np0005486808 NetworkManager[44885]: <info>  [1760429800.5383] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47671 uid=0 result="success"
Oct 14 04:16:40 np0005486808 ansible-async_wrapper.py[47669]: Module complete (47669)
Oct 14 04:16:40 np0005486808 ansible-async_wrapper.py[47668]: Done in kid B.
Oct 14 04:16:43 np0005486808 python3.9[48138]: ansible-ansible.legacy.async_status Invoked with jid=j660913395303.47665 mode=status _async_dir=/root/.ansible_async
Oct 14 04:16:44 np0005486808 python3.9[48237]: ansible-ansible.legacy.async_status Invoked with jid=j660913395303.47665 mode=cleanup _async_dir=/root/.ansible_async
Oct 14 04:16:45 np0005486808 python3.9[48389]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:16:45 np0005486808 python3.9[48512]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760429804.404302-322-26314021214375/.source.returncode _original_basename=.x51b38jt follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:16:45 np0005486808 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 14 04:16:46 np0005486808 python3.9[48666]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:16:47 np0005486808 python3.9[48790]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760429805.924758-338-84063476518580/.source.cfg _original_basename=.vuasa1fv follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:16:48 np0005486808 python3.9[48942]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 14 04:16:48 np0005486808 systemd[1]: Reloading Network Manager...
Oct 14 04:16:48 np0005486808 NetworkManager[44885]: <info>  [1760429808.3070] audit: op="reload" arg="0" pid=48946 uid=0 result="success"
Oct 14 04:16:48 np0005486808 NetworkManager[44885]: <info>  [1760429808.3081] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Oct 14 04:16:48 np0005486808 systemd[1]: Reloaded Network Manager.
Oct 14 04:16:48 np0005486808 systemd[1]: session-10.scope: Deactivated successfully.
Oct 14 04:16:48 np0005486808 systemd[1]: session-10.scope: Consumed 54.759s CPU time.
Oct 14 04:16:48 np0005486808 systemd-logind[799]: Session 10 logged out. Waiting for processes to exit.
Oct 14 04:16:48 np0005486808 systemd-logind[799]: Removed session 10.
Oct 14 04:16:54 np0005486808 systemd-logind[799]: New session 11 of user zuul.
Oct 14 04:16:54 np0005486808 systemd[1]: Started Session 11 of User zuul.
Oct 14 04:16:55 np0005486808 python3.9[49130]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 04:16:56 np0005486808 python3.9[49285]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 14 04:16:57 np0005486808 python3.9[49478]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:16:58 np0005486808 systemd[1]: session-11.scope: Deactivated successfully.
Oct 14 04:16:58 np0005486808 systemd[1]: session-11.scope: Consumed 2.618s CPU time.
Oct 14 04:16:58 np0005486808 systemd-logind[799]: Session 11 logged out. Waiting for processes to exit.
Oct 14 04:16:58 np0005486808 systemd-logind[799]: Removed session 11.
Oct 14 04:16:58 np0005486808 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 14 04:17:03 np0005486808 systemd-logind[799]: New session 12 of user zuul.
Oct 14 04:17:03 np0005486808 systemd[1]: Started Session 12 of User zuul.
Oct 14 04:17:04 np0005486808 python3.9[49660]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 04:17:05 np0005486808 python3.9[49814]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 04:17:06 np0005486808 python3.9[49971]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 14 04:17:07 np0005486808 python3.9[50055]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 14 04:17:10 np0005486808 python3.9[50209]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 14 04:17:11 np0005486808 python3.9[50404]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:17:12 np0005486808 python3.9[50556]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:17:12 np0005486808 systemd[1]: var-lib-containers-storage-overlay-compat3488454926-merged.mount: Deactivated successfully.
Oct 14 04:17:12 np0005486808 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck4283341148-merged.mount: Deactivated successfully.
Oct 14 04:17:12 np0005486808 podman[50557]: 2025-10-14 08:17:12.73849417 +0000 UTC m=+0.084400689 system refresh
Oct 14 04:17:13 np0005486808 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 14 04:17:13 np0005486808 python3.9[50719]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:17:14 np0005486808 python3.9[50842]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760429833.0132737-79-76454913121833/.source.json follow=False _original_basename=podman_network_config.j2 checksum=5492a837168c5f572d6559497a520e3da3e3ba0a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:17:15 np0005486808 python3.9[50994]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:17:16 np0005486808 python3.9[51117]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760429834.9586022-94-189352622897952/.source.conf follow=False _original_basename=registries.conf.j2 checksum=88781afee5b5da15b4e5a77559a69fa53d49a457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:17:17 np0005486808 python3.9[51269]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:17:17 np0005486808 python3.9[51421]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:17:18 np0005486808 python3.9[51573]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:17:19 np0005486808 python3.9[51725]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:17:20 np0005486808 python3.9[51877]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 14 04:17:22 np0005486808 python3.9[52030]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 04:17:23 np0005486808 python3.9[52184]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:17:24 np0005486808 python3.9[52336]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:17:25 np0005486808 python3.9[52488]: ansible-service_facts Invoked
Oct 14 04:17:25 np0005486808 network[52505]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 14 04:17:25 np0005486808 network[52506]: 'network-scripts' will be removed from distribution in near future.
Oct 14 04:17:25 np0005486808 network[52507]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 14 04:17:31 np0005486808 python3.9[52961]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 14 04:17:34 np0005486808 python3.9[53114]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct 14 04:17:35 np0005486808 python3.9[53266]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:17:36 np0005486808 python3.9[53391]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760429854.926027-226-210783952394195/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:17:37 np0005486808 python3.9[53545]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:17:37 np0005486808 python3.9[53670]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760429856.5104156-241-101197621256578/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:17:39 np0005486808 python3.9[53824]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:17:40 np0005486808 python3.9[53978]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 14 04:17:41 np0005486808 python3.9[54062]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:17:42 np0005486808 python3.9[54216]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 14 04:17:43 np0005486808 python3.9[54300]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 14 04:17:43 np0005486808 chronyd[809]: chronyd exiting
Oct 14 04:17:43 np0005486808 systemd[1]: Stopping NTP client/server...
Oct 14 04:17:43 np0005486808 systemd[1]: chronyd.service: Deactivated successfully.
Oct 14 04:17:43 np0005486808 systemd[1]: Stopped NTP client/server.
Oct 14 04:17:43 np0005486808 systemd[1]: Starting NTP client/server...
Oct 14 04:17:43 np0005486808 chronyd[54308]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 14 04:17:43 np0005486808 chronyd[54308]: Frequency -24.266 +/- 0.223 ppm read from /var/lib/chrony/drift
Oct 14 04:17:43 np0005486808 chronyd[54308]: Loaded seccomp filter (level 2)
Oct 14 04:17:43 np0005486808 systemd[1]: Started NTP client/server.
Oct 14 04:17:44 np0005486808 systemd[1]: session-12.scope: Deactivated successfully.
Oct 14 04:17:44 np0005486808 systemd[1]: session-12.scope: Consumed 29.486s CPU time.
Oct 14 04:17:44 np0005486808 systemd-logind[799]: Session 12 logged out. Waiting for processes to exit.
Oct 14 04:17:44 np0005486808 systemd-logind[799]: Removed session 12.
Oct 14 04:17:50 np0005486808 systemd-logind[799]: New session 13 of user zuul.
Oct 14 04:17:50 np0005486808 systemd[1]: Started Session 13 of User zuul.
Oct 14 04:17:51 np0005486808 python3.9[54489]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:17:52 np0005486808 python3.9[54641]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:17:52 np0005486808 python3.9[54764]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760429871.348423-34-36376575687956/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:17:53 np0005486808 systemd[1]: session-13.scope: Deactivated successfully.
Oct 14 04:17:53 np0005486808 systemd[1]: session-13.scope: Consumed 2.058s CPU time.
Oct 14 04:17:53 np0005486808 systemd-logind[799]: Session 13 logged out. Waiting for processes to exit.
Oct 14 04:17:53 np0005486808 systemd-logind[799]: Removed session 13.
Oct 14 04:17:59 np0005486808 systemd-logind[799]: New session 14 of user zuul.
Oct 14 04:17:59 np0005486808 systemd[1]: Started Session 14 of User zuul.
Oct 14 04:18:00 np0005486808 python3.9[54942]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 04:18:01 np0005486808 python3.9[55098]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:18:02 np0005486808 python3.9[55273]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:18:03 np0005486808 python3.9[55396]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1760429881.9004865-41-118446680427938/.source.json _original_basename=.2eso1bd7 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:18:04 np0005486808 python3.9[55548]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:18:05 np0005486808 python3.9[55671]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760429884.107456-64-266017790189199/.source _original_basename=.l382pnr7 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:18:06 np0005486808 python3.9[55823]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:18:07 np0005486808 python3.9[55975]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:18:07 np0005486808 python3.9[56098]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760429886.4835887-88-188787085370333/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:18:08 np0005486808 python3.9[56250]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:18:09 np0005486808 python3.9[56373]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760429887.9218683-88-73313073368629/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:18:09 np0005486808 python3.9[56525]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:18:10 np0005486808 python3.9[56677]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:18:11 np0005486808 python3.9[56800]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760429890.1287584-125-33394464632517/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:18:12 np0005486808 python3.9[56952]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:18:12 np0005486808 python3.9[57075]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760429891.5698056-140-61983517363940/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:18:14 np0005486808 python3.9[57227]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:18:14 np0005486808 systemd[1]: Reloading.
Oct 14 04:18:14 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:18:14 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:18:14 np0005486808 systemd[1]: Reloading.
Oct 14 04:18:14 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:18:14 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:18:14 np0005486808 systemd[1]: Starting EDPM Container Shutdown...
Oct 14 04:18:14 np0005486808 systemd[1]: Finished EDPM Container Shutdown.
Oct 14 04:18:15 np0005486808 python3.9[57456]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:18:16 np0005486808 python3.9[57579]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760429895.048379-163-147750087297663/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:18:17 np0005486808 python3.9[57731]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:18:17 np0005486808 python3.9[57854]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760429896.592511-178-248328794024402/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:18:19 np0005486808 python3.9[58006]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:18:19 np0005486808 systemd[1]: Reloading.
Oct 14 04:18:19 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:18:19 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:18:19 np0005486808 systemd[1]: Reloading.
Oct 14 04:18:19 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:18:19 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:18:19 np0005486808 systemd[1]: Starting Create netns directory...
Oct 14 04:18:19 np0005486808 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 14 04:18:19 np0005486808 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 14 04:18:19 np0005486808 systemd[1]: Finished Create netns directory.
Oct 14 04:18:20 np0005486808 python3.9[58231]: ansible-ansible.builtin.service_facts Invoked
Oct 14 04:18:20 np0005486808 network[58248]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 14 04:18:20 np0005486808 network[58249]: 'network-scripts' will be removed from distribution in near future.
Oct 14 04:18:20 np0005486808 network[58250]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 14 04:18:24 np0005486808 python3.9[58514]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:18:24 np0005486808 systemd[1]: Reloading.
Oct 14 04:18:25 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:18:25 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:18:25 np0005486808 systemd[1]: Stopping IPv4 firewall with iptables...
Oct 14 04:18:25 np0005486808 iptables.init[58554]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Oct 14 04:18:25 np0005486808 iptables.init[58554]: iptables: Flushing firewall rules: [  OK  ]
Oct 14 04:18:25 np0005486808 systemd[1]: iptables.service: Deactivated successfully.
Oct 14 04:18:25 np0005486808 systemd[1]: Stopped IPv4 firewall with iptables.
Oct 14 04:18:26 np0005486808 python3.9[58752]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:18:27 np0005486808 python3.9[58906]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:18:27 np0005486808 systemd[1]: Reloading.
Oct 14 04:18:27 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:18:27 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:18:27 np0005486808 systemd[1]: Starting Netfilter Tables...
Oct 14 04:18:27 np0005486808 systemd[1]: Finished Netfilter Tables.
Oct 14 04:18:28 np0005486808 python3.9[59097]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:18:30 np0005486808 python3.9[59250]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:18:30 np0005486808 python3.9[59375]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760429909.382458-247-106786729713771/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=4729b6ffc5b555fa142bf0b6e6dc15609cb89a22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:18:31 np0005486808 python3.9[59526]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 14 04:18:57 np0005486808 systemd[1]: session-14.scope: Deactivated successfully.
Oct 14 04:18:57 np0005486808 systemd[1]: session-14.scope: Consumed 22.948s CPU time.
Oct 14 04:18:57 np0005486808 systemd-logind[799]: Session 14 logged out. Waiting for processes to exit.
Oct 14 04:18:57 np0005486808 systemd-logind[799]: Removed session 14.
Oct 14 04:19:09 np0005486808 systemd-logind[799]: New session 15 of user zuul.
Oct 14 04:19:09 np0005486808 systemd[1]: Started Session 15 of User zuul.
Oct 14 04:19:11 np0005486808 python3.9[59719]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 04:19:12 np0005486808 python3.9[59875]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:19:13 np0005486808 python3.9[60050]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:19:13 np0005486808 python3.9[60128]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.vdts14rn recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:19:14 np0005486808 python3.9[60280]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:19:15 np0005486808 python3.9[60358]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.oqckljji recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:19:16 np0005486808 python3.9[60510]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:19:17 np0005486808 python3.9[60662]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:19:17 np0005486808 python3.9[60740]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:19:18 np0005486808 python3.9[60892]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:19:19 np0005486808 python3.9[60970]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:19:19 np0005486808 python3.9[61122]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:19:20 np0005486808 python3.9[61274]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:19:21 np0005486808 python3.9[61352]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:19:22 np0005486808 python3.9[61504]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:19:22 np0005486808 python3.9[61582]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:19:24 np0005486808 python3.9[61734]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:19:24 np0005486808 systemd[1]: Reloading.
Oct 14 04:19:24 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:19:24 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:19:25 np0005486808 python3.9[61923]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:19:25 np0005486808 python3.9[62001]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:19:26 np0005486808 python3.9[62153]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:19:27 np0005486808 python3.9[62231]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:19:28 np0005486808 python3.9[62383]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:19:28 np0005486808 systemd[1]: Reloading.
Oct 14 04:19:28 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:19:28 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:19:28 np0005486808 systemd[1]: Starting Create netns directory...
Oct 14 04:19:28 np0005486808 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 14 04:19:28 np0005486808 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 14 04:19:28 np0005486808 systemd[1]: Finished Create netns directory.
Oct 14 04:19:29 np0005486808 python3.9[62574]: ansible-ansible.builtin.service_facts Invoked
Oct 14 04:19:29 np0005486808 network[62591]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 14 04:19:29 np0005486808 network[62592]: 'network-scripts' will be removed from distribution in near future.
Oct 14 04:19:29 np0005486808 network[62593]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 14 04:19:34 np0005486808 python3.9[62856]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:19:34 np0005486808 python3.9[62934]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:19:35 np0005486808 python3.9[63086]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:19:36 np0005486808 python3.9[63238]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:19:37 np0005486808 python3.9[63361]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760429975.7667031-216-121753460891768/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:19:38 np0005486808 python3.9[63513]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 14 04:19:38 np0005486808 systemd[1]: Starting Time & Date Service...
Oct 14 04:19:38 np0005486808 systemd[1]: Started Time & Date Service.
Oct 14 04:19:39 np0005486808 python3.9[63669]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:19:40 np0005486808 python3.9[63821]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:19:40 np0005486808 python3.9[63944]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760429979.6680214-251-30834841210494/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:19:41 np0005486808 python3.9[64096]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:19:42 np0005486808 python3.9[64219]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760429981.1077712-266-256518454844029/.source.yaml _original_basename=.8cejq45y follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:19:43 np0005486808 python3.9[64371]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:19:43 np0005486808 python3.9[64494]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760429982.5615184-281-104946734960579/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:19:44 np0005486808 python3.9[64646]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:19:45 np0005486808 python3.9[64799]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:19:46 np0005486808 python3[64952]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 14 04:19:47 np0005486808 python3.9[65104]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:19:48 np0005486808 python3.9[65227]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760429987.2398684-320-144398530673955/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:19:49 np0005486808 python3.9[65379]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:19:50 np0005486808 python3.9[65502]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760429988.8161979-335-180976290865589/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:19:50 np0005486808 python3.9[65654]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:19:51 np0005486808 python3.9[65777]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760429990.3739488-350-57323697510630/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:19:52 np0005486808 python3.9[65929]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:19:53 np0005486808 python3.9[66052]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760429991.9889054-365-270550420174483/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:19:53 np0005486808 chronyd[54308]: Selected source 162.159.200.1 (pool.ntp.org)
Oct 14 04:19:54 np0005486808 python3.9[66204]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:19:54 np0005486808 python3.9[66327]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760429993.555697-380-67425853704895/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:19:55 np0005486808 python3.9[66479]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:19:56 np0005486808 python3.9[66631]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:19:57 np0005486808 python3.9[66790]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
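The blockinfile task above wires the generated files into the nftables entry point. Decoding the module's `block=` argument (`#012` is the journal's escape for a newline), the managed block written to /etc/sysconfig/nftables.conf is:

```
# BEGIN ANSIBLE MANAGED BLOCK
include "/etc/nftables/iptables.nft"
include "/etc/nftables/edpm-chains.nft"
include "/etc/nftables/edpm-rules.nft"
include "/etc/nftables/edpm-jumps.nft"
# END ANSIBLE MANAGED BLOCK
```

Note the `validate=nft -c -f %s` argument: the edited file is syntax-checked by nft before it replaces the original, so a bad include can never land in the boot-time config.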
Oct 14 04:19:58 np0005486808 python3.9[66943]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:19:59 np0005486808 python3.9[67095]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:20:00 np0005486808 python3.9[67247]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 14 04:20:01 np0005486808 python3.9[67400]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
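Both ansible.posix.mount tasks run with `state=mounted` and `boot=True`, which mounts the filesystem immediately and persists it. Given the module arguments logged (src=none, fstype=hugetlbfs, dump=0, passno=0), the resulting /etc/fstab entries should look approximately like:

```
none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
none /dev/hugepages2M hugetlbfs pagesize=2M 0 0
```

The two mount points give workloads a choice of 1 GiB or 2 MiB huge pages via the `pagesize=` mount option.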
Oct 14 04:20:01 np0005486808 systemd[1]: session-15.scope: Deactivated successfully.
Oct 14 04:20:01 np0005486808 systemd[1]: session-15.scope: Consumed 39.410s CPU time.
Oct 14 04:20:01 np0005486808 systemd-logind[799]: Session 15 logged out. Waiting for processes to exit.
Oct 14 04:20:01 np0005486808 systemd-logind[799]: Removed session 15.
Oct 14 04:20:07 np0005486808 systemd-logind[799]: New session 16 of user zuul.
Oct 14 04:20:07 np0005486808 systemd[1]: Started Session 16 of User zuul.
Oct 14 04:20:08 np0005486808 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 14 04:20:08 np0005486808 python3.9[67581]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct 14 04:20:09 np0005486808 python3.9[67735]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:20:10 np0005486808 python3.9[67887]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 04:20:11 np0005486808 python3.9[68039]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCbD0jeEdAD28KoydcVu5yA2H4y36YscXuUgJaE3C/cEyfj4GDxnbf6fjuLrWgt0V1vuSYy9h4+9hUAzbmwcp59dIbFgOBDi7Bd44dJmTnHc56+Gnm0n+QYyTlF7uo0MQyoSEiBgGUX1jLr5WCLgew6anvUJgKbZ89ahpP/VzaNN5MqjpzhIwuQh8R/LYkEe8le9z1iCSYwIDu6nyLPqTLnI07UXsiLmIdlR/PnW7KvchK4X0p3aDBVgL+ttMjjnTNX5UESVPLtz4RdNGoM3LEepArlLDeWxxYcZWeZrolMjKejO/0JTJCFTN5YZXS6JKZPNtpVGnFqSngZZhBddZDR5eUH5YkxVN26phcqfD0M10RmKJWweIAVpBAajZzX7a3/hnPvqvDvknmrHuP3oBFG0g6ob+DFSJ4J8P0tLW2+9tr5WSKdFrif1SV4IZ7GeMPw2Kmn1M9lSY6jBKuijfHcZiLbXcb1HDAbtzJyg5ZiC4zFFv1ec6YwwzGBEZSvQaU=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOtM6RG1AF74QDB/MlmJuKRHUwVo+xUupUBaKPmgtdyz#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDelpe3LIZkyZDfaefVt9ehyOEDK8jUgacMl5u0ZTDVBAEIWtSCf4OV4QXavOPHhTEl/3/bcmrKz7fkPs3Dzpks=#012 create=True mode=0644 path=/tmp/ansible.sapp2_k6 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:20:12 np0005486808 python3.9[68191]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.sapp2_k6' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:20:13 np0005486808 python3.9[68345]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.sapp2_k6 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
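The three tasks above (tempfile, blockinfile, shell `cat`, then cleanup) implement an atomic-ish rollout of /etc/ssh/ssh_known_hosts. A minimal sketch of the same pattern, using a scratch path instead of the real target and an abbreviated host key:

```shell
# Assemble the managed block in a tempfile, then overwrite the target with
# cat (not mv): the destination keeps its existing inode, permissions, and
# SELinux context. TARGET is /etc/ssh/ssh_known_hosts on the node.
TARGET=/tmp/ssh_known_hosts.demo
TMP="$(mktemp /tmp/ansible.XXXXXX)"
{
  echo "# BEGIN ANSIBLE MANAGED BLOCK"
  echo "compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAA...(key abbreviated)"
  echo "# END ANSIBLE MANAGED BLOCK"
} > "$TMP"
cat "$TMP" > "$TARGET"
rm -f "$TMP"
```

Using `cat >` rather than a rename is why the play needs the temporary file at all: a `copy`/`mv` would replace the inode and could drop the file's SELinux label on an enforcing host.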
Oct 14 04:20:14 np0005486808 systemd[1]: session-16.scope: Deactivated successfully.
Oct 14 04:20:14 np0005486808 systemd[1]: session-16.scope: Consumed 3.912s CPU time.
Oct 14 04:20:14 np0005486808 systemd-logind[799]: Session 16 logged out. Waiting for processes to exit.
Oct 14 04:20:14 np0005486808 systemd-logind[799]: Removed session 16.
Oct 14 04:20:19 np0005486808 systemd-logind[799]: New session 17 of user zuul.
Oct 14 04:20:19 np0005486808 systemd[1]: Started Session 17 of User zuul.
Oct 14 04:20:20 np0005486808 python3.9[68523]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 04:20:22 np0005486808 python3.9[68679]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 14 04:20:23 np0005486808 python3.9[68833]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 14 04:20:24 np0005486808 python3.9[68986]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:20:25 np0005486808 python3.9[69139]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:20:26 np0005486808 python3.9[69293]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:20:27 np0005486808 python3.9[69448]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
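The last four tasks show the runtime apply path: chains are loaded unconditionally, but the flush/rules/jump pipeline runs only because /etc/nftables/edpm-rules.nft.changed exists (it was touched at 04:19:55 after the rules file changed), and the stamp is removed afterwards. A sketch of that stamp-file pattern with the privileged nft call stubbed out:

```shell
# Stamp-file pattern from the log: the expensive reload runs only when an
# earlier task marked the rules as changed, making re-runs of the play cheap.
# Production stamp: /etc/nftables/edpm-rules.nft.changed
STAMP=/tmp/edpm-rules.nft.changed.demo
touch "$STAMP"                      # done by the copy task when rules change
if [ -e "$STAMP" ]; then
    # on the node this step is:
    #   cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft \
    #       /etc/nftables/edpm-update-jumps.nft | nft -f -
    echo "reloading nftables ruleset"
    rm -f "$STAMP"                  # clear the stamp so the next run skips
fi
```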
Oct 14 04:20:27 np0005486808 systemd[1]: session-17.scope: Deactivated successfully.
Oct 14 04:20:27 np0005486808 systemd[1]: session-17.scope: Consumed 5.358s CPU time.
Oct 14 04:20:27 np0005486808 systemd-logind[799]: Session 17 logged out. Waiting for processes to exit.
Oct 14 04:20:27 np0005486808 systemd-logind[799]: Removed session 17.
Oct 14 04:20:32 np0005486808 systemd-logind[799]: New session 18 of user zuul.
Oct 14 04:20:32 np0005486808 systemd[1]: Started Session 18 of User zuul.
Oct 14 04:20:34 np0005486808 python3.9[69626]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 04:20:35 np0005486808 python3.9[69782]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 14 04:20:36 np0005486808 python3.9[69866]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 14 04:20:38 np0005486808 python3.9[70017]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:20:39 np0005486808 python3.9[70168]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 14 04:20:40 np0005486808 python3.9[70318]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:20:40 np0005486808 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 04:20:41 np0005486808 python3.9[70469]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:20:42 np0005486808 systemd[1]: session-18.scope: Deactivated successfully.
Oct 14 04:20:42 np0005486808 systemd[1]: session-18.scope: Consumed 6.861s CPU time.
Oct 14 04:20:42 np0005486808 systemd-logind[799]: Session 18 logged out. Waiting for processes to exit.
Oct 14 04:20:42 np0005486808 systemd-logind[799]: Removed session 18.
Oct 14 04:20:49 np0005486808 systemd-logind[799]: New session 19 of user zuul.
Oct 14 04:20:49 np0005486808 systemd[1]: Started Session 19 of User zuul.
Oct 14 04:20:55 np0005486808 python3[71235]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 04:20:57 np0005486808 python3[71330]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 14 04:20:58 np0005486808 python3[71357]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:20:58 np0005486808 python3[71383]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:20:58 np0005486808 kernel: loop: module loaded
Oct 14 04:20:58 np0005486808 kernel: loop3: detected capacity change from 0 to 41943040
Oct 14 04:20:59 np0005486808 python3[71417]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:20:59 np0005486808 lvm[71420]: PV /dev/loop3 not used.
Oct 14 04:20:59 np0005486808 lvm[71429]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 14 04:20:59 np0005486808 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Oct 14 04:20:59 np0005486808 lvm[71431]:  1 logical volume(s) in volume group "ceph_vg0" now active
Oct 14 04:20:59 np0005486808 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Oct 14 04:21:00 np0005486808 python3[71509]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:21:00 np0005486808 python3[71582]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760430059.800292-32827-118877840183304/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:21:01 np0005486808 python3[71632]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:21:01 np0005486808 systemd[1]: Reloading.
Oct 14 04:21:01 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:21:01 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:21:01 np0005486808 systemd[1]: Starting Ceph OSD losetup...
Oct 14 04:21:01 np0005486808 bash[71673]: /dev/loop3: [64513]:4555332 (/var/lib/ceph-osd-0.img)
Oct 14 04:21:01 np0005486808 systemd[1]: Finished Ceph OSD losetup.
Oct 14 04:21:01 np0005486808 lvm[71674]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 14 04:21:01 np0005486808 lvm[71674]: VG ceph_vg0 finished
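The OSD backing device above (and the identical loop4/loop5 sequences that follow) is built from the two `_raw_params` command strings, with `#012` decoded as newlines. Only the sparse-file step is unprivileged; the loop and LVM steps need root on the node, so they are shown as comments here:

```shell
# Sparse 20 GiB backing file for OSD 0: dd writes no data (count=0) and just
# seeks to 20G, so the apparent size is 20 GiB while disk usage stays ~0.
IMG=/tmp/ceph-osd-0.img            # /var/lib/ceph-osd-0.img in the log
dd if=/dev/zero of="$IMG" bs=1 count=0 seek=20G
# Root-only steps recorded in the log:
#   losetup /dev/loop3 "$IMG"
#   pvcreate /dev/loop3
#   vgcreate ceph_vg0 /dev/loop3
#   lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
```

The kernel line "loop3: detected capacity change from 0 to 41943040" is consistent: 41943040 sectors of 512 bytes is exactly 20 GiB.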
Oct 14 04:21:02 np0005486808 python3[71700]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 14 04:21:03 np0005486808 python3[71727]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:21:04 np0005486808 python3[71753]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:21:04 np0005486808 kernel: loop4: detected capacity change from 0 to 41943040
Oct 14 04:21:04 np0005486808 python3[71785]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:21:04 np0005486808 lvm[71788]: PV /dev/loop4 not used.
Oct 14 04:21:04 np0005486808 lvm[71790]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 14 04:21:04 np0005486808 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Oct 14 04:21:04 np0005486808 lvm[71801]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 14 04:21:04 np0005486808 lvm[71801]: VG ceph_vg1 finished
Oct 14 04:21:04 np0005486808 lvm[71799]:  1 logical volume(s) in volume group "ceph_vg1" now active
Oct 14 04:21:04 np0005486808 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Oct 14 04:21:05 np0005486808 python3[71879]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:21:05 np0005486808 python3[71952]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760430064.916856-32856-220499282436262/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:21:06 np0005486808 python3[72002]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:21:06 np0005486808 systemd[1]: Reloading.
Oct 14 04:21:06 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:21:06 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:21:06 np0005486808 systemd[1]: Starting Ceph OSD losetup...
Oct 14 04:21:06 np0005486808 bash[72042]: /dev/loop4: [64513]:4655590 (/var/lib/ceph-osd-1.img)
Oct 14 04:21:06 np0005486808 systemd[1]: Finished Ceph OSD losetup.
Oct 14 04:21:06 np0005486808 lvm[72043]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 14 04:21:06 np0005486808 lvm[72043]: VG ceph_vg1 finished
Oct 14 04:21:07 np0005486808 python3[72069]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 14 04:21:08 np0005486808 python3[72096]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:21:08 np0005486808 python3[72122]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G#012losetup /dev/loop5 /var/lib/ceph-osd-2.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:21:08 np0005486808 kernel: loop5: detected capacity change from 0 to 41943040
Oct 14 04:21:09 np0005486808 python3[72154]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5#012vgcreate ceph_vg2 /dev/loop5#012lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:21:09 np0005486808 lvm[72157]: PV /dev/loop5 not used.
Oct 14 04:21:09 np0005486808 lvm[72159]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 14 04:21:09 np0005486808 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Oct 14 04:21:09 np0005486808 lvm[72170]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 14 04:21:09 np0005486808 lvm[72170]: VG ceph_vg2 finished
Oct 14 04:21:09 np0005486808 lvm[72168]:  1 logical volume(s) in volume group "ceph_vg2" now active
Oct 14 04:21:09 np0005486808 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Oct 14 04:21:10 np0005486808 python3[72248]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:21:10 np0005486808 python3[72321]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760430069.8858593-32883-117764093049586/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:21:11 np0005486808 python3[72371]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:21:11 np0005486808 systemd[1]: Reloading.
Oct 14 04:21:11 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:21:11 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:21:11 np0005486808 systemd[1]: Starting Ceph OSD losetup...
Oct 14 04:21:11 np0005486808 bash[72411]: /dev/loop5: [64513]:4812230 (/var/lib/ceph-osd-2.img)
Oct 14 04:21:11 np0005486808 systemd[1]: Finished Ceph OSD losetup.
Oct 14 04:21:11 np0005486808 lvm[72412]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 14 04:21:11 np0005486808 lvm[72412]: VG ceph_vg2 finished
Oct 14 04:21:13 np0005486808 python3[72436]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 04:21:15 np0005486808 python3[72529]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 14 04:21:17 np0005486808 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 14 04:21:17 np0005486808 systemd[1]: Starting man-db-cache-update.service...
Oct 14 04:21:17 np0005486808 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 14 04:21:17 np0005486808 systemd[1]: Finished man-db-cache-update.service.
Oct 14 04:21:17 np0005486808 systemd[1]: run-ra72162bba0ee4c3da222b9158df94c80.service: Deactivated successfully.
Oct 14 04:21:18 np0005486808 python3[72644]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:21:18 np0005486808 python3[72672]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:21:18 np0005486808 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 14 04:21:18 np0005486808 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 14 04:21:19 np0005486808 python3[72734]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:21:19 np0005486808 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 14 04:21:19 np0005486808 python3[72760]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:21:20 np0005486808 python3[72838]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:21:21 np0005486808 python3[72911]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760430080.3709686-33030-281364943094189/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:21:22 np0005486808 python3[73013]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:21:22 np0005486808 python3[73086]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760430081.7438915-33048-224430759479031/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:21:22 np0005486808 python3[73136]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:21:23 np0005486808 python3[73164]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:21:23 np0005486808 python3[73192]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:21:24 np0005486808 python3[73220]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:21:24 np0005486808 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 14 04:21:24 np0005486808 systemd-logind[799]: New session 20 of user ceph-admin.
Oct 14 04:21:24 np0005486808 systemd[1]: Created slice User Slice of UID 42477.
Oct 14 04:21:24 np0005486808 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct 14 04:21:24 np0005486808 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct 14 04:21:24 np0005486808 systemd[1]: Starting User Manager for UID 42477...
Oct 14 04:21:24 np0005486808 systemd[73241]: Queued start job for default target Main User Target.
Oct 14 04:21:24 np0005486808 systemd[73241]: Created slice User Application Slice.
Oct 14 04:21:24 np0005486808 systemd[73241]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 14 04:21:24 np0005486808 systemd[73241]: Started Daily Cleanup of User's Temporary Directories.
Oct 14 04:21:24 np0005486808 systemd[73241]: Reached target Paths.
Oct 14 04:21:24 np0005486808 systemd[73241]: Reached target Timers.
Oct 14 04:21:24 np0005486808 systemd[73241]: Starting D-Bus User Message Bus Socket...
Oct 14 04:21:24 np0005486808 systemd[73241]: Starting Create User's Volatile Files and Directories...
Oct 14 04:21:24 np0005486808 systemd[73241]: Finished Create User's Volatile Files and Directories.
Oct 14 04:21:24 np0005486808 systemd[73241]: Listening on D-Bus User Message Bus Socket.
Oct 14 04:21:24 np0005486808 systemd[73241]: Reached target Sockets.
Oct 14 04:21:24 np0005486808 systemd[73241]: Reached target Basic System.
Oct 14 04:21:24 np0005486808 systemd[73241]: Reached target Main User Target.
Oct 14 04:21:24 np0005486808 systemd[73241]: Startup finished in 172ms.
Oct 14 04:21:24 np0005486808 systemd[1]: Started User Manager for UID 42477.
Oct 14 04:21:24 np0005486808 systemd[1]: Started Session 20 of User ceph-admin.
Oct 14 04:21:24 np0005486808 systemd[1]: session-20.scope: Deactivated successfully.
Oct 14 04:21:24 np0005486808 systemd-logind[799]: Session 20 logged out. Waiting for processes to exit.
Oct 14 04:21:24 np0005486808 systemd-logind[799]: Removed session 20.
Oct 14 04:21:27 np0005486808 systemd[1]: var-lib-containers-storage-overlay-compat480440035-lower\x2dmapped.mount: Deactivated successfully.
Oct 14 04:21:35 np0005486808 systemd[1]: Stopping User Manager for UID 42477...
Oct 14 04:21:35 np0005486808 systemd[73241]: Activating special unit Exit the Session...
Oct 14 04:21:35 np0005486808 systemd[73241]: Stopped target Main User Target.
Oct 14 04:21:35 np0005486808 systemd[73241]: Stopped target Basic System.
Oct 14 04:21:35 np0005486808 systemd[73241]: Stopped target Paths.
Oct 14 04:21:35 np0005486808 systemd[73241]: Stopped target Sockets.
Oct 14 04:21:35 np0005486808 systemd[73241]: Stopped target Timers.
Oct 14 04:21:35 np0005486808 systemd[73241]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 14 04:21:35 np0005486808 systemd[73241]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 14 04:21:35 np0005486808 systemd[73241]: Closed D-Bus User Message Bus Socket.
Oct 14 04:21:35 np0005486808 systemd[73241]: Stopped Create User's Volatile Files and Directories.
Oct 14 04:21:35 np0005486808 systemd[73241]: Removed slice User Application Slice.
Oct 14 04:21:35 np0005486808 systemd[73241]: Reached target Shutdown.
Oct 14 04:21:35 np0005486808 systemd[73241]: Finished Exit the Session.
Oct 14 04:21:35 np0005486808 systemd[73241]: Reached target Exit the Session.
Oct 14 04:21:35 np0005486808 systemd[1]: user@42477.service: Deactivated successfully.
Oct 14 04:21:35 np0005486808 systemd[1]: Stopped User Manager for UID 42477.
Oct 14 04:21:35 np0005486808 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Oct 14 04:21:35 np0005486808 systemd[1]: run-user-42477.mount: Deactivated successfully.
Oct 14 04:21:35 np0005486808 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Oct 14 04:21:35 np0005486808 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Oct 14 04:21:35 np0005486808 systemd[1]: Removed slice User Slice of UID 42477.
Oct 14 04:21:39 np0005486808 podman[73296]: 2025-10-14 08:21:39.660221294 +0000 UTC m=+14.570783078 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:21:39 np0005486808 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 14 04:21:39 np0005486808 podman[73360]: 2025-10-14 08:21:39.75272425 +0000 UTC m=+0.053073429 container create fbc0be13389ec0c495e6fe4427ed35967e10851d206a2483381723dbf50e57db (image=quay.io/ceph/ceph:v18, name=heuristic_lamport, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 14 04:21:39 np0005486808 systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck1654753305-merged.mount: Deactivated successfully.
Oct 14 04:21:39 np0005486808 systemd[1]: Created slice Virtual Machine and Container Slice.
Oct 14 04:21:39 np0005486808 systemd[1]: Started libpod-conmon-fbc0be13389ec0c495e6fe4427ed35967e10851d206a2483381723dbf50e57db.scope.
Oct 14 04:21:39 np0005486808 podman[73360]: 2025-10-14 08:21:39.731477952 +0000 UTC m=+0.031827091 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:21:39 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:21:39 np0005486808 podman[73360]: 2025-10-14 08:21:39.891786037 +0000 UTC m=+0.192135246 container init fbc0be13389ec0c495e6fe4427ed35967e10851d206a2483381723dbf50e57db (image=quay.io/ceph/ceph:v18, name=heuristic_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 14 04:21:39 np0005486808 podman[73360]: 2025-10-14 08:21:39.903296217 +0000 UTC m=+0.203645386 container start fbc0be13389ec0c495e6fe4427ed35967e10851d206a2483381723dbf50e57db (image=quay.io/ceph/ceph:v18, name=heuristic_lamport, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 14 04:21:39 np0005486808 podman[73360]: 2025-10-14 08:21:39.907365013 +0000 UTC m=+0.207714252 container attach fbc0be13389ec0c495e6fe4427ed35967e10851d206a2483381723dbf50e57db (image=quay.io/ceph/ceph:v18, name=heuristic_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 14 04:21:40 np0005486808 heuristic_lamport[73376]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Oct 14 04:21:40 np0005486808 systemd[1]: libpod-fbc0be13389ec0c495e6fe4427ed35967e10851d206a2483381723dbf50e57db.scope: Deactivated successfully.
Oct 14 04:21:40 np0005486808 podman[73360]: 2025-10-14 08:21:40.217444252 +0000 UTC m=+0.517793421 container died fbc0be13389ec0c495e6fe4427ed35967e10851d206a2483381723dbf50e57db (image=quay.io/ceph/ceph:v18, name=heuristic_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 14 04:21:40 np0005486808 podman[73360]: 2025-10-14 08:21:40.285224991 +0000 UTC m=+0.585574170 container remove fbc0be13389ec0c495e6fe4427ed35967e10851d206a2483381723dbf50e57db (image=quay.io/ceph/ceph:v18, name=heuristic_lamport, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:21:40 np0005486808 systemd[1]: libpod-conmon-fbc0be13389ec0c495e6fe4427ed35967e10851d206a2483381723dbf50e57db.scope: Deactivated successfully.
Oct 14 04:21:40 np0005486808 podman[73395]: 2025-10-14 08:21:40.373040482 +0000 UTC m=+0.063044424 container create 741c49161e120beffd28ed9794c9664349b529df209e5eee19d72b7c1e64efab (image=quay.io/ceph/ceph:v18, name=musing_fermi, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:21:40 np0005486808 systemd[1]: Started libpod-conmon-741c49161e120beffd28ed9794c9664349b529df209e5eee19d72b7c1e64efab.scope.
Oct 14 04:21:40 np0005486808 podman[73395]: 2025-10-14 08:21:40.335935201 +0000 UTC m=+0.025939193 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:21:40 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:21:40 np0005486808 podman[73395]: 2025-10-14 08:21:40.4572027 +0000 UTC m=+0.147206662 container init 741c49161e120beffd28ed9794c9664349b529df209e5eee19d72b7c1e64efab (image=quay.io/ceph/ceph:v18, name=musing_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 14 04:21:40 np0005486808 podman[73395]: 2025-10-14 08:21:40.468694868 +0000 UTC m=+0.158698770 container start 741c49161e120beffd28ed9794c9664349b529df209e5eee19d72b7c1e64efab (image=quay.io/ceph/ceph:v18, name=musing_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 14 04:21:40 np0005486808 podman[73395]: 2025-10-14 08:21:40.472729564 +0000 UTC m=+0.162733556 container attach 741c49161e120beffd28ed9794c9664349b529df209e5eee19d72b7c1e64efab (image=quay.io/ceph/ceph:v18, name=musing_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:21:40 np0005486808 musing_fermi[73411]: 167 167
Oct 14 04:21:40 np0005486808 systemd[1]: libpod-741c49161e120beffd28ed9794c9664349b529df209e5eee19d72b7c1e64efab.scope: Deactivated successfully.
Oct 14 04:21:40 np0005486808 podman[73395]: 2025-10-14 08:21:40.47433697 +0000 UTC m=+0.164340902 container died 741c49161e120beffd28ed9794c9664349b529df209e5eee19d72b7c1e64efab (image=quay.io/ceph/ceph:v18, name=musing_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 14 04:21:40 np0005486808 podman[73395]: 2025-10-14 08:21:40.531166585 +0000 UTC m=+0.221170487 container remove 741c49161e120beffd28ed9794c9664349b529df209e5eee19d72b7c1e64efab (image=quay.io/ceph/ceph:v18, name=musing_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 14 04:21:40 np0005486808 systemd[1]: libpod-conmon-741c49161e120beffd28ed9794c9664349b529df209e5eee19d72b7c1e64efab.scope: Deactivated successfully.
Oct 14 04:21:40 np0005486808 podman[73429]: 2025-10-14 08:21:40.618352289 +0000 UTC m=+0.063851087 container create 655930bb2ef0e8bd41e01ce1b20cbbc8ac0dd93d6bf48a1f05c32828429c8934 (image=quay.io/ceph/ceph:v18, name=gifted_faraday, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:21:40 np0005486808 systemd[1]: Started libpod-conmon-655930bb2ef0e8bd41e01ce1b20cbbc8ac0dd93d6bf48a1f05c32828429c8934.scope.
Oct 14 04:21:40 np0005486808 systemd[1]: var-lib-containers-storage-overlay-138707165f4258d4a0edb6c507534474b26174e36e3833a9c80f4e58f0e7246f-merged.mount: Deactivated successfully.
Oct 14 04:21:40 np0005486808 podman[73429]: 2025-10-14 08:21:40.59005616 +0000 UTC m=+0.035555028 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:21:40 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:21:40 np0005486808 podman[73429]: 2025-10-14 08:21:40.696936057 +0000 UTC m=+0.142434845 container init 655930bb2ef0e8bd41e01ce1b20cbbc8ac0dd93d6bf48a1f05c32828429c8934 (image=quay.io/ceph/ceph:v18, name=gifted_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 14 04:21:40 np0005486808 podman[73429]: 2025-10-14 08:21:40.709370032 +0000 UTC m=+0.154868800 container start 655930bb2ef0e8bd41e01ce1b20cbbc8ac0dd93d6bf48a1f05c32828429c8934 (image=quay.io/ceph/ceph:v18, name=gifted_faraday, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 14 04:21:40 np0005486808 podman[73429]: 2025-10-14 08:21:40.713372397 +0000 UTC m=+0.158871175 container attach 655930bb2ef0e8bd41e01ce1b20cbbc8ac0dd93d6bf48a1f05c32828429c8934 (image=quay.io/ceph/ceph:v18, name=gifted_faraday, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Oct 14 04:21:40 np0005486808 gifted_faraday[73445]: AQAUCO5o44tLKxAAWau5pZC2HIm/E6XHREk4aQ==
Oct 14 04:21:40 np0005486808 systemd[1]: libpod-655930bb2ef0e8bd41e01ce1b20cbbc8ac0dd93d6bf48a1f05c32828429c8934.scope: Deactivated successfully.
Oct 14 04:21:40 np0005486808 podman[73429]: 2025-10-14 08:21:40.731202567 +0000 UTC m=+0.176701365 container died 655930bb2ef0e8bd41e01ce1b20cbbc8ac0dd93d6bf48a1f05c32828429c8934 (image=quay.io/ceph/ceph:v18, name=gifted_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 14 04:21:40 np0005486808 systemd[1]: var-lib-containers-storage-overlay-af84530fa3c8b2194aab21a295adabf72e989e9a6cf28e8ae7df97fdd7c4ea80-merged.mount: Deactivated successfully.
Oct 14 04:21:40 np0005486808 podman[73429]: 2025-10-14 08:21:40.783504363 +0000 UTC m=+0.229003131 container remove 655930bb2ef0e8bd41e01ce1b20cbbc8ac0dd93d6bf48a1f05c32828429c8934 (image=quay.io/ceph/ceph:v18, name=gifted_faraday, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:21:40 np0005486808 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 14 04:21:40 np0005486808 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 14 04:21:40 np0005486808 systemd[1]: libpod-conmon-655930bb2ef0e8bd41e01ce1b20cbbc8ac0dd93d6bf48a1f05c32828429c8934.scope: Deactivated successfully.
Oct 14 04:21:40 np0005486808 podman[73464]: 2025-10-14 08:21:40.882089973 +0000 UTC m=+0.065085373 container create b381b7acad7b8aeec1d2f4bb651a0cad5cb02d1c85af78a8c0700bb08f13039d (image=quay.io/ceph/ceph:v18, name=elated_wilson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 14 04:21:40 np0005486808 systemd[1]: Started libpod-conmon-b381b7acad7b8aeec1d2f4bb651a0cad5cb02d1c85af78a8c0700bb08f13039d.scope.
Oct 14 04:21:40 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:21:40 np0005486808 podman[73464]: 2025-10-14 08:21:40.939266998 +0000 UTC m=+0.122262518 container init b381b7acad7b8aeec1d2f4bb651a0cad5cb02d1c85af78a8c0700bb08f13039d (image=quay.io/ceph/ceph:v18, name=elated_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 14 04:21:40 np0005486808 podman[73464]: 2025-10-14 08:21:40.944266551 +0000 UTC m=+0.127262001 container start b381b7acad7b8aeec1d2f4bb651a0cad5cb02d1c85af78a8c0700bb08f13039d (image=quay.io/ceph/ceph:v18, name=elated_wilson, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 14 04:21:40 np0005486808 podman[73464]: 2025-10-14 08:21:40.855747019 +0000 UTC m=+0.038742519 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:21:40 np0005486808 podman[73464]: 2025-10-14 08:21:40.948455121 +0000 UTC m=+0.131450601 container attach b381b7acad7b8aeec1d2f4bb651a0cad5cb02d1c85af78a8c0700bb08f13039d (image=quay.io/ceph/ceph:v18, name=elated_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Oct 14 04:21:40 np0005486808 elated_wilson[73480]: AQAUCO5oTbRNORAARiQMadJKBlLeWiP9xdVaIw==
Oct 14 04:21:40 np0005486808 systemd[1]: libpod-b381b7acad7b8aeec1d2f4bb651a0cad5cb02d1c85af78a8c0700bb08f13039d.scope: Deactivated successfully.
Oct 14 04:21:40 np0005486808 podman[73464]: 2025-10-14 08:21:40.964704245 +0000 UTC m=+0.147699705 container died b381b7acad7b8aeec1d2f4bb651a0cad5cb02d1c85af78a8c0700bb08f13039d (image=quay.io/ceph/ceph:v18, name=elated_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:21:41 np0005486808 podman[73464]: 2025-10-14 08:21:41.008130278 +0000 UTC m=+0.191125708 container remove b381b7acad7b8aeec1d2f4bb651a0cad5cb02d1c85af78a8c0700bb08f13039d (image=quay.io/ceph/ceph:v18, name=elated_wilson, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:21:41 np0005486808 systemd[1]: libpod-conmon-b381b7acad7b8aeec1d2f4bb651a0cad5cb02d1c85af78a8c0700bb08f13039d.scope: Deactivated successfully.
Oct 14 04:21:41 np0005486808 podman[73500]: 2025-10-14 08:21:41.095673732 +0000 UTC m=+0.055454618 container create 87a74dbe4b78930ec6038cb6e39028c26c897827fcf3fa89e47a350fe9d39b71 (image=quay.io/ceph/ceph:v18, name=infallible_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:21:41 np0005486808 systemd[1]: Started libpod-conmon-87a74dbe4b78930ec6038cb6e39028c26c897827fcf3fa89e47a350fe9d39b71.scope.
Oct 14 04:21:41 np0005486808 podman[73500]: 2025-10-14 08:21:41.076955756 +0000 UTC m=+0.036736652 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:21:41 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:21:41 np0005486808 podman[73500]: 2025-10-14 08:21:41.187868909 +0000 UTC m=+0.147649805 container init 87a74dbe4b78930ec6038cb6e39028c26c897827fcf3fa89e47a350fe9d39b71 (image=quay.io/ceph/ceph:v18, name=infallible_varahamihira, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 14 04:21:41 np0005486808 podman[73500]: 2025-10-14 08:21:41.198844553 +0000 UTC m=+0.158625429 container start 87a74dbe4b78930ec6038cb6e39028c26c897827fcf3fa89e47a350fe9d39b71 (image=quay.io/ceph/ceph:v18, name=infallible_varahamihira, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:21:41 np0005486808 podman[73500]: 2025-10-14 08:21:41.202914419 +0000 UTC m=+0.162695335 container attach 87a74dbe4b78930ec6038cb6e39028c26c897827fcf3fa89e47a350fe9d39b71 (image=quay.io/ceph/ceph:v18, name=infallible_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:21:41 np0005486808 infallible_varahamihira[73518]: AQAVCO5oWnEUDhAApZ0lvWzKZgQNIWB6Pnecjw==
Oct 14 04:21:41 np0005486808 systemd[1]: libpod-87a74dbe4b78930ec6038cb6e39028c26c897827fcf3fa89e47a350fe9d39b71.scope: Deactivated successfully.
Oct 14 04:21:41 np0005486808 podman[73500]: 2025-10-14 08:21:41.242134071 +0000 UTC m=+0.201914977 container died 87a74dbe4b78930ec6038cb6e39028c26c897827fcf3fa89e47a350fe9d39b71 (image=quay.io/ceph/ceph:v18, name=infallible_varahamihira, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 14 04:21:41 np0005486808 podman[73500]: 2025-10-14 08:21:41.295925639 +0000 UTC m=+0.255706545 container remove 87a74dbe4b78930ec6038cb6e39028c26c897827fcf3fa89e47a350fe9d39b71 (image=quay.io/ceph/ceph:v18, name=infallible_varahamihira, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:21:41 np0005486808 systemd[1]: libpod-conmon-87a74dbe4b78930ec6038cb6e39028c26c897827fcf3fa89e47a350fe9d39b71.scope: Deactivated successfully.
Oct 14 04:21:41 np0005486808 podman[73537]: 2025-10-14 08:21:41.400601793 +0000 UTC m=+0.068236612 container create 80b2f741d5e592aa70e11acd46172abd6fd40946ee0e2c357a47dbdf4f8a40b5 (image=quay.io/ceph/ceph:v18, name=strange_babbage, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:21:41 np0005486808 systemd[1]: Started libpod-conmon-80b2f741d5e592aa70e11acd46172abd6fd40946ee0e2c357a47dbdf4f8a40b5.scope.
Oct 14 04:21:41 np0005486808 podman[73537]: 2025-10-14 08:21:41.373677783 +0000 UTC m=+0.041312652 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:21:41 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:21:41 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a8f55892571175f6b6c5be5292515ec2f60ee257b7d443a1657ca2649fa35f9/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:41 np0005486808 podman[73537]: 2025-10-14 08:21:41.512733261 +0000 UTC m=+0.180368090 container init 80b2f741d5e592aa70e11acd46172abd6fd40946ee0e2c357a47dbdf4f8a40b5 (image=quay.io/ceph/ceph:v18, name=strange_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 04:21:41 np0005486808 podman[73537]: 2025-10-14 08:21:41.523817558 +0000 UTC m=+0.191452377 container start 80b2f741d5e592aa70e11acd46172abd6fd40946ee0e2c357a47dbdf4f8a40b5 (image=quay.io/ceph/ceph:v18, name=strange_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct 14 04:21:41 np0005486808 podman[73537]: 2025-10-14 08:21:41.528669896 +0000 UTC m=+0.196304785 container attach 80b2f741d5e592aa70e11acd46172abd6fd40946ee0e2c357a47dbdf4f8a40b5 (image=quay.io/ceph/ceph:v18, name=strange_babbage, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:21:41 np0005486808 strange_babbage[73553]: /usr/bin/monmaptool: monmap file /tmp/monmap
Oct 14 04:21:41 np0005486808 strange_babbage[73553]: setting min_mon_release = pacific
Oct 14 04:21:41 np0005486808 strange_babbage[73553]: /usr/bin/monmaptool: set fsid to c49aadb6-9b04-5cb1-8f5f-4c91676c568e
Oct 14 04:21:41 np0005486808 strange_babbage[73553]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Oct 14 04:21:41 np0005486808 systemd[1]: libpod-80b2f741d5e592aa70e11acd46172abd6fd40946ee0e2c357a47dbdf4f8a40b5.scope: Deactivated successfully.
Oct 14 04:21:41 np0005486808 podman[73560]: 2025-10-14 08:21:41.626926957 +0000 UTC m=+0.032547282 container died 80b2f741d5e592aa70e11acd46172abd6fd40946ee0e2c357a47dbdf4f8a40b5 (image=quay.io/ceph/ceph:v18, name=strange_babbage, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:21:41 np0005486808 podman[73560]: 2025-10-14 08:21:41.677308318 +0000 UTC m=+0.082928643 container remove 80b2f741d5e592aa70e11acd46172abd6fd40946ee0e2c357a47dbdf4f8a40b5 (image=quay.io/ceph/ceph:v18, name=strange_babbage, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 14 04:21:41 np0005486808 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 14 04:21:41 np0005486808 systemd[1]: libpod-conmon-80b2f741d5e592aa70e11acd46172abd6fd40946ee0e2c357a47dbdf4f8a40b5.scope: Deactivated successfully.
Oct 14 04:21:41 np0005486808 podman[73575]: 2025-10-14 08:21:41.775185407 +0000 UTC m=+0.058189085 container create 94a9be6ff43fb08df918d08824536b5edd1197fcf455326d17dfcb444d175343 (image=quay.io/ceph/ceph:v18, name=romantic_payne, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:21:41 np0005486808 systemd[1]: Started libpod-conmon-94a9be6ff43fb08df918d08824536b5edd1197fcf455326d17dfcb444d175343.scope.
Oct 14 04:21:41 np0005486808 podman[73575]: 2025-10-14 08:21:41.743847901 +0000 UTC m=+0.026851619 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:21:41 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:21:41 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42f80305a78206f302e358904b28c39a44e63e601a5677c5987a2562967eeaed/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:41 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42f80305a78206f302e358904b28c39a44e63e601a5677c5987a2562967eeaed/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:41 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42f80305a78206f302e358904b28c39a44e63e601a5677c5987a2562967eeaed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:41 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42f80305a78206f302e358904b28c39a44e63e601a5677c5987a2562967eeaed/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:41 np0005486808 podman[73575]: 2025-10-14 08:21:41.868433474 +0000 UTC m=+0.151437122 container init 94a9be6ff43fb08df918d08824536b5edd1197fcf455326d17dfcb444d175343 (image=quay.io/ceph/ceph:v18, name=romantic_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:21:41 np0005486808 podman[73575]: 2025-10-14 08:21:41.878349168 +0000 UTC m=+0.161352816 container start 94a9be6ff43fb08df918d08824536b5edd1197fcf455326d17dfcb444d175343 (image=quay.io/ceph/ceph:v18, name=romantic_payne, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 14 04:21:41 np0005486808 podman[73575]: 2025-10-14 08:21:41.882386143 +0000 UTC m=+0.165389831 container attach 94a9be6ff43fb08df918d08824536b5edd1197fcf455326d17dfcb444d175343 (image=quay.io/ceph/ceph:v18, name=romantic_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:21:41 np0005486808 systemd[1]: libpod-94a9be6ff43fb08df918d08824536b5edd1197fcf455326d17dfcb444d175343.scope: Deactivated successfully.
Oct 14 04:21:41 np0005486808 podman[73575]: 2025-10-14 08:21:41.975234829 +0000 UTC m=+0.258238477 container died 94a9be6ff43fb08df918d08824536b5edd1197fcf455326d17dfcb444d175343 (image=quay.io/ceph/ceph:v18, name=romantic_payne, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 14 04:21:42 np0005486808 systemd[1]: var-lib-containers-storage-overlay-42f80305a78206f302e358904b28c39a44e63e601a5677c5987a2562967eeaed-merged.mount: Deactivated successfully.
Oct 14 04:21:42 np0005486808 podman[73575]: 2025-10-14 08:21:42.019182496 +0000 UTC m=+0.302186134 container remove 94a9be6ff43fb08df918d08824536b5edd1197fcf455326d17dfcb444d175343 (image=quay.io/ceph/ceph:v18, name=romantic_payne, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 14 04:21:42 np0005486808 systemd[1]: libpod-conmon-94a9be6ff43fb08df918d08824536b5edd1197fcf455326d17dfcb444d175343.scope: Deactivated successfully.
Oct 14 04:21:42 np0005486808 systemd[1]: Reloading.
Oct 14 04:21:42 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:21:42 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:21:42 np0005486808 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 14 04:21:42 np0005486808 systemd[1]: Reloading.
Oct 14 04:21:42 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:21:42 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:21:42 np0005486808 systemd[1]: Reached target All Ceph clusters and services.
Oct 14 04:21:42 np0005486808 systemd[1]: Reloading.
Oct 14 04:21:42 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:21:42 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:21:42 np0005486808 systemd[1]: Reached target Ceph cluster c49aadb6-9b04-5cb1-8f5f-4c91676c568e.
Oct 14 04:21:42 np0005486808 systemd[1]: Reloading.
Oct 14 04:21:42 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:21:42 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:21:43 np0005486808 systemd[1]: Reloading.
Oct 14 04:21:43 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:21:43 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:21:43 np0005486808 systemd[1]: Created slice Slice /system/ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e.
Oct 14 04:21:43 np0005486808 systemd[1]: Reached target System Time Set.
Oct 14 04:21:43 np0005486808 systemd[1]: Reached target System Time Synchronized.
Oct 14 04:21:43 np0005486808 systemd[1]: Starting Ceph mon.compute-0 for c49aadb6-9b04-5cb1-8f5f-4c91676c568e...
Oct 14 04:21:43 np0005486808 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 14 04:21:43 np0005486808 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 14 04:21:43 np0005486808 podman[73869]: 2025-10-14 08:21:43.676627932 +0000 UTC m=+0.056550188 container create d240882297f9d9663e8cb1b5313204a0aae641aa3169eb388dc91dc01008ce4e (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:21:43 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/511897e8a0627f981da958e247674d8b7bec80fe75ae136c26ae04dd366791a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:43 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/511897e8a0627f981da958e247674d8b7bec80fe75ae136c26ae04dd366791a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:43 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/511897e8a0627f981da958e247674d8b7bec80fe75ae136c26ae04dd366791a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:43 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/511897e8a0627f981da958e247674d8b7bec80fe75ae136c26ae04dd366791a9/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:43 np0005486808 podman[73869]: 2025-10-14 08:21:43.75278107 +0000 UTC m=+0.132703366 container init d240882297f9d9663e8cb1b5313204a0aae641aa3169eb388dc91dc01008ce4e (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:21:43 np0005486808 podman[73869]: 2025-10-14 08:21:43.660168212 +0000 UTC m=+0.040090478 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:21:43 np0005486808 podman[73869]: 2025-10-14 08:21:43.764128095 +0000 UTC m=+0.144050371 container start d240882297f9d9663e8cb1b5313204a0aae641aa3169eb388dc91dc01008ce4e (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:21:43 np0005486808 bash[73869]: d240882297f9d9663e8cb1b5313204a0aae641aa3169eb388dc91dc01008ce4e
Oct 14 04:21:43 np0005486808 systemd[1]: Started Ceph mon.compute-0 for c49aadb6-9b04-5cb1-8f5f-4c91676c568e.
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: set uid:gid to 167:167 (ceph:ceph)
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: pidfile_write: ignore empty --pid-file
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: load: jerasure load: lrc 
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: RocksDB version: 7.9.2
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: Git sha 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: DB SUMMARY
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: DB Session ID:  T0F303NWE1RRMD3QGLLD
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: CURRENT file:  CURRENT
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: IDENTITY file:  IDENTITY
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                         Options.error_if_exists: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                       Options.create_if_missing: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                         Options.paranoid_checks: 1
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                                     Options.env: 0x555e00f45c40
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                                      Options.fs: PosixFileSystem
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                                Options.info_log: 0x555e02fcce80
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                Options.max_file_opening_threads: 16
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                              Options.statistics: (nil)
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                               Options.use_fsync: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                       Options.max_log_file_size: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                         Options.allow_fallocate: 1
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                        Options.use_direct_reads: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:          Options.create_missing_column_families: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                              Options.db_log_dir: 
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                                 Options.wal_dir: 
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                   Options.advise_random_on_open: 1
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                    Options.write_buffer_manager: 0x555e02fdcb40
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                            Options.rate_limiter: (nil)
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                  Options.unordered_write: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                               Options.row_cache: None
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                              Options.wal_filter: None
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:             Options.allow_ingest_behind: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:             Options.two_write_queues: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:             Options.manual_wal_flush: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:             Options.wal_compression: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:             Options.atomic_flush: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                 Options.log_readahead_size: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:             Options.allow_data_in_errors: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:             Options.db_host_id: __hostname__
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:             Options.max_background_jobs: 2
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:             Options.max_background_compactions: -1
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:             Options.max_subcompactions: 1
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:             Options.max_total_wal_size: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                          Options.max_open_files: -1
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                          Options.bytes_per_sync: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:       Options.compaction_readahead_size: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                  Options.max_background_flushes: -1
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: Compression algorithms supported:
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: #011kZSTD supported: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: #011kXpressCompression supported: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: #011kBZip2Compression supported: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: #011kLZ4Compression supported: 1
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: #011kZlibCompression supported: 1
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: #011kLZ4HCCompression supported: 1
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: #011kSnappyCompression supported: 1
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:           Options.merge_operator: 
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:        Options.compaction_filter: None
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555e02fcca80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x555e02fc51f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:        Options.write_buffer_size: 33554432
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:  Options.max_write_buffer_number: 2
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:          Options.compression: NoCompression
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:             Options.num_levels: 7
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7f7a3431-24cf-46a0-945d-86985446add9
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430103818169, "job": 1, "event": "recovery_started", "wal_files": [4]}
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430103820386, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "T0F303NWE1RRMD3QGLLD", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430103820504, "job": 1, "event": "recovery_finished"}
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x555e02feee00
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: DB pointer 0x555e03078000
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x555e02fc51f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: mon.compute-0@-1(???) e0 preinit fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: mon.compute-0@0(probing) e0 win_standalone_election
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: mon.compute-0@0(probing) e1 win_standalone_election
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: paxos.0).electionLogic(2) init, last seen epoch 2
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2025-10-14T08:21:41.917239Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Tue Sep 30 07:37:35 UTC 2025,kernel_version=5.14.0-621.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864356,os=Linux}
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: mon.compute-0@0(leader).mds e1 new map
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: mon.compute-0@0(leader).mds e1 print_map#012e1#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: -1#012 #012No filesystems configured
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: log_channel(cluster) log [DBG] : fsmap 
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: mkfs c49aadb6-9b04-5cb1-8f5f-4c91676c568e
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 14 04:21:43 np0005486808 podman[73890]: 2025-10-14 08:21:43.874253245 +0000 UTC m=+0.061273634 container create b2048a5d5215863d828a315761d73311474a222b46ad3ffaf21c8537f6eccf0a (image=quay.io/ceph/ceph:v18, name=amazing_hopper, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Oct 14 04:21:43 np0005486808 ceph-mon[73889]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 14 04:21:43 np0005486808 systemd[1]: Started libpod-conmon-b2048a5d5215863d828a315761d73311474a222b46ad3ffaf21c8537f6eccf0a.scope.
Oct 14 04:21:43 np0005486808 podman[73890]: 2025-10-14 08:21:43.84400237 +0000 UTC m=+0.031022809 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:21:43 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:21:43 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e06725dba371d37534424469d7deb7885a6b0dbdf31ecee730cce0240d5a0a7a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:43 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e06725dba371d37534424469d7deb7885a6b0dbdf31ecee730cce0240d5a0a7a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:43 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e06725dba371d37534424469d7deb7885a6b0dbdf31ecee730cce0240d5a0a7a/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:43 np0005486808 podman[73890]: 2025-10-14 08:21:43.974963055 +0000 UTC m=+0.161983524 container init b2048a5d5215863d828a315761d73311474a222b46ad3ffaf21c8537f6eccf0a (image=quay.io/ceph/ceph:v18, name=amazing_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 14 04:21:43 np0005486808 podman[73890]: 2025-10-14 08:21:43.985215659 +0000 UTC m=+0.172236028 container start b2048a5d5215863d828a315761d73311474a222b46ad3ffaf21c8537f6eccf0a (image=quay.io/ceph/ceph:v18, name=amazing_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 14 04:21:43 np0005486808 podman[73890]: 2025-10-14 08:21:43.98911806 +0000 UTC m=+0.176138529 container attach b2048a5d5215863d828a315761d73311474a222b46ad3ffaf21c8537f6eccf0a (image=quay.io/ceph/ceph:v18, name=amazing_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 14 04:21:44 np0005486808 ceph-mon[73889]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 14 04:21:44 np0005486808 ceph-mon[73889]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2528715734' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 14 04:21:44 np0005486808 amazing_hopper[73944]:  cluster:
Oct 14 04:21:44 np0005486808 amazing_hopper[73944]:    id:     c49aadb6-9b04-5cb1-8f5f-4c91676c568e
Oct 14 04:21:44 np0005486808 amazing_hopper[73944]:    health: HEALTH_OK
Oct 14 04:21:44 np0005486808 amazing_hopper[73944]: 
Oct 14 04:21:44 np0005486808 amazing_hopper[73944]:  services:
Oct 14 04:21:44 np0005486808 amazing_hopper[73944]:    mon: 1 daemons, quorum compute-0 (age 0.521254s)
Oct 14 04:21:44 np0005486808 amazing_hopper[73944]:    mgr: no daemons active
Oct 14 04:21:44 np0005486808 amazing_hopper[73944]:    osd: 0 osds: 0 up, 0 in
Oct 14 04:21:44 np0005486808 amazing_hopper[73944]: 
Oct 14 04:21:44 np0005486808 amazing_hopper[73944]:  data:
Oct 14 04:21:44 np0005486808 amazing_hopper[73944]:    pools:   0 pools, 0 pgs
Oct 14 04:21:44 np0005486808 amazing_hopper[73944]:    objects: 0 objects, 0 B
Oct 14 04:21:44 np0005486808 amazing_hopper[73944]:    usage:   0 B used, 0 B / 0 B avail
Oct 14 04:21:44 np0005486808 amazing_hopper[73944]:    pgs:     
Oct 14 04:21:44 np0005486808 amazing_hopper[73944]: 
Oct 14 04:21:44 np0005486808 systemd[1]: libpod-b2048a5d5215863d828a315761d73311474a222b46ad3ffaf21c8537f6eccf0a.scope: Deactivated successfully.
Oct 14 04:21:44 np0005486808 podman[73970]: 2025-10-14 08:21:44.438407211 +0000 UTC m=+0.029471934 container died b2048a5d5215863d828a315761d73311474a222b46ad3ffaf21c8537f6eccf0a (image=quay.io/ceph/ceph:v18, name=amazing_hopper, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:21:44 np0005486808 systemd[1]: var-lib-containers-storage-overlay-e06725dba371d37534424469d7deb7885a6b0dbdf31ecee730cce0240d5a0a7a-merged.mount: Deactivated successfully.
Oct 14 04:21:44 np0005486808 podman[73970]: 2025-10-14 08:21:44.490245464 +0000 UTC m=+0.081310157 container remove b2048a5d5215863d828a315761d73311474a222b46ad3ffaf21c8537f6eccf0a (image=quay.io/ceph/ceph:v18, name=amazing_hopper, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:21:44 np0005486808 systemd[1]: libpod-conmon-b2048a5d5215863d828a315761d73311474a222b46ad3ffaf21c8537f6eccf0a.scope: Deactivated successfully.
Oct 14 04:21:44 np0005486808 podman[73985]: 2025-10-14 08:21:44.565701552 +0000 UTC m=+0.041423586 container create 8596e974f69ea64aadee9173df2cbf4d8341a100b5c95c65306900fc3487b1f9 (image=quay.io/ceph/ceph:v18, name=boring_poincare, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:21:44 np0005486808 systemd[1]: Started libpod-conmon-8596e974f69ea64aadee9173df2cbf4d8341a100b5c95c65306900fc3487b1f9.scope.
Oct 14 04:21:44 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:21:44 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da3ddbb284acc08853f6fd9eeaa65ef2c5688a3b6338ccee3243a1cbf15eb11a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:44 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da3ddbb284acc08853f6fd9eeaa65ef2c5688a3b6338ccee3243a1cbf15eb11a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:44 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da3ddbb284acc08853f6fd9eeaa65ef2c5688a3b6338ccee3243a1cbf15eb11a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:44 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da3ddbb284acc08853f6fd9eeaa65ef2c5688a3b6338ccee3243a1cbf15eb11a/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:44 np0005486808 podman[73985]: 2025-10-14 08:21:44.55061004 +0000 UTC m=+0.026332104 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:21:44 np0005486808 podman[73985]: 2025-10-14 08:21:44.662284725 +0000 UTC m=+0.138006849 container init 8596e974f69ea64aadee9173df2cbf4d8341a100b5c95c65306900fc3487b1f9 (image=quay.io/ceph/ceph:v18, name=boring_poincare, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:21:44 np0005486808 podman[73985]: 2025-10-14 08:21:44.672004633 +0000 UTC m=+0.147726667 container start 8596e974f69ea64aadee9173df2cbf4d8341a100b5c95c65306900fc3487b1f9 (image=quay.io/ceph/ceph:v18, name=boring_poincare, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 14 04:21:44 np0005486808 podman[73985]: 2025-10-14 08:21:44.675473152 +0000 UTC m=+0.151195276 container attach 8596e974f69ea64aadee9173df2cbf4d8341a100b5c95c65306900fc3487b1f9 (image=quay.io/ceph/ceph:v18, name=boring_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:21:44 np0005486808 ceph-mon[73889]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 14 04:21:45 np0005486808 ceph-mon[73889]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Oct 14 04:21:45 np0005486808 ceph-mon[73889]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1637600634' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 14 04:21:45 np0005486808 ceph-mon[73889]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1637600634' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 14 04:21:45 np0005486808 boring_poincare[74001]: 
Oct 14 04:21:45 np0005486808 boring_poincare[74001]: [global]
Oct 14 04:21:45 np0005486808 boring_poincare[74001]: 	fsid = c49aadb6-9b04-5cb1-8f5f-4c91676c568e
Oct 14 04:21:45 np0005486808 boring_poincare[74001]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Oct 14 04:21:45 np0005486808 boring_poincare[74001]: 	osd_crush_chooseleaf_type = 0
Oct 14 04:21:45 np0005486808 systemd[1]: libpod-8596e974f69ea64aadee9173df2cbf4d8341a100b5c95c65306900fc3487b1f9.scope: Deactivated successfully.
Oct 14 04:21:45 np0005486808 podman[73985]: 2025-10-14 08:21:45.095133235 +0000 UTC m=+0.570855309 container died 8596e974f69ea64aadee9173df2cbf4d8341a100b5c95c65306900fc3487b1f9 (image=quay.io/ceph/ceph:v18, name=boring_poincare, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:21:45 np0005486808 systemd[1]: var-lib-containers-storage-overlay-da3ddbb284acc08853f6fd9eeaa65ef2c5688a3b6338ccee3243a1cbf15eb11a-merged.mount: Deactivated successfully.
Oct 14 04:21:45 np0005486808 podman[73985]: 2025-10-14 08:21:45.161250206 +0000 UTC m=+0.636972290 container remove 8596e974f69ea64aadee9173df2cbf4d8341a100b5c95c65306900fc3487b1f9 (image=quay.io/ceph/ceph:v18, name=boring_poincare, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:21:45 np0005486808 systemd[1]: libpod-conmon-8596e974f69ea64aadee9173df2cbf4d8341a100b5c95c65306900fc3487b1f9.scope: Deactivated successfully.
Oct 14 04:21:45 np0005486808 podman[74039]: 2025-10-14 08:21:45.249537291 +0000 UTC m=+0.055779226 container create a338b3720629127ab4a6cd21701e69fbac54b3db7b8b1cb28506304ea175e62a (image=quay.io/ceph/ceph:v18, name=objective_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 14 04:21:45 np0005486808 systemd[1]: Started libpod-conmon-a338b3720629127ab4a6cd21701e69fbac54b3db7b8b1cb28506304ea175e62a.scope.
Oct 14 04:21:45 np0005486808 podman[74039]: 2025-10-14 08:21:45.2309659 +0000 UTC m=+0.037207855 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:21:45 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:21:45 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8d8e1ab11445e29685a068f431f9c169a60d764fc6d6c210627f5b277c691b4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:45 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8d8e1ab11445e29685a068f431f9c169a60d764fc6d6c210627f5b277c691b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:45 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8d8e1ab11445e29685a068f431f9c169a60d764fc6d6c210627f5b277c691b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:45 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8d8e1ab11445e29685a068f431f9c169a60d764fc6d6c210627f5b277c691b4/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:45 np0005486808 podman[74039]: 2025-10-14 08:21:45.350686255 +0000 UTC m=+0.156928220 container init a338b3720629127ab4a6cd21701e69fbac54b3db7b8b1cb28506304ea175e62a (image=quay.io/ceph/ceph:v18, name=objective_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:21:45 np0005486808 podman[74039]: 2025-10-14 08:21:45.362202974 +0000 UTC m=+0.168444909 container start a338b3720629127ab4a6cd21701e69fbac54b3db7b8b1cb28506304ea175e62a (image=quay.io/ceph/ceph:v18, name=objective_rhodes, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:21:45 np0005486808 podman[74039]: 2025-10-14 08:21:45.365714054 +0000 UTC m=+0.171956019 container attach a338b3720629127ab4a6cd21701e69fbac54b3db7b8b1cb28506304ea175e62a (image=quay.io/ceph/ceph:v18, name=objective_rhodes, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:21:45 np0005486808 ceph-mon[73889]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:21:45 np0005486808 ceph-mon[73889]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1744027110' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:21:45 np0005486808 systemd[1]: libpod-a338b3720629127ab4a6cd21701e69fbac54b3db7b8b1cb28506304ea175e62a.scope: Deactivated successfully.
Oct 14 04:21:45 np0005486808 podman[74039]: 2025-10-14 08:21:45.751953802 +0000 UTC m=+0.558195767 container died a338b3720629127ab4a6cd21701e69fbac54b3db7b8b1cb28506304ea175e62a (image=quay.io/ceph/ceph:v18, name=objective_rhodes, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:21:45 np0005486808 systemd[1]: var-lib-containers-storage-overlay-b8d8e1ab11445e29685a068f431f9c169a60d764fc6d6c210627f5b277c691b4-merged.mount: Deactivated successfully.
Oct 14 04:21:45 np0005486808 podman[74039]: 2025-10-14 08:21:45.798627087 +0000 UTC m=+0.604869012 container remove a338b3720629127ab4a6cd21701e69fbac54b3db7b8b1cb28506304ea175e62a (image=quay.io/ceph/ceph:v18, name=objective_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:21:45 np0005486808 systemd[1]: libpod-conmon-a338b3720629127ab4a6cd21701e69fbac54b3db7b8b1cb28506304ea175e62a.scope: Deactivated successfully.
Oct 14 04:21:45 np0005486808 systemd[1]: Stopping Ceph mon.compute-0 for c49aadb6-9b04-5cb1-8f5f-4c91676c568e...
Oct 14 04:21:45 np0005486808 ceph-mon[73889]: from='client.? 192.168.122.100:0/1637600634' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 14 04:21:45 np0005486808 ceph-mon[73889]: from='client.? 192.168.122.100:0/1637600634' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 14 04:21:46 np0005486808 ceph-mon[73889]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Oct 14 04:21:46 np0005486808 ceph-mon[73889]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Oct 14 04:21:46 np0005486808 ceph-mon[73889]: mon.compute-0@0(leader) e1 shutdown
Oct 14 04:21:46 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0[73885]: 2025-10-14T08:21:46.044+0000 7f4cbecd4640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Oct 14 04:21:46 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0[73885]: 2025-10-14T08:21:46.044+0000 7f4cbecd4640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Oct 14 04:21:46 np0005486808 ceph-mon[73889]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 14 04:21:46 np0005486808 ceph-mon[73889]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 14 04:21:46 np0005486808 podman[74125]: 2025-10-14 08:21:46.1574404 +0000 UTC m=+0.161507801 container died d240882297f9d9663e8cb1b5313204a0aae641aa3169eb388dc91dc01008ce4e (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 14 04:21:46 np0005486808 systemd[1]: var-lib-containers-storage-overlay-511897e8a0627f981da958e247674d8b7bec80fe75ae136c26ae04dd366791a9-merged.mount: Deactivated successfully.
Oct 14 04:21:46 np0005486808 podman[74125]: 2025-10-14 08:21:46.209317114 +0000 UTC m=+0.213384515 container remove d240882297f9d9663e8cb1b5313204a0aae641aa3169eb388dc91dc01008ce4e (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 14 04:21:46 np0005486808 bash[74125]: ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0
Oct 14 04:21:46 np0005486808 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 14 04:21:46 np0005486808 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 14 04:21:46 np0005486808 systemd[1]: ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e@mon.compute-0.service: Deactivated successfully.
Oct 14 04:21:46 np0005486808 systemd[1]: Stopped Ceph mon.compute-0 for c49aadb6-9b04-5cb1-8f5f-4c91676c568e.
Oct 14 04:21:46 np0005486808 systemd[1]: ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e@mon.compute-0.service: Consumed 1.221s CPU time.
Oct 14 04:21:46 np0005486808 systemd[1]: Starting Ceph mon.compute-0 for c49aadb6-9b04-5cb1-8f5f-4c91676c568e...
Oct 14 04:21:46 np0005486808 podman[74230]: 2025-10-14 08:21:46.755634409 +0000 UTC m=+0.052653157 container create c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:21:46 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f031c50b6bdba07c59a7df648aa346d817483dac54cd13031ea8d793a73cbc9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:46 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f031c50b6bdba07c59a7df648aa346d817483dac54cd13031ea8d793a73cbc9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:46 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f031c50b6bdba07c59a7df648aa346d817483dac54cd13031ea8d793a73cbc9d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:46 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f031c50b6bdba07c59a7df648aa346d817483dac54cd13031ea8d793a73cbc9d/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:46 np0005486808 podman[74230]: 2025-10-14 08:21:46.734181085 +0000 UTC m=+0.031199923 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:21:46 np0005486808 podman[74230]: 2025-10-14 08:21:46.840321451 +0000 UTC m=+0.137340289 container init c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:21:46 np0005486808 podman[74230]: 2025-10-14 08:21:46.855574177 +0000 UTC m=+0.152592955 container start c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 14 04:21:46 np0005486808 bash[74230]: c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f
Oct 14 04:21:46 np0005486808 systemd[1]: Started Ceph mon.compute-0 for c49aadb6-9b04-5cb1-8f5f-4c91676c568e.
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: set uid:gid to 167:167 (ceph:ceph)
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: pidfile_write: ignore empty --pid-file
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: load: jerasure load: lrc 
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: RocksDB version: 7.9.2
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: Git sha 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: DB SUMMARY
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: DB Session ID:  BMHF7YJUWAF2M714LKG8
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: CURRENT file:  CURRENT
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: IDENTITY file:  IDENTITY
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 55680 ; 
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                         Options.error_if_exists: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                       Options.create_if_missing: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                         Options.paranoid_checks: 1
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                                     Options.env: 0x5646f1667c40
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                                      Options.fs: PosixFileSystem
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                                Options.info_log: 0x5646f3b33040
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                Options.max_file_opening_threads: 16
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                              Options.statistics: (nil)
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                               Options.use_fsync: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                       Options.max_log_file_size: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                         Options.allow_fallocate: 1
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                        Options.use_direct_reads: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:          Options.create_missing_column_families: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                              Options.db_log_dir: 
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                                 Options.wal_dir: 
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                   Options.advise_random_on_open: 1
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                    Options.write_buffer_manager: 0x5646f3b42b40
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                            Options.rate_limiter: (nil)
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                  Options.unordered_write: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                               Options.row_cache: None
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                              Options.wal_filter: None
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:             Options.allow_ingest_behind: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:             Options.two_write_queues: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:             Options.manual_wal_flush: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:             Options.wal_compression: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:             Options.atomic_flush: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                 Options.log_readahead_size: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:             Options.allow_data_in_errors: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:             Options.db_host_id: __hostname__
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:             Options.max_background_jobs: 2
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:             Options.max_background_compactions: -1
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:             Options.max_subcompactions: 1
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:             Options.max_total_wal_size: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                          Options.max_open_files: -1
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                          Options.bytes_per_sync: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:       Options.compaction_readahead_size: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                  Options.max_background_flushes: -1
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: Compression algorithms supported:
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: #011kZSTD supported: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: #011kXpressCompression supported: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: #011kBZip2Compression supported: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: #011kLZ4Compression supported: 1
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: #011kZlibCompression supported: 1
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: #011kLZ4HCCompression supported: 1
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: #011kSnappyCompression supported: 1
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:           Options.merge_operator: 
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:        Options.compaction_filter: None
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5646f3b32c40)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5646f3b2b1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:        Options.write_buffer_size: 33554432
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:  Options.max_write_buffer_number: 2
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:          Options.compression: NoCompression
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:             Options.num_levels: 7
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7f7a3431-24cf-46a0-945d-86985446add9
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430106932946, "job": 1, "event": "recovery_started", "wal_files": [9]}
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430106940755, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 55261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 138, "table_properties": {"data_size": 53801, "index_size": 166, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 3050, "raw_average_key_size": 30, "raw_value_size": 51390, "raw_average_value_size": 508, "num_data_blocks": 9, "num_entries": 101, "num_filter_entries": 101, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430106, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430106940874, "job": 1, "event": "recovery_finished"}
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Oct 14 04:21:46 np0005486808 podman[74250]: 2025-10-14 08:21:46.949062141 +0000 UTC m=+0.057773873 container create 073342c87a282a1fb1c37c2160d870b7d3bbee8437053662cc055247bc3aa337 (image=quay.io/ceph/ceph:v18, name=stupefied_aryabhata, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5646f3b54e00
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: DB pointer 0x5646f3c5c000
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   55.86 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      7.3      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Sum      2/0   55.86 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      7.3      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      7.3      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      7.3      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 2.46 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 2.46 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5646f3b2b1f0#2 capacity: 512.00 MB usage: 0.78 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: mon.compute-0@-1(???) e1 preinit fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: mon.compute-0@-1(???).mds e1 new map
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: mon.compute-0@-1(???).mds e1 print_map
e1
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: -1

No filesystems configured
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(probing) e1 win_standalone_election
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : fsmap 
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Oct 14 04:21:46 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Oct 14 04:21:46 np0005486808 systemd[1]: Started libpod-conmon-073342c87a282a1fb1c37c2160d870b7d3bbee8437053662cc055247bc3aa337.scope.
Oct 14 04:21:47 np0005486808 podman[74250]: 2025-10-14 08:21:46.921719109 +0000 UTC m=+0.030430901 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:21:47 np0005486808 ceph-mon[74249]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 14 04:21:47 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:21:47 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d440dd19052ce1c80dc74a43562bf5178d99a4cdaf03446115de9dc7f17c4afc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:47 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d440dd19052ce1c80dc74a43562bf5178d99a4cdaf03446115de9dc7f17c4afc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:47 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d440dd19052ce1c80dc74a43562bf5178d99a4cdaf03446115de9dc7f17c4afc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:47 np0005486808 podman[74250]: 2025-10-14 08:21:47.068269391 +0000 UTC m=+0.176981113 container init 073342c87a282a1fb1c37c2160d870b7d3bbee8437053662cc055247bc3aa337 (image=quay.io/ceph/ceph:v18, name=stupefied_aryabhata, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 14 04:21:47 np0005486808 podman[74250]: 2025-10-14 08:21:47.080739647 +0000 UTC m=+0.189451369 container start 073342c87a282a1fb1c37c2160d870b7d3bbee8437053662cc055247bc3aa337 (image=quay.io/ceph/ceph:v18, name=stupefied_aryabhata, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 14 04:21:47 np0005486808 podman[74250]: 2025-10-14 08:21:47.084932037 +0000 UTC m=+0.193643779 container attach 073342c87a282a1fb1c37c2160d870b7d3bbee8437053662cc055247bc3aa337 (image=quay.io/ceph/ceph:v18, name=stupefied_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 14 04:21:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Oct 14 04:21:47 np0005486808 systemd[1]: libpod-073342c87a282a1fb1c37c2160d870b7d3bbee8437053662cc055247bc3aa337.scope: Deactivated successfully.
Oct 14 04:21:47 np0005486808 podman[74250]: 2025-10-14 08:21:47.510116229 +0000 UTC m=+0.618827961 container died 073342c87a282a1fb1c37c2160d870b7d3bbee8437053662cc055247bc3aa337 (image=quay.io/ceph/ceph:v18, name=stupefied_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 14 04:21:47 np0005486808 systemd[1]: var-lib-containers-storage-overlay-d440dd19052ce1c80dc74a43562bf5178d99a4cdaf03446115de9dc7f17c4afc-merged.mount: Deactivated successfully.
Oct 14 04:21:47 np0005486808 podman[74250]: 2025-10-14 08:21:47.556615249 +0000 UTC m=+0.665326971 container remove 073342c87a282a1fb1c37c2160d870b7d3bbee8437053662cc055247bc3aa337 (image=quay.io/ceph/ceph:v18, name=stupefied_aryabhata, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 14 04:21:47 np0005486808 systemd[1]: libpod-conmon-073342c87a282a1fb1c37c2160d870b7d3bbee8437053662cc055247bc3aa337.scope: Deactivated successfully.
Oct 14 04:21:47 np0005486808 podman[74344]: 2025-10-14 08:21:47.624515841 +0000 UTC m=+0.045240065 container create b24e5153aeee66e46d054073c5c79b2e06e5133b1798db7171204378efe64b34 (image=quay.io/ceph/ceph:v18, name=gifted_tesla, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Oct 14 04:21:47 np0005486808 systemd[1]: Started libpod-conmon-b24e5153aeee66e46d054073c5c79b2e06e5133b1798db7171204378efe64b34.scope.
Oct 14 04:21:47 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:21:47 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12591f771a8f4aefe48d5a92c11869a9143ecccdcd05459fc6c433f8a2aed8d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:47 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12591f771a8f4aefe48d5a92c11869a9143ecccdcd05459fc6c433f8a2aed8d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:47 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12591f771a8f4aefe48d5a92c11869a9143ecccdcd05459fc6c433f8a2aed8d1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:47 np0005486808 podman[74344]: 2025-10-14 08:21:47.601711468 +0000 UTC m=+0.022435773 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:21:47 np0005486808 podman[74344]: 2025-10-14 08:21:47.715515454 +0000 UTC m=+0.136239768 container init b24e5153aeee66e46d054073c5c79b2e06e5133b1798db7171204378efe64b34 (image=quay.io/ceph/ceph:v18, name=gifted_tesla, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 14 04:21:47 np0005486808 podman[74344]: 2025-10-14 08:21:47.726816737 +0000 UTC m=+0.147540991 container start b24e5153aeee66e46d054073c5c79b2e06e5133b1798db7171204378efe64b34 (image=quay.io/ceph/ceph:v18, name=gifted_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 14 04:21:47 np0005486808 podman[74344]: 2025-10-14 08:21:47.731288735 +0000 UTC m=+0.152013039 container attach b24e5153aeee66e46d054073c5c79b2e06e5133b1798db7171204378efe64b34 (image=quay.io/ceph/ceph:v18, name=gifted_tesla, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 14 04:21:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Oct 14 04:21:48 np0005486808 systemd[1]: libpod-b24e5153aeee66e46d054073c5c79b2e06e5133b1798db7171204378efe64b34.scope: Deactivated successfully.
Oct 14 04:21:48 np0005486808 podman[74344]: 2025-10-14 08:21:48.146974394 +0000 UTC m=+0.567698688 container died b24e5153aeee66e46d054073c5c79b2e06e5133b1798db7171204378efe64b34 (image=quay.io/ceph/ceph:v18, name=gifted_tesla, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:21:48 np0005486808 systemd[1]: var-lib-containers-storage-overlay-12591f771a8f4aefe48d5a92c11869a9143ecccdcd05459fc6c433f8a2aed8d1-merged.mount: Deactivated successfully.
Oct 14 04:21:48 np0005486808 podman[74344]: 2025-10-14 08:21:48.198444957 +0000 UTC m=+0.619169201 container remove b24e5153aeee66e46d054073c5c79b2e06e5133b1798db7171204378efe64b34 (image=quay.io/ceph/ceph:v18, name=gifted_tesla, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:21:48 np0005486808 systemd[1]: libpod-conmon-b24e5153aeee66e46d054073c5c79b2e06e5133b1798db7171204378efe64b34.scope: Deactivated successfully.
Oct 14 04:21:48 np0005486808 systemd[1]: Reloading.
Oct 14 04:21:48 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:21:48 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:21:48 np0005486808 systemd[1]: Reloading.
Oct 14 04:21:48 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:21:48 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:21:48 np0005486808 systemd[1]: Starting Ceph mgr.compute-0.euuwqu for c49aadb6-9b04-5cb1-8f5f-4c91676c568e...
Oct 14 04:21:49 np0005486808 podman[74524]: 2025-10-14 08:21:49.120718796 +0000 UTC m=+0.056486747 container create 295afd06ae4ac197c2c5a35bf227a06f697efaa30884a95475fffc3fd4b6ecf9 (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:21:49 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08feb550a6441e6491c861f2ec4cfe41d1de6210f420df8dab070b2ed12e6f9c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:49 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08feb550a6441e6491c861f2ec4cfe41d1de6210f420df8dab070b2ed12e6f9c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:49 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08feb550a6441e6491c861f2ec4cfe41d1de6210f420df8dab070b2ed12e6f9c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:49 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08feb550a6441e6491c861f2ec4cfe41d1de6210f420df8dab070b2ed12e6f9c/merged/var/lib/ceph/mgr/ceph-compute-0.euuwqu supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:49 np0005486808 podman[74524]: 2025-10-14 08:21:49.092782197 +0000 UTC m=+0.028550218 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:21:49 np0005486808 podman[74524]: 2025-10-14 08:21:49.19182436 +0000 UTC m=+0.127592381 container init 295afd06ae4ac197c2c5a35bf227a06f697efaa30884a95475fffc3fd4b6ecf9 (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 14 04:21:49 np0005486808 podman[74524]: 2025-10-14 08:21:49.210223466 +0000 UTC m=+0.145991437 container start 295afd06ae4ac197c2c5a35bf227a06f697efaa30884a95475fffc3fd4b6ecf9 (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Oct 14 04:21:49 np0005486808 bash[74524]: 295afd06ae4ac197c2c5a35bf227a06f697efaa30884a95475fffc3fd4b6ecf9
Oct 14 04:21:49 np0005486808 systemd[1]: Started Ceph mgr.compute-0.euuwqu for c49aadb6-9b04-5cb1-8f5f-4c91676c568e.
Oct 14 04:21:49 np0005486808 ceph-mgr[74543]: set uid:gid to 167:167 (ceph:ceph)
Oct 14 04:21:49 np0005486808 ceph-mgr[74543]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct 14 04:21:49 np0005486808 ceph-mgr[74543]: pidfile_write: ignore empty --pid-file
Oct 14 04:21:49 np0005486808 podman[74544]: 2025-10-14 08:21:49.339145484 +0000 UTC m=+0.073534805 container create 41a3ed03d5b10b14b093c3cbe598692930e10d0b70d0e165e67f93f4247cd8ec (image=quay.io/ceph/ceph:v18, name=crazy_moore, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:21:49 np0005486808 systemd[1]: Started libpod-conmon-41a3ed03d5b10b14b093c3cbe598692930e10d0b70d0e165e67f93f4247cd8ec.scope.
Oct 14 04:21:49 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'alerts'
Oct 14 04:21:49 np0005486808 podman[74544]: 2025-10-14 08:21:49.310854054 +0000 UTC m=+0.045243405 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:21:49 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:21:49 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a52b5e7f0a5afa07d6d8beb287fdd6ab985976f38fc38d14edf6a59f13bde90/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:49 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a52b5e7f0a5afa07d6d8beb287fdd6ab985976f38fc38d14edf6a59f13bde90/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:49 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a52b5e7f0a5afa07d6d8beb287fdd6ab985976f38fc38d14edf6a59f13bde90/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:49 np0005486808 podman[74544]: 2025-10-14 08:21:49.448559413 +0000 UTC m=+0.182948754 container init 41a3ed03d5b10b14b093c3cbe598692930e10d0b70d0e165e67f93f4247cd8ec (image=quay.io/ceph/ceph:v18, name=crazy_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 14 04:21:49 np0005486808 podman[74544]: 2025-10-14 08:21:49.460177965 +0000 UTC m=+0.194567276 container start 41a3ed03d5b10b14b093c3cbe598692930e10d0b70d0e165e67f93f4247cd8ec (image=quay.io/ceph/ceph:v18, name=crazy_moore, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:21:49 np0005486808 podman[74544]: 2025-10-14 08:21:49.464409076 +0000 UTC m=+0.198798387 container attach 41a3ed03d5b10b14b093c3cbe598692930e10d0b70d0e165e67f93f4247cd8ec (image=quay.io/ceph/ceph:v18, name=crazy_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:21:49 np0005486808 ceph-mgr[74543]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 14 04:21:49 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'balancer'
Oct 14 04:21:49 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:21:49.716+0000 7fb011bc9140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 14 04:21:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 14 04:21:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2234784528' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 14 04:21:49 np0005486808 crazy_moore[74585]: 
Oct 14 04:21:49 np0005486808 crazy_moore[74585]: {
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:    "fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:    "health": {
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:        "status": "HEALTH_OK",
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:        "checks": {},
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:        "mutes": []
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:    },
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:    "election_epoch": 5,
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:    "quorum": [
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:        0
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:    ],
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:    "quorum_names": [
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:        "compute-0"
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:    ],
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:    "quorum_age": 2,
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:    "monmap": {
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:        "epoch": 1,
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:        "min_mon_release_name": "reef",
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:        "num_mons": 1
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:    },
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:    "osdmap": {
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:        "epoch": 1,
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:        "num_osds": 0,
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:        "num_up_osds": 0,
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:        "osd_up_since": 0,
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:        "num_in_osds": 0,
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:        "osd_in_since": 0,
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:        "num_remapped_pgs": 0
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:    },
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:    "pgmap": {
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:        "pgs_by_state": [],
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:        "num_pgs": 0,
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:        "num_pools": 0,
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:        "num_objects": 0,
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:        "data_bytes": 0,
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:        "bytes_used": 0,
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:        "bytes_avail": 0,
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:        "bytes_total": 0
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:    },
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:    "fsmap": {
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:        "epoch": 1,
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:        "by_rank": [],
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:        "up:standby": 0
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:    },
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:    "mgrmap": {
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:        "available": false,
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:        "num_standbys": 0,
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:        "modules": [
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:            "iostat",
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:            "nfs",
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:            "restful"
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:        ],
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:        "services": {}
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:    },
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:    "servicemap": {
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:        "epoch": 1,
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:        "modified": "2025-10-14T08:21:43.857901+0000",
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:        "services": {}
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:    },
Oct 14 04:21:49 np0005486808 crazy_moore[74585]:    "progress_events": {}
Oct 14 04:21:49 np0005486808 crazy_moore[74585]: }
Oct 14 04:21:49 np0005486808 systemd[1]: libpod-41a3ed03d5b10b14b093c3cbe598692930e10d0b70d0e165e67f93f4247cd8ec.scope: Deactivated successfully.
Oct 14 04:21:49 np0005486808 podman[74544]: 2025-10-14 08:21:49.939487545 +0000 UTC m=+0.673876906 container died 41a3ed03d5b10b14b093c3cbe598692930e10d0b70d0e165e67f93f4247cd8ec (image=quay.io/ceph/ceph:v18, name=crazy_moore, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 14 04:21:49 np0005486808 systemd[1]: var-lib-containers-storage-overlay-1a52b5e7f0a5afa07d6d8beb287fdd6ab985976f38fc38d14edf6a59f13bde90-merged.mount: Deactivated successfully.
Oct 14 04:21:49 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:21:49.979+0000 7fb011bc9140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 14 04:21:49 np0005486808 ceph-mgr[74543]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 14 04:21:49 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'cephadm'
Oct 14 04:21:49 np0005486808 podman[74544]: 2025-10-14 08:21:49.996670599 +0000 UTC m=+0.731059910 container remove 41a3ed03d5b10b14b093c3cbe598692930e10d0b70d0e165e67f93f4247cd8ec (image=quay.io/ceph/ceph:v18, name=crazy_moore, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 14 04:21:50 np0005486808 systemd[1]: libpod-conmon-41a3ed03d5b10b14b093c3cbe598692930e10d0b70d0e165e67f93f4247cd8ec.scope: Deactivated successfully.
Oct 14 04:21:51 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'crash'
Oct 14 04:21:52 np0005486808 podman[74634]: 2025-10-14 08:21:52.113235039 +0000 UTC m=+0.074998167 container create 2a9135955d1146ab5a648174393e427e69320c9142d77af81dc0df3e461be4f4 (image=quay.io/ceph/ceph:v18, name=nice_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 14 04:21:52 np0005486808 systemd[1]: Started libpod-conmon-2a9135955d1146ab5a648174393e427e69320c9142d77af81dc0df3e461be4f4.scope.
Oct 14 04:21:52 np0005486808 podman[74634]: 2025-10-14 08:21:52.081574163 +0000 UTC m=+0.043337361 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:21:52 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:21:52 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d214542ca536b198929cdcf656fcfaeaf55b66802a3f87cc248a31acb718599c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:52 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d214542ca536b198929cdcf656fcfaeaf55b66802a3f87cc248a31acb718599c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:52 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d214542ca536b198929cdcf656fcfaeaf55b66802a3f87cc248a31acb718599c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:52 np0005486808 podman[74634]: 2025-10-14 08:21:52.20629721 +0000 UTC m=+0.168060368 container init 2a9135955d1146ab5a648174393e427e69320c9142d77af81dc0df3e461be4f4 (image=quay.io/ceph/ceph:v18, name=nice_montalcini, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:21:52 np0005486808 podman[74634]: 2025-10-14 08:21:52.215846424 +0000 UTC m=+0.177609552 container start 2a9135955d1146ab5a648174393e427e69320c9142d77af81dc0df3e461be4f4 (image=quay.io/ceph/ceph:v18, name=nice_montalcini, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:21:52 np0005486808 podman[74634]: 2025-10-14 08:21:52.220048334 +0000 UTC m=+0.181811502 container attach 2a9135955d1146ab5a648174393e427e69320c9142d77af81dc0df3e461be4f4 (image=quay.io/ceph/ceph:v18, name=nice_montalcini, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 14 04:21:52 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:21:52.241+0000 7fb011bc9140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 14 04:21:52 np0005486808 ceph-mgr[74543]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 14 04:21:52 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'dashboard'
Oct 14 04:21:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 14 04:21:52 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1109467077' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]: 
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]: {
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:    "fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:    "health": {
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:        "status": "HEALTH_OK",
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:        "checks": {},
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:        "mutes": []
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:    },
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:    "election_epoch": 5,
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:    "quorum": [
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:        0
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:    ],
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:    "quorum_names": [
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:        "compute-0"
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:    ],
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:    "quorum_age": 5,
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:    "monmap": {
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:        "epoch": 1,
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:        "min_mon_release_name": "reef",
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:        "num_mons": 1
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:    },
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:    "osdmap": {
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:        "epoch": 1,
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:        "num_osds": 0,
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:        "num_up_osds": 0,
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:        "osd_up_since": 0,
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:        "num_in_osds": 0,
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:        "osd_in_since": 0,
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:        "num_remapped_pgs": 0
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:    },
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:    "pgmap": {
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:        "pgs_by_state": [],
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:        "num_pgs": 0,
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:        "num_pools": 0,
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:        "num_objects": 0,
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:        "data_bytes": 0,
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:        "bytes_used": 0,
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:        "bytes_avail": 0,
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:        "bytes_total": 0
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:    },
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:    "fsmap": {
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:        "epoch": 1,
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:        "by_rank": [],
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:        "up:standby": 0
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:    },
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:    "mgrmap": {
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:        "available": false,
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:        "num_standbys": 0,
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:        "modules": [
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:            "iostat",
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:            "nfs",
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:            "restful"
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:        ],
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:        "services": {}
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:    },
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:    "servicemap": {
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:        "epoch": 1,
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:        "modified": "2025-10-14T08:21:43.857901+0000",
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:        "services": {}
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:    },
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]:    "progress_events": {}
Oct 14 04:21:52 np0005486808 nice_montalcini[74650]: }
Oct 14 04:21:52 np0005486808 systemd[1]: libpod-2a9135955d1146ab5a648174393e427e69320c9142d77af81dc0df3e461be4f4.scope: Deactivated successfully.
Oct 14 04:21:52 np0005486808 podman[74676]: 2025-10-14 08:21:52.720487598 +0000 UTC m=+0.032996005 container died 2a9135955d1146ab5a648174393e427e69320c9142d77af81dc0df3e461be4f4 (image=quay.io/ceph/ceph:v18, name=nice_montalcini, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 14 04:21:52 np0005486808 systemd[1]: var-lib-containers-storage-overlay-d214542ca536b198929cdcf656fcfaeaf55b66802a3f87cc248a31acb718599c-merged.mount: Deactivated successfully.
Oct 14 04:21:52 np0005486808 podman[74676]: 2025-10-14 08:21:52.77020576 +0000 UTC m=+0.082714147 container remove 2a9135955d1146ab5a648174393e427e69320c9142d77af81dc0df3e461be4f4 (image=quay.io/ceph/ceph:v18, name=nice_montalcini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 14 04:21:52 np0005486808 systemd[1]: libpod-conmon-2a9135955d1146ab5a648174393e427e69320c9142d77af81dc0df3e461be4f4.scope: Deactivated successfully.
Oct 14 04:21:53 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'devicehealth'
Oct 14 04:21:53 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:21:53.906+0000 7fb011bc9140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 14 04:21:53 np0005486808 ceph-mgr[74543]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 14 04:21:53 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'diskprediction_local'
Oct 14 04:21:54 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 14 04:21:54 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 14 04:21:54 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]:  from numpy import show_config as show_numpy_config
Oct 14 04:21:54 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:21:54.397+0000 7fb011bc9140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 14 04:21:54 np0005486808 ceph-mgr[74543]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 14 04:21:54 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'influx'
Oct 14 04:21:54 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:21:54.619+0000 7fb011bc9140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 14 04:21:54 np0005486808 ceph-mgr[74543]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 14 04:21:54 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'insights'
Oct 14 04:21:54 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'iostat'
Oct 14 04:21:54 np0005486808 podman[74691]: 2025-10-14 08:21:54.892134081 +0000 UTC m=+0.079288349 container create 5911d86e75853790debe156efba760deb86a66e7ee8973ae6d281eacdc26efb1 (image=quay.io/ceph/ceph:v18, name=nice_panini, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:21:54 np0005486808 systemd[1]: Started libpod-conmon-5911d86e75853790debe156efba760deb86a66e7ee8973ae6d281eacdc26efb1.scope.
Oct 14 04:21:54 np0005486808 podman[74691]: 2025-10-14 08:21:54.856712538 +0000 UTC m=+0.043866876 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:21:54 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:21:54 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82c514a070f408f72eb197d1fcbe2d1de533216d2f91dc96f40aee12818b91d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:54 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82c514a070f408f72eb197d1fcbe2d1de533216d2f91dc96f40aee12818b91d0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:54 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82c514a070f408f72eb197d1fcbe2d1de533216d2f91dc96f40aee12818b91d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:54 np0005486808 podman[74691]: 2025-10-14 08:21:54.987604022 +0000 UTC m=+0.174758370 container init 5911d86e75853790debe156efba760deb86a66e7ee8973ae6d281eacdc26efb1 (image=quay.io/ceph/ceph:v18, name=nice_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 14 04:21:54 np0005486808 podman[74691]: 2025-10-14 08:21:54.998552605 +0000 UTC m=+0.185706883 container start 5911d86e75853790debe156efba760deb86a66e7ee8973ae6d281eacdc26efb1 (image=quay.io/ceph/ceph:v18, name=nice_panini, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 14 04:21:55 np0005486808 podman[74691]: 2025-10-14 08:21:55.003732303 +0000 UTC m=+0.190886621 container attach 5911d86e75853790debe156efba760deb86a66e7ee8973ae6d281eacdc26efb1 (image=quay.io/ceph/ceph:v18, name=nice_panini, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:21:55 np0005486808 ceph-mgr[74543]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 14 04:21:55 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'k8sevents'
Oct 14 04:21:55 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:21:55.056+0000 7fb011bc9140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 14 04:21:55 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 14 04:21:55 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/540487088' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 14 04:21:55 np0005486808 nice_panini[74708]: 
Oct 14 04:21:55 np0005486808 nice_panini[74708]: {
Oct 14 04:21:55 np0005486808 nice_panini[74708]:    "fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:21:55 np0005486808 nice_panini[74708]:    "health": {
Oct 14 04:21:55 np0005486808 nice_panini[74708]:        "status": "HEALTH_OK",
Oct 14 04:21:55 np0005486808 nice_panini[74708]:        "checks": {},
Oct 14 04:21:55 np0005486808 nice_panini[74708]:        "mutes": []
Oct 14 04:21:55 np0005486808 nice_panini[74708]:    },
Oct 14 04:21:55 np0005486808 nice_panini[74708]:    "election_epoch": 5,
Oct 14 04:21:55 np0005486808 nice_panini[74708]:    "quorum": [
Oct 14 04:21:55 np0005486808 nice_panini[74708]:        0
Oct 14 04:21:55 np0005486808 nice_panini[74708]:    ],
Oct 14 04:21:55 np0005486808 nice_panini[74708]:    "quorum_names": [
Oct 14 04:21:55 np0005486808 nice_panini[74708]:        "compute-0"
Oct 14 04:21:55 np0005486808 nice_panini[74708]:    ],
Oct 14 04:21:55 np0005486808 nice_panini[74708]:    "quorum_age": 8,
Oct 14 04:21:55 np0005486808 nice_panini[74708]:    "monmap": {
Oct 14 04:21:55 np0005486808 nice_panini[74708]:        "epoch": 1,
Oct 14 04:21:55 np0005486808 nice_panini[74708]:        "min_mon_release_name": "reef",
Oct 14 04:21:55 np0005486808 nice_panini[74708]:        "num_mons": 1
Oct 14 04:21:55 np0005486808 nice_panini[74708]:    },
Oct 14 04:21:55 np0005486808 nice_panini[74708]:    "osdmap": {
Oct 14 04:21:55 np0005486808 nice_panini[74708]:        "epoch": 1,
Oct 14 04:21:55 np0005486808 nice_panini[74708]:        "num_osds": 0,
Oct 14 04:21:55 np0005486808 nice_panini[74708]:        "num_up_osds": 0,
Oct 14 04:21:55 np0005486808 nice_panini[74708]:        "osd_up_since": 0,
Oct 14 04:21:55 np0005486808 nice_panini[74708]:        "num_in_osds": 0,
Oct 14 04:21:55 np0005486808 nice_panini[74708]:        "osd_in_since": 0,
Oct 14 04:21:55 np0005486808 nice_panini[74708]:        "num_remapped_pgs": 0
Oct 14 04:21:55 np0005486808 nice_panini[74708]:    },
Oct 14 04:21:55 np0005486808 nice_panini[74708]:    "pgmap": {
Oct 14 04:21:55 np0005486808 nice_panini[74708]:        "pgs_by_state": [],
Oct 14 04:21:55 np0005486808 nice_panini[74708]:        "num_pgs": 0,
Oct 14 04:21:55 np0005486808 nice_panini[74708]:        "num_pools": 0,
Oct 14 04:21:55 np0005486808 nice_panini[74708]:        "num_objects": 0,
Oct 14 04:21:55 np0005486808 nice_panini[74708]:        "data_bytes": 0,
Oct 14 04:21:55 np0005486808 nice_panini[74708]:        "bytes_used": 0,
Oct 14 04:21:55 np0005486808 nice_panini[74708]:        "bytes_avail": 0,
Oct 14 04:21:55 np0005486808 nice_panini[74708]:        "bytes_total": 0
Oct 14 04:21:55 np0005486808 nice_panini[74708]:    },
Oct 14 04:21:55 np0005486808 nice_panini[74708]:    "fsmap": {
Oct 14 04:21:55 np0005486808 nice_panini[74708]:        "epoch": 1,
Oct 14 04:21:55 np0005486808 nice_panini[74708]:        "by_rank": [],
Oct 14 04:21:55 np0005486808 nice_panini[74708]:        "up:standby": 0
Oct 14 04:21:55 np0005486808 nice_panini[74708]:    },
Oct 14 04:21:55 np0005486808 nice_panini[74708]:    "mgrmap": {
Oct 14 04:21:55 np0005486808 nice_panini[74708]:        "available": false,
Oct 14 04:21:55 np0005486808 nice_panini[74708]:        "num_standbys": 0,
Oct 14 04:21:55 np0005486808 nice_panini[74708]:        "modules": [
Oct 14 04:21:55 np0005486808 nice_panini[74708]:            "iostat",
Oct 14 04:21:55 np0005486808 nice_panini[74708]:            "nfs",
Oct 14 04:21:55 np0005486808 nice_panini[74708]:            "restful"
Oct 14 04:21:55 np0005486808 nice_panini[74708]:        ],
Oct 14 04:21:55 np0005486808 nice_panini[74708]:        "services": {}
Oct 14 04:21:55 np0005486808 nice_panini[74708]:    },
Oct 14 04:21:55 np0005486808 nice_panini[74708]:    "servicemap": {
Oct 14 04:21:55 np0005486808 nice_panini[74708]:        "epoch": 1,
Oct 14 04:21:55 np0005486808 nice_panini[74708]:        "modified": "2025-10-14T08:21:43.857901+0000",
Oct 14 04:21:55 np0005486808 nice_panini[74708]:        "services": {}
Oct 14 04:21:55 np0005486808 nice_panini[74708]:    },
Oct 14 04:21:55 np0005486808 nice_panini[74708]:    "progress_events": {}
Oct 14 04:21:55 np0005486808 nice_panini[74708]: }
Oct 14 04:21:55 np0005486808 systemd[1]: libpod-5911d86e75853790debe156efba760deb86a66e7ee8973ae6d281eacdc26efb1.scope: Deactivated successfully.
Oct 14 04:21:55 np0005486808 podman[74691]: 2025-10-14 08:21:55.429221893 +0000 UTC m=+0.616376171 container died 5911d86e75853790debe156efba760deb86a66e7ee8973ae6d281eacdc26efb1 (image=quay.io/ceph/ceph:v18, name=nice_panini, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 14 04:21:55 np0005486808 systemd[1]: var-lib-containers-storage-overlay-82c514a070f408f72eb197d1fcbe2d1de533216d2f91dc96f40aee12818b91d0-merged.mount: Deactivated successfully.
Oct 14 04:21:55 np0005486808 podman[74691]: 2025-10-14 08:21:55.484728751 +0000 UTC m=+0.671882999 container remove 5911d86e75853790debe156efba760deb86a66e7ee8973ae6d281eacdc26efb1 (image=quay.io/ceph/ceph:v18, name=nice_panini, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:21:55 np0005486808 systemd[1]: libpod-conmon-5911d86e75853790debe156efba760deb86a66e7ee8973ae6d281eacdc26efb1.scope: Deactivated successfully.
Oct 14 04:21:56 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'localpool'
Oct 14 04:21:56 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'mds_autoscaler'
Oct 14 04:21:57 np0005486808 podman[74745]: 2025-10-14 08:21:57.578168167 +0000 UTC m=+0.060781059 container create 3c9582d171a4ead651ba1299a9a7bbea24e8863e9e6dcdfeb9279a0086f66d3c (image=quay.io/ceph/ceph:v18, name=pensive_jones, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 14 04:21:57 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'mirroring'
Oct 14 04:21:57 np0005486808 systemd[1]: Started libpod-conmon-3c9582d171a4ead651ba1299a9a7bbea24e8863e9e6dcdfeb9279a0086f66d3c.scope.
Oct 14 04:21:57 np0005486808 podman[74745]: 2025-10-14 08:21:57.556962641 +0000 UTC m=+0.039575533 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:21:57 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:21:57 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4257b02f635b336ee80f1db119efc2549eb94d1be9498347beb89b4d6ee81dc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:57 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4257b02f635b336ee80f1db119efc2549eb94d1be9498347beb89b4d6ee81dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:57 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4257b02f635b336ee80f1db119efc2549eb94d1be9498347beb89b4d6ee81dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:21:57 np0005486808 podman[74745]: 2025-10-14 08:21:57.673772682 +0000 UTC m=+0.156385634 container init 3c9582d171a4ead651ba1299a9a7bbea24e8863e9e6dcdfeb9279a0086f66d3c (image=quay.io/ceph/ceph:v18, name=pensive_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 14 04:21:57 np0005486808 podman[74745]: 2025-10-14 08:21:57.683050527 +0000 UTC m=+0.165663429 container start 3c9582d171a4ead651ba1299a9a7bbea24e8863e9e6dcdfeb9279a0086f66d3c (image=quay.io/ceph/ceph:v18, name=pensive_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:21:57 np0005486808 podman[74745]: 2025-10-14 08:21:57.68699951 +0000 UTC m=+0.169612462 container attach 3c9582d171a4ead651ba1299a9a7bbea24e8863e9e6dcdfeb9279a0086f66d3c (image=quay.io/ceph/ceph:v18, name=pensive_jones, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 14 04:21:57 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'nfs'
Oct 14 04:21:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 14 04:21:58 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3761242554' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 14 04:21:58 np0005486808 pensive_jones[74762]: 
Oct 14 04:21:58 np0005486808 pensive_jones[74762]: {
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:    "fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:    "health": {
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:        "status": "HEALTH_OK",
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:        "checks": {},
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:        "mutes": []
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:    },
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:    "election_epoch": 5,
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:    "quorum": [
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:        0
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:    ],
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:    "quorum_names": [
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:        "compute-0"
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:    ],
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:    "quorum_age": 11,
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:    "monmap": {
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:        "epoch": 1,
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:        "min_mon_release_name": "reef",
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:        "num_mons": 1
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:    },
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:    "osdmap": {
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:        "epoch": 1,
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:        "num_osds": 0,
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:        "num_up_osds": 0,
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:        "osd_up_since": 0,
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:        "num_in_osds": 0,
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:        "osd_in_since": 0,
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:        "num_remapped_pgs": 0
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:    },
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:    "pgmap": {
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:        "pgs_by_state": [],
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:        "num_pgs": 0,
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:        "num_pools": 0,
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:        "num_objects": 0,
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:        "data_bytes": 0,
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:        "bytes_used": 0,
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:        "bytes_avail": 0,
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:        "bytes_total": 0
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:    },
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:    "fsmap": {
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:        "epoch": 1,
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:        "by_rank": [],
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:        "up:standby": 0
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:    },
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:    "mgrmap": {
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:        "available": false,
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:        "num_standbys": 0,
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:        "modules": [
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:            "iostat",
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:            "nfs",
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:            "restful"
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:        ],
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:        "services": {}
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:    },
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:    "servicemap": {
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:        "epoch": 1,
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:        "modified": "2025-10-14T08:21:43.857901+0000",
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:        "services": {}
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:    },
Oct 14 04:21:58 np0005486808 pensive_jones[74762]:    "progress_events": {}
Oct 14 04:21:58 np0005486808 pensive_jones[74762]: }
Oct 14 04:21:58 np0005486808 systemd[1]: libpod-3c9582d171a4ead651ba1299a9a7bbea24e8863e9e6dcdfeb9279a0086f66d3c.scope: Deactivated successfully.
Oct 14 04:21:58 np0005486808 podman[74745]: 2025-10-14 08:21:58.102359891 +0000 UTC m=+0.584972753 container died 3c9582d171a4ead651ba1299a9a7bbea24e8863e9e6dcdfeb9279a0086f66d3c (image=quay.io/ceph/ceph:v18, name=pensive_jones, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:21:58 np0005486808 systemd[1]: var-lib-containers-storage-overlay-f4257b02f635b336ee80f1db119efc2549eb94d1be9498347beb89b4d6ee81dc-merged.mount: Deactivated successfully.
Oct 14 04:21:58 np0005486808 podman[74745]: 2025-10-14 08:21:58.162407858 +0000 UTC m=+0.645020750 container remove 3c9582d171a4ead651ba1299a9a7bbea24e8863e9e6dcdfeb9279a0086f66d3c (image=quay.io/ceph/ceph:v18, name=pensive_jones, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Oct 14 04:21:58 np0005486808 systemd[1]: libpod-conmon-3c9582d171a4ead651ba1299a9a7bbea24e8863e9e6dcdfeb9279a0086f66d3c.scope: Deactivated successfully.
Oct 14 04:21:58 np0005486808 ceph-mgr[74543]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 14 04:21:58 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'orchestrator'
Oct 14 04:21:58 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:21:58.527+0000 7fb011bc9140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 14 04:21:59 np0005486808 ceph-mgr[74543]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 14 04:21:59 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'osd_perf_query'
Oct 14 04:21:59 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:21:59.166+0000 7fb011bc9140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 14 04:21:59 np0005486808 ceph-mgr[74543]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 14 04:21:59 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'osd_support'
Oct 14 04:21:59 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:21:59.442+0000 7fb011bc9140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 14 04:21:59 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:21:59.692+0000 7fb011bc9140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 14 04:21:59 np0005486808 ceph-mgr[74543]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 14 04:21:59 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'pg_autoscaler'
Oct 14 04:22:00 np0005486808 ceph-mgr[74543]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 14 04:22:00 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:22:00.010+0000 7fb011bc9140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 14 04:22:00 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'progress'
Oct 14 04:22:00 np0005486808 podman[74799]: 2025-10-14 08:22:00.249231947 +0000 UTC m=+0.053364538 container create 29ba1f4930a45375c15447dcff12d1378c2c8505d335878a6e22effaa0faeb7a (image=quay.io/ceph/ceph:v18, name=affectionate_bardeen, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 14 04:22:00 np0005486808 ceph-mgr[74543]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 14 04:22:00 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'prometheus'
Oct 14 04:22:00 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:22:00.250+0000 7fb011bc9140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 14 04:22:00 np0005486808 systemd[1]: Started libpod-conmon-29ba1f4930a45375c15447dcff12d1378c2c8505d335878a6e22effaa0faeb7a.scope.
Oct 14 04:22:00 np0005486808 podman[74799]: 2025-10-14 08:22:00.221080041 +0000 UTC m=+0.025212672 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:22:00 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:22:00 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/258bb8d655e60cf56ab5500ad995270a1b5c80456eae703624e4754ed952bcda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:00 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/258bb8d655e60cf56ab5500ad995270a1b5c80456eae703624e4754ed952bcda/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:00 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/258bb8d655e60cf56ab5500ad995270a1b5c80456eae703624e4754ed952bcda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:00 np0005486808 podman[74799]: 2025-10-14 08:22:00.340896218 +0000 UTC m=+0.145028859 container init 29ba1f4930a45375c15447dcff12d1378c2c8505d335878a6e22effaa0faeb7a (image=quay.io/ceph/ceph:v18, name=affectionate_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 14 04:22:00 np0005486808 podman[74799]: 2025-10-14 08:22:00.357694629 +0000 UTC m=+0.161827220 container start 29ba1f4930a45375c15447dcff12d1378c2c8505d335878a6e22effaa0faeb7a (image=quay.io/ceph/ceph:v18, name=affectionate_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:22:00 np0005486808 podman[74799]: 2025-10-14 08:22:00.361733304 +0000 UTC m=+0.165865935 container attach 29ba1f4930a45375c15447dcff12d1378c2c8505d335878a6e22effaa0faeb7a (image=quay.io/ceph/ceph:v18, name=affectionate_bardeen, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:22:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 14 04:22:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/423669388' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]: 
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]: {
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:    "fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:    "health": {
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:        "status": "HEALTH_OK",
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:        "checks": {},
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:        "mutes": []
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:    },
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:    "election_epoch": 5,
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:    "quorum": [
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:        0
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:    ],
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:    "quorum_names": [
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:        "compute-0"
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:    ],
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:    "quorum_age": 13,
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:    "monmap": {
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:        "epoch": 1,
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:        "min_mon_release_name": "reef",
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:        "num_mons": 1
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:    },
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:    "osdmap": {
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:        "epoch": 1,
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:        "num_osds": 0,
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:        "num_up_osds": 0,
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:        "osd_up_since": 0,
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:        "num_in_osds": 0,
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:        "osd_in_since": 0,
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:        "num_remapped_pgs": 0
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:    },
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:    "pgmap": {
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:        "pgs_by_state": [],
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:        "num_pgs": 0,
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:        "num_pools": 0,
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:        "num_objects": 0,
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:        "data_bytes": 0,
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:        "bytes_used": 0,
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:        "bytes_avail": 0,
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:        "bytes_total": 0
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:    },
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:    "fsmap": {
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:        "epoch": 1,
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:        "by_rank": [],
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:        "up:standby": 0
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:    },
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:    "mgrmap": {
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:        "available": false,
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:        "num_standbys": 0,
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:        "modules": [
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:            "iostat",
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:            "nfs",
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:            "restful"
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:        ],
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:        "services": {}
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:    },
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:    "servicemap": {
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:        "epoch": 1,
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:        "modified": "2025-10-14T08:21:43.857901+0000",
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:        "services": {}
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:    },
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]:    "progress_events": {}
Oct 14 04:22:00 np0005486808 affectionate_bardeen[74815]: }
Oct 14 04:22:00 np0005486808 systemd[1]: libpod-29ba1f4930a45375c15447dcff12d1378c2c8505d335878a6e22effaa0faeb7a.scope: Deactivated successfully.
Oct 14 04:22:00 np0005486808 podman[74799]: 2025-10-14 08:22:00.761759535 +0000 UTC m=+0.565892126 container died 29ba1f4930a45375c15447dcff12d1378c2c8505d335878a6e22effaa0faeb7a (image=quay.io/ceph/ceph:v18, name=affectionate_bardeen, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:22:00 np0005486808 systemd[1]: var-lib-containers-storage-overlay-258bb8d655e60cf56ab5500ad995270a1b5c80456eae703624e4754ed952bcda-merged.mount: Deactivated successfully.
Oct 14 04:22:00 np0005486808 podman[74799]: 2025-10-14 08:22:00.821129783 +0000 UTC m=+0.625262374 container remove 29ba1f4930a45375c15447dcff12d1378c2c8505d335878a6e22effaa0faeb7a (image=quay.io/ceph/ceph:v18, name=affectionate_bardeen, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:22:00 np0005486808 systemd[1]: libpod-conmon-29ba1f4930a45375c15447dcff12d1378c2c8505d335878a6e22effaa0faeb7a.scope: Deactivated successfully.
Oct 14 04:22:01 np0005486808 ceph-mgr[74543]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 14 04:22:01 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'rbd_support'
Oct 14 04:22:01 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:22:01.278+0000 7fb011bc9140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 14 04:22:01 np0005486808 ceph-mgr[74543]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 14 04:22:01 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'restful'
Oct 14 04:22:01 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:22:01.615+0000 7fb011bc9140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 14 04:22:02 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'rgw'
Oct 14 04:22:02 np0005486808 podman[74857]: 2025-10-14 08:22:02.928931062 +0000 UTC m=+0.075178361 container create 96aacc2f0e8fe0aef1cf0875024c1873f65b451e7e548d41e0a6ae93ecf6742b (image=quay.io/ceph/ceph:v18, name=goofy_mestorf, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:22:02 np0005486808 systemd[1]: Started libpod-conmon-96aacc2f0e8fe0aef1cf0875024c1873f65b451e7e548d41e0a6ae93ecf6742b.scope.
Oct 14 04:22:02 np0005486808 podman[74857]: 2025-10-14 08:22:02.901203709 +0000 UTC m=+0.047451038 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:22:02 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:22:03 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8ec48a5f01e849525cc01ebbb4ceb2f1ebfac45caca44bab3f1c874095e3445/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:03 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8ec48a5f01e849525cc01ebbb4ceb2f1ebfac45caca44bab3f1c874095e3445/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:03 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8ec48a5f01e849525cc01ebbb4ceb2f1ebfac45caca44bab3f1c874095e3445/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:03 np0005486808 podman[74857]: 2025-10-14 08:22:03.022758456 +0000 UTC m=+0.169005825 container init 96aacc2f0e8fe0aef1cf0875024c1873f65b451e7e548d41e0a6ae93ecf6742b (image=quay.io/ceph/ceph:v18, name=goofy_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 14 04:22:03 np0005486808 podman[74857]: 2025-10-14 08:22:03.027520472 +0000 UTC m=+0.173767761 container start 96aacc2f0e8fe0aef1cf0875024c1873f65b451e7e548d41e0a6ae93ecf6742b (image=quay.io/ceph/ceph:v18, name=goofy_mestorf, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:22:03 np0005486808 podman[74857]: 2025-10-14 08:22:03.030639501 +0000 UTC m=+0.176886880 container attach 96aacc2f0e8fe0aef1cf0875024c1873f65b451e7e548d41e0a6ae93ecf6742b (image=quay.io/ceph/ceph:v18, name=goofy_mestorf, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 04:22:03 np0005486808 ceph-mgr[74543]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 14 04:22:03 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'rook'
Oct 14 04:22:03 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:22:03.086+0000 7fb011bc9140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 14 04:22:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 14 04:22:03 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4183462732' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]: 
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]: {
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:    "fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:    "health": {
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:        "status": "HEALTH_OK",
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:        "checks": {},
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:        "mutes": []
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:    },
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:    "election_epoch": 5,
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:    "quorum": [
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:        0
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:    ],
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:    "quorum_names": [
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:        "compute-0"
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:    ],
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:    "quorum_age": 16,
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:    "monmap": {
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:        "epoch": 1,
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:        "min_mon_release_name": "reef",
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:        "num_mons": 1
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:    },
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:    "osdmap": {
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:        "epoch": 1,
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:        "num_osds": 0,
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:        "num_up_osds": 0,
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:        "osd_up_since": 0,
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:        "num_in_osds": 0,
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:        "osd_in_since": 0,
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:        "num_remapped_pgs": 0
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:    },
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:    "pgmap": {
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:        "pgs_by_state": [],
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:        "num_pgs": 0,
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:        "num_pools": 0,
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:        "num_objects": 0,
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:        "data_bytes": 0,
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:        "bytes_used": 0,
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:        "bytes_avail": 0,
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:        "bytes_total": 0
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:    },
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:    "fsmap": {
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:        "epoch": 1,
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:        "by_rank": [],
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:        "up:standby": 0
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:    },
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:    "mgrmap": {
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:        "available": false,
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:        "num_standbys": 0,
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:        "modules": [
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:            "iostat",
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:            "nfs",
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:            "restful"
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:        ],
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:        "services": {}
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:    },
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:    "servicemap": {
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:        "epoch": 1,
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:        "modified": "2025-10-14T08:21:43.857901+0000",
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:        "services": {}
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:    },
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]:    "progress_events": {}
Oct 14 04:22:03 np0005486808 goofy_mestorf[74874]: }
Oct 14 04:22:03 np0005486808 systemd[1]: libpod-96aacc2f0e8fe0aef1cf0875024c1873f65b451e7e548d41e0a6ae93ecf6742b.scope: Deactivated successfully.
Oct 14 04:22:03 np0005486808 podman[74857]: 2025-10-14 08:22:03.422382128 +0000 UTC m=+0.568629417 container died 96aacc2f0e8fe0aef1cf0875024c1873f65b451e7e548d41e0a6ae93ecf6742b (image=quay.io/ceph/ceph:v18, name=goofy_mestorf, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:22:03 np0005486808 systemd[1]: var-lib-containers-storage-overlay-f8ec48a5f01e849525cc01ebbb4ceb2f1ebfac45caca44bab3f1c874095e3445-merged.mount: Deactivated successfully.
Oct 14 04:22:03 np0005486808 podman[74857]: 2025-10-14 08:22:03.459250295 +0000 UTC m=+0.605497584 container remove 96aacc2f0e8fe0aef1cf0875024c1873f65b451e7e548d41e0a6ae93ecf6742b (image=quay.io/ceph/ceph:v18, name=goofy_mestorf, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:22:03 np0005486808 systemd[1]: libpod-conmon-96aacc2f0e8fe0aef1cf0875024c1873f65b451e7e548d41e0a6ae93ecf6742b.scope: Deactivated successfully.
Oct 14 04:22:05 np0005486808 ceph-mgr[74543]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 14 04:22:05 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'selftest'
Oct 14 04:22:05 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:22:05.283+0000 7fb011bc9140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 14 04:22:05 np0005486808 ceph-mgr[74543]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 14 04:22:05 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:22:05.524+0000 7fb011bc9140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 14 04:22:05 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'snap_schedule'
Oct 14 04:22:05 np0005486808 podman[74911]: 2025-10-14 08:22:05.575947785 +0000 UTC m=+0.082008633 container create 05fd60c99981198b15ec3cae0a971657d62cc5a2b202104bc22980300099c1dd (image=quay.io/ceph/ceph:v18, name=hopeful_mccarthy, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 14 04:22:05 np0005486808 systemd[1]: Started libpod-conmon-05fd60c99981198b15ec3cae0a971657d62cc5a2b202104bc22980300099c1dd.scope.
Oct 14 04:22:05 np0005486808 podman[74911]: 2025-10-14 08:22:05.535544489 +0000 UTC m=+0.041605387 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:22:05 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:22:05 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29de4a8b9a8cfc4a9e2df612a3906d034d0bc2579cb383e7e6b88f0f7766f1bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:05 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29de4a8b9a8cfc4a9e2df612a3906d034d0bc2579cb383e7e6b88f0f7766f1bb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:05 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29de4a8b9a8cfc4a9e2df612a3906d034d0bc2579cb383e7e6b88f0f7766f1bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:05 np0005486808 podman[74911]: 2025-10-14 08:22:05.688907476 +0000 UTC m=+0.194968384 container init 05fd60c99981198b15ec3cae0a971657d62cc5a2b202104bc22980300099c1dd (image=quay.io/ceph/ceph:v18, name=hopeful_mccarthy, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:22:05 np0005486808 podman[74911]: 2025-10-14 08:22:05.699748309 +0000 UTC m=+0.205809157 container start 05fd60c99981198b15ec3cae0a971657d62cc5a2b202104bc22980300099c1dd (image=quay.io/ceph/ceph:v18, name=hopeful_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:22:05 np0005486808 podman[74911]: 2025-10-14 08:22:05.705283458 +0000 UTC m=+0.211344366 container attach 05fd60c99981198b15ec3cae0a971657d62cc5a2b202104bc22980300099c1dd (image=quay.io/ceph/ceph:v18, name=hopeful_mccarthy, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 14 04:22:05 np0005486808 ceph-mgr[74543]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 14 04:22:05 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'stats'
Oct 14 04:22:05 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:22:05.767+0000 7fb011bc9140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 14 04:22:06 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'status'
Oct 14 04:22:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 14 04:22:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1132923281' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]: 
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]: {
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:    "fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:    "health": {
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:        "status": "HEALTH_OK",
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:        "checks": {},
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:        "mutes": []
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:    },
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:    "election_epoch": 5,
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:    "quorum": [
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:        0
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:    ],
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:    "quorum_names": [
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:        "compute-0"
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:    ],
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:    "quorum_age": 19,
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:    "monmap": {
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:        "epoch": 1,
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:        "min_mon_release_name": "reef",
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:        "num_mons": 1
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:    },
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:    "osdmap": {
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:        "epoch": 1,
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:        "num_osds": 0,
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:        "num_up_osds": 0,
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:        "osd_up_since": 0,
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:        "num_in_osds": 0,
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:        "osd_in_since": 0,
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:        "num_remapped_pgs": 0
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:    },
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:    "pgmap": {
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:        "pgs_by_state": [],
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:        "num_pgs": 0,
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:        "num_pools": 0,
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:        "num_objects": 0,
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:        "data_bytes": 0,
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:        "bytes_used": 0,
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:        "bytes_avail": 0,
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:        "bytes_total": 0
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:    },
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:    "fsmap": {
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:        "epoch": 1,
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:        "by_rank": [],
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:        "up:standby": 0
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:    },
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:    "mgrmap": {
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:        "available": false,
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:        "num_standbys": 0,
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:        "modules": [
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:            "iostat",
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:            "nfs",
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:            "restful"
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:        ],
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:        "services": {}
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:    },
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:    "servicemap": {
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:        "epoch": 1,
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:        "modified": "2025-10-14T08:21:43.857901+0000",
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:        "services": {}
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:    },
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]:    "progress_events": {}
Oct 14 04:22:06 np0005486808 hopeful_mccarthy[74927]: }
Oct 14 04:22:06 np0005486808 systemd[1]: libpod-05fd60c99981198b15ec3cae0a971657d62cc5a2b202104bc22980300099c1dd.scope: Deactivated successfully.
Oct 14 04:22:06 np0005486808 podman[74911]: 2025-10-14 08:22:06.150477644 +0000 UTC m=+0.656538492 container died 05fd60c99981198b15ec3cae0a971657d62cc5a2b202104bc22980300099c1dd (image=quay.io/ceph/ceph:v18, name=hopeful_mccarthy, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 04:22:06 np0005486808 systemd[1]: var-lib-containers-storage-overlay-29de4a8b9a8cfc4a9e2df612a3906d034d0bc2579cb383e7e6b88f0f7766f1bb-merged.mount: Deactivated successfully.
Oct 14 04:22:06 np0005486808 podman[74911]: 2025-10-14 08:22:06.21755153 +0000 UTC m=+0.723612348 container remove 05fd60c99981198b15ec3cae0a971657d62cc5a2b202104bc22980300099c1dd (image=quay.io/ceph/ceph:v18, name=hopeful_mccarthy, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 14 04:22:06 np0005486808 systemd[1]: libpod-conmon-05fd60c99981198b15ec3cae0a971657d62cc5a2b202104bc22980300099c1dd.scope: Deactivated successfully.
Oct 14 04:22:06 np0005486808 ceph-mgr[74543]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 14 04:22:06 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'telegraf'
Oct 14 04:22:06 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:22:06.272+0000 7fb011bc9140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 14 04:22:06 np0005486808 ceph-mgr[74543]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 14 04:22:06 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'telemetry'
Oct 14 04:22:06 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:22:06.522+0000 7fb011bc9140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 14 04:22:07 np0005486808 ceph-mgr[74543]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 14 04:22:07 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'test_orchestrator'
Oct 14 04:22:07 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:22:07.095+0000 7fb011bc9140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 14 04:22:07 np0005486808 ceph-mgr[74543]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 14 04:22:07 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:22:07.712+0000 7fb011bc9140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 14 04:22:07 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'volumes'
Oct 14 04:22:08 np0005486808 podman[74965]: 2025-10-14 08:22:08.293362002 +0000 UTC m=+0.046719546 container create 668ac992596e85b043d4a1d5e2c03883c69f944e18d74d909955dc7eb106a5f7 (image=quay.io/ceph/ceph:v18, name=blissful_cannon, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 14 04:22:08 np0005486808 systemd[1]: Started libpod-conmon-668ac992596e85b043d4a1d5e2c03883c69f944e18d74d909955dc7eb106a5f7.scope.
Oct 14 04:22:08 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:22:08 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51e03646dc425d4b3c73e597585157f5e7bb991d77614031e4a92b8390c6a40f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:08 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51e03646dc425d4b3c73e597585157f5e7bb991d77614031e4a92b8390c6a40f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:08 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51e03646dc425d4b3c73e597585157f5e7bb991d77614031e4a92b8390c6a40f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:08 np0005486808 podman[74965]: 2025-10-14 08:22:08.270653511 +0000 UTC m=+0.024011095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:22:08 np0005486808 podman[74965]: 2025-10-14 08:22:08.390871504 +0000 UTC m=+0.144229058 container init 668ac992596e85b043d4a1d5e2c03883c69f944e18d74d909955dc7eb106a5f7 (image=quay.io/ceph/ceph:v18, name=blissful_cannon, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:22:08 np0005486808 podman[74965]: 2025-10-14 08:22:08.401627055 +0000 UTC m=+0.154984589 container start 668ac992596e85b043d4a1d5e2c03883c69f944e18d74d909955dc7eb106a5f7 (image=quay.io/ceph/ceph:v18, name=blissful_cannon, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:22:08 np0005486808 podman[74965]: 2025-10-14 08:22:08.405033081 +0000 UTC m=+0.158390615 container attach 668ac992596e85b043d4a1d5e2c03883c69f944e18d74d909955dc7eb106a5f7 (image=quay.io/ceph/ceph:v18, name=blissful_cannon, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'zabbix'
Oct 14 04:22:08 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:22:08.409+0000 7fb011bc9140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 14 04:22:08 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:22:08.646+0000 7fb011bc9140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: ms_deliver_dispatch: unhandled message 0x5623786e91e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Oct 14 04:22:08 np0005486808 ceph-mon[74249]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.euuwqu
Oct 14 04:22:08 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.euuwqu(active, starting, since 0.0163541s)
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: mgr handle_mgr_map Activating!
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: mgr handle_mgr_map I am now activating
Oct 14 04:22:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Oct 14 04:22:08 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/4123620900' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 14 04:22:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).mds e1 all = 1
Oct 14 04:22:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct 14 04:22:08 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/4123620900' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 14 04:22:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Oct 14 04:22:08 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/4123620900' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 14 04:22:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct 14 04:22:08 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/4123620900' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 14 04:22:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.euuwqu", "id": "compute-0.euuwqu"} v 0) v1
Oct 14 04:22:08 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/4123620900' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "mgr metadata", "who": "compute-0.euuwqu", "id": "compute-0.euuwqu"}]: dispatch
Oct 14 04:22:08 np0005486808 ceph-mon[74249]: log_channel(cluster) log [INF] : Manager daemon compute-0.euuwqu is now available
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: mgr load Constructed class from module: balancer
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: [balancer INFO root] Starting
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: mgr load Constructed class from module: crash
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_08:22:08
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: [balancer INFO root] No pools available
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: mgr load Constructed class from module: devicehealth
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: mgr load Constructed class from module: iostat
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: [devicehealth INFO root] Starting
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: mgr load Constructed class from module: nfs
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: mgr load Constructed class from module: orchestrator
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: mgr load Constructed class from module: pg_autoscaler
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: mgr load Constructed class from module: progress
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: [progress INFO root] Loading...
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: [progress INFO root] No stored events to load
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: [progress INFO root] Loaded [] historic events
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: [progress INFO root] Loaded OSDMap, ready.
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] recovery thread starting
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] starting setup
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: mgr load Constructed class from module: rbd_support
Oct 14 04:22:08 np0005486808 ceph-mon[74249]: Activating manager daemon compute-0.euuwqu
Oct 14 04:22:08 np0005486808 ceph-mon[74249]: Manager daemon compute-0.euuwqu is now available
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: mgr load Constructed class from module: restful
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: mgr load Constructed class from module: status
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: [restful INFO root] server_addr: :: server_port: 8003
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: mgr load Constructed class from module: telemetry
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 14 04:22:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.euuwqu/mirror_snapshot_schedule"} v 0) v1
Oct 14 04:22:08 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/4123620900' entity='mgr.compute-0.euuwqu' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.euuwqu/mirror_snapshot_schedule"}]: dispatch
Oct 14 04:22:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: [restful WARNING root] server not running: no certificate configured
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] PerfHandler: starting
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TaskHandler: starting
Oct 14 04:22:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.euuwqu/trash_purge_schedule"} v 0) v1
Oct 14 04:22:08 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/4123620900' entity='mgr.compute-0.euuwqu' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.euuwqu/trash_purge_schedule"}]: dispatch
Oct 14 04:22:08 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/4123620900' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 04:22:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] setup complete
Oct 14 04:22:08 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/4123620900' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Oct 14 04:22:08 np0005486808 ceph-mgr[74543]: mgr load Constructed class from module: volumes
Oct 14 04:22:08 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/4123620900' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 14 04:22:08 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3405288966' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]: 
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]: {
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:    "fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:    "health": {
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:        "status": "HEALTH_OK",
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:        "checks": {},
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:        "mutes": []
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:    },
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:    "election_epoch": 5,
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:    "quorum": [
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:        0
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:    ],
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:    "quorum_names": [
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:        "compute-0"
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:    ],
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:    "quorum_age": 21,
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:    "monmap": {
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:        "epoch": 1,
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:        "min_mon_release_name": "reef",
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:        "num_mons": 1
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:    },
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:    "osdmap": {
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:        "epoch": 1,
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:        "num_osds": 0,
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:        "num_up_osds": 0,
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:        "osd_up_since": 0,
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:        "num_in_osds": 0,
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:        "osd_in_since": 0,
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:        "num_remapped_pgs": 0
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:    },
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:    "pgmap": {
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:        "pgs_by_state": [],
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:        "num_pgs": 0,
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:        "num_pools": 0,
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:        "num_objects": 0,
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:        "data_bytes": 0,
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:        "bytes_used": 0,
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:        "bytes_avail": 0,
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:        "bytes_total": 0
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:    },
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:    "fsmap": {
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:        "epoch": 1,
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:        "by_rank": [],
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:        "up:standby": 0
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:    },
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:    "mgrmap": {
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:        "available": false,
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:        "num_standbys": 0,
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:        "modules": [
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:            "iostat",
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:            "nfs",
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:            "restful"
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:        ],
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:        "services": {}
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:    },
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:    "servicemap": {
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:        "epoch": 1,
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:        "modified": "2025-10-14T08:21:43.857901+0000",
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:        "services": {}
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:    },
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]:    "progress_events": {}
Oct 14 04:22:08 np0005486808 blissful_cannon[74982]: }
Oct 14 04:22:08 np0005486808 podman[74965]: 2025-10-14 08:22:08.804414164 +0000 UTC m=+0.557771708 container died 668ac992596e85b043d4a1d5e2c03883c69f944e18d74d909955dc7eb106a5f7 (image=quay.io/ceph/ceph:v18, name=blissful_cannon, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 14 04:22:08 np0005486808 systemd[1]: libpod-668ac992596e85b043d4a1d5e2c03883c69f944e18d74d909955dc7eb106a5f7.scope: Deactivated successfully.
Oct 14 04:22:08 np0005486808 systemd[1]: var-lib-containers-storage-overlay-51e03646dc425d4b3c73e597585157f5e7bb991d77614031e4a92b8390c6a40f-merged.mount: Deactivated successfully.
Oct 14 04:22:08 np0005486808 podman[74965]: 2025-10-14 08:22:08.844825291 +0000 UTC m=+0.598182855 container remove 668ac992596e85b043d4a1d5e2c03883c69f944e18d74d909955dc7eb106a5f7 (image=quay.io/ceph/ceph:v18, name=blissful_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 14 04:22:08 np0005486808 systemd[1]: libpod-conmon-668ac992596e85b043d4a1d5e2c03883c69f944e18d74d909955dc7eb106a5f7.scope: Deactivated successfully.
Oct 14 04:22:09 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.euuwqu(active, since 1.0323s)
Oct 14 04:22:09 np0005486808 ceph-mon[74249]: from='mgr.14102 192.168.122.100:0/4123620900' entity='mgr.compute-0.euuwqu' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.euuwqu/mirror_snapshot_schedule"}]: dispatch
Oct 14 04:22:09 np0005486808 ceph-mon[74249]: from='mgr.14102 192.168.122.100:0/4123620900' entity='mgr.compute-0.euuwqu' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.euuwqu/trash_purge_schedule"}]: dispatch
Oct 14 04:22:09 np0005486808 ceph-mon[74249]: from='mgr.14102 192.168.122.100:0/4123620900' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:09 np0005486808 ceph-mon[74249]: from='mgr.14102 192.168.122.100:0/4123620900' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:09 np0005486808 ceph-mon[74249]: from='mgr.14102 192.168.122.100:0/4123620900' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:10 np0005486808 ceph-mgr[74543]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 14 04:22:10 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.euuwqu(active, since 2s)
Oct 14 04:22:10 np0005486808 podman[75099]: 2025-10-14 08:22:10.937116238 +0000 UTC m=+0.061332134 container create 3c06e56fca572ae4d17991331c763713f2bdefd97bb36bf320e168ab013b0b95 (image=quay.io/ceph/ceph:v18, name=sweet_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 14 04:22:10 np0005486808 systemd[1]: Started libpod-conmon-3c06e56fca572ae4d17991331c763713f2bdefd97bb36bf320e168ab013b0b95.scope.
Oct 14 04:22:11 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:22:11 np0005486808 podman[75099]: 2025-10-14 08:22:10.912609791 +0000 UTC m=+0.036825727 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:22:11 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55750a9dccae8fbc06a02f8ff695c8c365977dceb8a6cc79296bfbc51fcffa84/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:11 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55750a9dccae8fbc06a02f8ff695c8c365977dceb8a6cc79296bfbc51fcffa84/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:11 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55750a9dccae8fbc06a02f8ff695c8c365977dceb8a6cc79296bfbc51fcffa84/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:11 np0005486808 podman[75099]: 2025-10-14 08:22:11.0262562 +0000 UTC m=+0.150472056 container init 3c06e56fca572ae4d17991331c763713f2bdefd97bb36bf320e168ab013b0b95 (image=quay.io/ceph/ceph:v18, name=sweet_robinson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:22:11 np0005486808 podman[75099]: 2025-10-14 08:22:11.035103842 +0000 UTC m=+0.159319698 container start 3c06e56fca572ae4d17991331c763713f2bdefd97bb36bf320e168ab013b0b95 (image=quay.io/ceph/ceph:v18, name=sweet_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:22:11 np0005486808 podman[75099]: 2025-10-14 08:22:11.038564489 +0000 UTC m=+0.162780345 container attach 3c06e56fca572ae4d17991331c763713f2bdefd97bb36bf320e168ab013b0b95 (image=quay.io/ceph/ceph:v18, name=sweet_robinson, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 14 04:22:11 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 14 04:22:11 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1776918951' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]: 
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]: {
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:    "fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:    "health": {
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:        "status": "HEALTH_OK",
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:        "checks": {},
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:        "mutes": []
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:    },
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:    "election_epoch": 5,
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:    "quorum": [
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:        0
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:    ],
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:    "quorum_names": [
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:        "compute-0"
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:    ],
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:    "quorum_age": 24,
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:    "monmap": {
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:        "epoch": 1,
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:        "min_mon_release_name": "reef",
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:        "num_mons": 1
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:    },
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:    "osdmap": {
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:        "epoch": 1,
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:        "num_osds": 0,
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:        "num_up_osds": 0,
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:        "osd_up_since": 0,
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:        "num_in_osds": 0,
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:        "osd_in_since": 0,
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:        "num_remapped_pgs": 0
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:    },
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:    "pgmap": {
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:        "pgs_by_state": [],
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:        "num_pgs": 0,
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:        "num_pools": 0,
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:        "num_objects": 0,
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:        "data_bytes": 0,
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:        "bytes_used": 0,
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:        "bytes_avail": 0,
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:        "bytes_total": 0
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:    },
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:    "fsmap": {
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:        "epoch": 1,
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:        "by_rank": [],
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:        "up:standby": 0
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:    },
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:    "mgrmap": {
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:        "available": true,
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:        "num_standbys": 0,
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:        "modules": [
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:            "iostat",
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:            "nfs",
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:            "restful"
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:        ],
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:        "services": {}
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:    },
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:    "servicemap": {
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:        "epoch": 1,
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:        "modified": "2025-10-14T08:21:43.857901+0000",
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:        "services": {}
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:    },
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]:    "progress_events": {}
Oct 14 04:22:11 np0005486808 sweet_robinson[75115]: }
Oct 14 04:22:11 np0005486808 systemd[1]: libpod-3c06e56fca572ae4d17991331c763713f2bdefd97bb36bf320e168ab013b0b95.scope: Deactivated successfully.
Oct 14 04:22:11 np0005486808 podman[75099]: 2025-10-14 08:22:11.663149485 +0000 UTC m=+0.787365371 container died 3c06e56fca572ae4d17991331c763713f2bdefd97bb36bf320e168ab013b0b95 (image=quay.io/ceph/ceph:v18, name=sweet_robinson, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:22:11 np0005486808 systemd[1]: var-lib-containers-storage-overlay-55750a9dccae8fbc06a02f8ff695c8c365977dceb8a6cc79296bfbc51fcffa84-merged.mount: Deactivated successfully.
Oct 14 04:22:11 np0005486808 podman[75099]: 2025-10-14 08:22:11.709492871 +0000 UTC m=+0.833708727 container remove 3c06e56fca572ae4d17991331c763713f2bdefd97bb36bf320e168ab013b0b95 (image=quay.io/ceph/ceph:v18, name=sweet_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:22:11 np0005486808 systemd[1]: libpod-conmon-3c06e56fca572ae4d17991331c763713f2bdefd97bb36bf320e168ab013b0b95.scope: Deactivated successfully.
Oct 14 04:22:11 np0005486808 podman[75153]: 2025-10-14 08:22:11.791320189 +0000 UTC m=+0.054806600 container create a44703b348af233c2a9f072820a942d56a48bee30dab57d43b31d4aa1e1abac0 (image=quay.io/ceph/ceph:v18, name=focused_hoover, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 14 04:22:11 np0005486808 systemd[1]: Started libpod-conmon-a44703b348af233c2a9f072820a942d56a48bee30dab57d43b31d4aa1e1abac0.scope.
Oct 14 04:22:11 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:22:11 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34f099adf97f8362a28916640647475080ab672b10a7de49fdbd332bfa910fc3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:11 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34f099adf97f8362a28916640647475080ab672b10a7de49fdbd332bfa910fc3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:11 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34f099adf97f8362a28916640647475080ab672b10a7de49fdbd332bfa910fc3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:11 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34f099adf97f8362a28916640647475080ab672b10a7de49fdbd332bfa910fc3/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:11 np0005486808 podman[75153]: 2025-10-14 08:22:11.771362937 +0000 UTC m=+0.034849348 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:22:11 np0005486808 podman[75153]: 2025-10-14 08:22:11.937886735 +0000 UTC m=+0.201373166 container init a44703b348af233c2a9f072820a942d56a48bee30dab57d43b31d4aa1e1abac0 (image=quay.io/ceph/ceph:v18, name=focused_hoover, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:22:11 np0005486808 podman[75153]: 2025-10-14 08:22:11.946975603 +0000 UTC m=+0.210461974 container start a44703b348af233c2a9f072820a942d56a48bee30dab57d43b31d4aa1e1abac0 (image=quay.io/ceph/ceph:v18, name=focused_hoover, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:22:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Oct 14 04:22:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1290695050' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 14 04:22:12 np0005486808 systemd[1]: libpod-a44703b348af233c2a9f072820a942d56a48bee30dab57d43b31d4aa1e1abac0.scope: Deactivated successfully.
Oct 14 04:22:12 np0005486808 ceph-mgr[74543]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 14 04:22:12 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/1290695050' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 14 04:22:12 np0005486808 podman[75153]: 2025-10-14 08:22:12.823502446 +0000 UTC m=+1.086988857 container attach a44703b348af233c2a9f072820a942d56a48bee30dab57d43b31d4aa1e1abac0 (image=quay.io/ceph/ceph:v18, name=focused_hoover, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:22:12 np0005486808 podman[75153]: 2025-10-14 08:22:12.824194004 +0000 UTC m=+1.087680455 container died a44703b348af233c2a9f072820a942d56a48bee30dab57d43b31d4aa1e1abac0 (image=quay.io/ceph/ceph:v18, name=focused_hoover, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 14 04:22:12 np0005486808 systemd[1]: var-lib-containers-storage-overlay-34f099adf97f8362a28916640647475080ab672b10a7de49fdbd332bfa910fc3-merged.mount: Deactivated successfully.
Oct 14 04:22:12 np0005486808 podman[75153]: 2025-10-14 08:22:12.894475871 +0000 UTC m=+1.157962292 container remove a44703b348af233c2a9f072820a942d56a48bee30dab57d43b31d4aa1e1abac0 (image=quay.io/ceph/ceph:v18, name=focused_hoover, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:22:12 np0005486808 podman[75208]: 2025-10-14 08:22:12.982729331 +0000 UTC m=+0.062457512 container create 1a868ed397d1c0505701f0db5ce4b3261cffa4a185e3e83282e9a37dbe55a5d5 (image=quay.io/ceph/ceph:v18, name=pensive_keller, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:22:12 np0005486808 systemd[1]: libpod-conmon-a44703b348af233c2a9f072820a942d56a48bee30dab57d43b31d4aa1e1abac0.scope: Deactivated successfully.
Oct 14 04:22:13 np0005486808 systemd[1]: Started libpod-conmon-1a868ed397d1c0505701f0db5ce4b3261cffa4a185e3e83282e9a37dbe55a5d5.scope.
Oct 14 04:22:13 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:22:13 np0005486808 podman[75208]: 2025-10-14 08:22:12.955464685 +0000 UTC m=+0.035192966 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:22:13 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2f13ad19c58ec3c0c801af59d2468f0ebcf88e46f07461f918579532b94033c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:13 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2f13ad19c58ec3c0c801af59d2468f0ebcf88e46f07461f918579532b94033c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:13 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2f13ad19c58ec3c0c801af59d2468f0ebcf88e46f07461f918579532b94033c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:13 np0005486808 podman[75208]: 2025-10-14 08:22:13.071962195 +0000 UTC m=+0.151690396 container init 1a868ed397d1c0505701f0db5ce4b3261cffa4a185e3e83282e9a37dbe55a5d5 (image=quay.io/ceph/ceph:v18, name=pensive_keller, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Oct 14 04:22:13 np0005486808 podman[75208]: 2025-10-14 08:22:13.08252375 +0000 UTC m=+0.162251931 container start 1a868ed397d1c0505701f0db5ce4b3261cffa4a185e3e83282e9a37dbe55a5d5 (image=quay.io/ceph/ceph:v18, name=pensive_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:22:13 np0005486808 podman[75208]: 2025-10-14 08:22:13.085994797 +0000 UTC m=+0.165723008 container attach 1a868ed397d1c0505701f0db5ce4b3261cffa4a185e3e83282e9a37dbe55a5d5 (image=quay.io/ceph/ceph:v18, name=pensive_keller, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Oct 14 04:22:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Oct 14 04:22:13 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3655611518' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Oct 14 04:22:13 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/3655611518' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Oct 14 04:22:13 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3655611518' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Oct 14 04:22:13 np0005486808 ceph-mgr[74543]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct 14 04:22:13 np0005486808 ceph-mgr[74543]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct 14 04:22:13 np0005486808 ceph-mgr[74543]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct 14 04:22:13 np0005486808 ceph-mgr[74543]: mgr respawn  1: '-n'
Oct 14 04:22:13 np0005486808 ceph-mgr[74543]: mgr respawn  2: 'mgr.compute-0.euuwqu'
Oct 14 04:22:13 np0005486808 ceph-mgr[74543]: mgr respawn  3: '-f'
Oct 14 04:22:13 np0005486808 ceph-mgr[74543]: mgr respawn  4: '--setuser'
Oct 14 04:22:13 np0005486808 ceph-mgr[74543]: mgr respawn  5: 'ceph'
Oct 14 04:22:13 np0005486808 ceph-mgr[74543]: mgr respawn  6: '--setgroup'
Oct 14 04:22:13 np0005486808 ceph-mgr[74543]: mgr respawn  7: 'ceph'
Oct 14 04:22:13 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.euuwqu(active, since 5s)
Oct 14 04:22:13 np0005486808 systemd[1]: libpod-1a868ed397d1c0505701f0db5ce4b3261cffa4a185e3e83282e9a37dbe55a5d5.scope: Deactivated successfully.
Oct 14 04:22:13 np0005486808 conmon[75224]: conmon 1a868ed397d1c0505701 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1a868ed397d1c0505701f0db5ce4b3261cffa4a185e3e83282e9a37dbe55a5d5.scope/container/memory.events
Oct 14 04:22:13 np0005486808 podman[75250]: 2025-10-14 08:22:13.870683821 +0000 UTC m=+0.031009001 container died 1a868ed397d1c0505701f0db5ce4b3261cffa4a185e3e83282e9a37dbe55a5d5 (image=quay.io/ceph/ceph:v18, name=pensive_keller, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:22:13 np0005486808 systemd[1]: var-lib-containers-storage-overlay-a2f13ad19c58ec3c0c801af59d2468f0ebcf88e46f07461f918579532b94033c-merged.mount: Deactivated successfully.
Oct 14 04:22:13 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: ignoring --setuser ceph since I am not root
Oct 14 04:22:13 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: ignoring --setgroup ceph since I am not root
Oct 14 04:22:13 np0005486808 ceph-mgr[74543]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct 14 04:22:13 np0005486808 ceph-mgr[74543]: pidfile_write: ignore empty --pid-file
Oct 14 04:22:13 np0005486808 podman[75250]: 2025-10-14 08:22:13.924643678 +0000 UTC m=+0.084968758 container remove 1a868ed397d1c0505701f0db5ce4b3261cffa4a185e3e83282e9a37dbe55a5d5 (image=quay.io/ceph/ceph:v18, name=pensive_keller, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:22:13 np0005486808 systemd[1]: libpod-conmon-1a868ed397d1c0505701f0db5ce4b3261cffa4a185e3e83282e9a37dbe55a5d5.scope: Deactivated successfully.
Oct 14 04:22:14 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'alerts'
Oct 14 04:22:14 np0005486808 podman[75289]: 2025-10-14 08:22:14.030157271 +0000 UTC m=+0.066335869 container create b17a1062b6879adb7127c000bf0dc3e3901981b184c9826c4c4d98040e615342 (image=quay.io/ceph/ceph:v18, name=crazy_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 04:22:14 np0005486808 systemd[1]: Started libpod-conmon-b17a1062b6879adb7127c000bf0dc3e3901981b184c9826c4c4d98040e615342.scope.
Oct 14 04:22:14 np0005486808 podman[75289]: 2025-10-14 08:22:13.99869267 +0000 UTC m=+0.034871318 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:22:14 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:22:14 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bf5175caddf50978153f7331a85c5247a3104b9f42db9c7e705f3d3b6cf0ee8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:14 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bf5175caddf50978153f7331a85c5247a3104b9f42db9c7e705f3d3b6cf0ee8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:14 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bf5175caddf50978153f7331a85c5247a3104b9f42db9c7e705f3d3b6cf0ee8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:14 np0005486808 podman[75289]: 2025-10-14 08:22:14.125416057 +0000 UTC m=+0.161594695 container init b17a1062b6879adb7127c000bf0dc3e3901981b184c9826c4c4d98040e615342 (image=quay.io/ceph/ceph:v18, name=crazy_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 14 04:22:14 np0005486808 podman[75289]: 2025-10-14 08:22:14.135184393 +0000 UTC m=+0.171362991 container start b17a1062b6879adb7127c000bf0dc3e3901981b184c9826c4c4d98040e615342 (image=quay.io/ceph/ceph:v18, name=crazy_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Oct 14 04:22:14 np0005486808 podman[75289]: 2025-10-14 08:22:14.139688116 +0000 UTC m=+0.175866714 container attach b17a1062b6879adb7127c000bf0dc3e3901981b184c9826c4c4d98040e615342 (image=quay.io/ceph/ceph:v18, name=crazy_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:22:14 np0005486808 ceph-mgr[74543]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 14 04:22:14 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'balancer'
Oct 14 04:22:14 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:22:14.302+0000 7f6be5126140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 14 04:22:14 np0005486808 ceph-mgr[74543]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 14 04:22:14 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:22:14.536+0000 7f6be5126140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 14 04:22:14 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'cephadm'
Oct 14 04:22:14 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct 14 04:22:14 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2840966246' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 14 04:22:14 np0005486808 crazy_lalande[75305]: {
Oct 14 04:22:14 np0005486808 crazy_lalande[75305]:    "epoch": 5,
Oct 14 04:22:14 np0005486808 crazy_lalande[75305]:    "available": true,
Oct 14 04:22:14 np0005486808 crazy_lalande[75305]:    "active_name": "compute-0.euuwqu",
Oct 14 04:22:14 np0005486808 crazy_lalande[75305]:    "num_standby": 0
Oct 14 04:22:14 np0005486808 crazy_lalande[75305]: }
Oct 14 04:22:14 np0005486808 systemd[1]: libpod-b17a1062b6879adb7127c000bf0dc3e3901981b184c9826c4c4d98040e615342.scope: Deactivated successfully.
Oct 14 04:22:14 np0005486808 podman[75289]: 2025-10-14 08:22:14.748721102 +0000 UTC m=+0.784899700 container died b17a1062b6879adb7127c000bf0dc3e3901981b184c9826c4c4d98040e615342 (image=quay.io/ceph/ceph:v18, name=crazy_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:22:14 np0005486808 systemd[1]: var-lib-containers-storage-overlay-9bf5175caddf50978153f7331a85c5247a3104b9f42db9c7e705f3d3b6cf0ee8-merged.mount: Deactivated successfully.
Oct 14 04:22:14 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/3655611518' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Oct 14 04:22:14 np0005486808 podman[75289]: 2025-10-14 08:22:14.795654132 +0000 UTC m=+0.831832690 container remove b17a1062b6879adb7127c000bf0dc3e3901981b184c9826c4c4d98040e615342 (image=quay.io/ceph/ceph:v18, name=crazy_lalande, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 04:22:14 np0005486808 systemd[1]: libpod-conmon-b17a1062b6879adb7127c000bf0dc3e3901981b184c9826c4c4d98040e615342.scope: Deactivated successfully.
Oct 14 04:22:14 np0005486808 podman[75343]: 2025-10-14 08:22:14.872514315 +0000 UTC m=+0.046664344 container create 53ebb913e87eecd5cf7d37933b47d85f16bcc4d9d978197ce61280d8b02ddf3f (image=quay.io/ceph/ceph:v18, name=condescending_cohen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 04:22:14 np0005486808 systemd[1]: Started libpod-conmon-53ebb913e87eecd5cf7d37933b47d85f16bcc4d9d978197ce61280d8b02ddf3f.scope.
Oct 14 04:22:14 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:22:14 np0005486808 podman[75343]: 2025-10-14 08:22:14.850864971 +0000 UTC m=+0.025015030 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:22:14 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/127a4426436926638648586f71c2115ef4766e8c0733977f2e9b3fd5e86d1bc0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:14 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/127a4426436926638648586f71c2115ef4766e8c0733977f2e9b3fd5e86d1bc0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:14 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/127a4426436926638648586f71c2115ef4766e8c0733977f2e9b3fd5e86d1bc0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:14 np0005486808 podman[75343]: 2025-10-14 08:22:14.976113731 +0000 UTC m=+0.150263860 container init 53ebb913e87eecd5cf7d37933b47d85f16bcc4d9d978197ce61280d8b02ddf3f (image=quay.io/ceph/ceph:v18, name=condescending_cohen, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:22:14 np0005486808 podman[75343]: 2025-10-14 08:22:14.985740433 +0000 UTC m=+0.159890492 container start 53ebb913e87eecd5cf7d37933b47d85f16bcc4d9d978197ce61280d8b02ddf3f (image=quay.io/ceph/ceph:v18, name=condescending_cohen, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:22:14 np0005486808 podman[75343]: 2025-10-14 08:22:14.989932678 +0000 UTC m=+0.164082737 container attach 53ebb913e87eecd5cf7d37933b47d85f16bcc4d9d978197ce61280d8b02ddf3f (image=quay.io/ceph/ceph:v18, name=condescending_cohen, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 04:22:16 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'crash'
Oct 14 04:22:16 np0005486808 ceph-mgr[74543]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 14 04:22:16 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'dashboard'
Oct 14 04:22:16 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:22:16.596+0000 7f6be5126140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 14 04:22:17 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'devicehealth'
Oct 14 04:22:18 np0005486808 ceph-mgr[74543]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 14 04:22:18 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'diskprediction_local'
Oct 14 04:22:18 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:22:18.198+0000 7f6be5126140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 14 04:22:18 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 14 04:22:18 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 14 04:22:18 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]:  from numpy import show_config as show_numpy_config
Oct 14 04:22:18 np0005486808 ceph-mgr[74543]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 14 04:22:18 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:22:18.698+0000 7f6be5126140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 14 04:22:18 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'influx'
Oct 14 04:22:18 np0005486808 ceph-mgr[74543]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 14 04:22:18 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'insights'
Oct 14 04:22:18 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:22:18.925+0000 7f6be5126140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 14 04:22:19 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'iostat'
Oct 14 04:22:19 np0005486808 ceph-mgr[74543]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 14 04:22:19 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'k8sevents'
Oct 14 04:22:19 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:22:19.382+0000 7f6be5126140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 14 04:22:21 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'localpool'
Oct 14 04:22:21 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'mds_autoscaler'
Oct 14 04:22:21 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'mirroring'
Oct 14 04:22:22 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'nfs'
Oct 14 04:22:22 np0005486808 ceph-mgr[74543]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 14 04:22:22 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'orchestrator'
Oct 14 04:22:22 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:22:22.853+0000 7f6be5126140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 14 04:22:23 np0005486808 ceph-mgr[74543]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 14 04:22:23 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'osd_perf_query'
Oct 14 04:22:23 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:22:23.490+0000 7f6be5126140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 14 04:22:23 np0005486808 ceph-mgr[74543]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 14 04:22:23 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'osd_support'
Oct 14 04:22:23 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:22:23.740+0000 7f6be5126140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 14 04:22:23 np0005486808 ceph-mgr[74543]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 14 04:22:23 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'pg_autoscaler'
Oct 14 04:22:23 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:22:23.956+0000 7f6be5126140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 14 04:22:24 np0005486808 ceph-mgr[74543]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 14 04:22:24 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'progress'
Oct 14 04:22:24 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:22:24.213+0000 7f6be5126140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 14 04:22:24 np0005486808 ceph-mgr[74543]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 14 04:22:24 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'prometheus'
Oct 14 04:22:24 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:22:24.452+0000 7f6be5126140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 14 04:22:25 np0005486808 ceph-mgr[74543]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 14 04:22:25 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'rbd_support'
Oct 14 04:22:25 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:22:25.426+0000 7f6be5126140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 14 04:22:25 np0005486808 ceph-mgr[74543]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 14 04:22:25 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'restful'
Oct 14 04:22:25 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:22:25.724+0000 7f6be5126140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 14 04:22:26 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'rgw'
Oct 14 04:22:27 np0005486808 ceph-mgr[74543]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 14 04:22:27 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'rook'
Oct 14 04:22:27 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:22:27.142+0000 7f6be5126140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 14 04:22:29 np0005486808 ceph-mgr[74543]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 14 04:22:29 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'selftest'
Oct 14 04:22:29 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:22:29.204+0000 7f6be5126140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 14 04:22:29 np0005486808 ceph-mgr[74543]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 14 04:22:29 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'snap_schedule'
Oct 14 04:22:29 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:22:29.442+0000 7f6be5126140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 14 04:22:29 np0005486808 ceph-mgr[74543]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 14 04:22:29 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'stats'
Oct 14 04:22:29 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:22:29.679+0000 7f6be5126140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 14 04:22:29 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'status'
Oct 14 04:22:30 np0005486808 ceph-mgr[74543]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 14 04:22:30 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'telegraf'
Oct 14 04:22:30 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:22:30.195+0000 7f6be5126140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 14 04:22:30 np0005486808 ceph-mgr[74543]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 14 04:22:30 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'telemetry'
Oct 14 04:22:30 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:22:30.430+0000 7f6be5126140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 14 04:22:31 np0005486808 ceph-mgr[74543]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 14 04:22:31 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'test_orchestrator'
Oct 14 04:22:31 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:22:31.002+0000 7f6be5126140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 14 04:22:31 np0005486808 ceph-mgr[74543]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 14 04:22:31 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'volumes'
Oct 14 04:22:31 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:22:31.636+0000 7f6be5126140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 14 04:22:32 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:22:32.318+0000 7f6be5126140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: mgr[py] Loading python module 'zabbix'
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 14 04:22:32 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T08:22:32.563+0000 7f6be5126140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 14 04:22:32 np0005486808 ceph-mon[74249]: log_channel(cluster) log [INF] : Active manager daemon compute-0.euuwqu restarted
Oct 14 04:22:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Oct 14 04:22:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: ms_deliver_dispatch: unhandled message 0x5583c6e30dc0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Oct 14 04:22:32 np0005486808 ceph-mon[74249]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.euuwqu
Oct 14 04:22:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct 14 04:22:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct 14 04:22:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: mgr handle_mgr_map Activating!
Oct 14 04:22:32 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: mgr handle_mgr_map I am now activating
Oct 14 04:22:32 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.euuwqu(active, starting, since 0.0160268s)
Oct 14 04:22:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct 14 04:22:32 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 14 04:22:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.euuwqu", "id": "compute-0.euuwqu"} v 0) v1
Oct 14 04:22:32 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "mgr metadata", "who": "compute-0.euuwqu", "id": "compute-0.euuwqu"}]: dispatch
Oct 14 04:22:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Oct 14 04:22:32 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 14 04:22:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).mds e1 all = 1
Oct 14 04:22:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct 14 04:22:32 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 14 04:22:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Oct 14 04:22:32 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: mgr load Constructed class from module: balancer
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 14 04:22:32 np0005486808 ceph-mon[74249]: log_channel(cluster) log [INF] : Manager daemon compute-0.euuwqu is now available
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Starting
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_08:22:32
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] No pools available
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Oct 14 04:22:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Oct 14 04:22:32 np0005486808 ceph-mon[74249]: Active manager daemon compute-0.euuwqu restarted
Oct 14 04:22:32 np0005486808 ceph-mon[74249]: Activating manager daemon compute-0.euuwqu
Oct 14 04:22:32 np0005486808 ceph-mon[74249]: Manager daemon compute-0.euuwqu is now available
Oct 14 04:22:32 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Oct 14 04:22:32 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: mgr load Constructed class from module: cephadm
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: mgr load Constructed class from module: crash
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: mgr load Constructed class from module: devicehealth
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: [devicehealth INFO root] Starting
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: mgr load Constructed class from module: iostat
Oct 14 04:22:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 14 04:22:32 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: mgr load Constructed class from module: nfs
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: mgr load Constructed class from module: orchestrator
Oct 14 04:22:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 14 04:22:32 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: mgr load Constructed class from module: pg_autoscaler
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: mgr load Constructed class from module: progress
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: [progress INFO root] Loading...
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: [progress INFO root] No stored events to load
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: [progress INFO root] Loaded [] historic events
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: [progress INFO root] Loaded OSDMap, ready.
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] recovery thread starting
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] starting setup
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: mgr load Constructed class from module: rbd_support
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: mgr load Constructed class from module: restful
Oct 14 04:22:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.euuwqu/mirror_snapshot_schedule"} v 0) v1
Oct 14 04:22:32 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.euuwqu/mirror_snapshot_schedule"}]: dispatch
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: [restful INFO root] server_addr: :: server_port: 8003
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: mgr load Constructed class from module: status
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] PerfHandler: starting
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: [restful WARNING root] server not running: no certificate configured
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TaskHandler: starting
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: mgr load Constructed class from module: telemetry
Oct 14 04:22:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.euuwqu/trash_purge_schedule"} v 0) v1
Oct 14 04:22:32 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.euuwqu/trash_purge_schedule"}]: dispatch
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] setup complete
Oct 14 04:22:32 np0005486808 ceph-mgr[74543]: mgr load Constructed class from module: volumes
Oct 14 04:22:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Oct 14 04:22:33 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Oct 14 04:22:33 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:33 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.euuwqu(active, since 1.02735s)
Oct 14 04:22:33 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Oct 14 04:22:33 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Oct 14 04:22:33 np0005486808 condescending_cohen[75359]: {
Oct 14 04:22:33 np0005486808 condescending_cohen[75359]:    "mgrmap_epoch": 7,
Oct 14 04:22:33 np0005486808 condescending_cohen[75359]:    "initialized": true
Oct 14 04:22:33 np0005486808 condescending_cohen[75359]: }
Oct 14 04:22:33 np0005486808 systemd[1]: libpod-53ebb913e87eecd5cf7d37933b47d85f16bcc4d9d978197ce61280d8b02ddf3f.scope: Deactivated successfully.
Oct 14 04:22:33 np0005486808 podman[75343]: 2025-10-14 08:22:33.635777813 +0000 UTC m=+18.809927862 container died 53ebb913e87eecd5cf7d37933b47d85f16bcc4d9d978197ce61280d8b02ddf3f (image=quay.io/ceph/ceph:v18, name=condescending_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:22:33 np0005486808 ceph-mon[74249]: Found migration_current of "None". Setting to last migration.
Oct 14 04:22:33 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:33 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:33 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.euuwqu/mirror_snapshot_schedule"}]: dispatch
Oct 14 04:22:33 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.euuwqu/trash_purge_schedule"}]: dispatch
Oct 14 04:22:33 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:33 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:33 np0005486808 systemd[1]: var-lib-containers-storage-overlay-127a4426436926638648586f71c2115ef4766e8c0733977f2e9b3fd5e86d1bc0-merged.mount: Deactivated successfully.
Oct 14 04:22:33 np0005486808 podman[75343]: 2025-10-14 08:22:33.690359545 +0000 UTC m=+18.864509594 container remove 53ebb913e87eecd5cf7d37933b47d85f16bcc4d9d978197ce61280d8b02ddf3f (image=quay.io/ceph/ceph:v18, name=condescending_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 14 04:22:33 np0005486808 systemd[1]: libpod-conmon-53ebb913e87eecd5cf7d37933b47d85f16bcc4d9d978197ce61280d8b02ddf3f.scope: Deactivated successfully.
Oct 14 04:22:33 np0005486808 podman[75518]: 2025-10-14 08:22:33.775787883 +0000 UTC m=+0.059596209 container create 25fbf87f018f0e766e7bfd6265d4c3526e51faf04ae6071ceaedcafbcf8437b7 (image=quay.io/ceph/ceph:v18, name=vibrant_austin, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:22:33 np0005486808 systemd[1]: Started libpod-conmon-25fbf87f018f0e766e7bfd6265d4c3526e51faf04ae6071ceaedcafbcf8437b7.scope.
Oct 14 04:22:33 np0005486808 podman[75518]: 2025-10-14 08:22:33.754067307 +0000 UTC m=+0.037875653 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:22:33 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:22:33 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82dd12fa41ba882e29563d1df6c0f6a6709446d4ad75dd8c2770f0e29409104b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:33 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82dd12fa41ba882e29563d1df6c0f6a6709446d4ad75dd8c2770f0e29409104b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:33 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82dd12fa41ba882e29563d1df6c0f6a6709446d4ad75dd8c2770f0e29409104b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:33 np0005486808 podman[75518]: 2025-10-14 08:22:33.904116471 +0000 UTC m=+0.187924777 container init 25fbf87f018f0e766e7bfd6265d4c3526e51faf04ae6071ceaedcafbcf8437b7 (image=quay.io/ceph/ceph:v18, name=vibrant_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Oct 14 04:22:33 np0005486808 podman[75518]: 2025-10-14 08:22:33.908486961 +0000 UTC m=+0.192295257 container start 25fbf87f018f0e766e7bfd6265d4c3526e51faf04ae6071ceaedcafbcf8437b7 (image=quay.io/ceph/ceph:v18, name=vibrant_austin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Oct 14 04:22:33 np0005486808 podman[75518]: 2025-10-14 08:22:33.911206599 +0000 UTC m=+0.195014915 container attach 25fbf87f018f0e766e7bfd6265d4c3526e51faf04ae6071ceaedcafbcf8437b7 (image=quay.io/ceph/ceph:v18, name=vibrant_austin, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 14 04:22:34 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 04:22:34 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Oct 14 04:22:34 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:34 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 14 04:22:34 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 14 04:22:34 np0005486808 systemd[1]: libpod-25fbf87f018f0e766e7bfd6265d4c3526e51faf04ae6071ceaedcafbcf8437b7.scope: Deactivated successfully.
Oct 14 04:22:34 np0005486808 podman[75518]: 2025-10-14 08:22:34.451733732 +0000 UTC m=+0.735542028 container died 25fbf87f018f0e766e7bfd6265d4c3526e51faf04ae6071ceaedcafbcf8437b7 (image=quay.io/ceph/ceph:v18, name=vibrant_austin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Oct 14 04:22:34 np0005486808 systemd[1]: var-lib-containers-storage-overlay-82dd12fa41ba882e29563d1df6c0f6a6709446d4ad75dd8c2770f0e29409104b-merged.mount: Deactivated successfully.
Oct 14 04:22:34 np0005486808 podman[75518]: 2025-10-14 08:22:34.487627885 +0000 UTC m=+0.771436181 container remove 25fbf87f018f0e766e7bfd6265d4c3526e51faf04ae6071ceaedcafbcf8437b7 (image=quay.io/ceph/ceph:v18, name=vibrant_austin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 14 04:22:34 np0005486808 systemd[1]: libpod-conmon-25fbf87f018f0e766e7bfd6265d4c3526e51faf04ae6071ceaedcafbcf8437b7.scope: Deactivated successfully.
Oct 14 04:22:34 np0005486808 ceph-mgr[74543]: [cephadm INFO cherrypy.error] [14/Oct/2025:08:22:34] ENGINE Bus STARTING
Oct 14 04:22:34 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [INF] : [14/Oct/2025:08:22:34] ENGINE Bus STARTING
Oct 14 04:22:34 np0005486808 podman[75573]: 2025-10-14 08:22:34.559642486 +0000 UTC m=+0.051240630 container create 33c4f29475facf93417600bd9b213ff235328d5f3d73ab77547f91c7ac0515a6 (image=quay.io/ceph/ceph:v18, name=dreamy_poincare, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:22:34 np0005486808 ceph-mgr[74543]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 14 04:22:34 np0005486808 systemd[1]: Started libpod-conmon-33c4f29475facf93417600bd9b213ff235328d5f3d73ab77547f91c7ac0515a6.scope.
Oct 14 04:22:34 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:22:34 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/988bd18fa6d61d65814a32f85dd0bdb57d89c7d327d2a6d576cce4ffe25c3832/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:34 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/988bd18fa6d61d65814a32f85dd0bdb57d89c7d327d2a6d576cce4ffe25c3832/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:34 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/988bd18fa6d61d65814a32f85dd0bdb57d89c7d327d2a6d576cce4ffe25c3832/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:34 np0005486808 podman[75573]: 2025-10-14 08:22:34.532196416 +0000 UTC m=+0.023794590 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:22:34 np0005486808 podman[75573]: 2025-10-14 08:22:34.639452563 +0000 UTC m=+0.131050737 container init 33c4f29475facf93417600bd9b213ff235328d5f3d73ab77547f91c7ac0515a6 (image=quay.io/ceph/ceph:v18, name=dreamy_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 14 04:22:34 np0005486808 podman[75573]: 2025-10-14 08:22:34.648542702 +0000 UTC m=+0.140140876 container start 33c4f29475facf93417600bd9b213ff235328d5f3d73ab77547f91c7ac0515a6 (image=quay.io/ceph/ceph:v18, name=dreamy_poincare, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:22:34 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:34 np0005486808 podman[75573]: 2025-10-14 08:22:34.655810345 +0000 UTC m=+0.147408529 container attach 33c4f29475facf93417600bd9b213ff235328d5f3d73ab77547f91c7ac0515a6 (image=quay.io/ceph/ceph:v18, name=dreamy_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:22:34 np0005486808 ceph-mgr[74543]: [cephadm INFO cherrypy.error] [14/Oct/2025:08:22:34] ENGINE Serving on https://192.168.122.100:7150
Oct 14 04:22:34 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [INF] : [14/Oct/2025:08:22:34] ENGINE Serving on https://192.168.122.100:7150
Oct 14 04:22:34 np0005486808 ceph-mgr[74543]: [cephadm INFO cherrypy.error] [14/Oct/2025:08:22:34] ENGINE Client ('192.168.122.100', 34456) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 14 04:22:34 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [INF] : [14/Oct/2025:08:22:34] ENGINE Client ('192.168.122.100', 34456) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 14 04:22:34 np0005486808 ceph-mgr[74543]: [cephadm INFO cherrypy.error] [14/Oct/2025:08:22:34] ENGINE Serving on http://192.168.122.100:8765
Oct 14 04:22:34 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [INF] : [14/Oct/2025:08:22:34] ENGINE Serving on http://192.168.122.100:8765
Oct 14 04:22:34 np0005486808 ceph-mgr[74543]: [cephadm INFO cherrypy.error] [14/Oct/2025:08:22:34] ENGINE Bus STARTED
Oct 14 04:22:34 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [INF] : [14/Oct/2025:08:22:34] ENGINE Bus STARTED
Oct 14 04:22:34 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 14 04:22:34 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 14 04:22:35 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 04:22:35 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Oct 14 04:22:35 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:35 np0005486808 ceph-mgr[74543]: [cephadm INFO root] Set ssh ssh_user
Oct 14 04:22:35 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Oct 14 04:22:35 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Oct 14 04:22:35 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:35 np0005486808 ceph-mgr[74543]: [cephadm INFO root] Set ssh ssh_config
Oct 14 04:22:35 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Oct 14 04:22:35 np0005486808 ceph-mgr[74543]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Oct 14 04:22:35 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Oct 14 04:22:35 np0005486808 dreamy_poincare[75602]: ssh user set to ceph-admin. sudo will be used
Oct 14 04:22:35 np0005486808 systemd[1]: libpod-33c4f29475facf93417600bd9b213ff235328d5f3d73ab77547f91c7ac0515a6.scope: Deactivated successfully.
Oct 14 04:22:35 np0005486808 podman[75573]: 2025-10-14 08:22:35.234102058 +0000 UTC m=+0.725700232 container died 33c4f29475facf93417600bd9b213ff235328d5f3d73ab77547f91c7ac0515a6 (image=quay.io/ceph/ceph:v18, name=dreamy_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 04:22:35 np0005486808 systemd[1]: var-lib-containers-storage-overlay-988bd18fa6d61d65814a32f85dd0bdb57d89c7d327d2a6d576cce4ffe25c3832-merged.mount: Deactivated successfully.
Oct 14 04:22:35 np0005486808 podman[75573]: 2025-10-14 08:22:35.292215659 +0000 UTC m=+0.783813843 container remove 33c4f29475facf93417600bd9b213ff235328d5f3d73ab77547f91c7ac0515a6 (image=quay.io/ceph/ceph:v18, name=dreamy_poincare, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:22:35 np0005486808 systemd[1]: libpod-conmon-33c4f29475facf93417600bd9b213ff235328d5f3d73ab77547f91c7ac0515a6.scope: Deactivated successfully.
Oct 14 04:22:35 np0005486808 podman[75653]: 2025-10-14 08:22:35.360004614 +0000 UTC m=+0.046907621 container create e9239ca3348ab56540a209bd6444a0b049a73444d6b6fbcf4b90eb637d1bca23 (image=quay.io/ceph/ceph:v18, name=epic_chaum, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:22:35 np0005486808 systemd[1]: Started libpod-conmon-e9239ca3348ab56540a209bd6444a0b049a73444d6b6fbcf4b90eb637d1bca23.scope.
Oct 14 04:22:35 np0005486808 podman[75653]: 2025-10-14 08:22:35.334746409 +0000 UTC m=+0.021649466 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:22:35 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.euuwqu(active, since 2s)
Oct 14 04:22:35 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:22:35 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/321151615fc215bef4fa8be1383430e6ae1e636c30acdd27133e073a80c92ee1/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:35 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/321151615fc215bef4fa8be1383430e6ae1e636c30acdd27133e073a80c92ee1/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:35 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/321151615fc215bef4fa8be1383430e6ae1e636c30acdd27133e073a80c92ee1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:35 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/321151615fc215bef4fa8be1383430e6ae1e636c30acdd27133e073a80c92ee1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:35 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/321151615fc215bef4fa8be1383430e6ae1e636c30acdd27133e073a80c92ee1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:35 np0005486808 podman[75653]: 2025-10-14 08:22:35.462188344 +0000 UTC m=+0.149091351 container init e9239ca3348ab56540a209bd6444a0b049a73444d6b6fbcf4b90eb637d1bca23 (image=quay.io/ceph/ceph:v18, name=epic_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:22:35 np0005486808 podman[75653]: 2025-10-14 08:22:35.474795971 +0000 UTC m=+0.161698968 container start e9239ca3348ab56540a209bd6444a0b049a73444d6b6fbcf4b90eb637d1bca23 (image=quay.io/ceph/ceph:v18, name=epic_chaum, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 14 04:22:35 np0005486808 podman[75653]: 2025-10-14 08:22:35.478682418 +0000 UTC m=+0.165585425 container attach e9239ca3348ab56540a209bd6444a0b049a73444d6b6fbcf4b90eb637d1bca23 (image=quay.io/ceph/ceph:v18, name=epic_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:22:35 np0005486808 ceph-mon[74249]: [14/Oct/2025:08:22:34] ENGINE Bus STARTING
Oct 14 04:22:35 np0005486808 ceph-mon[74249]: [14/Oct/2025:08:22:34] ENGINE Serving on https://192.168.122.100:7150
Oct 14 04:22:35 np0005486808 ceph-mon[74249]: [14/Oct/2025:08:22:34] ENGINE Client ('192.168.122.100', 34456) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 14 04:22:35 np0005486808 ceph-mon[74249]: [14/Oct/2025:08:22:34] ENGINE Serving on http://192.168.122.100:8765
Oct 14 04:22:35 np0005486808 ceph-mon[74249]: [14/Oct/2025:08:22:34] ENGINE Bus STARTED
Oct 14 04:22:35 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:35 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:35 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 04:22:35 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Oct 14 04:22:35 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:35 np0005486808 ceph-mgr[74543]: [cephadm INFO root] Set ssh ssh_identity_key
Oct 14 04:22:35 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Oct 14 04:22:35 np0005486808 ceph-mgr[74543]: [cephadm INFO root] Set ssh private key
Oct 14 04:22:35 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [INF] : Set ssh private key
Oct 14 04:22:36 np0005486808 systemd[1]: libpod-e9239ca3348ab56540a209bd6444a0b049a73444d6b6fbcf4b90eb637d1bca23.scope: Deactivated successfully.
Oct 14 04:22:36 np0005486808 podman[75653]: 2025-10-14 08:22:36.015785206 +0000 UTC m=+0.702688173 container died e9239ca3348ab56540a209bd6444a0b049a73444d6b6fbcf4b90eb637d1bca23 (image=quay.io/ceph/ceph:v18, name=epic_chaum, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:22:36 np0005486808 systemd[1]: var-lib-containers-storage-overlay-321151615fc215bef4fa8be1383430e6ae1e636c30acdd27133e073a80c92ee1-merged.mount: Deactivated successfully.
Oct 14 04:22:36 np0005486808 podman[75653]: 2025-10-14 08:22:36.052891069 +0000 UTC m=+0.739794036 container remove e9239ca3348ab56540a209bd6444a0b049a73444d6b6fbcf4b90eb637d1bca23 (image=quay.io/ceph/ceph:v18, name=epic_chaum, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 14 04:22:36 np0005486808 systemd[1]: libpod-conmon-e9239ca3348ab56540a209bd6444a0b049a73444d6b6fbcf4b90eb637d1bca23.scope: Deactivated successfully.
Oct 14 04:22:36 np0005486808 podman[75706]: 2025-10-14 08:22:36.107388049 +0000 UTC m=+0.037491514 container create cc5ed2124c71ca90ac3b2d02b1780a75fa3f8fa8795f389be011cb1ec3a21492 (image=quay.io/ceph/ceph:v18, name=competent_shtern, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:22:36 np0005486808 systemd[1]: Started libpod-conmon-cc5ed2124c71ca90ac3b2d02b1780a75fa3f8fa8795f389be011cb1ec3a21492.scope.
Oct 14 04:22:36 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:22:36 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9647f70f1fdd41cce6d48d1fbcf358b4f495220f0fe8183cf6dc2ccce3188a4c/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:36 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9647f70f1fdd41cce6d48d1fbcf358b4f495220f0fe8183cf6dc2ccce3188a4c/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:36 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9647f70f1fdd41cce6d48d1fbcf358b4f495220f0fe8183cf6dc2ccce3188a4c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:36 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9647f70f1fdd41cce6d48d1fbcf358b4f495220f0fe8183cf6dc2ccce3188a4c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:36 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9647f70f1fdd41cce6d48d1fbcf358b4f495220f0fe8183cf6dc2ccce3188a4c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:36 np0005486808 podman[75706]: 2025-10-14 08:22:36.090054133 +0000 UTC m=+0.020157628 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:22:36 np0005486808 podman[75706]: 2025-10-14 08:22:36.204325617 +0000 UTC m=+0.134429172 container init cc5ed2124c71ca90ac3b2d02b1780a75fa3f8fa8795f389be011cb1ec3a21492 (image=quay.io/ceph/ceph:v18, name=competent_shtern, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 14 04:22:36 np0005486808 podman[75706]: 2025-10-14 08:22:36.211168469 +0000 UTC m=+0.141271934 container start cc5ed2124c71ca90ac3b2d02b1780a75fa3f8fa8795f389be011cb1ec3a21492 (image=quay.io/ceph/ceph:v18, name=competent_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 14 04:22:36 np0005486808 podman[75706]: 2025-10-14 08:22:36.214691498 +0000 UTC m=+0.144795043 container attach cc5ed2124c71ca90ac3b2d02b1780a75fa3f8fa8795f389be011cb1ec3a21492 (image=quay.io/ceph/ceph:v18, name=competent_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:22:36 np0005486808 ceph-mgr[74543]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 14 04:22:36 np0005486808 ceph-mon[74249]: Set ssh ssh_user
Oct 14 04:22:36 np0005486808 ceph-mon[74249]: Set ssh ssh_config
Oct 14 04:22:36 np0005486808 ceph-mon[74249]: ssh user set to ceph-admin. sudo will be used
Oct 14 04:22:36 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:36 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 04:22:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Oct 14 04:22:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:36 np0005486808 ceph-mgr[74543]: [cephadm INFO root] Set ssh ssh_identity_pub
Oct 14 04:22:36 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Oct 14 04:22:36 np0005486808 systemd[1]: libpod-cc5ed2124c71ca90ac3b2d02b1780a75fa3f8fa8795f389be011cb1ec3a21492.scope: Deactivated successfully.
Oct 14 04:22:36 np0005486808 conmon[75723]: conmon cc5ed2124c71ca90ac3b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cc5ed2124c71ca90ac3b2d02b1780a75fa3f8fa8795f389be011cb1ec3a21492.scope/container/memory.events
Oct 14 04:22:36 np0005486808 podman[75706]: 2025-10-14 08:22:36.767481988 +0000 UTC m=+0.697585493 container died cc5ed2124c71ca90ac3b2d02b1780a75fa3f8fa8795f389be011cb1ec3a21492 (image=quay.io/ceph/ceph:v18, name=competent_shtern, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 14 04:22:36 np0005486808 systemd[1]: var-lib-containers-storage-overlay-9647f70f1fdd41cce6d48d1fbcf358b4f495220f0fe8183cf6dc2ccce3188a4c-merged.mount: Deactivated successfully.
Oct 14 04:22:36 np0005486808 podman[75706]: 2025-10-14 08:22:36.819439575 +0000 UTC m=+0.749543080 container remove cc5ed2124c71ca90ac3b2d02b1780a75fa3f8fa8795f389be011cb1ec3a21492 (image=quay.io/ceph/ceph:v18, name=competent_shtern, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:22:36 np0005486808 systemd[1]: libpod-conmon-cc5ed2124c71ca90ac3b2d02b1780a75fa3f8fa8795f389be011cb1ec3a21492.scope: Deactivated successfully.
Oct 14 04:22:36 np0005486808 podman[75761]: 2025-10-14 08:22:36.920665191 +0000 UTC m=+0.070063123 container create 83e5c584a8a474e56f2950e202405b33f8f9135e190f497298b3e2eafeafbe08 (image=quay.io/ceph/ceph:v18, name=dreamy_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:22:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019922327 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:22:36 np0005486808 systemd[1]: Started libpod-conmon-83e5c584a8a474e56f2950e202405b33f8f9135e190f497298b3e2eafeafbe08.scope.
Oct 14 04:22:36 np0005486808 podman[75761]: 2025-10-14 08:22:36.890282747 +0000 UTC m=+0.039680729 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:22:36 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:22:36 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c71ccf3b21a50f9920566ad38130b2e18bb3a31b48f57ebd1e8952157a8eadb6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:36 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c71ccf3b21a50f9920566ad38130b2e18bb3a31b48f57ebd1e8952157a8eadb6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:37 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c71ccf3b21a50f9920566ad38130b2e18bb3a31b48f57ebd1e8952157a8eadb6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:37 np0005486808 podman[75761]: 2025-10-14 08:22:37.016399668 +0000 UTC m=+0.165797620 container init 83e5c584a8a474e56f2950e202405b33f8f9135e190f497298b3e2eafeafbe08 (image=quay.io/ceph/ceph:v18, name=dreamy_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:22:37 np0005486808 podman[75761]: 2025-10-14 08:22:37.028254376 +0000 UTC m=+0.177652278 container start 83e5c584a8a474e56f2950e202405b33f8f9135e190f497298b3e2eafeafbe08 (image=quay.io/ceph/ceph:v18, name=dreamy_yalow, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 14 04:22:37 np0005486808 podman[75761]: 2025-10-14 08:22:37.031449107 +0000 UTC m=+0.180847009 container attach 83e5c584a8a474e56f2950e202405b33f8f9135e190f497298b3e2eafeafbe08 (image=quay.io/ceph/ceph:v18, name=dreamy_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:22:37 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 04:22:37 np0005486808 dreamy_yalow[75778]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCp00JD9O7DobI3+gS+PeQO36N/wrF1psrzm0igt6pT4bpfNH1QIxnRGmrMfSW9pbBGACvxgdHY6XVERTI/SjzmMU/n8ic/CMvqCY/UqZYnDBlkkP9GgfRRl1U8GzyZkKEVU32uMOh/jlhUspicCaRE+8m4tOg3TFhgqpdvBjtpkzII3B7/C8RDwpBuker78NQx4fwseIkXoYerRUgqMV4nFEi4e8vwaaxoRYv6G+Y9500raVszgwrahthkFpNWol8/qWkYD/QkBWZIUjCFr7qrWu2QEL2NyaUgUQHGRqYdcpoum5FRBRKh8sy3VOCIvNvvgFZhrOWFY+zzslNUn2KUhwFwXXXSLPHPx6rgk6jCURTto/eXzzJJhAeOo1MVlMjNVGGVUym+R9dvTTzBox4nSlDbqVvmZO91dXmjfCoWXSZiq0EkXx4CyA8h1ZKY1hOosyQqzBy7hXwfNpROmB7WRuajap/XAqX+JUf9ns6mbrusgNcJliwe6qwSnnhFiG8= zuul@controller
Oct 14 04:22:37 np0005486808 systemd[1]: libpod-83e5c584a8a474e56f2950e202405b33f8f9135e190f497298b3e2eafeafbe08.scope: Deactivated successfully.
Oct 14 04:22:37 np0005486808 podman[75761]: 2025-10-14 08:22:37.575642682 +0000 UTC m=+0.725040584 container died 83e5c584a8a474e56f2950e202405b33f8f9135e190f497298b3e2eafeafbe08 (image=quay.io/ceph/ceph:v18, name=dreamy_yalow, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 14 04:22:37 np0005486808 systemd[1]: var-lib-containers-storage-overlay-c71ccf3b21a50f9920566ad38130b2e18bb3a31b48f57ebd1e8952157a8eadb6-merged.mount: Deactivated successfully.
Oct 14 04:22:37 np0005486808 podman[75761]: 2025-10-14 08:22:37.626572543 +0000 UTC m=+0.775970465 container remove 83e5c584a8a474e56f2950e202405b33f8f9135e190f497298b3e2eafeafbe08 (image=quay.io/ceph/ceph:v18, name=dreamy_yalow, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 04:22:37 np0005486808 systemd[1]: libpod-conmon-83e5c584a8a474e56f2950e202405b33f8f9135e190f497298b3e2eafeafbe08.scope: Deactivated successfully.
Oct 14 04:22:37 np0005486808 podman[75816]: 2025-10-14 08:22:37.689567797 +0000 UTC m=+0.040920970 container create ea6ed279eeb2b16642acb64626f0decccc3494f3c8996a03bb651fc6525316da (image=quay.io/ceph/ceph:v18, name=happy_hugle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:22:37 np0005486808 systemd[1]: Started libpod-conmon-ea6ed279eeb2b16642acb64626f0decccc3494f3c8996a03bb651fc6525316da.scope.
Oct 14 04:22:37 np0005486808 ceph-mon[74249]: Set ssh ssh_identity_key
Oct 14 04:22:37 np0005486808 ceph-mon[74249]: Set ssh private key
Oct 14 04:22:37 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:37 np0005486808 ceph-mon[74249]: Set ssh ssh_identity_pub
Oct 14 04:22:37 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:22:37 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1404c81ef299dc5ebbd1212f94da7687e015e0a2183888f96c43c1c410e9ed8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:37 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1404c81ef299dc5ebbd1212f94da7687e015e0a2183888f96c43c1c410e9ed8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:37 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1404c81ef299dc5ebbd1212f94da7687e015e0a2183888f96c43c1c410e9ed8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:37 np0005486808 podman[75816]: 2025-10-14 08:22:37.675517694 +0000 UTC m=+0.026870897 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:22:37 np0005486808 podman[75816]: 2025-10-14 08:22:37.771906388 +0000 UTC m=+0.123259651 container init ea6ed279eeb2b16642acb64626f0decccc3494f3c8996a03bb651fc6525316da (image=quay.io/ceph/ceph:v18, name=happy_hugle, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:22:37 np0005486808 podman[75816]: 2025-10-14 08:22:37.776703899 +0000 UTC m=+0.128057092 container start ea6ed279eeb2b16642acb64626f0decccc3494f3c8996a03bb651fc6525316da (image=quay.io/ceph/ceph:v18, name=happy_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 14 04:22:37 np0005486808 podman[75816]: 2025-10-14 08:22:37.780547735 +0000 UTC m=+0.131901018 container attach ea6ed279eeb2b16642acb64626f0decccc3494f3c8996a03bb651fc6525316da (image=quay.io/ceph/ceph:v18, name=happy_hugle, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:22:38 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 04:22:38 np0005486808 systemd-logind[799]: New session 22 of user ceph-admin.
Oct 14 04:22:38 np0005486808 systemd[1]: Created slice User Slice of UID 42477.
Oct 14 04:22:38 np0005486808 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct 14 04:22:38 np0005486808 ceph-mgr[74543]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 14 04:22:38 np0005486808 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct 14 04:22:38 np0005486808 systemd[1]: Starting User Manager for UID 42477...
Oct 14 04:22:38 np0005486808 systemd-logind[799]: New session 24 of user ceph-admin.
Oct 14 04:22:38 np0005486808 systemd[75862]: Queued start job for default target Main User Target.
Oct 14 04:22:38 np0005486808 systemd[75862]: Created slice User Application Slice.
Oct 14 04:22:38 np0005486808 systemd[75862]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 14 04:22:38 np0005486808 systemd[75862]: Started Daily Cleanup of User's Temporary Directories.
Oct 14 04:22:38 np0005486808 systemd[75862]: Reached target Paths.
Oct 14 04:22:38 np0005486808 systemd[75862]: Reached target Timers.
Oct 14 04:22:38 np0005486808 systemd[75862]: Starting D-Bus User Message Bus Socket...
Oct 14 04:22:38 np0005486808 systemd[75862]: Starting Create User's Volatile Files and Directories...
Oct 14 04:22:38 np0005486808 systemd[75862]: Listening on D-Bus User Message Bus Socket.
Oct 14 04:22:38 np0005486808 systemd[75862]: Reached target Sockets.
Oct 14 04:22:38 np0005486808 systemd[75862]: Finished Create User's Volatile Files and Directories.
Oct 14 04:22:38 np0005486808 systemd[75862]: Reached target Basic System.
Oct 14 04:22:38 np0005486808 systemd[75862]: Reached target Main User Target.
Oct 14 04:22:38 np0005486808 systemd[75862]: Startup finished in 196ms.
Oct 14 04:22:38 np0005486808 systemd[1]: Started User Manager for UID 42477.
Oct 14 04:22:38 np0005486808 systemd[1]: Started Session 22 of User ceph-admin.
Oct 14 04:22:38 np0005486808 systemd[1]: Started Session 24 of User ceph-admin.
Oct 14 04:22:39 np0005486808 systemd-logind[799]: New session 25 of user ceph-admin.
Oct 14 04:22:39 np0005486808 systemd[1]: Started Session 25 of User ceph-admin.
Oct 14 04:22:39 np0005486808 systemd-logind[799]: New session 26 of user ceph-admin.
Oct 14 04:22:39 np0005486808 systemd[1]: Started Session 26 of User ceph-admin.
Oct 14 04:22:40 np0005486808 ceph-mgr[74543]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Oct 14 04:22:40 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Oct 14 04:22:40 np0005486808 systemd-logind[799]: New session 27 of user ceph-admin.
Oct 14 04:22:40 np0005486808 systemd[1]: Started Session 27 of User ceph-admin.
Oct 14 04:22:40 np0005486808 ceph-mgr[74543]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 14 04:22:40 np0005486808 systemd-logind[799]: New session 28 of user ceph-admin.
Oct 14 04:22:40 np0005486808 systemd[1]: Started Session 28 of User ceph-admin.
Oct 14 04:22:40 np0005486808 ceph-mon[74249]: Deploying cephadm binary to compute-0
Oct 14 04:22:41 np0005486808 systemd-logind[799]: New session 29 of user ceph-admin.
Oct 14 04:22:41 np0005486808 systemd[1]: Started Session 29 of User ceph-admin.
Oct 14 04:22:41 np0005486808 systemd-logind[799]: New session 30 of user ceph-admin.
Oct 14 04:22:41 np0005486808 systemd[1]: Started Session 30 of User ceph-admin.
Oct 14 04:22:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053050 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:22:42 np0005486808 systemd-logind[799]: New session 31 of user ceph-admin.
Oct 14 04:22:42 np0005486808 systemd[1]: Started Session 31 of User ceph-admin.
Oct 14 04:22:42 np0005486808 ceph-mgr[74543]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 14 04:22:42 np0005486808 systemd-logind[799]: New session 32 of user ceph-admin.
Oct 14 04:22:42 np0005486808 systemd[1]: Started Session 32 of User ceph-admin.
Oct 14 04:22:43 np0005486808 systemd-logind[799]: New session 33 of user ceph-admin.
Oct 14 04:22:43 np0005486808 systemd[1]: Started Session 33 of User ceph-admin.
Oct 14 04:22:43 np0005486808 systemd-logind[799]: New session 34 of user ceph-admin.
Oct 14 04:22:43 np0005486808 systemd[1]: Started Session 34 of User ceph-admin.
Oct 14 04:22:44 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 14 04:22:44 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:44 np0005486808 ceph-mgr[74543]: [cephadm INFO root] Added host compute-0
Oct 14 04:22:44 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [INF] : Added host compute-0
Oct 14 04:22:44 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 14 04:22:44 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 14 04:22:44 np0005486808 happy_hugle[75832]: Added host 'compute-0' with addr '192.168.122.100'
Oct 14 04:22:44 np0005486808 systemd[1]: libpod-ea6ed279eeb2b16642acb64626f0decccc3494f3c8996a03bb651fc6525316da.scope: Deactivated successfully.
Oct 14 04:22:44 np0005486808 podman[76473]: 2025-10-14 08:22:44.394507713 +0000 UTC m=+0.029802991 container died ea6ed279eeb2b16642acb64626f0decccc3494f3c8996a03bb651fc6525316da (image=quay.io/ceph/ceph:v18, name=happy_hugle, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:22:44 np0005486808 systemd[1]: var-lib-containers-storage-overlay-d1404c81ef299dc5ebbd1212f94da7687e015e0a2183888f96c43c1c410e9ed8-merged.mount: Deactivated successfully.
Oct 14 04:22:44 np0005486808 podman[76473]: 2025-10-14 08:22:44.447256749 +0000 UTC m=+0.082551997 container remove ea6ed279eeb2b16642acb64626f0decccc3494f3c8996a03bb651fc6525316da (image=quay.io/ceph/ceph:v18, name=happy_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:22:44 np0005486808 systemd[1]: libpod-conmon-ea6ed279eeb2b16642acb64626f0decccc3494f3c8996a03bb651fc6525316da.scope: Deactivated successfully.
Oct 14 04:22:44 np0005486808 podman[76527]: 2025-10-14 08:22:44.524496322 +0000 UTC m=+0.044096900 container create 5b1893e73efaa9d939f1266bda1a16a2f06e28e9d40bd9373d6f5ca34bb29f60 (image=quay.io/ceph/ceph:v18, name=objective_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 14 04:22:44 np0005486808 systemd[1]: Started libpod-conmon-5b1893e73efaa9d939f1266bda1a16a2f06e28e9d40bd9373d6f5ca34bb29f60.scope.
Oct 14 04:22:44 np0005486808 ceph-mgr[74543]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 14 04:22:44 np0005486808 podman[76527]: 2025-10-14 08:22:44.505428142 +0000 UTC m=+0.025028720 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:22:44 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:22:44 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2ecfece2c8f0127b814cac2a2fd592600f4aa69c64ee8affe5d31b89bea441a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:44 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2ecfece2c8f0127b814cac2a2fd592600f4aa69c64ee8affe5d31b89bea441a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:44 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2ecfece2c8f0127b814cac2a2fd592600f4aa69c64ee8affe5d31b89bea441a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:44 np0005486808 podman[76527]: 2025-10-14 08:22:44.622274571 +0000 UTC m=+0.141875159 container init 5b1893e73efaa9d939f1266bda1a16a2f06e28e9d40bd9373d6f5ca34bb29f60 (image=quay.io/ceph/ceph:v18, name=objective_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:22:44 np0005486808 podman[76527]: 2025-10-14 08:22:44.630775584 +0000 UTC m=+0.150376142 container start 5b1893e73efaa9d939f1266bda1a16a2f06e28e9d40bd9373d6f5ca34bb29f60 (image=quay.io/ceph/ceph:v18, name=objective_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 14 04:22:44 np0005486808 podman[76527]: 2025-10-14 08:22:44.634402356 +0000 UTC m=+0.154003184 container attach 5b1893e73efaa9d939f1266bda1a16a2f06e28e9d40bd9373d6f5ca34bb29f60 (image=quay.io/ceph/ceph:v18, name=objective_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 14 04:22:44 np0005486808 podman[76626]: 2025-10-14 08:22:44.870543164 +0000 UTC m=+0.050960232 container create e7715a1de484b8afdd7edb2df1d746adb52f89cbc0b00786c157d0e20d239bc0 (image=quay.io/ceph/ceph:v18, name=cool_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 14 04:22:44 np0005486808 systemd[1]: Started libpod-conmon-e7715a1de484b8afdd7edb2df1d746adb52f89cbc0b00786c157d0e20d239bc0.scope.
Oct 14 04:22:44 np0005486808 podman[76626]: 2025-10-14 08:22:44.84374733 +0000 UTC m=+0.024164398 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:22:44 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:22:44 np0005486808 podman[76626]: 2025-10-14 08:22:44.951650254 +0000 UTC m=+0.132067332 container init e7715a1de484b8afdd7edb2df1d746adb52f89cbc0b00786c157d0e20d239bc0 (image=quay.io/ceph/ceph:v18, name=cool_hellman, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 14 04:22:44 np0005486808 podman[76626]: 2025-10-14 08:22:44.958084376 +0000 UTC m=+0.138501484 container start e7715a1de484b8afdd7edb2df1d746adb52f89cbc0b00786c157d0e20d239bc0 (image=quay.io/ceph/ceph:v18, name=cool_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 14 04:22:44 np0005486808 podman[76626]: 2025-10-14 08:22:44.962465306 +0000 UTC m=+0.142882404 container attach e7715a1de484b8afdd7edb2df1d746adb52f89cbc0b00786c157d0e20d239bc0 (image=quay.io/ceph/ceph:v18, name=cool_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 14 04:22:45 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 04:22:45 np0005486808 ceph-mgr[74543]: [cephadm INFO root] Saving service mon spec with placement count:5
Oct 14 04:22:45 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Oct 14 04:22:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct 14 04:22:45 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:45 np0005486808 objective_matsumoto[76576]: Scheduled mon update...
Oct 14 04:22:45 np0005486808 systemd[1]: libpod-5b1893e73efaa9d939f1266bda1a16a2f06e28e9d40bd9373d6f5ca34bb29f60.scope: Deactivated successfully.
Oct 14 04:22:45 np0005486808 podman[76527]: 2025-10-14 08:22:45.197825924 +0000 UTC m=+0.717426522 container died 5b1893e73efaa9d939f1266bda1a16a2f06e28e9d40bd9373d6f5ca34bb29f60 (image=quay.io/ceph/ceph:v18, name=objective_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:22:45 np0005486808 systemd[1]: var-lib-containers-storage-overlay-a2ecfece2c8f0127b814cac2a2fd592600f4aa69c64ee8affe5d31b89bea441a-merged.mount: Deactivated successfully.
Oct 14 04:22:45 np0005486808 podman[76527]: 2025-10-14 08:22:45.236754343 +0000 UTC m=+0.756354901 container remove 5b1893e73efaa9d939f1266bda1a16a2f06e28e9d40bd9373d6f5ca34bb29f60 (image=quay.io/ceph/ceph:v18, name=objective_matsumoto, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:22:45 np0005486808 systemd[1]: libpod-conmon-5b1893e73efaa9d939f1266bda1a16a2f06e28e9d40bd9373d6f5ca34bb29f60.scope: Deactivated successfully.
Oct 14 04:22:45 np0005486808 cool_hellman[76642]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Oct 14 04:22:45 np0005486808 systemd[1]: libpod-e7715a1de484b8afdd7edb2df1d746adb52f89cbc0b00786c157d0e20d239bc0.scope: Deactivated successfully.
Oct 14 04:22:45 np0005486808 podman[76626]: 2025-10-14 08:22:45.287986982 +0000 UTC m=+0.468404050 container died e7715a1de484b8afdd7edb2df1d746adb52f89cbc0b00786c157d0e20d239bc0 (image=quay.io/ceph/ceph:v18, name=cool_hellman, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 04:22:45 np0005486808 podman[76680]: 2025-10-14 08:22:45.303220965 +0000 UTC m=+0.047226629 container create ecd22d8df2bad83c379747739f12e61c88416126c51057f7f9568471a8494e89 (image=quay.io/ceph/ceph:v18, name=quirky_thompson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 14 04:22:45 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:45 np0005486808 ceph-mon[74249]: Added host compute-0
Oct 14 04:22:45 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:45 np0005486808 systemd[1]: Started libpod-conmon-ecd22d8df2bad83c379747739f12e61c88416126c51057f7f9568471a8494e89.scope.
Oct 14 04:22:45 np0005486808 podman[76626]: 2025-10-14 08:22:45.346734769 +0000 UTC m=+0.527151837 container remove e7715a1de484b8afdd7edb2df1d746adb52f89cbc0b00786c157d0e20d239bc0 (image=quay.io/ceph/ceph:v18, name=cool_hellman, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 14 04:22:45 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:22:45 np0005486808 systemd[1]: libpod-conmon-e7715a1de484b8afdd7edb2df1d746adb52f89cbc0b00786c157d0e20d239bc0.scope: Deactivated successfully.
Oct 14 04:22:45 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a1c2c28115ab6738ebe95495f320244ebf27f10538a637656759cdfd05b56b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:45 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a1c2c28115ab6738ebe95495f320244ebf27f10538a637656759cdfd05b56b4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:45 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a1c2c28115ab6738ebe95495f320244ebf27f10538a637656759cdfd05b56b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:45 np0005486808 podman[76680]: 2025-10-14 08:22:45.280540665 +0000 UTC m=+0.024546349 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:22:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Oct 14 04:22:45 np0005486808 podman[76680]: 2025-10-14 08:22:45.394661234 +0000 UTC m=+0.138666908 container init ecd22d8df2bad83c379747739f12e61c88416126c51057f7f9568471a8494e89 (image=quay.io/ceph/ceph:v18, name=quirky_thompson, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 14 04:22:45 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:45 np0005486808 podman[76680]: 2025-10-14 08:22:45.40205186 +0000 UTC m=+0.146057514 container start ecd22d8df2bad83c379747739f12e61c88416126c51057f7f9568471a8494e89 (image=quay.io/ceph/ceph:v18, name=quirky_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:22:45 np0005486808 podman[76680]: 2025-10-14 08:22:45.405109697 +0000 UTC m=+0.149115351 container attach ecd22d8df2bad83c379747739f12e61c88416126c51057f7f9568471a8494e89 (image=quay.io/ceph/ceph:v18, name=quirky_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:22:45 np0005486808 systemd[1]: var-lib-containers-storage-overlay-6ca7787b285cb30b15f300d62e6adc7124d044a4b7037a83813c28490607cdee-merged.mount: Deactivated successfully.
Oct 14 04:22:45 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 04:22:45 np0005486808 ceph-mgr[74543]: [cephadm INFO root] Saving service mgr spec with placement count:2
Oct 14 04:22:45 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Oct 14 04:22:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 14 04:22:45 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:45 np0005486808 quirky_thompson[76708]: Scheduled mgr update...
Oct 14 04:22:45 np0005486808 systemd[1]: libpod-ecd22d8df2bad83c379747739f12e61c88416126c51057f7f9568471a8494e89.scope: Deactivated successfully.
Oct 14 04:22:45 np0005486808 podman[76680]: 2025-10-14 08:22:45.9682807 +0000 UTC m=+0.712286374 container died ecd22d8df2bad83c379747739f12e61c88416126c51057f7f9568471a8494e89 (image=quay.io/ceph/ceph:v18, name=quirky_thompson, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:22:46 np0005486808 systemd[1]: var-lib-containers-storage-overlay-5a1c2c28115ab6738ebe95495f320244ebf27f10538a637656759cdfd05b56b4-merged.mount: Deactivated successfully.
Oct 14 04:22:46 np0005486808 podman[76680]: 2025-10-14 08:22:46.02193989 +0000 UTC m=+0.765945584 container remove ecd22d8df2bad83c379747739f12e61c88416126c51057f7f9568471a8494e89 (image=quay.io/ceph/ceph:v18, name=quirky_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 14 04:22:46 np0005486808 systemd[1]: libpod-conmon-ecd22d8df2bad83c379747739f12e61c88416126c51057f7f9568471a8494e89.scope: Deactivated successfully.
Oct 14 04:22:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:22:46 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:46 np0005486808 podman[76866]: 2025-10-14 08:22:46.118748724 +0000 UTC m=+0.067973000 container create 8678022fcc7494c31aa2e40504821005105e1ab4be5c61353e679200d58e0e96 (image=quay.io/ceph/ceph:v18, name=heuristic_elion, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 14 04:22:46 np0005486808 systemd[1]: Started libpod-conmon-8678022fcc7494c31aa2e40504821005105e1ab4be5c61353e679200d58e0e96.scope.
Oct 14 04:22:46 np0005486808 podman[76866]: 2025-10-14 08:22:46.088533114 +0000 UTC m=+0.037757440 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:22:46 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:22:46 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d6168a3be6b98828b2a72d97f1bcdb2aad18384ef7b6fc0f28db2f78c94660/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:46 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d6168a3be6b98828b2a72d97f1bcdb2aad18384ef7b6fc0f28db2f78c94660/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:46 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d6168a3be6b98828b2a72d97f1bcdb2aad18384ef7b6fc0f28db2f78c94660/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:46 np0005486808 podman[76866]: 2025-10-14 08:22:46.242581088 +0000 UTC m=+0.191805354 container init 8678022fcc7494c31aa2e40504821005105e1ab4be5c61353e679200d58e0e96 (image=quay.io/ceph/ceph:v18, name=heuristic_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:22:46 np0005486808 podman[76866]: 2025-10-14 08:22:46.254227961 +0000 UTC m=+0.203452237 container start 8678022fcc7494c31aa2e40504821005105e1ab4be5c61353e679200d58e0e96 (image=quay.io/ceph/ceph:v18, name=heuristic_elion, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 14 04:22:46 np0005486808 podman[76866]: 2025-10-14 08:22:46.263062623 +0000 UTC m=+0.212286889 container attach 8678022fcc7494c31aa2e40504821005105e1ab4be5c61353e679200d58e0e96 (image=quay.io/ceph/ceph:v18, name=heuristic_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:22:46 np0005486808 ceph-mon[74249]: Saving service mon spec with placement count:5
Oct 14 04:22:46 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:46 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:46 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:46 np0005486808 ceph-mgr[74543]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 14 04:22:46 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 04:22:46 np0005486808 ceph-mgr[74543]: [cephadm INFO root] Saving service crash spec with placement *
Oct 14 04:22:46 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Oct 14 04:22:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct 14 04:22:46 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:46 np0005486808 heuristic_elion[76916]: Scheduled crash update...
Oct 14 04:22:46 np0005486808 systemd[1]: libpod-8678022fcc7494c31aa2e40504821005105e1ab4be5c61353e679200d58e0e96.scope: Deactivated successfully.
Oct 14 04:22:46 np0005486808 podman[76866]: 2025-10-14 08:22:46.860379115 +0000 UTC m=+0.809603391 container died 8678022fcc7494c31aa2e40504821005105e1ab4be5c61353e679200d58e0e96 (image=quay.io/ceph/ceph:v18, name=heuristic_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:22:46 np0005486808 podman[77076]: 2025-10-14 08:22:46.888273706 +0000 UTC m=+0.088737592 container exec c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:22:46 np0005486808 systemd[1]: var-lib-containers-storage-overlay-c2d6168a3be6b98828b2a72d97f1bcdb2aad18384ef7b6fc0f28db2f78c94660-merged.mount: Deactivated successfully.
Oct 14 04:22:46 np0005486808 podman[76866]: 2025-10-14 08:22:46.916515087 +0000 UTC m=+0.865739323 container remove 8678022fcc7494c31aa2e40504821005105e1ab4be5c61353e679200d58e0e96 (image=quay.io/ceph/ceph:v18, name=heuristic_elion, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:22:46 np0005486808 systemd[1]: libpod-conmon-8678022fcc7494c31aa2e40504821005105e1ab4be5c61353e679200d58e0e96.scope: Deactivated successfully.
Oct 14 04:22:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054710 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:22:47 np0005486808 podman[77109]: 2025-10-14 08:22:47.022136593 +0000 UTC m=+0.075131580 container create a6a6fa2238aad33b980fb8d0c2ee9f8c4ffa8bbc404be1c4ad3c06808dcb283f (image=quay.io/ceph/ceph:v18, name=loving_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 14 04:22:47 np0005486808 systemd[1]: Started libpod-conmon-a6a6fa2238aad33b980fb8d0c2ee9f8c4ffa8bbc404be1c4ad3c06808dcb283f.scope.
Oct 14 04:22:47 np0005486808 podman[77109]: 2025-10-14 08:22:46.992951679 +0000 UTC m=+0.045946706 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:22:47 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:22:47 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7972f5852d12e77a8824123551500017cf7eb4eefcfc3ad720ab97be3321c70/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:47 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7972f5852d12e77a8824123551500017cf7eb4eefcfc3ad720ab97be3321c70/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:47 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7972f5852d12e77a8824123551500017cf7eb4eefcfc3ad720ab97be3321c70/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:47 np0005486808 podman[77109]: 2025-10-14 08:22:47.114751192 +0000 UTC m=+0.167746189 container init a6a6fa2238aad33b980fb8d0c2ee9f8c4ffa8bbc404be1c4ad3c06808dcb283f (image=quay.io/ceph/ceph:v18, name=loving_spence, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Oct 14 04:22:47 np0005486808 podman[77109]: 2025-10-14 08:22:47.125762059 +0000 UTC m=+0.178757046 container start a6a6fa2238aad33b980fb8d0c2ee9f8c4ffa8bbc404be1c4ad3c06808dcb283f (image=quay.io/ceph/ceph:v18, name=loving_spence, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 04:22:47 np0005486808 podman[77109]: 2025-10-14 08:22:47.130087808 +0000 UTC m=+0.183082845 container attach a6a6fa2238aad33b980fb8d0c2ee9f8c4ffa8bbc404be1c4ad3c06808dcb283f (image=quay.io/ceph/ceph:v18, name=loving_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:22:47 np0005486808 podman[77076]: 2025-10-14 08:22:47.174416412 +0000 UTC m=+0.374880338 container exec_died c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:22:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:22:47 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Oct 14 04:22:47 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2707692160' entity='client.admin' 
Oct 14 04:22:47 np0005486808 systemd[1]: libpod-a6a6fa2238aad33b980fb8d0c2ee9f8c4ffa8bbc404be1c4ad3c06808dcb283f.scope: Deactivated successfully.
Oct 14 04:22:47 np0005486808 podman[77109]: 2025-10-14 08:22:47.716328139 +0000 UTC m=+0.769323106 container died a6a6fa2238aad33b980fb8d0c2ee9f8c4ffa8bbc404be1c4ad3c06808dcb283f (image=quay.io/ceph/ceph:v18, name=loving_spence, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:22:47 np0005486808 systemd[1]: var-lib-containers-storage-overlay-e7972f5852d12e77a8824123551500017cf7eb4eefcfc3ad720ab97be3321c70-merged.mount: Deactivated successfully.
Oct 14 04:22:47 np0005486808 podman[77109]: 2025-10-14 08:22:47.762768537 +0000 UTC m=+0.815763494 container remove a6a6fa2238aad33b980fb8d0c2ee9f8c4ffa8bbc404be1c4ad3c06808dcb283f (image=quay.io/ceph/ceph:v18, name=loving_spence, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 14 04:22:47 np0005486808 systemd[1]: libpod-conmon-a6a6fa2238aad33b980fb8d0c2ee9f8c4ffa8bbc404be1c4ad3c06808dcb283f.scope: Deactivated successfully.
Oct 14 04:22:47 np0005486808 ceph-mon[74249]: Saving service mgr spec with placement count:2
Oct 14 04:22:47 np0005486808 ceph-mon[74249]: Saving service crash spec with placement *
Oct 14 04:22:47 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:47 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:47 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/2707692160' entity='client.admin' 
Oct 14 04:22:47 np0005486808 podman[77295]: 2025-10-14 08:22:47.843162699 +0000 UTC m=+0.054578683 container create 3a08ed2fe44f6252d8c0366d515cceb3f55c317e1bca4c23badfed52aecd7c03 (image=quay.io/ceph/ceph:v18, name=gifted_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:22:47 np0005486808 systemd[1]: Started libpod-conmon-3a08ed2fe44f6252d8c0366d515cceb3f55c317e1bca4c23badfed52aecd7c03.scope.
Oct 14 04:22:47 np0005486808 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77323 (sysctl)
Oct 14 04:22:47 np0005486808 podman[77295]: 2025-10-14 08:22:47.81693704 +0000 UTC m=+0.028353034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:22:47 np0005486808 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Oct 14 04:22:47 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:22:47 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7421392f4c102278dbbf3d49383d1af7e1fb487d11b15e721d8b281623024d6d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:47 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7421392f4c102278dbbf3d49383d1af7e1fb487d11b15e721d8b281623024d6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:47 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7421392f4c102278dbbf3d49383d1af7e1fb487d11b15e721d8b281623024d6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:47 np0005486808 podman[77295]: 2025-10-14 08:22:47.953129655 +0000 UTC m=+0.164545689 container init 3a08ed2fe44f6252d8c0366d515cceb3f55c317e1bca4c23badfed52aecd7c03 (image=quay.io/ceph/ceph:v18, name=gifted_sinoussi, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 14 04:22:47 np0005486808 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Oct 14 04:22:47 np0005486808 podman[77295]: 2025-10-14 08:22:47.963935156 +0000 UTC m=+0.175351140 container start 3a08ed2fe44f6252d8c0366d515cceb3f55c317e1bca4c23badfed52aecd7c03 (image=quay.io/ceph/ceph:v18, name=gifted_sinoussi, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 14 04:22:47 np0005486808 podman[77295]: 2025-10-14 08:22:47.968140442 +0000 UTC m=+0.179556426 container attach 3a08ed2fe44f6252d8c0366d515cceb3f55c317e1bca4c23badfed52aecd7c03 (image=quay.io/ceph/ceph:v18, name=gifted_sinoussi, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Oct 14 04:22:48 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 04:22:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Oct 14 04:22:48 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:48 np0005486808 systemd[1]: libpod-3a08ed2fe44f6252d8c0366d515cceb3f55c317e1bca4c23badfed52aecd7c03.scope: Deactivated successfully.
Oct 14 04:22:48 np0005486808 ceph-mgr[74543]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 14 04:22:48 np0005486808 podman[77427]: 2025-10-14 08:22:48.601334956 +0000 UTC m=+0.031746350 container died 3a08ed2fe44f6252d8c0366d515cceb3f55c317e1bca4c23badfed52aecd7c03 (image=quay.io/ceph/ceph:v18, name=gifted_sinoussi, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 14 04:22:48 np0005486808 systemd[1]: var-lib-containers-storage-overlay-7421392f4c102278dbbf3d49383d1af7e1fb487d11b15e721d8b281623024d6d-merged.mount: Deactivated successfully.
Oct 14 04:22:48 np0005486808 podman[77427]: 2025-10-14 08:22:48.645612189 +0000 UTC m=+0.076023493 container remove 3a08ed2fe44f6252d8c0366d515cceb3f55c317e1bca4c23badfed52aecd7c03 (image=quay.io/ceph/ceph:v18, name=gifted_sinoussi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:22:48 np0005486808 systemd[1]: libpod-conmon-3a08ed2fe44f6252d8c0366d515cceb3f55c317e1bca4c23badfed52aecd7c03.scope: Deactivated successfully.
Oct 14 04:22:48 np0005486808 podman[77486]: 2025-10-14 08:22:48.74026147 +0000 UTC m=+0.061894108 container create c2902a0fc548335316e7eccdd28f1ead0d399c5959380ed59d468606fdf2a865 (image=quay.io/ceph/ceph:v18, name=modest_montalcini, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:22:48 np0005486808 systemd[1]: Started libpod-conmon-c2902a0fc548335316e7eccdd28f1ead0d399c5959380ed59d468606fdf2a865.scope.
Oct 14 04:22:48 np0005486808 podman[77486]: 2025-10-14 08:22:48.70887198 +0000 UTC m=+0.030504658 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:22:48 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:22:48 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65b46d4c4493b298e2cb1c679bbb355569e620b222525cf130ca075f6736f32d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:48 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65b46d4c4493b298e2cb1c679bbb355569e620b222525cf130ca075f6736f32d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:48 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65b46d4c4493b298e2cb1c679bbb355569e620b222525cf130ca075f6736f32d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:48 np0005486808 podman[77486]: 2025-10-14 08:22:48.848401049 +0000 UTC m=+0.170033667 container init c2902a0fc548335316e7eccdd28f1ead0d399c5959380ed59d468606fdf2a865 (image=quay.io/ceph/ceph:v18, name=modest_montalcini, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:22:48 np0005486808 podman[77486]: 2025-10-14 08:22:48.854126593 +0000 UTC m=+0.175759191 container start c2902a0fc548335316e7eccdd28f1ead0d399c5959380ed59d468606fdf2a865 (image=quay.io/ceph/ceph:v18, name=modest_montalcini, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 14 04:22:48 np0005486808 podman[77486]: 2025-10-14 08:22:48.857257912 +0000 UTC m=+0.178890510 container attach c2902a0fc548335316e7eccdd28f1ead0d399c5959380ed59d468606fdf2a865 (image=quay.io/ceph/ceph:v18, name=modest_montalcini, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 14 04:22:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:22:48 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:49 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 04:22:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 14 04:22:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:49 np0005486808 ceph-mgr[74543]: [cephadm INFO root] Added label _admin to host compute-0
Oct 14 04:22:49 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Oct 14 04:22:49 np0005486808 modest_montalcini[77510]: Added label _admin to host compute-0
Oct 14 04:22:49 np0005486808 podman[77486]: 2025-10-14 08:22:49.419876741 +0000 UTC m=+0.741509349 container died c2902a0fc548335316e7eccdd28f1ead0d399c5959380ed59d468606fdf2a865 (image=quay.io/ceph/ceph:v18, name=modest_montalcini, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:22:49 np0005486808 systemd[1]: libpod-c2902a0fc548335316e7eccdd28f1ead0d399c5959380ed59d468606fdf2a865.scope: Deactivated successfully.
Oct 14 04:22:49 np0005486808 systemd[1]: var-lib-containers-storage-overlay-65b46d4c4493b298e2cb1c679bbb355569e620b222525cf130ca075f6736f32d-merged.mount: Deactivated successfully.
Oct 14 04:22:49 np0005486808 podman[77486]: 2025-10-14 08:22:49.476432003 +0000 UTC m=+0.798064611 container remove c2902a0fc548335316e7eccdd28f1ead0d399c5959380ed59d468606fdf2a865 (image=quay.io/ceph/ceph:v18, name=modest_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 14 04:22:49 np0005486808 systemd[1]: libpod-conmon-c2902a0fc548335316e7eccdd28f1ead0d399c5959380ed59d468606fdf2a865.scope: Deactivated successfully.
Oct 14 04:22:49 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:49 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:49 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:49 np0005486808 podman[77683]: 2025-10-14 08:22:49.572119509 +0000 UTC m=+0.064392410 container create 9f01257aaf1e559dc50d8cac334f7e6bc83a9814ade17cf1c407c11a240ff295 (image=quay.io/ceph/ceph:v18, name=frosty_noether, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:22:49 np0005486808 systemd[1]: Started libpod-conmon-9f01257aaf1e559dc50d8cac334f7e6bc83a9814ade17cf1c407c11a240ff295.scope.
Oct 14 04:22:49 np0005486808 podman[77683]: 2025-10-14 08:22:49.544409552 +0000 UTC m=+0.036682503 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:22:49 np0005486808 podman[77710]: 2025-10-14 08:22:49.638103759 +0000 UTC m=+0.050987663 container create 3f92f7f74207f44b0031f89be183a84f81984878b8b8434232e2394b46648cb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 14 04:22:49 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:22:49 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1850f79fcde717676690b39ae747835edde1df0937a3c42d7d2b6ecec1d30f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:49 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1850f79fcde717676690b39ae747835edde1df0937a3c42d7d2b6ecec1d30f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:49 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1850f79fcde717676690b39ae747835edde1df0937a3c42d7d2b6ecec1d30f6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:49 np0005486808 podman[77683]: 2025-10-14 08:22:49.663243341 +0000 UTC m=+0.155516302 container init 9f01257aaf1e559dc50d8cac334f7e6bc83a9814ade17cf1c407c11a240ff295 (image=quay.io/ceph/ceph:v18, name=frosty_noether, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 14 04:22:49 np0005486808 systemd[1]: Started libpod-conmon-3f92f7f74207f44b0031f89be183a84f81984878b8b8434232e2394b46648cb3.scope.
Oct 14 04:22:49 np0005486808 podman[77683]: 2025-10-14 08:22:49.674616957 +0000 UTC m=+0.166889838 container start 9f01257aaf1e559dc50d8cac334f7e6bc83a9814ade17cf1c407c11a240ff295 (image=quay.io/ceph/ceph:v18, name=frosty_noether, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 14 04:22:49 np0005486808 podman[77683]: 2025-10-14 08:22:49.678141375 +0000 UTC m=+0.170414286 container attach 9f01257aaf1e559dc50d8cac334f7e6bc83a9814ade17cf1c407c11a240ff295 (image=quay.io/ceph/ceph:v18, name=frosty_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:22:49 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:22:49 np0005486808 podman[77710]: 2025-10-14 08:22:49.614210918 +0000 UTC m=+0.027094862 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:22:49 np0005486808 podman[77710]: 2025-10-14 08:22:49.707471683 +0000 UTC m=+0.120355547 container init 3f92f7f74207f44b0031f89be183a84f81984878b8b8434232e2394b46648cb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 04:22:49 np0005486808 podman[77710]: 2025-10-14 08:22:49.716442219 +0000 UTC m=+0.129326083 container start 3f92f7f74207f44b0031f89be183a84f81984878b8b8434232e2394b46648cb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mcnulty, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:22:49 np0005486808 podman[77710]: 2025-10-14 08:22:49.719411983 +0000 UTC m=+0.132295857 container attach 3f92f7f74207f44b0031f89be183a84f81984878b8b8434232e2394b46648cb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mcnulty, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 14 04:22:49 np0005486808 loving_mcnulty[77732]: 167 167
Oct 14 04:22:49 np0005486808 systemd[1]: libpod-3f92f7f74207f44b0031f89be183a84f81984878b8b8434232e2394b46648cb3.scope: Deactivated successfully.
Oct 14 04:22:49 np0005486808 podman[77710]: 2025-10-14 08:22:49.723847075 +0000 UTC m=+0.136730949 container died 3f92f7f74207f44b0031f89be183a84f81984878b8b8434232e2394b46648cb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 14 04:22:49 np0005486808 systemd[1]: var-lib-containers-storage-overlay-9e32c6a68187015730f49a36176743c31d623e874065fa687c7f289499ffa369-merged.mount: Deactivated successfully.
Oct 14 04:22:49 np0005486808 podman[77710]: 2025-10-14 08:22:49.767635126 +0000 UTC m=+0.180519030 container remove 3f92f7f74207f44b0031f89be183a84f81984878b8b8434232e2394b46648cb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mcnulty, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:22:49 np0005486808 systemd[1]: libpod-conmon-3f92f7f74207f44b0031f89be183a84f81984878b8b8434232e2394b46648cb3.scope: Deactivated successfully.
Oct 14 04:22:50 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Oct 14 04:22:50 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1523180777' entity='client.admin' 
Oct 14 04:22:50 np0005486808 systemd[1]: libpod-9f01257aaf1e559dc50d8cac334f7e6bc83a9814ade17cf1c407c11a240ff295.scope: Deactivated successfully.
Oct 14 04:22:50 np0005486808 podman[77683]: 2025-10-14 08:22:50.264806759 +0000 UTC m=+0.757079670 container died 9f01257aaf1e559dc50d8cac334f7e6bc83a9814ade17cf1c407c11a240ff295 (image=quay.io/ceph/ceph:v18, name=frosty_noether, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Oct 14 04:22:50 np0005486808 systemd[1]: var-lib-containers-storage-overlay-c1850f79fcde717676690b39ae747835edde1df0937a3c42d7d2b6ecec1d30f6-merged.mount: Deactivated successfully.
Oct 14 04:22:50 np0005486808 podman[77683]: 2025-10-14 08:22:50.305578485 +0000 UTC m=+0.797851366 container remove 9f01257aaf1e559dc50d8cac334f7e6bc83a9814ade17cf1c407c11a240ff295 (image=quay.io/ceph/ceph:v18, name=frosty_noether, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:22:50 np0005486808 systemd[1]: libpod-conmon-9f01257aaf1e559dc50d8cac334f7e6bc83a9814ade17cf1c407c11a240ff295.scope: Deactivated successfully.
Oct 14 04:22:50 np0005486808 podman[77779]: 2025-10-14 08:22:50.371613085 +0000 UTC m=+0.047401853 container create 5949f11033339b5b1c3218be0a951f620e14c8eaf5b538213727ea13c2d232f4 (image=quay.io/ceph/ceph:v18, name=strange_kowalevski, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 14 04:22:50 np0005486808 systemd[1]: Started libpod-conmon-5949f11033339b5b1c3218be0a951f620e14c8eaf5b538213727ea13c2d232f4.scope.
Oct 14 04:22:50 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:22:50 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68c5de0b89a5eb30ab4ae5db087f267c008b27f72190f5d6224975390d0e0bba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:50 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68c5de0b89a5eb30ab4ae5db087f267c008b27f72190f5d6224975390d0e0bba/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:50 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68c5de0b89a5eb30ab4ae5db087f267c008b27f72190f5d6224975390d0e0bba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:50 np0005486808 podman[77779]: 2025-10-14 08:22:50.344093473 +0000 UTC m=+0.019882221 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:22:50 np0005486808 podman[77779]: 2025-10-14 08:22:50.445160014 +0000 UTC m=+0.120948762 container init 5949f11033339b5b1c3218be0a951f620e14c8eaf5b538213727ea13c2d232f4 (image=quay.io/ceph/ceph:v18, name=strange_kowalevski, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:22:50 np0005486808 podman[77779]: 2025-10-14 08:22:50.449282018 +0000 UTC m=+0.125070746 container start 5949f11033339b5b1c3218be0a951f620e14c8eaf5b538213727ea13c2d232f4 (image=quay.io/ceph/ceph:v18, name=strange_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 04:22:50 np0005486808 podman[77779]: 2025-10-14 08:22:50.452115799 +0000 UTC m=+0.127904547 container attach 5949f11033339b5b1c3218be0a951f620e14c8eaf5b538213727ea13c2d232f4 (image=quay.io/ceph/ceph:v18, name=strange_kowalevski, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 14 04:22:50 np0005486808 ceph-mon[74249]: Added label _admin to host compute-0
Oct 14 04:22:50 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/1523180777' entity='client.admin' 
Oct 14 04:22:50 np0005486808 ceph-mgr[74543]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 14 04:22:51 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Oct 14 04:22:51 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/948265339' entity='client.admin' 
Oct 14 04:22:51 np0005486808 strange_kowalevski[77796]: set mgr/dashboard/cluster/status
Oct 14 04:22:51 np0005486808 systemd[1]: libpod-5949f11033339b5b1c3218be0a951f620e14c8eaf5b538213727ea13c2d232f4.scope: Deactivated successfully.
Oct 14 04:22:51 np0005486808 podman[77779]: 2025-10-14 08:22:51.075709231 +0000 UTC m=+0.751497989 container died 5949f11033339b5b1c3218be0a951f620e14c8eaf5b538213727ea13c2d232f4 (image=quay.io/ceph/ceph:v18, name=strange_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:22:51 np0005486808 systemd[1]: var-lib-containers-storage-overlay-68c5de0b89a5eb30ab4ae5db087f267c008b27f72190f5d6224975390d0e0bba-merged.mount: Deactivated successfully.
Oct 14 04:22:51 np0005486808 podman[77779]: 2025-10-14 08:22:51.131618257 +0000 UTC m=+0.807407025 container remove 5949f11033339b5b1c3218be0a951f620e14c8eaf5b538213727ea13c2d232f4 (image=quay.io/ceph/ceph:v18, name=strange_kowalevski, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 14 04:22:51 np0005486808 systemd[1]: libpod-conmon-5949f11033339b5b1c3218be0a951f620e14c8eaf5b538213727ea13c2d232f4.scope: Deactivated successfully.
Oct 14 04:22:51 np0005486808 podman[77843]: 2025-10-14 08:22:51.38346189 +0000 UTC m=+0.059619400 container create b6a091a23790f8c53b70dd9e6e71e0ccb8ef9ce64512d14820b7232e4140764f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 14 04:22:51 np0005486808 systemd[1]: Started libpod-conmon-b6a091a23790f8c53b70dd9e6e71e0ccb8ef9ce64512d14820b7232e4140764f.scope.
Oct 14 04:22:51 np0005486808 podman[77843]: 2025-10-14 08:22:51.359132959 +0000 UTC m=+0.035290529 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:22:51 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:22:51 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5339c6db4ed755f9fc3af6fc5bb6ea13851dd66183cd99734ac2eb35d7fb7bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:51 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5339c6db4ed755f9fc3af6fc5bb6ea13851dd66183cd99734ac2eb35d7fb7bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:51 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5339c6db4ed755f9fc3af6fc5bb6ea13851dd66183cd99734ac2eb35d7fb7bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:51 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5339c6db4ed755f9fc3af6fc5bb6ea13851dd66183cd99734ac2eb35d7fb7bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:51 np0005486808 podman[77843]: 2025-10-14 08:22:51.479245689 +0000 UTC m=+0.155403229 container init b6a091a23790f8c53b70dd9e6e71e0ccb8ef9ce64512d14820b7232e4140764f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_leakey, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:22:51 np0005486808 podman[77843]: 2025-10-14 08:22:51.495765575 +0000 UTC m=+0.171923115 container start b6a091a23790f8c53b70dd9e6e71e0ccb8ef9ce64512d14820b7232e4140764f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_leakey, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 14 04:22:51 np0005486808 podman[77843]: 2025-10-14 08:22:51.499824467 +0000 UTC m=+0.175981997 container attach b6a091a23790f8c53b70dd9e6e71e0ccb8ef9ce64512d14820b7232e4140764f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_leakey, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:22:51 np0005486808 python3[77889]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:22:51 np0005486808 podman[77890]: 2025-10-14 08:22:51.790370533 +0000 UTC m=+0.069049417 container create 66a780ade7aa9b61d720bf29a4ed75fb4bd075dcea3cbf80678d829ebd442d06 (image=quay.io/ceph/ceph:v18, name=goofy_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 14 04:22:51 np0005486808 systemd[1]: Started libpod-conmon-66a780ade7aa9b61d720bf29a4ed75fb4bd075dcea3cbf80678d829ebd442d06.scope.
Oct 14 04:22:51 np0005486808 podman[77890]: 2025-10-14 08:22:51.759348253 +0000 UTC m=+0.038027207 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:22:51 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:22:51 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c469a37ecc36c20ad88d21c94f230218473f0750dce33b37e6040e62a421676/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:51 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c469a37ecc36c20ad88d21c94f230218473f0750dce33b37e6040e62a421676/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:51 np0005486808 podman[77890]: 2025-10-14 08:22:51.892348618 +0000 UTC m=+0.171027542 container init 66a780ade7aa9b61d720bf29a4ed75fb4bd075dcea3cbf80678d829ebd442d06 (image=quay.io/ceph/ceph:v18, name=goofy_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:22:51 np0005486808 podman[77890]: 2025-10-14 08:22:51.902252777 +0000 UTC m=+0.180931671 container start 66a780ade7aa9b61d720bf29a4ed75fb4bd075dcea3cbf80678d829ebd442d06 (image=quay.io/ceph/ceph:v18, name=goofy_mccarthy, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:22:51 np0005486808 podman[77890]: 2025-10-14 08:22:51.906776061 +0000 UTC m=+0.185454965 container attach 66a780ade7aa9b61d720bf29a4ed75fb4bd075dcea3cbf80678d829ebd442d06 (image=quay.io/ceph/ceph:v18, name=goofy_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 14 04:22:51 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:22:52 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/948265339' entity='client.admin' 
Oct 14 04:22:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Oct 14 04:22:52 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3769285315' entity='client.admin' 
Oct 14 04:22:52 np0005486808 systemd[1]: libpod-66a780ade7aa9b61d720bf29a4ed75fb4bd075dcea3cbf80678d829ebd442d06.scope: Deactivated successfully.
Oct 14 04:22:52 np0005486808 podman[77890]: 2025-10-14 08:22:52.502725238 +0000 UTC m=+0.781404142 container died 66a780ade7aa9b61d720bf29a4ed75fb4bd075dcea3cbf80678d829ebd442d06 (image=quay.io/ceph/ceph:v18, name=goofy_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 14 04:22:52 np0005486808 systemd[1]: var-lib-containers-storage-overlay-8c469a37ecc36c20ad88d21c94f230218473f0750dce33b37e6040e62a421676-merged.mount: Deactivated successfully.
Oct 14 04:22:52 np0005486808 podman[77890]: 2025-10-14 08:22:52.558234874 +0000 UTC m=+0.836913738 container remove 66a780ade7aa9b61d720bf29a4ed75fb4bd075dcea3cbf80678d829ebd442d06 (image=quay.io/ceph/ceph:v18, name=goofy_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:22:52 np0005486808 systemd[1]: libpod-conmon-66a780ade7aa9b61d720bf29a4ed75fb4bd075dcea3cbf80678d829ebd442d06.scope: Deactivated successfully.
Oct 14 04:22:52 np0005486808 ceph-mgr[74543]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Oct 14 04:22:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 14 04:22:52 np0005486808 ceph-mon[74249]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Oct 14 04:22:52 np0005486808 silly_leakey[77859]: [
Oct 14 04:22:53 np0005486808 silly_leakey[77859]:    {
Oct 14 04:22:53 np0005486808 silly_leakey[77859]:        "available": false,
Oct 14 04:22:53 np0005486808 silly_leakey[77859]:        "ceph_device": false,
Oct 14 04:22:53 np0005486808 silly_leakey[77859]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct 14 04:22:53 np0005486808 silly_leakey[77859]:        "lsm_data": {},
Oct 14 04:22:53 np0005486808 silly_leakey[77859]:        "lvs": [],
Oct 14 04:22:53 np0005486808 silly_leakey[77859]:        "path": "/dev/sr0",
Oct 14 04:22:53 np0005486808 silly_leakey[77859]:        "rejected_reasons": [
Oct 14 04:22:53 np0005486808 silly_leakey[77859]:            "Insufficient space (<5GB)",
Oct 14 04:22:53 np0005486808 silly_leakey[77859]:            "Has a FileSystem"
Oct 14 04:22:53 np0005486808 silly_leakey[77859]:        ],
Oct 14 04:22:53 np0005486808 silly_leakey[77859]:        "sys_api": {
Oct 14 04:22:53 np0005486808 silly_leakey[77859]:            "actuators": null,
Oct 14 04:22:53 np0005486808 silly_leakey[77859]:            "device_nodes": "sr0",
Oct 14 04:22:53 np0005486808 silly_leakey[77859]:            "devname": "sr0",
Oct 14 04:22:53 np0005486808 silly_leakey[77859]:            "human_readable_size": "482.00 KB",
Oct 14 04:22:53 np0005486808 silly_leakey[77859]:            "id_bus": "ata",
Oct 14 04:22:53 np0005486808 silly_leakey[77859]:            "model": "QEMU DVD-ROM",
Oct 14 04:22:53 np0005486808 silly_leakey[77859]:            "nr_requests": "2",
Oct 14 04:22:53 np0005486808 silly_leakey[77859]:            "parent": "/dev/sr0",
Oct 14 04:22:53 np0005486808 silly_leakey[77859]:            "partitions": {},
Oct 14 04:22:53 np0005486808 silly_leakey[77859]:            "path": "/dev/sr0",
Oct 14 04:22:53 np0005486808 silly_leakey[77859]:            "removable": "1",
Oct 14 04:22:53 np0005486808 silly_leakey[77859]:            "rev": "2.5+",
Oct 14 04:22:53 np0005486808 silly_leakey[77859]:            "ro": "0",
Oct 14 04:22:53 np0005486808 silly_leakey[77859]:            "rotational": "0",
Oct 14 04:22:53 np0005486808 silly_leakey[77859]:            "sas_address": "",
Oct 14 04:22:53 np0005486808 silly_leakey[77859]:            "sas_device_handle": "",
Oct 14 04:22:53 np0005486808 silly_leakey[77859]:            "scheduler_mode": "mq-deadline",
Oct 14 04:22:53 np0005486808 silly_leakey[77859]:            "sectors": 0,
Oct 14 04:22:53 np0005486808 silly_leakey[77859]:            "sectorsize": "2048",
Oct 14 04:22:53 np0005486808 silly_leakey[77859]:            "size": 493568.0,
Oct 14 04:22:53 np0005486808 silly_leakey[77859]:            "support_discard": "2048",
Oct 14 04:22:53 np0005486808 silly_leakey[77859]:            "type": "disk",
Oct 14 04:22:53 np0005486808 silly_leakey[77859]:            "vendor": "QEMU"
Oct 14 04:22:53 np0005486808 silly_leakey[77859]:        }
Oct 14 04:22:53 np0005486808 silly_leakey[77859]:    }
Oct 14 04:22:53 np0005486808 silly_leakey[77859]: ]
Oct 14 04:22:53 np0005486808 systemd[1]: libpod-b6a091a23790f8c53b70dd9e6e71e0ccb8ef9ce64512d14820b7232e4140764f.scope: Deactivated successfully.
Oct 14 04:22:53 np0005486808 podman[77843]: 2025-10-14 08:22:53.023036643 +0000 UTC m=+1.699194143 container died b6a091a23790f8c53b70dd9e6e71e0ccb8ef9ce64512d14820b7232e4140764f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_leakey, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:22:53 np0005486808 systemd[1]: libpod-b6a091a23790f8c53b70dd9e6e71e0ccb8ef9ce64512d14820b7232e4140764f.scope: Consumed 1.547s CPU time.
Oct 14 04:22:53 np0005486808 systemd[1]: var-lib-containers-storage-overlay-a5339c6db4ed755f9fc3af6fc5bb6ea13851dd66183cd99734ac2eb35d7fb7bc-merged.mount: Deactivated successfully.
Oct 14 04:22:53 np0005486808 podman[77843]: 2025-10-14 08:22:53.078954379 +0000 UTC m=+1.755111889 container remove b6a091a23790f8c53b70dd9e6e71e0ccb8ef9ce64512d14820b7232e4140764f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_leakey, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 14 04:22:53 np0005486808 systemd[1]: libpod-conmon-b6a091a23790f8c53b70dd9e6e71e0ccb8ef9ce64512d14820b7232e4140764f.scope: Deactivated successfully.
Oct 14 04:22:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:22:53 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:22:53 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:22:53 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:22:53 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 14 04:22:53 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 14 04:22:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:22:53 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:22:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 04:22:53 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:22:53 np0005486808 ceph-mgr[74543]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct 14 04:22:53 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct 14 04:22:53 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/3769285315' entity='client.admin' 
Oct 14 04:22:53 np0005486808 ceph-mon[74249]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Oct 14 04:22:53 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:53 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:53 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:53 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:53 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 14 04:22:53 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:22:53 np0005486808 ansible-async_wrapper.py[79909]: Invoked with j942936553959 30 /home/zuul/.ansible/tmp/ansible-tmp-1760430172.9538832-33089-215607478542300/AnsiballZ_command.py _
Oct 14 04:22:53 np0005486808 ansible-async_wrapper.py[79959]: Starting module and watcher
Oct 14 04:22:53 np0005486808 ansible-async_wrapper.py[79961]: Start module (79961)
Oct 14 04:22:53 np0005486808 ansible-async_wrapper.py[79959]: Start watching 79961 (30)
Oct 14 04:22:53 np0005486808 ansible-async_wrapper.py[79909]: Return async_wrapper task started.
Oct 14 04:22:53 np0005486808 python3[79964]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:22:53 np0005486808 podman[80015]: 2025-10-14 08:22:53.861462518 +0000 UTC m=+0.058658556 container create 1e193c7115410a576cecf598e4fc7a484c585288ae98c93f3a71f5f256ea66dc (image=quay.io/ceph/ceph:v18, name=wizardly_brahmagupta, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:22:53 np0005486808 systemd[1]: Started libpod-conmon-1e193c7115410a576cecf598e4fc7a484c585288ae98c93f3a71f5f256ea66dc.scope.
Oct 14 04:22:53 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:22:53 np0005486808 podman[80015]: 2025-10-14 08:22:53.841835974 +0000 UTC m=+0.039032042 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:22:53 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da0d9a5256e6bdbf57c2363be2deb2370dfdbda398fcbb711b032177113c8632/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:53 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da0d9a5256e6bdbf57c2363be2deb2370dfdbda398fcbb711b032177113c8632/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:53 np0005486808 podman[80015]: 2025-10-14 08:22:53.963559435 +0000 UTC m=+0.160755533 container init 1e193c7115410a576cecf598e4fc7a484c585288ae98c93f3a71f5f256ea66dc (image=quay.io/ceph/ceph:v18, name=wizardly_brahmagupta, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 14 04:22:53 np0005486808 podman[80015]: 2025-10-14 08:22:53.975967677 +0000 UTC m=+0.173163755 container start 1e193c7115410a576cecf598e4fc7a484c585288ae98c93f3a71f5f256ea66dc (image=quay.io/ceph/ceph:v18, name=wizardly_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Oct 14 04:22:53 np0005486808 podman[80015]: 2025-10-14 08:22:53.981275051 +0000 UTC m=+0.178471129 container attach 1e193c7115410a576cecf598e4fc7a484c585288ae98c93f3a71f5f256ea66dc (image=quay.io/ceph/ceph:v18, name=wizardly_brahmagupta, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:22:54 np0005486808 ceph-mon[74249]: Updating compute-0:/etc/ceph/ceph.conf
Oct 14 04:22:54 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 14 04:22:54 np0005486808 wizardly_brahmagupta[80067]: 
Oct 14 04:22:54 np0005486808 wizardly_brahmagupta[80067]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 14 04:22:54 np0005486808 ceph-mgr[74543]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/config/ceph.conf
Oct 14 04:22:54 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/config/ceph.conf
Oct 14 04:22:54 np0005486808 systemd[1]: libpod-1e193c7115410a576cecf598e4fc7a484c585288ae98c93f3a71f5f256ea66dc.scope: Deactivated successfully.
Oct 14 04:22:54 np0005486808 podman[80015]: 2025-10-14 08:22:54.53082732 +0000 UTC m=+0.728023398 container died 1e193c7115410a576cecf598e4fc7a484c585288ae98c93f3a71f5f256ea66dc (image=quay.io/ceph/ceph:v18, name=wizardly_brahmagupta, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:22:54 np0005486808 systemd[1]: var-lib-containers-storage-overlay-da0d9a5256e6bdbf57c2363be2deb2370dfdbda398fcbb711b032177113c8632-merged.mount: Deactivated successfully.
Oct 14 04:22:54 np0005486808 podman[80015]: 2025-10-14 08:22:54.589360322 +0000 UTC m=+0.786556380 container remove 1e193c7115410a576cecf598e4fc7a484c585288ae98c93f3a71f5f256ea66dc (image=quay.io/ceph/ceph:v18, name=wizardly_brahmagupta, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:22:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 14 04:22:54 np0005486808 systemd[1]: libpod-conmon-1e193c7115410a576cecf598e4fc7a484c585288ae98c93f3a71f5f256ea66dc.scope: Deactivated successfully.
Oct 14 04:22:54 np0005486808 ansible-async_wrapper.py[79961]: Module complete (79961)
Oct 14 04:22:55 np0005486808 python3[80439]: ansible-ansible.legacy.async_status Invoked with jid=j942936553959.79909 mode=status _async_dir=/root/.ansible_async
Oct 14 04:22:55 np0005486808 python3[80591]: ansible-ansible.legacy.async_status Invoked with jid=j942936553959.79909 mode=cleanup _async_dir=/root/.ansible_async
Oct 14 04:22:55 np0005486808 ceph-mon[74249]: Updating compute-0:/var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/config/ceph.conf
Oct 14 04:22:56 np0005486808 python3[80764]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:22:56 np0005486808 ceph-mgr[74543]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 14 04:22:56 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 14 04:22:56 np0005486808 python3[80940]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:22:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 14 04:22:56 np0005486808 podman[80973]: 2025-10-14 08:22:56.609970007 +0000 UTC m=+0.042263014 container create 023e8fbc319d536d4191bc9d563552d877662139d929b5aa07fd6cffda04b7aa (image=quay.io/ceph/ceph:v18, name=goofy_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:22:56 np0005486808 systemd[1]: Started libpod-conmon-023e8fbc319d536d4191bc9d563552d877662139d929b5aa07fd6cffda04b7aa.scope.
Oct 14 04:22:56 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:22:56 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c343f62694a25e607630d40738d736015787fdc43fad030b97c3c0b90cc0918/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:56 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c343f62694a25e607630d40738d736015787fdc43fad030b97c3c0b90cc0918/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:56 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c343f62694a25e607630d40738d736015787fdc43fad030b97c3c0b90cc0918/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:56 np0005486808 podman[80973]: 2025-10-14 08:22:56.588297022 +0000 UTC m=+0.020590019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:22:56 np0005486808 podman[80973]: 2025-10-14 08:22:56.688152833 +0000 UTC m=+0.120445820 container init 023e8fbc319d536d4191bc9d563552d877662139d929b5aa07fd6cffda04b7aa (image=quay.io/ceph/ceph:v18, name=goofy_colden, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 14 04:22:56 np0005486808 podman[80973]: 2025-10-14 08:22:56.697424526 +0000 UTC m=+0.129717503 container start 023e8fbc319d536d4191bc9d563552d877662139d929b5aa07fd6cffda04b7aa (image=quay.io/ceph/ceph:v18, name=goofy_colden, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 14 04:22:56 np0005486808 podman[80973]: 2025-10-14 08:22:56.70194283 +0000 UTC m=+0.134235807 container attach 023e8fbc319d536d4191bc9d563552d877662139d929b5aa07fd6cffda04b7aa (image=quay.io/ceph/ceph:v18, name=goofy_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 14 04:22:56 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:22:57 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 14 04:22:57 np0005486808 goofy_colden[81022]: 
Oct 14 04:22:57 np0005486808 goofy_colden[81022]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 14 04:22:57 np0005486808 systemd[1]: libpod-023e8fbc319d536d4191bc9d563552d877662139d929b5aa07fd6cffda04b7aa.scope: Deactivated successfully.
Oct 14 04:22:57 np0005486808 podman[80973]: 2025-10-14 08:22:57.280854369 +0000 UTC m=+0.713147386 container died 023e8fbc319d536d4191bc9d563552d877662139d929b5aa07fd6cffda04b7aa (image=quay.io/ceph/ceph:v18, name=goofy_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 14 04:22:57 np0005486808 systemd[1]: var-lib-containers-storage-overlay-4c343f62694a25e607630d40738d736015787fdc43fad030b97c3c0b90cc0918-merged.mount: Deactivated successfully.
Oct 14 04:22:57 np0005486808 podman[80973]: 2025-10-14 08:22:57.338285373 +0000 UTC m=+0.770578350 container remove 023e8fbc319d536d4191bc9d563552d877662139d929b5aa07fd6cffda04b7aa (image=quay.io/ceph/ceph:v18, name=goofy_colden, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 14 04:22:57 np0005486808 systemd[1]: libpod-conmon-023e8fbc319d536d4191bc9d563552d877662139d929b5aa07fd6cffda04b7aa.scope: Deactivated successfully.
Oct 14 04:22:57 np0005486808 ceph-mgr[74543]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/config/ceph.client.admin.keyring
Oct 14 04:22:57 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/config/ceph.client.admin.keyring
Oct 14 04:22:57 np0005486808 ceph-mon[74249]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 14 04:22:57 np0005486808 python3[81391]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:22:58 np0005486808 podman[81443]: 2025-10-14 08:22:58.028599032 +0000 UTC m=+0.072856702 container create 6b38ad487099e56375ee2a06f5d0fcfa945f555d7e552716c3db30fd6bfdf068 (image=quay.io/ceph/ceph:v18, name=keen_rhodes, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 14 04:22:58 np0005486808 systemd[1]: Started libpod-conmon-6b38ad487099e56375ee2a06f5d0fcfa945f555d7e552716c3db30fd6bfdf068.scope.
Oct 14 04:22:58 np0005486808 podman[81443]: 2025-10-14 08:22:57.998836394 +0000 UTC m=+0.043094154 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:22:58 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:22:58 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12a586f8423244019832a1c4a690e06226fc67594ea5fa4df85d626ad5c28bb2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:58 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12a586f8423244019832a1c4a690e06226fc67594ea5fa4df85d626ad5c28bb2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:58 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12a586f8423244019832a1c4a690e06226fc67594ea5fa4df85d626ad5c28bb2/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:58 np0005486808 podman[81443]: 2025-10-14 08:22:58.13907192 +0000 UTC m=+0.183329600 container init 6b38ad487099e56375ee2a06f5d0fcfa945f555d7e552716c3db30fd6bfdf068 (image=quay.io/ceph/ceph:v18, name=keen_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 14 04:22:58 np0005486808 podman[81443]: 2025-10-14 08:22:58.151233916 +0000 UTC m=+0.195491576 container start 6b38ad487099e56375ee2a06f5d0fcfa945f555d7e552716c3db30fd6bfdf068 (image=quay.io/ceph/ceph:v18, name=keen_rhodes, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:22:58 np0005486808 podman[81443]: 2025-10-14 08:22:58.154838677 +0000 UTC m=+0.199096337 container attach 6b38ad487099e56375ee2a06f5d0fcfa945f555d7e552716c3db30fd6bfdf068 (image=quay.io/ceph/ceph:v18, name=keen_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:22:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 14 04:22:58 np0005486808 ansible-async_wrapper.py[79959]: Done in kid B.
Oct 14 04:22:58 np0005486808 ceph-mon[74249]: Updating compute-0:/var/lib/ceph/c49aadb6-9b04-5cb1-8f5f-4c91676c568e/config/ceph.client.admin.keyring
Oct 14 04:22:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Oct 14 04:22:58 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1887855591' entity='client.admin' 
Oct 14 04:22:58 np0005486808 systemd[1]: libpod-6b38ad487099e56375ee2a06f5d0fcfa945f555d7e552716c3db30fd6bfdf068.scope: Deactivated successfully.
Oct 14 04:22:58 np0005486808 podman[81443]: 2025-10-14 08:22:58.709822253 +0000 UTC m=+0.754079963 container died 6b38ad487099e56375ee2a06f5d0fcfa945f555d7e552716c3db30fd6bfdf068 (image=quay.io/ceph/ceph:v18, name=keen_rhodes, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Oct 14 04:22:58 np0005486808 systemd[1]: var-lib-containers-storage-overlay-12a586f8423244019832a1c4a690e06226fc67594ea5fa4df85d626ad5c28bb2-merged.mount: Deactivated successfully.
Oct 14 04:22:58 np0005486808 podman[81443]: 2025-10-14 08:22:58.761468242 +0000 UTC m=+0.805725902 container remove 6b38ad487099e56375ee2a06f5d0fcfa945f555d7e552716c3db30fd6bfdf068 (image=quay.io/ceph/ceph:v18, name=keen_rhodes, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:22:58 np0005486808 systemd[1]: libpod-conmon-6b38ad487099e56375ee2a06f5d0fcfa945f555d7e552716c3db30fd6bfdf068.scope: Deactivated successfully.
Oct 14 04:22:59 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:22:59 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:59 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:22:59 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:59 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 04:22:59 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:59 np0005486808 ceph-mgr[74543]: [progress INFO root] update: starting ev 2d135922-62cb-447e-af7f-dbc8a4fc8058 (Updating crash deployment (+1 -> 1))
Oct 14 04:22:59 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Oct 14 04:22:59 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 14 04:22:59 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 14 04:22:59 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:22:59 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:22:59 np0005486808 ceph-mgr[74543]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Oct 14 04:22:59 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Oct 14 04:22:59 np0005486808 python3[81843]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:22:59 np0005486808 podman[81894]: 2025-10-14 08:22:59.247732851 +0000 UTC m=+0.043954326 container create 11e61b510300abd9535f01f796447afa1a622f05dec24567a81ef86a0faf8f01 (image=quay.io/ceph/ceph:v18, name=vigorous_chandrasekhar, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:22:59 np0005486808 systemd[1]: Started libpod-conmon-11e61b510300abd9535f01f796447afa1a622f05dec24567a81ef86a0faf8f01.scope.
Oct 14 04:22:59 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:22:59 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52801474567b963981a0cb71c627e39474bc8815daf5f77deb44ae776709aa01/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:59 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52801474567b963981a0cb71c627e39474bc8815daf5f77deb44ae776709aa01/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:59 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52801474567b963981a0cb71c627e39474bc8815daf5f77deb44ae776709aa01/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 14 04:22:59 np0005486808 podman[81894]: 2025-10-14 08:22:59.226542398 +0000 UTC m=+0.022763913 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:22:59 np0005486808 podman[81894]: 2025-10-14 08:22:59.332629196 +0000 UTC m=+0.128850741 container init 11e61b510300abd9535f01f796447afa1a622f05dec24567a81ef86a0faf8f01 (image=quay.io/ceph/ceph:v18, name=vigorous_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 14 04:22:59 np0005486808 podman[81894]: 2025-10-14 08:22:59.344539836 +0000 UTC m=+0.140761301 container start 11e61b510300abd9535f01f796447afa1a622f05dec24567a81ef86a0faf8f01 (image=quay.io/ceph/ceph:v18, name=vigorous_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:22:59 np0005486808 podman[81894]: 2025-10-14 08:22:59.348387702 +0000 UTC m=+0.144609257 container attach 11e61b510300abd9535f01f796447afa1a622f05dec24567a81ef86a0faf8f01 (image=quay.io/ceph/ceph:v18, name=vigorous_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:22:59 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/1887855591' entity='client.admin' 
Oct 14 04:22:59 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:59 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:59 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:22:59 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 14 04:22:59 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 14 04:22:59 np0005486808 podman[82023]: 2025-10-14 08:22:59.783140056 +0000 UTC m=+0.037901735 container create a046df2124aa62af0b79fbe2ed1ca076685f583d23e5f4fd869cfe2c5d32e600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hopper, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 14 04:22:59 np0005486808 systemd[1]: Started libpod-conmon-a046df2124aa62af0b79fbe2ed1ca076685f583d23e5f4fd869cfe2c5d32e600.scope.
Oct 14 04:22:59 np0005486808 podman[82023]: 2025-10-14 08:22:59.763322177 +0000 UTC m=+0.018083876 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:22:59 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:22:59 np0005486808 podman[82023]: 2025-10-14 08:22:59.873935669 +0000 UTC m=+0.128697408 container init a046df2124aa62af0b79fbe2ed1ca076685f583d23e5f4fd869cfe2c5d32e600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hopper, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:22:59 np0005486808 podman[82023]: 2025-10-14 08:22:59.880304599 +0000 UTC m=+0.135066268 container start a046df2124aa62af0b79fbe2ed1ca076685f583d23e5f4fd869cfe2c5d32e600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hopper, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 14 04:22:59 np0005486808 stupefied_hopper[82039]: 167 167
Oct 14 04:22:59 np0005486808 systemd[1]: libpod-a046df2124aa62af0b79fbe2ed1ca076685f583d23e5f4fd869cfe2c5d32e600.scope: Deactivated successfully.
Oct 14 04:22:59 np0005486808 podman[82023]: 2025-10-14 08:22:59.886592187 +0000 UTC m=+0.141354046 container attach a046df2124aa62af0b79fbe2ed1ca076685f583d23e5f4fd869cfe2c5d32e600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:22:59 np0005486808 podman[82023]: 2025-10-14 08:22:59.88710993 +0000 UTC m=+0.141871629 container died a046df2124aa62af0b79fbe2ed1ca076685f583d23e5f4fd869cfe2c5d32e600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hopper, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Oct 14 04:22:59 np0005486808 systemd[1]: var-lib-containers-storage-overlay-f24d17aca91ef5c1f6f15c62a918f06edb3975d8132fb4cff1c6a456ff0876df-merged.mount: Deactivated successfully.
Oct 14 04:22:59 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Oct 14 04:22:59 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1305393051' entity='client.admin' 
Oct 14 04:22:59 np0005486808 podman[82023]: 2025-10-14 08:22:59.935552748 +0000 UTC m=+0.190314417 container remove a046df2124aa62af0b79fbe2ed1ca076685f583d23e5f4fd869cfe2c5d32e600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hopper, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:22:59 np0005486808 systemd[1]: libpod-conmon-a046df2124aa62af0b79fbe2ed1ca076685f583d23e5f4fd869cfe2c5d32e600.scope: Deactivated successfully.
Oct 14 04:22:59 np0005486808 systemd[1]: libpod-11e61b510300abd9535f01f796447afa1a622f05dec24567a81ef86a0faf8f01.scope: Deactivated successfully.
Oct 14 04:22:59 np0005486808 podman[81894]: 2025-10-14 08:22:59.958430854 +0000 UTC m=+0.754652319 container died 11e61b510300abd9535f01f796447afa1a622f05dec24567a81ef86a0faf8f01 (image=quay.io/ceph/ceph:v18, name=vigorous_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 14 04:22:59 np0005486808 systemd[1]: var-lib-containers-storage-overlay-52801474567b963981a0cb71c627e39474bc8815daf5f77deb44ae776709aa01-merged.mount: Deactivated successfully.
Oct 14 04:22:59 np0005486808 systemd[1]: Reloading.
Oct 14 04:23:00 np0005486808 podman[81894]: 2025-10-14 08:23:00.011036727 +0000 UTC m=+0.807258192 container remove 11e61b510300abd9535f01f796447afa1a622f05dec24567a81ef86a0faf8f01 (image=quay.io/ceph/ceph:v18, name=vigorous_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 14 04:23:00 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:23:00 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:23:00 np0005486808 systemd[1]: libpod-conmon-11e61b510300abd9535f01f796447afa1a622f05dec24567a81ef86a0faf8f01.scope: Deactivated successfully.
Oct 14 04:23:00 np0005486808 systemd[1]: Reloading.
Oct 14 04:23:00 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:23:00 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:23:00 np0005486808 systemd[1]: Starting Ceph crash.compute-0 for c49aadb6-9b04-5cb1-8f5f-4c91676c568e...
Oct 14 04:23:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 14 04:23:00 np0005486808 python3[82172]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:23:00 np0005486808 podman[82211]: 2025-10-14 08:23:00.772390343 +0000 UTC m=+0.047249819 container create bbb7ed2956c1d92fb7e83950553b0a656633a29f716afdac509b87ccc2953150 (image=quay.io/ceph/ceph:v18, name=angry_lewin, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:23:00 np0005486808 systemd[1]: Started libpod-conmon-bbb7ed2956c1d92fb7e83950553b0a656633a29f716afdac509b87ccc2953150.scope.
Oct 14 04:23:00 np0005486808 podman[82234]: 2025-10-14 08:23:00.822887123 +0000 UTC m=+0.049934616 container create 72b1fb695a87de215fc982e68332cc51a865274f1a8891046d3377f292da74a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-crash-compute-0, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:23:00 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:23:00 np0005486808 podman[82211]: 2025-10-14 08:23:00.74838557 +0000 UTC m=+0.023245076 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:23:00 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fa9336275c0cca5c77afece02ad78935cbc08fb895b2e919270fdccfd512b55/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:00 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fa9336275c0cca5c77afece02ad78935cbc08fb895b2e919270fdccfd512b55/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:00 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fa9336275c0cca5c77afece02ad78935cbc08fb895b2e919270fdccfd512b55/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:00 np0005486808 podman[82211]: 2025-10-14 08:23:00.861525085 +0000 UTC m=+0.136384591 container init bbb7ed2956c1d92fb7e83950553b0a656633a29f716afdac509b87ccc2953150 (image=quay.io/ceph/ceph:v18, name=angry_lewin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:23:00 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04b3c3e61307cd73a1d9fb028b5968e9f8c29fd5ad27144da5a268ecc62844c8/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:00 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04b3c3e61307cd73a1d9fb028b5968e9f8c29fd5ad27144da5a268ecc62844c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:00 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04b3c3e61307cd73a1d9fb028b5968e9f8c29fd5ad27144da5a268ecc62844c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:00 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04b3c3e61307cd73a1d9fb028b5968e9f8c29fd5ad27144da5a268ecc62844c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:00 np0005486808 podman[82211]: 2025-10-14 08:23:00.880952464 +0000 UTC m=+0.155811930 container start bbb7ed2956c1d92fb7e83950553b0a656633a29f716afdac509b87ccc2953150 (image=quay.io/ceph/ceph:v18, name=angry_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:23:00 np0005486808 podman[82234]: 2025-10-14 08:23:00.88438617 +0000 UTC m=+0.111433763 container init 72b1fb695a87de215fc982e68332cc51a865274f1a8891046d3377f292da74a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-crash-compute-0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 14 04:23:00 np0005486808 podman[82211]: 2025-10-14 08:23:00.887528319 +0000 UTC m=+0.162387805 container attach bbb7ed2956c1d92fb7e83950553b0a656633a29f716afdac509b87ccc2953150 (image=quay.io/ceph/ceph:v18, name=angry_lewin, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:23:00 np0005486808 podman[82234]: 2025-10-14 08:23:00.895578541 +0000 UTC m=+0.122626064 container start 72b1fb695a87de215fc982e68332cc51a865274f1a8891046d3377f292da74a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-crash-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:23:00 np0005486808 podman[82234]: 2025-10-14 08:23:00.804926742 +0000 UTC m=+0.031974235 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:23:00 np0005486808 bash[82234]: 72b1fb695a87de215fc982e68332cc51a865274f1a8891046d3377f292da74a3
Oct 14 04:23:00 np0005486808 systemd[1]: Started Ceph crash.compute-0 for c49aadb6-9b04-5cb1-8f5f-4c91676c568e.
Oct 14 04:23:00 np0005486808 ceph-mon[74249]: Deploying daemon crash.compute-0 on compute-0
Oct 14 04:23:00 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/1305393051' entity='client.admin' 
Oct 14 04:23:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:23:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:23:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct 14 04:23:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:00 np0005486808 ceph-mgr[74543]: [progress INFO root] complete: finished ev 2d135922-62cb-447e-af7f-dbc8a4fc8058 (Updating crash deployment (+1 -> 1))
Oct 14 04:23:00 np0005486808 ceph-mgr[74543]: [progress INFO root] Completed event 2d135922-62cb-447e-af7f-dbc8a4fc8058 (Updating crash deployment (+1 -> 1)) in 2 seconds
Oct 14 04:23:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct 14 04:23:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:00 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev d5ee9b77-bd71-4bf8-ad22-c14f6dc15c80 does not exist
Oct 14 04:23:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct 14 04:23:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:00 np0005486808 ceph-mgr[74543]: [progress INFO root] update: starting ev 783e2c25-cb1f-4d1b-b23b-c348d0483823 (Updating mgr deployment (+1 -> 2))
Oct 14 04:23:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.aggyjb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Oct 14 04:23:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.aggyjb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 14 04:23:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.aggyjb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct 14 04:23:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 14 04:23:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 14 04:23:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:23:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:23:00 np0005486808 ceph-mgr[74543]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.aggyjb on compute-0
Oct 14 04:23:00 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.aggyjb on compute-0
Oct 14 04:23:01 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-crash-compute-0[82254]: INFO:ceph-crash:pinging cluster to exercise our key
Oct 14 04:23:01 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-crash-compute-0[82254]: 2025-10-14T08:23:01.324+0000 7ffbd5abc640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct 14 04:23:01 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-crash-compute-0[82254]: 2025-10-14T08:23:01.324+0000 7ffbd5abc640 -1 AuthRegistry(0x7ffbd0066fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct 14 04:23:01 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-crash-compute-0[82254]: 2025-10-14T08:23:01.325+0000 7ffbd5abc640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct 14 04:23:01 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-crash-compute-0[82254]: 2025-10-14T08:23:01.325+0000 7ffbd5abc640 -1 AuthRegistry(0x7ffbd5abb000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct 14 04:23:01 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-crash-compute-0[82254]: 2025-10-14T08:23:01.326+0000 7ffbcf7fe640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Oct 14 04:23:01 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-crash-compute-0[82254]: 2025-10-14T08:23:01.326+0000 7ffbd5abc640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Oct 14 04:23:01 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-crash-compute-0[82254]: [errno 13] RADOS permission denied (error connecting to the cluster)
Oct 14 04:23:01 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-crash-compute-0[82254]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Oct 14 04:23:01 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Oct 14 04:23:01 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2464390018' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Oct 14 04:23:01 np0005486808 podman[82430]: 2025-10-14 08:23:01.610339175 +0000 UTC m=+0.036105339 container create a954a27d187aea47639679c3342ef43e6dbd8f1ba0e22f521a8e6b1bd6666dd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 14 04:23:01 np0005486808 systemd[1]: Started libpod-conmon-a954a27d187aea47639679c3342ef43e6dbd8f1ba0e22f521a8e6b1bd6666dd1.scope.
Oct 14 04:23:01 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:23:01 np0005486808 podman[82430]: 2025-10-14 08:23:01.682991753 +0000 UTC m=+0.108757917 container init a954a27d187aea47639679c3342ef43e6dbd8f1ba0e22f521a8e6b1bd6666dd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:23:01 np0005486808 podman[82430]: 2025-10-14 08:23:01.591915172 +0000 UTC m=+0.017681326 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:23:01 np0005486808 podman[82430]: 2025-10-14 08:23:01.689082086 +0000 UTC m=+0.114848230 container start a954a27d187aea47639679c3342ef43e6dbd8f1ba0e22f521a8e6b1bd6666dd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_swartz, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 14 04:23:01 np0005486808 podman[82430]: 2025-10-14 08:23:01.692253255 +0000 UTC m=+0.118019409 container attach a954a27d187aea47639679c3342ef43e6dbd8f1ba0e22f521a8e6b1bd6666dd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_swartz, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 14 04:23:01 np0005486808 gallant_swartz[82446]: 167 167
Oct 14 04:23:01 np0005486808 systemd[1]: libpod-a954a27d187aea47639679c3342ef43e6dbd8f1ba0e22f521a8e6b1bd6666dd1.scope: Deactivated successfully.
Oct 14 04:23:01 np0005486808 conmon[82446]: conmon a954a27d187aea476396 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a954a27d187aea47639679c3342ef43e6dbd8f1ba0e22f521a8e6b1bd6666dd1.scope/container/memory.events
Oct 14 04:23:01 np0005486808 podman[82430]: 2025-10-14 08:23:01.696595145 +0000 UTC m=+0.122361289 container died a954a27d187aea47639679c3342ef43e6dbd8f1ba0e22f521a8e6b1bd6666dd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Oct 14 04:23:01 np0005486808 systemd[1]: var-lib-containers-storage-overlay-ec72405f7715967520bd0a16220d5b2a0c6395f866a7066ca414382a996be4a8-merged.mount: Deactivated successfully.
Oct 14 04:23:01 np0005486808 podman[82430]: 2025-10-14 08:23:01.735722699 +0000 UTC m=+0.161488843 container remove a954a27d187aea47639679c3342ef43e6dbd8f1ba0e22f521a8e6b1bd6666dd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_swartz, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:23:01 np0005486808 systemd[1]: libpod-conmon-a954a27d187aea47639679c3342ef43e6dbd8f1ba0e22f521a8e6b1bd6666dd1.scope: Deactivated successfully.
Oct 14 04:23:01 np0005486808 systemd[1]: Reloading.
Oct 14 04:23:01 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:23:01 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:23:01 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:01 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:01 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:01 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:01 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:01 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.aggyjb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 14 04:23:01 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.aggyjb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct 14 04:23:01 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/2464390018' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Oct 14 04:23:01 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:23:01 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Oct 14 04:23:01 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 14 04:23:01 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2464390018' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Oct 14 04:23:01 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Oct 14 04:23:01 np0005486808 angry_lewin[82249]: set require_min_compat_client to mimic
Oct 14 04:23:01 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Oct 14 04:23:02 np0005486808 podman[82211]: 2025-10-14 08:23:02.013518715 +0000 UTC m=+1.288378171 container died bbb7ed2956c1d92fb7e83950553b0a656633a29f716afdac509b87ccc2953150 (image=quay.io/ceph/ceph:v18, name=angry_lewin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:23:02 np0005486808 systemd[1]: libpod-bbb7ed2956c1d92fb7e83950553b0a656633a29f716afdac509b87ccc2953150.scope: Deactivated successfully.
Oct 14 04:23:02 np0005486808 systemd[1]: var-lib-containers-storage-overlay-4fa9336275c0cca5c77afece02ad78935cbc08fb895b2e919270fdccfd512b55-merged.mount: Deactivated successfully.
Oct 14 04:23:02 np0005486808 podman[82211]: 2025-10-14 08:23:02.11432134 +0000 UTC m=+1.389180806 container remove bbb7ed2956c1d92fb7e83950553b0a656633a29f716afdac509b87ccc2953150 (image=quay.io/ceph/ceph:v18, name=angry_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 14 04:23:02 np0005486808 systemd[1]: libpod-conmon-bbb7ed2956c1d92fb7e83950553b0a656633a29f716afdac509b87ccc2953150.scope: Deactivated successfully.
Oct 14 04:23:02 np0005486808 systemd[1]: Reloading.
Oct 14 04:23:02 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:23:02 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:23:02 np0005486808 systemd[1]: Starting Ceph mgr.compute-0.aggyjb for c49aadb6-9b04-5cb1-8f5f-4c91676c568e...
Oct 14 04:23:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 14 04:23:02 np0005486808 ceph-mgr[74543]: [progress INFO root] Writing back 1 completed events
Oct 14 04:23:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 14 04:23:02 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:23:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:23:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:23:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:23:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:23:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:23:02 np0005486808 podman[82630]: 2025-10-14 08:23:02.734377343 +0000 UTC m=+0.053577758 container create cdc599d1def2d4fca9ab51a59552e194162b51fb6b30235dd4cf513b3f244e73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-aggyjb, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:23:02 np0005486808 python3[82619]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:23:02 np0005486808 podman[82630]: 2025-10-14 08:23:02.711287092 +0000 UTC m=+0.030487517 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:23:02 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/983c9fcf7d9c3ee5259110212a170f6aea8f19d82a0e60da6acece9e06ea925e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:02 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/983c9fcf7d9c3ee5259110212a170f6aea8f19d82a0e60da6acece9e06ea925e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:02 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/983c9fcf7d9c3ee5259110212a170f6aea8f19d82a0e60da6acece9e06ea925e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:02 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/983c9fcf7d9c3ee5259110212a170f6aea8f19d82a0e60da6acece9e06ea925e/merged/var/lib/ceph/mgr/ceph-compute-0.aggyjb supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:02 np0005486808 podman[82630]: 2025-10-14 08:23:02.831969497 +0000 UTC m=+0.151169952 container init cdc599d1def2d4fca9ab51a59552e194162b51fb6b30235dd4cf513b3f244e73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-aggyjb, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 04:23:02 np0005486808 podman[82630]: 2025-10-14 08:23:02.842101672 +0000 UTC m=+0.161302077 container start cdc599d1def2d4fca9ab51a59552e194162b51fb6b30235dd4cf513b3f244e73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-aggyjb, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:23:02 np0005486808 bash[82630]: cdc599d1def2d4fca9ab51a59552e194162b51fb6b30235dd4cf513b3f244e73
Oct 14 04:23:02 np0005486808 systemd[1]: Started Ceph mgr.compute-0.aggyjb for c49aadb6-9b04-5cb1-8f5f-4c91676c568e.
Oct 14 04:23:02 np0005486808 podman[82643]: 2025-10-14 08:23:02.896052739 +0000 UTC m=+0.083412829 container create 9ad0e2ecb1878fff0850b2475b5b9db2a077784920e918070115c9da1ba7bdd4 (image=quay.io/ceph/ceph:v18, name=condescending_kowalevski, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 14 04:23:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:23:02 np0005486808 ceph-mgr[82659]: set uid:gid to 167:167 (ceph:ceph)
Oct 14 04:23:02 np0005486808 ceph-mgr[82659]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct 14 04:23:02 np0005486808 ceph-mgr[82659]: pidfile_write: ignore empty --pid-file
Oct 14 04:23:02 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:23:02 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 14 04:23:02 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:02 np0005486808 ceph-mgr[74543]: [progress INFO root] complete: finished ev 783e2c25-cb1f-4d1b-b23b-c348d0483823 (Updating mgr deployment (+1 -> 2))
Oct 14 04:23:02 np0005486808 ceph-mgr[74543]: [progress INFO root] Completed event 783e2c25-cb1f-4d1b-b23b-c348d0483823 (Updating mgr deployment (+1 -> 2)) in 2 seconds
Oct 14 04:23:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 14 04:23:02 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:02 np0005486808 systemd[1]: Started libpod-conmon-9ad0e2ecb1878fff0850b2475b5b9db2a077784920e918070115c9da1ba7bdd4.scope.
Oct 14 04:23:02 np0005486808 podman[82643]: 2025-10-14 08:23:02.866433494 +0000 UTC m=+0.053793664 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:23:02 np0005486808 ceph-mon[74249]: Deploying daemon mgr.compute-0.aggyjb on compute-0
Oct 14 04:23:02 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/2464390018' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Oct 14 04:23:02 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:02 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:02 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:02 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:02 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:02 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:23:02 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02f839cb3ec4f37bee78dea21e5ce0ab4f605f44c1efea055b5c00038fb68738/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:02 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02f839cb3ec4f37bee78dea21e5ce0ab4f605f44c1efea055b5c00038fb68738/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:02 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02f839cb3ec4f37bee78dea21e5ce0ab4f605f44c1efea055b5c00038fb68738/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:03 np0005486808 podman[82643]: 2025-10-14 08:23:03.015328178 +0000 UTC m=+0.202688278 container init 9ad0e2ecb1878fff0850b2475b5b9db2a077784920e918070115c9da1ba7bdd4 (image=quay.io/ceph/ceph:v18, name=condescending_kowalevski, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:23:03 np0005486808 podman[82643]: 2025-10-14 08:23:03.023426702 +0000 UTC m=+0.210786782 container start 9ad0e2ecb1878fff0850b2475b5b9db2a077784920e918070115c9da1ba7bdd4 (image=quay.io/ceph/ceph:v18, name=condescending_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:23:03 np0005486808 podman[82643]: 2025-10-14 08:23:03.032179332 +0000 UTC m=+0.219539422 container attach 9ad0e2ecb1878fff0850b2475b5b9db2a077784920e918070115c9da1ba7bdd4 (image=quay.io/ceph/ceph:v18, name=condescending_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 14 04:23:03 np0005486808 ceph-mgr[82659]: mgr[py] Loading python module 'alerts'
Oct 14 04:23:03 np0005486808 ceph-mgr[82659]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 14 04:23:03 np0005486808 ceph-mgr[82659]: mgr[py] Loading python module 'balancer'
Oct 14 04:23:03 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-aggyjb[82646]: 2025-10-14T08:23:03.377+0000 7fcb9c83d140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 14 04:23:03 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.14186 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 04:23:03 np0005486808 ceph-mgr[82659]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 14 04:23:03 np0005486808 ceph-mgr[82659]: mgr[py] Loading python module 'cephadm'
Oct 14 04:23:03 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-aggyjb[82646]: 2025-10-14T08:23:03.613+0000 7fcb9c83d140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 14 04:23:03 np0005486808 podman[83005]: 2025-10-14 08:23:03.969800262 +0000 UTC m=+0.085980993 container exec c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 04:23:04 np0005486808 podman[83005]: 2025-10-14 08:23:04.056550543 +0000 UTC m=+0.172731264 container exec_died c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:23:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 14 04:23:04 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 14 04:23:04 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 14 04:23:04 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 14 04:23:04 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:04 np0005486808 ceph-mgr[74543]: [cephadm INFO root] Added host compute-0
Oct 14 04:23:04 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [INF] : Added host compute-0
Oct 14 04:23:04 np0005486808 ceph-mgr[74543]: [cephadm INFO root] Saving service mon spec with placement compute-0
Oct 14 04:23:04 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Oct 14 04:23:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct 14 04:23:04 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:04 np0005486808 ceph-mgr[74543]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Oct 14 04:23:04 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Oct 14 04:23:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 14 04:23:04 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:04 np0005486808 ceph-mgr[74543]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Oct 14 04:23:04 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Oct 14 04:23:04 np0005486808 ceph-mgr[74543]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Oct 14 04:23:04 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Oct 14 04:23:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Oct 14 04:23:04 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:04 np0005486808 condescending_kowalevski[82694]: Added host 'compute-0' with addr '192.168.122.100'
Oct 14 04:23:04 np0005486808 condescending_kowalevski[82694]: Scheduled mon update...
Oct 14 04:23:04 np0005486808 condescending_kowalevski[82694]: Scheduled mgr update...
Oct 14 04:23:04 np0005486808 condescending_kowalevski[82694]: Scheduled osd.default_drive_group update...
Oct 14 04:23:04 np0005486808 systemd[1]: libpod-9ad0e2ecb1878fff0850b2475b5b9db2a077784920e918070115c9da1ba7bdd4.scope: Deactivated successfully.
Oct 14 04:23:04 np0005486808 podman[82643]: 2025-10-14 08:23:04.376374466 +0000 UTC m=+1.563734586 container died 9ad0e2ecb1878fff0850b2475b5b9db2a077784920e918070115c9da1ba7bdd4 (image=quay.io/ceph/ceph:v18, name=condescending_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 04:23:04 np0005486808 systemd[1]: var-lib-containers-storage-overlay-02f839cb3ec4f37bee78dea21e5ce0ab4f605f44c1efea055b5c00038fb68738-merged.mount: Deactivated successfully.
Oct 14 04:23:04 np0005486808 podman[82643]: 2025-10-14 08:23:04.44532549 +0000 UTC m=+1.632685570 container remove 9ad0e2ecb1878fff0850b2475b5b9db2a077784920e918070115c9da1ba7bdd4 (image=quay.io/ceph/ceph:v18, name=condescending_kowalevski, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:23:04 np0005486808 systemd[1]: libpod-conmon-9ad0e2ecb1878fff0850b2475b5b9db2a077784920e918070115c9da1ba7bdd4.scope: Deactivated successfully.
Oct 14 04:23:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:23:04 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:23:04 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:23:04 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:23:04 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:23:04 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:23:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 04:23:04 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:23:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 04:23:04 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:04 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 20291b34-fa71-48fe-924f-1a632679af37 does not exist
Oct 14 04:23:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct 14 04:23:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 14 04:23:04 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:04 np0005486808 ceph-mgr[74543]: [progress INFO root] update: starting ev c0f86abc-4539-41b7-981c-40e2d225f7e2 (Updating mgr deployment (-1 -> 1))
Oct 14 04:23:04 np0005486808 ceph-mgr[74543]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.aggyjb from compute-0 -- ports [8765]
Oct 14 04:23:04 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.aggyjb from compute-0 -- ports [8765]
Oct 14 04:23:04 np0005486808 python3[83235]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:23:05 np0005486808 podman[83280]: 2025-10-14 08:23:05.052959361 +0000 UTC m=+0.049774712 container create 9f6ce64f6e61cb0980d200395284ee01a0b36682efbf5b07334ba81aafadd649 (image=quay.io/ceph/ceph:v18, name=elegant_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 14 04:23:05 np0005486808 systemd[1]: Started libpod-conmon-9f6ce64f6e61cb0980d200395284ee01a0b36682efbf5b07334ba81aafadd649.scope.
Oct 14 04:23:05 np0005486808 podman[83280]: 2025-10-14 08:23:05.029724107 +0000 UTC m=+0.026539458 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:23:05 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:23:05 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80c0aab5b1bc4180196d4b4e7b91553d6c3dc58686e0b40d4c5c6256eecbdfea/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:05 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80c0aab5b1bc4180196d4b4e7b91553d6c3dc58686e0b40d4c5c6256eecbdfea/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:05 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80c0aab5b1bc4180196d4b4e7b91553d6c3dc58686e0b40d4c5c6256eecbdfea/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:05 np0005486808 podman[83280]: 2025-10-14 08:23:05.162579867 +0000 UTC m=+0.159395248 container init 9f6ce64f6e61cb0980d200395284ee01a0b36682efbf5b07334ba81aafadd649 (image=quay.io/ceph/ceph:v18, name=elegant_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:23:05 np0005486808 podman[83280]: 2025-10-14 08:23:05.182461067 +0000 UTC m=+0.179276398 container start 9f6ce64f6e61cb0980d200395284ee01a0b36682efbf5b07334ba81aafadd649 (image=quay.io/ceph/ceph:v18, name=elegant_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 14 04:23:05 np0005486808 podman[83280]: 2025-10-14 08:23:05.190045328 +0000 UTC m=+0.186860669 container attach 9f6ce64f6e61cb0980d200395284ee01a0b36682efbf5b07334ba81aafadd649 (image=quay.io/ceph/ceph:v18, name=elegant_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:23:05 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:05 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:05 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:05 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:05 np0005486808 ceph-mon[74249]: Added host compute-0
Oct 14 04:23:05 np0005486808 ceph-mon[74249]: Saving service mon spec with placement compute-0
Oct 14 04:23:05 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:05 np0005486808 ceph-mon[74249]: Saving service mgr spec with placement compute-0
Oct 14 04:23:05 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:05 np0005486808 ceph-mon[74249]: Marking host: compute-0 for OSDSpec preview refresh.
Oct 14 04:23:05 np0005486808 ceph-mon[74249]: Saving service osd.default_drive_group spec with placement compute-0
Oct 14 04:23:05 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:05 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:05 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:05 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:05 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:05 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:23:05 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:05 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:05 np0005486808 ceph-mon[74249]: Removing daemon mgr.compute-0.aggyjb from compute-0 -- ports [8765]
Oct 14 04:23:05 np0005486808 systemd[1]: Stopping Ceph mgr.compute-0.aggyjb for c49aadb6-9b04-5cb1-8f5f-4c91676c568e...
Oct 14 04:23:05 np0005486808 podman[83369]: 2025-10-14 08:23:05.540967693 +0000 UTC m=+0.061538419 container died cdc599d1def2d4fca9ab51a59552e194162b51fb6b30235dd4cf513b3f244e73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-aggyjb, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Oct 14 04:23:05 np0005486808 systemd[1]: var-lib-containers-storage-overlay-983c9fcf7d9c3ee5259110212a170f6aea8f19d82a0e60da6acece9e06ea925e-merged.mount: Deactivated successfully.
Oct 14 04:23:05 np0005486808 podman[83369]: 2025-10-14 08:23:05.600824998 +0000 UTC m=+0.121395714 container remove cdc599d1def2d4fca9ab51a59552e194162b51fb6b30235dd4cf513b3f244e73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-aggyjb, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:23:05 np0005486808 bash[83369]: ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-aggyjb
Oct 14 04:23:05 np0005486808 systemd[1]: ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e@mgr.compute-0.aggyjb.service: Main process exited, code=exited, status=143/n/a
Oct 14 04:23:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct 14 04:23:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2942659546' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 14 04:23:05 np0005486808 elegant_heyrovsky[83309]: 
Oct 14 04:23:05 np0005486808 elegant_heyrovsky[83309]: {"fsid":"c49aadb6-9b04-5cb1-8f5f-4c91676c568e","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":78,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-10-14T08:21:43.857901+0000","services":{}},"progress_events":{}}
Oct 14 04:23:05 np0005486808 systemd[1]: libpod-9f6ce64f6e61cb0980d200395284ee01a0b36682efbf5b07334ba81aafadd649.scope: Deactivated successfully.
Oct 14 04:23:05 np0005486808 systemd[1]: ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e@mgr.compute-0.aggyjb.service: Failed with result 'exit-code'.
Oct 14 04:23:05 np0005486808 systemd[1]: Stopped Ceph mgr.compute-0.aggyjb for c49aadb6-9b04-5cb1-8f5f-4c91676c568e.
Oct 14 04:23:05 np0005486808 systemd[1]: ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e@mgr.compute-0.aggyjb.service: Consumed 3.697s CPU time.
Oct 14 04:23:05 np0005486808 systemd[1]: Reloading.
Oct 14 04:23:05 np0005486808 podman[83442]: 2025-10-14 08:23:05.864309004 +0000 UTC m=+0.044039208 container died 9f6ce64f6e61cb0980d200395284ee01a0b36682efbf5b07334ba81aafadd649 (image=quay.io/ceph/ceph:v18, name=elegant_heyrovsky, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:23:05 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:23:05 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:23:06 np0005486808 systemd[1]: var-lib-containers-storage-overlay-80c0aab5b1bc4180196d4b4e7b91553d6c3dc58686e0b40d4c5c6256eecbdfea-merged.mount: Deactivated successfully.
Oct 14 04:23:06 np0005486808 podman[83442]: 2025-10-14 08:23:06.20651984 +0000 UTC m=+0.386249964 container remove 9f6ce64f6e61cb0980d200395284ee01a0b36682efbf5b07334ba81aafadd649 (image=quay.io/ceph/ceph:v18, name=elegant_heyrovsky, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:23:06 np0005486808 systemd[1]: libpod-conmon-9f6ce64f6e61cb0980d200395284ee01a0b36682efbf5b07334ba81aafadd649.scope: Deactivated successfully.
Oct 14 04:23:06 np0005486808 ceph-mgr[74543]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.aggyjb
Oct 14 04:23:06 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.aggyjb
Oct 14 04:23:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.aggyjb"} v 0) v1
Oct 14 04:23:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.aggyjb"}]: dispatch
Oct 14 04:23:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.aggyjb"}]': finished
Oct 14 04:23:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 14 04:23:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:06 np0005486808 ceph-mgr[74543]: [progress INFO root] complete: finished ev c0f86abc-4539-41b7-981c-40e2d225f7e2 (Updating mgr deployment (-1 -> 1))
Oct 14 04:23:06 np0005486808 ceph-mgr[74543]: [progress INFO root] Completed event c0f86abc-4539-41b7-981c-40e2d225f7e2 (Updating mgr deployment (-1 -> 1)) in 2 seconds
Oct 14 04:23:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 14 04:23:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:06 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 0551c842-e4ea-4dc1-9bd8-39acad857aa0 does not exist
Oct 14 04:23:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 04:23:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 04:23:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 04:23:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:23:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:23:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:23:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 14 04:23:06 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.aggyjb"}]: dispatch
Oct 14 04:23:06 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.aggyjb"}]': finished
Oct 14 04:23:06 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:06 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:06 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:23:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:23:07 np0005486808 podman[83634]: 2025-10-14 08:23:07.004412056 +0000 UTC m=+0.063236381 container create 1ee99020cd781aca7cb47dd513b1c07a03d99ec81fe067749d8f18b21f1911d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 14 04:23:07 np0005486808 systemd[1]: Started libpod-conmon-1ee99020cd781aca7cb47dd513b1c07a03d99ec81fe067749d8f18b21f1911d6.scope.
Oct 14 04:23:07 np0005486808 podman[83634]: 2025-10-14 08:23:06.978258258 +0000 UTC m=+0.037082643 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:23:07 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:23:07 np0005486808 podman[83634]: 2025-10-14 08:23:07.101509498 +0000 UTC m=+0.160333863 container init 1ee99020cd781aca7cb47dd513b1c07a03d99ec81fe067749d8f18b21f1911d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_goodall, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:23:07 np0005486808 podman[83634]: 2025-10-14 08:23:07.112560036 +0000 UTC m=+0.171384361 container start 1ee99020cd781aca7cb47dd513b1c07a03d99ec81fe067749d8f18b21f1911d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_goodall, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 04:23:07 np0005486808 podman[83634]: 2025-10-14 08:23:07.117479209 +0000 UTC m=+0.176303584 container attach 1ee99020cd781aca7cb47dd513b1c07a03d99ec81fe067749d8f18b21f1911d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_goodall, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 14 04:23:07 np0005486808 jolly_goodall[83650]: 167 167
Oct 14 04:23:07 np0005486808 systemd[1]: libpod-1ee99020cd781aca7cb47dd513b1c07a03d99ec81fe067749d8f18b21f1911d6.scope: Deactivated successfully.
Oct 14 04:23:07 np0005486808 podman[83634]: 2025-10-14 08:23:07.120816723 +0000 UTC m=+0.179641048 container died 1ee99020cd781aca7cb47dd513b1c07a03d99ec81fe067749d8f18b21f1911d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_goodall, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 14 04:23:07 np0005486808 systemd[1]: var-lib-containers-storage-overlay-cd5498198c8555837e1a572c7dd3758253e2acf474183ecc056d0f697de273df-merged.mount: Deactivated successfully.
Oct 14 04:23:07 np0005486808 podman[83634]: 2025-10-14 08:23:07.160524042 +0000 UTC m=+0.219348337 container remove 1ee99020cd781aca7cb47dd513b1c07a03d99ec81fe067749d8f18b21f1911d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 14 04:23:07 np0005486808 systemd[1]: libpod-conmon-1ee99020cd781aca7cb47dd513b1c07a03d99ec81fe067749d8f18b21f1911d6.scope: Deactivated successfully.
Oct 14 04:23:07 np0005486808 podman[83673]: 2025-10-14 08:23:07.397481551 +0000 UTC m=+0.069935250 container create 2894b4ca6a8699f28eeeade91ce0e971629be7defead622b73ffe5fe42dbdc15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 14 04:23:07 np0005486808 systemd[1]: Started libpod-conmon-2894b4ca6a8699f28eeeade91ce0e971629be7defead622b73ffe5fe42dbdc15.scope.
Oct 14 04:23:07 np0005486808 podman[83673]: 2025-10-14 08:23:07.368772499 +0000 UTC m=+0.041226238 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:23:07 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:23:07 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd05302052144191638fe49ea2da47e7c4d8c4a1352a2890b1bf6348965ca97a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:07 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd05302052144191638fe49ea2da47e7c4d8c4a1352a2890b1bf6348965ca97a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:07 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd05302052144191638fe49ea2da47e7c4d8c4a1352a2890b1bf6348965ca97a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:07 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd05302052144191638fe49ea2da47e7c4d8c4a1352a2890b1bf6348965ca97a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:07 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd05302052144191638fe49ea2da47e7c4d8c4a1352a2890b1bf6348965ca97a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:07 np0005486808 podman[83673]: 2025-10-14 08:23:07.498757918 +0000 UTC m=+0.171211667 container init 2894b4ca6a8699f28eeeade91ce0e971629be7defead622b73ffe5fe42dbdc15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 14 04:23:07 np0005486808 podman[83673]: 2025-10-14 08:23:07.515583981 +0000 UTC m=+0.188037680 container start 2894b4ca6a8699f28eeeade91ce0e971629be7defead622b73ffe5fe42dbdc15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 14 04:23:07 np0005486808 podman[83673]: 2025-10-14 08:23:07.521722425 +0000 UTC m=+0.194176094 container attach 2894b4ca6a8699f28eeeade91ce0e971629be7defead622b73ffe5fe42dbdc15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bardeen, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:23:07 np0005486808 ceph-mon[74249]: Removing key for mgr.compute-0.aggyjb
Oct 14 04:23:07 np0005486808 ceph-mgr[74543]: [progress INFO root] Writing back 3 completed events
Oct 14 04:23:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 14 04:23:07 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 14 04:23:08 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:08 np0005486808 eloquent_bardeen[83689]: --> passed data devices: 0 physical, 3 LVM
Oct 14 04:23:08 np0005486808 eloquent_bardeen[83689]: --> relative data size: 1.0
Oct 14 04:23:08 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 14 04:23:08 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new f3a61673-339d-4723-8921-51461af37696
Oct 14 04:23:09 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "f3a61673-339d-4723-8921-51461af37696"} v 0) v1
Oct 14 04:23:09 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/362940945' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f3a61673-339d-4723-8921-51461af37696"}]: dispatch
Oct 14 04:23:09 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Oct 14 04:23:09 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 14 04:23:09 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/362940945' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f3a61673-339d-4723-8921-51461af37696"}]': finished
Oct 14 04:23:09 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Oct 14 04:23:09 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Oct 14 04:23:09 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 14 04:23:09 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 14 04:23:09 np0005486808 ceph-mgr[74543]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 14 04:23:09 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 14 04:23:09 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Oct 14 04:23:09 np0005486808 lvm[83751]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 14 04:23:09 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Oct 14 04:23:09 np0005486808 lvm[83751]: VG ceph_vg0 finished
Oct 14 04:23:09 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 14 04:23:09 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 14 04:23:09 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Oct 14 04:23:09 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/362940945' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f3a61673-339d-4723-8921-51461af37696"}]: dispatch
Oct 14 04:23:09 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/362940945' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f3a61673-339d-4723-8921-51461af37696"}]': finished
Oct 14 04:23:09 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Oct 14 04:23:09 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4070534060' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 14 04:23:09 np0005486808 eloquent_bardeen[83689]: stderr: got monmap epoch 1
Oct 14 04:23:09 np0005486808 eloquent_bardeen[83689]: --> Creating keyring file for osd.0
Oct 14 04:23:09 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Oct 14 04:23:09 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Oct 14 04:23:09 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid f3a61673-339d-4723-8921-51461af37696 --setuser ceph --setgroup ceph
Oct 14 04:23:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 14 04:23:10 np0005486808 ceph-mon[74249]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct 14 04:23:10 np0005486808 ceph-mon[74249]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 14 04:23:11 np0005486808 ceph-mon[74249]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct 14 04:23:11 np0005486808 ceph-mon[74249]: Cluster is now healthy
Oct 14 04:23:11 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:23:12 np0005486808 eloquent_bardeen[83689]: stderr: 2025-10-14T08:23:10.021+0000 7f6f4fedd740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 14 04:23:12 np0005486808 eloquent_bardeen[83689]: stderr: 2025-10-14T08:23:10.021+0000 7f6f4fedd740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 14 04:23:12 np0005486808 eloquent_bardeen[83689]: stderr: 2025-10-14T08:23:10.021+0000 7f6f4fedd740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 14 04:23:12 np0005486808 eloquent_bardeen[83689]: stderr: 2025-10-14T08:23:10.021+0000 7f6f4fedd740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Oct 14 04:23:12 np0005486808 eloquent_bardeen[83689]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Oct 14 04:23:12 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 14 04:23:12 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Oct 14 04:23:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 14 04:23:12 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 14 04:23:12 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Oct 14 04:23:12 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 14 04:23:12 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 14 04:23:12 np0005486808 eloquent_bardeen[83689]: --> ceph-volume lvm activate successful for osd ID: 0
Oct 14 04:23:12 np0005486808 eloquent_bardeen[83689]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Oct 14 04:23:12 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 14 04:23:12 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 8e4dad3b-8894-46e9-8f78-c9bac8e09fb7
Oct 14 04:23:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7"} v 0) v1
Oct 14 04:23:13 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2396983002' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7"}]: dispatch
Oct 14 04:23:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Oct 14 04:23:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 14 04:23:13 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2396983002' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7"}]': finished
Oct 14 04:23:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Oct 14 04:23:13 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Oct 14 04:23:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 14 04:23:13 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 14 04:23:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 14 04:23:13 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 14 04:23:13 np0005486808 ceph-mgr[74543]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 14 04:23:13 np0005486808 ceph-mgr[74543]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 14 04:23:13 np0005486808 lvm[84684]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 14 04:23:13 np0005486808 lvm[84684]: VG ceph_vg1 finished
Oct 14 04:23:13 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 14 04:23:13 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Oct 14 04:23:13 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Oct 14 04:23:13 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct 14 04:23:13 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct 14 04:23:13 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Oct 14 04:23:13 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/2396983002' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7"}]: dispatch
Oct 14 04:23:13 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/2396983002' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7"}]': finished
Oct 14 04:23:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Oct 14 04:23:13 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3887154310' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 14 04:23:13 np0005486808 eloquent_bardeen[83689]: stderr: got monmap epoch 1
Oct 14 04:23:13 np0005486808 eloquent_bardeen[83689]: --> Creating keyring file for osd.1
Oct 14 04:23:13 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Oct 14 04:23:13 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Oct 14 04:23:13 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 8e4dad3b-8894-46e9-8f78-c9bac8e09fb7 --setuser ceph --setgroup ceph
Oct 14 04:23:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 14 04:23:16 np0005486808 eloquent_bardeen[83689]: stderr: 2025-10-14T08:23:13.936+0000 7f456aa58740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 14 04:23:16 np0005486808 eloquent_bardeen[83689]: stderr: 2025-10-14T08:23:13.936+0000 7f456aa58740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 14 04:23:16 np0005486808 eloquent_bardeen[83689]: stderr: 2025-10-14T08:23:13.936+0000 7f456aa58740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 14 04:23:16 np0005486808 eloquent_bardeen[83689]: stderr: 2025-10-14T08:23:13.937+0000 7f456aa58740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Oct 14 04:23:16 np0005486808 eloquent_bardeen[83689]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Oct 14 04:23:16 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 14 04:23:16 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Oct 14 04:23:16 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct 14 04:23:16 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Oct 14 04:23:16 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct 14 04:23:16 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 14 04:23:16 np0005486808 eloquent_bardeen[83689]: --> ceph-volume lvm activate successful for osd ID: 1
Oct 14 04:23:16 np0005486808 eloquent_bardeen[83689]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Oct 14 04:23:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 14 04:23:16 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 14 04:23:16 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 89de92de-6791-493e-a676-9fee8315c8cf
Oct 14 04:23:16 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:23:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "89de92de-6791-493e-a676-9fee8315c8cf"} v 0) v1
Oct 14 04:23:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/90890218' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "89de92de-6791-493e-a676-9fee8315c8cf"}]: dispatch
Oct 14 04:23:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Oct 14 04:23:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 14 04:23:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/90890218' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "89de92de-6791-493e-a676-9fee8315c8cf"}]': finished
Oct 14 04:23:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Oct 14 04:23:17 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Oct 14 04:23:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 14 04:23:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 14 04:23:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 14 04:23:17 np0005486808 ceph-mgr[74543]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 14 04:23:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 14 04:23:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 14 04:23:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 14 04:23:17 np0005486808 ceph-mgr[74543]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 14 04:23:17 np0005486808 ceph-mgr[74543]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 14 04:23:17 np0005486808 lvm[85619]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 14 04:23:17 np0005486808 lvm[85619]: VG ceph_vg2 finished
Oct 14 04:23:17 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 14 04:23:17 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Oct 14 04:23:17 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Oct 14 04:23:17 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct 14 04:23:17 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct 14 04:23:17 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Oct 14 04:23:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Oct 14 04:23:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1174841379' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 14 04:23:17 np0005486808 eloquent_bardeen[83689]: stderr: got monmap epoch 1
Oct 14 04:23:17 np0005486808 eloquent_bardeen[83689]: --> Creating keyring file for osd.2
Oct 14 04:23:17 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Oct 14 04:23:17 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Oct 14 04:23:17 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 89de92de-6791-493e-a676-9fee8315c8cf --setuser ceph --setgroup ceph
Oct 14 04:23:17 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/90890218' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "89de92de-6791-493e-a676-9fee8315c8cf"}]: dispatch
Oct 14 04:23:17 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/90890218' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "89de92de-6791-493e-a676-9fee8315c8cf"}]': finished
Oct 14 04:23:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 14 04:23:20 np0005486808 eloquent_bardeen[83689]: stderr: 2025-10-14T08:23:17.826+0000 7f587c380740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 14 04:23:20 np0005486808 eloquent_bardeen[83689]: stderr: 2025-10-14T08:23:17.826+0000 7f587c380740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 14 04:23:20 np0005486808 eloquent_bardeen[83689]: stderr: 2025-10-14T08:23:17.826+0000 7f587c380740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 14 04:23:20 np0005486808 eloquent_bardeen[83689]: stderr: 2025-10-14T08:23:17.827+0000 7f587c380740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Oct 14 04:23:20 np0005486808 eloquent_bardeen[83689]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Oct 14 04:23:20 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 14 04:23:20 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Oct 14 04:23:20 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct 14 04:23:20 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Oct 14 04:23:20 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct 14 04:23:20 np0005486808 eloquent_bardeen[83689]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 14 04:23:20 np0005486808 eloquent_bardeen[83689]: --> ceph-volume lvm activate successful for osd ID: 2
Oct 14 04:23:20 np0005486808 eloquent_bardeen[83689]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Oct 14 04:23:20 np0005486808 systemd[1]: libpod-2894b4ca6a8699f28eeeade91ce0e971629be7defead622b73ffe5fe42dbdc15.scope: Deactivated successfully.
Oct 14 04:23:20 np0005486808 systemd[1]: libpod-2894b4ca6a8699f28eeeade91ce0e971629be7defead622b73ffe5fe42dbdc15.scope: Consumed 6.879s CPU time.
Oct 14 04:23:20 np0005486808 podman[86523]: 2025-10-14 08:23:20.52028354 +0000 UTC m=+0.043480720 container died 2894b4ca6a8699f28eeeade91ce0e971629be7defead622b73ffe5fe42dbdc15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bardeen, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Oct 14 04:23:20 np0005486808 systemd[1]: var-lib-containers-storage-overlay-bd05302052144191638fe49ea2da47e7c4d8c4a1352a2890b1bf6348965ca97a-merged.mount: Deactivated successfully.
Oct 14 04:23:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 14 04:23:20 np0005486808 podman[86523]: 2025-10-14 08:23:20.612348209 +0000 UTC m=+0.135545349 container remove 2894b4ca6a8699f28eeeade91ce0e971629be7defead622b73ffe5fe42dbdc15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bardeen, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:23:20 np0005486808 systemd[1]: libpod-conmon-2894b4ca6a8699f28eeeade91ce0e971629be7defead622b73ffe5fe42dbdc15.scope: Deactivated successfully.
Oct 14 04:23:21 np0005486808 podman[86678]: 2025-10-14 08:23:21.422631656 +0000 UTC m=+0.051223244 container create 7da1cbe890b7973a9ba5d54d391d39fd2b6f740816448838ed01d05266e9338a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_turing, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:23:21 np0005486808 systemd[1]: Started libpod-conmon-7da1cbe890b7973a9ba5d54d391d39fd2b6f740816448838ed01d05266e9338a.scope.
Oct 14 04:23:21 np0005486808 podman[86678]: 2025-10-14 08:23:21.396679096 +0000 UTC m=+0.025270724 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:23:21 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:23:21 np0005486808 podman[86678]: 2025-10-14 08:23:21.530154025 +0000 UTC m=+0.158745653 container init 7da1cbe890b7973a9ba5d54d391d39fd2b6f740816448838ed01d05266e9338a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_turing, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:23:21 np0005486808 podman[86678]: 2025-10-14 08:23:21.541449505 +0000 UTC m=+0.170041103 container start 7da1cbe890b7973a9ba5d54d391d39fd2b6f740816448838ed01d05266e9338a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 04:23:21 np0005486808 podman[86678]: 2025-10-14 08:23:21.545943302 +0000 UTC m=+0.174534900 container attach 7da1cbe890b7973a9ba5d54d391d39fd2b6f740816448838ed01d05266e9338a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:23:21 np0005486808 distracted_turing[86695]: 167 167
Oct 14 04:23:21 np0005486808 systemd[1]: libpod-7da1cbe890b7973a9ba5d54d391d39fd2b6f740816448838ed01d05266e9338a.scope: Deactivated successfully.
Oct 14 04:23:21 np0005486808 podman[86678]: 2025-10-14 08:23:21.551525265 +0000 UTC m=+0.180116854 container died 7da1cbe890b7973a9ba5d54d391d39fd2b6f740816448838ed01d05266e9338a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_turing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:23:21 np0005486808 systemd[1]: var-lib-containers-storage-overlay-4bfd0dedd18a0d2c7ee478213353c64318b72f790a64bf95e44fd11e2a902126-merged.mount: Deactivated successfully.
Oct 14 04:23:21 np0005486808 podman[86678]: 2025-10-14 08:23:21.60154436 +0000 UTC m=+0.230135938 container remove 7da1cbe890b7973a9ba5d54d391d39fd2b6f740816448838ed01d05266e9338a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_turing, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:23:21 np0005486808 systemd[1]: libpod-conmon-7da1cbe890b7973a9ba5d54d391d39fd2b6f740816448838ed01d05266e9338a.scope: Deactivated successfully.
Oct 14 04:23:21 np0005486808 podman[86719]: 2025-10-14 08:23:21.855363744 +0000 UTC m=+0.078301521 container create 8826d3cb0b5501cfe4b3f1252f6a8810a772a571fb1cebb51c3159a248cad97e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_goldstine, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:23:21 np0005486808 systemd[1]: Started libpod-conmon-8826d3cb0b5501cfe4b3f1252f6a8810a772a571fb1cebb51c3159a248cad97e.scope.
Oct 14 04:23:21 np0005486808 podman[86719]: 2025-10-14 08:23:21.826822412 +0000 UTC m=+0.049760239 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:23:21 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:23:21 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf2daa0ae093e48fbdfc250996c728453e9457da860766ece51734008eae637b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:21 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf2daa0ae093e48fbdfc250996c728453e9457da860766ece51734008eae637b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:21 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf2daa0ae093e48fbdfc250996c728453e9457da860766ece51734008eae637b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:21 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf2daa0ae093e48fbdfc250996c728453e9457da860766ece51734008eae637b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:21 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:23:21 np0005486808 podman[86719]: 2025-10-14 08:23:21.977581364 +0000 UTC m=+0.200519191 container init 8826d3cb0b5501cfe4b3f1252f6a8810a772a571fb1cebb51c3159a248cad97e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_goldstine, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 14 04:23:21 np0005486808 podman[86719]: 2025-10-14 08:23:21.990581474 +0000 UTC m=+0.213519211 container start 8826d3cb0b5501cfe4b3f1252f6a8810a772a571fb1cebb51c3159a248cad97e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_goldstine, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 14 04:23:21 np0005486808 podman[86719]: 2025-10-14 08:23:21.994363215 +0000 UTC m=+0.217301052 container attach 8826d3cb0b5501cfe4b3f1252f6a8810a772a571fb1cebb51c3159a248cad97e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_goldstine, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:23:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]: {
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:    "0": [
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:        {
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:            "devices": [
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:                "/dev/loop3"
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:            ],
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:            "lv_name": "ceph_lv0",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:            "lv_size": "21470642176",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:            "name": "ceph_lv0",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:            "tags": {
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:                "ceph.cluster_name": "ceph",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:                "ceph.crush_device_class": "",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:                "ceph.encrypted": "0",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:                "ceph.osd_id": "0",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:                "ceph.type": "block",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:                "ceph.vdo": "0"
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:            },
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:            "type": "block",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:            "vg_name": "ceph_vg0"
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:        }
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:    ],
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:    "1": [
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:        {
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:            "devices": [
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:                "/dev/loop4"
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:            ],
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:            "lv_name": "ceph_lv1",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:            "lv_size": "21470642176",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:            "name": "ceph_lv1",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:            "tags": {
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:                "ceph.cluster_name": "ceph",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:                "ceph.crush_device_class": "",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:                "ceph.encrypted": "0",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:                "ceph.osd_id": "1",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:                "ceph.type": "block",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:                "ceph.vdo": "0"
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:            },
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:            "type": "block",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:            "vg_name": "ceph_vg1"
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:        }
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:    ],
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:    "2": [
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:        {
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:            "devices": [
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:                "/dev/loop5"
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:            ],
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:            "lv_name": "ceph_lv2",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:            "lv_size": "21470642176",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:            "name": "ceph_lv2",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:            "tags": {
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:                "ceph.cluster_name": "ceph",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:                "ceph.crush_device_class": "",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:                "ceph.encrypted": "0",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:                "ceph.osd_id": "2",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:                "ceph.type": "block",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:                "ceph.vdo": "0"
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:            },
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:            "type": "block",
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:            "vg_name": "ceph_vg2"
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:        }
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]:    ]
Oct 14 04:23:22 np0005486808 angry_goldstine[86736]: }
Oct 14 04:23:22 np0005486808 systemd[1]: libpod-8826d3cb0b5501cfe4b3f1252f6a8810a772a571fb1cebb51c3159a248cad97e.scope: Deactivated successfully.
Oct 14 04:23:22 np0005486808 podman[86719]: 2025-10-14 08:23:22.729759883 +0000 UTC m=+0.952697620 container died 8826d3cb0b5501cfe4b3f1252f6a8810a772a571fb1cebb51c3159a248cad97e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_goldstine, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:23:22 np0005486808 systemd[1]: var-lib-containers-storage-overlay-cf2daa0ae093e48fbdfc250996c728453e9457da860766ece51734008eae637b-merged.mount: Deactivated successfully.
Oct 14 04:23:22 np0005486808 podman[86719]: 2025-10-14 08:23:22.789821828 +0000 UTC m=+1.012759565 container remove 8826d3cb0b5501cfe4b3f1252f6a8810a772a571fb1cebb51c3159a248cad97e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 14 04:23:22 np0005486808 systemd[1]: libpod-conmon-8826d3cb0b5501cfe4b3f1252f6a8810a772a571fb1cebb51c3159a248cad97e.scope: Deactivated successfully.
Oct 14 04:23:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Oct 14 04:23:22 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 14 04:23:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:23:22 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:23:22 np0005486808 ceph-mgr[74543]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Oct 14 04:23:22 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Oct 14 04:23:22 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 14 04:23:23 np0005486808 podman[86898]: 2025-10-14 08:23:23.549903195 +0000 UTC m=+0.053563831 container create 3bf05187b1d3b1f7f91c4bd5d18e482342e8c606e02dce45f2075d1cdd3405db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 14 04:23:23 np0005486808 systemd[1]: Started libpod-conmon-3bf05187b1d3b1f7f91c4bd5d18e482342e8c606e02dce45f2075d1cdd3405db.scope.
Oct 14 04:23:23 np0005486808 podman[86898]: 2025-10-14 08:23:23.530519572 +0000 UTC m=+0.034180238 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:23:23 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:23:23 np0005486808 podman[86898]: 2025-10-14 08:23:23.6543224 +0000 UTC m=+0.157983106 container init 3bf05187b1d3b1f7f91c4bd5d18e482342e8c606e02dce45f2075d1cdd3405db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_sinoussi, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 14 04:23:23 np0005486808 podman[86898]: 2025-10-14 08:23:23.665301522 +0000 UTC m=+0.168962188 container start 3bf05187b1d3b1f7f91c4bd5d18e482342e8c606e02dce45f2075d1cdd3405db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:23:23 np0005486808 podman[86898]: 2025-10-14 08:23:23.66984227 +0000 UTC m=+0.173502976 container attach 3bf05187b1d3b1f7f91c4bd5d18e482342e8c606e02dce45f2075d1cdd3405db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Oct 14 04:23:23 np0005486808 heuristic_sinoussi[86915]: 167 167
Oct 14 04:23:23 np0005486808 systemd[1]: libpod-3bf05187b1d3b1f7f91c4bd5d18e482342e8c606e02dce45f2075d1cdd3405db.scope: Deactivated successfully.
Oct 14 04:23:23 np0005486808 podman[86898]: 2025-10-14 08:23:23.674273906 +0000 UTC m=+0.177934562 container died 3bf05187b1d3b1f7f91c4bd5d18e482342e8c606e02dce45f2075d1cdd3405db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 14 04:23:23 np0005486808 systemd[1]: var-lib-containers-storage-overlay-a74db927573e5df6f2b75612f666b06fcfd8312e62649f9ca97f06fd85b30bdc-merged.mount: Deactivated successfully.
Oct 14 04:23:23 np0005486808 podman[86898]: 2025-10-14 08:23:23.726646247 +0000 UTC m=+0.230306903 container remove 3bf05187b1d3b1f7f91c4bd5d18e482342e8c606e02dce45f2075d1cdd3405db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:23:23 np0005486808 systemd[1]: libpod-conmon-3bf05187b1d3b1f7f91c4bd5d18e482342e8c606e02dce45f2075d1cdd3405db.scope: Deactivated successfully.
Oct 14 04:23:23 np0005486808 ceph-mon[74249]: Deploying daemon osd.0 on compute-0
Oct 14 04:23:24 np0005486808 podman[86947]: 2025-10-14 08:23:24.016105422 +0000 UTC m=+0.052542776 container create abded79b46a925515ab1becbbcf4a18c6e8964c8130cf9726d514190adaf0133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-0-activate-test, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:23:24 np0005486808 systemd[1]: Started libpod-conmon-abded79b46a925515ab1becbbcf4a18c6e8964c8130cf9726d514190adaf0133.scope.
Oct 14 04:23:24 np0005486808 podman[86947]: 2025-10-14 08:23:23.988212516 +0000 UTC m=+0.024649930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:23:24 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:23:24 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/413358b205bfddd9541c072afde95a075a20c1627ec4d965a059f16a46e87b3f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:24 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/413358b205bfddd9541c072afde95a075a20c1627ec4d965a059f16a46e87b3f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:24 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/413358b205bfddd9541c072afde95a075a20c1627ec4d965a059f16a46e87b3f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:24 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/413358b205bfddd9541c072afde95a075a20c1627ec4d965a059f16a46e87b3f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:24 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/413358b205bfddd9541c072afde95a075a20c1627ec4d965a059f16a46e87b3f/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:24 np0005486808 podman[86947]: 2025-10-14 08:23:24.134979982 +0000 UTC m=+0.171417326 container init abded79b46a925515ab1becbbcf4a18c6e8964c8130cf9726d514190adaf0133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-0-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 14 04:23:24 np0005486808 podman[86947]: 2025-10-14 08:23:24.153243979 +0000 UTC m=+0.189681333 container start abded79b46a925515ab1becbbcf4a18c6e8964c8130cf9726d514190adaf0133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-0-activate-test, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:23:24 np0005486808 podman[86947]: 2025-10-14 08:23:24.157447489 +0000 UTC m=+0.193884883 container attach abded79b46a925515ab1becbbcf4a18c6e8964c8130cf9726d514190adaf0133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-0-activate-test, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 14 04:23:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 14 04:23:24 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-0-activate-test[86963]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct 14 04:23:24 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-0-activate-test[86963]:                            [--no-systemd] [--no-tmpfs]
Oct 14 04:23:24 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-0-activate-test[86963]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct 14 04:23:24 np0005486808 systemd[1]: libpod-abded79b46a925515ab1becbbcf4a18c6e8964c8130cf9726d514190adaf0133.scope: Deactivated successfully.
Oct 14 04:23:24 np0005486808 podman[86947]: 2025-10-14 08:23:24.775483664 +0000 UTC m=+0.811920978 container died abded79b46a925515ab1becbbcf4a18c6e8964c8130cf9726d514190adaf0133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-0-activate-test, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:23:24 np0005486808 systemd[1]: var-lib-containers-storage-overlay-413358b205bfddd9541c072afde95a075a20c1627ec4d965a059f16a46e87b3f-merged.mount: Deactivated successfully.
Oct 14 04:23:24 np0005486808 podman[86947]: 2025-10-14 08:23:24.830085418 +0000 UTC m=+0.866522732 container remove abded79b46a925515ab1becbbcf4a18c6e8964c8130cf9726d514190adaf0133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-0-activate-test, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 14 04:23:24 np0005486808 systemd[1]: libpod-conmon-abded79b46a925515ab1becbbcf4a18c6e8964c8130cf9726d514190adaf0133.scope: Deactivated successfully.
Oct 14 04:23:25 np0005486808 systemd[1]: Reloading.
Oct 14 04:23:25 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:23:25 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:23:25 np0005486808 systemd[1]: Reloading.
Oct 14 04:23:25 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:23:25 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:23:25 np0005486808 systemd[1]: Starting Ceph osd.0 for c49aadb6-9b04-5cb1-8f5f-4c91676c568e...
Oct 14 04:23:26 np0005486808 podman[87125]: 2025-10-14 08:23:26.01395445 +0000 UTC m=+0.058141080 container create 71378f10ae9c022f5c3e3f88936c9b783eaf9ec1ad4b18e8328ecb1a9d5af9e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-0-activate, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Oct 14 04:23:26 np0005486808 podman[87125]: 2025-10-14 08:23:25.986412942 +0000 UTC m=+0.030599662 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:23:26 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:23:26 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64dc80bc8415065c3a403b813d8924f9ff01d53177228452fe5bbd8a7c699b41/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:26 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64dc80bc8415065c3a403b813d8924f9ff01d53177228452fe5bbd8a7c699b41/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:26 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64dc80bc8415065c3a403b813d8924f9ff01d53177228452fe5bbd8a7c699b41/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:26 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64dc80bc8415065c3a403b813d8924f9ff01d53177228452fe5bbd8a7c699b41/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:26 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64dc80bc8415065c3a403b813d8924f9ff01d53177228452fe5bbd8a7c699b41/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:26 np0005486808 podman[87125]: 2025-10-14 08:23:26.10770961 +0000 UTC m=+0.151896270 container init 71378f10ae9c022f5c3e3f88936c9b783eaf9ec1ad4b18e8328ecb1a9d5af9e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-0-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:23:26 np0005486808 podman[87125]: 2025-10-14 08:23:26.120916175 +0000 UTC m=+0.165102825 container start 71378f10ae9c022f5c3e3f88936c9b783eaf9ec1ad4b18e8328ecb1a9d5af9e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-0-activate, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Oct 14 04:23:26 np0005486808 podman[87125]: 2025-10-14 08:23:26.124519991 +0000 UTC m=+0.168706671 container attach 71378f10ae9c022f5c3e3f88936c9b783eaf9ec1ad4b18e8328ecb1a9d5af9e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-0-activate, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:23:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 14 04:23:26 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:23:27 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-0-activate[87140]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 14 04:23:27 np0005486808 bash[87125]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 14 04:23:27 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-0-activate[87140]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Oct 14 04:23:27 np0005486808 bash[87125]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Oct 14 04:23:27 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-0-activate[87140]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Oct 14 04:23:27 np0005486808 bash[87125]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Oct 14 04:23:27 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-0-activate[87140]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 14 04:23:27 np0005486808 bash[87125]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 14 04:23:27 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-0-activate[87140]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 14 04:23:27 np0005486808 bash[87125]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 14 04:23:27 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-0-activate[87140]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 14 04:23:27 np0005486808 bash[87125]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 14 04:23:27 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-0-activate[87140]: --> ceph-volume raw activate successful for osd ID: 0
Oct 14 04:23:27 np0005486808 bash[87125]: --> ceph-volume raw activate successful for osd ID: 0
Oct 14 04:23:27 np0005486808 systemd[1]: libpod-71378f10ae9c022f5c3e3f88936c9b783eaf9ec1ad4b18e8328ecb1a9d5af9e5.scope: Deactivated successfully.
Oct 14 04:23:27 np0005486808 podman[87125]: 2025-10-14 08:23:27.27728477 +0000 UTC m=+1.321471440 container died 71378f10ae9c022f5c3e3f88936c9b783eaf9ec1ad4b18e8328ecb1a9d5af9e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-0-activate, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 14 04:23:27 np0005486808 systemd[1]: libpod-71378f10ae9c022f5c3e3f88936c9b783eaf9ec1ad4b18e8328ecb1a9d5af9e5.scope: Consumed 1.179s CPU time.
Oct 14 04:23:27 np0005486808 systemd[1]: var-lib-containers-storage-overlay-64dc80bc8415065c3a403b813d8924f9ff01d53177228452fe5bbd8a7c699b41-merged.mount: Deactivated successfully.
Oct 14 04:23:27 np0005486808 podman[87125]: 2025-10-14 08:23:27.336326081 +0000 UTC m=+1.380512721 container remove 71378f10ae9c022f5c3e3f88936c9b783eaf9ec1ad4b18e8328ecb1a9d5af9e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-0-activate, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 14 04:23:27 np0005486808 podman[87328]: 2025-10-14 08:23:27.577967743 +0000 UTC m=+0.046841280 container create eff7cda8731338d4194b6d31a8c3973d989b440bebf685cbf11e7f878887a83c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 14 04:23:27 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70dd8b8080fcb319f62d2b5911c95f6634592a2e0119dfd685e69f2de6383de5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:27 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70dd8b8080fcb319f62d2b5911c95f6634592a2e0119dfd685e69f2de6383de5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:27 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70dd8b8080fcb319f62d2b5911c95f6634592a2e0119dfd685e69f2de6383de5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:27 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70dd8b8080fcb319f62d2b5911c95f6634592a2e0119dfd685e69f2de6383de5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:27 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70dd8b8080fcb319f62d2b5911c95f6634592a2e0119dfd685e69f2de6383de5/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:27 np0005486808 podman[87328]: 2025-10-14 08:23:27.552992487 +0000 UTC m=+0.021866094 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:23:27 np0005486808 podman[87328]: 2025-10-14 08:23:27.65485783 +0000 UTC m=+0.123731377 container init eff7cda8731338d4194b6d31a8c3973d989b440bebf685cbf11e7f878887a83c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:23:27 np0005486808 podman[87328]: 2025-10-14 08:23:27.66700319 +0000 UTC m=+0.135876727 container start eff7cda8731338d4194b6d31a8c3973d989b440bebf685cbf11e7f878887a83c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:23:27 np0005486808 bash[87328]: eff7cda8731338d4194b6d31a8c3973d989b440bebf685cbf11e7f878887a83c
Oct 14 04:23:27 np0005486808 systemd[1]: Started Ceph osd.0 for c49aadb6-9b04-5cb1-8f5f-4c91676c568e.
Oct 14 04:23:27 np0005486808 ceph-osd[87348]: set uid:gid to 167:167 (ceph:ceph)
Oct 14 04:23:27 np0005486808 ceph-osd[87348]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Oct 14 04:23:27 np0005486808 ceph-osd[87348]: pidfile_write: ignore empty --pid-file
Oct 14 04:23:27 np0005486808 ceph-osd[87348]: bdev(0x55e30bbb3800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 14 04:23:27 np0005486808 ceph-osd[87348]: bdev(0x55e30bbb3800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 14 04:23:27 np0005486808 ceph-osd[87348]: bdev(0x55e30bbb3800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 14 04:23:27 np0005486808 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 14 04:23:27 np0005486808 ceph-osd[87348]: bdev(0x55e30c9eb800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 14 04:23:27 np0005486808 ceph-osd[87348]: bdev(0x55e30c9eb800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 14 04:23:27 np0005486808 ceph-osd[87348]: bdev(0x55e30c9eb800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 14 04:23:27 np0005486808 ceph-osd[87348]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct 14 04:23:27 np0005486808 ceph-osd[87348]: bdev(0x55e30c9eb800 /var/lib/ceph/osd/ceph-0/block) close
Oct 14 04:23:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:23:27 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:23:27 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Oct 14 04:23:27 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 14 04:23:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:23:27 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:23:27 np0005486808 ceph-mgr[74543]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Oct 14 04:23:27 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Oct 14 04:23:27 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:27 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:27 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 14 04:23:27 np0005486808 ceph-osd[87348]: bdev(0x55e30bbb3800 /var/lib/ceph/osd/ceph-0/block) close
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: load: jerasure load: lrc 
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: bdev(0x55e30ca6cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: bdev(0x55e30ca6cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: bdev(0x55e30ca6cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: bdev(0x55e30ca6cc00 /var/lib/ceph/osd/ceph-0/block) close
Oct 14 04:23:28 np0005486808 podman[87514]: 2025-10-14 08:23:28.488242219 +0000 UTC m=+0.037982508 container create 9c34cfd3fa43f682a20367e748c7b108cc6327fe3b32e10c7767a8866aeea9e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: bdev(0x55e30ca6cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: bdev(0x55e30ca6cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: bdev(0x55e30ca6cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: bdev(0x55e30ca6cc00 /var/lib/ceph/osd/ceph-0/block) close
Oct 14 04:23:28 np0005486808 systemd[1]: Started libpod-conmon-9c34cfd3fa43f682a20367e748c7b108cc6327fe3b32e10c7767a8866aeea9e6.scope.
Oct 14 04:23:28 np0005486808 podman[87514]: 2025-10-14 08:23:28.471340895 +0000 UTC m=+0.021081174 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:23:28 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:23:28 np0005486808 podman[87514]: 2025-10-14 08:23:28.594663411 +0000 UTC m=+0.144403760 container init 9c34cfd3fa43f682a20367e748c7b108cc6327fe3b32e10c7767a8866aeea9e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:23:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 14 04:23:28 np0005486808 podman[87514]: 2025-10-14 08:23:28.608471311 +0000 UTC m=+0.158211610 container start 9c34cfd3fa43f682a20367e748c7b108cc6327fe3b32e10c7767a8866aeea9e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_euler, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 14 04:23:28 np0005486808 podman[87514]: 2025-10-14 08:23:28.612304283 +0000 UTC m=+0.162044622 container attach 9c34cfd3fa43f682a20367e748c7b108cc6327fe3b32e10c7767a8866aeea9e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 14 04:23:28 np0005486808 magical_euler[87535]: 167 167
Oct 14 04:23:28 np0005486808 systemd[1]: libpod-9c34cfd3fa43f682a20367e748c7b108cc6327fe3b32e10c7767a8866aeea9e6.scope: Deactivated successfully.
Oct 14 04:23:28 np0005486808 conmon[87535]: conmon 9c34cfd3fa43f682a203 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9c34cfd3fa43f682a20367e748c7b108cc6327fe3b32e10c7767a8866aeea9e6.scope/container/memory.events
Oct 14 04:23:28 np0005486808 podman[87514]: 2025-10-14 08:23:28.619356791 +0000 UTC m=+0.169097060 container died 9c34cfd3fa43f682a20367e748c7b108cc6327fe3b32e10c7767a8866aeea9e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_euler, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 14 04:23:28 np0005486808 systemd[1]: var-lib-containers-storage-overlay-54e128da49ee2248081670726c18850020a97d0b6365d9a7fa5a82a587893f7e-merged.mount: Deactivated successfully.
Oct 14 04:23:28 np0005486808 podman[87514]: 2025-10-14 08:23:28.669327045 +0000 UTC m=+0.219067314 container remove 9c34cfd3fa43f682a20367e748c7b108cc6327fe3b32e10c7767a8866aeea9e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:23:28 np0005486808 systemd[1]: libpod-conmon-9c34cfd3fa43f682a20367e748c7b108cc6327fe3b32e10c7767a8866aeea9e6.scope: Deactivated successfully.
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: bdev(0x55e30ca6cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: bdev(0x55e30ca6cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: bdev(0x55e30ca6cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: bdev(0x55e30ca6d400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: bdev(0x55e30ca6d400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: bdev(0x55e30ca6d400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: bluefs mount
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: bluefs mount shared_bdev_used = 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: RocksDB version: 7.9.2
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Git sha 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: DB SUMMARY
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: DB Session ID:  RIH7F115N9EPF925W1BO
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: CURRENT file:  CURRENT
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: IDENTITY file:  IDENTITY
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                         Options.error_if_exists: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                       Options.create_if_missing: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                         Options.paranoid_checks: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                                     Options.env: 0x55e30ca3dc70
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                                Options.info_log: 0x55e30bc3a8a0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.max_file_opening_threads: 16
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                              Options.statistics: (nil)
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                               Options.use_fsync: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                       Options.max_log_file_size: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                         Options.allow_fallocate: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                        Options.use_direct_reads: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.create_missing_column_families: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                              Options.db_log_dir: 
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                                 Options.wal_dir: db.wal
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                   Options.advise_random_on_open: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                    Options.write_buffer_manager: 0x55e30cb50460
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                            Options.rate_limiter: (nil)
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.unordered_write: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                               Options.row_cache: None
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                              Options.wal_filter: None
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:             Options.allow_ingest_behind: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:             Options.two_write_queues: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:             Options.manual_wal_flush: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:             Options.wal_compression: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:             Options.atomic_flush: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                 Options.log_readahead_size: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:             Options.allow_data_in_errors: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:             Options.db_host_id: __hostname__
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:             Options.max_background_jobs: 4
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:             Options.max_background_compactions: -1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:             Options.max_subcompactions: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                          Options.max_open_files: -1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                          Options.bytes_per_sync: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.max_background_flushes: -1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Compression algorithms supported:
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: 	kZSTD supported: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: 	kXpressCompression supported: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: 	kBZip2Compression supported: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: 	kLZ4Compression supported: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: 	kZlibCompression supported: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: 	kLZ4HCCompression supported: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: 	kSnappyCompression supported: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e30bc3a2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55e30bc271f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e30bc3a2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55e30bc271f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e30bc3a2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55e30bc271f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e30bc3a2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55e30bc271f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e30bc3a2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55e30bc271f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e30bc3a2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55e30bc271f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e30bc3a2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55e30bc271f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e30bc3a240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55e30bc27090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e30bc3a240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55e30bc27090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e30bc3a240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55e30bc27090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 0b27f15e-4ae3-49d4-a50b-6351e9f94474
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430208817819, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430208818066, "job": 1, "event": "recovery_finished"}
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: freelist init
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: freelist _read_cfg
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: bluefs umount
Oct 14 04:23:28 np0005486808 ceph-osd[87348]: bdev(0x55e30ca6d400 /var/lib/ceph/osd/ceph-0/block) close
Oct 14 04:23:28 np0005486808 ceph-mon[74249]: Deploying daemon osd.1 on compute-0
Oct 14 04:23:29 np0005486808 podman[87761]: 2025-10-14 08:23:29.053529824 +0000 UTC m=+0.074347818 container create 3ba59a0986489706e88b6e8d7634fa88bf12866a5c7aee8449c6f2b711af0a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-1-activate-test, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: bdev(0x55e30ca6d400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: bdev(0x55e30ca6d400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: bdev(0x55e30ca6d400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: bluefs mount
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: bluefs mount shared_bdev_used = 4718592
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: RocksDB version: 7.9.2
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Git sha 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: DB SUMMARY
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: DB Session ID:  RIH7F115N9EPF925W1BP
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: CURRENT file:  CURRENT
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: IDENTITY file:  IDENTITY
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                         Options.error_if_exists: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                       Options.create_if_missing: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                         Options.paranoid_checks: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                                     Options.env: 0x55e30cbf8b60
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                                Options.info_log: 0x55e30bc3a300
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.max_file_opening_threads: 16
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                              Options.statistics: (nil)
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                               Options.use_fsync: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                       Options.max_log_file_size: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                         Options.allow_fallocate: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                        Options.use_direct_reads: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.create_missing_column_families: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                              Options.db_log_dir: 
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                                 Options.wal_dir: db.wal
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                   Options.advise_random_on_open: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                    Options.write_buffer_manager: 0x55e30cb506e0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                            Options.rate_limiter: (nil)
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.unordered_write: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                               Options.row_cache: None
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                              Options.wal_filter: None
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:             Options.allow_ingest_behind: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:             Options.two_write_queues: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:             Options.manual_wal_flush: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:             Options.wal_compression: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:             Options.atomic_flush: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                 Options.log_readahead_size: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:             Options.allow_data_in_errors: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:             Options.db_host_id: __hostname__
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:             Options.max_background_jobs: 4
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:             Options.max_background_compactions: -1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:             Options.max_subcompactions: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                          Options.max_open_files: -1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                          Options.bytes_per_sync: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.max_background_flushes: -1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Compression algorithms supported:
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: 	kZSTD supported: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: 	kXpressCompression supported: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: 	kBZip2Compression supported: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: 	kLZ4Compression supported: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: 	kZlibCompression supported: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: 	kLZ4HCCompression supported: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: 	kSnappyCompression supported: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e30bc3aa20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55e30bc271f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e30bc3aa20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55e30bc271f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e30bc3aa20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55e30bc271f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e30bc3aa20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55e30bc271f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e30bc3aa20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55e30bc271f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e30bc3aa20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55e30bc271f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e30bc3aa20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55e30bc271f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e30bc3a380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55e30bc27090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e30bc3a380)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55e30bc27090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:29 np0005486808 podman[87761]: 2025-10-14 08:23:29.025721799 +0000 UTC m=+0.046539833 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:23:29 np0005486808 systemd[1]: Started libpod-conmon-3ba59a0986489706e88b6e8d7634fa88bf12866a5c7aee8449c6f2b711af0a86.scope.
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e30bc3a380)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55e30bc27090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 0b27f15e-4ae3-49d4-a50b-6351e9f94474
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430209131478, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430209139687, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430209, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0b27f15e-4ae3-49d4-a50b-6351e9f94474", "db_session_id": "RIH7F115N9EPF925W1BP", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430209143675, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430209, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0b27f15e-4ae3-49d4-a50b-6351e9f94474", "db_session_id": "RIH7F115N9EPF925W1BP", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430209146596, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430209, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0b27f15e-4ae3-49d4-a50b-6351e9f94474", "db_session_id": "RIH7F115N9EPF925W1BP", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430209148180, "job": 1, "event": "recovery_finished"}
Oct 14 04:23:29 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct 14 04:23:29 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/597f341665550eadba57c3ff988ff06cdf5a054891fa653ac5aad82ef58d6fe0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:29 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/597f341665550eadba57c3ff988ff06cdf5a054891fa653ac5aad82ef58d6fe0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:29 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/597f341665550eadba57c3ff988ff06cdf5a054891fa653ac5aad82ef58d6fe0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:29 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/597f341665550eadba57c3ff988ff06cdf5a054891fa653ac5aad82ef58d6fe0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:29 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/597f341665550eadba57c3ff988ff06cdf5a054891fa653ac5aad82ef58d6fe0/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:29 np0005486808 podman[87761]: 2025-10-14 08:23:29.173980271 +0000 UTC m=+0.194798315 container init 3ba59a0986489706e88b6e8d7634fa88bf12866a5c7aee8449c6f2b711af0a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-1-activate-test, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 14 04:23:29 np0005486808 podman[87761]: 2025-10-14 08:23:29.187772511 +0000 UTC m=+0.208590505 container start 3ba59a0986489706e88b6e8d7634fa88bf12866a5c7aee8449c6f2b711af0a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-1-activate-test, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55e30bd94000
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: DB pointer 0x55e30cb2fa00
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e30bc271f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e30bc271f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e30bc271f0#2 capacity: 460.80 MB usage: 0
Oct 14 04:23:29 np0005486808 podman[87761]: 2025-10-14 08:23:29.191721145 +0000 UTC m=+0.212539149 container attach 3ba59a0986489706e88b6e8d7634fa88bf12866a5c7aee8449c6f2b711af0a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: _get_class not permitted to load lua
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: _get_class not permitted to load sdk
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: _get_class not permitted to load test_remote_reads
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: osd.0 0 load_pgs
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: osd.0 0 load_pgs opened 0 pgs
Oct 14 04:23:29 np0005486808 ceph-osd[87348]: osd.0 0 log_to_monitors true
Oct 14 04:23:29 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-0[87344]: 2025-10-14T08:23:29.195+0000 7f4f7ee86740 -1 osd.0 0 log_to_monitors true
Oct 14 04:23:29 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Oct 14 04:23:29 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1859262966,v1:192.168.122.100:6803/1859262966]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct 14 04:23:29 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-1-activate-test[87914]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct 14 04:23:29 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-1-activate-test[87914]:                            [--no-systemd] [--no-tmpfs]
Oct 14 04:23:29 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-1-activate-test[87914]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct 14 04:23:29 np0005486808 systemd[1]: libpod-3ba59a0986489706e88b6e8d7634fa88bf12866a5c7aee8449c6f2b711af0a86.scope: Deactivated successfully.
Oct 14 04:23:29 np0005486808 podman[87761]: 2025-10-14 08:23:29.80016415 +0000 UTC m=+0.820982144 container died 3ba59a0986489706e88b6e8d7634fa88bf12866a5c7aee8449c6f2b711af0a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-1-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 14 04:23:29 np0005486808 systemd[1]: var-lib-containers-storage-overlay-597f341665550eadba57c3ff988ff06cdf5a054891fa653ac5aad82ef58d6fe0-merged.mount: Deactivated successfully.
Oct 14 04:23:29 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Oct 14 04:23:29 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 14 04:23:29 np0005486808 ceph-mon[74249]: from='osd.0 [v2:192.168.122.100:6802/1859262966,v1:192.168.122.100:6803/1859262966]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct 14 04:23:29 np0005486808 podman[87761]: 2025-10-14 08:23:29.891217836 +0000 UTC m=+0.912035830 container remove 3ba59a0986489706e88b6e8d7634fa88bf12866a5c7aee8449c6f2b711af0a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-1-activate-test, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 14 04:23:29 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1859262966,v1:192.168.122.100:6803/1859262966]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct 14 04:23:29 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Oct 14 04:23:29 np0005486808 systemd[1]: libpod-conmon-3ba59a0986489706e88b6e8d7634fa88bf12866a5c7aee8449c6f2b711af0a86.scope: Deactivated successfully.
Oct 14 04:23:29 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Oct 14 04:23:29 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Oct 14 04:23:29 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1859262966,v1:192.168.122.100:6803/1859262966]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 14 04:23:29 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Oct 14 04:23:29 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 14 04:23:29 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 14 04:23:29 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 14 04:23:29 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 14 04:23:29 np0005486808 ceph-mgr[74543]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 14 04:23:29 np0005486808 ceph-mgr[74543]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 14 04:23:29 np0005486808 ceph-mgr[74543]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 14 04:23:29 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 14 04:23:29 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 14 04:23:30 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct 14 04:23:30 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct 14 04:23:30 np0005486808 systemd[1]: Reloading.
Oct 14 04:23:30 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:23:30 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:23:30 np0005486808 systemd[1]: Reloading.
Oct 14 04:23:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 14 04:23:30 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:23:30 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:23:30 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Oct 14 04:23:30 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 14 04:23:30 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1859262966,v1:192.168.122.100:6803/1859262966]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 14 04:23:30 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Oct 14 04:23:30 np0005486808 ceph-osd[87348]: osd.0 0 done with init, starting boot process
Oct 14 04:23:30 np0005486808 ceph-osd[87348]: osd.0 0 start_boot
Oct 14 04:23:30 np0005486808 ceph-osd[87348]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct 14 04:23:30 np0005486808 ceph-osd[87348]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct 14 04:23:30 np0005486808 ceph-osd[87348]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct 14 04:23:30 np0005486808 ceph-osd[87348]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct 14 04:23:30 np0005486808 ceph-osd[87348]: osd.0 0  bench count 12288000 bsize 4 KiB
Oct 14 04:23:30 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Oct 14 04:23:30 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 14 04:23:30 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 14 04:23:30 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 14 04:23:30 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 14 04:23:30 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 14 04:23:30 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 14 04:23:30 np0005486808 ceph-mgr[74543]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 14 04:23:30 np0005486808 ceph-mgr[74543]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 14 04:23:30 np0005486808 ceph-mgr[74543]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 14 04:23:30 np0005486808 ceph-mon[74249]: from='osd.0 [v2:192.168.122.100:6802/1859262966,v1:192.168.122.100:6803/1859262966]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct 14 04:23:30 np0005486808 ceph-mon[74249]: from='osd.0 [v2:192.168.122.100:6802/1859262966,v1:192.168.122.100:6803/1859262966]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 14 04:23:30 np0005486808 ceph-mgr[74543]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1859262966; not ready for session (expect reconnect)
Oct 14 04:23:30 np0005486808 systemd[1]: Starting Ceph osd.1 for c49aadb6-9b04-5cb1-8f5f-4c91676c568e...
Oct 14 04:23:30 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 14 04:23:30 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 14 04:23:30 np0005486808 ceph-mgr[74543]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 14 04:23:31 np0005486808 podman[88150]: 2025-10-14 08:23:31.224157858 +0000 UTC m=+0.064357208 container create 8cce74e070e4eb9271a6ea032e3ab9b6f658c6bc7cf84ccbd849c27d3c7d0293 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-1-activate, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:23:31 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:23:31 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42549b246d039aa903b1852c50af569fd13c3477ed86fbed481644fed1e66ba3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:31 np0005486808 podman[88150]: 2025-10-14 08:23:31.195155815 +0000 UTC m=+0.035355155 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:23:31 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42549b246d039aa903b1852c50af569fd13c3477ed86fbed481644fed1e66ba3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:31 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42549b246d039aa903b1852c50af569fd13c3477ed86fbed481644fed1e66ba3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:31 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42549b246d039aa903b1852c50af569fd13c3477ed86fbed481644fed1e66ba3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:31 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42549b246d039aa903b1852c50af569fd13c3477ed86fbed481644fed1e66ba3/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:31 np0005486808 podman[88150]: 2025-10-14 08:23:31.31423153 +0000 UTC m=+0.154430870 container init 8cce74e070e4eb9271a6ea032e3ab9b6f658c6bc7cf84ccbd849c27d3c7d0293 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-1-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 14 04:23:31 np0005486808 podman[88150]: 2025-10-14 08:23:31.327945528 +0000 UTC m=+0.168144838 container start 8cce74e070e4eb9271a6ea032e3ab9b6f658c6bc7cf84ccbd849c27d3c7d0293 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-1-activate, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:23:31 np0005486808 podman[88150]: 2025-10-14 08:23:31.334252768 +0000 UTC m=+0.174452108 container attach 8cce74e070e4eb9271a6ea032e3ab9b6f658c6bc7cf84ccbd849c27d3c7d0293 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-1-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 14 04:23:31 np0005486808 ceph-mgr[74543]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1859262966; not ready for session (expect reconnect)
Oct 14 04:23:31 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 14 04:23:31 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 14 04:23:31 np0005486808 ceph-mgr[74543]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 14 04:23:31 np0005486808 ceph-mon[74249]: from='osd.0 [v2:192.168.122.100:6802/1859262966,v1:192.168.122.100:6803/1859262966]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 14 04:23:31 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:23:32 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-1-activate[88166]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 14 04:23:32 np0005486808 bash[88150]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 14 04:23:32 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-1-activate[88166]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Oct 14 04:23:32 np0005486808 bash[88150]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Oct 14 04:23:32 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-1-activate[88166]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Oct 14 04:23:32 np0005486808 bash[88150]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Oct 14 04:23:32 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-1-activate[88166]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct 14 04:23:32 np0005486808 bash[88150]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct 14 04:23:32 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-1-activate[88166]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct 14 04:23:32 np0005486808 bash[88150]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct 14 04:23:32 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-1-activate[88166]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 14 04:23:32 np0005486808 bash[88150]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 14 04:23:32 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-1-activate[88166]: --> ceph-volume raw activate successful for osd ID: 1
Oct 14 04:23:32 np0005486808 bash[88150]: --> ceph-volume raw activate successful for osd ID: 1
Oct 14 04:23:32 np0005486808 systemd[1]: libpod-8cce74e070e4eb9271a6ea032e3ab9b6f658c6bc7cf84ccbd849c27d3c7d0293.scope: Deactivated successfully.
Oct 14 04:23:32 np0005486808 podman[88150]: 2025-10-14 08:23:32.487761436 +0000 UTC m=+1.327960776 container died 8cce74e070e4eb9271a6ea032e3ab9b6f658c6bc7cf84ccbd849c27d3c7d0293 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-1-activate, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:23:32 np0005486808 systemd[1]: libpod-8cce74e070e4eb9271a6ea032e3ab9b6f658c6bc7cf84ccbd849c27d3c7d0293.scope: Consumed 1.174s CPU time.
Oct 14 04:23:32 np0005486808 systemd[1]: var-lib-containers-storage-overlay-42549b246d039aa903b1852c50af569fd13c3477ed86fbed481644fed1e66ba3-merged.mount: Deactivated successfully.
Oct 14 04:23:32 np0005486808 podman[88150]: 2025-10-14 08:23:32.584725912 +0000 UTC m=+1.424925232 container remove 8cce74e070e4eb9271a6ea032e3ab9b6f658c6bc7cf84ccbd849c27d3c7d0293 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-1-activate, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 14 04:23:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 14 04:23:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_08:23:32
Oct 14 04:23:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 04:23:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 04:23:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] No pools available
Oct 14 04:23:32 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 04:23:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 04:23:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 04:23:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:23:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:23:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:23:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:23:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:23:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:23:32 np0005486808 podman[88355]: 2025-10-14 08:23:32.866629617 +0000 UTC m=+0.042240390 container create 7925a900700bab3d7fc544967a47b466d9b3d2f9a38380333ce98d4b11903415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-1, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:23:32 np0005486808 ceph-mgr[74543]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1859262966; not ready for session (expect reconnect)
Oct 14 04:23:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 14 04:23:32 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 14 04:23:32 np0005486808 ceph-mgr[74543]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 14 04:23:32 np0005486808 podman[88355]: 2025-10-14 08:23:32.849536158 +0000 UTC m=+0.025146931 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:23:32 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7452947af3805f5311593b5f9bc3a32f61f4c21f6039a5a10e8484baf93443f3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:32 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7452947af3805f5311593b5f9bc3a32f61f4c21f6039a5a10e8484baf93443f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:32 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7452947af3805f5311593b5f9bc3a32f61f4c21f6039a5a10e8484baf93443f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:32 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7452947af3805f5311593b5f9bc3a32f61f4c21f6039a5a10e8484baf93443f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:32 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7452947af3805f5311593b5f9bc3a32f61f4c21f6039a5a10e8484baf93443f3/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:32 np0005486808 podman[88355]: 2025-10-14 08:23:32.985555188 +0000 UTC m=+0.161165971 container init 7925a900700bab3d7fc544967a47b466d9b3d2f9a38380333ce98d4b11903415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-1, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:23:33 np0005486808 podman[88355]: 2025-10-14 08:23:33.007227946 +0000 UTC m=+0.182838759 container start 7925a900700bab3d7fc544967a47b466d9b3d2f9a38380333ce98d4b11903415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-1, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Oct 14 04:23:33 np0005486808 bash[88355]: 7925a900700bab3d7fc544967a47b466d9b3d2f9a38380333ce98d4b11903415
Oct 14 04:23:33 np0005486808 systemd[1]: Started Ceph osd.1 for c49aadb6-9b04-5cb1-8f5f-4c91676c568e.
Oct 14 04:23:33 np0005486808 ceph-osd[88375]: set uid:gid to 167:167 (ceph:ceph)
Oct 14 04:23:33 np0005486808 ceph-osd[88375]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Oct 14 04:23:33 np0005486808 ceph-osd[88375]: pidfile_write: ignore empty --pid-file
Oct 14 04:23:33 np0005486808 ceph-osd[88375]: bdev(0x5597c563b800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 14 04:23:33 np0005486808 ceph-osd[88375]: bdev(0x5597c563b800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 14 04:23:33 np0005486808 ceph-osd[88375]: bdev(0x5597c563b800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 14 04:23:33 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 14 04:23:33 np0005486808 ceph-osd[88375]: bdev(0x5597c6473800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 14 04:23:33 np0005486808 ceph-osd[88375]: bdev(0x5597c6473800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 14 04:23:33 np0005486808 ceph-osd[88375]: bdev(0x5597c6473800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 14 04:23:33 np0005486808 ceph-osd[88375]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct 14 04:23:33 np0005486808 ceph-osd[88375]: bdev(0x5597c6473800 /var/lib/ceph/osd/ceph-1/block) close
Oct 14 04:23:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:23:33 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:23:33 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Oct 14 04:23:33 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct 14 04:23:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:23:33 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:23:33 np0005486808 ceph-mgr[74543]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Oct 14 04:23:33 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Oct 14 04:23:33 np0005486808 ceph-osd[88375]: bdev(0x5597c563b800 /var/lib/ceph/osd/ceph-1/block) close
Oct 14 04:23:33 np0005486808 ceph-osd[87348]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 32.480 iops: 8314.907 elapsed_sec: 0.361
Oct 14 04:23:33 np0005486808 ceph-osd[87348]: log_channel(cluster) log [WRN] : OSD bench result of 8314.906830 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 14 04:23:33 np0005486808 ceph-osd[87348]: osd.0 0 waiting for initial osdmap
Oct 14 04:23:33 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-0[87344]: 2025-10-14T08:23:33.469+0000 7f4f7b61d640 -1 osd.0 0 waiting for initial osdmap
Oct 14 04:23:33 np0005486808 ceph-osd[87348]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Oct 14 04:23:33 np0005486808 ceph-osd[87348]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Oct 14 04:23:33 np0005486808 ceph-osd[87348]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Oct 14 04:23:33 np0005486808 ceph-osd[87348]: osd.0 8 check_osdmap_features require_osd_release unknown -> reef
Oct 14 04:23:33 np0005486808 ceph-osd[87348]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 14 04:23:33 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-0[87344]: 2025-10-14T08:23:33.491+0000 7f4f7642e640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 14 04:23:33 np0005486808 ceph-osd[87348]: osd.0 8 set_numa_affinity not setting numa affinity
Oct 14 04:23:33 np0005486808 ceph-osd[87348]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Oct 14 04:23:33 np0005486808 ceph-osd[88375]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Oct 14 04:23:33 np0005486808 ceph-osd[88375]: load: jerasure load: lrc 
Oct 14 04:23:33 np0005486808 ceph-osd[88375]: bdev(0x5597c64f4c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 14 04:23:33 np0005486808 ceph-osd[88375]: bdev(0x5597c64f4c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 14 04:23:33 np0005486808 ceph-osd[88375]: bdev(0x5597c64f4c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 14 04:23:33 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 14 04:23:33 np0005486808 ceph-osd[88375]: bdev(0x5597c64f4c00 /var/lib/ceph/osd/ceph-1/block) close
Oct 14 04:23:33 np0005486808 podman[88537]: 2025-10-14 08:23:33.826903096 +0000 UTC m=+0.063742453 container create 7c9ea9f8320e280b93a16a228b52a69f276509e2d6b81292b14ef7092d5d84a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 14 04:23:33 np0005486808 systemd[1]: Started libpod-conmon-7c9ea9f8320e280b93a16a228b52a69f276509e2d6b81292b14ef7092d5d84a2.scope.
Oct 14 04:23:33 np0005486808 podman[88537]: 2025-10-14 08:23:33.797433742 +0000 UTC m=+0.034273159 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:23:33 np0005486808 ceph-osd[88375]: bdev(0x5597c64f4c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 14 04:23:33 np0005486808 ceph-osd[88375]: bdev(0x5597c64f4c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 14 04:23:33 np0005486808 ceph-osd[88375]: bdev(0x5597c64f4c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 14 04:23:33 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 14 04:23:33 np0005486808 ceph-osd[88375]: bdev(0x5597c64f4c00 /var/lib/ceph/osd/ceph-1/block) close
Oct 14 04:23:33 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:23:33 np0005486808 ceph-mgr[74543]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1859262966; not ready for session (expect reconnect)
Oct 14 04:23:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 14 04:23:33 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 14 04:23:33 np0005486808 ceph-mgr[74543]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 14 04:23:33 np0005486808 podman[88537]: 2025-10-14 08:23:33.93213337 +0000 UTC m=+0.168972767 container init 7c9ea9f8320e280b93a16a228b52a69f276509e2d6b81292b14ef7092d5d84a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_poincare, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 14 04:23:33 np0005486808 podman[88537]: 2025-10-14 08:23:33.943476421 +0000 UTC m=+0.180315778 container start 7c9ea9f8320e280b93a16a228b52a69f276509e2d6b81292b14ef7092d5d84a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_poincare, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 14 04:23:33 np0005486808 podman[88537]: 2025-10-14 08:23:33.947631761 +0000 UTC m=+0.184471118 container attach 7c9ea9f8320e280b93a16a228b52a69f276509e2d6b81292b14ef7092d5d84a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_poincare, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:23:33 np0005486808 nervous_poincare[88553]: 167 167
Oct 14 04:23:33 np0005486808 systemd[1]: libpod-7c9ea9f8320e280b93a16a228b52a69f276509e2d6b81292b14ef7092d5d84a2.scope: Deactivated successfully.
Oct 14 04:23:33 np0005486808 podman[88537]: 2025-10-14 08:23:33.951719118 +0000 UTC m=+0.188558485 container died 7c9ea9f8320e280b93a16a228b52a69f276509e2d6b81292b14ef7092d5d84a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_poincare, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 14 04:23:33 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:33 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:33 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct 14 04:23:33 np0005486808 systemd[1]: var-lib-containers-storage-overlay-e0cf54728af1fd922cff2bf5539ab52df2512c533d2e05b2fe590b391ff23da5-merged.mount: Deactivated successfully.
Oct 14 04:23:33 np0005486808 podman[88537]: 2025-10-14 08:23:33.997216585 +0000 UTC m=+0.234055962 container remove 7c9ea9f8320e280b93a16a228b52a69f276509e2d6b81292b14ef7092d5d84a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_poincare, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:23:34 np0005486808 systemd[1]: libpod-conmon-7c9ea9f8320e280b93a16a228b52a69f276509e2d6b81292b14ef7092d5d84a2.scope: Deactivated successfully.
Oct 14 04:23:34 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Oct 14 04:23:34 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 14 04:23:34 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e9 e9: 3 total, 1 up, 3 in
Oct 14 04:23:34 np0005486808 ceph-mon[74249]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/1859262966,v1:192.168.122.100:6803/1859262966] boot
Oct 14 04:23:34 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 1 up, 3 in
Oct 14 04:23:34 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 14 04:23:34 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 14 04:23:34 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 14 04:23:34 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 14 04:23:34 np0005486808 ceph-osd[87348]: osd.0 9 state: booting -> active
Oct 14 04:23:34 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 14 04:23:34 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 14 04:23:34 np0005486808 ceph-mgr[74543]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 14 04:23:34 np0005486808 ceph-mgr[74543]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: bdev(0x5597c64f4c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: bdev(0x5597c64f4c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: bdev(0x5597c64f4c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: bdev(0x5597c64f5400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: bdev(0x5597c64f5400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: bdev(0x5597c64f5400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: bluefs mount
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: bluefs mount shared_bdev_used = 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: RocksDB version: 7.9.2
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Git sha 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: DB SUMMARY
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: DB Session ID:  PT3ZTNECOOYGNNHK0Z0R
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: CURRENT file:  CURRENT
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: IDENTITY file:  IDENTITY
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                         Options.error_if_exists: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                       Options.create_if_missing: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                         Options.paranoid_checks: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                                     Options.env: 0x5597c64c5c70
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                                Options.info_log: 0x5597c56c28a0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.max_file_opening_threads: 16
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                              Options.statistics: (nil)
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                               Options.use_fsync: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                       Options.max_log_file_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                         Options.allow_fallocate: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.use_direct_reads: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.create_missing_column_families: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                              Options.db_log_dir: 
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                                 Options.wal_dir: db.wal
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.advise_random_on_open: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.write_buffer_manager: 0x5597c65ce460
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                            Options.rate_limiter: (nil)
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.unordered_write: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                               Options.row_cache: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                              Options.wal_filter: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.allow_ingest_behind: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.two_write_queues: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.manual_wal_flush: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.wal_compression: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.atomic_flush: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                 Options.log_readahead_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.allow_data_in_errors: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.db_host_id: __hostname__
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.max_background_jobs: 4
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.max_background_compactions: -1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.max_subcompactions: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                          Options.max_open_files: -1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                          Options.bytes_per_sync: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.max_background_flushes: -1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Compression algorithms supported:
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: 	kZSTD supported: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: 	kXpressCompression supported: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: 	kBZip2Compression supported: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: 	kLZ4Compression supported: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: 	kZlibCompression supported: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: 	kLZ4HCCompression supported: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: 	kSnappyCompression supported: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5597c56c22c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5597c56af1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5597c56c22c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5597c56af1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5597c56c22c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5597c56af1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5597c56c22c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5597c56af1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5597c56c22c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5597c56af1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5597c56c22c0)
                                                 cache_index_and_filter_blocks: 1
                                                 cache_index_and_filter_blocks_with_high_priority: 0
                                                 pin_l0_filter_and_index_blocks_in_cache: 0
                                                 pin_top_level_index_and_filter: 1
                                                 index_type: 0
                                                 data_block_index_type: 0
                                                 index_shortening: 1
                                                 data_block_hash_table_util_ratio: 0.750000
                                                 checksum: 4
                                                 no_block_cache: 0
                                                 block_cache: 0x5597c56af1f0
                                                 block_cache_name: BinnedLRUCache
                                                 block_cache_options:
                                                   capacity : 483183820
                                                   num_shard_bits : 4
                                                   strict_capacity_limit : 0
                                                   high_pri_pool_ratio: 0.000
                                                 block_cache_compressed: (nil)
                                                 persistent_cache: (nil)
                                                 block_size: 4096
                                                 block_size_deviation: 10
                                                 block_restart_interval: 16
                                                 index_block_restart_interval: 1
                                                 metadata_block_size: 4096
                                                 partition_filters: 0
                                                 use_delta_encoding: 1
                                                 filter_policy: bloomfilter
                                                 whole_key_filtering: 1
                                                 verify_compression: 0
                                                 read_amp_bytes_per_bit: 0
                                                 format_version: 5
                                                 enable_index_compression: 1
                                                 block_align: 0
                                                 max_auto_readahead_size: 262144
                                                 prepopulate_block_cache: 0
                                                 initial_auto_readahead_size: 8192
                                                 num_file_reads_for_auto_readahead: 2
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5597c56c22c0)
                                                 cache_index_and_filter_blocks: 1
                                                 cache_index_and_filter_blocks_with_high_priority: 0
                                                 pin_l0_filter_and_index_blocks_in_cache: 0
                                                 pin_top_level_index_and_filter: 1
                                                 index_type: 0
                                                 data_block_index_type: 0
                                                 index_shortening: 1
                                                 data_block_hash_table_util_ratio: 0.750000
                                                 checksum: 4
                                                 no_block_cache: 0
                                                 block_cache: 0x5597c56af1f0
                                                 block_cache_name: BinnedLRUCache
                                                 block_cache_options:
                                                   capacity : 483183820
                                                   num_shard_bits : 4
                                                   strict_capacity_limit : 0
                                                   high_pri_pool_ratio: 0.000
                                                 block_cache_compressed: (nil)
                                                 persistent_cache: (nil)
                                                 block_size: 4096
                                                 block_size_deviation: 10
                                                 block_restart_interval: 16
                                                 index_block_restart_interval: 1
                                                 metadata_block_size: 4096
                                                 partition_filters: 0
                                                 use_delta_encoding: 1
                                                 filter_policy: bloomfilter
                                                 whole_key_filtering: 1
                                                 verify_compression: 0
                                                 read_amp_bytes_per_bit: 0
                                                 format_version: 5
                                                 enable_index_compression: 1
                                                 block_align: 0
                                                 max_auto_readahead_size: 262144
                                                 prepopulate_block_cache: 0
                                                 initial_auto_readahead_size: 8192
                                                 num_file_reads_for_auto_readahead: 2
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5597c56c2240)
                                                 cache_index_and_filter_blocks: 1
                                                 cache_index_and_filter_blocks_with_high_priority: 0
                                                 pin_l0_filter_and_index_blocks_in_cache: 0
                                                 pin_top_level_index_and_filter: 1
                                                 index_type: 0
                                                 data_block_index_type: 0
                                                 index_shortening: 1
                                                 data_block_hash_table_util_ratio: 0.750000
                                                 checksum: 4
                                                 no_block_cache: 0
                                                 block_cache: 0x5597c56af090
                                                 block_cache_name: BinnedLRUCache
                                                 block_cache_options:
                                                   capacity : 536870912
                                                   num_shard_bits : 4
                                                   strict_capacity_limit : 0
                                                   high_pri_pool_ratio: 0.000
                                                 block_cache_compressed: (nil)
                                                 persistent_cache: (nil)
                                                 block_size: 4096
                                                 block_size_deviation: 10
                                                 block_restart_interval: 16
                                                 index_block_restart_interval: 1
                                                 metadata_block_size: 4096
                                                 partition_filters: 0
                                                 use_delta_encoding: 1
                                                 filter_policy: bloomfilter
                                                 whole_key_filtering: 1
                                                 verify_compression: 0
                                                 read_amp_bytes_per_bit: 0
                                                 format_version: 5
                                                 enable_index_compression: 1
                                                 block_align: 0
                                                 max_auto_readahead_size: 262144
                                                 prepopulate_block_cache: 0
                                                 initial_auto_readahead_size: 8192
                                                 num_file_reads_for_auto_readahead: 2
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5597c56c2240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5597c56af090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5597c56c2240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5597c56af090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 5c650b44-995d-4c58-a3aa-69301287a6cf
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430214218982, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430214219375, "job": 1, "event": "recovery_finished"}
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: freelist init
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: freelist _read_cfg
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: bluefs umount
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: bdev(0x5597c64f5400 /var/lib/ceph/osd/ceph-1/block) close
Oct 14 04:23:34 np0005486808 podman[88781]: 2025-10-14 08:23:34.383252997 +0000 UTC m=+0.063465187 container create 7cf7bec33c92c011b91cdded0cf4cb4b21e49a72268673f4b1eeda8ddaae55e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-2-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:23:34 np0005486808 podman[88781]: 2025-10-14 08:23:34.352207956 +0000 UTC m=+0.032420196 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:23:34 np0005486808 systemd[1]: Started libpod-conmon-7cf7bec33c92c011b91cdded0cf4cb4b21e49a72268673f4b1eeda8ddaae55e7.scope.
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: bdev(0x5597c64f5400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: bdev(0x5597c64f5400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: bdev(0x5597c64f5400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: bluefs mount
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: bluefs mount shared_bdev_used = 4718592
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: RocksDB version: 7.9.2
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Git sha 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: DB SUMMARY
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: DB Session ID:  PT3ZTNECOOYGNNHK0Z0Q
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: CURRENT file:  CURRENT
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: IDENTITY file:  IDENTITY
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                         Options.error_if_exists: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                       Options.create_if_missing: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                         Options.paranoid_checks: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                                     Options.env: 0x5597c66763f0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                                Options.info_log: 0x5597c5988f20
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.max_file_opening_threads: 16
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                              Options.statistics: (nil)
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                               Options.use_fsync: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                       Options.max_log_file_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                         Options.allow_fallocate: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.use_direct_reads: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.create_missing_column_families: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                              Options.db_log_dir: 
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                                 Options.wal_dir: db.wal
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.advise_random_on_open: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.write_buffer_manager: 0x5597c65ce6e0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                            Options.rate_limiter: (nil)
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.unordered_write: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                               Options.row_cache: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                              Options.wal_filter: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.allow_ingest_behind: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.two_write_queues: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.manual_wal_flush: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.wal_compression: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.atomic_flush: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                 Options.log_readahead_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.allow_data_in_errors: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.db_host_id: __hostname__
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.max_background_jobs: 4
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.max_background_compactions: -1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.max_subcompactions: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                          Options.max_open_files: -1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                          Options.bytes_per_sync: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.max_background_flushes: -1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Compression algorithms supported:
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: #011kZSTD supported: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: #011kXpressCompression supported: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: #011kBZip2Compression supported: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: #011kLZ4Compression supported: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: #011kZlibCompression supported: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: #011kLZ4HCCompression supported: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: #011kSnappyCompression supported: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5597c56b9060)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5597c56af1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5597c56b9060)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5597c56af1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5597c56b9060)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5597c56af1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5597c56b9060)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5597c56af1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5597c56b9060)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5597c56af1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5597c56b9060)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5597c56af1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:34 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:34 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48b831cdb12de65c64a15bc8d57f6c35080439fd7fc33541139a348a06c10036/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5597c56b9060)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5597c56af1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:34 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48b831cdb12de65c64a15bc8d57f6c35080439fd7fc33541139a348a06c10036/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:34 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48b831cdb12de65c64a15bc8d57f6c35080439fd7fc33541139a348a06c10036/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5597c56b9040)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5597c56af090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:34 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48b831cdb12de65c64a15bc8d57f6c35080439fd7fc33541139a348a06c10036/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:34 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48b831cdb12de65c64a15bc8d57f6c35080439fd7fc33541139a348a06c10036/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5597c56b9040)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5597c56af090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5597c56b9040)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5597c56af090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 5c650b44-995d-4c58-a3aa-69301287a6cf
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430214519575, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 14 04:23:34 np0005486808 podman[88781]: 2025-10-14 08:23:34.53029198 +0000 UTC m=+0.210504230 container init 7cf7bec33c92c011b91cdded0cf4cb4b21e49a72268673f4b1eeda8ddaae55e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-2-activate-test, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430214533572, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430214, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5c650b44-995d-4c58-a3aa-69301287a6cf", "db_session_id": "PT3ZTNECOOYGNNHK0Z0Q", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430214537444, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430214, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5c650b44-995d-4c58-a3aa-69301287a6cf", "db_session_id": "PT3ZTNECOOYGNNHK0Z0Q", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct 14 04:23:34 np0005486808 podman[88781]: 2025-10-14 08:23:34.541510058 +0000 UTC m=+0.221722208 container start 7cf7bec33c92c011b91cdded0cf4cb4b21e49a72268673f4b1eeda8ddaae55e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-2-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430214542170, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430214, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5c650b44-995d-4c58-a3aa-69301287a6cf", "db_session_id": "PT3ZTNECOOYGNNHK0Z0Q", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430214544889, "job": 1, "event": "recovery_finished"}
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct 14 04:23:34 np0005486808 podman[88781]: 2025-10-14 08:23:34.553819602 +0000 UTC m=+0.234031802 container attach 7cf7bec33c92c011b91cdded0cf4cb4b21e49a72268673f4b1eeda8ddaae55e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-2-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5597c581dc00
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: DB pointer 0x5597c65b7a00
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5597c56af1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5597c56af1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5597c56af1f0#2 capacity: 460.80 MB usag
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: _get_class not permitted to load lua
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: _get_class not permitted to load sdk
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: _get_class not permitted to load test_remote_reads
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: osd.1 0 load_pgs
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: osd.1 0 load_pgs opened 0 pgs
Oct 14 04:23:34 np0005486808 ceph-osd[88375]: osd.1 0 log_to_monitors true
Oct 14 04:23:34 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-1[88371]: 2025-10-14T08:23:34.579+0000 7f5bad81a740 -1 osd.1 0 log_to_monitors true
Oct 14 04:23:34 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Oct 14 04:23:34 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/15411749,v1:192.168.122.100:6807/15411749]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct 14 04:23:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 14 04:23:34 np0005486808 ceph-mgr[74543]: [devicehealth INFO root] creating mgr pool
Oct 14 04:23:34 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Oct 14 04:23:34 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct 14 04:23:34 np0005486808 ceph-mon[74249]: Deploying daemon osd.2 on compute-0
Oct 14 04:23:34 np0005486808 ceph-mon[74249]: OSD bench result of 8314.906830 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 14 04:23:34 np0005486808 ceph-mon[74249]: osd.0 [v2:192.168.122.100:6802/1859262966,v1:192.168.122.100:6803/1859262966] boot
Oct 14 04:23:34 np0005486808 ceph-mon[74249]: from='osd.1 [v2:192.168.122.100:6806/15411749,v1:192.168.122.100:6807/15411749]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct 14 04:23:34 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct 14 04:23:35 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Oct 14 04:23:35 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 14 04:23:35 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/15411749,v1:192.168.122.100:6807/15411749]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct 14 04:23:35 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct 14 04:23:35 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Oct 14 04:23:35 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e10 crush map has features 3314933000852226048, adjusting msgr requires
Oct 14 04:23:35 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Oct 14 04:23:35 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Oct 14 04:23:35 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Oct 14 04:23:35 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-2-activate-test[88797]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct 14 04:23:35 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-2-activate-test[88797]:                            [--no-systemd] [--no-tmpfs]
Oct 14 04:23:35 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-2-activate-test[88797]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct 14 04:23:35 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Oct 14 04:23:35 np0005486808 ceph-mgr[74543]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 14 04:23:35 np0005486808 ceph-mgr[74543]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 14 04:23:35 np0005486808 ceph-osd[87348]: osd.0 10 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct 14 04:23:35 np0005486808 ceph-osd[87348]: osd.0 10 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Oct 14 04:23:35 np0005486808 ceph-osd[87348]: osd.0 10 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct 14 04:23:35 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 10 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=10) [0] r=0 lpr=10 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:23:35 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Oct 14 04:23:35 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/15411749,v1:192.168.122.100:6807/15411749]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 14 04:23:35 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e10 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Oct 14 04:23:35 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 14 04:23:35 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 14 04:23:35 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 14 04:23:35 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 14 04:23:35 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Oct 14 04:23:35 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct 14 04:23:35 np0005486808 systemd[1]: libpod-7cf7bec33c92c011b91cdded0cf4cb4b21e49a72268673f4b1eeda8ddaae55e7.scope: Deactivated successfully.
Oct 14 04:23:35 np0005486808 podman[88781]: 2025-10-14 08:23:35.154656296 +0000 UTC m=+0.834868446 container died 7cf7bec33c92c011b91cdded0cf4cb4b21e49a72268673f4b1eeda8ddaae55e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-2-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 14 04:23:35 np0005486808 systemd[1]: var-lib-containers-storage-overlay-48b831cdb12de65c64a15bc8d57f6c35080439fd7fc33541139a348a06c10036-merged.mount: Deactivated successfully.
Oct 14 04:23:35 np0005486808 podman[88781]: 2025-10-14 08:23:35.229716199 +0000 UTC m=+0.909928389 container remove 7cf7bec33c92c011b91cdded0cf4cb4b21e49a72268673f4b1eeda8ddaae55e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-2-activate-test, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:23:35 np0005486808 systemd[1]: libpod-conmon-7cf7bec33c92c011b91cdded0cf4cb4b21e49a72268673f4b1eeda8ddaae55e7.scope: Deactivated successfully.
Oct 14 04:23:35 np0005486808 systemd[1]: Reloading.
Oct 14 04:23:35 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct 14 04:23:35 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct 14 04:23:35 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:23:35 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:23:35 np0005486808 systemd[1]: Reloading.
Oct 14 04:23:35 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:23:35 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:23:35 np0005486808 ceph-mon[74249]: from='osd.1 [v2:192.168.122.100:6806/15411749,v1:192.168.122.100:6807/15411749]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct 14 04:23:35 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct 14 04:23:35 np0005486808 ceph-mon[74249]: from='osd.1 [v2:192.168.122.100:6806/15411749,v1:192.168.122.100:6807/15411749]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 14 04:23:35 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct 14 04:23:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Oct 14 04:23:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/15411749,v1:192.168.122.100:6807/15411749]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 14 04:23:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct 14 04:23:36 np0005486808 ceph-osd[88375]: osd.1 0 done with init, starting boot process
Oct 14 04:23:36 np0005486808 ceph-osd[88375]: osd.1 0 start_boot
Oct 14 04:23:36 np0005486808 ceph-osd[88375]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct 14 04:23:36 np0005486808 ceph-osd[88375]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct 14 04:23:36 np0005486808 ceph-osd[88375]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct 14 04:23:36 np0005486808 ceph-osd[88375]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct 14 04:23:36 np0005486808 ceph-osd[88375]: osd.1 0  bench count 12288000 bsize 4 KiB
Oct 14 04:23:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Oct 14 04:23:36 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Oct 14 04:23:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 14 04:23:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 14 04:23:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 14 04:23:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 14 04:23:36 np0005486808 ceph-mgr[74543]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 14 04:23:36 np0005486808 ceph-mgr[74543]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 14 04:23:36 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 11 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=11) [] r=-1 lpr=11 pi=[10,11)/0 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:23:36 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 11 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=11) [] r=-1 lpr=11 pi=[10,11)/0 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 14 04:23:36 np0005486808 ceph-mgr[74543]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/15411749; not ready for session (expect reconnect)
Oct 14 04:23:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 14 04:23:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 14 04:23:36 np0005486808 ceph-mgr[74543]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 14 04:23:36 np0005486808 systemd[1]: Starting Ceph osd.2 for c49aadb6-9b04-5cb1-8f5f-4c91676c568e...
Oct 14 04:23:36 np0005486808 podman[89203]: 2025-10-14 08:23:36.503393437 +0000 UTC m=+0.074013019 container create 5b67662eaed5814da2a0518a245edf1cc6c662269bf0165665856c7ff9057b60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-2-activate, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:23:36 np0005486808 python3[89191]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:23:36 np0005486808 podman[89203]: 2025-10-14 08:23:36.470956882 +0000 UTC m=+0.041576504 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:23:36 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:23:36 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b0e0462c7aa996cc123ff5733699ed736e38d8d398f84f3d9c6a5b6ca305443/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:36 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b0e0462c7aa996cc123ff5733699ed736e38d8d398f84f3d9c6a5b6ca305443/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:36 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b0e0462c7aa996cc123ff5733699ed736e38d8d398f84f3d9c6a5b6ca305443/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:36 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b0e0462c7aa996cc123ff5733699ed736e38d8d398f84f3d9c6a5b6ca305443/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:36 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b0e0462c7aa996cc123ff5733699ed736e38d8d398f84f3d9c6a5b6ca305443/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v34: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct 14 04:23:36 np0005486808 podman[89203]: 2025-10-14 08:23:36.644865256 +0000 UTC m=+0.215484838 container init 5b67662eaed5814da2a0518a245edf1cc6c662269bf0165665856c7ff9057b60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-2-activate, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 14 04:23:36 np0005486808 podman[89203]: 2025-10-14 08:23:36.654423805 +0000 UTC m=+0.225043357 container start 5b67662eaed5814da2a0518a245edf1cc6c662269bf0165665856c7ff9057b60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-2-activate, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Oct 14 04:23:36 np0005486808 podman[89221]: 2025-10-14 08:23:36.655399048 +0000 UTC m=+0.075437103 container create 72f611e6f53dc0de1f2dbb587e4fc9aa6845b0fae08cd755bc9b0fdfc0d1215e (image=quay.io/ceph/ceph:v18, name=intelligent_shamir, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 14 04:23:36 np0005486808 podman[89203]: 2025-10-14 08:23:36.659042835 +0000 UTC m=+0.229662387 container attach 5b67662eaed5814da2a0518a245edf1cc6c662269bf0165665856c7ff9057b60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-2-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 14 04:23:36 np0005486808 systemd[1]: Started libpod-conmon-72f611e6f53dc0de1f2dbb587e4fc9aa6845b0fae08cd755bc9b0fdfc0d1215e.scope.
Oct 14 04:23:36 np0005486808 podman[89221]: 2025-10-14 08:23:36.6224312 +0000 UTC m=+0.042469245 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:23:36 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:23:36 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0114b4614b5d55362412c1ed45ddf4a673cae1fb1c513f46e2eef9f4e7ebdd4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:36 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0114b4614b5d55362412c1ed45ddf4a673cae1fb1c513f46e2eef9f4e7ebdd4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:36 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0114b4614b5d55362412c1ed45ddf4a673cae1fb1c513f46e2eef9f4e7ebdd4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:36 np0005486808 podman[89221]: 2025-10-14 08:23:36.747643702 +0000 UTC m=+0.167681807 container init 72f611e6f53dc0de1f2dbb587e4fc9aa6845b0fae08cd755bc9b0fdfc0d1215e (image=quay.io/ceph/ceph:v18, name=intelligent_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Oct 14 04:23:36 np0005486808 podman[89221]: 2025-10-14 08:23:36.757653081 +0000 UTC m=+0.177691306 container start 72f611e6f53dc0de1f2dbb587e4fc9aa6845b0fae08cd755bc9b0fdfc0d1215e (image=quay.io/ceph/ceph:v18, name=intelligent_shamir, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 14 04:23:36 np0005486808 podman[89221]: 2025-10-14 08:23:36.764148736 +0000 UTC m=+0.184186791 container attach 72f611e6f53dc0de1f2dbb587e4fc9aa6845b0fae08cd755bc9b0fdfc0d1215e (image=quay.io/ceph/ceph:v18, name=intelligent_shamir, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 14 04:23:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:23:37 np0005486808 ceph-mon[74249]: from='osd.1 [v2:192.168.122.100:6806/15411749,v1:192.168.122.100:6807/15411749]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 14 04:23:37 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct 14 04:23:37 np0005486808 ceph-mgr[74543]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/15411749; not ready for session (expect reconnect)
Oct 14 04:23:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 14 04:23:37 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 14 04:23:37 np0005486808 ceph-mgr[74543]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 14 04:23:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct 14 04:23:37 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4088185538' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 14 04:23:37 np0005486808 intelligent_shamir[89241]: 
Oct 14 04:23:37 np0005486808 intelligent_shamir[89241]: {"fsid":"c49aadb6-9b04-5cb1-8f5f-4c91676c568e","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":110,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":11,"num_osds":3,"num_up_osds":1,"osd_up_since":1760430214,"num_in_osds":3,"osd_in_since":1760430197,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"unknown","count":1}],"num_pgs":1,"num_pools":1,"num_objects":0,"data_bytes":0,"bytes_used":446971904,"bytes_avail":21023670272,"bytes_total":21470642176,"unknown_pgs_ratio":1},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-14T08:23:34.602537+0000","services":{}},"progress_events":{}}
Oct 14 04:23:37 np0005486808 systemd[1]: libpod-72f611e6f53dc0de1f2dbb587e4fc9aa6845b0fae08cd755bc9b0fdfc0d1215e.scope: Deactivated successfully.
Oct 14 04:23:37 np0005486808 podman[89273]: 2025-10-14 08:23:37.427952613 +0000 UTC m=+0.027163610 container died 72f611e6f53dc0de1f2dbb587e4fc9aa6845b0fae08cd755bc9b0fdfc0d1215e (image=quay.io/ceph/ceph:v18, name=intelligent_shamir, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:23:37 np0005486808 systemd[1]: var-lib-containers-storage-overlay-f0114b4614b5d55362412c1ed45ddf4a673cae1fb1c513f46e2eef9f4e7ebdd4-merged.mount: Deactivated successfully.
Oct 14 04:23:37 np0005486808 podman[89273]: 2025-10-14 08:23:37.513395434 +0000 UTC m=+0.112606391 container remove 72f611e6f53dc0de1f2dbb587e4fc9aa6845b0fae08cd755bc9b0fdfc0d1215e (image=quay.io/ceph/ceph:v18, name=intelligent_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 14 04:23:37 np0005486808 systemd[1]: libpod-conmon-72f611e6f53dc0de1f2dbb587e4fc9aa6845b0fae08cd755bc9b0fdfc0d1215e.scope: Deactivated successfully.
Oct 14 04:23:37 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-2-activate[89219]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 14 04:23:37 np0005486808 bash[89203]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 14 04:23:37 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-2-activate[89219]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Oct 14 04:23:37 np0005486808 bash[89203]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Oct 14 04:23:37 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-2-activate[89219]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Oct 14 04:23:37 np0005486808 bash[89203]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Oct 14 04:23:37 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-2-activate[89219]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct 14 04:23:37 np0005486808 bash[89203]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct 14 04:23:37 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-2-activate[89219]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct 14 04:23:37 np0005486808 bash[89203]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct 14 04:23:37 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-2-activate[89219]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 14 04:23:37 np0005486808 bash[89203]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 14 04:23:37 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-2-activate[89219]: --> ceph-volume raw activate successful for osd ID: 2
Oct 14 04:23:37 np0005486808 bash[89203]: --> ceph-volume raw activate successful for osd ID: 2
Oct 14 04:23:37 np0005486808 systemd[1]: libpod-5b67662eaed5814da2a0518a245edf1cc6c662269bf0165665856c7ff9057b60.scope: Deactivated successfully.
Oct 14 04:23:37 np0005486808 podman[89203]: 2025-10-14 08:23:37.72712002 +0000 UTC m=+1.297739622 container died 5b67662eaed5814da2a0518a245edf1cc6c662269bf0165665856c7ff9057b60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-2-activate, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:23:37 np0005486808 systemd[1]: libpod-5b67662eaed5814da2a0518a245edf1cc6c662269bf0165665856c7ff9057b60.scope: Consumed 1.072s CPU time.
Oct 14 04:23:37 np0005486808 systemd[1]: var-lib-containers-storage-overlay-8b0e0462c7aa996cc123ff5733699ed736e38d8d398f84f3d9c6a5b6ca305443-merged.mount: Deactivated successfully.
Oct 14 04:23:37 np0005486808 podman[89203]: 2025-10-14 08:23:37.815281916 +0000 UTC m=+1.385901468 container remove 5b67662eaed5814da2a0518a245edf1cc6c662269bf0165665856c7ff9057b60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 14 04:23:38 np0005486808 python3[89449]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:23:38 np0005486808 podman[89477]: 2025-10-14 08:23:38.128738315 +0000 UTC m=+0.056015039 container create eaa7bcd0bf293d60428bf142ed3585974b817bbae027cfc89a6d97aae3b95b58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Oct 14 04:23:38 np0005486808 ceph-mgr[74543]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/15411749; not ready for session (expect reconnect)
Oct 14 04:23:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 14 04:23:38 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 14 04:23:38 np0005486808 ceph-mgr[74543]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 14 04:23:38 np0005486808 podman[89487]: 2025-10-14 08:23:38.183879382 +0000 UTC m=+0.076737134 container create 3ed34ae004c121e3cfe78d99d57949c6004365a0cb012ca1863af02721c463a9 (image=quay.io/ceph/ceph:v18, name=agitated_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 14 04:23:38 np0005486808 podman[89477]: 2025-10-14 08:23:38.100596772 +0000 UTC m=+0.027873536 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:23:38 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d3e9c00d3148b2c375864b3e408559f6364195c850c6894a4c7107c0d0b583d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:38 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d3e9c00d3148b2c375864b3e408559f6364195c850c6894a4c7107c0d0b583d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:38 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d3e9c00d3148b2c375864b3e408559f6364195c850c6894a4c7107c0d0b583d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:38 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d3e9c00d3148b2c375864b3e408559f6364195c850c6894a4c7107c0d0b583d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:38 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d3e9c00d3148b2c375864b3e408559f6364195c850c6894a4c7107c0d0b583d/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:38 np0005486808 systemd[1]: Started libpod-conmon-3ed34ae004c121e3cfe78d99d57949c6004365a0cb012ca1863af02721c463a9.scope.
Oct 14 04:23:38 np0005486808 podman[89477]: 2025-10-14 08:23:38.225988128 +0000 UTC m=+0.153264832 container init eaa7bcd0bf293d60428bf142ed3585974b817bbae027cfc89a6d97aae3b95b58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 04:23:38 np0005486808 podman[89477]: 2025-10-14 08:23:38.234715806 +0000 UTC m=+0.161992510 container start eaa7bcd0bf293d60428bf142ed3585974b817bbae027cfc89a6d97aae3b95b58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:23:38 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:23:38 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee10f5b7e97d92876491a896811530b16edfc45571b0861bebbc293948c7d08a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:38 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee10f5b7e97d92876491a896811530b16edfc45571b0861bebbc293948c7d08a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:38 np0005486808 bash[89477]: eaa7bcd0bf293d60428bf142ed3585974b817bbae027cfc89a6d97aae3b95b58
Oct 14 04:23:38 np0005486808 podman[89487]: 2025-10-14 08:23:38.155457273 +0000 UTC m=+0.048315125 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:23:38 np0005486808 systemd[1]: Started Ceph osd.2 for c49aadb6-9b04-5cb1-8f5f-4c91676c568e.
Oct 14 04:23:38 np0005486808 podman[89487]: 2025-10-14 08:23:38.266380813 +0000 UTC m=+0.159238585 container init 3ed34ae004c121e3cfe78d99d57949c6004365a0cb012ca1863af02721c463a9 (image=quay.io/ceph/ceph:v18, name=agitated_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 14 04:23:38 np0005486808 podman[89487]: 2025-10-14 08:23:38.292930827 +0000 UTC m=+0.185788579 container start 3ed34ae004c121e3cfe78d99d57949c6004365a0cb012ca1863af02721c463a9 (image=quay.io/ceph/ceph:v18, name=agitated_robinson, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:23:38 np0005486808 ceph-osd[89514]: set uid:gid to 167:167 (ceph:ceph)
Oct 14 04:23:38 np0005486808 ceph-osd[89514]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Oct 14 04:23:38 np0005486808 ceph-osd[89514]: pidfile_write: ignore empty --pid-file
Oct 14 04:23:38 np0005486808 ceph-osd[89514]: bdev(0x55b3ca3f1800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 14 04:23:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:23:38 np0005486808 ceph-osd[89514]: bdev(0x55b3ca3f1800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 14 04:23:38 np0005486808 ceph-osd[89514]: bdev(0x55b3ca3f1800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 14 04:23:38 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 14 04:23:38 np0005486808 ceph-osd[89514]: bdev(0x55b3cb229000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 14 04:23:38 np0005486808 ceph-osd[89514]: bdev(0x55b3cb229000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 14 04:23:38 np0005486808 ceph-osd[89514]: bdev(0x55b3cb229000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 14 04:23:38 np0005486808 ceph-osd[89514]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct 14 04:23:38 np0005486808 ceph-osd[89514]: bdev(0x55b3cb229000 /var/lib/ceph/osd/ceph-2/block) close
Oct 14 04:23:38 np0005486808 podman[89487]: 2025-10-14 08:23:38.308045988 +0000 UTC m=+0.200903770 container attach 3ed34ae004c121e3cfe78d99d57949c6004365a0cb012ca1863af02721c463a9 (image=quay.io/ceph/ceph:v18, name=agitated_robinson, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 04:23:38 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:23:38 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:38 np0005486808 ceph-osd[89514]: bdev(0x55b3ca3f1800 /var/lib/ceph/osd/ceph-2/block) close
Oct 14 04:23:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v35: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct 14 04:23:38 np0005486808 ceph-osd[88375]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 28.359 iops: 7259.912 elapsed_sec: 0.413
Oct 14 04:23:38 np0005486808 ceph-osd[88375]: log_channel(cluster) log [WRN] : OSD bench result of 7259.912129 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 14 04:23:38 np0005486808 ceph-osd[88375]: osd.1 0 waiting for initial osdmap
Oct 14 04:23:38 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-1[88371]: 2025-10-14T08:23:38.821+0000 7f5ba979a640 -1 osd.1 0 waiting for initial osdmap
Oct 14 04:23:38 np0005486808 ceph-osd[89514]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Oct 14 04:23:38 np0005486808 ceph-osd[88375]: osd.1 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct 14 04:23:38 np0005486808 ceph-osd[88375]: osd.1 11 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Oct 14 04:23:38 np0005486808 ceph-osd[88375]: osd.1 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct 14 04:23:38 np0005486808 ceph-osd[88375]: osd.1 11 check_osdmap_features require_osd_release unknown -> reef
Oct 14 04:23:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 14 04:23:38 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2796122023' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 14 04:23:38 np0005486808 ceph-osd[89514]: load: jerasure load: lrc 
Oct 14 04:23:38 np0005486808 ceph-osd[89514]: bdev(0x55b3cb229c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 14 04:23:38 np0005486808 ceph-osd[89514]: bdev(0x55b3cb229c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 14 04:23:38 np0005486808 ceph-osd[89514]: bdev(0x55b3cb229c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 14 04:23:38 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 14 04:23:38 np0005486808 ceph-osd[89514]: bdev(0x55b3cb229c00 /var/lib/ceph/osd/ceph-2/block) close
Oct 14 04:23:38 np0005486808 ceph-osd[88375]: osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 14 04:23:38 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-1[88371]: 2025-10-14T08:23:38.852+0000 7f5ba4dc2640 -1 osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 14 04:23:38 np0005486808 ceph-osd[88375]: osd.1 11 set_numa_affinity not setting numa affinity
Oct 14 04:23:38 np0005486808 ceph-osd[88375]: osd.1 11 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial
Oct 14 04:23:38 np0005486808 podman[89699]: 2025-10-14 08:23:38.970292099 +0000 UTC m=+0.050324033 container create 298a0b774c09a3b88a8f273f361dbc9ef3e2be3002bfb2b829571c6944267622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Oct 14 04:23:39 np0005486808 systemd[1]: Started libpod-conmon-298a0b774c09a3b88a8f273f361dbc9ef3e2be3002bfb2b829571c6944267622.scope.
Oct 14 04:23:39 np0005486808 podman[89699]: 2025-10-14 08:23:38.945825095 +0000 UTC m=+0.025857039 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:23:39 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:23:39 np0005486808 podman[89699]: 2025-10-14 08:23:39.067524082 +0000 UTC m=+0.147556026 container init 298a0b774c09a3b88a8f273f361dbc9ef3e2be3002bfb2b829571c6944267622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hertz, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:23:39 np0005486808 podman[89699]: 2025-10-14 08:23:39.078938535 +0000 UTC m=+0.158970459 container start 298a0b774c09a3b88a8f273f361dbc9ef3e2be3002bfb2b829571c6944267622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hertz, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 04:23:39 np0005486808 intelligent_hertz[89716]: 167 167
Oct 14 04:23:39 np0005486808 systemd[1]: libpod-298a0b774c09a3b88a8f273f361dbc9ef3e2be3002bfb2b829571c6944267622.scope: Deactivated successfully.
Oct 14 04:23:39 np0005486808 podman[89699]: 2025-10-14 08:23:39.085506822 +0000 UTC m=+0.165538746 container attach 298a0b774c09a3b88a8f273f361dbc9ef3e2be3002bfb2b829571c6944267622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hertz, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:23:39 np0005486808 podman[89699]: 2025-10-14 08:23:39.085782358 +0000 UTC m=+0.165814272 container died 298a0b774c09a3b88a8f273f361dbc9ef3e2be3002bfb2b829571c6944267622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: bdev(0x55b3cb229c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: bdev(0x55b3cb229c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 14 04:23:39 np0005486808 systemd[1]: var-lib-containers-storage-overlay-703b87f4e8214686d141797865938bf5dae79abbc17b366869c3305e368a4349-merged.mount: Deactivated successfully.
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: bdev(0x55b3cb229c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: bdev(0x55b3cb229c00 /var/lib/ceph/osd/ceph-2/block) close
Oct 14 04:23:39 np0005486808 podman[89699]: 2025-10-14 08:23:39.125083297 +0000 UTC m=+0.205115221 container remove 298a0b774c09a3b88a8f273f361dbc9ef3e2be3002bfb2b829571c6944267622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hertz, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:23:39 np0005486808 systemd[1]: libpod-conmon-298a0b774c09a3b88a8f273f361dbc9ef3e2be3002bfb2b829571c6944267622.scope: Deactivated successfully.
Oct 14 04:23:39 np0005486808 ceph-mgr[74543]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/15411749; not ready for session (expect reconnect)
Oct 14 04:23:39 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 14 04:23:39 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 14 04:23:39 np0005486808 ceph-mgr[74543]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 14 04:23:39 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:39 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:39 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/2796122023' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 14 04:23:39 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Oct 14 04:23:39 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2796122023' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 14 04:23:39 np0005486808 podman[89743]: 2025-10-14 08:23:39.333161868 +0000 UTC m=+0.066794717 container create b04cc313df6edd26b8cc78b6b4a9e879d6a5678a897d21d22ea7ede4e3e57b62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_dhawan, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 14 04:23:39 np0005486808 agitated_robinson[89510]: pool 'vms' created
Oct 14 04:23:39 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e12 e12: 3 total, 2 up, 3 in
Oct 14 04:23:39 np0005486808 ceph-mon[74249]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/15411749,v1:192.168.122.100:6807/15411749] boot
Oct 14 04:23:39 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 2 up, 3 in
Oct 14 04:23:39 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 14 04:23:39 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 14 04:23:39 np0005486808 ceph-mgr[74543]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 14 04:23:39 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 14 04:23:39 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 14 04:23:39 np0005486808 ceph-osd[88375]: osd.1 12 state: booting -> active
Oct 14 04:23:39 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 12 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:23:39 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 12 pg[2.0( empty local-lis/les=0/0 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=12) [0] r=0 lpr=12 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:23:39 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 12 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=-1 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [1], acting [] -> [1], acting_primary ? -> 1, up_primary ? -> 1, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:23:39 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 12 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=-1 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 14 04:23:39 np0005486808 systemd[1]: libpod-3ed34ae004c121e3cfe78d99d57949c6004365a0cb012ca1863af02721c463a9.scope: Deactivated successfully.
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: bdev(0x55b3cb229c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: bdev(0x55b3cb229c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: bdev(0x55b3cb229c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: bdev(0x55b3cb40c400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: bdev(0x55b3cb40c400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: bdev(0x55b3cb40c400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: bluefs mount
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: bluefs mount shared_bdev_used = 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 14 04:23:39 np0005486808 systemd[1]: Started libpod-conmon-b04cc313df6edd26b8cc78b6b4a9e879d6a5678a897d21d22ea7ede4e3e57b62.scope.
Oct 14 04:23:39 np0005486808 podman[89743]: 2025-10-14 08:23:39.305008395 +0000 UTC m=+0.038641324 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: RocksDB version: 7.9.2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Git sha 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: DB SUMMARY
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: DB Session ID:  VQCS2IX5GD0WBEOQJMAR
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: CURRENT file:  CURRENT
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: IDENTITY file:  IDENTITY
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                         Options.error_if_exists: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                       Options.create_if_missing: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                         Options.paranoid_checks: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                                     Options.env: 0x55b3cb27b2d0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                                Options.info_log: 0x55b3ca478800
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.max_file_opening_threads: 16
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                              Options.statistics: (nil)
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                               Options.use_fsync: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                       Options.max_log_file_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                         Options.allow_fallocate: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.use_direct_reads: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.create_missing_column_families: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                              Options.db_log_dir: 
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                                 Options.wal_dir: db.wal
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.advise_random_on_open: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.write_buffer_manager: 0x55b3cb386460
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                            Options.rate_limiter: (nil)
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.unordered_write: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                               Options.row_cache: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                              Options.wal_filter: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.allow_ingest_behind: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.two_write_queues: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.manual_wal_flush: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.wal_compression: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.atomic_flush: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                 Options.log_readahead_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.allow_data_in_errors: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.db_host_id: __hostname__
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.max_background_jobs: 4
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.max_background_compactions: -1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.max_subcompactions: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                          Options.max_open_files: -1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                          Options.bytes_per_sync: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.max_background_flushes: -1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Compression algorithms supported:
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: 	kZSTD supported: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: 	kXpressCompression supported: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: 	kBZip2Compression supported: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: 	kLZ4Compression supported: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: 	kZlibCompression supported: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: 	kLZ4HCCompression supported: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: 	kSnappyCompression supported: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b3ca478260)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b3ca4651f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b3ca478260)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b3ca4651f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b3ca478260)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b3ca4651f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b3ca478260)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b3ca4651f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:39 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a54340f64470c3a94df78393e0e8e31d0856f655cb1e5e66cde3816a8cba7ae1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:39 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a54340f64470c3a94df78393e0e8e31d0856f655cb1e5e66cde3816a8cba7ae1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:39 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a54340f64470c3a94df78393e0e8e31d0856f655cb1e5e66cde3816a8cba7ae1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b3ca478260)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b3ca4651f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:39 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a54340f64470c3a94df78393e0e8e31d0856f655cb1e5e66cde3816a8cba7ae1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:39 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b3ca478260)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b3ca4651f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b3ca478260)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b3ca4651f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b3ca478200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b3ca465090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b3ca478200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b3ca465090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b3ca478200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b3ca465090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: dfedc930-2b2d-4369-ad7b-e3ceefa691c6
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430219435122, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430219435352, "job": 1, "event": "recovery_finished"}
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: freelist init
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: freelist _read_cfg
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: bluefs umount
Oct 14 04:23:39 np0005486808 podman[89758]: 2025-10-14 08:23:39.451591937 +0000 UTC m=+0.056781557 container died 3ed34ae004c121e3cfe78d99d57949c6004365a0cb012ca1863af02721c463a9 (image=quay.io/ceph/ceph:v18, name=agitated_robinson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:23:39 np0005486808 podman[89743]: 2025-10-14 08:23:39.452478148 +0000 UTC m=+0.186110957 container init b04cc313df6edd26b8cc78b6b4a9e879d6a5678a897d21d22ea7ede4e3e57b62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_dhawan, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: bdev(0x55b3cb40c400 /var/lib/ceph/osd/ceph-2/block) close
Oct 14 04:23:39 np0005486808 podman[89743]: 2025-10-14 08:23:39.460288425 +0000 UTC m=+0.193921244 container start b04cc313df6edd26b8cc78b6b4a9e879d6a5678a897d21d22ea7ede4e3e57b62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 14 04:23:39 np0005486808 podman[89743]: 2025-10-14 08:23:39.464327341 +0000 UTC m=+0.197960160 container attach b04cc313df6edd26b8cc78b6b4a9e879d6a5678a897d21d22ea7ede4e3e57b62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_dhawan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:23:39 np0005486808 systemd[1]: var-lib-containers-storage-overlay-ee10f5b7e97d92876491a896811530b16edfc45571b0861bebbc293948c7d08a-merged.mount: Deactivated successfully.
Oct 14 04:23:39 np0005486808 podman[89758]: 2025-10-14 08:23:39.505448454 +0000 UTC m=+0.110638084 container remove 3ed34ae004c121e3cfe78d99d57949c6004365a0cb012ca1863af02721c463a9 (image=quay.io/ceph/ceph:v18, name=agitated_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Oct 14 04:23:39 np0005486808 systemd[1]: libpod-conmon-3ed34ae004c121e3cfe78d99d57949c6004365a0cb012ca1863af02721c463a9.scope: Deactivated successfully.
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: bdev(0x55b3cb40c400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: bdev(0x55b3cb40c400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: bdev(0x55b3cb40c400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: bluefs mount
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: bluefs mount shared_bdev_used = 4718592
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: RocksDB version: 7.9.2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Git sha 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: DB SUMMARY
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: DB Session ID:  VQCS2IX5GD0WBEOQJMAQ
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: CURRENT file:  CURRENT
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: IDENTITY file:  IDENTITY
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                         Options.error_if_exists: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                       Options.create_if_missing: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                         Options.paranoid_checks: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                                     Options.env: 0x55b3cb42c310
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                                Options.info_log: 0x55b3ca44afc0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.max_file_opening_threads: 16
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                              Options.statistics: (nil)
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                               Options.use_fsync: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                       Options.max_log_file_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                         Options.allow_fallocate: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.use_direct_reads: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.create_missing_column_families: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                              Options.db_log_dir: 
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                                 Options.wal_dir: db.wal
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.advise_random_on_open: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.write_buffer_manager: 0x55b3cb3866e0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                            Options.rate_limiter: (nil)
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.unordered_write: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                               Options.row_cache: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                              Options.wal_filter: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.allow_ingest_behind: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.two_write_queues: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.manual_wal_flush: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.wal_compression: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.atomic_flush: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                 Options.log_readahead_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.allow_data_in_errors: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.db_host_id: __hostname__
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.max_background_jobs: 4
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.max_background_compactions: -1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.max_subcompactions: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                          Options.max_open_files: -1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                          Options.bytes_per_sync: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.max_background_flushes: -1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Compression algorithms supported:
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: #011kZSTD supported: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: #011kXpressCompression supported: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: #011kBZip2Compression supported: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: #011kLZ4Compression supported: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: #011kZlibCompression supported: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: #011kLZ4HCCompression supported: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: #011kSnappyCompression supported: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b3cb2777a0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b3ca4651f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b3cb2777a0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b3ca4651f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b3cb2777a0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b3ca4651f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b3cb2777a0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b3ca4651f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b3cb2777a0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b3ca4651f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b3cb2777a0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b3ca4651f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b3cb2777a0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b3ca4651f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b3ca73efc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b3ca465090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b3ca73efc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b3ca465090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:           Options.merge_operator: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.compaction_filter: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.compaction_filter_factory: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.sst_partitioner_factory: None
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b3ca73efc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b3ca465090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.write_buffer_size: 16777216
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.max_write_buffer_number: 64
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.compression: LZ4
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:       Options.prefix_extractor: nullptr
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.num_levels: 7
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.compression_opts.level: 32767
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.compression_opts.strategy: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                  Options.compression_opts.enabled: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.arena_block_size: 1048576
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.disable_auto_compactions: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.inplace_update_support: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                           Options.bloom_locality: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                    Options.max_successive_merges: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.paranoid_file_checks: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.force_consistency_checks: 1
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.report_bg_io_stats: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                               Options.ttl: 2592000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                       Options.enable_blob_files: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                           Options.min_blob_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                          Options.blob_file_size: 268435456
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb:                Options.blob_file_starting_level: 0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: dfedc930-2b2d-4369-ad7b-e3ceefa691c6
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430219706273, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430219711909, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430219, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfedc930-2b2d-4369-ad7b-e3ceefa691c6", "db_session_id": "VQCS2IX5GD0WBEOQJMAQ", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430219716264, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430219, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfedc930-2b2d-4369-ad7b-e3ceefa691c6", "db_session_id": "VQCS2IX5GD0WBEOQJMAQ", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430219720218, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430219, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfedc930-2b2d-4369-ad7b-e3ceefa691c6", "db_session_id": "VQCS2IX5GD0WBEOQJMAQ", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430219721966, "job": 1, "event": "recovery_finished"}
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55b3cb439c00
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: DB pointer 0x55b3cb36fa00
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b3ca4651f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b3ca4651f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b3ca4651f0#2 capacity: 460.80 MB usag
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: _get_class not permitted to load lua
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: _get_class not permitted to load sdk
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: _get_class not permitted to load test_remote_reads
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: osd.2 0 load_pgs
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: osd.2 0 load_pgs opened 0 pgs
Oct 14 04:23:39 np0005486808 ceph-osd[89514]: osd.2 0 log_to_monitors true
Oct 14 04:23:39 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-2[89505]: 2025-10-14T08:23:39.766+0000 7f71aed55740 -1 osd.2 0 log_to_monitors true
Oct 14 04:23:39 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Oct 14 04:23:39 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2622463785,v1:192.168.122.100:6811/2622463785]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct 14 04:23:39 np0005486808 python3[90014]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:23:39 np0005486808 podman[90213]: 2025-10-14 08:23:39.889310444 +0000 UTC m=+0.050122618 container create 9791881dfaba2213c8d4318c8f645415c5f22581d6c246adeb6613847f217a10 (image=quay.io/ceph/ceph:v18, name=naughty_bhabha, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 14 04:23:39 np0005486808 systemd[1]: Started libpod-conmon-9791881dfaba2213c8d4318c8f645415c5f22581d6c246adeb6613847f217a10.scope.
Oct 14 04:23:39 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:23:39 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa6717d3a890db8c0e436bc3e2673f93b2507073b7b119b2d7046f7860698150/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:39 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa6717d3a890db8c0e436bc3e2673f93b2507073b7b119b2d7046f7860698150/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:39 np0005486808 podman[90213]: 2025-10-14 08:23:39.872120584 +0000 UTC m=+0.032932778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:23:39 np0005486808 podman[90213]: 2025-10-14 08:23:39.973340772 +0000 UTC m=+0.134152966 container init 9791881dfaba2213c8d4318c8f645415c5f22581d6c246adeb6613847f217a10 (image=quay.io/ceph/ceph:v18, name=naughty_bhabha, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 04:23:39 np0005486808 podman[90213]: 2025-10-14 08:23:39.984934919 +0000 UTC m=+0.145747093 container start 9791881dfaba2213c8d4318c8f645415c5f22581d6c246adeb6613847f217a10 (image=quay.io/ceph/ceph:v18, name=naughty_bhabha, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 14 04:23:39 np0005486808 podman[90213]: 2025-10-14 08:23:39.988487773 +0000 UTC m=+0.149299947 container attach 9791881dfaba2213c8d4318c8f645415c5f22581d6c246adeb6613847f217a10 (image=quay.io/ceph/ceph:v18, name=naughty_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 14 04:23:40 np0005486808 ceph-mon[74249]: OSD bench result of 7259.912129 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 14 04:23:40 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/2796122023' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 14 04:23:40 np0005486808 ceph-mon[74249]: osd.1 [v2:192.168.122.100:6806/15411749,v1:192.168.122.100:6807/15411749] boot
Oct 14 04:23:40 np0005486808 ceph-mon[74249]: from='osd.2 [v2:192.168.122.100:6810/2622463785,v1:192.168.122.100:6811/2622463785]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct 14 04:23:40 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Oct 14 04:23:40 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2622463785,v1:192.168.122.100:6811/2622463785]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct 14 04:23:40 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e13 e13: 3 total, 2 up, 3 in
Oct 14 04:23:40 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 2 up, 3 in
Oct 14 04:23:40 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Oct 14 04:23:40 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2622463785,v1:192.168.122.100:6811/2622463785]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 14 04:23:40 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e13 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-0,root=default}
Oct 14 04:23:40 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 14 04:23:40 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 14 04:23:40 np0005486808 ceph-mgr[74543]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 14 04:23:40 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=12/13 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:23:40 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 13 pg[2.0( empty local-lis/les=12/13 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=12) [0] r=0 lpr=12 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:23:40 np0005486808 ceph-mgr[74543]: [devicehealth INFO root] creating main.db for devicehealth
Oct 14 04:23:40 np0005486808 ceph-mgr[74543]: [devicehealth INFO root] Check health
Oct 14 04:23:40 np0005486808 priceless_dhawan[89776]: {
Oct 14 04:23:40 np0005486808 priceless_dhawan[89776]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 04:23:40 np0005486808 priceless_dhawan[89776]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:23:40 np0005486808 priceless_dhawan[89776]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 04:23:40 np0005486808 priceless_dhawan[89776]:        "osd_id": 2,
Oct 14 04:23:40 np0005486808 priceless_dhawan[89776]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:23:40 np0005486808 priceless_dhawan[89776]:        "type": "bluestore"
Oct 14 04:23:40 np0005486808 priceless_dhawan[89776]:    },
Oct 14 04:23:40 np0005486808 priceless_dhawan[89776]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 04:23:40 np0005486808 priceless_dhawan[89776]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:23:40 np0005486808 priceless_dhawan[89776]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 04:23:40 np0005486808 priceless_dhawan[89776]:        "osd_id": 1,
Oct 14 04:23:40 np0005486808 priceless_dhawan[89776]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:23:40 np0005486808 priceless_dhawan[89776]:        "type": "bluestore"
Oct 14 04:23:40 np0005486808 priceless_dhawan[89776]:    },
Oct 14 04:23:40 np0005486808 priceless_dhawan[89776]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 04:23:40 np0005486808 priceless_dhawan[89776]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:23:40 np0005486808 priceless_dhawan[89776]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 04:23:40 np0005486808 priceless_dhawan[89776]:        "osd_id": 0,
Oct 14 04:23:40 np0005486808 priceless_dhawan[89776]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:23:40 np0005486808 priceless_dhawan[89776]:        "type": "bluestore"
Oct 14 04:23:40 np0005486808 priceless_dhawan[89776]:    }
Oct 14 04:23:40 np0005486808 priceless_dhawan[89776]: }
Oct 14 04:23:40 np0005486808 ceph-mgr[74543]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Oct 14 04:23:40 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct 14 04:23:40 np0005486808 systemd[1]: libpod-b04cc313df6edd26b8cc78b6b4a9e879d6a5678a897d21d22ea7ede4e3e57b62.scope: Deactivated successfully.
Oct 14 04:23:40 np0005486808 podman[89743]: 2025-10-14 08:23:40.51954266 +0000 UTC m=+1.253175469 container died b04cc313df6edd26b8cc78b6b4a9e879d6a5678a897d21d22ea7ede4e3e57b62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_dhawan, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 04:23:40 np0005486808 systemd[1]: libpod-b04cc313df6edd26b8cc78b6b4a9e879d6a5678a897d21d22ea7ede4e3e57b62.scope: Consumed 1.035s CPU time.
Oct 14 04:23:40 np0005486808 systemd[1]: var-lib-containers-storage-overlay-a54340f64470c3a94df78393e0e8e31d0856f655cb1e5e66cde3816a8cba7ae1-merged.mount: Deactivated successfully.
Oct 14 04:23:40 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct 14 04:23:40 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct 14 04:23:40 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 14 04:23:40 np0005486808 podman[89743]: 2025-10-14 08:23:40.570383995 +0000 UTC m=+1.304016804 container remove b04cc313df6edd26b8cc78b6b4a9e879d6a5678a897d21d22ea7ede4e3e57b62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_dhawan, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:23:40 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 14 04:23:40 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/505127739' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 14 04:23:40 np0005486808 systemd[1]: libpod-conmon-b04cc313df6edd26b8cc78b6b4a9e879d6a5678a897d21d22ea7ede4e3e57b62.scope: Deactivated successfully.
Oct 14 04:23:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v38: 2 pgs: 2 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 14 04:23:40 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:23:40 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:40 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:23:40 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:40 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct 14 04:23:40 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct 14 04:23:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Oct 14 04:23:41 np0005486808 ceph-mon[74249]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 14 04:23:41 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.euuwqu(active, since 68s)
Oct 14 04:23:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2622463785,v1:192.168.122.100:6811/2622463785]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 14 04:23:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/505127739' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 14 04:23:41 np0005486808 ceph-osd[89514]: osd.2 0 done with init, starting boot process
Oct 14 04:23:41 np0005486808 ceph-osd[89514]: osd.2 0 start_boot
Oct 14 04:23:41 np0005486808 ceph-osd[89514]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct 14 04:23:41 np0005486808 ceph-osd[89514]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct 14 04:23:41 np0005486808 ceph-osd[89514]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct 14 04:23:41 np0005486808 ceph-osd[89514]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct 14 04:23:41 np0005486808 ceph-osd[89514]: osd.2 0  bench count 12288000 bsize 4 KiB
Oct 14 04:23:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Oct 14 04:23:41 np0005486808 naughty_bhabha[90229]: pool 'volumes' created
Oct 14 04:23:41 np0005486808 ceph-mon[74249]: from='osd.2 [v2:192.168.122.100:6810/2622463785,v1:192.168.122.100:6811/2622463785]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct 14 04:23:41 np0005486808 ceph-mon[74249]: from='osd.2 [v2:192.168.122.100:6810/2622463785,v1:192.168.122.100:6811/2622463785]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 14 04:23:41 np0005486808 ceph-mon[74249]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct 14 04:23:41 np0005486808 ceph-mon[74249]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct 14 04:23:41 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/505127739' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 14 04:23:41 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:41 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:41 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Oct 14 04:23:41 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 14 pg[3.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:23:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 14 04:23:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 14 04:23:41 np0005486808 ceph-mgr[74543]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 14 04:23:41 np0005486808 ceph-mgr[74543]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2622463785; not ready for session (expect reconnect)
Oct 14 04:23:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 14 04:23:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 14 04:23:41 np0005486808 ceph-mgr[74543]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 14 04:23:41 np0005486808 podman[90213]: 2025-10-14 08:23:41.416086347 +0000 UTC m=+1.576898551 container died 9791881dfaba2213c8d4318c8f645415c5f22581d6c246adeb6613847f217a10 (image=quay.io/ceph/ceph:v18, name=naughty_bhabha, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 14 04:23:41 np0005486808 systemd[1]: libpod-9791881dfaba2213c8d4318c8f645415c5f22581d6c246adeb6613847f217a10.scope: Deactivated successfully.
Oct 14 04:23:41 np0005486808 systemd[1]: var-lib-containers-storage-overlay-fa6717d3a890db8c0e436bc3e2673f93b2507073b7b119b2d7046f7860698150-merged.mount: Deactivated successfully.
Oct 14 04:23:41 np0005486808 podman[90213]: 2025-10-14 08:23:41.518032433 +0000 UTC m=+1.678844607 container remove 9791881dfaba2213c8d4318c8f645415c5f22581d6c246adeb6613847f217a10 (image=quay.io/ceph/ceph:v18, name=naughty_bhabha, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:23:41 np0005486808 systemd[1]: libpod-conmon-9791881dfaba2213c8d4318c8f645415c5f22581d6c246adeb6613847f217a10.scope: Deactivated successfully.
Oct 14 04:23:41 np0005486808 podman[90546]: 2025-10-14 08:23:41.740472257 +0000 UTC m=+0.085129525 container exec c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:23:41 np0005486808 python3[90583]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:23:41 np0005486808 podman[90546]: 2025-10-14 08:23:41.880251226 +0000 UTC m=+0.224908484 container exec_died c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 14 04:23:41 np0005486808 podman[90591]: 2025-10-14 08:23:41.960471123 +0000 UTC m=+0.086280483 container create 04bd289402582a88ce8e4c7b8ae211fe206aac33cfbc7707270bf9a2590ae885 (image=quay.io/ceph/ceph:v18, name=thirsty_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Oct 14 04:23:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:23:42 np0005486808 systemd[1]: Started libpod-conmon-04bd289402582a88ce8e4c7b8ae211fe206aac33cfbc7707270bf9a2590ae885.scope.
Oct 14 04:23:42 np0005486808 podman[90591]: 2025-10-14 08:23:41.926215194 +0000 UTC m=+0.052024594 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:23:42 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:23:42 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/587cb61473bb12e49724a908b8e544aec1090f09bc408eaabfb3510483650c84/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:42 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/587cb61473bb12e49724a908b8e544aec1090f09bc408eaabfb3510483650c84/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:42 np0005486808 podman[90591]: 2025-10-14 08:23:42.044778937 +0000 UTC m=+0.170588317 container init 04bd289402582a88ce8e4c7b8ae211fe206aac33cfbc7707270bf9a2590ae885 (image=quay.io/ceph/ceph:v18, name=thirsty_archimedes, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:23:42 np0005486808 podman[90591]: 2025-10-14 08:23:42.052734837 +0000 UTC m=+0.178544227 container start 04bd289402582a88ce8e4c7b8ae211fe206aac33cfbc7707270bf9a2590ae885 (image=quay.io/ceph/ceph:v18, name=thirsty_archimedes, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Oct 14 04:23:42 np0005486808 podman[90591]: 2025-10-14 08:23:42.065732307 +0000 UTC m=+0.191541697 container attach 04bd289402582a88ce8e4c7b8ae211fe206aac33cfbc7707270bf9a2590ae885 (image=quay.io/ceph/ceph:v18, name=thirsty_archimedes, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 14 04:23:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Oct 14 04:23:42 np0005486808 ceph-mon[74249]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 14 04:23:42 np0005486808 ceph-mon[74249]: from='osd.2 [v2:192.168.122.100:6810/2622463785,v1:192.168.122.100:6811/2622463785]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 14 04:23:42 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/505127739' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 14 04:23:42 np0005486808 ceph-mgr[74543]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2622463785; not ready for session (expect reconnect)
Oct 14 04:23:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:23:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 14 04:23:42 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 14 04:23:42 np0005486808 ceph-mgr[74543]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 14 04:23:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Oct 14 04:23:42 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Oct 14 04:23:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 14 04:23:42 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 14 04:23:42 np0005486808 ceph-mgr[74543]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 14 04:23:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 15 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:23:42 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:23:42 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v41: 3 pgs: 1 unknown, 2 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 14 04:23:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 14 04:23:42 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2013105526' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 14 04:23:43 np0005486808 podman[90866]: 2025-10-14 08:23:43.135726689 +0000 UTC m=+0.053115560 container create 7981e199c698c2096ce2697b80db3690ebc0f8853d9ab52f2b5996f68f131673 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:23:43 np0005486808 systemd[1]: Started libpod-conmon-7981e199c698c2096ce2697b80db3690ebc0f8853d9ab52f2b5996f68f131673.scope.
Oct 14 04:23:43 np0005486808 podman[90866]: 2025-10-14 08:23:43.11359626 +0000 UTC m=+0.030985161 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:23:43 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:23:43 np0005486808 podman[90866]: 2025-10-14 08:23:43.231891286 +0000 UTC m=+0.149280217 container init 7981e199c698c2096ce2697b80db3690ebc0f8853d9ab52f2b5996f68f131673 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 14 04:23:43 np0005486808 podman[90866]: 2025-10-14 08:23:43.243126825 +0000 UTC m=+0.160515726 container start 7981e199c698c2096ce2697b80db3690ebc0f8853d9ab52f2b5996f68f131673 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_haibt, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:23:43 np0005486808 laughing_haibt[90883]: 167 167
Oct 14 04:23:43 np0005486808 podman[90866]: 2025-10-14 08:23:43.251270409 +0000 UTC m=+0.168659360 container attach 7981e199c698c2096ce2697b80db3690ebc0f8853d9ab52f2b5996f68f131673 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_haibt, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 14 04:23:43 np0005486808 systemd[1]: libpod-7981e199c698c2096ce2697b80db3690ebc0f8853d9ab52f2b5996f68f131673.scope: Deactivated successfully.
Oct 14 04:23:43 np0005486808 podman[90866]: 2025-10-14 08:23:43.251762221 +0000 UTC m=+0.169151122 container died 7981e199c698c2096ce2697b80db3690ebc0f8853d9ab52f2b5996f68f131673 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 14 04:23:43 np0005486808 systemd[1]: var-lib-containers-storage-overlay-ffc551c4b886156b26624e088f3a5ca6613e81c03f3ad5ac1a5968ad83551766-merged.mount: Deactivated successfully.
Oct 14 04:23:43 np0005486808 podman[90866]: 2025-10-14 08:23:43.324217322 +0000 UTC m=+0.241606193 container remove 7981e199c698c2096ce2697b80db3690ebc0f8853d9ab52f2b5996f68f131673 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_haibt, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:23:43 np0005486808 systemd[1]: libpod-conmon-7981e199c698c2096ce2697b80db3690ebc0f8853d9ab52f2b5996f68f131673.scope: Deactivated successfully.
Oct 14 04:23:43 np0005486808 ceph-mgr[74543]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2622463785; not ready for session (expect reconnect)
Oct 14 04:23:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 14 04:23:43 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 14 04:23:43 np0005486808 ceph-mgr[74543]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 14 04:23:43 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:43 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:43 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/2013105526' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 14 04:23:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Oct 14 04:23:43 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2013105526' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 14 04:23:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Oct 14 04:23:43 np0005486808 thirsty_archimedes[90632]: pool 'backups' created
Oct 14 04:23:43 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Oct 14 04:23:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 14 04:23:43 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 14 04:23:43 np0005486808 ceph-mgr[74543]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 14 04:23:43 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 16 pg[4.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [0] r=0 lpr=16 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:23:43 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 14 pg[2.0( empty local-lis/les=12/13 n=0 ec=12/12 lis/c=12/12 les/c/f=13/13/0 sis=14 pruub=12.909769058s) [] r=-1 lpr=14 pi=[12,14)/1 crt=0'0 mlcod 0'0 active pruub 27.183803558s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:23:43 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 16 pg[2.0( empty local-lis/les=12/13 n=0 ec=12/12 lis/c=12/12 les/c/f=13/13/0 sis=14 pruub=12.909769058s) [] r=-1 lpr=14 pi=[12,14)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 27.183803558s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:23:43 np0005486808 podman[90907]: 2025-10-14 08:23:43.482388231 +0000 UTC m=+0.039729010 container create 983794828b1d07112d4ccc408b6b1790595eaddfa7347e7e1a1c1c49c3dcd8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Oct 14 04:23:43 np0005486808 systemd[1]: libpod-04bd289402582a88ce8e4c7b8ae211fe206aac33cfbc7707270bf9a2590ae885.scope: Deactivated successfully.
Oct 14 04:23:43 np0005486808 podman[90591]: 2025-10-14 08:23:43.506433285 +0000 UTC m=+1.632242645 container died 04bd289402582a88ce8e4c7b8ae211fe206aac33cfbc7707270bf9a2590ae885 (image=quay.io/ceph/ceph:v18, name=thirsty_archimedes, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:23:43 np0005486808 systemd[1]: Started libpod-conmon-983794828b1d07112d4ccc408b6b1790595eaddfa7347e7e1a1c1c49c3dcd8aa.scope.
Oct 14 04:23:43 np0005486808 systemd[1]: var-lib-containers-storage-overlay-587cb61473bb12e49724a908b8e544aec1090f09bc408eaabfb3510483650c84-merged.mount: Deactivated successfully.
Oct 14 04:23:43 np0005486808 podman[90907]: 2025-10-14 08:23:43.46310667 +0000 UTC m=+0.020447479 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:23:43 np0005486808 podman[90591]: 2025-10-14 08:23:43.565466685 +0000 UTC m=+1.691276055 container remove 04bd289402582a88ce8e4c7b8ae211fe206aac33cfbc7707270bf9a2590ae885 (image=quay.io/ceph/ceph:v18, name=thirsty_archimedes, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 14 04:23:43 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:23:43 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2af93ef19581b265cbde00a000d8081c37b1e468a9b386409c1da3d9abc2a490/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:43 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2af93ef19581b265cbde00a000d8081c37b1e468a9b386409c1da3d9abc2a490/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:43 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2af93ef19581b265cbde00a000d8081c37b1e468a9b386409c1da3d9abc2a490/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:43 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2af93ef19581b265cbde00a000d8081c37b1e468a9b386409c1da3d9abc2a490/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:43 np0005486808 systemd[1]: libpod-conmon-04bd289402582a88ce8e4c7b8ae211fe206aac33cfbc7707270bf9a2590ae885.scope: Deactivated successfully.
Oct 14 04:23:43 np0005486808 podman[90907]: 2025-10-14 08:23:43.59369861 +0000 UTC m=+0.151039389 container init 983794828b1d07112d4ccc408b6b1790595eaddfa7347e7e1a1c1c49c3dcd8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 14 04:23:43 np0005486808 podman[90907]: 2025-10-14 08:23:43.601539987 +0000 UTC m=+0.158880766 container start 983794828b1d07112d4ccc408b6b1790595eaddfa7347e7e1a1c1c49c3dcd8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:23:43 np0005486808 podman[90907]: 2025-10-14 08:23:43.606349512 +0000 UTC m=+0.163690331 container attach 983794828b1d07112d4ccc408b6b1790595eaddfa7347e7e1a1c1c49c3dcd8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_villani, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 14 04:23:43 np0005486808 python3[90969]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:23:43 np0005486808 ceph-osd[89514]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 34.603 iops: 8858.287 elapsed_sec: 0.339
Oct 14 04:23:43 np0005486808 ceph-osd[89514]: log_channel(cluster) log [WRN] : OSD bench result of 8858.287220 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 14 04:23:43 np0005486808 ceph-osd[89514]: osd.2 0 waiting for initial osdmap
Oct 14 04:23:43 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-2[89505]: 2025-10-14T08:23:43.903+0000 7f71aacd5640 -1 osd.2 0 waiting for initial osdmap
Oct 14 04:23:43 np0005486808 ceph-osd[89514]: osd.2 16 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct 14 04:23:43 np0005486808 ceph-osd[89514]: osd.2 16 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Oct 14 04:23:43 np0005486808 ceph-osd[89514]: osd.2 16 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct 14 04:23:43 np0005486808 ceph-osd[89514]: osd.2 16 check_osdmap_features require_osd_release unknown -> reef
Oct 14 04:23:43 np0005486808 ceph-osd[89514]: osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 14 04:23:43 np0005486808 ceph-osd[89514]: osd.2 16 set_numa_affinity not setting numa affinity
Oct 14 04:23:43 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-osd-2[89505]: 2025-10-14T08:23:43.924+0000 7f71a62fd640 -1 osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 14 04:23:43 np0005486808 ceph-osd[89514]: osd.2 16 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial
Oct 14 04:23:43 np0005486808 podman[90970]: 2025-10-14 08:23:43.950793261 +0000 UTC m=+0.050966759 container create 4e5b62440e6b96b9f7d5b87e8bd06e53f5763e43de5c370405f9200fc5f81537 (image=quay.io/ceph/ceph:v18, name=elated_thompson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:23:43 np0005486808 systemd[1]: Started libpod-conmon-4e5b62440e6b96b9f7d5b87e8bd06e53f5763e43de5c370405f9200fc5f81537.scope.
Oct 14 04:23:44 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:23:44 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00984e3e0ce0d61dde3c16d4ce9035ee7c82b9a5c66e77bf30f0021d5d0f6694/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:44 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00984e3e0ce0d61dde3c16d4ce9035ee7c82b9a5c66e77bf30f0021d5d0f6694/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:44 np0005486808 podman[90970]: 2025-10-14 08:23:43.925777743 +0000 UTC m=+0.025951241 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:23:44 np0005486808 podman[90970]: 2025-10-14 08:23:44.036493018 +0000 UTC m=+0.136666536 container init 4e5b62440e6b96b9f7d5b87e8bd06e53f5763e43de5c370405f9200fc5f81537 (image=quay.io/ceph/ceph:v18, name=elated_thompson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 14 04:23:44 np0005486808 podman[90970]: 2025-10-14 08:23:44.043200818 +0000 UTC m=+0.143374306 container start 4e5b62440e6b96b9f7d5b87e8bd06e53f5763e43de5c370405f9200fc5f81537 (image=quay.io/ceph/ceph:v18, name=elated_thompson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:23:44 np0005486808 podman[90970]: 2025-10-14 08:23:44.047324537 +0000 UTC m=+0.147498115 container attach 4e5b62440e6b96b9f7d5b87e8bd06e53f5763e43de5c370405f9200fc5f81537 (image=quay.io/ceph/ceph:v18, name=elated_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 14 04:23:44 np0005486808 ceph-mgr[74543]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2622463785; not ready for session (expect reconnect)
Oct 14 04:23:44 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 14 04:23:44 np0005486808 ceph-mgr[74543]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 14 04:23:44 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 14 04:23:44 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/2013105526' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 14 04:23:44 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Oct 14 04:23:44 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Oct 14 04:23:44 np0005486808 ceph-mon[74249]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/2622463785,v1:192.168.122.100:6811/2622463785] boot
Oct 14 04:23:44 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Oct 14 04:23:44 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 14 04:23:44 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 14 04:23:44 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 17 pg[2.0( empty local-lis/les=12/13 n=0 ec=12/12 lis/c=12/12 les/c/f=13/13/0 sis=17 pruub=11.905755043s) [2] r=-1 lpr=17 pi=[12,17)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 27.183803558s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:23:44 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 17 pg[2.0( empty local-lis/les=12/13 n=0 ec=12/12 lis/c=12/12 les/c/f=13/13/0 sis=17 pruub=11.905668259s) [2] r=-1 lpr=17 pi=[12,17)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 27.183803558s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:23:44 np0005486808 ceph-osd[89514]: osd.2 17 state: booting -> active
Oct 14 04:23:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 17 pg[2.0( empty local-lis/les=0/0 n=0 ec=12/12 lis/c=12/12 les/c/f=13/13/0 sis=17) [2] r=0 lpr=17 pi=[12,17)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:23:44 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 17 pg[4.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [0] r=0 lpr=16 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:23:44 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 14 04:23:44 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1281775910' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 14 04:23:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v44: 4 pgs: 2 unknown, 2 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 14 04:23:45 np0005486808 tender_villani[90938]: [
Oct 14 04:23:45 np0005486808 tender_villani[90938]:    {
Oct 14 04:23:45 np0005486808 tender_villani[90938]:        "available": false,
Oct 14 04:23:45 np0005486808 tender_villani[90938]:        "ceph_device": false,
Oct 14 04:23:45 np0005486808 tender_villani[90938]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct 14 04:23:45 np0005486808 tender_villani[90938]:        "lsm_data": {},
Oct 14 04:23:45 np0005486808 tender_villani[90938]:        "lvs": [],
Oct 14 04:23:45 np0005486808 tender_villani[90938]:        "path": "/dev/sr0",
Oct 14 04:23:45 np0005486808 tender_villani[90938]:        "rejected_reasons": [
Oct 14 04:23:45 np0005486808 tender_villani[90938]:            "Insufficient space (<5GB)",
Oct 14 04:23:45 np0005486808 tender_villani[90938]:            "Has a FileSystem"
Oct 14 04:23:45 np0005486808 tender_villani[90938]:        ],
Oct 14 04:23:45 np0005486808 tender_villani[90938]:        "sys_api": {
Oct 14 04:23:45 np0005486808 tender_villani[90938]:            "actuators": null,
Oct 14 04:23:45 np0005486808 tender_villani[90938]:            "device_nodes": "sr0",
Oct 14 04:23:45 np0005486808 tender_villani[90938]:            "devname": "sr0",
Oct 14 04:23:45 np0005486808 tender_villani[90938]:            "human_readable_size": "482.00 KB",
Oct 14 04:23:45 np0005486808 tender_villani[90938]:            "id_bus": "ata",
Oct 14 04:23:45 np0005486808 tender_villani[90938]:            "model": "QEMU DVD-ROM",
Oct 14 04:23:45 np0005486808 tender_villani[90938]:            "nr_requests": "2",
Oct 14 04:23:45 np0005486808 tender_villani[90938]:            "parent": "/dev/sr0",
Oct 14 04:23:45 np0005486808 tender_villani[90938]:            "partitions": {},
Oct 14 04:23:45 np0005486808 tender_villani[90938]:            "path": "/dev/sr0",
Oct 14 04:23:45 np0005486808 tender_villani[90938]:            "removable": "1",
Oct 14 04:23:45 np0005486808 tender_villani[90938]:            "rev": "2.5+",
Oct 14 04:23:45 np0005486808 tender_villani[90938]:            "ro": "0",
Oct 14 04:23:45 np0005486808 tender_villani[90938]:            "rotational": "0",
Oct 14 04:23:45 np0005486808 tender_villani[90938]:            "sas_address": "",
Oct 14 04:23:45 np0005486808 tender_villani[90938]:            "sas_device_handle": "",
Oct 14 04:23:45 np0005486808 tender_villani[90938]:            "scheduler_mode": "mq-deadline",
Oct 14 04:23:45 np0005486808 tender_villani[90938]:            "sectors": 0,
Oct 14 04:23:45 np0005486808 tender_villani[90938]:            "sectorsize": "2048",
Oct 14 04:23:45 np0005486808 tender_villani[90938]:            "size": 493568.0,
Oct 14 04:23:45 np0005486808 tender_villani[90938]:            "support_discard": "2048",
Oct 14 04:23:45 np0005486808 tender_villani[90938]:            "type": "disk",
Oct 14 04:23:45 np0005486808 tender_villani[90938]:            "vendor": "QEMU"
Oct 14 04:23:45 np0005486808 tender_villani[90938]:        }
Oct 14 04:23:45 np0005486808 tender_villani[90938]:    }
Oct 14 04:23:45 np0005486808 tender_villani[90938]: ]
Oct 14 04:23:45 np0005486808 systemd[1]: libpod-983794828b1d07112d4ccc408b6b1790595eaddfa7347e7e1a1c1c49c3dcd8aa.scope: Deactivated successfully.
Oct 14 04:23:45 np0005486808 systemd[1]: libpod-983794828b1d07112d4ccc408b6b1790595eaddfa7347e7e1a1c1c49c3dcd8aa.scope: Consumed 1.734s CPU time.
Oct 14 04:23:45 np0005486808 podman[90907]: 2025-10-14 08:23:45.281244944 +0000 UTC m=+1.838585783 container died 983794828b1d07112d4ccc408b6b1790595eaddfa7347e7e1a1c1c49c3dcd8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 04:23:45 np0005486808 systemd[1]: var-lib-containers-storage-overlay-2af93ef19581b265cbde00a000d8081c37b1e468a9b386409c1da3d9abc2a490-merged.mount: Deactivated successfully.
Oct 14 04:23:45 np0005486808 podman[90907]: 2025-10-14 08:23:45.343764917 +0000 UTC m=+1.901105706 container remove 983794828b1d07112d4ccc408b6b1790595eaddfa7347e7e1a1c1c49c3dcd8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_villani, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:23:45 np0005486808 systemd[1]: libpod-conmon-983794828b1d07112d4ccc408b6b1790595eaddfa7347e7e1a1c1c49c3dcd8aa.scope: Deactivated successfully.
Oct 14 04:23:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:23:45 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:23:45 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Oct 14 04:23:45 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 14 04:23:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Oct 14 04:23:45 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 14 04:23:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Oct 14 04:23:45 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct 14 04:23:45 np0005486808 ceph-mgr[74543]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43699k
Oct 14 04:23:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Oct 14 04:23:45 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43699k
Oct 14 04:23:45 np0005486808 ceph-mgr[74543]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44747844: error parsing value: Value '44747844' is below minimum 939524096
Oct 14 04:23:45 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44747844: error parsing value: Value '44747844' is below minimum 939524096
Oct 14 04:23:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:23:45 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:23:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 04:23:45 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:23:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 04:23:45 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:45 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 53ca7dec-5ada-427d-95bb-3ebc48b072c6 does not exist
Oct 14 04:23:45 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev b2a227a0-a89a-45bb-8722-b88ea4508df4 does not exist
Oct 14 04:23:45 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev eeea6d89-08e1-4814-9a97-16222effafdc does not exist
Oct 14 04:23:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 04:23:45 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 04:23:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 04:23:45 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:23:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:23:45 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:23:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Oct 14 04:23:45 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1281775910' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 14 04:23:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Oct 14 04:23:45 np0005486808 elated_thompson[90987]: pool 'images' created
Oct 14 04:23:45 np0005486808 ceph-mon[74249]: OSD bench result of 8858.287220 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 14 04:23:45 np0005486808 ceph-mon[74249]: osd.2 [v2:192.168.122.100:6810/2622463785,v1:192.168.122.100:6811/2622463785] boot
Oct 14 04:23:45 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/1281775910' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 14 04:23:45 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:45 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:45 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 14 04:23:45 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 14 04:23:45 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct 14 04:23:45 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:23:45 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:45 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:23:45 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Oct 14 04:23:45 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 18 pg[5.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:23:45 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=17/18 n=0 ec=12/12 lis/c=12/12 les/c/f=13/13/0 sis=17) [2] r=0 lpr=17 pi=[12,17)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:23:45 np0005486808 systemd[1]: libpod-4e5b62440e6b96b9f7d5b87e8bd06e53f5763e43de5c370405f9200fc5f81537.scope: Deactivated successfully.
Oct 14 04:23:45 np0005486808 podman[90970]: 2025-10-14 08:23:45.511560506 +0000 UTC m=+1.611734024 container died 4e5b62440e6b96b9f7d5b87e8bd06e53f5763e43de5c370405f9200fc5f81537 (image=quay.io/ceph/ceph:v18, name=elated_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 14 04:23:45 np0005486808 systemd[1]: var-lib-containers-storage-overlay-00984e3e0ce0d61dde3c16d4ce9035ee7c82b9a5c66e77bf30f0021d5d0f6694-merged.mount: Deactivated successfully.
Oct 14 04:23:45 np0005486808 podman[90970]: 2025-10-14 08:23:45.575899983 +0000 UTC m=+1.676073501 container remove 4e5b62440e6b96b9f7d5b87e8bd06e53f5763e43de5c370405f9200fc5f81537 (image=quay.io/ceph/ceph:v18, name=elated_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:23:45 np0005486808 systemd[1]: libpod-conmon-4e5b62440e6b96b9f7d5b87e8bd06e53f5763e43de5c370405f9200fc5f81537.scope: Deactivated successfully.
Oct 14 04:23:45 np0005486808 python3[93105]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:23:46 np0005486808 podman[93127]: 2025-10-14 08:23:46.011817707 +0000 UTC m=+0.068026936 container create 2cb680abfcc14484a445d8d66476673d12151f871a5beca8eddd32e8cba0501c (image=quay.io/ceph/ceph:v18, name=sweet_buck, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 14 04:23:46 np0005486808 systemd[1]: Started libpod-conmon-2cb680abfcc14484a445d8d66476673d12151f871a5beca8eddd32e8cba0501c.scope.
Oct 14 04:23:46 np0005486808 podman[93127]: 2025-10-14 08:23:45.98850321 +0000 UTC m=+0.044712439 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:23:46 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:23:46 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e7183145507277e5332303fcd039acc64422945676defe4f2280eb43ddb720d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:46 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e7183145507277e5332303fcd039acc64422945676defe4f2280eb43ddb720d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:46 np0005486808 podman[93127]: 2025-10-14 08:23:46.110498244 +0000 UTC m=+0.166707483 container init 2cb680abfcc14484a445d8d66476673d12151f871a5beca8eddd32e8cba0501c (image=quay.io/ceph/ceph:v18, name=sweet_buck, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:23:46 np0005486808 podman[93127]: 2025-10-14 08:23:46.118875874 +0000 UTC m=+0.175085073 container start 2cb680abfcc14484a445d8d66476673d12151f871a5beca8eddd32e8cba0501c (image=quay.io/ceph/ceph:v18, name=sweet_buck, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:23:46 np0005486808 podman[93127]: 2025-10-14 08:23:46.123035194 +0000 UTC m=+0.179244393 container attach 2cb680abfcc14484a445d8d66476673d12151f871a5beca8eddd32e8cba0501c (image=quay.io/ceph/ceph:v18, name=sweet_buck, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:23:46 np0005486808 podman[93187]: 2025-10-14 08:23:46.272923875 +0000 UTC m=+0.054438212 container create e196d63ead23ee4b80f358e1ffc2b6f0195e7496c819da991023d706ea687af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_robinson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:23:46 np0005486808 systemd[1]: Started libpod-conmon-e196d63ead23ee4b80f358e1ffc2b6f0195e7496c819da991023d706ea687af0.scope.
Oct 14 04:23:46 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:23:46 np0005486808 podman[93187]: 2025-10-14 08:23:46.246176016 +0000 UTC m=+0.027690433 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:23:46 np0005486808 podman[93187]: 2025-10-14 08:23:46.359075423 +0000 UTC m=+0.140589850 container init e196d63ead23ee4b80f358e1ffc2b6f0195e7496c819da991023d706ea687af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 14 04:23:46 np0005486808 podman[93187]: 2025-10-14 08:23:46.368613411 +0000 UTC m=+0.150127778 container start e196d63ead23ee4b80f358e1ffc2b6f0195e7496c819da991023d706ea687af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_robinson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:23:46 np0005486808 podman[93187]: 2025-10-14 08:23:46.373215491 +0000 UTC m=+0.154729908 container attach e196d63ead23ee4b80f358e1ffc2b6f0195e7496c819da991023d706ea687af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_robinson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 14 04:23:46 np0005486808 keen_robinson[93204]: 167 167
Oct 14 04:23:46 np0005486808 systemd[1]: libpod-e196d63ead23ee4b80f358e1ffc2b6f0195e7496c819da991023d706ea687af0.scope: Deactivated successfully.
Oct 14 04:23:46 np0005486808 podman[93187]: 2025-10-14 08:23:46.378101647 +0000 UTC m=+0.159616054 container died e196d63ead23ee4b80f358e1ffc2b6f0195e7496c819da991023d706ea687af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_robinson, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 14 04:23:46 np0005486808 systemd[1]: var-lib-containers-storage-overlay-5e1fe1150e8d122fede757e0f0729a7703e96c6af770f111732bd48721e3e61c-merged.mount: Deactivated successfully.
Oct 14 04:23:46 np0005486808 podman[93187]: 2025-10-14 08:23:46.428525992 +0000 UTC m=+0.210040319 container remove e196d63ead23ee4b80f358e1ffc2b6f0195e7496c819da991023d706ea687af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_robinson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:23:46 np0005486808 systemd[1]: libpod-conmon-e196d63ead23ee4b80f358e1ffc2b6f0195e7496c819da991023d706ea687af0.scope: Deactivated successfully.
Oct 14 04:23:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Oct 14 04:23:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Oct 14 04:23:46 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Oct 14 04:23:46 np0005486808 ceph-mon[74249]: Adjusting osd_memory_target on compute-0 to 43699k
Oct 14 04:23:46 np0005486808 ceph-mon[74249]: Unable to set osd_memory_target on compute-0 to 44747844: error parsing value: Value '44747844' is below minimum 939524096
Oct 14 04:23:46 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/1281775910' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 14 04:23:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 19 pg[5.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:23:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v47: 5 pgs: 1 unknown, 1 peering, 3 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Oct 14 04:23:46 np0005486808 podman[93248]: 2025-10-14 08:23:46.605950611 +0000 UTC m=+0.071182982 container create 10e712fab401174ef83c85b6af6bba39b9815256c46911a58c3ca1e176333d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:23:46 np0005486808 systemd[1]: Started libpod-conmon-10e712fab401174ef83c85b6af6bba39b9815256c46911a58c3ca1e176333d3a.scope.
Oct 14 04:23:46 np0005486808 podman[93248]: 2025-10-14 08:23:46.561193611 +0000 UTC m=+0.026426062 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:23:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 14 04:23:46 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2291376168' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 14 04:23:46 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:23:46 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a67d7ed07f52ee1d428f19a48d19811a21980b3ea6b0bf55f1c9d59adec783b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:46 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a67d7ed07f52ee1d428f19a48d19811a21980b3ea6b0bf55f1c9d59adec783b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:46 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a67d7ed07f52ee1d428f19a48d19811a21980b3ea6b0bf55f1c9d59adec783b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:46 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a67d7ed07f52ee1d428f19a48d19811a21980b3ea6b0bf55f1c9d59adec783b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:46 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a67d7ed07f52ee1d428f19a48d19811a21980b3ea6b0bf55f1c9d59adec783b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:46 np0005486808 podman[93248]: 2025-10-14 08:23:46.686811392 +0000 UTC m=+0.152043793 container init 10e712fab401174ef83c85b6af6bba39b9815256c46911a58c3ca1e176333d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:23:46 np0005486808 podman[93248]: 2025-10-14 08:23:46.695122411 +0000 UTC m=+0.160354782 container start 10e712fab401174ef83c85b6af6bba39b9815256c46911a58c3ca1e176333d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_saha, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Oct 14 04:23:46 np0005486808 podman[93248]: 2025-10-14 08:23:46.69883723 +0000 UTC m=+0.164069621 container attach 10e712fab401174ef83c85b6af6bba39b9815256c46911a58c3ca1e176333d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_saha, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 04:23:46 np0005486808 ceph-mon[74249]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 14 04:23:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e19 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:23:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Oct 14 04:23:47 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2291376168' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 14 04:23:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Oct 14 04:23:47 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/2291376168' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 14 04:23:47 np0005486808 ceph-mon[74249]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 14 04:23:47 np0005486808 sweet_buck[93169]: pool 'cephfs.cephfs.meta' created
Oct 14 04:23:47 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Oct 14 04:23:47 np0005486808 systemd[1]: libpod-2cb680abfcc14484a445d8d66476673d12151f871a5beca8eddd32e8cba0501c.scope: Deactivated successfully.
Oct 14 04:23:47 np0005486808 podman[93127]: 2025-10-14 08:23:47.534938884 +0000 UTC m=+1.591148113 container died 2cb680abfcc14484a445d8d66476673d12151f871a5beca8eddd32e8cba0501c (image=quay.io/ceph/ceph:v18, name=sweet_buck, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 14 04:23:47 np0005486808 systemd[1]: var-lib-containers-storage-overlay-8e7183145507277e5332303fcd039acc64422945676defe4f2280eb43ddb720d-merged.mount: Deactivated successfully.
Oct 14 04:23:47 np0005486808 podman[93127]: 2025-10-14 08:23:47.596037463 +0000 UTC m=+1.652246652 container remove 2cb680abfcc14484a445d8d66476673d12151f871a5beca8eddd32e8cba0501c (image=quay.io/ceph/ceph:v18, name=sweet_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:23:47 np0005486808 systemd[1]: libpod-conmon-2cb680abfcc14484a445d8d66476673d12151f871a5beca8eddd32e8cba0501c.scope: Deactivated successfully.
Oct 14 04:23:47 np0005486808 funny_saha[93265]: --> passed data devices: 0 physical, 3 LVM
Oct 14 04:23:47 np0005486808 funny_saha[93265]: --> relative data size: 1.0
Oct 14 04:23:47 np0005486808 funny_saha[93265]: --> All data devices are unavailable
Oct 14 04:23:47 np0005486808 systemd[1]: libpod-10e712fab401174ef83c85b6af6bba39b9815256c46911a58c3ca1e176333d3a.scope: Deactivated successfully.
Oct 14 04:23:47 np0005486808 podman[93248]: 2025-10-14 08:23:47.833601639 +0000 UTC m=+1.298834060 container died 10e712fab401174ef83c85b6af6bba39b9815256c46911a58c3ca1e176333d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_saha, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:23:47 np0005486808 systemd[1]: libpod-10e712fab401174ef83c85b6af6bba39b9815256c46911a58c3ca1e176333d3a.scope: Consumed 1.047s CPU time.
Oct 14 04:23:47 np0005486808 systemd[1]: var-lib-containers-storage-overlay-7a67d7ed07f52ee1d428f19a48d19811a21980b3ea6b0bf55f1c9d59adec783b-merged.mount: Deactivated successfully.
Oct 14 04:23:47 np0005486808 podman[93248]: 2025-10-14 08:23:47.896120292 +0000 UTC m=+1.361352673 container remove 10e712fab401174ef83c85b6af6bba39b9815256c46911a58c3ca1e176333d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 14 04:23:47 np0005486808 systemd[1]: libpod-conmon-10e712fab401174ef83c85b6af6bba39b9815256c46911a58c3ca1e176333d3a.scope: Deactivated successfully.
Oct 14 04:23:47 np0005486808 python3[93333]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:23:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 20 pg[6.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:23:48 np0005486808 podman[93354]: 2025-10-14 08:23:48.033917834 +0000 UTC m=+0.062847822 container create 8c7e3618695afa6e65add6cd679ed731e6fd4cb8b263d39ce9fd9ae66a429c6f (image=quay.io/ceph/ceph:v18, name=nostalgic_keller, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 14 04:23:48 np0005486808 systemd[1]: Started libpod-conmon-8c7e3618695afa6e65add6cd679ed731e6fd4cb8b263d39ce9fd9ae66a429c6f.scope.
Oct 14 04:23:48 np0005486808 podman[93354]: 2025-10-14 08:23:48.006854838 +0000 UTC m=+0.035784876 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:23:48 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:23:48 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09ab913dd5e51ee6b0243bd375cbc7789046200ba436d8948d3fbae28e4c53ea/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:48 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09ab913dd5e51ee6b0243bd375cbc7789046200ba436d8948d3fbae28e4c53ea/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:48 np0005486808 podman[93354]: 2025-10-14 08:23:48.140280744 +0000 UTC m=+0.169210732 container init 8c7e3618695afa6e65add6cd679ed731e6fd4cb8b263d39ce9fd9ae66a429c6f (image=quay.io/ceph/ceph:v18, name=nostalgic_keller, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Oct 14 04:23:48 np0005486808 podman[93354]: 2025-10-14 08:23:48.14638949 +0000 UTC m=+0.175319508 container start 8c7e3618695afa6e65add6cd679ed731e6fd4cb8b263d39ce9fd9ae66a429c6f (image=quay.io/ceph/ceph:v18, name=nostalgic_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 04:23:48 np0005486808 podman[93354]: 2025-10-14 08:23:48.150394936 +0000 UTC m=+0.179324914 container attach 8c7e3618695afa6e65add6cd679ed731e6fd4cb8b263d39ce9fd9ae66a429c6f (image=quay.io/ceph/ceph:v18, name=nostalgic_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 14 04:23:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Oct 14 04:23:48 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/2291376168' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 14 04:23:48 np0005486808 podman[93526]: 2025-10-14 08:23:48.539709916 +0000 UTC m=+0.081148979 container create cc6554f2369d82e20ec0720bf8946483d4f4b93fd166c33acda06dadc5999d33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:23:48 np0005486808 podman[93526]: 2025-10-14 08:23:48.486621438 +0000 UTC m=+0.028060531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:23:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v49: 6 pgs: 2 unknown, 1 peering, 3 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Oct 14 04:23:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 14 04:23:48 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3599772455' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 14 04:23:48 np0005486808 systemd[1]: Started libpod-conmon-cc6554f2369d82e20ec0720bf8946483d4f4b93fd166c33acda06dadc5999d33.scope.
Oct 14 04:23:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Oct 14 04:23:48 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Oct 14 04:23:48 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:23:48 np0005486808 podman[93526]: 2025-10-14 08:23:48.740878692 +0000 UTC m=+0.282317735 container init cc6554f2369d82e20ec0720bf8946483d4f4b93fd166c33acda06dadc5999d33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_elion, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 14 04:23:48 np0005486808 podman[93526]: 2025-10-14 08:23:48.752289145 +0000 UTC m=+0.293728198 container start cc6554f2369d82e20ec0720bf8946483d4f4b93fd166c33acda06dadc5999d33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_elion, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 14 04:23:48 np0005486808 strange_elion[93544]: 167 167
Oct 14 04:23:48 np0005486808 systemd[1]: libpod-cc6554f2369d82e20ec0720bf8946483d4f4b93fd166c33acda06dadc5999d33.scope: Deactivated successfully.
Oct 14 04:23:48 np0005486808 conmon[93544]: conmon cc6554f2369d82e20ec0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cc6554f2369d82e20ec0720bf8946483d4f4b93fd166c33acda06dadc5999d33.scope/container/memory.events
Oct 14 04:23:48 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 21 pg[6.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:23:48 np0005486808 podman[93526]: 2025-10-14 08:23:48.847404807 +0000 UTC m=+0.388843870 container attach cc6554f2369d82e20ec0720bf8946483d4f4b93fd166c33acda06dadc5999d33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_elion, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:23:48 np0005486808 podman[93526]: 2025-10-14 08:23:48.848525714 +0000 UTC m=+0.389964767 container died cc6554f2369d82e20ec0720bf8946483d4f4b93fd166c33acda06dadc5999d33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 14 04:23:48 np0005486808 systemd[1]: var-lib-containers-storage-overlay-520d8dc4af9fca911f2b8e8c25f39c6939c1f5f7b206babce8dd7e0c06424468-merged.mount: Deactivated successfully.
Oct 14 04:23:49 np0005486808 podman[93526]: 2025-10-14 08:23:49.06100714 +0000 UTC m=+0.602446163 container remove cc6554f2369d82e20ec0720bf8946483d4f4b93fd166c33acda06dadc5999d33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:23:49 np0005486808 systemd[1]: libpod-conmon-cc6554f2369d82e20ec0720bf8946483d4f4b93fd166c33acda06dadc5999d33.scope: Deactivated successfully.
Oct 14 04:23:49 np0005486808 podman[93571]: 2025-10-14 08:23:49.309636549 +0000 UTC m=+0.113174604 container create bd36ddb8e4da180828fc4fa494024a4eea9b4e61ebb63c878fe21a3d43bc87eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mahavira, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:23:49 np0005486808 podman[93571]: 2025-10-14 08:23:49.219773382 +0000 UTC m=+0.023311417 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:23:49 np0005486808 systemd[1]: Started libpod-conmon-bd36ddb8e4da180828fc4fa494024a4eea9b4e61ebb63c878fe21a3d43bc87eb.scope.
Oct 14 04:23:49 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:23:49 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06c54e9f277cf0334df109cbb5001b4ea6cdcb95d45ff80b4b41e98a0ade5cc6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:49 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06c54e9f277cf0334df109cbb5001b4ea6cdcb95d45ff80b4b41e98a0ade5cc6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:49 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06c54e9f277cf0334df109cbb5001b4ea6cdcb95d45ff80b4b41e98a0ade5cc6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:49 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06c54e9f277cf0334df109cbb5001b4ea6cdcb95d45ff80b4b41e98a0ade5cc6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:49 np0005486808 podman[93571]: 2025-10-14 08:23:49.47877239 +0000 UTC m=+0.282310405 container init bd36ddb8e4da180828fc4fa494024a4eea9b4e61ebb63c878fe21a3d43bc87eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mahavira, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 14 04:23:49 np0005486808 podman[93571]: 2025-10-14 08:23:49.484857105 +0000 UTC m=+0.288395160 container start bd36ddb8e4da180828fc4fa494024a4eea9b4e61ebb63c878fe21a3d43bc87eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mahavira, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:23:49 np0005486808 podman[93571]: 2025-10-14 08:23:49.526800197 +0000 UTC m=+0.330338252 container attach bd36ddb8e4da180828fc4fa494024a4eea9b4e61ebb63c878fe21a3d43bc87eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mahavira, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 14 04:23:49 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/3599772455' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 14 04:23:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Oct 14 04:23:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3599772455' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 14 04:23:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Oct 14 04:23:49 np0005486808 nostalgic_keller[93435]: pool 'cephfs.cephfs.data' created
Oct 14 04:23:49 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Oct 14 04:23:49 np0005486808 systemd[1]: libpod-8c7e3618695afa6e65add6cd679ed731e6fd4cb8b263d39ce9fd9ae66a429c6f.scope: Deactivated successfully.
Oct 14 04:23:49 np0005486808 podman[93354]: 2025-10-14 08:23:49.91789359 +0000 UTC m=+1.946823578 container died 8c7e3618695afa6e65add6cd679ed731e6fd4cb8b263d39ce9fd9ae66a429c6f (image=quay.io/ceph/ceph:v18, name=nostalgic_keller, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:23:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 22 pg[7.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [1] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:23:50 np0005486808 systemd[1]: var-lib-containers-storage-overlay-09ab913dd5e51ee6b0243bd375cbc7789046200ba436d8948d3fbae28e4c53ea-merged.mount: Deactivated successfully.
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]: {
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:    "0": [
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:        {
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:            "devices": [
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:                "/dev/loop3"
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:            ],
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:            "lv_name": "ceph_lv0",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:            "lv_size": "21470642176",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:            "name": "ceph_lv0",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:            "tags": {
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:                "ceph.cluster_name": "ceph",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:                "ceph.crush_device_class": "",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:                "ceph.encrypted": "0",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:                "ceph.osd_id": "0",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:                "ceph.type": "block",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:                "ceph.vdo": "0"
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:            },
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:            "type": "block",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:            "vg_name": "ceph_vg0"
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:        }
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:    ],
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:    "1": [
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:        {
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:            "devices": [
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:                "/dev/loop4"
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:            ],
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:            "lv_name": "ceph_lv1",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:            "lv_size": "21470642176",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:            "name": "ceph_lv1",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:            "tags": {
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:                "ceph.cluster_name": "ceph",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:                "ceph.crush_device_class": "",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:                "ceph.encrypted": "0",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:                "ceph.osd_id": "1",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:                "ceph.type": "block",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:                "ceph.vdo": "0"
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:            },
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:            "type": "block",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:            "vg_name": "ceph_vg1"
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:        }
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:    ],
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:    "2": [
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:        {
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:            "devices": [
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:                "/dev/loop5"
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:            ],
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:            "lv_name": "ceph_lv2",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:            "lv_size": "21470642176",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:            "name": "ceph_lv2",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:            "tags": {
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:                "ceph.cluster_name": "ceph",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:                "ceph.crush_device_class": "",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:                "ceph.encrypted": "0",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:                "ceph.osd_id": "2",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:                "ceph.type": "block",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:                "ceph.vdo": "0"
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:            },
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:            "type": "block",
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:            "vg_name": "ceph_vg2"
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:        }
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]:    ]
Oct 14 04:23:50 np0005486808 bold_mahavira[93588]: }
Oct 14 04:23:50 np0005486808 systemd[1]: libpod-bd36ddb8e4da180828fc4fa494024a4eea9b4e61ebb63c878fe21a3d43bc87eb.scope: Deactivated successfully.
Oct 14 04:23:50 np0005486808 podman[93571]: 2025-10-14 08:23:50.245077717 +0000 UTC m=+1.048615742 container died bd36ddb8e4da180828fc4fa494024a4eea9b4e61ebb63c878fe21a3d43bc87eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:23:50 np0005486808 systemd[1]: var-lib-containers-storage-overlay-06c54e9f277cf0334df109cbb5001b4ea6cdcb95d45ff80b4b41e98a0ade5cc6-merged.mount: Deactivated successfully.
Oct 14 04:23:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v52: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:23:50 np0005486808 podman[93571]: 2025-10-14 08:23:50.607522835 +0000 UTC m=+1.411060890 container remove bd36ddb8e4da180828fc4fa494024a4eea9b4e61ebb63c878fe21a3d43bc87eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Oct 14 04:23:50 np0005486808 systemd[1]: libpod-conmon-bd36ddb8e4da180828fc4fa494024a4eea9b4e61ebb63c878fe21a3d43bc87eb.scope: Deactivated successfully.
Oct 14 04:23:50 np0005486808 podman[93354]: 2025-10-14 08:23:50.70941759 +0000 UTC m=+2.738347578 container remove 8c7e3618695afa6e65add6cd679ed731e6fd4cb8b263d39ce9fd9ae66a429c6f (image=quay.io/ceph/ceph:v18, name=nostalgic_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 14 04:23:50 np0005486808 systemd[1]: libpod-conmon-8c7e3618695afa6e65add6cd679ed731e6fd4cb8b263d39ce9fd9ae66a429c6f.scope: Deactivated successfully.
Oct 14 04:23:50 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Oct 14 04:23:50 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/3599772455' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 14 04:23:51 np0005486808 python3[93723]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:23:51 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Oct 14 04:23:51 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Oct 14 04:23:51 np0005486808 podman[93752]: 2025-10-14 08:23:51.233853838 +0000 UTC m=+0.032451026 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:23:51 np0005486808 podman[93752]: 2025-10-14 08:23:51.453809633 +0000 UTC m=+0.252406771 container create 2cb5bddd1c85a2036d6de23f37bafa565944da2555a72c0b87f7f4e1508a5110 (image=quay.io/ceph/ceph:v18, name=romantic_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 14 04:23:51 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 23 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [1] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:23:51 np0005486808 systemd[1]: Started libpod-conmon-2cb5bddd1c85a2036d6de23f37bafa565944da2555a72c0b87f7f4e1508a5110.scope.
Oct 14 04:23:51 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:23:51 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd1b846487352293542eec9d0ad9ac346192f54a8ac0709f095b0b888e3c74ca/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:51 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd1b846487352293542eec9d0ad9ac346192f54a8ac0709f095b0b888e3c74ca/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:51 np0005486808 podman[93752]: 2025-10-14 08:23:51.797305938 +0000 UTC m=+0.595903076 container init 2cb5bddd1c85a2036d6de23f37bafa565944da2555a72c0b87f7f4e1508a5110 (image=quay.io/ceph/ceph:v18, name=romantic_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:23:51 np0005486808 podman[93752]: 2025-10-14 08:23:51.810730289 +0000 UTC m=+0.609327437 container start 2cb5bddd1c85a2036d6de23f37bafa565944da2555a72c0b87f7f4e1508a5110 (image=quay.io/ceph/ceph:v18, name=romantic_bassi, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:23:51 np0005486808 podman[93752]: 2025-10-14 08:23:51.872100055 +0000 UTC m=+0.670697183 container attach 2cb5bddd1c85a2036d6de23f37bafa565944da2555a72c0b87f7f4e1508a5110 (image=quay.io/ceph/ceph:v18, name=romantic_bassi, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:23:52 np0005486808 ceph-mon[74249]: log_channel(cluster) log [WRN] : Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 14 04:23:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e23 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:23:52 np0005486808 podman[93809]: 2025-10-14 08:23:52.029681869 +0000 UTC m=+0.025408808 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:23:52 np0005486808 podman[93809]: 2025-10-14 08:23:52.15820521 +0000 UTC m=+0.153932139 container create e8db7695c32c77c860c998fbee7e384c9332648f73b623e91e9c6ea8fb6b7590 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_aryabhata, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:23:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Oct 14 04:23:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Oct 14 04:23:52 np0005486808 systemd[1]: Started libpod-conmon-e8db7695c32c77c860c998fbee7e384c9332648f73b623e91e9c6ea8fb6b7590.scope.
Oct 14 04:23:52 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:23:52 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Oct 14 04:23:52 np0005486808 podman[93809]: 2025-10-14 08:23:52.476649357 +0000 UTC m=+0.472376276 container init e8db7695c32c77c860c998fbee7e384c9332648f73b623e91e9c6ea8fb6b7590 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 14 04:23:52 np0005486808 podman[93809]: 2025-10-14 08:23:52.483182824 +0000 UTC m=+0.478909773 container start e8db7695c32c77c860c998fbee7e384c9332648f73b623e91e9c6ea8fb6b7590 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 14 04:23:52 np0005486808 elastic_aryabhata[93844]: 167 167
Oct 14 04:23:52 np0005486808 systemd[1]: libpod-e8db7695c32c77c860c998fbee7e384c9332648f73b623e91e9c6ea8fb6b7590.scope: Deactivated successfully.
Oct 14 04:23:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Oct 14 04:23:52 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/286748006' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct 14 04:23:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v55: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:23:52 np0005486808 podman[93809]: 2025-10-14 08:23:52.61113812 +0000 UTC m=+0.606865059 container attach e8db7695c32c77c860c998fbee7e384c9332648f73b623e91e9c6ea8fb6b7590 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 04:23:52 np0005486808 podman[93809]: 2025-10-14 08:23:52.61154106 +0000 UTC m=+0.607267969 container died e8db7695c32c77c860c998fbee7e384c9332648f73b623e91e9c6ea8fb6b7590 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 14 04:23:52 np0005486808 systemd[1]: var-lib-containers-storage-overlay-1b70ff673fb4cab355ebe54a463d2ff4d3dc76465cd0c548fb62c4e70267d3de-merged.mount: Deactivated successfully.
Oct 14 04:23:53 np0005486808 podman[93809]: 2025-10-14 08:23:53.176424214 +0000 UTC m=+1.172151173 container remove e8db7695c32c77c860c998fbee7e384c9332648f73b623e91e9c6ea8fb6b7590 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_aryabhata, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:23:53 np0005486808 ceph-mon[74249]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 14 04:23:53 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/286748006' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct 14 04:23:53 np0005486808 systemd[1]: libpod-conmon-e8db7695c32c77c860c998fbee7e384c9332648f73b623e91e9c6ea8fb6b7590.scope: Deactivated successfully.
Oct 14 04:23:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Oct 14 04:23:53 np0005486808 podman[93869]: 2025-10-14 08:23:53.319245767 +0000 UTC m=+0.027859887 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:23:53 np0005486808 podman[93869]: 2025-10-14 08:23:53.474522326 +0000 UTC m=+0.183136456 container create dfe2b0a41a543eac8c3467a359e364f552fe8a3b4ede572b7e4594d4b91cf993 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_sinoussi, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 14 04:23:53 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/286748006' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct 14 04:23:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Oct 14 04:23:53 np0005486808 romantic_bassi[93791]: enabled application 'rbd' on pool 'vms'
Oct 14 04:23:53 np0005486808 systemd[1]: libpod-2cb5bddd1c85a2036d6de23f37bafa565944da2555a72c0b87f7f4e1508a5110.scope: Deactivated successfully.
Oct 14 04:23:53 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Oct 14 04:23:53 np0005486808 podman[93752]: 2025-10-14 08:23:53.620183376 +0000 UTC m=+2.418780534 container died 2cb5bddd1c85a2036d6de23f37bafa565944da2555a72c0b87f7f4e1508a5110 (image=quay.io/ceph/ceph:v18, name=romantic_bassi, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 14 04:23:53 np0005486808 systemd[1]: Started libpod-conmon-dfe2b0a41a543eac8c3467a359e364f552fe8a3b4ede572b7e4594d4b91cf993.scope.
Oct 14 04:23:53 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:23:53 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24a8631f08ff0784016f16b8b5b6af1d64616720d9218bf7e31347acb922b12d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:53 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24a8631f08ff0784016f16b8b5b6af1d64616720d9218bf7e31347acb922b12d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:53 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24a8631f08ff0784016f16b8b5b6af1d64616720d9218bf7e31347acb922b12d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:53 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24a8631f08ff0784016f16b8b5b6af1d64616720d9218bf7e31347acb922b12d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:54 np0005486808 systemd[1]: var-lib-containers-storage-overlay-cd1b846487352293542eec9d0ad9ac346192f54a8ac0709f095b0b888e3c74ca-merged.mount: Deactivated successfully.
Oct 14 04:23:54 np0005486808 podman[93869]: 2025-10-14 08:23:54.380463429 +0000 UTC m=+1.089077619 container init dfe2b0a41a543eac8c3467a359e364f552fe8a3b4ede572b7e4594d4b91cf993 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 14 04:23:54 np0005486808 podman[93869]: 2025-10-14 08:23:54.386830351 +0000 UTC m=+1.095444441 container start dfe2b0a41a543eac8c3467a359e364f552fe8a3b4ede572b7e4594d4b91cf993 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 14 04:23:54 np0005486808 podman[93869]: 2025-10-14 08:23:54.423830415 +0000 UTC m=+1.132444835 container attach dfe2b0a41a543eac8c3467a359e364f552fe8a3b4ede572b7e4594d4b91cf993 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:23:54 np0005486808 podman[93752]: 2025-10-14 08:23:54.46926065 +0000 UTC m=+3.267857798 container remove 2cb5bddd1c85a2036d6de23f37bafa565944da2555a72c0b87f7f4e1508a5110 (image=quay.io/ceph/ceph:v18, name=romantic_bassi, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 14 04:23:54 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/286748006' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct 14 04:23:54 np0005486808 systemd[1]: libpod-conmon-2cb5bddd1c85a2036d6de23f37bafa565944da2555a72c0b87f7f4e1508a5110.scope: Deactivated successfully.
Oct 14 04:23:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v57: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:23:54 np0005486808 python3[93929]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:23:54 np0005486808 podman[93930]: 2025-10-14 08:23:54.90499543 +0000 UTC m=+0.070365002 container create 915d5b64b7e4b3de2824d68df741fbd666c61bb308ee98da9303f9afda755375 (image=quay.io/ceph/ceph:v18, name=recursing_wright, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 14 04:23:54 np0005486808 podman[93930]: 2025-10-14 08:23:54.863101559 +0000 UTC m=+0.028471141 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:23:54 np0005486808 systemd[1]: Started libpod-conmon-915d5b64b7e4b3de2824d68df741fbd666c61bb308ee98da9303f9afda755375.scope.
Oct 14 04:23:55 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:23:55 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d623a18d0c5f9b92f973feb35f3764dbfefbd2e99ed022e072a4946bec7ce656/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:55 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d623a18d0c5f9b92f973feb35f3764dbfefbd2e99ed022e072a4946bec7ce656/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:55 np0005486808 podman[93930]: 2025-10-14 08:23:55.032970697 +0000 UTC m=+0.198340299 container init 915d5b64b7e4b3de2824d68df741fbd666c61bb308ee98da9303f9afda755375 (image=quay.io/ceph/ceph:v18, name=recursing_wright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:23:55 np0005486808 podman[93930]: 2025-10-14 08:23:55.046299086 +0000 UTC m=+0.211668658 container start 915d5b64b7e4b3de2824d68df741fbd666c61bb308ee98da9303f9afda755375 (image=quay.io/ceph/ceph:v18, name=recursing_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 14 04:23:55 np0005486808 podman[93930]: 2025-10-14 08:23:55.050055295 +0000 UTC m=+0.215424897 container attach 915d5b64b7e4b3de2824d68df741fbd666c61bb308ee98da9303f9afda755375 (image=quay.io/ceph/ceph:v18, name=recursing_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 14 04:23:55 np0005486808 romantic_sinoussi[93898]: {
Oct 14 04:23:55 np0005486808 romantic_sinoussi[93898]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 04:23:55 np0005486808 romantic_sinoussi[93898]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:23:55 np0005486808 romantic_sinoussi[93898]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 04:23:55 np0005486808 romantic_sinoussi[93898]:        "osd_id": 2,
Oct 14 04:23:55 np0005486808 romantic_sinoussi[93898]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:23:55 np0005486808 romantic_sinoussi[93898]:        "type": "bluestore"
Oct 14 04:23:55 np0005486808 romantic_sinoussi[93898]:    },
Oct 14 04:23:55 np0005486808 romantic_sinoussi[93898]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 04:23:55 np0005486808 romantic_sinoussi[93898]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:23:55 np0005486808 romantic_sinoussi[93898]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 04:23:55 np0005486808 romantic_sinoussi[93898]:        "osd_id": 1,
Oct 14 04:23:55 np0005486808 romantic_sinoussi[93898]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:23:55 np0005486808 romantic_sinoussi[93898]:        "type": "bluestore"
Oct 14 04:23:55 np0005486808 romantic_sinoussi[93898]:    },
Oct 14 04:23:55 np0005486808 romantic_sinoussi[93898]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 04:23:55 np0005486808 romantic_sinoussi[93898]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:23:55 np0005486808 romantic_sinoussi[93898]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 04:23:55 np0005486808 romantic_sinoussi[93898]:        "osd_id": 0,
Oct 14 04:23:55 np0005486808 romantic_sinoussi[93898]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:23:55 np0005486808 romantic_sinoussi[93898]:        "type": "bluestore"
Oct 14 04:23:55 np0005486808 romantic_sinoussi[93898]:    }
Oct 14 04:23:55 np0005486808 romantic_sinoussi[93898]: }
Oct 14 04:23:55 np0005486808 systemd[1]: libpod-dfe2b0a41a543eac8c3467a359e364f552fe8a3b4ede572b7e4594d4b91cf993.scope: Deactivated successfully.
Oct 14 04:23:55 np0005486808 systemd[1]: libpod-dfe2b0a41a543eac8c3467a359e364f552fe8a3b4ede572b7e4594d4b91cf993.scope: Consumed 1.146s CPU time.
Oct 14 04:23:55 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Oct 14 04:23:55 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3630956899' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct 14 04:23:55 np0005486808 podman[93996]: 2025-10-14 08:23:55.602170264 +0000 UTC m=+0.043181732 container died dfe2b0a41a543eac8c3467a359e364f552fe8a3b4ede572b7e4594d4b91cf993 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 14 04:23:55 np0005486808 systemd[1]: var-lib-containers-storage-overlay-24a8631f08ff0784016f16b8b5b6af1d64616720d9218bf7e31347acb922b12d-merged.mount: Deactivated successfully.
Oct 14 04:23:55 np0005486808 podman[93996]: 2025-10-14 08:23:55.676001498 +0000 UTC m=+0.117012926 container remove dfe2b0a41a543eac8c3467a359e364f552fe8a3b4ede572b7e4594d4b91cf993 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 14 04:23:55 np0005486808 systemd[1]: libpod-conmon-dfe2b0a41a543eac8c3467a359e364f552fe8a3b4ede572b7e4594d4b91cf993.scope: Deactivated successfully.
Oct 14 04:23:55 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:23:55 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:55 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:23:55 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:55 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Oct 14 04:23:55 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:55 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Oct 14 04:23:55 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:55 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Oct 14 04:23:55 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:55 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Oct 14 04:23:55 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:55 np0005486808 ceph-mgr[74543]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Oct 14 04:23:55 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Oct 14 04:23:55 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Oct 14 04:23:55 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 14 04:23:55 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Oct 14 04:23:55 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 14 04:23:55 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:23:55 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:23:55 np0005486808 ceph-mgr[74543]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Oct 14 04:23:55 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Oct 14 04:23:56 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Oct 14 04:23:56 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/3630956899' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct 14 04:23:56 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:56 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:56 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:56 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:56 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:56 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:56 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 14 04:23:56 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3630956899' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct 14 04:23:56 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Oct 14 04:23:56 np0005486808 recursing_wright[93945]: enabled application 'rbd' on pool 'volumes'
Oct 14 04:23:56 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Oct 14 04:23:56 np0005486808 podman[94178]: 2025-10-14 08:23:56.583932918 +0000 UTC m=+0.047774312 container create e5e118c6c807b6cc0a5fa84a9a815a5c4d71181a3a4948c1a9cf24d8586d94a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lumiere, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:23:56 np0005486808 systemd[1]: libpod-915d5b64b7e4b3de2824d68df741fbd666c61bb308ee98da9303f9afda755375.scope: Deactivated successfully.
Oct 14 04:23:56 np0005486808 podman[93930]: 2025-10-14 08:23:56.598900166 +0000 UTC m=+1.764269768 container died 915d5b64b7e4b3de2824d68df741fbd666c61bb308ee98da9303f9afda755375 (image=quay.io/ceph/ceph:v18, name=recursing_wright, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 14 04:23:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v59: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:23:56 np0005486808 systemd[1]: Started libpod-conmon-e5e118c6c807b6cc0a5fa84a9a815a5c4d71181a3a4948c1a9cf24d8586d94a9.scope.
Oct 14 04:23:56 np0005486808 systemd[1]: var-lib-containers-storage-overlay-d623a18d0c5f9b92f973feb35f3764dbfefbd2e99ed022e072a4946bec7ce656-merged.mount: Deactivated successfully.
Oct 14 04:23:56 np0005486808 podman[94178]: 2025-10-14 08:23:56.565526048 +0000 UTC m=+0.029367462 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:23:56 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:23:56 np0005486808 podman[93930]: 2025-10-14 08:23:56.670308802 +0000 UTC m=+1.835678364 container remove 915d5b64b7e4b3de2824d68df741fbd666c61bb308ee98da9303f9afda755375 (image=quay.io/ceph/ceph:v18, name=recursing_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 14 04:23:56 np0005486808 systemd[1]: libpod-conmon-915d5b64b7e4b3de2824d68df741fbd666c61bb308ee98da9303f9afda755375.scope: Deactivated successfully.
Oct 14 04:23:56 np0005486808 podman[94178]: 2025-10-14 08:23:56.686514219 +0000 UTC m=+0.150355723 container init e5e118c6c807b6cc0a5fa84a9a815a5c4d71181a3a4948c1a9cf24d8586d94a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lumiere, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:23:56 np0005486808 podman[94178]: 2025-10-14 08:23:56.694837128 +0000 UTC m=+0.158678522 container start e5e118c6c807b6cc0a5fa84a9a815a5c4d71181a3a4948c1a9cf24d8586d94a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lumiere, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:23:56 np0005486808 podman[94178]: 2025-10-14 08:23:56.699753735 +0000 UTC m=+0.163595239 container attach e5e118c6c807b6cc0a5fa84a9a815a5c4d71181a3a4948c1a9cf24d8586d94a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lumiere, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 14 04:23:56 np0005486808 sleepy_lumiere[94203]: 167 167
Oct 14 04:23:56 np0005486808 systemd[1]: libpod-e5e118c6c807b6cc0a5fa84a9a815a5c4d71181a3a4948c1a9cf24d8586d94a9.scope: Deactivated successfully.
Oct 14 04:23:56 np0005486808 podman[94178]: 2025-10-14 08:23:56.701190379 +0000 UTC m=+0.165031783 container died e5e118c6c807b6cc0a5fa84a9a815a5c4d71181a3a4948c1a9cf24d8586d94a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lumiere, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 14 04:23:56 np0005486808 systemd[1]: var-lib-containers-storage-overlay-49ae21a42f8c7ba6880f9198f73296822f54c1d79c5bdeb4fecbfcc5b032ff9d-merged.mount: Deactivated successfully.
Oct 14 04:23:56 np0005486808 podman[94178]: 2025-10-14 08:23:56.748641113 +0000 UTC m=+0.212482507 container remove e5e118c6c807b6cc0a5fa84a9a815a5c4d71181a3a4948c1a9cf24d8586d94a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lumiere, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Oct 14 04:23:56 np0005486808 systemd[1]: libpod-conmon-e5e118c6c807b6cc0a5fa84a9a815a5c4d71181a3a4948c1a9cf24d8586d94a9.scope: Deactivated successfully.
Oct 14 04:23:56 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:23:56 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:56 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:23:56 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:56 np0005486808 ceph-mgr[74543]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.euuwqu (unknown last config time)...
Oct 14 04:23:56 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.euuwqu (unknown last config time)...
Oct 14 04:23:56 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.euuwqu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Oct 14 04:23:56 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.euuwqu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 14 04:23:56 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 14 04:23:56 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 14 04:23:56 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:23:56 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:23:56 np0005486808 ceph-mgr[74543]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.euuwqu on compute-0
Oct 14 04:23:56 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.euuwqu on compute-0
Oct 14 04:23:57 np0005486808 ceph-mon[74249]: log_channel(cluster) log [WRN] : Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 14 04:23:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:23:57 np0005486808 python3[94254]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:23:57 np0005486808 podman[94303]: 2025-10-14 08:23:57.083270477 +0000 UTC m=+0.040088489 container create 57cfad95d366048845ea12074414cae0dc93e30235921463f1f7d8c233931b9f (image=quay.io/ceph/ceph:v18, name=recursing_rubin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 14 04:23:57 np0005486808 systemd[1]: Started libpod-conmon-57cfad95d366048845ea12074414cae0dc93e30235921463f1f7d8c233931b9f.scope.
Oct 14 04:23:57 np0005486808 podman[94303]: 2025-10-14 08:23:57.066174219 +0000 UTC m=+0.022992271 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:23:57 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:23:57 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bbc04240cc047568966c380c0f1ad7edd64fbca4b0ddfd3379d93f555e4cce6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:57 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bbc04240cc047568966c380c0f1ad7edd64fbca4b0ddfd3379d93f555e4cce6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:57 np0005486808 podman[94303]: 2025-10-14 08:23:57.18721436 +0000 UTC m=+0.144032392 container init 57cfad95d366048845ea12074414cae0dc93e30235921463f1f7d8c233931b9f (image=quay.io/ceph/ceph:v18, name=recursing_rubin, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 14 04:23:57 np0005486808 podman[94303]: 2025-10-14 08:23:57.198751806 +0000 UTC m=+0.155569818 container start 57cfad95d366048845ea12074414cae0dc93e30235921463f1f7d8c233931b9f (image=quay.io/ceph/ceph:v18, name=recursing_rubin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 04:23:57 np0005486808 podman[94303]: 2025-10-14 08:23:57.202355602 +0000 UTC m=+0.159173604 container attach 57cfad95d366048845ea12074414cae0dc93e30235921463f1f7d8c233931b9f (image=quay.io/ceph/ceph:v18, name=recursing_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 14 04:23:57 np0005486808 podman[94388]: 2025-10-14 08:23:57.562635509 +0000 UTC m=+0.065388803 container create ae7f62fbc47cf3b2adabf142804ca15b121e3bab7a38ce9c3d8480eae610acb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_carver, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:23:57 np0005486808 ceph-mon[74249]: Reconfiguring mon.compute-0 (unknown last config time)...
Oct 14 04:23:57 np0005486808 ceph-mon[74249]: Reconfiguring daemon mon.compute-0 on compute-0
Oct 14 04:23:57 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/3630956899' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct 14 04:23:57 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:57 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:57 np0005486808 ceph-mon[74249]: Reconfiguring mgr.compute-0.euuwqu (unknown last config time)...
Oct 14 04:23:57 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.euuwqu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 14 04:23:57 np0005486808 ceph-mon[74249]: Reconfiguring daemon mgr.compute-0.euuwqu on compute-0
Oct 14 04:23:57 np0005486808 ceph-mon[74249]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 14 04:23:57 np0005486808 systemd[1]: Started libpod-conmon-ae7f62fbc47cf3b2adabf142804ca15b121e3bab7a38ce9c3d8480eae610acb1.scope.
Oct 14 04:23:57 np0005486808 podman[94388]: 2025-10-14 08:23:57.536888444 +0000 UTC m=+0.039641788 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:23:57 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:23:57 np0005486808 podman[94388]: 2025-10-14 08:23:57.646243946 +0000 UTC m=+0.148997260 container init ae7f62fbc47cf3b2adabf142804ca15b121e3bab7a38ce9c3d8480eae610acb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:23:57 np0005486808 podman[94388]: 2025-10-14 08:23:57.659299188 +0000 UTC m=+0.162052512 container start ae7f62fbc47cf3b2adabf142804ca15b121e3bab7a38ce9c3d8480eae610acb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_carver, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:23:57 np0005486808 tender_carver[94423]: 167 167
Oct 14 04:23:57 np0005486808 systemd[1]: libpod-ae7f62fbc47cf3b2adabf142804ca15b121e3bab7a38ce9c3d8480eae610acb1.scope: Deactivated successfully.
Oct 14 04:23:57 np0005486808 podman[94388]: 2025-10-14 08:23:57.665954847 +0000 UTC m=+0.168708171 container attach ae7f62fbc47cf3b2adabf142804ca15b121e3bab7a38ce9c3d8480eae610acb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_carver, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:23:57 np0005486808 podman[94388]: 2025-10-14 08:23:57.666573982 +0000 UTC m=+0.169327276 container died ae7f62fbc47cf3b2adabf142804ca15b121e3bab7a38ce9c3d8480eae610acb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 14 04:23:57 np0005486808 systemd[1]: var-lib-containers-storage-overlay-26c887858bc3a46428a00a243e14e66aaeea21fce1842b7732c24bef6a013528-merged.mount: Deactivated successfully.
Oct 14 04:23:57 np0005486808 podman[94388]: 2025-10-14 08:23:57.709824515 +0000 UTC m=+0.212577809 container remove ae7f62fbc47cf3b2adabf142804ca15b121e3bab7a38ce9c3d8480eae610acb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_carver, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:23:57 np0005486808 systemd[1]: libpod-conmon-ae7f62fbc47cf3b2adabf142804ca15b121e3bab7a38ce9c3d8480eae610acb1.scope: Deactivated successfully.
Oct 14 04:23:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:23:57 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:23:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Oct 14 04:23:57 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1100702057' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct 14 04:23:57 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v60: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:23:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Oct 14 04:23:58 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:58 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/1100702057' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct 14 04:23:58 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:58 np0005486808 podman[94615]: 2025-10-14 08:23:58.789767205 +0000 UTC m=+0.095474532 container exec c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 14 04:23:58 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1100702057' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct 14 04:23:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Oct 14 04:23:58 np0005486808 recursing_rubin[94342]: enabled application 'rbd' on pool 'backups'
Oct 14 04:23:58 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Oct 14 04:23:58 np0005486808 systemd[1]: libpod-57cfad95d366048845ea12074414cae0dc93e30235921463f1f7d8c233931b9f.scope: Deactivated successfully.
Oct 14 04:23:58 np0005486808 podman[94303]: 2025-10-14 08:23:58.840635919 +0000 UTC m=+1.797453941 container died 57cfad95d366048845ea12074414cae0dc93e30235921463f1f7d8c233931b9f (image=quay.io/ceph/ceph:v18, name=recursing_rubin, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 14 04:23:58 np0005486808 systemd[1]: var-lib-containers-storage-overlay-4bbc04240cc047568966c380c0f1ad7edd64fbca4b0ddfd3379d93f555e4cce6-merged.mount: Deactivated successfully.
Oct 14 04:23:58 np0005486808 podman[94303]: 2025-10-14 08:23:58.90431694 +0000 UTC m=+1.861134982 container remove 57cfad95d366048845ea12074414cae0dc93e30235921463f1f7d8c233931b9f (image=quay.io/ceph/ceph:v18, name=recursing_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 14 04:23:58 np0005486808 systemd[1]: libpod-conmon-57cfad95d366048845ea12074414cae0dc93e30235921463f1f7d8c233931b9f.scope: Deactivated successfully.
Oct 14 04:23:58 np0005486808 podman[94615]: 2025-10-14 08:23:58.941111529 +0000 UTC m=+0.246818816 container exec_died c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:23:59 np0005486808 python3[94709]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:23:59 np0005486808 podman[94730]: 2025-10-14 08:23:59.356502713 +0000 UTC m=+0.059662607 container create 3c207ae90782290239394d9d80316b3180dd54b3fb9160131405ef71b9747f35 (image=quay.io/ceph/ceph:v18, name=determined_zhukovsky, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 14 04:23:59 np0005486808 systemd[1]: Started libpod-conmon-3c207ae90782290239394d9d80316b3180dd54b3fb9160131405ef71b9747f35.scope.
Oct 14 04:23:59 np0005486808 podman[94730]: 2025-10-14 08:23:59.326744142 +0000 UTC m=+0.029904086 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:23:59 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:23:59 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53e887d6d3cefe03477fb505e2645cdf30f7fb86d84bd2ce6cc50a7ce635d0d1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:59 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53e887d6d3cefe03477fb505e2645cdf30f7fb86d84bd2ce6cc50a7ce635d0d1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:23:59 np0005486808 podman[94730]: 2025-10-14 08:23:59.47316439 +0000 UTC m=+0.176324294 container init 3c207ae90782290239394d9d80316b3180dd54b3fb9160131405ef71b9747f35 (image=quay.io/ceph/ceph:v18, name=determined_zhukovsky, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:23:59 np0005486808 podman[94730]: 2025-10-14 08:23:59.484425059 +0000 UTC m=+0.187584953 container start 3c207ae90782290239394d9d80316b3180dd54b3fb9160131405ef71b9747f35 (image=quay.io/ceph/ceph:v18, name=determined_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:23:59 np0005486808 podman[94730]: 2025-10-14 08:23:59.489689375 +0000 UTC m=+0.192849279 container attach 3c207ae90782290239394d9d80316b3180dd54b3fb9160131405ef71b9747f35 (image=quay.io/ceph/ceph:v18, name=determined_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 14 04:23:59 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:23:59 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:59 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:23:59 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:59 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/1100702057' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct 14 04:23:59 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:23:59 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Oct 14 04:24:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3527189901' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct 14 04:24:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v62: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:24:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:24:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:24:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 04:24:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:24:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 04:24:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:00 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev ba9a61ec-2d2e-41c0-9121-4d0419af98fe does not exist
Oct 14 04:24:00 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 852f7b20-24bb-462e-ae75-489f301d60d9 does not exist
Oct 14 04:24:00 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 39beffe3-7a46-4254-ae0e-140abe39157a does not exist
Oct 14 04:24:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 04:24:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 04:24:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 04:24:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:24:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:24:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:24:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Oct 14 04:24:00 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/3527189901' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct 14 04:24:00 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:24:00 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:00 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:24:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3527189901' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct 14 04:24:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Oct 14 04:24:00 np0005486808 determined_zhukovsky[94761]: enabled application 'rbd' on pool 'images'
Oct 14 04:24:00 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Oct 14 04:24:00 np0005486808 systemd[1]: libpod-3c207ae90782290239394d9d80316b3180dd54b3fb9160131405ef71b9747f35.scope: Deactivated successfully.
Oct 14 04:24:00 np0005486808 podman[94730]: 2025-10-14 08:24:00.857163243 +0000 UTC m=+1.560323137 container died 3c207ae90782290239394d9d80316b3180dd54b3fb9160131405ef71b9747f35 (image=quay.io/ceph/ceph:v18, name=determined_zhukovsky, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 04:24:00 np0005486808 systemd[1]: var-lib-containers-storage-overlay-53e887d6d3cefe03477fb505e2645cdf30f7fb86d84bd2ce6cc50a7ce635d0d1-merged.mount: Deactivated successfully.
Oct 14 04:24:00 np0005486808 podman[94730]: 2025-10-14 08:24:00.920092396 +0000 UTC m=+1.623252250 container remove 3c207ae90782290239394d9d80316b3180dd54b3fb9160131405ef71b9747f35 (image=quay.io/ceph/ceph:v18, name=determined_zhukovsky, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 04:24:00 np0005486808 systemd[1]: libpod-conmon-3c207ae90782290239394d9d80316b3180dd54b3fb9160131405ef71b9747f35.scope: Deactivated successfully.
Oct 14 04:24:01 np0005486808 python3[95084]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:24:01 np0005486808 podman[95112]: 2025-10-14 08:24:01.297128604 +0000 UTC m=+0.054917023 container create 605d93e52aeb57f6ebdf314cb38b7a0e039b0e977505a0bca3e06fb21341d40e (image=quay.io/ceph/ceph:v18, name=crazy_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 14 04:24:01 np0005486808 podman[95137]: 2025-10-14 08:24:01.328212376 +0000 UTC m=+0.042335682 container create 98157f7d9dded5c611962893a03b7b8542302eea3f7a2239b96478f94629615e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_greider, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 14 04:24:01 np0005486808 systemd[1]: Started libpod-conmon-605d93e52aeb57f6ebdf314cb38b7a0e039b0e977505a0bca3e06fb21341d40e.scope.
Oct 14 04:24:01 np0005486808 systemd[1]: Started libpod-conmon-98157f7d9dded5c611962893a03b7b8542302eea3f7a2239b96478f94629615e.scope.
Oct 14 04:24:01 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:01 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9b7a67b2c059729a1cae9146eda3aa051e6f45e0b6d054cb63758d4ea7c8c9c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:01 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9b7a67b2c059729a1cae9146eda3aa051e6f45e0b6d054cb63758d4ea7c8c9c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:01 np0005486808 podman[95112]: 2025-10-14 08:24:01.275536468 +0000 UTC m=+0.033324867 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:24:01 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:01 np0005486808 podman[95112]: 2025-10-14 08:24:01.382836911 +0000 UTC m=+0.140625310 container init 605d93e52aeb57f6ebdf314cb38b7a0e039b0e977505a0bca3e06fb21341d40e (image=quay.io/ceph/ceph:v18, name=crazy_turing, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:24:01 np0005486808 podman[95112]: 2025-10-14 08:24:01.388332892 +0000 UTC m=+0.146121271 container start 605d93e52aeb57f6ebdf314cb38b7a0e039b0e977505a0bca3e06fb21341d40e (image=quay.io/ceph/ceph:v18, name=crazy_turing, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 14 04:24:01 np0005486808 podman[95137]: 2025-10-14 08:24:01.390985906 +0000 UTC m=+0.105109262 container init 98157f7d9dded5c611962893a03b7b8542302eea3f7a2239b96478f94629615e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_greider, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Oct 14 04:24:01 np0005486808 podman[95112]: 2025-10-14 08:24:01.39366354 +0000 UTC m=+0.151451919 container attach 605d93e52aeb57f6ebdf314cb38b7a0e039b0e977505a0bca3e06fb21341d40e (image=quay.io/ceph/ceph:v18, name=crazy_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:24:01 np0005486808 podman[95137]: 2025-10-14 08:24:01.400504733 +0000 UTC m=+0.114628029 container start 98157f7d9dded5c611962893a03b7b8542302eea3f7a2239b96478f94629615e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:24:01 np0005486808 podman[95137]: 2025-10-14 08:24:01.403430803 +0000 UTC m=+0.117554099 container attach 98157f7d9dded5c611962893a03b7b8542302eea3f7a2239b96478f94629615e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_greider, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 14 04:24:01 np0005486808 admiring_greider[95160]: 167 167
Oct 14 04:24:01 np0005486808 podman[95137]: 2025-10-14 08:24:01.307517832 +0000 UTC m=+0.021641158 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:24:01 np0005486808 systemd[1]: libpod-98157f7d9dded5c611962893a03b7b8542302eea3f7a2239b96478f94629615e.scope: Deactivated successfully.
Oct 14 04:24:01 np0005486808 podman[95137]: 2025-10-14 08:24:01.404501869 +0000 UTC m=+0.118625165 container died 98157f7d9dded5c611962893a03b7b8542302eea3f7a2239b96478f94629615e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 14 04:24:01 np0005486808 systemd[1]: var-lib-containers-storage-overlay-2a28cc6e99e3fd27d0c60d28a32eb43e6a83ba750cb1d345f1d45f097820d94d-merged.mount: Deactivated successfully.
Oct 14 04:24:01 np0005486808 podman[95137]: 2025-10-14 08:24:01.442325042 +0000 UTC m=+0.156448338 container remove 98157f7d9dded5c611962893a03b7b8542302eea3f7a2239b96478f94629615e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_greider, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:24:01 np0005486808 systemd[1]: libpod-conmon-98157f7d9dded5c611962893a03b7b8542302eea3f7a2239b96478f94629615e.scope: Deactivated successfully.
Oct 14 04:24:01 np0005486808 podman[95185]: 2025-10-14 08:24:01.648960639 +0000 UTC m=+0.044205407 container create 98c5092af97a0d96b7c5e5a00f85a866c8f60bdbb4859c42a10c1c428ac9cb52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 04:24:01 np0005486808 systemd[1]: Started libpod-conmon-98c5092af97a0d96b7c5e5a00f85a866c8f60bdbb4859c42a10c1c428ac9cb52.scope.
Oct 14 04:24:01 np0005486808 podman[95185]: 2025-10-14 08:24:01.629358961 +0000 UTC m=+0.024603699 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:24:01 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:01 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b7f926d6f77bf3e7bf42c2386afb6de155594322fba3720fc243392f17737a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:01 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b7f926d6f77bf3e7bf42c2386afb6de155594322fba3720fc243392f17737a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:01 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b7f926d6f77bf3e7bf42c2386afb6de155594322fba3720fc243392f17737a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:01 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b7f926d6f77bf3e7bf42c2386afb6de155594322fba3720fc243392f17737a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:01 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b7f926d6f77bf3e7bf42c2386afb6de155594322fba3720fc243392f17737a0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:01 np0005486808 podman[95185]: 2025-10-14 08:24:01.758332872 +0000 UTC m=+0.153577620 container init 98c5092af97a0d96b7c5e5a00f85a866c8f60bdbb4859c42a10c1c428ac9cb52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_kirch, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:24:01 np0005486808 podman[95185]: 2025-10-14 08:24:01.766788494 +0000 UTC m=+0.162033222 container start 98c5092af97a0d96b7c5e5a00f85a866c8f60bdbb4859c42a10c1c428ac9cb52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:24:01 np0005486808 podman[95185]: 2025-10-14 08:24:01.771560198 +0000 UTC m=+0.166804926 container attach 98c5092af97a0d96b7c5e5a00f85a866c8f60bdbb4859c42a10c1c428ac9cb52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_kirch, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Oct 14 04:24:01 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/3527189901' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct 14 04:24:01 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Oct 14 04:24:01 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1390273429' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct 14 04:24:02 np0005486808 ceph-mon[74249]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 14 04:24:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:24:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v64: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:24:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:24:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:24:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:24:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:24:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:24:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:24:02 np0005486808 mystifying_kirch[95204]: --> passed data devices: 0 physical, 3 LVM
Oct 14 04:24:02 np0005486808 mystifying_kirch[95204]: --> relative data size: 1.0
Oct 14 04:24:02 np0005486808 mystifying_kirch[95204]: --> All data devices are unavailable
Oct 14 04:24:02 np0005486808 systemd[1]: libpod-98c5092af97a0d96b7c5e5a00f85a866c8f60bdbb4859c42a10c1c428ac9cb52.scope: Deactivated successfully.
Oct 14 04:24:02 np0005486808 podman[95185]: 2025-10-14 08:24:02.729356938 +0000 UTC m=+1.124601696 container died 98c5092af97a0d96b7c5e5a00f85a866c8f60bdbb4859c42a10c1c428ac9cb52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 04:24:02 np0005486808 systemd[1]: var-lib-containers-storage-overlay-4b7f926d6f77bf3e7bf42c2386afb6de155594322fba3720fc243392f17737a0-merged.mount: Deactivated successfully.
Oct 14 04:24:02 np0005486808 podman[95185]: 2025-10-14 08:24:02.790407007 +0000 UTC m=+1.185651735 container remove 98c5092af97a0d96b7c5e5a00f85a866c8f60bdbb4859c42a10c1c428ac9cb52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:24:02 np0005486808 systemd[1]: libpod-conmon-98c5092af97a0d96b7c5e5a00f85a866c8f60bdbb4859c42a10c1c428ac9cb52.scope: Deactivated successfully.
Oct 14 04:24:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Oct 14 04:24:02 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/1390273429' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct 14 04:24:02 np0005486808 ceph-mon[74249]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 14 04:24:02 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1390273429' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct 14 04:24:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Oct 14 04:24:02 np0005486808 crazy_turing[95155]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Oct 14 04:24:02 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Oct 14 04:24:02 np0005486808 systemd[1]: libpod-605d93e52aeb57f6ebdf314cb38b7a0e039b0e977505a0bca3e06fb21341d40e.scope: Deactivated successfully.
Oct 14 04:24:02 np0005486808 podman[95112]: 2025-10-14 08:24:02.877396265 +0000 UTC m=+1.635184664 container died 605d93e52aeb57f6ebdf314cb38b7a0e039b0e977505a0bca3e06fb21341d40e (image=quay.io/ceph/ceph:v18, name=crazy_turing, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 14 04:24:02 np0005486808 systemd[1]: var-lib-containers-storage-overlay-c9b7a67b2c059729a1cae9146eda3aa051e6f45e0b6d054cb63758d4ea7c8c9c-merged.mount: Deactivated successfully.
Oct 14 04:24:02 np0005486808 podman[95112]: 2025-10-14 08:24:02.926377905 +0000 UTC m=+1.684166324 container remove 605d93e52aeb57f6ebdf314cb38b7a0e039b0e977505a0bca3e06fb21341d40e (image=quay.io/ceph/ceph:v18, name=crazy_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Oct 14 04:24:02 np0005486808 systemd[1]: libpod-conmon-605d93e52aeb57f6ebdf314cb38b7a0e039b0e977505a0bca3e06fb21341d40e.scope: Deactivated successfully.
Oct 14 04:24:03 np0005486808 python3[95398]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:24:03 np0005486808 podman[95406]: 2025-10-14 08:24:03.388996217 +0000 UTC m=+0.067756640 container create 9c16e4fc121f62291ca9f768d993908b229435483469ffa2c09e9f5fe1ee66c2 (image=quay.io/ceph/ceph:v18, name=clever_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Oct 14 04:24:03 np0005486808 systemd[1]: Started libpod-conmon-9c16e4fc121f62291ca9f768d993908b229435483469ffa2c09e9f5fe1ee66c2.scope.
Oct 14 04:24:03 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:03 np0005486808 podman[95406]: 2025-10-14 08:24:03.361002698 +0000 UTC m=+0.039763191 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:24:03 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2be6188064e31149aa4e3b0a1301caf82f973019805f1afd0add55c76e592747/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:03 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2be6188064e31149aa4e3b0a1301caf82f973019805f1afd0add55c76e592747/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:03 np0005486808 podman[95406]: 2025-10-14 08:24:03.474333745 +0000 UTC m=+0.153094188 container init 9c16e4fc121f62291ca9f768d993908b229435483469ffa2c09e9f5fe1ee66c2 (image=quay.io/ceph/ceph:v18, name=clever_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 14 04:24:03 np0005486808 podman[95406]: 2025-10-14 08:24:03.481677301 +0000 UTC m=+0.160437724 container start 9c16e4fc121f62291ca9f768d993908b229435483469ffa2c09e9f5fe1ee66c2 (image=quay.io/ceph/ceph:v18, name=clever_shaw, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:24:03 np0005486808 podman[95406]: 2025-10-14 08:24:03.484490108 +0000 UTC m=+0.163250531 container attach 9c16e4fc121f62291ca9f768d993908b229435483469ffa2c09e9f5fe1ee66c2 (image=quay.io/ceph/ceph:v18, name=clever_shaw, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:24:03 np0005486808 podman[95461]: 2025-10-14 08:24:03.667983742 +0000 UTC m=+0.064710727 container create bac9dbc9b0c4add44c0f638f9c1a6600ea7fd31d13fbf31321e65da998fc48da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 14 04:24:03 np0005486808 systemd[1]: Started libpod-conmon-bac9dbc9b0c4add44c0f638f9c1a6600ea7fd31d13fbf31321e65da998fc48da.scope.
Oct 14 04:24:03 np0005486808 podman[95461]: 2025-10-14 08:24:03.638313043 +0000 UTC m=+0.035040078 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:24:03 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:03 np0005486808 podman[95461]: 2025-10-14 08:24:03.763577695 +0000 UTC m=+0.160304720 container init bac9dbc9b0c4add44c0f638f9c1a6600ea7fd31d13fbf31321e65da998fc48da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 14 04:24:03 np0005486808 podman[95461]: 2025-10-14 08:24:03.769667061 +0000 UTC m=+0.166394056 container start bac9dbc9b0c4add44c0f638f9c1a6600ea7fd31d13fbf31321e65da998fc48da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_jemison, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 14 04:24:03 np0005486808 podman[95461]: 2025-10-14 08:24:03.773969054 +0000 UTC m=+0.170696079 container attach bac9dbc9b0c4add44c0f638f9c1a6600ea7fd31d13fbf31321e65da998fc48da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_jemison, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 04:24:03 np0005486808 serene_jemison[95477]: 167 167
Oct 14 04:24:03 np0005486808 systemd[1]: libpod-bac9dbc9b0c4add44c0f638f9c1a6600ea7fd31d13fbf31321e65da998fc48da.scope: Deactivated successfully.
Oct 14 04:24:03 np0005486808 podman[95461]: 2025-10-14 08:24:03.778700737 +0000 UTC m=+0.175427692 container died bac9dbc9b0c4add44c0f638f9c1a6600ea7fd31d13fbf31321e65da998fc48da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_jemison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 14 04:24:03 np0005486808 systemd[1]: var-lib-containers-storage-overlay-990f21aa29f1cffa99b5dcae69ce18c4f3e307f9fdebdd6c875903d29057ef76-merged.mount: Deactivated successfully.
Oct 14 04:24:03 np0005486808 podman[95461]: 2025-10-14 08:24:03.818820395 +0000 UTC m=+0.215547350 container remove bac9dbc9b0c4add44c0f638f9c1a6600ea7fd31d13fbf31321e65da998fc48da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Oct 14 04:24:03 np0005486808 systemd[1]: libpod-conmon-bac9dbc9b0c4add44c0f638f9c1a6600ea7fd31d13fbf31321e65da998fc48da.scope: Deactivated successfully.
Oct 14 04:24:03 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/1390273429' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct 14 04:24:03 np0005486808 podman[95520]: 2025-10-14 08:24:03.98017226 +0000 UTC m=+0.052829493 container create f42d6d7abe428e544f32409636873c475cc6baa3740591900697ae5f5d5fa9a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_banzai, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 14 04:24:04 np0005486808 systemd[1]: Started libpod-conmon-f42d6d7abe428e544f32409636873c475cc6baa3740591900697ae5f5d5fa9a0.scope.
Oct 14 04:24:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Oct 14 04:24:04 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2586935441' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct 14 04:24:04 np0005486808 podman[95520]: 2025-10-14 08:24:03.962657111 +0000 UTC m=+0.035314334 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:24:04 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:04 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f60b171567a278070387ee58d042b4ef5055abcce9c07625ed15a33ed0d8278/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:04 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f60b171567a278070387ee58d042b4ef5055abcce9c07625ed15a33ed0d8278/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:04 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f60b171567a278070387ee58d042b4ef5055abcce9c07625ed15a33ed0d8278/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:04 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f60b171567a278070387ee58d042b4ef5055abcce9c07625ed15a33ed0d8278/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:04 np0005486808 podman[95520]: 2025-10-14 08:24:04.090944786 +0000 UTC m=+0.163602019 container init f42d6d7abe428e544f32409636873c475cc6baa3740591900697ae5f5d5fa9a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_banzai, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 14 04:24:04 np0005486808 podman[95520]: 2025-10-14 08:24:04.105610426 +0000 UTC m=+0.178267659 container start f42d6d7abe428e544f32409636873c475cc6baa3740591900697ae5f5d5fa9a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_banzai, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 14 04:24:04 np0005486808 podman[95520]: 2025-10-14 08:24:04.109391287 +0000 UTC m=+0.182048490 container attach f42d6d7abe428e544f32409636873c475cc6baa3740591900697ae5f5d5fa9a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_banzai, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 04:24:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v66: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]: {
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:    "0": [
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:        {
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:            "devices": [
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:                "/dev/loop3"
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:            ],
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:            "lv_name": "ceph_lv0",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:            "lv_size": "21470642176",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:            "name": "ceph_lv0",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:            "tags": {
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:                "ceph.cluster_name": "ceph",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:                "ceph.crush_device_class": "",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:                "ceph.encrypted": "0",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:                "ceph.osd_id": "0",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:                "ceph.type": "block",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:                "ceph.vdo": "0"
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:            },
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:            "type": "block",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:            "vg_name": "ceph_vg0"
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:        }
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:    ],
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:    "1": [
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:        {
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:            "devices": [
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:                "/dev/loop4"
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:            ],
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:            "lv_name": "ceph_lv1",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:            "lv_size": "21470642176",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:            "name": "ceph_lv1",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:            "tags": {
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:                "ceph.cluster_name": "ceph",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:                "ceph.crush_device_class": "",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:                "ceph.encrypted": "0",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:                "ceph.osd_id": "1",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:                "ceph.type": "block",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:                "ceph.vdo": "0"
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:            },
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:            "type": "block",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:            "vg_name": "ceph_vg1"
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:        }
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:    ],
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:    "2": [
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:        {
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:            "devices": [
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:                "/dev/loop5"
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:            ],
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:            "lv_name": "ceph_lv2",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:            "lv_size": "21470642176",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:            "name": "ceph_lv2",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:            "tags": {
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:                "ceph.cluster_name": "ceph",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:                "ceph.crush_device_class": "",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:                "ceph.encrypted": "0",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:                "ceph.osd_id": "2",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:                "ceph.type": "block",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:                "ceph.vdo": "0"
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:            },
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:            "type": "block",
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:            "vg_name": "ceph_vg2"
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:        }
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]:    ]
Oct 14 04:24:04 np0005486808 friendly_banzai[95536]: }
Oct 14 04:24:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Oct 14 04:24:04 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/2586935441' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct 14 04:24:04 np0005486808 systemd[1]: libpod-f42d6d7abe428e544f32409636873c475cc6baa3740591900697ae5f5d5fa9a0.scope: Deactivated successfully.
Oct 14 04:24:04 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2586935441' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct 14 04:24:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Oct 14 04:24:04 np0005486808 podman[95520]: 2025-10-14 08:24:04.868653025 +0000 UTC m=+0.941310258 container died f42d6d7abe428e544f32409636873c475cc6baa3740591900697ae5f5d5fa9a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 14 04:24:04 np0005486808 clever_shaw[95440]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Oct 14 04:24:04 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Oct 14 04:24:04 np0005486808 systemd[1]: libpod-9c16e4fc121f62291ca9f768d993908b229435483469ffa2c09e9f5fe1ee66c2.scope: Deactivated successfully.
Oct 14 04:24:04 np0005486808 podman[95406]: 2025-10-14 08:24:04.905096806 +0000 UTC m=+1.583857279 container died 9c16e4fc121f62291ca9f768d993908b229435483469ffa2c09e9f5fe1ee66c2 (image=quay.io/ceph/ceph:v18, name=clever_shaw, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:24:04 np0005486808 systemd[1]: var-lib-containers-storage-overlay-8f60b171567a278070387ee58d042b4ef5055abcce9c07625ed15a33ed0d8278-merged.mount: Deactivated successfully.
Oct 14 04:24:04 np0005486808 systemd[1]: var-lib-containers-storage-overlay-2be6188064e31149aa4e3b0a1301caf82f973019805f1afd0add55c76e592747-merged.mount: Deactivated successfully.
Oct 14 04:24:04 np0005486808 podman[95520]: 2025-10-14 08:24:04.978977981 +0000 UTC m=+1.051635214 container remove f42d6d7abe428e544f32409636873c475cc6baa3740591900697ae5f5d5fa9a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_banzai, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:24:04 np0005486808 systemd[1]: libpod-conmon-f42d6d7abe428e544f32409636873c475cc6baa3740591900697ae5f5d5fa9a0.scope: Deactivated successfully.
Oct 14 04:24:04 np0005486808 podman[95406]: 2025-10-14 08:24:04.993262372 +0000 UTC m=+1.672022845 container remove 9c16e4fc121f62291ca9f768d993908b229435483469ffa2c09e9f5fe1ee66c2 (image=quay.io/ceph/ceph:v18, name=clever_shaw, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:24:05 np0005486808 systemd[1]: libpod-conmon-9c16e4fc121f62291ca9f768d993908b229435483469ffa2c09e9f5fe1ee66c2.scope: Deactivated successfully.
Oct 14 04:24:05 np0005486808 podman[95740]: 2025-10-14 08:24:05.778118462 +0000 UTC m=+0.067429722 container create 6a43dc405e30b81a46f3c3832e94d771e06d5868d62a96241235d4752421d737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:24:05 np0005486808 systemd[1]: Started libpod-conmon-6a43dc405e30b81a46f3c3832e94d771e06d5868d62a96241235d4752421d737.scope.
Oct 14 04:24:05 np0005486808 podman[95740]: 2025-10-14 08:24:05.752356977 +0000 UTC m=+0.041668307 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:24:05 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:05 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/2586935441' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct 14 04:24:05 np0005486808 podman[95740]: 2025-10-14 08:24:05.877270251 +0000 UTC m=+0.166581531 container init 6a43dc405e30b81a46f3c3832e94d771e06d5868d62a96241235d4752421d737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_agnesi, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:24:05 np0005486808 podman[95740]: 2025-10-14 08:24:05.887943916 +0000 UTC m=+0.177255176 container start 6a43dc405e30b81a46f3c3832e94d771e06d5868d62a96241235d4752421d737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 14 04:24:05 np0005486808 podman[95740]: 2025-10-14 08:24:05.891841709 +0000 UTC m=+0.181152969 container attach 6a43dc405e30b81a46f3c3832e94d771e06d5868d62a96241235d4752421d737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_agnesi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:24:05 np0005486808 confident_agnesi[95804]: 167 167
Oct 14 04:24:05 np0005486808 systemd[1]: libpod-6a43dc405e30b81a46f3c3832e94d771e06d5868d62a96241235d4752421d737.scope: Deactivated successfully.
Oct 14 04:24:05 np0005486808 podman[95740]: 2025-10-14 08:24:05.895110867 +0000 UTC m=+0.184422137 container died 6a43dc405e30b81a46f3c3832e94d771e06d5868d62a96241235d4752421d737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 14 04:24:05 np0005486808 systemd[1]: var-lib-containers-storage-overlay-52dd3fa579e121e97162eb97f1100db3064f54c9b7b5195ea626e8432c3abf39-merged.mount: Deactivated successfully.
Oct 14 04:24:05 np0005486808 podman[95740]: 2025-10-14 08:24:05.942270824 +0000 UTC m=+0.231582114 container remove 6a43dc405e30b81a46f3c3832e94d771e06d5868d62a96241235d4752421d737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_agnesi, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 14 04:24:05 np0005486808 systemd[1]: libpod-conmon-6a43dc405e30b81a46f3c3832e94d771e06d5868d62a96241235d4752421d737.scope: Deactivated successfully.
Oct 14 04:24:05 np0005486808 python3[95803]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:24:06 np0005486808 podman[95852]: 2025-10-14 08:24:06.155671911 +0000 UTC m=+0.053342106 container create 714a5fe1567739a0e942ec66fdadcb05307de119d8e377e65151e59ec181308a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Oct 14 04:24:06 np0005486808 systemd[1]: Started libpod-conmon-714a5fe1567739a0e942ec66fdadcb05307de119d8e377e65151e59ec181308a.scope.
Oct 14 04:24:06 np0005486808 podman[95852]: 2025-10-14 08:24:06.131957754 +0000 UTC m=+0.029628019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:24:06 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:06 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f639ff5a34f21a570536d9822418f466461447eb6e99508f1238da21a648c0a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:06 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f639ff5a34f21a570536d9822418f466461447eb6e99508f1238da21a648c0a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:06 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f639ff5a34f21a570536d9822418f466461447eb6e99508f1238da21a648c0a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:06 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f639ff5a34f21a570536d9822418f466461447eb6e99508f1238da21a648c0a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:06 np0005486808 podman[95852]: 2025-10-14 08:24:06.253764544 +0000 UTC m=+0.151434799 container init 714a5fe1567739a0e942ec66fdadcb05307de119d8e377e65151e59ec181308a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 04:24:06 np0005486808 podman[95852]: 2025-10-14 08:24:06.268123567 +0000 UTC m=+0.165793792 container start 714a5fe1567739a0e942ec66fdadcb05307de119d8e377e65151e59ec181308a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 14 04:24:06 np0005486808 podman[95852]: 2025-10-14 08:24:06.272736727 +0000 UTC m=+0.170406932 container attach 714a5fe1567739a0e942ec66fdadcb05307de119d8e377e65151e59ec181308a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 14 04:24:06 np0005486808 python3[95918]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760430245.6602724-33203-221544747072293/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:24:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v68: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:24:06 np0005486808 ceph-mon[74249]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 14 04:24:06 np0005486808 ceph-mon[74249]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 14 04:24:06 np0005486808 ceph-mon[74249]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 14 04:24:06 np0005486808 ceph-mon[74249]: Cluster is now healthy
Oct 14 04:24:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:24:07 np0005486808 python3[96027]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:24:07 np0005486808 heuristic_chebyshev[95906]: {
Oct 14 04:24:07 np0005486808 heuristic_chebyshev[95906]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 04:24:07 np0005486808 heuristic_chebyshev[95906]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:24:07 np0005486808 heuristic_chebyshev[95906]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 04:24:07 np0005486808 heuristic_chebyshev[95906]:        "osd_id": 2,
Oct 14 04:24:07 np0005486808 heuristic_chebyshev[95906]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:24:07 np0005486808 heuristic_chebyshev[95906]:        "type": "bluestore"
Oct 14 04:24:07 np0005486808 heuristic_chebyshev[95906]:    },
Oct 14 04:24:07 np0005486808 heuristic_chebyshev[95906]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 04:24:07 np0005486808 heuristic_chebyshev[95906]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:24:07 np0005486808 heuristic_chebyshev[95906]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 04:24:07 np0005486808 heuristic_chebyshev[95906]:        "osd_id": 1,
Oct 14 04:24:07 np0005486808 heuristic_chebyshev[95906]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:24:07 np0005486808 heuristic_chebyshev[95906]:        "type": "bluestore"
Oct 14 04:24:07 np0005486808 heuristic_chebyshev[95906]:    },
Oct 14 04:24:07 np0005486808 heuristic_chebyshev[95906]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 04:24:07 np0005486808 heuristic_chebyshev[95906]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:24:07 np0005486808 heuristic_chebyshev[95906]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 04:24:07 np0005486808 heuristic_chebyshev[95906]:        "osd_id": 0,
Oct 14 04:24:07 np0005486808 heuristic_chebyshev[95906]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:24:07 np0005486808 heuristic_chebyshev[95906]:        "type": "bluestore"
Oct 14 04:24:07 np0005486808 heuristic_chebyshev[95906]:    }
Oct 14 04:24:07 np0005486808 heuristic_chebyshev[95906]: }
Oct 14 04:24:07 np0005486808 systemd[1]: libpod-714a5fe1567739a0e942ec66fdadcb05307de119d8e377e65151e59ec181308a.scope: Deactivated successfully.
Oct 14 04:24:07 np0005486808 podman[95852]: 2025-10-14 08:24:07.236545852 +0000 UTC m=+1.134216047 container died 714a5fe1567739a0e942ec66fdadcb05307de119d8e377e65151e59ec181308a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:24:07 np0005486808 systemd[1]: var-lib-containers-storage-overlay-4f639ff5a34f21a570536d9822418f466461447eb6e99508f1238da21a648c0a-merged.mount: Deactivated successfully.
Oct 14 04:24:07 np0005486808 podman[95852]: 2025-10-14 08:24:07.291522666 +0000 UTC m=+1.189192891 container remove 714a5fe1567739a0e942ec66fdadcb05307de119d8e377e65151e59ec181308a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:24:07 np0005486808 systemd[1]: libpod-conmon-714a5fe1567739a0e942ec66fdadcb05307de119d8e377e65151e59ec181308a.scope: Deactivated successfully.
Oct 14 04:24:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:24:07 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:24:07 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:07 np0005486808 python3[96164]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760430246.7939146-33217-250876268691278/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=97f6f66403d35749b204bd6afac33e62b8a21847 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:24:07 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:07 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:08 np0005486808 python3[96239]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:24:08 np0005486808 podman[96240]: 2025-10-14 08:24:08.117277213 +0000 UTC m=+0.060770723 container create 49c9f6a6fe295685818832fe2a8d4c8e3abf4a67a2a5f080e3c9dbcd22faefec (image=quay.io/ceph/ceph:v18, name=nostalgic_saha, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:24:08 np0005486808 systemd[1]: Started libpod-conmon-49c9f6a6fe295685818832fe2a8d4c8e3abf4a67a2a5f080e3c9dbcd22faefec.scope.
Oct 14 04:24:08 np0005486808 podman[96240]: 2025-10-14 08:24:08.08786657 +0000 UTC m=+0.031360140 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:24:08 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:08 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daa060d868ab8f7d7ab61e5c835432fe3c4ab22d306b1ce3fe7bc0d510fb5981/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:08 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daa060d868ab8f7d7ab61e5c835432fe3c4ab22d306b1ce3fe7bc0d510fb5981/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:08 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daa060d868ab8f7d7ab61e5c835432fe3c4ab22d306b1ce3fe7bc0d510fb5981/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:08 np0005486808 podman[96240]: 2025-10-14 08:24:08.217867066 +0000 UTC m=+0.161360606 container init 49c9f6a6fe295685818832fe2a8d4c8e3abf4a67a2a5f080e3c9dbcd22faefec (image=quay.io/ceph/ceph:v18, name=nostalgic_saha, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 04:24:08 np0005486808 podman[96240]: 2025-10-14 08:24:08.230913928 +0000 UTC m=+0.174407468 container start 49c9f6a6fe295685818832fe2a8d4c8e3abf4a67a2a5f080e3c9dbcd22faefec (image=quay.io/ceph/ceph:v18, name=nostalgic_saha, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Oct 14 04:24:08 np0005486808 podman[96240]: 2025-10-14 08:24:08.235311223 +0000 UTC m=+0.178804763 container attach 49c9f6a6fe295685818832fe2a8d4c8e3abf4a67a2a5f080e3c9dbcd22faefec (image=quay.io/ceph/ceph:v18, name=nostalgic_saha, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:24:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v69: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:24:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Oct 14 04:24:08 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2579911161' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 14 04:24:08 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2579911161' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 14 04:24:08 np0005486808 nostalgic_saha[96255]: 
Oct 14 04:24:08 np0005486808 nostalgic_saha[96255]: [global]
Oct 14 04:24:08 np0005486808 nostalgic_saha[96255]: #011fsid = c49aadb6-9b04-5cb1-8f5f-4c91676c568e
Oct 14 04:24:08 np0005486808 nostalgic_saha[96255]: #011mon_host = 192.168.122.100
Oct 14 04:24:08 np0005486808 systemd[1]: libpod-49c9f6a6fe295685818832fe2a8d4c8e3abf4a67a2a5f080e3c9dbcd22faefec.scope: Deactivated successfully.
Oct 14 04:24:08 np0005486808 podman[96240]: 2025-10-14 08:24:08.793869697 +0000 UTC m=+0.737363237 container died 49c9f6a6fe295685818832fe2a8d4c8e3abf4a67a2a5f080e3c9dbcd22faefec (image=quay.io/ceph/ceph:v18, name=nostalgic_saha, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 14 04:24:08 np0005486808 systemd[1]: var-lib-containers-storage-overlay-daa060d868ab8f7d7ab61e5c835432fe3c4ab22d306b1ce3fe7bc0d510fb5981-merged.mount: Deactivated successfully.
Oct 14 04:24:08 np0005486808 podman[96240]: 2025-10-14 08:24:08.846959035 +0000 UTC m=+0.790452545 container remove 49c9f6a6fe295685818832fe2a8d4c8e3abf4a67a2a5f080e3c9dbcd22faefec (image=quay.io/ceph/ceph:v18, name=nostalgic_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 14 04:24:08 np0005486808 systemd[1]: libpod-conmon-49c9f6a6fe295685818832fe2a8d4c8e3abf4a67a2a5f080e3c9dbcd22faefec.scope: Deactivated successfully.
Oct 14 04:24:08 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/2579911161' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 14 04:24:08 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/2579911161' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 14 04:24:09 np0005486808 python3[96396]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:24:09 np0005486808 podman[96427]: 2025-10-14 08:24:09.258308892 +0000 UTC m=+0.055027115 container create 381b2c0054f603ee70648e49f137c4019142bd43b4b94f39810d5cca625d641c (image=quay.io/ceph/ceph:v18, name=reverent_cori, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:24:09 np0005486808 systemd[1]: Started libpod-conmon-381b2c0054f603ee70648e49f137c4019142bd43b4b94f39810d5cca625d641c.scope.
Oct 14 04:24:09 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:09 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34757b1f83ea10a1ebcdc136a211de6ea9845019ff19e3bd00fc94e2f408752a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:09 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34757b1f83ea10a1ebcdc136a211de6ea9845019ff19e3bd00fc94e2f408752a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:09 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34757b1f83ea10a1ebcdc136a211de6ea9845019ff19e3bd00fc94e2f408752a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:09 np0005486808 podman[96427]: 2025-10-14 08:24:09.233844768 +0000 UTC m=+0.030562971 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:24:09 np0005486808 podman[96427]: 2025-10-14 08:24:09.339108343 +0000 UTC m=+0.135826566 container init 381b2c0054f603ee70648e49f137c4019142bd43b4b94f39810d5cca625d641c (image=quay.io/ceph/ceph:v18, name=reverent_cori, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 04:24:09 np0005486808 podman[96427]: 2025-10-14 08:24:09.350154766 +0000 UTC m=+0.146872959 container start 381b2c0054f603ee70648e49f137c4019142bd43b4b94f39810d5cca625d641c (image=quay.io/ceph/ceph:v18, name=reverent_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:24:09 np0005486808 podman[96427]: 2025-10-14 08:24:09.35367181 +0000 UTC m=+0.150390033 container attach 381b2c0054f603ee70648e49f137c4019142bd43b4b94f39810d5cca625d641c (image=quay.io/ceph/ceph:v18, name=reverent_cori, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:24:09 np0005486808 podman[96506]: 2025-10-14 08:24:09.630854841 +0000 UTC m=+0.080516653 container exec c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:24:09 np0005486808 podman[96506]: 2025-10-14 08:24:09.746616907 +0000 UTC m=+0.196278729 container exec_died c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:24:09 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Oct 14 04:24:09 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1849514707' entity='client.admin' 
Oct 14 04:24:09 np0005486808 reverent_cori[96458]: set ssl_option
Oct 14 04:24:09 np0005486808 systemd[1]: libpod-381b2c0054f603ee70648e49f137c4019142bd43b4b94f39810d5cca625d641c.scope: Deactivated successfully.
Oct 14 04:24:09 np0005486808 podman[96427]: 2025-10-14 08:24:09.996174269 +0000 UTC m=+0.792892502 container died 381b2c0054f603ee70648e49f137c4019142bd43b4b94f39810d5cca625d641c (image=quay.io/ceph/ceph:v18, name=reverent_cori, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 14 04:24:10 np0005486808 systemd[1]: var-lib-containers-storage-overlay-34757b1f83ea10a1ebcdc136a211de6ea9845019ff19e3bd00fc94e2f408752a-merged.mount: Deactivated successfully.
Oct 14 04:24:10 np0005486808 podman[96427]: 2025-10-14 08:24:10.066457918 +0000 UTC m=+0.863176111 container remove 381b2c0054f603ee70648e49f137c4019142bd43b4b94f39810d5cca625d641c (image=quay.io/ceph/ceph:v18, name=reverent_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Oct 14 04:24:10 np0005486808 systemd[1]: libpod-conmon-381b2c0054f603ee70648e49f137c4019142bd43b4b94f39810d5cca625d641c.scope: Deactivated successfully.
Oct 14 04:24:10 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:24:10 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:10 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:24:10 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:10 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:24:10 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:24:10 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 04:24:10 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:24:10 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 04:24:10 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:10 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 9ff4a549-bb07-45f3-a9fc-2b239f4a67b5 does not exist
Oct 14 04:24:10 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 5cdf4a45-9cd7-4940-beb1-641e8355cfa8 does not exist
Oct 14 04:24:10 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev f8042f24-41d2-4eae-8909-4e3259d1b5ff does not exist
Oct 14 04:24:10 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 04:24:10 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 04:24:10 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 04:24:10 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:24:10 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:24:10 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:24:10 np0005486808 python3[96676]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:24:10 np0005486808 podman[96721]: 2025-10-14 08:24:10.523945427 +0000 UTC m=+0.043050570 container create 3d5a58c483d196eeb6cf43f976cbd0348f38ae94df9334174c247c0c5debf51f (image=quay.io/ceph/ceph:v18, name=mystifying_liskov, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:24:10 np0005486808 systemd[1]: Started libpod-conmon-3d5a58c483d196eeb6cf43f976cbd0348f38ae94df9334174c247c0c5debf51f.scope.
Oct 14 04:24:10 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:10 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54fd6e22d500c6de151ace65e1a3b5f6121da98cd3b0b637166b89b3d15b1bf0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:10 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54fd6e22d500c6de151ace65e1a3b5f6121da98cd3b0b637166b89b3d15b1bf0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:10 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54fd6e22d500c6de151ace65e1a3b5f6121da98cd3b0b637166b89b3d15b1bf0/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:10 np0005486808 podman[96721]: 2025-10-14 08:24:10.595834034 +0000 UTC m=+0.114939207 container init 3d5a58c483d196eeb6cf43f976cbd0348f38ae94df9334174c247c0c5debf51f (image=quay.io/ceph/ceph:v18, name=mystifying_liskov, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:24:10 np0005486808 podman[96721]: 2025-10-14 08:24:10.499176075 +0000 UTC m=+0.018281248 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:24:10 np0005486808 podman[96721]: 2025-10-14 08:24:10.602179366 +0000 UTC m=+0.121284509 container start 3d5a58c483d196eeb6cf43f976cbd0348f38ae94df9334174c247c0c5debf51f (image=quay.io/ceph/ceph:v18, name=mystifying_liskov, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:24:10 np0005486808 podman[96721]: 2025-10-14 08:24:10.605599608 +0000 UTC m=+0.124704771 container attach 3d5a58c483d196eeb6cf43f976cbd0348f38ae94df9334174c247c0c5debf51f (image=quay.io/ceph/ceph:v18, name=mystifying_liskov, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 14 04:24:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:24:10 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/1849514707' entity='client.admin' 
Oct 14 04:24:10 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:10 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:10 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:24:10 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:10 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:24:11 np0005486808 podman[96869]: 2025-10-14 08:24:11.021945904 +0000 UTC m=+0.068793144 container create 66224df4c3853239799c044e4377cd9fcb5f252faabe633cbd13f7bd6b767ad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_shockley, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 14 04:24:11 np0005486808 systemd[1]: Started libpod-conmon-66224df4c3853239799c044e4377cd9fcb5f252faabe633cbd13f7bd6b767ad1.scope.
Oct 14 04:24:11 np0005486808 podman[96869]: 2025-10-14 08:24:10.992230724 +0000 UTC m=+0.039078004 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:24:11 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:11 np0005486808 podman[96869]: 2025-10-14 08:24:11.107118179 +0000 UTC m=+0.153965469 container init 66224df4c3853239799c044e4377cd9fcb5f252faabe633cbd13f7bd6b767ad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_shockley, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 14 04:24:11 np0005486808 podman[96869]: 2025-10-14 08:24:11.117560148 +0000 UTC m=+0.164407348 container start 66224df4c3853239799c044e4377cd9fcb5f252faabe633cbd13f7bd6b767ad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 14 04:24:11 np0005486808 frosty_shockley[96886]: 167 167
Oct 14 04:24:11 np0005486808 systemd[1]: libpod-66224df4c3853239799c044e4377cd9fcb5f252faabe633cbd13f7bd6b767ad1.scope: Deactivated successfully.
Oct 14 04:24:11 np0005486808 podman[96869]: 2025-10-14 08:24:11.122062706 +0000 UTC m=+0.168909986 container attach 66224df4c3853239799c044e4377cd9fcb5f252faabe633cbd13f7bd6b767ad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 14 04:24:11 np0005486808 podman[96869]: 2025-10-14 08:24:11.123110241 +0000 UTC m=+0.169957481 container died 66224df4c3853239799c044e4377cd9fcb5f252faabe633cbd13f7bd6b767ad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 14 04:24:11 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 04:24:11 np0005486808 ceph-mgr[74543]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Oct 14 04:24:11 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Oct 14 04:24:11 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct 14 04:24:11 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:11 np0005486808 mystifying_liskov[96779]: Scheduled rgw.rgw update...
Oct 14 04:24:11 np0005486808 systemd[1]: var-lib-containers-storage-overlay-6336dafcbe2fe3bd07f0cd1455ff03cb786eb644b1c57301e769109e6ec758a3-merged.mount: Deactivated successfully.
Oct 14 04:24:11 np0005486808 podman[96869]: 2025-10-14 08:24:11.166080507 +0000 UTC m=+0.212927717 container remove 66224df4c3853239799c044e4377cd9fcb5f252faabe633cbd13f7bd6b767ad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_shockley, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 14 04:24:11 np0005486808 systemd[1]: libpod-3d5a58c483d196eeb6cf43f976cbd0348f38ae94df9334174c247c0c5debf51f.scope: Deactivated successfully.
Oct 14 04:24:11 np0005486808 podman[96721]: 2025-10-14 08:24:11.168077295 +0000 UTC m=+0.687182438 container died 3d5a58c483d196eeb6cf43f976cbd0348f38ae94df9334174c247c0c5debf51f (image=quay.io/ceph/ceph:v18, name=mystifying_liskov, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 14 04:24:11 np0005486808 systemd[1]: var-lib-containers-storage-overlay-54fd6e22d500c6de151ace65e1a3b5f6121da98cd3b0b637166b89b3d15b1bf0-merged.mount: Deactivated successfully.
Oct 14 04:24:11 np0005486808 systemd[1]: libpod-conmon-66224df4c3853239799c044e4377cd9fcb5f252faabe633cbd13f7bd6b767ad1.scope: Deactivated successfully.
Oct 14 04:24:11 np0005486808 podman[96721]: 2025-10-14 08:24:11.215653572 +0000 UTC m=+0.734758715 container remove 3d5a58c483d196eeb6cf43f976cbd0348f38ae94df9334174c247c0c5debf51f (image=quay.io/ceph/ceph:v18, name=mystifying_liskov, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 14 04:24:11 np0005486808 systemd[1]: libpod-conmon-3d5a58c483d196eeb6cf43f976cbd0348f38ae94df9334174c247c0c5debf51f.scope: Deactivated successfully.
Oct 14 04:24:11 np0005486808 podman[96925]: 2025-10-14 08:24:11.362578332 +0000 UTC m=+0.050163920 container create 854bf71675607c2bb623e918a602089a96444613042eb9de5158089e89616217 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_swirles, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Oct 14 04:24:11 np0005486808 systemd[1]: Started libpod-conmon-854bf71675607c2bb623e918a602089a96444613042eb9de5158089e89616217.scope.
Oct 14 04:24:11 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:11 np0005486808 podman[96925]: 2025-10-14 08:24:11.336365785 +0000 UTC m=+0.023951433 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:24:11 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5c5c157c03288a201bb4a13ff32ce5362c2720ed6a1dcb8c7c740bf1fc75870/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:11 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5c5c157c03288a201bb4a13ff32ce5362c2720ed6a1dcb8c7c740bf1fc75870/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:11 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5c5c157c03288a201bb4a13ff32ce5362c2720ed6a1dcb8c7c740bf1fc75870/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:11 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5c5c157c03288a201bb4a13ff32ce5362c2720ed6a1dcb8c7c740bf1fc75870/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:11 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5c5c157c03288a201bb4a13ff32ce5362c2720ed6a1dcb8c7c740bf1fc75870/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:11 np0005486808 podman[96925]: 2025-10-14 08:24:11.452230103 +0000 UTC m=+0.139815681 container init 854bf71675607c2bb623e918a602089a96444613042eb9de5158089e89616217 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_swirles, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:24:11 np0005486808 podman[96925]: 2025-10-14 08:24:11.45880153 +0000 UTC m=+0.146387098 container start 854bf71675607c2bb623e918a602089a96444613042eb9de5158089e89616217 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_swirles, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 14 04:24:11 np0005486808 podman[96925]: 2025-10-14 08:24:11.462276683 +0000 UTC m=+0.149862251 container attach 854bf71675607c2bb623e918a602089a96444613042eb9de5158089e89616217 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:24:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:24:12 np0005486808 ceph-mon[74249]: Saving service rgw.rgw spec with placement compute-0
Oct 14 04:24:12 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:12 np0005486808 python3[97023]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:24:12 np0005486808 naughty_swirles[96942]: --> passed data devices: 0 physical, 3 LVM
Oct 14 04:24:12 np0005486808 naughty_swirles[96942]: --> relative data size: 1.0
Oct 14 04:24:12 np0005486808 naughty_swirles[96942]: --> All data devices are unavailable
Oct 14 04:24:12 np0005486808 python3[97113]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760430251.9219217-33258-217667512228062/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:24:12 np0005486808 systemd[1]: libpod-854bf71675607c2bb623e918a602089a96444613042eb9de5158089e89616217.scope: Deactivated successfully.
Oct 14 04:24:12 np0005486808 systemd[1]: libpod-854bf71675607c2bb623e918a602089a96444613042eb9de5158089e89616217.scope: Consumed 1.027s CPU time.
Oct 14 04:24:12 np0005486808 podman[97119]: 2025-10-14 08:24:12.600756981 +0000 UTC m=+0.028972813 container died 854bf71675607c2bb623e918a602089a96444613042eb9de5158089e89616217 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:24:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:24:12 np0005486808 systemd[1]: var-lib-containers-storage-overlay-f5c5c157c03288a201bb4a13ff32ce5362c2720ed6a1dcb8c7c740bf1fc75870-merged.mount: Deactivated successfully.
Oct 14 04:24:12 np0005486808 podman[97119]: 2025-10-14 08:24:12.654998127 +0000 UTC m=+0.083213959 container remove 854bf71675607c2bb623e918a602089a96444613042eb9de5158089e89616217 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_swirles, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 14 04:24:12 np0005486808 systemd[1]: libpod-conmon-854bf71675607c2bb623e918a602089a96444613042eb9de5158089e89616217.scope: Deactivated successfully.
Oct 14 04:24:12 np0005486808 python3[97233]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:24:13 np0005486808 podman[97281]: 2025-10-14 08:24:13.072161213 +0000 UTC m=+0.057819712 container create e336078e5f901d6fd60967ae5d96e8b6db1fd8673a609333ef5d99a3856b74a8 (image=quay.io/ceph/ceph:v18, name=upbeat_saha, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 14 04:24:13 np0005486808 systemd[1]: Started libpod-conmon-e336078e5f901d6fd60967ae5d96e8b6db1fd8673a609333ef5d99a3856b74a8.scope.
Oct 14 04:24:13 np0005486808 podman[97281]: 2025-10-14 08:24:13.042520255 +0000 UTC m=+0.028178814 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:24:13 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:13 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f31c0188a46f8173876fc8868e8ae4b8d404447114767cf99dcd534c55fd7681/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:13 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f31c0188a46f8173876fc8868e8ae4b8d404447114767cf99dcd534c55fd7681/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:13 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f31c0188a46f8173876fc8868e8ae4b8d404447114767cf99dcd534c55fd7681/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:13 np0005486808 podman[97281]: 2025-10-14 08:24:13.206382149 +0000 UTC m=+0.192040648 container init e336078e5f901d6fd60967ae5d96e8b6db1fd8673a609333ef5d99a3856b74a8 (image=quay.io/ceph/ceph:v18, name=upbeat_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 14 04:24:13 np0005486808 podman[97281]: 2025-10-14 08:24:13.214201135 +0000 UTC m=+0.199859604 container start e336078e5f901d6fd60967ae5d96e8b6db1fd8673a609333ef5d99a3856b74a8 (image=quay.io/ceph/ceph:v18, name=upbeat_saha, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:24:13 np0005486808 podman[97281]: 2025-10-14 08:24:13.217814242 +0000 UTC m=+0.203472741 container attach e336078e5f901d6fd60967ae5d96e8b6db1fd8673a609333ef5d99a3856b74a8 (image=quay.io/ceph/ceph:v18, name=upbeat_saha, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:24:13 np0005486808 podman[97341]: 2025-10-14 08:24:13.388764576 +0000 UTC m=+0.056604584 container create 315735e1d589707ec67892cc55bb9d8be2cd0e662c80704d7502d21f642c7ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_carver, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 14 04:24:13 np0005486808 systemd[1]: Started libpod-conmon-315735e1d589707ec67892cc55bb9d8be2cd0e662c80704d7502d21f642c7ab6.scope.
Oct 14 04:24:13 np0005486808 podman[97341]: 2025-10-14 08:24:13.359697461 +0000 UTC m=+0.027537519 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:24:13 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:13 np0005486808 podman[97341]: 2025-10-14 08:24:13.479530514 +0000 UTC m=+0.147370562 container init 315735e1d589707ec67892cc55bb9d8be2cd0e662c80704d7502d21f642c7ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 14 04:24:13 np0005486808 podman[97341]: 2025-10-14 08:24:13.48898514 +0000 UTC m=+0.156825148 container start 315735e1d589707ec67892cc55bb9d8be2cd0e662c80704d7502d21f642c7ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_carver, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 14 04:24:13 np0005486808 podman[97341]: 2025-10-14 08:24:13.49441464 +0000 UTC m=+0.162254648 container attach 315735e1d589707ec67892cc55bb9d8be2cd0e662c80704d7502d21f642c7ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_carver, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 14 04:24:13 np0005486808 systemd[1]: libpod-315735e1d589707ec67892cc55bb9d8be2cd0e662c80704d7502d21f642c7ab6.scope: Deactivated successfully.
Oct 14 04:24:13 np0005486808 vigorous_carver[97357]: 167 167
Oct 14 04:24:13 np0005486808 conmon[97357]: conmon 315735e1d589707ec678 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-315735e1d589707ec67892cc55bb9d8be2cd0e662c80704d7502d21f642c7ab6.scope/container/memory.events
Oct 14 04:24:13 np0005486808 podman[97341]: 2025-10-14 08:24:13.498863236 +0000 UTC m=+0.166703234 container died 315735e1d589707ec67892cc55bb9d8be2cd0e662c80704d7502d21f642c7ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 14 04:24:13 np0005486808 systemd[1]: var-lib-containers-storage-overlay-94669ec2599f65a6fee99cc423878055266da108fd0a50af5c54401aa615fee4-merged.mount: Deactivated successfully.
Oct 14 04:24:13 np0005486808 podman[97341]: 2025-10-14 08:24:13.553800191 +0000 UTC m=+0.221640199 container remove 315735e1d589707ec67892cc55bb9d8be2cd0e662c80704d7502d21f642c7ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:24:13 np0005486808 systemd[1]: libpod-conmon-315735e1d589707ec67892cc55bb9d8be2cd0e662c80704d7502d21f642c7ab6.scope: Deactivated successfully.
Oct 14 04:24:13 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.14246 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 04:24:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Oct 14 04:24:13 np0005486808 ceph-mgr[74543]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Oct 14 04:24:13 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct 14 04:24:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Oct 14 04:24:13 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct 14 04:24:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Oct 14 04:24:13 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct 14 04:24:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Oct 14 04:24:13 np0005486808 ceph-mon[74249]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct 14 04:24:13 np0005486808 ceph-mon[74249]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct 14 04:24:13 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0[74245]: 2025-10-14T08:24:13.774+0000 7fc76305f640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct 14 04:24:13 np0005486808 podman[97399]: 2025-10-14 08:24:13.78089974 +0000 UTC m=+0.059752482 container create d759de25f2caf07d5aca35aa02ab7aa0cf0ab82b03ff99814e1e16c2f9c8af91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_beaver, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 14 04:24:13 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct 14 04:24:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).mds e2 new map
Oct 14 04:24:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).mds e2 print_map#012e2#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-14T08:24:13.775757+0000#012modified#0112025-10-14T08:24:13.775788+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 
Oct 14 04:24:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Oct 14 04:24:13 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Oct 14 04:24:13 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Oct 14 04:24:13 np0005486808 ceph-mgr[74543]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Oct 14 04:24:13 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Oct 14 04:24:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct 14 04:24:13 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:13 np0005486808 ceph-mgr[74543]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Oct 14 04:24:13 np0005486808 systemd[1]: libpod-e336078e5f901d6fd60967ae5d96e8b6db1fd8673a609333ef5d99a3856b74a8.scope: Deactivated successfully.
Oct 14 04:24:13 np0005486808 podman[97281]: 2025-10-14 08:24:13.815553586 +0000 UTC m=+0.801212085 container died e336078e5f901d6fd60967ae5d96e8b6db1fd8673a609333ef5d99a3856b74a8 (image=quay.io/ceph/ceph:v18, name=upbeat_saha, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 14 04:24:13 np0005486808 systemd[1]: Started libpod-conmon-d759de25f2caf07d5aca35aa02ab7aa0cf0ab82b03ff99814e1e16c2f9c8af91.scope.
Oct 14 04:24:13 np0005486808 podman[97399]: 2025-10-14 08:24:13.751513301 +0000 UTC m=+0.030366093 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:24:13 np0005486808 systemd[1]: var-lib-containers-storage-overlay-f31c0188a46f8173876fc8868e8ae4b8d404447114767cf99dcd534c55fd7681-merged.mount: Deactivated successfully.
Oct 14 04:24:13 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:13 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f2da0020232e553261d067a1daebed08aeff04406b74d55b07b01b337cbfb5d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:13 np0005486808 podman[97281]: 2025-10-14 08:24:13.877387448 +0000 UTC m=+0.863045927 container remove e336078e5f901d6fd60967ae5d96e8b6db1fd8673a609333ef5d99a3856b74a8 (image=quay.io/ceph/ceph:v18, name=upbeat_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 14 04:24:13 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f2da0020232e553261d067a1daebed08aeff04406b74d55b07b01b337cbfb5d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:13 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f2da0020232e553261d067a1daebed08aeff04406b74d55b07b01b337cbfb5d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:13 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f2da0020232e553261d067a1daebed08aeff04406b74d55b07b01b337cbfb5d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:13 np0005486808 systemd[1]: libpod-conmon-e336078e5f901d6fd60967ae5d96e8b6db1fd8673a609333ef5d99a3856b74a8.scope: Deactivated successfully.
Oct 14 04:24:13 np0005486808 podman[97399]: 2025-10-14 08:24:13.91433302 +0000 UTC m=+0.193185772 container init d759de25f2caf07d5aca35aa02ab7aa0cf0ab82b03ff99814e1e16c2f9c8af91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 14 04:24:13 np0005486808 podman[97399]: 2025-10-14 08:24:13.925833047 +0000 UTC m=+0.204685769 container start d759de25f2caf07d5aca35aa02ab7aa0cf0ab82b03ff99814e1e16c2f9c8af91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:24:13 np0005486808 podman[97399]: 2025-10-14 08:24:13.929898845 +0000 UTC m=+0.208751597 container attach d759de25f2caf07d5aca35aa02ab7aa0cf0ab82b03ff99814e1e16c2f9c8af91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_beaver, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Oct 14 04:24:14 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct 14 04:24:14 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct 14 04:24:14 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct 14 04:24:14 np0005486808 ceph-mon[74249]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct 14 04:24:14 np0005486808 ceph-mon[74249]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct 14 04:24:14 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct 14 04:24:14 np0005486808 ceph-mon[74249]: Saving service mds.cephfs spec with placement compute-0
Oct 14 04:24:14 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:14 np0005486808 python3[97459]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:24:14 np0005486808 podman[97460]: 2025-10-14 08:24:14.38976629 +0000 UTC m=+0.066408433 container create 01e6cc69173261580f83697a930096bbd941fa6b65fd0bdacb1cbe57449cfb9a (image=quay.io/ceph/ceph:v18, name=recursing_noyce, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:24:14 np0005486808 systemd[1]: Started libpod-conmon-01e6cc69173261580f83697a930096bbd941fa6b65fd0bdacb1cbe57449cfb9a.scope.
Oct 14 04:24:14 np0005486808 podman[97460]: 2025-10-14 08:24:14.362146414 +0000 UTC m=+0.038788597 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:24:14 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:14 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de5cbb0e07160063d32d665a422a15fa2d0c2b0a8c7af597d66a98094299e697/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:14 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de5cbb0e07160063d32d665a422a15fa2d0c2b0a8c7af597d66a98094299e697/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:14 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de5cbb0e07160063d32d665a422a15fa2d0c2b0a8c7af597d66a98094299e697/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:14 np0005486808 podman[97460]: 2025-10-14 08:24:14.505529243 +0000 UTC m=+0.182171446 container init 01e6cc69173261580f83697a930096bbd941fa6b65fd0bdacb1cbe57449cfb9a (image=quay.io/ceph/ceph:v18, name=recursing_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:24:14 np0005486808 podman[97460]: 2025-10-14 08:24:14.517487051 +0000 UTC m=+0.194129184 container start 01e6cc69173261580f83697a930096bbd941fa6b65fd0bdacb1cbe57449cfb9a (image=quay.io/ceph/ceph:v18, name=recursing_noyce, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Oct 14 04:24:14 np0005486808 podman[97460]: 2025-10-14 08:24:14.522204465 +0000 UTC m=+0.198846598 container attach 01e6cc69173261580f83697a930096bbd941fa6b65fd0bdacb1cbe57449cfb9a (image=quay.io/ceph/ceph:v18, name=recursing_noyce, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 14 04:24:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:24:14 np0005486808 busy_beaver[97424]: {
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:    "0": [
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:        {
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:            "devices": [
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:                "/dev/loop3"
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:            ],
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:            "lv_name": "ceph_lv0",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:            "lv_size": "21470642176",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:            "name": "ceph_lv0",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:            "tags": {
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:                "ceph.cluster_name": "ceph",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:                "ceph.crush_device_class": "",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:                "ceph.encrypted": "0",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:                "ceph.osd_id": "0",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:                "ceph.type": "block",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:                "ceph.vdo": "0"
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:            },
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:            "type": "block",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:            "vg_name": "ceph_vg0"
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:        }
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:    ],
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:    "1": [
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:        {
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:            "devices": [
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:                "/dev/loop4"
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:            ],
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:            "lv_name": "ceph_lv1",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:            "lv_size": "21470642176",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:            "name": "ceph_lv1",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:            "tags": {
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:                "ceph.cluster_name": "ceph",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:                "ceph.crush_device_class": "",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:                "ceph.encrypted": "0",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:                "ceph.osd_id": "1",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:                "ceph.type": "block",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:                "ceph.vdo": "0"
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:            },
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:            "type": "block",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:            "vg_name": "ceph_vg1"
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:        }
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:    ],
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:    "2": [
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:        {
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:            "devices": [
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:                "/dev/loop5"
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:            ],
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:            "lv_name": "ceph_lv2",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:            "lv_size": "21470642176",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:            "name": "ceph_lv2",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:            "tags": {
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:                "ceph.cluster_name": "ceph",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:                "ceph.crush_device_class": "",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:                "ceph.encrypted": "0",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:                "ceph.osd_id": "2",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:                "ceph.type": "block",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:                "ceph.vdo": "0"
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:            },
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:            "type": "block",
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:            "vg_name": "ceph_vg2"
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:        }
Oct 14 04:24:14 np0005486808 busy_beaver[97424]:    ]
Oct 14 04:24:14 np0005486808 busy_beaver[97424]: }
Oct 14 04:24:14 np0005486808 systemd[1]: libpod-d759de25f2caf07d5aca35aa02ab7aa0cf0ab82b03ff99814e1e16c2f9c8af91.scope: Deactivated successfully.
Oct 14 04:24:14 np0005486808 podman[97399]: 2025-10-14 08:24:14.779419361 +0000 UTC m=+1.058272113 container died d759de25f2caf07d5aca35aa02ab7aa0cf0ab82b03ff99814e1e16c2f9c8af91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:24:14 np0005486808 systemd[1]: var-lib-containers-storage-overlay-3f2da0020232e553261d067a1daebed08aeff04406b74d55b07b01b337cbfb5d-merged.mount: Deactivated successfully.
Oct 14 04:24:14 np0005486808 podman[97399]: 2025-10-14 08:24:14.860965458 +0000 UTC m=+1.139818180 container remove d759de25f2caf07d5aca35aa02ab7aa0cf0ab82b03ff99814e1e16c2f9c8af91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 04:24:14 np0005486808 systemd[1]: libpod-conmon-d759de25f2caf07d5aca35aa02ab7aa0cf0ab82b03ff99814e1e16c2f9c8af91.scope: Deactivated successfully.
Oct 14 04:24:15 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 04:24:15 np0005486808 ceph-mgr[74543]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Oct 14 04:24:15 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Oct 14 04:24:15 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct 14 04:24:15 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:15 np0005486808 recursing_noyce[97475]: Scheduled mds.cephfs update...
Oct 14 04:24:15 np0005486808 systemd[1]: libpod-01e6cc69173261580f83697a930096bbd941fa6b65fd0bdacb1cbe57449cfb9a.scope: Deactivated successfully.
Oct 14 04:24:15 np0005486808 podman[97460]: 2025-10-14 08:24:15.137991772 +0000 UTC m=+0.814633905 container died 01e6cc69173261580f83697a930096bbd941fa6b65fd0bdacb1cbe57449cfb9a (image=quay.io/ceph/ceph:v18, name=recursing_noyce, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:24:15 np0005486808 systemd[1]: var-lib-containers-storage-overlay-de5cbb0e07160063d32d665a422a15fa2d0c2b0a8c7af597d66a98094299e697-merged.mount: Deactivated successfully.
Oct 14 04:24:15 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:15 np0005486808 podman[97460]: 2025-10-14 08:24:15.194360052 +0000 UTC m=+0.871002165 container remove 01e6cc69173261580f83697a930096bbd941fa6b65fd0bdacb1cbe57449cfb9a (image=quay.io/ceph/ceph:v18, name=recursing_noyce, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 14 04:24:15 np0005486808 systemd[1]: libpod-conmon-01e6cc69173261580f83697a930096bbd941fa6b65fd0bdacb1cbe57449cfb9a.scope: Deactivated successfully.
Oct 14 04:24:15 np0005486808 podman[97671]: 2025-10-14 08:24:15.663603803 +0000 UTC m=+0.051229997 container create 5ec428275ea58c38f31124e6ca52162398aa3eaa35ef42edbdee39fa94efb33d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 14 04:24:15 np0005486808 systemd[1]: Started libpod-conmon-5ec428275ea58c38f31124e6ca52162398aa3eaa35ef42edbdee39fa94efb33d.scope.
Oct 14 04:24:15 np0005486808 podman[97671]: 2025-10-14 08:24:15.640047455 +0000 UTC m=+0.027673729 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:24:15 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:15 np0005486808 podman[97671]: 2025-10-14 08:24:15.763330659 +0000 UTC m=+0.150956883 container init 5ec428275ea58c38f31124e6ca52162398aa3eaa35ef42edbdee39fa94efb33d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_kirch, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:24:15 np0005486808 podman[97671]: 2025-10-14 08:24:15.775869771 +0000 UTC m=+0.163495975 container start 5ec428275ea58c38f31124e6ca52162398aa3eaa35ef42edbdee39fa94efb33d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_kirch, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 14 04:24:15 np0005486808 podman[97671]: 2025-10-14 08:24:15.779615762 +0000 UTC m=+0.167241956 container attach 5ec428275ea58c38f31124e6ca52162398aa3eaa35ef42edbdee39fa94efb33d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Oct 14 04:24:15 np0005486808 serene_kirch[97711]: 167 167
Oct 14 04:24:15 np0005486808 systemd[1]: libpod-5ec428275ea58c38f31124e6ca52162398aa3eaa35ef42edbdee39fa94efb33d.scope: Deactivated successfully.
Oct 14 04:24:15 np0005486808 podman[97671]: 2025-10-14 08:24:15.782999693 +0000 UTC m=+0.170625917 container died 5ec428275ea58c38f31124e6ca52162398aa3eaa35ef42edbdee39fa94efb33d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_kirch, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 14 04:24:15 np0005486808 systemd[1]: var-lib-containers-storage-overlay-335583cfa34a0da7f279d6bdea86f79840ddb629ff8532f55328aff882d52345-merged.mount: Deactivated successfully.
Oct 14 04:24:15 np0005486808 podman[97671]: 2025-10-14 08:24:15.832122759 +0000 UTC m=+0.219748953 container remove 5ec428275ea58c38f31124e6ca52162398aa3eaa35ef42edbdee39fa94efb33d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:24:15 np0005486808 systemd[1]: libpod-conmon-5ec428275ea58c38f31124e6ca52162398aa3eaa35ef42edbdee39fa94efb33d.scope: Deactivated successfully.
Oct 14 04:24:16 np0005486808 podman[97788]: 2025-10-14 08:24:16.021756924 +0000 UTC m=+0.042527857 container create 95ebb5b2dc04caf479a6ed9a2e3740fc48823b0ae8a11d7768583c88870b6d02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kirch, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:24:16 np0005486808 python3[97782]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 14 04:24:16 np0005486808 systemd[1]: Started libpod-conmon-95ebb5b2dc04caf479a6ed9a2e3740fc48823b0ae8a11d7768583c88870b6d02.scope.
Oct 14 04:24:16 np0005486808 podman[97788]: 2025-10-14 08:24:16.002242283 +0000 UTC m=+0.023013206 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:24:16 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:16 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2baf9ef87c378ad5c05a68f8f9c1aaa0b569ec3c6c961ca1dc325b519e2b57e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:16 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2baf9ef87c378ad5c05a68f8f9c1aaa0b569ec3c6c961ca1dc325b519e2b57e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:16 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2baf9ef87c378ad5c05a68f8f9c1aaa0b569ec3c6c961ca1dc325b519e2b57e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:16 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2baf9ef87c378ad5c05a68f8f9c1aaa0b569ec3c6c961ca1dc325b519e2b57e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:16 np0005486808 podman[97788]: 2025-10-14 08:24:16.121227374 +0000 UTC m=+0.141998337 container init 95ebb5b2dc04caf479a6ed9a2e3740fc48823b0ae8a11d7768583c88870b6d02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kirch, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 14 04:24:16 np0005486808 podman[97788]: 2025-10-14 08:24:16.130485687 +0000 UTC m=+0.151256580 container start 95ebb5b2dc04caf479a6ed9a2e3740fc48823b0ae8a11d7768583c88870b6d02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kirch, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 04:24:16 np0005486808 podman[97788]: 2025-10-14 08:24:16.134302099 +0000 UTC m=+0.155073102 container attach 95ebb5b2dc04caf479a6ed9a2e3740fc48823b0ae8a11d7768583c88870b6d02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 04:24:16 np0005486808 ceph-mon[74249]: Saving service mds.cephfs spec with placement compute-0
Oct 14 04:24:16 np0005486808 python3[97882]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760430255.6973305-33288-230123329227401/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=797a8c26440dacfba8271b2a1db1dbf2ce2a5750 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:24:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:24:16 np0005486808 python3[97932]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:24:16 np0005486808 podman[97944]: 2025-10-14 08:24:16.974779276 +0000 UTC m=+0.049515146 container create e3c414b5a6f4c7d21f18a9975830831a2c045485e85e3b2913d828baae827487 (image=quay.io/ceph/ceph:v18, name=nostalgic_newton, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:24:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:24:17 np0005486808 systemd[1]: Started libpod-conmon-e3c414b5a6f4c7d21f18a9975830831a2c045485e85e3b2913d828baae827487.scope.
Oct 14 04:24:17 np0005486808 podman[97944]: 2025-10-14 08:24:16.949881565 +0000 UTC m=+0.024617455 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:24:17 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:17 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d897a12150f27efdb1a6dee077f36e06c0d88f5875fe190fab13f52294c975/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:17 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d897a12150f27efdb1a6dee077f36e06c0d88f5875fe190fab13f52294c975/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:17 np0005486808 podman[97944]: 2025-10-14 08:24:17.066860077 +0000 UTC m=+0.141595977 container init e3c414b5a6f4c7d21f18a9975830831a2c045485e85e3b2913d828baae827487 (image=quay.io/ceph/ceph:v18, name=nostalgic_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:24:17 np0005486808 podman[97944]: 2025-10-14 08:24:17.072171715 +0000 UTC m=+0.146907595 container start e3c414b5a6f4c7d21f18a9975830831a2c045485e85e3b2913d828baae827487 (image=quay.io/ceph/ceph:v18, name=nostalgic_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 14 04:24:17 np0005486808 podman[97944]: 2025-10-14 08:24:17.075095016 +0000 UTC m=+0.149830896 container attach e3c414b5a6f4c7d21f18a9975830831a2c045485e85e3b2913d828baae827487 (image=quay.io/ceph/ceph:v18, name=nostalgic_newton, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:24:17 np0005486808 exciting_kirch[97805]: {
Oct 14 04:24:17 np0005486808 exciting_kirch[97805]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 04:24:17 np0005486808 exciting_kirch[97805]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:24:17 np0005486808 exciting_kirch[97805]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 04:24:17 np0005486808 exciting_kirch[97805]:        "osd_id": 2,
Oct 14 04:24:17 np0005486808 exciting_kirch[97805]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:24:17 np0005486808 exciting_kirch[97805]:        "type": "bluestore"
Oct 14 04:24:17 np0005486808 exciting_kirch[97805]:    },
Oct 14 04:24:17 np0005486808 exciting_kirch[97805]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 04:24:17 np0005486808 exciting_kirch[97805]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:24:17 np0005486808 exciting_kirch[97805]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 04:24:17 np0005486808 exciting_kirch[97805]:        "osd_id": 1,
Oct 14 04:24:17 np0005486808 exciting_kirch[97805]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:24:17 np0005486808 exciting_kirch[97805]:        "type": "bluestore"
Oct 14 04:24:17 np0005486808 exciting_kirch[97805]:    },
Oct 14 04:24:17 np0005486808 exciting_kirch[97805]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 04:24:17 np0005486808 exciting_kirch[97805]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:24:17 np0005486808 exciting_kirch[97805]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 04:24:17 np0005486808 exciting_kirch[97805]:        "osd_id": 0,
Oct 14 04:24:17 np0005486808 exciting_kirch[97805]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:24:17 np0005486808 exciting_kirch[97805]:        "type": "bluestore"
Oct 14 04:24:17 np0005486808 exciting_kirch[97805]:    }
Oct 14 04:24:17 np0005486808 exciting_kirch[97805]: }
Oct 14 04:24:17 np0005486808 systemd[1]: libpod-95ebb5b2dc04caf479a6ed9a2e3740fc48823b0ae8a11d7768583c88870b6d02.scope: Deactivated successfully.
Oct 14 04:24:17 np0005486808 systemd[1]: libpod-95ebb5b2dc04caf479a6ed9a2e3740fc48823b0ae8a11d7768583c88870b6d02.scope: Consumed 1.021s CPU time.
Oct 14 04:24:17 np0005486808 podman[97788]: 2025-10-14 08:24:17.154952312 +0000 UTC m=+1.175723245 container died 95ebb5b2dc04caf479a6ed9a2e3740fc48823b0ae8a11d7768583c88870b6d02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kirch, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 14 04:24:17 np0005486808 systemd[1]: var-lib-containers-storage-overlay-f2baf9ef87c378ad5c05a68f8f9c1aaa0b569ec3c6c961ca1dc325b519e2b57e-merged.mount: Deactivated successfully.
Oct 14 04:24:17 np0005486808 podman[97788]: 2025-10-14 08:24:17.214791746 +0000 UTC m=+1.235562629 container remove 95ebb5b2dc04caf479a6ed9a2e3740fc48823b0ae8a11d7768583c88870b6d02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:24:17 np0005486808 systemd[1]: libpod-conmon-95ebb5b2dc04caf479a6ed9a2e3740fc48823b0ae8a11d7768583c88870b6d02.scope: Deactivated successfully.
Oct 14 04:24:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:24:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:24:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0) v1
Oct 14 04:24:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/98097051' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct 14 04:24:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/98097051' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct 14 04:24:17 np0005486808 systemd[1]: libpod-e3c414b5a6f4c7d21f18a9975830831a2c045485e85e3b2913d828baae827487.scope: Deactivated successfully.
Oct 14 04:24:17 np0005486808 podman[98139]: 2025-10-14 08:24:17.719493073 +0000 UTC m=+0.028343005 container died e3c414b5a6f4c7d21f18a9975830831a2c045485e85e3b2913d828baae827487 (image=quay.io/ceph/ceph:v18, name=nostalgic_newton, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 14 04:24:17 np0005486808 systemd[1]: var-lib-containers-storage-overlay-98d897a12150f27efdb1a6dee077f36e06c0d88f5875fe190fab13f52294c975-merged.mount: Deactivated successfully.
Oct 14 04:24:17 np0005486808 podman[98139]: 2025-10-14 08:24:17.771307313 +0000 UTC m=+0.080157195 container remove e3c414b5a6f4c7d21f18a9975830831a2c045485e85e3b2913d828baae827487 (image=quay.io/ceph/ceph:v18, name=nostalgic_newton, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 14 04:24:17 np0005486808 systemd[1]: libpod-conmon-e3c414b5a6f4c7d21f18a9975830831a2c045485e85e3b2913d828baae827487.scope: Deactivated successfully.
Oct 14 04:24:18 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:18 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:18 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/98097051' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct 14 04:24:18 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/98097051' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct 14 04:24:18 np0005486808 podman[98249]: 2025-10-14 08:24:18.272102185 +0000 UTC m=+0.081898157 container exec c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 14 04:24:18 np0005486808 podman[98249]: 2025-10-14 08:24:18.396515247 +0000 UTC m=+0.206311129 container exec_died c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 14 04:24:18 np0005486808 python3[98294]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:24:18 np0005486808 podman[98324]: 2025-10-14 08:24:18.596090702 +0000 UTC m=+0.069156300 container create 1d5b6e994e3ce0a871fe07fa9136f381b2b92caaf47c98d2247297102b7660b0 (image=quay.io/ceph/ceph:v18, name=festive_cartwright, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 14 04:24:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:24:18 np0005486808 systemd[1]: Started libpod-conmon-1d5b6e994e3ce0a871fe07fa9136f381b2b92caaf47c98d2247297102b7660b0.scope.
Oct 14 04:24:18 np0005486808 podman[98324]: 2025-10-14 08:24:18.568149808 +0000 UTC m=+0.041215446 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:24:18 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:18 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ad04f0d36b92306f1530cc670021f317d5af31859da3d30cd1216c20a422979/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:18 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ad04f0d36b92306f1530cc670021f317d5af31859da3d30cd1216c20a422979/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:18 np0005486808 podman[98324]: 2025-10-14 08:24:18.709538119 +0000 UTC m=+0.182603727 container init 1d5b6e994e3ce0a871fe07fa9136f381b2b92caaf47c98d2247297102b7660b0 (image=quay.io/ceph/ceph:v18, name=festive_cartwright, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 14 04:24:18 np0005486808 podman[98324]: 2025-10-14 08:24:18.717378288 +0000 UTC m=+0.190443886 container start 1d5b6e994e3ce0a871fe07fa9136f381b2b92caaf47c98d2247297102b7660b0 (image=quay.io/ceph/ceph:v18, name=festive_cartwright, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:24:18 np0005486808 podman[98324]: 2025-10-14 08:24:18.721359544 +0000 UTC m=+0.194425142 container attach 1d5b6e994e3ce0a871fe07fa9136f381b2b92caaf47c98d2247297102b7660b0 (image=quay.io/ceph/ceph:v18, name=festive_cartwright, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:24:19 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:24:19 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:19 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:24:19 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:19 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:24:19 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:24:19 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 04:24:19 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:24:19 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 04:24:19 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:19 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 9a1390c7-d9bf-4eaf-a2ea-369191f74851 does not exist
Oct 14 04:24:19 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 48837262-584f-4f4e-8294-217a5f48b9ae does not exist
Oct 14 04:24:19 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev eb1bed4e-c6cf-4b5e-891c-a4dcf1c83e5f does not exist
Oct 14 04:24:19 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 04:24:19 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 04:24:19 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 04:24:19 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:24:19 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:24:19 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:24:19 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:19 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:19 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:24:19 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:19 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:24:19 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct 14 04:24:19 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1619228210' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 14 04:24:19 np0005486808 festive_cartwright[98363]: 
Oct 14 04:24:19 np0005486808 festive_cartwright[98363]: {"fsid":"c49aadb6-9b04-5cb1-8f5f-4c91676c568e","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":152,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":31,"num_osds":3,"num_up_osds":3,"osd_up_since":1760430224,"num_in_osds":3,"osd_in_since":1760430197,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83828736,"bytes_avail":64328097792,"bytes_total":64411926528},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-14T08:23:34.602537+0000","services":{}},"progress_events":{}}
Oct 14 04:24:19 np0005486808 systemd[1]: libpod-1d5b6e994e3ce0a871fe07fa9136f381b2b92caaf47c98d2247297102b7660b0.scope: Deactivated successfully.
Oct 14 04:24:19 np0005486808 podman[98324]: 2025-10-14 08:24:19.347146842 +0000 UTC m=+0.820212430 container died 1d5b6e994e3ce0a871fe07fa9136f381b2b92caaf47c98d2247297102b7660b0 (image=quay.io/ceph/ceph:v18, name=festive_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:24:19 np0005486808 systemd[1]: var-lib-containers-storage-overlay-3ad04f0d36b92306f1530cc670021f317d5af31859da3d30cd1216c20a422979-merged.mount: Deactivated successfully.
Oct 14 04:24:19 np0005486808 podman[98324]: 2025-10-14 08:24:19.398313116 +0000 UTC m=+0.871378674 container remove 1d5b6e994e3ce0a871fe07fa9136f381b2b92caaf47c98d2247297102b7660b0 (image=quay.io/ceph/ceph:v18, name=festive_cartwright, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 14 04:24:19 np0005486808 systemd[1]: libpod-conmon-1d5b6e994e3ce0a871fe07fa9136f381b2b92caaf47c98d2247297102b7660b0.scope: Deactivated successfully.
Oct 14 04:24:19 np0005486808 python3[98603]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:24:19 np0005486808 podman[98620]: 2025-10-14 08:24:19.780722462 +0000 UTC m=+0.069027006 container create 271869228ff7e838f8210cf7102403af651be934c370e3339743cc96d67ad9f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 14 04:24:19 np0005486808 systemd[1]: Started libpod-conmon-271869228ff7e838f8210cf7102403af651be934c370e3339743cc96d67ad9f9.scope.
Oct 14 04:24:19 np0005486808 podman[98634]: 2025-10-14 08:24:19.835455893 +0000 UTC m=+0.055357367 container create 14a5eb0ea9e7425da7ae054916aac440c05236d73289ab79ed871f104774c8aa (image=quay.io/ceph/ceph:v18, name=affectionate_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:24:19 np0005486808 podman[98620]: 2025-10-14 08:24:19.750171185 +0000 UTC m=+0.038475789 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:24:19 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:19 np0005486808 systemd[1]: Started libpod-conmon-14a5eb0ea9e7425da7ae054916aac440c05236d73289ab79ed871f104774c8aa.scope.
Oct 14 04:24:19 np0005486808 podman[98620]: 2025-10-14 08:24:19.876238557 +0000 UTC m=+0.164543151 container init 271869228ff7e838f8210cf7102403af651be934c370e3339743cc96d67ad9f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 14 04:24:19 np0005486808 podman[98620]: 2025-10-14 08:24:19.891495055 +0000 UTC m=+0.179799579 container start 271869228ff7e838f8210cf7102403af651be934c370e3339743cc96d67ad9f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 14 04:24:19 np0005486808 podman[98620]: 2025-10-14 08:24:19.895285967 +0000 UTC m=+0.183590561 container attach 271869228ff7e838f8210cf7102403af651be934c370e3339743cc96d67ad9f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lederberg, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:24:19 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:19 np0005486808 suspicious_lederberg[98649]: 167 167
Oct 14 04:24:19 np0005486808 systemd[1]: libpod-271869228ff7e838f8210cf7102403af651be934c370e3339743cc96d67ad9f9.scope: Deactivated successfully.
Oct 14 04:24:19 np0005486808 conmon[98649]: conmon 271869228ff7e838f821 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-271869228ff7e838f8210cf7102403af651be934c370e3339743cc96d67ad9f9.scope/container/memory.events
Oct 14 04:24:19 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3b01897a26c959e30370bf14f935a8dfe593f613ca68c79ada9d3a47617b2b8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:19 np0005486808 podman[98620]: 2025-10-14 08:24:19.901136008 +0000 UTC m=+0.189440582 container died 271869228ff7e838f8210cf7102403af651be934c370e3339743cc96d67ad9f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lederberg, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:24:19 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3b01897a26c959e30370bf14f935a8dfe593f613ca68c79ada9d3a47617b2b8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:19 np0005486808 podman[98634]: 2025-10-14 08:24:19.808498093 +0000 UTC m=+0.028399647 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:24:19 np0005486808 podman[98634]: 2025-10-14 08:24:19.923318073 +0000 UTC m=+0.143219597 container init 14a5eb0ea9e7425da7ae054916aac440c05236d73289ab79ed871f104774c8aa (image=quay.io/ceph/ceph:v18, name=affectionate_gauss, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:24:19 np0005486808 podman[98634]: 2025-10-14 08:24:19.936431179 +0000 UTC m=+0.156332663 container start 14a5eb0ea9e7425da7ae054916aac440c05236d73289ab79ed871f104774c8aa (image=quay.io/ceph/ceph:v18, name=affectionate_gauss, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 14 04:24:19 np0005486808 systemd[1]: var-lib-containers-storage-overlay-9940a1456545656758ee06ffd4c4a4004bab7f3364373c7dc50e0b1ed5857123-merged.mount: Deactivated successfully.
Oct 14 04:24:19 np0005486808 podman[98620]: 2025-10-14 08:24:19.957505068 +0000 UTC m=+0.245809622 container remove 271869228ff7e838f8210cf7102403af651be934c370e3339743cc96d67ad9f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 14 04:24:19 np0005486808 systemd[1]: libpod-conmon-271869228ff7e838f8210cf7102403af651be934c370e3339743cc96d67ad9f9.scope: Deactivated successfully.
Oct 14 04:24:19 np0005486808 podman[98634]: 2025-10-14 08:24:19.975318877 +0000 UTC m=+0.195220421 container attach 14a5eb0ea9e7425da7ae054916aac440c05236d73289ab79ed871f104774c8aa (image=quay.io/ceph/ceph:v18, name=affectionate_gauss, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:24:20 np0005486808 podman[98678]: 2025-10-14 08:24:20.201453833 +0000 UTC m=+0.079066128 container create e4f0f36f1af945fd4fcf5c1790f473dde29af1bee1fe2406b6b8bf9a94d8a497 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_cohen, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:24:20 np0005486808 podman[98678]: 2025-10-14 08:24:20.162684988 +0000 UTC m=+0.040297383 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:24:20 np0005486808 systemd[1]: Started libpod-conmon-e4f0f36f1af945fd4fcf5c1790f473dde29af1bee1fe2406b6b8bf9a94d8a497.scope.
Oct 14 04:24:20 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:20 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b768475a86d6e7b0a7cc4acfa592849c7a3086bf5367203c8966b983b096aaa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:20 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b768475a86d6e7b0a7cc4acfa592849c7a3086bf5367203c8966b983b096aaa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:20 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b768475a86d6e7b0a7cc4acfa592849c7a3086bf5367203c8966b983b096aaa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:20 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b768475a86d6e7b0a7cc4acfa592849c7a3086bf5367203c8966b983b096aaa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:20 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b768475a86d6e7b0a7cc4acfa592849c7a3086bf5367203c8966b983b096aaa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:20 np0005486808 podman[98678]: 2025-10-14 08:24:20.309835177 +0000 UTC m=+0.187447472 container init e4f0f36f1af945fd4fcf5c1790f473dde29af1bee1fe2406b6b8bf9a94d8a497 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_cohen, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:24:20 np0005486808 podman[98678]: 2025-10-14 08:24:20.320556516 +0000 UTC m=+0.198168821 container start e4f0f36f1af945fd4fcf5c1790f473dde29af1bee1fe2406b6b8bf9a94d8a497 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_cohen, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:24:20 np0005486808 podman[98678]: 2025-10-14 08:24:20.324683335 +0000 UTC m=+0.202295630 container attach e4f0f36f1af945fd4fcf5c1790f473dde29af1bee1fe2406b6b8bf9a94d8a497 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:24:20 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:24:20 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/500851557' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:24:20 np0005486808 affectionate_gauss[98654]: 
Oct 14 04:24:20 np0005486808 affectionate_gauss[98654]: {"epoch":1,"fsid":"c49aadb6-9b04-5cb1-8f5f-4c91676c568e","modified":"2025-10-14T08:21:41.573061Z","created":"2025-10-14T08:21:41.573061Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Oct 14 04:24:20 np0005486808 affectionate_gauss[98654]: dumped monmap epoch 1
Oct 14 04:24:20 np0005486808 systemd[1]: libpod-14a5eb0ea9e7425da7ae054916aac440c05236d73289ab79ed871f104774c8aa.scope: Deactivated successfully.
Oct 14 04:24:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:24:20 np0005486808 podman[98720]: 2025-10-14 08:24:20.661227615 +0000 UTC m=+0.041398230 container died 14a5eb0ea9e7425da7ae054916aac440c05236d73289ab79ed871f104774c8aa (image=quay.io/ceph/ceph:v18, name=affectionate_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 14 04:24:20 np0005486808 systemd[1]: var-lib-containers-storage-overlay-e3b01897a26c959e30370bf14f935a8dfe593f613ca68c79ada9d3a47617b2b8-merged.mount: Deactivated successfully.
Oct 14 04:24:20 np0005486808 podman[98720]: 2025-10-14 08:24:20.721876188 +0000 UTC m=+0.102046783 container remove 14a5eb0ea9e7425da7ae054916aac440c05236d73289ab79ed871f104774c8aa (image=quay.io/ceph/ceph:v18, name=affectionate_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Oct 14 04:24:20 np0005486808 systemd[1]: libpod-conmon-14a5eb0ea9e7425da7ae054916aac440c05236d73289ab79ed871f104774c8aa.scope: Deactivated successfully.
Oct 14 04:24:21 np0005486808 python3[98775]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:24:21 np0005486808 podman[98782]: 2025-10-14 08:24:21.343798063 +0000 UTC m=+0.055546311 container create 8e3fa3adb4ebd786829df3f2ad07f73627e0a29a7de6ab3d0db71d682bbe8950 (image=quay.io/ceph/ceph:v18, name=upbeat_antonelli, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:24:21 np0005486808 systemd[1]: Started libpod-conmon-8e3fa3adb4ebd786829df3f2ad07f73627e0a29a7de6ab3d0db71d682bbe8950.scope.
Oct 14 04:24:21 np0005486808 quirky_cohen[98695]: --> passed data devices: 0 physical, 3 LVM
Oct 14 04:24:21 np0005486808 quirky_cohen[98695]: --> relative data size: 1.0
Oct 14 04:24:21 np0005486808 quirky_cohen[98695]: --> All data devices are unavailable
Oct 14 04:24:21 np0005486808 podman[98782]: 2025-10-14 08:24:21.317337294 +0000 UTC m=+0.029085552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:24:21 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:21 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/029b75ac55f8ccb642d18b9ac81c14620161dc85249c2787c7618969c66dd57d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:21 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/029b75ac55f8ccb642d18b9ac81c14620161dc85249c2787c7618969c66dd57d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:21 np0005486808 systemd[1]: libpod-e4f0f36f1af945fd4fcf5c1790f473dde29af1bee1fe2406b6b8bf9a94d8a497.scope: Deactivated successfully.
Oct 14 04:24:21 np0005486808 systemd[1]: libpod-e4f0f36f1af945fd4fcf5c1790f473dde29af1bee1fe2406b6b8bf9a94d8a497.scope: Consumed 1.049s CPU time.
Oct 14 04:24:21 np0005486808 podman[98678]: 2025-10-14 08:24:21.441144461 +0000 UTC m=+1.318756836 container died e4f0f36f1af945fd4fcf5c1790f473dde29af1bee1fe2406b6b8bf9a94d8a497 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 14 04:24:21 np0005486808 podman[98782]: 2025-10-14 08:24:21.460877727 +0000 UTC m=+0.172626015 container init 8e3fa3adb4ebd786829df3f2ad07f73627e0a29a7de6ab3d0db71d682bbe8950 (image=quay.io/ceph/ceph:v18, name=upbeat_antonelli, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 14 04:24:21 np0005486808 podman[98782]: 2025-10-14 08:24:21.475107901 +0000 UTC m=+0.186856139 container start 8e3fa3adb4ebd786829df3f2ad07f73627e0a29a7de6ab3d0db71d682bbe8950 (image=quay.io/ceph/ceph:v18, name=upbeat_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:24:21 np0005486808 systemd[1]: var-lib-containers-storage-overlay-7b768475a86d6e7b0a7cc4acfa592849c7a3086bf5367203c8966b983b096aaa-merged.mount: Deactivated successfully.
Oct 14 04:24:21 np0005486808 podman[98782]: 2025-10-14 08:24:21.488826502 +0000 UTC m=+0.200574750 container attach 8e3fa3adb4ebd786829df3f2ad07f73627e0a29a7de6ab3d0db71d682bbe8950 (image=quay.io/ceph/ceph:v18, name=upbeat_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 14 04:24:21 np0005486808 podman[98678]: 2025-10-14 08:24:21.525976688 +0000 UTC m=+1.403589003 container remove e4f0f36f1af945fd4fcf5c1790f473dde29af1bee1fe2406b6b8bf9a94d8a497 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 04:24:21 np0005486808 systemd[1]: libpod-conmon-e4f0f36f1af945fd4fcf5c1790f473dde29af1bee1fe2406b6b8bf9a94d8a497.scope: Deactivated successfully.
Oct 14 04:24:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:24:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Oct 14 04:24:22 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3434917530' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct 14 04:24:22 np0005486808 upbeat_antonelli[98800]: [client.openstack]
Oct 14 04:24:22 np0005486808 upbeat_antonelli[98800]: 	key = AQD6B+5oAAAAABAAVEQ7lkthlmRNcbZUzlB7sg==
Oct 14 04:24:22 np0005486808 upbeat_antonelli[98800]: 	caps mgr = "allow *"
Oct 14 04:24:22 np0005486808 upbeat_antonelli[98800]: 	caps mon = "profile rbd"
Oct 14 04:24:22 np0005486808 upbeat_antonelli[98800]: 	caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Oct 14 04:24:22 np0005486808 systemd[1]: libpod-8e3fa3adb4ebd786829df3f2ad07f73627e0a29a7de6ab3d0db71d682bbe8950.scope: Deactivated successfully.
Oct 14 04:24:22 np0005486808 podman[98782]: 2025-10-14 08:24:22.088645553 +0000 UTC m=+0.800393791 container died 8e3fa3adb4ebd786829df3f2ad07f73627e0a29a7de6ab3d0db71d682bbe8950 (image=quay.io/ceph/ceph:v18, name=upbeat_antonelli, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 14 04:24:22 np0005486808 systemd[1]: var-lib-containers-storage-overlay-029b75ac55f8ccb642d18b9ac81c14620161dc85249c2787c7618969c66dd57d-merged.mount: Deactivated successfully.
Oct 14 04:24:22 np0005486808 podman[98782]: 2025-10-14 08:24:22.136250602 +0000 UTC m=+0.847998880 container remove 8e3fa3adb4ebd786829df3f2ad07f73627e0a29a7de6ab3d0db71d682bbe8950 (image=quay.io/ceph/ceph:v18, name=upbeat_antonelli, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 14 04:24:22 np0005486808 systemd[1]: libpod-conmon-8e3fa3adb4ebd786829df3f2ad07f73627e0a29a7de6ab3d0db71d682bbe8950.scope: Deactivated successfully.
Oct 14 04:24:22 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/3434917530' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct 14 04:24:22 np0005486808 podman[98990]: 2025-10-14 08:24:22.377062201 +0000 UTC m=+0.067954130 container create d02271c805b661d98b2575a5f52485355f6e7e72b646876c9f115f780e94070b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_brahmagupta, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 14 04:24:22 np0005486808 systemd[1]: Started libpod-conmon-d02271c805b661d98b2575a5f52485355f6e7e72b646876c9f115f780e94070b.scope.
Oct 14 04:24:22 np0005486808 podman[98990]: 2025-10-14 08:24:22.34922954 +0000 UTC m=+0.040121529 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:24:22 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:22 np0005486808 podman[98990]: 2025-10-14 08:24:22.484151355 +0000 UTC m=+0.175043324 container init d02271c805b661d98b2575a5f52485355f6e7e72b646876c9f115f780e94070b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 14 04:24:22 np0005486808 podman[98990]: 2025-10-14 08:24:22.496562904 +0000 UTC m=+0.187454843 container start d02271c805b661d98b2575a5f52485355f6e7e72b646876c9f115f780e94070b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_brahmagupta, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:24:22 np0005486808 podman[98990]: 2025-10-14 08:24:22.500963391 +0000 UTC m=+0.191855400 container attach d02271c805b661d98b2575a5f52485355f6e7e72b646876c9f115f780e94070b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_brahmagupta, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:24:22 np0005486808 peaceful_brahmagupta[99006]: 167 167
Oct 14 04:24:22 np0005486808 systemd[1]: libpod-d02271c805b661d98b2575a5f52485355f6e7e72b646876c9f115f780e94070b.scope: Deactivated successfully.
Oct 14 04:24:22 np0005486808 podman[98990]: 2025-10-14 08:24:22.503909642 +0000 UTC m=+0.194801571 container died d02271c805b661d98b2575a5f52485355f6e7e72b646876c9f115f780e94070b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_brahmagupta, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:24:22 np0005486808 systemd[1]: var-lib-containers-storage-overlay-6a99bacd98ee93ab516df81408f9d0d3c55d026531669dd34ad975243a2d970d-merged.mount: Deactivated successfully.
Oct 14 04:24:22 np0005486808 podman[98990]: 2025-10-14 08:24:22.556601673 +0000 UTC m=+0.247493622 container remove d02271c805b661d98b2575a5f52485355f6e7e72b646876c9f115f780e94070b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_brahmagupta, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 04:24:22 np0005486808 systemd[1]: libpod-conmon-d02271c805b661d98b2575a5f52485355f6e7e72b646876c9f115f780e94070b.scope: Deactivated successfully.
Oct 14 04:24:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:24:22 np0005486808 podman[99031]: 2025-10-14 08:24:22.748172765 +0000 UTC m=+0.062905349 container create 519a86407e2490fe69295c83c86f7105dbda0f7b740bc27d4148423ef23008d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wilson, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 14 04:24:22 np0005486808 systemd[1]: Started libpod-conmon-519a86407e2490fe69295c83c86f7105dbda0f7b740bc27d4148423ef23008d9.scope.
Oct 14 04:24:22 np0005486808 podman[99031]: 2025-10-14 08:24:22.716298816 +0000 UTC m=+0.031031450 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:24:22 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:22 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/058657c5a6b0c6d67a49bcfd2d025dd9e0c76a36bc6d385f0af2d35b9bf93e00/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:22 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/058657c5a6b0c6d67a49bcfd2d025dd9e0c76a36bc6d385f0af2d35b9bf93e00/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:22 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/058657c5a6b0c6d67a49bcfd2d025dd9e0c76a36bc6d385f0af2d35b9bf93e00/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:22 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/058657c5a6b0c6d67a49bcfd2d025dd9e0c76a36bc6d385f0af2d35b9bf93e00/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:22 np0005486808 podman[99031]: 2025-10-14 08:24:22.860761981 +0000 UTC m=+0.175494575 container init 519a86407e2490fe69295c83c86f7105dbda0f7b740bc27d4148423ef23008d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wilson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:24:22 np0005486808 podman[99031]: 2025-10-14 08:24:22.875364613 +0000 UTC m=+0.190097197 container start 519a86407e2490fe69295c83c86f7105dbda0f7b740bc27d4148423ef23008d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wilson, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 14 04:24:22 np0005486808 podman[99031]: 2025-10-14 08:24:22.879483123 +0000 UTC m=+0.194215667 container attach 519a86407e2490fe69295c83c86f7105dbda0f7b740bc27d4148423ef23008d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:24:23 np0005486808 ansible-async_wrapper.py[99201]: Invoked with j136473381820 30 /home/zuul/.ansible/tmp/ansible-tmp-1760430263.058167-33360-253035777103833/AnsiballZ_command.py _
Oct 14 04:24:23 np0005486808 ansible-async_wrapper.py[99207]: Starting module and watcher
Oct 14 04:24:23 np0005486808 ansible-async_wrapper.py[99207]: Start watching 99208 (30)
Oct 14 04:24:23 np0005486808 ansible-async_wrapper.py[99208]: Start module (99208)
Oct 14 04:24:23 np0005486808 ansible-async_wrapper.py[99201]: Return async_wrapper task started.
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]: {
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:    "0": [
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:        {
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:            "devices": [
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:                "/dev/loop3"
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:            ],
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:            "lv_name": "ceph_lv0",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:            "lv_size": "21470642176",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:            "name": "ceph_lv0",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:            "tags": {
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:                "ceph.cluster_name": "ceph",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:                "ceph.crush_device_class": "",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:                "ceph.encrypted": "0",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:                "ceph.osd_id": "0",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:                "ceph.type": "block",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:                "ceph.vdo": "0"
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:            },
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:            "type": "block",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:            "vg_name": "ceph_vg0"
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:        }
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:    ],
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:    "1": [
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:        {
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:            "devices": [
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:                "/dev/loop4"
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:            ],
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:            "lv_name": "ceph_lv1",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:            "lv_size": "21470642176",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:            "name": "ceph_lv1",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:            "tags": {
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:                "ceph.cluster_name": "ceph",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:                "ceph.crush_device_class": "",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:                "ceph.encrypted": "0",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:                "ceph.osd_id": "1",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:                "ceph.type": "block",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:                "ceph.vdo": "0"
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:            },
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:            "type": "block",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:            "vg_name": "ceph_vg1"
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:        }
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:    ],
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:    "2": [
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:        {
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:            "devices": [
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:                "/dev/loop5"
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:            ],
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:            "lv_name": "ceph_lv2",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:            "lv_size": "21470642176",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:            "name": "ceph_lv2",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:            "tags": {
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:                "ceph.cluster_name": "ceph",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:                "ceph.crush_device_class": "",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:                "ceph.encrypted": "0",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:                "ceph.osd_id": "2",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:                "ceph.type": "block",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:                "ceph.vdo": "0"
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:            },
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:            "type": "block",
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:            "vg_name": "ceph_vg2"
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:        }
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]:    ]
Oct 14 04:24:23 np0005486808 gallant_wilson[99047]: }
Oct 14 04:24:23 np0005486808 systemd[1]: libpod-519a86407e2490fe69295c83c86f7105dbda0f7b740bc27d4148423ef23008d9.scope: Deactivated successfully.
Oct 14 04:24:23 np0005486808 podman[99031]: 2025-10-14 08:24:23.770547851 +0000 UTC m=+1.085280405 container died 519a86407e2490fe69295c83c86f7105dbda0f7b740bc27d4148423ef23008d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wilson, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 14 04:24:23 np0005486808 systemd[1]: var-lib-containers-storage-overlay-058657c5a6b0c6d67a49bcfd2d025dd9e0c76a36bc6d385f0af2d35b9bf93e00-merged.mount: Deactivated successfully.
Oct 14 04:24:23 np0005486808 python3[99210]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:24:23 np0005486808 podman[99031]: 2025-10-14 08:24:23.857421527 +0000 UTC m=+1.172154081 container remove 519a86407e2490fe69295c83c86f7105dbda0f7b740bc27d4148423ef23008d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wilson, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 14 04:24:23 np0005486808 systemd[1]: libpod-conmon-519a86407e2490fe69295c83c86f7105dbda0f7b740bc27d4148423ef23008d9.scope: Deactivated successfully.
Oct 14 04:24:23 np0005486808 podman[99223]: 2025-10-14 08:24:23.91066812 +0000 UTC m=+0.068445391 container create 439234f0c5e12c575274a793c7fbc4b0cca90c18db14841d8098b6fe72bc4d48 (image=quay.io/ceph/ceph:v18, name=youthful_banach, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 14 04:24:23 np0005486808 systemd[1]: Started libpod-conmon-439234f0c5e12c575274a793c7fbc4b0cca90c18db14841d8098b6fe72bc4d48.scope.
Oct 14 04:24:23 np0005486808 podman[99223]: 2025-10-14 08:24:23.87584086 +0000 UTC m=+0.033618181 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:24:23 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:24 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc6fa25b245d75d8a62e995c0ab3c72e39481e59f7db490a715c5735e429bbb0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:24 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc6fa25b245d75d8a62e995c0ab3c72e39481e59f7db490a715c5735e429bbb0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:24 np0005486808 podman[99223]: 2025-10-14 08:24:24.019633589 +0000 UTC m=+0.177410910 container init 439234f0c5e12c575274a793c7fbc4b0cca90c18db14841d8098b6fe72bc4d48 (image=quay.io/ceph/ceph:v18, name=youthful_banach, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:24:24 np0005486808 podman[99223]: 2025-10-14 08:24:24.033254648 +0000 UTC m=+0.191031899 container start 439234f0c5e12c575274a793c7fbc4b0cca90c18db14841d8098b6fe72bc4d48 (image=quay.io/ceph/ceph:v18, name=youthful_banach, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Oct 14 04:24:24 np0005486808 podman[99223]: 2025-10-14 08:24:24.037190293 +0000 UTC m=+0.194967644 container attach 439234f0c5e12c575274a793c7fbc4b0cca90c18db14841d8098b6fe72bc4d48 (image=quay.io/ceph/ceph:v18, name=youthful_banach, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:24:24 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 14 04:24:24 np0005486808 youthful_banach[99258]: 
Oct 14 04:24:24 np0005486808 youthful_banach[99258]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 14 04:24:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v78: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:24:24 np0005486808 systemd[1]: libpod-439234f0c5e12c575274a793c7fbc4b0cca90c18db14841d8098b6fe72bc4d48.scope: Deactivated successfully.
Oct 14 04:24:24 np0005486808 podman[99223]: 2025-10-14 08:24:24.625071956 +0000 UTC m=+0.782849267 container died 439234f0c5e12c575274a793c7fbc4b0cca90c18db14841d8098b6fe72bc4d48 (image=quay.io/ceph/ceph:v18, name=youthful_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 14 04:24:24 np0005486808 systemd[1]: var-lib-containers-storage-overlay-cc6fa25b245d75d8a62e995c0ab3c72e39481e59f7db490a715c5735e429bbb0-merged.mount: Deactivated successfully.
Oct 14 04:24:24 np0005486808 podman[99223]: 2025-10-14 08:24:24.693563499 +0000 UTC m=+0.851340770 container remove 439234f0c5e12c575274a793c7fbc4b0cca90c18db14841d8098b6fe72bc4d48 (image=quay.io/ceph/ceph:v18, name=youthful_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:24:24 np0005486808 systemd[1]: libpod-conmon-439234f0c5e12c575274a793c7fbc4b0cca90c18db14841d8098b6fe72bc4d48.scope: Deactivated successfully.
Oct 14 04:24:24 np0005486808 ansible-async_wrapper.py[99208]: Module complete (99208)
Oct 14 04:24:24 np0005486808 podman[99416]: 2025-10-14 08:24:24.768430945 +0000 UTC m=+0.044338651 container create 149e85e62c17200b17446380737aa4bf2dd3685e2e6e167bbc5757309c15328f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_cray, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:24:24 np0005486808 systemd[1]: Started libpod-conmon-149e85e62c17200b17446380737aa4bf2dd3685e2e6e167bbc5757309c15328f.scope.
Oct 14 04:24:24 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:24 np0005486808 podman[99416]: 2025-10-14 08:24:24.75288375 +0000 UTC m=+0.028791476 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:24:24 np0005486808 podman[99416]: 2025-10-14 08:24:24.862767461 +0000 UTC m=+0.138675247 container init 149e85e62c17200b17446380737aa4bf2dd3685e2e6e167bbc5757309c15328f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_cray, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:24:24 np0005486808 podman[99416]: 2025-10-14 08:24:24.868562001 +0000 UTC m=+0.144469747 container start 149e85e62c17200b17446380737aa4bf2dd3685e2e6e167bbc5757309c15328f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Oct 14 04:24:24 np0005486808 angry_cray[99466]: 167 167
Oct 14 04:24:24 np0005486808 podman[99416]: 2025-10-14 08:24:24.873464899 +0000 UTC m=+0.149372705 container attach 149e85e62c17200b17446380737aa4bf2dd3685e2e6e167bbc5757309c15328f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:24:24 np0005486808 systemd[1]: libpod-149e85e62c17200b17446380737aa4bf2dd3685e2e6e167bbc5757309c15328f.scope: Deactivated successfully.
Oct 14 04:24:24 np0005486808 podman[99416]: 2025-10-14 08:24:24.874419372 +0000 UTC m=+0.150327118 container died 149e85e62c17200b17446380737aa4bf2dd3685e2e6e167bbc5757309c15328f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_cray, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 14 04:24:24 np0005486808 systemd[1]: var-lib-containers-storage-overlay-66f85a779dabc129668ec2aea15379823da88835d50b1a3e743b5470ecc63d20-merged.mount: Deactivated successfully.
Oct 14 04:24:24 np0005486808 podman[99416]: 2025-10-14 08:24:24.917665396 +0000 UTC m=+0.193573102 container remove 149e85e62c17200b17446380737aa4bf2dd3685e2e6e167bbc5757309c15328f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:24:24 np0005486808 systemd[1]: libpod-conmon-149e85e62c17200b17446380737aa4bf2dd3685e2e6e167bbc5757309c15328f.scope: Deactivated successfully.
Oct 14 04:24:25 np0005486808 python3[99483]: ansible-ansible.legacy.async_status Invoked with jid=j136473381820.99201 mode=status _async_dir=/root/.ansible_async
Oct 14 04:24:25 np0005486808 podman[99503]: 2025-10-14 08:24:25.09275502 +0000 UTC m=+0.054540747 container create c7332d8df62722f2bdb77c4c342d0503a3ecda58618efe742f1c0bfbc654a169 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:24:25 np0005486808 systemd[1]: Started libpod-conmon-c7332d8df62722f2bdb77c4c342d0503a3ecda58618efe742f1c0bfbc654a169.scope.
Oct 14 04:24:25 np0005486808 podman[99503]: 2025-10-14 08:24:25.067774477 +0000 UTC m=+0.029560274 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:24:25 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:25 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d2524d402afdbc37a18e37620143ad4c40b4c26dee58fc445467b9fa4a16e67/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:25 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d2524d402afdbc37a18e37620143ad4c40b4c26dee58fc445467b9fa4a16e67/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:25 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d2524d402afdbc37a18e37620143ad4c40b4c26dee58fc445467b9fa4a16e67/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:25 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d2524d402afdbc37a18e37620143ad4c40b4c26dee58fc445467b9fa4a16e67/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:25 np0005486808 podman[99503]: 2025-10-14 08:24:25.188267634 +0000 UTC m=+0.150053401 container init c7332d8df62722f2bdb77c4c342d0503a3ecda58618efe742f1c0bfbc654a169 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ellis, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:24:25 np0005486808 podman[99503]: 2025-10-14 08:24:25.199212608 +0000 UTC m=+0.160998335 container start c7332d8df62722f2bdb77c4c342d0503a3ecda58618efe742f1c0bfbc654a169 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ellis, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:24:25 np0005486808 podman[99503]: 2025-10-14 08:24:25.202972829 +0000 UTC m=+0.164758556 container attach c7332d8df62722f2bdb77c4c342d0503a3ecda58618efe742f1c0bfbc654a169 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ellis, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:24:25 np0005486808 python3[99572]: ansible-ansible.legacy.async_status Invoked with jid=j136473381820.99201 mode=cleanup _async_dir=/root/.ansible_async
Oct 14 04:24:25 np0005486808 python3[99598]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:24:26 np0005486808 podman[99610]: 2025-10-14 08:24:26.032114293 +0000 UTC m=+0.065162133 container create a3530475e59ee6cb7e61fb813dbd38324cbf4fcc228022f4e0f7818e71769b0a (image=quay.io/ceph/ceph:v18, name=elastic_faraday, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Oct 14 04:24:26 np0005486808 systemd[1]: Started libpod-conmon-a3530475e59ee6cb7e61fb813dbd38324cbf4fcc228022f4e0f7818e71769b0a.scope.
Oct 14 04:24:26 np0005486808 podman[99610]: 2025-10-14 08:24:26.0095949 +0000 UTC m=+0.042642800 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:24:26 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:26 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ec669843ba2db5187d316fca2d81f52f18953d59fbbbbf1ccc695b6bcaaba90/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:26 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ec669843ba2db5187d316fca2d81f52f18953d59fbbbbf1ccc695b6bcaaba90/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:26 np0005486808 podman[99610]: 2025-10-14 08:24:26.153620615 +0000 UTC m=+0.186668465 container init a3530475e59ee6cb7e61fb813dbd38324cbf4fcc228022f4e0f7818e71769b0a (image=quay.io/ceph/ceph:v18, name=elastic_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:24:26 np0005486808 podman[99610]: 2025-10-14 08:24:26.161241988 +0000 UTC m=+0.194289848 container start a3530475e59ee6cb7e61fb813dbd38324cbf4fcc228022f4e0f7818e71769b0a (image=quay.io/ceph/ceph:v18, name=elastic_faraday, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 14 04:24:26 np0005486808 podman[99610]: 2025-10-14 08:24:26.16546366 +0000 UTC m=+0.198511500 container attach a3530475e59ee6cb7e61fb813dbd38324cbf4fcc228022f4e0f7818e71769b0a (image=quay.io/ceph/ceph:v18, name=elastic_faraday, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:24:26 np0005486808 adoring_ellis[99554]: {
Oct 14 04:24:26 np0005486808 adoring_ellis[99554]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 04:24:26 np0005486808 adoring_ellis[99554]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:24:26 np0005486808 adoring_ellis[99554]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 04:24:26 np0005486808 adoring_ellis[99554]:        "osd_id": 2,
Oct 14 04:24:26 np0005486808 adoring_ellis[99554]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:24:26 np0005486808 adoring_ellis[99554]:        "type": "bluestore"
Oct 14 04:24:26 np0005486808 adoring_ellis[99554]:    },
Oct 14 04:24:26 np0005486808 adoring_ellis[99554]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 04:24:26 np0005486808 adoring_ellis[99554]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:24:26 np0005486808 adoring_ellis[99554]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 04:24:26 np0005486808 adoring_ellis[99554]:        "osd_id": 1,
Oct 14 04:24:26 np0005486808 adoring_ellis[99554]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:24:26 np0005486808 adoring_ellis[99554]:        "type": "bluestore"
Oct 14 04:24:26 np0005486808 adoring_ellis[99554]:    },
Oct 14 04:24:26 np0005486808 adoring_ellis[99554]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 04:24:26 np0005486808 adoring_ellis[99554]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:24:26 np0005486808 adoring_ellis[99554]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 04:24:26 np0005486808 adoring_ellis[99554]:        "osd_id": 0,
Oct 14 04:24:26 np0005486808 adoring_ellis[99554]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:24:26 np0005486808 adoring_ellis[99554]:        "type": "bluestore"
Oct 14 04:24:26 np0005486808 adoring_ellis[99554]:    }
Oct 14 04:24:26 np0005486808 adoring_ellis[99554]: }
Oct 14 04:24:26 np0005486808 systemd[1]: libpod-c7332d8df62722f2bdb77c4c342d0503a3ecda58618efe742f1c0bfbc654a169.scope: Deactivated successfully.
Oct 14 04:24:26 np0005486808 systemd[1]: libpod-c7332d8df62722f2bdb77c4c342d0503a3ecda58618efe742f1c0bfbc654a169.scope: Consumed 1.045s CPU time.
Oct 14 04:24:26 np0005486808 conmon[99554]: conmon c7332d8df62722f2bdb7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c7332d8df62722f2bdb77c4c342d0503a3ecda58618efe742f1c0bfbc654a169.scope/container/memory.events
Oct 14 04:24:26 np0005486808 podman[99503]: 2025-10-14 08:24:26.247321975 +0000 UTC m=+1.209107702 container died c7332d8df62722f2bdb77c4c342d0503a3ecda58618efe742f1c0bfbc654a169 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ellis, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:24:26 np0005486808 systemd[1]: var-lib-containers-storage-overlay-6d2524d402afdbc37a18e37620143ad4c40b4c26dee58fc445467b9fa4a16e67-merged.mount: Deactivated successfully.
Oct 14 04:24:26 np0005486808 podman[99503]: 2025-10-14 08:24:26.317611841 +0000 UTC m=+1.279397568 container remove c7332d8df62722f2bdb77c4c342d0503a3ecda58618efe742f1c0bfbc654a169 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ellis, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:24:26 np0005486808 systemd[1]: libpod-conmon-c7332d8df62722f2bdb77c4c342d0503a3ecda58618efe742f1c0bfbc654a169.scope: Deactivated successfully.
Oct 14 04:24:26 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:24:26 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:26 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:24:26 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:26 np0005486808 ceph-mgr[74543]: [progress INFO root] update: starting ev 3a68e9b8-c942-4b8c-8766-633f4d4d02ec (Updating rgw.rgw deployment (+1 -> 1))
Oct 14 04:24:26 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.opginv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Oct 14 04:24:26 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.opginv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 14 04:24:26 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.opginv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 14 04:24:26 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Oct 14 04:24:26 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:26 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:24:26 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:24:26 np0005486808 ceph-mgr[74543]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.opginv on compute-0
Oct 14 04:24:26 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.opginv on compute-0
Oct 14 04:24:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v79: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:24:26 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 14 04:24:26 np0005486808 elastic_faraday[99634]: 
Oct 14 04:24:26 np0005486808 elastic_faraday[99634]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 14 04:24:26 np0005486808 systemd[1]: libpod-a3530475e59ee6cb7e61fb813dbd38324cbf4fcc228022f4e0f7818e71769b0a.scope: Deactivated successfully.
Oct 14 04:24:26 np0005486808 podman[99610]: 2025-10-14 08:24:26.759846831 +0000 UTC m=+0.792894651 container died a3530475e59ee6cb7e61fb813dbd38324cbf4fcc228022f4e0f7818e71769b0a (image=quay.io/ceph/ceph:v18, name=elastic_faraday, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:24:26 np0005486808 systemd[1]: var-lib-containers-storage-overlay-5ec669843ba2db5187d316fca2d81f52f18953d59fbbbbf1ccc695b6bcaaba90-merged.mount: Deactivated successfully.
Oct 14 04:24:26 np0005486808 podman[99610]: 2025-10-14 08:24:26.810330578 +0000 UTC m=+0.843378408 container remove a3530475e59ee6cb7e61fb813dbd38324cbf4fcc228022f4e0f7818e71769b0a (image=quay.io/ceph/ceph:v18, name=elastic_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 14 04:24:26 np0005486808 systemd[1]: libpod-conmon-a3530475e59ee6cb7e61fb813dbd38324cbf4fcc228022f4e0f7818e71769b0a.scope: Deactivated successfully.
Oct 14 04:24:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:24:27 np0005486808 podman[99833]: 2025-10-14 08:24:27.048530885 +0000 UTC m=+0.046496472 container create a08f3a6a12f4c00b2552868b9136db8c1a02d1f002b6ed9db3744c39bf4c4fde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_morse, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:24:27 np0005486808 systemd[1]: Started libpod-conmon-a08f3a6a12f4c00b2552868b9136db8c1a02d1f002b6ed9db3744c39bf4c4fde.scope.
Oct 14 04:24:27 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:27 np0005486808 podman[99833]: 2025-10-14 08:24:27.119352654 +0000 UTC m=+0.117318281 container init a08f3a6a12f4c00b2552868b9136db8c1a02d1f002b6ed9db3744c39bf4c4fde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_morse, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:24:27 np0005486808 podman[99833]: 2025-10-14 08:24:27.028175824 +0000 UTC m=+0.026141431 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:24:27 np0005486808 podman[99833]: 2025-10-14 08:24:27.126413054 +0000 UTC m=+0.124378641 container start a08f3a6a12f4c00b2552868b9136db8c1a02d1f002b6ed9db3744c39bf4c4fde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 14 04:24:27 np0005486808 podman[99833]: 2025-10-14 08:24:27.129523399 +0000 UTC m=+0.127489006 container attach a08f3a6a12f4c00b2552868b9136db8c1a02d1f002b6ed9db3744c39bf4c4fde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_morse, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:24:27 np0005486808 thirsty_morse[99850]: 167 167
Oct 14 04:24:27 np0005486808 systemd[1]: libpod-a08f3a6a12f4c00b2552868b9136db8c1a02d1f002b6ed9db3744c39bf4c4fde.scope: Deactivated successfully.
Oct 14 04:24:27 np0005486808 conmon[99850]: conmon a08f3a6a12f4c00b2552 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a08f3a6a12f4c00b2552868b9136db8c1a02d1f002b6ed9db3744c39bf4c4fde.scope/container/memory.events
Oct 14 04:24:27 np0005486808 podman[99833]: 2025-10-14 08:24:27.134067129 +0000 UTC m=+0.132032756 container died a08f3a6a12f4c00b2552868b9136db8c1a02d1f002b6ed9db3744c39bf4c4fde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_morse, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Oct 14 04:24:27 np0005486808 systemd[1]: var-lib-containers-storage-overlay-afaca2d8e6541cd9d3404d3efb91106793acf1d7b7cc02405f337557511d54ed-merged.mount: Deactivated successfully.
Oct 14 04:24:27 np0005486808 podman[99833]: 2025-10-14 08:24:27.186586436 +0000 UTC m=+0.184552063 container remove a08f3a6a12f4c00b2552868b9136db8c1a02d1f002b6ed9db3744c39bf4c4fde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_morse, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:24:27 np0005486808 systemd[1]: libpod-conmon-a08f3a6a12f4c00b2552868b9136db8c1a02d1f002b6ed9db3744c39bf4c4fde.scope: Deactivated successfully.
Oct 14 04:24:27 np0005486808 systemd[1]: Reloading.
Oct 14 04:24:27 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:24:27 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:24:27 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:27 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:27 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.opginv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 14 04:24:27 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.opginv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 14 04:24:27 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:27 np0005486808 ceph-mon[74249]: Deploying daemon rgw.rgw.compute-0.opginv on compute-0
Oct 14 04:24:27 np0005486808 systemd[1]: Reloading.
Oct 14 04:24:27 np0005486808 python3[99932]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:24:27 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:24:27 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:24:27 np0005486808 podman[99968]: 2025-10-14 08:24:27.802446034 +0000 UTC m=+0.058481122 container create 1186d57f293a46245bf6f38a0e7d120ed0a324dd7fa8bd1821d34728368a1d7c (image=quay.io/ceph/ceph:v18, name=epic_wright, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:24:27 np0005486808 podman[99968]: 2025-10-14 08:24:27.776854036 +0000 UTC m=+0.032889194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:24:27 np0005486808 systemd[1]: Started libpod-conmon-1186d57f293a46245bf6f38a0e7d120ed0a324dd7fa8bd1821d34728368a1d7c.scope.
Oct 14 04:24:27 np0005486808 systemd[1]: Starting Ceph rgw.rgw.compute-0.opginv for c49aadb6-9b04-5cb1-8f5f-4c91676c568e...
Oct 14 04:24:27 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:27 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b729f08d72cd8395cd6c5ec1aec2fa25f01466c1762e02e65a6cf1b46d7f6b2e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:27 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b729f08d72cd8395cd6c5ec1aec2fa25f01466c1762e02e65a6cf1b46d7f6b2e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:27 np0005486808 podman[99968]: 2025-10-14 08:24:27.984407364 +0000 UTC m=+0.240442452 container init 1186d57f293a46245bf6f38a0e7d120ed0a324dd7fa8bd1821d34728368a1d7c (image=quay.io/ceph/ceph:v18, name=epic_wright, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 14 04:24:27 np0005486808 podman[99968]: 2025-10-14 08:24:27.991570276 +0000 UTC m=+0.247605374 container start 1186d57f293a46245bf6f38a0e7d120ed0a324dd7fa8bd1821d34728368a1d7c (image=quay.io/ceph/ceph:v18, name=epic_wright, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 14 04:24:27 np0005486808 podman[99968]: 2025-10-14 08:24:27.995246725 +0000 UTC m=+0.251281813 container attach 1186d57f293a46245bf6f38a0e7d120ed0a324dd7fa8bd1821d34728368a1d7c (image=quay.io/ceph/ceph:v18, name=epic_wright, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:24:28 np0005486808 podman[100042]: 2025-10-14 08:24:28.202075835 +0000 UTC m=+0.046978154 container create c883c837a78ada02f7974f70d2fe8e2519e0266d14c76bcc3e46b5b6a3cb585e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-rgw-rgw-compute-0-opginv, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 14 04:24:28 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4322fdac24860cd199670d6642dce9bcf7947019ecafd3255cbcdfa1ddea3f25/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:28 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4322fdac24860cd199670d6642dce9bcf7947019ecafd3255cbcdfa1ddea3f25/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:28 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4322fdac24860cd199670d6642dce9bcf7947019ecafd3255cbcdfa1ddea3f25/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:28 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4322fdac24860cd199670d6642dce9bcf7947019ecafd3255cbcdfa1ddea3f25/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.opginv supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:28 np0005486808 podman[100042]: 2025-10-14 08:24:28.182086783 +0000 UTC m=+0.026989102 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:24:28 np0005486808 podman[100042]: 2025-10-14 08:24:28.297493627 +0000 UTC m=+0.142395996 container init c883c837a78ada02f7974f70d2fe8e2519e0266d14c76bcc3e46b5b6a3cb585e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-rgw-rgw-compute-0-opginv, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 14 04:24:28 np0005486808 podman[100042]: 2025-10-14 08:24:28.307312664 +0000 UTC m=+0.152215003 container start c883c837a78ada02f7974f70d2fe8e2519e0266d14c76bcc3e46b5b6a3cb585e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-rgw-rgw-compute-0-opginv, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 14 04:24:28 np0005486808 bash[100042]: c883c837a78ada02f7974f70d2fe8e2519e0266d14c76bcc3e46b5b6a3cb585e
Oct 14 04:24:28 np0005486808 systemd[1]: Started Ceph rgw.rgw.compute-0.opginv for c49aadb6-9b04-5cb1-8f5f-4c91676c568e.
Oct 14 04:24:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:24:28 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:24:28 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct 14 04:24:28 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:28 np0005486808 radosgw[100062]: deferred set uid:gid to 167:167 (ceph:ceph)
Oct 14 04:24:28 np0005486808 radosgw[100062]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Oct 14 04:24:28 np0005486808 radosgw[100062]: framework: beast
Oct 14 04:24:28 np0005486808 radosgw[100062]: framework conf key: endpoint, val: 192.168.122.100:8082
Oct 14 04:24:28 np0005486808 radosgw[100062]: init_numa not setting numa affinity
Oct 14 04:24:28 np0005486808 ceph-mgr[74543]: [progress INFO root] complete: finished ev 3a68e9b8-c942-4b8c-8766-633f4d4d02ec (Updating rgw.rgw deployment (+1 -> 1))
Oct 14 04:24:28 np0005486808 ceph-mgr[74543]: [progress INFO root] Completed event 3a68e9b8-c942-4b8c-8766-633f4d4d02ec (Updating rgw.rgw deployment (+1 -> 1)) in 2 seconds
Oct 14 04:24:28 np0005486808 ceph-mgr[74543]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Oct 14 04:24:28 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Oct 14 04:24:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct 14 04:24:28 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct 14 04:24:28 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:28 np0005486808 ceph-mgr[74543]: [progress INFO root] update: starting ev b9d8e7ae-219b-4fc7-aef6-2ee93232ed8f (Updating mds.cephfs deployment (+1 -> 1))
Oct 14 04:24:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.qkuhkt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Oct 14 04:24:28 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.qkuhkt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 14 04:24:28 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.qkuhkt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 14 04:24:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:24:28 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:24:28 np0005486808 ceph-mgr[74543]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.qkuhkt on compute-0
Oct 14 04:24:28 np0005486808 ceph-mgr[74543]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.qkuhkt on compute-0
Oct 14 04:24:28 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 14 04:24:28 np0005486808 epic_wright[99989]: 
Oct 14 04:24:28 np0005486808 epic_wright[99989]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Oct 14 04:24:28 np0005486808 systemd[1]: libpod-1186d57f293a46245bf6f38a0e7d120ed0a324dd7fa8bd1821d34728368a1d7c.scope: Deactivated successfully.
Oct 14 04:24:28 np0005486808 podman[99968]: 2025-10-14 08:24:28.574265085 +0000 UTC m=+0.830300153 container died 1186d57f293a46245bf6f38a0e7d120ed0a324dd7fa8bd1821d34728368a1d7c (image=quay.io/ceph/ceph:v18, name=epic_wright, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:24:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v80: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:24:28 np0005486808 systemd[1]: var-lib-containers-storage-overlay-b729f08d72cd8395cd6c5ec1aec2fa25f01466c1762e02e65a6cf1b46d7f6b2e-merged.mount: Deactivated successfully.
Oct 14 04:24:28 np0005486808 podman[99968]: 2025-10-14 08:24:28.642319777 +0000 UTC m=+0.898354885 container remove 1186d57f293a46245bf6f38a0e7d120ed0a324dd7fa8bd1821d34728368a1d7c (image=quay.io/ceph/ceph:v18, name=epic_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 04:24:28 np0005486808 systemd[1]: libpod-conmon-1186d57f293a46245bf6f38a0e7d120ed0a324dd7fa8bd1821d34728368a1d7c.scope: Deactivated successfully.
Oct 14 04:24:28 np0005486808 ansible-async_wrapper.py[99207]: Done in kid B.
Oct 14 04:24:29 np0005486808 podman[100300]: 2025-10-14 08:24:29.126385745 +0000 UTC m=+0.038463509 container create a52c1ae19095ea1a26ae0dd088e7566baa9c2b40753447c8ab769e854ff1dbe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 14 04:24:29 np0005486808 systemd[1]: Started libpod-conmon-a52c1ae19095ea1a26ae0dd088e7566baa9c2b40753447c8ab769e854ff1dbe3.scope.
Oct 14 04:24:29 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:29 np0005486808 podman[100300]: 2025-10-14 08:24:29.110045631 +0000 UTC m=+0.022123405 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:24:29 np0005486808 podman[100300]: 2025-10-14 08:24:29.228860038 +0000 UTC m=+0.140937882 container init a52c1ae19095ea1a26ae0dd088e7566baa9c2b40753447c8ab769e854ff1dbe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_herschel, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:24:29 np0005486808 podman[100300]: 2025-10-14 08:24:29.240426677 +0000 UTC m=+0.152504441 container start a52c1ae19095ea1a26ae0dd088e7566baa9c2b40753447c8ab769e854ff1dbe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 14 04:24:29 np0005486808 podman[100300]: 2025-10-14 08:24:29.244983127 +0000 UTC m=+0.157060881 container attach a52c1ae19095ea1a26ae0dd088e7566baa9c2b40753447c8ab769e854ff1dbe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_herschel, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 14 04:24:29 np0005486808 thirsty_herschel[100316]: 167 167
Oct 14 04:24:29 np0005486808 systemd[1]: libpod-a52c1ae19095ea1a26ae0dd088e7566baa9c2b40753447c8ab769e854ff1dbe3.scope: Deactivated successfully.
Oct 14 04:24:29 np0005486808 podman[100321]: 2025-10-14 08:24:29.293239851 +0000 UTC m=+0.031018939 container died a52c1ae19095ea1a26ae0dd088e7566baa9c2b40753447c8ab769e854ff1dbe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_herschel, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:24:29 np0005486808 systemd[1]: var-lib-containers-storage-overlay-7b741390ec30f7cc0caa9d1826495db2e006dd11f5e171d9ee945ec4005016c4-merged.mount: Deactivated successfully.
Oct 14 04:24:29 np0005486808 podman[100321]: 2025-10-14 08:24:29.334797573 +0000 UTC m=+0.072576601 container remove a52c1ae19095ea1a26ae0dd088e7566baa9c2b40753447c8ab769e854ff1dbe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_herschel, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 14 04:24:29 np0005486808 systemd[1]: libpod-conmon-a52c1ae19095ea1a26ae0dd088e7566baa9c2b40753447c8ab769e854ff1dbe3.scope: Deactivated successfully.
Oct 14 04:24:29 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:29 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:29 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:29 np0005486808 ceph-mon[74249]: Saving service rgw.rgw spec with placement compute-0
Oct 14 04:24:29 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:29 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:29 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.qkuhkt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 14 04:24:29 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.qkuhkt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 14 04:24:29 np0005486808 ceph-mon[74249]: Deploying daemon mds.cephfs.compute-0.qkuhkt on compute-0
Oct 14 04:24:29 np0005486808 systemd[1]: Reloading.
Oct 14 04:24:29 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Oct 14 04:24:29 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Oct 14 04:24:29 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Oct 14 04:24:29 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Oct 14 04:24:29 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2380430549' entity='client.rgw.rgw.compute-0.opginv' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct 14 04:24:29 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:24:29 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:24:29 np0005486808 systemd[1]: Reloading.
Oct 14 04:24:29 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 32 pg[8.0( empty local-lis/les=0/0 n=0 ec=32/32 lis/c=0/0 les/c/f=0/0/0 sis=32) [1] r=0 lpr=32 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:29 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:24:29 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:24:29 np0005486808 python3[100401]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:24:29 np0005486808 podman[100439]: 2025-10-14 08:24:29.916930968 +0000 UTC m=+0.050966260 container create d1472c38e25ede80f321738a575c1d02acc36e2229991e70197d1c47ef6fcf0a (image=quay.io/ceph/ceph:v18, name=busy_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 14 04:24:29 np0005486808 podman[100439]: 2025-10-14 08:24:29.896726191 +0000 UTC m=+0.030761483 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:24:30 np0005486808 systemd[1]: Started libpod-conmon-d1472c38e25ede80f321738a575c1d02acc36e2229991e70197d1c47ef6fcf0a.scope.
Oct 14 04:24:30 np0005486808 systemd[1]: Starting Ceph mds.cephfs.compute-0.qkuhkt for c49aadb6-9b04-5cb1-8f5f-4c91676c568e...
Oct 14 04:24:30 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:30 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88e8f896bc3c310d0367cf8485001b76dcc64b8f1b97daf991112e4e69cad177/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:30 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88e8f896bc3c310d0367cf8485001b76dcc64b8f1b97daf991112e4e69cad177/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:30 np0005486808 podman[100439]: 2025-10-14 08:24:30.062178702 +0000 UTC m=+0.196213974 container init d1472c38e25ede80f321738a575c1d02acc36e2229991e70197d1c47ef6fcf0a (image=quay.io/ceph/ceph:v18, name=busy_chandrasekhar, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:24:30 np0005486808 podman[100439]: 2025-10-14 08:24:30.070280128 +0000 UTC m=+0.204315390 container start d1472c38e25ede80f321738a575c1d02acc36e2229991e70197d1c47ef6fcf0a (image=quay.io/ceph/ceph:v18, name=busy_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 14 04:24:30 np0005486808 podman[100439]: 2025-10-14 08:24:30.073868355 +0000 UTC m=+0.207903627 container attach d1472c38e25ede80f321738a575c1d02acc36e2229991e70197d1c47ef6fcf0a (image=quay.io/ceph/ceph:v18, name=busy_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 14 04:24:30 np0005486808 podman[100509]: 2025-10-14 08:24:30.299385385 +0000 UTC m=+0.061979366 container create bd911ebff7dd4bbfbc15a4ac4c2388c35f19fa7dd295e9c12629c44fd232fc5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mds-cephfs-compute-0-qkuhkt, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:24:30 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de2678ee7bc8ffde4de8f4d1775de90bd427811f6d44452fc983ede74a5b9724/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:30 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de2678ee7bc8ffde4de8f4d1775de90bd427811f6d44452fc983ede74a5b9724/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:30 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de2678ee7bc8ffde4de8f4d1775de90bd427811f6d44452fc983ede74a5b9724/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:30 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de2678ee7bc8ffde4de8f4d1775de90bd427811f6d44452fc983ede74a5b9724/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.qkuhkt supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:30 np0005486808 podman[100509]: 2025-10-14 08:24:30.268760246 +0000 UTC m=+0.031354277 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:24:30 np0005486808 podman[100509]: 2025-10-14 08:24:30.389036318 +0000 UTC m=+0.151630299 container init bd911ebff7dd4bbfbc15a4ac4c2388c35f19fa7dd295e9c12629c44fd232fc5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mds-cephfs-compute-0-qkuhkt, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:24:30 np0005486808 podman[100509]: 2025-10-14 08:24:30.394839648 +0000 UTC m=+0.157433629 container start bd911ebff7dd4bbfbc15a4ac4c2388c35f19fa7dd295e9c12629c44fd232fc5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mds-cephfs-compute-0-qkuhkt, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 14 04:24:30 np0005486808 bash[100509]: bd911ebff7dd4bbfbc15a4ac4c2388c35f19fa7dd295e9c12629c44fd232fc5d
Oct 14 04:24:30 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Oct 14 04:24:30 np0005486808 systemd[1]: Started Ceph mds.cephfs.compute-0.qkuhkt for c49aadb6-9b04-5cb1-8f5f-4c91676c568e.
Oct 14 04:24:30 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2380430549' entity='client.rgw.rgw.compute-0.opginv' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct 14 04:24:30 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Oct 14 04:24:30 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Oct 14 04:24:30 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/2380430549' entity='client.rgw.rgw.compute-0.opginv' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct 14 04:24:30 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 33 pg[8.0( empty local-lis/les=32/33 n=0 ec=32/32 lis/c=0/0 les/c/f=0/0/0 sis=32) [1] r=0 lpr=32 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:30 np0005486808 ceph-mds[100530]: set uid:gid to 167:167 (ceph:ceph)
Oct 14 04:24:30 np0005486808 ceph-mds[100530]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Oct 14 04:24:30 np0005486808 ceph-mds[100530]: main not setting numa affinity
Oct 14 04:24:30 np0005486808 ceph-mds[100530]: pidfile_write: ignore empty --pid-file
Oct 14 04:24:30 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mds-cephfs-compute-0-qkuhkt[100526]: starting mds.cephfs.compute-0.qkuhkt at 
Oct 14 04:24:30 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:24:30 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:30 np0005486808 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt Updating MDS map to version 2 from mon.0
Oct 14 04:24:30 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:24:30 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:30 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct 14 04:24:30 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:30 np0005486808 ceph-mgr[74543]: [progress INFO root] complete: finished ev b9d8e7ae-219b-4fc7-aef6-2ee93232ed8f (Updating mds.cephfs deployment (+1 -> 1))
Oct 14 04:24:30 np0005486808 ceph-mgr[74543]: [progress INFO root] Completed event b9d8e7ae-219b-4fc7-aef6-2ee93232ed8f (Updating mds.cephfs deployment (+1 -> 1)) in 2 seconds
Oct 14 04:24:30 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Oct 14 04:24:30 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:30 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct 14 04:24:30 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v83: 8 pgs: 1 creating+peering, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:24:30 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.14268 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 14 04:24:30 np0005486808 busy_chandrasekhar[100457]: 
Oct 14 04:24:30 np0005486808 busy_chandrasekhar[100457]: [{"container_id": "72b1fb695a87", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.57%", "created": "2025-10-14T08:23:00.914417Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-10-14T08:23:00.956666Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-14T08:24:19.054137Z", "memory_usage": 11639193, "ports": [], "service_name": "crash", "started": "2025-10-14T08:23:00.809155Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e@crash.compute-0", "version": "18.2.7"}, {"daemon_id": "cephfs.compute-0.qkuhkt", "daemon_name": "mds.cephfs.compute-0.qkuhkt", "daemon_type": "mds", "events": ["2025-10-14T08:24:30.498127Z daemon:mds.cephfs.compute-0.qkuhkt [INFO] \"Deployed mds.cephfs.compute-0.qkuhkt on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "ports": [], "service_name": "mds.cephfs", "status": 2, "status_desc": "starting"}, {"container_id": "295afd06ae4a", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "31.00%", "created": "2025-10-14T08:21:49.229566Z", "daemon_id": "compute-0.euuwqu", "daemon_name": "mgr.compute-0.euuwqu", "daemon_type": "mgr", "events": ["2025-10-14T08:23:57.797765Z daemon:mgr.compute-0.euuwqu [INFO] \"Reconfigured mgr.compute-0.euuwqu on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-14T08:24:19.054052Z", "memory_usage": 547566387, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-10-14T08:21:49.101405Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e@mgr.compute-0.euuwqu", "version": "18.2.7"}, {"container_id": "c954a7df2f1a", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "2.28%", "created": "2025-10-14T08:21:43.784410Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-10-14T08:23:56.828569Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-14T08:24:19.053924Z", "memory_request": 2147483648, "memory_usage": 36091985, "ports": [], "service_name": "mon", "started": "2025-10-14T08:21:46.740383Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e@mon.compute-0", "version": "18.2.7"}, {"container_id": "eff7cda87313", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.97%", "created": "2025-10-14T08:23:27.686066Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-10-14T08:23:27.758803Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-14T08:24:19.054218Z", "memory_request": 4294967296, "memory_usage": 58961428, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-14T08:23:27.560198Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e@osd.0", "version": "18.2.7"}, {"container_id": "7925a900700b", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "2.16%", "created": "2025-10-14T08:23:33.032194Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-10-14T08:23:33.114259Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-14T08:24:19.054310Z", "memory_request": 4294967296, "memory_usage": 56969134, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-14T08:23:32.852710Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e@osd.1", "version": "18.2.7"}, {"container_id": "eaa7bcd0bf29", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "2.24%", "created": "2025-10-14T08:23:38.268319Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-10-14T08:23:38.329330Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-14T08:24:19.054392Z", "memory_request": 4294967296, "memory_usage": 59653488, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-14T08:23:38.109108Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e@osd.2", "version": "18.2.7"}, {"daemon_id": "rgw.compute-0.opginv", "daemon_name": "rgw.rgw.compute-0.opginv", "daemon_type": "rgw", "events": ["2025-10-14T08:24:28.386322Z daemon:rgw.rgw.compute-0.opginv [INFO] \"Deployed rgw.rgw.compute-0.opginv on host 'compute-0'\""], "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "ports": [8082], "service_name": "rgw.rgw", "status": 2, "status_desc": "starting"}]
Oct 14 04:24:30 np0005486808 systemd[1]: libpod-d1472c38e25ede80f321738a575c1d02acc36e2229991e70197d1c47ef6fcf0a.scope: Deactivated successfully.
Oct 14 04:24:30 np0005486808 podman[100439]: 2025-10-14 08:24:30.739451843 +0000 UTC m=+0.873487135 container died d1472c38e25ede80f321738a575c1d02acc36e2229991e70197d1c47ef6fcf0a (image=quay.io/ceph/ceph:v18, name=busy_chandrasekhar, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:24:30 np0005486808 systemd[1]: var-lib-containers-storage-overlay-88e8f896bc3c310d0367cf8485001b76dcc64b8f1b97daf991112e4e69cad177-merged.mount: Deactivated successfully.
Oct 14 04:24:30 np0005486808 podman[100439]: 2025-10-14 08:24:30.790793471 +0000 UTC m=+0.924828723 container remove d1472c38e25ede80f321738a575c1d02acc36e2229991e70197d1c47ef6fcf0a (image=quay.io/ceph/ceph:v18, name=busy_chandrasekhar, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 04:24:30 np0005486808 systemd[1]: libpod-conmon-d1472c38e25ede80f321738a575c1d02acc36e2229991e70197d1c47ef6fcf0a.scope: Deactivated successfully.
Oct 14 04:24:31 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Oct 14 04:24:31 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/2380430549' entity='client.rgw.rgw.compute-0.opginv' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct 14 04:24:31 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:31 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:31 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:31 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:31 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:31 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Oct 14 04:24:31 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Oct 14 04:24:31 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Oct 14 04:24:31 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2380430549' entity='client.rgw.rgw.compute-0.opginv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 14 04:24:31 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).mds e3 new map
Oct 14 04:24:31 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).mds e3 print_map#012e3#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-14T08:24:13.775757+0000#012modified#0112025-10-14T08:24:13.775788+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.qkuhkt{-1:14269} state up:standby seq 1 addr [v2:192.168.122.100:6814/3361830772,v1:192.168.122.100:6815/3361830772] compat {c=[1],r=[1],i=[7ff]}]
Oct 14 04:24:31 np0005486808 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt Updating MDS map to version 3 from mon.0
Oct 14 04:24:31 np0005486808 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt Monitors have assigned me to become a standby.
Oct 14 04:24:31 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/3361830772,v1:192.168.122.100:6815/3361830772] up:boot
Oct 14 04:24:31 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/3361830772,v1:192.168.122.100:6815/3361830772] as mds.0
Oct 14 04:24:31 np0005486808 ceph-mon[74249]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.qkuhkt assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct 14 04:24:31 np0005486808 ceph-mon[74249]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct 14 04:24:31 np0005486808 ceph-mon[74249]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct 14 04:24:31 np0005486808 ceph-mon[74249]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 14 04:24:31 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Oct 14 04:24:31 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.qkuhkt"} v 0) v1
Oct 14 04:24:31 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.qkuhkt"}]: dispatch
Oct 14 04:24:31 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).mds e3 all = 0
Oct 14 04:24:31 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).mds e4 new map
Oct 14 04:24:31 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).mds e4 print_map#012e4#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-14T08:24:13.775757+0000#012modified#0112025-10-14T08:24:31.489488+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=14269}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-0.qkuhkt{0:14269} state up:creating seq 1 addr [v2:192.168.122.100:6814/3361830772,v1:192.168.122.100:6815/3361830772] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Oct 14 04:24:31 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.qkuhkt=up:creating}
Oct 14 04:24:31 np0005486808 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt Updating MDS map to version 4 from mon.0
Oct 14 04:24:31 np0005486808 ceph-mds[100530]: mds.0.4 handle_mds_map i am now mds.0.4
Oct 14 04:24:31 np0005486808 ceph-mds[100530]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Oct 14 04:24:31 np0005486808 ceph-mds[100530]: mds.0.cache creating system inode with ino:0x1
Oct 14 04:24:31 np0005486808 ceph-mds[100530]: mds.0.cache creating system inode with ino:0x100
Oct 14 04:24:31 np0005486808 ceph-mds[100530]: mds.0.cache creating system inode with ino:0x600
Oct 14 04:24:31 np0005486808 ceph-mds[100530]: mds.0.cache creating system inode with ino:0x601
Oct 14 04:24:31 np0005486808 ceph-mds[100530]: mds.0.cache creating system inode with ino:0x602
Oct 14 04:24:31 np0005486808 ceph-mds[100530]: mds.0.cache creating system inode with ino:0x603
Oct 14 04:24:31 np0005486808 ceph-mds[100530]: mds.0.cache creating system inode with ino:0x604
Oct 14 04:24:31 np0005486808 ceph-mds[100530]: mds.0.cache creating system inode with ino:0x605
Oct 14 04:24:31 np0005486808 ceph-mds[100530]: mds.0.cache creating system inode with ino:0x606
Oct 14 04:24:31 np0005486808 ceph-mds[100530]: mds.0.cache creating system inode with ino:0x607
Oct 14 04:24:31 np0005486808 ceph-mds[100530]: mds.0.cache creating system inode with ino:0x608
Oct 14 04:24:31 np0005486808 ceph-mds[100530]: mds.0.cache creating system inode with ino:0x609
Oct 14 04:24:31 np0005486808 ceph-mds[100530]: mds.0.4 creating_done
Oct 14 04:24:31 np0005486808 ceph-mon[74249]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.qkuhkt is now active in filesystem cephfs as rank 0
Oct 14 04:24:31 np0005486808 podman[100833]: 2025-10-14 08:24:31.654730444 +0000 UTC m=+0.079367616 container exec c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 04:24:31 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 34 pg[9.0( empty local-lis/les=0/0 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [1] r=0 lpr=34 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:31 np0005486808 python3[100849]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:24:31 np0005486808 podman[100833]: 2025-10-14 08:24:31.791276648 +0000 UTC m=+0.215913790 container exec_died c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 14 04:24:31 np0005486808 podman[100856]: 2025-10-14 08:24:31.827798399 +0000 UTC m=+0.069134079 container create 863b01783d4400525d1e08c6e6710a749a9c3bb3d1371c976a20861f331ee40d (image=quay.io/ceph/ceph:v18, name=elegant_rosalind, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 14 04:24:31 np0005486808 systemd[1]: Started libpod-conmon-863b01783d4400525d1e08c6e6710a749a9c3bb3d1371c976a20861f331ee40d.scope.
Oct 14 04:24:31 np0005486808 podman[100856]: 2025-10-14 08:24:31.80130428 +0000 UTC m=+0.042640020 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:24:31 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:31 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9dc40ed7f5762784dac259e68aca9e18d766d4b02c56987ba28f6d735faf25b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:31 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9dc40ed7f5762784dac259e68aca9e18d766d4b02c56987ba28f6d735faf25b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:31 np0005486808 podman[100856]: 2025-10-14 08:24:31.933440498 +0000 UTC m=+0.174776188 container init 863b01783d4400525d1e08c6e6710a749a9c3bb3d1371c976a20861f331ee40d (image=quay.io/ceph/ceph:v18, name=elegant_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Oct 14 04:24:31 np0005486808 podman[100856]: 2025-10-14 08:24:31.941441271 +0000 UTC m=+0.182776921 container start 863b01783d4400525d1e08c6e6710a749a9c3bb3d1371c976a20861f331ee40d (image=quay.io/ceph/ceph:v18, name=elegant_rosalind, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 14 04:24:31 np0005486808 podman[100856]: 2025-10-14 08:24:31.945073999 +0000 UTC m=+0.186409669 container attach 863b01783d4400525d1e08c6e6710a749a9c3bb3d1371c976a20861f331ee40d (image=quay.io/ceph/ceph:v18, name=elegant_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 14 04:24:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:24:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Oct 14 04:24:32 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2380430549' entity='client.rgw.rgw.compute-0.opginv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 14 04:24:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Oct 14 04:24:32 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Oct 14 04:24:32 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/2380430549' entity='client.rgw.rgw.compute-0.opginv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 14 04:24:32 np0005486808 ceph-mon[74249]: daemon mds.cephfs.compute-0.qkuhkt assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct 14 04:24:32 np0005486808 ceph-mon[74249]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct 14 04:24:32 np0005486808 ceph-mon[74249]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct 14 04:24:32 np0005486808 ceph-mon[74249]: Cluster is now healthy
Oct 14 04:24:32 np0005486808 ceph-mon[74249]: daemon mds.cephfs.compute-0.qkuhkt is now active in filesystem cephfs as rank 0
Oct 14 04:24:32 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 35 pg[9.0( empty local-lis/les=34/35 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [1] r=0 lpr=34 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).mds e5 new map
Oct 14 04:24:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).mds e5 print_map#012e5#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-14T08:24:13.775757+0000#012modified#0112025-10-14T08:24:32.525050+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=14269}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-0.qkuhkt{0:14269} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/3361830772,v1:192.168.122.100:6815/3361830772] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Oct 14 04:24:32 np0005486808 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt Updating MDS map to version 5 from mon.0
Oct 14 04:24:32 np0005486808 ceph-mds[100530]: mds.0.4 handle_mds_map i am now mds.0.4
Oct 14 04:24:32 np0005486808 ceph-mds[100530]: mds.0.4 handle_mds_map state change up:creating --> up:active
Oct 14 04:24:32 np0005486808 ceph-mds[100530]: mds.0.4 recovery_done -- successful recovery!
Oct 14 04:24:32 np0005486808 ceph-mds[100530]: mds.0.4 active_start
Oct 14 04:24:32 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/3361830772,v1:192.168.122.100:6815/3361830772] up:active
Oct 14 04:24:32 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.qkuhkt=up:active}
Oct 14 04:24:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct 14 04:24:32 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/360711209' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 14 04:24:32 np0005486808 elegant_rosalind[100895]: 
Oct 14 04:24:32 np0005486808 elegant_rosalind[100895]: {"fsid":"c49aadb6-9b04-5cb1-8f5f-4c91676c568e","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":165,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":35,"num_osds":3,"num_up_osds":3,"osd_up_since":1760430224,"num_in_osds":3,"osd_in_since":1760430197,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7},{"state_name":"creating+peering","count":1}],"num_pgs":8,"num_pools":8,"num_objects":2,"data_bytes":459280,"bytes_used":83853312,"bytes_avail":64328073216,"bytes_total":64411926528,"inactive_pgs_ratio":0.125},"fsmap":{"epoch":5,"id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.qkuhkt","status":"up:active","gid":14269}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-14T08:23:34.602537+0000","services":{}},"progress_events":{}}
Oct 14 04:24:32 np0005486808 systemd[1]: libpod-863b01783d4400525d1e08c6e6710a749a9c3bb3d1371c976a20861f331ee40d.scope: Deactivated successfully.
Oct 14 04:24:32 np0005486808 podman[100856]: 2025-10-14 08:24:32.555055405 +0000 UTC m=+0.796391085 container died 863b01783d4400525d1e08c6e6710a749a9c3bb3d1371c976a20861f331ee40d (image=quay.io/ceph/ceph:v18, name=elegant_rosalind, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 14 04:24:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:24:32 np0005486808 systemd[1]: var-lib-containers-storage-overlay-d9dc40ed7f5762784dac259e68aca9e18d766d4b02c56987ba28f6d735faf25b-merged.mount: Deactivated successfully.
Oct 14 04:24:32 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:24:32 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:24:32 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:24:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 04:24:32 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:24:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 04:24:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_08:24:32
Oct 14 04:24:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 04:24:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Some PGs (0.111111) are unknown; try again later
Oct 14 04:24:32 np0005486808 podman[100856]: 2025-10-14 08:24:32.612899091 +0000 UTC m=+0.854234741 container remove 863b01783d4400525d1e08c6e6710a749a9c3bb3d1371c976a20861f331ee40d (image=quay.io/ceph/ceph:v18, name=elegant_rosalind, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 14 04:24:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v86: 9 pgs: 1 unknown, 1 creating+peering, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:24:32 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:32 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev c6306454-d8cf-4c86-9cee-f92ae34e21bd does not exist
Oct 14 04:24:32 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 4530b8da-8d57-4c6d-aacf-f22f56e84f61 does not exist
Oct 14 04:24:32 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev a06355f6-14ac-435d-bcd7-da372f7059f1 does not exist
Oct 14 04:24:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 04:24:32 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 04:24:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 04:24:32 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:24:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:24:32 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:24:32 np0005486808 systemd[1]: libpod-conmon-863b01783d4400525d1e08c6e6710a749a9c3bb3d1371c976a20861f331ee40d.scope: Deactivated successfully.
Oct 14 04:24:32 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 04:24:32 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:24:32 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 04:24:32 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:24:32 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 14 04:24:32 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:24:32 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 14 04:24:32 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:24:32 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 14 04:24:32 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:24:32 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 14 04:24:32 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:24:32 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 16 (current 1)
Oct 14 04:24:32 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:24:32 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 14 04:24:32 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:24:32 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 14 04:24:32 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:24:32 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 14 04:24:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Oct 14 04:24:32 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct 14 04:24:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 04:24:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:24:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:24:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 04:24:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:24:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:24:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:24:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:24:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:24:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:24:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:24:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:24:32 np0005486808 ceph-mgr[74543]: [progress INFO root] Writing back 5 completed events
Oct 14 04:24:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 14 04:24:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:24:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:24:32 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:24:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:24:33 np0005486808 podman[101187]: 2025-10-14 08:24:33.326703332 +0000 UTC m=+0.072528641 container create caa1df4e9ec44cea37dcbbc33d8a3539713b6d723fa4ae275251f26e7dcbaad7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_galois, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 14 04:24:33 np0005486808 systemd[1]: Started libpod-conmon-caa1df4e9ec44cea37dcbbc33d8a3539713b6d723fa4ae275251f26e7dcbaad7.scope.
Oct 14 04:24:33 np0005486808 podman[101187]: 2025-10-14 08:24:33.293937632 +0000 UTC m=+0.039762991 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:24:33 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:33 np0005486808 podman[101187]: 2025-10-14 08:24:33.413605859 +0000 UTC m=+0.159431178 container init caa1df4e9ec44cea37dcbbc33d8a3539713b6d723fa4ae275251f26e7dcbaad7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:24:33 np0005486808 podman[101187]: 2025-10-14 08:24:33.424479701 +0000 UTC m=+0.170304970 container start caa1df4e9ec44cea37dcbbc33d8a3539713b6d723fa4ae275251f26e7dcbaad7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Oct 14 04:24:33 np0005486808 podman[101187]: 2025-10-14 08:24:33.428297633 +0000 UTC m=+0.174122982 container attach caa1df4e9ec44cea37dcbbc33d8a3539713b6d723fa4ae275251f26e7dcbaad7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_galois, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:24:33 np0005486808 infallible_galois[101203]: 167 167
Oct 14 04:24:33 np0005486808 systemd[1]: libpod-caa1df4e9ec44cea37dcbbc33d8a3539713b6d723fa4ae275251f26e7dcbaad7.scope: Deactivated successfully.
Oct 14 04:24:33 np0005486808 podman[101187]: 2025-10-14 08:24:33.431271305 +0000 UTC m=+0.177096614 container died caa1df4e9ec44cea37dcbbc33d8a3539713b6d723fa4ae275251f26e7dcbaad7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:24:33 np0005486808 systemd[1]: var-lib-containers-storage-overlay-d479b0520a64b69067e256f9c4837619877a8d73c68dc8073b4e838649146e9d-merged.mount: Deactivated successfully.
Oct 14 04:24:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Oct 14 04:24:33 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/2380430549' entity='client.rgw.rgw.compute-0.opginv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 14 04:24:33 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:33 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:33 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:24:33 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:33 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:24:33 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct 14 04:24:33 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:33 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct 14 04:24:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Oct 14 04:24:33 np0005486808 podman[101187]: 2025-10-14 08:24:33.480951414 +0000 UTC m=+0.226776723 container remove caa1df4e9ec44cea37dcbbc33d8a3539713b6d723fa4ae275251f26e7dcbaad7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_galois, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 14 04:24:33 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Oct 14 04:24:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Oct 14 04:24:33 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2380430549' entity='client.rgw.rgw.compute-0.opginv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 14 04:24:33 np0005486808 ceph-mgr[74543]: [progress INFO root] update: starting ev 7cb515dc-6f52-4d21-a2c8-9809e6a2dd8c (PG autoscaler increasing pool 2 PGs from 1 to 32)
Oct 14 04:24:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Oct 14 04:24:33 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct 14 04:24:33 np0005486808 systemd[1]: libpod-conmon-caa1df4e9ec44cea37dcbbc33d8a3539713b6d723fa4ae275251f26e7dcbaad7.scope: Deactivated successfully.
Oct 14 04:24:33 np0005486808 python3[101244]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:24:33 np0005486808 podman[101252]: 2025-10-14 08:24:33.659213034 +0000 UTC m=+0.043511580 container create 338773ebb83c7402d479b129d59d36b1e44fffe6864b4b018f4b75346ccb1dc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:24:33 np0005486808 podman[101259]: 2025-10-14 08:24:33.676483411 +0000 UTC m=+0.048090961 container create 8aa044176c76fd48238e27efe00c78e2a76c7fe6a05bc5da4dd82a64063fc73f (image=quay.io/ceph/ceph:v18, name=amazing_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 14 04:24:33 np0005486808 systemd[1]: Started libpod-conmon-338773ebb83c7402d479b129d59d36b1e44fffe6864b4b018f4b75346ccb1dc3.scope.
Oct 14 04:24:33 np0005486808 systemd[1]: Started libpod-conmon-8aa044176c76fd48238e27efe00c78e2a76c7fe6a05bc5da4dd82a64063fc73f.scope.
Oct 14 04:24:33 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:33 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac4488feb9737437ab66e74b1c7b4859f55372b2577d02693f48716dca55a33a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:33 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac4488feb9737437ab66e74b1c7b4859f55372b2577d02693f48716dca55a33a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:33 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac4488feb9737437ab66e74b1c7b4859f55372b2577d02693f48716dca55a33a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:33 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac4488feb9737437ab66e74b1c7b4859f55372b2577d02693f48716dca55a33a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:33 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac4488feb9737437ab66e74b1c7b4859f55372b2577d02693f48716dca55a33a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:33 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d630042ba45880bce7b239fb1465c7a5287199d32be3c0cc989d5f3cba4fbb2a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:33 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d630042ba45880bce7b239fb1465c7a5287199d32be3c0cc989d5f3cba4fbb2a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:33 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:33 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 36 pg[10.0( empty local-lis/les=0/0 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [2] r=0 lpr=36 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:33 np0005486808 podman[101252]: 2025-10-14 08:24:33.638981166 +0000 UTC m=+0.023279742 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:24:33 np0005486808 podman[101252]: 2025-10-14 08:24:33.734663315 +0000 UTC m=+0.118961851 container init 338773ebb83c7402d479b129d59d36b1e44fffe6864b4b018f4b75346ccb1dc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lederberg, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 14 04:24:33 np0005486808 podman[101252]: 2025-10-14 08:24:33.743677632 +0000 UTC m=+0.127976158 container start 338773ebb83c7402d479b129d59d36b1e44fffe6864b4b018f4b75346ccb1dc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 14 04:24:33 np0005486808 podman[101259]: 2025-10-14 08:24:33.650798741 +0000 UTC m=+0.022406341 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:24:33 np0005486808 podman[101259]: 2025-10-14 08:24:33.749270777 +0000 UTC m=+0.120878337 container init 8aa044176c76fd48238e27efe00c78e2a76c7fe6a05bc5da4dd82a64063fc73f (image=quay.io/ceph/ceph:v18, name=amazing_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 14 04:24:33 np0005486808 podman[101252]: 2025-10-14 08:24:33.755749614 +0000 UTC m=+0.140048160 container attach 338773ebb83c7402d479b129d59d36b1e44fffe6864b4b018f4b75346ccb1dc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lederberg, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 14 04:24:33 np0005486808 podman[101259]: 2025-10-14 08:24:33.760897388 +0000 UTC m=+0.132504978 container start 8aa044176c76fd48238e27efe00c78e2a76c7fe6a05bc5da4dd82a64063fc73f (image=quay.io/ceph/ceph:v18, name=amazing_bhaskara, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 14 04:24:33 np0005486808 podman[101259]: 2025-10-14 08:24:33.764827623 +0000 UTC m=+0.136435173 container attach 8aa044176c76fd48238e27efe00c78e2a76c7fe6a05bc5da4dd82a64063fc73f (image=quay.io/ceph/ceph:v18, name=amazing_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:24:34 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 14 04:24:34 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4043303065' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 14 04:24:34 np0005486808 amazing_bhaskara[101286]: 
Oct 14 04:24:34 np0005486808 amazing_bhaskara[101286]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, 
admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","
can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.opginv","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Oct 14 04:24:34 np0005486808 systemd[1]: libpod-8aa044176c76fd48238e27efe00c78e2a76c7fe6a05bc5da4dd82a64063fc73f.scope: Deactivated successfully.
Oct 14 04:24:34 np0005486808 podman[101259]: 2025-10-14 08:24:34.269087568 +0000 UTC m=+0.640695188 container died 8aa044176c76fd48238e27efe00c78e2a76c7fe6a05bc5da4dd82a64063fc73f (image=quay.io/ceph/ceph:v18, name=amazing_bhaskara, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 14 04:24:34 np0005486808 systemd[1]: var-lib-containers-storage-overlay-d630042ba45880bce7b239fb1465c7a5287199d32be3c0cc989d5f3cba4fbb2a-merged.mount: Deactivated successfully.
Oct 14 04:24:34 np0005486808 podman[101259]: 2025-10-14 08:24:34.325827887 +0000 UTC m=+0.697435467 container remove 8aa044176c76fd48238e27efe00c78e2a76c7fe6a05bc5da4dd82a64063fc73f (image=quay.io/ceph/ceph:v18, name=amazing_bhaskara, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Oct 14 04:24:34 np0005486808 systemd[1]: libpod-conmon-8aa044176c76fd48238e27efe00c78e2a76c7fe6a05bc5da4dd82a64063fc73f.scope: Deactivated successfully.
Oct 14 04:24:34 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Oct 14 04:24:34 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2380430549' entity='client.rgw.rgw.compute-0.opginv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 14 04:24:34 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct 14 04:24:34 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Oct 14 04:24:34 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Oct 14 04:24:34 np0005486808 ceph-mgr[74543]: [progress INFO root] update: starting ev b51fc090-75c0-4304-ad44-9939b2073505 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Oct 14 04:24:34 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Oct 14 04:24:34 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct 14 04:24:34 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct 14 04:24:34 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/2380430549' entity='client.rgw.rgw.compute-0.opginv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 14 04:24:34 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct 14 04:24:34 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 37 pg[10.0( empty local-lis/les=36/37 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [2] r=0 lpr=36 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v89: 10 pgs: 2 unknown, 1 creating+peering, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:24:34 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 14 04:24:34 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 14 04:24:34 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 14 04:24:34 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 14 04:24:34 np0005486808 keen_lederberg[101281]: --> passed data devices: 0 physical, 3 LVM
Oct 14 04:24:34 np0005486808 keen_lederberg[101281]: --> relative data size: 1.0
Oct 14 04:24:34 np0005486808 keen_lederberg[101281]: --> All data devices are unavailable
Oct 14 04:24:34 np0005486808 systemd[1]: libpod-338773ebb83c7402d479b129d59d36b1e44fffe6864b4b018f4b75346ccb1dc3.scope: Deactivated successfully.
Oct 14 04:24:34 np0005486808 systemd[1]: libpod-338773ebb83c7402d479b129d59d36b1e44fffe6864b4b018f4b75346ccb1dc3.scope: Consumed 1.069s CPU time.
Oct 14 04:24:34 np0005486808 podman[101252]: 2025-10-14 08:24:34.878796547 +0000 UTC m=+1.263095113 container died 338773ebb83c7402d479b129d59d36b1e44fffe6864b4b018f4b75346ccb1dc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lederberg, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 14 04:24:34 np0005486808 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 04:24:34 np0005486808 systemd[1]: var-lib-containers-storage-overlay-ac4488feb9737437ab66e74b1c7b4859f55372b2577d02693f48716dca55a33a-merged.mount: Deactivated successfully.
Oct 14 04:24:34 np0005486808 podman[101252]: 2025-10-14 08:24:34.96761317 +0000 UTC m=+1.351911746 container remove 338773ebb83c7402d479b129d59d36b1e44fffe6864b4b018f4b75346ccb1dc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lederberg, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:24:34 np0005486808 systemd[1]: libpod-conmon-338773ebb83c7402d479b129d59d36b1e44fffe6864b4b018f4b75346ccb1dc3.scope: Deactivated successfully.
Oct 14 04:24:35 np0005486808 python3[101450]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:24:35 np0005486808 podman[101498]: 2025-10-14 08:24:35.464367615 +0000 UTC m=+0.059003465 container create 00353e3306f7a2da25020828392d7258cd2102f09bb7d8ad9836daedcaeeb95b (image=quay.io/ceph/ceph:v18, name=dreamy_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:24:35 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Oct 14 04:24:35 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct 14 04:24:35 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct 14 04:24:35 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct 14 04:24:35 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Oct 14 04:24:35 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 38 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=38 pruub=10.945192337s) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 active pruub 71.871986389s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:35 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 38 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=38 pruub=10.945192337s) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 unknown pruub 71.871986389s@ mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:35 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Oct 14 04:24:35 np0005486808 ceph-mgr[74543]: [progress INFO root] update: starting ev d09dbbbf-53bf-4bc1-9a2e-7a62a4ed82d7 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Oct 14 04:24:35 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 38 pg[11.0( empty local-lis/les=0/0 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [1] r=0 lpr=38 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:35 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Oct 14 04:24:35 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4144595215' entity='client.rgw.rgw.compute-0.opginv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 14 04:24:35 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Oct 14 04:24:35 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct 14 04:24:35 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/2380430549' entity='client.rgw.rgw.compute-0.opginv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 14 04:24:35 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct 14 04:24:35 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct 14 04:24:35 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 14 04:24:35 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 14 04:24:35 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct 14 04:24:35 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct 14 04:24:35 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct 14 04:24:35 np0005486808 systemd[1]: Started libpod-conmon-00353e3306f7a2da25020828392d7258cd2102f09bb7d8ad9836daedcaeeb95b.scope.
Oct 14 04:24:35 np0005486808 podman[101498]: 2025-10-14 08:24:35.437419785 +0000 UTC m=+0.032055725 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:24:35 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:35 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08d67dd9f43b83fc1edf96989ee3eba37c274f9628ddf1289a88a99d884bdd22/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:35 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08d67dd9f43b83fc1edf96989ee3eba37c274f9628ddf1289a88a99d884bdd22/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:35 np0005486808 podman[101498]: 2025-10-14 08:24:35.584407891 +0000 UTC m=+0.179043771 container init 00353e3306f7a2da25020828392d7258cd2102f09bb7d8ad9836daedcaeeb95b (image=quay.io/ceph/ceph:v18, name=dreamy_mccarthy, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 14 04:24:35 np0005486808 podman[101498]: 2025-10-14 08:24:35.594141966 +0000 UTC m=+0.188777856 container start 00353e3306f7a2da25020828392d7258cd2102f09bb7d8ad9836daedcaeeb95b (image=quay.io/ceph/ceph:v18, name=dreamy_mccarthy, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:24:35 np0005486808 podman[101498]: 2025-10-14 08:24:35.597975898 +0000 UTC m=+0.192611778 container attach 00353e3306f7a2da25020828392d7258cd2102f09bb7d8ad9836daedcaeeb95b (image=quay.io/ceph/ceph:v18, name=dreamy_mccarthy, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 14 04:24:35 np0005486808 podman[101556]: 2025-10-14 08:24:35.855210994 +0000 UTC m=+0.078841483 container create 4074e055e43852438b710b08627eab725a9ea79fce831be05a2ff3fdf2f4907f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_newton, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 14 04:24:35 np0005486808 systemd[1]: Started libpod-conmon-4074e055e43852438b710b08627eab725a9ea79fce831be05a2ff3fdf2f4907f.scope.
Oct 14 04:24:35 np0005486808 podman[101556]: 2025-10-14 08:24:35.82230172 +0000 UTC m=+0.045932290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:24:35 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:35 np0005486808 podman[101556]: 2025-10-14 08:24:35.941155318 +0000 UTC m=+0.164785837 container init 4074e055e43852438b710b08627eab725a9ea79fce831be05a2ff3fdf2f4907f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_newton, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 14 04:24:35 np0005486808 podman[101556]: 2025-10-14 08:24:35.950877583 +0000 UTC m=+0.174508062 container start 4074e055e43852438b710b08627eab725a9ea79fce831be05a2ff3fdf2f4907f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:24:35 np0005486808 podman[101556]: 2025-10-14 08:24:35.955076044 +0000 UTC m=+0.178706573 container attach 4074e055e43852438b710b08627eab725a9ea79fce831be05a2ff3fdf2f4907f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:24:35 np0005486808 objective_newton[101582]: 167 167
Oct 14 04:24:35 np0005486808 systemd[1]: libpod-4074e055e43852438b710b08627eab725a9ea79fce831be05a2ff3fdf2f4907f.scope: Deactivated successfully.
Oct 14 04:24:35 np0005486808 podman[101556]: 2025-10-14 08:24:35.957742578 +0000 UTC m=+0.181373097 container died 4074e055e43852438b710b08627eab725a9ea79fce831be05a2ff3fdf2f4907f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_newton, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:24:35 np0005486808 systemd[1]: var-lib-containers-storage-overlay-698c9ef706abc5756e2a65f05c110984692d5596309b4129de2d4308ebe86e71-merged.mount: Deactivated successfully.
Oct 14 04:24:36 np0005486808 podman[101556]: 2025-10-14 08:24:36.008039152 +0000 UTC m=+0.231669641 container remove 4074e055e43852438b710b08627eab725a9ea79fce831be05a2ff3fdf2f4907f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:24:36 np0005486808 systemd[1]: libpod-conmon-4074e055e43852438b710b08627eab725a9ea79fce831be05a2ff3fdf2f4907f.scope: Deactivated successfully.
Oct 14 04:24:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Oct 14 04:24:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3005933756' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Oct 14 04:24:36 np0005486808 dreamy_mccarthy[101515]: mimic
Oct 14 04:24:36 np0005486808 systemd[1]: libpod-00353e3306f7a2da25020828392d7258cd2102f09bb7d8ad9836daedcaeeb95b.scope: Deactivated successfully.
Oct 14 04:24:36 np0005486808 conmon[101515]: conmon 00353e3306f7a2da2502 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-00353e3306f7a2da25020828392d7258cd2102f09bb7d8ad9836daedcaeeb95b.scope/container/memory.events
Oct 14 04:24:36 np0005486808 podman[101498]: 2025-10-14 08:24:36.18162821 +0000 UTC m=+0.776264100 container died 00353e3306f7a2da25020828392d7258cd2102f09bb7d8ad9836daedcaeeb95b (image=quay.io/ceph/ceph:v18, name=dreamy_mccarthy, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:24:36 np0005486808 systemd[1]: var-lib-containers-storage-overlay-08d67dd9f43b83fc1edf96989ee3eba37c274f9628ddf1289a88a99d884bdd22-merged.mount: Deactivated successfully.
Oct 14 04:24:36 np0005486808 podman[101617]: 2025-10-14 08:24:36.242513909 +0000 UTC m=+0.082220315 container create 1222654a5aa0eb1cda837e37aec3687eff7dd9bb51c0e12ed592d074cb0ad2f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:24:36 np0005486808 podman[101498]: 2025-10-14 08:24:36.255968063 +0000 UTC m=+0.850603923 container remove 00353e3306f7a2da25020828392d7258cd2102f09bb7d8ad9836daedcaeeb95b (image=quay.io/ceph/ceph:v18, name=dreamy_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Oct 14 04:24:36 np0005486808 systemd[1]: libpod-conmon-00353e3306f7a2da25020828392d7258cd2102f09bb7d8ad9836daedcaeeb95b.scope: Deactivated successfully.
Oct 14 04:24:36 np0005486808 podman[101617]: 2025-10-14 08:24:36.204335698 +0000 UTC m=+0.044042174 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:24:36 np0005486808 systemd[1]: Started libpod-conmon-1222654a5aa0eb1cda837e37aec3687eff7dd9bb51c0e12ed592d074cb0ad2f7.scope.
Oct 14 04:24:36 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:36 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91db51fd290747168829f1c102f442d65f502c25c101b913d3a14e2049ab9814/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:36 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91db51fd290747168829f1c102f442d65f502c25c101b913d3a14e2049ab9814/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:36 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91db51fd290747168829f1c102f442d65f502c25c101b913d3a14e2049ab9814/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:36 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91db51fd290747168829f1c102f442d65f502c25c101b913d3a14e2049ab9814/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:36 np0005486808 podman[101617]: 2025-10-14 08:24:36.350801831 +0000 UTC m=+0.190508267 container init 1222654a5aa0eb1cda837e37aec3687eff7dd9bb51c0e12ed592d074cb0ad2f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_banach, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:24:36 np0005486808 podman[101617]: 2025-10-14 08:24:36.365773432 +0000 UTC m=+0.205479838 container start 1222654a5aa0eb1cda837e37aec3687eff7dd9bb51c0e12ed592d074cb0ad2f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:24:36 np0005486808 podman[101617]: 2025-10-14 08:24:36.369598015 +0000 UTC m=+0.209304421 container attach 1222654a5aa0eb1cda837e37aec3687eff7dd9bb51c0e12ed592d074cb0ad2f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:24:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Oct 14 04:24:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4144595215' entity='client.rgw.rgw.compute-0.opginv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 14 04:24:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct 14 04:24:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Oct 14 04:24:36 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Oct 14 04:24:36 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 38 pg[2.0( empty local-lis/les=17/18 n=0 ec=12/12 lis/c=17/17 les/c/f=18/18/0 sis=38 pruub=12.988632202s) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 active pruub 69.742698669s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:36 np0005486808 ceph-mgr[74543]: [progress INFO root] update: starting ev 4f13cf6e-8e9a-4467-8bfc-78db73685079 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Oct 14 04:24:36 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 39 pg[2.0( empty local-lis/les=17/18 n=0 ec=12/12 lis/c=17/17 les/c/f=18/18/0 sis=38 pruub=12.988632202s) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 unknown pruub 69.742698669s@ mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0) v1
Oct 14 04:24:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Oct 14 04:24:36 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 39 pg[2.1f( empty local-lis/les=17/18 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 39 pg[2.1d( empty local-lis/les=17/18 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 39 pg[2.2( empty local-lis/les=17/18 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 39 pg[2.1e( empty local-lis/les=17/18 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 39 pg[2.3( empty local-lis/les=17/18 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 39 pg[2.4( empty local-lis/les=17/18 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.1e( empty local-lis/les=14/15 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 39 pg[2.7( empty local-lis/les=17/18 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 39 pg[2.1( empty local-lis/les=17/18 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 39 pg[2.8( empty local-lis/les=17/18 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 39 pg[2.5( empty local-lis/les=17/18 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 39 pg[2.6( empty local-lis/les=17/18 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 39 pg[2.b( empty local-lis/les=17/18 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 39 pg[2.c( empty local-lis/les=17/18 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 39 pg[2.9( empty local-lis/les=17/18 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 39 pg[2.a( empty local-lis/les=17/18 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 39 pg[2.f( empty local-lis/les=17/18 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 39 pg[2.d( empty local-lis/les=17/18 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 39 pg[2.e( empty local-lis/les=17/18 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 39 pg[2.10( empty local-lis/les=17/18 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 39 pg[2.11( empty local-lis/les=17/18 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 39 pg[2.12( empty local-lis/les=17/18 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 39 pg[2.13( empty local-lis/les=17/18 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 39 pg[2.14( empty local-lis/les=17/18 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 39 pg[2.15( empty local-lis/les=17/18 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 39 pg[2.16( empty local-lis/les=17/18 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 39 pg[2.18( empty local-lis/les=17/18 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 39 pg[2.17( empty local-lis/les=17/18 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 39 pg[2.19( empty local-lis/les=17/18 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 39 pg[2.1a( empty local-lis/les=17/18 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 39 pg[2.1b( empty local-lis/les=17/18 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 39 pg[2.1c( empty local-lis/les=17/18 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.1f( empty local-lis/les=14/15 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.1d( empty local-lis/les=14/15 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.1c( empty local-lis/les=14/15 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.1b( empty local-lis/les=14/15 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.1a( empty local-lis/les=14/15 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.19( empty local-lis/les=14/15 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.7( empty local-lis/les=14/15 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.8( empty local-lis/les=14/15 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.5( empty local-lis/les=14/15 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/4144595215' entity='client.rgw.rgw.compute-0.opginv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.6( empty local-lis/les=14/15 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct 14 04:24:36 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/4144595215' entity='client.rgw.rgw.compute-0.opginv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 14 04:24:36 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.3( empty local-lis/les=14/15 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.1( empty local-lis/les=14/15 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.a( empty local-lis/les=14/15 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.b( empty local-lis/les=14/15 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.4( empty local-lis/les=14/15 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.2( empty local-lis/les=14/15 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.c( empty local-lis/les=14/15 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.9( empty local-lis/les=14/15 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.d( empty local-lis/les=14/15 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.e( empty local-lis/les=14/15 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.f( empty local-lis/les=14/15 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.10( empty local-lis/les=14/15 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.11( empty local-lis/les=14/15 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.12( empty local-lis/les=14/15 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.13( empty local-lis/les=14/15 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.14( empty local-lis/les=14/15 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.15( empty local-lis/les=14/15 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.16( empty local-lis/les=14/15 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.17( empty local-lis/les=14/15 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.18( empty local-lis/les=14/15 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.1c( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.1d( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.1f( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.1b( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.1a( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.19( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[11.0( empty local-lis/les=38/39 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [1] r=0 lpr=38 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.7( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.8( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.5( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.1( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.6( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.3( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.a( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.0( empty local-lis/les=38/39 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.b( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.4( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.2( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.9( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.10( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.f( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.e( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.12( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.11( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.1e( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.14( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.13( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.16( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.17( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.18( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.c( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.15( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 39 pg[3.d( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=14/14 les/c/f=15/15/0 sis=38) [1] r=0 lpr=38 pi=[14,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Oct 14 04:24:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4144595215' entity='client.rgw.rgw.compute-0.opginv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 14 04:24:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v92: 73 pgs: 63 unknown, 10 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 14 op/s
Oct 14 04:24:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 14 04:24:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 14 04:24:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 14 04:24:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 3.1 deep-scrub starts
Oct 14 04:24:36 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 3.1 deep-scrub ok
Oct 14 04:24:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]: {
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:    "0": [
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:        {
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:            "devices": [
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:                "/dev/loop3"
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:            ],
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:            "lv_name": "ceph_lv0",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:            "lv_size": "21470642176",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:            "name": "ceph_lv0",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:            "tags": {
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:                "ceph.cluster_name": "ceph",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:                "ceph.crush_device_class": "",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:                "ceph.encrypted": "0",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:                "ceph.osd_id": "0",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:                "ceph.type": "block",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:                "ceph.vdo": "0"
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:            },
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:            "type": "block",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:            "vg_name": "ceph_vg0"
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:        }
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:    ],
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:    "1": [
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:        {
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:            "devices": [
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:                "/dev/loop4"
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:            ],
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:            "lv_name": "ceph_lv1",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:            "lv_size": "21470642176",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:            "name": "ceph_lv1",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:            "tags": {
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:                "ceph.cluster_name": "ceph",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:                "ceph.crush_device_class": "",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:                "ceph.encrypted": "0",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:                "ceph.osd_id": "1",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:                "ceph.type": "block",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:                "ceph.vdo": "0"
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:            },
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:            "type": "block",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:            "vg_name": "ceph_vg1"
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:        }
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:    ],
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:    "2": [
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:        {
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:            "devices": [
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:                "/dev/loop5"
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:            ],
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:            "lv_name": "ceph_lv2",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:            "lv_size": "21470642176",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:            "name": "ceph_lv2",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:            "tags": {
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:                "ceph.cluster_name": "ceph",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:                "ceph.crush_device_class": "",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:                "ceph.encrypted": "0",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:                "ceph.osd_id": "2",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:                "ceph.type": "block",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:                "ceph.vdo": "0"
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:            },
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:            "type": "block",
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:            "vg_name": "ceph_vg2"
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:        }
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]:    ]
Oct 14 04:24:37 np0005486808 affectionate_banach[101650]: }
Oct 14 04:24:37 np0005486808 podman[101617]: 2025-10-14 08:24:37.154507792 +0000 UTC m=+0.994214188 container died 1222654a5aa0eb1cda837e37aec3687eff7dd9bb51c0e12ed592d074cb0ad2f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 14 04:24:37 np0005486808 systemd[1]: libpod-1222654a5aa0eb1cda837e37aec3687eff7dd9bb51c0e12ed592d074cb0ad2f7.scope: Deactivated successfully.
Oct 14 04:24:37 np0005486808 systemd[1]: var-lib-containers-storage-overlay-91db51fd290747168829f1c102f442d65f502c25c101b913d3a14e2049ab9814-merged.mount: Deactivated successfully.
Oct 14 04:24:37 np0005486808 podman[101617]: 2025-10-14 08:24:37.22864678 +0000 UTC m=+1.068353206 container remove 1222654a5aa0eb1cda837e37aec3687eff7dd9bb51c0e12ed592d074cb0ad2f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 14 04:24:37 np0005486808 systemd[1]: libpod-conmon-1222654a5aa0eb1cda837e37aec3687eff7dd9bb51c0e12ed592d074cb0ad2f7.scope: Deactivated successfully.
Oct 14 04:24:37 np0005486808 python3[101684]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:24:37 np0005486808 podman[101709]: 2025-10-14 08:24:37.38366334 +0000 UTC m=+0.067340345 container create 2f7fa1dd8ef88576fe0de6cb67f1a7ef0e10d3a2d81aa956d16725498a053be5 (image=quay.io/ceph/ceph:v18, name=hopeful_newton, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:24:37 np0005486808 systemd[1]: Started libpod-conmon-2f7fa1dd8ef88576fe0de6cb67f1a7ef0e10d3a2d81aa956d16725498a053be5.scope.
Oct 14 04:24:37 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:37 np0005486808 podman[101709]: 2025-10-14 08:24:37.360549663 +0000 UTC m=+0.044226768 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:24:37 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f2c2c730623c4a48299c0b4f2ff33695b8d295546ec4892a047be0b05ebc8de/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:37 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f2c2c730623c4a48299c0b4f2ff33695b8d295546ec4892a047be0b05ebc8de/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:37 np0005486808 podman[101709]: 2025-10-14 08:24:37.477568046 +0000 UTC m=+0.161245101 container init 2f7fa1dd8ef88576fe0de6cb67f1a7ef0e10d3a2d81aa956d16725498a053be5 (image=quay.io/ceph/ceph:v18, name=hopeful_newton, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 14 04:24:37 np0005486808 podman[101709]: 2025-10-14 08:24:37.489200717 +0000 UTC m=+0.172877732 container start 2f7fa1dd8ef88576fe0de6cb67f1a7ef0e10d3a2d81aa956d16725498a053be5 (image=quay.io/ceph/ceph:v18, name=hopeful_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 14 04:24:37 np0005486808 podman[101709]: 2025-10-14 08:24:37.493617483 +0000 UTC m=+0.177294548 container attach 2f7fa1dd8ef88576fe0de6cb67f1a7ef0e10d3a2d81aa956d16725498a053be5 (image=quay.io/ceph/ceph:v18, name=hopeful_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 14 04:24:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Oct 14 04:24:37 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Oct 14 04:24:37 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4144595215' entity='client.rgw.rgw.compute-0.opginv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 14 04:24:37 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct 14 04:24:37 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct 14 04:24:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Oct 14 04:24:37 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Oct 14 04:24:37 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 40 pg[4.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=40 pruub=10.964776039s) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active pruub 79.284835815s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:37 np0005486808 ceph-mgr[74543]: [progress INFO root] update: starting ev 3e18d556-3b2c-4ec5-9652-0149afbce722 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Oct 14 04:24:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Oct 14 04:24:37 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct 14 04:24:37 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 40 pg[4.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=40 pruub=10.964776039s) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown pruub 79.284835815s@ mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:37 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 40 pg[5.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=40 pruub=12.973323822s) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 active pruub 70.740814209s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:37 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 40 pg[5.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=40 pruub=12.973323822s) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown pruub 70.740814209s@ mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:37 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Oct 14 04:24:37 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/4144595215' entity='client.rgw.rgw.compute-0.opginv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 14 04:24:37 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 14 04:24:37 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 14 04:24:37 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Oct 14 04:24:37 np0005486808 ceph-mon[74249]: from='client.? 192.168.122.100:0/4144595215' entity='client.rgw.rgw.compute-0.opginv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 14 04:24:37 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct 14 04:24:37 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct 14 04:24:37 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 40 pg[2.16( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:37 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 40 pg[2.1a( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:37 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct 14 04:24:37 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 40 pg[2.18( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:37 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 40 pg[2.15( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:37 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 40 pg[2.14( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:37 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 40 pg[2.19( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:37 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 40 pg[2.11( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:37 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 40 pg[2.13( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:37 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 40 pg[2.12( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:37 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 40 pg[2.f( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:37 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 40 pg[2.10( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:37 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 40 pg[2.e( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:37 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 40 pg[2.d( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:37 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 40 pg[2.c( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:37 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 40 pg[2.b( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:37 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 40 pg[2.7( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:37 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 40 pg[2.0( empty local-lis/les=38/40 n=0 ec=12/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:37 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 40 pg[2.1( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:37 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 40 pg[2.2( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:37 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 40 pg[2.4( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:37 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 40 pg[2.3( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:37 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 40 pg[2.6( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:37 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 40 pg[2.5( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:37 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 40 pg[2.9( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:37 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 40 pg[2.17( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:37 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 40 pg[2.a( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:37 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 40 pg[2.8( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:37 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 40 pg[2.1b( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:37 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 40 pg[2.1c( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:37 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 40 pg[2.1e( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:37 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 40 pg[2.1f( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:37 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 40 pg[2.1d( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=17/17 les/c/f=18/18/0 sis=38) [2] r=0 lpr=38 pi=[17,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:37 np0005486808 radosgw[100062]: LDAP not started since no server URIs were provided in the configuration.
Oct 14 04:24:37 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-rgw-rgw-compute-0-opginv[100057]: 2025-10-14T08:24:37.665+0000 7fc30f862940 -1 LDAP not started since no server URIs were provided in the configuration.
Oct 14 04:24:37 np0005486808 radosgw[100062]: framework: beast
Oct 14 04:24:37 np0005486808 radosgw[100062]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Oct 14 04:24:37 np0005486808 radosgw[100062]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Oct 14 04:24:37 np0005486808 ceph-mgr[74543]: [progress WARNING root] Starting Global Recovery Event,125 pgs not in active + clean state
Oct 14 04:24:37 np0005486808 radosgw[100062]: starting handler: beast
Oct 14 04:24:37 np0005486808 radosgw[100062]: set uid:gid to 167:167 (ceph:ceph)
Oct 14 04:24:37 np0005486808 radosgw[100062]: mgrc service_daemon_register rgw.14275 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.opginv,kernel_description=#1 SMP PREEMPT_DYNAMIC Tue Sep 30 07:37:35 UTC 2025,kernel_version=5.14.0-621.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864356,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=f3a3c817-5017-437e-8bb4-1312e6be2750,zone_name=default,zonegroup_id=e47be4ce-ac87-4ab4-88cb-0256395d6082,zonegroup_name=default}
Oct 14 04:24:37 np0005486808 podman[102415]: 2025-10-14 08:24:37.991183938 +0000 UTC m=+0.041866102 container create 463841bcf66b4a78de1cc38f3c203ddf7c7b9f8a23d2d299326be60b68dd3366 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_rubin, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 14 04:24:38 np0005486808 systemd[1]: Started libpod-conmon-463841bcf66b4a78de1cc38f3c203ddf7c7b9f8a23d2d299326be60b68dd3366.scope.
Oct 14 04:24:38 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:38 np0005486808 podman[102415]: 2025-10-14 08:24:37.97179113 +0000 UTC m=+0.022473324 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:24:38 np0005486808 podman[102415]: 2025-10-14 08:24:38.081499797 +0000 UTC m=+0.132181981 container init 463841bcf66b4a78de1cc38f3c203ddf7c7b9f8a23d2d299326be60b68dd3366 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_rubin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 14 04:24:38 np0005486808 podman[102415]: 2025-10-14 08:24:38.087306197 +0000 UTC m=+0.137988391 container start 463841bcf66b4a78de1cc38f3c203ddf7c7b9f8a23d2d299326be60b68dd3366 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_rubin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:24:38 np0005486808 musing_rubin[102431]: 167 167
Oct 14 04:24:38 np0005486808 podman[102415]: 2025-10-14 08:24:38.091215221 +0000 UTC m=+0.141897425 container attach 463841bcf66b4a78de1cc38f3c203ddf7c7b9f8a23d2d299326be60b68dd3366 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_rubin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:24:38 np0005486808 podman[102415]: 2025-10-14 08:24:38.092196665 +0000 UTC m=+0.142878869 container died 463841bcf66b4a78de1cc38f3c203ddf7c7b9f8a23d2d299326be60b68dd3366 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_rubin, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:24:38 np0005486808 systemd[1]: libpod-463841bcf66b4a78de1cc38f3c203ddf7c7b9f8a23d2d299326be60b68dd3366.scope: Deactivated successfully.
Oct 14 04:24:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Oct 14 04:24:38 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3668103029' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Oct 14 04:24:38 np0005486808 hopeful_newton[101760]: 
Oct 14 04:24:38 np0005486808 systemd[1]: var-lib-containers-storage-overlay-56bc9b0d4f1d7c656bd023f943eccb7cb06c3240e3622c99950b5b6f498eab32-merged.mount: Deactivated successfully.
Oct 14 04:24:38 np0005486808 systemd[1]: libpod-2f7fa1dd8ef88576fe0de6cb67f1a7ef0e10d3a2d81aa956d16725498a053be5.scope: Deactivated successfully.
Oct 14 04:24:38 np0005486808 conmon[101760]: conmon 2f7fa1dd8ef88576fe0d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2f7fa1dd8ef88576fe0de6cb67f1a7ef0e10d3a2d81aa956d16725498a053be5.scope/container/memory.events
Oct 14 04:24:38 np0005486808 hopeful_newton[101760]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mds":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":6}}
Oct 14 04:24:38 np0005486808 podman[102415]: 2025-10-14 08:24:38.156149668 +0000 UTC m=+0.206831842 container remove 463841bcf66b4a78de1cc38f3c203ddf7c7b9f8a23d2d299326be60b68dd3366 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_rubin, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:24:38 np0005486808 systemd[1]: libpod-conmon-463841bcf66b4a78de1cc38f3c203ddf7c7b9f8a23d2d299326be60b68dd3366.scope: Deactivated successfully.
Oct 14 04:24:38 np0005486808 podman[102450]: 2025-10-14 08:24:38.194753968 +0000 UTC m=+0.029343258 container died 2f7fa1dd8ef88576fe0de6cb67f1a7ef0e10d3a2d81aa956d16725498a053be5 (image=quay.io/ceph/ceph:v18, name=hopeful_newton, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:24:38 np0005486808 systemd[1]: var-lib-containers-storage-overlay-1f2c2c730623c4a48299c0b4f2ff33695b8d295546ec4892a047be0b05ebc8de-merged.mount: Deactivated successfully.
Oct 14 04:24:38 np0005486808 podman[102450]: 2025-10-14 08:24:38.237480109 +0000 UTC m=+0.072069379 container remove 2f7fa1dd8ef88576fe0de6cb67f1a7ef0e10d3a2d81aa956d16725498a053be5 (image=quay.io/ceph/ceph:v18, name=hopeful_newton, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 14 04:24:38 np0005486808 systemd[1]: libpod-conmon-2f7fa1dd8ef88576fe0de6cb67f1a7ef0e10d3a2d81aa956d16725498a053be5.scope: Deactivated successfully.
Oct 14 04:24:38 np0005486808 podman[102473]: 2025-10-14 08:24:38.319992039 +0000 UTC m=+0.039575845 container create 95f9796bb1c732a4f0c3dd48a25bdb4b8b715b6795f76aa85ffd9eb20f6cc2c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Oct 14 04:24:38 np0005486808 systemd[1]: Started libpod-conmon-95f9796bb1c732a4f0c3dd48a25bdb4b8b715b6795f76aa85ffd9eb20f6cc2c3.scope.
Oct 14 04:24:38 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:38 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96e5ba3ba8db492263ac436c11d13995acab238469402e2d931e1981007d07e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:38 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96e5ba3ba8db492263ac436c11d13995acab238469402e2d931e1981007d07e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:38 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96e5ba3ba8db492263ac436c11d13995acab238469402e2d931e1981007d07e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:38 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96e5ba3ba8db492263ac436c11d13995acab238469402e2d931e1981007d07e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:38 np0005486808 podman[102473]: 2025-10-14 08:24:38.302799835 +0000 UTC m=+0.022383641 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:24:38 np0005486808 podman[102473]: 2025-10-14 08:24:38.405322268 +0000 UTC m=+0.124906084 container init 95f9796bb1c732a4f0c3dd48a25bdb4b8b715b6795f76aa85ffd9eb20f6cc2c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 14 04:24:38 np0005486808 podman[102473]: 2025-10-14 08:24:38.411112118 +0000 UTC m=+0.130695914 container start 95f9796bb1c732a4f0c3dd48a25bdb4b8b715b6795f76aa85ffd9eb20f6cc2c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 14 04:24:38 np0005486808 podman[102473]: 2025-10-14 08:24:38.417326038 +0000 UTC m=+0.136909854 container attach 95f9796bb1c732a4f0c3dd48a25bdb4b8b715b6795f76aa85ffd9eb20f6cc2c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 14 04:24:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Oct 14 04:24:38 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct 14 04:24:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Oct 14 04:24:38 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Oct 14 04:24:38 np0005486808 ceph-mgr[74543]: [progress INFO root] update: starting ev af0594c0-f671-4311-a62d-f6f1d56bc4b3 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Oct 14 04:24:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Oct 14 04:24:38 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.1f( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.1c( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.1d( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.1e( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.8( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.7( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.b( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.1b( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.6( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.a( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.5( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.1a( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.9( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.4( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.3( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.1( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.2( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.c( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.d( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.e( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.f( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.10( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.11( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.12( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.13( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.14( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.16( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.17( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.15( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.18( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.19( empty local-lis/les=16/17 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=18/19 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.1f( empty local-lis/les=18/19 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.10( empty local-lis/les=18/19 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=18/19 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=18/19 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=18/19 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=18/19 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=18/19 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=18/19 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=18/19 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.17( empty local-lis/les=18/19 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.8( empty local-lis/les=18/19 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=18/19 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.a( empty local-lis/les=18/19 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.b( empty local-lis/les=18/19 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=18/19 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=18/19 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=18/19 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=18/19 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=18/19 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.6( empty local-lis/les=18/19 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=18/19 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=18/19 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.e( empty local-lis/les=18/19 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.1d( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.1f( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.1c( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=18/19 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.d( empty local-lis/les=18/19 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.1c( empty local-lis/les=18/19 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.1b( empty local-lis/les=18/19 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=18/19 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=18/19 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=18/19 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.1f( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.7( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.1e( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.b( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.8( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.1b( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.5( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.6( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.1a( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.a( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.4( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.9( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.3( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.0( empty local-lis/les=40/41 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.1( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.d( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.2( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.c( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.e( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.f( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.11( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.12( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.13( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.14( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.15( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.19( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.18( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.16( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.17( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 41 pg[4.10( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=16/16 les/c/f=17/17/0 sis=40) [0] r=0 lpr=40 pi=[16,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.10( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.17( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.8( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.b( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.a( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=40/41 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.6( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.e( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.d( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.1c( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 41 pg[5.1b( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=0 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:38 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct 14 04:24:38 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct 14 04:24:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v95: 135 pgs: 125 unknown, 10 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 14 op/s
Oct 14 04:24:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 14 04:24:38 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 14 04:24:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0) v1
Oct 14 04:24:38 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Oct 14 04:24:38 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Oct 14 04:24:39 np0005486808 compassionate_banach[102490]: {
Oct 14 04:24:39 np0005486808 compassionate_banach[102490]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 04:24:39 np0005486808 compassionate_banach[102490]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:24:39 np0005486808 compassionate_banach[102490]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 04:24:39 np0005486808 compassionate_banach[102490]:        "osd_id": 2,
Oct 14 04:24:39 np0005486808 compassionate_banach[102490]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:24:39 np0005486808 compassionate_banach[102490]:        "type": "bluestore"
Oct 14 04:24:39 np0005486808 compassionate_banach[102490]:    },
Oct 14 04:24:39 np0005486808 compassionate_banach[102490]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 04:24:39 np0005486808 compassionate_banach[102490]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:24:39 np0005486808 compassionate_banach[102490]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 04:24:39 np0005486808 compassionate_banach[102490]:        "osd_id": 1,
Oct 14 04:24:39 np0005486808 compassionate_banach[102490]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:24:39 np0005486808 compassionate_banach[102490]:        "type": "bluestore"
Oct 14 04:24:39 np0005486808 compassionate_banach[102490]:    },
Oct 14 04:24:39 np0005486808 compassionate_banach[102490]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 04:24:39 np0005486808 compassionate_banach[102490]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:24:39 np0005486808 compassionate_banach[102490]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 04:24:39 np0005486808 compassionate_banach[102490]:        "osd_id": 0,
Oct 14 04:24:39 np0005486808 compassionate_banach[102490]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:24:39 np0005486808 compassionate_banach[102490]:        "type": "bluestore"
Oct 14 04:24:39 np0005486808 compassionate_banach[102490]:    }
Oct 14 04:24:39 np0005486808 compassionate_banach[102490]: }
Oct 14 04:24:39 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Oct 14 04:24:39 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct 14 04:24:39 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct 14 04:24:39 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Oct 14 04:24:39 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Oct 14 04:24:39 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Oct 14 04:24:39 np0005486808 ceph-mgr[74543]: [progress INFO root] update: starting ev b04faf0a-619b-4573-8b74-0c5e48314e79 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Oct 14 04:24:39 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Oct 14 04:24:39 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct 14 04:24:39 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 42 pg[6.0( v 37'39 (0'0,37'39] local-lis/les=20/21 n=22 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=42 pruub=13.281022072s) [0] r=0 lpr=42 pi=[20,42)/1 crt=37'39 lcod 34'38 mlcod 34'38 active pruub 83.615097046s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:39 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 42 pg[6.0( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=20/21 n=1 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=42 pruub=13.281022072s) [0] r=0 lpr=42 pi=[20,42)/1 crt=37'39 lcod 34'38 mlcod 0'0 unknown pruub 83.615097046s@ mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:39 np0005486808 systemd[1]: libpod-95f9796bb1c732a4f0c3dd48a25bdb4b8b715b6795f76aa85ffd9eb20f6cc2c3.scope: Deactivated successfully.
Oct 14 04:24:39 np0005486808 systemd[1]: libpod-95f9796bb1c732a4f0c3dd48a25bdb4b8b715b6795f76aa85ffd9eb20f6cc2c3.scope: Consumed 1.122s CPU time.
Oct 14 04:24:39 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 14 04:24:39 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Oct 14 04:24:39 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct 14 04:24:39 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct 14 04:24:39 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Oct 14 04:24:39 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct 14 04:24:39 np0005486808 podman[102523]: 2025-10-14 08:24:39.606696753 +0000 UTC m=+0.047862906 container died 95f9796bb1c732a4f0c3dd48a25bdb4b8b715b6795f76aa85ffd9eb20f6cc2c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_banach, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 14 04:24:39 np0005486808 systemd[1]: var-lib-containers-storage-overlay-96e5ba3ba8db492263ac436c11d13995acab238469402e2d931e1981007d07e1-merged.mount: Deactivated successfully.
Oct 14 04:24:39 np0005486808 systemd[75862]: Starting Mark boot as successful...
Oct 14 04:24:39 np0005486808 systemd[75862]: Finished Mark boot as successful.
Oct 14 04:24:39 np0005486808 podman[102523]: 2025-10-14 08:24:39.692991505 +0000 UTC m=+0.134157638 container remove 95f9796bb1c732a4f0c3dd48a25bdb4b8b715b6795f76aa85ffd9eb20f6cc2c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:24:39 np0005486808 systemd[1]: libpod-conmon-95f9796bb1c732a4f0c3dd48a25bdb4b8b715b6795f76aa85ffd9eb20f6cc2c3.scope: Deactivated successfully.
Oct 14 04:24:39 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Oct 14 04:24:39 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Oct 14 04:24:39 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:24:39 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:39 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:24:39 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:39 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev d49764e2-032b-4d0c-a42f-f5b774ac0f2c does not exist
Oct 14 04:24:39 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev c11e1cc8-6b2b-44f6-94fd-dc8650273cf8 does not exist
Oct 14 04:24:40 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Oct 14 04:24:40 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct 14 04:24:40 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Oct 14 04:24:40 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Oct 14 04:24:40 np0005486808 ceph-mgr[74543]: [progress INFO root] update: starting ev 76202a2d-33ec-4c33-85c1-66d49ff6d9d7 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Oct 14 04:24:40 np0005486808 ceph-mgr[74543]: [progress INFO root] complete: finished ev 7cb515dc-6f52-4d21-a2c8-9809e6a2dd8c (PG autoscaler increasing pool 2 PGs from 1 to 32)
Oct 14 04:24:40 np0005486808 ceph-mgr[74543]: [progress INFO root] Completed event 7cb515dc-6f52-4d21-a2c8-9809e6a2dd8c (PG autoscaler increasing pool 2 PGs from 1 to 32) in 7 seconds
Oct 14 04:24:40 np0005486808 ceph-mgr[74543]: [progress INFO root] complete: finished ev b51fc090-75c0-4304-ad44-9939b2073505 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Oct 14 04:24:40 np0005486808 ceph-mgr[74543]: [progress INFO root] Completed event b51fc090-75c0-4304-ad44-9939b2073505 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 6 seconds
Oct 14 04:24:40 np0005486808 ceph-mgr[74543]: [progress INFO root] complete: finished ev d09dbbbf-53bf-4bc1-9a2e-7a62a4ed82d7 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Oct 14 04:24:40 np0005486808 ceph-mgr[74543]: [progress INFO root] Completed event d09dbbbf-53bf-4bc1-9a2e-7a62a4ed82d7 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 5 seconds
Oct 14 04:24:40 np0005486808 ceph-mgr[74543]: [progress INFO root] complete: finished ev 4f13cf6e-8e9a-4467-8bfc-78db73685079 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Oct 14 04:24:40 np0005486808 ceph-mgr[74543]: [progress INFO root] Completed event 4f13cf6e-8e9a-4467-8bfc-78db73685079 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 4 seconds
Oct 14 04:24:40 np0005486808 ceph-mgr[74543]: [progress INFO root] complete: finished ev 3e18d556-3b2c-4ec5-9652-0149afbce722 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Oct 14 04:24:40 np0005486808 ceph-mgr[74543]: [progress INFO root] Completed event 3e18d556-3b2c-4ec5-9652-0149afbce722 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 3 seconds
Oct 14 04:24:40 np0005486808 ceph-mgr[74543]: [progress INFO root] complete: finished ev af0594c0-f671-4311-a62d-f6f1d56bc4b3 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Oct 14 04:24:40 np0005486808 ceph-mgr[74543]: [progress INFO root] Completed event af0594c0-f671-4311-a62d-f6f1d56bc4b3 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 2 seconds
Oct 14 04:24:40 np0005486808 ceph-mgr[74543]: [progress INFO root] complete: finished ev b04faf0a-619b-4573-8b74-0c5e48314e79 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Oct 14 04:24:40 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 43 pg[6.a( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=20/21 n=1 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [0] r=0 lpr=42 pi=[20,42)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 43 pg[6.5( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=20/21 n=2 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [0] r=0 lpr=42 pi=[20,42)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 43 pg[6.4( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=20/21 n=2 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [0] r=0 lpr=42 pi=[20,42)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 43 pg[6.9( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=20/21 n=1 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [0] r=0 lpr=42 pi=[20,42)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 43 pg[6.8( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=20/21 n=1 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [0] r=0 lpr=42 pi=[20,42)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 43 pg[6.7( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=20/21 n=1 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [0] r=0 lpr=42 pi=[20,42)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 43 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=20/21 n=1 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [0] r=0 lpr=42 pi=[20,42)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 43 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=20/21 n=2 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [0] r=0 lpr=42 pi=[20,42)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 43 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=20/21 n=2 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [0] r=0 lpr=42 pi=[20,42)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 43 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=20/21 n=2 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [0] r=0 lpr=42 pi=[20,42)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 43 pg[6.2( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=20/21 n=2 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [0] r=0 lpr=42 pi=[20,42)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 43 pg[6.e( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=20/21 n=1 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [0] r=0 lpr=42 pi=[20,42)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 43 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=20/21 n=1 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [0] r=0 lpr=42 pi=[20,42)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 43 pg[6.c( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=20/21 n=1 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [0] r=0 lpr=42 pi=[20,42)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 43 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=20/21 n=1 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [0] r=0 lpr=42 pi=[20,42)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-mgr[74543]: [progress INFO root] Completed event b04faf0a-619b-4573-8b74-0c5e48314e79 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 1 seconds
Oct 14 04:24:40 np0005486808 ceph-mgr[74543]: [progress INFO root] complete: finished ev 76202a2d-33ec-4c33-85c1-66d49ff6d9d7 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Oct 14 04:24:40 np0005486808 ceph-mgr[74543]: [progress INFO root] Completed event 76202a2d-33ec-4c33-85c1-66d49ff6d9d7 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 0 seconds
Oct 14 04:24:40 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 43 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=42/43 n=1 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [0] r=0 lpr=42 pi=[20,42)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:40 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 43 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=42/43 n=2 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [0] r=0 lpr=42 pi=[20,42)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:40 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 43 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=42/43 n=1 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [0] r=0 lpr=42 pi=[20,42)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:40 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 43 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=42/43 n=1 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [0] r=0 lpr=42 pi=[20,42)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:40 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 43 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=42/43 n=1 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [0] r=0 lpr=42 pi=[20,42)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:40 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 43 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=42/43 n=2 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [0] r=0 lpr=42 pi=[20,42)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:40 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 43 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=42/43 n=1 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [0] r=0 lpr=42 pi=[20,42)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:40 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 43 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=42/43 n=1 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [0] r=0 lpr=42 pi=[20,42)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:40 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 43 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=42/43 n=2 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [0] r=0 lpr=42 pi=[20,42)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:40 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 43 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=42/43 n=2 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [0] r=0 lpr=42 pi=[20,42)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:40 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 43 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=42/43 n=2 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [0] r=0 lpr=42 pi=[20,42)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:40 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 43 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=42/43 n=1 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [0] r=0 lpr=42 pi=[20,42)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:40 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 43 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=42/43 n=2 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [0] r=0 lpr=42 pi=[20,42)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:40 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 43 pg[6.0( v 37'39 (0'0,37'39] local-lis/les=42/43 n=1 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [0] r=0 lpr=42 pi=[20,42)/1 crt=37'39 lcod 34'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:40 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 43 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=42/43 n=1 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [0] r=0 lpr=42 pi=[20,42)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:40 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 43 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=42/43 n=1 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [0] r=0 lpr=42 pi=[20,42)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v98: 181 pgs: 1 peering, 46 unknown, 134 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 111 KiB/s rd, 8.0 KiB/s wr, 274 op/s
Oct 14 04:24:40 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 14 04:24:40 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 14 04:24:40 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 14 04:24:40 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 14 04:24:40 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Oct 14 04:24:40 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Oct 14 04:24:40 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:40 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:40 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct 14 04:24:40 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 14 04:24:40 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 14 04:24:40 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Oct 14 04:24:40 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Oct 14 04:24:40 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 42 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=42 pruub=14.711083412s) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active pruub 81.029518127s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:40 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 43 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=42 pruub=14.711083412s) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown pruub 81.029518127s@ mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 43 pg[7.1( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 43 pg[7.2( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 43 pg[7.3( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 43 pg[7.4( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 43 pg[7.5( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 43 pg[7.6( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 43 pg[7.7( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 43 pg[7.8( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 43 pg[7.9( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 43 pg[7.a( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 43 pg[7.b( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 43 pg[7.c( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 43 pg[7.d( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 43 pg[7.e( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 43 pg[7.f( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 43 pg[7.10( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 43 pg[7.11( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 43 pg[7.12( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 43 pg[7.13( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 43 pg[7.14( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 43 pg[7.15( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 43 pg[7.16( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 43 pg[7.17( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 43 pg[7.1a( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 43 pg[7.19( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 43 pg[7.1b( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 43 pg[7.18( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 43 pg[7.1c( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 43 pg[7.1d( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 43 pg[7.1e( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 43 pg[7.1f( empty local-lis/les=22/23 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:40 np0005486808 podman[102758]: 2025-10-14 08:24:40.952526323 +0000 UTC m=+0.092041092 container exec c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:24:41 np0005486808 podman[102758]: 2025-10-14 08:24:41.071328809 +0000 UTC m=+0.210843558 container exec_died c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:24:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Oct 14 04:24:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct 14 04:24:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct 14 04:24:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Oct 14 04:24:41 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Oct 14 04:24:41 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 44 pg[8.0( v 33'4 (0'0,33'4] local-lis/les=32/33 n=4 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=44 pruub=12.892532349s) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 lcod 33'3 mlcod 33'3 active pruub 79.862884521s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:41 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 44 pg[9.0( v 41'385 (0'0,41'385] local-lis/les=34/35 n=177 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=44 pruub=14.920276642s) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 lcod 41'384 mlcod 41'384 active pruub 81.890808105s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:41 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:41 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:41 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 44 pg[7.19( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:41 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 44 pg[8.0( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=44 pruub=12.892532349s) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 lcod 33'3 mlcod 0'0 unknown pruub 79.862884521s@ mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:41 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:41 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 44 pg[7.1e( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:41 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:41 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 44 pg[7.1d( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:41 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:41 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:41 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 44 pg[7.7( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:41 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:41 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:41 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:41 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:41 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 44 pg[7.0( empty local-lis/les=42/44 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:41 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:41 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:41 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 44 pg[7.d( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:41 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:41 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:41 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:41 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 44 pg[7.b( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:41 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 44 pg[7.14( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:41 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:41 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:41 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 44 pg[7.10( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:41 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 44 pg[7.16( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:41 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 44 pg[7.11( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:41 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 44 pg[7.17( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:41 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 44 pg[7.13( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:41 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 44 pg[7.12( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:41 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 44 pg[7.1c( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=22/22 les/c/f=23/23/0 sis=42) [1] r=0 lpr=42 pi=[22,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:41 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 44 pg[9.0( v 41'385 lc 0'0 (0'0,41'385] local-lis/les=34/35 n=5 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=44 pruub=14.920276642s) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 lcod 41'384 mlcod 0'0 unknown pruub 81.890808105s@ mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:41 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Oct 14 04:24:41 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Oct 14 04:24:41 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct 14 04:24:41 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct 14 04:24:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:24:42 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:24:42 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:24:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:24:42 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:24:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 04:24:42 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:24:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 04:24:42 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:42 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev e1be8ae9-a486-41be-a057-057f566f56bf does not exist
Oct 14 04:24:42 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 3dc57b6e-c2c8-43f7-9949-0a3c8a50635a does not exist
Oct 14 04:24:42 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 2464b898-0e45-4331-9489-e6469da816c2 does not exist
Oct 14 04:24:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 04:24:42 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 04:24:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 04:24:42 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:24:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:24:42 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:24:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Oct 14 04:24:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Oct 14 04:24:42 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.15( v 41'385 lc 0'0 (0'0,41'385] local-lis/les=34/35 n=5 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.15( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.14( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.14( v 41'385 lc 0'0 (0'0,41'385] local-lis/les=34/35 n=5 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.17( v 41'385 lc 0'0 (0'0,41'385] local-lis/les=34/35 n=5 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.16( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.17( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.16( v 41'385 lc 0'0 (0'0,41'385] local-lis/les=34/35 n=5 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.10( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.11( v 41'385 lc 0'0 (0'0,41'385] local-lis/les=34/35 n=6 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.11( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.10( v 41'385 lc 0'0 (0'0,41'385] local-lis/les=34/35 n=6 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.12( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.13( v 41'385 lc 0'0 (0'0,41'385] local-lis/les=34/35 n=5 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.3( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=1 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.2( v 41'385 lc 0'0 (0'0,41'385] local-lis/les=34/35 n=6 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.d( v 41'385 lc 0'0 (0'0,41'385] local-lis/les=34/35 n=6 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.e( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.f( v 41'385 lc 0'0 (0'0,41'385] local-lis/les=34/35 n=6 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.8( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.9( v 41'385 lc 0'0 (0'0,41'385] local-lis/les=34/35 n=6 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.c( v 41'385 lc 0'0 (0'0,41'385] local-lis/les=34/35 n=6 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.b( v 41'385 lc 0'0 (0'0,41'385] local-lis/les=34/35 n=6 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.a( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.1( v 33'4 (0'0,33'4] local-lis/les=32/33 n=1 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.1( v 41'385 lc 0'0 (0'0,41'385] local-lis/les=34/35 n=6 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.e( v 41'385 lc 0'0 (0'0,41'385] local-lis/les=34/35 n=6 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.a( v 41'385 lc 0'0 (0'0,41'385] local-lis/les=34/35 n=6 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.9( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.8( v 41'385 lc 0'0 (0'0,41'385] local-lis/les=34/35 n=6 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.3( v 41'385 lc 0'0 (0'0,41'385] local-lis/les=34/35 n=6 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.2( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=1 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.7( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.6( v 41'385 lc 0'0 (0'0,41'385] local-lis/les=34/35 n=6 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.6( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.5( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.7( v 41'385 lc 0'0 (0'0,41'385] local-lis/les=34/35 n=6 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.4( v 41'385 lc 0'0 (0'0,41'385] local-lis/les=34/35 n=6 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.4( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=1 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.1b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.5( v 41'385 lc 0'0 (0'0,41'385] local-lis/les=34/35 n=6 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.1a( v 41'385 lc 0'0 (0'0,41'385] local-lis/les=34/35 n=5 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.1a( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.1b( v 41'385 lc 0'0 (0'0,41'385] local-lis/les=34/35 n=5 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.18( v 41'385 lc 0'0 (0'0,41'385] local-lis/les=34/35 n=5 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.19( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.19( v 41'385 lc 0'0 (0'0,41'385] local-lis/les=34/35 n=5 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.18( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.1e( v 41'385 lc 0'0 (0'0,41'385] local-lis/les=34/35 n=5 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.1f( v 41'385 lc 0'0 (0'0,41'385] local-lis/les=34/35 n=5 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.1f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.1d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.1e( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.1c( v 41'385 lc 0'0 (0'0,41'385] local-lis/les=34/35 n=5 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.1d( v 41'385 lc 0'0 (0'0,41'385] local-lis/les=34/35 n=5 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.1c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.15( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.17( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.16( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.17( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.14( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.13( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.12( v 41'385 lc 0'0 (0'0,41'385] local-lis/les=34/35 n=5 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.11( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.10( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.13( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.2( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.d( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.3( v 33'4 (0'0,33'4] local-lis/les=44/45 n=1 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.16( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.8( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.f( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.9( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.0( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 lcod 33'3 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.b( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.c( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.a( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.1( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.1( v 33'4 (0'0,33'4] local-lis/les=44/45 n=1 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.0( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 lcod 41'384 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.e( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.a( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.3( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.8( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=44/45 n=1 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.7( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.6( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.5( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.1a( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.5( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.7( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.4( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.1b( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.19( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=44/45 n=1 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.1e( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.18( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.1f( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.1d( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.1e( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.19( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[8.13( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=32/32 les/c/f=33/33/0 sis=44) [1] r=0 lpr=44 pi=[32,44)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.12( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 45 pg[9.1c( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=34/34 les/c/f=35/35/0 sis=44) [1] r=0 lpr=44 pi=[34,44)/1 crt=41'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v101: 243 pgs: 1 peering, 108 unknown, 134 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 111 KiB/s rd, 8.0 KiB/s wr, 274 op/s
Oct 14 04:24:42 np0005486808 ceph-mgr[74543]: [progress INFO root] Writing back 13 completed events
Oct 14 04:24:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 14 04:24:42 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:42 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:42 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:42 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:24:42 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:42 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:24:42 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:42 np0005486808 podman[103058]: 2025-10-14 08:24:42.815051657 +0000 UTC m=+0.062351005 container create 7ecad395b1b098146ecebb2a0733b03a83a79391dc706be1179a1abd9ffaddd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_khorana, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 14 04:24:42 np0005486808 systemd[1]: Started libpod-conmon-7ecad395b1b098146ecebb2a0733b03a83a79391dc706be1179a1abd9ffaddd3.scope.
Oct 14 04:24:42 np0005486808 podman[103058]: 2025-10-14 08:24:42.798497568 +0000 UTC m=+0.045796916 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:24:42 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:42 np0005486808 podman[103058]: 2025-10-14 08:24:42.912903088 +0000 UTC m=+0.160202456 container init 7ecad395b1b098146ecebb2a0733b03a83a79391dc706be1179a1abd9ffaddd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:24:42 np0005486808 podman[103058]: 2025-10-14 08:24:42.924490688 +0000 UTC m=+0.171790056 container start 7ecad395b1b098146ecebb2a0733b03a83a79391dc706be1179a1abd9ffaddd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 14 04:24:42 np0005486808 podman[103058]: 2025-10-14 08:24:42.928072434 +0000 UTC m=+0.175371862 container attach 7ecad395b1b098146ecebb2a0733b03a83a79391dc706be1179a1abd9ffaddd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 14 04:24:42 np0005486808 romantic_khorana[103075]: 167 167
Oct 14 04:24:42 np0005486808 systemd[1]: libpod-7ecad395b1b098146ecebb2a0733b03a83a79391dc706be1179a1abd9ffaddd3.scope: Deactivated successfully.
Oct 14 04:24:42 np0005486808 podman[103058]: 2025-10-14 08:24:42.932503451 +0000 UTC m=+0.179802819 container died 7ecad395b1b098146ecebb2a0733b03a83a79391dc706be1179a1abd9ffaddd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_khorana, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Oct 14 04:24:42 np0005486808 systemd[1]: var-lib-containers-storage-overlay-69489d91aceb58f2af3044f1d5fe777646435bd790fa6c04156aabac9f073656-merged.mount: Deactivated successfully.
Oct 14 04:24:42 np0005486808 podman[103058]: 2025-10-14 08:24:42.982819295 +0000 UTC m=+0.230118653 container remove 7ecad395b1b098146ecebb2a0733b03a83a79391dc706be1179a1abd9ffaddd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_khorana, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:24:43 np0005486808 systemd[1]: libpod-conmon-7ecad395b1b098146ecebb2a0733b03a83a79391dc706be1179a1abd9ffaddd3.scope: Deactivated successfully.
Oct 14 04:24:43 np0005486808 podman[103099]: 2025-10-14 08:24:43.200904667 +0000 UTC m=+0.069950349 container create 89169f13223795da01d503bf00915aaeb115e10e747452872a2bd3060a2abae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_banzai, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 14 04:24:43 np0005486808 systemd[1]: Started libpod-conmon-89169f13223795da01d503bf00915aaeb115e10e747452872a2bd3060a2abae2.scope.
Oct 14 04:24:43 np0005486808 podman[103099]: 2025-10-14 08:24:43.170151755 +0000 UTC m=+0.039197507 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:24:43 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:43 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa95b200a601cf55df22e6fbfeb248b9f85412a4483b024dd9a4d28145c8e280/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:43 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa95b200a601cf55df22e6fbfeb248b9f85412a4483b024dd9a4d28145c8e280/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:43 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa95b200a601cf55df22e6fbfeb248b9f85412a4483b024dd9a4d28145c8e280/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:43 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa95b200a601cf55df22e6fbfeb248b9f85412a4483b024dd9a4d28145c8e280/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:43 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa95b200a601cf55df22e6fbfeb248b9f85412a4483b024dd9a4d28145c8e280/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:43 np0005486808 podman[103099]: 2025-10-14 08:24:43.305259684 +0000 UTC m=+0.174305416 container init 89169f13223795da01d503bf00915aaeb115e10e747452872a2bd3060a2abae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 14 04:24:43 np0005486808 podman[103099]: 2025-10-14 08:24:43.319106458 +0000 UTC m=+0.188152150 container start 89169f13223795da01d503bf00915aaeb115e10e747452872a2bd3060a2abae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_banzai, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:24:43 np0005486808 podman[103099]: 2025-10-14 08:24:43.324289543 +0000 UTC m=+0.193335265 container attach 89169f13223795da01d503bf00915aaeb115e10e747452872a2bd3060a2abae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:24:43 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Oct 14 04:24:43 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Oct 14 04:24:44 np0005486808 eloquent_banzai[103115]: --> passed data devices: 0 physical, 3 LVM
Oct 14 04:24:44 np0005486808 eloquent_banzai[103115]: --> relative data size: 1.0
Oct 14 04:24:44 np0005486808 eloquent_banzai[103115]: --> All data devices are unavailable
Oct 14 04:24:44 np0005486808 systemd[1]: libpod-89169f13223795da01d503bf00915aaeb115e10e747452872a2bd3060a2abae2.scope: Deactivated successfully.
Oct 14 04:24:44 np0005486808 systemd[1]: libpod-89169f13223795da01d503bf00915aaeb115e10e747452872a2bd3060a2abae2.scope: Consumed 1.089s CPU time.
Oct 14 04:24:44 np0005486808 podman[103099]: 2025-10-14 08:24:44.448591649 +0000 UTC m=+1.317637321 container died 89169f13223795da01d503bf00915aaeb115e10e747452872a2bd3060a2abae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_banzai, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:24:44 np0005486808 systemd[1]: var-lib-containers-storage-overlay-fa95b200a601cf55df22e6fbfeb248b9f85412a4483b024dd9a4d28145c8e280-merged.mount: Deactivated successfully.
Oct 14 04:24:44 np0005486808 podman[103099]: 2025-10-14 08:24:44.517810289 +0000 UTC m=+1.386855931 container remove 89169f13223795da01d503bf00915aaeb115e10e747452872a2bd3060a2abae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_banzai, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:24:44 np0005486808 systemd[1]: libpod-conmon-89169f13223795da01d503bf00915aaeb115e10e747452872a2bd3060a2abae2.scope: Deactivated successfully.
Oct 14 04:24:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v102: 243 pgs: 1 peering, 108 unknown, 134 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 6.3 KiB/s wr, 215 op/s
Oct 14 04:24:44 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Oct 14 04:24:44 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Oct 14 04:24:45 np0005486808 podman[103296]: 2025-10-14 08:24:45.296729431 +0000 UTC m=+0.050109040 container create 615bfcfae991b24dbc060b13ecca139bd6d06f085a8e04adff5552a74377a396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_grothendieck, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:24:45 np0005486808 systemd[1]: Started libpod-conmon-615bfcfae991b24dbc060b13ecca139bd6d06f085a8e04adff5552a74377a396.scope.
Oct 14 04:24:45 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:45 np0005486808 podman[103296]: 2025-10-14 08:24:45.274455624 +0000 UTC m=+0.027835333 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:24:45 np0005486808 podman[103296]: 2025-10-14 08:24:45.375273975 +0000 UTC m=+0.128653624 container init 615bfcfae991b24dbc060b13ecca139bd6d06f085a8e04adff5552a74377a396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_grothendieck, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Oct 14 04:24:45 np0005486808 podman[103296]: 2025-10-14 08:24:45.382145871 +0000 UTC m=+0.135525470 container start 615bfcfae991b24dbc060b13ecca139bd6d06f085a8e04adff5552a74377a396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_grothendieck, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 14 04:24:45 np0005486808 podman[103296]: 2025-10-14 08:24:45.384933968 +0000 UTC m=+0.138313567 container attach 615bfcfae991b24dbc060b13ecca139bd6d06f085a8e04adff5552a74377a396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_grothendieck, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 04:24:45 np0005486808 sleepy_grothendieck[103313]: 167 167
Oct 14 04:24:45 np0005486808 systemd[1]: libpod-615bfcfae991b24dbc060b13ecca139bd6d06f085a8e04adff5552a74377a396.scope: Deactivated successfully.
Oct 14 04:24:45 np0005486808 podman[103296]: 2025-10-14 08:24:45.387196903 +0000 UTC m=+0.140576542 container died 615bfcfae991b24dbc060b13ecca139bd6d06f085a8e04adff5552a74377a396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 14 04:24:45 np0005486808 systemd[1]: var-lib-containers-storage-overlay-26cb86c2aa15b22414d4a9fb5fab79429d8eeff5eda0c2d928612b1356dc5ccc-merged.mount: Deactivated successfully.
Oct 14 04:24:45 np0005486808 podman[103296]: 2025-10-14 08:24:45.422173647 +0000 UTC m=+0.175553266 container remove 615bfcfae991b24dbc060b13ecca139bd6d06f085a8e04adff5552a74377a396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_grothendieck, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:24:45 np0005486808 systemd[1]: libpod-conmon-615bfcfae991b24dbc060b13ecca139bd6d06f085a8e04adff5552a74377a396.scope: Deactivated successfully.
Oct 14 04:24:45 np0005486808 podman[103336]: 2025-10-14 08:24:45.575395423 +0000 UTC m=+0.038322125 container create 799401b0441521f234468f891e5445eb7e53dd8c8e2037e2b6ddc39ff92592c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hopper, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 14 04:24:45 np0005486808 systemd[1]: Started libpod-conmon-799401b0441521f234468f891e5445eb7e53dd8c8e2037e2b6ddc39ff92592c0.scope.
Oct 14 04:24:45 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:45 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf73358c585ccea2c79328bff3c67f18225a045125d36dc8342e3b1e46c44e12/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:45 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf73358c585ccea2c79328bff3c67f18225a045125d36dc8342e3b1e46c44e12/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:45 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf73358c585ccea2c79328bff3c67f18225a045125d36dc8342e3b1e46c44e12/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:45 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf73358c585ccea2c79328bff3c67f18225a045125d36dc8342e3b1e46c44e12/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:45 np0005486808 podman[103336]: 2025-10-14 08:24:45.653688242 +0000 UTC m=+0.116615034 container init 799401b0441521f234468f891e5445eb7e53dd8c8e2037e2b6ddc39ff92592c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 14 04:24:45 np0005486808 podman[103336]: 2025-10-14 08:24:45.55745674 +0000 UTC m=+0.020383452 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:24:45 np0005486808 podman[103336]: 2025-10-14 08:24:45.668370666 +0000 UTC m=+0.131297398 container start 799401b0441521f234468f891e5445eb7e53dd8c8e2037e2b6ddc39ff92592c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hopper, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:24:45 np0005486808 podman[103336]: 2025-10-14 08:24:45.672407654 +0000 UTC m=+0.135334446 container attach 799401b0441521f234468f891e5445eb7e53dd8c8e2037e2b6ddc39ff92592c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hopper, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:24:45 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Oct 14 04:24:45 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]: {
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:    "0": [
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:        {
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:            "devices": [
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:                "/dev/loop3"
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:            ],
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:            "lv_name": "ceph_lv0",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:            "lv_size": "21470642176",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:            "name": "ceph_lv0",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:            "tags": {
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:                "ceph.cluster_name": "ceph",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:                "ceph.crush_device_class": "",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:                "ceph.encrypted": "0",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:                "ceph.osd_id": "0",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:                "ceph.type": "block",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:                "ceph.vdo": "0"
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:            },
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:            "type": "block",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:            "vg_name": "ceph_vg0"
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:        }
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:    ],
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:    "1": [
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:        {
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:            "devices": [
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:                "/dev/loop4"
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:            ],
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:            "lv_name": "ceph_lv1",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:            "lv_size": "21470642176",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:            "name": "ceph_lv1",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:            "tags": {
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:                "ceph.cluster_name": "ceph",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:                "ceph.crush_device_class": "",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:                "ceph.encrypted": "0",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:                "ceph.osd_id": "1",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:                "ceph.type": "block",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:                "ceph.vdo": "0"
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:            },
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:            "type": "block",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:            "vg_name": "ceph_vg1"
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:        }
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:    ],
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:    "2": [
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:        {
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:            "devices": [
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:                "/dev/loop5"
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:            ],
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:            "lv_name": "ceph_lv2",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:            "lv_size": "21470642176",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:            "name": "ceph_lv2",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:            "tags": {
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:                "ceph.cluster_name": "ceph",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:                "ceph.crush_device_class": "",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:                "ceph.encrypted": "0",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:                "ceph.osd_id": "2",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:                "ceph.type": "block",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:                "ceph.vdo": "0"
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:            },
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:            "type": "block",
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:            "vg_name": "ceph_vg2"
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:        }
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]:    ]
Oct 14 04:24:46 np0005486808 pensive_hopper[103352]: }
Oct 14 04:24:46 np0005486808 systemd[1]: libpod-799401b0441521f234468f891e5445eb7e53dd8c8e2037e2b6ddc39ff92592c0.scope: Deactivated successfully.
Oct 14 04:24:46 np0005486808 podman[103336]: 2025-10-14 08:24:46.402358865 +0000 UTC m=+0.865285577 container died 799401b0441521f234468f891e5445eb7e53dd8c8e2037e2b6ddc39ff92592c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:24:46 np0005486808 systemd[1]: var-lib-containers-storage-overlay-cf73358c585ccea2c79328bff3c67f18225a045125d36dc8342e3b1e46c44e12-merged.mount: Deactivated successfully.
Oct 14 04:24:46 np0005486808 podman[103336]: 2025-10-14 08:24:46.464161426 +0000 UTC m=+0.927088128 container remove 799401b0441521f234468f891e5445eb7e53dd8c8e2037e2b6ddc39ff92592c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 14 04:24:46 np0005486808 systemd[1]: libpod-conmon-799401b0441521f234468f891e5445eb7e53dd8c8e2037e2b6ddc39ff92592c0.scope: Deactivated successfully.
Oct 14 04:24:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v103: 243 pgs: 243 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:24:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 14 04:24:46 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 14 04:24:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 14 04:24:46 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 14 04:24:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 14 04:24:46 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 14 04:24:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0) v1
Oct 14 04:24:46 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 14 04:24:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Oct 14 04:24:46 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 14 04:24:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 14 04:24:46 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 14 04:24:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 14 04:24:46 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 14 04:24:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 14 04:24:46 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 14 04:24:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Oct 14 04:24:46 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 14 04:24:46 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 14 04:24:46 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 14 04:24:46 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 14 04:24:46 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 14 04:24:46 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 14 04:24:46 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 14 04:24:46 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 14 04:24:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Oct 14 04:24:46 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[7.1b( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46 pruub=10.717026711s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 82.973930359s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[7.1b( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46 pruub=10.716975212s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.973930359s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[3.1f( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=13.695780754s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 85.952758789s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.736214638s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 83.993217468s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[3.1f( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=13.695728302s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.952758789s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.736144066s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.993217468s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[9.15( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.735974312s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 83.993125916s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[9.15( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.735957146s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.993125916s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[3.1e( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=13.696426392s) [2] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 85.953636169s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[3.1e( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=13.696413040s) [2] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.953636169s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[7.1a( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46 pruub=10.716674805s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 82.973930359s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[7.1a( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46 pruub=10.716662407s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.973930359s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.735674858s) [2] r=-1 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 83.993156433s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.735632896s) [2] r=-1 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.993156433s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[9.17( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.735532761s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 83.993179321s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[9.17( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.735517502s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.993179321s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[7.18( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46 pruub=10.722312927s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 82.980072021s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[7.18( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46 pruub=10.722295761s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.980072021s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[3.1b( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=13.695005417s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 85.952804565s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[3.1b( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=13.694985390s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.952804565s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[7.1f( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46 pruub=10.722254753s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 82.980125427s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[7.1f( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46 pruub=10.722240448s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.980125427s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.741459846s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 83.999397278s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.741441727s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.999397278s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[9.11( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.741376877s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 83.999359131s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[9.11( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.741362572s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.999359131s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.741404533s) [2] r=-1 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 83.999473572s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.741516113s) [2] r=-1 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 83.999626160s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.741386414s) [2] r=-1 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.999473572s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.741500854s) [2] r=-1 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.999626160s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[7.c( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46 pruub=10.722031593s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 82.980239868s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[9.13( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.741301537s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 83.999519348s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[7.c( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46 pruub=10.722017288s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.980239868s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[9.13( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.741282463s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.999519348s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[3.8( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=13.694779396s) [2] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 85.953125000s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[7.3( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46 pruub=10.721827507s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 82.980224609s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[7.3( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46 pruub=10.721812248s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.980224609s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[3.1d( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=13.694288254s) [2] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 85.952743530s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[3.1d( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=13.694272041s) [2] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.952743530s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[3.7( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=13.694562912s) [2] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 85.953071594s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[3.7( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=13.694549561s) [2] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.953071594s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.741090775s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 83.999725342s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[9.d( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.741038322s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 83.999694824s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.741073608s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.999725342s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[9.d( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.741022110s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.999694824s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[3.6( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=13.694478035s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 85.953338623s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[3.6( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=13.694460869s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.953338623s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.740834236s) [2] r=-1 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 83.999786377s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.740821838s) [2] r=-1 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.999786377s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[7.2( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46 pruub=10.721313477s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 82.980300903s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[7.2( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46 pruub=10.721294403s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.980300903s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[3.8( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=13.694051743s) [2] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.953125000s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[7.1( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46 pruub=10.721207619s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 82.980323792s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[7.1( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46 pruub=10.721186638s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.980323792s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[9.f( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.740963936s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 84.000137329s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[9.f( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.740947723s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 84.000137329s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[3.3( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=13.694058418s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 85.953323364s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 14 04:24:46 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 14 04:24:46 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 14 04:24:46 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 14 04:24:46 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 14 04:24:46 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 14 04:24:46 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 14 04:24:46 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[9.9( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.740855217s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 84.000144958s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[3.3( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=13.694038391s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.953323364s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[9.9( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.740837097s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 84.000144958s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.740489006s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 83.999809265s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[7.5( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46 pruub=10.720939636s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 82.980331421s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[3.5( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=13.693650246s) [2] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 85.953140259s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[3.5( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=13.693248749s) [2] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.953140259s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.739922523s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.999809265s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[7.e( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46 pruub=10.720348358s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 82.980346680s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[7.e( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46 pruub=10.720331192s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.980346680s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[3.a( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=13.693401337s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 85.953506470s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[3.a( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=13.693387985s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.953506470s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[7.f( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46 pruub=10.720492363s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 82.980690002s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[7.f( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46 pruub=10.720479012s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.980690002s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[9.1( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.739901543s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 84.000221252s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[9.1( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.739882469s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 84.000221252s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[7.4( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46 pruub=10.720004082s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 82.980430603s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[7.4( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46 pruub=10.719953537s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.980430603s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[7.5( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46 pruub=10.719846725s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.980331421s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.740184784s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 84.000648499s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.740095139s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 84.000686646s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.740078926s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 84.000686646s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.740102768s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 84.000648499s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[7.6( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46 pruub=10.719763756s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 82.980400085s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[7.6( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46 pruub=10.719740868s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.980400085s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.739992142s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 84.000709534s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.739975929s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 84.000709534s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[9.b( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.739371300s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 84.000167847s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[9.3( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.739896774s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 84.000724792s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=44/45 n=1 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.739879608s) [2] r=-1 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 84.000770569s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=44/45 n=1 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.739864349s) [2] r=-1 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 84.000770569s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[3.1( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=13.692347527s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 85.953277588s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[3.1( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=13.692297935s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.953277588s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[7.8( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46 pruub=10.719437599s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 82.980453491s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[3.9( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=13.692420006s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 85.953422546s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[7.8( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46 pruub=10.719422340s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.980453491s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[7.9( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46 pruub=10.719314575s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 82.980461121s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[7.9( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46 pruub=10.719299316s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.980461121s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[3.c( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=13.692500114s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 85.953666687s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[3.c( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=13.692438126s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.953666687s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.739554405s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 84.000801086s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.739540100s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 84.000801086s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[7.a( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46 pruub=10.719188690s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 82.980537415s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[7.a( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46 pruub=10.719174385s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.980537415s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[3.9( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=13.692102432s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.953422546s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[3.e( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=13.692014694s) [2] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 85.953460693s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[3.e( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=13.692000389s) [2] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.953460693s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[3.f( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=13.691897392s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 85.953460693s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[3.f( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=13.691866875s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.953460693s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[9.7( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.739286423s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 84.000923157s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[9.7( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.739237785s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 84.000923157s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[9.b( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.739351273s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 84.000167847s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[9.3( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.739827156s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 84.000724792s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[9.5( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.738952637s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 84.000923157s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.738915443s) [2] r=-1 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 84.000892639s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[9.5( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.738931656s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 84.000923157s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.738898277s) [2] r=-1 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 84.000892639s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[3.11( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=13.691373825s) [2] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 85.953475952s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[7.15( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46 pruub=10.718564987s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 82.980674744s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[3.11( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=13.691359520s) [2] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.953475952s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[7.15( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46 pruub=10.718547821s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.980674744s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[3.12( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=13.691371918s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 85.953613281s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[3.12( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=13.691355705s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.953613281s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.738581657s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 84.000946045s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.743588448s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 84.006042480s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[9.1b( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.738483429s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 84.000953674s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[9.1b( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.738466263s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 84.000953674s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=44/45 n=1 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.743482590s) [2] r=-1 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 84.006004333s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=44/45 n=1 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.743469238s) [2] r=-1 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 84.006004333s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.738434792s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 84.000946045s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.743470192s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 84.006095886s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.743453979s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 84.006095886s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[9.1f( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.743432999s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 84.006088257s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[9.13( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46) [0] r=0 lpr=46 pi=[44,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[9.1f( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.743414879s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 84.006088257s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[9.19( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.743243217s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 84.005935669s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[3.15( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=13.690907478s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 85.953750610s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[7.11( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46 pruub=10.717924118s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 82.980758667s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[9.19( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.743220329s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 84.005935669s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[7.11( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46 pruub=10.717882156s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.980758667s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[3.16( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=13.690690994s) [2] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 85.953643799s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[3.15( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=13.690765381s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.953750610s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.743262291s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 84.006042480s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[3.16( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=13.690591812s) [2] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.953643799s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[9.1d( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.743186951s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 84.006187439s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.742867470s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 84.006118774s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[9.1d( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.743002892s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 84.006187439s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[7.13( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46 pruub=10.717569351s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 82.980850220s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[3.17( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=13.690276146s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 85.953651428s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.742794037s) [0] r=-1 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 84.006118774s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.742827415s) [2] r=-1 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 84.006225586s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[3.17( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=13.690260887s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.953651428s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=44/45 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=11.742802620s) [2] r=-1 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 84.006225586s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[3.18( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=13.690180779s) [2] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 85.953659058s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[3.18( empty local-lis/les=38/39 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=13.690167427s) [2] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.953659058s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[7.1c( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46 pruub=10.717329979s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 82.980857849s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[7.1c( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46 pruub=10.717315674s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.980857849s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[9.11( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46) [0] r=0 lpr=46 pi=[44,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[7.13( empty local-lis/les=42/44 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46 pruub=10.717262268s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.980850220s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[8.10( empty local-lis/les=0/0 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46) [0] r=0 lpr=46 pi=[44,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[3.1e( empty local-lis/les=0/0 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46) [2] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[7.1a( empty local-lis/les=0/0 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46) [2] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[8.15( empty local-lis/les=0/0 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46) [2] r=0 lpr=46 pi=[44,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[3.1d( empty local-lis/les=0/0 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46) [2] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[8.11( empty local-lis/les=0/0 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46) [2] r=0 lpr=46 pi=[44,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[7.c( empty local-lis/les=0/0 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46) [2] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[3.8( empty local-lis/les=0/0 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46) [2] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[3.7( empty local-lis/les=0/0 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46) [2] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[7.1f( empty local-lis/les=0/0 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[3.1b( empty local-lis/les=0/0 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[3.f( empty local-lis/les=0/0 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[9.5( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46) [0] r=0 lpr=46 pi=[44,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[7.1( empty local-lis/les=0/0 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46) [2] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[7.4( empty local-lis/les=0/0 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[8.b( empty local-lis/les=0/0 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46) [0] r=0 lpr=46 pi=[44,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[3.5( empty local-lis/les=0/0 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46) [2] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[3.c( empty local-lis/les=0/0 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[3.1( empty local-lis/les=0/0 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[9.b( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46) [0] r=0 lpr=46 pi=[44,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[7.18( empty local-lis/les=0/0 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[7.9( empty local-lis/les=0/0 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[8.6( empty local-lis/les=0/0 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46) [0] r=0 lpr=46 pi=[44,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[9.7( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46) [0] r=0 lpr=46 pi=[44,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[7.6( empty local-lis/les=0/0 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[8.9( empty local-lis/les=0/0 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46) [0] r=0 lpr=46 pi=[44,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[9.17( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46) [0] r=0 lpr=46 pi=[44,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[3.3( empty local-lis/les=0/0 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[9.9( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46) [0] r=0 lpr=46 pi=[44,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[8.f( empty local-lis/les=0/0 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46) [0] r=0 lpr=46 pi=[44,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[3.6( empty local-lis/les=0/0 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[9.f( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46) [0] r=0 lpr=46 pi=[44,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[8.e( empty local-lis/les=0/0 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46) [0] r=0 lpr=46 pi=[44,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[8.2( empty local-lis/les=0/0 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46) [2] r=0 lpr=46 pi=[44,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[8.d( empty local-lis/les=0/0 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46) [2] r=0 lpr=46 pi=[44,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[7.2( empty local-lis/les=0/0 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46) [2] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[7.5( empty local-lis/les=0/0 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46) [2] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[7.e( empty local-lis/les=0/0 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46) [2] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[7.8( empty local-lis/les=0/0 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46) [2] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[8.4( empty local-lis/les=0/0 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46) [2] r=0 lpr=46 pi=[44,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[7.a( empty local-lis/les=0/0 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46) [2] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[3.e( empty local-lis/les=0/0 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46) [2] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[3.11( empty local-lis/les=0/0 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46) [2] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[7.15( empty local-lis/les=0/0 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46) [2] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[8.1b( empty local-lis/les=0/0 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46) [2] r=0 lpr=46 pi=[44,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[7.11( empty local-lis/les=0/0 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46) [2] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[8.1c( empty local-lis/les=0/0 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46) [2] r=0 lpr=46 pi=[44,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[3.16( empty local-lis/les=0/0 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46) [2] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[8.12( empty local-lis/les=0/0 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46) [2] r=0 lpr=46 pi=[44,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[7.3( empty local-lis/les=0/0 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[8.c( empty local-lis/les=0/0 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46) [0] r=0 lpr=46 pi=[44,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[9.d( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46) [0] r=0 lpr=46 pi=[44,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[7.f( empty local-lis/les=0/0 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[9.1( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46) [0] r=0 lpr=46 pi=[44,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[3.a( empty local-lis/les=0/0 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[3.18( empty local-lis/les=0/0 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46) [2] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[3.9( empty local-lis/les=0/0 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[9.3( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46) [0] r=0 lpr=46 pi=[44,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[3.17( empty local-lis/les=0/0 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[7.1c( empty local-lis/les=0/0 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46) [2] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[7.13( empty local-lis/les=0/0 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[9.1d( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46) [0] r=0 lpr=46 pi=[44,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[8.1d( empty local-lis/les=0/0 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46) [0] r=0 lpr=46 pi=[44,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[9.1f( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46) [0] r=0 lpr=46 pi=[44,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[5.1d( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.679177284s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 82.767280579s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[5.1d( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.679156303s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.767280579s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[5.1e( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.685169220s) [0] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 82.773445129s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[3.15( empty local-lis/les=0/0 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[8.1f( empty local-lis/les=0/0 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46) [0] r=0 lpr=46 pi=[44,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[9.19( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46) [0] r=0 lpr=46 pi=[44,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[8.18( empty local-lis/les=0/0 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46) [0] r=0 lpr=46 pi=[44,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.19( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.690094948s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 81.778480530s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.18( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.690087318s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 81.778488159s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.18( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.690066338s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.778488159s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.19( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.690060616s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.778480530s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.17( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.690387726s) [1] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 81.778862000s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.17( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.690342903s) [1] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.778862000s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.16( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.685352325s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 81.773925781s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.16( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.685336113s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.773925781s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[5.11( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.684860229s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 82.773468018s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[3.12( empty local-lis/les=0/0 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[5.11( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.684836388s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.773468018s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.15( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.689896584s) [1] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 81.778541565s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[5.12( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.685453415s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 82.774124146s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[5.12( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.685437202s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.774124146s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.15( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.689856529s) [1] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.778541565s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[5.13( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.685414314s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 82.774223328s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[5.1e( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.684666634s) [0] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.773445129s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[5.13( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.685390472s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.774223328s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.13( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.689598083s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 81.778526306s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.13( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.689569473s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.778526306s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[5.14( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.685208321s) [0] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 82.774185181s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[5.14( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.685186386s) [0] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.774185181s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[9.1b( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46) [0] r=0 lpr=46 pi=[44,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[5.15( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.685113907s) [0] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 82.774208069s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[8.1a( empty local-lis/les=0/0 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46) [0] r=0 lpr=46 pi=[44,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[5.15( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.685032845s) [0] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.774208069s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[5.16( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.684931755s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 82.774230957s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[9.15( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=46) [0] r=0 lpr=46 pi=[44,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[5.16( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.684901237s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.774230957s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.11( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.689327240s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 81.778526306s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.11( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.689135551s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.778526306s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[8.14( empty local-lis/les=0/0 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46) [0] r=0 lpr=46 pi=[44,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.f( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.689026833s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 81.778533936s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[3.1f( empty local-lis/les=0/0 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.d( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.689134598s) [1] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 81.778656006s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[5.9( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.684916496s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 82.774436951s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.f( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.689004898s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.778533936s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[5.9( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.684856415s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.774436951s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[7.1b( empty local-lis/les=0/0 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.b( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.688797951s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 81.778663635s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[4.18( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.683979034s) [2] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 93.337593079s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[4.18( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.683947563s) [2] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.337593079s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.b( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.688688278s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.778663635s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[5.7( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.684346199s) [0] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 82.774475098s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[4.14( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.683721542s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 93.337509155s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[4.14( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.683700562s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.337509155s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[4.13( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.683527946s) [2] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 93.337432861s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.7( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.688366890s) [1] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 81.778671265s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[4.13( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.683504105s) [2] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.337432861s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[5.7( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.684272766s) [0] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.774475098s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.d( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.688315392s) [1] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.778656006s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.7( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.688326836s) [1] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.778671265s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[4.12( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.683350563s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 93.337402344s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[4.12( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.683333397s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.337402344s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[4.11( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.683230400s) [2] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 93.337394714s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[4.11( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.683213234s) [2] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.337394714s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.8( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.688287735s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 81.778892517s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[4.10( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.683131218s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 93.337402344s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[4.10( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.683112144s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.337402344s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.8( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.688209534s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.778892517s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[5.f( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.683850288s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 82.774482727s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.2( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.687703133s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 81.778755188s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[5.5( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.683485985s) [0] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 82.774574280s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[5.f( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.683551788s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.774482727s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.2( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.687587738s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.778755188s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[5.c( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.683216095s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 82.774444580s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.3( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.687521935s) [1] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 81.778770447s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.3( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.687493324s) [1] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.778770447s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[5.c( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.683179855s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.774444580s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[5.4( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.683179855s) [0] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 82.774490356s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[5.4( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.683161736s) [0] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.774490356s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.4( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.687357903s) [1] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 81.778762817s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.4( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.687329292s) [1] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.778762817s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[5.1d( empty local-lis/les=0/0 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[5.3( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.683064461s) [0] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 82.774581909s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[2.17( empty local-lis/les=0/0 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [1] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[5.2( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.682981491s) [0] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 82.774604797s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[5.3( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.682992935s) [0] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.774581909s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[5.2( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.682959557s) [0] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.774604797s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[5.11( empty local-lis/les=0/0 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[5.5( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.683432579s) [0] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.774574280s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[5.12( empty local-lis/les=0/0 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[2.15( empty local-lis/les=0/0 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [1] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[5.13( empty local-lis/les=0/0 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.5( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.686880112s) [1] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 81.778778076s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.5( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.686859131s) [1] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.778778076s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.9( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.686838150s) [1] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 81.778854370s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.9( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.686796188s) [1] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.778854370s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[5.16( empty local-lis/les=0/0 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[5.9( empty local-lis/les=0/0 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[2.d( empty local-lis/les=0/0 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [1] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[2.7( empty local-lis/les=0/0 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [1] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.a( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.686286926s) [1] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 81.778869629s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.1b( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.686216354s) [1] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 81.778877258s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.a( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.686246872s) [1] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.778869629s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.1b( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.686198235s) [1] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.778877258s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[5.1a( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.681732178s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 82.774612427s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.1c( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.686108589s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 81.778984070s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[5.1a( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.681715965s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.774612427s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.1c( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.686067581s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.778984070s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.1d( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.686077118s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 81.779014587s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.1d( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.686021805s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.779014587s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[5.19( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.681597710s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 82.774681091s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[5.1( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.681447029s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 82.774581909s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[5.1( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.681421280s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.774581909s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[5.18( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.681190491s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 82.774612427s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[5.18( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.681169510s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.774612427s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.1f( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.685831070s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 81.779090881s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.1f( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.685445786s) [0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.779090881s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[5.19( empty local-lis/les=40/41 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.680823326s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.774681091s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.6( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.686318398s) [1] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 81.778778076s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[2.6( empty local-lis/les=38/40 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46 pruub=14.684594154s) [1] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.778778076s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[5.f( empty local-lis/les=0/0 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[2.3( empty local-lis/les=0/0 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [1] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[5.c( empty local-lis/les=0/0 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[2.4( empty local-lis/les=0/0 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [1] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=42/43 n=1 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.686437607s) [1] r=-1 lpr=46 pi=[42,46)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 87.345153809s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=42/43 n=1 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.686401367s) [1] r=-1 lpr=46 pi=[42,46)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 87.345153809s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[4.f( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.678646088s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 93.337387085s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[4.f( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.678559303s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.337387085s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[4.d( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.678345680s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 93.337272644s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[4.d( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.678267479s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.337272644s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=42/43 n=1 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.686025620s) [1] r=-1 lpr=46 pi=[42,46)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 87.345100403s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[4.e( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.678340912s) [2] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 93.337371826s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[4.2( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.678201675s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 93.337326050s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=42/43 n=1 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.685997009s) [1] r=-1 lpr=46 pi=[42,46)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 87.345100403s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[4.2( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.678167343s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.337326050s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[4.1( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.678050041s) [2] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 93.337257385s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[4.e( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.678188324s) [2] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.337371826s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[4.1( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.678017616s) [2] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.337257385s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=42/43 n=2 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.685768127s) [1] r=-1 lpr=46 pi=[42,46)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 87.345062256s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=42/43 n=2 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.685711861s) [1] r=-1 lpr=46 pi=[42,46)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 87.345039368s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=42/43 n=2 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.685744286s) [1] r=-1 lpr=46 pi=[42,46)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 87.345062256s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=42/43 n=2 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.685668945s) [1] r=-1 lpr=46 pi=[42,46)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 87.345039368s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[4.4( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.677750587s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 93.337234497s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[4.4( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.677717209s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.337234497s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=42/43 n=1 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.685471535s) [1] r=-1 lpr=46 pi=[42,46)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 87.345001221s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=42/43 n=1 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.685450554s) [1] r=-1 lpr=46 pi=[42,46)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 87.345001221s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[4.1a( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.677628517s) [2] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 93.337196350s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[4.9( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.677671432s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 93.337242126s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[4.5( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.677562714s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 93.337158203s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[4.1a( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.677606583s) [2] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.337196350s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=42/43 n=1 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.685088158s) [1] r=-1 lpr=46 pi=[42,46)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 87.344757080s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[4.5( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.677526474s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.337158203s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=42/43 n=1 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.685061455s) [1] r=-1 lpr=46 pi=[42,46)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 87.344757080s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[4.9( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.677613258s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.337242126s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[4.a( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.677443504s) [2] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 93.337234497s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[4.1b( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.677290916s) [2] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 93.337104797s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[4.a( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.677420616s) [2] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.337234497s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=42/43 n=1 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.685117722s) [1] r=-1 lpr=46 pi=[42,46)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 87.344955444s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[4.1b( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.677270889s) [2] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.337104797s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=42/43 n=1 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.685090065s) [1] r=-1 lpr=46 pi=[42,46)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 87.344955444s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=42/43 n=2 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.685085297s) [1] r=-1 lpr=46 pi=[42,46)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 87.345054626s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[4.8( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.677114487s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 93.337104797s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=42/43 n=2 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.685057640s) [1] r=-1 lpr=46 pi=[42,46)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 87.345054626s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[4.8( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.677094460s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.337104797s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[4.7( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.676949501s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 93.336975098s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[4.7( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.676893234s) [1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.336975098s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[4.1c( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.671382904s) [2] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 93.331520081s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[4.18( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46) [2] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[4.1c( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=15.671361923s) [2] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.331520081s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[2.5( empty local-lis/les=0/0 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [1] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[4.13( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46) [2] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[2.9( empty local-lis/les=0/0 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [1] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[2.a( empty local-lis/les=0/0 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [1] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[4.11( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46) [2] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[2.1b( empty local-lis/les=0/0 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [1] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[5.1a( empty local-lis/les=0/0 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[5.1( empty local-lis/les=0/0 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[5.18( empty local-lis/les=0/0 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[4.e( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46) [2] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[4.1( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46) [2] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[4.1a( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46) [2] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[4.a( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46) [2] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[5.19( empty local-lis/les=0/0 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[2.6( empty local-lis/les=0/0 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [1] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[4.1b( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46) [2] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 46 pg[4.1c( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46) [2] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[4.14( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[2.18( empty local-lis/les=0/0 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[2.19( empty local-lis/les=0/0 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[4.12( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[4.10( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[6.d( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=46) [1] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[2.16( empty local-lis/les=0/0 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[4.f( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[5.1e( empty local-lis/les=0/0 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46) [0] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[4.d( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[2.13( empty local-lis/les=0/0 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[5.14( empty local-lis/les=0/0 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46) [0] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[6.f( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=46) [1] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[5.15( empty local-lis/les=0/0 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46) [0] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[4.2( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[6.1( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=46) [1] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[2.11( empty local-lis/les=0/0 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[6.3( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=46) [1] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[4.4( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[6.b( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=46) [1] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[2.f( empty local-lis/les=0/0 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[4.5( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[6.7( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=46) [1] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[4.9( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[6.9( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=46) [1] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[6.5( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=46) [1] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[4.8( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 46 pg[4.7( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[2.b( empty local-lis/les=0/0 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[5.7( empty local-lis/les=0/0 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46) [0] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[2.8( empty local-lis/les=0/0 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[2.2( empty local-lis/les=0/0 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[5.4( empty local-lis/les=0/0 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46) [0] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[5.3( empty local-lis/les=0/0 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46) [0] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[5.2( empty local-lis/les=0/0 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46) [0] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[5.5( empty local-lis/les=0/0 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46) [0] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[2.1d( empty local-lis/les=0/0 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[2.1f( empty local-lis/les=0/0 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 46 pg[2.1c( empty local-lis/les=0/0 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:24:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Oct 14 04:24:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Oct 14 04:24:47 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[9.15( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[9.15( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[9.17( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[9.17( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[9.13( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[9.11( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[9.11( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[9.13( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[9.d( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[9.d( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[9.f( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[9.f( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[9.9( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[9.9( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[9.b( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[9.b( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[9.1( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[9.1( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[9.3( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[9.3( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[9.7( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[9.5( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[9.7( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[9.5( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[9.1b( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[9.19( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[9.1b( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[9.19( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[9.1f( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[9.1f( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[9.1d( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[9.1d( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[9.13( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=-1 lpr=47 pi=[44,47)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[9.11( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=-1 lpr=47 pi=[44,47)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[9.13( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=-1 lpr=47 pi=[44,47)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[9.11( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=-1 lpr=47 pi=[44,47)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[9.5( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=-1 lpr=47 pi=[44,47)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[9.5( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=-1 lpr=47 pi=[44,47)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[9.7( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=-1 lpr=47 pi=[44,47)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[9.7( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=-1 lpr=47 pi=[44,47)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[9.b( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=-1 lpr=47 pi=[44,47)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[9.17( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=-1 lpr=47 pi=[44,47)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[9.b( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=-1 lpr=47 pi=[44,47)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[9.17( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=-1 lpr=47 pi=[44,47)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[9.9( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=-1 lpr=47 pi=[44,47)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[9.9( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=-1 lpr=47 pi=[44,47)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[9.f( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=-1 lpr=47 pi=[44,47)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[9.f( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=-1 lpr=47 pi=[44,47)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[9.d( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=-1 lpr=47 pi=[44,47)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[9.d( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=-1 lpr=47 pi=[44,47)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[9.1( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=-1 lpr=47 pi=[44,47)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[9.1( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=-1 lpr=47 pi=[44,47)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[9.3( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=-1 lpr=47 pi=[44,47)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[9.1d( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=-1 lpr=47 pi=[44,47)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[9.3( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=-1 lpr=47 pi=[44,47)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[9.1d( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=-1 lpr=47 pi=[44,47)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[9.19( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=-1 lpr=47 pi=[44,47)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[9.19( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=-1 lpr=47 pi=[44,47)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[9.1f( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=-1 lpr=47 pi=[44,47)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[9.1b( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=-1 lpr=47 pi=[44,47)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[9.1b( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=-1 lpr=47 pi=[44,47)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[9.15( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=-1 lpr=47 pi=[44,47)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[9.15( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=-1 lpr=47 pi=[44,47)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[9.1f( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] r=-1 lpr=47 pi=[44,47)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:47 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 47 pg[3.1e( empty local-lis/les=46/47 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46) [2] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[5.19( empty local-lis/les=46/47 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[5.18( empty local-lis/les=46/47 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[5.1d( empty local-lis/les=46/47 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[4.f( empty local-lis/les=46/47 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[2.1b( empty local-lis/les=46/47 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [1] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[2.9( empty local-lis/les=46/47 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [1] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[6.d( v 37'39 lc 34'13 (0'0,37'39] local-lis/les=46/47 n=1 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=46) [1] r=0 lpr=46 pi=[42,46)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[2.6( empty local-lis/les=46/47 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [1] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[5.1( empty local-lis/les=46/47 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[2.7( empty local-lis/les=46/47 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [1] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=46/47 n=2 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=46) [1] r=0 lpr=46 pi=[42,46)/1 crt=37'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[4.2( empty local-lis/les=46/47 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[2.4( empty local-lis/les=46/47 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [1] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[4.4( empty local-lis/les=46/47 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[5.1a( empty local-lis/les=46/47 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[4.d( empty local-lis/les=46/47 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[5.c( empty local-lis/les=46/47 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[6.f( v 37'39 lc 34'1 (0'0,37'39] local-lis/les=46/47 n=1 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=46) [1] r=0 lpr=46 pi=[42,46)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[2.a( empty local-lis/les=46/47 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [1] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=46/47 n=2 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=46) [1] r=0 lpr=46 pi=[42,46)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[4.7( empty local-lis/les=46/47 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[2.5( empty local-lis/les=46/47 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [1] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[6.7( v 37'39 lc 34'21 (0'0,37'39] local-lis/les=46/47 n=1 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=46) [1] r=0 lpr=46 pi=[42,46)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[2.3( empty local-lis/les=46/47 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [1] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[4.5( empty local-lis/les=46/47 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=46/47 n=1 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=46) [1] r=0 lpr=46 pi=[42,46)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[5.f( empty local-lis/les=46/47 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[6.5( v 37'39 lc 34'11 (0'0,37'39] local-lis/les=46/47 n=2 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=46) [1] r=0 lpr=46 pi=[42,46)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=46/47 n=1 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=46) [1] r=0 lpr=46 pi=[42,46)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[4.9( empty local-lis/les=46/47 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[5.9( empty local-lis/les=46/47 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[4.8( empty local-lis/les=46/47 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[5.16( empty local-lis/les=46/47 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[2.d( empty local-lis/les=46/47 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [1] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[4.14( empty local-lis/les=46/47 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[5.12( empty local-lis/les=46/47 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[2.15( empty local-lis/les=46/47 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [1] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[5.13( empty local-lis/les=46/47 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[4.12( empty local-lis/les=46/47 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[2.17( empty local-lis/les=46/47 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [1] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[4.10( empty local-lis/les=46/47 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 47 pg[5.11( empty local-lis/les=46/47 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 47 pg[7.1a( empty local-lis/les=46/47 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46) [2] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 47 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=46/47 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46) [2] r=0 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 47 pg[3.1d( empty local-lis/les=46/47 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46) [2] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 47 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=46/47 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46) [2] r=0 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 47 pg[7.c( empty local-lis/les=46/47 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46) [2] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 47 pg[3.8( empty local-lis/les=46/47 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46) [2] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 47 pg[3.7( empty local-lis/les=46/47 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46) [2] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 47 pg[7.1( empty local-lis/les=46/47 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46) [2] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 47 pg[3.5( empty local-lis/les=46/47 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46) [2] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 47 pg[8.2( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=46/47 n=1 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46) [2] r=0 lpr=46 pi=[44,46)/1 crt=33'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 47 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=46/47 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46) [2] r=0 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 47 pg[7.2( empty local-lis/les=46/47 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46) [2] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 47 pg[7.5( empty local-lis/les=46/47 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46) [2] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 47 pg[7.e( empty local-lis/les=46/47 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46) [2] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 47 pg[7.8( empty local-lis/les=46/47 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46) [2] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 47 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=46/47 n=1 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46) [2] r=0 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 47 pg[3.e( empty local-lis/les=46/47 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46) [2] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 47 pg[3.11( empty local-lis/les=46/47 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46) [2] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 47 pg[7.15( empty local-lis/les=46/47 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46) [2] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 47 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=46/47 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46) [2] r=0 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 47 pg[7.11( empty local-lis/les=46/47 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46) [2] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 47 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=46/47 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46) [2] r=0 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 47 pg[3.16( empty local-lis/les=46/47 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46) [2] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 47 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=46/47 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46) [2] r=0 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 47 pg[3.18( empty local-lis/les=46/47 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46) [2] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 47 pg[7.1c( empty local-lis/les=46/47 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46) [2] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 47 pg[4.1c( empty local-lis/les=46/47 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46) [2] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 47 pg[4.11( empty local-lis/les=46/47 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46) [2] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 47 pg[4.13( empty local-lis/les=46/47 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46) [2] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[7.1f( empty local-lis/les=46/47 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 47 pg[7.a( empty local-lis/les=46/47 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46) [2] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[3.1b( empty local-lis/les=46/47 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[3.f( empty local-lis/les=46/47 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=46/47 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46) [0] r=0 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[3.c( empty local-lis/les=46/47 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[7.4( empty local-lis/les=46/47 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[3.1( empty local-lis/les=46/47 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=46/47 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46) [0] r=0 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=46/47 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46) [0] r=0 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[7.9( empty local-lis/les=46/47 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[7.18( empty local-lis/les=46/47 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[7.6( empty local-lis/les=46/47 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=46/47 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46) [0] r=0 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[3.3( empty local-lis/les=46/47 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[3.6( empty local-lis/les=46/47 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=46/47 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46) [0] r=0 lpr=46 pi=[44,46)/1 crt=33'4 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 47 pg[4.1( empty local-lis/les=46/47 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46) [2] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 47 pg[4.e( empty local-lis/les=46/47 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46) [2] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 47 pg[4.1a( empty local-lis/les=46/47 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46) [2] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 47 pg[4.1b( empty local-lis/les=46/47 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46) [2] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 47 pg[4.18( empty local-lis/les=46/47 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46) [2] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=46/47 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46) [0] r=0 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[7.3( empty local-lis/les=46/47 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[7.f( empty local-lis/les=46/47 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=46/47 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46) [0] r=0 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 47 pg[4.a( empty local-lis/les=46/47 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=46) [2] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[3.a( empty local-lis/les=46/47 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[7.13( empty local-lis/les=46/47 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[3.9( empty local-lis/les=46/47 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[3.17( empty local-lis/les=46/47 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=46/47 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46) [0] r=0 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=46/47 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46) [0] r=0 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[3.15( empty local-lis/les=46/47 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[3.12( empty local-lis/les=46/47 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[8.18( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=46/47 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46) [0] r=0 lpr=46 pi=[44,46)/1 crt=33'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=46/47 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46) [0] r=0 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[7.1b( empty local-lis/les=46/47 n=0 ec=42/22 lis/c=42/42 les/c/f=44/44/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=46/47 n=0 ec=44/32 lis/c=44/44 les/c/f=45/45/0 sis=46) [0] r=0 lpr=46 pi=[44,46)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[3.1f( empty local-lis/les=46/47 n=0 ec=38/14 lis/c=38/38 les/c/f=39/39/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[2.11( empty local-lis/les=46/47 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[2.13( empty local-lis/les=46/47 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[5.14( empty local-lis/les=46/47 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46) [0] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[5.15( empty local-lis/les=46/47 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46) [0] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[2.b( empty local-lis/les=46/47 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[2.16( empty local-lis/les=46/47 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[2.8( empty local-lis/les=46/47 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[5.2( empty local-lis/les=46/47 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46) [0] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[2.1f( empty local-lis/les=46/47 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[5.3( empty local-lis/les=46/47 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46) [0] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[2.2( empty local-lis/les=46/47 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[5.5( empty local-lis/les=46/47 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46) [0] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[2.f( empty local-lis/les=46/47 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[2.1c( empty local-lis/les=46/47 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[5.4( empty local-lis/les=46/47 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46) [0] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[2.1d( empty local-lis/les=46/47 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[5.7( empty local-lis/les=46/47 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46) [0] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[5.1e( empty local-lis/les=46/47 n=0 ec=40/18 lis/c=40/40 les/c/f=41/41/0 sis=46) [0] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[2.18( empty local-lis/les=46/47 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 47 pg[2.19( empty local-lis/les=46/47 n=0 ec=38/12 lis/c=38/38 les/c/f=40/40/0 sis=46) [0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:47 np0005486808 podman[103513]: 2025-10-14 08:24:47.276117855 +0000 UTC m=+0.073677298 container create 0f4a8ed3758d44e93fb1114c57dea9c287473c682ba56aeb8187c0846c63f631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_shaw, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:24:47 np0005486808 podman[103513]: 2025-10-14 08:24:47.24687173 +0000 UTC m=+0.044431253 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:24:47 np0005486808 systemd[1]: Started libpod-conmon-0f4a8ed3758d44e93fb1114c57dea9c287473c682ba56aeb8187c0846c63f631.scope.
Oct 14 04:24:47 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:47 np0005486808 podman[103513]: 2025-10-14 08:24:47.399920822 +0000 UTC m=+0.197480335 container init 0f4a8ed3758d44e93fb1114c57dea9c287473c682ba56aeb8187c0846c63f631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_shaw, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:24:47 np0005486808 podman[103513]: 2025-10-14 08:24:47.408575411 +0000 UTC m=+0.206134884 container start 0f4a8ed3758d44e93fb1114c57dea9c287473c682ba56aeb8187c0846c63f631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_shaw, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 14 04:24:47 np0005486808 cranky_shaw[103529]: 167 167
Oct 14 04:24:47 np0005486808 systemd[1]: libpod-0f4a8ed3758d44e93fb1114c57dea9c287473c682ba56aeb8187c0846c63f631.scope: Deactivated successfully.
Oct 14 04:24:47 np0005486808 podman[103513]: 2025-10-14 08:24:47.415084868 +0000 UTC m=+0.212644391 container attach 0f4a8ed3758d44e93fb1114c57dea9c287473c682ba56aeb8187c0846c63f631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_shaw, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:24:47 np0005486808 podman[103513]: 2025-10-14 08:24:47.415414886 +0000 UTC m=+0.212974349 container died 0f4a8ed3758d44e93fb1114c57dea9c287473c682ba56aeb8187c0846c63f631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:24:47 np0005486808 systemd[1]: var-lib-containers-storage-overlay-b0b15cfa6cce583649660a6399af9eb7e07c8787f261d31c0382be1fda0d600c-merged.mount: Deactivated successfully.
Oct 14 04:24:47 np0005486808 podman[103513]: 2025-10-14 08:24:47.478631901 +0000 UTC m=+0.276191344 container remove 0f4a8ed3758d44e93fb1114c57dea9c287473c682ba56aeb8187c0846c63f631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 14 04:24:47 np0005486808 systemd[1]: libpod-conmon-0f4a8ed3758d44e93fb1114c57dea9c287473c682ba56aeb8187c0846c63f631.scope: Deactivated successfully.
Oct 14 04:24:47 np0005486808 ceph-mgr[74543]: [progress INFO root] Completed event f724663a-520f-4edc-ab70-2a87061e65e5 (Global Recovery Event) in 10 seconds
Oct 14 04:24:47 np0005486808 podman[103551]: 2025-10-14 08:24:47.737940617 +0000 UTC m=+0.065872280 container create f7fcf9028a1c8420af53f77d8ac794e113315d834b1aa896ab77835697847b65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_cori, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:24:47 np0005486808 systemd[1]: Started libpod-conmon-f7fcf9028a1c8420af53f77d8ac794e113315d834b1aa896ab77835697847b65.scope.
Oct 14 04:24:47 np0005486808 podman[103551]: 2025-10-14 08:24:47.712748809 +0000 UTC m=+0.040680532 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:24:47 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:24:47 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d04fd6afde7a3a0937a12e4270339b58eaa5bf6aeeabbca46b56dc87e890770c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:47 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d04fd6afde7a3a0937a12e4270339b58eaa5bf6aeeabbca46b56dc87e890770c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:47 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d04fd6afde7a3a0937a12e4270339b58eaa5bf6aeeabbca46b56dc87e890770c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:47 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d04fd6afde7a3a0937a12e4270339b58eaa5bf6aeeabbca46b56dc87e890770c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:24:47 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 14 04:24:47 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 14 04:24:47 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 14 04:24:47 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 14 04:24:47 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 14 04:24:47 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 14 04:24:47 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 14 04:24:47 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 14 04:24:47 np0005486808 podman[103551]: 2025-10-14 08:24:47.854137681 +0000 UTC m=+0.182069354 container init f7fcf9028a1c8420af53f77d8ac794e113315d834b1aa896ab77835697847b65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_cori, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 14 04:24:47 np0005486808 podman[103551]: 2025-10-14 08:24:47.866163731 +0000 UTC m=+0.194095374 container start f7fcf9028a1c8420af53f77d8ac794e113315d834b1aa896ab77835697847b65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:24:47 np0005486808 podman[103551]: 2025-10-14 08:24:47.869619094 +0000 UTC m=+0.197550737 container attach f7fcf9028a1c8420af53f77d8ac794e113315d834b1aa896ab77835697847b65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:24:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Oct 14 04:24:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Oct 14 04:24:48 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Oct 14 04:24:48 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 48 pg[9.13( v 41'385 (0'0,41'385] local-lis/les=47/48 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] async=[0] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:48 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 48 pg[9.11( v 41'385 (0'0,41'385] local-lis/les=47/48 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] async=[0] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:48 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 48 pg[9.7( v 41'385 (0'0,41'385] local-lis/les=47/48 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] async=[0] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:48 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 48 pg[9.17( v 41'385 (0'0,41'385] local-lis/les=47/48 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] async=[0] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:48 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 48 pg[9.f( v 41'385 (0'0,41'385] local-lis/les=47/48 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] async=[0] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:48 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 48 pg[9.b( v 41'385 (0'0,41'385] local-lis/les=47/48 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] async=[0] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:48 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 48 pg[9.1( v 41'385 (0'0,41'385] local-lis/les=47/48 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] async=[0] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:48 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 48 pg[9.1d( v 41'385 (0'0,41'385] local-lis/les=47/48 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] async=[0] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:48 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 48 pg[9.3( v 41'385 (0'0,41'385] local-lis/les=47/48 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] async=[0] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:48 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 48 pg[9.1b( v 41'385 (0'0,41'385] local-lis/les=47/48 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] async=[0] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:48 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 48 pg[9.1f( v 41'385 (0'0,41'385] local-lis/les=47/48 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] async=[0] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:48 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 48 pg[9.5( v 41'385 (0'0,41'385] local-lis/les=47/48 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] async=[0] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:48 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 48 pg[9.9( v 41'385 (0'0,41'385] local-lis/les=47/48 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] async=[0] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:48 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 48 pg[9.19( v 41'385 (0'0,41'385] local-lis/les=47/48 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] async=[0] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:48 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 48 pg[9.d( v 41'385 (0'0,41'385] local-lis/les=47/48 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] async=[0] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:48 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 48 pg[9.15( v 41'385 (0'0,41'385] local-lis/les=47/48 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=47) [0]/[1] async=[0] r=0 lpr=47 pi=[44,47)/1 crt=41'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v107: 243 pgs: 243 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:24:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0) v1
Oct 14 04:24:48 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 14 04:24:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Oct 14 04:24:48 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 14 04:24:48 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 2.c scrub starts
Oct 14 04:24:48 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 2.c scrub ok
Oct 14 04:24:48 np0005486808 amazing_cori[103568]: {
Oct 14 04:24:48 np0005486808 amazing_cori[103568]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 04:24:48 np0005486808 amazing_cori[103568]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:24:48 np0005486808 amazing_cori[103568]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 04:24:48 np0005486808 amazing_cori[103568]:        "osd_id": 2,
Oct 14 04:24:48 np0005486808 amazing_cori[103568]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:24:48 np0005486808 amazing_cori[103568]:        "type": "bluestore"
Oct 14 04:24:48 np0005486808 amazing_cori[103568]:    },
Oct 14 04:24:48 np0005486808 amazing_cori[103568]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 04:24:48 np0005486808 amazing_cori[103568]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:24:48 np0005486808 amazing_cori[103568]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 04:24:48 np0005486808 amazing_cori[103568]:        "osd_id": 1,
Oct 14 04:24:48 np0005486808 amazing_cori[103568]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:24:48 np0005486808 amazing_cori[103568]:        "type": "bluestore"
Oct 14 04:24:48 np0005486808 amazing_cori[103568]:    },
Oct 14 04:24:48 np0005486808 amazing_cori[103568]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 04:24:48 np0005486808 amazing_cori[103568]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:24:48 np0005486808 amazing_cori[103568]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 04:24:48 np0005486808 amazing_cori[103568]:        "osd_id": 0,
Oct 14 04:24:48 np0005486808 amazing_cori[103568]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:24:48 np0005486808 amazing_cori[103568]:        "type": "bluestore"
Oct 14 04:24:48 np0005486808 amazing_cori[103568]:    }
Oct 14 04:24:48 np0005486808 amazing_cori[103568]: }
Oct 14 04:24:48 np0005486808 systemd[1]: libpod-f7fcf9028a1c8420af53f77d8ac794e113315d834b1aa896ab77835697847b65.scope: Deactivated successfully.
Oct 14 04:24:48 np0005486808 systemd[1]: libpod-f7fcf9028a1c8420af53f77d8ac794e113315d834b1aa896ab77835697847b65.scope: Consumed 1.017s CPU time.
Oct 14 04:24:48 np0005486808 podman[103551]: 2025-10-14 08:24:48.878930645 +0000 UTC m=+1.206862308 container died f7fcf9028a1c8420af53f77d8ac794e113315d834b1aa896ab77835697847b65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:24:48 np0005486808 systemd[1]: var-lib-containers-storage-overlay-d04fd6afde7a3a0937a12e4270339b58eaa5bf6aeeabbca46b56dc87e890770c-merged.mount: Deactivated successfully.
Oct 14 04:24:48 np0005486808 podman[103551]: 2025-10-14 08:24:48.951674719 +0000 UTC m=+1.279606392 container remove f7fcf9028a1c8420af53f77d8ac794e113315d834b1aa896ab77835697847b65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_cori, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 14 04:24:48 np0005486808 systemd[1]: libpod-conmon-f7fcf9028a1c8420af53f77d8ac794e113315d834b1aa896ab77835697847b65.scope: Deactivated successfully.
Oct 14 04:24:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:24:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:24:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:49 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev b510c69c-d14a-4372-954f-2e23921c82c8 does not exist
Oct 14 04:24:49 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 1013d917-32c1-4ac8-8177-de5002de5ad4 does not exist
Oct 14 04:24:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Oct 14 04:24:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 14 04:24:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 14 04:24:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Oct 14 04:24:49 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 14 04:24:49 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 14 04:24:49 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:49 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:49 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Oct 14 04:24:49 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 49 pg[9.1b( v 41'385 (0'0,41'385] local-lis/les=0/0 n=5 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49) [0] r=0 lpr=49 pi=[44,49)/1 luod=0'0 crt=41'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:49 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 49 pg[9.1b( v 41'385 (0'0,41'385] local-lis/les=0/0 n=5 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49) [0] r=0 lpr=49 pi=[44,49)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:49 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 49 pg[9.1f( v 41'385 (0'0,41'385] local-lis/les=0/0 n=5 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49) [0] r=0 lpr=49 pi=[44,49)/1 luod=0'0 crt=41'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:49 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 49 pg[9.1f( v 41'385 (0'0,41'385] local-lis/les=0/0 n=5 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49) [0] r=0 lpr=49 pi=[44,49)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:49 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 49 pg[9.1d( v 41'385 (0'0,41'385] local-lis/les=0/0 n=5 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49) [0] r=0 lpr=49 pi=[44,49)/1 luod=0'0 crt=41'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:49 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 49 pg[9.1d( v 41'385 (0'0,41'385] local-lis/les=0/0 n=5 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49) [0] r=0 lpr=49 pi=[44,49)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:49 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 49 pg[9.3( v 41'385 (0'0,41'385] local-lis/les=0/0 n=6 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49) [0] r=0 lpr=49 pi=[44,49)/1 luod=0'0 crt=41'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:49 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 49 pg[9.3( v 41'385 (0'0,41'385] local-lis/les=0/0 n=6 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49) [0] r=0 lpr=49 pi=[44,49)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:49 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 49 pg[9.1( v 41'385 (0'0,41'385] local-lis/les=0/0 n=6 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49) [0] r=0 lpr=49 pi=[44,49)/1 luod=0'0 crt=41'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:49 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 49 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=42/43 n=1 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=49 pruub=15.478281021s) [1] r=-1 lpr=49 pi=[42,49)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 95.345191956s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:49 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 49 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=42/43 n=1 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=49 pruub=15.478234291s) [1] r=-1 lpr=49 pi=[42,49)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.345191956s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:49 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 49 pg[9.1( v 41'385 (0'0,41'385] local-lis/les=0/0 n=6 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49) [0] r=0 lpr=49 pi=[44,49)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:49 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 49 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=42/43 n=2 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=49 pruub=15.478059769s) [1] r=-1 lpr=49 pi=[42,49)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 95.345245361s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:49 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 49 pg[9.f( v 41'385 (0'0,41'385] local-lis/les=0/0 n=6 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49) [0] r=0 lpr=49 pi=[44,49)/1 luod=0'0 crt=41'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:49 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 49 pg[9.f( v 41'385 (0'0,41'385] local-lis/les=0/0 n=6 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49) [0] r=0 lpr=49 pi=[44,49)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:49 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 49 pg[9.17( v 41'385 (0'0,41'385] local-lis/les=0/0 n=5 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49) [0] r=0 lpr=49 pi=[44,49)/1 luod=0'0 crt=41'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:49 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 49 pg[9.7( v 41'385 (0'0,41'385] local-lis/les=0/0 n=6 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49) [0] r=0 lpr=49 pi=[44,49)/1 luod=0'0 crt=41'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:49 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 49 pg[9.17( v 41'385 (0'0,41'385] local-lis/les=0/0 n=5 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49) [0] r=0 lpr=49 pi=[44,49)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:49 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 49 pg[9.7( v 41'385 (0'0,41'385] local-lis/les=0/0 n=6 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49) [0] r=0 lpr=49 pi=[44,49)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:49 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 49 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=42/43 n=2 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=49 pruub=15.477815628s) [1] r=-1 lpr=49 pi=[42,49)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 95.345207214s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:49 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 49 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=42/43 n=2 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=49 pruub=15.477582932s) [1] r=-1 lpr=49 pi=[42,49)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.345207214s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:49 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 49 pg[9.b( v 41'385 (0'0,41'385] local-lis/les=0/0 n=6 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49) [0] r=0 lpr=49 pi=[44,49)/1 luod=0'0 crt=41'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:49 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 49 pg[9.b( v 41'385 (0'0,41'385] local-lis/les=0/0 n=6 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49) [0] r=0 lpr=49 pi=[44,49)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:49 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 49 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=42/43 n=2 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=49 pruub=15.477826118s) [1] r=-1 lpr=49 pi=[42,49)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.345245361s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:49 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 49 pg[9.5( v 41'385 (0'0,41'385] local-lis/les=0/0 n=6 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49) [0] r=0 lpr=49 pi=[44,49)/1 luod=0'0 crt=41'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:49 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 49 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=42/43 n=1 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=49 pruub=15.476600647s) [1] r=-1 lpr=49 pi=[42,49)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 95.344879150s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:49 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 49 pg[9.5( v 41'385 (0'0,41'385] local-lis/les=0/0 n=6 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49) [0] r=0 lpr=49 pi=[44,49)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:49 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 49 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=42/43 n=1 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=49 pruub=15.476561546s) [1] r=-1 lpr=49 pi=[42,49)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.344879150s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:49 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 49 pg[9.13( v 41'385 (0'0,41'385] local-lis/les=0/0 n=5 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49) [0] r=0 lpr=49 pi=[44,49)/1 luod=0'0 crt=41'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:49 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 49 pg[9.13( v 41'385 (0'0,41'385] local-lis/les=0/0 n=5 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49) [0] r=0 lpr=49 pi=[44,49)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:49 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 49 pg[9.11( v 41'385 (0'0,41'385] local-lis/les=0/0 n=6 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49) [0] r=0 lpr=49 pi=[44,49)/1 luod=0'0 crt=41'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:49 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 49 pg[9.11( v 41'385 (0'0,41'385] local-lis/les=0/0 n=6 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49) [0] r=0 lpr=49 pi=[44,49)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:49 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 49 pg[9.1d( v 41'385 (0'0,41'385] local-lis/les=47/48 n=5 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49 pruub=15.052186012s) [0] async=[0] r=-1 lpr=49 pi=[44,49)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 89.541954041s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:49 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 49 pg[9.1d( v 41'385 (0'0,41'385] local-lis/les=47/48 n=5 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49 pruub=15.052116394s) [0] r=-1 lpr=49 pi=[44,49)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.541954041s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:49 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 49 pg[9.5( v 41'385 (0'0,41'385] local-lis/les=47/48 n=6 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49 pruub=15.052810669s) [0] async=[0] r=-1 lpr=49 pi=[44,49)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 89.542694092s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:49 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 49 pg[6.a( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=49) [1] r=0 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:49 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 49 pg[9.5( v 41'385 (0'0,41'385] local-lis/les=47/48 n=6 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49 pruub=15.052711487s) [0] r=-1 lpr=49 pi=[44,49)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.542694092s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:49 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 49 pg[9.3( v 41'385 (0'0,41'385] local-lis/les=47/48 n=6 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49 pruub=15.051752090s) [0] async=[0] r=-1 lpr=49 pi=[44,49)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 89.541969299s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:49 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 49 pg[9.3( v 41'385 (0'0,41'385] local-lis/les=47/48 n=6 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49 pruub=15.051711082s) [0] r=-1 lpr=49 pi=[44,49)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.541969299s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:49 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 49 pg[9.1( v 41'385 (0'0,41'385] local-lis/les=47/48 n=6 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49 pruub=15.051481247s) [0] async=[0] r=-1 lpr=49 pi=[44,49)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 89.541862488s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:49 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 49 pg[9.1b( v 41'385 (0'0,41'385] local-lis/les=47/48 n=5 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49 pruub=15.051637650s) [0] async=[0] r=-1 lpr=49 pi=[44,49)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 89.542015076s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:49 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 49 pg[9.1( v 41'385 (0'0,41'385] local-lis/les=47/48 n=6 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49 pruub=15.051445961s) [0] r=-1 lpr=49 pi=[44,49)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.541862488s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:49 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 49 pg[9.1b( v 41'385 (0'0,41'385] local-lis/les=47/48 n=5 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49 pruub=15.051582336s) [0] r=-1 lpr=49 pi=[44,49)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.542015076s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:49 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 49 pg[6.e( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=49) [1] r=0 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:49 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 49 pg[9.b( v 41'385 (0'0,41'385] local-lis/les=47/48 n=6 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49 pruub=15.051161766s) [0] async=[0] r=-1 lpr=49 pi=[44,49)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 89.541816711s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:49 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 49 pg[9.b( v 41'385 (0'0,41'385] local-lis/les=47/48 n=6 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49 pruub=15.051133156s) [0] r=-1 lpr=49 pi=[44,49)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.541816711s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:49 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 49 pg[6.6( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=49) [1] r=0 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:49 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 49 pg[9.7( v 41'385 (0'0,41'385] local-lis/les=47/48 n=6 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49 pruub=15.051280975s) [0] async=[0] r=-1 lpr=49 pi=[44,49)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 89.541542053s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:49 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 49 pg[9.f( v 41'385 (0'0,41'385] local-lis/les=47/48 n=6 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49 pruub=15.050753593s) [0] async=[0] r=-1 lpr=49 pi=[44,49)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 89.541725159s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:49 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 49 pg[9.7( v 41'385 (0'0,41'385] local-lis/les=47/48 n=6 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49 pruub=15.050567627s) [0] r=-1 lpr=49 pi=[44,49)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.541542053s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:49 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 49 pg[9.f( v 41'385 (0'0,41'385] local-lis/les=47/48 n=6 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49 pruub=15.050703049s) [0] r=-1 lpr=49 pi=[44,49)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.541725159s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:49 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 49 pg[9.1f( v 41'385 (0'0,41'385] local-lis/les=47/48 n=5 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49 pruub=15.050986290s) [0] async=[0] r=-1 lpr=49 pi=[44,49)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 89.542060852s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:49 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 49 pg[6.2( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=49) [1] r=0 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:49 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 49 pg[9.11( v 41'385 (0'0,41'385] local-lis/les=47/48 n=6 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49 pruub=15.050126076s) [0] async=[0] r=-1 lpr=49 pi=[44,49)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 89.541519165s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:49 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 49 pg[9.11( v 41'385 (0'0,41'385] local-lis/les=47/48 n=6 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49 pruub=15.050079346s) [0] r=-1 lpr=49 pi=[44,49)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.541519165s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:49 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 49 pg[9.17( v 41'385 (0'0,41'385] local-lis/les=47/48 n=5 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49 pruub=15.050100327s) [0] async=[0] r=-1 lpr=49 pi=[44,49)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 89.541717529s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:49 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 49 pg[9.17( v 41'385 (0'0,41'385] local-lis/les=47/48 n=5 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49 pruub=15.050061226s) [0] r=-1 lpr=49 pi=[44,49)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.541717529s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:49 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 49 pg[9.1f( v 41'385 (0'0,41'385] local-lis/les=47/48 n=5 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49 pruub=15.050804138s) [0] r=-1 lpr=49 pi=[44,49)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.542060852s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:49 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 49 pg[9.13( v 41'385 (0'0,41'385] local-lis/les=47/48 n=5 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49 pruub=15.049989700s) [0] async=[0] r=-1 lpr=49 pi=[44,49)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 89.541313171s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:49 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 49 pg[9.13( v 41'385 (0'0,41'385] local-lis/les=47/48 n=5 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49 pruub=15.049355507s) [0] r=-1 lpr=49 pi=[44,49)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.541313171s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:50 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Oct 14 04:24:50 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 14 04:24:50 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 14 04:24:50 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Oct 14 04:24:50 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Oct 14 04:24:50 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 50 pg[9.15( v 41'385 (0'0,41'385] local-lis/les=0/0 n=5 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 luod=0'0 crt=41'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:50 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 50 pg[9.15( v 41'385 (0'0,41'385] local-lis/les=0/0 n=5 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:50 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 50 pg[9.19( v 41'385 (0'0,41'385] local-lis/les=0/0 n=5 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 luod=0'0 crt=41'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:50 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 50 pg[9.19( v 41'385 (0'0,41'385] local-lis/les=0/0 n=5 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:50 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 50 pg[9.d( v 41'385 (0'0,41'385] local-lis/les=0/0 n=6 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 luod=0'0 crt=41'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:50 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 50 pg[9.d( v 41'385 (0'0,41'385] local-lis/les=0/0 n=6 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:50 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 50 pg[9.9( v 41'385 (0'0,41'385] local-lis/les=0/0 n=6 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 luod=0'0 crt=41'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:50 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 50 pg[9.9( v 41'385 (0'0,41'385] local-lis/les=0/0 n=6 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 50 pg[9.15( v 41'385 (0'0,41'385] local-lis/les=47/48 n=5 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=50 pruub=14.038054466s) [0] async=[0] r=-1 lpr=50 pi=[44,50)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 89.545455933s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 50 pg[9.15( v 41'385 (0'0,41'385] local-lis/les=47/48 n=5 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=50 pruub=14.037999153s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.545455933s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 50 pg[9.9( v 41'385 (0'0,41'385] local-lis/les=47/48 n=6 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=50 pruub=14.036249161s) [0] async=[0] r=-1 lpr=50 pi=[44,50)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 89.544021606s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 50 pg[9.9( v 41'385 (0'0,41'385] local-lis/les=47/48 n=6 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=50 pruub=14.036168098s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.544021606s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 50 pg[9.19( v 41'385 (0'0,41'385] local-lis/les=47/48 n=5 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=50 pruub=14.037175179s) [0] async=[0] r=-1 lpr=50 pi=[44,50)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 89.545425415s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 50 pg[9.19( v 41'385 (0'0,41'385] local-lis/les=47/48 n=5 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=50 pruub=14.037086487s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.545425415s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 50 pg[9.d( v 41'385 (0'0,41'385] local-lis/les=47/48 n=6 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=50 pruub=14.036821365s) [0] async=[0] r=-1 lpr=50 pi=[44,50)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 89.545455933s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 50 pg[9.d( v 41'385 (0'0,41'385] local-lis/les=47/48 n=6 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=50 pruub=14.036761284s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.545455933s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 50 pg[6.e( v 37'39 lc 34'19 (0'0,37'39] local-lis/les=49/50 n=1 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=49) [1] r=0 lpr=49 pi=[42,49)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:50 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 50 pg[9.1b( v 41'385 (0'0,41'385] local-lis/les=49/50 n=5 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49) [0] r=0 lpr=49 pi=[44,49)/1 crt=41'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:50 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 50 pg[9.1f( v 41'385 (0'0,41'385] local-lis/les=49/50 n=5 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49) [0] r=0 lpr=49 pi=[44,49)/1 crt=41'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 50 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=49/50 n=2 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=49) [1] r=0 lpr=49 pi=[42,49)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 50 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=49/50 n=2 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=49) [1] r=0 lpr=49 pi=[42,49)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 50 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=49/50 n=1 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=49) [1] r=0 lpr=49 pi=[42,49)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:50 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 50 pg[9.1( v 41'385 (0'0,41'385] local-lis/les=49/50 n=6 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49) [0] r=0 lpr=49 pi=[44,49)/1 crt=41'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:50 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 50 pg[9.7( v 41'385 (0'0,41'385] local-lis/les=49/50 n=6 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49) [0] r=0 lpr=49 pi=[44,49)/1 crt=41'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:50 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 50 pg[9.f( v 41'385 (0'0,41'385] local-lis/les=49/50 n=6 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49) [0] r=0 lpr=49 pi=[44,49)/1 crt=41'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:50 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 50 pg[9.17( v 41'385 (0'0,41'385] local-lis/les=49/50 n=5 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49) [0] r=0 lpr=49 pi=[44,49)/1 crt=41'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:50 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 50 pg[9.3( v 41'385 (0'0,41'385] local-lis/les=49/50 n=6 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49) [0] r=0 lpr=49 pi=[44,49)/1 crt=41'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:50 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 50 pg[9.1d( v 41'385 (0'0,41'385] local-lis/les=49/50 n=5 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49) [0] r=0 lpr=49 pi=[44,49)/1 crt=41'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:50 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 50 pg[9.b( v 41'385 (0'0,41'385] local-lis/les=49/50 n=6 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49) [0] r=0 lpr=49 pi=[44,49)/1 crt=41'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:50 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 50 pg[9.13( v 41'385 (0'0,41'385] local-lis/les=49/50 n=5 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49) [0] r=0 lpr=49 pi=[44,49)/1 crt=41'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:50 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 50 pg[9.5( v 41'385 (0'0,41'385] local-lis/les=49/50 n=6 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49) [0] r=0 lpr=49 pi=[44,49)/1 crt=41'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:50 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 50 pg[9.11( v 41'385 (0'0,41'385] local-lis/les=49/50 n=6 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=49) [0] r=0 lpr=49 pi=[44,49)/1 crt=41'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v110: 243 pgs: 12 peering, 231 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1019 B/s, 2 keys/s, 21 objects/s recovering
Oct 14 04:24:51 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Oct 14 04:24:51 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Oct 14 04:24:51 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Oct 14 04:24:51 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 51 pg[9.15( v 41'385 (0'0,41'385] local-lis/les=50/51 n=5 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=41'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:51 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 51 pg[9.19( v 41'385 (0'0,41'385] local-lis/les=50/51 n=5 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=41'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:51 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 51 pg[9.9( v 41'385 (0'0,41'385] local-lis/les=50/51 n=6 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=41'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:51 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 51 pg[9.d( v 41'385 (0'0,41'385] local-lis/les=50/51 n=6 ec=44/34 lis/c=47/44 les/c/f=48/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=41'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:51 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 2.e scrub starts
Oct 14 04:24:51 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 2.e scrub ok
Oct 14 04:24:51 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 3.b scrub starts
Oct 14 04:24:51 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 3.b scrub ok
Oct 14 04:24:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:24:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v112: 243 pgs: 12 peering, 231 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 845 B/s, 2 keys/s, 17 objects/s recovering
Oct 14 04:24:52 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Oct 14 04:24:52 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Oct 14 04:24:52 np0005486808 ceph-mgr[74543]: [progress INFO root] Writing back 14 completed events
Oct 14 04:24:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 14 04:24:52 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:52 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 3.d scrub starts
Oct 14 04:24:52 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 3.d scrub ok
Oct 14 04:24:52 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 4.3 deep-scrub starts
Oct 14 04:24:52 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 4.3 deep-scrub ok
Oct 14 04:24:53 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:24:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v113: 243 pgs: 12 peering, 231 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 646 B/s, 1 keys/s, 13 objects/s recovering
Oct 14 04:24:54 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Oct 14 04:24:54 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Oct 14 04:24:55 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Oct 14 04:24:55 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Oct 14 04:24:55 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 4.b scrub starts
Oct 14 04:24:55 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 4.b scrub ok
Oct 14 04:24:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v114: 243 pgs: 243 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 584 B/s, 1 keys/s, 13 objects/s recovering
Oct 14 04:24:56 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0) v1
Oct 14 04:24:56 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 14 04:24:56 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Oct 14 04:24:56 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 14 04:24:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:24:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Oct 14 04:24:57 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 14 04:24:57 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 14 04:24:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Oct 14 04:24:57 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Oct 14 04:24:57 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 14 04:24:57 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 14 04:24:58 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 14 04:24:58 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 14 04:24:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v116: 243 pgs: 243 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 68 B/s, 2 objects/s recovering
Oct 14 04:24:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0) v1
Oct 14 04:24:58 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 14 04:24:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Oct 14 04:24:58 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 14 04:24:58 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 52 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=46/47 n=2 ec=42/20 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=12.147233963s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=37'39 mlcod 37'39 active pruub 96.477470398s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:58 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 52 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=46/47 n=2 ec=42/20 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=12.147119522s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 96.477470398s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:58 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 52 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=46/47 n=1 ec=42/20 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=12.147109985s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=37'39 mlcod 37'39 active pruub 96.477569580s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:58 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 52 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=46/47 n=1 ec=42/20 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=12.146901131s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 96.477569580s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:58 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 52 pg[6.3( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:58 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 52 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=46/47 n=1 ec=42/20 lis/c=46/46 les/c/f=47/48/0 sis=52 pruub=12.146153450s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=37'39 mlcod 37'39 active pruub 96.477806091s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:58 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 52 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=46/47 n=1 ec=42/20 lis/c=46/46 les/c/f=47/48/0 sis=52 pruub=12.146055222s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 96.477806091s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:58 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 52 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=46/47 n=1 ec=42/20 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=12.145869255s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=37'39 mlcod 37'39 active pruub 96.477653503s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:58 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 52 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=46/47 n=1 ec=42/20 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=12.145760536s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 96.477653503s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:58 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 52 pg[6.7( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:58 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 52 pg[6.b( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=46/46 les/c/f=47/48/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:58 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 52 pg[6.f( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:58 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 4.c scrub starts
Oct 14 04:24:58 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 4.c scrub ok
Oct 14 04:24:59 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Oct 14 04:24:59 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 14 04:24:59 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 14 04:24:59 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 14 04:24:59 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 14 04:24:59 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Oct 14 04:24:59 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Oct 14 04:24:59 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 53 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=42/43 n=1 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=13.365284920s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 103.345710754s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:59 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 53 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=42/43 n=1 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=13.365033150s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 103.345710754s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:59 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 53 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=42/43 n=2 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=13.364120483s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 103.345291138s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:24:59 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 53 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=42/43 n=2 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=13.364052773s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 103.345291138s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:24:59 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 53 pg[6.f( v 37'39 lc 34'1 (0'0,37'39] local-lis/les=52/53 n=1 ec=42/20 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:59 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 53 pg[6.c( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:59 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 53 pg[6.4( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:24:59 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 53 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=52/53 n=2 ec=42/20 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=37'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:59 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 53 pg[6.7( v 37'39 lc 34'21 (0'0,37'39] local-lis/les=52/53 n=1 ec=42/20 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:24:59 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 53 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=52/53 n=1 ec=42/20 lis/c=46/46 les/c/f=47/48/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:00 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Oct 14 04:25:00 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Oct 14 04:25:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Oct 14 04:25:00 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 14 04:25:00 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 14 04:25:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Oct 14 04:25:00 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Oct 14 04:25:00 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 54 pg[6.4( v 37'39 lc 34'15 (0'0,37'39] local-lis/les=53/54 n=2 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:00 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 54 pg[6.c( v 37'39 lc 34'17 (0'0,37'39] local-lis/les=53/54 n=1 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v119: 243 pgs: 2 peering, 241 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 197 B/s, 1 keys/s, 5 objects/s recovering
Oct 14 04:25:01 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Oct 14 04:25:01 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Oct 14 04:25:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:25:02 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Oct 14 04:25:02 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Oct 14 04:25:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v120: 243 pgs: 2 peering, 241 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 106 B/s, 1 keys/s, 1 objects/s recovering
Oct 14 04:25:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:25:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:25:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:25:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:25:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:25:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:25:02 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Oct 14 04:25:02 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Oct 14 04:25:03 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Oct 14 04:25:03 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Oct 14 04:25:04 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Oct 14 04:25:04 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Oct 14 04:25:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v121: 243 pgs: 2 peering, 241 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 85 B/s, 1 keys/s, 0 objects/s recovering
Oct 14 04:25:06 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Oct 14 04:25:06 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Oct 14 04:25:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v122: 243 pgs: 243 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 275 B/s, 1 keys/s, 1 objects/s recovering
Oct 14 04:25:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0) v1
Oct 14 04:25:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 14 04:25:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Oct 14 04:25:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 14 04:25:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:25:07 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Oct 14 04:25:07 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Oct 14 04:25:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Oct 14 04:25:07 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 14 04:25:07 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 14 04:25:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Oct 14 04:25:07 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Oct 14 04:25:07 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 14 04:25:07 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 14 04:25:08 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 14 04:25:08 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 14 04:25:08 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 55 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=46/47 n=1 ec=42/20 lis/c=46/46 les/c/f=47/47/0 sis=55 pruub=10.548498154s) [0] r=-1 lpr=55 pi=[46,55)/1 crt=37'39 mlcod 37'39 active pruub 104.477561951s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:08 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 55 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=46/47 n=1 ec=42/20 lis/c=46/46 les/c/f=47/47/0 sis=55 pruub=10.548332214s) [0] r=-1 lpr=55 pi=[46,55)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 104.477561951s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:08 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 55 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=46/47 n=2 ec=42/20 lis/c=46/46 les/c/f=47/47/0 sis=55 pruub=10.548284531s) [0] r=-1 lpr=55 pi=[46,55)/1 crt=37'39 mlcod 37'39 active pruub 104.477890015s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:08 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 55 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=46/47 n=2 ec=42/20 lis/c=46/46 les/c/f=47/47/0 sis=55 pruub=10.548085213s) [0] r=-1 lpr=55 pi=[46,55)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 104.477890015s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:08 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 55 pg[6.5( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=46/46 les/c/f=47/47/0 sis=55) [0] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:08 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 55 pg[6.d( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=46/46 les/c/f=47/47/0 sis=55) [0] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:08 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Oct 14 04:25:08 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Oct 14 04:25:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v124: 243 pgs: 243 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 185 B/s, 0 objects/s recovering
Oct 14 04:25:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0) v1
Oct 14 04:25:08 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 14 04:25:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Oct 14 04:25:08 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 14 04:25:09 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Oct 14 04:25:09 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 14 04:25:09 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 14 04:25:09 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 14 04:25:09 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 14 04:25:09 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Oct 14 04:25:09 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Oct 14 04:25:09 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 56 pg[6.d( v 37'39 lc 34'13 (0'0,37'39] local-lis/les=55/56 n=1 ec=42/20 lis/c=46/46 les/c/f=47/47/0 sis=55) [0] r=0 lpr=55 pi=[46,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:09 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 56 pg[9.16( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=13.281481743s) [2] r=-1 lpr=56 pi=[44,56)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 108.000122070s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:09 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 56 pg[9.16( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=13.281428337s) [2] r=-1 lpr=56 pi=[44,56)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.000122070s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:09 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 56 pg[9.e( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=13.281635284s) [2] r=-1 lpr=56 pi=[44,56)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 108.001121521s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:09 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 56 pg[9.e( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=13.281602859s) [2] r=-1 lpr=56 pi=[44,56)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.001121521s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:09 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 56 pg[9.6( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=13.281692505s) [2] r=-1 lpr=56 pi=[44,56)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 108.001365662s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:09 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 56 pg[9.6( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=13.281666756s) [2] r=-1 lpr=56 pi=[44,56)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.001365662s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:09 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 56 pg[9.16( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=56) [2] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:09 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 56 pg[9.1e( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=13.286607742s) [2] r=-1 lpr=56 pi=[44,56)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 108.006446838s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:09 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 56 pg[9.1e( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=13.286586761s) [2] r=-1 lpr=56 pi=[44,56)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.006446838s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:09 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 56 pg[9.e( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=56) [2] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:09 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 56 pg[6.5( v 37'39 lc 34'11 (0'0,37'39] local-lis/les=55/56 n=2 ec=42/20 lis/c=46/46 les/c/f=47/47/0 sis=55) [0] r=0 lpr=55 pi=[46,55)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:09 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 56 pg[9.6( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=56) [2] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:09 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 56 pg[9.1e( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=56) [2] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:09 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 5.6 deep-scrub starts
Oct 14 04:25:09 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 5.6 deep-scrub ok
Oct 14 04:25:09 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Oct 14 04:25:09 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Oct 14 04:25:10 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Oct 14 04:25:10 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 14 04:25:10 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 14 04:25:10 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Oct 14 04:25:10 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Oct 14 04:25:10 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 57 pg[9.e( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=57) [2]/[1] r=-1 lpr=57 pi=[44,57)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:10 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 57 pg[9.e( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=57) [2]/[1] r=-1 lpr=57 pi=[44,57)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:10 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 57 pg[9.1e( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=57) [2]/[1] r=-1 lpr=57 pi=[44,57)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:10 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 57 pg[9.1e( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=57) [2]/[1] r=-1 lpr=57 pi=[44,57)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:10 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 57 pg[9.16( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=57) [2]/[1] r=-1 lpr=57 pi=[44,57)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:10 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 57 pg[9.16( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=57) [2]/[1] r=-1 lpr=57 pi=[44,57)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:10 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 57 pg[9.6( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=57) [2]/[1] r=-1 lpr=57 pi=[44,57)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:10 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 57 pg[9.6( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=57) [2]/[1] r=-1 lpr=57 pi=[44,57)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:10 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 57 pg[9.e( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=57) [2]/[1] r=0 lpr=57 pi=[44,57)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:10 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 57 pg[9.6( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=57) [2]/[1] r=0 lpr=57 pi=[44,57)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:10 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 57 pg[9.e( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=57) [2]/[1] r=0 lpr=57 pi=[44,57)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:10 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 57 pg[9.6( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=57) [2]/[1] r=0 lpr=57 pi=[44,57)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:10 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 57 pg[9.16( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=57) [2]/[1] r=0 lpr=57 pi=[44,57)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:10 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 57 pg[9.1e( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=57) [2]/[1] r=0 lpr=57 pi=[44,57)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:10 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 57 pg[9.16( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=57) [2]/[1] r=0 lpr=57 pi=[44,57)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:10 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 57 pg[9.1e( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=57) [2]/[1] r=0 lpr=57 pi=[44,57)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v127: 243 pgs: 243 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 287 B/s, 1 objects/s recovering
Oct 14 04:25:10 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0) v1
Oct 14 04:25:10 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 14 04:25:10 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Oct 14 04:25:10 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 14 04:25:10 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Oct 14 04:25:10 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Oct 14 04:25:11 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Oct 14 04:25:11 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Oct 14 04:25:11 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Oct 14 04:25:11 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 14 04:25:11 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 14 04:25:11 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Oct 14 04:25:11 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Oct 14 04:25:11 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 58 pg[9.1f( v 41'385 (0'0,41'385] local-lis/les=49/50 n=5 ec=44/34 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=10.784116745s) [2] r=-1 lpr=58 pi=[49,58)/1 crt=41'385 mlcod 0'0 active pruub 112.894050598s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:11 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 58 pg[9.1f( v 41'385 (0'0,41'385] local-lis/les=49/50 n=5 ec=44/34 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=10.784041405s) [2] r=-1 lpr=58 pi=[49,58)/1 crt=41'385 mlcod 0'0 unknown NOTIFY pruub 112.894050598s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:11 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 58 pg[9.f( v 41'385 (0'0,41'385] local-lis/les=49/50 n=6 ec=44/34 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=10.790656090s) [2] r=-1 lpr=58 pi=[49,58)/1 crt=41'385 mlcod 0'0 active pruub 112.900863647s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:11 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 58 pg[9.17( v 41'385 (0'0,41'385] local-lis/les=49/50 n=5 ec=44/34 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=10.790446281s) [2] r=-1 lpr=58 pi=[49,58)/1 crt=41'385 mlcod 0'0 active pruub 112.900901794s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:11 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 58 pg[9.7( v 41'385 (0'0,41'385] local-lis/les=49/50 n=6 ec=44/34 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=10.790403366s) [2] r=-1 lpr=58 pi=[49,58)/1 crt=41'385 mlcod 0'0 active pruub 112.900932312s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:11 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 58 pg[9.f( v 41'385 (0'0,41'385] local-lis/les=49/50 n=6 ec=44/34 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=10.790287971s) [2] r=-1 lpr=58 pi=[49,58)/1 crt=41'385 mlcod 0'0 unknown NOTIFY pruub 112.900863647s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:11 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 58 pg[9.7( v 41'385 (0'0,41'385] local-lis/les=49/50 n=6 ec=44/34 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=10.790326118s) [2] r=-1 lpr=58 pi=[49,58)/1 crt=41'385 mlcod 0'0 unknown NOTIFY pruub 112.900932312s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:11 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 58 pg[9.17( v 41'385 (0'0,41'385] local-lis/les=49/50 n=5 ec=44/34 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=10.790133476s) [2] r=-1 lpr=58 pi=[49,58)/1 crt=41'385 mlcod 0'0 unknown NOTIFY pruub 112.900901794s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:11 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 14 04:25:11 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 14 04:25:11 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 58 pg[9.17( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:11 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 58 pg[9.f( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:11 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 58 pg[9.7( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:11 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 58 pg[9.1f( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:11 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 58 pg[9.16( v 41'385 (0'0,41'385] local-lis/les=57/58 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=57) [2]/[1] async=[2] r=0 lpr=57 pi=[44,57)/1 crt=41'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:11 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 58 pg[9.1e( v 41'385 (0'0,41'385] local-lis/les=57/58 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=57) [2]/[1] async=[2] r=0 lpr=57 pi=[44,57)/1 crt=41'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:11 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 58 pg[9.e( v 41'385 (0'0,41'385] local-lis/les=57/58 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=57) [2]/[1] async=[2] r=0 lpr=57 pi=[44,57)/1 crt=41'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:11 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 58 pg[9.6( v 41'385 (0'0,41'385] local-lis/les=57/58 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=57) [2]/[1] async=[2] r=0 lpr=57 pi=[44,57)/1 crt=41'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:11 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 5.8 deep-scrub starts
Oct 14 04:25:11 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 5.8 deep-scrub ok
Oct 14 04:25:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:25:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Oct 14 04:25:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Oct 14 04:25:12 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Oct 14 04:25:12 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 59 pg[9.e( v 41'385 (0'0,41'385] local-lis/les=0/0 n=6 ec=44/34 lis/c=57/44 les/c/f=58/45/0 sis=59) [2] r=0 lpr=59 pi=[44,59)/1 luod=0'0 crt=41'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:12 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 59 pg[9.e( v 41'385 (0'0,41'385] local-lis/les=0/0 n=6 ec=44/34 lis/c=57/44 les/c/f=58/45/0 sis=59) [2] r=0 lpr=59 pi=[44,59)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:12 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 59 pg[9.1e( v 41'385 (0'0,41'385] local-lis/les=0/0 n=5 ec=44/34 lis/c=57/44 les/c/f=58/45/0 sis=59) [2] r=0 lpr=59 pi=[44,59)/1 luod=0'0 crt=41'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:12 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 59 pg[9.1e( v 41'385 (0'0,41'385] local-lis/les=0/0 n=5 ec=44/34 lis/c=57/44 les/c/f=58/45/0 sis=59) [2] r=0 lpr=59 pi=[44,59)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:12 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 59 pg[9.6( v 41'385 (0'0,41'385] local-lis/les=0/0 n=6 ec=44/34 lis/c=57/44 les/c/f=58/45/0 sis=59) [2] r=0 lpr=59 pi=[44,59)/1 luod=0'0 crt=41'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:12 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 59 pg[9.6( v 41'385 (0'0,41'385] local-lis/les=0/0 n=6 ec=44/34 lis/c=57/44 les/c/f=58/45/0 sis=59) [2] r=0 lpr=59 pi=[44,59)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:12 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 59 pg[9.7( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=49/49 les/c/f=50/50/0 sis=59) [2]/[0] r=-1 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:12 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 59 pg[9.1f( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=49/49 les/c/f=50/50/0 sis=59) [2]/[0] r=-1 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:12 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 59 pg[9.7( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=49/49 les/c/f=50/50/0 sis=59) [2]/[0] r=-1 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:12 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 59 pg[9.1f( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=49/49 les/c/f=50/50/0 sis=59) [2]/[0] r=-1 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:12 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 59 pg[9.f( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=49/49 les/c/f=50/50/0 sis=59) [2]/[0] r=-1 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:12 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 59 pg[9.f( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=49/49 les/c/f=50/50/0 sis=59) [2]/[0] r=-1 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:12 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 59 pg[9.17( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=49/49 les/c/f=50/50/0 sis=59) [2]/[0] r=-1 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:12 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 59 pg[9.16( v 41'385 (0'0,41'385] local-lis/les=0/0 n=5 ec=44/34 lis/c=57/44 les/c/f=58/45/0 sis=59) [2] r=0 lpr=59 pi=[44,59)/1 luod=0'0 crt=41'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:12 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 59 pg[9.16( v 41'385 (0'0,41'385] local-lis/les=0/0 n=5 ec=44/34 lis/c=57/44 les/c/f=58/45/0 sis=59) [2] r=0 lpr=59 pi=[44,59)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:12 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 59 pg[9.17( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=49/49 les/c/f=50/50/0 sis=59) [2]/[0] r=-1 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:12 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 59 pg[9.16( v 41'385 (0'0,41'385] local-lis/les=57/58 n=5 ec=44/34 lis/c=57/44 les/c/f=58/45/0 sis=59 pruub=15.268275261s) [2] async=[2] r=-1 lpr=59 pi=[44,59)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 112.741821289s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:12 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 59 pg[9.1e( v 41'385 (0'0,41'385] local-lis/les=57/58 n=5 ec=44/34 lis/c=57/44 les/c/f=58/45/0 sis=59 pruub=15.273636818s) [2] async=[2] r=-1 lpr=59 pi=[44,59)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 112.747108459s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:12 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 59 pg[9.6( v 41'385 (0'0,41'385] local-lis/les=57/58 n=6 ec=44/34 lis/c=57/44 les/c/f=58/45/0 sis=59 pruub=15.273610115s) [2] async=[2] r=-1 lpr=59 pi=[44,59)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 112.747123718s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:12 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 59 pg[9.1e( v 41'385 (0'0,41'385] local-lis/les=57/58 n=5 ec=44/34 lis/c=57/44 les/c/f=58/45/0 sis=59 pruub=15.273303032s) [2] r=-1 lpr=59 pi=[44,59)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.747108459s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:12 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 59 pg[9.6( v 41'385 (0'0,41'385] local-lis/les=57/58 n=6 ec=44/34 lis/c=57/44 les/c/f=58/45/0 sis=59 pruub=15.273230553s) [2] r=-1 lpr=59 pi=[44,59)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.747123718s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:12 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 59 pg[9.16( v 41'385 (0'0,41'385] local-lis/les=57/58 n=5 ec=44/34 lis/c=57/44 les/c/f=58/45/0 sis=59 pruub=15.267799377s) [2] r=-1 lpr=59 pi=[44,59)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.741821289s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:12 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 59 pg[9.e( v 41'385 (0'0,41'385] local-lis/les=57/58 n=6 ec=44/34 lis/c=57/44 les/c/f=58/45/0 sis=59 pruub=15.273447990s) [2] async=[2] r=-1 lpr=59 pi=[44,59)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 112.747146606s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:12 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 59 pg[9.e( v 41'385 (0'0,41'385] local-lis/les=57/58 n=6 ec=44/34 lis/c=57/44 les/c/f=58/45/0 sis=59 pruub=15.272259712s) [2] r=-1 lpr=59 pi=[44,59)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.747146606s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:12 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 59 pg[9.f( v 41'385 (0'0,41'385] local-lis/les=49/50 n=6 ec=44/34 lis/c=49/49 les/c/f=50/50/0 sis=59) [2]/[0] r=0 lpr=59 pi=[49,59)/1 crt=41'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:12 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 59 pg[9.1f( v 41'385 (0'0,41'385] local-lis/les=49/50 n=5 ec=44/34 lis/c=49/49 les/c/f=50/50/0 sis=59) [2]/[0] r=0 lpr=59 pi=[49,59)/1 crt=41'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:12 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 59 pg[9.1f( v 41'385 (0'0,41'385] local-lis/les=49/50 n=5 ec=44/34 lis/c=49/49 les/c/f=50/50/0 sis=59) [2]/[0] r=0 lpr=59 pi=[49,59)/1 crt=41'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:12 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 59 pg[9.f( v 41'385 (0'0,41'385] local-lis/les=49/50 n=6 ec=44/34 lis/c=49/49 les/c/f=50/50/0 sis=59) [2]/[0] r=0 lpr=59 pi=[49,59)/1 crt=41'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:12 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 59 pg[9.7( v 41'385 (0'0,41'385] local-lis/les=49/50 n=6 ec=44/34 lis/c=49/49 les/c/f=50/50/0 sis=59) [2]/[0] r=0 lpr=59 pi=[49,59)/1 crt=41'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:12 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 59 pg[9.7( v 41'385 (0'0,41'385] local-lis/les=49/50 n=6 ec=44/34 lis/c=49/49 les/c/f=50/50/0 sis=59) [2]/[0] r=0 lpr=59 pi=[49,59)/1 crt=41'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:12 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 59 pg[9.17( v 41'385 (0'0,41'385] local-lis/les=49/50 n=5 ec=44/34 lis/c=49/49 les/c/f=50/50/0 sis=59) [2]/[0] r=0 lpr=59 pi=[49,59)/1 crt=41'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:12 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 59 pg[9.17( v 41'385 (0'0,41'385] local-lis/les=49/50 n=5 ec=44/34 lis/c=49/49 les/c/f=50/50/0 sis=59) [2]/[0] r=0 lpr=59 pi=[49,59)/1 crt=41'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:12 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 14 04:25:12 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 14 04:25:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v130: 243 pgs: 243 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 39 B/s, 0 objects/s recovering
Oct 14 04:25:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0) v1
Oct 14 04:25:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 14 04:25:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Oct 14 04:25:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 14 04:25:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Oct 14 04:25:13 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 14 04:25:13 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 14 04:25:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Oct 14 04:25:13 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Oct 14 04:25:13 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 60 pg[9.8( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=60 pruub=9.525979996s) [2] r=-1 lpr=60 pi=[44,60)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 108.001129150s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:13 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 60 pg[9.8( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=60 pruub=9.525903702s) [2] r=-1 lpr=60 pi=[44,60)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.001129150s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:13 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 60 pg[9.18( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=60 pruub=9.531363487s) [2] r=-1 lpr=60 pi=[44,60)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 108.006790161s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:13 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 60 pg[9.18( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=60 pruub=9.531291962s) [2] r=-1 lpr=60 pi=[44,60)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.006790161s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:13 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 60 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=42/43 n=1 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=60 pruub=15.487416267s) [2] r=-1 lpr=60 pi=[42,60)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 119.346000671s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:13 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 60 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=42/43 n=1 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=60 pruub=15.487369537s) [2] r=-1 lpr=60 pi=[42,60)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 119.346000671s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:13 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 60 pg[9.8( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=60) [2] r=0 lpr=60 pi=[44,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:13 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 60 pg[9.18( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=60) [2] r=0 lpr=60 pi=[44,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:13 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 60 pg[6.8( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=60) [2] r=0 lpr=60 pi=[42,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:13 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 60 pg[9.1e( v 41'385 (0'0,41'385] local-lis/les=59/60 n=5 ec=44/34 lis/c=57/44 les/c/f=58/45/0 sis=59) [2] r=0 lpr=59 pi=[44,59)/1 crt=41'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:13 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 60 pg[9.e( v 41'385 (0'0,41'385] local-lis/les=59/60 n=6 ec=44/34 lis/c=57/44 les/c/f=58/45/0 sis=59) [2] r=0 lpr=59 pi=[44,59)/1 crt=41'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:13 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 60 pg[9.16( v 41'385 (0'0,41'385] local-lis/les=59/60 n=5 ec=44/34 lis/c=57/44 les/c/f=58/45/0 sis=59) [2] r=0 lpr=59 pi=[44,59)/1 crt=41'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:13 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 60 pg[9.6( v 41'385 (0'0,41'385] local-lis/les=59/60 n=6 ec=44/34 lis/c=57/44 les/c/f=58/45/0 sis=59) [2] r=0 lpr=59 pi=[44,59)/1 crt=41'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:13 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 60 pg[9.1f( v 41'385 (0'0,41'385] local-lis/les=59/60 n=5 ec=44/34 lis/c=49/49 les/c/f=50/50/0 sis=59) [2]/[0] async=[2] r=0 lpr=59 pi=[49,59)/1 crt=41'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:13 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 60 pg[9.f( v 41'385 (0'0,41'385] local-lis/les=59/60 n=6 ec=44/34 lis/c=49/49 les/c/f=50/50/0 sis=59) [2]/[0] async=[2] r=0 lpr=59 pi=[49,59)/1 crt=41'385 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:13 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 60 pg[9.7( v 41'385 (0'0,41'385] local-lis/les=59/60 n=6 ec=44/34 lis/c=49/49 les/c/f=50/50/0 sis=59) [2]/[0] async=[2] r=0 lpr=59 pi=[49,59)/1 crt=41'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:13 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 60 pg[9.17( v 41'385 (0'0,41'385] local-lis/les=59/60 n=5 ec=44/34 lis/c=49/49 les/c/f=50/50/0 sis=59) [2]/[0] async=[2] r=0 lpr=59 pi=[49,59)/1 crt=41'385 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:13 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 14 04:25:13 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 14 04:25:13 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 14 04:25:13 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 14 04:25:13 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Oct 14 04:25:13 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Oct 14 04:25:14 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Oct 14 04:25:14 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Oct 14 04:25:14 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Oct 14 04:25:14 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 61 pg[9.8( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=61) [2]/[1] r=0 lpr=61 pi=[44,61)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:14 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 61 pg[9.8( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=61) [2]/[1] r=0 lpr=61 pi=[44,61)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:14 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 61 pg[9.18( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=61) [2]/[1] r=0 lpr=61 pi=[44,61)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:14 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 61 pg[9.18( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=61) [2]/[1] r=0 lpr=61 pi=[44,61)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:14 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 61 pg[9.1f( v 41'385 (0'0,41'385] local-lis/les=0/0 n=5 ec=44/34 lis/c=59/49 les/c/f=60/50/0 sis=61) [2] r=0 lpr=61 pi=[49,61)/1 luod=0'0 crt=41'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:14 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 61 pg[9.8( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=61) [2]/[1] r=-1 lpr=61 pi=[44,61)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:14 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 61 pg[9.1f( v 41'385 (0'0,41'385] local-lis/les=0/0 n=5 ec=44/34 lis/c=59/49 les/c/f=60/50/0 sis=61) [2] r=0 lpr=61 pi=[49,61)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:14 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 61 pg[9.8( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=61) [2]/[1] r=-1 lpr=61 pi=[44,61)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:14 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 61 pg[9.7( v 41'385 (0'0,41'385] local-lis/les=0/0 n=6 ec=44/34 lis/c=59/49 les/c/f=60/50/0 sis=61) [2] r=0 lpr=61 pi=[49,61)/1 luod=0'0 crt=41'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:14 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 61 pg[9.7( v 41'385 (0'0,41'385] local-lis/les=0/0 n=6 ec=44/34 lis/c=59/49 les/c/f=60/50/0 sis=61) [2] r=0 lpr=61 pi=[49,61)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:14 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 61 pg[9.18( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=61) [2]/[1] r=-1 lpr=61 pi=[44,61)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:14 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 61 pg[9.18( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=61) [2]/[1] r=-1 lpr=61 pi=[44,61)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:14 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 61 pg[9.17( v 41'385 (0'0,41'385] local-lis/les=0/0 n=5 ec=44/34 lis/c=59/49 les/c/f=60/50/0 sis=61) [2] r=0 lpr=61 pi=[49,61)/1 luod=0'0 crt=41'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:14 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 61 pg[9.f( v 41'385 (0'0,41'385] local-lis/les=0/0 n=6 ec=44/34 lis/c=59/49 les/c/f=60/50/0 sis=61) [2] r=0 lpr=61 pi=[49,61)/1 luod=0'0 crt=41'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:14 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 61 pg[9.17( v 41'385 (0'0,41'385] local-lis/les=0/0 n=5 ec=44/34 lis/c=59/49 les/c/f=60/50/0 sis=61) [2] r=0 lpr=61 pi=[49,61)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:14 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 61 pg[9.f( v 41'385 (0'0,41'385] local-lis/les=0/0 n=6 ec=44/34 lis/c=59/49 les/c/f=60/50/0 sis=61) [2] r=0 lpr=61 pi=[49,61)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:14 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 61 pg[9.1f( v 41'385 (0'0,41'385] local-lis/les=59/60 n=5 ec=44/34 lis/c=59/49 les/c/f=60/50/0 sis=61 pruub=15.008816719s) [2] async=[2] r=-1 lpr=61 pi=[49,61)/1 crt=41'385 mlcod 41'385 active pruub 119.872665405s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:14 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 61 pg[9.7( v 41'385 (0'0,41'385] local-lis/les=59/60 n=6 ec=44/34 lis/c=59/49 les/c/f=60/50/0 sis=61 pruub=15.009172440s) [2] async=[2] r=-1 lpr=61 pi=[49,61)/1 crt=41'385 mlcod 41'385 active pruub 119.873008728s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:14 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 61 pg[9.1f( v 41'385 (0'0,41'385] local-lis/les=59/60 n=5 ec=44/34 lis/c=59/49 les/c/f=60/50/0 sis=61 pruub=15.008746147s) [2] r=-1 lpr=61 pi=[49,61)/1 crt=41'385 mlcod 0'0 unknown NOTIFY pruub 119.872665405s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:14 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 61 pg[9.17( v 41'385 (0'0,41'385] local-lis/les=59/60 n=5 ec=44/34 lis/c=59/49 les/c/f=60/50/0 sis=61 pruub=15.008965492s) [2] async=[2] r=-1 lpr=61 pi=[49,61)/1 crt=41'385 mlcod 41'385 active pruub 119.873092651s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:14 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 61 pg[9.7( v 41'385 (0'0,41'385] local-lis/les=59/60 n=6 ec=44/34 lis/c=59/49 les/c/f=60/50/0 sis=61 pruub=15.008952141s) [2] r=-1 lpr=61 pi=[49,61)/1 crt=41'385 mlcod 0'0 unknown NOTIFY pruub 119.873008728s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:14 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 61 pg[9.17( v 41'385 (0'0,41'385] local-lis/les=59/60 n=5 ec=44/34 lis/c=59/49 les/c/f=60/50/0 sis=61 pruub=15.008898735s) [2] r=-1 lpr=61 pi=[49,61)/1 crt=41'385 mlcod 0'0 unknown NOTIFY pruub 119.873092651s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:14 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 61 pg[9.f( v 41'385 (0'0,41'385] local-lis/les=59/60 n=6 ec=44/34 lis/c=59/49 les/c/f=60/50/0 sis=61 pruub=15.008562088s) [2] async=[2] r=-1 lpr=61 pi=[49,61)/1 crt=41'385 mlcod 41'385 active pruub 119.872863770s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:14 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 61 pg[9.f( v 41'385 (0'0,41'385] local-lis/les=59/60 n=6 ec=44/34 lis/c=59/49 les/c/f=60/50/0 sis=61 pruub=15.008427620s) [2] r=-1 lpr=61 pi=[49,61)/1 crt=41'385 mlcod 0'0 unknown NOTIFY pruub 119.872863770s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:14 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 61 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=60/61 n=1 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=60) [2] r=0 lpr=60 pi=[42,60)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:14 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 5.a deep-scrub starts
Oct 14 04:25:14 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 5.a deep-scrub ok
Oct 14 04:25:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v133: 243 pgs: 243 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:25:14 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0) v1
Oct 14 04:25:14 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 14 04:25:14 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Oct 14 04:25:14 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 14 04:25:15 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Oct 14 04:25:15 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 14 04:25:15 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 14 04:25:15 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Oct 14 04:25:15 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 14 04:25:15 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 14 04:25:15 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Oct 14 04:25:15 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 62 pg[9.1f( v 41'385 (0'0,41'385] local-lis/les=61/62 n=5 ec=44/34 lis/c=59/49 les/c/f=60/50/0 sis=61) [2] r=0 lpr=61 pi=[49,61)/1 crt=41'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:15 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 62 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=46/47 n=1 ec=42/20 lis/c=46/46 les/c/f=47/47/0 sis=62 pruub=11.977810860s) [0] r=-1 lpr=62 pi=[46,62)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 112.478439331s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:15 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 62 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=46/47 n=1 ec=42/20 lis/c=46/46 les/c/f=47/47/0 sis=62 pruub=11.977731705s) [0] r=-1 lpr=62 pi=[46,62)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 112.478439331s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:15 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 62 pg[9.f( v 41'385 (0'0,41'385] local-lis/les=61/62 n=6 ec=44/34 lis/c=59/49 les/c/f=60/50/0 sis=61) [2] r=0 lpr=61 pi=[49,61)/1 crt=41'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:15 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 62 pg[9.17( v 41'385 (0'0,41'385] local-lis/les=61/62 n=5 ec=44/34 lis/c=59/49 les/c/f=60/50/0 sis=61) [2] r=0 lpr=61 pi=[49,61)/1 crt=41'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:15 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 62 pg[9.7( v 41'385 (0'0,41'385] local-lis/les=61/62 n=6 ec=44/34 lis/c=59/49 les/c/f=60/50/0 sis=61) [2] r=0 lpr=61 pi=[49,61)/1 crt=41'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:15 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 62 pg[6.9( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=46/46 les/c/f=47/47/0 sis=62) [0] r=0 lpr=62 pi=[46,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:15 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 62 pg[9.18( v 41'385 (0'0,41'385] local-lis/les=61/62 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=61) [2]/[1] async=[2] r=0 lpr=61 pi=[44,61)/1 crt=41'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:15 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 62 pg[9.8( v 41'385 (0'0,41'385] local-lis/les=61/62 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=61) [2]/[1] async=[2] r=0 lpr=61 pi=[44,61)/1 crt=41'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:15 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 5.b deep-scrub starts
Oct 14 04:25:15 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 5.b deep-scrub ok
Oct 14 04:25:16 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Oct 14 04:25:16 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Oct 14 04:25:16 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Oct 14 04:25:16 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 14 04:25:16 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 14 04:25:16 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Oct 14 04:25:16 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Oct 14 04:25:16 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 63 pg[9.18( v 41'385 (0'0,41'385] local-lis/les=0/0 n=5 ec=44/34 lis/c=61/44 les/c/f=62/45/0 sis=63) [2] r=0 lpr=63 pi=[44,63)/1 luod=0'0 crt=41'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:16 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 63 pg[9.18( v 41'385 (0'0,41'385] local-lis/les=0/0 n=5 ec=44/34 lis/c=61/44 les/c/f=62/45/0 sis=63) [2] r=0 lpr=63 pi=[44,63)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:16 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 63 pg[9.8( v 41'385 (0'0,41'385] local-lis/les=0/0 n=6 ec=44/34 lis/c=61/44 les/c/f=62/45/0 sis=63) [2] r=0 lpr=63 pi=[44,63)/1 luod=0'0 crt=41'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:16 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 63 pg[9.8( v 41'385 (0'0,41'385] local-lis/les=0/0 n=6 ec=44/34 lis/c=61/44 les/c/f=62/45/0 sis=63) [2] r=0 lpr=63 pi=[44,63)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:16 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 63 pg[9.18( v 41'385 (0'0,41'385] local-lis/les=61/62 n=5 ec=44/34 lis/c=61/44 les/c/f=62/45/0 sis=63 pruub=15.006092072s) [2] async=[2] r=-1 lpr=63 pi=[44,63)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 116.514747620s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:16 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 63 pg[9.18( v 41'385 (0'0,41'385] local-lis/les=61/62 n=5 ec=44/34 lis/c=61/44 les/c/f=62/45/0 sis=63 pruub=15.005990982s) [2] r=-1 lpr=63 pi=[44,63)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 116.514747620s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:16 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 63 pg[9.8( v 41'385 (0'0,41'385] local-lis/les=61/62 n=6 ec=44/34 lis/c=61/44 les/c/f=62/45/0 sis=63 pruub=15.008161545s) [2] async=[2] r=-1 lpr=63 pi=[44,63)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 116.517196655s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:16 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 63 pg[9.8( v 41'385 (0'0,41'385] local-lis/les=61/62 n=6 ec=44/34 lis/c=61/44 les/c/f=62/45/0 sis=63 pruub=15.007995605s) [2] r=-1 lpr=63 pi=[44,63)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 116.517196655s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:16 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 63 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=62/63 n=1 ec=42/20 lis/c=46/46 les/c/f=47/47/0 sis=62) [0] r=0 lpr=62 pi=[46,62)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v136: 243 pgs: 1 active+remapped, 1 active+recovering+remapped, 241 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2/213 objects misplaced (0.939%); 247 B/s, 12 objects/s recovering
Oct 14 04:25:16 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0) v1
Oct 14 04:25:16 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 14 04:25:16 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Oct 14 04:25:16 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 14 04:25:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:25:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Oct 14 04:25:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 14 04:25:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 14 04:25:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Oct 14 04:25:17 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 14 04:25:17 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 14 04:25:17 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Oct 14 04:25:17 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 64 pg[9.18( v 41'385 (0'0,41'385] local-lis/les=63/64 n=5 ec=44/34 lis/c=61/44 les/c/f=62/45/0 sis=63) [2] r=0 lpr=63 pi=[44,63)/1 crt=41'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:17 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 64 pg[9.8( v 41'385 (0'0,41'385] local-lis/les=63/64 n=6 ec=44/34 lis/c=61/44 les/c/f=62/45/0 sis=63) [2] r=0 lpr=63 pi=[44,63)/1 crt=41'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:17 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 64 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=49/50 n=1 ec=42/20 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.366353989s) [0] r=-1 lpr=64 pi=[49,64)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 115.519599915s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:17 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 64 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=49/50 n=1 ec=42/20 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.366197586s) [0] r=-1 lpr=64 pi=[49,64)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 115.519599915s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:17 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 64 pg[6.a( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=49/49 les/c/f=50/50/0 sis=64) [0] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Oct 14 04:25:18 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 14 04:25:18 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 14 04:25:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Oct 14 04:25:18 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Oct 14 04:25:18 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 65 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=64/65 n=1 ec=42/20 lis/c=49/49 les/c/f=50/50/0 sis=64) [0] r=0 lpr=64 pi=[49,64)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:18 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 5.d scrub starts
Oct 14 04:25:18 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 5.d scrub ok
Oct 14 04:25:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v139: 243 pgs: 1 active+remapped, 1 active+recovering+remapped, 241 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2/213 objects misplaced (0.939%); 247 B/s, 12 objects/s recovering
Oct 14 04:25:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0) v1
Oct 14 04:25:18 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 14 04:25:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Oct 14 04:25:18 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 14 04:25:19 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Oct 14 04:25:19 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Oct 14 04:25:19 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Oct 14 04:25:19 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 14 04:25:19 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 14 04:25:19 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Oct 14 04:25:19 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Oct 14 04:25:19 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 14 04:25:19 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 14 04:25:19 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 66 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=52/53 n=1 ec=42/20 lis/c=52/52 les/c/f=53/53/0 sis=66 pruub=11.689624786s) [1] r=-1 lpr=66 pi=[52,66)/1 crt=37'39 mlcod 37'39 active pruub 121.992355347s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:19 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 66 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=52/53 n=1 ec=42/20 lis/c=52/52 les/c/f=53/53/0 sis=66 pruub=11.689531326s) [1] r=-1 lpr=66 pi=[52,66)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 121.992355347s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:19 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 66 pg[6.b( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=52/52 les/c/f=53/53/0 sis=66) [1] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:19 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 5.e scrub starts
Oct 14 04:25:19 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 5.e scrub ok
Oct 14 04:25:20 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Oct 14 04:25:20 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Oct 14 04:25:20 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Oct 14 04:25:20 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 14 04:25:20 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 14 04:25:20 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Oct 14 04:25:20 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Oct 14 04:25:20 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 67 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=66/67 n=1 ec=42/20 lis/c=52/52 les/c/f=53/53/0 sis=66) [1] r=0 lpr=66 pi=[52,66)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v142: 243 pgs: 243 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct 14 04:25:20 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0) v1
Oct 14 04:25:20 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 14 04:25:20 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Oct 14 04:25:20 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 14 04:25:21 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Oct 14 04:25:21 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 14 04:25:21 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 14 04:25:21 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Oct 14 04:25:21 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Oct 14 04:25:21 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 68 pg[9.c( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=68 pruub=9.425497055s) [2] r=-1 lpr=68 pi=[44,68)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 116.000671387s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:21 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 68 pg[9.c( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=68 pruub=9.425439835s) [2] r=-1 lpr=68 pi=[44,68)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 116.000671387s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:21 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 68 pg[9.1c( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=68 pruub=9.431315422s) [2] r=-1 lpr=68 pi=[44,68)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 116.007339478s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:21 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 68 pg[9.1c( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=68 pruub=9.431289673s) [2] r=-1 lpr=68 pi=[44,68)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 116.007339478s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:21 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 14 04:25:21 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 14 04:25:21 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 68 pg[9.c( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=68) [2] r=0 lpr=68 pi=[44,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:21 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 68 pg[9.1c( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=68) [2] r=0 lpr=68 pi=[44,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:21 np0005486808 python3[103690]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:25:21 np0005486808 podman[103691]: 2025-10-14 08:25:21.375557377 +0000 UTC m=+0.065176717 container create 1277ca83d8d03dd0cee6969551c7d5b5145b134f60b1e709078fdb94ab0a5325 (image=quay.io/ceph/ceph:v18, name=frosty_lichterman, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 04:25:21 np0005486808 systemd[1]: Started libpod-conmon-1277ca83d8d03dd0cee6969551c7d5b5145b134f60b1e709078fdb94ab0a5325.scope.
Oct 14 04:25:21 np0005486808 podman[103691]: 2025-10-14 08:25:21.35620292 +0000 UTC m=+0.045822300 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:25:21 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:25:21 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50dccd714b475ad62f89b6b92373f9f52f5422adda6f2ee03f6ca80a08bc1c1e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:25:21 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50dccd714b475ad62f89b6b92373f9f52f5422adda6f2ee03f6ca80a08bc1c1e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:25:21 np0005486808 podman[103691]: 2025-10-14 08:25:21.479584881 +0000 UTC m=+0.169204231 container init 1277ca83d8d03dd0cee6969551c7d5b5145b134f60b1e709078fdb94ab0a5325 (image=quay.io/ceph/ceph:v18, name=frosty_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:25:21 np0005486808 podman[103691]: 2025-10-14 08:25:21.49331434 +0000 UTC m=+0.182933680 container start 1277ca83d8d03dd0cee6969551c7d5b5145b134f60b1e709078fdb94ab0a5325 (image=quay.io/ceph/ceph:v18, name=frosty_lichterman, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 14 04:25:21 np0005486808 podman[103691]: 2025-10-14 08:25:21.49699093 +0000 UTC m=+0.186610260 container attach 1277ca83d8d03dd0cee6969551c7d5b5145b134f60b1e709078fdb94ab0a5325 (image=quay.io/ceph/ceph:v18, name=frosty_lichterman, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:25:21 np0005486808 frosty_lichterman[103706]: could not fetch user info: no user info saved
Oct 14 04:25:21 np0005486808 systemd[1]: libpod-1277ca83d8d03dd0cee6969551c7d5b5145b134f60b1e709078fdb94ab0a5325.scope: Deactivated successfully.
Oct 14 04:25:21 np0005486808 podman[103691]: 2025-10-14 08:25:21.752993281 +0000 UTC m=+0.442612611 container died 1277ca83d8d03dd0cee6969551c7d5b5145b134f60b1e709078fdb94ab0a5325 (image=quay.io/ceph/ceph:v18, name=frosty_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:25:21 np0005486808 systemd[1]: var-lib-containers-storage-overlay-50dccd714b475ad62f89b6b92373f9f52f5422adda6f2ee03f6ca80a08bc1c1e-merged.mount: Deactivated successfully.
Oct 14 04:25:21 np0005486808 podman[103691]: 2025-10-14 08:25:21.797747934 +0000 UTC m=+0.487367264 container remove 1277ca83d8d03dd0cee6969551c7d5b5145b134f60b1e709078fdb94ab0a5325 (image=quay.io/ceph/ceph:v18, name=frosty_lichterman, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 14 04:25:21 np0005486808 systemd[1]: libpod-conmon-1277ca83d8d03dd0cee6969551c7d5b5145b134f60b1e709078fdb94ab0a5325.scope: Deactivated successfully.
Oct 14 04:25:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:25:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Oct 14 04:25:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Oct 14 04:25:22 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Oct 14 04:25:22 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 69 pg[9.1c( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=69) [2]/[1] r=-1 lpr=69 pi=[44,69)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:22 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 69 pg[9.1c( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=69) [2]/[1] r=-1 lpr=69 pi=[44,69)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:22 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 69 pg[9.c( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=69) [2]/[1] r=-1 lpr=69 pi=[44,69)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:22 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 69 pg[9.c( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=69) [2]/[1] r=-1 lpr=69 pi=[44,69)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:22 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 69 pg[9.1c( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=69) [2]/[1] r=0 lpr=69 pi=[44,69)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:22 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 69 pg[9.c( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=69) [2]/[1] r=0 lpr=69 pi=[44,69)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:22 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 69 pg[9.c( v 41'385 (0'0,41'385] local-lis/les=44/45 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=69) [2]/[1] r=0 lpr=69 pi=[44,69)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:22 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 69 pg[9.1c( v 41'385 (0'0,41'385] local-lis/les=44/45 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=69) [2]/[1] r=0 lpr=69 pi=[44,69)/1 crt=41'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:22 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 14 04:25:22 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 14 04:25:22 np0005486808 python3[103827]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid c49aadb6-9b04-5cb1-8f5f-4c91676c568e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:25:22 np0005486808 podman[103828]: 2025-10-14 08:25:22.223208282 +0000 UTC m=+0.042768946 container create baebdbf0a496adc67e6bb49faa1c2d46505101f49825454f7c500ce8e090fee3 (image=quay.io/ceph/ceph:v18, name=magical_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 14 04:25:22 np0005486808 systemd[1]: Started libpod-conmon-baebdbf0a496adc67e6bb49faa1c2d46505101f49825454f7c500ce8e090fee3.scope.
Oct 14 04:25:22 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:25:22 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6abcd07f38f6c5f472a053c12146dd0738b96455f44ac1db7f7d39c47a6b4abc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:25:22 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6abcd07f38f6c5f472a053c12146dd0738b96455f44ac1db7f7d39c47a6b4abc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:25:22 np0005486808 podman[103828]: 2025-10-14 08:25:22.207291979 +0000 UTC m=+0.026852643 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 14 04:25:22 np0005486808 podman[103828]: 2025-10-14 08:25:22.304362132 +0000 UTC m=+0.123922836 container init baebdbf0a496adc67e6bb49faa1c2d46505101f49825454f7c500ce8e090fee3 (image=quay.io/ceph/ceph:v18, name=magical_nightingale, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 14 04:25:22 np0005486808 podman[103828]: 2025-10-14 08:25:22.311787715 +0000 UTC m=+0.131348389 container start baebdbf0a496adc67e6bb49faa1c2d46505101f49825454f7c500ce8e090fee3 (image=quay.io/ceph/ceph:v18, name=magical_nightingale, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:25:22 np0005486808 podman[103828]: 2025-10-14 08:25:22.315157058 +0000 UTC m=+0.134717742 container attach baebdbf0a496adc67e6bb49faa1c2d46505101f49825454f7c500ce8e090fee3 (image=quay.io/ceph/ceph:v18, name=magical_nightingale, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 14 04:25:22 np0005486808 magical_nightingale[103843]: {
Oct 14 04:25:22 np0005486808 magical_nightingale[103843]:    "user_id": "openstack",
Oct 14 04:25:22 np0005486808 magical_nightingale[103843]:    "display_name": "openstack",
Oct 14 04:25:22 np0005486808 magical_nightingale[103843]:    "email": "",
Oct 14 04:25:22 np0005486808 magical_nightingale[103843]:    "suspended": 0,
Oct 14 04:25:22 np0005486808 magical_nightingale[103843]:    "max_buckets": 1000,
Oct 14 04:25:22 np0005486808 magical_nightingale[103843]:    "subusers": [],
Oct 14 04:25:22 np0005486808 magical_nightingale[103843]:    "keys": [
Oct 14 04:25:22 np0005486808 magical_nightingale[103843]:        {
Oct 14 04:25:22 np0005486808 magical_nightingale[103843]:            "user": "openstack",
Oct 14 04:25:22 np0005486808 magical_nightingale[103843]:            "access_key": "BMVNVSAWLZRJOZZPIVPD",
Oct 14 04:25:22 np0005486808 magical_nightingale[103843]:            "secret_key": "InYys3vipgq4BylH7CGeJ4g7zpVd9PJ1Cm10WkZf"
Oct 14 04:25:22 np0005486808 magical_nightingale[103843]:        }
Oct 14 04:25:22 np0005486808 magical_nightingale[103843]:    ],
Oct 14 04:25:22 np0005486808 magical_nightingale[103843]:    "swift_keys": [],
Oct 14 04:25:22 np0005486808 magical_nightingale[103843]:    "caps": [],
Oct 14 04:25:22 np0005486808 magical_nightingale[103843]:    "op_mask": "read, write, delete",
Oct 14 04:25:22 np0005486808 magical_nightingale[103843]:    "default_placement": "",
Oct 14 04:25:22 np0005486808 magical_nightingale[103843]:    "default_storage_class": "",
Oct 14 04:25:22 np0005486808 magical_nightingale[103843]:    "placement_tags": [],
Oct 14 04:25:22 np0005486808 magical_nightingale[103843]:    "bucket_quota": {
Oct 14 04:25:22 np0005486808 magical_nightingale[103843]:        "enabled": false,
Oct 14 04:25:22 np0005486808 magical_nightingale[103843]:        "check_on_raw": false,
Oct 14 04:25:22 np0005486808 magical_nightingale[103843]:        "max_size": -1,
Oct 14 04:25:22 np0005486808 magical_nightingale[103843]:        "max_size_kb": 0,
Oct 14 04:25:22 np0005486808 magical_nightingale[103843]:        "max_objects": -1
Oct 14 04:25:22 np0005486808 magical_nightingale[103843]:    },
Oct 14 04:25:22 np0005486808 magical_nightingale[103843]:    "user_quota": {
Oct 14 04:25:22 np0005486808 magical_nightingale[103843]:        "enabled": false,
Oct 14 04:25:22 np0005486808 magical_nightingale[103843]:        "check_on_raw": false,
Oct 14 04:25:22 np0005486808 magical_nightingale[103843]:        "max_size": -1,
Oct 14 04:25:22 np0005486808 magical_nightingale[103843]:        "max_size_kb": 0,
Oct 14 04:25:22 np0005486808 magical_nightingale[103843]:        "max_objects": -1
Oct 14 04:25:22 np0005486808 magical_nightingale[103843]:    },
Oct 14 04:25:22 np0005486808 magical_nightingale[103843]:    "temp_url_keys": [],
Oct 14 04:25:22 np0005486808 magical_nightingale[103843]:    "type": "rgw",
Oct 14 04:25:22 np0005486808 magical_nightingale[103843]:    "mfa_ids": []
Oct 14 04:25:22 np0005486808 magical_nightingale[103843]: }
Oct 14 04:25:22 np0005486808 magical_nightingale[103843]: 
Oct 14 04:25:22 np0005486808 systemd[1]: libpod-baebdbf0a496adc67e6bb49faa1c2d46505101f49825454f7c500ce8e090fee3.scope: Deactivated successfully.
Oct 14 04:25:22 np0005486808 podman[103828]: 2025-10-14 08:25:22.534664559 +0000 UTC m=+0.354225223 container died baebdbf0a496adc67e6bb49faa1c2d46505101f49825454f7c500ce8e090fee3 (image=quay.io/ceph/ceph:v18, name=magical_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:25:22 np0005486808 systemd[1]: var-lib-containers-storage-overlay-6abcd07f38f6c5f472a053c12146dd0738b96455f44ac1db7f7d39c47a6b4abc-merged.mount: Deactivated successfully.
Oct 14 04:25:22 np0005486808 podman[103828]: 2025-10-14 08:25:22.580642262 +0000 UTC m=+0.400202936 container remove baebdbf0a496adc67e6bb49faa1c2d46505101f49825454f7c500ce8e090fee3 (image=quay.io/ceph/ceph:v18, name=magical_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:25:22 np0005486808 systemd[1]: libpod-conmon-baebdbf0a496adc67e6bb49faa1c2d46505101f49825454f7c500ce8e090fee3.scope: Deactivated successfully.
Oct 14 04:25:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v145: 243 pgs: 243 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct 14 04:25:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0) v1
Oct 14 04:25:22 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 14 04:25:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Oct 14 04:25:22 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 14 04:25:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Oct 14 04:25:23 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 14 04:25:23 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 14 04:25:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Oct 14 04:25:23 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Oct 14 04:25:23 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 70 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=55/56 n=1 ec=42/20 lis/c=55/55 les/c/f=56/56/0 sis=70 pruub=10.237339020s) [1] r=-1 lpr=70 pi=[55,70)/1 crt=37'39 mlcod 37'39 active pruub 124.096984863s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:23 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 70 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=55/56 n=1 ec=42/20 lis/c=55/55 les/c/f=56/56/0 sis=70 pruub=10.237221718s) [1] r=-1 lpr=70 pi=[55,70)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 124.096984863s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:23 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 70 pg[6.d( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=55/55 les/c/f=56/56/0 sis=70) [1] r=0 lpr=70 pi=[55,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:23 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 14 04:25:23 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 14 04:25:23 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 14 04:25:23 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 14 04:25:23 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 70 pg[9.1c( v 41'385 (0'0,41'385] local-lis/les=69/70 n=5 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=69) [2]/[1] async=[2] r=0 lpr=69 pi=[44,69)/1 crt=41'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:23 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 70 pg[9.c( v 41'385 (0'0,41'385] local-lis/les=69/70 n=6 ec=44/34 lis/c=44/44 les/c/f=45/45/0 sis=69) [2]/[1] async=[2] r=0 lpr=69 pi=[44,69)/1 crt=41'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:23 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 7.b scrub starts
Oct 14 04:25:23 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 7.b scrub ok
Oct 14 04:25:24 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Oct 14 04:25:24 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Oct 14 04:25:24 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Oct 14 04:25:24 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 71 pg[9.1c( v 41'385 (0'0,41'385] local-lis/les=0/0 n=5 ec=44/34 lis/c=69/44 les/c/f=70/45/0 sis=71) [2] r=0 lpr=71 pi=[44,71)/1 luod=0'0 crt=41'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:24 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 71 pg[9.1c( v 41'385 (0'0,41'385] local-lis/les=0/0 n=5 ec=44/34 lis/c=69/44 les/c/f=70/45/0 sis=71) [2] r=0 lpr=71 pi=[44,71)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:24 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 71 pg[9.c( v 41'385 (0'0,41'385] local-lis/les=0/0 n=6 ec=44/34 lis/c=69/44 les/c/f=70/45/0 sis=71) [2] r=0 lpr=71 pi=[44,71)/1 luod=0'0 crt=41'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:24 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 71 pg[9.c( v 41'385 (0'0,41'385] local-lis/les=0/0 n=6 ec=44/34 lis/c=69/44 les/c/f=70/45/0 sis=71) [2] r=0 lpr=71 pi=[44,71)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:24 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 71 pg[9.c( v 41'385 (0'0,41'385] local-lis/les=69/70 n=6 ec=44/34 lis/c=69/44 les/c/f=70/45/0 sis=71 pruub=15.366253853s) [2] async=[2] r=-1 lpr=71 pi=[44,71)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 124.853790283s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:24 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 71 pg[9.c( v 41'385 (0'0,41'385] local-lis/les=69/70 n=6 ec=44/34 lis/c=69/44 les/c/f=70/45/0 sis=71 pruub=15.366161346s) [2] r=-1 lpr=71 pi=[44,71)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 124.853790283s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:24 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 71 pg[9.1c( v 41'385 (0'0,41'385] local-lis/les=69/70 n=5 ec=44/34 lis/c=69/44 les/c/f=70/45/0 sis=71 pruub=15.362776756s) [2] async=[2] r=-1 lpr=71 pi=[44,71)/1 crt=41'385 lcod 0'0 mlcod 0'0 active pruub 124.850761414s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:24 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 71 pg[9.1c( v 41'385 (0'0,41'385] local-lis/les=69/70 n=5 ec=44/34 lis/c=69/44 les/c/f=70/45/0 sis=71 pruub=15.362699509s) [2] r=-1 lpr=71 pi=[44,71)/1 crt=41'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 124.850761414s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:24 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 71 pg[6.d( v 37'39 lc 34'13 (0'0,37'39] local-lis/les=70/71 n=1 ec=42/20 lis/c=55/55 les/c/f=56/56/0 sis=70) [1] r=0 lpr=70 pi=[55,70)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v148: 243 pgs: 243 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:25:24 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0) v1
Oct 14 04:25:24 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 14 04:25:24 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Oct 14 04:25:24 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 14 04:25:24 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 7.d scrub starts
Oct 14 04:25:24 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 7.d scrub ok
Oct 14 04:25:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Oct 14 04:25:25 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 14 04:25:25 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 14 04:25:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 14 04:25:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 14 04:25:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Oct 14 04:25:25 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Oct 14 04:25:25 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 72 pg[9.c( v 41'385 (0'0,41'385] local-lis/les=71/72 n=6 ec=44/34 lis/c=69/44 les/c/f=70/45/0 sis=71) [2] r=0 lpr=71 pi=[44,71)/1 crt=41'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:25 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 72 pg[9.1c( v 41'385 (0'0,41'385] local-lis/les=71/72 n=5 ec=44/34 lis/c=69/44 les/c/f=70/45/0 sis=71) [2] r=0 lpr=71 pi=[44,71)/1 crt=41'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:25 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Oct 14 04:25:25 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Oct 14 04:25:26 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 14 04:25:26 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 14 04:25:26 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Oct 14 04:25:26 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Oct 14 04:25:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v150: 243 pgs: 243 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 445 B/s wr, 3 op/s; 39 B/s, 3 objects/s recovering
Oct 14 04:25:26 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0) v1
Oct 14 04:25:26 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 14 04:25:26 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Oct 14 04:25:26 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 14 04:25:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:25:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Oct 14 04:25:27 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 14 04:25:27 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 14 04:25:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Oct 14 04:25:27 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Oct 14 04:25:27 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 14 04:25:27 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 14 04:25:28 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 14 04:25:28 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 14 04:25:28 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 73 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=52/53 n=1 ec=42/20 lis/c=52/52 les/c/f=53/53/0 sis=73 pruub=10.985537529s) [2] r=-1 lpr=73 pi=[52,73)/1 crt=37'39 mlcod 37'39 active pruub 129.987686157s@ mbc={255={}}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:28 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 73 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=52/53 n=1 ec=42/20 lis/c=52/52 les/c/f=53/53/0 sis=73 pruub=10.985159874s) [2] r=-1 lpr=73 pi=[52,73)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 129.987686157s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:28 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 73 pg[6.f( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=52/52 les/c/f=53/53/0 sis=73) [2] r=0 lpr=73 pi=[52,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:28 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Oct 14 04:25:28 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Oct 14 04:25:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v152: 243 pgs: 243 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 366 B/s wr, 3 op/s; 32 B/s, 2 objects/s recovering
Oct 14 04:25:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Oct 14 04:25:28 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct 14 04:25:29 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Oct 14 04:25:29 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct 14 04:25:29 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct 14 04:25:29 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Oct 14 04:25:29 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Oct 14 04:25:29 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 74 pg[6.f( v 37'39 lc 34'1 (0'0,37'39] local-lis/les=73/74 n=1 ec=42/20 lis/c=52/52 les/c/f=53/53/0 sis=73) [2] r=0 lpr=73 pi=[52,73)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:30 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct 14 04:25:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v154: 243 pgs: 243 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 341 B/s wr, 2 op/s; 130 B/s, 2 objects/s recovering
Oct 14 04:25:30 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Oct 14 04:25:30 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct 14 04:25:31 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Oct 14 04:25:31 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct 14 04:25:31 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct 14 04:25:31 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Oct 14 04:25:31 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Oct 14 04:25:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:25:32 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct 14 04:25:32 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Oct 14 04:25:32 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Oct 14 04:25:32 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Oct 14 04:25:32 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Oct 14 04:25:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_08:25:32
Oct 14 04:25:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 04:25:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 04:25:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'default.rgw.control', 'volumes', '.mgr', 'vms', 'default.rgw.log', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root']
Oct 14 04:25:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 04:25:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v156: 243 pgs: 243 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 100 B/s, 0 objects/s recovering
Oct 14 04:25:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Oct 14 04:25:32 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct 14 04:25:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:25:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:25:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 04:25:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:25:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 04:25:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:25:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:25:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:25:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:25:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:25:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:25:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:25:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:25:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:25:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:25:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:25:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Oct 14 04:25:33 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct 14 04:25:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Oct 14 04:25:33 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Oct 14 04:25:33 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct 14 04:25:33 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Oct 14 04:25:33 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Oct 14 04:25:33 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Oct 14 04:25:33 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Oct 14 04:25:34 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Oct 14 04:25:34 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Oct 14 04:25:34 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct 14 04:25:34 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Oct 14 04:25:34 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Oct 14 04:25:34 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Oct 14 04:25:34 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Oct 14 04:25:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v158: 243 pgs: 243 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 100 B/s, 0 objects/s recovering
Oct 14 04:25:34 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Oct 14 04:25:34 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct 14 04:25:35 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Oct 14 04:25:35 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct 14 04:25:35 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Oct 14 04:25:35 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Oct 14 04:25:35 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct 14 04:25:35 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 77 pg[9.13( v 41'385 (0'0,41'385] local-lis/les=49/50 n=5 ec=44/34 lis/c=49/49 les/c/f=50/50/0 sis=77 pruub=10.785229683s) [2] r=-1 lpr=77 pi=[49,77)/1 crt=41'385 mlcod 0'0 active pruub 136.901428223s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:35 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 77 pg[9.13( v 41'385 (0'0,41'385] local-lis/les=49/50 n=5 ec=44/34 lis/c=49/49 les/c/f=50/50/0 sis=77 pruub=10.785140038s) [2] r=-1 lpr=77 pi=[49,77)/1 crt=41'385 mlcod 0'0 unknown NOTIFY pruub 136.901428223s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:35 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 77 pg[9.13( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=49/49 les/c/f=50/50/0 sis=77) [2] r=0 lpr=77 pi=[49,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:35 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 7.19 deep-scrub starts
Oct 14 04:25:35 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 7.19 deep-scrub ok
Oct 14 04:25:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Oct 14 04:25:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Oct 14 04:25:36 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Oct 14 04:25:36 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 78 pg[9.13( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=49/49 les/c/f=50/50/0 sis=78) [2]/[0] r=-1 lpr=78 pi=[49,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:36 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 78 pg[9.13( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=49/49 les/c/f=50/50/0 sis=78) [2]/[0] r=-1 lpr=78 pi=[49,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:36 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct 14 04:25:36 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 78 pg[9.13( v 41'385 (0'0,41'385] local-lis/les=49/50 n=5 ec=44/34 lis/c=49/49 les/c/f=50/50/0 sis=78) [2]/[0] r=0 lpr=78 pi=[49,78)/1 crt=41'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:36 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 78 pg[9.13( v 41'385 (0'0,41'385] local-lis/les=49/50 n=5 ec=44/34 lis/c=49/49 les/c/f=50/50/0 sis=78) [2]/[0] r=0 lpr=78 pi=[49,78)/1 crt=41'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:36 np0005486808 systemd-logind[799]: New session 35 of user zuul.
Oct 14 04:25:36 np0005486808 systemd[1]: Started Session 35 of User zuul.
Oct 14 04:25:36 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Oct 14 04:25:36 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Oct 14 04:25:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v161: 243 pgs: 1 unknown, 242 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:25:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e78 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:25:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Oct 14 04:25:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Oct 14 04:25:37 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Oct 14 04:25:37 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 79 pg[9.13( v 41'385 (0'0,41'385] local-lis/les=78/79 n=5 ec=44/34 lis/c=49/49 les/c/f=50/50/0 sis=78) [2]/[0] async=[2] r=0 lpr=78 pi=[49,78)/1 crt=41'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:37 np0005486808 python3.9[104092]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 04:25:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Oct 14 04:25:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Oct 14 04:25:38 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Oct 14 04:25:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 80 pg[9.13( v 41'385 (0'0,41'385] local-lis/les=78/79 n=5 ec=44/34 lis/c=78/49 les/c/f=79/50/0 sis=80 pruub=15.298174858s) [2] async=[2] r=-1 lpr=80 pi=[49,80)/1 crt=41'385 mlcod 41'385 active pruub 144.346725464s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:38 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 80 pg[9.13( v 41'385 (0'0,41'385] local-lis/les=78/79 n=5 ec=44/34 lis/c=78/49 les/c/f=79/50/0 sis=80 pruub=15.297905922s) [2] r=-1 lpr=80 pi=[49,80)/1 crt=41'385 mlcod 0'0 unknown NOTIFY pruub 144.346725464s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 80 pg[9.13( v 41'385 (0'0,41'385] local-lis/les=0/0 n=5 ec=44/34 lis/c=78/49 les/c/f=79/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 luod=0'0 crt=41'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:38 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 80 pg[9.13( v 41'385 (0'0,41'385] local-lis/les=0/0 n=5 ec=44/34 lis/c=78/49 les/c/f=79/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v164: 243 pgs: 1 unknown, 242 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:25:39 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Oct 14 04:25:39 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Oct 14 04:25:39 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Oct 14 04:25:39 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 81 pg[9.13( v 41'385 (0'0,41'385] local-lis/les=80/81 n=5 ec=44/34 lis/c=78/49 les/c/f=79/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=41'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:39 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Oct 14 04:25:39 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Oct 14 04:25:39 np0005486808 python3.9[104310]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:25:40 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Oct 14 04:25:40 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Oct 14 04:25:40 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Oct 14 04:25:40 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Oct 14 04:25:40 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 04:25:40 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:25:40 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 04:25:40 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:25:40 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:25:40 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:25:40 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:25:40 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:25:40 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:25:40 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:25:40 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:25:40 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:25:40 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 04:25:40 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:25:40 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:25:40 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:25:40 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 04:25:40 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:25:40 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 04:25:40 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:25:40 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 14 04:25:40 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:25:40 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 1)
Oct 14 04:25:40 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Oct 14 04:25:40 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct 14 04:25:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v166: 243 pgs: 243 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 229 B/s wr, 7 op/s; 49 B/s, 2 objects/s recovering
Oct 14 04:25:40 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Oct 14 04:25:40 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct 14 04:25:41 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 7.4 deep-scrub starts
Oct 14 04:25:41 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 7.4 deep-scrub ok
Oct 14 04:25:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Oct 14 04:25:41 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct 14 04:25:41 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct 14 04:25:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct 14 04:25:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct 14 04:25:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Oct 14 04:25:41 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Oct 14 04:25:41 np0005486808 ceph-mgr[74543]: [progress INFO root] update: starting ev d4c4da42-9b21-456c-bc81-e7e744d9368b (PG autoscaler increasing pool 10 PGs from 1 to 32)
Oct 14 04:25:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Oct 14 04:25:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct 14 04:25:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:25:42 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Oct 14 04:25:42 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Oct 14 04:25:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Oct 14 04:25:42 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct 14 04:25:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Oct 14 04:25:42 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Oct 14 04:25:42 np0005486808 ceph-mgr[74543]: [progress INFO root] update: starting ev 778e0e9d-69a5-4cd4-867f-c6e8ab142c29 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Oct 14 04:25:42 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct 14 04:25:42 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct 14 04:25:42 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct 14 04:25:42 np0005486808 ceph-mgr[74543]: [progress INFO root] complete: finished ev d4c4da42-9b21-456c-bc81-e7e744d9368b (PG autoscaler increasing pool 10 PGs from 1 to 32)
Oct 14 04:25:42 np0005486808 ceph-mgr[74543]: [progress INFO root] Completed event d4c4da42-9b21-456c-bc81-e7e744d9368b (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Oct 14 04:25:42 np0005486808 ceph-mgr[74543]: [progress INFO root] complete: finished ev 778e0e9d-69a5-4cd4-867f-c6e8ab142c29 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Oct 14 04:25:42 np0005486808 ceph-mgr[74543]: [progress INFO root] Completed event 778e0e9d-69a5-4cd4-867f-c6e8ab142c29 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Oct 14 04:25:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v169: 243 pgs: 243 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 232 B/s wr, 7 op/s; 49 B/s, 2 objects/s recovering
Oct 14 04:25:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 14 04:25:42 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 14 04:25:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 14 04:25:42 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 14 04:25:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Oct 14 04:25:42 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct 14 04:25:42 np0005486808 ceph-mgr[74543]: [progress INFO root] Writing back 16 completed events
Oct 14 04:25:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 14 04:25:42 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:25:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Oct 14 04:25:43 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct 14 04:25:43 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct 14 04:25:43 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct 14 04:25:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Oct 14 04:25:43 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Oct 14 04:25:43 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 84 pg[9.15( v 41'385 (0'0,41'385] local-lis/les=50/51 n=5 ec=44/34 lis/c=50/50 les/c/f=51/51/0 sis=84 pruub=11.729813576s) [1] r=-1 lpr=84 pi=[50,84)/1 crt=41'385 mlcod 0'0 active pruub 145.896286011s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:43 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 84 pg[9.15( v 41'385 (0'0,41'385] local-lis/les=50/51 n=5 ec=44/34 lis/c=50/50 les/c/f=51/51/0 sis=84 pruub=11.729750633s) [1] r=-1 lpr=84 pi=[50,84)/1 crt=41'385 mlcod 0'0 unknown NOTIFY pruub 145.896286011s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:43 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct 14 04:25:43 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 14 04:25:43 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 14 04:25:43 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct 14 04:25:43 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:25:43 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 84 pg[9.15( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=50/50 les/c/f=51/51/0 sis=84) [1] r=0 lpr=84 pi=[50,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:43 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 84 pg[11.0( v 69'2 (0'0,69'2] local-lis/les=38/39 n=2 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=84 pruub=13.162389755s) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 lcod 69'1 mlcod 69'1 active pruub 141.954544067s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:43 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 84 pg[11.0( v 69'2 lc 0'0 (0'0,69'2] local-lis/les=38/39 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=84 pruub=13.162389755s) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 lcod 69'1 mlcod 0'0 unknown pruub 141.954544067s@ mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:43 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 84 pg[10.0( v 69'64 (0'0,69'64] local-lis/les=36/37 n=8 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=84 pruub=11.091572762s) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 lcod 69'63 mlcod 69'63 active pruub 134.751983643s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:43 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 84 pg[10.0( v 69'64 lc 0'0 (0'0,69'64] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=84 pruub=11.091572762s) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 lcod 69'63 mlcod 0'0 unknown pruub 134.751983643s@ mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Oct 14 04:25:44 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Oct 14 04:25:44 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.1b( v 69'64 lc 0'0 (0'0,69'64] local-lis/les=36/37 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.a( v 69'64 lc 0'0 (0'0,69'64] local-lis/les=36/37 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.b( v 69'64 lc 0'0 (0'0,69'64] local-lis/les=36/37 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.d( v 69'64 lc 0'0 (0'0,69'64] local-lis/les=36/37 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.12( v 69'64 lc 0'0 (0'0,69'64] local-lis/les=36/37 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.11( v 69'64 lc 0'0 (0'0,69'64] local-lis/les=36/37 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.10( v 69'64 lc 0'0 (0'0,69'64] local-lis/les=36/37 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.1e( v 69'64 lc 0'0 (0'0,69'64] local-lis/les=36/37 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.1c( v 69'64 lc 0'0 (0'0,69'64] local-lis/les=36/37 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.1a( v 69'64 lc 0'0 (0'0,69'64] local-lis/les=36/37 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.1d( v 69'64 lc 0'0 (0'0,69'64] local-lis/les=36/37 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.19( v 69'64 lc 0'0 (0'0,69'64] local-lis/les=36/37 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.1f( v 69'64 lc 0'0 (0'0,69'64] local-lis/les=36/37 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.18( v 69'64 lc 0'0 (0'0,69'64] local-lis/les=36/37 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.7( v 69'64 lc 0'0 (0'0,69'64] local-lis/les=36/37 n=1 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.5( v 69'64 lc 0'0 (0'0,69'64] local-lis/les=36/37 n=1 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.6( v 69'64 lc 0'0 (0'0,69'64] local-lis/les=36/37 n=1 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.4( v 69'64 lc 0'0 (0'0,69'64] local-lis/les=36/37 n=1 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.3( v 69'64 lc 0'0 (0'0,69'64] local-lis/les=36/37 n=1 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.f( v 69'64 lc 0'0 (0'0,69'64] local-lis/les=36/37 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.8( v 69'64 lc 0'0 (0'0,69'64] local-lis/les=36/37 n=1 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.9( v 69'64 lc 0'0 (0'0,69'64] local-lis/les=36/37 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.c( v 69'64 lc 0'0 (0'0,69'64] local-lis/les=36/37 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.e( v 69'64 lc 0'0 (0'0,69'64] local-lis/les=36/37 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.1( v 69'64 (0'0,69'64] local-lis/les=36/37 n=1 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.2( v 69'64 lc 0'0 (0'0,69'64] local-lis/les=36/37 n=1 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.13( v 69'64 lc 0'0 (0'0,69'64] local-lis/les=36/37 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.14( v 69'64 lc 0'0 (0'0,69'64] local-lis/les=36/37 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.16( v 69'64 lc 0'0 (0'0,69'64] local-lis/les=36/37 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.15( v 69'64 lc 0'0 (0'0,69'64] local-lis/les=36/37 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.17( v 69'64 lc 0'0 (0'0,69'64] local-lis/les=36/37 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.a( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.19( v 69'2 lc 0'0 (0'0,69'2] local-lis/les=38/39 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.16( v 69'2 lc 0'0 (0'0,69'2] local-lis/les=38/39 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.15( v 69'2 lc 0'0 (0'0,69'2] local-lis/les=38/39 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.14( v 69'2 lc 0'0 (0'0,69'2] local-lis/les=38/39 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.17( v 69'2 lc 0'0 (0'0,69'2] local-lis/les=38/39 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.13( v 69'2 lc 0'0 (0'0,69'2] local-lis/les=38/39 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.12( v 69'2 lc 0'0 (0'0,69'2] local-lis/les=38/39 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.11( v 69'2 lc 0'0 (0'0,69'2] local-lis/les=38/39 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.f( v 69'2 lc 0'0 (0'0,69'2] local-lis/les=38/39 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.e( v 69'2 lc 0'0 (0'0,69'2] local-lis/les=38/39 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.d( v 69'2 lc 0'0 (0'0,69'2] local-lis/les=38/39 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.b( v 69'2 lc 0'0 (0'0,69'2] local-lis/les=38/39 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct 14 04:25:44 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct 14 04:25:44 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.9( v 69'2 lc 0'0 (0'0,69'2] local-lis/les=38/39 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.2( v 69'2 lc 0'0 (0'0,69'2] local-lis/les=38/39 n=1 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.3( v 69'2 lc 0'0 (0'0,69'2] local-lis/les=38/39 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.c( v 69'2 lc 0'0 (0'0,69'2] local-lis/les=38/39 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.8( v 69'2 lc 0'0 (0'0,69'2] local-lis/les=38/39 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.a( v 69'2 lc 0'0 (0'0,69'2] local-lis/les=38/39 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.1( v 69'2 (0'0,69'2] local-lis/les=38/39 n=1 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.5( v 69'2 lc 0'0 (0'0,69'2] local-lis/les=38/39 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.4( v 69'2 lc 0'0 (0'0,69'2] local-lis/les=38/39 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.6( v 69'2 lc 0'0 (0'0,69'2] local-lis/les=38/39 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.7( v 69'2 lc 0'0 (0'0,69'2] local-lis/les=38/39 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.18( v 69'2 lc 0'0 (0'0,69'2] local-lis/les=38/39 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.1a( v 69'2 lc 0'0 (0'0,69'2] local-lis/les=38/39 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.1b( v 69'2 lc 0'0 (0'0,69'2] local-lis/les=38/39 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.1c( v 69'2 lc 0'0 (0'0,69'2] local-lis/les=38/39 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.1d( v 69'2 lc 0'0 (0'0,69'2] local-lis/les=38/39 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.1e( v 69'2 lc 0'0 (0'0,69'2] local-lis/les=38/39 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.1f( v 69'2 lc 0'0 (0'0,69'2] local-lis/les=38/39 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.10( v 69'2 lc 0'0 (0'0,69'2] local-lis/les=38/39 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[9.15( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=50/50 les/c/f=51/51/0 sis=85) [1]/[0] r=-1 lpr=85 pi=[50,85)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[9.15( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=50/50 les/c/f=51/51/0 sis=85) [1]/[0] r=-1 lpr=85 pi=[50,85)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.19( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.1b( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.b( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.12( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.11( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.d( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.1e( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.1c( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.10( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.18( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.7( v 69'64 (0'0,69'64] local-lis/les=84/85 n=1 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.1a( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.19( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.5( v 69'64 (0'0,69'64] local-lis/les=84/85 n=1 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.6( v 69'64 (0'0,69'64] local-lis/les=84/85 n=1 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.4( v 69'64 (0'0,69'64] local-lis/les=84/85 n=1 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.8( v 69'64 (0'0,69'64] local-lis/les=84/85 n=1 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.f( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.0( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 lcod 69'63 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.c( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.9( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.e( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.2( v 69'64 (0'0,69'64] local-lis/les=84/85 n=1 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.1d( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.13( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.1( v 69'64 (0'0,69'64] local-lis/les=84/85 n=1 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.1f( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.17( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.16( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.15( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.14( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 85 pg[10.3( v 69'64 (0'0,69'64] local-lis/les=84/85 n=1 ec=84/36 lis/c=36/36 les/c/f=37/37/0 sis=84) [2] r=0 lpr=84 pi=[36,84)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.16( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.15( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.14( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.12( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.11( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.0( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 lcod 69'1 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.13( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.17( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.e( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 85 pg[9.15( v 41'385 (0'0,41'385] local-lis/les=50/51 n=5 ec=44/34 lis/c=50/50 les/c/f=51/51/0 sis=85) [1]/[0] r=0 lpr=85 pi=[50,85)/1 crt=41'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.d( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.f( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.b( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.9( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.3( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.2( v 69'2 (0'0,69'2] local-lis/les=84/85 n=1 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.c( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.a( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.8( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.1( v 69'2 (0'0,69'2] local-lis/les=84/85 n=1 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.5( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 85 pg[9.15( v 41'385 (0'0,41'385] local-lis/les=50/51 n=5 ec=44/34 lis/c=50/50 les/c/f=51/51/0 sis=85) [1]/[0] r=0 lpr=85 pi=[50,85)/1 crt=41'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.6( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.4( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.7( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.18( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.1b( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.1c( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.1d( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.1a( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.1e( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.10( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 85 pg[11.1f( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=38/38 les/c/f=39/39/0 sis=84) [1] r=0 lpr=84 pi=[38,84)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v172: 305 pgs: 62 unknown, 243 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:25:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Oct 14 04:25:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Oct 14 04:25:45 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Oct 14 04:25:45 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 86 pg[9.15( v 41'385 (0'0,41'385] local-lis/les=85/86 n=5 ec=44/34 lis/c=50/50 les/c/f=51/51/0 sis=85) [1]/[0] async=[1] r=0 lpr=85 pi=[50,85)/1 crt=41'385 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Oct 14 04:25:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Oct 14 04:25:46 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Oct 14 04:25:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 87 pg[9.15( v 41'385 (0'0,41'385] local-lis/les=85/86 n=5 ec=44/34 lis/c=85/50 les/c/f=86/51/0 sis=87 pruub=15.090174675s) [1] async=[1] r=-1 lpr=87 pi=[50,87)/1 crt=41'385 mlcod 41'385 active pruub 152.322875977s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:46 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 87 pg[9.15( v 41'385 (0'0,41'385] local-lis/les=85/86 n=5 ec=44/34 lis/c=85/50 les/c/f=86/51/0 sis=87 pruub=15.090085030s) [1] r=-1 lpr=87 pi=[50,87)/1 crt=41'385 mlcod 0'0 unknown NOTIFY pruub 152.322875977s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 87 pg[9.15( v 41'385 (0'0,41'385] local-lis/les=0/0 n=5 ec=44/34 lis/c=85/50 les/c/f=86/51/0 sis=87) [1] r=0 lpr=87 pi=[50,87)/1 luod=0'0 crt=41'385 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:46 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 87 pg[9.15( v 41'385 (0'0,41'385] local-lis/les=0/0 n=5 ec=44/34 lis/c=85/50 les/c/f=86/51/0 sis=87) [1] r=0 lpr=87 pi=[50,87)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:46 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Oct 14 04:25:46 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Oct 14 04:25:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v175: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:25:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:25:47 np0005486808 systemd-logind[799]: Session 35 logged out. Waiting for processes to exit.
Oct 14 04:25:47 np0005486808 systemd[1]: session-35.scope: Deactivated successfully.
Oct 14 04:25:47 np0005486808 systemd[1]: session-35.scope: Consumed 8.937s CPU time.
Oct 14 04:25:47 np0005486808 systemd-logind[799]: Removed session 35.
Oct 14 04:25:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Oct 14 04:25:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Oct 14 04:25:47 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Oct 14 04:25:47 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 88 pg[9.15( v 41'385 (0'0,41'385] local-lis/les=87/88 n=5 ec=44/34 lis/c=85/50 les/c/f=86/51/0 sis=87) [1] r=0 lpr=87 pi=[50,87)/1 crt=41'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:47 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Oct 14 04:25:47 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Oct 14 04:25:48 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Oct 14 04:25:48 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Oct 14 04:25:48 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Oct 14 04:25:48 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Oct 14 04:25:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v177: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:25:49 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 8.b scrub starts
Oct 14 04:25:49 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 8.b scrub ok
Oct 14 04:25:50 np0005486808 podman[104541]: 2025-10-14 08:25:50.190711394 +0000 UTC m=+0.088621855 container exec c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:25:50 np0005486808 podman[104541]: 2025-10-14 08:25:50.310460516 +0000 UTC m=+0.208370977 container exec_died c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Oct 14 04:25:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v178: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Oct 14 04:25:50 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 14 04:25:50 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 14 04:25:50 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Oct 14 04:25:50 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct 14 04:25:50 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 14 04:25:50 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 14 04:25:50 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 14 04:25:50 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct 14 04:25:50 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 14 04:25:50 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Oct 14 04:25:50 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 14 04:25:50 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct 14 04:25:50 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 14 04:25:50 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Oct 14 04:25:50 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.19( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.674814224s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active pruub 145.817260742s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.19( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.674725533s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.817260742s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.14( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.679147720s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active pruub 145.821838379s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.17( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.680035591s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active pruub 145.822647095s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.15( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.679020882s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active pruub 145.821838379s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.15( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.678979874s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.821838379s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.12( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.679157257s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active pruub 145.822235107s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.17( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.679852486s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.822647095s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.11( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.679151535s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active pruub 145.822280884s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.12( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.679084778s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.822235107s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.11( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.679101944s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.822280884s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.14( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.678534508s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.821838379s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.f( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.679167747s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active pruub 145.823028564s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.e( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.678960800s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active pruub 145.822860718s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.d( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.679060936s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active pruub 145.823013306s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.f( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.679119110s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.823028564s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.d( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.679008484s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.823013306s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.9( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.678986549s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active pruub 145.823074341s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.9( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.678957939s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.823074341s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.2( v 69'2 (0'0,69'2] local-lis/les=84/85 n=1 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.678759575s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active pruub 145.823089600s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.b( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.679035187s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active pruub 145.823043823s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.3( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.678625107s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active pruub 145.823074341s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.2( v 69'2 (0'0,69'2] local-lis/les=84/85 n=1 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.678658485s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.823089600s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.3( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.678569794s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.823074341s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.e( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.678066254s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.822860718s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.b( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.678572655s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.823043823s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.8( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.678256035s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active pruub 145.823135376s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.1( v 69'2 (0'0,69'2] local-lis/les=84/85 n=1 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.678117752s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active pruub 145.823135376s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.8( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.678113937s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.823135376s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.1( v 69'2 (0'0,69'2] local-lis/les=84/85 n=1 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.678084373s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.823135376s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.4( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.678952217s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active pruub 145.824249268s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.6( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.678920746s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active pruub 145.824249268s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.4( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.678906441s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.824249268s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.6( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.678883553s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.824249268s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.18( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.679072380s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active pruub 145.824676514s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.18( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.679037094s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.824676514s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.1b( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.678895950s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active pruub 145.824707031s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.1b( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.678868294s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.824707031s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.1a( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.678785324s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active pruub 145.824707031s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.1c( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.678784370s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active pruub 145.824722290s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.1a( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.678738594s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.824707031s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.1c( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.678743362s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.824722290s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.1f( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.679430008s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active pruub 145.825698853s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.1f( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.679373741s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.825698853s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.10( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.679237366s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active pruub 145.825683594s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.10( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.679185867s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.825683594s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.1e( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.678602219s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active pruub 145.824798584s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[11.1e( v 69'2 (0'0,69'2] local-lis/les=84/85 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.678027153s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.824798584s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 89 pg[11.10( empty local-lis/les=0/0 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[11.15( empty local-lis/les=0/0 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[11.12( empty local-lis/les=0/0 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 89 pg[11.4( empty local-lis/les=0/0 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 89 pg[11.14( empty local-lis/les=0/0 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 89 pg[11.6( empty local-lis/les=0/0 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[11.3( empty local-lis/les=0/0 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 89 pg[11.e( empty local-lis/les=0/0 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[11.d( empty local-lis/les=0/0 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 89 pg[11.f( empty local-lis/les=0/0 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[11.8( empty local-lis/les=0/0 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 89 pg[11.1( empty local-lis/les=0/0 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[11.9( empty local-lis/les=0/0 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 89 pg[11.19( empty local-lis/les=0/0 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[11.2( empty local-lis/les=0/0 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 89 pg[11.17( empty local-lis/les=0/0 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[11.18( empty local-lis/les=0/0 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[11.1b( empty local-lis/les=0/0 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[11.1c( empty local-lis/les=0/0 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[11.1f( empty local-lis/les=0/0 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[11.1e( empty local-lis/les=0/0 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[11.11( empty local-lis/les=0/0 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[11.1a( empty local-lis/les=0/0 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[11.b( empty local-lis/les=0/0 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[10.b( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.658033371s) [1] r=-1 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 active pruub 140.634933472s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[10.b( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.657981873s) [1] r=-1 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.634933472s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[10.d( v 85'65 (0'0,85'65] local-lis/les=84/85 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.656975746s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=69'64 lcod 69'64 mlcod 69'64 active pruub 140.634994507s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[10.11( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.656935692s) [1] r=-1 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 active pruub 140.635025024s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[10.d( v 85'65 (0'0,85'65] local-lis/les=84/85 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.656917572s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=69'64 lcod 69'64 mlcod 0'0 unknown NOTIFY pruub 140.634994507s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[10.11( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.656896591s) [1] r=-1 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.635025024s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[10.10( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.656827927s) [1] r=-1 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 active pruub 140.635360718s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[10.10( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.656785011s) [1] r=-1 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.635360718s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[10.b( empty local-lis/les=0/0 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89) [1] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[10.1e( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.656119347s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 active pruub 140.635314941s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[10.1e( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.656083107s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.635314941s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[10.7( v 69'64 (0'0,69'64] local-lis/les=84/85 n=1 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.655889511s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 active pruub 140.635421753s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[10.7( v 69'64 (0'0,69'64] local-lis/les=84/85 n=1 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.655863762s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.635421753s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[10.6( v 69'64 (0'0,69'64] local-lis/les=84/85 n=1 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.655691147s) [1] r=-1 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 active pruub 140.635543823s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[10.6( v 69'64 (0'0,69'64] local-lis/les=84/85 n=1 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.655643463s) [1] r=-1 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.635543823s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[10.1a( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.655126572s) [1] r=-1 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 active pruub 140.635375977s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[10.1a( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.655091286s) [1] r=-1 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.635375977s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[10.19( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.655366898s) [1] r=-1 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 active pruub 140.635452271s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[10.11( empty local-lis/les=0/0 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89) [1] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[10.4( v 69'64 (0'0,69'64] local-lis/les=84/85 n=1 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.655419350s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 active pruub 140.635589600s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[10.4( v 69'64 (0'0,69'64] local-lis/les=84/85 n=1 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.654875755s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.635589600s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[10.10( empty local-lis/les=0/0 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89) [1] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[10.12( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.653989792s) [1] r=-1 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 active pruub 140.634994507s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[10.f( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.654502869s) [1] r=-1 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 active pruub 140.635604858s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[10.8( v 69'64 (0'0,69'64] local-lis/les=84/85 n=1 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.654640198s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 active pruub 140.635604858s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[10.6( empty local-lis/les=0/0 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89) [1] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[10.12( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.653895378s) [1] r=-1 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.634994507s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[10.8( v 69'64 (0'0,69'64] local-lis/les=84/85 n=1 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.654390335s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.635604858s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[10.f( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.654445648s) [1] r=-1 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.635604858s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[10.9( v 85'65 (0'0,85'65] local-lis/les=84/85 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.654048920s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=69'64 lcod 69'64 mlcod 69'64 active pruub 140.635696411s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[10.9( v 85'65 (0'0,85'65] local-lis/les=84/85 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.653984070s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=69'64 lcod 69'64 mlcod 0'0 unknown NOTIFY pruub 140.635696411s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[10.19( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.654791832s) [1] r=-1 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.635452271s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[10.1( v 69'64 (0'0,69'64] local-lis/les=84/85 n=1 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.653783798s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 active pruub 140.635818481s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[10.1( v 69'64 (0'0,69'64] local-lis/les=84/85 n=1 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.653738022s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.635818481s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[10.e( v 85'65 (0'0,85'65] local-lis/les=84/85 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.653500557s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=69'64 lcod 69'64 mlcod 69'64 active pruub 140.635711670s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[10.e( v 85'65 (0'0,85'65] local-lis/les=84/85 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.653439522s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=69'64 lcod 69'64 mlcod 0'0 unknown NOTIFY pruub 140.635711670s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 89 pg[10.d( empty local-lis/les=0/0 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 89 pg[10.1e( empty local-lis/les=0/0 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 89 pg[10.7( empty local-lis/les=0/0 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 89 pg[10.4( empty local-lis/les=0/0 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 89 pg[10.8( empty local-lis/les=0/0 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[10.2( v 69'64 (0'0,69'64] local-lis/les=84/85 n=1 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.652538300s) [1] r=-1 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 active pruub 140.635772705s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[10.2( v 69'64 (0'0,69'64] local-lis/les=84/85 n=1 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.652327538s) [1] r=-1 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.635772705s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[10.13( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.652333260s) [1] r=-1 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 active pruub 140.635833740s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[10.16( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.652057648s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 active pruub 140.635894775s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[10.13( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.652151108s) [1] r=-1 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.635833740s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[10.16( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.651999474s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.635894775s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[10.1a( empty local-lis/les=0/0 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89) [1] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[10.14( v 85'65 (0'0,85'65] local-lis/les=84/85 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.651645660s) [1] r=-1 lpr=89 pi=[84,89)/1 crt=69'64 lcod 69'64 mlcod 69'64 active pruub 140.635955811s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[10.15( v 85'65 (0'0,85'65] local-lis/les=84/85 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.651494980s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=69'64 lcod 69'64 mlcod 69'64 active pruub 140.635940552s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[10.15( v 85'65 (0'0,85'65] local-lis/les=84/85 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.651415825s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=69'64 lcod 69'64 mlcod 0'0 unknown NOTIFY pruub 140.635940552s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[10.17( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.651339531s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 active pruub 140.635910034s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[10.17( v 69'64 (0'0,69'64] local-lis/les=84/85 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.651246071s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.635910034s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 89 pg[10.9( empty local-lis/les=0/0 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[9.16( v 41'385 (0'0,41'385] local-lis/les=59/60 n=5 ec=44/34 lis/c=59/59 les/c/f=60/60/0 sis=89 pruub=10.323830605s) [0] r=-1 lpr=89 pi=[59,89)/1 crt=41'385 mlcod 0'0 active pruub 141.309204102s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[9.16( v 41'385 (0'0,41'385] local-lis/les=59/60 n=5 ec=44/34 lis/c=59/59 les/c/f=60/60/0 sis=89 pruub=10.323701859s) [0] r=-1 lpr=89 pi=[59,89)/1 crt=41'385 mlcod 0'0 unknown NOTIFY pruub 141.309204102s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 89 pg[10.1( empty local-lis/les=0/0 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 89 pg[10.e( empty local-lis/les=0/0 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 89 pg[10.14( v 85'65 (0'0,85'65] local-lis/les=84/85 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=9.651161194s) [1] r=-1 lpr=89 pi=[84,89)/1 crt=69'64 lcod 69'64 mlcod 0'0 unknown NOTIFY pruub 140.635955811s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:50 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 89 pg[10.15( empty local-lis/les=0/0 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 89 pg[10.17( empty local-lis/les=0/0 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[10.12( empty local-lis/les=0/0 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89) [1] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 89 pg[10.16( empty local-lis/les=0/0 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[10.f( empty local-lis/les=0/0 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89) [1] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 89 pg[9.16( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=59/59 les/c/f=60/60/0 sis=89) [0] r=0 lpr=89 pi=[59,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[10.19( empty local-lis/les=0/0 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89) [1] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[10.2( empty local-lis/les=0/0 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89) [1] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[10.13( empty local-lis/les=0/0 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89) [1] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:50 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 89 pg[10.14( empty local-lis/les=0/0 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89) [1] r=0 lpr=89 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:51 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:25:51 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:25:51 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:25:51 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:25:51 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Oct 14 04:25:51 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Oct 14 04:25:51 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Oct 14 04:25:51 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Oct 14 04:25:51 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Oct 14 04:25:51 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 90 pg[9.16( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=59/59 les/c/f=60/60/0 sis=90) [0]/[2] r=-1 lpr=90 pi=[59,90)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:51 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 90 pg[9.16( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=59/59 les/c/f=60/60/0 sis=90) [0]/[2] r=-1 lpr=90 pi=[59,90)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:51 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 14 04:25:51 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct 14 04:25:51 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 14 04:25:51 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:25:51 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:25:51 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 90 pg[10.1( v 69'64 (0'0,69'64] local-lis/les=89/90 n=1 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:51 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 90 pg[9.16( v 41'385 (0'0,41'385] local-lis/les=59/60 n=5 ec=44/34 lis/c=59/59 les/c/f=60/60/0 sis=90) [0]/[2] r=0 lpr=90 pi=[59,90)/1 crt=41'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:51 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 90 pg[9.16( v 41'385 (0'0,41'385] local-lis/les=59/60 n=5 ec=44/34 lis/c=59/59 les/c/f=60/60/0 sis=90) [0]/[2] r=0 lpr=90 pi=[59,90)/1 crt=41'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:51 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 90 pg[10.16( v 69'64 (0'0,69'64] local-lis/les=89/90 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:51 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 90 pg[11.1( v 69'2 (0'0,69'2] local-lis/les=89/90 n=1 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:51 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 90 pg[10.1e( v 69'64 (0'0,69'64] local-lis/les=89/90 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:51 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 90 pg[11.17( v 69'2 (0'0,69'2] local-lis/les=89/90 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:51 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 90 pg[10.e( v 85'65 lc 69'48 (0'0,85'65] local-lis/les=89/90 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=89 pi=[84,89)/1 crt=85'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:51 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 90 pg[11.f( v 69'2 (0'0,69'2] local-lis/les=89/90 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:51 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 90 pg[10.7( v 69'64 (0'0,69'64] local-lis/les=89/90 n=1 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:51 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 90 pg[10.17( v 69'64 (0'0,69'64] local-lis/les=89/90 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:51 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 90 pg[10.d( v 85'65 lc 69'50 (0'0,85'65] local-lis/les=89/90 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=89 pi=[84,89)/1 crt=85'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:51 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 90 pg[10.4( v 69'64 (0'0,69'64] local-lis/les=89/90 n=1 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:51 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 90 pg[11.14( v 69'2 (0'0,69'2] local-lis/les=89/90 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:51 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 90 pg[11.4( v 69'2 (0'0,69'2] local-lis/les=89/90 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:51 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 90 pg[10.15( v 85'65 lc 69'46 (0'0,85'65] local-lis/les=89/90 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=89 pi=[84,89)/1 crt=85'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:51 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 90 pg[11.19( v 69'2 (0'0,69'2] local-lis/les=89/90 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:51 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 90 pg[11.6( v 69'2 (0'0,69'2] local-lis/les=89/90 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:51 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 90 pg[11.10( v 69'2 (0'0,69'2] local-lis/les=89/90 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:51 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 90 pg[11.e( v 69'2 (0'0,69'2] local-lis/les=89/90 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:51 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 90 pg[10.9( v 85'65 lc 69'56 (0'0,85'65] local-lis/les=89/90 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=89 pi=[84,89)/1 crt=85'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:51 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 90 pg[10.8( v 69'64 (0'0,69'64] local-lis/les=89/90 n=1 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:51 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 90 pg[10.10( v 69'64 (0'0,69'64] local-lis/les=89/90 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89) [1] r=0 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:51 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 90 pg[10.f( v 69'64 (0'0,69'64] local-lis/les=89/90 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89) [1] r=0 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:51 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 90 pg[10.13( v 69'64 (0'0,69'64] local-lis/les=89/90 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89) [1] r=0 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:51 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 90 pg[10.12( v 69'64 (0'0,69'64] local-lis/les=89/90 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89) [1] r=0 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:51 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 90 pg[10.14( v 85'65 lc 69'54 (0'0,85'65] local-lis/les=89/90 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89) [1] r=0 lpr=89 pi=[84,89)/1 crt=85'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:51 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 90 pg[10.2( v 69'64 (0'0,69'64] local-lis/les=89/90 n=1 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89) [1] r=0 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:51 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 90 pg[10.19( v 69'64 (0'0,69'64] local-lis/les=89/90 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89) [1] r=0 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:51 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 90 pg[10.6( v 69'64 (0'0,69'64] local-lis/les=89/90 n=1 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89) [1] r=0 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:51 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 90 pg[10.b( v 69'64 (0'0,69'64] local-lis/les=89/90 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89) [1] r=0 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:51 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 90 pg[10.11( v 69'64 (0'0,69'64] local-lis/les=89/90 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89) [1] r=0 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:51 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 90 pg[10.1a( v 69'64 (0'0,69'64] local-lis/les=89/90 n=0 ec=84/36 lis/c=84/84 les/c/f=85/85/0 sis=89) [1] r=0 lpr=89 pi=[84,89)/1 crt=69'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:51 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 90 pg[11.15( v 69'2 (0'0,69'2] local-lis/les=89/90 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:51 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 90 pg[11.12( v 69'2 (0'0,69'2] local-lis/les=89/90 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:51 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 90 pg[11.d( v 69'2 (0'0,69'2] local-lis/les=89/90 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:51 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 90 pg[11.11( v 69'2 (0'0,69'2] local-lis/les=89/90 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:51 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 90 pg[11.1a( v 69'2 (0'0,69'2] local-lis/les=89/90 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:51 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 90 pg[11.9( v 69'2 lc 0'0 (0'0,69'2] local-lis/les=89/90 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=89 pi=[84,89)/1 crt=69'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:51 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 90 pg[11.2( v 69'2 (0'0,69'2] local-lis/les=89/90 n=1 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:51 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 90 pg[11.1f( v 69'2 (0'0,69'2] local-lis/les=89/90 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:51 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 90 pg[11.1c( v 69'2 lc 0'0 (0'0,69'2] local-lis/les=89/90 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=89 pi=[84,89)/1 crt=69'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:51 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 90 pg[11.b( v 69'2 (0'0,69'2] local-lis/les=89/90 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:51 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 90 pg[11.1e( v 69'2 (0'0,69'2] local-lis/les=89/90 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:51 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 90 pg[11.18( v 69'2 (0'0,69'2] local-lis/les=89/90 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:51 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 90 pg[11.1b( v 69'2 (0'0,69'2] local-lis/les=89/90 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:51 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 90 pg[11.8( v 69'2 (0'0,69'2] local-lis/les=89/90 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:51 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 90 pg[11.3( v 69'2 (0'0,69'2] local-lis/les=89/90 n=0 ec=84/38 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=89 pi=[84,89)/1 crt=69'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:25:52 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Oct 14 04:25:52 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Oct 14 04:25:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:25:52 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:25:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 04:25:52 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:25:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 04:25:52 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:25:52 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 6b6b26aa-2415-4cf7-8a5b-b970622164bb does not exist
Oct 14 04:25:52 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 0361c65a-4c32-46c4-ae0f-9dec103fc64a does not exist
Oct 14 04:25:52 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev aaac1143-381f-44b1-9eec-5624acfba558 does not exist
Oct 14 04:25:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 04:25:52 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 04:25:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 04:25:52 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:25:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:25:52 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:25:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v181: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Oct 14 04:25:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Oct 14 04:25:52 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct 14 04:25:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Oct 14 04:25:52 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct 14 04:25:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Oct 14 04:25:52 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Oct 14 04:25:52 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:25:52 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:25:52 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:25:52 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct 14 04:25:52 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 91 pg[9.16( v 41'385 (0'0,41'385] local-lis/les=90/91 n=5 ec=44/34 lis/c=59/59 les/c/f=60/60/0 sis=90) [0]/[2] async=[0] r=0 lpr=90 pi=[59,90)/1 crt=41'385 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:52 np0005486808 podman[104972]: 2025-10-14 08:25:52.798855054 +0000 UTC m=+0.040088759 container create 3698c75703d9fdb97fcd3523bc8ccca00da37c8f76fe837b15fd73a86bdd44b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_darwin, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:25:52 np0005486808 systemd[1]: Started libpod-conmon-3698c75703d9fdb97fcd3523bc8ccca00da37c8f76fe837b15fd73a86bdd44b6.scope.
Oct 14 04:25:52 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:25:52 np0005486808 podman[104972]: 2025-10-14 08:25:52.864288427 +0000 UTC m=+0.105522132 container init 3698c75703d9fdb97fcd3523bc8ccca00da37c8f76fe837b15fd73a86bdd44b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 14 04:25:52 np0005486808 podman[104972]: 2025-10-14 08:25:52.875653097 +0000 UTC m=+0.116886782 container start 3698c75703d9fdb97fcd3523bc8ccca00da37c8f76fe837b15fd73a86bdd44b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_darwin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:25:52 np0005486808 podman[104972]: 2025-10-14 08:25:52.782819999 +0000 UTC m=+0.024053694 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:25:52 np0005486808 crazy_darwin[104988]: 167 167
Oct 14 04:25:52 np0005486808 systemd[1]: libpod-3698c75703d9fdb97fcd3523bc8ccca00da37c8f76fe837b15fd73a86bdd44b6.scope: Deactivated successfully.
Oct 14 04:25:52 np0005486808 podman[104972]: 2025-10-14 08:25:52.880023585 +0000 UTC m=+0.121257280 container attach 3698c75703d9fdb97fcd3523bc8ccca00da37c8f76fe837b15fd73a86bdd44b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:25:52 np0005486808 podman[104972]: 2025-10-14 08:25:52.880701882 +0000 UTC m=+0.121935587 container died 3698c75703d9fdb97fcd3523bc8ccca00da37c8f76fe837b15fd73a86bdd44b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 14 04:25:52 np0005486808 systemd[1]: var-lib-containers-storage-overlay-95c5c1ed31ebcd297f300c86a60bb0fab58e09ed0c72b802faa07b7aa227f6b1-merged.mount: Deactivated successfully.
Oct 14 04:25:52 np0005486808 podman[104972]: 2025-10-14 08:25:52.919449477 +0000 UTC m=+0.160683172 container remove 3698c75703d9fdb97fcd3523bc8ccca00da37c8f76fe837b15fd73a86bdd44b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_darwin, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:25:52 np0005486808 systemd[1]: libpod-conmon-3698c75703d9fdb97fcd3523bc8ccca00da37c8f76fe837b15fd73a86bdd44b6.scope: Deactivated successfully.
Oct 14 04:25:53 np0005486808 podman[105012]: 2025-10-14 08:25:53.096239615 +0000 UTC m=+0.051335406 container create 1f56de3835e9d056b0317f84ccd69e2bc967ee72e1718ebf0ee1fe78e87c9d44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lovelace, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:25:53 np0005486808 systemd[1]: Started libpod-conmon-1f56de3835e9d056b0317f84ccd69e2bc967ee72e1718ebf0ee1fe78e87c9d44.scope.
Oct 14 04:25:53 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:25:53 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2557123baa9c039c83b5ea8930b6719a5c22297054be7485ff12822268c2eae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:25:53 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2557123baa9c039c83b5ea8930b6719a5c22297054be7485ff12822268c2eae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:25:53 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2557123baa9c039c83b5ea8930b6719a5c22297054be7485ff12822268c2eae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:25:53 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2557123baa9c039c83b5ea8930b6719a5c22297054be7485ff12822268c2eae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:25:53 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2557123baa9c039c83b5ea8930b6719a5c22297054be7485ff12822268c2eae/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:25:53 np0005486808 podman[105012]: 2025-10-14 08:25:53.073128655 +0000 UTC m=+0.028224456 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:25:53 np0005486808 podman[105012]: 2025-10-14 08:25:53.180235305 +0000 UTC m=+0.135331096 container init 1f56de3835e9d056b0317f84ccd69e2bc967ee72e1718ebf0ee1fe78e87c9d44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 14 04:25:53 np0005486808 podman[105012]: 2025-10-14 08:25:53.189830982 +0000 UTC m=+0.144926763 container start 1f56de3835e9d056b0317f84ccd69e2bc967ee72e1718ebf0ee1fe78e87c9d44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 14 04:25:53 np0005486808 podman[105012]: 2025-10-14 08:25:53.193458291 +0000 UTC m=+0.148554082 container attach 1f56de3835e9d056b0317f84ccd69e2bc967ee72e1718ebf0ee1fe78e87c9d44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:25:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Oct 14 04:25:53 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct 14 04:25:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Oct 14 04:25:53 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 92 pg[9.16( v 41'385 (0'0,41'385] local-lis/les=90/91 n=5 ec=44/34 lis/c=90/59 les/c/f=91/60/0 sis=92 pruub=14.983633995s) [0] async=[0] r=-1 lpr=92 pi=[59,92)/1 crt=41'385 mlcod 41'385 active pruub 148.995315552s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:53 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 92 pg[9.16( v 41'385 (0'0,41'385] local-lis/les=90/91 n=5 ec=44/34 lis/c=90/59 les/c/f=91/60/0 sis=92 pruub=14.983551979s) [0] r=-1 lpr=92 pi=[59,92)/1 crt=41'385 mlcod 0'0 unknown NOTIFY pruub 148.995315552s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:53 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Oct 14 04:25:53 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 92 pg[9.16( v 41'385 (0'0,41'385] local-lis/les=0/0 n=5 ec=44/34 lis/c=90/59 les/c/f=91/60/0 sis=92) [0] r=0 lpr=92 pi=[59,92)/1 luod=0'0 crt=41'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:53 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 92 pg[9.16( v 41'385 (0'0,41'385] local-lis/les=0/0 n=5 ec=44/34 lis/c=90/59 les/c/f=91/60/0 sis=92) [0] r=0 lpr=92 pi=[59,92)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:54 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 8.6 deep-scrub starts
Oct 14 04:25:54 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 8.6 deep-scrub ok
Oct 14 04:25:54 np0005486808 jolly_lovelace[105029]: --> passed data devices: 0 physical, 3 LVM
Oct 14 04:25:54 np0005486808 jolly_lovelace[105029]: --> relative data size: 1.0
Oct 14 04:25:54 np0005486808 jolly_lovelace[105029]: --> All data devices are unavailable
Oct 14 04:25:54 np0005486808 systemd[1]: libpod-1f56de3835e9d056b0317f84ccd69e2bc967ee72e1718ebf0ee1fe78e87c9d44.scope: Deactivated successfully.
Oct 14 04:25:54 np0005486808 systemd[1]: libpod-1f56de3835e9d056b0317f84ccd69e2bc967ee72e1718ebf0ee1fe78e87c9d44.scope: Consumed 1.075s CPU time.
Oct 14 04:25:54 np0005486808 conmon[105029]: conmon 1f56de3835e9d056b031 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1f56de3835e9d056b0317f84ccd69e2bc967ee72e1718ebf0ee1fe78e87c9d44.scope/container/memory.events
Oct 14 04:25:54 np0005486808 podman[105012]: 2025-10-14 08:25:54.319336963 +0000 UTC m=+1.274432764 container died 1f56de3835e9d056b0317f84ccd69e2bc967ee72e1718ebf0ee1fe78e87c9d44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lovelace, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:25:54 np0005486808 systemd[1]: var-lib-containers-storage-overlay-f2557123baa9c039c83b5ea8930b6719a5c22297054be7485ff12822268c2eae-merged.mount: Deactivated successfully.
Oct 14 04:25:54 np0005486808 podman[105012]: 2025-10-14 08:25:54.385421802 +0000 UTC m=+1.340517543 container remove 1f56de3835e9d056b0317f84ccd69e2bc967ee72e1718ebf0ee1fe78e87c9d44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lovelace, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 14 04:25:54 np0005486808 systemd[1]: libpod-conmon-1f56de3835e9d056b0317f84ccd69e2bc967ee72e1718ebf0ee1fe78e87c9d44.scope: Deactivated successfully.
Oct 14 04:25:54 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Oct 14 04:25:54 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Oct 14 04:25:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v184: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:25:54 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Oct 14 04:25:54 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct 14 04:25:54 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Oct 14 04:25:54 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct 14 04:25:54 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct 14 04:25:54 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Oct 14 04:25:54 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Oct 14 04:25:54 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 93 pg[9.16( v 41'385 (0'0,41'385] local-lis/les=92/93 n=5 ec=44/34 lis/c=90/59 les/c/f=91/60/0 sis=92) [0] r=0 lpr=92 pi=[59,92)/1 crt=41'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:55 np0005486808 podman[105212]: 2025-10-14 08:25:55.01779435 +0000 UTC m=+0.048461736 container create 852cda361c438e4559126f5f865e415b0ba9a458da984c87097848832e9ca7e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_wing, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 04:25:55 np0005486808 systemd[1]: Started libpod-conmon-852cda361c438e4559126f5f865e415b0ba9a458da984c87097848832e9ca7e2.scope.
Oct 14 04:25:55 np0005486808 podman[105212]: 2025-10-14 08:25:54.992754962 +0000 UTC m=+0.023422338 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:25:55 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:25:55 np0005486808 podman[105212]: 2025-10-14 08:25:55.118196305 +0000 UTC m=+0.148863721 container init 852cda361c438e4559126f5f865e415b0ba9a458da984c87097848832e9ca7e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_wing, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:25:55 np0005486808 podman[105212]: 2025-10-14 08:25:55.129229677 +0000 UTC m=+0.159897063 container start 852cda361c438e4559126f5f865e415b0ba9a458da984c87097848832e9ca7e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_wing, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:25:55 np0005486808 podman[105212]: 2025-10-14 08:25:55.134214019 +0000 UTC m=+0.164881405 container attach 852cda361c438e4559126f5f865e415b0ba9a458da984c87097848832e9ca7e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:25:55 np0005486808 keen_wing[105228]: 167 167
Oct 14 04:25:55 np0005486808 systemd[1]: libpod-852cda361c438e4559126f5f865e415b0ba9a458da984c87097848832e9ca7e2.scope: Deactivated successfully.
Oct 14 04:25:55 np0005486808 podman[105212]: 2025-10-14 08:25:55.136989548 +0000 UTC m=+0.167656944 container died 852cda361c438e4559126f5f865e415b0ba9a458da984c87097848832e9ca7e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 14 04:25:55 np0005486808 systemd[1]: var-lib-containers-storage-overlay-0893a26879e1fa664c59a678d57da21bbf52fe9f13d4633f4d36b878f50eb7da-merged.mount: Deactivated successfully.
Oct 14 04:25:55 np0005486808 podman[105212]: 2025-10-14 08:25:55.180058129 +0000 UTC m=+0.210725505 container remove 852cda361c438e4559126f5f865e415b0ba9a458da984c87097848832e9ca7e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:25:55 np0005486808 systemd[1]: libpod-conmon-852cda361c438e4559126f5f865e415b0ba9a458da984c87097848832e9ca7e2.scope: Deactivated successfully.
Oct 14 04:25:55 np0005486808 podman[105250]: 2025-10-14 08:25:55.424581687 +0000 UTC m=+0.073776220 container create eae35b006b316f97aa73da18408fa53599167d3f79ce06e30c565be972f7ee65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 14 04:25:55 np0005486808 systemd[1]: Started libpod-conmon-eae35b006b316f97aa73da18408fa53599167d3f79ce06e30c565be972f7ee65.scope.
Oct 14 04:25:55 np0005486808 podman[105250]: 2025-10-14 08:25:55.393635184 +0000 UTC m=+0.042829757 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:25:55 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:25:55 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7df9220cbfe586cb12ca085397c14eaefba695ea013406f9bcb50f19bfd47465/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:25:55 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7df9220cbfe586cb12ca085397c14eaefba695ea013406f9bcb50f19bfd47465/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:25:55 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7df9220cbfe586cb12ca085397c14eaefba695ea013406f9bcb50f19bfd47465/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:25:55 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7df9220cbfe586cb12ca085397c14eaefba695ea013406f9bcb50f19bfd47465/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:25:55 np0005486808 podman[105250]: 2025-10-14 08:25:55.531492132 +0000 UTC m=+0.180686725 container init eae35b006b316f97aa73da18408fa53599167d3f79ce06e30c565be972f7ee65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_varahamihira, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 14 04:25:55 np0005486808 podman[105250]: 2025-10-14 08:25:55.545759474 +0000 UTC m=+0.194953977 container start eae35b006b316f97aa73da18408fa53599167d3f79ce06e30c565be972f7ee65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_varahamihira, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 14 04:25:55 np0005486808 podman[105250]: 2025-10-14 08:25:55.549289191 +0000 UTC m=+0.198483714 container attach eae35b006b316f97aa73da18408fa53599167d3f79ce06e30c565be972f7ee65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_varahamihira, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 14 04:25:55 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct 14 04:25:56 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Oct 14 04:25:56 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]: {
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:    "0": [
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:        {
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:            "devices": [
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:                "/dev/loop3"
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:            ],
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:            "lv_name": "ceph_lv0",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:            "lv_size": "21470642176",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:            "name": "ceph_lv0",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:            "tags": {
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:                "ceph.cluster_name": "ceph",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:                "ceph.crush_device_class": "",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:                "ceph.encrypted": "0",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:                "ceph.osd_id": "0",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:                "ceph.type": "block",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:                "ceph.vdo": "0"
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:            },
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:            "type": "block",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:            "vg_name": "ceph_vg0"
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:        }
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:    ],
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:    "1": [
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:        {
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:            "devices": [
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:                "/dev/loop4"
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:            ],
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:            "lv_name": "ceph_lv1",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:            "lv_size": "21470642176",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:            "name": "ceph_lv1",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:            "tags": {
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:                "ceph.cluster_name": "ceph",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:                "ceph.crush_device_class": "",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:                "ceph.encrypted": "0",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:                "ceph.osd_id": "1",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:                "ceph.type": "block",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:                "ceph.vdo": "0"
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:            },
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:            "type": "block",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:            "vg_name": "ceph_vg1"
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:        }
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:    ],
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:    "2": [
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:        {
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:            "devices": [
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:                "/dev/loop5"
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:            ],
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:            "lv_name": "ceph_lv2",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:            "lv_size": "21470642176",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:            "name": "ceph_lv2",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:            "tags": {
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:                "ceph.cluster_name": "ceph",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:                "ceph.crush_device_class": "",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:                "ceph.encrypted": "0",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:                "ceph.osd_id": "2",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:                "ceph.type": "block",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:                "ceph.vdo": "0"
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:            },
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:            "type": "block",
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:            "vg_name": "ceph_vg2"
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:        }
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]:    ]
Oct 14 04:25:56 np0005486808 infallible_varahamihira[105267]: }
Oct 14 04:25:56 np0005486808 systemd[1]: libpod-eae35b006b316f97aa73da18408fa53599167d3f79ce06e30c565be972f7ee65.scope: Deactivated successfully.
Oct 14 04:25:56 np0005486808 podman[105250]: 2025-10-14 08:25:56.345165569 +0000 UTC m=+0.994360092 container died eae35b006b316f97aa73da18408fa53599167d3f79ce06e30c565be972f7ee65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:25:56 np0005486808 systemd[1]: var-lib-containers-storage-overlay-7df9220cbfe586cb12ca085397c14eaefba695ea013406f9bcb50f19bfd47465-merged.mount: Deactivated successfully.
Oct 14 04:25:56 np0005486808 podman[105250]: 2025-10-14 08:25:56.40603911 +0000 UTC m=+1.055233603 container remove eae35b006b316f97aa73da18408fa53599167d3f79ce06e30c565be972f7ee65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_varahamihira, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:25:56 np0005486808 systemd[1]: libpod-conmon-eae35b006b316f97aa73da18408fa53599167d3f79ce06e30c565be972f7ee65.scope: Deactivated successfully.
Oct 14 04:25:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v186: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 98 B/s, 2 objects/s recovering
Oct 14 04:25:56 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Oct 14 04:25:56 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct 14 04:25:56 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Oct 14 04:25:56 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct 14 04:25:56 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Oct 14 04:25:56 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Oct 14 04:25:56 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 94 pg[9.19( v 41'385 (0'0,41'385] local-lis/les=50/51 n=5 ec=44/34 lis/c=50/50 les/c/f=51/51/0 sis=94 pruub=14.289186478s) [2] r=-1 lpr=94 pi=[50,94)/1 crt=41'385 mlcod 0'0 active pruub 161.898910522s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:56 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 94 pg[9.19( v 41'385 (0'0,41'385] local-lis/les=50/51 n=5 ec=44/34 lis/c=50/50 les/c/f=51/51/0 sis=94 pruub=14.289136887s) [2] r=-1 lpr=94 pi=[50,94)/1 crt=41'385 mlcod 0'0 unknown NOTIFY pruub 161.898910522s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:56 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct 14 04:25:56 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 94 pg[9.19( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=50/50 les/c/f=51/51/0 sis=94) [2] r=0 lpr=94 pi=[50,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:25:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Oct 14 04:25:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Oct 14 04:25:57 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Oct 14 04:25:57 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 95 pg[9.19( v 41'385 (0'0,41'385] local-lis/les=50/51 n=5 ec=44/34 lis/c=50/50 les/c/f=51/51/0 sis=95) [2]/[0] r=0 lpr=95 pi=[50,95)/1 crt=41'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:57 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 95 pg[9.19( v 41'385 (0'0,41'385] local-lis/les=50/51 n=5 ec=44/34 lis/c=50/50 les/c/f=51/51/0 sis=95) [2]/[0] r=0 lpr=95 pi=[50,95)/1 crt=41'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:57 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 95 pg[9.19( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=50/50 les/c/f=51/51/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[50,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:57 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 95 pg[9.19( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=50/50 les/c/f=51/51/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[50,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:57 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 3.f scrub starts
Oct 14 04:25:57 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 3.f scrub ok
Oct 14 04:25:57 np0005486808 podman[105426]: 2025-10-14 08:25:57.21703944 +0000 UTC m=+0.073224276 container create 4282e48ae5772e09a26da34bf28ab0ce8c69fb37d77cb4555e965ee314d7b55f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bartik, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:25:57 np0005486808 systemd[1]: Started libpod-conmon-4282e48ae5772e09a26da34bf28ab0ce8c69fb37d77cb4555e965ee314d7b55f.scope.
Oct 14 04:25:57 np0005486808 podman[105426]: 2025-10-14 08:25:57.186977678 +0000 UTC m=+0.043162564 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:25:57 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:25:57 np0005486808 podman[105426]: 2025-10-14 08:25:57.311948299 +0000 UTC m=+0.168133135 container init 4282e48ae5772e09a26da34bf28ab0ce8c69fb37d77cb4555e965ee314d7b55f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bartik, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:25:57 np0005486808 podman[105426]: 2025-10-14 08:25:57.322841278 +0000 UTC m=+0.179026124 container start 4282e48ae5772e09a26da34bf28ab0ce8c69fb37d77cb4555e965ee314d7b55f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bartik, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:25:57 np0005486808 podman[105426]: 2025-10-14 08:25:57.327031781 +0000 UTC m=+0.183216627 container attach 4282e48ae5772e09a26da34bf28ab0ce8c69fb37d77cb4555e965ee314d7b55f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 04:25:57 np0005486808 nifty_bartik[105442]: 167 167
Oct 14 04:25:57 np0005486808 systemd[1]: libpod-4282e48ae5772e09a26da34bf28ab0ce8c69fb37d77cb4555e965ee314d7b55f.scope: Deactivated successfully.
Oct 14 04:25:57 np0005486808 podman[105426]: 2025-10-14 08:25:57.331180973 +0000 UTC m=+0.187365829 container died 4282e48ae5772e09a26da34bf28ab0ce8c69fb37d77cb4555e965ee314d7b55f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bartik, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 14 04:25:57 np0005486808 systemd[1]: var-lib-containers-storage-overlay-89350ef338adbd83970ad8061962ba6d926eee95bb2f229c2d297e292c301f56-merged.mount: Deactivated successfully.
Oct 14 04:25:57 np0005486808 podman[105426]: 2025-10-14 08:25:57.387692056 +0000 UTC m=+0.243876902 container remove 4282e48ae5772e09a26da34bf28ab0ce8c69fb37d77cb4555e965ee314d7b55f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bartik, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 14 04:25:57 np0005486808 systemd[1]: libpod-conmon-4282e48ae5772e09a26da34bf28ab0ce8c69fb37d77cb4555e965ee314d7b55f.scope: Deactivated successfully.
Oct 14 04:25:57 np0005486808 podman[105467]: 2025-10-14 08:25:57.605779962 +0000 UTC m=+0.051786788 container create 16c8c27340929e7fb31d02fe2891fbc223fef26f7e68657d51b2d7def2d31f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_knuth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:25:57 np0005486808 systemd[1]: Started libpod-conmon-16c8c27340929e7fb31d02fe2891fbc223fef26f7e68657d51b2d7def2d31f81.scope.
Oct 14 04:25:57 np0005486808 podman[105467]: 2025-10-14 08:25:57.578424778 +0000 UTC m=+0.024431684 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:25:57 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:25:57 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d623ec78d45e85b1b46bc3efa02846ee61da7c6daf9e82ed17255faf4f3dbc1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:25:57 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d623ec78d45e85b1b46bc3efa02846ee61da7c6daf9e82ed17255faf4f3dbc1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:25:57 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d623ec78d45e85b1b46bc3efa02846ee61da7c6daf9e82ed17255faf4f3dbc1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:25:57 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d623ec78d45e85b1b46bc3efa02846ee61da7c6daf9e82ed17255faf4f3dbc1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:25:57 np0005486808 podman[105467]: 2025-10-14 08:25:57.71158198 +0000 UTC m=+0.157588896 container init 16c8c27340929e7fb31d02fe2891fbc223fef26f7e68657d51b2d7def2d31f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_knuth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 04:25:57 np0005486808 podman[105467]: 2025-10-14 08:25:57.725699448 +0000 UTC m=+0.171706304 container start 16c8c27340929e7fb31d02fe2891fbc223fef26f7e68657d51b2d7def2d31f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_knuth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 04:25:57 np0005486808 podman[105467]: 2025-10-14 08:25:57.729802849 +0000 UTC m=+0.175809735 container attach 16c8c27340929e7fb31d02fe2891fbc223fef26f7e68657d51b2d7def2d31f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Oct 14 04:25:57 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct 14 04:25:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Oct 14 04:25:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Oct 14 04:25:58 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Oct 14 04:25:58 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 96 pg[9.19( v 41'385 (0'0,41'385] local-lis/les=95/96 n=5 ec=44/34 lis/c=50/50 les/c/f=51/51/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[50,95)/1 crt=41'385 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:25:58 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Oct 14 04:25:58 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Oct 14 04:25:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v190: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 121 B/s, 2 objects/s recovering
Oct 14 04:25:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Oct 14 04:25:58 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct 14 04:25:58 np0005486808 peaceful_knuth[105483]: {
Oct 14 04:25:58 np0005486808 peaceful_knuth[105483]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 04:25:58 np0005486808 peaceful_knuth[105483]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:25:58 np0005486808 peaceful_knuth[105483]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 04:25:58 np0005486808 peaceful_knuth[105483]:        "osd_id": 2,
Oct 14 04:25:58 np0005486808 peaceful_knuth[105483]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:25:58 np0005486808 peaceful_knuth[105483]:        "type": "bluestore"
Oct 14 04:25:58 np0005486808 peaceful_knuth[105483]:    },
Oct 14 04:25:58 np0005486808 peaceful_knuth[105483]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 04:25:58 np0005486808 peaceful_knuth[105483]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:25:58 np0005486808 peaceful_knuth[105483]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 04:25:58 np0005486808 peaceful_knuth[105483]:        "osd_id": 1,
Oct 14 04:25:58 np0005486808 peaceful_knuth[105483]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:25:58 np0005486808 peaceful_knuth[105483]:        "type": "bluestore"
Oct 14 04:25:58 np0005486808 peaceful_knuth[105483]:    },
Oct 14 04:25:58 np0005486808 peaceful_knuth[105483]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 04:25:58 np0005486808 peaceful_knuth[105483]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:25:58 np0005486808 peaceful_knuth[105483]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 04:25:58 np0005486808 peaceful_knuth[105483]:        "osd_id": 0,
Oct 14 04:25:58 np0005486808 peaceful_knuth[105483]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:25:58 np0005486808 peaceful_knuth[105483]:        "type": "bluestore"
Oct 14 04:25:58 np0005486808 peaceful_knuth[105483]:    }
Oct 14 04:25:58 np0005486808 peaceful_knuth[105483]: }
Oct 14 04:25:58 np0005486808 systemd[1]: libpod-16c8c27340929e7fb31d02fe2891fbc223fef26f7e68657d51b2d7def2d31f81.scope: Deactivated successfully.
Oct 14 04:25:58 np0005486808 systemd[1]: libpod-16c8c27340929e7fb31d02fe2891fbc223fef26f7e68657d51b2d7def2d31f81.scope: Consumed 1.058s CPU time.
Oct 14 04:25:58 np0005486808 podman[105467]: 2025-10-14 08:25:58.778312704 +0000 UTC m=+1.224319560 container died 16c8c27340929e7fb31d02fe2891fbc223fef26f7e68657d51b2d7def2d31f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 14 04:25:58 np0005486808 systemd[1]: var-lib-containers-storage-overlay-7d623ec78d45e85b1b46bc3efa02846ee61da7c6daf9e82ed17255faf4f3dbc1-merged.mount: Deactivated successfully.
Oct 14 04:25:58 np0005486808 podman[105467]: 2025-10-14 08:25:58.846915506 +0000 UTC m=+1.292922372 container remove 16c8c27340929e7fb31d02fe2891fbc223fef26f7e68657d51b2d7def2d31f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_knuth, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:25:58 np0005486808 systemd[1]: libpod-conmon-16c8c27340929e7fb31d02fe2891fbc223fef26f7e68657d51b2d7def2d31f81.scope: Deactivated successfully.
Oct 14 04:25:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:25:58 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:25:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:25:58 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:25:58 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev ff336629-17df-4320-a298-b0598163e105 does not exist
Oct 14 04:25:58 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev e3ce5f0a-300f-481a-97bb-47640bc161c7 does not exist
Oct 14 04:25:59 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Oct 14 04:25:59 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct 14 04:25:59 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:25:59 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:25:59 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct 14 04:25:59 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Oct 14 04:25:59 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Oct 14 04:25:59 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 97 pg[9.19( v 41'385 (0'0,41'385] local-lis/les=0/0 n=5 ec=44/34 lis/c=95/50 les/c/f=96/51/0 sis=97) [2] r=0 lpr=97 pi=[50,97)/1 luod=0'0 crt=41'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:59 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 97 pg[9.19( v 41'385 (0'0,41'385] local-lis/les=0/0 n=5 ec=44/34 lis/c=95/50 les/c/f=96/51/0 sis=97) [2] r=0 lpr=97 pi=[50,97)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:25:59 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 97 pg[9.19( v 41'385 (0'0,41'385] local-lis/les=95/96 n=5 ec=44/34 lis/c=95/50 les/c/f=96/51/0 sis=97 pruub=14.987811089s) [2] async=[2] r=-1 lpr=97 pi=[50,97)/1 crt=41'385 mlcod 41'385 active pruub 164.877822876s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:25:59 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 97 pg[9.19( v 41'385 (0'0,41'385] local-lis/les=95/96 n=5 ec=44/34 lis/c=95/50 les/c/f=96/51/0 sis=97 pruub=14.987705231s) [2] r=-1 lpr=97 pi=[50,97)/1 crt=41'385 mlcod 0'0 unknown NOTIFY pruub 164.877822876s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:25:59 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 8.9 deep-scrub starts
Oct 14 04:25:59 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 8.9 deep-scrub ok
Oct 14 04:25:59 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Oct 14 04:25:59 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Oct 14 04:26:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Oct 14 04:26:00 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct 14 04:26:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Oct 14 04:26:00 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Oct 14 04:26:00 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 98 pg[9.19( v 41'385 (0'0,41'385] local-lis/les=97/98 n=5 ec=44/34 lis/c=95/50 les/c/f=96/51/0 sis=97) [2] r=0 lpr=97 pi=[50,97)/1 crt=41'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:26:00 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 8.f scrub starts
Oct 14 04:26:00 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 8.f scrub ok
Oct 14 04:26:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v193: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 28 B/s, 1 objects/s recovering
Oct 14 04:26:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Oct 14 04:26:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct 14 04:26:01 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Oct 14 04:26:01 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct 14 04:26:01 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Oct 14 04:26:01 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Oct 14 04:26:01 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct 14 04:26:01 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Oct 14 04:26:01 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Oct 14 04:26:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:26:02 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct 14 04:26:02 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 7.c scrub starts
Oct 14 04:26:02 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 7.c scrub ok
Oct 14 04:26:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v195: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 23 B/s, 1 objects/s recovering
Oct 14 04:26:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Oct 14 04:26:02 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct 14 04:26:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:26:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:26:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:26:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:26:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:26:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:26:02 np0005486808 systemd-logind[799]: New session 36 of user zuul.
Oct 14 04:26:02 np0005486808 systemd[1]: Started Session 36 of User zuul.
Oct 14 04:26:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Oct 14 04:26:03 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct 14 04:26:03 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct 14 04:26:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Oct 14 04:26:03 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Oct 14 04:26:03 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Oct 14 04:26:03 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Oct 14 04:26:03 np0005486808 python3.9[105733]: ansible-ansible.legacy.ping Invoked with data=pong
Oct 14 04:26:04 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct 14 04:26:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v197: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 19 B/s, 1 objects/s recovering
Oct 14 04:26:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Oct 14 04:26:04 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct 14 04:26:05 np0005486808 python3.9[105907]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 04:26:05 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Oct 14 04:26:05 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Oct 14 04:26:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Oct 14 04:26:05 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct 14 04:26:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct 14 04:26:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Oct 14 04:26:05 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Oct 14 04:26:05 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 100 pg[9.1c( v 41'385 (0'0,41'385] local-lis/les=71/72 n=5 ec=44/34 lis/c=71/71 les/c/f=72/72/0 sis=100 pruub=15.932718277s) [0] r=-1 lpr=100 pi=[71,100)/1 crt=41'385 mlcod 0'0 active pruub 161.328903198s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:26:05 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 101 pg[9.1c( v 41'385 (0'0,41'385] local-lis/les=71/72 n=5 ec=44/34 lis/c=71/71 les/c/f=72/72/0 sis=100 pruub=15.932584763s) [0] r=-1 lpr=100 pi=[71,100)/1 crt=41'385 mlcod 0'0 unknown NOTIFY pruub 161.328903198s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:26:05 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 101 pg[9.1c( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=71/71 les/c/f=72/72/0 sis=100) [0] r=0 lpr=101 pi=[71,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:26:06 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 8.c scrub starts
Oct 14 04:26:06 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 8.c scrub ok
Oct 14 04:26:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Oct 14 04:26:06 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct 14 04:26:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Oct 14 04:26:06 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Oct 14 04:26:06 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 102 pg[9.1c( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=71/71 les/c/f=72/72/0 sis=102) [0]/[2] r=-1 lpr=102 pi=[71,102)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:26:06 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 102 pg[9.1c( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=71/71 les/c/f=72/72/0 sis=102) [0]/[2] r=-1 lpr=102 pi=[71,102)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 14 04:26:06 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 102 pg[9.1c( v 41'385 (0'0,41'385] local-lis/les=71/72 n=5 ec=44/34 lis/c=71/71 les/c/f=72/72/0 sis=102) [0]/[2] r=0 lpr=102 pi=[71,102)/1 crt=41'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:26:06 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 102 pg[9.1c( v 41'385 (0'0,41'385] local-lis/les=71/72 n=5 ec=44/34 lis/c=71/71 les/c/f=72/72/0 sis=102) [0]/[2] r=0 lpr=102 pi=[71,102)/1 crt=41'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 14 04:26:06 np0005486808 python3.9[106063]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:26:06 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Oct 14 04:26:06 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Oct 14 04:26:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v200: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:26:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Oct 14 04:26:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct 14 04:26:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:26:07 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 7.3 deep-scrub starts
Oct 14 04:26:07 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 7.3 deep-scrub ok
Oct 14 04:26:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Oct 14 04:26:07 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct 14 04:26:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Oct 14 04:26:07 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct 14 04:26:07 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Oct 14 04:26:07 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 103 pg[9.1e( v 41'385 (0'0,41'385] local-lis/les=59/60 n=5 ec=44/34 lis/c=59/59 les/c/f=60/60/0 sis=103 pruub=9.870775223s) [0] r=-1 lpr=103 pi=[59,103)/1 crt=41'385 mlcod 0'0 active pruub 157.303405762s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:26:07 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 103 pg[9.1e( v 41'385 (0'0,41'385] local-lis/les=59/60 n=5 ec=44/34 lis/c=59/59 les/c/f=60/60/0 sis=103 pruub=9.870719910s) [0] r=-1 lpr=103 pi=[59,103)/1 crt=41'385 mlcod 0'0 unknown NOTIFY pruub 157.303405762s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:26:07 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 103 pg[9.1e( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=59/59 les/c/f=60/60/0 sis=103) [0] r=0 lpr=103 pi=[59,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:26:07 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 103 pg[9.1c( v 41'385 (0'0,41'385] local-lis/les=102/103 n=5 ec=44/34 lis/c=71/71 les/c/f=72/72/0 sis=102) [0]/[2] async=[0] r=0 lpr=102 pi=[71,102)/1 crt=41'385 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:26:07 np0005486808 python3.9[106216]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:26:07 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Oct 14 04:26:07 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Oct 14 04:26:08 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 7.f scrub starts
Oct 14 04:26:08 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 7.f scrub ok
Oct 14 04:26:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Oct 14 04:26:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Oct 14 04:26:08 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Oct 14 04:26:08 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct 14 04:26:08 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 104 pg[9.1e( v 41'385 (0'0,41'385] local-lis/les=59/60 n=5 ec=44/34 lis/c=59/59 les/c/f=60/60/0 sis=104) [0]/[2] r=0 lpr=104 pi=[59,104)/1 crt=41'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:26:08 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 104 pg[9.1e( v 41'385 (0'0,41'385] local-lis/les=59/60 n=5 ec=44/34 lis/c=59/59 les/c/f=60/60/0 sis=104) [0]/[2] r=0 lpr=104 pi=[59,104)/1 crt=41'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 14 04:26:08 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 104 pg[9.1c( v 41'385 (0'0,41'385] local-lis/les=102/103 n=5 ec=44/34 lis/c=102/71 les/c/f=103/72/0 sis=104 pruub=15.016270638s) [0] async=[0] r=-1 lpr=104 pi=[71,104)/1 crt=41'385 mlcod 41'385 active pruub 163.452697754s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:26:08 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 104 pg[9.1c( v 41'385 (0'0,41'385] local-lis/les=102/103 n=5 ec=44/34 lis/c=102/71 les/c/f=103/72/0 sis=104 pruub=15.015896797s) [0] r=-1 lpr=104 pi=[71,104)/1 crt=41'385 mlcod 0'0 unknown NOTIFY pruub 163.452697754s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:26:08 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 104 pg[9.1c( v 41'385 (0'0,41'385] local-lis/les=0/0 n=5 ec=44/34 lis/c=102/71 les/c/f=103/72/0 sis=104) [0] r=0 lpr=104 pi=[71,104)/1 luod=0'0 crt=41'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:26:08 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 104 pg[9.1e( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=59/59 les/c/f=60/60/0 sis=104) [0]/[2] r=-1 lpr=104 pi=[59,104)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:26:08 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 104 pg[9.1e( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=59/59 les/c/f=60/60/0 sis=104) [0]/[2] r=-1 lpr=104 pi=[59,104)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 14 04:26:08 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 104 pg[9.1c( v 41'385 (0'0,41'385] local-lis/les=0/0 n=5 ec=44/34 lis/c=102/71 les/c/f=103/72/0 sis=104) [0] r=0 lpr=104 pi=[71,104)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:26:08 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Oct 14 04:26:08 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Oct 14 04:26:08 np0005486808 python3.9[106370]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:26:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v203: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:26:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 14 04:26:08 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 14 04:26:08 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 8.d scrub starts
Oct 14 04:26:08 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 8.d scrub ok
Oct 14 04:26:09 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Oct 14 04:26:09 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 14 04:26:09 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Oct 14 04:26:09 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 14 04:26:09 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Oct 14 04:26:09 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 105 pg[9.1f( v 41'385 (0'0,41'385] local-lis/les=61/62 n=5 ec=44/34 lis/c=61/61 les/c/f=62/62/0 sis=105 pruub=9.844497681s) [1] r=-1 lpr=105 pi=[61,105)/1 crt=41'385 mlcod 0'0 active pruub 159.317306519s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:26:09 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 105 pg[9.1f( v 41'385 (0'0,41'385] local-lis/les=61/62 n=5 ec=44/34 lis/c=61/61 les/c/f=62/62/0 sis=105 pruub=9.844403267s) [1] r=-1 lpr=105 pi=[61,105)/1 crt=41'385 mlcod 0'0 unknown NOTIFY pruub 159.317306519s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:26:09 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 105 pg[9.1f( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=61/61 les/c/f=62/62/0 sis=105) [1] r=0 lpr=105 pi=[61,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:26:09 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 105 pg[9.1c( v 41'385 (0'0,41'385] local-lis/les=104/105 n=5 ec=44/34 lis/c=102/71 les/c/f=103/72/0 sis=104) [0] r=0 lpr=104 pi=[71,104)/1 crt=41'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:26:09 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 105 pg[9.1e( v 41'385 (0'0,41'385] local-lis/les=104/105 n=5 ec=44/34 lis/c=59/59 les/c/f=60/60/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[59,104)/1 crt=41'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:26:09 np0005486808 python3.9[106520]: ansible-ansible.builtin.service_facts Invoked
Oct 14 04:26:09 np0005486808 network[106537]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 14 04:26:09 np0005486808 network[106538]: 'network-scripts' will be removed from distribution in near future.
Oct 14 04:26:09 np0005486808 network[106539]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 14 04:26:09 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Oct 14 04:26:09 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Oct 14 04:26:10 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Oct 14 04:26:10 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Oct 14 04:26:10 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Oct 14 04:26:10 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 106 pg[9.1e( v 41'385 (0'0,41'385] local-lis/les=0/0 n=5 ec=44/34 lis/c=104/59 les/c/f=105/60/0 sis=106) [0] r=0 lpr=106 pi=[59,106)/1 luod=0'0 crt=41'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:26:10 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 106 pg[9.1e( v 41'385 (0'0,41'385] local-lis/les=0/0 n=5 ec=44/34 lis/c=104/59 les/c/f=105/60/0 sis=106) [0] r=0 lpr=106 pi=[59,106)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:26:10 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 14 04:26:10 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 106 pg[9.1f( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] r=-1 lpr=106 pi=[61,106)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:26:10 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 106 pg[9.1f( empty local-lis/les=0/0 n=0 ec=44/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] r=-1 lpr=106 pi=[61,106)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 14 04:26:10 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 106 pg[9.1e( v 41'385 (0'0,41'385] local-lis/les=104/105 n=5 ec=44/34 lis/c=104/59 les/c/f=105/60/0 sis=106 pruub=15.008542061s) [0] async=[0] r=-1 lpr=106 pi=[59,106)/1 crt=41'385 mlcod 41'385 active pruub 165.480026245s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:26:10 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 106 pg[9.1e( v 41'385 (0'0,41'385] local-lis/les=104/105 n=5 ec=44/34 lis/c=104/59 les/c/f=105/60/0 sis=106 pruub=15.008461952s) [0] r=-1 lpr=106 pi=[59,106)/1 crt=41'385 mlcod 0'0 unknown NOTIFY pruub 165.480026245s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:26:10 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 106 pg[9.1f( v 41'385 (0'0,41'385] local-lis/les=61/62 n=5 ec=44/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] r=0 lpr=106 pi=[61,106)/1 crt=41'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:26:10 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 106 pg[9.1f( v 41'385 (0'0,41'385] local-lis/les=61/62 n=5 ec=44/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] r=0 lpr=106 pi=[61,106)/1 crt=41'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 14 04:26:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v206: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:26:11 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 8.e scrub starts
Oct 14 04:26:11 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 8.e scrub ok
Oct 14 04:26:11 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Oct 14 04:26:11 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Oct 14 04:26:11 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Oct 14 04:26:11 np0005486808 ceph-osd[87348]: osd.0 pg_epoch: 107 pg[9.1e( v 41'385 (0'0,41'385] local-lis/les=106/107 n=5 ec=44/34 lis/c=104/59 les/c/f=105/60/0 sis=106) [0] r=0 lpr=106 pi=[59,106)/1 crt=41'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:26:11 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 107 pg[9.1f( v 41'385 (0'0,41'385] local-lis/les=106/107 n=5 ec=44/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=41'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:26:11 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 7.e scrub starts
Oct 14 04:26:11 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 7.e scrub ok
Oct 14 04:26:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:26:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Oct 14 04:26:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Oct 14 04:26:12 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Oct 14 04:26:12 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 108 pg[9.1f( v 41'385 (0'0,41'385] local-lis/les=106/107 n=5 ec=44/34 lis/c=106/61 les/c/f=107/62/0 sis=108 pruub=15.321609497s) [1] async=[1] r=-1 lpr=108 pi=[61,108)/1 crt=41'385 mlcod 41'385 active pruub 167.629150391s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:26:12 np0005486808 ceph-osd[89514]: osd.2 pg_epoch: 108 pg[9.1f( v 41'385 (0'0,41'385] local-lis/les=106/107 n=5 ec=44/34 lis/c=106/61 les/c/f=107/62/0 sis=108 pruub=15.321047783s) [1] r=-1 lpr=108 pi=[61,108)/1 crt=41'385 mlcod 0'0 unknown NOTIFY pruub 167.629150391s@ mbc={}] state<Start>: transitioning to Stray
Oct 14 04:26:12 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 108 pg[9.1f( v 41'385 (0'0,41'385] local-lis/les=0/0 n=5 ec=44/34 lis/c=106/61 les/c/f=107/62/0 sis=108) [1] r=0 lpr=108 pi=[61,108)/1 luod=0'0 crt=41'385 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 14 04:26:12 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 108 pg[9.1f( v 41'385 (0'0,41'385] local-lis/les=0/0 n=5 ec=44/34 lis/c=106/61 les/c/f=107/62/0 sis=108) [1] r=0 lpr=108 pi=[61,108)/1 crt=41'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 14 04:26:12 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Oct 14 04:26:12 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Oct 14 04:26:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v209: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:26:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Oct 14 04:26:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Oct 14 04:26:13 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Oct 14 04:26:13 np0005486808 ceph-osd[88375]: osd.1 pg_epoch: 109 pg[9.1f( v 41'385 (0'0,41'385] local-lis/les=108/109 n=5 ec=44/34 lis/c=106/61 les/c/f=107/62/0 sis=108) [1] r=0 lpr=108 pi=[61,108)/1 crt=41'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 14 04:26:13 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Oct 14 04:26:13 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Oct 14 04:26:13 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 8.2 deep-scrub starts
Oct 14 04:26:13 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 8.2 deep-scrub ok
Oct 14 04:26:14 np0005486808 python3.9[106802]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:26:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v211: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:26:15 np0005486808 python3.9[106952]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 04:26:15 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 8.a scrub starts
Oct 14 04:26:15 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 8.a scrub ok
Oct 14 04:26:15 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Oct 14 04:26:15 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Oct 14 04:26:16 np0005486808 python3.9[107106]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 04:26:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v212: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 3 objects/s recovering
Oct 14 04:26:16 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Oct 14 04:26:16 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Oct 14 04:26:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:26:17 np0005486808 python3.9[107264]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 14 04:26:17 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Oct 14 04:26:17 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Oct 14 04:26:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v213: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 2 objects/s recovering
Oct 14 04:26:18 np0005486808 python3.9[107348]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 14 04:26:18 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Oct 14 04:26:18 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Oct 14 04:26:19 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 3.a scrub starts
Oct 14 04:26:19 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 3.a scrub ok
Oct 14 04:26:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v214: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 2 objects/s recovering
Oct 14 04:26:21 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Oct 14 04:26:21 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Oct 14 04:26:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:26:22 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Oct 14 04:26:22 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Oct 14 04:26:22 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 8.13 deep-scrub starts
Oct 14 04:26:22 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 8.13 deep-scrub ok
Oct 14 04:26:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v215: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Oct 14 04:26:22 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Oct 14 04:26:22 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Oct 14 04:26:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v216: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Oct 14 04:26:25 np0005486808 systemd[1]: packagekit.service: Deactivated successfully.
Oct 14 04:26:26 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Oct 14 04:26:26 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Oct 14 04:26:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v217: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Oct 14 04:26:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:26:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v218: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:26:29 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 7.13 deep-scrub starts
Oct 14 04:26:29 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 7.13 deep-scrub ok
Oct 14 04:26:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v219: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:26:31 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Oct 14 04:26:31 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Oct 14 04:26:31 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Oct 14 04:26:31 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Oct 14 04:26:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:26:32 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Oct 14 04:26:32 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Oct 14 04:26:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_08:26:32
Oct 14 04:26:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 04:26:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 04:26:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', 'images', 'volumes', 'vms', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', '.mgr']
Oct 14 04:26:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 04:26:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v220: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:26:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:26:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:26:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:26:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:26:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:26:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:26:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 04:26:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:26:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 04:26:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:26:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:26:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:26:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:26:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:26:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:26:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:26:32 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 7.a scrub starts
Oct 14 04:26:32 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 7.a scrub ok
Oct 14 04:26:33 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Oct 14 04:26:33 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Oct 14 04:26:34 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Oct 14 04:26:34 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Oct 14 04:26:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v221: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:26:35 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Oct 14 04:26:35 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Oct 14 04:26:36 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Oct 14 04:26:36 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Oct 14 04:26:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v222: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:26:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:26:37 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Oct 14 04:26:37 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Oct 14 04:26:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v223: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:26:38 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 8.1c deep-scrub starts
Oct 14 04:26:38 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 8.1c deep-scrub ok
Oct 14 04:26:40 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Oct 14 04:26:40 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Oct 14 04:26:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v224: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:26:41 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Oct 14 04:26:41 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Oct 14 04:26:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:26:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 04:26:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:26:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 04:26:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:26:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:26:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:26:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:26:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:26:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:26:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:26:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:26:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:26:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 04:26:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:26:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:26:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:26:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 04:26:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:26:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 04:26:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:26:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:26:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:26:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 04:26:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v225: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:26:42 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Oct 14 04:26:42 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Oct 14 04:26:43 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 3.1f deep-scrub starts
Oct 14 04:26:43 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 3.1f deep-scrub ok
Oct 14 04:26:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v226: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:26:45 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Oct 14 04:26:45 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Oct 14 04:26:46 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Oct 14 04:26:46 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Oct 14 04:26:46 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 9.a scrub starts
Oct 14 04:26:46 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 9.a scrub ok
Oct 14 04:26:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v227: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:26:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:26:47 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Oct 14 04:26:47 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Oct 14 04:26:48 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 3.c scrub starts
Oct 14 04:26:48 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 3.c scrub ok
Oct 14 04:26:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v228: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:26:48 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Oct 14 04:26:48 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Oct 14 04:26:50 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Oct 14 04:26:50 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Oct 14 04:26:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v229: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:26:50 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Oct 14 04:26:50 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Oct 14 04:26:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:26:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v230: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:26:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v231: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:26:54 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Oct 14 04:26:54 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Oct 14 04:26:55 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Oct 14 04:26:55 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Oct 14 04:26:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v232: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:26:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:26:57 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Oct 14 04:26:57 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Oct 14 04:26:58 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 2.18 deep-scrub starts
Oct 14 04:26:58 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 2.18 deep-scrub ok
Oct 14 04:26:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v233: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:26:58 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 3.e scrub starts
Oct 14 04:26:58 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 3.e scrub ok
Oct 14 04:26:59 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Oct 14 04:26:59 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Oct 14 04:26:59 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Oct 14 04:26:59 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Oct 14 04:27:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:27:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:27:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 04:27:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:27:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 04:27:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:27:00 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev d57b61d0-012b-40b1-a329-61d5993c32dc does not exist
Oct 14 04:27:00 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev bf19a950-e160-4948-a395-00465bdc5148 does not exist
Oct 14 04:27:00 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 7837045d-ef50-4d60-a57c-7dfa4fa3ec8c does not exist
Oct 14 04:27:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 04:27:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 04:27:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 04:27:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:27:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:27:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:27:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v234: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:27:00 np0005486808 podman[107828]: 2025-10-14 08:27:00.862499187 +0000 UTC m=+0.056829123 container create ca4054869bd1d5ff19ad266468c4e86b644fd0a0c317b40a15cf6db1879c46c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_lamarr, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:27:00 np0005486808 systemd[1]: Started libpod-conmon-ca4054869bd1d5ff19ad266468c4e86b644fd0a0c317b40a15cf6db1879c46c0.scope.
Oct 14 04:27:00 np0005486808 podman[107828]: 2025-10-14 08:27:00.829853326 +0000 UTC m=+0.024183322 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:27:00 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:27:00 np0005486808 podman[107828]: 2025-10-14 08:27:00.953962197 +0000 UTC m=+0.148292103 container init ca4054869bd1d5ff19ad266468c4e86b644fd0a0c317b40a15cf6db1879c46c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_lamarr, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:27:00 np0005486808 podman[107828]: 2025-10-14 08:27:00.962222032 +0000 UTC m=+0.156551938 container start ca4054869bd1d5ff19ad266468c4e86b644fd0a0c317b40a15cf6db1879c46c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_lamarr, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 14 04:27:00 np0005486808 podman[107828]: 2025-10-14 08:27:00.965668404 +0000 UTC m=+0.159998320 container attach ca4054869bd1d5ff19ad266468c4e86b644fd0a0c317b40a15cf6db1879c46c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Oct 14 04:27:00 np0005486808 zen_lamarr[107873]: 167 167
Oct 14 04:27:00 np0005486808 systemd[1]: libpod-ca4054869bd1d5ff19ad266468c4e86b644fd0a0c317b40a15cf6db1879c46c0.scope: Deactivated successfully.
Oct 14 04:27:00 np0005486808 podman[107828]: 2025-10-14 08:27:00.968710246 +0000 UTC m=+0.163040142 container died ca4054869bd1d5ff19ad266468c4e86b644fd0a0c317b40a15cf6db1879c46c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 14 04:27:00 np0005486808 systemd[1]: var-lib-containers-storage-overlay-e69e7c0c1b211f70e6f6d84e9ee6a4f9f64c305a06aaae68e2457c9fffa59c75-merged.mount: Deactivated successfully.
Oct 14 04:27:01 np0005486808 podman[107828]: 2025-10-14 08:27:01.012257654 +0000 UTC m=+0.206587560 container remove ca4054869bd1d5ff19ad266468c4e86b644fd0a0c317b40a15cf6db1879c46c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:27:01 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:27:01 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:27:01 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:27:01 np0005486808 systemd[1]: libpod-conmon-ca4054869bd1d5ff19ad266468c4e86b644fd0a0c317b40a15cf6db1879c46c0.scope: Deactivated successfully.
Oct 14 04:27:01 np0005486808 podman[107954]: 2025-10-14 08:27:01.193339461 +0000 UTC m=+0.056485865 container create 22e33c4d9f4298b283626fdcec726118f5916df8098124c19e0795419b115b62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:27:01 np0005486808 podman[107954]: 2025-10-14 08:27:01.16283289 +0000 UTC m=+0.025979364 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:27:01 np0005486808 systemd[1]: Started libpod-conmon-22e33c4d9f4298b283626fdcec726118f5916df8098124c19e0795419b115b62.scope.
Oct 14 04:27:01 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:27:01 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a602573dfd6d9be695e576f47479ca11c2023e6a68cd9066143d63fc52801f9c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:27:01 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a602573dfd6d9be695e576f47479ca11c2023e6a68cd9066143d63fc52801f9c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:27:01 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a602573dfd6d9be695e576f47479ca11c2023e6a68cd9066143d63fc52801f9c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:27:01 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a602573dfd6d9be695e576f47479ca11c2023e6a68cd9066143d63fc52801f9c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:27:01 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a602573dfd6d9be695e576f47479ca11c2023e6a68cd9066143d63fc52801f9c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:27:01 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Oct 14 04:27:01 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Oct 14 04:27:01 np0005486808 podman[107954]: 2025-10-14 08:27:01.335630801 +0000 UTC m=+0.198777185 container init 22e33c4d9f4298b283626fdcec726118f5916df8098124c19e0795419b115b62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bhabha, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 14 04:27:01 np0005486808 podman[107954]: 2025-10-14 08:27:01.347547723 +0000 UTC m=+0.210694087 container start 22e33c4d9f4298b283626fdcec726118f5916df8098124c19e0795419b115b62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bhabha, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:27:01 np0005486808 podman[107954]: 2025-10-14 08:27:01.350698917 +0000 UTC m=+0.213845301 container attach 22e33c4d9f4298b283626fdcec726118f5916df8098124c19e0795419b115b62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:27:01 np0005486808 python3.9[107948]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:27:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:27:02 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Oct 14 04:27:02 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Oct 14 04:27:02 np0005486808 elastic_bhabha[107971]: --> passed data devices: 0 physical, 3 LVM
Oct 14 04:27:02 np0005486808 elastic_bhabha[107971]: --> relative data size: 1.0
Oct 14 04:27:02 np0005486808 elastic_bhabha[107971]: --> All data devices are unavailable
Oct 14 04:27:02 np0005486808 podman[107954]: 2025-10-14 08:27:02.489617355 +0000 UTC m=+1.352763729 container died 22e33c4d9f4298b283626fdcec726118f5916df8098124c19e0795419b115b62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bhabha, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 14 04:27:02 np0005486808 systemd[1]: libpod-22e33c4d9f4298b283626fdcec726118f5916df8098124c19e0795419b115b62.scope: Deactivated successfully.
Oct 14 04:27:02 np0005486808 systemd[1]: libpod-22e33c4d9f4298b283626fdcec726118f5916df8098124c19e0795419b115b62.scope: Consumed 1.090s CPU time.
Oct 14 04:27:02 np0005486808 systemd[1]: var-lib-containers-storage-overlay-a602573dfd6d9be695e576f47479ca11c2023e6a68cd9066143d63fc52801f9c-merged.mount: Deactivated successfully.
Oct 14 04:27:02 np0005486808 podman[107954]: 2025-10-14 08:27:02.554848035 +0000 UTC m=+1.417994419 container remove 22e33c4d9f4298b283626fdcec726118f5916df8098124c19e0795419b115b62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bhabha, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 14 04:27:02 np0005486808 systemd[1]: libpod-conmon-22e33c4d9f4298b283626fdcec726118f5916df8098124c19e0795419b115b62.scope: Deactivated successfully.
Oct 14 04:27:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v235: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:27:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:27:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:27:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:27:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:27:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:27:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:27:03 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Oct 14 04:27:03 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Oct 14 04:27:03 np0005486808 podman[108436]: 2025-10-14 08:27:03.323423167 +0000 UTC m=+0.065658002 container create 23d9e90dedec1e918bce7b8f813575c03e8bd20e51ce0016fc7dd03cd735dc15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wright, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:27:03 np0005486808 systemd[1]: Started libpod-conmon-23d9e90dedec1e918bce7b8f813575c03e8bd20e51ce0016fc7dd03cd735dc15.scope.
Oct 14 04:27:03 np0005486808 podman[108436]: 2025-10-14 08:27:03.29646404 +0000 UTC m=+0.038698945 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:27:03 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:27:03 np0005486808 podman[108436]: 2025-10-14 08:27:03.436215231 +0000 UTC m=+0.178450126 container init 23d9e90dedec1e918bce7b8f813575c03e8bd20e51ce0016fc7dd03cd735dc15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wright, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:27:03 np0005486808 podman[108436]: 2025-10-14 08:27:03.442849467 +0000 UTC m=+0.185084312 container start 23d9e90dedec1e918bce7b8f813575c03e8bd20e51ce0016fc7dd03cd735dc15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wright, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:27:03 np0005486808 python3.9[108415]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct 14 04:27:03 np0005486808 goofy_wright[108453]: 167 167
Oct 14 04:27:03 np0005486808 systemd[1]: libpod-23d9e90dedec1e918bce7b8f813575c03e8bd20e51ce0016fc7dd03cd735dc15.scope: Deactivated successfully.
Oct 14 04:27:03 np0005486808 podman[108436]: 2025-10-14 08:27:03.451737057 +0000 UTC m=+0.193971952 container attach 23d9e90dedec1e918bce7b8f813575c03e8bd20e51ce0016fc7dd03cd735dc15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:27:03 np0005486808 podman[108436]: 2025-10-14 08:27:03.45227962 +0000 UTC m=+0.194514465 container died 23d9e90dedec1e918bce7b8f813575c03e8bd20e51ce0016fc7dd03cd735dc15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wright, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 14 04:27:03 np0005486808 systemd[1]: var-lib-containers-storage-overlay-e8111048499944dcd85484928c3b267c08385575b96f62223e9555138032a2a0-merged.mount: Deactivated successfully.
Oct 14 04:27:03 np0005486808 podman[108436]: 2025-10-14 08:27:03.502449615 +0000 UTC m=+0.244684460 container remove 23d9e90dedec1e918bce7b8f813575c03e8bd20e51ce0016fc7dd03cd735dc15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wright, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:27:03 np0005486808 systemd[1]: libpod-conmon-23d9e90dedec1e918bce7b8f813575c03e8bd20e51ce0016fc7dd03cd735dc15.scope: Deactivated successfully.
Oct 14 04:27:03 np0005486808 podman[108500]: 2025-10-14 08:27:03.68812797 +0000 UTC m=+0.071485189 container create 41625847c3f45f9aae5ac61498a95eceacc55b1748eb36dc57db8ac3a647a606 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_einstein, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:27:03 np0005486808 systemd[1]: Started libpod-conmon-41625847c3f45f9aae5ac61498a95eceacc55b1748eb36dc57db8ac3a647a606.scope.
Oct 14 04:27:03 np0005486808 podman[108500]: 2025-10-14 08:27:03.659616477 +0000 UTC m=+0.042973746 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:27:03 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:27:03 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c8b8ae5a4d9895e4fe6513495034e5e76bc4b777b1e52bd26c9a1f07e9b494b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:27:03 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c8b8ae5a4d9895e4fe6513495034e5e76bc4b777b1e52bd26c9a1f07e9b494b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:27:03 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c8b8ae5a4d9895e4fe6513495034e5e76bc4b777b1e52bd26c9a1f07e9b494b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:27:03 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c8b8ae5a4d9895e4fe6513495034e5e76bc4b777b1e52bd26c9a1f07e9b494b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:27:03 np0005486808 podman[108500]: 2025-10-14 08:27:03.808884082 +0000 UTC m=+0.192241371 container init 41625847c3f45f9aae5ac61498a95eceacc55b1748eb36dc57db8ac3a647a606 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_einstein, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 14 04:27:03 np0005486808 podman[108500]: 2025-10-14 08:27:03.822407442 +0000 UTC m=+0.205764661 container start 41625847c3f45f9aae5ac61498a95eceacc55b1748eb36dc57db8ac3a647a606 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_einstein, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:27:03 np0005486808 podman[108500]: 2025-10-14 08:27:03.826486268 +0000 UTC m=+0.209843487 container attach 41625847c3f45f9aae5ac61498a95eceacc55b1748eb36dc57db8ac3a647a606 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_einstein, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:27:04 np0005486808 python3.9[108649]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]: {
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:    "0": [
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:        {
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:            "devices": [
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:                "/dev/loop3"
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:            ],
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:            "lv_name": "ceph_lv0",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:            "lv_size": "21470642176",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:            "name": "ceph_lv0",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:            "tags": {
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:                "ceph.cluster_name": "ceph",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:                "ceph.crush_device_class": "",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:                "ceph.encrypted": "0",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:                "ceph.osd_id": "0",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:                "ceph.type": "block",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:                "ceph.vdo": "0"
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:            },
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:            "type": "block",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:            "vg_name": "ceph_vg0"
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:        }
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:    ],
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:    "1": [
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:        {
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:            "devices": [
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:                "/dev/loop4"
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:            ],
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:            "lv_name": "ceph_lv1",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:            "lv_size": "21470642176",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:            "name": "ceph_lv1",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:            "tags": {
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:                "ceph.cluster_name": "ceph",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:                "ceph.crush_device_class": "",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:                "ceph.encrypted": "0",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:                "ceph.osd_id": "1",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:                "ceph.type": "block",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:                "ceph.vdo": "0"
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:            },
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:            "type": "block",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:            "vg_name": "ceph_vg1"
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:        }
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:    ],
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:    "2": [
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:        {
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:            "devices": [
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:                "/dev/loop5"
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:            ],
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:            "lv_name": "ceph_lv2",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:            "lv_size": "21470642176",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:            "name": "ceph_lv2",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:            "tags": {
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:                "ceph.cluster_name": "ceph",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:                "ceph.crush_device_class": "",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:                "ceph.encrypted": "0",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:                "ceph.osd_id": "2",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:                "ceph.type": "block",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:                "ceph.vdo": "0"
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:            },
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:            "type": "block",
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:            "vg_name": "ceph_vg2"
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:        }
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]:    ]
Oct 14 04:27:04 np0005486808 nifty_einstein[108517]: }
Oct 14 04:27:04 np0005486808 systemd[1]: libpod-41625847c3f45f9aae5ac61498a95eceacc55b1748eb36dc57db8ac3a647a606.scope: Deactivated successfully.
Oct 14 04:27:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v236: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:27:04 np0005486808 podman[108678]: 2025-10-14 08:27:04.683478708 +0000 UTC m=+0.029999170 container died 41625847c3f45f9aae5ac61498a95eceacc55b1748eb36dc57db8ac3a647a606 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 14 04:27:04 np0005486808 systemd[1]: var-lib-containers-storage-overlay-2c8b8ae5a4d9895e4fe6513495034e5e76bc4b777b1e52bd26c9a1f07e9b494b-merged.mount: Deactivated successfully.
Oct 14 04:27:04 np0005486808 podman[108678]: 2025-10-14 08:27:04.759458832 +0000 UTC m=+0.105979254 container remove 41625847c3f45f9aae5ac61498a95eceacc55b1748eb36dc57db8ac3a647a606 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 04:27:04 np0005486808 systemd[1]: libpod-conmon-41625847c3f45f9aae5ac61498a95eceacc55b1748eb36dc57db8ac3a647a606.scope: Deactivated successfully.
Oct 14 04:27:05 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Oct 14 04:27:05 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Oct 14 04:27:05 np0005486808 python3.9[108918]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:27:05 np0005486808 podman[108985]: 2025-10-14 08:27:05.633523344 +0000 UTC m=+0.067774781 container create b434b97b34543b34458705356cc75a436d1828b4673a6b6864c08f30ce743f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:27:05 np0005486808 systemd[1]: Started libpod-conmon-b434b97b34543b34458705356cc75a436d1828b4673a6b6864c08f30ce743f54.scope.
Oct 14 04:27:05 np0005486808 podman[108985]: 2025-10-14 08:27:05.605491272 +0000 UTC m=+0.039742749 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:27:05 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:27:05 np0005486808 podman[108985]: 2025-10-14 08:27:05.741980946 +0000 UTC m=+0.176232413 container init b434b97b34543b34458705356cc75a436d1828b4673a6b6864c08f30ce743f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:27:05 np0005486808 podman[108985]: 2025-10-14 08:27:05.753949079 +0000 UTC m=+0.188200506 container start b434b97b34543b34458705356cc75a436d1828b4673a6b6864c08f30ce743f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:27:05 np0005486808 lucid_poincare[109026]: 167 167
Oct 14 04:27:05 np0005486808 systemd[1]: libpod-b434b97b34543b34458705356cc75a436d1828b4673a6b6864c08f30ce743f54.scope: Deactivated successfully.
Oct 14 04:27:05 np0005486808 podman[108985]: 2025-10-14 08:27:05.765208415 +0000 UTC m=+0.199459832 container attach b434b97b34543b34458705356cc75a436d1828b4673a6b6864c08f30ce743f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 14 04:27:05 np0005486808 podman[108985]: 2025-10-14 08:27:05.765667495 +0000 UTC m=+0.199918912 container died b434b97b34543b34458705356cc75a436d1828b4673a6b6864c08f30ce743f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 14 04:27:05 np0005486808 systemd[1]: var-lib-containers-storage-overlay-422d45f4ced7607b3378d247d289968521c62a8892897fa417d01a67a8657ea9-merged.mount: Deactivated successfully.
Oct 14 04:27:05 np0005486808 podman[108985]: 2025-10-14 08:27:05.820296836 +0000 UTC m=+0.254548263 container remove b434b97b34543b34458705356cc75a436d1828b4673a6b6864c08f30ce743f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:27:05 np0005486808 systemd[1]: libpod-conmon-b434b97b34543b34458705356cc75a436d1828b4673a6b6864c08f30ce743f54.scope: Deactivated successfully.
Oct 14 04:27:06 np0005486808 podman[109077]: 2025-10-14 08:27:06.043058767 +0000 UTC m=+0.052706976 container create b603a576c913151c833417318b1c758684c1a2c1242c966ad87806cfa2f8af37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_joliot, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 14 04:27:06 np0005486808 systemd[1]: Started libpod-conmon-b603a576c913151c833417318b1c758684c1a2c1242c966ad87806cfa2f8af37.scope.
Oct 14 04:27:06 np0005486808 podman[109077]: 2025-10-14 08:27:06.024195311 +0000 UTC m=+0.033843610 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:27:06 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:27:06 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efe16441c360394906ba94ed02e92f4a7200a00b626962bdf5434535816a60e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:27:06 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efe16441c360394906ba94ed02e92f4a7200a00b626962bdf5434535816a60e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:27:06 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efe16441c360394906ba94ed02e92f4a7200a00b626962bdf5434535816a60e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:27:06 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efe16441c360394906ba94ed02e92f4a7200a00b626962bdf5434535816a60e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:27:06 np0005486808 podman[109077]: 2025-10-14 08:27:06.142669749 +0000 UTC m=+0.152318058 container init b603a576c913151c833417318b1c758684c1a2c1242c966ad87806cfa2f8af37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_joliot, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:27:06 np0005486808 podman[109077]: 2025-10-14 08:27:06.153788132 +0000 UTC m=+0.163436371 container start b603a576c913151c833417318b1c758684c1a2c1242c966ad87806cfa2f8af37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_joliot, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 14 04:27:06 np0005486808 podman[109077]: 2025-10-14 08:27:06.158462872 +0000 UTC m=+0.168111121 container attach b603a576c913151c833417318b1c758684c1a2c1242c966ad87806cfa2f8af37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_joliot, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 14 04:27:06 np0005486808 python3.9[109173]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct 14 04:27:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v237: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:27:06 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 4.11 deep-scrub starts
Oct 14 04:27:06 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 4.11 deep-scrub ok
Oct 14 04:27:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:27:07 np0005486808 vibrant_joliot[109116]: {
Oct 14 04:27:07 np0005486808 vibrant_joliot[109116]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 04:27:07 np0005486808 vibrant_joliot[109116]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:27:07 np0005486808 vibrant_joliot[109116]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 04:27:07 np0005486808 vibrant_joliot[109116]:        "osd_id": 2,
Oct 14 04:27:07 np0005486808 vibrant_joliot[109116]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:27:07 np0005486808 vibrant_joliot[109116]:        "type": "bluestore"
Oct 14 04:27:07 np0005486808 vibrant_joliot[109116]:    },
Oct 14 04:27:07 np0005486808 vibrant_joliot[109116]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 04:27:07 np0005486808 vibrant_joliot[109116]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:27:07 np0005486808 vibrant_joliot[109116]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 04:27:07 np0005486808 vibrant_joliot[109116]:        "osd_id": 1,
Oct 14 04:27:07 np0005486808 vibrant_joliot[109116]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:27:07 np0005486808 vibrant_joliot[109116]:        "type": "bluestore"
Oct 14 04:27:07 np0005486808 vibrant_joliot[109116]:    },
Oct 14 04:27:07 np0005486808 vibrant_joliot[109116]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 04:27:07 np0005486808 vibrant_joliot[109116]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:27:07 np0005486808 vibrant_joliot[109116]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 04:27:07 np0005486808 vibrant_joliot[109116]:        "osd_id": 0,
Oct 14 04:27:07 np0005486808 vibrant_joliot[109116]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:27:07 np0005486808 vibrant_joliot[109116]:        "type": "bluestore"
Oct 14 04:27:07 np0005486808 vibrant_joliot[109116]:    }
Oct 14 04:27:07 np0005486808 vibrant_joliot[109116]: }
Oct 14 04:27:07 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Oct 14 04:27:07 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Oct 14 04:27:07 np0005486808 systemd[1]: libpod-b603a576c913151c833417318b1c758684c1a2c1242c966ad87806cfa2f8af37.scope: Deactivated successfully.
Oct 14 04:27:07 np0005486808 systemd[1]: libpod-b603a576c913151c833417318b1c758684c1a2c1242c966ad87806cfa2f8af37.scope: Consumed 1.193s CPU time.
Oct 14 04:27:07 np0005486808 podman[109077]: 2025-10-14 08:27:07.342893405 +0000 UTC m=+1.352541654 container died b603a576c913151c833417318b1c758684c1a2c1242c966ad87806cfa2f8af37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 14 04:27:07 np0005486808 systemd[1]: var-lib-containers-storage-overlay-efe16441c360394906ba94ed02e92f4a7200a00b626962bdf5434535816a60e7-merged.mount: Deactivated successfully.
Oct 14 04:27:07 np0005486808 podman[109077]: 2025-10-14 08:27:07.426797467 +0000 UTC m=+1.436445716 container remove b603a576c913151c833417318b1c758684c1a2c1242c966ad87806cfa2f8af37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_joliot, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:27:07 np0005486808 systemd[1]: libpod-conmon-b603a576c913151c833417318b1c758684c1a2c1242c966ad87806cfa2f8af37.scope: Deactivated successfully.
Oct 14 04:27:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:27:07 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:27:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:27:07 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:27:07 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 09548ba2-9996-4e50-8ba7-71059cc3781c does not exist
Oct 14 04:27:07 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev e8d566b3-a375-4da8-8c45-7d4c7a106623 does not exist
Oct 14 04:27:07 np0005486808 python3.9[109418]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:27:08 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Oct 14 04:27:08 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Oct 14 04:27:08 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:27:08 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:27:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v238: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:27:08 np0005486808 python3.9[109570]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:27:09 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Oct 14 04:27:09 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Oct 14 04:27:09 np0005486808 python3.9[109648]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:27:10 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 2.13 deep-scrub starts
Oct 14 04:27:10 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 2.13 deep-scrub ok
Oct 14 04:27:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v239: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:27:10 np0005486808 python3.9[109800]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct 14 04:27:11 np0005486808 python3.9[109953]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct 14 04:27:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:27:12 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Oct 14 04:27:12 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Oct 14 04:27:12 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Oct 14 04:27:12 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Oct 14 04:27:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v240: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:27:12 np0005486808 python3.9[110106]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 14 04:27:13 np0005486808 python3.9[110258]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct 14 04:27:13 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 4.e deep-scrub starts
Oct 14 04:27:13 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 4.e deep-scrub ok
Oct 14 04:27:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v241: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:27:14 np0005486808 python3.9[110410]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 14 04:27:15 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 4.a scrub starts
Oct 14 04:27:15 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 4.a scrub ok
Oct 14 04:27:16 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Oct 14 04:27:16 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Oct 14 04:27:16 np0005486808 python3.9[110563]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:27:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v242: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:27:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:27:17 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Oct 14 04:27:17 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Oct 14 04:27:17 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Oct 14 04:27:17 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Oct 14 04:27:17 np0005486808 python3.9[110715]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:27:17 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Oct 14 04:27:17 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Oct 14 04:27:17 np0005486808 python3.9[110793]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:27:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v243: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:27:18 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Oct 14 04:27:18 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Oct 14 04:27:18 np0005486808 python3.9[110945]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:27:19 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Oct 14 04:27:19 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Oct 14 04:27:19 np0005486808 python3.9[111023]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:27:19 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 4.1a deep-scrub starts
Oct 14 04:27:19 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 4.1a deep-scrub ok
Oct 14 04:27:20 np0005486808 python3.9[111175]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 14 04:27:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v244: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:27:21 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 2.f deep-scrub starts
Oct 14 04:27:21 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 2.f deep-scrub ok
Oct 14 04:27:21 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 2.d scrub starts
Oct 14 04:27:21 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 2.d scrub ok
Oct 14 04:27:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:27:22 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 2.b scrub starts
Oct 14 04:27:22 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 2.b scrub ok
Oct 14 04:27:22 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Oct 14 04:27:22 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Oct 14 04:27:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v245: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:27:22 np0005486808 python3.9[111326]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:27:23 np0005486808 python3.9[111478]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct 14 04:27:24 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 5.f scrub starts
Oct 14 04:27:24 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 5.f scrub ok
Oct 14 04:27:24 np0005486808 python3.9[111628]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:27:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v246: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:27:25 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Oct 14 04:27:25 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Oct 14 04:27:25 np0005486808 python3.9[111780]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:27:26 np0005486808 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct 14 04:27:26 np0005486808 systemd[1]: tuned.service: Deactivated successfully.
Oct 14 04:27:26 np0005486808 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct 14 04:27:26 np0005486808 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 14 04:27:26 np0005486808 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 14 04:27:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v247: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:27:26 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Oct 14 04:27:26 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Oct 14 04:27:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:27:27 np0005486808 python3.9[111942]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct 14 04:27:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v248: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:27:29 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 5.c deep-scrub starts
Oct 14 04:27:29 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 5.c deep-scrub ok
Oct 14 04:27:29 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 9.6 deep-scrub starts
Oct 14 04:27:29 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 9.6 deep-scrub ok
Oct 14 04:27:29 np0005486808 python3.9[112094]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:27:30 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Oct 14 04:27:30 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Oct 14 04:27:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v249: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:27:30 np0005486808 python3.9[112248]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:27:31 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 2.8 deep-scrub starts
Oct 14 04:27:31 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 2.8 deep-scrub ok
Oct 14 04:27:31 np0005486808 systemd[1]: session-36.scope: Deactivated successfully.
Oct 14 04:27:31 np0005486808 systemd[1]: session-36.scope: Consumed 1min 7.509s CPU time.
Oct 14 04:27:31 np0005486808 systemd-logind[799]: Session 36 logged out. Waiting for processes to exit.
Oct 14 04:27:31 np0005486808 systemd-logind[799]: Removed session 36.
Oct 14 04:27:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:27:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_08:27:32
Oct 14 04:27:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 04:27:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 04:27:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'images', 'default.rgw.log', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'default.rgw.meta', 'vms']
Oct 14 04:27:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 04:27:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v250: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:27:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:27:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:27:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:27:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:27:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:27:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:27:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 04:27:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:27:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 04:27:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:27:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:27:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:27:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:27:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:27:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:27:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:27:33 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Oct 14 04:27:33 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Oct 14 04:27:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v251: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:27:35 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Oct 14 04:27:35 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Oct 14 04:27:35 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 9.e scrub starts
Oct 14 04:27:35 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 9.e scrub ok
Oct 14 04:27:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v252: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:27:36 np0005486808 systemd-logind[799]: New session 37 of user zuul.
Oct 14 04:27:36 np0005486808 systemd[1]: Started Session 37 of User zuul.
Oct 14 04:27:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:27:37 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Oct 14 04:27:37 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Oct 14 04:27:38 np0005486808 python3.9[112428]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 04:27:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v253: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:27:39 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Oct 14 04:27:39 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Oct 14 04:27:39 np0005486808 python3.9[112585]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct 14 04:27:39 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 9.17 deep-scrub starts
Oct 14 04:27:39 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 9.17 deep-scrub ok
Oct 14 04:27:40 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Oct 14 04:27:40 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Oct 14 04:27:40 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Oct 14 04:27:40 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Oct 14 04:27:40 np0005486808 python3.9[112738]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 14 04:27:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v254: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:27:41 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Oct 14 04:27:41 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Oct 14 04:27:41 np0005486808 python3.9[112822]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 14 04:27:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:27:42 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Oct 14 04:27:42 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Oct 14 04:27:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 04:27:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:27:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 04:27:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:27:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:27:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:27:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:27:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:27:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:27:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:27:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:27:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:27:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 04:27:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:27:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:27:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:27:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 04:27:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:27:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 04:27:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:27:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:27:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:27:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 04:27:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v255: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:27:43 np0005486808 python3.9[112975]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 14 04:27:44 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Oct 14 04:27:44 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Oct 14 04:27:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:27:44 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Oct 14 04:27:44 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Oct 14 04:27:45 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Oct 14 04:27:45 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Oct 14 04:27:46 np0005486808 python3.9[113128]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 14 04:27:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v257: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:27:46 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 9.f scrub starts
Oct 14 04:27:46 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 9.f scrub ok
Oct 14 04:27:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:27:47 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Oct 14 04:27:47 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Oct 14 04:27:48 np0005486808 python3.9[113281]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 04:27:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v258: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:27:48 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Oct 14 04:27:48 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Oct 14 04:27:49 np0005486808 python3.9[113433]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct 14 04:27:50 np0005486808 python3.9[113584]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 04:27:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v259: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:27:51 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 2.a scrub starts
Oct 14 04:27:51 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 2.a scrub ok
Oct 14 04:27:51 np0005486808 python3.9[113742]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 14 04:27:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:27:52 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Oct 14 04:27:52 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Oct 14 04:27:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v260: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:27:53 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Oct 14 04:27:53 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Oct 14 04:27:54 np0005486808 python3.9[113895]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:27:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v261: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:27:54 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 9.c scrub starts
Oct 14 04:27:54 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 9.c scrub ok
Oct 14 04:27:55 np0005486808 systemd[75862]: Created slice User Background Tasks Slice.
Oct 14 04:27:55 np0005486808 systemd[75862]: Starting Cleanup of User's Temporary Files and Directories...
Oct 14 04:27:55 np0005486808 systemd[75862]: Finished Cleanup of User's Temporary Files and Directories.
Oct 14 04:27:55 np0005486808 python3.9[114183]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 14 04:27:56 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Oct 14 04:27:56 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Oct 14 04:27:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v262: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:27:56 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 6.f scrub starts
Oct 14 04:27:56 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 6.f scrub ok
Oct 14 04:27:56 np0005486808 python3.9[114333]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:27:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:27:57 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Oct 14 04:27:57 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Oct 14 04:27:57 np0005486808 python3.9[114487]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 14 04:27:58 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Oct 14 04:27:58 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Oct 14 04:27:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v263: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:27:59 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Oct 14 04:27:59 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Oct 14 04:27:59 np0005486808 python3.9[114640]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 14 04:28:00 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Oct 14 04:28:00 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Oct 14 04:28:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v264: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:28:01 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Oct 14 04:28:01 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Oct 14 04:28:01 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Oct 14 04:28:01 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Oct 14 04:28:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:28:02 np0005486808 python3.9[114793]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:28:02 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Oct 14 04:28:02 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Oct 14 04:28:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:28:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:28:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v265: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:28:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:28:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:28:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:28:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:28:03 np0005486808 python3.9[114947]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Oct 14 04:28:03 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Oct 14 04:28:03 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Oct 14 04:28:04 np0005486808 systemd-logind[799]: Session 37 logged out. Waiting for processes to exit.
Oct 14 04:28:04 np0005486808 systemd[1]: session-37.scope: Deactivated successfully.
Oct 14 04:28:04 np0005486808 systemd[1]: session-37.scope: Consumed 19.760s CPU time.
Oct 14 04:28:04 np0005486808 systemd-logind[799]: Removed session 37.
Oct 14 04:28:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v266: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:28:04 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Oct 14 04:28:04 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Oct 14 04:28:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v267: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:28:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:28:07 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Oct 14 04:28:07 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Oct 14 04:28:07 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 10.a scrub starts
Oct 14 04:28:07 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 10.a scrub ok
Oct 14 04:28:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:28:08 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:28:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 04:28:08 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:28:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 04:28:08 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:28:08 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 4d0d74d6-523d-400d-9960-98e765a8463c does not exist
Oct 14 04:28:08 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 73933cbe-8c09-41f0-833a-be0946c7dc67 does not exist
Oct 14 04:28:08 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 194a24cd-c824-4e19-a776-919656beef15 does not exist
Oct 14 04:28:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 04:28:08 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 04:28:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 04:28:08 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:28:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:28:08 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:28:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v268: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:28:08 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:28:08 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:28:08 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:28:09 np0005486808 systemd-logind[799]: New session 38 of user zuul.
Oct 14 04:28:09 np0005486808 systemd[1]: Started Session 38 of User zuul.
Oct 14 04:28:09 np0005486808 podman[115301]: 2025-10-14 08:28:09.440730779 +0000 UTC m=+0.057994924 container create 69ce6a5e74c2613f4235aaeefebc3c8b1b041184bc0df74625e1a34fdeffc4f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hellman, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:28:09 np0005486808 systemd[1]: Started libpod-conmon-69ce6a5e74c2613f4235aaeefebc3c8b1b041184bc0df74625e1a34fdeffc4f6.scope.
Oct 14 04:28:09 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Oct 14 04:28:09 np0005486808 podman[115301]: 2025-10-14 08:28:09.411884716 +0000 UTC m=+0.029148901 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:28:09 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Oct 14 04:28:09 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:28:09 np0005486808 podman[115301]: 2025-10-14 08:28:09.539231897 +0000 UTC m=+0.156496092 container init 69ce6a5e74c2613f4235aaeefebc3c8b1b041184bc0df74625e1a34fdeffc4f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:28:09 np0005486808 podman[115301]: 2025-10-14 08:28:09.550883881 +0000 UTC m=+0.168147996 container start 69ce6a5e74c2613f4235aaeefebc3c8b1b041184bc0df74625e1a34fdeffc4f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:28:09 np0005486808 podman[115301]: 2025-10-14 08:28:09.554511749 +0000 UTC m=+0.171775964 container attach 69ce6a5e74c2613f4235aaeefebc3c8b1b041184bc0df74625e1a34fdeffc4f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hellman, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:28:09 np0005486808 dazzling_hellman[115317]: 167 167
Oct 14 04:28:09 np0005486808 podman[115301]: 2025-10-14 08:28:09.559092751 +0000 UTC m=+0.176356896 container died 69ce6a5e74c2613f4235aaeefebc3c8b1b041184bc0df74625e1a34fdeffc4f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hellman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Oct 14 04:28:09 np0005486808 systemd[1]: libpod-69ce6a5e74c2613f4235aaeefebc3c8b1b041184bc0df74625e1a34fdeffc4f6.scope: Deactivated successfully.
Oct 14 04:28:09 np0005486808 systemd[1]: var-lib-containers-storage-overlay-84a4a4f1813ec27c424f623e0d9d3117ae3b8c547b9c28b40dc47ad517bc7ed6-merged.mount: Deactivated successfully.
Oct 14 04:28:09 np0005486808 podman[115301]: 2025-10-14 08:28:09.609497178 +0000 UTC m=+0.226761323 container remove 69ce6a5e74c2613f4235aaeefebc3c8b1b041184bc0df74625e1a34fdeffc4f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 14 04:28:09 np0005486808 systemd[1]: libpod-conmon-69ce6a5e74c2613f4235aaeefebc3c8b1b041184bc0df74625e1a34fdeffc4f6.scope: Deactivated successfully.
Oct 14 04:28:09 np0005486808 podman[115366]: 2025-10-14 08:28:09.823183171 +0000 UTC m=+0.056451336 container create 16e3a5325df3e5f947f5c48e7ce1a080927613e9aed9e079ba7e9e554a3ae746 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:28:09 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 10.c deep-scrub starts
Oct 14 04:28:09 np0005486808 systemd[1]: Started libpod-conmon-16e3a5325df3e5f947f5c48e7ce1a080927613e9aed9e079ba7e9e554a3ae746.scope.
Oct 14 04:28:09 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 10.c deep-scrub ok
Oct 14 04:28:09 np0005486808 podman[115366]: 2025-10-14 08:28:09.794851171 +0000 UTC m=+0.028119386 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:28:09 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:28:09 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e450547381006d477a8da31189bdc5965513620bb59f9d3551200c1384b61e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:28:09 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e450547381006d477a8da31189bdc5965513620bb59f9d3551200c1384b61e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:28:09 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e450547381006d477a8da31189bdc5965513620bb59f9d3551200c1384b61e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:28:09 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e450547381006d477a8da31189bdc5965513620bb59f9d3551200c1384b61e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:28:09 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e450547381006d477a8da31189bdc5965513620bb59f9d3551200c1384b61e2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:28:09 np0005486808 podman[115366]: 2025-10-14 08:28:09.927813269 +0000 UTC m=+0.161081474 container init 16e3a5325df3e5f947f5c48e7ce1a080927613e9aed9e079ba7e9e554a3ae746 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_torvalds, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 04:28:09 np0005486808 podman[115366]: 2025-10-14 08:28:09.944660979 +0000 UTC m=+0.177929134 container start 16e3a5325df3e5f947f5c48e7ce1a080927613e9aed9e079ba7e9e554a3ae746 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_torvalds, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:28:09 np0005486808 podman[115366]: 2025-10-14 08:28:09.94924042 +0000 UTC m=+0.182508615 container attach 16e3a5325df3e5f947f5c48e7ce1a080927613e9aed9e079ba7e9e554a3ae746 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_torvalds, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:28:10 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 9.b scrub starts
Oct 14 04:28:10 np0005486808 python3.9[115459]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 04:28:10 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 9.b scrub ok
Oct 14 04:28:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v269: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:28:11 np0005486808 reverent_torvalds[115404]: --> passed data devices: 0 physical, 3 LVM
Oct 14 04:28:11 np0005486808 reverent_torvalds[115404]: --> relative data size: 1.0
Oct 14 04:28:11 np0005486808 reverent_torvalds[115404]: --> All data devices are unavailable
Oct 14 04:28:11 np0005486808 systemd[1]: libpod-16e3a5325df3e5f947f5c48e7ce1a080927613e9aed9e079ba7e9e554a3ae746.scope: Deactivated successfully.
Oct 14 04:28:11 np0005486808 systemd[1]: libpod-16e3a5325df3e5f947f5c48e7ce1a080927613e9aed9e079ba7e9e554a3ae746.scope: Consumed 1.101s CPU time.
Oct 14 04:28:11 np0005486808 podman[115366]: 2025-10-14 08:28:11.136361176 +0000 UTC m=+1.369629331 container died 16e3a5325df3e5f947f5c48e7ce1a080927613e9aed9e079ba7e9e554a3ae746 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_torvalds, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 14 04:28:11 np0005486808 systemd[1]: var-lib-containers-storage-overlay-2e450547381006d477a8da31189bdc5965513620bb59f9d3551200c1384b61e2-merged.mount: Deactivated successfully.
Oct 14 04:28:11 np0005486808 podman[115366]: 2025-10-14 08:28:11.213983556 +0000 UTC m=+1.447251681 container remove 16e3a5325df3e5f947f5c48e7ce1a080927613e9aed9e079ba7e9e554a3ae746 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:28:11 np0005486808 systemd[1]: libpod-conmon-16e3a5325df3e5f947f5c48e7ce1a080927613e9aed9e079ba7e9e554a3ae746.scope: Deactivated successfully.
Oct 14 04:28:11 np0005486808 python3.9[115651]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 14 04:28:11 np0005486808 podman[115833]: 2025-10-14 08:28:11.964769957 +0000 UTC m=+0.061768405 container create aca87c818c733b9682d643fee31a9ba5da12ede7d8e9259eda2dd7fefe8fc0f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_rosalind, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:28:12 np0005486808 systemd[1]: Started libpod-conmon-aca87c818c733b9682d643fee31a9ba5da12ede7d8e9259eda2dd7fefe8fc0f1.scope.
Oct 14 04:28:12 np0005486808 podman[115833]: 2025-10-14 08:28:11.942244708 +0000 UTC m=+0.039243166 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:28:12 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:28:12 np0005486808 podman[115833]: 2025-10-14 08:28:12.059590326 +0000 UTC m=+0.156588744 container init aca87c818c733b9682d643fee31a9ba5da12ede7d8e9259eda2dd7fefe8fc0f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 14 04:28:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:28:12 np0005486808 podman[115833]: 2025-10-14 08:28:12.071604498 +0000 UTC m=+0.168602936 container start aca87c818c733b9682d643fee31a9ba5da12ede7d8e9259eda2dd7fefe8fc0f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_rosalind, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 14 04:28:12 np0005486808 dazzling_rosalind[115874]: 167 167
Oct 14 04:28:12 np0005486808 podman[115833]: 2025-10-14 08:28:12.075861502 +0000 UTC m=+0.172859910 container attach aca87c818c733b9682d643fee31a9ba5da12ede7d8e9259eda2dd7fefe8fc0f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:28:12 np0005486808 systemd[1]: libpod-aca87c818c733b9682d643fee31a9ba5da12ede7d8e9259eda2dd7fefe8fc0f1.scope: Deactivated successfully.
Oct 14 04:28:12 np0005486808 podman[115833]: 2025-10-14 08:28:12.07700053 +0000 UTC m=+0.173998938 container died aca87c818c733b9682d643fee31a9ba5da12ede7d8e9259eda2dd7fefe8fc0f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 14 04:28:12 np0005486808 systemd[1]: var-lib-containers-storage-overlay-e43d66f673d3d699426ff1210f4fce4377e2d962ba2b7d1bcac853fe8b143e24-merged.mount: Deactivated successfully.
Oct 14 04:28:12 np0005486808 podman[115833]: 2025-10-14 08:28:12.111487319 +0000 UTC m=+0.208485777 container remove aca87c818c733b9682d643fee31a9ba5da12ede7d8e9259eda2dd7fefe8fc0f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_rosalind, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Oct 14 04:28:12 np0005486808 systemd[1]: libpod-conmon-aca87c818c733b9682d643fee31a9ba5da12ede7d8e9259eda2dd7fefe8fc0f1.scope: Deactivated successfully.
Oct 14 04:28:12 np0005486808 podman[115943]: 2025-10-14 08:28:12.326844573 +0000 UTC m=+0.063754823 container create 64fbbf42b090ed9e10c72053e59935153f9665c87f5404814cf6820e413cbce5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Oct 14 04:28:12 np0005486808 systemd[1]: Started libpod-conmon-64fbbf42b090ed9e10c72053e59935153f9665c87f5404814cf6820e413cbce5.scope.
Oct 14 04:28:12 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:28:12 np0005486808 podman[115943]: 2025-10-14 08:28:12.30127394 +0000 UTC m=+0.038184290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:28:12 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbf0c70669545904dafd68e2eb517398cc5c5035c7da51302f876738c0bb735a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:28:12 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbf0c70669545904dafd68e2eb517398cc5c5035c7da51302f876738c0bb735a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:28:12 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Oct 14 04:28:12 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbf0c70669545904dafd68e2eb517398cc5c5035c7da51302f876738c0bb735a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:28:12 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbf0c70669545904dafd68e2eb517398cc5c5035c7da51302f876738c0bb735a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:28:12 np0005486808 podman[115943]: 2025-10-14 08:28:12.413558405 +0000 UTC m=+0.150468655 container init 64fbbf42b090ed9e10c72053e59935153f9665c87f5404814cf6820e413cbce5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:28:12 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Oct 14 04:28:12 np0005486808 podman[115943]: 2025-10-14 08:28:12.425363452 +0000 UTC m=+0.162273692 container start 64fbbf42b090ed9e10c72053e59935153f9665c87f5404814cf6820e413cbce5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_pascal, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:28:12 np0005486808 podman[115943]: 2025-10-14 08:28:12.4289893 +0000 UTC m=+0.165899640 container attach 64fbbf42b090ed9e10c72053e59935153f9665c87f5404814cf6820e413cbce5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_pascal, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 14 04:28:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v270: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:28:12 np0005486808 python3.9[116044]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]: {
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:    "0": [
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:        {
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:            "devices": [
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:                "/dev/loop3"
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:            ],
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:            "lv_name": "ceph_lv0",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:            "lv_size": "21470642176",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:            "name": "ceph_lv0",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:            "tags": {
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:                "ceph.cluster_name": "ceph",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:                "ceph.crush_device_class": "",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:                "ceph.encrypted": "0",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:                "ceph.osd_id": "0",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:                "ceph.type": "block",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:                "ceph.vdo": "0"
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:            },
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:            "type": "block",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:            "vg_name": "ceph_vg0"
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:        }
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:    ],
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:    "1": [
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:        {
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:            "devices": [
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:                "/dev/loop4"
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:            ],
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:            "lv_name": "ceph_lv1",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:            "lv_size": "21470642176",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:            "name": "ceph_lv1",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:            "tags": {
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:                "ceph.cluster_name": "ceph",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:                "ceph.crush_device_class": "",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:                "ceph.encrypted": "0",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:                "ceph.osd_id": "1",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:                "ceph.type": "block",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:                "ceph.vdo": "0"
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:            },
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:            "type": "block",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:            "vg_name": "ceph_vg1"
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:        }
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:    ],
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:    "2": [
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:        {
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:            "devices": [
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:                "/dev/loop5"
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:            ],
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:            "lv_name": "ceph_lv2",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:            "lv_size": "21470642176",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:            "name": "ceph_lv2",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:            "tags": {
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:                "ceph.cluster_name": "ceph",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:                "ceph.crush_device_class": "",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:                "ceph.encrypted": "0",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:                "ceph.osd_id": "2",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:                "ceph.type": "block",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:                "ceph.vdo": "0"
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:            },
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:            "type": "block",
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:            "vg_name": "ceph_vg2"
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:        }
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]:    ]
Oct 14 04:28:13 np0005486808 upbeat_pascal[115966]: }
Oct 14 04:28:13 np0005486808 systemd[1]: libpod-64fbbf42b090ed9e10c72053e59935153f9665c87f5404814cf6820e413cbce5.scope: Deactivated successfully.
Oct 14 04:28:13 np0005486808 podman[115943]: 2025-10-14 08:28:13.259558513 +0000 UTC m=+0.996468753 container died 64fbbf42b090ed9e10c72053e59935153f9665c87f5404814cf6820e413cbce5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_pascal, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:28:13 np0005486808 systemd[1]: var-lib-containers-storage-overlay-bbf0c70669545904dafd68e2eb517398cc5c5035c7da51302f876738c0bb735a-merged.mount: Deactivated successfully.
Oct 14 04:28:13 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Oct 14 04:28:13 np0005486808 systemd[1]: session-38.scope: Deactivated successfully.
Oct 14 04:28:13 np0005486808 systemd[1]: session-38.scope: Consumed 2.744s CPU time.
Oct 14 04:28:13 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Oct 14 04:28:13 np0005486808 systemd-logind[799]: Session 38 logged out. Waiting for processes to exit.
Oct 14 04:28:13 np0005486808 systemd-logind[799]: Removed session 38.
Oct 14 04:28:13 np0005486808 podman[115943]: 2025-10-14 08:28:13.469319471 +0000 UTC m=+1.206229721 container remove 64fbbf42b090ed9e10c72053e59935153f9665c87f5404814cf6820e413cbce5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:28:13 np0005486808 systemd[1]: libpod-conmon-64fbbf42b090ed9e10c72053e59935153f9665c87f5404814cf6820e413cbce5.scope: Deactivated successfully.
Oct 14 04:28:13 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Oct 14 04:28:13 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Oct 14 04:28:14 np0005486808 podman[116224]: 2025-10-14 08:28:14.210792295 +0000 UTC m=+0.041061881 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:28:14 np0005486808 podman[116224]: 2025-10-14 08:28:14.339494799 +0000 UTC m=+0.169764345 container create 2f68722384629192a49724ea21115265f25a3f70ebd63d6617348240604e8fd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lehmann, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 14 04:28:14 np0005486808 systemd[1]: Started libpod-conmon-2f68722384629192a49724ea21115265f25a3f70ebd63d6617348240604e8fd1.scope.
Oct 14 04:28:14 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:28:14 np0005486808 podman[116224]: 2025-10-14 08:28:14.569935379 +0000 UTC m=+0.400204975 container init 2f68722384629192a49724ea21115265f25a3f70ebd63d6617348240604e8fd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 14 04:28:14 np0005486808 podman[116224]: 2025-10-14 08:28:14.576571511 +0000 UTC m=+0.406841057 container start 2f68722384629192a49724ea21115265f25a3f70ebd63d6617348240604e8fd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lehmann, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:28:14 np0005486808 nifty_lehmann[116240]: 167 167
Oct 14 04:28:14 np0005486808 systemd[1]: libpod-2f68722384629192a49724ea21115265f25a3f70ebd63d6617348240604e8fd1.scope: Deactivated successfully.
Oct 14 04:28:14 np0005486808 podman[116224]: 2025-10-14 08:28:14.588160413 +0000 UTC m=+0.418429939 container attach 2f68722384629192a49724ea21115265f25a3f70ebd63d6617348240604e8fd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:28:14 np0005486808 podman[116224]: 2025-10-14 08:28:14.589169258 +0000 UTC m=+0.419438774 container died 2f68722384629192a49724ea21115265f25a3f70ebd63d6617348240604e8fd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 14 04:28:14 np0005486808 systemd[1]: var-lib-containers-storage-overlay-5a0e88f72e3cfff2b64420004debcdfeb1ff9b1f3869be4486cd0465ccdaf49c-merged.mount: Deactivated successfully.
Oct 14 04:28:14 np0005486808 podman[116224]: 2025-10-14 08:28:14.680973533 +0000 UTC m=+0.511243049 container remove 2f68722384629192a49724ea21115265f25a3f70ebd63d6617348240604e8fd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lehmann, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:28:14 np0005486808 systemd[1]: libpod-conmon-2f68722384629192a49724ea21115265f25a3f70ebd63d6617348240604e8fd1.scope: Deactivated successfully.
Oct 14 04:28:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:28:14 np0005486808 podman[116266]: 2025-10-14 08:28:14.883490834 +0000 UTC m=+0.047022606 container create b2db7bdf2e02de6c963227472f4ef9a941fdc8c2599fd57555ccfdde1e4ce309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_pike, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 14 04:28:14 np0005486808 systemd[1]: Started libpod-conmon-b2db7bdf2e02de6c963227472f4ef9a941fdc8c2599fd57555ccfdde1e4ce309.scope.
Oct 14 04:28:14 np0005486808 podman[116266]: 2025-10-14 08:28:14.863281912 +0000 UTC m=+0.026813694 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:28:14 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:28:14 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9476e7656d870604a425ccde241b98b2eebb470bb5e899ec87915cfa0546b293/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:28:14 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9476e7656d870604a425ccde241b98b2eebb470bb5e899ec87915cfa0546b293/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:28:14 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9476e7656d870604a425ccde241b98b2eebb470bb5e899ec87915cfa0546b293/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:28:14 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9476e7656d870604a425ccde241b98b2eebb470bb5e899ec87915cfa0546b293/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:28:14 np0005486808 podman[116266]: 2025-10-14 08:28:14.992587951 +0000 UTC m=+0.156119793 container init b2db7bdf2e02de6c963227472f4ef9a941fdc8c2599fd57555ccfdde1e4ce309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_pike, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 14 04:28:15 np0005486808 podman[116266]: 2025-10-14 08:28:15.011307086 +0000 UTC m=+0.174838878 container start b2db7bdf2e02de6c963227472f4ef9a941fdc8c2599fd57555ccfdde1e4ce309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 14 04:28:15 np0005486808 podman[116266]: 2025-10-14 08:28:15.015441837 +0000 UTC m=+0.178973649 container attach b2db7bdf2e02de6c963227472f4ef9a941fdc8c2599fd57555ccfdde1e4ce309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_pike, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:28:15 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Oct 14 04:28:15 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Oct 14 04:28:15 np0005486808 beautiful_pike[116284]: {
Oct 14 04:28:15 np0005486808 beautiful_pike[116284]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 04:28:15 np0005486808 beautiful_pike[116284]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:28:15 np0005486808 beautiful_pike[116284]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 04:28:15 np0005486808 beautiful_pike[116284]:        "osd_id": 2,
Oct 14 04:28:15 np0005486808 beautiful_pike[116284]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:28:15 np0005486808 beautiful_pike[116284]:        "type": "bluestore"
Oct 14 04:28:15 np0005486808 beautiful_pike[116284]:    },
Oct 14 04:28:15 np0005486808 beautiful_pike[116284]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 04:28:15 np0005486808 beautiful_pike[116284]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:28:15 np0005486808 beautiful_pike[116284]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 04:28:15 np0005486808 beautiful_pike[116284]:        "osd_id": 1,
Oct 14 04:28:15 np0005486808 beautiful_pike[116284]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:28:15 np0005486808 beautiful_pike[116284]:        "type": "bluestore"
Oct 14 04:28:15 np0005486808 beautiful_pike[116284]:    },
Oct 14 04:28:15 np0005486808 beautiful_pike[116284]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 04:28:15 np0005486808 beautiful_pike[116284]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:28:15 np0005486808 beautiful_pike[116284]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 04:28:15 np0005486808 beautiful_pike[116284]:        "osd_id": 0,
Oct 14 04:28:15 np0005486808 beautiful_pike[116284]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:28:15 np0005486808 beautiful_pike[116284]:        "type": "bluestore"
Oct 14 04:28:15 np0005486808 beautiful_pike[116284]:    }
Oct 14 04:28:15 np0005486808 beautiful_pike[116284]: }
Oct 14 04:28:16 np0005486808 systemd[1]: libpod-b2db7bdf2e02de6c963227472f4ef9a941fdc8c2599fd57555ccfdde1e4ce309.scope: Deactivated successfully.
Oct 14 04:28:16 np0005486808 podman[116266]: 2025-10-14 08:28:16.0143615 +0000 UTC m=+1.177893272 container died b2db7bdf2e02de6c963227472f4ef9a941fdc8c2599fd57555ccfdde1e4ce309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_pike, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:28:16 np0005486808 systemd[1]: libpod-b2db7bdf2e02de6c963227472f4ef9a941fdc8c2599fd57555ccfdde1e4ce309.scope: Consumed 1.006s CPU time.
Oct 14 04:28:16 np0005486808 systemd[1]: var-lib-containers-storage-overlay-9476e7656d870604a425ccde241b98b2eebb470bb5e899ec87915cfa0546b293-merged.mount: Deactivated successfully.
Oct 14 04:28:16 np0005486808 podman[116266]: 2025-10-14 08:28:16.086661771 +0000 UTC m=+1.250193583 container remove b2db7bdf2e02de6c963227472f4ef9a941fdc8c2599fd57555ccfdde1e4ce309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_pike, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:28:16 np0005486808 systemd[1]: libpod-conmon-b2db7bdf2e02de6c963227472f4ef9a941fdc8c2599fd57555ccfdde1e4ce309.scope: Deactivated successfully.
Oct 14 04:28:16 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:28:16 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:28:16 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:28:16 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:28:16 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 868c726c-356a-4a4d-a242-e17ec4b515d7 does not exist
Oct 14 04:28:16 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 05b1ffe4-5b32-4ae4-9839-92e6b0e7a5c8 does not exist
Oct 14 04:28:16 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Oct 14 04:28:16 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Oct 14 04:28:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v272: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:28:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:28:17 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:28:17 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:28:17 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 9.d deep-scrub starts
Oct 14 04:28:17 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 9.d deep-scrub ok
Oct 14 04:28:17 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Oct 14 04:28:17 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Oct 14 04:28:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v273: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:28:19 np0005486808 systemd-logind[799]: New session 39 of user zuul.
Oct 14 04:28:19 np0005486808 systemd[1]: Started Session 39 of User zuul.
Oct 14 04:28:20 np0005486808 python3.9[116533]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 04:28:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v274: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:28:21 np0005486808 python3.9[116687]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 04:28:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:28:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v275: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:28:22 np0005486808 python3.9[116843]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 14 04:28:22 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Oct 14 04:28:22 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Oct 14 04:28:23 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Oct 14 04:28:23 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Oct 14 04:28:23 np0005486808 python3.9[116927]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 14 04:28:24 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Oct 14 04:28:24 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Oct 14 04:28:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v276: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:28:25 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Oct 14 04:28:25 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Oct 14 04:28:25 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Oct 14 04:28:25 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Oct 14 04:28:25 np0005486808 python3.9[117080]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 14 04:28:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v277: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:28:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:28:27 np0005486808 python3.9[117275]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:28:27 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Oct 14 04:28:27 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Oct 14 04:28:27 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Oct 14 04:28:27 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Oct 14 04:28:28 np0005486808 python3.9[117427]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:28:28 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 4.10 deep-scrub starts
Oct 14 04:28:28 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 4.10 deep-scrub ok
Oct 14 04:28:28 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Oct 14 04:28:28 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Oct 14 04:28:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v278: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:28:29 np0005486808 python3.9[117592]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:28:29 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Oct 14 04:28:29 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Oct 14 04:28:30 np0005486808 python3.9[117670]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:28:30 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 4.f scrub starts
Oct 14 04:28:30 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 4.f scrub ok
Oct 14 04:28:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:28:30 np0005486808 python3.9[117822]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:28:31 np0005486808 python3.9[117900]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:28:31 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Oct 14 04:28:31 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Oct 14 04:28:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:28:32 np0005486808 python3.9[118052]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:28:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_08:28:32
Oct 14 04:28:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 04:28:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 04:28:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'images', 'vms', 'volumes', 'backups']
Oct 14 04:28:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 04:28:32 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 6.a scrub starts
Oct 14 04:28:32 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 6.a scrub ok
Oct 14 04:28:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:28:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:28:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:28:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:28:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:28:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:28:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v280: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:28:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 04:28:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:28:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 04:28:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:28:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:28:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:28:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:28:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:28:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:28:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:28:33 np0005486808 python3.9[118204]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:28:33 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 4.d scrub starts
Oct 14 04:28:33 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 4.d scrub ok
Oct 14 04:28:33 np0005486808 python3.9[118356]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:28:34 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 11.15 deep-scrub starts
Oct 14 04:28:34 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 11.15 deep-scrub ok
Oct 14 04:28:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v281: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:28:34 np0005486808 python3.9[118508]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:28:35 np0005486808 python3.9[118660]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 14 04:28:35 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Oct 14 04:28:35 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Oct 14 04:28:36 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Oct 14 04:28:36 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Oct 14 04:28:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v282: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:28:36 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Oct 14 04:28:36 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Oct 14 04:28:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:28:37 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 11.3 deep-scrub starts
Oct 14 04:28:37 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 11.3 deep-scrub ok
Oct 14 04:28:37 np0005486808 python3.9[118813]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 04:28:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:28:38 np0005486808 python3.9[118967]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:28:39 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Oct 14 04:28:39 np0005486808 python3.9[119119]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:28:39 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Oct 14 04:28:39 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 11.6 deep-scrub starts
Oct 14 04:28:39 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 11.6 deep-scrub ok
Oct 14 04:28:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v284: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:28:40 np0005486808 python3.9[119271]: ansible-service_facts Invoked
Oct 14 04:28:40 np0005486808 network[119288]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 14 04:28:40 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 11.e scrub starts
Oct 14 04:28:40 np0005486808 network[119289]: 'network-scripts' will be removed from distribution in near future.
Oct 14 04:28:40 np0005486808 network[119290]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 14 04:28:40 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 11.e scrub ok
Oct 14 04:28:41 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Oct 14 04:28:41 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Oct 14 04:28:41 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 11.19 deep-scrub starts
Oct 14 04:28:41 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 11.19 deep-scrub ok
Oct 14 04:28:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:28:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 04:28:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:28:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 04:28:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:28:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:28:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:28:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:28:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:28:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:28:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:28:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:28:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:28:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 04:28:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:28:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:28:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:28:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 04:28:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:28:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 04:28:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:28:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:28:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:28:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 04:28:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:28:43 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Oct 14 04:28:43 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Oct 14 04:28:43 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Oct 14 04:28:43 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Oct 14 04:28:43 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 11.14 deep-scrub starts
Oct 14 04:28:43 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 11.14 deep-scrub ok
Oct 14 04:28:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v286: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:28:44 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Oct 14 04:28:44 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Oct 14 04:28:45 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Oct 14 04:28:45 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Oct 14 04:28:45 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 11.f deep-scrub starts
Oct 14 04:28:45 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 11.f deep-scrub ok
Oct 14 04:28:46 np0005486808 python3.9[119745]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 14 04:28:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:28:46 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Oct 14 04:28:46 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:28:46.986720) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 04:28:46 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Oct 14 04:28:46 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430526986891, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7101, "num_deletes": 251, "total_data_size": 9142438, "memory_usage": 9384480, "flush_reason": "Manual Compaction"}
Oct 14 04:28:46 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Oct 14 04:28:47 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430527022849, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7323587, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 141, "largest_seqno": 7239, "table_properties": {"data_size": 7297679, "index_size": 16833, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8005, "raw_key_size": 73982, "raw_average_key_size": 23, "raw_value_size": 7236418, "raw_average_value_size": 2269, "num_data_blocks": 741, "num_entries": 3189, "num_filter_entries": 3189, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430106, "oldest_key_time": 1760430106, "file_creation_time": 1760430526, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Oct 14 04:28:47 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 36149 microseconds, and 16012 cpu microseconds.
Oct 14 04:28:47 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:28:47.022908) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7323587 bytes OK
Oct 14 04:28:47 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:28:47.022927) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Oct 14 04:28:47 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:28:47.024402) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Oct 14 04:28:47 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:28:47.024418) EVENT_LOG_v1 {"time_micros": 1760430527024413, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Oct 14 04:28:47 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:28:47.024438) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Oct 14 04:28:47 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9111680, prev total WAL file size 9111680, number of live WAL files 2.
Oct 14 04:28:47 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 04:28:47 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:28:47.027075) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Oct 14 04:28:47 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Oct 14 04:28:47 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7151KB) 13(53KB) 8(1944B)]
Oct 14 04:28:47 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430527027207, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7380792, "oldest_snapshot_seqno": -1}
Oct 14 04:28:47 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3005 keys, 7336177 bytes, temperature: kUnknown
Oct 14 04:28:47 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430527069471, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7336177, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7310674, "index_size": 16881, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7557, "raw_key_size": 72054, "raw_average_key_size": 23, "raw_value_size": 7250968, "raw_average_value_size": 2412, "num_data_blocks": 745, "num_entries": 3005, "num_filter_entries": 3005, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760430527, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Oct 14 04:28:47 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 04:28:47 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:28:47.069744) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7336177 bytes
Oct 14 04:28:47 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:28:47.071320) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 174.3 rd, 173.2 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.0, 0.0 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3295, records dropped: 290 output_compression: NoCompression
Oct 14 04:28:47 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:28:47.071351) EVENT_LOG_v1 {"time_micros": 1760430527071335, "job": 4, "event": "compaction_finished", "compaction_time_micros": 42345, "compaction_time_cpu_micros": 27271, "output_level": 6, "num_output_files": 1, "total_output_size": 7336177, "num_input_records": 3295, "num_output_records": 3005, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 04:28:47 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 04:28:47 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430527073844, "job": 4, "event": "table_file_deletion", "file_number": 19}
Oct 14 04:28:47 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 04:28:47 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430527073953, "job": 4, "event": "table_file_deletion", "file_number": 13}
Oct 14 04:28:47 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 04:28:47 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430527074018, "job": 4, "event": "table_file_deletion", "file_number": 8}
Oct 14 04:28:47 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:28:47.026888) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:28:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:28:48 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Oct 14 04:28:48 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Oct 14 04:28:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v288: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:28:48 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Oct 14 04:28:48 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Oct 14 04:28:49 np0005486808 python3.9[119899]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct 14 04:28:49 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Oct 14 04:28:49 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Oct 14 04:28:50 np0005486808 python3.9[120051]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:28:50 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Oct 14 04:28:50 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Oct 14 04:28:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v289: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:28:51 np0005486808 python3.9[120129]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:28:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:28:52 np0005486808 python3.9[120281]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:28:52 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Oct 14 04:28:52 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Oct 14 04:28:52 np0005486808 python3.9[120359]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:28:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:28:54 np0005486808 python3.9[120511]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:28:54 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Oct 14 04:28:54 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Oct 14 04:28:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v291: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:28:55 np0005486808 python3.9[120663]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 14 04:28:56 np0005486808 python3.9[120747]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:28:56 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 10.d scrub starts
Oct 14 04:28:56 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 10.d scrub ok
Oct 14 04:28:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:28:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:28:57 np0005486808 systemd[1]: session-39.scope: Deactivated successfully.
Oct 14 04:28:57 np0005486808 systemd[1]: session-39.scope: Consumed 27.185s CPU time.
Oct 14 04:28:57 np0005486808 systemd-logind[799]: Session 39 logged out. Waiting for processes to exit.
Oct 14 04:28:57 np0005486808 systemd-logind[799]: Removed session 39.
Oct 14 04:28:57 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Oct 14 04:28:57 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Oct 14 04:28:58 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Oct 14 04:28:58 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Oct 14 04:28:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:28:59 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Oct 14 04:28:59 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Oct 14 04:29:00 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Oct 14 04:29:00 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Oct 14 04:29:00 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Oct 14 04:29:00 np0005486808 ceph-osd[87348]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Oct 14 04:29:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v294: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:29:01 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 11.b scrub starts
Oct 14 04:29:01 np0005486808 ceph-osd[89514]: log_channel(cluster) log [DBG] : 11.b scrub ok
Oct 14 04:29:01 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Oct 14 04:29:01 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Oct 14 04:29:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:29:02 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Oct 14 04:29:02 np0005486808 ceph-osd[88375]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Oct 14 04:29:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:29:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:31:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:31:41 np0005486808 rsyslogd[1002]: imjournal: 1963 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct 14 04:31:41 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:31:41 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:31:41 np0005486808 systemd-logind[799]: New session 47 of user zuul.
Oct 14 04:31:41 np0005486808 systemd[1]: Started Session 47 of User zuul.
Oct 14 04:31:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:31:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 04:31:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:31:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 04:31:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:31:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:31:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:31:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:31:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:31:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:31:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:31:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:31:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:31:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 04:31:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:31:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:31:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:31:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 04:31:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:31:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 04:31:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:31:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:31:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:31:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 04:31:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:31:42 np0005486808 python3.9[141330]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 04:31:44 np0005486808 python3.9[141486]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:31:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:31:45 np0005486808 python3.9[141638]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:31:45 np0005486808 python3.9[141788]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 04:31:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:31:46 np0005486808 python3.9[141940]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 14 04:31:46 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 04:31:46 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2022 writes, 8945 keys, 2022 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 2022 writes, 2022 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2022 writes, 8945 keys, 2022 commit groups, 1.0 writes per commit group, ingest: 11.30 MB, 0.02 MB/s#012Interval WAL: 2022 writes, 2022 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    150.8      0.06              0.02         3    0.018       0      0       0.0       0.0#012  L6      1/0    6.46 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    162.1    142.2      0.09              0.06         2    0.047    7064    731       0.0       0.0#012 Sum      1/0    6.46 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    102.3    145.4      0.15              0.09         5    0.030    7064    731       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    107.5    152.4      0.14              0.09         4    0.036    7064    731       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    162.1    142.2      0.09              0.06         2    0.047    7064    731       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    172.4      0.05              0.02         2    0.024       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      7.3      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.008, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.01 GB read, 0.03 MB/s read, 0.1 seconds#012Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.01 GB read, 0.03 MB/s read, 0.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5646f3b2b1f0#2 capacity: 308.00 MB usage: 574.91 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 6.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(37,488.19 KB,0.154788%) FilterBlock(6,27.55 KB,0.00873417%) IndexBlock(6,59.17 KB,0.0187614%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct 14 04:31:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:31:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:31:48 np0005486808 dbus-broker-launch[782]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Oct 14 04:31:49 np0005486808 python3.9[142097]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 14 04:31:50 np0005486808 python3.9[142181]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 14 04:31:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:31:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:31:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:31:52 np0005486808 python3.9[142334]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 14 04:31:53 np0005486808 python3[142489]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Oct 14 04:31:54 np0005486808 python3.9[142641]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:31:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:31:55 np0005486808 python3.9[142793]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:31:56 np0005486808 python3.9[142871]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:31:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:31:57 np0005486808 python3.9[143023]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:31:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:31:57 np0005486808 python3.9[143101]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.ja8ucdwp recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:31:58 np0005486808 python3.9[143253]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:31:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:31:58 np0005486808 python3.9[143331]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:31:59 np0005486808 python3.9[143483]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:32:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:32:00 np0005486808 python3[143636]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 14 04:32:01 np0005486808 python3.9[143788]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:32:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:32:02 np0005486808 python3.9[143913]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760430721.097073-157-234187492282452/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:32:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:32:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:32:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:32:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:32:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:32:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:32:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:32:03 np0005486808 python3.9[144065]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:32:03 np0005486808 python3.9[144190]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760430722.6929598-172-276268033566112/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:32:04 np0005486808 python3.9[144342]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:32:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:32:05 np0005486808 python3.9[144467]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760430724.2017875-187-158506614164906/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:32:06 np0005486808 python3.9[144619]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:32:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:32:06 np0005486808 python3.9[144744]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760430725.6858935-202-114715328315099/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:32:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:32:07 np0005486808 python3.9[144896]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:32:08 np0005486808 python3.9[145021]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760430727.1667151-217-241853947002226/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:32:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:32:09 np0005486808 python3.9[145173]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:32:10 np0005486808 python3.9[145325]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:32:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:32:11 np0005486808 python3.9[145480]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:32:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:32:12 np0005486808 python3.9[145632]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:32:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:32:12 np0005486808 python3.9[145785]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:32:13 np0005486808 python3.9[145939]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:32:14 np0005486808 python3.9[146094]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:32:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:32:15 np0005486808 python3.9[146244]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 04:32:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:32:16 np0005486808 python3.9[146397]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:c0:16:5a:16" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:32:16 np0005486808 ovs-vsctl[146398]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:c0:16:5a:16 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Oct 14 04:32:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:32:17 np0005486808 python3.9[146550]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:32:18 np0005486808 python3.9[146705]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:32:18 np0005486808 ovs-vsctl[146706]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Oct 14 04:32:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:32:19 np0005486808 python3.9[146856]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:32:20 np0005486808 python3.9[147010]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:32:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:32:21 np0005486808 python3.9[147162]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:32:21 np0005486808 python3.9[147240]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:32:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:32:22 np0005486808 python3.9[147392]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:32:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:32:22 np0005486808 python3.9[147470]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:32:23 np0005486808 python3.9[147622]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:32:24 np0005486808 python3.9[147774]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:32:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:32:24 np0005486808 python3.9[147852]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:32:25 np0005486808 python3.9[148004]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:32:26 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Oct 14 04:32:26 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:32:26.098315) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 04:32:26 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Oct 14 04:32:26 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430746098373, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 735, "num_deletes": 251, "total_data_size": 954199, "memory_usage": 967560, "flush_reason": "Manual Compaction"}
Oct 14 04:32:26 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Oct 14 04:32:26 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430746104272, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 945889, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8880, "largest_seqno": 9614, "table_properties": {"data_size": 942051, "index_size": 1618, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8146, "raw_average_key_size": 18, "raw_value_size": 934451, "raw_average_value_size": 2128, "num_data_blocks": 75, "num_entries": 439, "num_filter_entries": 439, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430683, "oldest_key_time": 1760430683, "file_creation_time": 1760430746, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Oct 14 04:32:26 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 5981 microseconds, and 2706 cpu microseconds.
Oct 14 04:32:26 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 04:32:26 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:32:26.104304) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 945889 bytes OK
Oct 14 04:32:26 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:32:26.104321) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Oct 14 04:32:26 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:32:26.105583) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Oct 14 04:32:26 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:32:26.105593) EVENT_LOG_v1 {"time_micros": 1760430746105590, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 04:32:26 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:32:26.105608) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 04:32:26 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 950435, prev total WAL file size 950435, number of live WAL files 2.
Oct 14 04:32:26 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 04:32:26 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:32:26.106134) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Oct 14 04:32:26 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 04:32:26 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(923KB)], [23(6614KB)]
Oct 14 04:32:26 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430746106165, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 7719517, "oldest_snapshot_seqno": -1}
Oct 14 04:32:26 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3253 keys, 5977841 bytes, temperature: kUnknown
Oct 14 04:32:26 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430746134320, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 5977841, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5954457, "index_size": 14123, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8197, "raw_key_size": 78859, "raw_average_key_size": 24, "raw_value_size": 5893955, "raw_average_value_size": 1811, "num_data_blocks": 617, "num_entries": 3253, "num_filter_entries": 3253, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760430746, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Oct 14 04:32:26 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 04:32:26 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:32:26.134520) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 5977841 bytes
Oct 14 04:32:26 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:32:26.136033) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 273.6 rd, 211.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 6.5 +0.0 blob) out(5.7 +0.0 blob), read-write-amplify(14.5) write-amplify(6.3) OK, records in: 3767, records dropped: 514 output_compression: NoCompression
Oct 14 04:32:26 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:32:26.136058) EVENT_LOG_v1 {"time_micros": 1760430746136045, "job": 8, "event": "compaction_finished", "compaction_time_micros": 28216, "compaction_time_cpu_micros": 15784, "output_level": 6, "num_output_files": 1, "total_output_size": 5977841, "num_input_records": 3767, "num_output_records": 3253, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 04:32:26 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 04:32:26 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430746136329, "job": 8, "event": "table_file_deletion", "file_number": 25}
Oct 14 04:32:26 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 04:32:26 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430746137598, "job": 8, "event": "table_file_deletion", "file_number": 23}
Oct 14 04:32:26 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:32:26.106077) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:32:26 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:32:26.137653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:32:26 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:32:26.137660) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:32:26 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:32:26.137663) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:32:26 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:32:26.137667) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:32:26 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:32:26.137671) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:32:26 np0005486808 python3.9[148082]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:32:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:32:27 np0005486808 python3.9[148234]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:32:27 np0005486808 systemd[1]: Reloading.
Oct 14 04:32:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:32:27 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:32:27 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:32:28 np0005486808 python3.9[148423]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:32:28 np0005486808 python3.9[148501]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:32:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:32:29 np0005486808 python3.9[148653]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:32:30 np0005486808 python3.9[148731]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:32:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:32:31 np0005486808 python3.9[148883]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:32:31 np0005486808 systemd[1]: Reloading.
Oct 14 04:32:31 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:32:31 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:32:31 np0005486808 systemd[1]: Starting Create netns directory...
Oct 14 04:32:31 np0005486808 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 14 04:32:31 np0005486808 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 14 04:32:31 np0005486808 systemd[1]: Finished Create netns directory.
Oct 14 04:32:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:32:32 np0005486808 python3.9[149075]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:32:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_08:32:32
Oct 14 04:32:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 04:32:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 04:32:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'images', 'default.rgw.control', 'volumes', 'vms', '.rgw.root', 'default.rgw.meta', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta']
Oct 14 04:32:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 04:32:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:32:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:32:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:32:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:32:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:32:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:32:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 04:32:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 04:32:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:32:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:32:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:32:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:32:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:32:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:32:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:32:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:32:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:32:33 np0005486808 python3.9[149227]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:32:34 np0005486808 python3.9[149350]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760430752.8136692-468-268984141597190/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:32:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:32:35 np0005486808 python3.9[149502]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:32:35 np0005486808 python3.9[149654]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:32:36 np0005486808 python3.9[149777]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760430755.2602289-493-59393879694290/.source.json _original_basename=.0zlsacwq follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:32:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:32:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:32:37 np0005486808 python3.9[149929]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:32:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:32:40 np0005486808 python3.9[150356]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Oct 14 04:32:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:32:41 np0005486808 python3.9[150623]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 14 04:32:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:32:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:32:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 04:32:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:32:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 04:32:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:32:41 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 19f917bb-1f2e-412a-b686-dd2f879aff66 does not exist
Oct 14 04:32:41 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 65fd3d19-140a-48a2-aa5b-55d092a83daa does not exist
Oct 14 04:32:41 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 98c566f3-5d8c-40b3-8d5b-de023d5d4520 does not exist
Oct 14 04:32:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 04:32:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 04:32:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 04:32:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:32:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:32:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:32:41 np0005486808 podman[150914]: 2025-10-14 08:32:41.91945465 +0000 UTC m=+0.048686088 container create 06d077e4ab5e101b351e684c721d785f930f6462fbf730eae1ef602140971b22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_torvalds, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 14 04:32:41 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:32:41 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:32:41 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:32:41 np0005486808 systemd[1]: Started libpod-conmon-06d077e4ab5e101b351e684c721d785f930f6462fbf730eae1ef602140971b22.scope.
Oct 14 04:32:41 np0005486808 podman[150914]: 2025-10-14 08:32:41.898345452 +0000 UTC m=+0.027576940 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:32:41 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:32:42 np0005486808 podman[150914]: 2025-10-14 08:32:42.010208941 +0000 UTC m=+0.139440399 container init 06d077e4ab5e101b351e684c721d785f930f6462fbf730eae1ef602140971b22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:32:42 np0005486808 podman[150914]: 2025-10-14 08:32:42.016417895 +0000 UTC m=+0.145649343 container start 06d077e4ab5e101b351e684c721d785f930f6462fbf730eae1ef602140971b22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 14 04:32:42 np0005486808 podman[150914]: 2025-10-14 08:32:42.019796554 +0000 UTC m=+0.149028022 container attach 06d077e4ab5e101b351e684c721d785f930f6462fbf730eae1ef602140971b22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 14 04:32:42 np0005486808 great_torvalds[150949]: 167 167
Oct 14 04:32:42 np0005486808 systemd[1]: libpod-06d077e4ab5e101b351e684c721d785f930f6462fbf730eae1ef602140971b22.scope: Deactivated successfully.
Oct 14 04:32:42 np0005486808 podman[150914]: 2025-10-14 08:32:42.022514936 +0000 UTC m=+0.151746414 container died 06d077e4ab5e101b351e684c721d785f930f6462fbf730eae1ef602140971b22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:32:42 np0005486808 systemd[1]: var-lib-containers-storage-overlay-fa25ce5e8957ec67d1ac373bc53061de1dcfecf9cc0a9ea74598246116fd8663-merged.mount: Deactivated successfully.
Oct 14 04:32:42 np0005486808 podman[150914]: 2025-10-14 08:32:42.084181087 +0000 UTC m=+0.213412525 container remove 06d077e4ab5e101b351e684c721d785f930f6462fbf730eae1ef602140971b22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 14 04:32:42 np0005486808 systemd[1]: libpod-conmon-06d077e4ab5e101b351e684c721d785f930f6462fbf730eae1ef602140971b22.scope: Deactivated successfully.
Oct 14 04:32:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:32:42 np0005486808 python3.9[150946]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 14 04:32:42 np0005486808 podman[150993]: 2025-10-14 08:32:42.288687407 +0000 UTC m=+0.062863544 container create 20c341b5433727be1f4d6900af499a801bd0c7024dd261c6e841627be1ebc06d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 14 04:32:42 np0005486808 systemd[1]: Started libpod-conmon-20c341b5433727be1f4d6900af499a801bd0c7024dd261c6e841627be1ebc06d.scope.
Oct 14 04:32:42 np0005486808 podman[150993]: 2025-10-14 08:32:42.270407213 +0000 UTC m=+0.044583370 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:32:42 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:32:42 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de2383a220a0a96fc282f5a7b154d6390ae8ecc40c97f8c719f021a9ff68bde8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:32:42 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de2383a220a0a96fc282f5a7b154d6390ae8ecc40c97f8c719f021a9ff68bde8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:32:42 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de2383a220a0a96fc282f5a7b154d6390ae8ecc40c97f8c719f021a9ff68bde8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:32:42 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de2383a220a0a96fc282f5a7b154d6390ae8ecc40c97f8c719f021a9ff68bde8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:32:42 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de2383a220a0a96fc282f5a7b154d6390ae8ecc40c97f8c719f021a9ff68bde8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:32:42 np0005486808 podman[150993]: 2025-10-14 08:32:42.396808856 +0000 UTC m=+0.170985013 container init 20c341b5433727be1f4d6900af499a801bd0c7024dd261c6e841627be1ebc06d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 14 04:32:42 np0005486808 podman[150993]: 2025-10-14 08:32:42.407295814 +0000 UTC m=+0.181471941 container start 20c341b5433727be1f4d6900af499a801bd0c7024dd261c6e841627be1ebc06d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_robinson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 14 04:32:42 np0005486808 podman[150993]: 2025-10-14 08:32:42.411337801 +0000 UTC m=+0.185513958 container attach 20c341b5433727be1f4d6900af499a801bd0c7024dd261c6e841627be1ebc06d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:32:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 04:32:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:32:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 04:32:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:32:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:32:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:32:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:32:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:32:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:32:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:32:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:32:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:32:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 04:32:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:32:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:32:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:32:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 04:32:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:32:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 04:32:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:32:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:32:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:32:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 04:32:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:32:43 np0005486808 strange_robinson[151016]: --> passed data devices: 0 physical, 3 LVM
Oct 14 04:32:43 np0005486808 strange_robinson[151016]: --> relative data size: 1.0
Oct 14 04:32:43 np0005486808 strange_robinson[151016]: --> All data devices are unavailable
Oct 14 04:32:43 np0005486808 systemd[1]: libpod-20c341b5433727be1f4d6900af499a801bd0c7024dd261c6e841627be1ebc06d.scope: Deactivated successfully.
Oct 14 04:32:43 np0005486808 systemd[1]: libpod-20c341b5433727be1f4d6900af499a801bd0c7024dd261c6e841627be1ebc06d.scope: Consumed 1.022s CPU time.
Oct 14 04:32:43 np0005486808 podman[150993]: 2025-10-14 08:32:43.512106547 +0000 UTC m=+1.286282704 container died 20c341b5433727be1f4d6900af499a801bd0c7024dd261c6e841627be1ebc06d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 14 04:32:43 np0005486808 systemd[1]: var-lib-containers-storage-overlay-de2383a220a0a96fc282f5a7b154d6390ae8ecc40c97f8c719f021a9ff68bde8-merged.mount: Deactivated successfully.
Oct 14 04:32:43 np0005486808 podman[150993]: 2025-10-14 08:32:43.583163456 +0000 UTC m=+1.357339593 container remove 20c341b5433727be1f4d6900af499a801bd0c7024dd261c6e841627be1ebc06d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_robinson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 14 04:32:43 np0005486808 systemd[1]: libpod-conmon-20c341b5433727be1f4d6900af499a801bd0c7024dd261c6e841627be1ebc06d.scope: Deactivated successfully.
Oct 14 04:32:43 np0005486808 python3[151210]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 14 04:32:44 np0005486808 podman[151376]: 2025-10-14 08:32:44.280487321 +0000 UTC m=+0.048854203 container create 839506223c415f82a3d14621b8ad99f894db782228216c138948e3b55901347c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_matsumoto, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:32:44 np0005486808 systemd[1]: Started libpod-conmon-839506223c415f82a3d14621b8ad99f894db782228216c138948e3b55901347c.scope.
Oct 14 04:32:44 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:32:44 np0005486808 podman[151376]: 2025-10-14 08:32:44.259387053 +0000 UTC m=+0.027753945 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:32:44 np0005486808 podman[151376]: 2025-10-14 08:32:44.361792302 +0000 UTC m=+0.130159194 container init 839506223c415f82a3d14621b8ad99f894db782228216c138948e3b55901347c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_matsumoto, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Oct 14 04:32:44 np0005486808 podman[151376]: 2025-10-14 08:32:44.369561977 +0000 UTC m=+0.137928879 container start 839506223c415f82a3d14621b8ad99f894db782228216c138948e3b55901347c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:32:44 np0005486808 podman[151376]: 2025-10-14 08:32:44.374288432 +0000 UTC m=+0.142655324 container attach 839506223c415f82a3d14621b8ad99f894db782228216c138948e3b55901347c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 14 04:32:44 np0005486808 elated_matsumoto[151393]: 167 167
Oct 14 04:32:44 np0005486808 podman[151376]: 2025-10-14 08:32:44.376816189 +0000 UTC m=+0.145183101 container died 839506223c415f82a3d14621b8ad99f894db782228216c138948e3b55901347c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_matsumoto, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 14 04:32:44 np0005486808 systemd[1]: libpod-839506223c415f82a3d14621b8ad99f894db782228216c138948e3b55901347c.scope: Deactivated successfully.
Oct 14 04:32:44 np0005486808 systemd[1]: var-lib-containers-storage-overlay-093f1e9094e0d90dcde7ce9695878dca79bb1f1df44fc39c67b5cef7e52d676c-merged.mount: Deactivated successfully.
Oct 14 04:32:44 np0005486808 podman[151376]: 2025-10-14 08:32:44.42107425 +0000 UTC m=+0.189441162 container remove 839506223c415f82a3d14621b8ad99f894db782228216c138948e3b55901347c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:32:44 np0005486808 systemd[1]: libpod-conmon-839506223c415f82a3d14621b8ad99f894db782228216c138948e3b55901347c.scope: Deactivated successfully.
Oct 14 04:32:44 np0005486808 podman[151418]: 2025-10-14 08:32:44.645913657 +0000 UTC m=+0.053590679 container create 8d5db51e58adea0053701fcac0776d504b0eb8e2341c65e17615d682b0938e42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_elbakyan, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 04:32:44 np0005486808 systemd[1]: Started libpod-conmon-8d5db51e58adea0053701fcac0776d504b0eb8e2341c65e17615d682b0938e42.scope.
Oct 14 04:32:44 np0005486808 podman[151418]: 2025-10-14 08:32:44.625450826 +0000 UTC m=+0.033127848 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:32:44 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:32:44 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/747f3ecacd29a3c49616eaccc328fd8a8ab5e06b8b34cd0fd1c99e8e9b947457/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:32:44 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/747f3ecacd29a3c49616eaccc328fd8a8ab5e06b8b34cd0fd1c99e8e9b947457/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:32:44 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/747f3ecacd29a3c49616eaccc328fd8a8ab5e06b8b34cd0fd1c99e8e9b947457/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:32:44 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/747f3ecacd29a3c49616eaccc328fd8a8ab5e06b8b34cd0fd1c99e8e9b947457/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:32:44 np0005486808 podman[151418]: 2025-10-14 08:32:44.778534905 +0000 UTC m=+0.186211907 container init 8d5db51e58adea0053701fcac0776d504b0eb8e2341c65e17615d682b0938e42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:32:44 np0005486808 podman[151418]: 2025-10-14 08:32:44.785343835 +0000 UTC m=+0.193020817 container start 8d5db51e58adea0053701fcac0776d504b0eb8e2341c65e17615d682b0938e42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_elbakyan, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 14 04:32:44 np0005486808 podman[151418]: 2025-10-14 08:32:44.788283623 +0000 UTC m=+0.195960635 container attach 8d5db51e58adea0053701fcac0776d504b0eb8e2341c65e17615d682b0938e42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_elbakyan, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 14 04:32:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]: {
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:    "0": [
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:        {
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:            "devices": [
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:                "/dev/loop3"
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:            ],
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:            "lv_name": "ceph_lv0",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:            "lv_size": "21470642176",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:            "name": "ceph_lv0",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:            "tags": {
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:                "ceph.cluster_name": "ceph",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:                "ceph.crush_device_class": "",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:                "ceph.encrypted": "0",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:                "ceph.osd_id": "0",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:                "ceph.type": "block",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:                "ceph.vdo": "0"
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:            },
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:            "type": "block",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:            "vg_name": "ceph_vg0"
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:        }
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:    ],
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:    "1": [
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:        {
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:            "devices": [
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:                "/dev/loop4"
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:            ],
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:            "lv_name": "ceph_lv1",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:            "lv_size": "21470642176",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:            "name": "ceph_lv1",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:            "tags": {
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:                "ceph.cluster_name": "ceph",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:                "ceph.crush_device_class": "",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:                "ceph.encrypted": "0",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:                "ceph.osd_id": "1",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:                "ceph.type": "block",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:                "ceph.vdo": "0"
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:            },
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:            "type": "block",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:            "vg_name": "ceph_vg1"
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:        }
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:    ],
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:    "2": [
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:        {
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:            "devices": [
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:                "/dev/loop5"
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:            ],
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:            "lv_name": "ceph_lv2",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:            "lv_size": "21470642176",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:            "name": "ceph_lv2",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:            "tags": {
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:                "ceph.cluster_name": "ceph",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:                "ceph.crush_device_class": "",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:                "ceph.encrypted": "0",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:                "ceph.osd_id": "2",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:                "ceph.type": "block",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:                "ceph.vdo": "0"
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:            },
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:            "type": "block",
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:            "vg_name": "ceph_vg2"
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:        }
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]:    ]
Oct 14 04:32:45 np0005486808 tender_elbakyan[151439]: }
Oct 14 04:32:45 np0005486808 podman[151418]: 2025-10-14 08:32:45.565656544 +0000 UTC m=+0.973333516 container died 8d5db51e58adea0053701fcac0776d504b0eb8e2341c65e17615d682b0938e42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_elbakyan, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:32:45 np0005486808 systemd[1]: libpod-8d5db51e58adea0053701fcac0776d504b0eb8e2341c65e17615d682b0938e42.scope: Deactivated successfully.
Oct 14 04:32:45 np0005486808 systemd[1]: var-lib-containers-storage-overlay-747f3ecacd29a3c49616eaccc328fd8a8ab5e06b8b34cd0fd1c99e8e9b947457-merged.mount: Deactivated successfully.
Oct 14 04:32:45 np0005486808 podman[151418]: 2025-10-14 08:32:45.61653351 +0000 UTC m=+1.024210492 container remove 8d5db51e58adea0053701fcac0776d504b0eb8e2341c65e17615d682b0938e42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 14 04:32:45 np0005486808 systemd[1]: libpod-conmon-8d5db51e58adea0053701fcac0776d504b0eb8e2341c65e17615d682b0938e42.scope: Deactivated successfully.
Oct 14 04:32:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:32:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:32:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:32:49 np0005486808 podman[151323]: 2025-10-14 08:32:49.104656312 +0000 UTC m=+5.190300876 image pull 3b86aea1acd0e80af91d8a3efa79cc99f54489e3c22377193c4282a256797350 quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf
Oct 14 04:32:49 np0005486808 podman[151685]: 2025-10-14 08:32:49.198532415 +0000 UTC m=+0.050957689 container create b78218ad9bb91d5e57b615628a85bc475b8b9e4f8e3796dc55a5618c164282cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_meitner, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:32:49 np0005486808 podman[151711]: 2025-10-14 08:32:49.23500537 +0000 UTC m=+0.050230510 container create 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true)
Oct 14 04:32:49 np0005486808 podman[151711]: 2025-10-14 08:32:49.204976596 +0000 UTC m=+0.020201756 image pull 3b86aea1acd0e80af91d8a3efa79cc99f54489e3c22377193c4282a256797350 quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf
Oct 14 04:32:49 np0005486808 systemd[1]: Started libpod-conmon-b78218ad9bb91d5e57b615628a85bc475b8b9e4f8e3796dc55a5618c164282cc.scope.
Oct 14 04:32:49 np0005486808 python3[151210]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume 
/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf
Oct 14 04:32:49 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:32:49 np0005486808 podman[151685]: 2025-10-14 08:32:49.180274582 +0000 UTC m=+0.032699876 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:32:49 np0005486808 podman[151685]: 2025-10-14 08:32:49.280037091 +0000 UTC m=+0.132462385 container init b78218ad9bb91d5e57b615628a85bc475b8b9e4f8e3796dc55a5618c164282cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_meitner, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:32:49 np0005486808 podman[151685]: 2025-10-14 08:32:49.285798863 +0000 UTC m=+0.138224137 container start b78218ad9bb91d5e57b615628a85bc475b8b9e4f8e3796dc55a5618c164282cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_meitner, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:32:49 np0005486808 amazing_meitner[151730]: 167 167
Oct 14 04:32:49 np0005486808 podman[151685]: 2025-10-14 08:32:49.29020708 +0000 UTC m=+0.142632354 container attach b78218ad9bb91d5e57b615628a85bc475b8b9e4f8e3796dc55a5618c164282cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_meitner, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:32:49 np0005486808 podman[151685]: 2025-10-14 08:32:49.291199586 +0000 UTC m=+0.143624860 container died b78218ad9bb91d5e57b615628a85bc475b8b9e4f8e3796dc55a5618c164282cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_meitner, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 14 04:32:49 np0005486808 systemd[1]: libpod-b78218ad9bb91d5e57b615628a85bc475b8b9e4f8e3796dc55a5618c164282cc.scope: Deactivated successfully.
Oct 14 04:32:49 np0005486808 systemd[1]: var-lib-containers-storage-overlay-2e3de78cc806a1ae21e3efdfcb4efb7af70231d7311911f0b6cbcef2a3df6ed4-merged.mount: Deactivated successfully.
Oct 14 04:32:49 np0005486808 podman[151685]: 2025-10-14 08:32:49.334869121 +0000 UTC m=+0.187294395 container remove b78218ad9bb91d5e57b615628a85bc475b8b9e4f8e3796dc55a5618c164282cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 04:32:49 np0005486808 systemd[1]: libpod-conmon-b78218ad9bb91d5e57b615628a85bc475b8b9e4f8e3796dc55a5618c164282cc.scope: Deactivated successfully.
Oct 14 04:32:49 np0005486808 podman[151798]: 2025-10-14 08:32:49.503552653 +0000 UTC m=+0.049674525 container create 202b97913efbbf39249f10917dde51628b2b7ad349f270acef1f6790c11d3866 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_satoshi, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:32:49 np0005486808 systemd[1]: Started libpod-conmon-202b97913efbbf39249f10917dde51628b2b7ad349f270acef1f6790c11d3866.scope.
Oct 14 04:32:49 np0005486808 podman[151798]: 2025-10-14 08:32:49.480654217 +0000 UTC m=+0.026776099 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:32:49 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:32:49 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/046b1f77fccf7cc75b82ad3ad07432de87c920f9a26f9a06fbde6272e764ea43/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:32:49 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/046b1f77fccf7cc75b82ad3ad07432de87c920f9a26f9a06fbde6272e764ea43/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:32:49 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/046b1f77fccf7cc75b82ad3ad07432de87c920f9a26f9a06fbde6272e764ea43/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:32:49 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/046b1f77fccf7cc75b82ad3ad07432de87c920f9a26f9a06fbde6272e764ea43/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:32:49 np0005486808 podman[151798]: 2025-10-14 08:32:49.598280849 +0000 UTC m=+0.144402751 container init 202b97913efbbf39249f10917dde51628b2b7ad349f270acef1f6790c11d3866 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_satoshi, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 14 04:32:49 np0005486808 podman[151798]: 2025-10-14 08:32:49.606393093 +0000 UTC m=+0.152514965 container start 202b97913efbbf39249f10917dde51628b2b7ad349f270acef1f6790c11d3866 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_satoshi, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:32:49 np0005486808 podman[151798]: 2025-10-14 08:32:49.610745409 +0000 UTC m=+0.156867271 container attach 202b97913efbbf39249f10917dde51628b2b7ad349f270acef1f6790c11d3866 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 14 04:32:50 np0005486808 python3.9[151947]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:32:50 np0005486808 hungry_satoshi[151833]: {
Oct 14 04:32:50 np0005486808 hungry_satoshi[151833]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 04:32:50 np0005486808 hungry_satoshi[151833]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:32:50 np0005486808 hungry_satoshi[151833]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 04:32:50 np0005486808 hungry_satoshi[151833]:        "osd_id": 2,
Oct 14 04:32:50 np0005486808 hungry_satoshi[151833]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:32:50 np0005486808 hungry_satoshi[151833]:        "type": "bluestore"
Oct 14 04:32:50 np0005486808 hungry_satoshi[151833]:    },
Oct 14 04:32:50 np0005486808 hungry_satoshi[151833]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 04:32:50 np0005486808 hungry_satoshi[151833]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:32:50 np0005486808 hungry_satoshi[151833]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 04:32:50 np0005486808 hungry_satoshi[151833]:        "osd_id": 1,
Oct 14 04:32:50 np0005486808 hungry_satoshi[151833]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:32:50 np0005486808 hungry_satoshi[151833]:        "type": "bluestore"
Oct 14 04:32:50 np0005486808 hungry_satoshi[151833]:    },
Oct 14 04:32:50 np0005486808 hungry_satoshi[151833]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 04:32:50 np0005486808 hungry_satoshi[151833]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:32:50 np0005486808 hungry_satoshi[151833]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 04:32:50 np0005486808 hungry_satoshi[151833]:        "osd_id": 0,
Oct 14 04:32:50 np0005486808 hungry_satoshi[151833]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:32:50 np0005486808 hungry_satoshi[151833]:        "type": "bluestore"
Oct 14 04:32:50 np0005486808 hungry_satoshi[151833]:    }
Oct 14 04:32:50 np0005486808 hungry_satoshi[151833]: }
Oct 14 04:32:50 np0005486808 systemd[1]: libpod-202b97913efbbf39249f10917dde51628b2b7ad349f270acef1f6790c11d3866.scope: Deactivated successfully.
Oct 14 04:32:50 np0005486808 systemd[1]: libpod-202b97913efbbf39249f10917dde51628b2b7ad349f270acef1f6790c11d3866.scope: Consumed 1.024s CPU time.
Oct 14 04:32:50 np0005486808 podman[152025]: 2025-10-14 08:32:50.724724544 +0000 UTC m=+0.037808541 container died 202b97913efbbf39249f10917dde51628b2b7ad349f270acef1f6790c11d3866 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 14 04:32:50 np0005486808 systemd[1]: var-lib-containers-storage-overlay-046b1f77fccf7cc75b82ad3ad07432de87c920f9a26f9a06fbde6272e764ea43-merged.mount: Deactivated successfully.
Oct 14 04:32:50 np0005486808 podman[152025]: 2025-10-14 08:32:50.772227611 +0000 UTC m=+0.085311578 container remove 202b97913efbbf39249f10917dde51628b2b7ad349f270acef1f6790c11d3866 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_satoshi, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:32:50 np0005486808 systemd[1]: libpod-conmon-202b97913efbbf39249f10917dde51628b2b7ad349f270acef1f6790c11d3866.scope: Deactivated successfully.
Oct 14 04:32:50 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:32:50 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:32:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:32:50 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:32:50 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:32:50 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev f97fee3a-8bc0-4eaf-9891-fb092fd4aeca does not exist
Oct 14 04:32:50 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 7807010a-daef-46a7-a040-95631d9676d4 does not exist
Oct 14 04:32:51 np0005486808 python3.9[152193]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:32:51 np0005486808 python3.9[152270]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:32:51 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:32:51 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:32:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:32:52 np0005486808 python3.9[152421]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760430771.7336154-581-253675332392377/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:32:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:32:53 np0005486808 python3.9[152497]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 14 04:32:53 np0005486808 systemd[1]: Reloading.
Oct 14 04:32:53 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:32:53 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:32:54 np0005486808 python3.9[152607]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:32:54 np0005486808 systemd[1]: Reloading.
Oct 14 04:32:54 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:32:54 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:32:54 np0005486808 systemd[1]: Starting ovn_controller container...
Oct 14 04:32:54 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:32:54 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9675ddd69d791c5964d5b99edef332f1e79cc453f952fe4c2fd2f9dd5049908e/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct 14 04:32:54 np0005486808 systemd[1]: Started /usr/bin/podman healthcheck run 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47.
Oct 14 04:32:54 np0005486808 podman[152647]: 2025-10-14 08:32:54.640689666 +0000 UTC m=+0.179294595 container init 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2)
Oct 14 04:32:54 np0005486808 ovn_controller[152662]: + sudo -E kolla_set_configs
Oct 14 04:32:54 np0005486808 podman[152647]: 2025-10-14 08:32:54.685308912 +0000 UTC m=+0.223913811 container start 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 04:32:54 np0005486808 edpm-start-podman-container[152647]: ovn_controller
Oct 14 04:32:54 np0005486808 systemd[1]: Created slice User Slice of UID 0.
Oct 14 04:32:54 np0005486808 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 14 04:32:54 np0005486808 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 14 04:32:54 np0005486808 edpm-start-podman-container[152646]: Creating additional drop-in dependency for "ovn_controller" (71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47)
Oct 14 04:32:54 np0005486808 systemd[1]: Starting User Manager for UID 0...
Oct 14 04:32:54 np0005486808 systemd[1]: Reloading.
Oct 14 04:32:54 np0005486808 podman[152669]: 2025-10-14 08:32:54.799991454 +0000 UTC m=+0.094911472 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251009)
Oct 14 04:32:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:32:54 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:32:54 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:32:54 np0005486808 systemd[152693]: Queued start job for default target Main User Target.
Oct 14 04:32:54 np0005486808 systemd[152693]: Created slice User Application Slice.
Oct 14 04:32:54 np0005486808 systemd[152693]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 14 04:32:54 np0005486808 systemd[152693]: Started Daily Cleanup of User's Temporary Directories.
Oct 14 04:32:54 np0005486808 systemd[152693]: Reached target Paths.
Oct 14 04:32:54 np0005486808 systemd[152693]: Reached target Timers.
Oct 14 04:32:54 np0005486808 systemd[152693]: Starting D-Bus User Message Bus Socket...
Oct 14 04:32:54 np0005486808 systemd[152693]: Starting Create User's Volatile Files and Directories...
Oct 14 04:32:54 np0005486808 systemd[152693]: Listening on D-Bus User Message Bus Socket.
Oct 14 04:32:54 np0005486808 systemd[152693]: Reached target Sockets.
Oct 14 04:32:54 np0005486808 systemd[152693]: Finished Create User's Volatile Files and Directories.
Oct 14 04:32:54 np0005486808 systemd[152693]: Reached target Basic System.
Oct 14 04:32:54 np0005486808 systemd[152693]: Reached target Main User Target.
Oct 14 04:32:54 np0005486808 systemd[152693]: Startup finished in 169ms.
Oct 14 04:32:55 np0005486808 systemd[1]: Started User Manager for UID 0.
Oct 14 04:32:55 np0005486808 systemd[1]: Started ovn_controller container.
Oct 14 04:32:55 np0005486808 systemd[1]: 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47-37a65523b1b1a153.service: Main process exited, code=exited, status=1/FAILURE
Oct 14 04:32:55 np0005486808 systemd[1]: 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47-37a65523b1b1a153.service: Failed with result 'exit-code'.
Oct 14 04:32:55 np0005486808 systemd[1]: Started Session c1 of User root.
Oct 14 04:32:55 np0005486808 ovn_controller[152662]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 14 04:32:55 np0005486808 ovn_controller[152662]: INFO:__main__:Validating config file
Oct 14 04:32:55 np0005486808 ovn_controller[152662]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 14 04:32:55 np0005486808 ovn_controller[152662]: INFO:__main__:Writing out command to execute
Oct 14 04:32:55 np0005486808 systemd[1]: session-c1.scope: Deactivated successfully.
Oct 14 04:32:55 np0005486808 ovn_controller[152662]: ++ cat /run_command
Oct 14 04:32:55 np0005486808 ovn_controller[152662]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct 14 04:32:55 np0005486808 ovn_controller[152662]: + ARGS=
Oct 14 04:32:55 np0005486808 ovn_controller[152662]: + sudo kolla_copy_cacerts
Oct 14 04:32:55 np0005486808 systemd[1]: Started Session c2 of User root.
Oct 14 04:32:55 np0005486808 systemd[1]: session-c2.scope: Deactivated successfully.
Oct 14 04:32:55 np0005486808 ovn_controller[152662]: + [[ ! -n '' ]]
Oct 14 04:32:55 np0005486808 ovn_controller[152662]: + . kolla_extend_start
Oct 14 04:32:55 np0005486808 ovn_controller[152662]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Oct 14 04:32:55 np0005486808 ovn_controller[152662]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct 14 04:32:55 np0005486808 ovn_controller[152662]: + umask 0022
Oct 14 04:32:55 np0005486808 ovn_controller[152662]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Oct 14 04:32:55 np0005486808 ovn_controller[152662]: 2025-10-14T08:32:55Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 14 04:32:55 np0005486808 ovn_controller[152662]: 2025-10-14T08:32:55Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 14 04:32:55 np0005486808 ovn_controller[152662]: 2025-10-14T08:32:55Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Oct 14 04:32:55 np0005486808 ovn_controller[152662]: 2025-10-14T08:32:55Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Oct 14 04:32:55 np0005486808 ovn_controller[152662]: 2025-10-14T08:32:55Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 14 04:32:55 np0005486808 ovn_controller[152662]: 2025-10-14T08:32:55Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Oct 14 04:32:55 np0005486808 NetworkManager[44885]: <info>  [1760430775.2551] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Oct 14 04:32:55 np0005486808 NetworkManager[44885]: <info>  [1760430775.2561] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 14 04:32:55 np0005486808 NetworkManager[44885]: <info>  [1760430775.2577] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Oct 14 04:32:55 np0005486808 NetworkManager[44885]: <info>  [1760430775.2585] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Oct 14 04:32:55 np0005486808 NetworkManager[44885]: <info>  [1760430775.2590] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 14 04:32:55 np0005486808 kernel: br-int: entered promiscuous mode
Oct 14 04:32:55 np0005486808 ovn_controller[152662]: 2025-10-14T08:32:55Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 14 04:32:55 np0005486808 ovn_controller[152662]: 2025-10-14T08:32:55Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 14 04:32:55 np0005486808 ovn_controller[152662]: 2025-10-14T08:32:55Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 14 04:32:55 np0005486808 ovn_controller[152662]: 2025-10-14T08:32:55Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Oct 14 04:32:55 np0005486808 ovn_controller[152662]: 2025-10-14T08:32:55Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Oct 14 04:32:55 np0005486808 ovn_controller[152662]: 2025-10-14T08:32:55Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Oct 14 04:32:55 np0005486808 ovn_controller[152662]: 2025-10-14T08:32:55Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 14 04:32:55 np0005486808 ovn_controller[152662]: 2025-10-14T08:32:55Z|00014|main|INFO|OVS feature set changed, force recompute.
Oct 14 04:32:55 np0005486808 ovn_controller[152662]: 2025-10-14T08:32:55Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 14 04:32:55 np0005486808 ovn_controller[152662]: 2025-10-14T08:32:55Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 14 04:32:55 np0005486808 ovn_controller[152662]: 2025-10-14T08:32:55Z|00017|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 14 04:32:55 np0005486808 ovn_controller[152662]: 2025-10-14T08:32:55Z|00018|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 14 04:32:55 np0005486808 ovn_controller[152662]: 2025-10-14T08:32:55Z|00019|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Oct 14 04:32:55 np0005486808 ovn_controller[152662]: 2025-10-14T08:32:55Z|00020|main|INFO|OVS feature set changed, force recompute.
Oct 14 04:32:55 np0005486808 ovn_controller[152662]: 2025-10-14T08:32:55Z|00021|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 14 04:32:55 np0005486808 ovn_controller[152662]: 2025-10-14T08:32:55Z|00022|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Oct 14 04:32:55 np0005486808 ovn_controller[152662]: 2025-10-14T08:32:55Z|00023|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Oct 14 04:32:55 np0005486808 ovn_controller[152662]: 2025-10-14T08:32:55Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Oct 14 04:32:55 np0005486808 ovn_controller[152662]: 2025-10-14T08:32:55Z|00001|statctrl(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 14 04:32:55 np0005486808 ovn_controller[152662]: 2025-10-14T08:32:55Z|00002|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 14 04:32:55 np0005486808 ovn_controller[152662]: 2025-10-14T08:32:55Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 14 04:32:55 np0005486808 ovn_controller[152662]: 2025-10-14T08:32:55Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 14 04:32:55 np0005486808 ovn_controller[152662]: 2025-10-14T08:32:55Z|00003|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 14 04:32:55 np0005486808 NetworkManager[44885]: <info>  [1760430775.2779] manager: (ovn-d299a3-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Oct 14 04:32:55 np0005486808 ovn_controller[152662]: 2025-10-14T08:32:55Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 14 04:32:55 np0005486808 kernel: genev_sys_6081: entered promiscuous mode
Oct 14 04:32:55 np0005486808 NetworkManager[44885]: <info>  [1760430775.2958] device (genev_sys_6081): carrier: link connected
Oct 14 04:32:55 np0005486808 NetworkManager[44885]: <info>  [1760430775.2963] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Oct 14 04:32:55 np0005486808 systemd-udevd[152815]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 04:32:55 np0005486808 systemd-udevd[152814]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 04:32:55 np0005486808 python3.9[152927]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:32:55 np0005486808 ovs-vsctl[152928]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Oct 14 04:32:56 np0005486808 python3.9[153080]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:32:56 np0005486808 ovs-vsctl[153082]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Oct 14 04:32:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:32:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:32:57 np0005486808 python3.9[153235]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:32:57 np0005486808 ovs-vsctl[153236]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Oct 14 04:32:58 np0005486808 systemd[1]: session-47.scope: Deactivated successfully.
Oct 14 04:32:58 np0005486808 systemd[1]: session-47.scope: Consumed 1min 3.217s CPU time.
Oct 14 04:32:58 np0005486808 systemd-logind[799]: Session 47 logged out. Waiting for processes to exit.
Oct 14 04:32:58 np0005486808 systemd-logind[799]: Removed session 47.
Oct 14 04:32:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:33:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:33:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:33:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:33:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:33:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:33:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:33:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:33:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:33:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:33:03 np0005486808 systemd-logind[799]: New session 49 of user zuul.
Oct 14 04:33:03 np0005486808 systemd[1]: Started Session 49 of User zuul.
Oct 14 04:33:04 np0005486808 python3.9[153414]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 04:33:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:33:05 np0005486808 systemd[1]: Stopping User Manager for UID 0...
Oct 14 04:33:05 np0005486808 systemd[152693]: Activating special unit Exit the Session...
Oct 14 04:33:05 np0005486808 systemd[152693]: Stopped target Main User Target.
Oct 14 04:33:05 np0005486808 systemd[152693]: Stopped target Basic System.
Oct 14 04:33:05 np0005486808 systemd[152693]: Stopped target Paths.
Oct 14 04:33:05 np0005486808 systemd[152693]: Stopped target Sockets.
Oct 14 04:33:05 np0005486808 systemd[152693]: Stopped target Timers.
Oct 14 04:33:05 np0005486808 systemd[152693]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 14 04:33:05 np0005486808 systemd[152693]: Closed D-Bus User Message Bus Socket.
Oct 14 04:33:05 np0005486808 systemd[152693]: Stopped Create User's Volatile Files and Directories.
Oct 14 04:33:05 np0005486808 systemd[152693]: Removed slice User Application Slice.
Oct 14 04:33:05 np0005486808 systemd[152693]: Reached target Shutdown.
Oct 14 04:33:05 np0005486808 systemd[152693]: Finished Exit the Session.
Oct 14 04:33:05 np0005486808 systemd[152693]: Reached target Exit the Session.
Oct 14 04:33:05 np0005486808 systemd[1]: user@0.service: Deactivated successfully.
Oct 14 04:33:05 np0005486808 systemd[1]: Stopped User Manager for UID 0.
Oct 14 04:33:05 np0005486808 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 14 04:33:05 np0005486808 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 14 04:33:05 np0005486808 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 14 04:33:05 np0005486808 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 14 04:33:05 np0005486808 systemd[1]: Removed slice User Slice of UID 0.
Oct 14 04:33:06 np0005486808 python3.9[153572]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:33:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:33:06 np0005486808 python3.9[153724]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:33:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:33:07 np0005486808 python3.9[153876]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:33:08 np0005486808 python3.9[154028]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:33:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:33:09 np0005486808 python3.9[154180]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:33:10 np0005486808 python3.9[154330]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 04:33:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:33:11 np0005486808 python3.9[154482]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 14 04:33:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:33:12 np0005486808 python3.9[154632]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:33:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:33:13 np0005486808 python3.9[154754]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760430791.9304678-86-187276958244034/.source follow=False _original_basename=haproxy.j2 checksum=c565b4bbf7a6482d8da025b29110249df37fa82b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:33:14 np0005486808 python3.9[154904]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:33:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:33:15 np0005486808 python3.9[155025]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760430793.802442-101-152736249387468/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:33:16 np0005486808 python3.9[155177]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 14 04:33:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:33:17 np0005486808 python3.9[155261]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 14 04:33:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:33:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:33:19 np0005486808 python3.9[155414]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 14 04:33:20 np0005486808 python3.9[155567]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:33:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:33:21 np0005486808 python3.9[155688]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760430799.9257724-138-218869608395923/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:33:21 np0005486808 python3.9[155838]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:33:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:33:22 np0005486808 python3.9[155959]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760430801.3833897-138-19215206408204/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:33:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:33:23 np0005486808 python3.9[156109]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:33:24 np0005486808 python3.9[156230]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760430803.184316-182-168068829239341/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:33:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:33:25 np0005486808 python3.9[156380]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:33:25 np0005486808 ovn_controller[152662]: 2025-10-14T08:33:25Z|00025|memory|INFO|16384 kB peak resident set size after 30.2 seconds
Oct 14 04:33:25 np0005486808 ovn_controller[152662]: 2025-10-14T08:33:25Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Oct 14 04:33:25 np0005486808 podman[156475]: 2025-10-14 08:33:25.518942339 +0000 UTC m=+0.115320858 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:33:25 np0005486808 python3.9[156511]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760430804.5734992-182-249211813613668/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:33:26 np0005486808 python3.9[156677]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:33:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:33:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:33:27 np0005486808 python3.9[156831]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:33:27 np0005486808 python3.9[156983]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:33:28 np0005486808 python3.9[157061]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:33:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:33:29 np0005486808 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 04:33:29 np0005486808 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 5437 writes, 23K keys, 5437 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5437 writes, 809 syncs, 6.72 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5437 writes, 23K keys, 5437 commit groups, 1.0 writes per commit group, ingest: 18.32 MB, 0.03 MB/s#012Interval WAL: 5437 writes, 809 syncs, 6.72 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e30bc271f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e30bc271f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
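The ceph-osd RocksDB dumps in this log are single journal records in which the embedded newlines were replaced by the syslog-style octal escape `#012` (octal 012 = decimal 10 = `'\n'`; likewise `#011` is a tab). A minimal sketch for restoring the original multi-line text when post-processing such records:

```python
import re

def unescape_syslog(record: str) -> str:
    """Expand rsyslog/journald octal control-character escapes
    (#012 -> newline, #011 -> tab, etc.) back into the original text."""
    return re.sub(r"#([0-7]{3})", lambda m: chr(int(m.group(1), 8)), record)

record = "** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval"
print(unescape_syslog(record))
```

This turns each flattened stats dump back into the tables RocksDB originally emitted; the expanded records above were produced the same way.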
Oct 14 04:33:29 np0005486808 python3.9[157213]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:33:29 np0005486808 python3.9[157291]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:33:30 np0005486808 python3.9[157443]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:33:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:33:31 np0005486808 python3.9[157595]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:33:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:33:32 np0005486808 python3.9[157673]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:33:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_08:33:32
Oct 14 04:33:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 04:33:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 04:33:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['images', 'backups', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', '.rgw.root']
Oct 14 04:33:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 04:33:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:33:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:33:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:33:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:33:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:33:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:33:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 04:33:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:33:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 04:33:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:33:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:33:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:33:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:33:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:33:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:33:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:33:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:33:32 np0005486808 python3.9[157825]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:33:33 np0005486808 python3.9[157903]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:33:34 np0005486808 python3.9[158055]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:33:34 np0005486808 systemd[1]: Reloading.
Oct 14 04:33:34 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:33:34 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:33:34 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 04:33:34 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.1 total, 600.0 interval
Cumulative writes: 6614 writes, 27K keys, 6614 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 6614 writes, 1183 syncs, 5.59 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 6614 writes, 27K keys, 6614 commit groups, 1.0 writes per commit group, ingest: 19.19 MB, 0.03 MB/s
Interval WAL: 6614 writes, 1183 syncs, 5.59 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5597c56af1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5597c56af1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Oct 14 04:33:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:33:35 np0005486808 python3.9[158245]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:33:35 np0005486808 python3.9[158323]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:33:36 np0005486808 python3.9[158475]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:33:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:33:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:33:37 np0005486808 python3.9[158553]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:33:38 np0005486808 python3.9[158705]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:33:38 np0005486808 systemd[1]: Reloading.
Oct 14 04:33:38 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:33:38 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:33:38 np0005486808 systemd[1]: Starting Create netns directory...
Oct 14 04:33:38 np0005486808 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 14 04:33:38 np0005486808 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 14 04:33:38 np0005486808 systemd[1]: Finished Create netns directory.
Oct 14 04:33:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:33:39 np0005486808 python3.9[158897]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:33:39 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 04:33:39 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.1 total, 600.0 interval
Cumulative writes: 5419 writes, 23K keys, 5419 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 5419 writes, 786 syncs, 6.89 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 5419 writes, 23K keys, 5419 commit groups, 1.0 writes per commit group, ingest: 18.10 MB, 0.03 MB/s
Interval WAL: 5419 writes, 786 syncs, 6.89 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55b3ca4651f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55b3ca4651f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Oct 14 04:33:40 np0005486808 python3.9[159049]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:33:40 np0005486808 ceph-mgr[74543]: [devicehealth INFO root] Check health
Oct 14 04:33:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:33:41 np0005486808 python3.9[159172]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760430819.7582872-333-134521609673419/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
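The `ansible-ansible.legacy.stat` invocations above run with `get_checksum=True` and `checksum_algorithm=sha1`, and the subsequent copy task logs the matching content checksum (`898a5a1fcd473cf731177fc866e3bd7ebf20a131`): Ansible compares a plain SHA-1 over the file bytes to decide whether the destination is already up to date. A minimal sketch of an equivalent checksum (the file path is illustrative, not from the log):

```python
import hashlib

def file_sha1(path: str, chunk_size: int = 65536) -> str:
    """Hex SHA-1 digest of a file's contents, read in chunks so
    large files are not loaded into memory at once."""
    digest = hashlib.sha1()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Illustrative usage against a throwaway file:
with open("example-healthcheck", "wb") as fh:
    fh.write(b"hello")
print(file_sha1("example-healthcheck"))
```

When the computed digest equals the one already on disk, the copy reports no change, which is why repeated runs of these tasks are idempotent.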
Oct 14 04:33:42 np0005486808 python3.9[159324]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:33:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:33:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 04:33:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:33:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 04:33:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:33:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:33:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:33:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:33:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:33:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:33:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:33:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:33:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:33:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 04:33:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:33:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:33:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:33:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 04:33:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:33:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 04:33:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:33:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:33:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:33:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 04:33:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:33:42 np0005486808 python3.9[159476]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:33:43 np0005486808 python3.9[159599]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760430822.380934-358-204547174231912/.source.json _original_basename=.ndnlm510 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:33:44 np0005486808 python3.9[159751]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:33:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:33:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:33:46 np0005486808 python3.9[160178]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Oct 14 04:33:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:33:48 np0005486808 python3.9[160330]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 14 04:33:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:33:49 np0005486808 python3.9[160482]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 14 04:33:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:33:51 np0005486808 python3[160661]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 14 04:33:51 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:33:51 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:33:51 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:33:51 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:33:51 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:33:51 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:33:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:33:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 14 04:33:52 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 14 04:33:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:33:52 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:33:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 04:33:52 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:33:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 04:33:52 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:33:52 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 57024999-25ae-4314-851d-187e0d14fafc does not exist
Oct 14 04:33:52 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev dd13baf1-8db7-4503-8390-30de19e48b57 does not exist
Oct 14 04:33:52 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 41b9c337-ffa0-43e8-8563-de1adde9bc7d does not exist
Oct 14 04:33:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 04:33:52 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 04:33:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 04:33:52 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:33:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:33:52 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:33:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:33:52 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 14 04:33:52 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:33:52 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:33:52 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:33:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:33:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:33:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:33:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:33:59 np0005486808 podman[161102]: 2025-10-14 08:33:59.23953646 +0000 UTC m=+3.648946634 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 14 04:33:59 np0005486808 podman[160751]: 2025-10-14 08:33:59.635393676 +0000 UTC m=+8.393384016 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 04:33:59 np0005486808 podman[161186]: 2025-10-14 08:33:59.709640567 +0000 UTC m=+0.067894318 container create 16cdffeea3494d1d93ae5585ba0650b505ce884fb5539b70f2765960dd48f722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:33:59 np0005486808 systemd[1]: Started libpod-conmon-16cdffeea3494d1d93ae5585ba0650b505ce884fb5539b70f2765960dd48f722.scope.
Oct 14 04:33:59 np0005486808 podman[161186]: 2025-10-14 08:33:59.682852181 +0000 UTC m=+0.041106022 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:33:59 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:33:59 np0005486808 podman[161186]: 2025-10-14 08:33:59.830219549 +0000 UTC m=+0.188473300 container init 16cdffeea3494d1d93ae5585ba0650b505ce884fb5539b70f2765960dd48f722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 14 04:33:59 np0005486808 podman[161223]: 2025-10-14 08:33:59.8413264 +0000 UTC m=+0.076890632 container create 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:33:59 np0005486808 podman[161223]: 2025-10-14 08:33:59.798418448 +0000 UTC m=+0.033982760 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 04:33:59 np0005486808 podman[161186]: 2025-10-14 08:33:59.844097783 +0000 UTC m=+0.202351534 container start 16cdffeea3494d1d93ae5585ba0650b505ce884fb5539b70f2765960dd48f722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 14 04:33:59 np0005486808 python3[160661]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} 
--log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 04:33:59 np0005486808 podman[161186]: 2025-10-14 08:33:59.848262057 +0000 UTC m=+0.206515868 container attach 16cdffeea3494d1d93ae5585ba0650b505ce884fb5539b70f2765960dd48f722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Oct 14 04:33:59 np0005486808 zen_keldysh[161229]: 167 167
Oct 14 04:33:59 np0005486808 systemd[1]: libpod-16cdffeea3494d1d93ae5585ba0650b505ce884fb5539b70f2765960dd48f722.scope: Deactivated successfully.
Oct 14 04:33:59 np0005486808 podman[161186]: 2025-10-14 08:33:59.858704654 +0000 UTC m=+0.216958435 container died 16cdffeea3494d1d93ae5585ba0650b505ce884fb5539b70f2765960dd48f722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_keldysh, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:33:59 np0005486808 systemd[1]: var-lib-containers-storage-overlay-902595cc168a552002e591cd819bd006144609dc9ba32a2f33f4b94eebe262ed-merged.mount: Deactivated successfully.
Oct 14 04:33:59 np0005486808 podman[161186]: 2025-10-14 08:33:59.915453109 +0000 UTC m=+0.273706860 container remove 16cdffeea3494d1d93ae5585ba0650b505ce884fb5539b70f2765960dd48f722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_keldysh, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 14 04:33:59 np0005486808 systemd[1]: libpod-conmon-16cdffeea3494d1d93ae5585ba0650b505ce884fb5539b70f2765960dd48f722.scope: Deactivated successfully.
Oct 14 04:34:00 np0005486808 podman[161284]: 2025-10-14 08:34:00.116352599 +0000 UTC m=+0.058813493 container create dbadd8d5a15c4c88a0327e42e223800f3f054aba0df10da26344da37cb1346fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_stonebraker, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 14 04:34:00 np0005486808 systemd[1]: Started libpod-conmon-dbadd8d5a15c4c88a0327e42e223800f3f054aba0df10da26344da37cb1346fa.scope.
Oct 14 04:34:00 np0005486808 podman[161284]: 2025-10-14 08:34:00.088313574 +0000 UTC m=+0.030774548 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:34:00 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:34:00 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40344874d49b614e33c3b55fa97eee73dd5be9f264603ea42b53e541c96b5aa0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:34:00 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40344874d49b614e33c3b55fa97eee73dd5be9f264603ea42b53e541c96b5aa0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:34:00 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40344874d49b614e33c3b55fa97eee73dd5be9f264603ea42b53e541c96b5aa0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:34:00 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40344874d49b614e33c3b55fa97eee73dd5be9f264603ea42b53e541c96b5aa0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:34:00 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40344874d49b614e33c3b55fa97eee73dd5be9f264603ea42b53e541c96b5aa0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:34:00 np0005486808 podman[161284]: 2025-10-14 08:34:00.24138113 +0000 UTC m=+0.183842094 container init dbadd8d5a15c4c88a0327e42e223800f3f054aba0df10da26344da37cb1346fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_stonebraker, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 14 04:34:00 np0005486808 podman[161284]: 2025-10-14 08:34:00.253546706 +0000 UTC m=+0.196007640 container start dbadd8d5a15c4c88a0327e42e223800f3f054aba0df10da26344da37cb1346fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:34:00 np0005486808 podman[161284]: 2025-10-14 08:34:00.258410136 +0000 UTC m=+0.200871130 container attach dbadd8d5a15c4c88a0327e42e223800f3f054aba0df10da26344da37cb1346fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:34:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:34:00 np0005486808 python3.9[161457]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:34:01 np0005486808 brave_stonebraker[161325]: --> passed data devices: 0 physical, 3 LVM
Oct 14 04:34:01 np0005486808 brave_stonebraker[161325]: --> relative data size: 1.0
Oct 14 04:34:01 np0005486808 brave_stonebraker[161325]: --> All data devices are unavailable
Oct 14 04:34:01 np0005486808 podman[161284]: 2025-10-14 08:34:01.33893696 +0000 UTC m=+1.281397854 container died dbadd8d5a15c4c88a0327e42e223800f3f054aba0df10da26344da37cb1346fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 14 04:34:01 np0005486808 systemd[1]: libpod-dbadd8d5a15c4c88a0327e42e223800f3f054aba0df10da26344da37cb1346fa.scope: Deactivated successfully.
Oct 14 04:34:01 np0005486808 systemd[1]: libpod-dbadd8d5a15c4c88a0327e42e223800f3f054aba0df10da26344da37cb1346fa.scope: Consumed 1.021s CPU time.
Oct 14 04:34:01 np0005486808 systemd[1]: var-lib-containers-storage-overlay-40344874d49b614e33c3b55fa97eee73dd5be9f264603ea42b53e541c96b5aa0-merged.mount: Deactivated successfully.
Oct 14 04:34:01 np0005486808 podman[161284]: 2025-10-14 08:34:01.394645222 +0000 UTC m=+1.337106116 container remove dbadd8d5a15c4c88a0327e42e223800f3f054aba0df10da26344da37cb1346fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_stonebraker, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 14 04:34:01 np0005486808 systemd[1]: libpod-conmon-dbadd8d5a15c4c88a0327e42e223800f3f054aba0df10da26344da37cb1346fa.scope: Deactivated successfully.
Oct 14 04:34:01 np0005486808 python3.9[161673]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:34:02 np0005486808 podman[161867]: 2025-10-14 08:34:02.032188042 +0000 UTC m=+0.073442735 container create f044fe9e69828d83a7cc87632668ad3a17103b0fa8c1c566a262e0736d8085ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_rosalind, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:34:02 np0005486808 systemd[1]: Started libpod-conmon-f044fe9e69828d83a7cc87632668ad3a17103b0fa8c1c566a262e0736d8085ae.scope.
Oct 14 04:34:02 np0005486808 python3.9[161854]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:34:02 np0005486808 podman[161867]: 2025-10-14 08:34:02.002249214 +0000 UTC m=+0.043503997 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:34:02 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:34:02 np0005486808 podman[161867]: 2025-10-14 08:34:02.126634531 +0000 UTC m=+0.167889244 container init f044fe9e69828d83a7cc87632668ad3a17103b0fa8c1c566a262e0736d8085ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_rosalind, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:34:02 np0005486808 podman[161867]: 2025-10-14 08:34:02.134475799 +0000 UTC m=+0.175730502 container start f044fe9e69828d83a7cc87632668ad3a17103b0fa8c1c566a262e0736d8085ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 04:34:02 np0005486808 podman[161867]: 2025-10-14 08:34:02.138004049 +0000 UTC m=+0.179258772 container attach f044fe9e69828d83a7cc87632668ad3a17103b0fa8c1c566a262e0736d8085ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:34:02 np0005486808 trusting_rosalind[161883]: 167 167
Oct 14 04:34:02 np0005486808 systemd[1]: libpod-f044fe9e69828d83a7cc87632668ad3a17103b0fa8c1c566a262e0736d8085ae.scope: Deactivated successfully.
Oct 14 04:34:02 np0005486808 podman[161867]: 2025-10-14 08:34:02.140861893 +0000 UTC m=+0.182116586 container died f044fe9e69828d83a7cc87632668ad3a17103b0fa8c1c566a262e0736d8085ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_rosalind, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 14 04:34:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:34:02 np0005486808 systemd[1]: var-lib-containers-storage-overlay-b1f6c9cc771e786d4ff3d185b2bd61fdac086ac6f5dab1ae14f1d6516ab92bc1-merged.mount: Deactivated successfully.
Oct 14 04:34:02 np0005486808 podman[161867]: 2025-10-14 08:34:02.177958834 +0000 UTC m=+0.219213537 container remove f044fe9e69828d83a7cc87632668ad3a17103b0fa8c1c566a262e0736d8085ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_rosalind, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Oct 14 04:34:02 np0005486808 systemd[1]: libpod-conmon-f044fe9e69828d83a7cc87632668ad3a17103b0fa8c1c566a262e0736d8085ae.scope: Deactivated successfully.
Oct 14 04:34:02 np0005486808 podman[161960]: 2025-10-14 08:34:02.373302538 +0000 UTC m=+0.059181801 container create 7baf58fdb6dbea051e37604f07ea3d7bf6039035c2bc5b018389e1e6d8788c9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:34:02 np0005486808 systemd[1]: Started libpod-conmon-7baf58fdb6dbea051e37604f07ea3d7bf6039035c2bc5b018389e1e6d8788c9e.scope.
Oct 14 04:34:02 np0005486808 podman[161960]: 2025-10-14 08:34:02.343499013 +0000 UTC m=+0.029378126 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:34:02 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:34:02 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d070779ae1f39e2abc07a98c0993cfd3f7e6f0b363ad4f7a31a30951c527717a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:34:02 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d070779ae1f39e2abc07a98c0993cfd3f7e6f0b363ad4f7a31a30951c527717a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:34:02 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d070779ae1f39e2abc07a98c0993cfd3f7e6f0b363ad4f7a31a30951c527717a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:34:02 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d070779ae1f39e2abc07a98c0993cfd3f7e6f0b363ad4f7a31a30951c527717a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:34:02 np0005486808 podman[161960]: 2025-10-14 08:34:02.493436769 +0000 UTC m=+0.179315832 container init 7baf58fdb6dbea051e37604f07ea3d7bf6039035c2bc5b018389e1e6d8788c9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:34:02 np0005486808 podman[161960]: 2025-10-14 08:34:02.502404542 +0000 UTC m=+0.188283575 container start 7baf58fdb6dbea051e37604f07ea3d7bf6039035c2bc5b018389e1e6d8788c9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:34:02 np0005486808 podman[161960]: 2025-10-14 08:34:02.506071335 +0000 UTC m=+0.191950398 container attach 7baf58fdb6dbea051e37604f07ea3d7bf6039035c2bc5b018389e1e6d8788c9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:34:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:34:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:34:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:34:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:34:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:34:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:34:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:34:02 np0005486808 python3.9[162080]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760430842.1674092-446-205494552380507/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]: {
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:    "0": [
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:        {
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:            "devices": [
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:                "/dev/loop3"
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:            ],
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:            "lv_name": "ceph_lv0",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:            "lv_size": "21470642176",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:            "name": "ceph_lv0",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:            "tags": {
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:                "ceph.cluster_name": "ceph",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:                "ceph.crush_device_class": "",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:                "ceph.encrypted": "0",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:                "ceph.osd_id": "0",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:                "ceph.type": "block",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:                "ceph.vdo": "0"
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:            },
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:            "type": "block",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:            "vg_name": "ceph_vg0"
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:        }
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:    ],
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:    "1": [
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:        {
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:            "devices": [
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:                "/dev/loop4"
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:            ],
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:            "lv_name": "ceph_lv1",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:            "lv_size": "21470642176",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:            "name": "ceph_lv1",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:            "tags": {
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:                "ceph.cluster_name": "ceph",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:                "ceph.crush_device_class": "",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:                "ceph.encrypted": "0",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:                "ceph.osd_id": "1",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:                "ceph.type": "block",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:                "ceph.vdo": "0"
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:            },
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:            "type": "block",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:            "vg_name": "ceph_vg1"
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:        }
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:    ],
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:    "2": [
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:        {
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:            "devices": [
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:                "/dev/loop5"
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:            ],
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:            "lv_name": "ceph_lv2",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:            "lv_size": "21470642176",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:            "name": "ceph_lv2",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:            "tags": {
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:                "ceph.cluster_name": "ceph",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:                "ceph.crush_device_class": "",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:                "ceph.encrypted": "0",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:                "ceph.osd_id": "2",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:                "ceph.type": "block",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:                "ceph.vdo": "0"
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:            },
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:            "type": "block",
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:            "vg_name": "ceph_vg2"
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:        }
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]:    ]
Oct 14 04:34:03 np0005486808 nostalgic_saha[161985]: }
Oct 14 04:34:03 np0005486808 systemd[1]: libpod-7baf58fdb6dbea051e37604f07ea3d7bf6039035c2bc5b018389e1e6d8788c9e.scope: Deactivated successfully.
Oct 14 04:34:03 np0005486808 podman[161960]: 2025-10-14 08:34:03.296844496 +0000 UTC m=+0.982723589 container died 7baf58fdb6dbea051e37604f07ea3d7bf6039035c2bc5b018389e1e6d8788c9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_saha, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:34:03 np0005486808 systemd[1]: var-lib-containers-storage-overlay-d070779ae1f39e2abc07a98c0993cfd3f7e6f0b363ad4f7a31a30951c527717a-merged.mount: Deactivated successfully.
Oct 14 04:34:03 np0005486808 podman[161960]: 2025-10-14 08:34:03.363836284 +0000 UTC m=+1.049715307 container remove 7baf58fdb6dbea051e37604f07ea3d7bf6039035c2bc5b018389e1e6d8788c9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_saha, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 14 04:34:03 np0005486808 systemd[1]: libpod-conmon-7baf58fdb6dbea051e37604f07ea3d7bf6039035c2bc5b018389e1e6d8788c9e.scope: Deactivated successfully.
Oct 14 04:34:03 np0005486808 python3.9[162160]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 14 04:34:03 np0005486808 systemd[1]: Reloading.
Oct 14 04:34:03 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:34:03 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:34:04 np0005486808 podman[162424]: 2025-10-14 08:34:04.292348563 +0000 UTC m=+0.042911133 container create 3a879d36448423c07d5baa7231c1387cbb9815c315fb99b7962998787430b9de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Oct 14 04:34:04 np0005486808 systemd[1]: Started libpod-conmon-3a879d36448423c07d5baa7231c1387cbb9815c315fb99b7962998787430b9de.scope.
Oct 14 04:34:04 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:34:04 np0005486808 podman[162424]: 2025-10-14 08:34:04.269737791 +0000 UTC m=+0.020300410 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:34:04 np0005486808 podman[162424]: 2025-10-14 08:34:04.37388046 +0000 UTC m=+0.124443069 container init 3a879d36448423c07d5baa7231c1387cbb9815c315fb99b7962998787430b9de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_heyrovsky, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 14 04:34:04 np0005486808 podman[162424]: 2025-10-14 08:34:04.384002339 +0000 UTC m=+0.134564898 container start 3a879d36448423c07d5baa7231c1387cbb9815c315fb99b7962998787430b9de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Oct 14 04:34:04 np0005486808 clever_heyrovsky[162440]: 167 167
Oct 14 04:34:04 np0005486808 systemd[1]: libpod-3a879d36448423c07d5baa7231c1387cbb9815c315fb99b7962998787430b9de.scope: Deactivated successfully.
Oct 14 04:34:04 np0005486808 conmon[162440]: conmon 3a879d36448423c07d5b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3a879d36448423c07d5baa7231c1387cbb9815c315fb99b7962998787430b9de.scope/container/memory.events
Oct 14 04:34:04 np0005486808 podman[162424]: 2025-10-14 08:34:04.390329353 +0000 UTC m=+0.140891972 container attach 3a879d36448423c07d5baa7231c1387cbb9815c315fb99b7962998787430b9de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_heyrovsky, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:34:04 np0005486808 podman[162424]: 2025-10-14 08:34:04.391618092 +0000 UTC m=+0.142180651 container died 3a879d36448423c07d5baa7231c1387cbb9815c315fb99b7962998787430b9de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:34:04 np0005486808 systemd[1]: var-lib-containers-storage-overlay-e220ae0f7000edfc81d9708e92701776abce0837a7c0ad0e7c4799f6a14232de-merged.mount: Deactivated successfully.
Oct 14 04:34:04 np0005486808 podman[162424]: 2025-10-14 08:34:04.429532851 +0000 UTC m=+0.180095410 container remove 3a879d36448423c07d5baa7231c1387cbb9815c315fb99b7962998787430b9de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 14 04:34:04 np0005486808 systemd[1]: libpod-conmon-3a879d36448423c07d5baa7231c1387cbb9815c315fb99b7962998787430b9de.scope: Deactivated successfully.
Oct 14 04:34:04 np0005486808 python3.9[162416]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:34:04 np0005486808 systemd[1]: Reloading.
Oct 14 04:34:04 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:34:04 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:34:04 np0005486808 podman[162467]: 2025-10-14 08:34:04.60431564 +0000 UTC m=+0.036563990 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:34:04 np0005486808 podman[162467]: 2025-10-14 08:34:04.711079318 +0000 UTC m=+0.143327628 container create 7c56672500568ac1939ac1a9bfee544ebbb37142367696b0fa84ccfc8ea146d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mclaren, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:34:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:34:04 np0005486808 systemd[1]: Started libpod-conmon-7c56672500568ac1939ac1a9bfee544ebbb37142367696b0fa84ccfc8ea146d4.scope.
Oct 14 04:34:04 np0005486808 systemd[1]: Starting ovn_metadata_agent container...
Oct 14 04:34:04 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:34:04 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9da94300850fc85a87c56bce9ce71a12d2461fae698f0e00b6245b2a453d75e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:34:04 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9da94300850fc85a87c56bce9ce71a12d2461fae698f0e00b6245b2a453d75e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:34:04 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9da94300850fc85a87c56bce9ce71a12d2461fae698f0e00b6245b2a453d75e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:34:04 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9da94300850fc85a87c56bce9ce71a12d2461fae698f0e00b6245b2a453d75e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:34:04 np0005486808 podman[162467]: 2025-10-14 08:34:04.962982253 +0000 UTC m=+0.395230593 container init 7c56672500568ac1939ac1a9bfee544ebbb37142367696b0fa84ccfc8ea146d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:34:04 np0005486808 podman[162467]: 2025-10-14 08:34:04.971379514 +0000 UTC m=+0.403627804 container start 7c56672500568ac1939ac1a9bfee544ebbb37142367696b0fa84ccfc8ea146d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Oct 14 04:34:04 np0005486808 podman[162467]: 2025-10-14 08:34:04.974642167 +0000 UTC m=+0.406890457 container attach 7c56672500568ac1939ac1a9bfee544ebbb37142367696b0fa84ccfc8ea146d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:34:05 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:34:05 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b09ef57d4dd2ad8d9ebd4fd1b73868bbe123d19bdd648e9b17091c1c497733e2/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Oct 14 04:34:05 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b09ef57d4dd2ad8d9ebd4fd1b73868bbe123d19bdd648e9b17091c1c497733e2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 04:34:05 np0005486808 systemd[1]: Started /usr/bin/podman healthcheck run 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41.
Oct 14 04:34:05 np0005486808 podman[162524]: 2025-10-14 08:34:05.078202463 +0000 UTC m=+0.127383646 container init 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 14 04:34:05 np0005486808 ovn_metadata_agent[162542]: + sudo -E kolla_set_configs
Oct 14 04:34:05 np0005486808 podman[162524]: 2025-10-14 08:34:05.108430098 +0000 UTC m=+0.157611281 container start 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 14 04:34:05 np0005486808 edpm-start-podman-container[162524]: ovn_metadata_agent
Oct 14 04:34:05 np0005486808 ovn_metadata_agent[162542]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 14 04:34:05 np0005486808 ovn_metadata_agent[162542]: INFO:__main__:Validating config file
Oct 14 04:34:05 np0005486808 ovn_metadata_agent[162542]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 14 04:34:05 np0005486808 ovn_metadata_agent[162542]: INFO:__main__:Copying service configuration files
Oct 14 04:34:05 np0005486808 ovn_metadata_agent[162542]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Oct 14 04:34:05 np0005486808 ovn_metadata_agent[162542]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Oct 14 04:34:05 np0005486808 ovn_metadata_agent[162542]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Oct 14 04:34:05 np0005486808 ovn_metadata_agent[162542]: INFO:__main__:Writing out command to execute
Oct 14 04:34:05 np0005486808 ovn_metadata_agent[162542]: INFO:__main__:Setting permission for /var/lib/neutron
Oct 14 04:34:05 np0005486808 ovn_metadata_agent[162542]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Oct 14 04:34:05 np0005486808 ovn_metadata_agent[162542]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Oct 14 04:34:05 np0005486808 ovn_metadata_agent[162542]: INFO:__main__:Setting permission for /var/lib/neutron/external
Oct 14 04:34:05 np0005486808 ovn_metadata_agent[162542]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Oct 14 04:34:05 np0005486808 ovn_metadata_agent[162542]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Oct 14 04:34:05 np0005486808 ovn_metadata_agent[162542]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Oct 14 04:34:05 np0005486808 edpm-start-podman-container[162522]: Creating additional drop-in dependency for "ovn_metadata_agent" (850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41)
Oct 14 04:34:05 np0005486808 podman[162549]: 2025-10-14 08:34:05.189653457 +0000 UTC m=+0.069815082 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 14 04:34:05 np0005486808 ovn_metadata_agent[162542]: ++ cat /run_command
Oct 14 04:34:05 np0005486808 ovn_metadata_agent[162542]: + CMD=neutron-ovn-metadata-agent
Oct 14 04:34:05 np0005486808 ovn_metadata_agent[162542]: + ARGS=
Oct 14 04:34:05 np0005486808 ovn_metadata_agent[162542]: + sudo kolla_copy_cacerts
Oct 14 04:34:05 np0005486808 systemd[1]: Reloading.
Oct 14 04:34:05 np0005486808 ovn_metadata_agent[162542]: + [[ ! -n '' ]]
Oct 14 04:34:05 np0005486808 ovn_metadata_agent[162542]: + . kolla_extend_start
Oct 14 04:34:05 np0005486808 ovn_metadata_agent[162542]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Oct 14 04:34:05 np0005486808 ovn_metadata_agent[162542]: Running command: 'neutron-ovn-metadata-agent'
Oct 14 04:34:05 np0005486808 ovn_metadata_agent[162542]: + umask 0022
Oct 14 04:34:05 np0005486808 ovn_metadata_agent[162542]: + exec neutron-ovn-metadata-agent
Oct 14 04:34:05 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:34:05 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:34:05 np0005486808 systemd[1]: Started ovn_metadata_agent container.
Oct 14 04:34:05 np0005486808 systemd[1]: session-49.scope: Deactivated successfully.
Oct 14 04:34:05 np0005486808 systemd[1]: session-49.scope: Consumed 1min 74ms CPU time.
Oct 14 04:34:05 np0005486808 systemd-logind[799]: Session 49 logged out. Waiting for processes to exit.
Oct 14 04:34:05 np0005486808 systemd-logind[799]: Removed session 49.
Oct 14 04:34:06 np0005486808 confident_mclaren[162520]: {
Oct 14 04:34:06 np0005486808 confident_mclaren[162520]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 04:34:06 np0005486808 confident_mclaren[162520]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:34:06 np0005486808 confident_mclaren[162520]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 04:34:06 np0005486808 confident_mclaren[162520]:        "osd_id": 2,
Oct 14 04:34:06 np0005486808 confident_mclaren[162520]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:34:06 np0005486808 confident_mclaren[162520]:        "type": "bluestore"
Oct 14 04:34:06 np0005486808 confident_mclaren[162520]:    },
Oct 14 04:34:06 np0005486808 confident_mclaren[162520]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 04:34:06 np0005486808 confident_mclaren[162520]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:34:06 np0005486808 confident_mclaren[162520]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 04:34:06 np0005486808 confident_mclaren[162520]:        "osd_id": 1,
Oct 14 04:34:06 np0005486808 confident_mclaren[162520]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:34:06 np0005486808 confident_mclaren[162520]:        "type": "bluestore"
Oct 14 04:34:06 np0005486808 confident_mclaren[162520]:    },
Oct 14 04:34:06 np0005486808 confident_mclaren[162520]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 04:34:06 np0005486808 confident_mclaren[162520]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:34:06 np0005486808 confident_mclaren[162520]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 04:34:06 np0005486808 confident_mclaren[162520]:        "osd_id": 0,
Oct 14 04:34:06 np0005486808 confident_mclaren[162520]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:34:06 np0005486808 confident_mclaren[162520]:        "type": "bluestore"
Oct 14 04:34:06 np0005486808 confident_mclaren[162520]:    }
Oct 14 04:34:06 np0005486808 confident_mclaren[162520]: }
Oct 14 04:34:06 np0005486808 systemd[1]: libpod-7c56672500568ac1939ac1a9bfee544ebbb37142367696b0fa84ccfc8ea146d4.scope: Deactivated successfully.
Oct 14 04:34:06 np0005486808 systemd[1]: libpod-7c56672500568ac1939ac1a9bfee544ebbb37142367696b0fa84ccfc8ea146d4.scope: Consumed 1.029s CPU time.
Oct 14 04:34:06 np0005486808 conmon[162520]: conmon 7c56672500568ac1939a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7c56672500568ac1939ac1a9bfee544ebbb37142367696b0fa84ccfc8ea146d4.scope/container/memory.events
Oct 14 04:34:06 np0005486808 podman[162467]: 2025-10-14 08:34:06.036626041 +0000 UTC m=+1.468874341 container died 7c56672500568ac1939ac1a9bfee544ebbb37142367696b0fa84ccfc8ea146d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mclaren, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:34:06 np0005486808 systemd[1]: var-lib-containers-storage-overlay-9da94300850fc85a87c56bce9ce71a12d2461fae698f0e00b6245b2a453d75e0-merged.mount: Deactivated successfully.
Oct 14 04:34:06 np0005486808 podman[162467]: 2025-10-14 08:34:06.10057572 +0000 UTC m=+1.532824000 container remove 7c56672500568ac1939ac1a9bfee544ebbb37142367696b0fa84ccfc8ea146d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mclaren, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 04:34:06 np0005486808 systemd[1]: libpod-conmon-7c56672500568ac1939ac1a9bfee544ebbb37142367696b0fa84ccfc8ea146d4.scope: Deactivated successfully.
Oct 14 04:34:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:34:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:34:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:34:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:34:06 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev c00ad145-0827-4eaa-8b4c-48a60d051c5c does not exist
Oct 14 04:34:06 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 42dd54fc-3907-4818-ba13-6b4d704f2218 does not exist
Oct 14 04:34:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.954 162547 INFO neutron.common.config [-] Logging enabled!#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.954 162547 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.954 162547 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.955 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.955 162547 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.955 162547 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.955 162547 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.955 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.955 162547 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.955 162547 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.955 162547 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.955 162547 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.956 162547 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.956 162547 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.956 162547 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.956 162547 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.956 162547 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.956 162547 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.956 162547 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.957 162547 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.957 162547 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.957 162547 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.957 162547 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.957 162547 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.957 162547 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.957 162547 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.957 162547 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.957 162547 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.957 162547 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.957 162547 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.958 162547 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.958 162547 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.958 162547 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.958 162547 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.958 162547 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.958 162547 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.958 162547 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.958 162547 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.958 162547 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.959 162547 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.959 162547 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.959 162547 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.959 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.959 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.959 162547 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.959 162547 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.959 162547 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.959 162547 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.960 162547 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.960 162547 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.960 162547 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.960 162547 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.960 162547 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.960 162547 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.960 162547 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.960 162547 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.960 162547 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.961 162547 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.961 162547 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.961 162547 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.961 162547 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.961 162547 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.961 162547 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.961 162547 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.961 162547 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.961 162547 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.962 162547 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.962 162547 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.962 162547 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.962 162547 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.962 162547 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.962 162547 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.962 162547 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.962 162547 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.962 162547 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.963 162547 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.963 162547 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.963 162547 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.963 162547 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.963 162547 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.963 162547 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.963 162547 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.963 162547 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.963 162547 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.963 162547 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.964 162547 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.964 162547 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.964 162547 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.964 162547 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.964 162547 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.964 162547 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.964 162547 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.964 162547 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.964 162547 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.965 162547 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.965 162547 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.965 162547 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.965 162547 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.965 162547 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.965 162547 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.965 162547 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.965 162547 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.965 162547 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.965 162547 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.966 162547 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.966 162547 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.966 162547 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.966 162547 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.966 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.966 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.966 162547 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.966 162547 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.967 162547 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.967 162547 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.967 162547 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.967 162547 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.967 162547 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.967 162547 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.967 162547 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.967 162547 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.967 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.967 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.968 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.968 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.968 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.968 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.968 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.968 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.968 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.968 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.969 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.969 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.969 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.969 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.969 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.969 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.969 162547 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.969 162547 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.969 162547 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.970 162547 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.970 162547 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.970 162547 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.970 162547 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.970 162547 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.970 162547 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.970 162547 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.970 162547 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.970 162547 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.970 162547 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.971 162547 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.971 162547 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.971 162547 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.971 162547 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.971 162547 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.971 162547 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.971 162547 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.971 162547 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.972 162547 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.972 162547 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.972 162547 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.972 162547 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.972 162547 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.972 162547 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.972 162547 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.972 162547 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.973 162547 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.973 162547 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.973 162547 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.973 162547 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.973 162547 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.973 162547 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.973 162547 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.973 162547 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.974 162547 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.974 162547 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.974 162547 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.974 162547 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.974 162547 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.974 162547 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.974 162547 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.974 162547 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.974 162547 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.975 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.975 162547 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.975 162547 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.975 162547 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.975 162547 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.975 162547 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.975 162547 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.975 162547 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.975 162547 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.976 162547 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.976 162547 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.976 162547 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.976 162547 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.976 162547 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.976 162547 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.976 162547 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.976 162547 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.976 162547 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.976 162547 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.977 162547 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.977 162547 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.977 162547 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.977 162547 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.977 162547 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.977 162547 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.977 162547 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.977 162547 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.977 162547 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.978 162547 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.978 162547 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.978 162547 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.978 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.978 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.978 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.978 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.978 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.978 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.978 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.979 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.979 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.979 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.979 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.979 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.979 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.979 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.979 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.979 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.979 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.980 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.980 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.980 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.980 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.980 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.980 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.980 162547 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.980 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.980 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.981 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.981 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.981 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.981 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.981 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.981 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.981 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.981 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.981 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.981 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.982 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.982 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.982 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.982 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.982 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.982 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.982 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.982 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.982 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.983 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.983 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.983 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.983 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.983 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.983 162547 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.983 162547 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.983 162547 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.983 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.984 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.984 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.984 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.984 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.984 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.984 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.984 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.984 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.984 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.985 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.985 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.985 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.985 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.985 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.985 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.985 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.985 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.985 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.986 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.986 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.986 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.986 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.986 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.986 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.986 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.986 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.986 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.987 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.987 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.987 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.987 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.987 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.987 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.987 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.987 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.987 162547 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.987 162547 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.996 162547 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.996 162547 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.996 162547 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.997 162547 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Oct 14 04:34:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:06.997 162547 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Oct 14 04:34:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:07.008 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name bb42e45d-8149-4fcf-a722-37b1def68e20 (UUID: bb42e45d-8149-4fcf-a722-37b1def68e20) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Oct 14 04:34:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:07.041 162547 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Oct 14 04:34:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:07.042 162547 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Oct 14 04:34:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:07.042 162547 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct 14 04:34:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:07.042 162547 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct 14 04:34:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:07.045 162547 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Oct 14 04:34:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:07.051 162547 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Oct 14 04:34:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:07.056 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'bb42e45d-8149-4fcf-a722-37b1def68e20'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], external_ids={}, name=bb42e45d-8149-4fcf-a722-37b1def68e20, nb_cfg_timestamp=1760430783279, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:34:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:07.057 162547 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fcf3ff29310>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Oct 14 04:34:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:07.057 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:34:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:07.057 162547 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:34:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:07.058 162547 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:34:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:07.058 162547 INFO oslo_service.service [-] Starting 1 workers#033[00m
Oct 14 04:34:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:07.062 162547 DEBUG oslo_service.service [-] Started child 162744 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Oct 14 04:34:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:07.066 162547 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpcavyxgub/privsep.sock']#033[00m
Oct 14 04:34:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:07.068 162744 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-134133287'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Oct 14 04:34:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:07.093 162744 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Oct 14 04:34:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:07.093 162744 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Oct 14 04:34:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:07.093 162744 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct 14 04:34:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:07.096 162744 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Oct 14 04:34:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:07.103 162744 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Oct 14 04:34:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:07.108 162744 INFO eventlet.wsgi.server [-] (162744) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Oct 14 04:34:07 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:34:07 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:34:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:34:07 np0005486808 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Oct 14 04:34:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:07.703 162547 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Oct 14 04:34:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:07.704 162547 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpcavyxgub/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Oct 14 04:34:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:07.585 162749 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct 14 04:34:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:07.590 162749 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct 14 04:34:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:07.596 162749 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Oct 14 04:34:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:07.596 162749 INFO oslo.privsep.daemon [-] privsep daemon running as pid 162749#033[00m
Oct 14 04:34:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:07.707 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[e8ae4805-61b9-4b8a-9551-d6fc36af7378]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.198 162749 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.198 162749 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.198 162749 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.717 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[3a3d09e2-8297-4d65-8ff8-a61cf2e7dbab]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.719 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, column=external_ids, values=({'neutron:ovn-metadata-id': '44bb0d9e-a806-560e-8c45-e268c855c20c'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.732 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.738 162547 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.738 162547 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.738 162547 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.739 162547 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.739 162547 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.739 162547 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.740 162547 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.740 162547 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.740 162547 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.740 162547 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.740 162547 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.740 162547 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.740 162547 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.741 162547 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.741 162547 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.741 162547 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.741 162547 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.741 162547 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.741 162547 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.741 162547 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.741 162547 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.742 162547 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.742 162547 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.742 162547 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.742 162547 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.742 162547 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.742 162547 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.742 162547 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.743 162547 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.743 162547 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.743 162547 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.743 162547 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.743 162547 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.743 162547 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.743 162547 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.744 162547 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.744 162547 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.744 162547 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.744 162547 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.744 162547 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.744 162547 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.744 162547 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.744 162547 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.745 162547 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.745 162547 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.745 162547 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.745 162547 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.745 162547 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.745 162547 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.745 162547 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.745 162547 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.745 162547 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.745 162547 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.746 162547 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.746 162547 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.746 162547 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.746 162547 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.746 162547 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.746 162547 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.746 162547 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.746 162547 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.746 162547 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.746 162547 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.747 162547 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.747 162547 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.747 162547 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.747 162547 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.747 162547 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.747 162547 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.747 162547 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.747 162547 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.748 162547 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.748 162547 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.748 162547 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.748 162547 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.748 162547 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.748 162547 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.748 162547 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.748 162547 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.748 162547 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.749 162547 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.749 162547 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.749 162547 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.749 162547 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.749 162547 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.749 162547 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.749 162547 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.749 162547 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.749 162547 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.749 162547 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.750 162547 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.750 162547 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.750 162547 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.750 162547 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.750 162547 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.750 162547 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.750 162547 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.750 162547 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.750 162547 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.751 162547 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.751 162547 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.751 162547 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.751 162547 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.751 162547 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.751 162547 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.751 162547 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.751 162547 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.751 162547 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.752 162547 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.752 162547 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.752 162547 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.752 162547 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.752 162547 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.752 162547 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.752 162547 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.752 162547 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.753 162547 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.753 162547 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.753 162547 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.753 162547 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.753 162547 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.753 162547 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.753 162547 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.753 162547 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.754 162547 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.754 162547 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.754 162547 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.754 162547 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.754 162547 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.754 162547 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.754 162547 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.754 162547 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.755 162547 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.755 162547 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.755 162547 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.755 162547 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.755 162547 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.755 162547 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.755 162547 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.755 162547 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.756 162547 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.756 162547 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.756 162547 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.756 162547 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.756 162547 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.756 162547 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.756 162547 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.756 162547 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.756 162547 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.756 162547 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.757 162547 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.757 162547 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.757 162547 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.757 162547 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.757 162547 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.757 162547 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.757 162547 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.757 162547 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.757 162547 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.757 162547 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.757 162547 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.758 162547 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.758 162547 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.758 162547 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.758 162547 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.758 162547 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.758 162547 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.758 162547 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.758 162547 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.758 162547 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.758 162547 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.759 162547 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.759 162547 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.759 162547 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.759 162547 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.759 162547 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.759 162547 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.759 162547 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.759 162547 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.760 162547 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.760 162547 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.760 162547 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.760 162547 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.760 162547 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.760 162547 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.760 162547 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.761 162547 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.761 162547 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.761 162547 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.761 162547 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.761 162547 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.761 162547 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.761 162547 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.762 162547 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.762 162547 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.762 162547 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.762 162547 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.762 162547 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.762 162547 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.762 162547 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.762 162547 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.762 162547 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.762 162547 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.763 162547 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.763 162547 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.763 162547 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.763 162547 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.763 162547 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.763 162547 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.763 162547 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.763 162547 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.763 162547 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.764 162547 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.764 162547 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.764 162547 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.764 162547 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.764 162547 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.764 162547 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.764 162547 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.764 162547 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.764 162547 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.765 162547 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.765 162547 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.765 162547 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.765 162547 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.765 162547 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.765 162547 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.765 162547 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.765 162547 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.766 162547 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.766 162547 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.766 162547 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.766 162547 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.766 162547 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.766 162547 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.766 162547 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.767 162547 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.767 162547 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.767 162547 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.767 162547 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.767 162547 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.767 162547 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.767 162547 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.767 162547 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.768 162547 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.768 162547 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.768 162547 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.768 162547 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.768 162547 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.768 162547 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.768 162547 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.768 162547 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.768 162547 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.768 162547 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.769 162547 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.769 162547 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.769 162547 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.769 162547 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.769 162547 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.769 162547 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.769 162547 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.769 162547 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.769 162547 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.770 162547 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.770 162547 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.770 162547 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.770 162547 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.770 162547 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.770 162547 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.770 162547 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.770 162547 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.771 162547 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.771 162547 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.771 162547 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.771 162547 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.771 162547 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.771 162547 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.771 162547 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.771 162547 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.772 162547 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.772 162547 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.772 162547 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.772 162547 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.772 162547 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.772 162547 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.772 162547 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.772 162547 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.772 162547 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.773 162547 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.773 162547 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.773 162547 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.773 162547 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.773 162547 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.773 162547 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.773 162547 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.773 162547 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.774 162547 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.774 162547 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.774 162547 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.774 162547 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.774 162547 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.774 162547 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.774 162547 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:34:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:34:08.774 162547 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 14 04:34:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:34:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:34:10 np0005486808 systemd-logind[799]: New session 50 of user zuul.
Oct 14 04:34:10 np0005486808 systemd[1]: Started Session 50 of User zuul.
Oct 14 04:34:12 np0005486808 python3.9[162908]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 04:34:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:34:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:34:13 np0005486808 python3.9[163065]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:34:14 np0005486808 python3.9[163230]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 14 04:34:14 np0005486808 systemd[1]: Reloading.
Oct 14 04:34:14 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:34:14 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:34:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:34:15 np0005486808 python3.9[163415]: ansible-ansible.builtin.service_facts Invoked
Oct 14 04:34:15 np0005486808 network[163432]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 14 04:34:15 np0005486808 network[163433]: 'network-scripts' will be removed from distribution in near future.
Oct 14 04:34:15 np0005486808 network[163434]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 14 04:34:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:34:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:34:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:34:20 np0005486808 python3.9[163699]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:34:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:34:21 np0005486808 python3.9[163852]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:34:21 np0005486808 python3.9[164005]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:34:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:34:22 np0005486808 python3.9[164158]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:34:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:34:23 np0005486808 python3.9[164311]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:34:24 np0005486808 python3.9[164464]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:34:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:34:25 np0005486808 python3.9[164617]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:34:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:34:26 np0005486808 python3.9[164770]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:34:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:34:27 np0005486808 python3.9[164922]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:34:28 np0005486808 python3.9[165074]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:34:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:34:29 np0005486808 python3.9[165226]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:34:29 np0005486808 podman[165332]: 2025-10-14 08:34:29.727929521 +0000 UTC m=+0.129207048 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 14 04:34:29 np0005486808 python3.9[165400]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:34:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:34:31 np0005486808 python3.9[165556]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:34:32 np0005486808 python3.9[165708]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:34:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:34:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_08:34:32
Oct 14 04:34:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 04:34:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 04:34:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['vms', 'default.rgw.meta', 'volumes', '.rgw.root', '.mgr', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'default.rgw.control']
Oct 14 04:34:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 04:34:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:34:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:34:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:34:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:34:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:34:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:34:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 04:34:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 04:34:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:34:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:34:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:34:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:34:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:34:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:34:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:34:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:34:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:34:32 np0005486808 python3.9[165860]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:34:33 np0005486808 python3.9[166012]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:34:34 np0005486808 python3.9[166164]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:34:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:34:35 np0005486808 python3.9[166316]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:34:35 np0005486808 podman[166440]: 2025-10-14 08:34:35.677770863 +0000 UTC m=+0.088766501 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 04:34:35 np0005486808 python3.9[166489]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:34:36 np0005486808 python3.9[166641]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:34:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:34:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:34:37 np0005486808 python3.9[166793]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:34:38 np0005486808 python3.9[166946]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:34:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:34:39 np0005486808 python3.9[167098]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 14 04:34:40 np0005486808 python3.9[167250]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 14 04:34:40 np0005486808 systemd[1]: Reloading.
Oct 14 04:34:40 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:34:40 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:34:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 04:34:41 np0005486808 python3.9[167437]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:34:41 np0005486808 python3.9[167590]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:34:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:34:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 04:34:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:34:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 04:34:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:34:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:34:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:34:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:34:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:34:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:34:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:34:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:34:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:34:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 04:34:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:34:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:34:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:34:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 04:34:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:34:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 04:34:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:34:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:34:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:34:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 04:34:42 np0005486808 python3.9[167743]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:34:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 04:34:43 np0005486808 python3.9[167896]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:34:44 np0005486808 python3.9[168049]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:34:44 np0005486808 python3.9[168202]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:34:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 04:34:45 np0005486808 python3.9[168355]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:34:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 04:34:47 np0005486808 python3.9[168508]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Oct 14 04:34:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:34:48 np0005486808 python3.9[168661]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 14 04:34:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 04:34:49 np0005486808 python3.9[168819]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 14 04:34:50 np0005486808 python3.9[168979]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 14 04:34:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 04:34:51 np0005486808 python3.9[169063]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 14 04:34:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:34:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:34:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:34:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:34:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:34:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:35:00 np0005486808 podman[169119]: 2025-10-14 08:35:00.773457872 +0000 UTC m=+0.170950393 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 14 04:35:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:35:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:35:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:35:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:35:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:35:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:35:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:35:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:35:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:35:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:35:06 np0005486808 podman[169298]: 2025-10-14 08:35:06.448089497 +0000 UTC m=+0.059639915 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009)
Oct 14 04:35:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:35:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:35:06.989 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:35:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:35:06.990 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:35:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:35:06.990 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:35:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:35:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:35:07 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:35:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 04:35:07 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:35:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 04:35:07 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:35:07 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 0e6e4720-613b-4b29-b4a3-e9564fa714b2 does not exist
Oct 14 04:35:07 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 6563e7cf-dda1-4c9a-9a65-c4c09c8e2063 does not exist
Oct 14 04:35:07 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 4c3be0a9-89c4-4957-911a-fb6d16839096 does not exist
Oct 14 04:35:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 04:35:07 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 04:35:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 04:35:07 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:35:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:35:07 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:35:08 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:35:08 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:35:08 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:35:08 np0005486808 podman[169564]: 2025-10-14 08:35:08.140073887 +0000 UTC m=+0.076851528 container create 81aef3c7032407d83c1e6fac069f45c551d0bc7e1faf3b2bba85be078f8f2d3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 14 04:35:08 np0005486808 systemd[1]: Started libpod-conmon-81aef3c7032407d83c1e6fac069f45c551d0bc7e1faf3b2bba85be078f8f2d3f.scope.
Oct 14 04:35:08 np0005486808 podman[169564]: 2025-10-14 08:35:08.110943517 +0000 UTC m=+0.047721218 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:35:08 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:35:08 np0005486808 podman[169564]: 2025-10-14 08:35:08.238244927 +0000 UTC m=+0.175022548 container init 81aef3c7032407d83c1e6fac069f45c551d0bc7e1faf3b2bba85be078f8f2d3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 04:35:08 np0005486808 podman[169564]: 2025-10-14 08:35:08.249114998 +0000 UTC m=+0.185892639 container start 81aef3c7032407d83c1e6fac069f45c551d0bc7e1faf3b2bba85be078f8f2d3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_diffie, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 14 04:35:08 np0005486808 podman[169564]: 2025-10-14 08:35:08.252850398 +0000 UTC m=+0.189628019 container attach 81aef3c7032407d83c1e6fac069f45c551d0bc7e1faf3b2bba85be078f8f2d3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_diffie, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:35:08 np0005486808 nostalgic_diffie[169581]: 167 167
Oct 14 04:35:08 np0005486808 systemd[1]: libpod-81aef3c7032407d83c1e6fac069f45c551d0bc7e1faf3b2bba85be078f8f2d3f.scope: Deactivated successfully.
Oct 14 04:35:08 np0005486808 podman[169564]: 2025-10-14 08:35:08.255923122 +0000 UTC m=+0.192700723 container died 81aef3c7032407d83c1e6fac069f45c551d0bc7e1faf3b2bba85be078f8f2d3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 14 04:35:08 np0005486808 systemd[1]: var-lib-containers-storage-overlay-185990f4ef493bedae44e8d3e2cbe0ea5d713205cac54357e11c678399c7e2ee-merged.mount: Deactivated successfully.
Oct 14 04:35:08 np0005486808 podman[169564]: 2025-10-14 08:35:08.300483503 +0000 UTC m=+0.237261104 container remove 81aef3c7032407d83c1e6fac069f45c551d0bc7e1faf3b2bba85be078f8f2d3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_diffie, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:35:08 np0005486808 systemd[1]: libpod-conmon-81aef3c7032407d83c1e6fac069f45c551d0bc7e1faf3b2bba85be078f8f2d3f.scope: Deactivated successfully.
Oct 14 04:35:08 np0005486808 podman[169604]: 2025-10-14 08:35:08.501805704 +0000 UTC m=+0.060110356 container create b185edebabd6f121530202f5d4387009588853ec87d5f15aa1a95067b9ff2049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_johnson, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:35:08 np0005486808 systemd[1]: Started libpod-conmon-b185edebabd6f121530202f5d4387009588853ec87d5f15aa1a95067b9ff2049.scope.
Oct 14 04:35:08 np0005486808 podman[169604]: 2025-10-14 08:35:08.468474593 +0000 UTC m=+0.026779285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:35:08 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:35:08 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21d6bbcddd6d3eeb23b5bf689a62884a8bac1878e3978e540db6b4a56765f181/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:35:08 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21d6bbcddd6d3eeb23b5bf689a62884a8bac1878e3978e540db6b4a56765f181/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:35:08 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21d6bbcddd6d3eeb23b5bf689a62884a8bac1878e3978e540db6b4a56765f181/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:35:08 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21d6bbcddd6d3eeb23b5bf689a62884a8bac1878e3978e540db6b4a56765f181/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:35:08 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21d6bbcddd6d3eeb23b5bf689a62884a8bac1878e3978e540db6b4a56765f181/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:35:08 np0005486808 podman[169604]: 2025-10-14 08:35:08.645430807 +0000 UTC m=+0.203735459 container init b185edebabd6f121530202f5d4387009588853ec87d5f15aa1a95067b9ff2049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_johnson, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 14 04:35:08 np0005486808 podman[169604]: 2025-10-14 08:35:08.658999913 +0000 UTC m=+0.217304545 container start b185edebabd6f121530202f5d4387009588853ec87d5f15aa1a95067b9ff2049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 14 04:35:08 np0005486808 podman[169604]: 2025-10-14 08:35:08.663223655 +0000 UTC m=+0.221528297 container attach b185edebabd6f121530202f5d4387009588853ec87d5f15aa1a95067b9ff2049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_johnson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:35:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:35:09 np0005486808 zealous_johnson[169621]: --> passed data devices: 0 physical, 3 LVM
Oct 14 04:35:09 np0005486808 zealous_johnson[169621]: --> relative data size: 1.0
Oct 14 04:35:09 np0005486808 zealous_johnson[169621]: --> All data devices are unavailable
Oct 14 04:35:09 np0005486808 systemd[1]: libpod-b185edebabd6f121530202f5d4387009588853ec87d5f15aa1a95067b9ff2049.scope: Deactivated successfully.
Oct 14 04:35:09 np0005486808 systemd[1]: libpod-b185edebabd6f121530202f5d4387009588853ec87d5f15aa1a95067b9ff2049.scope: Consumed 1.212s CPU time.
Oct 14 04:35:09 np0005486808 podman[169604]: 2025-10-14 08:35:09.929009058 +0000 UTC m=+1.487313700 container died b185edebabd6f121530202f5d4387009588853ec87d5f15aa1a95067b9ff2049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_johnson, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 14 04:35:09 np0005486808 systemd[1]: var-lib-containers-storage-overlay-21d6bbcddd6d3eeb23b5bf689a62884a8bac1878e3978e540db6b4a56765f181-merged.mount: Deactivated successfully.
Oct 14 04:35:10 np0005486808 podman[169604]: 2025-10-14 08:35:10.015225301 +0000 UTC m=+1.573529943 container remove b185edebabd6f121530202f5d4387009588853ec87d5f15aa1a95067b9ff2049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_johnson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 14 04:35:10 np0005486808 systemd[1]: libpod-conmon-b185edebabd6f121530202f5d4387009588853ec87d5f15aa1a95067b9ff2049.scope: Deactivated successfully.
Oct 14 04:35:10 np0005486808 podman[169809]: 2025-10-14 08:35:10.837261736 +0000 UTC m=+0.066382467 container create 1d0e3c3bc0022002168ed5d0f0bb19d47eb4d75925537eed2fcee2255bb8a4a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_maxwell, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 14 04:35:10 np0005486808 systemd[1]: Started libpod-conmon-1d0e3c3bc0022002168ed5d0f0bb19d47eb4d75925537eed2fcee2255bb8a4a4.scope.
Oct 14 04:35:10 np0005486808 podman[169809]: 2025-10-14 08:35:10.805446541 +0000 UTC m=+0.034567282 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:35:10 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:35:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:35:10 np0005486808 podman[169809]: 2025-10-14 08:35:10.922793752 +0000 UTC m=+0.151914523 container init 1d0e3c3bc0022002168ed5d0f0bb19d47eb4d75925537eed2fcee2255bb8a4a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_maxwell, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:35:10 np0005486808 podman[169809]: 2025-10-14 08:35:10.929097574 +0000 UTC m=+0.158218285 container start 1d0e3c3bc0022002168ed5d0f0bb19d47eb4d75925537eed2fcee2255bb8a4a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_maxwell, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 14 04:35:10 np0005486808 podman[169809]: 2025-10-14 08:35:10.932616928 +0000 UTC m=+0.161737739 container attach 1d0e3c3bc0022002168ed5d0f0bb19d47eb4d75925537eed2fcee2255bb8a4a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_maxwell, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 14 04:35:10 np0005486808 stupefied_maxwell[169826]: 167 167
Oct 14 04:35:10 np0005486808 systemd[1]: libpod-1d0e3c3bc0022002168ed5d0f0bb19d47eb4d75925537eed2fcee2255bb8a4a4.scope: Deactivated successfully.
Oct 14 04:35:10 np0005486808 podman[169831]: 2025-10-14 08:35:10.97636188 +0000 UTC m=+0.027175484 container died 1d0e3c3bc0022002168ed5d0f0bb19d47eb4d75925537eed2fcee2255bb8a4a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_maxwell, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 14 04:35:10 np0005486808 systemd[1]: var-lib-containers-storage-overlay-0894ac8007583869cb1054719bb939a0c5cee69108eb3d2aa0f152abf1e9f19c-merged.mount: Deactivated successfully.
Oct 14 04:35:11 np0005486808 podman[169831]: 2025-10-14 08:35:11.020548732 +0000 UTC m=+0.071362336 container remove 1d0e3c3bc0022002168ed5d0f0bb19d47eb4d75925537eed2fcee2255bb8a4a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_maxwell, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 14 04:35:11 np0005486808 systemd[1]: libpod-conmon-1d0e3c3bc0022002168ed5d0f0bb19d47eb4d75925537eed2fcee2255bb8a4a4.scope: Deactivated successfully.
Oct 14 04:35:11 np0005486808 podman[169853]: 2025-10-14 08:35:11.19765426 +0000 UTC m=+0.037798469 container create 988ba5b3cae6bce473965826c75f236d8410d294e82570835aa048c0a9479b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_murdock, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Oct 14 04:35:11 np0005486808 systemd[1]: Started libpod-conmon-988ba5b3cae6bce473965826c75f236d8410d294e82570835aa048c0a9479b92.scope.
Oct 14 04:35:11 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:35:11 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a81ba1ba0b4ebd6494f1dd3c6cf74b88537c99bc7e60bfc51e5f36bb898c159a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:35:11 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a81ba1ba0b4ebd6494f1dd3c6cf74b88537c99bc7e60bfc51e5f36bb898c159a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:35:11 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a81ba1ba0b4ebd6494f1dd3c6cf74b88537c99bc7e60bfc51e5f36bb898c159a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:35:11 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a81ba1ba0b4ebd6494f1dd3c6cf74b88537c99bc7e60bfc51e5f36bb898c159a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:35:11 np0005486808 podman[169853]: 2025-10-14 08:35:11.181380069 +0000 UTC m=+0.021524288 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:35:11 np0005486808 podman[169853]: 2025-10-14 08:35:11.293566577 +0000 UTC m=+0.133710886 container init 988ba5b3cae6bce473965826c75f236d8410d294e82570835aa048c0a9479b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_murdock, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 14 04:35:11 np0005486808 podman[169853]: 2025-10-14 08:35:11.299561111 +0000 UTC m=+0.139705320 container start 988ba5b3cae6bce473965826c75f236d8410d294e82570835aa048c0a9479b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:35:11 np0005486808 podman[169853]: 2025-10-14 08:35:11.30285466 +0000 UTC m=+0.142998909 container attach 988ba5b3cae6bce473965826c75f236d8410d294e82570835aa048c0a9479b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]: {
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:    "0": [
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:        {
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:            "devices": [
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:                "/dev/loop3"
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:            ],
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:            "lv_name": "ceph_lv0",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:            "lv_size": "21470642176",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:            "name": "ceph_lv0",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:            "tags": {
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:                "ceph.cluster_name": "ceph",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:                "ceph.crush_device_class": "",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:                "ceph.encrypted": "0",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:                "ceph.osd_id": "0",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:                "ceph.type": "block",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:                "ceph.vdo": "0"
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:            },
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:            "type": "block",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:            "vg_name": "ceph_vg0"
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:        }
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:    ],
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:    "1": [
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:        {
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:            "devices": [
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:                "/dev/loop4"
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:            ],
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:            "lv_name": "ceph_lv1",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:            "lv_size": "21470642176",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:            "name": "ceph_lv1",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:            "tags": {
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:                "ceph.cluster_name": "ceph",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:                "ceph.crush_device_class": "",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:                "ceph.encrypted": "0",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:                "ceph.osd_id": "1",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:                "ceph.type": "block",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:                "ceph.vdo": "0"
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:            },
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:            "type": "block",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:            "vg_name": "ceph_vg1"
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:        }
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:    ],
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:    "2": [
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:        {
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:            "devices": [
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:                "/dev/loop5"
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:            ],
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:            "lv_name": "ceph_lv2",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:            "lv_size": "21470642176",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:            "name": "ceph_lv2",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:            "tags": {
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:                "ceph.cluster_name": "ceph",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:                "ceph.crush_device_class": "",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:                "ceph.encrypted": "0",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:                "ceph.osd_id": "2",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:                "ceph.type": "block",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:                "ceph.vdo": "0"
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:            },
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:            "type": "block",
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:            "vg_name": "ceph_vg2"
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:        }
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]:    ]
Oct 14 04:35:12 np0005486808 recursing_murdock[169870]: }
Oct 14 04:35:12 np0005486808 systemd[1]: libpod-988ba5b3cae6bce473965826c75f236d8410d294e82570835aa048c0a9479b92.scope: Deactivated successfully.
Oct 14 04:35:12 np0005486808 podman[169853]: 2025-10-14 08:35:12.095890486 +0000 UTC m=+0.936034735 container died 988ba5b3cae6bce473965826c75f236d8410d294e82570835aa048c0a9479b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 04:35:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:35:12 np0005486808 systemd[1]: var-lib-containers-storage-overlay-a81ba1ba0b4ebd6494f1dd3c6cf74b88537c99bc7e60bfc51e5f36bb898c159a-merged.mount: Deactivated successfully.
Oct 14 04:35:12 np0005486808 podman[169853]: 2025-10-14 08:35:12.200895391 +0000 UTC m=+1.041039610 container remove 988ba5b3cae6bce473965826c75f236d8410d294e82570835aa048c0a9479b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_murdock, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:35:12 np0005486808 systemd[1]: libpod-conmon-988ba5b3cae6bce473965826c75f236d8410d294e82570835aa048c0a9479b92.scope: Deactivated successfully.
Oct 14 04:35:12 np0005486808 podman[170035]: 2025-10-14 08:35:12.827121697 +0000 UTC m=+0.041952990 container create d5ccc761009477c7c0a9270c1482c174b5673d7947bf922634e0b3917f0702f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:35:12 np0005486808 systemd[1]: Started libpod-conmon-d5ccc761009477c7c0a9270c1482c174b5673d7947bf922634e0b3917f0702f1.scope.
Oct 14 04:35:12 np0005486808 podman[170035]: 2025-10-14 08:35:12.808441058 +0000 UTC m=+0.023272361 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:35:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:35:12 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:35:12 np0005486808 podman[170035]: 2025-10-14 08:35:12.934847937 +0000 UTC m=+0.149679310 container init d5ccc761009477c7c0a9270c1482c174b5673d7947bf922634e0b3917f0702f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_jang, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 14 04:35:12 np0005486808 podman[170035]: 2025-10-14 08:35:12.941364124 +0000 UTC m=+0.156195407 container start d5ccc761009477c7c0a9270c1482c174b5673d7947bf922634e0b3917f0702f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:35:12 np0005486808 podman[170035]: 2025-10-14 08:35:12.944748865 +0000 UTC m=+0.159580178 container attach d5ccc761009477c7c0a9270c1482c174b5673d7947bf922634e0b3917f0702f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_jang, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 04:35:12 np0005486808 festive_jang[170051]: 167 167
Oct 14 04:35:12 np0005486808 systemd[1]: libpod-d5ccc761009477c7c0a9270c1482c174b5673d7947bf922634e0b3917f0702f1.scope: Deactivated successfully.
Oct 14 04:35:12 np0005486808 podman[170035]: 2025-10-14 08:35:12.949482179 +0000 UTC m=+0.164313512 container died d5ccc761009477c7c0a9270c1482c174b5673d7947bf922634e0b3917f0702f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_jang, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 14 04:35:12 np0005486808 systemd[1]: var-lib-containers-storage-overlay-d6147f46fe4a98ebc8cd9549a9ac05513129ccb9722c274829dee4c7ce5a0a95-merged.mount: Deactivated successfully.
Oct 14 04:35:13 np0005486808 podman[170035]: 2025-10-14 08:35:13.007374281 +0000 UTC m=+0.222205564 container remove d5ccc761009477c7c0a9270c1482c174b5673d7947bf922634e0b3917f0702f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_jang, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 04:35:13 np0005486808 systemd[1]: libpod-conmon-d5ccc761009477c7c0a9270c1482c174b5673d7947bf922634e0b3917f0702f1.scope: Deactivated successfully.
Oct 14 04:35:13 np0005486808 podman[170076]: 2025-10-14 08:35:13.263340175 +0000 UTC m=+0.075998698 container create 2d700796f9067308b3674c3037b0d49f76e68d6894d0b160b28d793aef55daf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:35:13 np0005486808 podman[170076]: 2025-10-14 08:35:13.233417126 +0000 UTC m=+0.046075689 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:35:13 np0005486808 systemd[1]: Started libpod-conmon-2d700796f9067308b3674c3037b0d49f76e68d6894d0b160b28d793aef55daf9.scope.
Oct 14 04:35:13 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:35:13 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52f0128c5999a5558ec5f831a325d222179f440ae7076dfff71b308de15d2a2c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:35:13 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52f0128c5999a5558ec5f831a325d222179f440ae7076dfff71b308de15d2a2c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:35:13 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52f0128c5999a5558ec5f831a325d222179f440ae7076dfff71b308de15d2a2c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:35:13 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52f0128c5999a5558ec5f831a325d222179f440ae7076dfff71b308de15d2a2c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:35:13 np0005486808 podman[170076]: 2025-10-14 08:35:13.38331735 +0000 UTC m=+0.195975923 container init 2d700796f9067308b3674c3037b0d49f76e68d6894d0b160b28d793aef55daf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_villani, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:35:13 np0005486808 podman[170076]: 2025-10-14 08:35:13.394944719 +0000 UTC m=+0.207603232 container start 2d700796f9067308b3674c3037b0d49f76e68d6894d0b160b28d793aef55daf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:35:13 np0005486808 podman[170076]: 2025-10-14 08:35:13.431841026 +0000 UTC m=+0.244499609 container attach 2d700796f9067308b3674c3037b0d49f76e68d6894d0b160b28d793aef55daf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_villani, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:35:14 np0005486808 awesome_villani[170093]: {
Oct 14 04:35:14 np0005486808 awesome_villani[170093]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 04:35:14 np0005486808 awesome_villani[170093]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:35:14 np0005486808 awesome_villani[170093]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 04:35:14 np0005486808 awesome_villani[170093]:        "osd_id": 2,
Oct 14 04:35:14 np0005486808 awesome_villani[170093]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:35:14 np0005486808 awesome_villani[170093]:        "type": "bluestore"
Oct 14 04:35:14 np0005486808 awesome_villani[170093]:    },
Oct 14 04:35:14 np0005486808 awesome_villani[170093]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 04:35:14 np0005486808 awesome_villani[170093]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:35:14 np0005486808 awesome_villani[170093]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 04:35:14 np0005486808 awesome_villani[170093]:        "osd_id": 1,
Oct 14 04:35:14 np0005486808 awesome_villani[170093]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:35:14 np0005486808 awesome_villani[170093]:        "type": "bluestore"
Oct 14 04:35:14 np0005486808 awesome_villani[170093]:    },
Oct 14 04:35:14 np0005486808 awesome_villani[170093]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 04:35:14 np0005486808 awesome_villani[170093]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:35:14 np0005486808 awesome_villani[170093]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 04:35:14 np0005486808 awesome_villani[170093]:        "osd_id": 0,
Oct 14 04:35:14 np0005486808 awesome_villani[170093]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:35:14 np0005486808 awesome_villani[170093]:        "type": "bluestore"
Oct 14 04:35:14 np0005486808 awesome_villani[170093]:    }
Oct 14 04:35:14 np0005486808 awesome_villani[170093]: }
Oct 14 04:35:14 np0005486808 systemd[1]: libpod-2d700796f9067308b3674c3037b0d49f76e68d6894d0b160b28d793aef55daf9.scope: Deactivated successfully.
Oct 14 04:35:14 np0005486808 systemd[1]: libpod-2d700796f9067308b3674c3037b0d49f76e68d6894d0b160b28d793aef55daf9.scope: Consumed 1.113s CPU time.
Oct 14 04:35:14 np0005486808 podman[170126]: 2025-10-14 08:35:14.558777792 +0000 UTC m=+0.040499375 container died 2d700796f9067308b3674c3037b0d49f76e68d6894d0b160b28d793aef55daf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_villani, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Oct 14 04:35:14 np0005486808 systemd[1]: var-lib-containers-storage-overlay-52f0128c5999a5558ec5f831a325d222179f440ae7076dfff71b308de15d2a2c-merged.mount: Deactivated successfully.
Oct 14 04:35:14 np0005486808 podman[170126]: 2025-10-14 08:35:14.64314066 +0000 UTC m=+0.124862203 container remove 2d700796f9067308b3674c3037b0d49f76e68d6894d0b160b28d793aef55daf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_villani, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:35:14 np0005486808 systemd[1]: libpod-conmon-2d700796f9067308b3674c3037b0d49f76e68d6894d0b160b28d793aef55daf9.scope: Deactivated successfully.
Oct 14 04:35:14 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:35:14 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:35:14 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:35:14 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:35:14 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev f02e7ee7-f69a-4871-855e-642a2a09fea1 does not exist
Oct 14 04:35:14 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev a9b12983-c0f4-41ff-9b01-ee9ccaa64d6f does not exist
Oct 14 04:35:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:35:15 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:35:15 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:35:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:35:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:35:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:35:19 np0005486808 kernel: SELinux:  Converting 2764 SID table entries...
Oct 14 04:35:19 np0005486808 kernel: SELinux:  policy capability network_peer_controls=1
Oct 14 04:35:19 np0005486808 kernel: SELinux:  policy capability open_perms=1
Oct 14 04:35:19 np0005486808 kernel: SELinux:  policy capability extended_socket_class=1
Oct 14 04:35:19 np0005486808 kernel: SELinux:  policy capability always_check_network=0
Oct 14 04:35:19 np0005486808 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 14 04:35:19 np0005486808 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 14 04:35:19 np0005486808 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 14 04:35:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:35:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:35:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:35:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:35:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:35:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:35:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:35:29 np0005486808 kernel: SELinux:  Converting 2764 SID table entries...
Oct 14 04:35:29 np0005486808 kernel: SELinux:  policy capability network_peer_controls=1
Oct 14 04:35:29 np0005486808 kernel: SELinux:  policy capability open_perms=1
Oct 14 04:35:29 np0005486808 kernel: SELinux:  policy capability extended_socket_class=1
Oct 14 04:35:29 np0005486808 kernel: SELinux:  policy capability always_check_network=0
Oct 14 04:35:29 np0005486808 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 14 04:35:29 np0005486808 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 14 04:35:29 np0005486808 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 14 04:35:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:35:31 np0005486808 dbus-broker-launch[782]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Oct 14 04:35:31 np0005486808 podman[170206]: 2025-10-14 08:35:31.77759757 +0000 UTC m=+0.175212493 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible)
Oct 14 04:35:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:35:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_08:35:32
Oct 14 04:35:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 04:35:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 04:35:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['.rgw.root', 'images', 'vms', 'backups', 'volumes', 'default.rgw.log', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Oct 14 04:35:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 04:35:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:35:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:35:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:35:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:35:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:35:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:35:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 04:35:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:35:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 04:35:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:35:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:35:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:35:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:35:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:35:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:35:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:35:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:35:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:35:36 np0005486808 podman[170233]: 2025-10-14 08:35:36.637351852 +0000 UTC m=+0.049478010 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 04:35:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:35:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:35:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:35:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:35:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:35:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 04:35:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:35:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 04:35:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:35:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:35:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:35:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:35:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:35:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:35:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:35:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:35:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:35:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 04:35:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:35:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:35:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:35:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 04:35:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:35:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 04:35:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:35:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:35:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:35:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 04:35:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:35:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:35:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:35:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:35:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:35:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:35:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:35:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:35:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:35:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:35:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:35:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:36:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:36:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:36:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:36:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:36:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:36:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:36:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:36:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:36:02 np0005486808 podman[180568]: 2025-10-14 08:36:02.723715092 +0000 UTC m=+0.126643265 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 14 04:36:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:36:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:36:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:36:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:36:06.990 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:36:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:36:06.991 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:36:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:36:06.991 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:36:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:36:07 np0005486808 podman[182921]: 2025-10-14 08:36:07.667486595 +0000 UTC m=+0.075288509 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 14 04:36:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:36:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:36:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:36:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:36:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:36:15 np0005486808 podman[186776]: 2025-10-14 08:36:15.935684645 +0000 UTC m=+0.064961092 container exec c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 14 04:36:16 np0005486808 podman[186776]: 2025-10-14 08:36:16.061324506 +0000 UTC m=+0.190600903 container exec_died c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 14 04:36:16 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:36:16 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:36:16 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:36:16 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:36:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:36:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:36:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:36:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:36:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 04:36:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:36:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 04:36:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:36:17 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 23e80f49-54c5-492d-8d26-48a5c4dd9960 does not exist
Oct 14 04:36:17 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 9e91dff3-9e86-47f2-93ca-c13c4ae918c6 does not exist
Oct 14 04:36:17 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev d673cdb9-9214-449f-b2e3-8f5a6b701e3c does not exist
Oct 14 04:36:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 04:36:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 04:36:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 04:36:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:36:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:36:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:36:17 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:36:17 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:36:17 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:36:17 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:36:17 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:36:18 np0005486808 podman[187640]: 2025-10-14 08:36:18.435938026 +0000 UTC m=+0.071543880 container create d6c7b63a4dd4e376ab78aa62f32e7b471e759d9dccad4cfaf3a3704b57ffcac3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:36:18 np0005486808 systemd[1]: Started libpod-conmon-d6c7b63a4dd4e376ab78aa62f32e7b471e759d9dccad4cfaf3a3704b57ffcac3.scope.
Oct 14 04:36:18 np0005486808 podman[187640]: 2025-10-14 08:36:18.395000398 +0000 UTC m=+0.030606342 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:36:18 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:36:18 np0005486808 podman[187640]: 2025-10-14 08:36:18.509218206 +0000 UTC m=+0.144824110 container init d6c7b63a4dd4e376ab78aa62f32e7b471e759d9dccad4cfaf3a3704b57ffcac3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_shtern, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 14 04:36:18 np0005486808 podman[187640]: 2025-10-14 08:36:18.516905329 +0000 UTC m=+0.152511183 container start d6c7b63a4dd4e376ab78aa62f32e7b471e759d9dccad4cfaf3a3704b57ffcac3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_shtern, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:36:18 np0005486808 podman[187640]: 2025-10-14 08:36:18.522323909 +0000 UTC m=+0.157929803 container attach d6c7b63a4dd4e376ab78aa62f32e7b471e759d9dccad4cfaf3a3704b57ffcac3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_shtern, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:36:18 np0005486808 tender_shtern[187659]: 167 167
Oct 14 04:36:18 np0005486808 systemd[1]: libpod-d6c7b63a4dd4e376ab78aa62f32e7b471e759d9dccad4cfaf3a3704b57ffcac3.scope: Deactivated successfully.
Oct 14 04:36:18 np0005486808 podman[187640]: 2025-10-14 08:36:18.525587637 +0000 UTC m=+0.161193521 container died d6c7b63a4dd4e376ab78aa62f32e7b471e759d9dccad4cfaf3a3704b57ffcac3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:36:18 np0005486808 systemd[1]: var-lib-containers-storage-overlay-cb0c6013839061d8d3333ca848a02435f7ac2eefa4ce57187826a68b329a2a18-merged.mount: Deactivated successfully.
Oct 14 04:36:18 np0005486808 podman[187640]: 2025-10-14 08:36:18.580501748 +0000 UTC m=+0.216107612 container remove d6c7b63a4dd4e376ab78aa62f32e7b471e759d9dccad4cfaf3a3704b57ffcac3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:36:18 np0005486808 systemd[1]: libpod-conmon-d6c7b63a4dd4e376ab78aa62f32e7b471e759d9dccad4cfaf3a3704b57ffcac3.scope: Deactivated successfully.
Oct 14 04:36:18 np0005486808 podman[187683]: 2025-10-14 08:36:18.798182347 +0000 UTC m=+0.082906941 container create e2818206a8d01831fd24f270c00943affdec2635d5bd72fd12b14498461c587d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_poincare, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:36:18 np0005486808 systemd[1]: Started libpod-conmon-e2818206a8d01831fd24f270c00943affdec2635d5bd72fd12b14498461c587d.scope.
Oct 14 04:36:18 np0005486808 podman[187683]: 2025-10-14 08:36:18.766892959 +0000 UTC m=+0.051617633 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:36:18 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:36:18 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/912f6c580d652b1954c30124baf7ed1c51bdddebc66cc199b1716cd42a214e51/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:36:18 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/912f6c580d652b1954c30124baf7ed1c51bdddebc66cc199b1716cd42a214e51/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:36:18 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/912f6c580d652b1954c30124baf7ed1c51bdddebc66cc199b1716cd42a214e51/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:36:18 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/912f6c580d652b1954c30124baf7ed1c51bdddebc66cc199b1716cd42a214e51/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:36:18 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/912f6c580d652b1954c30124baf7ed1c51bdddebc66cc199b1716cd42a214e51/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:36:18 np0005486808 podman[187683]: 2025-10-14 08:36:18.906589856 +0000 UTC m=+0.191314510 container init e2818206a8d01831fd24f270c00943affdec2635d5bd72fd12b14498461c587d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:36:18 np0005486808 podman[187683]: 2025-10-14 08:36:18.919956735 +0000 UTC m=+0.204681359 container start e2818206a8d01831fd24f270c00943affdec2635d5bd72fd12b14498461c587d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 14 04:36:18 np0005486808 podman[187683]: 2025-10-14 08:36:18.923679004 +0000 UTC m=+0.208403608 container attach e2818206a8d01831fd24f270c00943affdec2635d5bd72fd12b14498461c587d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:36:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:36:20 np0005486808 sad_poincare[187700]: --> passed data devices: 0 physical, 3 LVM
Oct 14 04:36:20 np0005486808 sad_poincare[187700]: --> relative data size: 1.0
Oct 14 04:36:20 np0005486808 sad_poincare[187700]: --> All data devices are unavailable
Oct 14 04:36:20 np0005486808 systemd[1]: libpod-e2818206a8d01831fd24f270c00943affdec2635d5bd72fd12b14498461c587d.scope: Deactivated successfully.
Oct 14 04:36:20 np0005486808 podman[187683]: 2025-10-14 08:36:20.101294217 +0000 UTC m=+1.386018821 container died e2818206a8d01831fd24f270c00943affdec2635d5bd72fd12b14498461c587d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_poincare, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 14 04:36:20 np0005486808 systemd[1]: libpod-e2818206a8d01831fd24f270c00943affdec2635d5bd72fd12b14498461c587d.scope: Consumed 1.107s CPU time.
Oct 14 04:36:20 np0005486808 systemd[1]: var-lib-containers-storage-overlay-912f6c580d652b1954c30124baf7ed1c51bdddebc66cc199b1716cd42a214e51-merged.mount: Deactivated successfully.
Oct 14 04:36:20 np0005486808 podman[187683]: 2025-10-14 08:36:20.180592731 +0000 UTC m=+1.465317315 container remove e2818206a8d01831fd24f270c00943affdec2635d5bd72fd12b14498461c587d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_poincare, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Oct 14 04:36:20 np0005486808 systemd[1]: libpod-conmon-e2818206a8d01831fd24f270c00943affdec2635d5bd72fd12b14498461c587d.scope: Deactivated successfully.
Oct 14 04:36:20 np0005486808 podman[187889]: 2025-10-14 08:36:20.882957015 +0000 UTC m=+0.057448123 container create c4f64623d801c0e4ba70066538611f85658b52e998021e6da8e5f2a3a8499eda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_shaw, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 14 04:36:20 np0005486808 systemd[1]: Started libpod-conmon-c4f64623d801c0e4ba70066538611f85658b52e998021e6da8e5f2a3a8499eda.scope.
Oct 14 04:36:20 np0005486808 podman[187889]: 2025-10-14 08:36:20.848730737 +0000 UTC m=+0.023221895 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:36:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:36:20 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:36:20 np0005486808 podman[187889]: 2025-10-14 08:36:20.984256744 +0000 UTC m=+0.158747902 container init c4f64623d801c0e4ba70066538611f85658b52e998021e6da8e5f2a3a8499eda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Oct 14 04:36:20 np0005486808 podman[187889]: 2025-10-14 08:36:20.999134139 +0000 UTC m=+0.173625237 container start c4f64623d801c0e4ba70066538611f85658b52e998021e6da8e5f2a3a8499eda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_shaw, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 14 04:36:21 np0005486808 unruffled_shaw[187905]: 167 167
Oct 14 04:36:21 np0005486808 podman[187889]: 2025-10-14 08:36:21.003634777 +0000 UTC m=+0.178125885 container attach c4f64623d801c0e4ba70066538611f85658b52e998021e6da8e5f2a3a8499eda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 14 04:36:21 np0005486808 systemd[1]: libpod-c4f64623d801c0e4ba70066538611f85658b52e998021e6da8e5f2a3a8499eda.scope: Deactivated successfully.
Oct 14 04:36:21 np0005486808 conmon[187905]: conmon c4f64623d801c0e4ba70 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c4f64623d801c0e4ba70066538611f85658b52e998021e6da8e5f2a3a8499eda.scope/container/memory.events
Oct 14 04:36:21 np0005486808 podman[187889]: 2025-10-14 08:36:21.006200568 +0000 UTC m=+0.180691636 container died c4f64623d801c0e4ba70066538611f85658b52e998021e6da8e5f2a3a8499eda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_shaw, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 14 04:36:21 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Oct 14 04:36:21 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:36:21.023576) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 04:36:21 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Oct 14 04:36:21 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430981023631, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2044, "num_deletes": 251, "total_data_size": 3542185, "memory_usage": 3594152, "flush_reason": "Manual Compaction"}
Oct 14 04:36:21 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Oct 14 04:36:21 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430981044360, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3467044, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9615, "largest_seqno": 11658, "table_properties": {"data_size": 3457698, "index_size": 5968, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17882, "raw_average_key_size": 19, "raw_value_size": 3439234, "raw_average_value_size": 3746, "num_data_blocks": 271, "num_entries": 918, "num_filter_entries": 918, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430747, "oldest_key_time": 1760430747, "file_creation_time": 1760430981, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Oct 14 04:36:21 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 20818 microseconds, and 8268 cpu microseconds.
Oct 14 04:36:21 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 04:36:21 np0005486808 systemd[1]: var-lib-containers-storage-overlay-059efec61b6e390e64475ce1391ea78795beb6aeae10980dfeef5399b0ea1e96-merged.mount: Deactivated successfully.
Oct 14 04:36:21 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:36:21.044403) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3467044 bytes OK
Oct 14 04:36:21 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:36:21.044420) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Oct 14 04:36:21 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:36:21.046203) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Oct 14 04:36:21 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:36:21.046217) EVENT_LOG_v1 {"time_micros": 1760430981046213, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 04:36:21 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:36:21.046235) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 04:36:21 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3533650, prev total WAL file size 3533650, number of live WAL files 2.
Oct 14 04:36:21 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 04:36:21 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:36:21.047431) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Oct 14 04:36:21 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 04:36:21 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3385KB)], [26(5837KB)]
Oct 14 04:36:21 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430981047475, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9444885, "oldest_snapshot_seqno": -1}
Oct 14 04:36:21 np0005486808 podman[187889]: 2025-10-14 08:36:21.07746502 +0000 UTC m=+0.251956128 container remove c4f64623d801c0e4ba70066538611f85658b52e998021e6da8e5f2a3a8499eda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_shaw, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 14 04:36:21 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3657 keys, 7894154 bytes, temperature: kUnknown
Oct 14 04:36:21 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430981084327, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 7894154, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7866089, "index_size": 17843, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9157, "raw_key_size": 87816, "raw_average_key_size": 24, "raw_value_size": 7796392, "raw_average_value_size": 2131, "num_data_blocks": 774, "num_entries": 3657, "num_filter_entries": 3657, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760430981, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Oct 14 04:36:21 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 04:36:21 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:36:21.084564) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 7894154 bytes
Oct 14 04:36:21 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:36:21.086270) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 255.9 rd, 213.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 5.7 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(5.0) write-amplify(2.3) OK, records in: 4171, records dropped: 514 output_compression: NoCompression
Oct 14 04:36:21 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:36:21.086287) EVENT_LOG_v1 {"time_micros": 1760430981086279, "job": 10, "event": "compaction_finished", "compaction_time_micros": 36908, "compaction_time_cpu_micros": 17492, "output_level": 6, "num_output_files": 1, "total_output_size": 7894154, "num_input_records": 4171, "num_output_records": 3657, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 04:36:21 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 04:36:21 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430981086887, "job": 10, "event": "table_file_deletion", "file_number": 28}
Oct 14 04:36:21 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 04:36:21 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760430981087732, "job": 10, "event": "table_file_deletion", "file_number": 26}
Oct 14 04:36:21 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:36:21.047306) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:36:21 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:36:21.087755) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:36:21 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:36:21.087760) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:36:21 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:36:21.087763) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:36:21 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:36:21.087764) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:36:21 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:36:21.087766) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:36:21 np0005486808 systemd[1]: libpod-conmon-c4f64623d801c0e4ba70066538611f85658b52e998021e6da8e5f2a3a8499eda.scope: Deactivated successfully.
Oct 14 04:36:21 np0005486808 podman[187930]: 2025-10-14 08:36:21.316252103 +0000 UTC m=+0.056892360 container create 95f56c500bb610e43346a7ebff6c10ba766e54d3de6eed1087e347b4d4ad68db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kapitsa, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:36:21 np0005486808 systemd[1]: Started libpod-conmon-95f56c500bb610e43346a7ebff6c10ba766e54d3de6eed1087e347b4d4ad68db.scope.
Oct 14 04:36:21 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:36:21 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09c49b8414f9e4aec1ff16eb6552209c6e0ea77e8f3f0aa8d76340739ef87331/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:36:21 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09c49b8414f9e4aec1ff16eb6552209c6e0ea77e8f3f0aa8d76340739ef87331/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:36:21 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09c49b8414f9e4aec1ff16eb6552209c6e0ea77e8f3f0aa8d76340739ef87331/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:36:21 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09c49b8414f9e4aec1ff16eb6552209c6e0ea77e8f3f0aa8d76340739ef87331/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:36:21 np0005486808 podman[187930]: 2025-10-14 08:36:21.379914973 +0000 UTC m=+0.120555240 container init 95f56c500bb610e43346a7ebff6c10ba766e54d3de6eed1087e347b4d4ad68db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kapitsa, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 04:36:21 np0005486808 podman[187930]: 2025-10-14 08:36:21.385995618 +0000 UTC m=+0.126635875 container start 95f56c500bb610e43346a7ebff6c10ba766e54d3de6eed1087e347b4d4ad68db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kapitsa, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 14 04:36:21 np0005486808 podman[187930]: 2025-10-14 08:36:21.389138473 +0000 UTC m=+0.129778730 container attach 95f56c500bb610e43346a7ebff6c10ba766e54d3de6eed1087e347b4d4ad68db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kapitsa, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 14 04:36:21 np0005486808 podman[187930]: 2025-10-14 08:36:21.298900008 +0000 UTC m=+0.039540305 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]: {
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:    "0": [
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:        {
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:            "devices": [
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:                "/dev/loop3"
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:            ],
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:            "lv_name": "ceph_lv0",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:            "lv_size": "21470642176",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:            "name": "ceph_lv0",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:            "tags": {
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:                "ceph.cluster_name": "ceph",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:                "ceph.crush_device_class": "",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:                "ceph.encrypted": "0",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:                "ceph.osd_id": "0",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:                "ceph.type": "block",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:                "ceph.vdo": "0"
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:            },
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:            "type": "block",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:            "vg_name": "ceph_vg0"
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:        }
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:    ],
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:    "1": [
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:        {
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:            "devices": [
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:                "/dev/loop4"
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:            ],
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:            "lv_name": "ceph_lv1",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:            "lv_size": "21470642176",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:            "name": "ceph_lv1",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:            "tags": {
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:                "ceph.cluster_name": "ceph",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:                "ceph.crush_device_class": "",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:                "ceph.encrypted": "0",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:                "ceph.osd_id": "1",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:                "ceph.type": "block",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:                "ceph.vdo": "0"
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:            },
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:            "type": "block",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:            "vg_name": "ceph_vg1"
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:        }
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:    ],
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:    "2": [
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:        {
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:            "devices": [
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:                "/dev/loop5"
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:            ],
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:            "lv_name": "ceph_lv2",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:            "lv_size": "21470642176",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:            "name": "ceph_lv2",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:            "tags": {
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:                "ceph.cluster_name": "ceph",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:                "ceph.crush_device_class": "",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:                "ceph.encrypted": "0",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:                "ceph.osd_id": "2",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:                "ceph.type": "block",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:                "ceph.vdo": "0"
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:            },
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:            "type": "block",
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:            "vg_name": "ceph_vg2"
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:        }
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]:    ]
Oct 14 04:36:22 np0005486808 pensive_kapitsa[187947]: }
Oct 14 04:36:22 np0005486808 systemd[1]: libpod-95f56c500bb610e43346a7ebff6c10ba766e54d3de6eed1087e347b4d4ad68db.scope: Deactivated successfully.
Oct 14 04:36:22 np0005486808 podman[187930]: 2025-10-14 08:36:22.137068545 +0000 UTC m=+0.877708842 container died 95f56c500bb610e43346a7ebff6c10ba766e54d3de6eed1087e347b4d4ad68db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kapitsa, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:36:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:36:22 np0005486808 systemd[1]: var-lib-containers-storage-overlay-09c49b8414f9e4aec1ff16eb6552209c6e0ea77e8f3f0aa8d76340739ef87331-merged.mount: Deactivated successfully.
Oct 14 04:36:22 np0005486808 podman[187930]: 2025-10-14 08:36:22.209939876 +0000 UTC m=+0.950580143 container remove 95f56c500bb610e43346a7ebff6c10ba766e54d3de6eed1087e347b4d4ad68db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kapitsa, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:36:22 np0005486808 systemd[1]: libpod-conmon-95f56c500bb610e43346a7ebff6c10ba766e54d3de6eed1087e347b4d4ad68db.scope: Deactivated successfully.
Oct 14 04:36:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:36:22 np0005486808 podman[188106]: 2025-10-14 08:36:22.95668493 +0000 UTC m=+0.057735960 container create f22df383083927d8f1b0bd22ac87f02e1e547b1d6c591e0a238d4064f6ec2bd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_montalcini, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:36:22 np0005486808 systemd[1]: Started libpod-conmon-f22df383083927d8f1b0bd22ac87f02e1e547b1d6c591e0a238d4064f6ec2bd3.scope.
Oct 14 04:36:23 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:36:23 np0005486808 podman[188106]: 2025-10-14 08:36:22.932544173 +0000 UTC m=+0.033595203 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:36:23 np0005486808 podman[188106]: 2025-10-14 08:36:23.039828505 +0000 UTC m=+0.140879545 container init f22df383083927d8f1b0bd22ac87f02e1e547b1d6c591e0a238d4064f6ec2bd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 14 04:36:23 np0005486808 podman[188106]: 2025-10-14 08:36:23.05091571 +0000 UTC m=+0.151966740 container start f22df383083927d8f1b0bd22ac87f02e1e547b1d6c591e0a238d4064f6ec2bd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 14 04:36:23 np0005486808 podman[188106]: 2025-10-14 08:36:23.055253874 +0000 UTC m=+0.156304924 container attach f22df383083927d8f1b0bd22ac87f02e1e547b1d6c591e0a238d4064f6ec2bd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 14 04:36:23 np0005486808 cool_montalcini[188122]: 167 167
Oct 14 04:36:23 np0005486808 systemd[1]: libpod-f22df383083927d8f1b0bd22ac87f02e1e547b1d6c591e0a238d4064f6ec2bd3.scope: Deactivated successfully.
Oct 14 04:36:23 np0005486808 podman[188106]: 2025-10-14 08:36:23.058673315 +0000 UTC m=+0.159724355 container died f22df383083927d8f1b0bd22ac87f02e1e547b1d6c591e0a238d4064f6ec2bd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_montalcini, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 14 04:36:23 np0005486808 systemd[1]: var-lib-containers-storage-overlay-993058820c3dc9cbdbc64654962426d3feb6d1239c2230ec914da42377e92613-merged.mount: Deactivated successfully.
Oct 14 04:36:23 np0005486808 podman[188106]: 2025-10-14 08:36:23.108442814 +0000 UTC m=+0.209493854 container remove f22df383083927d8f1b0bd22ac87f02e1e547b1d6c591e0a238d4064f6ec2bd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:36:23 np0005486808 systemd[1]: libpod-conmon-f22df383083927d8f1b0bd22ac87f02e1e547b1d6c591e0a238d4064f6ec2bd3.scope: Deactivated successfully.
Oct 14 04:36:23 np0005486808 podman[188147]: 2025-10-14 08:36:23.349246844 +0000 UTC m=+0.075359320 container create 06fa935d5253ad315576229855551ee81b11a28872d9abb486098bab0baf6e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 14 04:36:23 np0005486808 systemd[1]: Started libpod-conmon-06fa935d5253ad315576229855551ee81b11a28872d9abb486098bab0baf6e0c.scope.
Oct 14 04:36:23 np0005486808 podman[188147]: 2025-10-14 08:36:23.318397457 +0000 UTC m=+0.044509973 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:36:23 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:36:23 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5d81d40e6212bde93cda3fe2e6f663e525f013c116082183744aeb4246bb387/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:36:23 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5d81d40e6212bde93cda3fe2e6f663e525f013c116082183744aeb4246bb387/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:36:23 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5d81d40e6212bde93cda3fe2e6f663e525f013c116082183744aeb4246bb387/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:36:23 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5d81d40e6212bde93cda3fe2e6f663e525f013c116082183744aeb4246bb387/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:36:23 np0005486808 podman[188147]: 2025-10-14 08:36:23.445301088 +0000 UTC m=+0.171413544 container init 06fa935d5253ad315576229855551ee81b11a28872d9abb486098bab0baf6e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_driscoll, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:36:23 np0005486808 podman[188147]: 2025-10-14 08:36:23.459421705 +0000 UTC m=+0.185534141 container start 06fa935d5253ad315576229855551ee81b11a28872d9abb486098bab0baf6e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_driscoll, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 14 04:36:23 np0005486808 podman[188147]: 2025-10-14 08:36:23.463488622 +0000 UTC m=+0.189601078 container attach 06fa935d5253ad315576229855551ee81b11a28872d9abb486098bab0baf6e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:36:24 np0005486808 lucid_driscoll[188163]: {
Oct 14 04:36:24 np0005486808 lucid_driscoll[188163]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 04:36:24 np0005486808 lucid_driscoll[188163]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:36:24 np0005486808 lucid_driscoll[188163]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 04:36:24 np0005486808 lucid_driscoll[188163]:        "osd_id": 2,
Oct 14 04:36:24 np0005486808 lucid_driscoll[188163]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:36:24 np0005486808 lucid_driscoll[188163]:        "type": "bluestore"
Oct 14 04:36:24 np0005486808 lucid_driscoll[188163]:    },
Oct 14 04:36:24 np0005486808 lucid_driscoll[188163]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 04:36:24 np0005486808 lucid_driscoll[188163]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:36:24 np0005486808 lucid_driscoll[188163]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 04:36:24 np0005486808 lucid_driscoll[188163]:        "osd_id": 1,
Oct 14 04:36:24 np0005486808 lucid_driscoll[188163]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:36:24 np0005486808 lucid_driscoll[188163]:        "type": "bluestore"
Oct 14 04:36:24 np0005486808 lucid_driscoll[188163]:    },
Oct 14 04:36:24 np0005486808 lucid_driscoll[188163]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 04:36:24 np0005486808 lucid_driscoll[188163]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:36:24 np0005486808 lucid_driscoll[188163]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 04:36:24 np0005486808 lucid_driscoll[188163]:        "osd_id": 0,
Oct 14 04:36:24 np0005486808 lucid_driscoll[188163]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:36:24 np0005486808 lucid_driscoll[188163]:        "type": "bluestore"
Oct 14 04:36:24 np0005486808 lucid_driscoll[188163]:    }
Oct 14 04:36:24 np0005486808 lucid_driscoll[188163]: }
Oct 14 04:36:24 np0005486808 systemd[1]: libpod-06fa935d5253ad315576229855551ee81b11a28872d9abb486098bab0baf6e0c.scope: Deactivated successfully.
Oct 14 04:36:24 np0005486808 podman[188147]: 2025-10-14 08:36:24.531181411 +0000 UTC m=+1.257293867 container died 06fa935d5253ad315576229855551ee81b11a28872d9abb486098bab0baf6e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_driscoll, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:36:24 np0005486808 systemd[1]: libpod-06fa935d5253ad315576229855551ee81b11a28872d9abb486098bab0baf6e0c.scope: Consumed 1.072s CPU time.
Oct 14 04:36:24 np0005486808 systemd[1]: var-lib-containers-storage-overlay-f5d81d40e6212bde93cda3fe2e6f663e525f013c116082183744aeb4246bb387-merged.mount: Deactivated successfully.
Oct 14 04:36:24 np0005486808 podman[188147]: 2025-10-14 08:36:24.608573479 +0000 UTC m=+1.334685945 container remove 06fa935d5253ad315576229855551ee81b11a28872d9abb486098bab0baf6e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:36:24 np0005486808 systemd[1]: libpod-conmon-06fa935d5253ad315576229855551ee81b11a28872d9abb486098bab0baf6e0c.scope: Deactivated successfully.
Oct 14 04:36:24 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:36:24 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:36:24 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:36:24 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:36:24 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 0be350d6-7239-40c4-9efe-e670354ad997 does not exist
Oct 14 04:36:24 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 8e118007-0ae1-41e7-83ac-e0a4ba8cf508 does not exist
Oct 14 04:36:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:36:25 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:36:25 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:36:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:36:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:36:28 np0005486808 kernel: SELinux:  Converting 2765 SID table entries...
Oct 14 04:36:28 np0005486808 kernel: SELinux:  policy capability network_peer_controls=1
Oct 14 04:36:28 np0005486808 kernel: SELinux:  policy capability open_perms=1
Oct 14 04:36:28 np0005486808 kernel: SELinux:  policy capability extended_socket_class=1
Oct 14 04:36:28 np0005486808 kernel: SELinux:  policy capability always_check_network=0
Oct 14 04:36:28 np0005486808 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 14 04:36:28 np0005486808 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 14 04:36:28 np0005486808 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 14 04:36:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:36:29 np0005486808 dbus-broker-launch[774]: Noticed file-system modification, trigger reload.
Oct 14 04:36:29 np0005486808 dbus-broker-launch[782]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Oct 14 04:36:29 np0005486808 dbus-broker-launch[774]: Noticed file-system modification, trigger reload.
Oct 14 04:36:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:36:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:36:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_08:36:32
Oct 14 04:36:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 04:36:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 04:36:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'cephfs.cephfs.meta', 'backups', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', '.rgw.root', 'images']
Oct 14 04:36:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 04:36:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:36:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:36:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:36:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:36:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:36:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:36:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 04:36:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 04:36:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:36:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:36:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:36:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:36:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:36:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:36:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:36:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:36:32 np0005486808 podman[188335]: 2025-10-14 08:36:32.910172696 +0000 UTC m=+0.112339984 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller)
Oct 14 04:36:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:36:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:36:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:36:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:36:37 np0005486808 systemd[1]: Stopping OpenSSH server daemon...
Oct 14 04:36:37 np0005486808 systemd[1]: sshd.service: Deactivated successfully.
Oct 14 04:36:37 np0005486808 systemd[1]: Stopped OpenSSH server daemon.
Oct 14 04:36:37 np0005486808 systemd[1]: sshd.service: Consumed 2.863s CPU time, read 0B from disk, written 4.0K to disk.
Oct 14 04:36:37 np0005486808 systemd[1]: Stopped target sshd-keygen.target.
Oct 14 04:36:37 np0005486808 systemd[1]: Stopping sshd-keygen.target...
Oct 14 04:36:37 np0005486808 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 14 04:36:37 np0005486808 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 14 04:36:37 np0005486808 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 14 04:36:37 np0005486808 systemd[1]: Reached target sshd-keygen.target.
Oct 14 04:36:37 np0005486808 systemd[1]: Starting OpenSSH server daemon...
Oct 14 04:36:37 np0005486808 systemd[1]: Started OpenSSH server daemon.
Oct 14 04:36:37 np0005486808 podman[189150]: 2025-10-14 08:36:37.974897281 +0000 UTC m=+0.089108260 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 14 04:36:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:36:40 np0005486808 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 14 04:36:40 np0005486808 systemd[1]: Starting man-db-cache-update.service...
Oct 14 04:36:40 np0005486808 systemd[1]: Reloading.
Oct 14 04:36:40 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:36:40 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:36:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:36:40 np0005486808 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 14 04:36:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:36:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 04:36:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:36:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 04:36:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:36:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:36:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:36:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:36:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:36:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:36:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:36:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:36:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:36:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 04:36:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:36:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:36:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:36:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 04:36:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:36:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 04:36:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:36:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:36:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:36:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 04:36:42 np0005486808 systemd[1]: Starting PackageKit Daemon...
Oct 14 04:36:42 np0005486808 systemd[1]: Started PackageKit Daemon.
Oct 14 04:36:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:36:44 np0005486808 python3.9[192714]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 14 04:36:44 np0005486808 systemd[1]: Reloading.
Oct 14 04:36:44 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:36:44 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:36:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:36:45 np0005486808 python3.9[194140]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 14 04:36:45 np0005486808 systemd[1]: Reloading.
Oct 14 04:36:45 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:36:45 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:36:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:36:47 np0005486808 python3.9[195215]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 14 04:36:47 np0005486808 systemd[1]: Reloading.
Oct 14 04:36:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:36:47 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:36:47 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:36:48 np0005486808 python3.9[196574]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 14 04:36:48 np0005486808 systemd[1]: Reloading.
Oct 14 04:36:48 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:36:48 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:36:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:36:49 np0005486808 auditd[703]: Audit daemon rotating log files
Oct 14 04:36:49 np0005486808 python3.9[197979]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 14 04:36:49 np0005486808 systemd[1]: Reloading.
Oct 14 04:36:49 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:36:49 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:36:49 np0005486808 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 14 04:36:49 np0005486808 systemd[1]: Finished man-db-cache-update.service.
Oct 14 04:36:49 np0005486808 systemd[1]: man-db-cache-update.service: Consumed 11.826s CPU time.
Oct 14 04:36:49 np0005486808 systemd[1]: run-r06bf69923c344c11be373909f0a75d8e.service: Deactivated successfully.
Oct 14 04:36:50 np0005486808 python3.9[198775]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 14 04:36:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:36:51 np0005486808 systemd[1]: Reloading.
Oct 14 04:36:52 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:36:52 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:36:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:36:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:36:53 np0005486808 python3.9[198964]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 14 04:36:53 np0005486808 systemd[1]: Reloading.
Oct 14 04:36:53 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:36:53 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:36:53 np0005486808 systemd[1]: Starting dnf makecache...
Oct 14 04:36:53 np0005486808 dnf[199004]: Metadata cache refreshed recently.
Oct 14 04:36:53 np0005486808 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct 14 04:36:53 np0005486808 systemd[1]: Finished dnf makecache.
Oct 14 04:36:54 np0005486808 python3.9[199156]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 14 04:36:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:36:55 np0005486808 python3.9[199311]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 14 04:36:55 np0005486808 systemd[1]: Reloading.
Oct 14 04:36:55 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:36:55 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:36:56 np0005486808 python3.9[199501]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 14 04:36:56 np0005486808 systemd[1]: Reloading.
Oct 14 04:36:56 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:36:56 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:36:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:36:56 np0005486808 systemd[1]: Listening on libvirt proxy daemon socket.
Oct 14 04:36:57 np0005486808 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Oct 14 04:36:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:36:57 np0005486808 python3.9[199694]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 14 04:36:58 np0005486808 python3.9[199849]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 14 04:36:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:36:59 np0005486808 python3.9[200004]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 14 04:37:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:37:01 np0005486808 python3.9[200159]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 14 04:37:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:37:02 np0005486808 python3.9[200314]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 14 04:37:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:37:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:37:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:37:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:37:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:37:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:37:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:37:03 np0005486808 podman[200441]: 2025-10-14 08:37:03.096208115 +0000 UTC m=+0.130944518 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 14 04:37:03 np0005486808 python3.9[200487]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 14 04:37:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:37:05 np0005486808 python3.9[200651]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 14 04:37:06 np0005486808 python3.9[200806]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 14 04:37:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:37:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:37:06.991 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:37:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:37:06.991 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:37:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:37:06.991 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:37:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:37:07 np0005486808 python3.9[200961]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 14 04:37:08 np0005486808 python3.9[201116]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 14 04:37:08 np0005486808 podman[201118]: 2025-10-14 08:37:08.373995319 +0000 UTC m=+0.067610075 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 04:37:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:37:09 np0005486808 python3.9[201291]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 14 04:37:10 np0005486808 python3.9[201446]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 14 04:37:10 np0005486808 python3.9[201601]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 14 04:37:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:37:11 np0005486808 python3.9[201756]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 14 04:37:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:37:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:37:13 np0005486808 python3.9[201911]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:37:14 np0005486808 python3.9[202063]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:37:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:37:15 np0005486808 python3.9[202215]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:37:16 np0005486808 python3.9[202367]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:37:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:37:17 np0005486808 python3.9[202519]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:37:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:37:17 np0005486808 python3.9[202671]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:37:18 np0005486808 python3.9[202823]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:37:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:37:19 np0005486808 python3.9[202948]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760431038.001419-554-223950967449452/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:37:20 np0005486808 python3.9[203100]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:37:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:37:21 np0005486808 python3.9[203225]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760431039.8933797-554-112036156715274/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:37:22 np0005486808 python3.9[203377]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:37:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:37:22 np0005486808 python3.9[203502]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760431041.4362633-554-192021456004208/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:37:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:37:23 np0005486808 python3.9[203654]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:37:24 np0005486808 python3.9[203779]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760431042.9426541-554-136541225722811/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:37:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:37:25 np0005486808 python3.9[203931]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:37:25 np0005486808 python3.9[204173]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760431044.4417748-554-239449924053927/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:37:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:37:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:37:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 04:37:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:37:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 04:37:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:37:25 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 9c72098b-342a-44be-9e40-1c7757b79ffd does not exist
Oct 14 04:37:25 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 70c352aa-f246-4e3a-a223-d5b4a5b9b8f8 does not exist
Oct 14 04:37:25 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 8078df02-9eac-40a4-abd5-eaf98743a4e1 does not exist
Oct 14 04:37:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 04:37:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 04:37:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 04:37:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:37:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:37:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:37:26 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:37:26 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:37:26 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:37:26 np0005486808 python3.9[204452]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:37:26 np0005486808 podman[204482]: 2025-10-14 08:37:26.461113365 +0000 UTC m=+0.048394919 container create 72aa333d2b7ace26e1a8722a2a4dc160e7e8148375c616f8a020b7ef3185292d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cerf, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:37:26 np0005486808 systemd[1]: Started libpod-conmon-72aa333d2b7ace26e1a8722a2a4dc160e7e8148375c616f8a020b7ef3185292d.scope.
Oct 14 04:37:26 np0005486808 podman[204482]: 2025-10-14 08:37:26.434422245 +0000 UTC m=+0.021703849 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:37:26 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:37:26 np0005486808 podman[204482]: 2025-10-14 08:37:26.569071942 +0000 UTC m=+0.156353586 container init 72aa333d2b7ace26e1a8722a2a4dc160e7e8148375c616f8a020b7ef3185292d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cerf, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 14 04:37:26 np0005486808 podman[204482]: 2025-10-14 08:37:26.577281171 +0000 UTC m=+0.164562755 container start 72aa333d2b7ace26e1a8722a2a4dc160e7e8148375c616f8a020b7ef3185292d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cerf, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Oct 14 04:37:26 np0005486808 podman[204482]: 2025-10-14 08:37:26.581447053 +0000 UTC m=+0.168728647 container attach 72aa333d2b7ace26e1a8722a2a4dc160e7e8148375c616f8a020b7ef3185292d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cerf, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:37:26 np0005486808 sad_cerf[204500]: 167 167
Oct 14 04:37:26 np0005486808 systemd[1]: libpod-72aa333d2b7ace26e1a8722a2a4dc160e7e8148375c616f8a020b7ef3185292d.scope: Deactivated successfully.
Oct 14 04:37:26 np0005486808 podman[204482]: 2025-10-14 08:37:26.583827561 +0000 UTC m=+0.171109135 container died 72aa333d2b7ace26e1a8722a2a4dc160e7e8148375c616f8a020b7ef3185292d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cerf, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 14 04:37:26 np0005486808 systemd[1]: var-lib-containers-storage-overlay-0a8901ffb827c722e83150633afe9ee5f07bb4faee70050cb064421fa21371f8-merged.mount: Deactivated successfully.
Oct 14 04:37:26 np0005486808 podman[204482]: 2025-10-14 08:37:26.636319998 +0000 UTC m=+0.223601562 container remove 72aa333d2b7ace26e1a8722a2a4dc160e7e8148375c616f8a020b7ef3185292d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cerf, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:37:26 np0005486808 systemd[1]: libpod-conmon-72aa333d2b7ace26e1a8722a2a4dc160e7e8148375c616f8a020b7ef3185292d.scope: Deactivated successfully.
Oct 14 04:37:26 np0005486808 podman[204615]: 2025-10-14 08:37:26.870116107 +0000 UTC m=+0.072027134 container create b70cd6503a39c02eee238bd2e4cf09ae519971fabec2f94b6e6d2e8325262794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_archimedes, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 14 04:37:26 np0005486808 systemd[1]: Started libpod-conmon-b70cd6503a39c02eee238bd2e4cf09ae519971fabec2f94b6e6d2e8325262794.scope.
Oct 14 04:37:26 np0005486808 podman[204615]: 2025-10-14 08:37:26.839747998 +0000 UTC m=+0.041659065 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:37:26 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:37:26 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c92fed85df6d5ae52aa4838969196f8fd809532dceca28ebb4f2519f2cc14d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:37:26 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c92fed85df6d5ae52aa4838969196f8fd809532dceca28ebb4f2519f2cc14d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:37:26 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c92fed85df6d5ae52aa4838969196f8fd809532dceca28ebb4f2519f2cc14d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:37:26 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c92fed85df6d5ae52aa4838969196f8fd809532dceca28ebb4f2519f2cc14d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:37:26 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c92fed85df6d5ae52aa4838969196f8fd809532dceca28ebb4f2519f2cc14d2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:37:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:37:26 np0005486808 podman[204615]: 2025-10-14 08:37:26.981664281 +0000 UTC m=+0.183575338 container init b70cd6503a39c02eee238bd2e4cf09ae519971fabec2f94b6e6d2e8325262794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_archimedes, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:37:26 np0005486808 podman[204615]: 2025-10-14 08:37:26.993735355 +0000 UTC m=+0.195646372 container start b70cd6503a39c02eee238bd2e4cf09ae519971fabec2f94b6e6d2e8325262794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_archimedes, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 14 04:37:26 np0005486808 podman[204615]: 2025-10-14 08:37:26.998321436 +0000 UTC m=+0.200232463 container attach b70cd6503a39c02eee238bd2e4cf09ae519971fabec2f94b6e6d2e8325262794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_archimedes, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:37:27 np0005486808 python3.9[204661]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760431045.9332979-554-152716367709549/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:37:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:37:27 np0005486808 python3.9[204828]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:37:27 np0005486808 wizardly_archimedes[204664]: --> passed data devices: 0 physical, 3 LVM
Oct 14 04:37:27 np0005486808 wizardly_archimedes[204664]: --> relative data size: 1.0
Oct 14 04:37:27 np0005486808 wizardly_archimedes[204664]: --> All data devices are unavailable
Oct 14 04:37:28 np0005486808 systemd[1]: libpod-b70cd6503a39c02eee238bd2e4cf09ae519971fabec2f94b6e6d2e8325262794.scope: Deactivated successfully.
Oct 14 04:37:28 np0005486808 podman[204615]: 2025-10-14 08:37:28.006448234 +0000 UTC m=+1.208359231 container died b70cd6503a39c02eee238bd2e4cf09ae519971fabec2f94b6e6d2e8325262794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_archimedes, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Oct 14 04:37:28 np0005486808 systemd[1]: var-lib-containers-storage-overlay-4c92fed85df6d5ae52aa4838969196f8fd809532dceca28ebb4f2519f2cc14d2-merged.mount: Deactivated successfully.
Oct 14 04:37:28 np0005486808 podman[204615]: 2025-10-14 08:37:28.076101639 +0000 UTC m=+1.278012626 container remove b70cd6503a39c02eee238bd2e4cf09ae519971fabec2f94b6e6d2e8325262794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_archimedes, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:37:28 np0005486808 systemd[1]: libpod-conmon-b70cd6503a39c02eee238bd2e4cf09ae519971fabec2f94b6e6d2e8325262794.scope: Deactivated successfully.
Oct 14 04:37:28 np0005486808 python3.9[205054]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760431047.3307102-554-80608480822560/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:37:28 np0005486808 podman[205150]: 2025-10-14 08:37:28.615666238 +0000 UTC m=+0.032935973 container create 039113619c722a4ceb239007d45d433919760df64c0935016c40d5a20c561f03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wing, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:37:28 np0005486808 systemd[1]: Started libpod-conmon-039113619c722a4ceb239007d45d433919760df64c0935016c40d5a20c561f03.scope.
Oct 14 04:37:28 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:37:28 np0005486808 podman[205150]: 2025-10-14 08:37:28.687339502 +0000 UTC m=+0.104609247 container init 039113619c722a4ceb239007d45d433919760df64c0935016c40d5a20c561f03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wing, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 14 04:37:28 np0005486808 podman[205150]: 2025-10-14 08:37:28.60056168 +0000 UTC m=+0.017831435 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:37:28 np0005486808 podman[205150]: 2025-10-14 08:37:28.699573299 +0000 UTC m=+0.116843054 container start 039113619c722a4ceb239007d45d433919760df64c0935016c40d5a20c561f03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wing, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:37:28 np0005486808 podman[205150]: 2025-10-14 08:37:28.703258859 +0000 UTC m=+0.120528624 container attach 039113619c722a4ceb239007d45d433919760df64c0935016c40d5a20c561f03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wing, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 14 04:37:28 np0005486808 funny_wing[205203]: 167 167
Oct 14 04:37:28 np0005486808 systemd[1]: libpod-039113619c722a4ceb239007d45d433919760df64c0935016c40d5a20c561f03.scope: Deactivated successfully.
Oct 14 04:37:28 np0005486808 podman[205150]: 2025-10-14 08:37:28.706005426 +0000 UTC m=+0.123275171 container died 039113619c722a4ceb239007d45d433919760df64c0935016c40d5a20c561f03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wing, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 14 04:37:28 np0005486808 systemd[1]: var-lib-containers-storage-overlay-5cd9de823f97f4d53b9a86c9b5dc796a5d713be8690938b87ba511a4cacfb52b-merged.mount: Deactivated successfully.
Oct 14 04:37:28 np0005486808 podman[205150]: 2025-10-14 08:37:28.743921538 +0000 UTC m=+0.161191273 container remove 039113619c722a4ceb239007d45d433919760df64c0935016c40d5a20c561f03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Oct 14 04:37:28 np0005486808 systemd[1]: libpod-conmon-039113619c722a4ceb239007d45d433919760df64c0935016c40d5a20c561f03.scope: Deactivated successfully.
Oct 14 04:37:28 np0005486808 podman[205281]: 2025-10-14 08:37:28.884681463 +0000 UTC m=+0.037595106 container create e44927b0b00e04080b2638bd59da06cf87525288de52c529b38f8af3784fc086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_haibt, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:37:28 np0005486808 systemd[1]: Started libpod-conmon-e44927b0b00e04080b2638bd59da06cf87525288de52c529b38f8af3784fc086.scope.
Oct 14 04:37:28 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:37:28 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1db71ae04d458ed08af50641978813f2b215d9b0ac441ea9eb9cb6afc780430e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:37:28 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1db71ae04d458ed08af50641978813f2b215d9b0ac441ea9eb9cb6afc780430e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:37:28 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1db71ae04d458ed08af50641978813f2b215d9b0ac441ea9eb9cb6afc780430e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:37:28 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1db71ae04d458ed08af50641978813f2b215d9b0ac441ea9eb9cb6afc780430e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:37:28 np0005486808 podman[205281]: 2025-10-14 08:37:28.965144891 +0000 UTC m=+0.118058564 container init e44927b0b00e04080b2638bd59da06cf87525288de52c529b38f8af3784fc086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_haibt, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 04:37:28 np0005486808 podman[205281]: 2025-10-14 08:37:28.868072779 +0000 UTC m=+0.020986452 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:37:28 np0005486808 podman[205281]: 2025-10-14 08:37:28.972993842 +0000 UTC m=+0.125907475 container start e44927b0b00e04080b2638bd59da06cf87525288de52c529b38f8af3784fc086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 14 04:37:28 np0005486808 podman[205281]: 2025-10-14 08:37:28.97618766 +0000 UTC m=+0.129101303 container attach e44927b0b00e04080b2638bd59da06cf87525288de52c529b38f8af3784fc086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 14 04:37:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:37:29 np0005486808 python3.9[205328]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:37:29 np0005486808 loving_haibt[205325]: {
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:    "0": [
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:        {
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:            "devices": [
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:                "/dev/loop3"
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:            ],
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:            "lv_name": "ceph_lv0",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:            "lv_size": "21470642176",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:            "name": "ceph_lv0",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:            "tags": {
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:                "ceph.cluster_name": "ceph",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:                "ceph.crush_device_class": "",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:                "ceph.encrypted": "0",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:                "ceph.osd_id": "0",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:                "ceph.type": "block",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:                "ceph.vdo": "0"
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:            },
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:            "type": "block",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:            "vg_name": "ceph_vg0"
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:        }
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:    ],
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:    "1": [
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:        {
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:            "devices": [
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:                "/dev/loop4"
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:            ],
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:            "lv_name": "ceph_lv1",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:            "lv_size": "21470642176",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:            "name": "ceph_lv1",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:            "tags": {
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:                "ceph.cluster_name": "ceph",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:                "ceph.crush_device_class": "",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:                "ceph.encrypted": "0",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:                "ceph.osd_id": "1",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:                "ceph.type": "block",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:                "ceph.vdo": "0"
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:            },
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:            "type": "block",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:            "vg_name": "ceph_vg1"
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:        }
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:    ],
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:    "2": [
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:        {
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:            "devices": [
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:                "/dev/loop5"
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:            ],
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:            "lv_name": "ceph_lv2",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:            "lv_size": "21470642176",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:            "name": "ceph_lv2",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:            "tags": {
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:                "ceph.cluster_name": "ceph",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:                "ceph.crush_device_class": "",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:                "ceph.encrypted": "0",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:                "ceph.osd_id": "2",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:                "ceph.type": "block",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:                "ceph.vdo": "0"
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:            },
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:            "type": "block",
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:            "vg_name": "ceph_vg2"
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:        }
Oct 14 04:37:29 np0005486808 loving_haibt[205325]:    ]
Oct 14 04:37:29 np0005486808 loving_haibt[205325]: }
Oct 14 04:37:29 np0005486808 python3.9[205456]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760431048.5982654-554-209971643233557/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:37:29 np0005486808 systemd[1]: libpod-e44927b0b00e04080b2638bd59da06cf87525288de52c529b38f8af3784fc086.scope: Deactivated successfully.
Oct 14 04:37:29 np0005486808 podman[205281]: 2025-10-14 08:37:29.717781044 +0000 UTC m=+0.870694697 container died e44927b0b00e04080b2638bd59da06cf87525288de52c529b38f8af3784fc086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 14 04:37:29 np0005486808 systemd[1]: var-lib-containers-storage-overlay-1db71ae04d458ed08af50641978813f2b215d9b0ac441ea9eb9cb6afc780430e-merged.mount: Deactivated successfully.
Oct 14 04:37:29 np0005486808 podman[205281]: 2025-10-14 08:37:29.787783567 +0000 UTC m=+0.940697210 container remove e44927b0b00e04080b2638bd59da06cf87525288de52c529b38f8af3784fc086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:37:29 np0005486808 systemd[1]: libpod-conmon-e44927b0b00e04080b2638bd59da06cf87525288de52c529b38f8af3784fc086.scope: Deactivated successfully.
Oct 14 04:37:30 np0005486808 podman[205768]: 2025-10-14 08:37:30.465125338 +0000 UTC m=+0.046546274 container create e5e3d53d30d24a2af461e94e35d4d5f5c54e5811bce8b520416ffd19982e3a1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:37:30 np0005486808 python3.9[205745]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Oct 14 04:37:30 np0005486808 systemd[1]: Started libpod-conmon-e5e3d53d30d24a2af461e94e35d4d5f5c54e5811bce8b520416ffd19982e3a1f.scope.
Oct 14 04:37:30 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:37:30 np0005486808 podman[205768]: 2025-10-14 08:37:30.446206038 +0000 UTC m=+0.027626974 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:37:30 np0005486808 podman[205768]: 2025-10-14 08:37:30.564638159 +0000 UTC m=+0.146059175 container init e5e3d53d30d24a2af461e94e35d4d5f5c54e5811bce8b520416ffd19982e3a1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wu, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:37:30 np0005486808 podman[205768]: 2025-10-14 08:37:30.57452105 +0000 UTC m=+0.155941976 container start e5e3d53d30d24a2af461e94e35d4d5f5c54e5811bce8b520416ffd19982e3a1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wu, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:37:30 np0005486808 quizzical_wu[205786]: 167 167
Oct 14 04:37:30 np0005486808 systemd[1]: libpod-e5e3d53d30d24a2af461e94e35d4d5f5c54e5811bce8b520416ffd19982e3a1f.scope: Deactivated successfully.
Oct 14 04:37:30 np0005486808 podman[205768]: 2025-10-14 08:37:30.582938865 +0000 UTC m=+0.164359821 container attach e5e3d53d30d24a2af461e94e35d4d5f5c54e5811bce8b520416ffd19982e3a1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wu, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:37:30 np0005486808 podman[205768]: 2025-10-14 08:37:30.583491748 +0000 UTC m=+0.164912714 container died e5e3d53d30d24a2af461e94e35d4d5f5c54e5811bce8b520416ffd19982e3a1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wu, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:37:30 np0005486808 systemd[1]: var-lib-containers-storage-overlay-3051b1b49967989681e85d5564741c64f167335978f3b2e47f4e39e8f1965dca-merged.mount: Deactivated successfully.
Oct 14 04:37:30 np0005486808 podman[205768]: 2025-10-14 08:37:30.624172778 +0000 UTC m=+0.205593694 container remove e5e3d53d30d24a2af461e94e35d4d5f5c54e5811bce8b520416ffd19982e3a1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:37:30 np0005486808 systemd[1]: libpod-conmon-e5e3d53d30d24a2af461e94e35d4d5f5c54e5811bce8b520416ffd19982e3a1f.scope: Deactivated successfully.
Oct 14 04:37:30 np0005486808 podman[205834]: 2025-10-14 08:37:30.82442796 +0000 UTC m=+0.081654787 container create 22aa0d41279e1844f01d5b4860334abb285361b160a4b754b60c205bef91fa2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 14 04:37:30 np0005486808 systemd[1]: Started libpod-conmon-22aa0d41279e1844f01d5b4860334abb285361b160a4b754b60c205bef91fa2e.scope.
Oct 14 04:37:30 np0005486808 podman[205834]: 2025-10-14 08:37:30.792059283 +0000 UTC m=+0.049286180 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:37:30 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:37:30 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ee39787ed85ed459c8df938383bd6d81225ecb423bc33cca32b0576a12feaa9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:37:30 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ee39787ed85ed459c8df938383bd6d81225ecb423bc33cca32b0576a12feaa9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:37:30 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ee39787ed85ed459c8df938383bd6d81225ecb423bc33cca32b0576a12feaa9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:37:30 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ee39787ed85ed459c8df938383bd6d81225ecb423bc33cca32b0576a12feaa9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:37:30 np0005486808 podman[205834]: 2025-10-14 08:37:30.936896037 +0000 UTC m=+0.194122884 container init 22aa0d41279e1844f01d5b4860334abb285361b160a4b754b60c205bef91fa2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 14 04:37:30 np0005486808 podman[205834]: 2025-10-14 08:37:30.950505938 +0000 UTC m=+0.207732765 container start 22aa0d41279e1844f01d5b4860334abb285361b160a4b754b60c205bef91fa2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lederberg, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 14 04:37:30 np0005486808 podman[205834]: 2025-10-14 08:37:30.955190132 +0000 UTC m=+0.212416989 container attach 22aa0d41279e1844f01d5b4860334abb285361b160a4b754b60c205bef91fa2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lederberg, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:37:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:37:31 np0005486808 python3.9[205983]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:37:32 np0005486808 cranky_lederberg[205895]: {
Oct 14 04:37:32 np0005486808 cranky_lederberg[205895]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 04:37:32 np0005486808 cranky_lederberg[205895]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:37:32 np0005486808 cranky_lederberg[205895]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 04:37:32 np0005486808 cranky_lederberg[205895]:        "osd_id": 2,
Oct 14 04:37:32 np0005486808 cranky_lederberg[205895]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:37:32 np0005486808 cranky_lederberg[205895]:        "type": "bluestore"
Oct 14 04:37:32 np0005486808 cranky_lederberg[205895]:    },
Oct 14 04:37:32 np0005486808 cranky_lederberg[205895]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 04:37:32 np0005486808 cranky_lederberg[205895]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:37:32 np0005486808 cranky_lederberg[205895]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 04:37:32 np0005486808 cranky_lederberg[205895]:        "osd_id": 1,
Oct 14 04:37:32 np0005486808 cranky_lederberg[205895]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:37:32 np0005486808 cranky_lederberg[205895]:        "type": "bluestore"
Oct 14 04:37:32 np0005486808 cranky_lederberg[205895]:    },
Oct 14 04:37:32 np0005486808 cranky_lederberg[205895]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 04:37:32 np0005486808 cranky_lederberg[205895]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:37:32 np0005486808 cranky_lederberg[205895]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 04:37:32 np0005486808 cranky_lederberg[205895]:        "osd_id": 0,
Oct 14 04:37:32 np0005486808 cranky_lederberg[205895]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:37:32 np0005486808 cranky_lederberg[205895]:        "type": "bluestore"
Oct 14 04:37:32 np0005486808 cranky_lederberg[205895]:    }
Oct 14 04:37:32 np0005486808 cranky_lederberg[205895]: }
Oct 14 04:37:32 np0005486808 systemd[1]: libpod-22aa0d41279e1844f01d5b4860334abb285361b160a4b754b60c205bef91fa2e.scope: Deactivated successfully.
Oct 14 04:37:32 np0005486808 podman[205834]: 2025-10-14 08:37:32.11165655 +0000 UTC m=+1.368883367 container died 22aa0d41279e1844f01d5b4860334abb285361b160a4b754b60c205bef91fa2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 14 04:37:32 np0005486808 systemd[1]: libpod-22aa0d41279e1844f01d5b4860334abb285361b160a4b754b60c205bef91fa2e.scope: Consumed 1.170s CPU time.
Oct 14 04:37:32 np0005486808 systemd[1]: var-lib-containers-storage-overlay-9ee39787ed85ed459c8df938383bd6d81225ecb423bc33cca32b0576a12feaa9-merged.mount: Deactivated successfully.
Oct 14 04:37:32 np0005486808 podman[205834]: 2025-10-14 08:37:32.170097242 +0000 UTC m=+1.427324059 container remove 22aa0d41279e1844f01d5b4860334abb285361b160a4b754b60c205bef91fa2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lederberg, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 14 04:37:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:37:32 np0005486808 systemd[1]: libpod-conmon-22aa0d41279e1844f01d5b4860334abb285361b160a4b754b60c205bef91fa2e.scope: Deactivated successfully.
Oct 14 04:37:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:37:32 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:37:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:37:32 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:37:32 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 949b8cce-9623-400d-8899-f717f33559ca does not exist
Oct 14 04:37:32 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev dcdf2ddd-bb92-4206-8ace-222ed764bf65 does not exist
Oct 14 04:37:32 np0005486808 python3.9[206163]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:37:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_08:37:32
Oct 14 04:37:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 04:37:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 04:37:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'volumes', 'vms', 'images', 'default.rgw.log', '.mgr']
Oct 14 04:37:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 04:37:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:37:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:37:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:37:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:37:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:37:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:37:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 04:37:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:37:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 04:37:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:37:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:37:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:37:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:37:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:37:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:37:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:37:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:37:33 np0005486808 python3.9[206378]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:37:33 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:37:33 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:37:33 np0005486808 podman[206502]: 2025-10-14 08:37:33.741665741 +0000 UTC m=+0.167790684 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 14 04:37:33 np0005486808 python3.9[206549]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:37:34 np0005486808 python3.9[206708]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:37:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:37:35 np0005486808 python3.9[206860]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:37:36 np0005486808 python3.9[207012]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:37:36 np0005486808 python3.9[207164]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:37:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:37:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:37:37 np0005486808 python3.9[207316]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:37:38 np0005486808 python3.9[207468]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:37:38 np0005486808 podman[207516]: 2025-10-14 08:37:38.664585481 +0000 UTC m=+0.069153543 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 14 04:37:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:37:39 np0005486808 python3.9[207638]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:37:39 np0005486808 python3.9[207790]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:37:40 np0005486808 python3.9[207942]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:37:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:37:41 np0005486808 python3.9[208094]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:37:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:37:42 np0005486808 python3.9[208246]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:37:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 04:37:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:37:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 04:37:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:37:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:37:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:37:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:37:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:37:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:37:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:37:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:37:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:37:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 04:37:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:37:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:37:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:37:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 04:37:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:37:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 04:37:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:37:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:37:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:37:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 04:37:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:37:43 np0005486808 python3.9[208369]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760431061.7756808-775-149610112283579/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:37:43 np0005486808 python3.9[208521]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:37:44 np0005486808 python3.9[208644]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760431063.3446634-775-111062198625924/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:37:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:37:45 np0005486808 python3.9[208796]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:37:46 np0005486808 python3.9[208919]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760431064.7726479-775-108825107546650/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:37:46 np0005486808 python3.9[209071]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:37:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:37:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:37:47 np0005486808 python3.9[209194]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760431066.3131654-775-91485065181164/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:37:48 np0005486808 python3.9[209346]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:37:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:37:49 np0005486808 python3.9[209469]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760431067.7823079-775-187868876231187/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:37:49 np0005486808 python3.9[209621]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:37:50 np0005486808 python3.9[209744]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760431069.389477-775-252895167940474/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:37:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:37:51 np0005486808 python3.9[209896]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:37:51 np0005486808 python3.9[210019]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760431070.6351683-775-94693641628453/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:37:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:37:52 np0005486808 python3.9[210171]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:37:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:37:53 np0005486808 python3.9[210294]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760431071.980097-775-74282403317363/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:37:53 np0005486808 python3.9[210446]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:37:54 np0005486808 python3.9[210569]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760431073.2665536-775-235188000384645/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:37:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:37:55 np0005486808 python3.9[210721]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:37:55 np0005486808 python3.9[210844]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760431074.6992903-775-187192747186884/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:37:56 np0005486808 python3.9[210996]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:37:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:37:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:37:57 np0005486808 python3.9[211119]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760431076.1317124-775-250811180893503/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:37:57 np0005486808 python3.9[211271]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:37:58 np0005486808 python3.9[211394]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760431077.45076-775-161625560718989/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:37:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:37:59 np0005486808 python3.9[211546]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:37:59 np0005486808 python3.9[211669]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760431078.7598686-775-167057451581015/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:38:00 np0005486808 python3.9[211821]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:38:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:38:01 np0005486808 python3.9[211944]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760431080.0853124-775-114141250981364/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:38:01 np0005486808 python3.9[212094]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:38:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:38:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:38:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:38:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:38:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:38:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:38:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:38:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:38:03 np0005486808 python3.9[212249]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Oct 14 04:38:04 np0005486808 dbus-broker-launch[782]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Oct 14 04:38:04 np0005486808 podman[212353]: 2025-10-14 08:38:04.736330132 +0000 UTC m=+0.126221222 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller)
Oct 14 04:38:04 np0005486808 python3.9[212431]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:38:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:38:05 np0005486808 python3.9[212584]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:38:06 np0005486808 python3.9[212736]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:38:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:38:06.993 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:38:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:38:06.993 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:38:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:38:06.994 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:38:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:38:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:38:07 np0005486808 python3.9[212888]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:38:08 np0005486808 python3.9[213040]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:38:08 np0005486808 python3.9[213192]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:38:08 np0005486808 podman[213193]: 2025-10-14 08:38:08.87412849 +0000 UTC m=+0.047886136 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 14 04:38:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:38:09 np0005486808 python3.9[213361]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:38:10 np0005486808 python3.9[213513]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:38:10 np0005486808 python3.9[213665]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:38:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:38:11 np0005486808 python3.9[213817]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:38:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:38:12 np0005486808 python3.9[213969]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 14 04:38:12 np0005486808 systemd[1]: Reloading.
Oct 14 04:38:12 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:38:12 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:38:12 np0005486808 systemd[1]: Starting libvirt logging daemon socket...
Oct 14 04:38:12 np0005486808 systemd[1]: Listening on libvirt logging daemon socket.
Oct 14 04:38:12 np0005486808 systemd[1]: Starting libvirt logging daemon admin socket...
Oct 14 04:38:12 np0005486808 systemd[1]: Listening on libvirt logging daemon admin socket.
Oct 14 04:38:12 np0005486808 systemd[1]: Starting libvirt logging daemon...
Oct 14 04:38:12 np0005486808 systemd[1]: Started libvirt logging daemon.
Oct 14 04:38:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:38:13 np0005486808 python3.9[214162]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 14 04:38:13 np0005486808 systemd[1]: Reloading.
Oct 14 04:38:14 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:38:14 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:38:14 np0005486808 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Oct 14 04:38:14 np0005486808 systemd[1]: Starting libvirt nodedev daemon socket...
Oct 14 04:38:14 np0005486808 systemd[1]: Listening on libvirt nodedev daemon socket.
Oct 14 04:38:14 np0005486808 systemd[1]: Starting libvirt nodedev daemon admin socket...
Oct 14 04:38:14 np0005486808 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Oct 14 04:38:14 np0005486808 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Oct 14 04:38:14 np0005486808 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Oct 14 04:38:14 np0005486808 systemd[1]: Starting libvirt nodedev daemon...
Oct 14 04:38:14 np0005486808 systemd[1]: Started libvirt nodedev daemon.
Oct 14 04:38:14 np0005486808 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Oct 14 04:38:14 np0005486808 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Oct 14 04:38:14 np0005486808 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Oct 14 04:38:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:38:15 np0005486808 python3.9[214385]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 14 04:38:15 np0005486808 systemd[1]: Reloading.
Oct 14 04:38:15 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:38:15 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:38:15 np0005486808 systemd[1]: Starting libvirt proxy daemon admin socket...
Oct 14 04:38:15 np0005486808 systemd[1]: Starting libvirt proxy daemon read-only socket...
Oct 14 04:38:15 np0005486808 systemd[1]: Listening on libvirt proxy daemon admin socket.
Oct 14 04:38:15 np0005486808 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Oct 14 04:38:15 np0005486808 systemd[1]: Starting libvirt proxy daemon...
Oct 14 04:38:15 np0005486808 systemd[1]: Started libvirt proxy daemon.
Oct 14 04:38:15 np0005486808 setroubleshoot[214199]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 632f02fe-39b8-46f6-9acc-f306d010244a
Oct 14 04:38:15 np0005486808 setroubleshoot[214199]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Oct 14 04:38:15 np0005486808 setroubleshoot[214199]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 632f02fe-39b8-46f6-9acc-f306d010244a
Oct 14 04:38:15 np0005486808 setroubleshoot[214199]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Oct 14 04:38:16 np0005486808 python3.9[214596]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 14 04:38:16 np0005486808 systemd[1]: Reloading.
Oct 14 04:38:16 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:38:16 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:38:16 np0005486808 systemd[1]: Listening on libvirt locking daemon socket.
Oct 14 04:38:16 np0005486808 systemd[1]: Starting libvirt QEMU daemon socket...
Oct 14 04:38:16 np0005486808 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 14 04:38:16 np0005486808 systemd[1]: Starting Virtual Machine and Container Registration Service...
Oct 14 04:38:16 np0005486808 systemd[1]: Listening on libvirt QEMU daemon socket.
Oct 14 04:38:16 np0005486808 systemd[1]: Starting libvirt QEMU daemon admin socket...
Oct 14 04:38:16 np0005486808 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Oct 14 04:38:16 np0005486808 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Oct 14 04:38:16 np0005486808 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Oct 14 04:38:16 np0005486808 systemd[1]: Started Virtual Machine and Container Registration Service.
Oct 14 04:38:16 np0005486808 systemd[1]: Starting libvirt QEMU daemon...
Oct 14 04:38:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:38:17 np0005486808 systemd[1]: Started libvirt QEMU daemon.
Oct 14 04:38:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:38:17 np0005486808 python3.9[214809]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 14 04:38:17 np0005486808 systemd[1]: Reloading.
Oct 14 04:38:17 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:38:17 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:38:18 np0005486808 systemd[1]: Starting libvirt secret daemon socket...
Oct 14 04:38:18 np0005486808 systemd[1]: Listening on libvirt secret daemon socket.
Oct 14 04:38:18 np0005486808 systemd[1]: Starting libvirt secret daemon admin socket...
Oct 14 04:38:18 np0005486808 systemd[1]: Starting libvirt secret daemon read-only socket...
Oct 14 04:38:18 np0005486808 systemd[1]: Listening on libvirt secret daemon read-only socket.
Oct 14 04:38:18 np0005486808 systemd[1]: Listening on libvirt secret daemon admin socket.
Oct 14 04:38:18 np0005486808 systemd[1]: Starting libvirt secret daemon...
Oct 14 04:38:18 np0005486808 systemd[1]: Started libvirt secret daemon.
Oct 14 04:38:19 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:38:19 np0005486808 python3.9[215019]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:38:20 np0005486808 python3.9[215171]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 14 04:38:20 np0005486808 python3.9[215323]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:38:21 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:38:21 np0005486808 python3.9[215477]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 14 04:38:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:38:22 np0005486808 python3.9[215627]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:38:23 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:38:23 np0005486808 python3.9[215748]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760431101.9543293-1133-154143262165824/.source.xml follow=False _original_basename=secret.xml.j2 checksum=bb60ee115d72a4056e2d4c1d7188db6331b4b8b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:38:23 np0005486808 python3.9[215900]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine c49aadb6-9b04-5cb1-8f5f-4c91676c568e#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:38:24 np0005486808 python3.9[216062]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:38:25 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:38:25 np0005486808 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Oct 14 04:38:25 np0005486808 systemd[1]: setroubleshootd.service: Deactivated successfully.
Oct 14 04:38:27 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:38:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:38:27 np0005486808 python3.9[216525]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:38:28 np0005486808 python3.9[216677]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:38:28 np0005486808 python3.9[216800]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1760431107.6020255-1188-97129367229645/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:38:29 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:38:29 np0005486808 python3.9[216952]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:38:30 np0005486808 python3.9[217104]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:38:31 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:38:31 np0005486808 python3.9[217182]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:38:31 np0005486808 python3.9[217334]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:38:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:38:32 np0005486808 python3.9[217412]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.hb33n17z recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:38:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_08:38:32
Oct 14 04:38:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 04:38:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 04:38:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['images', 'default.rgw.log', 'volumes', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'default.rgw.meta', 'vms', 'backups', 'cephfs.cephfs.data']
Oct 14 04:38:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 04:38:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:38:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:38:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:38:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:38:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:38:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:38:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 04:38:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:38:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 04:38:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:38:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:38:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:38:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:38:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:38:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:38:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:38:33 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:38:33 np0005486808 python3.9[217680]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:38:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:38:33 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:38:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 04:38:33 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:38:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 04:38:33 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:38:33 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 22f3a244-4178-44de-b3c9-0b18a12293a2 does not exist
Oct 14 04:38:33 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 8aa2b83a-ee98-4a0a-9ee0-2e234172d093 does not exist
Oct 14 04:38:33 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 927c72b3-5c72-484c-8f25-f80ef899279d does not exist
Oct 14 04:38:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 04:38:33 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 04:38:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 04:38:33 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:38:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:38:33 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:38:33 np0005486808 python3.9[217865]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:38:34 np0005486808 podman[217938]: 2025-10-14 08:38:34.002568459 +0000 UTC m=+0.048705806 container create 6b06d8db6184eb9cc5f2c0b2e3404a77c6334aabcb8098cebb43353393af086c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 04:38:34 np0005486808 systemd[1]: Started libpod-conmon-6b06d8db6184eb9cc5f2c0b2e3404a77c6334aabcb8098cebb43353393af086c.scope.
Oct 14 04:38:34 np0005486808 podman[217938]: 2025-10-14 08:38:33.977692174 +0000 UTC m=+0.023829531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:38:34 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:38:34 np0005486808 podman[217938]: 2025-10-14 08:38:34.109498931 +0000 UTC m=+0.155636328 container init 6b06d8db6184eb9cc5f2c0b2e3404a77c6334aabcb8098cebb43353393af086c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_franklin, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Oct 14 04:38:34 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:38:34 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:38:34 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:38:34 np0005486808 podman[217938]: 2025-10-14 08:38:34.122234581 +0000 UTC m=+0.168371908 container start 6b06d8db6184eb9cc5f2c0b2e3404a77c6334aabcb8098cebb43353393af086c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_franklin, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:38:34 np0005486808 condescending_franklin[217977]: 167 167
Oct 14 04:38:34 np0005486808 podman[217938]: 2025-10-14 08:38:34.128783111 +0000 UTC m=+0.174920498 container attach 6b06d8db6184eb9cc5f2c0b2e3404a77c6334aabcb8098cebb43353393af086c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_franklin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:38:34 np0005486808 systemd[1]: libpod-6b06d8db6184eb9cc5f2c0b2e3404a77c6334aabcb8098cebb43353393af086c.scope: Deactivated successfully.
Oct 14 04:38:34 np0005486808 podman[217938]: 2025-10-14 08:38:34.131524947 +0000 UTC m=+0.177662254 container died 6b06d8db6184eb9cc5f2c0b2e3404a77c6334aabcb8098cebb43353393af086c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 14 04:38:34 np0005486808 systemd[1]: var-lib-containers-storage-overlay-32bb78b978bb6695d7e363cf12f2244ca88d910fa627a7bf4235eaae1ae4968a-merged.mount: Deactivated successfully.
Oct 14 04:38:34 np0005486808 podman[217938]: 2025-10-14 08:38:34.198105077 +0000 UTC m=+0.244242394 container remove 6b06d8db6184eb9cc5f2c0b2e3404a77c6334aabcb8098cebb43353393af086c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_franklin, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:38:34 np0005486808 systemd[1]: libpod-conmon-6b06d8db6184eb9cc5f2c0b2e3404a77c6334aabcb8098cebb43353393af086c.scope: Deactivated successfully.
Oct 14 04:38:34 np0005486808 podman[218077]: 2025-10-14 08:38:34.437140493 +0000 UTC m=+0.073701884 container create 9abf1dd2e4de241ea5e2e3de42f8bbb3d2fec4e151325e17fa073f20c43e56a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_fermat, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 04:38:34 np0005486808 systemd[1]: Started libpod-conmon-9abf1dd2e4de241ea5e2e3de42f8bbb3d2fec4e151325e17fa073f20c43e56a2.scope.
Oct 14 04:38:34 np0005486808 podman[218077]: 2025-10-14 08:38:34.40658823 +0000 UTC m=+0.043149711 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:38:34 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:38:34 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/897b501fdfbbd0715480feb1c2fa7482b25abf48f4b7e26b79636317b70b4680/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:38:34 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/897b501fdfbbd0715480feb1c2fa7482b25abf48f4b7e26b79636317b70b4680/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:38:34 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/897b501fdfbbd0715480feb1c2fa7482b25abf48f4b7e26b79636317b70b4680/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:38:34 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/897b501fdfbbd0715480feb1c2fa7482b25abf48f4b7e26b79636317b70b4680/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:38:34 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/897b501fdfbbd0715480feb1c2fa7482b25abf48f4b7e26b79636317b70b4680/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:38:34 np0005486808 podman[218077]: 2025-10-14 08:38:34.570537109 +0000 UTC m=+0.207098510 container init 9abf1dd2e4de241ea5e2e3de42f8bbb3d2fec4e151325e17fa073f20c43e56a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_fermat, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 14 04:38:34 np0005486808 podman[218077]: 2025-10-14 08:38:34.57961593 +0000 UTC m=+0.216177341 container start 9abf1dd2e4de241ea5e2e3de42f8bbb3d2fec4e151325e17fa073f20c43e56a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_fermat, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 14 04:38:34 np0005486808 podman[218077]: 2025-10-14 08:38:34.595866605 +0000 UTC m=+0.232428046 container attach 9abf1dd2e4de241ea5e2e3de42f8bbb3d2fec4e151325e17fa073f20c43e56a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 14 04:38:34 np0005486808 python3.9[218120]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:38:35 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:38:35 np0005486808 podman[218262]: 2025-10-14 08:38:35.512831587 +0000 UTC m=+0.102110356 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 04:38:35 np0005486808 affectionate_fermat[218123]: --> passed data devices: 0 physical, 3 LVM
Oct 14 04:38:35 np0005486808 affectionate_fermat[218123]: --> relative data size: 1.0
Oct 14 04:38:35 np0005486808 affectionate_fermat[218123]: --> All data devices are unavailable
Oct 14 04:38:35 np0005486808 systemd[1]: libpod-9abf1dd2e4de241ea5e2e3de42f8bbb3d2fec4e151325e17fa073f20c43e56a2.scope: Deactivated successfully.
Oct 14 04:38:35 np0005486808 systemd[1]: libpod-9abf1dd2e4de241ea5e2e3de42f8bbb3d2fec4e151325e17fa073f20c43e56a2.scope: Consumed 1.019s CPU time.
Oct 14 04:38:35 np0005486808 podman[218077]: 2025-10-14 08:38:35.673347642 +0000 UTC m=+1.309909043 container died 9abf1dd2e4de241ea5e2e3de42f8bbb3d2fec4e151325e17fa073f20c43e56a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_fermat, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:38:35 np0005486808 systemd[1]: var-lib-containers-storage-overlay-897b501fdfbbd0715480feb1c2fa7482b25abf48f4b7e26b79636317b70b4680-merged.mount: Deactivated successfully.
Oct 14 04:38:35 np0005486808 python3[218312]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 14 04:38:35 np0005486808 podman[218077]: 2025-10-14 08:38:35.777432224 +0000 UTC m=+1.413993655 container remove 9abf1dd2e4de241ea5e2e3de42f8bbb3d2fec4e151325e17fa073f20c43e56a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_fermat, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 14 04:38:35 np0005486808 systemd[1]: libpod-conmon-9abf1dd2e4de241ea5e2e3de42f8bbb3d2fec4e151325e17fa073f20c43e56a2.scope: Deactivated successfully.
Oct 14 04:38:36 np0005486808 podman[218636]: 2025-10-14 08:38:36.536741749 +0000 UTC m=+0.071461320 container create 575499cc6c79abb589cd48d397f2b6aadedde1269d5fd9401c36e44e8c1b9ef3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:38:36 np0005486808 python3.9[218621]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:38:36 np0005486808 systemd[1]: Started libpod-conmon-575499cc6c79abb589cd48d397f2b6aadedde1269d5fd9401c36e44e8c1b9ef3.scope.
Oct 14 04:38:36 np0005486808 podman[218636]: 2025-10-14 08:38:36.508093242 +0000 UTC m=+0.042812823 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:38:36 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:38:36 np0005486808 podman[218636]: 2025-10-14 08:38:36.649182715 +0000 UTC m=+0.183902276 container init 575499cc6c79abb589cd48d397f2b6aadedde1269d5fd9401c36e44e8c1b9ef3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 14 04:38:36 np0005486808 podman[218636]: 2025-10-14 08:38:36.656169635 +0000 UTC m=+0.190889166 container start 575499cc6c79abb589cd48d397f2b6aadedde1269d5fd9401c36e44e8c1b9ef3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_rubin, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:38:36 np0005486808 podman[218636]: 2025-10-14 08:38:36.660874089 +0000 UTC m=+0.195593640 container attach 575499cc6c79abb589cd48d397f2b6aadedde1269d5fd9401c36e44e8c1b9ef3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_rubin, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Oct 14 04:38:36 np0005486808 peaceful_rubin[218653]: 167 167
Oct 14 04:38:36 np0005486808 systemd[1]: libpod-575499cc6c79abb589cd48d397f2b6aadedde1269d5fd9401c36e44e8c1b9ef3.scope: Deactivated successfully.
Oct 14 04:38:36 np0005486808 podman[218636]: 2025-10-14 08:38:36.661947406 +0000 UTC m=+0.196666927 container died 575499cc6c79abb589cd48d397f2b6aadedde1269d5fd9401c36e44e8c1b9ef3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 14 04:38:36 np0005486808 systemd[1]: var-lib-containers-storage-overlay-33b5e6bac5f3a0387f11f3a83d4416cf0c5614e8ffcec5c24027e54d31cb5024-merged.mount: Deactivated successfully.
Oct 14 04:38:36 np0005486808 podman[218636]: 2025-10-14 08:38:36.703800094 +0000 UTC m=+0.238519625 container remove 575499cc6c79abb589cd48d397f2b6aadedde1269d5fd9401c36e44e8c1b9ef3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:38:36 np0005486808 systemd[1]: libpod-conmon-575499cc6c79abb589cd48d397f2b6aadedde1269d5fd9401c36e44e8c1b9ef3.scope: Deactivated successfully.
Oct 14 04:38:36 np0005486808 podman[218729]: 2025-10-14 08:38:36.922458784 +0000 UTC m=+0.064291825 container create 7e0f1a42051e0563e2b8a2973fb77e774ebb343ed98a7493dce6c0bce1552b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:38:36 np0005486808 systemd[1]: Started libpod-conmon-7e0f1a42051e0563e2b8a2973fb77e774ebb343ed98a7493dce6c0bce1552b37.scope.
Oct 14 04:38:36 np0005486808 podman[218729]: 2025-10-14 08:38:36.896736728 +0000 UTC m=+0.038569789 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:38:37 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:38:37 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4a7f8a8aa85c3039d464a805a9df027638bc1419c825982fbb8647a5a0b5039/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:38:37 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:38:37 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4a7f8a8aa85c3039d464a805a9df027638bc1419c825982fbb8647a5a0b5039/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:38:37 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4a7f8a8aa85c3039d464a805a9df027638bc1419c825982fbb8647a5a0b5039/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:38:37 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4a7f8a8aa85c3039d464a805a9df027638bc1419c825982fbb8647a5a0b5039/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:38:37 np0005486808 podman[218729]: 2025-10-14 08:38:37.024300592 +0000 UTC m=+0.166133643 container init 7e0f1a42051e0563e2b8a2973fb77e774ebb343ed98a7493dce6c0bce1552b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 14 04:38:37 np0005486808 podman[218729]: 2025-10-14 08:38:37.036618642 +0000 UTC m=+0.178451673 container start 7e0f1a42051e0563e2b8a2973fb77e774ebb343ed98a7493dce6c0bce1552b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_colden, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 14 04:38:37 np0005486808 podman[218729]: 2025-10-14 08:38:37.040275241 +0000 UTC m=+0.182108302 container attach 7e0f1a42051e0563e2b8a2973fb77e774ebb343ed98a7493dce6c0bce1552b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_colden, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 14 04:38:37 np0005486808 python3.9[218766]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:38:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:38:37 np0005486808 laughing_colden[218771]: {
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:    "0": [
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:        {
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:            "devices": [
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:                "/dev/loop3"
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:            ],
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:            "lv_name": "ceph_lv0",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:            "lv_size": "21470642176",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:            "name": "ceph_lv0",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:            "tags": {
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:                "ceph.cluster_name": "ceph",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:                "ceph.crush_device_class": "",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:                "ceph.encrypted": "0",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:                "ceph.osd_id": "0",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:                "ceph.type": "block",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:                "ceph.vdo": "0"
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:            },
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:            "type": "block",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:            "vg_name": "ceph_vg0"
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:        }
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:    ],
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:    "1": [
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:        {
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:            "devices": [
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:                "/dev/loop4"
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:            ],
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:            "lv_name": "ceph_lv1",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:            "lv_size": "21470642176",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:            "name": "ceph_lv1",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:            "tags": {
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:                "ceph.cluster_name": "ceph",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:                "ceph.crush_device_class": "",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:                "ceph.encrypted": "0",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:                "ceph.osd_id": "1",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:                "ceph.type": "block",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:                "ceph.vdo": "0"
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:            },
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:            "type": "block",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:            "vg_name": "ceph_vg1"
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:        }
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:    ],
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:    "2": [
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:        {
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:            "devices": [
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:                "/dev/loop5"
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:            ],
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:            "lv_name": "ceph_lv2",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:            "lv_size": "21470642176",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:            "name": "ceph_lv2",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:            "tags": {
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:                "ceph.cluster_name": "ceph",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:                "ceph.crush_device_class": "",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:                "ceph.encrypted": "0",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:                "ceph.osd_id": "2",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:                "ceph.type": "block",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:                "ceph.vdo": "0"
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:            },
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:            "type": "block",
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:            "vg_name": "ceph_vg2"
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:        }
Oct 14 04:38:37 np0005486808 laughing_colden[218771]:    ]
Oct 14 04:38:37 np0005486808 laughing_colden[218771]: }
Oct 14 04:38:37 np0005486808 systemd[1]: libpod-7e0f1a42051e0563e2b8a2973fb77e774ebb343ed98a7493dce6c0bce1552b37.scope: Deactivated successfully.
Oct 14 04:38:37 np0005486808 podman[218932]: 2025-10-14 08:38:37.856327757 +0000 UTC m=+0.041175053 container died 7e0f1a42051e0563e2b8a2973fb77e774ebb343ed98a7493dce6c0bce1552b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 04:38:37 np0005486808 systemd[1]: var-lib-containers-storage-overlay-a4a7f8a8aa85c3039d464a805a9df027638bc1419c825982fbb8647a5a0b5039-merged.mount: Deactivated successfully.
Oct 14 04:38:37 np0005486808 python3.9[218929]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:38:37 np0005486808 podman[218932]: 2025-10-14 08:38:37.950396446 +0000 UTC m=+0.135243642 container remove 7e0f1a42051e0563e2b8a2973fb77e774ebb343ed98a7493dce6c0bce1552b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 14 04:38:37 np0005486808 systemd[1]: libpod-conmon-7e0f1a42051e0563e2b8a2973fb77e774ebb343ed98a7493dce6c0bce1552b37.scope: Deactivated successfully.
Oct 14 04:38:38 np0005486808 python3.9[219097]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:38:38 np0005486808 podman[219211]: 2025-10-14 08:38:38.786191542 +0000 UTC m=+0.047614950 container create 5a0bda5fbf997a007c37219c90cea230848ef6342cdbece1fcf2e554d85e5301 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mestorf, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:38:38 np0005486808 systemd[1]: Started libpod-conmon-5a0bda5fbf997a007c37219c90cea230848ef6342cdbece1fcf2e554d85e5301.scope.
Oct 14 04:38:38 np0005486808 podman[219211]: 2025-10-14 08:38:38.766915903 +0000 UTC m=+0.028339401 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:38:38 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:38:38 np0005486808 podman[219211]: 2025-10-14 08:38:38.8909076 +0000 UTC m=+0.152331118 container init 5a0bda5fbf997a007c37219c90cea230848ef6342cdbece1fcf2e554d85e5301 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mestorf, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:38:38 np0005486808 podman[219211]: 2025-10-14 08:38:38.90079992 +0000 UTC m=+0.162223328 container start 5a0bda5fbf997a007c37219c90cea230848ef6342cdbece1fcf2e554d85e5301 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:38:38 np0005486808 podman[219211]: 2025-10-14 08:38:38.904420328 +0000 UTC m=+0.165843826 container attach 5a0bda5fbf997a007c37219c90cea230848ef6342cdbece1fcf2e554d85e5301 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:38:38 np0005486808 dreamy_mestorf[219257]: 167 167
Oct 14 04:38:38 np0005486808 systemd[1]: libpod-5a0bda5fbf997a007c37219c90cea230848ef6342cdbece1fcf2e554d85e5301.scope: Deactivated successfully.
Oct 14 04:38:38 np0005486808 podman[219211]: 2025-10-14 08:38:38.909964993 +0000 UTC m=+0.171388441 container died 5a0bda5fbf997a007c37219c90cea230848ef6342cdbece1fcf2e554d85e5301 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:38:38 np0005486808 systemd[1]: var-lib-containers-storage-overlay-b7da1bc055425595b491f459fb2e881bb4c9b764d4040738cac74af35ec9467f-merged.mount: Deactivated successfully.
Oct 14 04:38:38 np0005486808 podman[219211]: 2025-10-14 08:38:38.978535332 +0000 UTC m=+0.239958750 container remove 5a0bda5fbf997a007c37219c90cea230848ef6342cdbece1fcf2e554d85e5301 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 14 04:38:38 np0005486808 systemd[1]: libpod-conmon-5a0bda5fbf997a007c37219c90cea230848ef6342cdbece1fcf2e554d85e5301.scope: Deactivated successfully.
Oct 14 04:38:39 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:38:39 np0005486808 podman[219278]: 2025-10-14 08:38:39.042717303 +0000 UTC m=+0.095328940 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 14 04:38:39 np0005486808 podman[219372]: 2025-10-14 08:38:39.180349492 +0000 UTC m=+0.053236036 container create 0ab6fdbb5d05229828b44c37bf730f870b44606b606adf13ade7c58ecc4dee30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_visvesvaraya, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:38:39 np0005486808 systemd[1]: Started libpod-conmon-0ab6fdbb5d05229828b44c37bf730f870b44606b606adf13ade7c58ecc4dee30.scope.
Oct 14 04:38:39 np0005486808 podman[219372]: 2025-10-14 08:38:39.158173523 +0000 UTC m=+0.031060067 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:38:39 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:38:39 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a987c55203d3d06069b993bbd844e6c844f172feeae676b34408d84af3ce20e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:38:39 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a987c55203d3d06069b993bbd844e6c844f172feeae676b34408d84af3ce20e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:38:39 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a987c55203d3d06069b993bbd844e6c844f172feeae676b34408d84af3ce20e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:38:39 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a987c55203d3d06069b993bbd844e6c844f172feeae676b34408d84af3ce20e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:38:39 np0005486808 podman[219372]: 2025-10-14 08:38:39.28179304 +0000 UTC m=+0.154679574 container init 0ab6fdbb5d05229828b44c37bf730f870b44606b606adf13ade7c58ecc4dee30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 14 04:38:39 np0005486808 podman[219372]: 2025-10-14 08:38:39.294965261 +0000 UTC m=+0.167851795 container start 0ab6fdbb5d05229828b44c37bf730f870b44606b606adf13ade7c58ecc4dee30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_visvesvaraya, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 14 04:38:39 np0005486808 podman[219372]: 2025-10-14 08:38:39.299335467 +0000 UTC m=+0.172221991 container attach 0ab6fdbb5d05229828b44c37bf730f870b44606b606adf13ade7c58ecc4dee30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 14 04:38:39 np0005486808 python3.9[219388]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:38:39 np0005486808 python3.9[219474]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:38:40 np0005486808 confident_visvesvaraya[219392]: {
Oct 14 04:38:40 np0005486808 confident_visvesvaraya[219392]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 04:38:40 np0005486808 confident_visvesvaraya[219392]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:38:40 np0005486808 confident_visvesvaraya[219392]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 04:38:40 np0005486808 confident_visvesvaraya[219392]:        "osd_id": 2,
Oct 14 04:38:40 np0005486808 confident_visvesvaraya[219392]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:38:40 np0005486808 confident_visvesvaraya[219392]:        "type": "bluestore"
Oct 14 04:38:40 np0005486808 confident_visvesvaraya[219392]:    },
Oct 14 04:38:40 np0005486808 confident_visvesvaraya[219392]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 04:38:40 np0005486808 confident_visvesvaraya[219392]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:38:40 np0005486808 confident_visvesvaraya[219392]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 04:38:40 np0005486808 confident_visvesvaraya[219392]:        "osd_id": 1,
Oct 14 04:38:40 np0005486808 confident_visvesvaraya[219392]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:38:40 np0005486808 confident_visvesvaraya[219392]:        "type": "bluestore"
Oct 14 04:38:40 np0005486808 confident_visvesvaraya[219392]:    },
Oct 14 04:38:40 np0005486808 confident_visvesvaraya[219392]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 04:38:40 np0005486808 confident_visvesvaraya[219392]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:38:40 np0005486808 confident_visvesvaraya[219392]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 04:38:40 np0005486808 confident_visvesvaraya[219392]:        "osd_id": 0,
Oct 14 04:38:40 np0005486808 confident_visvesvaraya[219392]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:38:40 np0005486808 confident_visvesvaraya[219392]:        "type": "bluestore"
Oct 14 04:38:40 np0005486808 confident_visvesvaraya[219392]:    }
Oct 14 04:38:40 np0005486808 confident_visvesvaraya[219392]: }
Oct 14 04:38:40 np0005486808 systemd[1]: libpod-0ab6fdbb5d05229828b44c37bf730f870b44606b606adf13ade7c58ecc4dee30.scope: Deactivated successfully.
Oct 14 04:38:40 np0005486808 systemd[1]: libpod-0ab6fdbb5d05229828b44c37bf730f870b44606b606adf13ade7c58ecc4dee30.scope: Consumed 1.142s CPU time.
Oct 14 04:38:40 np0005486808 podman[219372]: 2025-10-14 08:38:40.435810358 +0000 UTC m=+1.308696892 container died 0ab6fdbb5d05229828b44c37bf730f870b44606b606adf13ade7c58ecc4dee30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_visvesvaraya, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 14 04:38:40 np0005486808 systemd[1]: var-lib-containers-storage-overlay-4a987c55203d3d06069b993bbd844e6c844f172feeae676b34408d84af3ce20e-merged.mount: Deactivated successfully.
Oct 14 04:38:40 np0005486808 podman[219372]: 2025-10-14 08:38:40.505883953 +0000 UTC m=+1.378770467 container remove 0ab6fdbb5d05229828b44c37bf730f870b44606b606adf13ade7c58ecc4dee30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_visvesvaraya, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 14 04:38:40 np0005486808 systemd[1]: libpod-conmon-0ab6fdbb5d05229828b44c37bf730f870b44606b606adf13ade7c58ecc4dee30.scope: Deactivated successfully.
Oct 14 04:38:40 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:38:40 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:38:40 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:38:40 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:38:40 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 63493eaf-2589-4147-8a1d-6a5f98d394bf does not exist
Oct 14 04:38:40 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 46a12baa-56dc-4c19-be80-8e257c72fb93 does not exist
Oct 14 04:38:40 np0005486808 python3.9[219691]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:38:41 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:38:41 np0005486808 python3.9[219796]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:38:41 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:38:41 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:38:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:38:42 np0005486808 python3.9[219948]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:38:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 04:38:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:38:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 04:38:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:38:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:38:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:38:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:38:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:38:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:38:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:38:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:38:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:38:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 04:38:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:38:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:38:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:38:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 04:38:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:38:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 04:38:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:38:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:38:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:38:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 04:38:42 np0005486808 python3.9[220073]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760431121.614396-1313-85387682782387/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:38:43 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:38:43 np0005486808 python3.9[220225]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:38:44 np0005486808 python3.9[220377]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:38:45 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:38:45 np0005486808 python3.9[220532]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:38:46 np0005486808 python3.9[220684]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:38:47 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:38:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:38:47 np0005486808 python3.9[220837]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:38:48 np0005486808 python3.9[220991]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:38:49 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:38:49 np0005486808 python3.9[221146]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:38:50 np0005486808 python3.9[221298]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:38:50 np0005486808 python3.9[221421]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760431129.5041332-1385-123049197465407/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:38:51 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:38:51 np0005486808 python3.9[221573]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:38:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:38:52 np0005486808 python3.9[221696]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760431131.0007918-1400-6086335327281/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:38:52 np0005486808 python3.9[221848]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:38:53 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:38:53 np0005486808 python3.9[221971]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760431132.462772-1415-194233696129589/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:38:54 np0005486808 python3.9[222123]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:38:54 np0005486808 systemd[1]: Reloading.
Oct 14 04:38:54 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:38:54 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:38:55 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:38:55 np0005486808 systemd[1]: Reached target edpm_libvirt.target.
Oct 14 04:38:55 np0005486808 python3.9[222314]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 14 04:38:55 np0005486808 systemd[1]: Reloading.
Oct 14 04:38:56 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:38:56 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:38:56 np0005486808 systemd[1]: Reloading.
Oct 14 04:38:56 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:38:56 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:38:57 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:38:57 np0005486808 systemd[1]: session-50.scope: Deactivated successfully.
Oct 14 04:38:57 np0005486808 systemd[1]: session-50.scope: Consumed 3min 51.289s CPU time.
Oct 14 04:38:57 np0005486808 systemd-logind[799]: Session 50 logged out. Waiting for processes to exit.
Oct 14 04:38:57 np0005486808 systemd-logind[799]: Removed session 50.
Oct 14 04:38:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:38:59 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:39:01 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:39:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:39:02 np0005486808 systemd-logind[799]: New session 51 of user zuul.
Oct 14 04:39:02 np0005486808 systemd[1]: Started Session 51 of User zuul.
Oct 14 04:39:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:39:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:39:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:39:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:39:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:39:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:39:03 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:39:03 np0005486808 python3.9[222565]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 04:39:05 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:39:05 np0005486808 python3.9[222721]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:39:05 np0005486808 podman[222874]: 2025-10-14 08:39:05.671228459 +0000 UTC m=+0.087538381 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:39:05 np0005486808 python3.9[222873]: ansible-ansible.builtin.file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:39:06 np0005486808 python3.9[223052]: ansible-ansible.builtin.file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:39:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:39:06.994 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:39:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:39:06.995 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:39:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:39:06.995 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:39:07 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:39:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:39:07 np0005486808 python3.9[223204]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 14 04:39:08 np0005486808 python3.9[223356]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data/ansible-generated/iscsid setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:39:09 np0005486808 python3.9[223508]: ansible-ansible.builtin.stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:39:09 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:39:09 np0005486808 podman[223587]: 2025-10-14 08:39:09.673549611 +0000 UTC m=+0.082397495 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 14 04:39:10 np0005486808 python3.9[223683]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsid.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:39:10 np0005486808 systemd[1]: Reloading.
Oct 14 04:39:10 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:39:10 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:39:11 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:39:11 np0005486808 python3.9[223873]: ansible-ansible.builtin.service_facts Invoked
Oct 14 04:39:11 np0005486808 network[223890]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 14 04:39:11 np0005486808 network[223891]: 'network-scripts' will be removed from distribution in near future.
Oct 14 04:39:11 np0005486808 network[223892]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 14 04:39:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:39:13 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:39:15 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:39:17 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:39:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:39:18 np0005486808 python3.9[224166]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsi-starter.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:39:19 np0005486808 systemd[1]: Reloading.
Oct 14 04:39:19 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:39:19 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:39:19 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:39:20 np0005486808 python3.9[224354]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:39:21 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:39:21 np0005486808 python3.9[224506]: ansible-containers.podman.podman_container Invoked with command=/usr/sbin/iscsi-iname detach=False image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe name=iscsid_config rm=True tty=True executable=podman state=started debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 14 04:39:22 np0005486808 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 04:39:22 np0005486808 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 04:39:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:39:22 np0005486808 podman[224519]: 2025-10-14 08:39:22.849654471 +0000 UTC m=+1.363112009 image pull 5773abc4300b61c01f3353a0b9239f9a404bb272790b280574e4c56f72edaa72 quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe
Oct 14 04:39:22 np0005486808 podman[224579]: 2025-10-14 08:39:22.97640784 +0000 UTC m=+0.036108030 container create 2ff8a643d0f53965b3343a4a8abbcaa0848ee6542e90b7f5d8a672772ed9c6f7 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid_config, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 14 04:39:22 np0005486808 NetworkManager[44885]: <info>  [1760431162.9971] manager: (podman0): new Bridge device (/org/freedesktop/NetworkManager/Devices/21)
Oct 14 04:39:23 np0005486808 kernel: podman0: port 1(veth0) entered blocking state
Oct 14 04:39:23 np0005486808 kernel: podman0: port 1(veth0) entered disabled state
Oct 14 04:39:23 np0005486808 kernel: veth0: entered allmulticast mode
Oct 14 04:39:23 np0005486808 kernel: veth0: entered promiscuous mode
Oct 14 04:39:23 np0005486808 NetworkManager[44885]: <info>  [1760431163.0127] device (veth0): carrier: link connected
Oct 14 04:39:23 np0005486808 kernel: podman0: port 1(veth0) entered blocking state
Oct 14 04:39:23 np0005486808 kernel: podman0: port 1(veth0) entered forwarding state
Oct 14 04:39:23 np0005486808 NetworkManager[44885]: <info>  [1760431163.0131] manager: (veth0): new Veth device (/org/freedesktop/NetworkManager/Devices/22)
Oct 14 04:39:23 np0005486808 NetworkManager[44885]: <info>  [1760431163.0135] device (podman0): carrier: link connected
Oct 14 04:39:23 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:39:23 np0005486808 systemd-udevd[224608]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 04:39:23 np0005486808 systemd-udevd[224610]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 04:39:23 np0005486808 NetworkManager[44885]: <info>  [1760431163.0458] device (podman0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 04:39:23 np0005486808 NetworkManager[44885]: <info>  [1760431163.0465] device (podman0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 14 04:39:23 np0005486808 NetworkManager[44885]: <info>  [1760431163.0476] device (podman0): Activation: starting connection 'podman0' (157b745d-2b1f-4c28-9dfe-d7d89cc6b5fd)
Oct 14 04:39:23 np0005486808 NetworkManager[44885]: <info>  [1760431163.0477] device (podman0): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 14 04:39:23 np0005486808 NetworkManager[44885]: <info>  [1760431163.0479] device (podman0): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 14 04:39:23 np0005486808 NetworkManager[44885]: <info>  [1760431163.0481] device (podman0): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 14 04:39:23 np0005486808 NetworkManager[44885]: <info>  [1760431163.0483] device (podman0): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 14 04:39:23 np0005486808 podman[224579]: 2025-10-14 08:39:22.957263999 +0000 UTC m=+0.016964179 image pull 5773abc4300b61c01f3353a0b9239f9a404bb272790b280574e4c56f72edaa72 quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe
Oct 14 04:39:23 np0005486808 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 14 04:39:23 np0005486808 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 14 04:39:23 np0005486808 NetworkManager[44885]: <info>  [1760431163.0815] device (podman0): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 14 04:39:23 np0005486808 NetworkManager[44885]: <info>  [1760431163.0818] device (podman0): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 14 04:39:23 np0005486808 NetworkManager[44885]: <info>  [1760431163.0827] device (podman0): Activation: successful, device activated.
Oct 14 04:39:23 np0005486808 systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive.
Oct 14 04:39:23 np0005486808 systemd[1]: Started libpod-conmon-2ff8a643d0f53965b3343a4a8abbcaa0848ee6542e90b7f5d8a672772ed9c6f7.scope.
Oct 14 04:39:23 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:39:23 np0005486808 podman[224579]: 2025-10-14 08:39:23.342795728 +0000 UTC m=+0.402495968 container init 2ff8a643d0f53965b3343a4a8abbcaa0848ee6542e90b7f5d8a672772ed9c6f7 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid_config, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3)
Oct 14 04:39:23 np0005486808 podman[224579]: 2025-10-14 08:39:23.354152967 +0000 UTC m=+0.413853127 container start 2ff8a643d0f53965b3343a4a8abbcaa0848ee6542e90b7f5d8a672772ed9c6f7 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid_config, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009)
Oct 14 04:39:23 np0005486808 podman[224579]: 2025-10-14 08:39:23.357628643 +0000 UTC m=+0.417328883 container attach 2ff8a643d0f53965b3343a4a8abbcaa0848ee6542e90b7f5d8a672772ed9c6f7 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid_config, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009)
Oct 14 04:39:23 np0005486808 iscsid_config[224736]: iqn.1994-05.com.redhat:d3c2572e8ecf#015
Oct 14 04:39:23 np0005486808 systemd[1]: libpod-2ff8a643d0f53965b3343a4a8abbcaa0848ee6542e90b7f5d8a672772ed9c6f7.scope: Deactivated successfully.
Oct 14 04:39:23 np0005486808 conmon[224736]: conmon 2ff8a643d0f53965b334 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2ff8a643d0f53965b3343a4a8abbcaa0848ee6542e90b7f5d8a672772ed9c6f7.scope/container/memory.events
Oct 14 04:39:23 np0005486808 podman[224579]: 2025-10-14 08:39:23.35953636 +0000 UTC m=+0.419236520 container died 2ff8a643d0f53965b3343a4a8abbcaa0848ee6542e90b7f5d8a672772ed9c6f7 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid_config, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 04:39:23 np0005486808 kernel: podman0: port 1(veth0) entered disabled state
Oct 14 04:39:23 np0005486808 kernel: veth0 (unregistering): left allmulticast mode
Oct 14 04:39:23 np0005486808 kernel: veth0 (unregistering): left promiscuous mode
Oct 14 04:39:23 np0005486808 kernel: podman0: port 1(veth0) entered disabled state
Oct 14 04:39:23 np0005486808 NetworkManager[44885]: <info>  [1760431163.4488] device (podman0): state change: activated -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 04:39:23 np0005486808 systemd[1]: run-netns-netns\x2dd414e972\x2d7371\x2d2965\x2d3fba\x2dab86af9dc87a.mount: Deactivated successfully.
Oct 14 04:39:23 np0005486808 systemd[1]: var-lib-containers-storage-overlay-ec411797de23420ed8efcd13068b66e835ed2257ca8a778905626e1ca6915b53-merged.mount: Deactivated successfully.
Oct 14 04:39:23 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2ff8a643d0f53965b3343a4a8abbcaa0848ee6542e90b7f5d8a672772ed9c6f7-userdata-shm.mount: Deactivated successfully.
Oct 14 04:39:23 np0005486808 podman[224579]: 2025-10-14 08:39:23.824563825 +0000 UTC m=+0.884264015 container remove 2ff8a643d0f53965b3343a4a8abbcaa0848ee6542e90b7f5d8a672772ed9c6f7 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid_config, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 14 04:39:23 np0005486808 python3.9[224506]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman run --name iscsid_config --detach=False --rm --tty=True quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe /usr/sbin/iscsi-iname
Oct 14 04:39:23 np0005486808 systemd[1]: libpod-conmon-2ff8a643d0f53965b3343a4a8abbcaa0848ee6542e90b7f5d8a672772ed9c6f7.scope: Deactivated successfully.
Oct 14 04:39:23 np0005486808 python3.9[224506]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: Error generating systemd: #012DEPRECATED command:#012It is recommended to use Quadlets for running containers and pods under systemd.#012#012Please refer to podman-systemd.unit(5) for details.#012Error: iscsid_config does not refer to a container or pod: no pod with name or ID iscsid_config found: no such pod: no container with name or ID "iscsid_config" found: no such container
Oct 14 04:39:24 np0005486808 python3.9[224976]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:39:25 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:39:25 np0005486808 python3.9[225099]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760431164.2442691-119-188357974285305/.source.iscsi _original_basename=.ks_ggmrj follow=False checksum=1c537088a4f5067787932ba3d331d54ba41fa6e1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:39:26 np0005486808 python3.9[225251]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:39:27 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:39:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:39:27 np0005486808 python3.9[225401]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/iscsid.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:39:28 np0005486808 python3.9[225555]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:39:29 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:39:29 np0005486808 python3.9[225707]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:39:30 np0005486808 python3.9[225859]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:39:31 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:39:31 np0005486808 python3.9[225937]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:39:31 np0005486808 python3.9[226089]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:39:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:39:32 np0005486808 python3.9[226167]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:39:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_08:39:32
Oct 14 04:39:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 04:39:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 04:39:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['backups', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'vms', 'images', 'default.rgw.control', 'default.rgw.meta']
Oct 14 04:39:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 04:39:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:39:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:39:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:39:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:39:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:39:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:39:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 04:39:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:39:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 04:39:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:39:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:39:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:39:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:39:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:39:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:39:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:39:33 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:39:33 np0005486808 python3.9[226319]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:39:33 np0005486808 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 14 04:39:33 np0005486808 python3.9[226471]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:39:34 np0005486808 python3.9[226549]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:39:35 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:39:35 np0005486808 python3.9[226701]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:39:35 np0005486808 python3.9[226779]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:39:36 np0005486808 podman[226903]: 2025-10-14 08:39:36.339091076 +0000 UTC m=+0.122847324 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 04:39:36 np0005486808 python3.9[226948]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:39:36 np0005486808 systemd[1]: Reloading.
Oct 14 04:39:36 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:39:36 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:39:37 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:39:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:39:37 np0005486808 python3.9[227145]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:39:38 np0005486808 python3.9[227223]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:39:39 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:39:39 np0005486808 python3.9[227375]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:39:39 np0005486808 python3.9[227453]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:39:40 np0005486808 podman[227577]: 2025-10-14 08:39:40.328573831 +0000 UTC m=+0.078416361 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true)
Oct 14 04:39:40 np0005486808 python3.9[227623]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:39:40 np0005486808 systemd[1]: Reloading.
Oct 14 04:39:40 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:39:40 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:39:41 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:39:41 np0005486808 systemd[1]: Starting Create netns directory...
Oct 14 04:39:41 np0005486808 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 14 04:39:41 np0005486808 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 14 04:39:41 np0005486808 systemd[1]: Finished Create netns directory.
Oct 14 04:39:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:39:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:39:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 04:39:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:39:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 04:39:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:39:41 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev b79ca8b1-11c5-4a47-a844-231f0087c3cb does not exist
Oct 14 04:39:41 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev e9377fc3-2255-4056-9117-5f10668ca796 does not exist
Oct 14 04:39:41 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev f43c99dd-b00d-48b1-87e5-8dae3ded2365 does not exist
Oct 14 04:39:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 04:39:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 04:39:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 04:39:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:39:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:39:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:39:42 np0005486808 python3.9[227980]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:39:42 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:39:42 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:39:42 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:39:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:39:42 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Oct 14 04:39:42 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:39:42.207643) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 04:39:42 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Oct 14 04:39:42 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760431182207686, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1753, "num_deletes": 250, "total_data_size": 2948873, "memory_usage": 2990664, "flush_reason": "Manual Compaction"}
Oct 14 04:39:42 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Oct 14 04:39:42 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760431182218649, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1665912, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11659, "largest_seqno": 13411, "table_properties": {"data_size": 1660160, "index_size": 2891, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14416, "raw_average_key_size": 20, "raw_value_size": 1647455, "raw_average_value_size": 2300, "num_data_blocks": 134, "num_entries": 716, "num_filter_entries": 716, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430982, "oldest_key_time": 1760430982, "file_creation_time": 1760431182, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Oct 14 04:39:42 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 11076 microseconds, and 5795 cpu microseconds.
Oct 14 04:39:42 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 04:39:42 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:39:42.218720) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1665912 bytes OK
Oct 14 04:39:42 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:39:42.218738) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Oct 14 04:39:42 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:39:42.220092) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Oct 14 04:39:42 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:39:42.220109) EVENT_LOG_v1 {"time_micros": 1760431182220103, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 04:39:42 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:39:42.220126) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 04:39:42 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 2941416, prev total WAL file size 2941416, number of live WAL files 2.
Oct 14 04:39:42 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 04:39:42 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:39:42.220909) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353031' seq:0, type:0; will stop at (end)
Oct 14 04:39:42 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 04:39:42 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1626KB)], [29(7709KB)]
Oct 14 04:39:42 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760431182220943, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9560066, "oldest_snapshot_seqno": -1}
Oct 14 04:39:42 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 3955 keys, 7496172 bytes, temperature: kUnknown
Oct 14 04:39:42 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760431182257615, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7496172, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7468005, "index_size": 17177, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9925, "raw_key_size": 94125, "raw_average_key_size": 23, "raw_value_size": 7394917, "raw_average_value_size": 1869, "num_data_blocks": 750, "num_entries": 3955, "num_filter_entries": 3955, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760431182, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Oct 14 04:39:42 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 04:39:42 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:39:42.257803) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7496172 bytes
Oct 14 04:39:42 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:39:42.259041) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 260.3 rd, 204.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.5 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(10.2) write-amplify(4.5) OK, records in: 4373, records dropped: 418 output_compression: NoCompression
Oct 14 04:39:42 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:39:42.259061) EVENT_LOG_v1 {"time_micros": 1760431182259051, "job": 12, "event": "compaction_finished", "compaction_time_micros": 36734, "compaction_time_cpu_micros": 16697, "output_level": 6, "num_output_files": 1, "total_output_size": 7496172, "num_input_records": 4373, "num_output_records": 3955, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 04:39:42 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 04:39:42 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760431182259484, "job": 12, "event": "table_file_deletion", "file_number": 31}
Oct 14 04:39:42 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 04:39:42 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760431182260888, "job": 12, "event": "table_file_deletion", "file_number": 29}
Oct 14 04:39:42 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:39:42.220843) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:39:42 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:39:42.260922) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:39:42 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:39:42.260926) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:39:42 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:39:42.260928) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:39:42 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:39:42.260930) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:39:42 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:39:42.260932) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:39:42 np0005486808 podman[228144]: 2025-10-14 08:39:42.348069324 +0000 UTC m=+0.047588162 container create 7428291fba29704a779651323f88965e312f910df58669e228ef1c752dc051d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 14 04:39:42 np0005486808 systemd[1]: Started libpod-conmon-7428291fba29704a779651323f88965e312f910df58669e228ef1c752dc051d7.scope.
Oct 14 04:39:42 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:39:42 np0005486808 podman[228144]: 2025-10-14 08:39:42.326405091 +0000 UTC m=+0.025923969 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:39:42 np0005486808 podman[228144]: 2025-10-14 08:39:42.43686742 +0000 UTC m=+0.136386248 container init 7428291fba29704a779651323f88965e312f910df58669e228ef1c752dc051d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_joliot, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:39:42 np0005486808 podman[228144]: 2025-10-14 08:39:42.445776579 +0000 UTC m=+0.145295387 container start 7428291fba29704a779651323f88965e312f910df58669e228ef1c752dc051d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_joliot, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:39:42 np0005486808 podman[228144]: 2025-10-14 08:39:42.449162022 +0000 UTC m=+0.148680830 container attach 7428291fba29704a779651323f88965e312f910df58669e228ef1c752dc051d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_joliot, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:39:42 np0005486808 cool_joliot[228196]: 167 167
Oct 14 04:39:42 np0005486808 systemd[1]: libpod-7428291fba29704a779651323f88965e312f910df58669e228ef1c752dc051d7.scope: Deactivated successfully.
Oct 14 04:39:42 np0005486808 podman[228144]: 2025-10-14 08:39:42.454819512 +0000 UTC m=+0.154338320 container died 7428291fba29704a779651323f88965e312f910df58669e228ef1c752dc051d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_joliot, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:39:42 np0005486808 systemd[1]: var-lib-containers-storage-overlay-30d8e17b9bca03461fc1aeeef0e3e80d58685628938a838e80972952351ca1bf-merged.mount: Deactivated successfully.
Oct 14 04:39:42 np0005486808 podman[228144]: 2025-10-14 08:39:42.509152819 +0000 UTC m=+0.208671657 container remove 7428291fba29704a779651323f88965e312f910df58669e228ef1c752dc051d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_joliot, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:39:42 np0005486808 systemd[1]: libpod-conmon-7428291fba29704a779651323f88965e312f910df58669e228ef1c752dc051d7.scope: Deactivated successfully.
Oct 14 04:39:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 04:39:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:39:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 04:39:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:39:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:39:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:39:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:39:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:39:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:39:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:39:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:39:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:39:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 04:39:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:39:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:39:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:39:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 04:39:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:39:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 04:39:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:39:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:39:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:39:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 04:39:42 np0005486808 podman[228282]: 2025-10-14 08:39:42.717135438 +0000 UTC m=+0.059384853 container create 0a6fcbdec01244cf6ebab5606fab7cd94c77b9800f6a34ae4c1700d6dabe29fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:39:42 np0005486808 systemd[1]: Started libpod-conmon-0a6fcbdec01244cf6ebab5606fab7cd94c77b9800f6a34ae4c1700d6dabe29fb.scope.
Oct 14 04:39:42 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:39:42 np0005486808 podman[228282]: 2025-10-14 08:39:42.700489798 +0000 UTC m=+0.042739193 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:39:42 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1e443b8896e592e65dbf288e461d7f85878ac6af249015047b9e8d9cb5cf99/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:39:42 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1e443b8896e592e65dbf288e461d7f85878ac6af249015047b9e8d9cb5cf99/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:39:42 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1e443b8896e592e65dbf288e461d7f85878ac6af249015047b9e8d9cb5cf99/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:39:42 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1e443b8896e592e65dbf288e461d7f85878ac6af249015047b9e8d9cb5cf99/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:39:42 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1e443b8896e592e65dbf288e461d7f85878ac6af249015047b9e8d9cb5cf99/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:39:42 np0005486808 podman[228282]: 2025-10-14 08:39:42.828246582 +0000 UTC m=+0.170496037 container init 0a6fcbdec01244cf6ebab5606fab7cd94c77b9800f6a34ae4c1700d6dabe29fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_keller, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:39:42 np0005486808 podman[228282]: 2025-10-14 08:39:42.835922981 +0000 UTC m=+0.178172376 container start 0a6fcbdec01244cf6ebab5606fab7cd94c77b9800f6a34ae4c1700d6dabe29fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_keller, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 14 04:39:42 np0005486808 podman[228282]: 2025-10-14 08:39:42.839485339 +0000 UTC m=+0.181734774 container attach 0a6fcbdec01244cf6ebab5606fab7cd94c77b9800f6a34ae4c1700d6dabe29fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 14 04:39:42 np0005486808 python3.9[228285]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/iscsid/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:39:43 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:39:43 np0005486808 python3.9[228428]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/iscsid/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760431182.2659204-273-222994769072471/.source _original_basename=healthcheck follow=False checksum=2e1237e7fe015c809b173c52e24cfb87132f4344 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:39:43 np0005486808 loving_keller[228301]: --> passed data devices: 0 physical, 3 LVM
Oct 14 04:39:43 np0005486808 loving_keller[228301]: --> relative data size: 1.0
Oct 14 04:39:43 np0005486808 loving_keller[228301]: --> All data devices are unavailable
Oct 14 04:39:43 np0005486808 systemd[1]: libpod-0a6fcbdec01244cf6ebab5606fab7cd94c77b9800f6a34ae4c1700d6dabe29fb.scope: Deactivated successfully.
Oct 14 04:39:43 np0005486808 systemd[1]: libpod-0a6fcbdec01244cf6ebab5606fab7cd94c77b9800f6a34ae4c1700d6dabe29fb.scope: Consumed 1.061s CPU time.
Oct 14 04:39:43 np0005486808 podman[228282]: 2025-10-14 08:39:43.95613118 +0000 UTC m=+1.298380605 container died 0a6fcbdec01244cf6ebab5606fab7cd94c77b9800f6a34ae4c1700d6dabe29fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_keller, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:39:44 np0005486808 systemd[1]: var-lib-containers-storage-overlay-ff1e443b8896e592e65dbf288e461d7f85878ac6af249015047b9e8d9cb5cf99-merged.mount: Deactivated successfully.
Oct 14 04:39:44 np0005486808 podman[228282]: 2025-10-14 08:39:44.034323365 +0000 UTC m=+1.376572760 container remove 0a6fcbdec01244cf6ebab5606fab7cd94c77b9800f6a34ae4c1700d6dabe29fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_keller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 14 04:39:44 np0005486808 systemd[1]: libpod-conmon-0a6fcbdec01244cf6ebab5606fab7cd94c77b9800f6a34ae4c1700d6dabe29fb.scope: Deactivated successfully.
Oct 14 04:39:44 np0005486808 python3.9[228714]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:39:44 np0005486808 podman[228783]: 2025-10-14 08:39:44.742682869 +0000 UTC m=+0.034710725 container create 6920282f4175e332901691c5e590bb9216791fd867999c3eca8ddfcf34195207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_tharp, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 14 04:39:44 np0005486808 systemd[1]: Started libpod-conmon-6920282f4175e332901691c5e590bb9216791fd867999c3eca8ddfcf34195207.scope.
Oct 14 04:39:44 np0005486808 podman[228783]: 2025-10-14 08:39:44.727227199 +0000 UTC m=+0.019255075 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:39:44 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:39:44 np0005486808 podman[228783]: 2025-10-14 08:39:44.843040849 +0000 UTC m=+0.135068725 container init 6920282f4175e332901691c5e590bb9216791fd867999c3eca8ddfcf34195207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_tharp, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 04:39:44 np0005486808 podman[228783]: 2025-10-14 08:39:44.849810935 +0000 UTC m=+0.141838801 container start 6920282f4175e332901691c5e590bb9216791fd867999c3eca8ddfcf34195207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_tharp, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 14 04:39:44 np0005486808 podman[228783]: 2025-10-14 08:39:44.854333877 +0000 UTC m=+0.146361823 container attach 6920282f4175e332901691c5e590bb9216791fd867999c3eca8ddfcf34195207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 14 04:39:44 np0005486808 exciting_tharp[228822]: 167 167
Oct 14 04:39:44 np0005486808 systemd[1]: libpod-6920282f4175e332901691c5e590bb9216791fd867999c3eca8ddfcf34195207.scope: Deactivated successfully.
Oct 14 04:39:44 np0005486808 podman[228783]: 2025-10-14 08:39:44.856651904 +0000 UTC m=+0.148679760 container died 6920282f4175e332901691c5e590bb9216791fd867999c3eca8ddfcf34195207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 14 04:39:44 np0005486808 systemd[1]: var-lib-containers-storage-overlay-f2dfc6ff366a39ce78bd97390915545837bfa2d86aa32236643448b9a168b531-merged.mount: Deactivated successfully.
Oct 14 04:39:44 np0005486808 podman[228783]: 2025-10-14 08:39:44.904226295 +0000 UTC m=+0.196254161 container remove 6920282f4175e332901691c5e590bb9216791fd867999c3eca8ddfcf34195207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_tharp, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 14 04:39:44 np0005486808 systemd[1]: libpod-conmon-6920282f4175e332901691c5e590bb9216791fd867999c3eca8ddfcf34195207.scope: Deactivated successfully.
Oct 14 04:39:45 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:39:45 np0005486808 podman[228898]: 2025-10-14 08:39:45.088640634 +0000 UTC m=+0.045715337 container create 2ef4bbbc97d292ec9c424620158bfb1c2c2216289dda30096d5208fd0b726297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_morse, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:39:45 np0005486808 systemd[1]: Started libpod-conmon-2ef4bbbc97d292ec9c424620158bfb1c2c2216289dda30096d5208fd0b726297.scope.
Oct 14 04:39:45 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:39:45 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3ab57d7dd212398ecbbf9daccc9318bef6d2c05b3938da26cf6abe31f6636d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:39:45 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3ab57d7dd212398ecbbf9daccc9318bef6d2c05b3938da26cf6abe31f6636d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:39:45 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3ab57d7dd212398ecbbf9daccc9318bef6d2c05b3938da26cf6abe31f6636d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:39:45 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3ab57d7dd212398ecbbf9daccc9318bef6d2c05b3938da26cf6abe31f6636d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:39:45 np0005486808 podman[228898]: 2025-10-14 08:39:45.069511823 +0000 UTC m=+0.026586546 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:39:45 np0005486808 podman[228898]: 2025-10-14 08:39:45.176768492 +0000 UTC m=+0.133843195 container init 2ef4bbbc97d292ec9c424620158bfb1c2c2216289dda30096d5208fd0b726297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:39:45 np0005486808 podman[228898]: 2025-10-14 08:39:45.193620747 +0000 UTC m=+0.150695490 container start 2ef4bbbc97d292ec9c424620158bfb1c2c2216289dda30096d5208fd0b726297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_morse, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 14 04:39:45 np0005486808 podman[228898]: 2025-10-14 08:39:45.199659546 +0000 UTC m=+0.156734269 container attach 2ef4bbbc97d292ec9c424620158bfb1c2c2216289dda30096d5208fd0b726297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 14 04:39:45 np0005486808 python3.9[228972]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/iscsid.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:39:45 np0005486808 serene_morse[228948]: {
Oct 14 04:39:45 np0005486808 serene_morse[228948]:    "0": [
Oct 14 04:39:45 np0005486808 serene_morse[228948]:        {
Oct 14 04:39:45 np0005486808 serene_morse[228948]:            "devices": [
Oct 14 04:39:45 np0005486808 serene_morse[228948]:                "/dev/loop3"
Oct 14 04:39:45 np0005486808 serene_morse[228948]:            ],
Oct 14 04:39:45 np0005486808 serene_morse[228948]:            "lv_name": "ceph_lv0",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:            "lv_size": "21470642176",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:            "name": "ceph_lv0",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:            "tags": {
Oct 14 04:39:45 np0005486808 serene_morse[228948]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:                "ceph.cluster_name": "ceph",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:                "ceph.crush_device_class": "",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:                "ceph.encrypted": "0",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:                "ceph.osd_id": "0",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:                "ceph.type": "block",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:                "ceph.vdo": "0"
Oct 14 04:39:45 np0005486808 serene_morse[228948]:            },
Oct 14 04:39:45 np0005486808 serene_morse[228948]:            "type": "block",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:            "vg_name": "ceph_vg0"
Oct 14 04:39:45 np0005486808 serene_morse[228948]:        }
Oct 14 04:39:45 np0005486808 serene_morse[228948]:    ],
Oct 14 04:39:45 np0005486808 serene_morse[228948]:    "1": [
Oct 14 04:39:45 np0005486808 serene_morse[228948]:        {
Oct 14 04:39:45 np0005486808 serene_morse[228948]:            "devices": [
Oct 14 04:39:45 np0005486808 serene_morse[228948]:                "/dev/loop4"
Oct 14 04:39:45 np0005486808 serene_morse[228948]:            ],
Oct 14 04:39:45 np0005486808 serene_morse[228948]:            "lv_name": "ceph_lv1",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:            "lv_size": "21470642176",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:            "name": "ceph_lv1",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:            "tags": {
Oct 14 04:39:45 np0005486808 serene_morse[228948]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:                "ceph.cluster_name": "ceph",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:                "ceph.crush_device_class": "",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:                "ceph.encrypted": "0",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:                "ceph.osd_id": "1",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:                "ceph.type": "block",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:                "ceph.vdo": "0"
Oct 14 04:39:45 np0005486808 serene_morse[228948]:            },
Oct 14 04:39:45 np0005486808 serene_morse[228948]:            "type": "block",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:            "vg_name": "ceph_vg1"
Oct 14 04:39:45 np0005486808 serene_morse[228948]:        }
Oct 14 04:39:45 np0005486808 serene_morse[228948]:    ],
Oct 14 04:39:45 np0005486808 serene_morse[228948]:    "2": [
Oct 14 04:39:45 np0005486808 serene_morse[228948]:        {
Oct 14 04:39:45 np0005486808 serene_morse[228948]:            "devices": [
Oct 14 04:39:45 np0005486808 serene_morse[228948]:                "/dev/loop5"
Oct 14 04:39:45 np0005486808 serene_morse[228948]:            ],
Oct 14 04:39:45 np0005486808 serene_morse[228948]:            "lv_name": "ceph_lv2",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:            "lv_size": "21470642176",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:            "name": "ceph_lv2",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:            "tags": {
Oct 14 04:39:45 np0005486808 serene_morse[228948]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:                "ceph.cluster_name": "ceph",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:                "ceph.crush_device_class": "",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:                "ceph.encrypted": "0",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:                "ceph.osd_id": "2",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:                "ceph.type": "block",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:                "ceph.vdo": "0"
Oct 14 04:39:45 np0005486808 serene_morse[228948]:            },
Oct 14 04:39:45 np0005486808 serene_morse[228948]:            "type": "block",
Oct 14 04:39:45 np0005486808 serene_morse[228948]:            "vg_name": "ceph_vg2"
Oct 14 04:39:45 np0005486808 serene_morse[228948]:        }
Oct 14 04:39:45 np0005486808 serene_morse[228948]:    ]
Oct 14 04:39:45 np0005486808 serene_morse[228948]: }
Oct 14 04:39:45 np0005486808 systemd[1]: libpod-2ef4bbbc97d292ec9c424620158bfb1c2c2216289dda30096d5208fd0b726297.scope: Deactivated successfully.
Oct 14 04:39:45 np0005486808 podman[228898]: 2025-10-14 08:39:45.946047856 +0000 UTC m=+0.903122589 container died 2ef4bbbc97d292ec9c424620158bfb1c2c2216289dda30096d5208fd0b726297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_morse, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:39:45 np0005486808 systemd[1]: var-lib-containers-storage-overlay-d3ab57d7dd212398ecbbf9daccc9318bef6d2c05b3938da26cf6abe31f6636d3-merged.mount: Deactivated successfully.
Oct 14 04:39:46 np0005486808 python3.9[229097]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/iscsid.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760431184.79548-298-140819113656009/.source.json _original_basename=.95w3krjs follow=False checksum=80e4f97460718c7e5c66b21ef8b846eba0e0dbc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:39:46 np0005486808 podman[228898]: 2025-10-14 08:39:46.037265601 +0000 UTC m=+0.994340314 container remove 2ef4bbbc97d292ec9c424620158bfb1c2c2216289dda30096d5208fd0b726297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_morse, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 14 04:39:46 np0005486808 systemd[1]: libpod-conmon-2ef4bbbc97d292ec9c424620158bfb1c2c2216289dda30096d5208fd0b726297.scope: Deactivated successfully.
Oct 14 04:39:46 np0005486808 podman[229401]: 2025-10-14 08:39:46.729780315 +0000 UTC m=+0.061689240 container create b6dc77bbbf9eb7561c2dceee063cfdb4159b934aaa509f1f10b62e4dda96ac1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hamilton, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 14 04:39:46 np0005486808 systemd[1]: Started libpod-conmon-b6dc77bbbf9eb7561c2dceee063cfdb4159b934aaa509f1f10b62e4dda96ac1a.scope.
Oct 14 04:39:46 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:39:46 np0005486808 podman[229401]: 2025-10-14 08:39:46.703746314 +0000 UTC m=+0.035655289 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:39:46 np0005486808 podman[229401]: 2025-10-14 08:39:46.807420366 +0000 UTC m=+0.139329271 container init b6dc77bbbf9eb7561c2dceee063cfdb4159b934aaa509f1f10b62e4dda96ac1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hamilton, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 14 04:39:46 np0005486808 podman[229401]: 2025-10-14 08:39:46.816182331 +0000 UTC m=+0.148091216 container start b6dc77bbbf9eb7561c2dceee063cfdb4159b934aaa509f1f10b62e4dda96ac1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hamilton, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 14 04:39:46 np0005486808 podman[229401]: 2025-10-14 08:39:46.819592595 +0000 UTC m=+0.151501510 container attach b6dc77bbbf9eb7561c2dceee063cfdb4159b934aaa509f1f10b62e4dda96ac1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 14 04:39:46 np0005486808 cool_hamilton[229420]: 167 167
Oct 14 04:39:46 np0005486808 systemd[1]: libpod-b6dc77bbbf9eb7561c2dceee063cfdb4159b934aaa509f1f10b62e4dda96ac1a.scope: Deactivated successfully.
Oct 14 04:39:46 np0005486808 podman[229401]: 2025-10-14 08:39:46.822852315 +0000 UTC m=+0.154761200 container died b6dc77bbbf9eb7561c2dceee063cfdb4159b934aaa509f1f10b62e4dda96ac1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:39:46 np0005486808 systemd[1]: var-lib-containers-storage-overlay-82b8a50c21098232c31a3ee8e94907249ce851ffb08a277ae3781faa11849846-merged.mount: Deactivated successfully.
Oct 14 04:39:46 np0005486808 podman[229401]: 2025-10-14 08:39:46.862687176 +0000 UTC m=+0.194596051 container remove b6dc77bbbf9eb7561c2dceee063cfdb4159b934aaa509f1f10b62e4dda96ac1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hamilton, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 14 04:39:46 np0005486808 python3.9[229409]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/iscsid state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:39:46 np0005486808 systemd[1]: libpod-conmon-b6dc77bbbf9eb7561c2dceee063cfdb4159b934aaa509f1f10b62e4dda96ac1a.scope: Deactivated successfully.
Oct 14 04:39:47 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:39:47 np0005486808 podman[229468]: 2025-10-14 08:39:47.057607893 +0000 UTC m=+0.045074190 container create 82bc3f022f0d073b1ce7467a65b8ea04e28ce13b63f57786efbf768f7a9be656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:39:47 np0005486808 systemd[1]: Started libpod-conmon-82bc3f022f0d073b1ce7467a65b8ea04e28ce13b63f57786efbf768f7a9be656.scope.
Oct 14 04:39:47 np0005486808 podman[229468]: 2025-10-14 08:39:47.036438842 +0000 UTC m=+0.023905229 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:39:47 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:39:47 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f2515335e0adbdbfd66184cbd6bc675a46ec8540ed4b1b782a7c6af965164a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:39:47 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f2515335e0adbdbfd66184cbd6bc675a46ec8540ed4b1b782a7c6af965164a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:39:47 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f2515335e0adbdbfd66184cbd6bc675a46ec8540ed4b1b782a7c6af965164a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:39:47 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f2515335e0adbdbfd66184cbd6bc675a46ec8540ed4b1b782a7c6af965164a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:39:47 np0005486808 podman[229468]: 2025-10-14 08:39:47.153400581 +0000 UTC m=+0.140866938 container init 82bc3f022f0d073b1ce7467a65b8ea04e28ce13b63f57786efbf768f7a9be656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kapitsa, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 14 04:39:47 np0005486808 podman[229468]: 2025-10-14 08:39:47.166177785 +0000 UTC m=+0.153644082 container start 82bc3f022f0d073b1ce7467a65b8ea04e28ce13b63f57786efbf768f7a9be656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kapitsa, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:39:47 np0005486808 podman[229468]: 2025-10-14 08:39:47.169408305 +0000 UTC m=+0.156874652 container attach 82bc3f022f0d073b1ce7467a65b8ea04e28ce13b63f57786efbf768f7a9be656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kapitsa, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 14 04:39:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:39:48 np0005486808 peaceful_kapitsa[229489]: {
Oct 14 04:39:48 np0005486808 peaceful_kapitsa[229489]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 04:39:48 np0005486808 peaceful_kapitsa[229489]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:39:48 np0005486808 peaceful_kapitsa[229489]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 04:39:48 np0005486808 peaceful_kapitsa[229489]:        "osd_id": 2,
Oct 14 04:39:48 np0005486808 peaceful_kapitsa[229489]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:39:48 np0005486808 peaceful_kapitsa[229489]:        "type": "bluestore"
Oct 14 04:39:48 np0005486808 peaceful_kapitsa[229489]:    },
Oct 14 04:39:48 np0005486808 peaceful_kapitsa[229489]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 04:39:48 np0005486808 peaceful_kapitsa[229489]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:39:48 np0005486808 peaceful_kapitsa[229489]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 04:39:48 np0005486808 peaceful_kapitsa[229489]:        "osd_id": 1,
Oct 14 04:39:48 np0005486808 peaceful_kapitsa[229489]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:39:48 np0005486808 peaceful_kapitsa[229489]:        "type": "bluestore"
Oct 14 04:39:48 np0005486808 peaceful_kapitsa[229489]:    },
Oct 14 04:39:48 np0005486808 peaceful_kapitsa[229489]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 04:39:48 np0005486808 peaceful_kapitsa[229489]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:39:48 np0005486808 peaceful_kapitsa[229489]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 04:39:48 np0005486808 peaceful_kapitsa[229489]:        "osd_id": 0,
Oct 14 04:39:48 np0005486808 peaceful_kapitsa[229489]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:39:48 np0005486808 peaceful_kapitsa[229489]:        "type": "bluestore"
Oct 14 04:39:48 np0005486808 peaceful_kapitsa[229489]:    }
Oct 14 04:39:48 np0005486808 peaceful_kapitsa[229489]: }
Oct 14 04:39:48 np0005486808 systemd[1]: libpod-82bc3f022f0d073b1ce7467a65b8ea04e28ce13b63f57786efbf768f7a9be656.scope: Deactivated successfully.
Oct 14 04:39:48 np0005486808 podman[229468]: 2025-10-14 08:39:48.124285475 +0000 UTC m=+1.111751802 container died 82bc3f022f0d073b1ce7467a65b8ea04e28ce13b63f57786efbf768f7a9be656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 14 04:39:48 np0005486808 systemd[1]: var-lib-containers-storage-overlay-8f2515335e0adbdbfd66184cbd6bc675a46ec8540ed4b1b782a7c6af965164a4-merged.mount: Deactivated successfully.
Oct 14 04:39:48 np0005486808 podman[229468]: 2025-10-14 08:39:48.196311908 +0000 UTC m=+1.183778215 container remove 82bc3f022f0d073b1ce7467a65b8ea04e28ce13b63f57786efbf768f7a9be656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 14 04:39:48 np0005486808 systemd[1]: libpod-conmon-82bc3f022f0d073b1ce7467a65b8ea04e28ce13b63f57786efbf768f7a9be656.scope: Deactivated successfully.
Oct 14 04:39:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:39:48 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:39:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:39:48 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:39:48 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 2136bc92-f169-479b-8cd0-4d66aa57dd3d does not exist
Oct 14 04:39:48 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 335fb7e1-1ba9-4754-86ac-2a07fe66fd62 does not exist
Oct 14 04:39:49 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:39:49 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:39:49 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:39:49 np0005486808 python3.9[229984]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/iscsid config_pattern=*.json debug=False
Oct 14 04:39:50 np0005486808 python3.9[230136]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 14 04:39:51 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:39:51 np0005486808 python3.9[230288]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 14 04:39:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:39:53 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:39:53 np0005486808 python3[230466]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/iscsid config_id=iscsid config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 14 04:39:53 np0005486808 podman[230505]: 2025-10-14 08:39:53.775672684 +0000 UTC m=+0.061775711 container create 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, io.buildah.version=1.41.3, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, org.label-schema.license=GPLv2)
Oct 14 04:39:53 np0005486808 podman[230505]: 2025-10-14 08:39:53.74177008 +0000 UTC m=+0.027873167 image pull 5773abc4300b61c01f3353a0b9239f9a404bb272790b280574e4c56f72edaa72 quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe
Oct 14 04:39:53 np0005486808 python3[230466]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name iscsid --conmon-pidfile /run/iscsid.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=iscsid --label container_name=iscsid --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:z --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/openstack/healthchecks/iscsid:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe
Oct 14 04:39:54 np0005486808 python3.9[230694]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:39:55 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:39:55 np0005486808 python3.9[230848]: ansible-file Invoked with path=/etc/systemd/system/edpm_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:39:56 np0005486808 python3.9[230924]: ansible-stat Invoked with path=/etc/systemd/system/edpm_iscsid_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:39:57 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:39:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:39:57 np0005486808 python3.9[231075]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760431196.393655-386-86994810012517/source dest=/etc/systemd/system/edpm_iscsid.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:39:57 np0005486808 python3.9[231151]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 14 04:39:57 np0005486808 systemd[1]: Reloading.
Oct 14 04:39:58 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:39:58 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:39:59 np0005486808 python3.9[231262]: ansible-systemd Invoked with state=restarted name=edpm_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:39:59 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:40:00 np0005486808 systemd[1]: Reloading.
Oct 14 04:40:00 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:40:00 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:40:00 np0005486808 systemd[1]: Starting iscsid container...
Oct 14 04:40:00 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:40:00 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20c032d232bd0b72fbdf80348425722e1b26ce46048c32588b21631d6fd24210/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Oct 14 04:40:00 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20c032d232bd0b72fbdf80348425722e1b26ce46048c32588b21631d6fd24210/merged/etc/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 14 04:40:00 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20c032d232bd0b72fbdf80348425722e1b26ce46048c32588b21631d6fd24210/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 14 04:40:00 np0005486808 systemd[1]: Started /usr/bin/podman healthcheck run 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9.
Oct 14 04:40:00 np0005486808 podman[231301]: 2025-10-14 08:40:00.615790919 +0000 UTC m=+0.126304320 container init 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 14 04:40:00 np0005486808 iscsid[231316]: + sudo -E kolla_set_configs
Oct 14 04:40:00 np0005486808 podman[231301]: 2025-10-14 08:40:00.656335647 +0000 UTC m=+0.166849078 container start 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 14 04:40:00 np0005486808 podman[231301]: iscsid
Oct 14 04:40:00 np0005486808 systemd[1]: Started iscsid container.
Oct 14 04:40:00 np0005486808 systemd[1]: Created slice User Slice of UID 0.
Oct 14 04:40:00 np0005486808 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 14 04:40:00 np0005486808 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 14 04:40:00 np0005486808 systemd[1]: Starting User Manager for UID 0...
Oct 14 04:40:00 np0005486808 podman[231324]: 2025-10-14 08:40:00.776290019 +0000 UTC m=+0.098510615 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 04:40:00 np0005486808 systemd[1]: 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9-7581ed72f23f2c85.service: Main process exited, code=exited, status=1/FAILURE
Oct 14 04:40:00 np0005486808 systemd[1]: 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9-7581ed72f23f2c85.service: Failed with result 'exit-code'.
Oct 14 04:40:00 np0005486808 systemd[231341]: Queued start job for default target Main User Target.
Oct 14 04:40:00 np0005486808 systemd[231341]: Created slice User Application Slice.
Oct 14 04:40:00 np0005486808 systemd[231341]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 14 04:40:00 np0005486808 systemd[231341]: Started Daily Cleanup of User's Temporary Directories.
Oct 14 04:40:00 np0005486808 systemd[231341]: Reached target Paths.
Oct 14 04:40:00 np0005486808 systemd[231341]: Reached target Timers.
Oct 14 04:40:00 np0005486808 systemd[231341]: Starting D-Bus User Message Bus Socket...
Oct 14 04:40:00 np0005486808 systemd[231341]: Starting Create User's Volatile Files and Directories...
Oct 14 04:40:00 np0005486808 systemd[231341]: Listening on D-Bus User Message Bus Socket.
Oct 14 04:40:00 np0005486808 systemd[231341]: Reached target Sockets.
Oct 14 04:40:00 np0005486808 systemd[231341]: Finished Create User's Volatile Files and Directories.
Oct 14 04:40:00 np0005486808 systemd[231341]: Reached target Basic System.
Oct 14 04:40:00 np0005486808 systemd[231341]: Reached target Main User Target.
Oct 14 04:40:00 np0005486808 systemd[231341]: Startup finished in 170ms.
Oct 14 04:40:00 np0005486808 systemd[1]: Started User Manager for UID 0.
Oct 14 04:40:00 np0005486808 systemd[1]: Started Session c3 of User root.
Oct 14 04:40:01 np0005486808 iscsid[231316]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 14 04:40:01 np0005486808 iscsid[231316]: INFO:__main__:Validating config file
Oct 14 04:40:01 np0005486808 iscsid[231316]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 14 04:40:01 np0005486808 iscsid[231316]: INFO:__main__:Writing out command to execute
Oct 14 04:40:01 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:40:01 np0005486808 systemd[1]: session-c3.scope: Deactivated successfully.
Oct 14 04:40:01 np0005486808 iscsid[231316]: ++ cat /run_command
Oct 14 04:40:01 np0005486808 iscsid[231316]: + CMD='/usr/sbin/iscsid -f'
Oct 14 04:40:01 np0005486808 iscsid[231316]: + ARGS=
Oct 14 04:40:01 np0005486808 iscsid[231316]: + sudo kolla_copy_cacerts
Oct 14 04:40:01 np0005486808 systemd[1]: Started Session c4 of User root.
Oct 14 04:40:01 np0005486808 iscsid[231316]: + [[ ! -n '' ]]
Oct 14 04:40:01 np0005486808 iscsid[231316]: + . kolla_extend_start
Oct 14 04:40:01 np0005486808 iscsid[231316]: ++ [[ ! -f /etc/iscsi/initiatorname.iscsi ]]
Oct 14 04:40:01 np0005486808 iscsid[231316]: + echo 'Running command: '\''/usr/sbin/iscsid -f'\'''
Oct 14 04:40:01 np0005486808 iscsid[231316]: Running command: '/usr/sbin/iscsid -f'
Oct 14 04:40:01 np0005486808 iscsid[231316]: + umask 0022
Oct 14 04:40:01 np0005486808 iscsid[231316]: + exec /usr/sbin/iscsid -f
Oct 14 04:40:01 np0005486808 systemd[1]: session-c4.scope: Deactivated successfully.
Oct 14 04:40:01 np0005486808 kernel: Loading iSCSI transport class v2.0-870.
Oct 14 04:40:01 np0005486808 python3.9[231522]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.iscsid_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:40:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:40:02 np0005486808 python3.9[231674]: ansible-ansible.builtin.file Invoked with path=/etc/iscsi/.iscsid_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:40:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:40:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:40:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:40:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:40:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:40:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:40:03 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:40:03 np0005486808 python3.9[231826]: ansible-ansible.builtin.service_facts Invoked
Oct 14 04:40:03 np0005486808 network[231843]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 14 04:40:03 np0005486808 network[231844]: 'network-scripts' will be removed from distribution in near future.
Oct 14 04:40:03 np0005486808 network[231845]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 14 04:40:05 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:40:06 np0005486808 podman[231923]: 2025-10-14 08:40:06.738988889 +0000 UTC m=+0.132686356 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:40:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:40:06.996 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:40:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:40:06.997 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:40:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:40:06.997 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:40:07 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:40:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:40:09 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:40:09 np0005486808 python3.9[232146]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 14 04:40:10 np0005486808 python3.9[232298]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Oct 14 04:40:10 np0005486808 podman[232302]: 2025-10-14 08:40:10.432766919 +0000 UTC m=+0.058794118 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 14 04:40:11 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:40:11 np0005486808 python3.9[232473]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:40:11 np0005486808 systemd[1]: Stopping User Manager for UID 0...
Oct 14 04:40:11 np0005486808 systemd[231341]: Activating special unit Exit the Session...
Oct 14 04:40:11 np0005486808 systemd[231341]: Stopped target Main User Target.
Oct 14 04:40:11 np0005486808 systemd[231341]: Stopped target Basic System.
Oct 14 04:40:11 np0005486808 systemd[231341]: Stopped target Paths.
Oct 14 04:40:11 np0005486808 systemd[231341]: Stopped target Sockets.
Oct 14 04:40:11 np0005486808 systemd[231341]: Stopped target Timers.
Oct 14 04:40:11 np0005486808 systemd[231341]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 14 04:40:11 np0005486808 systemd[231341]: Closed D-Bus User Message Bus Socket.
Oct 14 04:40:11 np0005486808 systemd[231341]: Stopped Create User's Volatile Files and Directories.
Oct 14 04:40:11 np0005486808 systemd[231341]: Removed slice User Application Slice.
Oct 14 04:40:11 np0005486808 systemd[231341]: Reached target Shutdown.
Oct 14 04:40:11 np0005486808 systemd[231341]: Finished Exit the Session.
Oct 14 04:40:11 np0005486808 systemd[231341]: Reached target Exit the Session.
Oct 14 04:40:11 np0005486808 systemd[1]: user@0.service: Deactivated successfully.
Oct 14 04:40:11 np0005486808 systemd[1]: Stopped User Manager for UID 0.
Oct 14 04:40:11 np0005486808 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 14 04:40:11 np0005486808 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 14 04:40:11 np0005486808 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 14 04:40:11 np0005486808 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 14 04:40:11 np0005486808 systemd[1]: Removed slice User Slice of UID 0.
Oct 14 04:40:11 np0005486808 python3.9[232598]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760431210.5515041-460-133458283295975/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:40:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:40:12 np0005486808 python3.9[232750]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:40:13 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:40:13 np0005486808 python3.9[232902]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 14 04:40:13 np0005486808 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 14 04:40:13 np0005486808 systemd[1]: Stopped Load Kernel Modules.
Oct 14 04:40:13 np0005486808 systemd[1]: Stopping Load Kernel Modules...
Oct 14 04:40:13 np0005486808 systemd[1]: Starting Load Kernel Modules...
Oct 14 04:40:13 np0005486808 systemd[1]: Finished Load Kernel Modules.
Oct 14 04:40:14 np0005486808 python3.9[233058]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:40:14 np0005486808 systemd[1]: virtnodedevd.service: Deactivated successfully.
Oct 14 04:40:15 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:40:15 np0005486808 python3.9[233211]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:40:15 np0005486808 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 14 04:40:16 np0005486808 python3.9[233364]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:40:16 np0005486808 python3.9[233516]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:40:17 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:40:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:40:17 np0005486808 python3.9[233639]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760431216.4160054-518-202180053844916/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:40:18 np0005486808 python3.9[233791]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:40:19 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:40:19 np0005486808 python3.9[233944]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:40:20 np0005486808 python3.9[234096]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:40:21 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:40:21 np0005486808 python3.9[234248]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:40:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:40:22 np0005486808 python3.9[234400]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:40:23 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:40:23 np0005486808 python3.9[234552]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:40:24 np0005486808 python3.9[234704]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:40:24 np0005486808 python3.9[234856]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:40:25 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:40:25 np0005486808 python3.9[235008]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:40:26 np0005486808 python3.9[235162]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:40:26 np0005486808 systemd[1]: virtqemud.service: Deactivated successfully.
Oct 14 04:40:26 np0005486808 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 14 04:40:27 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:40:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:40:27 np0005486808 python3.9[235316]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:40:28 np0005486808 python3.9[235468]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:40:28 np0005486808 python3.9[235546]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:40:29 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:40:29 np0005486808 python3.9[235698]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:40:30 np0005486808 python3.9[235776]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:40:30 np0005486808 python3.9[235928]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:40:31 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:40:31 np0005486808 podman[236052]: 2025-10-14 08:40:31.537032726 +0000 UTC m=+0.079414196 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 14 04:40:31 np0005486808 python3.9[236097]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:40:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:40:32 np0005486808 python3.9[236178]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:40:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_08:40:32
Oct 14 04:40:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 04:40:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 04:40:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', 'default.rgw.control', '.rgw.root', 'images', 'default.rgw.log', '.mgr', 'backups', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data']
Oct 14 04:40:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 04:40:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:40:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:40:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:40:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:40:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:40:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:40:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 04:40:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:40:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 04:40:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:40:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:40:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:40:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:40:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:40:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:40:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:40:32 np0005486808 python3.9[236330]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:40:33 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:40:33 np0005486808 python3.9[236408]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:40:34 np0005486808 python3.9[236560]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:40:34 np0005486808 systemd[1]: Reloading.
Oct 14 04:40:34 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:40:34 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:40:35 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:40:35 np0005486808 python3.9[236750]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:40:36 np0005486808 python3.9[236828]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:40:36 np0005486808 podman[236952]: 2025-10-14 08:40:36.934624549 +0000 UTC m=+0.129473668 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 04:40:37 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:40:37 np0005486808 python3.9[237000]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:40:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:40:37 np0005486808 python3.9[237085]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:40:38 np0005486808 python3.9[237237]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:40:38 np0005486808 systemd[1]: Reloading.
Oct 14 04:40:38 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:40:38 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:40:39 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:40:39 np0005486808 systemd[1]: Starting Create netns directory...
Oct 14 04:40:39 np0005486808 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 14 04:40:39 np0005486808 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 14 04:40:39 np0005486808 systemd[1]: Finished Create netns directory.
Oct 14 04:40:40 np0005486808 python3.9[237429]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:40:40 np0005486808 podman[237550]: 2025-10-14 08:40:40.653998768 +0000 UTC m=+0.058426349 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct 14 04:40:40 np0005486808 python3.9[237601]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:40:41 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:40:41 np0005486808 python3.9[237724]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760431240.3370583-725-223904923442979/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:40:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:40:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 04:40:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:40:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 04:40:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:40:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:40:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:40:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:40:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:40:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:40:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:40:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:40:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:40:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 04:40:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:40:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:40:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:40:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 04:40:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:40:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 04:40:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:40:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:40:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:40:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 04:40:42 np0005486808 python3.9[237876]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:40:43 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:40:43 np0005486808 python3.9[238028]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:40:44 np0005486808 python3.9[238151]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760431242.9011612-750-84984968239390/.source.json _original_basename=.0gvry5qc follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:40:44 np0005486808 python3.9[238303]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:40:45 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:40:47 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:40:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:40:47 np0005486808 python3.9[238730]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Oct 14 04:40:48 np0005486808 python3.9[238882]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 14 04:40:49 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:40:49 np0005486808 python3.9[239151]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 14 04:40:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:40:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:40:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 04:40:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:40:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 04:40:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:40:49 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 8b1e548e-f1b1-41a3-94b6-de278f7fe21c does not exist
Oct 14 04:40:49 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 77c53f8c-e2b8-4c7c-b142-1d27cbfcb6f8 does not exist
Oct 14 04:40:49 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev d2c4628b-5704-44b6-a902-df1bb3c86e5d does not exist
Oct 14 04:40:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 04:40:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 04:40:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 04:40:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:40:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:40:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:40:49 np0005486808 podman[239358]: 2025-10-14 08:40:49.929532011 +0000 UTC m=+0.055272791 container create a59b2df01ab98b33485430b148b754536b4a5a09e291ff2cc937e04e37e9fb4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:40:49 np0005486808 systemd[1]: Started libpod-conmon-a59b2df01ab98b33485430b148b754536b4a5a09e291ff2cc937e04e37e9fb4b.scope.
Oct 14 04:40:49 np0005486808 podman[239358]: 2025-10-14 08:40:49.903410619 +0000 UTC m=+0.029151449 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:40:50 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:40:50 np0005486808 podman[239358]: 2025-10-14 08:40:50.035713015 +0000 UTC m=+0.161453765 container init a59b2df01ab98b33485430b148b754536b4a5a09e291ff2cc937e04e37e9fb4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mendeleev, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 14 04:40:50 np0005486808 podman[239358]: 2025-10-14 08:40:50.046725646 +0000 UTC m=+0.172466456 container start a59b2df01ab98b33485430b148b754536b4a5a09e291ff2cc937e04e37e9fb4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mendeleev, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 14 04:40:50 np0005486808 podman[239358]: 2025-10-14 08:40:50.050658503 +0000 UTC m=+0.176399273 container attach a59b2df01ab98b33485430b148b754536b4a5a09e291ff2cc937e04e37e9fb4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:40:50 np0005486808 interesting_mendeleev[239374]: 167 167
Oct 14 04:40:50 np0005486808 systemd[1]: libpod-a59b2df01ab98b33485430b148b754536b4a5a09e291ff2cc937e04e37e9fb4b.scope: Deactivated successfully.
Oct 14 04:40:50 np0005486808 conmon[239374]: conmon a59b2df01ab98b334854 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a59b2df01ab98b33485430b148b754536b4a5a09e291ff2cc937e04e37e9fb4b.scope/container/memory.events
Oct 14 04:40:50 np0005486808 podman[239358]: 2025-10-14 08:40:50.056313222 +0000 UTC m=+0.182053982 container died a59b2df01ab98b33485430b148b754536b4a5a09e291ff2cc937e04e37e9fb4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mendeleev, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:40:50 np0005486808 systemd[1]: var-lib-containers-storage-overlay-22e82884cb27e2b2c4792a32113bfce4997480066b7bb6d13c4986cd9c8d95ba-merged.mount: Deactivated successfully.
Oct 14 04:40:50 np0005486808 podman[239358]: 2025-10-14 08:40:50.103060562 +0000 UTC m=+0.228801312 container remove a59b2df01ab98b33485430b148b754536b4a5a09e291ff2cc937e04e37e9fb4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mendeleev, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:40:50 np0005486808 systemd[1]: libpod-conmon-a59b2df01ab98b33485430b148b754536b4a5a09e291ff2cc937e04e37e9fb4b.scope: Deactivated successfully.
Oct 14 04:40:50 np0005486808 podman[239435]: 2025-10-14 08:40:50.262106787 +0000 UTC m=+0.049338136 container create 5e1b72f873bdaa4536e7e83062f12c45c0b3fa7b7fa788d8fdecf62eb2ddaea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 14 04:40:50 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:40:50 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:40:50 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:40:50 np0005486808 systemd[1]: Started libpod-conmon-5e1b72f873bdaa4536e7e83062f12c45c0b3fa7b7fa788d8fdecf62eb2ddaea6.scope.
Oct 14 04:40:50 np0005486808 podman[239435]: 2025-10-14 08:40:50.234285842 +0000 UTC m=+0.021517211 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:40:50 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:40:50 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/977bb1eecc8a4977bf947c987423dd9c6fa5f9dc13766fd9f84d93223ff7b067/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:40:50 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/977bb1eecc8a4977bf947c987423dd9c6fa5f9dc13766fd9f84d93223ff7b067/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:40:50 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/977bb1eecc8a4977bf947c987423dd9c6fa5f9dc13766fd9f84d93223ff7b067/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:40:50 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/977bb1eecc8a4977bf947c987423dd9c6fa5f9dc13766fd9f84d93223ff7b067/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:40:50 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/977bb1eecc8a4977bf947c987423dd9c6fa5f9dc13766fd9f84d93223ff7b067/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:40:50 np0005486808 podman[239435]: 2025-10-14 08:40:50.367189893 +0000 UTC m=+0.154421242 container init 5e1b72f873bdaa4536e7e83062f12c45c0b3fa7b7fa788d8fdecf62eb2ddaea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_elion, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 14 04:40:50 np0005486808 podman[239435]: 2025-10-14 08:40:50.376545943 +0000 UTC m=+0.163777292 container start 5e1b72f873bdaa4536e7e83062f12c45c0b3fa7b7fa788d8fdecf62eb2ddaea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 04:40:50 np0005486808 podman[239435]: 2025-10-14 08:40:50.381166267 +0000 UTC m=+0.168397636 container attach 5e1b72f873bdaa4536e7e83062f12c45c0b3fa7b7fa788d8fdecf62eb2ddaea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_elion, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:40:50 np0005486808 python3[239547]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 14 04:40:51 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:40:51 np0005486808 mystifying_elion[239490]: --> passed data devices: 0 physical, 3 LVM
Oct 14 04:40:51 np0005486808 mystifying_elion[239490]: --> relative data size: 1.0
Oct 14 04:40:51 np0005486808 mystifying_elion[239490]: --> All data devices are unavailable
Oct 14 04:40:51 np0005486808 systemd[1]: libpod-5e1b72f873bdaa4536e7e83062f12c45c0b3fa7b7fa788d8fdecf62eb2ddaea6.scope: Deactivated successfully.
Oct 14 04:40:51 np0005486808 podman[239435]: 2025-10-14 08:40:51.426785151 +0000 UTC m=+1.214016510 container died 5e1b72f873bdaa4536e7e83062f12c45c0b3fa7b7fa788d8fdecf62eb2ddaea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_elion, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 14 04:40:51 np0005486808 systemd[1]: var-lib-containers-storage-overlay-977bb1eecc8a4977bf947c987423dd9c6fa5f9dc13766fd9f84d93223ff7b067-merged.mount: Deactivated successfully.
Oct 14 04:40:51 np0005486808 podman[239435]: 2025-10-14 08:40:51.483722693 +0000 UTC m=+1.270954052 container remove 5e1b72f873bdaa4536e7e83062f12c45c0b3fa7b7fa788d8fdecf62eb2ddaea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_elion, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:40:51 np0005486808 systemd[1]: libpod-conmon-5e1b72f873bdaa4536e7e83062f12c45c0b3fa7b7fa788d8fdecf62eb2ddaea6.scope: Deactivated successfully.
Oct 14 04:40:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:40:52 np0005486808 podman[239750]: 2025-10-14 08:40:52.291349839 +0000 UTC m=+0.064088709 container create 1f8ce20dc78606b04b22178706ad1993981ad51bfb6926a7e0535fa71454ae81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:40:52 np0005486808 systemd[1]: Started libpod-conmon-1f8ce20dc78606b04b22178706ad1993981ad51bfb6926a7e0535fa71454ae81.scope.
Oct 14 04:40:52 np0005486808 podman[239750]: 2025-10-14 08:40:52.263412441 +0000 UTC m=+0.036151361 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:40:52 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:40:52 np0005486808 podman[239750]: 2025-10-14 08:40:52.410890371 +0000 UTC m=+0.183629221 container init 1f8ce20dc78606b04b22178706ad1993981ad51bfb6926a7e0535fa71454ae81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:40:52 np0005486808 podman[239750]: 2025-10-14 08:40:52.418319874 +0000 UTC m=+0.191058714 container start 1f8ce20dc78606b04b22178706ad1993981ad51bfb6926a7e0535fa71454ae81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_swanson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 14 04:40:52 np0005486808 podman[239750]: 2025-10-14 08:40:52.421484442 +0000 UTC m=+0.194223322 container attach 1f8ce20dc78606b04b22178706ad1993981ad51bfb6926a7e0535fa71454ae81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_swanson, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 14 04:40:52 np0005486808 determined_swanson[239772]: 167 167
Oct 14 04:40:52 np0005486808 systemd[1]: libpod-1f8ce20dc78606b04b22178706ad1993981ad51bfb6926a7e0535fa71454ae81.scope: Deactivated successfully.
Oct 14 04:40:52 np0005486808 podman[239750]: 2025-10-14 08:40:52.426381772 +0000 UTC m=+0.199120612 container died 1f8ce20dc78606b04b22178706ad1993981ad51bfb6926a7e0535fa71454ae81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_swanson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 14 04:40:52 np0005486808 systemd[1]: var-lib-containers-storage-overlay-30ec2e77eca836276164a58e8a487000ae18576521ba783fd999250574c74811-merged.mount: Deactivated successfully.
Oct 14 04:40:52 np0005486808 podman[239750]: 2025-10-14 08:40:52.467574676 +0000 UTC m=+0.240313516 container remove 1f8ce20dc78606b04b22178706ad1993981ad51bfb6926a7e0535fa71454ae81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_swanson, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 14 04:40:52 np0005486808 systemd[1]: libpod-conmon-1f8ce20dc78606b04b22178706ad1993981ad51bfb6926a7e0535fa71454ae81.scope: Deactivated successfully.
Oct 14 04:40:52 np0005486808 podman[239560]: 2025-10-14 08:40:52.804546399 +0000 UTC m=+1.865149514 image pull afce23cfe475a7c4b16d233ab936a7b07069ccb13842b1c95ba43e4b3f92adfb quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97
Oct 14 04:40:52 np0005486808 podman[239813]: 2025-10-14 08:40:52.813188262 +0000 UTC m=+0.231029017 container create 80c4b2a88d5102444d31899f657db55d86d83f1a5e9d4b81a4bd9e561b208d6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_ride, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:40:52 np0005486808 systemd[1]: Started libpod-conmon-80c4b2a88d5102444d31899f657db55d86d83f1a5e9d4b81a4bd9e561b208d6b.scope.
Oct 14 04:40:52 np0005486808 podman[239813]: 2025-10-14 08:40:52.780802725 +0000 UTC m=+0.198643510 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:40:52 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:40:52 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/727859fbafd9817ac37d048a2f503581e997b8ef9e0026abf3fe3d2e17c23537/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:40:52 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/727859fbafd9817ac37d048a2f503581e997b8ef9e0026abf3fe3d2e17c23537/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:40:52 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/727859fbafd9817ac37d048a2f503581e997b8ef9e0026abf3fe3d2e17c23537/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:40:52 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/727859fbafd9817ac37d048a2f503581e997b8ef9e0026abf3fe3d2e17c23537/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:40:52 np0005486808 podman[239813]: 2025-10-14 08:40:52.913986453 +0000 UTC m=+0.331827258 container init 80c4b2a88d5102444d31899f657db55d86d83f1a5e9d4b81a4bd9e561b208d6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 14 04:40:52 np0005486808 podman[239813]: 2025-10-14 08:40:52.920396731 +0000 UTC m=+0.338237496 container start 80c4b2a88d5102444d31899f657db55d86d83f1a5e9d4b81a4bd9e561b208d6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_ride, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Oct 14 04:40:52 np0005486808 podman[239813]: 2025-10-14 08:40:52.923793874 +0000 UTC m=+0.341634719 container attach 80c4b2a88d5102444d31899f657db55d86d83f1a5e9d4b81a4bd9e561b208d6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_ride, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 14 04:40:52 np0005486808 podman[239855]: 2025-10-14 08:40:52.960537259 +0000 UTC m=+0.049081149 container create 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd)
Oct 14 04:40:52 np0005486808 podman[239855]: 2025-10-14 08:40:52.939570353 +0000 UTC m=+0.028114253 image pull afce23cfe475a7c4b16d233ab936a7b07069ccb13842b1c95ba43e4b3f92adfb quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97
Oct 14 04:40:52 np0005486808 python3[239547]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97
Oct 14 04:40:53 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]: {
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:    "0": [
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:        {
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:            "devices": [
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:                "/dev/loop3"
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:            ],
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:            "lv_name": "ceph_lv0",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:            "lv_size": "21470642176",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:            "name": "ceph_lv0",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:            "tags": {
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:                "ceph.cluster_name": "ceph",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:                "ceph.crush_device_class": "",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:                "ceph.encrypted": "0",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:                "ceph.osd_id": "0",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:                "ceph.type": "block",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:                "ceph.vdo": "0"
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:            },
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:            "type": "block",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:            "vg_name": "ceph_vg0"
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:        }
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:    ],
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:    "1": [
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:        {
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:            "devices": [
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:                "/dev/loop4"
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:            ],
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:            "lv_name": "ceph_lv1",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:            "lv_size": "21470642176",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:            "name": "ceph_lv1",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:            "tags": {
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:                "ceph.cluster_name": "ceph",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:                "ceph.crush_device_class": "",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:                "ceph.encrypted": "0",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:                "ceph.osd_id": "1",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:                "ceph.type": "block",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:                "ceph.vdo": "0"
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:            },
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:            "type": "block",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:            "vg_name": "ceph_vg1"
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:        }
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:    ],
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:    "2": [
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:        {
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:            "devices": [
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:                "/dev/loop5"
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:            ],
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:            "lv_name": "ceph_lv2",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:            "lv_size": "21470642176",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:            "name": "ceph_lv2",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:            "tags": {
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:                "ceph.cluster_name": "ceph",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:                "ceph.crush_device_class": "",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:                "ceph.encrypted": "0",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:                "ceph.osd_id": "2",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:                "ceph.type": "block",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:                "ceph.vdo": "0"
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:            },
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:            "type": "block",
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:            "vg_name": "ceph_vg2"
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:        }
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]:    ]
Oct 14 04:40:53 np0005486808 hopeful_ride[239842]: }
Oct 14 04:40:53 np0005486808 systemd[1]: libpod-80c4b2a88d5102444d31899f657db55d86d83f1a5e9d4b81a4bd9e561b208d6b.scope: Deactivated successfully.
Oct 14 04:40:53 np0005486808 podman[239813]: 2025-10-14 08:40:53.727383382 +0000 UTC m=+1.145224117 container died 80c4b2a88d5102444d31899f657db55d86d83f1a5e9d4b81a4bd9e561b208d6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_ride, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:40:53 np0005486808 systemd[1]: var-lib-containers-storage-overlay-727859fbafd9817ac37d048a2f503581e997b8ef9e0026abf3fe3d2e17c23537-merged.mount: Deactivated successfully.
Oct 14 04:40:53 np0005486808 podman[239813]: 2025-10-14 08:40:53.805230188 +0000 UTC m=+1.223070933 container remove 80c4b2a88d5102444d31899f657db55d86d83f1a5e9d4b81a4bd9e561b208d6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:40:53 np0005486808 systemd[1]: libpod-conmon-80c4b2a88d5102444d31899f657db55d86d83f1a5e9d4b81a4bd9e561b208d6b.scope: Deactivated successfully.
Oct 14 04:40:53 np0005486808 python3.9[240049]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:40:54 np0005486808 podman[240291]: 2025-10-14 08:40:54.338921773 +0000 UTC m=+0.020325641 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:40:54 np0005486808 podman[240291]: 2025-10-14 08:40:54.445085186 +0000 UTC m=+0.126489054 container create d1f6f044a3de3ac6f599184a396c9afd80608d7a34d44143900b31d48eab5b22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mahavira, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:40:54 np0005486808 systemd[1]: Started libpod-conmon-d1f6f044a3de3ac6f599184a396c9afd80608d7a34d44143900b31d48eab5b22.scope.
Oct 14 04:40:54 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:40:54 np0005486808 podman[240291]: 2025-10-14 08:40:54.647720403 +0000 UTC m=+0.329124341 container init d1f6f044a3de3ac6f599184a396c9afd80608d7a34d44143900b31d48eab5b22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:40:54 np0005486808 podman[240291]: 2025-10-14 08:40:54.656354566 +0000 UTC m=+0.337758444 container start d1f6f044a3de3ac6f599184a396c9afd80608d7a34d44143900b31d48eab5b22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 14 04:40:54 np0005486808 amazing_mahavira[240375]: 167 167
Oct 14 04:40:54 np0005486808 systemd[1]: libpod-d1f6f044a3de3ac6f599184a396c9afd80608d7a34d44143900b31d48eab5b22.scope: Deactivated successfully.
Oct 14 04:40:54 np0005486808 podman[240291]: 2025-10-14 08:40:54.677097936 +0000 UTC m=+0.358501884 container attach d1f6f044a3de3ac6f599184a396c9afd80608d7a34d44143900b31d48eab5b22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 14 04:40:54 np0005486808 podman[240291]: 2025-10-14 08:40:54.677594358 +0000 UTC m=+0.358998276 container died d1f6f044a3de3ac6f599184a396c9afd80608d7a34d44143900b31d48eab5b22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mahavira, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 14 04:40:54 np0005486808 systemd[1]: var-lib-containers-storage-overlay-bbbaa698266ccdd8080b7b9032f649a8415b6464257c7fc44b7ed1caa9656980-merged.mount: Deactivated successfully.
Oct 14 04:40:54 np0005486808 podman[240291]: 2025-10-14 08:40:54.767382778 +0000 UTC m=+0.448786626 container remove d1f6f044a3de3ac6f599184a396c9afd80608d7a34d44143900b31d48eab5b22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 14 04:40:54 np0005486808 systemd[1]: libpod-conmon-d1f6f044a3de3ac6f599184a396c9afd80608d7a34d44143900b31d48eab5b22.scope: Deactivated successfully.
Oct 14 04:40:54 np0005486808 python3.9[240374]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:40:54 np0005486808 podman[240402]: 2025-10-14 08:40:54.976330841 +0000 UTC m=+0.074384832 container create dd78a25fe785fecc2dd31671401d694fb199d1f0ebddde243d18f9a97639994c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Oct 14 04:40:55 np0005486808 systemd[1]: Started libpod-conmon-dd78a25fe785fecc2dd31671401d694fb199d1f0ebddde243d18f9a97639994c.scope.
Oct 14 04:40:55 np0005486808 podman[240402]: 2025-10-14 08:40:54.943185785 +0000 UTC m=+0.041239846 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:40:55 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:40:55 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9e1c95f6cc46104133e91f7cb739aa8999ce1bb5aac84338b1bb09d20a50139/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:40:55 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9e1c95f6cc46104133e91f7cb739aa8999ce1bb5aac84338b1bb09d20a50139/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:40:55 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9e1c95f6cc46104133e91f7cb739aa8999ce1bb5aac84338b1bb09d20a50139/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:40:55 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9e1c95f6cc46104133e91f7cb739aa8999ce1bb5aac84338b1bb09d20a50139/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:40:55 np0005486808 podman[240402]: 2025-10-14 08:40:55.062687066 +0000 UTC m=+0.160741107 container init dd78a25fe785fecc2dd31671401d694fb199d1f0ebddde243d18f9a97639994c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:40:55 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:40:55 np0005486808 podman[240402]: 2025-10-14 08:40:55.074817125 +0000 UTC m=+0.172871106 container start dd78a25fe785fecc2dd31671401d694fb199d1f0ebddde243d18f9a97639994c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_sinoussi, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:40:55 np0005486808 podman[240402]: 2025-10-14 08:40:55.081662203 +0000 UTC m=+0.179716204 container attach dd78a25fe785fecc2dd31671401d694fb199d1f0ebddde243d18f9a97639994c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:40:55 np0005486808 python3.9[240498]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:40:56 np0005486808 beautiful_sinoussi[240462]: {
Oct 14 04:40:56 np0005486808 beautiful_sinoussi[240462]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 04:40:56 np0005486808 beautiful_sinoussi[240462]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:40:56 np0005486808 beautiful_sinoussi[240462]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 04:40:56 np0005486808 beautiful_sinoussi[240462]:        "osd_id": 2,
Oct 14 04:40:56 np0005486808 beautiful_sinoussi[240462]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:40:56 np0005486808 beautiful_sinoussi[240462]:        "type": "bluestore"
Oct 14 04:40:56 np0005486808 beautiful_sinoussi[240462]:    },
Oct 14 04:40:56 np0005486808 beautiful_sinoussi[240462]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 04:40:56 np0005486808 beautiful_sinoussi[240462]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:40:56 np0005486808 beautiful_sinoussi[240462]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 04:40:56 np0005486808 beautiful_sinoussi[240462]:        "osd_id": 1,
Oct 14 04:40:56 np0005486808 beautiful_sinoussi[240462]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:40:56 np0005486808 beautiful_sinoussi[240462]:        "type": "bluestore"
Oct 14 04:40:56 np0005486808 beautiful_sinoussi[240462]:    },
Oct 14 04:40:56 np0005486808 beautiful_sinoussi[240462]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 04:40:56 np0005486808 beautiful_sinoussi[240462]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:40:56 np0005486808 beautiful_sinoussi[240462]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 04:40:56 np0005486808 beautiful_sinoussi[240462]:        "osd_id": 0,
Oct 14 04:40:56 np0005486808 beautiful_sinoussi[240462]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:40:56 np0005486808 beautiful_sinoussi[240462]:        "type": "bluestore"
Oct 14 04:40:56 np0005486808 beautiful_sinoussi[240462]:    }
Oct 14 04:40:56 np0005486808 beautiful_sinoussi[240462]: }
Oct 14 04:40:56 np0005486808 systemd[1]: libpod-dd78a25fe785fecc2dd31671401d694fb199d1f0ebddde243d18f9a97639994c.scope: Deactivated successfully.
Oct 14 04:40:56 np0005486808 podman[240402]: 2025-10-14 08:40:56.092553422 +0000 UTC m=+1.190607393 container died dd78a25fe785fecc2dd31671401d694fb199d1f0ebddde243d18f9a97639994c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:40:56 np0005486808 systemd[1]: libpod-dd78a25fe785fecc2dd31671401d694fb199d1f0ebddde243d18f9a97639994c.scope: Consumed 1.016s CPU time.
Oct 14 04:40:56 np0005486808 systemd[1]: var-lib-containers-storage-overlay-f9e1c95f6cc46104133e91f7cb739aa8999ce1bb5aac84338b1bb09d20a50139-merged.mount: Deactivated successfully.
Oct 14 04:40:56 np0005486808 python3.9[240666]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760431255.4467018-838-61708961638017/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:40:56 np0005486808 podman[240402]: 2025-10-14 08:40:56.188063532 +0000 UTC m=+1.286117493 container remove dd78a25fe785fecc2dd31671401d694fb199d1f0ebddde243d18f9a97639994c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 14 04:40:56 np0005486808 systemd[1]: libpod-conmon-dd78a25fe785fecc2dd31671401d694fb199d1f0ebddde243d18f9a97639994c.scope: Deactivated successfully.
Oct 14 04:40:56 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:40:56 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:40:56 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:40:56 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:40:56 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 03954262-3c19-4343-b5fc-6b8ea26355f7 does not exist
Oct 14 04:40:56 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev f1901e2e-b026-4ed6-a2a5-88f0ce2fada4 does not exist
Oct 14 04:40:56 np0005486808 python3.9[240814]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 14 04:40:56 np0005486808 systemd[1]: Reloading.
Oct 14 04:40:56 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:40:56 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:40:57 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:40:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:40:57 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:40:57 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:40:57 np0005486808 python3.9[240926]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:40:57 np0005486808 systemd[1]: Reloading.
Oct 14 04:40:58 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:40:58 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:40:58 np0005486808 systemd[1]: Starting multipathd container...
Oct 14 04:40:58 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:40:58 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ad96b531768087acb9b9d076cab7480ac4a2692a4066d177919215514dd84e8/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 14 04:40:58 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ad96b531768087acb9b9d076cab7480ac4a2692a4066d177919215514dd84e8/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 14 04:40:58 np0005486808 systemd[1]: Started /usr/bin/podman healthcheck run 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19.
Oct 14 04:40:58 np0005486808 podman[240965]: 2025-10-14 08:40:58.463249809 +0000 UTC m=+0.175053370 container init 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 14 04:40:58 np0005486808 multipathd[240981]: + sudo -E kolla_set_configs
Oct 14 04:40:58 np0005486808 podman[240965]: 2025-10-14 08:40:58.493455932 +0000 UTC m=+0.205259473 container start 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 04:40:58 np0005486808 podman[240965]: multipathd
Oct 14 04:40:58 np0005486808 systemd[1]: Started multipathd container.
Oct 14 04:40:58 np0005486808 multipathd[240981]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 14 04:40:58 np0005486808 multipathd[240981]: INFO:__main__:Validating config file
Oct 14 04:40:58 np0005486808 multipathd[240981]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 14 04:40:58 np0005486808 multipathd[240981]: INFO:__main__:Writing out command to execute
Oct 14 04:40:58 np0005486808 multipathd[240981]: ++ cat /run_command
Oct 14 04:40:58 np0005486808 multipathd[240981]: + CMD='/usr/sbin/multipathd -d'
Oct 14 04:40:58 np0005486808 multipathd[240981]: + ARGS=
Oct 14 04:40:58 np0005486808 multipathd[240981]: + sudo kolla_copy_cacerts
Oct 14 04:40:58 np0005486808 multipathd[240981]: + [[ ! -n '' ]]
Oct 14 04:40:58 np0005486808 multipathd[240981]: + . kolla_extend_start
Oct 14 04:40:58 np0005486808 multipathd[240981]: Running command: '/usr/sbin/multipathd -d'
Oct 14 04:40:58 np0005486808 multipathd[240981]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 14 04:40:58 np0005486808 multipathd[240981]: + umask 0022
Oct 14 04:40:58 np0005486808 multipathd[240981]: + exec /usr/sbin/multipathd -d
Oct 14 04:40:58 np0005486808 podman[240988]: 2025-10-14 08:40:58.558697508 +0000 UTC m=+0.056557183 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 04:40:58 np0005486808 systemd[1]: 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19-115dc4b19878283.service: Main process exited, code=exited, status=1/FAILURE
Oct 14 04:40:58 np0005486808 systemd[1]: 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19-115dc4b19878283.service: Failed with result 'exit-code'.
Oct 14 04:40:58 np0005486808 multipathd[240981]: 5037.382120 | --------start up--------
Oct 14 04:40:58 np0005486808 multipathd[240981]: 5037.382134 | read /etc/multipath.conf
Oct 14 04:40:58 np0005486808 multipathd[240981]: 5037.386817 | path checkers start up
Oct 14 04:40:59 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:40:59 np0005486808 python3.9[241167]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:41:00 np0005486808 python3.9[241321]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:41:01 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:41:01 np0005486808 python3.9[241486]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 14 04:41:01 np0005486808 systemd[1]: Stopping multipathd container...
Oct 14 04:41:01 np0005486808 multipathd[240981]: 5040.135507 | exit (signal)
Oct 14 04:41:01 np0005486808 multipathd[240981]: 5040.135934 | --------shut down-------
Oct 14 04:41:01 np0005486808 systemd[1]: libpod-1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19.scope: Deactivated successfully.
Oct 14 04:41:01 np0005486808 podman[241490]: 2025-10-14 08:41:01.363256093 +0000 UTC m=+0.083190659 container died 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct 14 04:41:01 np0005486808 systemd[1]: 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19-115dc4b19878283.timer: Deactivated successfully.
Oct 14 04:41:01 np0005486808 systemd[1]: Stopped /usr/bin/podman healthcheck run 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19.
Oct 14 04:41:01 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19-userdata-shm.mount: Deactivated successfully.
Oct 14 04:41:01 np0005486808 systemd[1]: var-lib-containers-storage-overlay-6ad96b531768087acb9b9d076cab7480ac4a2692a4066d177919215514dd84e8-merged.mount: Deactivated successfully.
Oct 14 04:41:01 np0005486808 podman[241516]: 2025-10-14 08:41:01.633850003 +0000 UTC m=+0.052253337 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 04:41:01 np0005486808 podman[241490]: 2025-10-14 08:41:01.883658261 +0000 UTC m=+0.603592827 container cleanup 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 04:41:01 np0005486808 podman[241490]: multipathd
Oct 14 04:41:01 np0005486808 podman[241538]: multipathd
Oct 14 04:41:01 np0005486808 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Oct 14 04:41:01 np0005486808 systemd[1]: Stopped multipathd container.
Oct 14 04:41:01 np0005486808 systemd[1]: Starting multipathd container...
Oct 14 04:41:02 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:41:02 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ad96b531768087acb9b9d076cab7480ac4a2692a4066d177919215514dd84e8/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 14 04:41:02 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ad96b531768087acb9b9d076cab7480ac4a2692a4066d177919215514dd84e8/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 14 04:41:02 np0005486808 systemd[1]: Started /usr/bin/podman healthcheck run 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19.
Oct 14 04:41:02 np0005486808 podman[241552]: 2025-10-14 08:41:02.10098662 +0000 UTC m=+0.123573142 container init 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:41:02 np0005486808 multipathd[241567]: + sudo -E kolla_set_configs
Oct 14 04:41:02 np0005486808 podman[241552]: 2025-10-14 08:41:02.138100663 +0000 UTC m=+0.160687135 container start 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct 14 04:41:02 np0005486808 podman[241552]: multipathd
Oct 14 04:41:02 np0005486808 systemd[1]: Started multipathd container.
Oct 14 04:41:02 np0005486808 multipathd[241567]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 14 04:41:02 np0005486808 multipathd[241567]: INFO:__main__:Validating config file
Oct 14 04:41:02 np0005486808 multipathd[241567]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 14 04:41:02 np0005486808 multipathd[241567]: INFO:__main__:Writing out command to execute
Oct 14 04:41:02 np0005486808 multipathd[241567]: ++ cat /run_command
Oct 14 04:41:02 np0005486808 multipathd[241567]: + CMD='/usr/sbin/multipathd -d'
Oct 14 04:41:02 np0005486808 multipathd[241567]: + ARGS=
Oct 14 04:41:02 np0005486808 multipathd[241567]: + sudo kolla_copy_cacerts
Oct 14 04:41:02 np0005486808 multipathd[241567]: + [[ ! -n '' ]]
Oct 14 04:41:02 np0005486808 multipathd[241567]: + . kolla_extend_start
Oct 14 04:41:02 np0005486808 multipathd[241567]: Running command: '/usr/sbin/multipathd -d'
Oct 14 04:41:02 np0005486808 multipathd[241567]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 14 04:41:02 np0005486808 multipathd[241567]: + umask 0022
Oct 14 04:41:02 np0005486808 multipathd[241567]: + exec /usr/sbin/multipathd -d
Oct 14 04:41:02 np0005486808 podman[241574]: 2025-10-14 08:41:02.212190527 +0000 UTC m=+0.066569369 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 14 04:41:02 np0005486808 multipathd[241567]: 5041.030415 | --------start up--------
Oct 14 04:41:02 np0005486808 multipathd[241567]: 5041.030438 | read /etc/multipath.conf
Oct 14 04:41:02 np0005486808 systemd[1]: 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19-9b638244670a583.service: Main process exited, code=exited, status=1/FAILURE
Oct 14 04:41:02 np0005486808 systemd[1]: 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19-9b638244670a583.service: Failed with result 'exit-code'.
Oct 14 04:41:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:41:02 np0005486808 multipathd[241567]: 5041.036330 | path checkers start up
Oct 14 04:41:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:41:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:41:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:41:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:41:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:41:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:41:02 np0005486808 python3.9[241758]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:41:03 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:41:03 np0005486808 python3.9[241910]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 14 04:41:04 np0005486808 python3.9[242062]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Oct 14 04:41:04 np0005486808 kernel: Key type psk registered
Oct 14 04:41:05 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:41:05 np0005486808 python3.9[242224]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:41:06 np0005486808 python3.9[242347]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760431264.8511996-918-67226821530842/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:41:06 np0005486808 python3.9[242499]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:41:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:41:06.998 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:41:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:41:06.999 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:41:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:41:06.999 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:41:07 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:41:07 np0005486808 podman[242501]: 2025-10-14 08:41:07.147247355 +0000 UTC m=+0.119753588 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=ovn_controller)
Oct 14 04:41:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:41:07 np0005486808 python3.9[242679]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 14 04:41:07 np0005486808 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 14 04:41:07 np0005486808 systemd[1]: Stopped Load Kernel Modules.
Oct 14 04:41:07 np0005486808 systemd[1]: Stopping Load Kernel Modules...
Oct 14 04:41:07 np0005486808 systemd[1]: Starting Load Kernel Modules...
Oct 14 04:41:07 np0005486808 systemd[1]: Finished Load Kernel Modules.
Oct 14 04:41:08 np0005486808 python3.9[242835]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 14 04:41:09 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:41:09 np0005486808 python3.9[242919]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 14 04:41:11 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:41:11 np0005486808 podman[242921]: 2025-10-14 08:41:11.669080394 +0000 UTC m=+0.068633520 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 14 04:41:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:41:13 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:41:15 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:41:17 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:41:17 np0005486808 systemd[1]: Reloading.
Oct 14 04:41:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:41:17 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:41:17 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:41:17 np0005486808 systemd[1]: Reloading.
Oct 14 04:41:17 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:41:17 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:41:17 np0005486808 systemd-logind[799]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 14 04:41:18 np0005486808 lvm[243053]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 14 04:41:18 np0005486808 lvm[243053]: VG ceph_vg0 finished
Oct 14 04:41:18 np0005486808 lvm[243055]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 14 04:41:18 np0005486808 lvm[243055]: VG ceph_vg2 finished
Oct 14 04:41:18 np0005486808 systemd-logind[799]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 14 04:41:18 np0005486808 lvm[243054]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 14 04:41:18 np0005486808 lvm[243054]: VG ceph_vg1 finished
Oct 14 04:41:18 np0005486808 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 14 04:41:18 np0005486808 systemd[1]: Starting man-db-cache-update.service...
Oct 14 04:41:18 np0005486808 systemd[1]: Reloading.
Oct 14 04:41:18 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:41:18 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:41:18 np0005486808 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 14 04:41:19 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:41:20 np0005486808 python3.9[244398]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.iscsid_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:41:20 np0005486808 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 14 04:41:20 np0005486808 systemd[1]: Finished man-db-cache-update.service.
Oct 14 04:41:20 np0005486808 systemd[1]: man-db-cache-update.service: Consumed 1.584s CPU time.
Oct 14 04:41:20 np0005486808 systemd[1]: run-rd5b2969d921841d18e37dcaa19c8839b.service: Deactivated successfully.
Oct 14 04:41:21 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:41:21 np0005486808 python3.9[244549]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 14 04:41:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:41:22 np0005486808 python3.9[244705]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:41:23 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:41:24 np0005486808 python3.9[244857]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 14 04:41:24 np0005486808 systemd[1]: Reloading.
Oct 14 04:41:24 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:41:24 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:41:25 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:41:25 np0005486808 python3.9[245042]: ansible-ansible.builtin.service_facts Invoked
Oct 14 04:41:25 np0005486808 network[245059]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 14 04:41:25 np0005486808 network[245060]: 'network-scripts' will be removed from distribution in near future.
Oct 14 04:41:25 np0005486808 network[245061]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 14 04:41:27 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:41:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:41:29 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:41:31 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:41:32 np0005486808 python3.9[245339]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:41:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:41:32 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Oct 14 04:41:32 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:41:32.242376) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 04:41:32 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Oct 14 04:41:32 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760431292242423, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1338, "num_deletes": 505, "total_data_size": 1655306, "memory_usage": 1684720, "flush_reason": "Manual Compaction"}
Oct 14 04:41:32 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Oct 14 04:41:32 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760431292272379, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1629155, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13412, "largest_seqno": 14749, "table_properties": {"data_size": 1623269, "index_size": 2770, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 14687, "raw_average_key_size": 17, "raw_value_size": 1609554, "raw_average_value_size": 1967, "num_data_blocks": 127, "num_entries": 818, "num_filter_entries": 818, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760431183, "oldest_key_time": 1760431183, "file_creation_time": 1760431292, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Oct 14 04:41:32 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 30064 microseconds, and 7602 cpu microseconds.
Oct 14 04:41:32 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 04:41:32 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:41:32.272437) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1629155 bytes OK
Oct 14 04:41:32 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:41:32.272462) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Oct 14 04:41:32 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:41:32.290776) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Oct 14 04:41:32 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:41:32.290803) EVENT_LOG_v1 {"time_micros": 1760431292290795, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 04:41:32 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:41:32.290827) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 04:41:32 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1648267, prev total WAL file size 1648267, number of live WAL files 2.
Oct 14 04:41:32 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 04:41:32 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:41:32.291876) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323531' seq:0, type:0; will stop at (end)
Oct 14 04:41:32 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 04:41:32 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1590KB)], [32(7320KB)]
Oct 14 04:41:32 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760431292291969, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 9125327, "oldest_snapshot_seqno": -1}
Oct 14 04:41:32 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3750 keys, 7191410 bytes, temperature: kUnknown
Oct 14 04:41:32 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760431292347610, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7191410, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7164476, "index_size": 16456, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9413, "raw_key_size": 91899, "raw_average_key_size": 24, "raw_value_size": 7094711, "raw_average_value_size": 1891, "num_data_blocks": 700, "num_entries": 3750, "num_filter_entries": 3750, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760431292, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 14 04:41:32 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 04:41:32 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:41:32.347873) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7191410 bytes
Oct 14 04:41:32 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:41:32.353758) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 163.8 rd, 129.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.1 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(10.0) write-amplify(4.4) OK, records in: 4773, records dropped: 1023 output_compression: NoCompression
Oct 14 04:41:32 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:41:32.353807) EVENT_LOG_v1 {"time_micros": 1760431292353789, "job": 14, "event": "compaction_finished", "compaction_time_micros": 55717, "compaction_time_cpu_micros": 34823, "output_level": 6, "num_output_files": 1, "total_output_size": 7191410, "num_input_records": 4773, "num_output_records": 3750, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 04:41:32 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 04:41:32 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760431292354455, "job": 14, "event": "table_file_deletion", "file_number": 34}
Oct 14 04:41:32 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 04:41:32 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760431292355853, "job": 14, "event": "table_file_deletion", "file_number": 32}
Oct 14 04:41:32 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:41:32.291764) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:41:32 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:41:32.355961) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:41:32 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:41:32.355967) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:41:32 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:41:32.355969) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:41:32 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:41:32.355971) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:41:32 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:41:32.355972) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:41:32 np0005486808 podman[245341]: 2025-10-14 08:41:32.361442344 +0000 UTC m=+0.128372091 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:41:32 np0005486808 podman[245384]: 2025-10-14 08:41:32.43683665 +0000 UTC m=+0.057981272 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 14 04:41:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_08:41:32
Oct 14 04:41:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 04:41:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 04:41:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['vms', 'volumes', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', 'images', 'cephfs.cephfs.data', '.mgr']
Oct 14 04:41:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 04:41:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:41:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:41:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:41:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:41:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:41:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:41:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 04:41:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:41:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 04:41:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:41:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:41:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:41:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:41:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:41:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:41:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:41:33 np0005486808 python3.9[245532]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:41:33 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:41:33 np0005486808 python3.9[245685]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:41:34 np0005486808 python3.9[245838]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:41:35 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:41:36 np0005486808 python3.9[245991]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:41:37 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:41:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:41:37 np0005486808 python3.9[246144]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:41:37 np0005486808 podman[246146]: 2025-10-14 08:41:37.566705582 +0000 UTC m=+0.089561827 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 14 04:41:38 np0005486808 python3.9[246322]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:41:39 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:41:39 np0005486808 python3.9[246475]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:41:40 np0005486808 python3.9[246628]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:41:40 np0005486808 python3.9[246780]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:41:41 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:41:41 np0005486808 python3.9[246932]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:41:42 np0005486808 podman[247056]: 2025-10-14 08:41:42.269898676 +0000 UTC m=+0.061734519 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 14 04:41:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:41:42 np0005486808 python3.9[247103]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:41:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 04:41:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:41:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 04:41:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:41:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:41:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:41:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:41:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:41:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:41:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:41:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:41:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:41:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 04:41:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:41:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:41:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:41:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 04:41:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:41:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 04:41:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:41:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:41:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:41:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 04:41:43 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:41:43 np0005486808 python3.9[247255]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:41:44 np0005486808 python3.9[247407]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:41:44 np0005486808 python3.9[247559]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:41:45 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:41:45 np0005486808 python3.9[247711]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:41:46 np0005486808 python3.9[247863]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:41:46 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 04:41:46 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 3290 writes, 14K keys, 3290 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 3290 writes, 3290 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1268 writes, 5764 keys, 1268 commit groups, 1.0 writes per commit group, ingest: 8.45 MB, 0.01 MB/s#012Interval WAL: 1268 writes, 1268 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    127.3      0.12              0.05         7    0.018       0      0       0.0       0.0#012  L6      1/0    6.86 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    196.4    161.3      0.25              0.15         6    0.042     24K   3200       0.0       0.0#012 Sum      1/0    6.86 MB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.6    131.9    150.2      0.38              0.20        13    0.029     24K   3200       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.7    151.6    153.4      0.23              0.11         8    0.028     17K   2469       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    196.4    161.3      0.25              0.15         6    0.042     24K   3200       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    134.8      0.12              0.05         6    0.019       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      7.3      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.015, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.06 GB write, 0.05 MB/s write, 0.05 GB read, 0.04 MB/s read, 0.4 seconds#012Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5646f3b2b1f0#2 capacity: 308.00 MB usage: 1.61 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 5.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(100,1.40 MB,0.453563%) FilterBlock(14,73.80 KB,0.0233985%) IndexBlock(14,149.30 KB,0.0473369%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct 14 04:41:47 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:41:47 np0005486808 python3.9[248015]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:41:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:41:48 np0005486808 python3.9[248167]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:41:48 np0005486808 python3.9[248319]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:41:49 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:41:49 np0005486808 python3.9[248471]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:41:50 np0005486808 python3.9[248623]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:41:50 np0005486808 python3.9[248775]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:41:51 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:41:51 np0005486808 python3.9[248927]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:41:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:41:52 np0005486808 python3.9[249079]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:41:53 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:41:53 np0005486808 python3.9[249231]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 14 04:41:54 np0005486808 python3.9[249383]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 14 04:41:54 np0005486808 systemd[1]: Reloading.
Oct 14 04:41:54 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:41:54 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:41:55 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:41:55 np0005486808 python3.9[249570]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:41:56 np0005486808 python3.9[249723]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:41:56 np0005486808 python3.9[249974]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:41:57 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:41:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:41:57 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:41:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 04:41:57 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:41:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 04:41:57 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:41:57 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 084d02d4-998a-4001-bf67-d033e0f6b8e7 does not exist
Oct 14 04:41:57 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev a9f49d30-1e39-42ac-8a91-887958794891 does not exist
Oct 14 04:41:57 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 61b7a8f3-6896-40ec-a6a5-9b65a77d3db6 does not exist
Oct 14 04:41:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 04:41:57 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 04:41:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 04:41:57 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:41:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:41:57 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:41:57 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:41:57 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:41:57 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:41:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:41:57 np0005486808 python3.9[250234]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:41:57 np0005486808 podman[250350]: 2025-10-14 08:41:57.727969616 +0000 UTC m=+0.042587623 container create 2b5066a765696c31f9d8d8cb47e6a6ee1bbc4fa3d3f6a0aea0a9c21cc717d0bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 14 04:41:57 np0005486808 systemd[1]: Started libpod-conmon-2b5066a765696c31f9d8d8cb47e6a6ee1bbc4fa3d3f6a0aea0a9c21cc717d0bc.scope.
Oct 14 04:41:57 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:41:57 np0005486808 podman[250350]: 2025-10-14 08:41:57.706581708 +0000 UTC m=+0.021199755 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:41:57 np0005486808 podman[250350]: 2025-10-14 08:41:57.812618127 +0000 UTC m=+0.127236144 container init 2b5066a765696c31f9d8d8cb47e6a6ee1bbc4fa3d3f6a0aea0a9c21cc717d0bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 14 04:41:57 np0005486808 podman[250350]: 2025-10-14 08:41:57.82043856 +0000 UTC m=+0.135056557 container start 2b5066a765696c31f9d8d8cb47e6a6ee1bbc4fa3d3f6a0aea0a9c21cc717d0bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Oct 14 04:41:57 np0005486808 podman[250350]: 2025-10-14 08:41:57.824614807 +0000 UTC m=+0.139232824 container attach 2b5066a765696c31f9d8d8cb47e6a6ee1bbc4fa3d3f6a0aea0a9c21cc717d0bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_carson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 14 04:41:57 np0005486808 nice_carson[250399]: 167 167
Oct 14 04:41:57 np0005486808 systemd[1]: libpod-2b5066a765696c31f9d8d8cb47e6a6ee1bbc4fa3d3f6a0aea0a9c21cc717d0bc.scope: Deactivated successfully.
Oct 14 04:41:57 np0005486808 podman[250350]: 2025-10-14 08:41:57.826620224 +0000 UTC m=+0.141238241 container died 2b5066a765696c31f9d8d8cb47e6a6ee1bbc4fa3d3f6a0aea0a9c21cc717d0bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 14 04:41:57 np0005486808 systemd[1]: var-lib-containers-storage-overlay-8b3f381e293320deaa2ec94010681e2d533252ff1719d46e146d427a433a7e48-merged.mount: Deactivated successfully.
Oct 14 04:41:57 np0005486808 podman[250350]: 2025-10-14 08:41:57.866314418 +0000 UTC m=+0.180932455 container remove 2b5066a765696c31f9d8d8cb47e6a6ee1bbc4fa3d3f6a0aea0a9c21cc717d0bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_carson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:41:57 np0005486808 systemd[1]: libpod-conmon-2b5066a765696c31f9d8d8cb47e6a6ee1bbc4fa3d3f6a0aea0a9c21cc717d0bc.scope: Deactivated successfully.
Oct 14 04:41:58 np0005486808 podman[250495]: 2025-10-14 08:41:58.027692677 +0000 UTC m=+0.037163036 container create 3c376d47f11a0691d2cf537f3df6515dafbf8370f0674c5d605f7d7615b9cb9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_beaver, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:41:58 np0005486808 systemd[1]: Started libpod-conmon-3c376d47f11a0691d2cf537f3df6515dafbf8370f0674c5d605f7d7615b9cb9b.scope.
Oct 14 04:41:58 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:41:58 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1ef274e450fbdbd2c158af51f7b6ccc54f317531c98fa85a8cf06c84afe2b30/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:41:58 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1ef274e450fbdbd2c158af51f7b6ccc54f317531c98fa85a8cf06c84afe2b30/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:41:58 np0005486808 podman[250495]: 2025-10-14 08:41:58.011568612 +0000 UTC m=+0.021038991 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:41:58 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1ef274e450fbdbd2c158af51f7b6ccc54f317531c98fa85a8cf06c84afe2b30/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:41:58 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1ef274e450fbdbd2c158af51f7b6ccc54f317531c98fa85a8cf06c84afe2b30/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:41:58 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1ef274e450fbdbd2c158af51f7b6ccc54f317531c98fa85a8cf06c84afe2b30/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:41:58 np0005486808 python3.9[250489]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:41:58 np0005486808 podman[250495]: 2025-10-14 08:41:58.127716127 +0000 UTC m=+0.137186506 container init 3c376d47f11a0691d2cf537f3df6515dafbf8370f0674c5d605f7d7615b9cb9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 14 04:41:58 np0005486808 podman[250495]: 2025-10-14 08:41:58.13342635 +0000 UTC m=+0.142896699 container start 3c376d47f11a0691d2cf537f3df6515dafbf8370f0674c5d605f7d7615b9cb9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_beaver, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 14 04:41:58 np0005486808 podman[250495]: 2025-10-14 08:41:58.136668236 +0000 UTC m=+0.146138615 container attach 3c376d47f11a0691d2cf537f3df6515dafbf8370f0674c5d605f7d7615b9cb9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_beaver, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:41:58 np0005486808 python3.9[250668]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:41:59 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:41:59 np0005486808 blissful_beaver[250511]: --> passed data devices: 0 physical, 3 LVM
Oct 14 04:41:59 np0005486808 blissful_beaver[250511]: --> relative data size: 1.0
Oct 14 04:41:59 np0005486808 blissful_beaver[250511]: --> All data devices are unavailable
Oct 14 04:41:59 np0005486808 podman[250495]: 2025-10-14 08:41:59.271169212 +0000 UTC m=+1.280639571 container died 3c376d47f11a0691d2cf537f3df6515dafbf8370f0674c5d605f7d7615b9cb9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_beaver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 14 04:41:59 np0005486808 systemd[1]: libpod-3c376d47f11a0691d2cf537f3df6515dafbf8370f0674c5d605f7d7615b9cb9b.scope: Deactivated successfully.
Oct 14 04:41:59 np0005486808 systemd[1]: libpod-3c376d47f11a0691d2cf537f3df6515dafbf8370f0674c5d605f7d7615b9cb9b.scope: Consumed 1.057s CPU time.
Oct 14 04:41:59 np0005486808 systemd[1]: var-lib-containers-storage-overlay-d1ef274e450fbdbd2c158af51f7b6ccc54f317531c98fa85a8cf06c84afe2b30-merged.mount: Deactivated successfully.
Oct 14 04:41:59 np0005486808 podman[250495]: 2025-10-14 08:41:59.352996038 +0000 UTC m=+1.362466397 container remove 3c376d47f11a0691d2cf537f3df6515dafbf8370f0674c5d605f7d7615b9cb9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_beaver, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 14 04:41:59 np0005486808 systemd[1]: libpod-conmon-3c376d47f11a0691d2cf537f3df6515dafbf8370f0674c5d605f7d7615b9cb9b.scope: Deactivated successfully.
Oct 14 04:41:59 np0005486808 python3.9[250855]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:42:00 np0005486808 podman[251118]: 2025-10-14 08:42:00.060654671 +0000 UTC m=+0.048352967 container create 8ad159e552611ccf5455e0e625256df4676e3ee931bfbf537502a204300c57bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:42:00 np0005486808 systemd[1]: Started libpod-conmon-8ad159e552611ccf5455e0e625256df4676e3ee931bfbf537502a204300c57bf.scope.
Oct 14 04:42:00 np0005486808 podman[251118]: 2025-10-14 08:42:00.040238256 +0000 UTC m=+0.027936542 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:42:00 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:42:00 np0005486808 podman[251118]: 2025-10-14 08:42:00.168424012 +0000 UTC m=+0.156122348 container init 8ad159e552611ccf5455e0e625256df4676e3ee931bfbf537502a204300c57bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 14 04:42:00 np0005486808 podman[251118]: 2025-10-14 08:42:00.183026982 +0000 UTC m=+0.170725248 container start 8ad159e552611ccf5455e0e625256df4676e3ee931bfbf537502a204300c57bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 14 04:42:00 np0005486808 podman[251118]: 2025-10-14 08:42:00.187000834 +0000 UTC m=+0.174699190 container attach 8ad159e552611ccf5455e0e625256df4676e3ee931bfbf537502a204300c57bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_satoshi, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 04:42:00 np0005486808 silly_satoshi[251162]: 167 167
Oct 14 04:42:00 np0005486808 systemd[1]: libpod-8ad159e552611ccf5455e0e625256df4676e3ee931bfbf537502a204300c57bf.scope: Deactivated successfully.
Oct 14 04:42:00 np0005486808 conmon[251162]: conmon 8ad159e552611ccf5455 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8ad159e552611ccf5455e0e625256df4676e3ee931bfbf537502a204300c57bf.scope/container/memory.events
Oct 14 04:42:00 np0005486808 podman[251118]: 2025-10-14 08:42:00.19410179 +0000 UTC m=+0.181800066 container died 8ad159e552611ccf5455e0e625256df4676e3ee931bfbf537502a204300c57bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_satoshi, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:42:00 np0005486808 systemd[1]: var-lib-containers-storage-overlay-93159b236fc47215cbe67d6da6c9dc9f019a9e2a0bcf3470eb9b53d2ef113a40-merged.mount: Deactivated successfully.
Oct 14 04:42:00 np0005486808 podman[251118]: 2025-10-14 08:42:00.232651678 +0000 UTC m=+0.220349924 container remove 8ad159e552611ccf5455e0e625256df4676e3ee931bfbf537502a204300c57bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_satoshi, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:42:00 np0005486808 systemd[1]: libpod-conmon-8ad159e552611ccf5455e0e625256df4676e3ee931bfbf537502a204300c57bf.scope: Deactivated successfully.
Oct 14 04:42:00 np0005486808 python3.9[251166]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 14 04:42:00 np0005486808 podman[251189]: 2025-10-14 08:42:00.449493649 +0000 UTC m=+0.061384301 container create 40be13c2bfe3ab0bdf2118121fa1fdb2599f426f5a14c4e6a1816cadd62b3d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:42:00 np0005486808 systemd[1]: Started libpod-conmon-40be13c2bfe3ab0bdf2118121fa1fdb2599f426f5a14c4e6a1816cadd62b3d1d.scope.
Oct 14 04:42:00 np0005486808 podman[251189]: 2025-10-14 08:42:00.427744452 +0000 UTC m=+0.039635104 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:42:00 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:42:00 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c51d9003a8934e276a2a2bf5bbd2130083a09edcc789adb0818dc9cf24ed161/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:42:00 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c51d9003a8934e276a2a2bf5bbd2130083a09edcc789adb0818dc9cf24ed161/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:42:00 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c51d9003a8934e276a2a2bf5bbd2130083a09edcc789adb0818dc9cf24ed161/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:42:00 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c51d9003a8934e276a2a2bf5bbd2130083a09edcc789adb0818dc9cf24ed161/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:42:00 np0005486808 podman[251189]: 2025-10-14 08:42:00.562626464 +0000 UTC m=+0.174517146 container init 40be13c2bfe3ab0bdf2118121fa1fdb2599f426f5a14c4e6a1816cadd62b3d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lichterman, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:42:00 np0005486808 podman[251189]: 2025-10-14 08:42:00.569465993 +0000 UTC m=+0.181356625 container start 40be13c2bfe3ab0bdf2118121fa1fdb2599f426f5a14c4e6a1816cadd62b3d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lichterman, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 14 04:42:00 np0005486808 podman[251189]: 2025-10-14 08:42:00.572719169 +0000 UTC m=+0.184609811 container attach 40be13c2bfe3ab0bdf2118121fa1fdb2599f426f5a14c4e6a1816cadd62b3d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lichterman, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:42:01 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]: {
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:    "0": [
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:        {
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:            "devices": [
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:                "/dev/loop3"
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:            ],
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:            "lv_name": "ceph_lv0",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:            "lv_size": "21470642176",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:            "name": "ceph_lv0",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:            "tags": {
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:                "ceph.cluster_name": "ceph",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:                "ceph.crush_device_class": "",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:                "ceph.encrypted": "0",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:                "ceph.osd_id": "0",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:                "ceph.type": "block",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:                "ceph.vdo": "0"
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:            },
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:            "type": "block",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:            "vg_name": "ceph_vg0"
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:        }
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:    ],
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:    "1": [
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:        {
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:            "devices": [
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:                "/dev/loop4"
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:            ],
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:            "lv_name": "ceph_lv1",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:            "lv_size": "21470642176",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:            "name": "ceph_lv1",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:            "tags": {
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:                "ceph.cluster_name": "ceph",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:                "ceph.crush_device_class": "",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:                "ceph.encrypted": "0",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:                "ceph.osd_id": "1",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:                "ceph.type": "block",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:                "ceph.vdo": "0"
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:            },
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:            "type": "block",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:            "vg_name": "ceph_vg1"
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:        }
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:    ],
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:    "2": [
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:        {
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:            "devices": [
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:                "/dev/loop5"
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:            ],
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:            "lv_name": "ceph_lv2",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:            "lv_size": "21470642176",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:            "name": "ceph_lv2",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:            "tags": {
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:                "ceph.cluster_name": "ceph",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:                "ceph.crush_device_class": "",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:                "ceph.encrypted": "0",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:                "ceph.osd_id": "2",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:                "ceph.type": "block",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:                "ceph.vdo": "0"
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:            },
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:            "type": "block",
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:            "vg_name": "ceph_vg2"
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:        }
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]:    ]
Oct 14 04:42:01 np0005486808 bold_lichterman[251230]: }
Oct 14 04:42:01 np0005486808 systemd[1]: libpod-40be13c2bfe3ab0bdf2118121fa1fdb2599f426f5a14c4e6a1816cadd62b3d1d.scope: Deactivated successfully.
Oct 14 04:42:01 np0005486808 podman[251260]: 2025-10-14 08:42:01.336312226 +0000 UTC m=+0.025495545 container died 40be13c2bfe3ab0bdf2118121fa1fdb2599f426f5a14c4e6a1816cadd62b3d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lichterman, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 14 04:42:01 np0005486808 systemd[1]: var-lib-containers-storage-overlay-8c51d9003a8934e276a2a2bf5bbd2130083a09edcc789adb0818dc9cf24ed161-merged.mount: Deactivated successfully.
Oct 14 04:42:01 np0005486808 podman[251260]: 2025-10-14 08:42:01.387690203 +0000 UTC m=+0.076873512 container remove 40be13c2bfe3ab0bdf2118121fa1fdb2599f426f5a14c4e6a1816cadd62b3d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 14 04:42:01 np0005486808 systemd[1]: libpod-conmon-40be13c2bfe3ab0bdf2118121fa1fdb2599f426f5a14c4e6a1816cadd62b3d1d.scope: Deactivated successfully.
Oct 14 04:42:01 np0005486808 python3.9[251455]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:42:02 np0005486808 podman[251564]: 2025-10-14 08:42:02.047101313 +0000 UTC m=+0.045796188 container create c2ee4dfc72e774f20c141fb0bf88ebf83ed1a5cfaaf8d250b9bc581d7c978b5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_panini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:42:02 np0005486808 systemd[1]: Started libpod-conmon-c2ee4dfc72e774f20c141fb0bf88ebf83ed1a5cfaaf8d250b9bc581d7c978b5e.scope.
Oct 14 04:42:02 np0005486808 podman[251564]: 2025-10-14 08:42:02.026491493 +0000 UTC m=+0.025186448 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:42:02 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:42:02 np0005486808 podman[251564]: 2025-10-14 08:42:02.148857113 +0000 UTC m=+0.147552008 container init c2ee4dfc72e774f20c141fb0bf88ebf83ed1a5cfaaf8d250b9bc581d7c978b5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_panini, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:42:02 np0005486808 podman[251564]: 2025-10-14 08:42:02.156830909 +0000 UTC m=+0.155525794 container start c2ee4dfc72e774f20c141fb0bf88ebf83ed1a5cfaaf8d250b9bc581d7c978b5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 14 04:42:02 np0005486808 podman[251564]: 2025-10-14 08:42:02.16074591 +0000 UTC m=+0.159440805 container attach c2ee4dfc72e774f20c141fb0bf88ebf83ed1a5cfaaf8d250b9bc581d7c978b5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:42:02 np0005486808 zealous_panini[251615]: 167 167
Oct 14 04:42:02 np0005486808 systemd[1]: libpod-c2ee4dfc72e774f20c141fb0bf88ebf83ed1a5cfaaf8d250b9bc581d7c978b5e.scope: Deactivated successfully.
Oct 14 04:42:02 np0005486808 podman[251564]: 2025-10-14 08:42:02.16459875 +0000 UTC m=+0.163293635 container died c2ee4dfc72e774f20c141fb0bf88ebf83ed1a5cfaaf8d250b9bc581d7c978b5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:42:02 np0005486808 systemd[1]: var-lib-containers-storage-overlay-0fac69b655be0b01a90abd4796449387e452f8a479e3951b8d5718ea97c2e692-merged.mount: Deactivated successfully.
Oct 14 04:42:02 np0005486808 podman[251564]: 2025-10-14 08:42:02.203703851 +0000 UTC m=+0.202398726 container remove c2ee4dfc72e774f20c141fb0bf88ebf83ed1a5cfaaf8d250b9bc581d7c978b5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:42:02 np0005486808 systemd[1]: libpod-conmon-c2ee4dfc72e774f20c141fb0bf88ebf83ed1a5cfaaf8d250b9bc581d7c978b5e.scope: Deactivated successfully.
Oct 14 04:42:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:42:02 np0005486808 podman[251706]: 2025-10-14 08:42:02.360394581 +0000 UTC m=+0.054761797 container create 1861b6f8a21be55b6f2af4cffce4422d6b394d97a053324d5a97bfb653c52330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_williams, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 14 04:42:02 np0005486808 systemd[1]: Started libpod-conmon-1861b6f8a21be55b6f2af4cffce4422d6b394d97a053324d5a97bfb653c52330.scope.
Oct 14 04:42:02 np0005486808 podman[251706]: 2025-10-14 08:42:02.330240108 +0000 UTC m=+0.024607314 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:42:02 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:42:02 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9053a7db7f97e283c1ee6d9bb98969269bebbda778c3427b19cd35361814086b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:42:02 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9053a7db7f97e283c1ee6d9bb98969269bebbda778c3427b19cd35361814086b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:42:02 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9053a7db7f97e283c1ee6d9bb98969269bebbda778c3427b19cd35361814086b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:42:02 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9053a7db7f97e283c1ee6d9bb98969269bebbda778c3427b19cd35361814086b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:42:02 np0005486808 podman[251706]: 2025-10-14 08:42:02.488265349 +0000 UTC m=+0.182632555 container init 1861b6f8a21be55b6f2af4cffce4422d6b394d97a053324d5a97bfb653c52330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_williams, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:42:02 np0005486808 podman[251706]: 2025-10-14 08:42:02.499581363 +0000 UTC m=+0.193948549 container start 1861b6f8a21be55b6f2af4cffce4422d6b394d97a053324d5a97bfb653c52330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_williams, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 14 04:42:02 np0005486808 podman[251706]: 2025-10-14 08:42:02.503734899 +0000 UTC m=+0.198102075 container attach 1861b6f8a21be55b6f2af4cffce4422d6b394d97a053324d5a97bfb653c52330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_williams, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 04:42:02 np0005486808 python3.9[251715]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:42:02 np0005486808 podman[251726]: 2025-10-14 08:42:02.550636052 +0000 UTC m=+0.115016730 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 14 04:42:02 np0005486808 podman[251736]: 2025-10-14 08:42:02.553317844 +0000 UTC m=+0.078755665 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd)
Oct 14 04:42:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:42:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:42:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:42:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:42:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:42:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:42:03 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:42:03 np0005486808 python3.9[251920]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:42:03 np0005486808 nice_williams[251727]: {
Oct 14 04:42:03 np0005486808 nice_williams[251727]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 04:42:03 np0005486808 nice_williams[251727]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:42:03 np0005486808 nice_williams[251727]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 04:42:03 np0005486808 nice_williams[251727]:        "osd_id": 2,
Oct 14 04:42:03 np0005486808 nice_williams[251727]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:42:03 np0005486808 nice_williams[251727]:        "type": "bluestore"
Oct 14 04:42:03 np0005486808 nice_williams[251727]:    },
Oct 14 04:42:03 np0005486808 nice_williams[251727]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 04:42:03 np0005486808 nice_williams[251727]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:42:03 np0005486808 nice_williams[251727]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 04:42:03 np0005486808 nice_williams[251727]:        "osd_id": 1,
Oct 14 04:42:03 np0005486808 nice_williams[251727]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:42:03 np0005486808 nice_williams[251727]:        "type": "bluestore"
Oct 14 04:42:03 np0005486808 nice_williams[251727]:    },
Oct 14 04:42:03 np0005486808 nice_williams[251727]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 04:42:03 np0005486808 nice_williams[251727]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:42:03 np0005486808 nice_williams[251727]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 04:42:03 np0005486808 nice_williams[251727]:        "osd_id": 0,
Oct 14 04:42:03 np0005486808 nice_williams[251727]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:42:03 np0005486808 nice_williams[251727]:        "type": "bluestore"
Oct 14 04:42:03 np0005486808 nice_williams[251727]:    }
Oct 14 04:42:03 np0005486808 nice_williams[251727]: }
Oct 14 04:42:03 np0005486808 systemd[1]: libpod-1861b6f8a21be55b6f2af4cffce4422d6b394d97a053324d5a97bfb653c52330.scope: Deactivated successfully.
Oct 14 04:42:03 np0005486808 podman[251706]: 2025-10-14 08:42:03.474706776 +0000 UTC m=+1.169073942 container died 1861b6f8a21be55b6f2af4cffce4422d6b394d97a053324d5a97bfb653c52330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 14 04:42:03 np0005486808 systemd[1]: var-lib-containers-storage-overlay-9053a7db7f97e283c1ee6d9bb98969269bebbda778c3427b19cd35361814086b-merged.mount: Deactivated successfully.
Oct 14 04:42:03 np0005486808 podman[251706]: 2025-10-14 08:42:03.529097223 +0000 UTC m=+1.223464389 container remove 1861b6f8a21be55b6f2af4cffce4422d6b394d97a053324d5a97bfb653c52330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_williams, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:42:03 np0005486808 systemd[1]: libpod-conmon-1861b6f8a21be55b6f2af4cffce4422d6b394d97a053324d5a97bfb653c52330.scope: Deactivated successfully.
Oct 14 04:42:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:42:03 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:42:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:42:03 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:42:03 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev a97b0ebc-10f4-4dcf-9496-7fa5020caecc does not exist
Oct 14 04:42:03 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev f842c273-27d7-4249-9f2b-d7508e886265 does not exist
Oct 14 04:42:03 np0005486808 python3.9[252161]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:42:04 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:42:04 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:42:04 np0005486808 python3.9[252313]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:42:05 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:42:05 np0005486808 python3.9[252465]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:42:06 np0005486808 python3.9[252617]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:42:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:42:06.999 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:42:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:42:07.000 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:42:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:42:07.000 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:42:07 np0005486808 python3.9[252769]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:42:07 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:42:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:42:07 np0005486808 podman[252921]: 2025-10-14 08:42:07.756687768 +0000 UTC m=+0.145473370 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 04:42:07 np0005486808 python3.9[252922]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:42:08 np0005486808 python3.9[253099]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:42:09 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:42:09 np0005486808 python3.9[253251]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:42:10 np0005486808 python3.9[253403]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:42:11 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:42:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:42:12 np0005486808 podman[253428]: 2025-10-14 08:42:12.68820431 +0000 UTC m=+0.091353949 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:42:13 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:42:15 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:42:16 np0005486808 python3.9[253574]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Oct 14 04:42:17 np0005486808 python3.9[253727]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 14 04:42:17 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:42:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:42:18 np0005486808 python3.9[253885]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 14 04:42:19 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:42:19 np0005486808 systemd-logind[799]: New session 53 of user zuul.
Oct 14 04:42:19 np0005486808 systemd[1]: Started Session 53 of User zuul.
Oct 14 04:42:19 np0005486808 systemd[1]: session-53.scope: Deactivated successfully.
Oct 14 04:42:19 np0005486808 systemd-logind[799]: Session 53 logged out. Waiting for processes to exit.
Oct 14 04:42:19 np0005486808 systemd-logind[799]: Removed session 53.
Oct 14 04:42:20 np0005486808 python3.9[254071]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:42:20 np0005486808 python3.9[254192]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760431339.6236262-1555-123391885362367/.source.json follow=False _original_basename=config.json.j2 checksum=2c2474b5f24ef7c9ed37f49680082593e0d1100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:42:21 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:42:21 np0005486808 python3.9[254342]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:42:22 np0005486808 python3.9[254418]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:42:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:42:22 np0005486808 python3.9[254568]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:42:23 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:42:23 np0005486808 python3.9[254689]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760431342.1945248-1555-266933552396758/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:42:23 np0005486808 python3.9[254839]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:42:24 np0005486808 python3.9[254960]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760431343.4006073-1555-256859154261521/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:42:25 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:42:25 np0005486808 python3.9[255110]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:42:26 np0005486808 python3.9[255231]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760431344.755593-1555-88582323457001/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:42:26 np0005486808 python3.9[255383]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:42:27 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:42:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:42:27 np0005486808 python3.9[255535]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:42:28 np0005486808 python3.9[255687]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:42:29 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:42:29 np0005486808 python3.9[255839]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:42:29 np0005486808 python3.9[255962]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1760431348.6639411-1648-132546660035728/.source _original_basename=.79ju4ssy follow=False checksum=43be58c893c1d185d145d89f6355a78c59da8289 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Oct 14 04:42:30 np0005486808 python3.9[256114]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:42:31 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:42:31 np0005486808 python3.9[256266]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:42:32 np0005486808 python3.9[256387]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760431350.8919086-1674-221267807641504/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=18905a67cb924eff0de6c55a7795830d7bacdcb8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:42:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:42:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_08:42:32
Oct 14 04:42:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 04:42:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 04:42:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'images', 'vms', 'volumes', 'backups']
Oct 14 04:42:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 04:42:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:42:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:42:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:42:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:42:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:42:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:42:32 np0005486808 python3.9[256537]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 14 04:42:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 04:42:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:42:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 04:42:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:42:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:42:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:42:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:42:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:42:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:42:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:42:33 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:42:33 np0005486808 podman[256633]: 2025-10-14 08:42:33.242341246 +0000 UTC m=+0.082123994 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 14 04:42:33 np0005486808 podman[256632]: 2025-10-14 08:42:33.254802766 +0000 UTC m=+0.094519103 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 14 04:42:33 np0005486808 python3.9[256685]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760431352.262668-1689-8487806193948/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=5631457f7cd64d03e2b037b3eca93b1f5534ab5f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 14 04:42:34 np0005486808 python3.9[256847]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Oct 14 04:42:35 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:42:35 np0005486808 python3.9[256999]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 14 04:42:36 np0005486808 python3[257151]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Oct 14 04:42:37 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:42:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:42:39 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:42:41 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:42:41 np0005486808 podman[257204]: 2025-10-14 08:42:41.565485769 +0000 UTC m=+3.254792375 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 04:42:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:42:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 04:42:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:42:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 04:42:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:42:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:42:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:42:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:42:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:42:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:42:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:42:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:42:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:42:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 04:42:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:42:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:42:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:42:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 04:42:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:42:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 04:42:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:42:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:42:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:42:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 04:42:43 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:42:45 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:42:45 np0005486808 podman[257246]: 2025-10-14 08:42:45.366854396 +0000 UTC m=+1.787707433 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:42:45 np0005486808 podman[257165]: 2025-10-14 08:42:45.495342909 +0000 UTC m=+9.296469887 image pull 95311272d2962a6b8537a6d19b94bc44c5c3621a6e21a2e983fd64d147646bc9 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:33c58faa12b90b6009f89c9c60baeadc1323b62dcb141619a7a11c3c10903560
Oct 14 04:42:45 np0005486808 podman[257288]: 2025-10-14 08:42:45.646227653 +0000 UTC m=+0.056132308 container create 9804a85f415aca658c6c025262c662a4ad4375798ad24bf721a44a502f2b1711 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:33c58faa12b90b6009f89c9c60baeadc1323b62dcb141619a7a11c3c10903560, name=nova_compute_init, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, container_name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:33c58faa12b90b6009f89c9c60baeadc1323b62dcb141619a7a11c3c10903560', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 14 04:42:45 np0005486808 podman[257288]: 2025-10-14 08:42:45.613077011 +0000 UTC m=+0.022981696 image pull 95311272d2962a6b8537a6d19b94bc44c5c3621a6e21a2e983fd64d147646bc9 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:33c58faa12b90b6009f89c9c60baeadc1323b62dcb141619a7a11c3c10903560
Oct 14 04:42:45 np0005486808 python3[257151]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:33c58faa12b90b6009f89c9c60baeadc1323b62dcb141619a7a11c3c10903560', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:33c58faa12b90b6009f89c9c60baeadc1323b62dcb141619a7a11c3c10903560 bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Oct 14 04:42:46 np0005486808 python3.9[257479]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:42:47 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:42:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:42:47 np0005486808 python3.9[257633]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Oct 14 04:42:48 np0005486808 python3.9[257785]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 14 04:42:49 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:42:49 np0005486808 python3[257937]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Oct 14 04:42:49 np0005486808 podman[257972]: 2025-10-14 08:42:49.561700548 +0000 UTC m=+0.068026815 container create 4b79e6d50d94e414985f39b868e461ab0b06189ac5c7fddd72fd5dd69820e133 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:33c58faa12b90b6009f89c9c60baeadc1323b62dcb141619a7a11c3c10903560, name=nova_compute, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:33c58faa12b90b6009f89c9c60baeadc1323b62dcb141619a7a11c3c10903560', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 04:42:49 np0005486808 podman[257972]: 2025-10-14 08:42:49.52358748 +0000 UTC m=+0.029913797 image pull 95311272d2962a6b8537a6d19b94bc44c5c3621a6e21a2e983fd64d147646bc9 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:33c58faa12b90b6009f89c9c60baeadc1323b62dcb141619a7a11c3c10903560
Oct 14 04:42:49 np0005486808 python3[257937]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:33c58faa12b90b6009f89c9c60baeadc1323b62dcb141619a7a11c3c10903560', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:33c58faa12b90b6009f89c9c60baeadc1323b62dcb141619a7a11c3c10903560 kolla_start
Oct 14 04:42:50 np0005486808 python3.9[258163]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:42:51 np0005486808 python3.9[258317]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:42:51 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:42:52 np0005486808 python3.9[258468]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760431371.2057972-1781-13712488524168/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 14 04:42:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:42:52 np0005486808 python3.9[258544]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 14 04:42:52 np0005486808 systemd[1]: Reloading.
Oct 14 04:42:52 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:42:52 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:42:53 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:42:53 np0005486808 python3.9[258656]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 14 04:42:54 np0005486808 systemd[1]: Reloading.
Oct 14 04:42:55 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 04:42:55 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 04:42:55 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:42:55 np0005486808 systemd[1]: Starting nova_compute container...
Oct 14 04:42:55 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:42:55 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f8169e0ebb67cea7f464a7673054819207834f9a7de8afa344f71babd5ac3b9/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 14 04:42:55 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f8169e0ebb67cea7f464a7673054819207834f9a7de8afa344f71babd5ac3b9/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 14 04:42:55 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f8169e0ebb67cea7f464a7673054819207834f9a7de8afa344f71babd5ac3b9/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 14 04:42:55 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f8169e0ebb67cea7f464a7673054819207834f9a7de8afa344f71babd5ac3b9/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 14 04:42:55 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f8169e0ebb67cea7f464a7673054819207834f9a7de8afa344f71babd5ac3b9/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 14 04:42:55 np0005486808 podman[258696]: 2025-10-14 08:42:55.450520398 +0000 UTC m=+0.136300676 container init 4b79e6d50d94e414985f39b868e461ab0b06189ac5c7fddd72fd5dd69820e133 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:33c58faa12b90b6009f89c9c60baeadc1323b62dcb141619a7a11c3c10903560, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:33c58faa12b90b6009f89c9c60baeadc1323b62dcb141619a7a11c3c10903560', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=edpm)
Oct 14 04:42:55 np0005486808 podman[258696]: 2025-10-14 08:42:55.465184469 +0000 UTC m=+0.150964737 container start 4b79e6d50d94e414985f39b868e461ab0b06189ac5c7fddd72fd5dd69820e133 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:33c58faa12b90b6009f89c9c60baeadc1323b62dcb141619a7a11c3c10903560, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:33c58faa12b90b6009f89c9c60baeadc1323b62dcb141619a7a11c3c10903560', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=edpm, container_name=nova_compute)
Oct 14 04:42:55 np0005486808 podman[258696]: nova_compute
Oct 14 04:42:55 np0005486808 nova_compute[258711]: + sudo -E kolla_set_configs
Oct 14 04:42:55 np0005486808 systemd[1]: Started nova_compute container.
Oct 14 04:42:55 np0005486808 nova_compute[258711]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 14 04:42:55 np0005486808 nova_compute[258711]: INFO:__main__:Validating config file
Oct 14 04:42:55 np0005486808 nova_compute[258711]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 14 04:42:55 np0005486808 nova_compute[258711]: INFO:__main__:Copying service configuration files
Oct 14 04:42:55 np0005486808 nova_compute[258711]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 14 04:42:55 np0005486808 nova_compute[258711]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 14 04:42:55 np0005486808 nova_compute[258711]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 14 04:42:55 np0005486808 nova_compute[258711]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 14 04:42:55 np0005486808 nova_compute[258711]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 14 04:42:55 np0005486808 nova_compute[258711]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 14 04:42:55 np0005486808 nova_compute[258711]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 14 04:42:55 np0005486808 nova_compute[258711]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 14 04:42:55 np0005486808 nova_compute[258711]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 14 04:42:55 np0005486808 nova_compute[258711]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 14 04:42:55 np0005486808 nova_compute[258711]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 14 04:42:55 np0005486808 nova_compute[258711]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 14 04:42:55 np0005486808 nova_compute[258711]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 14 04:42:55 np0005486808 nova_compute[258711]: INFO:__main__:Deleting /etc/ceph
Oct 14 04:42:55 np0005486808 nova_compute[258711]: INFO:__main__:Creating directory /etc/ceph
Oct 14 04:42:55 np0005486808 nova_compute[258711]: INFO:__main__:Setting permission for /etc/ceph
Oct 14 04:42:55 np0005486808 nova_compute[258711]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct 14 04:42:55 np0005486808 nova_compute[258711]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 14 04:42:55 np0005486808 nova_compute[258711]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct 14 04:42:55 np0005486808 nova_compute[258711]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 14 04:42:55 np0005486808 nova_compute[258711]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 14 04:42:55 np0005486808 nova_compute[258711]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 14 04:42:55 np0005486808 nova_compute[258711]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 14 04:42:55 np0005486808 nova_compute[258711]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 14 04:42:55 np0005486808 nova_compute[258711]: INFO:__main__:Writing out command to execute
Oct 14 04:42:55 np0005486808 nova_compute[258711]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 14 04:42:55 np0005486808 nova_compute[258711]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 14 04:42:55 np0005486808 nova_compute[258711]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 14 04:42:55 np0005486808 nova_compute[258711]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 14 04:42:55 np0005486808 nova_compute[258711]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 14 04:42:55 np0005486808 nova_compute[258711]: ++ cat /run_command
Oct 14 04:42:55 np0005486808 nova_compute[258711]: + CMD=nova-compute
Oct 14 04:42:55 np0005486808 nova_compute[258711]: + ARGS=
Oct 14 04:42:55 np0005486808 nova_compute[258711]: + sudo kolla_copy_cacerts
Oct 14 04:42:55 np0005486808 nova_compute[258711]: + [[ ! -n '' ]]
Oct 14 04:42:55 np0005486808 nova_compute[258711]: + . kolla_extend_start
Oct 14 04:42:55 np0005486808 nova_compute[258711]: + echo 'Running command: '\''nova-compute'\'''
Oct 14 04:42:55 np0005486808 nova_compute[258711]: Running command: 'nova-compute'
Oct 14 04:42:55 np0005486808 nova_compute[258711]: + umask 0022
Oct 14 04:42:55 np0005486808 nova_compute[258711]: + exec nova-compute
Oct 14 04:42:56 np0005486808 python3.9[258872]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:42:57 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:42:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:42:57 np0005486808 python3.9[259023]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:42:57 np0005486808 nova_compute[258711]: 2025-10-14 08:42:57.605 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct 14 04:42:57 np0005486808 nova_compute[258711]: 2025-10-14 08:42:57.605 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct 14 04:42:57 np0005486808 nova_compute[258711]: 2025-10-14 08:42:57.605 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct 14 04:42:57 np0005486808 nova_compute[258711]: 2025-10-14 08:42:57.605 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Oct 14 04:42:57 np0005486808 nova_compute[258711]: 2025-10-14 08:42:57.726 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:42:57 np0005486808 nova_compute[258711]: 2025-10-14 08:42:57.760 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:42:58 np0005486808 python3.9[259177]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.459 2 INFO nova.virt.driver [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.614 2 INFO nova.compute.provider_config [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.630 2 DEBUG oslo_concurrency.lockutils [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.630 2 DEBUG oslo_concurrency.lockutils [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.630 2 DEBUG oslo_concurrency.lockutils [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.631 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.631 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.631 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.631 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.631 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.631 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.631 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.632 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.632 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.632 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.632 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.632 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.632 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.633 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.633 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.633 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.633 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.633 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.633 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.634 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.634 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.634 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.634 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.634 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.634 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.634 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.635 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.635 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.635 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.635 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.635 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.635 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.636 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.636 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.636 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.636 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.636 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.636 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.636 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.637 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.637 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.637 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.637 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.637 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.637 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.637 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.638 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.638 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.638 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.638 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.638 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.638 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.639 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.639 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.639 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.639 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.639 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.639 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.639 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.640 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.640 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.640 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.640 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.640 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.640 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.640 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.641 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.641 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.641 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.641 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.641 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.641 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.641 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.642 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.642 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.642 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.642 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.642 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.642 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.642 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.643 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.643 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.643 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.643 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.643 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.643 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.643 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.644 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.644 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.644 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.644 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.644 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.644 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.644 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.645 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.645 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.645 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.645 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.645 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.645 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.645 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.646 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.646 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.646 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.646 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.646 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.646 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.646 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.647 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.647 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.647 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.647 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.647 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.647 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.648 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.648 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.648 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.648 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.648 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.648 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.648 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.649 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.649 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.649 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.649 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.649 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.649 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.649 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.650 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.650 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.650 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.650 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.650 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.650 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.650 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.651 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.651 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.651 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.651 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.651 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.651 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.651 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.652 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.652 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.652 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.652 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.652 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.652 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.652 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.653 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.653 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.653 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.653 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.653 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.654 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.654 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.654 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.654 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.654 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.654 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.655 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.655 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.655 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.655 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.655 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.655 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.655 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.656 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.656 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.656 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.656 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.656 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.656 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.656 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.657 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.657 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.657 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.657 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.657 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.657 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.658 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.658 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.658 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.658 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.658 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.659 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.659 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.659 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.659 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.659 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.659 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.659 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.660 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.660 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.660 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.660 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.660 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.660 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.660 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.661 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.661 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.661 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.661 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.661 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.661 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.661 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.662 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.662 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.662 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.662 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.662 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.662 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.662 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.663 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.663 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.663 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.663 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.663 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.663 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.663 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.664 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.664 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.664 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.664 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.664 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.664 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.665 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.665 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.665 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.665 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.665 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.665 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.666 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.666 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.666 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.666 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.666 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.666 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.667 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.667 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.667 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.667 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.667 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.667 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.668 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.668 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.668 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.668 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.668 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.668 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.668 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.669 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.669 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.669 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.669 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.669 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.669 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.669 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.670 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.670 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.670 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.670 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.670 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.670 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.671 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.671 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.671 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.671 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.671 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.671 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.671 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.672 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.672 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.672 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.672 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.672 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.673 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.673 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.673 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.673 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.673 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.673 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.674 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.674 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.674 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.674 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.674 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.674 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.674 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.675 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.675 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.675 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.675 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.675 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.675 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.675 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.676 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.676 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.676 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.676 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.676 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.676 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.676 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.677 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.677 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.677 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.677 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.677 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.677 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.677 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.678 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.678 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.678 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.678 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.678 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.678 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.678 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.679 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.679 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.679 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.679 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.679 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.679 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.680 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.680 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.680 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.680 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.681 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.681 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.681 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.681 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.681 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.681 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.681 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.682 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.682 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.682 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.682 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.682 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.682 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.682 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.683 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.683 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.683 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.683 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.683 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.683 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.684 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.684 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.684 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.684 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.684 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.684 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.684 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.685 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.685 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.685 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.685 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.685 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.685 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.686 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.686 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.686 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.686 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.686 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.686 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.687 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.687 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.687 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.687 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.687 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.687 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.687 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.688 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.688 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.688 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.688 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.688 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.688 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.688 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.689 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.689 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.689 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.689 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.689 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.690 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.690 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.690 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.691 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.691 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.691 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.691 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.691 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.691 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.692 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.692 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.692 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.692 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.692 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.692 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.692 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.693 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.693 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.693 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.693 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.693 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.693 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.693 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.694 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.694 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.694 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.694 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.694 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.694 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.694 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.694 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.695 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.695 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.695 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.695 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.695 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.695 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.695 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.696 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.696 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.696 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.696 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.696 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.696 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.696 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.697 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.697 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.697 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.697 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.697 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.697 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.697 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.698 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.698 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.698 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.698 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.698 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.698 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.698 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.699 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.699 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.699 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.699 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.699 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.699 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.699 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.699 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.700 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.700 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.700 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.700 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.700 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.701 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.701 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.701 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.701 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.701 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.701 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.701 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.702 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.702 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.702 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.702 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.702 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.702 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.702 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.703 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.703 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.703 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.703 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.703 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.703 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.703 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.704 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.704 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.704 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.704 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.704 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.704 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.704 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.705 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.705 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.705 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.705 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.705 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.705 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.706 2 WARNING oslo_config.cfg [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 14 04:42:58 np0005486808 nova_compute[258711]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 14 04:42:58 np0005486808 nova_compute[258711]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct 14 04:42:58 np0005486808 nova_compute[258711]: and ``live_migration_inbound_addr`` respectively.
Oct 14 04:42:58 np0005486808 nova_compute[258711]: ).  Its value may be silently ignored in the future.#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.706 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.706 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.706 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.707 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.707 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.707 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.707 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.707 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.707 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.708 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.708 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.708 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.708 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.708 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.708 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.708 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.709 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.709 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.709 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.rbd_secret_uuid        = c49aadb6-9b04-5cb1-8f5f-4c91676c568e log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.709 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.709 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.709 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.709 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.710 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.710 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.710 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.710 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.710 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.710 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.711 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.711 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.711 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.711 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.711 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.711 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.711 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.712 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.712 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.712 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.712 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.712 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.712 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.712 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.713 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.713 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.713 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.713 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.713 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.713 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.713 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.714 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.714 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.714 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.714 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.714 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.714 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.715 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.715 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.715 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.715 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.715 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.715 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.715 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.716 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.716 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.716 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.716 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.716 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.717 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.717 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.717 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.717 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.717 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.717 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.717 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.718 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.718 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.718 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.718 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.718 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.719 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.719 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.719 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.719 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.719 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.720 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.720 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.720 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.720 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.720 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.720 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.721 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.721 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.721 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.721 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.721 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.721 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.722 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.722 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.722 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.722 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.722 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.722 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.723 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.723 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.723 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.723 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.723 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.723 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.723 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.724 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.724 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.724 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.724 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.724 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.724 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.724 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.725 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.725 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.725 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.725 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.725 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.725 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.726 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.726 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.726 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.726 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.726 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.726 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.726 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.727 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.727 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.727 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.727 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.727 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.727 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.727 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.728 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.728 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.728 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.728 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.728 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.728 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.729 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.729 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.729 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.729 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.729 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.729 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.730 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.730 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.730 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.730 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.730 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.731 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.731 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.731 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.731 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.731 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.731 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.731 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.732 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.732 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.732 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.732 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.732 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.732 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.732 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.732 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.733 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.733 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.733 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.733 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.733 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.733 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.733 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.734 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.734 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.734 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.734 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.734 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.734 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.734 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.735 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.735 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.735 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.735 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.735 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.735 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.735 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.736 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.736 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.736 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.736 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.736 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.736 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.736 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.737 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.737 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.737 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.737 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.737 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.737 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.738 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.738 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.738 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.738 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.738 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.738 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.738 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.739 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.739 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.739 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.739 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.739 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.739 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.739 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.740 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.740 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.740 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.740 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.740 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.740 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.740 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.741 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.741 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.741 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.741 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.741 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.741 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.741 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.742 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.742 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.742 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.742 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.742 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.742 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.742 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.743 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.743 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.743 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.743 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.743 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.743 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.744 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.744 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.744 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.744 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.744 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.744 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.745 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.745 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.745 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.745 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.745 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.745 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.746 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.746 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.746 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.746 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.746 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.746 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.746 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.747 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.747 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.747 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.747 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.747 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.747 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.747 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.747 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.748 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.748 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.748 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.748 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.748 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.748 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.749 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.749 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.749 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.749 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.749 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.749 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.749 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.749 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.750 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.750 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.750 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.750 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.750 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.750 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.751 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.751 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.751 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.751 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.751 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.751 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.751 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.752 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.752 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.752 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.752 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.752 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.752 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.752 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.753 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.753 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.753 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.753 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.753 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.753 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.753 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.754 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.754 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.754 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.754 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.754 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.754 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.755 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.755 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.755 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.755 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.755 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.755 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.755 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.756 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.756 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.756 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.756 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.756 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.756 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.757 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.757 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.757 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.757 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.757 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.757 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.757 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.758 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.758 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.758 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.758 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.758 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.758 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.758 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.759 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.759 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.759 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.759 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.759 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.759 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.759 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.760 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.760 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.760 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.760 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.760 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.760 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.760 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.761 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.761 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.761 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.761 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.761 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.761 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.761 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.762 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.762 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.762 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.762 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.762 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.762 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.762 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.763 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.763 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.763 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.763 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.763 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.763 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.763 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.763 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.764 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.764 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.764 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.764 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.764 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.764 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.764 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.765 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.765 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.765 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.765 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.765 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.765 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.765 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.766 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.766 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.766 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.766 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.766 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.766 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.767 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.767 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.767 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.767 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.767 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.767 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.767 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.768 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.768 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.768 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.768 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.768 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.768 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.768 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.769 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.769 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.769 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.769 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.769 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.769 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.769 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.769 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.770 2 DEBUG oslo_service.service [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.771 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.790 2 DEBUG nova.virt.libvirt.host [None req-8cc66412-aa9c-40d5-b5b4-024329f8e4d4 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.791 2 DEBUG nova.virt.libvirt.host [None req-8cc66412-aa9c-40d5-b5b4-024329f8e4d4 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.791 2 DEBUG nova.virt.libvirt.host [None req-8cc66412-aa9c-40d5-b5b4-024329f8e4d4 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.791 2 DEBUG nova.virt.libvirt.host [None req-8cc66412-aa9c-40d5-b5b4-024329f8e4d4 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Oct 14 04:42:58 np0005486808 systemd[1]: Starting libvirt QEMU daemon...
Oct 14 04:42:58 np0005486808 systemd[1]: Started libvirt QEMU daemon.
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.862 2 DEBUG nova.virt.libvirt.host [None req-8cc66412-aa9c-40d5-b5b4-024329f8e4d4 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fda098d3850> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.865 2 DEBUG nova.virt.libvirt.host [None req-8cc66412-aa9c-40d5-b5b4-024329f8e4d4 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fda098d3850> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.866 2 INFO nova.virt.libvirt.driver [None req-8cc66412-aa9c-40d5-b5b4-024329f8e4d4 - - - - - -] Connection event '1' reason 'None'#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.890 2 WARNING nova.virt.libvirt.driver [None req-8cc66412-aa9c-40d5-b5b4-024329f8e4d4 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Oct 14 04:42:58 np0005486808 nova_compute[258711]: 2025-10-14 08:42:58.890 2 DEBUG nova.virt.libvirt.volume.mount [None req-8cc66412-aa9c-40d5-b5b4-024329f8e4d4 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Oct 14 04:42:58 np0005486808 python3.9[259329]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None 
preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 14 04:42:59 np0005486808 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 04:42:59 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:42:59 np0005486808 nova_compute[258711]: 2025-10-14 08:42:59.895 2 INFO nova.virt.libvirt.host [None req-8cc66412-aa9c-40d5-b5b4-024329f8e4d4 - - - - - -] Libvirt host capabilities <capabilities>
Oct 14 04:42:59 np0005486808 nova_compute[258711]: 
Oct 14 04:42:59 np0005486808 nova_compute[258711]:  <host>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:    <uuid>1a1d621e-d701-42a8-a9a3-2d332c90e100</uuid>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:    <cpu>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <arch>x86_64</arch>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model>EPYC-Rome-v4</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <vendor>AMD</vendor>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <microcode version='16777317'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <signature family='23' model='49' stepping='0'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <maxphysaddr mode='emulate' bits='40'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature name='x2apic'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature name='tsc-deadline'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature name='osxsave'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature name='hypervisor'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature name='tsc_adjust'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature name='spec-ctrl'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature name='stibp'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature name='arch-capabilities'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature name='ssbd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature name='cmp_legacy'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature name='topoext'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature name='virt-ssbd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature name='lbrv'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature name='tsc-scale'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature name='vmcb-clean'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature name='pause-filter'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature name='pfthreshold'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature name='svme-addr-chk'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature name='rdctl-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature name='skip-l1dfl-vmentry'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature name='mds-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature name='pschange-mc-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <pages unit='KiB' size='4'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <pages unit='KiB' size='2048'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <pages unit='KiB' size='1048576'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:    </cpu>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:    <power_management>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <suspend_mem/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:    </power_management>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:    <iommu support='no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:    <migration_features>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <live/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <uri_transports>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <uri_transport>tcp</uri_transport>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <uri_transport>rdma</uri_transport>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </uri_transports>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:    </migration_features>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:    <topology>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <cells num='1'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <cell id='0'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:          <memory unit='KiB'>7864356</memory>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:          <pages unit='KiB' size='4'>1966089</pages>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:          <pages unit='KiB' size='2048'>0</pages>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:          <pages unit='KiB' size='1048576'>0</pages>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:          <distances>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:            <sibling id='0' value='10'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:          </distances>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:          <cpus num='8'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:          </cpus>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        </cell>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </cells>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:    </topology>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:    <cache>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:    </cache>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:    <secmodel>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model>selinux</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <doi>0</doi>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:    </secmodel>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:    <secmodel>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model>dac</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <doi>0</doi>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <baselabel type='kvm'>+107:+107</baselabel>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <baselabel type='qemu'>+107:+107</baselabel>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:    </secmodel>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:  </host>
Oct 14 04:42:59 np0005486808 nova_compute[258711]: 
Oct 14 04:42:59 np0005486808 nova_compute[258711]:  <guest>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:    <os_type>hvm</os_type>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:    <arch name='i686'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <wordsize>32</wordsize>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <domain type='qemu'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <domain type='kvm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:    </arch>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:    <features>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <pae/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <nonpae/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <acpi default='on' toggle='yes'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <apic default='on' toggle='no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <cpuselection/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <deviceboot/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <disksnapshot default='on' toggle='no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <externalSnapshot/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:    </features>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:  </guest>
Oct 14 04:42:59 np0005486808 nova_compute[258711]: 
Oct 14 04:42:59 np0005486808 nova_compute[258711]:  <guest>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:    <os_type>hvm</os_type>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:    <arch name='x86_64'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <wordsize>64</wordsize>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <domain type='qemu'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <domain type='kvm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:    </arch>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:    <features>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <acpi default='on' toggle='yes'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <apic default='on' toggle='no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <cpuselection/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <deviceboot/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <disksnapshot default='on' toggle='no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <externalSnapshot/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:    </features>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:  </guest>
Oct 14 04:42:59 np0005486808 nova_compute[258711]: 
Oct 14 04:42:59 np0005486808 nova_compute[258711]: </capabilities>
Oct 14 04:42:59 np0005486808 nova_compute[258711]: #033[00m
Oct 14 04:42:59 np0005486808 nova_compute[258711]: 2025-10-14 08:42:59.907 2 DEBUG nova.virt.libvirt.host [None req-8cc66412-aa9c-40d5-b5b4-024329f8e4d4 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Oct 14 04:42:59 np0005486808 nova_compute[258711]: 2025-10-14 08:42:59.935 2 DEBUG nova.virt.libvirt.host [None req-8cc66412-aa9c-40d5-b5b4-024329f8e4d4 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct 14 04:42:59 np0005486808 nova_compute[258711]: <domainCapabilities>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:  <path>/usr/libexec/qemu-kvm</path>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:  <domain>kvm</domain>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:  <arch>i686</arch>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:  <vcpu max='240'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:  <iothreads supported='yes'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:  <os supported='yes'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:    <enum name='firmware'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:    <loader supported='yes'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <enum name='type'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <value>rom</value>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <value>pflash</value>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <enum name='readonly'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <value>yes</value>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <value>no</value>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <enum name='secure'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <value>no</value>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:    </loader>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:  </os>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:  <cpu>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:    <mode name='host-passthrough' supported='yes'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <enum name='hostPassthroughMigratable'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <value>on</value>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <value>off</value>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:    </mode>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:    <mode name='maximum' supported='yes'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <enum name='maximumMigratable'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <value>on</value>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <value>off</value>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:    </mode>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:    <mode name='host-model' supported='yes'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model fallback='forbid'>EPYC-Rome</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <vendor>AMD</vendor>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature policy='require' name='x2apic'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature policy='require' name='tsc-deadline'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature policy='require' name='hypervisor'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature policy='require' name='tsc_adjust'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature policy='require' name='spec-ctrl'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature policy='require' name='stibp'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature policy='require' name='arch-capabilities'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature policy='require' name='ssbd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature policy='require' name='cmp_legacy'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature policy='require' name='overflow-recov'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature policy='require' name='succor'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature policy='require' name='ibrs'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature policy='require' name='amd-ssbd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature policy='require' name='virt-ssbd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature policy='require' name='lbrv'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature policy='require' name='tsc-scale'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature policy='require' name='vmcb-clean'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature policy='require' name='flushbyasid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature policy='require' name='pause-filter'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature policy='require' name='pfthreshold'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature policy='require' name='svme-addr-chk'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature policy='require' name='lfence-always-serializing'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature policy='require' name='rdctl-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature policy='require' name='mds-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature policy='require' name='pschange-mc-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature policy='require' name='gds-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature policy='require' name='rfds-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <feature policy='disable' name='xsaves'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:    </mode>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:    <mode name='custom' supported='yes'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Broadwell'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Broadwell-IBRS'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Broadwell-noTSX'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Broadwell-v1'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Broadwell-v2'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Broadwell-v3'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Broadwell-v4'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Cascadelake-Server'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Cascadelake-Server-noTSX'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Cascadelake-Server-v1'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Cascadelake-Server-v2'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Cascadelake-Server-v3'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Cascadelake-Server-v4'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Cascadelake-Server-v5'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Cooperlake'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Cooperlake-v1'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Cooperlake-v2'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Denverton'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='mpx'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Denverton-v1'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='mpx'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Denverton-v2'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Denverton-v3'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Dhyana-v2'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='EPYC-Genoa'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='amd-psfd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='auto-ibrs'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='no-nested-data-bp'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='null-sel-clr-base'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='stibp-always-on'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='EPYC-Genoa-v1'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='amd-psfd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='auto-ibrs'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='no-nested-data-bp'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='null-sel-clr-base'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='stibp-always-on'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='EPYC-Milan'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='EPYC-Milan-v1'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='EPYC-Milan-v2'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='amd-psfd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='no-nested-data-bp'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='null-sel-clr-base'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='stibp-always-on'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='EPYC-Rome'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='EPYC-Rome-v1'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='EPYC-Rome-v2'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='EPYC-Rome-v3'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='EPYC-v3'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='EPYC-v4'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='GraniteRapids'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='amx-bf16'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='amx-fp16'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='amx-int8'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='amx-tile'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx-vnni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512-fp16'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='bus-lock-detect'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fbsdp-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fsrc'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fsrs'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fzrm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='mcdt-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pbrsb-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='prefetchiti'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='psdp-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='serialize'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='tsx-ldtrk'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='xfd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='GraniteRapids-v1'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='amx-bf16'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='amx-fp16'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='amx-int8'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='amx-tile'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx-vnni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512-fp16'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='bus-lock-detect'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fbsdp-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fsrc'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fsrs'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fzrm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='mcdt-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pbrsb-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='prefetchiti'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='psdp-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='serialize'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='tsx-ldtrk'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='xfd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='GraniteRapids-v2'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='amx-bf16'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='amx-fp16'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='amx-int8'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='amx-tile'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx-vnni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx10'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx10-128'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx10-256'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx10-512'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512-fp16'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='bus-lock-detect'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='cldemote'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fbsdp-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fsrc'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fsrs'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fzrm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='mcdt-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='movdir64b'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='movdiri'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pbrsb-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='prefetchiti'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='psdp-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='serialize'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='ss'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='tsx-ldtrk'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='xfd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Haswell'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Haswell-IBRS'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Haswell-noTSX'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Haswell-noTSX-IBRS'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Haswell-v1'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Haswell-v2'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Haswell-v3'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Haswell-v4'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Icelake-Server'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Icelake-Server-noTSX'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Icelake-Server-v1'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Icelake-Server-v2'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Icelake-Server-v3'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Icelake-Server-v4'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Icelake-Server-v5'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Icelake-Server-v6'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Icelake-Server-v7'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='IvyBridge'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='IvyBridge-IBRS'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='IvyBridge-v1'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='IvyBridge-v2'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='KnightsMill'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512-4fmaps'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512-4vnniw'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512er'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512pf'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='ss'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='KnightsMill-v1'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512-4fmaps'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512-4vnniw'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512er'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512pf'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='ss'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Opteron_G4'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fma4'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='xop'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Opteron_G4-v1'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fma4'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='xop'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Opteron_G5'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fma4'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='tbm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='xop'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Opteron_G5-v1'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fma4'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='tbm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='xop'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='SapphireRapids'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='amx-bf16'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='amx-int8'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='amx-tile'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx-vnni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512-fp16'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='bus-lock-detect'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fsrc'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fsrs'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fzrm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='serialize'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='tsx-ldtrk'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='xfd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='SapphireRapids-v1'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='amx-bf16'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='amx-int8'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='amx-tile'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx-vnni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512-fp16'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='bus-lock-detect'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fsrc'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fsrs'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fzrm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='serialize'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='tsx-ldtrk'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='xfd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='SapphireRapids-v2'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='amx-bf16'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='amx-int8'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='amx-tile'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx-vnni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512-fp16'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='bus-lock-detect'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fbsdp-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fsrc'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fsrs'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fzrm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='psdp-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='serialize'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='tsx-ldtrk'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='xfd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='SapphireRapids-v3'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='amx-bf16'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='amx-int8'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='amx-tile'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx-vnni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512-fp16'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='bus-lock-detect'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='cldemote'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fbsdp-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fsrc'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fsrs'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fzrm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='movdir64b'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='movdiri'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='psdp-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='serialize'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='ss'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='tsx-ldtrk'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='xfd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='SierraForest'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx-ifma'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx-ne-convert'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx-vnni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx-vnni-int8'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='bus-lock-detect'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='cmpccxadd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fbsdp-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fsrs'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='mcdt-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pbrsb-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='psdp-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='serialize'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='SierraForest-v1'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx-ifma'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx-ne-convert'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx-vnni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx-vnni-int8'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='bus-lock-detect'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='cmpccxadd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fbsdp-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='fsrs'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='mcdt-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pbrsb-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='psdp-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='serialize'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Client'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Client-IBRS'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Client-v1'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Client-v2'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Client-v3'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Client-v4'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Server'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Server-IBRS'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Server-v1'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Server-v2'>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:42:59 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Server-v3'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Server-v4'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Server-v5'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Snowridge'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='cldemote'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='core-capability'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdir64b'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdiri'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='mpx'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='split-lock-detect'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Snowridge-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='cldemote'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='core-capability'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdir64b'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdiri'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='mpx'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='split-lock-detect'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Snowridge-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='cldemote'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='core-capability'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdir64b'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdiri'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='split-lock-detect'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Snowridge-v3'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='cldemote'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='core-capability'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdir64b'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdiri'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='split-lock-detect'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Snowridge-v4'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='cldemote'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdir64b'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdiri'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='athlon'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='3dnow'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='3dnowext'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='athlon-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='3dnow'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='3dnowext'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='core2duo'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ss'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='core2duo-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ss'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='coreduo'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ss'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='coreduo-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ss'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='n270'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ss'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='n270-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ss'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='phenom'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='3dnow'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='3dnowext'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='phenom-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='3dnow'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='3dnowext'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </mode>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  </cpu>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  <memoryBacking supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <enum name='sourceType'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <value>file</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <value>anonymous</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <value>memfd</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  </memoryBacking>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  <devices>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <disk supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='diskDevice'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>disk</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>cdrom</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>floppy</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>lun</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='bus'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>ide</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>fdc</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>scsi</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>virtio</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>usb</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>sata</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='model'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>virtio</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>virtio-transitional</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>virtio-non-transitional</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </disk>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <graphics supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='type'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>vnc</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>egl-headless</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>dbus</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </graphics>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <video supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='modelType'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>vga</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>cirrus</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>virtio</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>none</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>bochs</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>ramfb</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </video>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <hostdev supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='mode'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>subsystem</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='startupPolicy'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>default</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>mandatory</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>requisite</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>optional</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='subsysType'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>usb</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>pci</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>scsi</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='capsType'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='pciBackend'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </hostdev>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <rng supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='model'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>virtio</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>virtio-transitional</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>virtio-non-transitional</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='backendModel'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>random</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>egd</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>builtin</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </rng>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <filesystem supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='driverType'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>path</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>handle</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>virtiofs</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </filesystem>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <tpm supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='model'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>tpm-tis</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>tpm-crb</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='backendModel'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>emulator</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>external</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='backendVersion'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>2.0</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </tpm>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <redirdev supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='bus'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>usb</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </redirdev>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <channel supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='type'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>pty</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>unix</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </channel>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <crypto supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='model'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='type'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>qemu</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='backendModel'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>builtin</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </crypto>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <interface supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='backendType'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>default</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>passt</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </interface>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <panic supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='model'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>isa</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>hyperv</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </panic>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  </devices>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  <features>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <gic supported='no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <vmcoreinfo supported='yes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <genid supported='yes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <backingStoreInput supported='yes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <backup supported='yes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <async-teardown supported='yes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <ps2 supported='yes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <sev supported='no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <sgx supported='no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <hyperv supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='features'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>relaxed</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>vapic</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>spinlocks</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>vpindex</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>runtime</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>synic</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>stimer</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>reset</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>vendor_id</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>frequencies</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>reenlightenment</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>tlbflush</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>ipi</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>avic</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>emsr_bitmap</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>xmm_input</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </hyperv>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <launchSecurity supported='no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  </features>
Oct 14 04:43:00 np0005486808 nova_compute[258711]: </domainCapabilities>
Oct 14 04:43:00 np0005486808 nova_compute[258711]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct 14 04:43:00 np0005486808 nova_compute[258711]: 2025-10-14 08:42:59.945 2 DEBUG nova.virt.libvirt.host [None req-8cc66412-aa9c-40d5-b5b4-024329f8e4d4 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct 14 04:43:00 np0005486808 nova_compute[258711]: <domainCapabilities>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  <path>/usr/libexec/qemu-kvm</path>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  <domain>kvm</domain>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  <machine>pc-q35-rhel9.6.0</machine>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  <arch>i686</arch>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  <vcpu max='4096'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  <iothreads supported='yes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  <os supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <enum name='firmware'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <loader supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='type'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>rom</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>pflash</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='readonly'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>yes</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>no</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='secure'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>no</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </loader>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  </os>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  <cpu>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <mode name='host-passthrough' supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='hostPassthroughMigratable'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>on</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>off</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </mode>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <mode name='maximum' supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='maximumMigratable'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>on</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>off</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </mode>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <mode name='host-model' supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model fallback='forbid'>EPYC-Rome</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <vendor>AMD</vendor>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='x2apic'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='tsc-deadline'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='hypervisor'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='tsc_adjust'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='spec-ctrl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='stibp'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='arch-capabilities'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='ssbd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='cmp_legacy'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='overflow-recov'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='succor'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='ibrs'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='amd-ssbd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='virt-ssbd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='lbrv'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='tsc-scale'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='vmcb-clean'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='flushbyasid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='pause-filter'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='pfthreshold'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='svme-addr-chk'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='lfence-always-serializing'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='rdctl-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='mds-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='pschange-mc-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='gds-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='rfds-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='disable' name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </mode>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <mode name='custom' supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Broadwell'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Broadwell-IBRS'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Broadwell-noTSX'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Broadwell-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Broadwell-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Broadwell-v3'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Broadwell-v4'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Cascadelake-Server'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Cascadelake-Server-noTSX'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Cascadelake-Server-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Cascadelake-Server-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Cascadelake-Server-v3'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Cascadelake-Server-v4'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Cascadelake-Server-v5'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Cooperlake'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Cooperlake-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Cooperlake-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Denverton'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='mpx'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Denverton-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='mpx'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Denverton-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Denverton-v3'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Dhyana-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='EPYC-Genoa'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amd-psfd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='auto-ibrs'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='no-nested-data-bp'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='null-sel-clr-base'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='stibp-always-on'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='EPYC-Genoa-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amd-psfd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='auto-ibrs'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='no-nested-data-bp'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='null-sel-clr-base'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='stibp-always-on'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='EPYC-Milan'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='EPYC-Milan-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='EPYC-Milan-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amd-psfd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='no-nested-data-bp'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='null-sel-clr-base'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='stibp-always-on'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='EPYC-Rome'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='EPYC-Rome-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='EPYC-Rome-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='EPYC-Rome-v3'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='EPYC-v3'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='EPYC-v4'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='GraniteRapids'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-fp16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-int8'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-tile'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx-vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-fp16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fbsdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrc'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrs'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fzrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='mcdt-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pbrsb-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='prefetchiti'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='psdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='serialize'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xfd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='GraniteRapids-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-fp16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-int8'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-tile'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx-vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-fp16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fbsdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrc'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrs'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fzrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='mcdt-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pbrsb-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='prefetchiti'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='psdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='serialize'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xfd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='GraniteRapids-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-fp16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-int8'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-tile'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx-vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx10'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx10-128'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx10-256'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx10-512'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-fp16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='cldemote'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fbsdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrc'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrs'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fzrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='mcdt-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdir64b'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdiri'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pbrsb-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='prefetchiti'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='psdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='serialize'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ss'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xfd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Haswell'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Haswell-IBRS'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Haswell-noTSX'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Haswell-noTSX-IBRS'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Haswell-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Haswell-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Haswell-v3'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Haswell-v4'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Icelake-Server'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Icelake-Server-noTSX'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Icelake-Server-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Icelake-Server-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Icelake-Server-v3'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Icelake-Server-v4'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Icelake-Server-v5'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Icelake-Server-v6'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Icelake-Server-v7'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='IvyBridge'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='IvyBridge-IBRS'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='IvyBridge-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='IvyBridge-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='KnightsMill'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-4fmaps'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-4vnniw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512er'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512pf'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ss'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='KnightsMill-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-4fmaps'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-4vnniw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512er'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512pf'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ss'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Opteron_G4'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fma4'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xop'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Opteron_G4-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fma4'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xop'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Opteron_G5'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fma4'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='tbm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xop'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Opteron_G5-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fma4'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='tbm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xop'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='SapphireRapids'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-int8'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-tile'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx-vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-fp16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrc'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrs'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fzrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='serialize'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xfd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='SapphireRapids-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-int8'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-tile'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx-vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-fp16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrc'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrs'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fzrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='serialize'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xfd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='SapphireRapids-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-int8'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-tile'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx-vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-fp16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fbsdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrc'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrs'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fzrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='psdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='serialize'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xfd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='SapphireRapids-v3'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-int8'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-tile'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx-vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-fp16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='cldemote'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fbsdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrc'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrs'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fzrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdir64b'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdiri'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='psdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='serialize'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ss'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xfd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='SierraForest'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx-ifma'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx-ne-convert'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx-vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx-vnni-int8'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='cmpccxadd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fbsdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrs'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='mcdt-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pbrsb-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='psdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='serialize'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='SierraForest-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx-ifma'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx-ne-convert'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx-vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx-vnni-int8'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='cmpccxadd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fbsdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrs'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='mcdt-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pbrsb-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='psdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='serialize'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Client'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Client-IBRS'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Client-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Client-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Client-v3'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Client-v4'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Server'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Server-IBRS'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Server-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Server-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Server-v3'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Server-v4'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Server-v5'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Snowridge'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='cldemote'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='core-capability'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdir64b'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdiri'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='mpx'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='split-lock-detect'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Snowridge-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='cldemote'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='core-capability'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdir64b'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdiri'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='mpx'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='split-lock-detect'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Snowridge-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='cldemote'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='core-capability'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdir64b'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdiri'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='split-lock-detect'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Snowridge-v3'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='cldemote'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='core-capability'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdir64b'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdiri'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='split-lock-detect'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Snowridge-v4'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='cldemote'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdir64b'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdiri'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='athlon'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='3dnow'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='3dnowext'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='athlon-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='3dnow'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='3dnowext'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='core2duo'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ss'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='core2duo-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ss'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='coreduo'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ss'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='coreduo-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ss'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='n270'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ss'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='n270-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ss'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='phenom'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='3dnow'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='3dnowext'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='phenom-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='3dnow'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='3dnowext'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </mode>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  </cpu>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  <memoryBacking supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <enum name='sourceType'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <value>file</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <value>anonymous</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <value>memfd</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  </memoryBacking>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  <devices>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <disk supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='diskDevice'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>disk</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>cdrom</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>floppy</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>lun</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='bus'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>fdc</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>scsi</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>virtio</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>usb</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>sata</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='model'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>virtio</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>virtio-transitional</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>virtio-non-transitional</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </disk>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <graphics supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='type'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>vnc</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>egl-headless</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>dbus</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </graphics>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <video supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='modelType'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>vga</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>cirrus</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>virtio</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>none</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>bochs</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>ramfb</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </video>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <hostdev supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='mode'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>subsystem</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='startupPolicy'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>default</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>mandatory</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>requisite</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>optional</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='subsysType'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>usb</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>pci</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>scsi</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='capsType'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='pciBackend'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </hostdev>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <rng supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='model'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>virtio</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>virtio-transitional</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>virtio-non-transitional</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='backendModel'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>random</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>egd</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>builtin</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </rng>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <filesystem supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='driverType'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>path</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>handle</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>virtiofs</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </filesystem>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <tpm supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='model'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>tpm-tis</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>tpm-crb</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='backendModel'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>emulator</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>external</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='backendVersion'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>2.0</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </tpm>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <redirdev supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='bus'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>usb</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </redirdev>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <channel supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='type'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>pty</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>unix</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </channel>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <crypto supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='model'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='type'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>qemu</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='backendModel'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>builtin</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </crypto>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <interface supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='backendType'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>default</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>passt</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </interface>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <panic supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='model'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>isa</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>hyperv</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </panic>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  </devices>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  <features>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <gic supported='no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <vmcoreinfo supported='yes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <genid supported='yes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <backingStoreInput supported='yes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <backup supported='yes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <async-teardown supported='yes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <ps2 supported='yes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <sev supported='no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <sgx supported='no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <hyperv supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='features'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>relaxed</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>vapic</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>spinlocks</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>vpindex</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>runtime</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>synic</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>stimer</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>reset</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>vendor_id</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>frequencies</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>reenlightenment</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>tlbflush</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>ipi</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>avic</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>emsr_bitmap</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>xmm_input</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </hyperv>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <launchSecurity supported='no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  </features>
Oct 14 04:43:00 np0005486808 nova_compute[258711]: </domainCapabilities>
Oct 14 04:43:00 np0005486808 nova_compute[258711]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 14 04:43:00 np0005486808 nova_compute[258711]: 2025-10-14 08:42:59.978 2 DEBUG nova.virt.libvirt.host [None req-8cc66412-aa9c-40d5-b5b4-024329f8e4d4 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 14 04:43:00 np0005486808 nova_compute[258711]: 2025-10-14 08:42:59.982 2 DEBUG nova.virt.libvirt.host [None req-8cc66412-aa9c-40d5-b5b4-024329f8e4d4 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct 14 04:43:00 np0005486808 nova_compute[258711]: <domainCapabilities>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  <path>/usr/libexec/qemu-kvm</path>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  <domain>kvm</domain>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  <arch>x86_64</arch>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  <vcpu max='240'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  <iothreads supported='yes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  <os supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <enum name='firmware'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <loader supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='type'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>rom</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>pflash</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='readonly'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>yes</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>no</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='secure'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>no</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </loader>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  </os>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  <cpu>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <mode name='host-passthrough' supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='hostPassthroughMigratable'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>on</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>off</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </mode>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <mode name='maximum' supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='maximumMigratable'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>on</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>off</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </mode>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <mode name='host-model' supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model fallback='forbid'>EPYC-Rome</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <vendor>AMD</vendor>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='x2apic'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='tsc-deadline'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='hypervisor'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='tsc_adjust'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='spec-ctrl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='stibp'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='arch-capabilities'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='ssbd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='cmp_legacy'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='overflow-recov'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='succor'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='ibrs'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='amd-ssbd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='virt-ssbd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='lbrv'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='tsc-scale'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='vmcb-clean'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='flushbyasid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='pause-filter'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='pfthreshold'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='svme-addr-chk'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='lfence-always-serializing'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='rdctl-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='mds-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='pschange-mc-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='gds-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='rfds-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='disable' name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </mode>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <mode name='custom' supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Broadwell'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Broadwell-IBRS'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Broadwell-noTSX'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Broadwell-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Broadwell-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Broadwell-v3'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Broadwell-v4'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Cascadelake-Server'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Cascadelake-Server-noTSX'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Cascadelake-Server-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Cascadelake-Server-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Cascadelake-Server-v3'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Cascadelake-Server-v4'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Cascadelake-Server-v5'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Cooperlake'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Cooperlake-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Cooperlake-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Denverton'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='mpx'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Denverton-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='mpx'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Denverton-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Denverton-v3'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Dhyana-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='EPYC-Genoa'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amd-psfd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='auto-ibrs'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='no-nested-data-bp'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='null-sel-clr-base'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='stibp-always-on'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='EPYC-Genoa-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amd-psfd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='auto-ibrs'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='no-nested-data-bp'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='null-sel-clr-base'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='stibp-always-on'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='EPYC-Milan'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='EPYC-Milan-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='EPYC-Milan-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amd-psfd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='no-nested-data-bp'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='null-sel-clr-base'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='stibp-always-on'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='EPYC-Rome'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='EPYC-Rome-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='EPYC-Rome-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='EPYC-Rome-v3'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='EPYC-v3'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='EPYC-v4'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='GraniteRapids'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-fp16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-int8'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-tile'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx-vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-fp16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fbsdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrc'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrs'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fzrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='mcdt-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pbrsb-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='prefetchiti'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='psdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='serialize'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xfd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='GraniteRapids-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-fp16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-int8'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-tile'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx-vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-fp16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fbsdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrc'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrs'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fzrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='mcdt-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pbrsb-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='prefetchiti'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='psdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='serialize'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xfd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='GraniteRapids-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-fp16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-int8'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-tile'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx-vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx10'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx10-128'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx10-256'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx10-512'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-fp16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='cldemote'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fbsdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrc'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrs'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fzrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='mcdt-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdir64b'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdiri'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pbrsb-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='prefetchiti'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='psdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='serialize'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ss'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xfd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Haswell'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Haswell-IBRS'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Haswell-noTSX'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Haswell-noTSX-IBRS'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Haswell-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Haswell-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Haswell-v3'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Haswell-v4'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Icelake-Server'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Icelake-Server-noTSX'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Icelake-Server-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Icelake-Server-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Icelake-Server-v3'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Icelake-Server-v4'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Icelake-Server-v5'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Icelake-Server-v6'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Icelake-Server-v7'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='IvyBridge'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='IvyBridge-IBRS'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='IvyBridge-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='IvyBridge-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='KnightsMill'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-4fmaps'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-4vnniw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512er'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512pf'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ss'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='KnightsMill-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-4fmaps'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-4vnniw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512er'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512pf'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ss'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Opteron_G4'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fma4'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xop'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Opteron_G4-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fma4'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xop'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Opteron_G5'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fma4'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='tbm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xop'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Opteron_G5-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fma4'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='tbm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xop'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='SapphireRapids'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-int8'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-tile'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx-vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-fp16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrc'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrs'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fzrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='serialize'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xfd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='SapphireRapids-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-int8'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-tile'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx-vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-fp16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrc'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrs'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fzrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='serialize'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xfd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='SapphireRapids-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-int8'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-tile'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx-vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-fp16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fbsdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrc'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrs'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fzrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='psdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='serialize'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xfd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='SapphireRapids-v3'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-int8'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-tile'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx-vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-fp16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='cldemote'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fbsdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrc'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrs'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fzrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdir64b'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdiri'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='psdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='serialize'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ss'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xfd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='SierraForest'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx-ifma'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx-ne-convert'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx-vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx-vnni-int8'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='cmpccxadd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fbsdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrs'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='mcdt-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pbrsb-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='psdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='serialize'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='SierraForest-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx-ifma'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx-ne-convert'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx-vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx-vnni-int8'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='cmpccxadd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fbsdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrs'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='mcdt-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pbrsb-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='psdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='serialize'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Client'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Client-IBRS'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Client-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Client-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Client-v3'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Client-v4'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Server'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Server-IBRS'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Server-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Server-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Server-v3'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Server-v4'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Server-v5'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Snowridge'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='cldemote'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='core-capability'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdir64b'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdiri'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='mpx'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='split-lock-detect'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Snowridge-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='cldemote'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='core-capability'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdir64b'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdiri'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='mpx'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='split-lock-detect'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Snowridge-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='cldemote'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='core-capability'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdir64b'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdiri'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='split-lock-detect'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Snowridge-v3'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='cldemote'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='core-capability'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdir64b'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdiri'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='split-lock-detect'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Snowridge-v4'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='cldemote'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdir64b'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdiri'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='athlon'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='3dnow'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='3dnowext'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='athlon-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='3dnow'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='3dnowext'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='core2duo'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ss'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='core2duo-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ss'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='coreduo'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ss'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='coreduo-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ss'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='n270'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ss'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='n270-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ss'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='phenom'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='3dnow'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='3dnowext'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='phenom-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='3dnow'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='3dnowext'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </mode>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  </cpu>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  <memoryBacking supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <enum name='sourceType'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <value>file</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <value>anonymous</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <value>memfd</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  </memoryBacking>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  <devices>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <disk supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='diskDevice'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>disk</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>cdrom</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>floppy</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>lun</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='bus'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>ide</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>fdc</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>scsi</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>virtio</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>usb</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>sata</value>
Oct 14 04:43:00 np0005486808 python3.9[259560]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='model'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>virtio</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>virtio-transitional</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>virtio-non-transitional</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </disk>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <graphics supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='type'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>vnc</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>egl-headless</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>dbus</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </graphics>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <video supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='modelType'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>vga</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>cirrus</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>virtio</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>none</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>bochs</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>ramfb</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </video>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <hostdev supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='mode'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>subsystem</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='startupPolicy'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>default</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>mandatory</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>requisite</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>optional</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='subsysType'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>usb</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>pci</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>scsi</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='capsType'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='pciBackend'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </hostdev>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <rng supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='model'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>virtio</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>virtio-transitional</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>virtio-non-transitional</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='backendModel'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>random</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>egd</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>builtin</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </rng>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <filesystem supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='driverType'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>path</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>handle</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>virtiofs</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </filesystem>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <tpm supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='model'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>tpm-tis</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>tpm-crb</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='backendModel'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>emulator</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>external</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='backendVersion'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>2.0</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </tpm>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <redirdev supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='bus'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>usb</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </redirdev>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <channel supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='type'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>pty</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>unix</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </channel>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <crypto supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='model'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='type'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>qemu</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='backendModel'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>builtin</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </crypto>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <interface supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='backendType'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>default</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>passt</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </interface>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <panic supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='model'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>isa</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>hyperv</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </panic>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  </devices>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  <features>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <gic supported='no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <vmcoreinfo supported='yes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <genid supported='yes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <backingStoreInput supported='yes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <backup supported='yes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <async-teardown supported='yes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <ps2 supported='yes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <sev supported='no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <sgx supported='no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <hyperv supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='features'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>relaxed</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>vapic</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>spinlocks</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>vpindex</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>runtime</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>synic</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>stimer</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>reset</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>vendor_id</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>frequencies</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>reenlightenment</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>tlbflush</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>ipi</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>avic</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>emsr_bitmap</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>xmm_input</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </hyperv>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <launchSecurity supported='no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  </features>
Oct 14 04:43:00 np0005486808 nova_compute[258711]: </domainCapabilities>
Oct 14 04:43:00 np0005486808 nova_compute[258711]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 14 04:43:00 np0005486808 nova_compute[258711]: 2025-10-14 08:43:00.057 2 DEBUG nova.virt.libvirt.host [None req-8cc66412-aa9c-40d5-b5b4-024329f8e4d4 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct 14 04:43:00 np0005486808 nova_compute[258711]: <domainCapabilities>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  <path>/usr/libexec/qemu-kvm</path>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  <domain>kvm</domain>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  <machine>pc-q35-rhel9.6.0</machine>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  <arch>x86_64</arch>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  <vcpu max='4096'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  <iothreads supported='yes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  <os supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <enum name='firmware'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <value>efi</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <loader supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='type'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>rom</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>pflash</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='readonly'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>yes</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>no</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='secure'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>yes</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>no</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </loader>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  </os>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  <cpu>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <mode name='host-passthrough' supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='hostPassthroughMigratable'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>on</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>off</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </mode>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <mode name='maximum' supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='maximumMigratable'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>on</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>off</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </mode>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <mode name='host-model' supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model fallback='forbid'>EPYC-Rome</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <vendor>AMD</vendor>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='x2apic'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='tsc-deadline'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='hypervisor'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='tsc_adjust'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='spec-ctrl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='stibp'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='arch-capabilities'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='ssbd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='cmp_legacy'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='overflow-recov'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='succor'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='ibrs'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='amd-ssbd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='virt-ssbd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='lbrv'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='tsc-scale'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='vmcb-clean'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='flushbyasid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='pause-filter'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='pfthreshold'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='svme-addr-chk'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='lfence-always-serializing'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='rdctl-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='mds-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='pschange-mc-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='gds-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='require' name='rfds-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <feature policy='disable' name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </mode>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <mode name='custom' supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Broadwell'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Broadwell-IBRS'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Broadwell-noTSX'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Broadwell-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Broadwell-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Broadwell-v3'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Broadwell-v4'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Cascadelake-Server'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Cascadelake-Server-noTSX'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Cascadelake-Server-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Cascadelake-Server-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Cascadelake-Server-v3'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Cascadelake-Server-v4'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Cascadelake-Server-v5'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Cooperlake'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Cooperlake-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Cooperlake-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Denverton'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='mpx'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Denverton-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='mpx'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Denverton-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Denverton-v3'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Dhyana-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='EPYC-Genoa'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amd-psfd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='auto-ibrs'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='no-nested-data-bp'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='null-sel-clr-base'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='stibp-always-on'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='EPYC-Genoa-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amd-psfd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='auto-ibrs'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='no-nested-data-bp'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='null-sel-clr-base'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='stibp-always-on'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='EPYC-Milan'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='EPYC-Milan-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='EPYC-Milan-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amd-psfd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='no-nested-data-bp'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='null-sel-clr-base'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='stibp-always-on'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='EPYC-Rome'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='EPYC-Rome-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='EPYC-Rome-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='EPYC-Rome-v3'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='EPYC-v3'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='EPYC-v4'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='GraniteRapids'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-fp16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-int8'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-tile'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx-vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-fp16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fbsdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrc'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrs'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fzrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='mcdt-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pbrsb-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='prefetchiti'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='psdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='serialize'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xfd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='GraniteRapids-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-fp16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-int8'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-tile'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx-vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-fp16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fbsdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrc'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrs'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fzrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='mcdt-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pbrsb-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='prefetchiti'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='psdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='serialize'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xfd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='GraniteRapids-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-fp16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-int8'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-tile'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx-vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx10'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx10-128'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx10-256'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx10-512'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-fp16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='cldemote'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fbsdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrc'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrs'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fzrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='mcdt-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdir64b'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdiri'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pbrsb-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='prefetchiti'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='psdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='serialize'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ss'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xfd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Haswell'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Haswell-IBRS'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Haswell-noTSX'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Haswell-noTSX-IBRS'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Haswell-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Haswell-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Haswell-v3'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Haswell-v4'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Icelake-Server'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Icelake-Server-noTSX'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Icelake-Server-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Icelake-Server-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Icelake-Server-v3'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Icelake-Server-v4'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Icelake-Server-v5'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Icelake-Server-v6'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Icelake-Server-v7'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='IvyBridge'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='IvyBridge-IBRS'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='IvyBridge-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='IvyBridge-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='KnightsMill'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-4fmaps'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-4vnniw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512er'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512pf'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ss'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='KnightsMill-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-4fmaps'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-4vnniw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512er'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512pf'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ss'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Opteron_G4'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fma4'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xop'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Opteron_G4-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fma4'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xop'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Opteron_G5'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fma4'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='tbm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xop'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Opteron_G5-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fma4'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='tbm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xop'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='SapphireRapids'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-int8'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-tile'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx-vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-fp16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrc'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrs'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fzrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='serialize'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xfd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='SapphireRapids-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-int8'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-tile'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx-vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-fp16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrc'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrs'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fzrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='serialize'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xfd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='SapphireRapids-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-int8'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-tile'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx-vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-fp16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fbsdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrc'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrs'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fzrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='psdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='serialize'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xfd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='SapphireRapids-v3'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-int8'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='amx-tile'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx-vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-bf16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-fp16'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bitalg'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512ifma'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='cldemote'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fbsdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrc'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrs'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fzrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='la57'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdir64b'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdiri'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='psdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='serialize'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ss'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='taa-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xfd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='SierraForest'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx-ifma'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx-ne-convert'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx-vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx-vnni-int8'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='cmpccxadd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fbsdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrs'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='mcdt-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pbrsb-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='psdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='serialize'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='SierraForest-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx-ifma'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx-ne-convert'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx-vnni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx-vnni-int8'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='cmpccxadd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fbsdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='fsrs'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ibrs-all'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='mcdt-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pbrsb-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='psdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='serialize'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vaes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Client'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Client-IBRS'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Client-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Client-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Client-v3'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Client-v4'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 systemd[1]: Stopping nova_compute container...
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Server'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Server-IBRS'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Server-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Server-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='hle'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='rtm'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Server-v3'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Server-v4'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Skylake-Server-v5'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512bw'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512cd'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512dq'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512f'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='avx512vl'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='invpcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pcid'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='pku'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Snowridge'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='cldemote'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='core-capability'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdir64b'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdiri'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='mpx'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='split-lock-detect'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Snowridge-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='cldemote'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='core-capability'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdir64b'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdiri'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='mpx'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='split-lock-detect'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Snowridge-v2'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='cldemote'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='core-capability'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdir64b'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdiri'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='split-lock-detect'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Snowridge-v3'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='cldemote'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='core-capability'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdir64b'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdiri'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='split-lock-detect'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='Snowridge-v4'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='cldemote'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='erms'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='gfni'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdir64b'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='movdiri'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='xsaves'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='athlon'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='3dnow'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='3dnowext'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='athlon-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='3dnow'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='3dnowext'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='core2duo'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ss'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='core2duo-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ss'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='coreduo'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ss'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='coreduo-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ss'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='n270'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ss'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='n270-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='ss'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='phenom'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='3dnow'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='3dnowext'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <blockers model='phenom-v1'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='3dnow'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <feature name='3dnowext'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </blockers>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </mode>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  </cpu>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  <memoryBacking supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <enum name='sourceType'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <value>file</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <value>anonymous</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <value>memfd</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  </memoryBacking>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  <devices>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <disk supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='diskDevice'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>disk</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>cdrom</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>floppy</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>lun</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='bus'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>fdc</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>scsi</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>virtio</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>usb</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>sata</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='model'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>virtio</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>virtio-transitional</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>virtio-non-transitional</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </disk>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <graphics supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='type'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>vnc</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>egl-headless</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>dbus</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </graphics>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <video supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='modelType'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>vga</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>cirrus</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>virtio</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>none</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>bochs</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>ramfb</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </video>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <hostdev supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='mode'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>subsystem</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='startupPolicy'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>default</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>mandatory</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>requisite</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>optional</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='subsysType'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>usb</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>pci</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>scsi</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='capsType'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='pciBackend'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </hostdev>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <rng supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='model'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>virtio</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>virtio-transitional</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>virtio-non-transitional</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='backendModel'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>random</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>egd</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>builtin</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </rng>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <filesystem supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='driverType'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>path</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>handle</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>virtiofs</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </filesystem>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <tpm supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='model'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>tpm-tis</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>tpm-crb</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='backendModel'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>emulator</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>external</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='backendVersion'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>2.0</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </tpm>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <redirdev supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='bus'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>usb</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </redirdev>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <channel supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='type'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>pty</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>unix</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </channel>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <crypto supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='model'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='type'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>qemu</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='backendModel'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>builtin</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </crypto>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <interface supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='backendType'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>default</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>passt</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </interface>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <panic supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='model'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>isa</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>hyperv</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </panic>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  </devices>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  <features>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <gic supported='no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <vmcoreinfo supported='yes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <genid supported='yes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <backingStoreInput supported='yes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <backup supported='yes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <async-teardown supported='yes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <ps2 supported='yes'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <sev supported='no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <sgx supported='no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <hyperv supported='yes'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      <enum name='features'>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>relaxed</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>vapic</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>spinlocks</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>vpindex</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>runtime</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>synic</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>stimer</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>reset</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>vendor_id</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>frequencies</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>reenlightenment</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>tlbflush</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>ipi</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>avic</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>emsr_bitmap</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:        <value>xmm_input</value>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:      </enum>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    </hyperv>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:    <launchSecurity supported='no'/>
Oct 14 04:43:00 np0005486808 nova_compute[258711]:  </features>
Oct 14 04:43:00 np0005486808 nova_compute[258711]: </domainCapabilities>
Oct 14 04:43:00 np0005486808 nova_compute[258711]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct 14 04:43:00 np0005486808 nova_compute[258711]: 2025-10-14 08:43:00.115 2 DEBUG nova.virt.libvirt.host [None req-8cc66412-aa9c-40d5-b5b4-024329f8e4d4 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Oct 14 04:43:00 np0005486808 nova_compute[258711]: 2025-10-14 08:43:00.116 2 DEBUG nova.virt.libvirt.host [None req-8cc66412-aa9c-40d5-b5b4-024329f8e4d4 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Oct 14 04:43:00 np0005486808 nova_compute[258711]: 2025-10-14 08:43:00.116 2 DEBUG nova.virt.libvirt.host [None req-8cc66412-aa9c-40d5-b5b4-024329f8e4d4 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Oct 14 04:43:00 np0005486808 nova_compute[258711]: 2025-10-14 08:43:00.116 2 INFO nova.virt.libvirt.host [None req-8cc66412-aa9c-40d5-b5b4-024329f8e4d4 - - - - - -] Secure Boot support detected#033[00m
Oct 14 04:43:00 np0005486808 nova_compute[258711]: 2025-10-14 08:43:00.119 2 INFO nova.virt.libvirt.driver [None req-8cc66412-aa9c-40d5-b5b4-024329f8e4d4 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Oct 14 04:43:00 np0005486808 nova_compute[258711]: 2025-10-14 08:43:00.119 2 INFO nova.virt.libvirt.driver [None req-8cc66412-aa9c-40d5-b5b4-024329f8e4d4 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Oct 14 04:43:00 np0005486808 nova_compute[258711]: 2025-10-14 08:43:00.143 2 DEBUG nova.virt.libvirt.driver [None req-8cc66412-aa9c-40d5-b5b4-024329f8e4d4 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Oct 14 04:43:00 np0005486808 nova_compute[258711]: 2025-10-14 08:43:00.189 2 INFO nova.virt.node [None req-8cc66412-aa9c-40d5-b5b4-024329f8e4d4 - - - - - -] Determined node identity 92105e1d-1743-46e3-a494-858b4331398a from /var/lib/nova/compute_id#033[00m
Oct 14 04:43:00 np0005486808 nova_compute[258711]: 2025-10-14 08:43:00.220 2 WARNING nova.compute.manager [None req-8cc66412-aa9c-40d5-b5b4-024329f8e4d4 - - - - - -] Compute nodes ['92105e1d-1743-46e3-a494-858b4331398a'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Oct 14 04:43:00 np0005486808 nova_compute[258711]: 2025-10-14 08:43:00.259 2 INFO oslo_messaging._drivers.amqpdriver [-] No calling threads waiting for msg_id : 43b59f19210e4125a3fb6759efdac5df#033[00m
Oct 14 04:43:00 np0005486808 nova_compute[258711]: 2025-10-14 08:43:00.259 2 DEBUG oslo_concurrency.lockutils [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:43:00 np0005486808 nova_compute[258711]: 2025-10-14 08:43:00.259 2 DEBUG oslo_concurrency.lockutils [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:43:00 np0005486808 nova_compute[258711]: 2025-10-14 08:43:00.260 2 DEBUG oslo_concurrency.lockutils [None req-4cb840f9-3f89-4196-b633-6b08195a9596 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:43:00 np0005486808 virtqemud[259351]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Oct 14 04:43:00 np0005486808 virtqemud[259351]: hostname: compute-0
Oct 14 04:43:00 np0005486808 virtqemud[259351]: End of file while reading data: Input/output error
Oct 14 04:43:00 np0005486808 systemd[1]: libpod-4b79e6d50d94e414985f39b868e461ab0b06189ac5c7fddd72fd5dd69820e133.scope: Deactivated successfully.
Oct 14 04:43:00 np0005486808 systemd[1]: libpod-4b79e6d50d94e414985f39b868e461ab0b06189ac5c7fddd72fd5dd69820e133.scope: Consumed 2.989s CPU time.
Oct 14 04:43:00 np0005486808 podman[259568]: 2025-10-14 08:43:00.649075499 +0000 UTC m=+0.447516814 container died 4b79e6d50d94e414985f39b868e461ab0b06189ac5c7fddd72fd5dd69820e133 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:33c58faa12b90b6009f89c9c60baeadc1323b62dcb141619a7a11c3c10903560, name=nova_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:33c58faa12b90b6009f89c9c60baeadc1323b62dcb141619a7a11c3c10903560', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 14 04:43:00 np0005486808 systemd[1]: var-lib-containers-storage-overlay-5f8169e0ebb67cea7f464a7673054819207834f9a7de8afa344f71babd5ac3b9-merged.mount: Deactivated successfully.
Oct 14 04:43:00 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4b79e6d50d94e414985f39b868e461ab0b06189ac5c7fddd72fd5dd69820e133-userdata-shm.mount: Deactivated successfully.
Oct 14 04:43:01 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:43:01 np0005486808 podman[259568]: 2025-10-14 08:43:01.481665953 +0000 UTC m=+1.280107268 container cleanup 4b79e6d50d94e414985f39b868e461ab0b06189ac5c7fddd72fd5dd69820e133 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:33c58faa12b90b6009f89c9c60baeadc1323b62dcb141619a7a11c3c10903560, name=nova_compute, config_id=edpm, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:33c58faa12b90b6009f89c9c60baeadc1323b62dcb141619a7a11c3c10903560', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 14 04:43:01 np0005486808 podman[259568]: nova_compute
Oct 14 04:43:01 np0005486808 podman[259599]: nova_compute
Oct 14 04:43:01 np0005486808 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Oct 14 04:43:01 np0005486808 systemd[1]: Stopped nova_compute container.
Oct 14 04:43:01 np0005486808 systemd[1]: Starting nova_compute container...
Oct 14 04:43:01 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:43:01 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f8169e0ebb67cea7f464a7673054819207834f9a7de8afa344f71babd5ac3b9/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 14 04:43:01 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f8169e0ebb67cea7f464a7673054819207834f9a7de8afa344f71babd5ac3b9/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 14 04:43:01 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f8169e0ebb67cea7f464a7673054819207834f9a7de8afa344f71babd5ac3b9/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 14 04:43:01 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f8169e0ebb67cea7f464a7673054819207834f9a7de8afa344f71babd5ac3b9/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 14 04:43:01 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f8169e0ebb67cea7f464a7673054819207834f9a7de8afa344f71babd5ac3b9/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 14 04:43:01 np0005486808 podman[259612]: 2025-10-14 08:43:01.758632485 +0000 UTC m=+0.133760687 container init 4b79e6d50d94e414985f39b868e461ab0b06189ac5c7fddd72fd5dd69820e133 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:33c58faa12b90b6009f89c9c60baeadc1323b62dcb141619a7a11c3c10903560, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:33c58faa12b90b6009f89c9c60baeadc1323b62dcb141619a7a11c3c10903560', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, container_name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 14 04:43:01 np0005486808 podman[259612]: 2025-10-14 08:43:01.773859949 +0000 UTC m=+0.148988111 container start 4b79e6d50d94e414985f39b868e461ab0b06189ac5c7fddd72fd5dd69820e133 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:33c58faa12b90b6009f89c9c60baeadc1323b62dcb141619a7a11c3c10903560, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, config_id=edpm, container_name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:33c58faa12b90b6009f89c9c60baeadc1323b62dcb141619a7a11c3c10903560', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 04:43:01 np0005486808 podman[259612]: nova_compute
Oct 14 04:43:01 np0005486808 nova_compute[259627]: + sudo -E kolla_set_configs
Oct 14 04:43:01 np0005486808 systemd[1]: Started nova_compute container.
Oct 14 04:43:01 np0005486808 nova_compute[259627]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 14 04:43:01 np0005486808 nova_compute[259627]: INFO:__main__:Validating config file
Oct 14 04:43:01 np0005486808 nova_compute[259627]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 14 04:43:01 np0005486808 nova_compute[259627]: INFO:__main__:Copying service configuration files
Oct 14 04:43:01 np0005486808 nova_compute[259627]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 14 04:43:01 np0005486808 nova_compute[259627]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 14 04:43:01 np0005486808 nova_compute[259627]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 14 04:43:01 np0005486808 nova_compute[259627]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Oct 14 04:43:01 np0005486808 nova_compute[259627]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 14 04:43:01 np0005486808 nova_compute[259627]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 14 04:43:01 np0005486808 nova_compute[259627]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 14 04:43:01 np0005486808 nova_compute[259627]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 14 04:43:01 np0005486808 nova_compute[259627]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 14 04:43:01 np0005486808 nova_compute[259627]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 14 04:43:01 np0005486808 nova_compute[259627]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 14 04:43:01 np0005486808 nova_compute[259627]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 14 04:43:01 np0005486808 nova_compute[259627]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Oct 14 04:43:01 np0005486808 nova_compute[259627]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 14 04:43:01 np0005486808 nova_compute[259627]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 14 04:43:01 np0005486808 nova_compute[259627]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 14 04:43:01 np0005486808 nova_compute[259627]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 14 04:43:01 np0005486808 nova_compute[259627]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 14 04:43:01 np0005486808 nova_compute[259627]: INFO:__main__:Deleting /etc/ceph
Oct 14 04:43:01 np0005486808 nova_compute[259627]: INFO:__main__:Creating directory /etc/ceph
Oct 14 04:43:01 np0005486808 nova_compute[259627]: INFO:__main__:Setting permission for /etc/ceph
Oct 14 04:43:01 np0005486808 nova_compute[259627]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct 14 04:43:01 np0005486808 nova_compute[259627]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 14 04:43:01 np0005486808 nova_compute[259627]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct 14 04:43:01 np0005486808 nova_compute[259627]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 14 04:43:01 np0005486808 nova_compute[259627]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Oct 14 04:43:01 np0005486808 nova_compute[259627]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 14 04:43:01 np0005486808 nova_compute[259627]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 14 04:43:01 np0005486808 nova_compute[259627]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Oct 14 04:43:01 np0005486808 nova_compute[259627]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 14 04:43:01 np0005486808 nova_compute[259627]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 14 04:43:01 np0005486808 nova_compute[259627]: INFO:__main__:Writing out command to execute
Oct 14 04:43:01 np0005486808 nova_compute[259627]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 14 04:43:01 np0005486808 nova_compute[259627]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 14 04:43:01 np0005486808 nova_compute[259627]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 14 04:43:01 np0005486808 nova_compute[259627]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 14 04:43:01 np0005486808 nova_compute[259627]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 14 04:43:01 np0005486808 nova_compute[259627]: ++ cat /run_command
Oct 14 04:43:01 np0005486808 nova_compute[259627]: + CMD=nova-compute
Oct 14 04:43:01 np0005486808 nova_compute[259627]: + ARGS=
Oct 14 04:43:01 np0005486808 nova_compute[259627]: + sudo kolla_copy_cacerts
Oct 14 04:43:01 np0005486808 nova_compute[259627]: + [[ ! -n '' ]]
Oct 14 04:43:01 np0005486808 nova_compute[259627]: + . kolla_extend_start
Oct 14 04:43:01 np0005486808 nova_compute[259627]: Running command: 'nova-compute'
Oct 14 04:43:01 np0005486808 nova_compute[259627]: + echo 'Running command: '\''nova-compute'\'''
Oct 14 04:43:01 np0005486808 nova_compute[259627]: + umask 0022
Oct 14 04:43:01 np0005486808 nova_compute[259627]: + exec nova-compute
Oct 14 04:43:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:43:02 np0005486808 python3.9[259790]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 14 04:43:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:43:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:43:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:43:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:43:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:43:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:43:02 np0005486808 systemd[1]: Started libpod-conmon-9804a85f415aca658c6c025262c662a4ad4375798ad24bf721a44a502f2b1711.scope.
Oct 14 04:43:02 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:43:02 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86d53a139709aecdaf755fe844a79201f24851c1d97e81fc6fae85ebd7936bf6/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Oct 14 04:43:02 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86d53a139709aecdaf755fe844a79201f24851c1d97e81fc6fae85ebd7936bf6/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 14 04:43:02 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86d53a139709aecdaf755fe844a79201f24851c1d97e81fc6fae85ebd7936bf6/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Oct 14 04:43:02 np0005486808 podman[259816]: 2025-10-14 08:43:02.93427461 +0000 UTC m=+0.153415175 container init 9804a85f415aca658c6c025262c662a4ad4375798ad24bf721a44a502f2b1711 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:33c58faa12b90b6009f89c9c60baeadc1323b62dcb141619a7a11c3c10903560, name=nova_compute_init, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:33c58faa12b90b6009f89c9c60baeadc1323b62dcb141619a7a11c3c10903560', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 04:43:02 np0005486808 podman[259816]: 2025-10-14 08:43:02.947076298 +0000 UTC m=+0.166216843 container start 9804a85f415aca658c6c025262c662a4ad4375798ad24bf721a44a502f2b1711 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:33c58faa12b90b6009f89c9c60baeadc1323b62dcb141619a7a11c3c10903560, name=nova_compute_init, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:33c58faa12b90b6009f89c9c60baeadc1323b62dcb141619a7a11c3c10903560', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute_init, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:43:02 np0005486808 python3.9[259790]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Oct 14 04:43:03 np0005486808 nova_compute_init[259837]: INFO:nova_statedir:Applying nova statedir ownership
Oct 14 04:43:03 np0005486808 nova_compute_init[259837]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Oct 14 04:43:03 np0005486808 nova_compute_init[259837]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Oct 14 04:43:03 np0005486808 nova_compute_init[259837]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Oct 14 04:43:03 np0005486808 nova_compute_init[259837]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Oct 14 04:43:03 np0005486808 nova_compute_init[259837]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Oct 14 04:43:03 np0005486808 nova_compute_init[259837]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Oct 14 04:43:03 np0005486808 nova_compute_init[259837]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Oct 14 04:43:03 np0005486808 nova_compute_init[259837]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Oct 14 04:43:03 np0005486808 nova_compute_init[259837]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Oct 14 04:43:03 np0005486808 nova_compute_init[259837]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Oct 14 04:43:03 np0005486808 nova_compute_init[259837]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Oct 14 04:43:03 np0005486808 nova_compute_init[259837]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Oct 14 04:43:03 np0005486808 nova_compute_init[259837]: INFO:nova_statedir:Nova statedir ownership complete
Oct 14 04:43:03 np0005486808 systemd[1]: libpod-9804a85f415aca658c6c025262c662a4ad4375798ad24bf721a44a502f2b1711.scope: Deactivated successfully.
Oct 14 04:43:03 np0005486808 podman[259857]: 2025-10-14 08:43:03.073527433 +0000 UTC m=+0.024131913 container died 9804a85f415aca658c6c025262c662a4ad4375798ad24bf721a44a502f2b1711 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:33c58faa12b90b6009f89c9c60baeadc1323b62dcb141619a7a11c3c10903560, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:33c58faa12b90b6009f89c9c60baeadc1323b62dcb141619a7a11c3c10903560', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=edpm, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:43:03 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9804a85f415aca658c6c025262c662a4ad4375798ad24bf721a44a502f2b1711-userdata-shm.mount: Deactivated successfully.
Oct 14 04:43:03 np0005486808 systemd[1]: var-lib-containers-storage-overlay-86d53a139709aecdaf755fe844a79201f24851c1d97e81fc6fae85ebd7936bf6-merged.mount: Deactivated successfully.
Oct 14 04:43:03 np0005486808 podman[259857]: 2025-10-14 08:43:03.119526405 +0000 UTC m=+0.070130865 container cleanup 9804a85f415aca658c6c025262c662a4ad4375798ad24bf721a44a502f2b1711 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:33c58faa12b90b6009f89c9c60baeadc1323b62dcb141619a7a11c3c10903560, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:33c58faa12b90b6009f89c9c60baeadc1323b62dcb141619a7a11c3c10903560', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 04:43:03 np0005486808 systemd[1]: libpod-conmon-9804a85f415aca658c6c025262c662a4ad4375798ad24bf721a44a502f2b1711.scope: Deactivated successfully.
Oct 14 04:43:03 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:43:03 np0005486808 systemd[1]: session-51.scope: Deactivated successfully.
Oct 14 04:43:03 np0005486808 systemd[1]: session-51.scope: Consumed 3min 1.672s CPU time.
Oct 14 04:43:03 np0005486808 systemd-logind[799]: Session 51 logged out. Waiting for processes to exit.
Oct 14 04:43:03 np0005486808 systemd-logind[799]: Removed session 51.
Oct 14 04:43:03 np0005486808 podman[259901]: 2025-10-14 08:43:03.655705404 +0000 UTC m=+0.057383227 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3)
Oct 14 04:43:03 np0005486808 podman[259900]: 2025-10-14 08:43:03.671869241 +0000 UTC m=+0.077499026 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3)
Oct 14 04:43:03 np0005486808 nova_compute[259627]: 2025-10-14 08:43:03.807 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct 14 04:43:03 np0005486808 nova_compute[259627]: 2025-10-14 08:43:03.808 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct 14 04:43:03 np0005486808 nova_compute[259627]: 2025-10-14 08:43:03.808 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct 14 04:43:03 np0005486808 nova_compute[259627]: 2025-10-14 08:43:03.808 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Oct 14 04:43:03 np0005486808 nova_compute[259627]: 2025-10-14 08:43:03.940 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:43:03 np0005486808 nova_compute[259627]: 2025-10-14 08:43:03.968 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.407 2 INFO nova.virt.driver [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.503 2 INFO nova.compute.provider_config [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.517 2 DEBUG oslo_concurrency.lockutils [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.517 2 DEBUG oslo_concurrency.lockutils [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.517 2 DEBUG oslo_concurrency.lockutils [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.518 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.518 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.518 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.518 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.518 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.518 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.519 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.519 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.519 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.519 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.519 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.520 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.520 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.520 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.520 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.520 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.521 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.521 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.521 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.521 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.521 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.522 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.522 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.522 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.522 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.522 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.522 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.523 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.523 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.523 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.523 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.523 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.524 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.524 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.524 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.524 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.524 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.525 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.525 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.525 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.525 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.525 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.526 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.526 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.526 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.526 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.526 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.527 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.527 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.527 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.527 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.527 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.527 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.528 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.528 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.528 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.528 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.529 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.529 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.529 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.529 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.529 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.529 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.530 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.530 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.530 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.530 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.530 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.531 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.531 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.531 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.531 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.531 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.532 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.532 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.532 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.532 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.532 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.533 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.533 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.533 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.533 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.534 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:43:04 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.534 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.534 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.534 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.535 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.535 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.535 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.535 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 04:43:04 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.535 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.536 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.536 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.536 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.536 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.536 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.537 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.537 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.537 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.537 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.538 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.538 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.538 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.538 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.539 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.539 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.539 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.539 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.539 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.539 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.540 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.540 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.540 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.540 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.540 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.541 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.541 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.541 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.541 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.541 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.542 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.542 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.542 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.542 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.542 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.543 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.543 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.543 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.543 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.544 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.544 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.544 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.544 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.544 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.545 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 78137762-7e85-4ad5-8efd-f3686451fb64 does not exist
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.545 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.545 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.545 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev eec9e7e3-9446-4939-b180-39bd0773ad91 does not exist
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.545 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.546 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev f5595fc6-cb64-4d2b-b386-8ca15d425ce6 does not exist
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.546 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.546 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.546 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.546 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.547 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.547 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.547 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.547 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.548 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.548 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.548 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.548 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.548 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.548 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.549 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.549 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.549 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.549 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.549 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.550 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.550 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.550 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.550 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.550 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.551 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.551 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.551 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 04:43:04 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.551 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.551 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.552 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.552 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.552 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.552 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:43:04 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.552 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.552 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.553 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.553 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.553 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.553 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.553 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.554 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.554 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.554 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.554 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.554 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.555 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.555 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.555 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.555 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.555 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.556 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.556 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.556 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.556 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.556 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.557 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.557 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.557 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.557 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.557 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.558 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.558 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.558 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.558 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.558 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.559 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.559 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.559 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.559 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.559 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.559 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.560 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.560 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.560 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.560 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.560 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.561 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.561 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.561 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.561 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.561 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.562 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.562 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.562 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.562 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.562 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.562 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.563 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.563 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.563 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.563 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.563 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.564 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.564 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.564 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.564 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.564 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.564 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.565 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.565 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.565 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.565 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.565 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.566 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.566 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.566 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.566 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.566 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.567 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.567 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.567 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.567 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.567 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.567 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.568 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.568 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.568 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.568 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.568 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.569 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.569 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.569 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.569 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.569 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.569 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.570 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.570 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.570 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.570 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.570 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.571 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.571 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.571 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.571 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.571 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.572 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.572 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.572 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.572 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.572 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.573 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.573 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.573 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.573 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.574 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.574 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.574 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.574 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.574 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.575 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.575 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.575 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.575 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.575 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.576 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.576 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.576 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.576 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.576 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.577 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.577 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.577 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.577 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.577 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.578 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.578 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.578 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.578 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.579 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.579 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.579 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.579 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.579 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.580 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.580 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.580 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.580 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.580 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.581 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.581 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.581 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.581 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.581 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.582 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.582 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.582 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.582 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.582 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.583 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.583 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.583 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.583 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.583 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.584 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.584 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.584 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.584 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.585 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.585 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.585 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.585 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.585 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.586 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.586 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.586 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.586 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.587 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.587 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.587 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.587 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.587 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.588 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.588 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.588 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.588 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.589 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.589 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.589 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.589 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.589 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.590 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.590 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.590 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.590 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.590 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.590 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.591 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.591 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.591 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.591 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.591 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.592 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.592 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.592 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.592 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.592 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.592 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.593 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.593 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.593 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.593 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.593 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.594 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.594 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.594 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.594 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.594 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.594 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.595 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.595 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.595 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.595 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.595 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.596 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.596 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.596 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.596 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.596 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.597 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.597 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.597 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.597 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.597 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.598 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.598 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.598 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.598 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.598 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.599 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.599 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.599 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.599 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.599 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.600 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.600 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.600 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.600 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.601 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.601 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.601 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.601 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.602 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.602 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.602 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.602 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.602 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.603 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.603 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.603 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.603 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.604 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.604 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.604 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.604 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.604 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.605 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.605 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.605 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.605 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.606 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.606 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.606 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.606 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.606 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.607 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.607 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.607 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.607 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.608 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.608 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.608 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.608 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.609 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.609 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.609 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.609 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.610 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.610 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.610 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.610 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.610 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.611 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.611 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.611 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.611 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.612 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.612 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.612 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.612 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.613 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.613 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.613 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.613 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.614 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.614 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.614 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.614 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.615 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.615 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.615 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.615 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.615 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.616 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.616 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.616 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.616 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.616 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.617 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.617 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.617 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.617 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.617 2 WARNING oslo_config.cfg [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 14 04:43:04 np0005486808 nova_compute[259627]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 14 04:43:04 np0005486808 nova_compute[259627]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct 14 04:43:04 np0005486808 nova_compute[259627]: and ``live_migration_inbound_addr`` respectively.
Oct 14 04:43:04 np0005486808 nova_compute[259627]: ).  Its value may be silently ignored in the future.#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.618 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.618 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.618 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.618 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.618 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.619 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.619 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.619 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.619 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.619 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.620 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.620 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.620 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.620 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.620 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.621 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.621 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.621 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.621 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.rbd_secret_uuid        = c49aadb6-9b04-5cb1-8f5f-4c91676c568e log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.621 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.622 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.622 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.622 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.622 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.622 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.622 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.623 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.623 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.623 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.623 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.623 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.624 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.624 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.624 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.624 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.624 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.625 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.625 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.625 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.625 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.625 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.626 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.626 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.626 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.626 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.626 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.627 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.627 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.627 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.627 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.627 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.627 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.628 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.628 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.628 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.628 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.628 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.629 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.629 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.629 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.629 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.629 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.630 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.630 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.630 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.630 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.630 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.630 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.631 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.631 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.631 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.631 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.632 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.632 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.632 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.632 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.632 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.633 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.633 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.633 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.633 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.634 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.634 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.634 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.634 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.634 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.635 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.635 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.635 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.635 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.635 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.636 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.636 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.636 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.636 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.636 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.637 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.637 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.637 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.637 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.637 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.638 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.638 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.638 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.638 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.638 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.639 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.639 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.639 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.639 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.639 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.640 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.640 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.640 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.640 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.640 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.641 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.641 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.641 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.641 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.641 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.642 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.642 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.642 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.642 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.642 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.643 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.643 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.643 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.643 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.643 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.644 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.644 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.644 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.644 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.644 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.644 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.645 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.645 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.645 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.645 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.646 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.646 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.646 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.646 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.646 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.647 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.647 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.647 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.647 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.647 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.648 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.648 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.648 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.648 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.649 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.649 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.649 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.649 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.650 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.650 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.650 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.650 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.651 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.651 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.651 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.651 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.651 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.652 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.652 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.652 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.652 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.652 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.653 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.653 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.653 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.653 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.653 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.654 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.654 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.654 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.654 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.655 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.655 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.655 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.655 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.655 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.656 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.656 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.656 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.656 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.656 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.656 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.657 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.657 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.657 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.657 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.658 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.658 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.658 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.658 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.659 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.659 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.659 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.659 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.659 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.660 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.660 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.660 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.660 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.660 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.661 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.661 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.661 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.661 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.661 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.662 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.662 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.662 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.662 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.662 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.662 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.663 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.663 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.663 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.663 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.663 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.663 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.664 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.664 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.664 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.664 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.664 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.665 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.665 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.665 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.665 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.665 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.666 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.666 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.666 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.666 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.666 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.667 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.667 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.667 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.667 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.668 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.668 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.668 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.668 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.669 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.669 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.669 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.669 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.669 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.670 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.670 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.670 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.670 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.670 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.670 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.671 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.671 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.671 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.671 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.671 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.672 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.672 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.672 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.672 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.672 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.673 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.673 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.673 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.673 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.673 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.673 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.674 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.674 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.674 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.674 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.674 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.675 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.675 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.675 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.675 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.675 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.676 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.676 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.676 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.676 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.676 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.676 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.677 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.677 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.677 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.677 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.677 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.678 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.678 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.678 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.678 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.678 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.679 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.679 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.679 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.679 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.679 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.679 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.680 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.680 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.680 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.680 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.680 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.681 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.681 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.681 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.681 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.681 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.682 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.682 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.682 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.682 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.682 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.683 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.683 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.683 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.683 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.683 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.683 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.684 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.684 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.684 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.684 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.684 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.685 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.685 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.685 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.685 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.685 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.686 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.686 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.686 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.686 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.686 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.687 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.687 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.687 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.687 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.687 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.688 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.688 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.688 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.688 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.688 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.688 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.689 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.689 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.689 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.689 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.689 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.690 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.690 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.690 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.690 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.690 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.690 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.691 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.691 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.691 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.691 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.691 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.692 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.692 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.692 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.692 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.692 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.693 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.693 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.693 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.693 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.693 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.693 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.694 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.694 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.694 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.694 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.694 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.695 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.695 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.695 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.695 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.695 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.695 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.696 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.696 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.696 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.696 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.696 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.697 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.697 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.697 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.697 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.697 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.697 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.698 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.698 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.698 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.698 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.698 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.699 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.699 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.699 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.699 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.699 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.699 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.700 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.700 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.700 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.700 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.700 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.701 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.701 2 DEBUG oslo_service.service [None req-8b315fad-37ac-4da5-aaec-274bddf8a81f - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.702 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.725 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.725 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.726 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.726 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.735 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fb30c298670> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.737 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fb30c298670> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.738 2 INFO nova.virt.libvirt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Connection event '1' reason 'None'#033[00m
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.745 2 INFO nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Libvirt host capabilities <capabilities>
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  <host>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <uuid>1a1d621e-d701-42a8-a9a3-2d332c90e100</uuid>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <cpu>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <arch>x86_64</arch>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model>EPYC-Rome-v4</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <vendor>AMD</vendor>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <microcode version='16777317'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <signature family='23' model='49' stepping='0'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <maxphysaddr mode='emulate' bits='40'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature name='x2apic'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature name='tsc-deadline'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature name='osxsave'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature name='hypervisor'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature name='tsc_adjust'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature name='spec-ctrl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature name='stibp'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature name='arch-capabilities'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature name='ssbd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature name='cmp_legacy'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature name='topoext'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature name='virt-ssbd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature name='lbrv'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature name='tsc-scale'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature name='vmcb-clean'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature name='pause-filter'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature name='pfthreshold'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature name='svme-addr-chk'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature name='rdctl-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature name='skip-l1dfl-vmentry'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature name='mds-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature name='pschange-mc-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <pages unit='KiB' size='4'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <pages unit='KiB' size='2048'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <pages unit='KiB' size='1048576'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </cpu>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <power_management>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <suspend_mem/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </power_management>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <iommu support='no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <migration_features>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <live/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <uri_transports>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <uri_transport>tcp</uri_transport>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <uri_transport>rdma</uri_transport>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </uri_transports>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </migration_features>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <topology>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <cells num='1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <cell id='0'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:          <memory unit='KiB'>7864356</memory>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:          <pages unit='KiB' size='4'>1966089</pages>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:          <pages unit='KiB' size='2048'>0</pages>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:          <pages unit='KiB' size='1048576'>0</pages>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:          <distances>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:            <sibling id='0' value='10'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:          </distances>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:          <cpus num='8'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:          </cpus>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        </cell>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </cells>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </topology>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <cache>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </cache>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <secmodel>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model>selinux</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <doi>0</doi>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </secmodel>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <secmodel>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model>dac</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <doi>0</doi>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <baselabel type='kvm'>+107:+107</baselabel>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <baselabel type='qemu'>+107:+107</baselabel>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </secmodel>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  </host>
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  <guest>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <os_type>hvm</os_type>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <arch name='i686'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <wordsize>32</wordsize>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <domain type='qemu'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <domain type='kvm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </arch>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <features>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <pae/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <nonpae/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <acpi default='on' toggle='yes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <apic default='on' toggle='no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <cpuselection/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <deviceboot/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <disksnapshot default='on' toggle='no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <externalSnapshot/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </features>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  </guest>
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  <guest>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <os_type>hvm</os_type>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <arch name='x86_64'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <wordsize>64</wordsize>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <domain type='qemu'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <domain type='kvm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </arch>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <features>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <acpi default='on' toggle='yes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <apic default='on' toggle='no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <cpuselection/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <deviceboot/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <disksnapshot default='on' toggle='no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <externalSnapshot/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </features>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  </guest>
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 
Oct 14 04:43:04 np0005486808 nova_compute[259627]: </capabilities>
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.757 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.759 2 WARNING nova.virt.libvirt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.759 2 DEBUG nova.virt.libvirt.volume.mount [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.762 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct 14 04:43:04 np0005486808 nova_compute[259627]: <domainCapabilities>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  <path>/usr/libexec/qemu-kvm</path>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  <domain>kvm</domain>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  <machine>pc-q35-rhel9.6.0</machine>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  <arch>i686</arch>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  <vcpu max='4096'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  <iothreads supported='yes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  <os supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <enum name='firmware'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <loader supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='type'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>rom</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>pflash</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='readonly'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>yes</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>no</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='secure'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>no</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </loader>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  </os>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  <cpu>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <mode name='host-passthrough' supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='hostPassthroughMigratable'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>on</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>off</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </mode>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <mode name='maximum' supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='maximumMigratable'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>on</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>off</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </mode>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <mode name='host-model' supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model fallback='forbid'>EPYC-Rome</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <vendor>AMD</vendor>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='x2apic'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='tsc-deadline'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='hypervisor'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='tsc_adjust'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='spec-ctrl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='stibp'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='arch-capabilities'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='ssbd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='cmp_legacy'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='overflow-recov'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='succor'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='ibrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='amd-ssbd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='virt-ssbd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='lbrv'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='tsc-scale'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='vmcb-clean'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='flushbyasid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='pause-filter'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='pfthreshold'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='svme-addr-chk'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='lfence-always-serializing'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='rdctl-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='mds-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='pschange-mc-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='gds-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='rfds-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='disable' name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </mode>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <mode name='custom' supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Broadwell'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Broadwell-IBRS'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Broadwell-noTSX'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Broadwell-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Broadwell-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Broadwell-v3'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Broadwell-v4'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Cascadelake-Server'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Cascadelake-Server-noTSX'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Cascadelake-Server-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Cascadelake-Server-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Cascadelake-Server-v3'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Cascadelake-Server-v4'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Cascadelake-Server-v5'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Cooperlake'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Cooperlake-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Cooperlake-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Denverton'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='mpx'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Denverton-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='mpx'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Denverton-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Denverton-v3'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Dhyana-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='EPYC-Genoa'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amd-psfd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='auto-ibrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='no-nested-data-bp'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='null-sel-clr-base'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='stibp-always-on'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='EPYC-Genoa-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amd-psfd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='auto-ibrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='no-nested-data-bp'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='null-sel-clr-base'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='stibp-always-on'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='EPYC-Milan'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='EPYC-Milan-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='EPYC-Milan-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amd-psfd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='no-nested-data-bp'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='null-sel-clr-base'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='stibp-always-on'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='EPYC-Rome'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='EPYC-Rome-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='EPYC-Rome-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='EPYC-Rome-v3'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='EPYC-v3'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='EPYC-v4'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='GraniteRapids'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-fp16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-int8'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-tile'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-fp16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fbsdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrc'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fzrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='mcdt-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pbrsb-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='prefetchiti'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='psdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='serialize'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xfd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='GraniteRapids-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-fp16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-int8'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-tile'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-fp16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fbsdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrc'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fzrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='mcdt-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pbrsb-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='prefetchiti'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='psdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='serialize'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xfd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='GraniteRapids-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-fp16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-int8'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-tile'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx10'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx10-128'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx10-256'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx10-512'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-fp16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='cldemote'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fbsdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrc'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fzrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='mcdt-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='movdir64b'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='movdiri'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pbrsb-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='prefetchiti'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='psdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='serialize'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ss'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xfd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Haswell'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Haswell-IBRS'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Haswell-noTSX'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Haswell-noTSX-IBRS'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Haswell-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Haswell-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Haswell-v3'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Haswell-v4'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Icelake-Server'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Icelake-Server-noTSX'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Icelake-Server-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Icelake-Server-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Icelake-Server-v3'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Icelake-Server-v4'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Icelake-Server-v5'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Icelake-Server-v6'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Icelake-Server-v7'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='IvyBridge'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='IvyBridge-IBRS'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='IvyBridge-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='IvyBridge-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='KnightsMill'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-4fmaps'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-4vnniw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512er'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512pf'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ss'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='KnightsMill-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-4fmaps'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-4vnniw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512er'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512pf'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ss'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Opteron_G4'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fma4'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xop'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Opteron_G4-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fma4'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xop'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Opteron_G5'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fma4'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='tbm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xop'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Opteron_G5-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fma4'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='tbm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xop'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='SapphireRapids'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-int8'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-tile'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-fp16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrc'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fzrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='serialize'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xfd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='SapphireRapids-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-int8'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-tile'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-fp16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrc'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fzrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='serialize'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xfd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='SapphireRapids-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-int8'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-tile'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-fp16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fbsdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrc'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fzrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='psdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='serialize'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xfd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='SapphireRapids-v3'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-int8'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-tile'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-fp16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='cldemote'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fbsdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrc'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fzrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='movdir64b'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='movdiri'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='psdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='serialize'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ss'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xfd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='SierraForest'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-ne-convert'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-vnni-int8'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='cmpccxadd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fbsdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='mcdt-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pbrsb-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='psdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='serialize'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='SierraForest-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-ne-convert'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-vnni-int8'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='cmpccxadd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fbsdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='mcdt-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pbrsb-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='psdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='serialize'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Client'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Client-IBRS'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Client-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Client-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Client-v3'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Client-v4'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Server'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Server-IBRS'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Server-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Server-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Server-v3'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Server-v4'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Server-v5'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Snowridge'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='cldemote'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='core-capability'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='movdir64b'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='movdiri'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='mpx'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='split-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Snowridge-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='cldemote'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='core-capability'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='movdir64b'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='movdiri'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='mpx'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='split-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Snowridge-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='cldemote'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='core-capability'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='movdir64b'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='movdiri'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='split-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Snowridge-v3'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='cldemote'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='core-capability'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='movdir64b'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='movdiri'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='split-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Snowridge-v4'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='cldemote'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='movdir64b'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='movdiri'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='athlon'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='3dnow'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='3dnowext'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='athlon-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='3dnow'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='3dnowext'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='core2duo'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ss'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='core2duo-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ss'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='coreduo'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ss'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='coreduo-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ss'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='n270'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ss'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='n270-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ss'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='phenom'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='3dnow'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='3dnowext'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='phenom-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='3dnow'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='3dnowext'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </mode>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  <memoryBacking supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <enum name='sourceType'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <value>file</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <value>anonymous</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <value>memfd</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  </memoryBacking>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  <devices>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <disk supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='diskDevice'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>disk</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>cdrom</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>floppy</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>lun</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='bus'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>fdc</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>scsi</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>virtio</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>usb</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>sata</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='model'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>virtio</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>virtio-transitional</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>virtio-non-transitional</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <graphics supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='type'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>vnc</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>egl-headless</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>dbus</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </graphics>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <video supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='modelType'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>vga</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>cirrus</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>virtio</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>none</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>bochs</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>ramfb</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </video>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <hostdev supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='mode'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>subsystem</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='startupPolicy'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>default</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>mandatory</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>requisite</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>optional</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='subsysType'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>usb</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>pci</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>scsi</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='capsType'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='pciBackend'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </hostdev>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <rng supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='model'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>virtio</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>virtio-transitional</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>virtio-non-transitional</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='backendModel'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>random</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>egd</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>builtin</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </rng>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <filesystem supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='driverType'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>path</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>handle</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>virtiofs</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </filesystem>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <tpm supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='model'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>tpm-tis</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>tpm-crb</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='backendModel'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>emulator</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>external</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='backendVersion'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>2.0</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </tpm>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <redirdev supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='bus'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>usb</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </redirdev>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <channel supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='type'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>pty</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>unix</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </channel>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <crypto supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='model'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='type'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>qemu</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='backendModel'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>builtin</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </crypto>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <interface supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='backendType'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>default</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>passt</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </interface>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <panic supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='model'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>isa</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>hyperv</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </panic>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  </devices>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  <features>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <gic supported='no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <vmcoreinfo supported='yes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <genid supported='yes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <backingStoreInput supported='yes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <backup supported='yes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <async-teardown supported='yes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <ps2 supported='yes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <sev supported='no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <sgx supported='no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <hyperv supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='features'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>relaxed</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>vapic</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>spinlocks</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>vpindex</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>runtime</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>synic</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>stimer</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>reset</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>vendor_id</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>frequencies</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>reenlightenment</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>tlbflush</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>ipi</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>avic</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>emsr_bitmap</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>xmm_input</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </hyperv>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <launchSecurity supported='no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  </features>
Oct 14 04:43:04 np0005486808 nova_compute[259627]: </domainCapabilities>
Oct 14 04:43:04 np0005486808 nova_compute[259627]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.768 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct 14 04:43:04 np0005486808 nova_compute[259627]: <domainCapabilities>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  <path>/usr/libexec/qemu-kvm</path>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  <domain>kvm</domain>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  <arch>i686</arch>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  <vcpu max='240'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  <iothreads supported='yes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  <os supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <enum name='firmware'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <loader supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='type'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>rom</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>pflash</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='readonly'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>yes</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>no</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='secure'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>no</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </loader>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  </os>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  <cpu>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <mode name='host-passthrough' supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='hostPassthroughMigratable'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>on</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>off</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </mode>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <mode name='maximum' supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='maximumMigratable'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>on</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>off</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </mode>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <mode name='host-model' supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model fallback='forbid'>EPYC-Rome</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <vendor>AMD</vendor>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='x2apic'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='tsc-deadline'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='hypervisor'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='tsc_adjust'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='spec-ctrl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='stibp'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='arch-capabilities'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='ssbd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='cmp_legacy'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='overflow-recov'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='succor'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='ibrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='amd-ssbd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='virt-ssbd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='lbrv'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='tsc-scale'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='vmcb-clean'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='flushbyasid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='pause-filter'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='pfthreshold'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='svme-addr-chk'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='lfence-always-serializing'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='rdctl-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='mds-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='pschange-mc-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='gds-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='rfds-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='disable' name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </mode>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <mode name='custom' supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Broadwell'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Broadwell-IBRS'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Broadwell-noTSX'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Broadwell-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Broadwell-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Broadwell-v3'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Broadwell-v4'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Cascadelake-Server'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Cascadelake-Server-noTSX'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Cascadelake-Server-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Cascadelake-Server-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Cascadelake-Server-v3'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Cascadelake-Server-v4'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Cascadelake-Server-v5'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Cooperlake'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Cooperlake-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Cooperlake-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Denverton'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='mpx'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Denverton-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='mpx'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Denverton-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Denverton-v3'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Dhyana-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='EPYC-Genoa'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amd-psfd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='auto-ibrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='no-nested-data-bp'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='null-sel-clr-base'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='stibp-always-on'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='EPYC-Genoa-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amd-psfd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='auto-ibrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='no-nested-data-bp'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='null-sel-clr-base'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='stibp-always-on'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='EPYC-Milan'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='EPYC-Milan-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='EPYC-Milan-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amd-psfd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='no-nested-data-bp'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='null-sel-clr-base'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='stibp-always-on'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='EPYC-Rome'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='EPYC-Rome-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='EPYC-Rome-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='EPYC-Rome-v3'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='EPYC-v3'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='EPYC-v4'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='GraniteRapids'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-fp16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-int8'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-tile'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-fp16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fbsdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrc'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fzrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='mcdt-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pbrsb-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='prefetchiti'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='psdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='serialize'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xfd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='GraniteRapids-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-fp16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-int8'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-tile'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-fp16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fbsdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrc'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fzrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='mcdt-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pbrsb-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='prefetchiti'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='psdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='serialize'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xfd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='GraniteRapids-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-fp16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-int8'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-tile'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx10'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx10-128'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx10-256'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx10-512'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-fp16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='cldemote'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fbsdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrc'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fzrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='mcdt-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='movdir64b'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='movdiri'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pbrsb-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='prefetchiti'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='psdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='serialize'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ss'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xfd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Haswell'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Haswell-IBRS'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Haswell-noTSX'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Haswell-noTSX-IBRS'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Haswell-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Haswell-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Haswell-v3'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Haswell-v4'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Icelake-Server'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Icelake-Server-noTSX'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Icelake-Server-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Icelake-Server-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Icelake-Server-v3'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Icelake-Server-v4'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Icelake-Server-v5'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Icelake-Server-v6'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Icelake-Server-v7'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='IvyBridge'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='IvyBridge-IBRS'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='IvyBridge-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='IvyBridge-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='KnightsMill'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-4fmaps'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-4vnniw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512er'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512pf'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ss'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='KnightsMill-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-4fmaps'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-4vnniw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512er'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512pf'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ss'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Opteron_G4'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fma4'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xop'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Opteron_G4-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fma4'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xop'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Opteron_G5'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fma4'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='tbm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xop'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Opteron_G5-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fma4'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='tbm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xop'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='SapphireRapids'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-int8'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-tile'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-fp16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrc'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fzrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='serialize'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xfd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='SapphireRapids-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-int8'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-tile'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-fp16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrc'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fzrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='serialize'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xfd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='SapphireRapids-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-int8'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-tile'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-fp16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fbsdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrc'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fzrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='psdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='serialize'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xfd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='SapphireRapids-v3'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-int8'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-tile'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-fp16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='cldemote'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fbsdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrc'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fzrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='movdir64b'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='movdiri'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='psdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='serialize'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ss'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xfd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='SierraForest'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-ne-convert'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-vnni-int8'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='cmpccxadd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fbsdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='mcdt-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pbrsb-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='psdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='serialize'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='SierraForest-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-ne-convert'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-vnni-int8'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='cmpccxadd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fbsdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='mcdt-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pbrsb-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='psdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='serialize'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Client'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Client-IBRS'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Client-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Client-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Client-v3'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Client-v4'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Server'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Server-IBRS'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Server-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Server-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Server-v3'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Server-v4'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Server-v5'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Snowridge'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='cldemote'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='core-capability'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='movdir64b'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='movdiri'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='mpx'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='split-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Snowridge-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='cldemote'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='core-capability'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='movdir64b'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='movdiri'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='mpx'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='split-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Snowridge-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='cldemote'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='core-capability'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='movdir64b'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='movdiri'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='split-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Snowridge-v3'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='cldemote'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='core-capability'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='movdir64b'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='movdiri'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='split-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Snowridge-v4'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='cldemote'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='movdir64b'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='movdiri'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='athlon'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='3dnow'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='3dnowext'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='athlon-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='3dnow'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='3dnowext'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='core2duo'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ss'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='core2duo-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ss'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='coreduo'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ss'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='coreduo-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ss'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='n270'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ss'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='n270-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ss'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='phenom'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='3dnow'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='3dnowext'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='phenom-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='3dnow'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='3dnowext'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </mode>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  <memoryBacking supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <enum name='sourceType'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <value>file</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <value>anonymous</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <value>memfd</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  </memoryBacking>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  <devices>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <disk supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='diskDevice'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>disk</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>cdrom</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>floppy</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>lun</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='bus'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>ide</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>fdc</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>scsi</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>virtio</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>usb</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>sata</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='model'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>virtio</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>virtio-transitional</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>virtio-non-transitional</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <graphics supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='type'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>vnc</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>egl-headless</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>dbus</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </graphics>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <video supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='modelType'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>vga</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>cirrus</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>virtio</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>none</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>bochs</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>ramfb</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </video>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <hostdev supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='mode'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>subsystem</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='startupPolicy'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>default</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>mandatory</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>requisite</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>optional</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='subsysType'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>usb</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>pci</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>scsi</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='capsType'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='pciBackend'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </hostdev>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <rng supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='model'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>virtio</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>virtio-transitional</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>virtio-non-transitional</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='backendModel'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>random</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>egd</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>builtin</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </rng>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <filesystem supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='driverType'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>path</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>handle</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>virtiofs</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </filesystem>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <tpm supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='model'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>tpm-tis</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>tpm-crb</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='backendModel'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>emulator</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>external</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='backendVersion'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>2.0</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </tpm>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <redirdev supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='bus'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>usb</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </redirdev>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <channel supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='type'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>pty</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>unix</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </channel>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <crypto supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='model'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='type'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>qemu</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='backendModel'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>builtin</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </crypto>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <interface supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='backendType'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>default</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>passt</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </interface>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <panic supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='model'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>isa</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>hyperv</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </panic>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  </devices>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  <features>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <gic supported='no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <vmcoreinfo supported='yes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <genid supported='yes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <backingStoreInput supported='yes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <backup supported='yes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <async-teardown supported='yes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <ps2 supported='yes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <sev supported='no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <sgx supported='no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <hyperv supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='features'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>relaxed</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>vapic</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>spinlocks</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>vpindex</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>runtime</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>synic</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>stimer</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>reset</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>vendor_id</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>frequencies</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>reenlightenment</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>tlbflush</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>ipi</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>avic</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>emsr_bitmap</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>xmm_input</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </hyperv>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <launchSecurity supported='no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  </features>
Oct 14 04:43:04 np0005486808 nova_compute[259627]: </domainCapabilities>
Oct 14 04:43:04 np0005486808 nova_compute[259627]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.803 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.808 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct 14 04:43:04 np0005486808 nova_compute[259627]: <domainCapabilities>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  <path>/usr/libexec/qemu-kvm</path>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  <domain>kvm</domain>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  <machine>pc-q35-rhel9.6.0</machine>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  <arch>x86_64</arch>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  <vcpu max='4096'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  <iothreads supported='yes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  <os supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <enum name='firmware'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <value>efi</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <loader supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='type'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>rom</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>pflash</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='readonly'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>yes</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>no</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='secure'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>yes</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>no</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </loader>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  </os>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  <cpu>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <mode name='host-passthrough' supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='hostPassthroughMigratable'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>on</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>off</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </mode>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <mode name='maximum' supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='maximumMigratable'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>on</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>off</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </mode>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <mode name='host-model' supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model fallback='forbid'>EPYC-Rome</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <vendor>AMD</vendor>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='x2apic'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='tsc-deadline'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='hypervisor'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='tsc_adjust'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='spec-ctrl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='stibp'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='arch-capabilities'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='ssbd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='cmp_legacy'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='overflow-recov'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='succor'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='ibrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='amd-ssbd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='virt-ssbd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='lbrv'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='tsc-scale'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='vmcb-clean'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='flushbyasid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='pause-filter'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='pfthreshold'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='svme-addr-chk'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='lfence-always-serializing'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='rdctl-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='mds-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='pschange-mc-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='gds-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='rfds-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='disable' name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </mode>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <mode name='custom' supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Broadwell'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Broadwell-IBRS'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Broadwell-noTSX'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Broadwell-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Broadwell-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Broadwell-v3'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Broadwell-v4'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Cascadelake-Server'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Cascadelake-Server-noTSX'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Cascadelake-Server-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Cascadelake-Server-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Cascadelake-Server-v3'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Cascadelake-Server-v4'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Cascadelake-Server-v5'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Cooperlake'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Cooperlake-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Cooperlake-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Denverton'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='mpx'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Denverton-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='mpx'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Denverton-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Denverton-v3'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Dhyana-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='EPYC-Genoa'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amd-psfd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='auto-ibrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='no-nested-data-bp'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='null-sel-clr-base'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='stibp-always-on'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='EPYC-Genoa-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amd-psfd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='auto-ibrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='no-nested-data-bp'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='null-sel-clr-base'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='stibp-always-on'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='EPYC-Milan'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='EPYC-Milan-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='EPYC-Milan-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amd-psfd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='no-nested-data-bp'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='null-sel-clr-base'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='stibp-always-on'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='EPYC-Rome'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='EPYC-Rome-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='EPYC-Rome-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='EPYC-Rome-v3'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='EPYC-v3'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='EPYC-v4'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='GraniteRapids'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-fp16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-int8'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-tile'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-fp16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fbsdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrc'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fzrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='mcdt-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pbrsb-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='prefetchiti'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='psdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='serialize'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xfd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='GraniteRapids-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-fp16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-int8'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-tile'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-fp16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fbsdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrc'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fzrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='mcdt-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pbrsb-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='prefetchiti'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='psdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='serialize'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xfd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='GraniteRapids-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-fp16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-int8'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-tile'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx10'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx10-128'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx10-256'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx10-512'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-fp16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='cldemote'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fbsdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrc'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fzrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='mcdt-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='movdir64b'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='movdiri'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pbrsb-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='prefetchiti'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='psdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='serialize'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ss'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xfd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Haswell'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Haswell-IBRS'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Haswell-noTSX'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Haswell-noTSX-IBRS'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Haswell-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Haswell-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Haswell-v3'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Haswell-v4'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Icelake-Server'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Icelake-Server-noTSX'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Icelake-Server-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Icelake-Server-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Icelake-Server-v3'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Icelake-Server-v4'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Icelake-Server-v5'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Icelake-Server-v6'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Icelake-Server-v7'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='IvyBridge'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='IvyBridge-IBRS'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='IvyBridge-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='IvyBridge-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='KnightsMill'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-4fmaps'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-4vnniw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512er'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512pf'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ss'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='KnightsMill-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-4fmaps'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-4vnniw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512er'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512pf'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ss'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Opteron_G4'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fma4'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xop'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Opteron_G4-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fma4'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xop'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Opteron_G5'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fma4'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='tbm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xop'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Opteron_G5-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fma4'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='tbm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xop'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='SapphireRapids'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-int8'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-tile'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-fp16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrc'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fzrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='serialize'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xfd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='SapphireRapids-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-int8'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-tile'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-fp16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrc'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fzrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='serialize'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xfd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='SapphireRapids-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-int8'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-tile'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-fp16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fbsdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrc'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fzrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='psdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='serialize'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xfd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='SapphireRapids-v3'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-int8'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-tile'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-fp16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='cldemote'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fbsdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrc'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fzrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='movdir64b'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='movdiri'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='psdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='serialize'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ss'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xfd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='SierraForest'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-ne-convert'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-vnni-int8'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='cmpccxadd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fbsdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='mcdt-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pbrsb-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='psdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='serialize'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='SierraForest-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-ne-convert'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-vnni-int8'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='cmpccxadd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fbsdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='mcdt-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pbrsb-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='psdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='serialize'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Client'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Client-IBRS'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Client-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Client-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Client-v3'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Client-v4'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Server'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Server-IBRS'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Server-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Server-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Server-v3'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Server-v4'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Server-v5'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Snowridge'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='cldemote'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='core-capability'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='movdir64b'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='movdiri'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='mpx'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='split-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Snowridge-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='cldemote'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='core-capability'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='movdir64b'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='movdiri'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='mpx'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='split-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Snowridge-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='cldemote'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='core-capability'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='movdir64b'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='movdiri'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='split-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Snowridge-v3'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='cldemote'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='core-capability'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='movdir64b'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='movdiri'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='split-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Snowridge-v4'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='cldemote'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='movdir64b'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='movdiri'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='athlon'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='3dnow'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='3dnowext'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='athlon-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='3dnow'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='3dnowext'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='core2duo'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ss'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='core2duo-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ss'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='coreduo'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ss'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='coreduo-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ss'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='n270'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ss'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='n270-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ss'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='phenom'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='3dnow'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='3dnowext'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='phenom-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='3dnow'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='3dnowext'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </mode>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  <memoryBacking supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <enum name='sourceType'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <value>file</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <value>anonymous</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <value>memfd</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  </memoryBacking>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  <devices>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <disk supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='diskDevice'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>disk</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>cdrom</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>floppy</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>lun</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='bus'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>fdc</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>scsi</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>virtio</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>usb</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>sata</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='model'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>virtio</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>virtio-transitional</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>virtio-non-transitional</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <graphics supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='type'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>vnc</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>egl-headless</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>dbus</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </graphics>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <video supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='modelType'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>vga</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>cirrus</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>virtio</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>none</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>bochs</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>ramfb</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </video>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <hostdev supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='mode'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>subsystem</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='startupPolicy'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>default</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>mandatory</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>requisite</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>optional</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='subsysType'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>usb</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>pci</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>scsi</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='capsType'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='pciBackend'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </hostdev>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <rng supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='model'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>virtio</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>virtio-transitional</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>virtio-non-transitional</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='backendModel'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>random</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>egd</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>builtin</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </rng>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <filesystem supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='driverType'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>path</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>handle</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>virtiofs</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </filesystem>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <tpm supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='model'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>tpm-tis</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>tpm-crb</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='backendModel'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>emulator</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>external</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='backendVersion'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>2.0</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </tpm>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <redirdev supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='bus'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>usb</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </redirdev>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <channel supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='type'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>pty</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>unix</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </channel>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <crypto supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='model'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='type'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>qemu</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='backendModel'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>builtin</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </crypto>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <interface supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='backendType'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>default</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>passt</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </interface>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <panic supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='model'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>isa</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>hyperv</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </panic>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  </devices>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  <features>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <gic supported='no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <vmcoreinfo supported='yes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <genid supported='yes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <backingStoreInput supported='yes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <backup supported='yes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <async-teardown supported='yes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <ps2 supported='yes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <sev supported='no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <sgx supported='no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <hyperv supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='features'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>relaxed</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>vapic</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>spinlocks</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>vpindex</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>runtime</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>synic</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>stimer</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>reset</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>vendor_id</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>frequencies</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>reenlightenment</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>tlbflush</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>ipi</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>avic</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>emsr_bitmap</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>xmm_input</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </hyperv>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <launchSecurity supported='no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  </features>
Oct 14 04:43:04 np0005486808 nova_compute[259627]: </domainCapabilities>
Oct 14 04:43:04 np0005486808 nova_compute[259627]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 14 04:43:04 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.871 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct 14 04:43:04 np0005486808 nova_compute[259627]: <domainCapabilities>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  <path>/usr/libexec/qemu-kvm</path>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  <domain>kvm</domain>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  <arch>x86_64</arch>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  <vcpu max='240'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  <iothreads supported='yes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  <os supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <enum name='firmware'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <loader supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='type'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>rom</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>pflash</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='readonly'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>yes</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>no</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='secure'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>no</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </loader>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  </os>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:  <cpu>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <mode name='host-passthrough' supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='hostPassthroughMigratable'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>on</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>off</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </mode>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <mode name='maximum' supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <enum name='maximumMigratable'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>on</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <value>off</value>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </mode>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <mode name='host-model' supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model fallback='forbid'>EPYC-Rome</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <vendor>AMD</vendor>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='x2apic'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='tsc-deadline'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='hypervisor'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='tsc_adjust'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='spec-ctrl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='stibp'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='arch-capabilities'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='ssbd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='cmp_legacy'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='overflow-recov'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='succor'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='ibrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='amd-ssbd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='virt-ssbd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='lbrv'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='tsc-scale'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='vmcb-clean'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='flushbyasid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='pause-filter'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='pfthreshold'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='svme-addr-chk'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='lfence-always-serializing'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='rdctl-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='mds-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='pschange-mc-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='gds-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='require' name='rfds-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <feature policy='disable' name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    </mode>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:    <mode name='custom' supported='yes'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Broadwell'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Broadwell-IBRS'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Broadwell-noTSX'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Broadwell-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Broadwell-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Broadwell-v3'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Broadwell-v4'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Cascadelake-Server'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Cascadelake-Server-noTSX'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Cascadelake-Server-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Cascadelake-Server-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Cascadelake-Server-v3'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Cascadelake-Server-v4'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Cascadelake-Server-v5'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Cooperlake'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Cooperlake-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Cooperlake-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Denverton'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='mpx'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Denverton-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='mpx'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Denverton-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Denverton-v3'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Dhyana-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='EPYC-Genoa'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amd-psfd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='auto-ibrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='no-nested-data-bp'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='null-sel-clr-base'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='stibp-always-on'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='EPYC-Genoa-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amd-psfd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='auto-ibrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='no-nested-data-bp'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='null-sel-clr-base'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='stibp-always-on'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='EPYC-Milan'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='EPYC-Milan-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='EPYC-Milan-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amd-psfd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='no-nested-data-bp'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='null-sel-clr-base'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='stibp-always-on'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='EPYC-Rome'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='EPYC-Rome-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='EPYC-Rome-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='EPYC-Rome-v3'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='EPYC-v3'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='EPYC-v4'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='GraniteRapids'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-fp16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-int8'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-tile'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-fp16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fbsdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrc'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fzrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='mcdt-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pbrsb-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='prefetchiti'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='psdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='serialize'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xfd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='GraniteRapids-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-fp16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-int8'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-tile'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-fp16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fbsdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrc'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fzrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='mcdt-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pbrsb-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='prefetchiti'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='psdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='serialize'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xfd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='GraniteRapids-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-fp16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-int8'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-tile'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx10'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx10-128'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx10-256'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx10-512'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-fp16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='cldemote'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fbsdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrc'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fzrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='mcdt-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='movdir64b'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='movdiri'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pbrsb-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='prefetchiti'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='psdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='serialize'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ss'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xfd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Haswell'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Haswell-IBRS'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Haswell-noTSX'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Haswell-noTSX-IBRS'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Haswell-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Haswell-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Haswell-v3'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Haswell-v4'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Icelake-Server'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Icelake-Server-noTSX'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Icelake-Server-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Icelake-Server-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Icelake-Server-v3'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Icelake-Server-v4'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Icelake-Server-v5'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Icelake-Server-v6'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Icelake-Server-v7'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='IvyBridge'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='IvyBridge-IBRS'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='IvyBridge-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='IvyBridge-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='KnightsMill'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-4fmaps'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-4vnniw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512er'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512pf'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ss'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='KnightsMill-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-4fmaps'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-4vnniw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512er'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512pf'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ss'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Opteron_G4'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fma4'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xop'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Opteron_G4-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fma4'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xop'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Opteron_G5'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fma4'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='tbm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xop'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='Opteron_G5-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fma4'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='tbm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xop'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='SapphireRapids'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-int8'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-tile'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-fp16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrc'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fzrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='serialize'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xfd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='SapphireRapids-v1'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-int8'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-tile'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-fp16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrc'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fzrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='serialize'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xfd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='SapphireRapids-v2'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-int8'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-tile'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-fp16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fbsdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrc'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fzrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='psdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='serialize'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xfd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:      <blockers model='SapphireRapids-v3'>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-int8'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='amx-tile'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx-vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-bf16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-fp16'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512-vpopcntdq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bitalg'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512ifma'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vbmi2'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='avx512vnni'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='cldemote'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fbsdp-no'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrc'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fsrs'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='fzrm'/>
Oct 14 04:43:04 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='la57'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='movdir64b'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='movdiri'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='psdp-no'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='serialize'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='ss'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='taa-no'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='tsx-ldtrk'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='xfd'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <blockers model='SierraForest'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx-ifma'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx-ne-convert'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx-vnni'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx-vnni-int8'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='cmpccxadd'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='fbsdp-no'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='fsrs'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='mcdt-no'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='pbrsb-no'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='psdp-no'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='serialize'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <blockers model='SierraForest-v1'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx-ifma'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx-ne-convert'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx-vnni'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx-vnni-int8'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='bus-lock-detect'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='cmpccxadd'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='fbsdp-no'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='fsrm'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='fsrs'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='ibrs-all'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='mcdt-no'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='pbrsb-no'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='psdp-no'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='sbdr-ssdp-no'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='serialize'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='vaes'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='vpclmulqdq'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Client'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Client-IBRS'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Client-v1'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Client-v2'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Client-v3'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Client-v4'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Server'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Server-IBRS'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Server-v1'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Server-v2'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='hle'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='rtm'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Server-v3'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Server-v4'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <blockers model='Skylake-Server-v5'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx512bw'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx512cd'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx512dq'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx512f'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='avx512vl'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='invpcid'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='pcid'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='pku'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <blockers model='Snowridge'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='cldemote'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='core-capability'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='movdir64b'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='movdiri'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='mpx'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='split-lock-detect'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <blockers model='Snowridge-v1'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='cldemote'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='core-capability'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='movdir64b'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='movdiri'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='mpx'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='split-lock-detect'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <blockers model='Snowridge-v2'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='cldemote'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='core-capability'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='movdir64b'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='movdiri'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='split-lock-detect'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <blockers model='Snowridge-v3'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='cldemote'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='core-capability'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='movdir64b'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='movdiri'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='split-lock-detect'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <blockers model='Snowridge-v4'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='cldemote'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='erms'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='gfni'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='movdir64b'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='movdiri'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='xsaves'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <blockers model='athlon'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='3dnow'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='3dnowext'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <blockers model='athlon-v1'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='3dnow'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='3dnowext'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <blockers model='core2duo'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='ss'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <blockers model='core2duo-v1'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='ss'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <blockers model='coreduo'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='ss'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <blockers model='coreduo-v1'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='ss'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <blockers model='n270'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='ss'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <blockers model='n270-v1'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='ss'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <blockers model='phenom'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='3dnow'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='3dnowext'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <blockers model='phenom-v1'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='3dnow'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <feature name='3dnowext'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </blockers>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:    </mode>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:  <memoryBacking supported='yes'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:    <enum name='sourceType'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <value>file</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <value>anonymous</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <value>memfd</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:    </enum>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:  </memoryBacking>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:  <devices>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:    <disk supported='yes'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <enum name='diskDevice'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>disk</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>cdrom</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>floppy</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>lun</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <enum name='bus'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>ide</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>fdc</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>scsi</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>virtio</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>usb</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>sata</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <enum name='model'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>virtio</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>virtio-transitional</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>virtio-non-transitional</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:    <graphics supported='yes'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <enum name='type'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>vnc</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>egl-headless</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>dbus</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:    </graphics>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:    <video supported='yes'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <enum name='modelType'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>vga</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>cirrus</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>virtio</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>none</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>bochs</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>ramfb</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:    </video>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:    <hostdev supported='yes'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <enum name='mode'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>subsystem</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <enum name='startupPolicy'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>default</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>mandatory</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>requisite</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>optional</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <enum name='subsysType'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>usb</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>pci</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>scsi</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <enum name='capsType'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <enum name='pciBackend'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:    </hostdev>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:    <rng supported='yes'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <enum name='model'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>virtio</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>virtio-transitional</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>virtio-non-transitional</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <enum name='backendModel'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>random</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>egd</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>builtin</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:    </rng>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:    <filesystem supported='yes'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <enum name='driverType'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>path</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>handle</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>virtiofs</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:    </filesystem>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:    <tpm supported='yes'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <enum name='model'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>tpm-tis</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>tpm-crb</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <enum name='backendModel'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>emulator</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>external</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <enum name='backendVersion'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>2.0</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:    </tpm>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:    <redirdev supported='yes'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <enum name='bus'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>usb</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:    </redirdev>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:    <channel supported='yes'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <enum name='type'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>pty</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>unix</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:    </channel>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:    <crypto supported='yes'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <enum name='model'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <enum name='type'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>qemu</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <enum name='backendModel'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>builtin</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:    </crypto>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:    <interface supported='yes'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <enum name='backendType'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>default</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>passt</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:    </interface>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:    <panic supported='yes'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <enum name='model'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>isa</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>hyperv</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:    </panic>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:  </devices>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:  <features>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:    <gic supported='no'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:    <vmcoreinfo supported='yes'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:    <genid supported='yes'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:    <backingStoreInput supported='yes'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:    <backup supported='yes'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:    <async-teardown supported='yes'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:    <ps2 supported='yes'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:    <sev supported='no'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:    <sgx supported='no'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:    <hyperv supported='yes'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      <enum name='features'>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>relaxed</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>vapic</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>spinlocks</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>vpindex</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>runtime</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>synic</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>stimer</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>reset</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>vendor_id</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>frequencies</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>reenlightenment</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>tlbflush</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>ipi</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>avic</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>emsr_bitmap</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:        <value>xmm_input</value>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:      </enum>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:    </hyperv>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:    <launchSecurity supported='no'/>
Oct 14 04:43:05 np0005486808 nova_compute[259627]:  </features>
Oct 14 04:43:05 np0005486808 nova_compute[259627]: </domainCapabilities>
Oct 14 04:43:05 np0005486808 nova_compute[259627]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct 14 04:43:05 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.924 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Oct 14 04:43:05 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.925 2 INFO nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Secure Boot support detected#033[00m
Oct 14 04:43:05 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.926 2 INFO nova.virt.libvirt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Oct 14 04:43:05 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.926 2 INFO nova.virt.libvirt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Oct 14 04:43:05 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.935 2 DEBUG nova.virt.libvirt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Oct 14 04:43:05 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.959 2 INFO nova.virt.node [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Determined node identity 92105e1d-1743-46e3-a494-858b4331398a from /var/lib/nova/compute_id#033[00m
Oct 14 04:43:05 np0005486808 nova_compute[259627]: 2025-10-14 08:43:04.977 2 WARNING nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Compute nodes ['92105e1d-1743-46e3-a494-858b4331398a'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Oct 14 04:43:05 np0005486808 nova_compute[259627]: 2025-10-14 08:43:05.009 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Oct 14 04:43:05 np0005486808 nova_compute[259627]: 2025-10-14 08:43:05.057 2 WARNING nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Oct 14 04:43:05 np0005486808 nova_compute[259627]: 2025-10-14 08:43:05.057 2 DEBUG oslo_concurrency.lockutils [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:43:05 np0005486808 nova_compute[259627]: 2025-10-14 08:43:05.058 2 DEBUG oslo_concurrency.lockutils [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:43:05 np0005486808 nova_compute[259627]: 2025-10-14 08:43:05.058 2 DEBUG oslo_concurrency.lockutils [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:43:05 np0005486808 nova_compute[259627]: 2025-10-14 08:43:05.058 2 DEBUG nova.compute.resource_tracker [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 04:43:05 np0005486808 nova_compute[259627]: 2025-10-14 08:43:05.058 2 DEBUG oslo_concurrency.processutils [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:43:05 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:43:05 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:43:05 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:43:05 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:43:05 np0005486808 podman[260236]: 2025-10-14 08:43:05.212953077 +0000 UTC m=+0.052808881 container create 144daf36656d221f2c948139754491d118e16b9d22a48e61d46b087a599f2e53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_benz, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 14 04:43:05 np0005486808 systemd[1]: Started libpod-conmon-144daf36656d221f2c948139754491d118e16b9d22a48e61d46b087a599f2e53.scope.
Oct 14 04:43:05 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:43:05 np0005486808 podman[260236]: 2025-10-14 08:43:05.19373779 +0000 UTC m=+0.033593614 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:43:05 np0005486808 podman[260236]: 2025-10-14 08:43:05.317484082 +0000 UTC m=+0.157339886 container init 144daf36656d221f2c948139754491d118e16b9d22a48e61d46b087a599f2e53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:43:05 np0005486808 podman[260236]: 2025-10-14 08:43:05.323797679 +0000 UTC m=+0.163653483 container start 144daf36656d221f2c948139754491d118e16b9d22a48e61d46b087a599f2e53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 14 04:43:05 np0005486808 podman[260236]: 2025-10-14 08:43:05.326730217 +0000 UTC m=+0.166586021 container attach 144daf36656d221f2c948139754491d118e16b9d22a48e61d46b087a599f2e53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:43:05 np0005486808 distracted_benz[260271]: 167 167
Oct 14 04:43:05 np0005486808 podman[260236]: 2025-10-14 08:43:05.333475394 +0000 UTC m=+0.173331198 container died 144daf36656d221f2c948139754491d118e16b9d22a48e61d46b087a599f2e53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_benz, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Oct 14 04:43:05 np0005486808 systemd[1]: libpod-144daf36656d221f2c948139754491d118e16b9d22a48e61d46b087a599f2e53.scope: Deactivated successfully.
Oct 14 04:43:05 np0005486808 systemd[1]: var-lib-containers-storage-overlay-f711bdda703c37242ffc8c7e7c466368640bc76200ccfa8073db65d4e8efa0d4-merged.mount: Deactivated successfully.
Oct 14 04:43:05 np0005486808 podman[260236]: 2025-10-14 08:43:05.377637943 +0000 UTC m=+0.217493737 container remove 144daf36656d221f2c948139754491d118e16b9d22a48e61d46b087a599f2e53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_benz, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:43:05 np0005486808 systemd[1]: libpod-conmon-144daf36656d221f2c948139754491d118e16b9d22a48e61d46b087a599f2e53.scope: Deactivated successfully.
Oct 14 04:43:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:43:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/106565973' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:43:05 np0005486808 nova_compute[259627]: 2025-10-14 08:43:05.528 2 DEBUG oslo_concurrency.processutils [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:43:05 np0005486808 podman[260296]: 2025-10-14 08:43:05.548748319 +0000 UTC m=+0.051399448 container create 3e04f76b2c7ae3ac92e330296dcee8aaf11195ef26af68031f41bb45d49026fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_carver, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 14 04:43:05 np0005486808 systemd[1]: Starting libvirt nodedev daemon...
Oct 14 04:43:05 np0005486808 systemd[1]: Started libpod-conmon-3e04f76b2c7ae3ac92e330296dcee8aaf11195ef26af68031f41bb45d49026fc.scope.
Oct 14 04:43:05 np0005486808 systemd[1]: Started libvirt nodedev daemon.
Oct 14 04:43:05 np0005486808 podman[260296]: 2025-10-14 08:43:05.526683235 +0000 UTC m=+0.029334354 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:43:05 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:43:05 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5120635142cd7c31a3cdcfb375e6faf82ebe1d38693d6743fb3a3c54b4f2f341/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:43:05 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5120635142cd7c31a3cdcfb375e6faf82ebe1d38693d6743fb3a3c54b4f2f341/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:43:05 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5120635142cd7c31a3cdcfb375e6faf82ebe1d38693d6743fb3a3c54b4f2f341/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:43:05 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5120635142cd7c31a3cdcfb375e6faf82ebe1d38693d6743fb3a3c54b4f2f341/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:43:05 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5120635142cd7c31a3cdcfb375e6faf82ebe1d38693d6743fb3a3c54b4f2f341/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:43:05 np0005486808 podman[260296]: 2025-10-14 08:43:05.701709202 +0000 UTC m=+0.204360351 container init 3e04f76b2c7ae3ac92e330296dcee8aaf11195ef26af68031f41bb45d49026fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_carver, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:43:05 np0005486808 podman[260296]: 2025-10-14 08:43:05.709454502 +0000 UTC m=+0.212105611 container start 3e04f76b2c7ae3ac92e330296dcee8aaf11195ef26af68031f41bb45d49026fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_carver, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 14 04:43:05 np0005486808 podman[260296]: 2025-10-14 08:43:05.71451466 +0000 UTC m=+0.217165779 container attach 3e04f76b2c7ae3ac92e330296dcee8aaf11195ef26af68031f41bb45d49026fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_carver, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 14 04:43:05 np0005486808 nova_compute[259627]: 2025-10-14 08:43:05.875 2 WARNING nova.virt.libvirt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:43:05 np0005486808 nova_compute[259627]: 2025-10-14 08:43:05.877 2 DEBUG nova.compute.resource_tracker [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5132MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 04:43:05 np0005486808 nova_compute[259627]: 2025-10-14 08:43:05.877 2 DEBUG oslo_concurrency.lockutils [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:43:05 np0005486808 nova_compute[259627]: 2025-10-14 08:43:05.877 2 DEBUG oslo_concurrency.lockutils [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:43:05 np0005486808 nova_compute[259627]: 2025-10-14 08:43:05.905 2 WARNING nova.compute.resource_tracker [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] No compute node record for compute-0.ctlplane.example.com:92105e1d-1743-46e3-a494-858b4331398a: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 92105e1d-1743-46e3-a494-858b4331398a could not be found.#033[00m
Oct 14 04:43:05 np0005486808 nova_compute[259627]: 2025-10-14 08:43:05.930 2 INFO nova.compute.resource_tracker [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 92105e1d-1743-46e3-a494-858b4331398a#033[00m
Oct 14 04:43:06 np0005486808 nova_compute[259627]: 2025-10-14 08:43:06.001 2 DEBUG nova.compute.resource_tracker [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 04:43:06 np0005486808 nova_compute[259627]: 2025-10-14 08:43:06.002 2 DEBUG nova.compute.resource_tracker [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 04:43:06 np0005486808 lucid_carver[260333]: --> passed data devices: 0 physical, 3 LVM
Oct 14 04:43:06 np0005486808 lucid_carver[260333]: --> relative data size: 1.0
Oct 14 04:43:06 np0005486808 lucid_carver[260333]: --> All data devices are unavailable
Oct 14 04:43:06 np0005486808 systemd[1]: libpod-3e04f76b2c7ae3ac92e330296dcee8aaf11195ef26af68031f41bb45d49026fc.scope: Deactivated successfully.
Oct 14 04:43:06 np0005486808 podman[260296]: 2025-10-14 08:43:06.809240401 +0000 UTC m=+1.311891510 container died 3e04f76b2c7ae3ac92e330296dcee8aaf11195ef26af68031f41bb45d49026fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:43:06 np0005486808 systemd[1]: libpod-3e04f76b2c7ae3ac92e330296dcee8aaf11195ef26af68031f41bb45d49026fc.scope: Consumed 1.034s CPU time.
Oct 14 04:43:06 np0005486808 systemd[1]: var-lib-containers-storage-overlay-5120635142cd7c31a3cdcfb375e6faf82ebe1d38693d6743fb3a3c54b4f2f341-merged.mount: Deactivated successfully.
Oct 14 04:43:06 np0005486808 podman[260296]: 2025-10-14 08:43:06.871274566 +0000 UTC m=+1.373925685 container remove 3e04f76b2c7ae3ac92e330296dcee8aaf11195ef26af68031f41bb45d49026fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:43:06 np0005486808 systemd[1]: libpod-conmon-3e04f76b2c7ae3ac92e330296dcee8aaf11195ef26af68031f41bb45d49026fc.scope: Deactivated successfully.
Oct 14 04:43:06 np0005486808 nova_compute[259627]: 2025-10-14 08:43:06.933 2 INFO nova.scheduler.client.report [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [req-8f699c06-cfb0-4c81-8212-404fdf8dc0d2] Created resource provider record via placement API for resource provider with UUID 92105e1d-1743-46e3-a494-858b4331398a and name compute-0.ctlplane.example.com.#033[00m
Oct 14 04:43:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:43:07.001 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:43:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:43:07.001 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:43:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:43:07.001 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:43:07 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:43:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:43:07 np0005486808 nova_compute[259627]: 2025-10-14 08:43:07.351 2 DEBUG oslo_concurrency.processutils [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:43:07 np0005486808 podman[260543]: 2025-10-14 08:43:07.666738105 +0000 UTC m=+0.066793687 container create 45024b09c2473702887785e30dd81b5ad42e0779adc2272d511b874362e26c24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_clarke, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 14 04:43:07 np0005486808 systemd[1]: Started libpod-conmon-45024b09c2473702887785e30dd81b5ad42e0779adc2272d511b874362e26c24.scope.
Oct 14 04:43:07 np0005486808 podman[260543]: 2025-10-14 08:43:07.639055721 +0000 UTC m=+0.039111383 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:43:07 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:43:07 np0005486808 podman[260543]: 2025-10-14 08:43:07.779686995 +0000 UTC m=+0.179742617 container init 45024b09c2473702887785e30dd81b5ad42e0779adc2272d511b874362e26c24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 14 04:43:07 np0005486808 podman[260543]: 2025-10-14 08:43:07.792552605 +0000 UTC m=+0.192608187 container start 45024b09c2473702887785e30dd81b5ad42e0779adc2272d511b874362e26c24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 14 04:43:07 np0005486808 podman[260543]: 2025-10-14 08:43:07.797570732 +0000 UTC m=+0.197626314 container attach 45024b09c2473702887785e30dd81b5ad42e0779adc2272d511b874362e26c24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_clarke, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:43:07 np0005486808 unruffled_clarke[260560]: 167 167
Oct 14 04:43:07 np0005486808 systemd[1]: libpod-45024b09c2473702887785e30dd81b5ad42e0779adc2272d511b874362e26c24.scope: Deactivated successfully.
Oct 14 04:43:07 np0005486808 podman[260543]: 2025-10-14 08:43:07.801049823 +0000 UTC m=+0.201105415 container died 45024b09c2473702887785e30dd81b5ad42e0779adc2272d511b874362e26c24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_clarke, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:43:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:43:07 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1253227861' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:43:07 np0005486808 systemd[1]: var-lib-containers-storage-overlay-e1e395fe7bf27a2fd20fe978e1349952c60b1945ac282811ba81c93c367caeac-merged.mount: Deactivated successfully.
Oct 14 04:43:07 np0005486808 nova_compute[259627]: 2025-10-14 08:43:07.831 2 DEBUG oslo_concurrency.processutils [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:43:07 np0005486808 nova_compute[259627]: 2025-10-14 08:43:07.842 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Oct 14 04:43:07 np0005486808 nova_compute[259627]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Oct 14 04:43:07 np0005486808 nova_compute[259627]: 2025-10-14 08:43:07.842 2 INFO nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] kernel doesn't support AMD SEV#033[00m
Oct 14 04:43:07 np0005486808 nova_compute[259627]: 2025-10-14 08:43:07.844 2 DEBUG nova.compute.provider_tree [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Updating inventory in ProviderTree for provider 92105e1d-1743-46e3-a494-858b4331398a with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 14 04:43:07 np0005486808 nova_compute[259627]: 2025-10-14 08:43:07.845 2 DEBUG nova.virt.libvirt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 04:43:07 np0005486808 podman[260543]: 2025-10-14 08:43:07.846882851 +0000 UTC m=+0.246938393 container remove 45024b09c2473702887785e30dd81b5ad42e0779adc2272d511b874362e26c24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_clarke, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 14 04:43:07 np0005486808 systemd[1]: libpod-conmon-45024b09c2473702887785e30dd81b5ad42e0779adc2272d511b874362e26c24.scope: Deactivated successfully.
Oct 14 04:43:07 np0005486808 nova_compute[259627]: 2025-10-14 08:43:07.913 2 DEBUG nova.scheduler.client.report [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Updated inventory for provider 92105e1d-1743-46e3-a494-858b4331398a with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Oct 14 04:43:07 np0005486808 nova_compute[259627]: 2025-10-14 08:43:07.914 2 DEBUG nova.compute.provider_tree [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Updating resource provider 92105e1d-1743-46e3-a494-858b4331398a generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct 14 04:43:07 np0005486808 nova_compute[259627]: 2025-10-14 08:43:07.915 2 DEBUG nova.compute.provider_tree [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Updating inventory in ProviderTree for provider 92105e1d-1743-46e3-a494-858b4331398a with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 14 04:43:08 np0005486808 nova_compute[259627]: 2025-10-14 08:43:08.010 2 DEBUG nova.compute.provider_tree [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Updating resource provider 92105e1d-1743-46e3-a494-858b4331398a generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct 14 04:43:08 np0005486808 nova_compute[259627]: 2025-10-14 08:43:08.035 2 DEBUG nova.compute.resource_tracker [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 04:43:08 np0005486808 nova_compute[259627]: 2025-10-14 08:43:08.035 2 DEBUG oslo_concurrency.lockutils [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.158s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:43:08 np0005486808 nova_compute[259627]: 2025-10-14 08:43:08.035 2 DEBUG nova.service [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Oct 14 04:43:08 np0005486808 podman[260584]: 2025-10-14 08:43:08.07353451 +0000 UTC m=+0.072670094 container create 5a863681ee24800d5f4a6441fc5c249a4981f30a7b9b528b49e02881275c8e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_northcutt, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:43:08 np0005486808 systemd[1]: Started libpod-conmon-5a863681ee24800d5f4a6441fc5c249a4981f30a7b9b528b49e02881275c8e21.scope.
Oct 14 04:43:08 np0005486808 podman[260584]: 2025-10-14 08:43:08.041713259 +0000 UTC m=+0.040848903 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:43:08 np0005486808 nova_compute[259627]: 2025-10-14 08:43:08.130 2 DEBUG nova.service [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Oct 14 04:43:08 np0005486808 nova_compute[259627]: 2025-10-14 08:43:08.131 2 DEBUG nova.servicegroup.drivers.db [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Oct 14 04:43:08 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:43:08 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93189c564bdad3c78d7e4dce68ea9070c1a57ade415cccb847ae983551200854/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:43:08 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93189c564bdad3c78d7e4dce68ea9070c1a57ade415cccb847ae983551200854/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:43:08 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93189c564bdad3c78d7e4dce68ea9070c1a57ade415cccb847ae983551200854/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:43:08 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93189c564bdad3c78d7e4dce68ea9070c1a57ade415cccb847ae983551200854/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:43:08 np0005486808 podman[260584]: 2025-10-14 08:43:08.179303654 +0000 UTC m=+0.178439238 container init 5a863681ee24800d5f4a6441fc5c249a4981f30a7b9b528b49e02881275c8e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:43:08 np0005486808 podman[260584]: 2025-10-14 08:43:08.189933022 +0000 UTC m=+0.189068586 container start 5a863681ee24800d5f4a6441fc5c249a4981f30a7b9b528b49e02881275c8e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_northcutt, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:43:08 np0005486808 podman[260584]: 2025-10-14 08:43:08.194436376 +0000 UTC m=+0.193571970 container attach 5a863681ee24800d5f4a6441fc5c249a4981f30a7b9b528b49e02881275c8e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_northcutt, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]: {
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:    "0": [
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:        {
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:            "devices": [
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:                "/dev/loop3"
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:            ],
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:            "lv_name": "ceph_lv0",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:            "lv_size": "21470642176",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:            "name": "ceph_lv0",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:            "tags": {
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:                "ceph.cluster_name": "ceph",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:                "ceph.crush_device_class": "",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:                "ceph.encrypted": "0",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:                "ceph.osd_id": "0",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:                "ceph.type": "block",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:                "ceph.vdo": "0"
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:            },
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:            "type": "block",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:            "vg_name": "ceph_vg0"
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:        }
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:    ],
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:    "1": [
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:        {
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:            "devices": [
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:                "/dev/loop4"
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:            ],
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:            "lv_name": "ceph_lv1",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:            "lv_size": "21470642176",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:            "name": "ceph_lv1",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:            "tags": {
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:                "ceph.cluster_name": "ceph",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:                "ceph.crush_device_class": "",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:                "ceph.encrypted": "0",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:                "ceph.osd_id": "1",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:                "ceph.type": "block",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:                "ceph.vdo": "0"
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:            },
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:            "type": "block",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:            "vg_name": "ceph_vg1"
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:        }
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:    ],
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:    "2": [
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:        {
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:            "devices": [
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:                "/dev/loop5"
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:            ],
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:            "lv_name": "ceph_lv2",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:            "lv_size": "21470642176",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:            "name": "ceph_lv2",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:            "tags": {
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:                "ceph.cluster_name": "ceph",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:                "ceph.crush_device_class": "",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:                "ceph.encrypted": "0",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:                "ceph.osd_id": "2",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:                "ceph.type": "block",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:                "ceph.vdo": "0"
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:            },
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:            "type": "block",
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:            "vg_name": "ceph_vg2"
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:        }
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]:    ]
Oct 14 04:43:08 np0005486808 funny_northcutt[260600]: }
Oct 14 04:43:08 np0005486808 systemd[1]: libpod-5a863681ee24800d5f4a6441fc5c249a4981f30a7b9b528b49e02881275c8e21.scope: Deactivated successfully.
Oct 14 04:43:08 np0005486808 podman[260584]: 2025-10-14 08:43:08.914320045 +0000 UTC m=+0.913455629 container died 5a863681ee24800d5f4a6441fc5c249a4981f30a7b9b528b49e02881275c8e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_northcutt, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 14 04:43:08 np0005486808 systemd[1]: var-lib-containers-storage-overlay-93189c564bdad3c78d7e4dce68ea9070c1a57ade415cccb847ae983551200854-merged.mount: Deactivated successfully.
Oct 14 04:43:08 np0005486808 podman[260584]: 2025-10-14 08:43:08.972954261 +0000 UTC m=+0.972089835 container remove 5a863681ee24800d5f4a6441fc5c249a4981f30a7b9b528b49e02881275c8e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 14 04:43:08 np0005486808 systemd[1]: libpod-conmon-5a863681ee24800d5f4a6441fc5c249a4981f30a7b9b528b49e02881275c8e21.scope: Deactivated successfully.
Oct 14 04:43:09 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:43:09 np0005486808 podman[260760]: 2025-10-14 08:43:09.677336378 +0000 UTC m=+0.044751793 container create 99ca19a5a19cc44149c4872976f651e02d3179f3db9e16f1621dea997274f58d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_shannon, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 14 04:43:09 np0005486808 systemd[1]: Started libpod-conmon-99ca19a5a19cc44149c4872976f651e02d3179f3db9e16f1621dea997274f58d.scope.
Oct 14 04:43:09 np0005486808 podman[260760]: 2025-10-14 08:43:09.656417031 +0000 UTC m=+0.023832526 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:43:09 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:43:09 np0005486808 podman[260760]: 2025-10-14 08:43:09.773518139 +0000 UTC m=+0.140933634 container init 99ca19a5a19cc44149c4872976f651e02d3179f3db9e16f1621dea997274f58d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:43:09 np0005486808 podman[260760]: 2025-10-14 08:43:09.783895731 +0000 UTC m=+0.151311176 container start 99ca19a5a19cc44149c4872976f651e02d3179f3db9e16f1621dea997274f58d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_shannon, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:43:09 np0005486808 podman[260760]: 2025-10-14 08:43:09.788565029 +0000 UTC m=+0.155980474 container attach 99ca19a5a19cc44149c4872976f651e02d3179f3db9e16f1621dea997274f58d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 14 04:43:09 np0005486808 romantic_shannon[260777]: 167 167
Oct 14 04:43:09 np0005486808 systemd[1]: libpod-99ca19a5a19cc44149c4872976f651e02d3179f3db9e16f1621dea997274f58d.scope: Deactivated successfully.
Oct 14 04:43:09 np0005486808 podman[260760]: 2025-10-14 08:43:09.792112722 +0000 UTC m=+0.159528217 container died 99ca19a5a19cc44149c4872976f651e02d3179f3db9e16f1621dea997274f58d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_shannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 14 04:43:09 np0005486808 systemd[1]: var-lib-containers-storage-overlay-8c546c080a67d3b14c9ed5c3c38654d6157f964d13f33ea59e5e925a5aab16d9-merged.mount: Deactivated successfully.
Oct 14 04:43:09 np0005486808 podman[260760]: 2025-10-14 08:43:09.850976653 +0000 UTC m=+0.218392098 container remove 99ca19a5a19cc44149c4872976f651e02d3179f3db9e16f1621dea997274f58d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_shannon, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:43:09 np0005486808 systemd[1]: libpod-conmon-99ca19a5a19cc44149c4872976f651e02d3179f3db9e16f1621dea997274f58d.scope: Deactivated successfully.
Oct 14 04:43:10 np0005486808 podman[260802]: 2025-10-14 08:43:10.054955465 +0000 UTC m=+0.047135439 container create 94a450efd55fb9395f57e5a23bf5edad0c8d0da8b70d1d3ea6263e5f543545bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 14 04:43:10 np0005486808 systemd[1]: Started libpod-conmon-94a450efd55fb9395f57e5a23bf5edad0c8d0da8b70d1d3ea6263e5f543545bd.scope.
Oct 14 04:43:10 np0005486808 podman[260802]: 2025-10-14 08:43:10.03413261 +0000 UTC m=+0.026312614 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:43:10 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:43:10 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1efc5efac332a14877a7cdbf2e7240521c64b42f5b0e5a38a84784ed5f47cb1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:43:10 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1efc5efac332a14877a7cdbf2e7240521c64b42f5b0e5a38a84784ed5f47cb1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:43:10 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1efc5efac332a14877a7cdbf2e7240521c64b42f5b0e5a38a84784ed5f47cb1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:43:10 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1efc5efac332a14877a7cdbf2e7240521c64b42f5b0e5a38a84784ed5f47cb1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:43:10 np0005486808 podman[260802]: 2025-10-14 08:43:10.155118108 +0000 UTC m=+0.147298092 container init 94a450efd55fb9395f57e5a23bf5edad0c8d0da8b70d1d3ea6263e5f543545bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_newton, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 14 04:43:10 np0005486808 podman[260802]: 2025-10-14 08:43:10.1659521 +0000 UTC m=+0.158132074 container start 94a450efd55fb9395f57e5a23bf5edad0c8d0da8b70d1d3ea6263e5f543545bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:43:10 np0005486808 podman[260802]: 2025-10-14 08:43:10.171188912 +0000 UTC m=+0.163368956 container attach 94a450efd55fb9395f57e5a23bf5edad0c8d0da8b70d1d3ea6263e5f543545bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:43:11 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:43:11 np0005486808 admiring_newton[260818]: {
Oct 14 04:43:11 np0005486808 admiring_newton[260818]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 04:43:11 np0005486808 admiring_newton[260818]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:43:11 np0005486808 admiring_newton[260818]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 04:43:11 np0005486808 admiring_newton[260818]:        "osd_id": 2,
Oct 14 04:43:11 np0005486808 admiring_newton[260818]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:43:11 np0005486808 admiring_newton[260818]:        "type": "bluestore"
Oct 14 04:43:11 np0005486808 admiring_newton[260818]:    },
Oct 14 04:43:11 np0005486808 admiring_newton[260818]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 04:43:11 np0005486808 admiring_newton[260818]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:43:11 np0005486808 admiring_newton[260818]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 04:43:11 np0005486808 admiring_newton[260818]:        "osd_id": 1,
Oct 14 04:43:11 np0005486808 admiring_newton[260818]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:43:11 np0005486808 admiring_newton[260818]:        "type": "bluestore"
Oct 14 04:43:11 np0005486808 admiring_newton[260818]:    },
Oct 14 04:43:11 np0005486808 admiring_newton[260818]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 04:43:11 np0005486808 admiring_newton[260818]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:43:11 np0005486808 admiring_newton[260818]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 04:43:11 np0005486808 admiring_newton[260818]:        "osd_id": 0,
Oct 14 04:43:11 np0005486808 admiring_newton[260818]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:43:11 np0005486808 admiring_newton[260818]:        "type": "bluestore"
Oct 14 04:43:11 np0005486808 admiring_newton[260818]:    }
Oct 14 04:43:11 np0005486808 admiring_newton[260818]: }
Oct 14 04:43:11 np0005486808 systemd[1]: libpod-94a450efd55fb9395f57e5a23bf5edad0c8d0da8b70d1d3ea6263e5f543545bd.scope: Deactivated successfully.
Oct 14 04:43:11 np0005486808 systemd[1]: libpod-94a450efd55fb9395f57e5a23bf5edad0c8d0da8b70d1d3ea6263e5f543545bd.scope: Consumed 1.080s CPU time.
Oct 14 04:43:11 np0005486808 podman[260802]: 2025-10-14 08:43:11.251716051 +0000 UTC m=+1.243896045 container died 94a450efd55fb9395f57e5a23bf5edad0c8d0da8b70d1d3ea6263e5f543545bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:43:11 np0005486808 systemd[1]: var-lib-containers-storage-overlay-e1efc5efac332a14877a7cdbf2e7240521c64b42f5b0e5a38a84784ed5f47cb1-merged.mount: Deactivated successfully.
Oct 14 04:43:11 np0005486808 podman[260802]: 2025-10-14 08:43:11.324800624 +0000 UTC m=+1.316980568 container remove 94a450efd55fb9395f57e5a23bf5edad0c8d0da8b70d1d3ea6263e5f543545bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 14 04:43:11 np0005486808 systemd[1]: libpod-conmon-94a450efd55fb9395f57e5a23bf5edad0c8d0da8b70d1d3ea6263e5f543545bd.scope: Deactivated successfully.
Oct 14 04:43:11 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:43:11 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:43:11 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:43:11 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:43:11 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 77e65500-7965-477f-82d2-b36fa6449053 does not exist
Oct 14 04:43:11 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 776302c6-e658-4a61-8e7f-623828fc2cee does not exist
Oct 14 04:43:12 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:43:12 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:43:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:43:13 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:43:15 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:43:15 np0005486808 podman[260914]: 2025-10-14 08:43:15.671156024 +0000 UTC m=+0.070300959 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct 14 04:43:15 np0005486808 podman[260913]: 2025-10-14 08:43:15.720570745 +0000 UTC m=+0.119733970 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct 14 04:43:17 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:43:17 np0005486808 nova_compute[259627]: 2025-10-14 08:43:17.133 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:43:17 np0005486808 nova_compute[259627]: 2025-10-14 08:43:17.162 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:43:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:43:19 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:43:21 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:43:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:43:23 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:43:25 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:43:26 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 04:43:26 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1732765279' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 04:43:26 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 04:43:26 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1732765279' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 04:43:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 04:43:27 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1505323707' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 04:43:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 04:43:27 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1505323707' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 04:43:27 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:43:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:43:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 04:43:27 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2154092045' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 04:43:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 04:43:27 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2154092045' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 04:43:29 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:43:29 np0005486808 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 04:43:29 np0005486808 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 5649 writes, 23K keys, 5649 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5649 writes, 915 syncs, 6.17 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s#012Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e30bc271f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e30bc271f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Oct 14 04:43:31 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:43:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:43:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_08:43:32
Oct 14 04:43:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 04:43:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 04:43:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', 'images', 'vms', '.rgw.root', 'backups', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', '.mgr']
Oct 14 04:43:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 04:43:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:43:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:43:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:43:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:43:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:43:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:43:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 04:43:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:43:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 04:43:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:43:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:43:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:43:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:43:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:43:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:43:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:43:33 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:43:34 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 04:43:34 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 6794 writes, 27K keys, 6794 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 6794 writes, 1273 syncs, 5.34 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 180 writes, 271 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s#012Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5597c56af1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5597c56af1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Oct 14 04:43:34 np0005486808 podman[260960]: 2025-10-14 08:43:34.66950896 +0000 UTC m=+0.074559468 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009)
Oct 14 04:43:34 np0005486808 podman[260959]: 2025-10-14 08:43:34.675609892 +0000 UTC m=+0.080655780 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 14 04:43:35 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:43:37 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:43:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:43:39 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:43:39 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 04:43:39 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 5599 writes, 23K keys, 5599 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 5599 writes, 876 syncs, 6.39 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55b3ca4651f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 8.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55b3ca4651f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 8.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl
Oct 14 04:43:40 np0005486808 ceph-mgr[74543]: [devicehealth INFO root] Check health
Oct 14 04:43:41 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:43:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:43:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 04:43:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:43:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 04:43:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:43:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:43:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:43:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:43:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:43:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:43:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:43:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:43:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:43:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 04:43:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:43:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:43:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:43:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 04:43:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:43:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 04:43:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:43:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:43:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:43:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
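[editor's note] Each pg_autoscaler line above logs a raw "pg target" before quantization, and the arithmetic it shows is usage fraction × bias × a constant. From the logged values that constant works out to 300 (plausibly mon_target_pg_per_osd = 100 across 3 OSDs; that interpretation is an assumption, not something the log states). A minimal sketch reproducing the logged numbers:

```python
# Reproduces the raw pg-target arithmetic visible in the pg_autoscaler log
# lines: pg_target = usage_fraction * bias * 300. The factor 300 is inferred
# from the logged values (assumed: mon_target_pg_per_osd=100 times 3 OSDs);
# the function name is illustrative, not the module's actual code.
def pg_target(usage_fraction: float, bias: float, pgs_per_root: int = 300) -> float:
    """Raw (un-quantized) pg target as printed by pg_autoscaler."""
    return usage_fraction * bias * pgs_per_root

# Usage fractions and biases taken directly from the log lines above:
mgr = pg_target(7.185749983720779e-06, 1.0)        # Pool '.mgr'
meta = pg_target(5.087256625643029e-07, 4.0)       # Pool 'cephfs.cephfs.meta'
rgw_root = pg_target(2.5436283128215145e-07, 1.0)  # Pool '.rgw.root'
```

The "quantized to 1/16/32" values come from the autoscaler's separate power-of-two rounding and change-threshold logic, which this sketch does not model.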
Oct 14 04:43:43 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:43:45 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:43:46 np0005486808 podman[260997]: 2025-10-14 08:43:46.671758374 +0000 UTC m=+0.074892926 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 04:43:46 np0005486808 podman[260996]: 2025-10-14 08:43:46.710734045 +0000 UTC m=+0.117347722 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, org.label-schema.build-date=20251009, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Oct 14 04:43:47 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:43:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:43:49 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:43:51 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:43:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:43:53 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:43:55 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:43:57 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:43:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:43:59 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:44:01 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:44:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:44:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:44:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:44:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:44:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:44:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:44:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:44:03 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:44:03 np0005486808 nova_compute[259627]: 2025-10-14 08:44:03.981 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:44:03 np0005486808 nova_compute[259627]: 2025-10-14 08:44:03.981 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:44:03 np0005486808 nova_compute[259627]: 2025-10-14 08:44:03.982 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 04:44:03 np0005486808 nova_compute[259627]: 2025-10-14 08:44:03.982 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 04:44:04 np0005486808 nova_compute[259627]: 2025-10-14 08:44:03.999 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 14 04:44:04 np0005486808 nova_compute[259627]: 2025-10-14 08:44:04.000 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:44:04 np0005486808 nova_compute[259627]: 2025-10-14 08:44:04.001 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:44:04 np0005486808 nova_compute[259627]: 2025-10-14 08:44:04.001 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:44:04 np0005486808 nova_compute[259627]: 2025-10-14 08:44:04.001 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:44:04 np0005486808 nova_compute[259627]: 2025-10-14 08:44:04.002 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:44:04 np0005486808 nova_compute[259627]: 2025-10-14 08:44:04.002 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:44:04 np0005486808 nova_compute[259627]: 2025-10-14 08:44:04.003 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 04:44:04 np0005486808 nova_compute[259627]: 2025-10-14 08:44:04.003 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:44:04 np0005486808 nova_compute[259627]: 2025-10-14 08:44:04.027 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:44:04 np0005486808 nova_compute[259627]: 2025-10-14 08:44:04.028 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:44:04 np0005486808 nova_compute[259627]: 2025-10-14 08:44:04.029 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:44:04 np0005486808 nova_compute[259627]: 2025-10-14 08:44:04.029 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 04:44:04 np0005486808 nova_compute[259627]: 2025-10-14 08:44:04.030 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:44:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:44:04 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4070894706' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:44:04 np0005486808 nova_compute[259627]: 2025-10-14 08:44:04.484 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
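[editor's note] The resource tracker shells out to `ceph df --format=json --id openstack` (the command logged above) and reads cluster capacity from the JSON it returns. A sketch of that parsing step, under assumptions: the field names follow `ceph df` JSON output as I understand it, and the sample blob is invented, with numbers back-filled from this log (64411926528 bytes of capacity appears in the autoscaler lines; 148 MiB used appears in the pgmap lines):

```python
import json

# Hypothetical `ceph df --format=json` output; only the top-level "stats"
# section is sketched. Numbers are reconstructed from this log, not captured
# from the actual command.
sample = json.dumps({
    "stats": {
        "total_bytes": 64411926528,          # 59.98828125 GiB, cf. free_disk in the log
        "total_used_bytes": 155189248,       # 148 MiB, cf. the pgmap lines
        "total_avail_bytes": 64256737280,    # total - used
    }
})

def cluster_free_gib(df_json: str) -> float:
    """Available cluster capacity in GiB from ceph df JSON."""
    stats = json.loads(df_json)["stats"]
    return stats["total_avail_bytes"] / 1024**3
```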
Oct 14 04:44:04 np0005486808 nova_compute[259627]: 2025-10-14 08:44:04.624 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:44:04 np0005486808 nova_compute[259627]: 2025-10-14 08:44:04.625 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5176MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 04:44:04 np0005486808 nova_compute[259627]: 2025-10-14 08:44:04.626 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:44:04 np0005486808 nova_compute[259627]: 2025-10-14 08:44:04.626 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:44:04 np0005486808 nova_compute[259627]: 2025-10-14 08:44:04.776 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 04:44:04 np0005486808 nova_compute[259627]: 2025-10-14 08:44:04.776 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 04:44:04 np0005486808 nova_compute[259627]: 2025-10-14 08:44:04.813 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:44:05 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:44:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:44:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1170218074' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:44:05 np0005486808 nova_compute[259627]: 2025-10-14 08:44:05.212 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.399s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:44:05 np0005486808 nova_compute[259627]: 2025-10-14 08:44:05.219 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:44:05 np0005486808 nova_compute[259627]: 2025-10-14 08:44:05.321 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:44:05 np0005486808 nova_compute[259627]: 2025-10-14 08:44:05.324 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 04:44:05 np0005486808 nova_compute[259627]: 2025-10-14 08:44:05.324 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
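[editor's note] The "Inventory has not changed" line above shows the data the tracker reports to Placement. Placement derives schedulable capacity per resource class as (total - reserved) * allocation_ratio; a minimal sketch over the logged inventory (the `capacity` helper is illustrative, not a Nova function):

```python
# Inventory data mirrored from the nova.scheduler.client.report log line;
# min_unit/max_unit/step_size omitted since they don't affect capacity.
inventory = {
    'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'DISK_GB':   {'total': 59,   'reserved': 0,   'allocation_ratio': 0.9},
}

def capacity(inv: dict) -> float:
    """Schedulable capacity as Placement computes it."""
    return (inv['total'] - inv['reserved']) * inv['allocation_ratio']

caps = {rc: capacity(inv) for rc, inv in inventory.items()}
```

With these numbers the host can place 32 vCPUs and 7168 MB of RAM worth of allocations, and the 0.9 disk ratio under-commits the 59 GB store.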
Oct 14 04:44:05 np0005486808 podman[261083]: 2025-10-14 08:44:05.682844108 +0000 UTC m=+0.081876117 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 14 04:44:05 np0005486808 podman[261082]: 2025-10-14 08:44:05.684685643 +0000 UTC m=+0.095482968 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 04:44:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:44:07.002 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:44:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:44:07.002 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:44:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:44:07.002 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:44:07 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:44:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Oct 14 04:44:07 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/969055489' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 14 04:44:07 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 14 04:44:07 np0005486808 ceph-mgr[74543]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 14 04:44:07 np0005486808 ceph-mgr[74543]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 14 04:44:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:44:09 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:44:11 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:44:12.063955) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760431452064004, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1487, "num_deletes": 251, "total_data_size": 2396280, "memory_usage": 2430032, "flush_reason": "Manual Compaction"}
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760431452081772, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2352453, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14750, "largest_seqno": 16236, "table_properties": {"data_size": 2345489, "index_size": 4035, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14088, "raw_average_key_size": 19, "raw_value_size": 2331650, "raw_average_value_size": 3251, "num_data_blocks": 185, "num_entries": 717, "num_filter_entries": 717, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760431293, "oldest_key_time": 1760431293, "file_creation_time": 1760431452, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 17857 microseconds, and 9697 cpu microseconds.
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:44:12.081818) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2352453 bytes OK
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:44:12.081838) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:44:12.083840) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:44:12.083885) EVENT_LOG_v1 {"time_micros": 1760431452083876, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:44:12.083907) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2389772, prev total WAL file size 2389772, number of live WAL files 2.
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:44:12.084683) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2297KB)], [35(7022KB)]
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760431452084736, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9543863, "oldest_snapshot_seqno": -1}
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 3953 keys, 7753715 bytes, temperature: kUnknown
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760431452130673, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7753715, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7724983, "index_size": 17742, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9925, "raw_key_size": 96560, "raw_average_key_size": 24, "raw_value_size": 7651137, "raw_average_value_size": 1935, "num_data_blocks": 754, "num_entries": 3953, "num_filter_entries": 3953, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760431452, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:44:12.130962) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7753715 bytes
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:44:12.132602) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 207.4 rd, 168.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 6.9 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(7.4) write-amplify(3.3) OK, records in: 4467, records dropped: 514 output_compression: NoCompression
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:44:12.132629) EVENT_LOG_v1 {"time_micros": 1760431452132616, "job": 16, "event": "compaction_finished", "compaction_time_micros": 46010, "compaction_time_cpu_micros": 23918, "output_level": 6, "num_output_files": 1, "total_output_size": 7753715, "num_input_records": 4467, "num_output_records": 3953, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760431452133596, "job": 16, "event": "table_file_deletion", "file_number": 37}
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760431452135965, "job": 16, "event": "table_file_deletion", "file_number": 35}
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:44:12.084578) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:44:12.136020) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:44:12.136024) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:44:12.136026) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:44:12.136027) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:44:12.136029) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:44:12 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 7eb58d30-e566-4c59-880b-3f214fb09e6f does not exist
Oct 14 04:44:12 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev c72a5e9f-19ec-4656-9922-b28b7dbce573 does not exist
Oct 14 04:44:12 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev b742b0a1-98cb-46e5-9a0e-ce0e694d6f39 does not exist
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:44:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:44:13 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:44:13 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 14 04:44:13 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:44:13 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:44:13 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:44:13 np0005486808 podman[261515]: 2025-10-14 08:44:13.253950482 +0000 UTC m=+0.045499460 container create 177c923d9f4c1e5a8c399b9a8c6cea2f13567cd0dcff3433442592f740298d23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:44:13 np0005486808 systemd[1]: Started libpod-conmon-177c923d9f4c1e5a8c399b9a8c6cea2f13567cd0dcff3433442592f740298d23.scope.
Oct 14 04:44:13 np0005486808 podman[261515]: 2025-10-14 08:44:13.233383161 +0000 UTC m=+0.024932239 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:44:13 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:44:13 np0005486808 podman[261515]: 2025-10-14 08:44:13.361697289 +0000 UTC m=+0.153246287 container init 177c923d9f4c1e5a8c399b9a8c6cea2f13567cd0dcff3433442592f740298d23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_meitner, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:44:13 np0005486808 podman[261515]: 2025-10-14 08:44:13.372215965 +0000 UTC m=+0.163764953 container start 177c923d9f4c1e5a8c399b9a8c6cea2f13567cd0dcff3433442592f740298d23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_meitner, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:44:13 np0005486808 podman[261515]: 2025-10-14 08:44:13.375849674 +0000 UTC m=+0.167398752 container attach 177c923d9f4c1e5a8c399b9a8c6cea2f13567cd0dcff3433442592f740298d23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 14 04:44:13 np0005486808 nervous_meitner[261532]: 167 167
Oct 14 04:44:13 np0005486808 systemd[1]: libpod-177c923d9f4c1e5a8c399b9a8c6cea2f13567cd0dcff3433442592f740298d23.scope: Deactivated successfully.
Oct 14 04:44:13 np0005486808 podman[261515]: 2025-10-14 08:44:13.380412755 +0000 UTC m=+0.171961733 container died 177c923d9f4c1e5a8c399b9a8c6cea2f13567cd0dcff3433442592f740298d23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_meitner, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Oct 14 04:44:13 np0005486808 systemd[1]: var-lib-containers-storage-overlay-1c3d69cbbebecf03d47c3a33d2b7b50eb72cff5b4392ffe5c08d5750032bc7f8-merged.mount: Deactivated successfully.
Oct 14 04:44:13 np0005486808 podman[261515]: 2025-10-14 08:44:13.424139261 +0000 UTC m=+0.215688239 container remove 177c923d9f4c1e5a8c399b9a8c6cea2f13567cd0dcff3433442592f740298d23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:44:13 np0005486808 systemd[1]: libpod-conmon-177c923d9f4c1e5a8c399b9a8c6cea2f13567cd0dcff3433442592f740298d23.scope: Deactivated successfully.
Oct 14 04:44:13 np0005486808 podman[261556]: 2025-10-14 08:44:13.611091359 +0000 UTC m=+0.062440694 container create be777f418e7f41b63a3c050d9aadffaf6b8423ec6b276329d1ba83a3ad5b84a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 14 04:44:13 np0005486808 systemd[1]: Started libpod-conmon-be777f418e7f41b63a3c050d9aadffaf6b8423ec6b276329d1ba83a3ad5b84a4.scope.
Oct 14 04:44:13 np0005486808 podman[261556]: 2025-10-14 08:44:13.578226187 +0000 UTC m=+0.029575492 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:44:13 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:44:13 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe89ccb3b8b23d0282c007020523341e70f3305e50d812e5e52e8a203a801aea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:44:13 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe89ccb3b8b23d0282c007020523341e70f3305e50d812e5e52e8a203a801aea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:44:13 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe89ccb3b8b23d0282c007020523341e70f3305e50d812e5e52e8a203a801aea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:44:13 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe89ccb3b8b23d0282c007020523341e70f3305e50d812e5e52e8a203a801aea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:44:13 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe89ccb3b8b23d0282c007020523341e70f3305e50d812e5e52e8a203a801aea/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:44:13 np0005486808 podman[261556]: 2025-10-14 08:44:13.702109307 +0000 UTC m=+0.153458642 container init be777f418e7f41b63a3c050d9aadffaf6b8423ec6b276329d1ba83a3ad5b84a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_galois, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:44:13 np0005486808 podman[261556]: 2025-10-14 08:44:13.710715717 +0000 UTC m=+0.162065032 container start be777f418e7f41b63a3c050d9aadffaf6b8423ec6b276329d1ba83a3ad5b84a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_galois, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 14 04:44:13 np0005486808 podman[261556]: 2025-10-14 08:44:13.714447848 +0000 UTC m=+0.165797193 container attach be777f418e7f41b63a3c050d9aadffaf6b8423ec6b276329d1ba83a3ad5b84a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_galois, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 14 04:44:14 np0005486808 gracious_galois[261573]: --> passed data devices: 0 physical, 3 LVM
Oct 14 04:44:14 np0005486808 gracious_galois[261573]: --> relative data size: 1.0
Oct 14 04:44:14 np0005486808 gracious_galois[261573]: --> All data devices are unavailable
Oct 14 04:44:14 np0005486808 systemd[1]: libpod-be777f418e7f41b63a3c050d9aadffaf6b8423ec6b276329d1ba83a3ad5b84a4.scope: Deactivated successfully.
Oct 14 04:44:14 np0005486808 systemd[1]: libpod-be777f418e7f41b63a3c050d9aadffaf6b8423ec6b276329d1ba83a3ad5b84a4.scope: Consumed 1.063s CPU time.
Oct 14 04:44:14 np0005486808 podman[261556]: 2025-10-14 08:44:14.833830296 +0000 UTC m=+1.285179601 container died be777f418e7f41b63a3c050d9aadffaf6b8423ec6b276329d1ba83a3ad5b84a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_galois, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 14 04:44:14 np0005486808 systemd[1]: var-lib-containers-storage-overlay-fe89ccb3b8b23d0282c007020523341e70f3305e50d812e5e52e8a203a801aea-merged.mount: Deactivated successfully.
Oct 14 04:44:14 np0005486808 podman[261556]: 2025-10-14 08:44:14.898133674 +0000 UTC m=+1.349482979 container remove be777f418e7f41b63a3c050d9aadffaf6b8423ec6b276329d1ba83a3ad5b84a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_galois, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:44:14 np0005486808 systemd[1]: libpod-conmon-be777f418e7f41b63a3c050d9aadffaf6b8423ec6b276329d1ba83a3ad5b84a4.scope: Deactivated successfully.
Oct 14 04:44:15 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:44:15 np0005486808 podman[261753]: 2025-10-14 08:44:15.722606392 +0000 UTC m=+0.068483860 container create 465ae8589489537063fad8722c9855738b1ed5b1361628730346e20f3106f674 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Oct 14 04:44:15 np0005486808 systemd[1]: Started libpod-conmon-465ae8589489537063fad8722c9855738b1ed5b1361628730346e20f3106f674.scope.
Oct 14 04:44:15 np0005486808 podman[261753]: 2025-10-14 08:44:15.688636174 +0000 UTC m=+0.034513702 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:44:15 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:44:15 np0005486808 podman[261753]: 2025-10-14 08:44:15.83495716 +0000 UTC m=+0.180834668 container init 465ae8589489537063fad8722c9855738b1ed5b1361628730346e20f3106f674 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 14 04:44:15 np0005486808 podman[261753]: 2025-10-14 08:44:15.850373866 +0000 UTC m=+0.196251324 container start 465ae8589489537063fad8722c9855738b1ed5b1361628730346e20f3106f674 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_varahamihira, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:44:15 np0005486808 podman[261753]: 2025-10-14 08:44:15.855281646 +0000 UTC m=+0.201159074 container attach 465ae8589489537063fad8722c9855738b1ed5b1361628730346e20f3106f674 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 14 04:44:15 np0005486808 competent_varahamihira[261770]: 167 167
Oct 14 04:44:15 np0005486808 systemd[1]: libpod-465ae8589489537063fad8722c9855738b1ed5b1361628730346e20f3106f674.scope: Deactivated successfully.
Oct 14 04:44:15 np0005486808 podman[261753]: 2025-10-14 08:44:15.859372645 +0000 UTC m=+0.205250103 container died 465ae8589489537063fad8722c9855738b1ed5b1361628730346e20f3106f674 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:44:15 np0005486808 systemd[1]: var-lib-containers-storage-overlay-8b5496a902f6e672daf5ef0c5560e6ec2f9e777c6c39c440a8e5f0c479deeb76-merged.mount: Deactivated successfully.
Oct 14 04:44:15 np0005486808 podman[261753]: 2025-10-14 08:44:15.905886719 +0000 UTC m=+0.251764137 container remove 465ae8589489537063fad8722c9855738b1ed5b1361628730346e20f3106f674 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:44:15 np0005486808 systemd[1]: libpod-conmon-465ae8589489537063fad8722c9855738b1ed5b1361628730346e20f3106f674.scope: Deactivated successfully.
Oct 14 04:44:16 np0005486808 podman[261794]: 2025-10-14 08:44:16.146275879 +0000 UTC m=+0.079348495 container create 9bb8852dafc034d07c045c9b20a6401d6dd59017fb99768ab2aebf08a8578f3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 14 04:44:16 np0005486808 systemd[1]: Started libpod-conmon-9bb8852dafc034d07c045c9b20a6401d6dd59017fb99768ab2aebf08a8578f3a.scope.
Oct 14 04:44:16 np0005486808 podman[261794]: 2025-10-14 08:44:16.113244324 +0000 UTC m=+0.046317040 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:44:16 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:44:16 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cada18722dacc93a0dde26f54dfea25708f1d4739e95a29f7224e605a8be6b17/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:44:16 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cada18722dacc93a0dde26f54dfea25708f1d4739e95a29f7224e605a8be6b17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:44:16 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cada18722dacc93a0dde26f54dfea25708f1d4739e95a29f7224e605a8be6b17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:44:16 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cada18722dacc93a0dde26f54dfea25708f1d4739e95a29f7224e605a8be6b17/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:44:16 np0005486808 podman[261794]: 2025-10-14 08:44:16.248148343 +0000 UTC m=+0.181220979 container init 9bb8852dafc034d07c045c9b20a6401d6dd59017fb99768ab2aebf08a8578f3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_goldstine, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 14 04:44:16 np0005486808 podman[261794]: 2025-10-14 08:44:16.258405053 +0000 UTC m=+0.191477679 container start 9bb8852dafc034d07c045c9b20a6401d6dd59017fb99768ab2aebf08a8578f3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:44:16 np0005486808 podman[261794]: 2025-10-14 08:44:16.2623872 +0000 UTC m=+0.195459826 container attach 9bb8852dafc034d07c045c9b20a6401d6dd59017fb99768ab2aebf08a8578f3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]: {
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:    "0": [
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:        {
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:            "devices": [
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:                "/dev/loop3"
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:            ],
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:            "lv_name": "ceph_lv0",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:            "lv_size": "21470642176",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:            "name": "ceph_lv0",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:            "tags": {
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:                "ceph.cluster_name": "ceph",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:                "ceph.crush_device_class": "",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:                "ceph.encrypted": "0",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:                "ceph.osd_id": "0",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:                "ceph.type": "block",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:                "ceph.vdo": "0"
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:            },
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:            "type": "block",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:            "vg_name": "ceph_vg0"
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:        }
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:    ],
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:    "1": [
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:        {
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:            "devices": [
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:                "/dev/loop4"
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:            ],
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:            "lv_name": "ceph_lv1",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:            "lv_size": "21470642176",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:            "name": "ceph_lv1",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:            "tags": {
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:                "ceph.cluster_name": "ceph",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:                "ceph.crush_device_class": "",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:                "ceph.encrypted": "0",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:                "ceph.osd_id": "1",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:                "ceph.type": "block",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:                "ceph.vdo": "0"
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:            },
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:            "type": "block",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:            "vg_name": "ceph_vg1"
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:        }
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:    ],
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:    "2": [
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:        {
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:            "devices": [
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:                "/dev/loop5"
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:            ],
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:            "lv_name": "ceph_lv2",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:            "lv_size": "21470642176",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:            "name": "ceph_lv2",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:            "tags": {
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:                "ceph.cluster_name": "ceph",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:                "ceph.crush_device_class": "",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:                "ceph.encrypted": "0",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:                "ceph.osd_id": "2",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:                "ceph.type": "block",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:                "ceph.vdo": "0"
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:            },
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:            "type": "block",
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:            "vg_name": "ceph_vg2"
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:        }
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]:    ]
Oct 14 04:44:17 np0005486808 infallible_goldstine[261810]: }
Oct 14 04:44:17 np0005486808 systemd[1]: libpod-9bb8852dafc034d07c045c9b20a6401d6dd59017fb99768ab2aebf08a8578f3a.scope: Deactivated successfully.
Oct 14 04:44:17 np0005486808 podman[261794]: 2025-10-14 08:44:17.0417856 +0000 UTC m=+0.974858276 container died 9bb8852dafc034d07c045c9b20a6401d6dd59017fb99768ab2aebf08a8578f3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_goldstine, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:44:17 np0005486808 systemd[1]: var-lib-containers-storage-overlay-cada18722dacc93a0dde26f54dfea25708f1d4739e95a29f7224e605a8be6b17-merged.mount: Deactivated successfully.
Oct 14 04:44:17 np0005486808 podman[261794]: 2025-10-14 08:44:17.118618433 +0000 UTC m=+1.051691049 container remove 9bb8852dafc034d07c045c9b20a6401d6dd59017fb99768ab2aebf08a8578f3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_goldstine, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Oct 14 04:44:17 np0005486808 systemd[1]: libpod-conmon-9bb8852dafc034d07c045c9b20a6401d6dd59017fb99768ab2aebf08a8578f3a.scope: Deactivated successfully.
Oct 14 04:44:17 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:44:17 np0005486808 podman[261822]: 2025-10-14 08:44:17.166119761 +0000 UTC m=+0.084544692 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team)
Oct 14 04:44:17 np0005486808 podman[261820]: 2025-10-14 08:44:17.166113751 +0000 UTC m=+0.096034942 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 04:44:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:44:17 np0005486808 podman[262017]: 2025-10-14 08:44:17.713493535 +0000 UTC m=+0.041549404 container create 45778bbcf88c352671ceef0dbeb06c5898c7a0ddc535090e8e4d6c3d8eac2e44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 14 04:44:17 np0005486808 systemd[1]: Started libpod-conmon-45778bbcf88c352671ceef0dbeb06c5898c7a0ddc535090e8e4d6c3d8eac2e44.scope.
Oct 14 04:44:17 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:44:17 np0005486808 podman[262017]: 2025-10-14 08:44:17.697109025 +0000 UTC m=+0.025164914 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:44:17 np0005486808 podman[262017]: 2025-10-14 08:44:17.793219728 +0000 UTC m=+0.121275677 container init 45778bbcf88c352671ceef0dbeb06c5898c7a0ddc535090e8e4d6c3d8eac2e44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Oct 14 04:44:17 np0005486808 podman[262017]: 2025-10-14 08:44:17.803522439 +0000 UTC m=+0.131578318 container start 45778bbcf88c352671ceef0dbeb06c5898c7a0ddc535090e8e4d6c3d8eac2e44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bardeen, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 14 04:44:17 np0005486808 podman[262017]: 2025-10-14 08:44:17.806882781 +0000 UTC m=+0.134938690 container attach 45778bbcf88c352671ceef0dbeb06c5898c7a0ddc535090e8e4d6c3d8eac2e44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 14 04:44:17 np0005486808 sharp_bardeen[262033]: 167 167
Oct 14 04:44:17 np0005486808 systemd[1]: libpod-45778bbcf88c352671ceef0dbeb06c5898c7a0ddc535090e8e4d6c3d8eac2e44.scope: Deactivated successfully.
Oct 14 04:44:17 np0005486808 conmon[262033]: conmon 45778bbcf88c352671ce <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-45778bbcf88c352671ceef0dbeb06c5898c7a0ddc535090e8e4d6c3d8eac2e44.scope/container/memory.events
Oct 14 04:44:17 np0005486808 podman[262017]: 2025-10-14 08:44:17.80969585 +0000 UTC m=+0.137751719 container died 45778bbcf88c352671ceef0dbeb06c5898c7a0ddc535090e8e4d6c3d8eac2e44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bardeen, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 14 04:44:17 np0005486808 systemd[1]: var-lib-containers-storage-overlay-190ddb2a550859f4163c18d13d47a06c11c55d658de06c339ddb3e66b0fc67f3-merged.mount: Deactivated successfully.
Oct 14 04:44:17 np0005486808 podman[262017]: 2025-10-14 08:44:17.856799238 +0000 UTC m=+0.184855107 container remove 45778bbcf88c352671ceef0dbeb06c5898c7a0ddc535090e8e4d6c3d8eac2e44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bardeen, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:44:17 np0005486808 systemd[1]: libpod-conmon-45778bbcf88c352671ceef0dbeb06c5898c7a0ddc535090e8e4d6c3d8eac2e44.scope: Deactivated successfully.
Oct 14 04:44:18 np0005486808 podman[262059]: 2025-10-14 08:44:18.056916476 +0000 UTC m=+0.078554796 container create 353e02456eb02c8c80976711a394cb97417011b454d9a06e9f861865401ff843 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_rosalind, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 14 04:44:18 np0005486808 systemd[1]: Started libpod-conmon-353e02456eb02c8c80976711a394cb97417011b454d9a06e9f861865401ff843.scope.
Oct 14 04:44:18 np0005486808 podman[262059]: 2025-10-14 08:44:18.021872582 +0000 UTC m=+0.043510962 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:44:18 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:44:18 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c14c736e03229dcba41f04f0aac8b8b5b69dcd48a417431f6a50039c6ecb190/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:44:18 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c14c736e03229dcba41f04f0aac8b8b5b69dcd48a417431f6a50039c6ecb190/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:44:18 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c14c736e03229dcba41f04f0aac8b8b5b69dcd48a417431f6a50039c6ecb190/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:44:18 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c14c736e03229dcba41f04f0aac8b8b5b69dcd48a417431f6a50039c6ecb190/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:44:18 np0005486808 podman[262059]: 2025-10-14 08:44:18.143895767 +0000 UTC m=+0.165534147 container init 353e02456eb02c8c80976711a394cb97417011b454d9a06e9f861865401ff843 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_rosalind, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 14 04:44:18 np0005486808 podman[262059]: 2025-10-14 08:44:18.154620118 +0000 UTC m=+0.176258408 container start 353e02456eb02c8c80976711a394cb97417011b454d9a06e9f861865401ff843 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Oct 14 04:44:18 np0005486808 podman[262059]: 2025-10-14 08:44:18.17642024 +0000 UTC m=+0.198058640 container attach 353e02456eb02c8c80976711a394cb97417011b454d9a06e9f861865401ff843 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 14 04:44:19 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:44:19 np0005486808 crazy_rosalind[262075]: {
Oct 14 04:44:19 np0005486808 crazy_rosalind[262075]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 04:44:19 np0005486808 crazy_rosalind[262075]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:44:19 np0005486808 crazy_rosalind[262075]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 04:44:19 np0005486808 crazy_rosalind[262075]:        "osd_id": 2,
Oct 14 04:44:19 np0005486808 crazy_rosalind[262075]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:44:19 np0005486808 crazy_rosalind[262075]:        "type": "bluestore"
Oct 14 04:44:19 np0005486808 crazy_rosalind[262075]:    },
Oct 14 04:44:19 np0005486808 crazy_rosalind[262075]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 04:44:19 np0005486808 crazy_rosalind[262075]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:44:19 np0005486808 crazy_rosalind[262075]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 04:44:19 np0005486808 crazy_rosalind[262075]:        "osd_id": 1,
Oct 14 04:44:19 np0005486808 crazy_rosalind[262075]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:44:19 np0005486808 crazy_rosalind[262075]:        "type": "bluestore"
Oct 14 04:44:19 np0005486808 crazy_rosalind[262075]:    },
Oct 14 04:44:19 np0005486808 crazy_rosalind[262075]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 04:44:19 np0005486808 crazy_rosalind[262075]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:44:19 np0005486808 crazy_rosalind[262075]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 04:44:19 np0005486808 crazy_rosalind[262075]:        "osd_id": 0,
Oct 14 04:44:19 np0005486808 crazy_rosalind[262075]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:44:19 np0005486808 crazy_rosalind[262075]:        "type": "bluestore"
Oct 14 04:44:19 np0005486808 crazy_rosalind[262075]:    }
Oct 14 04:44:19 np0005486808 crazy_rosalind[262075]: }
Oct 14 04:44:19 np0005486808 systemd[1]: libpod-353e02456eb02c8c80976711a394cb97417011b454d9a06e9f861865401ff843.scope: Deactivated successfully.
Oct 14 04:44:19 np0005486808 podman[262059]: 2025-10-14 08:44:19.211323468 +0000 UTC m=+1.232961758 container died 353e02456eb02c8c80976711a394cb97417011b454d9a06e9f861865401ff843 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_rosalind, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 14 04:44:19 np0005486808 systemd[1]: libpod-353e02456eb02c8c80976711a394cb97417011b454d9a06e9f861865401ff843.scope: Consumed 1.060s CPU time.
Oct 14 04:44:19 np0005486808 systemd[1]: var-lib-containers-storage-overlay-0c14c736e03229dcba41f04f0aac8b8b5b69dcd48a417431f6a50039c6ecb190-merged.mount: Deactivated successfully.
Oct 14 04:44:19 np0005486808 podman[262059]: 2025-10-14 08:44:19.271347601 +0000 UTC m=+1.292985911 container remove 353e02456eb02c8c80976711a394cb97417011b454d9a06e9f861865401ff843 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_rosalind, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 14 04:44:19 np0005486808 systemd[1]: libpod-conmon-353e02456eb02c8c80976711a394cb97417011b454d9a06e9f861865401ff843.scope: Deactivated successfully.
Oct 14 04:44:19 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:44:19 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:44:19 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:44:19 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:44:19 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 57a9de11-3383-45b0-a834-d559b4f5cfad does not exist
Oct 14 04:44:19 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev d00c9d58-6afe-48ee-a37d-020bd2c16bb3 does not exist
Oct 14 04:44:20 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:44:20 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:44:21 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:44:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:44:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Oct 14 04:44:22 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4275678020' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 14 04:44:22 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.14353 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 14 04:44:22 np0005486808 ceph-mgr[74543]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 14 04:44:22 np0005486808 ceph-mgr[74543]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 14 04:44:23 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:44:25 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:44:27 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:44:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:44:29 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:44:31 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:44:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:44:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_08:44:32
Oct 14 04:44:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 04:44:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 04:44:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['vms', 'images', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'volumes', '.rgw.root']
Oct 14 04:44:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 04:44:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:44:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:44:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:44:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:44:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:44:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:44:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 04:44:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 04:44:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:44:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:44:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:44:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:44:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:44:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:44:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:44:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:44:33 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:44:35 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:44:36 np0005486808 podman[262173]: 2025-10-14 08:44:36.661752051 +0000 UTC m=+0.074199680 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 04:44:36 np0005486808 podman[262172]: 2025-10-14 08:44:36.693991836 +0000 UTC m=+0.106533328 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct 14 04:44:37 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:44:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:44:39 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:44:41 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 04:44:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:44:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 04:44:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:44:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 04:44:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:44:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:44:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:44:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:44:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:44:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:44:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:44:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:44:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:44:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 04:44:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:44:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:44:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:44:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 04:44:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:44:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 04:44:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:44:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:44:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:44:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 04:44:43 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 04:44:45 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 04:44:47 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 04:44:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:44:47 np0005486808 podman[262208]: 2025-10-14 08:44:47.67029565 +0000 UTC m=+0.080231667 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Oct 14 04:44:47 np0005486808 podman[262207]: 2025-10-14 08:44:47.721307853 +0000 UTC m=+0.133204978 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct 14 04:44:49 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 04:44:51 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 04:44:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:44:53 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:44:55 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:44:57 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:44:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:44:59 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:45:01 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:45:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:45:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:45:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:45:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:45:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:45:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:45:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:45:03 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:45:05 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:45:05 np0005486808 nova_compute[259627]: 2025-10-14 08:45:05.315 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:45:05 np0005486808 nova_compute[259627]: 2025-10-14 08:45:05.316 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:45:05 np0005486808 nova_compute[259627]: 2025-10-14 08:45:05.336 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:45:05 np0005486808 nova_compute[259627]: 2025-10-14 08:45:05.336 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:45:05 np0005486808 nova_compute[259627]: 2025-10-14 08:45:05.336 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:45:05 np0005486808 nova_compute[259627]: 2025-10-14 08:45:05.336 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 04:45:05 np0005486808 nova_compute[259627]: 2025-10-14 08:45:05.336 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:45:05 np0005486808 nova_compute[259627]: 2025-10-14 08:45:05.357 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:45:05 np0005486808 nova_compute[259627]: 2025-10-14 08:45:05.358 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:45:05 np0005486808 nova_compute[259627]: 2025-10-14 08:45:05.358 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:45:05 np0005486808 nova_compute[259627]: 2025-10-14 08:45:05.358 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 04:45:05 np0005486808 nova_compute[259627]: 2025-10-14 08:45:05.358 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:45:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 04:45:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4290334957' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 04:45:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 04:45:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4290334957' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 04:45:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:45:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3389595085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:45:05 np0005486808 nova_compute[259627]: 2025-10-14 08:45:05.815 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:45:06 np0005486808 nova_compute[259627]: 2025-10-14 08:45:06.051 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:45:06 np0005486808 nova_compute[259627]: 2025-10-14 08:45:06.052 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5157MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 04:45:06 np0005486808 nova_compute[259627]: 2025-10-14 08:45:06.052 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:45:06 np0005486808 nova_compute[259627]: 2025-10-14 08:45:06.052 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:45:06 np0005486808 nova_compute[259627]: 2025-10-14 08:45:06.144 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 04:45:06 np0005486808 nova_compute[259627]: 2025-10-14 08:45:06.144 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 04:45:06 np0005486808 nova_compute[259627]: 2025-10-14 08:45:06.170 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:45:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:45:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/6506065' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:45:06 np0005486808 nova_compute[259627]: 2025-10-14 08:45:06.583 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:45:06 np0005486808 nova_compute[259627]: 2025-10-14 08:45:06.590 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:45:06 np0005486808 nova_compute[259627]: 2025-10-14 08:45:06.609 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:45:06 np0005486808 nova_compute[259627]: 2025-10-14 08:45:06.610 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 04:45:06 np0005486808 nova_compute[259627]: 2025-10-14 08:45:06.611 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:45:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:45:07.004 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:45:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:45:07.004 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:45:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:45:07.005 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:45:07 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:45:07 np0005486808 nova_compute[259627]: 2025-10-14 08:45:07.252 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:45:07 np0005486808 nova_compute[259627]: 2025-10-14 08:45:07.253 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 04:45:07 np0005486808 nova_compute[259627]: 2025-10-14 08:45:07.253 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 04:45:07 np0005486808 nova_compute[259627]: 2025-10-14 08:45:07.268 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 14 04:45:07 np0005486808 nova_compute[259627]: 2025-10-14 08:45:07.269 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:45:07 np0005486808 nova_compute[259627]: 2025-10-14 08:45:07.270 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:45:07 np0005486808 nova_compute[259627]: 2025-10-14 08:45:07.270 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:45:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:45:07 np0005486808 podman[262294]: 2025-10-14 08:45:07.661974106 +0000 UTC m=+0.073209615 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=multipathd)
Oct 14 04:45:07 np0005486808 podman[262295]: 2025-10-14 08:45:07.673153669 +0000 UTC m=+0.077467980 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 14 04:45:09 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:45:11 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:45:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:45:11.782 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:45:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:45:11.784 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 14 04:45:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:45:11.786 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:45:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:45:13 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:45:15 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:45:17 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:45:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:45:18 np0005486808 podman[262335]: 2025-10-14 08:45:18.690636003 +0000 UTC m=+0.091098722 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct 14 04:45:18 np0005486808 podman[262334]: 2025-10-14 08:45:18.69380536 +0000 UTC m=+0.103223087 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009)
Oct 14 04:45:19 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:45:20 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:45:20 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:45:20 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 04:45:20 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:45:20 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 04:45:20 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:45:20 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev d0ddc7d3-78b2-482f-870b-3a36c1c0f885 does not exist
Oct 14 04:45:20 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 775b3684-b520-4624-a302-bfe218307f16 does not exist
Oct 14 04:45:20 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev d7a2994f-6eca-4f85-9718-0ed0fc6d5fe6 does not exist
Oct 14 04:45:20 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 04:45:20 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 04:45:20 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 04:45:20 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:45:20 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:45:20 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:45:21 np0005486808 podman[262650]: 2025-10-14 08:45:21.022859166 +0000 UTC m=+0.070904799 container create 7c40c4a0cd234c733d4406947fdd42a3c07271e13d738abd6b08613692b15526 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wiles, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Oct 14 04:45:21 np0005486808 systemd[1]: Started libpod-conmon-7c40c4a0cd234c733d4406947fdd42a3c07271e13d738abd6b08613692b15526.scope.
Oct 14 04:45:21 np0005486808 podman[262650]: 2025-10-14 08:45:20.99225091 +0000 UTC m=+0.040296533 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:45:21 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:45:21 np0005486808 podman[262650]: 2025-10-14 08:45:21.114776147 +0000 UTC m=+0.162821720 container init 7c40c4a0cd234c733d4406947fdd42a3c07271e13d738abd6b08613692b15526 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wiles, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 04:45:21 np0005486808 podman[262650]: 2025-10-14 08:45:21.126331679 +0000 UTC m=+0.174377212 container start 7c40c4a0cd234c733d4406947fdd42a3c07271e13d738abd6b08613692b15526 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wiles, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Oct 14 04:45:21 np0005486808 podman[262650]: 2025-10-14 08:45:21.13047806 +0000 UTC m=+0.178523633 container attach 7c40c4a0cd234c733d4406947fdd42a3c07271e13d738abd6b08613692b15526 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wiles, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 14 04:45:21 np0005486808 inspiring_wiles[262667]: 167 167
Oct 14 04:45:21 np0005486808 systemd[1]: libpod-7c40c4a0cd234c733d4406947fdd42a3c07271e13d738abd6b08613692b15526.scope: Deactivated successfully.
Oct 14 04:45:21 np0005486808 podman[262650]: 2025-10-14 08:45:21.134317343 +0000 UTC m=+0.182362906 container died 7c40c4a0cd234c733d4406947fdd42a3c07271e13d738abd6b08613692b15526 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 14 04:45:21 np0005486808 systemd[1]: var-lib-containers-storage-overlay-91165a897484024d82c6ac183ec2a767892e55320e822193069638dbb4b86d95-merged.mount: Deactivated successfully.
Oct 14 04:45:21 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:45:21 np0005486808 podman[262650]: 2025-10-14 08:45:21.197709379 +0000 UTC m=+0.245754942 container remove 7c40c4a0cd234c733d4406947fdd42a3c07271e13d738abd6b08613692b15526 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wiles, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 04:45:21 np0005486808 systemd[1]: libpod-conmon-7c40c4a0cd234c733d4406947fdd42a3c07271e13d738abd6b08613692b15526.scope: Deactivated successfully.
Oct 14 04:45:21 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:45:21 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:45:21 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:45:21 np0005486808 podman[262692]: 2025-10-14 08:45:21.424510098 +0000 UTC m=+0.064001952 container create 3ebf24b00c5588118a851cf258f3fd739e0bc97e18e99e0362c20766a8c5b5d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_allen, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 14 04:45:21 np0005486808 systemd[1]: Started libpod-conmon-3ebf24b00c5588118a851cf258f3fd739e0bc97e18e99e0362c20766a8c5b5d1.scope.
Oct 14 04:45:21 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:45:21 np0005486808 podman[262692]: 2025-10-14 08:45:21.403608318 +0000 UTC m=+0.043100172 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:45:21 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a048845f654cc4374299f7ea0b20c1c0d876dcea5a8be47b639106ec059bf92/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:45:21 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a048845f654cc4374299f7ea0b20c1c0d876dcea5a8be47b639106ec059bf92/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:45:21 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a048845f654cc4374299f7ea0b20c1c0d876dcea5a8be47b639106ec059bf92/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:45:21 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a048845f654cc4374299f7ea0b20c1c0d876dcea5a8be47b639106ec059bf92/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:45:21 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a048845f654cc4374299f7ea0b20c1c0d876dcea5a8be47b639106ec059bf92/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:45:21 np0005486808 podman[262692]: 2025-10-14 08:45:21.514601894 +0000 UTC m=+0.154093748 container init 3ebf24b00c5588118a851cf258f3fd739e0bc97e18e99e0362c20766a8c5b5d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_allen, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 14 04:45:21 np0005486808 podman[262692]: 2025-10-14 08:45:21.530328567 +0000 UTC m=+0.169820411 container start 3ebf24b00c5588118a851cf258f3fd739e0bc97e18e99e0362c20766a8c5b5d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_allen, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 14 04:45:21 np0005486808 podman[262692]: 2025-10-14 08:45:21.534273513 +0000 UTC m=+0.173765427 container attach 3ebf24b00c5588118a851cf258f3fd739e0bc97e18e99e0362c20766a8c5b5d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_allen, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 14 04:45:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:45:22 np0005486808 competent_allen[262708]: --> passed data devices: 0 physical, 3 LVM
Oct 14 04:45:22 np0005486808 competent_allen[262708]: --> relative data size: 1.0
Oct 14 04:45:22 np0005486808 competent_allen[262708]: --> All data devices are unavailable
Oct 14 04:45:22 np0005486808 systemd[1]: libpod-3ebf24b00c5588118a851cf258f3fd739e0bc97e18e99e0362c20766a8c5b5d1.scope: Deactivated successfully.
Oct 14 04:45:22 np0005486808 systemd[1]: libpod-3ebf24b00c5588118a851cf258f3fd739e0bc97e18e99e0362c20766a8c5b5d1.scope: Consumed 1.016s CPU time.
Oct 14 04:45:22 np0005486808 podman[262737]: 2025-10-14 08:45:22.626466969 +0000 UTC m=+0.025860442 container died 3ebf24b00c5588118a851cf258f3fd739e0bc97e18e99e0362c20766a8c5b5d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 14 04:45:22 np0005486808 systemd[1]: var-lib-containers-storage-overlay-3a048845f654cc4374299f7ea0b20c1c0d876dcea5a8be47b639106ec059bf92-merged.mount: Deactivated successfully.
Oct 14 04:45:22 np0005486808 podman[262737]: 2025-10-14 08:45:22.702097392 +0000 UTC m=+0.101490825 container remove 3ebf24b00c5588118a851cf258f3fd739e0bc97e18e99e0362c20766a8c5b5d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_allen, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 14 04:45:22 np0005486808 systemd[1]: libpod-conmon-3ebf24b00c5588118a851cf258f3fd739e0bc97e18e99e0362c20766a8c5b5d1.scope: Deactivated successfully.
Oct 14 04:45:23 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:45:23 np0005486808 podman[262893]: 2025-10-14 08:45:23.415527634 +0000 UTC m=+0.057280847 container create 147ed38184032d24a46899491ac00a7f4be36c2f1297ead7a599b0e53e86217c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hoover, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 14 04:45:23 np0005486808 systemd[1]: Started libpod-conmon-147ed38184032d24a46899491ac00a7f4be36c2f1297ead7a599b0e53e86217c.scope.
Oct 14 04:45:23 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:45:23 np0005486808 podman[262893]: 2025-10-14 08:45:23.382859098 +0000 UTC m=+0.024612411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:45:23 np0005486808 podman[262893]: 2025-10-14 08:45:23.495919204 +0000 UTC m=+0.137672467 container init 147ed38184032d24a46899491ac00a7f4be36c2f1297ead7a599b0e53e86217c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:45:23 np0005486808 podman[262893]: 2025-10-14 08:45:23.51095989 +0000 UTC m=+0.152713113 container start 147ed38184032d24a46899491ac00a7f4be36c2f1297ead7a599b0e53e86217c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hoover, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:45:23 np0005486808 podman[262893]: 2025-10-14 08:45:23.513994564 +0000 UTC m=+0.155747807 container attach 147ed38184032d24a46899491ac00a7f4be36c2f1297ead7a599b0e53e86217c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:45:23 np0005486808 condescending_hoover[262910]: 167 167
Oct 14 04:45:23 np0005486808 systemd[1]: libpod-147ed38184032d24a46899491ac00a7f4be36c2f1297ead7a599b0e53e86217c.scope: Deactivated successfully.
Oct 14 04:45:23 np0005486808 podman[262893]: 2025-10-14 08:45:23.516251849 +0000 UTC m=+0.158005082 container died 147ed38184032d24a46899491ac00a7f4be36c2f1297ead7a599b0e53e86217c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hoover, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Oct 14 04:45:23 np0005486808 systemd[1]: var-lib-containers-storage-overlay-3432e77a602699e7bef432b7dba8f66e745637a24f0e86d692119f07c9301f99-merged.mount: Deactivated successfully.
Oct 14 04:45:23 np0005486808 podman[262893]: 2025-10-14 08:45:23.599439717 +0000 UTC m=+0.241192940 container remove 147ed38184032d24a46899491ac00a7f4be36c2f1297ead7a599b0e53e86217c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 14 04:45:23 np0005486808 systemd[1]: libpod-conmon-147ed38184032d24a46899491ac00a7f4be36c2f1297ead7a599b0e53e86217c.scope: Deactivated successfully.
Oct 14 04:45:23 np0005486808 podman[262932]: 2025-10-14 08:45:23.812155152 +0000 UTC m=+0.058001414 container create 7c46eb7f44b1950cb0ee727532f4707f6b29872f8d4169c18851d4f8d0271ad4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:45:23 np0005486808 systemd[1]: Started libpod-conmon-7c46eb7f44b1950cb0ee727532f4707f6b29872f8d4169c18851d4f8d0271ad4.scope.
Oct 14 04:45:23 np0005486808 podman[262932]: 2025-10-14 08:45:23.780432018 +0000 UTC m=+0.026278270 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:45:23 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:45:23 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48a8c9bf33bf5103757c138fd4b77e15319444a610fa4c87a8f08f8feb00e0e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:45:23 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48a8c9bf33bf5103757c138fd4b77e15319444a610fa4c87a8f08f8feb00e0e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:45:23 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48a8c9bf33bf5103757c138fd4b77e15319444a610fa4c87a8f08f8feb00e0e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:45:23 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48a8c9bf33bf5103757c138fd4b77e15319444a610fa4c87a8f08f8feb00e0e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:45:23 np0005486808 podman[262932]: 2025-10-14 08:45:23.918421652 +0000 UTC m=+0.164267934 container init 7c46eb7f44b1950cb0ee727532f4707f6b29872f8d4169c18851d4f8d0271ad4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Oct 14 04:45:23 np0005486808 podman[262932]: 2025-10-14 08:45:23.937780704 +0000 UTC m=+0.183626956 container start 7c46eb7f44b1950cb0ee727532f4707f6b29872f8d4169c18851d4f8d0271ad4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hellman, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 14 04:45:23 np0005486808 podman[262932]: 2025-10-14 08:45:23.942093429 +0000 UTC m=+0.187939731 container attach 7c46eb7f44b1950cb0ee727532f4707f6b29872f8d4169c18851d4f8d0271ad4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hellman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]: {
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:    "0": [
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:        {
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:            "devices": [
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:                "/dev/loop3"
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:            ],
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:            "lv_name": "ceph_lv0",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:            "lv_size": "21470642176",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:            "name": "ceph_lv0",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:            "tags": {
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:                "ceph.cluster_name": "ceph",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:                "ceph.crush_device_class": "",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:                "ceph.encrypted": "0",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:                "ceph.osd_id": "0",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:                "ceph.type": "block",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:                "ceph.vdo": "0"
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:            },
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:            "type": "block",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:            "vg_name": "ceph_vg0"
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:        }
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:    ],
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:    "1": [
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:        {
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:            "devices": [
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:                "/dev/loop4"
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:            ],
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:            "lv_name": "ceph_lv1",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:            "lv_size": "21470642176",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:            "name": "ceph_lv1",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:            "tags": {
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:                "ceph.cluster_name": "ceph",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:                "ceph.crush_device_class": "",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:                "ceph.encrypted": "0",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:                "ceph.osd_id": "1",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:                "ceph.type": "block",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:                "ceph.vdo": "0"
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:            },
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:            "type": "block",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:            "vg_name": "ceph_vg1"
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:        }
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:    ],
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:    "2": [
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:        {
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:            "devices": [
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:                "/dev/loop5"
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:            ],
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:            "lv_name": "ceph_lv2",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:            "lv_size": "21470642176",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:            "name": "ceph_lv2",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:            "tags": {
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:                "ceph.cluster_name": "ceph",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:                "ceph.crush_device_class": "",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:                "ceph.encrypted": "0",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:                "ceph.osd_id": "2",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:                "ceph.type": "block",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:                "ceph.vdo": "0"
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:            },
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:            "type": "block",
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:            "vg_name": "ceph_vg2"
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:        }
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]:    ]
Oct 14 04:45:24 np0005486808 elastic_hellman[262948]: }
Oct 14 04:45:24 np0005486808 systemd[1]: libpod-7c46eb7f44b1950cb0ee727532f4707f6b29872f8d4169c18851d4f8d0271ad4.scope: Deactivated successfully.
Oct 14 04:45:24 np0005486808 podman[262932]: 2025-10-14 08:45:24.746585851 +0000 UTC m=+0.992432073 container died 7c46eb7f44b1950cb0ee727532f4707f6b29872f8d4169c18851d4f8d0271ad4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hellman, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:45:24 np0005486808 systemd[1]: var-lib-containers-storage-overlay-48a8c9bf33bf5103757c138fd4b77e15319444a610fa4c87a8f08f8feb00e0e6-merged.mount: Deactivated successfully.
Oct 14 04:45:24 np0005486808 podman[262932]: 2025-10-14 08:45:24.838339308 +0000 UTC m=+1.084185520 container remove 7c46eb7f44b1950cb0ee727532f4707f6b29872f8d4169c18851d4f8d0271ad4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hellman, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:45:24 np0005486808 systemd[1]: libpod-conmon-7c46eb7f44b1950cb0ee727532f4707f6b29872f8d4169c18851d4f8d0271ad4.scope: Deactivated successfully.
Oct 14 04:45:25 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v786: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:45:25 np0005486808 podman[263112]: 2025-10-14 08:45:25.491219474 +0000 UTC m=+0.061811098 container create ce58a4ddcf92aaf2f55ea59018d0e5fa42ebe505401262f18c8943a0af828f1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 14 04:45:25 np0005486808 systemd[1]: Started libpod-conmon-ce58a4ddcf92aaf2f55ea59018d0e5fa42ebe505401262f18c8943a0af828f1d.scope.
Oct 14 04:45:25 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:45:25 np0005486808 podman[263112]: 2025-10-14 08:45:25.472320723 +0000 UTC m=+0.042912367 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:45:25 np0005486808 podman[263112]: 2025-10-14 08:45:25.576767809 +0000 UTC m=+0.147359473 container init ce58a4ddcf92aaf2f55ea59018d0e5fa42ebe505401262f18c8943a0af828f1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_neumann, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:45:25 np0005486808 podman[263112]: 2025-10-14 08:45:25.583568145 +0000 UTC m=+0.154159789 container start ce58a4ddcf92aaf2f55ea59018d0e5fa42ebe505401262f18c8943a0af828f1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 14 04:45:25 np0005486808 nice_neumann[263129]: 167 167
Oct 14 04:45:25 np0005486808 systemd[1]: libpod-ce58a4ddcf92aaf2f55ea59018d0e5fa42ebe505401262f18c8943a0af828f1d.scope: Deactivated successfully.
Oct 14 04:45:25 np0005486808 podman[263112]: 2025-10-14 08:45:25.590248748 +0000 UTC m=+0.160840392 container attach ce58a4ddcf92aaf2f55ea59018d0e5fa42ebe505401262f18c8943a0af828f1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_neumann, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 14 04:45:25 np0005486808 podman[263112]: 2025-10-14 08:45:25.591396276 +0000 UTC m=+0.161987890 container died ce58a4ddcf92aaf2f55ea59018d0e5fa42ebe505401262f18c8943a0af828f1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 14 04:45:25 np0005486808 systemd[1]: var-lib-containers-storage-overlay-9ac11129d221fbd1b1d77c65c9e66b08a2d130155f340d36a0aa7d6a0770aa38-merged.mount: Deactivated successfully.
Oct 14 04:45:25 np0005486808 podman[263112]: 2025-10-14 08:45:25.633867421 +0000 UTC m=+0.204459035 container remove ce58a4ddcf92aaf2f55ea59018d0e5fa42ebe505401262f18c8943a0af828f1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_neumann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 14 04:45:25 np0005486808 systemd[1]: libpod-conmon-ce58a4ddcf92aaf2f55ea59018d0e5fa42ebe505401262f18c8943a0af828f1d.scope: Deactivated successfully.
Oct 14 04:45:25 np0005486808 podman[263152]: 2025-10-14 08:45:25.803138688 +0000 UTC m=+0.058901927 container create 9aabde9555bef738b44de729e5578994bcf6fae61609f94ff68d63410762ecd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:45:25 np0005486808 systemd[1]: Started libpod-conmon-9aabde9555bef738b44de729e5578994bcf6fae61609f94ff68d63410762ecd3.scope.
Oct 14 04:45:25 np0005486808 podman[263152]: 2025-10-14 08:45:25.774052668 +0000 UTC m=+0.029815957 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:45:25 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:45:25 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5906e4a9c9dae01ff4399c7863ff42633254d4e3f3878e0d48027322b439cf6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:45:25 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5906e4a9c9dae01ff4399c7863ff42633254d4e3f3878e0d48027322b439cf6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:45:25 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5906e4a9c9dae01ff4399c7863ff42633254d4e3f3878e0d48027322b439cf6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:45:25 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5906e4a9c9dae01ff4399c7863ff42633254d4e3f3878e0d48027322b439cf6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:45:25 np0005486808 podman[263152]: 2025-10-14 08:45:25.907467071 +0000 UTC m=+0.163230380 container init 9aabde9555bef738b44de729e5578994bcf6fae61609f94ff68d63410762ecd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_benz, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:45:25 np0005486808 podman[263152]: 2025-10-14 08:45:25.9148292 +0000 UTC m=+0.170592409 container start 9aabde9555bef738b44de729e5578994bcf6fae61609f94ff68d63410762ecd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_benz, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 14 04:45:25 np0005486808 podman[263152]: 2025-10-14 08:45:25.921948384 +0000 UTC m=+0.177711673 container attach 9aabde9555bef738b44de729e5578994bcf6fae61609f94ff68d63410762ecd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 14 04:45:26 np0005486808 silly_benz[263168]: {
Oct 14 04:45:26 np0005486808 silly_benz[263168]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 04:45:26 np0005486808 silly_benz[263168]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:45:26 np0005486808 silly_benz[263168]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 04:45:26 np0005486808 silly_benz[263168]:        "osd_id": 2,
Oct 14 04:45:26 np0005486808 silly_benz[263168]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:45:26 np0005486808 silly_benz[263168]:        "type": "bluestore"
Oct 14 04:45:26 np0005486808 silly_benz[263168]:    },
Oct 14 04:45:26 np0005486808 silly_benz[263168]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 04:45:26 np0005486808 silly_benz[263168]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:45:26 np0005486808 silly_benz[263168]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 04:45:26 np0005486808 silly_benz[263168]:        "osd_id": 1,
Oct 14 04:45:26 np0005486808 silly_benz[263168]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:45:26 np0005486808 silly_benz[263168]:        "type": "bluestore"
Oct 14 04:45:26 np0005486808 silly_benz[263168]:    },
Oct 14 04:45:26 np0005486808 silly_benz[263168]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 04:45:26 np0005486808 silly_benz[263168]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:45:26 np0005486808 silly_benz[263168]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 04:45:26 np0005486808 silly_benz[263168]:        "osd_id": 0,
Oct 14 04:45:26 np0005486808 silly_benz[263168]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:45:26 np0005486808 silly_benz[263168]:        "type": "bluestore"
Oct 14 04:45:26 np0005486808 silly_benz[263168]:    }
Oct 14 04:45:26 np0005486808 silly_benz[263168]: }
Oct 14 04:45:26 np0005486808 systemd[1]: libpod-9aabde9555bef738b44de729e5578994bcf6fae61609f94ff68d63410762ecd3.scope: Deactivated successfully.
Oct 14 04:45:26 np0005486808 podman[263152]: 2025-10-14 08:45:26.866672224 +0000 UTC m=+1.122435463 container died 9aabde9555bef738b44de729e5578994bcf6fae61609f94ff68d63410762ecd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_benz, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:45:26 np0005486808 systemd[1]: var-lib-containers-storage-overlay-b5906e4a9c9dae01ff4399c7863ff42633254d4e3f3878e0d48027322b439cf6-merged.mount: Deactivated successfully.
Oct 14 04:45:26 np0005486808 podman[263152]: 2025-10-14 08:45:26.989078458 +0000 UTC m=+1.244841697 container remove 9aabde9555bef738b44de729e5578994bcf6fae61609f94ff68d63410762ecd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 14 04:45:27 np0005486808 systemd[1]: libpod-conmon-9aabde9555bef738b44de729e5578994bcf6fae61609f94ff68d63410762ecd3.scope: Deactivated successfully.
Oct 14 04:45:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:45:27 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:45:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:45:27 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:45:27 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 7d62922e-e153-4ddf-849c-f0b685503816 does not exist
Oct 14 04:45:27 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 087a04e3-d538-4dc4-9089-74e1f8a253aa does not exist
Oct 14 04:45:27 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:45:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:45:28 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:45:28 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:45:29 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:45:31 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:45:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:45:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_08:45:32
Oct 14 04:45:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 04:45:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 04:45:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'vms', 'cephfs.cephfs.data', 'backups', 'volumes', 'default.rgw.control', 'default.rgw.meta', '.rgw.root', 'images', 'default.rgw.log']
Oct 14 04:45:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 04:45:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:45:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:45:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:45:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:45:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:45:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:45:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 04:45:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:45:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 04:45:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:45:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:45:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:45:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:45:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:45:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:45:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:45:33 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:45:35 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:45:37 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:45:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:45:38 np0005486808 podman[263263]: 2025-10-14 08:45:38.679277263 +0000 UTC m=+0.082538573 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 04:45:38 np0005486808 podman[263262]: 2025-10-14 08:45:38.688215761 +0000 UTC m=+0.103569066 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 14 04:45:39 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:45:41 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:45:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:45:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 04:45:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:45:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 04:45:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:45:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:45:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:45:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:45:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:45:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:45:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:45:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:45:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:45:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 04:45:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:45:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:45:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:45:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 04:45:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:45:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 04:45:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:45:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:45:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:45:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 04:45:43 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:45:45 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:45:47 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:45:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:45:49 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:45:49 np0005486808 podman[263307]: 2025-10-14 08:45:49.668618025 +0000 UTC m=+0.073413721 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 14 04:45:49 np0005486808 podman[263306]: 2025-10-14 08:45:49.707574314 +0000 UTC m=+0.115161918 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Oct 14 04:45:51 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:45:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:45:53 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:45:55 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:45:57 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:45:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:45:59 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:46:01 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:46:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:46:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:46:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:46:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:46:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:46:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:46:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:46:03 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:46:03 np0005486808 nova_compute[259627]: 2025-10-14 08:46:03.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:46:03 np0005486808 nova_compute[259627]: 2025-10-14 08:46:03.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:46:03 np0005486808 nova_compute[259627]: 2025-10-14 08:46:03.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 04:46:04 np0005486808 nova_compute[259627]: 2025-10-14 08:46:04.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:46:05 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v806: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:46:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 04:46:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3949445894' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 04:46:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 04:46:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3949445894' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 04:46:05 np0005486808 nova_compute[259627]: 2025-10-14 08:46:05.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:46:06 np0005486808 nova_compute[259627]: 2025-10-14 08:46:06.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:46:06 np0005486808 nova_compute[259627]: 2025-10-14 08:46:06.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:46:06 np0005486808 nova_compute[259627]: 2025-10-14 08:46:06.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:46:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:46:07.005 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:46:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:46:07.005 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:46:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:46:07.006 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:46:07 np0005486808 nova_compute[259627]: 2025-10-14 08:46:07.013 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:46:07 np0005486808 nova_compute[259627]: 2025-10-14 08:46:07.014 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:46:07 np0005486808 nova_compute[259627]: 2025-10-14 08:46:07.015 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:46:07 np0005486808 nova_compute[259627]: 2025-10-14 08:46:07.015 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 04:46:07 np0005486808 nova_compute[259627]: 2025-10-14 08:46:07.016 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:46:07 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:46:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:46:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:46:07 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/198112486' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:46:07 np0005486808 nova_compute[259627]: 2025-10-14 08:46:07.490 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:46:07 np0005486808 nova_compute[259627]: 2025-10-14 08:46:07.708 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:46:07 np0005486808 nova_compute[259627]: 2025-10-14 08:46:07.710 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5152MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 04:46:07 np0005486808 nova_compute[259627]: 2025-10-14 08:46:07.711 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:46:07 np0005486808 nova_compute[259627]: 2025-10-14 08:46:07.711 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:46:07 np0005486808 nova_compute[259627]: 2025-10-14 08:46:07.796 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 04:46:07 np0005486808 nova_compute[259627]: 2025-10-14 08:46:07.797 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 04:46:07 np0005486808 nova_compute[259627]: 2025-10-14 08:46:07.823 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:46:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:46:08 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2129611622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:46:08 np0005486808 nova_compute[259627]: 2025-10-14 08:46:08.278 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:46:08 np0005486808 nova_compute[259627]: 2025-10-14 08:46:08.283 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:46:08 np0005486808 nova_compute[259627]: 2025-10-14 08:46:08.299 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:46:08 np0005486808 nova_compute[259627]: 2025-10-14 08:46:08.301 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 04:46:08 np0005486808 nova_compute[259627]: 2025-10-14 08:46:08.302 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:46:09 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v808: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:46:09 np0005486808 nova_compute[259627]: 2025-10-14 08:46:09.301 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:46:09 np0005486808 nova_compute[259627]: 2025-10-14 08:46:09.302 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 04:46:09 np0005486808 nova_compute[259627]: 2025-10-14 08:46:09.302 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 04:46:09 np0005486808 nova_compute[259627]: 2025-10-14 08:46:09.325 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 14 04:46:09 np0005486808 nova_compute[259627]: 2025-10-14 08:46:09.326 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:46:09 np0005486808 podman[263405]: 2025-10-14 08:46:09.668232103 +0000 UTC m=+0.077725236 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:46:09 np0005486808 podman[263406]: 2025-10-14 08:46:09.668481759 +0000 UTC m=+0.073744691 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 14 04:46:11 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:46:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:46:13 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v810: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:46:15 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:46:17 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:46:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:46:19 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v813: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:46:20 np0005486808 podman[263446]: 2025-10-14 08:46:20.667665809 +0000 UTC m=+0.072115152 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 14 04:46:20 np0005486808 podman[263445]: 2025-10-14 08:46:20.717875558 +0000 UTC m=+0.127702970 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true)
Oct 14 04:46:21 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v814: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:46:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:46:23 np0005486808 systemd[1]: packagekit.service: Deactivated successfully.
Oct 14 04:46:23 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:46:25 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:46:27 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:46:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:46:28 np0005486808 podman[263665]: 2025-10-14 08:46:28.151717183 +0000 UTC m=+0.094780704 container exec c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 14 04:46:28 np0005486808 podman[263665]: 2025-10-14 08:46:28.263520011 +0000 UTC m=+0.206583482 container exec_died c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 14 04:46:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:46:28 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:46:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:46:28 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:46:29 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:46:29 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:46:29 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:46:29 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 04:46:29 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:46:29 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 04:46:29 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:46:29 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 7ee9ca42-51e3-4bae-afb1-5d2d1fabce76 does not exist
Oct 14 04:46:29 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 06fe5543-5eef-4897-8e73-4e2e82e1ce52 does not exist
Oct 14 04:46:29 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev c953c8dd-3939-490b-9f33-4a7a581b139e does not exist
Oct 14 04:46:29 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 04:46:29 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 04:46:29 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 04:46:29 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:46:29 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:46:29 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:46:29 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:46:29 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:46:29 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:46:29 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:46:29 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:46:30 np0005486808 podman[264095]: 2025-10-14 08:46:30.483675992 +0000 UTC m=+0.043245063 container create d9af980aeefec499bbba6982a48c1589969fce4333c0dafb6108ddcdf7a23db7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shaw, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:46:30 np0005486808 systemd[1]: Started libpod-conmon-d9af980aeefec499bbba6982a48c1589969fce4333c0dafb6108ddcdf7a23db7.scope.
Oct 14 04:46:30 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:46:30 np0005486808 podman[264095]: 2025-10-14 08:46:30.462115218 +0000 UTC m=+0.021684319 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:46:30 np0005486808 podman[264095]: 2025-10-14 08:46:30.559583844 +0000 UTC m=+0.119152955 container init d9af980aeefec499bbba6982a48c1589969fce4333c0dafb6108ddcdf7a23db7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shaw, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:46:30 np0005486808 podman[264095]: 2025-10-14 08:46:30.570977987 +0000 UTC m=+0.130547088 container start d9af980aeefec499bbba6982a48c1589969fce4333c0dafb6108ddcdf7a23db7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:46:30 np0005486808 podman[264095]: 2025-10-14 08:46:30.574364417 +0000 UTC m=+0.133933528 container attach d9af980aeefec499bbba6982a48c1589969fce4333c0dafb6108ddcdf7a23db7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 04:46:30 np0005486808 crazy_shaw[264111]: 167 167
Oct 14 04:46:30 np0005486808 systemd[1]: libpod-d9af980aeefec499bbba6982a48c1589969fce4333c0dafb6108ddcdf7a23db7.scope: Deactivated successfully.
Oct 14 04:46:30 np0005486808 podman[264095]: 2025-10-14 08:46:30.577980304 +0000 UTC m=+0.137549405 container died d9af980aeefec499bbba6982a48c1589969fce4333c0dafb6108ddcdf7a23db7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:46:30 np0005486808 systemd[1]: var-lib-containers-storage-overlay-990a95a30737891226a6f5ea1142d8a52cc6c6ff3d708acd83dd5c84d4b94866-merged.mount: Deactivated successfully.
Oct 14 04:46:30 np0005486808 podman[264095]: 2025-10-14 08:46:30.636517051 +0000 UTC m=+0.196086122 container remove d9af980aeefec499bbba6982a48c1589969fce4333c0dafb6108ddcdf7a23db7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shaw, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:46:30 np0005486808 systemd[1]: libpod-conmon-d9af980aeefec499bbba6982a48c1589969fce4333c0dafb6108ddcdf7a23db7.scope: Deactivated successfully.
Oct 14 04:46:30 np0005486808 podman[264134]: 2025-10-14 08:46:30.858036289 +0000 UTC m=+0.058102228 container create 1210854ee3ab5ed17d8e9359a57eb9f0ce06416cf257cc12c464d2b99ce604bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 14 04:46:30 np0005486808 systemd[1]: Started libpod-conmon-1210854ee3ab5ed17d8e9359a57eb9f0ce06416cf257cc12c464d2b99ce604bc.scope.
Oct 14 04:46:30 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:46:30 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ffccc98330c344e6009a6e70874e1279cd0c1f5be286aa2d512224a2af58ba1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:46:30 np0005486808 podman[264134]: 2025-10-14 08:46:30.841627138 +0000 UTC m=+0.041693097 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:46:30 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ffccc98330c344e6009a6e70874e1279cd0c1f5be286aa2d512224a2af58ba1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:46:30 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ffccc98330c344e6009a6e70874e1279cd0c1f5be286aa2d512224a2af58ba1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:46:30 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ffccc98330c344e6009a6e70874e1279cd0c1f5be286aa2d512224a2af58ba1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:46:30 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ffccc98330c344e6009a6e70874e1279cd0c1f5be286aa2d512224a2af58ba1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:46:30 np0005486808 podman[264134]: 2025-10-14 08:46:30.95315526 +0000 UTC m=+0.153221279 container init 1210854ee3ab5ed17d8e9359a57eb9f0ce06416cf257cc12c464d2b99ce604bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Oct 14 04:46:30 np0005486808 podman[264134]: 2025-10-14 08:46:30.960148857 +0000 UTC m=+0.160214786 container start 1210854ee3ab5ed17d8e9359a57eb9f0ce06416cf257cc12c464d2b99ce604bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_ardinghelli, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 14 04:46:30 np0005486808 podman[264134]: 2025-10-14 08:46:30.964229934 +0000 UTC m=+0.164295913 container attach 1210854ee3ab5ed17d8e9359a57eb9f0ce06416cf257cc12c464d2b99ce604bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_ardinghelli, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:46:31 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:46:31 np0005486808 heuristic_ardinghelli[264151]: --> passed data devices: 0 physical, 3 LVM
Oct 14 04:46:31 np0005486808 heuristic_ardinghelli[264151]: --> relative data size: 1.0
Oct 14 04:46:31 np0005486808 heuristic_ardinghelli[264151]: --> All data devices are unavailable
Oct 14 04:46:32 np0005486808 systemd[1]: libpod-1210854ee3ab5ed17d8e9359a57eb9f0ce06416cf257cc12c464d2b99ce604bc.scope: Deactivated successfully.
Oct 14 04:46:32 np0005486808 systemd[1]: libpod-1210854ee3ab5ed17d8e9359a57eb9f0ce06416cf257cc12c464d2b99ce604bc.scope: Consumed 1.015s CPU time.
Oct 14 04:46:32 np0005486808 podman[264134]: 2025-10-14 08:46:32.021223457 +0000 UTC m=+1.221289446 container died 1210854ee3ab5ed17d8e9359a57eb9f0ce06416cf257cc12c464d2b99ce604bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_ardinghelli, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 14 04:46:32 np0005486808 systemd[1]: var-lib-containers-storage-overlay-9ffccc98330c344e6009a6e70874e1279cd0c1f5be286aa2d512224a2af58ba1-merged.mount: Deactivated successfully.
Oct 14 04:46:32 np0005486808 podman[264134]: 2025-10-14 08:46:32.086807603 +0000 UTC m=+1.286873542 container remove 1210854ee3ab5ed17d8e9359a57eb9f0ce06416cf257cc12c464d2b99ce604bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_ardinghelli, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:46:32 np0005486808 systemd[1]: libpod-conmon-1210854ee3ab5ed17d8e9359a57eb9f0ce06416cf257cc12c464d2b99ce604bc.scope: Deactivated successfully.
Oct 14 04:46:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:46:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_08:46:32
Oct 14 04:46:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 04:46:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 04:46:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', '.rgw.root', 'volumes', 'default.rgw.log', 'default.rgw.meta', 'vms', 'backups']
Oct 14 04:46:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 04:46:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:46:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:46:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:46:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:46:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:46:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:46:32 np0005486808 podman[264334]: 2025-10-14 08:46:32.782971602 +0000 UTC m=+0.057293929 container create 43f8322b20240b49f078b278230837d7201b834cb80082a2eb4a149f03b71d6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_jepsen, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 14 04:46:32 np0005486808 systemd[1]: Started libpod-conmon-43f8322b20240b49f078b278230837d7201b834cb80082a2eb4a149f03b71d6d.scope.
Oct 14 04:46:32 np0005486808 podman[264334]: 2025-10-14 08:46:32.754598415 +0000 UTC m=+0.028920802 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:46:32 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:46:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 04:46:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:46:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 04:46:32 np0005486808 podman[264334]: 2025-10-14 08:46:32.87293151 +0000 UTC m=+0.147253897 container init 43f8322b20240b49f078b278230837d7201b834cb80082a2eb4a149f03b71d6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:46:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:46:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:46:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:46:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:46:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:46:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:46:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:46:32 np0005486808 podman[264334]: 2025-10-14 08:46:32.886219397 +0000 UTC m=+0.160541714 container start 43f8322b20240b49f078b278230837d7201b834cb80082a2eb4a149f03b71d6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 14 04:46:32 np0005486808 podman[264334]: 2025-10-14 08:46:32.890341665 +0000 UTC m=+0.164664042 container attach 43f8322b20240b49f078b278230837d7201b834cb80082a2eb4a149f03b71d6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_jepsen, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:46:32 np0005486808 funny_jepsen[264351]: 167 167
Oct 14 04:46:32 np0005486808 systemd[1]: libpod-43f8322b20240b49f078b278230837d7201b834cb80082a2eb4a149f03b71d6d.scope: Deactivated successfully.
Oct 14 04:46:32 np0005486808 podman[264334]: 2025-10-14 08:46:32.893674995 +0000 UTC m=+0.167997322 container died 43f8322b20240b49f078b278230837d7201b834cb80082a2eb4a149f03b71d6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_jepsen, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 14 04:46:32 np0005486808 systemd[1]: var-lib-containers-storage-overlay-2e05ba009f8e8514aa1dfbc611553d46b036f0403b63a834b2060ffabf27d53e-merged.mount: Deactivated successfully.
Oct 14 04:46:32 np0005486808 podman[264334]: 2025-10-14 08:46:32.947676914 +0000 UTC m=+0.221999241 container remove 43f8322b20240b49f078b278230837d7201b834cb80082a2eb4a149f03b71d6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_jepsen, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:46:32 np0005486808 systemd[1]: libpod-conmon-43f8322b20240b49f078b278230837d7201b834cb80082a2eb4a149f03b71d6d.scope: Deactivated successfully.
Oct 14 04:46:33 np0005486808 podman[264375]: 2025-10-14 08:46:33.138844748 +0000 UTC m=+0.049605486 container create 45f7792d8c010b0feeffd715fefd1f5527f7d998e84a483ab8c0c43a8df2fa8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 04:46:33 np0005486808 systemd[1]: Started libpod-conmon-45f7792d8c010b0feeffd715fefd1f5527f7d998e84a483ab8c0c43a8df2fa8d.scope.
Oct 14 04:46:33 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:46:33 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/197be51e44f3813c9cc59bcbf80cae9da9f62ddbc5a7ae89dae4af4513bf86c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:46:33 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/197be51e44f3813c9cc59bcbf80cae9da9f62ddbc5a7ae89dae4af4513bf86c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:46:33 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/197be51e44f3813c9cc59bcbf80cae9da9f62ddbc5a7ae89dae4af4513bf86c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:46:33 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/197be51e44f3813c9cc59bcbf80cae9da9f62ddbc5a7ae89dae4af4513bf86c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:46:33 np0005486808 podman[264375]: 2025-10-14 08:46:33.114897266 +0000 UTC m=+0.025658054 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:46:33 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:46:33 np0005486808 podman[264375]: 2025-10-14 08:46:33.224159954 +0000 UTC m=+0.134920702 container init 45f7792d8c010b0feeffd715fefd1f5527f7d998e84a483ab8c0c43a8df2fa8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_edison, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:46:33 np0005486808 podman[264375]: 2025-10-14 08:46:33.243328422 +0000 UTC m=+0.154089130 container start 45f7792d8c010b0feeffd715fefd1f5527f7d998e84a483ab8c0c43a8df2fa8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:46:33 np0005486808 podman[264375]: 2025-10-14 08:46:33.247324207 +0000 UTC m=+0.158084915 container attach 45f7792d8c010b0feeffd715fefd1f5527f7d998e84a483ab8c0c43a8df2fa8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_edison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:46:34 np0005486808 brave_edison[264391]: {
Oct 14 04:46:34 np0005486808 brave_edison[264391]:    "0": [
Oct 14 04:46:34 np0005486808 brave_edison[264391]:        {
Oct 14 04:46:34 np0005486808 brave_edison[264391]:            "devices": [
Oct 14 04:46:34 np0005486808 brave_edison[264391]:                "/dev/loop3"
Oct 14 04:46:34 np0005486808 brave_edison[264391]:            ],
Oct 14 04:46:34 np0005486808 brave_edison[264391]:            "lv_name": "ceph_lv0",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:            "lv_size": "21470642176",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:            "name": "ceph_lv0",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:            "tags": {
Oct 14 04:46:34 np0005486808 brave_edison[264391]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:                "ceph.cluster_name": "ceph",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:                "ceph.crush_device_class": "",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:                "ceph.encrypted": "0",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:                "ceph.osd_id": "0",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:                "ceph.type": "block",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:                "ceph.vdo": "0"
Oct 14 04:46:34 np0005486808 brave_edison[264391]:            },
Oct 14 04:46:34 np0005486808 brave_edison[264391]:            "type": "block",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:            "vg_name": "ceph_vg0"
Oct 14 04:46:34 np0005486808 brave_edison[264391]:        }
Oct 14 04:46:34 np0005486808 brave_edison[264391]:    ],
Oct 14 04:46:34 np0005486808 brave_edison[264391]:    "1": [
Oct 14 04:46:34 np0005486808 brave_edison[264391]:        {
Oct 14 04:46:34 np0005486808 brave_edison[264391]:            "devices": [
Oct 14 04:46:34 np0005486808 brave_edison[264391]:                "/dev/loop4"
Oct 14 04:46:34 np0005486808 brave_edison[264391]:            ],
Oct 14 04:46:34 np0005486808 brave_edison[264391]:            "lv_name": "ceph_lv1",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:            "lv_size": "21470642176",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:            "name": "ceph_lv1",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:            "tags": {
Oct 14 04:46:34 np0005486808 brave_edison[264391]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:                "ceph.cluster_name": "ceph",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:                "ceph.crush_device_class": "",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:                "ceph.encrypted": "0",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:                "ceph.osd_id": "1",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:                "ceph.type": "block",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:                "ceph.vdo": "0"
Oct 14 04:46:34 np0005486808 brave_edison[264391]:            },
Oct 14 04:46:34 np0005486808 brave_edison[264391]:            "type": "block",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:            "vg_name": "ceph_vg1"
Oct 14 04:46:34 np0005486808 brave_edison[264391]:        }
Oct 14 04:46:34 np0005486808 brave_edison[264391]:    ],
Oct 14 04:46:34 np0005486808 brave_edison[264391]:    "2": [
Oct 14 04:46:34 np0005486808 brave_edison[264391]:        {
Oct 14 04:46:34 np0005486808 brave_edison[264391]:            "devices": [
Oct 14 04:46:34 np0005486808 brave_edison[264391]:                "/dev/loop5"
Oct 14 04:46:34 np0005486808 brave_edison[264391]:            ],
Oct 14 04:46:34 np0005486808 brave_edison[264391]:            "lv_name": "ceph_lv2",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:            "lv_size": "21470642176",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:            "name": "ceph_lv2",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:            "tags": {
Oct 14 04:46:34 np0005486808 brave_edison[264391]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:                "ceph.cluster_name": "ceph",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:                "ceph.crush_device_class": "",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:                "ceph.encrypted": "0",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:                "ceph.osd_id": "2",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:                "ceph.type": "block",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:                "ceph.vdo": "0"
Oct 14 04:46:34 np0005486808 brave_edison[264391]:            },
Oct 14 04:46:34 np0005486808 brave_edison[264391]:            "type": "block",
Oct 14 04:46:34 np0005486808 brave_edison[264391]:            "vg_name": "ceph_vg2"
Oct 14 04:46:34 np0005486808 brave_edison[264391]:        }
Oct 14 04:46:34 np0005486808 brave_edison[264391]:    ]
Oct 14 04:46:34 np0005486808 brave_edison[264391]: }
Oct 14 04:46:34 np0005486808 systemd[1]: libpod-45f7792d8c010b0feeffd715fefd1f5527f7d998e84a483ab8c0c43a8df2fa8d.scope: Deactivated successfully.
Oct 14 04:46:34 np0005486808 podman[264375]: 2025-10-14 08:46:34.028990378 +0000 UTC m=+0.939751116 container died 45f7792d8c010b0feeffd715fefd1f5527f7d998e84a483ab8c0c43a8df2fa8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:46:34 np0005486808 systemd[1]: var-lib-containers-storage-overlay-197be51e44f3813c9cc59bcbf80cae9da9f62ddbc5a7ae89dae4af4513bf86c2-merged.mount: Deactivated successfully.
Oct 14 04:46:34 np0005486808 podman[264375]: 2025-10-14 08:46:34.090091087 +0000 UTC m=+1.000851785 container remove 45f7792d8c010b0feeffd715fefd1f5527f7d998e84a483ab8c0c43a8df2fa8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_edison, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:46:34 np0005486808 systemd[1]: libpod-conmon-45f7792d8c010b0feeffd715fefd1f5527f7d998e84a483ab8c0c43a8df2fa8d.scope: Deactivated successfully.
Oct 14 04:46:34 np0005486808 podman[264555]: 2025-10-14 08:46:34.817128084 +0000 UTC m=+0.048234533 container create f97f13c395450893870580db5d0d94de9c1f09ffbc5365e25aaaac21f0b1d98c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:46:34 np0005486808 systemd[1]: Started libpod-conmon-f97f13c395450893870580db5d0d94de9c1f09ffbc5365e25aaaac21f0b1d98c.scope.
Oct 14 04:46:34 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:46:34 np0005486808 podman[264555]: 2025-10-14 08:46:34.799278788 +0000 UTC m=+0.030385287 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:46:34 np0005486808 podman[264555]: 2025-10-14 08:46:34.906535298 +0000 UTC m=+0.137641777 container init f97f13c395450893870580db5d0d94de9c1f09ffbc5365e25aaaac21f0b1d98c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Oct 14 04:46:34 np0005486808 podman[264555]: 2025-10-14 08:46:34.917309445 +0000 UTC m=+0.148415914 container start f97f13c395450893870580db5d0d94de9c1f09ffbc5365e25aaaac21f0b1d98c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:46:34 np0005486808 podman[264555]: 2025-10-14 08:46:34.92127666 +0000 UTC m=+0.152383219 container attach f97f13c395450893870580db5d0d94de9c1f09ffbc5365e25aaaac21f0b1d98c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shtern, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 14 04:46:34 np0005486808 reverent_shtern[264571]: 167 167
Oct 14 04:46:34 np0005486808 systemd[1]: libpod-f97f13c395450893870580db5d0d94de9c1f09ffbc5365e25aaaac21f0b1d98c.scope: Deactivated successfully.
Oct 14 04:46:34 np0005486808 conmon[264571]: conmon f97f13c3954508938705 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f97f13c395450893870580db5d0d94de9c1f09ffbc5365e25aaaac21f0b1d98c.scope/container/memory.events
Oct 14 04:46:34 np0005486808 podman[264555]: 2025-10-14 08:46:34.925022339 +0000 UTC m=+0.156128838 container died f97f13c395450893870580db5d0d94de9c1f09ffbc5365e25aaaac21f0b1d98c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shtern, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:46:34 np0005486808 systemd[1]: var-lib-containers-storage-overlay-e2b532e5342eddb1a68b883cdbd73eb328256c2ad406247297f93fff1bda584d-merged.mount: Deactivated successfully.
Oct 14 04:46:34 np0005486808 podman[264555]: 2025-10-14 08:46:34.97574466 +0000 UTC m=+0.206851149 container remove f97f13c395450893870580db5d0d94de9c1f09ffbc5365e25aaaac21f0b1d98c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shtern, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:46:34 np0005486808 systemd[1]: libpod-conmon-f97f13c395450893870580db5d0d94de9c1f09ffbc5365e25aaaac21f0b1d98c.scope: Deactivated successfully.
Oct 14 04:46:35 np0005486808 podman[264595]: 2025-10-14 08:46:35.195175069 +0000 UTC m=+0.065638968 container create ca0e2bdd8e21e98a55e6de1377c79a906cdcb1b2050255bd5826579f86fe65b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hellman, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:46:35 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:46:35 np0005486808 systemd[1]: Started libpod-conmon-ca0e2bdd8e21e98a55e6de1377c79a906cdcb1b2050255bd5826579f86fe65b8.scope.
Oct 14 04:46:35 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:46:35 np0005486808 podman[264595]: 2025-10-14 08:46:35.172528438 +0000 UTC m=+0.042992347 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:46:35 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbb1aa00be1733ecd9e8f3568cc0c45f0fb0f5ce45a11be4a0a30862d2f918f3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:46:35 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbb1aa00be1733ecd9e8f3568cc0c45f0fb0f5ce45a11be4a0a30862d2f918f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:46:35 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbb1aa00be1733ecd9e8f3568cc0c45f0fb0f5ce45a11be4a0a30862d2f918f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:46:35 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbb1aa00be1733ecd9e8f3568cc0c45f0fb0f5ce45a11be4a0a30862d2f918f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:46:35 np0005486808 podman[264595]: 2025-10-14 08:46:35.285959096 +0000 UTC m=+0.156422975 container init ca0e2bdd8e21e98a55e6de1377c79a906cdcb1b2050255bd5826579f86fe65b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hellman, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 14 04:46:35 np0005486808 podman[264595]: 2025-10-14 08:46:35.299587991 +0000 UTC m=+0.170051850 container start ca0e2bdd8e21e98a55e6de1377c79a906cdcb1b2050255bd5826579f86fe65b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hellman, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:46:35 np0005486808 podman[264595]: 2025-10-14 08:46:35.306114017 +0000 UTC m=+0.176577956 container attach ca0e2bdd8e21e98a55e6de1377c79a906cdcb1b2050255bd5826579f86fe65b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:46:36 np0005486808 cranky_hellman[264611]: {
Oct 14 04:46:36 np0005486808 cranky_hellman[264611]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 04:46:36 np0005486808 cranky_hellman[264611]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:46:36 np0005486808 cranky_hellman[264611]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 04:46:36 np0005486808 cranky_hellman[264611]:        "osd_id": 2,
Oct 14 04:46:36 np0005486808 cranky_hellman[264611]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:46:36 np0005486808 cranky_hellman[264611]:        "type": "bluestore"
Oct 14 04:46:36 np0005486808 cranky_hellman[264611]:    },
Oct 14 04:46:36 np0005486808 cranky_hellman[264611]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 04:46:36 np0005486808 cranky_hellman[264611]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:46:36 np0005486808 cranky_hellman[264611]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 04:46:36 np0005486808 cranky_hellman[264611]:        "osd_id": 1,
Oct 14 04:46:36 np0005486808 cranky_hellman[264611]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:46:36 np0005486808 cranky_hellman[264611]:        "type": "bluestore"
Oct 14 04:46:36 np0005486808 cranky_hellman[264611]:    },
Oct 14 04:46:36 np0005486808 cranky_hellman[264611]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 04:46:36 np0005486808 cranky_hellman[264611]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:46:36 np0005486808 cranky_hellman[264611]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 04:46:36 np0005486808 cranky_hellman[264611]:        "osd_id": 0,
Oct 14 04:46:36 np0005486808 cranky_hellman[264611]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:46:36 np0005486808 cranky_hellman[264611]:        "type": "bluestore"
Oct 14 04:46:36 np0005486808 cranky_hellman[264611]:    }
Oct 14 04:46:36 np0005486808 cranky_hellman[264611]: }
Oct 14 04:46:36 np0005486808 systemd[1]: libpod-ca0e2bdd8e21e98a55e6de1377c79a906cdcb1b2050255bd5826579f86fe65b8.scope: Deactivated successfully.
Oct 14 04:46:36 np0005486808 podman[264644]: 2025-10-14 08:46:36.356361278 +0000 UTC m=+0.040360154 container died ca0e2bdd8e21e98a55e6de1377c79a906cdcb1b2050255bd5826579f86fe65b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hellman, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:46:36 np0005486808 systemd[1]: var-lib-containers-storage-overlay-cbb1aa00be1733ecd9e8f3568cc0c45f0fb0f5ce45a11be4a0a30862d2f918f3-merged.mount: Deactivated successfully.
Oct 14 04:46:36 np0005486808 podman[264644]: 2025-10-14 08:46:36.43014856 +0000 UTC m=+0.114147436 container remove ca0e2bdd8e21e98a55e6de1377c79a906cdcb1b2050255bd5826579f86fe65b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:46:36 np0005486808 systemd[1]: libpod-conmon-ca0e2bdd8e21e98a55e6de1377c79a906cdcb1b2050255bd5826579f86fe65b8.scope: Deactivated successfully.
Oct 14 04:46:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:46:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:46:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:46:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:46:36 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev e80eb036-88c0-46ee-93c3-9ab8b11ef54a does not exist
Oct 14 04:46:36 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev ee2ed283-478c-4bd7-80d8-17e74dd85ab0 does not exist
Oct 14 04:46:37 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:46:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:46:37 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:46:37 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:46:39 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:46:40 np0005486808 podman[264710]: 2025-10-14 08:46:40.664066704 +0000 UTC m=+0.070950664 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:46:40 np0005486808 podman[264711]: 2025-10-14 08:46:40.689931442 +0000 UTC m=+0.099037235 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=iscsid, io.buildah.version=1.41.3)
Oct 14 04:46:41 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v824: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:46:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:46:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 04:46:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:46:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 04:46:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:46:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:46:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:46:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:46:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:46:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:46:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:46:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:46:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:46:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 04:46:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:46:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:46:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:46:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 04:46:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:46:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 04:46:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:46:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:46:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:46:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 04:46:43 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:46:45 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v826: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:46:47 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:46:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:46:49 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v828: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:46:51 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:46:51 np0005486808 podman[264751]: 2025-10-14 08:46:51.688825294 +0000 UTC m=+0.087417057 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 04:46:51 np0005486808 podman[264750]: 2025-10-14 08:46:51.696545589 +0000 UTC m=+0.108628484 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 14 04:46:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:46:53 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:46:55 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:46:57 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:46:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:46:59 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:47:01 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:47:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:47:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:47:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:47:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:47:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:47:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:47:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:47:03 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:47:03 np0005486808 nova_compute[259627]: 2025-10-14 08:47:03.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:47:04 np0005486808 nova_compute[259627]: 2025-10-14 08:47:04.373 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:47:04 np0005486808 nova_compute[259627]: 2025-10-14 08:47:04.373 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 04:47:04 np0005486808 nova_compute[259627]: 2025-10-14 08:47:04.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:47:04 np0005486808 nova_compute[259627]: 2025-10-14 08:47:04.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:47:05 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:47:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 04:47:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1445387632' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 04:47:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 04:47:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1445387632' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 04:47:05 np0005486808 nova_compute[259627]: 2025-10-14 08:47:05.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:47:06 np0005486808 nova_compute[259627]: 2025-10-14 08:47:06.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:47:06 np0005486808 nova_compute[259627]: 2025-10-14 08:47:06.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:47:07 np0005486808 nova_compute[259627]: 2025-10-14 08:47:07.004 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:47:07 np0005486808 nova_compute[259627]: 2025-10-14 08:47:07.004 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:47:07 np0005486808 nova_compute[259627]: 2025-10-14 08:47:07.004 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:47:07 np0005486808 nova_compute[259627]: 2025-10-14 08:47:07.005 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 04:47:07 np0005486808 nova_compute[259627]: 2025-10-14 08:47:07.005 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:47:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:47:07.005 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:47:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:47:07.006 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:47:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:47:07.006 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:47:07 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:47:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:47:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:47:07 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1547737020' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:47:07 np0005486808 nova_compute[259627]: 2025-10-14 08:47:07.435 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:47:07 np0005486808 nova_compute[259627]: 2025-10-14 08:47:07.698 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:47:07 np0005486808 nova_compute[259627]: 2025-10-14 08:47:07.700 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5169MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 04:47:07 np0005486808 nova_compute[259627]: 2025-10-14 08:47:07.700 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:47:07 np0005486808 nova_compute[259627]: 2025-10-14 08:47:07.700 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:47:07 np0005486808 nova_compute[259627]: 2025-10-14 08:47:07.788 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 04:47:07 np0005486808 nova_compute[259627]: 2025-10-14 08:47:07.789 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 04:47:07 np0005486808 nova_compute[259627]: 2025-10-14 08:47:07.806 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:47:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:47:08 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1791924134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:47:08 np0005486808 nova_compute[259627]: 2025-10-14 08:47:08.280 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:47:08 np0005486808 nova_compute[259627]: 2025-10-14 08:47:08.289 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:47:08 np0005486808 nova_compute[259627]: 2025-10-14 08:47:08.325 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:47:08 np0005486808 nova_compute[259627]: 2025-10-14 08:47:08.327 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 04:47:08 np0005486808 nova_compute[259627]: 2025-10-14 08:47:08.327 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:47:09 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:47:09 np0005486808 nova_compute[259627]: 2025-10-14 08:47:09.326 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:47:09 np0005486808 nova_compute[259627]: 2025-10-14 08:47:09.327 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 04:47:09 np0005486808 nova_compute[259627]: 2025-10-14 08:47:09.327 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 04:47:09 np0005486808 nova_compute[259627]: 2025-10-14 08:47:09.360 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 14 04:47:09 np0005486808 nova_compute[259627]: 2025-10-14 08:47:09.361 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:47:09 np0005486808 nova_compute[259627]: 2025-10-14 08:47:09.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:47:11 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:47:11 np0005486808 podman[264840]: 2025-10-14 08:47:11.683713433 +0000 UTC m=+0.082857909 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 04:47:11 np0005486808 podman[264839]: 2025-10-14 08:47:11.697903791 +0000 UTC m=+0.099134067 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 04:47:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:47:13 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:47:15 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:47:17 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:47:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:47:19 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:47:21 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:47:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:47:22 np0005486808 podman[264880]: 2025-10-14 08:47:22.654866278 +0000 UTC m=+0.061366976 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 14 04:47:22 np0005486808 podman[264879]: 2025-10-14 08:47:22.715538247 +0000 UTC m=+0.121020020 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 04:47:23 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:47:25 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:47:27 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v847: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:47:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:47:29 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:47:31 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:47:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:47:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_08:47:32
Oct 14 04:47:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 04:47:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 04:47:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', '.mgr', 'vms', 'cephfs.cephfs.meta', 'images', 'backups', 'default.rgw.control', 'default.rgw.meta']
Oct 14 04:47:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 04:47:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:47:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:47:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:47:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:47:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:47:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:47:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 04:47:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:47:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 04:47:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:47:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:47:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:47:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:47:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:47:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:47:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:47:33 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:47:35 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:47:37 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v852: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:47:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:47:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:47:37 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:47:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 04:47:37 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:47:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 04:47:37 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:47:37 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 826a17f6-3335-4cc9-beb7-4a3649153c78 does not exist
Oct 14 04:47:37 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev e3a0fc48-6b56-4136-ba17-321a022918e6 does not exist
Oct 14 04:47:37 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev d8f2e26b-5ea5-4d9c-840e-9a77a85f8f2f does not exist
Oct 14 04:47:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 04:47:37 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 04:47:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 04:47:37 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:47:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:47:37 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:47:38 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:47:38 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:47:38 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:47:38 np0005486808 podman[265196]: 2025-10-14 08:47:38.588480685 +0000 UTC m=+0.074110720 container create a618a726d9d4a7dac50694bc65cf0c096ab8cff519e9141e346a0cb2e408b85c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:47:38 np0005486808 systemd[1]: Started libpod-conmon-a618a726d9d4a7dac50694bc65cf0c096ab8cff519e9141e346a0cb2e408b85c.scope.
Oct 14 04:47:38 np0005486808 podman[265196]: 2025-10-14 08:47:38.558429238 +0000 UTC m=+0.044059293 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:47:38 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:47:38 np0005486808 podman[265196]: 2025-10-14 08:47:38.675667777 +0000 UTC m=+0.161297832 container init a618a726d9d4a7dac50694bc65cf0c096ab8cff519e9141e346a0cb2e408b85c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 04:47:38 np0005486808 podman[265196]: 2025-10-14 08:47:38.68754629 +0000 UTC m=+0.173176295 container start a618a726d9d4a7dac50694bc65cf0c096ab8cff519e9141e346a0cb2e408b85c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_goldstine, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Oct 14 04:47:38 np0005486808 podman[265196]: 2025-10-14 08:47:38.691516705 +0000 UTC m=+0.177146770 container attach a618a726d9d4a7dac50694bc65cf0c096ab8cff519e9141e346a0cb2e408b85c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_goldstine, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 14 04:47:38 np0005486808 gallant_goldstine[265213]: 167 167
Oct 14 04:47:38 np0005486808 systemd[1]: libpod-a618a726d9d4a7dac50694bc65cf0c096ab8cff519e9141e346a0cb2e408b85c.scope: Deactivated successfully.
Oct 14 04:47:38 np0005486808 podman[265196]: 2025-10-14 08:47:38.694525707 +0000 UTC m=+0.180155752 container died a618a726d9d4a7dac50694bc65cf0c096ab8cff519e9141e346a0cb2e408b85c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 14 04:47:38 np0005486808 systemd[1]: var-lib-containers-storage-overlay-20a9a26e45ba43e738d5dab128bfb9bcc9b203cf32dfe7f33aa9d2c1e290d928-merged.mount: Deactivated successfully.
Oct 14 04:47:38 np0005486808 podman[265196]: 2025-10-14 08:47:38.743188769 +0000 UTC m=+0.228818794 container remove a618a726d9d4a7dac50694bc65cf0c096ab8cff519e9141e346a0cb2e408b85c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:47:38 np0005486808 systemd[1]: libpod-conmon-a618a726d9d4a7dac50694bc65cf0c096ab8cff519e9141e346a0cb2e408b85c.scope: Deactivated successfully.
Oct 14 04:47:38 np0005486808 podman[265238]: 2025-10-14 08:47:38.974431759 +0000 UTC m=+0.054517932 container create 49a482c3239bb2ffdba727c9333395302e26b5b6cda2929b9dd2d233b3b372d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hermann, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 14 04:47:39 np0005486808 systemd[1]: Started libpod-conmon-49a482c3239bb2ffdba727c9333395302e26b5b6cda2929b9dd2d233b3b372d4.scope.
Oct 14 04:47:39 np0005486808 podman[265238]: 2025-10-14 08:47:38.958216332 +0000 UTC m=+0.038302515 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:47:39 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:47:39 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/441d498ae20c65a1e73ef0cfa01e220e6b63bb7e8afba3a6eed0ede57c44c3c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:47:39 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/441d498ae20c65a1e73ef0cfa01e220e6b63bb7e8afba3a6eed0ede57c44c3c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:47:39 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/441d498ae20c65a1e73ef0cfa01e220e6b63bb7e8afba3a6eed0ede57c44c3c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:47:39 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/441d498ae20c65a1e73ef0cfa01e220e6b63bb7e8afba3a6eed0ede57c44c3c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:47:39 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/441d498ae20c65a1e73ef0cfa01e220e6b63bb7e8afba3a6eed0ede57c44c3c0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:47:39 np0005486808 podman[265238]: 2025-10-14 08:47:39.082064749 +0000 UTC m=+0.162150932 container init 49a482c3239bb2ffdba727c9333395302e26b5b6cda2929b9dd2d233b3b372d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hermann, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 14 04:47:39 np0005486808 podman[265238]: 2025-10-14 08:47:39.097868566 +0000 UTC m=+0.177954769 container start 49a482c3239bb2ffdba727c9333395302e26b5b6cda2929b9dd2d233b3b372d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hermann, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Oct 14 04:47:39 np0005486808 podman[265238]: 2025-10-14 08:47:39.102704032 +0000 UTC m=+0.182790215 container attach 49a482c3239bb2ffdba727c9333395302e26b5b6cda2929b9dd2d233b3b372d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hermann, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:47:39 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v853: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:47:40 np0005486808 unruffled_hermann[265254]: --> passed data devices: 0 physical, 3 LVM
Oct 14 04:47:40 np0005486808 unruffled_hermann[265254]: --> relative data size: 1.0
Oct 14 04:47:40 np0005486808 unruffled_hermann[265254]: --> All data devices are unavailable
Oct 14 04:47:40 np0005486808 systemd[1]: libpod-49a482c3239bb2ffdba727c9333395302e26b5b6cda2929b9dd2d233b3b372d4.scope: Deactivated successfully.
Oct 14 04:47:40 np0005486808 podman[265283]: 2025-10-14 08:47:40.205378964 +0000 UTC m=+0.042698710 container died 49a482c3239bb2ffdba727c9333395302e26b5b6cda2929b9dd2d233b3b372d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:47:40 np0005486808 systemd[1]: var-lib-containers-storage-overlay-441d498ae20c65a1e73ef0cfa01e220e6b63bb7e8afba3a6eed0ede57c44c3c0-merged.mount: Deactivated successfully.
Oct 14 04:47:40 np0005486808 podman[265283]: 2025-10-14 08:47:40.273662105 +0000 UTC m=+0.110981801 container remove 49a482c3239bb2ffdba727c9333395302e26b5b6cda2929b9dd2d233b3b372d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hermann, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:47:40 np0005486808 systemd[1]: libpod-conmon-49a482c3239bb2ffdba727c9333395302e26b5b6cda2929b9dd2d233b3b372d4.scope: Deactivated successfully.
Oct 14 04:47:41 np0005486808 podman[265438]: 2025-10-14 08:47:41.013296292 +0000 UTC m=+0.045995789 container create b049473b23ae376bcafe8ef8f91c0fe7fc59ebda7a05bb85b386f0ff4ceec948 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mcnulty, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:47:41 np0005486808 systemd[1]: Started libpod-conmon-b049473b23ae376bcafe8ef8f91c0fe7fc59ebda7a05bb85b386f0ff4ceec948.scope.
Oct 14 04:47:41 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:47:41 np0005486808 podman[265438]: 2025-10-14 08:47:40.998734894 +0000 UTC m=+0.031434411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:47:41 np0005486808 podman[265438]: 2025-10-14 08:47:41.103185658 +0000 UTC m=+0.135885165 container init b049473b23ae376bcafe8ef8f91c0fe7fc59ebda7a05bb85b386f0ff4ceec948 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mcnulty, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 14 04:47:41 np0005486808 podman[265438]: 2025-10-14 08:47:41.113828232 +0000 UTC m=+0.146527769 container start b049473b23ae376bcafe8ef8f91c0fe7fc59ebda7a05bb85b386f0ff4ceec948 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mcnulty, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:47:41 np0005486808 podman[265438]: 2025-10-14 08:47:41.118068693 +0000 UTC m=+0.150768220 container attach b049473b23ae376bcafe8ef8f91c0fe7fc59ebda7a05bb85b386f0ff4ceec948 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mcnulty, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:47:41 np0005486808 upbeat_mcnulty[265454]: 167 167
Oct 14 04:47:41 np0005486808 systemd[1]: libpod-b049473b23ae376bcafe8ef8f91c0fe7fc59ebda7a05bb85b386f0ff4ceec948.scope: Deactivated successfully.
Oct 14 04:47:41 np0005486808 podman[265438]: 2025-10-14 08:47:41.120822689 +0000 UTC m=+0.153522216 container died b049473b23ae376bcafe8ef8f91c0fe7fc59ebda7a05bb85b386f0ff4ceec948 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mcnulty, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 14 04:47:41 np0005486808 systemd[1]: var-lib-containers-storage-overlay-4bd6efb88f84901b33b0eb661af9d1df1e1d16fb77f118bdb48877f22ae24037-merged.mount: Deactivated successfully.
Oct 14 04:47:41 np0005486808 podman[265438]: 2025-10-14 08:47:41.161745096 +0000 UTC m=+0.194444623 container remove b049473b23ae376bcafe8ef8f91c0fe7fc59ebda7a05bb85b386f0ff4ceec948 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:47:41 np0005486808 systemd[1]: libpod-conmon-b049473b23ae376bcafe8ef8f91c0fe7fc59ebda7a05bb85b386f0ff4ceec948.scope: Deactivated successfully.
Oct 14 04:47:41 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:47:41 np0005486808 podman[265478]: 2025-10-14 08:47:41.371719898 +0000 UTC m=+0.057400791 container create b1d5f0ecb44ee10ad5f7a8c8a5fa5fab99974ec4fad3db011ff824e0cf407a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_zhukovsky, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:47:41 np0005486808 systemd[1]: Started libpod-conmon-b1d5f0ecb44ee10ad5f7a8c8a5fa5fab99974ec4fad3db011ff824e0cf407a0e.scope.
Oct 14 04:47:41 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:47:41 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8384e8a65befb48294a33480ea75dc75b765649badafa12aadf50a974cd99aa2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:47:41 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8384e8a65befb48294a33480ea75dc75b765649badafa12aadf50a974cd99aa2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:47:41 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8384e8a65befb48294a33480ea75dc75b765649badafa12aadf50a974cd99aa2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:47:41 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8384e8a65befb48294a33480ea75dc75b765649badafa12aadf50a974cd99aa2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:47:41 np0005486808 podman[265478]: 2025-10-14 08:47:41.353688898 +0000 UTC m=+0.039369811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:47:41 np0005486808 podman[265478]: 2025-10-14 08:47:41.455824306 +0000 UTC m=+0.141505189 container init b1d5f0ecb44ee10ad5f7a8c8a5fa5fab99974ec4fad3db011ff824e0cf407a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 14 04:47:41 np0005486808 podman[265478]: 2025-10-14 08:47:41.462397703 +0000 UTC m=+0.148078586 container start b1d5f0ecb44ee10ad5f7a8c8a5fa5fab99974ec4fad3db011ff824e0cf407a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_zhukovsky, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 14 04:47:41 np0005486808 podman[265478]: 2025-10-14 08:47:41.466114602 +0000 UTC m=+0.151795485 container attach b1d5f0ecb44ee10ad5f7a8c8a5fa5fab99974ec4fad3db011ff824e0cf407a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_zhukovsky, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]: {
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:    "0": [
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:        {
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:            "devices": [
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:                "/dev/loop3"
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:            ],
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:            "lv_name": "ceph_lv0",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:            "lv_size": "21470642176",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:            "name": "ceph_lv0",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:            "tags": {
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:                "ceph.cluster_name": "ceph",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:                "ceph.crush_device_class": "",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:                "ceph.encrypted": "0",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:                "ceph.osd_id": "0",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:                "ceph.type": "block",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:                "ceph.vdo": "0"
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:            },
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:            "type": "block",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:            "vg_name": "ceph_vg0"
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:        }
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:    ],
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:    "1": [
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:        {
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:            "devices": [
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:                "/dev/loop4"
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:            ],
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:            "lv_name": "ceph_lv1",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:            "lv_size": "21470642176",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:            "name": "ceph_lv1",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:            "tags": {
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:                "ceph.cluster_name": "ceph",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:                "ceph.crush_device_class": "",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:                "ceph.encrypted": "0",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:                "ceph.osd_id": "1",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:                "ceph.type": "block",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:                "ceph.vdo": "0"
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:            },
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:            "type": "block",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:            "vg_name": "ceph_vg1"
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:        }
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:    ],
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:    "2": [
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:        {
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:            "devices": [
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:                "/dev/loop5"
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:            ],
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:            "lv_name": "ceph_lv2",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:            "lv_size": "21470642176",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:            "name": "ceph_lv2",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:            "tags": {
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:                "ceph.cluster_name": "ceph",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:                "ceph.crush_device_class": "",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:                "ceph.encrypted": "0",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:                "ceph.osd_id": "2",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:                "ceph.type": "block",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:                "ceph.vdo": "0"
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:            },
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:            "type": "block",
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:            "vg_name": "ceph_vg2"
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:        }
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]:    ]
Oct 14 04:47:42 np0005486808 busy_zhukovsky[265494]: }
Oct 14 04:47:42 np0005486808 systemd[1]: libpod-b1d5f0ecb44ee10ad5f7a8c8a5fa5fab99974ec4fad3db011ff824e0cf407a0e.scope: Deactivated successfully.
Oct 14 04:47:42 np0005486808 podman[265504]: 2025-10-14 08:47:42.297253533 +0000 UTC m=+0.042778582 container died b1d5f0ecb44ee10ad5f7a8c8a5fa5fab99974ec4fad3db011ff824e0cf407a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_zhukovsky, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 14 04:47:42 np0005486808 systemd[1]: var-lib-containers-storage-overlay-8384e8a65befb48294a33480ea75dc75b765649badafa12aadf50a974cd99aa2-merged.mount: Deactivated successfully.
Oct 14 04:47:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:47:42 np0005486808 podman[265504]: 2025-10-14 08:47:42.354628503 +0000 UTC m=+0.100153502 container remove b1d5f0ecb44ee10ad5f7a8c8a5fa5fab99974ec4fad3db011ff824e0cf407a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:47:42 np0005486808 podman[265506]: 2025-10-14 08:47:42.355042483 +0000 UTC m=+0.091294921 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 04:47:42 np0005486808 podman[265503]: 2025-10-14 08:47:42.357129273 +0000 UTC m=+0.094458196 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 14 04:47:42 np0005486808 systemd[1]: libpod-conmon-b1d5f0ecb44ee10ad5f7a8c8a5fa5fab99974ec4fad3db011ff824e0cf407a0e.scope: Deactivated successfully.
Oct 14 04:47:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 04:47:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:47:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 04:47:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:47:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:47:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:47:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:47:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:47:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:47:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:47:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:47:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:47:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 04:47:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:47:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:47:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:47:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 04:47:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:47:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 04:47:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:47:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:47:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:47:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 04:47:43 np0005486808 podman[265694]: 2025-10-14 08:47:43.030326874 +0000 UTC m=+0.050803844 container create 2fe41f648522fe977f9855b7af57a5f2237cd4f838108fc5d7faebf8db2b140f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_poitras, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:47:43 np0005486808 systemd[1]: Started libpod-conmon-2fe41f648522fe977f9855b7af57a5f2237cd4f838108fc5d7faebf8db2b140f.scope.
Oct 14 04:47:43 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:47:43 np0005486808 podman[265694]: 2025-10-14 08:47:43.006608228 +0000 UTC m=+0.027085238 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:47:43 np0005486808 podman[265694]: 2025-10-14 08:47:43.110844706 +0000 UTC m=+0.131321646 container init 2fe41f648522fe977f9855b7af57a5f2237cd4f838108fc5d7faebf8db2b140f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 14 04:47:43 np0005486808 podman[265694]: 2025-10-14 08:47:43.119152594 +0000 UTC m=+0.139629554 container start 2fe41f648522fe977f9855b7af57a5f2237cd4f838108fc5d7faebf8db2b140f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_poitras, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:47:43 np0005486808 podman[265694]: 2025-10-14 08:47:43.123464427 +0000 UTC m=+0.143941357 container attach 2fe41f648522fe977f9855b7af57a5f2237cd4f838108fc5d7faebf8db2b140f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:47:43 np0005486808 gracious_poitras[265711]: 167 167
Oct 14 04:47:43 np0005486808 systemd[1]: libpod-2fe41f648522fe977f9855b7af57a5f2237cd4f838108fc5d7faebf8db2b140f.scope: Deactivated successfully.
Oct 14 04:47:43 np0005486808 podman[265694]: 2025-10-14 08:47:43.126474149 +0000 UTC m=+0.146951119 container died 2fe41f648522fe977f9855b7af57a5f2237cd4f838108fc5d7faebf8db2b140f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_poitras, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 14 04:47:43 np0005486808 systemd[1]: var-lib-containers-storage-overlay-61f354c01a929dffee8bee62779eb9f557d0316d3d120bcd1f9f8f84dd0278da-merged.mount: Deactivated successfully.
Oct 14 04:47:43 np0005486808 podman[265694]: 2025-10-14 08:47:43.172801495 +0000 UTC m=+0.193278465 container remove 2fe41f648522fe977f9855b7af57a5f2237cd4f838108fc5d7faebf8db2b140f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_poitras, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 14 04:47:43 np0005486808 systemd[1]: libpod-conmon-2fe41f648522fe977f9855b7af57a5f2237cd4f838108fc5d7faebf8db2b140f.scope: Deactivated successfully.
Oct 14 04:47:43 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:47:43 np0005486808 podman[265738]: 2025-10-14 08:47:43.371766424 +0000 UTC m=+0.050099346 container create 8f7536a4f8f130bee175444ecf1250804cda6c64e26eb28dc5abf6db8f9daa7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_antonelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 14 04:47:43 np0005486808 systemd[1]: Started libpod-conmon-8f7536a4f8f130bee175444ecf1250804cda6c64e26eb28dc5abf6db8f9daa7c.scope.
Oct 14 04:47:43 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:47:43 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a44518a463b8702b43c3f9eca3e443e551abf52f424fd0ece177ce1ffee1e2f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:47:43 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a44518a463b8702b43c3f9eca3e443e551abf52f424fd0ece177ce1ffee1e2f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:47:43 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a44518a463b8702b43c3f9eca3e443e551abf52f424fd0ece177ce1ffee1e2f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:47:43 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a44518a463b8702b43c3f9eca3e443e551abf52f424fd0ece177ce1ffee1e2f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:47:43 np0005486808 podman[265738]: 2025-10-14 08:47:43.355616128 +0000 UTC m=+0.033949080 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:47:43 np0005486808 podman[265738]: 2025-10-14 08:47:43.460564944 +0000 UTC m=+0.138897946 container init 8f7536a4f8f130bee175444ecf1250804cda6c64e26eb28dc5abf6db8f9daa7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_antonelli, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:47:43 np0005486808 podman[265738]: 2025-10-14 08:47:43.466846174 +0000 UTC m=+0.145179136 container start 8f7536a4f8f130bee175444ecf1250804cda6c64e26eb28dc5abf6db8f9daa7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_antonelli, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 04:47:43 np0005486808 podman[265738]: 2025-10-14 08:47:43.471520545 +0000 UTC m=+0.149853497 container attach 8f7536a4f8f130bee175444ecf1250804cda6c64e26eb28dc5abf6db8f9daa7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_antonelli, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 14 04:47:44 np0005486808 quizzical_antonelli[265755]: {
Oct 14 04:47:44 np0005486808 quizzical_antonelli[265755]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 04:47:44 np0005486808 quizzical_antonelli[265755]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:47:44 np0005486808 quizzical_antonelli[265755]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 04:47:44 np0005486808 quizzical_antonelli[265755]:        "osd_id": 2,
Oct 14 04:47:44 np0005486808 quizzical_antonelli[265755]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:47:44 np0005486808 quizzical_antonelli[265755]:        "type": "bluestore"
Oct 14 04:47:44 np0005486808 quizzical_antonelli[265755]:    },
Oct 14 04:47:44 np0005486808 quizzical_antonelli[265755]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 04:47:44 np0005486808 quizzical_antonelli[265755]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:47:44 np0005486808 quizzical_antonelli[265755]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 04:47:44 np0005486808 quizzical_antonelli[265755]:        "osd_id": 1,
Oct 14 04:47:44 np0005486808 quizzical_antonelli[265755]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:47:44 np0005486808 quizzical_antonelli[265755]:        "type": "bluestore"
Oct 14 04:47:44 np0005486808 quizzical_antonelli[265755]:    },
Oct 14 04:47:44 np0005486808 quizzical_antonelli[265755]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 04:47:44 np0005486808 quizzical_antonelli[265755]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:47:44 np0005486808 quizzical_antonelli[265755]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 04:47:44 np0005486808 quizzical_antonelli[265755]:        "osd_id": 0,
Oct 14 04:47:44 np0005486808 quizzical_antonelli[265755]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:47:44 np0005486808 quizzical_antonelli[265755]:        "type": "bluestore"
Oct 14 04:47:44 np0005486808 quizzical_antonelli[265755]:    }
Oct 14 04:47:44 np0005486808 quizzical_antonelli[265755]: }
Oct 14 04:47:44 np0005486808 systemd[1]: libpod-8f7536a4f8f130bee175444ecf1250804cda6c64e26eb28dc5abf6db8f9daa7c.scope: Deactivated successfully.
Oct 14 04:47:44 np0005486808 podman[265738]: 2025-10-14 08:47:44.575507401 +0000 UTC m=+1.253840353 container died 8f7536a4f8f130bee175444ecf1250804cda6c64e26eb28dc5abf6db8f9daa7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 14 04:47:44 np0005486808 systemd[1]: libpod-8f7536a4f8f130bee175444ecf1250804cda6c64e26eb28dc5abf6db8f9daa7c.scope: Consumed 1.110s CPU time.
Oct 14 04:47:44 np0005486808 systemd[1]: var-lib-containers-storage-overlay-8a44518a463b8702b43c3f9eca3e443e551abf52f424fd0ece177ce1ffee1e2f-merged.mount: Deactivated successfully.
Oct 14 04:47:44 np0005486808 podman[265738]: 2025-10-14 08:47:44.633778052 +0000 UTC m=+1.312111014 container remove 8f7536a4f8f130bee175444ecf1250804cda6c64e26eb28dc5abf6db8f9daa7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:47:44 np0005486808 systemd[1]: libpod-conmon-8f7536a4f8f130bee175444ecf1250804cda6c64e26eb28dc5abf6db8f9daa7c.scope: Deactivated successfully.
Oct 14 04:47:44 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:47:44 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:47:44 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:47:44 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:47:44 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 03862a9e-cfb6-44c6-9b59-d914b15f2095 does not exist
Oct 14 04:47:44 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 0ff4fcf2-bc10-4670-9fd8-c7b51c8223f9 does not exist
Oct 14 04:47:45 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:47:45 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:47:45 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:47:47 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v857: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:47:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:47:49 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:47:51 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v859: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:47:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:47:53 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v860: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:47:53 np0005486808 podman[265851]: 2025-10-14 08:47:53.656954188 +0000 UTC m=+0.068077227 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 14 04:47:53 np0005486808 podman[265850]: 2025-10-14 08:47:53.699225457 +0000 UTC m=+0.112222630 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:47:55 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:47:57 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:47:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:47:59 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v863: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:47:59 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Oct 14 04:47:59 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:47:59.330965) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 04:47:59 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Oct 14 04:47:59 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760431679331074, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2053, "num_deletes": 251, "total_data_size": 3452780, "memory_usage": 3516496, "flush_reason": "Manual Compaction"}
Oct 14 04:47:59 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Oct 14 04:47:59 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760431679355770, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 3377141, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16237, "largest_seqno": 18289, "table_properties": {"data_size": 3367849, "index_size": 5850, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18466, "raw_average_key_size": 19, "raw_value_size": 3349352, "raw_average_value_size": 3597, "num_data_blocks": 265, "num_entries": 931, "num_filter_entries": 931, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760431452, "oldest_key_time": 1760431452, "file_creation_time": 1760431679, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Oct 14 04:47:59 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 24876 microseconds, and 13097 cpu microseconds.
Oct 14 04:47:59 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 04:47:59 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:47:59.355844) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 3377141 bytes OK
Oct 14 04:47:59 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:47:59.355873) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Oct 14 04:47:59 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:47:59.358573) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Oct 14 04:47:59 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:47:59.358599) EVENT_LOG_v1 {"time_micros": 1760431679358590, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 04:47:59 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:47:59.358622) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 04:47:59 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3444182, prev total WAL file size 3444818, number of live WAL files 2.
Oct 14 04:47:59 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 04:47:59 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:47:59.363242) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Oct 14 04:47:59 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 04:47:59 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(3297KB)], [38(7571KB)]
Oct 14 04:47:59 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760431679363312, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 11130856, "oldest_snapshot_seqno": -1}
Oct 14 04:47:59 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4370 keys, 9342047 bytes, temperature: kUnknown
Oct 14 04:47:59 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760431679410714, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 9342047, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9309163, "index_size": 20870, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10949, "raw_key_size": 105599, "raw_average_key_size": 24, "raw_value_size": 9226559, "raw_average_value_size": 2111, "num_data_blocks": 889, "num_entries": 4370, "num_filter_entries": 4370, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760431679, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Oct 14 04:47:59 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 04:47:59 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:47:59.411208) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 9342047 bytes
Oct 14 04:47:59 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:47:59.412911) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 234.0 rd, 196.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.4 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 4884, records dropped: 514 output_compression: NoCompression
Oct 14 04:47:59 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:47:59.412950) EVENT_LOG_v1 {"time_micros": 1760431679412932, "job": 18, "event": "compaction_finished", "compaction_time_micros": 47562, "compaction_time_cpu_micros": 26056, "output_level": 6, "num_output_files": 1, "total_output_size": 9342047, "num_input_records": 4884, "num_output_records": 4370, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 04:47:59 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 04:47:59 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760431679414346, "job": 18, "event": "table_file_deletion", "file_number": 40}
Oct 14 04:47:59 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 04:47:59 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760431679417606, "job": 18, "event": "table_file_deletion", "file_number": 38}
Oct 14 04:47:59 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:47:59.363098) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:47:59 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:47:59.417733) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:47:59 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:47:59.417741) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:47:59 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:47:59.417744) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:47:59 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:47:59.417747) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:47:59 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:47:59.417750) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:48:01 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:48:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:48:02 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Oct 14 04:48:02 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:48:02.355069) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 04:48:02 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Oct 14 04:48:02 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760431682355107, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 276, "num_deletes": 250, "total_data_size": 42699, "memory_usage": 47696, "flush_reason": "Manual Compaction"}
Oct 14 04:48:02 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Oct 14 04:48:02 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760431682360219, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 42224, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18290, "largest_seqno": 18565, "table_properties": {"data_size": 40342, "index_size": 111, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5116, "raw_average_key_size": 19, "raw_value_size": 36690, "raw_average_value_size": 137, "num_data_blocks": 5, "num_entries": 266, "num_filter_entries": 266, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760431679, "oldest_key_time": 1760431679, "file_creation_time": 1760431682, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Oct 14 04:48:02 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 5177 microseconds, and 629 cpu microseconds.
Oct 14 04:48:02 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 04:48:02 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:48:02.360248) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 42224 bytes OK
Oct 14 04:48:02 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:48:02.360261) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Oct 14 04:48:02 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:48:02.362611) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Oct 14 04:48:02 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:48:02.362661) EVENT_LOG_v1 {"time_micros": 1760431682362650, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 04:48:02 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:48:02.362686) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 04:48:02 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 40613, prev total WAL file size 40613, number of live WAL files 2.
Oct 14 04:48:02 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 04:48:02 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:48:02.363273) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353030' seq:72057594037927935, type:22 .. '6D67727374617400373531' seq:0, type:0; will stop at (end)
Oct 14 04:48:02 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 04:48:02 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(41KB)], [41(9123KB)]
Oct 14 04:48:02 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760431682363372, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 9384271, "oldest_snapshot_seqno": -1}
Oct 14 04:48:02 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4129 keys, 6093053 bytes, temperature: kUnknown
Oct 14 04:48:02 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760431682425657, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 6093053, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6066431, "index_size": 15185, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10373, "raw_key_size": 101030, "raw_average_key_size": 24, "raw_value_size": 5992629, "raw_average_value_size": 1451, "num_data_blocks": 642, "num_entries": 4129, "num_filter_entries": 4129, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760431682, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Oct 14 04:48:02 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 04:48:02 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:48:02.425953) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 6093053 bytes
Oct 14 04:48:02 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:48:02.427367) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 150.5 rd, 97.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 8.9 +0.0 blob) out(5.8 +0.0 blob), read-write-amplify(366.6) write-amplify(144.3) OK, records in: 4636, records dropped: 507 output_compression: NoCompression
Oct 14 04:48:02 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:48:02.427405) EVENT_LOG_v1 {"time_micros": 1760431682427388, "job": 20, "event": "compaction_finished", "compaction_time_micros": 62362, "compaction_time_cpu_micros": 38162, "output_level": 6, "num_output_files": 1, "total_output_size": 6093053, "num_input_records": 4636, "num_output_records": 4129, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 04:48:02 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 04:48:02 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760431682427606, "job": 20, "event": "table_file_deletion", "file_number": 43}
Oct 14 04:48:02 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 04:48:02 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760431682430951, "job": 20, "event": "table_file_deletion", "file_number": 41}
Oct 14 04:48:02 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:48:02.363111) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:48:02 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:48:02.431123) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:48:02 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:48:02.431133) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:48:02 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:48:02.431138) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:48:02 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:48:02.431142) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:48:02 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:48:02.431145) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:48:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:48:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:48:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:48:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:48:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:48:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:48:03 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v865: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:48:03 np0005486808 nova_compute[259627]: 2025-10-14 08:48:03.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:48:03 np0005486808 nova_compute[259627]: 2025-10-14 08:48:03.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 14 04:48:04 np0005486808 nova_compute[259627]: 2025-10-14 08:48:03.998 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 14 04:48:04 np0005486808 nova_compute[259627]: 2025-10-14 08:48:04.000 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:48:04 np0005486808 nova_compute[259627]: 2025-10-14 08:48:04.000 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 14 04:48:04 np0005486808 nova_compute[259627]: 2025-10-14 08:48:04.014 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:48:05 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v866: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:48:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 04:48:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1337471430' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 04:48:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 04:48:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1337471430' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 04:48:06 np0005486808 nova_compute[259627]: 2025-10-14 08:48:06.022 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:48:06 np0005486808 nova_compute[259627]: 2025-10-14 08:48:06.022 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:48:06 np0005486808 nova_compute[259627]: 2025-10-14 08:48:06.023 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:48:06 np0005486808 nova_compute[259627]: 2025-10-14 08:48:06.023 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 04:48:06 np0005486808 nova_compute[259627]: 2025-10-14 08:48:06.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:48:06 np0005486808 nova_compute[259627]: 2025-10-14 08:48:06.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:48:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:48:07.006 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:48:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:48:07.007 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:48:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:48:07.007 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:48:07 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v867: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:48:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:48:07 np0005486808 nova_compute[259627]: 2025-10-14 08:48:07.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:48:08 np0005486808 nova_compute[259627]: 2025-10-14 08:48:08.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:48:09 np0005486808 nova_compute[259627]: 2025-10-14 08:48:09.017 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:48:09 np0005486808 nova_compute[259627]: 2025-10-14 08:48:09.017 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:48:09 np0005486808 nova_compute[259627]: 2025-10-14 08:48:09.017 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:48:09 np0005486808 nova_compute[259627]: 2025-10-14 08:48:09.018 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 04:48:09 np0005486808 nova_compute[259627]: 2025-10-14 08:48:09.018 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:48:09 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v868: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:48:09 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:48:09 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3978792493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:48:09 np0005486808 nova_compute[259627]: 2025-10-14 08:48:09.482 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:48:09 np0005486808 nova_compute[259627]: 2025-10-14 08:48:09.605 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:48:09 np0005486808 nova_compute[259627]: 2025-10-14 08:48:09.605 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5151MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 04:48:09 np0005486808 nova_compute[259627]: 2025-10-14 08:48:09.606 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:48:09 np0005486808 nova_compute[259627]: 2025-10-14 08:48:09.606 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:48:09 np0005486808 nova_compute[259627]: 2025-10-14 08:48:09.906 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 04:48:09 np0005486808 nova_compute[259627]: 2025-10-14 08:48:09.906 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 04:48:10 np0005486808 nova_compute[259627]: 2025-10-14 08:48:10.013 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing inventories for resource provider 92105e1d-1743-46e3-a494-858b4331398a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 14 04:48:10 np0005486808 nova_compute[259627]: 2025-10-14 08:48:10.114 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Updating ProviderTree inventory for provider 92105e1d-1743-46e3-a494-858b4331398a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 14 04:48:10 np0005486808 nova_compute[259627]: 2025-10-14 08:48:10.115 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Updating inventory in ProviderTree for provider 92105e1d-1743-46e3-a494-858b4331398a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 14 04:48:10 np0005486808 nova_compute[259627]: 2025-10-14 08:48:10.143 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing aggregate associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 14 04:48:10 np0005486808 nova_compute[259627]: 2025-10-14 08:48:10.176 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing trait associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, traits: COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_ACCELERATORS,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 14 04:48:10 np0005486808 nova_compute[259627]: 2025-10-14 08:48:10.198 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:48:10 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:48:10 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1108605781' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:48:10 np0005486808 nova_compute[259627]: 2025-10-14 08:48:10.651 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:48:10 np0005486808 nova_compute[259627]: 2025-10-14 08:48:10.656 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:48:10 np0005486808 nova_compute[259627]: 2025-10-14 08:48:10.679 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:48:10 np0005486808 nova_compute[259627]: 2025-10-14 08:48:10.682 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 04:48:10 np0005486808 nova_compute[259627]: 2025-10-14 08:48:10.682 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.076s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:48:11 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v869: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:48:11 np0005486808 nova_compute[259627]: 2025-10-14 08:48:11.684 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:48:11 np0005486808 nova_compute[259627]: 2025-10-14 08:48:11.685 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 04:48:11 np0005486808 nova_compute[259627]: 2025-10-14 08:48:11.685 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 04:48:11 np0005486808 nova_compute[259627]: 2025-10-14 08:48:11.750 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 14 04:48:11 np0005486808 nova_compute[259627]: 2025-10-14 08:48:11.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:48:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:48:12 np0005486808 podman[265935]: 2025-10-14 08:48:12.66073788 +0000 UTC m=+0.070222405 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Oct 14 04:48:12 np0005486808 podman[265934]: 2025-10-14 08:48:12.660807042 +0000 UTC m=+0.075142796 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 04:48:13 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:48:15 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v871: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:48:17 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:48:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:48:19 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v873: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:48:21 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v874: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:48:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:48:23 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v875: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:48:24 np0005486808 podman[265970]: 2025-10-14 08:48:24.710368774 +0000 UTC m=+0.114124582 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 14 04:48:24 np0005486808 podman[265969]: 2025-10-14 08:48:24.728486418 +0000 UTC m=+0.135889876 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true)
Oct 14 04:48:25 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v876: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:48:27 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v877: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:48:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:48:29 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v878: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:48:31 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v879: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:48:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:48:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_08:48:32
Oct 14 04:48:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 04:48:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 04:48:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'volumes', 'cephfs.cephfs.meta', '.mgr', 'images', '.rgw.root', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control']
Oct 14 04:48:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 04:48:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:48:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:48:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:48:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:48:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:48:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:48:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 04:48:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 04:48:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:48:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:48:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:48:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:48:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:48:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:48:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:48:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:48:33 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v880: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:48:35 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v881: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:48:37 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v882: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:48:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:48:39 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v883: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:48:41 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v884: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:48:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:48:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 04:48:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:48:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 04:48:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:48:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:48:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:48:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:48:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:48:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:48:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:48:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:48:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:48:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 04:48:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:48:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:48:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:48:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 04:48:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:48:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 04:48:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:48:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:48:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:48:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 04:48:43 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v885: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:48:43 np0005486808 podman[266011]: 2025-10-14 08:48:43.653024267 +0000 UTC m=+0.067155900 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Oct 14 04:48:43 np0005486808 podman[266012]: 2025-10-14 08:48:43.671783817 +0000 UTC m=+0.074651673 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.vendor=CentOS)
Oct 14 04:48:45 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v886: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:48:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:48:45 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:48:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 04:48:45 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:48:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 04:48:45 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:48:45 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev e26d7120-b1c6-412f-bdb5-eda3d692df10 does not exist
Oct 14 04:48:45 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 5e6dbb26-6d2d-4387-a3fb-548a46338b5c does not exist
Oct 14 04:48:45 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev e2eb525f-5aea-4844-9bcc-4de67c052f61 does not exist
Oct 14 04:48:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 04:48:45 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 04:48:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 04:48:45 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:48:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:48:45 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:48:46 np0005486808 podman[266321]: 2025-10-14 08:48:46.294230013 +0000 UTC m=+0.040958606 container create c80c26a08bbd3baf6054c4a9000cbc0e85d37f76a91922f1f8d54abd79bc5d2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:48:46 np0005486808 systemd[1]: Started libpod-conmon-c80c26a08bbd3baf6054c4a9000cbc0e85d37f76a91922f1f8d54abd79bc5d2b.scope.
Oct 14 04:48:46 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:48:46 np0005486808 podman[266321]: 2025-10-14 08:48:46.273806612 +0000 UTC m=+0.020535225 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:48:46 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:48:46 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:48:46 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:48:46 np0005486808 podman[266321]: 2025-10-14 08:48:46.378970073 +0000 UTC m=+0.125698706 container init c80c26a08bbd3baf6054c4a9000cbc0e85d37f76a91922f1f8d54abd79bc5d2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_meninsky, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:48:46 np0005486808 podman[266321]: 2025-10-14 08:48:46.386272492 +0000 UTC m=+0.133001095 container start c80c26a08bbd3baf6054c4a9000cbc0e85d37f76a91922f1f8d54abd79bc5d2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_meninsky, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 14 04:48:46 np0005486808 podman[266321]: 2025-10-14 08:48:46.389648565 +0000 UTC m=+0.136377158 container attach c80c26a08bbd3baf6054c4a9000cbc0e85d37f76a91922f1f8d54abd79bc5d2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:48:46 np0005486808 relaxed_meninsky[266337]: 167 167
Oct 14 04:48:46 np0005486808 systemd[1]: libpod-c80c26a08bbd3baf6054c4a9000cbc0e85d37f76a91922f1f8d54abd79bc5d2b.scope: Deactivated successfully.
Oct 14 04:48:46 np0005486808 podman[266321]: 2025-10-14 08:48:46.39388971 +0000 UTC m=+0.140618313 container died c80c26a08bbd3baf6054c4a9000cbc0e85d37f76a91922f1f8d54abd79bc5d2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_meninsky, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Oct 14 04:48:46 np0005486808 systemd[1]: var-lib-containers-storage-overlay-27e15eb7128e39544a2a509fca92c5269e424a4144863f3a6bbae23e743d118c-merged.mount: Deactivated successfully.
Oct 14 04:48:46 np0005486808 podman[266321]: 2025-10-14 08:48:46.435926651 +0000 UTC m=+0.182655254 container remove c80c26a08bbd3baf6054c4a9000cbc0e85d37f76a91922f1f8d54abd79bc5d2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_meninsky, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 14 04:48:46 np0005486808 systemd[1]: libpod-conmon-c80c26a08bbd3baf6054c4a9000cbc0e85d37f76a91922f1f8d54abd79bc5d2b.scope: Deactivated successfully.
Oct 14 04:48:46 np0005486808 podman[266361]: 2025-10-14 08:48:46.645888055 +0000 UTC m=+0.057426121 container create 27feb27c587ff1bffb8906c7f566ada30f4318d5ca9da9de40be0105c030a403 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 14 04:48:46 np0005486808 systemd[1]: Started libpod-conmon-27feb27c587ff1bffb8906c7f566ada30f4318d5ca9da9de40be0105c030a403.scope.
Oct 14 04:48:46 np0005486808 podman[266361]: 2025-10-14 08:48:46.618831241 +0000 UTC m=+0.030369357 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:48:46 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:48:46 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a57a622cdbbd9dc0dbcd456bb5a3772cc0662765c8d203f4457604069f167d1c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:48:46 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a57a622cdbbd9dc0dbcd456bb5a3772cc0662765c8d203f4457604069f167d1c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:48:46 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a57a622cdbbd9dc0dbcd456bb5a3772cc0662765c8d203f4457604069f167d1c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:48:46 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a57a622cdbbd9dc0dbcd456bb5a3772cc0662765c8d203f4457604069f167d1c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:48:46 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a57a622cdbbd9dc0dbcd456bb5a3772cc0662765c8d203f4457604069f167d1c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:48:46 np0005486808 podman[266361]: 2025-10-14 08:48:46.771858887 +0000 UTC m=+0.183396933 container init 27feb27c587ff1bffb8906c7f566ada30f4318d5ca9da9de40be0105c030a403 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 14 04:48:46 np0005486808 podman[266361]: 2025-10-14 08:48:46.779877594 +0000 UTC m=+0.191415630 container start 27feb27c587ff1bffb8906c7f566ada30f4318d5ca9da9de40be0105c030a403 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cori, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:48:46 np0005486808 podman[266361]: 2025-10-14 08:48:46.807535242 +0000 UTC m=+0.219073288 container attach 27feb27c587ff1bffb8906c7f566ada30f4318d5ca9da9de40be0105c030a403 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cori, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:48:47 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v887: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:48:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:48:47 np0005486808 exciting_cori[266377]: --> passed data devices: 0 physical, 3 LVM
Oct 14 04:48:47 np0005486808 exciting_cori[266377]: --> relative data size: 1.0
Oct 14 04:48:47 np0005486808 exciting_cori[266377]: --> All data devices are unavailable
Oct 14 04:48:47 np0005486808 systemd[1]: libpod-27feb27c587ff1bffb8906c7f566ada30f4318d5ca9da9de40be0105c030a403.scope: Deactivated successfully.
Oct 14 04:48:47 np0005486808 systemd[1]: libpod-27feb27c587ff1bffb8906c7f566ada30f4318d5ca9da9de40be0105c030a403.scope: Consumed 1.039s CPU time.
Oct 14 04:48:47 np0005486808 podman[266406]: 2025-10-14 08:48:47.913666661 +0000 UTC m=+0.025057206 container died 27feb27c587ff1bffb8906c7f566ada30f4318d5ca9da9de40be0105c030a403 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cori, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:48:47 np0005486808 systemd[1]: var-lib-containers-storage-overlay-a57a622cdbbd9dc0dbcd456bb5a3772cc0662765c8d203f4457604069f167d1c-merged.mount: Deactivated successfully.
Oct 14 04:48:47 np0005486808 podman[266406]: 2025-10-14 08:48:47.97918463 +0000 UTC m=+0.090575185 container remove 27feb27c587ff1bffb8906c7f566ada30f4318d5ca9da9de40be0105c030a403 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 14 04:48:47 np0005486808 systemd[1]: libpod-conmon-27feb27c587ff1bffb8906c7f566ada30f4318d5ca9da9de40be0105c030a403.scope: Deactivated successfully.
Oct 14 04:48:48 np0005486808 podman[266562]: 2025-10-14 08:48:48.731364892 +0000 UTC m=+0.067517548 container create 1dc084106c877bee6fa04ed8cd7099635ce3a6c1f3c7cb5fd9d989b52a606659 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:48:48 np0005486808 systemd[1]: Started libpod-conmon-1dc084106c877bee6fa04ed8cd7099635ce3a6c1f3c7cb5fd9d989b52a606659.scope.
Oct 14 04:48:48 np0005486808 podman[266562]: 2025-10-14 08:48:48.705497827 +0000 UTC m=+0.041650573 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:48:48 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:48:48 np0005486808 podman[266562]: 2025-10-14 08:48:48.823301958 +0000 UTC m=+0.159454674 container init 1dc084106c877bee6fa04ed8cd7099635ce3a6c1f3c7cb5fd9d989b52a606659 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_snyder, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:48:48 np0005486808 podman[266562]: 2025-10-14 08:48:48.834875653 +0000 UTC m=+0.171028309 container start 1dc084106c877bee6fa04ed8cd7099635ce3a6c1f3c7cb5fd9d989b52a606659 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 14 04:48:48 np0005486808 podman[266562]: 2025-10-14 08:48:48.838672856 +0000 UTC m=+0.174825622 container attach 1dc084106c877bee6fa04ed8cd7099635ce3a6c1f3c7cb5fd9d989b52a606659 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 14 04:48:48 np0005486808 ecstatic_snyder[266577]: 167 167
Oct 14 04:48:48 np0005486808 systemd[1]: libpod-1dc084106c877bee6fa04ed8cd7099635ce3a6c1f3c7cb5fd9d989b52a606659.scope: Deactivated successfully.
Oct 14 04:48:48 np0005486808 podman[266562]: 2025-10-14 08:48:48.842565341 +0000 UTC m=+0.178718007 container died 1dc084106c877bee6fa04ed8cd7099635ce3a6c1f3c7cb5fd9d989b52a606659 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:48:48 np0005486808 systemd[1]: var-lib-containers-storage-overlay-8b812c658c7665b4c12abb35c4020ff44712e5fd14fcb2e1b33835fb94aaaf64-merged.mount: Deactivated successfully.
Oct 14 04:48:48 np0005486808 podman[266562]: 2025-10-14 08:48:48.887198377 +0000 UTC m=+0.223351053 container remove 1dc084106c877bee6fa04ed8cd7099635ce3a6c1f3c7cb5fd9d989b52a606659 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:48:48 np0005486808 systemd[1]: libpod-conmon-1dc084106c877bee6fa04ed8cd7099635ce3a6c1f3c7cb5fd9d989b52a606659.scope: Deactivated successfully.
Oct 14 04:48:49 np0005486808 podman[266601]: 2025-10-14 08:48:49.128190302 +0000 UTC m=+0.068676967 container create 64d45dab328fdcae795628038b8ad5ea72f64b79247dd8f2f8e4434903e31ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:48:49 np0005486808 systemd[1]: Started libpod-conmon-64d45dab328fdcae795628038b8ad5ea72f64b79247dd8f2f8e4434903e31ea6.scope.
Oct 14 04:48:49 np0005486808 podman[266601]: 2025-10-14 08:48:49.105496395 +0000 UTC m=+0.045983060 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:48:49 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:48:49 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a9913c7a7ee18477bb7154455868ce9e047bb8072d6fa58a759683416f2f092/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:48:49 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a9913c7a7ee18477bb7154455868ce9e047bb8072d6fa58a759683416f2f092/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:48:49 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a9913c7a7ee18477bb7154455868ce9e047bb8072d6fa58a759683416f2f092/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:48:49 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a9913c7a7ee18477bb7154455868ce9e047bb8072d6fa58a759683416f2f092/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:48:49 np0005486808 podman[266601]: 2025-10-14 08:48:49.222440485 +0000 UTC m=+0.162927130 container init 64d45dab328fdcae795628038b8ad5ea72f64b79247dd8f2f8e4434903e31ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_heyrovsky, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:48:49 np0005486808 podman[266601]: 2025-10-14 08:48:49.230632646 +0000 UTC m=+0.171119271 container start 64d45dab328fdcae795628038b8ad5ea72f64b79247dd8f2f8e4434903e31ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_heyrovsky, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 14 04:48:49 np0005486808 podman[266601]: 2025-10-14 08:48:49.239559235 +0000 UTC m=+0.180045860 container attach 64d45dab328fdcae795628038b8ad5ea72f64b79247dd8f2f8e4434903e31ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 14 04:48:49 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v888: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]: {
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:    "0": [
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:        {
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:            "devices": [
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:                "/dev/loop3"
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:            ],
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:            "lv_name": "ceph_lv0",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:            "lv_size": "21470642176",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:            "name": "ceph_lv0",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:            "tags": {
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:                "ceph.cluster_name": "ceph",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:                "ceph.crush_device_class": "",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:                "ceph.encrypted": "0",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:                "ceph.osd_id": "0",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:                "ceph.type": "block",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:                "ceph.vdo": "0"
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:            },
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:            "type": "block",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:            "vg_name": "ceph_vg0"
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:        }
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:    ],
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:    "1": [
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:        {
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:            "devices": [
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:                "/dev/loop4"
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:            ],
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:            "lv_name": "ceph_lv1",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:            "lv_size": "21470642176",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:            "name": "ceph_lv1",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:            "tags": {
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:                "ceph.cluster_name": "ceph",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:                "ceph.crush_device_class": "",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:                "ceph.encrypted": "0",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:                "ceph.osd_id": "1",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:                "ceph.type": "block",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:                "ceph.vdo": "0"
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:            },
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:            "type": "block",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:            "vg_name": "ceph_vg1"
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:        }
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:    ],
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:    "2": [
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:        {
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:            "devices": [
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:                "/dev/loop5"
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:            ],
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:            "lv_name": "ceph_lv2",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:            "lv_size": "21470642176",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:            "name": "ceph_lv2",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:            "tags": {
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:                "ceph.cluster_name": "ceph",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:                "ceph.crush_device_class": "",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:                "ceph.encrypted": "0",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:                "ceph.osd_id": "2",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:                "ceph.type": "block",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:                "ceph.vdo": "0"
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:            },
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:            "type": "block",
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:            "vg_name": "ceph_vg2"
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:        }
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]:    ]
Oct 14 04:48:49 np0005486808 goofy_heyrovsky[266617]: }
Oct 14 04:48:50 np0005486808 systemd[1]: libpod-64d45dab328fdcae795628038b8ad5ea72f64b79247dd8f2f8e4434903e31ea6.scope: Deactivated successfully.
Oct 14 04:48:50 np0005486808 podman[266628]: 2025-10-14 08:48:50.073889164 +0000 UTC m=+0.042529285 container died 64d45dab328fdcae795628038b8ad5ea72f64b79247dd8f2f8e4434903e31ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 04:48:50 np0005486808 systemd[1]: var-lib-containers-storage-overlay-8a9913c7a7ee18477bb7154455868ce9e047bb8072d6fa58a759683416f2f092-merged.mount: Deactivated successfully.
Oct 14 04:48:50 np0005486808 podman[266628]: 2025-10-14 08:48:50.149180992 +0000 UTC m=+0.117821053 container remove 64d45dab328fdcae795628038b8ad5ea72f64b79247dd8f2f8e4434903e31ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_heyrovsky, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 14 04:48:50 np0005486808 systemd[1]: libpod-conmon-64d45dab328fdcae795628038b8ad5ea72f64b79247dd8f2f8e4434903e31ea6.scope: Deactivated successfully.
Oct 14 04:48:50 np0005486808 podman[266783]: 2025-10-14 08:48:50.804981519 +0000 UTC m=+0.068266367 container create 39b39710da5f3c969a8546a297411119f76e20398dbc9a4d8e3e4873da885988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_wiles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 14 04:48:50 np0005486808 systemd[1]: Started libpod-conmon-39b39710da5f3c969a8546a297411119f76e20398dbc9a4d8e3e4873da885988.scope.
Oct 14 04:48:50 np0005486808 podman[266783]: 2025-10-14 08:48:50.774371937 +0000 UTC m=+0.037656815 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:48:50 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:48:50 np0005486808 podman[266783]: 2025-10-14 08:48:50.910702894 +0000 UTC m=+0.173987762 container init 39b39710da5f3c969a8546a297411119f76e20398dbc9a4d8e3e4873da885988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_wiles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 14 04:48:50 np0005486808 podman[266783]: 2025-10-14 08:48:50.921438487 +0000 UTC m=+0.184723305 container start 39b39710da5f3c969a8546a297411119f76e20398dbc9a4d8e3e4873da885988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_wiles, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 14 04:48:50 np0005486808 podman[266783]: 2025-10-14 08:48:50.925024625 +0000 UTC m=+0.188309463 container attach 39b39710da5f3c969a8546a297411119f76e20398dbc9a4d8e3e4873da885988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_wiles, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:48:50 np0005486808 nifty_wiles[266799]: 167 167
Oct 14 04:48:50 np0005486808 systemd[1]: libpod-39b39710da5f3c969a8546a297411119f76e20398dbc9a4d8e3e4873da885988.scope: Deactivated successfully.
Oct 14 04:48:50 np0005486808 podman[266783]: 2025-10-14 08:48:50.929111355 +0000 UTC m=+0.192396183 container died 39b39710da5f3c969a8546a297411119f76e20398dbc9a4d8e3e4873da885988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_wiles, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 14 04:48:50 np0005486808 systemd[1]: var-lib-containers-storage-overlay-a9a12552a2763fa9fd55fa969768161655fe463839d47cf370fd5c1f8a92c5b4-merged.mount: Deactivated successfully.
Oct 14 04:48:50 np0005486808 podman[266783]: 2025-10-14 08:48:50.96556464 +0000 UTC m=+0.228849458 container remove 39b39710da5f3c969a8546a297411119f76e20398dbc9a4d8e3e4873da885988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:48:50 np0005486808 systemd[1]: libpod-conmon-39b39710da5f3c969a8546a297411119f76e20398dbc9a4d8e3e4873da885988.scope: Deactivated successfully.
Oct 14 04:48:51 np0005486808 podman[266825]: 2025-10-14 08:48:51.18313003 +0000 UTC m=+0.062660129 container create e6d5eb49b9d5f327da62b0ce23a0425c43823cdf8ff8177d43477a12d8c5e6b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_feynman, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:48:51 np0005486808 systemd[1]: Started libpod-conmon-e6d5eb49b9d5f327da62b0ce23a0425c43823cdf8ff8177d43477a12d8c5e6b1.scope.
Oct 14 04:48:51 np0005486808 podman[266825]: 2025-10-14 08:48:51.156502277 +0000 UTC m=+0.036032436 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:48:51 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:48:51 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deb6cc0576e7e191c7fa7253b9abfb156b37d86ea083ca969887e9e465fb1079/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:48:51 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deb6cc0576e7e191c7fa7253b9abfb156b37d86ea083ca969887e9e465fb1079/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:48:51 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deb6cc0576e7e191c7fa7253b9abfb156b37d86ea083ca969887e9e465fb1079/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:48:51 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deb6cc0576e7e191c7fa7253b9abfb156b37d86ea083ca969887e9e465fb1079/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:48:51 np0005486808 podman[266825]: 2025-10-14 08:48:51.277631 +0000 UTC m=+0.157161129 container init e6d5eb49b9d5f327da62b0ce23a0425c43823cdf8ff8177d43477a12d8c5e6b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_feynman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 14 04:48:51 np0005486808 podman[266825]: 2025-10-14 08:48:51.291764437 +0000 UTC m=+0.171294536 container start e6d5eb49b9d5f327da62b0ce23a0425c43823cdf8ff8177d43477a12d8c5e6b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_feynman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 14 04:48:51 np0005486808 podman[266825]: 2025-10-14 08:48:51.29760727 +0000 UTC m=+0.177137419 container attach e6d5eb49b9d5f327da62b0ce23a0425c43823cdf8ff8177d43477a12d8c5e6b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:48:51 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v889: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:48:52 np0005486808 naughty_feynman[266842]: {
Oct 14 04:48:52 np0005486808 naughty_feynman[266842]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 04:48:52 np0005486808 naughty_feynman[266842]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:48:52 np0005486808 naughty_feynman[266842]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 04:48:52 np0005486808 naughty_feynman[266842]:        "osd_id": 2,
Oct 14 04:48:52 np0005486808 naughty_feynman[266842]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:48:52 np0005486808 naughty_feynman[266842]:        "type": "bluestore"
Oct 14 04:48:52 np0005486808 naughty_feynman[266842]:    },
Oct 14 04:48:52 np0005486808 naughty_feynman[266842]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 04:48:52 np0005486808 naughty_feynman[266842]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:48:52 np0005486808 naughty_feynman[266842]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 04:48:52 np0005486808 naughty_feynman[266842]:        "osd_id": 1,
Oct 14 04:48:52 np0005486808 naughty_feynman[266842]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:48:52 np0005486808 naughty_feynman[266842]:        "type": "bluestore"
Oct 14 04:48:52 np0005486808 naughty_feynman[266842]:    },
Oct 14 04:48:52 np0005486808 naughty_feynman[266842]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 04:48:52 np0005486808 naughty_feynman[266842]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:48:52 np0005486808 naughty_feynman[266842]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 04:48:52 np0005486808 naughty_feynman[266842]:        "osd_id": 0,
Oct 14 04:48:52 np0005486808 naughty_feynman[266842]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:48:52 np0005486808 naughty_feynman[266842]:        "type": "bluestore"
Oct 14 04:48:52 np0005486808 naughty_feynman[266842]:    }
Oct 14 04:48:52 np0005486808 naughty_feynman[266842]: }
Oct 14 04:48:52 np0005486808 systemd[1]: libpod-e6d5eb49b9d5f327da62b0ce23a0425c43823cdf8ff8177d43477a12d8c5e6b1.scope: Deactivated successfully.
Oct 14 04:48:52 np0005486808 systemd[1]: libpod-e6d5eb49b9d5f327da62b0ce23a0425c43823cdf8ff8177d43477a12d8c5e6b1.scope: Consumed 1.049s CPU time.
Oct 14 04:48:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:48:52 np0005486808 podman[266875]: 2025-10-14 08:48:52.38096459 +0000 UTC m=+0.030061209 container died e6d5eb49b9d5f327da62b0ce23a0425c43823cdf8ff8177d43477a12d8c5e6b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:48:52 np0005486808 systemd[1]: var-lib-containers-storage-overlay-deb6cc0576e7e191c7fa7253b9abfb156b37d86ea083ca969887e9e465fb1079-merged.mount: Deactivated successfully.
Oct 14 04:48:53 np0005486808 podman[266875]: 2025-10-14 08:48:53.025173522 +0000 UTC m=+0.674270121 container remove e6d5eb49b9d5f327da62b0ce23a0425c43823cdf8ff8177d43477a12d8c5e6b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_feynman, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 14 04:48:53 np0005486808 systemd[1]: libpod-conmon-e6d5eb49b9d5f327da62b0ce23a0425c43823cdf8ff8177d43477a12d8c5e6b1.scope: Deactivated successfully.
Oct 14 04:48:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:48:53 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:48:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:48:53 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:48:53 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 8a666d91-bb24-4333-833f-61acf84d8ab7 does not exist
Oct 14 04:48:53 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 2250b801-af20-4372-b26d-8bbcce14a41c does not exist
Oct 14 04:48:53 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v890: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:48:54 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:48:54 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:48:55 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:48:55 np0005486808 podman[266941]: 2025-10-14 08:48:55.677876621 +0000 UTC m=+0.082486735 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 14 04:48:55 np0005486808 podman[266940]: 2025-10-14 08:48:55.722740713 +0000 UTC m=+0.127857910 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 04:48:57 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v892: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:48:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:48:59 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:49:01 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v894: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:49:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:49:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:49:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:49:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:49:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:49:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:49:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:49:03 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v895: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:49:05 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:49:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 04:49:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1874871742' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 04:49:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 04:49:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1874871742' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 04:49:05 np0005486808 nova_compute[259627]: 2025-10-14 08:49:05.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:49:05 np0005486808 nova_compute[259627]: 2025-10-14 08:49:05.996 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:49:05 np0005486808 nova_compute[259627]: 2025-10-14 08:49:05.996 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:49:05 np0005486808 nova_compute[259627]: 2025-10-14 08:49:05.996 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 04:49:06 np0005486808 nova_compute[259627]: 2025-10-14 08:49:06.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:49:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:49:07.007 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:49:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:49:07.007 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:49:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:49:07.007 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:49:07 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:49:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:49:07 np0005486808 nova_compute[259627]: 2025-10-14 08:49:07.972 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:49:07 np0005486808 nova_compute[259627]: 2025-10-14 08:49:07.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:49:08 np0005486808 nova_compute[259627]: 2025-10-14 08:49:08.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:49:09 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v898: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:49:10 np0005486808 nova_compute[259627]: 2025-10-14 08:49:10.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:49:11 np0005486808 nova_compute[259627]: 2025-10-14 08:49:11.011 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:49:11 np0005486808 nova_compute[259627]: 2025-10-14 08:49:11.012 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:49:11 np0005486808 nova_compute[259627]: 2025-10-14 08:49:11.012 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:49:11 np0005486808 nova_compute[259627]: 2025-10-14 08:49:11.012 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 04:49:11 np0005486808 nova_compute[259627]: 2025-10-14 08:49:11.012 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:49:11 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v899: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:49:11 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:49:11 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1573973738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:49:11 np0005486808 nova_compute[259627]: 2025-10-14 08:49:11.455 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:49:11 np0005486808 nova_compute[259627]: 2025-10-14 08:49:11.589 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:49:11 np0005486808 nova_compute[259627]: 2025-10-14 08:49:11.591 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5148MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 04:49:11 np0005486808 nova_compute[259627]: 2025-10-14 08:49:11.591 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:49:11 np0005486808 nova_compute[259627]: 2025-10-14 08:49:11.591 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:49:11 np0005486808 nova_compute[259627]: 2025-10-14 08:49:11.671 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 04:49:11 np0005486808 nova_compute[259627]: 2025-10-14 08:49:11.672 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 04:49:11 np0005486808 nova_compute[259627]: 2025-10-14 08:49:11.691 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:49:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:49:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/605830336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:49:12 np0005486808 nova_compute[259627]: 2025-10-14 08:49:12.136 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:49:12 np0005486808 nova_compute[259627]: 2025-10-14 08:49:12.144 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:49:12 np0005486808 nova_compute[259627]: 2025-10-14 08:49:12.161 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:49:12 np0005486808 nova_compute[259627]: 2025-10-14 08:49:12.163 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 04:49:12 np0005486808 nova_compute[259627]: 2025-10-14 08:49:12.164 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:49:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:49:13 np0005486808 nova_compute[259627]: 2025-10-14 08:49:13.164 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:49:13 np0005486808 nova_compute[259627]: 2025-10-14 08:49:13.165 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 04:49:13 np0005486808 nova_compute[259627]: 2025-10-14 08:49:13.165 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 04:49:13 np0005486808 nova_compute[259627]: 2025-10-14 08:49:13.186 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 14 04:49:13 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v900: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:49:13 np0005486808 nova_compute[259627]: 2025-10-14 08:49:13.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:49:14 np0005486808 podman[267027]: 2025-10-14 08:49:14.659286735 +0000 UTC m=+0.069521297 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 14 04:49:14 np0005486808 podman[267028]: 2025-10-14 08:49:14.687138979 +0000 UTC m=+0.089406086 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 04:49:15 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v901: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:49:17 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v902: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:49:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:49:17 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Oct 14 04:49:17 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:49:17.374622) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 04:49:17 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Oct 14 04:49:17 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760431757374672, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 854, "num_deletes": 255, "total_data_size": 1129621, "memory_usage": 1150400, "flush_reason": "Manual Compaction"}
Oct 14 04:49:17 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Oct 14 04:49:17 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760431757385526, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 1119298, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18566, "largest_seqno": 19419, "table_properties": {"data_size": 1115001, "index_size": 1949, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 8971, "raw_average_key_size": 18, "raw_value_size": 1106411, "raw_average_value_size": 2267, "num_data_blocks": 89, "num_entries": 488, "num_filter_entries": 488, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760431683, "oldest_key_time": 1760431683, "file_creation_time": 1760431757, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Oct 14 04:49:17 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 10967 microseconds, and 7094 cpu microseconds.
Oct 14 04:49:17 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 04:49:17 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:49:17.385588) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 1119298 bytes OK
Oct 14 04:49:17 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:49:17.385614) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Oct 14 04:49:17 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:49:17.387179) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Oct 14 04:49:17 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:49:17.387202) EVENT_LOG_v1 {"time_micros": 1760431757387194, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 04:49:17 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:49:17.387226) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 04:49:17 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 1125404, prev total WAL file size 1126561, number of live WAL files 2.
Oct 14 04:49:17 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 04:49:17 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:49:17.388053) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323530' seq:72057594037927935, type:22 .. '6C6F676D00353031' seq:0, type:0; will stop at (end)
Oct 14 04:49:17 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 04:49:17 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(1093KB)], [44(5950KB)]
Oct 14 04:49:17 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760431757388114, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 7212351, "oldest_snapshot_seqno": -1}
Oct 14 04:49:17 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4095 keys, 7082714 bytes, temperature: kUnknown
Oct 14 04:49:17 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760431757423462, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 7082714, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7054766, "index_size": 16638, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10245, "raw_key_size": 101401, "raw_average_key_size": 24, "raw_value_size": 6979970, "raw_average_value_size": 1704, "num_data_blocks": 701, "num_entries": 4095, "num_filter_entries": 4095, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760431757, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Oct 14 04:49:17 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 04:49:17 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:49:17.423715) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7082714 bytes
Oct 14 04:49:17 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:49:17.425207) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 203.6 rd, 200.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 5.8 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(12.8) write-amplify(6.3) OK, records in: 4617, records dropped: 522 output_compression: NoCompression
Oct 14 04:49:17 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:49:17.425227) EVENT_LOG_v1 {"time_micros": 1760431757425218, "job": 22, "event": "compaction_finished", "compaction_time_micros": 35417, "compaction_time_cpu_micros": 15402, "output_level": 6, "num_output_files": 1, "total_output_size": 7082714, "num_input_records": 4617, "num_output_records": 4095, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 04:49:17 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 04:49:17 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760431757425558, "job": 22, "event": "table_file_deletion", "file_number": 46}
Oct 14 04:49:17 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 04:49:17 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760431757426823, "job": 22, "event": "table_file_deletion", "file_number": 44}
Oct 14 04:49:17 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:49:17.387913) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:49:17 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:49:17.426952) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:49:17 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:49:17.426960) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:49:17 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:49:17.426963) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:49:17 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:49:17.426966) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:49:17 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:49:17.426969) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:49:19 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v903: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:49:21 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:49:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:49:23 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v905: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:49:25 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:49:26 np0005486808 podman[267068]: 2025-10-14 08:49:26.648048136 +0000 UTC m=+0.053733550 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Oct 14 04:49:26 np0005486808 podman[267067]: 2025-10-14 08:49:26.659706952 +0000 UTC m=+0.080828905 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller)
Oct 14 04:49:27 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v907: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:49:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:49:29 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:49:31 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v909: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:49:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:49:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_08:49:32
Oct 14 04:49:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 04:49:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 04:49:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', '.mgr', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', 'vms', 'default.rgw.control', 'backups']
Oct 14 04:49:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 04:49:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:49:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:49:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:49:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:49:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:49:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:49:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 04:49:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:49:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 04:49:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:49:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:49:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:49:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:49:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:49:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:49:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:49:33 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v910: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:49:35 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:49:37 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:49:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:49:39 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:49:41 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v914: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:49:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:49:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 04:49:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:49:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 04:49:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:49:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:49:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:49:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:49:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:49:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:49:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:49:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:49:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:49:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 04:49:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:49:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:49:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:49:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 04:49:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:49:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 04:49:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:49:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:49:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:49:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 04:49:43 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v915: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:49:45 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:49:45 np0005486808 podman[267113]: 2025-10-14 08:49:45.667719715 +0000 UTC m=+0.073895034 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, config_id=iscsid)
Oct 14 04:49:45 np0005486808 podman[267112]: 2025-10-14 08:49:45.685383119 +0000 UTC m=+0.089778915 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 14 04:49:47 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:49:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:49:49 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:49:51 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:49:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:49:53 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v920: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:49:54 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:49:54 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:49:54 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 04:49:54 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:49:54 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 04:49:54 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:49:54 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 7dd4fc4b-f3a6-4833-b404-49d0ca08057e does not exist
Oct 14 04:49:54 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 4e2382e3-f7cc-4ab4-926a-55dd554694f9 does not exist
Oct 14 04:49:54 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev dcb28198-f1ca-41ef-b172-0441c8c12273 does not exist
Oct 14 04:49:54 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 04:49:54 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 04:49:54 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 04:49:54 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:49:54 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:49:54 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:49:54 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:49:54 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:49:54 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:49:54 np0005486808 podman[267420]: 2025-10-14 08:49:54.744615035 +0000 UTC m=+0.039691725 container create c66896606ef4a8306a7402b8d174d5748ec89d2bb12ea3620bef3b3ea9410e10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_agnesi, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Oct 14 04:49:54 np0005486808 systemd[1]: Started libpod-conmon-c66896606ef4a8306a7402b8d174d5748ec89d2bb12ea3620bef3b3ea9410e10.scope.
Oct 14 04:49:54 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:49:54 np0005486808 podman[267420]: 2025-10-14 08:49:54.726669725 +0000 UTC m=+0.021746455 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:49:54 np0005486808 podman[267420]: 2025-10-14 08:49:54.826221468 +0000 UTC m=+0.121298248 container init c66896606ef4a8306a7402b8d174d5748ec89d2bb12ea3620bef3b3ea9410e10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_agnesi, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 14 04:49:54 np0005486808 podman[267420]: 2025-10-14 08:49:54.83484257 +0000 UTC m=+0.129919260 container start c66896606ef4a8306a7402b8d174d5748ec89d2bb12ea3620bef3b3ea9410e10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:49:54 np0005486808 podman[267420]: 2025-10-14 08:49:54.838152521 +0000 UTC m=+0.133229311 container attach c66896606ef4a8306a7402b8d174d5748ec89d2bb12ea3620bef3b3ea9410e10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 14 04:49:54 np0005486808 agitated_agnesi[267436]: 167 167
Oct 14 04:49:54 np0005486808 systemd[1]: libpod-c66896606ef4a8306a7402b8d174d5748ec89d2bb12ea3620bef3b3ea9410e10.scope: Deactivated successfully.
Oct 14 04:49:54 np0005486808 podman[267420]: 2025-10-14 08:49:54.840420487 +0000 UTC m=+0.135497207 container died c66896606ef4a8306a7402b8d174d5748ec89d2bb12ea3620bef3b3ea9410e10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_agnesi, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:49:54 np0005486808 systemd[1]: var-lib-containers-storage-overlay-285093fd6c130b1b9ff8e3f466be00fed0c7ec5e5aa114739e314432ede11a7f-merged.mount: Deactivated successfully.
Oct 14 04:49:54 np0005486808 podman[267420]: 2025-10-14 08:49:54.891559372 +0000 UTC m=+0.186636092 container remove c66896606ef4a8306a7402b8d174d5748ec89d2bb12ea3620bef3b3ea9410e10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_agnesi, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:49:54 np0005486808 systemd[1]: libpod-conmon-c66896606ef4a8306a7402b8d174d5748ec89d2bb12ea3620bef3b3ea9410e10.scope: Deactivated successfully.
Oct 14 04:49:55 np0005486808 podman[267460]: 2025-10-14 08:49:55.046625288 +0000 UTC m=+0.049859945 container create a391efbd3f2343515a190437bbdfd679d771ba22f962c9e57fd9bdcf8e14a46f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_rosalind, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:49:55 np0005486808 systemd[1]: Started libpod-conmon-a391efbd3f2343515a190437bbdfd679d771ba22f962c9e57fd9bdcf8e14a46f.scope.
Oct 14 04:49:55 np0005486808 podman[267460]: 2025-10-14 08:49:55.019651826 +0000 UTC m=+0.022886593 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:49:55 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:49:55 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1b1cca091e2a4858c40cf6f8bf7270b3c7ab29ac36a711871c80f9daffb10e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:49:55 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1b1cca091e2a4858c40cf6f8bf7270b3c7ab29ac36a711871c80f9daffb10e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:49:55 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1b1cca091e2a4858c40cf6f8bf7270b3c7ab29ac36a711871c80f9daffb10e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:49:55 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1b1cca091e2a4858c40cf6f8bf7270b3c7ab29ac36a711871c80f9daffb10e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:49:55 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1b1cca091e2a4858c40cf6f8bf7270b3c7ab29ac36a711871c80f9daffb10e2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:49:55 np0005486808 podman[267460]: 2025-10-14 08:49:55.1448712 +0000 UTC m=+0.148105857 container init a391efbd3f2343515a190437bbdfd679d771ba22f962c9e57fd9bdcf8e14a46f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_rosalind, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:49:55 np0005486808 podman[267460]: 2025-10-14 08:49:55.15995688 +0000 UTC m=+0.163191537 container start a391efbd3f2343515a190437bbdfd679d771ba22f962c9e57fd9bdcf8e14a46f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_rosalind, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 04:49:55 np0005486808 podman[267460]: 2025-10-14 08:49:55.163823895 +0000 UTC m=+0.167058552 container attach a391efbd3f2343515a190437bbdfd679d771ba22f962c9e57fd9bdcf8e14a46f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 04:49:55 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:49:56 np0005486808 busy_rosalind[267476]: --> passed data devices: 0 physical, 3 LVM
Oct 14 04:49:56 np0005486808 busy_rosalind[267476]: --> relative data size: 1.0
Oct 14 04:49:56 np0005486808 busy_rosalind[267476]: --> All data devices are unavailable
Oct 14 04:49:56 np0005486808 systemd[1]: libpod-a391efbd3f2343515a190437bbdfd679d771ba22f962c9e57fd9bdcf8e14a46f.scope: Deactivated successfully.
Oct 14 04:49:56 np0005486808 podman[267460]: 2025-10-14 08:49:56.136446597 +0000 UTC m=+1.139681254 container died a391efbd3f2343515a190437bbdfd679d771ba22f962c9e57fd9bdcf8e14a46f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:49:56 np0005486808 systemd[1]: var-lib-containers-storage-overlay-e1b1cca091e2a4858c40cf6f8bf7270b3c7ab29ac36a711871c80f9daffb10e2-merged.mount: Deactivated successfully.
Oct 14 04:49:56 np0005486808 podman[267460]: 2025-10-14 08:49:56.192078572 +0000 UTC m=+1.195313239 container remove a391efbd3f2343515a190437bbdfd679d771ba22f962c9e57fd9bdcf8e14a46f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Oct 14 04:49:56 np0005486808 systemd[1]: libpod-conmon-a391efbd3f2343515a190437bbdfd679d771ba22f962c9e57fd9bdcf8e14a46f.scope: Deactivated successfully.
Oct 14 04:49:56 np0005486808 podman[267655]: 2025-10-14 08:49:56.969987746 +0000 UTC m=+0.051019233 container create 21ff27dc54efea4dbca5c61f9c1df4304814d9f2003e4c41d1a1524111f2d4ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mccarthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:49:57 np0005486808 systemd[1]: Started libpod-conmon-21ff27dc54efea4dbca5c61f9c1df4304814d9f2003e4c41d1a1524111f2d4ae.scope.
Oct 14 04:49:57 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:49:57 np0005486808 podman[267655]: 2025-10-14 08:49:56.946052019 +0000 UTC m=+0.027083576 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:49:57 np0005486808 podman[267655]: 2025-10-14 08:49:57.04797604 +0000 UTC m=+0.129007517 container init 21ff27dc54efea4dbca5c61f9c1df4304814d9f2003e4c41d1a1524111f2d4ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mccarthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 14 04:49:57 np0005486808 podman[267655]: 2025-10-14 08:49:57.053968977 +0000 UTC m=+0.135000444 container start 21ff27dc54efea4dbca5c61f9c1df4304814d9f2003e4c41d1a1524111f2d4ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:49:57 np0005486808 musing_mccarthy[267674]: 167 167
Oct 14 04:49:57 np0005486808 systemd[1]: libpod-21ff27dc54efea4dbca5c61f9c1df4304814d9f2003e4c41d1a1524111f2d4ae.scope: Deactivated successfully.
Oct 14 04:49:57 np0005486808 podman[267655]: 2025-10-14 08:49:57.058695113 +0000 UTC m=+0.139726580 container attach 21ff27dc54efea4dbca5c61f9c1df4304814d9f2003e4c41d1a1524111f2d4ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mccarthy, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 14 04:49:57 np0005486808 podman[267655]: 2025-10-14 08:49:57.059094283 +0000 UTC m=+0.140125750 container died 21ff27dc54efea4dbca5c61f9c1df4304814d9f2003e4c41d1a1524111f2d4ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mccarthy, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:49:57 np0005486808 systemd[1]: var-lib-containers-storage-overlay-c4faf19870764369ea09f309cec708b7730442ca622f573aedfcfa2cc9f2e98c-merged.mount: Deactivated successfully.
Oct 14 04:49:57 np0005486808 podman[267655]: 2025-10-14 08:49:57.108875515 +0000 UTC m=+0.189906982 container remove 21ff27dc54efea4dbca5c61f9c1df4304814d9f2003e4c41d1a1524111f2d4ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mccarthy, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Oct 14 04:49:57 np0005486808 podman[267673]: 2025-10-14 08:49:57.117319132 +0000 UTC m=+0.092217494 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent)
Oct 14 04:49:57 np0005486808 systemd[1]: libpod-conmon-21ff27dc54efea4dbca5c61f9c1df4304814d9f2003e4c41d1a1524111f2d4ae.scope: Deactivated successfully.
Oct 14 04:49:57 np0005486808 podman[267670]: 2025-10-14 08:49:57.143849294 +0000 UTC m=+0.122148200 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 14 04:49:57 np0005486808 podman[267735]: 2025-10-14 08:49:57.275499085 +0000 UTC m=+0.051776042 container create 73f87e2aae29a13ffc37dab4f940677ab4ad31763b254dff2fe0838eb728c95a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kapitsa, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:49:57 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v922: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:49:57 np0005486808 systemd[1]: Started libpod-conmon-73f87e2aae29a13ffc37dab4f940677ab4ad31763b254dff2fe0838eb728c95a.scope.
Oct 14 04:49:57 np0005486808 podman[267735]: 2025-10-14 08:49:57.248087782 +0000 UTC m=+0.024364749 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:49:57 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:49:57 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dcf9475a689915fbeee3368cd6277f9274656f71eaed080a53c78923c83880c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:49:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:49:57 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dcf9475a689915fbeee3368cd6277f9274656f71eaed080a53c78923c83880c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:49:57 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dcf9475a689915fbeee3368cd6277f9274656f71eaed080a53c78923c83880c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:49:57 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dcf9475a689915fbeee3368cd6277f9274656f71eaed080a53c78923c83880c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:49:57 np0005486808 podman[267735]: 2025-10-14 08:49:57.398962965 +0000 UTC m=+0.175239902 container init 73f87e2aae29a13ffc37dab4f940677ab4ad31763b254dff2fe0838eb728c95a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kapitsa, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 14 04:49:57 np0005486808 podman[267735]: 2025-10-14 08:49:57.40809808 +0000 UTC m=+0.184374997 container start 73f87e2aae29a13ffc37dab4f940677ab4ad31763b254dff2fe0838eb728c95a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kapitsa, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 14 04:49:57 np0005486808 podman[267735]: 2025-10-14 08:49:57.412159249 +0000 UTC m=+0.188436196 container attach 73f87e2aae29a13ffc37dab4f940677ab4ad31763b254dff2fe0838eb728c95a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kapitsa, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]: {
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:    "0": [
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:        {
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:            "devices": [
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:                "/dev/loop3"
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:            ],
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:            "lv_name": "ceph_lv0",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:            "lv_size": "21470642176",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:            "name": "ceph_lv0",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:            "tags": {
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:                "ceph.cluster_name": "ceph",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:                "ceph.crush_device_class": "",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:                "ceph.encrypted": "0",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:                "ceph.osd_id": "0",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:                "ceph.type": "block",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:                "ceph.vdo": "0"
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:            },
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:            "type": "block",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:            "vg_name": "ceph_vg0"
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:        }
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:    ],
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:    "1": [
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:        {
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:            "devices": [
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:                "/dev/loop4"
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:            ],
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:            "lv_name": "ceph_lv1",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:            "lv_size": "21470642176",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:            "name": "ceph_lv1",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:            "tags": {
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:                "ceph.cluster_name": "ceph",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:                "ceph.crush_device_class": "",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:                "ceph.encrypted": "0",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:                "ceph.osd_id": "1",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:                "ceph.type": "block",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:                "ceph.vdo": "0"
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:            },
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:            "type": "block",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:            "vg_name": "ceph_vg1"
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:        }
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:    ],
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:    "2": [
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:        {
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:            "devices": [
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:                "/dev/loop5"
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:            ],
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:            "lv_name": "ceph_lv2",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:            "lv_size": "21470642176",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:            "name": "ceph_lv2",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:            "tags": {
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:                "ceph.cluster_name": "ceph",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:                "ceph.crush_device_class": "",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:                "ceph.encrypted": "0",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:                "ceph.osd_id": "2",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:                "ceph.type": "block",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:                "ceph.vdo": "0"
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:            },
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:            "type": "block",
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:            "vg_name": "ceph_vg2"
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:        }
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]:    ]
Oct 14 04:49:58 np0005486808 nifty_kapitsa[267751]: }
Oct 14 04:49:58 np0005486808 systemd[1]: libpod-73f87e2aae29a13ffc37dab4f940677ab4ad31763b254dff2fe0838eb728c95a.scope: Deactivated successfully.
Oct 14 04:49:58 np0005486808 podman[267735]: 2025-10-14 08:49:58.197240989 +0000 UTC m=+0.973517906 container died 73f87e2aae29a13ffc37dab4f940677ab4ad31763b254dff2fe0838eb728c95a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kapitsa, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:49:58 np0005486808 systemd[1]: var-lib-containers-storage-overlay-7dcf9475a689915fbeee3368cd6277f9274656f71eaed080a53c78923c83880c-merged.mount: Deactivated successfully.
Oct 14 04:49:58 np0005486808 podman[267735]: 2025-10-14 08:49:58.248086837 +0000 UTC m=+1.024363744 container remove 73f87e2aae29a13ffc37dab4f940677ab4ad31763b254dff2fe0838eb728c95a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:49:58 np0005486808 systemd[1]: libpod-conmon-73f87e2aae29a13ffc37dab4f940677ab4ad31763b254dff2fe0838eb728c95a.scope: Deactivated successfully.
Oct 14 04:49:58 np0005486808 podman[267915]: 2025-10-14 08:49:58.983896538 +0000 UTC m=+0.066411802 container create af8b7d37aa010ddd05e1d4b786e65e0e109a6583ce9ee2f884f67d4b8e863104 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 04:49:59 np0005486808 systemd[1]: Started libpod-conmon-af8b7d37aa010ddd05e1d4b786e65e0e109a6583ce9ee2f884f67d4b8e863104.scope.
Oct 14 04:49:59 np0005486808 podman[267915]: 2025-10-14 08:49:58.952545588 +0000 UTC m=+0.035060872 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:49:59 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:49:59 np0005486808 podman[267915]: 2025-10-14 08:49:59.08462701 +0000 UTC m=+0.167142364 container init af8b7d37aa010ddd05e1d4b786e65e0e109a6583ce9ee2f884f67d4b8e863104 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_margulis, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 14 04:49:59 np0005486808 podman[267915]: 2025-10-14 08:49:59.100339166 +0000 UTC m=+0.182854470 container start af8b7d37aa010ddd05e1d4b786e65e0e109a6583ce9ee2f884f67d4b8e863104 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_margulis, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:49:59 np0005486808 podman[267915]: 2025-10-14 08:49:59.105146954 +0000 UTC m=+0.187662318 container attach af8b7d37aa010ddd05e1d4b786e65e0e109a6583ce9ee2f884f67d4b8e863104 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_margulis, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:49:59 np0005486808 zealous_margulis[267932]: 167 167
Oct 14 04:49:59 np0005486808 systemd[1]: libpod-af8b7d37aa010ddd05e1d4b786e65e0e109a6583ce9ee2f884f67d4b8e863104.scope: Deactivated successfully.
Oct 14 04:49:59 np0005486808 conmon[267932]: conmon af8b7d37aa010ddd05e1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-af8b7d37aa010ddd05e1d4b786e65e0e109a6583ce9ee2f884f67d4b8e863104.scope/container/memory.events
Oct 14 04:49:59 np0005486808 podman[267915]: 2025-10-14 08:49:59.109385428 +0000 UTC m=+0.191900722 container died af8b7d37aa010ddd05e1d4b786e65e0e109a6583ce9ee2f884f67d4b8e863104 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:49:59 np0005486808 systemd[1]: var-lib-containers-storage-overlay-f616b30fb0235ff816d089784036c4d294fd16a744a1a5851927e38e7f0ce00d-merged.mount: Deactivated successfully.
Oct 14 04:49:59 np0005486808 podman[267915]: 2025-10-14 08:49:59.156921965 +0000 UTC m=+0.239437259 container remove af8b7d37aa010ddd05e1d4b786e65e0e109a6583ce9ee2f884f67d4b8e863104 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_margulis, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 14 04:49:59 np0005486808 systemd[1]: libpod-conmon-af8b7d37aa010ddd05e1d4b786e65e0e109a6583ce9ee2f884f67d4b8e863104.scope: Deactivated successfully.
Oct 14 04:49:59 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v923: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:49:59 np0005486808 podman[267956]: 2025-10-14 08:49:59.403829544 +0000 UTC m=+0.069478485 container create a5c7ab358310d3faeb29385c022f9421a5cbb0e3b5cbd4650ff0ad83af9445c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 04:49:59 np0005486808 systemd[1]: Started libpod-conmon-a5c7ab358310d3faeb29385c022f9421a5cbb0e3b5cbd4650ff0ad83af9445c1.scope.
Oct 14 04:49:59 np0005486808 podman[267956]: 2025-10-14 08:49:59.376312519 +0000 UTC m=+0.041961510 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:49:59 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:49:59 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c9a0a0a6ca381278cf5e2df751b19000d953a421d67cd4d085abc3e3094b94b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:49:59 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c9a0a0a6ca381278cf5e2df751b19000d953a421d67cd4d085abc3e3094b94b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:49:59 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c9a0a0a6ca381278cf5e2df751b19000d953a421d67cd4d085abc3e3094b94b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:49:59 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c9a0a0a6ca381278cf5e2df751b19000d953a421d67cd4d085abc3e3094b94b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:49:59 np0005486808 podman[267956]: 2025-10-14 08:49:59.550484093 +0000 UTC m=+0.216133094 container init a5c7ab358310d3faeb29385c022f9421a5cbb0e3b5cbd4650ff0ad83af9445c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shannon, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:49:59 np0005486808 podman[267956]: 2025-10-14 08:49:59.562529589 +0000 UTC m=+0.228178530 container start a5c7ab358310d3faeb29385c022f9421a5cbb0e3b5cbd4650ff0ad83af9445c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shannon, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:49:59 np0005486808 podman[267956]: 2025-10-14 08:49:59.569295705 +0000 UTC m=+0.234944716 container attach a5c7ab358310d3faeb29385c022f9421a5cbb0e3b5cbd4650ff0ad83af9445c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:50:00 np0005486808 charming_shannon[267972]: {
Oct 14 04:50:00 np0005486808 charming_shannon[267972]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 04:50:00 np0005486808 charming_shannon[267972]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:50:00 np0005486808 charming_shannon[267972]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 04:50:00 np0005486808 charming_shannon[267972]:        "osd_id": 2,
Oct 14 04:50:00 np0005486808 charming_shannon[267972]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:50:00 np0005486808 charming_shannon[267972]:        "type": "bluestore"
Oct 14 04:50:00 np0005486808 charming_shannon[267972]:    },
Oct 14 04:50:00 np0005486808 charming_shannon[267972]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 04:50:00 np0005486808 charming_shannon[267972]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:50:00 np0005486808 charming_shannon[267972]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 04:50:00 np0005486808 charming_shannon[267972]:        "osd_id": 1,
Oct 14 04:50:00 np0005486808 charming_shannon[267972]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:50:00 np0005486808 charming_shannon[267972]:        "type": "bluestore"
Oct 14 04:50:00 np0005486808 charming_shannon[267972]:    },
Oct 14 04:50:00 np0005486808 charming_shannon[267972]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 04:50:00 np0005486808 charming_shannon[267972]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:50:00 np0005486808 charming_shannon[267972]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 04:50:00 np0005486808 charming_shannon[267972]:        "osd_id": 0,
Oct 14 04:50:00 np0005486808 charming_shannon[267972]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:50:00 np0005486808 charming_shannon[267972]:        "type": "bluestore"
Oct 14 04:50:00 np0005486808 charming_shannon[267972]:    }
Oct 14 04:50:00 np0005486808 charming_shannon[267972]: }
Oct 14 04:50:00 np0005486808 systemd[1]: libpod-a5c7ab358310d3faeb29385c022f9421a5cbb0e3b5cbd4650ff0ad83af9445c1.scope: Deactivated successfully.
Oct 14 04:50:00 np0005486808 podman[267956]: 2025-10-14 08:50:00.67646848 +0000 UTC m=+1.342117421 container died a5c7ab358310d3faeb29385c022f9421a5cbb0e3b5cbd4650ff0ad83af9445c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:50:00 np0005486808 systemd[1]: libpod-a5c7ab358310d3faeb29385c022f9421a5cbb0e3b5cbd4650ff0ad83af9445c1.scope: Consumed 1.113s CPU time.
Oct 14 04:50:00 np0005486808 systemd[1]: var-lib-containers-storage-overlay-0c9a0a0a6ca381278cf5e2df751b19000d953a421d67cd4d085abc3e3094b94b-merged.mount: Deactivated successfully.
Oct 14 04:50:00 np0005486808 podman[267956]: 2025-10-14 08:50:00.913321154 +0000 UTC m=+1.578970105 container remove a5c7ab358310d3faeb29385c022f9421a5cbb0e3b5cbd4650ff0ad83af9445c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shannon, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 14 04:50:00 np0005486808 systemd[1]: libpod-conmon-a5c7ab358310d3faeb29385c022f9421a5cbb0e3b5cbd4650ff0ad83af9445c1.scope: Deactivated successfully.
Oct 14 04:50:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:50:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:50:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:50:01 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:50:01 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev fa635491-16c6-4b41-81c9-36246dbf82bb does not exist
Oct 14 04:50:01 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 3e069338-f45e-4a1f-a496-d25c65f49900 does not exist
Oct 14 04:50:01 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v924: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:50:01 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:50:02 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:50:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:50:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:50:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:50:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:50:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:50:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:50:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:50:03 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:50:05 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:50:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 04:50:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3621627060' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 04:50:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 04:50:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3621627060' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 04:50:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:50:07.007 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:50:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:50:07.009 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:50:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:50:07.009 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:50:07 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:50:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:50:07 np0005486808 nova_compute[259627]: 2025-10-14 08:50:07.974 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:50:07 np0005486808 nova_compute[259627]: 2025-10-14 08:50:07.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:50:07 np0005486808 nova_compute[259627]: 2025-10-14 08:50:07.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:50:07 np0005486808 nova_compute[259627]: 2025-10-14 08:50:07.977 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 04:50:08 np0005486808 nova_compute[259627]: 2025-10-14 08:50:08.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:50:08 np0005486808 nova_compute[259627]: 2025-10-14 08:50:08.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:50:08 np0005486808 nova_compute[259627]: 2025-10-14 08:50:08.980 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:50:09 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v928: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:50:11 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v929: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:50:11 np0005486808 nova_compute[259627]: 2025-10-14 08:50:11.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:50:11 np0005486808 nova_compute[259627]: 2025-10-14 08:50:11.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 04:50:11 np0005486808 nova_compute[259627]: 2025-10-14 08:50:11.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 04:50:11 np0005486808 nova_compute[259627]: 2025-10-14 08:50:11.995 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 14 04:50:11 np0005486808 nova_compute[259627]: 2025-10-14 08:50:11.996 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:50:12 np0005486808 nova_compute[259627]: 2025-10-14 08:50:12.026 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:50:12 np0005486808 nova_compute[259627]: 2025-10-14 08:50:12.026 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:50:12 np0005486808 nova_compute[259627]: 2025-10-14 08:50:12.027 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:50:12 np0005486808 nova_compute[259627]: 2025-10-14 08:50:12.027 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 04:50:12 np0005486808 nova_compute[259627]: 2025-10-14 08:50:12.028 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:50:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:50:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:50:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4094062094' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:50:12 np0005486808 nova_compute[259627]: 2025-10-14 08:50:12.491 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:50:12 np0005486808 nova_compute[259627]: 2025-10-14 08:50:12.651 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:50:12 np0005486808 nova_compute[259627]: 2025-10-14 08:50:12.652 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5112MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 04:50:12 np0005486808 nova_compute[259627]: 2025-10-14 08:50:12.652 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:50:12 np0005486808 nova_compute[259627]: 2025-10-14 08:50:12.652 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:50:12 np0005486808 nova_compute[259627]: 2025-10-14 08:50:12.712 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 04:50:12 np0005486808 nova_compute[259627]: 2025-10-14 08:50:12.713 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 04:50:12 np0005486808 nova_compute[259627]: 2025-10-14 08:50:12.729 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:50:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:50:13 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/981067037' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:50:13 np0005486808 nova_compute[259627]: 2025-10-14 08:50:13.229 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:50:13 np0005486808 nova_compute[259627]: 2025-10-14 08:50:13.238 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 04:50:13 np0005486808 nova_compute[259627]: 2025-10-14 08:50:13.263 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 04:50:13 np0005486808 nova_compute[259627]: 2025-10-14 08:50:13.266 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 04:50:13 np0005486808 nova_compute[259627]: 2025-10-14 08:50:13.267 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:50:13 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v930: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:50:15 np0005486808 nova_compute[259627]: 2025-10-14 08:50:15.250 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 04:50:15 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:50:16 np0005486808 podman[268112]: 2025-10-14 08:50:16.651818369 +0000 UTC m=+0.065589561 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 14 04:50:16 np0005486808 podman[268111]: 2025-10-14 08:50:16.654605597 +0000 UTC m=+0.068474622 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=multipathd)
Oct 14 04:50:17 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:50:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:50:19 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:50:21 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:50:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:50:23 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v935: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:50:25 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v936: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:50:27 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v937: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:50:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:50:27 np0005486808 podman[268150]: 2025-10-14 08:50:27.682904644 +0000 UTC m=+0.089906728 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, 
tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009)
Oct 14 04:50:27 np0005486808 podman[268149]: 2025-10-14 08:50:27.747890469 +0000 UTC m=+0.159192049 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 14 04:50:29 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:50:31 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:50:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:50:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_08:50:32
Oct 14 04:50:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 04:50:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 04:50:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['vms', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', 'default.rgw.control', 'images', 'default.rgw.meta']
Oct 14 04:50:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 04:50:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:50:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:50:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:50:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:50:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:50:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:50:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 04:50:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:50:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 04:50:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:50:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:50:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:50:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:50:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:50:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:50:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:50:33 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v940: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:50:35 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v941: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:50:37 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v942: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:50:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:50:39 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v943: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:50:41 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v944: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:50:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:50:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 04:50:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:50:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 04:50:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:50:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:50:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:50:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:50:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:50:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:50:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:50:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:50:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:50:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 04:50:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:50:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:50:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:50:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 04:50:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:50:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 04:50:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:50:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:50:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:50:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 04:50:43 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v945: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:50:45 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v946: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:50:47 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v947: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:50:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:50:47 np0005486808 podman[268192]: 2025-10-14 08:50:47.647542381 +0000 UTC m=+0.059652726 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid)
Oct 14 04:50:47 np0005486808 podman[268191]: 2025-10-14 08:50:47.655736782 +0000 UTC m=+0.075206387 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251009)
Oct 14 04:50:49 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:50:51 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v949: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:50:51 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Oct 14 04:50:51 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Oct 14 04:50:51 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Oct 14 04:50:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:50:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Oct 14 04:50:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Oct 14 04:50:52 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Oct 14 04:50:53 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v952: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:50:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Oct 14 04:50:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Oct 14 04:50:53 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Oct 14 04:50:55 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 13 MiB data, 156 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 2.1 MiB/s wr, 15 op/s
Oct 14 04:50:55 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Oct 14 04:50:55 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Oct 14 04:50:55 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Oct 14 04:50:57 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 305 active+clean; 41 MiB data, 185 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 7.0 MiB/s wr, 61 op/s
Oct 14 04:50:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:50:58 np0005486808 podman[268232]: 2025-10-14 08:50:58.652397891 +0000 UTC m=+0.059409340 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Oct 14 04:50:58 np0005486808 podman[268231]: 2025-10-14 08:50:58.690108516 +0000 UTC m=+0.106615878 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 14 04:50:59 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 305 active+clean; 41 MiB data, 185 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 6.0 MiB/s wr, 52 op/s
Oct 14 04:51:01 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v958: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Oct 14 04:51:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:51:02 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:51:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 04:51:02 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:51:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 04:51:02 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:51:02 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev d9a611fb-43f0-4bd3-b914-e781f1cfa66c does not exist
Oct 14 04:51:02 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 7afd5d98-3ad2-4c63-99f2-0bc7d6e9a70d does not exist
Oct 14 04:51:02 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 64b0b4dd-562f-41b3-88e5-e5d9ea98c534 does not exist
Oct 14 04:51:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 04:51:02 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 04:51:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 04:51:02 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:51:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:51:02 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:51:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:51:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Oct 14 04:51:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Oct 14 04:51:02 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Oct 14 04:51:02 np0005486808 podman[268550]: 2025-10-14 08:51:02.620339063 +0000 UTC m=+0.036652491 container create 3a4eb70e465d0ef23fd7258e8b32661cc67b07c7272b2660dae3a69dbf39a1b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_tharp, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 14 04:51:02 np0005486808 systemd[1]: Started libpod-conmon-3a4eb70e465d0ef23fd7258e8b32661cc67b07c7272b2660dae3a69dbf39a1b1.scope.
Oct 14 04:51:02 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:51:02 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:51:02 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:51:02 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:51:02 np0005486808 podman[268550]: 2025-10-14 08:51:02.693540509 +0000 UTC m=+0.109853957 container init 3a4eb70e465d0ef23fd7258e8b32661cc67b07c7272b2660dae3a69dbf39a1b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_tharp, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:51:02 np0005486808 podman[268550]: 2025-10-14 08:51:02.699806603 +0000 UTC m=+0.116120031 container start 3a4eb70e465d0ef23fd7258e8b32661cc67b07c7272b2660dae3a69dbf39a1b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_tharp, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:51:02 np0005486808 podman[268550]: 2025-10-14 08:51:02.603988621 +0000 UTC m=+0.020302069 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:51:02 np0005486808 podman[268550]: 2025-10-14 08:51:02.703552855 +0000 UTC m=+0.119866283 container attach 3a4eb70e465d0ef23fd7258e8b32661cc67b07c7272b2660dae3a69dbf39a1b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_tharp, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:51:02 np0005486808 relaxed_tharp[268567]: 167 167
Oct 14 04:51:02 np0005486808 systemd[1]: libpod-3a4eb70e465d0ef23fd7258e8b32661cc67b07c7272b2660dae3a69dbf39a1b1.scope: Deactivated successfully.
Oct 14 04:51:02 np0005486808 conmon[268567]: conmon 3a4eb70e465d0ef23fd7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3a4eb70e465d0ef23fd7258e8b32661cc67b07c7272b2660dae3a69dbf39a1b1.scope/container/memory.events
Oct 14 04:51:02 np0005486808 podman[268550]: 2025-10-14 08:51:02.708068366 +0000 UTC m=+0.124381794 container died 3a4eb70e465d0ef23fd7258e8b32661cc67b07c7272b2660dae3a69dbf39a1b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_tharp, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:51:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:51:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:51:02 np0005486808 systemd[1]: var-lib-containers-storage-overlay-c15666d8ff351b54de53fe6c03c8c21a0485d025b77ea23872639692754d122a-merged.mount: Deactivated successfully.
Oct 14 04:51:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:51:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:51:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:51:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:51:02 np0005486808 podman[268550]: 2025-10-14 08:51:02.749291748 +0000 UTC m=+0.165605176 container remove 3a4eb70e465d0ef23fd7258e8b32661cc67b07c7272b2660dae3a69dbf39a1b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_tharp, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 14 04:51:02 np0005486808 systemd[1]: libpod-conmon-3a4eb70e465d0ef23fd7258e8b32661cc67b07c7272b2660dae3a69dbf39a1b1.scope: Deactivated successfully.
Oct 14 04:51:02 np0005486808 podman[268589]: 2025-10-14 08:51:02.922176941 +0000 UTC m=+0.035834000 container create d0df100135a619b4a35f0578828a05fd58a5e64dce231fa5c777a61164e73586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bohr, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:51:02 np0005486808 systemd[1]: Started libpod-conmon-d0df100135a619b4a35f0578828a05fd58a5e64dce231fa5c777a61164e73586.scope.
Oct 14 04:51:02 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:51:02 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d08196473275e70951d193b6229b5a0019f99b535834624679abcba1417ef24/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:51:02 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d08196473275e70951d193b6229b5a0019f99b535834624679abcba1417ef24/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:51:02 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d08196473275e70951d193b6229b5a0019f99b535834624679abcba1417ef24/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:51:02 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d08196473275e70951d193b6229b5a0019f99b535834624679abcba1417ef24/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:51:02 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d08196473275e70951d193b6229b5a0019f99b535834624679abcba1417ef24/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:51:03 np0005486808 podman[268589]: 2025-10-14 08:51:02.907361787 +0000 UTC m=+0.021018866 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:51:03 np0005486808 podman[268589]: 2025-10-14 08:51:03.004672956 +0000 UTC m=+0.118330045 container init d0df100135a619b4a35f0578828a05fd58a5e64dce231fa5c777a61164e73586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 14 04:51:03 np0005486808 podman[268589]: 2025-10-14 08:51:03.0162547 +0000 UTC m=+0.129911769 container start d0df100135a619b4a35f0578828a05fd58a5e64dce231fa5c777a61164e73586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bohr, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:51:03 np0005486808 podman[268589]: 2025-10-14 08:51:03.019319106 +0000 UTC m=+0.132976185 container attach d0df100135a619b4a35f0578828a05fd58a5e64dce231fa5c777a61164e73586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:51:03 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 3.6 MiB/s wr, 35 op/s
Oct 14 04:51:03 np0005486808 festive_bohr[268606]: --> passed data devices: 0 physical, 3 LVM
Oct 14 04:51:03 np0005486808 festive_bohr[268606]: --> relative data size: 1.0
Oct 14 04:51:03 np0005486808 festive_bohr[268606]: --> All data devices are unavailable
Oct 14 04:51:04 np0005486808 systemd[1]: libpod-d0df100135a619b4a35f0578828a05fd58a5e64dce231fa5c777a61164e73586.scope: Deactivated successfully.
Oct 14 04:51:04 np0005486808 podman[268589]: 2025-10-14 08:51:04.036738667 +0000 UTC m=+1.150395726 container died d0df100135a619b4a35f0578828a05fd58a5e64dce231fa5c777a61164e73586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 14 04:51:04 np0005486808 systemd[1]: var-lib-containers-storage-overlay-5d08196473275e70951d193b6229b5a0019f99b535834624679abcba1417ef24-merged.mount: Deactivated successfully.
Oct 14 04:51:04 np0005486808 podman[268589]: 2025-10-14 08:51:04.092869035 +0000 UTC m=+1.206526084 container remove d0df100135a619b4a35f0578828a05fd58a5e64dce231fa5c777a61164e73586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 14 04:51:04 np0005486808 systemd[1]: libpod-conmon-d0df100135a619b4a35f0578828a05fd58a5e64dce231fa5c777a61164e73586.scope: Deactivated successfully.
Oct 14 04:51:04 np0005486808 podman[268786]: 2025-10-14 08:51:04.781283852 +0000 UTC m=+0.042174766 container create 71e2b21e5e24e65c39bb8f5ef120deabce0bda6f146ff6424ab121a7b58cd076 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 04:51:04 np0005486808 systemd[1]: Started libpod-conmon-71e2b21e5e24e65c39bb8f5ef120deabce0bda6f146ff6424ab121a7b58cd076.scope.
Oct 14 04:51:04 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:51:04 np0005486808 podman[268786]: 2025-10-14 08:51:04.851927766 +0000 UTC m=+0.112818720 container init 71e2b21e5e24e65c39bb8f5ef120deabce0bda6f146ff6424ab121a7b58cd076 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_pascal, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:51:04 np0005486808 podman[268786]: 2025-10-14 08:51:04.761591679 +0000 UTC m=+0.022482583 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:51:04 np0005486808 podman[268786]: 2025-10-14 08:51:04.857426741 +0000 UTC m=+0.118317655 container start 71e2b21e5e24e65c39bb8f5ef120deabce0bda6f146ff6424ab121a7b58cd076 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 14 04:51:04 np0005486808 podman[268786]: 2025-10-14 08:51:04.862960147 +0000 UTC m=+0.123851081 container attach 71e2b21e5e24e65c39bb8f5ef120deabce0bda6f146ff6424ab121a7b58cd076 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_pascal, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:51:04 np0005486808 tender_pascal[268803]: 167 167
Oct 14 04:51:04 np0005486808 systemd[1]: libpod-71e2b21e5e24e65c39bb8f5ef120deabce0bda6f146ff6424ab121a7b58cd076.scope: Deactivated successfully.
Oct 14 04:51:04 np0005486808 podman[268786]: 2025-10-14 08:51:04.864944805 +0000 UTC m=+0.125835709 container died 71e2b21e5e24e65c39bb8f5ef120deabce0bda6f146ff6424ab121a7b58cd076 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_pascal, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:51:04 np0005486808 systemd[1]: var-lib-containers-storage-overlay-f22f33eeda66c1184255c926d9835b4fbb773e869706a1d63579a3463b04fdbe-merged.mount: Deactivated successfully.
Oct 14 04:51:04 np0005486808 podman[268786]: 2025-10-14 08:51:04.903351078 +0000 UTC m=+0.164241982 container remove 71e2b21e5e24e65c39bb8f5ef120deabce0bda6f146ff6424ab121a7b58cd076 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_pascal, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 14 04:51:04 np0005486808 systemd[1]: libpod-conmon-71e2b21e5e24e65c39bb8f5ef120deabce0bda6f146ff6424ab121a7b58cd076.scope: Deactivated successfully.
Oct 14 04:51:05 np0005486808 podman[268827]: 2025-10-14 08:51:05.108507974 +0000 UTC m=+0.067122039 container create fcade87d49a788358d663661254212c9749c77eb7f75c4c64c6d33895fbe16fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hugle, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 14 04:51:05 np0005486808 systemd[1]: Started libpod-conmon-fcade87d49a788358d663661254212c9749c77eb7f75c4c64c6d33895fbe16fb.scope.
Oct 14 04:51:05 np0005486808 podman[268827]: 2025-10-14 08:51:05.07412564 +0000 UTC m=+0.032739735 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:51:05 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:51:05 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8428876f6652c9d63cb105591a4663406baf8af41adb6a1bd8a95a5541900421/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:51:05 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8428876f6652c9d63cb105591a4663406baf8af41adb6a1bd8a95a5541900421/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:51:05 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8428876f6652c9d63cb105591a4663406baf8af41adb6a1bd8a95a5541900421/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:51:05 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8428876f6652c9d63cb105591a4663406baf8af41adb6a1bd8a95a5541900421/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:51:05 np0005486808 podman[268827]: 2025-10-14 08:51:05.19432068 +0000 UTC m=+0.152934775 container init fcade87d49a788358d663661254212c9749c77eb7f75c4c64c6d33895fbe16fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hugle, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:51:05 np0005486808 podman[268827]: 2025-10-14 08:51:05.200073371 +0000 UTC m=+0.158687456 container start fcade87d49a788358d663661254212c9749c77eb7f75c4c64c6d33895fbe16fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hugle, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Oct 14 04:51:05 np0005486808 podman[268827]: 2025-10-14 08:51:05.203958866 +0000 UTC m=+0.162572961 container attach fcade87d49a788358d663661254212c9749c77eb7f75c4c64c6d33895fbe16fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 14 04:51:05 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 2.9 MiB/s wr, 30 op/s
Oct 14 04:51:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 04:51:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2994995167' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 04:51:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 04:51:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2994995167' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 04:51:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Oct 14 04:51:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Oct 14 04:51:05 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]: {
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:    "0": [
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:        {
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:            "devices": [
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:                "/dev/loop3"
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:            ],
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:            "lv_name": "ceph_lv0",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:            "lv_size": "21470642176",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:            "name": "ceph_lv0",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:            "tags": {
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:                "ceph.cluster_name": "ceph",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:                "ceph.crush_device_class": "",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:                "ceph.encrypted": "0",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:                "ceph.osd_id": "0",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:                "ceph.type": "block",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:                "ceph.vdo": "0"
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:            },
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:            "type": "block",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:            "vg_name": "ceph_vg0"
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:        }
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:    ],
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:    "1": [
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:        {
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:            "devices": [
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:                "/dev/loop4"
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:            ],
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:            "lv_name": "ceph_lv1",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:            "lv_size": "21470642176",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:            "name": "ceph_lv1",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:            "tags": {
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:                "ceph.cluster_name": "ceph",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:                "ceph.crush_device_class": "",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:                "ceph.encrypted": "0",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:                "ceph.osd_id": "1",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:                "ceph.type": "block",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:                "ceph.vdo": "0"
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:            },
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:            "type": "block",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:            "vg_name": "ceph_vg1"
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:        }
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:    ],
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:    "2": [
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:        {
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:            "devices": [
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:                "/dev/loop5"
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:            ],
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:            "lv_name": "ceph_lv2",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:            "lv_size": "21470642176",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:            "name": "ceph_lv2",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:            "tags": {
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:                "ceph.cluster_name": "ceph",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:                "ceph.crush_device_class": "",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:                "ceph.encrypted": "0",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:                "ceph.osd_id": "2",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:                "ceph.type": "block",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:                "ceph.vdo": "0"
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:            },
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:            "type": "block",
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:            "vg_name": "ceph_vg2"
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:        }
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]:    ]
Oct 14 04:51:05 np0005486808 hardcore_hugle[268844]: }
Oct 14 04:51:05 np0005486808 systemd[1]: libpod-fcade87d49a788358d663661254212c9749c77eb7f75c4c64c6d33895fbe16fb.scope: Deactivated successfully.
Oct 14 04:51:05 np0005486808 podman[268827]: 2025-10-14 08:51:05.930897309 +0000 UTC m=+0.889511414 container died fcade87d49a788358d663661254212c9749c77eb7f75c4c64c6d33895fbe16fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Oct 14 04:51:05 np0005486808 systemd[1]: var-lib-containers-storage-overlay-8428876f6652c9d63cb105591a4663406baf8af41adb6a1bd8a95a5541900421-merged.mount: Deactivated successfully.
Oct 14 04:51:05 np0005486808 podman[268827]: 2025-10-14 08:51:05.9952949 +0000 UTC m=+0.953908965 container remove fcade87d49a788358d663661254212c9749c77eb7f75c4c64c6d33895fbe16fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:51:06 np0005486808 systemd[1]: libpod-conmon-fcade87d49a788358d663661254212c9749c77eb7f75c4c64c6d33895fbe16fb.scope: Deactivated successfully.
Oct 14 04:51:06 np0005486808 podman[269010]: 2025-10-14 08:51:06.637631556 +0000 UTC m=+0.037107372 container create a8424dcd9a571392729a9da4390024cbb03ef86e42a315faf3caad93f513db2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_elbakyan, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:51:06 np0005486808 systemd[1]: Started libpod-conmon-a8424dcd9a571392729a9da4390024cbb03ef86e42a315faf3caad93f513db2c.scope.
Oct 14 04:51:06 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:51:06 np0005486808 podman[269010]: 2025-10-14 08:51:06.713316904 +0000 UTC m=+0.112792740 container init a8424dcd9a571392729a9da4390024cbb03ef86e42a315faf3caad93f513db2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_elbakyan, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:51:06 np0005486808 podman[269010]: 2025-10-14 08:51:06.620858084 +0000 UTC m=+0.020333910 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:51:06 np0005486808 podman[269010]: 2025-10-14 08:51:06.720313135 +0000 UTC m=+0.119788951 container start a8424dcd9a571392729a9da4390024cbb03ef86e42a315faf3caad93f513db2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_elbakyan, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:51:06 np0005486808 podman[269010]: 2025-10-14 08:51:06.723556305 +0000 UTC m=+0.123032151 container attach a8424dcd9a571392729a9da4390024cbb03ef86e42a315faf3caad93f513db2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_elbakyan, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:51:06 np0005486808 hopeful_elbakyan[269027]: 167 167
Oct 14 04:51:06 np0005486808 systemd[1]: libpod-a8424dcd9a571392729a9da4390024cbb03ef86e42a315faf3caad93f513db2c.scope: Deactivated successfully.
Oct 14 04:51:06 np0005486808 conmon[269027]: conmon a8424dcd9a571392729a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a8424dcd9a571392729a9da4390024cbb03ef86e42a315faf3caad93f513db2c.scope/container/memory.events
Oct 14 04:51:06 np0005486808 podman[269010]: 2025-10-14 08:51:06.726973969 +0000 UTC m=+0.126449785 container died a8424dcd9a571392729a9da4390024cbb03ef86e42a315faf3caad93f513db2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 04:51:06 np0005486808 systemd[1]: var-lib-containers-storage-overlay-0bcaa7efbe3f9ae7dd6a05d49d814d420d564f2a5e890eb9dc78ebe3919f0987-merged.mount: Deactivated successfully.
Oct 14 04:51:06 np0005486808 podman[269010]: 2025-10-14 08:51:06.768418286 +0000 UTC m=+0.167894122 container remove a8424dcd9a571392729a9da4390024cbb03ef86e42a315faf3caad93f513db2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_elbakyan, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 14 04:51:06 np0005486808 systemd[1]: libpod-conmon-a8424dcd9a571392729a9da4390024cbb03ef86e42a315faf3caad93f513db2c.scope: Deactivated successfully.
Oct 14 04:51:06 np0005486808 podman[269051]: 2025-10-14 08:51:06.964590191 +0000 UTC m=+0.057979804 container create a5e505dba36c20b57ec8525f9cb3077a225e7c8d379ec274b3bfa10ecfe17ecc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:51:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:51:07.008 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:51:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:51:07.009 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:51:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:51:07.009 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:51:07 np0005486808 systemd[1]: Started libpod-conmon-a5e505dba36c20b57ec8525f9cb3077a225e7c8d379ec274b3bfa10ecfe17ecc.scope.
Oct 14 04:51:07 np0005486808 podman[269051]: 2025-10-14 08:51:06.93928758 +0000 UTC m=+0.032677223 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:51:07 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:51:07 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/842fb8866bc8083711ae3760fdbd0d87048bc7b7490968314c8d5a66690541e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:51:07 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/842fb8866bc8083711ae3760fdbd0d87048bc7b7490968314c8d5a66690541e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:51:07 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/842fb8866bc8083711ae3760fdbd0d87048bc7b7490968314c8d5a66690541e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:51:07 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/842fb8866bc8083711ae3760fdbd0d87048bc7b7490968314c8d5a66690541e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:51:07 np0005486808 podman[269051]: 2025-10-14 08:51:07.071409583 +0000 UTC m=+0.164799206 container init a5e505dba36c20b57ec8525f9cb3077a225e7c8d379ec274b3bfa10ecfe17ecc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wright, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:51:07 np0005486808 podman[269051]: 2025-10-14 08:51:07.087890787 +0000 UTC m=+0.181280360 container start a5e505dba36c20b57ec8525f9cb3077a225e7c8d379ec274b3bfa10ecfe17ecc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wright, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 14 04:51:07 np0005486808 podman[269051]: 2025-10-14 08:51:07.09409216 +0000 UTC m=+0.187481723 container attach a5e505dba36c20b57ec8525f9cb3077a225e7c8d379ec274b3bfa10ecfe17ecc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:51:07 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.4 KiB/s wr, 14 op/s
Oct 14 04:51:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:51:07 np0005486808 nova_compute[259627]: 2025-10-14 08:51:07.972 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:51:07 np0005486808 nova_compute[259627]: 2025-10-14 08:51:07.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:51:07 np0005486808 nova_compute[259627]: 2025-10-14 08:51:07.976 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 04:51:08 np0005486808 focused_wright[269067]: {
Oct 14 04:51:08 np0005486808 focused_wright[269067]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 04:51:08 np0005486808 focused_wright[269067]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:51:08 np0005486808 focused_wright[269067]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 04:51:08 np0005486808 focused_wright[269067]:        "osd_id": 2,
Oct 14 04:51:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 04:51:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:51:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 04:51:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:51:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:51:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:51:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:51:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:51:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:51:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:51:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Oct 14 04:51:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:51:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 04:51:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:51:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:51:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:51:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 04:51:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:51:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 04:51:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:51:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:51:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:51:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 04:51:43 np0005486808 rsyslogd[1002]: imjournal: 189 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct 14 04:51:43 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v984: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:51:45 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v985: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:51:46 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 04:51:46 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 4635 writes, 20K keys, 4635 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 4635 writes, 4635 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1345 writes, 6083 keys, 1345 commit groups, 1.0 writes per commit group, ingest: 8.71 MB, 0.01 MB/s#012Interval WAL: 1345 writes, 1345 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    122.2      0.20              0.09        12    0.017       0      0       0.0       0.0#012  L6      1/0    7.19 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.1    194.8    159.2      0.48              0.27        11    0.044     47K   5777       0.0       0.0#012 Sum      1/0    7.19 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.1    137.8    148.4      0.68              0.36        23    0.030     47K   5777       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.2    145.0    146.1      0.31              0.16        10    0.031     23K   2577       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    194.8    159.2      0.48              0.27        11    0.044     47K   5777       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    126.5      0.19              0.09        11    0.017       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      7.3      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.0 total, 600.0 interval#012Flush(GB): cumulative 0.024, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.10 GB write, 0.06 MB/s write, 0.09 GB read, 0.05 MB/s read, 0.7 seconds#012Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5646f3b2b1f0#2 capacity: 308.00 MB usage: 8.66 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000112 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(560,8.26 MB,2.68231%) FilterBlock(24,140.73 KB,0.044622%) IndexBlock(24,266.19 KB,0.0843989%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct 14 04:51:47 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v986: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:51:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:51:49 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v987: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:51:49 np0005486808 podman[269290]: 2025-10-14 08:51:49.674864053 +0000 UTC m=+0.082460866 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 04:51:49 np0005486808 podman[269291]: 2025-10-14 08:51:49.677295993 +0000 UTC m=+0.080497388 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 04:51:51 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:51:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:51:53 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:51:55 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v990: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:51:57 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:51:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:51:59 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v992: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:52:00 np0005486808 podman[269331]: 2025-10-14 08:52:00.678739663 +0000 UTC m=+0.078943668 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 14 04:52:00 np0005486808 podman[269332]: 2025-10-14 08:52:00.678739583 +0000 UTC m=+0.072961062 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Oct 14 04:52:01 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v993: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:52:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:52:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:52:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:52:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:52:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:52:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:52:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:52:03 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v994: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:52:05 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:52:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 04:52:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3025110651' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 04:52:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 04:52:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3025110651' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 04:52:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:52:07.009 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:52:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:52:07.010 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:52:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:52:07.010 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:52:07 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:52:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:52:08 np0005486808 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 04:52:08 np0005486808 nova_compute[259627]: 2025-10-14 08:52:08.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:52:08 np0005486808 nova_compute[259627]: 2025-10-14 08:52:08.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:52:08 np0005486808 nova_compute[259627]: 2025-10-14 08:52:08.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 04:52:09 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:52:09 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:52:09 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 04:52:09 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:52:09 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 04:52:09 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:52:09 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 1c274801-dbdd-4eb9-840b-b3beebf48130 does not exist
Oct 14 04:52:09 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 3a7efc4c-312d-4248-98fe-7ac402ffea62 does not exist
Oct 14 04:52:09 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 068fb985-434c-4df6-806e-9d4a62a820ac does not exist
Oct 14 04:52:09 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 04:52:09 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 04:52:09 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 04:52:09 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:52:09 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:52:09 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:52:09 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v997: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:52:09 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:52:09 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:52:09 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:52:09 np0005486808 podman[269649]: 2025-10-14 08:52:09.808759586 +0000 UTC m=+0.057088172 container create aae7b166092ed5e0ffa72228b101c293f4c2f0d5b70493943ac6a4fc2e73cb5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_pascal, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 14 04:52:09 np0005486808 systemd[1]: Started libpod-conmon-aae7b166092ed5e0ffa72228b101c293f4c2f0d5b70493943ac6a4fc2e73cb5a.scope.
Oct 14 04:52:09 np0005486808 podman[269649]: 2025-10-14 08:52:09.788708344 +0000 UTC m=+0.037036910 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:52:09 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:52:09 np0005486808 podman[269649]: 2025-10-14 08:52:09.917916315 +0000 UTC m=+0.166244931 container init aae7b166092ed5e0ffa72228b101c293f4c2f0d5b70493943ac6a4fc2e73cb5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_pascal, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Oct 14 04:52:09 np0005486808 podman[269649]: 2025-10-14 08:52:09.928868054 +0000 UTC m=+0.177196600 container start aae7b166092ed5e0ffa72228b101c293f4c2f0d5b70493943ac6a4fc2e73cb5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 14 04:52:09 np0005486808 podman[269649]: 2025-10-14 08:52:09.932691038 +0000 UTC m=+0.181019634 container attach aae7b166092ed5e0ffa72228b101c293f4c2f0d5b70493943ac6a4fc2e73cb5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 14 04:52:09 np0005486808 tender_pascal[269666]: 167 167
Oct 14 04:52:09 np0005486808 systemd[1]: libpod-aae7b166092ed5e0ffa72228b101c293f4c2f0d5b70493943ac6a4fc2e73cb5a.scope: Deactivated successfully.
Oct 14 04:52:09 np0005486808 podman[269649]: 2025-10-14 08:52:09.938685285 +0000 UTC m=+0.187013871 container died aae7b166092ed5e0ffa72228b101c293f4c2f0d5b70493943ac6a4fc2e73cb5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_pascal, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:52:09 np0005486808 systemd[1]: var-lib-containers-storage-overlay-409aef0883151e5a6f3c5486ecc340c982fee4333f7a791b6e49bafbdf35d07f-merged.mount: Deactivated successfully.
Oct 14 04:52:09 np0005486808 nova_compute[259627]: 2025-10-14 08:52:09.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:52:09 np0005486808 nova_compute[259627]: 2025-10-14 08:52:09.981 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:52:09 np0005486808 podman[269649]: 2025-10-14 08:52:09.995051988 +0000 UTC m=+0.243380574 container remove aae7b166092ed5e0ffa72228b101c293f4c2f0d5b70493943ac6a4fc2e73cb5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:52:10 np0005486808 systemd[1]: libpod-conmon-aae7b166092ed5e0ffa72228b101c293f4c2f0d5b70493943ac6a4fc2e73cb5a.scope: Deactivated successfully.
Oct 14 04:52:10 np0005486808 podman[269690]: 2025-10-14 08:52:10.186962789 +0000 UTC m=+0.044806891 container create a468277ce60cee674fb33ef2803c50110f4ea57530685d43b03288ac478baac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wilson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 14 04:52:10 np0005486808 systemd[1]: Started libpod-conmon-a468277ce60cee674fb33ef2803c50110f4ea57530685d43b03288ac478baac4.scope.
Oct 14 04:52:10 np0005486808 podman[269690]: 2025-10-14 08:52:10.165579334 +0000 UTC m=+0.023423476 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:52:10 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:52:10 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21093fd9ae23e0b29271fd3435a6a5a831f5cfeb3e5ede9036e1924d438a8c34/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:52:10 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21093fd9ae23e0b29271fd3435a6a5a831f5cfeb3e5ede9036e1924d438a8c34/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:52:10 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21093fd9ae23e0b29271fd3435a6a5a831f5cfeb3e5ede9036e1924d438a8c34/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:52:10 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21093fd9ae23e0b29271fd3435a6a5a831f5cfeb3e5ede9036e1924d438a8c34/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:52:10 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21093fd9ae23e0b29271fd3435a6a5a831f5cfeb3e5ede9036e1924d438a8c34/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:52:10 np0005486808 podman[269690]: 2025-10-14 08:52:10.286644705 +0000 UTC m=+0.144488897 container init a468277ce60cee674fb33ef2803c50110f4ea57530685d43b03288ac478baac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wilson, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:52:10 np0005486808 podman[269690]: 2025-10-14 08:52:10.294552549 +0000 UTC m=+0.152396651 container start a468277ce60cee674fb33ef2803c50110f4ea57530685d43b03288ac478baac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wilson, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 14 04:52:10 np0005486808 podman[269690]: 2025-10-14 08:52:10.298037305 +0000 UTC m=+0.155881447 container attach a468277ce60cee674fb33ef2803c50110f4ea57530685d43b03288ac478baac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wilson, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Oct 14 04:52:10 np0005486808 nova_compute[259627]: 2025-10-14 08:52:10.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:52:10 np0005486808 nova_compute[259627]: 2025-10-14 08:52:10.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:52:11 np0005486808 sleepy_wilson[269706]: --> passed data devices: 0 physical, 3 LVM
Oct 14 04:52:11 np0005486808 sleepy_wilson[269706]: --> relative data size: 1.0
Oct 14 04:52:11 np0005486808 sleepy_wilson[269706]: --> All data devices are unavailable
Oct 14 04:52:11 np0005486808 systemd[1]: libpod-a468277ce60cee674fb33ef2803c50110f4ea57530685d43b03288ac478baac4.scope: Deactivated successfully.
Oct 14 04:52:11 np0005486808 podman[269690]: 2025-10-14 08:52:11.374175669 +0000 UTC m=+1.232019811 container died a468277ce60cee674fb33ef2803c50110f4ea57530685d43b03288ac478baac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wilson, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 14 04:52:11 np0005486808 systemd[1]: libpod-a468277ce60cee674fb33ef2803c50110f4ea57530685d43b03288ac478baac4.scope: Consumed 1.036s CPU time.
Oct 14 04:52:11 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v998: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:52:11 np0005486808 systemd[1]: var-lib-containers-storage-overlay-21093fd9ae23e0b29271fd3435a6a5a831f5cfeb3e5ede9036e1924d438a8c34-merged.mount: Deactivated successfully.
Oct 14 04:52:11 np0005486808 podman[269690]: 2025-10-14 08:52:11.440846705 +0000 UTC m=+1.298690807 container remove a468277ce60cee674fb33ef2803c50110f4ea57530685d43b03288ac478baac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wilson, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:52:11 np0005486808 systemd[1]: libpod-conmon-a468277ce60cee674fb33ef2803c50110f4ea57530685d43b03288ac478baac4.scope: Deactivated successfully.
Oct 14 04:52:11 np0005486808 nova_compute[259627]: 2025-10-14 08:52:11.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:52:12 np0005486808 nova_compute[259627]: 2025-10-14 08:52:12.010 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:52:12 np0005486808 nova_compute[259627]: 2025-10-14 08:52:12.010 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:52:12 np0005486808 nova_compute[259627]: 2025-10-14 08:52:12.011 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:52:12 np0005486808 nova_compute[259627]: 2025-10-14 08:52:12.011 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 04:52:12 np0005486808 nova_compute[259627]: 2025-10-14 08:52:12.011 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:52:12 np0005486808 podman[269911]: 2025-10-14 08:52:12.290982661 +0000 UTC m=+0.068221616 container create 594baa311da22a02dcf3b87773e06fc7cfb42a8cfe8f63ae5b6ae5eeaeb41116 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 14 04:52:12 np0005486808 systemd[1]: Started libpod-conmon-594baa311da22a02dcf3b87773e06fc7cfb42a8cfe8f63ae5b6ae5eeaeb41116.scope.
Oct 14 04:52:12 np0005486808 podman[269911]: 2025-10-14 08:52:12.260425311 +0000 UTC m=+0.037664306 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:52:12 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:52:12 np0005486808 podman[269911]: 2025-10-14 08:52:12.399634258 +0000 UTC m=+0.176873273 container init 594baa311da22a02dcf3b87773e06fc7cfb42a8cfe8f63ae5b6ae5eeaeb41116 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sammet, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:52:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:52:12 np0005486808 podman[269911]: 2025-10-14 08:52:12.414527773 +0000 UTC m=+0.191766728 container start 594baa311da22a02dcf3b87773e06fc7cfb42a8cfe8f63ae5b6ae5eeaeb41116 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 04:52:12 np0005486808 podman[269911]: 2025-10-14 08:52:12.418905641 +0000 UTC m=+0.196144706 container attach 594baa311da22a02dcf3b87773e06fc7cfb42a8cfe8f63ae5b6ae5eeaeb41116 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sammet, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:52:12 np0005486808 wonderful_sammet[269927]: 167 167
Oct 14 04:52:12 np0005486808 systemd[1]: libpod-594baa311da22a02dcf3b87773e06fc7cfb42a8cfe8f63ae5b6ae5eeaeb41116.scope: Deactivated successfully.
Oct 14 04:52:12 np0005486808 podman[269911]: 2025-10-14 08:52:12.42376709 +0000 UTC m=+0.201006045 container died 594baa311da22a02dcf3b87773e06fc7cfb42a8cfe8f63ae5b6ae5eeaeb41116 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 14 04:52:12 np0005486808 systemd[1]: var-lib-containers-storage-overlay-0ca96595b1fcefeddab3765e87cc95f98c7815f4c1c349d5a2e18e97c7dafb89-merged.mount: Deactivated successfully.
Oct 14 04:52:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:52:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2708719720' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:52:12 np0005486808 podman[269911]: 2025-10-14 08:52:12.47020468 +0000 UTC m=+0.247443595 container remove 594baa311da22a02dcf3b87773e06fc7cfb42a8cfe8f63ae5b6ae5eeaeb41116 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sammet, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:52:12 np0005486808 nova_compute[259627]: 2025-10-14 08:52:12.483 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:52:12 np0005486808 systemd[1]: libpod-conmon-594baa311da22a02dcf3b87773e06fc7cfb42a8cfe8f63ae5b6ae5eeaeb41116.scope: Deactivated successfully.
Oct 14 04:52:12 np0005486808 nova_compute[259627]: 2025-10-14 08:52:12.661 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:52:12 np0005486808 nova_compute[259627]: 2025-10-14 08:52:12.663 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5113MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 04:52:12 np0005486808 nova_compute[259627]: 2025-10-14 08:52:12.663 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:52:12 np0005486808 nova_compute[259627]: 2025-10-14 08:52:12.664 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:52:12 np0005486808 podman[269953]: 2025-10-14 08:52:12.665498503 +0000 UTC m=+0.057591984 container create 05dfe6c5bdf0e13fb416a137b9a1cf1f58a4c3ea80b5012affacaeeeb86eada7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_varahamihira, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:52:12 np0005486808 systemd[1]: Started libpod-conmon-05dfe6c5bdf0e13fb416a137b9a1cf1f58a4c3ea80b5012affacaeeeb86eada7.scope.
Oct 14 04:52:12 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:52:12 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cf68233f8591ccbdc6dda09f411c0032b64a0a5cbed00ed02b906cf83dc6de1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:52:12 np0005486808 podman[269953]: 2025-10-14 08:52:12.631323454 +0000 UTC m=+0.023416915 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:52:12 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cf68233f8591ccbdc6dda09f411c0032b64a0a5cbed00ed02b906cf83dc6de1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:52:12 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cf68233f8591ccbdc6dda09f411c0032b64a0a5cbed00ed02b906cf83dc6de1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:52:12 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cf68233f8591ccbdc6dda09f411c0032b64a0a5cbed00ed02b906cf83dc6de1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:52:12 np0005486808 podman[269953]: 2025-10-14 08:52:12.740358771 +0000 UTC m=+0.132452222 container init 05dfe6c5bdf0e13fb416a137b9a1cf1f58a4c3ea80b5012affacaeeeb86eada7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Oct 14 04:52:12 np0005486808 nova_compute[259627]: 2025-10-14 08:52:12.751 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 04:52:12 np0005486808 nova_compute[259627]: 2025-10-14 08:52:12.751 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 04:52:12 np0005486808 podman[269953]: 2025-10-14 08:52:12.754670972 +0000 UTC m=+0.146764403 container start 05dfe6c5bdf0e13fb416a137b9a1cf1f58a4c3ea80b5012affacaeeeb86eada7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 14 04:52:12 np0005486808 podman[269953]: 2025-10-14 08:52:12.758746882 +0000 UTC m=+0.150840373 container attach 05dfe6c5bdf0e13fb416a137b9a1cf1f58a4c3ea80b5012affacaeeeb86eada7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_varahamihira, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 14 04:52:12 np0005486808 nova_compute[259627]: 2025-10-14 08:52:12.767 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:52:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:52:13 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/812353846' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:52:13 np0005486808 nova_compute[259627]: 2025-10-14 08:52:13.183 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:52:13 np0005486808 nova_compute[259627]: 2025-10-14 08:52:13.189 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:52:13 np0005486808 nova_compute[259627]: 2025-10-14 08:52:13.205 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:52:13 np0005486808 nova_compute[259627]: 2025-10-14 08:52:13.207 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 04:52:13 np0005486808 nova_compute[259627]: 2025-10-14 08:52:13.208 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.544s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:52:13 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v999: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]: {
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:    "0": [
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:        {
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:            "devices": [
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:                "/dev/loop3"
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:            ],
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:            "lv_name": "ceph_lv0",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:            "lv_size": "21470642176",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:            "name": "ceph_lv0",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:            "tags": {
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:                "ceph.cluster_name": "ceph",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:                "ceph.crush_device_class": "",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:                "ceph.encrypted": "0",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:                "ceph.osd_id": "0",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:                "ceph.type": "block",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:                "ceph.vdo": "0"
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:            },
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:            "type": "block",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:            "vg_name": "ceph_vg0"
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:        }
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:    ],
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:    "1": [
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:        {
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:            "devices": [
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:                "/dev/loop4"
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:            ],
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:            "lv_name": "ceph_lv1",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:            "lv_size": "21470642176",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:            "name": "ceph_lv1",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:            "tags": {
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:                "ceph.cluster_name": "ceph",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:                "ceph.crush_device_class": "",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:                "ceph.encrypted": "0",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:                "ceph.osd_id": "1",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:                "ceph.type": "block",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:                "ceph.vdo": "0"
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:            },
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:            "type": "block",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:            "vg_name": "ceph_vg1"
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:        }
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:    ],
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:    "2": [
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:        {
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:            "devices": [
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:                "/dev/loop5"
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:            ],
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:            "lv_name": "ceph_lv2",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:            "lv_size": "21470642176",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:            "name": "ceph_lv2",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:            "tags": {
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:                "ceph.cluster_name": "ceph",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:                "ceph.crush_device_class": "",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:                "ceph.encrypted": "0",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:                "ceph.osd_id": "2",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:                "ceph.type": "block",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:                "ceph.vdo": "0"
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:            },
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:            "type": "block",
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:            "vg_name": "ceph_vg2"
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:        }
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]:    ]
Oct 14 04:52:13 np0005486808 friendly_varahamihira[269969]: }
Oct 14 04:52:13 np0005486808 systemd[1]: libpod-05dfe6c5bdf0e13fb416a137b9a1cf1f58a4c3ea80b5012affacaeeeb86eada7.scope: Deactivated successfully.
Oct 14 04:52:13 np0005486808 podman[269953]: 2025-10-14 08:52:13.476258483 +0000 UTC m=+0.868351964 container died 05dfe6c5bdf0e13fb416a137b9a1cf1f58a4c3ea80b5012affacaeeeb86eada7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_varahamihira, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 14 04:52:13 np0005486808 systemd[1]: var-lib-containers-storage-overlay-5cf68233f8591ccbdc6dda09f411c0032b64a0a5cbed00ed02b906cf83dc6de1-merged.mount: Deactivated successfully.
Oct 14 04:52:13 np0005486808 podman[269953]: 2025-10-14 08:52:13.532927154 +0000 UTC m=+0.925020595 container remove 05dfe6c5bdf0e13fb416a137b9a1cf1f58a4c3ea80b5012affacaeeeb86eada7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_varahamihira, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 14 04:52:13 np0005486808 systemd[1]: libpod-conmon-05dfe6c5bdf0e13fb416a137b9a1cf1f58a4c3ea80b5012affacaeeeb86eada7.scope: Deactivated successfully.
Oct 14 04:52:14 np0005486808 podman[270152]: 2025-10-14 08:52:14.353590037 +0000 UTC m=+0.049886895 container create 5ac6339b2bd6d9b0566d745994875d9c00b0f72b2c82a7060673b994f4fa6684 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:52:14 np0005486808 systemd[1]: Started libpod-conmon-5ac6339b2bd6d9b0566d745994875d9c00b0f72b2c82a7060673b994f4fa6684.scope.
Oct 14 04:52:14 np0005486808 podman[270152]: 2025-10-14 08:52:14.331863064 +0000 UTC m=+0.028159922 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:52:14 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:52:14 np0005486808 podman[270152]: 2025-10-14 08:52:14.451437429 +0000 UTC m=+0.147734327 container init 5ac6339b2bd6d9b0566d745994875d9c00b0f72b2c82a7060673b994f4fa6684 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Oct 14 04:52:14 np0005486808 podman[270152]: 2025-10-14 08:52:14.462093631 +0000 UTC m=+0.158390449 container start 5ac6339b2bd6d9b0566d745994875d9c00b0f72b2c82a7060673b994f4fa6684 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:52:14 np0005486808 podman[270152]: 2025-10-14 08:52:14.466728794 +0000 UTC m=+0.163025702 container attach 5ac6339b2bd6d9b0566d745994875d9c00b0f72b2c82a7060673b994f4fa6684 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 14 04:52:14 np0005486808 kind_mirzakhani[270169]: 167 167
Oct 14 04:52:14 np0005486808 systemd[1]: libpod-5ac6339b2bd6d9b0566d745994875d9c00b0f72b2c82a7060673b994f4fa6684.scope: Deactivated successfully.
Oct 14 04:52:14 np0005486808 podman[270152]: 2025-10-14 08:52:14.46899005 +0000 UTC m=+0.165286868 container died 5ac6339b2bd6d9b0566d745994875d9c00b0f72b2c82a7060673b994f4fa6684 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mirzakhani, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:52:14 np0005486808 systemd[1]: var-lib-containers-storage-overlay-c329689a0a7a787ec3784b1c5a30cd52156c3259b726c465f1df95f15ecceecf-merged.mount: Deactivated successfully.
Oct 14 04:52:14 np0005486808 podman[270152]: 2025-10-14 08:52:14.5105292 +0000 UTC m=+0.206826018 container remove 5ac6339b2bd6d9b0566d745994875d9c00b0f72b2c82a7060673b994f4fa6684 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mirzakhani, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 14 04:52:14 np0005486808 systemd[1]: libpod-conmon-5ac6339b2bd6d9b0566d745994875d9c00b0f72b2c82a7060673b994f4fa6684.scope: Deactivated successfully.
Oct 14 04:52:14 np0005486808 podman[270192]: 2025-10-14 08:52:14.730517739 +0000 UTC m=+0.058284201 container create 5d4a0b511f1d14760c02d846fb87005f1f48f8c89270ba3f0509cb0856d45196 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bose, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:52:14 np0005486808 systemd[1]: Started libpod-conmon-5d4a0b511f1d14760c02d846fb87005f1f48f8c89270ba3f0509cb0856d45196.scope.
Oct 14 04:52:14 np0005486808 podman[270192]: 2025-10-14 08:52:14.714400344 +0000 UTC m=+0.042166836 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:52:14 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:52:14 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc6d32625c2766b74f1c4f660a494c4f6ee6cb6cbc29c012da8ac7976a2aed3c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:52:14 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc6d32625c2766b74f1c4f660a494c4f6ee6cb6cbc29c012da8ac7976a2aed3c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:52:14 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc6d32625c2766b74f1c4f660a494c4f6ee6cb6cbc29c012da8ac7976a2aed3c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:52:14 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc6d32625c2766b74f1c4f660a494c4f6ee6cb6cbc29c012da8ac7976a2aed3c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:52:14 np0005486808 podman[270192]: 2025-10-14 08:52:14.837379542 +0000 UTC m=+0.165146034 container init 5d4a0b511f1d14760c02d846fb87005f1f48f8c89270ba3f0509cb0856d45196 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 14 04:52:14 np0005486808 podman[270192]: 2025-10-14 08:52:14.851649172 +0000 UTC m=+0.179415644 container start 5d4a0b511f1d14760c02d846fb87005f1f48f8c89270ba3f0509cb0856d45196 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bose, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:52:14 np0005486808 podman[270192]: 2025-10-14 08:52:14.855112707 +0000 UTC m=+0.182879229 container attach 5d4a0b511f1d14760c02d846fb87005f1f48f8c89270ba3f0509cb0856d45196 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bose, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:52:15 np0005486808 nova_compute[259627]: 2025-10-14 08:52:15.209 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:52:15 np0005486808 nova_compute[259627]: 2025-10-14 08:52:15.209 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 04:52:15 np0005486808 nova_compute[259627]: 2025-10-14 08:52:15.210 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 04:52:15 np0005486808 nova_compute[259627]: 2025-10-14 08:52:15.225 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 14 04:52:15 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1000: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:52:15 np0005486808 unruffled_bose[270208]: {
Oct 14 04:52:15 np0005486808 unruffled_bose[270208]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 04:52:15 np0005486808 unruffled_bose[270208]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:52:15 np0005486808 unruffled_bose[270208]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 04:52:15 np0005486808 unruffled_bose[270208]:        "osd_id": 2,
Oct 14 04:52:15 np0005486808 unruffled_bose[270208]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:52:15 np0005486808 unruffled_bose[270208]:        "type": "bluestore"
Oct 14 04:52:15 np0005486808 unruffled_bose[270208]:    },
Oct 14 04:52:15 np0005486808 unruffled_bose[270208]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 04:52:15 np0005486808 unruffled_bose[270208]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:52:15 np0005486808 unruffled_bose[270208]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 04:52:15 np0005486808 unruffled_bose[270208]:        "osd_id": 1,
Oct 14 04:52:15 np0005486808 unruffled_bose[270208]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:52:15 np0005486808 unruffled_bose[270208]:        "type": "bluestore"
Oct 14 04:52:15 np0005486808 unruffled_bose[270208]:    },
Oct 14 04:52:15 np0005486808 unruffled_bose[270208]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 04:52:15 np0005486808 unruffled_bose[270208]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:52:15 np0005486808 unruffled_bose[270208]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 04:52:15 np0005486808 unruffled_bose[270208]:        "osd_id": 0,
Oct 14 04:52:15 np0005486808 unruffled_bose[270208]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:52:15 np0005486808 unruffled_bose[270208]:        "type": "bluestore"
Oct 14 04:52:15 np0005486808 unruffled_bose[270208]:    }
Oct 14 04:52:15 np0005486808 unruffled_bose[270208]: }
Oct 14 04:52:15 np0005486808 systemd[1]: libpod-5d4a0b511f1d14760c02d846fb87005f1f48f8c89270ba3f0509cb0856d45196.scope: Deactivated successfully.
Oct 14 04:52:15 np0005486808 podman[270192]: 2025-10-14 08:52:15.964344372 +0000 UTC m=+1.292110864 container died 5d4a0b511f1d14760c02d846fb87005f1f48f8c89270ba3f0509cb0856d45196 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:52:15 np0005486808 systemd[1]: libpod-5d4a0b511f1d14760c02d846fb87005f1f48f8c89270ba3f0509cb0856d45196.scope: Consumed 1.120s CPU time.
Oct 14 04:52:15 np0005486808 nova_compute[259627]: 2025-10-14 08:52:15.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:52:16 np0005486808 systemd[1]: var-lib-containers-storage-overlay-dc6d32625c2766b74f1c4f660a494c4f6ee6cb6cbc29c012da8ac7976a2aed3c-merged.mount: Deactivated successfully.
Oct 14 04:52:16 np0005486808 podman[270192]: 2025-10-14 08:52:16.029603694 +0000 UTC m=+1.357370166 container remove 5d4a0b511f1d14760c02d846fb87005f1f48f8c89270ba3f0509cb0856d45196 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 14 04:52:16 np0005486808 systemd[1]: libpod-conmon-5d4a0b511f1d14760c02d846fb87005f1f48f8c89270ba3f0509cb0856d45196.scope: Deactivated successfully.
Oct 14 04:52:16 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:52:16 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:52:16 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:52:16 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:52:16 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev dacf5ff0-cfd0-47de-9b86-73616d47f846 does not exist
Oct 14 04:52:16 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 4d4ecdeb-15e9-4cde-88cf-9a12de49aad1 does not exist
Oct 14 04:52:17 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:52:17 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:52:17 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:52:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:52:19 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:52:20 np0005486808 podman[270305]: 2025-10-14 08:52:20.668667428 +0000 UTC m=+0.076887388 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251009, container_name=multipathd)
Oct 14 04:52:20 np0005486808 podman[270306]: 2025-10-14 08:52:20.69400101 +0000 UTC m=+0.102226280 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 14 04:52:21 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1003: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:52:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:52:23 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:52:25 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1005: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:52:27 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:52:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:52:29 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1007: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:52:31 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:52:31 np0005486808 podman[270347]: 2025-10-14 08:52:31.729558312 +0000 UTC m=+0.095995058 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 14 04:52:31 np0005486808 podman[270346]: 2025-10-14 08:52:31.746281635 +0000 UTC m=+0.116210577 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:52:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:52:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_08:52:32
Oct 14 04:52:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 04:52:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 04:52:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'default.rgw.control', '.mgr', 'vms', 'volumes', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta']
Oct 14 04:52:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 04:52:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:52:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:52:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:52:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:52:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:52:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:52:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 04:52:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:52:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 04:52:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:52:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:52:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:52:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:52:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:52:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:52:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:52:33 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1009: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:52:33 np0005486808 ceph-mgr[74543]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3625056923
Oct 14 04:52:35 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:52:37 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:52:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:52:39 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1012: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:52:41 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:52:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:52:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 04:52:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:52:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 04:52:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:52:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:52:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:52:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:52:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:52:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:52:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:52:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Oct 14 04:52:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:52:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 04:52:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:52:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:52:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:52:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 04:52:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:52:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 04:52:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:52:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:52:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:52:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 04:52:43 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:52:45 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:52:47 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:52:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:52:49 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:52:51 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:52:51 np0005486808 podman[270388]: 2025-10-14 08:52:51.65913139 +0000 UTC m=+0.072298944 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009)
Oct 14 04:52:51 np0005486808 podman[270389]: 2025-10-14 08:52:51.668126032 +0000 UTC m=+0.075903823 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid)
Oct 14 04:52:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:52:53 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:52:55 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:52:57 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:52:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:52:59 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:53:01 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:53:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:53:02 np0005486808 podman[270430]: 2025-10-14 08:53:02.643322554 +0000 UTC m=+0.051382178 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 14 04:53:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:53:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:53:02 np0005486808 podman[270429]: 2025-10-14 08:53:02.734887843 +0000 UTC m=+0.147783596 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 04:53:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:53:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:53:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:53:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:53:03 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:53:04 np0005486808 nova_compute[259627]: 2025-10-14 08:53:04.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:53:05 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1025: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:53:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 04:53:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1864568504' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 04:53:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 04:53:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1864568504' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 04:53:05 np0005486808 nova_compute[259627]: 2025-10-14 08:53:05.992 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:53:05 np0005486808 nova_compute[259627]: 2025-10-14 08:53:05.993 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 14 04:53:06 np0005486808 nova_compute[259627]: 2025-10-14 08:53:06.022 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 14 04:53:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:53:07.011 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:53:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:53:07.012 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:53:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:53:07.012 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:53:07 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:53:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:53:07 np0005486808 nova_compute[259627]: 2025-10-14 08:53:07.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:53:07 np0005486808 nova_compute[259627]: 2025-10-14 08:53:07.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 14 04:53:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Oct 14 04:53:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Oct 14 04:53:08 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Oct 14 04:53:09 np0005486808 nova_compute[259627]: 2025-10-14 08:53:08.999 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:53:09 np0005486808 nova_compute[259627]: 2025-10-14 08:53:08.999 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 04:53:09 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:53:09 np0005486808 nova_compute[259627]: 2025-10-14 08:53:09.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:53:09 np0005486808 nova_compute[259627]: 2025-10-14 08:53:09.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:53:10 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Oct 14 04:53:10 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Oct 14 04:53:10 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Oct 14 04:53:10 np0005486808 nova_compute[259627]: 2025-10-14 08:53:10.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:53:10 np0005486808 nova_compute[259627]: 2025-10-14 08:53:10.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:53:11 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1030: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 5.1 MiB/s wr, 42 op/s
Oct 14 04:53:11 np0005486808 nova_compute[259627]: 2025-10-14 08:53:11.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:53:11 np0005486808 nova_compute[259627]: 2025-10-14 08:53:11.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:53:12 np0005486808 nova_compute[259627]: 2025-10-14 08:53:12.008 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:53:12 np0005486808 nova_compute[259627]: 2025-10-14 08:53:12.008 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:53:12 np0005486808 nova_compute[259627]: 2025-10-14 08:53:12.009 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:53:12 np0005486808 nova_compute[259627]: 2025-10-14 08:53:12.009 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 04:53:12 np0005486808 nova_compute[259627]: 2025-10-14 08:53:12.010 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:53:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:53:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:53:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1633643948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:53:12 np0005486808 nova_compute[259627]: 2025-10-14 08:53:12.459 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:53:12 np0005486808 nova_compute[259627]: 2025-10-14 08:53:12.637 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:53:12 np0005486808 nova_compute[259627]: 2025-10-14 08:53:12.639 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5142MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 04:53:12 np0005486808 nova_compute[259627]: 2025-10-14 08:53:12.639 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:53:12 np0005486808 nova_compute[259627]: 2025-10-14 08:53:12.640 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:53:12 np0005486808 nova_compute[259627]: 2025-10-14 08:53:12.860 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 04:53:12 np0005486808 nova_compute[259627]: 2025-10-14 08:53:12.861 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 04:53:12 np0005486808 nova_compute[259627]: 2025-10-14 08:53:12.915 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing inventories for resource provider 92105e1d-1743-46e3-a494-858b4331398a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 14 04:53:13 np0005486808 nova_compute[259627]: 2025-10-14 08:53:13.000 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Updating ProviderTree inventory for provider 92105e1d-1743-46e3-a494-858b4331398a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 14 04:53:13 np0005486808 nova_compute[259627]: 2025-10-14 08:53:13.001 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Updating inventory in ProviderTree for provider 92105e1d-1743-46e3-a494-858b4331398a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 14 04:53:13 np0005486808 nova_compute[259627]: 2025-10-14 08:53:13.018 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing aggregate associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 14 04:53:13 np0005486808 nova_compute[259627]: 2025-10-14 08:53:13.044 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing trait associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, traits: COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_ACCELERATORS,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 14 04:53:13 np0005486808 nova_compute[259627]: 2025-10-14 08:53:13.062 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:53:13 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 5.1 MiB/s wr, 42 op/s
Oct 14 04:53:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:53:13 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2218190398' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:53:13 np0005486808 nova_compute[259627]: 2025-10-14 08:53:13.553 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:53:13 np0005486808 nova_compute[259627]: 2025-10-14 08:53:13.559 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:53:13 np0005486808 nova_compute[259627]: 2025-10-14 08:53:13.574 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:53:13 np0005486808 nova_compute[259627]: 2025-10-14 08:53:13.577 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 04:53:13 np0005486808 nova_compute[259627]: 2025-10-14 08:53:13.577 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.937s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:53:15 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Oct 14 04:53:16 np0005486808 nova_compute[259627]: 2025-10-14 08:53:16.573 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:53:16 np0005486808 nova_compute[259627]: 2025-10-14 08:53:16.596 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:53:16 np0005486808 nova_compute[259627]: 2025-10-14 08:53:16.596 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 04:53:16 np0005486808 nova_compute[259627]: 2025-10-14 08:53:16.597 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 04:53:16 np0005486808 nova_compute[259627]: 2025-10-14 08:53:16.619 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 14 04:53:16 np0005486808 nova_compute[259627]: 2025-10-14 08:53:16.620 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:53:17 np0005486808 nova_compute[259627]: 2025-10-14 08:53:17.135 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:53:17 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 4.6 MiB/s wr, 42 op/s
Oct 14 04:53:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:53:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:53:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:53:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:53:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:53:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:53:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:53:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 04:53:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:53:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 04:53:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:53:17 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 7eea08f5-8c31-4b65-a2bb-324b4e2fb3e4 does not exist
Oct 14 04:53:17 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 2a8bd657-90eb-4b7b-8891-0ba26cffa796 does not exist
Oct 14 04:53:17 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev e43b6c7d-75ae-4278-8f4b-237e9aa331bf does not exist
Oct 14 04:53:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 04:53:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 04:53:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 04:53:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:53:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:53:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:53:18 np0005486808 podman[270907]: 2025-10-14 08:53:18.477361487 +0000 UTC m=+0.044677783 container create 0ab2f9e7a67f5e3195418edba9e9182159869d4d1bde1802789a2bd9ef45070c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_clarke, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 14 04:53:18 np0005486808 systemd[1]: Started libpod-conmon-0ab2f9e7a67f5e3195418edba9e9182159869d4d1bde1802789a2bd9ef45070c.scope.
Oct 14 04:53:18 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:53:18 np0005486808 podman[270907]: 2025-10-14 08:53:18.457911927 +0000 UTC m=+0.025228263 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:53:18 np0005486808 podman[270907]: 2025-10-14 08:53:18.562857266 +0000 UTC m=+0.130173572 container init 0ab2f9e7a67f5e3195418edba9e9182159869d4d1bde1802789a2bd9ef45070c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_clarke, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 14 04:53:18 np0005486808 podman[270907]: 2025-10-14 08:53:18.570701059 +0000 UTC m=+0.138017345 container start 0ab2f9e7a67f5e3195418edba9e9182159869d4d1bde1802789a2bd9ef45070c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_clarke, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Oct 14 04:53:18 np0005486808 podman[270907]: 2025-10-14 08:53:18.57436017 +0000 UTC m=+0.141676476 container attach 0ab2f9e7a67f5e3195418edba9e9182159869d4d1bde1802789a2bd9ef45070c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_clarke, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 14 04:53:18 np0005486808 reverent_clarke[270923]: 167 167
Oct 14 04:53:18 np0005486808 systemd[1]: libpod-0ab2f9e7a67f5e3195418edba9e9182159869d4d1bde1802789a2bd9ef45070c.scope: Deactivated successfully.
Oct 14 04:53:18 np0005486808 conmon[270923]: conmon 0ab2f9e7a67f5e319541 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0ab2f9e7a67f5e3195418edba9e9182159869d4d1bde1802789a2bd9ef45070c.scope/container/memory.events
Oct 14 04:53:18 np0005486808 podman[270907]: 2025-10-14 08:53:18.577949918 +0000 UTC m=+0.145266234 container died 0ab2f9e7a67f5e3195418edba9e9182159869d4d1bde1802789a2bd9ef45070c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_clarke, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:53:18 np0005486808 systemd[1]: var-lib-containers-storage-overlay-d63a9cce15094f3ffbd1810c8bc38bc8152499d4ea538600ce8f8598ca607d91-merged.mount: Deactivated successfully.
Oct 14 04:53:18 np0005486808 podman[270907]: 2025-10-14 08:53:18.62588546 +0000 UTC m=+0.193201766 container remove 0ab2f9e7a67f5e3195418edba9e9182159869d4d1bde1802789a2bd9ef45070c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 14 04:53:18 np0005486808 systemd[1]: libpod-conmon-0ab2f9e7a67f5e3195418edba9e9182159869d4d1bde1802789a2bd9ef45070c.scope: Deactivated successfully.
Oct 14 04:53:18 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:53:18 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:53:18 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:53:18 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:53:18 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:53:18 np0005486808 podman[270947]: 2025-10-14 08:53:18.808596827 +0000 UTC m=+0.056086364 container create 0369fa301c2baa9fa314bd9a6e13d1b4300ac0fd68b1f42997030a438456efdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_rubin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:53:18 np0005486808 systemd[1]: Started libpod-conmon-0369fa301c2baa9fa314bd9a6e13d1b4300ac0fd68b1f42997030a438456efdc.scope.
Oct 14 04:53:18 np0005486808 podman[270947]: 2025-10-14 08:53:18.789775343 +0000 UTC m=+0.037264890 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:53:18 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:53:18 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/681aa9254bee7257c54a95e84530a04197114e5793af66d1d194cf538852bb10/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:53:18 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/681aa9254bee7257c54a95e84530a04197114e5793af66d1d194cf538852bb10/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:53:18 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/681aa9254bee7257c54a95e84530a04197114e5793af66d1d194cf538852bb10/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:53:18 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/681aa9254bee7257c54a95e84530a04197114e5793af66d1d194cf538852bb10/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:53:18 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/681aa9254bee7257c54a95e84530a04197114e5793af66d1d194cf538852bb10/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:53:18 np0005486808 podman[270947]: 2025-10-14 08:53:18.904475022 +0000 UTC m=+0.151964549 container init 0369fa301c2baa9fa314bd9a6e13d1b4300ac0fd68b1f42997030a438456efdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_rubin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:53:18 np0005486808 podman[270947]: 2025-10-14 08:53:18.919236786 +0000 UTC m=+0.166726313 container start 0369fa301c2baa9fa314bd9a6e13d1b4300ac0fd68b1f42997030a438456efdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_rubin, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:53:18 np0005486808 podman[270947]: 2025-10-14 08:53:18.922754062 +0000 UTC m=+0.170243579 container attach 0369fa301c2baa9fa314bd9a6e13d1b4300ac0fd68b1f42997030a438456efdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 14 04:53:19 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1034: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.1 MiB/s wr, 38 op/s
Oct 14 04:53:19 np0005486808 tender_rubin[270963]: --> passed data devices: 0 physical, 3 LVM
Oct 14 04:53:19 np0005486808 tender_rubin[270963]: --> relative data size: 1.0
Oct 14 04:53:19 np0005486808 tender_rubin[270963]: --> All data devices are unavailable
Oct 14 04:53:20 np0005486808 systemd[1]: libpod-0369fa301c2baa9fa314bd9a6e13d1b4300ac0fd68b1f42997030a438456efdc.scope: Deactivated successfully.
Oct 14 04:53:20 np0005486808 podman[270947]: 2025-10-14 08:53:20.014651602 +0000 UTC m=+1.262141169 container died 0369fa301c2baa9fa314bd9a6e13d1b4300ac0fd68b1f42997030a438456efdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_rubin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:53:20 np0005486808 systemd[1]: libpod-0369fa301c2baa9fa314bd9a6e13d1b4300ac0fd68b1f42997030a438456efdc.scope: Consumed 1.044s CPU time.
Oct 14 04:53:20 np0005486808 systemd[1]: var-lib-containers-storage-overlay-681aa9254bee7257c54a95e84530a04197114e5793af66d1d194cf538852bb10-merged.mount: Deactivated successfully.
Oct 14 04:53:20 np0005486808 podman[270947]: 2025-10-14 08:53:20.089348345 +0000 UTC m=+1.336837882 container remove 0369fa301c2baa9fa314bd9a6e13d1b4300ac0fd68b1f42997030a438456efdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_rubin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 14 04:53:20 np0005486808 systemd[1]: libpod-conmon-0369fa301c2baa9fa314bd9a6e13d1b4300ac0fd68b1f42997030a438456efdc.scope: Deactivated successfully.
Oct 14 04:53:20 np0005486808 podman[271143]: 2025-10-14 08:53:20.809299862 +0000 UTC m=+0.066895001 container create c204e9ae9addd8d4211a5200ac853a95fe24f80c5160036d26bdb7fafa873f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:53:20 np0005486808 systemd[1]: Started libpod-conmon-c204e9ae9addd8d4211a5200ac853a95fe24f80c5160036d26bdb7fafa873f09.scope.
Oct 14 04:53:20 np0005486808 podman[271143]: 2025-10-14 08:53:20.780702337 +0000 UTC m=+0.038297496 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:53:20 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:53:20 np0005486808 podman[271143]: 2025-10-14 08:53:20.907830372 +0000 UTC m=+0.165425561 container init c204e9ae9addd8d4211a5200ac853a95fe24f80c5160036d26bdb7fafa873f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 14 04:53:20 np0005486808 podman[271143]: 2025-10-14 08:53:20.91868443 +0000 UTC m=+0.176279559 container start c204e9ae9addd8d4211a5200ac853a95fe24f80c5160036d26bdb7fafa873f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_noyce, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Oct 14 04:53:20 np0005486808 podman[271143]: 2025-10-14 08:53:20.922305599 +0000 UTC m=+0.179900778 container attach c204e9ae9addd8d4211a5200ac853a95fe24f80c5160036d26bdb7fafa873f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_noyce, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:53:20 np0005486808 crazy_noyce[271159]: 167 167
Oct 14 04:53:20 np0005486808 systemd[1]: libpod-c204e9ae9addd8d4211a5200ac853a95fe24f80c5160036d26bdb7fafa873f09.scope: Deactivated successfully.
Oct 14 04:53:20 np0005486808 podman[271143]: 2025-10-14 08:53:20.925609681 +0000 UTC m=+0.183204800 container died c204e9ae9addd8d4211a5200ac853a95fe24f80c5160036d26bdb7fafa873f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_noyce, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:53:20 np0005486808 systemd[1]: var-lib-containers-storage-overlay-588c460178e3b04deace152e42c5e5a2cc2edcae8d8de0b700dd2da9dbf684de-merged.mount: Deactivated successfully.
Oct 14 04:53:20 np0005486808 podman[271143]: 2025-10-14 08:53:20.979065299 +0000 UTC m=+0.236660398 container remove c204e9ae9addd8d4211a5200ac853a95fe24f80c5160036d26bdb7fafa873f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:53:20 np0005486808 systemd[1]: libpod-conmon-c204e9ae9addd8d4211a5200ac853a95fe24f80c5160036d26bdb7fafa873f09.scope: Deactivated successfully.
Oct 14 04:53:21 np0005486808 podman[271185]: 2025-10-14 08:53:21.18058931 +0000 UTC m=+0.054800043 container create b42619031d0273028fdb6a9ee0f2c858b9bd77fafd431f9cd883faebc1d1c824 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 14 04:53:21 np0005486808 systemd[1]: Started libpod-conmon-b42619031d0273028fdb6a9ee0f2c858b9bd77fafd431f9cd883faebc1d1c824.scope.
Oct 14 04:53:21 np0005486808 podman[271185]: 2025-10-14 08:53:21.157693555 +0000 UTC m=+0.031904278 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:53:21 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:53:21 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57a79211c3bc7a95077d1e7417e7456b6320ffcd41fcbdfda57f9080950f934b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:53:21 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57a79211c3bc7a95077d1e7417e7456b6320ffcd41fcbdfda57f9080950f934b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:53:21 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57a79211c3bc7a95077d1e7417e7456b6320ffcd41fcbdfda57f9080950f934b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:53:21 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57a79211c3bc7a95077d1e7417e7456b6320ffcd41fcbdfda57f9080950f934b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:53:21 np0005486808 podman[271185]: 2025-10-14 08:53:21.290430999 +0000 UTC m=+0.164641722 container init b42619031d0273028fdb6a9ee0f2c858b9bd77fafd431f9cd883faebc1d1c824 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_clarke, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 14 04:53:21 np0005486808 podman[271185]: 2025-10-14 08:53:21.29696955 +0000 UTC m=+0.171180253 container start b42619031d0273028fdb6a9ee0f2c858b9bd77fafd431f9cd883faebc1d1c824 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_clarke, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 14 04:53:21 np0005486808 podman[271185]: 2025-10-14 08:53:21.301295087 +0000 UTC m=+0.175505820 container attach b42619031d0273028fdb6a9ee0f2c858b9bd77fafd431f9cd883faebc1d1c824 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_clarke, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 14 04:53:21 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 MiB/s wr, 24 op/s
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]: {
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:    "0": [
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:        {
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:            "devices": [
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:                "/dev/loop3"
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:            ],
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:            "lv_name": "ceph_lv0",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:            "lv_size": "21470642176",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:            "name": "ceph_lv0",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:            "tags": {
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:                "ceph.cluster_name": "ceph",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:                "ceph.crush_device_class": "",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:                "ceph.encrypted": "0",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:                "ceph.osd_id": "0",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:                "ceph.type": "block",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:                "ceph.vdo": "0"
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:            },
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:            "type": "block",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:            "vg_name": "ceph_vg0"
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:        }
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:    ],
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:    "1": [
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:        {
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:            "devices": [
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:                "/dev/loop4"
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:            ],
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:            "lv_name": "ceph_lv1",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:            "lv_size": "21470642176",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:            "name": "ceph_lv1",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:            "tags": {
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:                "ceph.cluster_name": "ceph",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:                "ceph.crush_device_class": "",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:                "ceph.encrypted": "0",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:                "ceph.osd_id": "1",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:                "ceph.type": "block",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:                "ceph.vdo": "0"
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:            },
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:            "type": "block",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:            "vg_name": "ceph_vg1"
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:        }
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:    ],
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:    "2": [
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:        {
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:            "devices": [
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:                "/dev/loop5"
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:            ],
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:            "lv_name": "ceph_lv2",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:            "lv_size": "21470642176",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:            "name": "ceph_lv2",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:            "tags": {
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:                "ceph.cluster_name": "ceph",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:                "ceph.crush_device_class": "",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:                "ceph.encrypted": "0",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:                "ceph.osd_id": "2",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:                "ceph.type": "block",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:                "ceph.vdo": "0"
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:            },
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:            "type": "block",
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:            "vg_name": "ceph_vg2"
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:        }
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]:    ]
Oct 14 04:53:22 np0005486808 romantic_clarke[271202]: }
Oct 14 04:53:22 np0005486808 systemd[1]: libpod-b42619031d0273028fdb6a9ee0f2c858b9bd77fafd431f9cd883faebc1d1c824.scope: Deactivated successfully.
Oct 14 04:53:22 np0005486808 podman[271185]: 2025-10-14 08:53:22.065082095 +0000 UTC m=+0.939292868 container died b42619031d0273028fdb6a9ee0f2c858b9bd77fafd431f9cd883faebc1d1c824 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_clarke, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:53:22 np0005486808 systemd[1]: var-lib-containers-storage-overlay-57a79211c3bc7a95077d1e7417e7456b6320ffcd41fcbdfda57f9080950f934b-merged.mount: Deactivated successfully.
Oct 14 04:53:22 np0005486808 podman[271185]: 2025-10-14 08:53:22.145342145 +0000 UTC m=+1.019552838 container remove b42619031d0273028fdb6a9ee0f2c858b9bd77fafd431f9cd883faebc1d1c824 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:53:22 np0005486808 systemd[1]: libpod-conmon-b42619031d0273028fdb6a9ee0f2c858b9bd77fafd431f9cd883faebc1d1c824.scope: Deactivated successfully.
Oct 14 04:53:22 np0005486808 podman[271222]: 2025-10-14 08:53:22.191539774 +0000 UTC m=+0.084922886 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 04:53:22 np0005486808 podman[271212]: 2025-10-14 08:53:22.19946665 +0000 UTC m=+0.088864243 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 14 04:53:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:53:22 np0005486808 podman[271403]: 2025-10-14 08:53:22.829368186 +0000 UTC m=+0.047156264 container create 420a9d419a3d1538a263b0b5a76e1abf8dfe4fcd733031f27896b686f1bbd490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_volhard, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 14 04:53:22 np0005486808 systemd[1]: Started libpod-conmon-420a9d419a3d1538a263b0b5a76e1abf8dfe4fcd733031f27896b686f1bbd490.scope.
Oct 14 04:53:22 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:53:22 np0005486808 podman[271403]: 2025-10-14 08:53:22.807935237 +0000 UTC m=+0.025723345 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:53:22 np0005486808 podman[271403]: 2025-10-14 08:53:22.916045714 +0000 UTC m=+0.133833812 container init 420a9d419a3d1538a263b0b5a76e1abf8dfe4fcd733031f27896b686f1bbd490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_volhard, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:53:22 np0005486808 podman[271403]: 2025-10-14 08:53:22.92318772 +0000 UTC m=+0.140975808 container start 420a9d419a3d1538a263b0b5a76e1abf8dfe4fcd733031f27896b686f1bbd490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_volhard, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:53:22 np0005486808 podman[271403]: 2025-10-14 08:53:22.927297781 +0000 UTC m=+0.145085859 container attach 420a9d419a3d1538a263b0b5a76e1abf8dfe4fcd733031f27896b686f1bbd490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:53:22 np0005486808 systemd[1]: libpod-420a9d419a3d1538a263b0b5a76e1abf8dfe4fcd733031f27896b686f1bbd490.scope: Deactivated successfully.
Oct 14 04:53:22 np0005486808 cool_volhard[271420]: 167 167
Oct 14 04:53:22 np0005486808 conmon[271420]: conmon 420a9d419a3d1538a263 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-420a9d419a3d1538a263b0b5a76e1abf8dfe4fcd733031f27896b686f1bbd490.scope/container/memory.events
Oct 14 04:53:22 np0005486808 podman[271403]: 2025-10-14 08:53:22.932218233 +0000 UTC m=+0.150006311 container died 420a9d419a3d1538a263b0b5a76e1abf8dfe4fcd733031f27896b686f1bbd490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 14 04:53:22 np0005486808 systemd[1]: var-lib-containers-storage-overlay-afe1858844f5d53133ffba805c59e947253236ffa9e0473d5e57d1be2b17c53f-merged.mount: Deactivated successfully.
Oct 14 04:53:22 np0005486808 podman[271403]: 2025-10-14 08:53:22.977465299 +0000 UTC m=+0.195253387 container remove 420a9d419a3d1538a263b0b5a76e1abf8dfe4fcd733031f27896b686f1bbd490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_volhard, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:53:22 np0005486808 systemd[1]: libpod-conmon-420a9d419a3d1538a263b0b5a76e1abf8dfe4fcd733031f27896b686f1bbd490.scope: Deactivated successfully.
Oct 14 04:53:23 np0005486808 podman[271446]: 2025-10-14 08:53:23.191052237 +0000 UTC m=+0.052707491 container create 94c085544d30b98f7f20fd9b60e9f107fd4dedcc57103217c8a7de4c813af377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 04:53:23 np0005486808 systemd[1]: Started libpod-conmon-94c085544d30b98f7f20fd9b60e9f107fd4dedcc57103217c8a7de4c813af377.scope.
Oct 14 04:53:23 np0005486808 podman[271446]: 2025-10-14 08:53:23.164951973 +0000 UTC m=+0.026607317 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:53:23 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:53:23 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0994fdfeeb8d5ee654a6055551b946e229266d599c6e6adfdcf045430de87c2d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:53:23 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0994fdfeeb8d5ee654a6055551b946e229266d599c6e6adfdcf045430de87c2d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:53:23 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0994fdfeeb8d5ee654a6055551b946e229266d599c6e6adfdcf045430de87c2d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:53:23 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0994fdfeeb8d5ee654a6055551b946e229266d599c6e6adfdcf045430de87c2d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:53:23 np0005486808 podman[271446]: 2025-10-14 08:53:23.297519703 +0000 UTC m=+0.159174977 container init 94c085544d30b98f7f20fd9b60e9f107fd4dedcc57103217c8a7de4c813af377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_neumann, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:53:23 np0005486808 podman[271446]: 2025-10-14 08:53:23.308660888 +0000 UTC m=+0.170316152 container start 94c085544d30b98f7f20fd9b60e9f107fd4dedcc57103217c8a7de4c813af377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:53:23 np0005486808 podman[271446]: 2025-10-14 08:53:23.312716478 +0000 UTC m=+0.174371782 container attach 94c085544d30b98f7f20fd9b60e9f107fd4dedcc57103217c8a7de4c813af377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_neumann, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:53:23 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1036: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 341 B/s wr, 3 op/s
Oct 14 04:53:24 np0005486808 youthful_neumann[271463]: {
Oct 14 04:53:24 np0005486808 youthful_neumann[271463]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 04:53:24 np0005486808 youthful_neumann[271463]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:53:24 np0005486808 youthful_neumann[271463]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 04:53:24 np0005486808 youthful_neumann[271463]:        "osd_id": 2,
Oct 14 04:53:24 np0005486808 youthful_neumann[271463]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:53:24 np0005486808 youthful_neumann[271463]:        "type": "bluestore"
Oct 14 04:53:24 np0005486808 youthful_neumann[271463]:    },
Oct 14 04:53:24 np0005486808 youthful_neumann[271463]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 04:53:24 np0005486808 youthful_neumann[271463]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:53:24 np0005486808 youthful_neumann[271463]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 04:53:24 np0005486808 youthful_neumann[271463]:        "osd_id": 1,
Oct 14 04:53:24 np0005486808 youthful_neumann[271463]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:53:24 np0005486808 youthful_neumann[271463]:        "type": "bluestore"
Oct 14 04:53:24 np0005486808 youthful_neumann[271463]:    },
Oct 14 04:53:24 np0005486808 youthful_neumann[271463]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 04:53:24 np0005486808 youthful_neumann[271463]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:53:24 np0005486808 youthful_neumann[271463]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 04:53:24 np0005486808 youthful_neumann[271463]:        "osd_id": 0,
Oct 14 04:53:24 np0005486808 youthful_neumann[271463]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:53:24 np0005486808 youthful_neumann[271463]:        "type": "bluestore"
Oct 14 04:53:24 np0005486808 youthful_neumann[271463]:    }
Oct 14 04:53:24 np0005486808 youthful_neumann[271463]: }
Oct 14 04:53:24 np0005486808 systemd[1]: libpod-94c085544d30b98f7f20fd9b60e9f107fd4dedcc57103217c8a7de4c813af377.scope: Deactivated successfully.
Oct 14 04:53:24 np0005486808 systemd[1]: libpod-94c085544d30b98f7f20fd9b60e9f107fd4dedcc57103217c8a7de4c813af377.scope: Consumed 1.078s CPU time.
Oct 14 04:53:24 np0005486808 podman[271446]: 2025-10-14 08:53:24.380913533 +0000 UTC m=+1.242568777 container died 94c085544d30b98f7f20fd9b60e9f107fd4dedcc57103217c8a7de4c813af377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_neumann, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:53:24 np0005486808 systemd[1]: var-lib-containers-storage-overlay-0994fdfeeb8d5ee654a6055551b946e229266d599c6e6adfdcf045430de87c2d-merged.mount: Deactivated successfully.
Oct 14 04:53:24 np0005486808 podman[271446]: 2025-10-14 08:53:24.46674534 +0000 UTC m=+1.328400584 container remove 94c085544d30b98f7f20fd9b60e9f107fd4dedcc57103217c8a7de4c813af377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 14 04:53:24 np0005486808 systemd[1]: libpod-conmon-94c085544d30b98f7f20fd9b60e9f107fd4dedcc57103217c8a7de4c813af377.scope: Deactivated successfully.
Oct 14 04:53:24 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:53:24 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:53:24 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:53:24 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:53:24 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 8d25ef22-aa9e-4e5e-af58-40fe647aaf59 does not exist
Oct 14 04:53:24 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 477461a4-fb52-4466-9bb4-7f51789677b3 does not exist
Oct 14 04:53:25 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 341 B/s wr, 3 op/s
Oct 14 04:53:25 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:53:25 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:53:27 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:53:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:53:29 np0005486808 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 04:53:29 np0005486808 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 6036 writes, 24K keys, 6036 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 6036 writes, 1099 syncs, 5.49 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 387 writes, 815 keys, 387 commit groups, 1.0 writes per commit group, ingest: 0.51 MB, 0.00 MB/s#012Interval WAL: 387 writes, 184 syncs, 2.10 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 04:53:29 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1039: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:53:30 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:53:30.854 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:53:30 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:53:30.856 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 14 04:53:31 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:53:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:53:31.857 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:53:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:53:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_08:53:32
Oct 14 04:53:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 04:53:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 04:53:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['vms', 'volumes', 'images', 'default.rgw.meta', 'backups', 'default.rgw.log', 'default.rgw.control', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Oct 14 04:53:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:53:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:53:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 04:53:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:53:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:53:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:53:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:53:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 04:53:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:53:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 04:53:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:53:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:53:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:53:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:53:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:53:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:53:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:53:33 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:53:33 np0005486808 podman[271562]: 2025-10-14 08:53:33.710410706 +0000 UTC m=+0.123421155 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 14 04:53:33 np0005486808 podman[271561]: 2025-10-14 08:53:33.710480998 +0000 UTC m=+0.123084397 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 14 04:53:34 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 04:53:34 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 7204 writes, 29K keys, 7204 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 7204 writes, 1461 syncs, 4.93 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 410 writes, 1133 keys, 410 commit groups, 1.0 writes per commit group, ingest: 0.62 MB, 0.00 MB/s#012Interval WAL: 410 writes, 188 syncs, 2.18 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 04:53:35 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1042: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:53:37 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:53:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:53:39 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:53:39 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 04:53:39 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 6132 writes, 25K keys, 6132 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 6132 writes, 1126 syncs, 5.45 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 533 writes, 1494 keys, 533 commit groups, 1.0 writes per commit group, ingest: 0.70 MB, 0.00 MB/s#012Interval WAL: 533 writes, 250 syncs, 2.13 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 04:53:40 np0005486808 ceph-mgr[74543]: [devicehealth INFO root] Check health
Oct 14 04:53:41 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:53:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:53:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 04:53:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:53:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 04:53:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:53:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:53:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:53:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:53:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:53:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:53:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:53:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 14 04:53:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:53:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 04:53:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:53:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:53:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:53:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 04:53:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:53:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 04:53:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:53:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:53:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:53:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 04:53:43 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1046: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:53:45 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:53:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:53:47 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1048: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:53:48 np0005486808 nova_compute[259627]: 2025-10-14 08:53:48.213 2 DEBUG oslo_concurrency.lockutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Acquiring lock "2ce8be7a-3198-4f1c-ba79-e11a24581a60" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:53:48 np0005486808 nova_compute[259627]: 2025-10-14 08:53:48.215 2 DEBUG oslo_concurrency.lockutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Lock "2ce8be7a-3198-4f1c-ba79-e11a24581a60" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:53:48 np0005486808 nova_compute[259627]: 2025-10-14 08:53:48.243 2 DEBUG nova.compute.manager [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 04:53:48 np0005486808 nova_compute[259627]: 2025-10-14 08:53:48.380 2 DEBUG oslo_concurrency.lockutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Acquiring lock "0a24666a-3d83-4fd9-8a89-c0585d8dc8e3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:53:48 np0005486808 nova_compute[259627]: 2025-10-14 08:53:48.381 2 DEBUG oslo_concurrency.lockutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Lock "0a24666a-3d83-4fd9-8a89-c0585d8dc8e3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:53:48 np0005486808 nova_compute[259627]: 2025-10-14 08:53:48.401 2 DEBUG nova.compute.manager [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 04:53:48 np0005486808 nova_compute[259627]: 2025-10-14 08:53:48.407 2 DEBUG oslo_concurrency.lockutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:53:48 np0005486808 nova_compute[259627]: 2025-10-14 08:53:48.407 2 DEBUG oslo_concurrency.lockutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:53:48 np0005486808 nova_compute[259627]: 2025-10-14 08:53:48.417 2 DEBUG nova.virt.hardware [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 04:53:48 np0005486808 nova_compute[259627]: 2025-10-14 08:53:48.418 2 INFO nova.compute.claims [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 04:53:48 np0005486808 nova_compute[259627]: 2025-10-14 08:53:48.540 2 DEBUG oslo_concurrency.lockutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:53:48 np0005486808 nova_compute[259627]: 2025-10-14 08:53:48.572 2 DEBUG oslo_concurrency.processutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:53:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:53:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3370884652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:53:49 np0005486808 nova_compute[259627]: 2025-10-14 08:53:49.053 2 DEBUG oslo_concurrency.processutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:53:49 np0005486808 nova_compute[259627]: 2025-10-14 08:53:49.059 2 DEBUG nova.compute.provider_tree [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:53:49 np0005486808 nova_compute[259627]: 2025-10-14 08:53:49.083 2 DEBUG nova.scheduler.client.report [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:53:49 np0005486808 nova_compute[259627]: 2025-10-14 08:53:49.112 2 DEBUG oslo_concurrency.lockutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:53:49 np0005486808 nova_compute[259627]: 2025-10-14 08:53:49.113 2 DEBUG nova.compute.manager [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 04:53:49 np0005486808 nova_compute[259627]: 2025-10-14 08:53:49.117 2 DEBUG oslo_concurrency.lockutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:53:49 np0005486808 nova_compute[259627]: 2025-10-14 08:53:49.127 2 DEBUG nova.virt.hardware [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 04:53:49 np0005486808 nova_compute[259627]: 2025-10-14 08:53:49.128 2 INFO nova.compute.claims [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 04:53:49 np0005486808 nova_compute[259627]: 2025-10-14 08:53:49.211 2 DEBUG nova.compute.manager [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Oct 14 04:53:49 np0005486808 nova_compute[259627]: 2025-10-14 08:53:49.237 2 INFO nova.virt.libvirt.driver [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 04:53:49 np0005486808 nova_compute[259627]: 2025-10-14 08:53:49.257 2 DEBUG nova.compute.manager [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 04:53:49 np0005486808 nova_compute[259627]: 2025-10-14 08:53:49.317 2 DEBUG oslo_concurrency.processutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:53:49 np0005486808 nova_compute[259627]: 2025-10-14 08:53:49.357 2 DEBUG nova.compute.manager [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 04:53:49 np0005486808 nova_compute[259627]: 2025-10-14 08:53:49.360 2 DEBUG nova.virt.libvirt.driver [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 04:53:49 np0005486808 nova_compute[259627]: 2025-10-14 08:53:49.361 2 INFO nova.virt.libvirt.driver [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Creating image(s)#033[00m
Oct 14 04:53:49 np0005486808 nova_compute[259627]: 2025-10-14 08:53:49.405 2 DEBUG nova.storage.rbd_utils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] rbd image 2ce8be7a-3198-4f1c-ba79-e11a24581a60_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:53:49 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:53:49 np0005486808 nova_compute[259627]: 2025-10-14 08:53:49.440 2 DEBUG nova.storage.rbd_utils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] rbd image 2ce8be7a-3198-4f1c-ba79-e11a24581a60_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:53:49 np0005486808 nova_compute[259627]: 2025-10-14 08:53:49.469 2 DEBUG nova.storage.rbd_utils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] rbd image 2ce8be7a-3198-4f1c-ba79-e11a24581a60_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:53:49 np0005486808 nova_compute[259627]: 2025-10-14 08:53:49.507 2 DEBUG oslo_concurrency.lockutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:53:49 np0005486808 nova_compute[259627]: 2025-10-14 08:53:49.508 2 DEBUG oslo_concurrency.lockutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:53:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:53:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1449041474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:53:49 np0005486808 nova_compute[259627]: 2025-10-14 08:53:49.781 2 DEBUG oslo_concurrency.processutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:53:49 np0005486808 nova_compute[259627]: 2025-10-14 08:53:49.789 2 DEBUG nova.compute.provider_tree [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:53:49 np0005486808 nova_compute[259627]: 2025-10-14 08:53:49.809 2 DEBUG nova.scheduler.client.report [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:53:49 np0005486808 nova_compute[259627]: 2025-10-14 08:53:49.847 2 DEBUG oslo_concurrency.lockutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:53:49 np0005486808 nova_compute[259627]: 2025-10-14 08:53:49.848 2 DEBUG nova.compute.manager [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 04:53:49 np0005486808 nova_compute[259627]: 2025-10-14 08:53:49.938 2 DEBUG nova.compute.manager [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 04:53:49 np0005486808 nova_compute[259627]: 2025-10-14 08:53:49.939 2 DEBUG nova.network.neutron [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 04:53:49 np0005486808 nova_compute[259627]: 2025-10-14 08:53:49.971 2 INFO nova.virt.libvirt.driver [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 04:53:50 np0005486808 nova_compute[259627]: 2025-10-14 08:53:50.015 2 DEBUG nova.compute.manager [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 04:53:50 np0005486808 nova_compute[259627]: 2025-10-14 08:53:50.138 2 DEBUG nova.compute.manager [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 04:53:50 np0005486808 nova_compute[259627]: 2025-10-14 08:53:50.140 2 DEBUG nova.virt.libvirt.driver [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 04:53:50 np0005486808 nova_compute[259627]: 2025-10-14 08:53:50.140 2 INFO nova.virt.libvirt.driver [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Creating image(s)#033[00m
Oct 14 04:53:50 np0005486808 nova_compute[259627]: 2025-10-14 08:53:50.162 2 DEBUG nova.storage.rbd_utils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] rbd image 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:53:50 np0005486808 nova_compute[259627]: 2025-10-14 08:53:50.187 2 DEBUG nova.storage.rbd_utils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] rbd image 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:53:50 np0005486808 nova_compute[259627]: 2025-10-14 08:53:50.214 2 DEBUG nova.storage.rbd_utils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] rbd image 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:53:50 np0005486808 nova_compute[259627]: 2025-10-14 08:53:50.219 2 DEBUG oslo_concurrency.lockutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:53:50 np0005486808 nova_compute[259627]: 2025-10-14 08:53:50.837 2 DEBUG nova.virt.libvirt.imagebackend [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Image locations are: [{'url': 'rbd://c49aadb6-9b04-5cb1-8f5f-4c91676c568e/images/a4789543-f429-47d7-9f79-80a9d90a59f9/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://c49aadb6-9b04-5cb1-8f5f-4c91676c568e/images/a4789543-f429-47d7-9f79-80a9d90a59f9/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct 14 04:53:50 np0005486808 nova_compute[259627]: 2025-10-14 08:53:50.842 2 DEBUG nova.network.neutron [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct 14 04:53:50 np0005486808 nova_compute[259627]: 2025-10-14 08:53:50.843 2 DEBUG nova.compute.manager [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 04:53:51 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1050: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:53:52 np0005486808 nova_compute[259627]: 2025-10-14 08:53:52.370 2 DEBUG oslo_concurrency.processutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:53:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:53:52 np0005486808 nova_compute[259627]: 2025-10-14 08:53:52.456 2 DEBUG oslo_concurrency.processutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963.part --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:53:52 np0005486808 nova_compute[259627]: 2025-10-14 08:53:52.457 2 DEBUG nova.virt.images [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] a4789543-f429-47d7-9f79-80a9d90a59f9 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Oct 14 04:53:52 np0005486808 nova_compute[259627]: 2025-10-14 08:53:52.459 2 DEBUG nova.privsep.utils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Oct 14 04:53:52 np0005486808 nova_compute[259627]: 2025-10-14 08:53:52.460 2 DEBUG oslo_concurrency.processutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963.part /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:53:52 np0005486808 nova_compute[259627]: 2025-10-14 08:53:52.649 2 DEBUG oslo_concurrency.processutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963.part /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963.converted" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:53:52 np0005486808 nova_compute[259627]: 2025-10-14 08:53:52.655 2 DEBUG oslo_concurrency.processutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:53:52 np0005486808 podman[271771]: 2025-10-14 08:53:52.695242128 +0000 UTC m=+0.090816301 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid)
Oct 14 04:53:52 np0005486808 podman[271770]: 2025-10-14 08:53:52.708828573 +0000 UTC m=+0.111827290 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 14 04:53:52 np0005486808 nova_compute[259627]: 2025-10-14 08:53:52.743 2 DEBUG oslo_concurrency.processutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963.converted --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:53:52 np0005486808 nova_compute[259627]: 2025-10-14 08:53:52.745 2 DEBUG oslo_concurrency.lockutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 3.236s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:53:52 np0005486808 nova_compute[259627]: 2025-10-14 08:53:52.774 2 DEBUG nova.storage.rbd_utils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] rbd image 2ce8be7a-3198-4f1c-ba79-e11a24581a60_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:53:52 np0005486808 nova_compute[259627]: 2025-10-14 08:53:52.778 2 DEBUG oslo_concurrency.processutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 2ce8be7a-3198-4f1c-ba79-e11a24581a60_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:53:52 np0005486808 nova_compute[259627]: 2025-10-14 08:53:52.803 2 DEBUG oslo_concurrency.lockutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 2.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:53:52 np0005486808 nova_compute[259627]: 2025-10-14 08:53:52.804 2 DEBUG oslo_concurrency.lockutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:53:52 np0005486808 nova_compute[259627]: 2025-10-14 08:53:52.831 2 DEBUG nova.storage.rbd_utils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] rbd image 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:53:52 np0005486808 nova_compute[259627]: 2025-10-14 08:53:52.835 2 DEBUG oslo_concurrency.processutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:53:53 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:53:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Oct 14 04:53:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Oct 14 04:53:53 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Oct 14 04:53:54 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Oct 14 04:53:54 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Oct 14 04:53:54 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Oct 14 04:53:54 np0005486808 nova_compute[259627]: 2025-10-14 08:53:54.817 2 DEBUG oslo_concurrency.processutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 2ce8be7a-3198-4f1c-ba79-e11a24581a60_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.038s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:53:54 np0005486808 nova_compute[259627]: 2025-10-14 08:53:54.838 2 DEBUG oslo_concurrency.processutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.003s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:53:54 np0005486808 nova_compute[259627]: 2025-10-14 08:53:54.910 2 DEBUG nova.storage.rbd_utils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] resizing rbd image 2ce8be7a-3198-4f1c-ba79-e11a24581a60_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 04:53:54 np0005486808 nova_compute[259627]: 2025-10-14 08:53:54.941 2 DEBUG nova.storage.rbd_utils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] resizing rbd image 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.006 2 DEBUG nova.objects.instance [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Lazy-loading 'migration_context' on Instance uuid 2ce8be7a-3198-4f1c-ba79-e11a24581a60 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.039 2 DEBUG nova.objects.instance [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Lazy-loading 'migration_context' on Instance uuid 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.041 2 DEBUG nova.virt.libvirt.driver [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.042 2 DEBUG nova.virt.libvirt.driver [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Ensure instance console log exists: /var/lib/nova/instances/2ce8be7a-3198-4f1c-ba79-e11a24581a60/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.042 2 DEBUG oslo_concurrency.lockutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.042 2 DEBUG oslo_concurrency.lockutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.043 2 DEBUG oslo_concurrency.lockutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.044 2 DEBUG nova.virt.libvirt.driver [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.049 2 WARNING nova.virt.libvirt.driver [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.055 2 DEBUG nova.virt.libvirt.host [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.055 2 DEBUG nova.virt.libvirt.host [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.057 2 DEBUG nova.virt.libvirt.driver [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.057 2 DEBUG nova.virt.libvirt.driver [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Ensure instance console log exists: /var/lib/nova/instances/0a24666a-3d83-4fd9-8a89-c0585d8dc8e3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.057 2 DEBUG oslo_concurrency.lockutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.058 2 DEBUG oslo_concurrency.lockutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.058 2 DEBUG oslo_concurrency.lockutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.059 2 DEBUG nova.virt.libvirt.driver [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.060 2 DEBUG nova.virt.libvirt.host [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.060 2 DEBUG nova.virt.libvirt.host [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.060 2 DEBUG nova.virt.libvirt.driver [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.061 2 DEBUG nova.virt.hardware [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.061 2 DEBUG nova.virt.hardware [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.061 2 DEBUG nova.virt.hardware [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.061 2 DEBUG nova.virt.hardware [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.062 2 DEBUG nova.virt.hardware [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.062 2 DEBUG nova.virt.hardware [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.062 2 DEBUG nova.virt.hardware [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.062 2 DEBUG nova.virt.hardware [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.062 2 DEBUG nova.virt.hardware [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.063 2 DEBUG nova.virt.hardware [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.063 2 DEBUG nova.virt.hardware [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.066 2 DEBUG nova.privsep.utils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.066 2 DEBUG oslo_concurrency.processutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.081 2 WARNING nova.virt.libvirt.driver [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.088 2 DEBUG nova.virt.libvirt.host [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.088 2 DEBUG nova.virt.libvirt.host [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.092 2 DEBUG nova.virt.libvirt.host [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.092 2 DEBUG nova.virt.libvirt.host [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.092 2 DEBUG nova.virt.libvirt.driver [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.093 2 DEBUG nova.virt.hardware [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.093 2 DEBUG nova.virt.hardware [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.093 2 DEBUG nova.virt.hardware [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.094 2 DEBUG nova.virt.hardware [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.094 2 DEBUG nova.virt.hardware [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.094 2 DEBUG nova.virt.hardware [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.094 2 DEBUG nova.virt.hardware [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.095 2 DEBUG nova.virt.hardware [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.095 2 DEBUG nova.virt.hardware [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.095 2 DEBUG nova.virt.hardware [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.095 2 DEBUG nova.virt.hardware [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.098 2 DEBUG oslo_concurrency.processutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:53:55 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1054: 305 pgs: 305 active+clean; 134 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.3 MiB/s wr, 82 op/s
Oct 14 04:53:55 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:53:55 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3471100098' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.481 2 DEBUG oslo_concurrency.processutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:53:55 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:53:55 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/558469147' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.511 2 DEBUG nova.storage.rbd_utils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] rbd image 2ce8be7a-3198-4f1c-ba79-e11a24581a60_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.516 2 DEBUG oslo_concurrency.processutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.534 2 DEBUG oslo_concurrency.processutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.557 2 DEBUG nova.storage.rbd_utils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] rbd image 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.560 2 DEBUG oslo_concurrency.processutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:53:55 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:53:55 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3483536475' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.925 2 DEBUG oslo_concurrency.processutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.927 2 DEBUG nova.objects.instance [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2ce8be7a-3198-4f1c-ba79-e11a24581a60 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 04:53:55 np0005486808 nova_compute[259627]: 2025-10-14 08:53:55.949 2 DEBUG nova.virt.libvirt.driver [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] End _get_guest_xml xml=<domain type="kvm">
Oct 14 04:53:55 np0005486808 nova_compute[259627]:  <uuid>2ce8be7a-3198-4f1c-ba79-e11a24581a60</uuid>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:  <name>instance-00000001</name>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 04:53:55 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:      <nova:name>tempest-AutoAllocateNetworkTest-server-865433973</nova:name>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 08:53:55</nova:creationTime>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 04:53:55 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:        <nova:user uuid="4bed0ea53b244e579f278f95b35bfc0d">tempest-AutoAllocateNetworkTest-907131017-project-member</nova:user>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:        <nova:project uuid="6e03adb4741d4f1c8279abf27fb2b6a1">tempest-AutoAllocateNetworkTest-907131017</nova:project>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:      <nova:ports/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <system>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:      <entry name="serial">2ce8be7a-3198-4f1c-ba79-e11a24581a60</entry>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:      <entry name="uuid">2ce8be7a-3198-4f1c-ba79-e11a24581a60</entry>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    </system>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:  <os>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:  </os>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:  <features>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:  </features>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:  </clock>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:  <devices>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 04:53:55 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/2ce8be7a-3198-4f1c-ba79-e11a24581a60_disk">
Oct 14 04:53:55 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:53:55 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 04:53:55 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/2ce8be7a-3198-4f1c-ba79-e11a24581a60_disk.config">
Oct 14 04:53:55 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:53:55 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 04:53:55 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/2ce8be7a-3198-4f1c-ba79-e11a24581a60/console.log" append="off"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    </serial>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <video>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    </video>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 04:53:55 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    </rng>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 04:53:55 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 04:53:55 np0005486808 nova_compute[259627]:  </devices>
Oct 14 04:53:55 np0005486808 nova_compute[259627]: </domain>
Oct 14 04:53:55 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 04:53:56 np0005486808 nova_compute[259627]: 2025-10-14 08:53:56.003 2 DEBUG nova.virt.libvirt.driver [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 04:53:56 np0005486808 nova_compute[259627]: 2025-10-14 08:53:56.005 2 DEBUG nova.virt.libvirt.driver [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 04:53:56 np0005486808 nova_compute[259627]: 2025-10-14 08:53:56.005 2 INFO nova.virt.libvirt.driver [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Using config drive
Oct 14 04:53:56 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:53:56 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3799170032' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:53:56 np0005486808 nova_compute[259627]: 2025-10-14 08:53:56.023 2 DEBUG nova.storage.rbd_utils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] rbd image 2ce8be7a-3198-4f1c-ba79-e11a24581a60_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:53:56 np0005486808 nova_compute[259627]: 2025-10-14 08:53:56.029 2 DEBUG oslo_concurrency.processutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:53:56 np0005486808 nova_compute[259627]: 2025-10-14 08:53:56.030 2 DEBUG nova.objects.instance [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 04:53:56 np0005486808 nova_compute[259627]: 2025-10-14 08:53:56.057 2 DEBUG nova.virt.libvirt.driver [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] End _get_guest_xml xml=<domain type="kvm">
Oct 14 04:53:56 np0005486808 nova_compute[259627]:  <uuid>0a24666a-3d83-4fd9-8a89-c0585d8dc8e3</uuid>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:  <name>instance-00000002</name>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 04:53:56 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:      <nova:name>tempest-ServerExternalEventsTest-server-98356922</nova:name>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 08:53:55</nova:creationTime>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 04:53:56 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:        <nova:user uuid="352e09590ad54449b344c1cf9ed31e15">tempest-ServerExternalEventsTest-1253606656-project-member</nova:user>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:        <nova:project uuid="e2a0546071664670a3e6a70205cf65a4">tempest-ServerExternalEventsTest-1253606656</nova:project>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:      <nova:ports/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <system>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:      <entry name="serial">0a24666a-3d83-4fd9-8a89-c0585d8dc8e3</entry>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:      <entry name="uuid">0a24666a-3d83-4fd9-8a89-c0585d8dc8e3</entry>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    </system>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:  <os>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:  </os>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:  <features>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:  </features>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:  </clock>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:  <devices>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 04:53:56 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/0a24666a-3d83-4fd9-8a89-c0585d8dc8e3_disk">
Oct 14 04:53:56 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:53:56 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 04:53:56 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/0a24666a-3d83-4fd9-8a89-c0585d8dc8e3_disk.config">
Oct 14 04:53:56 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:53:56 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 04:53:56 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/0a24666a-3d83-4fd9-8a89-c0585d8dc8e3/console.log" append="off"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    </serial>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <video>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    </video>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 04:53:56 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    </rng>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 04:53:56 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 04:53:56 np0005486808 nova_compute[259627]:  </devices>
Oct 14 04:53:56 np0005486808 nova_compute[259627]: </domain>
Oct 14 04:53:56 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 04:53:56 np0005486808 nova_compute[259627]: 2025-10-14 08:53:56.114 2 DEBUG nova.virt.libvirt.driver [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 04:53:56 np0005486808 nova_compute[259627]: 2025-10-14 08:53:56.114 2 DEBUG nova.virt.libvirt.driver [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 04:53:56 np0005486808 nova_compute[259627]: 2025-10-14 08:53:56.115 2 INFO nova.virt.libvirt.driver [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Using config drive
Oct 14 04:53:56 np0005486808 nova_compute[259627]: 2025-10-14 08:53:56.134 2 DEBUG nova.storage.rbd_utils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] rbd image 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:53:56 np0005486808 nova_compute[259627]: 2025-10-14 08:53:56.614 2 INFO nova.virt.libvirt.driver [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Creating config drive at /var/lib/nova/instances/0a24666a-3d83-4fd9-8a89-c0585d8dc8e3/disk.config
Oct 14 04:53:56 np0005486808 nova_compute[259627]: 2025-10-14 08:53:56.623 2 DEBUG oslo_concurrency.processutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0a24666a-3d83-4fd9-8a89-c0585d8dc8e3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6xf1o1nj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:53:56 np0005486808 nova_compute[259627]: 2025-10-14 08:53:56.716 2 INFO nova.virt.libvirt.driver [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Creating config drive at /var/lib/nova/instances/2ce8be7a-3198-4f1c-ba79-e11a24581a60/disk.config
Oct 14 04:53:56 np0005486808 nova_compute[259627]: 2025-10-14 08:53:56.727 2 DEBUG oslo_concurrency.processutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2ce8be7a-3198-4f1c-ba79-e11a24581a60/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg7hh3n6h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:53:56 np0005486808 nova_compute[259627]: 2025-10-14 08:53:56.755 2 DEBUG oslo_concurrency.processutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0a24666a-3d83-4fd9-8a89-c0585d8dc8e3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6xf1o1nj" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:53:56 np0005486808 nova_compute[259627]: 2025-10-14 08:53:56.792 2 DEBUG nova.storage.rbd_utils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] rbd image 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:53:56 np0005486808 nova_compute[259627]: 2025-10-14 08:53:56.797 2 DEBUG oslo_concurrency.processutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0a24666a-3d83-4fd9-8a89-c0585d8dc8e3/disk.config 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:53:56 np0005486808 nova_compute[259627]: 2025-10-14 08:53:56.868 2 DEBUG oslo_concurrency.processutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2ce8be7a-3198-4f1c-ba79-e11a24581a60/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg7hh3n6h" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:53:56 np0005486808 nova_compute[259627]: 2025-10-14 08:53:56.911 2 DEBUG nova.storage.rbd_utils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] rbd image 2ce8be7a-3198-4f1c-ba79-e11a24581a60_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:53:56 np0005486808 nova_compute[259627]: 2025-10-14 08:53:56.917 2 DEBUG oslo_concurrency.processutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2ce8be7a-3198-4f1c-ba79-e11a24581a60/disk.config 2ce8be7a-3198-4f1c-ba79-e11a24581a60_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:53:56 np0005486808 nova_compute[259627]: 2025-10-14 08:53:56.981 2 DEBUG oslo_concurrency.processutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0a24666a-3d83-4fd9-8a89-c0585d8dc8e3/disk.config 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.184s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:53:56 np0005486808 nova_compute[259627]: 2025-10-14 08:53:56.983 2 INFO nova.virt.libvirt.driver [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Deleting local config drive /var/lib/nova/instances/0a24666a-3d83-4fd9-8a89-c0585d8dc8e3/disk.config because it was imported into RBD.#033[00m
Oct 14 04:53:57 np0005486808 systemd[1]: Starting libvirt secret daemon...
Oct 14 04:53:57 np0005486808 systemd[1]: Started libvirt secret daemon.
Oct 14 04:53:57 np0005486808 nova_compute[259627]: 2025-10-14 08:53:57.116 2 DEBUG oslo_concurrency.processutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2ce8be7a-3198-4f1c-ba79-e11a24581a60/disk.config 2ce8be7a-3198-4f1c-ba79-e11a24581a60_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.199s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:53:57 np0005486808 nova_compute[259627]: 2025-10-14 08:53:57.117 2 INFO nova.virt.libvirt.driver [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Deleting local config drive /var/lib/nova/instances/2ce8be7a-3198-4f1c-ba79-e11a24581a60/disk.config because it was imported into RBD.#033[00m
Oct 14 04:53:57 np0005486808 systemd-machined[214636]: New machine qemu-1-instance-00000002.
Oct 14 04:53:57 np0005486808 systemd[1]: Started Virtual Machine qemu-1-instance-00000002.
Oct 14 04:53:57 np0005486808 systemd-machined[214636]: New machine qemu-2-instance-00000001.
Oct 14 04:53:57 np0005486808 systemd[1]: Started Virtual Machine qemu-2-instance-00000001.
Oct 14 04:53:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:53:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Oct 14 04:53:57 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1055: 305 pgs: 305 active+clean; 134 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.3 MiB/s wr, 82 op/s
Oct 14 04:53:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Oct 14 04:53:57 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.489 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432038.488253, 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.490 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] VM Resumed (Lifecycle Event)#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.493 2 DEBUG nova.compute.manager [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.493 2 DEBUG nova.virt.libvirt.driver [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.497 2 INFO nova.virt.libvirt.driver [-] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Instance spawned successfully.#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.497 2 DEBUG nova.virt.libvirt.driver [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.551 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.561 2 DEBUG nova.virt.libvirt.driver [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.562 2 DEBUG nova.virt.libvirt.driver [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.563 2 DEBUG nova.virt.libvirt.driver [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.563 2 DEBUG nova.virt.libvirt.driver [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.564 2 DEBUG nova.virt.libvirt.driver [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.564 2 DEBUG nova.virt.libvirt.driver [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.568 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.569 2 DEBUG nova.compute.manager [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.570 2 DEBUG nova.virt.libvirt.driver [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.574 2 INFO nova.virt.libvirt.driver [-] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Instance spawned successfully.#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.574 2 DEBUG nova.virt.libvirt.driver [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.618 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.619 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432038.490978, 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.619 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] VM Started (Lifecycle Event)#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.640 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.646 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.651 2 INFO nova.compute.manager [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Took 8.51 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.652 2 DEBUG nova.compute.manager [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.653 2 DEBUG nova.virt.libvirt.driver [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.654 2 DEBUG nova.virt.libvirt.driver [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.654 2 DEBUG nova.virt.libvirt.driver [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.655 2 DEBUG nova.virt.libvirt.driver [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.655 2 DEBUG nova.virt.libvirt.driver [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.655 2 DEBUG nova.virt.libvirt.driver [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.664 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.664 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432038.5618749, 2ce8be7a-3198-4f1c-ba79-e11a24581a60 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.664 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] VM Resumed (Lifecycle Event)#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.687 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.689 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.716 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.716 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432038.5623958, 2ce8be7a-3198-4f1c-ba79-e11a24581a60 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.717 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] VM Started (Lifecycle Event)#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.725 2 INFO nova.compute.manager [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Took 10.21 seconds to build instance.#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.733 2 INFO nova.compute.manager [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Took 9.37 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.733 2 DEBUG nova.compute.manager [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.734 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.739 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.746 2 DEBUG oslo_concurrency.lockutils [None req-e2c55ae0-0aa7-4369-80e8-f9eeba68038f 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Lock "0a24666a-3d83-4fd9-8a89-c0585d8dc8e3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.364s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.768 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.797 2 INFO nova.compute.manager [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Took 10.44 seconds to build instance.#033[00m
Oct 14 04:53:58 np0005486808 nova_compute[259627]: 2025-10-14 08:53:58.822 2 DEBUG oslo_concurrency.lockutils [None req-654889ff-6e98-40a6-9140-c9f0cbdd06fa 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Lock "2ce8be7a-3198-4f1c-ba79-e11a24581a60" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:53:59 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1057: 305 pgs: 305 active+clean; 134 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 7.1 MiB/s wr, 109 op/s
Oct 14 04:53:59 np0005486808 nova_compute[259627]: 2025-10-14 08:53:59.938 2 DEBUG nova.compute.manager [None req-83eff381-a19d-4fb7-a9b1-a49387258c00 767ba0b62bad4323bf9810b31b7bf2c1 83e54416b4e149f881ead0c3ff615944 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Received event network-changed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:53:59 np0005486808 nova_compute[259627]: 2025-10-14 08:53:59.939 2 DEBUG nova.compute.manager [None req-83eff381-a19d-4fb7-a9b1-a49387258c00 767ba0b62bad4323bf9810b31b7bf2c1 83e54416b4e149f881ead0c3ff615944 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Refreshing instance network info cache due to event network-changed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 04:53:59 np0005486808 nova_compute[259627]: 2025-10-14 08:53:59.940 2 DEBUG oslo_concurrency.lockutils [None req-83eff381-a19d-4fb7-a9b1-a49387258c00 767ba0b62bad4323bf9810b31b7bf2c1 83e54416b4e149f881ead0c3ff615944 - - default default] Acquiring lock "refresh_cache-0a24666a-3d83-4fd9-8a89-c0585d8dc8e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:53:59 np0005486808 nova_compute[259627]: 2025-10-14 08:53:59.940 2 DEBUG oslo_concurrency.lockutils [None req-83eff381-a19d-4fb7-a9b1-a49387258c00 767ba0b62bad4323bf9810b31b7bf2c1 83e54416b4e149f881ead0c3ff615944 - - default default] Acquired lock "refresh_cache-0a24666a-3d83-4fd9-8a89-c0585d8dc8e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:53:59 np0005486808 nova_compute[259627]: 2025-10-14 08:53:59.942 2 DEBUG nova.network.neutron [None req-83eff381-a19d-4fb7-a9b1-a49387258c00 767ba0b62bad4323bf9810b31b7bf2c1 83e54416b4e149f881ead0c3ff615944 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 04:54:00 np0005486808 nova_compute[259627]: 2025-10-14 08:54:00.187 2 DEBUG oslo_concurrency.lockutils [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Acquiring lock "0a24666a-3d83-4fd9-8a89-c0585d8dc8e3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:54:00 np0005486808 nova_compute[259627]: 2025-10-14 08:54:00.188 2 DEBUG oslo_concurrency.lockutils [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Lock "0a24666a-3d83-4fd9-8a89-c0585d8dc8e3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:54:00 np0005486808 nova_compute[259627]: 2025-10-14 08:54:00.188 2 DEBUG oslo_concurrency.lockutils [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Acquiring lock "0a24666a-3d83-4fd9-8a89-c0585d8dc8e3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:54:00 np0005486808 nova_compute[259627]: 2025-10-14 08:54:00.189 2 DEBUG oslo_concurrency.lockutils [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Lock "0a24666a-3d83-4fd9-8a89-c0585d8dc8e3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:54:00 np0005486808 nova_compute[259627]: 2025-10-14 08:54:00.190 2 DEBUG oslo_concurrency.lockutils [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Lock "0a24666a-3d83-4fd9-8a89-c0585d8dc8e3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:54:00 np0005486808 nova_compute[259627]: 2025-10-14 08:54:00.192 2 INFO nova.compute.manager [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Terminating instance#033[00m
Oct 14 04:54:00 np0005486808 nova_compute[259627]: 2025-10-14 08:54:00.194 2 DEBUG oslo_concurrency.lockutils [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Acquiring lock "refresh_cache-0a24666a-3d83-4fd9-8a89-c0585d8dc8e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:54:00 np0005486808 nova_compute[259627]: 2025-10-14 08:54:00.197 2 DEBUG nova.network.neutron [None req-83eff381-a19d-4fb7-a9b1-a49387258c00 767ba0b62bad4323bf9810b31b7bf2c1 83e54416b4e149f881ead0c3ff615944 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 04:54:00 np0005486808 nova_compute[259627]: 2025-10-14 08:54:00.419 2 DEBUG oslo_concurrency.lockutils [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Acquiring lock "2ce8be7a-3198-4f1c-ba79-e11a24581a60" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:54:00 np0005486808 nova_compute[259627]: 2025-10-14 08:54:00.420 2 DEBUG oslo_concurrency.lockutils [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Lock "2ce8be7a-3198-4f1c-ba79-e11a24581a60" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:54:00 np0005486808 nova_compute[259627]: 2025-10-14 08:54:00.421 2 DEBUG oslo_concurrency.lockutils [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Acquiring lock "2ce8be7a-3198-4f1c-ba79-e11a24581a60-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:54:00 np0005486808 nova_compute[259627]: 2025-10-14 08:54:00.421 2 DEBUG oslo_concurrency.lockutils [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Lock "2ce8be7a-3198-4f1c-ba79-e11a24581a60-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:54:00 np0005486808 nova_compute[259627]: 2025-10-14 08:54:00.422 2 DEBUG oslo_concurrency.lockutils [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Lock "2ce8be7a-3198-4f1c-ba79-e11a24581a60-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:54:00 np0005486808 nova_compute[259627]: 2025-10-14 08:54:00.424 2 INFO nova.compute.manager [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Terminating instance#033[00m
Oct 14 04:54:00 np0005486808 nova_compute[259627]: 2025-10-14 08:54:00.425 2 DEBUG oslo_concurrency.lockutils [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Acquiring lock "refresh_cache-2ce8be7a-3198-4f1c-ba79-e11a24581a60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:54:00 np0005486808 nova_compute[259627]: 2025-10-14 08:54:00.426 2 DEBUG oslo_concurrency.lockutils [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Acquired lock "refresh_cache-2ce8be7a-3198-4f1c-ba79-e11a24581a60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:54:00 np0005486808 nova_compute[259627]: 2025-10-14 08:54:00.426 2 DEBUG nova.network.neutron [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 04:54:01 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1058: 305 pgs: 305 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 8.2 MiB/s rd, 5.4 MiB/s wr, 306 op/s
Oct 14 04:54:01 np0005486808 nova_compute[259627]: 2025-10-14 08:54:01.743 2 DEBUG nova.network.neutron [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 04:54:01 np0005486808 nova_compute[259627]: 2025-10-14 08:54:01.746 2 DEBUG nova.network.neutron [None req-83eff381-a19d-4fb7-a9b1-a49387258c00 767ba0b62bad4323bf9810b31b7bf2c1 83e54416b4e149f881ead0c3ff615944 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:54:01 np0005486808 nova_compute[259627]: 2025-10-14 08:54:01.761 2 DEBUG oslo_concurrency.lockutils [None req-83eff381-a19d-4fb7-a9b1-a49387258c00 767ba0b62bad4323bf9810b31b7bf2c1 83e54416b4e149f881ead0c3ff615944 - - default default] Releasing lock "refresh_cache-0a24666a-3d83-4fd9-8a89-c0585d8dc8e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:54:01 np0005486808 nova_compute[259627]: 2025-10-14 08:54:01.762 2 DEBUG oslo_concurrency.lockutils [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Acquired lock "refresh_cache-0a24666a-3d83-4fd9-8a89-c0585d8dc8e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:54:01 np0005486808 nova_compute[259627]: 2025-10-14 08:54:01.762 2 DEBUG nova.network.neutron [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 04:54:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:54:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:54:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:54:02 np0005486808 nova_compute[259627]: 2025-10-14 08:54:02.747 2 DEBUG nova.network.neutron [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 04:54:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:54:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:54:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:54:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:54:02 np0005486808 nova_compute[259627]: 2025-10-14 08:54:02.904 2 DEBUG nova.network.neutron [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:54:02 np0005486808 nova_compute[259627]: 2025-10-14 08:54:02.922 2 DEBUG oslo_concurrency.lockutils [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Releasing lock "refresh_cache-2ce8be7a-3198-4f1c-ba79-e11a24581a60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:54:02 np0005486808 nova_compute[259627]: 2025-10-14 08:54:02.923 2 DEBUG nova.compute.manager [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 04:54:02 np0005486808 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000001.scope: Deactivated successfully.
Oct 14 04:54:02 np0005486808 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000001.scope: Consumed 5.779s CPU time.
Oct 14 04:54:02 np0005486808 systemd-machined[214636]: Machine qemu-2-instance-00000001 terminated.
Oct 14 04:54:03 np0005486808 nova_compute[259627]: 2025-10-14 08:54:03.059 2 DEBUG nova.network.neutron [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:54:03 np0005486808 nova_compute[259627]: 2025-10-14 08:54:03.081 2 DEBUG oslo_concurrency.lockutils [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Releasing lock "refresh_cache-0a24666a-3d83-4fd9-8a89-c0585d8dc8e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:54:03 np0005486808 nova_compute[259627]: 2025-10-14 08:54:03.081 2 DEBUG nova.compute.manager [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 04:54:03 np0005486808 nova_compute[259627]: 2025-10-14 08:54:03.152 2 INFO nova.virt.libvirt.driver [-] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Instance destroyed successfully.#033[00m
Oct 14 04:54:03 np0005486808 nova_compute[259627]: 2025-10-14 08:54:03.153 2 DEBUG nova.objects.instance [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Lazy-loading 'resources' on Instance uuid 2ce8be7a-3198-4f1c-ba79-e11a24581a60 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:54:03 np0005486808 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Deactivated successfully.
Oct 14 04:54:03 np0005486808 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Consumed 5.936s CPU time.
Oct 14 04:54:03 np0005486808 systemd-machined[214636]: Machine qemu-1-instance-00000002 terminated.
Oct 14 04:54:03 np0005486808 nova_compute[259627]: 2025-10-14 08:54:03.306 2 INFO nova.virt.libvirt.driver [-] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Instance destroyed successfully.#033[00m
Oct 14 04:54:03 np0005486808 nova_compute[259627]: 2025-10-14 08:54:03.306 2 DEBUG nova.objects.instance [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Lazy-loading 'resources' on Instance uuid 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:54:03 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 305 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 7.3 MiB/s rd, 4.8 MiB/s wr, 273 op/s
Oct 14 04:54:03 np0005486808 nova_compute[259627]: 2025-10-14 08:54:03.612 2 INFO nova.virt.libvirt.driver [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Deleting instance files /var/lib/nova/instances/2ce8be7a-3198-4f1c-ba79-e11a24581a60_del#033[00m
Oct 14 04:54:03 np0005486808 nova_compute[259627]: 2025-10-14 08:54:03.614 2 INFO nova.virt.libvirt.driver [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Deletion of /var/lib/nova/instances/2ce8be7a-3198-4f1c-ba79-e11a24581a60_del complete#033[00m
Oct 14 04:54:03 np0005486808 nova_compute[259627]: 2025-10-14 08:54:03.694 2 DEBUG nova.virt.libvirt.host [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Oct 14 04:54:03 np0005486808 nova_compute[259627]: 2025-10-14 08:54:03.695 2 INFO nova.virt.libvirt.host [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] UEFI support detected#033[00m
Oct 14 04:54:03 np0005486808 nova_compute[259627]: 2025-10-14 08:54:03.698 2 INFO nova.compute.manager [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 04:54:03 np0005486808 nova_compute[259627]: 2025-10-14 08:54:03.699 2 DEBUG oslo.service.loopingcall [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 04:54:03 np0005486808 nova_compute[259627]: 2025-10-14 08:54:03.700 2 DEBUG nova.compute.manager [-] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 04:54:03 np0005486808 nova_compute[259627]: 2025-10-14 08:54:03.700 2 DEBUG nova.network.neutron [-] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 04:54:03 np0005486808 nova_compute[259627]: 2025-10-14 08:54:03.776 2 INFO nova.virt.libvirt.driver [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Deleting instance files /var/lib/nova/instances/0a24666a-3d83-4fd9-8a89-c0585d8dc8e3_del#033[00m
Oct 14 04:54:03 np0005486808 nova_compute[259627]: 2025-10-14 08:54:03.777 2 INFO nova.virt.libvirt.driver [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Deletion of /var/lib/nova/instances/0a24666a-3d83-4fd9-8a89-c0585d8dc8e3_del complete#033[00m
Oct 14 04:54:03 np0005486808 nova_compute[259627]: 2025-10-14 08:54:03.834 2 INFO nova.compute.manager [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 04:54:03 np0005486808 nova_compute[259627]: 2025-10-14 08:54:03.834 2 DEBUG oslo.service.loopingcall [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 04:54:03 np0005486808 nova_compute[259627]: 2025-10-14 08:54:03.835 2 DEBUG nova.compute.manager [-] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 04:54:03 np0005486808 nova_compute[259627]: 2025-10-14 08:54:03.835 2 DEBUG nova.network.neutron [-] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 04:54:04 np0005486808 nova_compute[259627]: 2025-10-14 08:54:04.167 2 DEBUG nova.network.neutron [-] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 04:54:04 np0005486808 nova_compute[259627]: 2025-10-14 08:54:04.171 2 DEBUG nova.network.neutron [-] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 04:54:04 np0005486808 nova_compute[259627]: 2025-10-14 08:54:04.184 2 DEBUG nova.network.neutron [-] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:54:04 np0005486808 nova_compute[259627]: 2025-10-14 08:54:04.186 2 DEBUG nova.network.neutron [-] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:54:04 np0005486808 nova_compute[259627]: 2025-10-14 08:54:04.199 2 INFO nova.compute.manager [-] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Took 0.36 seconds to deallocate network for instance.#033[00m
Oct 14 04:54:04 np0005486808 nova_compute[259627]: 2025-10-14 08:54:04.205 2 INFO nova.compute.manager [-] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Took 0.50 seconds to deallocate network for instance.#033[00m
Oct 14 04:54:04 np0005486808 nova_compute[259627]: 2025-10-14 08:54:04.279 2 DEBUG oslo_concurrency.lockutils [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:54:04 np0005486808 nova_compute[259627]: 2025-10-14 08:54:04.279 2 DEBUG oslo_concurrency.lockutils [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:54:04 np0005486808 nova_compute[259627]: 2025-10-14 08:54:04.290 2 DEBUG oslo_concurrency.lockutils [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:54:04 np0005486808 nova_compute[259627]: 2025-10-14 08:54:04.364 2 DEBUG oslo_concurrency.processutils [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:54:04 np0005486808 podman[272466]: 2025-10-14 08:54:04.691087903 +0000 UTC m=+0.094578413 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 14 04:54:04 np0005486808 podman[272465]: 2025-10-14 08:54:04.723738219 +0000 UTC m=+0.138075557 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 14 04:54:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:54:04 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2217850088' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:54:04 np0005486808 nova_compute[259627]: 2025-10-14 08:54:04.899 2 DEBUG oslo_concurrency.processutils [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:54:04 np0005486808 nova_compute[259627]: 2025-10-14 08:54:04.909 2 DEBUG nova.compute.provider_tree [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Updating inventory in ProviderTree for provider 92105e1d-1743-46e3-a494-858b4331398a with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 14 04:54:04 np0005486808 nova_compute[259627]: 2025-10-14 08:54:04.952 2 ERROR nova.scheduler.client.report [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] [req-b3b1c8bb-c8eb-443e-9601-9858c5acfd1b] Failed to update inventory to [{'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 92105e1d-1743-46e3-a494-858b4331398a.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-b3b1c8bb-c8eb-443e-9601-9858c5acfd1b"}]}#033[00m
Oct 14 04:54:04 np0005486808 nova_compute[259627]: 2025-10-14 08:54:04.972 2 DEBUG nova.scheduler.client.report [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Refreshing inventories for resource provider 92105e1d-1743-46e3-a494-858b4331398a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 14 04:54:04 np0005486808 nova_compute[259627]: 2025-10-14 08:54:04.992 2 DEBUG nova.scheduler.client.report [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Updating ProviderTree inventory for provider 92105e1d-1743-46e3-a494-858b4331398a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 14 04:54:04 np0005486808 nova_compute[259627]: 2025-10-14 08:54:04.992 2 DEBUG nova.compute.provider_tree [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Updating inventory in ProviderTree for provider 92105e1d-1743-46e3-a494-858b4331398a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 14 04:54:05 np0005486808 nova_compute[259627]: 2025-10-14 08:54:05.008 2 DEBUG nova.scheduler.client.report [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Refreshing aggregate associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 14 04:54:05 np0005486808 nova_compute[259627]: 2025-10-14 08:54:05.031 2 DEBUG nova.scheduler.client.report [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Refreshing trait associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, traits: COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_ACCELERATORS,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 14 04:54:05 np0005486808 nova_compute[259627]: 2025-10-14 08:54:05.098 2 DEBUG oslo_concurrency.processutils [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:54:05 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1060: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 33 KiB/s wr, 247 op/s
Oct 14 04:54:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 04:54:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3324356703' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 04:54:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 04:54:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3324356703' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 04:54:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:54:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1001144178' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:54:05 np0005486808 nova_compute[259627]: 2025-10-14 08:54:05.613 2 DEBUG oslo_concurrency.processutils [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:54:05 np0005486808 nova_compute[259627]: 2025-10-14 08:54:05.621 2 DEBUG nova.compute.provider_tree [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Updating inventory in ProviderTree for provider 92105e1d-1743-46e3-a494-858b4331398a with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 14 04:54:05 np0005486808 nova_compute[259627]: 2025-10-14 08:54:05.670 2 DEBUG nova.scheduler.client.report [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Updated inventory for provider 92105e1d-1743-46e3-a494-858b4331398a with generation 8 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Oct 14 04:54:05 np0005486808 nova_compute[259627]: 2025-10-14 08:54:05.671 2 DEBUG nova.compute.provider_tree [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Updating resource provider 92105e1d-1743-46e3-a494-858b4331398a generation from 8 to 9 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct 14 04:54:05 np0005486808 nova_compute[259627]: 2025-10-14 08:54:05.671 2 DEBUG nova.compute.provider_tree [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Updating inventory in ProviderTree for provider 92105e1d-1743-46e3-a494-858b4331398a with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 14 04:54:05 np0005486808 nova_compute[259627]: 2025-10-14 08:54:05.706 2 DEBUG oslo_concurrency.lockutils [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.427s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:54:05 np0005486808 nova_compute[259627]: 2025-10-14 08:54:05.709 2 DEBUG oslo_concurrency.lockutils [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 1.420s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:54:05 np0005486808 nova_compute[259627]: 2025-10-14 08:54:05.748 2 INFO nova.scheduler.client.report [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Deleted allocations for instance 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3#033[00m
Oct 14 04:54:05 np0005486808 nova_compute[259627]: 2025-10-14 08:54:05.777 2 DEBUG oslo_concurrency.processutils [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:54:05 np0005486808 nova_compute[259627]: 2025-10-14 08:54:05.840 2 DEBUG oslo_concurrency.lockutils [None req-5a445a25-2e66-44fb-a688-c9e035206596 352e09590ad54449b344c1cf9ed31e15 e2a0546071664670a3e6a70205cf65a4 - - default default] Lock "0a24666a-3d83-4fd9-8a89-c0585d8dc8e3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:54:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:54:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2104233409' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:54:06 np0005486808 nova_compute[259627]: 2025-10-14 08:54:06.273 2 DEBUG oslo_concurrency.processutils [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:54:06 np0005486808 nova_compute[259627]: 2025-10-14 08:54:06.282 2 DEBUG nova.compute.provider_tree [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:54:06 np0005486808 nova_compute[259627]: 2025-10-14 08:54:06.298 2 DEBUG nova.scheduler.client.report [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:54:06 np0005486808 nova_compute[259627]: 2025-10-14 08:54:06.334 2 DEBUG oslo_concurrency.lockutils [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:54:06 np0005486808 nova_compute[259627]: 2025-10-14 08:54:06.355 2 INFO nova.scheduler.client.report [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Deleted allocations for instance 2ce8be7a-3198-4f1c-ba79-e11a24581a60#033[00m
Oct 14 04:54:06 np0005486808 nova_compute[259627]: 2025-10-14 08:54:06.437 2 DEBUG oslo_concurrency.lockutils [None req-cbede0b6-14c4-4e4e-941c-de8342f05866 4bed0ea53b244e579f278f95b35bfc0d 6e03adb4741d4f1c8279abf27fb2b6a1 - - default default] Lock "2ce8be7a-3198-4f1c-ba79-e11a24581a60" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.017s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:54:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:07.012 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:54:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:07.013 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:54:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:07.013 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:54:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:54:07 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 33 KiB/s wr, 247 op/s
Oct 14 04:54:08 np0005486808 nova_compute[259627]: 2025-10-14 08:54:08.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:54:08 np0005486808 nova_compute[259627]: 2025-10-14 08:54:08.977 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 04:54:09 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1062: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 27 KiB/s wr, 205 op/s
Oct 14 04:54:09 np0005486808 nova_compute[259627]: 2025-10-14 08:54:09.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:54:10 np0005486808 nova_compute[259627]: 2025-10-14 08:54:10.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:54:10 np0005486808 nova_compute[259627]: 2025-10-14 08:54:10.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:54:11 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1063: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 27 KiB/s wr, 205 op/s
Oct 14 04:54:11 np0005486808 nova_compute[259627]: 2025-10-14 08:54:11.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:54:11 np0005486808 nova_compute[259627]: 2025-10-14 08:54:11.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:54:11 np0005486808 nova_compute[259627]: 2025-10-14 08:54:11.998 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:54:11 np0005486808 nova_compute[259627]: 2025-10-14 08:54:11.999 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:54:11 np0005486808 nova_compute[259627]: 2025-10-14 08:54:11.999 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:54:11 np0005486808 nova_compute[259627]: 2025-10-14 08:54:11.999 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 04:54:12 np0005486808 nova_compute[259627]: 2025-10-14 08:54:12.000 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:54:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:54:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/149918691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:54:12 np0005486808 nova_compute[259627]: 2025-10-14 08:54:12.424 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:54:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:54:12 np0005486808 nova_compute[259627]: 2025-10-14 08:54:12.580 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:54:12 np0005486808 nova_compute[259627]: 2025-10-14 08:54:12.582 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5049MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 04:54:12 np0005486808 nova_compute[259627]: 2025-10-14 08:54:12.582 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:54:12 np0005486808 nova_compute[259627]: 2025-10-14 08:54:12.582 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:54:12 np0005486808 nova_compute[259627]: 2025-10-14 08:54:12.656 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 04:54:12 np0005486808 nova_compute[259627]: 2025-10-14 08:54:12.657 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 04:54:12 np0005486808 nova_compute[259627]: 2025-10-14 08:54:12.680 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:54:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:54:13 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1369913060' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:54:13 np0005486808 nova_compute[259627]: 2025-10-14 08:54:13.120 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:54:13 np0005486808 nova_compute[259627]: 2025-10-14 08:54:13.128 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:54:13 np0005486808 nova_compute[259627]: 2025-10-14 08:54:13.148 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:54:13 np0005486808 nova_compute[259627]: 2025-10-14 08:54:13.183 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 04:54:13 np0005486808 nova_compute[259627]: 2025-10-14 08:54:13.184 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:54:13 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1064: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail; 209 KiB/s rd, 2.3 KiB/s wr, 58 op/s
Oct 14 04:54:15 np0005486808 nova_compute[259627]: 2025-10-14 08:54:15.184 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:54:15 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1065: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail; 209 KiB/s rd, 2.3 KiB/s wr, 58 op/s
Oct 14 04:54:16 np0005486808 nova_compute[259627]: 2025-10-14 08:54:16.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:54:16 np0005486808 nova_compute[259627]: 2025-10-14 08:54:16.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 04:54:16 np0005486808 nova_compute[259627]: 2025-10-14 08:54:16.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 04:54:16 np0005486808 nova_compute[259627]: 2025-10-14 08:54:16.992 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 14 04:54:16 np0005486808 nova_compute[259627]: 2025-10-14 08:54:16.992 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:54:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:54:17 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:54:18 np0005486808 nova_compute[259627]: 2025-10-14 08:54:18.151 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432043.1496763, 2ce8be7a-3198-4f1c-ba79-e11a24581a60 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:54:18 np0005486808 nova_compute[259627]: 2025-10-14 08:54:18.152 2 INFO nova.compute.manager [-] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] VM Stopped (Lifecycle Event)#033[00m
Oct 14 04:54:18 np0005486808 nova_compute[259627]: 2025-10-14 08:54:18.184 2 DEBUG nova.compute.manager [None req-826ba085-5d16-4aba-b751-d5c8d841ead3 - - - - - -] [instance: 2ce8be7a-3198-4f1c-ba79-e11a24581a60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:54:18 np0005486808 nova_compute[259627]: 2025-10-14 08:54:18.301 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432043.3011541, 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:54:18 np0005486808 nova_compute[259627]: 2025-10-14 08:54:18.302 2 INFO nova.compute.manager [-] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] VM Stopped (Lifecycle Event)#033[00m
Oct 14 04:54:18 np0005486808 nova_compute[259627]: 2025-10-14 08:54:18.327 2 DEBUG nova.compute.manager [None req-bb3d20c3-0c23-436d-a3e4-a85c05662c7a - - - - - -] [instance: 0a24666a-3d83-4fd9-8a89-c0585d8dc8e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:54:19 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1067: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:54:21 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:54:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:54:23 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1069: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:54:23 np0005486808 podman[272602]: 2025-10-14 08:54:23.686307573 +0000 UTC m=+0.085449079 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, managed_by=edpm_ansible)
Oct 14 04:54:23 np0005486808 podman[272601]: 2025-10-14 08:54:23.724119316 +0000 UTC m=+0.129789323 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd)
Oct 14 04:54:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:54:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:54:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:54:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:54:25 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1070: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:54:26 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:54:26 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:54:27 np0005486808 podman[273030]: 2025-10-14 08:54:27.019870572 +0000 UTC m=+0.049609195 container create 3454f502f849506dffd42c3c871e58fafffdaa9a96ae457ea4ad070ffa81390e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ptolemy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:54:27 np0005486808 systemd[1]: Started libpod-conmon-3454f502f849506dffd42c3c871e58fafffdaa9a96ae457ea4ad070ffa81390e.scope.
Oct 14 04:54:27 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:54:27 np0005486808 podman[273030]: 2025-10-14 08:54:26.994678281 +0000 UTC m=+0.024416984 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:54:27 np0005486808 podman[273030]: 2025-10-14 08:54:27.099994608 +0000 UTC m=+0.129733241 container init 3454f502f849506dffd42c3c871e58fafffdaa9a96ae457ea4ad070ffa81390e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ptolemy, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 14 04:54:27 np0005486808 podman[273030]: 2025-10-14 08:54:27.106688833 +0000 UTC m=+0.136427476 container start 3454f502f849506dffd42c3c871e58fafffdaa9a96ae457ea4ad070ffa81390e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:54:27 np0005486808 podman[273030]: 2025-10-14 08:54:27.110519248 +0000 UTC m=+0.140257881 container attach 3454f502f849506dffd42c3c871e58fafffdaa9a96ae457ea4ad070ffa81390e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ptolemy, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:54:27 np0005486808 frosty_ptolemy[273046]: 167 167
Oct 14 04:54:27 np0005486808 systemd[1]: libpod-3454f502f849506dffd42c3c871e58fafffdaa9a96ae457ea4ad070ffa81390e.scope: Deactivated successfully.
Oct 14 04:54:27 np0005486808 podman[273030]: 2025-10-14 08:54:27.112792234 +0000 UTC m=+0.142530877 container died 3454f502f849506dffd42c3c871e58fafffdaa9a96ae457ea4ad070ffa81390e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 14 04:54:27 np0005486808 systemd[1]: var-lib-containers-storage-overlay-6f3bc74af7b6a9b005f5fab2465204fa9bd9ffc60599360f32a29ee68606060b-merged.mount: Deactivated successfully.
Oct 14 04:54:27 np0005486808 podman[273030]: 2025-10-14 08:54:27.158797549 +0000 UTC m=+0.188536162 container remove 3454f502f849506dffd42c3c871e58fafffdaa9a96ae457ea4ad070ffa81390e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:54:27 np0005486808 systemd[1]: libpod-conmon-3454f502f849506dffd42c3c871e58fafffdaa9a96ae457ea4ad070ffa81390e.scope: Deactivated successfully.
Oct 14 04:54:27 np0005486808 podman[273070]: 2025-10-14 08:54:27.30927855 +0000 UTC m=+0.045068102 container create eda9e79637c22c7f9b11b04ffcd90d01d2c65cf9301037363eaa73d97ac0607b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_engelbart, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 14 04:54:27 np0005486808 systemd[1]: Started libpod-conmon-eda9e79637c22c7f9b11b04ffcd90d01d2c65cf9301037363eaa73d97ac0607b.scope.
Oct 14 04:54:27 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:54:27 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c36a433cc282e90e55e772c6f60116a67ebac7f44367d7875e6a153e418924f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:54:27 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c36a433cc282e90e55e772c6f60116a67ebac7f44367d7875e6a153e418924f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:54:27 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c36a433cc282e90e55e772c6f60116a67ebac7f44367d7875e6a153e418924f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:54:27 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c36a433cc282e90e55e772c6f60116a67ebac7f44367d7875e6a153e418924f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:54:27 np0005486808 podman[273070]: 2025-10-14 08:54:27.286743514 +0000 UTC m=+0.022533116 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:54:27 np0005486808 podman[273070]: 2025-10-14 08:54:27.405149315 +0000 UTC m=+0.140938967 container init eda9e79637c22c7f9b11b04ffcd90d01d2c65cf9301037363eaa73d97ac0607b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 14 04:54:27 np0005486808 podman[273070]: 2025-10-14 08:54:27.416206757 +0000 UTC m=+0.151996349 container start eda9e79637c22c7f9b11b04ffcd90d01d2c65cf9301037363eaa73d97ac0607b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_engelbart, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 14 04:54:27 np0005486808 podman[273070]: 2025-10-14 08:54:27.41997906 +0000 UTC m=+0.155768612 container attach eda9e79637c22c7f9b11b04ffcd90d01d2c65cf9301037363eaa73d97ac0607b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_engelbart, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:54:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:54:27 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1071: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:54:28 np0005486808 cool_engelbart[273087]: [
Oct 14 04:54:28 np0005486808 cool_engelbart[273087]:    {
Oct 14 04:54:28 np0005486808 cool_engelbart[273087]:        "available": false,
Oct 14 04:54:28 np0005486808 cool_engelbart[273087]:        "ceph_device": false,
Oct 14 04:54:28 np0005486808 cool_engelbart[273087]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct 14 04:54:28 np0005486808 cool_engelbart[273087]:        "lsm_data": {},
Oct 14 04:54:28 np0005486808 cool_engelbart[273087]:        "lvs": [],
Oct 14 04:54:28 np0005486808 cool_engelbart[273087]:        "path": "/dev/sr0",
Oct 14 04:54:28 np0005486808 cool_engelbart[273087]:        "rejected_reasons": [
Oct 14 04:54:28 np0005486808 cool_engelbart[273087]:            "Has a FileSystem",
Oct 14 04:54:28 np0005486808 cool_engelbart[273087]:            "Insufficient space (<5GB)"
Oct 14 04:54:28 np0005486808 cool_engelbart[273087]:        ],
Oct 14 04:54:28 np0005486808 cool_engelbart[273087]:        "sys_api": {
Oct 14 04:54:28 np0005486808 cool_engelbart[273087]:            "actuators": null,
Oct 14 04:54:28 np0005486808 cool_engelbart[273087]:            "device_nodes": "sr0",
Oct 14 04:54:28 np0005486808 cool_engelbart[273087]:            "devname": "sr0",
Oct 14 04:54:28 np0005486808 cool_engelbart[273087]:            "human_readable_size": "482.00 KB",
Oct 14 04:54:28 np0005486808 cool_engelbart[273087]:            "id_bus": "ata",
Oct 14 04:54:28 np0005486808 cool_engelbart[273087]:            "model": "QEMU DVD-ROM",
Oct 14 04:54:28 np0005486808 cool_engelbart[273087]:            "nr_requests": "2",
Oct 14 04:54:28 np0005486808 cool_engelbart[273087]:            "parent": "/dev/sr0",
Oct 14 04:54:28 np0005486808 cool_engelbart[273087]:            "partitions": {},
Oct 14 04:54:28 np0005486808 cool_engelbart[273087]:            "path": "/dev/sr0",
Oct 14 04:54:28 np0005486808 cool_engelbart[273087]:            "removable": "1",
Oct 14 04:54:28 np0005486808 cool_engelbart[273087]:            "rev": "2.5+",
Oct 14 04:54:28 np0005486808 cool_engelbart[273087]:            "ro": "0",
Oct 14 04:54:28 np0005486808 cool_engelbart[273087]:            "rotational": "0",
Oct 14 04:54:28 np0005486808 cool_engelbart[273087]:            "sas_address": "",
Oct 14 04:54:28 np0005486808 cool_engelbart[273087]:            "sas_device_handle": "",
Oct 14 04:54:28 np0005486808 cool_engelbart[273087]:            "scheduler_mode": "mq-deadline",
Oct 14 04:54:28 np0005486808 cool_engelbart[273087]:            "sectors": 0,
Oct 14 04:54:28 np0005486808 cool_engelbart[273087]:            "sectorsize": "2048",
Oct 14 04:54:28 np0005486808 cool_engelbart[273087]:            "size": 493568.0,
Oct 14 04:54:28 np0005486808 cool_engelbart[273087]:            "support_discard": "2048",
Oct 14 04:54:28 np0005486808 cool_engelbart[273087]:            "type": "disk",
Oct 14 04:54:28 np0005486808 cool_engelbart[273087]:            "vendor": "QEMU"
Oct 14 04:54:28 np0005486808 cool_engelbart[273087]:        }
Oct 14 04:54:28 np0005486808 cool_engelbart[273087]:    }
Oct 14 04:54:28 np0005486808 cool_engelbart[273087]: ]
Oct 14 04:54:28 np0005486808 systemd[1]: libpod-eda9e79637c22c7f9b11b04ffcd90d01d2c65cf9301037363eaa73d97ac0607b.scope: Deactivated successfully.
Oct 14 04:54:28 np0005486808 systemd[1]: libpod-eda9e79637c22c7f9b11b04ffcd90d01d2c65cf9301037363eaa73d97ac0607b.scope: Consumed 1.533s CPU time.
Oct 14 04:54:28 np0005486808 podman[273070]: 2025-10-14 08:54:28.891145175 +0000 UTC m=+1.626934727 container died eda9e79637c22c7f9b11b04ffcd90d01d2c65cf9301037363eaa73d97ac0607b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_engelbart, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 14 04:54:28 np0005486808 systemd[1]: var-lib-containers-storage-overlay-7c36a433cc282e90e55e772c6f60116a67ebac7f44367d7875e6a153e418924f-merged.mount: Deactivated successfully.
Oct 14 04:54:28 np0005486808 podman[273070]: 2025-10-14 08:54:28.955440581 +0000 UTC m=+1.691230123 container remove eda9e79637c22c7f9b11b04ffcd90d01d2c65cf9301037363eaa73d97ac0607b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_engelbart, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:54:28 np0005486808 systemd[1]: libpod-conmon-eda9e79637c22c7f9b11b04ffcd90d01d2c65cf9301037363eaa73d97ac0607b.scope: Deactivated successfully.
Oct 14 04:54:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:54:29 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:54:29 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:54:29 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:54:29 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 14 04:54:29 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 14 04:54:29 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:54:29 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:54:29 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 04:54:29 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:54:29 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 04:54:29 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:54:29 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 2a96176a-3a0f-4d4c-ae60-09f9183e6f69 does not exist
Oct 14 04:54:29 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 456e7942-f6da-4403-a642-671685e26a85 does not exist
Oct 14 04:54:29 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev acd3702a-0fb0-4347-b483-43cd66b6f2d8 does not exist
Oct 14 04:54:29 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 04:54:29 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 04:54:29 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 04:54:29 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:54:29 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:54:29 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:54:29 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:54:29 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:54:29 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 14 04:54:29 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:54:29 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:54:29 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:54:29 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:54:29 np0005486808 podman[275250]: 2025-10-14 08:54:29.660616854 +0000 UTC m=+0.062448112 container create a75b447f8b18d6d1c101c52a23095a32038e9c177472744fa2da76c04d294604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_hugle, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:54:29 np0005486808 systemd[1]: Started libpod-conmon-a75b447f8b18d6d1c101c52a23095a32038e9c177472744fa2da76c04d294604.scope.
Oct 14 04:54:29 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:54:29 np0005486808 podman[275250]: 2025-10-14 08:54:29.638131859 +0000 UTC m=+0.039963117 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:54:29 np0005486808 podman[275250]: 2025-10-14 08:54:29.747874336 +0000 UTC m=+0.149705624 container init a75b447f8b18d6d1c101c52a23095a32038e9c177472744fa2da76c04d294604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 14 04:54:29 np0005486808 podman[275250]: 2025-10-14 08:54:29.760823835 +0000 UTC m=+0.162655083 container start a75b447f8b18d6d1c101c52a23095a32038e9c177472744fa2da76c04d294604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:54:29 np0005486808 podman[275250]: 2025-10-14 08:54:29.76547566 +0000 UTC m=+0.167306918 container attach a75b447f8b18d6d1c101c52a23095a32038e9c177472744fa2da76c04d294604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_hugle, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:54:29 np0005486808 loving_hugle[275266]: 167 167
Oct 14 04:54:29 np0005486808 systemd[1]: libpod-a75b447f8b18d6d1c101c52a23095a32038e9c177472744fa2da76c04d294604.scope: Deactivated successfully.
Oct 14 04:54:29 np0005486808 podman[275250]: 2025-10-14 08:54:29.768618037 +0000 UTC m=+0.170449295 container died a75b447f8b18d6d1c101c52a23095a32038e9c177472744fa2da76c04d294604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:54:29 np0005486808 systemd[1]: var-lib-containers-storage-overlay-1fcf1fb91b8081beaa4a339846f1ed6de75a2f25ffc3782d4164984bce0ef3a4-merged.mount: Deactivated successfully.
Oct 14 04:54:29 np0005486808 podman[275250]: 2025-10-14 08:54:29.822940367 +0000 UTC m=+0.224771615 container remove a75b447f8b18d6d1c101c52a23095a32038e9c177472744fa2da76c04d294604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 04:54:29 np0005486808 systemd[1]: libpod-conmon-a75b447f8b18d6d1c101c52a23095a32038e9c177472744fa2da76c04d294604.scope: Deactivated successfully.
Oct 14 04:54:30 np0005486808 podman[275290]: 2025-10-14 08:54:30.051296949 +0000 UTC m=+0.042874798 container create 54aa2136a14d0211ba68aada8d946ada6229c19abe3ca5b73dfb403d7029bb87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_herschel, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 14 04:54:30 np0005486808 systemd[1]: Started libpod-conmon-54aa2136a14d0211ba68aada8d946ada6229c19abe3ca5b73dfb403d7029bb87.scope.
Oct 14 04:54:30 np0005486808 podman[275290]: 2025-10-14 08:54:30.030473876 +0000 UTC m=+0.022051715 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:54:30 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:54:30 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a662eb70535f52a04fba1ef8e0f11b0e012cabbe4e11a7d4ad6030dc1603a5b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:54:30 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a662eb70535f52a04fba1ef8e0f11b0e012cabbe4e11a7d4ad6030dc1603a5b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:54:30 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a662eb70535f52a04fba1ef8e0f11b0e012cabbe4e11a7d4ad6030dc1603a5b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:54:30 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a662eb70535f52a04fba1ef8e0f11b0e012cabbe4e11a7d4ad6030dc1603a5b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:54:30 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a662eb70535f52a04fba1ef8e0f11b0e012cabbe4e11a7d4ad6030dc1603a5b1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:54:30 np0005486808 podman[275290]: 2025-10-14 08:54:30.152808673 +0000 UTC m=+0.144386492 container init 54aa2136a14d0211ba68aada8d946ada6229c19abe3ca5b73dfb403d7029bb87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_herschel, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 14 04:54:30 np0005486808 podman[275290]: 2025-10-14 08:54:30.1591701 +0000 UTC m=+0.150747919 container start 54aa2136a14d0211ba68aada8d946ada6229c19abe3ca5b73dfb403d7029bb87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:54:30 np0005486808 podman[275290]: 2025-10-14 08:54:30.163409045 +0000 UTC m=+0.154986864 container attach 54aa2136a14d0211ba68aada8d946ada6229c19abe3ca5b73dfb403d7029bb87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_herschel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 14 04:54:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:31.119 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:54:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:31.122 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 14 04:54:31 np0005486808 sharp_herschel[275306]: --> passed data devices: 0 physical, 3 LVM
Oct 14 04:54:31 np0005486808 sharp_herschel[275306]: --> relative data size: 1.0
Oct 14 04:54:31 np0005486808 sharp_herschel[275306]: --> All data devices are unavailable
Oct 14 04:54:31 np0005486808 systemd[1]: libpod-54aa2136a14d0211ba68aada8d946ada6229c19abe3ca5b73dfb403d7029bb87.scope: Deactivated successfully.
Oct 14 04:54:31 np0005486808 systemd[1]: libpod-54aa2136a14d0211ba68aada8d946ada6229c19abe3ca5b73dfb403d7029bb87.scope: Consumed 1.146s CPU time.
Oct 14 04:54:31 np0005486808 podman[275290]: 2025-10-14 08:54:31.346173937 +0000 UTC m=+1.337751806 container died 54aa2136a14d0211ba68aada8d946ada6229c19abe3ca5b73dfb403d7029bb87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:54:31 np0005486808 systemd[1]: var-lib-containers-storage-overlay-a662eb70535f52a04fba1ef8e0f11b0e012cabbe4e11a7d4ad6030dc1603a5b1-merged.mount: Deactivated successfully.
Oct 14 04:54:31 np0005486808 podman[275290]: 2025-10-14 08:54:31.428304121 +0000 UTC m=+1.419881960 container remove 54aa2136a14d0211ba68aada8d946ada6229c19abe3ca5b73dfb403d7029bb87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_herschel, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 14 04:54:31 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:54:31 np0005486808 systemd[1]: libpod-conmon-54aa2136a14d0211ba68aada8d946ada6229c19abe3ca5b73dfb403d7029bb87.scope: Deactivated successfully.
Oct 14 04:54:32 np0005486808 podman[275490]: 2025-10-14 08:54:32.280409528 +0000 UTC m=+0.057238493 container create 6de3d57a1597297df8bd38c7798c68503e8856f7a7d3dc2609abe8c342553901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Oct 14 04:54:32 np0005486808 systemd[1]: Started libpod-conmon-6de3d57a1597297df8bd38c7798c68503e8856f7a7d3dc2609abe8c342553901.scope.
Oct 14 04:54:32 np0005486808 podman[275490]: 2025-10-14 08:54:32.252453139 +0000 UTC m=+0.029282204 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:54:32 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:54:32 np0005486808 podman[275490]: 2025-10-14 08:54:32.373590306 +0000 UTC m=+0.150419361 container init 6de3d57a1597297df8bd38c7798c68503e8856f7a7d3dc2609abe8c342553901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_rhodes, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 04:54:32 np0005486808 podman[275490]: 2025-10-14 08:54:32.382216119 +0000 UTC m=+0.159045084 container start 6de3d57a1597297df8bd38c7798c68503e8856f7a7d3dc2609abe8c342553901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 14 04:54:32 np0005486808 youthful_rhodes[275506]: 167 167
Oct 14 04:54:32 np0005486808 systemd[1]: libpod-6de3d57a1597297df8bd38c7798c68503e8856f7a7d3dc2609abe8c342553901.scope: Deactivated successfully.
Oct 14 04:54:32 np0005486808 podman[275490]: 2025-10-14 08:54:32.386822353 +0000 UTC m=+0.163651358 container attach 6de3d57a1597297df8bd38c7798c68503e8856f7a7d3dc2609abe8c342553901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_rhodes, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 14 04:54:32 np0005486808 conmon[275506]: conmon 6de3d57a1597297df8bd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6de3d57a1597297df8bd38c7798c68503e8856f7a7d3dc2609abe8c342553901.scope/container/memory.events
Oct 14 04:54:32 np0005486808 podman[275490]: 2025-10-14 08:54:32.387714525 +0000 UTC m=+0.164543500 container died 6de3d57a1597297df8bd38c7798c68503e8856f7a7d3dc2609abe8c342553901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_rhodes, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 14 04:54:32 np0005486808 systemd[1]: var-lib-containers-storage-overlay-586100e25cef6ae844784efd284ac3f9b7a3d0d9dbc58a38312ccb350b58b6f7-merged.mount: Deactivated successfully.
Oct 14 04:54:32 np0005486808 podman[275490]: 2025-10-14 08:54:32.427529397 +0000 UTC m=+0.204358382 container remove 6de3d57a1597297df8bd38c7798c68503e8856f7a7d3dc2609abe8c342553901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:54:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:54:32 np0005486808 systemd[1]: libpod-conmon-6de3d57a1597297df8bd38c7798c68503e8856f7a7d3dc2609abe8c342553901.scope: Deactivated successfully.
Oct 14 04:54:32 np0005486808 podman[275530]: 2025-10-14 08:54:32.650446195 +0000 UTC m=+0.046506228 container create d28cbc6d46f31618dec9057a166f26d98b381f698e2a17c6ab84e70b5c49ed61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lovelace, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 14 04:54:32 np0005486808 systemd[1]: Started libpod-conmon-d28cbc6d46f31618dec9057a166f26d98b381f698e2a17c6ab84e70b5c49ed61.scope.
Oct 14 04:54:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:54:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:54:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_08:54:32
Oct 14 04:54:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 04:54:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 04:54:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'images', 'default.rgw.log', '.mgr', 'vms', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'backups']
Oct 14 04:54:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 04:54:32 np0005486808 podman[275530]: 2025-10-14 08:54:32.628550655 +0000 UTC m=+0.024610678 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:54:32 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:54:32 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/656c6cca969e86ccb1017915e3263d30c605f2817e29dba9fc149e6d7dbd94a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:54:32 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/656c6cca969e86ccb1017915e3263d30c605f2817e29dba9fc149e6d7dbd94a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:54:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:54:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:54:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:54:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:54:32 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/656c6cca969e86ccb1017915e3263d30c605f2817e29dba9fc149e6d7dbd94a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:54:32 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/656c6cca969e86ccb1017915e3263d30c605f2817e29dba9fc149e6d7dbd94a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:54:32 np0005486808 podman[275530]: 2025-10-14 08:54:32.773867659 +0000 UTC m=+0.169927742 container init d28cbc6d46f31618dec9057a166f26d98b381f698e2a17c6ab84e70b5c49ed61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:54:32 np0005486808 podman[275530]: 2025-10-14 08:54:32.78527744 +0000 UTC m=+0.181337483 container start d28cbc6d46f31618dec9057a166f26d98b381f698e2a17c6ab84e70b5c49ed61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 14 04:54:32 np0005486808 podman[275530]: 2025-10-14 08:54:32.790550051 +0000 UTC m=+0.186610094 container attach d28cbc6d46f31618dec9057a166f26d98b381f698e2a17c6ab84e70b5c49ed61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 14 04:54:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 04:54:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:54:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 04:54:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:54:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:54:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:54:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:54:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:54:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:54:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:54:33 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1074: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]: {
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:    "0": [
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:        {
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:            "devices": [
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:                "/dev/loop3"
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:            ],
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:            "lv_name": "ceph_lv0",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:            "lv_size": "21470642176",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:            "name": "ceph_lv0",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:            "tags": {
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:                "ceph.cluster_name": "ceph",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:                "ceph.crush_device_class": "",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:                "ceph.encrypted": "0",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:                "ceph.osd_id": "0",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:                "ceph.type": "block",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:                "ceph.vdo": "0"
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:            },
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:            "type": "block",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:            "vg_name": "ceph_vg0"
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:        }
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:    ],
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:    "1": [
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:        {
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:            "devices": [
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:                "/dev/loop4"
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:            ],
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:            "lv_name": "ceph_lv1",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:            "lv_size": "21470642176",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:            "name": "ceph_lv1",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:            "tags": {
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:                "ceph.cluster_name": "ceph",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:                "ceph.crush_device_class": "",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:                "ceph.encrypted": "0",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:                "ceph.osd_id": "1",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:                "ceph.type": "block",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:                "ceph.vdo": "0"
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:            },
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:            "type": "block",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:            "vg_name": "ceph_vg1"
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:        }
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:    ],
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:    "2": [
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:        {
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:            "devices": [
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:                "/dev/loop5"
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:            ],
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:            "lv_name": "ceph_lv2",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:            "lv_size": "21470642176",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:            "name": "ceph_lv2",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:            "tags": {
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:                "ceph.cluster_name": "ceph",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:                "ceph.crush_device_class": "",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:                "ceph.encrypted": "0",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:                "ceph.osd_id": "2",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:                "ceph.type": "block",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:                "ceph.vdo": "0"
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:            },
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:            "type": "block",
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:            "vg_name": "ceph_vg2"
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:        }
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]:    ]
Oct 14 04:54:33 np0005486808 quirky_lovelace[275547]: }
Oct 14 04:54:33 np0005486808 systemd[1]: libpod-d28cbc6d46f31618dec9057a166f26d98b381f698e2a17c6ab84e70b5c49ed61.scope: Deactivated successfully.
Oct 14 04:54:33 np0005486808 podman[275530]: 2025-10-14 08:54:33.566867978 +0000 UTC m=+0.962928171 container died d28cbc6d46f31618dec9057a166f26d98b381f698e2a17c6ab84e70b5c49ed61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lovelace, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:54:33 np0005486808 systemd[1]: var-lib-containers-storage-overlay-656c6cca969e86ccb1017915e3263d30c605f2817e29dba9fc149e6d7dbd94a0-merged.mount: Deactivated successfully.
Oct 14 04:54:33 np0005486808 podman[275530]: 2025-10-14 08:54:33.624350596 +0000 UTC m=+1.020410599 container remove d28cbc6d46f31618dec9057a166f26d98b381f698e2a17c6ab84e70b5c49ed61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lovelace, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 14 04:54:33 np0005486808 systemd[1]: libpod-conmon-d28cbc6d46f31618dec9057a166f26d98b381f698e2a17c6ab84e70b5c49ed61.scope: Deactivated successfully.
Oct 14 04:54:34 np0005486808 podman[275709]: 2025-10-14 08:54:34.223075423 +0000 UTC m=+0.047201425 container create d140b8986c603f264678c1ca55d327b49c03af9163d35081aec7ff138514bbb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_grothendieck, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:54:34 np0005486808 systemd[1]: Started libpod-conmon-d140b8986c603f264678c1ca55d327b49c03af9163d35081aec7ff138514bbb2.scope.
Oct 14 04:54:34 np0005486808 podman[275709]: 2025-10-14 08:54:34.199237115 +0000 UTC m=+0.023363177 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:54:34 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:54:34 np0005486808 podman[275709]: 2025-10-14 08:54:34.306054339 +0000 UTC m=+0.130180371 container init d140b8986c603f264678c1ca55d327b49c03af9163d35081aec7ff138514bbb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_grothendieck, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 14 04:54:34 np0005486808 podman[275709]: 2025-10-14 08:54:34.31133484 +0000 UTC m=+0.135460852 container start d140b8986c603f264678c1ca55d327b49c03af9163d35081aec7ff138514bbb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 14 04:54:34 np0005486808 podman[275709]: 2025-10-14 08:54:34.314495148 +0000 UTC m=+0.138621210 container attach d140b8986c603f264678c1ca55d327b49c03af9163d35081aec7ff138514bbb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 14 04:54:34 np0005486808 infallible_grothendieck[275725]: 167 167
Oct 14 04:54:34 np0005486808 systemd[1]: libpod-d140b8986c603f264678c1ca55d327b49c03af9163d35081aec7ff138514bbb2.scope: Deactivated successfully.
Oct 14 04:54:34 np0005486808 podman[275709]: 2025-10-14 08:54:34.315718348 +0000 UTC m=+0.139844360 container died d140b8986c603f264678c1ca55d327b49c03af9163d35081aec7ff138514bbb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 14 04:54:34 np0005486808 systemd[1]: var-lib-containers-storage-overlay-fa51882d786509b7bf76ac687330626d800da1b67b4734597f3e53a64fd2de15-merged.mount: Deactivated successfully.
Oct 14 04:54:34 np0005486808 podman[275709]: 2025-10-14 08:54:34.353044378 +0000 UTC m=+0.177170390 container remove d140b8986c603f264678c1ca55d327b49c03af9163d35081aec7ff138514bbb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_grothendieck, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:54:34 np0005486808 systemd[1]: libpod-conmon-d140b8986c603f264678c1ca55d327b49c03af9163d35081aec7ff138514bbb2.scope: Deactivated successfully.
Oct 14 04:54:34 np0005486808 podman[275748]: 2025-10-14 08:54:34.523491302 +0000 UTC m=+0.047263936 container create ae71bb5eaa1bd564beeaac9bae410c1abd8ca7b6de374d2229fdb9903c3d7811 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_knuth, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 14 04:54:34 np0005486808 systemd[1]: Started libpod-conmon-ae71bb5eaa1bd564beeaac9bae410c1abd8ca7b6de374d2229fdb9903c3d7811.scope.
Oct 14 04:54:34 np0005486808 podman[275748]: 2025-10-14 08:54:34.500310401 +0000 UTC m=+0.024083055 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:54:34 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:54:34 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cf0a1ea437792fcd3ae19594431cc7ce9bcd240c9c09519e7f29b7307cb1d76/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:54:34 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cf0a1ea437792fcd3ae19594431cc7ce9bcd240c9c09519e7f29b7307cb1d76/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:54:34 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cf0a1ea437792fcd3ae19594431cc7ce9bcd240c9c09519e7f29b7307cb1d76/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:54:34 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cf0a1ea437792fcd3ae19594431cc7ce9bcd240c9c09519e7f29b7307cb1d76/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:54:34 np0005486808 podman[275748]: 2025-10-14 08:54:34.614790054 +0000 UTC m=+0.138562738 container init ae71bb5eaa1bd564beeaac9bae410c1abd8ca7b6de374d2229fdb9903c3d7811 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_knuth, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 14 04:54:34 np0005486808 podman[275748]: 2025-10-14 08:54:34.622075574 +0000 UTC m=+0.145848168 container start ae71bb5eaa1bd564beeaac9bae410c1abd8ca7b6de374d2229fdb9903c3d7811 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:54:34 np0005486808 podman[275748]: 2025-10-14 08:54:34.624990656 +0000 UTC m=+0.148763330 container attach ae71bb5eaa1bd564beeaac9bae410c1abd8ca7b6de374d2229fdb9903c3d7811 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_knuth, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:54:35 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:54:35 np0005486808 suspicious_knuth[275764]: {
Oct 14 04:54:35 np0005486808 suspicious_knuth[275764]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 04:54:35 np0005486808 suspicious_knuth[275764]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:54:35 np0005486808 suspicious_knuth[275764]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 04:54:35 np0005486808 suspicious_knuth[275764]:        "osd_id": 2,
Oct 14 04:54:35 np0005486808 suspicious_knuth[275764]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:54:35 np0005486808 suspicious_knuth[275764]:        "type": "bluestore"
Oct 14 04:54:35 np0005486808 suspicious_knuth[275764]:    },
Oct 14 04:54:35 np0005486808 suspicious_knuth[275764]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 04:54:35 np0005486808 suspicious_knuth[275764]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:54:35 np0005486808 suspicious_knuth[275764]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 04:54:35 np0005486808 suspicious_knuth[275764]:        "osd_id": 1,
Oct 14 04:54:35 np0005486808 suspicious_knuth[275764]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:54:35 np0005486808 suspicious_knuth[275764]:        "type": "bluestore"
Oct 14 04:54:35 np0005486808 suspicious_knuth[275764]:    },
Oct 14 04:54:35 np0005486808 suspicious_knuth[275764]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 04:54:35 np0005486808 suspicious_knuth[275764]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:54:35 np0005486808 suspicious_knuth[275764]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 04:54:35 np0005486808 suspicious_knuth[275764]:        "osd_id": 0,
Oct 14 04:54:35 np0005486808 suspicious_knuth[275764]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:54:35 np0005486808 suspicious_knuth[275764]:        "type": "bluestore"
Oct 14 04:54:35 np0005486808 suspicious_knuth[275764]:    }
Oct 14 04:54:35 np0005486808 suspicious_knuth[275764]: }
Oct 14 04:54:35 np0005486808 podman[275748]: 2025-10-14 08:54:35.573206622 +0000 UTC m=+1.096979266 container died ae71bb5eaa1bd564beeaac9bae410c1abd8ca7b6de374d2229fdb9903c3d7811 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:54:35 np0005486808 systemd[1]: libpod-ae71bb5eaa1bd564beeaac9bae410c1abd8ca7b6de374d2229fdb9903c3d7811.scope: Deactivated successfully.
Oct 14 04:54:35 np0005486808 systemd[1]: var-lib-containers-storage-overlay-3cf0a1ea437792fcd3ae19594431cc7ce9bcd240c9c09519e7f29b7307cb1d76-merged.mount: Deactivated successfully.
Oct 14 04:54:35 np0005486808 podman[275748]: 2025-10-14 08:54:35.651899933 +0000 UTC m=+1.175672527 container remove ae71bb5eaa1bd564beeaac9bae410c1abd8ca7b6de374d2229fdb9903c3d7811 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 14 04:54:35 np0005486808 podman[275798]: 2025-10-14 08:54:35.664003101 +0000 UTC m=+0.076593230 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Oct 14 04:54:35 np0005486808 systemd[1]: libpod-conmon-ae71bb5eaa1bd564beeaac9bae410c1abd8ca7b6de374d2229fdb9903c3d7811.scope: Deactivated successfully.
Oct 14 04:54:35 np0005486808 podman[275797]: 2025-10-14 08:54:35.685455481 +0000 UTC m=+0.096329927 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Oct 14 04:54:35 np0005486808 nova_compute[259627]: 2025-10-14 08:54:35.702 2 DEBUG oslo_concurrency.lockutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Acquiring lock "30af67a2-4b44-481c-8ab4-296e93c1c517" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:54:35 np0005486808 nova_compute[259627]: 2025-10-14 08:54:35.702 2 DEBUG oslo_concurrency.lockutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:54:35 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:54:35 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:54:35 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:54:35 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:54:35 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 8f92683b-70fd-45ec-9e51-eff03cede30e does not exist
Oct 14 04:54:35 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev f38e466b-feb9-4ea1-8971-780599c6a063 does not exist
Oct 14 04:54:35 np0005486808 nova_compute[259627]: 2025-10-14 08:54:35.728 2 DEBUG nova.compute.manager [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 04:54:35 np0005486808 nova_compute[259627]: 2025-10-14 08:54:35.813 2 DEBUG oslo_concurrency.lockutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:54:35 np0005486808 nova_compute[259627]: 2025-10-14 08:54:35.814 2 DEBUG oslo_concurrency.lockutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:54:35 np0005486808 nova_compute[259627]: 2025-10-14 08:54:35.821 2 DEBUG nova.virt.hardware [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 04:54:35 np0005486808 nova_compute[259627]: 2025-10-14 08:54:35.822 2 INFO nova.compute.claims [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 04:54:35 np0005486808 nova_compute[259627]: 2025-10-14 08:54:35.937 2 DEBUG oslo_concurrency.processutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:54:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:54:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3404368946' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:54:36 np0005486808 nova_compute[259627]: 2025-10-14 08:54:36.355 2 DEBUG oslo_concurrency.processutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:54:36 np0005486808 nova_compute[259627]: 2025-10-14 08:54:36.361 2 DEBUG nova.compute.provider_tree [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:54:36 np0005486808 nova_compute[259627]: 2025-10-14 08:54:36.377 2 DEBUG nova.scheduler.client.report [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:54:36 np0005486808 nova_compute[259627]: 2025-10-14 08:54:36.405 2 DEBUG oslo_concurrency.lockutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:54:36 np0005486808 nova_compute[259627]: 2025-10-14 08:54:36.406 2 DEBUG nova.compute.manager [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 04:54:36 np0005486808 nova_compute[259627]: 2025-10-14 08:54:36.459 2 DEBUG nova.compute.manager [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 04:54:36 np0005486808 nova_compute[259627]: 2025-10-14 08:54:36.460 2 DEBUG nova.network.neutron [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 04:54:36 np0005486808 nova_compute[259627]: 2025-10-14 08:54:36.481 2 INFO nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 04:54:36 np0005486808 nova_compute[259627]: 2025-10-14 08:54:36.498 2 DEBUG nova.compute.manager [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 04:54:36 np0005486808 nova_compute[259627]: 2025-10-14 08:54:36.581 2 DEBUG nova.compute.manager [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 04:54:36 np0005486808 nova_compute[259627]: 2025-10-14 08:54:36.583 2 DEBUG nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 04:54:36 np0005486808 nova_compute[259627]: 2025-10-14 08:54:36.583 2 INFO nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Creating image(s)#033[00m
Oct 14 04:54:36 np0005486808 nova_compute[259627]: 2025-10-14 08:54:36.601 2 DEBUG nova.storage.rbd_utils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] rbd image 30af67a2-4b44-481c-8ab4-296e93c1c517_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:54:36 np0005486808 nova_compute[259627]: 2025-10-14 08:54:36.622 2 DEBUG nova.storage.rbd_utils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] rbd image 30af67a2-4b44-481c-8ab4-296e93c1c517_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:54:36 np0005486808 nova_compute[259627]: 2025-10-14 08:54:36.641 2 DEBUG nova.storage.rbd_utils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] rbd image 30af67a2-4b44-481c-8ab4-296e93c1c517_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:54:36 np0005486808 nova_compute[259627]: 2025-10-14 08:54:36.644 2 DEBUG oslo_concurrency.processutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:54:36 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:54:36 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:54:36 np0005486808 nova_compute[259627]: 2025-10-14 08:54:36.713 2 DEBUG oslo_concurrency.processutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:54:36 np0005486808 nova_compute[259627]: 2025-10-14 08:54:36.714 2 DEBUG oslo_concurrency.lockutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:54:36 np0005486808 nova_compute[259627]: 2025-10-14 08:54:36.715 2 DEBUG oslo_concurrency.lockutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:54:36 np0005486808 nova_compute[259627]: 2025-10-14 08:54:36.717 2 DEBUG oslo_concurrency.lockutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:54:36 np0005486808 nova_compute[259627]: 2025-10-14 08:54:36.736 2 DEBUG nova.storage.rbd_utils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] rbd image 30af67a2-4b44-481c-8ab4-296e93c1c517_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:54:36 np0005486808 nova_compute[259627]: 2025-10-14 08:54:36.739 2 DEBUG oslo_concurrency.processutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 30af67a2-4b44-481c-8ab4-296e93c1c517_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:54:36 np0005486808 nova_compute[259627]: 2025-10-14 08:54:36.771 2 WARNING oslo_policy.policy [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Oct 14 04:54:36 np0005486808 nova_compute[259627]: 2025-10-14 08:54:36.772 2 WARNING oslo_policy.policy [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Oct 14 04:54:36 np0005486808 nova_compute[259627]: 2025-10-14 08:54:36.775 2 DEBUG nova.policy [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '831826dabb48463c92f24c277df4039e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f5b8fd07d6d54bda9a0257bf72d4b37f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 04:54:36 np0005486808 nova_compute[259627]: 2025-10-14 08:54:36.996 2 DEBUG oslo_concurrency.processutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 30af67a2-4b44-481c-8ab4-296e93c1c517_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.257s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:54:37 np0005486808 nova_compute[259627]: 2025-10-14 08:54:37.038 2 DEBUG nova.storage.rbd_utils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] resizing rbd image 30af67a2-4b44-481c-8ab4-296e93c1c517_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 04:54:37 np0005486808 nova_compute[259627]: 2025-10-14 08:54:37.106 2 DEBUG nova.objects.instance [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lazy-loading 'migration_context' on Instance uuid 30af67a2-4b44-481c-8ab4-296e93c1c517 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 04:54:37 np0005486808 nova_compute[259627]: 2025-10-14 08:54:37.119 2 DEBUG nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 04:54:37 np0005486808 nova_compute[259627]: 2025-10-14 08:54:37.119 2 DEBUG nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Ensure instance console log exists: /var/lib/nova/instances/30af67a2-4b44-481c-8ab4-296e93c1c517/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 04:54:37 np0005486808 nova_compute[259627]: 2025-10-14 08:54:37.120 2 DEBUG oslo_concurrency.lockutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:54:37 np0005486808 nova_compute[259627]: 2025-10-14 08:54:37.120 2 DEBUG oslo_concurrency.lockutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:54:37 np0005486808 nova_compute[259627]: 2025-10-14 08:54:37.120 2 DEBUG oslo_concurrency.lockutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:54:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:37.124 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 04:54:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:54:37 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:54:37 np0005486808 nova_compute[259627]: 2025-10-14 08:54:37.791 2 DEBUG nova.network.neutron [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Successfully created port: f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 04:54:37 np0005486808 nova_compute[259627]: 2025-10-14 08:54:37.968 2 DEBUG oslo_concurrency.lockutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Acquiring lock "f1df3849-6811-41a9-9c70-f10a6863b4f9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:54:37 np0005486808 nova_compute[259627]: 2025-10-14 08:54:37.969 2 DEBUG oslo_concurrency.lockutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "f1df3849-6811-41a9-9c70-f10a6863b4f9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:54:37 np0005486808 nova_compute[259627]: 2025-10-14 08:54:37.986 2 DEBUG nova.compute.manager [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 04:54:38 np0005486808 nova_compute[259627]: 2025-10-14 08:54:38.005 2 DEBUG oslo_concurrency.processutils [None req-83798e37-e1ca-4d67-8680-82a778ae42d9 296d175bbbcb4e68b5452e11aae2ccb2 3736920871984ebdb2935d7d67386536 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:54:38 np0005486808 nova_compute[259627]: 2025-10-14 08:54:38.041 2 DEBUG oslo_concurrency.processutils [None req-83798e37-e1ca-4d67-8680-82a778ae42d9 296d175bbbcb4e68b5452e11aae2ccb2 3736920871984ebdb2935d7d67386536 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:54:38 np0005486808 nova_compute[259627]: 2025-10-14 08:54:38.056 2 DEBUG oslo_concurrency.lockutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:54:38 np0005486808 nova_compute[259627]: 2025-10-14 08:54:38.057 2 DEBUG oslo_concurrency.lockutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:54:38 np0005486808 nova_compute[259627]: 2025-10-14 08:54:38.070 2 DEBUG nova.virt.hardware [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 04:54:38 np0005486808 nova_compute[259627]: 2025-10-14 08:54:38.070 2 INFO nova.compute.claims [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Claim successful on node compute-0.ctlplane.example.com
Oct 14 04:54:38 np0005486808 nova_compute[259627]: 2025-10-14 08:54:38.225 2 DEBUG oslo_concurrency.processutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:54:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:54:38 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1634047493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:54:38 np0005486808 nova_compute[259627]: 2025-10-14 08:54:38.687 2 DEBUG oslo_concurrency.processutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:54:38 np0005486808 nova_compute[259627]: 2025-10-14 08:54:38.696 2 DEBUG nova.compute.provider_tree [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 04:54:38 np0005486808 nova_compute[259627]: 2025-10-14 08:54:38.731 2 DEBUG nova.scheduler.client.report [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 04:54:38 np0005486808 nova_compute[259627]: 2025-10-14 08:54:38.762 2 DEBUG oslo_concurrency.lockutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:54:38 np0005486808 nova_compute[259627]: 2025-10-14 08:54:38.764 2 DEBUG nova.compute.manager [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 04:54:38 np0005486808 nova_compute[259627]: 2025-10-14 08:54:38.808 2 DEBUG nova.compute.manager [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 04:54:38 np0005486808 nova_compute[259627]: 2025-10-14 08:54:38.809 2 DEBUG nova.network.neutron [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 04:54:38 np0005486808 nova_compute[259627]: 2025-10-14 08:54:38.833 2 INFO nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 04:54:38 np0005486808 nova_compute[259627]: 2025-10-14 08:54:38.847 2 DEBUG nova.compute.manager [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 04:54:38 np0005486808 nova_compute[259627]: 2025-10-14 08:54:38.911 2 DEBUG nova.network.neutron [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Successfully updated port: f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 04:54:38 np0005486808 nova_compute[259627]: 2025-10-14 08:54:38.932 2 DEBUG nova.compute.manager [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 04:54:38 np0005486808 nova_compute[259627]: 2025-10-14 08:54:38.934 2 DEBUG nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 04:54:38 np0005486808 nova_compute[259627]: 2025-10-14 08:54:38.935 2 INFO nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Creating image(s)
Oct 14 04:54:38 np0005486808 nova_compute[259627]: 2025-10-14 08:54:38.964 2 DEBUG nova.storage.rbd_utils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] rbd image f1df3849-6811-41a9-9c70-f10a6863b4f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:54:38 np0005486808 nova_compute[259627]: 2025-10-14 08:54:38.990 2 DEBUG nova.storage.rbd_utils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] rbd image f1df3849-6811-41a9-9c70-f10a6863b4f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:54:39 np0005486808 nova_compute[259627]: 2025-10-14 08:54:39.014 2 DEBUG nova.storage.rbd_utils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] rbd image f1df3849-6811-41a9-9c70-f10a6863b4f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:54:39 np0005486808 nova_compute[259627]: 2025-10-14 08:54:39.024 2 DEBUG oslo_concurrency.processutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:54:39 np0005486808 nova_compute[259627]: 2025-10-14 08:54:39.047 2 DEBUG oslo_concurrency.lockutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Acquiring lock "refresh_cache-30af67a2-4b44-481c-8ab4-296e93c1c517" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 04:54:39 np0005486808 nova_compute[259627]: 2025-10-14 08:54:39.048 2 DEBUG oslo_concurrency.lockutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Acquired lock "refresh_cache-30af67a2-4b44-481c-8ab4-296e93c1c517" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 04:54:39 np0005486808 nova_compute[259627]: 2025-10-14 08:54:39.048 2 DEBUG nova.network.neutron [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 04:54:39 np0005486808 nova_compute[259627]: 2025-10-14 08:54:39.099 2 DEBUG oslo_concurrency.processutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:54:39 np0005486808 nova_compute[259627]: 2025-10-14 08:54:39.099 2 DEBUG oslo_concurrency.lockutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:54:39 np0005486808 nova_compute[259627]: 2025-10-14 08:54:39.100 2 DEBUG oslo_concurrency.lockutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:54:39 np0005486808 nova_compute[259627]: 2025-10-14 08:54:39.100 2 DEBUG oslo_concurrency.lockutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:54:39 np0005486808 nova_compute[259627]: 2025-10-14 08:54:39.118 2 DEBUG nova.storage.rbd_utils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] rbd image f1df3849-6811-41a9-9c70-f10a6863b4f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:54:39 np0005486808 nova_compute[259627]: 2025-10-14 08:54:39.120 2 DEBUG oslo_concurrency.processutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 f1df3849-6811-41a9-9c70-f10a6863b4f9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:54:39 np0005486808 nova_compute[259627]: 2025-10-14 08:54:39.137 2 DEBUG nova.policy [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '654cc6be69694fcd8058cc5a5eb78223', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3ac87003cad443c2b75e49ebdefe379c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 04:54:39 np0005486808 nova_compute[259627]: 2025-10-14 08:54:39.373 2 DEBUG oslo_concurrency.processutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 f1df3849-6811-41a9-9c70-f10a6863b4f9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.253s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:54:39 np0005486808 nova_compute[259627]: 2025-10-14 08:54:39.415 2 DEBUG nova.storage.rbd_utils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] resizing rbd image f1df3849-6811-41a9-9c70-f10a6863b4f9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 04:54:39 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1077: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:54:39 np0005486808 nova_compute[259627]: 2025-10-14 08:54:39.520 2 DEBUG nova.compute.manager [req-0bfbed98-7285-4710-8436-d576709fa1f7 req-6978556e-7130-497a-b1d6-7e137dd0f54d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Received event network-changed-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 04:54:39 np0005486808 nova_compute[259627]: 2025-10-14 08:54:39.520 2 DEBUG nova.compute.manager [req-0bfbed98-7285-4710-8436-d576709fa1f7 req-6978556e-7130-497a-b1d6-7e137dd0f54d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Refreshing instance network info cache due to event network-changed-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 04:54:39 np0005486808 nova_compute[259627]: 2025-10-14 08:54:39.521 2 DEBUG oslo_concurrency.lockutils [req-0bfbed98-7285-4710-8436-d576709fa1f7 req-6978556e-7130-497a-b1d6-7e137dd0f54d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-30af67a2-4b44-481c-8ab4-296e93c1c517" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 04:54:39 np0005486808 nova_compute[259627]: 2025-10-14 08:54:39.527 2 DEBUG nova.objects.instance [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lazy-loading 'migration_context' on Instance uuid f1df3849-6811-41a9-9c70-f10a6863b4f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 04:54:39 np0005486808 nova_compute[259627]: 2025-10-14 08:54:39.543 2 DEBUG nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 04:54:39 np0005486808 nova_compute[259627]: 2025-10-14 08:54:39.544 2 DEBUG nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Ensure instance console log exists: /var/lib/nova/instances/f1df3849-6811-41a9-9c70-f10a6863b4f9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 04:54:39 np0005486808 nova_compute[259627]: 2025-10-14 08:54:39.544 2 DEBUG oslo_concurrency.lockutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:54:39 np0005486808 nova_compute[259627]: 2025-10-14 08:54:39.545 2 DEBUG oslo_concurrency.lockutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:54:39 np0005486808 nova_compute[259627]: 2025-10-14 08:54:39.545 2 DEBUG oslo_concurrency.lockutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:54:39 np0005486808 nova_compute[259627]: 2025-10-14 08:54:39.679 2 DEBUG nova.network.neutron [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 04:54:40 np0005486808 nova_compute[259627]: 2025-10-14 08:54:40.478 2 DEBUG nova.network.neutron [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Successfully created port: fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 04:54:40 np0005486808 nova_compute[259627]: 2025-10-14 08:54:40.867 2 DEBUG nova.network.neutron [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Updating instance_info_cache with network_info: [{"id": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "address": "fa:16:3e:76:a5:a0", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f1dcbf-2b", "ovs_interfaceid": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 04:54:40 np0005486808 nova_compute[259627]: 2025-10-14 08:54:40.898 2 DEBUG oslo_concurrency.lockutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Releasing lock "refresh_cache-30af67a2-4b44-481c-8ab4-296e93c1c517" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 04:54:40 np0005486808 nova_compute[259627]: 2025-10-14 08:54:40.899 2 DEBUG nova.compute.manager [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Instance network_info: |[{"id": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "address": "fa:16:3e:76:a5:a0", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f1dcbf-2b", "ovs_interfaceid": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 04:54:40 np0005486808 nova_compute[259627]: 2025-10-14 08:54:40.900 2 DEBUG oslo_concurrency.lockutils [req-0bfbed98-7285-4710-8436-d576709fa1f7 req-6978556e-7130-497a-b1d6-7e137dd0f54d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-30af67a2-4b44-481c-8ab4-296e93c1c517" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 04:54:40 np0005486808 nova_compute[259627]: 2025-10-14 08:54:40.901 2 DEBUG nova.network.neutron [req-0bfbed98-7285-4710-8436-d576709fa1f7 req-6978556e-7130-497a-b1d6-7e137dd0f54d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Refreshing network info cache for port f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 04:54:40 np0005486808 nova_compute[259627]: 2025-10-14 08:54:40.908 2 DEBUG nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Start _get_guest_xml network_info=[{"id": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "address": "fa:16:3e:76:a5:a0", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f1dcbf-2b", "ovs_interfaceid": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 04:54:40 np0005486808 nova_compute[259627]: 2025-10-14 08:54:40.917 2 WARNING nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:54:40 np0005486808 nova_compute[259627]: 2025-10-14 08:54:40.922 2 DEBUG nova.virt.libvirt.host [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 04:54:40 np0005486808 nova_compute[259627]: 2025-10-14 08:54:40.923 2 DEBUG nova.virt.libvirt.host [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 04:54:40 np0005486808 nova_compute[259627]: 2025-10-14 08:54:40.926 2 DEBUG nova.virt.libvirt.host [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 04:54:40 np0005486808 nova_compute[259627]: 2025-10-14 08:54:40.927 2 DEBUG nova.virt.libvirt.host [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 04:54:40 np0005486808 nova_compute[259627]: 2025-10-14 08:54:40.928 2 DEBUG nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 04:54:40 np0005486808 nova_compute[259627]: 2025-10-14 08:54:40.928 2 DEBUG nova.virt.hardware [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 04:54:40 np0005486808 nova_compute[259627]: 2025-10-14 08:54:40.928 2 DEBUG nova.virt.hardware [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 04:54:40 np0005486808 nova_compute[259627]: 2025-10-14 08:54:40.929 2 DEBUG nova.virt.hardware [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 04:54:40 np0005486808 nova_compute[259627]: 2025-10-14 08:54:40.929 2 DEBUG nova.virt.hardware [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 04:54:40 np0005486808 nova_compute[259627]: 2025-10-14 08:54:40.929 2 DEBUG nova.virt.hardware [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 04:54:40 np0005486808 nova_compute[259627]: 2025-10-14 08:54:40.930 2 DEBUG nova.virt.hardware [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 04:54:40 np0005486808 nova_compute[259627]: 2025-10-14 08:54:40.930 2 DEBUG nova.virt.hardware [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 04:54:40 np0005486808 nova_compute[259627]: 2025-10-14 08:54:40.930 2 DEBUG nova.virt.hardware [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 04:54:40 np0005486808 nova_compute[259627]: 2025-10-14 08:54:40.930 2 DEBUG nova.virt.hardware [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 04:54:40 np0005486808 nova_compute[259627]: 2025-10-14 08:54:40.931 2 DEBUG nova.virt.hardware [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 04:54:40 np0005486808 nova_compute[259627]: 2025-10-14 08:54:40.931 2 DEBUG nova.virt.hardware [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 04:54:40 np0005486808 nova_compute[259627]: 2025-10-14 08:54:40.934 2 DEBUG oslo_concurrency.processutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:54:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:54:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3628877322' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:54:41 np0005486808 nova_compute[259627]: 2025-10-14 08:54:41.362 2 DEBUG nova.network.neutron [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Successfully updated port: fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 04:54:41 np0005486808 nova_compute[259627]: 2025-10-14 08:54:41.364 2 DEBUG oslo_concurrency.processutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:54:41 np0005486808 nova_compute[259627]: 2025-10-14 08:54:41.390 2 DEBUG nova.storage.rbd_utils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] rbd image 30af67a2-4b44-481c-8ab4-296e93c1c517_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:54:41 np0005486808 nova_compute[259627]: 2025-10-14 08:54:41.394 2 DEBUG oslo_concurrency.processutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:54:41 np0005486808 nova_compute[259627]: 2025-10-14 08:54:41.409 2 DEBUG oslo_concurrency.lockutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Acquiring lock "refresh_cache-f1df3849-6811-41a9-9c70-f10a6863b4f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:54:41 np0005486808 nova_compute[259627]: 2025-10-14 08:54:41.410 2 DEBUG oslo_concurrency.lockutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Acquired lock "refresh_cache-f1df3849-6811-41a9-9c70-f10a6863b4f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:54:41 np0005486808 nova_compute[259627]: 2025-10-14 08:54:41.410 2 DEBUG nova.network.neutron [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 04:54:41 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 305 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 3.5 MiB/s wr, 113 op/s
Oct 14 04:54:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:54:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3941367568' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:54:41 np0005486808 nova_compute[259627]: 2025-10-14 08:54:41.786 2 DEBUG nova.network.neutron [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 04:54:41 np0005486808 nova_compute[259627]: 2025-10-14 08:54:41.800 2 DEBUG oslo_concurrency.processutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:54:41 np0005486808 nova_compute[259627]: 2025-10-14 08:54:41.802 2 DEBUG nova.virt.libvirt.vif [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:54:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1417208985',display_name='tempest-FloatingIPsAssociationTestJSON-server-1417208985',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1417208985',id=3,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f5b8fd07d6d54bda9a0257bf72d4b37f',ramdisk_id='',reservation_id='r-95bnx8ff',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1304888620',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1304888620-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:54:36Z,user_data=None,user_id='831826dabb48463c92f24c277df4039e',uuid=30af67a2-4b44-481c-8ab4-296e93c1c517,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "address": "fa:16:3e:76:a5:a0", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f1dcbf-2b", "ovs_interfaceid": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 04:54:41 np0005486808 nova_compute[259627]: 2025-10-14 08:54:41.802 2 DEBUG nova.network.os_vif_util [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Converting VIF {"id": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "address": "fa:16:3e:76:a5:a0", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f1dcbf-2b", "ovs_interfaceid": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:54:41 np0005486808 nova_compute[259627]: 2025-10-14 08:54:41.803 2 DEBUG nova.network.os_vif_util [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:a5:a0,bridge_name='br-int',has_traffic_filtering=True,id=f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0,network=Network(92d50a40-95c8-4c0a-a4ab-d459f68516aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0f1dcbf-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:54:41 np0005486808 nova_compute[259627]: 2025-10-14 08:54:41.805 2 DEBUG nova.objects.instance [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lazy-loading 'pci_devices' on Instance uuid 30af67a2-4b44-481c-8ab4-296e93c1c517 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:54:41 np0005486808 nova_compute[259627]: 2025-10-14 08:54:41.827 2 DEBUG nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] End _get_guest_xml xml=<domain type="kvm">
Oct 14 04:54:41 np0005486808 nova_compute[259627]:  <uuid>30af67a2-4b44-481c-8ab4-296e93c1c517</uuid>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:  <name>instance-00000003</name>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 04:54:41 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:      <nova:name>tempest-FloatingIPsAssociationTestJSON-server-1417208985</nova:name>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 08:54:40</nova:creationTime>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 04:54:41 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:        <nova:user uuid="831826dabb48463c92f24c277df4039e">tempest-FloatingIPsAssociationTestJSON-1304888620-project-member</nova:user>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:        <nova:project uuid="f5b8fd07d6d54bda9a0257bf72d4b37f">tempest-FloatingIPsAssociationTestJSON-1304888620</nova:project>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:        <nova:port uuid="f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0">
Oct 14 04:54:41 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <system>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:      <entry name="serial">30af67a2-4b44-481c-8ab4-296e93c1c517</entry>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:      <entry name="uuid">30af67a2-4b44-481c-8ab4-296e93c1c517</entry>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    </system>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:  <os>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:  </os>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:  <features>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:  </features>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:  </clock>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:  <devices>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 04:54:41 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/30af67a2-4b44-481c-8ab4-296e93c1c517_disk">
Oct 14 04:54:41 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:54:41 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 04:54:41 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/30af67a2-4b44-481c-8ab4-296e93c1c517_disk.config">
Oct 14 04:54:41 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:54:41 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 04:54:41 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:76:a5:a0"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:      <target dev="tapf0f1dcbf-2b"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    </interface>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 04:54:41 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/30af67a2-4b44-481c-8ab4-296e93c1c517/console.log" append="off"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    </serial>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <video>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    </video>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 04:54:41 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    </rng>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 04:54:41 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 04:54:41 np0005486808 nova_compute[259627]:  </devices>
Oct 14 04:54:41 np0005486808 nova_compute[259627]: </domain>
Oct 14 04:54:41 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 04:54:41 np0005486808 nova_compute[259627]: 2025-10-14 08:54:41.829 2 DEBUG nova.compute.manager [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Preparing to wait for external event network-vif-plugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 04:54:41 np0005486808 nova_compute[259627]: 2025-10-14 08:54:41.829 2 DEBUG oslo_concurrency.lockutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Acquiring lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:54:41 np0005486808 nova_compute[259627]: 2025-10-14 08:54:41.830 2 DEBUG oslo_concurrency.lockutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:54:41 np0005486808 nova_compute[259627]: 2025-10-14 08:54:41.830 2 DEBUG oslo_concurrency.lockutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:54:41 np0005486808 nova_compute[259627]: 2025-10-14 08:54:41.830 2 DEBUG nova.virt.libvirt.vif [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:54:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1417208985',display_name='tempest-FloatingIPsAssociationTestJSON-server-1417208985',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1417208985',id=3,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f5b8fd07d6d54bda9a0257bf72d4b37f',ramdisk_id='',reservation_id='r-95bnx8ff',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1304888620',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1304888620-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:54:36Z,user_data=None,user_id='831826dabb48463c92f24c277df4039e',uuid=30af67a2-4b44-481c-8ab4-296e93c1c517,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "address": "fa:16:3e:76:a5:a0", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f1dcbf-2b", "ovs_interfaceid": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 04:54:41 np0005486808 nova_compute[259627]: 2025-10-14 08:54:41.831 2 DEBUG nova.network.os_vif_util [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Converting VIF {"id": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "address": "fa:16:3e:76:a5:a0", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f1dcbf-2b", "ovs_interfaceid": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:54:41 np0005486808 nova_compute[259627]: 2025-10-14 08:54:41.831 2 DEBUG nova.network.os_vif_util [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:a5:a0,bridge_name='br-int',has_traffic_filtering=True,id=f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0,network=Network(92d50a40-95c8-4c0a-a4ab-d459f68516aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0f1dcbf-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:54:41 np0005486808 nova_compute[259627]: 2025-10-14 08:54:41.832 2 DEBUG os_vif [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:a5:a0,bridge_name='br-int',has_traffic_filtering=True,id=f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0,network=Network(92d50a40-95c8-4c0a-a4ab-d459f68516aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0f1dcbf-2b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 04:54:41 np0005486808 nova_compute[259627]: 2025-10-14 08:54:41.876 2 DEBUG ovsdbapp.backend.ovs_idl [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct 14 04:54:41 np0005486808 nova_compute[259627]: 2025-10-14 08:54:41.877 2 DEBUG ovsdbapp.backend.ovs_idl [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct 14 04:54:41 np0005486808 nova_compute[259627]: 2025-10-14 08:54:41.877 2 DEBUG ovsdbapp.backend.ovs_idl [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct 14 04:54:41 np0005486808 nova_compute[259627]: 2025-10-14 08:54:41.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 14 04:54:41 np0005486808 nova_compute[259627]: 2025-10-14 08:54:41.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [POLLOUT] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:54:41 np0005486808 nova_compute[259627]: 2025-10-14 08:54:41.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 14 04:54:41 np0005486808 nova_compute[259627]: 2025-10-14 08:54:41.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:54:41 np0005486808 nova_compute[259627]: 2025-10-14 08:54:41.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:54:41 np0005486808 nova_compute[259627]: 2025-10-14 08:54:41.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:54:41 np0005486808 nova_compute[259627]: 2025-10-14 08:54:41.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:54:41 np0005486808 nova_compute[259627]: 2025-10-14 08:54:41.899 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:54:41 np0005486808 nova_compute[259627]: 2025-10-14 08:54:41.900 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:54:41 np0005486808 nova_compute[259627]: 2025-10-14 08:54:41.901 2 INFO oslo.privsep.daemon [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpz411kyq8/privsep.sock']#033[00m
Oct 14 04:54:41 np0005486808 nova_compute[259627]: 2025-10-14 08:54:41.925 2 DEBUG nova.compute.manager [req-a9a6cbcc-e50a-4ffb-850f-56b9041b20f1 req-78985573-aec8-4d70-8653-f06e04e1d39c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Received event network-changed-fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:54:41 np0005486808 nova_compute[259627]: 2025-10-14 08:54:41.926 2 DEBUG nova.compute.manager [req-a9a6cbcc-e50a-4ffb-850f-56b9041b20f1 req-78985573-aec8-4d70-8653-f06e04e1d39c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Refreshing instance network info cache due to event network-changed-fd4673d1-9420-4d31-a2ce-c5cb5bc79c42. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 04:54:41 np0005486808 nova_compute[259627]: 2025-10-14 08:54:41.926 2 DEBUG oslo_concurrency.lockutils [req-a9a6cbcc-e50a-4ffb-850f-56b9041b20f1 req-78985573-aec8-4d70-8653-f06e04e1d39c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-f1df3849-6811-41a9-9c70-f10a6863b4f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:54:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:54:42 np0005486808 nova_compute[259627]: 2025-10-14 08:54:42.565 2 INFO oslo.privsep.daemon [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Spawned new privsep daemon via rootwrap#033[00m
Oct 14 04:54:42 np0005486808 nova_compute[259627]: 2025-10-14 08:54:42.429 1734 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct 14 04:54:42 np0005486808 nova_compute[259627]: 2025-10-14 08:54:42.434 1734 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct 14 04:54:42 np0005486808 nova_compute[259627]: 2025-10-14 08:54:42.437 1734 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Oct 14 04:54:42 np0005486808 nova_compute[259627]: 2025-10-14 08:54:42.437 1734 INFO oslo.privsep.daemon [-] privsep daemon running as pid 1734#033[00m
Oct 14 04:54:42 np0005486808 nova_compute[259627]: 2025-10-14 08:54:42.607 2 DEBUG nova.network.neutron [req-0bfbed98-7285-4710-8436-d576709fa1f7 req-6978556e-7130-497a-b1d6-7e137dd0f54d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Updated VIF entry in instance network info cache for port f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 04:54:42 np0005486808 nova_compute[259627]: 2025-10-14 08:54:42.607 2 DEBUG nova.network.neutron [req-0bfbed98-7285-4710-8436-d576709fa1f7 req-6978556e-7130-497a-b1d6-7e137dd0f54d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Updating instance_info_cache with network_info: [{"id": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "address": "fa:16:3e:76:a5:a0", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f1dcbf-2b", "ovs_interfaceid": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:54:42 np0005486808 nova_compute[259627]: 2025-10-14 08:54:42.625 2 DEBUG oslo_concurrency.lockutils [req-0bfbed98-7285-4710-8436-d576709fa1f7 req-6978556e-7130-497a-b1d6-7e137dd0f54d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-30af67a2-4b44-481c-8ab4-296e93c1c517" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:54:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 04:54:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:54:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 04:54:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:54:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006919304917952725 of space, bias 1.0, pg target 0.20757914753858175 quantized to 32 (current 32)
Oct 14 04:54:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:54:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:54:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:54:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:54:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:54:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 14 04:54:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:54:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 04:54:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:54:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:54:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:54:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 04:54:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:54:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 04:54:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:54:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:54:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:54:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 04:54:42 np0005486808 nova_compute[259627]: 2025-10-14 08:54:42.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:54:42 np0005486808 nova_compute[259627]: 2025-10-14 08:54:42.907 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf0f1dcbf-2b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:54:42 np0005486808 nova_compute[259627]: 2025-10-14 08:54:42.907 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf0f1dcbf-2b, col_values=(('external_ids', {'iface-id': 'f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:76:a5:a0', 'vm-uuid': '30af67a2-4b44-481c-8ab4-296e93c1c517'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:54:42 np0005486808 NetworkManager[44885]: <info>  [1760432082.9508] manager: (tapf0f1dcbf-2b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Oct 14 04:54:42 np0005486808 nova_compute[259627]: 2025-10-14 08:54:42.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:54:42 np0005486808 nova_compute[259627]: 2025-10-14 08:54:42.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 04:54:42 np0005486808 nova_compute[259627]: 2025-10-14 08:54:42.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:54:42 np0005486808 nova_compute[259627]: 2025-10-14 08:54:42.962 2 INFO os_vif [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:a5:a0,bridge_name='br-int',has_traffic_filtering=True,id=f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0,network=Network(92d50a40-95c8-4c0a-a4ab-d459f68516aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0f1dcbf-2b')#033[00m
Oct 14 04:54:43 np0005486808 nova_compute[259627]: 2025-10-14 08:54:43.033 2 DEBUG nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:54:43 np0005486808 nova_compute[259627]: 2025-10-14 08:54:43.033 2 DEBUG nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:54:43 np0005486808 nova_compute[259627]: 2025-10-14 08:54:43.036 2 DEBUG nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] No VIF found with MAC fa:16:3e:76:a5:a0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 04:54:43 np0005486808 nova_compute[259627]: 2025-10-14 08:54:43.038 2 INFO nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Using config drive#033[00m
Oct 14 04:54:43 np0005486808 nova_compute[259627]: 2025-10-14 08:54:43.073 2 DEBUG nova.storage.rbd_utils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] rbd image 30af67a2-4b44-481c-8ab4-296e93c1c517_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:54:43 np0005486808 nova_compute[259627]: 2025-10-14 08:54:43.109 2 DEBUG nova.network.neutron [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Updating instance_info_cache with network_info: [{"id": "fd4673d1-9420-4d31-a2ce-c5cb5bc79c42", "address": "fa:16:3e:b8:61:64", "network": {"id": "6f970eb9-83e1-4efc-b15d-b5885b9eabe7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-458855963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ac87003cad443c2b75e49ebdefe379c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd4673d1-94", "ovs_interfaceid": "fd4673d1-9420-4d31-a2ce-c5cb5bc79c42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:54:43 np0005486808 nova_compute[259627]: 2025-10-14 08:54:43.136 2 DEBUG oslo_concurrency.lockutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Releasing lock "refresh_cache-f1df3849-6811-41a9-9c70-f10a6863b4f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:54:43 np0005486808 nova_compute[259627]: 2025-10-14 08:54:43.137 2 DEBUG nova.compute.manager [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Instance network_info: |[{"id": "fd4673d1-9420-4d31-a2ce-c5cb5bc79c42", "address": "fa:16:3e:b8:61:64", "network": {"id": "6f970eb9-83e1-4efc-b15d-b5885b9eabe7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-458855963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ac87003cad443c2b75e49ebdefe379c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd4673d1-94", "ovs_interfaceid": "fd4673d1-9420-4d31-a2ce-c5cb5bc79c42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 04:54:43 np0005486808 nova_compute[259627]: 2025-10-14 08:54:43.137 2 DEBUG oslo_concurrency.lockutils [req-a9a6cbcc-e50a-4ffb-850f-56b9041b20f1 req-78985573-aec8-4d70-8653-f06e04e1d39c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-f1df3849-6811-41a9-9c70-f10a6863b4f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:54:43 np0005486808 nova_compute[259627]: 2025-10-14 08:54:43.138 2 DEBUG nova.network.neutron [req-a9a6cbcc-e50a-4ffb-850f-56b9041b20f1 req-78985573-aec8-4d70-8653-f06e04e1d39c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Refreshing network info cache for port fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 04:54:43 np0005486808 nova_compute[259627]: 2025-10-14 08:54:43.141 2 DEBUG nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Start _get_guest_xml network_info=[{"id": "fd4673d1-9420-4d31-a2ce-c5cb5bc79c42", "address": "fa:16:3e:b8:61:64", "network": {"id": "6f970eb9-83e1-4efc-b15d-b5885b9eabe7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-458855963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ac87003cad443c2b75e49ebdefe379c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd4673d1-94", "ovs_interfaceid": "fd4673d1-9420-4d31-a2ce-c5cb5bc79c42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 04:54:43 np0005486808 nova_compute[259627]: 2025-10-14 08:54:43.147 2 WARNING nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:54:43 np0005486808 nova_compute[259627]: 2025-10-14 08:54:43.154 2 DEBUG nova.virt.libvirt.host [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 04:54:43 np0005486808 nova_compute[259627]: 2025-10-14 08:54:43.155 2 DEBUG nova.virt.libvirt.host [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 04:54:43 np0005486808 nova_compute[259627]: 2025-10-14 08:54:43.158 2 DEBUG nova.virt.libvirt.host [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 04:54:43 np0005486808 nova_compute[259627]: 2025-10-14 08:54:43.158 2 DEBUG nova.virt.libvirt.host [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 04:54:43 np0005486808 nova_compute[259627]: 2025-10-14 08:54:43.159 2 DEBUG nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 04:54:43 np0005486808 nova_compute[259627]: 2025-10-14 08:54:43.159 2 DEBUG nova.virt.hardware [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:54:32Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1771896407',id=29,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_0-1703239647',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 04:54:43 np0005486808 nova_compute[259627]: 2025-10-14 08:54:43.159 2 DEBUG nova.virt.hardware [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 04:54:43 np0005486808 nova_compute[259627]: 2025-10-14 08:54:43.159 2 DEBUG nova.virt.hardware [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 04:54:43 np0005486808 nova_compute[259627]: 2025-10-14 08:54:43.160 2 DEBUG nova.virt.hardware [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 04:54:43 np0005486808 nova_compute[259627]: 2025-10-14 08:54:43.160 2 DEBUG nova.virt.hardware [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 04:54:43 np0005486808 nova_compute[259627]: 2025-10-14 08:54:43.160 2 DEBUG nova.virt.hardware [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 04:54:43 np0005486808 nova_compute[259627]: 2025-10-14 08:54:43.160 2 DEBUG nova.virt.hardware [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 04:54:43 np0005486808 nova_compute[259627]: 2025-10-14 08:54:43.161 2 DEBUG nova.virt.hardware [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 04:54:43 np0005486808 nova_compute[259627]: 2025-10-14 08:54:43.161 2 DEBUG nova.virt.hardware [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 04:54:43 np0005486808 nova_compute[259627]: 2025-10-14 08:54:43.161 2 DEBUG nova.virt.hardware [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 04:54:43 np0005486808 nova_compute[259627]: 2025-10-14 08:54:43.161 2 DEBUG nova.virt.hardware [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 04:54:43 np0005486808 nova_compute[259627]: 2025-10-14 08:54:43.163 2 DEBUG oslo_concurrency.processutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:54:43 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 305 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 3.5 MiB/s wr, 113 op/s
Oct 14 04:54:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:54:43 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3365924208' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:54:43 np0005486808 nova_compute[259627]: 2025-10-14 08:54:43.609 2 DEBUG oslo_concurrency.processutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:54:43 np0005486808 nova_compute[259627]: 2025-10-14 08:54:43.626 2 DEBUG nova.storage.rbd_utils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] rbd image f1df3849-6811-41a9-9c70-f10a6863b4f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:54:43 np0005486808 nova_compute[259627]: 2025-10-14 08:54:43.630 2 DEBUG oslo_concurrency.processutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.022 2 INFO nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Creating config drive at /var/lib/nova/instances/30af67a2-4b44-481c-8ab4-296e93c1c517/disk.config#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.028 2 DEBUG oslo_concurrency.processutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/30af67a2-4b44-481c-8ab4-296e93c1c517/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9dupjhfe execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:54:44 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:54:44 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1020984832' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.048 2 DEBUG oslo_concurrency.processutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.050 2 DEBUG nova.virt.libvirt.vif [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:54:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-148937452',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-148937452',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(29),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-148937452',id=4,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=29,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF7g6TTsjbjlUgcn3NOlTdjTTdxkB/tOz/SnrUnsyC2B2DiKRfdyGeRG/baba8J8u0sFhOnSigOVqumIulB1xwu2rOWWivaBSyCqPcCAc1UiuhefYZEgIiwxztAfhh5GqA==',key_name='tempest-keypair-2141010768',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3ac87003cad443c2b75e49ebdefe379c',ramdisk_id='',reservation_id='r-oq3iap0i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-632252786',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-632252786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:54:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='654cc6be69694fcd8058cc5a5eb78223',uuid=f1df3849-6811-41a9-9c70-f10a6863b4f9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fd4673d1-9420-4d31-a2ce-c5cb5bc79c42", "address": "fa:16:3e:b8:61:64", "network": {"id": "6f970eb9-83e1-4efc-b15d-b5885b9eabe7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-458855963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ac87003cad443c2b75e49ebdefe379c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd4673d1-94", "ovs_interfaceid": "fd4673d1-9420-4d31-a2ce-c5cb5bc79c42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.051 2 DEBUG nova.network.os_vif_util [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Converting VIF {"id": "fd4673d1-9420-4d31-a2ce-c5cb5bc79c42", "address": "fa:16:3e:b8:61:64", "network": {"id": "6f970eb9-83e1-4efc-b15d-b5885b9eabe7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-458855963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ac87003cad443c2b75e49ebdefe379c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd4673d1-94", "ovs_interfaceid": "fd4673d1-9420-4d31-a2ce-c5cb5bc79c42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.052 2 DEBUG nova.network.os_vif_util [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:61:64,bridge_name='br-int',has_traffic_filtering=True,id=fd4673d1-9420-4d31-a2ce-c5cb5bc79c42,network=Network(6f970eb9-83e1-4efc-b15d-b5885b9eabe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd4673d1-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.053 2 DEBUG nova.objects.instance [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lazy-loading 'pci_devices' on Instance uuid f1df3849-6811-41a9-9c70-f10a6863b4f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.068 2 DEBUG nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] End _get_guest_xml xml=<domain type="kvm">
Oct 14 04:54:44 np0005486808 nova_compute[259627]:  <uuid>f1df3849-6811-41a9-9c70-f10a6863b4f9</uuid>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:  <name>instance-00000004</name>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 04:54:44 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:      <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-148937452</nova:name>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 08:54:43</nova:creationTime>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:      <nova:flavor name="tempest-flavor_with_ephemeral_0-1703239647">
Oct 14 04:54:44 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:        <nova:user uuid="654cc6be69694fcd8058cc5a5eb78223">tempest-ServersWithSpecificFlavorTestJSON-632252786-project-member</nova:user>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:        <nova:project uuid="3ac87003cad443c2b75e49ebdefe379c">tempest-ServersWithSpecificFlavorTestJSON-632252786</nova:project>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:        <nova:port uuid="fd4673d1-9420-4d31-a2ce-c5cb5bc79c42">
Oct 14 04:54:44 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <system>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:      <entry name="serial">f1df3849-6811-41a9-9c70-f10a6863b4f9</entry>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:      <entry name="uuid">f1df3849-6811-41a9-9c70-f10a6863b4f9</entry>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    </system>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:  <os>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:  </os>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:  <features>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:  </features>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:  </clock>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:  <devices>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 04:54:44 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/f1df3849-6811-41a9-9c70-f10a6863b4f9_disk">
Oct 14 04:54:44 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:54:44 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 04:54:44 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/f1df3849-6811-41a9-9c70-f10a6863b4f9_disk.config">
Oct 14 04:54:44 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:54:44 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 04:54:44 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:b8:61:64"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:      <target dev="tapfd4673d1-94"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    </interface>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 04:54:44 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/f1df3849-6811-41a9-9c70-f10a6863b4f9/console.log" append="off"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    </serial>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <video>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    </video>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 04:54:44 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    </rng>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 04:54:44 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 04:54:44 np0005486808 nova_compute[259627]:  </devices>
Oct 14 04:54:44 np0005486808 nova_compute[259627]: </domain>
Oct 14 04:54:44 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.070 2 DEBUG nova.compute.manager [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Preparing to wait for external event network-vif-plugged-fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.070 2 DEBUG oslo_concurrency.lockutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Acquiring lock "f1df3849-6811-41a9-9c70-f10a6863b4f9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.070 2 DEBUG oslo_concurrency.lockutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "f1df3849-6811-41a9-9c70-f10a6863b4f9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.070 2 DEBUG oslo_concurrency.lockutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "f1df3849-6811-41a9-9c70-f10a6863b4f9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.071 2 DEBUG nova.virt.libvirt.vif [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:54:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-148937452',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-148937452',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(29),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-148937452',id=4,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=29,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF7g6TTsjbjlUgcn3NOlTdjTTdxkB/tOz/SnrUnsyC2B2DiKRfdyGeRG/baba8J8u0sFhOnSigOVqumIulB1xwu2rOWWivaBSyCqPcCAc1UiuhefYZEgIiwxztAfhh5GqA==',key_name='tempest-keypair-2141010768',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3ac87003cad443c2b75e49ebdefe379c',ramdisk_id='',reservation_id='r-oq3iap0i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-632252786',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-632252786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:54:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='654cc6be69694fcd8058cc5a5eb78223',uuid=f1df3849-6811-41a9-9c70-f10a6863b4f9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fd4673d1-9420-4d31-a2ce-c5cb5bc79c42", "address": "fa:16:3e:b8:61:64", "network": {"id": "6f970eb9-83e1-4efc-b15d-b5885b9eabe7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-458855963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ac87003cad443c2b75e49ebdefe379c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd4673d1-94", "ovs_interfaceid": "fd4673d1-9420-4d31-a2ce-c5cb5bc79c42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.071 2 DEBUG nova.network.os_vif_util [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Converting VIF {"id": "fd4673d1-9420-4d31-a2ce-c5cb5bc79c42", "address": "fa:16:3e:b8:61:64", "network": {"id": "6f970eb9-83e1-4efc-b15d-b5885b9eabe7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-458855963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ac87003cad443c2b75e49ebdefe379c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd4673d1-94", "ovs_interfaceid": "fd4673d1-9420-4d31-a2ce-c5cb5bc79c42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.072 2 DEBUG nova.network.os_vif_util [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:61:64,bridge_name='br-int',has_traffic_filtering=True,id=fd4673d1-9420-4d31-a2ce-c5cb5bc79c42,network=Network(6f970eb9-83e1-4efc-b15d-b5885b9eabe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd4673d1-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.073 2 DEBUG os_vif [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:61:64,bridge_name='br-int',has_traffic_filtering=True,id=fd4673d1-9420-4d31-a2ce-c5cb5bc79c42,network=Network(6f970eb9-83e1-4efc-b15d-b5885b9eabe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd4673d1-94') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.074 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.074 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.077 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfd4673d1-94, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.078 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfd4673d1-94, col_values=(('external_ids', {'iface-id': 'fd4673d1-9420-4d31-a2ce-c5cb5bc79c42', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b8:61:64', 'vm-uuid': 'f1df3849-6811-41a9-9c70-f10a6863b4f9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:54:44 np0005486808 NetworkManager[44885]: <info>  [1760432084.0802] manager: (tapfd4673d1-94): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/24)
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.089 2 INFO os_vif [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:61:64,bridge_name='br-int',has_traffic_filtering=True,id=fd4673d1-9420-4d31-a2ce-c5cb5bc79c42,network=Network(6f970eb9-83e1-4efc-b15d-b5885b9eabe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd4673d1-94')#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.134 2 DEBUG nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.135 2 DEBUG nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.135 2 DEBUG nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] No VIF found with MAC fa:16:3e:b8:61:64, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.135 2 INFO nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Using config drive#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.152 2 DEBUG nova.storage.rbd_utils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] rbd image f1df3849-6811-41a9-9c70-f10a6863b4f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.157 2 DEBUG oslo_concurrency.processutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/30af67a2-4b44-481c-8ab4-296e93c1c517/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9dupjhfe" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.181 2 DEBUG nova.storage.rbd_utils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] rbd image 30af67a2-4b44-481c-8ab4-296e93c1c517_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.184 2 DEBUG oslo_concurrency.processutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/30af67a2-4b44-481c-8ab4-296e93c1c517/disk.config 30af67a2-4b44-481c-8ab4-296e93c1c517_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.320 2 DEBUG oslo_concurrency.processutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/30af67a2-4b44-481c-8ab4-296e93c1c517/disk.config 30af67a2-4b44-481c-8ab4-296e93c1c517_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.321 2 INFO nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Deleting local config drive /var/lib/nova/instances/30af67a2-4b44-481c-8ab4-296e93c1c517/disk.config because it was imported into RBD.#033[00m
Oct 14 04:54:44 np0005486808 kernel: tun: Universal TUN/TAP device driver, 1.6
Oct 14 04:54:44 np0005486808 kernel: tapf0f1dcbf-2b: entered promiscuous mode
Oct 14 04:54:44 np0005486808 NetworkManager[44885]: <info>  [1760432084.4077] manager: (tapf0f1dcbf-2b): new Tun device (/org/freedesktop/NetworkManager/Devices/25)
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:54:44 np0005486808 ovn_controller[152662]: 2025-10-14T08:54:44Z|00027|binding|INFO|Claiming lport f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 for this chassis.
Oct 14 04:54:44 np0005486808 ovn_controller[152662]: 2025-10-14T08:54:44Z|00028|binding|INFO|f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0: Claiming fa:16:3e:76:a5:a0 10.100.0.12
Oct 14 04:54:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:44.432 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:a5:a0 10.100.0.12'], port_security=['fa:16:3e:76:a5:a0 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '30af67a2-4b44-481c-8ab4-296e93c1c517', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-92d50a40-95c8-4c0a-a4ab-d459f68516aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f5b8fd07d6d54bda9a0257bf72d4b37f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e013b807-3b2c-404b-b699-697e2a823013', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=20317fae-45dd-4464-971e-a345fe497251, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:54:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:44.434 162547 INFO neutron.agent.ovn.metadata.agent [-] Port f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 in datapath 92d50a40-95c8-4c0a-a4ab-d459f68516aa bound to our chassis#033[00m
Oct 14 04:54:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:44.436 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 92d50a40-95c8-4c0a-a4ab-d459f68516aa#033[00m
Oct 14 04:54:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:44.438 162547 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmp03yknxqr/privsep.sock']#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.450 2 INFO nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Creating config drive at /var/lib/nova/instances/f1df3849-6811-41a9-9c70-f10a6863b4f9/disk.config#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.455 2 DEBUG oslo_concurrency.processutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f1df3849-6811-41a9-9c70-f10a6863b4f9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqunzi4sv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:54:44 np0005486808 systemd-udevd[276513]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 04:54:44 np0005486808 systemd-machined[214636]: New machine qemu-3-instance-00000003.
Oct 14 04:54:44 np0005486808 NetworkManager[44885]: <info>  [1760432084.4722] device (tapf0f1dcbf-2b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 04:54:44 np0005486808 NetworkManager[44885]: <info>  [1760432084.4734] device (tapf0f1dcbf-2b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 04:54:44 np0005486808 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:54:44 np0005486808 ovn_controller[152662]: 2025-10-14T08:54:44Z|00029|binding|INFO|Setting lport f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 ovn-installed in OVS
Oct 14 04:54:44 np0005486808 ovn_controller[152662]: 2025-10-14T08:54:44Z|00030|binding|INFO|Setting lport f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 up in Southbound
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.588 2 DEBUG oslo_concurrency.processutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f1df3849-6811-41a9-9c70-f10a6863b4f9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqunzi4sv" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.607 2 DEBUG nova.storage.rbd_utils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] rbd image f1df3849-6811-41a9-9c70-f10a6863b4f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.609 2 DEBUG oslo_concurrency.processutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f1df3849-6811-41a9-9c70-f10a6863b4f9/disk.config f1df3849-6811-41a9-9c70-f10a6863b4f9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.770 2 DEBUG oslo_concurrency.processutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f1df3849-6811-41a9-9c70-f10a6863b4f9/disk.config f1df3849-6811-41a9-9c70-f10a6863b4f9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.771 2 INFO nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Deleting local config drive /var/lib/nova/instances/f1df3849-6811-41a9-9c70-f10a6863b4f9/disk.config because it was imported into RBD.#033[00m
Oct 14 04:54:44 np0005486808 NetworkManager[44885]: <info>  [1760432084.8121] manager: (tapfd4673d1-94): new Tun device (/org/freedesktop/NetworkManager/Devices/26)
Oct 14 04:54:44 np0005486808 systemd-udevd[276511]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 04:54:44 np0005486808 kernel: tapfd4673d1-94: entered promiscuous mode
Oct 14 04:54:44 np0005486808 ovn_controller[152662]: 2025-10-14T08:54:44Z|00031|binding|INFO|Claiming lport fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 for this chassis.
Oct 14 04:54:44 np0005486808 ovn_controller[152662]: 2025-10-14T08:54:44Z|00032|binding|INFO|fd4673d1-9420-4d31-a2ce-c5cb5bc79c42: Claiming fa:16:3e:b8:61:64 10.100.0.10
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:54:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:44.827 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:61:64 10.100.0.10'], port_security=['fa:16:3e:b8:61:64 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f1df3849-6811-41a9-9c70-f10a6863b4f9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6f970eb9-83e1-4efc-b15d-b5885b9eabe7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3ac87003cad443c2b75e49ebdefe379c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8a9f5d33-cc0d-455f-8821-c23805bbda66', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a071297a-bec1-4fd2-a338-694b6508cca6, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=fd4673d1-9420-4d31-a2ce-c5cb5bc79c42) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:54:44 np0005486808 NetworkManager[44885]: <info>  [1760432084.8299] device (tapfd4673d1-94): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 04:54:44 np0005486808 NetworkManager[44885]: <info>  [1760432084.8305] device (tapfd4673d1-94): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 04:54:44 np0005486808 systemd-machined[214636]: New machine qemu-4-instance-00000004.
Oct 14 04:54:44 np0005486808 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:54:44 np0005486808 ovn_controller[152662]: 2025-10-14T08:54:44Z|00033|binding|INFO|Setting lport fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 ovn-installed in OVS
Oct 14 04:54:44 np0005486808 ovn_controller[152662]: 2025-10-14T08:54:44Z|00034|binding|INFO|Setting lport fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 up in Southbound
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.884 2 DEBUG nova.network.neutron [req-a9a6cbcc-e50a-4ffb-850f-56b9041b20f1 req-78985573-aec8-4d70-8653-f06e04e1d39c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Updated VIF entry in instance network info cache for port fd4673d1-9420-4d31-a2ce-c5cb5bc79c42. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.884 2 DEBUG nova.network.neutron [req-a9a6cbcc-e50a-4ffb-850f-56b9041b20f1 req-78985573-aec8-4d70-8653-f06e04e1d39c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Updating instance_info_cache with network_info: [{"id": "fd4673d1-9420-4d31-a2ce-c5cb5bc79c42", "address": "fa:16:3e:b8:61:64", "network": {"id": "6f970eb9-83e1-4efc-b15d-b5885b9eabe7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-458855963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ac87003cad443c2b75e49ebdefe379c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd4673d1-94", "ovs_interfaceid": "fd4673d1-9420-4d31-a2ce-c5cb5bc79c42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.918 2 DEBUG nova.compute.manager [req-103f12de-ddfa-4018-bdc4-00168ede3ba8 req-015a17ac-126e-4c34-9dcf-350246a1badc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Received event network-vif-plugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.918 2 DEBUG oslo_concurrency.lockutils [req-103f12de-ddfa-4018-bdc4-00168ede3ba8 req-015a17ac-126e-4c34-9dcf-350246a1badc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.918 2 DEBUG oslo_concurrency.lockutils [req-103f12de-ddfa-4018-bdc4-00168ede3ba8 req-015a17ac-126e-4c34-9dcf-350246a1badc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.919 2 DEBUG oslo_concurrency.lockutils [req-103f12de-ddfa-4018-bdc4-00168ede3ba8 req-015a17ac-126e-4c34-9dcf-350246a1badc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.919 2 DEBUG nova.compute.manager [req-103f12de-ddfa-4018-bdc4-00168ede3ba8 req-015a17ac-126e-4c34-9dcf-350246a1badc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Processing event network-vif-plugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 04:54:44 np0005486808 nova_compute[259627]: 2025-10-14 08:54:44.920 2 DEBUG oslo_concurrency.lockutils [req-a9a6cbcc-e50a-4ffb-850f-56b9041b20f1 req-78985573-aec8-4d70-8653-f06e04e1d39c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-f1df3849-6811-41a9-9c70-f10a6863b4f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:54:45 np0005486808 nova_compute[259627]: 2025-10-14 08:54:45.143 2 DEBUG nova.compute.manager [req-a1a69348-4a73-485c-910f-82485462f862 req-a9f53f94-4d0e-4de3-a7ec-ddd22da94fa9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Received event network-vif-plugged-fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:54:45 np0005486808 nova_compute[259627]: 2025-10-14 08:54:45.144 2 DEBUG oslo_concurrency.lockutils [req-a1a69348-4a73-485c-910f-82485462f862 req-a9f53f94-4d0e-4de3-a7ec-ddd22da94fa9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f1df3849-6811-41a9-9c70-f10a6863b4f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:54:45 np0005486808 nova_compute[259627]: 2025-10-14 08:54:45.145 2 DEBUG oslo_concurrency.lockutils [req-a1a69348-4a73-485c-910f-82485462f862 req-a9f53f94-4d0e-4de3-a7ec-ddd22da94fa9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f1df3849-6811-41a9-9c70-f10a6863b4f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:54:45 np0005486808 nova_compute[259627]: 2025-10-14 08:54:45.145 2 DEBUG oslo_concurrency.lockutils [req-a1a69348-4a73-485c-910f-82485462f862 req-a9f53f94-4d0e-4de3-a7ec-ddd22da94fa9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f1df3849-6811-41a9-9c70-f10a6863b4f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:54:45 np0005486808 nova_compute[259627]: 2025-10-14 08:54:45.146 2 DEBUG nova.compute.manager [req-a1a69348-4a73-485c-910f-82485462f862 req-a9f53f94-4d0e-4de3-a7ec-ddd22da94fa9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Processing event network-vif-plugged-fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 04:54:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:45.153 162547 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Oct 14 04:54:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:45.153 162547 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp03yknxqr/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Oct 14 04:54:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:45.037 276588 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct 14 04:54:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:45.041 276588 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct 14 04:54:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:45.044 276588 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Oct 14 04:54:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:45.044 276588 INFO oslo.privsep.daemon [-] privsep daemon running as pid 276588#033[00m
Oct 14 04:54:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:45.156 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e7097399-2db1-4746-81fc-71cd85d3a236]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:54:45 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 305 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 3.6 MiB/s wr, 126 op/s
Oct 14 04:54:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:45.627 276588 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:54:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:45.628 276588 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:54:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:45.629 276588 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:54:45 np0005486808 nova_compute[259627]: 2025-10-14 08:54:45.695 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432085.6947627, f1df3849-6811-41a9-9c70-f10a6863b4f9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:54:45 np0005486808 nova_compute[259627]: 2025-10-14 08:54:45.695 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] VM Started (Lifecycle Event)#033[00m
Oct 14 04:54:45 np0005486808 nova_compute[259627]: 2025-10-14 08:54:45.699 2 DEBUG nova.compute.manager [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 04:54:45 np0005486808 nova_compute[259627]: 2025-10-14 08:54:45.705 2 DEBUG nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 04:54:45 np0005486808 nova_compute[259627]: 2025-10-14 08:54:45.710 2 INFO nova.virt.libvirt.driver [-] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Instance spawned successfully.#033[00m
Oct 14 04:54:45 np0005486808 nova_compute[259627]: 2025-10-14 08:54:45.711 2 DEBUG nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 04:54:45 np0005486808 nova_compute[259627]: 2025-10-14 08:54:45.723 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:54:45 np0005486808 nova_compute[259627]: 2025-10-14 08:54:45.727 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:54:45 np0005486808 nova_compute[259627]: 2025-10-14 08:54:45.747 2 DEBUG nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:54:45 np0005486808 nova_compute[259627]: 2025-10-14 08:54:45.748 2 DEBUG nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:54:45 np0005486808 nova_compute[259627]: 2025-10-14 08:54:45.749 2 DEBUG nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:54:45 np0005486808 nova_compute[259627]: 2025-10-14 08:54:45.750 2 DEBUG nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:54:45 np0005486808 nova_compute[259627]: 2025-10-14 08:54:45.751 2 DEBUG nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:54:45 np0005486808 nova_compute[259627]: 2025-10-14 08:54:45.751 2 DEBUG nova.virt.libvirt.driver [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:54:45 np0005486808 nova_compute[259627]: 2025-10-14 08:54:45.759 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:54:45 np0005486808 nova_compute[259627]: 2025-10-14 08:54:45.760 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432085.6948502, f1df3849-6811-41a9-9c70-f10a6863b4f9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:54:45 np0005486808 nova_compute[259627]: 2025-10-14 08:54:45.760 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] VM Paused (Lifecycle Event)#033[00m
Oct 14 04:54:45 np0005486808 nova_compute[259627]: 2025-10-14 08:54:45.804 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:54:45 np0005486808 nova_compute[259627]: 2025-10-14 08:54:45.807 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432085.7026436, f1df3849-6811-41a9-9c70-f10a6863b4f9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:54:45 np0005486808 nova_compute[259627]: 2025-10-14 08:54:45.808 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] VM Resumed (Lifecycle Event)#033[00m
Oct 14 04:54:45 np0005486808 nova_compute[259627]: 2025-10-14 08:54:45.835 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:54:45 np0005486808 nova_compute[259627]: 2025-10-14 08:54:45.839 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:54:45 np0005486808 nova_compute[259627]: 2025-10-14 08:54:45.844 2 INFO nova.compute.manager [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Took 6.91 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 04:54:45 np0005486808 nova_compute[259627]: 2025-10-14 08:54:45.844 2 DEBUG nova.compute.manager [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:54:45 np0005486808 nova_compute[259627]: 2025-10-14 08:54:45.856 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:54:45 np0005486808 nova_compute[259627]: 2025-10-14 08:54:45.896 2 INFO nova.compute.manager [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Took 7.86 seconds to build instance.#033[00m
Oct 14 04:54:45 np0005486808 nova_compute[259627]: 2025-10-14 08:54:45.916 2 DEBUG oslo_concurrency.lockutils [None req-105abacb-0e28-4805-b168-a5e8039e2098 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "f1df3849-6811-41a9-9c70-f10a6863b4f9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.947s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:54:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:46.235 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5db8ec9f-6b89-4b5d-96f7-36699129a672]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:54:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:46.236 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap92d50a40-91 in ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 04:54:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:46.238 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap92d50a40-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 04:54:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:46.238 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[42f0bdcd-86fc-4cb1-9b19-bdd88333e17a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:54:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:46.240 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[adb10e4f-5e5b-43bf-9232-f95eb80bfe6b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:54:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:46.280 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[f5ad871b-94f0-417c-ae2c-425226e99cb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:54:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:46.315 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c70726c1-a9dd-48c3-8c3e-f5be8eeb6cbc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:54:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:46.317 162547 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpm6cjjc6j/privsep.sock']#033[00m
Oct 14 04:54:46 np0005486808 nova_compute[259627]: 2025-10-14 08:54:46.342 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432086.342444, 30af67a2-4b44-481c-8ab4-296e93c1c517 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:54:46 np0005486808 nova_compute[259627]: 2025-10-14 08:54:46.343 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] VM Started (Lifecycle Event)#033[00m
Oct 14 04:54:46 np0005486808 nova_compute[259627]: 2025-10-14 08:54:46.345 2 DEBUG nova.compute.manager [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 04:54:46 np0005486808 nova_compute[259627]: 2025-10-14 08:54:46.347 2 DEBUG nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 04:54:46 np0005486808 nova_compute[259627]: 2025-10-14 08:54:46.350 2 INFO nova.virt.libvirt.driver [-] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Instance spawned successfully.#033[00m
Oct 14 04:54:46 np0005486808 nova_compute[259627]: 2025-10-14 08:54:46.351 2 DEBUG nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 04:54:46 np0005486808 nova_compute[259627]: 2025-10-14 08:54:46.371 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:54:46 np0005486808 nova_compute[259627]: 2025-10-14 08:54:46.408 2 DEBUG nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:54:46 np0005486808 nova_compute[259627]: 2025-10-14 08:54:46.409 2 DEBUG nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:54:46 np0005486808 nova_compute[259627]: 2025-10-14 08:54:46.410 2 DEBUG nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:54:46 np0005486808 nova_compute[259627]: 2025-10-14 08:54:46.411 2 DEBUG nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:54:46 np0005486808 nova_compute[259627]: 2025-10-14 08:54:46.411 2 DEBUG nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:54:46 np0005486808 nova_compute[259627]: 2025-10-14 08:54:46.412 2 DEBUG nova.virt.libvirt.driver [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:54:46 np0005486808 nova_compute[259627]: 2025-10-14 08:54:46.417 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:54:46 np0005486808 nova_compute[259627]: 2025-10-14 08:54:46.440 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:54:46 np0005486808 nova_compute[259627]: 2025-10-14 08:54:46.441 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432086.3431032, 30af67a2-4b44-481c-8ab4-296e93c1c517 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:54:46 np0005486808 nova_compute[259627]: 2025-10-14 08:54:46.441 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] VM Paused (Lifecycle Event)#033[00m
Oct 14 04:54:46 np0005486808 nova_compute[259627]: 2025-10-14 08:54:46.492 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:54:46 np0005486808 nova_compute[259627]: 2025-10-14 08:54:46.498 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432086.3475614, 30af67a2-4b44-481c-8ab4-296e93c1c517 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:54:46 np0005486808 nova_compute[259627]: 2025-10-14 08:54:46.499 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] VM Resumed (Lifecycle Event)#033[00m
Oct 14 04:54:46 np0005486808 nova_compute[259627]: 2025-10-14 08:54:46.509 2 INFO nova.compute.manager [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Took 9.93 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 04:54:46 np0005486808 nova_compute[259627]: 2025-10-14 08:54:46.510 2 DEBUG nova.compute.manager [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:54:46 np0005486808 nova_compute[259627]: 2025-10-14 08:54:46.545 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:54:46 np0005486808 nova_compute[259627]: 2025-10-14 08:54:46.548 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:54:46 np0005486808 nova_compute[259627]: 2025-10-14 08:54:46.580 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:54:46 np0005486808 nova_compute[259627]: 2025-10-14 08:54:46.591 2 INFO nova.compute.manager [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Took 10.81 seconds to build instance.#033[00m
Oct 14 04:54:46 np0005486808 nova_compute[259627]: 2025-10-14 08:54:46.606 2 DEBUG oslo_concurrency.lockutils [None req-481c42d8-9dcc-43f7-9d93-cd368336fc1a 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.904s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:54:46 np0005486808 nova_compute[259627]: 2025-10-14 08:54:46.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:54:47 np0005486808 nova_compute[259627]: 2025-10-14 08:54:47.013 2 DEBUG nova.compute.manager [req-d97288c9-3ac6-4b2b-8451-2582b1eac8e6 req-f66da549-6c55-4e73-9edc-fbf958e399a8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Received event network-vif-plugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:54:47 np0005486808 nova_compute[259627]: 2025-10-14 08:54:47.014 2 DEBUG oslo_concurrency.lockutils [req-d97288c9-3ac6-4b2b-8451-2582b1eac8e6 req-f66da549-6c55-4e73-9edc-fbf958e399a8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:54:47 np0005486808 nova_compute[259627]: 2025-10-14 08:54:47.014 2 DEBUG oslo_concurrency.lockutils [req-d97288c9-3ac6-4b2b-8451-2582b1eac8e6 req-f66da549-6c55-4e73-9edc-fbf958e399a8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:54:47 np0005486808 nova_compute[259627]: 2025-10-14 08:54:47.015 2 DEBUG oslo_concurrency.lockutils [req-d97288c9-3ac6-4b2b-8451-2582b1eac8e6 req-f66da549-6c55-4e73-9edc-fbf958e399a8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:54:47 np0005486808 nova_compute[259627]: 2025-10-14 08:54:47.015 2 DEBUG nova.compute.manager [req-d97288c9-3ac6-4b2b-8451-2582b1eac8e6 req-f66da549-6c55-4e73-9edc-fbf958e399a8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] No waiting events found dispatching network-vif-plugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:54:47 np0005486808 nova_compute[259627]: 2025-10-14 08:54:47.015 2 WARNING nova.compute.manager [req-d97288c9-3ac6-4b2b-8451-2582b1eac8e6 req-f66da549-6c55-4e73-9edc-fbf958e399a8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Received unexpected event network-vif-plugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 for instance with vm_state active and task_state None.#033[00m
Oct 14 04:54:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:47.077 162547 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Oct 14 04:54:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:47.079 162547 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpm6cjjc6j/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Oct 14 04:54:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:46.952 276686 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct 14 04:54:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:46.957 276686 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct 14 04:54:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:46.960 276686 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Oct 14 04:54:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:46.960 276686 INFO oslo.privsep.daemon [-] privsep daemon running as pid 276686#033[00m
Oct 14 04:54:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:47.081 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[70311341-1164-49ff-9b6f-12cb2e99bea0]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:54:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:54:47 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 3.6 MiB/s wr, 126 op/s
Oct 14 04:54:47 np0005486808 nova_compute[259627]: 2025-10-14 08:54:47.636 2 DEBUG nova.compute.manager [req-d77ca1f2-d9f2-4d9b-82ca-298efaeee987 req-1fba25c6-e616-4671-b00e-379061b3a420 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Received event network-vif-plugged-fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:54:47 np0005486808 nova_compute[259627]: 2025-10-14 08:54:47.637 2 DEBUG oslo_concurrency.lockutils [req-d77ca1f2-d9f2-4d9b-82ca-298efaeee987 req-1fba25c6-e616-4671-b00e-379061b3a420 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f1df3849-6811-41a9-9c70-f10a6863b4f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:54:47 np0005486808 nova_compute[259627]: 2025-10-14 08:54:47.638 2 DEBUG oslo_concurrency.lockutils [req-d77ca1f2-d9f2-4d9b-82ca-298efaeee987 req-1fba25c6-e616-4671-b00e-379061b3a420 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f1df3849-6811-41a9-9c70-f10a6863b4f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:54:47 np0005486808 nova_compute[259627]: 2025-10-14 08:54:47.638 2 DEBUG oslo_concurrency.lockutils [req-d77ca1f2-d9f2-4d9b-82ca-298efaeee987 req-1fba25c6-e616-4671-b00e-379061b3a420 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f1df3849-6811-41a9-9c70-f10a6863b4f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:54:47 np0005486808 nova_compute[259627]: 2025-10-14 08:54:47.638 2 DEBUG nova.compute.manager [req-d77ca1f2-d9f2-4d9b-82ca-298efaeee987 req-1fba25c6-e616-4671-b00e-379061b3a420 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] No waiting events found dispatching network-vif-plugged-fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:54:47 np0005486808 nova_compute[259627]: 2025-10-14 08:54:47.639 2 WARNING nova.compute.manager [req-d77ca1f2-d9f2-4d9b-82ca-298efaeee987 req-1fba25c6-e616-4671-b00e-379061b3a420 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Received unexpected event network-vif-plugged-fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 for instance with vm_state active and task_state None.#033[00m
Oct 14 04:54:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:47.697 276686 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:54:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:47.697 276686 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:54:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:47.697 276686 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:48.310 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c518cfee-525f-45de-b796-9e8ae12e72e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:54:48 np0005486808 NetworkManager[44885]: <info>  [1760432088.3198] manager: (tap92d50a40-90): new Veth device (/org/freedesktop/NetworkManager/Devices/27)
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:48.320 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fc0890bf-4a43-4e0b-ad47-493503aadb37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:48.347 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5770cff1-df6b-4779-9aa5-df9e8898683c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:48.350 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[44985054-8440-4696-a634-73fa015e17e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:54:48 np0005486808 systemd-udevd[276698]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 04:54:48 np0005486808 NetworkManager[44885]: <info>  [1760432088.3698] device (tap92d50a40-90): carrier: link connected
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:48.373 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1ef7d7c7-9d6d-4ae8-8906-f53b11fb9ac0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:48.400 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[04d69da2-aa2a-4fc3-b30e-310224f4d93e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap92d50a40-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:6c:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586713, 'reachable_time': 30695, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276701, 'error': None, 'target': 'ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:48.437 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[927ff67c-377c-4c80-bf4c-9f8ff7439ea3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe05:6c66'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586713, 'tstamp': 586713}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276716, 'error': None, 'target': 'ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:48.472 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[676199c0-d3c6-402a-a862-7338bd3caba0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap92d50a40-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:6c:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586713, 'reachable_time': 30695, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 276717, 'error': None, 'target': 'ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:48.515 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[047a848f-918e-42a7-9d5c-b083c21a25e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:48.577 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ffaebb54-64ee-49a4-8a22-b891432f80d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:48.579 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92d50a40-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:48.579 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:48.580 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap92d50a40-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:54:48 np0005486808 nova_compute[259627]: 2025-10-14 08:54:48.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:54:48 np0005486808 NetworkManager[44885]: <info>  [1760432088.5824] manager: (tap92d50a40-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Oct 14 04:54:48 np0005486808 kernel: tap92d50a40-90: entered promiscuous mode
Oct 14 04:54:48 np0005486808 nova_compute[259627]: 2025-10-14 08:54:48.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:48.587 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap92d50a40-90, col_values=(('external_ids', {'iface-id': '2c98ab3c-01b9-41bc-bf2f-b9baea9e1b52'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:54:48 np0005486808 nova_compute[259627]: 2025-10-14 08:54:48.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:54:48 np0005486808 ovn_controller[152662]: 2025-10-14T08:54:48Z|00035|binding|INFO|Releasing lport 2c98ab3c-01b9-41bc-bf2f-b9baea9e1b52 from this chassis (sb_readonly=0)
Oct 14 04:54:48 np0005486808 nova_compute[259627]: 2025-10-14 08:54:48.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:48.591 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/92d50a40-95c8-4c0a-a4ab-d459f68516aa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/92d50a40-95c8-4c0a-a4ab-d459f68516aa.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:48.593 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[51e8d14c-bc69-4598-86d5-18d83168fc23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:48.594 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-92d50a40-95c8-4c0a-a4ab-d459f68516aa
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/92d50a40-95c8-4c0a-a4ab-d459f68516aa.pid.haproxy
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 92d50a40-95c8-4c0a-a4ab-d459f68516aa
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 04:54:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:48.595 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa', 'env', 'PROCESS_TAG=haproxy-92d50a40-95c8-4c0a-a4ab-d459f68516aa', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/92d50a40-95c8-4c0a-a4ab-d459f68516aa.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 04:54:48 np0005486808 nova_compute[259627]: 2025-10-14 08:54:48.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:54:48 np0005486808 nova_compute[259627]: 2025-10-14 08:54:48.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:54:48 np0005486808 NetworkManager[44885]: <info>  [1760432088.7082] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/29)
Oct 14 04:54:48 np0005486808 NetworkManager[44885]: <info>  [1760432088.7085] device (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 14 04:54:48 np0005486808 NetworkManager[44885]: <info>  [1760432088.7093] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/30)
Oct 14 04:54:48 np0005486808 NetworkManager[44885]: <info>  [1760432088.7096] device (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 14 04:54:48 np0005486808 NetworkManager[44885]: <info>  [1760432088.7104] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Oct 14 04:54:48 np0005486808 NetworkManager[44885]: <info>  [1760432088.7109] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Oct 14 04:54:48 np0005486808 NetworkManager[44885]: <info>  [1760432088.7113] device (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 14 04:54:48 np0005486808 NetworkManager[44885]: <info>  [1760432088.7115] device (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 14 04:54:48 np0005486808 nova_compute[259627]: 2025-10-14 08:54:48.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:54:48 np0005486808 ovn_controller[152662]: 2025-10-14T08:54:48Z|00036|binding|INFO|Releasing lport 2c98ab3c-01b9-41bc-bf2f-b9baea9e1b52 from this chassis (sb_readonly=0)
Oct 14 04:54:48 np0005486808 nova_compute[259627]: 2025-10-14 08:54:48.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:54:49 np0005486808 podman[276750]: 2025-10-14 08:54:49.034495823 +0000 UTC m=+0.067200908 container create 3172ffba8c903c56b6eda0671d39029a54ecf4575f4f98bdc441f03e69bd622b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa, tcib_managed=true, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 04:54:49 np0005486808 nova_compute[259627]: 2025-10-14 08:54:49.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:54:49 np0005486808 podman[276750]: 2025-10-14 08:54:48.995969633 +0000 UTC m=+0.028674788 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 04:54:49 np0005486808 systemd[1]: Started libpod-conmon-3172ffba8c903c56b6eda0671d39029a54ecf4575f4f98bdc441f03e69bd622b.scope.
Oct 14 04:54:49 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:54:49 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3945ba3649327f68a0834580ef80e82e4e0d34f541da7ffdb8442a6916717c6f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 04:54:49 np0005486808 podman[276750]: 2025-10-14 08:54:49.161298911 +0000 UTC m=+0.194004016 container init 3172ffba8c903c56b6eda0671d39029a54ecf4575f4f98bdc441f03e69bd622b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009)
Oct 14 04:54:49 np0005486808 podman[276750]: 2025-10-14 08:54:49.171131103 +0000 UTC m=+0.203836188 container start 3172ffba8c903c56b6eda0671d39029a54ecf4575f4f98bdc441f03e69bd622b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 04:54:49 np0005486808 neutron-haproxy-ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa[276766]: [NOTICE]   (276770) : New worker (276772) forked
Oct 14 04:54:49 np0005486808 neutron-haproxy-ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa[276766]: [NOTICE]   (276770) : Loading success.
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.241 162547 INFO neutron.agent.ovn.metadata.agent [-] Port fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 in datapath 6f970eb9-83e1-4efc-b15d-b5885b9eabe7 unbound from our chassis#033[00m
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.243 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6f970eb9-83e1-4efc-b15d-b5885b9eabe7#033[00m
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.257 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a57f81c7-d064-4138-aa7e-4f65c8738006]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.258 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6f970eb9-81 in ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.261 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6f970eb9-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.261 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b5434a5f-16f2-4ffe-b76d-af64912716de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.263 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[87bf5f43-0dd7-4366-ad63-e75a6d142faf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.292 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[97f8e502-df14-4fbf-8219-72b670dc13a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.320 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f4f02065-de63-416f-af7a-b75303a519e9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.351 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e3cfb5d8-7d01-4b61-9b2e-8bb9156d700d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:54:49 np0005486808 NetworkManager[44885]: <info>  [1760432089.3702] manager: (tap6f970eb9-80): new Veth device (/org/freedesktop/NetworkManager/Devices/33)
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.371 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[600e1a63-c211-4c2e-8fa9-e7432a985816]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:54:49 np0005486808 systemd-udevd[276708]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.407 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5eb80d3b-4ed2-4065-912f-1bd2b603a04a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.411 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ca045d27-9d42-42c4-990e-790c4f9f762e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:54:49 np0005486808 NetworkManager[44885]: <info>  [1760432089.4312] device (tap6f970eb9-80): carrier: link connected
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.438 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4141e213-5135-4aff-8ca7-9813a27a4e5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:54:49 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 3.6 MiB/s wr, 126 op/s
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.454 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f2bc2e97-bbe8-42ee-8adb-ecb0821f091a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6f970eb9-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e6:30:aa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 17], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586819, 'reachable_time': 18741, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276794, 'error': None, 'target': 'ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.468 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3dc1d4cb-d492-466d-b519-ee195af34a92]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee6:30aa'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586819, 'tstamp': 586819}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276795, 'error': None, 'target': 'ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.483 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8e00684a-889a-4014-8ead-a91ac043c55c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6f970eb9-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e6:30:aa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 17], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586819, 'reachable_time': 18741, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 276796, 'error': None, 'target': 'ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.509 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2a7d876b-9582-405a-b3b5-ccbf2d0bfbee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.565 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dbb282bc-20bb-4e7a-b723-3af9f08522ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.566 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6f970eb9-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.566 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.567 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6f970eb9-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:54:49 np0005486808 NetworkManager[44885]: <info>  [1760432089.5691] manager: (tap6f970eb9-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Oct 14 04:54:49 np0005486808 kernel: tap6f970eb9-80: entered promiscuous mode
Oct 14 04:54:49 np0005486808 nova_compute[259627]: 2025-10-14 08:54:49.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:54:49 np0005486808 nova_compute[259627]: 2025-10-14 08:54:49.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.573 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6f970eb9-80, col_values=(('external_ids', {'iface-id': '6a62b55c-d140-4dc2-a487-c292e81e63e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:54:49 np0005486808 ovn_controller[152662]: 2025-10-14T08:54:49Z|00037|binding|INFO|Releasing lport 6a62b55c-d140-4dc2-a487-c292e81e63e0 from this chassis (sb_readonly=0)
Oct 14 04:54:49 np0005486808 nova_compute[259627]: 2025-10-14 08:54:49.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:54:49 np0005486808 nova_compute[259627]: 2025-10-14 08:54:49.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:54:49 np0005486808 nova_compute[259627]: 2025-10-14 08:54:49.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.601 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6f970eb9-83e1-4efc-b15d-b5885b9eabe7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6f970eb9-83e1-4efc-b15d-b5885b9eabe7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.602 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e23e45e7-346b-4723-8e39-7662feeabab6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.604 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-6f970eb9-83e1-4efc-b15d-b5885b9eabe7
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/6f970eb9-83e1-4efc-b15d-b5885b9eabe7.pid.haproxy
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 6f970eb9-83e1-4efc-b15d-b5885b9eabe7
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 04:54:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:54:49.606 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7', 'env', 'PROCESS_TAG=haproxy-6f970eb9-83e1-4efc-b15d-b5885b9eabe7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6f970eb9-83e1-4efc-b15d-b5885b9eabe7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 04:54:49 np0005486808 podman[276828]: 2025-10-14 08:54:49.952176376 +0000 UTC m=+0.039330811 container create e67e4e4bef86bbb4f07b66dfa951302a554644083cca37c38f06082c6d5e148e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 04:54:50 np0005486808 systemd[1]: Started libpod-conmon-e67e4e4bef86bbb4f07b66dfa951302a554644083cca37c38f06082c6d5e148e.scope.
Oct 14 04:54:50 np0005486808 podman[276828]: 2025-10-14 08:54:49.9336739 +0000 UTC m=+0.020828345 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 04:54:50 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:54:50 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffccbbfb65602d0e35d329bb8e895b905f5212158c2b547223141001e2533a96/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 04:54:50 np0005486808 podman[276828]: 2025-10-14 08:54:50.06016245 +0000 UTC m=+0.147316945 container init e67e4e4bef86bbb4f07b66dfa951302a554644083cca37c38f06082c6d5e148e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 14 04:54:50 np0005486808 podman[276828]: 2025-10-14 08:54:50.070491764 +0000 UTC m=+0.157646219 container start e67e4e4bef86bbb4f07b66dfa951302a554644083cca37c38f06082c6d5e148e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0)
Oct 14 04:54:50 np0005486808 neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7[276844]: [NOTICE]   (276848) : New worker (276850) forked
Oct 14 04:54:50 np0005486808 neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7[276844]: [NOTICE]   (276848) : Loading success.
Oct 14 04:54:50 np0005486808 nova_compute[259627]: 2025-10-14 08:54:50.987 2 DEBUG nova.compute.manager [req-38c900d9-552a-4697-ac7a-ce0b73820e8f req-c1886beb-22ba-4087-a47f-aaacbf30f17c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Received event network-changed-fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:54:50 np0005486808 nova_compute[259627]: 2025-10-14 08:54:50.988 2 DEBUG nova.compute.manager [req-38c900d9-552a-4697-ac7a-ce0b73820e8f req-c1886beb-22ba-4087-a47f-aaacbf30f17c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Refreshing instance network info cache due to event network-changed-fd4673d1-9420-4d31-a2ce-c5cb5bc79c42. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 04:54:50 np0005486808 nova_compute[259627]: 2025-10-14 08:54:50.988 2 DEBUG oslo_concurrency.lockutils [req-38c900d9-552a-4697-ac7a-ce0b73820e8f req-c1886beb-22ba-4087-a47f-aaacbf30f17c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-f1df3849-6811-41a9-9c70-f10a6863b4f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:54:50 np0005486808 nova_compute[259627]: 2025-10-14 08:54:50.989 2 DEBUG oslo_concurrency.lockutils [req-38c900d9-552a-4697-ac7a-ce0b73820e8f req-c1886beb-22ba-4087-a47f-aaacbf30f17c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-f1df3849-6811-41a9-9c70-f10a6863b4f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:54:50 np0005486808 nova_compute[259627]: 2025-10-14 08:54:50.989 2 DEBUG nova.network.neutron [req-38c900d9-552a-4697-ac7a-ce0b73820e8f req-c1886beb-22ba-4087-a47f-aaacbf30f17c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Refreshing network info cache for port fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 04:54:51 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 261 op/s
Oct 14 04:54:51 np0005486808 nova_compute[259627]: 2025-10-14 08:54:51.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:54:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:54:53 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1084: 305 pgs: 305 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 147 op/s
Oct 14 04:54:53 np0005486808 nova_compute[259627]: 2025-10-14 08:54:53.824 2 DEBUG nova.network.neutron [req-38c900d9-552a-4697-ac7a-ce0b73820e8f req-c1886beb-22ba-4087-a47f-aaacbf30f17c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Updated VIF entry in instance network info cache for port fd4673d1-9420-4d31-a2ce-c5cb5bc79c42. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 04:54:53 np0005486808 nova_compute[259627]: 2025-10-14 08:54:53.825 2 DEBUG nova.network.neutron [req-38c900d9-552a-4697-ac7a-ce0b73820e8f req-c1886beb-22ba-4087-a47f-aaacbf30f17c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Updating instance_info_cache with network_info: [{"id": "fd4673d1-9420-4d31-a2ce-c5cb5bc79c42", "address": "fa:16:3e:b8:61:64", "network": {"id": "6f970eb9-83e1-4efc-b15d-b5885b9eabe7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-458855963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ac87003cad443c2b75e49ebdefe379c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd4673d1-94", "ovs_interfaceid": "fd4673d1-9420-4d31-a2ce-c5cb5bc79c42", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:54:53 np0005486808 nova_compute[259627]: 2025-10-14 08:54:53.843 2 DEBUG oslo_concurrency.lockutils [req-38c900d9-552a-4697-ac7a-ce0b73820e8f req-c1886beb-22ba-4087-a47f-aaacbf30f17c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-f1df3849-6811-41a9-9c70-f10a6863b4f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:54:54 np0005486808 nova_compute[259627]: 2025-10-14 08:54:54.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:54:54 np0005486808 podman[276861]: 2025-10-14 08:54:54.695429735 +0000 UTC m=+0.096594064 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 14 04:54:54 np0005486808 podman[276860]: 2025-10-14 08:54:54.715984542 +0000 UTC m=+0.121217421 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 14 04:54:55 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 305 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 147 op/s
Oct 14 04:54:55 np0005486808 nova_compute[259627]: 2025-10-14 08:54:55.715 2 DEBUG oslo_concurrency.lockutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Acquiring lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:54:55 np0005486808 nova_compute[259627]: 2025-10-14 08:54:55.715 2 DEBUG oslo_concurrency.lockutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:54:55 np0005486808 nova_compute[259627]: 2025-10-14 08:54:55.734 2 DEBUG nova.compute.manager [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 04:54:55 np0005486808 nova_compute[259627]: 2025-10-14 08:54:55.798 2 DEBUG oslo_concurrency.lockutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:54:55 np0005486808 nova_compute[259627]: 2025-10-14 08:54:55.799 2 DEBUG oslo_concurrency.lockutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:54:55 np0005486808 nova_compute[259627]: 2025-10-14 08:54:55.810 2 DEBUG nova.virt.hardware [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 04:54:55 np0005486808 nova_compute[259627]: 2025-10-14 08:54:55.811 2 INFO nova.compute.claims [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 04:54:55 np0005486808 nova_compute[259627]: 2025-10-14 08:54:55.942 2 DEBUG oslo_concurrency.processutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:54:56 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:54:56 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2128658766' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:54:56 np0005486808 nova_compute[259627]: 2025-10-14 08:54:56.435 2 DEBUG oslo_concurrency.processutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:54:56 np0005486808 nova_compute[259627]: 2025-10-14 08:54:56.441 2 DEBUG nova.compute.provider_tree [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:54:56 np0005486808 nova_compute[259627]: 2025-10-14 08:54:56.463 2 DEBUG nova.scheduler.client.report [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:54:56 np0005486808 nova_compute[259627]: 2025-10-14 08:54:56.493 2 DEBUG oslo_concurrency.lockutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:54:56 np0005486808 nova_compute[259627]: 2025-10-14 08:54:56.494 2 DEBUG nova.compute.manager [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 04:54:56 np0005486808 nova_compute[259627]: 2025-10-14 08:54:56.575 2 DEBUG nova.compute.manager [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 04:54:56 np0005486808 nova_compute[259627]: 2025-10-14 08:54:56.575 2 DEBUG nova.network.neutron [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 04:54:56 np0005486808 nova_compute[259627]: 2025-10-14 08:54:56.606 2 INFO nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 04:54:56 np0005486808 nova_compute[259627]: 2025-10-14 08:54:56.629 2 DEBUG nova.compute.manager [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 04:54:56 np0005486808 nova_compute[259627]: 2025-10-14 08:54:56.739 2 DEBUG nova.compute.manager [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 04:54:56 np0005486808 nova_compute[259627]: 2025-10-14 08:54:56.742 2 DEBUG nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 04:54:56 np0005486808 nova_compute[259627]: 2025-10-14 08:54:56.743 2 INFO nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Creating image(s)#033[00m
Oct 14 04:54:56 np0005486808 nova_compute[259627]: 2025-10-14 08:54:56.771 2 DEBUG nova.storage.rbd_utils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] rbd image 7b60a7cc-57e5-4833-9541-ed03e9e862ea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:54:56 np0005486808 nova_compute[259627]: 2025-10-14 08:54:56.832 2 DEBUG nova.storage.rbd_utils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] rbd image 7b60a7cc-57e5-4833-9541-ed03e9e862ea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:54:56 np0005486808 nova_compute[259627]: 2025-10-14 08:54:56.866 2 DEBUG nova.storage.rbd_utils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] rbd image 7b60a7cc-57e5-4833-9541-ed03e9e862ea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:54:56 np0005486808 nova_compute[259627]: 2025-10-14 08:54:56.870 2 DEBUG oslo_concurrency.processutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:54:56 np0005486808 nova_compute[259627]: 2025-10-14 08:54:56.899 2 DEBUG nova.policy [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '831826dabb48463c92f24c277df4039e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f5b8fd07d6d54bda9a0257bf72d4b37f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 04:54:56 np0005486808 nova_compute[259627]: 2025-10-14 08:54:56.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:54:56 np0005486808 nova_compute[259627]: 2025-10-14 08:54:56.938 2 DEBUG oslo_concurrency.processutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:54:56 np0005486808 nova_compute[259627]: 2025-10-14 08:54:56.939 2 DEBUG oslo_concurrency.lockutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:54:56 np0005486808 nova_compute[259627]: 2025-10-14 08:54:56.939 2 DEBUG oslo_concurrency.lockutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:54:56 np0005486808 nova_compute[259627]: 2025-10-14 08:54:56.940 2 DEBUG oslo_concurrency.lockutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:54:56 np0005486808 nova_compute[259627]: 2025-10-14 08:54:56.979 2 DEBUG nova.storage.rbd_utils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] rbd image 7b60a7cc-57e5-4833-9541-ed03e9e862ea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:54:56 np0005486808 nova_compute[259627]: 2025-10-14 08:54:56.986 2 DEBUG oslo_concurrency.processutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 7b60a7cc-57e5-4833-9541-ed03e9e862ea_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:54:57 np0005486808 nova_compute[259627]: 2025-10-14 08:54:57.239 2 DEBUG oslo_concurrency.processutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 7b60a7cc-57e5-4833-9541-ed03e9e862ea_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.253s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:54:57 np0005486808 nova_compute[259627]: 2025-10-14 08:54:57.298 2 DEBUG nova.storage.rbd_utils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] resizing rbd image 7b60a7cc-57e5-4833-9541-ed03e9e862ea_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 04:54:57 np0005486808 nova_compute[259627]: 2025-10-14 08:54:57.403 2 DEBUG nova.objects.instance [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lazy-loading 'migration_context' on Instance uuid 7b60a7cc-57e5-4833-9541-ed03e9e862ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:54:57 np0005486808 nova_compute[259627]: 2025-10-14 08:54:57.426 2 DEBUG nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 04:54:57 np0005486808 nova_compute[259627]: 2025-10-14 08:54:57.427 2 DEBUG nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Ensure instance console log exists: /var/lib/nova/instances/7b60a7cc-57e5-4833-9541-ed03e9e862ea/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 04:54:57 np0005486808 nova_compute[259627]: 2025-10-14 08:54:57.427 2 DEBUG oslo_concurrency.lockutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:54:57 np0005486808 nova_compute[259627]: 2025-10-14 08:54:57.427 2 DEBUG oslo_concurrency.lockutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:54:57 np0005486808 nova_compute[259627]: 2025-10-14 08:54:57.428 2 DEBUG oslo_concurrency.lockutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:54:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:54:57 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 135 op/s
Oct 14 04:54:57 np0005486808 nova_compute[259627]: 2025-10-14 08:54:57.651 2 DEBUG nova.network.neutron [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Successfully created port: be863b8b-ed33-4cec-a274-d62c9bd4ac05 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 04:54:58 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 14 04:54:58 np0005486808 nova_compute[259627]: 2025-10-14 08:54:58.242 2 DEBUG nova.network.neutron [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Successfully updated port: be863b8b-ed33-4cec-a274-d62c9bd4ac05 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 04:54:58 np0005486808 nova_compute[259627]: 2025-10-14 08:54:58.259 2 DEBUG oslo_concurrency.lockutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Acquiring lock "refresh_cache-7b60a7cc-57e5-4833-9541-ed03e9e862ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:54:58 np0005486808 nova_compute[259627]: 2025-10-14 08:54:58.260 2 DEBUG oslo_concurrency.lockutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Acquired lock "refresh_cache-7b60a7cc-57e5-4833-9541-ed03e9e862ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:54:58 np0005486808 nova_compute[259627]: 2025-10-14 08:54:58.260 2 DEBUG nova.network.neutron [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 04:54:58 np0005486808 nova_compute[259627]: 2025-10-14 08:54:58.366 2 DEBUG nova.compute.manager [req-b83064d6-8255-47ff-82eb-f80fd9a7eadc req-15290f6f-c128-4ce9-8b33-2abfd1693373 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Received event network-changed-be863b8b-ed33-4cec-a274-d62c9bd4ac05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:54:58 np0005486808 nova_compute[259627]: 2025-10-14 08:54:58.366 2 DEBUG nova.compute.manager [req-b83064d6-8255-47ff-82eb-f80fd9a7eadc req-15290f6f-c128-4ce9-8b33-2abfd1693373 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Refreshing instance network info cache due to event network-changed-be863b8b-ed33-4cec-a274-d62c9bd4ac05. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 04:54:58 np0005486808 nova_compute[259627]: 2025-10-14 08:54:58.367 2 DEBUG oslo_concurrency.lockutils [req-b83064d6-8255-47ff-82eb-f80fd9a7eadc req-15290f6f-c128-4ce9-8b33-2abfd1693373 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-7b60a7cc-57e5-4833-9541-ed03e9e862ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:54:58 np0005486808 nova_compute[259627]: 2025-10-14 08:54:58.769 2 DEBUG nova.network.neutron [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 04:54:58 np0005486808 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 14 04:54:59 np0005486808 nova_compute[259627]: 2025-10-14 08:54:59.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:54:59 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 135 op/s
Oct 14 04:54:59 np0005486808 ovn_controller[152662]: 2025-10-14T08:54:59Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:76:a5:a0 10.100.0.12
Oct 14 04:54:59 np0005486808 ovn_controller[152662]: 2025-10-14T08:54:59Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:76:a5:a0 10.100.0.12
Oct 14 04:54:59 np0005486808 ovn_controller[152662]: 2025-10-14T08:54:59Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b8:61:64 10.100.0.10
Oct 14 04:54:59 np0005486808 ovn_controller[152662]: 2025-10-14T08:54:59Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b8:61:64 10.100.0.10
Oct 14 04:54:59 np0005486808 nova_compute[259627]: 2025-10-14 08:54:59.825 2 DEBUG nova.network.neutron [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Updating instance_info_cache with network_info: [{"id": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "address": "fa:16:3e:c0:74:b6", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe863b8b-ed", "ovs_interfaceid": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:54:59 np0005486808 nova_compute[259627]: 2025-10-14 08:54:59.858 2 DEBUG oslo_concurrency.lockutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Releasing lock "refresh_cache-7b60a7cc-57e5-4833-9541-ed03e9e862ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:54:59 np0005486808 nova_compute[259627]: 2025-10-14 08:54:59.858 2 DEBUG nova.compute.manager [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Instance network_info: |[{"id": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "address": "fa:16:3e:c0:74:b6", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe863b8b-ed", "ovs_interfaceid": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 04:54:59 np0005486808 nova_compute[259627]: 2025-10-14 08:54:59.859 2 DEBUG oslo_concurrency.lockutils [req-b83064d6-8255-47ff-82eb-f80fd9a7eadc req-15290f6f-c128-4ce9-8b33-2abfd1693373 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-7b60a7cc-57e5-4833-9541-ed03e9e862ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:54:59 np0005486808 nova_compute[259627]: 2025-10-14 08:54:59.860 2 DEBUG nova.network.neutron [req-b83064d6-8255-47ff-82eb-f80fd9a7eadc req-15290f6f-c128-4ce9-8b33-2abfd1693373 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Refreshing network info cache for port be863b8b-ed33-4cec-a274-d62c9bd4ac05 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 04:54:59 np0005486808 nova_compute[259627]: 2025-10-14 08:54:59.868 2 DEBUG nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Start _get_guest_xml network_info=[{"id": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "address": "fa:16:3e:c0:74:b6", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe863b8b-ed", "ovs_interfaceid": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 04:54:59 np0005486808 nova_compute[259627]: 2025-10-14 08:54:59.880 2 WARNING nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:54:59 np0005486808 nova_compute[259627]: 2025-10-14 08:54:59.896 2 DEBUG nova.virt.libvirt.host [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 04:54:59 np0005486808 nova_compute[259627]: 2025-10-14 08:54:59.897 2 DEBUG nova.virt.libvirt.host [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 04:54:59 np0005486808 nova_compute[259627]: 2025-10-14 08:54:59.903 2 DEBUG nova.virt.libvirt.host [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 04:54:59 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 14 04:54:59 np0005486808 nova_compute[259627]: 2025-10-14 08:54:59.904 2 DEBUG nova.virt.libvirt.host [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 04:54:59 np0005486808 nova_compute[259627]: 2025-10-14 08:54:59.905 2 DEBUG nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 04:54:59 np0005486808 nova_compute[259627]: 2025-10-14 08:54:59.905 2 DEBUG nova.virt.hardware [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 04:54:59 np0005486808 nova_compute[259627]: 2025-10-14 08:54:59.906 2 DEBUG nova.virt.hardware [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 04:54:59 np0005486808 nova_compute[259627]: 2025-10-14 08:54:59.907 2 DEBUG nova.virt.hardware [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 04:54:59 np0005486808 nova_compute[259627]: 2025-10-14 08:54:59.907 2 DEBUG nova.virt.hardware [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 04:54:59 np0005486808 nova_compute[259627]: 2025-10-14 08:54:59.907 2 DEBUG nova.virt.hardware [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 04:54:59 np0005486808 nova_compute[259627]: 2025-10-14 08:54:59.907 2 DEBUG nova.virt.hardware [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 04:54:59 np0005486808 nova_compute[259627]: 2025-10-14 08:54:59.908 2 DEBUG nova.virt.hardware [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 04:54:59 np0005486808 nova_compute[259627]: 2025-10-14 08:54:59.908 2 DEBUG nova.virt.hardware [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 04:54:59 np0005486808 nova_compute[259627]: 2025-10-14 08:54:59.909 2 DEBUG nova.virt.hardware [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 04:54:59 np0005486808 nova_compute[259627]: 2025-10-14 08:54:59.909 2 DEBUG nova.virt.hardware [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 04:54:59 np0005486808 nova_compute[259627]: 2025-10-14 08:54:59.909 2 DEBUG nova.virt.hardware [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 04:54:59 np0005486808 nova_compute[259627]: 2025-10-14 08:54:59.914 2 DEBUG oslo_concurrency.processutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:55:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:55:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3894751824' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:55:00 np0005486808 nova_compute[259627]: 2025-10-14 08:55:00.408 2 DEBUG oslo_concurrency.processutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:55:00 np0005486808 nova_compute[259627]: 2025-10-14 08:55:00.440 2 DEBUG nova.storage.rbd_utils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] rbd image 7b60a7cc-57e5-4833-9541-ed03e9e862ea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:55:00 np0005486808 nova_compute[259627]: 2025-10-14 08:55:00.444 2 DEBUG oslo_concurrency.processutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:55:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:55:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2075345938' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:55:00 np0005486808 nova_compute[259627]: 2025-10-14 08:55:00.885 2 DEBUG oslo_concurrency.processutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:55:00 np0005486808 nova_compute[259627]: 2025-10-14 08:55:00.888 2 DEBUG nova.virt.libvirt.vif [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:54:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1096031615',display_name='tempest-FloatingIPsAssociationTestJSON-server-1096031615',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1096031615',id=5,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f5b8fd07d6d54bda9a0257bf72d4b37f',ramdisk_id='',reservation_id='r-opxie8ra',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1304888620',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1304888620-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:54:56Z,user_data=None,user_id='831826dabb48463c92f24c277df4039e',uuid=7b60a7cc-57e5-4833-9541-ed03e9e862ea,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "address": "fa:16:3e:c0:74:b6", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe863b8b-ed", "ovs_interfaceid": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 04:55:00 np0005486808 nova_compute[259627]: 2025-10-14 08:55:00.888 2 DEBUG nova.network.os_vif_util [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Converting VIF {"id": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "address": "fa:16:3e:c0:74:b6", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe863b8b-ed", "ovs_interfaceid": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:55:00 np0005486808 nova_compute[259627]: 2025-10-14 08:55:00.889 2 DEBUG nova.network.os_vif_util [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:74:b6,bridge_name='br-int',has_traffic_filtering=True,id=be863b8b-ed33-4cec-a274-d62c9bd4ac05,network=Network(92d50a40-95c8-4c0a-a4ab-d459f68516aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe863b8b-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:55:00 np0005486808 nova_compute[259627]: 2025-10-14 08:55:00.891 2 DEBUG nova.objects.instance [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lazy-loading 'pci_devices' on Instance uuid 7b60a7cc-57e5-4833-9541-ed03e9e862ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:55:00 np0005486808 nova_compute[259627]: 2025-10-14 08:55:00.913 2 DEBUG nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] End _get_guest_xml xml=<domain type="kvm">
Oct 14 04:55:00 np0005486808 nova_compute[259627]:  <uuid>7b60a7cc-57e5-4833-9541-ed03e9e862ea</uuid>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:  <name>instance-00000005</name>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 04:55:00 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:      <nova:name>tempest-FloatingIPsAssociationTestJSON-server-1096031615</nova:name>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 08:54:59</nova:creationTime>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 04:55:00 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:        <nova:user uuid="831826dabb48463c92f24c277df4039e">tempest-FloatingIPsAssociationTestJSON-1304888620-project-member</nova:user>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:        <nova:project uuid="f5b8fd07d6d54bda9a0257bf72d4b37f">tempest-FloatingIPsAssociationTestJSON-1304888620</nova:project>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:        <nova:port uuid="be863b8b-ed33-4cec-a274-d62c9bd4ac05">
Oct 14 04:55:00 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <system>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:      <entry name="serial">7b60a7cc-57e5-4833-9541-ed03e9e862ea</entry>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:      <entry name="uuid">7b60a7cc-57e5-4833-9541-ed03e9e862ea</entry>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    </system>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:  <os>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:  </os>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:  <features>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:  </features>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:  </clock>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:  <devices>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 04:55:00 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/7b60a7cc-57e5-4833-9541-ed03e9e862ea_disk">
Oct 14 04:55:00 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:55:00 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 04:55:00 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/7b60a7cc-57e5-4833-9541-ed03e9e862ea_disk.config">
Oct 14 04:55:00 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:55:00 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 04:55:00 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:c0:74:b6"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:      <target dev="tapbe863b8b-ed"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    </interface>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 04:55:00 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/7b60a7cc-57e5-4833-9541-ed03e9e862ea/console.log" append="off"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    </serial>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <video>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    </video>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 04:55:00 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    </rng>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 04:55:00 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 04:55:00 np0005486808 nova_compute[259627]:  </devices>
Oct 14 04:55:00 np0005486808 nova_compute[259627]: </domain>
Oct 14 04:55:00 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 04:55:00 np0005486808 nova_compute[259627]: 2025-10-14 08:55:00.915 2 DEBUG nova.compute.manager [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Preparing to wait for external event network-vif-plugged-be863b8b-ed33-4cec-a274-d62c9bd4ac05 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 04:55:00 np0005486808 nova_compute[259627]: 2025-10-14 08:55:00.915 2 DEBUG oslo_concurrency.lockutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Acquiring lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:55:00 np0005486808 nova_compute[259627]: 2025-10-14 08:55:00.916 2 DEBUG oslo_concurrency.lockutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:55:00 np0005486808 nova_compute[259627]: 2025-10-14 08:55:00.916 2 DEBUG oslo_concurrency.lockutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:55:00 np0005486808 nova_compute[259627]: 2025-10-14 08:55:00.917 2 DEBUG nova.virt.libvirt.vif [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:54:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1096031615',display_name='tempest-FloatingIPsAssociationTestJSON-server-1096031615',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1096031615',id=5,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f5b8fd07d6d54bda9a0257bf72d4b37f',ramdisk_id='',reservation_id='r-opxie8ra',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1304888620',owne
r_user_name='tempest-FloatingIPsAssociationTestJSON-1304888620-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:54:56Z,user_data=None,user_id='831826dabb48463c92f24c277df4039e',uuid=7b60a7cc-57e5-4833-9541-ed03e9e862ea,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "address": "fa:16:3e:c0:74:b6", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe863b8b-ed", "ovs_interfaceid": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 04:55:00 np0005486808 nova_compute[259627]: 2025-10-14 08:55:00.917 2 DEBUG nova.network.os_vif_util [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Converting VIF {"id": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "address": "fa:16:3e:c0:74:b6", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe863b8b-ed", "ovs_interfaceid": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:55:00 np0005486808 nova_compute[259627]: 2025-10-14 08:55:00.918 2 DEBUG nova.network.os_vif_util [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:74:b6,bridge_name='br-int',has_traffic_filtering=True,id=be863b8b-ed33-4cec-a274-d62c9bd4ac05,network=Network(92d50a40-95c8-4c0a-a4ab-d459f68516aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe863b8b-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:55:00 np0005486808 nova_compute[259627]: 2025-10-14 08:55:00.919 2 DEBUG os_vif [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:74:b6,bridge_name='br-int',has_traffic_filtering=True,id=be863b8b-ed33-4cec-a274-d62c9bd4ac05,network=Network(92d50a40-95c8-4c0a-a4ab-d459f68516aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe863b8b-ed') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 04:55:00 np0005486808 nova_compute[259627]: 2025-10-14 08:55:00.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:00 np0005486808 nova_compute[259627]: 2025-10-14 08:55:00.921 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:55:00 np0005486808 nova_compute[259627]: 2025-10-14 08:55:00.922 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:55:00 np0005486808 nova_compute[259627]: 2025-10-14 08:55:00.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:00 np0005486808 nova_compute[259627]: 2025-10-14 08:55:00.927 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbe863b8b-ed, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:55:00 np0005486808 nova_compute[259627]: 2025-10-14 08:55:00.928 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbe863b8b-ed, col_values=(('external_ids', {'iface-id': 'be863b8b-ed33-4cec-a274-d62c9bd4ac05', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c0:74:b6', 'vm-uuid': '7b60a7cc-57e5-4833-9541-ed03e9e862ea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:55:00 np0005486808 NetworkManager[44885]: <info>  [1760432100.9323] manager: (tapbe863b8b-ed): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Oct 14 04:55:00 np0005486808 nova_compute[259627]: 2025-10-14 08:55:00.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 04:55:00 np0005486808 nova_compute[259627]: 2025-10-14 08:55:00.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:00 np0005486808 nova_compute[259627]: 2025-10-14 08:55:00.938 2 INFO os_vif [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:74:b6,bridge_name='br-int',has_traffic_filtering=True,id=be863b8b-ed33-4cec-a274-d62c9bd4ac05,network=Network(92d50a40-95c8-4c0a-a4ab-d459f68516aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe863b8b-ed')#033[00m
Oct 14 04:55:00 np0005486808 nova_compute[259627]: 2025-10-14 08:55:00.992 2 DEBUG nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:55:00 np0005486808 nova_compute[259627]: 2025-10-14 08:55:00.992 2 DEBUG nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:55:00 np0005486808 nova_compute[259627]: 2025-10-14 08:55:00.992 2 DEBUG nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] No VIF found with MAC fa:16:3e:c0:74:b6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 04:55:00 np0005486808 nova_compute[259627]: 2025-10-14 08:55:00.993 2 INFO nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Using config drive#033[00m
Oct 14 04:55:01 np0005486808 nova_compute[259627]: 2025-10-14 08:55:01.015 2 DEBUG nova.storage.rbd_utils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] rbd image 7b60a7cc-57e5-4833-9541-ed03e9e862ea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:55:01 np0005486808 nova_compute[259627]: 2025-10-14 08:55:01.363 2 DEBUG nova.network.neutron [req-b83064d6-8255-47ff-82eb-f80fd9a7eadc req-15290f6f-c128-4ce9-8b33-2abfd1693373 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Updated VIF entry in instance network info cache for port be863b8b-ed33-4cec-a274-d62c9bd4ac05. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 04:55:01 np0005486808 nova_compute[259627]: 2025-10-14 08:55:01.363 2 DEBUG nova.network.neutron [req-b83064d6-8255-47ff-82eb-f80fd9a7eadc req-15290f6f-c128-4ce9-8b33-2abfd1693373 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Updating instance_info_cache with network_info: [{"id": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "address": "fa:16:3e:c0:74:b6", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe863b8b-ed", "ovs_interfaceid": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:55:01 np0005486808 nova_compute[259627]: 2025-10-14 08:55:01.377 2 DEBUG oslo_concurrency.lockutils [req-b83064d6-8255-47ff-82eb-f80fd9a7eadc req-15290f6f-c128-4ce9-8b33-2abfd1693373 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-7b60a7cc-57e5-4833-9541-ed03e9e862ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:55:01 np0005486808 nova_compute[259627]: 2025-10-14 08:55:01.394 2 INFO nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Creating config drive at /var/lib/nova/instances/7b60a7cc-57e5-4833-9541-ed03e9e862ea/disk.config#033[00m
Oct 14 04:55:01 np0005486808 nova_compute[259627]: 2025-10-14 08:55:01.398 2 DEBUG oslo_concurrency.processutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7b60a7cc-57e5-4833-9541-ed03e9e862ea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe6_0lvgl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:55:01 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 305 active+clean; 241 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 6.0 MiB/s wr, 281 op/s
Oct 14 04:55:01 np0005486808 nova_compute[259627]: 2025-10-14 08:55:01.527 2 DEBUG oslo_concurrency.processutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7b60a7cc-57e5-4833-9541-ed03e9e862ea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe6_0lvgl" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:55:01 np0005486808 nova_compute[259627]: 2025-10-14 08:55:01.568 2 DEBUG nova.storage.rbd_utils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] rbd image 7b60a7cc-57e5-4833-9541-ed03e9e862ea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:55:01 np0005486808 nova_compute[259627]: 2025-10-14 08:55:01.574 2 DEBUG oslo_concurrency.processutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7b60a7cc-57e5-4833-9541-ed03e9e862ea/disk.config 7b60a7cc-57e5-4833-9541-ed03e9e862ea_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:55:01 np0005486808 nova_compute[259627]: 2025-10-14 08:55:01.728 2 DEBUG oslo_concurrency.processutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7b60a7cc-57e5-4833-9541-ed03e9e862ea/disk.config 7b60a7cc-57e5-4833-9541-ed03e9e862ea_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:55:01 np0005486808 nova_compute[259627]: 2025-10-14 08:55:01.730 2 INFO nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Deleting local config drive /var/lib/nova/instances/7b60a7cc-57e5-4833-9541-ed03e9e862ea/disk.config because it was imported into RBD.#033[00m
Oct 14 04:55:01 np0005486808 nova_compute[259627]: 2025-10-14 08:55:01.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:01 np0005486808 NetworkManager[44885]: <info>  [1760432101.7984] manager: (tapbe863b8b-ed): new Tun device (/org/freedesktop/NetworkManager/Devices/36)
Oct 14 04:55:01 np0005486808 kernel: tapbe863b8b-ed: entered promiscuous mode
Oct 14 04:55:01 np0005486808 ovn_controller[152662]: 2025-10-14T08:55:01Z|00038|binding|INFO|Claiming lport be863b8b-ed33-4cec-a274-d62c9bd4ac05 for this chassis.
Oct 14 04:55:01 np0005486808 ovn_controller[152662]: 2025-10-14T08:55:01Z|00039|binding|INFO|be863b8b-ed33-4cec-a274-d62c9bd4ac05: Claiming fa:16:3e:c0:74:b6 10.100.0.13
Oct 14 04:55:01 np0005486808 nova_compute[259627]: 2025-10-14 08:55:01.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:01.810 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c0:74:b6 10.100.0.13'], port_security=['fa:16:3e:c0:74:b6 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '7b60a7cc-57e5-4833-9541-ed03e9e862ea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-92d50a40-95c8-4c0a-a4ab-d459f68516aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f5b8fd07d6d54bda9a0257bf72d4b37f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e013b807-3b2c-404b-b699-697e2a823013', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=20317fae-45dd-4464-971e-a345fe497251, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=be863b8b-ed33-4cec-a274-d62c9bd4ac05) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:55:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:01.814 162547 INFO neutron.agent.ovn.metadata.agent [-] Port be863b8b-ed33-4cec-a274-d62c9bd4ac05 in datapath 92d50a40-95c8-4c0a-a4ab-d459f68516aa bound to our chassis#033[00m
Oct 14 04:55:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:01.819 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 92d50a40-95c8-4c0a-a4ab-d459f68516aa#033[00m
Oct 14 04:55:01 np0005486808 ovn_controller[152662]: 2025-10-14T08:55:01Z|00040|binding|INFO|Setting lport be863b8b-ed33-4cec-a274-d62c9bd4ac05 ovn-installed in OVS
Oct 14 04:55:01 np0005486808 ovn_controller[152662]: 2025-10-14T08:55:01Z|00041|binding|INFO|Setting lport be863b8b-ed33-4cec-a274-d62c9bd4ac05 up in Southbound
Oct 14 04:55:01 np0005486808 nova_compute[259627]: 2025-10-14 08:55:01.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:01 np0005486808 nova_compute[259627]: 2025-10-14 08:55:01.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:01 np0005486808 systemd-machined[214636]: New machine qemu-5-instance-00000005.
Oct 14 04:55:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:01.845 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[41dfe802-69c0-4231-9bd9-77dbddbcedbd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:01 np0005486808 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Oct 14 04:55:01 np0005486808 systemd-udevd[277224]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 04:55:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:01.884 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[113f033e-56c6-4795-b73a-c296d5314f6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:01 np0005486808 NetworkManager[44885]: <info>  [1760432101.8901] device (tapbe863b8b-ed): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 04:55:01 np0005486808 NetworkManager[44885]: <info>  [1760432101.8912] device (tapbe863b8b-ed): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 04:55:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:01.890 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[0ccefe4b-e46e-4525-882b-962e4d54d0b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:01.923 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3576dccf-187d-491f-a0fa-ff3b4690f417]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:01.939 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[443a7bf4-3000-4a0f-a6f1-0686df8dbfe9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap92d50a40-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:6c:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586713, 'reachable_time': 30695, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277234, 'error': None, 'target': 'ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:01.957 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5664779f-fa4f-4a96-bc46-f12d48ddf0a1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap92d50a40-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586730, 'tstamp': 586730}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277236, 'error': None, 'target': 'ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap92d50a40-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586733, 'tstamp': 586733}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277236, 'error': None, 'target': 'ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:01.958 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92d50a40-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:55:01 np0005486808 nova_compute[259627]: 2025-10-14 08:55:01.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:01 np0005486808 nova_compute[259627]: 2025-10-14 08:55:01.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:01.966 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap92d50a40-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:55:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:01.966 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:55:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:01.966 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap92d50a40-90, col_values=(('external_ids', {'iface-id': '2c98ab3c-01b9-41bc-bf2f-b9baea9e1b52'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:55:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:01.967 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:55:02 np0005486808 nova_compute[259627]: 2025-10-14 08:55:02.135 2 DEBUG nova.compute.manager [req-7fc071a0-8298-44c3-92d4-42ffdf56cc9c req-f0af7eeb-59ff-4001-932d-4b8a92729428 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Received event network-vif-plugged-be863b8b-ed33-4cec-a274-d62c9bd4ac05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:55:02 np0005486808 nova_compute[259627]: 2025-10-14 08:55:02.135 2 DEBUG oslo_concurrency.lockutils [req-7fc071a0-8298-44c3-92d4-42ffdf56cc9c req-f0af7eeb-59ff-4001-932d-4b8a92729428 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:55:02 np0005486808 nova_compute[259627]: 2025-10-14 08:55:02.136 2 DEBUG oslo_concurrency.lockutils [req-7fc071a0-8298-44c3-92d4-42ffdf56cc9c req-f0af7eeb-59ff-4001-932d-4b8a92729428 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:55:02 np0005486808 nova_compute[259627]: 2025-10-14 08:55:02.136 2 DEBUG oslo_concurrency.lockutils [req-7fc071a0-8298-44c3-92d4-42ffdf56cc9c req-f0af7eeb-59ff-4001-932d-4b8a92729428 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:55:02 np0005486808 nova_compute[259627]: 2025-10-14 08:55:02.136 2 DEBUG nova.compute.manager [req-7fc071a0-8298-44c3-92d4-42ffdf56cc9c req-f0af7eeb-59ff-4001-932d-4b8a92729428 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Processing event network-vif-plugged-be863b8b-ed33-4cec-a274-d62c9bd4ac05 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 04:55:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:55:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:55:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:55:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:55:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:55:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:55:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:55:02 np0005486808 nova_compute[259627]: 2025-10-14 08:55:02.951 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432102.9504519, 7b60a7cc-57e5-4833-9541-ed03e9e862ea => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:55:02 np0005486808 nova_compute[259627]: 2025-10-14 08:55:02.952 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] VM Started (Lifecycle Event)#033[00m
Oct 14 04:55:02 np0005486808 nova_compute[259627]: 2025-10-14 08:55:02.955 2 DEBUG nova.compute.manager [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 04:55:02 np0005486808 nova_compute[259627]: 2025-10-14 08:55:02.959 2 DEBUG nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 04:55:02 np0005486808 nova_compute[259627]: 2025-10-14 08:55:02.964 2 INFO nova.virt.libvirt.driver [-] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Instance spawned successfully.#033[00m
Oct 14 04:55:02 np0005486808 nova_compute[259627]: 2025-10-14 08:55:02.965 2 DEBUG nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 04:55:02 np0005486808 nova_compute[259627]: 2025-10-14 08:55:02.982 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:55:02 np0005486808 nova_compute[259627]: 2025-10-14 08:55:02.989 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:55:02 np0005486808 nova_compute[259627]: 2025-10-14 08:55:02.998 2 DEBUG nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:55:02 np0005486808 nova_compute[259627]: 2025-10-14 08:55:02.999 2 DEBUG nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:55:03 np0005486808 nova_compute[259627]: 2025-10-14 08:55:02.999 2 DEBUG nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:55:03 np0005486808 nova_compute[259627]: 2025-10-14 08:55:03.000 2 DEBUG nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:55:03 np0005486808 nova_compute[259627]: 2025-10-14 08:55:03.000 2 DEBUG nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:55:03 np0005486808 nova_compute[259627]: 2025-10-14 08:55:03.001 2 DEBUG nova.virt.libvirt.driver [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:55:03 np0005486808 nova_compute[259627]: 2025-10-14 08:55:03.012 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:55:03 np0005486808 nova_compute[259627]: 2025-10-14 08:55:03.012 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432102.9506228, 7b60a7cc-57e5-4833-9541-ed03e9e862ea => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:55:03 np0005486808 nova_compute[259627]: 2025-10-14 08:55:03.012 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] VM Paused (Lifecycle Event)#033[00m
Oct 14 04:55:03 np0005486808 nova_compute[259627]: 2025-10-14 08:55:03.042 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:55:03 np0005486808 nova_compute[259627]: 2025-10-14 08:55:03.046 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432102.9594593, 7b60a7cc-57e5-4833-9541-ed03e9e862ea => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:55:03 np0005486808 nova_compute[259627]: 2025-10-14 08:55:03.046 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] VM Resumed (Lifecycle Event)#033[00m
Oct 14 04:55:03 np0005486808 nova_compute[259627]: 2025-10-14 08:55:03.065 2 INFO nova.compute.manager [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Took 6.33 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 04:55:03 np0005486808 nova_compute[259627]: 2025-10-14 08:55:03.065 2 DEBUG nova.compute.manager [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:55:03 np0005486808 nova_compute[259627]: 2025-10-14 08:55:03.066 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:55:03 np0005486808 nova_compute[259627]: 2025-10-14 08:55:03.072 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:55:03 np0005486808 nova_compute[259627]: 2025-10-14 08:55:03.101 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:55:03 np0005486808 nova_compute[259627]: 2025-10-14 08:55:03.134 2 INFO nova.compute.manager [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Took 7.36 seconds to build instance.#033[00m
Oct 14 04:55:03 np0005486808 nova_compute[259627]: 2025-10-14 08:55:03.149 2 DEBUG oslo_concurrency.lockutils [None req-bd2aafc4-2ed8-4687-946c-96c27d292c55 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.434s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:55:03 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 241 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 576 KiB/s rd, 6.0 MiB/s wr, 146 op/s
Oct 14 04:55:04 np0005486808 nova_compute[259627]: 2025-10-14 08:55:04.277 2 DEBUG nova.compute.manager [req-252bfe1f-2699-43b3-b0c8-6f81135d07fa req-2bf89832-7198-4b2c-a5c3-cbf9d0506690 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Received event network-vif-plugged-be863b8b-ed33-4cec-a274-d62c9bd4ac05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:55:04 np0005486808 nova_compute[259627]: 2025-10-14 08:55:04.277 2 DEBUG oslo_concurrency.lockutils [req-252bfe1f-2699-43b3-b0c8-6f81135d07fa req-2bf89832-7198-4b2c-a5c3-cbf9d0506690 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:55:04 np0005486808 nova_compute[259627]: 2025-10-14 08:55:04.277 2 DEBUG oslo_concurrency.lockutils [req-252bfe1f-2699-43b3-b0c8-6f81135d07fa req-2bf89832-7198-4b2c-a5c3-cbf9d0506690 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:55:04 np0005486808 nova_compute[259627]: 2025-10-14 08:55:04.277 2 DEBUG oslo_concurrency.lockutils [req-252bfe1f-2699-43b3-b0c8-6f81135d07fa req-2bf89832-7198-4b2c-a5c3-cbf9d0506690 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:55:04 np0005486808 nova_compute[259627]: 2025-10-14 08:55:04.278 2 DEBUG nova.compute.manager [req-252bfe1f-2699-43b3-b0c8-6f81135d07fa req-2bf89832-7198-4b2c-a5c3-cbf9d0506690 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] No waiting events found dispatching network-vif-plugged-be863b8b-ed33-4cec-a274-d62c9bd4ac05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:55:04 np0005486808 nova_compute[259627]: 2025-10-14 08:55:04.278 2 WARNING nova.compute.manager [req-252bfe1f-2699-43b3-b0c8-6f81135d07fa req-2bf89832-7198-4b2c-a5c3-cbf9d0506690 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Received unexpected event network-vif-plugged-be863b8b-ed33-4cec-a274-d62c9bd4ac05 for instance with vm_state active and task_state None.#033[00m
Oct 14 04:55:05 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 246 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.1 MiB/s wr, 224 op/s
Oct 14 04:55:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 04:55:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2180873797' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 04:55:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 04:55:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2180873797' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 04:55:05 np0005486808 nova_compute[259627]: 2025-10-14 08:55:05.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:06 np0005486808 nova_compute[259627]: 2025-10-14 08:55:06.186 2 DEBUG oslo_concurrency.lockutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Acquiring lock "2d3012e0-0c96-4f38-aaf5-91e69018d624" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:55:06 np0005486808 nova_compute[259627]: 2025-10-14 08:55:06.187 2 DEBUG oslo_concurrency.lockutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Lock "2d3012e0-0c96-4f38-aaf5-91e69018d624" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:55:06 np0005486808 nova_compute[259627]: 2025-10-14 08:55:06.212 2 DEBUG nova.compute.manager [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 04:55:06 np0005486808 nova_compute[259627]: 2025-10-14 08:55:06.302 2 DEBUG nova.compute.manager [req-c0f8ec90-e2fd-4282-818e-4aace5648859 req-60256abd-35b0-42d8-8dec-8d6408252ce6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Received event network-changed-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:55:06 np0005486808 nova_compute[259627]: 2025-10-14 08:55:06.303 2 DEBUG nova.compute.manager [req-c0f8ec90-e2fd-4282-818e-4aace5648859 req-60256abd-35b0-42d8-8dec-8d6408252ce6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Refreshing instance network info cache due to event network-changed-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 04:55:06 np0005486808 nova_compute[259627]: 2025-10-14 08:55:06.304 2 DEBUG oslo_concurrency.lockutils [req-c0f8ec90-e2fd-4282-818e-4aace5648859 req-60256abd-35b0-42d8-8dec-8d6408252ce6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-30af67a2-4b44-481c-8ab4-296e93c1c517" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:55:06 np0005486808 nova_compute[259627]: 2025-10-14 08:55:06.304 2 DEBUG oslo_concurrency.lockutils [req-c0f8ec90-e2fd-4282-818e-4aace5648859 req-60256abd-35b0-42d8-8dec-8d6408252ce6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-30af67a2-4b44-481c-8ab4-296e93c1c517" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:55:06 np0005486808 nova_compute[259627]: 2025-10-14 08:55:06.305 2 DEBUG nova.network.neutron [req-c0f8ec90-e2fd-4282-818e-4aace5648859 req-60256abd-35b0-42d8-8dec-8d6408252ce6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Refreshing network info cache for port f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 04:55:06 np0005486808 nova_compute[259627]: 2025-10-14 08:55:06.312 2 DEBUG oslo_concurrency.lockutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:55:06 np0005486808 nova_compute[259627]: 2025-10-14 08:55:06.313 2 DEBUG oslo_concurrency.lockutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:55:06 np0005486808 nova_compute[259627]: 2025-10-14 08:55:06.322 2 DEBUG nova.virt.hardware [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 04:55:06 np0005486808 nova_compute[259627]: 2025-10-14 08:55:06.323 2 INFO nova.compute.claims [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 04:55:06 np0005486808 nova_compute[259627]: 2025-10-14 08:55:06.526 2 DEBUG oslo_concurrency.processutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:55:06 np0005486808 podman[277281]: 2025-10-14 08:55:06.714855764 +0000 UTC m=+0.068437309 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 04:55:06 np0005486808 podman[277280]: 2025-10-14 08:55:06.76132908 +0000 UTC m=+0.114583277 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009)
Oct 14 04:55:06 np0005486808 nova_compute[259627]: 2025-10-14 08:55:06.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:55:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/770342919' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:55:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:07.012 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:55:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:07.013 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:55:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:07.013 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.013 2 DEBUG oslo_concurrency.processutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.018 2 DEBUG nova.compute.provider_tree [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.034 2 DEBUG nova.scheduler.client.report [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.061 2 DEBUG oslo_concurrency.lockutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.062 2 DEBUG nova.compute.manager [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.102 2 DEBUG nova.compute.manager [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.102 2 DEBUG nova.network.neutron [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.120 2 INFO nova.virt.libvirt.driver [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.138 2 DEBUG nova.compute.manager [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.225 2 DEBUG nova.compute.manager [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.226 2 DEBUG nova.virt.libvirt.driver [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.226 2 INFO nova.virt.libvirt.driver [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Creating image(s)#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.250 2 DEBUG nova.storage.rbd_utils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] rbd image 2d3012e0-0c96-4f38-aaf5-91e69018d624_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.271 2 DEBUG nova.storage.rbd_utils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] rbd image 2d3012e0-0c96-4f38-aaf5-91e69018d624_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.292 2 DEBUG nova.storage.rbd_utils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] rbd image 2d3012e0-0c96-4f38-aaf5-91e69018d624_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.295 2 DEBUG oslo_concurrency.processutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.354 2 DEBUG oslo_concurrency.processutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.355 2 DEBUG oslo_concurrency.lockutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.355 2 DEBUG oslo_concurrency.lockutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.356 2 DEBUG oslo_concurrency.lockutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.374 2 DEBUG nova.storage.rbd_utils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] rbd image 2d3012e0-0c96-4f38-aaf5-91e69018d624_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.378 2 DEBUG oslo_concurrency.processutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 2d3012e0-0c96-4f38-aaf5-91e69018d624_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:55:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:55:07 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 246 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.1 MiB/s wr, 224 op/s
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.479 2 DEBUG nova.network.neutron [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.480 2 DEBUG nova.compute.manager [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.593 2 DEBUG oslo_concurrency.processutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 2d3012e0-0c96-4f38-aaf5-91e69018d624_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.215s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.667 2 DEBUG nova.storage.rbd_utils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] resizing rbd image 2d3012e0-0c96-4f38-aaf5-91e69018d624_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.792 2 DEBUG nova.objects.instance [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Lazy-loading 'migration_context' on Instance uuid 2d3012e0-0c96-4f38-aaf5-91e69018d624 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.810 2 DEBUG nova.virt.libvirt.driver [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.811 2 DEBUG nova.virt.libvirt.driver [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Ensure instance console log exists: /var/lib/nova/instances/2d3012e0-0c96-4f38-aaf5-91e69018d624/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.811 2 DEBUG oslo_concurrency.lockutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.812 2 DEBUG oslo_concurrency.lockutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.812 2 DEBUG oslo_concurrency.lockutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.814 2 DEBUG nova.virt.libvirt.driver [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.819 2 WARNING nova.virt.libvirt.driver [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.824 2 DEBUG nova.virt.libvirt.host [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.824 2 DEBUG nova.virt.libvirt.host [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.827 2 DEBUG nova.virt.libvirt.host [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.827 2 DEBUG nova.virt.libvirt.host [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.828 2 DEBUG nova.virt.libvirt.driver [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.828 2 DEBUG nova.virt.hardware [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.828 2 DEBUG nova.virt.hardware [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.828 2 DEBUG nova.virt.hardware [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.828 2 DEBUG nova.virt.hardware [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.829 2 DEBUG nova.virt.hardware [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.829 2 DEBUG nova.virt.hardware [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.829 2 DEBUG nova.virt.hardware [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.829 2 DEBUG nova.virt.hardware [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.829 2 DEBUG nova.virt.hardware [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.829 2 DEBUG nova.virt.hardware [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.830 2 DEBUG nova.virt.hardware [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.832 2 DEBUG oslo_concurrency.processutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.868 2 DEBUG nova.network.neutron [req-c0f8ec90-e2fd-4282-818e-4aace5648859 req-60256abd-35b0-42d8-8dec-8d6408252ce6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Updated VIF entry in instance network info cache for port f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.868 2 DEBUG nova.network.neutron [req-c0f8ec90-e2fd-4282-818e-4aace5648859 req-60256abd-35b0-42d8-8dec-8d6408252ce6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Updating instance_info_cache with network_info: [{"id": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "address": "fa:16:3e:76:a5:a0", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f1dcbf-2b", "ovs_interfaceid": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:55:07 np0005486808 nova_compute[259627]: 2025-10-14 08:55:07.883 2 DEBUG oslo_concurrency.lockutils [req-c0f8ec90-e2fd-4282-818e-4aace5648859 req-60256abd-35b0-42d8-8dec-8d6408252ce6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-30af67a2-4b44-481c-8ab4-296e93c1c517" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:55:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:55:08 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3427218954' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:55:08 np0005486808 nova_compute[259627]: 2025-10-14 08:55:08.307 2 DEBUG oslo_concurrency.processutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:55:08 np0005486808 nova_compute[259627]: 2025-10-14 08:55:08.326 2 DEBUG nova.storage.rbd_utils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] rbd image 2d3012e0-0c96-4f38-aaf5-91e69018d624_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:55:08 np0005486808 nova_compute[259627]: 2025-10-14 08:55:08.329 2 DEBUG oslo_concurrency.processutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:55:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:55:08 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3942311881' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:55:08 np0005486808 nova_compute[259627]: 2025-10-14 08:55:08.747 2 DEBUG oslo_concurrency.processutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:55:08 np0005486808 nova_compute[259627]: 2025-10-14 08:55:08.750 2 DEBUG nova.objects.instance [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2d3012e0-0c96-4f38-aaf5-91e69018d624 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:55:08 np0005486808 nova_compute[259627]: 2025-10-14 08:55:08.768 2 DEBUG nova.virt.libvirt.driver [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] End _get_guest_xml xml=<domain type="kvm">
Oct 14 04:55:08 np0005486808 nova_compute[259627]:  <uuid>2d3012e0-0c96-4f38-aaf5-91e69018d624</uuid>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:  <name>instance-00000006</name>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 04:55:08 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:      <nova:name>tempest-ServerDiagnosticsNegativeTest-server-1046199962</nova:name>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 08:55:07</nova:creationTime>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 04:55:08 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:        <nova:user uuid="d198584e448f4f7588fd71c62016a5d9">tempest-ServerDiagnosticsNegativeTest-1101654890-project-member</nova:user>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:        <nova:project uuid="b83283e63f5f412aa3f06e953847cac6">tempest-ServerDiagnosticsNegativeTest-1101654890</nova:project>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:      <nova:ports/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <system>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:      <entry name="serial">2d3012e0-0c96-4f38-aaf5-91e69018d624</entry>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:      <entry name="uuid">2d3012e0-0c96-4f38-aaf5-91e69018d624</entry>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    </system>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:  <os>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:  </os>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:  <features>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:  </features>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:  </clock>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:  <devices>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 04:55:08 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/2d3012e0-0c96-4f38-aaf5-91e69018d624_disk">
Oct 14 04:55:08 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:55:08 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 04:55:08 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/2d3012e0-0c96-4f38-aaf5-91e69018d624_disk.config">
Oct 14 04:55:08 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:55:08 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 04:55:08 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/2d3012e0-0c96-4f38-aaf5-91e69018d624/console.log" append="off"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    </serial>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <video>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    </video>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 04:55:08 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    </rng>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 04:55:08 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 04:55:08 np0005486808 nova_compute[259627]:  </devices>
Oct 14 04:55:08 np0005486808 nova_compute[259627]: </domain>
Oct 14 04:55:08 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 04:55:08 np0005486808 nova_compute[259627]: 2025-10-14 08:55:08.842 2 DEBUG nova.virt.libvirt.driver [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:55:08 np0005486808 nova_compute[259627]: 2025-10-14 08:55:08.843 2 DEBUG nova.virt.libvirt.driver [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:55:08 np0005486808 nova_compute[259627]: 2025-10-14 08:55:08.844 2 INFO nova.virt.libvirt.driver [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Using config drive#033[00m
Oct 14 04:55:08 np0005486808 nova_compute[259627]: 2025-10-14 08:55:08.871 2 DEBUG nova.storage.rbd_utils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] rbd image 2d3012e0-0c96-4f38-aaf5-91e69018d624_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:55:09 np0005486808 nova_compute[259627]: 2025-10-14 08:55:09.032 2 DEBUG oslo_concurrency.lockutils [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Acquiring lock "f1df3849-6811-41a9-9c70-f10a6863b4f9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:55:09 np0005486808 nova_compute[259627]: 2025-10-14 08:55:09.034 2 DEBUG oslo_concurrency.lockutils [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "f1df3849-6811-41a9-9c70-f10a6863b4f9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:55:09 np0005486808 nova_compute[259627]: 2025-10-14 08:55:09.034 2 DEBUG oslo_concurrency.lockutils [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Acquiring lock "f1df3849-6811-41a9-9c70-f10a6863b4f9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:55:09 np0005486808 nova_compute[259627]: 2025-10-14 08:55:09.034 2 DEBUG oslo_concurrency.lockutils [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "f1df3849-6811-41a9-9c70-f10a6863b4f9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:55:09 np0005486808 nova_compute[259627]: 2025-10-14 08:55:09.034 2 DEBUG oslo_concurrency.lockutils [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "f1df3849-6811-41a9-9c70-f10a6863b4f9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:55:09 np0005486808 nova_compute[259627]: 2025-10-14 08:55:09.035 2 INFO nova.compute.manager [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Terminating instance#033[00m
Oct 14 04:55:09 np0005486808 nova_compute[259627]: 2025-10-14 08:55:09.036 2 DEBUG nova.compute.manager [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 04:55:09 np0005486808 kernel: tapfd4673d1-94 (unregistering): left promiscuous mode
Oct 14 04:55:09 np0005486808 NetworkManager[44885]: <info>  [1760432109.1000] device (tapfd4673d1-94): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 04:55:09 np0005486808 nova_compute[259627]: 2025-10-14 08:55:09.125 2 INFO nova.virt.libvirt.driver [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Creating config drive at /var/lib/nova/instances/2d3012e0-0c96-4f38-aaf5-91e69018d624/disk.config#033[00m
Oct 14 04:55:09 np0005486808 nova_compute[259627]: 2025-10-14 08:55:09.130 2 DEBUG oslo_concurrency.processutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2d3012e0-0c96-4f38-aaf5-91e69018d624/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpurfes625 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:55:09 np0005486808 ovn_controller[152662]: 2025-10-14T08:55:09Z|00042|binding|INFO|Releasing lport fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 from this chassis (sb_readonly=0)
Oct 14 04:55:09 np0005486808 ovn_controller[152662]: 2025-10-14T08:55:09Z|00043|binding|INFO|Setting lport fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 down in Southbound
Oct 14 04:55:09 np0005486808 ovn_controller[152662]: 2025-10-14T08:55:09Z|00044|binding|INFO|Removing iface tapfd4673d1-94 ovn-installed in OVS
Oct 14 04:55:09 np0005486808 nova_compute[259627]: 2025-10-14 08:55:09.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:09.159 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:61:64 10.100.0.10'], port_security=['fa:16:3e:b8:61:64 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f1df3849-6811-41a9-9c70-f10a6863b4f9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6f970eb9-83e1-4efc-b15d-b5885b9eabe7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3ac87003cad443c2b75e49ebdefe379c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8a9f5d33-cc0d-455f-8821-c23805bbda66', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.218'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a071297a-bec1-4fd2-a338-694b6508cca6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=fd4673d1-9420-4d31-a2ce-c5cb5bc79c42) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:55:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:09.161 162547 INFO neutron.agent.ovn.metadata.agent [-] Port fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 in datapath 6f970eb9-83e1-4efc-b15d-b5885b9eabe7 unbound from our chassis#033[00m
Oct 14 04:55:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:09.163 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6f970eb9-83e1-4efc-b15d-b5885b9eabe7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 04:55:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:09.167 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2bf6ec4f-9470-45a6-b1a5-f6668530e059]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:09.168 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7 namespace which is not needed anymore#033[00m
Oct 14 04:55:09 np0005486808 nova_compute[259627]: 2025-10-14 08:55:09.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:09 np0005486808 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Oct 14 04:55:09 np0005486808 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 14.678s CPU time.
Oct 14 04:55:09 np0005486808 systemd-machined[214636]: Machine qemu-4-instance-00000004 terminated.
Oct 14 04:55:09 np0005486808 nova_compute[259627]: 2025-10-14 08:55:09.255 2 DEBUG oslo_concurrency.processutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2d3012e0-0c96-4f38-aaf5-91e69018d624/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpurfes625" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:55:09 np0005486808 nova_compute[259627]: 2025-10-14 08:55:09.281 2 DEBUG nova.storage.rbd_utils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] rbd image 2d3012e0-0c96-4f38-aaf5-91e69018d624_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:55:09 np0005486808 nova_compute[259627]: 2025-10-14 08:55:09.285 2 DEBUG oslo_concurrency.processutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2d3012e0-0c96-4f38-aaf5-91e69018d624/disk.config 2d3012e0-0c96-4f38-aaf5-91e69018d624_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:55:09 np0005486808 neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7[276844]: [NOTICE]   (276848) : haproxy version is 2.8.14-c23fe91
Oct 14 04:55:09 np0005486808 neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7[276844]: [NOTICE]   (276848) : path to executable is /usr/sbin/haproxy
Oct 14 04:55:09 np0005486808 neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7[276844]: [WARNING]  (276848) : Exiting Master process...
Oct 14 04:55:09 np0005486808 neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7[276844]: [ALERT]    (276848) : Current worker (276850) exited with code 143 (Terminated)
Oct 14 04:55:09 np0005486808 neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7[276844]: [WARNING]  (276848) : All workers exited. Exiting... (0)
Oct 14 04:55:09 np0005486808 nova_compute[259627]: 2025-10-14 08:55:09.309 2 INFO nova.virt.libvirt.driver [-] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Instance destroyed successfully.#033[00m
Oct 14 04:55:09 np0005486808 nova_compute[259627]: 2025-10-14 08:55:09.310 2 DEBUG nova.objects.instance [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lazy-loading 'resources' on Instance uuid f1df3849-6811-41a9-9c70-f10a6863b4f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:55:09 np0005486808 systemd[1]: libpod-e67e4e4bef86bbb4f07b66dfa951302a554644083cca37c38f06082c6d5e148e.scope: Deactivated successfully.
Oct 14 04:55:09 np0005486808 podman[277623]: 2025-10-14 08:55:09.320296794 +0000 UTC m=+0.047442641 container died e67e4e4bef86bbb4f07b66dfa951302a554644083cca37c38f06082c6d5e148e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 14 04:55:09 np0005486808 nova_compute[259627]: 2025-10-14 08:55:09.334 2 DEBUG nova.virt.libvirt.vif [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:54:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-148937452',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-148937452',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(29),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-148937452',id=4,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=29,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF7g6TTsjbjlUgcn3NOlTdjTTdxkB/tOz/SnrUnsyC2B2DiKRfdyGeRG/baba8J8u0sFhOnSigOVqumIulB1xwu2rOWWivaBSyCqPcCAc1UiuhefYZEgIiwxztAfhh5GqA==',key_name='tempest-keypair-2141010768',keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:54:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3ac87003cad443c2b75e49ebdefe379c',ramdisk_id='',reservation_id='r-oq3iap0i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-632252786',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-632252786-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:54:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='654cc6be69694fcd8058cc5a5eb78223',uuid=f1df3849-6811-41a9-9c70-f10a6863b4f9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fd4673d1-9420-4d31-a2ce-c5cb5bc79c42", "address": "fa:16:3e:b8:61:64", "network": {"id": "6f970eb9-83e1-4efc-b15d-b5885b9eabe7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-458855963-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ac87003cad443c2b75e49ebdefe379c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd4673d1-94", "ovs_interfaceid": "fd4673d1-9420-4d31-a2ce-c5cb5bc79c42", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 04:55:09 np0005486808 nova_compute[259627]: 2025-10-14 08:55:09.335 2 DEBUG nova.network.os_vif_util [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Converting VIF {"id": "fd4673d1-9420-4d31-a2ce-c5cb5bc79c42", "address": "fa:16:3e:b8:61:64", "network": {"id": "6f970eb9-83e1-4efc-b15d-b5885b9eabe7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-458855963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ac87003cad443c2b75e49ebdefe379c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd4673d1-94", "ovs_interfaceid": "fd4673d1-9420-4d31-a2ce-c5cb5bc79c42", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:55:09 np0005486808 nova_compute[259627]: 2025-10-14 08:55:09.336 2 DEBUG nova.network.os_vif_util [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b8:61:64,bridge_name='br-int',has_traffic_filtering=True,id=fd4673d1-9420-4d31-a2ce-c5cb5bc79c42,network=Network(6f970eb9-83e1-4efc-b15d-b5885b9eabe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd4673d1-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:55:09 np0005486808 nova_compute[259627]: 2025-10-14 08:55:09.336 2 DEBUG os_vif [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:61:64,bridge_name='br-int',has_traffic_filtering=True,id=fd4673d1-9420-4d31-a2ce-c5cb5bc79c42,network=Network(6f970eb9-83e1-4efc-b15d-b5885b9eabe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd4673d1-94') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 04:55:09 np0005486808 nova_compute[259627]: 2025-10-14 08:55:09.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:09 np0005486808 nova_compute[259627]: 2025-10-14 08:55:09.339 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd4673d1-94, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:55:09 np0005486808 nova_compute[259627]: 2025-10-14 08:55:09.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:09 np0005486808 nova_compute[259627]: 2025-10-14 08:55:09.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 04:55:09 np0005486808 systemd[1]: var-lib-containers-storage-overlay-ffccbbfb65602d0e35d329bb8e895b905f5212158c2b547223141001e2533a96-merged.mount: Deactivated successfully.
Oct 14 04:55:09 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e67e4e4bef86bbb4f07b66dfa951302a554644083cca37c38f06082c6d5e148e-userdata-shm.mount: Deactivated successfully.
Oct 14 04:55:09 np0005486808 nova_compute[259627]: 2025-10-14 08:55:09.346 2 INFO os_vif [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:61:64,bridge_name='br-int',has_traffic_filtering=True,id=fd4673d1-9420-4d31-a2ce-c5cb5bc79c42,network=Network(6f970eb9-83e1-4efc-b15d-b5885b9eabe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd4673d1-94')#033[00m
Oct 14 04:55:09 np0005486808 podman[277623]: 2025-10-14 08:55:09.363870839 +0000 UTC m=+0.091016686 container cleanup e67e4e4bef86bbb4f07b66dfa951302a554644083cca37c38f06082c6d5e148e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 14 04:55:09 np0005486808 systemd[1]: libpod-conmon-e67e4e4bef86bbb4f07b66dfa951302a554644083cca37c38f06082c6d5e148e.scope: Deactivated successfully.
Oct 14 04:55:09 np0005486808 podman[277704]: 2025-10-14 08:55:09.434967023 +0000 UTC m=+0.045582476 container remove e67e4e4bef86bbb4f07b66dfa951302a554644083cca37c38f06082c6d5e148e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 14 04:55:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:09.441 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[902c06d1-1858-407c-a034-4f463d1e2eab]: (4, ('Tue Oct 14 08:55:09 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7 (e67e4e4bef86bbb4f07b66dfa951302a554644083cca37c38f06082c6d5e148e)\ne67e4e4bef86bbb4f07b66dfa951302a554644083cca37c38f06082c6d5e148e\nTue Oct 14 08:55:09 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7 (e67e4e4bef86bbb4f07b66dfa951302a554644083cca37c38f06082c6d5e148e)\ne67e4e4bef86bbb4f07b66dfa951302a554644083cca37c38f06082c6d5e148e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:09.447 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0130d686-6e15-4619-8d24-2311b98e13ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:09 np0005486808 nova_compute[259627]: 2025-10-14 08:55:09.447 2 DEBUG oslo_concurrency.processutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2d3012e0-0c96-4f38-aaf5-91e69018d624/disk.config 2d3012e0-0c96-4f38-aaf5-91e69018d624_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:55:09 np0005486808 nova_compute[259627]: 2025-10-14 08:55:09.448 2 INFO nova.virt.libvirt.driver [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Deleting local config drive /var/lib/nova/instances/2d3012e0-0c96-4f38-aaf5-91e69018d624/disk.config because it was imported into RBD.#033[00m
Oct 14 04:55:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:09.448 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6f970eb9-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:55:09 np0005486808 nova_compute[259627]: 2025-10-14 08:55:09.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:09 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 246 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.1 MiB/s wr, 224 op/s
Oct 14 04:55:09 np0005486808 kernel: tap6f970eb9-80: left promiscuous mode
Oct 14 04:55:09 np0005486808 nova_compute[259627]: 2025-10-14 08:55:09.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:09.474 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0e76b138-254e-4762-b817-5c3694a7299a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:09.502 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d23b5c9f-a311-4351-b3cc-ad89f80642c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:09.503 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[04487446-bfdd-4519-b055-e2647a99f38d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:09.518 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[42d916de-ebab-40b0-946e-48887570ccd7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586810, 'reachable_time': 28615, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277732, 'error': None, 'target': 'ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:09 np0005486808 systemd-machined[214636]: New machine qemu-6-instance-00000006.
Oct 14 04:55:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:09.531 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 04:55:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:09.532 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[50f0a6f8-3147-4398-902f-8375c2a34a43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:09 np0005486808 systemd[1]: run-netns-ovnmeta\x2d6f970eb9\x2d83e1\x2d4efc\x2db15d\x2db5885b9eabe7.mount: Deactivated successfully.
Oct 14 04:55:09 np0005486808 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Oct 14 04:55:09 np0005486808 nova_compute[259627]: 2025-10-14 08:55:09.748 2 INFO nova.virt.libvirt.driver [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Deleting instance files /var/lib/nova/instances/f1df3849-6811-41a9-9c70-f10a6863b4f9_del#033[00m
Oct 14 04:55:09 np0005486808 nova_compute[259627]: 2025-10-14 08:55:09.749 2 INFO nova.virt.libvirt.driver [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Deletion of /var/lib/nova/instances/f1df3849-6811-41a9-9c70-f10a6863b4f9_del complete#033[00m
Oct 14 04:55:09 np0005486808 nova_compute[259627]: 2025-10-14 08:55:09.831 2 INFO nova.compute.manager [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 04:55:09 np0005486808 nova_compute[259627]: 2025-10-14 08:55:09.832 2 DEBUG oslo.service.loopingcall [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 04:55:09 np0005486808 nova_compute[259627]: 2025-10-14 08:55:09.832 2 DEBUG nova.compute.manager [-] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 04:55:09 np0005486808 nova_compute[259627]: 2025-10-14 08:55:09.832 2 DEBUG nova.network.neutron [-] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 04:55:10 np0005486808 nova_compute[259627]: 2025-10-14 08:55:10.540 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432110.5404274, 2d3012e0-0c96-4f38-aaf5-91e69018d624 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:55:10 np0005486808 nova_compute[259627]: 2025-10-14 08:55:10.542 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] VM Resumed (Lifecycle Event)#033[00m
Oct 14 04:55:10 np0005486808 nova_compute[259627]: 2025-10-14 08:55:10.544 2 DEBUG nova.compute.manager [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 04:55:10 np0005486808 nova_compute[259627]: 2025-10-14 08:55:10.545 2 DEBUG nova.virt.libvirt.driver [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 04:55:10 np0005486808 nova_compute[259627]: 2025-10-14 08:55:10.548 2 INFO nova.virt.libvirt.driver [-] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Instance spawned successfully.#033[00m
Oct 14 04:55:10 np0005486808 nova_compute[259627]: 2025-10-14 08:55:10.549 2 DEBUG nova.virt.libvirt.driver [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 04:55:10 np0005486808 nova_compute[259627]: 2025-10-14 08:55:10.575 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:55:10 np0005486808 nova_compute[259627]: 2025-10-14 08:55:10.578 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:55:10 np0005486808 nova_compute[259627]: 2025-10-14 08:55:10.585 2 DEBUG nova.virt.libvirt.driver [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:55:10 np0005486808 nova_compute[259627]: 2025-10-14 08:55:10.586 2 DEBUG nova.virt.libvirt.driver [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:55:10 np0005486808 nova_compute[259627]: 2025-10-14 08:55:10.586 2 DEBUG nova.virt.libvirt.driver [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:55:10 np0005486808 nova_compute[259627]: 2025-10-14 08:55:10.587 2 DEBUG nova.virt.libvirt.driver [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:55:10 np0005486808 nova_compute[259627]: 2025-10-14 08:55:10.588 2 DEBUG nova.virt.libvirt.driver [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:55:10 np0005486808 nova_compute[259627]: 2025-10-14 08:55:10.588 2 DEBUG nova.virt.libvirt.driver [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:55:10 np0005486808 nova_compute[259627]: 2025-10-14 08:55:10.635 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:55:10 np0005486808 nova_compute[259627]: 2025-10-14 08:55:10.636 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432110.5429018, 2d3012e0-0c96-4f38-aaf5-91e69018d624 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:55:10 np0005486808 nova_compute[259627]: 2025-10-14 08:55:10.636 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] VM Started (Lifecycle Event)#033[00m
Oct 14 04:55:10 np0005486808 nova_compute[259627]: 2025-10-14 08:55:10.664 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:55:10 np0005486808 nova_compute[259627]: 2025-10-14 08:55:10.666 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:55:10 np0005486808 nova_compute[259627]: 2025-10-14 08:55:10.676 2 INFO nova.compute.manager [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Took 3.45 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 04:55:10 np0005486808 nova_compute[259627]: 2025-10-14 08:55:10.677 2 DEBUG nova.compute.manager [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:55:10 np0005486808 nova_compute[259627]: 2025-10-14 08:55:10.694 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:55:10 np0005486808 nova_compute[259627]: 2025-10-14 08:55:10.749 2 INFO nova.compute.manager [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Took 4.49 seconds to build instance.#033[00m
Oct 14 04:55:10 np0005486808 nova_compute[259627]: 2025-10-14 08:55:10.774 2 DEBUG oslo_concurrency.lockutils [None req-4b5017ed-7164-469a-b1c5-967e8e25e4cf d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Lock "2d3012e0-0c96-4f38-aaf5-91e69018d624" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:55:10 np0005486808 nova_compute[259627]: 2025-10-14 08:55:10.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:55:10 np0005486808 nova_compute[259627]: 2025-10-14 08:55:10.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:55:10 np0005486808 nova_compute[259627]: 2025-10-14 08:55:10.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 04:55:11 np0005486808 nova_compute[259627]: 2025-10-14 08:55:11.415 2 DEBUG nova.network.neutron [-] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:55:11 np0005486808 nova_compute[259627]: 2025-10-14 08:55:11.435 2 INFO nova.compute.manager [-] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Took 1.60 seconds to deallocate network for instance.#033[00m
Oct 14 04:55:11 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 214 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 7.8 MiB/s wr, 285 op/s
Oct 14 04:55:11 np0005486808 nova_compute[259627]: 2025-10-14 08:55:11.483 2 DEBUG oslo_concurrency.lockutils [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:55:11 np0005486808 nova_compute[259627]: 2025-10-14 08:55:11.484 2 DEBUG oslo_concurrency.lockutils [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:55:11 np0005486808 nova_compute[259627]: 2025-10-14 08:55:11.571 2 DEBUG nova.compute.manager [req-8ba9debd-1a6e-45b5-8768-21eabf61b6e1 req-88b607eb-290f-4517-9af6-20bc6bbf9a8a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Received event network-vif-unplugged-fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:55:11 np0005486808 nova_compute[259627]: 2025-10-14 08:55:11.571 2 DEBUG oslo_concurrency.lockutils [req-8ba9debd-1a6e-45b5-8768-21eabf61b6e1 req-88b607eb-290f-4517-9af6-20bc6bbf9a8a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f1df3849-6811-41a9-9c70-f10a6863b4f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:55:11 np0005486808 nova_compute[259627]: 2025-10-14 08:55:11.572 2 DEBUG oslo_concurrency.lockutils [req-8ba9debd-1a6e-45b5-8768-21eabf61b6e1 req-88b607eb-290f-4517-9af6-20bc6bbf9a8a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f1df3849-6811-41a9-9c70-f10a6863b4f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:55:11 np0005486808 nova_compute[259627]: 2025-10-14 08:55:11.572 2 DEBUG oslo_concurrency.lockutils [req-8ba9debd-1a6e-45b5-8768-21eabf61b6e1 req-88b607eb-290f-4517-9af6-20bc6bbf9a8a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f1df3849-6811-41a9-9c70-f10a6863b4f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:55:11 np0005486808 nova_compute[259627]: 2025-10-14 08:55:11.572 2 DEBUG nova.compute.manager [req-8ba9debd-1a6e-45b5-8768-21eabf61b6e1 req-88b607eb-290f-4517-9af6-20bc6bbf9a8a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] No waiting events found dispatching network-vif-unplugged-fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:55:11 np0005486808 nova_compute[259627]: 2025-10-14 08:55:11.572 2 WARNING nova.compute.manager [req-8ba9debd-1a6e-45b5-8768-21eabf61b6e1 req-88b607eb-290f-4517-9af6-20bc6bbf9a8a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Received unexpected event network-vif-unplugged-fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 04:55:11 np0005486808 nova_compute[259627]: 2025-10-14 08:55:11.626 2 DEBUG oslo_concurrency.processutils [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:55:11 np0005486808 nova_compute[259627]: 2025-10-14 08:55:11.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:11 np0005486808 nova_compute[259627]: 2025-10-14 08:55:11.974 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:55:11 np0005486808 nova_compute[259627]: 2025-10-14 08:55:11.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:55:11 np0005486808 nova_compute[259627]: 2025-10-14 08:55:11.985 2 DEBUG nova.compute.manager [req-b9b75c77-4441-4d0e-b96a-df55a94793a2 req-0fe84787-6d68-420a-a3e2-b9b39db4ebbc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Received event network-changed-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:55:11 np0005486808 nova_compute[259627]: 2025-10-14 08:55:11.986 2 DEBUG nova.compute.manager [req-b9b75c77-4441-4d0e-b96a-df55a94793a2 req-0fe84787-6d68-420a-a3e2-b9b39db4ebbc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Refreshing instance network info cache due to event network-changed-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 04:55:11 np0005486808 nova_compute[259627]: 2025-10-14 08:55:11.986 2 DEBUG oslo_concurrency.lockutils [req-b9b75c77-4441-4d0e-b96a-df55a94793a2 req-0fe84787-6d68-420a-a3e2-b9b39db4ebbc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-30af67a2-4b44-481c-8ab4-296e93c1c517" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:55:11 np0005486808 nova_compute[259627]: 2025-10-14 08:55:11.986 2 DEBUG oslo_concurrency.lockutils [req-b9b75c77-4441-4d0e-b96a-df55a94793a2 req-0fe84787-6d68-420a-a3e2-b9b39db4ebbc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-30af67a2-4b44-481c-8ab4-296e93c1c517" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:55:11 np0005486808 nova_compute[259627]: 2025-10-14 08:55:11.986 2 DEBUG nova.network.neutron [req-b9b75c77-4441-4d0e-b96a-df55a94793a2 req-0fe84787-6d68-420a-a3e2-b9b39db4ebbc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Refreshing network info cache for port f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 04:55:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:55:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1989207332' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:55:12 np0005486808 nova_compute[259627]: 2025-10-14 08:55:12.086 2 DEBUG oslo_concurrency.processutils [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:55:12 np0005486808 nova_compute[259627]: 2025-10-14 08:55:12.092 2 DEBUG nova.compute.provider_tree [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:55:12 np0005486808 nova_compute[259627]: 2025-10-14 08:55:12.114 2 DEBUG nova.scheduler.client.report [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:55:12 np0005486808 nova_compute[259627]: 2025-10-14 08:55:12.136 2 DEBUG oslo_concurrency.lockutils [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:55:12 np0005486808 nova_compute[259627]: 2025-10-14 08:55:12.187 2 INFO nova.scheduler.client.report [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Deleted allocations for instance f1df3849-6811-41a9-9c70-f10a6863b4f9#033[00m
Oct 14 04:55:12 np0005486808 nova_compute[259627]: 2025-10-14 08:55:12.191 2 DEBUG oslo_concurrency.lockutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Acquiring lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:55:12 np0005486808 nova_compute[259627]: 2025-10-14 08:55:12.192 2 DEBUG oslo_concurrency.lockutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:55:12 np0005486808 nova_compute[259627]: 2025-10-14 08:55:12.233 2 DEBUG nova.compute.manager [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 04:55:12 np0005486808 nova_compute[259627]: 2025-10-14 08:55:12.315 2 DEBUG oslo_concurrency.lockutils [None req-49b2ae85-8a8e-43b9-9862-71f6cdd259d8 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "f1df3849-6811-41a9-9c70-f10a6863b4f9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.281s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:55:12 np0005486808 nova_compute[259627]: 2025-10-14 08:55:12.335 2 DEBUG oslo_concurrency.lockutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:55:12 np0005486808 nova_compute[259627]: 2025-10-14 08:55:12.335 2 DEBUG oslo_concurrency.lockutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:55:12 np0005486808 nova_compute[259627]: 2025-10-14 08:55:12.340 2 DEBUG nova.virt.hardware [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 04:55:12 np0005486808 nova_compute[259627]: 2025-10-14 08:55:12.341 2 INFO nova.compute.claims [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Claim successful on node compute-0.ctlplane.example.com
Oct 14 04:55:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:55:12 np0005486808 nova_compute[259627]: 2025-10-14 08:55:12.534 2 DEBUG oslo_concurrency.processutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:55:12 np0005486808 nova_compute[259627]: 2025-10-14 08:55:12.595 2 DEBUG oslo_concurrency.lockutils [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Acquiring lock "2d3012e0-0c96-4f38-aaf5-91e69018d624" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:55:12 np0005486808 nova_compute[259627]: 2025-10-14 08:55:12.596 2 DEBUG oslo_concurrency.lockutils [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Lock "2d3012e0-0c96-4f38-aaf5-91e69018d624" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:55:12 np0005486808 nova_compute[259627]: 2025-10-14 08:55:12.596 2 DEBUG oslo_concurrency.lockutils [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Acquiring lock "2d3012e0-0c96-4f38-aaf5-91e69018d624-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:55:12 np0005486808 nova_compute[259627]: 2025-10-14 08:55:12.597 2 DEBUG oslo_concurrency.lockutils [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Lock "2d3012e0-0c96-4f38-aaf5-91e69018d624-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:55:12 np0005486808 nova_compute[259627]: 2025-10-14 08:55:12.597 2 DEBUG oslo_concurrency.lockutils [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Lock "2d3012e0-0c96-4f38-aaf5-91e69018d624-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:55:12 np0005486808 nova_compute[259627]: 2025-10-14 08:55:12.598 2 INFO nova.compute.manager [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Terminating instance
Oct 14 04:55:12 np0005486808 nova_compute[259627]: 2025-10-14 08:55:12.599 2 DEBUG oslo_concurrency.lockutils [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Acquiring lock "refresh_cache-2d3012e0-0c96-4f38-aaf5-91e69018d624" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 04:55:12 np0005486808 nova_compute[259627]: 2025-10-14 08:55:12.599 2 DEBUG oslo_concurrency.lockutils [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Acquired lock "refresh_cache-2d3012e0-0c96-4f38-aaf5-91e69018d624" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 04:55:12 np0005486808 nova_compute[259627]: 2025-10-14 08:55:12.599 2 DEBUG nova.network.neutron [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 04:55:12 np0005486808 nova_compute[259627]: 2025-10-14 08:55:12.824 2 DEBUG nova.network.neutron [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 04:55:12 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Oct 14 04:55:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:55:12.913646) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 04:55:12 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Oct 14 04:55:12 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432112913682, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2084, "num_deletes": 251, "total_data_size": 3398059, "memory_usage": 3460080, "flush_reason": "Manual Compaction"}
Oct 14 04:55:12 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Oct 14 04:55:12 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432112926397, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3309618, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20882, "largest_seqno": 22965, "table_properties": {"data_size": 3300262, "index_size": 5850, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19305, "raw_average_key_size": 20, "raw_value_size": 3281295, "raw_average_value_size": 3428, "num_data_blocks": 264, "num_entries": 957, "num_filter_entries": 957, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760431900, "oldest_key_time": 1760431900, "file_creation_time": 1760432112, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Oct 14 04:55:12 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 12864 microseconds, and 6121 cpu microseconds.
Oct 14 04:55:12 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 04:55:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:55:12.926509) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3309618 bytes OK
Oct 14 04:55:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:55:12.926553) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Oct 14 04:55:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:55:12.928422) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Oct 14 04:55:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:55:12.928433) EVENT_LOG_v1 {"time_micros": 1760432112928429, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 04:55:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:55:12.928448) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 04:55:12 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3389304, prev total WAL file size 3389304, number of live WAL files 2.
Oct 14 04:55:12 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 04:55:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:55:12.929394) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Oct 14 04:55:12 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 04:55:12 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3232KB)], [50(7365KB)]
Oct 14 04:55:12 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432112929462, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 10851920, "oldest_snapshot_seqno": -1}
Oct 14 04:55:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:55:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/950822610' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:55:12 np0005486808 nova_compute[259627]: 2025-10-14 08:55:12.968 2 DEBUG oslo_concurrency.processutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:55:12 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4728 keys, 9106228 bytes, temperature: kUnknown
Oct 14 04:55:12 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432112973900, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9106228, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9072192, "index_size": 21126, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11845, "raw_key_size": 115850, "raw_average_key_size": 24, "raw_value_size": 8984321, "raw_average_value_size": 1900, "num_data_blocks": 889, "num_entries": 4728, "num_filter_entries": 4728, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760432112, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Oct 14 04:55:12 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 04:55:12 np0005486808 nova_compute[259627]: 2025-10-14 08:55:12.974 2 DEBUG nova.compute.provider_tree [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 04:55:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:55:12.974153) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9106228 bytes
Oct 14 04:55:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:55:12.975561) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 243.9 rd, 204.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.2 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(6.0) write-amplify(2.8) OK, records in: 5246, records dropped: 518 output_compression: NoCompression
Oct 14 04:55:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:55:12.975576) EVENT_LOG_v1 {"time_micros": 1760432112975569, "job": 26, "event": "compaction_finished", "compaction_time_micros": 44493, "compaction_time_cpu_micros": 18109, "output_level": 6, "num_output_files": 1, "total_output_size": 9106228, "num_input_records": 5246, "num_output_records": 4728, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 04:55:12 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 04:55:12 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432112976154, "job": 26, "event": "table_file_deletion", "file_number": 52}
Oct 14 04:55:12 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 04:55:12 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432112977214, "job": 26, "event": "table_file_deletion", "file_number": 50}
Oct 14 04:55:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:55:12.929273) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:55:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:55:12.977285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:55:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:55:12.977291) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:55:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:55:12.977294) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:55:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:55:12.977297) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:55:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:55:12.977299) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:55:12 np0005486808 nova_compute[259627]: 2025-10-14 08:55:12.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 04:55:13 np0005486808 nova_compute[259627]: 2025-10-14 08:55:13.003 2 DEBUG nova.scheduler.client.report [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 04:55:13 np0005486808 nova_compute[259627]: 2025-10-14 08:55:13.033 2 DEBUG oslo_concurrency.lockutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:55:13 np0005486808 nova_compute[259627]: 2025-10-14 08:55:13.034 2 DEBUG nova.compute.manager [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 04:55:13 np0005486808 nova_compute[259627]: 2025-10-14 08:55:13.080 2 DEBUG nova.compute.manager [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 04:55:13 np0005486808 nova_compute[259627]: 2025-10-14 08:55:13.081 2 DEBUG nova.network.neutron [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 04:55:13 np0005486808 nova_compute[259627]: 2025-10-14 08:55:13.102 2 INFO nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 04:55:13 np0005486808 nova_compute[259627]: 2025-10-14 08:55:13.137 2 DEBUG nova.compute.manager [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 04:55:13 np0005486808 nova_compute[259627]: 2025-10-14 08:55:13.200 2 DEBUG nova.network.neutron [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 04:55:13 np0005486808 nova_compute[259627]: 2025-10-14 08:55:13.219 2 DEBUG oslo_concurrency.lockutils [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Releasing lock "refresh_cache-2d3012e0-0c96-4f38-aaf5-91e69018d624" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 04:55:13 np0005486808 nova_compute[259627]: 2025-10-14 08:55:13.220 2 DEBUG nova.compute.manager [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 04:55:13 np0005486808 nova_compute[259627]: 2025-10-14 08:55:13.241 2 DEBUG nova.compute.manager [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 04:55:13 np0005486808 nova_compute[259627]: 2025-10-14 08:55:13.256 2 DEBUG nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 04:55:13 np0005486808 nova_compute[259627]: 2025-10-14 08:55:13.257 2 INFO nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Creating image(s)
Oct 14 04:55:13 np0005486808 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Oct 14 04:55:13 np0005486808 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 3.732s CPU time.
Oct 14 04:55:13 np0005486808 systemd-machined[214636]: Machine qemu-6-instance-00000006 terminated.
Oct 14 04:55:13 np0005486808 nova_compute[259627]: 2025-10-14 08:55:13.300 2 DEBUG nova.storage.rbd_utils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] rbd image b063b9bf-1f88-47d3-a838-a4bcfc5eeecc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:55:13 np0005486808 nova_compute[259627]: 2025-10-14 08:55:13.334 2 DEBUG nova.storage.rbd_utils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] rbd image b063b9bf-1f88-47d3-a838-a4bcfc5eeecc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:55:13 np0005486808 nova_compute[259627]: 2025-10-14 08:55:13.367 2 DEBUG nova.storage.rbd_utils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] rbd image b063b9bf-1f88-47d3-a838-a4bcfc5eeecc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:55:13 np0005486808 nova_compute[259627]: 2025-10-14 08:55:13.372 2 DEBUG oslo_concurrency.processutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:55:13 np0005486808 nova_compute[259627]: 2025-10-14 08:55:13.410 2 DEBUG nova.policy [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '654cc6be69694fcd8058cc5a5eb78223', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3ac87003cad443c2b75e49ebdefe379c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 04:55:13 np0005486808 nova_compute[259627]: 2025-10-14 08:55:13.440 2 INFO nova.virt.libvirt.driver [-] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Instance destroyed successfully.
Oct 14 04:55:13 np0005486808 nova_compute[259627]: 2025-10-14 08:55:13.441 2 DEBUG nova.objects.instance [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Lazy-loading 'resources' on Instance uuid 2d3012e0-0c96-4f38-aaf5-91e69018d624 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 04:55:13 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 214 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 139 op/s
Oct 14 04:55:13 np0005486808 nova_compute[259627]: 2025-10-14 08:55:13.479 2 DEBUG oslo_concurrency.processutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:55:13 np0005486808 nova_compute[259627]: 2025-10-14 08:55:13.480 2 DEBUG oslo_concurrency.lockutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:55:13 np0005486808 nova_compute[259627]: 2025-10-14 08:55:13.481 2 DEBUG oslo_concurrency.lockutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:55:13 np0005486808 nova_compute[259627]: 2025-10-14 08:55:13.481 2 DEBUG oslo_concurrency.lockutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:55:13 np0005486808 nova_compute[259627]: 2025-10-14 08:55:13.507 2 DEBUG nova.storage.rbd_utils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] rbd image b063b9bf-1f88-47d3-a838-a4bcfc5eeecc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:55:13 np0005486808 nova_compute[259627]: 2025-10-14 08:55:13.510 2 DEBUG oslo_concurrency.processutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 b063b9bf-1f88-47d3-a838-a4bcfc5eeecc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:55:13 np0005486808 nova_compute[259627]: 2025-10-14 08:55:13.682 2 DEBUG nova.network.neutron [req-b9b75c77-4441-4d0e-b96a-df55a94793a2 req-0fe84787-6d68-420a-a3e2-b9b39db4ebbc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Updated VIF entry in instance network info cache for port f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 04:55:13 np0005486808 nova_compute[259627]: 2025-10-14 08:55:13.683 2 DEBUG nova.network.neutron [req-b9b75c77-4441-4d0e-b96a-df55a94793a2 req-0fe84787-6d68-420a-a3e2-b9b39db4ebbc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Updating instance_info_cache with network_info: [{"id": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "address": "fa:16:3e:76:a5:a0", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f1dcbf-2b", "ovs_interfaceid": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 04:55:13 np0005486808 nova_compute[259627]: 2025-10-14 08:55:13.710 2 DEBUG oslo_concurrency.lockutils [req-b9b75c77-4441-4d0e-b96a-df55a94793a2 req-0fe84787-6d68-420a-a3e2-b9b39db4ebbc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-30af67a2-4b44-481c-8ab4-296e93c1c517" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 04:55:13 np0005486808 nova_compute[259627]: 2025-10-14 08:55:13.723 2 DEBUG nova.compute.manager [req-e88d0ecc-857e-4097-a3a2-f5afd0cc89d4 req-6907915c-a70b-415d-b610-758daa9ea28d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Received event network-vif-plugged-fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 04:55:13 np0005486808 nova_compute[259627]: 2025-10-14 08:55:13.723 2 DEBUG oslo_concurrency.lockutils [req-e88d0ecc-857e-4097-a3a2-f5afd0cc89d4 req-6907915c-a70b-415d-b610-758daa9ea28d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f1df3849-6811-41a9-9c70-f10a6863b4f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:55:13 np0005486808 nova_compute[259627]: 2025-10-14 08:55:13.724 2 DEBUG oslo_concurrency.lockutils [req-e88d0ecc-857e-4097-a3a2-f5afd0cc89d4 req-6907915c-a70b-415d-b610-758daa9ea28d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f1df3849-6811-41a9-9c70-f10a6863b4f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:55:13 np0005486808 nova_compute[259627]: 2025-10-14 08:55:13.724 2 DEBUG oslo_concurrency.lockutils [req-e88d0ecc-857e-4097-a3a2-f5afd0cc89d4 req-6907915c-a70b-415d-b610-758daa9ea28d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f1df3849-6811-41a9-9c70-f10a6863b4f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:55:13 np0005486808 nova_compute[259627]: 2025-10-14 08:55:13.724 2 DEBUG nova.compute.manager [req-e88d0ecc-857e-4097-a3a2-f5afd0cc89d4 req-6907915c-a70b-415d-b610-758daa9ea28d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] No waiting events found dispatching network-vif-plugged-fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:55:13 np0005486808 nova_compute[259627]: 2025-10-14 08:55:13.724 2 WARNING nova.compute.manager [req-e88d0ecc-857e-4097-a3a2-f5afd0cc89d4 req-6907915c-a70b-415d-b610-758daa9ea28d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Received unexpected event network-vif-plugged-fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 04:55:13 np0005486808 nova_compute[259627]: 2025-10-14 08:55:13.800 2 DEBUG oslo_concurrency.processutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 b063b9bf-1f88-47d3-a838-a4bcfc5eeecc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.290s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:55:13 np0005486808 nova_compute[259627]: 2025-10-14 08:55:13.857 2 DEBUG nova.storage.rbd_utils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] resizing rbd image b063b9bf-1f88-47d3-a838-a4bcfc5eeecc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 04:55:13 np0005486808 nova_compute[259627]: 2025-10-14 08:55:13.975 2 INFO nova.virt.libvirt.driver [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Deleting instance files /var/lib/nova/instances/2d3012e0-0c96-4f38-aaf5-91e69018d624_del#033[00m
Oct 14 04:55:13 np0005486808 nova_compute[259627]: 2025-10-14 08:55:13.976 2 INFO nova.virt.libvirt.driver [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Deletion of /var/lib/nova/instances/2d3012e0-0c96-4f38-aaf5-91e69018d624_del complete#033[00m
Oct 14 04:55:13 np0005486808 nova_compute[259627]: 2025-10-14 08:55:13.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:55:13 np0005486808 nova_compute[259627]: 2025-10-14 08:55:13.983 2 DEBUG nova.objects.instance [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lazy-loading 'migration_context' on Instance uuid b063b9bf-1f88-47d3-a838-a4bcfc5eeecc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:55:14 np0005486808 nova_compute[259627]: 2025-10-14 08:55:14.066 2 DEBUG nova.storage.rbd_utils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] rbd image b063b9bf-1f88-47d3-a838-a4bcfc5eeecc_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:55:14 np0005486808 nova_compute[259627]: 2025-10-14 08:55:14.111 2 DEBUG nova.storage.rbd_utils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] rbd image b063b9bf-1f88-47d3-a838-a4bcfc5eeecc_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:55:14 np0005486808 nova_compute[259627]: 2025-10-14 08:55:14.117 2 DEBUG oslo_concurrency.lockutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:55:14 np0005486808 nova_compute[259627]: 2025-10-14 08:55:14.118 2 DEBUG oslo_concurrency.lockutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:55:14 np0005486808 nova_compute[259627]: 2025-10-14 08:55:14.119 2 DEBUG oslo_concurrency.processutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:55:14 np0005486808 nova_compute[259627]: 2025-10-14 08:55:14.151 2 DEBUG nova.network.neutron [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Successfully created port: fac86e41-30dc-481e-a423-10c6cdb3626f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 04:55:14 np0005486808 nova_compute[259627]: 2025-10-14 08:55:14.168 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:55:14 np0005486808 nova_compute[259627]: 2025-10-14 08:55:14.169 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:55:14 np0005486808 nova_compute[259627]: 2025-10-14 08:55:14.170 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:55:14 np0005486808 nova_compute[259627]: 2025-10-14 08:55:14.170 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 04:55:14 np0005486808 nova_compute[259627]: 2025-10-14 08:55:14.171 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:55:14 np0005486808 nova_compute[259627]: 2025-10-14 08:55:14.211 2 DEBUG oslo_concurrency.processutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:55:14 np0005486808 nova_compute[259627]: 2025-10-14 08:55:14.212 2 DEBUG oslo_concurrency.processutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:55:14 np0005486808 nova_compute[259627]: 2025-10-14 08:55:14.236 2 INFO nova.compute.manager [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Took 1.02 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 04:55:14 np0005486808 nova_compute[259627]: 2025-10-14 08:55:14.237 2 DEBUG oslo.service.loopingcall [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 04:55:14 np0005486808 nova_compute[259627]: 2025-10-14 08:55:14.239 2 DEBUG nova.compute.manager [-] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 04:55:14 np0005486808 nova_compute[259627]: 2025-10-14 08:55:14.239 2 DEBUG nova.network.neutron [-] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 04:55:14 np0005486808 nova_compute[259627]: 2025-10-14 08:55:14.251 2 DEBUG oslo_concurrency.processutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:55:14 np0005486808 nova_compute[259627]: 2025-10-14 08:55:14.252 2 DEBUG oslo_concurrency.lockutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.133s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:55:14 np0005486808 nova_compute[259627]: 2025-10-14 08:55:14.284 2 DEBUG nova.storage.rbd_utils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] rbd image b063b9bf-1f88-47d3-a838-a4bcfc5eeecc_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:55:14 np0005486808 nova_compute[259627]: 2025-10-14 08:55:14.290 2 DEBUG oslo_concurrency.processutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 b063b9bf-1f88-47d3-a838-a4bcfc5eeecc_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:55:14 np0005486808 nova_compute[259627]: 2025-10-14 08:55:14.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:14 np0005486808 nova_compute[259627]: 2025-10-14 08:55:14.479 2 DEBUG nova.network.neutron [-] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 04:55:14 np0005486808 nova_compute[259627]: 2025-10-14 08:55:14.497 2 DEBUG nova.network.neutron [-] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:55:14 np0005486808 nova_compute[259627]: 2025-10-14 08:55:14.517 2 INFO nova.compute.manager [-] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Took 0.28 seconds to deallocate network for instance.#033[00m
Oct 14 04:55:14 np0005486808 nova_compute[259627]: 2025-10-14 08:55:14.570 2 DEBUG oslo_concurrency.lockutils [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:55:14 np0005486808 nova_compute[259627]: 2025-10-14 08:55:14.571 2 DEBUG oslo_concurrency.lockutils [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:55:14 np0005486808 nova_compute[259627]: 2025-10-14 08:55:14.669 2 DEBUG oslo_concurrency.processutils [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:55:14 np0005486808 nova_compute[259627]: 2025-10-14 08:55:14.709 2 DEBUG nova.compute.manager [req-09950395-f0e5-4548-8f52-998e4643cedf req-012296bc-3ae8-4f79-bc55-47a88f29b829 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Received event network-changed-be863b8b-ed33-4cec-a274-d62c9bd4ac05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:55:14 np0005486808 nova_compute[259627]: 2025-10-14 08:55:14.711 2 DEBUG nova.compute.manager [req-09950395-f0e5-4548-8f52-998e4643cedf req-012296bc-3ae8-4f79-bc55-47a88f29b829 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Refreshing instance network info cache due to event network-changed-be863b8b-ed33-4cec-a274-d62c9bd4ac05. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 04:55:14 np0005486808 nova_compute[259627]: 2025-10-14 08:55:14.713 2 DEBUG oslo_concurrency.lockutils [req-09950395-f0e5-4548-8f52-998e4643cedf req-012296bc-3ae8-4f79-bc55-47a88f29b829 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-7b60a7cc-57e5-4833-9541-ed03e9e862ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:55:14 np0005486808 nova_compute[259627]: 2025-10-14 08:55:14.714 2 DEBUG oslo_concurrency.lockutils [req-09950395-f0e5-4548-8f52-998e4643cedf req-012296bc-3ae8-4f79-bc55-47a88f29b829 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-7b60a7cc-57e5-4833-9541-ed03e9e862ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:55:14 np0005486808 nova_compute[259627]: 2025-10-14 08:55:14.716 2 DEBUG nova.network.neutron [req-09950395-f0e5-4548-8f52-998e4643cedf req-012296bc-3ae8-4f79-bc55-47a88f29b829 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Refreshing network info cache for port be863b8b-ed33-4cec-a274-d62c9bd4ac05 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 04:55:14 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:55:14 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3352307652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:55:14 np0005486808 nova_compute[259627]: 2025-10-14 08:55:14.741 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:55:14 np0005486808 nova_compute[259627]: 2025-10-14 08:55:14.845 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 04:55:14 np0005486808 nova_compute[259627]: 2025-10-14 08:55:14.846 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 04:55:14 np0005486808 nova_compute[259627]: 2025-10-14 08:55:14.850 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 04:55:14 np0005486808 nova_compute[259627]: 2025-10-14 08:55:14.850 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 04:55:14 np0005486808 nova_compute[259627]: 2025-10-14 08:55:14.926 2 DEBUG nova.network.neutron [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Successfully updated port: fac86e41-30dc-481e-a423-10c6cdb3626f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 04:55:14 np0005486808 nova_compute[259627]: 2025-10-14 08:55:14.943 2 DEBUG oslo_concurrency.lockutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Acquiring lock "refresh_cache-b063b9bf-1f88-47d3-a838-a4bcfc5eeecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:55:14 np0005486808 nova_compute[259627]: 2025-10-14 08:55:14.944 2 DEBUG oslo_concurrency.lockutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Acquired lock "refresh_cache-b063b9bf-1f88-47d3-a838-a4bcfc5eeecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:55:14 np0005486808 nova_compute[259627]: 2025-10-14 08:55:14.944 2 DEBUG nova.network.neutron [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 04:55:15 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:55:15 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/757442718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:55:15 np0005486808 nova_compute[259627]: 2025-10-14 08:55:15.079 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:55:15 np0005486808 nova_compute[259627]: 2025-10-14 08:55:15.080 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4300MB free_disk=59.9010009765625GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 04:55:15 np0005486808 nova_compute[259627]: 2025-10-14 08:55:15.081 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:55:15 np0005486808 nova_compute[259627]: 2025-10-14 08:55:15.088 2 DEBUG nova.network.neutron [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 04:55:15 np0005486808 nova_compute[259627]: 2025-10-14 08:55:15.094 2 DEBUG oslo_concurrency.processutils [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:55:15 np0005486808 nova_compute[259627]: 2025-10-14 08:55:15.099 2 DEBUG nova.compute.provider_tree [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:55:15 np0005486808 nova_compute[259627]: 2025-10-14 08:55:15.125 2 DEBUG nova.scheduler.client.report [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:55:15 np0005486808 nova_compute[259627]: 2025-10-14 08:55:15.149 2 DEBUG oslo_concurrency.lockutils [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:55:15 np0005486808 nova_compute[259627]: 2025-10-14 08:55:15.156 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.075s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:55:15 np0005486808 nova_compute[259627]: 2025-10-14 08:55:15.183 2 INFO nova.scheduler.client.report [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Deleted allocations for instance 2d3012e0-0c96-4f38-aaf5-91e69018d624#033[00m
Oct 14 04:55:15 np0005486808 nova_compute[259627]: 2025-10-14 08:55:15.254 2 DEBUG oslo_concurrency.lockutils [None req-7b16a558-1a8f-42c1-8a40-fd7c9a575aed d198584e448f4f7588fd71c62016a5d9 b83283e63f5f412aa3f06e953847cac6 - - default default] Lock "2d3012e0-0c96-4f38-aaf5-91e69018d624" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:55:15 np0005486808 nova_compute[259627]: 2025-10-14 08:55:15.257 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 30af67a2-4b44-481c-8ab4-296e93c1c517 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 04:55:15 np0005486808 nova_compute[259627]: 2025-10-14 08:55:15.257 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 7b60a7cc-57e5-4833-9541-ed03e9e862ea actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 04:55:15 np0005486808 nova_compute[259627]: 2025-10-14 08:55:15.257 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance b063b9bf-1f88-47d3-a838-a4bcfc5eeecc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 04:55:15 np0005486808 nova_compute[259627]: 2025-10-14 08:55:15.257 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 04:55:15 np0005486808 nova_compute[259627]: 2025-10-14 08:55:15.258 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 04:55:15 np0005486808 nova_compute[259627]: 2025-10-14 08:55:15.264 2 DEBUG oslo_concurrency.processutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 b063b9bf-1f88-47d3-a838-a4bcfc5eeecc_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.974s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:55:15 np0005486808 nova_compute[259627]: 2025-10-14 08:55:15.338 2 DEBUG nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 04:55:15 np0005486808 nova_compute[259627]: 2025-10-14 08:55:15.339 2 DEBUG nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Ensure instance console log exists: /var/lib/nova/instances/b063b9bf-1f88-47d3-a838-a4bcfc5eeecc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 04:55:15 np0005486808 nova_compute[259627]: 2025-10-14 08:55:15.339 2 DEBUG oslo_concurrency.lockutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:55:15 np0005486808 nova_compute[259627]: 2025-10-14 08:55:15.339 2 DEBUG oslo_concurrency.lockutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:55:15 np0005486808 nova_compute[259627]: 2025-10-14 08:55:15.340 2 DEBUG oslo_concurrency.lockutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:55:15 np0005486808 nova_compute[259627]: 2025-10-14 08:55:15.355 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:55:15 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 237 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 310 op/s
Oct 14 04:55:15 np0005486808 ovn_controller[152662]: 2025-10-14T08:55:15Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c0:74:b6 10.100.0.13
Oct 14 04:55:15 np0005486808 ovn_controller[152662]: 2025-10-14T08:55:15Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c0:74:b6 10.100.0.13
Oct 14 04:55:15 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:55:15 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4115688616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:55:15 np0005486808 nova_compute[259627]: 2025-10-14 08:55:15.848 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:55:15 np0005486808 nova_compute[259627]: 2025-10-14 08:55:15.857 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:55:15 np0005486808 nova_compute[259627]: 2025-10-14 08:55:15.877 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:55:15 np0005486808 nova_compute[259627]: 2025-10-14 08:55:15.907 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 04:55:15 np0005486808 nova_compute[259627]: 2025-10-14 08:55:15.908 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:55:16 np0005486808 nova_compute[259627]: 2025-10-14 08:55:16.297 2 DEBUG nova.network.neutron [req-09950395-f0e5-4548-8f52-998e4643cedf req-012296bc-3ae8-4f79-bc55-47a88f29b829 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Updated VIF entry in instance network info cache for port be863b8b-ed33-4cec-a274-d62c9bd4ac05. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 04:55:16 np0005486808 nova_compute[259627]: 2025-10-14 08:55:16.298 2 DEBUG nova.network.neutron [req-09950395-f0e5-4548-8f52-998e4643cedf req-012296bc-3ae8-4f79-bc55-47a88f29b829 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Updating instance_info_cache with network_info: [{"id": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "address": "fa:16:3e:c0:74:b6", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe863b8b-ed", "ovs_interfaceid": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:55:16 np0005486808 nova_compute[259627]: 2025-10-14 08:55:16.319 2 DEBUG nova.network.neutron [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Updating instance_info_cache with network_info: [{"id": "fac86e41-30dc-481e-a423-10c6cdb3626f", "address": "fa:16:3e:e1:72:b8", "network": {"id": "6f970eb9-83e1-4efc-b15d-b5885b9eabe7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-458855963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ac87003cad443c2b75e49ebdefe379c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfac86e41-30", "ovs_interfaceid": "fac86e41-30dc-481e-a423-10c6cdb3626f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:55:16 np0005486808 nova_compute[259627]: 2025-10-14 08:55:16.322 2 DEBUG oslo_concurrency.lockutils [req-09950395-f0e5-4548-8f52-998e4643cedf req-012296bc-3ae8-4f79-bc55-47a88f29b829 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-7b60a7cc-57e5-4833-9541-ed03e9e862ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:55:16 np0005486808 nova_compute[259627]: 2025-10-14 08:55:16.323 2 DEBUG nova.compute.manager [req-09950395-f0e5-4548-8f52-998e4643cedf req-012296bc-3ae8-4f79-bc55-47a88f29b829 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Received event network-vif-deleted-fd4673d1-9420-4d31-a2ce-c5cb5bc79c42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:55:16 np0005486808 nova_compute[259627]: 2025-10-14 08:55:16.343 2 DEBUG oslo_concurrency.lockutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Releasing lock "refresh_cache-b063b9bf-1f88-47d3-a838-a4bcfc5eeecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:55:16 np0005486808 nova_compute[259627]: 2025-10-14 08:55:16.343 2 DEBUG nova.compute.manager [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Instance network_info: |[{"id": "fac86e41-30dc-481e-a423-10c6cdb3626f", "address": "fa:16:3e:e1:72:b8", "network": {"id": "6f970eb9-83e1-4efc-b15d-b5885b9eabe7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-458855963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ac87003cad443c2b75e49ebdefe379c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfac86e41-30", "ovs_interfaceid": "fac86e41-30dc-481e-a423-10c6cdb3626f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 04:55:16 np0005486808 nova_compute[259627]: 2025-10-14 08:55:16.348 2 DEBUG nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Start _get_guest_xml network_info=[{"id": "fac86e41-30dc-481e-a423-10c6cdb3626f", "address": "fa:16:3e:e1:72:b8", "network": {"id": "6f970eb9-83e1-4efc-b15d-b5885b9eabe7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-458855963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ac87003cad443c2b75e49ebdefe379c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfac86e41-30", "ovs_interfaceid": "fac86e41-30dc-481e-a423-10c6cdb3626f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [{'encryption_secret_uuid': None, 'device_name': '/dev/vdb', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 1, 'disk_bus': 'virtio'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 04:55:16 np0005486808 nova_compute[259627]: 2025-10-14 08:55:16.352 2 WARNING nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:55:16 np0005486808 nova_compute[259627]: 2025-10-14 08:55:16.359 2 DEBUG nova.virt.libvirt.host [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 04:55:16 np0005486808 nova_compute[259627]: 2025-10-14 08:55:16.360 2 DEBUG nova.virt.libvirt.host [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 04:55:16 np0005486808 nova_compute[259627]: 2025-10-14 08:55:16.364 2 DEBUG nova.virt.libvirt.host [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 04:55:16 np0005486808 nova_compute[259627]: 2025-10-14 08:55:16.365 2 DEBUG nova.virt.libvirt.host [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 04:55:16 np0005486808 nova_compute[259627]: 2025-10-14 08:55:16.365 2 DEBUG nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 04:55:16 np0005486808 nova_compute[259627]: 2025-10-14 08:55:16.366 2 DEBUG nova.virt.hardware [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:54:31Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={hw_rng:allowed='True'},flavorid='1507542026',id=28,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_1-352684228',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 04:55:16 np0005486808 nova_compute[259627]: 2025-10-14 08:55:16.366 2 DEBUG nova.virt.hardware [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 04:55:16 np0005486808 nova_compute[259627]: 2025-10-14 08:55:16.367 2 DEBUG nova.virt.hardware [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 04:55:16 np0005486808 nova_compute[259627]: 2025-10-14 08:55:16.367 2 DEBUG nova.virt.hardware [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 04:55:16 np0005486808 nova_compute[259627]: 2025-10-14 08:55:16.367 2 DEBUG nova.virt.hardware [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 04:55:16 np0005486808 nova_compute[259627]: 2025-10-14 08:55:16.368 2 DEBUG nova.virt.hardware [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 04:55:16 np0005486808 nova_compute[259627]: 2025-10-14 08:55:16.368 2 DEBUG nova.virt.hardware [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 04:55:16 np0005486808 nova_compute[259627]: 2025-10-14 08:55:16.368 2 DEBUG nova.virt.hardware [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 04:55:16 np0005486808 nova_compute[259627]: 2025-10-14 08:55:16.369 2 DEBUG nova.virt.hardware [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 04:55:16 np0005486808 nova_compute[259627]: 2025-10-14 08:55:16.369 2 DEBUG nova.virt.hardware [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 04:55:16 np0005486808 nova_compute[259627]: 2025-10-14 08:55:16.369 2 DEBUG nova.virt.hardware [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 04:55:16 np0005486808 nova_compute[259627]: 2025-10-14 08:55:16.374 2 DEBUG oslo_concurrency.processutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:55:16 np0005486808 nova_compute[259627]: 2025-10-14 08:55:16.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:16 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:55:16 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2424509819' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:55:16 np0005486808 nova_compute[259627]: 2025-10-14 08:55:16.847 2 DEBUG oslo_concurrency.processutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:55:16 np0005486808 nova_compute[259627]: 2025-10-14 08:55:16.848 2 DEBUG oslo_concurrency.processutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:55:16 np0005486808 nova_compute[259627]: 2025-10-14 08:55:16.907 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:55:16 np0005486808 nova_compute[259627]: 2025-10-14 08:55:16.953 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.104 2 DEBUG nova.compute.manager [req-f4b207c5-5204-4745-b7cb-26a04b28c0d3 req-16c59d5b-15a3-467e-b41f-594d785a7880 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Received event network-changed-fac86e41-30dc-481e-a423-10c6cdb3626f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.105 2 DEBUG nova.compute.manager [req-f4b207c5-5204-4745-b7cb-26a04b28c0d3 req-16c59d5b-15a3-467e-b41f-594d785a7880 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Refreshing instance network info cache due to event network-changed-fac86e41-30dc-481e-a423-10c6cdb3626f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.105 2 DEBUG oslo_concurrency.lockutils [req-f4b207c5-5204-4745-b7cb-26a04b28c0d3 req-16c59d5b-15a3-467e-b41f-594d785a7880 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-b063b9bf-1f88-47d3-a838-a4bcfc5eeecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.105 2 DEBUG oslo_concurrency.lockutils [req-f4b207c5-5204-4745-b7cb-26a04b28c0d3 req-16c59d5b-15a3-467e-b41f-594d785a7880 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-b063b9bf-1f88-47d3-a838-a4bcfc5eeecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.105 2 DEBUG nova.network.neutron [req-f4b207c5-5204-4745-b7cb-26a04b28c0d3 req-16c59d5b-15a3-467e-b41f-594d785a7880 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Refreshing network info cache for port fac86e41-30dc-481e-a423-10c6cdb3626f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 04:55:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:55:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2087898165' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.243 2 DEBUG oslo_concurrency.processutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.395s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.262 2 DEBUG nova.storage.rbd_utils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] rbd image b063b9bf-1f88-47d3-a838-a4bcfc5eeecc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.265 2 DEBUG oslo_concurrency.processutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.285 2 DEBUG nova.compute.manager [req-180f6aeb-1c53-46bf-b839-d81509ce0b68 req-cbcba39e-eda8-444c-b0d7-1391dcbc3205 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Received event network-changed-be863b8b-ed33-4cec-a274-d62c9bd4ac05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.285 2 DEBUG nova.compute.manager [req-180f6aeb-1c53-46bf-b839-d81509ce0b68 req-cbcba39e-eda8-444c-b0d7-1391dcbc3205 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Refreshing instance network info cache due to event network-changed-be863b8b-ed33-4cec-a274-d62c9bd4ac05. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.286 2 DEBUG oslo_concurrency.lockutils [req-180f6aeb-1c53-46bf-b839-d81509ce0b68 req-cbcba39e-eda8-444c-b0d7-1391dcbc3205 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-7b60a7cc-57e5-4833-9541-ed03e9e862ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.286 2 DEBUG oslo_concurrency.lockutils [req-180f6aeb-1c53-46bf-b839-d81509ce0b68 req-cbcba39e-eda8-444c-b0d7-1391dcbc3205 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-7b60a7cc-57e5-4833-9541-ed03e9e862ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.286 2 DEBUG nova.network.neutron [req-180f6aeb-1c53-46bf-b839-d81509ce0b68 req-cbcba39e-eda8-444c-b0d7-1391dcbc3205 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Refreshing network info cache for port be863b8b-ed33-4cec-a274-d62c9bd4ac05 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 04:55:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:55:17 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 237 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.6 MiB/s wr, 232 op/s
Oct 14 04:55:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:55:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/189364581' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.687 2 DEBUG oslo_concurrency.processutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.688 2 DEBUG nova.virt.libvirt.vif [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:55:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1648521031',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1648521031',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(28),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1648521031',id=7,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=28,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF7g6TTsjbjlUgcn3NOlTdjTTdxkB/tOz/SnrUnsyC2B2DiKRfdyGeRG/baba8J8u0sFhOnSigOVqumIulB1xwu2rOWWivaBSyCqPcCAc1UiuhefYZEgIiwxztAfhh5GqA==',key_name='tempest-keypair-2141010768',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3ac87003cad443c2b75e49ebdefe379c',ramdisk_id='',reservation_id='r-3ocp7rye',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-632252786',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-632252786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:55:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='654cc6be69694fcd8058cc5a5eb78223',uuid=b063b9bf-1f88-47d3-a838-a4bcfc5eeecc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fac86e41-30dc-481e-a423-10c6cdb3626f", "address": "fa:16:3e:e1:72:b8", "network": {"id": "6f970eb9-83e1-4efc-b15d-b5885b9eabe7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-458855963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ac87003cad443c2b75e49ebdefe379c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfac86e41-30", "ovs_interfaceid": "fac86e41-30dc-481e-a423-10c6cdb3626f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.689 2 DEBUG nova.network.os_vif_util [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Converting VIF {"id": "fac86e41-30dc-481e-a423-10c6cdb3626f", "address": "fa:16:3e:e1:72:b8", "network": {"id": "6f970eb9-83e1-4efc-b15d-b5885b9eabe7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-458855963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ac87003cad443c2b75e49ebdefe379c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfac86e41-30", "ovs_interfaceid": "fac86e41-30dc-481e-a423-10c6cdb3626f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.690 2 DEBUG nova.network.os_vif_util [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:72:b8,bridge_name='br-int',has_traffic_filtering=True,id=fac86e41-30dc-481e-a423-10c6cdb3626f,network=Network(6f970eb9-83e1-4efc-b15d-b5885b9eabe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfac86e41-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.692 2 DEBUG nova.objects.instance [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lazy-loading 'pci_devices' on Instance uuid b063b9bf-1f88-47d3-a838-a4bcfc5eeecc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.710 2 DEBUG nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] End _get_guest_xml xml=<domain type="kvm">
Oct 14 04:55:17 np0005486808 nova_compute[259627]:  <uuid>b063b9bf-1f88-47d3-a838-a4bcfc5eeecc</uuid>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:  <name>instance-00000007</name>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 04:55:17 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:      <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-1648521031</nova:name>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 08:55:16</nova:creationTime>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:      <nova:flavor name="tempest-flavor_with_ephemeral_1-352684228">
Oct 14 04:55:17 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:        <nova:ephemeral>1</nova:ephemeral>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:        <nova:user uuid="654cc6be69694fcd8058cc5a5eb78223">tempest-ServersWithSpecificFlavorTestJSON-632252786-project-member</nova:user>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:        <nova:project uuid="3ac87003cad443c2b75e49ebdefe379c">tempest-ServersWithSpecificFlavorTestJSON-632252786</nova:project>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:        <nova:port uuid="fac86e41-30dc-481e-a423-10c6cdb3626f">
Oct 14 04:55:17 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <system>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:      <entry name="serial">b063b9bf-1f88-47d3-a838-a4bcfc5eeecc</entry>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:      <entry name="uuid">b063b9bf-1f88-47d3-a838-a4bcfc5eeecc</entry>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    </system>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:  <os>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:  </os>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:  <features>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:  </features>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:  </clock>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:  <devices>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 04:55:17 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/b063b9bf-1f88-47d3-a838-a4bcfc5eeecc_disk">
Oct 14 04:55:17 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:55:17 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 04:55:17 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/b063b9bf-1f88-47d3-a838-a4bcfc5eeecc_disk.eph0">
Oct 14 04:55:17 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:55:17 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:      <target dev="vdb" bus="virtio"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 04:55:17 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/b063b9bf-1f88-47d3-a838-a4bcfc5eeecc_disk.config">
Oct 14 04:55:17 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:55:17 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 04:55:17 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:e1:72:b8"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:      <target dev="tapfac86e41-30"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    </interface>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 04:55:17 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/b063b9bf-1f88-47d3-a838-a4bcfc5eeecc/console.log" append="off"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    </serial>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <video>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    </video>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 04:55:17 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    </rng>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 04:55:17 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 04:55:17 np0005486808 nova_compute[259627]:  </devices>
Oct 14 04:55:17 np0005486808 nova_compute[259627]: </domain>
Oct 14 04:55:17 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.711 2 DEBUG nova.compute.manager [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Preparing to wait for external event network-vif-plugged-fac86e41-30dc-481e-a423-10c6cdb3626f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.711 2 DEBUG oslo_concurrency.lockutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Acquiring lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.711 2 DEBUG oslo_concurrency.lockutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.712 2 DEBUG oslo_concurrency.lockutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.713 2 DEBUG nova.virt.libvirt.vif [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:55:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1648521031',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1648521031',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(28),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1648521031',id=7,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=28,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF7g6TTsjbjlUgcn3NOlTdjTTdxkB/tOz/SnrUnsyC2B2DiKRfdyGeRG/baba8J8u0sFhOnSigOVqumIulB1xwu2rOWWivaBSyCqPcCAc1UiuhefYZEgIiwxztAfhh5GqA==',key_name='tempest-keypair-2141010768',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3ac87003cad443c2b75e49ebdefe379c',ramdisk_id='',reservation_id='r-3ocp7rye',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-632252786',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-632252786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:55:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='654cc6be69694fcd8058cc5a5eb78223',uuid=b063b9bf-1f88-47d3-a838-a4bcfc5eeecc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fac86e41-30dc-481e-a423-10c6cdb3626f", "address": "fa:16:3e:e1:72:b8", "network": {"id": "6f970eb9-83e1-4efc-b15d-b5885b9eabe7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-458855963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ac87003cad443c2b75e49ebdefe379c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfac86e41-30", "ovs_interfaceid": "fac86e41-30dc-481e-a423-10c6cdb3626f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.713 2 DEBUG nova.network.os_vif_util [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Converting VIF {"id": "fac86e41-30dc-481e-a423-10c6cdb3626f", "address": "fa:16:3e:e1:72:b8", "network": {"id": "6f970eb9-83e1-4efc-b15d-b5885b9eabe7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-458855963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ac87003cad443c2b75e49ebdefe379c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfac86e41-30", "ovs_interfaceid": "fac86e41-30dc-481e-a423-10c6cdb3626f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.714 2 DEBUG nova.network.os_vif_util [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:72:b8,bridge_name='br-int',has_traffic_filtering=True,id=fac86e41-30dc-481e-a423-10c6cdb3626f,network=Network(6f970eb9-83e1-4efc-b15d-b5885b9eabe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfac86e41-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.714 2 DEBUG os_vif [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:72:b8,bridge_name='br-int',has_traffic_filtering=True,id=fac86e41-30dc-481e-a423-10c6cdb3626f,network=Network(6f970eb9-83e1-4efc-b15d-b5885b9eabe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfac86e41-30') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.716 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.716 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.720 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfac86e41-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.721 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfac86e41-30, col_values=(('external_ids', {'iface-id': 'fac86e41-30dc-481e-a423-10c6cdb3626f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e1:72:b8', 'vm-uuid': 'b063b9bf-1f88-47d3-a838-a4bcfc5eeecc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:17 np0005486808 NetworkManager[44885]: <info>  [1760432117.7657] manager: (tapfac86e41-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.771 2 INFO os_vif [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:72:b8,bridge_name='br-int',has_traffic_filtering=True,id=fac86e41-30dc-481e-a423-10c6cdb3626f,network=Network(6f970eb9-83e1-4efc-b15d-b5885b9eabe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfac86e41-30')#033[00m
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.833 2 DEBUG nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.833 2 DEBUG nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.834 2 DEBUG nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.834 2 DEBUG nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] No VIF found with MAC fa:16:3e:e1:72:b8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.835 2 INFO nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Using config drive#033[00m
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.869 2 DEBUG nova.storage.rbd_utils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] rbd image b063b9bf-1f88-47d3-a838-a4bcfc5eeecc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 04:55:17 np0005486808 nova_compute[259627]: 2025-10-14 08:55:17.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 04:55:18 np0005486808 nova_compute[259627]: 2025-10-14 08:55:18.001 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct 14 04:55:18 np0005486808 nova_compute[259627]: 2025-10-14 08:55:18.416 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-30af67a2-4b44-481c-8ab4-296e93c1c517" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:55:18 np0005486808 nova_compute[259627]: 2025-10-14 08:55:18.417 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-30af67a2-4b44-481c-8ab4-296e93c1c517" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:55:18 np0005486808 nova_compute[259627]: 2025-10-14 08:55:18.417 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 14 04:55:18 np0005486808 nova_compute[259627]: 2025-10-14 08:55:18.417 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 30af67a2-4b44-481c-8ab4-296e93c1c517 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:55:18 np0005486808 nova_compute[259627]: 2025-10-14 08:55:18.811 2 INFO nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Creating config drive at /var/lib/nova/instances/b063b9bf-1f88-47d3-a838-a4bcfc5eeecc/disk.config#033[00m
Oct 14 04:55:18 np0005486808 nova_compute[259627]: 2025-10-14 08:55:18.816 2 DEBUG oslo_concurrency.processutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b063b9bf-1f88-47d3-a838-a4bcfc5eeecc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplmuav2_e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:55:18 np0005486808 nova_compute[259627]: 2025-10-14 08:55:18.957 2 DEBUG oslo_concurrency.processutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b063b9bf-1f88-47d3-a838-a4bcfc5eeecc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplmuav2_e" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:55:18 np0005486808 nova_compute[259627]: 2025-10-14 08:55:18.983 2 DEBUG nova.storage.rbd_utils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] rbd image b063b9bf-1f88-47d3-a838-a4bcfc5eeecc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:55:18 np0005486808 nova_compute[259627]: 2025-10-14 08:55:18.986 2 DEBUG oslo_concurrency.processutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b063b9bf-1f88-47d3-a838-a4bcfc5eeecc/disk.config b063b9bf-1f88-47d3-a838-a4bcfc5eeecc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:55:19 np0005486808 nova_compute[259627]: 2025-10-14 08:55:19.159 2 DEBUG oslo_concurrency.processutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b063b9bf-1f88-47d3-a838-a4bcfc5eeecc/disk.config b063b9bf-1f88-47d3-a838-a4bcfc5eeecc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:55:19 np0005486808 nova_compute[259627]: 2025-10-14 08:55:19.159 2 INFO nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Deleting local config drive /var/lib/nova/instances/b063b9bf-1f88-47d3-a838-a4bcfc5eeecc/disk.config because it was imported into RBD.#033[00m
Oct 14 04:55:19 np0005486808 kernel: tapfac86e41-30: entered promiscuous mode
Oct 14 04:55:19 np0005486808 NetworkManager[44885]: <info>  [1760432119.2203] manager: (tapfac86e41-30): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Oct 14 04:55:19 np0005486808 ovn_controller[152662]: 2025-10-14T08:55:19Z|00045|binding|INFO|Claiming lport fac86e41-30dc-481e-a423-10c6cdb3626f for this chassis.
Oct 14 04:55:19 np0005486808 ovn_controller[152662]: 2025-10-14T08:55:19Z|00046|binding|INFO|fac86e41-30dc-481e-a423-10c6cdb3626f: Claiming fa:16:3e:e1:72:b8 10.100.0.3
Oct 14 04:55:19 np0005486808 nova_compute[259627]: 2025-10-14 08:55:19.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.229 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:72:b8 10.100.0.3'], port_security=['fa:16:3e:e1:72:b8 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'b063b9bf-1f88-47d3-a838-a4bcfc5eeecc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6f970eb9-83e1-4efc-b15d-b5885b9eabe7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3ac87003cad443c2b75e49ebdefe379c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8a9f5d33-cc0d-455f-8821-c23805bbda66', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a071297a-bec1-4fd2-a338-694b6508cca6, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=fac86e41-30dc-481e-a423-10c6cdb3626f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.231 162547 INFO neutron.agent.ovn.metadata.agent [-] Port fac86e41-30dc-481e-a423-10c6cdb3626f in datapath 6f970eb9-83e1-4efc-b15d-b5885b9eabe7 bound to our chassis#033[00m
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.232 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6f970eb9-83e1-4efc-b15d-b5885b9eabe7#033[00m
Oct 14 04:55:19 np0005486808 ovn_controller[152662]: 2025-10-14T08:55:19Z|00047|binding|INFO|Setting lport fac86e41-30dc-481e-a423-10c6cdb3626f ovn-installed in OVS
Oct 14 04:55:19 np0005486808 ovn_controller[152662]: 2025-10-14T08:55:19Z|00048|binding|INFO|Setting lport fac86e41-30dc-481e-a423-10c6cdb3626f up in Southbound
Oct 14 04:55:19 np0005486808 nova_compute[259627]: 2025-10-14 08:55:19.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.244 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4b39d51f-5657-4521-ab09-670e004ee4c0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.249 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6f970eb9-81 in ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.251 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6f970eb9-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.251 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dda2074c-efa3-4f25-85cf-e1de8a801b87]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.252 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a16f4d69-d340-464a-a8e9-b7f46119db65]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:19 np0005486808 systemd-machined[214636]: New machine qemu-7-instance-00000007.
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.264 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[f283eab8-5db7-4bde-8d0a-a3eba4467a75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:19 np0005486808 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.281 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0cb5c8b5-e106-403d-a953-e7f5eea9fa32]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:19 np0005486808 systemd-udevd[278377]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 04:55:19 np0005486808 NetworkManager[44885]: <info>  [1760432119.2962] device (tapfac86e41-30): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 04:55:19 np0005486808 NetworkManager[44885]: <info>  [1760432119.2992] device (tapfac86e41-30): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.314 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[7f0fc214-242f-4a38-a194-20838608b738]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.320 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8938c407-23e9-4e01-b393-83d02b0cbdbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:19 np0005486808 NetworkManager[44885]: <info>  [1760432119.3216] manager: (tap6f970eb9-80): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Oct 14 04:55:19 np0005486808 systemd-udevd[278381]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.356 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5e02f26b-8668-435e-8993-85f85844f150]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.362 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3f9ccfb6-6cf1-43b4-936d-d25af52fd917]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:19 np0005486808 nova_compute[259627]: 2025-10-14 08:55:19.382 2 DEBUG oslo_concurrency.lockutils [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Acquiring lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:55:19 np0005486808 nova_compute[259627]: 2025-10-14 08:55:19.382 2 DEBUG oslo_concurrency.lockutils [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:55:19 np0005486808 nova_compute[259627]: 2025-10-14 08:55:19.383 2 DEBUG oslo_concurrency.lockutils [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Acquiring lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:55:19 np0005486808 nova_compute[259627]: 2025-10-14 08:55:19.383 2 DEBUG oslo_concurrency.lockutils [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:55:19 np0005486808 nova_compute[259627]: 2025-10-14 08:55:19.383 2 DEBUG oslo_concurrency.lockutils [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:55:19 np0005486808 nova_compute[259627]: 2025-10-14 08:55:19.384 2 INFO nova.compute.manager [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Terminating instance#033[00m
Oct 14 04:55:19 np0005486808 nova_compute[259627]: 2025-10-14 08:55:19.385 2 DEBUG nova.compute.manager [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 04:55:19 np0005486808 NetworkManager[44885]: <info>  [1760432119.3856] device (tap6f970eb9-80): carrier: link connected
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.392 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3da2b2f9-5a4a-4370-ad92-aa1c1fd261c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.409 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5889c940-6064-49bf-817e-9a8c61df3587]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6f970eb9-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e6:30:aa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589814, 'reachable_time': 42086, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278407, 'error': None, 'target': 'ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.423 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d872b3cc-d304-4a44-859c-ea434fe10a99]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee6:30aa'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 589814, 'tstamp': 589814}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278408, 'error': None, 'target': 'ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:19 np0005486808 kernel: tapbe863b8b-ed (unregistering): left promiscuous mode
Oct 14 04:55:19 np0005486808 NetworkManager[44885]: <info>  [1760432119.4311] device (tapbe863b8b-ed): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 04:55:19 np0005486808 ovn_controller[152662]: 2025-10-14T08:55:19Z|00049|binding|INFO|Releasing lport be863b8b-ed33-4cec-a274-d62c9bd4ac05 from this chassis (sb_readonly=0)
Oct 14 04:55:19 np0005486808 ovn_controller[152662]: 2025-10-14T08:55:19Z|00050|binding|INFO|Setting lport be863b8b-ed33-4cec-a274-d62c9bd4ac05 down in Southbound
Oct 14 04:55:19 np0005486808 ovn_controller[152662]: 2025-10-14T08:55:19Z|00051|binding|INFO|Removing iface tapbe863b8b-ed ovn-installed in OVS
Oct 14 04:55:19 np0005486808 nova_compute[259627]: 2025-10-14 08:55:19.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.441 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4729077e-56f3-4d69-b4dc-76700b573ec9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6f970eb9-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e6:30:aa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589814, 'reachable_time': 42086, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 278409, 'error': None, 'target': 'ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.444 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c0:74:b6 10.100.0.13'], port_security=['fa:16:3e:c0:74:b6 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '7b60a7cc-57e5-4833-9541-ed03e9e862ea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-92d50a40-95c8-4c0a-a4ab-d459f68516aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f5b8fd07d6d54bda9a0257bf72d4b37f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e013b807-3b2c-404b-b699-697e2a823013', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=20317fae-45dd-4464-971e-a345fe497251, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=be863b8b-ed33-4cec-a274-d62c9bd4ac05) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:55:19 np0005486808 nova_compute[259627]: 2025-10-14 08:55:19.449 2 DEBUG nova.compute.manager [req-27a0cc50-ba33-491d-9b21-8e1bb96ca0f1 req-c250d385-c2f9-434c-9f20-a8b56fb9902d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Received event network-vif-plugged-fac86e41-30dc-481e-a423-10c6cdb3626f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:55:19 np0005486808 nova_compute[259627]: 2025-10-14 08:55:19.449 2 DEBUG oslo_concurrency.lockutils [req-27a0cc50-ba33-491d-9b21-8e1bb96ca0f1 req-c250d385-c2f9-434c-9f20-a8b56fb9902d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:55:19 np0005486808 nova_compute[259627]: 2025-10-14 08:55:19.449 2 DEBUG oslo_concurrency.lockutils [req-27a0cc50-ba33-491d-9b21-8e1bb96ca0f1 req-c250d385-c2f9-434c-9f20-a8b56fb9902d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:55:19 np0005486808 nova_compute[259627]: 2025-10-14 08:55:19.450 2 DEBUG oslo_concurrency.lockutils [req-27a0cc50-ba33-491d-9b21-8e1bb96ca0f1 req-c250d385-c2f9-434c-9f20-a8b56fb9902d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:55:19 np0005486808 nova_compute[259627]: 2025-10-14 08:55:19.450 2 DEBUG nova.compute.manager [req-27a0cc50-ba33-491d-9b21-8e1bb96ca0f1 req-c250d385-c2f9-434c-9f20-a8b56fb9902d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Processing event network-vif-plugged-fac86e41-30dc-481e-a423-10c6cdb3626f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 04:55:19 np0005486808 nova_compute[259627]: 2025-10-14 08:55:19.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:19 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 237 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.6 MiB/s wr, 232 op/s
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.492 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[772fa4df-d89c-49de-90f3-ddd18dc1662b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:19 np0005486808 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Oct 14 04:55:19 np0005486808 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 13.355s CPU time.
Oct 14 04:55:19 np0005486808 systemd-machined[214636]: Machine qemu-5-instance-00000005 terminated.
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.545 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[72edbf3a-3903-4cfd-8a49-52aba5bb7a1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.548 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6f970eb9-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.548 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.548 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6f970eb9-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:55:19 np0005486808 NetworkManager[44885]: <info>  [1760432119.5507] manager: (tap6f970eb9-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Oct 14 04:55:19 np0005486808 nova_compute[259627]: 2025-10-14 08:55:19.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:19 np0005486808 kernel: tap6f970eb9-80: entered promiscuous mode
Oct 14 04:55:19 np0005486808 nova_compute[259627]: 2025-10-14 08:55:19.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.555 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6f970eb9-80, col_values=(('external_ids', {'iface-id': '6a62b55c-d140-4dc2-a487-c292e81e63e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:55:19 np0005486808 ovn_controller[152662]: 2025-10-14T08:55:19Z|00052|binding|INFO|Releasing lport 6a62b55c-d140-4dc2-a487-c292e81e63e0 from this chassis (sb_readonly=0)
Oct 14 04:55:19 np0005486808 nova_compute[259627]: 2025-10-14 08:55:19.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:19 np0005486808 nova_compute[259627]: 2025-10-14 08:55:19.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:19 np0005486808 nova_compute[259627]: 2025-10-14 08:55:19.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.576 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6f970eb9-83e1-4efc-b15d-b5885b9eabe7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6f970eb9-83e1-4efc-b15d-b5885b9eabe7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.576 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3005aee6-3ebb-4244-ba23-7ed26dee72bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.577 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-6f970eb9-83e1-4efc-b15d-b5885b9eabe7
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/6f970eb9-83e1-4efc-b15d-b5885b9eabe7.pid.haproxy
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 6f970eb9-83e1-4efc-b15d-b5885b9eabe7
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 04:55:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:19.578 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7', 'env', 'PROCESS_TAG=haproxy-6f970eb9-83e1-4efc-b15d-b5885b9eabe7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6f970eb9-83e1-4efc-b15d-b5885b9eabe7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 04:55:19 np0005486808 NetworkManager[44885]: <info>  [1760432119.6045] manager: (tapbe863b8b-ed): new Tun device (/org/freedesktop/NetworkManager/Devices/41)
Oct 14 04:55:19 np0005486808 nova_compute[259627]: 2025-10-14 08:55:19.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:19 np0005486808 nova_compute[259627]: 2025-10-14 08:55:19.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:19 np0005486808 nova_compute[259627]: 2025-10-14 08:55:19.621 2 INFO nova.virt.libvirt.driver [-] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Instance destroyed successfully.#033[00m
Oct 14 04:55:19 np0005486808 nova_compute[259627]: 2025-10-14 08:55:19.621 2 DEBUG nova.objects.instance [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lazy-loading 'resources' on Instance uuid 7b60a7cc-57e5-4833-9541-ed03e9e862ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:55:19 np0005486808 nova_compute[259627]: 2025-10-14 08:55:19.640 2 DEBUG nova.virt.libvirt.vif [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:54:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1096031615',display_name='tempest-FloatingIPsAssociationTestJSON-server-1096031615',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1096031615',id=5,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:55:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f5b8fd07d6d54bda9a0257bf72d4b37f',ramdisk_id='',reservation_id='r-opxie8ra',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1304888620',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1304888620-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:55:03Z,user_data=None,user_id='831826dabb48463c92f24c277df4039e',uuid=7b60a7cc-57e5-4833-9541-ed03e9e862ea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "address": "fa:16:3e:c0:74:b6", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe863b8b-ed", "ovs_interfaceid": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 04:55:19 np0005486808 nova_compute[259627]: 2025-10-14 08:55:19.640 2 DEBUG nova.network.os_vif_util [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Converting VIF {"id": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "address": "fa:16:3e:c0:74:b6", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe863b8b-ed", "ovs_interfaceid": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:55:19 np0005486808 nova_compute[259627]: 2025-10-14 08:55:19.641 2 DEBUG nova.network.os_vif_util [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c0:74:b6,bridge_name='br-int',has_traffic_filtering=True,id=be863b8b-ed33-4cec-a274-d62c9bd4ac05,network=Network(92d50a40-95c8-4c0a-a4ab-d459f68516aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe863b8b-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:55:19 np0005486808 nova_compute[259627]: 2025-10-14 08:55:19.641 2 DEBUG os_vif [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c0:74:b6,bridge_name='br-int',has_traffic_filtering=True,id=be863b8b-ed33-4cec-a274-d62c9bd4ac05,network=Network(92d50a40-95c8-4c0a-a4ab-d459f68516aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe863b8b-ed') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 04:55:19 np0005486808 nova_compute[259627]: 2025-10-14 08:55:19.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:19 np0005486808 nova_compute[259627]: 2025-10-14 08:55:19.643 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe863b8b-ed, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:55:19 np0005486808 nova_compute[259627]: 2025-10-14 08:55:19.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:19 np0005486808 nova_compute[259627]: 2025-10-14 08:55:19.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 04:55:19 np0005486808 nova_compute[259627]: 2025-10-14 08:55:19.649 2 INFO os_vif [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c0:74:b6,bridge_name='br-int',has_traffic_filtering=True,id=be863b8b-ed33-4cec-a274-d62c9bd4ac05,network=Network(92d50a40-95c8-4c0a-a4ab-d459f68516aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe863b8b-ed')#033[00m
Oct 14 04:55:19 np0005486808 nova_compute[259627]: 2025-10-14 08:55:19.765 2 DEBUG nova.network.neutron [req-180f6aeb-1c53-46bf-b839-d81509ce0b68 req-cbcba39e-eda8-444c-b0d7-1391dcbc3205 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Updated VIF entry in instance network info cache for port be863b8b-ed33-4cec-a274-d62c9bd4ac05. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 04:55:19 np0005486808 nova_compute[259627]: 2025-10-14 08:55:19.766 2 DEBUG nova.network.neutron [req-180f6aeb-1c53-46bf-b839-d81509ce0b68 req-cbcba39e-eda8-444c-b0d7-1391dcbc3205 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Updating instance_info_cache with network_info: [{"id": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "address": "fa:16:3e:c0:74:b6", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe863b8b-ed", "ovs_interfaceid": "be863b8b-ed33-4cec-a274-d62c9bd4ac05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:55:19 np0005486808 nova_compute[259627]: 2025-10-14 08:55:19.771 2 DEBUG nova.network.neutron [req-f4b207c5-5204-4745-b7cb-26a04b28c0d3 req-16c59d5b-15a3-467e-b41f-594d785a7880 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Updated VIF entry in instance network info cache for port fac86e41-30dc-481e-a423-10c6cdb3626f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 04:55:19 np0005486808 nova_compute[259627]: 2025-10-14 08:55:19.772 2 DEBUG nova.network.neutron [req-f4b207c5-5204-4745-b7cb-26a04b28c0d3 req-16c59d5b-15a3-467e-b41f-594d785a7880 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Updating instance_info_cache with network_info: [{"id": "fac86e41-30dc-481e-a423-10c6cdb3626f", "address": "fa:16:3e:e1:72:b8", "network": {"id": "6f970eb9-83e1-4efc-b15d-b5885b9eabe7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-458855963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ac87003cad443c2b75e49ebdefe379c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfac86e41-30", "ovs_interfaceid": "fac86e41-30dc-481e-a423-10c6cdb3626f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:55:19 np0005486808 nova_compute[259627]: 2025-10-14 08:55:19.792 2 DEBUG oslo_concurrency.lockutils [req-f4b207c5-5204-4745-b7cb-26a04b28c0d3 req-16c59d5b-15a3-467e-b41f-594d785a7880 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-b063b9bf-1f88-47d3-a838-a4bcfc5eeecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:55:19 np0005486808 nova_compute[259627]: 2025-10-14 08:55:19.792 2 DEBUG oslo_concurrency.lockutils [req-180f6aeb-1c53-46bf-b839-d81509ce0b68 req-cbcba39e-eda8-444c-b0d7-1391dcbc3205 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-7b60a7cc-57e5-4833-9541-ed03e9e862ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:55:20 np0005486808 nova_compute[259627]: 2025-10-14 08:55:20.025 2 INFO nova.virt.libvirt.driver [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Deleting instance files /var/lib/nova/instances/7b60a7cc-57e5-4833-9541-ed03e9e862ea_del#033[00m
Oct 14 04:55:20 np0005486808 nova_compute[259627]: 2025-10-14 08:55:20.026 2 INFO nova.virt.libvirt.driver [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Deletion of /var/lib/nova/instances/7b60a7cc-57e5-4833-9541-ed03e9e862ea_del complete#033[00m
Oct 14 04:55:20 np0005486808 podman[278534]: 2025-10-14 08:55:20.033943317 +0000 UTC m=+0.064150103 container create 234ba5c52d7ddd5f2072f6308e0120bd7dd78880f61da4f3d484e268eed1b48a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, io.buildah.version=1.41.3)
Oct 14 04:55:20 np0005486808 systemd[1]: Started libpod-conmon-234ba5c52d7ddd5f2072f6308e0120bd7dd78880f61da4f3d484e268eed1b48a.scope.
Oct 14 04:55:20 np0005486808 nova_compute[259627]: 2025-10-14 08:55:20.077 2 INFO nova.compute.manager [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Took 0.69 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 04:55:20 np0005486808 nova_compute[259627]: 2025-10-14 08:55:20.078 2 DEBUG oslo.service.loopingcall [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 04:55:20 np0005486808 nova_compute[259627]: 2025-10-14 08:55:20.078 2 DEBUG nova.compute.manager [-] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 04:55:20 np0005486808 nova_compute[259627]: 2025-10-14 08:55:20.078 2 DEBUG nova.network.neutron [-] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 04:55:20 np0005486808 podman[278534]: 2025-10-14 08:55:19.993443048 +0000 UTC m=+0.023649864 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 04:55:20 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:55:20 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/409964ad013ad7282d1b3ada20703b51786ffd8f880c945dcb857889c1c15880/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 04:55:20 np0005486808 podman[278534]: 2025-10-14 08:55:20.119883897 +0000 UTC m=+0.150090703 container init 234ba5c52d7ddd5f2072f6308e0120bd7dd78880f61da4f3d484e268eed1b48a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 14 04:55:20 np0005486808 podman[278534]: 2025-10-14 08:55:20.124849089 +0000 UTC m=+0.155055875 container start 234ba5c52d7ddd5f2072f6308e0120bd7dd78880f61da4f3d484e268eed1b48a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 14 04:55:20 np0005486808 neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7[278549]: [NOTICE]   (278553) : New worker (278555) forked
Oct 14 04:55:20 np0005486808 neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7[278549]: [NOTICE]   (278553) : Loading success.
Oct 14 04:55:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:20.171 162547 INFO neutron.agent.ovn.metadata.agent [-] Port be863b8b-ed33-4cec-a274-d62c9bd4ac05 in datapath 92d50a40-95c8-4c0a-a4ab-d459f68516aa unbound from our chassis#033[00m
Oct 14 04:55:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:20.173 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 92d50a40-95c8-4c0a-a4ab-d459f68516aa#033[00m
Oct 14 04:55:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:20.186 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5200abe5-75f0-48b7-9059-930ec87e21ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:20.216 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[0fba0767-c5ee-4fa8-aa07-d453a5971fff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:20.219 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[9c5b277a-610c-4a92-b2ef-2d66c051f93e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:20.252 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ec83f1b7-ef74-4289-bcaa-9d2e3f940d5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:20 np0005486808 nova_compute[259627]: 2025-10-14 08:55:20.268 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432120.2678437, b063b9bf-1f88-47d3-a838-a4bcfc5eeecc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:55:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:20.267 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ec32ffb7-7a89-479f-93f0-d8645191cf55]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap92d50a40-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:6c:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586713, 'reachable_time': 30695, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278569, 'error': None, 'target': 'ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:20 np0005486808 nova_compute[259627]: 2025-10-14 08:55:20.268 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] VM Started (Lifecycle Event)#033[00m
Oct 14 04:55:20 np0005486808 nova_compute[259627]: 2025-10-14 08:55:20.270 2 DEBUG nova.compute.manager [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 04:55:20 np0005486808 nova_compute[259627]: 2025-10-14 08:55:20.273 2 DEBUG nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 04:55:20 np0005486808 nova_compute[259627]: 2025-10-14 08:55:20.276 2 INFO nova.virt.libvirt.driver [-] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Instance spawned successfully.#033[00m
Oct 14 04:55:20 np0005486808 nova_compute[259627]: 2025-10-14 08:55:20.276 2 DEBUG nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 04:55:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:20.280 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4922b516-6f14-4f0e-b4e3-81ef5ed5a1f6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap92d50a40-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586730, 'tstamp': 586730}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278570, 'error': None, 'target': 'ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap92d50a40-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586733, 'tstamp': 586733}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278570, 'error': None, 'target': 'ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:20.281 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92d50a40-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:55:20 np0005486808 nova_compute[259627]: 2025-10-14 08:55:20.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:20 np0005486808 nova_compute[259627]: 2025-10-14 08:55:20.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:20.284 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap92d50a40-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:55:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:20.285 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:55:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:20.285 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap92d50a40-90, col_values=(('external_ids', {'iface-id': '2c98ab3c-01b9-41bc-bf2f-b9baea9e1b52'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:55:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:20.285 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:55:20 np0005486808 nova_compute[259627]: 2025-10-14 08:55:20.292 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:55:20 np0005486808 nova_compute[259627]: 2025-10-14 08:55:20.298 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:55:20 np0005486808 nova_compute[259627]: 2025-10-14 08:55:20.301 2 DEBUG nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:55:20 np0005486808 nova_compute[259627]: 2025-10-14 08:55:20.301 2 DEBUG nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:55:20 np0005486808 nova_compute[259627]: 2025-10-14 08:55:20.301 2 DEBUG nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:55:20 np0005486808 nova_compute[259627]: 2025-10-14 08:55:20.301 2 DEBUG nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:55:20 np0005486808 nova_compute[259627]: 2025-10-14 08:55:20.302 2 DEBUG nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:55:20 np0005486808 nova_compute[259627]: 2025-10-14 08:55:20.302 2 DEBUG nova.virt.libvirt.driver [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:55:20 np0005486808 nova_compute[259627]: 2025-10-14 08:55:20.340 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:55:20 np0005486808 nova_compute[259627]: 2025-10-14 08:55:20.340 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432120.2698064, b063b9bf-1f88-47d3-a838-a4bcfc5eeecc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:55:20 np0005486808 nova_compute[259627]: 2025-10-14 08:55:20.340 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] VM Paused (Lifecycle Event)#033[00m
Oct 14 04:55:20 np0005486808 nova_compute[259627]: 2025-10-14 08:55:20.377 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:55:20 np0005486808 nova_compute[259627]: 2025-10-14 08:55:20.380 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432120.2727513, b063b9bf-1f88-47d3-a838-a4bcfc5eeecc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:55:20 np0005486808 nova_compute[259627]: 2025-10-14 08:55:20.380 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] VM Resumed (Lifecycle Event)#033[00m
Oct 14 04:55:20 np0005486808 nova_compute[259627]: 2025-10-14 08:55:20.392 2 INFO nova.compute.manager [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Took 7.15 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 04:55:20 np0005486808 nova_compute[259627]: 2025-10-14 08:55:20.393 2 DEBUG nova.compute.manager [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:55:20 np0005486808 nova_compute[259627]: 2025-10-14 08:55:20.409 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:55:20 np0005486808 nova_compute[259627]: 2025-10-14 08:55:20.411 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:55:20 np0005486808 nova_compute[259627]: 2025-10-14 08:55:20.448 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:55:20 np0005486808 nova_compute[259627]: 2025-10-14 08:55:20.469 2 INFO nova.compute.manager [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Took 8.16 seconds to build instance.#033[00m
Oct 14 04:55:20 np0005486808 nova_compute[259627]: 2025-10-14 08:55:20.491 2 DEBUG oslo_concurrency.lockutils [None req-f1c89760-5a95-464b-9e3e-e085e4e0b506 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.299s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:55:20 np0005486808 nova_compute[259627]: 2025-10-14 08:55:20.544 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Updating instance_info_cache with network_info: [{"id": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "address": "fa:16:3e:76:a5:a0", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f1dcbf-2b", "ovs_interfaceid": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:55:20 np0005486808 nova_compute[259627]: 2025-10-14 08:55:20.556 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-30af67a2-4b44-481c-8ab4-296e93c1c517" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:55:20 np0005486808 nova_compute[259627]: 2025-10-14 08:55:20.556 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 14 04:55:20 np0005486808 nova_compute[259627]: 2025-10-14 08:55:20.557 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:55:20 np0005486808 nova_compute[259627]: 2025-10-14 08:55:20.988 2 DEBUG nova.network.neutron [-] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:55:21 np0005486808 nova_compute[259627]: 2025-10-14 08:55:21.008 2 INFO nova.compute.manager [-] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Took 0.93 seconds to deallocate network for instance.#033[00m
Oct 14 04:55:21 np0005486808 nova_compute[259627]: 2025-10-14 08:55:21.061 2 DEBUG oslo_concurrency.lockutils [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:55:21 np0005486808 nova_compute[259627]: 2025-10-14 08:55:21.062 2 DEBUG oslo_concurrency.lockutils [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:55:21 np0005486808 nova_compute[259627]: 2025-10-14 08:55:21.184 2 DEBUG oslo_concurrency.processutils [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:55:21 np0005486808 nova_compute[259627]: 2025-10-14 08:55:21.215 2 DEBUG nova.compute.manager [req-9aa9ea90-7807-4365-a2bd-b6485c279ecc req-cd516856-14db-44d6-922b-fdbcb59a14c7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Received event network-vif-deleted-be863b8b-ed33-4cec-a274-d62c9bd4ac05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:55:21 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 169 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 293 op/s
Oct 14 04:55:21 np0005486808 nova_compute[259627]: 2025-10-14 08:55:21.557 2 DEBUG nova.compute.manager [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Received event network-vif-plugged-fac86e41-30dc-481e-a423-10c6cdb3626f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:55:21 np0005486808 nova_compute[259627]: 2025-10-14 08:55:21.557 2 DEBUG oslo_concurrency.lockutils [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:55:21 np0005486808 nova_compute[259627]: 2025-10-14 08:55:21.558 2 DEBUG oslo_concurrency.lockutils [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:55:21 np0005486808 nova_compute[259627]: 2025-10-14 08:55:21.558 2 DEBUG oslo_concurrency.lockutils [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:55:21 np0005486808 nova_compute[259627]: 2025-10-14 08:55:21.558 2 DEBUG nova.compute.manager [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] No waiting events found dispatching network-vif-plugged-fac86e41-30dc-481e-a423-10c6cdb3626f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:55:21 np0005486808 nova_compute[259627]: 2025-10-14 08:55:21.558 2 WARNING nova.compute.manager [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Received unexpected event network-vif-plugged-fac86e41-30dc-481e-a423-10c6cdb3626f for instance with vm_state active and task_state None.#033[00m
Oct 14 04:55:21 np0005486808 nova_compute[259627]: 2025-10-14 08:55:21.558 2 DEBUG nova.compute.manager [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Received event network-vif-unplugged-be863b8b-ed33-4cec-a274-d62c9bd4ac05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:55:21 np0005486808 nova_compute[259627]: 2025-10-14 08:55:21.559 2 DEBUG oslo_concurrency.lockutils [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:55:21 np0005486808 nova_compute[259627]: 2025-10-14 08:55:21.559 2 DEBUG oslo_concurrency.lockutils [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:55:21 np0005486808 nova_compute[259627]: 2025-10-14 08:55:21.559 2 DEBUG oslo_concurrency.lockutils [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:55:21 np0005486808 nova_compute[259627]: 2025-10-14 08:55:21.559 2 DEBUG nova.compute.manager [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] No waiting events found dispatching network-vif-unplugged-be863b8b-ed33-4cec-a274-d62c9bd4ac05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:55:21 np0005486808 nova_compute[259627]: 2025-10-14 08:55:21.559 2 WARNING nova.compute.manager [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Received unexpected event network-vif-unplugged-be863b8b-ed33-4cec-a274-d62c9bd4ac05 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 04:55:21 np0005486808 nova_compute[259627]: 2025-10-14 08:55:21.559 2 DEBUG nova.compute.manager [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Received event network-vif-plugged-be863b8b-ed33-4cec-a274-d62c9bd4ac05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:55:21 np0005486808 nova_compute[259627]: 2025-10-14 08:55:21.559 2 DEBUG oslo_concurrency.lockutils [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:55:21 np0005486808 nova_compute[259627]: 2025-10-14 08:55:21.560 2 DEBUG oslo_concurrency.lockutils [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:55:21 np0005486808 nova_compute[259627]: 2025-10-14 08:55:21.560 2 DEBUG oslo_concurrency.lockutils [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:55:21 np0005486808 nova_compute[259627]: 2025-10-14 08:55:21.560 2 DEBUG nova.compute.manager [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] No waiting events found dispatching network-vif-plugged-be863b8b-ed33-4cec-a274-d62c9bd4ac05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:55:21 np0005486808 nova_compute[259627]: 2025-10-14 08:55:21.560 2 WARNING nova.compute.manager [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Received unexpected event network-vif-plugged-be863b8b-ed33-4cec-a274-d62c9bd4ac05 for instance with vm_state deleted and task_state None.
Oct 14 04:55:21 np0005486808 nova_compute[259627]: 2025-10-14 08:55:21.560 2 DEBUG nova.compute.manager [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Received event network-changed-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 04:55:21 np0005486808 nova_compute[259627]: 2025-10-14 08:55:21.560 2 DEBUG nova.compute.manager [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Refreshing instance network info cache due to event network-changed-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 04:55:21 np0005486808 nova_compute[259627]: 2025-10-14 08:55:21.561 2 DEBUG oslo_concurrency.lockutils [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-30af67a2-4b44-481c-8ab4-296e93c1c517" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 04:55:21 np0005486808 nova_compute[259627]: 2025-10-14 08:55:21.561 2 DEBUG oslo_concurrency.lockutils [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-30af67a2-4b44-481c-8ab4-296e93c1c517" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 04:55:21 np0005486808 nova_compute[259627]: 2025-10-14 08:55:21.561 2 DEBUG nova.network.neutron [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Refreshing network info cache for port f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 04:55:21 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:55:21 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2148715762' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:55:21 np0005486808 nova_compute[259627]: 2025-10-14 08:55:21.599 2 DEBUG oslo_concurrency.processutils [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:55:21 np0005486808 nova_compute[259627]: 2025-10-14 08:55:21.604 2 DEBUG nova.compute.provider_tree [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 04:55:21 np0005486808 nova_compute[259627]: 2025-10-14 08:55:21.619 2 DEBUG nova.scheduler.client.report [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 04:55:21 np0005486808 nova_compute[259627]: 2025-10-14 08:55:21.643 2 DEBUG oslo_concurrency.lockutils [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:55:21 np0005486808 nova_compute[259627]: 2025-10-14 08:55:21.667 2 INFO nova.scheduler.client.report [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Deleted allocations for instance 7b60a7cc-57e5-4833-9541-ed03e9e862ea
Oct 14 04:55:21 np0005486808 nova_compute[259627]: 2025-10-14 08:55:21.741 2 DEBUG oslo_concurrency.lockutils [None req-c25290d6-34e1-471f-8b42-7dfdbd0c9d72 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "7b60a7cc-57e5-4833-9541-ed03e9e862ea" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.359s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:55:21 np0005486808 nova_compute[259627]: 2025-10-14 08:55:21.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:55:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:55:23 np0005486808 nova_compute[259627]: 2025-10-14 08:55:23.116 2 DEBUG nova.network.neutron [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Updated VIF entry in instance network info cache for port f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 04:55:23 np0005486808 nova_compute[259627]: 2025-10-14 08:55:23.117 2 DEBUG nova.network.neutron [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Updating instance_info_cache with network_info: [{"id": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "address": "fa:16:3e:76:a5:a0", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f1dcbf-2b", "ovs_interfaceid": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 04:55:23 np0005486808 nova_compute[259627]: 2025-10-14 08:55:23.131 2 DEBUG oslo_concurrency.lockutils [req-5b2587df-0de5-4134-8aea-62cc235fbbd1 req-592e7ffd-c6ec-4023-b524-b2d66edc4d1f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-30af67a2-4b44-481c-8ab4-296e93c1c517" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 04:55:23 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 169 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 232 op/s
Oct 14 04:55:24 np0005486808 nova_compute[259627]: 2025-10-14 08:55:24.306 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432109.2670877, f1df3849-6811-41a9-9c70-f10a6863b4f9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 04:55:24 np0005486808 nova_compute[259627]: 2025-10-14 08:55:24.306 2 INFO nova.compute.manager [-] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] VM Stopped (Lifecycle Event)
Oct 14 04:55:24 np0005486808 nova_compute[259627]: 2025-10-14 08:55:24.334 2 DEBUG nova.compute.manager [None req-a09f45e9-6aa8-4c97-a40b-7a98db0f4725 - - - - - -] [instance: f1df3849-6811-41a9-9c70-f10a6863b4f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 04:55:24 np0005486808 nova_compute[259627]: 2025-10-14 08:55:24.372 2 DEBUG nova.compute.manager [req-c3f27d7b-db16-499f-8d11-93f1dd827d71 req-f93a8b40-f99b-4513-b65a-f8f6e63bcaae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Received event network-changed-fac86e41-30dc-481e-a423-10c6cdb3626f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 04:55:24 np0005486808 nova_compute[259627]: 2025-10-14 08:55:24.373 2 DEBUG nova.compute.manager [req-c3f27d7b-db16-499f-8d11-93f1dd827d71 req-f93a8b40-f99b-4513-b65a-f8f6e63bcaae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Refreshing instance network info cache due to event network-changed-fac86e41-30dc-481e-a423-10c6cdb3626f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 04:55:24 np0005486808 nova_compute[259627]: 2025-10-14 08:55:24.373 2 DEBUG oslo_concurrency.lockutils [req-c3f27d7b-db16-499f-8d11-93f1dd827d71 req-f93a8b40-f99b-4513-b65a-f8f6e63bcaae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-b063b9bf-1f88-47d3-a838-a4bcfc5eeecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 04:55:24 np0005486808 nova_compute[259627]: 2025-10-14 08:55:24.374 2 DEBUG oslo_concurrency.lockutils [req-c3f27d7b-db16-499f-8d11-93f1dd827d71 req-f93a8b40-f99b-4513-b65a-f8f6e63bcaae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-b063b9bf-1f88-47d3-a838-a4bcfc5eeecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 04:55:24 np0005486808 nova_compute[259627]: 2025-10-14 08:55:24.374 2 DEBUG nova.network.neutron [req-c3f27d7b-db16-499f-8d11-93f1dd827d71 req-f93a8b40-f99b-4513-b65a-f8f6e63bcaae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Refreshing network info cache for port fac86e41-30dc-481e-a423-10c6cdb3626f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 04:55:24 np0005486808 nova_compute[259627]: 2025-10-14 08:55:24.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:55:25 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 169 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 301 op/s
Oct 14 04:55:25 np0005486808 podman[278595]: 2025-10-14 08:55:25.653244352 +0000 UTC m=+0.066142603 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 14 04:55:25 np0005486808 podman[278594]: 2025-10-14 08:55:25.655029406 +0000 UTC m=+0.070245144 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009)
Oct 14 04:55:26 np0005486808 nova_compute[259627]: 2025-10-14 08:55:26.137 2 DEBUG nova.network.neutron [req-c3f27d7b-db16-499f-8d11-93f1dd827d71 req-f93a8b40-f99b-4513-b65a-f8f6e63bcaae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Updated VIF entry in instance network info cache for port fac86e41-30dc-481e-a423-10c6cdb3626f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 04:55:26 np0005486808 nova_compute[259627]: 2025-10-14 08:55:26.137 2 DEBUG nova.network.neutron [req-c3f27d7b-db16-499f-8d11-93f1dd827d71 req-f93a8b40-f99b-4513-b65a-f8f6e63bcaae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Updating instance_info_cache with network_info: [{"id": "fac86e41-30dc-481e-a423-10c6cdb3626f", "address": "fa:16:3e:e1:72:b8", "network": {"id": "6f970eb9-83e1-4efc-b15d-b5885b9eabe7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-458855963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ac87003cad443c2b75e49ebdefe379c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfac86e41-30", "ovs_interfaceid": "fac86e41-30dc-481e-a423-10c6cdb3626f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 04:55:26 np0005486808 nova_compute[259627]: 2025-10-14 08:55:26.162 2 DEBUG oslo_concurrency.lockutils [req-c3f27d7b-db16-499f-8d11-93f1dd827d71 req-f93a8b40-f99b-4513-b65a-f8f6e63bcaae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-b063b9bf-1f88-47d3-a838-a4bcfc5eeecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 04:55:26 np0005486808 nova_compute[259627]: 2025-10-14 08:55:26.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:55:26 np0005486808 nova_compute[259627]: 2025-10-14 08:55:26.981 2 DEBUG nova.compute.manager [req-d5623e39-985c-4fce-9b0c-2644d4004af6 req-b74c6658-7860-4a9d-8085-b62d16c33f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Received event network-changed-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 04:55:26 np0005486808 nova_compute[259627]: 2025-10-14 08:55:26.982 2 DEBUG nova.compute.manager [req-d5623e39-985c-4fce-9b0c-2644d4004af6 req-b74c6658-7860-4a9d-8085-b62d16c33f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Refreshing instance network info cache due to event network-changed-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 04:55:26 np0005486808 nova_compute[259627]: 2025-10-14 08:55:26.982 2 DEBUG oslo_concurrency.lockutils [req-d5623e39-985c-4fce-9b0c-2644d4004af6 req-b74c6658-7860-4a9d-8085-b62d16c33f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-30af67a2-4b44-481c-8ab4-296e93c1c517" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 04:55:26 np0005486808 nova_compute[259627]: 2025-10-14 08:55:26.982 2 DEBUG oslo_concurrency.lockutils [req-d5623e39-985c-4fce-9b0c-2644d4004af6 req-b74c6658-7860-4a9d-8085-b62d16c33f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-30af67a2-4b44-481c-8ab4-296e93c1c517" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 04:55:26 np0005486808 nova_compute[259627]: 2025-10-14 08:55:26.983 2 DEBUG nova.network.neutron [req-d5623e39-985c-4fce-9b0c-2644d4004af6 req-b74c6658-7860-4a9d-8085-b62d16c33f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Refreshing network info cache for port f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 04:55:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:55:27 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 169 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 107 KiB/s wr, 129 op/s
Oct 14 04:55:28 np0005486808 nova_compute[259627]: 2025-10-14 08:55:28.438 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432113.4367025, 2d3012e0-0c96-4f38-aaf5-91e69018d624 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 04:55:28 np0005486808 nova_compute[259627]: 2025-10-14 08:55:28.439 2 INFO nova.compute.manager [-] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] VM Stopped (Lifecycle Event)
Oct 14 04:55:28 np0005486808 nova_compute[259627]: 2025-10-14 08:55:28.465 2 DEBUG nova.compute.manager [None req-655cfc2e-4bee-466f-b2ab-eaf69d7e1850 - - - - - -] [instance: 2d3012e0-0c96-4f38-aaf5-91e69018d624] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 04:55:29 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 305 active+clean; 169 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 107 KiB/s wr, 129 op/s
Oct 14 04:55:29 np0005486808 nova_compute[259627]: 2025-10-14 08:55:29.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:55:30 np0005486808 nova_compute[259627]: 2025-10-14 08:55:30.606 2 DEBUG nova.network.neutron [req-d5623e39-985c-4fce-9b0c-2644d4004af6 req-b74c6658-7860-4a9d-8085-b62d16c33f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Updated VIF entry in instance network info cache for port f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 04:55:30 np0005486808 nova_compute[259627]: 2025-10-14 08:55:30.607 2 DEBUG nova.network.neutron [req-d5623e39-985c-4fce-9b0c-2644d4004af6 req-b74c6658-7860-4a9d-8085-b62d16c33f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Updating instance_info_cache with network_info: [{"id": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "address": "fa:16:3e:76:a5:a0", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f1dcbf-2b", "ovs_interfaceid": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 04:55:30 np0005486808 nova_compute[259627]: 2025-10-14 08:55:30.629 2 DEBUG oslo_concurrency.lockutils [req-d5623e39-985c-4fce-9b0c-2644d4004af6 req-b74c6658-7860-4a9d-8085-b62d16c33f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-30af67a2-4b44-481c-8ab4-296e93c1c517" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 04:55:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:31.338 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 04:55:31 np0005486808 nova_compute[259627]: 2025-10-14 08:55:31.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:55:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:31.339 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 04:55:31 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 169 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 107 KiB/s wr, 130 op/s
Oct 14 04:55:31 np0005486808 ovn_controller[152662]: 2025-10-14T08:55:31Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e1:72:b8 10.100.0.3
Oct 14 04:55:31 np0005486808 ovn_controller[152662]: 2025-10-14T08:55:31Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e1:72:b8 10.100.0.3
Oct 14 04:55:31 np0005486808 nova_compute[259627]: 2025-10-14 08:55:31.580 2 DEBUG oslo_concurrency.lockutils [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Acquiring lock "30af67a2-4b44-481c-8ab4-296e93c1c517" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:55:31 np0005486808 nova_compute[259627]: 2025-10-14 08:55:31.581 2 DEBUG oslo_concurrency.lockutils [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:55:31 np0005486808 nova_compute[259627]: 2025-10-14 08:55:31.581 2 DEBUG oslo_concurrency.lockutils [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Acquiring lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:55:31 np0005486808 nova_compute[259627]: 2025-10-14 08:55:31.581 2 DEBUG oslo_concurrency.lockutils [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:55:31 np0005486808 nova_compute[259627]: 2025-10-14 08:55:31.581 2 DEBUG oslo_concurrency.lockutils [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:55:31 np0005486808 nova_compute[259627]: 2025-10-14 08:55:31.582 2 INFO nova.compute.manager [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Terminating instance
Oct 14 04:55:31 np0005486808 nova_compute[259627]: 2025-10-14 08:55:31.583 2 DEBUG nova.compute.manager [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 04:55:31 np0005486808 kernel: tapf0f1dcbf-2b (unregistering): left promiscuous mode
Oct 14 04:55:31 np0005486808 NetworkManager[44885]: <info>  [1760432131.6447] device (tapf0f1dcbf-2b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 04:55:31 np0005486808 nova_compute[259627]: 2025-10-14 08:55:31.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:55:31 np0005486808 ovn_controller[152662]: 2025-10-14T08:55:31Z|00053|binding|INFO|Releasing lport f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 from this chassis (sb_readonly=0)
Oct 14 04:55:31 np0005486808 ovn_controller[152662]: 2025-10-14T08:55:31Z|00054|binding|INFO|Setting lport f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 down in Southbound
Oct 14 04:55:31 np0005486808 ovn_controller[152662]: 2025-10-14T08:55:31Z|00055|binding|INFO|Removing iface tapf0f1dcbf-2b ovn-installed in OVS
Oct 14 04:55:31 np0005486808 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Oct 14 04:55:31 np0005486808 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 16.903s CPU time.
Oct 14 04:55:31 np0005486808 nova_compute[259627]: 2025-10-14 08:55:31.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:55:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:31.710 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:a5:a0 10.100.0.12'], port_security=['fa:16:3e:76:a5:a0 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '30af67a2-4b44-481c-8ab4-296e93c1c517', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-92d50a40-95c8-4c0a-a4ab-d459f68516aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f5b8fd07d6d54bda9a0257bf72d4b37f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e013b807-3b2c-404b-b699-697e2a823013', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=20317fae-45dd-4464-971e-a345fe497251, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 04:55:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:31.712 162547 INFO neutron.agent.ovn.metadata.agent [-] Port f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 in datapath 92d50a40-95c8-4c0a-a4ab-d459f68516aa unbound from our chassis
Oct 14 04:55:31 np0005486808 systemd-machined[214636]: Machine qemu-3-instance-00000003 terminated.
Oct 14 04:55:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:31.713 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 92d50a40-95c8-4c0a-a4ab-d459f68516aa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 04:55:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:31.714 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[086e8c74-c270-4f4e-bd45-a1015593f4a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 04:55:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:31.714 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa namespace which is not needed anymore
Oct 14 04:55:31 np0005486808 nova_compute[259627]: 2025-10-14 08:55:31.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:55:31 np0005486808 nova_compute[259627]: 2025-10-14 08:55:31.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:55:31 np0005486808 kernel: tapf0f1dcbf-2b: entered promiscuous mode
Oct 14 04:55:31 np0005486808 NetworkManager[44885]: <info>  [1760432131.7977] manager: (tapf0f1dcbf-2b): new Tun device (/org/freedesktop/NetworkManager/Devices/42)
Oct 14 04:55:31 np0005486808 kernel: tapf0f1dcbf-2b (unregistering): left promiscuous mode
Oct 14 04:55:31 np0005486808 ovn_controller[152662]: 2025-10-14T08:55:31Z|00056|binding|INFO|Claiming lport f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 for this chassis.
Oct 14 04:55:31 np0005486808 ovn_controller[152662]: 2025-10-14T08:55:31Z|00057|binding|INFO|f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0: Claiming fa:16:3e:76:a5:a0 10.100.0.12
Oct 14 04:55:31 np0005486808 nova_compute[259627]: 2025-10-14 08:55:31.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:31.814 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:a5:a0 10.100.0.12'], port_security=['fa:16:3e:76:a5:a0 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '30af67a2-4b44-481c-8ab4-296e93c1c517', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-92d50a40-95c8-4c0a-a4ab-d459f68516aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f5b8fd07d6d54bda9a0257bf72d4b37f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e013b807-3b2c-404b-b699-697e2a823013', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=20317fae-45dd-4464-971e-a345fe497251, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:55:31 np0005486808 ovn_controller[152662]: 2025-10-14T08:55:31Z|00058|binding|INFO|Setting lport f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 ovn-installed in OVS
Oct 14 04:55:31 np0005486808 ovn_controller[152662]: 2025-10-14T08:55:31Z|00059|binding|INFO|Setting lport f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 up in Southbound
Oct 14 04:55:31 np0005486808 nova_compute[259627]: 2025-10-14 08:55:31.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:31 np0005486808 ovn_controller[152662]: 2025-10-14T08:55:31Z|00060|binding|INFO|Releasing lport f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 from this chassis (sb_readonly=1)
Oct 14 04:55:31 np0005486808 ovn_controller[152662]: 2025-10-14T08:55:31Z|00061|binding|INFO|Removing iface tapf0f1dcbf-2b ovn-installed in OVS
Oct 14 04:55:31 np0005486808 ovn_controller[152662]: 2025-10-14T08:55:31Z|00062|if_status|INFO|Not setting lport f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 down as sb is readonly
Oct 14 04:55:31 np0005486808 nova_compute[259627]: 2025-10-14 08:55:31.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:31 np0005486808 nova_compute[259627]: 2025-10-14 08:55:31.825 2 INFO nova.virt.libvirt.driver [-] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Instance destroyed successfully.#033[00m
Oct 14 04:55:31 np0005486808 nova_compute[259627]: 2025-10-14 08:55:31.826 2 DEBUG nova.objects.instance [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lazy-loading 'resources' on Instance uuid 30af67a2-4b44-481c-8ab4-296e93c1c517 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:55:31 np0005486808 ovn_controller[152662]: 2025-10-14T08:55:31Z|00063|binding|INFO|Releasing lport f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 from this chassis (sb_readonly=0)
Oct 14 04:55:31 np0005486808 ovn_controller[152662]: 2025-10-14T08:55:31Z|00064|binding|INFO|Setting lport f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 down in Southbound
Oct 14 04:55:31 np0005486808 nova_compute[259627]: 2025-10-14 08:55:31.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:31.842 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:a5:a0 10.100.0.12'], port_security=['fa:16:3e:76:a5:a0 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '30af67a2-4b44-481c-8ab4-296e93c1c517', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-92d50a40-95c8-4c0a-a4ab-d459f68516aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f5b8fd07d6d54bda9a0257bf72d4b37f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e013b807-3b2c-404b-b699-697e2a823013', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=20317fae-45dd-4464-971e-a345fe497251, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:55:31 np0005486808 neutron-haproxy-ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa[276766]: [NOTICE]   (276770) : haproxy version is 2.8.14-c23fe91
Oct 14 04:55:31 np0005486808 neutron-haproxy-ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa[276766]: [NOTICE]   (276770) : path to executable is /usr/sbin/haproxy
Oct 14 04:55:31 np0005486808 neutron-haproxy-ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa[276766]: [WARNING]  (276770) : Exiting Master process...
Oct 14 04:55:31 np0005486808 neutron-haproxy-ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa[276766]: [WARNING]  (276770) : Exiting Master process...
Oct 14 04:55:31 np0005486808 neutron-haproxy-ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa[276766]: [ALERT]    (276770) : Current worker (276772) exited with code 143 (Terminated)
Oct 14 04:55:31 np0005486808 neutron-haproxy-ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa[276766]: [WARNING]  (276770) : All workers exited. Exiting... (0)
Oct 14 04:55:31 np0005486808 nova_compute[259627]: 2025-10-14 08:55:31.850 2 DEBUG nova.virt.libvirt.vif [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:54:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1417208985',display_name='tempest-FloatingIPsAssociationTestJSON-server-1417208985',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1417208985',id=3,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:54:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f5b8fd07d6d54bda9a0257bf72d4b37f',ramdisk_id='',reservation_id='r-95bnx8ff',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_
model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1304888620',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1304888620-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:54:46Z,user_data=None,user_id='831826dabb48463c92f24c277df4039e',uuid=30af67a2-4b44-481c-8ab4-296e93c1c517,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "address": "fa:16:3e:76:a5:a0", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f1dcbf-2b", "ovs_interfaceid": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 04:55:31 np0005486808 nova_compute[259627]: 2025-10-14 08:55:31.851 2 DEBUG nova.network.os_vif_util [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Converting VIF {"id": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "address": "fa:16:3e:76:a5:a0", "network": {"id": "92d50a40-95c8-4c0a-a4ab-d459f68516aa", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-664787396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5b8fd07d6d54bda9a0257bf72d4b37f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f1dcbf-2b", "ovs_interfaceid": "f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:55:31 np0005486808 nova_compute[259627]: 2025-10-14 08:55:31.851 2 DEBUG nova.network.os_vif_util [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:76:a5:a0,bridge_name='br-int',has_traffic_filtering=True,id=f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0,network=Network(92d50a40-95c8-4c0a-a4ab-d459f68516aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0f1dcbf-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:55:31 np0005486808 nova_compute[259627]: 2025-10-14 08:55:31.851 2 DEBUG os_vif [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:a5:a0,bridge_name='br-int',has_traffic_filtering=True,id=f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0,network=Network(92d50a40-95c8-4c0a-a4ab-d459f68516aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0f1dcbf-2b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 04:55:31 np0005486808 systemd[1]: libpod-3172ffba8c903c56b6eda0671d39029a54ecf4575f4f98bdc441f03e69bd622b.scope: Deactivated successfully.
Oct 14 04:55:31 np0005486808 nova_compute[259627]: 2025-10-14 08:55:31.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:31 np0005486808 nova_compute[259627]: 2025-10-14 08:55:31.853 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0f1dcbf-2b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:55:31 np0005486808 nova_compute[259627]: 2025-10-14 08:55:31.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:31 np0005486808 nova_compute[259627]: 2025-10-14 08:55:31.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:31 np0005486808 nova_compute[259627]: 2025-10-14 08:55:31.857 2 INFO os_vif [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:a5:a0,bridge_name='br-int',has_traffic_filtering=True,id=f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0,network=Network(92d50a40-95c8-4c0a-a4ab-d459f68516aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0f1dcbf-2b')#033[00m
Oct 14 04:55:31 np0005486808 podman[278658]: 2025-10-14 08:55:31.860459477 +0000 UTC m=+0.058900054 container died 3172ffba8c903c56b6eda0671d39029a54ecf4575f4f98bdc441f03e69bd622b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009)
Oct 14 04:55:31 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3172ffba8c903c56b6eda0671d39029a54ecf4575f4f98bdc441f03e69bd622b-userdata-shm.mount: Deactivated successfully.
Oct 14 04:55:31 np0005486808 systemd[1]: var-lib-containers-storage-overlay-3945ba3649327f68a0834580ef80e82e4e0d34f541da7ffdb8442a6916717c6f-merged.mount: Deactivated successfully.
Oct 14 04:55:31 np0005486808 podman[278658]: 2025-10-14 08:55:31.901924529 +0000 UTC m=+0.100365116 container cleanup 3172ffba8c903c56b6eda0671d39029a54ecf4575f4f98bdc441f03e69bd622b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team)
Oct 14 04:55:31 np0005486808 systemd[1]: libpod-conmon-3172ffba8c903c56b6eda0671d39029a54ecf4575f4f98bdc441f03e69bd622b.scope: Deactivated successfully.
Oct 14 04:55:31 np0005486808 podman[278708]: 2025-10-14 08:55:31.971867674 +0000 UTC m=+0.046605480 container remove 3172ffba8c903c56b6eda0671d39029a54ecf4575f4f98bdc441f03e69bd622b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 14 04:55:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:31.981 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bacc9211-0a89-47e1-a15e-e463c2cf0ebd]: (4, ('Tue Oct 14 08:55:31 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa (3172ffba8c903c56b6eda0671d39029a54ecf4575f4f98bdc441f03e69bd622b)\n3172ffba8c903c56b6eda0671d39029a54ecf4575f4f98bdc441f03e69bd622b\nTue Oct 14 08:55:31 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa (3172ffba8c903c56b6eda0671d39029a54ecf4575f4f98bdc441f03e69bd622b)\n3172ffba8c903c56b6eda0671d39029a54ecf4575f4f98bdc441f03e69bd622b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:31.983 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[051ac235-1e36-4572-a9df-30832074ef5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:31.984 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92d50a40-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:55:31 np0005486808 nova_compute[259627]: 2025-10-14 08:55:31.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:31 np0005486808 kernel: tap92d50a40-90: left promiscuous mode
Oct 14 04:55:31 np0005486808 nova_compute[259627]: 2025-10-14 08:55:31.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:31.992 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[91f78f32-11a6-48cb-92ad-01dadd789db8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:32 np0005486808 nova_compute[259627]: 2025-10-14 08:55:32.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:32.014 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2679f7c1-4dc1-4c03-8b09-be77a28e60f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:32.015 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[988d8d5d-a638-44d6-bc09-cbf9de117524]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:32.036 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[86d32d58-8e05-401b-988a-42b99e4d55d6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586706, 'reachable_time': 37454, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278723, 'error': None, 'target': 'ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:32 np0005486808 systemd[1]: run-netns-ovnmeta\x2d92d50a40\x2d95c8\x2d4c0a\x2da4ab\x2dd459f68516aa.mount: Deactivated successfully.
Oct 14 04:55:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:32.039 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-92d50a40-95c8-4c0a-a4ab-d459f68516aa deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 04:55:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:32.039 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[2048c8a8-f150-4f29-9c6c-a57071c9669f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:32.042 162547 INFO neutron.agent.ovn.metadata.agent [-] Port f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 in datapath 92d50a40-95c8-4c0a-a4ab-d459f68516aa unbound from our chassis#033[00m
Oct 14 04:55:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:32.044 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 92d50a40-95c8-4c0a-a4ab-d459f68516aa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 04:55:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:32.045 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[499880a6-7fd2-43a8-9460-6afb24135f2c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:32.045 162547 INFO neutron.agent.ovn.metadata.agent [-] Port f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 in datapath 92d50a40-95c8-4c0a-a4ab-d459f68516aa unbound from our chassis#033[00m
Oct 14 04:55:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:32.049 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 92d50a40-95c8-4c0a-a4ab-d459f68516aa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 04:55:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:32.049 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f0d7d9a2-c83a-4000-8aba-59ccf59ee406]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:32 np0005486808 nova_compute[259627]: 2025-10-14 08:55:32.283 2 INFO nova.virt.libvirt.driver [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Deleting instance files /var/lib/nova/instances/30af67a2-4b44-481c-8ab4-296e93c1c517_del#033[00m
Oct 14 04:55:32 np0005486808 nova_compute[259627]: 2025-10-14 08:55:32.285 2 INFO nova.virt.libvirt.driver [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Deletion of /var/lib/nova/instances/30af67a2-4b44-481c-8ab4-296e93c1c517_del complete#033[00m
Oct 14 04:55:32 np0005486808 nova_compute[259627]: 2025-10-14 08:55:32.384 2 INFO nova.compute.manager [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Took 0.80 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 04:55:32 np0005486808 nova_compute[259627]: 2025-10-14 08:55:32.384 2 DEBUG oslo.service.loopingcall [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 04:55:32 np0005486808 nova_compute[259627]: 2025-10-14 08:55:32.384 2 DEBUG nova.compute.manager [-] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 04:55:32 np0005486808 nova_compute[259627]: 2025-10-14 08:55:32.385 2 DEBUG nova.network.neutron [-] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 04:55:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:55:32 np0005486808 nova_compute[259627]: 2025-10-14 08:55:32.538 2 DEBUG nova.compute.manager [req-012eee2f-617d-4b1c-af44-006ab52dc6f0 req-298d538f-8570-4747-bf81-b8370f7f7ccd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Received event network-vif-unplugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:55:32 np0005486808 nova_compute[259627]: 2025-10-14 08:55:32.539 2 DEBUG oslo_concurrency.lockutils [req-012eee2f-617d-4b1c-af44-006ab52dc6f0 req-298d538f-8570-4747-bf81-b8370f7f7ccd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:55:32 np0005486808 nova_compute[259627]: 2025-10-14 08:55:32.539 2 DEBUG oslo_concurrency.lockutils [req-012eee2f-617d-4b1c-af44-006ab52dc6f0 req-298d538f-8570-4747-bf81-b8370f7f7ccd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:55:32 np0005486808 nova_compute[259627]: 2025-10-14 08:55:32.540 2 DEBUG oslo_concurrency.lockutils [req-012eee2f-617d-4b1c-af44-006ab52dc6f0 req-298d538f-8570-4747-bf81-b8370f7f7ccd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:55:32 np0005486808 nova_compute[259627]: 2025-10-14 08:55:32.540 2 DEBUG nova.compute.manager [req-012eee2f-617d-4b1c-af44-006ab52dc6f0 req-298d538f-8570-4747-bf81-b8370f7f7ccd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] No waiting events found dispatching network-vif-unplugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:55:32 np0005486808 nova_compute[259627]: 2025-10-14 08:55:32.541 2 DEBUG nova.compute.manager [req-012eee2f-617d-4b1c-af44-006ab52dc6f0 req-298d538f-8570-4747-bf81-b8370f7f7ccd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Received event network-vif-unplugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 04:55:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:55:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:55:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_08:55:32
Oct 14 04:55:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 04:55:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 04:55:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['.rgw.root', 'vms', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', '.mgr', 'images', 'default.rgw.control', 'default.rgw.meta']
Oct 14 04:55:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 04:55:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:55:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:55:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:55:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:55:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 04:55:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:55:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 04:55:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:55:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:55:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:55:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:55:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:55:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:55:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:55:33 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 169 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 341 B/s wr, 69 op/s
Oct 14 04:55:33 np0005486808 nova_compute[259627]: 2025-10-14 08:55:33.563 2 DEBUG nova.network.neutron [-] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:55:33 np0005486808 nova_compute[259627]: 2025-10-14 08:55:33.579 2 INFO nova.compute.manager [-] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Took 1.19 seconds to deallocate network for instance.#033[00m
Oct 14 04:55:33 np0005486808 nova_compute[259627]: 2025-10-14 08:55:33.627 2 DEBUG oslo_concurrency.lockutils [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:55:33 np0005486808 nova_compute[259627]: 2025-10-14 08:55:33.627 2 DEBUG oslo_concurrency.lockutils [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:55:33 np0005486808 nova_compute[259627]: 2025-10-14 08:55:33.724 2 DEBUG oslo_concurrency.processutils [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:55:33 np0005486808 nova_compute[259627]: 2025-10-14 08:55:33.761 2 DEBUG nova.compute.manager [req-9fa0b056-6cab-4af5-a2fe-5faedda4aa02 req-767f6fba-f91b-4172-8ebc-027425435f77 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Received event network-vif-deleted-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:55:34 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:55:34 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3750230889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:55:34 np0005486808 nova_compute[259627]: 2025-10-14 08:55:34.272 2 DEBUG oslo_concurrency.processutils [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:55:34 np0005486808 nova_compute[259627]: 2025-10-14 08:55:34.280 2 DEBUG nova.compute.provider_tree [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:55:34 np0005486808 nova_compute[259627]: 2025-10-14 08:55:34.296 2 DEBUG nova.scheduler.client.report [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:55:34 np0005486808 nova_compute[259627]: 2025-10-14 08:55:34.325 2 DEBUG oslo_concurrency.lockutils [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:55:34 np0005486808 nova_compute[259627]: 2025-10-14 08:55:34.350 2 INFO nova.scheduler.client.report [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Deleted allocations for instance 30af67a2-4b44-481c-8ab4-296e93c1c517#033[00m
Oct 14 04:55:34 np0005486808 nova_compute[259627]: 2025-10-14 08:55:34.418 2 DEBUG oslo_concurrency.lockutils [None req-00134dcb-473f-4c25-bfc2-2362be23c69b 831826dabb48463c92f24c277df4039e f5b8fd07d6d54bda9a0257bf72d4b37f - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:55:34 np0005486808 nova_compute[259627]: 2025-10-14 08:55:34.617 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432119.6169467, 7b60a7cc-57e5-4833-9541-ed03e9e862ea => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:55:34 np0005486808 nova_compute[259627]: 2025-10-14 08:55:34.618 2 INFO nova.compute.manager [-] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] VM Stopped (Lifecycle Event)#033[00m
Oct 14 04:55:34 np0005486808 nova_compute[259627]: 2025-10-14 08:55:34.637 2 DEBUG nova.compute.manager [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Received event network-vif-plugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:55:34 np0005486808 nova_compute[259627]: 2025-10-14 08:55:34.638 2 DEBUG oslo_concurrency.lockutils [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:55:34 np0005486808 nova_compute[259627]: 2025-10-14 08:55:34.639 2 DEBUG oslo_concurrency.lockutils [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:55:34 np0005486808 nova_compute[259627]: 2025-10-14 08:55:34.639 2 DEBUG oslo_concurrency.lockutils [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:55:34 np0005486808 nova_compute[259627]: 2025-10-14 08:55:34.640 2 DEBUG nova.compute.manager [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] No waiting events found dispatching network-vif-plugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:55:34 np0005486808 nova_compute[259627]: 2025-10-14 08:55:34.640 2 WARNING nova.compute.manager [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Received unexpected event network-vif-plugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 04:55:34 np0005486808 nova_compute[259627]: 2025-10-14 08:55:34.641 2 DEBUG nova.compute.manager [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Received event network-vif-plugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:55:34 np0005486808 nova_compute[259627]: 2025-10-14 08:55:34.641 2 DEBUG oslo_concurrency.lockutils [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:55:34 np0005486808 nova_compute[259627]: 2025-10-14 08:55:34.642 2 DEBUG oslo_concurrency.lockutils [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:55:34 np0005486808 nova_compute[259627]: 2025-10-14 08:55:34.642 2 DEBUG oslo_concurrency.lockutils [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:55:34 np0005486808 nova_compute[259627]: 2025-10-14 08:55:34.643 2 DEBUG nova.compute.manager [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] No waiting events found dispatching network-vif-plugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:55:34 np0005486808 nova_compute[259627]: 2025-10-14 08:55:34.643 2 WARNING nova.compute.manager [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Received unexpected event network-vif-plugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 04:55:34 np0005486808 nova_compute[259627]: 2025-10-14 08:55:34.644 2 DEBUG nova.compute.manager [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Received event network-vif-plugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:55:34 np0005486808 nova_compute[259627]: 2025-10-14 08:55:34.644 2 DEBUG oslo_concurrency.lockutils [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:55:34 np0005486808 nova_compute[259627]: 2025-10-14 08:55:34.645 2 DEBUG oslo_concurrency.lockutils [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:55:34 np0005486808 nova_compute[259627]: 2025-10-14 08:55:34.645 2 DEBUG oslo_concurrency.lockutils [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:55:34 np0005486808 nova_compute[259627]: 2025-10-14 08:55:34.646 2 DEBUG nova.compute.manager [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] No waiting events found dispatching network-vif-plugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:55:34 np0005486808 nova_compute[259627]: 2025-10-14 08:55:34.646 2 WARNING nova.compute.manager [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Received unexpected event network-vif-plugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 04:55:34 np0005486808 nova_compute[259627]: 2025-10-14 08:55:34.647 2 DEBUG nova.compute.manager [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Received event network-vif-unplugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:55:34 np0005486808 nova_compute[259627]: 2025-10-14 08:55:34.647 2 DEBUG oslo_concurrency.lockutils [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:55:34 np0005486808 nova_compute[259627]: 2025-10-14 08:55:34.648 2 DEBUG oslo_concurrency.lockutils [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:55:34 np0005486808 nova_compute[259627]: 2025-10-14 08:55:34.648 2 DEBUG oslo_concurrency.lockutils [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:55:34 np0005486808 nova_compute[259627]: 2025-10-14 08:55:34.649 2 DEBUG nova.compute.manager [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] No waiting events found dispatching network-vif-unplugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:55:34 np0005486808 nova_compute[259627]: 2025-10-14 08:55:34.649 2 WARNING nova.compute.manager [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Received unexpected event network-vif-unplugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 04:55:34 np0005486808 nova_compute[259627]: 2025-10-14 08:55:34.650 2 DEBUG nova.compute.manager [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Received event network-vif-plugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:55:34 np0005486808 nova_compute[259627]: 2025-10-14 08:55:34.650 2 DEBUG oslo_concurrency.lockutils [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:55:34 np0005486808 nova_compute[259627]: 2025-10-14 08:55:34.650 2 DEBUG oslo_concurrency.lockutils [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:55:34 np0005486808 nova_compute[259627]: 2025-10-14 08:55:34.651 2 DEBUG oslo_concurrency.lockutils [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "30af67a2-4b44-481c-8ab4-296e93c1c517-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:55:34 np0005486808 nova_compute[259627]: 2025-10-14 08:55:34.651 2 DEBUG nova.compute.manager [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] No waiting events found dispatching network-vif-plugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:55:34 np0005486808 nova_compute[259627]: 2025-10-14 08:55:34.652 2 WARNING nova.compute.manager [req-273d8d1d-0785-4974-a24a-258768b4c22d req-ef9a092f-3815-48bd-9d24-1860c3e71933 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Received unexpected event network-vif-plugged-f0f1dcbf-2b62-4d87-8316-44bc56ecb8d0 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 04:55:34 np0005486808 nova_compute[259627]: 2025-10-14 08:55:34.655 2 DEBUG nova.compute.manager [None req-3dc29873-c609-4101-9790-305750111f02 - - - - - -] [instance: 7b60a7cc-57e5-4833-9541-ed03e9e862ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:55:35 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 305 active+clean; 123 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 168 op/s
Oct 14 04:55:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:55:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:55:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 04:55:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:55:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 04:55:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:55:36 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev c3aa8daa-908c-4fb1-8c9f-cc8667655e59 does not exist
Oct 14 04:55:36 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev db138469-d829-41be-b1a6-507a8197c55c does not exist
Oct 14 04:55:36 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 6f29dd08-f203-4467-a632-e5ed45511f8f does not exist
Oct 14 04:55:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 04:55:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 04:55:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 04:55:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:55:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:55:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:55:36 np0005486808 nova_compute[259627]: 2025-10-14 08:55:36.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:36 np0005486808 podman[278902]: 2025-10-14 08:55:36.846566434 +0000 UTC m=+0.067242230 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 14 04:55:36 np0005486808 nova_compute[259627]: 2025-10-14 08:55:36.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:36 np0005486808 nova_compute[259627]: 2025-10-14 08:55:36.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:36 np0005486808 podman[278943]: 2025-10-14 08:55:36.975709489 +0000 UTC m=+0.101056823 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 04:55:37 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:55:37 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:55:37 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:55:37 np0005486808 podman[279062]: 2025-10-14 08:55:37.435797077 +0000 UTC m=+0.038773107 container create 1299b43a920bed8635a0802b22b1a4708e4f98492aef5e3659ec138ba8424f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:55:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:55:37 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 123 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 385 KiB/s rd, 2.1 MiB/s wr, 99 op/s
Oct 14 04:55:37 np0005486808 systemd[1]: Started libpod-conmon-1299b43a920bed8635a0802b22b1a4708e4f98492aef5e3659ec138ba8424f64.scope.
Oct 14 04:55:37 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:55:37 np0005486808 podman[279062]: 2025-10-14 08:55:37.420072519 +0000 UTC m=+0.023048539 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:55:37 np0005486808 podman[279062]: 2025-10-14 08:55:37.533774673 +0000 UTC m=+0.136750773 container init 1299b43a920bed8635a0802b22b1a4708e4f98492aef5e3659ec138ba8424f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 14 04:55:37 np0005486808 podman[279062]: 2025-10-14 08:55:37.545348539 +0000 UTC m=+0.148324599 container start 1299b43a920bed8635a0802b22b1a4708e4f98492aef5e3659ec138ba8424f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:55:37 np0005486808 podman[279062]: 2025-10-14 08:55:37.550311421 +0000 UTC m=+0.153287481 container attach 1299b43a920bed8635a0802b22b1a4708e4f98492aef5e3659ec138ba8424f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:55:37 np0005486808 cool_ramanujan[279078]: 167 167
Oct 14 04:55:37 np0005486808 systemd[1]: libpod-1299b43a920bed8635a0802b22b1a4708e4f98492aef5e3659ec138ba8424f64.scope: Deactivated successfully.
Oct 14 04:55:37 np0005486808 podman[279062]: 2025-10-14 08:55:37.556052723 +0000 UTC m=+0.159028763 container died 1299b43a920bed8635a0802b22b1a4708e4f98492aef5e3659ec138ba8424f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ramanujan, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 14 04:55:37 np0005486808 systemd[1]: var-lib-containers-storage-overlay-270dbf189e011135f0f5b6a4368fd7535a1be1da0266466ef701a4ccb5bddbff-merged.mount: Deactivated successfully.
Oct 14 04:55:37 np0005486808 podman[279062]: 2025-10-14 08:55:37.606853066 +0000 UTC m=+0.209829086 container remove 1299b43a920bed8635a0802b22b1a4708e4f98492aef5e3659ec138ba8424f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 14 04:55:37 np0005486808 systemd[1]: libpod-conmon-1299b43a920bed8635a0802b22b1a4708e4f98492aef5e3659ec138ba8424f64.scope: Deactivated successfully.
Oct 14 04:55:37 np0005486808 podman[279102]: 2025-10-14 08:55:37.827236081 +0000 UTC m=+0.054506805 container create a148b0e716db781ca72a3103f6720abde855ebfb3b849880f3c736c10f507172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:55:37 np0005486808 systemd[1]: Started libpod-conmon-a148b0e716db781ca72a3103f6720abde855ebfb3b849880f3c736c10f507172.scope.
Oct 14 04:55:37 np0005486808 podman[279102]: 2025-10-14 08:55:37.800288377 +0000 UTC m=+0.027559101 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:55:37 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:55:37 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23535a52d89b40cda6cd70d754eb042d6a767cf2d97ee54beb7725911bb3922d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:55:37 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23535a52d89b40cda6cd70d754eb042d6a767cf2d97ee54beb7725911bb3922d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:55:37 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23535a52d89b40cda6cd70d754eb042d6a767cf2d97ee54beb7725911bb3922d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:55:37 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23535a52d89b40cda6cd70d754eb042d6a767cf2d97ee54beb7725911bb3922d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:55:37 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23535a52d89b40cda6cd70d754eb042d6a767cf2d97ee54beb7725911bb3922d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:55:37 np0005486808 podman[279102]: 2025-10-14 08:55:37.943676333 +0000 UTC m=+0.170947047 container init a148b0e716db781ca72a3103f6720abde855ebfb3b849880f3c736c10f507172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hopper, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:55:37 np0005486808 podman[279102]: 2025-10-14 08:55:37.956672904 +0000 UTC m=+0.183943618 container start a148b0e716db781ca72a3103f6720abde855ebfb3b849880f3c736c10f507172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hopper, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 14 04:55:37 np0005486808 podman[279102]: 2025-10-14 08:55:37.960953709 +0000 UTC m=+0.188224433 container attach a148b0e716db781ca72a3103f6720abde855ebfb3b849880f3c736c10f507172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:55:39 np0005486808 brave_hopper[279118]: --> passed data devices: 0 physical, 3 LVM
Oct 14 04:55:39 np0005486808 brave_hopper[279118]: --> relative data size: 1.0
Oct 14 04:55:39 np0005486808 brave_hopper[279118]: --> All data devices are unavailable
Oct 14 04:55:39 np0005486808 systemd[1]: libpod-a148b0e716db781ca72a3103f6720abde855ebfb3b849880f3c736c10f507172.scope: Deactivated successfully.
Oct 14 04:55:39 np0005486808 podman[279102]: 2025-10-14 08:55:39.126094177 +0000 UTC m=+1.353364871 container died a148b0e716db781ca72a3103f6720abde855ebfb3b849880f3c736c10f507172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hopper, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:55:39 np0005486808 systemd[1]: libpod-a148b0e716db781ca72a3103f6720abde855ebfb3b849880f3c736c10f507172.scope: Consumed 1.128s CPU time.
Oct 14 04:55:39 np0005486808 systemd[1]: var-lib-containers-storage-overlay-23535a52d89b40cda6cd70d754eb042d6a767cf2d97ee54beb7725911bb3922d-merged.mount: Deactivated successfully.
Oct 14 04:55:39 np0005486808 podman[279102]: 2025-10-14 08:55:39.192584787 +0000 UTC m=+1.419855511 container remove a148b0e716db781ca72a3103f6720abde855ebfb3b849880f3c736c10f507172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hopper, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 14 04:55:39 np0005486808 systemd[1]: libpod-conmon-a148b0e716db781ca72a3103f6720abde855ebfb3b849880f3c736c10f507172.scope: Deactivated successfully.
Oct 14 04:55:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:39.341 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:55:39 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 305 active+clean; 123 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 385 KiB/s rd, 2.1 MiB/s wr, 99 op/s
Oct 14 04:55:39 np0005486808 ovn_controller[152662]: 2025-10-14T08:55:39Z|00065|binding|INFO|Releasing lport 6a62b55c-d140-4dc2-a487-c292e81e63e0 from this chassis (sb_readonly=0)
Oct 14 04:55:39 np0005486808 nova_compute[259627]: 2025-10-14 08:55:39.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:40 np0005486808 podman[279299]: 2025-10-14 08:55:40.004341097 +0000 UTC m=+0.059247812 container create 3458335f70181583285704a164d3909b348e12e297fffa4f8156592a904d58b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_fermat, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 14 04:55:40 np0005486808 systemd[1]: Started libpod-conmon-3458335f70181583285704a164d3909b348e12e297fffa4f8156592a904d58b8.scope.
Oct 14 04:55:40 np0005486808 podman[279299]: 2025-10-14 08:55:39.976683115 +0000 UTC m=+0.031589890 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:55:40 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:55:40 np0005486808 podman[279299]: 2025-10-14 08:55:40.099986626 +0000 UTC m=+0.154893421 container init 3458335f70181583285704a164d3909b348e12e297fffa4f8156592a904d58b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_fermat, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:55:40 np0005486808 podman[279299]: 2025-10-14 08:55:40.112000223 +0000 UTC m=+0.166906918 container start 3458335f70181583285704a164d3909b348e12e297fffa4f8156592a904d58b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_fermat, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:55:40 np0005486808 podman[279299]: 2025-10-14 08:55:40.116073153 +0000 UTC m=+0.170979878 container attach 3458335f70181583285704a164d3909b348e12e297fffa4f8156592a904d58b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_fermat, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 14 04:55:40 np0005486808 beautiful_fermat[279315]: 167 167
Oct 14 04:55:40 np0005486808 podman[279299]: 2025-10-14 08:55:40.117253812 +0000 UTC m=+0.172160497 container died 3458335f70181583285704a164d3909b348e12e297fffa4f8156592a904d58b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_fermat, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:55:40 np0005486808 systemd[1]: libpod-3458335f70181583285704a164d3909b348e12e297fffa4f8156592a904d58b8.scope: Deactivated successfully.
Oct 14 04:55:40 np0005486808 systemd[1]: var-lib-containers-storage-overlay-6efc83e076da523583df42ac7edeabafc2d80f2f5a58bd71651111a20f4e9b07-merged.mount: Deactivated successfully.
Oct 14 04:55:40 np0005486808 podman[279299]: 2025-10-14 08:55:40.1512133 +0000 UTC m=+0.206119995 container remove 3458335f70181583285704a164d3909b348e12e297fffa4f8156592a904d58b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_fermat, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 14 04:55:40 np0005486808 systemd[1]: libpod-conmon-3458335f70181583285704a164d3909b348e12e297fffa4f8156592a904d58b8.scope: Deactivated successfully.
Oct 14 04:55:40 np0005486808 podman[279339]: 2025-10-14 08:55:40.355618691 +0000 UTC m=+0.062631605 container create 8d0f6559d97d0c4a902b2544e0a82a584e98dbb6d078fd299c80490c09c3ce98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_johnson, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:55:40 np0005486808 systemd[1]: Started libpod-conmon-8d0f6559d97d0c4a902b2544e0a82a584e98dbb6d078fd299c80490c09c3ce98.scope.
Oct 14 04:55:40 np0005486808 podman[279339]: 2025-10-14 08:55:40.327343154 +0000 UTC m=+0.034355988 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:55:40 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:55:40 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1ac926ea1d4b2fba17d8c4e18d300d7810c40c9ed941ccc9f63bffafb61eff5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:55:40 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1ac926ea1d4b2fba17d8c4e18d300d7810c40c9ed941ccc9f63bffafb61eff5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:55:40 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1ac926ea1d4b2fba17d8c4e18d300d7810c40c9ed941ccc9f63bffafb61eff5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:55:40 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1ac926ea1d4b2fba17d8c4e18d300d7810c40c9ed941ccc9f63bffafb61eff5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:55:40 np0005486808 podman[279339]: 2025-10-14 08:55:40.479998679 +0000 UTC m=+0.187011473 container init 8d0f6559d97d0c4a902b2544e0a82a584e98dbb6d078fd299c80490c09c3ce98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_johnson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:55:40 np0005486808 podman[279339]: 2025-10-14 08:55:40.489295728 +0000 UTC m=+0.196308512 container start 8d0f6559d97d0c4a902b2544e0a82a584e98dbb6d078fd299c80490c09c3ce98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 14 04:55:40 np0005486808 podman[279339]: 2025-10-14 08:55:40.49625771 +0000 UTC m=+0.203270514 container attach 8d0f6559d97d0c4a902b2544e0a82a584e98dbb6d078fd299c80490c09c3ce98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:55:40 np0005486808 nova_compute[259627]: 2025-10-14 08:55:40.524 2 DEBUG oslo_concurrency.lockutils [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Acquiring lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:55:40 np0005486808 nova_compute[259627]: 2025-10-14 08:55:40.525 2 DEBUG oslo_concurrency.lockutils [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:55:40 np0005486808 nova_compute[259627]: 2025-10-14 08:55:40.525 2 DEBUG oslo_concurrency.lockutils [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Acquiring lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:55:40 np0005486808 nova_compute[259627]: 2025-10-14 08:55:40.525 2 DEBUG oslo_concurrency.lockutils [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:55:40 np0005486808 nova_compute[259627]: 2025-10-14 08:55:40.526 2 DEBUG oslo_concurrency.lockutils [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:55:40 np0005486808 nova_compute[259627]: 2025-10-14 08:55:40.527 2 INFO nova.compute.manager [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Terminating instance#033[00m
Oct 14 04:55:40 np0005486808 nova_compute[259627]: 2025-10-14 08:55:40.528 2 DEBUG nova.compute.manager [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 04:55:40 np0005486808 kernel: tapfac86e41-30 (unregistering): left promiscuous mode
Oct 14 04:55:40 np0005486808 NetworkManager[44885]: <info>  [1760432140.6086] device (tapfac86e41-30): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 04:55:40 np0005486808 ovn_controller[152662]: 2025-10-14T08:55:40Z|00066|binding|INFO|Releasing lport fac86e41-30dc-481e-a423-10c6cdb3626f from this chassis (sb_readonly=0)
Oct 14 04:55:40 np0005486808 ovn_controller[152662]: 2025-10-14T08:55:40Z|00067|binding|INFO|Setting lport fac86e41-30dc-481e-a423-10c6cdb3626f down in Southbound
Oct 14 04:55:40 np0005486808 ovn_controller[152662]: 2025-10-14T08:55:40Z|00068|binding|INFO|Removing iface tapfac86e41-30 ovn-installed in OVS
Oct 14 04:55:40 np0005486808 nova_compute[259627]: 2025-10-14 08:55:40.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:40.634 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:72:b8 10.100.0.3'], port_security=['fa:16:3e:e1:72:b8 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'b063b9bf-1f88-47d3-a838-a4bcfc5eeecc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6f970eb9-83e1-4efc-b15d-b5885b9eabe7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3ac87003cad443c2b75e49ebdefe379c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8a9f5d33-cc0d-455f-8821-c23805bbda66', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.218'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a071297a-bec1-4fd2-a338-694b6508cca6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=fac86e41-30dc-481e-a423-10c6cdb3626f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:55:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:40.636 162547 INFO neutron.agent.ovn.metadata.agent [-] Port fac86e41-30dc-481e-a423-10c6cdb3626f in datapath 6f970eb9-83e1-4efc-b15d-b5885b9eabe7 unbound from our chassis#033[00m
Oct 14 04:55:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:40.638 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6f970eb9-83e1-4efc-b15d-b5885b9eabe7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 04:55:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:40.639 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a467657f-50c7-42ac-954a-9da0f45da308]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:40.640 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7 namespace which is not needed anymore#033[00m
Oct 14 04:55:40 np0005486808 nova_compute[259627]: 2025-10-14 08:55:40.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:40 np0005486808 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Oct 14 04:55:40 np0005486808 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 12.965s CPU time.
Oct 14 04:55:40 np0005486808 systemd-machined[214636]: Machine qemu-7-instance-00000007 terminated.
Oct 14 04:55:40 np0005486808 nova_compute[259627]: 2025-10-14 08:55:40.764 2 INFO nova.virt.libvirt.driver [-] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Instance destroyed successfully.#033[00m
Oct 14 04:55:40 np0005486808 nova_compute[259627]: 2025-10-14 08:55:40.765 2 DEBUG nova.objects.instance [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lazy-loading 'resources' on Instance uuid b063b9bf-1f88-47d3-a838-a4bcfc5eeecc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:55:40 np0005486808 nova_compute[259627]: 2025-10-14 08:55:40.784 2 DEBUG nova.virt.libvirt.vif [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:55:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1648521031',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1648521031',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(28),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1648521031',id=7,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=28,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF7g6TTsjbjlUgcn3NOlTdjTTdxkB/tOz/SnrUnsyC2B2DiKRfdyGeRG/baba8J8u0sFhOnSigOVqumIulB1xwu2rOWWivaBSyCqPcCAc1UiuhefYZEgIiwxztAfhh5GqA==',key_name='tempest-keypair-2141010768',keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:55:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3ac87003cad443c2b75e49ebdefe379c',ramdisk_id='',reservation_id='r-3ocp7rye',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-632252786',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-632252786-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:55:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='654cc6be69694fcd8058cc5a5eb78223',uuid=b063b9bf-1f88-47d3-a838-a4bcfc5eeecc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fac86e41-30dc-481e-a423-10c6cdb3626f", "address": "fa:16:3e:e1:72:b8", "network": {"id": "6f970eb9-83e1-4efc-b15d-b5885b9eabe7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-458855963-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ac87003cad443c2b75e49ebdefe379c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfac86e41-30", "ovs_interfaceid": "fac86e41-30dc-481e-a423-10c6cdb3626f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 04:55:40 np0005486808 nova_compute[259627]: 2025-10-14 08:55:40.785 2 DEBUG nova.network.os_vif_util [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Converting VIF {"id": "fac86e41-30dc-481e-a423-10c6cdb3626f", "address": "fa:16:3e:e1:72:b8", "network": {"id": "6f970eb9-83e1-4efc-b15d-b5885b9eabe7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-458855963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ac87003cad443c2b75e49ebdefe379c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfac86e41-30", "ovs_interfaceid": "fac86e41-30dc-481e-a423-10c6cdb3626f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:55:40 np0005486808 nova_compute[259627]: 2025-10-14 08:55:40.785 2 DEBUG nova.network.os_vif_util [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e1:72:b8,bridge_name='br-int',has_traffic_filtering=True,id=fac86e41-30dc-481e-a423-10c6cdb3626f,network=Network(6f970eb9-83e1-4efc-b15d-b5885b9eabe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfac86e41-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:55:40 np0005486808 nova_compute[259627]: 2025-10-14 08:55:40.786 2 DEBUG os_vif [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e1:72:b8,bridge_name='br-int',has_traffic_filtering=True,id=fac86e41-30dc-481e-a423-10c6cdb3626f,network=Network(6f970eb9-83e1-4efc-b15d-b5885b9eabe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfac86e41-30') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 04:55:40 np0005486808 nova_compute[259627]: 2025-10-14 08:55:40.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:40 np0005486808 nova_compute[259627]: 2025-10-14 08:55:40.788 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfac86e41-30, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:55:40 np0005486808 nova_compute[259627]: 2025-10-14 08:55:40.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:40 np0005486808 nova_compute[259627]: 2025-10-14 08:55:40.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:40 np0005486808 neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7[278549]: [NOTICE]   (278553) : haproxy version is 2.8.14-c23fe91
Oct 14 04:55:40 np0005486808 neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7[278549]: [NOTICE]   (278553) : path to executable is /usr/sbin/haproxy
Oct 14 04:55:40 np0005486808 neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7[278549]: [ALERT]    (278553) : Current worker (278555) exited with code 143 (Terminated)
Oct 14 04:55:40 np0005486808 neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7[278549]: [WARNING]  (278553) : All workers exited. Exiting... (0)
Oct 14 04:55:40 np0005486808 nova_compute[259627]: 2025-10-14 08:55:40.792 2 INFO os_vif [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e1:72:b8,bridge_name='br-int',has_traffic_filtering=True,id=fac86e41-30dc-481e-a423-10c6cdb3626f,network=Network(6f970eb9-83e1-4efc-b15d-b5885b9eabe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfac86e41-30')#033[00m
Oct 14 04:55:40 np0005486808 systemd[1]: libpod-234ba5c52d7ddd5f2072f6308e0120bd7dd78880f61da4f3d484e268eed1b48a.scope: Deactivated successfully.
Oct 14 04:55:40 np0005486808 podman[279385]: 2025-10-14 08:55:40.804075062 +0000 UTC m=+0.064738217 container died 234ba5c52d7ddd5f2072f6308e0120bd7dd78880f61da4f3d484e268eed1b48a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 04:55:40 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-234ba5c52d7ddd5f2072f6308e0120bd7dd78880f61da4f3d484e268eed1b48a-userdata-shm.mount: Deactivated successfully.
Oct 14 04:55:40 np0005486808 systemd[1]: var-lib-containers-storage-overlay-409964ad013ad7282d1b3ada20703b51786ffd8f880c945dcb857889c1c15880-merged.mount: Deactivated successfully.
Oct 14 04:55:40 np0005486808 podman[279385]: 2025-10-14 08:55:40.852121707 +0000 UTC m=+0.112784842 container cleanup 234ba5c52d7ddd5f2072f6308e0120bd7dd78880f61da4f3d484e268eed1b48a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009)
Oct 14 04:55:40 np0005486808 systemd[1]: libpod-conmon-234ba5c52d7ddd5f2072f6308e0120bd7dd78880f61da4f3d484e268eed1b48a.scope: Deactivated successfully.
Oct 14 04:55:40 np0005486808 podman[279441]: 2025-10-14 08:55:40.924824091 +0000 UTC m=+0.044857038 container remove 234ba5c52d7ddd5f2072f6308e0120bd7dd78880f61da4f3d484e268eed1b48a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:55:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:40.932 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d9228989-7c33-4824-9584-c28b13c8b181]: (4, ('Tue Oct 14 08:55:40 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7 (234ba5c52d7ddd5f2072f6308e0120bd7dd78880f61da4f3d484e268eed1b48a)\n234ba5c52d7ddd5f2072f6308e0120bd7dd78880f61da4f3d484e268eed1b48a\nTue Oct 14 08:55:40 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7 (234ba5c52d7ddd5f2072f6308e0120bd7dd78880f61da4f3d484e268eed1b48a)\n234ba5c52d7ddd5f2072f6308e0120bd7dd78880f61da4f3d484e268eed1b48a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:40.934 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8819d783-e724-412a-bb24-5d0f6fd22b77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:40.935 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6f970eb9-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:55:40 np0005486808 kernel: tap6f970eb9-80: left promiscuous mode
Oct 14 04:55:40 np0005486808 nova_compute[259627]: 2025-10-14 08:55:40.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:40 np0005486808 nova_compute[259627]: 2025-10-14 08:55:40.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:40.954 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[edfb55b0-ce7b-47bc-9927-4ce7d125b014]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:40.986 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[24ddd1dc-0bf3-4539-9a91-bde0762370cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:40.987 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c85645d8-bb20-4d49-9aa0-f222731bbf0b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:41.007 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[575fdee5-6590-4c71-84e9-b4da8ca6553e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589806, 'reachable_time': 34887, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279456, 'error': None, 'target': 'ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:41 np0005486808 systemd[1]: run-netns-ovnmeta\x2d6f970eb9\x2d83e1\x2d4efc\x2db15d\x2db5885b9eabe7.mount: Deactivated successfully.
Oct 14 04:55:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:41.012 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6f970eb9-83e1-4efc-b15d-b5885b9eabe7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 04:55:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:55:41.012 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[9c825437-2331-4ba6-955f-903979bdd662]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]: {
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:    "0": [
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:        {
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:            "devices": [
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:                "/dev/loop3"
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:            ],
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:            "lv_name": "ceph_lv0",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:            "lv_size": "21470642176",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:            "name": "ceph_lv0",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:            "tags": {
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:                "ceph.cluster_name": "ceph",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:                "ceph.crush_device_class": "",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:                "ceph.encrypted": "0",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:                "ceph.osd_id": "0",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:                "ceph.type": "block",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:                "ceph.vdo": "0"
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:            },
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:            "type": "block",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:            "vg_name": "ceph_vg0"
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:        }
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:    ],
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:    "1": [
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:        {
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:            "devices": [
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:                "/dev/loop4"
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:            ],
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:            "lv_name": "ceph_lv1",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:            "lv_size": "21470642176",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:            "name": "ceph_lv1",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:            "tags": {
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:                "ceph.cluster_name": "ceph",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:                "ceph.crush_device_class": "",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:                "ceph.encrypted": "0",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:                "ceph.osd_id": "1",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:                "ceph.type": "block",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:                "ceph.vdo": "0"
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:            },
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:            "type": "block",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:            "vg_name": "ceph_vg1"
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:        }
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:    ],
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:    "2": [
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:        {
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:            "devices": [
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:                "/dev/loop5"
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:            ],
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:            "lv_name": "ceph_lv2",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:            "lv_size": "21470642176",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:            "name": "ceph_lv2",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:            "tags": {
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:                "ceph.cluster_name": "ceph",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:                "ceph.crush_device_class": "",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:                "ceph.encrypted": "0",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:                "ceph.osd_id": "2",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:                "ceph.type": "block",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:                "ceph.vdo": "0"
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:            },
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:            "type": "block",
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:            "vg_name": "ceph_vg2"
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:        }
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]:    ]
Oct 14 04:55:41 np0005486808 youthful_johnson[279356]: }
Oct 14 04:55:41 np0005486808 systemd[1]: libpod-8d0f6559d97d0c4a902b2544e0a82a584e98dbb6d078fd299c80490c09c3ce98.scope: Deactivated successfully.
Oct 14 04:55:41 np0005486808 podman[279339]: 2025-10-14 08:55:41.264841437 +0000 UTC m=+0.971854231 container died 8d0f6559d97d0c4a902b2544e0a82a584e98dbb6d078fd299c80490c09c3ce98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_johnson, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Oct 14 04:55:41 np0005486808 systemd[1]: var-lib-containers-storage-overlay-b1ac926ea1d4b2fba17d8c4e18d300d7810c40c9ed941ccc9f63bffafb61eff5-merged.mount: Deactivated successfully.
Oct 14 04:55:41 np0005486808 podman[279339]: 2025-10-14 08:55:41.330085066 +0000 UTC m=+1.037097890 container remove 8d0f6559d97d0c4a902b2544e0a82a584e98dbb6d078fd299c80490c09c3ce98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:55:41 np0005486808 systemd[1]: libpod-conmon-8d0f6559d97d0c4a902b2544e0a82a584e98dbb6d078fd299c80490c09c3ce98.scope: Deactivated successfully.
Oct 14 04:55:41 np0005486808 nova_compute[259627]: 2025-10-14 08:55:41.366 2 INFO nova.virt.libvirt.driver [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Deleting instance files /var/lib/nova/instances/b063b9bf-1f88-47d3-a838-a4bcfc5eeecc_del#033[00m
Oct 14 04:55:41 np0005486808 nova_compute[259627]: 2025-10-14 08:55:41.367 2 INFO nova.virt.libvirt.driver [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Deletion of /var/lib/nova/instances/b063b9bf-1f88-47d3-a838-a4bcfc5eeecc_del complete#033[00m
Oct 14 04:55:41 np0005486808 nova_compute[259627]: 2025-10-14 08:55:41.455 2 INFO nova.compute.manager [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Took 0.93 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 04:55:41 np0005486808 nova_compute[259627]: 2025-10-14 08:55:41.456 2 DEBUG oslo.service.loopingcall [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 04:55:41 np0005486808 nova_compute[259627]: 2025-10-14 08:55:41.457 2 DEBUG nova.compute.manager [-] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 04:55:41 np0005486808 nova_compute[259627]: 2025-10-14 08:55:41.458 2 DEBUG nova.network.neutron [-] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 04:55:41 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1108: 305 pgs: 305 active+clean; 123 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 385 KiB/s rd, 2.1 MiB/s wr, 100 op/s
Oct 14 04:55:41 np0005486808 nova_compute[259627]: 2025-10-14 08:55:41.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:42 np0005486808 podman[279612]: 2025-10-14 08:55:42.088862281 +0000 UTC m=+0.046488528 container create f035f6b990f860a971d31d0d21e9edf898e2b5b6a59c0592298c1dbf3076168e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_taussig, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:55:42 np0005486808 systemd[1]: Started libpod-conmon-f035f6b990f860a971d31d0d21e9edf898e2b5b6a59c0592298c1dbf3076168e.scope.
Oct 14 04:55:42 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:55:42 np0005486808 podman[279612]: 2025-10-14 08:55:42.069119604 +0000 UTC m=+0.026745891 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:55:42 np0005486808 podman[279612]: 2025-10-14 08:55:42.167616143 +0000 UTC m=+0.125242460 container init f035f6b990f860a971d31d0d21e9edf898e2b5b6a59c0592298c1dbf3076168e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_taussig, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:55:42 np0005486808 podman[279612]: 2025-10-14 08:55:42.175727433 +0000 UTC m=+0.133353670 container start f035f6b990f860a971d31d0d21e9edf898e2b5b6a59c0592298c1dbf3076168e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_taussig, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 04:55:42 np0005486808 podman[279612]: 2025-10-14 08:55:42.178731037 +0000 UTC m=+0.136357354 container attach f035f6b990f860a971d31d0d21e9edf898e2b5b6a59c0592298c1dbf3076168e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_taussig, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:55:42 np0005486808 brave_taussig[279628]: 167 167
Oct 14 04:55:42 np0005486808 systemd[1]: libpod-f035f6b990f860a971d31d0d21e9edf898e2b5b6a59c0592298c1dbf3076168e.scope: Deactivated successfully.
Oct 14 04:55:42 np0005486808 podman[279612]: 2025-10-14 08:55:42.180593473 +0000 UTC m=+0.138219720 container died f035f6b990f860a971d31d0d21e9edf898e2b5b6a59c0592298c1dbf3076168e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 14 04:55:42 np0005486808 systemd[1]: var-lib-containers-storage-overlay-5d28b91e8f4a833af5ee796bbedb72e057af10894273c8605402c37ff2bcadbb-merged.mount: Deactivated successfully.
Oct 14 04:55:42 np0005486808 podman[279612]: 2025-10-14 08:55:42.219748009 +0000 UTC m=+0.177374286 container remove f035f6b990f860a971d31d0d21e9edf898e2b5b6a59c0592298c1dbf3076168e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 14 04:55:42 np0005486808 systemd[1]: libpod-conmon-f035f6b990f860a971d31d0d21e9edf898e2b5b6a59c0592298c1dbf3076168e.scope: Deactivated successfully.
Oct 14 04:55:42 np0005486808 podman[279652]: 2025-10-14 08:55:42.417385434 +0000 UTC m=+0.059554890 container create 70cd24eebc414468c8330eeff4a2622a6023b26cc65632731167e3aba4b43c9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_satoshi, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 14 04:55:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:55:42 np0005486808 systemd[1]: Started libpod-conmon-70cd24eebc414468c8330eeff4a2622a6023b26cc65632731167e3aba4b43c9a.scope.
Oct 14 04:55:42 np0005486808 podman[279652]: 2025-10-14 08:55:42.396108209 +0000 UTC m=+0.038277705 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:55:42 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:55:42 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c93dab37126401a93477523680033160ce92a793c1fd6fb46265147cf565af7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:55:42 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c93dab37126401a93477523680033160ce92a793c1fd6fb46265147cf565af7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:55:42 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c93dab37126401a93477523680033160ce92a793c1fd6fb46265147cf565af7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:55:42 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c93dab37126401a93477523680033160ce92a793c1fd6fb46265147cf565af7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:55:42 np0005486808 podman[279652]: 2025-10-14 08:55:42.522753032 +0000 UTC m=+0.164922578 container init 70cd24eebc414468c8330eeff4a2622a6023b26cc65632731167e3aba4b43c9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 14 04:55:42 np0005486808 podman[279652]: 2025-10-14 08:55:42.532396 +0000 UTC m=+0.174565486 container start 70cd24eebc414468c8330eeff4a2622a6023b26cc65632731167e3aba4b43c9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 14 04:55:42 np0005486808 podman[279652]: 2025-10-14 08:55:42.537211309 +0000 UTC m=+0.179380795 container attach 70cd24eebc414468c8330eeff4a2622a6023b26cc65632731167e3aba4b43c9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_satoshi, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:55:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 04:55:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:55:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 04:55:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:55:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000759845367747607 of space, bias 1.0, pg target 0.2279536103242821 quantized to 32 (current 32)
Oct 14 04:55:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:55:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:55:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:55:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:55:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:55:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 14 04:55:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:55:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 04:55:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:55:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:55:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:55:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 04:55:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:55:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 04:55:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:55:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:55:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:55:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 04:55:43 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1109: 305 pgs: 305 active+clean; 123 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 382 KiB/s rd, 2.1 MiB/s wr, 99 op/s
Oct 14 04:55:43 np0005486808 amazing_satoshi[279669]: {
Oct 14 04:55:43 np0005486808 amazing_satoshi[279669]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 04:55:43 np0005486808 amazing_satoshi[279669]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:55:43 np0005486808 amazing_satoshi[279669]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 04:55:43 np0005486808 amazing_satoshi[279669]:        "osd_id": 2,
Oct 14 04:55:43 np0005486808 amazing_satoshi[279669]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:55:43 np0005486808 amazing_satoshi[279669]:        "type": "bluestore"
Oct 14 04:55:43 np0005486808 amazing_satoshi[279669]:    },
Oct 14 04:55:43 np0005486808 amazing_satoshi[279669]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 04:55:43 np0005486808 amazing_satoshi[279669]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:55:43 np0005486808 amazing_satoshi[279669]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 04:55:43 np0005486808 amazing_satoshi[279669]:        "osd_id": 1,
Oct 14 04:55:43 np0005486808 amazing_satoshi[279669]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:55:43 np0005486808 amazing_satoshi[279669]:        "type": "bluestore"
Oct 14 04:55:43 np0005486808 amazing_satoshi[279669]:    },
Oct 14 04:55:43 np0005486808 amazing_satoshi[279669]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 04:55:43 np0005486808 amazing_satoshi[279669]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:55:43 np0005486808 amazing_satoshi[279669]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 04:55:43 np0005486808 amazing_satoshi[279669]:        "osd_id": 0,
Oct 14 04:55:43 np0005486808 amazing_satoshi[279669]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:55:43 np0005486808 amazing_satoshi[279669]:        "type": "bluestore"
Oct 14 04:55:43 np0005486808 amazing_satoshi[279669]:    }
Oct 14 04:55:43 np0005486808 amazing_satoshi[279669]: }
Oct 14 04:55:43 np0005486808 systemd[1]: libpod-70cd24eebc414468c8330eeff4a2622a6023b26cc65632731167e3aba4b43c9a.scope: Deactivated successfully.
Oct 14 04:55:43 np0005486808 systemd[1]: libpod-70cd24eebc414468c8330eeff4a2622a6023b26cc65632731167e3aba4b43c9a.scope: Consumed 1.098s CPU time.
Oct 14 04:55:43 np0005486808 podman[279652]: 2025-10-14 08:55:43.623419899 +0000 UTC m=+1.265589375 container died 70cd24eebc414468c8330eeff4a2622a6023b26cc65632731167e3aba4b43c9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_satoshi, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 14 04:55:43 np0005486808 systemd[1]: var-lib-containers-storage-overlay-3c93dab37126401a93477523680033160ce92a793c1fd6fb46265147cf565af7-merged.mount: Deactivated successfully.
Oct 14 04:55:43 np0005486808 podman[279652]: 2025-10-14 08:55:43.698828249 +0000 UTC m=+1.340997695 container remove 70cd24eebc414468c8330eeff4a2622a6023b26cc65632731167e3aba4b43c9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_satoshi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:55:43 np0005486808 systemd[1]: libpod-conmon-70cd24eebc414468c8330eeff4a2622a6023b26cc65632731167e3aba4b43c9a.scope: Deactivated successfully.
Oct 14 04:55:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:55:43 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:55:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:55:43 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:55:43 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev bc9cdd0e-7cf1-4b72-85d7-cb38fbd76788 does not exist
Oct 14 04:55:43 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev f72e0617-f717-4102-ae41-73db88c0ee3b does not exist
Oct 14 04:55:43 np0005486808 nova_compute[259627]: 2025-10-14 08:55:43.753 2 DEBUG nova.network.neutron [-] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:55:43 np0005486808 nova_compute[259627]: 2025-10-14 08:55:43.779 2 INFO nova.compute.manager [-] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Took 2.32 seconds to deallocate network for instance.#033[00m
Oct 14 04:55:43 np0005486808 nova_compute[259627]: 2025-10-14 08:55:43.988 2 DEBUG oslo_concurrency.lockutils [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:55:43 np0005486808 nova_compute[259627]: 2025-10-14 08:55:43.989 2 DEBUG oslo_concurrency.lockutils [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:55:44 np0005486808 nova_compute[259627]: 2025-10-14 08:55:44.043 2 DEBUG oslo_concurrency.processutils [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:55:44 np0005486808 nova_compute[259627]: 2025-10-14 08:55:44.447 2 DEBUG nova.compute.manager [req-7634a392-592d-4c02-bc98-1d23634fce31 req-b961d588-92ef-4f4e-ba54-994ae0c74048 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Received event network-vif-unplugged-fac86e41-30dc-481e-a423-10c6cdb3626f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:55:44 np0005486808 nova_compute[259627]: 2025-10-14 08:55:44.448 2 DEBUG oslo_concurrency.lockutils [req-7634a392-592d-4c02-bc98-1d23634fce31 req-b961d588-92ef-4f4e-ba54-994ae0c74048 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:55:44 np0005486808 nova_compute[259627]: 2025-10-14 08:55:44.449 2 DEBUG oslo_concurrency.lockutils [req-7634a392-592d-4c02-bc98-1d23634fce31 req-b961d588-92ef-4f4e-ba54-994ae0c74048 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:55:44 np0005486808 nova_compute[259627]: 2025-10-14 08:55:44.450 2 DEBUG oslo_concurrency.lockutils [req-7634a392-592d-4c02-bc98-1d23634fce31 req-b961d588-92ef-4f4e-ba54-994ae0c74048 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:55:44 np0005486808 nova_compute[259627]: 2025-10-14 08:55:44.450 2 DEBUG nova.compute.manager [req-7634a392-592d-4c02-bc98-1d23634fce31 req-b961d588-92ef-4f4e-ba54-994ae0c74048 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] No waiting events found dispatching network-vif-unplugged-fac86e41-30dc-481e-a423-10c6cdb3626f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:55:44 np0005486808 nova_compute[259627]: 2025-10-14 08:55:44.451 2 WARNING nova.compute.manager [req-7634a392-592d-4c02-bc98-1d23634fce31 req-b961d588-92ef-4f4e-ba54-994ae0c74048 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Received unexpected event network-vif-unplugged-fac86e41-30dc-481e-a423-10c6cdb3626f for instance with vm_state deleted and task_state None.#033[00m
Oct 14 04:55:44 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:55:44 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/451440496' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:55:44 np0005486808 nova_compute[259627]: 2025-10-14 08:55:44.486 2 DEBUG oslo_concurrency.processutils [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:55:44 np0005486808 nova_compute[259627]: 2025-10-14 08:55:44.494 2 DEBUG nova.compute.provider_tree [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:55:44 np0005486808 nova_compute[259627]: 2025-10-14 08:55:44.511 2 DEBUG nova.scheduler.client.report [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:55:44 np0005486808 nova_compute[259627]: 2025-10-14 08:55:44.537 2 DEBUG oslo_concurrency.lockutils [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.548s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:55:44 np0005486808 nova_compute[259627]: 2025-10-14 08:55:44.567 2 INFO nova.scheduler.client.report [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Deleted allocations for instance b063b9bf-1f88-47d3-a838-a4bcfc5eeecc#033[00m
Oct 14 04:55:44 np0005486808 nova_compute[259627]: 2025-10-14 08:55:44.681 2 DEBUG oslo_concurrency.lockutils [None req-0510a605-0105-4bc6-9870-ba38aa42f7f5 654cc6be69694fcd8058cc5a5eb78223 3ac87003cad443c2b75e49ebdefe379c - - default default] Lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.156s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:55:44 np0005486808 nova_compute[259627]: 2025-10-14 08:55:44.734 2 DEBUG oslo_concurrency.lockutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Acquiring lock "dfaff639-0439-40d7-8f56-5e8068d741cf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:55:44 np0005486808 nova_compute[259627]: 2025-10-14 08:55:44.735 2 DEBUG oslo_concurrency.lockutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Lock "dfaff639-0439-40d7-8f56-5e8068d741cf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:55:44 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:55:44 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:55:44 np0005486808 nova_compute[259627]: 2025-10-14 08:55:44.752 2 DEBUG nova.compute.manager [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 04:55:44 np0005486808 nova_compute[259627]: 2025-10-14 08:55:44.820 2 DEBUG oslo_concurrency.lockutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:55:44 np0005486808 nova_compute[259627]: 2025-10-14 08:55:44.821 2 DEBUG oslo_concurrency.lockutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:55:44 np0005486808 nova_compute[259627]: 2025-10-14 08:55:44.829 2 DEBUG nova.virt.hardware [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 04:55:44 np0005486808 nova_compute[259627]: 2025-10-14 08:55:44.829 2 INFO nova.compute.claims [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 04:55:44 np0005486808 nova_compute[259627]: 2025-10-14 08:55:44.954 2 DEBUG oslo_concurrency.processutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:55:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:55:45 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3458850696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:55:45 np0005486808 nova_compute[259627]: 2025-10-14 08:55:45.437 2 DEBUG oslo_concurrency.processutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:55:45 np0005486808 nova_compute[259627]: 2025-10-14 08:55:45.446 2 DEBUG nova.compute.provider_tree [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:55:45 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 410 KiB/s rd, 2.1 MiB/s wr, 140 op/s
Oct 14 04:55:45 np0005486808 nova_compute[259627]: 2025-10-14 08:55:45.481 2 DEBUG nova.scheduler.client.report [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:55:45 np0005486808 nova_compute[259627]: 2025-10-14 08:55:45.509 2 DEBUG oslo_concurrency.lockutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:55:45 np0005486808 nova_compute[259627]: 2025-10-14 08:55:45.510 2 DEBUG nova.compute.manager [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 04:55:45 np0005486808 nova_compute[259627]: 2025-10-14 08:55:45.564 2 DEBUG nova.compute.manager [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 04:55:45 np0005486808 nova_compute[259627]: 2025-10-14 08:55:45.564 2 DEBUG nova.network.neutron [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 04:55:45 np0005486808 nova_compute[259627]: 2025-10-14 08:55:45.607 2 INFO nova.virt.libvirt.driver [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 04:55:45 np0005486808 nova_compute[259627]: 2025-10-14 08:55:45.630 2 DEBUG nova.compute.manager [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 04:55:45 np0005486808 nova_compute[259627]: 2025-10-14 08:55:45.719 2 DEBUG nova.compute.manager [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 04:55:45 np0005486808 nova_compute[259627]: 2025-10-14 08:55:45.721 2 DEBUG nova.virt.libvirt.driver [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 04:55:45 np0005486808 nova_compute[259627]: 2025-10-14 08:55:45.721 2 INFO nova.virt.libvirt.driver [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Creating image(s)#033[00m
Oct 14 04:55:45 np0005486808 nova_compute[259627]: 2025-10-14 08:55:45.744 2 DEBUG nova.storage.rbd_utils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] rbd image dfaff639-0439-40d7-8f56-5e8068d741cf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:55:45 np0005486808 nova_compute[259627]: 2025-10-14 08:55:45.775 2 DEBUG nova.storage.rbd_utils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] rbd image dfaff639-0439-40d7-8f56-5e8068d741cf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:55:45 np0005486808 nova_compute[259627]: 2025-10-14 08:55:45.800 2 DEBUG nova.storage.rbd_utils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] rbd image dfaff639-0439-40d7-8f56-5e8068d741cf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:55:45 np0005486808 nova_compute[259627]: 2025-10-14 08:55:45.804 2 DEBUG oslo_concurrency.processutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:55:45 np0005486808 nova_compute[259627]: 2025-10-14 08:55:45.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:45 np0005486808 nova_compute[259627]: 2025-10-14 08:55:45.833 2 DEBUG nova.network.neutron [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct 14 04:55:45 np0005486808 nova_compute[259627]: 2025-10-14 08:55:45.833 2 DEBUG nova.compute.manager [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 04:55:45 np0005486808 nova_compute[259627]: 2025-10-14 08:55:45.874 2 DEBUG oslo_concurrency.processutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:55:45 np0005486808 nova_compute[259627]: 2025-10-14 08:55:45.874 2 DEBUG oslo_concurrency.lockutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:55:45 np0005486808 nova_compute[259627]: 2025-10-14 08:55:45.875 2 DEBUG oslo_concurrency.lockutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:55:45 np0005486808 nova_compute[259627]: 2025-10-14 08:55:45.875 2 DEBUG oslo_concurrency.lockutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:55:45 np0005486808 nova_compute[259627]: 2025-10-14 08:55:45.898 2 DEBUG nova.storage.rbd_utils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] rbd image dfaff639-0439-40d7-8f56-5e8068d741cf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:55:45 np0005486808 nova_compute[259627]: 2025-10-14 08:55:45.902 2 DEBUG oslo_concurrency.processutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 dfaff639-0439-40d7-8f56-5e8068d741cf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:55:46 np0005486808 nova_compute[259627]: 2025-10-14 08:55:46.177 2 DEBUG oslo_concurrency.processutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 dfaff639-0439-40d7-8f56-5e8068d741cf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.275s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:55:46 np0005486808 nova_compute[259627]: 2025-10-14 08:55:46.253 2 DEBUG nova.storage.rbd_utils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] resizing rbd image dfaff639-0439-40d7-8f56-5e8068d741cf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 04:55:46 np0005486808 nova_compute[259627]: 2025-10-14 08:55:46.359 2 DEBUG nova.objects.instance [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Lazy-loading 'migration_context' on Instance uuid dfaff639-0439-40d7-8f56-5e8068d741cf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:55:46 np0005486808 nova_compute[259627]: 2025-10-14 08:55:46.378 2 DEBUG nova.virt.libvirt.driver [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 04:55:46 np0005486808 nova_compute[259627]: 2025-10-14 08:55:46.378 2 DEBUG nova.virt.libvirt.driver [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Ensure instance console log exists: /var/lib/nova/instances/dfaff639-0439-40d7-8f56-5e8068d741cf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 04:55:46 np0005486808 nova_compute[259627]: 2025-10-14 08:55:46.379 2 DEBUG oslo_concurrency.lockutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:55:46 np0005486808 nova_compute[259627]: 2025-10-14 08:55:46.379 2 DEBUG oslo_concurrency.lockutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:55:46 np0005486808 nova_compute[259627]: 2025-10-14 08:55:46.379 2 DEBUG oslo_concurrency.lockutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:55:46 np0005486808 nova_compute[259627]: 2025-10-14 08:55:46.381 2 DEBUG nova.virt.libvirt.driver [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 04:55:46 np0005486808 nova_compute[259627]: 2025-10-14 08:55:46.386 2 WARNING nova.virt.libvirt.driver [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:55:46 np0005486808 nova_compute[259627]: 2025-10-14 08:55:46.392 2 DEBUG nova.virt.libvirt.host [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 04:55:46 np0005486808 nova_compute[259627]: 2025-10-14 08:55:46.393 2 DEBUG nova.virt.libvirt.host [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 04:55:46 np0005486808 nova_compute[259627]: 2025-10-14 08:55:46.396 2 DEBUG nova.virt.libvirt.host [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 04:55:46 np0005486808 nova_compute[259627]: 2025-10-14 08:55:46.397 2 DEBUG nova.virt.libvirt.host [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 04:55:46 np0005486808 nova_compute[259627]: 2025-10-14 08:55:46.398 2 DEBUG nova.virt.libvirt.driver [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 04:55:46 np0005486808 nova_compute[259627]: 2025-10-14 08:55:46.398 2 DEBUG nova.virt.hardware [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 04:55:46 np0005486808 nova_compute[259627]: 2025-10-14 08:55:46.398 2 DEBUG nova.virt.hardware [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 04:55:46 np0005486808 nova_compute[259627]: 2025-10-14 08:55:46.399 2 DEBUG nova.virt.hardware [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 04:55:46 np0005486808 nova_compute[259627]: 2025-10-14 08:55:46.399 2 DEBUG nova.virt.hardware [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 04:55:46 np0005486808 nova_compute[259627]: 2025-10-14 08:55:46.399 2 DEBUG nova.virt.hardware [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 04:55:46 np0005486808 nova_compute[259627]: 2025-10-14 08:55:46.399 2 DEBUG nova.virt.hardware [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 04:55:46 np0005486808 nova_compute[259627]: 2025-10-14 08:55:46.400 2 DEBUG nova.virt.hardware [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 04:55:46 np0005486808 nova_compute[259627]: 2025-10-14 08:55:46.400 2 DEBUG nova.virt.hardware [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 04:55:46 np0005486808 nova_compute[259627]: 2025-10-14 08:55:46.400 2 DEBUG nova.virt.hardware [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 04:55:46 np0005486808 nova_compute[259627]: 2025-10-14 08:55:46.401 2 DEBUG nova.virt.hardware [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 04:55:46 np0005486808 nova_compute[259627]: 2025-10-14 08:55:46.401 2 DEBUG nova.virt.hardware [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 04:55:46 np0005486808 nova_compute[259627]: 2025-10-14 08:55:46.404 2 DEBUG oslo_concurrency.processutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:55:46 np0005486808 nova_compute[259627]: 2025-10-14 08:55:46.529 2 DEBUG nova.compute.manager [req-39e2ef75-533c-4655-a760-01325eab3492 req-6ab7cb83-e91f-4351-9d0d-fa4ea83c76a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Received event network-vif-plugged-fac86e41-30dc-481e-a423-10c6cdb3626f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 04:55:46 np0005486808 nova_compute[259627]: 2025-10-14 08:55:46.530 2 DEBUG oslo_concurrency.lockutils [req-39e2ef75-533c-4655-a760-01325eab3492 req-6ab7cb83-e91f-4351-9d0d-fa4ea83c76a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:55:46 np0005486808 nova_compute[259627]: 2025-10-14 08:55:46.530 2 DEBUG oslo_concurrency.lockutils [req-39e2ef75-533c-4655-a760-01325eab3492 req-6ab7cb83-e91f-4351-9d0d-fa4ea83c76a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:55:46 np0005486808 nova_compute[259627]: 2025-10-14 08:55:46.530 2 DEBUG oslo_concurrency.lockutils [req-39e2ef75-533c-4655-a760-01325eab3492 req-6ab7cb83-e91f-4351-9d0d-fa4ea83c76a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b063b9bf-1f88-47d3-a838-a4bcfc5eeecc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:55:46 np0005486808 nova_compute[259627]: 2025-10-14 08:55:46.530 2 DEBUG nova.compute.manager [req-39e2ef75-533c-4655-a760-01325eab3492 req-6ab7cb83-e91f-4351-9d0d-fa4ea83c76a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] No waiting events found dispatching network-vif-plugged-fac86e41-30dc-481e-a423-10c6cdb3626f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 04:55:46 np0005486808 nova_compute[259627]: 2025-10-14 08:55:46.531 2 WARNING nova.compute.manager [req-39e2ef75-533c-4655-a760-01325eab3492 req-6ab7cb83-e91f-4351-9d0d-fa4ea83c76a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Received unexpected event network-vif-plugged-fac86e41-30dc-481e-a423-10c6cdb3626f for instance with vm_state deleted and task_state None.
Oct 14 04:55:46 np0005486808 nova_compute[259627]: 2025-10-14 08:55:46.531 2 DEBUG nova.compute.manager [req-39e2ef75-533c-4655-a760-01325eab3492 req-6ab7cb83-e91f-4351-9d0d-fa4ea83c76a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Received event network-vif-deleted-fac86e41-30dc-481e-a423-10c6cdb3626f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 04:55:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:55:46 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/631251832' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:55:46 np0005486808 nova_compute[259627]: 2025-10-14 08:55:46.822 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432131.8212082, 30af67a2-4b44-481c-8ab4-296e93c1c517 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 04:55:46 np0005486808 nova_compute[259627]: 2025-10-14 08:55:46.822 2 INFO nova.compute.manager [-] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] VM Stopped (Lifecycle Event)
Oct 14 04:55:46 np0005486808 nova_compute[259627]: 2025-10-14 08:55:46.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:55:46 np0005486808 nova_compute[259627]: 2025-10-14 08:55:46.839 2 DEBUG oslo_concurrency.processutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:55:46 np0005486808 nova_compute[259627]: 2025-10-14 08:55:46.869 2 DEBUG nova.storage.rbd_utils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] rbd image dfaff639-0439-40d7-8f56-5e8068d741cf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:55:46 np0005486808 nova_compute[259627]: 2025-10-14 08:55:46.874 2 DEBUG oslo_concurrency.processutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:55:46 np0005486808 nova_compute[259627]: 2025-10-14 08:55:46.906 2 DEBUG nova.compute.manager [None req-15fc0198-1685-455d-93b3-8634e2e64e6d - - - - - -] [instance: 30af67a2-4b44-481c-8ab4-296e93c1c517] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 04:55:46 np0005486808 nova_compute[259627]: 2025-10-14 08:55:46.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:55:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:55:47 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/761699309' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:55:47 np0005486808 nova_compute[259627]: 2025-10-14 08:55:47.371 2 DEBUG oslo_concurrency.processutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:55:47 np0005486808 nova_compute[259627]: 2025-10-14 08:55:47.373 2 DEBUG nova.objects.instance [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Lazy-loading 'pci_devices' on Instance uuid dfaff639-0439-40d7-8f56-5e8068d741cf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 04:55:47 np0005486808 nova_compute[259627]: 2025-10-14 08:55:47.395 2 DEBUG nova.virt.libvirt.driver [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] End _get_guest_xml xml=<domain type="kvm">
Oct 14 04:55:47 np0005486808 nova_compute[259627]:  <uuid>dfaff639-0439-40d7-8f56-5e8068d741cf</uuid>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:  <name>instance-00000008</name>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 04:55:47 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:      <nova:name>tempest-ServerDiagnosticsTest-server-1361332870</nova:name>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 08:55:46</nova:creationTime>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 04:55:47 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:        <nova:user uuid="590d8c12b92e491291f294d8ab3f0b24">tempest-ServerDiagnosticsTest-929986915-project-member</nova:user>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:        <nova:project uuid="82278e5903c445ef88008b2a13537a99">tempest-ServerDiagnosticsTest-929986915</nova:project>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:      <nova:ports/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <system>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:      <entry name="serial">dfaff639-0439-40d7-8f56-5e8068d741cf</entry>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:      <entry name="uuid">dfaff639-0439-40d7-8f56-5e8068d741cf</entry>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    </system>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:  <os>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:  </os>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:  <features>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:  </features>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:  </clock>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:  <devices>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 04:55:47 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/dfaff639-0439-40d7-8f56-5e8068d741cf_disk">
Oct 14 04:55:47 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:55:47 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 04:55:47 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/dfaff639-0439-40d7-8f56-5e8068d741cf_disk.config">
Oct 14 04:55:47 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:55:47 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 04:55:47 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/dfaff639-0439-40d7-8f56-5e8068d741cf/console.log" append="off"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    </serial>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <video>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    </video>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 04:55:47 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    </rng>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 04:55:47 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 04:55:47 np0005486808 nova_compute[259627]:  </devices>
Oct 14 04:55:47 np0005486808 nova_compute[259627]: </domain>
Oct 14 04:55:47 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 04:55:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:55:47 np0005486808 nova_compute[259627]: 2025-10-14 08:55:47.464 2 DEBUG nova.virt.libvirt.driver [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 04:55:47 np0005486808 nova_compute[259627]: 2025-10-14 08:55:47.465 2 DEBUG nova.virt.libvirt.driver [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 04:55:47 np0005486808 nova_compute[259627]: 2025-10-14 08:55:47.465 2 INFO nova.virt.libvirt.driver [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Using config drive
Oct 14 04:55:47 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 14 KiB/s wr, 41 op/s
Oct 14 04:55:47 np0005486808 nova_compute[259627]: 2025-10-14 08:55:47.499 2 DEBUG nova.storage.rbd_utils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] rbd image dfaff639-0439-40d7-8f56-5e8068d741cf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:55:47 np0005486808 nova_compute[259627]: 2025-10-14 08:55:47.695 2 INFO nova.virt.libvirt.driver [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Creating config drive at /var/lib/nova/instances/dfaff639-0439-40d7-8f56-5e8068d741cf/disk.config
Oct 14 04:55:47 np0005486808 nova_compute[259627]: 2025-10-14 08:55:47.704 2 DEBUG oslo_concurrency.processutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dfaff639-0439-40d7-8f56-5e8068d741cf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu7_vgazy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:55:47 np0005486808 nova_compute[259627]: 2025-10-14 08:55:47.837 2 DEBUG oslo_concurrency.processutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dfaff639-0439-40d7-8f56-5e8068d741cf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu7_vgazy" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:55:47 np0005486808 nova_compute[259627]: 2025-10-14 08:55:47.861 2 DEBUG nova.storage.rbd_utils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] rbd image dfaff639-0439-40d7-8f56-5e8068d741cf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:55:47 np0005486808 nova_compute[259627]: 2025-10-14 08:55:47.865 2 DEBUG oslo_concurrency.processutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dfaff639-0439-40d7-8f56-5e8068d741cf/disk.config dfaff639-0439-40d7-8f56-5e8068d741cf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:55:48 np0005486808 nova_compute[259627]: 2025-10-14 08:55:48.037 2 DEBUG oslo_concurrency.processutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dfaff639-0439-40d7-8f56-5e8068d741cf/disk.config dfaff639-0439-40d7-8f56-5e8068d741cf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:55:48 np0005486808 nova_compute[259627]: 2025-10-14 08:55:48.038 2 INFO nova.virt.libvirt.driver [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Deleting local config drive /var/lib/nova/instances/dfaff639-0439-40d7-8f56-5e8068d741cf/disk.config because it was imported into RBD.
Oct 14 04:55:48 np0005486808 systemd-machined[214636]: New machine qemu-8-instance-00000008.
Oct 14 04:55:48 np0005486808 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Oct 14 04:55:49 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1112: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 14 KiB/s wr, 41 op/s
Oct 14 04:55:49 np0005486808 nova_compute[259627]: 2025-10-14 08:55:49.581 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432149.580854, dfaff639-0439-40d7-8f56-5e8068d741cf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 04:55:49 np0005486808 nova_compute[259627]: 2025-10-14 08:55:49.583 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] VM Resumed (Lifecycle Event)
Oct 14 04:55:49 np0005486808 nova_compute[259627]: 2025-10-14 08:55:49.588 2 DEBUG nova.compute.manager [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 04:55:49 np0005486808 nova_compute[259627]: 2025-10-14 08:55:49.589 2 DEBUG nova.virt.libvirt.driver [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 04:55:49 np0005486808 nova_compute[259627]: 2025-10-14 08:55:49.595 2 INFO nova.virt.libvirt.driver [-] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Instance spawned successfully.
Oct 14 04:55:49 np0005486808 nova_compute[259627]: 2025-10-14 08:55:49.596 2 DEBUG nova.virt.libvirt.driver [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 04:55:49 np0005486808 nova_compute[259627]: 2025-10-14 08:55:49.607 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 04:55:49 np0005486808 nova_compute[259627]: 2025-10-14 08:55:49.612 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 04:55:49 np0005486808 nova_compute[259627]: 2025-10-14 08:55:49.624 2 DEBUG nova.virt.libvirt.driver [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 04:55:49 np0005486808 nova_compute[259627]: 2025-10-14 08:55:49.625 2 DEBUG nova.virt.libvirt.driver [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 04:55:49 np0005486808 nova_compute[259627]: 2025-10-14 08:55:49.626 2 DEBUG nova.virt.libvirt.driver [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 04:55:49 np0005486808 nova_compute[259627]: 2025-10-14 08:55:49.626 2 DEBUG nova.virt.libvirt.driver [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 04:55:49 np0005486808 nova_compute[259627]: 2025-10-14 08:55:49.627 2 DEBUG nova.virt.libvirt.driver [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 04:55:49 np0005486808 nova_compute[259627]: 2025-10-14 08:55:49.628 2 DEBUG nova.virt.libvirt.driver [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 04:55:49 np0005486808 nova_compute[259627]: 2025-10-14 08:55:49.687 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 04:55:49 np0005486808 nova_compute[259627]: 2025-10-14 08:55:49.688 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432149.5831437, dfaff639-0439-40d7-8f56-5e8068d741cf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 04:55:49 np0005486808 nova_compute[259627]: 2025-10-14 08:55:49.689 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] VM Started (Lifecycle Event)
Oct 14 04:55:49 np0005486808 nova_compute[259627]: 2025-10-14 08:55:49.714 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 04:55:49 np0005486808 nova_compute[259627]: 2025-10-14 08:55:49.719 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
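The sync_power_state lines above compare the integer power state recorded in the Nova database (0) against what libvirt reports (1). As a minimal sketch, the mapping below mirrors the constants in `nova/compute/power_state.py` for recent Nova releases; treat the exact values as an assumption to verify against your deployed version.

```python
# Decode the power-state integers seen in the sync_power_state log lines.
# Values mirror nova/compute/power_state.py in recent Nova releases
# (assumption: verify against the Nova version actually deployed).
POWER_STATE = {
    0: "NOSTATE",
    1: "RUNNING",
    3: "PAUSED",
    4: "SHUTDOWN",
    6: "CRASHED",
    7: "SUSPENDED",
}

db_power_state, vm_power_state = 0, 1  # taken from the log line above
print(POWER_STATE[db_power_state], "->", POWER_STATE[vm_power_state])
```

The DB still says NOSTATE while the hypervisor reports RUNNING, which is why sync_power_state defers: the in-flight spawning task will reconcile the record, so the handler logs "pending task ... Skip." instead of forcing a state change.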
Oct 14 04:55:49 np0005486808 nova_compute[259627]: 2025-10-14 08:55:49.743 2 INFO nova.compute.manager [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Took 4.02 seconds to spawn the instance on the hypervisor.
Oct 14 04:55:49 np0005486808 nova_compute[259627]: 2025-10-14 08:55:49.744 2 DEBUG nova.compute.manager [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 04:55:49 np0005486808 nova_compute[259627]: 2025-10-14 08:55:49.748 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 04:55:49 np0005486808 nova_compute[259627]: 2025-10-14 08:55:49.833 2 INFO nova.compute.manager [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Took 5.03 seconds to build instance.
Oct 14 04:55:49 np0005486808 nova_compute[259627]: 2025-10-14 08:55:49.858 2 DEBUG oslo_concurrency.lockutils [None req-16ad1e40-f7e8-47f9-9d93-2345f26b5fa8 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Lock "dfaff639-0439-40d7-8f56-5e8068d741cf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.123s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
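The oslo.concurrency lockutils lines follow a fixed pattern (Acquiring / acquired with a `waited` time, released with a `held` time), which makes lock hold durations easy to extract when triaging slow builds. A minimal sketch, with the regex derived from the exact line format shown in this log:

```python
import re

# Matches the oslo_concurrency.lockutils "released" lines seen above, e.g.
# Lock "dfaff639-..." "released" by "nova.compute.manager...." :: held 5.123s
RELEASED_RE = re.compile(
    r'Lock "(?P<lock>[^"]+)" "released" by "(?P<holder>[^"]+)" :: held (?P<held>[\d.]+)s'
)

def lock_hold_times(lines):
    """Yield (lock_name, holder, seconds_held) for each lock release in a log stream."""
    for line in lines:
        m = RELEASED_RE.search(line)
        if m:
            yield m.group("lock"), m.group("holder"), float(m.group("held"))

sample = (
    'Lock "dfaff639-0439-40d7-8f56-5e8068d741cf" "released" by '
    '"nova.compute.manager.ComputeManager.build_and_run_instance.<locals>.'
    '_locked_do_build_and_run_instance" :: held 5.123s'
)
print(list(lock_hold_times([sample])))
```

Sorting the output by the third field quickly surfaces the longest-held locks, such as the 5.123 s per-instance build lock released here.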
Oct 14 04:55:50 np0005486808 nova_compute[259627]: 2025-10-14 08:55:50.273 2 DEBUG nova.compute.manager [None req-875931fc-c8e7-4d31-a31a-576f8b504865 abec7e84696f45b39425bd6626415ef8 3dd4f80f8f3246faa638ef3cd796ed26 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 04:55:50 np0005486808 nova_compute[259627]: 2025-10-14 08:55:50.277 2 INFO nova.compute.manager [None req-875931fc-c8e7-4d31-a31a-576f8b504865 abec7e84696f45b39425bd6626415ef8 3dd4f80f8f3246faa638ef3cd796ed26 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Retrieving diagnostics
Oct 14 04:55:50 np0005486808 nova_compute[259627]: 2025-10-14 08:55:50.453 2 DEBUG oslo_concurrency.lockutils [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Acquiring lock "dfaff639-0439-40d7-8f56-5e8068d741cf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:55:50 np0005486808 nova_compute[259627]: 2025-10-14 08:55:50.454 2 DEBUG oslo_concurrency.lockutils [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Lock "dfaff639-0439-40d7-8f56-5e8068d741cf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:55:50 np0005486808 nova_compute[259627]: 2025-10-14 08:55:50.454 2 DEBUG oslo_concurrency.lockutils [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Acquiring lock "dfaff639-0439-40d7-8f56-5e8068d741cf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:55:50 np0005486808 nova_compute[259627]: 2025-10-14 08:55:50.454 2 DEBUG oslo_concurrency.lockutils [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Lock "dfaff639-0439-40d7-8f56-5e8068d741cf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:55:50 np0005486808 nova_compute[259627]: 2025-10-14 08:55:50.454 2 DEBUG oslo_concurrency.lockutils [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Lock "dfaff639-0439-40d7-8f56-5e8068d741cf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:55:50 np0005486808 nova_compute[259627]: 2025-10-14 08:55:50.455 2 INFO nova.compute.manager [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Terminating instance
Oct 14 04:55:50 np0005486808 nova_compute[259627]: 2025-10-14 08:55:50.456 2 DEBUG oslo_concurrency.lockutils [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Acquiring lock "refresh_cache-dfaff639-0439-40d7-8f56-5e8068d741cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 04:55:50 np0005486808 nova_compute[259627]: 2025-10-14 08:55:50.456 2 DEBUG oslo_concurrency.lockutils [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Acquired lock "refresh_cache-dfaff639-0439-40d7-8f56-5e8068d741cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 04:55:50 np0005486808 nova_compute[259627]: 2025-10-14 08:55:50.456 2 DEBUG nova.network.neutron [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 04:55:50 np0005486808 nova_compute[259627]: 2025-10-14 08:55:50.767 2 DEBUG nova.network.neutron [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 04:55:50 np0005486808 nova_compute[259627]: 2025-10-14 08:55:50.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:55:51 np0005486808 nova_compute[259627]: 2025-10-14 08:55:51.036 2 DEBUG nova.network.neutron [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 04:55:51 np0005486808 nova_compute[259627]: 2025-10-14 08:55:51.061 2 DEBUG oslo_concurrency.lockutils [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Releasing lock "refresh_cache-dfaff639-0439-40d7-8f56-5e8068d741cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 04:55:51 np0005486808 nova_compute[259627]: 2025-10-14 08:55:51.062 2 DEBUG nova.compute.manager [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 04:55:51 np0005486808 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Oct 14 04:55:51 np0005486808 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 2.963s CPU time.
Oct 14 04:55:51 np0005486808 systemd-machined[214636]: Machine qemu-8-instance-00000008 terminated.
Oct 14 04:55:51 np0005486808 nova_compute[259627]: 2025-10-14 08:55:51.283 2 INFO nova.virt.libvirt.driver [-] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Instance destroyed successfully.
Oct 14 04:55:51 np0005486808 nova_compute[259627]: 2025-10-14 08:55:51.283 2 DEBUG nova.objects.instance [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Lazy-loading 'resources' on Instance uuid dfaff639-0439-40d7-8f56-5e8068d741cf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 04:55:51 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1113: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 455 KiB/s rd, 1.8 MiB/s wr, 92 op/s
Oct 14 04:55:51 np0005486808 nova_compute[259627]: 2025-10-14 08:55:51.688 2 INFO nova.virt.libvirt.driver [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Deleting instance files /var/lib/nova/instances/dfaff639-0439-40d7-8f56-5e8068d741cf_del
Oct 14 04:55:51 np0005486808 nova_compute[259627]: 2025-10-14 08:55:51.689 2 INFO nova.virt.libvirt.driver [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Deletion of /var/lib/nova/instances/dfaff639-0439-40d7-8f56-5e8068d741cf_del complete
Oct 14 04:55:51 np0005486808 nova_compute[259627]: 2025-10-14 08:55:51.766 2 INFO nova.compute.manager [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Took 0.70 seconds to destroy the instance on the hypervisor.
Oct 14 04:55:51 np0005486808 nova_compute[259627]: 2025-10-14 08:55:51.767 2 DEBUG oslo.service.loopingcall [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 04:55:51 np0005486808 nova_compute[259627]: 2025-10-14 08:55:51.768 2 DEBUG nova.compute.manager [-] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 04:55:51 np0005486808 nova_compute[259627]: 2025-10-14 08:55:51.768 2 DEBUG nova.network.neutron [-] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 04:55:51 np0005486808 nova_compute[259627]: 2025-10-14 08:55:51.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:55:51 np0005486808 nova_compute[259627]: 2025-10-14 08:55:51.924 2 DEBUG nova.network.neutron [-] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 04:55:51 np0005486808 nova_compute[259627]: 2025-10-14 08:55:51.945 2 DEBUG nova.network.neutron [-] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 04:55:51 np0005486808 nova_compute[259627]: 2025-10-14 08:55:51.976 2 INFO nova.compute.manager [-] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Took 0.21 seconds to deallocate network for instance.
Oct 14 04:55:52 np0005486808 nova_compute[259627]: 2025-10-14 08:55:52.028 2 DEBUG oslo_concurrency.lockutils [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:55:52 np0005486808 nova_compute[259627]: 2025-10-14 08:55:52.028 2 DEBUG oslo_concurrency.lockutils [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:55:52 np0005486808 nova_compute[259627]: 2025-10-14 08:55:52.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:55:52 np0005486808 nova_compute[259627]: 2025-10-14 08:55:52.151 2 DEBUG oslo_concurrency.processutils [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:55:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:55:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:55:52 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1808018943' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:55:52 np0005486808 nova_compute[259627]: 2025-10-14 08:55:52.595 2 DEBUG oslo_concurrency.processutils [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:55:52 np0005486808 nova_compute[259627]: 2025-10-14 08:55:52.601 2 DEBUG nova.compute.provider_tree [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 04:55:52 np0005486808 nova_compute[259627]: 2025-10-14 08:55:52.614 2 DEBUG nova.scheduler.client.report [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 04:55:52 np0005486808 nova_compute[259627]: 2025-10-14 08:55:52.644 2 DEBUG oslo_concurrency.lockutils [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:55:52 np0005486808 nova_compute[259627]: 2025-10-14 08:55:52.673 2 INFO nova.scheduler.client.report [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Deleted allocations for instance dfaff639-0439-40d7-8f56-5e8068d741cf
Oct 14 04:55:52 np0005486808 nova_compute[259627]: 2025-10-14 08:55:52.768 2 DEBUG oslo_concurrency.lockutils [None req-12f292ea-564f-4287-86aa-57f58ae9f90c 590d8c12b92e491291f294d8ab3f0b24 82278e5903c445ef88008b2a13537a99 - - default default] Lock "dfaff639-0439-40d7-8f56-5e8068d741cf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.314s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
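The compute manager brackets each phase with "Took N seconds to ..." INFO lines (4.02 s spawn, 5.03 s build, 0.70 s destroy, 0.21 s network deallocation in this run). As a minimal sketch based only on the line format visible here, these can be rolled up into a per-instance timing summary:

```python
import re

# Matches nova.compute.manager phase-timing lines like:
# [instance: <uuid>] Took 5.03 seconds to build instance.
TOOK_RE = re.compile(
    r'\[instance: (?P<uuid>[0-9a-f-]+)\] Took (?P<secs>[\d.]+) seconds to (?P<phase>[^.]+)'
)

def phase_timings(lines):
    """Return {instance_uuid: [(phase, seconds), ...]} from compute-manager log lines."""
    out = {}
    for line in lines:
        m = TOOK_RE.search(line)
        if m:
            out.setdefault(m.group("uuid"), []).append(
                (m.group("phase").strip(), float(m.group("secs")))
            )
    return out

logs = [
    "[instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Took 4.02 seconds to spawn the instance on the hypervisor.",
    "[instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Took 5.03 seconds to build instance.",
    "[instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Took 0.70 seconds to destroy the instance on the hypervisor.",
]
print(phase_timings(logs))
```

Comparing spawn time against total build time shows how much of the build went to pre-spawn work (claims, network, block devices) versus the hypervisor itself.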
Oct 14 04:55:52 np0005486808 nova_compute[259627]: 2025-10-14 08:55:52.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:55:53 np0005486808 nova_compute[259627]: 2025-10-14 08:55:53.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:55:53 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1114: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 455 KiB/s rd, 1.8 MiB/s wr, 91 op/s
Oct 14 04:55:55 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1115: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 167 op/s
Oct 14 04:55:55 np0005486808 nova_compute[259627]: 2025-10-14 08:55:55.632 2 DEBUG oslo_concurrency.lockutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Acquiring lock "726e9e21-4f40-48aa-947a-95a78db4dbf3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:55:55 np0005486808 nova_compute[259627]: 2025-10-14 08:55:55.634 2 DEBUG oslo_concurrency.lockutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Lock "726e9e21-4f40-48aa-947a-95a78db4dbf3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:55:55 np0005486808 nova_compute[259627]: 2025-10-14 08:55:55.654 2 DEBUG nova.compute.manager [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 04:55:55 np0005486808 nova_compute[259627]: 2025-10-14 08:55:55.734 2 DEBUG oslo_concurrency.lockutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:55:55 np0005486808 nova_compute[259627]: 2025-10-14 08:55:55.735 2 DEBUG oslo_concurrency.lockutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:55:55 np0005486808 nova_compute[259627]: 2025-10-14 08:55:55.741 2 DEBUG nova.virt.hardware [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 04:55:55 np0005486808 nova_compute[259627]: 2025-10-14 08:55:55.741 2 INFO nova.compute.claims [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Claim successful on node compute-0.ctlplane.example.com
Oct 14 04:55:55 np0005486808 nova_compute[259627]: 2025-10-14 08:55:55.762 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432140.7610755, b063b9bf-1f88-47d3-a838-a4bcfc5eeecc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 04:55:55 np0005486808 nova_compute[259627]: 2025-10-14 08:55:55.762 2 INFO nova.compute.manager [-] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] VM Stopped (Lifecycle Event)
Oct 14 04:55:55 np0005486808 nova_compute[259627]: 2025-10-14 08:55:55.801 2 DEBUG nova.compute.manager [None req-5ff0cd66-dd1f-4f3e-bcec-5bac6270d4c3 - - - - - -] [instance: b063b9bf-1f88-47d3-a838-a4bcfc5eeecc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 04:55:55 np0005486808 nova_compute[259627]: 2025-10-14 08:55:55.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:55:55 np0005486808 nova_compute[259627]: 2025-10-14 08:55:55.868 2 DEBUG oslo_concurrency.processutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:55:56 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:55:56 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1344160010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:55:56 np0005486808 nova_compute[259627]: 2025-10-14 08:55:56.375 2 DEBUG oslo_concurrency.processutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:55:56 np0005486808 nova_compute[259627]: 2025-10-14 08:55:56.382 2 DEBUG nova.compute.provider_tree [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 04:55:56 np0005486808 nova_compute[259627]: 2025-10-14 08:55:56.406 2 DEBUG nova.scheduler.client.report [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
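The provider inventory repeated in these lines determines how much capacity Placement will actually hand out: per resource class, allocatable = (total - reserved) * allocation_ratio. A small worked sketch using the exact figures logged above:

```python
# Effective schedulable capacity from the provider inventory logged above.
# Per resource class: allocatable = (total - reserved) * allocation_ratio.
inventory = {
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
}

def allocatable(inv):
    """Capacity the Placement service can allocate for each resource class."""
    return {
        rc: (v["total"] - v["reserved"]) * v["allocation_ratio"]
        for rc, v in inv.items()
    }

print(allocatable(inventory))
# MEMORY_MB: (7680 - 512) * 1.0, VCPU: (8 - 0) * 4.0, DISK_GB: (59 - 1) * 0.9
```

So this host advertises 7168 MB of RAM with no overcommit, 32 schedulable vCPUs on 8 physical ones (4.0x overcommit), and roughly 52 GB of disk after the 0.9 safety ratio.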
Oct 14 04:55:56 np0005486808 nova_compute[259627]: 2025-10-14 08:55:56.440 2 DEBUG oslo_concurrency.lockutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:55:56 np0005486808 nova_compute[259627]: 2025-10-14 08:55:56.441 2 DEBUG nova.compute.manager [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 04:55:56 np0005486808 nova_compute[259627]: 2025-10-14 08:55:56.498 2 DEBUG nova.compute.manager [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 04:55:56 np0005486808 nova_compute[259627]: 2025-10-14 08:55:56.499 2 DEBUG nova.network.neutron [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 04:55:56 np0005486808 nova_compute[259627]: 2025-10-14 08:55:56.516 2 INFO nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 04:55:56 np0005486808 nova_compute[259627]: 2025-10-14 08:55:56.532 2 DEBUG nova.compute.manager [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 04:55:56 np0005486808 nova_compute[259627]: 2025-10-14 08:55:56.633 2 DEBUG nova.compute.manager [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 04:55:56 np0005486808 nova_compute[259627]: 2025-10-14 08:55:56.635 2 DEBUG nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 04:55:56 np0005486808 nova_compute[259627]: 2025-10-14 08:55:56.636 2 INFO nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Creating image(s)
Oct 14 04:55:56 np0005486808 nova_compute[259627]: 2025-10-14 08:55:56.670 2 DEBUG nova.storage.rbd_utils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] rbd image 726e9e21-4f40-48aa-947a-95a78db4dbf3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:55:56 np0005486808 podman[280221]: 2025-10-14 08:55:56.678380808 +0000 UTC m=+0.085811278 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 14 04:55:56 np0005486808 podman[280220]: 2025-10-14 08:55:56.692003114 +0000 UTC m=+0.094881121 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 14 04:55:56 np0005486808 nova_compute[259627]: 2025-10-14 08:55:56.709 2 DEBUG nova.storage.rbd_utils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] rbd image 726e9e21-4f40-48aa-947a-95a78db4dbf3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:55:56 np0005486808 nova_compute[259627]: 2025-10-14 08:55:56.731 2 DEBUG nova.storage.rbd_utils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] rbd image 726e9e21-4f40-48aa-947a-95a78db4dbf3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:55:56 np0005486808 nova_compute[259627]: 2025-10-14 08:55:56.734 2 DEBUG oslo_concurrency.processutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:55:56 np0005486808 nova_compute[259627]: 2025-10-14 08:55:56.817 2 DEBUG oslo_concurrency.processutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:55:56 np0005486808 nova_compute[259627]: 2025-10-14 08:55:56.819 2 DEBUG oslo_concurrency.lockutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:55:56 np0005486808 nova_compute[259627]: 2025-10-14 08:55:56.820 2 DEBUG oslo_concurrency.lockutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:55:56 np0005486808 nova_compute[259627]: 2025-10-14 08:55:56.821 2 DEBUG oslo_concurrency.lockutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:55:56 np0005486808 nova_compute[259627]: 2025-10-14 08:55:56.854 2 DEBUG nova.storage.rbd_utils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] rbd image 726e9e21-4f40-48aa-947a-95a78db4dbf3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:55:56 np0005486808 nova_compute[259627]: 2025-10-14 08:55:56.858 2 DEBUG oslo_concurrency.processutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 726e9e21-4f40-48aa-947a-95a78db4dbf3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:55:56 np0005486808 nova_compute[259627]: 2025-10-14 08:55:56.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:55:56 np0005486808 nova_compute[259627]: 2025-10-14 08:55:56.887 2 DEBUG nova.policy [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'efc7f5a2c6324662956767ff381c6b16', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '11f8dfac5e5c4bfe9bccae4608ea8d51', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 04:55:57 np0005486808 nova_compute[259627]: 2025-10-14 08:55:57.123 2 DEBUG oslo_concurrency.processutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 726e9e21-4f40-48aa-947a-95a78db4dbf3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.265s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:55:57 np0005486808 nova_compute[259627]: 2025-10-14 08:55:57.210 2 DEBUG nova.storage.rbd_utils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] resizing rbd image 726e9e21-4f40-48aa-947a-95a78db4dbf3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 04:55:57 np0005486808 nova_compute[259627]: 2025-10-14 08:55:57.310 2 DEBUG nova.objects.instance [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Lazy-loading 'migration_context' on Instance uuid 726e9e21-4f40-48aa-947a-95a78db4dbf3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:55:57 np0005486808 nova_compute[259627]: 2025-10-14 08:55:57.321 2 DEBUG nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 04:55:57 np0005486808 nova_compute[259627]: 2025-10-14 08:55:57.322 2 DEBUG nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Ensure instance console log exists: /var/lib/nova/instances/726e9e21-4f40-48aa-947a-95a78db4dbf3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 04:55:57 np0005486808 nova_compute[259627]: 2025-10-14 08:55:57.322 2 DEBUG oslo_concurrency.lockutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:55:57 np0005486808 nova_compute[259627]: 2025-10-14 08:55:57.323 2 DEBUG oslo_concurrency.lockutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:55:57 np0005486808 nova_compute[259627]: 2025-10-14 08:55:57.323 2 DEBUG oslo_concurrency.lockutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:55:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:55:57 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct 14 04:55:57 np0005486808 nova_compute[259627]: 2025-10-14 08:55:57.851 2 DEBUG nova.network.neutron [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Successfully created port: d5a0b41a-72da-4c0e-a568-fdba9541e479 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 04:55:58 np0005486808 nova_compute[259627]: 2025-10-14 08:55:58.879 2 DEBUG nova.network.neutron [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Successfully updated port: d5a0b41a-72da-4c0e-a568-fdba9541e479 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 04:55:58 np0005486808 nova_compute[259627]: 2025-10-14 08:55:58.891 2 DEBUG oslo_concurrency.lockutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Acquiring lock "refresh_cache-726e9e21-4f40-48aa-947a-95a78db4dbf3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:55:58 np0005486808 nova_compute[259627]: 2025-10-14 08:55:58.891 2 DEBUG oslo_concurrency.lockutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Acquired lock "refresh_cache-726e9e21-4f40-48aa-947a-95a78db4dbf3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:55:58 np0005486808 nova_compute[259627]: 2025-10-14 08:55:58.891 2 DEBUG nova.network.neutron [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 04:55:59 np0005486808 nova_compute[259627]: 2025-10-14 08:55:59.087 2 DEBUG nova.compute.manager [req-40d2c1c8-a27e-4c2c-831c-72968dd2bcb9 req-07f18681-bf80-4748-9096-0339de77ef2e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Received event network-changed-d5a0b41a-72da-4c0e-a568-fdba9541e479 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:55:59 np0005486808 nova_compute[259627]: 2025-10-14 08:55:59.087 2 DEBUG nova.compute.manager [req-40d2c1c8-a27e-4c2c-831c-72968dd2bcb9 req-07f18681-bf80-4748-9096-0339de77ef2e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Refreshing instance network info cache due to event network-changed-d5a0b41a-72da-4c0e-a568-fdba9541e479. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 04:55:59 np0005486808 nova_compute[259627]: 2025-10-14 08:55:59.088 2 DEBUG oslo_concurrency.lockutils [req-40d2c1c8-a27e-4c2c-831c-72968dd2bcb9 req-07f18681-bf80-4748-9096-0339de77ef2e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-726e9e21-4f40-48aa-947a-95a78db4dbf3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:55:59 np0005486808 nova_compute[259627]: 2025-10-14 08:55:59.173 2 DEBUG nova.network.neutron [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 04:55:59 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct 14 04:56:00 np0005486808 nova_compute[259627]: 2025-10-14 08:56:00.395 2 DEBUG nova.network.neutron [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Updating instance_info_cache with network_info: [{"id": "d5a0b41a-72da-4c0e-a568-fdba9541e479", "address": "fa:16:3e:8c:0c:0a", "network": {"id": "9b410fba-e0a4-4544-b437-e1dbffa6da06", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-606020585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "11f8dfac5e5c4bfe9bccae4608ea8d51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5a0b41a-72", "ovs_interfaceid": "d5a0b41a-72da-4c0e-a568-fdba9541e479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:56:00 np0005486808 nova_compute[259627]: 2025-10-14 08:56:00.418 2 DEBUG oslo_concurrency.lockutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Releasing lock "refresh_cache-726e9e21-4f40-48aa-947a-95a78db4dbf3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:56:00 np0005486808 nova_compute[259627]: 2025-10-14 08:56:00.419 2 DEBUG nova.compute.manager [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Instance network_info: |[{"id": "d5a0b41a-72da-4c0e-a568-fdba9541e479", "address": "fa:16:3e:8c:0c:0a", "network": {"id": "9b410fba-e0a4-4544-b437-e1dbffa6da06", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-606020585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "11f8dfac5e5c4bfe9bccae4608ea8d51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5a0b41a-72", "ovs_interfaceid": "d5a0b41a-72da-4c0e-a568-fdba9541e479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 04:56:00 np0005486808 nova_compute[259627]: 2025-10-14 08:56:00.419 2 DEBUG oslo_concurrency.lockutils [req-40d2c1c8-a27e-4c2c-831c-72968dd2bcb9 req-07f18681-bf80-4748-9096-0339de77ef2e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-726e9e21-4f40-48aa-947a-95a78db4dbf3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:56:00 np0005486808 nova_compute[259627]: 2025-10-14 08:56:00.419 2 DEBUG nova.network.neutron [req-40d2c1c8-a27e-4c2c-831c-72968dd2bcb9 req-07f18681-bf80-4748-9096-0339de77ef2e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Refreshing network info cache for port d5a0b41a-72da-4c0e-a568-fdba9541e479 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 04:56:00 np0005486808 nova_compute[259627]: 2025-10-14 08:56:00.421 2 DEBUG nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Start _get_guest_xml network_info=[{"id": "d5a0b41a-72da-4c0e-a568-fdba9541e479", "address": "fa:16:3e:8c:0c:0a", "network": {"id": "9b410fba-e0a4-4544-b437-e1dbffa6da06", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-606020585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "11f8dfac5e5c4bfe9bccae4608ea8d51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5a0b41a-72", "ovs_interfaceid": "d5a0b41a-72da-4c0e-a568-fdba9541e479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 04:56:00 np0005486808 nova_compute[259627]: 2025-10-14 08:56:00.426 2 WARNING nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:56:00 np0005486808 nova_compute[259627]: 2025-10-14 08:56:00.431 2 DEBUG nova.virt.libvirt.host [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 04:56:00 np0005486808 nova_compute[259627]: 2025-10-14 08:56:00.431 2 DEBUG nova.virt.libvirt.host [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 04:56:00 np0005486808 nova_compute[259627]: 2025-10-14 08:56:00.438 2 DEBUG nova.virt.libvirt.host [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 04:56:00 np0005486808 nova_compute[259627]: 2025-10-14 08:56:00.439 2 DEBUG nova.virt.libvirt.host [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 04:56:00 np0005486808 nova_compute[259627]: 2025-10-14 08:56:00.439 2 DEBUG nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 04:56:00 np0005486808 nova_compute[259627]: 2025-10-14 08:56:00.440 2 DEBUG nova.virt.hardware [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 04:56:00 np0005486808 nova_compute[259627]: 2025-10-14 08:56:00.440 2 DEBUG nova.virt.hardware [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 04:56:00 np0005486808 nova_compute[259627]: 2025-10-14 08:56:00.441 2 DEBUG nova.virt.hardware [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 04:56:00 np0005486808 nova_compute[259627]: 2025-10-14 08:56:00.441 2 DEBUG nova.virt.hardware [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 04:56:00 np0005486808 nova_compute[259627]: 2025-10-14 08:56:00.441 2 DEBUG nova.virt.hardware [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 04:56:00 np0005486808 nova_compute[259627]: 2025-10-14 08:56:00.441 2 DEBUG nova.virt.hardware [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 04:56:00 np0005486808 nova_compute[259627]: 2025-10-14 08:56:00.442 2 DEBUG nova.virt.hardware [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 04:56:00 np0005486808 nova_compute[259627]: 2025-10-14 08:56:00.442 2 DEBUG nova.virt.hardware [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 04:56:00 np0005486808 nova_compute[259627]: 2025-10-14 08:56:00.442 2 DEBUG nova.virt.hardware [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 04:56:00 np0005486808 nova_compute[259627]: 2025-10-14 08:56:00.443 2 DEBUG nova.virt.hardware [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 04:56:00 np0005486808 nova_compute[259627]: 2025-10-14 08:56:00.443 2 DEBUG nova.virt.hardware [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 04:56:00 np0005486808 nova_compute[259627]: 2025-10-14 08:56:00.446 2 DEBUG oslo_concurrency.processutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:56:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:56:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/839077312' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:56:00 np0005486808 nova_compute[259627]: 2025-10-14 08:56:00.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:00 np0005486808 nova_compute[259627]: 2025-10-14 08:56:00.845 2 DEBUG oslo_concurrency.processutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.399s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:56:00 np0005486808 nova_compute[259627]: 2025-10-14 08:56:00.880 2 DEBUG nova.storage.rbd_utils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] rbd image 726e9e21-4f40-48aa-947a-95a78db4dbf3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:56:00 np0005486808 nova_compute[259627]: 2025-10-14 08:56:00.885 2 DEBUG oslo_concurrency.processutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:56:01 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:56:01 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2199315589' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:56:01 np0005486808 nova_compute[259627]: 2025-10-14 08:56:01.311 2 DEBUG oslo_concurrency.processutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:56:01 np0005486808 nova_compute[259627]: 2025-10-14 08:56:01.314 2 DEBUG nova.virt.libvirt.vif [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:55:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-1369634349',display_name='tempest-ImagesNegativeTestJSON-server-1369634349',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-1369634349',id=9,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='11f8dfac5e5c4bfe9bccae4608ea8d51',ramdisk_id='',reservation_id='r-ogdi8y87',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-866675267',owner_user_name='tempest-ImagesNegativeTestJSON
-866675267-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:55:56Z,user_data=None,user_id='efc7f5a2c6324662956767ff381c6b16',uuid=726e9e21-4f40-48aa-947a-95a78db4dbf3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d5a0b41a-72da-4c0e-a568-fdba9541e479", "address": "fa:16:3e:8c:0c:0a", "network": {"id": "9b410fba-e0a4-4544-b437-e1dbffa6da06", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-606020585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "11f8dfac5e5c4bfe9bccae4608ea8d51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5a0b41a-72", "ovs_interfaceid": "d5a0b41a-72da-4c0e-a568-fdba9541e479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 04:56:01 np0005486808 nova_compute[259627]: 2025-10-14 08:56:01.315 2 DEBUG nova.network.os_vif_util [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Converting VIF {"id": "d5a0b41a-72da-4c0e-a568-fdba9541e479", "address": "fa:16:3e:8c:0c:0a", "network": {"id": "9b410fba-e0a4-4544-b437-e1dbffa6da06", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-606020585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "11f8dfac5e5c4bfe9bccae4608ea8d51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5a0b41a-72", "ovs_interfaceid": "d5a0b41a-72da-4c0e-a568-fdba9541e479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:56:01 np0005486808 nova_compute[259627]: 2025-10-14 08:56:01.316 2 DEBUG nova.network.os_vif_util [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:0c:0a,bridge_name='br-int',has_traffic_filtering=True,id=d5a0b41a-72da-4c0e-a568-fdba9541e479,network=Network(9b410fba-e0a4-4544-b437-e1dbffa6da06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5a0b41a-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:56:01 np0005486808 nova_compute[259627]: 2025-10-14 08:56:01.318 2 DEBUG nova.objects.instance [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Lazy-loading 'pci_devices' on Instance uuid 726e9e21-4f40-48aa-947a-95a78db4dbf3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:56:01 np0005486808 nova_compute[259627]: 2025-10-14 08:56:01.338 2 DEBUG nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] End _get_guest_xml xml=<domain type="kvm">
Oct 14 04:56:01 np0005486808 nova_compute[259627]:  <uuid>726e9e21-4f40-48aa-947a-95a78db4dbf3</uuid>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:  <name>instance-00000009</name>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 04:56:01 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:      <nova:name>tempest-ImagesNegativeTestJSON-server-1369634349</nova:name>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 08:56:00</nova:creationTime>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 04:56:01 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:        <nova:user uuid="efc7f5a2c6324662956767ff381c6b16">tempest-ImagesNegativeTestJSON-866675267-project-member</nova:user>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:        <nova:project uuid="11f8dfac5e5c4bfe9bccae4608ea8d51">tempest-ImagesNegativeTestJSON-866675267</nova:project>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:        <nova:port uuid="d5a0b41a-72da-4c0e-a568-fdba9541e479">
Oct 14 04:56:01 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <system>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:      <entry name="serial">726e9e21-4f40-48aa-947a-95a78db4dbf3</entry>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:      <entry name="uuid">726e9e21-4f40-48aa-947a-95a78db4dbf3</entry>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    </system>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:  <os>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:  </os>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:  <features>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:  </features>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:  </clock>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:  <devices>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 04:56:01 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/726e9e21-4f40-48aa-947a-95a78db4dbf3_disk">
Oct 14 04:56:01 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:56:01 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 04:56:01 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/726e9e21-4f40-48aa-947a-95a78db4dbf3_disk.config">
Oct 14 04:56:01 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:56:01 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 04:56:01 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:8c:0c:0a"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:      <target dev="tapd5a0b41a-72"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    </interface>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 04:56:01 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/726e9e21-4f40-48aa-947a-95a78db4dbf3/console.log" append="off"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    </serial>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <video>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    </video>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 04:56:01 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    </rng>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 04:56:01 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 04:56:01 np0005486808 nova_compute[259627]:  </devices>
Oct 14 04:56:01 np0005486808 nova_compute[259627]: </domain>
Oct 14 04:56:01 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 04:56:01 np0005486808 nova_compute[259627]: 2025-10-14 08:56:01.340 2 DEBUG nova.compute.manager [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Preparing to wait for external event network-vif-plugged-d5a0b41a-72da-4c0e-a568-fdba9541e479 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 04:56:01 np0005486808 nova_compute[259627]: 2025-10-14 08:56:01.340 2 DEBUG oslo_concurrency.lockutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Acquiring lock "726e9e21-4f40-48aa-947a-95a78db4dbf3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:56:01 np0005486808 nova_compute[259627]: 2025-10-14 08:56:01.341 2 DEBUG oslo_concurrency.lockutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Lock "726e9e21-4f40-48aa-947a-95a78db4dbf3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:56:01 np0005486808 nova_compute[259627]: 2025-10-14 08:56:01.342 2 DEBUG oslo_concurrency.lockutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Lock "726e9e21-4f40-48aa-947a-95a78db4dbf3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:56:01 np0005486808 nova_compute[259627]: 2025-10-14 08:56:01.343 2 DEBUG nova.virt.libvirt.vif [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:55:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-1369634349',display_name='tempest-ImagesNegativeTestJSON-server-1369634349',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-1369634349',id=9,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='11f8dfac5e5c4bfe9bccae4608ea8d51',ramdisk_id='',reservation_id='r-ogdi8y87',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-866675267',owner_user_name='tempest-ImagesNegati
veTestJSON-866675267-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:55:56Z,user_data=None,user_id='efc7f5a2c6324662956767ff381c6b16',uuid=726e9e21-4f40-48aa-947a-95a78db4dbf3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d5a0b41a-72da-4c0e-a568-fdba9541e479", "address": "fa:16:3e:8c:0c:0a", "network": {"id": "9b410fba-e0a4-4544-b437-e1dbffa6da06", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-606020585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "11f8dfac5e5c4bfe9bccae4608ea8d51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5a0b41a-72", "ovs_interfaceid": "d5a0b41a-72da-4c0e-a568-fdba9541e479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 04:56:01 np0005486808 nova_compute[259627]: 2025-10-14 08:56:01.343 2 DEBUG nova.network.os_vif_util [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Converting VIF {"id": "d5a0b41a-72da-4c0e-a568-fdba9541e479", "address": "fa:16:3e:8c:0c:0a", "network": {"id": "9b410fba-e0a4-4544-b437-e1dbffa6da06", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-606020585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "11f8dfac5e5c4bfe9bccae4608ea8d51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5a0b41a-72", "ovs_interfaceid": "d5a0b41a-72da-4c0e-a568-fdba9541e479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:56:01 np0005486808 nova_compute[259627]: 2025-10-14 08:56:01.344 2 DEBUG nova.network.os_vif_util [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:0c:0a,bridge_name='br-int',has_traffic_filtering=True,id=d5a0b41a-72da-4c0e-a568-fdba9541e479,network=Network(9b410fba-e0a4-4544-b437-e1dbffa6da06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5a0b41a-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:56:01 np0005486808 nova_compute[259627]: 2025-10-14 08:56:01.345 2 DEBUG os_vif [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:0c:0a,bridge_name='br-int',has_traffic_filtering=True,id=d5a0b41a-72da-4c0e-a568-fdba9541e479,network=Network(9b410fba-e0a4-4544-b437-e1dbffa6da06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5a0b41a-72') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 04:56:01 np0005486808 nova_compute[259627]: 2025-10-14 08:56:01.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:01 np0005486808 nova_compute[259627]: 2025-10-14 08:56:01.347 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:56:01 np0005486808 nova_compute[259627]: 2025-10-14 08:56:01.348 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:56:01 np0005486808 nova_compute[259627]: 2025-10-14 08:56:01.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:01 np0005486808 nova_compute[259627]: 2025-10-14 08:56:01.353 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd5a0b41a-72, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:56:01 np0005486808 nova_compute[259627]: 2025-10-14 08:56:01.354 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd5a0b41a-72, col_values=(('external_ids', {'iface-id': 'd5a0b41a-72da-4c0e-a568-fdba9541e479', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8c:0c:0a', 'vm-uuid': '726e9e21-4f40-48aa-947a-95a78db4dbf3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:56:01 np0005486808 nova_compute[259627]: 2025-10-14 08:56:01.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:01 np0005486808 NetworkManager[44885]: <info>  [1760432161.3580] manager: (tapd5a0b41a-72): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Oct 14 04:56:01 np0005486808 nova_compute[259627]: 2025-10-14 08:56:01.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 04:56:01 np0005486808 nova_compute[259627]: 2025-10-14 08:56:01.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:01 np0005486808 nova_compute[259627]: 2025-10-14 08:56:01.368 2 INFO os_vif [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:0c:0a,bridge_name='br-int',has_traffic_filtering=True,id=d5a0b41a-72da-4c0e-a568-fdba9541e479,network=Network(9b410fba-e0a4-4544-b437-e1dbffa6da06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5a0b41a-72')#033[00m
Oct 14 04:56:01 np0005486808 nova_compute[259627]: 2025-10-14 08:56:01.443 2 DEBUG nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:56:01 np0005486808 nova_compute[259627]: 2025-10-14 08:56:01.444 2 DEBUG nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:56:01 np0005486808 nova_compute[259627]: 2025-10-14 08:56:01.444 2 DEBUG nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] No VIF found with MAC fa:16:3e:8c:0c:0a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 04:56:01 np0005486808 nova_compute[259627]: 2025-10-14 08:56:01.445 2 INFO nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Using config drive#033[00m
Oct 14 04:56:01 np0005486808 nova_compute[259627]: 2025-10-14 08:56:01.474 2 DEBUG nova.storage.rbd_utils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] rbd image 726e9e21-4f40-48aa-947a-95a78db4dbf3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:56:01 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1118: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 154 op/s
Oct 14 04:56:01 np0005486808 nova_compute[259627]: 2025-10-14 08:56:01.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:56:01 np0005486808 nova_compute[259627]: 2025-10-14 08:56:01.891 2 INFO nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Creating config drive at /var/lib/nova/instances/726e9e21-4f40-48aa-947a-95a78db4dbf3/disk.config
Oct 14 04:56:01 np0005486808 nova_compute[259627]: 2025-10-14 08:56:01.902 2 DEBUG oslo_concurrency.processutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/726e9e21-4f40-48aa-947a-95a78db4dbf3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe2_qssvt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:56:02 np0005486808 nova_compute[259627]: 2025-10-14 08:56:02.039 2 DEBUG oslo_concurrency.processutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/726e9e21-4f40-48aa-947a-95a78db4dbf3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe2_qssvt" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:56:02 np0005486808 nova_compute[259627]: 2025-10-14 08:56:02.066 2 DEBUG nova.storage.rbd_utils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] rbd image 726e9e21-4f40-48aa-947a-95a78db4dbf3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:56:02 np0005486808 nova_compute[259627]: 2025-10-14 08:56:02.071 2 DEBUG oslo_concurrency.processutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/726e9e21-4f40-48aa-947a-95a78db4dbf3/disk.config 726e9e21-4f40-48aa-947a-95a78db4dbf3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:56:02 np0005486808 nova_compute[259627]: 2025-10-14 08:56:02.221 2 DEBUG nova.network.neutron [req-40d2c1c8-a27e-4c2c-831c-72968dd2bcb9 req-07f18681-bf80-4748-9096-0339de77ef2e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Updated VIF entry in instance network info cache for port d5a0b41a-72da-4c0e-a568-fdba9541e479. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 04:56:02 np0005486808 nova_compute[259627]: 2025-10-14 08:56:02.222 2 DEBUG nova.network.neutron [req-40d2c1c8-a27e-4c2c-831c-72968dd2bcb9 req-07f18681-bf80-4748-9096-0339de77ef2e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Updating instance_info_cache with network_info: [{"id": "d5a0b41a-72da-4c0e-a568-fdba9541e479", "address": "fa:16:3e:8c:0c:0a", "network": {"id": "9b410fba-e0a4-4544-b437-e1dbffa6da06", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-606020585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "11f8dfac5e5c4bfe9bccae4608ea8d51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5a0b41a-72", "ovs_interfaceid": "d5a0b41a-72da-4c0e-a568-fdba9541e479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 04:56:02 np0005486808 nova_compute[259627]: 2025-10-14 08:56:02.242 2 DEBUG oslo_concurrency.lockutils [req-40d2c1c8-a27e-4c2c-831c-72968dd2bcb9 req-07f18681-bf80-4748-9096-0339de77ef2e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-726e9e21-4f40-48aa-947a-95a78db4dbf3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 04:56:02 np0005486808 nova_compute[259627]: 2025-10-14 08:56:02.244 2 DEBUG oslo_concurrency.processutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/726e9e21-4f40-48aa-947a-95a78db4dbf3/disk.config 726e9e21-4f40-48aa-947a-95a78db4dbf3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:56:02 np0005486808 nova_compute[259627]: 2025-10-14 08:56:02.244 2 INFO nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Deleting local config drive /var/lib/nova/instances/726e9e21-4f40-48aa-947a-95a78db4dbf3/disk.config because it was imported into RBD.
Oct 14 04:56:02 np0005486808 kernel: tapd5a0b41a-72: entered promiscuous mode
Oct 14 04:56:02 np0005486808 nova_compute[259627]: 2025-10-14 08:56:02.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:56:02 np0005486808 ovn_controller[152662]: 2025-10-14T08:56:02Z|00069|binding|INFO|Claiming lport d5a0b41a-72da-4c0e-a568-fdba9541e479 for this chassis.
Oct 14 04:56:02 np0005486808 ovn_controller[152662]: 2025-10-14T08:56:02Z|00070|binding|INFO|d5a0b41a-72da-4c0e-a568-fdba9541e479: Claiming fa:16:3e:8c:0c:0a 10.100.0.9
Oct 14 04:56:02 np0005486808 NetworkManager[44885]: <info>  [1760432162.2947] manager: (tapd5a0b41a-72): new Tun device (/org/freedesktop/NetworkManager/Devices/44)
Oct 14 04:56:02 np0005486808 nova_compute[259627]: 2025-10-14 08:56:02.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.308 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:0c:0a 10.100.0.9'], port_security=['fa:16:3e:8c:0c:0a 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '726e9e21-4f40-48aa-947a-95a78db4dbf3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9b410fba-e0a4-4544-b437-e1dbffa6da06', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '11f8dfac5e5c4bfe9bccae4608ea8d51', 'neutron:revision_number': '2', 'neutron:security_group_ids': '109e5b33-46b2-4d5e-adaf-e6f9890b6610', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8cb3f08b-f3e8-45d0-921c-e2283d0c50c8, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=d5a0b41a-72da-4c0e-a568-fdba9541e479) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.310 162547 INFO neutron.agent.ovn.metadata.agent [-] Port d5a0b41a-72da-4c0e-a568-fdba9541e479 in datapath 9b410fba-e0a4-4544-b437-e1dbffa6da06 bound to our chassis
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.312 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9b410fba-e0a4-4544-b437-e1dbffa6da06
Oct 14 04:56:02 np0005486808 systemd-machined[214636]: New machine qemu-9-instance-00000009.
Oct 14 04:56:02 np0005486808 systemd-udevd[280558]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.326 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8d8a47bd-c0f0-40db-af6e-395b69c5e316]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.327 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9b410fba-e1 in ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.328 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9b410fba-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.329 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9ee9bef4-81ff-44d2-bd0a-626bb3728a20]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.329 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[64104582-9802-4261-af49-35c25a505d1f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.344 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[8c8d69aa-118c-4215-932a-c06459ea6b28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 04:56:02 np0005486808 NetworkManager[44885]: <info>  [1760432162.3485] device (tapd5a0b41a-72): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 04:56:02 np0005486808 NetworkManager[44885]: <info>  [1760432162.3501] device (tapd5a0b41a-72): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 04:56:02 np0005486808 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Oct 14 04:56:02 np0005486808 nova_compute[259627]: 2025-10-14 08:56:02.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.373 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[be729019-0914-4e69-a576-55e569d62759]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 04:56:02 np0005486808 ovn_controller[152662]: 2025-10-14T08:56:02Z|00071|binding|INFO|Setting lport d5a0b41a-72da-4c0e-a568-fdba9541e479 ovn-installed in OVS
Oct 14 04:56:02 np0005486808 ovn_controller[152662]: 2025-10-14T08:56:02Z|00072|binding|INFO|Setting lport d5a0b41a-72da-4c0e-a568-fdba9541e479 up in Southbound
Oct 14 04:56:02 np0005486808 nova_compute[259627]: 2025-10-14 08:56:02.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.404 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[954301b5-f0cf-4a97-9b5b-05c1ac1cbe11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 04:56:02 np0005486808 NetworkManager[44885]: <info>  [1760432162.4091] manager: (tap9b410fba-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/45)
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.408 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dc251199-7e10-41cd-82f4-4c1a795257b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 04:56:02 np0005486808 systemd-udevd[280562]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.438 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8da4f188-69d8-4aee-a9b1-87b2a4acf5c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.441 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8f439dad-3a8c-46c5-8780-fa90ddc37184]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 04:56:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:56:02 np0005486808 NetworkManager[44885]: <info>  [1760432162.4651] device (tap9b410fba-e0): carrier: link connected
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.470 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e75f54e8-4c63-47e3-a8c1-815aac2e7f3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.489 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9eedc5ce-a6ad-4bc2-bb82-a213327cbc93]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9b410fba-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:3b:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 594122, 'reachable_time': 26959, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280591, 'error': None, 'target': 'ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.510 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[64e259ad-ee50-49c6-bd4e-1f56cb37a29d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe63:3bd6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 594122, 'tstamp': 594122}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280592, 'error': None, 'target': 'ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.529 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1b0b59b2-17a8-49a9-b152-88dac01afd4f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9b410fba-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:3b:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 594122, 'reachable_time': 26959, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 280593, 'error': None, 'target': 'ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.555 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[060daf68-e8e0-4d81-b9e5-197b0eda9e6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.611 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4a146db4-f1eb-4e9d-ac1e-026279edd6aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.613 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9b410fba-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.613 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.614 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9b410fba-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 04:56:02 np0005486808 nova_compute[259627]: 2025-10-14 08:56:02.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:56:02 np0005486808 kernel: tap9b410fba-e0: entered promiscuous mode
Oct 14 04:56:02 np0005486808 NetworkManager[44885]: <info>  [1760432162.6181] manager: (tap9b410fba-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.621 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9b410fba-e0, col_values=(('external_ids', {'iface-id': '291e43c3-46e3-4e83-b745-e05afcea3301'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 04:56:02 np0005486808 ovn_controller[152662]: 2025-10-14T08:56:02Z|00073|binding|INFO|Releasing lport 291e43c3-46e3-4e83-b745-e05afcea3301 from this chassis (sb_readonly=0)
Oct 14 04:56:02 np0005486808 nova_compute[259627]: 2025-10-14 08:56:02.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:56:02 np0005486808 nova_compute[259627]: 2025-10-14 08:56:02.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.643 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9b410fba-e0a4-4544-b437-e1dbffa6da06.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9b410fba-e0a4-4544-b437-e1dbffa6da06.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.644 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1d82f5cb-7e63-4a29-9d56-48d51bab91c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.645 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-9b410fba-e0a4-4544-b437-e1dbffa6da06
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/9b410fba-e0a4-4544-b437-e1dbffa6da06.pid.haproxy
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 9b410fba-e0a4-4544-b437-e1dbffa6da06
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 04:56:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:02.646 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06', 'env', 'PROCESS_TAG=haproxy-9b410fba-e0a4-4544-b437-e1dbffa6da06', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9b410fba-e0a4-4544-b437-e1dbffa6da06.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 04:56:02 np0005486808 nova_compute[259627]: 2025-10-14 08:56:02.699 2 DEBUG nova.compute.manager [req-29109707-ffde-4dc1-9b75-f95aacb0c025 req-90930d97-b301-43f9-b276-14c6880e2268 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Received event network-vif-plugged-d5a0b41a-72da-4c0e-a568-fdba9541e479 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 04:56:02 np0005486808 nova_compute[259627]: 2025-10-14 08:56:02.700 2 DEBUG oslo_concurrency.lockutils [req-29109707-ffde-4dc1-9b75-f95aacb0c025 req-90930d97-b301-43f9-b276-14c6880e2268 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "726e9e21-4f40-48aa-947a-95a78db4dbf3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:56:02 np0005486808 nova_compute[259627]: 2025-10-14 08:56:02.700 2 DEBUG oslo_concurrency.lockutils [req-29109707-ffde-4dc1-9b75-f95aacb0c025 req-90930d97-b301-43f9-b276-14c6880e2268 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "726e9e21-4f40-48aa-947a-95a78db4dbf3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:56:02 np0005486808 nova_compute[259627]: 2025-10-14 08:56:02.700 2 DEBUG oslo_concurrency.lockutils [req-29109707-ffde-4dc1-9b75-f95aacb0c025 req-90930d97-b301-43f9-b276-14c6880e2268 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "726e9e21-4f40-48aa-947a-95a78db4dbf3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:56:02 np0005486808 nova_compute[259627]: 2025-10-14 08:56:02.700 2 DEBUG nova.compute.manager [req-29109707-ffde-4dc1-9b75-f95aacb0c025 req-90930d97-b301-43f9-b276-14c6880e2268 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Processing event network-vif-plugged-d5a0b41a-72da-4c0e-a568-fdba9541e479 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 04:56:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:56:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:56:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:56:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:56:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:56:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:56:02 np0005486808 nova_compute[259627]: 2025-10-14 08:56:02.784 2 DEBUG oslo_concurrency.lockutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Acquiring lock "0936e31d-4bed-46c8-a561-05467cf93f75" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:56:02 np0005486808 nova_compute[259627]: 2025-10-14 08:56:02.785 2 DEBUG oslo_concurrency.lockutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Lock "0936e31d-4bed-46c8-a561-05467cf93f75" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:56:02 np0005486808 nova_compute[259627]: 2025-10-14 08:56:02.800 2 DEBUG nova.compute.manager [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 04:56:02 np0005486808 nova_compute[259627]: 2025-10-14 08:56:02.878 2 DEBUG oslo_concurrency.lockutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:56:02 np0005486808 nova_compute[259627]: 2025-10-14 08:56:02.886 2 DEBUG oslo_concurrency.lockutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:56:02 np0005486808 nova_compute[259627]: 2025-10-14 08:56:02.898 2 DEBUG nova.virt.hardware [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 04:56:02 np0005486808 nova_compute[259627]: 2025-10-14 08:56:02.899 2 INFO nova.compute.claims [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.023 2 DEBUG oslo_concurrency.processutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:56:03 np0005486808 podman[280667]: 2025-10-14 08:56:03.069773436 +0000 UTC m=+0.070379657 container create a405ad6fd6455b0dc79e71b53d4c5c7adfc6f86ae755adf02ad97bb80a5ff63b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009)
Oct 14 04:56:03 np0005486808 systemd[1]: Started libpod-conmon-a405ad6fd6455b0dc79e71b53d4c5c7adfc6f86ae755adf02ad97bb80a5ff63b.scope.
Oct 14 04:56:03 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:56:03 np0005486808 podman[280667]: 2025-10-14 08:56:03.032669061 +0000 UTC m=+0.033275362 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 04:56:03 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04047615a0e39b76733f58c9bbb2c5b94aae3cbd29f91e22f62d4a2216aca8e0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 04:56:03 np0005486808 podman[280667]: 2025-10-14 08:56:03.14250017 +0000 UTC m=+0.143106421 container init a405ad6fd6455b0dc79e71b53d4c5c7adfc6f86ae755adf02ad97bb80a5ff63b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009)
Oct 14 04:56:03 np0005486808 podman[280667]: 2025-10-14 08:56:03.147605946 +0000 UTC m=+0.148212157 container start a405ad6fd6455b0dc79e71b53d4c5c7adfc6f86ae755adf02ad97bb80a5ff63b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 14 04:56:03 np0005486808 neutron-haproxy-ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06[280681]: [NOTICE]   (280685) : New worker (280706) forked
Oct 14 04:56:03 np0005486808 neutron-haproxy-ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06[280681]: [NOTICE]   (280685) : Loading success.
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.401 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432163.4010384, 726e9e21-4f40-48aa-947a-95a78db4dbf3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.402 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] VM Started (Lifecycle Event)#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.403 2 DEBUG nova.compute.manager [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.408 2 DEBUG nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.411 2 INFO nova.virt.libvirt.driver [-] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Instance spawned successfully.#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.411 2 DEBUG nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.424 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.429 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.433 2 DEBUG nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.433 2 DEBUG nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.434 2 DEBUG nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.434 2 DEBUG nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.435 2 DEBUG nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.435 2 DEBUG nova.virt.libvirt.driver [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.474 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.475 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432163.403522, 726e9e21-4f40-48aa-947a-95a78db4dbf3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.475 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] VM Paused (Lifecycle Event)#033[00m
Oct 14 04:56:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:56:03 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1210446206' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:56:03 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.494 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.497 2 INFO nova.compute.manager [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Took 6.86 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.498 2 DEBUG nova.compute.manager [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.499 2 DEBUG oslo_concurrency.processutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.502 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432163.406412, 726e9e21-4f40-48aa-947a-95a78db4dbf3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.503 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] VM Resumed (Lifecycle Event)#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.507 2 DEBUG nova.compute.provider_tree [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.529 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.532 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.557 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.566 2 DEBUG nova.scheduler.client.report [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.572 2 INFO nova.compute.manager [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Took 7.86 seconds to build instance.#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.593 2 DEBUG oslo_concurrency.lockutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.594 2 DEBUG nova.compute.manager [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.596 2 DEBUG oslo_concurrency.lockutils [None req-ea8bc14a-b85e-49db-baf4-d3be9b5adbec efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Lock "726e9e21-4f40-48aa-947a-95a78db4dbf3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.962s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.644 2 DEBUG nova.compute.manager [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.645 2 DEBUG nova.network.neutron [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.665 2 INFO nova.virt.libvirt.driver [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.682 2 DEBUG nova.compute.manager [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.773 2 DEBUG nova.compute.manager [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.774 2 DEBUG nova.virt.libvirt.driver [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.775 2 INFO nova.virt.libvirt.driver [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Creating image(s)#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.797 2 DEBUG nova.storage.rbd_utils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] rbd image 0936e31d-4bed-46c8-a561-05467cf93f75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.820 2 DEBUG nova.storage.rbd_utils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] rbd image 0936e31d-4bed-46c8-a561-05467cf93f75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.845 2 DEBUG nova.storage.rbd_utils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] rbd image 0936e31d-4bed-46c8-a561-05467cf93f75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.849 2 DEBUG oslo_concurrency.processutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.910 2 DEBUG oslo_concurrency.processutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.910 2 DEBUG oslo_concurrency.lockutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.911 2 DEBUG oslo_concurrency.lockutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.911 2 DEBUG oslo_concurrency.lockutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.932 2 DEBUG nova.storage.rbd_utils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] rbd image 0936e31d-4bed-46c8-a561-05467cf93f75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.938 2 DEBUG oslo_concurrency.processutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 0936e31d-4bed-46c8-a561-05467cf93f75_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.962 2 DEBUG nova.network.neutron [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct 14 04:56:03 np0005486808 nova_compute[259627]: 2025-10-14 08:56:03.962 2 DEBUG nova.compute.manager [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.177 2 DEBUG oslo_concurrency.processutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 0936e31d-4bed-46c8-a561-05467cf93f75_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.239s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.233 2 DEBUG nova.storage.rbd_utils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] resizing rbd image 0936e31d-4bed-46c8-a561-05467cf93f75_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.281 2 DEBUG oslo_concurrency.lockutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Acquiring lock "ed89f48c-8144-453c-9357-0abc99716b22" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.282 2 DEBUG oslo_concurrency.lockutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lock "ed89f48c-8144-453c-9357-0abc99716b22" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.330 2 DEBUG oslo_concurrency.lockutils [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Acquiring lock "726e9e21-4f40-48aa-947a-95a78db4dbf3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.331 2 DEBUG oslo_concurrency.lockutils [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Lock "726e9e21-4f40-48aa-947a-95a78db4dbf3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.331 2 DEBUG oslo_concurrency.lockutils [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Acquiring lock "726e9e21-4f40-48aa-947a-95a78db4dbf3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.331 2 DEBUG oslo_concurrency.lockutils [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Lock "726e9e21-4f40-48aa-947a-95a78db4dbf3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.331 2 DEBUG oslo_concurrency.lockutils [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Lock "726e9e21-4f40-48aa-947a-95a78db4dbf3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.333 2 INFO nova.compute.manager [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Terminating instance#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.334 2 DEBUG nova.compute.manager [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.335 2 DEBUG nova.compute.manager [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.343 2 DEBUG nova.objects.instance [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Lazy-loading 'migration_context' on Instance uuid 0936e31d-4bed-46c8-a561-05467cf93f75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.366 2 DEBUG nova.virt.libvirt.driver [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.367 2 DEBUG nova.virt.libvirt.driver [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Ensure instance console log exists: /var/lib/nova/instances/0936e31d-4bed-46c8-a561-05467cf93f75/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.367 2 DEBUG oslo_concurrency.lockutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.367 2 DEBUG oslo_concurrency.lockutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.367 2 DEBUG oslo_concurrency.lockutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.368 2 DEBUG nova.virt.libvirt.driver [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.371 2 WARNING nova.virt.libvirt.driver [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:56:04 np0005486808 kernel: tapd5a0b41a-72 (unregistering): left promiscuous mode
Oct 14 04:56:04 np0005486808 NetworkManager[44885]: <info>  [1760432164.3780] device (tapd5a0b41a-72): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.377 2 DEBUG nova.virt.libvirt.host [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.380 2 DEBUG nova.virt.libvirt.host [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 04:56:04 np0005486808 ovn_controller[152662]: 2025-10-14T08:56:04Z|00074|binding|INFO|Releasing lport d5a0b41a-72da-4c0e-a568-fdba9541e479 from this chassis (sb_readonly=0)
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:04 np0005486808 ovn_controller[152662]: 2025-10-14T08:56:04Z|00075|binding|INFO|Setting lport d5a0b41a-72da-4c0e-a568-fdba9541e479 down in Southbound
Oct 14 04:56:04 np0005486808 ovn_controller[152662]: 2025-10-14T08:56:04Z|00076|binding|INFO|Removing iface tapd5a0b41a-72 ovn-installed in OVS
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.387 2 DEBUG nova.virt.libvirt.host [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.388 2 DEBUG nova.virt.libvirt.host [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.388 2 DEBUG nova.virt.libvirt.driver [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.388 2 DEBUG nova.virt.hardware [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.389 2 DEBUG nova.virt.hardware [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.389 2 DEBUG nova.virt.hardware [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.389 2 DEBUG nova.virt.hardware [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.389 2 DEBUG nova.virt.hardware [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.389 2 DEBUG nova.virt.hardware [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.390 2 DEBUG nova.virt.hardware [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.390 2 DEBUG nova.virt.hardware [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.390 2 DEBUG nova.virt.hardware [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.390 2 DEBUG nova.virt.hardware [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.390 2 DEBUG nova.virt.hardware [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.393 2 DEBUG oslo_concurrency.processutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:56:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:04.392 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:0c:0a 10.100.0.9'], port_security=['fa:16:3e:8c:0c:0a 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '726e9e21-4f40-48aa-947a-95a78db4dbf3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9b410fba-e0a4-4544-b437-e1dbffa6da06', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '11f8dfac5e5c4bfe9bccae4608ea8d51', 'neutron:revision_number': '4', 'neutron:security_group_ids': '109e5b33-46b2-4d5e-adaf-e6f9890b6610', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8cb3f08b-f3e8-45d0-921c-e2283d0c50c8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=d5a0b41a-72da-4c0e-a568-fdba9541e479) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:56:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:04.394 162547 INFO neutron.agent.ovn.metadata.agent [-] Port d5a0b41a-72da-4c0e-a568-fdba9541e479 in datapath 9b410fba-e0a4-4544-b437-e1dbffa6da06 unbound from our chassis#033[00m
Oct 14 04:56:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:04.395 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9b410fba-e0a4-4544-b437-e1dbffa6da06, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 04:56:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:04.396 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[42c90486-4e69-4742-adb5-6a75110070c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:04.396 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06 namespace which is not needed anymore#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:04 np0005486808 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Oct 14 04:56:04 np0005486808 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 1.844s CPU time.
Oct 14 04:56:04 np0005486808 systemd-machined[214636]: Machine qemu-9-instance-00000009 terminated.
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.447 2 DEBUG oslo_concurrency.lockutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.447 2 DEBUG oslo_concurrency.lockutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.456 2 DEBUG nova.virt.hardware [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.457 2 INFO nova.compute.claims [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 04:56:04 np0005486808 neutron-haproxy-ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06[280681]: [NOTICE]   (280685) : haproxy version is 2.8.14-c23fe91
Oct 14 04:56:04 np0005486808 neutron-haproxy-ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06[280681]: [NOTICE]   (280685) : path to executable is /usr/sbin/haproxy
Oct 14 04:56:04 np0005486808 neutron-haproxy-ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06[280681]: [WARNING]  (280685) : Exiting Master process...
Oct 14 04:56:04 np0005486808 neutron-haproxy-ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06[280681]: [ALERT]    (280685) : Current worker (280706) exited with code 143 (Terminated)
Oct 14 04:56:04 np0005486808 neutron-haproxy-ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06[280681]: [WARNING]  (280685) : All workers exited. Exiting... (0)
Oct 14 04:56:04 np0005486808 systemd[1]: libpod-a405ad6fd6455b0dc79e71b53d4c5c7adfc6f86ae755adf02ad97bb80a5ff63b.scope: Deactivated successfully.
Oct 14 04:56:04 np0005486808 podman[280906]: 2025-10-14 08:56:04.52307529 +0000 UTC m=+0.043805071 container died a405ad6fd6455b0dc79e71b53d4c5c7adfc6f86ae755adf02ad97bb80a5ff63b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 04:56:04 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a405ad6fd6455b0dc79e71b53d4c5c7adfc6f86ae755adf02ad97bb80a5ff63b-userdata-shm.mount: Deactivated successfully.
Oct 14 04:56:04 np0005486808 systemd[1]: var-lib-containers-storage-overlay-04047615a0e39b76733f58c9bbb2c5b94aae3cbd29f91e22f62d4a2216aca8e0-merged.mount: Deactivated successfully.
Oct 14 04:56:04 np0005486808 NetworkManager[44885]: <info>  [1760432164.5590] manager: (tapd5a0b41a-72): new Tun device (/org/freedesktop/NetworkManager/Devices/47)
Oct 14 04:56:04 np0005486808 podman[280906]: 2025-10-14 08:56:04.559049267 +0000 UTC m=+0.079778988 container cleanup a405ad6fd6455b0dc79e71b53d4c5c7adfc6f86ae755adf02ad97bb80a5ff63b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 14 04:56:04 np0005486808 systemd[1]: libpod-conmon-a405ad6fd6455b0dc79e71b53d4c5c7adfc6f86ae755adf02ad97bb80a5ff63b.scope: Deactivated successfully.
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.579 2 INFO nova.virt.libvirt.driver [-] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Instance destroyed successfully.#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.579 2 DEBUG nova.objects.instance [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Lazy-loading 'resources' on Instance uuid 726e9e21-4f40-48aa-947a-95a78db4dbf3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.597 2 DEBUG nova.virt.libvirt.vif [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:55:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-1369634349',display_name='tempest-ImagesNegativeTestJSON-server-1369634349',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-1369634349',id=9,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:56:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='11f8dfac5e5c4bfe9bccae4608ea8d51',ramdisk_id='',reservation_id='r-ogdi8y87',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesNegativeTestJSON-866675267',owner_user_name='tempest-ImagesNegativeTestJSON-866675267-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:56:03Z,user_data=None,user_id='efc7f5a2c6324662956767ff381c6b16',uuid=726e9e21-4f40-48aa-947a-95a78db4dbf3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d5a0b41a-72da-4c0e-a568-fdba9541e479", "address": "fa:16:3e:8c:0c:0a", "network": {"id": "9b410fba-e0a4-4544-b437-e1dbffa6da06", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-606020585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "11f8dfac5e5c4bfe9bccae4608ea8d51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5a0b41a-72", "ovs_interfaceid": "d5a0b41a-72da-4c0e-a568-fdba9541e479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.597 2 DEBUG nova.network.os_vif_util [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Converting VIF {"id": "d5a0b41a-72da-4c0e-a568-fdba9541e479", "address": "fa:16:3e:8c:0c:0a", "network": {"id": "9b410fba-e0a4-4544-b437-e1dbffa6da06", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-606020585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "11f8dfac5e5c4bfe9bccae4608ea8d51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5a0b41a-72", "ovs_interfaceid": "d5a0b41a-72da-4c0e-a568-fdba9541e479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.598 2 DEBUG nova.network.os_vif_util [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:0c:0a,bridge_name='br-int',has_traffic_filtering=True,id=d5a0b41a-72da-4c0e-a568-fdba9541e479,network=Network(9b410fba-e0a4-4544-b437-e1dbffa6da06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5a0b41a-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.598 2 DEBUG os_vif [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:0c:0a,bridge_name='br-int',has_traffic_filtering=True,id=d5a0b41a-72da-4c0e-a568-fdba9541e479,network=Network(9b410fba-e0a4-4544-b437-e1dbffa6da06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5a0b41a-72') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.600 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5a0b41a-72, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.605 2 INFO os_vif [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:0c:0a,bridge_name='br-int',has_traffic_filtering=True,id=d5a0b41a-72da-4c0e-a568-fdba9541e479,network=Network(9b410fba-e0a4-4544-b437-e1dbffa6da06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5a0b41a-72')#033[00m
Oct 14 04:56:04 np0005486808 podman[280958]: 2025-10-14 08:56:04.642509266 +0000 UTC m=+0.057038888 container remove a405ad6fd6455b0dc79e71b53d4c5c7adfc6f86ae755adf02ad97bb80a5ff63b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.649 2 DEBUG oslo_concurrency.processutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:56:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:04.650 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e75ff5c4-4ded-4d30-b127-c33dcccbe931]: (4, ('Tue Oct 14 08:56:04 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06 (a405ad6fd6455b0dc79e71b53d4c5c7adfc6f86ae755adf02ad97bb80a5ff63b)\na405ad6fd6455b0dc79e71b53d4c5c7adfc6f86ae755adf02ad97bb80a5ff63b\nTue Oct 14 08:56:04 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06 (a405ad6fd6455b0dc79e71b53d4c5c7adfc6f86ae755adf02ad97bb80a5ff63b)\na405ad6fd6455b0dc79e71b53d4c5c7adfc6f86ae755adf02ad97bb80a5ff63b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:04.651 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7c97c680-4fb4-4ee4-be1b-2c811d33fea1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:04.652 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9b410fba-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:56:04 np0005486808 kernel: tap9b410fba-e0: left promiscuous mode
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:04.680 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5c2d34b1-5f03-4fa2-b50f-4c407c042e0b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:04.712 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[67daebfa-3c28-4f77-b059-50d49639ed5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:04.713 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[20b5667f-e021-4d8e-9f20-561cd87724bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:04.734 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9c520ad6-8626-46ee-90ed-d32adb51dcd1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 594116, 'reachable_time': 43073, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280995, 'error': None, 'target': 'ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:04 np0005486808 systemd[1]: run-netns-ovnmeta\x2d9b410fba\x2de0a4\x2d4544\x2db437\x2de1dbffa6da06.mount: Deactivated successfully.
Oct 14 04:56:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:04.739 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9b410fba-e0a4-4544-b437-e1dbffa6da06 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 04:56:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:04.739 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[4b19a5a2-1bc2-4ee9-b644-583c2a128a6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.789 2 DEBUG nova.compute.manager [req-eb283250-580f-4733-8d64-9eb01e3af2a0 req-b2a609bf-6db5-4856-92b2-4b005233ae8e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Received event network-vif-plugged-d5a0b41a-72da-4c0e-a568-fdba9541e479 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.789 2 DEBUG oslo_concurrency.lockutils [req-eb283250-580f-4733-8d64-9eb01e3af2a0 req-b2a609bf-6db5-4856-92b2-4b005233ae8e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "726e9e21-4f40-48aa-947a-95a78db4dbf3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.790 2 DEBUG oslo_concurrency.lockutils [req-eb283250-580f-4733-8d64-9eb01e3af2a0 req-b2a609bf-6db5-4856-92b2-4b005233ae8e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "726e9e21-4f40-48aa-947a-95a78db4dbf3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.790 2 DEBUG oslo_concurrency.lockutils [req-eb283250-580f-4733-8d64-9eb01e3af2a0 req-b2a609bf-6db5-4856-92b2-4b005233ae8e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "726e9e21-4f40-48aa-947a-95a78db4dbf3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.791 2 DEBUG nova.compute.manager [req-eb283250-580f-4733-8d64-9eb01e3af2a0 req-b2a609bf-6db5-4856-92b2-4b005233ae8e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] No waiting events found dispatching network-vif-plugged-d5a0b41a-72da-4c0e-a568-fdba9541e479 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.792 2 WARNING nova.compute.manager [req-eb283250-580f-4733-8d64-9eb01e3af2a0 req-b2a609bf-6db5-4856-92b2-4b005233ae8e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Received unexpected event network-vif-plugged-d5a0b41a-72da-4c0e-a568-fdba9541e479 for instance with vm_state active and task_state deleting.#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.792 2 DEBUG nova.compute.manager [req-eb283250-580f-4733-8d64-9eb01e3af2a0 req-b2a609bf-6db5-4856-92b2-4b005233ae8e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Received event network-vif-unplugged-d5a0b41a-72da-4c0e-a568-fdba9541e479 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.793 2 DEBUG oslo_concurrency.lockutils [req-eb283250-580f-4733-8d64-9eb01e3af2a0 req-b2a609bf-6db5-4856-92b2-4b005233ae8e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "726e9e21-4f40-48aa-947a-95a78db4dbf3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.793 2 DEBUG oslo_concurrency.lockutils [req-eb283250-580f-4733-8d64-9eb01e3af2a0 req-b2a609bf-6db5-4856-92b2-4b005233ae8e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "726e9e21-4f40-48aa-947a-95a78db4dbf3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.793 2 DEBUG oslo_concurrency.lockutils [req-eb283250-580f-4733-8d64-9eb01e3af2a0 req-b2a609bf-6db5-4856-92b2-4b005233ae8e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "726e9e21-4f40-48aa-947a-95a78db4dbf3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.793 2 DEBUG nova.compute.manager [req-eb283250-580f-4733-8d64-9eb01e3af2a0 req-b2a609bf-6db5-4856-92b2-4b005233ae8e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] No waiting events found dispatching network-vif-unplugged-d5a0b41a-72da-4c0e-a568-fdba9541e479 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.794 2 DEBUG nova.compute.manager [req-eb283250-580f-4733-8d64-9eb01e3af2a0 req-b2a609bf-6db5-4856-92b2-4b005233ae8e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Received event network-vif-unplugged-d5a0b41a-72da-4c0e-a568-fdba9541e479 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.794 2 DEBUG nova.compute.manager [req-eb283250-580f-4733-8d64-9eb01e3af2a0 req-b2a609bf-6db5-4856-92b2-4b005233ae8e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Received event network-vif-plugged-d5a0b41a-72da-4c0e-a568-fdba9541e479 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.795 2 DEBUG oslo_concurrency.lockutils [req-eb283250-580f-4733-8d64-9eb01e3af2a0 req-b2a609bf-6db5-4856-92b2-4b005233ae8e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "726e9e21-4f40-48aa-947a-95a78db4dbf3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.795 2 DEBUG oslo_concurrency.lockutils [req-eb283250-580f-4733-8d64-9eb01e3af2a0 req-b2a609bf-6db5-4856-92b2-4b005233ae8e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "726e9e21-4f40-48aa-947a-95a78db4dbf3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.795 2 DEBUG oslo_concurrency.lockutils [req-eb283250-580f-4733-8d64-9eb01e3af2a0 req-b2a609bf-6db5-4856-92b2-4b005233ae8e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "726e9e21-4f40-48aa-947a-95a78db4dbf3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.796 2 DEBUG nova.compute.manager [req-eb283250-580f-4733-8d64-9eb01e3af2a0 req-b2a609bf-6db5-4856-92b2-4b005233ae8e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] No waiting events found dispatching network-vif-plugged-d5a0b41a-72da-4c0e-a568-fdba9541e479 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.796 2 WARNING nova.compute.manager [req-eb283250-580f-4733-8d64-9eb01e3af2a0 req-b2a609bf-6db5-4856-92b2-4b005233ae8e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Received unexpected event network-vif-plugged-d5a0b41a-72da-4c0e-a568-fdba9541e479 for instance with vm_state active and task_state deleting.#033[00m
Oct 14 04:56:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:56:04 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3647073352' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.850 2 DEBUG oslo_concurrency.processutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.874 2 DEBUG nova.storage.rbd_utils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] rbd image 0936e31d-4bed-46c8-a561-05467cf93f75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.880 2 DEBUG oslo_concurrency.processutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.978 2 INFO nova.virt.libvirt.driver [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Deleting instance files /var/lib/nova/instances/726e9e21-4f40-48aa-947a-95a78db4dbf3_del#033[00m
Oct 14 04:56:04 np0005486808 nova_compute[259627]: 2025-10-14 08:56:04.979 2 INFO nova.virt.libvirt.driver [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Deletion of /var/lib/nova/instances/726e9e21-4f40-48aa-947a-95a78db4dbf3_del complete#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.058 2 INFO nova.compute.manager [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.059 2 DEBUG oslo.service.loopingcall [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.059 2 DEBUG nova.compute.manager [-] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.059 2 DEBUG nova.network.neutron [-] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 04:56:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:56:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1346080831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.111 2 DEBUG oslo_concurrency.processutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.115 2 DEBUG nova.compute.provider_tree [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.138 2 DEBUG nova.scheduler.client.report [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.157 2 DEBUG oslo_concurrency.lockutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.157 2 DEBUG nova.compute.manager [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.195 2 DEBUG nova.compute.manager [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.195 2 DEBUG nova.network.neutron [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.212 2 INFO nova.virt.libvirt.driver [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.231 2 DEBUG nova.compute.manager [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 04:56:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:56:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2877257362' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.287 2 DEBUG oslo_concurrency.processutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.288 2 DEBUG nova.objects.instance [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0936e31d-4bed-46c8-a561-05467cf93f75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.313 2 DEBUG nova.virt.libvirt.driver [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] End _get_guest_xml xml=<domain type="kvm">
Oct 14 04:56:05 np0005486808 nova_compute[259627]:  <uuid>0936e31d-4bed-46c8-a561-05467cf93f75</uuid>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:  <name>instance-0000000a</name>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 04:56:05 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:      <nova:name>tempest-TenantUsagesTestJSON-server-196977972</nova:name>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 08:56:04</nova:creationTime>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 04:56:05 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:        <nova:user uuid="0a71ad98ce694327be82c546cc6f37b4">tempest-TenantUsagesTestJSON-1237927654-project-member</nova:user>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:        <nova:project uuid="d319ba0849124bb381bfab8321324c76">tempest-TenantUsagesTestJSON-1237927654</nova:project>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:      <nova:ports/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <system>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:      <entry name="serial">0936e31d-4bed-46c8-a561-05467cf93f75</entry>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:      <entry name="uuid">0936e31d-4bed-46c8-a561-05467cf93f75</entry>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    </system>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:  <os>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:  </os>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:  <features>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:  </features>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:  </clock>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:  <devices>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 04:56:05 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/0936e31d-4bed-46c8-a561-05467cf93f75_disk">
Oct 14 04:56:05 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:56:05 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 04:56:05 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/0936e31d-4bed-46c8-a561-05467cf93f75_disk.config">
Oct 14 04:56:05 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:56:05 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 04:56:05 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/0936e31d-4bed-46c8-a561-05467cf93f75/console.log" append="off"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    </serial>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <video>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    </video>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 04:56:05 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    </rng>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 04:56:05 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 04:56:05 np0005486808 nova_compute[259627]:  </devices>
Oct 14 04:56:05 np0005486808 nova_compute[259627]: </domain>
Oct 14 04:56:05 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.357 2 DEBUG nova.compute.manager [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.358 2 DEBUG nova.virt.libvirt.driver [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.358 2 INFO nova.virt.libvirt.driver [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Creating image(s)#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.374 2 DEBUG nova.storage.rbd_utils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] rbd image ed89f48c-8144-453c-9357-0abc99716b22_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.390 2 DEBUG nova.storage.rbd_utils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] rbd image ed89f48c-8144-453c-9357-0abc99716b22_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.409 2 DEBUG nova.storage.rbd_utils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] rbd image ed89f48c-8144-453c-9357-0abc99716b22_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.412 2 DEBUG oslo_concurrency.processutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.435 2 DEBUG nova.virt.libvirt.driver [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.435 2 DEBUG nova.virt.libvirt.driver [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.436 2 INFO nova.virt.libvirt.driver [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Using config drive#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.456 2 DEBUG nova.storage.rbd_utils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] rbd image 0936e31d-4bed-46c8-a561-05467cf93f75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.467 2 DEBUG oslo_concurrency.processutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.468 2 DEBUG oslo_concurrency.lockutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.468 2 DEBUG oslo_concurrency.lockutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.469 2 DEBUG oslo_concurrency.lockutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:56:05 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 305 active+clean; 88 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.6 MiB/s wr, 201 op/s
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.487 2 DEBUG nova.storage.rbd_utils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] rbd image ed89f48c-8144-453c-9357-0abc99716b22_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.490 2 DEBUG oslo_concurrency.processutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 ed89f48c-8144-453c-9357-0abc99716b22_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:56:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 04:56:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3458753081' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 04:56:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 04:56:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3458753081' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.572 2 DEBUG nova.network.neutron [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.573 2 DEBUG nova.compute.manager [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.661 2 INFO nova.virt.libvirt.driver [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Creating config drive at /var/lib/nova/instances/0936e31d-4bed-46c8-a561-05467cf93f75/disk.config#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.665 2 DEBUG oslo_concurrency.processutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0936e31d-4bed-46c8-a561-05467cf93f75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjjp08b4w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.752 2 DEBUG oslo_concurrency.processutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 ed89f48c-8144-453c-9357-0abc99716b22_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.262s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.818 2 DEBUG oslo_concurrency.processutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0936e31d-4bed-46c8-a561-05467cf93f75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjjp08b4w" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.846 2 DEBUG nova.storage.rbd_utils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] rbd image 0936e31d-4bed-46c8-a561-05467cf93f75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.850 2 DEBUG oslo_concurrency.processutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0936e31d-4bed-46c8-a561-05467cf93f75/disk.config 0936e31d-4bed-46c8-a561-05467cf93f75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.870 2 DEBUG nova.network.neutron [-] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.876 2 DEBUG nova.storage.rbd_utils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] resizing rbd image ed89f48c-8144-453c-9357-0abc99716b22_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.907 2 INFO nova.compute.manager [-] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Took 0.85 seconds to deallocate network for instance.#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.971 2 DEBUG oslo_concurrency.lockutils [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.972 2 DEBUG oslo_concurrency.lockutils [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.977 2 DEBUG nova.objects.instance [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lazy-loading 'migration_context' on Instance uuid ed89f48c-8144-453c-9357-0abc99716b22 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.996 2 DEBUG nova.virt.libvirt.driver [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.996 2 DEBUG nova.virt.libvirt.driver [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Ensure instance console log exists: /var/lib/nova/instances/ed89f48c-8144-453c-9357-0abc99716b22/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.997 2 DEBUG oslo_concurrency.lockutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.997 2 DEBUG oslo_concurrency.lockutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.997 2 DEBUG oslo_concurrency.lockutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:56:05 np0005486808 nova_compute[259627]: 2025-10-14 08:56:05.999 2 DEBUG nova.virt.libvirt.driver [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 04:56:06 np0005486808 nova_compute[259627]: 2025-10-14 08:56:06.006 2 WARNING nova.virt.libvirt.driver [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:56:06 np0005486808 nova_compute[259627]: 2025-10-14 08:56:06.010 2 DEBUG nova.virt.libvirt.host [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 04:56:06 np0005486808 nova_compute[259627]: 2025-10-14 08:56:06.011 2 DEBUG nova.virt.libvirt.host [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 04:56:06 np0005486808 nova_compute[259627]: 2025-10-14 08:56:06.014 2 DEBUG nova.virt.libvirt.host [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 04:56:06 np0005486808 nova_compute[259627]: 2025-10-14 08:56:06.014 2 DEBUG nova.virt.libvirt.host [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 04:56:06 np0005486808 nova_compute[259627]: 2025-10-14 08:56:06.015 2 DEBUG nova.virt.libvirt.driver [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 04:56:06 np0005486808 nova_compute[259627]: 2025-10-14 08:56:06.015 2 DEBUG nova.virt.hardware [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 04:56:06 np0005486808 nova_compute[259627]: 2025-10-14 08:56:06.016 2 DEBUG nova.virt.hardware [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 04:56:06 np0005486808 nova_compute[259627]: 2025-10-14 08:56:06.016 2 DEBUG nova.virt.hardware [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 04:56:06 np0005486808 nova_compute[259627]: 2025-10-14 08:56:06.017 2 DEBUG nova.virt.hardware [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 04:56:06 np0005486808 nova_compute[259627]: 2025-10-14 08:56:06.017 2 DEBUG nova.virt.hardware [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 04:56:06 np0005486808 nova_compute[259627]: 2025-10-14 08:56:06.017 2 DEBUG nova.virt.hardware [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 04:56:06 np0005486808 nova_compute[259627]: 2025-10-14 08:56:06.018 2 DEBUG nova.virt.hardware [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 04:56:06 np0005486808 nova_compute[259627]: 2025-10-14 08:56:06.018 2 DEBUG nova.virt.hardware [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 04:56:06 np0005486808 nova_compute[259627]: 2025-10-14 08:56:06.018 2 DEBUG nova.virt.hardware [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 04:56:06 np0005486808 nova_compute[259627]: 2025-10-14 08:56:06.019 2 DEBUG nova.virt.hardware [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 04:56:06 np0005486808 nova_compute[259627]: 2025-10-14 08:56:06.019 2 DEBUG nova.virt.hardware [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 04:56:06 np0005486808 nova_compute[259627]: 2025-10-14 08:56:06.022 2 DEBUG oslo_concurrency.processutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:56:06 np0005486808 nova_compute[259627]: 2025-10-14 08:56:06.046 2 DEBUG oslo_concurrency.processutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0936e31d-4bed-46c8-a561-05467cf93f75/disk.config 0936e31d-4bed-46c8-a561-05467cf93f75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:56:06 np0005486808 nova_compute[259627]: 2025-10-14 08:56:06.048 2 INFO nova.virt.libvirt.driver [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Deleting local config drive /var/lib/nova/instances/0936e31d-4bed-46c8-a561-05467cf93f75/disk.config because it was imported into RBD.#033[00m
Oct 14 04:56:06 np0005486808 nova_compute[259627]: 2025-10-14 08:56:06.111 2 DEBUG oslo_concurrency.processutils [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:56:06 np0005486808 systemd-machined[214636]: New machine qemu-10-instance-0000000a.
Oct 14 04:56:06 np0005486808 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Oct 14 04:56:06 np0005486808 nova_compute[259627]: 2025-10-14 08:56:06.282 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432151.28092, dfaff639-0439-40d7-8f56-5e8068d741cf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:56:06 np0005486808 nova_compute[259627]: 2025-10-14 08:56:06.283 2 INFO nova.compute.manager [-] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] VM Stopped (Lifecycle Event)#033[00m
Oct 14 04:56:06 np0005486808 nova_compute[259627]: 2025-10-14 08:56:06.304 2 DEBUG nova.compute.manager [None req-857d13c3-b60c-4ff6-affc-d47da1f8c644 - - - - - -] [instance: dfaff639-0439-40d7-8f56-5e8068d741cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:56:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:56:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/534328190' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:56:06 np0005486808 nova_compute[259627]: 2025-10-14 08:56:06.433 2 DEBUG oslo_concurrency.processutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:56:06 np0005486808 nova_compute[259627]: 2025-10-14 08:56:06.460 2 DEBUG nova.storage.rbd_utils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] rbd image ed89f48c-8144-453c-9357-0abc99716b22_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:56:06 np0005486808 nova_compute[259627]: 2025-10-14 08:56:06.464 2 DEBUG oslo_concurrency.processutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:56:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:56:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3491166002' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:56:06 np0005486808 nova_compute[259627]: 2025-10-14 08:56:06.571 2 DEBUG oslo_concurrency.processutils [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:56:06 np0005486808 nova_compute[259627]: 2025-10-14 08:56:06.582 2 DEBUG nova.compute.provider_tree [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:56:06 np0005486808 nova_compute[259627]: 2025-10-14 08:56:06.602 2 DEBUG nova.scheduler.client.report [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:56:06 np0005486808 nova_compute[259627]: 2025-10-14 08:56:06.629 2 DEBUG oslo_concurrency.lockutils [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:56:06 np0005486808 nova_compute[259627]: 2025-10-14 08:56:06.654 2 INFO nova.scheduler.client.report [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Deleted allocations for instance 726e9e21-4f40-48aa-947a-95a78db4dbf3#033[00m
Oct 14 04:56:06 np0005486808 nova_compute[259627]: 2025-10-14 08:56:06.719 2 DEBUG oslo_concurrency.lockutils [None req-12befcf9-ad67-49a8-9000-f2051ca3a839 efc7f5a2c6324662956767ff381c6b16 11f8dfac5e5c4bfe9bccae4608ea8d51 - - default default] Lock "726e9e21-4f40-48aa-947a-95a78db4dbf3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.388s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:56:06 np0005486808 nova_compute[259627]: 2025-10-14 08:56:06.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:56:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/258172472' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:56:06 np0005486808 nova_compute[259627]: 2025-10-14 08:56:06.883 2 DEBUG oslo_concurrency.processutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:56:06 np0005486808 nova_compute[259627]: 2025-10-14 08:56:06.885 2 DEBUG nova.objects.instance [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lazy-loading 'pci_devices' on Instance uuid ed89f48c-8144-453c-9357-0abc99716b22 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:56:06 np0005486808 nova_compute[259627]: 2025-10-14 08:56:06.901 2 DEBUG nova.virt.libvirt.driver [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] End _get_guest_xml xml=<domain type="kvm">
Oct 14 04:56:06 np0005486808 nova_compute[259627]:  <uuid>ed89f48c-8144-453c-9357-0abc99716b22</uuid>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:  <name>instance-0000000b</name>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 04:56:06 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:      <nova:name>tempest-DeleteServersAdminTestJSON-server-292904921</nova:name>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 08:56:06</nova:creationTime>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 04:56:06 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:        <nova:user uuid="8414c5e33cd949b9809db7c92c81ef19">tempest-DeleteServersAdminTestJSON-882596417-project-member</nova:user>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:        <nova:project uuid="e939ba1f2e1f42ccb04f896c697625d2">tempest-DeleteServersAdminTestJSON-882596417</nova:project>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:      <nova:ports/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <system>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:      <entry name="serial">ed89f48c-8144-453c-9357-0abc99716b22</entry>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:      <entry name="uuid">ed89f48c-8144-453c-9357-0abc99716b22</entry>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    </system>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:  <os>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:  </os>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:  <features>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:  </features>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:  </clock>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:  <devices>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 04:56:06 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/ed89f48c-8144-453c-9357-0abc99716b22_disk">
Oct 14 04:56:06 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:56:06 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 04:56:06 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/ed89f48c-8144-453c-9357-0abc99716b22_disk.config">
Oct 14 04:56:06 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:56:06 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 04:56:06 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/ed89f48c-8144-453c-9357-0abc99716b22/console.log" append="off"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    </serial>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <video>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    </video>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 04:56:06 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    </rng>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 04:56:06 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 04:56:06 np0005486808 nova_compute[259627]:  </devices>
Oct 14 04:56:06 np0005486808 nova_compute[259627]: </domain>
Oct 14 04:56:06 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 04:56:06 np0005486808 nova_compute[259627]: 2025-10-14 08:56:06.969 2 DEBUG nova.virt.libvirt.driver [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:56:06 np0005486808 nova_compute[259627]: 2025-10-14 08:56:06.969 2 DEBUG nova.virt.libvirt.driver [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:56:06 np0005486808 nova_compute[259627]: 2025-10-14 08:56:06.970 2 INFO nova.virt.libvirt.driver [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Using config drive#033[00m
Oct 14 04:56:07 np0005486808 nova_compute[259627]: 2025-10-14 08:56:07.001 2 DEBUG nova.storage.rbd_utils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] rbd image ed89f48c-8144-453c-9357-0abc99716b22_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:56:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:07.013 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:56:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:07.013 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:56:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:07.014 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:56:07 np0005486808 podman[281427]: 2025-10-14 08:56:07.019924804 +0000 UTC m=+0.081546723 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 14 04:56:07 np0005486808 nova_compute[259627]: 2025-10-14 08:56:07.039 2 DEBUG nova.compute.manager [req-91488208-e646-4b9e-a438-3ba9400a844a req-eb73e2e0-4a0c-4272-b71e-8fa62fc8b8e3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Received event network-vif-deleted-d5a0b41a-72da-4c0e-a568-fdba9541e479 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:56:07 np0005486808 podman[281464]: 2025-10-14 08:56:07.116140997 +0000 UTC m=+0.079697387 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 04:56:07 np0005486808 nova_compute[259627]: 2025-10-14 08:56:07.127 2 INFO nova.virt.libvirt.driver [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Creating config drive at /var/lib/nova/instances/ed89f48c-8144-453c-9357-0abc99716b22/disk.config#033[00m
Oct 14 04:56:07 np0005486808 nova_compute[259627]: 2025-10-14 08:56:07.131 2 DEBUG oslo_concurrency.processutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ed89f48c-8144-453c-9357-0abc99716b22/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5bjpcml7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:56:07 np0005486808 nova_compute[259627]: 2025-10-14 08:56:07.158 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432167.1576405, 0936e31d-4bed-46c8-a561-05467cf93f75 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:56:07 np0005486808 nova_compute[259627]: 2025-10-14 08:56:07.159 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] VM Resumed (Lifecycle Event)#033[00m
Oct 14 04:56:07 np0005486808 nova_compute[259627]: 2025-10-14 08:56:07.163 2 DEBUG nova.compute.manager [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 04:56:07 np0005486808 nova_compute[259627]: 2025-10-14 08:56:07.164 2 DEBUG nova.virt.libvirt.driver [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 04:56:07 np0005486808 nova_compute[259627]: 2025-10-14 08:56:07.168 2 INFO nova.virt.libvirt.driver [-] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Instance spawned successfully.#033[00m
Oct 14 04:56:07 np0005486808 nova_compute[259627]: 2025-10-14 08:56:07.168 2 DEBUG nova.virt.libvirt.driver [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 04:56:07 np0005486808 nova_compute[259627]: 2025-10-14 08:56:07.198 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:56:07 np0005486808 nova_compute[259627]: 2025-10-14 08:56:07.205 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:56:07 np0005486808 nova_compute[259627]: 2025-10-14 08:56:07.216 2 DEBUG nova.virt.libvirt.driver [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:56:07 np0005486808 nova_compute[259627]: 2025-10-14 08:56:07.216 2 DEBUG nova.virt.libvirt.driver [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:56:07 np0005486808 nova_compute[259627]: 2025-10-14 08:56:07.218 2 DEBUG nova.virt.libvirt.driver [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:56:07 np0005486808 nova_compute[259627]: 2025-10-14 08:56:07.219 2 DEBUG nova.virt.libvirt.driver [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:56:07 np0005486808 nova_compute[259627]: 2025-10-14 08:56:07.220 2 DEBUG nova.virt.libvirt.driver [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:56:07 np0005486808 nova_compute[259627]: 2025-10-14 08:56:07.221 2 DEBUG nova.virt.libvirt.driver [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:56:07 np0005486808 nova_compute[259627]: 2025-10-14 08:56:07.255 2 DEBUG oslo_concurrency.processutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ed89f48c-8144-453c-9357-0abc99716b22/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5bjpcml7" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:56:07 np0005486808 nova_compute[259627]: 2025-10-14 08:56:07.292 2 DEBUG nova.storage.rbd_utils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] rbd image ed89f48c-8144-453c-9357-0abc99716b22_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:56:07 np0005486808 nova_compute[259627]: 2025-10-14 08:56:07.296 2 DEBUG oslo_concurrency.processutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ed89f48c-8144-453c-9357-0abc99716b22/disk.config ed89f48c-8144-453c-9357-0abc99716b22_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:56:07 np0005486808 nova_compute[259627]: 2025-10-14 08:56:07.326 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:56:07 np0005486808 nova_compute[259627]: 2025-10-14 08:56:07.327 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432167.1621742, 0936e31d-4bed-46c8-a561-05467cf93f75 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:56:07 np0005486808 nova_compute[259627]: 2025-10-14 08:56:07.327 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] VM Started (Lifecycle Event)#033[00m
Oct 14 04:56:07 np0005486808 nova_compute[259627]: 2025-10-14 08:56:07.331 2 INFO nova.compute.manager [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Took 3.56 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 04:56:07 np0005486808 nova_compute[259627]: 2025-10-14 08:56:07.332 2 DEBUG nova.compute.manager [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:56:07 np0005486808 nova_compute[259627]: 2025-10-14 08:56:07.349 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:56:07 np0005486808 nova_compute[259627]: 2025-10-14 08:56:07.353 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:56:07 np0005486808 nova_compute[259627]: 2025-10-14 08:56:07.375 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:56:07 np0005486808 nova_compute[259627]: 2025-10-14 08:56:07.395 2 INFO nova.compute.manager [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Took 4.55 seconds to build instance.#033[00m
Oct 14 04:56:07 np0005486808 nova_compute[259627]: 2025-10-14 08:56:07.411 2 DEBUG oslo_concurrency.lockutils [None req-71d836ed-bd9b-46c7-926e-6ef62dc17290 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Lock "0936e31d-4bed-46c8-a561-05467cf93f75" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:56:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:56:07 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1121: 305 pgs: 305 active+clean; 88 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.6 MiB/s wr, 125 op/s
Oct 14 04:56:07 np0005486808 nova_compute[259627]: 2025-10-14 08:56:07.486 2 DEBUG oslo_concurrency.processutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ed89f48c-8144-453c-9357-0abc99716b22/disk.config ed89f48c-8144-453c-9357-0abc99716b22_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:56:07 np0005486808 nova_compute[259627]: 2025-10-14 08:56:07.487 2 INFO nova.virt.libvirt.driver [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Deleting local config drive /var/lib/nova/instances/ed89f48c-8144-453c-9357-0abc99716b22/disk.config because it was imported into RBD.#033[00m
Oct 14 04:56:07 np0005486808 virtqemud[259351]: End of file while reading data: Input/output error
Oct 14 04:56:07 np0005486808 systemd-machined[214636]: New machine qemu-11-instance-0000000b.
Oct 14 04:56:07 np0005486808 systemd[1]: Started Virtual Machine qemu-11-instance-0000000b.
Oct 14 04:56:08 np0005486808 nova_compute[259627]: 2025-10-14 08:56:08.511 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432168.51139, ed89f48c-8144-453c-9357-0abc99716b22 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:56:08 np0005486808 nova_compute[259627]: 2025-10-14 08:56:08.512 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ed89f48c-8144-453c-9357-0abc99716b22] VM Resumed (Lifecycle Event)#033[00m
Oct 14 04:56:08 np0005486808 nova_compute[259627]: 2025-10-14 08:56:08.514 2 DEBUG nova.compute.manager [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 04:56:08 np0005486808 nova_compute[259627]: 2025-10-14 08:56:08.514 2 DEBUG nova.virt.libvirt.driver [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 04:56:08 np0005486808 nova_compute[259627]: 2025-10-14 08:56:08.519 2 INFO nova.virt.libvirt.driver [-] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Instance spawned successfully.#033[00m
Oct 14 04:56:08 np0005486808 nova_compute[259627]: 2025-10-14 08:56:08.519 2 DEBUG nova.virt.libvirt.driver [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 04:56:08 np0005486808 nova_compute[259627]: 2025-10-14 08:56:08.534 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:56:08 np0005486808 nova_compute[259627]: 2025-10-14 08:56:08.539 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:56:08 np0005486808 nova_compute[259627]: 2025-10-14 08:56:08.542 2 DEBUG nova.virt.libvirt.driver [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:56:08 np0005486808 nova_compute[259627]: 2025-10-14 08:56:08.543 2 DEBUG nova.virt.libvirt.driver [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:56:08 np0005486808 nova_compute[259627]: 2025-10-14 08:56:08.543 2 DEBUG nova.virt.libvirt.driver [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:56:08 np0005486808 nova_compute[259627]: 2025-10-14 08:56:08.543 2 DEBUG nova.virt.libvirt.driver [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:56:08 np0005486808 nova_compute[259627]: 2025-10-14 08:56:08.544 2 DEBUG nova.virt.libvirt.driver [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:56:08 np0005486808 nova_compute[259627]: 2025-10-14 08:56:08.544 2 DEBUG nova.virt.libvirt.driver [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:56:08 np0005486808 nova_compute[259627]: 2025-10-14 08:56:08.594 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ed89f48c-8144-453c-9357-0abc99716b22] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:56:08 np0005486808 nova_compute[259627]: 2025-10-14 08:56:08.595 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432168.5119765, ed89f48c-8144-453c-9357-0abc99716b22 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:56:08 np0005486808 nova_compute[259627]: 2025-10-14 08:56:08.596 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ed89f48c-8144-453c-9357-0abc99716b22] VM Started (Lifecycle Event)#033[00m
Oct 14 04:56:08 np0005486808 nova_compute[259627]: 2025-10-14 08:56:08.634 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:56:08 np0005486808 nova_compute[259627]: 2025-10-14 08:56:08.639 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:56:08 np0005486808 nova_compute[259627]: 2025-10-14 08:56:08.650 2 INFO nova.compute.manager [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Took 3.29 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 04:56:08 np0005486808 nova_compute[259627]: 2025-10-14 08:56:08.651 2 DEBUG nova.compute.manager [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:56:08 np0005486808 nova_compute[259627]: 2025-10-14 08:56:08.662 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ed89f48c-8144-453c-9357-0abc99716b22] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:56:08 np0005486808 nova_compute[259627]: 2025-10-14 08:56:08.715 2 INFO nova.compute.manager [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Took 4.29 seconds to build instance.#033[00m
Oct 14 04:56:08 np0005486808 nova_compute[259627]: 2025-10-14 08:56:08.733 2 DEBUG oslo_concurrency.lockutils [None req-e1cb4af6-bcd2-4435-abce-92475521a752 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lock "ed89f48c-8144-453c-9357-0abc99716b22" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.450s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:56:09 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 305 active+clean; 88 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.6 MiB/s wr, 125 op/s
Oct 14 04:56:09 np0005486808 nova_compute[259627]: 2025-10-14 08:56:09.556 2 DEBUG oslo_concurrency.lockutils [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Acquiring lock "0936e31d-4bed-46c8-a561-05467cf93f75" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:56:09 np0005486808 nova_compute[259627]: 2025-10-14 08:56:09.557 2 DEBUG oslo_concurrency.lockutils [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Lock "0936e31d-4bed-46c8-a561-05467cf93f75" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:56:09 np0005486808 nova_compute[259627]: 2025-10-14 08:56:09.557 2 DEBUG oslo_concurrency.lockutils [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Acquiring lock "0936e31d-4bed-46c8-a561-05467cf93f75-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:56:09 np0005486808 nova_compute[259627]: 2025-10-14 08:56:09.558 2 DEBUG oslo_concurrency.lockutils [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Lock "0936e31d-4bed-46c8-a561-05467cf93f75-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:56:09 np0005486808 nova_compute[259627]: 2025-10-14 08:56:09.558 2 DEBUG oslo_concurrency.lockutils [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Lock "0936e31d-4bed-46c8-a561-05467cf93f75-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:56:09 np0005486808 nova_compute[259627]: 2025-10-14 08:56:09.559 2 INFO nova.compute.manager [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Terminating instance#033[00m
Oct 14 04:56:09 np0005486808 nova_compute[259627]: 2025-10-14 08:56:09.560 2 DEBUG oslo_concurrency.lockutils [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Acquiring lock "refresh_cache-0936e31d-4bed-46c8-a561-05467cf93f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:56:09 np0005486808 nova_compute[259627]: 2025-10-14 08:56:09.560 2 DEBUG oslo_concurrency.lockutils [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Acquired lock "refresh_cache-0936e31d-4bed-46c8-a561-05467cf93f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:56:09 np0005486808 nova_compute[259627]: 2025-10-14 08:56:09.560 2 DEBUG nova.network.neutron [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 04:56:09 np0005486808 nova_compute[259627]: 2025-10-14 08:56:09.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:09 np0005486808 nova_compute[259627]: 2025-10-14 08:56:09.813 2 DEBUG nova.network.neutron [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 04:56:10 np0005486808 nova_compute[259627]: 2025-10-14 08:56:10.368 2 DEBUG nova.network.neutron [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:56:10 np0005486808 nova_compute[259627]: 2025-10-14 08:56:10.390 2 DEBUG oslo_concurrency.lockutils [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Releasing lock "refresh_cache-0936e31d-4bed-46c8-a561-05467cf93f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:56:10 np0005486808 nova_compute[259627]: 2025-10-14 08:56:10.390 2 DEBUG nova.compute.manager [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 04:56:10 np0005486808 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Oct 14 04:56:10 np0005486808 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Consumed 4.331s CPU time.
Oct 14 04:56:10 np0005486808 systemd-machined[214636]: Machine qemu-10-instance-0000000a terminated.
Oct 14 04:56:10 np0005486808 nova_compute[259627]: 2025-10-14 08:56:10.537 2 DEBUG oslo_concurrency.lockutils [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] Acquiring lock "ed89f48c-8144-453c-9357-0abc99716b22" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:56:10 np0005486808 nova_compute[259627]: 2025-10-14 08:56:10.537 2 DEBUG oslo_concurrency.lockutils [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] Lock "ed89f48c-8144-453c-9357-0abc99716b22" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:56:10 np0005486808 nova_compute[259627]: 2025-10-14 08:56:10.538 2 DEBUG oslo_concurrency.lockutils [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] Acquiring lock "ed89f48c-8144-453c-9357-0abc99716b22-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:56:10 np0005486808 nova_compute[259627]: 2025-10-14 08:56:10.538 2 DEBUG oslo_concurrency.lockutils [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] Lock "ed89f48c-8144-453c-9357-0abc99716b22-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:56:10 np0005486808 nova_compute[259627]: 2025-10-14 08:56:10.538 2 DEBUG oslo_concurrency.lockutils [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] Lock "ed89f48c-8144-453c-9357-0abc99716b22-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:56:10 np0005486808 nova_compute[259627]: 2025-10-14 08:56:10.539 2 INFO nova.compute.manager [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Terminating instance#033[00m
Oct 14 04:56:10 np0005486808 nova_compute[259627]: 2025-10-14 08:56:10.540 2 DEBUG oslo_concurrency.lockutils [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] Acquiring lock "refresh_cache-ed89f48c-8144-453c-9357-0abc99716b22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:56:10 np0005486808 nova_compute[259627]: 2025-10-14 08:56:10.540 2 DEBUG oslo_concurrency.lockutils [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] Acquired lock "refresh_cache-ed89f48c-8144-453c-9357-0abc99716b22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:56:10 np0005486808 nova_compute[259627]: 2025-10-14 08:56:10.540 2 DEBUG nova.network.neutron [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 04:56:10 np0005486808 nova_compute[259627]: 2025-10-14 08:56:10.606 2 INFO nova.virt.libvirt.driver [-] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Instance destroyed successfully.#033[00m
Oct 14 04:56:10 np0005486808 nova_compute[259627]: 2025-10-14 08:56:10.607 2 DEBUG nova.objects.instance [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Lazy-loading 'resources' on Instance uuid 0936e31d-4bed-46c8-a561-05467cf93f75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:56:10 np0005486808 nova_compute[259627]: 2025-10-14 08:56:10.662 2 DEBUG nova.network.neutron [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 04:56:11 np0005486808 nova_compute[259627]: 2025-10-14 08:56:11.000 2 INFO nova.virt.libvirt.driver [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Deleting instance files /var/lib/nova/instances/0936e31d-4bed-46c8-a561-05467cf93f75_del#033[00m
Oct 14 04:56:11 np0005486808 nova_compute[259627]: 2025-10-14 08:56:11.000 2 INFO nova.virt.libvirt.driver [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Deletion of /var/lib/nova/instances/0936e31d-4bed-46c8-a561-05467cf93f75_del complete#033[00m
Oct 14 04:56:11 np0005486808 nova_compute[259627]: 2025-10-14 08:56:11.078 2 INFO nova.compute.manager [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Took 0.69 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 04:56:11 np0005486808 nova_compute[259627]: 2025-10-14 08:56:11.079 2 DEBUG oslo.service.loopingcall [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 04:56:11 np0005486808 nova_compute[259627]: 2025-10-14 08:56:11.079 2 DEBUG nova.compute.manager [-] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 04:56:11 np0005486808 nova_compute[259627]: 2025-10-14 08:56:11.079 2 DEBUG nova.network.neutron [-] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 04:56:11 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 305 active+clean; 134 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 5.4 MiB/s wr, 299 op/s
Oct 14 04:56:11 np0005486808 nova_compute[259627]: 2025-10-14 08:56:11.762 2 DEBUG nova.network.neutron [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:56:11 np0005486808 nova_compute[259627]: 2025-10-14 08:56:11.779 2 DEBUG nova.network.neutron [-] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 04:56:11 np0005486808 nova_compute[259627]: 2025-10-14 08:56:11.785 2 DEBUG oslo_concurrency.lockutils [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] Releasing lock "refresh_cache-ed89f48c-8144-453c-9357-0abc99716b22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:56:11 np0005486808 nova_compute[259627]: 2025-10-14 08:56:11.786 2 DEBUG nova.compute.manager [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 04:56:11 np0005486808 nova_compute[259627]: 2025-10-14 08:56:11.799 2 DEBUG nova.network.neutron [-] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:56:11 np0005486808 nova_compute[259627]: 2025-10-14 08:56:11.821 2 INFO nova.compute.manager [-] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Took 0.74 seconds to deallocate network for instance.#033[00m
Oct 14 04:56:11 np0005486808 nova_compute[259627]: 2025-10-14 08:56:11.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:11 np0005486808 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Oct 14 04:56:11 np0005486808 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Consumed 4.209s CPU time.
Oct 14 04:56:11 np0005486808 systemd-machined[214636]: Machine qemu-11-instance-0000000b terminated.
Oct 14 04:56:11 np0005486808 nova_compute[259627]: 2025-10-14 08:56:11.870 2 DEBUG oslo_concurrency.lockutils [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:56:11 np0005486808 nova_compute[259627]: 2025-10-14 08:56:11.871 2 DEBUG oslo_concurrency.lockutils [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:56:11 np0005486808 nova_compute[259627]: 2025-10-14 08:56:11.959 2 DEBUG oslo_concurrency.processutils [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:56:11 np0005486808 nova_compute[259627]: 2025-10-14 08:56:11.983 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:56:11 np0005486808 nova_compute[259627]: 2025-10-14 08:56:11.983 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 04:56:11 np0005486808 nova_compute[259627]: 2025-10-14 08:56:11.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:12 np0005486808 nova_compute[259627]: 2025-10-14 08:56:12.003 2 INFO nova.virt.libvirt.driver [-] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Instance destroyed successfully.#033[00m
Oct 14 04:56:12 np0005486808 nova_compute[259627]: 2025-10-14 08:56:12.003 2 DEBUG nova.objects.instance [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] Lazy-loading 'resources' on Instance uuid ed89f48c-8144-453c-9357-0abc99716b22 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:56:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:56:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1989591504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:56:12 np0005486808 nova_compute[259627]: 2025-10-14 08:56:12.415 2 DEBUG oslo_concurrency.processutils [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:56:12 np0005486808 nova_compute[259627]: 2025-10-14 08:56:12.422 2 INFO nova.virt.libvirt.driver [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Deleting instance files /var/lib/nova/instances/ed89f48c-8144-453c-9357-0abc99716b22_del#033[00m
Oct 14 04:56:12 np0005486808 nova_compute[259627]: 2025-10-14 08:56:12.423 2 INFO nova.virt.libvirt.driver [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Deletion of /var/lib/nova/instances/ed89f48c-8144-453c-9357-0abc99716b22_del complete#033[00m
Oct 14 04:56:12 np0005486808 nova_compute[259627]: 2025-10-14 08:56:12.427 2 DEBUG nova.compute.provider_tree [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:56:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:56:12 np0005486808 nova_compute[259627]: 2025-10-14 08:56:12.500 2 DEBUG nova.scheduler.client.report [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:56:12 np0005486808 nova_compute[259627]: 2025-10-14 08:56:12.543 2 DEBUG oslo_concurrency.lockutils [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:56:12 np0005486808 nova_compute[259627]: 2025-10-14 08:56:12.552 2 INFO nova.compute.manager [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Took 0.77 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 04:56:12 np0005486808 nova_compute[259627]: 2025-10-14 08:56:12.552 2 DEBUG oslo.service.loopingcall [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 04:56:12 np0005486808 nova_compute[259627]: 2025-10-14 08:56:12.553 2 DEBUG nova.compute.manager [-] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 04:56:12 np0005486808 nova_compute[259627]: 2025-10-14 08:56:12.553 2 DEBUG nova.network.neutron [-] [instance: ed89f48c-8144-453c-9357-0abc99716b22] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 04:56:12 np0005486808 nova_compute[259627]: 2025-10-14 08:56:12.571 2 INFO nova.scheduler.client.report [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Deleted allocations for instance 0936e31d-4bed-46c8-a561-05467cf93f75#033[00m
Oct 14 04:56:12 np0005486808 nova_compute[259627]: 2025-10-14 08:56:12.645 2 DEBUG oslo_concurrency.lockutils [None req-bdc27181-0e87-4f16-b532-206df04ee026 0a71ad98ce694327be82c546cc6f37b4 d319ba0849124bb381bfab8321324c76 - - default default] Lock "0936e31d-4bed-46c8-a561-05467cf93f75" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.089s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:56:12 np0005486808 nova_compute[259627]: 2025-10-14 08:56:12.727 2 DEBUG nova.network.neutron [-] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 04:56:12 np0005486808 nova_compute[259627]: 2025-10-14 08:56:12.743 2 DEBUG nova.network.neutron [-] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:56:12 np0005486808 nova_compute[259627]: 2025-10-14 08:56:12.760 2 INFO nova.compute.manager [-] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Took 0.21 seconds to deallocate network for instance.#033[00m
Oct 14 04:56:12 np0005486808 nova_compute[259627]: 2025-10-14 08:56:12.814 2 DEBUG oslo_concurrency.lockutils [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:56:12 np0005486808 nova_compute[259627]: 2025-10-14 08:56:12.815 2 DEBUG oslo_concurrency.lockutils [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:56:12 np0005486808 nova_compute[259627]: 2025-10-14 08:56:12.895 2 DEBUG oslo_concurrency.processutils [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:56:12 np0005486808 nova_compute[259627]: 2025-10-14 08:56:12.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:56:12 np0005486808 nova_compute[259627]: 2025-10-14 08:56:12.981 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:56:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:56:13 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/814602029' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:56:13 np0005486808 nova_compute[259627]: 2025-10-14 08:56:13.377 2 DEBUG oslo_concurrency.processutils [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:56:13 np0005486808 nova_compute[259627]: 2025-10-14 08:56:13.385 2 DEBUG nova.compute.provider_tree [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:56:13 np0005486808 nova_compute[259627]: 2025-10-14 08:56:13.414 2 DEBUG nova.scheduler.client.report [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:56:13 np0005486808 nova_compute[259627]: 2025-10-14 08:56:13.457 2 DEBUG oslo_concurrency.lockutils [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:56:13 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 134 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 3.6 MiB/s wr, 272 op/s
Oct 14 04:56:13 np0005486808 nova_compute[259627]: 2025-10-14 08:56:13.494 2 INFO nova.scheduler.client.report [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] Deleted allocations for instance ed89f48c-8144-453c-9357-0abc99716b22#033[00m
Oct 14 04:56:13 np0005486808 nova_compute[259627]: 2025-10-14 08:56:13.562 2 DEBUG oslo_concurrency.lockutils [None req-fe324dd9-a91d-49ee-9afb-cd10c6fd5ab3 282f4e31893c4c6fb7089cb5f9cc595e dadad3be94b24b99a53e2909310265c0 - - default default] Lock "ed89f48c-8144-453c-9357-0abc99716b22" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.024s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:56:13 np0005486808 nova_compute[259627]: 2025-10-14 08:56:13.974 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:56:13 np0005486808 nova_compute[259627]: 2025-10-14 08:56:13.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:56:13 np0005486808 nova_compute[259627]: 2025-10-14 08:56:13.998 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:56:13 np0005486808 nova_compute[259627]: 2025-10-14 08:56:13.998 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:56:13 np0005486808 nova_compute[259627]: 2025-10-14 08:56:13.998 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:56:13 np0005486808 nova_compute[259627]: 2025-10-14 08:56:13.999 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 04:56:13 np0005486808 nova_compute[259627]: 2025-10-14 08:56:13.999 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:56:14 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:56:14 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2082987097' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:56:14 np0005486808 nova_compute[259627]: 2025-10-14 08:56:14.392 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.393s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:56:14 np0005486808 nova_compute[259627]: 2025-10-14 08:56:14.602 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:56:14 np0005486808 nova_compute[259627]: 2025-10-14 08:56:14.604 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4639MB free_disk=59.94648361206055GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 04:56:14 np0005486808 nova_compute[259627]: 2025-10-14 08:56:14.604 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:56:14 np0005486808 nova_compute[259627]: 2025-10-14 08:56:14.605 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:56:14 np0005486808 nova_compute[259627]: 2025-10-14 08:56:14.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:14 np0005486808 nova_compute[259627]: 2025-10-14 08:56:14.678 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 04:56:14 np0005486808 nova_compute[259627]: 2025-10-14 08:56:14.678 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 04:56:14 np0005486808 nova_compute[259627]: 2025-10-14 08:56:14.710 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:56:15 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:56:15 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1413220776' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:56:15 np0005486808 nova_compute[259627]: 2025-10-14 08:56:15.125 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:56:15 np0005486808 nova_compute[259627]: 2025-10-14 08:56:15.132 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:56:15 np0005486808 nova_compute[259627]: 2025-10-14 08:56:15.148 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:56:15 np0005486808 nova_compute[259627]: 2025-10-14 08:56:15.171 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 04:56:15 np0005486808 nova_compute[259627]: 2025-10-14 08:56:15.171 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.566s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:56:15 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 3.6 MiB/s wr, 325 op/s
Oct 14 04:56:16 np0005486808 nova_compute[259627]: 2025-10-14 08:56:16.173 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:56:16 np0005486808 nova_compute[259627]: 2025-10-14 08:56:16.173 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:56:16 np0005486808 nova_compute[259627]: 2025-10-14 08:56:16.421 2 DEBUG oslo_concurrency.lockutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Acquiring lock "4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:56:16 np0005486808 nova_compute[259627]: 2025-10-14 08:56:16.422 2 DEBUG oslo_concurrency.lockutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lock "4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:56:16 np0005486808 nova_compute[259627]: 2025-10-14 08:56:16.442 2 DEBUG nova.compute.manager [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 04:56:16 np0005486808 nova_compute[259627]: 2025-10-14 08:56:16.537 2 DEBUG oslo_concurrency.lockutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:56:16 np0005486808 nova_compute[259627]: 2025-10-14 08:56:16.538 2 DEBUG oslo_concurrency.lockutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:56:16 np0005486808 nova_compute[259627]: 2025-10-14 08:56:16.543 2 DEBUG nova.virt.hardware [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 04:56:16 np0005486808 nova_compute[259627]: 2025-10-14 08:56:16.543 2 INFO nova.compute.claims [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Claim successful on node compute-0.ctlplane.example.com
Oct 14 04:56:16 np0005486808 nova_compute[259627]: 2025-10-14 08:56:16.647 2 DEBUG oslo_concurrency.processutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:56:16 np0005486808 nova_compute[259627]: 2025-10-14 08:56:16.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:56:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:56:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1126881945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:56:17 np0005486808 nova_compute[259627]: 2025-10-14 08:56:17.090 2 DEBUG oslo_concurrency.processutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:56:17 np0005486808 nova_compute[259627]: 2025-10-14 08:56:17.096 2 DEBUG nova.compute.provider_tree [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 04:56:17 np0005486808 nova_compute[259627]: 2025-10-14 08:56:17.120 2 DEBUG nova.scheduler.client.report [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 04:56:17 np0005486808 nova_compute[259627]: 2025-10-14 08:56:17.146 2 DEBUG oslo_concurrency.lockutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:56:17 np0005486808 nova_compute[259627]: 2025-10-14 08:56:17.147 2 DEBUG nova.compute.manager [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 04:56:17 np0005486808 nova_compute[259627]: 2025-10-14 08:56:17.210 2 DEBUG nova.compute.manager [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 04:56:17 np0005486808 nova_compute[259627]: 2025-10-14 08:56:17.211 2 DEBUG nova.network.neutron [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 04:56:17 np0005486808 nova_compute[259627]: 2025-10-14 08:56:17.232 2 INFO nova.virt.libvirt.driver [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 04:56:17 np0005486808 nova_compute[259627]: 2025-10-14 08:56:17.250 2 DEBUG nova.compute.manager [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 04:56:17 np0005486808 nova_compute[259627]: 2025-10-14 08:56:17.373 2 DEBUG nova.compute.manager [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 04:56:17 np0005486808 nova_compute[259627]: 2025-10-14 08:56:17.375 2 DEBUG nova.virt.libvirt.driver [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 04:56:17 np0005486808 nova_compute[259627]: 2025-10-14 08:56:17.376 2 INFO nova.virt.libvirt.driver [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Creating image(s)
Oct 14 04:56:17 np0005486808 nova_compute[259627]: 2025-10-14 08:56:17.411 2 DEBUG nova.storage.rbd_utils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] rbd image 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:56:17 np0005486808 nova_compute[259627]: 2025-10-14 08:56:17.446 2 DEBUG nova.storage.rbd_utils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] rbd image 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:56:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:56:17 np0005486808 nova_compute[259627]: 2025-10-14 08:56:17.477 2 DEBUG nova.storage.rbd_utils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] rbd image 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:56:17 np0005486808 nova_compute[259627]: 2025-10-14 08:56:17.481 2 DEBUG oslo_concurrency.processutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:56:17 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 227 op/s
Oct 14 04:56:17 np0005486808 nova_compute[259627]: 2025-10-14 08:56:17.565 2 DEBUG oslo_concurrency.processutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:56:17 np0005486808 nova_compute[259627]: 2025-10-14 08:56:17.567 2 DEBUG oslo_concurrency.lockutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:56:17 np0005486808 nova_compute[259627]: 2025-10-14 08:56:17.569 2 DEBUG oslo_concurrency.lockutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:56:17 np0005486808 nova_compute[259627]: 2025-10-14 08:56:17.569 2 DEBUG oslo_concurrency.lockutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:56:17 np0005486808 nova_compute[259627]: 2025-10-14 08:56:17.600 2 DEBUG nova.storage.rbd_utils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] rbd image 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:56:17 np0005486808 nova_compute[259627]: 2025-10-14 08:56:17.605 2 DEBUG oslo_concurrency.processutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:56:17 np0005486808 nova_compute[259627]: 2025-10-14 08:56:17.878 2 DEBUG nova.network.neutron [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 14 04:56:17 np0005486808 nova_compute[259627]: 2025-10-14 08:56:17.878 2 DEBUG nova.compute.manager [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 04:56:17 np0005486808 nova_compute[259627]: 2025-10-14 08:56:17.881 2 DEBUG oslo_concurrency.processutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.276s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:56:17 np0005486808 nova_compute[259627]: 2025-10-14 08:56:17.930 2 DEBUG nova.storage.rbd_utils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] resizing rbd image 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 04:56:17 np0005486808 nova_compute[259627]: 2025-10-14 08:56:17.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 04:56:17 np0005486808 nova_compute[259627]: 2025-10-14 08:56:17.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 04:56:18 np0005486808 nova_compute[259627]: 2025-10-14 08:56:18.012 2 DEBUG nova.objects.instance [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lazy-loading 'migration_context' on Instance uuid 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 04:56:18 np0005486808 nova_compute[259627]: 2025-10-14 08:56:18.045 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 04:56:18 np0005486808 nova_compute[259627]: 2025-10-14 08:56:18.045 2 DEBUG nova.virt.libvirt.driver [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 04:56:18 np0005486808 nova_compute[259627]: 2025-10-14 08:56:18.046 2 DEBUG nova.virt.libvirt.driver [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Ensure instance console log exists: /var/lib/nova/instances/4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 04:56:18 np0005486808 nova_compute[259627]: 2025-10-14 08:56:18.046 2 DEBUG oslo_concurrency.lockutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:56:18 np0005486808 nova_compute[259627]: 2025-10-14 08:56:18.046 2 DEBUG oslo_concurrency.lockutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:56:18 np0005486808 nova_compute[259627]: 2025-10-14 08:56:18.047 2 DEBUG oslo_concurrency.lockutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:56:18 np0005486808 nova_compute[259627]: 2025-10-14 08:56:18.048 2 DEBUG nova.virt.libvirt.driver [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 04:56:18 np0005486808 nova_compute[259627]: 2025-10-14 08:56:18.052 2 WARNING nova.virt.libvirt.driver [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 04:56:18 np0005486808 nova_compute[259627]: 2025-10-14 08:56:18.056 2 DEBUG nova.virt.libvirt.host [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 04:56:18 np0005486808 nova_compute[259627]: 2025-10-14 08:56:18.057 2 DEBUG nova.virt.libvirt.host [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 04:56:18 np0005486808 nova_compute[259627]: 2025-10-14 08:56:18.061 2 DEBUG nova.virt.libvirt.host [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 04:56:18 np0005486808 nova_compute[259627]: 2025-10-14 08:56:18.061 2 DEBUG nova.virt.libvirt.host [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 04:56:18 np0005486808 nova_compute[259627]: 2025-10-14 08:56:18.061 2 DEBUG nova.virt.libvirt.driver [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 04:56:18 np0005486808 nova_compute[259627]: 2025-10-14 08:56:18.061 2 DEBUG nova.virt.hardware [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 04:56:18 np0005486808 nova_compute[259627]: 2025-10-14 08:56:18.062 2 DEBUG nova.virt.hardware [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 04:56:18 np0005486808 nova_compute[259627]: 2025-10-14 08:56:18.062 2 DEBUG nova.virt.hardware [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 04:56:18 np0005486808 nova_compute[259627]: 2025-10-14 08:56:18.062 2 DEBUG nova.virt.hardware [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 04:56:18 np0005486808 nova_compute[259627]: 2025-10-14 08:56:18.063 2 DEBUG nova.virt.hardware [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 04:56:18 np0005486808 nova_compute[259627]: 2025-10-14 08:56:18.063 2 DEBUG nova.virt.hardware [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 04:56:18 np0005486808 nova_compute[259627]: 2025-10-14 08:56:18.063 2 DEBUG nova.virt.hardware [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 04:56:18 np0005486808 nova_compute[259627]: 2025-10-14 08:56:18.063 2 DEBUG nova.virt.hardware [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 04:56:18 np0005486808 nova_compute[259627]: 2025-10-14 08:56:18.064 2 DEBUG nova.virt.hardware [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 04:56:18 np0005486808 nova_compute[259627]: 2025-10-14 08:56:18.064 2 DEBUG nova.virt.hardware [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 04:56:18 np0005486808 nova_compute[259627]: 2025-10-14 08:56:18.064 2 DEBUG nova.virt.hardware [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 04:56:18 np0005486808 nova_compute[259627]: 2025-10-14 08:56:18.066 2 DEBUG oslo_concurrency.processutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:56:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:56:18 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3187502317' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:56:18 np0005486808 nova_compute[259627]: 2025-10-14 08:56:18.540 2 DEBUG oslo_concurrency.processutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:56:18 np0005486808 nova_compute[259627]: 2025-10-14 08:56:18.562 2 DEBUG nova.storage.rbd_utils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] rbd image 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:56:18 np0005486808 nova_compute[259627]: 2025-10-14 08:56:18.565 2 DEBUG oslo_concurrency.processutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:56:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:56:18 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3440874776' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:56:18 np0005486808 nova_compute[259627]: 2025-10-14 08:56:18.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 04:56:18 np0005486808 nova_compute[259627]: 2025-10-14 08:56:18.979 2 DEBUG oslo_concurrency.processutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:56:18 np0005486808 nova_compute[259627]: 2025-10-14 08:56:18.981 2 DEBUG nova.objects.instance [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 04:56:19 np0005486808 nova_compute[259627]: 2025-10-14 08:56:19.012 2 DEBUG nova.virt.libvirt.driver [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] End _get_guest_xml xml=<domain type="kvm">
Oct 14 04:56:19 np0005486808 nova_compute[259627]:  <uuid>4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543</uuid>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:  <name>instance-0000000c</name>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 04:56:19 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:      <nova:name>tempest-DeleteServersAdminTestJSON-server-280360781</nova:name>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 08:56:18</nova:creationTime>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 04:56:19 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:        <nova:user uuid="8414c5e33cd949b9809db7c92c81ef19">tempest-DeleteServersAdminTestJSON-882596417-project-member</nova:user>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:        <nova:project uuid="e939ba1f2e1f42ccb04f896c697625d2">tempest-DeleteServersAdminTestJSON-882596417</nova:project>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:      <nova:ports/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <system>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:      <entry name="serial">4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543</entry>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:      <entry name="uuid">4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543</entry>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    </system>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:  <os>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:  </os>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:  <features>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:  </features>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:  </clock>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:  <devices>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 04:56:19 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543_disk">
Oct 14 04:56:19 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:56:19 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 04:56:19 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543_disk.config">
Oct 14 04:56:19 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:56:19 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 04:56:19 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543/console.log" append="off"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    </serial>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <video>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    </video>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 04:56:19 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    </rng>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 04:56:19 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 04:56:19 np0005486808 nova_compute[259627]:  </devices>
Oct 14 04:56:19 np0005486808 nova_compute[259627]: </domain>
Oct 14 04:56:19 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 04:56:19 np0005486808 nova_compute[259627]: 2025-10-14 08:56:19.058 2 DEBUG nova.virt.libvirt.driver [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 04:56:19 np0005486808 nova_compute[259627]: 2025-10-14 08:56:19.059 2 DEBUG nova.virt.libvirt.driver [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 04:56:19 np0005486808 nova_compute[259627]: 2025-10-14 08:56:19.060 2 INFO nova.virt.libvirt.driver [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Using config drive
Oct 14 04:56:19 np0005486808 nova_compute[259627]: 2025-10-14 08:56:19.084 2 DEBUG nova.storage.rbd_utils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] rbd image 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:56:19 np0005486808 nova_compute[259627]: 2025-10-14 08:56:19.375 2 INFO nova.virt.libvirt.driver [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Creating config drive at /var/lib/nova/instances/4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543/disk.config
Oct 14 04:56:19 np0005486808 nova_compute[259627]: 2025-10-14 08:56:19.384 2 DEBUG oslo_concurrency.processutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgr_sbz5j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:56:19 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 227 op/s
Oct 14 04:56:19 np0005486808 nova_compute[259627]: 2025-10-14 08:56:19.527 2 DEBUG oslo_concurrency.processutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgr_sbz5j" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:56:19 np0005486808 nova_compute[259627]: 2025-10-14 08:56:19.565 2 DEBUG nova.storage.rbd_utils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] rbd image 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:56:19 np0005486808 nova_compute[259627]: 2025-10-14 08:56:19.571 2 DEBUG oslo_concurrency.processutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543/disk.config 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:56:19 np0005486808 nova_compute[259627]: 2025-10-14 08:56:19.599 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432164.57261, 726e9e21-4f40-48aa-947a-95a78db4dbf3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 04:56:19 np0005486808 nova_compute[259627]: 2025-10-14 08:56:19.600 2 INFO nova.compute.manager [-] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] VM Stopped (Lifecycle Event)
Oct 14 04:56:19 np0005486808 nova_compute[259627]: 2025-10-14 08:56:19.621 2 DEBUG nova.compute.manager [None req-7c8f2b14-9afd-49fd-9119-b17b5f5827c0 - - - - - -] [instance: 726e9e21-4f40-48aa-947a-95a78db4dbf3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 04:56:19 np0005486808 nova_compute[259627]: 2025-10-14 08:56:19.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:56:19 np0005486808 nova_compute[259627]: 2025-10-14 08:56:19.775 2 DEBUG oslo_concurrency.processutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543/disk.config 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:56:19 np0005486808 nova_compute[259627]: 2025-10-14 08:56:19.776 2 INFO nova.virt.libvirt.driver [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Deleting local config drive /var/lib/nova/instances/4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543/disk.config because it was imported into RBD.
Oct 14 04:56:19 np0005486808 systemd-machined[214636]: New machine qemu-12-instance-0000000c.
Oct 14 04:56:19 np0005486808 systemd[1]: Started Virtual Machine qemu-12-instance-0000000c.
Oct 14 04:56:20 np0005486808 nova_compute[259627]: 2025-10-14 08:56:20.788 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432180.7879593, 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 04:56:20 np0005486808 nova_compute[259627]: 2025-10-14 08:56:20.789 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] VM Resumed (Lifecycle Event)
Oct 14 04:56:20 np0005486808 nova_compute[259627]: 2025-10-14 08:56:20.791 2 DEBUG nova.compute.manager [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 04:56:20 np0005486808 nova_compute[259627]: 2025-10-14 08:56:20.791 2 DEBUG nova.virt.libvirt.driver [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 04:56:20 np0005486808 nova_compute[259627]: 2025-10-14 08:56:20.795 2 INFO nova.virt.libvirt.driver [-] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Instance spawned successfully.
Oct 14 04:56:20 np0005486808 nova_compute[259627]: 2025-10-14 08:56:20.795 2 DEBUG nova.virt.libvirt.driver [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 04:56:20 np0005486808 nova_compute[259627]: 2025-10-14 08:56:20.825 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 04:56:20 np0005486808 nova_compute[259627]: 2025-10-14 08:56:20.832 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 04:56:20 np0005486808 nova_compute[259627]: 2025-10-14 08:56:20.836 2 DEBUG nova.virt.libvirt.driver [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 04:56:20 np0005486808 nova_compute[259627]: 2025-10-14 08:56:20.836 2 DEBUG nova.virt.libvirt.driver [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 04:56:20 np0005486808 nova_compute[259627]: 2025-10-14 08:56:20.836 2 DEBUG nova.virt.libvirt.driver [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 04:56:20 np0005486808 nova_compute[259627]: 2025-10-14 08:56:20.837 2 DEBUG nova.virt.libvirt.driver [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 04:56:20 np0005486808 nova_compute[259627]: 2025-10-14 08:56:20.837 2 DEBUG nova.virt.libvirt.driver [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 04:56:20 np0005486808 nova_compute[259627]: 2025-10-14 08:56:20.837 2 DEBUG nova.virt.libvirt.driver [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 04:56:20 np0005486808 nova_compute[259627]: 2025-10-14 08:56:20.864 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 04:56:20 np0005486808 nova_compute[259627]: 2025-10-14 08:56:20.865 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432180.7881947, 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 04:56:20 np0005486808 nova_compute[259627]: 2025-10-14 08:56:20.865 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] VM Started (Lifecycle Event)
Oct 14 04:56:20 np0005486808 nova_compute[259627]: 2025-10-14 08:56:20.891 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 04:56:20 np0005486808 nova_compute[259627]: 2025-10-14 08:56:20.894 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 04:56:20 np0005486808 nova_compute[259627]: 2025-10-14 08:56:20.901 2 INFO nova.compute.manager [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Took 3.53 seconds to spawn the instance on the hypervisor.
Oct 14 04:56:20 np0005486808 nova_compute[259627]: 2025-10-14 08:56:20.901 2 DEBUG nova.compute.manager [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 04:56:20 np0005486808 nova_compute[259627]: 2025-10-14 08:56:20.913 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 04:56:20 np0005486808 nova_compute[259627]: 2025-10-14 08:56:20.963 2 INFO nova.compute.manager [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Took 4.47 seconds to build instance.
Oct 14 04:56:20 np0005486808 nova_compute[259627]: 2025-10-14 08:56:20.986 2 DEBUG oslo_concurrency.lockutils [None req-52971c94-6378-4a45-9702-b9b3379b46ad 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lock "4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:56:21 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 261 op/s
Oct 14 04:56:21 np0005486808 nova_compute[259627]: 2025-10-14 08:56:21.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:56:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:56:23 np0005486808 nova_compute[259627]: 2025-10-14 08:56:23.220 2 DEBUG oslo_concurrency.lockutils [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Acquiring lock "4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:56:23 np0005486808 nova_compute[259627]: 2025-10-14 08:56:23.220 2 DEBUG oslo_concurrency.lockutils [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lock "4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:56:23 np0005486808 nova_compute[259627]: 2025-10-14 08:56:23.220 2 DEBUG oslo_concurrency.lockutils [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Acquiring lock "4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:56:23 np0005486808 nova_compute[259627]: 2025-10-14 08:56:23.221 2 DEBUG oslo_concurrency.lockutils [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lock "4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:56:23 np0005486808 nova_compute[259627]: 2025-10-14 08:56:23.221 2 DEBUG oslo_concurrency.lockutils [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lock "4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:56:23 np0005486808 nova_compute[259627]: 2025-10-14 08:56:23.222 2 INFO nova.compute.manager [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Terminating instance
Oct 14 04:56:23 np0005486808 nova_compute[259627]: 2025-10-14 08:56:23.223 2 DEBUG oslo_concurrency.lockutils [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Acquiring lock "refresh_cache-4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 04:56:23 np0005486808 nova_compute[259627]: 2025-10-14 08:56:23.224 2 DEBUG oslo_concurrency.lockutils [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Acquired lock "refresh_cache-4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 04:56:23 np0005486808 nova_compute[259627]: 2025-10-14 08:56:23.224 2 DEBUG nova.network.neutron [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 04:56:23 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 1.8 MiB/s wr, 87 op/s
Oct 14 04:56:23 np0005486808 nova_compute[259627]: 2025-10-14 08:56:23.557 2 DEBUG nova.network.neutron [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 04:56:23 np0005486808 nova_compute[259627]: 2025-10-14 08:56:23.836 2 DEBUG nova.network.neutron [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 04:56:23 np0005486808 nova_compute[259627]: 2025-10-14 08:56:23.861 2 DEBUG oslo_concurrency.lockutils [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Releasing lock "refresh_cache-4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 04:56:23 np0005486808 nova_compute[259627]: 2025-10-14 08:56:23.862 2 DEBUG nova.compute.manager [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 04:56:23 np0005486808 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Oct 14 04:56:23 np0005486808 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Consumed 4.043s CPU time.
Oct 14 04:56:23 np0005486808 systemd-machined[214636]: Machine qemu-12-instance-0000000c terminated.
Oct 14 04:56:24 np0005486808 nova_compute[259627]: 2025-10-14 08:56:24.086 2 INFO nova.virt.libvirt.driver [-] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Instance destroyed successfully.
Oct 14 04:56:24 np0005486808 nova_compute[259627]: 2025-10-14 08:56:24.087 2 DEBUG nova.objects.instance [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lazy-loading 'resources' on Instance uuid 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 04:56:24 np0005486808 nova_compute[259627]: 2025-10-14 08:56:24.531 2 INFO nova.virt.libvirt.driver [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Deleting instance files /var/lib/nova/instances/4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543_del
Oct 14 04:56:24 np0005486808 nova_compute[259627]: 2025-10-14 08:56:24.532 2 INFO nova.virt.libvirt.driver [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Deletion of /var/lib/nova/instances/4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543_del complete
Oct 14 04:56:24 np0005486808 nova_compute[259627]: 2025-10-14 08:56:24.551 2 DEBUG oslo_concurrency.lockutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquiring lock "548bff7e-531b-4f5d-b4d3-18d586f46581" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:56:24 np0005486808 nova_compute[259627]: 2025-10-14 08:56:24.552 2 DEBUG oslo_concurrency.lockutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "548bff7e-531b-4f5d-b4d3-18d586f46581" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:56:24 np0005486808 nova_compute[259627]: 2025-10-14 08:56:24.582 2 DEBUG nova.compute.manager [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 04:56:24 np0005486808 nova_compute[259627]: 2025-10-14 08:56:24.590 2 INFO nova.compute.manager [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Took 0.73 seconds to destroy the instance on the hypervisor.
Oct 14 04:56:24 np0005486808 nova_compute[259627]: 2025-10-14 08:56:24.591 2 DEBUG oslo.service.loopingcall [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 04:56:24 np0005486808 nova_compute[259627]: 2025-10-14 08:56:24.591 2 DEBUG nova.compute.manager [-] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 04:56:24 np0005486808 nova_compute[259627]: 2025-10-14 08:56:24.592 2 DEBUG nova.network.neutron [-] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 04:56:24 np0005486808 nova_compute[259627]: 2025-10-14 08:56:24.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:56:24 np0005486808 nova_compute[259627]: 2025-10-14 08:56:24.668 2 DEBUG oslo_concurrency.lockutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:56:24 np0005486808 nova_compute[259627]: 2025-10-14 08:56:24.669 2 DEBUG oslo_concurrency.lockutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:56:24 np0005486808 nova_compute[259627]: 2025-10-14 08:56:24.676 2 DEBUG nova.virt.hardware [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 04:56:24 np0005486808 nova_compute[259627]: 2025-10-14 08:56:24.676 2 INFO nova.compute.claims [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 04:56:24 np0005486808 nova_compute[259627]: 2025-10-14 08:56:24.835 2 DEBUG oslo_concurrency.processutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:56:24 np0005486808 nova_compute[259627]: 2025-10-14 08:56:24.923 2 DEBUG nova.network.neutron [-] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 04:56:24 np0005486808 nova_compute[259627]: 2025-10-14 08:56:24.942 2 DEBUG nova.network.neutron [-] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:56:24 np0005486808 nova_compute[259627]: 2025-10-14 08:56:24.956 2 INFO nova.compute.manager [-] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Took 0.36 seconds to deallocate network for instance.#033[00m
Oct 14 04:56:25 np0005486808 nova_compute[259627]: 2025-10-14 08:56:25.005 2 DEBUG oslo_concurrency.lockutils [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:56:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:56:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4166849200' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:56:25 np0005486808 nova_compute[259627]: 2025-10-14 08:56:25.304 2 DEBUG oslo_concurrency.processutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:56:25 np0005486808 nova_compute[259627]: 2025-10-14 08:56:25.314 2 DEBUG nova.compute.provider_tree [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 04:56:25 np0005486808 nova_compute[259627]: 2025-10-14 08:56:25.329 2 DEBUG nova.scheduler.client.report [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 04:56:25 np0005486808 nova_compute[259627]: 2025-10-14 08:56:25.350 2 DEBUG oslo_concurrency.lockutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:56:25 np0005486808 nova_compute[259627]: 2025-10-14 08:56:25.351 2 DEBUG nova.compute.manager [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 04:56:25 np0005486808 nova_compute[259627]: 2025-10-14 08:56:25.354 2 DEBUG oslo_concurrency.lockutils [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.349s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:56:25 np0005486808 nova_compute[259627]: 2025-10-14 08:56:25.415 2 DEBUG oslo_concurrency.processutils [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:56:25 np0005486808 nova_compute[259627]: 2025-10-14 08:56:25.438 2 DEBUG nova.compute.manager [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 04:56:25 np0005486808 nova_compute[259627]: 2025-10-14 08:56:25.440 2 DEBUG nova.network.neutron [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 04:56:25 np0005486808 nova_compute[259627]: 2025-10-14 08:56:25.473 2 INFO nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
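Annotation: the `ceph df --format=json` subprocesses above are how nova-compute's RBD backend samples pool capacity for the resource tracker. A minimal sketch of consuming that JSON, assuming a representative (not from this log) field layout with top-level `stats` and per-pool `pools[].stats.stored` / `max_avail`:

```python
import json

# Assumed, representative shape of `ceph df --format=json` output.
sample = json.loads("""
{
  "stats": {"total_bytes": 64424509440,
            "total_used_bytes": 270532608,
            "total_avail_bytes": 64153976832},
  "pools": [{"name": "vms",
             "stats": {"stored": 42991616, "max_avail": 20401094656}}]
}
""")

def pool_capacity_gib(df, pool_name):
    """Return (stored_gib, max_avail_gib) for one pool, or None if absent."""
    gib = 1024 ** 3
    for pool in df["pools"]:
        if pool["name"] == pool_name:
            s = pool["stats"]
            return s["stored"] / gib, s["max_avail"] / gib
    return None

print(pool_capacity_gib(sample, "vms"))
```

The 0.469s wall time logged for the command is dominated by the mon round trip (the matching `handle_command ... "prefix": "df"` lines appear in the ceph-mon log), not by parsing.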
Oct 14 04:56:25 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1130: 305 pgs: 305 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 179 op/s
Oct 14 04:56:25 np0005486808 nova_compute[259627]: 2025-10-14 08:56:25.492 2 DEBUG nova.compute.manager [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 04:56:25 np0005486808 nova_compute[259627]: 2025-10-14 08:56:25.583 2 DEBUG nova.compute.manager [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 04:56:25 np0005486808 nova_compute[259627]: 2025-10-14 08:56:25.585 2 DEBUG nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 04:56:25 np0005486808 nova_compute[259627]: 2025-10-14 08:56:25.586 2 INFO nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Creating image(s)
Oct 14 04:56:25 np0005486808 nova_compute[259627]: 2025-10-14 08:56:25.624 2 DEBUG nova.storage.rbd_utils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] rbd image 548bff7e-531b-4f5d-b4d3-18d586f46581_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:56:25 np0005486808 nova_compute[259627]: 2025-10-14 08:56:25.661 2 DEBUG nova.storage.rbd_utils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] rbd image 548bff7e-531b-4f5d-b4d3-18d586f46581_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:56:25 np0005486808 nova_compute[259627]: 2025-10-14 08:56:25.690 2 DEBUG nova.storage.rbd_utils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] rbd image 548bff7e-531b-4f5d-b4d3-18d586f46581_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:56:25 np0005486808 nova_compute[259627]: 2025-10-14 08:56:25.695 2 DEBUG oslo_concurrency.processutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:56:25 np0005486808 nova_compute[259627]: 2025-10-14 08:56:25.723 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432170.6048746, 0936e31d-4bed-46c8-a561-05467cf93f75 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 04:56:25 np0005486808 nova_compute[259627]: 2025-10-14 08:56:25.724 2 INFO nova.compute.manager [-] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] VM Stopped (Lifecycle Event)
Oct 14 04:56:25 np0005486808 nova_compute[259627]: 2025-10-14 08:56:25.751 2 DEBUG nova.compute.manager [None req-af93ef21-ed20-4481-9f5c-b0cfc5141508 - - - - - -] [instance: 0936e31d-4bed-46c8-a561-05467cf93f75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 04:56:25 np0005486808 nova_compute[259627]: 2025-10-14 08:56:25.789 2 DEBUG oslo_concurrency.processutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:56:25 np0005486808 nova_compute[259627]: 2025-10-14 08:56:25.790 2 DEBUG oslo_concurrency.lockutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:56:25 np0005486808 nova_compute[259627]: 2025-10-14 08:56:25.791 2 DEBUG oslo_concurrency.lockutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:56:25 np0005486808 nova_compute[259627]: 2025-10-14 08:56:25.792 2 DEBUG oslo_concurrency.lockutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:56:25 np0005486808 nova_compute[259627]: 2025-10-14 08:56:25.819 2 DEBUG nova.storage.rbd_utils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] rbd image 548bff7e-531b-4f5d-b4d3-18d586f46581_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:56:25 np0005486808 nova_compute[259627]: 2025-10-14 08:56:25.823 2 DEBUG oslo_concurrency.processutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 548bff7e-531b-4f5d-b4d3-18d586f46581_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:56:25 np0005486808 nova_compute[259627]: 2025-10-14 08:56:25.853 2 DEBUG nova.policy [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f50e2582d63041b682c71a379f763c0e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9bf65c21e4104af6981b071561617657', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 04:56:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:56:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/785891186' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:56:25 np0005486808 nova_compute[259627]: 2025-10-14 08:56:25.890 2 DEBUG oslo_concurrency.processutils [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:56:25 np0005486808 nova_compute[259627]: 2025-10-14 08:56:25.895 2 DEBUG nova.compute.provider_tree [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 04:56:25 np0005486808 nova_compute[259627]: 2025-10-14 08:56:25.919 2 DEBUG nova.scheduler.client.report [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 04:56:25 np0005486808 nova_compute[259627]: 2025-10-14 08:56:25.956 2 DEBUG oslo_concurrency.lockutils [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:56:25 np0005486808 nova_compute[259627]: 2025-10-14 08:56:25.987 2 INFO nova.scheduler.client.report [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Deleted allocations for instance 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543
Oct 14 04:56:26 np0005486808 nova_compute[259627]: 2025-10-14 08:56:26.066 2 DEBUG oslo_concurrency.lockutils [None req-1312c871-c606-4fd9-821d-c1bf8cc4831c 8414c5e33cd949b9809db7c92c81ef19 e939ba1f2e1f42ccb04f896c697625d2 - - default default] Lock "4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.845s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:56:26 np0005486808 nova_compute[259627]: 2025-10-14 08:56:26.110 2 DEBUG oslo_concurrency.processutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 548bff7e-531b-4f5d-b4d3-18d586f46581_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.286s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:56:26 np0005486808 nova_compute[259627]: 2025-10-14 08:56:26.160 2 DEBUG nova.storage.rbd_utils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] resizing rbd image 548bff7e-531b-4f5d-b4d3-18d586f46581_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 04:56:26 np0005486808 nova_compute[259627]: 2025-10-14 08:56:26.247 2 DEBUG nova.objects.instance [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lazy-loading 'migration_context' on Instance uuid 548bff7e-531b-4f5d-b4d3-18d586f46581 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 04:56:26 np0005486808 nova_compute[259627]: 2025-10-14 08:56:26.267 2 DEBUG nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 04:56:26 np0005486808 nova_compute[259627]: 2025-10-14 08:56:26.267 2 DEBUG nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Ensure instance console log exists: /var/lib/nova/instances/548bff7e-531b-4f5d-b4d3-18d586f46581/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 04:56:26 np0005486808 nova_compute[259627]: 2025-10-14 08:56:26.268 2 DEBUG oslo_concurrency.lockutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:56:26 np0005486808 nova_compute[259627]: 2025-10-14 08:56:26.269 2 DEBUG oslo_concurrency.lockutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:56:26 np0005486808 nova_compute[259627]: 2025-10-14 08:56:26.269 2 DEBUG oslo_concurrency.lockutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:56:26 np0005486808 nova_compute[259627]: 2025-10-14 08:56:26.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:56:27 np0005486808 nova_compute[259627]: 2025-10-14 08:56:27.003 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432172.0018141, ed89f48c-8144-453c-9357-0abc99716b22 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 04:56:27 np0005486808 nova_compute[259627]: 2025-10-14 08:56:27.003 2 INFO nova.compute.manager [-] [instance: ed89f48c-8144-453c-9357-0abc99716b22] VM Stopped (Lifecycle Event)
Oct 14 04:56:27 np0005486808 nova_compute[259627]: 2025-10-14 08:56:27.037 2 DEBUG nova.compute.manager [None req-41272684-7352-4219-aaba-79ee3340b08e - - - - - -] [instance: ed89f48c-8144-453c-9357-0abc99716b22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 04:56:27 np0005486808 nova_compute[259627]: 2025-10-14 08:56:27.145 2 DEBUG nova.network.neutron [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Successfully created port: 7da6c99d-4e04-4c0b-b4d0-d32a2e19c462 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
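Annotation: the oslo_concurrency.lockutils lines throughout this excerpt (for "compute_resources", "vgpu_resources", the instance UUIDs, the base-image hash) all follow one fixed Acquiring / acquired :: waited / "released" :: held pattern, which makes lock contention directly greppable. A stdlib-only sketch of that instrumentation style, not oslo's actual implementation; `timed_lock` and its message text are illustrative:

```python
import threading
import time
from contextlib import contextmanager

@contextmanager
def timed_lock(lock, name, log=print):
    """Log acquire/release with wait and hold durations, lockutils-style."""
    log(f'Acquiring lock "{name}"')
    t0 = time.monotonic()
    lock.acquire()
    t1 = time.monotonic()
    log(f'Lock "{name}" acquired :: waited {t1 - t0:.3f}s')
    try:
        yield
    finally:
        lock.release()
        log(f'Lock "{name}" "released" :: held {time.monotonic() - t1:.3f}s')

msgs = []
with timed_lock(threading.Lock(), "compute_resources", msgs.append):
    pass  # critical section
```

With such triples, the "waited" value on an acquired line (e.g. the 0.349s wait on "compute_resources" above) pairs with the "held" value logged by whichever holder ran just before it.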
Oct 14 04:56:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:56:27 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Oct 14 04:56:27 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:56:27.463678) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 04:56:27 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Oct 14 04:56:27 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432187463698, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1179, "num_deletes": 507, "total_data_size": 1159067, "memory_usage": 1187552, "flush_reason": "Manual Compaction"}
Oct 14 04:56:27 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Oct 14 04:56:27 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432187468737, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 763981, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22966, "largest_seqno": 24144, "table_properties": {"data_size": 759565, "index_size": 1428, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14273, "raw_average_key_size": 18, "raw_value_size": 748105, "raw_average_value_size": 992, "num_data_blocks": 64, "num_entries": 754, "num_filter_entries": 754, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760432113, "oldest_key_time": 1760432113, "file_creation_time": 1760432187, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Oct 14 04:56:27 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 5092 microseconds, and 2391 cpu microseconds.
Oct 14 04:56:27 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 04:56:27 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:56:27.468768) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 763981 bytes OK
Oct 14 04:56:27 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:56:27.468783) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Oct 14 04:56:27 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:56:27.469981) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Oct 14 04:56:27 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:56:27.469993) EVENT_LOG_v1 {"time_micros": 1760432187469989, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 04:56:27 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:56:27.470005) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 04:56:27 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1152515, prev total WAL file size 1152515, number of live WAL files 2.
Oct 14 04:56:27 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 04:56:27 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:56:27.470636) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353030' seq:72057594037927935, type:22 .. '6C6F676D00373533' seq:0, type:0; will stop at (end)
Oct 14 04:56:27 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 04:56:27 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(746KB)], [53(8892KB)]
Oct 14 04:56:27 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432187470692, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 9870209, "oldest_snapshot_seqno": -1}
Oct 14 04:56:27 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct 14 04:56:27 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4484 keys, 6862429 bytes, temperature: kUnknown
Oct 14 04:56:27 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432187500816, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 6862429, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6833001, "index_size": 17094, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11269, "raw_key_size": 112478, "raw_average_key_size": 25, "raw_value_size": 6752430, "raw_average_value_size": 1505, "num_data_blocks": 711, "num_entries": 4484, "num_filter_entries": 4484, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760432187, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Oct 14 04:56:27 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 04:56:27 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:56:27.501037) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 6862429 bytes
Oct 14 04:56:27 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:56:27.502256) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 327.0 rd, 227.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 8.7 +0.0 blob) out(6.5 +0.0 blob), read-write-amplify(21.9) write-amplify(9.0) OK, records in: 5482, records dropped: 998 output_compression: NoCompression
Oct 14 04:56:27 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:56:27.502271) EVENT_LOG_v1 {"time_micros": 1760432187502264, "job": 28, "event": "compaction_finished", "compaction_time_micros": 30180, "compaction_time_cpu_micros": 16375, "output_level": 6, "num_output_files": 1, "total_output_size": 6862429, "num_input_records": 5482, "num_output_records": 4484, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 04:56:27 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 04:56:27 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432187502476, "job": 28, "event": "table_file_deletion", "file_number": 55}
Oct 14 04:56:27 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 04:56:27 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432187503735, "job": 28, "event": "table_file_deletion", "file_number": 53}
Oct 14 04:56:27 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:56:27.470517) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:56:27 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:56:27.503773) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:56:27 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:56:27.503778) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:56:27 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:56:27.503780) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:56:27 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:56:27.503781) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:56:27 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:56:27.503783) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:56:27 np0005486808 podman[282315]: 2025-10-14 08:56:27.67977122 +0000 UTC m=+0.083530882 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 14 04:56:27 np0005486808 podman[282314]: 2025-10-14 08:56:27.683741807 +0000 UTC m=+0.089127919 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, config_id=multipathd)
Oct 14 04:56:28 np0005486808 nova_compute[259627]: 2025-10-14 08:56:28.622 2 DEBUG nova.network.neutron [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Successfully updated port: 7da6c99d-4e04-4c0b-b4d0-d32a2e19c462 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 04:56:28 np0005486808 nova_compute[259627]: 2025-10-14 08:56:28.637 2 DEBUG oslo_concurrency.lockutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquiring lock "refresh_cache-548bff7e-531b-4f5d-b4d3-18d586f46581" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:56:28 np0005486808 nova_compute[259627]: 2025-10-14 08:56:28.637 2 DEBUG oslo_concurrency.lockutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquired lock "refresh_cache-548bff7e-531b-4f5d-b4d3-18d586f46581" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:56:28 np0005486808 nova_compute[259627]: 2025-10-14 08:56:28.638 2 DEBUG nova.network.neutron [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 04:56:28 np0005486808 nova_compute[259627]: 2025-10-14 08:56:28.650 2 DEBUG oslo_concurrency.lockutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "5de76ef0-5c03-4b43-a691-c858cecd9e80" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:56:28 np0005486808 nova_compute[259627]: 2025-10-14 08:56:28.651 2 DEBUG oslo_concurrency.lockutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "5de76ef0-5c03-4b43-a691-c858cecd9e80" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:56:28 np0005486808 nova_compute[259627]: 2025-10-14 08:56:28.668 2 DEBUG nova.compute.manager [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 04:56:28 np0005486808 nova_compute[259627]: 2025-10-14 08:56:28.746 2 DEBUG oslo_concurrency.lockutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:56:28 np0005486808 nova_compute[259627]: 2025-10-14 08:56:28.746 2 DEBUG oslo_concurrency.lockutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:56:28 np0005486808 nova_compute[259627]: 2025-10-14 08:56:28.756 2 DEBUG nova.virt.hardware [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 04:56:28 np0005486808 nova_compute[259627]: 2025-10-14 08:56:28.756 2 INFO nova.compute.claims [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 04:56:28 np0005486808 nova_compute[259627]: 2025-10-14 08:56:28.856 2 DEBUG nova.compute.manager [req-ae3acb7e-3cf2-4e95-9fe3-65fde29a8421 req-8e2fbb13-66d1-4f95-88d8-5af4f568c129 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Received event network-changed-7da6c99d-4e04-4c0b-b4d0-d32a2e19c462 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:56:28 np0005486808 nova_compute[259627]: 2025-10-14 08:56:28.857 2 DEBUG nova.compute.manager [req-ae3acb7e-3cf2-4e95-9fe3-65fde29a8421 req-8e2fbb13-66d1-4f95-88d8-5af4f568c129 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Refreshing instance network info cache due to event network-changed-7da6c99d-4e04-4c0b-b4d0-d32a2e19c462. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 04:56:28 np0005486808 nova_compute[259627]: 2025-10-14 08:56:28.857 2 DEBUG oslo_concurrency.lockutils [req-ae3acb7e-3cf2-4e95-9fe3-65fde29a8421 req-8e2fbb13-66d1-4f95-88d8-5af4f568c129 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-548bff7e-531b-4f5d-b4d3-18d586f46581" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:56:28 np0005486808 nova_compute[259627]: 2025-10-14 08:56:28.898 2 DEBUG oslo_concurrency.processutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:56:28 np0005486808 nova_compute[259627]: 2025-10-14 08:56:28.933 2 DEBUG nova.network.neutron [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 04:56:29 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:56:29 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1698723594' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:56:29 np0005486808 nova_compute[259627]: 2025-10-14 08:56:29.397 2 DEBUG oslo_concurrency.processutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:56:29 np0005486808 nova_compute[259627]: 2025-10-14 08:56:29.405 2 DEBUG nova.compute.provider_tree [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:56:29 np0005486808 nova_compute[259627]: 2025-10-14 08:56:29.426 2 DEBUG nova.scheduler.client.report [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:56:29 np0005486808 nova_compute[259627]: 2025-10-14 08:56:29.451 2 DEBUG oslo_concurrency.lockutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:56:29 np0005486808 nova_compute[259627]: 2025-10-14 08:56:29.452 2 DEBUG nova.compute.manager [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 04:56:29 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct 14 04:56:29 np0005486808 nova_compute[259627]: 2025-10-14 08:56:29.503 2 DEBUG nova.compute.manager [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 04:56:29 np0005486808 nova_compute[259627]: 2025-10-14 08:56:29.504 2 DEBUG nova.network.neutron [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 04:56:29 np0005486808 nova_compute[259627]: 2025-10-14 08:56:29.523 2 INFO nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 04:56:29 np0005486808 nova_compute[259627]: 2025-10-14 08:56:29.539 2 DEBUG nova.compute.manager [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 04:56:29 np0005486808 nova_compute[259627]: 2025-10-14 08:56:29.620 2 DEBUG nova.compute.manager [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 04:56:29 np0005486808 nova_compute[259627]: 2025-10-14 08:56:29.622 2 DEBUG nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 04:56:29 np0005486808 nova_compute[259627]: 2025-10-14 08:56:29.622 2 INFO nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Creating image(s)#033[00m
Oct 14 04:56:29 np0005486808 nova_compute[259627]: 2025-10-14 08:56:29.646 2 DEBUG nova.storage.rbd_utils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] rbd image 5de76ef0-5c03-4b43-a691-c858cecd9e80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:56:29 np0005486808 nova_compute[259627]: 2025-10-14 08:56:29.674 2 DEBUG nova.storage.rbd_utils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] rbd image 5de76ef0-5c03-4b43-a691-c858cecd9e80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:56:29 np0005486808 nova_compute[259627]: 2025-10-14 08:56:29.698 2 DEBUG nova.storage.rbd_utils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] rbd image 5de76ef0-5c03-4b43-a691-c858cecd9e80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:56:29 np0005486808 nova_compute[259627]: 2025-10-14 08:56:29.702 2 DEBUG oslo_concurrency.processutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:56:29 np0005486808 nova_compute[259627]: 2025-10-14 08:56:29.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:29 np0005486808 nova_compute[259627]: 2025-10-14 08:56:29.771 2 DEBUG oslo_concurrency.processutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:56:29 np0005486808 nova_compute[259627]: 2025-10-14 08:56:29.772 2 DEBUG oslo_concurrency.lockutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:56:29 np0005486808 nova_compute[259627]: 2025-10-14 08:56:29.772 2 DEBUG oslo_concurrency.lockutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:56:29 np0005486808 nova_compute[259627]: 2025-10-14 08:56:29.773 2 DEBUG oslo_concurrency.lockutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:56:29 np0005486808 nova_compute[259627]: 2025-10-14 08:56:29.794 2 DEBUG nova.storage.rbd_utils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] rbd image 5de76ef0-5c03-4b43-a691-c858cecd9e80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:56:29 np0005486808 nova_compute[259627]: 2025-10-14 08:56:29.797 2 DEBUG oslo_concurrency.processutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 5de76ef0-5c03-4b43-a691-c858cecd9e80_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:56:29 np0005486808 nova_compute[259627]: 2025-10-14 08:56:29.912 2 DEBUG nova.policy [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'aafd6ad40c944c3eb14e7fbf454040c3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f24bbeb2f91141e294590ca2afc5ed42', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 04:56:30 np0005486808 nova_compute[259627]: 2025-10-14 08:56:30.092 2 DEBUG oslo_concurrency.processutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 5de76ef0-5c03-4b43-a691-c858cecd9e80_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.295s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:56:30 np0005486808 nova_compute[259627]: 2025-10-14 08:56:30.188 2 DEBUG nova.storage.rbd_utils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] resizing rbd image 5de76ef0-5c03-4b43-a691-c858cecd9e80_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 04:56:30 np0005486808 nova_compute[259627]: 2025-10-14 08:56:30.296 2 DEBUG nova.objects.instance [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lazy-loading 'migration_context' on Instance uuid 5de76ef0-5c03-4b43-a691-c858cecd9e80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:56:30 np0005486808 nova_compute[259627]: 2025-10-14 08:56:30.310 2 DEBUG nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 04:56:30 np0005486808 nova_compute[259627]: 2025-10-14 08:56:30.311 2 DEBUG nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Ensure instance console log exists: /var/lib/nova/instances/5de76ef0-5c03-4b43-a691-c858cecd9e80/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 04:56:30 np0005486808 nova_compute[259627]: 2025-10-14 08:56:30.311 2 DEBUG oslo_concurrency.lockutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:56:30 np0005486808 nova_compute[259627]: 2025-10-14 08:56:30.312 2 DEBUG oslo_concurrency.lockutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:56:30 np0005486808 nova_compute[259627]: 2025-10-14 08:56:30.312 2 DEBUG oslo_concurrency.lockutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:56:30 np0005486808 nova_compute[259627]: 2025-10-14 08:56:30.324 2 DEBUG nova.network.neutron [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Updating instance_info_cache with network_info: [{"id": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "address": "fa:16:3e:a5:2f:21", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da6c99d-4e", "ovs_interfaceid": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:56:30 np0005486808 nova_compute[259627]: 2025-10-14 08:56:30.342 2 DEBUG oslo_concurrency.lockutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Releasing lock "refresh_cache-548bff7e-531b-4f5d-b4d3-18d586f46581" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:56:30 np0005486808 nova_compute[259627]: 2025-10-14 08:56:30.343 2 DEBUG nova.compute.manager [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Instance network_info: |[{"id": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "address": "fa:16:3e:a5:2f:21", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da6c99d-4e", "ovs_interfaceid": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 04:56:30 np0005486808 nova_compute[259627]: 2025-10-14 08:56:30.343 2 DEBUG oslo_concurrency.lockutils [req-ae3acb7e-3cf2-4e95-9fe3-65fde29a8421 req-8e2fbb13-66d1-4f95-88d8-5af4f568c129 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-548bff7e-531b-4f5d-b4d3-18d586f46581" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:56:30 np0005486808 nova_compute[259627]: 2025-10-14 08:56:30.343 2 DEBUG nova.network.neutron [req-ae3acb7e-3cf2-4e95-9fe3-65fde29a8421 req-8e2fbb13-66d1-4f95-88d8-5af4f568c129 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Refreshing network info cache for port 7da6c99d-4e04-4c0b-b4d0-d32a2e19c462 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 04:56:30 np0005486808 nova_compute[259627]: 2025-10-14 08:56:30.347 2 DEBUG nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Start _get_guest_xml network_info=[{"id": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "address": "fa:16:3e:a5:2f:21", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da6c99d-4e", "ovs_interfaceid": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 04:56:30 np0005486808 nova_compute[259627]: 2025-10-14 08:56:30.351 2 WARNING nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:56:30 np0005486808 nova_compute[259627]: 2025-10-14 08:56:30.358 2 DEBUG nova.virt.libvirt.host [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 04:56:30 np0005486808 nova_compute[259627]: 2025-10-14 08:56:30.358 2 DEBUG nova.virt.libvirt.host [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 04:56:30 np0005486808 nova_compute[259627]: 2025-10-14 08:56:30.361 2 DEBUG nova.virt.libvirt.host [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 04:56:30 np0005486808 nova_compute[259627]: 2025-10-14 08:56:30.362 2 DEBUG nova.virt.libvirt.host [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 04:56:30 np0005486808 nova_compute[259627]: 2025-10-14 08:56:30.362 2 DEBUG nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 04:56:30 np0005486808 nova_compute[259627]: 2025-10-14 08:56:30.362 2 DEBUG nova.virt.hardware [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 04:56:30 np0005486808 nova_compute[259627]: 2025-10-14 08:56:30.363 2 DEBUG nova.virt.hardware [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 04:56:30 np0005486808 nova_compute[259627]: 2025-10-14 08:56:30.363 2 DEBUG nova.virt.hardware [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 04:56:30 np0005486808 nova_compute[259627]: 2025-10-14 08:56:30.364 2 DEBUG nova.virt.hardware [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 04:56:30 np0005486808 nova_compute[259627]: 2025-10-14 08:56:30.364 2 DEBUG nova.virt.hardware [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 04:56:30 np0005486808 nova_compute[259627]: 2025-10-14 08:56:30.364 2 DEBUG nova.virt.hardware [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 04:56:30 np0005486808 nova_compute[259627]: 2025-10-14 08:56:30.364 2 DEBUG nova.virt.hardware [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 04:56:30 np0005486808 nova_compute[259627]: 2025-10-14 08:56:30.365 2 DEBUG nova.virt.hardware [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 04:56:30 np0005486808 nova_compute[259627]: 2025-10-14 08:56:30.365 2 DEBUG nova.virt.hardware [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 04:56:30 np0005486808 nova_compute[259627]: 2025-10-14 08:56:30.365 2 DEBUG nova.virt.hardware [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 04:56:30 np0005486808 nova_compute[259627]: 2025-10-14 08:56:30.365 2 DEBUG nova.virt.hardware [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 04:56:30 np0005486808 nova_compute[259627]: 2025-10-14 08:56:30.368 2 DEBUG oslo_concurrency.processutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:56:30 np0005486808 nova_compute[259627]: 2025-10-14 08:56:30.820 2 DEBUG nova.network.neutron [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Successfully created port: e0cae4c8-f654-471d-a9c1-c77a306f1edf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 04:56:30 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:56:30 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2842344352' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:56:30 np0005486808 nova_compute[259627]: 2025-10-14 08:56:30.848 2 DEBUG oslo_concurrency.processutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:56:30 np0005486808 nova_compute[259627]: 2025-10-14 08:56:30.872 2 DEBUG nova.storage.rbd_utils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] rbd image 548bff7e-531b-4f5d-b4d3-18d586f46581_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:56:30 np0005486808 nova_compute[259627]: 2025-10-14 08:56:30.875 2 DEBUG oslo_concurrency.processutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:56:31 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:56:31 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4251090085' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:56:31 np0005486808 nova_compute[259627]: 2025-10-14 08:56:31.301 2 DEBUG oslo_concurrency.processutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:56:31 np0005486808 nova_compute[259627]: 2025-10-14 08:56:31.304 2 DEBUG nova.virt.libvirt.vif [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:56:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1911316539',display_name='tempest-SecurityGroupsTestJSON-server-1911316539',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1911316539',id=13,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9bf65c21e4104af6981b071561617657',ramdisk_id='',reservation_id='r-xovv24ns',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-663845074',owner_user_name='tempest-SecurityGroupsTestJSO
N-663845074-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:56:25Z,user_data=None,user_id='f50e2582d63041b682c71a379f763c0e',uuid=548bff7e-531b-4f5d-b4d3-18d586f46581,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "address": "fa:16:3e:a5:2f:21", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da6c99d-4e", "ovs_interfaceid": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 04:56:31 np0005486808 nova_compute[259627]: 2025-10-14 08:56:31.304 2 DEBUG nova.network.os_vif_util [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Converting VIF {"id": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "address": "fa:16:3e:a5:2f:21", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da6c99d-4e", "ovs_interfaceid": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:56:31 np0005486808 nova_compute[259627]: 2025-10-14 08:56:31.305 2 DEBUG nova.network.os_vif_util [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:2f:21,bridge_name='br-int',has_traffic_filtering=True,id=7da6c99d-4e04-4c0b-b4d0-d32a2e19c462,network=Network(58ff48d6-a644-40e6-8fc9-ee19b4354df9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7da6c99d-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:56:31 np0005486808 nova_compute[259627]: 2025-10-14 08:56:31.307 2 DEBUG nova.objects.instance [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lazy-loading 'pci_devices' on Instance uuid 548bff7e-531b-4f5d-b4d3-18d586f46581 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:56:31 np0005486808 nova_compute[259627]: 2025-10-14 08:56:31.322 2 DEBUG nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] End _get_guest_xml xml=<domain type="kvm">
Oct 14 04:56:31 np0005486808 nova_compute[259627]:  <uuid>548bff7e-531b-4f5d-b4d3-18d586f46581</uuid>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:  <name>instance-0000000d</name>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 04:56:31 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:      <nova:name>tempest-SecurityGroupsTestJSON-server-1911316539</nova:name>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 08:56:30</nova:creationTime>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 04:56:31 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:        <nova:user uuid="f50e2582d63041b682c71a379f763c0e">tempest-SecurityGroupsTestJSON-663845074-project-member</nova:user>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:        <nova:project uuid="9bf65c21e4104af6981b071561617657">tempest-SecurityGroupsTestJSON-663845074</nova:project>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:        <nova:port uuid="7da6c99d-4e04-4c0b-b4d0-d32a2e19c462">
Oct 14 04:56:31 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <system>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:      <entry name="serial">548bff7e-531b-4f5d-b4d3-18d586f46581</entry>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:      <entry name="uuid">548bff7e-531b-4f5d-b4d3-18d586f46581</entry>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    </system>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:  <os>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:  </os>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:  <features>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:  </features>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:  </clock>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:  <devices>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 04:56:31 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/548bff7e-531b-4f5d-b4d3-18d586f46581_disk">
Oct 14 04:56:31 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:56:31 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 04:56:31 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/548bff7e-531b-4f5d-b4d3-18d586f46581_disk.config">
Oct 14 04:56:31 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:56:31 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 04:56:31 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:a5:2f:21"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:      <target dev="tap7da6c99d-4e"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    </interface>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 04:56:31 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/548bff7e-531b-4f5d-b4d3-18d586f46581/console.log" append="off"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    </serial>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <video>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    </video>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 04:56:31 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    </rng>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 04:56:31 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 04:56:31 np0005486808 nova_compute[259627]:  </devices>
Oct 14 04:56:31 np0005486808 nova_compute[259627]: </domain>
Oct 14 04:56:31 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 04:56:31 np0005486808 nova_compute[259627]: 2025-10-14 08:56:31.323 2 DEBUG nova.compute.manager [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Preparing to wait for external event network-vif-plugged-7da6c99d-4e04-4c0b-b4d0-d32a2e19c462 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 04:56:31 np0005486808 nova_compute[259627]: 2025-10-14 08:56:31.324 2 DEBUG oslo_concurrency.lockutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquiring lock "548bff7e-531b-4f5d-b4d3-18d586f46581-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:56:31 np0005486808 nova_compute[259627]: 2025-10-14 08:56:31.324 2 DEBUG oslo_concurrency.lockutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "548bff7e-531b-4f5d-b4d3-18d586f46581-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:56:31 np0005486808 nova_compute[259627]: 2025-10-14 08:56:31.324 2 DEBUG oslo_concurrency.lockutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "548bff7e-531b-4f5d-b4d3-18d586f46581-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:56:31 np0005486808 nova_compute[259627]: 2025-10-14 08:56:31.325 2 DEBUG nova.virt.libvirt.vif [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:56:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1911316539',display_name='tempest-SecurityGroupsTestJSON-server-1911316539',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1911316539',id=13,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9bf65c21e4104af6981b071561617657',ramdisk_id='',reservation_id='r-xovv24ns',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-663845074',owner_user_name='tempest-SecurityGro
upsTestJSON-663845074-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:56:25Z,user_data=None,user_id='f50e2582d63041b682c71a379f763c0e',uuid=548bff7e-531b-4f5d-b4d3-18d586f46581,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "address": "fa:16:3e:a5:2f:21", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da6c99d-4e", "ovs_interfaceid": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 04:56:31 np0005486808 nova_compute[259627]: 2025-10-14 08:56:31.325 2 DEBUG nova.network.os_vif_util [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Converting VIF {"id": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "address": "fa:16:3e:a5:2f:21", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da6c99d-4e", "ovs_interfaceid": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:56:31 np0005486808 nova_compute[259627]: 2025-10-14 08:56:31.326 2 DEBUG nova.network.os_vif_util [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:2f:21,bridge_name='br-int',has_traffic_filtering=True,id=7da6c99d-4e04-4c0b-b4d0-d32a2e19c462,network=Network(58ff48d6-a644-40e6-8fc9-ee19b4354df9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7da6c99d-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:56:31 np0005486808 nova_compute[259627]: 2025-10-14 08:56:31.326 2 DEBUG os_vif [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:2f:21,bridge_name='br-int',has_traffic_filtering=True,id=7da6c99d-4e04-4c0b-b4d0-d32a2e19c462,network=Network(58ff48d6-a644-40e6-8fc9-ee19b4354df9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7da6c99d-4e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 04:56:31 np0005486808 nova_compute[259627]: 2025-10-14 08:56:31.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:31 np0005486808 nova_compute[259627]: 2025-10-14 08:56:31.327 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:56:31 np0005486808 nova_compute[259627]: 2025-10-14 08:56:31.327 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:56:31 np0005486808 nova_compute[259627]: 2025-10-14 08:56:31.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:31 np0005486808 nova_compute[259627]: 2025-10-14 08:56:31.331 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7da6c99d-4e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:56:31 np0005486808 nova_compute[259627]: 2025-10-14 08:56:31.331 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7da6c99d-4e, col_values=(('external_ids', {'iface-id': '7da6c99d-4e04-4c0b-b4d0-d32a2e19c462', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a5:2f:21', 'vm-uuid': '548bff7e-531b-4f5d-b4d3-18d586f46581'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:56:31 np0005486808 NetworkManager[44885]: <info>  [1760432191.3802] manager: (tap7da6c99d-4e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Oct 14 04:56:31 np0005486808 nova_compute[259627]: 2025-10-14 08:56:31.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:31 np0005486808 nova_compute[259627]: 2025-10-14 08:56:31.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 04:56:31 np0005486808 nova_compute[259627]: 2025-10-14 08:56:31.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:31 np0005486808 nova_compute[259627]: 2025-10-14 08:56:31.387 2 INFO os_vif [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:2f:21,bridge_name='br-int',has_traffic_filtering=True,id=7da6c99d-4e04-4c0b-b4d0-d32a2e19c462,network=Network(58ff48d6-a644-40e6-8fc9-ee19b4354df9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7da6c99d-4e')#033[00m
Oct 14 04:56:31 np0005486808 nova_compute[259627]: 2025-10-14 08:56:31.441 2 DEBUG nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:56:31 np0005486808 nova_compute[259627]: 2025-10-14 08:56:31.442 2 DEBUG nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:56:31 np0005486808 nova_compute[259627]: 2025-10-14 08:56:31.442 2 DEBUG nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] No VIF found with MAC fa:16:3e:a5:2f:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 04:56:31 np0005486808 nova_compute[259627]: 2025-10-14 08:56:31.443 2 INFO nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Using config drive#033[00m
Oct 14 04:56:31 np0005486808 nova_compute[259627]: 2025-10-14 08:56:31.467 2 DEBUG nova.storage.rbd_utils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] rbd image 548bff7e-531b-4f5d-b4d3-18d586f46581_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:56:31 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1133: 305 pgs: 305 active+clean; 121 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.9 MiB/s wr, 180 op/s
Oct 14 04:56:31 np0005486808 nova_compute[259627]: 2025-10-14 08:56:31.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:31 np0005486808 nova_compute[259627]: 2025-10-14 08:56:31.932 2 INFO nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Creating config drive at /var/lib/nova/instances/548bff7e-531b-4f5d-b4d3-18d586f46581/disk.config#033[00m
Oct 14 04:56:31 np0005486808 nova_compute[259627]: 2025-10-14 08:56:31.942 2 DEBUG oslo_concurrency.processutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/548bff7e-531b-4f5d-b4d3-18d586f46581/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaxvylg5i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:56:32 np0005486808 nova_compute[259627]: 2025-10-14 08:56:32.085 2 DEBUG oslo_concurrency.processutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/548bff7e-531b-4f5d-b4d3-18d586f46581/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaxvylg5i" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:56:32 np0005486808 nova_compute[259627]: 2025-10-14 08:56:32.120 2 DEBUG nova.storage.rbd_utils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] rbd image 548bff7e-531b-4f5d-b4d3-18d586f46581_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:56:32 np0005486808 nova_compute[259627]: 2025-10-14 08:56:32.126 2 DEBUG oslo_concurrency.processutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/548bff7e-531b-4f5d-b4d3-18d586f46581/disk.config 548bff7e-531b-4f5d-b4d3-18d586f46581_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.156 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.170 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 14 04:56:32 np0005486808 nova_compute[259627]: 2025-10-14 08:56:32.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:32 np0005486808 nova_compute[259627]: 2025-10-14 08:56:32.204 2 DEBUG nova.network.neutron [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Successfully updated port: e0cae4c8-f654-471d-a9c1-c77a306f1edf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 04:56:32 np0005486808 nova_compute[259627]: 2025-10-14 08:56:32.228 2 DEBUG oslo_concurrency.lockutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "refresh_cache-5de76ef0-5c03-4b43-a691-c858cecd9e80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:56:32 np0005486808 nova_compute[259627]: 2025-10-14 08:56:32.228 2 DEBUG oslo_concurrency.lockutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquired lock "refresh_cache-5de76ef0-5c03-4b43-a691-c858cecd9e80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:56:32 np0005486808 nova_compute[259627]: 2025-10-14 08:56:32.229 2 DEBUG nova.network.neutron [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 04:56:32 np0005486808 nova_compute[259627]: 2025-10-14 08:56:32.311 2 DEBUG oslo_concurrency.processutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/548bff7e-531b-4f5d-b4d3-18d586f46581/disk.config 548bff7e-531b-4f5d-b4d3-18d586f46581_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.185s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:56:32 np0005486808 nova_compute[259627]: 2025-10-14 08:56:32.313 2 INFO nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Deleting local config drive /var/lib/nova/instances/548bff7e-531b-4f5d-b4d3-18d586f46581/disk.config because it was imported into RBD.#033[00m
Oct 14 04:56:32 np0005486808 nova_compute[259627]: 2025-10-14 08:56:32.355 2 DEBUG nova.compute.manager [req-d24f5148-5778-4f54-9358-9ad843446e7b req-606a3c50-9bfb-4ae9-b3bc-9a64af0e12ed 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Received event network-changed-e0cae4c8-f654-471d-a9c1-c77a306f1edf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:56:32 np0005486808 nova_compute[259627]: 2025-10-14 08:56:32.356 2 DEBUG nova.compute.manager [req-d24f5148-5778-4f54-9358-9ad843446e7b req-606a3c50-9bfb-4ae9-b3bc-9a64af0e12ed 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Refreshing instance network info cache due to event network-changed-e0cae4c8-f654-471d-a9c1-c77a306f1edf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 04:56:32 np0005486808 nova_compute[259627]: 2025-10-14 08:56:32.356 2 DEBUG oslo_concurrency.lockutils [req-d24f5148-5778-4f54-9358-9ad843446e7b req-606a3c50-9bfb-4ae9-b3bc-9a64af0e12ed 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-5de76ef0-5c03-4b43-a691-c858cecd9e80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:56:32 np0005486808 kernel: tap7da6c99d-4e: entered promiscuous mode
Oct 14 04:56:32 np0005486808 NetworkManager[44885]: <info>  [1760432192.3786] manager: (tap7da6c99d-4e): new Tun device (/org/freedesktop/NetworkManager/Devices/49)
Oct 14 04:56:32 np0005486808 nova_compute[259627]: 2025-10-14 08:56:32.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:32 np0005486808 ovn_controller[152662]: 2025-10-14T08:56:32Z|00077|binding|INFO|Claiming lport 7da6c99d-4e04-4c0b-b4d0-d32a2e19c462 for this chassis.
Oct 14 04:56:32 np0005486808 ovn_controller[152662]: 2025-10-14T08:56:32Z|00078|binding|INFO|7da6c99d-4e04-4c0b-b4d0-d32a2e19c462: Claiming fa:16:3e:a5:2f:21 10.100.0.9
Oct 14 04:56:32 np0005486808 nova_compute[259627]: 2025-10-14 08:56:32.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.407 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:2f:21 10.100.0.9'], port_security=['fa:16:3e:a5:2f:21 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '548bff7e-531b-4f5d-b4d3-18d586f46581', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9bf65c21e4104af6981b071561617657', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e50f77b0-42a8-4edd-9a84-05435c5fb458', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b2c96ef8-3846-498d-b4a9-f6fd46fb5d04, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=7da6c99d-4e04-4c0b-b4d0-d32a2e19c462) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.408 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 7da6c99d-4e04-4c0b-b4d0-d32a2e19c462 in datapath 58ff48d6-a644-40e6-8fc9-ee19b4354df9 bound to our chassis#033[00m
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.409 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58ff48d6-a644-40e6-8fc9-ee19b4354df9#033[00m
Oct 14 04:56:32 np0005486808 systemd-machined[214636]: New machine qemu-13-instance-0000000d.
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.420 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e9047cc8-1bd9-45fd-a10e-4bb530c6a1c7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.421 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap58ff48d6-a1 in ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.423 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap58ff48d6-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.424 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c2de96bd-f0c3-40f9-ab21-1d3b2cf1c062]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.424 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[df07d5b8-079f-4723-8466-c833ad9ef831]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:32 np0005486808 nova_compute[259627]: 2025-10-14 08:56:32.435 2 DEBUG nova.network.neutron [req-ae3acb7e-3cf2-4e95-9fe3-65fde29a8421 req-8e2fbb13-66d1-4f95-88d8-5af4f568c129 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Updated VIF entry in instance network info cache for port 7da6c99d-4e04-4c0b-b4d0-d32a2e19c462. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 04:56:32 np0005486808 nova_compute[259627]: 2025-10-14 08:56:32.435 2 DEBUG nova.network.neutron [req-ae3acb7e-3cf2-4e95-9fe3-65fde29a8421 req-8e2fbb13-66d1-4f95-88d8-5af4f568c129 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Updating instance_info_cache with network_info: [{"id": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "address": "fa:16:3e:a5:2f:21", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da6c99d-4e", "ovs_interfaceid": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.438 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[31aaf8db-e568-460a-9fa7-955c6b14e279]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:32 np0005486808 nova_compute[259627]: 2025-10-14 08:56:32.451 2 DEBUG oslo_concurrency.lockutils [req-ae3acb7e-3cf2-4e95-9fe3-65fde29a8421 req-8e2fbb13-66d1-4f95-88d8-5af4f568c129 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-548bff7e-531b-4f5d-b4d3-18d586f46581" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:56:32 np0005486808 systemd[1]: Started Virtual Machine qemu-13-instance-0000000d.
Oct 14 04:56:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.466 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6ff9f0d2-8a3b-4924-b9c0-f85dd82cba3a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:32 np0005486808 systemd-udevd[282678]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 04:56:32 np0005486808 ovn_controller[152662]: 2025-10-14T08:56:32Z|00079|binding|INFO|Setting lport 7da6c99d-4e04-4c0b-b4d0-d32a2e19c462 ovn-installed in OVS
Oct 14 04:56:32 np0005486808 ovn_controller[152662]: 2025-10-14T08:56:32Z|00080|binding|INFO|Setting lport 7da6c99d-4e04-4c0b-b4d0-d32a2e19c462 up in Southbound
Oct 14 04:56:32 np0005486808 nova_compute[259627]: 2025-10-14 08:56:32.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:32 np0005486808 NetworkManager[44885]: <info>  [1760432192.5100] device (tap7da6c99d-4e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 04:56:32 np0005486808 NetworkManager[44885]: <info>  [1760432192.5118] device (tap7da6c99d-4e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.511 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[6cda1610-579a-43be-9b35-400591176703]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:32 np0005486808 NetworkManager[44885]: <info>  [1760432192.5197] manager: (tap58ff48d6-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/50)
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.517 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ad2cc48a-c0fc-4c1c-adcd-83ae9c02e9c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:32 np0005486808 nova_compute[259627]: 2025-10-14 08:56:32.545 2 DEBUG nova.network.neutron [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.567 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[779d0f34-f18b-451b-87ee-1a9e4ceb92ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.571 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[aba15ba4-ed2d-424f-b41e-b42d1320a348]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:32 np0005486808 NetworkManager[44885]: <info>  [1760432192.6014] device (tap58ff48d6-a0): carrier: link connected
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.614 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[defe4d97-714e-43d5-85c0-0a0acaefa7ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.631 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c7804970-1ac3-4ef8-855b-01baacdeb9f5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58ff48d6-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:75:28:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597136, 'reachable_time': 43498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282707, 'error': None, 'target': 'ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.651 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bea17eb5-ebea-44f4-848c-80c2c7ad4baa]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe75:28ce'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 597136, 'tstamp': 597136}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282708, 'error': None, 'target': 'ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.673 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d3cca91d-7500-4fd2-8cc3-86c4d838742b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58ff48d6-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:75:28:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597136, 'reachable_time': 43498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 282709, 'error': None, 'target': 'ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.712 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1dec415e-9041-4cc6-a670-b5e4795f40ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:56:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:56:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_08:56:32
Oct 14 04:56:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 04:56:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 04:56:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['vms', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'volumes', 'images', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta']
Oct 14 04:56:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 04:56:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:56:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:56:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:56:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.791 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[40a6404f-fd26-4bbb-a297-40b5b4f13b37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.793 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58ff48d6-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.793 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.794 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58ff48d6-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:56:32 np0005486808 NetworkManager[44885]: <info>  [1760432192.7966] manager: (tap58ff48d6-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Oct 14 04:56:32 np0005486808 kernel: tap58ff48d6-a0: entered promiscuous mode
Oct 14 04:56:32 np0005486808 nova_compute[259627]: 2025-10-14 08:56:32.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.803 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58ff48d6-a0, col_values=(('external_ids', {'iface-id': '1eaf3c85-b7b7-4dd7-ad1f-33385c25330b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:56:32 np0005486808 ovn_controller[152662]: 2025-10-14T08:56:32Z|00081|binding|INFO|Releasing lport 1eaf3c85-b7b7-4dd7-ad1f-33385c25330b from this chassis (sb_readonly=0)
Oct 14 04:56:32 np0005486808 nova_compute[259627]: 2025-10-14 08:56:32.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.809 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/58ff48d6-a644-40e6-8fc9-ee19b4354df9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/58ff48d6-a644-40e6-8fc9-ee19b4354df9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.810 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5a617d7f-7846-4abd-a79d-c3245dabaec4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.811 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-58ff48d6-a644-40e6-8fc9-ee19b4354df9
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/58ff48d6-a644-40e6-8fc9-ee19b4354df9.pid.haproxy
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 58ff48d6-a644-40e6-8fc9-ee19b4354df9
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 04:56:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:32.812 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'env', 'PROCESS_TAG=haproxy-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/58ff48d6-a644-40e6-8fc9-ee19b4354df9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 04:56:32 np0005486808 nova_compute[259627]: 2025-10-14 08:56:32.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 04:56:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:56:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:56:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:56:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:56:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 04:56:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:56:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:56:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:56:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:56:33 np0005486808 podman[282741]: 2025-10-14 08:56:33.235113076 +0000 UTC m=+0.071874934 container create 6cd10f8a39ccc185c66e3288af243dbf478b93f26b821cd4a99cecc03dbfc765 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 04:56:33 np0005486808 systemd[1]: Started libpod-conmon-6cd10f8a39ccc185c66e3288af243dbf478b93f26b821cd4a99cecc03dbfc765.scope.
Oct 14 04:56:33 np0005486808 podman[282741]: 2025-10-14 08:56:33.197845057 +0000 UTC m=+0.034606995 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 04:56:33 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:56:33 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47fd54790ff5fec54b5deca5f48d8cccc184d3f02bd560c1168873a2e4409722/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 04:56:33 np0005486808 podman[282741]: 2025-10-14 08:56:33.312821993 +0000 UTC m=+0.149583881 container init 6cd10f8a39ccc185c66e3288af243dbf478b93f26b821cd4a99cecc03dbfc765 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:56:33 np0005486808 podman[282741]: 2025-10-14 08:56:33.318891983 +0000 UTC m=+0.155653841 container start 6cd10f8a39ccc185c66e3288af243dbf478b93f26b821cd4a99cecc03dbfc765 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 14 04:56:33 np0005486808 neutron-haproxy-ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9[282796]: [NOTICE]   (282801) : New worker (282804) forked
Oct 14 04:56:33 np0005486808 neutron-haproxy-ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9[282796]: [NOTICE]   (282801) : Loading success.
Oct 14 04:56:33 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 121 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 146 op/s
Oct 14 04:56:33 np0005486808 nova_compute[259627]: 2025-10-14 08:56:33.559 2 DEBUG nova.network.neutron [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Updating instance_info_cache with network_info: [{"id": "e0cae4c8-f654-471d-a9c1-c77a306f1edf", "address": "fa:16:3e:bc:3d:9e", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0cae4c8-f6", "ovs_interfaceid": "e0cae4c8-f654-471d-a9c1-c77a306f1edf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:56:33 np0005486808 nova_compute[259627]: 2025-10-14 08:56:33.579 2 DEBUG oslo_concurrency.lockutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Releasing lock "refresh_cache-5de76ef0-5c03-4b43-a691-c858cecd9e80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:56:33 np0005486808 nova_compute[259627]: 2025-10-14 08:56:33.579 2 DEBUG nova.compute.manager [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Instance network_info: |[{"id": "e0cae4c8-f654-471d-a9c1-c77a306f1edf", "address": "fa:16:3e:bc:3d:9e", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0cae4c8-f6", "ovs_interfaceid": "e0cae4c8-f654-471d-a9c1-c77a306f1edf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 04:56:33 np0005486808 nova_compute[259627]: 2025-10-14 08:56:33.580 2 DEBUG oslo_concurrency.lockutils [req-d24f5148-5778-4f54-9358-9ad843446e7b req-606a3c50-9bfb-4ae9-b3bc-9a64af0e12ed 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-5de76ef0-5c03-4b43-a691-c858cecd9e80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:56:33 np0005486808 nova_compute[259627]: 2025-10-14 08:56:33.580 2 DEBUG nova.network.neutron [req-d24f5148-5778-4f54-9358-9ad843446e7b req-606a3c50-9bfb-4ae9-b3bc-9a64af0e12ed 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Refreshing network info cache for port e0cae4c8-f654-471d-a9c1-c77a306f1edf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 04:56:33 np0005486808 nova_compute[259627]: 2025-10-14 08:56:33.586 2 DEBUG nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Start _get_guest_xml network_info=[{"id": "e0cae4c8-f654-471d-a9c1-c77a306f1edf", "address": "fa:16:3e:bc:3d:9e", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0cae4c8-f6", "ovs_interfaceid": "e0cae4c8-f654-471d-a9c1-c77a306f1edf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 04:56:33 np0005486808 nova_compute[259627]: 2025-10-14 08:56:33.593 2 WARNING nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:56:33 np0005486808 nova_compute[259627]: 2025-10-14 08:56:33.603 2 DEBUG nova.virt.libvirt.host [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 04:56:33 np0005486808 nova_compute[259627]: 2025-10-14 08:56:33.604 2 DEBUG nova.virt.libvirt.host [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 04:56:33 np0005486808 nova_compute[259627]: 2025-10-14 08:56:33.609 2 DEBUG nova.virt.libvirt.host [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 04:56:33 np0005486808 nova_compute[259627]: 2025-10-14 08:56:33.610 2 DEBUG nova.virt.libvirt.host [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 04:56:33 np0005486808 nova_compute[259627]: 2025-10-14 08:56:33.611 2 DEBUG nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 04:56:33 np0005486808 nova_compute[259627]: 2025-10-14 08:56:33.611 2 DEBUG nova.virt.hardware [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 04:56:33 np0005486808 nova_compute[259627]: 2025-10-14 08:56:33.612 2 DEBUG nova.virt.hardware [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 04:56:33 np0005486808 nova_compute[259627]: 2025-10-14 08:56:33.612 2 DEBUG nova.virt.hardware [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 04:56:33 np0005486808 nova_compute[259627]: 2025-10-14 08:56:33.612 2 DEBUG nova.virt.hardware [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 04:56:33 np0005486808 nova_compute[259627]: 2025-10-14 08:56:33.613 2 DEBUG nova.virt.hardware [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 04:56:33 np0005486808 nova_compute[259627]: 2025-10-14 08:56:33.613 2 DEBUG nova.virt.hardware [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 04:56:33 np0005486808 nova_compute[259627]: 2025-10-14 08:56:33.613 2 DEBUG nova.virt.hardware [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 04:56:33 np0005486808 nova_compute[259627]: 2025-10-14 08:56:33.614 2 DEBUG nova.virt.hardware [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 04:56:33 np0005486808 nova_compute[259627]: 2025-10-14 08:56:33.614 2 DEBUG nova.virt.hardware [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 04:56:33 np0005486808 nova_compute[259627]: 2025-10-14 08:56:33.615 2 DEBUG nova.virt.hardware [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 04:56:33 np0005486808 nova_compute[259627]: 2025-10-14 08:56:33.615 2 DEBUG nova.virt.hardware [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 04:56:33 np0005486808 nova_compute[259627]: 2025-10-14 08:56:33.620 2 DEBUG oslo_concurrency.processutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:56:33 np0005486808 nova_compute[259627]: 2025-10-14 08:56:33.815 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432193.8148136, 548bff7e-531b-4f5d-b4d3-18d586f46581 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:56:33 np0005486808 nova_compute[259627]: 2025-10-14 08:56:33.815 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] VM Started (Lifecycle Event)#033[00m
Oct 14 04:56:33 np0005486808 nova_compute[259627]: 2025-10-14 08:56:33.833 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:56:33 np0005486808 nova_compute[259627]: 2025-10-14 08:56:33.838 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432193.8151333, 548bff7e-531b-4f5d-b4d3-18d586f46581 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:56:33 np0005486808 nova_compute[259627]: 2025-10-14 08:56:33.838 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] VM Paused (Lifecycle Event)#033[00m
Oct 14 04:56:33 np0005486808 nova_compute[259627]: 2025-10-14 08:56:33.872 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:56:33 np0005486808 nova_compute[259627]: 2025-10-14 08:56:33.876 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:56:33 np0005486808 nova_compute[259627]: 2025-10-14 08:56:33.894 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:56:34 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:56:34 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4249284711' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.108 2 DEBUG oslo_concurrency.processutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.125 2 DEBUG nova.storage.rbd_utils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] rbd image 5de76ef0-5c03-4b43-a691-c858cecd9e80_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.128 2 DEBUG oslo_concurrency.processutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:56:34 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:56:34 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/887096197' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.512 2 DEBUG nova.compute.manager [req-0cf6502c-c39c-4872-83f7-bcb52e1d7c3c req-6adae2c7-5787-48a7-a82c-c43e290b3ec9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Received event network-vif-plugged-7da6c99d-4e04-4c0b-b4d0-d32a2e19c462 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.513 2 DEBUG oslo_concurrency.lockutils [req-0cf6502c-c39c-4872-83f7-bcb52e1d7c3c req-6adae2c7-5787-48a7-a82c-c43e290b3ec9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "548bff7e-531b-4f5d-b4d3-18d586f46581-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.513 2 DEBUG oslo_concurrency.lockutils [req-0cf6502c-c39c-4872-83f7-bcb52e1d7c3c req-6adae2c7-5787-48a7-a82c-c43e290b3ec9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "548bff7e-531b-4f5d-b4d3-18d586f46581-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.514 2 DEBUG oslo_concurrency.lockutils [req-0cf6502c-c39c-4872-83f7-bcb52e1d7c3c req-6adae2c7-5787-48a7-a82c-c43e290b3ec9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "548bff7e-531b-4f5d-b4d3-18d586f46581-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.514 2 DEBUG nova.compute.manager [req-0cf6502c-c39c-4872-83f7-bcb52e1d7c3c req-6adae2c7-5787-48a7-a82c-c43e290b3ec9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Processing event network-vif-plugged-7da6c99d-4e04-4c0b-b4d0-d32a2e19c462 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.514 2 DEBUG nova.compute.manager [req-0cf6502c-c39c-4872-83f7-bcb52e1d7c3c req-6adae2c7-5787-48a7-a82c-c43e290b3ec9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Received event network-vif-plugged-7da6c99d-4e04-4c0b-b4d0-d32a2e19c462 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.515 2 DEBUG oslo_concurrency.lockutils [req-0cf6502c-c39c-4872-83f7-bcb52e1d7c3c req-6adae2c7-5787-48a7-a82c-c43e290b3ec9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "548bff7e-531b-4f5d-b4d3-18d586f46581-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.515 2 DEBUG oslo_concurrency.lockutils [req-0cf6502c-c39c-4872-83f7-bcb52e1d7c3c req-6adae2c7-5787-48a7-a82c-c43e290b3ec9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "548bff7e-531b-4f5d-b4d3-18d586f46581-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.516 2 DEBUG oslo_concurrency.lockutils [req-0cf6502c-c39c-4872-83f7-bcb52e1d7c3c req-6adae2c7-5787-48a7-a82c-c43e290b3ec9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "548bff7e-531b-4f5d-b4d3-18d586f46581-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.516 2 DEBUG nova.compute.manager [req-0cf6502c-c39c-4872-83f7-bcb52e1d7c3c req-6adae2c7-5787-48a7-a82c-c43e290b3ec9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] No waiting events found dispatching network-vif-plugged-7da6c99d-4e04-4c0b-b4d0-d32a2e19c462 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.516 2 WARNING nova.compute.manager [req-0cf6502c-c39c-4872-83f7-bcb52e1d7c3c req-6adae2c7-5787-48a7-a82c-c43e290b3ec9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Received unexpected event network-vif-plugged-7da6c99d-4e04-4c0b-b4d0-d32a2e19c462 for instance with vm_state building and task_state spawning.#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.518 2 DEBUG nova.compute.manager [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.523 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432194.5228746, 548bff7e-531b-4f5d-b4d3-18d586f46581 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.524 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] VM Resumed (Lifecycle Event)#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.530 2 DEBUG nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.533 2 DEBUG oslo_concurrency.processutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.535 2 DEBUG nova.virt.libvirt.vif [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:56:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-526971628',display_name='tempest-ImagesOneServerNegativeTestJSON-server-526971628',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-526971628',id=14,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f24bbeb2f91141e294590ca2afc5ed42',ramdisk_id='',reservation_id='r-6k171gzh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-531836018',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-531836018-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:56:29Z,user_data=None,user_id='aafd6ad40c944c3eb14e7fbf454040c3',uuid=5de76ef0-5c03-4b43-a691-c858cecd9e80,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e0cae4c8-f654-471d-a9c1-c77a306f1edf", "address": "fa:16:3e:bc:3d:9e", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0cae4c8-f6", "ovs_interfaceid": "e0cae4c8-f654-471d-a9c1-c77a306f1edf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.536 2 DEBUG nova.network.os_vif_util [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Converting VIF {"id": "e0cae4c8-f654-471d-a9c1-c77a306f1edf", "address": "fa:16:3e:bc:3d:9e", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0cae4c8-f6", "ovs_interfaceid": "e0cae4c8-f654-471d-a9c1-c77a306f1edf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.537 2 DEBUG nova.network.os_vif_util [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:3d:9e,bridge_name='br-int',has_traffic_filtering=True,id=e0cae4c8-f654-471d-a9c1-c77a306f1edf,network=Network(58d74886-d603-4fb5-b8ff-9c184284bdce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0cae4c8-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.540 2 DEBUG nova.objects.instance [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5de76ef0-5c03-4b43-a691-c858cecd9e80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.548 2 INFO nova.virt.libvirt.driver [-] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Instance spawned successfully.#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.549 2 DEBUG nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.563 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.573 2 DEBUG nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] End _get_guest_xml xml=<domain type="kvm">
Oct 14 04:56:34 np0005486808 nova_compute[259627]:  <uuid>5de76ef0-5c03-4b43-a691-c858cecd9e80</uuid>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:  <name>instance-0000000e</name>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 04:56:34 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-526971628</nova:name>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 08:56:33</nova:creationTime>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 04:56:34 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:        <nova:user uuid="aafd6ad40c944c3eb14e7fbf454040c3">tempest-ImagesOneServerNegativeTestJSON-531836018-project-member</nova:user>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:        <nova:project uuid="f24bbeb2f91141e294590ca2afc5ed42">tempest-ImagesOneServerNegativeTestJSON-531836018</nova:project>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:        <nova:port uuid="e0cae4c8-f654-471d-a9c1-c77a306f1edf">
Oct 14 04:56:34 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <system>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:      <entry name="serial">5de76ef0-5c03-4b43-a691-c858cecd9e80</entry>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:      <entry name="uuid">5de76ef0-5c03-4b43-a691-c858cecd9e80</entry>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    </system>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:  <os>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:  </os>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:  <features>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:  </features>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:  </clock>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:  <devices>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 04:56:34 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/5de76ef0-5c03-4b43-a691-c858cecd9e80_disk">
Oct 14 04:56:34 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:56:34 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 04:56:34 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/5de76ef0-5c03-4b43-a691-c858cecd9e80_disk.config">
Oct 14 04:56:34 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:56:34 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 04:56:34 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:bc:3d:9e"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:      <target dev="tape0cae4c8-f6"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    </interface>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 04:56:34 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/5de76ef0-5c03-4b43-a691-c858cecd9e80/console.log" append="off"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    </serial>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <video>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    </video>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 04:56:34 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    </rng>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 04:56:34 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 04:56:34 np0005486808 nova_compute[259627]:  </devices>
Oct 14 04:56:34 np0005486808 nova_compute[259627]: </domain>
Oct 14 04:56:34 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.575 2 DEBUG nova.compute.manager [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Preparing to wait for external event network-vif-plugged-e0cae4c8-f654-471d-a9c1-c77a306f1edf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.575 2 DEBUG oslo_concurrency.lockutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "5de76ef0-5c03-4b43-a691-c858cecd9e80-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.576 2 DEBUG oslo_concurrency.lockutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "5de76ef0-5c03-4b43-a691-c858cecd9e80-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.576 2 DEBUG oslo_concurrency.lockutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "5de76ef0-5c03-4b43-a691-c858cecd9e80-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.577 2 DEBUG nova.virt.libvirt.vif [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:56:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-526971628',display_name='tempest-ImagesOneServerNegativeTestJSON-server-526971628',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-526971628',id=14,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f24bbeb2f91141e294590ca2afc5ed42',ramdisk_id='',reservation_id='r-6k171gzh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-531836018',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-531836018-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:56:29Z,user_data=None,user_id='aafd6ad40c944c3eb14e7fbf454040c3',uuid=5de76ef0-5c03-4b43-a691-c858cecd9e80,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e0cae4c8-f654-471d-a9c1-c77a306f1edf", "address": "fa:16:3e:bc:3d:9e", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0cae4c8-f6", "ovs_interfaceid": "e0cae4c8-f654-471d-a9c1-c77a306f1edf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.578 2 DEBUG nova.network.os_vif_util [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Converting VIF {"id": "e0cae4c8-f654-471d-a9c1-c77a306f1edf", "address": "fa:16:3e:bc:3d:9e", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0cae4c8-f6", "ovs_interfaceid": "e0cae4c8-f654-471d-a9c1-c77a306f1edf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.578 2 DEBUG nova.network.os_vif_util [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:3d:9e,bridge_name='br-int',has_traffic_filtering=True,id=e0cae4c8-f654-471d-a9c1-c77a306f1edf,network=Network(58d74886-d603-4fb5-b8ff-9c184284bdce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0cae4c8-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.579 2 DEBUG os_vif [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:3d:9e,bridge_name='br-int',has_traffic_filtering=True,id=e0cae4c8-f654-471d-a9c1-c77a306f1edf,network=Network(58d74886-d603-4fb5-b8ff-9c184284bdce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0cae4c8-f6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.581 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.581 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.586 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape0cae4c8-f6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.587 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape0cae4c8-f6, col_values=(('external_ids', {'iface-id': 'e0cae4c8-f654-471d-a9c1-c77a306f1edf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bc:3d:9e', 'vm-uuid': '5de76ef0-5c03-4b43-a691-c858cecd9e80'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:34 np0005486808 NetworkManager[44885]: <info>  [1760432194.5904] manager: (tape0cae4c8-f6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.592 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.595 2 DEBUG nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.595 2 DEBUG nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.596 2 DEBUG nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.596 2 DEBUG nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.596 2 DEBUG nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.597 2 DEBUG nova.virt.libvirt.driver [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.601 2 INFO os_vif [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:3d:9e,bridge_name='br-int',has_traffic_filtering=True,id=e0cae4c8-f654-471d-a9c1-c77a306f1edf,network=Network(58d74886-d603-4fb5-b8ff-9c184284bdce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0cae4c8-f6')#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.623 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.663 2 INFO nova.compute.manager [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Took 9.08 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.664 2 DEBUG nova.compute.manager [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.670 2 DEBUG nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.670 2 DEBUG nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.670 2 DEBUG nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] No VIF found with MAC fa:16:3e:bc:3d:9e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.671 2 INFO nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Using config drive#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.689 2 DEBUG nova.storage.rbd_utils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] rbd image 5de76ef0-5c03-4b43-a691-c858cecd9e80_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.770 2 INFO nova.compute.manager [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Took 10.13 seconds to build instance.#033[00m
Oct 14 04:56:34 np0005486808 nova_compute[259627]: 2025-10-14 08:56:34.794 2 DEBUG oslo_concurrency.lockutils [None req-22c68f14-fd21-45a9-a57b-259d8f58e8d3 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "548bff7e-531b-4f5d-b4d3-18d586f46581" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.242s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:56:35 np0005486808 nova_compute[259627]: 2025-10-14 08:56:35.108 2 DEBUG nova.network.neutron [req-d24f5148-5778-4f54-9358-9ad843446e7b req-606a3c50-9bfb-4ae9-b3bc-9a64af0e12ed 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Updated VIF entry in instance network info cache for port e0cae4c8-f654-471d-a9c1-c77a306f1edf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 04:56:35 np0005486808 nova_compute[259627]: 2025-10-14 08:56:35.108 2 DEBUG nova.network.neutron [req-d24f5148-5778-4f54-9358-9ad843446e7b req-606a3c50-9bfb-4ae9-b3bc-9a64af0e12ed 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Updating instance_info_cache with network_info: [{"id": "e0cae4c8-f654-471d-a9c1-c77a306f1edf", "address": "fa:16:3e:bc:3d:9e", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0cae4c8-f6", "ovs_interfaceid": "e0cae4c8-f654-471d-a9c1-c77a306f1edf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:56:35 np0005486808 nova_compute[259627]: 2025-10-14 08:56:35.139 2 DEBUG oslo_concurrency.lockutils [req-d24f5148-5778-4f54-9358-9ad843446e7b req-606a3c50-9bfb-4ae9-b3bc-9a64af0e12ed 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-5de76ef0-5c03-4b43-a691-c858cecd9e80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:56:35 np0005486808 nova_compute[259627]: 2025-10-14 08:56:35.430 2 INFO nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Creating config drive at /var/lib/nova/instances/5de76ef0-5c03-4b43-a691-c858cecd9e80/disk.config#033[00m
Oct 14 04:56:35 np0005486808 nova_compute[259627]: 2025-10-14 08:56:35.436 2 DEBUG oslo_concurrency.processutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5de76ef0-5c03-4b43-a691-c858cecd9e80/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeeemi12b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:56:35 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1135: 305 pgs: 305 active+clean; 134 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.6 MiB/s wr, 171 op/s
Oct 14 04:56:35 np0005486808 nova_compute[259627]: 2025-10-14 08:56:35.566 2 DEBUG oslo_concurrency.processutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5de76ef0-5c03-4b43-a691-c858cecd9e80/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeeemi12b" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:56:35 np0005486808 nova_compute[259627]: 2025-10-14 08:56:35.621 2 DEBUG nova.storage.rbd_utils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] rbd image 5de76ef0-5c03-4b43-a691-c858cecd9e80_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:56:35 np0005486808 nova_compute[259627]: 2025-10-14 08:56:35.628 2 DEBUG oslo_concurrency.processutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5de76ef0-5c03-4b43-a691-c858cecd9e80/disk.config 5de76ef0-5c03-4b43-a691-c858cecd9e80_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:56:35 np0005486808 nova_compute[259627]: 2025-10-14 08:56:35.794 2 DEBUG oslo_concurrency.processutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5de76ef0-5c03-4b43-a691-c858cecd9e80/disk.config 5de76ef0-5c03-4b43-a691-c858cecd9e80_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:56:35 np0005486808 nova_compute[259627]: 2025-10-14 08:56:35.795 2 INFO nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Deleting local config drive /var/lib/nova/instances/5de76ef0-5c03-4b43-a691-c858cecd9e80/disk.config because it was imported into RBD.#033[00m
Oct 14 04:56:35 np0005486808 kernel: tape0cae4c8-f6: entered promiscuous mode
Oct 14 04:56:35 np0005486808 NetworkManager[44885]: <info>  [1760432195.8569] manager: (tape0cae4c8-f6): new Tun device (/org/freedesktop/NetworkManager/Devices/53)
Oct 14 04:56:35 np0005486808 ovn_controller[152662]: 2025-10-14T08:56:35Z|00082|binding|INFO|Claiming lport e0cae4c8-f654-471d-a9c1-c77a306f1edf for this chassis.
Oct 14 04:56:35 np0005486808 ovn_controller[152662]: 2025-10-14T08:56:35Z|00083|binding|INFO|e0cae4c8-f654-471d-a9c1-c77a306f1edf: Claiming fa:16:3e:bc:3d:9e 10.100.0.5
Oct 14 04:56:35 np0005486808 nova_compute[259627]: 2025-10-14 08:56:35.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:35 np0005486808 systemd-udevd[282948]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 04:56:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:35.907 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:3d:9e 10.100.0.5'], port_security=['fa:16:3e:bc:3d:9e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '5de76ef0-5c03-4b43-a691-c858cecd9e80', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58d74886-d603-4fb5-b8ff-9c184284bdce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f24bbeb2f91141e294590ca2afc5ed42', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd818ab3d-f5ea-4d77-bc47-7efe2295e146', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1adf6e68-9c1a-4ee7-a829-03bbd6a5ae48, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=e0cae4c8-f654-471d-a9c1-c77a306f1edf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:56:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:35.908 162547 INFO neutron.agent.ovn.metadata.agent [-] Port e0cae4c8-f654-471d-a9c1-c77a306f1edf in datapath 58d74886-d603-4fb5-b8ff-9c184284bdce bound to our chassis#033[00m
Oct 14 04:56:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:35.910 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58d74886-d603-4fb5-b8ff-9c184284bdce#033[00m
Oct 14 04:56:35 np0005486808 systemd-machined[214636]: New machine qemu-14-instance-0000000e.
Oct 14 04:56:35 np0005486808 NetworkManager[44885]: <info>  [1760432195.9203] device (tape0cae4c8-f6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 04:56:35 np0005486808 NetworkManager[44885]: <info>  [1760432195.9213] device (tape0cae4c8-f6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 04:56:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:35.923 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[292569f1-e773-4a24-9f44-c39ce48090c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:35.924 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap58d74886-d1 in ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 04:56:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:35.925 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap58d74886-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 04:56:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:35.925 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3abe0105-ab3d-4b27-b379-c7e8d314c5b5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:35.926 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ed950ae3-51a6-4b3f-b62f-8a3572d9be1c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:35.939 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[fe2a4238-7d7d-4408-beff-110413a2dcdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:35 np0005486808 systemd[1]: Started Virtual Machine qemu-14-instance-0000000e.
Oct 14 04:56:35 np0005486808 nova_compute[259627]: 2025-10-14 08:56:35.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:35 np0005486808 ovn_controller[152662]: 2025-10-14T08:56:35Z|00084|binding|INFO|Setting lport e0cae4c8-f654-471d-a9c1-c77a306f1edf ovn-installed in OVS
Oct 14 04:56:35 np0005486808 ovn_controller[152662]: 2025-10-14T08:56:35Z|00085|binding|INFO|Setting lport e0cae4c8-f654-471d-a9c1-c77a306f1edf up in Southbound
Oct 14 04:56:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:35.964 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[65317362-35a3-4374-a8ff-10b6f60e772e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:35 np0005486808 nova_compute[259627]: 2025-10-14 08:56:35.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:35.990 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a70ff9c3-4631-4607-b15e-0769b24fcdb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:35 np0005486808 NetworkManager[44885]: <info>  [1760432195.9977] manager: (tap58d74886-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/54)
Oct 14 04:56:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:35.997 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[384239c8-7177-4b03-9666-bba4c49cf389]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:36.028 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[67ac003e-7cc0-4ffb-bafa-06c4da131327]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:36.031 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[9bc181f7-d0fa-4834-91e3-e325713cf1ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:36 np0005486808 NetworkManager[44885]: <info>  [1760432196.0581] device (tap58d74886-d0): carrier: link connected
Oct 14 04:56:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:36.065 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3c36cf01-f0c6-4979-ab82-564934ed182b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:36.087 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ac886bad-d2bd-4074-a9e0-f3ac5e09c0c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58d74886-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c7:d2:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597481, 'reachable_time': 18511, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282983, 'error': None, 'target': 'ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:36.105 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7768f9b7-4798-45f1-86e5-f6400e8207b9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec7:d2a9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 597481, 'tstamp': 597481}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282984, 'error': None, 'target': 'ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:36.123 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[52178bf2-a983-4aef-a8fe-98115a21975e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58d74886-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c7:d2:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597481, 'reachable_time': 18511, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 282985, 'error': None, 'target': 'ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:36.146 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[84cbdce8-ea07-4dec-b88b-6fd304a0a1d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:36.203 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[148db18f-8683-4f57-ab30-34a18b22afd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:36.205 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58d74886-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:56:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:36.205 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:56:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:36.206 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58d74886-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:56:36 np0005486808 nova_compute[259627]: 2025-10-14 08:56:36.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:36 np0005486808 NetworkManager[44885]: <info>  [1760432196.2093] manager: (tap58d74886-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Oct 14 04:56:36 np0005486808 kernel: tap58d74886-d0: entered promiscuous mode
Oct 14 04:56:36 np0005486808 nova_compute[259627]: 2025-10-14 08:56:36.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:36.213 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58d74886-d0, col_values=(('external_ids', {'iface-id': 'ef5c894d-34c4-4781-b15c-6813576a45e8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:56:36 np0005486808 nova_compute[259627]: 2025-10-14 08:56:36.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:36 np0005486808 ovn_controller[152662]: 2025-10-14T08:56:36Z|00086|binding|INFO|Releasing lport ef5c894d-34c4-4781-b15c-6813576a45e8 from this chassis (sb_readonly=0)
Oct 14 04:56:36 np0005486808 nova_compute[259627]: 2025-10-14 08:56:36.244 2 DEBUG nova.compute.manager [req-87ca2516-ba2f-4a91-a924-26c3bb7c0096 req-d67a8059-f536-40da-8ff0-3a6b97c90371 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Received event network-vif-plugged-e0cae4c8-f654-471d-a9c1-c77a306f1edf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:56:36 np0005486808 nova_compute[259627]: 2025-10-14 08:56:36.244 2 DEBUG oslo_concurrency.lockutils [req-87ca2516-ba2f-4a91-a924-26c3bb7c0096 req-d67a8059-f536-40da-8ff0-3a6b97c90371 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "5de76ef0-5c03-4b43-a691-c858cecd9e80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:56:36 np0005486808 nova_compute[259627]: 2025-10-14 08:56:36.244 2 DEBUG oslo_concurrency.lockutils [req-87ca2516-ba2f-4a91-a924-26c3bb7c0096 req-d67a8059-f536-40da-8ff0-3a6b97c90371 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "5de76ef0-5c03-4b43-a691-c858cecd9e80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:56:36 np0005486808 nova_compute[259627]: 2025-10-14 08:56:36.245 2 DEBUG oslo_concurrency.lockutils [req-87ca2516-ba2f-4a91-a924-26c3bb7c0096 req-d67a8059-f536-40da-8ff0-3a6b97c90371 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "5de76ef0-5c03-4b43-a691-c858cecd9e80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:56:36 np0005486808 nova_compute[259627]: 2025-10-14 08:56:36.245 2 DEBUG nova.compute.manager [req-87ca2516-ba2f-4a91-a924-26c3bb7c0096 req-d67a8059-f536-40da-8ff0-3a6b97c90371 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Processing event network-vif-plugged-e0cae4c8-f654-471d-a9c1-c77a306f1edf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 04:56:36 np0005486808 nova_compute[259627]: 2025-10-14 08:56:36.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:36.281 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/58d74886-d603-4fb5-b8ff-9c184284bdce.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/58d74886-d603-4fb5-b8ff-9c184284bdce.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 04:56:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:36.281 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f3c7fa18-6d75-481e-adb1-3e257d834012]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:36.282 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 04:56:36 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 04:56:36 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 04:56:36 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-58d74886-d603-4fb5-b8ff-9c184284bdce
Oct 14 04:56:36 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 04:56:36 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 04:56:36 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 04:56:36 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/58d74886-d603-4fb5-b8ff-9c184284bdce.pid.haproxy
Oct 14 04:56:36 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 04:56:36 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:56:36 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 04:56:36 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 04:56:36 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 04:56:36 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 04:56:36 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 04:56:36 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 04:56:36 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 04:56:36 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 04:56:36 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 04:56:36 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 04:56:36 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 04:56:36 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 04:56:36 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 04:56:36 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:56:36 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:56:36 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 04:56:36 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 04:56:36 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 04:56:36 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 58d74886-d603-4fb5-b8ff-9c184284bdce
Oct 14 04:56:36 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 04:56:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:36.282 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce', 'env', 'PROCESS_TAG=haproxy-58d74886-d603-4fb5-b8ff-9c184284bdce', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/58d74886-d603-4fb5-b8ff-9c184284bdce.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 04:56:36 np0005486808 podman[283059]: 2025-10-14 08:56:36.661931886 +0000 UTC m=+0.046844986 container create 4fab5eac4d71ff6fc964158def5b038e31ad1f674c7a022f63f54a28f2c6cc4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:56:36 np0005486808 systemd[1]: Started libpod-conmon-4fab5eac4d71ff6fc964158def5b038e31ad1f674c7a022f63f54a28f2c6cc4f.scope.
Oct 14 04:56:36 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:56:36 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea9fec2e8f9314ad08f9029050299940c2c033e7affbb383b843a03d80cd24df/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 04:56:36 np0005486808 podman[283059]: 2025-10-14 08:56:36.729175034 +0000 UTC m=+0.114088154 container init 4fab5eac4d71ff6fc964158def5b038e31ad1f674c7a022f63f54a28f2c6cc4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 14 04:56:36 np0005486808 podman[283059]: 2025-10-14 08:56:36.637863143 +0000 UTC m=+0.022776263 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 04:56:36 np0005486808 podman[283059]: 2025-10-14 08:56:36.737230493 +0000 UTC m=+0.122143593 container start 4fab5eac4d71ff6fc964158def5b038e31ad1f674c7a022f63f54a28f2c6cc4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:56:36 np0005486808 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[283074]: [NOTICE]   (283078) : New worker (283080) forked
Oct 14 04:56:36 np0005486808 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[283074]: [NOTICE]   (283078) : Loading success.
Oct 14 04:56:36 np0005486808 nova_compute[259627]: 2025-10-14 08:56:36.826 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432196.825972, 5de76ef0-5c03-4b43-a691-c858cecd9e80 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:56:36 np0005486808 nova_compute[259627]: 2025-10-14 08:56:36.826 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] VM Started (Lifecycle Event)#033[00m
Oct 14 04:56:36 np0005486808 nova_compute[259627]: 2025-10-14 08:56:36.828 2 DEBUG nova.compute.manager [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 04:56:36 np0005486808 nova_compute[259627]: 2025-10-14 08:56:36.834 2 DEBUG nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 04:56:36 np0005486808 nova_compute[259627]: 2025-10-14 08:56:36.837 2 INFO nova.virt.libvirt.driver [-] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Instance spawned successfully.
Oct 14 04:56:36 np0005486808 nova_compute[259627]: 2025-10-14 08:56:36.837 2 DEBUG nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 04:56:36 np0005486808 nova_compute[259627]: 2025-10-14 08:56:36.841 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 04:56:36 np0005486808 nova_compute[259627]: 2025-10-14 08:56:36.843 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 04:56:36 np0005486808 nova_compute[259627]: 2025-10-14 08:56:36.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:56:36 np0005486808 nova_compute[259627]: 2025-10-14 08:56:36.856 2 DEBUG nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 04:56:36 np0005486808 nova_compute[259627]: 2025-10-14 08:56:36.857 2 DEBUG nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 04:56:36 np0005486808 nova_compute[259627]: 2025-10-14 08:56:36.857 2 DEBUG nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 04:56:36 np0005486808 nova_compute[259627]: 2025-10-14 08:56:36.857 2 DEBUG nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 04:56:36 np0005486808 nova_compute[259627]: 2025-10-14 08:56:36.858 2 DEBUG nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 04:56:36 np0005486808 nova_compute[259627]: 2025-10-14 08:56:36.858 2 DEBUG nova.virt.libvirt.driver [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 04:56:36 np0005486808 nova_compute[259627]: 2025-10-14 08:56:36.861 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 04:56:36 np0005486808 nova_compute[259627]: 2025-10-14 08:56:36.861 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432196.8262136, 5de76ef0-5c03-4b43-a691-c858cecd9e80 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 04:56:36 np0005486808 nova_compute[259627]: 2025-10-14 08:56:36.861 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] VM Paused (Lifecycle Event)
Oct 14 04:56:36 np0005486808 nova_compute[259627]: 2025-10-14 08:56:36.881 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 04:56:36 np0005486808 nova_compute[259627]: 2025-10-14 08:56:36.883 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432196.8307307, 5de76ef0-5c03-4b43-a691-c858cecd9e80 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 04:56:36 np0005486808 nova_compute[259627]: 2025-10-14 08:56:36.883 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] VM Resumed (Lifecycle Event)
Oct 14 04:56:36 np0005486808 nova_compute[259627]: 2025-10-14 08:56:36.904 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 04:56:36 np0005486808 nova_compute[259627]: 2025-10-14 08:56:36.907 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 04:56:36 np0005486808 nova_compute[259627]: 2025-10-14 08:56:36.914 2 INFO nova.compute.manager [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Took 7.29 seconds to spawn the instance on the hypervisor.
Oct 14 04:56:36 np0005486808 nova_compute[259627]: 2025-10-14 08:56:36.915 2 DEBUG nova.compute.manager [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 04:56:36 np0005486808 nova_compute[259627]: 2025-10-14 08:56:36.933 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 04:56:37 np0005486808 nova_compute[259627]: 2025-10-14 08:56:37.006 2 INFO nova.compute.manager [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Took 8.28 seconds to build instance.
Oct 14 04:56:37 np0005486808 nova_compute[259627]: 2025-10-14 08:56:37.039 2 DEBUG oslo_concurrency.lockutils [None req-4912d8b1-5e14-4ec8-9ba2-4cdbe6c38b8e aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "5de76ef0-5c03-4b43-a691-c858cecd9e80" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.388s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:56:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:56:37 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 134 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 490 KiB/s rd, 3.6 MiB/s wr, 78 op/s
Oct 14 04:56:37 np0005486808 podman[283090]: 2025-10-14 08:56:37.645203847 +0000 UTC m=+0.055033178 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 14 04:56:37 np0005486808 podman[283089]: 2025-10-14 08:56:37.671923396 +0000 UTC m=+0.084726730 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 04:56:38 np0005486808 nova_compute[259627]: 2025-10-14 08:56:38.921 2 DEBUG nova.compute.manager [req-044cfb0d-e220-4835-b97e-bdd86045a628 req-944bc06a-3ba0-4d13-8bcd-b43eb4291a32 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Received event network-vif-plugged-e0cae4c8-f654-471d-a9c1-c77a306f1edf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 04:56:38 np0005486808 nova_compute[259627]: 2025-10-14 08:56:38.923 2 DEBUG oslo_concurrency.lockutils [req-044cfb0d-e220-4835-b97e-bdd86045a628 req-944bc06a-3ba0-4d13-8bcd-b43eb4291a32 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "5de76ef0-5c03-4b43-a691-c858cecd9e80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:56:38 np0005486808 nova_compute[259627]: 2025-10-14 08:56:38.923 2 DEBUG oslo_concurrency.lockutils [req-044cfb0d-e220-4835-b97e-bdd86045a628 req-944bc06a-3ba0-4d13-8bcd-b43eb4291a32 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "5de76ef0-5c03-4b43-a691-c858cecd9e80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:56:38 np0005486808 nova_compute[259627]: 2025-10-14 08:56:38.924 2 DEBUG oslo_concurrency.lockutils [req-044cfb0d-e220-4835-b97e-bdd86045a628 req-944bc06a-3ba0-4d13-8bcd-b43eb4291a32 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "5de76ef0-5c03-4b43-a691-c858cecd9e80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:56:38 np0005486808 nova_compute[259627]: 2025-10-14 08:56:38.924 2 DEBUG nova.compute.manager [req-044cfb0d-e220-4835-b97e-bdd86045a628 req-944bc06a-3ba0-4d13-8bcd-b43eb4291a32 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] No waiting events found dispatching network-vif-plugged-e0cae4c8-f654-471d-a9c1-c77a306f1edf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 04:56:38 np0005486808 nova_compute[259627]: 2025-10-14 08:56:38.925 2 WARNING nova.compute.manager [req-044cfb0d-e220-4835-b97e-bdd86045a628 req-944bc06a-3ba0-4d13-8bcd-b43eb4291a32 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Received unexpected event network-vif-plugged-e0cae4c8-f654-471d-a9c1-c77a306f1edf for instance with vm_state active and task_state None.
Oct 14 04:56:39 np0005486808 nova_compute[259627]: 2025-10-14 08:56:39.084 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432184.0823946, 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 04:56:39 np0005486808 nova_compute[259627]: 2025-10-14 08:56:39.084 2 INFO nova.compute.manager [-] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] VM Stopped (Lifecycle Event)
Oct 14 04:56:39 np0005486808 nova_compute[259627]: 2025-10-14 08:56:39.105 2 DEBUG nova.compute.manager [None req-42c44aff-5996-42ed-8a93-23fdfedf2c8d - - - - - -] [instance: 4a84d2bf-44ca-4a5f-8eba-d6b3a7ebf543] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 04:56:39 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1137: 305 pgs: 305 active+clean; 134 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 490 KiB/s rd, 3.6 MiB/s wr, 78 op/s
Oct 14 04:56:39 np0005486808 nova_compute[259627]: 2025-10-14 08:56:39.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:56:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:40.173 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 04:56:41 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1138: 305 pgs: 305 active+clean; 134 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 201 op/s
Oct 14 04:56:41 np0005486808 nova_compute[259627]: 2025-10-14 08:56:41.829 2 DEBUG nova.compute.manager [req-ff69b882-6b0a-41d7-adf3-f32b5488d2c8 req-6119726a-70f9-49ae-bf72-1348de4a00a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Received event network-changed-7da6c99d-4e04-4c0b-b4d0-d32a2e19c462 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 04:56:41 np0005486808 nova_compute[259627]: 2025-10-14 08:56:41.831 2 DEBUG nova.compute.manager [req-ff69b882-6b0a-41d7-adf3-f32b5488d2c8 req-6119726a-70f9-49ae-bf72-1348de4a00a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Refreshing instance network info cache due to event network-changed-7da6c99d-4e04-4c0b-b4d0-d32a2e19c462. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 04:56:41 np0005486808 nova_compute[259627]: 2025-10-14 08:56:41.831 2 DEBUG oslo_concurrency.lockutils [req-ff69b882-6b0a-41d7-adf3-f32b5488d2c8 req-6119726a-70f9-49ae-bf72-1348de4a00a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-548bff7e-531b-4f5d-b4d3-18d586f46581" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 04:56:41 np0005486808 nova_compute[259627]: 2025-10-14 08:56:41.832 2 DEBUG oslo_concurrency.lockutils [req-ff69b882-6b0a-41d7-adf3-f32b5488d2c8 req-6119726a-70f9-49ae-bf72-1348de4a00a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-548bff7e-531b-4f5d-b4d3-18d586f46581" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 04:56:41 np0005486808 nova_compute[259627]: 2025-10-14 08:56:41.833 2 DEBUG nova.network.neutron [req-ff69b882-6b0a-41d7-adf3-f32b5488d2c8 req-6119726a-70f9-49ae-bf72-1348de4a00a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Refreshing network info cache for port 7da6c99d-4e04-4c0b-b4d0-d32a2e19c462 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 04:56:41 np0005486808 nova_compute[259627]: 2025-10-14 08:56:41.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:56:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:56:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 04:56:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:56:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 04:56:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:56:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006967633855896333 of space, bias 1.0, pg target 0.20902901567689 quantized to 32 (current 32)
Oct 14 04:56:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:56:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:56:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:56:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:56:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:56:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 14 04:56:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:56:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 04:56:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:56:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:56:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:56:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 04:56:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:56:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 04:56:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:56:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:56:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:56:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 04:56:43 np0005486808 nova_compute[259627]: 2025-10-14 08:56:43.197 2 DEBUG oslo_concurrency.lockutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Acquiring lock "251ae181-b980-4338-a6b5-eee48450b510" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:56:43 np0005486808 nova_compute[259627]: 2025-10-14 08:56:43.198 2 DEBUG oslo_concurrency.lockutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Lock "251ae181-b980-4338-a6b5-eee48450b510" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:56:43 np0005486808 nova_compute[259627]: 2025-10-14 08:56:43.217 2 DEBUG nova.compute.manager [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 04:56:43 np0005486808 nova_compute[259627]: 2025-10-14 08:56:43.303 2 DEBUG oslo_concurrency.lockutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:56:43 np0005486808 nova_compute[259627]: 2025-10-14 08:56:43.303 2 DEBUG oslo_concurrency.lockutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:56:43 np0005486808 nova_compute[259627]: 2025-10-14 08:56:43.310 2 DEBUG nova.virt.hardware [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 04:56:43 np0005486808 nova_compute[259627]: 2025-10-14 08:56:43.311 2 INFO nova.compute.claims [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Claim successful on node compute-0.ctlplane.example.com
Oct 14 04:56:43 np0005486808 nova_compute[259627]: 2025-10-14 08:56:43.483 2 DEBUG oslo_concurrency.processutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:56:43 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1139: 305 pgs: 305 active+clean; 134 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 435 KiB/s wr, 148 op/s
Oct 14 04:56:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:56:43 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1356587249' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:56:43 np0005486808 nova_compute[259627]: 2025-10-14 08:56:43.983 2 DEBUG oslo_concurrency.processutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:56:43 np0005486808 nova_compute[259627]: 2025-10-14 08:56:43.991 2 DEBUG nova.compute.provider_tree [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 04:56:44 np0005486808 nova_compute[259627]: 2025-10-14 08:56:44.047 2 DEBUG nova.scheduler.client.report [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 04:56:44 np0005486808 nova_compute[259627]: 2025-10-14 08:56:44.073 2 DEBUG oslo_concurrency.lockutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:56:44 np0005486808 nova_compute[259627]: 2025-10-14 08:56:44.074 2 DEBUG nova.compute.manager [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 04:56:44 np0005486808 nova_compute[259627]: 2025-10-14 08:56:44.119 2 DEBUG nova.compute.manager [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 04:56:44 np0005486808 nova_compute[259627]: 2025-10-14 08:56:44.119 2 DEBUG nova.network.neutron [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 04:56:44 np0005486808 nova_compute[259627]: 2025-10-14 08:56:44.137 2 INFO nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 04:56:44 np0005486808 nova_compute[259627]: 2025-10-14 08:56:44.151 2 DEBUG nova.compute.manager [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 04:56:44 np0005486808 nova_compute[259627]: 2025-10-14 08:56:44.234 2 DEBUG nova.compute.manager [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 04:56:44 np0005486808 nova_compute[259627]: 2025-10-14 08:56:44.237 2 DEBUG nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 04:56:44 np0005486808 nova_compute[259627]: 2025-10-14 08:56:44.239 2 INFO nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Creating image(s)
Oct 14 04:56:44 np0005486808 nova_compute[259627]: 2025-10-14 08:56:44.284 2 DEBUG nova.storage.rbd_utils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] rbd image 251ae181-b980-4338-a6b5-eee48450b510_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:56:44 np0005486808 nova_compute[259627]: 2025-10-14 08:56:44.322 2 DEBUG nova.storage.rbd_utils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] rbd image 251ae181-b980-4338-a6b5-eee48450b510_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:56:44 np0005486808 nova_compute[259627]: 2025-10-14 08:56:44.351 2 DEBUG nova.storage.rbd_utils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] rbd image 251ae181-b980-4338-a6b5-eee48450b510_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:56:44 np0005486808 nova_compute[259627]: 2025-10-14 08:56:44.359 2 DEBUG oslo_concurrency.processutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:56:44 np0005486808 nova_compute[259627]: 2025-10-14 08:56:44.382 2 DEBUG nova.network.neutron [req-ff69b882-6b0a-41d7-adf3-f32b5488d2c8 req-6119726a-70f9-49ae-bf72-1348de4a00a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Updated VIF entry in instance network info cache for port 7da6c99d-4e04-4c0b-b4d0-d32a2e19c462. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 04:56:44 np0005486808 nova_compute[259627]: 2025-10-14 08:56:44.383 2 DEBUG nova.network.neutron [req-ff69b882-6b0a-41d7-adf3-f32b5488d2c8 req-6119726a-70f9-49ae-bf72-1348de4a00a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Updating instance_info_cache with network_info: [{"id": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "address": "fa:16:3e:a5:2f:21", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da6c99d-4e", "ovs_interfaceid": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:56:44 np0005486808 nova_compute[259627]: 2025-10-14 08:56:44.408 2 DEBUG oslo_concurrency.lockutils [req-ff69b882-6b0a-41d7-adf3-f32b5488d2c8 req-6119726a-70f9-49ae-bf72-1348de4a00a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-548bff7e-531b-4f5d-b4d3-18d586f46581" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:56:44 np0005486808 nova_compute[259627]: 2025-10-14 08:56:44.409 2 DEBUG nova.compute.manager [req-ff69b882-6b0a-41d7-adf3-f32b5488d2c8 req-6119726a-70f9-49ae-bf72-1348de4a00a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Received event network-changed-7da6c99d-4e04-4c0b-b4d0-d32a2e19c462 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:56:44 np0005486808 nova_compute[259627]: 2025-10-14 08:56:44.409 2 DEBUG nova.compute.manager [req-ff69b882-6b0a-41d7-adf3-f32b5488d2c8 req-6119726a-70f9-49ae-bf72-1348de4a00a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Refreshing instance network info cache due to event network-changed-7da6c99d-4e04-4c0b-b4d0-d32a2e19c462. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 04:56:44 np0005486808 nova_compute[259627]: 2025-10-14 08:56:44.409 2 DEBUG oslo_concurrency.lockutils [req-ff69b882-6b0a-41d7-adf3-f32b5488d2c8 req-6119726a-70f9-49ae-bf72-1348de4a00a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-548bff7e-531b-4f5d-b4d3-18d586f46581" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:56:44 np0005486808 nova_compute[259627]: 2025-10-14 08:56:44.409 2 DEBUG oslo_concurrency.lockutils [req-ff69b882-6b0a-41d7-adf3-f32b5488d2c8 req-6119726a-70f9-49ae-bf72-1348de4a00a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-548bff7e-531b-4f5d-b4d3-18d586f46581" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:56:44 np0005486808 nova_compute[259627]: 2025-10-14 08:56:44.410 2 DEBUG nova.network.neutron [req-ff69b882-6b0a-41d7-adf3-f32b5488d2c8 req-6119726a-70f9-49ae-bf72-1348de4a00a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Refreshing network info cache for port 7da6c99d-4e04-4c0b-b4d0-d32a2e19c462 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 04:56:44 np0005486808 nova_compute[259627]: 2025-10-14 08:56:44.431 2 DEBUG nova.policy [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '11f9a8052a8349b0a21b3acc32a7f2b1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dffd1ba9c7eb426eba02b7fa1cb571e2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 04:56:44 np0005486808 nova_compute[259627]: 2025-10-14 08:56:44.442 2 DEBUG oslo_concurrency.processutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:56:44 np0005486808 nova_compute[259627]: 2025-10-14 08:56:44.442 2 DEBUG oslo_concurrency.lockutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:56:44 np0005486808 nova_compute[259627]: 2025-10-14 08:56:44.443 2 DEBUG oslo_concurrency.lockutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:56:44 np0005486808 nova_compute[259627]: 2025-10-14 08:56:44.443 2 DEBUG oslo_concurrency.lockutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:56:44 np0005486808 nova_compute[259627]: 2025-10-14 08:56:44.466 2 DEBUG nova.storage.rbd_utils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] rbd image 251ae181-b980-4338-a6b5-eee48450b510_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:56:44 np0005486808 nova_compute[259627]: 2025-10-14 08:56:44.468 2 DEBUG oslo_concurrency.processutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 251ae181-b980-4338-a6b5-eee48450b510_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:56:44 np0005486808 nova_compute[259627]: 2025-10-14 08:56:44.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:44 np0005486808 podman[283414]: 2025-10-14 08:56:44.66593064 +0000 UTC m=+0.096957342 container exec c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 14 04:56:44 np0005486808 nova_compute[259627]: 2025-10-14 08:56:44.716 2 DEBUG oslo_concurrency.processutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 251ae181-b980-4338-a6b5-eee48450b510_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.247s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:56:44 np0005486808 podman[283414]: 2025-10-14 08:56:44.750251001 +0000 UTC m=+0.181277683 container exec_died c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 14 04:56:44 np0005486808 nova_compute[259627]: 2025-10-14 08:56:44.803 2 DEBUG nova.storage.rbd_utils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] resizing rbd image 251ae181-b980-4338-a6b5-eee48450b510_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 04:56:44 np0005486808 nova_compute[259627]: 2025-10-14 08:56:44.890 2 DEBUG nova.objects.instance [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Lazy-loading 'migration_context' on Instance uuid 251ae181-b980-4338-a6b5-eee48450b510 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 04:56:44 np0005486808 nova_compute[259627]: 2025-10-14 08:56:44.912 2 DEBUG nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 04:56:44 np0005486808 nova_compute[259627]: 2025-10-14 08:56:44.913 2 DEBUG nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Ensure instance console log exists: /var/lib/nova/instances/251ae181-b980-4338-a6b5-eee48450b510/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 04:56:44 np0005486808 nova_compute[259627]: 2025-10-14 08:56:44.913 2 DEBUG oslo_concurrency.lockutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:56:44 np0005486808 nova_compute[259627]: 2025-10-14 08:56:44.913 2 DEBUG oslo_concurrency.lockutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:56:44 np0005486808 nova_compute[259627]: 2025-10-14 08:56:44.913 2 DEBUG oslo_concurrency.lockutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:56:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:56:45 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:56:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:56:45 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:56:45 np0005486808 nova_compute[259627]: 2025-10-14 08:56:45.450 2 DEBUG nova.network.neutron [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Successfully created port: 3af4eb0a-c48b-4857-8399-453429b6af53 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 04:56:45 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1140: 305 pgs: 305 active+clean; 181 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.3 MiB/s wr, 177 op/s
Oct 14 04:56:46 np0005486808 ovn_controller[152662]: 2025-10-14T08:56:46Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a5:2f:21 10.100.0.9
Oct 14 04:56:46 np0005486808 ovn_controller[152662]: 2025-10-14T08:56:46Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a5:2f:21 10.100.0.9
Oct 14 04:56:46 np0005486808 nova_compute[259627]: 2025-10-14 08:56:46.373 2 DEBUG nova.network.neutron [req-ff69b882-6b0a-41d7-adf3-f32b5488d2c8 req-6119726a-70f9-49ae-bf72-1348de4a00a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Updated VIF entry in instance network info cache for port 7da6c99d-4e04-4c0b-b4d0-d32a2e19c462. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 04:56:46 np0005486808 nova_compute[259627]: 2025-10-14 08:56:46.374 2 DEBUG nova.network.neutron [req-ff69b882-6b0a-41d7-adf3-f32b5488d2c8 req-6119726a-70f9-49ae-bf72-1348de4a00a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Updating instance_info_cache with network_info: [{"id": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "address": "fa:16:3e:a5:2f:21", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da6c99d-4e", "ovs_interfaceid": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 04:56:46 np0005486808 nova_compute[259627]: 2025-10-14 08:56:46.395 2 DEBUG oslo_concurrency.lockutils [req-ff69b882-6b0a-41d7-adf3-f32b5488d2c8 req-6119726a-70f9-49ae-bf72-1348de4a00a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-548bff7e-531b-4f5d-b4d3-18d586f46581" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 04:56:46 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:56:46 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:56:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:56:46 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:56:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 04:56:46 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:56:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 04:56:46 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:56:46 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 34df4e35-12e5-4a26-b71a-52d5ee7b4426 does not exist
Oct 14 04:56:46 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 2f209ca5-99fc-4a9b-9a05-5c35d15e23ab does not exist
Oct 14 04:56:46 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 3ff1c5f5-6062-4550-81df-74bb75a338b4 does not exist
Oct 14 04:56:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 04:56:46 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 04:56:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 04:56:46 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:56:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:56:46 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:56:46 np0005486808 nova_compute[259627]: 2025-10-14 08:56:46.574 2 DEBUG nova.network.neutron [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Successfully updated port: 3af4eb0a-c48b-4857-8399-453429b6af53 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 04:56:46 np0005486808 nova_compute[259627]: 2025-10-14 08:56:46.589 2 DEBUG oslo_concurrency.lockutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Acquiring lock "refresh_cache-251ae181-b980-4338-a6b5-eee48450b510" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 04:56:46 np0005486808 nova_compute[259627]: 2025-10-14 08:56:46.589 2 DEBUG oslo_concurrency.lockutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Acquired lock "refresh_cache-251ae181-b980-4338-a6b5-eee48450b510" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 04:56:46 np0005486808 nova_compute[259627]: 2025-10-14 08:56:46.589 2 DEBUG nova.network.neutron [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 04:56:46 np0005486808 nova_compute[259627]: 2025-10-14 08:56:46.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:56:46 np0005486808 nova_compute[259627]: 2025-10-14 08:56:46.857 2 DEBUG nova.network.neutron [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 04:56:46 np0005486808 nova_compute[259627]: 2025-10-14 08:56:46.972 2 DEBUG nova.compute.manager [req-c70d62c8-6b36-44ef-a7e7-e2ad78f380b7 req-a7eae03f-bb2d-4008-8d4d-5b5653d171db 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Received event network-changed-3af4eb0a-c48b-4857-8399-453429b6af53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 04:56:46 np0005486808 nova_compute[259627]: 2025-10-14 08:56:46.973 2 DEBUG nova.compute.manager [req-c70d62c8-6b36-44ef-a7e7-e2ad78f380b7 req-a7eae03f-bb2d-4008-8d4d-5b5653d171db 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Refreshing instance network info cache due to event network-changed-3af4eb0a-c48b-4857-8399-453429b6af53. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 04:56:46 np0005486808 nova_compute[259627]: 2025-10-14 08:56:46.973 2 DEBUG oslo_concurrency.lockutils [req-c70d62c8-6b36-44ef-a7e7-e2ad78f380b7 req-a7eae03f-bb2d-4008-8d4d-5b5653d171db 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-251ae181-b980-4338-a6b5-eee48450b510" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 04:56:47 np0005486808 podman[283916]: 2025-10-14 08:56:47.069467432 +0000 UTC m=+0.069818226 container create 6555442024d50c092138ce3b14f771aecff73f03a9841978a874e4ffe213066a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 04:56:47 np0005486808 systemd[1]: Started libpod-conmon-6555442024d50c092138ce3b14f771aecff73f03a9841978a874e4ffe213066a.scope.
Oct 14 04:56:47 np0005486808 podman[283916]: 2025-10-14 08:56:47.026092177 +0000 UTC m=+0.026442961 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:56:47 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:56:47 np0005486808 podman[283916]: 2025-10-14 08:56:47.161272066 +0000 UTC m=+0.161622850 container init 6555442024d50c092138ce3b14f771aecff73f03a9841978a874e4ffe213066a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_swanson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:56:47 np0005486808 podman[283916]: 2025-10-14 08:56:47.168437602 +0000 UTC m=+0.168788366 container start 6555442024d50c092138ce3b14f771aecff73f03a9841978a874e4ffe213066a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:56:47 np0005486808 podman[283916]: 2025-10-14 08:56:47.171397455 +0000 UTC m=+0.171748219 container attach 6555442024d50c092138ce3b14f771aecff73f03a9841978a874e4ffe213066a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:56:47 np0005486808 epic_swanson[283932]: 167 167
Oct 14 04:56:47 np0005486808 systemd[1]: libpod-6555442024d50c092138ce3b14f771aecff73f03a9841978a874e4ffe213066a.scope: Deactivated successfully.
Oct 14 04:56:47 np0005486808 podman[283916]: 2025-10-14 08:56:47.17485698 +0000 UTC m=+0.175207784 container died 6555442024d50c092138ce3b14f771aecff73f03a9841978a874e4ffe213066a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:56:47 np0005486808 systemd[1]: var-lib-containers-storage-overlay-9c0a8c1572c07b2aa85ee313db1437632293e90831b447391762bdd4d46cebe0-merged.mount: Deactivated successfully.
Oct 14 04:56:47 np0005486808 podman[283916]: 2025-10-14 08:56:47.226702653 +0000 UTC m=+0.227053417 container remove 6555442024d50c092138ce3b14f771aecff73f03a9841978a874e4ffe213066a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 14 04:56:47 np0005486808 systemd[1]: libpod-conmon-6555442024d50c092138ce3b14f771aecff73f03a9841978a874e4ffe213066a.scope: Deactivated successfully.
Oct 14 04:56:47 np0005486808 podman[283954]: 2025-10-14 08:56:47.388493226 +0000 UTC m=+0.039251415 container create 862b73192510f39218dbf48584a39c7f963300b7ad456be7d06790dd7b142e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_brahmagupta, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:56:47 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:56:47 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:56:47 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:56:47 np0005486808 systemd[1]: Started libpod-conmon-862b73192510f39218dbf48584a39c7f963300b7ad456be7d06790dd7b142e99.scope.
Oct 14 04:56:47 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:56:47 np0005486808 podman[283954]: 2025-10-14 08:56:47.370854713 +0000 UTC m=+0.021612932 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:56:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:56:47 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f64aa11d6549a940b1072e13f185fdb1e9874632198a3d3d17b566f2ee17119/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:56:47 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f64aa11d6549a940b1072e13f185fdb1e9874632198a3d3d17b566f2ee17119/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:56:47 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f64aa11d6549a940b1072e13f185fdb1e9874632198a3d3d17b566f2ee17119/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:56:47 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f64aa11d6549a940b1072e13f185fdb1e9874632198a3d3d17b566f2ee17119/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:56:47 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f64aa11d6549a940b1072e13f185fdb1e9874632198a3d3d17b566f2ee17119/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:56:47 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 305 active+clean; 181 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 152 op/s
Oct 14 04:56:47 np0005486808 podman[283954]: 2025-10-14 08:56:47.503888468 +0000 UTC m=+0.154646687 container init 862b73192510f39218dbf48584a39c7f963300b7ad456be7d06790dd7b142e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_brahmagupta, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:56:47 np0005486808 podman[283954]: 2025-10-14 08:56:47.517957234 +0000 UTC m=+0.168715413 container start 862b73192510f39218dbf48584a39c7f963300b7ad456be7d06790dd7b142e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_brahmagupta, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 14 04:56:47 np0005486808 podman[283954]: 2025-10-14 08:56:47.523729336 +0000 UTC m=+0.174487625 container attach 862b73192510f39218dbf48584a39c7f963300b7ad456be7d06790dd7b142e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_brahmagupta, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:56:47 np0005486808 nova_compute[259627]: 2025-10-14 08:56:47.664 2 DEBUG nova.compute.manager [None req-0eba2cbe-b5bb-421b-b651-e6bd0b8cdfd2 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:56:47 np0005486808 nova_compute[259627]: 2025-10-14 08:56:47.739 2 INFO nova.compute.manager [None req-0eba2cbe-b5bb-421b-b651-e6bd0b8cdfd2 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] instance snapshotting#033[00m
Oct 14 04:56:48 np0005486808 nova_compute[259627]: 2025-10-14 08:56:48.001 2 INFO nova.virt.libvirt.driver [None req-0eba2cbe-b5bb-421b-b651-e6bd0b8cdfd2 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Beginning live snapshot process#033[00m
Oct 14 04:56:48 np0005486808 nova_compute[259627]: 2025-10-14 08:56:48.151 2 DEBUG nova.virt.libvirt.imagebackend [None req-0eba2cbe-b5bb-421b-b651-e6bd0b8cdfd2 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] No parent info for a4789543-f429-47d7-9f79-80a9d90a59f9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct 14 04:56:48 np0005486808 nova_compute[259627]: 2025-10-14 08:56:48.178 2 DEBUG nova.network.neutron [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Updating instance_info_cache with network_info: [{"id": "3af4eb0a-c48b-4857-8399-453429b6af53", "address": "fa:16:3e:dd:99:88", "network": {"id": "fb9605f8-2a2c-40d4-892f-fb75a29c07c3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2089994571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dffd1ba9c7eb426eba02b7fa1cb571e2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af4eb0a-c4", "ovs_interfaceid": "3af4eb0a-c48b-4857-8399-453429b6af53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:56:48 np0005486808 nova_compute[259627]: 2025-10-14 08:56:48.199 2 DEBUG oslo_concurrency.lockutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Releasing lock "refresh_cache-251ae181-b980-4338-a6b5-eee48450b510" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:56:48 np0005486808 nova_compute[259627]: 2025-10-14 08:56:48.199 2 DEBUG nova.compute.manager [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Instance network_info: |[{"id": "3af4eb0a-c48b-4857-8399-453429b6af53", "address": "fa:16:3e:dd:99:88", "network": {"id": "fb9605f8-2a2c-40d4-892f-fb75a29c07c3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2089994571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dffd1ba9c7eb426eba02b7fa1cb571e2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af4eb0a-c4", "ovs_interfaceid": "3af4eb0a-c48b-4857-8399-453429b6af53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 04:56:48 np0005486808 nova_compute[259627]: 2025-10-14 08:56:48.200 2 DEBUG oslo_concurrency.lockutils [req-c70d62c8-6b36-44ef-a7e7-e2ad78f380b7 req-a7eae03f-bb2d-4008-8d4d-5b5653d171db 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-251ae181-b980-4338-a6b5-eee48450b510" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:56:48 np0005486808 nova_compute[259627]: 2025-10-14 08:56:48.200 2 DEBUG nova.network.neutron [req-c70d62c8-6b36-44ef-a7e7-e2ad78f380b7 req-a7eae03f-bb2d-4008-8d4d-5b5653d171db 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Refreshing network info cache for port 3af4eb0a-c48b-4857-8399-453429b6af53 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 04:56:48 np0005486808 nova_compute[259627]: 2025-10-14 08:56:48.204 2 DEBUG nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Start _get_guest_xml network_info=[{"id": "3af4eb0a-c48b-4857-8399-453429b6af53", "address": "fa:16:3e:dd:99:88", "network": {"id": "fb9605f8-2a2c-40d4-892f-fb75a29c07c3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2089994571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dffd1ba9c7eb426eba02b7fa1cb571e2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af4eb0a-c4", "ovs_interfaceid": "3af4eb0a-c48b-4857-8399-453429b6af53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 04:56:48 np0005486808 nova_compute[259627]: 2025-10-14 08:56:48.208 2 WARNING nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:56:48 np0005486808 nova_compute[259627]: 2025-10-14 08:56:48.213 2 DEBUG nova.virt.libvirt.host [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 04:56:48 np0005486808 nova_compute[259627]: 2025-10-14 08:56:48.213 2 DEBUG nova.virt.libvirt.host [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 04:56:48 np0005486808 nova_compute[259627]: 2025-10-14 08:56:48.219 2 DEBUG nova.virt.libvirt.host [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 04:56:48 np0005486808 nova_compute[259627]: 2025-10-14 08:56:48.219 2 DEBUG nova.virt.libvirt.host [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 04:56:48 np0005486808 nova_compute[259627]: 2025-10-14 08:56:48.220 2 DEBUG nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 04:56:48 np0005486808 nova_compute[259627]: 2025-10-14 08:56:48.220 2 DEBUG nova.virt.hardware [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 04:56:48 np0005486808 nova_compute[259627]: 2025-10-14 08:56:48.220 2 DEBUG nova.virt.hardware [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 04:56:48 np0005486808 nova_compute[259627]: 2025-10-14 08:56:48.220 2 DEBUG nova.virt.hardware [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 04:56:48 np0005486808 nova_compute[259627]: 2025-10-14 08:56:48.220 2 DEBUG nova.virt.hardware [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 04:56:48 np0005486808 nova_compute[259627]: 2025-10-14 08:56:48.221 2 DEBUG nova.virt.hardware [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 04:56:48 np0005486808 nova_compute[259627]: 2025-10-14 08:56:48.221 2 DEBUG nova.virt.hardware [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 04:56:48 np0005486808 nova_compute[259627]: 2025-10-14 08:56:48.221 2 DEBUG nova.virt.hardware [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 04:56:48 np0005486808 nova_compute[259627]: 2025-10-14 08:56:48.221 2 DEBUG nova.virt.hardware [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 04:56:48 np0005486808 nova_compute[259627]: 2025-10-14 08:56:48.222 2 DEBUG nova.virt.hardware [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 04:56:48 np0005486808 nova_compute[259627]: 2025-10-14 08:56:48.222 2 DEBUG nova.virt.hardware [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 04:56:48 np0005486808 nova_compute[259627]: 2025-10-14 08:56:48.222 2 DEBUG nova.virt.hardware [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 04:56:48 np0005486808 nova_compute[259627]: 2025-10-14 08:56:48.224 2 DEBUG oslo_concurrency.processutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:56:48 np0005486808 nova_compute[259627]: 2025-10-14 08:56:48.397 2 DEBUG nova.storage.rbd_utils [None req-0eba2cbe-b5bb-421b-b651-e6bd0b8cdfd2 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] creating snapshot(cce7a2c7b7974e068ac051c3b08861cb) on rbd image(5de76ef0-5c03-4b43-a691-c858cecd9e80_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 14 04:56:48 np0005486808 ovn_controller[152662]: 2025-10-14T08:56:48Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bc:3d:9e 10.100.0.5
Oct 14 04:56:48 np0005486808 ovn_controller[152662]: 2025-10-14T08:56:48Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bc:3d:9e 10.100.0.5
Oct 14 04:56:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:56:48 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/654340394' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:56:48 np0005486808 nova_compute[259627]: 2025-10-14 08:56:48.658 2 DEBUG oslo_concurrency.processutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:56:48 np0005486808 flamboyant_brahmagupta[283970]: --> passed data devices: 0 physical, 3 LVM
Oct 14 04:56:48 np0005486808 flamboyant_brahmagupta[283970]: --> relative data size: 1.0
Oct 14 04:56:48 np0005486808 flamboyant_brahmagupta[283970]: --> All data devices are unavailable
Oct 14 04:56:48 np0005486808 nova_compute[259627]: 2025-10-14 08:56:48.682 2 DEBUG nova.storage.rbd_utils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] rbd image 251ae181-b980-4338-a6b5-eee48450b510_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:56:48 np0005486808 nova_compute[259627]: 2025-10-14 08:56:48.686 2 DEBUG oslo_concurrency.processutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:56:48 np0005486808 systemd[1]: libpod-862b73192510f39218dbf48584a39c7f963300b7ad456be7d06790dd7b142e99.scope: Deactivated successfully.
Oct 14 04:56:48 np0005486808 podman[283954]: 2025-10-14 08:56:48.70447846 +0000 UTC m=+1.355236639 container died 862b73192510f39218dbf48584a39c7f963300b7ad456be7d06790dd7b142e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_brahmagupta, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:56:48 np0005486808 systemd[1]: libpod-862b73192510f39218dbf48584a39c7f963300b7ad456be7d06790dd7b142e99.scope: Consumed 1.082s CPU time.
Oct 14 04:56:48 np0005486808 systemd[1]: var-lib-containers-storage-overlay-2f64aa11d6549a940b1072e13f185fdb1e9874632198a3d3d17b566f2ee17119-merged.mount: Deactivated successfully.
Oct 14 04:56:48 np0005486808 podman[283954]: 2025-10-14 08:56:48.774743316 +0000 UTC m=+1.425501505 container remove 862b73192510f39218dbf48584a39c7f963300b7ad456be7d06790dd7b142e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 14 04:56:48 np0005486808 systemd[1]: libpod-conmon-862b73192510f39218dbf48584a39c7f963300b7ad456be7d06790dd7b142e99.scope: Deactivated successfully.
Oct 14 04:56:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:56:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/815552921' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:56:49 np0005486808 nova_compute[259627]: 2025-10-14 08:56:49.134 2 DEBUG oslo_concurrency.processutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:56:49 np0005486808 nova_compute[259627]: 2025-10-14 08:56:49.137 2 DEBUG nova.virt.libvirt.vif [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:56:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-928191118',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-928191118',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-928191118',id=15,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dffd1ba9c7eb426eba02b7fa1cb571e2',ramdisk_id='',reservation_id='r-wq8uysq1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-39168433',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-39168433-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:56:44Z,user_data=None,user_id='11f9a8052a8349b0a21b3acc32a7f2b1',uuid=251ae181-b980-4338-a6b5-eee48450b510,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3af4eb0a-c48b-4857-8399-453429b6af53", "address": "fa:16:3e:dd:99:88", "network": {"id": "fb9605f8-2a2c-40d4-892f-fb75a29c07c3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2089994571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dffd1ba9c7eb426eba02b7fa1cb571e2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af4eb0a-c4", "ovs_interfaceid": "3af4eb0a-c48b-4857-8399-453429b6af53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 04:56:49 np0005486808 nova_compute[259627]: 2025-10-14 08:56:49.137 2 DEBUG nova.network.os_vif_util [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Converting VIF {"id": "3af4eb0a-c48b-4857-8399-453429b6af53", "address": "fa:16:3e:dd:99:88", "network": {"id": "fb9605f8-2a2c-40d4-892f-fb75a29c07c3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2089994571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dffd1ba9c7eb426eba02b7fa1cb571e2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af4eb0a-c4", "ovs_interfaceid": "3af4eb0a-c48b-4857-8399-453429b6af53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:56:49 np0005486808 nova_compute[259627]: 2025-10-14 08:56:49.138 2 DEBUG nova.network.os_vif_util [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:99:88,bridge_name='br-int',has_traffic_filtering=True,id=3af4eb0a-c48b-4857-8399-453429b6af53,network=Network(fb9605f8-2a2c-40d4-892f-fb75a29c07c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3af4eb0a-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:56:49 np0005486808 nova_compute[259627]: 2025-10-14 08:56:49.139 2 DEBUG nova.objects.instance [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 251ae181-b980-4338-a6b5-eee48450b510 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:56:49 np0005486808 nova_compute[259627]: 2025-10-14 08:56:49.158 2 DEBUG nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] End _get_guest_xml xml=<domain type="kvm">
Oct 14 04:56:49 np0005486808 nova_compute[259627]:  <uuid>251ae181-b980-4338-a6b5-eee48450b510</uuid>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:  <name>instance-0000000f</name>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 04:56:49 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:      <nova:name>tempest-FloatingIPsAssociationNegativeTestJSON-server-928191118</nova:name>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 08:56:48</nova:creationTime>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 04:56:49 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:        <nova:user uuid="11f9a8052a8349b0a21b3acc32a7f2b1">tempest-FloatingIPsAssociationNegativeTestJSON-39168433-project-member</nova:user>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:        <nova:project uuid="dffd1ba9c7eb426eba02b7fa1cb571e2">tempest-FloatingIPsAssociationNegativeTestJSON-39168433</nova:project>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:        <nova:port uuid="3af4eb0a-c48b-4857-8399-453429b6af53">
Oct 14 04:56:49 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <system>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:      <entry name="serial">251ae181-b980-4338-a6b5-eee48450b510</entry>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:      <entry name="uuid">251ae181-b980-4338-a6b5-eee48450b510</entry>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    </system>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:  <os>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:  </os>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:  <features>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:  </features>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:  </clock>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:  <devices>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 04:56:49 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/251ae181-b980-4338-a6b5-eee48450b510_disk">
Oct 14 04:56:49 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:56:49 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 04:56:49 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/251ae181-b980-4338-a6b5-eee48450b510_disk.config">
Oct 14 04:56:49 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:56:49 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 04:56:49 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:dd:99:88"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:      <target dev="tap3af4eb0a-c4"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    </interface>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 04:56:49 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/251ae181-b980-4338-a6b5-eee48450b510/console.log" append="off"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    </serial>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <video>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    </video>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 04:56:49 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    </rng>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 04:56:49 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 04:56:49 np0005486808 nova_compute[259627]:  </devices>
Oct 14 04:56:49 np0005486808 nova_compute[259627]: </domain>
Oct 14 04:56:49 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 04:56:49 np0005486808 nova_compute[259627]: 2025-10-14 08:56:49.159 2 DEBUG nova.compute.manager [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Preparing to wait for external event network-vif-plugged-3af4eb0a-c48b-4857-8399-453429b6af53 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 04:56:49 np0005486808 nova_compute[259627]: 2025-10-14 08:56:49.159 2 DEBUG oslo_concurrency.lockutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Acquiring lock "251ae181-b980-4338-a6b5-eee48450b510-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:56:49 np0005486808 nova_compute[259627]: 2025-10-14 08:56:49.159 2 DEBUG oslo_concurrency.lockutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Lock "251ae181-b980-4338-a6b5-eee48450b510-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:56:49 np0005486808 nova_compute[259627]: 2025-10-14 08:56:49.159 2 DEBUG oslo_concurrency.lockutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Lock "251ae181-b980-4338-a6b5-eee48450b510-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:56:49 np0005486808 nova_compute[259627]: 2025-10-14 08:56:49.160 2 DEBUG nova.virt.libvirt.vif [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:56:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-928191118',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-928191118',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-928191118',id=15,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dffd1ba9c7eb426eba02b7fa1cb571e2',ramdisk_id='',reservation_id='r-wq8uysq1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-39168433',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-39168433-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:56:44Z,user_data=None,user_id='11f9a8052a8349b0a21b3acc32a7f2b1',uuid=251ae181-b980-4338-a6b5-eee48450b510,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3af4eb0a-c48b-4857-8399-453429b6af53", "address": "fa:16:3e:dd:99:88", "network": {"id": "fb9605f8-2a2c-40d4-892f-fb75a29c07c3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2089994571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dffd1ba9c7eb426eba02b7fa1cb571e2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af4eb0a-c4", "ovs_interfaceid": "3af4eb0a-c48b-4857-8399-453429b6af53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 04:56:49 np0005486808 nova_compute[259627]: 2025-10-14 08:56:49.160 2 DEBUG nova.network.os_vif_util [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Converting VIF {"id": "3af4eb0a-c48b-4857-8399-453429b6af53", "address": "fa:16:3e:dd:99:88", "network": {"id": "fb9605f8-2a2c-40d4-892f-fb75a29c07c3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2089994571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dffd1ba9c7eb426eba02b7fa1cb571e2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af4eb0a-c4", "ovs_interfaceid": "3af4eb0a-c48b-4857-8399-453429b6af53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:56:49 np0005486808 nova_compute[259627]: 2025-10-14 08:56:49.161 2 DEBUG nova.network.os_vif_util [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:99:88,bridge_name='br-int',has_traffic_filtering=True,id=3af4eb0a-c48b-4857-8399-453429b6af53,network=Network(fb9605f8-2a2c-40d4-892f-fb75a29c07c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3af4eb0a-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:56:49 np0005486808 nova_compute[259627]: 2025-10-14 08:56:49.161 2 DEBUG os_vif [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:99:88,bridge_name='br-int',has_traffic_filtering=True,id=3af4eb0a-c48b-4857-8399-453429b6af53,network=Network(fb9605f8-2a2c-40d4-892f-fb75a29c07c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3af4eb0a-c4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 04:56:49 np0005486808 nova_compute[259627]: 2025-10-14 08:56:49.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:49 np0005486808 nova_compute[259627]: 2025-10-14 08:56:49.162 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:56:49 np0005486808 nova_compute[259627]: 2025-10-14 08:56:49.163 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:56:49 np0005486808 nova_compute[259627]: 2025-10-14 08:56:49.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:49 np0005486808 nova_compute[259627]: 2025-10-14 08:56:49.168 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3af4eb0a-c4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:56:49 np0005486808 nova_compute[259627]: 2025-10-14 08:56:49.168 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3af4eb0a-c4, col_values=(('external_ids', {'iface-id': '3af4eb0a-c48b-4857-8399-453429b6af53', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dd:99:88', 'vm-uuid': '251ae181-b980-4338-a6b5-eee48450b510'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:56:49 np0005486808 nova_compute[259627]: 2025-10-14 08:56:49.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:49 np0005486808 NetworkManager[44885]: <info>  [1760432209.1728] manager: (tap3af4eb0a-c4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Oct 14 04:56:49 np0005486808 nova_compute[259627]: 2025-10-14 08:56:49.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 04:56:49 np0005486808 nova_compute[259627]: 2025-10-14 08:56:49.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:49 np0005486808 nova_compute[259627]: 2025-10-14 08:56:49.180 2 INFO os_vif [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:99:88,bridge_name='br-int',has_traffic_filtering=True,id=3af4eb0a-c48b-4857-8399-453429b6af53,network=Network(fb9605f8-2a2c-40d4-892f-fb75a29c07c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3af4eb0a-c4')#033[00m
Oct 14 04:56:49 np0005486808 nova_compute[259627]: 2025-10-14 08:56:49.246 2 DEBUG nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:56:49 np0005486808 nova_compute[259627]: 2025-10-14 08:56:49.246 2 DEBUG nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:56:49 np0005486808 nova_compute[259627]: 2025-10-14 08:56:49.247 2 DEBUG nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] No VIF found with MAC fa:16:3e:dd:99:88, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 04:56:49 np0005486808 nova_compute[259627]: 2025-10-14 08:56:49.247 2 INFO nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Using config drive#033[00m
Oct 14 04:56:49 np0005486808 nova_compute[259627]: 2025-10-14 08:56:49.275 2 DEBUG nova.storage.rbd_utils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] rbd image 251ae181-b980-4338-a6b5-eee48450b510_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:56:49 np0005486808 podman[284286]: 2025-10-14 08:56:49.415644304 +0000 UTC m=+0.054247173 container create 6a72bb74e1a6537abbf7be3e05d13bc02076481a5fd3b7cf313defd4057af1a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_golick, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 14 04:56:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Oct 14 04:56:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Oct 14 04:56:49 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Oct 14 04:56:49 np0005486808 systemd[1]: Started libpod-conmon-6a72bb74e1a6537abbf7be3e05d13bc02076481a5fd3b7cf313defd4057af1a3.scope.
Oct 14 04:56:49 np0005486808 podman[284286]: 2025-10-14 08:56:49.398171235 +0000 UTC m=+0.036774164 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:56:49 np0005486808 nova_compute[259627]: 2025-10-14 08:56:49.498 2 DEBUG nova.storage.rbd_utils [None req-0eba2cbe-b5bb-421b-b651-e6bd0b8cdfd2 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] cloning vms/5de76ef0-5c03-4b43-a691-c858cecd9e80_disk@cce7a2c7b7974e068ac051c3b08861cb to images/c5692065-49f8-45a6-8eac-e30026b3f690 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct 14 04:56:49 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1143: 305 pgs: 305 active+clean; 181 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 182 op/s
Oct 14 04:56:49 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:56:49 np0005486808 podman[284286]: 2025-10-14 08:56:49.525532652 +0000 UTC m=+0.164135561 container init 6a72bb74e1a6537abbf7be3e05d13bc02076481a5fd3b7cf313defd4057af1a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:56:49 np0005486808 podman[284286]: 2025-10-14 08:56:49.534157804 +0000 UTC m=+0.172760683 container start 6a72bb74e1a6537abbf7be3e05d13bc02076481a5fd3b7cf313defd4057af1a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_golick, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 14 04:56:49 np0005486808 podman[284286]: 2025-10-14 08:56:49.537584248 +0000 UTC m=+0.176187137 container attach 6a72bb74e1a6537abbf7be3e05d13bc02076481a5fd3b7cf313defd4057af1a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:56:49 np0005486808 stupefied_golick[284301]: 167 167
Oct 14 04:56:49 np0005486808 systemd[1]: libpod-6a72bb74e1a6537abbf7be3e05d13bc02076481a5fd3b7cf313defd4057af1a3.scope: Deactivated successfully.
Oct 14 04:56:49 np0005486808 conmon[284301]: conmon 6a72bb74e1a6537abbf7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6a72bb74e1a6537abbf7be3e05d13bc02076481a5fd3b7cf313defd4057af1a3.scope/container/memory.events
Oct 14 04:56:49 np0005486808 podman[284286]: 2025-10-14 08:56:49.54172048 +0000 UTC m=+0.180323369 container died 6a72bb74e1a6537abbf7be3e05d13bc02076481a5fd3b7cf313defd4057af1a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 14 04:56:49 np0005486808 systemd[1]: var-lib-containers-storage-overlay-08d8a554f6c7b905bb590ab4da7f8c9e13b92d72a26b8fa22c91dae907df4c30-merged.mount: Deactivated successfully.
Oct 14 04:56:49 np0005486808 podman[284286]: 2025-10-14 08:56:49.585945306 +0000 UTC m=+0.224548175 container remove 6a72bb74e1a6537abbf7be3e05d13bc02076481a5fd3b7cf313defd4057af1a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_golick, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 04:56:49 np0005486808 systemd[1]: libpod-conmon-6a72bb74e1a6537abbf7be3e05d13bc02076481a5fd3b7cf313defd4057af1a3.scope: Deactivated successfully.
Oct 14 04:56:49 np0005486808 nova_compute[259627]: 2025-10-14 08:56:49.617 2 DEBUG nova.storage.rbd_utils [None req-0eba2cbe-b5bb-421b-b651-e6bd0b8cdfd2 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] flattening images/c5692065-49f8-45a6-8eac-e30026b3f690 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct 14 04:56:49 np0005486808 nova_compute[259627]: 2025-10-14 08:56:49.651 2 DEBUG nova.network.neutron [req-c70d62c8-6b36-44ef-a7e7-e2ad78f380b7 req-a7eae03f-bb2d-4008-8d4d-5b5653d171db 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Updated VIF entry in instance network info cache for port 3af4eb0a-c48b-4857-8399-453429b6af53. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 04:56:49 np0005486808 nova_compute[259627]: 2025-10-14 08:56:49.652 2 DEBUG nova.network.neutron [req-c70d62c8-6b36-44ef-a7e7-e2ad78f380b7 req-a7eae03f-bb2d-4008-8d4d-5b5653d171db 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Updating instance_info_cache with network_info: [{"id": "3af4eb0a-c48b-4857-8399-453429b6af53", "address": "fa:16:3e:dd:99:88", "network": {"id": "fb9605f8-2a2c-40d4-892f-fb75a29c07c3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2089994571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dffd1ba9c7eb426eba02b7fa1cb571e2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af4eb0a-c4", "ovs_interfaceid": "3af4eb0a-c48b-4857-8399-453429b6af53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:56:49 np0005486808 nova_compute[259627]: 2025-10-14 08:56:49.672 2 DEBUG oslo_concurrency.lockutils [req-c70d62c8-6b36-44ef-a7e7-e2ad78f380b7 req-a7eae03f-bb2d-4008-8d4d-5b5653d171db 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-251ae181-b980-4338-a6b5-eee48450b510" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:56:49 np0005486808 nova_compute[259627]: 2025-10-14 08:56:49.785 2 INFO nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Creating config drive at /var/lib/nova/instances/251ae181-b980-4338-a6b5-eee48450b510/disk.config#033[00m
Oct 14 04:56:49 np0005486808 nova_compute[259627]: 2025-10-14 08:56:49.789 2 DEBUG oslo_concurrency.processutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/251ae181-b980-4338-a6b5-eee48450b510/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0310kl_n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:56:49 np0005486808 podman[284379]: 2025-10-14 08:56:49.806310087 +0000 UTC m=+0.048086682 container create fbbb9d72f228af6a45ce73431e066c05bfeb4f60d5af1aa2c511474521198687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:56:49 np0005486808 systemd[1]: Started libpod-conmon-fbbb9d72f228af6a45ce73431e066c05bfeb4f60d5af1aa2c511474521198687.scope.
Oct 14 04:56:49 np0005486808 podman[284379]: 2025-10-14 08:56:49.787239799 +0000 UTC m=+0.029016414 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:56:49 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:56:49 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6664f7d420f54b60931401a546e486e20be51f72fbe885ba74c022856763a47/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:56:49 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6664f7d420f54b60931401a546e486e20be51f72fbe885ba74c022856763a47/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:56:49 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6664f7d420f54b60931401a546e486e20be51f72fbe885ba74c022856763a47/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:56:49 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6664f7d420f54b60931401a546e486e20be51f72fbe885ba74c022856763a47/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:56:49 np0005486808 podman[284379]: 2025-10-14 08:56:49.910327161 +0000 UTC m=+0.152103816 container init fbbb9d72f228af6a45ce73431e066c05bfeb4f60d5af1aa2c511474521198687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_aryabhata, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Oct 14 04:56:49 np0005486808 nova_compute[259627]: 2025-10-14 08:56:49.919 2 DEBUG oslo_concurrency.processutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/251ae181-b980-4338-a6b5-eee48450b510/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0310kl_n" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:56:49 np0005486808 podman[284379]: 2025-10-14 08:56:49.922709576 +0000 UTC m=+0.164486191 container start fbbb9d72f228af6a45ce73431e066c05bfeb4f60d5af1aa2c511474521198687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 14 04:56:49 np0005486808 podman[284379]: 2025-10-14 08:56:49.926516139 +0000 UTC m=+0.168292754 container attach fbbb9d72f228af6a45ce73431e066c05bfeb4f60d5af1aa2c511474521198687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:56:49 np0005486808 nova_compute[259627]: 2025-10-14 08:56:49.943 2 DEBUG nova.storage.rbd_utils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] rbd image 251ae181-b980-4338-a6b5-eee48450b510_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:56:49 np0005486808 nova_compute[259627]: 2025-10-14 08:56:49.950 2 DEBUG oslo_concurrency.processutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/251ae181-b980-4338-a6b5-eee48450b510/disk.config 251ae181-b980-4338-a6b5-eee48450b510_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:56:49 np0005486808 nova_compute[259627]: 2025-10-14 08:56:49.978 2 DEBUG nova.storage.rbd_utils [None req-0eba2cbe-b5bb-421b-b651-e6bd0b8cdfd2 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] removing snapshot(cce7a2c7b7974e068ac051c3b08861cb) on rbd image(5de76ef0-5c03-4b43-a691-c858cecd9e80_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct 14 04:56:50 np0005486808 nova_compute[259627]: 2025-10-14 08:56:50.102 2 DEBUG oslo_concurrency.processutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/251ae181-b980-4338-a6b5-eee48450b510/disk.config 251ae181-b980-4338-a6b5-eee48450b510_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:56:50 np0005486808 nova_compute[259627]: 2025-10-14 08:56:50.103 2 INFO nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Deleting local config drive /var/lib/nova/instances/251ae181-b980-4338-a6b5-eee48450b510/disk.config because it was imported into RBD.#033[00m
Oct 14 04:56:50 np0005486808 kernel: tap3af4eb0a-c4: entered promiscuous mode
Oct 14 04:56:50 np0005486808 NetworkManager[44885]: <info>  [1760432210.1603] manager: (tap3af4eb0a-c4): new Tun device (/org/freedesktop/NetworkManager/Devices/57)
Oct 14 04:56:50 np0005486808 nova_compute[259627]: 2025-10-14 08:56:50.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:50 np0005486808 ovn_controller[152662]: 2025-10-14T08:56:50Z|00087|binding|INFO|Claiming lport 3af4eb0a-c48b-4857-8399-453429b6af53 for this chassis.
Oct 14 04:56:50 np0005486808 ovn_controller[152662]: 2025-10-14T08:56:50Z|00088|binding|INFO|3af4eb0a-c48b-4857-8399-453429b6af53: Claiming fa:16:3e:dd:99:88 10.100.0.13
Oct 14 04:56:50 np0005486808 nova_compute[259627]: 2025-10-14 08:56:50.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.184 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:99:88 10.100.0.13'], port_security=['fa:16:3e:dd:99:88 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '251ae181-b980-4338-a6b5-eee48450b510', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fb9605f8-2a2c-40d4-892f-fb75a29c07c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dffd1ba9c7eb426eba02b7fa1cb571e2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1853f749-24a7-4699-9f13-e869ca5b59f7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=076f043b-a4ac-4ba0-9e01-fc8b197a9834, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=3af4eb0a-c48b-4857-8399-453429b6af53) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.190 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 3af4eb0a-c48b-4857-8399-453429b6af53 in datapath fb9605f8-2a2c-40d4-892f-fb75a29c07c3 bound to our chassis#033[00m
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.192 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fb9605f8-2a2c-40d4-892f-fb75a29c07c3#033[00m
Oct 14 04:56:50 np0005486808 systemd-machined[214636]: New machine qemu-15-instance-0000000f.
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.218 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[653713cf-6515-4c8b-9e7b-923313ad9162]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.219 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfb9605f8-21 in ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 04:56:50 np0005486808 systemd[1]: Started Virtual Machine qemu-15-instance-0000000f.
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.221 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfb9605f8-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.221 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b8130f55-d023-4654-96c3-2ec130d3b628]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.225 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[59e79797-c004-4dc2-b3ec-aebedb4706b9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.247 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[3cd10875-df39-445c-b22e-bf3ce8e140ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:50 np0005486808 ovn_controller[152662]: 2025-10-14T08:56:50Z|00089|binding|INFO|Setting lport 3af4eb0a-c48b-4857-8399-453429b6af53 ovn-installed in OVS
Oct 14 04:56:50 np0005486808 ovn_controller[152662]: 2025-10-14T08:56:50Z|00090|binding|INFO|Setting lport 3af4eb0a-c48b-4857-8399-453429b6af53 up in Southbound
Oct 14 04:56:50 np0005486808 nova_compute[259627]: 2025-10-14 08:56:50.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:50 np0005486808 systemd-udevd[284477]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.274 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[38c5ea12-088f-47b3-9b21-51b69c1064dc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:50 np0005486808 NetworkManager[44885]: <info>  [1760432210.2969] device (tap3af4eb0a-c4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 04:56:50 np0005486808 NetworkManager[44885]: <info>  [1760432210.2984] device (tap3af4eb0a-c4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.307 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[bd9b72f0-7937-4168-8d8e-93a79628c116]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:50 np0005486808 NetworkManager[44885]: <info>  [1760432210.3176] manager: (tapfb9605f8-20): new Veth device (/org/freedesktop/NetworkManager/Devices/58)
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.316 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d8ac3277-b768-40b6-858d-05dd69704283]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.360 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c1ae29df-60df-421f-bac1-d5ca665de7d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.363 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[0400e4bb-96a8-4178-b90c-d5e87b1bb337]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:50 np0005486808 NetworkManager[44885]: <info>  [1760432210.3897] device (tapfb9605f8-20): carrier: link connected
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.396 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[caa684c0-4905-4722-ad5f-ad68befcd81a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.410 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bfa069bf-58b5-4fc4-8635-1fc00920cad2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfb9605f8-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:bd:78'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 598915, 'reachable_time': 38139, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284506, 'error': None, 'target': 'ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.424 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[91cded16-db02-4683-80e1-7e80e5a5d47f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4d:bd78'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 598915, 'tstamp': 598915}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284507, 'error': None, 'target': 'ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.437 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9d1c7166-3892-45cf-83c8-f4a592a292b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfb9605f8-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:bd:78'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 598915, 'reachable_time': 38139, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 284508, 'error': None, 'target': 'ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:50 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Oct 14 04:56:50 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Oct 14 04:56:50 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.468 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1f92bc69-45ff-4bd2-8cb6-7f640dcef307]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:50 np0005486808 nova_compute[259627]: 2025-10-14 08:56:50.495 2 DEBUG nova.storage.rbd_utils [None req-0eba2cbe-b5bb-421b-b651-e6bd0b8cdfd2 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] creating snapshot(snap) on rbd image(c5692065-49f8-45a6-8eac-e30026b3f690) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.539 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b351a0ef-295b-4d47-84ea-a75c574ed296]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.540 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfb9605f8-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.541 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.541 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfb9605f8-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:56:50 np0005486808 nova_compute[259627]: 2025-10-14 08:56:50.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:50 np0005486808 NetworkManager[44885]: <info>  [1760432210.5437] manager: (tapfb9605f8-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Oct 14 04:56:50 np0005486808 kernel: tapfb9605f8-20: entered promiscuous mode
Oct 14 04:56:50 np0005486808 nova_compute[259627]: 2025-10-14 08:56:50.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.551 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfb9605f8-20, col_values=(('external_ids', {'iface-id': 'e772e4c9-7f41-4f58-a7a3-269a843bc77c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:56:50 np0005486808 nova_compute[259627]: 2025-10-14 08:56:50.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:50 np0005486808 ovn_controller[152662]: 2025-10-14T08:56:50Z|00091|binding|INFO|Releasing lport e772e4c9-7f41-4f58-a7a3-269a843bc77c from this chassis (sb_readonly=0)
Oct 14 04:56:50 np0005486808 nova_compute[259627]: 2025-10-14 08:56:50.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.554 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fb9605f8-2a2c-40d4-892f-fb75a29c07c3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fb9605f8-2a2c-40d4-892f-fb75a29c07c3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.555 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3d2669d2-f7b2-4c3a-9fca-83cab941a570]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.556 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-fb9605f8-2a2c-40d4-892f-fb75a29c07c3
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/fb9605f8-2a2c-40d4-892f-fb75a29c07c3.pid.haproxy
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID fb9605f8-2a2c-40d4-892f-fb75a29c07c3
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 04:56:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:50.557 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3', 'env', 'PROCESS_TAG=haproxy-fb9605f8-2a2c-40d4-892f-fb75a29c07c3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fb9605f8-2a2c-40d4-892f-fb75a29c07c3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 04:56:50 np0005486808 nova_compute[259627]: 2025-10-14 08:56:50.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]: {
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:    "0": [
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:        {
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:            "devices": [
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:                "/dev/loop3"
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:            ],
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:            "lv_name": "ceph_lv0",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:            "lv_size": "21470642176",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:            "name": "ceph_lv0",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:            "tags": {
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:                "ceph.cluster_name": "ceph",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:                "ceph.crush_device_class": "",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:                "ceph.encrypted": "0",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:                "ceph.osd_id": "0",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:                "ceph.type": "block",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:                "ceph.vdo": "0"
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:            },
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:            "type": "block",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:            "vg_name": "ceph_vg0"
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:        }
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:    ],
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:    "1": [
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:        {
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:            "devices": [
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:                "/dev/loop4"
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:            ],
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:            "lv_name": "ceph_lv1",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:            "lv_size": "21470642176",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:            "name": "ceph_lv1",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:            "tags": {
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:                "ceph.cluster_name": "ceph",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:                "ceph.crush_device_class": "",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:                "ceph.encrypted": "0",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:                "ceph.osd_id": "1",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:                "ceph.type": "block",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:                "ceph.vdo": "0"
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:            },
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:            "type": "block",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:            "vg_name": "ceph_vg1"
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:        }
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:    ],
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:    "2": [
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:        {
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:            "devices": [
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:                "/dev/loop5"
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:            ],
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:            "lv_name": "ceph_lv2",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:            "lv_size": "21470642176",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:            "name": "ceph_lv2",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:            "tags": {
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:                "ceph.cluster_name": "ceph",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:                "ceph.crush_device_class": "",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:                "ceph.encrypted": "0",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:                "ceph.osd_id": "2",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:                "ceph.type": "block",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:                "ceph.vdo": "0"
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:            },
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:            "type": "block",
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:            "vg_name": "ceph_vg2"
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:        }
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]:    ]
Oct 14 04:56:50 np0005486808 musing_aryabhata[284398]: }
Oct 14 04:56:50 np0005486808 systemd[1]: libpod-fbbb9d72f228af6a45ce73431e066c05bfeb4f60d5af1aa2c511474521198687.scope: Deactivated successfully.
Oct 14 04:56:50 np0005486808 podman[284379]: 2025-10-14 08:56:50.74629617 +0000 UTC m=+0.988072765 container died fbbb9d72f228af6a45ce73431e066c05bfeb4f60d5af1aa2c511474521198687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_aryabhata, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 14 04:56:50 np0005486808 systemd[1]: var-lib-containers-storage-overlay-e6664f7d420f54b60931401a546e486e20be51f72fbe885ba74c022856763a47-merged.mount: Deactivated successfully.
Oct 14 04:56:50 np0005486808 podman[284379]: 2025-10-14 08:56:50.808081837 +0000 UTC m=+1.049858432 container remove fbbb9d72f228af6a45ce73431e066c05bfeb4f60d5af1aa2c511474521198687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_aryabhata, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 14 04:56:50 np0005486808 systemd[1]: libpod-conmon-fbbb9d72f228af6a45ce73431e066c05bfeb4f60d5af1aa2c511474521198687.scope: Deactivated successfully.
Oct 14 04:56:50 np0005486808 nova_compute[259627]: 2025-10-14 08:56:50.928 2 DEBUG nova.compute.manager [req-3d37f6d5-2e5b-4dd8-9ed0-b916f1fb7a44 req-b0bada4d-4a78-4e56-9b3d-5a3c3ed56d3c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Received event network-vif-plugged-3af4eb0a-c48b-4857-8399-453429b6af53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:56:50 np0005486808 nova_compute[259627]: 2025-10-14 08:56:50.930 2 DEBUG oslo_concurrency.lockutils [req-3d37f6d5-2e5b-4dd8-9ed0-b916f1fb7a44 req-b0bada4d-4a78-4e56-9b3d-5a3c3ed56d3c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "251ae181-b980-4338-a6b5-eee48450b510-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:56:50 np0005486808 nova_compute[259627]: 2025-10-14 08:56:50.930 2 DEBUG oslo_concurrency.lockutils [req-3d37f6d5-2e5b-4dd8-9ed0-b916f1fb7a44 req-b0bada4d-4a78-4e56-9b3d-5a3c3ed56d3c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "251ae181-b980-4338-a6b5-eee48450b510-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:56:50 np0005486808 nova_compute[259627]: 2025-10-14 08:56:50.931 2 DEBUG oslo_concurrency.lockutils [req-3d37f6d5-2e5b-4dd8-9ed0-b916f1fb7a44 req-b0bada4d-4a78-4e56-9b3d-5a3c3ed56d3c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "251ae181-b980-4338-a6b5-eee48450b510-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:56:50 np0005486808 nova_compute[259627]: 2025-10-14 08:56:50.932 2 DEBUG nova.compute.manager [req-3d37f6d5-2e5b-4dd8-9ed0-b916f1fb7a44 req-b0bada4d-4a78-4e56-9b3d-5a3c3ed56d3c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Processing event network-vif-plugged-3af4eb0a-c48b-4857-8399-453429b6af53 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 04:56:50 np0005486808 podman[284621]: 2025-10-14 08:56:50.935797083 +0000 UTC m=+0.053264459 container create b0119e84e5be956931d7561f4b7824a48ecc8e73e0cf20992fdf8fc7b9a33ae4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 14 04:56:50 np0005486808 systemd[1]: Started libpod-conmon-b0119e84e5be956931d7561f4b7824a48ecc8e73e0cf20992fdf8fc7b9a33ae4.scope.
Oct 14 04:56:51 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:56:51 np0005486808 podman[284621]: 2025-10-14 08:56:50.91082566 +0000 UTC m=+0.028293026 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 04:56:51 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3c99ed231e5c1fd99f5e9dee19ad83031abe3a98e852118ad1216c381a913f6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 04:56:51 np0005486808 podman[284621]: 2025-10-14 08:56:51.025842043 +0000 UTC m=+0.143309479 container init b0119e84e5be956931d7561f4b7824a48ecc8e73e0cf20992fdf8fc7b9a33ae4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 14 04:56:51 np0005486808 podman[284621]: 2025-10-14 08:56:51.031318638 +0000 UTC m=+0.148786024 container start b0119e84e5be956931d7561f4b7824a48ecc8e73e0cf20992fdf8fc7b9a33ae4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 14 04:56:51 np0005486808 neutron-haproxy-ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3[284678]: [NOTICE]   (284688) : New worker (284711) forked
Oct 14 04:56:51 np0005486808 neutron-haproxy-ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3[284678]: [NOTICE]   (284688) : Loading success.
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.102 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432211.1023536, 251ae181-b980-4338-a6b5-eee48450b510 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.103 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 251ae181-b980-4338-a6b5-eee48450b510] VM Started (Lifecycle Event)#033[00m
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.105 2 DEBUG nova.compute.manager [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.107 2 DEBUG nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.110 2 INFO nova.virt.libvirt.driver [-] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Instance spawned successfully.#033[00m
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.111 2 DEBUG nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.134 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.138 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.143 2 DEBUG nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.144 2 DEBUG nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.144 2 DEBUG nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.145 2 DEBUG nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.145 2 DEBUG nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.145 2 DEBUG nova.virt.libvirt.driver [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.172 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 251ae181-b980-4338-a6b5-eee48450b510] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.173 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432211.102583, 251ae181-b980-4338-a6b5-eee48450b510 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.173 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 251ae181-b980-4338-a6b5-eee48450b510] VM Paused (Lifecycle Event)#033[00m
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.197 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.205 2 DEBUG oslo_concurrency.lockutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Acquiring lock "753ea698-6cc6-4a73-a0d2-1366e5374a9c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.205 2 DEBUG oslo_concurrency.lockutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "753ea698-6cc6-4a73-a0d2-1366e5374a9c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.208 2 INFO nova.compute.manager [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Took 6.97 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.208 2 DEBUG nova.compute.manager [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.213 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432211.1066892, 251ae181-b980-4338-a6b5-eee48450b510 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.213 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 251ae181-b980-4338-a6b5-eee48450b510] VM Resumed (Lifecycle Event)#033[00m
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.238 2 DEBUG nova.compute.manager [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.241 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.243 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.288 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 251ae181-b980-4338-a6b5-eee48450b510] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.305 2 INFO nova.compute.manager [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Took 8.04 seconds to build instance.#033[00m
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.316 2 DEBUG oslo_concurrency.lockutils [None req-d727be4a-ca2b-46dd-9828-a80ed0613c16 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Lock "251ae181-b980-4338-a6b5-eee48450b510" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.118s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.343 2 DEBUG oslo_concurrency.lockutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.343 2 DEBUG oslo_concurrency.lockutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.350 2 DEBUG nova.virt.hardware [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.350 2 INFO nova.compute.claims [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 04:56:51 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Oct 14 04:56:51 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Oct 14 04:56:51 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.492 2 DEBUG oslo_concurrency.processutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:56:51 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 305 active+clean; 311 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 9.0 MiB/s rd, 16 MiB/s wr, 387 op/s
Oct 14 04:56:51 np0005486808 podman[284786]: 2025-10-14 08:56:51.520939121 +0000 UTC m=+0.047693372 container create f464b8850b7f8e5ddcb4c4a41b37fdf0fb7ef91c9cb995c2327247dc1771f20b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_turing, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 04:56:51 np0005486808 systemd[1]: Started libpod-conmon-f464b8850b7f8e5ddcb4c4a41b37fdf0fb7ef91c9cb995c2327247dc1771f20b.scope.
Oct 14 04:56:51 np0005486808 podman[284786]: 2025-10-14 08:56:51.496780248 +0000 UTC m=+0.023534529 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:56:51 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:56:51 np0005486808 podman[284786]: 2025-10-14 08:56:51.611240478 +0000 UTC m=+0.137994729 container init f464b8850b7f8e5ddcb4c4a41b37fdf0fb7ef91c9cb995c2327247dc1771f20b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_turing, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:56:51 np0005486808 podman[284786]: 2025-10-14 08:56:51.619575613 +0000 UTC m=+0.146329834 container start f464b8850b7f8e5ddcb4c4a41b37fdf0fb7ef91c9cb995c2327247dc1771f20b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_turing, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 14 04:56:51 np0005486808 podman[284786]: 2025-10-14 08:56:51.624524875 +0000 UTC m=+0.151279106 container attach f464b8850b7f8e5ddcb4c4a41b37fdf0fb7ef91c9cb995c2327247dc1771f20b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_turing, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:56:51 np0005486808 awesome_turing[284803]: 167 167
Oct 14 04:56:51 np0005486808 systemd[1]: libpod-f464b8850b7f8e5ddcb4c4a41b37fdf0fb7ef91c9cb995c2327247dc1771f20b.scope: Deactivated successfully.
Oct 14 04:56:51 np0005486808 podman[284786]: 2025-10-14 08:56:51.630243185 +0000 UTC m=+0.156997426 container died f464b8850b7f8e5ddcb4c4a41b37fdf0fb7ef91c9cb995c2327247dc1771f20b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Oct 14 04:56:51 np0005486808 systemd[1]: var-lib-containers-storage-overlay-03f50a5a28da95121eb7137613496f9e275f5d38756c7c2aa58c5464276c0526-merged.mount: Deactivated successfully.
Oct 14 04:56:51 np0005486808 podman[284786]: 2025-10-14 08:56:51.664446915 +0000 UTC m=+0.191201136 container remove f464b8850b7f8e5ddcb4c4a41b37fdf0fb7ef91c9cb995c2327247dc1771f20b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver [None req-0eba2cbe-b5bb-421b-b651-e6bd0b8cdfd2 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image c5692065-49f8-45a6-8eac-e30026b3f690 could not be found.
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     image = self._client.call(
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID c5692065-49f8-45a6-8eac-e30026b3f690
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver 
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver 
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     image = self._client.call(
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image c5692065-49f8-45a6-8eac-e30026b3f690 could not be found.
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.665 2 ERROR nova.virt.libvirt.driver #033[00m
Oct 14 04:56:51 np0005486808 systemd[1]: libpod-conmon-f464b8850b7f8e5ddcb4c4a41b37fdf0fb7ef91c9cb995c2327247dc1771f20b.scope: Deactivated successfully.
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.718 2 DEBUG nova.storage.rbd_utils [None req-0eba2cbe-b5bb-421b-b651-e6bd0b8cdfd2 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] removing snapshot(snap) on rbd image(c5692065-49f8-45a6-8eac-e30026b3f690) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct 14 04:56:51 np0005486808 nova_compute[259627]: 2025-10-14 08:56:51.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:51 np0005486808 podman[284863]: 2025-10-14 08:56:51.866530677 +0000 UTC m=+0.045453817 container create 8138a73624c477879462b9abc8a84f111d69f16b26b503f3b86352d365891cde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:56:51 np0005486808 systemd[1]: Started libpod-conmon-8138a73624c477879462b9abc8a84f111d69f16b26b503f3b86352d365891cde.scope.
Oct 14 04:56:51 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:56:51 np0005486808 podman[284863]: 2025-10-14 08:56:51.851622541 +0000 UTC m=+0.030545701 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:56:51 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e820943df39ec7e0af7fa8d5ae88ce8e422472d15ef52a7e5b95f628b36ed4ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:56:51 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e820943df39ec7e0af7fa8d5ae88ce8e422472d15ef52a7e5b95f628b36ed4ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:56:51 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e820943df39ec7e0af7fa8d5ae88ce8e422472d15ef52a7e5b95f628b36ed4ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:56:51 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e820943df39ec7e0af7fa8d5ae88ce8e422472d15ef52a7e5b95f628b36ed4ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:56:51 np0005486808 podman[284863]: 2025-10-14 08:56:51.970616423 +0000 UTC m=+0.149539583 container init 8138a73624c477879462b9abc8a84f111d69f16b26b503f3b86352d365891cde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:56:51 np0005486808 podman[284863]: 2025-10-14 08:56:51.978771693 +0000 UTC m=+0.157694833 container start 8138a73624c477879462b9abc8a84f111d69f16b26b503f3b86352d365891cde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hamilton, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:56:51 np0005486808 podman[284863]: 2025-10-14 08:56:51.982440684 +0000 UTC m=+0.161363854 container attach 8138a73624c477879462b9abc8a84f111d69f16b26b503f3b86352d365891cde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:56:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:56:52 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3437809994' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:56:52 np0005486808 nova_compute[259627]: 2025-10-14 08:56:52.019 2 DEBUG oslo_concurrency.processutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:56:52 np0005486808 nova_compute[259627]: 2025-10-14 08:56:52.024 2 DEBUG nova.compute.provider_tree [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 04:56:52 np0005486808 nova_compute[259627]: 2025-10-14 08:56:52.047 2 DEBUG nova.scheduler.client.report [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 04:56:52 np0005486808 nova_compute[259627]: 2025-10-14 08:56:52.072 2 DEBUG oslo_concurrency.lockutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:56:52 np0005486808 nova_compute[259627]: 2025-10-14 08:56:52.072 2 DEBUG nova.compute.manager [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 04:56:52 np0005486808 nova_compute[259627]: 2025-10-14 08:56:52.126 2 DEBUG nova.compute.manager [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 04:56:52 np0005486808 nova_compute[259627]: 2025-10-14 08:56:52.126 2 DEBUG nova.network.neutron [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 04:56:52 np0005486808 nova_compute[259627]: 2025-10-14 08:56:52.150 2 INFO nova.virt.libvirt.driver [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 04:56:52 np0005486808 nova_compute[259627]: 2025-10-14 08:56:52.171 2 DEBUG nova.compute.manager [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 04:56:52 np0005486808 nova_compute[259627]: 2025-10-14 08:56:52.269 2 DEBUG nova.compute.manager [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 04:56:52 np0005486808 nova_compute[259627]: 2025-10-14 08:56:52.271 2 DEBUG nova.virt.libvirt.driver [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 04:56:52 np0005486808 nova_compute[259627]: 2025-10-14 08:56:52.271 2 INFO nova.virt.libvirt.driver [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Creating image(s)
Oct 14 04:56:52 np0005486808 nova_compute[259627]: 2025-10-14 08:56:52.298 2 DEBUG nova.storage.rbd_utils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] rbd image 753ea698-6cc6-4a73-a0d2-1366e5374a9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:56:52 np0005486808 nova_compute[259627]: 2025-10-14 08:56:52.325 2 DEBUG nova.storage.rbd_utils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] rbd image 753ea698-6cc6-4a73-a0d2-1366e5374a9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:56:52 np0005486808 nova_compute[259627]: 2025-10-14 08:56:52.347 2 DEBUG nova.storage.rbd_utils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] rbd image 753ea698-6cc6-4a73-a0d2-1366e5374a9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:56:52 np0005486808 nova_compute[259627]: 2025-10-14 08:56:52.350 2 DEBUG oslo_concurrency.processutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:56:52 np0005486808 nova_compute[259627]: 2025-10-14 08:56:52.377 2 DEBUG nova.network.neutron [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 14 04:56:52 np0005486808 nova_compute[259627]: 2025-10-14 08:56:52.377 2 DEBUG nova.compute.manager [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 04:56:52 np0005486808 nova_compute[259627]: 2025-10-14 08:56:52.432 2 DEBUG oslo_concurrency.processutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:56:52 np0005486808 nova_compute[259627]: 2025-10-14 08:56:52.433 2 DEBUG oslo_concurrency.lockutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:56:52 np0005486808 nova_compute[259627]: 2025-10-14 08:56:52.434 2 DEBUG oslo_concurrency.lockutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:56:52 np0005486808 nova_compute[259627]: 2025-10-14 08:56:52.434 2 DEBUG oslo_concurrency.lockutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:56:52 np0005486808 nova_compute[259627]: 2025-10-14 08:56:52.452 2 DEBUG nova.storage.rbd_utils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] rbd image 753ea698-6cc6-4a73-a0d2-1366e5374a9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:56:52 np0005486808 nova_compute[259627]: 2025-10-14 08:56:52.455 2 DEBUG oslo_concurrency.processutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 753ea698-6cc6-4a73-a0d2-1366e5374a9c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:56:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:56:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Oct 14 04:56:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Oct 14 04:56:52 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Oct 14 04:56:52 np0005486808 nova_compute[259627]: 2025-10-14 08:56:52.944 2 DEBUG oslo_concurrency.processutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 753ea698-6cc6-4a73-a0d2-1366e5374a9c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:56:53 np0005486808 youthful_hamilton[284880]: {
Oct 14 04:56:53 np0005486808 youthful_hamilton[284880]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 04:56:53 np0005486808 youthful_hamilton[284880]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:56:53 np0005486808 youthful_hamilton[284880]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 04:56:53 np0005486808 youthful_hamilton[284880]:        "osd_id": 2,
Oct 14 04:56:53 np0005486808 youthful_hamilton[284880]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:56:53 np0005486808 youthful_hamilton[284880]:        "type": "bluestore"
Oct 14 04:56:53 np0005486808 youthful_hamilton[284880]:    },
Oct 14 04:56:53 np0005486808 youthful_hamilton[284880]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 04:56:53 np0005486808 youthful_hamilton[284880]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:56:53 np0005486808 youthful_hamilton[284880]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 04:56:53 np0005486808 youthful_hamilton[284880]:        "osd_id": 1,
Oct 14 04:56:53 np0005486808 youthful_hamilton[284880]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:56:53 np0005486808 youthful_hamilton[284880]:        "type": "bluestore"
Oct 14 04:56:53 np0005486808 youthful_hamilton[284880]:    },
Oct 14 04:56:53 np0005486808 youthful_hamilton[284880]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 04:56:53 np0005486808 youthful_hamilton[284880]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:56:53 np0005486808 youthful_hamilton[284880]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 04:56:53 np0005486808 youthful_hamilton[284880]:        "osd_id": 0,
Oct 14 04:56:53 np0005486808 youthful_hamilton[284880]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:56:53 np0005486808 youthful_hamilton[284880]:        "type": "bluestore"
Oct 14 04:56:53 np0005486808 youthful_hamilton[284880]:    }
Oct 14 04:56:53 np0005486808 youthful_hamilton[284880]: }
Oct 14 04:56:53 np0005486808 nova_compute[259627]: 2025-10-14 08:56:53.046 2 WARNING nova.compute.manager [None req-0eba2cbe-b5bb-421b-b651-e6bd0b8cdfd2 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Image not found during snapshot: nova.exception.ImageNotFound: Image c5692065-49f8-45a6-8eac-e30026b3f690 could not be found.
Oct 14 04:56:53 np0005486808 systemd[1]: libpod-8138a73624c477879462b9abc8a84f111d69f16b26b503f3b86352d365891cde.scope: Deactivated successfully.
Oct 14 04:56:53 np0005486808 podman[284863]: 2025-10-14 08:56:53.05168435 +0000 UTC m=+1.230607530 container died 8138a73624c477879462b9abc8a84f111d69f16b26b503f3b86352d365891cde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hamilton, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:56:53 np0005486808 systemd[1]: libpod-8138a73624c477879462b9abc8a84f111d69f16b26b503f3b86352d365891cde.scope: Consumed 1.010s CPU time.
Oct 14 04:56:53 np0005486808 nova_compute[259627]: 2025-10-14 08:56:53.056 2 DEBUG nova.storage.rbd_utils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] resizing rbd image 753ea698-6cc6-4a73-a0d2-1366e5374a9c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 04:56:53 np0005486808 systemd[1]: var-lib-containers-storage-overlay-e820943df39ec7e0af7fa8d5ae88ce8e422472d15ef52a7e5b95f628b36ed4ad-merged.mount: Deactivated successfully.
Oct 14 04:56:53 np0005486808 podman[284863]: 2025-10-14 08:56:53.115110308 +0000 UTC m=+1.294033468 container remove 8138a73624c477879462b9abc8a84f111d69f16b26b503f3b86352d365891cde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hamilton, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:56:53 np0005486808 systemd[1]: libpod-conmon-8138a73624c477879462b9abc8a84f111d69f16b26b503f3b86352d365891cde.scope: Deactivated successfully.
Oct 14 04:56:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:56:53 np0005486808 nova_compute[259627]: 2025-10-14 08:56:53.182 2 DEBUG nova.objects.instance [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lazy-loading 'migration_context' on Instance uuid 753ea698-6cc6-4a73-a0d2-1366e5374a9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 04:56:53 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:56:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:56:53 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:56:53 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev d91aea29-ef9c-425b-86b8-d9354c8200b9 does not exist
Oct 14 04:56:53 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 5359da39-87da-4cc7-8c9e-66b84b63cc7a does not exist
Oct 14 04:56:53 np0005486808 nova_compute[259627]: 2025-10-14 08:56:53.205 2 DEBUG nova.virt.libvirt.driver [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 04:56:53 np0005486808 nova_compute[259627]: 2025-10-14 08:56:53.205 2 DEBUG nova.virt.libvirt.driver [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Ensure instance console log exists: /var/lib/nova/instances/753ea698-6cc6-4a73-a0d2-1366e5374a9c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 04:56:53 np0005486808 nova_compute[259627]: 2025-10-14 08:56:53.206 2 DEBUG oslo_concurrency.lockutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:56:53 np0005486808 nova_compute[259627]: 2025-10-14 08:56:53.206 2 DEBUG oslo_concurrency.lockutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:56:53 np0005486808 nova_compute[259627]: 2025-10-14 08:56:53.206 2 DEBUG oslo_concurrency.lockutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:56:53 np0005486808 nova_compute[259627]: 2025-10-14 08:56:53.207 2 DEBUG nova.virt.libvirt.driver [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 04:56:53 np0005486808 nova_compute[259627]: 2025-10-14 08:56:53.215 2 WARNING nova.virt.libvirt.driver [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 04:56:53 np0005486808 nova_compute[259627]: 2025-10-14 08:56:53.220 2 DEBUG nova.compute.manager [req-c9030e4d-274c-484f-9c37-3f5edddbfcbb req-cd66ecd2-eb8c-402c-90f9-07644ae72e31 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Received event network-vif-plugged-3af4eb0a-c48b-4857-8399-453429b6af53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 04:56:53 np0005486808 nova_compute[259627]: 2025-10-14 08:56:53.220 2 DEBUG oslo_concurrency.lockutils [req-c9030e4d-274c-484f-9c37-3f5edddbfcbb req-cd66ecd2-eb8c-402c-90f9-07644ae72e31 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "251ae181-b980-4338-a6b5-eee48450b510-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:56:53 np0005486808 nova_compute[259627]: 2025-10-14 08:56:53.221 2 DEBUG oslo_concurrency.lockutils [req-c9030e4d-274c-484f-9c37-3f5edddbfcbb req-cd66ecd2-eb8c-402c-90f9-07644ae72e31 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "251ae181-b980-4338-a6b5-eee48450b510-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:56:53 np0005486808 nova_compute[259627]: 2025-10-14 08:56:53.221 2 DEBUG oslo_concurrency.lockutils [req-c9030e4d-274c-484f-9c37-3f5edddbfcbb req-cd66ecd2-eb8c-402c-90f9-07644ae72e31 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "251ae181-b980-4338-a6b5-eee48450b510-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:56:53 np0005486808 nova_compute[259627]: 2025-10-14 08:56:53.221 2 DEBUG nova.compute.manager [req-c9030e4d-274c-484f-9c37-3f5edddbfcbb req-cd66ecd2-eb8c-402c-90f9-07644ae72e31 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] No waiting events found dispatching network-vif-plugged-3af4eb0a-c48b-4857-8399-453429b6af53 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 04:56:53 np0005486808 nova_compute[259627]: 2025-10-14 08:56:53.221 2 WARNING nova.compute.manager [req-c9030e4d-274c-484f-9c37-3f5edddbfcbb req-cd66ecd2-eb8c-402c-90f9-07644ae72e31 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Received unexpected event network-vif-plugged-3af4eb0a-c48b-4857-8399-453429b6af53 for instance with vm_state active and task_state None.
Oct 14 04:56:53 np0005486808 nova_compute[259627]: 2025-10-14 08:56:53.227 2 DEBUG nova.virt.libvirt.host [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 04:56:53 np0005486808 nova_compute[259627]: 2025-10-14 08:56:53.227 2 DEBUG nova.virt.libvirt.host [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 04:56:53 np0005486808 nova_compute[259627]: 2025-10-14 08:56:53.231 2 DEBUG nova.virt.libvirt.host [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 04:56:53 np0005486808 nova_compute[259627]: 2025-10-14 08:56:53.231 2 DEBUG nova.virt.libvirt.host [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 04:56:53 np0005486808 nova_compute[259627]: 2025-10-14 08:56:53.232 2 DEBUG nova.virt.libvirt.driver [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 04:56:53 np0005486808 nova_compute[259627]: 2025-10-14 08:56:53.232 2 DEBUG nova.virt.hardware [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 04:56:53 np0005486808 nova_compute[259627]: 2025-10-14 08:56:53.233 2 DEBUG nova.virt.hardware [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 04:56:53 np0005486808 nova_compute[259627]: 2025-10-14 08:56:53.233 2 DEBUG nova.virt.hardware [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 04:56:53 np0005486808 nova_compute[259627]: 2025-10-14 08:56:53.233 2 DEBUG nova.virt.hardware [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 04:56:53 np0005486808 nova_compute[259627]: 2025-10-14 08:56:53.234 2 DEBUG nova.virt.hardware [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 04:56:53 np0005486808 nova_compute[259627]: 2025-10-14 08:56:53.234 2 DEBUG nova.virt.hardware [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 04:56:53 np0005486808 nova_compute[259627]: 2025-10-14 08:56:53.234 2 DEBUG nova.virt.hardware [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 04:56:53 np0005486808 nova_compute[259627]: 2025-10-14 08:56:53.235 2 DEBUG nova.virt.hardware [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 04:56:53 np0005486808 nova_compute[259627]: 2025-10-14 08:56:53.236 2 DEBUG nova.virt.hardware [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 04:56:53 np0005486808 nova_compute[259627]: 2025-10-14 08:56:53.236 2 DEBUG nova.virt.hardware [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 04:56:53 np0005486808 nova_compute[259627]: 2025-10-14 08:56:53.236 2 DEBUG nova.virt.hardware [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 04:56:53 np0005486808 nova_compute[259627]: 2025-10-14 08:56:53.239 2 DEBUG oslo_concurrency.processutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:56:53 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:56:53 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:56:53 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1148: 305 pgs: 305 active+clean; 311 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 13 MiB/s rd, 24 MiB/s wr, 574 op/s
Oct 14 04:56:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:56:53 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2909585213' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:56:53 np0005486808 nova_compute[259627]: 2025-10-14 08:56:53.694 2 DEBUG oslo_concurrency.processutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:56:53 np0005486808 nova_compute[259627]: 2025-10-14 08:56:53.728 2 DEBUG nova.storage.rbd_utils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] rbd image 753ea698-6cc6-4a73-a0d2-1366e5374a9c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:56:53 np0005486808 nova_compute[259627]: 2025-10-14 08:56:53.732 2 DEBUG oslo_concurrency.processutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:56:54 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:56:54 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2617580688' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:56:54 np0005486808 nova_compute[259627]: 2025-10-14 08:56:54.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:54 np0005486808 nova_compute[259627]: 2025-10-14 08:56:54.191 2 DEBUG oslo_concurrency.processutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:56:54 np0005486808 nova_compute[259627]: 2025-10-14 08:56:54.193 2 DEBUG nova.objects.instance [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lazy-loading 'pci_devices' on Instance uuid 753ea698-6cc6-4a73-a0d2-1366e5374a9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:56:54 np0005486808 nova_compute[259627]: 2025-10-14 08:56:54.208 2 DEBUG nova.virt.libvirt.driver [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] End _get_guest_xml xml=<domain type="kvm">
Oct 14 04:56:54 np0005486808 nova_compute[259627]:  <uuid>753ea698-6cc6-4a73-a0d2-1366e5374a9c</uuid>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:  <name>instance-00000010</name>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 04:56:54 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:      <nova:name>tempest-LiveMigrationNegativeTest-server-101546050</nova:name>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 08:56:53</nova:creationTime>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 04:56:54 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:        <nova:user uuid="6ecc59efebb941f4b0aa79b58a7e610e">tempest-LiveMigrationNegativeTest-588604906-project-member</nova:user>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:        <nova:project uuid="a618b00ff8c34f40bd31e4f56c019b1b">tempest-LiveMigrationNegativeTest-588604906</nova:project>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:      <nova:ports/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <system>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:      <entry name="serial">753ea698-6cc6-4a73-a0d2-1366e5374a9c</entry>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:      <entry name="uuid">753ea698-6cc6-4a73-a0d2-1366e5374a9c</entry>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    </system>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:  <os>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:  </os>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:  <features>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:  </features>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:  </clock>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:  <devices>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 04:56:54 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/753ea698-6cc6-4a73-a0d2-1366e5374a9c_disk">
Oct 14 04:56:54 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:56:54 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 04:56:54 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/753ea698-6cc6-4a73-a0d2-1366e5374a9c_disk.config">
Oct 14 04:56:54 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:56:54 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 04:56:54 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/753ea698-6cc6-4a73-a0d2-1366e5374a9c/console.log" append="off"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    </serial>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <video>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    </video>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 04:56:54 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    </rng>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 04:56:54 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 04:56:54 np0005486808 nova_compute[259627]:  </devices>
Oct 14 04:56:54 np0005486808 nova_compute[259627]: </domain>
Oct 14 04:56:54 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 04:56:54 np0005486808 nova_compute[259627]: 2025-10-14 08:56:54.271 2 DEBUG nova.virt.libvirt.driver [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:56:54 np0005486808 nova_compute[259627]: 2025-10-14 08:56:54.272 2 DEBUG nova.virt.libvirt.driver [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:56:54 np0005486808 nova_compute[259627]: 2025-10-14 08:56:54.273 2 INFO nova.virt.libvirt.driver [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Using config drive#033[00m
Oct 14 04:56:54 np0005486808 nova_compute[259627]: 2025-10-14 08:56:54.292 2 DEBUG nova.storage.rbd_utils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] rbd image 753ea698-6cc6-4a73-a0d2-1366e5374a9c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:56:54 np0005486808 nova_compute[259627]: 2025-10-14 08:56:54.453 2 INFO nova.virt.libvirt.driver [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Creating config drive at /var/lib/nova/instances/753ea698-6cc6-4a73-a0d2-1366e5374a9c/disk.config#033[00m
Oct 14 04:56:54 np0005486808 nova_compute[259627]: 2025-10-14 08:56:54.459 2 DEBUG oslo_concurrency.processutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/753ea698-6cc6-4a73-a0d2-1366e5374a9c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8mgz6ydt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:56:54 np0005486808 nova_compute[259627]: 2025-10-14 08:56:54.586 2 DEBUG oslo_concurrency.processutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/753ea698-6cc6-4a73-a0d2-1366e5374a9c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8mgz6ydt" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:56:54 np0005486808 nova_compute[259627]: 2025-10-14 08:56:54.610 2 DEBUG nova.storage.rbd_utils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] rbd image 753ea698-6cc6-4a73-a0d2-1366e5374a9c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:56:54 np0005486808 nova_compute[259627]: 2025-10-14 08:56:54.613 2 DEBUG oslo_concurrency.processutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/753ea698-6cc6-4a73-a0d2-1366e5374a9c/disk.config 753ea698-6cc6-4a73-a0d2-1366e5374a9c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:56:54 np0005486808 nova_compute[259627]: 2025-10-14 08:56:54.743 2 DEBUG oslo_concurrency.processutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/753ea698-6cc6-4a73-a0d2-1366e5374a9c/disk.config 753ea698-6cc6-4a73-a0d2-1366e5374a9c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:56:54 np0005486808 nova_compute[259627]: 2025-10-14 08:56:54.744 2 INFO nova.virt.libvirt.driver [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Deleting local config drive /var/lib/nova/instances/753ea698-6cc6-4a73-a0d2-1366e5374a9c/disk.config because it was imported into RBD.#033[00m
Oct 14 04:56:54 np0005486808 systemd-machined[214636]: New machine qemu-16-instance-00000010.
Oct 14 04:56:54 np0005486808 systemd[1]: Started Virtual Machine qemu-16-instance-00000010.
Oct 14 04:56:55 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1149: 305 pgs: 305 active+clean; 293 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 13 MiB/s rd, 20 MiB/s wr, 660 op/s
Oct 14 04:56:55 np0005486808 nova_compute[259627]: 2025-10-14 08:56:55.579 2 DEBUG oslo_concurrency.lockutils [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "5de76ef0-5c03-4b43-a691-c858cecd9e80" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:56:55 np0005486808 nova_compute[259627]: 2025-10-14 08:56:55.580 2 DEBUG oslo_concurrency.lockutils [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "5de76ef0-5c03-4b43-a691-c858cecd9e80" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:56:55 np0005486808 nova_compute[259627]: 2025-10-14 08:56:55.581 2 DEBUG oslo_concurrency.lockutils [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "5de76ef0-5c03-4b43-a691-c858cecd9e80-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:56:55 np0005486808 nova_compute[259627]: 2025-10-14 08:56:55.581 2 DEBUG oslo_concurrency.lockutils [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "5de76ef0-5c03-4b43-a691-c858cecd9e80-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:56:55 np0005486808 nova_compute[259627]: 2025-10-14 08:56:55.581 2 DEBUG oslo_concurrency.lockutils [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "5de76ef0-5c03-4b43-a691-c858cecd9e80-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:56:55 np0005486808 nova_compute[259627]: 2025-10-14 08:56:55.582 2 INFO nova.compute.manager [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Terminating instance#033[00m
Oct 14 04:56:55 np0005486808 nova_compute[259627]: 2025-10-14 08:56:55.583 2 DEBUG nova.compute.manager [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 04:56:55 np0005486808 kernel: tape0cae4c8-f6 (unregistering): left promiscuous mode
Oct 14 04:56:55 np0005486808 NetworkManager[44885]: <info>  [1760432215.6262] device (tape0cae4c8-f6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 04:56:55 np0005486808 nova_compute[259627]: 2025-10-14 08:56:55.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:55 np0005486808 ovn_controller[152662]: 2025-10-14T08:56:55Z|00092|binding|INFO|Releasing lport e0cae4c8-f654-471d-a9c1-c77a306f1edf from this chassis (sb_readonly=0)
Oct 14 04:56:55 np0005486808 ovn_controller[152662]: 2025-10-14T08:56:55Z|00093|binding|INFO|Setting lport e0cae4c8-f654-471d-a9c1-c77a306f1edf down in Southbound
Oct 14 04:56:55 np0005486808 ovn_controller[152662]: 2025-10-14T08:56:55Z|00094|binding|INFO|Removing iface tape0cae4c8-f6 ovn-installed in OVS
Oct 14 04:56:55 np0005486808 nova_compute[259627]: 2025-10-14 08:56:55.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:55.647 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:3d:9e 10.100.0.5'], port_security=['fa:16:3e:bc:3d:9e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '5de76ef0-5c03-4b43-a691-c858cecd9e80', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58d74886-d603-4fb5-b8ff-9c184284bdce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f24bbeb2f91141e294590ca2afc5ed42', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd818ab3d-f5ea-4d77-bc47-7efe2295e146', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1adf6e68-9c1a-4ee7-a829-03bbd6a5ae48, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=e0cae4c8-f654-471d-a9c1-c77a306f1edf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:56:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:55.648 162547 INFO neutron.agent.ovn.metadata.agent [-] Port e0cae4c8-f654-471d-a9c1-c77a306f1edf in datapath 58d74886-d603-4fb5-b8ff-9c184284bdce unbound from our chassis#033[00m
Oct 14 04:56:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:55.650 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 58d74886-d603-4fb5-b8ff-9c184284bdce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 04:56:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:55.653 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4ca8af4e-c3a5-435c-a8c0-3753bffb18e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:55.655 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce namespace which is not needed anymore#033[00m
Oct 14 04:56:55 np0005486808 nova_compute[259627]: 2025-10-14 08:56:55.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:55 np0005486808 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Oct 14 04:56:55 np0005486808 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Consumed 12.674s CPU time.
Oct 14 04:56:55 np0005486808 systemd-machined[214636]: Machine qemu-14-instance-0000000e terminated.
Oct 14 04:56:55 np0005486808 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[283074]: [NOTICE]   (283078) : haproxy version is 2.8.14-c23fe91
Oct 14 04:56:55 np0005486808 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[283074]: [NOTICE]   (283078) : path to executable is /usr/sbin/haproxy
Oct 14 04:56:55 np0005486808 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[283074]: [WARNING]  (283078) : Exiting Master process...
Oct 14 04:56:55 np0005486808 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[283074]: [WARNING]  (283078) : Exiting Master process...
Oct 14 04:56:55 np0005486808 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[283074]: [ALERT]    (283078) : Current worker (283080) exited with code 143 (Terminated)
Oct 14 04:56:55 np0005486808 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[283074]: [WARNING]  (283078) : All workers exited. Exiting... (0)
Oct 14 04:56:55 np0005486808 systemd[1]: libpod-4fab5eac4d71ff6fc964158def5b038e31ad1f674c7a022f63f54a28f2c6cc4f.scope: Deactivated successfully.
Oct 14 04:56:55 np0005486808 nova_compute[259627]: 2025-10-14 08:56:55.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:55 np0005486808 podman[285361]: 2025-10-14 08:56:55.816106593 +0000 UTC m=+0.070280887 container died 4fab5eac4d71ff6fc964158def5b038e31ad1f674c7a022f63f54a28f2c6cc4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 14 04:56:55 np0005486808 nova_compute[259627]: 2025-10-14 08:56:55.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:55 np0005486808 nova_compute[259627]: 2025-10-14 08:56:55.833 2 INFO nova.virt.libvirt.driver [-] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Instance destroyed successfully.#033[00m
Oct 14 04:56:55 np0005486808 nova_compute[259627]: 2025-10-14 08:56:55.834 2 DEBUG nova.objects.instance [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lazy-loading 'resources' on Instance uuid 5de76ef0-5c03-4b43-a691-c858cecd9e80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:56:55 np0005486808 nova_compute[259627]: 2025-10-14 08:56:55.849 2 DEBUG nova.virt.libvirt.vif [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:56:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-526971628',display_name='tempest-ImagesOneServerNegativeTestJSON-server-526971628',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-526971628',id=14,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:56:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f24bbeb2f91141e294590ca2afc5ed42',ramdisk_id='',reservation_id='r-6k171gzh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-531836018',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-531836018-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:56:53Z,user_data=None,user_id='aafd6ad40c944c3eb14e7fbf454040c3',uuid=5de76ef0-5c03-4b43-a691-c858cecd9e80,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e0cae4c8-f654-471d-a9c1-c77a306f1edf", "address": "fa:16:3e:bc:3d:9e", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0cae4c8-f6", "ovs_interfaceid": "e0cae4c8-f654-471d-a9c1-c77a306f1edf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 04:56:55 np0005486808 nova_compute[259627]: 2025-10-14 08:56:55.850 2 DEBUG nova.network.os_vif_util [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Converting VIF {"id": "e0cae4c8-f654-471d-a9c1-c77a306f1edf", "address": "fa:16:3e:bc:3d:9e", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0cae4c8-f6", "ovs_interfaceid": "e0cae4c8-f654-471d-a9c1-c77a306f1edf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:56:55 np0005486808 nova_compute[259627]: 2025-10-14 08:56:55.852 2 DEBUG nova.network.os_vif_util [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:3d:9e,bridge_name='br-int',has_traffic_filtering=True,id=e0cae4c8-f654-471d-a9c1-c77a306f1edf,network=Network(58d74886-d603-4fb5-b8ff-9c184284bdce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0cae4c8-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:56:55 np0005486808 nova_compute[259627]: 2025-10-14 08:56:55.852 2 DEBUG os_vif [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:3d:9e,bridge_name='br-int',has_traffic_filtering=True,id=e0cae4c8-f654-471d-a9c1-c77a306f1edf,network=Network(58d74886-d603-4fb5-b8ff-9c184284bdce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0cae4c8-f6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 04:56:55 np0005486808 nova_compute[259627]: 2025-10-14 08:56:55.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:55 np0005486808 nova_compute[259627]: 2025-10-14 08:56:55.854 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape0cae4c8-f6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:56:55 np0005486808 nova_compute[259627]: 2025-10-14 08:56:55.864 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432215.8647397, 753ea698-6cc6-4a73-a0d2-1366e5374a9c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:56:55 np0005486808 nova_compute[259627]: 2025-10-14 08:56:55.865 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] VM Resumed (Lifecycle Event)#033[00m
Oct 14 04:56:55 np0005486808 nova_compute[259627]: 2025-10-14 08:56:55.867 2 DEBUG nova.compute.manager [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 04:56:55 np0005486808 nova_compute[259627]: 2025-10-14 08:56:55.867 2 DEBUG nova.virt.libvirt.driver [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 04:56:55 np0005486808 nova_compute[259627]: 2025-10-14 08:56:55.886 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:56:55 np0005486808 nova_compute[259627]: 2025-10-14 08:56:55.889 2 INFO nova.virt.libvirt.driver [-] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Instance spawned successfully.#033[00m
Oct 14 04:56:55 np0005486808 nova_compute[259627]: 2025-10-14 08:56:55.890 2 DEBUG nova.virt.libvirt.driver [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 04:56:55 np0005486808 nova_compute[259627]: 2025-10-14 08:56:55.892 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:56:55 np0005486808 nova_compute[259627]: 2025-10-14 08:56:55.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:55 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4fab5eac4d71ff6fc964158def5b038e31ad1f674c7a022f63f54a28f2c6cc4f-userdata-shm.mount: Deactivated successfully.
Oct 14 04:56:55 np0005486808 nova_compute[259627]: 2025-10-14 08:56:55.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:55 np0005486808 systemd[1]: var-lib-containers-storage-overlay-ea9fec2e8f9314ad08f9029050299940c2c033e7affbb383b843a03d80cd24df-merged.mount: Deactivated successfully.
Oct 14 04:56:55 np0005486808 nova_compute[259627]: 2025-10-14 08:56:55.912 2 INFO os_vif [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:3d:9e,bridge_name='br-int',has_traffic_filtering=True,id=e0cae4c8-f654-471d-a9c1-c77a306f1edf,network=Network(58d74886-d603-4fb5-b8ff-9c184284bdce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0cae4c8-f6')#033[00m
Oct 14 04:56:55 np0005486808 podman[285361]: 2025-10-14 08:56:55.921095501 +0000 UTC m=+0.175269755 container cleanup 4fab5eac4d71ff6fc964158def5b038e31ad1f674c7a022f63f54a28f2c6cc4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009)
Oct 14 04:56:55 np0005486808 systemd[1]: libpod-conmon-4fab5eac4d71ff6fc964158def5b038e31ad1f674c7a022f63f54a28f2c6cc4f.scope: Deactivated successfully.
Oct 14 04:56:55 np0005486808 nova_compute[259627]: 2025-10-14 08:56:55.933 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:56:55 np0005486808 nova_compute[259627]: 2025-10-14 08:56:55.933 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432215.8653927, 753ea698-6cc6-4a73-a0d2-1366e5374a9c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:56:55 np0005486808 nova_compute[259627]: 2025-10-14 08:56:55.933 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] VM Started (Lifecycle Event)#033[00m
Oct 14 04:56:55 np0005486808 nova_compute[259627]: 2025-10-14 08:56:55.945 2 DEBUG nova.virt.libvirt.driver [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:56:55 np0005486808 nova_compute[259627]: 2025-10-14 08:56:55.945 2 DEBUG nova.virt.libvirt.driver [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:56:55 np0005486808 nova_compute[259627]: 2025-10-14 08:56:55.946 2 DEBUG nova.virt.libvirt.driver [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:56:55 np0005486808 nova_compute[259627]: 2025-10-14 08:56:55.946 2 DEBUG nova.virt.libvirt.driver [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:56:55 np0005486808 nova_compute[259627]: 2025-10-14 08:56:55.946 2 DEBUG nova.virt.libvirt.driver [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:56:55 np0005486808 nova_compute[259627]: 2025-10-14 08:56:55.947 2 DEBUG nova.virt.libvirt.driver [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:56:55 np0005486808 nova_compute[259627]: 2025-10-14 08:56:55.957 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:56:55 np0005486808 nova_compute[259627]: 2025-10-14 08:56:55.960 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:56:55 np0005486808 nova_compute[259627]: 2025-10-14 08:56:55.986 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:56:56 np0005486808 podman[285412]: 2025-10-14 08:56:56.008074427 +0000 UTC m=+0.050747517 container remove 4fab5eac4d71ff6fc964158def5b038e31ad1f674c7a022f63f54a28f2c6cc4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 04:56:56 np0005486808 nova_compute[259627]: 2025-10-14 08:56:56.011 2 INFO nova.compute.manager [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Took 3.74 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 04:56:56 np0005486808 nova_compute[259627]: 2025-10-14 08:56:56.011 2 DEBUG nova.compute.manager [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:56:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:56.014 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[56a397ae-fd45-49e4-af2a-f9d4eb66ee47]: (4, ('Tue Oct 14 08:56:55 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce (4fab5eac4d71ff6fc964158def5b038e31ad1f674c7a022f63f54a28f2c6cc4f)\n4fab5eac4d71ff6fc964158def5b038e31ad1f674c7a022f63f54a28f2c6cc4f\nTue Oct 14 08:56:55 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce (4fab5eac4d71ff6fc964158def5b038e31ad1f674c7a022f63f54a28f2c6cc4f)\n4fab5eac4d71ff6fc964158def5b038e31ad1f674c7a022f63f54a28f2c6cc4f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:56.016 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0ece2c3a-c73d-4337-bc82-6eb60c7049d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:56.016 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58d74886-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:56:56 np0005486808 nova_compute[259627]: 2025-10-14 08:56:56.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:56 np0005486808 kernel: tap58d74886-d0: left promiscuous mode
Oct 14 04:56:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:56.028 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b0973e95-6c8a-4726-b7b9-dfa009a2d4bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:56 np0005486808 nova_compute[259627]: 2025-10-14 08:56:56.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:56:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:56.055 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7df35203-fca2-4ef3-8194-4f592512bca3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:56.057 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0df2aadf-768b-4156-b0a2-55cf3649589a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:56.076 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a06a0d36-242f-4f08-b859-18a7f5e7d095]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597474, 'reachable_time': 38692, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285430, 'error': None, 'target': 'ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:56 np0005486808 systemd[1]: run-netns-ovnmeta\x2d58d74886\x2dd603\x2d4fb5\x2db8ff\x2d9c184284bdce.mount: Deactivated successfully.
Oct 14 04:56:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:56.081 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 04:56:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:56:56.081 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[621f13d2-b946-4250-be65-6860cee01ec9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:56:56 np0005486808 nova_compute[259627]: 2025-10-14 08:56:56.110 2 INFO nova.compute.manager [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Took 4.80 seconds to build instance.#033[00m
Oct 14 04:56:56 np0005486808 nova_compute[259627]: 2025-10-14 08:56:56.133 2 DEBUG oslo_concurrency.lockutils [None req-0b559ca0-ac85-4437-bc7e-b9fb7dd1ae04 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "753ea698-6cc6-4a73-a0d2-1366e5374a9c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.928s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:56:56 np0005486808 nova_compute[259627]: 2025-10-14 08:56:56.279 2 INFO nova.virt.libvirt.driver [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Deleting instance files /var/lib/nova/instances/5de76ef0-5c03-4b43-a691-c858cecd9e80_del#033[00m
Oct 14 04:56:56 np0005486808 nova_compute[259627]: 2025-10-14 08:56:56.280 2 INFO nova.virt.libvirt.driver [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Deletion of /var/lib/nova/instances/5de76ef0-5c03-4b43-a691-c858cecd9e80_del complete#033[00m
Oct 14 04:56:56 np0005486808 nova_compute[259627]: 2025-10-14 08:56:56.361 2 INFO nova.compute.manager [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 04:56:56 np0005486808 nova_compute[259627]: 2025-10-14 08:56:56.362 2 DEBUG oslo.service.loopingcall [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 04:56:56 np0005486808 nova_compute[259627]: 2025-10-14 08:56:56.363 2 DEBUG nova.compute.manager [-] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 04:56:56 np0005486808 nova_compute[259627]: 2025-10-14 08:56:56.363 2 DEBUG nova.network.neutron [-] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 04:56:56 np0005486808 nova_compute[259627]: 2025-10-14 08:56:56.372 2 DEBUG oslo_concurrency.lockutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquiring lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:56:56 np0005486808 nova_compute[259627]: 2025-10-14 08:56:56.372 2 DEBUG oslo_concurrency.lockutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:56:56 np0005486808 nova_compute[259627]: 2025-10-14 08:56:56.388 2 DEBUG nova.compute.manager [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 04:56:56 np0005486808 nova_compute[259627]: 2025-10-14 08:56:56.457 2 DEBUG oslo_concurrency.lockutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:56:56 np0005486808 nova_compute[259627]: 2025-10-14 08:56:56.457 2 DEBUG oslo_concurrency.lockutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:56:56 np0005486808 nova_compute[259627]: 2025-10-14 08:56:56.464 2 DEBUG nova.virt.hardware [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 04:56:56 np0005486808 nova_compute[259627]: 2025-10-14 08:56:56.464 2 INFO nova.compute.claims [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 04:56:56 np0005486808 nova_compute[259627]: 2025-10-14 08:56:56.493 2 DEBUG nova.compute.manager [req-f4c96c99-e489-4a5a-8792-d2879de90d14 req-13968f57-add7-42a4-86f2-3736aa7168fa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Received event network-vif-unplugged-e0cae4c8-f654-471d-a9c1-c77a306f1edf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 04:56:56 np0005486808 nova_compute[259627]: 2025-10-14 08:56:56.494 2 DEBUG oslo_concurrency.lockutils [req-f4c96c99-e489-4a5a-8792-d2879de90d14 req-13968f57-add7-42a4-86f2-3736aa7168fa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "5de76ef0-5c03-4b43-a691-c858cecd9e80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:56:56 np0005486808 nova_compute[259627]: 2025-10-14 08:56:56.494 2 DEBUG oslo_concurrency.lockutils [req-f4c96c99-e489-4a5a-8792-d2879de90d14 req-13968f57-add7-42a4-86f2-3736aa7168fa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "5de76ef0-5c03-4b43-a691-c858cecd9e80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:56:56 np0005486808 nova_compute[259627]: 2025-10-14 08:56:56.495 2 DEBUG oslo_concurrency.lockutils [req-f4c96c99-e489-4a5a-8792-d2879de90d14 req-13968f57-add7-42a4-86f2-3736aa7168fa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "5de76ef0-5c03-4b43-a691-c858cecd9e80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:56:56 np0005486808 nova_compute[259627]: 2025-10-14 08:56:56.495 2 DEBUG nova.compute.manager [req-f4c96c99-e489-4a5a-8792-d2879de90d14 req-13968f57-add7-42a4-86f2-3736aa7168fa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] No waiting events found dispatching network-vif-unplugged-e0cae4c8-f654-471d-a9c1-c77a306f1edf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 04:56:56 np0005486808 nova_compute[259627]: 2025-10-14 08:56:56.495 2 DEBUG nova.compute.manager [req-f4c96c99-e489-4a5a-8792-d2879de90d14 req-13968f57-add7-42a4-86f2-3736aa7168fa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Received event network-vif-unplugged-e0cae4c8-f654-471d-a9c1-c77a306f1edf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 04:56:56 np0005486808 nova_compute[259627]: 2025-10-14 08:56:56.623 2 DEBUG oslo_concurrency.processutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:56:56 np0005486808 ovn_controller[152662]: 2025-10-14T08:56:56Z|00095|binding|INFO|Releasing lport e772e4c9-7f41-4f58-a7a3-269a843bc77c from this chassis (sb_readonly=0)
Oct 14 04:56:56 np0005486808 NetworkManager[44885]: <info>  [1760432216.8238] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Oct 14 04:56:56 np0005486808 NetworkManager[44885]: <info>  [1760432216.8247] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Oct 14 04:56:56 np0005486808 nova_compute[259627]: 2025-10-14 08:56:56.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:56:56 np0005486808 ovn_controller[152662]: 2025-10-14T08:56:56Z|00096|binding|INFO|Releasing lport 1eaf3c85-b7b7-4dd7-ad1f-33385c25330b from this chassis (sb_readonly=0)
Oct 14 04:56:56 np0005486808 ovn_controller[152662]: 2025-10-14T08:56:56Z|00097|binding|INFO|Releasing lport e772e4c9-7f41-4f58-a7a3-269a843bc77c from this chassis (sb_readonly=0)
Oct 14 04:56:56 np0005486808 ovn_controller[152662]: 2025-10-14T08:56:56Z|00098|binding|INFO|Releasing lport 1eaf3c85-b7b7-4dd7-ad1f-33385c25330b from this chassis (sb_readonly=0)
Oct 14 04:56:56 np0005486808 nova_compute[259627]: 2025-10-14 08:56:56.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:56:56 np0005486808 nova_compute[259627]: 2025-10-14 08:56:56.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:56:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:56:57 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3386572064' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:56:57 np0005486808 nova_compute[259627]: 2025-10-14 08:56:57.079 2 DEBUG oslo_concurrency.processutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:56:57 np0005486808 nova_compute[259627]: 2025-10-14 08:56:57.084 2 DEBUG nova.compute.provider_tree [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 04:56:57 np0005486808 nova_compute[259627]: 2025-10-14 08:56:57.112 2 DEBUG nova.scheduler.client.report [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 04:56:57 np0005486808 nova_compute[259627]: 2025-10-14 08:56:57.120 2 DEBUG nova.network.neutron [-] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 04:56:57 np0005486808 nova_compute[259627]: 2025-10-14 08:56:57.148 2 DEBUG oslo_concurrency.lockutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:56:57 np0005486808 nova_compute[259627]: 2025-10-14 08:56:57.149 2 DEBUG nova.compute.manager [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 04:56:57 np0005486808 nova_compute[259627]: 2025-10-14 08:56:57.156 2 INFO nova.compute.manager [-] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Took 0.79 seconds to deallocate network for instance.
Oct 14 04:56:57 np0005486808 nova_compute[259627]: 2025-10-14 08:56:57.298 2 DEBUG oslo_concurrency.lockutils [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:56:57 np0005486808 nova_compute[259627]: 2025-10-14 08:56:57.299 2 DEBUG oslo_concurrency.lockutils [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:56:57 np0005486808 nova_compute[259627]: 2025-10-14 08:56:57.305 2 DEBUG nova.compute.manager [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 04:56:57 np0005486808 nova_compute[259627]: 2025-10-14 08:56:57.305 2 DEBUG nova.network.neutron [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 04:56:57 np0005486808 nova_compute[259627]: 2025-10-14 08:56:57.318 2 DEBUG nova.compute.manager [req-81ae9d67-3798-468c-9ed8-2b2b72999eb2 req-bd4c7ecb-4c44-4427-8b19-01c95e0d4050 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Received event network-changed-3af4eb0a-c48b-4857-8399-453429b6af53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 04:56:57 np0005486808 nova_compute[259627]: 2025-10-14 08:56:57.320 2 DEBUG nova.compute.manager [req-81ae9d67-3798-468c-9ed8-2b2b72999eb2 req-bd4c7ecb-4c44-4427-8b19-01c95e0d4050 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Refreshing instance network info cache due to event network-changed-3af4eb0a-c48b-4857-8399-453429b6af53. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 04:56:57 np0005486808 nova_compute[259627]: 2025-10-14 08:56:57.321 2 DEBUG oslo_concurrency.lockutils [req-81ae9d67-3798-468c-9ed8-2b2b72999eb2 req-bd4c7ecb-4c44-4427-8b19-01c95e0d4050 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-251ae181-b980-4338-a6b5-eee48450b510" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 04:56:57 np0005486808 nova_compute[259627]: 2025-10-14 08:56:57.321 2 DEBUG oslo_concurrency.lockutils [req-81ae9d67-3798-468c-9ed8-2b2b72999eb2 req-bd4c7ecb-4c44-4427-8b19-01c95e0d4050 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-251ae181-b980-4338-a6b5-eee48450b510" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 04:56:57 np0005486808 nova_compute[259627]: 2025-10-14 08:56:57.321 2 DEBUG nova.network.neutron [req-81ae9d67-3798-468c-9ed8-2b2b72999eb2 req-bd4c7ecb-4c44-4427-8b19-01c95e0d4050 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Refreshing network info cache for port 3af4eb0a-c48b-4857-8399-453429b6af53 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 04:56:57 np0005486808 nova_compute[259627]: 2025-10-14 08:56:57.323 2 INFO nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 04:56:57 np0005486808 nova_compute[259627]: 2025-10-14 08:56:57.343 2 DEBUG nova.compute.manager [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 04:56:57 np0005486808 nova_compute[259627]: 2025-10-14 08:56:57.425 2 DEBUG nova.compute.manager [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 04:56:57 np0005486808 nova_compute[259627]: 2025-10-14 08:56:57.426 2 DEBUG nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 04:56:57 np0005486808 nova_compute[259627]: 2025-10-14 08:56:57.427 2 INFO nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Creating image(s)
Oct 14 04:56:57 np0005486808 nova_compute[259627]: 2025-10-14 08:56:57.446 2 DEBUG nova.storage.rbd_utils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] rbd image 01db05f6-07fb-41b5-8aaf-27ad5712fcda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:56:57 np0005486808 nova_compute[259627]: 2025-10-14 08:56:57.467 2 DEBUG nova.storage.rbd_utils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] rbd image 01db05f6-07fb-41b5-8aaf-27ad5712fcda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:56:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:56:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Oct 14 04:56:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Oct 14 04:56:57 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Oct 14 04:56:57 np0005486808 nova_compute[259627]: 2025-10-14 08:56:57.496 2 DEBUG nova.storage.rbd_utils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] rbd image 01db05f6-07fb-41b5-8aaf-27ad5712fcda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:56:57 np0005486808 nova_compute[259627]: 2025-10-14 08:56:57.501 2 DEBUG oslo_concurrency.processutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:56:57 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1151: 305 pgs: 305 active+clean; 293 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 271 op/s
Oct 14 04:56:57 np0005486808 nova_compute[259627]: 2025-10-14 08:56:57.534 2 DEBUG oslo_concurrency.processutils [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:56:57 np0005486808 nova_compute[259627]: 2025-10-14 08:56:57.566 2 DEBUG nova.policy [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f50e2582d63041b682c71a379f763c0e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9bf65c21e4104af6981b071561617657', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 04:56:57 np0005486808 nova_compute[259627]: 2025-10-14 08:56:57.572 2 DEBUG oslo_concurrency.processutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:56:57 np0005486808 nova_compute[259627]: 2025-10-14 08:56:57.577 2 DEBUG oslo_concurrency.lockutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:56:57 np0005486808 nova_compute[259627]: 2025-10-14 08:56:57.578 2 DEBUG oslo_concurrency.lockutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:56:57 np0005486808 nova_compute[259627]: 2025-10-14 08:56:57.578 2 DEBUG oslo_concurrency.lockutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:56:57 np0005486808 nova_compute[259627]: 2025-10-14 08:56:57.604 2 DEBUG nova.storage.rbd_utils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] rbd image 01db05f6-07fb-41b5-8aaf-27ad5712fcda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:56:57 np0005486808 nova_compute[259627]: 2025-10-14 08:56:57.607 2 DEBUG oslo_concurrency.processutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 01db05f6-07fb-41b5-8aaf-27ad5712fcda_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:56:57 np0005486808 nova_compute[259627]: 2025-10-14 08:56:57.933 2 DEBUG oslo_concurrency.processutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 01db05f6-07fb-41b5-8aaf-27ad5712fcda_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.326s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:56:58 np0005486808 nova_compute[259627]: 2025-10-14 08:56:58.001 2 DEBUG nova.storage.rbd_utils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] resizing rbd image 01db05f6-07fb-41b5-8aaf-27ad5712fcda_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 04:56:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:56:58 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3531381479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:56:58 np0005486808 nova_compute[259627]: 2025-10-14 08:56:58.085 2 DEBUG nova.objects.instance [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lazy-loading 'migration_context' on Instance uuid 01db05f6-07fb-41b5-8aaf-27ad5712fcda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 04:56:58 np0005486808 nova_compute[259627]: 2025-10-14 08:56:58.088 2 DEBUG oslo_concurrency.processutils [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:56:58 np0005486808 nova_compute[259627]: 2025-10-14 08:56:58.093 2 DEBUG nova.compute.provider_tree [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 04:56:58 np0005486808 nova_compute[259627]: 2025-10-14 08:56:58.102 2 DEBUG nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 04:56:58 np0005486808 nova_compute[259627]: 2025-10-14 08:56:58.102 2 DEBUG nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Ensure instance console log exists: /var/lib/nova/instances/01db05f6-07fb-41b5-8aaf-27ad5712fcda/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 04:56:58 np0005486808 nova_compute[259627]: 2025-10-14 08:56:58.103 2 DEBUG oslo_concurrency.lockutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:56:58 np0005486808 nova_compute[259627]: 2025-10-14 08:56:58.103 2 DEBUG oslo_concurrency.lockutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:56:58 np0005486808 nova_compute[259627]: 2025-10-14 08:56:58.103 2 DEBUG oslo_concurrency.lockutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:56:58 np0005486808 nova_compute[259627]: 2025-10-14 08:56:58.104 2 DEBUG nova.scheduler.client.report [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 04:56:58 np0005486808 nova_compute[259627]: 2025-10-14 08:56:58.138 2 DEBUG oslo_concurrency.lockutils [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:56:58 np0005486808 nova_compute[259627]: 2025-10-14 08:56:58.178 2 INFO nova.scheduler.client.report [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Deleted allocations for instance 5de76ef0-5c03-4b43-a691-c858cecd9e80
Oct 14 04:56:58 np0005486808 nova_compute[259627]: 2025-10-14 08:56:58.246 2 DEBUG oslo_concurrency.lockutils [None req-2d620b24-335d-45fa-8e98-aedc7d5cdcfb aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "5de76ef0-5c03-4b43-a691-c858cecd9e80" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:56:58 np0005486808 nova_compute[259627]: 2025-10-14 08:56:58.443 2 DEBUG nova.network.neutron [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Successfully created port: 499f3731-d66b-4964-b5a8-387adacf5166 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 04:56:58 np0005486808 podman[285643]: 2025-10-14 08:56:58.648005542 +0000 UTC m=+0.063491060 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, config_id=iscsid)
Oct 14 04:56:58 np0005486808 podman[285642]: 2025-10-14 08:56:58.654910202 +0000 UTC m=+0.072162403 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 04:56:58 np0005486808 nova_compute[259627]: 2025-10-14 08:56:58.664 2 DEBUG nova.compute.manager [req-ecb96d48-4d47-4d19-9928-b1d5955d64d7 req-01997485-ae3e-44b5-b7db-c3ee30e4e46b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Received event network-vif-plugged-e0cae4c8-f654-471d-a9c1-c77a306f1edf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 04:56:58 np0005486808 nova_compute[259627]: 2025-10-14 08:56:58.665 2 DEBUG oslo_concurrency.lockutils [req-ecb96d48-4d47-4d19-9928-b1d5955d64d7 req-01997485-ae3e-44b5-b7db-c3ee30e4e46b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "5de76ef0-5c03-4b43-a691-c858cecd9e80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:56:58 np0005486808 nova_compute[259627]: 2025-10-14 08:56:58.666 2 DEBUG oslo_concurrency.lockutils [req-ecb96d48-4d47-4d19-9928-b1d5955d64d7 req-01997485-ae3e-44b5-b7db-c3ee30e4e46b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "5de76ef0-5c03-4b43-a691-c858cecd9e80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:56:58 np0005486808 nova_compute[259627]: 2025-10-14 08:56:58.666 2 DEBUG oslo_concurrency.lockutils [req-ecb96d48-4d47-4d19-9928-b1d5955d64d7 req-01997485-ae3e-44b5-b7db-c3ee30e4e46b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "5de76ef0-5c03-4b43-a691-c858cecd9e80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:56:58 np0005486808 nova_compute[259627]: 2025-10-14 08:56:58.666 2 DEBUG nova.compute.manager [req-ecb96d48-4d47-4d19-9928-b1d5955d64d7 req-01997485-ae3e-44b5-b7db-c3ee30e4e46b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] No waiting events found dispatching network-vif-plugged-e0cae4c8-f654-471d-a9c1-c77a306f1edf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 04:56:58 np0005486808 nova_compute[259627]: 2025-10-14 08:56:58.666 2 WARNING nova.compute.manager [req-ecb96d48-4d47-4d19-9928-b1d5955d64d7 req-01997485-ae3e-44b5-b7db-c3ee30e4e46b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Received unexpected event network-vif-plugged-e0cae4c8-f654-471d-a9c1-c77a306f1edf for instance with vm_state deleted and task_state None.
Oct 14 04:56:58 np0005486808 nova_compute[259627]: 2025-10-14 08:56:58.667 2 DEBUG nova.compute.manager [req-ecb96d48-4d47-4d19-9928-b1d5955d64d7 req-01997485-ae3e-44b5-b7db-c3ee30e4e46b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Received event network-vif-deleted-e0cae4c8-f654-471d-a9c1-c77a306f1edf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 04:56:59 np0005486808 nova_compute[259627]: 2025-10-14 08:56:59.148 2 DEBUG nova.network.neutron [req-81ae9d67-3798-468c-9ed8-2b2b72999eb2 req-bd4c7ecb-4c44-4427-8b19-01c95e0d4050 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Updated VIF entry in instance network info cache for port 3af4eb0a-c48b-4857-8399-453429b6af53. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 04:56:59 np0005486808 nova_compute[259627]: 2025-10-14 08:56:59.149 2 DEBUG nova.network.neutron [req-81ae9d67-3798-468c-9ed8-2b2b72999eb2 req-bd4c7ecb-4c44-4427-8b19-01c95e0d4050 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Updating instance_info_cache with network_info: [{"id": "3af4eb0a-c48b-4857-8399-453429b6af53", "address": "fa:16:3e:dd:99:88", "network": {"id": "fb9605f8-2a2c-40d4-892f-fb75a29c07c3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2089994571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dffd1ba9c7eb426eba02b7fa1cb571e2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af4eb0a-c4", "ovs_interfaceid": "3af4eb0a-c48b-4857-8399-453429b6af53", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 04:56:59 np0005486808 nova_compute[259627]: 2025-10-14 08:56:59.170 2 DEBUG oslo_concurrency.lockutils [req-81ae9d67-3798-468c-9ed8-2b2b72999eb2 req-bd4c7ecb-4c44-4427-8b19-01c95e0d4050 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-251ae181-b980-4338-a6b5-eee48450b510" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 04:56:59 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1152: 305 pgs: 305 active+clean; 293 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 204 op/s
Oct 14 04:56:59 np0005486808 nova_compute[259627]: 2025-10-14 08:56:59.531 2 DEBUG nova.network.neutron [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Successfully updated port: 499f3731-d66b-4964-b5a8-387adacf5166 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 04:56:59 np0005486808 nova_compute[259627]: 2025-10-14 08:56:59.558 2 DEBUG oslo_concurrency.lockutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquiring lock "refresh_cache-01db05f6-07fb-41b5-8aaf-27ad5712fcda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 04:56:59 np0005486808 nova_compute[259627]: 2025-10-14 08:56:59.558 2 DEBUG oslo_concurrency.lockutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquired lock "refresh_cache-01db05f6-07fb-41b5-8aaf-27ad5712fcda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 04:56:59 np0005486808 nova_compute[259627]: 2025-10-14 08:56:59.559 2 DEBUG nova.network.neutron [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 04:56:59 np0005486808 nova_compute[259627]: 2025-10-14 08:56:59.752 2 DEBUG nova.network.neutron [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 04:56:59 np0005486808 nova_compute[259627]: 2025-10-14 08:56:59.855 2 DEBUG oslo_concurrency.lockutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Acquiring lock "45f3b13d-65b1-4bbf-8192-7b842f616b4d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:56:59 np0005486808 nova_compute[259627]: 2025-10-14 08:56:59.856 2 DEBUG oslo_concurrency.lockutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "45f3b13d-65b1-4bbf-8192-7b842f616b4d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:56:59 np0005486808 nova_compute[259627]: 2025-10-14 08:56:59.876 2 DEBUG nova.compute.manager [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 04:56:59 np0005486808 nova_compute[259627]: 2025-10-14 08:56:59.954 2 DEBUG oslo_concurrency.lockutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:56:59 np0005486808 nova_compute[259627]: 2025-10-14 08:56:59.955 2 DEBUG oslo_concurrency.lockutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:56:59 np0005486808 nova_compute[259627]: 2025-10-14 08:56:59.965 2 DEBUG nova.virt.hardware [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 04:56:59 np0005486808 nova_compute[259627]: 2025-10-14 08:56:59.965 2 INFO nova.compute.claims [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Claim successful on node compute-0.ctlplane.example.com
Oct 14 04:57:00 np0005486808 nova_compute[259627]: 2025-10-14 08:57:00.172 2 DEBUG oslo_concurrency.processutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:57:00 np0005486808 nova_compute[259627]: 2025-10-14 08:57:00.585 2 DEBUG nova.network.neutron [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Updating instance_info_cache with network_info: [{"id": "499f3731-d66b-4964-b5a8-387adacf5166", "address": "fa:16:3e:58:ee:87", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499f3731-d6", "ovs_interfaceid": "499f3731-d66b-4964-b5a8-387adacf5166", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 04:57:00 np0005486808 nova_compute[259627]: 2025-10-14 08:57:00.616 2 DEBUG oslo_concurrency.lockutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Releasing lock "refresh_cache-01db05f6-07fb-41b5-8aaf-27ad5712fcda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 04:57:00 np0005486808 nova_compute[259627]: 2025-10-14 08:57:00.617 2 DEBUG nova.compute.manager [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Instance network_info: |[{"id": "499f3731-d66b-4964-b5a8-387adacf5166", "address": "fa:16:3e:58:ee:87", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499f3731-d6", "ovs_interfaceid": "499f3731-d66b-4964-b5a8-387adacf5166", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 04:57:00 np0005486808 nova_compute[259627]: 2025-10-14 08:57:00.621 2 DEBUG nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Start _get_guest_xml network_info=[{"id": "499f3731-d66b-4964-b5a8-387adacf5166", "address": "fa:16:3e:58:ee:87", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499f3731-d6", "ovs_interfaceid": "499f3731-d66b-4964-b5a8-387adacf5166", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 04:57:00 np0005486808 nova_compute[259627]: 2025-10-14 08:57:00.627 2 WARNING nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 04:57:00 np0005486808 nova_compute[259627]: 2025-10-14 08:57:00.632 2 DEBUG nova.virt.libvirt.host [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 04:57:00 np0005486808 nova_compute[259627]: 2025-10-14 08:57:00.633 2 DEBUG nova.virt.libvirt.host [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 04:57:00 np0005486808 nova_compute[259627]: 2025-10-14 08:57:00.637 2 DEBUG nova.virt.libvirt.host [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 04:57:00 np0005486808 nova_compute[259627]: 2025-10-14 08:57:00.638 2 DEBUG nova.virt.libvirt.host [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 04:57:00 np0005486808 nova_compute[259627]: 2025-10-14 08:57:00.639 2 DEBUG nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 04:57:00 np0005486808 nova_compute[259627]: 2025-10-14 08:57:00.640 2 DEBUG nova.virt.hardware [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 04:57:00 np0005486808 nova_compute[259627]: 2025-10-14 08:57:00.641 2 DEBUG nova.virt.hardware [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 04:57:00 np0005486808 nova_compute[259627]: 2025-10-14 08:57:00.641 2 DEBUG nova.virt.hardware [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 04:57:00 np0005486808 nova_compute[259627]: 2025-10-14 08:57:00.642 2 DEBUG nova.virt.hardware [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 04:57:00 np0005486808 nova_compute[259627]: 2025-10-14 08:57:00.642 2 DEBUG nova.virt.hardware [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 04:57:00 np0005486808 nova_compute[259627]: 2025-10-14 08:57:00.643 2 DEBUG nova.virt.hardware [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 04:57:00 np0005486808 nova_compute[259627]: 2025-10-14 08:57:00.644 2 DEBUG nova.virt.hardware [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 04:57:00 np0005486808 nova_compute[259627]: 2025-10-14 08:57:00.644 2 DEBUG nova.virt.hardware [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 04:57:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:57:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/818534980' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:57:00 np0005486808 nova_compute[259627]: 2025-10-14 08:57:00.645 2 DEBUG nova.virt.hardware [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 04:57:00 np0005486808 nova_compute[259627]: 2025-10-14 08:57:00.646 2 DEBUG nova.virt.hardware [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 04:57:00 np0005486808 nova_compute[259627]: 2025-10-14 08:57:00.647 2 DEBUG nova.virt.hardware [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 04:57:00 np0005486808 nova_compute[259627]: 2025-10-14 08:57:00.656 2 DEBUG oslo_concurrency.processutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:57:00 np0005486808 nova_compute[259627]: 2025-10-14 08:57:00.696 2 DEBUG oslo_concurrency.processutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:57:00 np0005486808 nova_compute[259627]: 2025-10-14 08:57:00.704 2 DEBUG nova.compute.provider_tree [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 04:57:00 np0005486808 nova_compute[259627]: 2025-10-14 08:57:00.723 2 DEBUG nova.scheduler.client.report [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 04:57:00 np0005486808 nova_compute[259627]: 2025-10-14 08:57:00.751 2 DEBUG oslo_concurrency.lockutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.796s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:57:00 np0005486808 nova_compute[259627]: 2025-10-14 08:57:00.753 2 DEBUG nova.compute.manager [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 04:57:00 np0005486808 nova_compute[259627]: 2025-10-14 08:57:00.794 2 DEBUG nova.compute.manager [req-7d0ddd5d-105f-45a4-9310-9f18e1590986 req-d7e2ea42-e567-4571-ab70-921c5f10fbbf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Received event network-changed-499f3731-d66b-4964-b5a8-387adacf5166 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 04:57:00 np0005486808 nova_compute[259627]: 2025-10-14 08:57:00.796 2 DEBUG nova.compute.manager [req-7d0ddd5d-105f-45a4-9310-9f18e1590986 req-d7e2ea42-e567-4571-ab70-921c5f10fbbf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Refreshing instance network info cache due to event network-changed-499f3731-d66b-4964-b5a8-387adacf5166. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 04:57:00 np0005486808 nova_compute[259627]: 2025-10-14 08:57:00.797 2 DEBUG oslo_concurrency.lockutils [req-7d0ddd5d-105f-45a4-9310-9f18e1590986 req-d7e2ea42-e567-4571-ab70-921c5f10fbbf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-01db05f6-07fb-41b5-8aaf-27ad5712fcda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 04:57:00 np0005486808 nova_compute[259627]: 2025-10-14 08:57:00.797 2 DEBUG oslo_concurrency.lockutils [req-7d0ddd5d-105f-45a4-9310-9f18e1590986 req-d7e2ea42-e567-4571-ab70-921c5f10fbbf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-01db05f6-07fb-41b5-8aaf-27ad5712fcda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 04:57:00 np0005486808 nova_compute[259627]: 2025-10-14 08:57:00.798 2 DEBUG nova.network.neutron [req-7d0ddd5d-105f-45a4-9310-9f18e1590986 req-d7e2ea42-e567-4571-ab70-921c5f10fbbf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Refreshing network info cache for port 499f3731-d66b-4964-b5a8-387adacf5166 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 04:57:00 np0005486808 nova_compute[259627]: 2025-10-14 08:57:00.808 2 DEBUG nova.compute.manager [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 04:57:00 np0005486808 nova_compute[259627]: 2025-10-14 08:57:00.808 2 DEBUG nova.network.neutron [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 04:57:00 np0005486808 nova_compute[259627]: 2025-10-14 08:57:00.833 2 INFO nova.virt.libvirt.driver [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 04:57:00 np0005486808 nova_compute[259627]: 2025-10-14 08:57:00.853 2 DEBUG nova.compute.manager [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 04:57:00 np0005486808 nova_compute[259627]: 2025-10-14 08:57:00.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:57:00 np0005486808 nova_compute[259627]: 2025-10-14 08:57:00.949 2 DEBUG nova.compute.manager [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 04:57:00 np0005486808 nova_compute[259627]: 2025-10-14 08:57:00.950 2 DEBUG nova.virt.libvirt.driver [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 04:57:00 np0005486808 nova_compute[259627]: 2025-10-14 08:57:00.950 2 INFO nova.virt.libvirt.driver [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Creating image(s)
Oct 14 04:57:00 np0005486808 nova_compute[259627]: 2025-10-14 08:57:00.973 2 DEBUG nova.storage.rbd_utils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] rbd image 45f3b13d-65b1-4bbf-8192-7b842f616b4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:57:00 np0005486808 nova_compute[259627]: 2025-10-14 08:57:00.994 2 DEBUG nova.storage.rbd_utils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] rbd image 45f3b13d-65b1-4bbf-8192-7b842f616b4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.013 2 DEBUG nova.storage.rbd_utils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] rbd image 45f3b13d-65b1-4bbf-8192-7b842f616b4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.024 2 DEBUG oslo_concurrency.processutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.079 2 DEBUG oslo_concurrency.processutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.079 2 DEBUG oslo_concurrency.lockutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.080 2 DEBUG oslo_concurrency.lockutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.080 2 DEBUG oslo_concurrency.lockutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.100 2 DEBUG nova.storage.rbd_utils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] rbd image 45f3b13d-65b1-4bbf-8192-7b842f616b4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.104 2 DEBUG oslo_concurrency.processutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 45f3b13d-65b1-4bbf-8192-7b842f616b4d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:57:01 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:57:01 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3059173365' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.127 2 DEBUG oslo_concurrency.processutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.147 2 DEBUG nova.storage.rbd_utils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] rbd image 01db05f6-07fb-41b5-8aaf-27ad5712fcda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.150 2 DEBUG oslo_concurrency.processutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.175 2 DEBUG nova.network.neutron [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.176 2 DEBUG nova.compute.manager [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.348 2 DEBUG oslo_concurrency.processutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 45f3b13d-65b1-4bbf-8192-7b842f616b4d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.244s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.421 2 DEBUG nova.storage.rbd_utils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] resizing rbd image 45f3b13d-65b1-4bbf-8192-7b842f616b4d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 04:57:01 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1153: 305 pgs: 305 active+clean; 260 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 5.2 MiB/s rd, 4.8 MiB/s wr, 345 op/s
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.538 2 DEBUG nova.objects.instance [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lazy-loading 'migration_context' on Instance uuid 45f3b13d-65b1-4bbf-8192-7b842f616b4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.563 2 DEBUG nova.virt.libvirt.driver [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.564 2 DEBUG nova.virt.libvirt.driver [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Ensure instance console log exists: /var/lib/nova/instances/45f3b13d-65b1-4bbf-8192-7b842f616b4d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.564 2 DEBUG oslo_concurrency.lockutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.565 2 DEBUG oslo_concurrency.lockutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.565 2 DEBUG oslo_concurrency.lockutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.567 2 DEBUG nova.virt.libvirt.driver [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.573 2 WARNING nova.virt.libvirt.driver [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.577 2 DEBUG nova.virt.libvirt.host [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.578 2 DEBUG nova.virt.libvirt.host [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.582 2 DEBUG nova.virt.libvirt.host [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.583 2 DEBUG nova.virt.libvirt.host [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.583 2 DEBUG nova.virt.libvirt.driver [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.584 2 DEBUG nova.virt.hardware [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.584 2 DEBUG nova.virt.hardware [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.584 2 DEBUG nova.virt.hardware [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.585 2 DEBUG nova.virt.hardware [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.585 2 DEBUG nova.virt.hardware [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.585 2 DEBUG nova.virt.hardware [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.585 2 DEBUG nova.virt.hardware [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.586 2 DEBUG nova.virt.hardware [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.586 2 DEBUG nova.virt.hardware [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.587 2 DEBUG nova.virt.hardware [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.587 2 DEBUG nova.virt.hardware [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.590 2 DEBUG oslo_concurrency.processutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:57:01 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:57:01 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3585421242' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.639 2 DEBUG oslo_concurrency.processutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.641 2 DEBUG nova.virt.libvirt.vif [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:56:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1349611853',display_name='tempest-SecurityGroupsTestJSON-server-1349611853',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1349611853',id=17,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9bf65c21e4104af6981b071561617657',ramdisk_id='',reservation_id='r-qk8wy1ld',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-663845074',owner_user_name='tempest-SecurityGroupsTestJSO
N-663845074-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:56:57Z,user_data=None,user_id='f50e2582d63041b682c71a379f763c0e',uuid=01db05f6-07fb-41b5-8aaf-27ad5712fcda,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "499f3731-d66b-4964-b5a8-387adacf5166", "address": "fa:16:3e:58:ee:87", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499f3731-d6", "ovs_interfaceid": "499f3731-d66b-4964-b5a8-387adacf5166", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.642 2 DEBUG nova.network.os_vif_util [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Converting VIF {"id": "499f3731-d66b-4964-b5a8-387adacf5166", "address": "fa:16:3e:58:ee:87", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499f3731-d6", "ovs_interfaceid": "499f3731-d66b-4964-b5a8-387adacf5166", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.643 2 DEBUG nova.network.os_vif_util [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:ee:87,bridge_name='br-int',has_traffic_filtering=True,id=499f3731-d66b-4964-b5a8-387adacf5166,network=Network(58ff48d6-a644-40e6-8fc9-ee19b4354df9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap499f3731-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.644 2 DEBUG nova.objects.instance [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lazy-loading 'pci_devices' on Instance uuid 01db05f6-07fb-41b5-8aaf-27ad5712fcda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.656 2 DEBUG nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] End _get_guest_xml xml=<domain type="kvm">
Oct 14 04:57:01 np0005486808 nova_compute[259627]:  <uuid>01db05f6-07fb-41b5-8aaf-27ad5712fcda</uuid>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:  <name>instance-00000011</name>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 04:57:01 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:      <nova:name>tempest-SecurityGroupsTestJSON-server-1349611853</nova:name>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 08:57:00</nova:creationTime>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 04:57:01 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:        <nova:user uuid="f50e2582d63041b682c71a379f763c0e">tempest-SecurityGroupsTestJSON-663845074-project-member</nova:user>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:        <nova:project uuid="9bf65c21e4104af6981b071561617657">tempest-SecurityGroupsTestJSON-663845074</nova:project>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:        <nova:port uuid="499f3731-d66b-4964-b5a8-387adacf5166">
Oct 14 04:57:01 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <system>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:      <entry name="serial">01db05f6-07fb-41b5-8aaf-27ad5712fcda</entry>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:      <entry name="uuid">01db05f6-07fb-41b5-8aaf-27ad5712fcda</entry>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    </system>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:  <os>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:  </os>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:  <features>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:  </features>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:  </clock>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:  <devices>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 04:57:01 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/01db05f6-07fb-41b5-8aaf-27ad5712fcda_disk">
Oct 14 04:57:01 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:57:01 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 04:57:01 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/01db05f6-07fb-41b5-8aaf-27ad5712fcda_disk.config">
Oct 14 04:57:01 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:57:01 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 04:57:01 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:58:ee:87"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:      <target dev="tap499f3731-d6"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    </interface>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 04:57:01 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/01db05f6-07fb-41b5-8aaf-27ad5712fcda/console.log" append="off"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    </serial>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <video>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    </video>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 04:57:01 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    </rng>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 04:57:01 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 04:57:01 np0005486808 nova_compute[259627]:  </devices>
Oct 14 04:57:01 np0005486808 nova_compute[259627]: </domain>
Oct 14 04:57:01 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.658 2 DEBUG nova.compute.manager [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Preparing to wait for external event network-vif-plugged-499f3731-d66b-4964-b5a8-387adacf5166 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.659 2 DEBUG oslo_concurrency.lockutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquiring lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.659 2 DEBUG oslo_concurrency.lockutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.659 2 DEBUG oslo_concurrency.lockutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.662 2 DEBUG nova.virt.libvirt.vif [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:56:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1349611853',display_name='tempest-SecurityGroupsTestJSON-server-1349611853',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1349611853',id=17,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9bf65c21e4104af6981b071561617657',ramdisk_id='',reservation_id='r-qk8wy1ld',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-663845074',owner_user_name='tempest-SecurityGroupsTestJSON-663845074-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:56:57Z,user_data=None,user_id='f50e2582d63041b682c71a379f763c0e',uuid=01db05f6-07fb-41b5-8aaf-27ad5712fcda,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "499f3731-d66b-4964-b5a8-387adacf5166", "address": "fa:16:3e:58:ee:87", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499f3731-d6", "ovs_interfaceid": "499f3731-d66b-4964-b5a8-387adacf5166", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.662 2 DEBUG nova.network.os_vif_util [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Converting VIF {"id": "499f3731-d66b-4964-b5a8-387adacf5166", "address": "fa:16:3e:58:ee:87", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499f3731-d6", "ovs_interfaceid": "499f3731-d66b-4964-b5a8-387adacf5166", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.663 2 DEBUG nova.network.os_vif_util [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:ee:87,bridge_name='br-int',has_traffic_filtering=True,id=499f3731-d66b-4964-b5a8-387adacf5166,network=Network(58ff48d6-a644-40e6-8fc9-ee19b4354df9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap499f3731-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.664 2 DEBUG os_vif [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:ee:87,bridge_name='br-int',has_traffic_filtering=True,id=499f3731-d66b-4964-b5a8-387adacf5166,network=Network(58ff48d6-a644-40e6-8fc9-ee19b4354df9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap499f3731-d6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.665 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.665 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.668 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap499f3731-d6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.669 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap499f3731-d6, col_values=(('external_ids', {'iface-id': '499f3731-d66b-4964-b5a8-387adacf5166', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:58:ee:87', 'vm-uuid': '01db05f6-07fb-41b5-8aaf-27ad5712fcda'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:01 np0005486808 NetworkManager[44885]: <info>  [1760432221.6715] manager: (tap499f3731-d6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.681 2 INFO os_vif [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:ee:87,bridge_name='br-int',has_traffic_filtering=True,id=499f3731-d66b-4964-b5a8-387adacf5166,network=Network(58ff48d6-a644-40e6-8fc9-ee19b4354df9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap499f3731-d6')#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.741 2 DEBUG nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.742 2 DEBUG nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.743 2 DEBUG nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] No VIF found with MAC fa:16:3e:58:ee:87, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.744 2 INFO nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Using config drive#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.770 2 DEBUG nova.storage.rbd_utils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] rbd image 01db05f6-07fb-41b5-8aaf-27ad5712fcda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:57:01 np0005486808 nova_compute[259627]: 2025-10-14 08:57:01.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:02 np0005486808 nova_compute[259627]: 2025-10-14 08:57:02.000 2 DEBUG nova.network.neutron [req-7d0ddd5d-105f-45a4-9310-9f18e1590986 req-d7e2ea42-e567-4571-ab70-921c5f10fbbf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Updated VIF entry in instance network info cache for port 499f3731-d66b-4964-b5a8-387adacf5166. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 04:57:02 np0005486808 nova_compute[259627]: 2025-10-14 08:57:02.001 2 DEBUG nova.network.neutron [req-7d0ddd5d-105f-45a4-9310-9f18e1590986 req-d7e2ea42-e567-4571-ab70-921c5f10fbbf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Updating instance_info_cache with network_info: [{"id": "499f3731-d66b-4964-b5a8-387adacf5166", "address": "fa:16:3e:58:ee:87", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499f3731-d6", "ovs_interfaceid": "499f3731-d66b-4964-b5a8-387adacf5166", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:57:02 np0005486808 nova_compute[259627]: 2025-10-14 08:57:02.016 2 DEBUG oslo_concurrency.lockutils [req-7d0ddd5d-105f-45a4-9310-9f18e1590986 req-d7e2ea42-e567-4571-ab70-921c5f10fbbf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-01db05f6-07fb-41b5-8aaf-27ad5712fcda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:57:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:57:02 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1243070830' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:57:02 np0005486808 nova_compute[259627]: 2025-10-14 08:57:02.062 2 DEBUG oslo_concurrency.processutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:57:02 np0005486808 nova_compute[259627]: 2025-10-14 08:57:02.083 2 DEBUG nova.storage.rbd_utils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] rbd image 45f3b13d-65b1-4bbf-8192-7b842f616b4d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:57:02 np0005486808 nova_compute[259627]: 2025-10-14 08:57:02.087 2 DEBUG oslo_concurrency.processutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:57:02 np0005486808 nova_compute[259627]: 2025-10-14 08:57:02.108 2 INFO nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Creating config drive at /var/lib/nova/instances/01db05f6-07fb-41b5-8aaf-27ad5712fcda/disk.config#033[00m
Oct 14 04:57:02 np0005486808 nova_compute[259627]: 2025-10-14 08:57:02.113 2 DEBUG oslo_concurrency.processutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/01db05f6-07fb-41b5-8aaf-27ad5712fcda/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsw3uglaq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:57:02 np0005486808 nova_compute[259627]: 2025-10-14 08:57:02.239 2 DEBUG oslo_concurrency.processutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/01db05f6-07fb-41b5-8aaf-27ad5712fcda/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsw3uglaq" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:57:02 np0005486808 nova_compute[259627]: 2025-10-14 08:57:02.259 2 DEBUG nova.storage.rbd_utils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] rbd image 01db05f6-07fb-41b5-8aaf-27ad5712fcda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:57:02 np0005486808 nova_compute[259627]: 2025-10-14 08:57:02.263 2 DEBUG oslo_concurrency.processutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/01db05f6-07fb-41b5-8aaf-27ad5712fcda/disk.config 01db05f6-07fb-41b5-8aaf-27ad5712fcda_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:57:02 np0005486808 nova_compute[259627]: 2025-10-14 08:57:02.420 2 DEBUG oslo_concurrency.processutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/01db05f6-07fb-41b5-8aaf-27ad5712fcda/disk.config 01db05f6-07fb-41b5-8aaf-27ad5712fcda_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:57:02 np0005486808 nova_compute[259627]: 2025-10-14 08:57:02.421 2 INFO nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Deleting local config drive /var/lib/nova/instances/01db05f6-07fb-41b5-8aaf-27ad5712fcda/disk.config because it was imported into RBD.#033[00m
Oct 14 04:57:02 np0005486808 kernel: tap499f3731-d6: entered promiscuous mode
Oct 14 04:57:02 np0005486808 NetworkManager[44885]: <info>  [1760432222.4638] manager: (tap499f3731-d6): new Tun device (/org/freedesktop/NetworkManager/Devices/63)
Oct 14 04:57:02 np0005486808 ovn_controller[152662]: 2025-10-14T08:57:02Z|00099|binding|INFO|Claiming lport 499f3731-d66b-4964-b5a8-387adacf5166 for this chassis.
Oct 14 04:57:02 np0005486808 ovn_controller[152662]: 2025-10-14T08:57:02Z|00100|binding|INFO|499f3731-d66b-4964-b5a8-387adacf5166: Claiming fa:16:3e:58:ee:87 10.100.0.5
Oct 14 04:57:02 np0005486808 nova_compute[259627]: 2025-10-14 08:57:02.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:57:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:02.472 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:ee:87 10.100.0.5'], port_security=['fa:16:3e:58:ee:87 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '01db05f6-07fb-41b5-8aaf-27ad5712fcda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9bf65c21e4104af6981b071561617657', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e50f77b0-42a8-4edd-9a84-05435c5fb458', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b2c96ef8-3846-498d-b4a9-f6fd46fb5d04, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=499f3731-d66b-4964-b5a8-387adacf5166) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:57:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:02.474 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 499f3731-d66b-4964-b5a8-387adacf5166 in datapath 58ff48d6-a644-40e6-8fc9-ee19b4354df9 bound to our chassis#033[00m
Oct 14 04:57:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:02.475 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58ff48d6-a644-40e6-8fc9-ee19b4354df9#033[00m
Oct 14 04:57:02 np0005486808 ovn_controller[152662]: 2025-10-14T08:57:02Z|00101|binding|INFO|Setting lport 499f3731-d66b-4964-b5a8-387adacf5166 ovn-installed in OVS
Oct 14 04:57:02 np0005486808 ovn_controller[152662]: 2025-10-14T08:57:02Z|00102|binding|INFO|Setting lport 499f3731-d66b-4964-b5a8-387adacf5166 up in Southbound
Oct 14 04:57:02 np0005486808 nova_compute[259627]: 2025-10-14 08:57:02.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:57:02 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/290988960' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:57:02 np0005486808 nova_compute[259627]: 2025-10-14 08:57:02.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:02.494 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[85f942ed-58c7-47f1-8d9f-81d6c0825b67]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:02 np0005486808 systemd-udevd[286063]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 04:57:02 np0005486808 systemd-machined[214636]: New machine qemu-17-instance-00000011.
Oct 14 04:57:02 np0005486808 systemd[1]: Started Virtual Machine qemu-17-instance-00000011.
Oct 14 04:57:02 np0005486808 nova_compute[259627]: 2025-10-14 08:57:02.512 2 DEBUG oslo_concurrency.processutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:57:02 np0005486808 nova_compute[259627]: 2025-10-14 08:57:02.515 2 DEBUG nova.objects.instance [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lazy-loading 'pci_devices' on Instance uuid 45f3b13d-65b1-4bbf-8192-7b842f616b4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:57:02 np0005486808 NetworkManager[44885]: <info>  [1760432222.5247] device (tap499f3731-d6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 04:57:02 np0005486808 NetworkManager[44885]: <info>  [1760432222.5257] device (tap499f3731-d6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 04:57:02 np0005486808 nova_compute[259627]: 2025-10-14 08:57:02.532 2 DEBUG nova.virt.libvirt.driver [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] End _get_guest_xml xml=<domain type="kvm">
Oct 14 04:57:02 np0005486808 nova_compute[259627]:  <uuid>45f3b13d-65b1-4bbf-8192-7b842f616b4d</uuid>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:  <name>instance-00000012</name>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 04:57:02 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:      <nova:name>tempest-LiveMigrationNegativeTest-server-1263596378</nova:name>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 08:57:01</nova:creationTime>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 04:57:02 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:        <nova:user uuid="6ecc59efebb941f4b0aa79b58a7e610e">tempest-LiveMigrationNegativeTest-588604906-project-member</nova:user>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:        <nova:project uuid="a618b00ff8c34f40bd31e4f56c019b1b">tempest-LiveMigrationNegativeTest-588604906</nova:project>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:      <nova:ports/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <system>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:      <entry name="serial">45f3b13d-65b1-4bbf-8192-7b842f616b4d</entry>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:      <entry name="uuid">45f3b13d-65b1-4bbf-8192-7b842f616b4d</entry>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    </system>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:  <os>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:  </os>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:  <features>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:  </features>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:  </clock>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:  <devices>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 04:57:02 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/45f3b13d-65b1-4bbf-8192-7b842f616b4d_disk">
Oct 14 04:57:02 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:57:02 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 04:57:02 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/45f3b13d-65b1-4bbf-8192-7b842f616b4d_disk.config">
Oct 14 04:57:02 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:57:02 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 04:57:02 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/45f3b13d-65b1-4bbf-8192-7b842f616b4d/console.log" append="off"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    </serial>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <video>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    </video>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 04:57:02 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    </rng>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 04:57:02 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 04:57:02 np0005486808 nova_compute[259627]:  </devices>
Oct 14 04:57:02 np0005486808 nova_compute[259627]: </domain>
Oct 14 04:57:02 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 04:57:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:02.538 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c4878e76-bdc1-4869-8e62-50530a1de648]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:02.544 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3043d0f5-9e28-4f13-ae50-120ea05acb7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:02.576 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[55040020-5bef-45e6-b7b4-a336d39270a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:02 np0005486808 nova_compute[259627]: 2025-10-14 08:57:02.582 2 DEBUG nova.virt.libvirt.driver [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:57:02 np0005486808 nova_compute[259627]: 2025-10-14 08:57:02.583 2 DEBUG nova.virt.libvirt.driver [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:57:02 np0005486808 nova_compute[259627]: 2025-10-14 08:57:02.584 2 INFO nova.virt.libvirt.driver [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Using config drive#033[00m
Oct 14 04:57:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:02.593 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9acbfbbb-c52a-4b11-afae-1ea0e600d09f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58ff48d6-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:75:28:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597136, 'reachable_time': 43498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286078, 'error': None, 'target': 'ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:02.608 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b0a3ee0d-e530-4903-97f0-9b8608ef96ad]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap58ff48d6-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 597151, 'tstamp': 597151}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286093, 'error': None, 'target': 'ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap58ff48d6-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 597154, 'tstamp': 597154}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286093, 'error': None, 'target': 'ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:02.610 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58ff48d6-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:57:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:02.613 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58ff48d6-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:57:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:02.613 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:57:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:02.613 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58ff48d6-a0, col_values=(('external_ids', {'iface-id': '1eaf3c85-b7b7-4dd7-ad1f-33385c25330b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:57:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:02.614 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:57:02 np0005486808 nova_compute[259627]: 2025-10-14 08:57:02.616 2 DEBUG nova.storage.rbd_utils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] rbd image 45f3b13d-65b1-4bbf-8192-7b842f616b4d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:57:02 np0005486808 nova_compute[259627]: 2025-10-14 08:57:02.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:57:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:57:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:57:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:57:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:57:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:57:02 np0005486808 ovn_controller[152662]: 2025-10-14T08:57:02Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:dd:99:88 10.100.0.13
Oct 14 04:57:02 np0005486808 ovn_controller[152662]: 2025-10-14T08:57:02Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:dd:99:88 10.100.0.13
Oct 14 04:57:02 np0005486808 nova_compute[259627]: 2025-10-14 08:57:02.976 2 INFO nova.virt.libvirt.driver [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Creating config drive at /var/lib/nova/instances/45f3b13d-65b1-4bbf-8192-7b842f616b4d/disk.config#033[00m
Oct 14 04:57:02 np0005486808 nova_compute[259627]: 2025-10-14 08:57:02.981 2 DEBUG oslo_concurrency.processutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/45f3b13d-65b1-4bbf-8192-7b842f616b4d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc_c09z7_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:57:03 np0005486808 nova_compute[259627]: 2025-10-14 08:57:03.114 2 DEBUG oslo_concurrency.processutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/45f3b13d-65b1-4bbf-8192-7b842f616b4d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc_c09z7_" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:57:03 np0005486808 nova_compute[259627]: 2025-10-14 08:57:03.141 2 DEBUG nova.storage.rbd_utils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] rbd image 45f3b13d-65b1-4bbf-8192-7b842f616b4d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:57:03 np0005486808 nova_compute[259627]: 2025-10-14 08:57:03.145 2 DEBUG oslo_concurrency.processutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/45f3b13d-65b1-4bbf-8192-7b842f616b4d/disk.config 45f3b13d-65b1-4bbf-8192-7b842f616b4d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:57:03 np0005486808 nova_compute[259627]: 2025-10-14 08:57:03.293 2 DEBUG oslo_concurrency.processutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/45f3b13d-65b1-4bbf-8192-7b842f616b4d/disk.config 45f3b13d-65b1-4bbf-8192-7b842f616b4d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:57:03 np0005486808 nova_compute[259627]: 2025-10-14 08:57:03.294 2 INFO nova.virt.libvirt.driver [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Deleting local config drive /var/lib/nova/instances/45f3b13d-65b1-4bbf-8192-7b842f616b4d/disk.config because it was imported into RBD.#033[00m
Oct 14 04:57:03 np0005486808 systemd-machined[214636]: New machine qemu-18-instance-00000012.
Oct 14 04:57:03 np0005486808 systemd[1]: Started Virtual Machine qemu-18-instance-00000012.
Oct 14 04:57:03 np0005486808 nova_compute[259627]: 2025-10-14 08:57:03.430 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432223.4296439, 01db05f6-07fb-41b5-8aaf-27ad5712fcda => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:57:03 np0005486808 nova_compute[259627]: 2025-10-14 08:57:03.431 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] VM Started (Lifecycle Event)#033[00m
Oct 14 04:57:03 np0005486808 nova_compute[259627]: 2025-10-14 08:57:03.452 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:57:03 np0005486808 nova_compute[259627]: 2025-10-14 08:57:03.457 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432223.4304578, 01db05f6-07fb-41b5-8aaf-27ad5712fcda => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:57:03 np0005486808 nova_compute[259627]: 2025-10-14 08:57:03.457 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] VM Paused (Lifecycle Event)#033[00m
Oct 14 04:57:03 np0005486808 nova_compute[259627]: 2025-10-14 08:57:03.485 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:57:03 np0005486808 nova_compute[259627]: 2025-10-14 08:57:03.489 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:57:03 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1154: 305 pgs: 305 active+clean; 260 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 4.3 MiB/s wr, 311 op/s
Oct 14 04:57:03 np0005486808 nova_compute[259627]: 2025-10-14 08:57:03.513 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:57:03 np0005486808 nova_compute[259627]: 2025-10-14 08:57:03.608 2 DEBUG nova.compute.manager [req-07f0bdf3-d808-4e28-a67a-f0e212011568 req-f48a8dc3-a0fa-4283-a4a6-926c72c069ea 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Received event network-vif-plugged-499f3731-d66b-4964-b5a8-387adacf5166 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:57:03 np0005486808 nova_compute[259627]: 2025-10-14 08:57:03.609 2 DEBUG oslo_concurrency.lockutils [req-07f0bdf3-d808-4e28-a67a-f0e212011568 req-f48a8dc3-a0fa-4283-a4a6-926c72c069ea 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:57:03 np0005486808 nova_compute[259627]: 2025-10-14 08:57:03.610 2 DEBUG oslo_concurrency.lockutils [req-07f0bdf3-d808-4e28-a67a-f0e212011568 req-f48a8dc3-a0fa-4283-a4a6-926c72c069ea 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:57:03 np0005486808 nova_compute[259627]: 2025-10-14 08:57:03.610 2 DEBUG oslo_concurrency.lockutils [req-07f0bdf3-d808-4e28-a67a-f0e212011568 req-f48a8dc3-a0fa-4283-a4a6-926c72c069ea 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:57:03 np0005486808 nova_compute[259627]: 2025-10-14 08:57:03.610 2 DEBUG nova.compute.manager [req-07f0bdf3-d808-4e28-a67a-f0e212011568 req-f48a8dc3-a0fa-4283-a4a6-926c72c069ea 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Processing event network-vif-plugged-499f3731-d66b-4964-b5a8-387adacf5166 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 04:57:03 np0005486808 nova_compute[259627]: 2025-10-14 08:57:03.611 2 DEBUG nova.compute.manager [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 04:57:03 np0005486808 nova_compute[259627]: 2025-10-14 08:57:03.613 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432223.6132832, 01db05f6-07fb-41b5-8aaf-27ad5712fcda => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 04:57:03 np0005486808 nova_compute[259627]: 2025-10-14 08:57:03.613 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] VM Resumed (Lifecycle Event)
Oct 14 04:57:03 np0005486808 nova_compute[259627]: 2025-10-14 08:57:03.615 2 DEBUG nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 04:57:03 np0005486808 nova_compute[259627]: 2025-10-14 08:57:03.623 2 INFO nova.virt.libvirt.driver [-] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Instance spawned successfully.
Oct 14 04:57:03 np0005486808 nova_compute[259627]: 2025-10-14 08:57:03.623 2 DEBUG nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 04:57:03 np0005486808 nova_compute[259627]: 2025-10-14 08:57:03.637 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 04:57:03 np0005486808 nova_compute[259627]: 2025-10-14 08:57:03.644 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 04:57:03 np0005486808 nova_compute[259627]: 2025-10-14 08:57:03.647 2 DEBUG nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 04:57:03 np0005486808 nova_compute[259627]: 2025-10-14 08:57:03.647 2 DEBUG nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 04:57:03 np0005486808 nova_compute[259627]: 2025-10-14 08:57:03.648 2 DEBUG nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 04:57:03 np0005486808 nova_compute[259627]: 2025-10-14 08:57:03.648 2 DEBUG nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 04:57:03 np0005486808 nova_compute[259627]: 2025-10-14 08:57:03.648 2 DEBUG nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 04:57:03 np0005486808 nova_compute[259627]: 2025-10-14 08:57:03.649 2 DEBUG nova.virt.libvirt.driver [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 04:57:03 np0005486808 nova_compute[259627]: 2025-10-14 08:57:03.676 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 04:57:03 np0005486808 nova_compute[259627]: 2025-10-14 08:57:03.718 2 INFO nova.compute.manager [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Took 6.29 seconds to spawn the instance on the hypervisor.
Oct 14 04:57:03 np0005486808 nova_compute[259627]: 2025-10-14 08:57:03.719 2 DEBUG nova.compute.manager [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 04:57:03 np0005486808 nova_compute[259627]: 2025-10-14 08:57:03.782 2 INFO nova.compute.manager [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Took 7.34 seconds to build instance.
Oct 14 04:57:03 np0005486808 nova_compute[259627]: 2025-10-14 08:57:03.801 2 DEBUG oslo_concurrency.lockutils [None req-c708e834-4e53-4ce6-ac9c-2d90ae21ace8 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.428s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:57:03 np0005486808 nova_compute[259627]: 2025-10-14 08:57:03.916 2 DEBUG oslo_concurrency.lockutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:57:03 np0005486808 nova_compute[259627]: 2025-10-14 08:57:03.916 2 DEBUG oslo_concurrency.lockutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:57:03 np0005486808 nova_compute[259627]: 2025-10-14 08:57:03.933 2 DEBUG nova.compute.manager [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 04:57:03 np0005486808 nova_compute[259627]: 2025-10-14 08:57:03.997 2 DEBUG oslo_concurrency.lockutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:57:03 np0005486808 nova_compute[259627]: 2025-10-14 08:57:03.998 2 DEBUG oslo_concurrency.lockutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.004 2 DEBUG nova.virt.hardware [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.004 2 INFO nova.compute.claims [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Claim successful on node compute-0.ctlplane.example.com
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.192 2 DEBUG oslo_concurrency.processutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.274 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432224.273628, 45f3b13d-65b1-4bbf-8192-7b842f616b4d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.275 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] VM Resumed (Lifecycle Event)
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.285 2 DEBUG nova.compute.manager [req-51d95bcc-c442-470d-ba7a-4c87bb03320e req-9c9c5ae0-d450-46dc-994b-dba31e3c4507 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Received event network-changed-3af4eb0a-c48b-4857-8399-453429b6af53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.285 2 DEBUG nova.compute.manager [req-51d95bcc-c442-470d-ba7a-4c87bb03320e req-9c9c5ae0-d450-46dc-994b-dba31e3c4507 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Refreshing instance network info cache due to event network-changed-3af4eb0a-c48b-4857-8399-453429b6af53. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.286 2 DEBUG oslo_concurrency.lockutils [req-51d95bcc-c442-470d-ba7a-4c87bb03320e req-9c9c5ae0-d450-46dc-994b-dba31e3c4507 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-251ae181-b980-4338-a6b5-eee48450b510" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.287 2 DEBUG oslo_concurrency.lockutils [req-51d95bcc-c442-470d-ba7a-4c87bb03320e req-9c9c5ae0-d450-46dc-994b-dba31e3c4507 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-251ae181-b980-4338-a6b5-eee48450b510" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.288 2 DEBUG nova.network.neutron [req-51d95bcc-c442-470d-ba7a-4c87bb03320e req-9c9c5ae0-d450-46dc-994b-dba31e3c4507 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Refreshing network info cache for port 3af4eb0a-c48b-4857-8399-453429b6af53 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.292 2 DEBUG nova.compute.manager [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.294 2 DEBUG nova.virt.libvirt.driver [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.298 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.304 2 INFO nova.virt.libvirt.driver [-] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Instance spawned successfully.
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.305 2 DEBUG nova.virt.libvirt.driver [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.325 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.333 2 DEBUG nova.virt.libvirt.driver [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.333 2 DEBUG nova.virt.libvirt.driver [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.334 2 DEBUG nova.virt.libvirt.driver [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.336 2 DEBUG nova.virt.libvirt.driver [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.337 2 DEBUG nova.virt.libvirt.driver [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.337 2 DEBUG nova.virt.libvirt.driver [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.350 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.350 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432224.2754664, 45f3b13d-65b1-4bbf-8192-7b842f616b4d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.351 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] VM Started (Lifecycle Event)
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.379 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.388 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.413 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.421 2 INFO nova.compute.manager [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Took 3.47 seconds to spawn the instance on the hypervisor.
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.421 2 DEBUG nova.compute.manager [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.517 2 INFO nova.compute.manager [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Took 4.59 seconds to build instance.
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.535 2 DEBUG oslo_concurrency.lockutils [None req-3e83c9f5-01f3-4875-bac9-f1f087a51028 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "45f3b13d-65b1-4bbf-8192-7b842f616b4d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:57:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:57:04 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4236549572' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.653 2 DEBUG oslo_concurrency.processutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.660 2 DEBUG nova.compute.provider_tree [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.680 2 DEBUG nova.scheduler.client.report [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.700 2 DEBUG oslo_concurrency.lockutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.700 2 DEBUG nova.compute.manager [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.746 2 DEBUG nova.compute.manager [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.747 2 DEBUG nova.network.neutron [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.768 2 INFO nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.791 2 DEBUG nova.compute.manager [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.914 2 DEBUG nova.compute.manager [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.916 2 DEBUG nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.917 2 INFO nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Creating image(s)
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.950 2 DEBUG nova.storage.rbd_utils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] rbd image cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:57:04 np0005486808 nova_compute[259627]: 2025-10-14 08:57:04.981 2 DEBUG nova.storage.rbd_utils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] rbd image cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:57:05 np0005486808 nova_compute[259627]: 2025-10-14 08:57:05.007 2 DEBUG nova.storage.rbd_utils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] rbd image cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:57:05 np0005486808 nova_compute[259627]: 2025-10-14 08:57:05.011 2 DEBUG oslo_concurrency.processutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:57:05 np0005486808 nova_compute[259627]: 2025-10-14 08:57:05.044 2 DEBUG nova.policy [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'aafd6ad40c944c3eb14e7fbf454040c3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f24bbeb2f91141e294590ca2afc5ed42', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 04:57:05 np0005486808 nova_compute[259627]: 2025-10-14 08:57:05.077 2 DEBUG nova.objects.instance [None req-662adbaf-2a63-42c6-9fcc-fa225606b41c 82ddf91380754dd8ae02917eaf89cc5e 4e1512a994154263bd259548462f8c31 - - default default] Lazy-loading 'pci_devices' on Instance uuid 45f3b13d-65b1-4bbf-8192-7b842f616b4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 04:57:05 np0005486808 nova_compute[259627]: 2025-10-14 08:57:05.101 2 DEBUG oslo_concurrency.processutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:57:05 np0005486808 nova_compute[259627]: 2025-10-14 08:57:05.101 2 DEBUG oslo_concurrency.lockutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:57:05 np0005486808 nova_compute[259627]: 2025-10-14 08:57:05.102 2 DEBUG oslo_concurrency.lockutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:57:05 np0005486808 nova_compute[259627]: 2025-10-14 08:57:05.102 2 DEBUG oslo_concurrency.lockutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:57:05 np0005486808 nova_compute[259627]: 2025-10-14 08:57:05.135 2 DEBUG nova.storage.rbd_utils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] rbd image cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:57:05 np0005486808 nova_compute[259627]: 2025-10-14 08:57:05.138 2 DEBUG oslo_concurrency.processutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:57:05 np0005486808 nova_compute[259627]: 2025-10-14 08:57:05.158 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432225.1069105, 45f3b13d-65b1-4bbf-8192-7b842f616b4d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 04:57:05 np0005486808 nova_compute[259627]: 2025-10-14 08:57:05.159 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] VM Paused (Lifecycle Event)#033[00m
Oct 14 04:57:05 np0005486808 nova_compute[259627]: 2025-10-14 08:57:05.181 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:57:05 np0005486808 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000012.scope: Deactivated successfully.
Oct 14 04:57:05 np0005486808 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000012.scope: Consumed 1.629s CPU time.
Oct 14 04:57:05 np0005486808 systemd-machined[214636]: Machine qemu-18-instance-00000012 terminated.
Oct 14 04:57:05 np0005486808 nova_compute[259627]: 2025-10-14 08:57:05.386 2 DEBUG nova.compute.manager [None req-662adbaf-2a63-42c6-9fcc-fa225606b41c 82ddf91380754dd8ae02917eaf89cc5e 4e1512a994154263bd259548462f8c31 - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:57:05 np0005486808 nova_compute[259627]: 2025-10-14 08:57:05.393 2 DEBUG oslo_concurrency.processutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.256s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:57:05 np0005486808 nova_compute[259627]: 2025-10-14 08:57:05.446 2 DEBUG nova.network.neutron [req-51d95bcc-c442-470d-ba7a-4c87bb03320e req-9c9c5ae0-d450-46dc-994b-dba31e3c4507 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Updated VIF entry in instance network info cache for port 3af4eb0a-c48b-4857-8399-453429b6af53. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 04:57:05 np0005486808 nova_compute[259627]: 2025-10-14 08:57:05.447 2 DEBUG nova.network.neutron [req-51d95bcc-c442-470d-ba7a-4c87bb03320e req-9c9c5ae0-d450-46dc-994b-dba31e3c4507 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Updating instance_info_cache with network_info: [{"id": "3af4eb0a-c48b-4857-8399-453429b6af53", "address": "fa:16:3e:dd:99:88", "network": {"id": "fb9605f8-2a2c-40d4-892f-fb75a29c07c3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2089994571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dffd1ba9c7eb426eba02b7fa1cb571e2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af4eb0a-c4", "ovs_interfaceid": "3af4eb0a-c48b-4857-8399-453429b6af53", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:57:05 np0005486808 nova_compute[259627]: 2025-10-14 08:57:05.456 2 DEBUG nova.storage.rbd_utils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] resizing rbd image cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 04:57:05 np0005486808 nova_compute[259627]: 2025-10-14 08:57:05.486 2 DEBUG oslo_concurrency.lockutils [req-51d95bcc-c442-470d-ba7a-4c87bb03320e req-9c9c5ae0-d450-46dc-994b-dba31e3c4507 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-251ae181-b980-4338-a6b5-eee48450b510" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:57:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 04:57:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3650321656' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 04:57:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 04:57:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3650321656' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 04:57:05 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1155: 305 pgs: 305 active+clean; 365 MiB data, 412 MiB used, 60 GiB / 60 GiB avail; 5.1 MiB/s rd, 8.2 MiB/s wr, 363 op/s
Oct 14 04:57:05 np0005486808 nova_compute[259627]: 2025-10-14 08:57:05.547 2 DEBUG nova.objects.instance [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lazy-loading 'migration_context' on Instance uuid cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:57:05 np0005486808 nova_compute[259627]: 2025-10-14 08:57:05.560 2 DEBUG nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 04:57:05 np0005486808 nova_compute[259627]: 2025-10-14 08:57:05.560 2 DEBUG nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Ensure instance console log exists: /var/lib/nova/instances/cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 04:57:05 np0005486808 nova_compute[259627]: 2025-10-14 08:57:05.561 2 DEBUG oslo_concurrency.lockutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:05 np0005486808 nova_compute[259627]: 2025-10-14 08:57:05.561 2 DEBUG oslo_concurrency.lockutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:05 np0005486808 nova_compute[259627]: 2025-10-14 08:57:05.561 2 DEBUG oslo_concurrency.lockutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:06 np0005486808 nova_compute[259627]: 2025-10-14 08:57:06.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:06 np0005486808 nova_compute[259627]: 2025-10-14 08:57:06.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:07.014 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:07.015 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:07.015 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:07 np0005486808 nova_compute[259627]: 2025-10-14 08:57:07.220 2 DEBUG nova.network.neutron [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Successfully created port: d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 04:57:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:57:07 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 305 active+clean; 365 MiB data, 412 MiB used, 60 GiB / 60 GiB avail; 5.1 MiB/s rd, 8.2 MiB/s wr, 362 op/s
Oct 14 04:57:07 np0005486808 nova_compute[259627]: 2025-10-14 08:57:07.643 2 DEBUG nova.compute.manager [req-c9cef85e-d1e7-48f4-864c-7569fa12fc76 req-9dcb2720-06a4-40e6-8194-7561d81848c5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Received event network-vif-plugged-499f3731-d66b-4964-b5a8-387adacf5166 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:57:07 np0005486808 nova_compute[259627]: 2025-10-14 08:57:07.643 2 DEBUG oslo_concurrency.lockutils [req-c9cef85e-d1e7-48f4-864c-7569fa12fc76 req-9dcb2720-06a4-40e6-8194-7561d81848c5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:07 np0005486808 nova_compute[259627]: 2025-10-14 08:57:07.644 2 DEBUG oslo_concurrency.lockutils [req-c9cef85e-d1e7-48f4-864c-7569fa12fc76 req-9dcb2720-06a4-40e6-8194-7561d81848c5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:07 np0005486808 nova_compute[259627]: 2025-10-14 08:57:07.644 2 DEBUG oslo_concurrency.lockutils [req-c9cef85e-d1e7-48f4-864c-7569fa12fc76 req-9dcb2720-06a4-40e6-8194-7561d81848c5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:07 np0005486808 nova_compute[259627]: 2025-10-14 08:57:07.644 2 DEBUG nova.compute.manager [req-c9cef85e-d1e7-48f4-864c-7569fa12fc76 req-9dcb2720-06a4-40e6-8194-7561d81848c5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] No waiting events found dispatching network-vif-plugged-499f3731-d66b-4964-b5a8-387adacf5166 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:57:07 np0005486808 nova_compute[259627]: 2025-10-14 08:57:07.645 2 WARNING nova.compute.manager [req-c9cef85e-d1e7-48f4-864c-7569fa12fc76 req-9dcb2720-06a4-40e6-8194-7561d81848c5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Received unexpected event network-vif-plugged-499f3731-d66b-4964-b5a8-387adacf5166 for instance with vm_state active and task_state None.#033[00m
Oct 14 04:57:07 np0005486808 nova_compute[259627]: 2025-10-14 08:57:07.716 2 DEBUG nova.compute.manager [req-c97ad0e2-a6ce-4d0a-b303-8fa95d013367 req-2f80a452-0f07-4875-83ac-67d8cd200687 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Received event network-changed-499f3731-d66b-4964-b5a8-387adacf5166 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:57:07 np0005486808 nova_compute[259627]: 2025-10-14 08:57:07.717 2 DEBUG nova.compute.manager [req-c97ad0e2-a6ce-4d0a-b303-8fa95d013367 req-2f80a452-0f07-4875-83ac-67d8cd200687 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Refreshing instance network info cache due to event network-changed-499f3731-d66b-4964-b5a8-387adacf5166. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 04:57:07 np0005486808 nova_compute[259627]: 2025-10-14 08:57:07.717 2 DEBUG oslo_concurrency.lockutils [req-c97ad0e2-a6ce-4d0a-b303-8fa95d013367 req-2f80a452-0f07-4875-83ac-67d8cd200687 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-01db05f6-07fb-41b5-8aaf-27ad5712fcda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:57:07 np0005486808 nova_compute[259627]: 2025-10-14 08:57:07.717 2 DEBUG oslo_concurrency.lockutils [req-c97ad0e2-a6ce-4d0a-b303-8fa95d013367 req-2f80a452-0f07-4875-83ac-67d8cd200687 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-01db05f6-07fb-41b5-8aaf-27ad5712fcda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:57:07 np0005486808 nova_compute[259627]: 2025-10-14 08:57:07.717 2 DEBUG nova.network.neutron [req-c97ad0e2-a6ce-4d0a-b303-8fa95d013367 req-2f80a452-0f07-4875-83ac-67d8cd200687 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Refreshing network info cache for port 499f3731-d66b-4964-b5a8-387adacf5166 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 04:57:07 np0005486808 nova_compute[259627]: 2025-10-14 08:57:07.994 2 DEBUG nova.network.neutron [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Successfully updated port: d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 04:57:08 np0005486808 nova_compute[259627]: 2025-10-14 08:57:08.008 2 DEBUG oslo_concurrency.lockutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "refresh_cache-cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:57:08 np0005486808 nova_compute[259627]: 2025-10-14 08:57:08.009 2 DEBUG oslo_concurrency.lockutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquired lock "refresh_cache-cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:57:08 np0005486808 nova_compute[259627]: 2025-10-14 08:57:08.009 2 DEBUG nova.network.neutron [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 04:57:08 np0005486808 nova_compute[259627]: 2025-10-14 08:57:08.140 2 DEBUG nova.network.neutron [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 04:57:08 np0005486808 nova_compute[259627]: 2025-10-14 08:57:08.222 2 DEBUG oslo_concurrency.lockutils [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Acquiring lock "251ae181-b980-4338-a6b5-eee48450b510" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:08 np0005486808 nova_compute[259627]: 2025-10-14 08:57:08.223 2 DEBUG oslo_concurrency.lockutils [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Lock "251ae181-b980-4338-a6b5-eee48450b510" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:08 np0005486808 nova_compute[259627]: 2025-10-14 08:57:08.223 2 DEBUG oslo_concurrency.lockutils [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Acquiring lock "251ae181-b980-4338-a6b5-eee48450b510-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:08 np0005486808 nova_compute[259627]: 2025-10-14 08:57:08.224 2 DEBUG oslo_concurrency.lockutils [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Lock "251ae181-b980-4338-a6b5-eee48450b510-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:08 np0005486808 nova_compute[259627]: 2025-10-14 08:57:08.224 2 DEBUG oslo_concurrency.lockutils [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Lock "251ae181-b980-4338-a6b5-eee48450b510-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:08 np0005486808 nova_compute[259627]: 2025-10-14 08:57:08.225 2 INFO nova.compute.manager [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Terminating instance#033[00m
Oct 14 04:57:08 np0005486808 nova_compute[259627]: 2025-10-14 08:57:08.227 2 DEBUG nova.compute.manager [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 04:57:08 np0005486808 kernel: tap3af4eb0a-c4 (unregistering): left promiscuous mode
Oct 14 04:57:08 np0005486808 NetworkManager[44885]: <info>  [1760432228.2828] device (tap3af4eb0a-c4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 04:57:08 np0005486808 ovn_controller[152662]: 2025-10-14T08:57:08Z|00103|binding|INFO|Releasing lport 3af4eb0a-c48b-4857-8399-453429b6af53 from this chassis (sb_readonly=0)
Oct 14 04:57:08 np0005486808 ovn_controller[152662]: 2025-10-14T08:57:08Z|00104|binding|INFO|Setting lport 3af4eb0a-c48b-4857-8399-453429b6af53 down in Southbound
Oct 14 04:57:08 np0005486808 ovn_controller[152662]: 2025-10-14T08:57:08Z|00105|binding|INFO|Removing iface tap3af4eb0a-c4 ovn-installed in OVS
Oct 14 04:57:08 np0005486808 nova_compute[259627]: 2025-10-14 08:57:08.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:08.305 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:99:88 10.100.0.13'], port_security=['fa:16:3e:dd:99:88 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '251ae181-b980-4338-a6b5-eee48450b510', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fb9605f8-2a2c-40d4-892f-fb75a29c07c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dffd1ba9c7eb426eba02b7fa1cb571e2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1853f749-24a7-4699-9f13-e869ca5b59f7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=076f043b-a4ac-4ba0-9e01-fc8b197a9834, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=3af4eb0a-c48b-4857-8399-453429b6af53) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:57:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:08.307 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 3af4eb0a-c48b-4857-8399-453429b6af53 in datapath fb9605f8-2a2c-40d4-892f-fb75a29c07c3 unbound from our chassis#033[00m
Oct 14 04:57:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:08.308 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fb9605f8-2a2c-40d4-892f-fb75a29c07c3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 04:57:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:08.309 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bf5a3ee3-0f21-440d-8655-89dc389c750e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:08.313 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3 namespace which is not needed anymore#033[00m
Oct 14 04:57:08 np0005486808 nova_compute[259627]: 2025-10-14 08:57:08.327 2 DEBUG oslo_concurrency.lockutils [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquiring lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:08 np0005486808 nova_compute[259627]: 2025-10-14 08:57:08.328 2 DEBUG oslo_concurrency.lockutils [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:08 np0005486808 nova_compute[259627]: 2025-10-14 08:57:08.328 2 INFO nova.compute.manager [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Rebooting instance#033[00m
Oct 14 04:57:08 np0005486808 nova_compute[259627]: 2025-10-14 08:57:08.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:08 np0005486808 nova_compute[259627]: 2025-10-14 08:57:08.345 2 DEBUG oslo_concurrency.lockutils [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquiring lock "refresh_cache-01db05f6-07fb-41b5-8aaf-27ad5712fcda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:57:08 np0005486808 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Oct 14 04:57:08 np0005486808 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000f.scope: Consumed 12.127s CPU time.
Oct 14 04:57:08 np0005486808 systemd-machined[214636]: Machine qemu-15-instance-0000000f terminated.
Oct 14 04:57:08 np0005486808 podman[286433]: 2025-10-14 08:57:08.396440315 +0000 UTC m=+0.080747034 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 04:57:08 np0005486808 podman[286430]: 2025-10-14 08:57:08.43004278 +0000 UTC m=+0.115202900 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 14 04:57:08 np0005486808 neutron-haproxy-ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3[284678]: [NOTICE]   (284688) : haproxy version is 2.8.14-c23fe91
Oct 14 04:57:08 np0005486808 nova_compute[259627]: 2025-10-14 08:57:08.465 2 INFO nova.virt.libvirt.driver [-] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Instance destroyed successfully.#033[00m
Oct 14 04:57:08 np0005486808 nova_compute[259627]: 2025-10-14 08:57:08.466 2 DEBUG nova.objects.instance [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Lazy-loading 'resources' on Instance uuid 251ae181-b980-4338-a6b5-eee48450b510 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:57:08 np0005486808 neutron-haproxy-ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3[284678]: [NOTICE]   (284688) : path to executable is /usr/sbin/haproxy
Oct 14 04:57:08 np0005486808 neutron-haproxy-ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3[284678]: [WARNING]  (284688) : Exiting Master process...
Oct 14 04:57:08 np0005486808 neutron-haproxy-ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3[284678]: [WARNING]  (284688) : Exiting Master process...
Oct 14 04:57:08 np0005486808 neutron-haproxy-ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3[284678]: [ALERT]    (284688) : Current worker (284711) exited with code 143 (Terminated)
Oct 14 04:57:08 np0005486808 neutron-haproxy-ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3[284678]: [WARNING]  (284688) : All workers exited. Exiting... (0)
Oct 14 04:57:08 np0005486808 systemd[1]: libpod-b0119e84e5be956931d7561f4b7824a48ecc8e73e0cf20992fdf8fc7b9a33ae4.scope: Deactivated successfully.
Oct 14 04:57:08 np0005486808 podman[286489]: 2025-10-14 08:57:08.478352096 +0000 UTC m=+0.054661933 container died b0119e84e5be956931d7561f4b7824a48ecc8e73e0cf20992fdf8fc7b9a33ae4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 04:57:08 np0005486808 nova_compute[259627]: 2025-10-14 08:57:08.480 2 DEBUG nova.virt.libvirt.vif [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:56:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-928191118',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-928191118',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-928191118',id=15,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:56:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dffd1ba9c7eb426eba02b7fa1cb571e2',ramdisk_id='',reservation_id='r-wq8uysq1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model=
'virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-39168433',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-39168433-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:56:51Z,user_data=None,user_id='11f9a8052a8349b0a21b3acc32a7f2b1',uuid=251ae181-b980-4338-a6b5-eee48450b510,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3af4eb0a-c48b-4857-8399-453429b6af53", "address": "fa:16:3e:dd:99:88", "network": {"id": "fb9605f8-2a2c-40d4-892f-fb75a29c07c3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2089994571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dffd1ba9c7eb426eba02b7fa1cb571e2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af4eb0a-c4", "ovs_interfaceid": "3af4eb0a-c48b-4857-8399-453429b6af53", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 04:57:08 np0005486808 nova_compute[259627]: 2025-10-14 08:57:08.480 2 DEBUG nova.network.os_vif_util [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Converting VIF {"id": "3af4eb0a-c48b-4857-8399-453429b6af53", "address": "fa:16:3e:dd:99:88", "network": {"id": "fb9605f8-2a2c-40d4-892f-fb75a29c07c3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2089994571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dffd1ba9c7eb426eba02b7fa1cb571e2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af4eb0a-c4", "ovs_interfaceid": "3af4eb0a-c48b-4857-8399-453429b6af53", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:57:08 np0005486808 nova_compute[259627]: 2025-10-14 08:57:08.481 2 DEBUG nova.network.os_vif_util [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:dd:99:88,bridge_name='br-int',has_traffic_filtering=True,id=3af4eb0a-c48b-4857-8399-453429b6af53,network=Network(fb9605f8-2a2c-40d4-892f-fb75a29c07c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3af4eb0a-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:57:08 np0005486808 nova_compute[259627]: 2025-10-14 08:57:08.481 2 DEBUG os_vif [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:dd:99:88,bridge_name='br-int',has_traffic_filtering=True,id=3af4eb0a-c48b-4857-8399-453429b6af53,network=Network(fb9605f8-2a2c-40d4-892f-fb75a29c07c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3af4eb0a-c4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 04:57:08 np0005486808 nova_compute[259627]: 2025-10-14 08:57:08.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:08 np0005486808 nova_compute[259627]: 2025-10-14 08:57:08.483 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3af4eb0a-c4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:57:08 np0005486808 nova_compute[259627]: 2025-10-14 08:57:08.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:08 np0005486808 nova_compute[259627]: 2025-10-14 08:57:08.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 04:57:08 np0005486808 nova_compute[259627]: 2025-10-14 08:57:08.488 2 INFO os_vif [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:dd:99:88,bridge_name='br-int',has_traffic_filtering=True,id=3af4eb0a-c48b-4857-8399-453429b6af53,network=Network(fb9605f8-2a2c-40d4-892f-fb75a29c07c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3af4eb0a-c4')#033[00m
Oct 14 04:57:08 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b0119e84e5be956931d7561f4b7824a48ecc8e73e0cf20992fdf8fc7b9a33ae4-userdata-shm.mount: Deactivated successfully.
Oct 14 04:57:08 np0005486808 systemd[1]: var-lib-containers-storage-overlay-c3c99ed231e5c1fd99f5e9dee19ad83031abe3a98e852118ad1216c381a913f6-merged.mount: Deactivated successfully.
Oct 14 04:57:08 np0005486808 podman[286489]: 2025-10-14 08:57:08.515511139 +0000 UTC m=+0.091820976 container cleanup b0119e84e5be956931d7561f4b7824a48ecc8e73e0cf20992fdf8fc7b9a33ae4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 14 04:57:08 np0005486808 systemd[1]: libpod-conmon-b0119e84e5be956931d7561f4b7824a48ecc8e73e0cf20992fdf8fc7b9a33ae4.scope: Deactivated successfully.
Oct 14 04:57:08 np0005486808 podman[286550]: 2025-10-14 08:57:08.593921874 +0000 UTC m=+0.057014111 container remove b0119e84e5be956931d7561f4b7824a48ecc8e73e0cf20992fdf8fc7b9a33ae4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 04:57:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:08.604 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b78b5f74-3109-4100-9920-042da629f573]: (4, ('Tue Oct 14 08:57:08 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3 (b0119e84e5be956931d7561f4b7824a48ecc8e73e0cf20992fdf8fc7b9a33ae4)\nb0119e84e5be956931d7561f4b7824a48ecc8e73e0cf20992fdf8fc7b9a33ae4\nTue Oct 14 08:57:08 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3 (b0119e84e5be956931d7561f4b7824a48ecc8e73e0cf20992fdf8fc7b9a33ae4)\nb0119e84e5be956931d7561f4b7824a48ecc8e73e0cf20992fdf8fc7b9a33ae4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:08.607 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7adddc7e-9550-4d32-b437-6ae047e5a95f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:08.608 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfb9605f8-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:57:08 np0005486808 kernel: tapfb9605f8-20: left promiscuous mode
Oct 14 04:57:08 np0005486808 nova_compute[259627]: 2025-10-14 08:57:08.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:08.618 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[936ffb49-86ee-4237-9393-3029573def59]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:08 np0005486808 nova_compute[259627]: 2025-10-14 08:57:08.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:08.643 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e206d79f-c1e1-4717-b758-01e177dca692]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:08.645 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ccd72b99-b20f-43f8-9cad-1a730ba8362b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:08.665 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c69e5352-e5cb-42ca-8646-bb36e38a26ba]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 598906, 'reachable_time': 40193, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286562, 'error': None, 'target': 'ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:08 np0005486808 systemd[1]: run-netns-ovnmeta\x2dfb9605f8\x2d2a2c\x2d40d4\x2d892f\x2dfb75a29c07c3.mount: Deactivated successfully.
Oct 14 04:57:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:08.671 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fb9605f8-2a2c-40d4-892f-fb75a29c07c3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 04:57:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:08.671 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[1e520074-281c-4b0e-9191-0ccd1a828cef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:08 np0005486808 nova_compute[259627]: 2025-10-14 08:57:08.898 2 DEBUG nova.network.neutron [req-c97ad0e2-a6ce-4d0a-b303-8fa95d013367 req-2f80a452-0f07-4875-83ac-67d8cd200687 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Updated VIF entry in instance network info cache for port 499f3731-d66b-4964-b5a8-387adacf5166. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 04:57:08 np0005486808 nova_compute[259627]: 2025-10-14 08:57:08.899 2 DEBUG nova.network.neutron [req-c97ad0e2-a6ce-4d0a-b303-8fa95d013367 req-2f80a452-0f07-4875-83ac-67d8cd200687 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Updating instance_info_cache with network_info: [{"id": "499f3731-d66b-4964-b5a8-387adacf5166", "address": "fa:16:3e:58:ee:87", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499f3731-d6", "ovs_interfaceid": "499f3731-d66b-4964-b5a8-387adacf5166", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:57:08 np0005486808 nova_compute[259627]: 2025-10-14 08:57:08.908 2 INFO nova.virt.libvirt.driver [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Deleting instance files /var/lib/nova/instances/251ae181-b980-4338-a6b5-eee48450b510_del#033[00m
Oct 14 04:57:08 np0005486808 nova_compute[259627]: 2025-10-14 08:57:08.909 2 INFO nova.virt.libvirt.driver [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Deletion of /var/lib/nova/instances/251ae181-b980-4338-a6b5-eee48450b510_del complete#033[00m
Oct 14 04:57:08 np0005486808 nova_compute[259627]: 2025-10-14 08:57:08.928 2 DEBUG oslo_concurrency.lockutils [req-c97ad0e2-a6ce-4d0a-b303-8fa95d013367 req-2f80a452-0f07-4875-83ac-67d8cd200687 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-01db05f6-07fb-41b5-8aaf-27ad5712fcda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:57:08 np0005486808 nova_compute[259627]: 2025-10-14 08:57:08.929 2 DEBUG oslo_concurrency.lockutils [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquired lock "refresh_cache-01db05f6-07fb-41b5-8aaf-27ad5712fcda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:57:08 np0005486808 nova_compute[259627]: 2025-10-14 08:57:08.929 2 DEBUG nova.network.neutron [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 04:57:08 np0005486808 nova_compute[259627]: 2025-10-14 08:57:08.968 2 INFO nova.compute.manager [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Took 0.74 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 04:57:08 np0005486808 nova_compute[259627]: 2025-10-14 08:57:08.969 2 DEBUG oslo.service.loopingcall [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 04:57:08 np0005486808 nova_compute[259627]: 2025-10-14 08:57:08.969 2 DEBUG nova.compute.manager [-] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 04:57:08 np0005486808 nova_compute[259627]: 2025-10-14 08:57:08.969 2 DEBUG nova.network.neutron [-] [instance: 251ae181-b980-4338-a6b5-eee48450b510] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 04:57:09 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1157: 305 pgs: 305 active+clean; 365 MiB data, 412 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 6.8 MiB/s wr, 302 op/s
Oct 14 04:57:09 np0005486808 nova_compute[259627]: 2025-10-14 08:57:09.731 2 DEBUG nova.network.neutron [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Updating instance_info_cache with network_info: [{"id": "d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb", "address": "fa:16:3e:80:e1:2a", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd87ee1ff-7b", "ovs_interfaceid": "d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:57:09 np0005486808 nova_compute[259627]: 2025-10-14 08:57:09.772 2 DEBUG oslo_concurrency.lockutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Releasing lock "refresh_cache-cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:57:09 np0005486808 nova_compute[259627]: 2025-10-14 08:57:09.772 2 DEBUG nova.compute.manager [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Instance network_info: |[{"id": "d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb", "address": "fa:16:3e:80:e1:2a", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd87ee1ff-7b", "ovs_interfaceid": "d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 04:57:09 np0005486808 nova_compute[259627]: 2025-10-14 08:57:09.777 2 DEBUG nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Start _get_guest_xml network_info=[{"id": "d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb", "address": "fa:16:3e:80:e1:2a", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd87ee1ff-7b", "ovs_interfaceid": "d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 04:57:09 np0005486808 nova_compute[259627]: 2025-10-14 08:57:09.783 2 DEBUG nova.compute.manager [req-4e3d8b0d-2f77-46cd-b3a9-201bb3855923 req-4e5fbf9f-603a-44ec-8319-5968b69edb1c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Received event network-changed-d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:57:09 np0005486808 nova_compute[259627]: 2025-10-14 08:57:09.784 2 DEBUG nova.compute.manager [req-4e3d8b0d-2f77-46cd-b3a9-201bb3855923 req-4e5fbf9f-603a-44ec-8319-5968b69edb1c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Refreshing instance network info cache due to event network-changed-d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 04:57:09 np0005486808 nova_compute[259627]: 2025-10-14 08:57:09.784 2 DEBUG oslo_concurrency.lockutils [req-4e3d8b0d-2f77-46cd-b3a9-201bb3855923 req-4e5fbf9f-603a-44ec-8319-5968b69edb1c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:57:09 np0005486808 nova_compute[259627]: 2025-10-14 08:57:09.785 2 DEBUG oslo_concurrency.lockutils [req-4e3d8b0d-2f77-46cd-b3a9-201bb3855923 req-4e5fbf9f-603a-44ec-8319-5968b69edb1c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:57:09 np0005486808 nova_compute[259627]: 2025-10-14 08:57:09.785 2 DEBUG nova.network.neutron [req-4e3d8b0d-2f77-46cd-b3a9-201bb3855923 req-4e5fbf9f-603a-44ec-8319-5968b69edb1c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Refreshing network info cache for port d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 04:57:09 np0005486808 nova_compute[259627]: 2025-10-14 08:57:09.793 2 WARNING nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:57:09 np0005486808 nova_compute[259627]: 2025-10-14 08:57:09.806 2 DEBUG nova.virt.libvirt.host [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 04:57:09 np0005486808 nova_compute[259627]: 2025-10-14 08:57:09.807 2 DEBUG nova.virt.libvirt.host [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 04:57:09 np0005486808 nova_compute[259627]: 2025-10-14 08:57:09.812 2 DEBUG nova.virt.libvirt.host [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 04:57:09 np0005486808 nova_compute[259627]: 2025-10-14 08:57:09.813 2 DEBUG nova.virt.libvirt.host [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 04:57:09 np0005486808 nova_compute[259627]: 2025-10-14 08:57:09.814 2 DEBUG nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 04:57:09 np0005486808 nova_compute[259627]: 2025-10-14 08:57:09.814 2 DEBUG nova.virt.hardware [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 04:57:09 np0005486808 nova_compute[259627]: 2025-10-14 08:57:09.815 2 DEBUG nova.virt.hardware [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 04:57:09 np0005486808 nova_compute[259627]: 2025-10-14 08:57:09.816 2 DEBUG nova.virt.hardware [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 04:57:09 np0005486808 nova_compute[259627]: 2025-10-14 08:57:09.816 2 DEBUG nova.virt.hardware [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 04:57:09 np0005486808 nova_compute[259627]: 2025-10-14 08:57:09.817 2 DEBUG nova.virt.hardware [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 04:57:09 np0005486808 nova_compute[259627]: 2025-10-14 08:57:09.817 2 DEBUG nova.virt.hardware [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 04:57:09 np0005486808 nova_compute[259627]: 2025-10-14 08:57:09.818 2 DEBUG nova.virt.hardware [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 04:57:09 np0005486808 nova_compute[259627]: 2025-10-14 08:57:09.819 2 DEBUG nova.virt.hardware [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 04:57:09 np0005486808 nova_compute[259627]: 2025-10-14 08:57:09.819 2 DEBUG nova.virt.hardware [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 04:57:09 np0005486808 nova_compute[259627]: 2025-10-14 08:57:09.820 2 DEBUG nova.virt.hardware [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 04:57:09 np0005486808 nova_compute[259627]: 2025-10-14 08:57:09.820 2 DEBUG nova.virt.hardware [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 04:57:09 np0005486808 nova_compute[259627]: 2025-10-14 08:57:09.825 2 DEBUG oslo_concurrency.processutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:57:09 np0005486808 nova_compute[259627]: 2025-10-14 08:57:09.937 2 DEBUG oslo_concurrency.lockutils [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Acquiring lock "45f3b13d-65b1-4bbf-8192-7b842f616b4d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:09 np0005486808 nova_compute[259627]: 2025-10-14 08:57:09.938 2 DEBUG oslo_concurrency.lockutils [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "45f3b13d-65b1-4bbf-8192-7b842f616b4d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:09 np0005486808 nova_compute[259627]: 2025-10-14 08:57:09.938 2 DEBUG oslo_concurrency.lockutils [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Acquiring lock "45f3b13d-65b1-4bbf-8192-7b842f616b4d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:09 np0005486808 nova_compute[259627]: 2025-10-14 08:57:09.938 2 DEBUG oslo_concurrency.lockutils [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "45f3b13d-65b1-4bbf-8192-7b842f616b4d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:09 np0005486808 nova_compute[259627]: 2025-10-14 08:57:09.939 2 DEBUG oslo_concurrency.lockutils [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "45f3b13d-65b1-4bbf-8192-7b842f616b4d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:09 np0005486808 nova_compute[259627]: 2025-10-14 08:57:09.941 2 INFO nova.compute.manager [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Terminating instance#033[00m
Oct 14 04:57:09 np0005486808 nova_compute[259627]: 2025-10-14 08:57:09.942 2 DEBUG oslo_concurrency.lockutils [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Acquiring lock "refresh_cache-45f3b13d-65b1-4bbf-8192-7b842f616b4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:57:09 np0005486808 nova_compute[259627]: 2025-10-14 08:57:09.942 2 DEBUG oslo_concurrency.lockutils [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Acquired lock "refresh_cache-45f3b13d-65b1-4bbf-8192-7b842f616b4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:57:09 np0005486808 nova_compute[259627]: 2025-10-14 08:57:09.943 2 DEBUG nova.network.neutron [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.133 2 DEBUG nova.network.neutron [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 04:57:10 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:57:10 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1648541971' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.272 2 DEBUG oslo_concurrency.processutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.311 2 DEBUG nova.storage.rbd_utils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] rbd image cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.315 2 DEBUG oslo_concurrency.processutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.344 2 DEBUG nova.network.neutron [-] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.363 2 INFO nova.compute.manager [-] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Took 1.39 seconds to deallocate network for instance.#033[00m
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.419 2 DEBUG oslo_concurrency.lockutils [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.420 2 DEBUG oslo_concurrency.lockutils [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.619 2 DEBUG nova.network.neutron [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.623 2 DEBUG nova.compute.manager [req-ab211404-250b-4031-b512-add73e08cf76 req-d35398cf-21ef-44a3-8db9-898f7e789fa5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Received event network-vif-deleted-3af4eb0a-c48b-4857-8399-453429b6af53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.648 2 DEBUG oslo_concurrency.lockutils [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Releasing lock "refresh_cache-45f3b13d-65b1-4bbf-8192-7b842f616b4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.649 2 DEBUG nova.compute.manager [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.654 2 INFO nova.virt.libvirt.driver [-] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Instance destroyed successfully.#033[00m
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.654 2 DEBUG nova.objects.instance [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lazy-loading 'resources' on Instance uuid 45f3b13d-65b1-4bbf-8192-7b842f616b4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:57:10 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:57:10 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2578576337' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.761 2 DEBUG oslo_concurrency.processutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.762 2 DEBUG nova.virt.libvirt.vif [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:57:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-476297486',display_name='tempest-ImagesOneServerNegativeTestJSON-server-476297486',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-476297486',id=19,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f24bbeb2f91141e294590ca2afc5ed42',ramdisk_id='',reservation_id='r-7i3jta7g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-531836018',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-531836018-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:57:04Z,user_data=None,user_id='aafd6ad40c944c3eb14e7fbf454040c3',uuid=cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb", "address": "fa:16:3e:80:e1:2a", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd87ee1ff-7b", "ovs_interfaceid": "d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.762 2 DEBUG nova.network.os_vif_util [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Converting VIF {"id": "d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb", "address": "fa:16:3e:80:e1:2a", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd87ee1ff-7b", "ovs_interfaceid": "d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.763 2 DEBUG nova.network.os_vif_util [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:e1:2a,bridge_name='br-int',has_traffic_filtering=True,id=d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb,network=Network(58d74886-d603-4fb5-b8ff-9c184284bdce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd87ee1ff-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.764 2 DEBUG nova.objects.instance [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lazy-loading 'pci_devices' on Instance uuid cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.784 2 DEBUG nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] End _get_guest_xml xml=<domain type="kvm">
Oct 14 04:57:10 np0005486808 nova_compute[259627]:  <uuid>cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a</uuid>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:  <name>instance-00000013</name>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 04:57:10 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-476297486</nova:name>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 08:57:09</nova:creationTime>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 04:57:10 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:        <nova:user uuid="aafd6ad40c944c3eb14e7fbf454040c3">tempest-ImagesOneServerNegativeTestJSON-531836018-project-member</nova:user>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:        <nova:project uuid="f24bbeb2f91141e294590ca2afc5ed42">tempest-ImagesOneServerNegativeTestJSON-531836018</nova:project>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:        <nova:port uuid="d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb">
Oct 14 04:57:10 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <system>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:      <entry name="serial">cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a</entry>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:      <entry name="uuid">cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a</entry>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    </system>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:  <os>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:  </os>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:  <features>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:  </features>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:  </clock>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:  <devices>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 04:57:10 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a_disk">
Oct 14 04:57:10 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:57:10 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 04:57:10 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a_disk.config">
Oct 14 04:57:10 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:57:10 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 04:57:10 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:80:e1:2a"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:      <target dev="tapd87ee1ff-7b"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    </interface>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 04:57:10 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a/console.log" append="off"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    </serial>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <video>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    </video>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 04:57:10 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    </rng>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 04:57:10 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 04:57:10 np0005486808 nova_compute[259627]:  </devices>
Oct 14 04:57:10 np0005486808 nova_compute[259627]: </domain>
Oct 14 04:57:10 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.785 2 DEBUG nova.compute.manager [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Preparing to wait for external event network-vif-plugged-d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.786 2 DEBUG oslo_concurrency.lockutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.786 2 DEBUG oslo_concurrency.lockutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.786 2 DEBUG oslo_concurrency.lockutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.787 2 DEBUG nova.virt.libvirt.vif [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:57:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-476297486',display_name='tempest-ImagesOneServerNegativeTestJSON-server-476297486',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-476297486',id=19,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f24bbeb2f91141e294590ca2afc5ed42',ramdisk_id='',reservation_id='r-7i3jta7g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-531836018',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-531836018-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:57:04Z,user_data=None,user_id='aafd6ad40c944c3eb14e7fbf454040c3',uuid=cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb", "address": "fa:16:3e:80:e1:2a", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd87ee1ff-7b", "ovs_interfaceid": "d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.787 2 DEBUG nova.network.os_vif_util [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Converting VIF {"id": "d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb", "address": "fa:16:3e:80:e1:2a", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd87ee1ff-7b", "ovs_interfaceid": "d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.787 2 DEBUG nova.network.os_vif_util [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:e1:2a,bridge_name='br-int',has_traffic_filtering=True,id=d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb,network=Network(58d74886-d603-4fb5-b8ff-9c184284bdce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd87ee1ff-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.788 2 DEBUG os_vif [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:e1:2a,bridge_name='br-int',has_traffic_filtering=True,id=d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb,network=Network(58d74886-d603-4fb5-b8ff-9c184284bdce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd87ee1ff-7b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.788 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.789 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.791 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd87ee1ff-7b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.791 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd87ee1ff-7b, col_values=(('external_ids', {'iface-id': 'd87ee1ff-7b3b-4020-b6ac-b1aa7b688adb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:80:e1:2a', 'vm-uuid': 'cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:10 np0005486808 NetworkManager[44885]: <info>  [1760432230.7935] manager: (tapd87ee1ff-7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.799 2 INFO os_vif [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:e1:2a,bridge_name='br-int',has_traffic_filtering=True,id=d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb,network=Network(58d74886-d603-4fb5-b8ff-9c184284bdce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd87ee1ff-7b')#033[00m
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.829 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432215.8282287, 5de76ef0-5c03-4b43-a691-c858cecd9e80 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.830 2 INFO nova.compute.manager [-] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] VM Stopped (Lifecycle Event)#033[00m
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.838 2 DEBUG oslo_concurrency.processutils [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.869 2 DEBUG nova.compute.manager [None req-629cb982-2b32-4d29-b01c-1e12217042bb - - - - - -] [instance: 5de76ef0-5c03-4b43-a691-c858cecd9e80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.893 2 DEBUG nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.893 2 DEBUG nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.894 2 DEBUG nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] No VIF found with MAC fa:16:3e:80:e1:2a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.894 2 INFO nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Using config drive#033[00m
Oct 14 04:57:10 np0005486808 nova_compute[259627]: 2025-10-14 08:57:10.913 2 DEBUG nova.storage.rbd_utils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] rbd image cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.037 2 INFO nova.virt.libvirt.driver [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Deleting instance files /var/lib/nova/instances/45f3b13d-65b1-4bbf-8192-7b842f616b4d_del#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.037 2 INFO nova.virt.libvirt.driver [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Deletion of /var/lib/nova/instances/45f3b13d-65b1-4bbf-8192-7b842f616b4d_del complete#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.134 2 INFO nova.compute.manager [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Took 0.48 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.134 2 DEBUG oslo.service.loopingcall [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.135 2 DEBUG nova.compute.manager [-] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.135 2 DEBUG nova.network.neutron [-] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 04:57:11 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:57:11 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3918469538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.297 2 DEBUG oslo_concurrency.processutils [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.305 2 DEBUG nova.compute.provider_tree [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.339 2 DEBUG nova.scheduler.client.report [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.370 2 DEBUG oslo_concurrency.lockutils [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.950s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.427 2 INFO nova.scheduler.client.report [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Deleted allocations for instance 251ae181-b980-4338-a6b5-eee48450b510#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.435 2 DEBUG nova.network.neutron [-] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.453 2 DEBUG nova.network.neutron [-] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.494 2 INFO nova.compute.manager [-] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Took 0.36 seconds to deallocate network for instance.#033[00m
Oct 14 04:57:11 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1158: 305 pgs: 305 active+clean; 339 MiB data, 416 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 9.6 MiB/s wr, 428 op/s
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.521 2 DEBUG nova.network.neutron [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Updating instance_info_cache with network_info: [{"id": "499f3731-d66b-4964-b5a8-387adacf5166", "address": "fa:16:3e:58:ee:87", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499f3731-d6", "ovs_interfaceid": "499f3731-d66b-4964-b5a8-387adacf5166", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.525 2 DEBUG oslo_concurrency.lockutils [None req-91ffe1ef-40a0-485b-8d42-c6fc7951ff14 11f9a8052a8349b0a21b3acc32a7f2b1 dffd1ba9c7eb426eba02b7fa1cb571e2 - - default default] Lock "251ae181-b980-4338-a6b5-eee48450b510" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.302s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.581 2 DEBUG oslo_concurrency.lockutils [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Releasing lock "refresh_cache-01db05f6-07fb-41b5-8aaf-27ad5712fcda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.584 2 DEBUG nova.compute.manager [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.597 2 DEBUG oslo_concurrency.lockutils [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.598 2 DEBUG oslo_concurrency.lockutils [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.740 2 INFO nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Creating config drive at /var/lib/nova/instances/cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a/disk.config#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.747 2 DEBUG oslo_concurrency.processutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmvlrurl_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:57:11 np0005486808 kernel: tap499f3731-d6 (unregistering): left promiscuous mode
Oct 14 04:57:11 np0005486808 NetworkManager[44885]: <info>  [1760432231.7562] device (tap499f3731-d6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 04:57:11 np0005486808 ovn_controller[152662]: 2025-10-14T08:57:11Z|00106|binding|INFO|Releasing lport 499f3731-d66b-4964-b5a8-387adacf5166 from this chassis (sb_readonly=0)
Oct 14 04:57:11 np0005486808 ovn_controller[152662]: 2025-10-14T08:57:11Z|00107|binding|INFO|Setting lport 499f3731-d66b-4964-b5a8-387adacf5166 down in Southbound
Oct 14 04:57:11 np0005486808 ovn_controller[152662]: 2025-10-14T08:57:11Z|00108|binding|INFO|Removing iface tap499f3731-d6 ovn-installed in OVS
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.826 2 DEBUG nova.network.neutron [req-4e3d8b0d-2f77-46cd-b3a9-201bb3855923 req-4e5fbf9f-603a-44ec-8319-5968b69edb1c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Updated VIF entry in instance network info cache for port d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.827 2 DEBUG nova.network.neutron [req-4e3d8b0d-2f77-46cd-b3a9-201bb3855923 req-4e5fbf9f-603a-44ec-8319-5968b69edb1c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Updating instance_info_cache with network_info: [{"id": "d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb", "address": "fa:16:3e:80:e1:2a", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd87ee1ff-7b", "ovs_interfaceid": "d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:57:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:11.827 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:ee:87 10.100.0.5'], port_security=['fa:16:3e:58:ee:87 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '01db05f6-07fb-41b5-8aaf-27ad5712fcda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9bf65c21e4104af6981b071561617657', 'neutron:revision_number': '5', 'neutron:security_group_ids': '62a2565f-d2d5-452b-bee3-932bd15ef802 e50f77b0-42a8-4edd-9a84-05435c5fb458', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b2c96ef8-3846-498d-b4a9-f6fd46fb5d04, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=499f3731-d66b-4964-b5a8-387adacf5166) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:11.828 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 499f3731-d66b-4964-b5a8-387adacf5166 in datapath 58ff48d6-a644-40e6-8fc9-ee19b4354df9 unbound from our chassis#033[00m
Oct 14 04:57:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:11.830 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58ff48d6-a644-40e6-8fc9-ee19b4354df9#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:11.847 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c187b085-1516-43a6-afa3-a71292e511c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.851 2 DEBUG oslo_concurrency.lockutils [req-4e3d8b0d-2f77-46cd-b3a9-201bb3855923 req-4e5fbf9f-603a-44ec-8319-5968b69edb1c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.851 2 DEBUG nova.compute.manager [req-4e3d8b0d-2f77-46cd-b3a9-201bb3855923 req-4e5fbf9f-603a-44ec-8319-5968b69edb1c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Received event network-vif-unplugged-3af4eb0a-c48b-4857-8399-453429b6af53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.851 2 DEBUG oslo_concurrency.lockutils [req-4e3d8b0d-2f77-46cd-b3a9-201bb3855923 req-4e5fbf9f-603a-44ec-8319-5968b69edb1c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "251ae181-b980-4338-a6b5-eee48450b510-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.852 2 DEBUG oslo_concurrency.lockutils [req-4e3d8b0d-2f77-46cd-b3a9-201bb3855923 req-4e5fbf9f-603a-44ec-8319-5968b69edb1c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "251ae181-b980-4338-a6b5-eee48450b510-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.852 2 DEBUG oslo_concurrency.lockutils [req-4e3d8b0d-2f77-46cd-b3a9-201bb3855923 req-4e5fbf9f-603a-44ec-8319-5968b69edb1c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "251ae181-b980-4338-a6b5-eee48450b510-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.852 2 DEBUG nova.compute.manager [req-4e3d8b0d-2f77-46cd-b3a9-201bb3855923 req-4e5fbf9f-603a-44ec-8319-5968b69edb1c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] No waiting events found dispatching network-vif-unplugged-3af4eb0a-c48b-4857-8399-453429b6af53 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.852 2 DEBUG nova.compute.manager [req-4e3d8b0d-2f77-46cd-b3a9-201bb3855923 req-4e5fbf9f-603a-44ec-8319-5968b69edb1c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Received event network-vif-unplugged-3af4eb0a-c48b-4857-8399-453429b6af53 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.852 2 DEBUG nova.compute.manager [req-4e3d8b0d-2f77-46cd-b3a9-201bb3855923 req-4e5fbf9f-603a-44ec-8319-5968b69edb1c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Received event network-vif-plugged-3af4eb0a-c48b-4857-8399-453429b6af53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.852 2 DEBUG oslo_concurrency.lockutils [req-4e3d8b0d-2f77-46cd-b3a9-201bb3855923 req-4e5fbf9f-603a-44ec-8319-5968b69edb1c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "251ae181-b980-4338-a6b5-eee48450b510-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.852 2 DEBUG oslo_concurrency.lockutils [req-4e3d8b0d-2f77-46cd-b3a9-201bb3855923 req-4e5fbf9f-603a-44ec-8319-5968b69edb1c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "251ae181-b980-4338-a6b5-eee48450b510-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.853 2 DEBUG oslo_concurrency.lockutils [req-4e3d8b0d-2f77-46cd-b3a9-201bb3855923 req-4e5fbf9f-603a-44ec-8319-5968b69edb1c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "251ae181-b980-4338-a6b5-eee48450b510-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.853 2 DEBUG nova.compute.manager [req-4e3d8b0d-2f77-46cd-b3a9-201bb3855923 req-4e5fbf9f-603a-44ec-8319-5968b69edb1c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] No waiting events found dispatching network-vif-plugged-3af4eb0a-c48b-4857-8399-453429b6af53 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.853 2 WARNING nova.compute.manager [req-4e3d8b0d-2f77-46cd-b3a9-201bb3855923 req-4e5fbf9f-603a-44ec-8319-5968b69edb1c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Received unexpected event network-vif-plugged-3af4eb0a-c48b-4857-8399-453429b6af53 for instance with vm_state active and task_state deleting.#033[00m
Oct 14 04:57:11 np0005486808 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000011.scope: Deactivated successfully.
Oct 14 04:57:11 np0005486808 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000011.scope: Consumed 8.854s CPU time.
Oct 14 04:57:11 np0005486808 systemd-machined[214636]: Machine qemu-17-instance-00000011 terminated.
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:11.876 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[935cbe16-15d9-4006-baa8-af3ded9a959d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:11.879 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[554ffca7-9da5-4b5d-b8f1-556286595fa4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.881 2 DEBUG oslo_concurrency.processutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmvlrurl_" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.906 2 DEBUG nova.storage.rbd_utils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] rbd image cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:57:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:11.907 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d95869f0-f0d9-4504-a099-e6a0d7f74bee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.916 2 DEBUG oslo_concurrency.processutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a/disk.config cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:57:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:11.922 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2439d2aa-42dd-4ad7-8422-f471a9f6f5f9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58ff48d6-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:75:28:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597136, 'reachable_time': 43498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286721, 'error': None, 'target': 'ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.940 2 DEBUG oslo_concurrency.processutils [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:57:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:11.939 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a495b0b9-8daa-4b90-8c55-a7e0682d97b0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap58ff48d6-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 597151, 'tstamp': 597151}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286725, 'error': None, 'target': 'ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap58ff48d6-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 597154, 'tstamp': 597154}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286725, 'error': None, 'target': 'ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:11.941 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58ff48d6-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:57:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:11.956 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58ff48d6-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:57:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:11.956 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:57:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:11.956 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58ff48d6-a0, col_values=(('external_ids', {'iface-id': '1eaf3c85-b7b7-4dd7-ad1f-33385c25330b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:57:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:11.957 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.970 2 INFO nova.virt.libvirt.driver [-] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Instance destroyed successfully.#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.971 2 DEBUG nova.objects.instance [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lazy-loading 'resources' on Instance uuid 01db05f6-07fb-41b5-8aaf-27ad5712fcda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.992 2 DEBUG nova.virt.libvirt.vif [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:56:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1349611853',display_name='tempest-SecurityGroupsTestJSON-server-1349611853',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1349611853',id=17,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:57:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9bf65c21e4104af6981b071561617657',ramdisk_id='',reservation_id='r-qk8wy1ld',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-663845074',owner_user_name='tempest-SecurityGroupsTestJSON-663845074-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:57:11Z,user_data=None,user_id='f50e2582d63041b682c71a379f763c0e',uuid=01db05f6-07fb-41b5-8aaf-27ad5712fcda,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "499f3731-d66b-4964-b5a8-387adacf5166", "address": "fa:16:3e:58:ee:87", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499f3731-d6", "ovs_interfaceid": "499f3731-d66b-4964-b5a8-387adacf5166", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.992 2 DEBUG nova.network.os_vif_util [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Converting VIF {"id": "499f3731-d66b-4964-b5a8-387adacf5166", "address": "fa:16:3e:58:ee:87", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499f3731-d6", "ovs_interfaceid": "499f3731-d66b-4964-b5a8-387adacf5166", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.994 2 DEBUG nova.network.os_vif_util [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:58:ee:87,bridge_name='br-int',has_traffic_filtering=True,id=499f3731-d66b-4964-b5a8-387adacf5166,network=Network(58ff48d6-a644-40e6-8fc9-ee19b4354df9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap499f3731-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.994 2 DEBUG os_vif [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:58:ee:87,bridge_name='br-int',has_traffic_filtering=True,id=499f3731-d66b-4964-b5a8-387adacf5166,network=Network(58ff48d6-a644-40e6-8fc9-ee19b4354df9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap499f3731-d6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:11 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.997 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap499f3731-d6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:11.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.005 2 INFO os_vif [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:58:ee:87,bridge_name='br-int',has_traffic_filtering=True,id=499f3731-d66b-4964-b5a8-387adacf5166,network=Network(58ff48d6-a644-40e6-8fc9-ee19b4354df9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap499f3731-d6')#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.012 2 DEBUG nova.virt.libvirt.driver [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Start _get_guest_xml network_info=[{"id": "499f3731-d66b-4964-b5a8-387adacf5166", "address": "fa:16:3e:58:ee:87", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499f3731-d6", "ovs_interfaceid": "499f3731-d66b-4964-b5a8-387adacf5166", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.016 2 WARNING nova.virt.libvirt.driver [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.019 2 DEBUG nova.virt.libvirt.host [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.020 2 DEBUG nova.virt.libvirt.host [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.023 2 DEBUG nova.virt.libvirt.host [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.023 2 DEBUG nova.virt.libvirt.host [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.024 2 DEBUG nova.virt.libvirt.driver [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.024 2 DEBUG nova.virt.hardware [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.025 2 DEBUG nova.virt.hardware [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.025 2 DEBUG nova.virt.hardware [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.025 2 DEBUG nova.virt.hardware [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.025 2 DEBUG nova.virt.hardware [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.025 2 DEBUG nova.virt.hardware [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.026 2 DEBUG nova.virt.hardware [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.026 2 DEBUG nova.virt.hardware [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.026 2 DEBUG nova.virt.hardware [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.026 2 DEBUG nova.virt.hardware [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.026 2 DEBUG nova.virt.hardware [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.027 2 DEBUG nova.objects.instance [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 01db05f6-07fb-41b5-8aaf-27ad5712fcda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.046 2 DEBUG oslo_concurrency.processutils [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.074 2 DEBUG oslo_concurrency.processutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a/disk.config cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.074 2 INFO nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Deleting local config drive /var/lib/nova/instances/cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a/disk.config because it was imported into RBD.#033[00m
Oct 14 04:57:12 np0005486808 kernel: tapd87ee1ff-7b: entered promiscuous mode
Oct 14 04:57:12 np0005486808 ovn_controller[152662]: 2025-10-14T08:57:12Z|00109|binding|INFO|Claiming lport d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb for this chassis.
Oct 14 04:57:12 np0005486808 systemd-udevd[286694]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 04:57:12 np0005486808 ovn_controller[152662]: 2025-10-14T08:57:12Z|00110|binding|INFO|d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb: Claiming fa:16:3e:80:e1:2a 10.100.0.11
Oct 14 04:57:12 np0005486808 NetworkManager[44885]: <info>  [1760432232.1339] manager: (tapd87ee1ff-7b): new Tun device (/org/freedesktop/NetworkManager/Devices/65)
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.142 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:e1:2a 10.100.0.11'], port_security=['fa:16:3e:80:e1:2a 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58d74886-d603-4fb5-b8ff-9c184284bdce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f24bbeb2f91141e294590ca2afc5ed42', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd818ab3d-f5ea-4d77-bc47-7efe2295e146', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1adf6e68-9c1a-4ee7-a829-03bbd6a5ae48, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.144 162547 INFO neutron.agent.ovn.metadata.agent [-] Port d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb in datapath 58d74886-d603-4fb5-b8ff-9c184284bdce bound to our chassis#033[00m
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.145 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58d74886-d603-4fb5-b8ff-9c184284bdce#033[00m
Oct 14 04:57:12 np0005486808 NetworkManager[44885]: <info>  [1760432232.1534] device (tapd87ee1ff-7b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 04:57:12 np0005486808 ovn_controller[152662]: 2025-10-14T08:57:12Z|00111|binding|INFO|Setting lport d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb ovn-installed in OVS
Oct 14 04:57:12 np0005486808 ovn_controller[152662]: 2025-10-14T08:57:12Z|00112|binding|INFO|Setting lport d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb up in Southbound
Oct 14 04:57:12 np0005486808 NetworkManager[44885]: <info>  [1760432232.1546] device (tapd87ee1ff-7b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.159 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[73ac41a9-2670-47f8-980a-aabfad4e5e91]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.160 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap58d74886-d1 in ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.161 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap58d74886-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.161 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3975b002-e031-4034-975f-cc40be547d87]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.165 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[675a4e96-5f34-4f41-9877-089daf18691e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:12 np0005486808 systemd-machined[214636]: New machine qemu-19-instance-00000013.
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.177 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[17690999-c627-4360-a7bd-84e3dbaf202d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:12 np0005486808 systemd[1]: Started Virtual Machine qemu-19-instance-00000013.
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.200 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a8dc0199-fd64-4c08-8ef4-c7090114a89f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.231 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[7983b692-0673-4396-9c8a-abe6d8397611]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.236 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b5279131-d242-448c-bab6-2e8beec78064]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:12 np0005486808 NetworkManager[44885]: <info>  [1760432232.2372] manager: (tap58d74886-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/66)
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.267 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4b49528c-da4d-4e22-b5ba-5bcdc36c5c0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.270 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[567c39f6-2c93-43d0-810d-31c179c3108c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:12 np0005486808 NetworkManager[44885]: <info>  [1760432232.2895] device (tap58d74886-d0): carrier: link connected
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.295 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f08967ce-5bf7-43cb-8219-505b55d9e14f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.312 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d3d5c8df-36e7-4eca-9a50-3c82b5ce1935]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58d74886-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c7:d2:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 601105, 'reachable_time': 33903, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286842, 'error': None, 'target': 'ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.329 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8fcb32ed-da06-4691-bb30-425d548845d4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec7:d2a9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 601105, 'tstamp': 601105}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286843, 'error': None, 'target': 'ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.344 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f32c77d8-84bb-4153-a04b-bb01d791132c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58d74886-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c7:d2:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 601105, 'reachable_time': 33903, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 286844, 'error': None, 'target': 'ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:57:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/508029087' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.376 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[29ecbbca-2f6e-4735-8baa-8ae5ba9ce869]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.385 2 DEBUG oslo_concurrency.processutils [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.392 2 DEBUG nova.compute.provider_tree [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.408 2 DEBUG nova.scheduler.client.report [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.432 2 DEBUG oslo_concurrency.lockutils [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.436 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f990ba98-110b-4e6f-b207-5101f9074d46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.437 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58d74886-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.437 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.438 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58d74886-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:57:12 np0005486808 NetworkManager[44885]: <info>  [1760432232.4408] manager: (tap58d74886-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67)
Oct 14 04:57:12 np0005486808 kernel: tap58d74886-d0: entered promiscuous mode
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.445 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58d74886-d0, col_values=(('external_ids', {'iface-id': 'ef5c894d-34c4-4781-b15c-6813576a45e8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:57:12 np0005486808 ovn_controller[152662]: 2025-10-14T08:57:12Z|00113|binding|INFO|Releasing lport ef5c894d-34c4-4781-b15c-6813576a45e8 from this chassis (sb_readonly=0)
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.449 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/58d74886-d603-4fb5-b8ff-9c184284bdce.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/58d74886-d603-4fb5-b8ff-9c184284bdce.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.455 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2e0103d4-0c6d-478f-9cf6-9fafaa3e7d7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.456 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-58d74886-d603-4fb5-b8ff-9c184284bdce
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/58d74886-d603-4fb5-b8ff-9c184284bdce.pid.haproxy
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 58d74886-d603-4fb5-b8ff-9c184284bdce
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.457 2 INFO nova.scheduler.client.report [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Deleted allocations for instance 45f3b13d-65b1-4bbf-8192-7b842f616b4d#033[00m
Oct 14 04:57:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:12.456 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce', 'env', 'PROCESS_TAG=haproxy-58d74886-d603-4fb5-b8ff-9c184284bdce', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/58d74886-d603-4fb5-b8ff-9c184284bdce.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:57:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:57:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/959744692' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.505 2 DEBUG oslo_concurrency.processutils [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.534 2 DEBUG oslo_concurrency.processutils [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:57:12 np0005486808 podman[286960]: 2025-10-14 08:57:12.85424303 +0000 UTC m=+0.051887206 container create e5b12f221f634ed5cc9715bfd56f99b325b3855eaa31c1a197aff198eb6da536 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.860 2 DEBUG nova.compute.manager [req-adaa813d-f9eb-4d19-abef-79c8300817b1 req-9d0275d8-7e92-4ef1-b044-aa48a54755d3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Received event network-vif-unplugged-499f3731-d66b-4964-b5a8-387adacf5166 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.861 2 DEBUG oslo_concurrency.lockutils [req-adaa813d-f9eb-4d19-abef-79c8300817b1 req-9d0275d8-7e92-4ef1-b044-aa48a54755d3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.861 2 DEBUG oslo_concurrency.lockutils [req-adaa813d-f9eb-4d19-abef-79c8300817b1 req-9d0275d8-7e92-4ef1-b044-aa48a54755d3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.861 2 DEBUG oslo_concurrency.lockutils [req-adaa813d-f9eb-4d19-abef-79c8300817b1 req-9d0275d8-7e92-4ef1-b044-aa48a54755d3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.861 2 DEBUG nova.compute.manager [req-adaa813d-f9eb-4d19-abef-79c8300817b1 req-9d0275d8-7e92-4ef1-b044-aa48a54755d3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] No waiting events found dispatching network-vif-unplugged-499f3731-d66b-4964-b5a8-387adacf5166 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.862 2 WARNING nova.compute.manager [req-adaa813d-f9eb-4d19-abef-79c8300817b1 req-9d0275d8-7e92-4ef1-b044-aa48a54755d3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Received unexpected event network-vif-unplugged-499f3731-d66b-4964-b5a8-387adacf5166 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.868 2 DEBUG oslo_concurrency.lockutils [None req-af906122-9ae0-448c-b377-eb2fc505fe06 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "45f3b13d-65b1-4bbf-8192-7b842f616b4d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.930s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:12 np0005486808 systemd[1]: Started libpod-conmon-e5b12f221f634ed5cc9715bfd56f99b325b3855eaa31c1a197aff198eb6da536.scope.
Oct 14 04:57:12 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:57:12 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cee4e261b09b2f84c9173291ee5a39f3882304b4c7d47e1f06ce36bd1963e8e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 04:57:12 np0005486808 podman[286960]: 2025-10-14 08:57:12.826605541 +0000 UTC m=+0.024249707 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 04:57:12 np0005486808 podman[286960]: 2025-10-14 08:57:12.932113952 +0000 UTC m=+0.129758118 container init e5b12f221f634ed5cc9715bfd56f99b325b3855eaa31c1a197aff198eb6da536 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.942 2 DEBUG nova.compute.manager [req-5b9a842e-6b04-42f4-89b4-9a9cdab8131c req-85566877-5f0a-49d1-897c-120af40fad3d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Received event network-vif-plugged-d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.942 2 DEBUG oslo_concurrency.lockutils [req-5b9a842e-6b04-42f4-89b4-9a9cdab8131c req-85566877-5f0a-49d1-897c-120af40fad3d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:12 np0005486808 podman[286960]: 2025-10-14 08:57:12.943190214 +0000 UTC m=+0.140834350 container start e5b12f221f634ed5cc9715bfd56f99b325b3855eaa31c1a197aff198eb6da536 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.943 2 DEBUG oslo_concurrency.lockutils [req-5b9a842e-6b04-42f4-89b4-9a9cdab8131c req-85566877-5f0a-49d1-897c-120af40fad3d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.943 2 DEBUG oslo_concurrency.lockutils [req-5b9a842e-6b04-42f4-89b4-9a9cdab8131c req-85566877-5f0a-49d1-897c-120af40fad3d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.943 2 DEBUG nova.compute.manager [req-5b9a842e-6b04-42f4-89b4-9a9cdab8131c req-85566877-5f0a-49d1-897c-120af40fad3d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Processing event network-vif-plugged-d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 04:57:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:57:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2951469822' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:57:12 np0005486808 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[286975]: [NOTICE]   (286979) : New worker (286983) forked
Oct 14 04:57:12 np0005486808 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[286975]: [NOTICE]   (286979) : Loading success.
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.977 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.987 2 DEBUG oslo_concurrency.processutils [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.988 2 DEBUG nova.virt.libvirt.vif [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:56:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1349611853',display_name='tempest-SecurityGroupsTestJSON-server-1349611853',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1349611853',id=17,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:57:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9bf65c21e4104af6981b071561617657',ramdisk_id='',reservation_id='r-qk8wy1ld',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-663845074',owner_user_name='tempest-SecurityGroupsTestJSON-663845074-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:57:11Z,user_data=None,user_id='f50e2582d63041b682c71a379f763c0e',uuid=01db05f6-07fb-41b5-8aaf-27ad5712fcda,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "499f3731-d66b-4964-b5a8-387adacf5166", "address": "fa:16:3e:58:ee:87", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499f3731-d6", "ovs_interfaceid": "499f3731-d66b-4964-b5a8-387adacf5166", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.988 2 DEBUG nova.network.os_vif_util [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Converting VIF {"id": "499f3731-d66b-4964-b5a8-387adacf5166", "address": "fa:16:3e:58:ee:87", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499f3731-d6", "ovs_interfaceid": "499f3731-d66b-4964-b5a8-387adacf5166", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.989 2 DEBUG nova.network.os_vif_util [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:58:ee:87,bridge_name='br-int',has_traffic_filtering=True,id=499f3731-d66b-4964-b5a8-387adacf5166,network=Network(58ff48d6-a644-40e6-8fc9-ee19b4354df9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap499f3731-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:57:12 np0005486808 nova_compute[259627]: 2025-10-14 08:57:12.990 2 DEBUG nova.objects.instance [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lazy-loading 'pci_devices' on Instance uuid 01db05f6-07fb-41b5-8aaf-27ad5712fcda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.009 2 DEBUG nova.virt.libvirt.driver [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] End _get_guest_xml xml=<domain type="kvm">
Oct 14 04:57:13 np0005486808 nova_compute[259627]:  <uuid>01db05f6-07fb-41b5-8aaf-27ad5712fcda</uuid>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:  <name>instance-00000011</name>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 04:57:13 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:      <nova:name>tempest-SecurityGroupsTestJSON-server-1349611853</nova:name>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 08:57:12</nova:creationTime>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 04:57:13 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:        <nova:user uuid="f50e2582d63041b682c71a379f763c0e">tempest-SecurityGroupsTestJSON-663845074-project-member</nova:user>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:        <nova:project uuid="9bf65c21e4104af6981b071561617657">tempest-SecurityGroupsTestJSON-663845074</nova:project>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:        <nova:port uuid="499f3731-d66b-4964-b5a8-387adacf5166">
Oct 14 04:57:13 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <system>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:      <entry name="serial">01db05f6-07fb-41b5-8aaf-27ad5712fcda</entry>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:      <entry name="uuid">01db05f6-07fb-41b5-8aaf-27ad5712fcda</entry>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    </system>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:  <os>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:  </os>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:  <features>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:  </features>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:  </clock>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:  <devices>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 04:57:13 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/01db05f6-07fb-41b5-8aaf-27ad5712fcda_disk">
Oct 14 04:57:13 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:57:13 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 04:57:13 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/01db05f6-07fb-41b5-8aaf-27ad5712fcda_disk.config">
Oct 14 04:57:13 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:57:13 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 04:57:13 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:58:ee:87"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:      <target dev="tap499f3731-d6"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    </interface>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 04:57:13 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/01db05f6-07fb-41b5-8aaf-27ad5712fcda/console.log" append="off"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    </serial>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <video>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    </video>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <input type="keyboard" bus="usb"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 04:57:13 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    </rng>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 04:57:13 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 04:57:13 np0005486808 nova_compute[259627]:  </devices>
Oct 14 04:57:13 np0005486808 nova_compute[259627]: </domain>
Oct 14 04:57:13 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.010 2 DEBUG nova.virt.libvirt.driver [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.010 2 DEBUG nova.virt.libvirt.driver [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.011 2 DEBUG nova.virt.libvirt.vif [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:56:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1349611853',display_name='tempest-SecurityGroupsTestJSON-server-1349611853',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1349611853',id=17,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:57:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='9bf65c21e4104af6981b071561617657',ramdisk_id='',reservation_id='r-qk8wy1ld',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio
',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-663845074',owner_user_name='tempest-SecurityGroupsTestJSON-663845074-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:57:11Z,user_data=None,user_id='f50e2582d63041b682c71a379f763c0e',uuid=01db05f6-07fb-41b5-8aaf-27ad5712fcda,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "499f3731-d66b-4964-b5a8-387adacf5166", "address": "fa:16:3e:58:ee:87", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499f3731-d6", "ovs_interfaceid": "499f3731-d66b-4964-b5a8-387adacf5166", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.011 2 DEBUG nova.network.os_vif_util [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Converting VIF {"id": "499f3731-d66b-4964-b5a8-387adacf5166", "address": "fa:16:3e:58:ee:87", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499f3731-d6", "ovs_interfaceid": "499f3731-d66b-4964-b5a8-387adacf5166", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.011 2 DEBUG nova.network.os_vif_util [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:58:ee:87,bridge_name='br-int',has_traffic_filtering=True,id=499f3731-d66b-4964-b5a8-387adacf5166,network=Network(58ff48d6-a644-40e6-8fc9-ee19b4354df9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap499f3731-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.012 2 DEBUG os_vif [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:58:ee:87,bridge_name='br-int',has_traffic_filtering=True,id=499f3731-d66b-4964-b5a8-387adacf5166,network=Network(58ff48d6-a644-40e6-8fc9-ee19b4354df9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap499f3731-d6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.013 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.013 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.016 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap499f3731-d6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.016 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap499f3731-d6, col_values=(('external_ids', {'iface-id': '499f3731-d66b-4964-b5a8-387adacf5166', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:58:ee:87', 'vm-uuid': '01db05f6-07fb-41b5-8aaf-27ad5712fcda'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:57:13 np0005486808 NetworkManager[44885]: <info>  [1760432233.0649] manager: (tap499f3731-d6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.070 2 INFO os_vif [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:58:ee:87,bridge_name='br-int',has_traffic_filtering=True,id=499f3731-d66b-4964-b5a8-387adacf5166,network=Network(58ff48d6-a644-40e6-8fc9-ee19b4354df9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap499f3731-d6')#033[00m
Oct 14 04:57:13 np0005486808 NetworkManager[44885]: <info>  [1760432233.1401] manager: (tap499f3731-d6): new Tun device (/org/freedesktop/NetworkManager/Devices/69)
Oct 14 04:57:13 np0005486808 kernel: tap499f3731-d6: entered promiscuous mode
Oct 14 04:57:13 np0005486808 systemd-udevd[286837]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:13 np0005486808 ovn_controller[152662]: 2025-10-14T08:57:13Z|00114|binding|INFO|Claiming lport 499f3731-d66b-4964-b5a8-387adacf5166 for this chassis.
Oct 14 04:57:13 np0005486808 ovn_controller[152662]: 2025-10-14T08:57:13Z|00115|binding|INFO|499f3731-d66b-4964-b5a8-387adacf5166: Claiming fa:16:3e:58:ee:87 10.100.0.5
Oct 14 04:57:13 np0005486808 NetworkManager[44885]: <info>  [1760432233.1537] device (tap499f3731-d6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 04:57:13 np0005486808 NetworkManager[44885]: <info>  [1760432233.1564] device (tap499f3731-d6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 04:57:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:13.155 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:ee:87 10.100.0.5'], port_security=['fa:16:3e:58:ee:87 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '01db05f6-07fb-41b5-8aaf-27ad5712fcda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9bf65c21e4104af6981b071561617657', 'neutron:revision_number': '6', 'neutron:security_group_ids': '62a2565f-d2d5-452b-bee3-932bd15ef802 e50f77b0-42a8-4edd-9a84-05435c5fb458', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b2c96ef8-3846-498d-b4a9-f6fd46fb5d04, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=499f3731-d66b-4964-b5a8-387adacf5166) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:57:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:13.156 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 499f3731-d66b-4964-b5a8-387adacf5166 in datapath 58ff48d6-a644-40e6-8fc9-ee19b4354df9 bound to our chassis#033[00m
Oct 14 04:57:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:13.158 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58ff48d6-a644-40e6-8fc9-ee19b4354df9#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.173 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432233.1729403, cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.173 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] VM Started (Lifecycle Event)#033[00m
Oct 14 04:57:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:13.172 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[941a526c-a319-4688-ac39-0f64638253c2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.175 2 DEBUG nova.compute.manager [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.182 2 DEBUG nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 04:57:13 np0005486808 systemd-machined[214636]: New machine qemu-20-instance-00000011.
Oct 14 04:57:13 np0005486808 ovn_controller[152662]: 2025-10-14T08:57:13Z|00116|binding|INFO|Setting lport 499f3731-d66b-4964-b5a8-387adacf5166 ovn-installed in OVS
Oct 14 04:57:13 np0005486808 ovn_controller[152662]: 2025-10-14T08:57:13Z|00117|binding|INFO|Setting lport 499f3731-d66b-4964-b5a8-387adacf5166 up in Southbound
Oct 14 04:57:13 np0005486808 systemd[1]: Started Virtual Machine qemu-20-instance-00000011.
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.200 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.202 2 INFO nova.virt.libvirt.driver [-] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Instance spawned successfully.#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.202 2 DEBUG nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 04:57:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:13.203 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e3e707d5-f5d3-4353-a344-9ce36d23aebe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.207 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:57:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:13.208 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8e56d749-3380-43dc-bf03-f0164b424f02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.228 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.228 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432233.1730754, cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.228 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] VM Paused (Lifecycle Event)#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.232 2 DEBUG nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.233 2 DEBUG nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.233 2 DEBUG nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.233 2 DEBUG nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.233 2 DEBUG nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.234 2 DEBUG nova.virt.libvirt.driver [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:57:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:13.242 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[6b0523be-b5c4-44e4-a0fe-7f84a0427c01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.254 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.257 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432233.1807742, cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.257 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] VM Resumed (Lifecycle Event)#033[00m
Oct 14 04:57:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:13.263 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9a75e006-1cab-40ec-bdaa-b5bc21826037]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58ff48d6-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:75:28:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597136, 'reachable_time': 43498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287017, 'error': None, 'target': 'ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:13.281 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6f883de1-f38c-4931-8beb-45953813e68c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap58ff48d6-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 597151, 'tstamp': 597151}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287019, 'error': None, 'target': 'ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap58ff48d6-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 597154, 'tstamp': 597154}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287019, 'error': None, 'target': 'ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:13.283 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58ff48d6-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:13.285 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58ff48d6-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:57:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:13.286 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:57:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:13.286 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58ff48d6-a0, col_values=(('external_ids', {'iface-id': '1eaf3c85-b7b7-4dd7-ad1f-33385c25330b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:57:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:13.286 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.293 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.296 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.315 2 INFO nova.compute.manager [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Took 8.40 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.316 2 DEBUG nova.compute.manager [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.323 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.386 2 INFO nova.compute.manager [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Took 9.41 seconds to build instance.#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.410 2 DEBUG oslo_concurrency.lockutils [None req-92af6c88-04ea-4e05-80d5-2656025de5e6 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.494s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:13 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 305 active+clean; 339 MiB data, 416 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 7.8 MiB/s wr, 305 op/s
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.814 2 DEBUG oslo_concurrency.lockutils [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Acquiring lock "753ea698-6cc6-4a73-a0d2-1366e5374a9c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.815 2 DEBUG oslo_concurrency.lockutils [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "753ea698-6cc6-4a73-a0d2-1366e5374a9c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.815 2 DEBUG oslo_concurrency.lockutils [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Acquiring lock "753ea698-6cc6-4a73-a0d2-1366e5374a9c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.815 2 DEBUG oslo_concurrency.lockutils [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "753ea698-6cc6-4a73-a0d2-1366e5374a9c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.816 2 DEBUG oslo_concurrency.lockutils [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "753ea698-6cc6-4a73-a0d2-1366e5374a9c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.817 2 INFO nova.compute.manager [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Terminating instance#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.818 2 DEBUG oslo_concurrency.lockutils [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Acquiring lock "refresh_cache-753ea698-6cc6-4a73-a0d2-1366e5374a9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.818 2 DEBUG oslo_concurrency.lockutils [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Acquired lock "refresh_cache-753ea698-6cc6-4a73-a0d2-1366e5374a9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.819 2 DEBUG nova.network.neutron [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 04:57:13 np0005486808 nova_compute[259627]: 2025-10-14 08:57:13.972 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:57:14 np0005486808 nova_compute[259627]: 2025-10-14 08:57:14.017 2 DEBUG nova.network.neutron [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 04:57:14 np0005486808 nova_compute[259627]: 2025-10-14 08:57:14.043 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for 01db05f6-07fb-41b5-8aaf-27ad5712fcda due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct 14 04:57:14 np0005486808 nova_compute[259627]: 2025-10-14 08:57:14.044 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432234.0434237, 01db05f6-07fb-41b5-8aaf-27ad5712fcda => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:57:14 np0005486808 nova_compute[259627]: 2025-10-14 08:57:14.044 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] VM Resumed (Lifecycle Event)#033[00m
Oct 14 04:57:14 np0005486808 nova_compute[259627]: 2025-10-14 08:57:14.046 2 DEBUG nova.compute.manager [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 04:57:14 np0005486808 nova_compute[259627]: 2025-10-14 08:57:14.049 2 INFO nova.virt.libvirt.driver [-] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Instance rebooted successfully.#033[00m
Oct 14 04:57:14 np0005486808 nova_compute[259627]: 2025-10-14 08:57:14.049 2 DEBUG nova.compute.manager [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:57:14 np0005486808 nova_compute[259627]: 2025-10-14 08:57:14.425 2 DEBUG nova.network.neutron [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:57:14 np0005486808 nova_compute[259627]: 2025-10-14 08:57:14.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:57:14 np0005486808 nova_compute[259627]: 2025-10-14 08:57:14.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:57:15 np0005486808 nova_compute[259627]: 2025-10-14 08:57:15.505 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:57:15 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 305 active+clean; 293 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 6.4 MiB/s rd, 7.8 MiB/s wr, 461 op/s
Oct 14 04:57:15 np0005486808 nova_compute[259627]: 2025-10-14 08:57:15.511 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:57:15 np0005486808 nova_compute[259627]: 2025-10-14 08:57:15.773 2 DEBUG oslo_concurrency.lockutils [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Releasing lock "refresh_cache-753ea698-6cc6-4a73-a0d2-1366e5374a9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:57:15 np0005486808 nova_compute[259627]: 2025-10-14 08:57:15.774 2 DEBUG nova.compute.manager [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 04:57:15 np0005486808 nova_compute[259627]: 2025-10-14 08:57:15.777 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Oct 14 04:57:15 np0005486808 nova_compute[259627]: 2025-10-14 08:57:15.777 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432234.0455208, 01db05f6-07fb-41b5-8aaf-27ad5712fcda => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:57:15 np0005486808 nova_compute[259627]: 2025-10-14 08:57:15.777 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] VM Started (Lifecycle Event)#033[00m
Oct 14 04:57:15 np0005486808 nova_compute[259627]: 2025-10-14 08:57:15.820 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:57:15 np0005486808 nova_compute[259627]: 2025-10-14 08:57:15.825 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:57:15 np0005486808 nova_compute[259627]: 2025-10-14 08:57:15.839 2 DEBUG oslo_concurrency.lockutils [None req-7da98ac4-9ece-4c63-849b-4424e2cd3e17 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 7.512s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:15 np0005486808 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000010.scope: Deactivated successfully.
Oct 14 04:57:15 np0005486808 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000010.scope: Consumed 12.705s CPU time.
Oct 14 04:57:15 np0005486808 systemd-machined[214636]: Machine qemu-16-instance-00000010 terminated.
Oct 14 04:57:15 np0005486808 nova_compute[259627]: 2025-10-14 08:57:15.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:57:15 np0005486808 nova_compute[259627]: 2025-10-14 08:57:15.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:57:15 np0005486808 nova_compute[259627]: 2025-10-14 08:57:15.994 2 INFO nova.virt.libvirt.driver [-] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Instance destroyed successfully.#033[00m
Oct 14 04:57:15 np0005486808 nova_compute[259627]: 2025-10-14 08:57:15.995 2 DEBUG nova.objects.instance [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lazy-loading 'resources' on Instance uuid 753ea698-6cc6-4a73-a0d2-1366e5374a9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.028 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.029 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.029 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.029 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.030 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.148 2 DEBUG nova.compute.manager [req-7040953b-f1b4-4dc7-b160-ba7f838e74fe req-a0e43006-96e2-47c5-902b-21ec892809b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Received event network-vif-plugged-499f3731-d66b-4964-b5a8-387adacf5166 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.149 2 DEBUG oslo_concurrency.lockutils [req-7040953b-f1b4-4dc7-b160-ba7f838e74fe req-a0e43006-96e2-47c5-902b-21ec892809b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.149 2 DEBUG oslo_concurrency.lockutils [req-7040953b-f1b4-4dc7-b160-ba7f838e74fe req-a0e43006-96e2-47c5-902b-21ec892809b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.149 2 DEBUG oslo_concurrency.lockutils [req-7040953b-f1b4-4dc7-b160-ba7f838e74fe req-a0e43006-96e2-47c5-902b-21ec892809b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.149 2 DEBUG nova.compute.manager [req-7040953b-f1b4-4dc7-b160-ba7f838e74fe req-a0e43006-96e2-47c5-902b-21ec892809b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] No waiting events found dispatching network-vif-plugged-499f3731-d66b-4964-b5a8-387adacf5166 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.150 2 WARNING nova.compute.manager [req-7040953b-f1b4-4dc7-b160-ba7f838e74fe req-a0e43006-96e2-47c5-902b-21ec892809b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Received unexpected event network-vif-plugged-499f3731-d66b-4964-b5a8-387adacf5166 for instance with vm_state active and task_state None.#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.150 2 DEBUG nova.compute.manager [req-7040953b-f1b4-4dc7-b160-ba7f838e74fe req-a0e43006-96e2-47c5-902b-21ec892809b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Received event network-vif-plugged-499f3731-d66b-4964-b5a8-387adacf5166 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.150 2 DEBUG oslo_concurrency.lockutils [req-7040953b-f1b4-4dc7-b160-ba7f838e74fe req-a0e43006-96e2-47c5-902b-21ec892809b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.151 2 DEBUG oslo_concurrency.lockutils [req-7040953b-f1b4-4dc7-b160-ba7f838e74fe req-a0e43006-96e2-47c5-902b-21ec892809b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.151 2 DEBUG oslo_concurrency.lockutils [req-7040953b-f1b4-4dc7-b160-ba7f838e74fe req-a0e43006-96e2-47c5-902b-21ec892809b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.151 2 DEBUG nova.compute.manager [req-7040953b-f1b4-4dc7-b160-ba7f838e74fe req-a0e43006-96e2-47c5-902b-21ec892809b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] No waiting events found dispatching network-vif-plugged-499f3731-d66b-4964-b5a8-387adacf5166 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.152 2 WARNING nova.compute.manager [req-7040953b-f1b4-4dc7-b160-ba7f838e74fe req-a0e43006-96e2-47c5-902b-21ec892809b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Received unexpected event network-vif-plugged-499f3731-d66b-4964-b5a8-387adacf5166 for instance with vm_state active and task_state None.#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.152 2 DEBUG nova.compute.manager [req-7040953b-f1b4-4dc7-b160-ba7f838e74fe req-a0e43006-96e2-47c5-902b-21ec892809b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Received event network-vif-plugged-499f3731-d66b-4964-b5a8-387adacf5166 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.152 2 DEBUG oslo_concurrency.lockutils [req-7040953b-f1b4-4dc7-b160-ba7f838e74fe req-a0e43006-96e2-47c5-902b-21ec892809b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.152 2 DEBUG oslo_concurrency.lockutils [req-7040953b-f1b4-4dc7-b160-ba7f838e74fe req-a0e43006-96e2-47c5-902b-21ec892809b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.153 2 DEBUG oslo_concurrency.lockutils [req-7040953b-f1b4-4dc7-b160-ba7f838e74fe req-a0e43006-96e2-47c5-902b-21ec892809b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.153 2 DEBUG nova.compute.manager [req-7040953b-f1b4-4dc7-b160-ba7f838e74fe req-a0e43006-96e2-47c5-902b-21ec892809b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] No waiting events found dispatching network-vif-plugged-499f3731-d66b-4964-b5a8-387adacf5166 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.153 2 WARNING nova.compute.manager [req-7040953b-f1b4-4dc7-b160-ba7f838e74fe req-a0e43006-96e2-47c5-902b-21ec892809b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Received unexpected event network-vif-plugged-499f3731-d66b-4964-b5a8-387adacf5166 for instance with vm_state active and task_state None.#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.209 2 DEBUG nova.compute.manager [req-1166ceac-0086-4122-ada7-95ec78f6311e req-ed3cffb2-b704-450f-bb0b-fdc0e0257261 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Received event network-vif-plugged-d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.209 2 DEBUG oslo_concurrency.lockutils [req-1166ceac-0086-4122-ada7-95ec78f6311e req-ed3cffb2-b704-450f-bb0b-fdc0e0257261 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.210 2 DEBUG oslo_concurrency.lockutils [req-1166ceac-0086-4122-ada7-95ec78f6311e req-ed3cffb2-b704-450f-bb0b-fdc0e0257261 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.210 2 DEBUG oslo_concurrency.lockutils [req-1166ceac-0086-4122-ada7-95ec78f6311e req-ed3cffb2-b704-450f-bb0b-fdc0e0257261 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.210 2 DEBUG nova.compute.manager [req-1166ceac-0086-4122-ada7-95ec78f6311e req-ed3cffb2-b704-450f-bb0b-fdc0e0257261 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] No waiting events found dispatching network-vif-plugged-d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.210 2 WARNING nova.compute.manager [req-1166ceac-0086-4122-ada7-95ec78f6311e req-ed3cffb2-b704-450f-bb0b-fdc0e0257261 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Received unexpected event network-vif-plugged-d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb for instance with vm_state active and task_state None.#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.423 2 INFO nova.virt.libvirt.driver [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Deleting instance files /var/lib/nova/instances/753ea698-6cc6-4a73-a0d2-1366e5374a9c_del#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.424 2 INFO nova.virt.libvirt.driver [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Deletion of /var/lib/nova/instances/753ea698-6cc6-4a73-a0d2-1366e5374a9c_del complete#033[00m
Oct 14 04:57:16 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.471 2 INFO nova.compute.manager [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Took 0.70 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 04:57:16 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/601108212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.471 2 DEBUG oslo.service.loopingcall [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.472 2 DEBUG nova.compute.manager [-] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.472 2 DEBUG nova.network.neutron [-] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.490 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.588 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.589 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.594 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.594 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.597 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.597 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.655 2 DEBUG nova.network.neutron [-] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.746 2 DEBUG nova.network.neutron [-] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.771 2 INFO nova.compute.manager [-] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Took 0.30 seconds to deallocate network for instance.#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.806 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.807 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4137MB free_disk=59.85552215576172GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.807 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.807 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.827 2 DEBUG oslo_concurrency.lockutils [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.854 2 DEBUG nova.compute.manager [None req-5a3c6972-9238-490a-9093-adcde4ff5820 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.891 2 INFO nova.compute.manager [None req-5a3c6972-9238-490a-9093-adcde4ff5820 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] instance snapshotting#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.899 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 548bff7e-531b-4f5d-b4d3-18d586f46581 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.900 2 WARNING nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 753ea698-6cc6-4a73-a0d2-1366e5374a9c is not being actively managed by this compute host but has allocations referencing this compute host: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. Skipping heal of allocation because we do not know what to do.#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.900 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 01db05f6-07fb-41b5-8aaf-27ad5712fcda actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.900 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.900 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 04:57:16 np0005486808 nova_compute[259627]: 2025-10-14 08:57:16.900 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 04:57:17 np0005486808 nova_compute[259627]: 2025-10-14 08:57:17.009 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:57:17 np0005486808 nova_compute[259627]: 2025-10-14 08:57:17.160 2 INFO nova.virt.libvirt.driver [None req-5a3c6972-9238-490a-9093-adcde4ff5820 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Beginning live snapshot process#033[00m
Oct 14 04:57:17 np0005486808 nova_compute[259627]: 2025-10-14 08:57:17.407 2 DEBUG nova.virt.libvirt.imagebackend [None req-5a3c6972-9238-490a-9093-adcde4ff5820 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] No parent info for a4789543-f429-47d7-9f79-80a9d90a59f9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct 14 04:57:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:57:17 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1161: 305 pgs: 305 active+clean; 293 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.8 MiB/s wr, 281 op/s
Oct 14 04:57:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:57:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2841566202' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:57:17 np0005486808 nova_compute[259627]: 2025-10-14 08:57:17.541 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:57:17 np0005486808 nova_compute[259627]: 2025-10-14 08:57:17.547 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:57:17 np0005486808 nova_compute[259627]: 2025-10-14 08:57:17.565 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:57:17 np0005486808 nova_compute[259627]: 2025-10-14 08:57:17.601 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 04:57:17 np0005486808 nova_compute[259627]: 2025-10-14 08:57:17.603 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:17 np0005486808 nova_compute[259627]: 2025-10-14 08:57:17.604 2 DEBUG oslo_concurrency.lockutils [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:17 np0005486808 nova_compute[259627]: 2025-10-14 08:57:17.617 2 DEBUG oslo_concurrency.lockutils [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.013s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:17 np0005486808 nova_compute[259627]: 2025-10-14 08:57:17.647 2 INFO nova.scheduler.client.report [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Deleted allocations for instance 753ea698-6cc6-4a73-a0d2-1366e5374a9c#033[00m
Oct 14 04:57:17 np0005486808 nova_compute[259627]: 2025-10-14 08:57:17.668 2 DEBUG nova.storage.rbd_utils [None req-5a3c6972-9238-490a-9093-adcde4ff5820 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] creating snapshot(836a67783e6e421cb193ce066e0062c6) on rbd image(cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 14 04:57:17 np0005486808 nova_compute[259627]: 2025-10-14 08:57:17.723 2 DEBUG oslo_concurrency.lockutils [None req-9d338736-644d-42f7-b7eb-a28ea4cb0441 6ecc59efebb941f4b0aa79b58a7e610e a618b00ff8c34f40bd31e4f56c019b1b - - default default] Lock "753ea698-6cc6-4a73-a0d2-1366e5374a9c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.908s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:18 np0005486808 nova_compute[259627]: 2025-10-14 08:57:18.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:18 np0005486808 ovn_controller[152662]: 2025-10-14T08:57:18Z|00118|binding|INFO|Releasing lport ef5c894d-34c4-4781-b15c-6813576a45e8 from this chassis (sb_readonly=0)
Oct 14 04:57:18 np0005486808 ovn_controller[152662]: 2025-10-14T08:57:18Z|00119|binding|INFO|Releasing lport 1eaf3c85-b7b7-4dd7-ad1f-33385c25330b from this chassis (sb_readonly=0)
Oct 14 04:57:18 np0005486808 nova_compute[259627]: 2025-10-14 08:57:18.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:18 np0005486808 ovn_controller[152662]: 2025-10-14T08:57:18Z|00120|binding|INFO|Releasing lport ef5c894d-34c4-4781-b15c-6813576a45e8 from this chassis (sb_readonly=0)
Oct 14 04:57:18 np0005486808 ovn_controller[152662]: 2025-10-14T08:57:18Z|00121|binding|INFO|Releasing lport 1eaf3c85-b7b7-4dd7-ad1f-33385c25330b from this chassis (sb_readonly=0)
Oct 14 04:57:18 np0005486808 nova_compute[259627]: 2025-10-14 08:57:18.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Oct 14 04:57:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Oct 14 04:57:18 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Oct 14 04:57:18 np0005486808 nova_compute[259627]: 2025-10-14 08:57:18.793 2 DEBUG nova.storage.rbd_utils [None req-5a3c6972-9238-490a-9093-adcde4ff5820 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] cloning vms/cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a_disk@836a67783e6e421cb193ce066e0062c6 to images/b53dc6f8-1bef-45e3-8ec2-4d8ee625214e clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct 14 04:57:18 np0005486808 nova_compute[259627]: 2025-10-14 08:57:18.906 2 DEBUG nova.storage.rbd_utils [None req-5a3c6972-9238-490a-9093-adcde4ff5820 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] flattening images/b53dc6f8-1bef-45e3-8ec2-4d8ee625214e flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct 14 04:57:19 np0005486808 nova_compute[259627]: 2025-10-14 08:57:19.170 2 DEBUG nova.storage.rbd_utils [None req-5a3c6972-9238-490a-9093-adcde4ff5820 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] removing snapshot(836a67783e6e421cb193ce066e0062c6) on rbd image(cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct 14 04:57:19 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1163: 305 pgs: 305 active+clean; 293 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 3.3 MiB/s wr, 337 op/s
Oct 14 04:57:19 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Oct 14 04:57:19 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Oct 14 04:57:19 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Oct 14 04:57:19 np0005486808 nova_compute[259627]: 2025-10-14 08:57:19.785 2 DEBUG nova.storage.rbd_utils [None req-5a3c6972-9238-490a-9093-adcde4ff5820 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] creating snapshot(snap) on rbd image(b53dc6f8-1bef-45e3-8ec2-4d8ee625214e) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 14 04:57:20 np0005486808 nova_compute[259627]: 2025-10-14 08:57:20.388 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432225.38623, 45f3b13d-65b1-4bbf-8192-7b842f616b4d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:57:20 np0005486808 nova_compute[259627]: 2025-10-14 08:57:20.388 2 INFO nova.compute.manager [-] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] VM Stopped (Lifecycle Event)#033[00m
Oct 14 04:57:20 np0005486808 nova_compute[259627]: 2025-10-14 08:57:20.392 2 DEBUG nova.compute.manager [req-21f898bb-d25c-4f97-8257-d846e235898c req-2e9325dd-3126-4ceb-99cd-28a1b7314132 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Received event network-changed-499f3731-d66b-4964-b5a8-387adacf5166 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:57:20 np0005486808 nova_compute[259627]: 2025-10-14 08:57:20.392 2 DEBUG nova.compute.manager [req-21f898bb-d25c-4f97-8257-d846e235898c req-2e9325dd-3126-4ceb-99cd-28a1b7314132 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Refreshing instance network info cache due to event network-changed-499f3731-d66b-4964-b5a8-387adacf5166. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 04:57:20 np0005486808 nova_compute[259627]: 2025-10-14 08:57:20.393 2 DEBUG oslo_concurrency.lockutils [req-21f898bb-d25c-4f97-8257-d846e235898c req-2e9325dd-3126-4ceb-99cd-28a1b7314132 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-01db05f6-07fb-41b5-8aaf-27ad5712fcda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:57:20 np0005486808 nova_compute[259627]: 2025-10-14 08:57:20.393 2 DEBUG oslo_concurrency.lockutils [req-21f898bb-d25c-4f97-8257-d846e235898c req-2e9325dd-3126-4ceb-99cd-28a1b7314132 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-01db05f6-07fb-41b5-8aaf-27ad5712fcda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:57:20 np0005486808 nova_compute[259627]: 2025-10-14 08:57:20.393 2 DEBUG nova.network.neutron [req-21f898bb-d25c-4f97-8257-d846e235898c req-2e9325dd-3126-4ceb-99cd-28a1b7314132 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Refreshing network info cache for port 499f3731-d66b-4964-b5a8-387adacf5166 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 04:57:20 np0005486808 nova_compute[259627]: 2025-10-14 08:57:20.415 2 DEBUG nova.compute.manager [None req-381e5263-0efa-46e9-8117-347b7cc7f174 - - - - - -] [instance: 45f3b13d-65b1-4bbf-8192-7b842f616b4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:57:20 np0005486808 nova_compute[259627]: 2025-10-14 08:57:20.605 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:57:20 np0005486808 nova_compute[259627]: 2025-10-14 08:57:20.606 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 04:57:20 np0005486808 nova_compute[259627]: 2025-10-14 08:57:20.606 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 04:57:20 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Oct 14 04:57:20 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Oct 14 04:57:20 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Oct 14 04:57:20 np0005486808 nova_compute[259627]: 2025-10-14 08:57:20.830 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-548bff7e-531b-4f5d-b4d3-18d586f46581" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:57:20 np0005486808 nova_compute[259627]: 2025-10-14 08:57:20.831 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-548bff7e-531b-4f5d-b4d3-18d586f46581" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:57:20 np0005486808 nova_compute[259627]: 2025-10-14 08:57:20.831 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 14 04:57:20 np0005486808 nova_compute[259627]: 2025-10-14 08:57:20.831 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 548bff7e-531b-4f5d-b4d3-18d586f46581 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver [None req-5a3c6972-9238-490a-9093-adcde4ff5820 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image b53dc6f8-1bef-45e3-8ec2-4d8ee625214e could not be found.
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     image = self._client.call(
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID b53dc6f8-1bef-45e3-8ec2-4d8ee625214e
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver 
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver 
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     image = self._client.call(
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image b53dc6f8-1bef-45e3-8ec2-4d8ee625214e could not be found.
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.168 2 ERROR nova.virt.libvirt.driver #033[00m
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.242 2 DEBUG nova.storage.rbd_utils [None req-5a3c6972-9238-490a-9093-adcde4ff5820 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] removing snapshot(snap) on rbd image(b53dc6f8-1bef-45e3-8ec2-4d8ee625214e) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.401 2 DEBUG oslo_concurrency.lockutils [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquiring lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.401 2 DEBUG oslo_concurrency.lockutils [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.402 2 DEBUG oslo_concurrency.lockutils [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquiring lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.403 2 DEBUG oslo_concurrency.lockutils [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.403 2 DEBUG oslo_concurrency.lockutils [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.405 2 INFO nova.compute.manager [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Terminating instance#033[00m
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.407 2 DEBUG nova.compute.manager [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 04:57:21 np0005486808 kernel: tap499f3731-d6 (unregistering): left promiscuous mode
Oct 14 04:57:21 np0005486808 NetworkManager[44885]: <info>  [1760432241.4478] device (tap499f3731-d6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 04:57:21 np0005486808 ovn_controller[152662]: 2025-10-14T08:57:21Z|00122|binding|INFO|Releasing lport 499f3731-d66b-4964-b5a8-387adacf5166 from this chassis (sb_readonly=0)
Oct 14 04:57:21 np0005486808 ovn_controller[152662]: 2025-10-14T08:57:21Z|00123|binding|INFO|Setting lport 499f3731-d66b-4964-b5a8-387adacf5166 down in Southbound
Oct 14 04:57:21 np0005486808 ovn_controller[152662]: 2025-10-14T08:57:21Z|00124|binding|INFO|Removing iface tap499f3731-d6 ovn-installed in OVS
Oct 14 04:57:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:21.467 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:ee:87 10.100.0.5'], port_security=['fa:16:3e:58:ee:87 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '01db05f6-07fb-41b5-8aaf-27ad5712fcda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9bf65c21e4104af6981b071561617657', 'neutron:revision_number': '8', 'neutron:security_group_ids': '62a2565f-d2d5-452b-bee3-932bd15ef802 b044039a-151e-49aa-aa1d-0e709a3a113a e50f77b0-42a8-4edd-9a84-05435c5fb458', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b2c96ef8-3846-498d-b4a9-f6fd46fb5d04, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=499f3731-d66b-4964-b5a8-387adacf5166) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:57:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:21.468 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 499f3731-d66b-4964-b5a8-387adacf5166 in datapath 58ff48d6-a644-40e6-8fc9-ee19b4354df9 unbound from our chassis#033[00m
Oct 14 04:57:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:21.470 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58ff48d6-a644-40e6-8fc9-ee19b4354df9#033[00m
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:21.499 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3f32b683-5f9a-40e0-8a87-3430c77d4224]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:21 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 301 active+clean; 260 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 3.6 MiB/s wr, 191 op/s
Oct 14 04:57:21 np0005486808 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000011.scope: Deactivated successfully.
Oct 14 04:57:21 np0005486808 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000011.scope: Consumed 8.363s CPU time.
Oct 14 04:57:21 np0005486808 systemd-machined[214636]: Machine qemu-20-instance-00000011 terminated.
Oct 14 04:57:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:21.530 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a2f90d61-a7cb-40d5-9a23-d70cf01cea7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:21.534 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[44d81a3e-aab2-4f45-b175-9dc4284bad0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:21.560 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[85efc62f-f4da-4507-8c55-87ab10906692]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:21.580 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c2e7ad63-d38f-4294-9221-3354073754a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58ff48d6-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:75:28:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597136, 'reachable_time': 43498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287301, 'error': None, 'target': 'ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:21.598 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e6773ece-4dd7-4844-9649-dc86af3b368c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap58ff48d6-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 597151, 'tstamp': 597151}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287302, 'error': None, 'target': 'ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap58ff48d6-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 597154, 'tstamp': 597154}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287302, 'error': None, 'target': 'ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:21.600 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58ff48d6-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:21.606 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58ff48d6-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:57:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:21.606 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:57:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:21.607 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58ff48d6-a0, col_values=(('external_ids', {'iface-id': '1eaf3c85-b7b7-4dd7-ad1f-33385c25330b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:57:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:21.608 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.646 2 INFO nova.virt.libvirt.driver [-] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Instance destroyed successfully.#033[00m
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.647 2 DEBUG nova.objects.instance [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lazy-loading 'resources' on Instance uuid 01db05f6-07fb-41b5-8aaf-27ad5712fcda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.667 2 DEBUG nova.virt.libvirt.vif [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:56:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1349611853',display_name='tempest-SecurityGroupsTestJSON-server-1349611853',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1349611853',id=17,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:57:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9bf65c21e4104af6981b071561617657',ramdisk_id='',reservation_id='r-qk8wy1ld',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-663845074',owner_user_name='tempest-SecurityGroupsTestJSON-663845074-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:57:15Z,user_data=None,user_id='f50e2582d63041b682c71a379f763c0e',uuid=01db05f6-07fb-41b5-8aaf-27ad5712fcda,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "499f3731-d66b-4964-b5a8-387adacf5166", "address": "fa:16:3e:58:ee:87", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499f3731-d6", "ovs_interfaceid": "499f3731-d66b-4964-b5a8-387adacf5166", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.667 2 DEBUG nova.network.os_vif_util [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Converting VIF {"id": "499f3731-d66b-4964-b5a8-387adacf5166", "address": "fa:16:3e:58:ee:87", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499f3731-d6", "ovs_interfaceid": "499f3731-d66b-4964-b5a8-387adacf5166", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.668 2 DEBUG nova.network.os_vif_util [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:58:ee:87,bridge_name='br-int',has_traffic_filtering=True,id=499f3731-d66b-4964-b5a8-387adacf5166,network=Network(58ff48d6-a644-40e6-8fc9-ee19b4354df9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap499f3731-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.668 2 DEBUG os_vif [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:58:ee:87,bridge_name='br-int',has_traffic_filtering=True,id=499f3731-d66b-4964-b5a8-387adacf5166,network=Network(58ff48d6-a644-40e6-8fc9-ee19b4354df9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap499f3731-d6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.672 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap499f3731-d6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.680 2 INFO os_vif [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:58:ee:87,bridge_name='br-int',has_traffic_filtering=True,id=499f3731-d66b-4964-b5a8-387adacf5166,network=Network(58ff48d6-a644-40e6-8fc9-ee19b4354df9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap499f3731-d6')#033[00m
Oct 14 04:57:21 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Oct 14 04:57:21 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Oct 14 04:57:21 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Oct 14 04:57:21 np0005486808 nova_compute[259627]: 2025-10-14 08:57:21.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:22 np0005486808 nova_compute[259627]: 2025-10-14 08:57:22.184 2 WARNING nova.compute.manager [None req-5a3c6972-9238-490a-9093-adcde4ff5820 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Image not found during snapshot: nova.exception.ImageNotFound: Image b53dc6f8-1bef-45e3-8ec2-4d8ee625214e could not be found.#033[00m
Oct 14 04:57:22 np0005486808 nova_compute[259627]: 2025-10-14 08:57:22.230 2 INFO nova.virt.libvirt.driver [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Deleting instance files /var/lib/nova/instances/01db05f6-07fb-41b5-8aaf-27ad5712fcda_del#033[00m
Oct 14 04:57:22 np0005486808 nova_compute[259627]: 2025-10-14 08:57:22.231 2 INFO nova.virt.libvirt.driver [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Deletion of /var/lib/nova/instances/01db05f6-07fb-41b5-8aaf-27ad5712fcda_del complete#033[00m
Oct 14 04:57:22 np0005486808 nova_compute[259627]: 2025-10-14 08:57:22.303 2 INFO nova.compute.manager [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Took 0.90 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 04:57:22 np0005486808 nova_compute[259627]: 2025-10-14 08:57:22.303 2 DEBUG oslo.service.loopingcall [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 04:57:22 np0005486808 nova_compute[259627]: 2025-10-14 08:57:22.304 2 DEBUG nova.compute.manager [-] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 04:57:22 np0005486808 nova_compute[259627]: 2025-10-14 08:57:22.304 2 DEBUG nova.network.neutron [-] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 04:57:22 np0005486808 nova_compute[259627]: 2025-10-14 08:57:22.400 2 DEBUG nova.network.neutron [req-21f898bb-d25c-4f97-8257-d846e235898c req-2e9325dd-3126-4ceb-99cd-28a1b7314132 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Updated VIF entry in instance network info cache for port 499f3731-d66b-4964-b5a8-387adacf5166. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 04:57:22 np0005486808 nova_compute[259627]: 2025-10-14 08:57:22.401 2 DEBUG nova.network.neutron [req-21f898bb-d25c-4f97-8257-d846e235898c req-2e9325dd-3126-4ceb-99cd-28a1b7314132 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Updating instance_info_cache with network_info: [{"id": "499f3731-d66b-4964-b5a8-387adacf5166", "address": "fa:16:3e:58:ee:87", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499f3731-d6", "ovs_interfaceid": "499f3731-d66b-4964-b5a8-387adacf5166", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:57:22 np0005486808 nova_compute[259627]: 2025-10-14 08:57:22.425 2 DEBUG oslo_concurrency.lockutils [req-21f898bb-d25c-4f97-8257-d846e235898c req-2e9325dd-3126-4ceb-99cd-28a1b7314132 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-01db05f6-07fb-41b5-8aaf-27ad5712fcda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:57:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:57:22 np0005486808 nova_compute[259627]: 2025-10-14 08:57:22.527 2 DEBUG nova.compute.manager [req-c4e7621d-1edc-4e47-87b4-3a773b265541 req-906376c8-0e21-4f20-940e-92363399d9bc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Received event network-vif-unplugged-499f3731-d66b-4964-b5a8-387adacf5166 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:57:22 np0005486808 nova_compute[259627]: 2025-10-14 08:57:22.528 2 DEBUG oslo_concurrency.lockutils [req-c4e7621d-1edc-4e47-87b4-3a773b265541 req-906376c8-0e21-4f20-940e-92363399d9bc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:22 np0005486808 nova_compute[259627]: 2025-10-14 08:57:22.528 2 DEBUG oslo_concurrency.lockutils [req-c4e7621d-1edc-4e47-87b4-3a773b265541 req-906376c8-0e21-4f20-940e-92363399d9bc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:22 np0005486808 nova_compute[259627]: 2025-10-14 08:57:22.528 2 DEBUG oslo_concurrency.lockutils [req-c4e7621d-1edc-4e47-87b4-3a773b265541 req-906376c8-0e21-4f20-940e-92363399d9bc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:22 np0005486808 nova_compute[259627]: 2025-10-14 08:57:22.529 2 DEBUG nova.compute.manager [req-c4e7621d-1edc-4e47-87b4-3a773b265541 req-906376c8-0e21-4f20-940e-92363399d9bc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] No waiting events found dispatching network-vif-unplugged-499f3731-d66b-4964-b5a8-387adacf5166 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:57:22 np0005486808 nova_compute[259627]: 2025-10-14 08:57:22.529 2 DEBUG nova.compute.manager [req-c4e7621d-1edc-4e47-87b4-3a773b265541 req-906376c8-0e21-4f20-940e-92363399d9bc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Received event network-vif-unplugged-499f3731-d66b-4964-b5a8-387adacf5166 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 04:57:22 np0005486808 nova_compute[259627]: 2025-10-14 08:57:22.529 2 DEBUG nova.compute.manager [req-c4e7621d-1edc-4e47-87b4-3a773b265541 req-906376c8-0e21-4f20-940e-92363399d9bc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Received event network-vif-plugged-499f3731-d66b-4964-b5a8-387adacf5166 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:57:22 np0005486808 nova_compute[259627]: 2025-10-14 08:57:22.529 2 DEBUG oslo_concurrency.lockutils [req-c4e7621d-1edc-4e47-87b4-3a773b265541 req-906376c8-0e21-4f20-940e-92363399d9bc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:22 np0005486808 nova_compute[259627]: 2025-10-14 08:57:22.530 2 DEBUG oslo_concurrency.lockutils [req-c4e7621d-1edc-4e47-87b4-3a773b265541 req-906376c8-0e21-4f20-940e-92363399d9bc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:22 np0005486808 nova_compute[259627]: 2025-10-14 08:57:22.530 2 DEBUG oslo_concurrency.lockutils [req-c4e7621d-1edc-4e47-87b4-3a773b265541 req-906376c8-0e21-4f20-940e-92363399d9bc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:22 np0005486808 nova_compute[259627]: 2025-10-14 08:57:22.530 2 DEBUG nova.compute.manager [req-c4e7621d-1edc-4e47-87b4-3a773b265541 req-906376c8-0e21-4f20-940e-92363399d9bc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] No waiting events found dispatching network-vif-plugged-499f3731-d66b-4964-b5a8-387adacf5166 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:57:22 np0005486808 nova_compute[259627]: 2025-10-14 08:57:22.531 2 WARNING nova.compute.manager [req-c4e7621d-1edc-4e47-87b4-3a773b265541 req-906376c8-0e21-4f20-940e-92363399d9bc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Received unexpected event network-vif-plugged-499f3731-d66b-4964-b5a8-387adacf5166 for instance with vm_state active and task_state deleting.#033[00m
Oct 14 04:57:23 np0005486808 nova_compute[259627]: 2025-10-14 08:57:23.114 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Updating instance_info_cache with network_info: [{"id": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "address": "fa:16:3e:a5:2f:21", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da6c99d-4e", "ovs_interfaceid": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:57:23 np0005486808 nova_compute[259627]: 2025-10-14 08:57:23.139 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-548bff7e-531b-4f5d-b4d3-18d586f46581" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:57:23 np0005486808 nova_compute[259627]: 2025-10-14 08:57:23.140 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 14 04:57:23 np0005486808 nova_compute[259627]: 2025-10-14 08:57:23.141 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:57:23 np0005486808 nova_compute[259627]: 2025-10-14 08:57:23.463 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432228.4614785, 251ae181-b980-4338-a6b5-eee48450b510 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:57:23 np0005486808 nova_compute[259627]: 2025-10-14 08:57:23.464 2 INFO nova.compute.manager [-] [instance: 251ae181-b980-4338-a6b5-eee48450b510] VM Stopped (Lifecycle Event)#033[00m
Oct 14 04:57:23 np0005486808 nova_compute[259627]: 2025-10-14 08:57:23.491 2 DEBUG nova.compute.manager [None req-6148d398-19f5-4518-b1ac-a6ca9eff49c5 - - - - - -] [instance: 251ae181-b980-4338-a6b5-eee48450b510] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:57:23 np0005486808 nova_compute[259627]: 2025-10-14 08:57:23.491 2 DEBUG nova.network.neutron [-] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:57:23 np0005486808 nova_compute[259627]: 2025-10-14 08:57:23.508 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:57:23 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 301 active+clean; 260 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 4.5 MiB/s wr, 240 op/s
Oct 14 04:57:23 np0005486808 nova_compute[259627]: 2025-10-14 08:57:23.527 2 INFO nova.compute.manager [-] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Took 1.22 seconds to deallocate network for instance.#033[00m
Oct 14 04:57:23 np0005486808 nova_compute[259627]: 2025-10-14 08:57:23.576 2 DEBUG oslo_concurrency.lockutils [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:23 np0005486808 nova_compute[259627]: 2025-10-14 08:57:23.577 2 DEBUG oslo_concurrency.lockutils [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:23 np0005486808 nova_compute[259627]: 2025-10-14 08:57:23.678 2 DEBUG oslo_concurrency.processutils [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:57:24 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:57:24 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3961221219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:57:24 np0005486808 nova_compute[259627]: 2025-10-14 08:57:24.134 2 DEBUG oslo_concurrency.processutils [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:57:24 np0005486808 nova_compute[259627]: 2025-10-14 08:57:24.141 2 DEBUG nova.compute.provider_tree [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:57:24 np0005486808 nova_compute[259627]: 2025-10-14 08:57:24.159 2 DEBUG nova.scheduler.client.report [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:57:24 np0005486808 nova_compute[259627]: 2025-10-14 08:57:24.174 2 DEBUG oslo_concurrency.lockutils [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:24 np0005486808 nova_compute[259627]: 2025-10-14 08:57:24.175 2 DEBUG oslo_concurrency.lockutils [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:24 np0005486808 nova_compute[259627]: 2025-10-14 08:57:24.175 2 DEBUG oslo_concurrency.lockutils [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:24 np0005486808 nova_compute[259627]: 2025-10-14 08:57:24.175 2 DEBUG oslo_concurrency.lockutils [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:24 np0005486808 nova_compute[259627]: 2025-10-14 08:57:24.176 2 DEBUG oslo_concurrency.lockutils [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:24 np0005486808 nova_compute[259627]: 2025-10-14 08:57:24.177 2 INFO nova.compute.manager [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Terminating instance#033[00m
Oct 14 04:57:24 np0005486808 nova_compute[259627]: 2025-10-14 08:57:24.178 2 DEBUG nova.compute.manager [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 04:57:24 np0005486808 nova_compute[259627]: 2025-10-14 08:57:24.180 2 DEBUG oslo_concurrency.lockutils [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:24 np0005486808 nova_compute[259627]: 2025-10-14 08:57:24.209 2 INFO nova.scheduler.client.report [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Deleted allocations for instance 01db05f6-07fb-41b5-8aaf-27ad5712fcda#033[00m
Oct 14 04:57:24 np0005486808 kernel: tapd87ee1ff-7b (unregistering): left promiscuous mode
Oct 14 04:57:24 np0005486808 NetworkManager[44885]: <info>  [1760432244.2493] device (tapd87ee1ff-7b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 04:57:24 np0005486808 nova_compute[259627]: 2025-10-14 08:57:24.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:24 np0005486808 ovn_controller[152662]: 2025-10-14T08:57:24Z|00125|binding|INFO|Releasing lport d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb from this chassis (sb_readonly=0)
Oct 14 04:57:24 np0005486808 ovn_controller[152662]: 2025-10-14T08:57:24Z|00126|binding|INFO|Setting lport d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb down in Southbound
Oct 14 04:57:24 np0005486808 ovn_controller[152662]: 2025-10-14T08:57:24Z|00127|binding|INFO|Removing iface tapd87ee1ff-7b ovn-installed in OVS
Oct 14 04:57:24 np0005486808 nova_compute[259627]: 2025-10-14 08:57:24.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:24 np0005486808 nova_compute[259627]: 2025-10-14 08:57:24.266 2 DEBUG oslo_concurrency.lockutils [None req-e9423668-7906-4950-8917-387a804d8f93 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "01db05f6-07fb-41b5-8aaf-27ad5712fcda" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.864s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:24.266 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:e1:2a 10.100.0.11'], port_security=['fa:16:3e:80:e1:2a 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58d74886-d603-4fb5-b8ff-9c184284bdce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f24bbeb2f91141e294590ca2afc5ed42', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd818ab3d-f5ea-4d77-bc47-7efe2295e146', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1adf6e68-9c1a-4ee7-a829-03bbd6a5ae48, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:57:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:24.268 162547 INFO neutron.agent.ovn.metadata.agent [-] Port d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb in datapath 58d74886-d603-4fb5-b8ff-9c184284bdce unbound from our chassis#033[00m
Oct 14 04:57:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:24.270 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 58d74886-d603-4fb5-b8ff-9c184284bdce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 04:57:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:24.271 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[15435b0d-0ec1-432d-a2bb-3e80c0b39520]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:24.272 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce namespace which is not needed anymore#033[00m
Oct 14 04:57:24 np0005486808 nova_compute[259627]: 2025-10-14 08:57:24.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:24 np0005486808 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000013.scope: Deactivated successfully.
Oct 14 04:57:24 np0005486808 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000013.scope: Consumed 11.587s CPU time.
Oct 14 04:57:24 np0005486808 systemd-machined[214636]: Machine qemu-19-instance-00000013 terminated.
Oct 14 04:57:24 np0005486808 nova_compute[259627]: 2025-10-14 08:57:24.422 2 INFO nova.virt.libvirt.driver [-] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Instance destroyed successfully.#033[00m
Oct 14 04:57:24 np0005486808 nova_compute[259627]: 2025-10-14 08:57:24.424 2 DEBUG nova.objects.instance [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lazy-loading 'resources' on Instance uuid cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:57:24 np0005486808 nova_compute[259627]: 2025-10-14 08:57:24.439 2 DEBUG nova.virt.libvirt.vif [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:57:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-476297486',display_name='tempest-ImagesOneServerNegativeTestJSON-server-476297486',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-476297486',id=19,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:57:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f24bbeb2f91141e294590ca2afc5ed42',ramdisk_id='',reservation_id='r-7i3jta7g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-531836018',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-531836018-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:57:22Z,user_data=None,user_id='aafd6ad40c944c3eb14e7fbf454040c3',uuid=cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb", "address": "fa:16:3e:80:e1:2a", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd87ee1ff-7b", "ovs_interfaceid": "d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 04:57:24 np0005486808 nova_compute[259627]: 2025-10-14 08:57:24.440 2 DEBUG nova.network.os_vif_util [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Converting VIF {"id": "d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb", "address": "fa:16:3e:80:e1:2a", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd87ee1ff-7b", "ovs_interfaceid": "d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:57:24 np0005486808 nova_compute[259627]: 2025-10-14 08:57:24.441 2 DEBUG nova.network.os_vif_util [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:e1:2a,bridge_name='br-int',has_traffic_filtering=True,id=d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb,network=Network(58d74886-d603-4fb5-b8ff-9c184284bdce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd87ee1ff-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:57:24 np0005486808 nova_compute[259627]: 2025-10-14 08:57:24.442 2 DEBUG os_vif [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:e1:2a,bridge_name='br-int',has_traffic_filtering=True,id=d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb,network=Network(58d74886-d603-4fb5-b8ff-9c184284bdce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd87ee1ff-7b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 04:57:24 np0005486808 nova_compute[259627]: 2025-10-14 08:57:24.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:24 np0005486808 nova_compute[259627]: 2025-10-14 08:57:24.446 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd87ee1ff-7b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:57:24 np0005486808 nova_compute[259627]: 2025-10-14 08:57:24.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:24 np0005486808 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[286975]: [NOTICE]   (286979) : haproxy version is 2.8.14-c23fe91
Oct 14 04:57:24 np0005486808 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[286975]: [NOTICE]   (286979) : path to executable is /usr/sbin/haproxy
Oct 14 04:57:24 np0005486808 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[286975]: [WARNING]  (286979) : Exiting Master process...
Oct 14 04:57:24 np0005486808 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[286975]: [WARNING]  (286979) : Exiting Master process...
Oct 14 04:57:24 np0005486808 nova_compute[259627]: 2025-10-14 08:57:24.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 04:57:24 np0005486808 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[286975]: [ALERT]    (286979) : Current worker (286983) exited with code 143 (Terminated)
Oct 14 04:57:24 np0005486808 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[286975]: [WARNING]  (286979) : All workers exited. Exiting... (0)
Oct 14 04:57:24 np0005486808 systemd[1]: libpod-e5b12f221f634ed5cc9715bfd56f99b325b3855eaa31c1a197aff198eb6da536.scope: Deactivated successfully.
Oct 14 04:57:24 np0005486808 nova_compute[259627]: 2025-10-14 08:57:24.457 2 INFO os_vif [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:e1:2a,bridge_name='br-int',has_traffic_filtering=True,id=d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb,network=Network(58d74886-d603-4fb5-b8ff-9c184284bdce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd87ee1ff-7b')#033[00m
Oct 14 04:57:24 np0005486808 podman[287397]: 2025-10-14 08:57:24.464111411 +0000 UTC m=+0.080923279 container died e5b12f221f634ed5cc9715bfd56f99b325b3855eaa31c1a197aff198eb6da536 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 14 04:57:24 np0005486808 systemd[1]: var-lib-containers-storage-overlay-3cee4e261b09b2f84c9173291ee5a39f3882304b4c7d47e1f06ce36bd1963e8e-merged.mount: Deactivated successfully.
Oct 14 04:57:24 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e5b12f221f634ed5cc9715bfd56f99b325b3855eaa31c1a197aff198eb6da536-userdata-shm.mount: Deactivated successfully.
Oct 14 04:57:24 np0005486808 podman[287397]: 2025-10-14 08:57:24.514256042 +0000 UTC m=+0.131067920 container cleanup e5b12f221f634ed5cc9715bfd56f99b325b3855eaa31c1a197aff198eb6da536 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 14 04:57:24 np0005486808 systemd[1]: libpod-conmon-e5b12f221f634ed5cc9715bfd56f99b325b3855eaa31c1a197aff198eb6da536.scope: Deactivated successfully.
Oct 14 04:57:24 np0005486808 nova_compute[259627]: 2025-10-14 08:57:24.552 2 DEBUG nova.compute.manager [req-e064658c-9af4-4a28-85fd-cfa17c71d4c3 req-9de1b09b-bfe8-4b9c-acfa-6292aee7599f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Received event network-vif-unplugged-d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:57:24 np0005486808 nova_compute[259627]: 2025-10-14 08:57:24.553 2 DEBUG oslo_concurrency.lockutils [req-e064658c-9af4-4a28-85fd-cfa17c71d4c3 req-9de1b09b-bfe8-4b9c-acfa-6292aee7599f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:24 np0005486808 nova_compute[259627]: 2025-10-14 08:57:24.553 2 DEBUG oslo_concurrency.lockutils [req-e064658c-9af4-4a28-85fd-cfa17c71d4c3 req-9de1b09b-bfe8-4b9c-acfa-6292aee7599f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:24 np0005486808 nova_compute[259627]: 2025-10-14 08:57:24.554 2 DEBUG oslo_concurrency.lockutils [req-e064658c-9af4-4a28-85fd-cfa17c71d4c3 req-9de1b09b-bfe8-4b9c-acfa-6292aee7599f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:24 np0005486808 nova_compute[259627]: 2025-10-14 08:57:24.554 2 DEBUG nova.compute.manager [req-e064658c-9af4-4a28-85fd-cfa17c71d4c3 req-9de1b09b-bfe8-4b9c-acfa-6292aee7599f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] No waiting events found dispatching network-vif-unplugged-d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:57:24 np0005486808 nova_compute[259627]: 2025-10-14 08:57:24.554 2 DEBUG nova.compute.manager [req-e064658c-9af4-4a28-85fd-cfa17c71d4c3 req-9de1b09b-bfe8-4b9c-acfa-6292aee7599f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Received event network-vif-unplugged-d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 04:57:24 np0005486808 podman[287449]: 2025-10-14 08:57:24.597433324 +0000 UTC m=+0.048472461 container remove e5b12f221f634ed5cc9715bfd56f99b325b3855eaa31c1a197aff198eb6da536 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 04:57:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:24.604 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[53079310-3f8b-4924-9df9-3ca42f649e73]: (4, ('Tue Oct 14 08:57:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce (e5b12f221f634ed5cc9715bfd56f99b325b3855eaa31c1a197aff198eb6da536)\ne5b12f221f634ed5cc9715bfd56f99b325b3855eaa31c1a197aff198eb6da536\nTue Oct 14 08:57:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce (e5b12f221f634ed5cc9715bfd56f99b325b3855eaa31c1a197aff198eb6da536)\ne5b12f221f634ed5cc9715bfd56f99b325b3855eaa31c1a197aff198eb6da536\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:24.606 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9d8a8321-61ae-43bf-9e45-777ac5112e64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:24.607 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58d74886-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:57:24 np0005486808 kernel: tap58d74886-d0: left promiscuous mode
Oct 14 04:57:24 np0005486808 nova_compute[259627]: 2025-10-14 08:57:24.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:24 np0005486808 nova_compute[259627]: 2025-10-14 08:57:24.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:24.628 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[09c4c6cb-d564-46a2-ab76-2ecf38bff814]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:24.650 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1e64b0bd-e9e3-4612-ab60-923a1fd3fd21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:24.651 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[55fb8494-bba7-4546-bcc5-a9839a272900]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:24.665 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5e9924ce-26b2-4c54-b1a0-98625aa06063]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 601098, 'reachable_time': 27715, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287464, 'error': None, 'target': 'ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:24 np0005486808 systemd[1]: run-netns-ovnmeta\x2d58d74886\x2dd603\x2d4fb5\x2db8ff\x2d9c184284bdce.mount: Deactivated successfully.
Oct 14 04:57:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:24.670 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 04:57:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:24.670 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[6c72af89-038d-4ee8-a881-8049e47a2e46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:24 np0005486808 nova_compute[259627]: 2025-10-14 08:57:24.717 2 DEBUG nova.compute.manager [req-b9da01fb-57ce-4359-9c1d-ce311e98d904 req-e2883e82-3130-412d-9b85-272c0ff074c4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Received event network-vif-deleted-499f3731-d66b-4964-b5a8-387adacf5166 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:57:24 np0005486808 nova_compute[259627]: 2025-10-14 08:57:24.846 2 INFO nova.virt.libvirt.driver [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Deleting instance files /var/lib/nova/instances/cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a_del#033[00m
Oct 14 04:57:24 np0005486808 nova_compute[259627]: 2025-10-14 08:57:24.848 2 INFO nova.virt.libvirt.driver [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Deletion of /var/lib/nova/instances/cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a_del complete#033[00m
Oct 14 04:57:24 np0005486808 nova_compute[259627]: 2025-10-14 08:57:24.897 2 INFO nova.compute.manager [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 04:57:24 np0005486808 nova_compute[259627]: 2025-10-14 08:57:24.898 2 DEBUG oslo.service.loopingcall [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 04:57:24 np0005486808 nova_compute[259627]: 2025-10-14 08:57:24.899 2 DEBUG nova.compute.manager [-] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 04:57:24 np0005486808 nova_compute[259627]: 2025-10-14 08:57:24.899 2 DEBUG nova.network.neutron [-] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 04:57:25 np0005486808 nova_compute[259627]: 2025-10-14 08:57:25.480 2 DEBUG nova.network.neutron [-] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:57:25 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 121 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 7.6 MiB/s wr, 431 op/s
Oct 14 04:57:25 np0005486808 nova_compute[259627]: 2025-10-14 08:57:25.518 2 INFO nova.compute.manager [-] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Took 0.62 seconds to deallocate network for instance.#033[00m
Oct 14 04:57:25 np0005486808 nova_compute[259627]: 2025-10-14 08:57:25.607 2 DEBUG oslo_concurrency.lockutils [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:25 np0005486808 nova_compute[259627]: 2025-10-14 08:57:25.608 2 DEBUG oslo_concurrency.lockutils [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:25 np0005486808 nova_compute[259627]: 2025-10-14 08:57:25.673 2 DEBUG oslo_concurrency.processutils [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:57:26 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:57:26 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/587001898' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:57:26 np0005486808 nova_compute[259627]: 2025-10-14 08:57:26.157 2 DEBUG oslo_concurrency.processutils [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:57:26 np0005486808 nova_compute[259627]: 2025-10-14 08:57:26.162 2 DEBUG nova.compute.provider_tree [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:57:26 np0005486808 nova_compute[259627]: 2025-10-14 08:57:26.190 2 DEBUG nova.scheduler.client.report [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:57:26 np0005486808 nova_compute[259627]: 2025-10-14 08:57:26.220 2 DEBUG oslo_concurrency.lockutils [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:26 np0005486808 nova_compute[259627]: 2025-10-14 08:57:26.246 2 INFO nova.scheduler.client.report [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Deleted allocations for instance cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a#033[00m
Oct 14 04:57:26 np0005486808 nova_compute[259627]: 2025-10-14 08:57:26.326 2 DEBUG oslo_concurrency.lockutils [None req-1eafa39e-1705-47c0-a32d-272d29c79f00 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.151s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:26 np0005486808 nova_compute[259627]: 2025-10-14 08:57:26.750 2 DEBUG nova.compute.manager [req-abf447e6-01c6-4a71-952a-4f3a2dcfdfa8 req-46b8f4c0-89ed-4b90-8ab4-90c39c68ec31 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Received event network-vif-plugged-d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:57:26 np0005486808 nova_compute[259627]: 2025-10-14 08:57:26.750 2 DEBUG oslo_concurrency.lockutils [req-abf447e6-01c6-4a71-952a-4f3a2dcfdfa8 req-46b8f4c0-89ed-4b90-8ab4-90c39c68ec31 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:26 np0005486808 nova_compute[259627]: 2025-10-14 08:57:26.751 2 DEBUG oslo_concurrency.lockutils [req-abf447e6-01c6-4a71-952a-4f3a2dcfdfa8 req-46b8f4c0-89ed-4b90-8ab4-90c39c68ec31 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:26 np0005486808 nova_compute[259627]: 2025-10-14 08:57:26.751 2 DEBUG oslo_concurrency.lockutils [req-abf447e6-01c6-4a71-952a-4f3a2dcfdfa8 req-46b8f4c0-89ed-4b90-8ab4-90c39c68ec31 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:26 np0005486808 nova_compute[259627]: 2025-10-14 08:57:26.751 2 DEBUG nova.compute.manager [req-abf447e6-01c6-4a71-952a-4f3a2dcfdfa8 req-46b8f4c0-89ed-4b90-8ab4-90c39c68ec31 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] No waiting events found dispatching network-vif-plugged-d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:57:26 np0005486808 nova_compute[259627]: 2025-10-14 08:57:26.752 2 WARNING nova.compute.manager [req-abf447e6-01c6-4a71-952a-4f3a2dcfdfa8 req-46b8f4c0-89ed-4b90-8ab4-90c39c68ec31 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Received unexpected event network-vif-plugged-d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb for instance with vm_state deleted and task_state None.#033[00m
Oct 14 04:57:26 np0005486808 nova_compute[259627]: 2025-10-14 08:57:26.752 2 DEBUG nova.compute.manager [req-abf447e6-01c6-4a71-952a-4f3a2dcfdfa8 req-46b8f4c0-89ed-4b90-8ab4-90c39c68ec31 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Received event network-vif-deleted-d87ee1ff-7b3b-4020-b6ac-b1aa7b688adb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:57:26 np0005486808 nova_compute[259627]: 2025-10-14 08:57:26.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 14 04:57:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Oct 14 04:57:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Oct 14 04:57:27 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Oct 14 04:57:27 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 305 active+clean; 121 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 368 KiB/s rd, 3.6 MiB/s wr, 212 op/s
Oct 14 04:57:29 np0005486808 nova_compute[259627]: 2025-10-14 08:57:29.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:29 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1172: 305 pgs: 305 active+clean; 121 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 312 KiB/s rd, 3.1 MiB/s wr, 179 op/s
Oct 14 04:57:29 np0005486808 podman[287491]: 2025-10-14 08:57:29.656002283 +0000 UTC m=+0.064724401 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:57:29 np0005486808 podman[287492]: 2025-10-14 08:57:29.663823795 +0000 UTC m=+0.066246908 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible)
Oct 14 04:57:30 np0005486808 nova_compute[259627]: 2025-10-14 08:57:30.993 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432235.9921277, 753ea698-6cc6-4a73-a0d2-1366e5374a9c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:57:30 np0005486808 nova_compute[259627]: 2025-10-14 08:57:30.993 2 INFO nova.compute.manager [-] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] VM Stopped (Lifecycle Event)#033[00m
Oct 14 04:57:31 np0005486808 nova_compute[259627]: 2025-10-14 08:57:31.015 2 DEBUG nova.compute.manager [None req-8e0ac198-8c85-4a0a-8761-1c5e8049edd5 - - - - - -] [instance: 753ea698-6cc6-4a73-a0d2-1366e5374a9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:57:31 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1173: 305 pgs: 305 active+clean; 121 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 256 KiB/s rd, 2.5 MiB/s wr, 147 op/s
Oct 14 04:57:31 np0005486808 nova_compute[259627]: 2025-10-14 08:57:31.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:32 np0005486808 nova_compute[259627]: 2025-10-14 08:57:32.204 2 DEBUG oslo_concurrency.lockutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "fc212c27-f5c2-4656-9c1f-7c39234fea45" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:32 np0005486808 nova_compute[259627]: 2025-10-14 08:57:32.205 2 DEBUG oslo_concurrency.lockutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "fc212c27-f5c2-4656-9c1f-7c39234fea45" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:32 np0005486808 nova_compute[259627]: 2025-10-14 08:57:32.226 2 DEBUG nova.compute.manager [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 04:57:32 np0005486808 nova_compute[259627]: 2025-10-14 08:57:32.307 2 DEBUG oslo_concurrency.lockutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:32 np0005486808 nova_compute[259627]: 2025-10-14 08:57:32.308 2 DEBUG oslo_concurrency.lockutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:32 np0005486808 nova_compute[259627]: 2025-10-14 08:57:32.313 2 DEBUG nova.virt.hardware [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 04:57:32 np0005486808 nova_compute[259627]: 2025-10-14 08:57:32.313 2 INFO nova.compute.claims [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 04:57:32 np0005486808 nova_compute[259627]: 2025-10-14 08:57:32.454 2 DEBUG oslo_concurrency.processutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:57:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:32.462 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:57:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:32.464 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 14 04:57:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 04:57:32 np0005486808 nova_compute[259627]: 2025-10-14 08:57:32.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:57:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:57:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_08:57:32
Oct 14 04:57:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 04:57:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 04:57:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'volumes', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'default.rgw.control', 'images', 'vms']
Oct 14 04:57:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 04:57:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:57:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:57:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:57:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:57:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:57:32 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1016741972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:57:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 04:57:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:57:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 04:57:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:57:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:57:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:57:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:57:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:57:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:57:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:57:32 np0005486808 nova_compute[259627]: 2025-10-14 08:57:32.964 2 DEBUG oslo_concurrency.processutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:57:32 np0005486808 nova_compute[259627]: 2025-10-14 08:57:32.972 2 DEBUG nova.compute.provider_tree [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:57:33 np0005486808 nova_compute[259627]: 2025-10-14 08:57:33.003 2 DEBUG nova.scheduler.client.report [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:57:33 np0005486808 nova_compute[259627]: 2025-10-14 08:57:33.026 2 DEBUG oslo_concurrency.lockutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:33 np0005486808 nova_compute[259627]: 2025-10-14 08:57:33.027 2 DEBUG nova.compute.manager [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 04:57:33 np0005486808 nova_compute[259627]: 2025-10-14 08:57:33.073 2 DEBUG nova.compute.manager [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 04:57:33 np0005486808 nova_compute[259627]: 2025-10-14 08:57:33.073 2 DEBUG nova.network.neutron [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 04:57:33 np0005486808 nova_compute[259627]: 2025-10-14 08:57:33.163 2 INFO nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 04:57:33 np0005486808 nova_compute[259627]: 2025-10-14 08:57:33.189 2 DEBUG nova.compute.manager [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 04:57:33 np0005486808 nova_compute[259627]: 2025-10-14 08:57:33.324 2 DEBUG nova.compute.manager [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 04:57:33 np0005486808 nova_compute[259627]: 2025-10-14 08:57:33.326 2 DEBUG nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 04:57:33 np0005486808 nova_compute[259627]: 2025-10-14 08:57:33.326 2 INFO nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Creating image(s)#033[00m
Oct 14 04:57:33 np0005486808 nova_compute[259627]: 2025-10-14 08:57:33.351 2 DEBUG nova.storage.rbd_utils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] rbd image fc212c27-f5c2-4656-9c1f-7c39234fea45_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:57:33 np0005486808 nova_compute[259627]: 2025-10-14 08:57:33.376 2 DEBUG nova.storage.rbd_utils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] rbd image fc212c27-f5c2-4656-9c1f-7c39234fea45_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:57:33 np0005486808 nova_compute[259627]: 2025-10-14 08:57:33.402 2 DEBUG nova.storage.rbd_utils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] rbd image fc212c27-f5c2-4656-9c1f-7c39234fea45_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:57:33 np0005486808 nova_compute[259627]: 2025-10-14 08:57:33.406 2 DEBUG oslo_concurrency.processutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:57:33 np0005486808 nova_compute[259627]: 2025-10-14 08:57:33.441 2 DEBUG nova.policy [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'aafd6ad40c944c3eb14e7fbf454040c3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f24bbeb2f91141e294590ca2afc5ed42', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 04:57:33 np0005486808 nova_compute[259627]: 2025-10-14 08:57:33.486 2 DEBUG oslo_concurrency.lockutils [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquiring lock "548bff7e-531b-4f5d-b4d3-18d586f46581" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:33 np0005486808 nova_compute[259627]: 2025-10-14 08:57:33.487 2 DEBUG oslo_concurrency.lockutils [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "548bff7e-531b-4f5d-b4d3-18d586f46581" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:33 np0005486808 nova_compute[259627]: 2025-10-14 08:57:33.487 2 DEBUG oslo_concurrency.lockutils [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquiring lock "548bff7e-531b-4f5d-b4d3-18d586f46581-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:33 np0005486808 nova_compute[259627]: 2025-10-14 08:57:33.488 2 DEBUG oslo_concurrency.lockutils [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "548bff7e-531b-4f5d-b4d3-18d586f46581-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:33 np0005486808 nova_compute[259627]: 2025-10-14 08:57:33.488 2 DEBUG oslo_concurrency.lockutils [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "548bff7e-531b-4f5d-b4d3-18d586f46581-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:33 np0005486808 nova_compute[259627]: 2025-10-14 08:57:33.489 2 INFO nova.compute.manager [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Terminating instance#033[00m
Oct 14 04:57:33 np0005486808 nova_compute[259627]: 2025-10-14 08:57:33.490 2 DEBUG nova.compute.manager [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 04:57:33 np0005486808 nova_compute[259627]: 2025-10-14 08:57:33.501 2 DEBUG oslo_concurrency.processutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:57:33 np0005486808 nova_compute[259627]: 2025-10-14 08:57:33.501 2 DEBUG oslo_concurrency.lockutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:33 np0005486808 nova_compute[259627]: 2025-10-14 08:57:33.502 2 DEBUG oslo_concurrency.lockutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:33 np0005486808 nova_compute[259627]: 2025-10-14 08:57:33.502 2 DEBUG oslo_concurrency.lockutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:33 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 305 active+clean; 121 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 249 KiB/s rd, 2.4 MiB/s wr, 143 op/s
Oct 14 04:57:33 np0005486808 nova_compute[259627]: 2025-10-14 08:57:33.528 2 DEBUG nova.storage.rbd_utils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] rbd image fc212c27-f5c2-4656-9c1f-7c39234fea45_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:57:33 np0005486808 nova_compute[259627]: 2025-10-14 08:57:33.531 2 DEBUG oslo_concurrency.processutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 fc212c27-f5c2-4656-9c1f-7c39234fea45_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:57:33 np0005486808 kernel: tap7da6c99d-4e (unregistering): left promiscuous mode
Oct 14 04:57:33 np0005486808 NetworkManager[44885]: <info>  [1760432253.5457] device (tap7da6c99d-4e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 04:57:33 np0005486808 nova_compute[259627]: 2025-10-14 08:57:33.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:33 np0005486808 ovn_controller[152662]: 2025-10-14T08:57:33Z|00128|binding|INFO|Releasing lport 7da6c99d-4e04-4c0b-b4d0-d32a2e19c462 from this chassis (sb_readonly=0)
Oct 14 04:57:33 np0005486808 ovn_controller[152662]: 2025-10-14T08:57:33Z|00129|binding|INFO|Setting lport 7da6c99d-4e04-4c0b-b4d0-d32a2e19c462 down in Southbound
Oct 14 04:57:33 np0005486808 ovn_controller[152662]: 2025-10-14T08:57:33Z|00130|binding|INFO|Removing iface tap7da6c99d-4e ovn-installed in OVS
Oct 14 04:57:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:33.602 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:2f:21 10.100.0.9'], port_security=['fa:16:3e:a5:2f:21 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '548bff7e-531b-4f5d-b4d3-18d586f46581', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9bf65c21e4104af6981b071561617657', 'neutron:revision_number': '6', 'neutron:security_group_ids': '4c703b9d-d52b-458a-9ec5-48a1addb8499 6ea635a1-0626-4474-9bd2-f9cda3d316d5 e50f77b0-42a8-4edd-9a84-05435c5fb458', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b2c96ef8-3846-498d-b4a9-f6fd46fb5d04, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=7da6c99d-4e04-4c0b-b4d0-d32a2e19c462) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:57:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:33.605 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 7da6c99d-4e04-4c0b-b4d0-d32a2e19c462 in datapath 58ff48d6-a644-40e6-8fc9-ee19b4354df9 unbound from our chassis#033[00m
Oct 14 04:57:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:33.606 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 58ff48d6-a644-40e6-8fc9-ee19b4354df9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 04:57:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:33.607 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[07532e72-f289-4531-83ab-0fa0706160b9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:33.608 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9 namespace which is not needed anymore#033[00m
Oct 14 04:57:33 np0005486808 nova_compute[259627]: 2025-10-14 08:57:33.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:33 np0005486808 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Oct 14 04:57:33 np0005486808 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Consumed 15.089s CPU time.
Oct 14 04:57:33 np0005486808 systemd-machined[214636]: Machine qemu-13-instance-0000000d terminated.
Oct 14 04:57:33 np0005486808 nova_compute[259627]: 2025-10-14 08:57:33.724 2 INFO nova.virt.libvirt.driver [-] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Instance destroyed successfully.#033[00m
Oct 14 04:57:33 np0005486808 nova_compute[259627]: 2025-10-14 08:57:33.725 2 DEBUG nova.objects.instance [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lazy-loading 'resources' on Instance uuid 548bff7e-531b-4f5d-b4d3-18d586f46581 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:57:33 np0005486808 neutron-haproxy-ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9[282796]: [NOTICE]   (282801) : haproxy version is 2.8.14-c23fe91
Oct 14 04:57:33 np0005486808 neutron-haproxy-ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9[282796]: [NOTICE]   (282801) : path to executable is /usr/sbin/haproxy
Oct 14 04:57:33 np0005486808 neutron-haproxy-ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9[282796]: [WARNING]  (282801) : Exiting Master process...
Oct 14 04:57:33 np0005486808 neutron-haproxy-ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9[282796]: [WARNING]  (282801) : Exiting Master process...
Oct 14 04:57:33 np0005486808 neutron-haproxy-ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9[282796]: [ALERT]    (282801) : Current worker (282804) exited with code 143 (Terminated)
Oct 14 04:57:33 np0005486808 neutron-haproxy-ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9[282796]: [WARNING]  (282801) : All workers exited. Exiting... (0)
Oct 14 04:57:33 np0005486808 nova_compute[259627]: 2025-10-14 08:57:33.750 2 DEBUG nova.virt.libvirt.vif [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:56:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1911316539',display_name='tempest-SecurityGroupsTestJSON-server-1911316539',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1911316539',id=13,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:56:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9bf65c21e4104af6981b071561617657',ramdisk_id='',reservation_id='r-xovv24ns',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-663845074',owner_user_name='tempest-SecurityGroupsTestJSON-663845074-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:56:34Z,user_data=None,user_id='f50e2582d63041b682c71a379f763c0e',uuid=548bff7e-531b-4f5d-b4d3-18d586f46581,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "address": "fa:16:3e:a5:2f:21", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da6c99d-4e", "ovs_interfaceid": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 04:57:33 np0005486808 nova_compute[259627]: 2025-10-14 08:57:33.751 2 DEBUG nova.network.os_vif_util [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Converting VIF {"id": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "address": "fa:16:3e:a5:2f:21", "network": {"id": "58ff48d6-a644-40e6-8fc9-ee19b4354df9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1653725355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bf65c21e4104af6981b071561617657", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7da6c99d-4e", "ovs_interfaceid": "7da6c99d-4e04-4c0b-b4d0-d32a2e19c462", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:57:33 np0005486808 nova_compute[259627]: 2025-10-14 08:57:33.752 2 DEBUG nova.network.os_vif_util [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a5:2f:21,bridge_name='br-int',has_traffic_filtering=True,id=7da6c99d-4e04-4c0b-b4d0-d32a2e19c462,network=Network(58ff48d6-a644-40e6-8fc9-ee19b4354df9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7da6c99d-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:57:33 np0005486808 systemd[1]: libpod-6cd10f8a39ccc185c66e3288af243dbf478b93f26b821cd4a99cecc03dbfc765.scope: Deactivated successfully.
Oct 14 04:57:33 np0005486808 nova_compute[259627]: 2025-10-14 08:57:33.752 2 DEBUG os_vif [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:2f:21,bridge_name='br-int',has_traffic_filtering=True,id=7da6c99d-4e04-4c0b-b4d0-d32a2e19c462,network=Network(58ff48d6-a644-40e6-8fc9-ee19b4354df9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7da6c99d-4e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 04:57:33 np0005486808 nova_compute[259627]: 2025-10-14 08:57:33.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:33 np0005486808 nova_compute[259627]: 2025-10-14 08:57:33.755 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7da6c99d-4e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:57:33 np0005486808 nova_compute[259627]: 2025-10-14 08:57:33.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:33 np0005486808 nova_compute[259627]: 2025-10-14 08:57:33.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:33 np0005486808 podman[287670]: 2025-10-14 08:57:33.759614221 +0000 UTC m=+0.057034472 container died 6cd10f8a39ccc185c66e3288af243dbf478b93f26b821cd4a99cecc03dbfc765 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 14 04:57:33 np0005486808 nova_compute[259627]: 2025-10-14 08:57:33.761 2 INFO os_vif [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:2f:21,bridge_name='br-int',has_traffic_filtering=True,id=7da6c99d-4e04-4c0b-b4d0-d32a2e19c462,network=Network(58ff48d6-a644-40e6-8fc9-ee19b4354df9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7da6c99d-4e')#033[00m
Oct 14 04:57:33 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6cd10f8a39ccc185c66e3288af243dbf478b93f26b821cd4a99cecc03dbfc765-userdata-shm.mount: Deactivated successfully.
Oct 14 04:57:33 np0005486808 systemd[1]: var-lib-containers-storage-overlay-47fd54790ff5fec54b5deca5f48d8cccc184d3f02bd560c1168873a2e4409722-merged.mount: Deactivated successfully.
Oct 14 04:57:33 np0005486808 podman[287670]: 2025-10-14 08:57:33.803319564 +0000 UTC m=+0.100739815 container cleanup 6cd10f8a39ccc185c66e3288af243dbf478b93f26b821cd4a99cecc03dbfc765 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 04:57:33 np0005486808 systemd[1]: libpod-conmon-6cd10f8a39ccc185c66e3288af243dbf478b93f26b821cd4a99cecc03dbfc765.scope: Deactivated successfully.
Oct 14 04:57:33 np0005486808 nova_compute[259627]: 2025-10-14 08:57:33.840 2 DEBUG oslo_concurrency.processutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 fc212c27-f5c2-4656-9c1f-7c39234fea45_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.308s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:57:33 np0005486808 podman[287731]: 2025-10-14 08:57:33.865350597 +0000 UTC m=+0.040493325 container remove 6cd10f8a39ccc185c66e3288af243dbf478b93f26b821cd4a99cecc03dbfc765 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 14 04:57:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:33.873 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[aa32b163-5c1b-492e-814f-31df3ac564fd]: (4, ('Tue Oct 14 08:57:33 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9 (6cd10f8a39ccc185c66e3288af243dbf478b93f26b821cd4a99cecc03dbfc765)\n6cd10f8a39ccc185c66e3288af243dbf478b93f26b821cd4a99cecc03dbfc765\nTue Oct 14 08:57:33 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9 (6cd10f8a39ccc185c66e3288af243dbf478b93f26b821cd4a99cecc03dbfc765)\n6cd10f8a39ccc185c66e3288af243dbf478b93f26b821cd4a99cecc03dbfc765\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:33.875 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cb8af4e0-d02f-4546-84ab-4b1c8cb53644]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:33.875 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58ff48d6-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:57:33 np0005486808 kernel: tap58ff48d6-a0: left promiscuous mode
Oct 14 04:57:33 np0005486808 nova_compute[259627]: 2025-10-14 08:57:33.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:33 np0005486808 nova_compute[259627]: 2025-10-14 08:57:33.891 2 DEBUG nova.storage.rbd_utils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] resizing rbd image fc212c27-f5c2-4656-9c1f-7c39234fea45_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 04:57:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:33.896 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[12fbbcdc-bda9-4ffe-ae19-65e64c9ad43f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:33 np0005486808 nova_compute[259627]: 2025-10-14 08:57:33.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:33.922 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[79db778c-61ba-455d-bfbc-d0a00d10962a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:33.923 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b50ab952-040d-4c27-af45-67024eb018d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:33.937 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7958beac-91b8-442f-917b-96c1b3ff5412]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597126, 'reachable_time': 38808, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287801, 'error': None, 'target': 'ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:33 np0005486808 systemd[1]: run-netns-ovnmeta\x2d58ff48d6\x2da644\x2d40e6\x2d8fc9\x2dee19b4354df9.mount: Deactivated successfully.
Oct 14 04:57:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:33.939 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-58ff48d6-a644-40e6-8fc9-ee19b4354df9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 04:57:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:33.939 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[07cf275c-53b6-46a9-9500-6eb7e18d2bed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:34 np0005486808 nova_compute[259627]: 2025-10-14 08:57:34.004 2 DEBUG nova.objects.instance [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lazy-loading 'migration_context' on Instance uuid fc212c27-f5c2-4656-9c1f-7c39234fea45 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:57:34 np0005486808 nova_compute[259627]: 2025-10-14 08:57:34.040 2 DEBUG nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 04:57:34 np0005486808 nova_compute[259627]: 2025-10-14 08:57:34.041 2 DEBUG nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Ensure instance console log exists: /var/lib/nova/instances/fc212c27-f5c2-4656-9c1f-7c39234fea45/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 04:57:34 np0005486808 nova_compute[259627]: 2025-10-14 08:57:34.043 2 DEBUG oslo_concurrency.lockutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:34 np0005486808 nova_compute[259627]: 2025-10-14 08:57:34.044 2 DEBUG oslo_concurrency.lockutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:34 np0005486808 nova_compute[259627]: 2025-10-14 08:57:34.044 2 DEBUG oslo_concurrency.lockutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:34 np0005486808 nova_compute[259627]: 2025-10-14 08:57:34.147 2 DEBUG nova.network.neutron [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Successfully created port: bbcf1d8e-1698-4198-a641-527122f98e09 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 04:57:34 np0005486808 nova_compute[259627]: 2025-10-14 08:57:34.184 2 INFO nova.virt.libvirt.driver [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Deleting instance files /var/lib/nova/instances/548bff7e-531b-4f5d-b4d3-18d586f46581_del#033[00m
Oct 14 04:57:34 np0005486808 nova_compute[259627]: 2025-10-14 08:57:34.185 2 INFO nova.virt.libvirt.driver [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Deletion of /var/lib/nova/instances/548bff7e-531b-4f5d-b4d3-18d586f46581_del complete#033[00m
Oct 14 04:57:34 np0005486808 nova_compute[259627]: 2025-10-14 08:57:34.240 2 INFO nova.compute.manager [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 04:57:34 np0005486808 nova_compute[259627]: 2025-10-14 08:57:34.241 2 DEBUG oslo.service.loopingcall [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 04:57:34 np0005486808 nova_compute[259627]: 2025-10-14 08:57:34.242 2 DEBUG nova.compute.manager [-] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 04:57:34 np0005486808 nova_compute[259627]: 2025-10-14 08:57:34.242 2 DEBUG nova.network.neutron [-] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 04:57:34 np0005486808 nova_compute[259627]: 2025-10-14 08:57:34.918 2 DEBUG nova.network.neutron [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Successfully updated port: bbcf1d8e-1698-4198-a641-527122f98e09 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 04:57:34 np0005486808 nova_compute[259627]: 2025-10-14 08:57:34.947 2 DEBUG oslo_concurrency.lockutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "refresh_cache-fc212c27-f5c2-4656-9c1f-7c39234fea45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:57:34 np0005486808 nova_compute[259627]: 2025-10-14 08:57:34.947 2 DEBUG oslo_concurrency.lockutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquired lock "refresh_cache-fc212c27-f5c2-4656-9c1f-7c39234fea45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:57:34 np0005486808 nova_compute[259627]: 2025-10-14 08:57:34.947 2 DEBUG nova.network.neutron [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 04:57:35 np0005486808 nova_compute[259627]: 2025-10-14 08:57:35.087 2 DEBUG nova.network.neutron [-] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:57:35 np0005486808 nova_compute[259627]: 2025-10-14 08:57:35.124 2 INFO nova.compute.manager [-] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Took 0.88 seconds to deallocate network for instance.#033[00m
Oct 14 04:57:35 np0005486808 nova_compute[259627]: 2025-10-14 08:57:35.177 2 DEBUG nova.compute.manager [req-38a9e447-5750-4244-8747-31c9ae9230d5 req-8381449e-9450-43f4-81e0-f129b5ec3382 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Received event network-changed-bbcf1d8e-1698-4198-a641-527122f98e09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:57:35 np0005486808 nova_compute[259627]: 2025-10-14 08:57:35.177 2 DEBUG nova.compute.manager [req-38a9e447-5750-4244-8747-31c9ae9230d5 req-8381449e-9450-43f4-81e0-f129b5ec3382 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Refreshing instance network info cache due to event network-changed-bbcf1d8e-1698-4198-a641-527122f98e09. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 04:57:35 np0005486808 nova_compute[259627]: 2025-10-14 08:57:35.178 2 DEBUG oslo_concurrency.lockutils [req-38a9e447-5750-4244-8747-31c9ae9230d5 req-8381449e-9450-43f4-81e0-f129b5ec3382 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-fc212c27-f5c2-4656-9c1f-7c39234fea45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:57:35 np0005486808 nova_compute[259627]: 2025-10-14 08:57:35.192 2 DEBUG oslo_concurrency.lockutils [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:35 np0005486808 nova_compute[259627]: 2025-10-14 08:57:35.192 2 DEBUG oslo_concurrency.lockutils [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:35 np0005486808 nova_compute[259627]: 2025-10-14 08:57:35.222 2 DEBUG nova.network.neutron [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 04:57:35 np0005486808 nova_compute[259627]: 2025-10-14 08:57:35.250 2 DEBUG nova.compute.manager [req-700d3bf5-bfa0-4d57-9078-baa99c8ecf29 req-fac76693-345e-49d6-a8de-a40a9c8c025b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Received event network-vif-deleted-7da6c99d-4e04-4c0b-b4d0-d32a2e19c462 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:57:35 np0005486808 nova_compute[259627]: 2025-10-14 08:57:35.257 2 DEBUG oslo_concurrency.processutils [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:57:35 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1175: 305 pgs: 305 active+clean; 88 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 14 04:57:35 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:57:35 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1185393835' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:57:35 np0005486808 nova_compute[259627]: 2025-10-14 08:57:35.762 2 DEBUG oslo_concurrency.processutils [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:57:35 np0005486808 nova_compute[259627]: 2025-10-14 08:57:35.768 2 DEBUG nova.compute.provider_tree [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:57:35 np0005486808 nova_compute[259627]: 2025-10-14 08:57:35.792 2 DEBUG nova.scheduler.client.report [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:57:35 np0005486808 nova_compute[259627]: 2025-10-14 08:57:35.836 2 DEBUG oslo_concurrency.lockutils [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:35 np0005486808 nova_compute[259627]: 2025-10-14 08:57:35.874 2 INFO nova.scheduler.client.report [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Deleted allocations for instance 548bff7e-531b-4f5d-b4d3-18d586f46581#033[00m
Oct 14 04:57:35 np0005486808 nova_compute[259627]: 2025-10-14 08:57:35.969 2 DEBUG oslo_concurrency.lockutils [None req-ae55dc31-55d4-47d5-9888-8405bb5cf7d2 f50e2582d63041b682c71a379f763c0e 9bf65c21e4104af6981b071561617657 - - default default] Lock "548bff7e-531b-4f5d-b4d3-18d586f46581" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.482s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:36 np0005486808 nova_compute[259627]: 2025-10-14 08:57:36.644 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432241.643745, 01db05f6-07fb-41b5-8aaf-27ad5712fcda => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:57:36 np0005486808 nova_compute[259627]: 2025-10-14 08:57:36.645 2 INFO nova.compute.manager [-] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] VM Stopped (Lifecycle Event)#033[00m
Oct 14 04:57:36 np0005486808 nova_compute[259627]: 2025-10-14 08:57:36.665 2 DEBUG nova.compute.manager [None req-755909fe-ee97-403d-8b10-2342e1aba93a - - - - - -] [instance: 01db05f6-07fb-41b5-8aaf-27ad5712fcda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:57:36 np0005486808 nova_compute[259627]: 2025-10-14 08:57:36.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:37 np0005486808 nova_compute[259627]: 2025-10-14 08:57:37.151 2 DEBUG nova.network.neutron [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Updating instance_info_cache with network_info: [{"id": "bbcf1d8e-1698-4198-a641-527122f98e09", "address": "fa:16:3e:03:88:18", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbbcf1d8e-16", "ovs_interfaceid": "bbcf1d8e-1698-4198-a641-527122f98e09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:57:37 np0005486808 nova_compute[259627]: 2025-10-14 08:57:37.169 2 DEBUG oslo_concurrency.lockutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Releasing lock "refresh_cache-fc212c27-f5c2-4656-9c1f-7c39234fea45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:57:37 np0005486808 nova_compute[259627]: 2025-10-14 08:57:37.170 2 DEBUG nova.compute.manager [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Instance network_info: |[{"id": "bbcf1d8e-1698-4198-a641-527122f98e09", "address": "fa:16:3e:03:88:18", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbbcf1d8e-16", "ovs_interfaceid": "bbcf1d8e-1698-4198-a641-527122f98e09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 04:57:37 np0005486808 nova_compute[259627]: 2025-10-14 08:57:37.170 2 DEBUG oslo_concurrency.lockutils [req-38a9e447-5750-4244-8747-31c9ae9230d5 req-8381449e-9450-43f4-81e0-f129b5ec3382 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-fc212c27-f5c2-4656-9c1f-7c39234fea45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:57:37 np0005486808 nova_compute[259627]: 2025-10-14 08:57:37.171 2 DEBUG nova.network.neutron [req-38a9e447-5750-4244-8747-31c9ae9230d5 req-8381449e-9450-43f4-81e0-f129b5ec3382 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Refreshing network info cache for port bbcf1d8e-1698-4198-a641-527122f98e09 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 04:57:37 np0005486808 nova_compute[259627]: 2025-10-14 08:57:37.174 2 DEBUG nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Start _get_guest_xml network_info=[{"id": "bbcf1d8e-1698-4198-a641-527122f98e09", "address": "fa:16:3e:03:88:18", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbbcf1d8e-16", "ovs_interfaceid": "bbcf1d8e-1698-4198-a641-527122f98e09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 04:57:37 np0005486808 nova_compute[259627]: 2025-10-14 08:57:37.179 2 WARNING nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:57:37 np0005486808 nova_compute[259627]: 2025-10-14 08:57:37.182 2 DEBUG nova.virt.libvirt.host [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 04:57:37 np0005486808 nova_compute[259627]: 2025-10-14 08:57:37.183 2 DEBUG nova.virt.libvirt.host [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 04:57:37 np0005486808 nova_compute[259627]: 2025-10-14 08:57:37.185 2 DEBUG nova.virt.libvirt.host [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 04:57:37 np0005486808 nova_compute[259627]: 2025-10-14 08:57:37.186 2 DEBUG nova.virt.libvirt.host [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 04:57:37 np0005486808 nova_compute[259627]: 2025-10-14 08:57:37.186 2 DEBUG nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 04:57:37 np0005486808 nova_compute[259627]: 2025-10-14 08:57:37.186 2 DEBUG nova.virt.hardware [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 04:57:37 np0005486808 nova_compute[259627]: 2025-10-14 08:57:37.187 2 DEBUG nova.virt.hardware [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 04:57:37 np0005486808 nova_compute[259627]: 2025-10-14 08:57:37.187 2 DEBUG nova.virt.hardware [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 04:57:37 np0005486808 nova_compute[259627]: 2025-10-14 08:57:37.187 2 DEBUG nova.virt.hardware [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 04:57:37 np0005486808 nova_compute[259627]: 2025-10-14 08:57:37.187 2 DEBUG nova.virt.hardware [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 04:57:37 np0005486808 nova_compute[259627]: 2025-10-14 08:57:37.188 2 DEBUG nova.virt.hardware [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 04:57:37 np0005486808 nova_compute[259627]: 2025-10-14 08:57:37.188 2 DEBUG nova.virt.hardware [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 04:57:37 np0005486808 nova_compute[259627]: 2025-10-14 08:57:37.188 2 DEBUG nova.virt.hardware [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 04:57:37 np0005486808 nova_compute[259627]: 2025-10-14 08:57:37.188 2 DEBUG nova.virt.hardware [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 04:57:37 np0005486808 nova_compute[259627]: 2025-10-14 08:57:37.188 2 DEBUG nova.virt.hardware [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 04:57:37 np0005486808 nova_compute[259627]: 2025-10-14 08:57:37.189 2 DEBUG nova.virt.hardware [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 04:57:37 np0005486808 nova_compute[259627]: 2025-10-14 08:57:37.191 2 DEBUG oslo_concurrency.processutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:57:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 04:57:37 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1176: 305 pgs: 305 active+clean; 88 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 14 04:57:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:57:37 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2313730875' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:57:37 np0005486808 nova_compute[259627]: 2025-10-14 08:57:37.701 2 DEBUG oslo_concurrency.processutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:57:37 np0005486808 nova_compute[259627]: 2025-10-14 08:57:37.721 2 DEBUG nova.storage.rbd_utils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] rbd image fc212c27-f5c2-4656-9c1f-7c39234fea45_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:57:37 np0005486808 nova_compute[259627]: 2025-10-14 08:57:37.725 2 DEBUG oslo_concurrency.processutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:57:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:57:38 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3550333764' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:57:38 np0005486808 nova_compute[259627]: 2025-10-14 08:57:38.158 2 DEBUG oslo_concurrency.processutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:57:38 np0005486808 nova_compute[259627]: 2025-10-14 08:57:38.161 2 DEBUG nova.virt.libvirt.vif [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:57:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-373239651',display_name='tempest-ImagesOneServerNegativeTestJSON-server-373239651',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-373239651',id=20,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f24bbeb2f91141e294590ca2afc5ed42',ramdisk_id='',reservation_id='r-p1gxj4jk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-531836018',owner_user_na
me='tempest-ImagesOneServerNegativeTestJSON-531836018-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:57:33Z,user_data=None,user_id='aafd6ad40c944c3eb14e7fbf454040c3',uuid=fc212c27-f5c2-4656-9c1f-7c39234fea45,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bbcf1d8e-1698-4198-a641-527122f98e09", "address": "fa:16:3e:03:88:18", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbbcf1d8e-16", "ovs_interfaceid": "bbcf1d8e-1698-4198-a641-527122f98e09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 04:57:38 np0005486808 nova_compute[259627]: 2025-10-14 08:57:38.162 2 DEBUG nova.network.os_vif_util [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Converting VIF {"id": "bbcf1d8e-1698-4198-a641-527122f98e09", "address": "fa:16:3e:03:88:18", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbbcf1d8e-16", "ovs_interfaceid": "bbcf1d8e-1698-4198-a641-527122f98e09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:57:38 np0005486808 nova_compute[259627]: 2025-10-14 08:57:38.164 2 DEBUG nova.network.os_vif_util [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:88:18,bridge_name='br-int',has_traffic_filtering=True,id=bbcf1d8e-1698-4198-a641-527122f98e09,network=Network(58d74886-d603-4fb5-b8ff-9c184284bdce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbbcf1d8e-16') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:57:38 np0005486808 nova_compute[259627]: 2025-10-14 08:57:38.167 2 DEBUG nova.objects.instance [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lazy-loading 'pci_devices' on Instance uuid fc212c27-f5c2-4656-9c1f-7c39234fea45 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:57:38 np0005486808 nova_compute[259627]: 2025-10-14 08:57:38.183 2 DEBUG nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] End _get_guest_xml xml=<domain type="kvm">
Oct 14 04:57:38 np0005486808 nova_compute[259627]:  <uuid>fc212c27-f5c2-4656-9c1f-7c39234fea45</uuid>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:  <name>instance-00000014</name>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 04:57:38 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-373239651</nova:name>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 08:57:37</nova:creationTime>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 04:57:38 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:        <nova:user uuid="aafd6ad40c944c3eb14e7fbf454040c3">tempest-ImagesOneServerNegativeTestJSON-531836018-project-member</nova:user>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:        <nova:project uuid="f24bbeb2f91141e294590ca2afc5ed42">tempest-ImagesOneServerNegativeTestJSON-531836018</nova:project>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:        <nova:port uuid="bbcf1d8e-1698-4198-a641-527122f98e09">
Oct 14 04:57:38 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <system>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:      <entry name="serial">fc212c27-f5c2-4656-9c1f-7c39234fea45</entry>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:      <entry name="uuid">fc212c27-f5c2-4656-9c1f-7c39234fea45</entry>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    </system>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:  <os>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:  </os>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:  <features>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:  </features>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:  </clock>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:  <devices>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 04:57:38 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/fc212c27-f5c2-4656-9c1f-7c39234fea45_disk">
Oct 14 04:57:38 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:57:38 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 04:57:38 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/fc212c27-f5c2-4656-9c1f-7c39234fea45_disk.config">
Oct 14 04:57:38 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:57:38 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 04:57:38 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:03:88:18"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:      <target dev="tapbbcf1d8e-16"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    </interface>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 04:57:38 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/fc212c27-f5c2-4656-9c1f-7c39234fea45/console.log" append="off"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    </serial>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <video>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    </video>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 04:57:38 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    </rng>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 04:57:38 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 04:57:38 np0005486808 nova_compute[259627]:  </devices>
Oct 14 04:57:38 np0005486808 nova_compute[259627]: </domain>
Oct 14 04:57:38 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 04:57:38 np0005486808 nova_compute[259627]: 2025-10-14 08:57:38.184 2 DEBUG nova.compute.manager [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Preparing to wait for external event network-vif-plugged-bbcf1d8e-1698-4198-a641-527122f98e09 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 04:57:38 np0005486808 nova_compute[259627]: 2025-10-14 08:57:38.185 2 DEBUG oslo_concurrency.lockutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "fc212c27-f5c2-4656-9c1f-7c39234fea45-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:38 np0005486808 nova_compute[259627]: 2025-10-14 08:57:38.186 2 DEBUG oslo_concurrency.lockutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "fc212c27-f5c2-4656-9c1f-7c39234fea45-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:38 np0005486808 nova_compute[259627]: 2025-10-14 08:57:38.186 2 DEBUG oslo_concurrency.lockutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "fc212c27-f5c2-4656-9c1f-7c39234fea45-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:38 np0005486808 nova_compute[259627]: 2025-10-14 08:57:38.187 2 DEBUG nova.virt.libvirt.vif [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:57:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-373239651',display_name='tempest-ImagesOneServerNegativeTestJSON-server-373239651',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-373239651',id=20,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f24bbeb2f91141e294590ca2afc5ed42',ramdisk_id='',reservation_id='r-p1gxj4jk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-531836018',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-531836018-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:57:33Z,user_data=None,user_id='aafd6ad40c944c3eb14e7fbf454040c3',uuid=fc212c27-f5c2-4656-9c1f-7c39234fea45,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bbcf1d8e-1698-4198-a641-527122f98e09", "address": "fa:16:3e:03:88:18", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbbcf1d8e-16", "ovs_interfaceid": "bbcf1d8e-1698-4198-a641-527122f98e09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 04:57:38 np0005486808 nova_compute[259627]: 2025-10-14 08:57:38.188 2 DEBUG nova.network.os_vif_util [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Converting VIF {"id": "bbcf1d8e-1698-4198-a641-527122f98e09", "address": "fa:16:3e:03:88:18", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbbcf1d8e-16", "ovs_interfaceid": "bbcf1d8e-1698-4198-a641-527122f98e09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:57:38 np0005486808 nova_compute[259627]: 2025-10-14 08:57:38.189 2 DEBUG nova.network.os_vif_util [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:88:18,bridge_name='br-int',has_traffic_filtering=True,id=bbcf1d8e-1698-4198-a641-527122f98e09,network=Network(58d74886-d603-4fb5-b8ff-9c184284bdce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbbcf1d8e-16') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:57:38 np0005486808 nova_compute[259627]: 2025-10-14 08:57:38.189 2 DEBUG os_vif [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:88:18,bridge_name='br-int',has_traffic_filtering=True,id=bbcf1d8e-1698-4198-a641-527122f98e09,network=Network(58d74886-d603-4fb5-b8ff-9c184284bdce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbbcf1d8e-16') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 04:57:38 np0005486808 nova_compute[259627]: 2025-10-14 08:57:38.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:38 np0005486808 nova_compute[259627]: 2025-10-14 08:57:38.191 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:57:38 np0005486808 nova_compute[259627]: 2025-10-14 08:57:38.191 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:57:38 np0005486808 nova_compute[259627]: 2025-10-14 08:57:38.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:38 np0005486808 nova_compute[259627]: 2025-10-14 08:57:38.198 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbbcf1d8e-16, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:57:38 np0005486808 nova_compute[259627]: 2025-10-14 08:57:38.198 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbbcf1d8e-16, col_values=(('external_ids', {'iface-id': 'bbcf1d8e-1698-4198-a641-527122f98e09', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:03:88:18', 'vm-uuid': 'fc212c27-f5c2-4656-9c1f-7c39234fea45'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:57:38 np0005486808 nova_compute[259627]: 2025-10-14 08:57:38.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:38 np0005486808 NetworkManager[44885]: <info>  [1760432258.2016] manager: (tapbbcf1d8e-16): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Oct 14 04:57:38 np0005486808 nova_compute[259627]: 2025-10-14 08:57:38.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 04:57:38 np0005486808 nova_compute[259627]: 2025-10-14 08:57:38.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:38 np0005486808 nova_compute[259627]: 2025-10-14 08:57:38.208 2 INFO os_vif [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:88:18,bridge_name='br-int',has_traffic_filtering=True,id=bbcf1d8e-1698-4198-a641-527122f98e09,network=Network(58d74886-d603-4fb5-b8ff-9c184284bdce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbbcf1d8e-16')#033[00m
Oct 14 04:57:38 np0005486808 nova_compute[259627]: 2025-10-14 08:57:38.283 2 DEBUG nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:57:38 np0005486808 nova_compute[259627]: 2025-10-14 08:57:38.284 2 DEBUG nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:57:38 np0005486808 nova_compute[259627]: 2025-10-14 08:57:38.284 2 DEBUG nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] No VIF found with MAC fa:16:3e:03:88:18, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 04:57:38 np0005486808 nova_compute[259627]: 2025-10-14 08:57:38.285 2 INFO nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Using config drive#033[00m
Oct 14 04:57:38 np0005486808 nova_compute[259627]: 2025-10-14 08:57:38.315 2 DEBUG nova.storage.rbd_utils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] rbd image fc212c27-f5c2-4656-9c1f-7c39234fea45_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:57:38 np0005486808 podman[287927]: 2025-10-14 08:57:38.677578975 +0000 UTC m=+0.071408054 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 14 04:57:38 np0005486808 podman[287926]: 2025-10-14 08:57:38.734047202 +0000 UTC m=+0.140246815 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 04:57:39 np0005486808 nova_compute[259627]: 2025-10-14 08:57:39.234 2 INFO nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Creating config drive at /var/lib/nova/instances/fc212c27-f5c2-4656-9c1f-7c39234fea45/disk.config#033[00m
Oct 14 04:57:39 np0005486808 nova_compute[259627]: 2025-10-14 08:57:39.245 2 DEBUG oslo_concurrency.processutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fc212c27-f5c2-4656-9c1f-7c39234fea45/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjdtbvfi7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:57:39 np0005486808 nova_compute[259627]: 2025-10-14 08:57:39.295 2 DEBUG nova.network.neutron [req-38a9e447-5750-4244-8747-31c9ae9230d5 req-8381449e-9450-43f4-81e0-f129b5ec3382 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Updated VIF entry in instance network info cache for port bbcf1d8e-1698-4198-a641-527122f98e09. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 04:57:39 np0005486808 nova_compute[259627]: 2025-10-14 08:57:39.297 2 DEBUG nova.network.neutron [req-38a9e447-5750-4244-8747-31c9ae9230d5 req-8381449e-9450-43f4-81e0-f129b5ec3382 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Updating instance_info_cache with network_info: [{"id": "bbcf1d8e-1698-4198-a641-527122f98e09", "address": "fa:16:3e:03:88:18", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbbcf1d8e-16", "ovs_interfaceid": "bbcf1d8e-1698-4198-a641-527122f98e09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:57:39 np0005486808 nova_compute[259627]: 2025-10-14 08:57:39.328 2 DEBUG oslo_concurrency.lockutils [req-38a9e447-5750-4244-8747-31c9ae9230d5 req-8381449e-9450-43f4-81e0-f129b5ec3382 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-fc212c27-f5c2-4656-9c1f-7c39234fea45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:57:39 np0005486808 nova_compute[259627]: 2025-10-14 08:57:39.403 2 DEBUG oslo_concurrency.processutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fc212c27-f5c2-4656-9c1f-7c39234fea45/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjdtbvfi7" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:57:39 np0005486808 nova_compute[259627]: 2025-10-14 08:57:39.441 2 DEBUG nova.storage.rbd_utils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] rbd image fc212c27-f5c2-4656-9c1f-7c39234fea45_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:57:39 np0005486808 nova_compute[259627]: 2025-10-14 08:57:39.445 2 DEBUG oslo_concurrency.processutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fc212c27-f5c2-4656-9c1f-7c39234fea45/disk.config fc212c27-f5c2-4656-9c1f-7c39234fea45_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:57:39 np0005486808 nova_compute[259627]: 2025-10-14 08:57:39.477 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432244.418375, cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:57:39 np0005486808 nova_compute[259627]: 2025-10-14 08:57:39.480 2 INFO nova.compute.manager [-] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] VM Stopped (Lifecycle Event)#033[00m
Oct 14 04:57:39 np0005486808 nova_compute[259627]: 2025-10-14 08:57:39.499 2 DEBUG nova.compute.manager [None req-ec3993b5-3257-4b2b-b7b4-3f3ae60ffdd0 - - - - - -] [instance: cb79fdc7-fb8b-40ce-a3ff-7e77eb09bd8a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:57:39 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1177: 305 pgs: 305 active+clean; 88 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Oct 14 04:57:39 np0005486808 nova_compute[259627]: 2025-10-14 08:57:39.657 2 DEBUG oslo_concurrency.processutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fc212c27-f5c2-4656-9c1f-7c39234fea45/disk.config fc212c27-f5c2-4656-9c1f-7c39234fea45_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.212s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:57:39 np0005486808 nova_compute[259627]: 2025-10-14 08:57:39.658 2 INFO nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Deleting local config drive /var/lib/nova/instances/fc212c27-f5c2-4656-9c1f-7c39234fea45/disk.config because it was imported into RBD.#033[00m
Oct 14 04:57:39 np0005486808 kernel: tapbbcf1d8e-16: entered promiscuous mode
Oct 14 04:57:39 np0005486808 NetworkManager[44885]: <info>  [1760432259.7284] manager: (tapbbcf1d8e-16): new Tun device (/org/freedesktop/NetworkManager/Devices/71)
Oct 14 04:57:39 np0005486808 ovn_controller[152662]: 2025-10-14T08:57:39Z|00131|binding|INFO|Claiming lport bbcf1d8e-1698-4198-a641-527122f98e09 for this chassis.
Oct 14 04:57:39 np0005486808 ovn_controller[152662]: 2025-10-14T08:57:39Z|00132|binding|INFO|bbcf1d8e-1698-4198-a641-527122f98e09: Claiming fa:16:3e:03:88:18 10.100.0.6
Oct 14 04:57:39 np0005486808 nova_compute[259627]: 2025-10-14 08:57:39.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:39.739 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:88:18 10.100.0.6'], port_security=['fa:16:3e:03:88:18 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'fc212c27-f5c2-4656-9c1f-7c39234fea45', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58d74886-d603-4fb5-b8ff-9c184284bdce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f24bbeb2f91141e294590ca2afc5ed42', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd818ab3d-f5ea-4d77-bc47-7efe2295e146', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1adf6e68-9c1a-4ee7-a829-03bbd6a5ae48, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=bbcf1d8e-1698-4198-a641-527122f98e09) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:57:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:39.742 162547 INFO neutron.agent.ovn.metadata.agent [-] Port bbcf1d8e-1698-4198-a641-527122f98e09 in datapath 58d74886-d603-4fb5-b8ff-9c184284bdce bound to our chassis#033[00m
Oct 14 04:57:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:39.744 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58d74886-d603-4fb5-b8ff-9c184284bdce#033[00m
Oct 14 04:57:39 np0005486808 systemd-udevd[288022]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 04:57:39 np0005486808 NetworkManager[44885]: <info>  [1760432259.8015] device (tapbbcf1d8e-16): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 04:57:39 np0005486808 NetworkManager[44885]: <info>  [1760432259.8038] device (tapbbcf1d8e-16): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 04:57:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:39.766 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f1f5b7db-0555-4338-b393-9943333c3389]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:39.804 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap58d74886-d1 in ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 04:57:39 np0005486808 ovn_controller[152662]: 2025-10-14T08:57:39Z|00133|binding|INFO|Setting lport bbcf1d8e-1698-4198-a641-527122f98e09 ovn-installed in OVS
Oct 14 04:57:39 np0005486808 ovn_controller[152662]: 2025-10-14T08:57:39Z|00134|binding|INFO|Setting lport bbcf1d8e-1698-4198-a641-527122f98e09 up in Southbound
Oct 14 04:57:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:39.807 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap58d74886-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 04:57:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:39.807 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ec9e7dab-df35-4ab8-85a6-df0ec5c82a6e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:39.809 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2c927e0f-3cef-46eb-ae92-19a0ac037ee8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:39 np0005486808 nova_compute[259627]: 2025-10-14 08:57:39.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:39 np0005486808 systemd-machined[214636]: New machine qemu-21-instance-00000014.
Oct 14 04:57:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:39.831 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[04bb43fa-39f5-4d1e-baca-b36cbbb90176]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:39 np0005486808 systemd[1]: Started Virtual Machine qemu-21-instance-00000014.
Oct 14 04:57:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:39.858 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[121bc9f3-e80a-4cd4-ae0d-336193d3426d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:39.890 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[6ed696cf-2ccb-4fa3-9402-c623e897c31c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:39.896 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1bc5fcf0-9075-4651-a8f5-eb60211235d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:39 np0005486808 NetworkManager[44885]: <info>  [1760432259.8977] manager: (tap58d74886-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/72)
Oct 14 04:57:39 np0005486808 systemd-udevd[288024]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 04:57:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:39.939 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[74ab51f3-9a71-47dc-991b-39e1e33d4710]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:39.943 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[770db971-9e29-4f12-833a-12d561191c0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:39 np0005486808 NetworkManager[44885]: <info>  [1760432259.9773] device (tap58d74886-d0): carrier: link connected
Oct 14 04:57:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:39.984 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d32641d5-aadb-40b7-a338-1c42f2fdb4d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:40.006 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8f96bc2d-1d57-4c71-85e0-4a38aa93fd5f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58d74886-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c7:d2:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603873, 'reachable_time': 21612, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288056, 'error': None, 'target': 'ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:40.027 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0d7b7a9a-f6f9-417b-92e8-8cf6df89a9e2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec7:d2a9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 603873, 'tstamp': 603873}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288057, 'error': None, 'target': 'ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:40.053 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[58d50778-0668-4079-a71c-352cdffd4a41]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58d74886-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c7:d2:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603873, 'reachable_time': 21612, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 288058, 'error': None, 'target': 'ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:40 np0005486808 nova_compute[259627]: 2025-10-14 08:57:40.096 2 DEBUG nova.compute.manager [req-15b82b71-b530-4469-92ad-4cef8d7d57fa req-fb0622e7-3bf0-4a86-9e91-55a08f66e3e4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Received event network-vif-plugged-bbcf1d8e-1698-4198-a641-527122f98e09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:57:40 np0005486808 nova_compute[259627]: 2025-10-14 08:57:40.096 2 DEBUG oslo_concurrency.lockutils [req-15b82b71-b530-4469-92ad-4cef8d7d57fa req-fb0622e7-3bf0-4a86-9e91-55a08f66e3e4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "fc212c27-f5c2-4656-9c1f-7c39234fea45-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:40 np0005486808 nova_compute[259627]: 2025-10-14 08:57:40.097 2 DEBUG oslo_concurrency.lockutils [req-15b82b71-b530-4469-92ad-4cef8d7d57fa req-fb0622e7-3bf0-4a86-9e91-55a08f66e3e4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fc212c27-f5c2-4656-9c1f-7c39234fea45-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:40 np0005486808 nova_compute[259627]: 2025-10-14 08:57:40.097 2 DEBUG oslo_concurrency.lockutils [req-15b82b71-b530-4469-92ad-4cef8d7d57fa req-fb0622e7-3bf0-4a86-9e91-55a08f66e3e4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fc212c27-f5c2-4656-9c1f-7c39234fea45-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:40 np0005486808 nova_compute[259627]: 2025-10-14 08:57:40.097 2 DEBUG nova.compute.manager [req-15b82b71-b530-4469-92ad-4cef8d7d57fa req-fb0622e7-3bf0-4a86-9e91-55a08f66e3e4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Processing event network-vif-plugged-bbcf1d8e-1698-4198-a641-527122f98e09 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 04:57:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:40.097 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[aa79b659-cc44-47ae-af0e-18a7737c960c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:40.175 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9525ab69-c88e-48a9-9ded-d9506917121d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:40.176 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58d74886-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:57:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:40.177 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:57:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:40.178 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58d74886-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:57:40 np0005486808 nova_compute[259627]: 2025-10-14 08:57:40.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:40 np0005486808 kernel: tap58d74886-d0: entered promiscuous mode
Oct 14 04:57:40 np0005486808 nova_compute[259627]: 2025-10-14 08:57:40.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:40 np0005486808 NetworkManager[44885]: <info>  [1760432260.1879] manager: (tap58d74886-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Oct 14 04:57:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:40.187 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58d74886-d0, col_values=(('external_ids', {'iface-id': 'ef5c894d-34c4-4781-b15c-6813576a45e8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:57:40 np0005486808 nova_compute[259627]: 2025-10-14 08:57:40.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:40 np0005486808 ovn_controller[152662]: 2025-10-14T08:57:40Z|00135|binding|INFO|Releasing lport ef5c894d-34c4-4781-b15c-6813576a45e8 from this chassis (sb_readonly=0)
Oct 14 04:57:40 np0005486808 nova_compute[259627]: 2025-10-14 08:57:40.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:40.194 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/58d74886-d603-4fb5-b8ff-9c184284bdce.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/58d74886-d603-4fb5-b8ff-9c184284bdce.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 04:57:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:40.195 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7a137d66-98d4-4ea1-b2da-64533951057e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:40.197 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 04:57:40 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 04:57:40 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 04:57:40 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-58d74886-d603-4fb5-b8ff-9c184284bdce
Oct 14 04:57:40 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 04:57:40 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 04:57:40 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 04:57:40 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/58d74886-d603-4fb5-b8ff-9c184284bdce.pid.haproxy
Oct 14 04:57:40 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 04:57:40 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:57:40 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 04:57:40 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 04:57:40 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 04:57:40 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 04:57:40 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 04:57:40 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 04:57:40 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 04:57:40 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 04:57:40 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 04:57:40 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 04:57:40 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 04:57:40 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 04:57:40 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 04:57:40 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:57:40 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:57:40 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 04:57:40 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 04:57:40 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 04:57:40 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 58d74886-d603-4fb5-b8ff-9c184284bdce
Oct 14 04:57:40 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 04:57:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:40.197 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce', 'env', 'PROCESS_TAG=haproxy-58d74886-d603-4fb5-b8ff-9c184284bdce', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/58d74886-d603-4fb5-b8ff-9c184284bdce.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 04:57:40 np0005486808 nova_compute[259627]: 2025-10-14 08:57:40.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:40 np0005486808 podman[288132]: 2025-10-14 08:57:40.571317618 +0000 UTC m=+0.045796835 container create e15fe29f26128b1c3b17525abff7454fcfabee05a176e0b2011318feb5807043 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 04:57:40 np0005486808 systemd[1]: Started libpod-conmon-e15fe29f26128b1c3b17525abff7454fcfabee05a176e0b2011318feb5807043.scope.
Oct 14 04:57:40 np0005486808 podman[288132]: 2025-10-14 08:57:40.548123309 +0000 UTC m=+0.022602536 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 04:57:40 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:57:40 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e8dab89cc5adeb10fc352a887eff47aa5a4cb5a21ba42bc369c3d35d1606c66/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 04:57:40 np0005486808 podman[288132]: 2025-10-14 08:57:40.677958047 +0000 UTC m=+0.152437274 container init e15fe29f26128b1c3b17525abff7454fcfabee05a176e0b2011318feb5807043 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 04:57:40 np0005486808 podman[288132]: 2025-10-14 08:57:40.685058631 +0000 UTC m=+0.159537838 container start e15fe29f26128b1c3b17525abff7454fcfabee05a176e0b2011318feb5807043 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 14 04:57:40 np0005486808 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[288147]: [NOTICE]   (288151) : New worker (288153) forked
Oct 14 04:57:40 np0005486808 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[288147]: [NOTICE]   (288151) : Loading success.
Oct 14 04:57:40 np0005486808 nova_compute[259627]: 2025-10-14 08:57:40.883 2 DEBUG nova.compute.manager [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 04:57:40 np0005486808 nova_compute[259627]: 2025-10-14 08:57:40.885 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432260.883868, fc212c27-f5c2-4656-9c1f-7c39234fea45 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:57:40 np0005486808 nova_compute[259627]: 2025-10-14 08:57:40.885 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] VM Started (Lifecycle Event)#033[00m
Oct 14 04:57:40 np0005486808 nova_compute[259627]: 2025-10-14 08:57:40.891 2 DEBUG nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 04:57:40 np0005486808 nova_compute[259627]: 2025-10-14 08:57:40.900 2 INFO nova.virt.libvirt.driver [-] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Instance spawned successfully.#033[00m
Oct 14 04:57:40 np0005486808 nova_compute[259627]: 2025-10-14 08:57:40.902 2 DEBUG nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 04:57:40 np0005486808 nova_compute[259627]: 2025-10-14 08:57:40.938 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:57:40 np0005486808 nova_compute[259627]: 2025-10-14 08:57:40.946 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:57:40 np0005486808 nova_compute[259627]: 2025-10-14 08:57:40.954 2 DEBUG nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:57:40 np0005486808 nova_compute[259627]: 2025-10-14 08:57:40.955 2 DEBUG nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:57:40 np0005486808 nova_compute[259627]: 2025-10-14 08:57:40.956 2 DEBUG nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:57:40 np0005486808 nova_compute[259627]: 2025-10-14 08:57:40.957 2 DEBUG nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:57:40 np0005486808 nova_compute[259627]: 2025-10-14 08:57:40.958 2 DEBUG nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:57:40 np0005486808 nova_compute[259627]: 2025-10-14 08:57:40.959 2 DEBUG nova.virt.libvirt.driver [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:57:41 np0005486808 nova_compute[259627]: 2025-10-14 08:57:41.006 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:57:41 np0005486808 nova_compute[259627]: 2025-10-14 08:57:41.007 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432260.883987, fc212c27-f5c2-4656-9c1f-7c39234fea45 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:57:41 np0005486808 nova_compute[259627]: 2025-10-14 08:57:41.008 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] VM Paused (Lifecycle Event)#033[00m
Oct 14 04:57:41 np0005486808 nova_compute[259627]: 2025-10-14 08:57:41.041 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:57:41 np0005486808 nova_compute[259627]: 2025-10-14 08:57:41.045 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432260.8906248, fc212c27-f5c2-4656-9c1f-7c39234fea45 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:57:41 np0005486808 nova_compute[259627]: 2025-10-14 08:57:41.045 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] VM Resumed (Lifecycle Event)#033[00m
Oct 14 04:57:41 np0005486808 nova_compute[259627]: 2025-10-14 08:57:41.050 2 INFO nova.compute.manager [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Took 7.73 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 04:57:41 np0005486808 nova_compute[259627]: 2025-10-14 08:57:41.051 2 DEBUG nova.compute.manager [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:57:41 np0005486808 nova_compute[259627]: 2025-10-14 08:57:41.075 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:57:41 np0005486808 nova_compute[259627]: 2025-10-14 08:57:41.079 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:57:41 np0005486808 nova_compute[259627]: 2025-10-14 08:57:41.102 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:57:41 np0005486808 nova_compute[259627]: 2025-10-14 08:57:41.113 2 INFO nova.compute.manager [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Took 8.84 seconds to build instance.#033[00m
Oct 14 04:57:41 np0005486808 nova_compute[259627]: 2025-10-14 08:57:41.134 2 DEBUG oslo_concurrency.lockutils [None req-cc9609ef-1b6d-4f7f-af69-ecec84c9c8b0 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "fc212c27-f5c2-4656-9c1f-7c39234fea45" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.929s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:41 np0005486808 ovn_controller[152662]: 2025-10-14T08:57:41Z|00136|binding|INFO|Releasing lport ef5c894d-34c4-4781-b15c-6813576a45e8 from this chassis (sb_readonly=0)
Oct 14 04:57:41 np0005486808 nova_compute[259627]: 2025-10-14 08:57:41.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:41.466 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:57:41 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1178: 305 pgs: 305 active+clean; 88 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Oct 14 04:57:41 np0005486808 nova_compute[259627]: 2025-10-14 08:57:41.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:42 np0005486808 nova_compute[259627]: 2025-10-14 08:57:42.324 2 DEBUG nova.compute.manager [req-abe1e10b-0702-4823-9a96-4681ceb781c6 req-495ffc7f-8c83-4e8c-b541-659e4fccd244 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Received event network-vif-plugged-bbcf1d8e-1698-4198-a641-527122f98e09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:57:42 np0005486808 nova_compute[259627]: 2025-10-14 08:57:42.325 2 DEBUG oslo_concurrency.lockutils [req-abe1e10b-0702-4823-9a96-4681ceb781c6 req-495ffc7f-8c83-4e8c-b541-659e4fccd244 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "fc212c27-f5c2-4656-9c1f-7c39234fea45-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:42 np0005486808 nova_compute[259627]: 2025-10-14 08:57:42.325 2 DEBUG oslo_concurrency.lockutils [req-abe1e10b-0702-4823-9a96-4681ceb781c6 req-495ffc7f-8c83-4e8c-b541-659e4fccd244 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fc212c27-f5c2-4656-9c1f-7c39234fea45-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:42 np0005486808 nova_compute[259627]: 2025-10-14 08:57:42.325 2 DEBUG oslo_concurrency.lockutils [req-abe1e10b-0702-4823-9a96-4681ceb781c6 req-495ffc7f-8c83-4e8c-b541-659e4fccd244 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fc212c27-f5c2-4656-9c1f-7c39234fea45-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:42 np0005486808 nova_compute[259627]: 2025-10-14 08:57:42.325 2 DEBUG nova.compute.manager [req-abe1e10b-0702-4823-9a96-4681ceb781c6 req-495ffc7f-8c83-4e8c-b541-659e4fccd244 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] No waiting events found dispatching network-vif-plugged-bbcf1d8e-1698-4198-a641-527122f98e09 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:57:42 np0005486808 nova_compute[259627]: 2025-10-14 08:57:42.326 2 WARNING nova.compute.manager [req-abe1e10b-0702-4823-9a96-4681ceb781c6 req-495ffc7f-8c83-4e8c-b541-659e4fccd244 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Received unexpected event network-vif-plugged-bbcf1d8e-1698-4198-a641-527122f98e09 for instance with vm_state active and task_state None.#033[00m
Oct 14 04:57:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 04:57:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 04:57:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:57:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 04:57:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:57:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Oct 14 04:57:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:57:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:57:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:57:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:57:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:57:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 14 04:57:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:57:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 04:57:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:57:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:57:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:57:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 04:57:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:57:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 04:57:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:57:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:57:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:57:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 04:57:43 np0005486808 nova_compute[259627]: 2025-10-14 08:57:43.130 2 DEBUG oslo_concurrency.lockutils [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "fc212c27-f5c2-4656-9c1f-7c39234fea45" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:43 np0005486808 nova_compute[259627]: 2025-10-14 08:57:43.131 2 DEBUG oslo_concurrency.lockutils [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "fc212c27-f5c2-4656-9c1f-7c39234fea45" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:43 np0005486808 nova_compute[259627]: 2025-10-14 08:57:43.131 2 DEBUG oslo_concurrency.lockutils [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "fc212c27-f5c2-4656-9c1f-7c39234fea45-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:43 np0005486808 nova_compute[259627]: 2025-10-14 08:57:43.132 2 DEBUG oslo_concurrency.lockutils [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "fc212c27-f5c2-4656-9c1f-7c39234fea45-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:43 np0005486808 nova_compute[259627]: 2025-10-14 08:57:43.133 2 DEBUG oslo_concurrency.lockutils [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "fc212c27-f5c2-4656-9c1f-7c39234fea45-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:43 np0005486808 nova_compute[259627]: 2025-10-14 08:57:43.135 2 INFO nova.compute.manager [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Terminating instance#033[00m
Oct 14 04:57:43 np0005486808 nova_compute[259627]: 2025-10-14 08:57:43.136 2 DEBUG nova.compute.manager [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 04:57:43 np0005486808 kernel: tapbbcf1d8e-16 (unregistering): left promiscuous mode
Oct 14 04:57:43 np0005486808 NetworkManager[44885]: <info>  [1760432263.1912] device (tapbbcf1d8e-16): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 04:57:43 np0005486808 ovn_controller[152662]: 2025-10-14T08:57:43Z|00137|binding|INFO|Releasing lport bbcf1d8e-1698-4198-a641-527122f98e09 from this chassis (sb_readonly=0)
Oct 14 04:57:43 np0005486808 ovn_controller[152662]: 2025-10-14T08:57:43Z|00138|binding|INFO|Setting lport bbcf1d8e-1698-4198-a641-527122f98e09 down in Southbound
Oct 14 04:57:43 np0005486808 ovn_controller[152662]: 2025-10-14T08:57:43Z|00139|binding|INFO|Removing iface tapbbcf1d8e-16 ovn-installed in OVS
Oct 14 04:57:43 np0005486808 nova_compute[259627]: 2025-10-14 08:57:43.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:43 np0005486808 nova_compute[259627]: 2025-10-14 08:57:43.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 04:57:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:43.244 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:88:18 10.100.0.6'], port_security=['fa:16:3e:03:88:18 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'fc212c27-f5c2-4656-9c1f-7c39234fea45', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58d74886-d603-4fb5-b8ff-9c184284bdce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f24bbeb2f91141e294590ca2afc5ed42', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd818ab3d-f5ea-4d77-bc47-7efe2295e146', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1adf6e68-9c1a-4ee7-a829-03bbd6a5ae48, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=bbcf1d8e-1698-4198-a641-527122f98e09) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:57:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:43.246 162547 INFO neutron.agent.ovn.metadata.agent [-] Port bbcf1d8e-1698-4198-a641-527122f98e09 in datapath 58d74886-d603-4fb5-b8ff-9c184284bdce unbound from our chassis#033[00m
Oct 14 04:57:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:43.247 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 58d74886-d603-4fb5-b8ff-9c184284bdce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 04:57:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:43.249 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c861cef8-c687-4fe9-b760-61f201297fc2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:43.249 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce namespace which is not needed anymore#033[00m
Oct 14 04:57:43 np0005486808 nova_compute[259627]: 2025-10-14 08:57:43.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:43 np0005486808 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000014.scope: Deactivated successfully.
Oct 14 04:57:43 np0005486808 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000014.scope: Consumed 3.373s CPU time.
Oct 14 04:57:43 np0005486808 systemd-machined[214636]: Machine qemu-21-instance-00000014 terminated.
Oct 14 04:57:43 np0005486808 nova_compute[259627]: 2025-10-14 08:57:43.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:43 np0005486808 nova_compute[259627]: 2025-10-14 08:57:43.383 2 INFO nova.virt.libvirt.driver [-] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Instance destroyed successfully.#033[00m
Oct 14 04:57:43 np0005486808 nova_compute[259627]: 2025-10-14 08:57:43.384 2 DEBUG nova.objects.instance [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lazy-loading 'resources' on Instance uuid fc212c27-f5c2-4656-9c1f-7c39234fea45 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:57:43 np0005486808 nova_compute[259627]: 2025-10-14 08:57:43.405 2 DEBUG nova.virt.libvirt.vif [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:57:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-373239651',display_name='tempest-ImagesOneServerNegativeTestJSON-server-373239651',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-373239651',id=20,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:57:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f24bbeb2f91141e294590ca2afc5ed42',ramdisk_id='',reservation_id='r-p1gxj4jk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-531836018',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-531836018-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:57:41Z,user_data=None,user_id='aafd6ad40c944c3eb14e7fbf454040c3',uuid=fc212c27-f5c2-4656-9c1f-7c39234fea45,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bbcf1d8e-1698-4198-a641-527122f98e09", "address": "fa:16:3e:03:88:18", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbbcf1d8e-16", "ovs_interfaceid": "bbcf1d8e-1698-4198-a641-527122f98e09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 04:57:43 np0005486808 nova_compute[259627]: 2025-10-14 08:57:43.406 2 DEBUG nova.network.os_vif_util [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Converting VIF {"id": "bbcf1d8e-1698-4198-a641-527122f98e09", "address": "fa:16:3e:03:88:18", "network": {"id": "58d74886-d603-4fb5-b8ff-9c184284bdce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1216759128-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f24bbeb2f91141e294590ca2afc5ed42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbbcf1d8e-16", "ovs_interfaceid": "bbcf1d8e-1698-4198-a641-527122f98e09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:57:43 np0005486808 nova_compute[259627]: 2025-10-14 08:57:43.408 2 DEBUG nova.network.os_vif_util [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:88:18,bridge_name='br-int',has_traffic_filtering=True,id=bbcf1d8e-1698-4198-a641-527122f98e09,network=Network(58d74886-d603-4fb5-b8ff-9c184284bdce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbbcf1d8e-16') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:57:43 np0005486808 nova_compute[259627]: 2025-10-14 08:57:43.408 2 DEBUG os_vif [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:88:18,bridge_name='br-int',has_traffic_filtering=True,id=bbcf1d8e-1698-4198-a641-527122f98e09,network=Network(58d74886-d603-4fb5-b8ff-9c184284bdce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbbcf1d8e-16') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 04:57:43 np0005486808 nova_compute[259627]: 2025-10-14 08:57:43.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:43 np0005486808 nova_compute[259627]: 2025-10-14 08:57:43.413 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbbcf1d8e-16, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:57:43 np0005486808 nova_compute[259627]: 2025-10-14 08:57:43.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:43 np0005486808 nova_compute[259627]: 2025-10-14 08:57:43.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 04:57:43 np0005486808 nova_compute[259627]: 2025-10-14 08:57:43.423 2 INFO os_vif [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:88:18,bridge_name='br-int',has_traffic_filtering=True,id=bbcf1d8e-1698-4198-a641-527122f98e09,network=Network(58d74886-d603-4fb5-b8ff-9c184284bdce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbbcf1d8e-16')#033[00m
Oct 14 04:57:43 np0005486808 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[288147]: [NOTICE]   (288151) : haproxy version is 2.8.14-c23fe91
Oct 14 04:57:43 np0005486808 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[288147]: [NOTICE]   (288151) : path to executable is /usr/sbin/haproxy
Oct 14 04:57:43 np0005486808 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[288147]: [WARNING]  (288151) : Exiting Master process...
Oct 14 04:57:43 np0005486808 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[288147]: [ALERT]    (288151) : Current worker (288153) exited with code 143 (Terminated)
Oct 14 04:57:43 np0005486808 neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce[288147]: [WARNING]  (288151) : All workers exited. Exiting... (0)
Oct 14 04:57:43 np0005486808 systemd[1]: libpod-e15fe29f26128b1c3b17525abff7454fcfabee05a176e0b2011318feb5807043.scope: Deactivated successfully.
Oct 14 04:57:43 np0005486808 podman[288187]: 2025-10-14 08:57:43.438884684 +0000 UTC m=+0.077127745 container died e15fe29f26128b1c3b17525abff7454fcfabee05a176e0b2011318feb5807043 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 04:57:43 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e15fe29f26128b1c3b17525abff7454fcfabee05a176e0b2011318feb5807043-userdata-shm.mount: Deactivated successfully.
Oct 14 04:57:43 np0005486808 systemd[1]: var-lib-containers-storage-overlay-7e8dab89cc5adeb10fc352a887eff47aa5a4cb5a21ba42bc369c3d35d1606c66-merged.mount: Deactivated successfully.
Oct 14 04:57:43 np0005486808 podman[288187]: 2025-10-14 08:57:43.48923632 +0000 UTC m=+0.127479361 container cleanup e15fe29f26128b1c3b17525abff7454fcfabee05a176e0b2011318feb5807043 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 04:57:43 np0005486808 systemd[1]: libpod-conmon-e15fe29f26128b1c3b17525abff7454fcfabee05a176e0b2011318feb5807043.scope: Deactivated successfully.
Oct 14 04:57:43 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1179: 305 pgs: 305 active+clean; 88 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Oct 14 04:57:43 np0005486808 podman[288244]: 2025-10-14 08:57:43.556516412 +0000 UTC m=+0.040595757 container remove e15fe29f26128b1c3b17525abff7454fcfabee05a176e0b2011318feb5807043 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 04:57:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:43.566 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[994df02a-7df0-46e5-98f2-7474cccf5191]: (4, ('Tue Oct 14 08:57:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce (e15fe29f26128b1c3b17525abff7454fcfabee05a176e0b2011318feb5807043)\ne15fe29f26128b1c3b17525abff7454fcfabee05a176e0b2011318feb5807043\nTue Oct 14 08:57:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce (e15fe29f26128b1c3b17525abff7454fcfabee05a176e0b2011318feb5807043)\ne15fe29f26128b1c3b17525abff7454fcfabee05a176e0b2011318feb5807043\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:43.570 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5ee1b447-1a9c-4c29-8310-7e090f9a0b36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:43.572 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58d74886-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:57:43 np0005486808 nova_compute[259627]: 2025-10-14 08:57:43.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:43 np0005486808 kernel: tap58d74886-d0: left promiscuous mode
Oct 14 04:57:43 np0005486808 nova_compute[259627]: 2025-10-14 08:57:43.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:43.585 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[18d200c3-75ee-4a03-9b0c-320cfab6d754]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:43 np0005486808 nova_compute[259627]: 2025-10-14 08:57:43.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:43.610 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8d1b4128-11f4-4776-b0f4-d1c588b0d468]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:43.612 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[221782fe-aee3-4085-827b-140c6d23d133]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:43.632 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6818fc68-2aee-47c7-8116-a8c42e90adec]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603864, 'reachable_time': 32392, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288259, 'error': None, 'target': 'ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:43 np0005486808 systemd[1]: run-netns-ovnmeta\x2d58d74886\x2dd603\x2d4fb5\x2db8ff\x2d9c184284bdce.mount: Deactivated successfully.
Oct 14 04:57:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:43.636 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-58d74886-d603-4fb5-b8ff-9c184284bdce deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 04:57:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:57:43.636 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[b09ae004-7853-4d1f-8a23-202ffe77a2ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:57:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Oct 14 04:57:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Oct 14 04:57:43 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Oct 14 04:57:43 np0005486808 nova_compute[259627]: 2025-10-14 08:57:43.780 2 DEBUG oslo_concurrency.lockutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Acquiring lock "c79d4673-ee43-418f-8d38-f48cb8dc4659" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:43 np0005486808 nova_compute[259627]: 2025-10-14 08:57:43.780 2 DEBUG oslo_concurrency.lockutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Lock "c79d4673-ee43-418f-8d38-f48cb8dc4659" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:43 np0005486808 nova_compute[259627]: 2025-10-14 08:57:43.809 2 DEBUG nova.compute.manager [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 04:57:43 np0005486808 nova_compute[259627]: 2025-10-14 08:57:43.849 2 INFO nova.virt.libvirt.driver [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Deleting instance files /var/lib/nova/instances/fc212c27-f5c2-4656-9c1f-7c39234fea45_del#033[00m
Oct 14 04:57:43 np0005486808 nova_compute[259627]: 2025-10-14 08:57:43.850 2 INFO nova.virt.libvirt.driver [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Deletion of /var/lib/nova/instances/fc212c27-f5c2-4656-9c1f-7c39234fea45_del complete#033[00m
Oct 14 04:57:43 np0005486808 nova_compute[259627]: 2025-10-14 08:57:43.890 2 DEBUG oslo_concurrency.lockutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:43 np0005486808 nova_compute[259627]: 2025-10-14 08:57:43.891 2 DEBUG oslo_concurrency.lockutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:43 np0005486808 nova_compute[259627]: 2025-10-14 08:57:43.904 2 DEBUG nova.virt.hardware [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 04:57:43 np0005486808 nova_compute[259627]: 2025-10-14 08:57:43.905 2 INFO nova.compute.claims [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 04:57:43 np0005486808 nova_compute[259627]: 2025-10-14 08:57:43.926 2 INFO nova.compute.manager [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 04:57:43 np0005486808 nova_compute[259627]: 2025-10-14 08:57:43.927 2 DEBUG oslo.service.loopingcall [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 04:57:43 np0005486808 nova_compute[259627]: 2025-10-14 08:57:43.928 2 DEBUG nova.compute.manager [-] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 04:57:43 np0005486808 nova_compute[259627]: 2025-10-14 08:57:43.928 2 DEBUG nova.network.neutron [-] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 04:57:44 np0005486808 nova_compute[259627]: 2025-10-14 08:57:44.043 2 DEBUG oslo_concurrency.processutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:57:44 np0005486808 nova_compute[259627]: 2025-10-14 08:57:44.427 2 DEBUG nova.compute.manager [req-f3cc3400-760d-46f2-bce0-32ea86ec3b9e req-9a6fbe96-8a57-4385-9d3b-b54a0231869a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Received event network-vif-unplugged-bbcf1d8e-1698-4198-a641-527122f98e09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:57:44 np0005486808 nova_compute[259627]: 2025-10-14 08:57:44.428 2 DEBUG oslo_concurrency.lockutils [req-f3cc3400-760d-46f2-bce0-32ea86ec3b9e req-9a6fbe96-8a57-4385-9d3b-b54a0231869a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "fc212c27-f5c2-4656-9c1f-7c39234fea45-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:44 np0005486808 nova_compute[259627]: 2025-10-14 08:57:44.428 2 DEBUG oslo_concurrency.lockutils [req-f3cc3400-760d-46f2-bce0-32ea86ec3b9e req-9a6fbe96-8a57-4385-9d3b-b54a0231869a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fc212c27-f5c2-4656-9c1f-7c39234fea45-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:44 np0005486808 nova_compute[259627]: 2025-10-14 08:57:44.429 2 DEBUG oslo_concurrency.lockutils [req-f3cc3400-760d-46f2-bce0-32ea86ec3b9e req-9a6fbe96-8a57-4385-9d3b-b54a0231869a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fc212c27-f5c2-4656-9c1f-7c39234fea45-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:44 np0005486808 nova_compute[259627]: 2025-10-14 08:57:44.430 2 DEBUG nova.compute.manager [req-f3cc3400-760d-46f2-bce0-32ea86ec3b9e req-9a6fbe96-8a57-4385-9d3b-b54a0231869a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] No waiting events found dispatching network-vif-unplugged-bbcf1d8e-1698-4198-a641-527122f98e09 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:57:44 np0005486808 nova_compute[259627]: 2025-10-14 08:57:44.430 2 DEBUG nova.compute.manager [req-f3cc3400-760d-46f2-bce0-32ea86ec3b9e req-9a6fbe96-8a57-4385-9d3b-b54a0231869a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Received event network-vif-unplugged-bbcf1d8e-1698-4198-a641-527122f98e09 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 04:57:44 np0005486808 nova_compute[259627]: 2025-10-14 08:57:44.431 2 DEBUG nova.compute.manager [req-f3cc3400-760d-46f2-bce0-32ea86ec3b9e req-9a6fbe96-8a57-4385-9d3b-b54a0231869a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Received event network-vif-plugged-bbcf1d8e-1698-4198-a641-527122f98e09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:57:44 np0005486808 nova_compute[259627]: 2025-10-14 08:57:44.431 2 DEBUG oslo_concurrency.lockutils [req-f3cc3400-760d-46f2-bce0-32ea86ec3b9e req-9a6fbe96-8a57-4385-9d3b-b54a0231869a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "fc212c27-f5c2-4656-9c1f-7c39234fea45-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:44 np0005486808 nova_compute[259627]: 2025-10-14 08:57:44.432 2 DEBUG oslo_concurrency.lockutils [req-f3cc3400-760d-46f2-bce0-32ea86ec3b9e req-9a6fbe96-8a57-4385-9d3b-b54a0231869a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fc212c27-f5c2-4656-9c1f-7c39234fea45-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:44 np0005486808 nova_compute[259627]: 2025-10-14 08:57:44.432 2 DEBUG oslo_concurrency.lockutils [req-f3cc3400-760d-46f2-bce0-32ea86ec3b9e req-9a6fbe96-8a57-4385-9d3b-b54a0231869a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fc212c27-f5c2-4656-9c1f-7c39234fea45-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:44 np0005486808 nova_compute[259627]: 2025-10-14 08:57:44.432 2 DEBUG nova.compute.manager [req-f3cc3400-760d-46f2-bce0-32ea86ec3b9e req-9a6fbe96-8a57-4385-9d3b-b54a0231869a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] No waiting events found dispatching network-vif-plugged-bbcf1d8e-1698-4198-a641-527122f98e09 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:57:44 np0005486808 nova_compute[259627]: 2025-10-14 08:57:44.433 2 WARNING nova.compute.manager [req-f3cc3400-760d-46f2-bce0-32ea86ec3b9e req-9a6fbe96-8a57-4385-9d3b-b54a0231869a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Received unexpected event network-vif-plugged-bbcf1d8e-1698-4198-a641-527122f98e09 for instance with vm_state active and task_state deleting.#033[00m
Oct 14 04:57:44 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:57:44 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2714449684' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:57:44 np0005486808 nova_compute[259627]: 2025-10-14 08:57:44.553 2 DEBUG oslo_concurrency.processutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:57:44 np0005486808 nova_compute[259627]: 2025-10-14 08:57:44.563 2 DEBUG nova.compute.provider_tree [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:57:44 np0005486808 nova_compute[259627]: 2025-10-14 08:57:44.584 2 DEBUG nova.scheduler.client.report [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:57:44 np0005486808 nova_compute[259627]: 2025-10-14 08:57:44.608 2 DEBUG oslo_concurrency.lockutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:44 np0005486808 nova_compute[259627]: 2025-10-14 08:57:44.609 2 DEBUG nova.compute.manager [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 04:57:44 np0005486808 nova_compute[259627]: 2025-10-14 08:57:44.653 2 DEBUG nova.compute.manager [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Oct 14 04:57:44 np0005486808 nova_compute[259627]: 2025-10-14 08:57:44.670 2 INFO nova.virt.libvirt.driver [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 04:57:44 np0005486808 nova_compute[259627]: 2025-10-14 08:57:44.686 2 DEBUG nova.compute.manager [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 04:57:44 np0005486808 nova_compute[259627]: 2025-10-14 08:57:44.775 2 DEBUG nova.compute.manager [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 04:57:44 np0005486808 nova_compute[259627]: 2025-10-14 08:57:44.777 2 DEBUG nova.virt.libvirt.driver [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 04:57:44 np0005486808 nova_compute[259627]: 2025-10-14 08:57:44.778 2 INFO nova.virt.libvirt.driver [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Creating image(s)#033[00m
Oct 14 04:57:44 np0005486808 nova_compute[259627]: 2025-10-14 08:57:44.818 2 DEBUG nova.storage.rbd_utils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] rbd image c79d4673-ee43-418f-8d38-f48cb8dc4659_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:57:44 np0005486808 nova_compute[259627]: 2025-10-14 08:57:44.857 2 DEBUG nova.storage.rbd_utils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] rbd image c79d4673-ee43-418f-8d38-f48cb8dc4659_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:57:44 np0005486808 nova_compute[259627]: 2025-10-14 08:57:44.896 2 DEBUG nova.storage.rbd_utils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] rbd image c79d4673-ee43-418f-8d38-f48cb8dc4659_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:57:44 np0005486808 nova_compute[259627]: 2025-10-14 08:57:44.903 2 DEBUG oslo_concurrency.processutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:57:44 np0005486808 nova_compute[259627]: 2025-10-14 08:57:44.965 2 DEBUG nova.network.neutron [-] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:57:44 np0005486808 nova_compute[259627]: 2025-10-14 08:57:44.991 2 INFO nova.compute.manager [-] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Took 1.06 seconds to deallocate network for instance.#033[00m
Oct 14 04:57:45 np0005486808 nova_compute[259627]: 2025-10-14 08:57:45.013 2 DEBUG oslo_concurrency.processutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:57:45 np0005486808 nova_compute[259627]: 2025-10-14 08:57:45.014 2 DEBUG oslo_concurrency.lockutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:45 np0005486808 nova_compute[259627]: 2025-10-14 08:57:45.014 2 DEBUG oslo_concurrency.lockutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:45 np0005486808 nova_compute[259627]: 2025-10-14 08:57:45.015 2 DEBUG oslo_concurrency.lockutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:45 np0005486808 nova_compute[259627]: 2025-10-14 08:57:45.046 2 DEBUG nova.storage.rbd_utils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] rbd image c79d4673-ee43-418f-8d38-f48cb8dc4659_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:57:45 np0005486808 nova_compute[259627]: 2025-10-14 08:57:45.051 2 DEBUG oslo_concurrency.processutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 c79d4673-ee43-418f-8d38-f48cb8dc4659_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:57:45 np0005486808 nova_compute[259627]: 2025-10-14 08:57:45.089 2 DEBUG oslo_concurrency.lockutils [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:45 np0005486808 nova_compute[259627]: 2025-10-14 08:57:45.090 2 DEBUG oslo_concurrency.lockutils [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:45 np0005486808 nova_compute[259627]: 2025-10-14 08:57:45.178 2 DEBUG oslo_concurrency.processutils [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:57:45 np0005486808 nova_compute[259627]: 2025-10-14 08:57:45.316 2 DEBUG oslo_concurrency.processutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 c79d4673-ee43-418f-8d38-f48cb8dc4659_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.265s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:57:45 np0005486808 nova_compute[259627]: 2025-10-14 08:57:45.388 2 DEBUG nova.compute.manager [req-66dc5236-b6ff-4543-9bc6-efcf75469d9b req-bdcc7c51-7d59-45ef-9c9f-330af3e98fb0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Received event network-vif-deleted-bbcf1d8e-1698-4198-a641-527122f98e09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:57:45 np0005486808 nova_compute[259627]: 2025-10-14 08:57:45.394 2 DEBUG nova.storage.rbd_utils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] resizing rbd image c79d4673-ee43-418f-8d38-f48cb8dc4659_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 04:57:45 np0005486808 nova_compute[259627]: 2025-10-14 08:57:45.492 2 DEBUG nova.objects.instance [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Lazy-loading 'migration_context' on Instance uuid c79d4673-ee43-418f-8d38-f48cb8dc4659 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:57:45 np0005486808 nova_compute[259627]: 2025-10-14 08:57:45.507 2 DEBUG nova.virt.libvirt.driver [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 04:57:45 np0005486808 nova_compute[259627]: 2025-10-14 08:57:45.508 2 DEBUG nova.virt.libvirt.driver [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Ensure instance console log exists: /var/lib/nova/instances/c79d4673-ee43-418f-8d38-f48cb8dc4659/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 04:57:45 np0005486808 nova_compute[259627]: 2025-10-14 08:57:45.509 2 DEBUG oslo_concurrency.lockutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:57:45 np0005486808 nova_compute[259627]: 2025-10-14 08:57:45.509 2 DEBUG oslo_concurrency.lockutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:57:45 np0005486808 nova_compute[259627]: 2025-10-14 08:57:45.509 2 DEBUG oslo_concurrency.lockutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:45 np0005486808 nova_compute[259627]: 2025-10-14 08:57:45.511 2 DEBUG nova.virt.libvirt.driver [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 04:57:45 np0005486808 nova_compute[259627]: 2025-10-14 08:57:45.517 2 WARNING nova.virt.libvirt.driver [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:57:45 np0005486808 nova_compute[259627]: 2025-10-14 08:57:45.523 2 DEBUG nova.virt.libvirt.host [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 04:57:45 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1181: 305 pgs: 305 active+clean; 55 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 674 KiB/s wr, 149 op/s
Oct 14 04:57:45 np0005486808 nova_compute[259627]: 2025-10-14 08:57:45.524 2 DEBUG nova.virt.libvirt.host [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 04:57:45 np0005486808 nova_compute[259627]: 2025-10-14 08:57:45.528 2 DEBUG nova.virt.libvirt.host [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 04:57:45 np0005486808 nova_compute[259627]: 2025-10-14 08:57:45.529 2 DEBUG nova.virt.libvirt.host [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 04:57:45 np0005486808 nova_compute[259627]: 2025-10-14 08:57:45.530 2 DEBUG nova.virt.libvirt.driver [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 04:57:45 np0005486808 nova_compute[259627]: 2025-10-14 08:57:45.530 2 DEBUG nova.virt.hardware [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 04:57:45 np0005486808 nova_compute[259627]: 2025-10-14 08:57:45.531 2 DEBUG nova.virt.hardware [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 04:57:45 np0005486808 nova_compute[259627]: 2025-10-14 08:57:45.531 2 DEBUG nova.virt.hardware [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 04:57:45 np0005486808 nova_compute[259627]: 2025-10-14 08:57:45.531 2 DEBUG nova.virt.hardware [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 04:57:45 np0005486808 nova_compute[259627]: 2025-10-14 08:57:45.532 2 DEBUG nova.virt.hardware [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 04:57:45 np0005486808 nova_compute[259627]: 2025-10-14 08:57:45.532 2 DEBUG nova.virt.hardware [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 04:57:45 np0005486808 nova_compute[259627]: 2025-10-14 08:57:45.532 2 DEBUG nova.virt.hardware [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 04:57:45 np0005486808 nova_compute[259627]: 2025-10-14 08:57:45.532 2 DEBUG nova.virt.hardware [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 04:57:45 np0005486808 nova_compute[259627]: 2025-10-14 08:57:45.533 2 DEBUG nova.virt.hardware [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 04:57:45 np0005486808 nova_compute[259627]: 2025-10-14 08:57:45.533 2 DEBUG nova.virt.hardware [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 04:57:45 np0005486808 nova_compute[259627]: 2025-10-14 08:57:45.533 2 DEBUG nova.virt.hardware [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 04:57:45 np0005486808 nova_compute[259627]: 2025-10-14 08:57:45.537 2 DEBUG oslo_concurrency.processutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:57:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:57:45 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1666485927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:57:45 np0005486808 nova_compute[259627]: 2025-10-14 08:57:45.649 2 DEBUG oslo_concurrency.processutils [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:57:45 np0005486808 nova_compute[259627]: 2025-10-14 08:57:45.658 2 DEBUG nova.compute.provider_tree [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:57:45 np0005486808 nova_compute[259627]: 2025-10-14 08:57:45.676 2 DEBUG nova.scheduler.client.report [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:57:45 np0005486808 nova_compute[259627]: 2025-10-14 08:57:45.702 2 DEBUG oslo_concurrency.lockutils [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:45 np0005486808 nova_compute[259627]: 2025-10-14 08:57:45.726 2 INFO nova.scheduler.client.report [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Deleted allocations for instance fc212c27-f5c2-4656-9c1f-7c39234fea45#033[00m
Oct 14 04:57:45 np0005486808 nova_compute[259627]: 2025-10-14 08:57:45.802 2 DEBUG oslo_concurrency.lockutils [None req-77246b3d-4c3f-4823-a74f-2d473f4cedc1 aafd6ad40c944c3eb14e7fbf454040c3 f24bbeb2f91141e294590ca2afc5ed42 - - default default] Lock "fc212c27-f5c2-4656-9c1f-7c39234fea45" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:57:45 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/574151818' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:57:45 np0005486808 nova_compute[259627]: 2025-10-14 08:57:45.994 2 DEBUG oslo_concurrency.processutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:57:46 np0005486808 nova_compute[259627]: 2025-10-14 08:57:46.027 2 DEBUG nova.storage.rbd_utils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] rbd image c79d4673-ee43-418f-8d38-f48cb8dc4659_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:57:46 np0005486808 nova_compute[259627]: 2025-10-14 08:57:46.033 2 DEBUG oslo_concurrency.processutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:57:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:57:46 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1770348061' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:57:46 np0005486808 nova_compute[259627]: 2025-10-14 08:57:46.489 2 DEBUG oslo_concurrency.processutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:57:46 np0005486808 nova_compute[259627]: 2025-10-14 08:57:46.493 2 DEBUG nova.objects.instance [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Lazy-loading 'pci_devices' on Instance uuid c79d4673-ee43-418f-8d38-f48cb8dc4659 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:57:46 np0005486808 nova_compute[259627]: 2025-10-14 08:57:46.521 2 DEBUG nova.virt.libvirt.driver [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] End _get_guest_xml xml=<domain type="kvm">
Oct 14 04:57:46 np0005486808 nova_compute[259627]:  <uuid>c79d4673-ee43-418f-8d38-f48cb8dc4659</uuid>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:  <name>instance-00000015</name>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 04:57:46 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:      <nova:name>tempest-ServerDiagnosticsV248Test-server-1352080678</nova:name>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 08:57:45</nova:creationTime>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 04:57:46 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:        <nova:user uuid="d52b590f38bb47e0abb3e01c8a1352af">tempest-ServerDiagnosticsV248Test-911640918-project-member</nova:user>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:        <nova:project uuid="53de42c913444310bd1af3c50b917f19">tempest-ServerDiagnosticsV248Test-911640918</nova:project>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:      <nova:ports/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <system>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:      <entry name="serial">c79d4673-ee43-418f-8d38-f48cb8dc4659</entry>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:      <entry name="uuid">c79d4673-ee43-418f-8d38-f48cb8dc4659</entry>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    </system>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:  <os>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:  </os>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:  <features>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:  </features>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:  </clock>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:  <devices>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 04:57:46 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/c79d4673-ee43-418f-8d38-f48cb8dc4659_disk">
Oct 14 04:57:46 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:57:46 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 04:57:46 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/c79d4673-ee43-418f-8d38-f48cb8dc4659_disk.config">
Oct 14 04:57:46 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:57:46 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 04:57:46 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/c79d4673-ee43-418f-8d38-f48cb8dc4659/console.log" append="off"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    </serial>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <video>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    </video>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 04:57:46 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    </rng>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 04:57:46 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 04:57:46 np0005486808 nova_compute[259627]:  </devices>
Oct 14 04:57:46 np0005486808 nova_compute[259627]: </domain>
Oct 14 04:57:46 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 04:57:46 np0005486808 nova_compute[259627]: 2025-10-14 08:57:46.589 2 DEBUG nova.virt.libvirt.driver [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:57:46 np0005486808 nova_compute[259627]: 2025-10-14 08:57:46.590 2 DEBUG nova.virt.libvirt.driver [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:57:46 np0005486808 nova_compute[259627]: 2025-10-14 08:57:46.591 2 INFO nova.virt.libvirt.driver [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Using config drive#033[00m
Oct 14 04:57:46 np0005486808 nova_compute[259627]: 2025-10-14 08:57:46.625 2 DEBUG nova.storage.rbd_utils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] rbd image c79d4673-ee43-418f-8d38-f48cb8dc4659_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:57:46 np0005486808 nova_compute[259627]: 2025-10-14 08:57:46.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:47 np0005486808 nova_compute[259627]: 2025-10-14 08:57:47.049 2 INFO nova.virt.libvirt.driver [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Creating config drive at /var/lib/nova/instances/c79d4673-ee43-418f-8d38-f48cb8dc4659/disk.config#033[00m
Oct 14 04:57:47 np0005486808 nova_compute[259627]: 2025-10-14 08:57:47.054 2 DEBUG oslo_concurrency.processutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c79d4673-ee43-418f-8d38-f48cb8dc4659/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6uxx0gof execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:57:47 np0005486808 nova_compute[259627]: 2025-10-14 08:57:47.186 2 DEBUG oslo_concurrency.processutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c79d4673-ee43-418f-8d38-f48cb8dc4659/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6uxx0gof" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:57:47 np0005486808 nova_compute[259627]: 2025-10-14 08:57:47.232 2 DEBUG nova.storage.rbd_utils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] rbd image c79d4673-ee43-418f-8d38-f48cb8dc4659_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:57:47 np0005486808 nova_compute[259627]: 2025-10-14 08:57:47.238 2 DEBUG oslo_concurrency.processutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c79d4673-ee43-418f-8d38-f48cb8dc4659/disk.config c79d4673-ee43-418f-8d38-f48cb8dc4659_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:57:47 np0005486808 nova_compute[259627]: 2025-10-14 08:57:47.438 2 DEBUG oslo_concurrency.processutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c79d4673-ee43-418f-8d38-f48cb8dc4659/disk.config c79d4673-ee43-418f-8d38-f48cb8dc4659_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:57:47 np0005486808 nova_compute[259627]: 2025-10-14 08:57:47.439 2 INFO nova.virt.libvirt.driver [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Deleting local config drive /var/lib/nova/instances/c79d4673-ee43-418f-8d38-f48cb8dc4659/disk.config because it was imported into RBD.#033[00m
Oct 14 04:57:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 04:57:47 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1182: 305 pgs: 305 active+clean; 55 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 674 KiB/s wr, 149 op/s
Oct 14 04:57:47 np0005486808 systemd-machined[214636]: New machine qemu-22-instance-00000015.
Oct 14 04:57:47 np0005486808 systemd[1]: Started Virtual Machine qemu-22-instance-00000015.
Oct 14 04:57:48 np0005486808 nova_compute[259627]: 2025-10-14 08:57:48.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:48 np0005486808 nova_compute[259627]: 2025-10-14 08:57:48.531 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432268.5304124, c79d4673-ee43-418f-8d38-f48cb8dc4659 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:57:48 np0005486808 nova_compute[259627]: 2025-10-14 08:57:48.531 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] VM Resumed (Lifecycle Event)#033[00m
Oct 14 04:57:48 np0005486808 nova_compute[259627]: 2025-10-14 08:57:48.534 2 DEBUG nova.compute.manager [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 04:57:48 np0005486808 nova_compute[259627]: 2025-10-14 08:57:48.534 2 DEBUG nova.virt.libvirt.driver [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 04:57:48 np0005486808 nova_compute[259627]: 2025-10-14 08:57:48.537 2 INFO nova.virt.libvirt.driver [-] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Instance spawned successfully.#033[00m
Oct 14 04:57:48 np0005486808 nova_compute[259627]: 2025-10-14 08:57:48.538 2 DEBUG nova.virt.libvirt.driver [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 04:57:48 np0005486808 nova_compute[259627]: 2025-10-14 08:57:48.549 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:57:48 np0005486808 nova_compute[259627]: 2025-10-14 08:57:48.554 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:57:48 np0005486808 nova_compute[259627]: 2025-10-14 08:57:48.558 2 DEBUG nova.virt.libvirt.driver [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:57:48 np0005486808 nova_compute[259627]: 2025-10-14 08:57:48.558 2 DEBUG nova.virt.libvirt.driver [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:57:48 np0005486808 nova_compute[259627]: 2025-10-14 08:57:48.558 2 DEBUG nova.virt.libvirt.driver [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:57:48 np0005486808 nova_compute[259627]: 2025-10-14 08:57:48.559 2 DEBUG nova.virt.libvirt.driver [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:57:48 np0005486808 nova_compute[259627]: 2025-10-14 08:57:48.559 2 DEBUG nova.virt.libvirt.driver [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:57:48 np0005486808 nova_compute[259627]: 2025-10-14 08:57:48.559 2 DEBUG nova.virt.libvirt.driver [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:57:48 np0005486808 nova_compute[259627]: 2025-10-14 08:57:48.585 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:57:48 np0005486808 nova_compute[259627]: 2025-10-14 08:57:48.586 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432268.5311723, c79d4673-ee43-418f-8d38-f48cb8dc4659 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:57:48 np0005486808 nova_compute[259627]: 2025-10-14 08:57:48.586 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] VM Started (Lifecycle Event)#033[00m
Oct 14 04:57:48 np0005486808 nova_compute[259627]: 2025-10-14 08:57:48.610 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:57:48 np0005486808 nova_compute[259627]: 2025-10-14 08:57:48.613 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:57:48 np0005486808 nova_compute[259627]: 2025-10-14 08:57:48.622 2 INFO nova.compute.manager [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Took 3.85 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 04:57:48 np0005486808 nova_compute[259627]: 2025-10-14 08:57:48.622 2 DEBUG nova.compute.manager [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:57:48 np0005486808 nova_compute[259627]: 2025-10-14 08:57:48.629 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:57:48 np0005486808 nova_compute[259627]: 2025-10-14 08:57:48.668 2 INFO nova.compute.manager [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Took 4.81 seconds to build instance.#033[00m
Oct 14 04:57:48 np0005486808 nova_compute[259627]: 2025-10-14 08:57:48.685 2 DEBUG oslo_concurrency.lockutils [None req-ceef9e93-8734-478c-b237-ea1ab327a819 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Lock "c79d4673-ee43-418f-8d38-f48cb8dc4659" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.904s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:57:48 np0005486808 nova_compute[259627]: 2025-10-14 08:57:48.723 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432253.7214246, 548bff7e-531b-4f5d-b4d3-18d586f46581 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:57:48 np0005486808 nova_compute[259627]: 2025-10-14 08:57:48.723 2 INFO nova.compute.manager [-] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] VM Stopped (Lifecycle Event)#033[00m
Oct 14 04:57:48 np0005486808 nova_compute[259627]: 2025-10-14 08:57:48.743 2 DEBUG nova.compute.manager [None req-aedfa9ce-9107-4467-bb0d-80aa14fb7183 - - - - - -] [instance: 548bff7e-531b-4f5d-b4d3-18d586f46581] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:57:49 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1183: 305 pgs: 305 active+clean; 55 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 674 KiB/s wr, 149 op/s
Oct 14 04:57:50 np0005486808 nova_compute[259627]: 2025-10-14 08:57:50.520 2 DEBUG nova.compute.manager [None req-29373806-73f4-4dfa-8b9f-0124a9c9392d 4086d396d4af49dfad54dbc8ee5ac67c a636329e57e7406abf5fc3ca0a39a6f0 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:57:50 np0005486808 nova_compute[259627]: 2025-10-14 08:57:50.524 2 INFO nova.compute.manager [None req-29373806-73f4-4dfa-8b9f-0124a9c9392d 4086d396d4af49dfad54dbc8ee5ac67c a636329e57e7406abf5fc3ca0a39a6f0 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Retrieving diagnostics#033[00m
Oct 14 04:57:51 np0005486808 nova_compute[259627]: 2025-10-14 08:57:51.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:51 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1184: 305 pgs: 305 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 2.1 MiB/s wr, 246 op/s
Oct 14 04:57:51 np0005486808 nova_compute[259627]: 2025-10-14 08:57:51.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 04:57:53 np0005486808 nova_compute[259627]: 2025-10-14 08:57:53.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:57:53 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1185: 305 pgs: 305 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 2.1 MiB/s wr, 246 op/s
Oct 14 04:57:54 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:57:54 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:57:54 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 04:57:54 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:57:54 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 04:57:54 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:57:54 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 8f539f58-d1ee-4602-b817-88e5fb8e3b1e does not exist
Oct 14 04:57:54 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 0f4baf51-cc91-4227-a03a-ab538f650d34 does not exist
Oct 14 04:57:54 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 5e58bb41-7ff1-4d75-ba23-5f1e6dc7b0fd does not exist
Oct 14 04:57:54 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 04:57:54 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 04:57:54 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 04:57:54 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:57:54 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:57:54 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:57:54 np0005486808 podman[288922]: 2025-10-14 08:57:54.720297639 +0000 UTC m=+0.035702177 container create c0607f30402baac412763d2cb252c11efdc37485073b637a9472f4fcaa896d56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 14 04:57:54 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:57:54 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:57:54 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:57:54 np0005486808 systemd[1]: Started libpod-conmon-c0607f30402baac412763d2cb252c11efdc37485073b637a9472f4fcaa896d56.scope.
Oct 14 04:57:54 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:57:54 np0005486808 podman[288922]: 2025-10-14 08:57:54.704777668 +0000 UTC m=+0.020182226 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:57:54 np0005486808 podman[288922]: 2025-10-14 08:57:54.804292272 +0000 UTC m=+0.119696810 container init c0607f30402baac412763d2cb252c11efdc37485073b637a9472f4fcaa896d56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_darwin, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:57:54 np0005486808 podman[288922]: 2025-10-14 08:57:54.81074762 +0000 UTC m=+0.126152158 container start c0607f30402baac412763d2cb252c11efdc37485073b637a9472f4fcaa896d56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_darwin, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:57:54 np0005486808 podman[288922]: 2025-10-14 08:57:54.814038921 +0000 UTC m=+0.129443489 container attach c0607f30402baac412763d2cb252c11efdc37485073b637a9472f4fcaa896d56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 14 04:57:54 np0005486808 optimistic_darwin[288939]: 167 167
Oct 14 04:57:54 np0005486808 systemd[1]: libpod-c0607f30402baac412763d2cb252c11efdc37485073b637a9472f4fcaa896d56.scope: Deactivated successfully.
Oct 14 04:57:54 np0005486808 podman[288922]: 2025-10-14 08:57:54.816527622 +0000 UTC m=+0.131932150 container died c0607f30402baac412763d2cb252c11efdc37485073b637a9472f4fcaa896d56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_darwin, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 14 04:57:54 np0005486808 systemd[1]: var-lib-containers-storage-overlay-7ad5f94520192653472d495653be34411d33a70b620750c95ca0ecb0eed8c3a6-merged.mount: Deactivated successfully.
Oct 14 04:57:54 np0005486808 podman[288922]: 2025-10-14 08:57:54.851519702 +0000 UTC m=+0.166924240 container remove c0607f30402baac412763d2cb252c11efdc37485073b637a9472f4fcaa896d56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_darwin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 14 04:57:54 np0005486808 systemd[1]: libpod-conmon-c0607f30402baac412763d2cb252c11efdc37485073b637a9472f4fcaa896d56.scope: Deactivated successfully.
Oct 14 04:57:55 np0005486808 podman[288962]: 2025-10-14 08:57:55.054543537 +0000 UTC m=+0.071653970 container create 13c5709bdf4dd5b7042fd2ae9b4755307ef52180c55b8c51b1bb7708328e3850 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 14 04:57:55 np0005486808 systemd[1]: Started libpod-conmon-13c5709bdf4dd5b7042fd2ae9b4755307ef52180c55b8c51b1bb7708328e3850.scope.
Oct 14 04:57:55 np0005486808 podman[288962]: 2025-10-14 08:57:55.02492993 +0000 UTC m=+0.042040423 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:57:55 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:57:55 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/652bd525aa6e0f2e5ee7e649c699c8cf897143f6b4a9572a7f57e0c971934635/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:57:55 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/652bd525aa6e0f2e5ee7e649c699c8cf897143f6b4a9572a7f57e0c971934635/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:57:55 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/652bd525aa6e0f2e5ee7e649c699c8cf897143f6b4a9572a7f57e0c971934635/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:57:55 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/652bd525aa6e0f2e5ee7e649c699c8cf897143f6b4a9572a7f57e0c971934635/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:57:55 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/652bd525aa6e0f2e5ee7e649c699c8cf897143f6b4a9572a7f57e0c971934635/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:57:55 np0005486808 podman[288962]: 2025-10-14 08:57:55.186378544 +0000 UTC m=+0.203488977 container init 13c5709bdf4dd5b7042fd2ae9b4755307ef52180c55b8c51b1bb7708328e3850 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_pasteur, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:57:55 np0005486808 podman[288962]: 2025-10-14 08:57:55.198368349 +0000 UTC m=+0.215478752 container start 13c5709bdf4dd5b7042fd2ae9b4755307ef52180c55b8c51b1bb7708328e3850 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_pasteur, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:57:55 np0005486808 podman[288962]: 2025-10-14 08:57:55.201950907 +0000 UTC m=+0.219061310 container attach 13c5709bdf4dd5b7042fd2ae9b4755307ef52180c55b8c51b1bb7708328e3850 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_pasteur, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:57:55 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1186: 305 pgs: 305 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 208 op/s
Oct 14 04:57:56 np0005486808 vibrant_pasteur[288979]: --> passed data devices: 0 physical, 3 LVM
Oct 14 04:57:56 np0005486808 vibrant_pasteur[288979]: --> relative data size: 1.0
Oct 14 04:57:56 np0005486808 vibrant_pasteur[288979]: --> All data devices are unavailable
Oct 14 04:57:56 np0005486808 systemd[1]: libpod-13c5709bdf4dd5b7042fd2ae9b4755307ef52180c55b8c51b1bb7708328e3850.scope: Deactivated successfully.
Oct 14 04:57:56 np0005486808 systemd[1]: libpod-13c5709bdf4dd5b7042fd2ae9b4755307ef52180c55b8c51b1bb7708328e3850.scope: Consumed 1.094s CPU time.
Oct 14 04:57:56 np0005486808 podman[288962]: 2025-10-14 08:57:56.357231015 +0000 UTC m=+1.374341408 container died 13c5709bdf4dd5b7042fd2ae9b4755307ef52180c55b8c51b1bb7708328e3850 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_pasteur, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:57:56 np0005486808 systemd[1]: var-lib-containers-storage-overlay-652bd525aa6e0f2e5ee7e649c699c8cf897143f6b4a9572a7f57e0c971934635-merged.mount: Deactivated successfully.
Oct 14 04:57:56 np0005486808 podman[288962]: 2025-10-14 08:57:56.410610076 +0000 UTC m=+1.427720479 container remove 13c5709bdf4dd5b7042fd2ae9b4755307ef52180c55b8c51b1bb7708328e3850 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 14 04:57:56 np0005486808 systemd[1]: libpod-conmon-13c5709bdf4dd5b7042fd2ae9b4755307ef52180c55b8c51b1bb7708328e3850.scope: Deactivated successfully.
Oct 14 04:57:56 np0005486808 nova_compute[259627]: 2025-10-14 08:57:56.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:57:57 np0005486808 podman[289162]: 2025-10-14 08:57:57.050891599 +0000 UTC m=+0.044698579 container create 3fe98af29fda9a8bde5122168a527db3747a49d79975a067b2dd9337e53cc831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_elbakyan, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 14 04:57:57 np0005486808 systemd[1]: Started libpod-conmon-3fe98af29fda9a8bde5122168a527db3747a49d79975a067b2dd9337e53cc831.scope.
Oct 14 04:57:57 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:57:57 np0005486808 podman[289162]: 2025-10-14 08:57:57.030888548 +0000 UTC m=+0.024695618 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:57:57 np0005486808 podman[289162]: 2025-10-14 08:57:57.133651421 +0000 UTC m=+0.127458451 container init 3fe98af29fda9a8bde5122168a527db3747a49d79975a067b2dd9337e53cc831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Oct 14 04:57:57 np0005486808 podman[289162]: 2025-10-14 08:57:57.14011279 +0000 UTC m=+0.133919780 container start 3fe98af29fda9a8bde5122168a527db3747a49d79975a067b2dd9337e53cc831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:57:57 np0005486808 podman[289162]: 2025-10-14 08:57:57.143282818 +0000 UTC m=+0.137089828 container attach 3fe98af29fda9a8bde5122168a527db3747a49d79975a067b2dd9337e53cc831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_elbakyan, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:57:57 np0005486808 sleepy_elbakyan[289180]: 167 167
Oct 14 04:57:57 np0005486808 systemd[1]: libpod-3fe98af29fda9a8bde5122168a527db3747a49d79975a067b2dd9337e53cc831.scope: Deactivated successfully.
Oct 14 04:57:57 np0005486808 podman[289162]: 2025-10-14 08:57:57.145115053 +0000 UTC m=+0.138922063 container died 3fe98af29fda9a8bde5122168a527db3747a49d79975a067b2dd9337e53cc831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Oct 14 04:57:57 np0005486808 systemd[1]: var-lib-containers-storage-overlay-3f517dce1b1f3a939264c309f9b581b37d11e9d215fc23d5e0f4c33f67d441cb-merged.mount: Deactivated successfully.
Oct 14 04:57:57 np0005486808 podman[289162]: 2025-10-14 08:57:57.179463166 +0000 UTC m=+0.173270146 container remove 3fe98af29fda9a8bde5122168a527db3747a49d79975a067b2dd9337e53cc831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_elbakyan, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 14 04:57:57 np0005486808 systemd[1]: libpod-conmon-3fe98af29fda9a8bde5122168a527db3747a49d79975a067b2dd9337e53cc831.scope: Deactivated successfully.
Oct 14 04:57:57 np0005486808 podman[289205]: 2025-10-14 08:57:57.355417997 +0000 UTC m=+0.041915160 container create 9ea2b63917f7ed5fcb6bd34413ea13de48b7e9626466c13cbc936bc21fe49098 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wing, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Oct 14 04:57:57 np0005486808 systemd[1]: Started libpod-conmon-9ea2b63917f7ed5fcb6bd34413ea13de48b7e9626466c13cbc936bc21fe49098.scope.
Oct 14 04:57:57 np0005486808 podman[289205]: 2025-10-14 08:57:57.334669207 +0000 UTC m=+0.021166420 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:57:57 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:57:57 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4df155e7a6a13318cc33103fc92fc8e65bbe4c9994a1c7db8e3315c3b48c9231/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:57:57 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4df155e7a6a13318cc33103fc92fc8e65bbe4c9994a1c7db8e3315c3b48c9231/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:57:57 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4df155e7a6a13318cc33103fc92fc8e65bbe4c9994a1c7db8e3315c3b48c9231/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:57:57 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4df155e7a6a13318cc33103fc92fc8e65bbe4c9994a1c7db8e3315c3b48c9231/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:57:57 np0005486808 podman[289205]: 2025-10-14 08:57:57.450593454 +0000 UTC m=+0.137090637 container init 9ea2b63917f7ed5fcb6bd34413ea13de48b7e9626466c13cbc936bc21fe49098 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wing, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:57:57 np0005486808 podman[289205]: 2025-10-14 08:57:57.459177835 +0000 UTC m=+0.145674998 container start 9ea2b63917f7ed5fcb6bd34413ea13de48b7e9626466c13cbc936bc21fe49098 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wing, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:57:57 np0005486808 podman[289205]: 2025-10-14 08:57:57.462521617 +0000 UTC m=+0.149018780 container attach 9ea2b63917f7ed5fcb6bd34413ea13de48b7e9626466c13cbc936bc21fe49098 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wing, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Oct 14 04:57:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 04:57:57 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1187: 305 pgs: 305 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 88 op/s
Oct 14 04:57:58 np0005486808 interesting_wing[289221]: {
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:    "0": [
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:        {
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:            "devices": [
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:                "/dev/loop3"
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:            ],
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:            "lv_name": "ceph_lv0",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:            "lv_size": "21470642176",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:            "name": "ceph_lv0",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:            "tags": {
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:                "ceph.cluster_name": "ceph",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:                "ceph.crush_device_class": "",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:                "ceph.encrypted": "0",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:                "ceph.osd_id": "0",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:                "ceph.type": "block",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:                "ceph.vdo": "0"
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:            },
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:            "type": "block",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:            "vg_name": "ceph_vg0"
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:        }
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:    ],
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:    "1": [
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:        {
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:            "devices": [
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:                "/dev/loop4"
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:            ],
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:            "lv_name": "ceph_lv1",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:            "lv_size": "21470642176",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:            "name": "ceph_lv1",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:            "tags": {
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:                "ceph.cluster_name": "ceph",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:                "ceph.crush_device_class": "",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:                "ceph.encrypted": "0",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:                "ceph.osd_id": "1",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:                "ceph.type": "block",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:                "ceph.vdo": "0"
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:            },
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:            "type": "block",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:            "vg_name": "ceph_vg1"
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:        }
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:    ],
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:    "2": [
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:        {
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:            "devices": [
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:                "/dev/loop5"
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:            ],
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:            "lv_name": "ceph_lv2",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:            "lv_size": "21470642176",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:            "name": "ceph_lv2",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:            "tags": {
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:                "ceph.cluster_name": "ceph",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:                "ceph.crush_device_class": "",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:                "ceph.encrypted": "0",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:                "ceph.osd_id": "2",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:                "ceph.type": "block",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:                "ceph.vdo": "0"
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:            },
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:            "type": "block",
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:            "vg_name": "ceph_vg2"
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:        }
Oct 14 04:57:58 np0005486808 interesting_wing[289221]:    ]
Oct 14 04:57:58 np0005486808 interesting_wing[289221]: }
Oct 14 04:57:58 np0005486808 systemd[1]: libpod-9ea2b63917f7ed5fcb6bd34413ea13de48b7e9626466c13cbc936bc21fe49098.scope: Deactivated successfully.
Oct 14 04:57:58 np0005486808 podman[289205]: 2025-10-14 08:57:58.300602427 +0000 UTC m=+0.987099620 container died 9ea2b63917f7ed5fcb6bd34413ea13de48b7e9626466c13cbc936bc21fe49098 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wing, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:57:58 np0005486808 systemd[1]: var-lib-containers-storage-overlay-4df155e7a6a13318cc33103fc92fc8e65bbe4c9994a1c7db8e3315c3b48c9231-merged.mount: Deactivated successfully.
Oct 14 04:57:58 np0005486808 podman[289205]: 2025-10-14 08:57:58.360088138 +0000 UTC m=+1.046585301 container remove 9ea2b63917f7ed5fcb6bd34413ea13de48b7e9626466c13cbc936bc21fe49098 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wing, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 14 04:57:58 np0005486808 systemd[1]: libpod-conmon-9ea2b63917f7ed5fcb6bd34413ea13de48b7e9626466c13cbc936bc21fe49098.scope: Deactivated successfully.
Oct 14 04:57:58 np0005486808 nova_compute[259627]: 2025-10-14 08:57:58.379 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432263.3788803, fc212c27-f5c2-4656-9c1f-7c39234fea45 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 04:57:58 np0005486808 nova_compute[259627]: 2025-10-14 08:57:58.381 2 INFO nova.compute.manager [-] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] VM Stopped (Lifecycle Event)
Oct 14 04:57:58 np0005486808 nova_compute[259627]: 2025-10-14 08:57:58.405 2 DEBUG nova.compute.manager [None req-62ae390c-765f-4506-a3bc-f70a6734491e - - - - - -] [instance: fc212c27-f5c2-4656-9c1f-7c39234fea45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 04:57:58 np0005486808 nova_compute[259627]: 2025-10-14 08:57:58.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:57:58 np0005486808 podman[289382]: 2025-10-14 08:57:58.980638205 +0000 UTC m=+0.038232559 container create cab34cf7d99a98e7859e8c8d718d0a459d99c2fc1a2a501ebb03fd9528c4c1fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 14 04:57:59 np0005486808 systemd[1]: Started libpod-conmon-cab34cf7d99a98e7859e8c8d718d0a459d99c2fc1a2a501ebb03fd9528c4c1fb.scope.
Oct 14 04:57:59 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:57:59 np0005486808 podman[289382]: 2025-10-14 08:57:59.040884185 +0000 UTC m=+0.098478559 container init cab34cf7d99a98e7859e8c8d718d0a459d99c2fc1a2a501ebb03fd9528c4c1fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_thompson, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 04:57:59 np0005486808 podman[289382]: 2025-10-14 08:57:59.047444646 +0000 UTC m=+0.105039000 container start cab34cf7d99a98e7859e8c8d718d0a459d99c2fc1a2a501ebb03fd9528c4c1fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 04:57:59 np0005486808 podman[289382]: 2025-10-14 08:57:59.051356962 +0000 UTC m=+0.108951356 container attach cab34cf7d99a98e7859e8c8d718d0a459d99c2fc1a2a501ebb03fd9528c4c1fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_thompson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 04:57:59 np0005486808 nifty_thompson[289398]: 167 167
Oct 14 04:57:59 np0005486808 systemd[1]: libpod-cab34cf7d99a98e7859e8c8d718d0a459d99c2fc1a2a501ebb03fd9528c4c1fb.scope: Deactivated successfully.
Oct 14 04:57:59 np0005486808 podman[289382]: 2025-10-14 08:57:59.053890854 +0000 UTC m=+0.111485208 container died cab34cf7d99a98e7859e8c8d718d0a459d99c2fc1a2a501ebb03fd9528c4c1fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_thompson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 14 04:57:59 np0005486808 podman[289382]: 2025-10-14 08:57:58.963917165 +0000 UTC m=+0.021511539 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:57:59 np0005486808 systemd[1]: var-lib-containers-storage-overlay-30028a58ae730c2ebbef6a6453f98d27678ac1fa69dfacff7f1d70dbef238903-merged.mount: Deactivated successfully.
Oct 14 04:57:59 np0005486808 podman[289382]: 2025-10-14 08:57:59.095740112 +0000 UTC m=+0.153334456 container remove cab34cf7d99a98e7859e8c8d718d0a459d99c2fc1a2a501ebb03fd9528c4c1fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:57:59 np0005486808 systemd[1]: libpod-conmon-cab34cf7d99a98e7859e8c8d718d0a459d99c2fc1a2a501ebb03fd9528c4c1fb.scope: Deactivated successfully.
Oct 14 04:57:59 np0005486808 podman[289422]: 2025-10-14 08:57:59.254390897 +0000 UTC m=+0.038833964 container create b17236158007fedfe48d2faefcdd3475ad5177f9bb2476bbec7c0a54e8326646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_thompson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 14 04:57:59 np0005486808 systemd[1]: Started libpod-conmon-b17236158007fedfe48d2faefcdd3475ad5177f9bb2476bbec7c0a54e8326646.scope.
Oct 14 04:57:59 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:57:59 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2581e58298a1d9675f4d601442d0226923d2368d04f0f4d2ee2f6a79990e5a8d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:57:59 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2581e58298a1d9675f4d601442d0226923d2368d04f0f4d2ee2f6a79990e5a8d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:57:59 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2581e58298a1d9675f4d601442d0226923d2368d04f0f4d2ee2f6a79990e5a8d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:57:59 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2581e58298a1d9675f4d601442d0226923d2368d04f0f4d2ee2f6a79990e5a8d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:57:59 np0005486808 podman[289422]: 2025-10-14 08:57:59.319540757 +0000 UTC m=+0.103983844 container init b17236158007fedfe48d2faefcdd3475ad5177f9bb2476bbec7c0a54e8326646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 14 04:57:59 np0005486808 podman[289422]: 2025-10-14 08:57:59.332169457 +0000 UTC m=+0.116612514 container start b17236158007fedfe48d2faefcdd3475ad5177f9bb2476bbec7c0a54e8326646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:57:59 np0005486808 podman[289422]: 2025-10-14 08:57:59.238149769 +0000 UTC m=+0.022592866 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:57:59 np0005486808 podman[289422]: 2025-10-14 08:57:59.335170411 +0000 UTC m=+0.119613478 container attach b17236158007fedfe48d2faefcdd3475ad5177f9bb2476bbec7c0a54e8326646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 14 04:57:59 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1188: 305 pgs: 305 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 88 op/s
Oct 14 04:58:00 np0005486808 dazzling_thompson[289439]: {
Oct 14 04:58:00 np0005486808 dazzling_thompson[289439]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 04:58:00 np0005486808 dazzling_thompson[289439]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:58:00 np0005486808 dazzling_thompson[289439]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 04:58:00 np0005486808 dazzling_thompson[289439]:        "osd_id": 2,
Oct 14 04:58:00 np0005486808 dazzling_thompson[289439]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:58:00 np0005486808 dazzling_thompson[289439]:        "type": "bluestore"
Oct 14 04:58:00 np0005486808 dazzling_thompson[289439]:    },
Oct 14 04:58:00 np0005486808 dazzling_thompson[289439]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 04:58:00 np0005486808 dazzling_thompson[289439]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:58:00 np0005486808 dazzling_thompson[289439]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 04:58:00 np0005486808 dazzling_thompson[289439]:        "osd_id": 1,
Oct 14 04:58:00 np0005486808 dazzling_thompson[289439]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:58:00 np0005486808 dazzling_thompson[289439]:        "type": "bluestore"
Oct 14 04:58:00 np0005486808 dazzling_thompson[289439]:    },
Oct 14 04:58:00 np0005486808 dazzling_thompson[289439]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 04:58:00 np0005486808 dazzling_thompson[289439]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:58:00 np0005486808 dazzling_thompson[289439]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 04:58:00 np0005486808 dazzling_thompson[289439]:        "osd_id": 0,
Oct 14 04:58:00 np0005486808 dazzling_thompson[289439]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:58:00 np0005486808 dazzling_thompson[289439]:        "type": "bluestore"
Oct 14 04:58:00 np0005486808 dazzling_thompson[289439]:    }
Oct 14 04:58:00 np0005486808 dazzling_thompson[289439]: }
Oct 14 04:58:00 np0005486808 systemd[1]: libpod-b17236158007fedfe48d2faefcdd3475ad5177f9bb2476bbec7c0a54e8326646.scope: Deactivated successfully.
Oct 14 04:58:00 np0005486808 conmon[289439]: conmon b17236158007fedfe48d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b17236158007fedfe48d2faefcdd3475ad5177f9bb2476bbec7c0a54e8326646.scope/container/memory.events
Oct 14 04:58:00 np0005486808 podman[289422]: 2025-10-14 08:58:00.330450231 +0000 UTC m=+1.114893298 container died b17236158007fedfe48d2faefcdd3475ad5177f9bb2476bbec7c0a54e8326646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:58:00 np0005486808 systemd[1]: var-lib-containers-storage-overlay-2581e58298a1d9675f4d601442d0226923d2368d04f0f4d2ee2f6a79990e5a8d-merged.mount: Deactivated successfully.
Oct 14 04:58:00 np0005486808 podman[289422]: 2025-10-14 08:58:00.395047308 +0000 UTC m=+1.179490375 container remove b17236158007fedfe48d2faefcdd3475ad5177f9bb2476bbec7c0a54e8326646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:58:00 np0005486808 systemd[1]: libpod-conmon-b17236158007fedfe48d2faefcdd3475ad5177f9bb2476bbec7c0a54e8326646.scope: Deactivated successfully.
Oct 14 04:58:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:58:00 np0005486808 podman[289481]: 2025-10-14 08:58:00.436828574 +0000 UTC m=+0.071030126 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 14 04:58:00 np0005486808 podman[289472]: 2025-10-14 08:58:00.436866964 +0000 UTC m=+0.076891839 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:58:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:58:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:58:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:58:00 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev ce990c2d-32e7-4665-ac9b-e61463cd5840 does not exist
Oct 14 04:58:00 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 8a274596-c563-4684-93a0-a6e67c82f22a does not exist
Oct 14 04:58:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Oct 14 04:58:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Oct 14 04:58:00 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Oct 14 04:58:00 np0005486808 nova_compute[259627]: 2025-10-14 08:58:00.818 2 DEBUG nova.compute.manager [None req-5f4511de-3744-4d0d-8dfd-eaa94387243a 4086d396d4af49dfad54dbc8ee5ac67c a636329e57e7406abf5fc3ca0a39a6f0 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:58:00 np0005486808 nova_compute[259627]: 2025-10-14 08:58:00.826 2 INFO nova.compute.manager [None req-5f4511de-3744-4d0d-8dfd-eaa94387243a 4086d396d4af49dfad54dbc8ee5ac67c a636329e57e7406abf5fc3ca0a39a6f0 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Retrieving diagnostics#033[00m
Oct 14 04:58:01 np0005486808 nova_compute[259627]: 2025-10-14 08:58:01.101 2 DEBUG oslo_concurrency.lockutils [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Acquiring lock "c79d4673-ee43-418f-8d38-f48cb8dc4659" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:01 np0005486808 nova_compute[259627]: 2025-10-14 08:58:01.102 2 DEBUG oslo_concurrency.lockutils [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Lock "c79d4673-ee43-418f-8d38-f48cb8dc4659" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:01 np0005486808 nova_compute[259627]: 2025-10-14 08:58:01.102 2 DEBUG oslo_concurrency.lockutils [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Acquiring lock "c79d4673-ee43-418f-8d38-f48cb8dc4659-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:01 np0005486808 nova_compute[259627]: 2025-10-14 08:58:01.102 2 DEBUG oslo_concurrency.lockutils [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Lock "c79d4673-ee43-418f-8d38-f48cb8dc4659-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:01 np0005486808 nova_compute[259627]: 2025-10-14 08:58:01.102 2 DEBUG oslo_concurrency.lockutils [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Lock "c79d4673-ee43-418f-8d38-f48cb8dc4659-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:01 np0005486808 nova_compute[259627]: 2025-10-14 08:58:01.103 2 INFO nova.compute.manager [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Terminating instance#033[00m
Oct 14 04:58:01 np0005486808 nova_compute[259627]: 2025-10-14 08:58:01.104 2 DEBUG oslo_concurrency.lockutils [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Acquiring lock "refresh_cache-c79d4673-ee43-418f-8d38-f48cb8dc4659" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:58:01 np0005486808 nova_compute[259627]: 2025-10-14 08:58:01.104 2 DEBUG oslo_concurrency.lockutils [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Acquired lock "refresh_cache-c79d4673-ee43-418f-8d38-f48cb8dc4659" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:58:01 np0005486808 nova_compute[259627]: 2025-10-14 08:58:01.104 2 DEBUG nova.network.neutron [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 04:58:01 np0005486808 nova_compute[259627]: 2025-10-14 08:58:01.406 2 DEBUG nova.network.neutron [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 04:58:01 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:58:01 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:58:01 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1190: 305 pgs: 305 active+clean; 120 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 378 KiB/s rd, 2.5 MiB/s wr, 72 op/s
Oct 14 04:58:01 np0005486808 nova_compute[259627]: 2025-10-14 08:58:01.732 2 DEBUG nova.network.neutron [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:58:01 np0005486808 nova_compute[259627]: 2025-10-14 08:58:01.749 2 DEBUG oslo_concurrency.lockutils [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Releasing lock "refresh_cache-c79d4673-ee43-418f-8d38-f48cb8dc4659" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:58:01 np0005486808 nova_compute[259627]: 2025-10-14 08:58:01.749 2 DEBUG nova.compute.manager [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 04:58:01 np0005486808 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000015.scope: Deactivated successfully.
Oct 14 04:58:01 np0005486808 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000015.scope: Consumed 12.061s CPU time.
Oct 14 04:58:01 np0005486808 systemd-machined[214636]: Machine qemu-22-instance-00000015 terminated.
Oct 14 04:58:01 np0005486808 nova_compute[259627]: 2025-10-14 08:58:01.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:01 np0005486808 nova_compute[259627]: 2025-10-14 08:58:01.981 2 INFO nova.virt.libvirt.driver [-] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Instance destroyed successfully.#033[00m
Oct 14 04:58:01 np0005486808 nova_compute[259627]: 2025-10-14 08:58:01.982 2 DEBUG nova.objects.instance [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Lazy-loading 'resources' on Instance uuid c79d4673-ee43-418f-8d38-f48cb8dc4659 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:58:02 np0005486808 nova_compute[259627]: 2025-10-14 08:58:02.471 2 INFO nova.virt.libvirt.driver [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Deleting instance files /var/lib/nova/instances/c79d4673-ee43-418f-8d38-f48cb8dc4659_del#033[00m
Oct 14 04:58:02 np0005486808 nova_compute[259627]: 2025-10-14 08:58:02.473 2 INFO nova.virt.libvirt.driver [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Deletion of /var/lib/nova/instances/c79d4673-ee43-418f-8d38-f48cb8dc4659_del complete#033[00m
Oct 14 04:58:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 04:58:02 np0005486808 nova_compute[259627]: 2025-10-14 08:58:02.541 2 INFO nova.compute.manager [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 04:58:02 np0005486808 nova_compute[259627]: 2025-10-14 08:58:02.541 2 DEBUG oslo.service.loopingcall [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 04:58:02 np0005486808 nova_compute[259627]: 2025-10-14 08:58:02.542 2 DEBUG nova.compute.manager [-] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 04:58:02 np0005486808 nova_compute[259627]: 2025-10-14 08:58:02.542 2 DEBUG nova.network.neutron [-] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 04:58:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:58:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:58:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:58:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:58:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:58:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:58:02 np0005486808 nova_compute[259627]: 2025-10-14 08:58:02.830 2 DEBUG nova.network.neutron [-] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 04:58:02 np0005486808 nova_compute[259627]: 2025-10-14 08:58:02.854 2 DEBUG nova.network.neutron [-] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:58:02 np0005486808 nova_compute[259627]: 2025-10-14 08:58:02.871 2 INFO nova.compute.manager [-] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Took 0.33 seconds to deallocate network for instance.#033[00m
Oct 14 04:58:02 np0005486808 nova_compute[259627]: 2025-10-14 08:58:02.911 2 DEBUG oslo_concurrency.lockutils [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:02 np0005486808 nova_compute[259627]: 2025-10-14 08:58:02.912 2 DEBUG oslo_concurrency.lockutils [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:02 np0005486808 nova_compute[259627]: 2025-10-14 08:58:02.967 2 DEBUG oslo_concurrency.processutils [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:58:03 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1198239872' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:58:03 np0005486808 nova_compute[259627]: 2025-10-14 08:58:03.428 2 DEBUG oslo_concurrency.processutils [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:03 np0005486808 nova_compute[259627]: 2025-10-14 08:58:03.461 2 DEBUG nova.compute.provider_tree [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:58:03 np0005486808 nova_compute[259627]: 2025-10-14 08:58:03.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:03 np0005486808 nova_compute[259627]: 2025-10-14 08:58:03.486 2 DEBUG nova.scheduler.client.report [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:58:03 np0005486808 nova_compute[259627]: 2025-10-14 08:58:03.512 2 DEBUG oslo_concurrency.lockutils [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:03 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1191: 305 pgs: 305 active+clean; 120 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 378 KiB/s rd, 2.5 MiB/s wr, 72 op/s
Oct 14 04:58:03 np0005486808 nova_compute[259627]: 2025-10-14 08:58:03.554 2 INFO nova.scheduler.client.report [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Deleted allocations for instance c79d4673-ee43-418f-8d38-f48cb8dc4659#033[00m
Oct 14 04:58:03 np0005486808 nova_compute[259627]: 2025-10-14 08:58:03.609 2 DEBUG oslo_concurrency.lockutils [None req-f545ca9a-5fa7-4a9c-9dcb-fc65d2636fb1 d52b590f38bb47e0abb3e01c8a1352af 53de42c913444310bd1af3c50b917f19 - - default default] Lock "c79d4673-ee43-418f-8d38-f48cb8dc4659" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.507s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 04:58:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3465458057' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 04:58:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 04:58:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3465458057' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 04:58:05 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1192: 305 pgs: 305 active+clean; 41 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 426 KiB/s rd, 2.6 MiB/s wr, 133 op/s
Oct 14 04:58:06 np0005486808 nova_compute[259627]: 2025-10-14 08:58:06.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:07.015 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:07.015 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:07.015 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 04:58:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Oct 14 04:58:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Oct 14 04:58:07 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Oct 14 04:58:07 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1194: 305 pgs: 305 active+clean; 41 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 532 KiB/s rd, 3.2 MiB/s wr, 166 op/s
Oct 14 04:58:07 np0005486808 nova_compute[259627]: 2025-10-14 08:58:07.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:58:07 np0005486808 nova_compute[259627]: 2025-10-14 08:58:07.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 14 04:58:08 np0005486808 nova_compute[259627]: 2025-10-14 08:58:08.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:08 np0005486808 nova_compute[259627]: 2025-10-14 08:58:08.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:58:09 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1195: 305 pgs: 305 active+clean; 41 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 42 KiB/s wr, 69 op/s
Oct 14 04:58:09 np0005486808 podman[289616]: 2025-10-14 08:58:09.714809022 +0000 UTC m=+0.113816485 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 14 04:58:09 np0005486808 podman[289615]: 2025-10-14 08:58:09.729211016 +0000 UTC m=+0.138275446 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 04:58:11 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1196: 305 pgs: 305 active+clean; 41 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 36 KiB/s wr, 60 op/s
Oct 14 04:58:11 np0005486808 nova_compute[259627]: 2025-10-14 08:58:11.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 04:58:12 np0005486808 nova_compute[259627]: 2025-10-14 08:58:12.988 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:58:13 np0005486808 nova_compute[259627]: 2025-10-14 08:58:13.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:13 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1197: 305 pgs: 305 active+clean; 41 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 36 KiB/s wr, 60 op/s
Oct 14 04:58:13 np0005486808 nova_compute[259627]: 2025-10-14 08:58:13.781 2 DEBUG oslo_concurrency.lockutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Acquiring lock "51c76e0f-284d-4122-83b4-32c4518b9056" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:13 np0005486808 nova_compute[259627]: 2025-10-14 08:58:13.782 2 DEBUG oslo_concurrency.lockutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lock "51c76e0f-284d-4122-83b4-32c4518b9056" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:13 np0005486808 nova_compute[259627]: 2025-10-14 08:58:13.800 2 DEBUG nova.compute.manager [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 04:58:13 np0005486808 nova_compute[259627]: 2025-10-14 08:58:13.972 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:58:14 np0005486808 nova_compute[259627]: 2025-10-14 08:58:14.003 2 DEBUG oslo_concurrency.lockutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:14 np0005486808 nova_compute[259627]: 2025-10-14 08:58:14.004 2 DEBUG oslo_concurrency.lockutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:14 np0005486808 nova_compute[259627]: 2025-10-14 08:58:14.014 2 DEBUG nova.virt.hardware [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 04:58:14 np0005486808 nova_compute[259627]: 2025-10-14 08:58:14.014 2 INFO nova.compute.claims [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 04:58:14 np0005486808 nova_compute[259627]: 2025-10-14 08:58:14.315 2 DEBUG oslo_concurrency.processutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:14 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:58:14 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2246393399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:58:14 np0005486808 nova_compute[259627]: 2025-10-14 08:58:14.811 2 DEBUG oslo_concurrency.processutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:14 np0005486808 nova_compute[259627]: 2025-10-14 08:58:14.821 2 DEBUG nova.compute.provider_tree [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:58:14 np0005486808 nova_compute[259627]: 2025-10-14 08:58:14.853 2 DEBUG nova.scheduler.client.report [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:58:14 np0005486808 nova_compute[259627]: 2025-10-14 08:58:14.903 2 DEBUG oslo_concurrency.lockutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.899s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:14 np0005486808 nova_compute[259627]: 2025-10-14 08:58:14.904 2 DEBUG nova.compute.manager [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 04:58:14 np0005486808 nova_compute[259627]: 2025-10-14 08:58:14.977 2 DEBUG nova.compute.manager [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Oct 14 04:58:14 np0005486808 nova_compute[259627]: 2025-10-14 08:58:14.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:58:14 np0005486808 nova_compute[259627]: 2025-10-14 08:58:14.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 04:58:15 np0005486808 nova_compute[259627]: 2025-10-14 08:58:15.013 2 INFO nova.virt.libvirt.driver [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 04:58:15 np0005486808 nova_compute[259627]: 2025-10-14 08:58:15.039 2 DEBUG nova.compute.manager [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 04:58:15 np0005486808 nova_compute[259627]: 2025-10-14 08:58:15.161 2 DEBUG nova.compute.manager [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 04:58:15 np0005486808 nova_compute[259627]: 2025-10-14 08:58:15.164 2 DEBUG nova.virt.libvirt.driver [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 04:58:15 np0005486808 nova_compute[259627]: 2025-10-14 08:58:15.165 2 INFO nova.virt.libvirt.driver [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Creating image(s)#033[00m
Oct 14 04:58:15 np0005486808 nova_compute[259627]: 2025-10-14 08:58:15.206 2 DEBUG nova.storage.rbd_utils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:15 np0005486808 nova_compute[259627]: 2025-10-14 08:58:15.248 2 DEBUG nova.storage.rbd_utils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:15 np0005486808 nova_compute[259627]: 2025-10-14 08:58:15.278 2 DEBUG nova.storage.rbd_utils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:15 np0005486808 nova_compute[259627]: 2025-10-14 08:58:15.283 2 DEBUG oslo_concurrency.processutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:15 np0005486808 nova_compute[259627]: 2025-10-14 08:58:15.380 2 DEBUG oslo_concurrency.processutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:15 np0005486808 nova_compute[259627]: 2025-10-14 08:58:15.381 2 DEBUG oslo_concurrency.lockutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:15 np0005486808 nova_compute[259627]: 2025-10-14 08:58:15.382 2 DEBUG oslo_concurrency.lockutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:15 np0005486808 nova_compute[259627]: 2025-10-14 08:58:15.382 2 DEBUG oslo_concurrency.lockutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:15 np0005486808 nova_compute[259627]: 2025-10-14 08:58:15.409 2 DEBUG nova.storage.rbd_utils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:15 np0005486808 nova_compute[259627]: 2025-10-14 08:58:15.413 2 DEBUG oslo_concurrency.processutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 51c76e0f-284d-4122-83b4-32c4518b9056_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:15 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1198: 305 pgs: 305 active+clean; 41 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:58:15 np0005486808 nova_compute[259627]: 2025-10-14 08:58:15.681 2 DEBUG oslo_concurrency.processutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 51c76e0f-284d-4122-83b4-32c4518b9056_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.268s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:58:15 np0005486808 nova_compute[259627]: 2025-10-14 08:58:15.769 2 DEBUG nova.storage.rbd_utils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] resizing rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 04:58:15 np0005486808 nova_compute[259627]: 2025-10-14 08:58:15.884 2 DEBUG nova.objects.instance [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lazy-loading 'migration_context' on Instance uuid 51c76e0f-284d-4122-83b4-32c4518b9056 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 04:58:15 np0005486808 nova_compute[259627]: 2025-10-14 08:58:15.903 2 DEBUG nova.virt.libvirt.driver [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 04:58:15 np0005486808 nova_compute[259627]: 2025-10-14 08:58:15.904 2 DEBUG nova.virt.libvirt.driver [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Ensure instance console log exists: /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 04:58:15 np0005486808 nova_compute[259627]: 2025-10-14 08:58:15.904 2 DEBUG oslo_concurrency.lockutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:58:15 np0005486808 nova_compute[259627]: 2025-10-14 08:58:15.905 2 DEBUG oslo_concurrency.lockutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:58:15 np0005486808 nova_compute[259627]: 2025-10-14 08:58:15.905 2 DEBUG oslo_concurrency.lockutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:58:15 np0005486808 nova_compute[259627]: 2025-10-14 08:58:15.906 2 DEBUG nova.virt.libvirt.driver [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 04:58:15 np0005486808 nova_compute[259627]: 2025-10-14 08:58:15.910 2 WARNING nova.virt.libvirt.driver [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 04:58:15 np0005486808 nova_compute[259627]: 2025-10-14 08:58:15.916 2 DEBUG nova.virt.libvirt.host [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 04:58:15 np0005486808 nova_compute[259627]: 2025-10-14 08:58:15.917 2 DEBUG nova.virt.libvirt.host [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 04:58:15 np0005486808 nova_compute[259627]: 2025-10-14 08:58:15.922 2 DEBUG nova.virt.libvirt.host [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 04:58:15 np0005486808 nova_compute[259627]: 2025-10-14 08:58:15.923 2 DEBUG nova.virt.libvirt.host [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 04:58:15 np0005486808 nova_compute[259627]: 2025-10-14 08:58:15.923 2 DEBUG nova.virt.libvirt.driver [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 04:58:15 np0005486808 nova_compute[259627]: 2025-10-14 08:58:15.923 2 DEBUG nova.virt.hardware [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 04:58:15 np0005486808 nova_compute[259627]: 2025-10-14 08:58:15.924 2 DEBUG nova.virt.hardware [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 04:58:15 np0005486808 nova_compute[259627]: 2025-10-14 08:58:15.924 2 DEBUG nova.virt.hardware [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 04:58:15 np0005486808 nova_compute[259627]: 2025-10-14 08:58:15.924 2 DEBUG nova.virt.hardware [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 04:58:15 np0005486808 nova_compute[259627]: 2025-10-14 08:58:15.925 2 DEBUG nova.virt.hardware [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 04:58:15 np0005486808 nova_compute[259627]: 2025-10-14 08:58:15.925 2 DEBUG nova.virt.hardware [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 04:58:15 np0005486808 nova_compute[259627]: 2025-10-14 08:58:15.925 2 DEBUG nova.virt.hardware [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 04:58:15 np0005486808 nova_compute[259627]: 2025-10-14 08:58:15.925 2 DEBUG nova.virt.hardware [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 04:58:15 np0005486808 nova_compute[259627]: 2025-10-14 08:58:15.926 2 DEBUG nova.virt.hardware [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 04:58:15 np0005486808 nova_compute[259627]: 2025-10-14 08:58:15.926 2 DEBUG nova.virt.hardware [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 04:58:15 np0005486808 nova_compute[259627]: 2025-10-14 08:58:15.926 2 DEBUG nova.virt.hardware [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 04:58:15 np0005486808 nova_compute[259627]: 2025-10-14 08:58:15.929 2 DEBUG oslo_concurrency.processutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:58:15 np0005486808 nova_compute[259627]: 2025-10-14 08:58:15.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 04:58:15 np0005486808 nova_compute[259627]: 2025-10-14 08:58:15.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 04:58:16 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:58:16 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3715691113' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:58:16 np0005486808 nova_compute[259627]: 2025-10-14 08:58:16.367 2 DEBUG oslo_concurrency.processutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:58:16 np0005486808 nova_compute[259627]: 2025-10-14 08:58:16.394 2 DEBUG nova.storage.rbd_utils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:58:16 np0005486808 nova_compute[259627]: 2025-10-14 08:58:16.400 2 DEBUG oslo_concurrency.processutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:58:16 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:58:16 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3301376505' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:58:16 np0005486808 nova_compute[259627]: 2025-10-14 08:58:16.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:58:16 np0005486808 nova_compute[259627]: 2025-10-14 08:58:16.909 2 DEBUG oslo_concurrency.processutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:58:16 np0005486808 nova_compute[259627]: 2025-10-14 08:58:16.912 2 DEBUG nova.objects.instance [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lazy-loading 'pci_devices' on Instance uuid 51c76e0f-284d-4122-83b4-32c4518b9056 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 04:58:16 np0005486808 nova_compute[259627]: 2025-10-14 08:58:16.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 04:58:16 np0005486808 nova_compute[259627]: 2025-10-14 08:58:16.980 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432281.9769046, c79d4673-ee43-418f-8d38-f48cb8dc4659 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 04:58:16 np0005486808 nova_compute[259627]: 2025-10-14 08:58:16.981 2 INFO nova.compute.manager [-] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] VM Stopped (Lifecycle Event)
Oct 14 04:58:16 np0005486808 nova_compute[259627]: 2025-10-14 08:58:16.987 2 DEBUG nova.virt.libvirt.driver [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] End _get_guest_xml xml=<domain type="kvm">
Oct 14 04:58:16 np0005486808 nova_compute[259627]:  <uuid>51c76e0f-284d-4122-83b4-32c4518b9056</uuid>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:  <name>instance-00000016</name>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 04:58:16 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:      <nova:name>tempest-ServersAdmin275Test-server-546094612</nova:name>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 08:58:15</nova:creationTime>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 04:58:16 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:        <nova:user uuid="24a7b84f511340ae859b668a0e7becf6">tempest-ServersAdmin275Test-1795131452-project-member</nova:user>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:        <nova:project uuid="61066a48551647f18a4cfb7a7147e7ed">tempest-ServersAdmin275Test-1795131452</nova:project>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:      <nova:ports/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <system>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:      <entry name="serial">51c76e0f-284d-4122-83b4-32c4518b9056</entry>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:      <entry name="uuid">51c76e0f-284d-4122-83b4-32c4518b9056</entry>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    </system>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:  <os>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:  </os>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:  <features>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:  </features>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:  </clock>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:  <devices>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 04:58:16 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/51c76e0f-284d-4122-83b4-32c4518b9056_disk">
Oct 14 04:58:16 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:58:16 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 04:58:16 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/51c76e0f-284d-4122-83b4-32c4518b9056_disk.config">
Oct 14 04:58:16 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:58:16 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 04:58:16 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/console.log" append="off"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    </serial>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <video>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    </video>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 04:58:16 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    </rng>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 04:58:16 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 04:58:16 np0005486808 nova_compute[259627]:  </devices>
Oct 14 04:58:16 np0005486808 nova_compute[259627]: </domain>
Oct 14 04:58:16 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 04:58:17 np0005486808 nova_compute[259627]: 2025-10-14 08:58:17.198 2 DEBUG nova.compute.manager [None req-07187fbb-98d6-4000-b259-b6be5f75ab26 - - - - - -] [instance: c79d4673-ee43-418f-8d38-f48cb8dc4659] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 04:58:17 np0005486808 nova_compute[259627]: 2025-10-14 08:58:17.238 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:58:17 np0005486808 nova_compute[259627]: 2025-10-14 08:58:17.239 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:58:17 np0005486808 nova_compute[259627]: 2025-10-14 08:58:17.240 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:58:17 np0005486808 nova_compute[259627]: 2025-10-14 08:58:17.240 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 04:58:17 np0005486808 nova_compute[259627]: 2025-10-14 08:58:17.241 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:58:17 np0005486808 nova_compute[259627]: 2025-10-14 08:58:17.285 2 DEBUG nova.virt.libvirt.driver [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 04:58:17 np0005486808 nova_compute[259627]: 2025-10-14 08:58:17.286 2 DEBUG nova.virt.libvirt.driver [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 04:58:17 np0005486808 nova_compute[259627]: 2025-10-14 08:58:17.288 2 INFO nova.virt.libvirt.driver [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Using config drive
Oct 14 04:58:17 np0005486808 nova_compute[259627]: 2025-10-14 08:58:17.318 2 DEBUG nova.storage.rbd_utils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:58:17 np0005486808 nova_compute[259627]: 2025-10-14 08:58:17.372 2 DEBUG oslo_concurrency.lockutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Acquiring lock "a071857d-db87-4931-95ad-f8c627f74160" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:58:17 np0005486808 nova_compute[259627]: 2025-10-14 08:58:17.372 2 DEBUG oslo_concurrency.lockutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:58:17 np0005486808 nova_compute[259627]: 2025-10-14 08:58:17.394 2 DEBUG nova.compute.manager [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 04:58:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 04:58:17 np0005486808 nova_compute[259627]: 2025-10-14 08:58:17.527 2 DEBUG oslo_concurrency.lockutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:58:17 np0005486808 nova_compute[259627]: 2025-10-14 08:58:17.528 2 DEBUG oslo_concurrency.lockutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:58:17 np0005486808 nova_compute[259627]: 2025-10-14 08:58:17.538 2 DEBUG nova.virt.hardware [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 04:58:17 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1199: 305 pgs: 305 active+clean; 41 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:58:17 np0005486808 nova_compute[259627]: 2025-10-14 08:58:17.538 2 INFO nova.compute.claims [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 04:58:17 np0005486808 nova_compute[259627]: 2025-10-14 08:58:17.657 2 DEBUG oslo_concurrency.processutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:58:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4283171025' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:58:17 np0005486808 nova_compute[259627]: 2025-10-14 08:58:17.740 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:17 np0005486808 nova_compute[259627]: 2025-10-14 08:58:17.800 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 04:58:17 np0005486808 nova_compute[259627]: 2025-10-14 08:58:17.800 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 04:58:17 np0005486808 nova_compute[259627]: 2025-10-14 08:58:17.844 2 INFO nova.virt.libvirt.driver [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Creating config drive at /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/disk.config#033[00m
Oct 14 04:58:17 np0005486808 nova_compute[259627]: 2025-10-14 08:58:17.849 2 DEBUG oslo_concurrency.processutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_6cw0xl_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:17 np0005486808 nova_compute[259627]: 2025-10-14 08:58:17.977 2 DEBUG oslo_concurrency.processutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_6cw0xl_" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:18 np0005486808 nova_compute[259627]: 2025-10-14 08:58:18.001 2 DEBUG nova.storage.rbd_utils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:18 np0005486808 nova_compute[259627]: 2025-10-14 08:58:18.005 2 DEBUG oslo_concurrency.processutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/disk.config 51c76e0f-284d-4122-83b4-32c4518b9056_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:18 np0005486808 nova_compute[259627]: 2025-10-14 08:58:18.061 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:58:18 np0005486808 nova_compute[259627]: 2025-10-14 08:58:18.062 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4575MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 04:58:18 np0005486808 nova_compute[259627]: 2025-10-14 08:58:18.062 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:58:18 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2348919076' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:58:18 np0005486808 nova_compute[259627]: 2025-10-14 08:58:18.153 2 DEBUG oslo_concurrency.processutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:18 np0005486808 nova_compute[259627]: 2025-10-14 08:58:18.160 2 DEBUG nova.compute.provider_tree [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:58:18 np0005486808 nova_compute[259627]: 2025-10-14 08:58:18.187 2 DEBUG oslo_concurrency.processutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/disk.config 51c76e0f-284d-4122-83b4-32c4518b9056_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:18 np0005486808 nova_compute[259627]: 2025-10-14 08:58:18.188 2 INFO nova.virt.libvirt.driver [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Deleting local config drive /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/disk.config because it was imported into RBD.#033[00m
Oct 14 04:58:18 np0005486808 nova_compute[259627]: 2025-10-14 08:58:18.192 2 DEBUG nova.scheduler.client.report [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:58:18 np0005486808 nova_compute[259627]: 2025-10-14 08:58:18.222 2 DEBUG oslo_concurrency.lockutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:18 np0005486808 nova_compute[259627]: 2025-10-14 08:58:18.223 2 DEBUG nova.compute.manager [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 04:58:18 np0005486808 nova_compute[259627]: 2025-10-14 08:58:18.228 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:18 np0005486808 systemd-machined[214636]: New machine qemu-23-instance-00000016.
Oct 14 04:58:18 np0005486808 systemd[1]: Started Virtual Machine qemu-23-instance-00000016.
Oct 14 04:58:18 np0005486808 nova_compute[259627]: 2025-10-14 08:58:18.327 2 DEBUG nova.compute.manager [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 04:58:18 np0005486808 nova_compute[259627]: 2025-10-14 08:58:18.328 2 DEBUG nova.network.neutron [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 04:58:18 np0005486808 nova_compute[259627]: 2025-10-14 08:58:18.349 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 51c76e0f-284d-4122-83b4-32c4518b9056 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 04:58:18 np0005486808 nova_compute[259627]: 2025-10-14 08:58:18.349 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance a071857d-db87-4931-95ad-f8c627f74160 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 04:58:18 np0005486808 nova_compute[259627]: 2025-10-14 08:58:18.350 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 04:58:18 np0005486808 nova_compute[259627]: 2025-10-14 08:58:18.350 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 04:58:18 np0005486808 nova_compute[259627]: 2025-10-14 08:58:18.355 2 INFO nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 04:58:18 np0005486808 nova_compute[259627]: 2025-10-14 08:58:18.373 2 DEBUG nova.compute.manager [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 04:58:18 np0005486808 nova_compute[259627]: 2025-10-14 08:58:18.432 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:18 np0005486808 nova_compute[259627]: 2025-10-14 08:58:18.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:18 np0005486808 nova_compute[259627]: 2025-10-14 08:58:18.507 2 DEBUG nova.compute.manager [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 04:58:18 np0005486808 nova_compute[259627]: 2025-10-14 08:58:18.509 2 DEBUG nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 04:58:18 np0005486808 nova_compute[259627]: 2025-10-14 08:58:18.509 2 INFO nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Creating image(s)#033[00m
Oct 14 04:58:18 np0005486808 nova_compute[259627]: 2025-10-14 08:58:18.532 2 DEBUG nova.storage.rbd_utils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] rbd image a071857d-db87-4931-95ad-f8c627f74160_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:18 np0005486808 nova_compute[259627]: 2025-10-14 08:58:18.562 2 DEBUG nova.storage.rbd_utils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] rbd image a071857d-db87-4931-95ad-f8c627f74160_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:18 np0005486808 nova_compute[259627]: 2025-10-14 08:58:18.595 2 DEBUG nova.storage.rbd_utils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] rbd image a071857d-db87-4931-95ad-f8c627f74160_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:18 np0005486808 nova_compute[259627]: 2025-10-14 08:58:18.610 2 DEBUG oslo_concurrency.processutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:18 np0005486808 nova_compute[259627]: 2025-10-14 08:58:18.657 2 DEBUG nova.policy [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '664abc01a11d458d9644488bf31e47f4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '99649891054745d8a5186a1ad099e5a7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 04:58:18 np0005486808 nova_compute[259627]: 2025-10-14 08:58:18.721 2 DEBUG oslo_concurrency.processutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:18 np0005486808 nova_compute[259627]: 2025-10-14 08:58:18.722 2 DEBUG oslo_concurrency.lockutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:18 np0005486808 nova_compute[259627]: 2025-10-14 08:58:18.724 2 DEBUG oslo_concurrency.lockutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:18 np0005486808 nova_compute[259627]: 2025-10-14 08:58:18.725 2 DEBUG oslo_concurrency.lockutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:18 np0005486808 nova_compute[259627]: 2025-10-14 08:58:18.755 2 DEBUG nova.storage.rbd_utils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] rbd image a071857d-db87-4931-95ad-f8c627f74160_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:18 np0005486808 nova_compute[259627]: 2025-10-14 08:58:18.760 2 DEBUG oslo_concurrency.processutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 a071857d-db87-4931-95ad-f8c627f74160_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:58:18 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1583583320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:58:18 np0005486808 nova_compute[259627]: 2025-10-14 08:58:18.989 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:18 np0005486808 nova_compute[259627]: 2025-10-14 08:58:18.995 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:58:18 np0005486808 nova_compute[259627]: 2025-10-14 08:58:18.997 2 DEBUG oslo_concurrency.processutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 a071857d-db87-4931-95ad-f8c627f74160_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.237s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:19 np0005486808 nova_compute[259627]: 2025-10-14 08:58:19.027 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:58:19 np0005486808 nova_compute[259627]: 2025-10-14 08:58:19.060 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 04:58:19 np0005486808 nova_compute[259627]: 2025-10-14 08:58:19.061 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:19 np0005486808 nova_compute[259627]: 2025-10-14 08:58:19.061 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:58:19 np0005486808 nova_compute[259627]: 2025-10-14 08:58:19.062 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 14 04:58:19 np0005486808 nova_compute[259627]: 2025-10-14 08:58:19.067 2 DEBUG nova.storage.rbd_utils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] resizing rbd image a071857d-db87-4931-95ad-f8c627f74160_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 04:58:19 np0005486808 nova_compute[259627]: 2025-10-14 08:58:19.096 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 14 04:58:19 np0005486808 nova_compute[259627]: 2025-10-14 08:58:19.154 2 DEBUG nova.objects.instance [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lazy-loading 'migration_context' on Instance uuid a071857d-db87-4931-95ad-f8c627f74160 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:58:19 np0005486808 nova_compute[259627]: 2025-10-14 08:58:19.171 2 DEBUG nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 04:58:19 np0005486808 nova_compute[259627]: 2025-10-14 08:58:19.172 2 DEBUG nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Ensure instance console log exists: /var/lib/nova/instances/a071857d-db87-4931-95ad-f8c627f74160/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 04:58:19 np0005486808 nova_compute[259627]: 2025-10-14 08:58:19.172 2 DEBUG oslo_concurrency.lockutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:19 np0005486808 nova_compute[259627]: 2025-10-14 08:58:19.173 2 DEBUG oslo_concurrency.lockutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:19 np0005486808 nova_compute[259627]: 2025-10-14 08:58:19.173 2 DEBUG oslo_concurrency.lockutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:19 np0005486808 nova_compute[259627]: 2025-10-14 08:58:19.315 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432299.315043, 51c76e0f-284d-4122-83b4-32c4518b9056 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 04:58:19 np0005486808 nova_compute[259627]: 2025-10-14 08:58:19.316 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] VM Resumed (Lifecycle Event)
Oct 14 04:58:19 np0005486808 nova_compute[259627]: 2025-10-14 08:58:19.317 2 DEBUG nova.compute.manager [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 04:58:19 np0005486808 nova_compute[259627]: 2025-10-14 08:58:19.318 2 DEBUG nova.virt.libvirt.driver [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 04:58:19 np0005486808 nova_compute[259627]: 2025-10-14 08:58:19.322 2 INFO nova.virt.libvirt.driver [-] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Instance spawned successfully.
Oct 14 04:58:19 np0005486808 nova_compute[259627]: 2025-10-14 08:58:19.322 2 DEBUG nova.virt.libvirt.driver [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 04:58:19 np0005486808 nova_compute[259627]: 2025-10-14 08:58:19.354 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 04:58:19 np0005486808 nova_compute[259627]: 2025-10-14 08:58:19.360 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 04:58:19 np0005486808 nova_compute[259627]: 2025-10-14 08:58:19.363 2 DEBUG nova.virt.libvirt.driver [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 04:58:19 np0005486808 nova_compute[259627]: 2025-10-14 08:58:19.363 2 DEBUG nova.virt.libvirt.driver [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 04:58:19 np0005486808 nova_compute[259627]: 2025-10-14 08:58:19.363 2 DEBUG nova.virt.libvirt.driver [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 04:58:19 np0005486808 nova_compute[259627]: 2025-10-14 08:58:19.364 2 DEBUG nova.virt.libvirt.driver [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 04:58:19 np0005486808 nova_compute[259627]: 2025-10-14 08:58:19.364 2 DEBUG nova.virt.libvirt.driver [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 04:58:19 np0005486808 nova_compute[259627]: 2025-10-14 08:58:19.365 2 DEBUG nova.virt.libvirt.driver [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 04:58:19 np0005486808 nova_compute[259627]: 2025-10-14 08:58:19.410 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 04:58:19 np0005486808 nova_compute[259627]: 2025-10-14 08:58:19.412 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432299.3158717, 51c76e0f-284d-4122-83b4-32c4518b9056 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 04:58:19 np0005486808 nova_compute[259627]: 2025-10-14 08:58:19.412 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] VM Started (Lifecycle Event)
Oct 14 04:58:19 np0005486808 nova_compute[259627]: 2025-10-14 08:58:19.439 2 INFO nova.compute.manager [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Took 4.28 seconds to spawn the instance on the hypervisor.
Oct 14 04:58:19 np0005486808 nova_compute[259627]: 2025-10-14 08:58:19.440 2 DEBUG nova.compute.manager [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 04:58:19 np0005486808 nova_compute[259627]: 2025-10-14 08:58:19.441 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 04:58:19 np0005486808 nova_compute[259627]: 2025-10-14 08:58:19.450 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 04:58:19 np0005486808 nova_compute[259627]: 2025-10-14 08:58:19.494 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 04:58:19 np0005486808 nova_compute[259627]: 2025-10-14 08:58:19.533 2 INFO nova.compute.manager [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Took 5.68 seconds to build instance.
Oct 14 04:58:19 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1200: 305 pgs: 305 active+clean; 41 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Oct 14 04:58:19 np0005486808 nova_compute[259627]: 2025-10-14 08:58:19.564 2 DEBUG oslo_concurrency.lockutils [None req-0926613c-9b98-49bc-851c-a8a1ae89936a 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lock "51c76e0f-284d-4122-83b4-32c4518b9056" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:58:19 np0005486808 nova_compute[259627]: 2025-10-14 08:58:19.816 2 DEBUG nova.network.neutron [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Successfully created port: fd335735-b88a-42f7-911e-af4b2b9396fb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 04:58:20 np0005486808 nova_compute[259627]: 2025-10-14 08:58:20.097 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 04:58:20 np0005486808 nova_compute[259627]: 2025-10-14 08:58:20.183 2 DEBUG oslo_concurrency.lockutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "2826d9ce-d739-49a1-abfa-80cee62173fb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:58:20 np0005486808 nova_compute[259627]: 2025-10-14 08:58:20.184 2 DEBUG oslo_concurrency.lockutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "2826d9ce-d739-49a1-abfa-80cee62173fb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:58:20 np0005486808 nova_compute[259627]: 2025-10-14 08:58:20.213 2 DEBUG nova.compute.manager [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 04:58:20 np0005486808 nova_compute[259627]: 2025-10-14 08:58:20.295 2 DEBUG oslo_concurrency.lockutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:58:20 np0005486808 nova_compute[259627]: 2025-10-14 08:58:20.296 2 DEBUG oslo_concurrency.lockutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:58:20 np0005486808 nova_compute[259627]: 2025-10-14 08:58:20.302 2 DEBUG nova.virt.hardware [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 04:58:20 np0005486808 nova_compute[259627]: 2025-10-14 08:58:20.303 2 INFO nova.compute.claims [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Claim successful on node compute-0.ctlplane.example.com
Oct 14 04:58:20 np0005486808 nova_compute[259627]: 2025-10-14 08:58:20.426 2 DEBUG oslo_concurrency.processutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:58:20 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:58:20 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/225914499' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:58:20 np0005486808 nova_compute[259627]: 2025-10-14 08:58:20.845 2 DEBUG oslo_concurrency.processutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:58:20 np0005486808 nova_compute[259627]: 2025-10-14 08:58:20.850 2 DEBUG nova.compute.provider_tree [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 04:58:20 np0005486808 nova_compute[259627]: 2025-10-14 08:58:20.870 2 DEBUG nova.scheduler.client.report [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 04:58:20 np0005486808 nova_compute[259627]: 2025-10-14 08:58:20.904 2 DEBUG oslo_concurrency.lockutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:58:20 np0005486808 nova_compute[259627]: 2025-10-14 08:58:20.905 2 DEBUG nova.compute.manager [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 04:58:20 np0005486808 nova_compute[259627]: 2025-10-14 08:58:20.974 2 DEBUG nova.compute.manager [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 04:58:20 np0005486808 nova_compute[259627]: 2025-10-14 08:58:20.975 2 DEBUG nova.network.neutron [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 04:58:20 np0005486808 nova_compute[259627]: 2025-10-14 08:58:20.982 2 DEBUG nova.network.neutron [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Successfully updated port: fd335735-b88a-42f7-911e-af4b2b9396fb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 04:58:20 np0005486808 nova_compute[259627]: 2025-10-14 08:58:20.987 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 04:58:21 np0005486808 nova_compute[259627]: 2025-10-14 08:58:21.000 2 INFO nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 04:58:21 np0005486808 nova_compute[259627]: 2025-10-14 08:58:21.002 2 DEBUG oslo_concurrency.lockutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Acquiring lock "refresh_cache-a071857d-db87-4931-95ad-f8c627f74160" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 04:58:21 np0005486808 nova_compute[259627]: 2025-10-14 08:58:21.003 2 DEBUG oslo_concurrency.lockutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Acquired lock "refresh_cache-a071857d-db87-4931-95ad-f8c627f74160" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 04:58:21 np0005486808 nova_compute[259627]: 2025-10-14 08:58:21.003 2 DEBUG nova.network.neutron [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 04:58:21 np0005486808 nova_compute[259627]: 2025-10-14 08:58:21.020 2 DEBUG nova.compute.manager [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 04:58:21 np0005486808 nova_compute[259627]: 2025-10-14 08:58:21.122 2 DEBUG nova.compute.manager [req-787c6cf2-1619-4d86-806b-f7f321b4ab08 req-6aeab69b-19a0-4fc4-b7b2-735a95947cee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Received event network-changed-fd335735-b88a-42f7-911e-af4b2b9396fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 04:58:21 np0005486808 nova_compute[259627]: 2025-10-14 08:58:21.123 2 DEBUG nova.compute.manager [req-787c6cf2-1619-4d86-806b-f7f321b4ab08 req-6aeab69b-19a0-4fc4-b7b2-735a95947cee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Refreshing instance network info cache due to event network-changed-fd335735-b88a-42f7-911e-af4b2b9396fb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 04:58:21 np0005486808 nova_compute[259627]: 2025-10-14 08:58:21.124 2 DEBUG oslo_concurrency.lockutils [req-787c6cf2-1619-4d86-806b-f7f321b4ab08 req-6aeab69b-19a0-4fc4-b7b2-735a95947cee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-a071857d-db87-4931-95ad-f8c627f74160" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 04:58:21 np0005486808 nova_compute[259627]: 2025-10-14 08:58:21.127 2 DEBUG nova.compute.manager [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 04:58:21 np0005486808 nova_compute[259627]: 2025-10-14 08:58:21.129 2 DEBUG nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 04:58:21 np0005486808 nova_compute[259627]: 2025-10-14 08:58:21.130 2 INFO nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Creating image(s)
Oct 14 04:58:21 np0005486808 nova_compute[259627]: 2025-10-14 08:58:21.160 2 DEBUG nova.storage.rbd_utils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image 2826d9ce-d739-49a1-abfa-80cee62173fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:58:21 np0005486808 nova_compute[259627]: 2025-10-14 08:58:21.193 2 DEBUG nova.storage.rbd_utils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image 2826d9ce-d739-49a1-abfa-80cee62173fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:58:21 np0005486808 nova_compute[259627]: 2025-10-14 08:58:21.221 2 DEBUG nova.storage.rbd_utils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image 2826d9ce-d739-49a1-abfa-80cee62173fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:58:21 np0005486808 nova_compute[259627]: 2025-10-14 08:58:21.226 2 DEBUG oslo_concurrency.processutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:58:21 np0005486808 nova_compute[259627]: 2025-10-14 08:58:21.269 2 DEBUG nova.network.neutron [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 04:58:21 np0005486808 nova_compute[259627]: 2025-10-14 08:58:21.300 2 DEBUG nova.policy [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3a217215c39e41fea2323ff7b3b4e6aa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0d87d2d744db48dc8b32bb4bf6847fce', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 04:58:21 np0005486808 nova_compute[259627]: 2025-10-14 08:58:21.310 2 DEBUG oslo_concurrency.processutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:58:21 np0005486808 nova_compute[259627]: 2025-10-14 08:58:21.312 2 DEBUG oslo_concurrency.lockutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:58:21 np0005486808 nova_compute[259627]: 2025-10-14 08:58:21.313 2 DEBUG oslo_concurrency.lockutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:58:21 np0005486808 nova_compute[259627]: 2025-10-14 08:58:21.313 2 DEBUG oslo_concurrency.lockutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:58:21 np0005486808 nova_compute[259627]: 2025-10-14 08:58:21.348 2 DEBUG nova.storage.rbd_utils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image 2826d9ce-d739-49a1-abfa-80cee62173fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:58:21 np0005486808 nova_compute[259627]: 2025-10-14 08:58:21.354 2 DEBUG oslo_concurrency.processutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 2826d9ce-d739-49a1-abfa-80cee62173fb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:58:21 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1201: 305 pgs: 305 active+clean; 134 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.6 MiB/s wr, 98 op/s
Oct 14 04:58:21 np0005486808 nova_compute[259627]: 2025-10-14 08:58:21.619 2 DEBUG oslo_concurrency.processutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 2826d9ce-d739-49a1-abfa-80cee62173fb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.265s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:58:21 np0005486808 nova_compute[259627]: 2025-10-14 08:58:21.673 2 INFO nova.compute.manager [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Rebuilding instance
Oct 14 04:58:21 np0005486808 nova_compute[259627]: 2025-10-14 08:58:21.680 2 DEBUG nova.storage.rbd_utils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] resizing rbd image 2826d9ce-d739-49a1-abfa-80cee62173fb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 04:58:21 np0005486808 nova_compute[259627]: 2025-10-14 08:58:21.761 2 DEBUG nova.objects.instance [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lazy-loading 'migration_context' on Instance uuid 2826d9ce-d739-49a1-abfa-80cee62173fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 04:58:21 np0005486808 nova_compute[259627]: 2025-10-14 08:58:21.779 2 DEBUG nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 04:58:21 np0005486808 nova_compute[259627]: 2025-10-14 08:58:21.780 2 DEBUG nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Ensure instance console log exists: /var/lib/nova/instances/2826d9ce-d739-49a1-abfa-80cee62173fb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 04:58:21 np0005486808 nova_compute[259627]: 2025-10-14 08:58:21.780 2 DEBUG oslo_concurrency.lockutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:58:21 np0005486808 nova_compute[259627]: 2025-10-14 08:58:21.781 2 DEBUG oslo_concurrency.lockutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:58:21 np0005486808 nova_compute[259627]: 2025-10-14 08:58:21.781 2 DEBUG oslo_concurrency.lockutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:58:21 np0005486808 nova_compute[259627]: 2025-10-14 08:58:21.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:58:21 np0005486808 nova_compute[259627]: 2025-10-14 08:58:21.925 2 DEBUG nova.network.neutron [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Successfully created port: 146ca52f-0b4f-46f0-9153-1120bf1c9e4e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 04:58:21 np0005486808 nova_compute[259627]: 2025-10-14 08:58:21.980 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 04:58:21 np0005486808 nova_compute[259627]: 2025-10-14 08:58:21.981 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 04:58:22 np0005486808 nova_compute[259627]: 2025-10-14 08:58:22.033 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 04:58:22 np0005486808 nova_compute[259627]: 2025-10-14 08:58:22.067 2 DEBUG nova.objects.instance [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lazy-loading 'trusted_certs' on Instance uuid 51c76e0f-284d-4122-83b4-32c4518b9056 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 04:58:22 np0005486808 nova_compute[259627]: 2025-10-14 08:58:22.089 2 DEBUG nova.compute.manager [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 04:58:22 np0005486808 nova_compute[259627]: 2025-10-14 08:58:22.141 2 DEBUG nova.objects.instance [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lazy-loading 'pci_requests' on Instance uuid 51c76e0f-284d-4122-83b4-32c4518b9056 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 04:58:22 np0005486808 nova_compute[259627]: 2025-10-14 08:58:22.158 2 DEBUG nova.objects.instance [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lazy-loading 'pci_devices' on Instance uuid 51c76e0f-284d-4122-83b4-32c4518b9056 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 04:58:22 np0005486808 nova_compute[259627]: 2025-10-14 08:58:22.178 2 DEBUG nova.objects.instance [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lazy-loading 'resources' on Instance uuid 51c76e0f-284d-4122-83b4-32c4518b9056 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 04:58:22 np0005486808 nova_compute[259627]: 2025-10-14 08:58:22.199 2 DEBUG nova.objects.instance [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lazy-loading 'migration_context' on Instance uuid 51c76e0f-284d-4122-83b4-32c4518b9056 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 04:58:22 np0005486808 nova_compute[259627]: 2025-10-14 08:58:22.220 2 DEBUG nova.objects.instance [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct 14 04:58:22 np0005486808 nova_compute[259627]: 2025-10-14 08:58:22.224 2 DEBUG nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct 14 04:58:22 np0005486808 nova_compute[259627]: 2025-10-14 08:58:22.280 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquiring lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:22 np0005486808 nova_compute[259627]: 2025-10-14 08:58:22.281 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:22 np0005486808 nova_compute[259627]: 2025-10-14 08:58:22.297 2 DEBUG nova.compute.manager [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 04:58:22 np0005486808 nova_compute[259627]: 2025-10-14 08:58:22.340 2 DEBUG nova.network.neutron [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Updating instance_info_cache with network_info: [{"id": "fd335735-b88a-42f7-911e-af4b2b9396fb", "address": "fa:16:3e:58:04:b2", "network": {"id": "8e06007a-4993-4328-9612-b43b931e2e3b", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1415539227-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99649891054745d8a5186a1ad099e5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd335735-b8", "ovs_interfaceid": "fd335735-b88a-42f7-911e-af4b2b9396fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:58:22 np0005486808 nova_compute[259627]: 2025-10-14 08:58:22.360 2 DEBUG oslo_concurrency.lockutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Releasing lock "refresh_cache-a071857d-db87-4931-95ad-f8c627f74160" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:58:22 np0005486808 nova_compute[259627]: 2025-10-14 08:58:22.360 2 DEBUG nova.compute.manager [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Instance network_info: |[{"id": "fd335735-b88a-42f7-911e-af4b2b9396fb", "address": "fa:16:3e:58:04:b2", "network": {"id": "8e06007a-4993-4328-9612-b43b931e2e3b", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1415539227-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99649891054745d8a5186a1ad099e5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd335735-b8", "ovs_interfaceid": "fd335735-b88a-42f7-911e-af4b2b9396fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 04:58:22 np0005486808 nova_compute[259627]: 2025-10-14 08:58:22.361 2 DEBUG oslo_concurrency.lockutils [req-787c6cf2-1619-4d86-806b-f7f321b4ab08 req-6aeab69b-19a0-4fc4-b7b2-735a95947cee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-a071857d-db87-4931-95ad-f8c627f74160" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:58:22 np0005486808 nova_compute[259627]: 2025-10-14 08:58:22.362 2 DEBUG nova.network.neutron [req-787c6cf2-1619-4d86-806b-f7f321b4ab08 req-6aeab69b-19a0-4fc4-b7b2-735a95947cee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Refreshing network info cache for port fd335735-b88a-42f7-911e-af4b2b9396fb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 04:58:22 np0005486808 nova_compute[259627]: 2025-10-14 08:58:22.365 2 DEBUG nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Start _get_guest_xml network_info=[{"id": "fd335735-b88a-42f7-911e-af4b2b9396fb", "address": "fa:16:3e:58:04:b2", "network": {"id": "8e06007a-4993-4328-9612-b43b931e2e3b", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1415539227-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99649891054745d8a5186a1ad099e5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd335735-b8", "ovs_interfaceid": "fd335735-b88a-42f7-911e-af4b2b9396fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 04:58:22 np0005486808 nova_compute[259627]: 2025-10-14 08:58:22.369 2 WARNING nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:58:22 np0005486808 nova_compute[259627]: 2025-10-14 08:58:22.371 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:22 np0005486808 nova_compute[259627]: 2025-10-14 08:58:22.372 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:22 np0005486808 nova_compute[259627]: 2025-10-14 08:58:22.378 2 DEBUG nova.virt.libvirt.host [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 04:58:22 np0005486808 nova_compute[259627]: 2025-10-14 08:58:22.379 2 DEBUG nova.virt.libvirt.host [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 04:58:22 np0005486808 nova_compute[259627]: 2025-10-14 08:58:22.381 2 DEBUG nova.virt.hardware [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 04:58:22 np0005486808 nova_compute[259627]: 2025-10-14 08:58:22.381 2 INFO nova.compute.claims [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 04:58:22 np0005486808 nova_compute[259627]: 2025-10-14 08:58:22.409 2 DEBUG nova.virt.libvirt.host [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 04:58:22 np0005486808 nova_compute[259627]: 2025-10-14 08:58:22.410 2 DEBUG nova.virt.libvirt.host [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 04:58:22 np0005486808 nova_compute[259627]: 2025-10-14 08:58:22.410 2 DEBUG nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 04:58:22 np0005486808 nova_compute[259627]: 2025-10-14 08:58:22.410 2 DEBUG nova.virt.hardware [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 04:58:22 np0005486808 nova_compute[259627]: 2025-10-14 08:58:22.411 2 DEBUG nova.virt.hardware [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 04:58:22 np0005486808 nova_compute[259627]: 2025-10-14 08:58:22.411 2 DEBUG nova.virt.hardware [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 04:58:22 np0005486808 nova_compute[259627]: 2025-10-14 08:58:22.411 2 DEBUG nova.virt.hardware [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 04:58:22 np0005486808 nova_compute[259627]: 2025-10-14 08:58:22.412 2 DEBUG nova.virt.hardware [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 04:58:22 np0005486808 nova_compute[259627]: 2025-10-14 08:58:22.412 2 DEBUG nova.virt.hardware [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 04:58:22 np0005486808 nova_compute[259627]: 2025-10-14 08:58:22.412 2 DEBUG nova.virt.hardware [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 04:58:22 np0005486808 nova_compute[259627]: 2025-10-14 08:58:22.413 2 DEBUG nova.virt.hardware [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 04:58:22 np0005486808 nova_compute[259627]: 2025-10-14 08:58:22.413 2 DEBUG nova.virt.hardware [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 04:58:22 np0005486808 nova_compute[259627]: 2025-10-14 08:58:22.413 2 DEBUG nova.virt.hardware [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 04:58:22 np0005486808 nova_compute[259627]: 2025-10-14 08:58:22.414 2 DEBUG nova.virt.hardware [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 04:58:22 np0005486808 nova_compute[259627]: 2025-10-14 08:58:22.416 2 DEBUG oslo_concurrency.processutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 04:58:22 np0005486808 nova_compute[259627]: 2025-10-14 08:58:22.812 2 DEBUG oslo_concurrency.processutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:58:22 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1611157671' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:58:22 np0005486808 nova_compute[259627]: 2025-10-14 08:58:22.864 2 DEBUG oslo_concurrency.processutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:22 np0005486808 nova_compute[259627]: 2025-10-14 08:58:22.887 2 DEBUG nova.storage.rbd_utils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] rbd image a071857d-db87-4931-95ad-f8c627f74160_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:22 np0005486808 nova_compute[259627]: 2025-10-14 08:58:22.890 2 DEBUG oslo_concurrency.processutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.021 2 DEBUG nova.network.neutron [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Successfully updated port: 146ca52f-0b4f-46f0-9153-1120bf1c9e4e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.044 2 DEBUG oslo_concurrency.lockutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "refresh_cache-2826d9ce-d739-49a1-abfa-80cee62173fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.045 2 DEBUG oslo_concurrency.lockutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquired lock "refresh_cache-2826d9ce-d739-49a1-abfa-80cee62173fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.045 2 DEBUG nova.network.neutron [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.174 2 DEBUG nova.compute.manager [req-2f3189ed-8229-4a5f-96ad-c6d1f4dc5156 req-de88b6da-5618-44f3-8ed9-23bd9b7c57a8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Received event network-changed-146ca52f-0b4f-46f0-9153-1120bf1c9e4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.175 2 DEBUG nova.compute.manager [req-2f3189ed-8229-4a5f-96ad-c6d1f4dc5156 req-de88b6da-5618-44f3-8ed9-23bd9b7c57a8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Refreshing instance network info cache due to event network-changed-146ca52f-0b4f-46f0-9153-1120bf1c9e4e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.175 2 DEBUG oslo_concurrency.lockutils [req-2f3189ed-8229-4a5f-96ad-c6d1f4dc5156 req-de88b6da-5618-44f3-8ed9-23bd9b7c57a8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-2826d9ce-d739-49a1-abfa-80cee62173fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:58:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:58:23 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/488972167' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.253 2 DEBUG oslo_concurrency.processutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.255 2 DEBUG nova.network.neutron [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.262 2 DEBUG nova.compute.provider_tree [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:58:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:58:23 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/589485826' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.286 2 DEBUG nova.scheduler.client.report [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.300 2 DEBUG oslo_concurrency.processutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.301 2 DEBUG nova.virt.libvirt.vif [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:58:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-635702397',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-635702397',id=23,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='99649891054745d8a5186a1ad099e5a7',ramdisk_id='',reservation_id='r-swuxrn8t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-1504567615',owner_user_name='tempest-AttachInterfacesV270Test-1504567615-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:58:18Z,user_data=None,user_id='664abc01a11d458d9644488bf31e47f4',uuid=a071857d-db87-4931-95ad-f8c627f74160,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fd335735-b88a-42f7-911e-af4b2b9396fb", "address": "fa:16:3e:58:04:b2", "network": {"id": "8e06007a-4993-4328-9612-b43b931e2e3b", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1415539227-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99649891054745d8a5186a1ad099e5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd335735-b8", "ovs_interfaceid": "fd335735-b88a-42f7-911e-af4b2b9396fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.302 2 DEBUG nova.network.os_vif_util [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Converting VIF {"id": "fd335735-b88a-42f7-911e-af4b2b9396fb", "address": "fa:16:3e:58:04:b2", "network": {"id": "8e06007a-4993-4328-9612-b43b931e2e3b", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1415539227-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99649891054745d8a5186a1ad099e5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd335735-b8", "ovs_interfaceid": "fd335735-b88a-42f7-911e-af4b2b9396fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.303 2 DEBUG nova.network.os_vif_util [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:04:b2,bridge_name='br-int',has_traffic_filtering=True,id=fd335735-b88a-42f7-911e-af4b2b9396fb,network=Network(8e06007a-4993-4328-9612-b43b931e2e3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd335735-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.304 2 DEBUG nova.objects.instance [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lazy-loading 'pci_devices' on Instance uuid a071857d-db87-4931-95ad-f8c627f74160 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.317 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.945s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.317 2 DEBUG nova.compute.manager [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.325 2 DEBUG nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] End _get_guest_xml xml=<domain type="kvm">
Oct 14 04:58:23 np0005486808 nova_compute[259627]:  <uuid>a071857d-db87-4931-95ad-f8c627f74160</uuid>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:  <name>instance-00000017</name>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 04:58:23 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:      <nova:name>tempest-AttachInterfacesV270Test-server-635702397</nova:name>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 08:58:22</nova:creationTime>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 04:58:23 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:        <nova:user uuid="664abc01a11d458d9644488bf31e47f4">tempest-AttachInterfacesV270Test-1504567615-project-member</nova:user>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:        <nova:project uuid="99649891054745d8a5186a1ad099e5a7">tempest-AttachInterfacesV270Test-1504567615</nova:project>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:        <nova:port uuid="fd335735-b88a-42f7-911e-af4b2b9396fb">
Oct 14 04:58:23 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <system>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:      <entry name="serial">a071857d-db87-4931-95ad-f8c627f74160</entry>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:      <entry name="uuid">a071857d-db87-4931-95ad-f8c627f74160</entry>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    </system>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:  <os>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:  </os>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:  <features>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:  </features>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:  </clock>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:  <devices>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 04:58:23 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/a071857d-db87-4931-95ad-f8c627f74160_disk">
Oct 14 04:58:23 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:58:23 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 04:58:23 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/a071857d-db87-4931-95ad-f8c627f74160_disk.config">
Oct 14 04:58:23 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:58:23 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 04:58:23 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:58:04:b2"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:      <target dev="tapfd335735-b8"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    </interface>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 04:58:23 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/a071857d-db87-4931-95ad-f8c627f74160/console.log" append="off"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    </serial>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <video>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    </video>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 04:58:23 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    </rng>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 04:58:23 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 04:58:23 np0005486808 nova_compute[259627]:  </devices>
Oct 14 04:58:23 np0005486808 nova_compute[259627]: </domain>
Oct 14 04:58:23 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.326 2 DEBUG nova.compute.manager [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Preparing to wait for external event network-vif-plugged-fd335735-b88a-42f7-911e-af4b2b9396fb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.327 2 DEBUG oslo_concurrency.lockutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Acquiring lock "a071857d-db87-4931-95ad-f8c627f74160-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.327 2 DEBUG oslo_concurrency.lockutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.327 2 DEBUG oslo_concurrency.lockutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.328 2 DEBUG nova.virt.libvirt.vif [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:58:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-635702397',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-635702397',id=23,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='99649891054745d8a5186a1ad099e5a7',ramdisk_id='',reservation_id='r-swuxrn8t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-1504567615',owner_user_name='tempest-AttachInterfacesV270Test-1504567615-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:58:18Z,user_data=None,user_id='664abc01a11d458d9644488bf31e47f4',uuid=a071857d-db87-4931-95ad-f8c627f74160,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fd335735-b88a-42f7-911e-af4b2b9396fb", "address": "fa:16:3e:58:04:b2", "network": {"id": "8e06007a-4993-4328-9612-b43b931e2e3b", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1415539227-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99649891054745d8a5186a1ad099e5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd335735-b8", "ovs_interfaceid": "fd335735-b88a-42f7-911e-af4b2b9396fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.328 2 DEBUG nova.network.os_vif_util [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Converting VIF {"id": "fd335735-b88a-42f7-911e-af4b2b9396fb", "address": "fa:16:3e:58:04:b2", "network": {"id": "8e06007a-4993-4328-9612-b43b931e2e3b", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1415539227-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99649891054745d8a5186a1ad099e5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd335735-b8", "ovs_interfaceid": "fd335735-b88a-42f7-911e-af4b2b9396fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.329 2 DEBUG nova.network.os_vif_util [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:04:b2,bridge_name='br-int',has_traffic_filtering=True,id=fd335735-b88a-42f7-911e-af4b2b9396fb,network=Network(8e06007a-4993-4328-9612-b43b931e2e3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd335735-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.329 2 DEBUG os_vif [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:04:b2,bridge_name='br-int',has_traffic_filtering=True,id=fd335735-b88a-42f7-911e-af4b2b9396fb,network=Network(8e06007a-4993-4328-9612-b43b931e2e3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd335735-b8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.330 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.331 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.334 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfd335735-b8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.335 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfd335735-b8, col_values=(('external_ids', {'iface-id': 'fd335735-b88a-42f7-911e-af4b2b9396fb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:58:04:b2', 'vm-uuid': 'a071857d-db87-4931-95ad-f8c627f74160'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.356 2 DEBUG nova.compute.manager [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.356 2 DEBUG nova.network.neutron [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:23 np0005486808 NetworkManager[44885]: <info>  [1760432303.3677] manager: (tapfd335735-b8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.372 2 INFO nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.375 2 INFO os_vif [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:04:b2,bridge_name='br-int',has_traffic_filtering=True,id=fd335735-b88a-42f7-911e-af4b2b9396fb,network=Network(8e06007a-4993-4328-9612-b43b931e2e3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd335735-b8')#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.389 2 DEBUG nova.compute.manager [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.464 2 DEBUG nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.465 2 DEBUG nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.465 2 DEBUG nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] No VIF found with MAC fa:16:3e:58:04:b2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.466 2 INFO nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Using config drive#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.484 2 DEBUG nova.storage.rbd_utils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] rbd image a071857d-db87-4931-95ad-f8c627f74160_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:23 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1202: 305 pgs: 305 active+clean; 134 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.6 MiB/s wr, 98 op/s
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.545 2 DEBUG nova.compute.manager [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.548 2 DEBUG nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.549 2 INFO nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Creating image(s)#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.575 2 DEBUG nova.storage.rbd_utils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] rbd image aefbf308-7f99-4a76-8d5e-54613f6bdf83_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.601 2 DEBUG nova.storage.rbd_utils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] rbd image aefbf308-7f99-4a76-8d5e-54613f6bdf83_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.626 2 DEBUG nova.storage.rbd_utils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] rbd image aefbf308-7f99-4a76-8d5e-54613f6bdf83_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.630 2 DEBUG oslo_concurrency.processutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.673 2 DEBUG nova.policy [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5aacb60ad29c418c9161e71bb72da036', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3566f35c659a45bd9b9bbddf6552ed43', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.709 2 DEBUG oslo_concurrency.processutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.710 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.711 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.711 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.731 2 DEBUG nova.storage.rbd_utils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] rbd image aefbf308-7f99-4a76-8d5e-54613f6bdf83_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.734 2 DEBUG oslo_concurrency.processutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 aefbf308-7f99-4a76-8d5e-54613f6bdf83_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:23 np0005486808 nova_compute[259627]: 2025-10-14 08:58:23.965 2 DEBUG oslo_concurrency.processutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 aefbf308-7f99-4a76-8d5e-54613f6bdf83_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.230s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.035 2 DEBUG nova.storage.rbd_utils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] resizing rbd image aefbf308-7f99-4a76-8d5e-54613f6bdf83_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.120 2 DEBUG nova.objects.instance [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lazy-loading 'migration_context' on Instance uuid aefbf308-7f99-4a76-8d5e-54613f6bdf83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.132 2 DEBUG nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.132 2 DEBUG nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Ensure instance console log exists: /var/lib/nova/instances/aefbf308-7f99-4a76-8d5e-54613f6bdf83/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.132 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.133 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.133 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.178 2 INFO nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Creating config drive at /var/lib/nova/instances/a071857d-db87-4931-95ad-f8c627f74160/disk.config#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.182 2 DEBUG oslo_concurrency.processutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a071857d-db87-4931-95ad-f8c627f74160/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoenbk_71 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.240 2 DEBUG nova.network.neutron [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Successfully created port: 57b4d441-0c29-4419-b20b-3b5c4223b7a8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.310 2 DEBUG nova.network.neutron [req-787c6cf2-1619-4d86-806b-f7f321b4ab08 req-6aeab69b-19a0-4fc4-b7b2-735a95947cee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Updated VIF entry in instance network info cache for port fd335735-b88a-42f7-911e-af4b2b9396fb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.311 2 DEBUG nova.network.neutron [req-787c6cf2-1619-4d86-806b-f7f321b4ab08 req-6aeab69b-19a0-4fc4-b7b2-735a95947cee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Updating instance_info_cache with network_info: [{"id": "fd335735-b88a-42f7-911e-af4b2b9396fb", "address": "fa:16:3e:58:04:b2", "network": {"id": "8e06007a-4993-4328-9612-b43b931e2e3b", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1415539227-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99649891054745d8a5186a1ad099e5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd335735-b8", "ovs_interfaceid": "fd335735-b88a-42f7-911e-af4b2b9396fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.320 2 DEBUG nova.network.neutron [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Updating instance_info_cache with network_info: [{"id": "146ca52f-0b4f-46f0-9153-1120bf1c9e4e", "address": "fa:16:3e:c7:a1:5d", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146ca52f-0b", "ovs_interfaceid": "146ca52f-0b4f-46f0-9153-1120bf1c9e4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.325 2 DEBUG oslo_concurrency.processutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a071857d-db87-4931-95ad-f8c627f74160/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoenbk_71" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.326 2 DEBUG oslo_concurrency.lockutils [req-787c6cf2-1619-4d86-806b-f7f321b4ab08 req-6aeab69b-19a0-4fc4-b7b2-735a95947cee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-a071857d-db87-4931-95ad-f8c627f74160" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.353 2 DEBUG nova.storage.rbd_utils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] rbd image a071857d-db87-4931-95ad-f8c627f74160_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.358 2 DEBUG oslo_concurrency.processutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a071857d-db87-4931-95ad-f8c627f74160/disk.config a071857d-db87-4931-95ad-f8c627f74160_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.394 2 DEBUG oslo_concurrency.lockutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Releasing lock "refresh_cache-2826d9ce-d739-49a1-abfa-80cee62173fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.395 2 DEBUG nova.compute.manager [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Instance network_info: |[{"id": "146ca52f-0b4f-46f0-9153-1120bf1c9e4e", "address": "fa:16:3e:c7:a1:5d", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146ca52f-0b", "ovs_interfaceid": "146ca52f-0b4f-46f0-9153-1120bf1c9e4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.396 2 DEBUG oslo_concurrency.lockutils [req-2f3189ed-8229-4a5f-96ad-c6d1f4dc5156 req-de88b6da-5618-44f3-8ed9-23bd9b7c57a8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-2826d9ce-d739-49a1-abfa-80cee62173fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.396 2 DEBUG nova.network.neutron [req-2f3189ed-8229-4a5f-96ad-c6d1f4dc5156 req-de88b6da-5618-44f3-8ed9-23bd9b7c57a8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Refreshing network info cache for port 146ca52f-0b4f-46f0-9153-1120bf1c9e4e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.398 2 DEBUG nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Start _get_guest_xml network_info=[{"id": "146ca52f-0b4f-46f0-9153-1120bf1c9e4e", "address": "fa:16:3e:c7:a1:5d", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146ca52f-0b", "ovs_interfaceid": "146ca52f-0b4f-46f0-9153-1120bf1c9e4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.403 2 WARNING nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.408 2 DEBUG nova.virt.libvirt.host [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.408 2 DEBUG nova.virt.libvirt.host [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.423 2 DEBUG nova.virt.libvirt.host [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.423 2 DEBUG nova.virt.libvirt.host [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.424 2 DEBUG nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.424 2 DEBUG nova.virt.hardware [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.424 2 DEBUG nova.virt.hardware [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.425 2 DEBUG nova.virt.hardware [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.425 2 DEBUG nova.virt.hardware [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.425 2 DEBUG nova.virt.hardware [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.425 2 DEBUG nova.virt.hardware [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.425 2 DEBUG nova.virt.hardware [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.426 2 DEBUG nova.virt.hardware [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.426 2 DEBUG nova.virt.hardware [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.426 2 DEBUG nova.virt.hardware [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.426 2 DEBUG nova.virt.hardware [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.429 2 DEBUG oslo_concurrency.processutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.567 2 DEBUG oslo_concurrency.processutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a071857d-db87-4931-95ad-f8c627f74160/disk.config a071857d-db87-4931-95ad-f8c627f74160_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.209s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.568 2 INFO nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Deleting local config drive /var/lib/nova/instances/a071857d-db87-4931-95ad-f8c627f74160/disk.config because it was imported into RBD.#033[00m
Oct 14 04:58:24 np0005486808 kernel: tapfd335735-b8: entered promiscuous mode
Oct 14 04:58:24 np0005486808 NetworkManager[44885]: <info>  [1760432304.6206] manager: (tapfd335735-b8): new Tun device (/org/freedesktop/NetworkManager/Devices/75)
Oct 14 04:58:24 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:24Z|00140|binding|INFO|Claiming lport fd335735-b88a-42f7-911e-af4b2b9396fb for this chassis.
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:24 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:24Z|00141|binding|INFO|fd335735-b88a-42f7-911e-af4b2b9396fb: Claiming fa:16:3e:58:04:b2 10.100.0.3
Oct 14 04:58:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:24.640 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:04:b2 10.100.0.3'], port_security=['fa:16:3e:58:04:b2 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'a071857d-db87-4931-95ad-f8c627f74160', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e06007a-4993-4328-9612-b43b931e2e3b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '99649891054745d8a5186a1ad099e5a7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cdd7ee35-5316-4ad1-b1f1-84c66df2ce6e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bb643099-5fd1-493f-8a92-ffa6e64eb2b1, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=fd335735-b88a-42f7-911e-af4b2b9396fb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:58:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:24.641 162547 INFO neutron.agent.ovn.metadata.agent [-] Port fd335735-b88a-42f7-911e-af4b2b9396fb in datapath 8e06007a-4993-4328-9612-b43b931e2e3b bound to our chassis#033[00m
Oct 14 04:58:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:24.643 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8e06007a-4993-4328-9612-b43b931e2e3b#033[00m
Oct 14 04:58:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:24.659 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b58a496a-90ee-43f9-8a2f-840401fe2cdc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:24.659 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8e06007a-41 in ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 04:58:24 np0005486808 systemd-machined[214636]: New machine qemu-24-instance-00000017.
Oct 14 04:58:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:24.671 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8e06007a-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 04:58:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:24.672 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[49fbefb9-c950-4d89-8764-e42c6463d97e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:24.673 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8b2428ea-d4cc-4823-b53f-fd92add5f4da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:24.684 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[6acb39eb-86cf-4ee7-aac6-0a43da11fb0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:24 np0005486808 systemd[1]: Started Virtual Machine qemu-24-instance-00000017.
Oct 14 04:58:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:24.708 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ac682ee0-8bf2-4801-a0a2-7d3efbf71ee3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:24 np0005486808 systemd-udevd[290797]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:24 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:24Z|00142|binding|INFO|Setting lport fd335735-b88a-42f7-911e-af4b2b9396fb ovn-installed in OVS
Oct 14 04:58:24 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:24Z|00143|binding|INFO|Setting lport fd335735-b88a-42f7-911e-af4b2b9396fb up in Southbound
Oct 14 04:58:24 np0005486808 NetworkManager[44885]: <info>  [1760432304.7271] device (tapfd335735-b8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 04:58:24 np0005486808 NetworkManager[44885]: <info>  [1760432304.7284] device (tapfd335735-b8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:24.758 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[cbf6c403-87c2-4f8b-a45d-b95f281bd821]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:24 np0005486808 NetworkManager[44885]: <info>  [1760432304.7692] manager: (tap8e06007a-40): new Veth device (/org/freedesktop/NetworkManager/Devices/76)
Oct 14 04:58:24 np0005486808 systemd-udevd[290800]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 04:58:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:24.768 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[36af6436-80d5-424d-8b02-cbac564cd0b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:24.821 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[eb4dfb51-0427-4209-bf08-d6fb95199ab8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:24.825 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[bc7a6730-642d-41ce-8765-7c1f85f5d304]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:24 np0005486808 NetworkManager[44885]: <info>  [1760432304.8548] device (tap8e06007a-40): carrier: link connected
Oct 14 04:58:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:24.863 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[61d9d308-e576-4ede-ab8d-ed716a80387f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:24.880 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[657db799-eeb2-48c3-aaa5-e23c3b7c57cb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e06007a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:64:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 608361, 'reachable_time': 37451, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290827, 'error': None, 'target': 'ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:24.898 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[98e50c6b-28db-4eb0-8ea6-c5292ad9f8fe]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe30:643c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 608361, 'tstamp': 608361}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290828, 'error': None, 'target': 'ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:24.914 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8db4dee2-a35a-4b90-b32a-cc140605b92c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e06007a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:64:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 608361, 'reachable_time': 37451, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290829, 'error': None, 'target': 'ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:24 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:58:24 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2501182591' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:58:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:24.946 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e72065e2-438b-444b-a1c0-ee773b3b1535]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.959 2 DEBUG oslo_concurrency.processutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.977 2 DEBUG nova.storage.rbd_utils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image 2826d9ce-d739-49a1-abfa-80cee62173fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:24 np0005486808 nova_compute[259627]: 2025-10-14 08:58:24.994 2 DEBUG oslo_concurrency.processutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:25.007 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9058b3c1-c734-4b43-be18-4832d3f33779]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:25.008 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e06007a-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:25.008 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:58:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:25.009 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8e06007a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:25 np0005486808 NetworkManager[44885]: <info>  [1760432305.0110] manager: (tap8e06007a-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/77)
Oct 14 04:58:25 np0005486808 kernel: tap8e06007a-40: entered promiscuous mode
Oct 14 04:58:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:25.013 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8e06007a-40, col_values=(('external_ids', {'iface-id': 'd2ffe2ca-9cdc-4c2a-a690-8b35b09cf563'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:25 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:25Z|00144|binding|INFO|Releasing lport d2ffe2ca-9cdc-4c2a-a690-8b35b09cf563 from this chassis (sb_readonly=0)
Oct 14 04:58:25 np0005486808 nova_compute[259627]: 2025-10-14 08:58:25.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:25 np0005486808 nova_compute[259627]: 2025-10-14 08:58:25.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:25.037 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8e06007a-4993-4328-9612-b43b931e2e3b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8e06007a-4993-4328-9612-b43b931e2e3b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 04:58:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:25.038 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c4a81026-01dc-41ff-8ff6-9f1b58f90b29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:25.038 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 04:58:25 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 04:58:25 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 04:58:25 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-8e06007a-4993-4328-9612-b43b931e2e3b
Oct 14 04:58:25 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 04:58:25 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 04:58:25 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 04:58:25 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/8e06007a-4993-4328-9612-b43b931e2e3b.pid.haproxy
Oct 14 04:58:25 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 04:58:25 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:58:25 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 04:58:25 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 04:58:25 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 04:58:25 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 04:58:25 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 04:58:25 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 04:58:25 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 04:58:25 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 04:58:25 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 04:58:25 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 04:58:25 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 04:58:25 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 04:58:25 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 04:58:25 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:58:25 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:58:25 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 04:58:25 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 04:58:25 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 04:58:25 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 8e06007a-4993-4328-9612-b43b931e2e3b
Oct 14 04:58:25 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 04:58:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:25.039 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b', 'env', 'PROCESS_TAG=haproxy-8e06007a-4993-4328-9612-b43b931e2e3b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8e06007a-4993-4328-9612-b43b931e2e3b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 04:58:25 np0005486808 nova_compute[259627]: 2025-10-14 08:58:25.271 2 DEBUG nova.network.neutron [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Successfully created port: 685df8a6-7b64-441e-9a56-4ede8db5faa9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 04:58:25 np0005486808 podman[290901]: 2025-10-14 08:58:25.430455465 +0000 UTC m=+0.061017020 container create 05b21863dda394c5167a0d5e1c6ea769a47954b64e8048136f02537898d58d49 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 04:58:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:58:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3031768786' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:58:25 np0005486808 systemd[1]: Started libpod-conmon-05b21863dda394c5167a0d5e1c6ea769a47954b64e8048136f02537898d58d49.scope.
Oct 14 04:58:25 np0005486808 nova_compute[259627]: 2025-10-14 08:58:25.473 2 DEBUG oslo_concurrency.processutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:25 np0005486808 nova_compute[259627]: 2025-10-14 08:58:25.475 2 DEBUG nova.virt.libvirt.vif [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:58:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1877571750',display_name='tempest-ImagesTestJSON-server-1877571750',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1877571750',id=24,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d87d2d744db48dc8b32bb4bf6847fce',ramdisk_id='',reservation_id='r-3vi6yohy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-168259448',owner_user_name='tempest-ImagesTestJSON-168259448-project-member'},tags=TagLis
t,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:58:21Z,user_data=None,user_id='3a217215c39e41fea2323ff7b3b4e6aa',uuid=2826d9ce-d739-49a1-abfa-80cee62173fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "146ca52f-0b4f-46f0-9153-1120bf1c9e4e", "address": "fa:16:3e:c7:a1:5d", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146ca52f-0b", "ovs_interfaceid": "146ca52f-0b4f-46f0-9153-1120bf1c9e4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 04:58:25 np0005486808 nova_compute[259627]: 2025-10-14 08:58:25.476 2 DEBUG nova.network.os_vif_util [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converting VIF {"id": "146ca52f-0b4f-46f0-9153-1120bf1c9e4e", "address": "fa:16:3e:c7:a1:5d", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146ca52f-0b", "ovs_interfaceid": "146ca52f-0b4f-46f0-9153-1120bf1c9e4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:58:25 np0005486808 nova_compute[259627]: 2025-10-14 08:58:25.477 2 DEBUG nova.network.os_vif_util [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:a1:5d,bridge_name='br-int',has_traffic_filtering=True,id=146ca52f-0b4f-46f0-9153-1120bf1c9e4e,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap146ca52f-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:58:25 np0005486808 nova_compute[259627]: 2025-10-14 08:58:25.478 2 DEBUG nova.objects.instance [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lazy-loading 'pci_devices' on Instance uuid 2826d9ce-d739-49a1-abfa-80cee62173fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:58:25 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:58:25 np0005486808 podman[290901]: 2025-10-14 08:58:25.399693529 +0000 UTC m=+0.030255104 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 04:58:25 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dbf6ecce0931ae581de6497fdff55fce469de3179658359b1e470c5e9f85c3e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 04:58:25 np0005486808 nova_compute[259627]: 2025-10-14 08:58:25.496 2 DEBUG nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] End _get_guest_xml xml=<domain type="kvm">
Oct 14 04:58:25 np0005486808 nova_compute[259627]:  <uuid>2826d9ce-d739-49a1-abfa-80cee62173fb</uuid>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:  <name>instance-00000018</name>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 04:58:25 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:      <nova:name>tempest-ImagesTestJSON-server-1877571750</nova:name>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 08:58:24</nova:creationTime>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 04:58:25 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:        <nova:user uuid="3a217215c39e41fea2323ff7b3b4e6aa">tempest-ImagesTestJSON-168259448-project-member</nova:user>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:        <nova:project uuid="0d87d2d744db48dc8b32bb4bf6847fce">tempest-ImagesTestJSON-168259448</nova:project>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:        <nova:port uuid="146ca52f-0b4f-46f0-9153-1120bf1c9e4e">
Oct 14 04:58:25 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <system>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:      <entry name="serial">2826d9ce-d739-49a1-abfa-80cee62173fb</entry>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:      <entry name="uuid">2826d9ce-d739-49a1-abfa-80cee62173fb</entry>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    </system>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:  <os>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:  </os>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:  <features>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:  </features>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:  </clock>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:  <devices>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 04:58:25 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/2826d9ce-d739-49a1-abfa-80cee62173fb_disk">
Oct 14 04:58:25 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:58:25 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 04:58:25 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/2826d9ce-d739-49a1-abfa-80cee62173fb_disk.config">
Oct 14 04:58:25 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:58:25 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 04:58:25 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:c7:a1:5d"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:      <target dev="tap146ca52f-0b"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    </interface>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 04:58:25 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/2826d9ce-d739-49a1-abfa-80cee62173fb/console.log" append="off"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    </serial>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <video>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    </video>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 04:58:25 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    </rng>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 04:58:25 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 04:58:25 np0005486808 nova_compute[259627]:  </devices>
Oct 14 04:58:25 np0005486808 nova_compute[259627]: </domain>
Oct 14 04:58:25 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 04:58:25 np0005486808 nova_compute[259627]: 2025-10-14 08:58:25.497 2 DEBUG nova.compute.manager [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Preparing to wait for external event network-vif-plugged-146ca52f-0b4f-46f0-9153-1120bf1c9e4e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 04:58:25 np0005486808 nova_compute[259627]: 2025-10-14 08:58:25.497 2 DEBUG oslo_concurrency.lockutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "2826d9ce-d739-49a1-abfa-80cee62173fb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:25 np0005486808 nova_compute[259627]: 2025-10-14 08:58:25.498 2 DEBUG oslo_concurrency.lockutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "2826d9ce-d739-49a1-abfa-80cee62173fb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:25 np0005486808 nova_compute[259627]: 2025-10-14 08:58:25.498 2 DEBUG oslo_concurrency.lockutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "2826d9ce-d739-49a1-abfa-80cee62173fb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:25 np0005486808 nova_compute[259627]: 2025-10-14 08:58:25.499 2 DEBUG nova.virt.libvirt.vif [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:58:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1877571750',display_name='tempest-ImagesTestJSON-server-1877571750',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1877571750',id=24,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d87d2d744db48dc8b32bb4bf6847fce',ramdisk_id='',reservation_id='r-3vi6yohy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-168259448',owner_user_name='tempest-ImagesTestJSON-168259448-project-member'},t
ags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:58:21Z,user_data=None,user_id='3a217215c39e41fea2323ff7b3b4e6aa',uuid=2826d9ce-d739-49a1-abfa-80cee62173fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "146ca52f-0b4f-46f0-9153-1120bf1c9e4e", "address": "fa:16:3e:c7:a1:5d", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146ca52f-0b", "ovs_interfaceid": "146ca52f-0b4f-46f0-9153-1120bf1c9e4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 04:58:25 np0005486808 nova_compute[259627]: 2025-10-14 08:58:25.499 2 DEBUG nova.network.os_vif_util [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converting VIF {"id": "146ca52f-0b4f-46f0-9153-1120bf1c9e4e", "address": "fa:16:3e:c7:a1:5d", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146ca52f-0b", "ovs_interfaceid": "146ca52f-0b4f-46f0-9153-1120bf1c9e4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:58:25 np0005486808 nova_compute[259627]: 2025-10-14 08:58:25.500 2 DEBUG nova.network.os_vif_util [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:a1:5d,bridge_name='br-int',has_traffic_filtering=True,id=146ca52f-0b4f-46f0-9153-1120bf1c9e4e,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap146ca52f-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:58:25 np0005486808 nova_compute[259627]: 2025-10-14 08:58:25.500 2 DEBUG os_vif [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:a1:5d,bridge_name='br-int',has_traffic_filtering=True,id=146ca52f-0b4f-46f0-9153-1120bf1c9e4e,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap146ca52f-0b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 04:58:25 np0005486808 nova_compute[259627]: 2025-10-14 08:58:25.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:25 np0005486808 nova_compute[259627]: 2025-10-14 08:58:25.501 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:25 np0005486808 nova_compute[259627]: 2025-10-14 08:58:25.502 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:58:25 np0005486808 nova_compute[259627]: 2025-10-14 08:58:25.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:25 np0005486808 nova_compute[259627]: 2025-10-14 08:58:25.505 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap146ca52f-0b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:25 np0005486808 nova_compute[259627]: 2025-10-14 08:58:25.505 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap146ca52f-0b, col_values=(('external_ids', {'iface-id': '146ca52f-0b4f-46f0-9153-1120bf1c9e4e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c7:a1:5d', 'vm-uuid': '2826d9ce-d739-49a1-abfa-80cee62173fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:25 np0005486808 nova_compute[259627]: 2025-10-14 08:58:25.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:25 np0005486808 NetworkManager[44885]: <info>  [1760432305.5074] manager: (tap146ca52f-0b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Oct 14 04:58:25 np0005486808 podman[290901]: 2025-10-14 08:58:25.508604274 +0000 UTC m=+0.139165859 container init 05b21863dda394c5167a0d5e1c6ea769a47954b64e8048136f02537898d58d49 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 14 04:58:25 np0005486808 nova_compute[259627]: 2025-10-14 08:58:25.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 04:58:25 np0005486808 nova_compute[259627]: 2025-10-14 08:58:25.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:25 np0005486808 podman[290901]: 2025-10-14 08:58:25.516109518 +0000 UTC m=+0.146671073 container start 05b21863dda394c5167a0d5e1c6ea769a47954b64e8048136f02537898d58d49 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:58:25 np0005486808 nova_compute[259627]: 2025-10-14 08:58:25.516 2 INFO os_vif [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:a1:5d,bridge_name='br-int',has_traffic_filtering=True,id=146ca52f-0b4f-46f0-9153-1120bf1c9e4e,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap146ca52f-0b')#033[00m
Oct 14 04:58:25 np0005486808 neutron-haproxy-ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b[290916]: [NOTICE]   (290946) : New worker (290955) forked
Oct 14 04:58:25 np0005486808 neutron-haproxy-ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b[290916]: [NOTICE]   (290946) : Loading success.
Oct 14 04:58:25 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1203: 305 pgs: 305 active+clean; 227 MiB data, 366 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 7.1 MiB/s wr, 186 op/s
Oct 14 04:58:25 np0005486808 nova_compute[259627]: 2025-10-14 08:58:25.572 2 DEBUG nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:58:25 np0005486808 nova_compute[259627]: 2025-10-14 08:58:25.573 2 DEBUG nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:58:25 np0005486808 nova_compute[259627]: 2025-10-14 08:58:25.573 2 DEBUG nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] No VIF found with MAC fa:16:3e:c7:a1:5d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 04:58:25 np0005486808 nova_compute[259627]: 2025-10-14 08:58:25.573 2 INFO nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Using config drive#033[00m
Oct 14 04:58:25 np0005486808 nova_compute[259627]: 2025-10-14 08:58:25.595 2 DEBUG nova.storage.rbd_utils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image 2826d9ce-d739-49a1-abfa-80cee62173fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:25 np0005486808 nova_compute[259627]: 2025-10-14 08:58:25.613 2 DEBUG nova.compute.manager [req-6ae7e7a2-6ab1-4108-8954-e5af70c6c9d8 req-f81f7e13-4b90-445e-ba17-ced92be8f222 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Received event network-vif-plugged-fd335735-b88a-42f7-911e-af4b2b9396fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:58:25 np0005486808 nova_compute[259627]: 2025-10-14 08:58:25.614 2 DEBUG oslo_concurrency.lockutils [req-6ae7e7a2-6ab1-4108-8954-e5af70c6c9d8 req-f81f7e13-4b90-445e-ba17-ced92be8f222 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "a071857d-db87-4931-95ad-f8c627f74160-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:25 np0005486808 nova_compute[259627]: 2025-10-14 08:58:25.614 2 DEBUG oslo_concurrency.lockutils [req-6ae7e7a2-6ab1-4108-8954-e5af70c6c9d8 req-f81f7e13-4b90-445e-ba17-ced92be8f222 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:25 np0005486808 nova_compute[259627]: 2025-10-14 08:58:25.614 2 DEBUG oslo_concurrency.lockutils [req-6ae7e7a2-6ab1-4108-8954-e5af70c6c9d8 req-f81f7e13-4b90-445e-ba17-ced92be8f222 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:25 np0005486808 nova_compute[259627]: 2025-10-14 08:58:25.615 2 DEBUG nova.compute.manager [req-6ae7e7a2-6ab1-4108-8954-e5af70c6c9d8 req-f81f7e13-4b90-445e-ba17-ced92be8f222 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Processing event network-vif-plugged-fd335735-b88a-42f7-911e-af4b2b9396fb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 04:58:25 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Oct 14 04:58:25 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:58:25.722025) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 04:58:25 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Oct 14 04:58:25 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432305722051, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1497, "num_deletes": 255, "total_data_size": 2114606, "memory_usage": 2151312, "flush_reason": "Manual Compaction"}
Oct 14 04:58:25 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Oct 14 04:58:25 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432305730571, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 2068577, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24145, "largest_seqno": 25641, "table_properties": {"data_size": 2061594, "index_size": 3994, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15573, "raw_average_key_size": 20, "raw_value_size": 2047250, "raw_average_value_size": 2708, "num_data_blocks": 177, "num_entries": 756, "num_filter_entries": 756, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760432188, "oldest_key_time": 1760432188, "file_creation_time": 1760432305, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Oct 14 04:58:25 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 8599 microseconds, and 4637 cpu microseconds.
Oct 14 04:58:25 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 04:58:25 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:58:25.730619) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 2068577 bytes OK
Oct 14 04:58:25 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:58:25.730640) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Oct 14 04:58:25 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:58:25.732225) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Oct 14 04:58:25 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:58:25.732241) EVENT_LOG_v1 {"time_micros": 1760432305732236, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 04:58:25 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:58:25.732259) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 04:58:25 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 2107844, prev total WAL file size 2107844, number of live WAL files 2.
Oct 14 04:58:25 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 04:58:25 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:58:25.733040) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Oct 14 04:58:25 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 04:58:25 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(2020KB)], [56(6701KB)]
Oct 14 04:58:25 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432305733091, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 8931006, "oldest_snapshot_seqno": -1}
Oct 14 04:58:25 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 4717 keys, 7199110 bytes, temperature: kUnknown
Oct 14 04:58:25 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432305766594, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 7199110, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7167794, "index_size": 18406, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11845, "raw_key_size": 118281, "raw_average_key_size": 25, "raw_value_size": 7082815, "raw_average_value_size": 1501, "num_data_blocks": 760, "num_entries": 4717, "num_filter_entries": 4717, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760432305, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Oct 14 04:58:25 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 04:58:25 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:58:25.766987) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 7199110 bytes
Oct 14 04:58:25 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:58:25.768281) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 264.8 rd, 213.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 6.5 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(7.8) write-amplify(3.5) OK, records in: 5240, records dropped: 523 output_compression: NoCompression
Oct 14 04:58:25 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:58:25.768300) EVENT_LOG_v1 {"time_micros": 1760432305768290, "job": 30, "event": "compaction_finished", "compaction_time_micros": 33727, "compaction_time_cpu_micros": 18251, "output_level": 6, "num_output_files": 1, "total_output_size": 7199110, "num_input_records": 5240, "num_output_records": 4717, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 04:58:25 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 04:58:25 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432305769174, "job": 30, "event": "table_file_deletion", "file_number": 58}
Oct 14 04:58:25 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 04:58:25 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432305770526, "job": 30, "event": "table_file_deletion", "file_number": 56}
Oct 14 04:58:25 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:58:25.732930) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:58:25 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:58:25.770676) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:58:25 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:58:25.770682) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:58:25 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:58:25.770684) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:58:25 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:58:25.770687) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:58:25 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-08:58:25.770689) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 04:58:25 np0005486808 nova_compute[259627]: 2025-10-14 08:58:25.953 2 DEBUG nova.network.neutron [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Successfully created port: 9db6eef3-e4da-4c17-91ea-1c3124906f61 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 04:58:26 np0005486808 nova_compute[259627]: 2025-10-14 08:58:26.029 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432306.028866, a071857d-db87-4931-95ad-f8c627f74160 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:58:26 np0005486808 nova_compute[259627]: 2025-10-14 08:58:26.030 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: a071857d-db87-4931-95ad-f8c627f74160] VM Started (Lifecycle Event)#033[00m
Oct 14 04:58:26 np0005486808 nova_compute[259627]: 2025-10-14 08:58:26.032 2 DEBUG nova.compute.manager [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 04:58:26 np0005486808 nova_compute[259627]: 2025-10-14 08:58:26.034 2 DEBUG nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 04:58:26 np0005486808 nova_compute[259627]: 2025-10-14 08:58:26.038 2 INFO nova.virt.libvirt.driver [-] [instance: a071857d-db87-4931-95ad-f8c627f74160] Instance spawned successfully.#033[00m
Oct 14 04:58:26 np0005486808 nova_compute[259627]: 2025-10-14 08:58:26.038 2 DEBUG nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 04:58:26 np0005486808 nova_compute[259627]: 2025-10-14 08:58:26.068 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: a071857d-db87-4931-95ad-f8c627f74160] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:58:26 np0005486808 nova_compute[259627]: 2025-10-14 08:58:26.076 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: a071857d-db87-4931-95ad-f8c627f74160] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:58:26 np0005486808 nova_compute[259627]: 2025-10-14 08:58:26.082 2 DEBUG nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:58:26 np0005486808 nova_compute[259627]: 2025-10-14 08:58:26.083 2 DEBUG nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:58:26 np0005486808 nova_compute[259627]: 2025-10-14 08:58:26.084 2 DEBUG nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:58:26 np0005486808 nova_compute[259627]: 2025-10-14 08:58:26.084 2 DEBUG nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:58:26 np0005486808 nova_compute[259627]: 2025-10-14 08:58:26.085 2 DEBUG nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:58:26 np0005486808 nova_compute[259627]: 2025-10-14 08:58:26.086 2 DEBUG nova.virt.libvirt.driver [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:58:26 np0005486808 nova_compute[259627]: 2025-10-14 08:58:26.128 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: a071857d-db87-4931-95ad-f8c627f74160] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:58:26 np0005486808 nova_compute[259627]: 2025-10-14 08:58:26.128 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432306.0299046, a071857d-db87-4931-95ad-f8c627f74160 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:58:26 np0005486808 nova_compute[259627]: 2025-10-14 08:58:26.129 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: a071857d-db87-4931-95ad-f8c627f74160] VM Paused (Lifecycle Event)#033[00m
Oct 14 04:58:26 np0005486808 nova_compute[259627]: 2025-10-14 08:58:26.166 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: a071857d-db87-4931-95ad-f8c627f74160] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:58:26 np0005486808 nova_compute[259627]: 2025-10-14 08:58:26.170 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432306.0339997, a071857d-db87-4931-95ad-f8c627f74160 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:58:26 np0005486808 nova_compute[259627]: 2025-10-14 08:58:26.171 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: a071857d-db87-4931-95ad-f8c627f74160] VM Resumed (Lifecycle Event)#033[00m
Oct 14 04:58:26 np0005486808 nova_compute[259627]: 2025-10-14 08:58:26.183 2 INFO nova.compute.manager [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Took 7.68 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 04:58:26 np0005486808 nova_compute[259627]: 2025-10-14 08:58:26.183 2 DEBUG nova.compute.manager [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:58:26 np0005486808 nova_compute[259627]: 2025-10-14 08:58:26.193 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: a071857d-db87-4931-95ad-f8c627f74160] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:58:26 np0005486808 nova_compute[259627]: 2025-10-14 08:58:26.197 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: a071857d-db87-4931-95ad-f8c627f74160] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:58:26 np0005486808 nova_compute[259627]: 2025-10-14 08:58:26.241 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: a071857d-db87-4931-95ad-f8c627f74160] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:58:26 np0005486808 nova_compute[259627]: 2025-10-14 08:58:26.272 2 INFO nova.compute.manager [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Took 8.81 seconds to build instance.#033[00m
Oct 14 04:58:26 np0005486808 nova_compute[259627]: 2025-10-14 08:58:26.296 2 DEBUG oslo_concurrency.lockutils [None req-f17367e0-0f25-4fde-aeb8-4fbf7959310f 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.924s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:26 np0005486808 nova_compute[259627]: 2025-10-14 08:58:26.308 2 INFO nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Creating config drive at /var/lib/nova/instances/2826d9ce-d739-49a1-abfa-80cee62173fb/disk.config#033[00m
Oct 14 04:58:26 np0005486808 nova_compute[259627]: 2025-10-14 08:58:26.318 2 DEBUG oslo_concurrency.processutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2826d9ce-d739-49a1-abfa-80cee62173fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9l97a1xi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:26 np0005486808 nova_compute[259627]: 2025-10-14 08:58:26.359 2 DEBUG nova.network.neutron [req-2f3189ed-8229-4a5f-96ad-c6d1f4dc5156 req-de88b6da-5618-44f3-8ed9-23bd9b7c57a8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Updated VIF entry in instance network info cache for port 146ca52f-0b4f-46f0-9153-1120bf1c9e4e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 04:58:26 np0005486808 nova_compute[259627]: 2025-10-14 08:58:26.361 2 DEBUG nova.network.neutron [req-2f3189ed-8229-4a5f-96ad-c6d1f4dc5156 req-de88b6da-5618-44f3-8ed9-23bd9b7c57a8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Updating instance_info_cache with network_info: [{"id": "146ca52f-0b4f-46f0-9153-1120bf1c9e4e", "address": "fa:16:3e:c7:a1:5d", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146ca52f-0b", "ovs_interfaceid": "146ca52f-0b4f-46f0-9153-1120bf1c9e4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:58:26 np0005486808 nova_compute[259627]: 2025-10-14 08:58:26.379 2 DEBUG oslo_concurrency.lockutils [req-2f3189ed-8229-4a5f-96ad-c6d1f4dc5156 req-de88b6da-5618-44f3-8ed9-23bd9b7c57a8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-2826d9ce-d739-49a1-abfa-80cee62173fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:58:26 np0005486808 nova_compute[259627]: 2025-10-14 08:58:26.469 2 DEBUG oslo_concurrency.processutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2826d9ce-d739-49a1-abfa-80cee62173fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9l97a1xi" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:26 np0005486808 nova_compute[259627]: 2025-10-14 08:58:26.515 2 DEBUG nova.storage.rbd_utils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image 2826d9ce-d739-49a1-abfa-80cee62173fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:26 np0005486808 nova_compute[259627]: 2025-10-14 08:58:26.522 2 DEBUG oslo_concurrency.processutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2826d9ce-d739-49a1-abfa-80cee62173fb/disk.config 2826d9ce-d739-49a1-abfa-80cee62173fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:26 np0005486808 nova_compute[259627]: 2025-10-14 08:58:26.676 2 DEBUG oslo_concurrency.processutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2826d9ce-d739-49a1-abfa-80cee62173fb/disk.config 2826d9ce-d739-49a1-abfa-80cee62173fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:26 np0005486808 nova_compute[259627]: 2025-10-14 08:58:26.677 2 INFO nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Deleting local config drive /var/lib/nova/instances/2826d9ce-d739-49a1-abfa-80cee62173fb/disk.config because it was imported into RBD.#033[00m
Oct 14 04:58:26 np0005486808 kernel: tap146ca52f-0b: entered promiscuous mode
Oct 14 04:58:26 np0005486808 NetworkManager[44885]: <info>  [1760432306.7257] manager: (tap146ca52f-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/79)
Oct 14 04:58:26 np0005486808 systemd-udevd[290824]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 04:58:26 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:26Z|00145|binding|INFO|Claiming lport 146ca52f-0b4f-46f0-9153-1120bf1c9e4e for this chassis.
Oct 14 04:58:26 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:26Z|00146|binding|INFO|146ca52f-0b4f-46f0-9153-1120bf1c9e4e: Claiming fa:16:3e:c7:a1:5d 10.100.0.10
Oct 14 04:58:26 np0005486808 nova_compute[259627]: 2025-10-14 08:58:26.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:26.747 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:a1:5d 10.100.0.10'], port_security=['fa:16:3e:c7:a1:5d 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '2826d9ce-d739-49a1-abfa-80cee62173fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d87d2d744db48dc8b32bb4bf6847fce', 'neutron:revision_number': '2', 'neutron:security_group_ids': '45598031-691e-407d-b099-ed9702c4e6f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=53047605-2412-4fda-a69f-45911eab5576, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=146ca52f-0b4f-46f0-9153-1120bf1c9e4e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:58:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:26.749 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 146ca52f-0b4f-46f0-9153-1120bf1c9e4e in datapath 2322cf7a-0090-40fa-a558-42d84cc6fc2a bound to our chassis#033[00m
Oct 14 04:58:26 np0005486808 NetworkManager[44885]: <info>  [1760432306.7511] device (tap146ca52f-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 04:58:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:26.752 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2322cf7a-0090-40fa-a558-42d84cc6fc2a#033[00m
Oct 14 04:58:26 np0005486808 NetworkManager[44885]: <info>  [1760432306.7532] device (tap146ca52f-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 04:58:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:26.768 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b2c2ea55-7036-4883-804b-ac32cf6eedd5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:26.769 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2322cf7a-01 in ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 04:58:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:26.772 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2322cf7a-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 04:58:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:26.772 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bfd99117-b904-4a2f-b680-e06360cb2bf5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:26.773 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fa27032c-004e-452f-ada6-5cd89ef51cdb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:26.787 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[905377d5-83df-44e5-8fb1-a9a3992d0aa4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:26 np0005486808 systemd-machined[214636]: New machine qemu-25-instance-00000018.
Oct 14 04:58:26 np0005486808 systemd[1]: Started Virtual Machine qemu-25-instance-00000018.
Oct 14 04:58:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:26.812 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f0d42b7f-c3da-4930-a200-1174e3aec892]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:26 np0005486808 nova_compute[259627]: 2025-10-14 08:58:26.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:26 np0005486808 nova_compute[259627]: 2025-10-14 08:58:26.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:26 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:26Z|00147|binding|INFO|Setting lport 146ca52f-0b4f-46f0-9153-1120bf1c9e4e ovn-installed in OVS
Oct 14 04:58:26 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:26Z|00148|binding|INFO|Setting lport 146ca52f-0b4f-46f0-9153-1120bf1c9e4e up in Southbound
Oct 14 04:58:26 np0005486808 nova_compute[259627]: 2025-10-14 08:58:26.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:26.845 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4250ef29-ec6e-4076-af70-e2fcc2e10864]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:26.850 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ecc8f855-ff37-419a-8b6d-3c487be36da9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:26 np0005486808 NetworkManager[44885]: <info>  [1760432306.8519] manager: (tap2322cf7a-00): new Veth device (/org/freedesktop/NetworkManager/Devices/80)
Oct 14 04:58:26 np0005486808 nova_compute[259627]: 2025-10-14 08:58:26.878 2 DEBUG nova.network.neutron [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Successfully updated port: 57b4d441-0c29-4419-b20b-3b5c4223b7a8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 04:58:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:26.889 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[7e8bcfbe-ddd0-43be-90f6-731b83344f08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:26.893 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8c179c28-0f45-4488-b50c-dfcfe0f469dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:26 np0005486808 nova_compute[259627]: 2025-10-14 08:58:26.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:26 np0005486808 NetworkManager[44885]: <info>  [1760432306.9214] device (tap2322cf7a-00): carrier: link connected
Oct 14 04:58:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:26.928 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d4fec130-7cc9-437c-832b-952670cd3be9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:26.946 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[26f421e9-e830-4271-af49-553d3ae5d3b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2322cf7a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d3:95:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 608568, 'reachable_time': 17290, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291065, 'error': None, 'target': 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:26.962 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[981cf5b4-18fd-4bf1-9d3a-7b66e4ff9d2e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed3:956c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 608568, 'tstamp': 608568}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291066, 'error': None, 'target': 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:26.986 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3a2d3f25-6e65-4ae4-8800-c50f8b1cc22e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2322cf7a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d3:95:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 608568, 'reachable_time': 17290, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 291067, 'error': None, 'target': 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:27 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:27.018 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e94d5a56-695d-44d3-9b4e-8e63529849bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:27 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:27.078 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[29f38978-6a4e-43dc-820c-ec6df1ecf5d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:27 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:27.080 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2322cf7a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:27 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:27.081 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:58:27 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:27.081 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2322cf7a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:27 np0005486808 NetworkManager[44885]: <info>  [1760432307.0842] manager: (tap2322cf7a-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/81)
Oct 14 04:58:27 np0005486808 nova_compute[259627]: 2025-10-14 08:58:27.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:27 np0005486808 kernel: tap2322cf7a-00: entered promiscuous mode
Oct 14 04:58:27 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:27.087 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2322cf7a-00, col_values=(('external_ids', {'iface-id': '0616bbde-729a-4cd4-ba39-5fcdf59ece5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:27 np0005486808 nova_compute[259627]: 2025-10-14 08:58:27.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:27 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:27Z|00149|binding|INFO|Releasing lport 0616bbde-729a-4cd4-ba39-5fcdf59ece5e from this chassis (sb_readonly=0)
Oct 14 04:58:27 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:27.090 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2322cf7a-0090-40fa-a558-42d84cc6fc2a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2322cf7a-0090-40fa-a558-42d84cc6fc2a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 04:58:27 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:27.092 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[83303758-8dbf-4ab7-bd3c-425203c55038]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:27 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:27.093 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 04:58:27 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 04:58:27 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 04:58:27 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-2322cf7a-0090-40fa-a558-42d84cc6fc2a
Oct 14 04:58:27 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 04:58:27 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 04:58:27 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 04:58:27 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/2322cf7a-0090-40fa-a558-42d84cc6fc2a.pid.haproxy
Oct 14 04:58:27 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 04:58:27 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:58:27 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 04:58:27 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 04:58:27 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 04:58:27 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 04:58:27 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 04:58:27 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 04:58:27 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 04:58:27 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 04:58:27 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 04:58:27 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 04:58:27 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 04:58:27 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 04:58:27 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 04:58:27 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:58:27 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:58:27 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 04:58:27 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 04:58:27 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 04:58:27 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 2322cf7a-0090-40fa-a558-42d84cc6fc2a
Oct 14 04:58:27 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 04:58:27 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:27.094 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'env', 'PROCESS_TAG=haproxy-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2322cf7a-0090-40fa-a558-42d84cc6fc2a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 04:58:27 np0005486808 nova_compute[259627]: 2025-10-14 08:58:27.107 2 DEBUG nova.compute.manager [req-76c949d0-4e2b-426c-9550-71e927807b3b req-248d03a2-d0af-41d9-8bf7-62b35bd84430 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received event network-changed-57b4d441-0c29-4419-b20b-3b5c4223b7a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:58:27 np0005486808 nova_compute[259627]: 2025-10-14 08:58:27.107 2 DEBUG nova.compute.manager [req-76c949d0-4e2b-426c-9550-71e927807b3b req-248d03a2-d0af-41d9-8bf7-62b35bd84430 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Refreshing instance network info cache due to event network-changed-57b4d441-0c29-4419-b20b-3b5c4223b7a8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 04:58:27 np0005486808 nova_compute[259627]: 2025-10-14 08:58:27.108 2 DEBUG oslo_concurrency.lockutils [req-76c949d0-4e2b-426c-9550-71e927807b3b req-248d03a2-d0af-41d9-8bf7-62b35bd84430 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-aefbf308-7f99-4a76-8d5e-54613f6bdf83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:58:27 np0005486808 nova_compute[259627]: 2025-10-14 08:58:27.108 2 DEBUG oslo_concurrency.lockutils [req-76c949d0-4e2b-426c-9550-71e927807b3b req-248d03a2-d0af-41d9-8bf7-62b35bd84430 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-aefbf308-7f99-4a76-8d5e-54613f6bdf83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:58:27 np0005486808 nova_compute[259627]: 2025-10-14 08:58:27.109 2 DEBUG nova.network.neutron [req-76c949d0-4e2b-426c-9550-71e927807b3b req-248d03a2-d0af-41d9-8bf7-62b35bd84430 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Refreshing network info cache for port 57b4d441-0c29-4419-b20b-3b5c4223b7a8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 04:58:27 np0005486808 nova_compute[259627]: 2025-10-14 08:58:27.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:27 np0005486808 nova_compute[259627]: 2025-10-14 08:58:27.248 2 DEBUG oslo_concurrency.lockutils [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Acquiring lock "interface-a071857d-db87-4931-95ad-f8c627f74160-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:27 np0005486808 nova_compute[259627]: 2025-10-14 08:58:27.249 2 DEBUG oslo_concurrency.lockutils [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lock "interface-a071857d-db87-4931-95ad-f8c627f74160-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:27 np0005486808 nova_compute[259627]: 2025-10-14 08:58:27.249 2 DEBUG nova.objects.instance [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lazy-loading 'flavor' on Instance uuid a071857d-db87-4931-95ad-f8c627f74160 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:58:27 np0005486808 nova_compute[259627]: 2025-10-14 08:58:27.277 2 DEBUG nova.network.neutron [req-76c949d0-4e2b-426c-9550-71e927807b3b req-248d03a2-d0af-41d9-8bf7-62b35bd84430 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 04:58:27 np0005486808 nova_compute[259627]: 2025-10-14 08:58:27.290 2 DEBUG nova.objects.instance [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lazy-loading 'pci_requests' on Instance uuid a071857d-db87-4931-95ad-f8c627f74160 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:58:27 np0005486808 nova_compute[259627]: 2025-10-14 08:58:27.308 2 DEBUG nova.network.neutron [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 04:58:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 04:58:27 np0005486808 podman[291140]: 2025-10-14 08:58:27.505186682 +0000 UTC m=+0.066598786 container create 66379a62590509b0800e9c8e0e2d70b44414f672d17109219a0cfb5b1f7de2cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 14 04:58:27 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1204: 305 pgs: 305 active+clean; 227 MiB data, 366 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 7.1 MiB/s wr, 186 op/s
Oct 14 04:58:27 np0005486808 systemd[1]: Started libpod-conmon-66379a62590509b0800e9c8e0e2d70b44414f672d17109219a0cfb5b1f7de2cc.scope.
Oct 14 04:58:27 np0005486808 podman[291140]: 2025-10-14 08:58:27.46885811 +0000 UTC m=+0.030270304 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 04:58:27 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:58:27 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48c703f783893fc5cf0d9cbca7dbc962deb1b04986a44b3368c6b7cd5b11d993/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 04:58:27 np0005486808 podman[291140]: 2025-10-14 08:58:27.582311236 +0000 UTC m=+0.143723340 container init 66379a62590509b0800e9c8e0e2d70b44414f672d17109219a0cfb5b1f7de2cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 04:58:27 np0005486808 podman[291140]: 2025-10-14 08:58:27.587993755 +0000 UTC m=+0.149405859 container start 66379a62590509b0800e9c8e0e2d70b44414f672d17109219a0cfb5b1f7de2cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009)
Oct 14 04:58:27 np0005486808 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[291154]: [NOTICE]   (291158) : New worker (291160) forked
Oct 14 04:58:27 np0005486808 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[291154]: [NOTICE]   (291158) : Loading success.
Oct 14 04:58:27 np0005486808 nova_compute[259627]: 2025-10-14 08:58:27.640 2 DEBUG nova.network.neutron [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Successfully updated port: 685df8a6-7b64-441e-9a56-4ede8db5faa9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 04:58:27 np0005486808 nova_compute[259627]: 2025-10-14 08:58:27.840 2 DEBUG nova.network.neutron [req-76c949d0-4e2b-426c-9550-71e927807b3b req-248d03a2-d0af-41d9-8bf7-62b35bd84430 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:58:27 np0005486808 nova_compute[259627]: 2025-10-14 08:58:27.843 2 DEBUG nova.policy [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '664abc01a11d458d9644488bf31e47f4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '99649891054745d8a5186a1ad099e5a7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 04:58:27 np0005486808 nova_compute[259627]: 2025-10-14 08:58:27.879 2 DEBUG oslo_concurrency.lockutils [req-76c949d0-4e2b-426c-9550-71e927807b3b req-248d03a2-d0af-41d9-8bf7-62b35bd84430 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-aefbf308-7f99-4a76-8d5e-54613f6bdf83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:58:27 np0005486808 nova_compute[259627]: 2025-10-14 08:58:27.892 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432307.8921707, 2826d9ce-d739-49a1-abfa-80cee62173fb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:58:27 np0005486808 nova_compute[259627]: 2025-10-14 08:58:27.893 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] VM Started (Lifecycle Event)#033[00m
Oct 14 04:58:27 np0005486808 nova_compute[259627]: 2025-10-14 08:58:27.915 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:58:27 np0005486808 nova_compute[259627]: 2025-10-14 08:58:27.921 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432307.8929381, 2826d9ce-d739-49a1-abfa-80cee62173fb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:58:27 np0005486808 nova_compute[259627]: 2025-10-14 08:58:27.921 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] VM Paused (Lifecycle Event)#033[00m
Oct 14 04:58:27 np0005486808 nova_compute[259627]: 2025-10-14 08:58:27.947 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:58:27 np0005486808 nova_compute[259627]: 2025-10-14 08:58:27.952 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:58:27 np0005486808 nova_compute[259627]: 2025-10-14 08:58:27.974 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:58:28 np0005486808 nova_compute[259627]: 2025-10-14 08:58:28.162 2 DEBUG nova.compute.manager [req-3265a89a-755e-482e-b53a-35ac2f97ea22 req-b7e58616-e451-42b8-ab5c-fe3322bd35ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Received event network-vif-plugged-fd335735-b88a-42f7-911e-af4b2b9396fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:58:28 np0005486808 nova_compute[259627]: 2025-10-14 08:58:28.163 2 DEBUG oslo_concurrency.lockutils [req-3265a89a-755e-482e-b53a-35ac2f97ea22 req-b7e58616-e451-42b8-ab5c-fe3322bd35ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "a071857d-db87-4931-95ad-f8c627f74160-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:28 np0005486808 nova_compute[259627]: 2025-10-14 08:58:28.163 2 DEBUG oslo_concurrency.lockutils [req-3265a89a-755e-482e-b53a-35ac2f97ea22 req-b7e58616-e451-42b8-ab5c-fe3322bd35ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:28 np0005486808 nova_compute[259627]: 2025-10-14 08:58:28.163 2 DEBUG oslo_concurrency.lockutils [req-3265a89a-755e-482e-b53a-35ac2f97ea22 req-b7e58616-e451-42b8-ab5c-fe3322bd35ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:28 np0005486808 nova_compute[259627]: 2025-10-14 08:58:28.163 2 DEBUG nova.compute.manager [req-3265a89a-755e-482e-b53a-35ac2f97ea22 req-b7e58616-e451-42b8-ab5c-fe3322bd35ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] No waiting events found dispatching network-vif-plugged-fd335735-b88a-42f7-911e-af4b2b9396fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:58:28 np0005486808 nova_compute[259627]: 2025-10-14 08:58:28.163 2 WARNING nova.compute.manager [req-3265a89a-755e-482e-b53a-35ac2f97ea22 req-b7e58616-e451-42b8-ab5c-fe3322bd35ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Received unexpected event network-vif-plugged-fd335735-b88a-42f7-911e-af4b2b9396fb for instance with vm_state active and task_state None.#033[00m
Oct 14 04:58:28 np0005486808 nova_compute[259627]: 2025-10-14 08:58:28.449 2 DEBUG nova.network.neutron [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Successfully created port: cada6b6a-e534-4cd7-8abf-e402059d6964 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 04:58:28 np0005486808 nova_compute[259627]: 2025-10-14 08:58:28.486 2 DEBUG nova.network.neutron [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Successfully updated port: 9db6eef3-e4da-4c17-91ea-1c3124906f61 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 04:58:28 np0005486808 nova_compute[259627]: 2025-10-14 08:58:28.500 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquiring lock "refresh_cache-aefbf308-7f99-4a76-8d5e-54613f6bdf83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:58:28 np0005486808 nova_compute[259627]: 2025-10-14 08:58:28.501 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquired lock "refresh_cache-aefbf308-7f99-4a76-8d5e-54613f6bdf83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:58:28 np0005486808 nova_compute[259627]: 2025-10-14 08:58:28.501 2 DEBUG nova.network.neutron [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 04:58:28 np0005486808 nova_compute[259627]: 2025-10-14 08:58:28.648 2 DEBUG nova.network.neutron [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 04:58:29 np0005486808 nova_compute[259627]: 2025-10-14 08:58:29.051 2 DEBUG nova.network.neutron [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Successfully updated port: cada6b6a-e534-4cd7-8abf-e402059d6964 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 04:58:29 np0005486808 nova_compute[259627]: 2025-10-14 08:58:29.066 2 DEBUG oslo_concurrency.lockutils [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Acquiring lock "refresh_cache-a071857d-db87-4931-95ad-f8c627f74160" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:58:29 np0005486808 nova_compute[259627]: 2025-10-14 08:58:29.067 2 DEBUG oslo_concurrency.lockutils [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Acquired lock "refresh_cache-a071857d-db87-4931-95ad-f8c627f74160" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:58:29 np0005486808 nova_compute[259627]: 2025-10-14 08:58:29.067 2 DEBUG nova.network.neutron [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 04:58:29 np0005486808 nova_compute[259627]: 2025-10-14 08:58:29.235 2 WARNING nova.network.neutron [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] 8e06007a-4993-4328-9612-b43b931e2e3b already exists in list: networks containing: ['8e06007a-4993-4328-9612-b43b931e2e3b']. ignoring it#033[00m
Oct 14 04:58:29 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1205: 305 pgs: 305 active+clean; 227 MiB data, 366 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 7.1 MiB/s wr, 187 op/s
Oct 14 04:58:29 np0005486808 nova_compute[259627]: 2025-10-14 08:58:29.627 2 DEBUG nova.compute.manager [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Received event network-vif-plugged-146ca52f-0b4f-46f0-9153-1120bf1c9e4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:58:29 np0005486808 nova_compute[259627]: 2025-10-14 08:58:29.628 2 DEBUG oslo_concurrency.lockutils [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2826d9ce-d739-49a1-abfa-80cee62173fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:29 np0005486808 nova_compute[259627]: 2025-10-14 08:58:29.629 2 DEBUG oslo_concurrency.lockutils [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2826d9ce-d739-49a1-abfa-80cee62173fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:29 np0005486808 nova_compute[259627]: 2025-10-14 08:58:29.630 2 DEBUG oslo_concurrency.lockutils [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2826d9ce-d739-49a1-abfa-80cee62173fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:29 np0005486808 nova_compute[259627]: 2025-10-14 08:58:29.631 2 DEBUG nova.compute.manager [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Processing event network-vif-plugged-146ca52f-0b4f-46f0-9153-1120bf1c9e4e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 04:58:29 np0005486808 nova_compute[259627]: 2025-10-14 08:58:29.631 2 DEBUG nova.compute.manager [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Received event network-vif-plugged-146ca52f-0b4f-46f0-9153-1120bf1c9e4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:58:29 np0005486808 nova_compute[259627]: 2025-10-14 08:58:29.632 2 DEBUG oslo_concurrency.lockutils [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2826d9ce-d739-49a1-abfa-80cee62173fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:29 np0005486808 nova_compute[259627]: 2025-10-14 08:58:29.633 2 DEBUG oslo_concurrency.lockutils [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2826d9ce-d739-49a1-abfa-80cee62173fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:29 np0005486808 nova_compute[259627]: 2025-10-14 08:58:29.634 2 DEBUG oslo_concurrency.lockutils [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2826d9ce-d739-49a1-abfa-80cee62173fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:29 np0005486808 nova_compute[259627]: 2025-10-14 08:58:29.634 2 DEBUG nova.compute.manager [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] No waiting events found dispatching network-vif-plugged-146ca52f-0b4f-46f0-9153-1120bf1c9e4e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:58:29 np0005486808 nova_compute[259627]: 2025-10-14 08:58:29.635 2 WARNING nova.compute.manager [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Received unexpected event network-vif-plugged-146ca52f-0b4f-46f0-9153-1120bf1c9e4e for instance with vm_state building and task_state spawning.#033[00m
Oct 14 04:58:29 np0005486808 nova_compute[259627]: 2025-10-14 08:58:29.636 2 DEBUG nova.compute.manager [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received event network-changed-685df8a6-7b64-441e-9a56-4ede8db5faa9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:58:29 np0005486808 nova_compute[259627]: 2025-10-14 08:58:29.636 2 DEBUG nova.compute.manager [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Refreshing instance network info cache due to event network-changed-685df8a6-7b64-441e-9a56-4ede8db5faa9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 04:58:29 np0005486808 nova_compute[259627]: 2025-10-14 08:58:29.637 2 DEBUG oslo_concurrency.lockutils [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-aefbf308-7f99-4a76-8d5e-54613f6bdf83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:58:29 np0005486808 nova_compute[259627]: 2025-10-14 08:58:29.643 2 DEBUG nova.compute.manager [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 04:58:29 np0005486808 nova_compute[259627]: 2025-10-14 08:58:29.652 2 DEBUG nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 04:58:29 np0005486808 nova_compute[259627]: 2025-10-14 08:58:29.654 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432309.6528177, 2826d9ce-d739-49a1-abfa-80cee62173fb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:58:29 np0005486808 nova_compute[259627]: 2025-10-14 08:58:29.654 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] VM Resumed (Lifecycle Event)#033[00m
Oct 14 04:58:29 np0005486808 nova_compute[259627]: 2025-10-14 08:58:29.662 2 INFO nova.virt.libvirt.driver [-] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Instance spawned successfully.#033[00m
Oct 14 04:58:29 np0005486808 nova_compute[259627]: 2025-10-14 08:58:29.662 2 DEBUG nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 04:58:29 np0005486808 nova_compute[259627]: 2025-10-14 08:58:29.679 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:58:29 np0005486808 nova_compute[259627]: 2025-10-14 08:58:29.690 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:58:29 np0005486808 nova_compute[259627]: 2025-10-14 08:58:29.700 2 DEBUG nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:58:29 np0005486808 nova_compute[259627]: 2025-10-14 08:58:29.701 2 DEBUG nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:58:29 np0005486808 nova_compute[259627]: 2025-10-14 08:58:29.701 2 DEBUG nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:58:29 np0005486808 nova_compute[259627]: 2025-10-14 08:58:29.702 2 DEBUG nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:58:29 np0005486808 nova_compute[259627]: 2025-10-14 08:58:29.703 2 DEBUG nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:58:29 np0005486808 nova_compute[259627]: 2025-10-14 08:58:29.703 2 DEBUG nova.virt.libvirt.driver [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:58:29 np0005486808 nova_compute[259627]: 2025-10-14 08:58:29.741 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:58:29 np0005486808 nova_compute[259627]: 2025-10-14 08:58:29.785 2 INFO nova.compute.manager [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Took 8.66 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 04:58:29 np0005486808 nova_compute[259627]: 2025-10-14 08:58:29.786 2 DEBUG nova.compute.manager [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:58:29 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct 14 04:58:29 np0005486808 nova_compute[259627]: 2025-10-14 08:58:29.847 2 INFO nova.compute.manager [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Took 9.57 seconds to build instance.#033[00m
Oct 14 04:58:29 np0005486808 nova_compute[259627]: 2025-10-14 08:58:29.867 2 DEBUG oslo_concurrency.lockutils [None req-d0df6552-6d49-4225-b6c2-ceec8dcc6327 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "2826d9ce-d739-49a1-abfa-80cee62173fb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:30 np0005486808 nova_compute[259627]: 2025-10-14 08:58:30.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:30 np0005486808 nova_compute[259627]: 2025-10-14 08:58:30.511 2 DEBUG nova.compute.manager [req-893ab63d-eeb1-46e3-a3ef-fef3c67d672b req-0ad289ae-f221-4b94-a274-d1d3a57830fd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Received event network-changed-cada6b6a-e534-4cd7-8abf-e402059d6964 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:58:30 np0005486808 nova_compute[259627]: 2025-10-14 08:58:30.511 2 DEBUG nova.compute.manager [req-893ab63d-eeb1-46e3-a3ef-fef3c67d672b req-0ad289ae-f221-4b94-a274-d1d3a57830fd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Refreshing instance network info cache due to event network-changed-cada6b6a-e534-4cd7-8abf-e402059d6964. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 04:58:30 np0005486808 nova_compute[259627]: 2025-10-14 08:58:30.511 2 DEBUG oslo_concurrency.lockutils [req-893ab63d-eeb1-46e3-a3ef-fef3c67d672b req-0ad289ae-f221-4b94-a274-d1d3a57830fd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-a071857d-db87-4931-95ad-f8c627f74160" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:58:30 np0005486808 podman[291169]: 2025-10-14 08:58:30.638821941 +0000 UTC m=+0.060625180 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:58:30 np0005486808 podman[291170]: 2025-10-14 08:58:30.64081743 +0000 UTC m=+0.057966244 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=iscsid, container_name=iscsid)
Oct 14 04:58:30 np0005486808 nova_compute[259627]: 2025-10-14 08:58:30.856 2 DEBUG nova.network.neutron [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Updating instance_info_cache with network_info: [{"id": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "address": "fa:16:3e:3c:7f:93", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b4d441-0c", "ovs_interfaceid": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "address": "fa:16:3e:22:86:b1", "network": {"id": "2534100b-a4f5-4f68-9f75-a1af37008664", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1512787070", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.144", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685df8a6-7b", "ovs_interfaceid": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9db6eef3-e4da-4c17-91ea-1c3124906f61", "address": "fa:16:3e:24:11:79", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9db6eef3-e4", "ovs_interfaceid": "9db6eef3-e4da-4c17-91ea-1c3124906f61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:58:30 np0005486808 nova_compute[259627]: 2025-10-14 08:58:30.880 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Releasing lock "refresh_cache-aefbf308-7f99-4a76-8d5e-54613f6bdf83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:58:30 np0005486808 nova_compute[259627]: 2025-10-14 08:58:30.881 2 DEBUG nova.compute.manager [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Instance network_info: |[{"id": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "address": "fa:16:3e:3c:7f:93", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b4d441-0c", "ovs_interfaceid": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "address": "fa:16:3e:22:86:b1", "network": {"id": "2534100b-a4f5-4f68-9f75-a1af37008664", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1512787070", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.144", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685df8a6-7b", "ovs_interfaceid": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9db6eef3-e4da-4c17-91ea-1c3124906f61", "address": "fa:16:3e:24:11:79", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9db6eef3-e4", "ovs_interfaceid": "9db6eef3-e4da-4c17-91ea-1c3124906f61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 04:58:30 np0005486808 nova_compute[259627]: 2025-10-14 08:58:30.881 2 DEBUG oslo_concurrency.lockutils [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-aefbf308-7f99-4a76-8d5e-54613f6bdf83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:58:30 np0005486808 nova_compute[259627]: 2025-10-14 08:58:30.882 2 DEBUG nova.network.neutron [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Refreshing network info cache for port 685df8a6-7b64-441e-9a56-4ede8db5faa9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 04:58:30 np0005486808 nova_compute[259627]: 2025-10-14 08:58:30.886 2 DEBUG nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Start _get_guest_xml network_info=[{"id": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "address": "fa:16:3e:3c:7f:93", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b4d441-0c", "ovs_interfaceid": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "address": "fa:16:3e:22:86:b1", "network": {"id": "2534100b-a4f5-4f68-9f75-a1af37008664", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1512787070", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.144", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": 
true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685df8a6-7b", "ovs_interfaceid": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9db6eef3-e4da-4c17-91ea-1c3124906f61", "address": "fa:16:3e:24:11:79", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9db6eef3-e4", "ovs_interfaceid": "9db6eef3-e4da-4c17-91ea-1c3124906f61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 04:58:30 np0005486808 nova_compute[259627]: 2025-10-14 08:58:30.890 2 WARNING nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:58:30 np0005486808 nova_compute[259627]: 2025-10-14 08:58:30.896 2 DEBUG nova.virt.libvirt.host [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 04:58:30 np0005486808 nova_compute[259627]: 2025-10-14 08:58:30.897 2 DEBUG nova.virt.libvirt.host [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 04:58:30 np0005486808 nova_compute[259627]: 2025-10-14 08:58:30.900 2 DEBUG nova.virt.libvirt.host [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 04:58:30 np0005486808 nova_compute[259627]: 2025-10-14 08:58:30.901 2 DEBUG nova.virt.libvirt.host [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 04:58:30 np0005486808 nova_compute[259627]: 2025-10-14 08:58:30.901 2 DEBUG nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 04:58:30 np0005486808 nova_compute[259627]: 2025-10-14 08:58:30.902 2 DEBUG nova.virt.hardware [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 04:58:30 np0005486808 nova_compute[259627]: 2025-10-14 08:58:30.902 2 DEBUG nova.virt.hardware [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 04:58:30 np0005486808 nova_compute[259627]: 2025-10-14 08:58:30.903 2 DEBUG nova.virt.hardware [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 04:58:30 np0005486808 nova_compute[259627]: 2025-10-14 08:58:30.903 2 DEBUG nova.virt.hardware [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 04:58:30 np0005486808 nova_compute[259627]: 2025-10-14 08:58:30.903 2 DEBUG nova.virt.hardware [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 04:58:30 np0005486808 nova_compute[259627]: 2025-10-14 08:58:30.904 2 DEBUG nova.virt.hardware [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 04:58:30 np0005486808 nova_compute[259627]: 2025-10-14 08:58:30.904 2 DEBUG nova.virt.hardware [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 04:58:30 np0005486808 nova_compute[259627]: 2025-10-14 08:58:30.904 2 DEBUG nova.virt.hardware [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 04:58:30 np0005486808 nova_compute[259627]: 2025-10-14 08:58:30.905 2 DEBUG nova.virt.hardware [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 04:58:30 np0005486808 nova_compute[259627]: 2025-10-14 08:58:30.905 2 DEBUG nova.virt.hardware [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 04:58:30 np0005486808 nova_compute[259627]: 2025-10-14 08:58:30.905 2 DEBUG nova.virt.hardware [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 04:58:30 np0005486808 nova_compute[259627]: 2025-10-14 08:58:30.908 2 DEBUG oslo_concurrency.processutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.135 2 DEBUG nova.network.neutron [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Updating instance_info_cache with network_info: [{"id": "fd335735-b88a-42f7-911e-af4b2b9396fb", "address": "fa:16:3e:58:04:b2", "network": {"id": "8e06007a-4993-4328-9612-b43b931e2e3b", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1415539227-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99649891054745d8a5186a1ad099e5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd335735-b8", "ovs_interfaceid": "fd335735-b88a-42f7-911e-af4b2b9396fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cada6b6a-e534-4cd7-8abf-e402059d6964", "address": "fa:16:3e:49:47:df", "network": {"id": "8e06007a-4993-4328-9612-b43b931e2e3b", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1415539227-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99649891054745d8a5186a1ad099e5a7", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcada6b6a-e5", "ovs_interfaceid": "cada6b6a-e534-4cd7-8abf-e402059d6964", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.159 2 DEBUG oslo_concurrency.lockutils [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Releasing lock "refresh_cache-a071857d-db87-4931-95ad-f8c627f74160" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.160 2 DEBUG oslo_concurrency.lockutils [req-893ab63d-eeb1-46e3-a3ef-fef3c67d672b req-0ad289ae-f221-4b94-a274-d1d3a57830fd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-a071857d-db87-4931-95ad-f8c627f74160" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.161 2 DEBUG nova.network.neutron [req-893ab63d-eeb1-46e3-a3ef-fef3c67d672b req-0ad289ae-f221-4b94-a274-d1d3a57830fd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Refreshing network info cache for port cada6b6a-e534-4cd7-8abf-e402059d6964 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.164 2 DEBUG nova.virt.libvirt.vif [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:58:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-635702397',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-635702397',id=23,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:58:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='99649891054745d8a5186a1ad099e5a7',ramdisk_id='',reservation_id='r-swuxrn8t',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',o
wner_project_name='tempest-AttachInterfacesV270Test-1504567615',owner_user_name='tempest-AttachInterfacesV270Test-1504567615-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:58:26Z,user_data=None,user_id='664abc01a11d458d9644488bf31e47f4',uuid=a071857d-db87-4931-95ad-f8c627f74160,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cada6b6a-e534-4cd7-8abf-e402059d6964", "address": "fa:16:3e:49:47:df", "network": {"id": "8e06007a-4993-4328-9612-b43b931e2e3b", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1415539227-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99649891054745d8a5186a1ad099e5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcada6b6a-e5", "ovs_interfaceid": "cada6b6a-e534-4cd7-8abf-e402059d6964", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.164 2 DEBUG nova.network.os_vif_util [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Converting VIF {"id": "cada6b6a-e534-4cd7-8abf-e402059d6964", "address": "fa:16:3e:49:47:df", "network": {"id": "8e06007a-4993-4328-9612-b43b931e2e3b", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1415539227-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99649891054745d8a5186a1ad099e5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcada6b6a-e5", "ovs_interfaceid": "cada6b6a-e534-4cd7-8abf-e402059d6964", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.165 2 DEBUG nova.network.os_vif_util [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:47:df,bridge_name='br-int',has_traffic_filtering=True,id=cada6b6a-e534-4cd7-8abf-e402059d6964,network=Network(8e06007a-4993-4328-9612-b43b931e2e3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcada6b6a-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.166 2 DEBUG os_vif [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:47:df,bridge_name='br-int',has_traffic_filtering=True,id=cada6b6a-e534-4cd7-8abf-e402059d6964,network=Network(8e06007a-4993-4328-9612-b43b931e2e3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcada6b6a-e5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.167 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.168 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.170 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcada6b6a-e5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.171 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcada6b6a-e5, col_values=(('external_ids', {'iface-id': 'cada6b6a-e534-4cd7-8abf-e402059d6964', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:49:47:df', 'vm-uuid': 'a071857d-db87-4931-95ad-f8c627f74160'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:31 np0005486808 NetworkManager[44885]: <info>  [1760432311.1736] manager: (tapcada6b6a-e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/82)
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.182 2 INFO os_vif [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:47:df,bridge_name='br-int',has_traffic_filtering=True,id=cada6b6a-e534-4cd7-8abf-e402059d6964,network=Network(8e06007a-4993-4328-9612-b43b931e2e3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcada6b6a-e5')#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.183 2 DEBUG nova.virt.libvirt.vif [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:58:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-635702397',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-635702397',id=23,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:58:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='99649891054745d8a5186a1ad099e5a7',ramdisk_id='',reservation_id='r-swuxrn8t',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',o
wner_project_name='tempest-AttachInterfacesV270Test-1504567615',owner_user_name='tempest-AttachInterfacesV270Test-1504567615-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:58:26Z,user_data=None,user_id='664abc01a11d458d9644488bf31e47f4',uuid=a071857d-db87-4931-95ad-f8c627f74160,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cada6b6a-e534-4cd7-8abf-e402059d6964", "address": "fa:16:3e:49:47:df", "network": {"id": "8e06007a-4993-4328-9612-b43b931e2e3b", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1415539227-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99649891054745d8a5186a1ad099e5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcada6b6a-e5", "ovs_interfaceid": "cada6b6a-e534-4cd7-8abf-e402059d6964", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.184 2 DEBUG nova.network.os_vif_util [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Converting VIF {"id": "cada6b6a-e534-4cd7-8abf-e402059d6964", "address": "fa:16:3e:49:47:df", "network": {"id": "8e06007a-4993-4328-9612-b43b931e2e3b", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1415539227-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99649891054745d8a5186a1ad099e5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcada6b6a-e5", "ovs_interfaceid": "cada6b6a-e534-4cd7-8abf-e402059d6964", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.184 2 DEBUG nova.network.os_vif_util [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:47:df,bridge_name='br-int',has_traffic_filtering=True,id=cada6b6a-e534-4cd7-8abf-e402059d6964,network=Network(8e06007a-4993-4328-9612-b43b931e2e3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcada6b6a-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.188 2 DEBUG nova.virt.libvirt.guest [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] attach device xml: <interface type="ethernet">
Oct 14 04:58:31 np0005486808 nova_compute[259627]:  <mac address="fa:16:3e:49:47:df"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:  <model type="virtio"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:  <driver name="vhost" rx_queue_size="512"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:  <mtu size="1442"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:  <target dev="tapcada6b6a-e5"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]: </interface>
Oct 14 04:58:31 np0005486808 nova_compute[259627]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct 14 04:58:31 np0005486808 kernel: tapcada6b6a-e5: entered promiscuous mode
Oct 14 04:58:31 np0005486808 NetworkManager[44885]: <info>  [1760432311.2028] manager: (tapcada6b6a-e5): new Tun device (/org/freedesktop/NetworkManager/Devices/83)
Oct 14 04:58:31 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:31Z|00150|binding|INFO|Claiming lport cada6b6a-e534-4cd7-8abf-e402059d6964 for this chassis.
Oct 14 04:58:31 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:31Z|00151|binding|INFO|cada6b6a-e534-4cd7-8abf-e402059d6964: Claiming fa:16:3e:49:47:df 10.100.0.10
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:31.216 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:47:df 10.100.0.10'], port_security=['fa:16:3e:49:47:df 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a071857d-db87-4931-95ad-f8c627f74160', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e06007a-4993-4328-9612-b43b931e2e3b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '99649891054745d8a5186a1ad099e5a7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cdd7ee35-5316-4ad1-b1f1-84c66df2ce6e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bb643099-5fd1-493f-8a92-ffa6e64eb2b1, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=cada6b6a-e534-4cd7-8abf-e402059d6964) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:58:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:31.220 162547 INFO neutron.agent.ovn.metadata.agent [-] Port cada6b6a-e534-4cd7-8abf-e402059d6964 in datapath 8e06007a-4993-4328-9612-b43b931e2e3b bound to our chassis#033[00m
Oct 14 04:58:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:31.223 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8e06007a-4993-4328-9612-b43b931e2e3b#033[00m
Oct 14 04:58:31 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:31Z|00152|binding|INFO|Setting lport cada6b6a-e534-4cd7-8abf-e402059d6964 ovn-installed in OVS
Oct 14 04:58:31 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:31Z|00153|binding|INFO|Setting lport cada6b6a-e534-4cd7-8abf-e402059d6964 up in Southbound
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:31.241 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6db17c87-6aab-4860-bcb0-9e0512b20abd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:31 np0005486808 systemd-udevd[291235]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 04:58:31 np0005486808 NetworkManager[44885]: <info>  [1760432311.2675] device (tapcada6b6a-e5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 04:58:31 np0005486808 NetworkManager[44885]: <info>  [1760432311.2687] device (tapcada6b6a-e5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 04:58:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:31.281 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e9329773-b714-4e65-8616-dc95b9097319]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:31.285 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[94b726a4-ff29-4b66-9376-47d84708075c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:31.311 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3bb2e752-1a4a-4b13-9149-bee2b1449517]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:31.328 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[436e94c1-02ec-449a-83a7-cbd7d52dee60]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e06007a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:64:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 608361, 'reachable_time': 37451, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291242, 'error': None, 'target': 'ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.331 2 DEBUG nova.virt.libvirt.driver [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.331 2 DEBUG nova.virt.libvirt.driver [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.331 2 DEBUG nova.virt.libvirt.driver [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] No VIF found with MAC fa:16:3e:58:04:b2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.332 2 DEBUG nova.virt.libvirt.driver [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] No VIF found with MAC fa:16:3e:49:47:df, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 04:58:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:31.345 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[44a98871-76f8-47c0-be59-08a5846c4e1a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8e06007a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 608373, 'tstamp': 608373}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291243, 'error': None, 'target': 'ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8e06007a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 608376, 'tstamp': 608376}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291243, 'error': None, 'target': 'ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:31.347 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e06007a-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:31.351 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8e06007a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:31.352 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:58:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:31.352 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8e06007a-40, col_values=(('external_ids', {'iface-id': 'd2ffe2ca-9cdc-4c2a-a690-8b35b09cf563'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:31.353 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.354 2 DEBUG nova.virt.libvirt.guest [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 04:58:31 np0005486808 nova_compute[259627]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:  <nova:name>tempest-AttachInterfacesV270Test-server-635702397</nova:name>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:  <nova:creationTime>2025-10-14 08:58:31</nova:creationTime>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:  <nova:flavor name="m1.nano">
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <nova:memory>128</nova:memory>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <nova:disk>1</nova:disk>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <nova:swap>0</nova:swap>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <nova:ephemeral>0</nova:ephemeral>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <nova:vcpus>1</nova:vcpus>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:  </nova:flavor>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:  <nova:owner>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <nova:user uuid="664abc01a11d458d9644488bf31e47f4">tempest-AttachInterfacesV270Test-1504567615-project-member</nova:user>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <nova:project uuid="99649891054745d8a5186a1ad099e5a7">tempest-AttachInterfacesV270Test-1504567615</nova:project>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:  </nova:owner>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:  <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:  <nova:ports>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <nova:port uuid="fd335735-b88a-42f7-911e-af4b2b9396fb">
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <nova:port uuid="cada6b6a-e534-4cd7-8abf-e402059d6964">
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:  </nova:ports>
Oct 14 04:58:31 np0005486808 nova_compute[259627]: </nova:instance>
Oct 14 04:58:31 np0005486808 nova_compute[259627]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct 14 04:58:31 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:58:31 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3940373503' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.379 2 DEBUG oslo_concurrency.lockutils [None req-bcab87ce-fc7a-4d51-a76f-ea90a3bf0091 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lock "interface-a071857d-db87-4931-95ad-f8c627f74160-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 4.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.407 2 DEBUG oslo_concurrency.processutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.436 2 DEBUG nova.storage.rbd_utils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] rbd image aefbf308-7f99-4a76-8d5e-54613f6bdf83_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.440 2 DEBUG oslo_concurrency.processutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:31 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1206: 305 pgs: 305 active+clean; 248 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 9.2 MiB/s wr, 302 op/s
Oct 14 04:58:31 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:58:31 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3942476262' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.842 2 DEBUG oslo_concurrency.processutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.845 2 DEBUG nova.virt.libvirt.vif [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:58:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-620013762',display_name='tempest-ServersTestMultiNic-server-620013762',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-620013762',id=25,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3566f35c659a45bd9b9bbddf6552ed43',ramdisk_id='',reservation_id='r-xsoyr0gg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-840673976',owner_user_name='tempest-ServersTestMultiNic-840673976-projec
t-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:58:23Z,user_data=None,user_id='5aacb60ad29c418c9161e71bb72da036',uuid=aefbf308-7f99-4a76-8d5e-54613f6bdf83,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "address": "fa:16:3e:3c:7f:93", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b4d441-0c", "ovs_interfaceid": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.846 2 DEBUG nova.network.os_vif_util [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converting VIF {"id": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "address": "fa:16:3e:3c:7f:93", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b4d441-0c", "ovs_interfaceid": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.847 2 DEBUG nova.network.os_vif_util [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3c:7f:93,bridge_name='br-int',has_traffic_filtering=True,id=57b4d441-0c29-4419-b20b-3b5c4223b7a8,network=Network(03753014-b87c-4672-9d66-fdc254813b6e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57b4d441-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.849 2 DEBUG nova.virt.libvirt.vif [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:58:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-620013762',display_name='tempest-ServersTestMultiNic-server-620013762',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-620013762',id=25,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3566f35c659a45bd9b9bbddf6552ed43',ramdisk_id='',reservation_id='r-xsoyr0gg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-840673976',owner_user_name='tempest-ServersTestMultiNic-840673976-projec
t-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:58:23Z,user_data=None,user_id='5aacb60ad29c418c9161e71bb72da036',uuid=aefbf308-7f99-4a76-8d5e-54613f6bdf83,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "address": "fa:16:3e:22:86:b1", "network": {"id": "2534100b-a4f5-4f68-9f75-a1af37008664", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1512787070", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.144", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685df8a6-7b", "ovs_interfaceid": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.849 2 DEBUG nova.network.os_vif_util [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converting VIF {"id": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "address": "fa:16:3e:22:86:b1", "network": {"id": "2534100b-a4f5-4f68-9f75-a1af37008664", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1512787070", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.144", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685df8a6-7b", "ovs_interfaceid": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.850 2 DEBUG nova.network.os_vif_util [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:86:b1,bridge_name='br-int',has_traffic_filtering=True,id=685df8a6-7b64-441e-9a56-4ede8db5faa9,network=Network(2534100b-a4f5-4f68-9f75-a1af37008664),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap685df8a6-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.852 2 DEBUG nova.virt.libvirt.vif [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:58:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-620013762',display_name='tempest-ServersTestMultiNic-server-620013762',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-620013762',id=25,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3566f35c659a45bd9b9bbddf6552ed43',ramdisk_id='',reservation_id='r-xsoyr0gg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-840673976',owner_user_name='tempest-ServersTestMultiNic-840673976-projec
t-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:58:23Z,user_data=None,user_id='5aacb60ad29c418c9161e71bb72da036',uuid=aefbf308-7f99-4a76-8d5e-54613f6bdf83,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9db6eef3-e4da-4c17-91ea-1c3124906f61", "address": "fa:16:3e:24:11:79", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9db6eef3-e4", "ovs_interfaceid": "9db6eef3-e4da-4c17-91ea-1c3124906f61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.852 2 DEBUG nova.network.os_vif_util [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converting VIF {"id": "9db6eef3-e4da-4c17-91ea-1c3124906f61", "address": "fa:16:3e:24:11:79", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9db6eef3-e4", "ovs_interfaceid": "9db6eef3-e4da-4c17-91ea-1c3124906f61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.853 2 DEBUG nova.network.os_vif_util [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:11:79,bridge_name='br-int',has_traffic_filtering=True,id=9db6eef3-e4da-4c17-91ea-1c3124906f61,network=Network(03753014-b87c-4672-9d66-fdc254813b6e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9db6eef3-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.855 2 DEBUG nova.objects.instance [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lazy-loading 'pci_devices' on Instance uuid aefbf308-7f99-4a76-8d5e-54613f6bdf83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.870 2 DEBUG nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] End _get_guest_xml xml=<domain type="kvm">
Oct 14 04:58:31 np0005486808 nova_compute[259627]:  <uuid>aefbf308-7f99-4a76-8d5e-54613f6bdf83</uuid>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:  <name>instance-00000019</name>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      <nova:name>tempest-ServersTestMultiNic-server-620013762</nova:name>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 08:58:30</nova:creationTime>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 04:58:31 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:        <nova:user uuid="5aacb60ad29c418c9161e71bb72da036">tempest-ServersTestMultiNic-840673976-project-member</nova:user>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:        <nova:project uuid="3566f35c659a45bd9b9bbddf6552ed43">tempest-ServersTestMultiNic-840673976</nova:project>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:        <nova:port uuid="57b4d441-0c29-4419-b20b-3b5c4223b7a8">
Oct 14 04:58:31 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.177" ipVersion="4"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:        <nova:port uuid="685df8a6-7b64-441e-9a56-4ede8db5faa9">
Oct 14 04:58:31 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.1.144" ipVersion="4"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:        <nova:port uuid="9db6eef3-e4da-4c17-91ea-1c3124906f61">
Oct 14 04:58:31 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.215" ipVersion="4"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <system>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      <entry name="serial">aefbf308-7f99-4a76-8d5e-54613f6bdf83</entry>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      <entry name="uuid">aefbf308-7f99-4a76-8d5e-54613f6bdf83</entry>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    </system>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:  <os>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:  </os>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:  <features>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:  </features>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:  </clock>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:  <devices>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/aefbf308-7f99-4a76-8d5e-54613f6bdf83_disk">
Oct 14 04:58:31 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:58:31 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/aefbf308-7f99-4a76-8d5e-54613f6bdf83_disk.config">
Oct 14 04:58:31 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:58:31 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:3c:7f:93"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      <target dev="tap57b4d441-0c"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    </interface>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:22:86:b1"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      <target dev="tap685df8a6-7b"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    </interface>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:24:11:79"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      <target dev="tap9db6eef3-e4"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    </interface>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/aefbf308-7f99-4a76-8d5e-54613f6bdf83/console.log" append="off"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    </serial>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <video>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    </video>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    </rng>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 04:58:31 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 04:58:31 np0005486808 nova_compute[259627]:  </devices>
Oct 14 04:58:31 np0005486808 nova_compute[259627]: </domain>
Oct 14 04:58:31 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.873 2 DEBUG nova.compute.manager [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Preparing to wait for external event network-vif-plugged-57b4d441-0c29-4419-b20b-3b5c4223b7a8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.873 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquiring lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.874 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.875 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.875 2 DEBUG nova.compute.manager [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Preparing to wait for external event network-vif-plugged-685df8a6-7b64-441e-9a56-4ede8db5faa9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.876 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquiring lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.876 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.877 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.877 2 DEBUG nova.compute.manager [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Preparing to wait for external event network-vif-plugged-9db6eef3-e4da-4c17-91ea-1c3124906f61 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.878 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquiring lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.878 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.879 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.880 2 DEBUG nova.virt.libvirt.vif [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:58:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-620013762',display_name='tempest-ServersTestMultiNic-server-620013762',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-620013762',id=25,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3566f35c659a45bd9b9bbddf6552ed43',ramdisk_id='',reservation_id='r-xsoyr0gg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-840673976',owner_user_name='tempest-ServersTestMultiNic-840673
976-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:58:23Z,user_data=None,user_id='5aacb60ad29c418c9161e71bb72da036',uuid=aefbf308-7f99-4a76-8d5e-54613f6bdf83,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "address": "fa:16:3e:3c:7f:93", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b4d441-0c", "ovs_interfaceid": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.881 2 DEBUG nova.network.os_vif_util [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converting VIF {"id": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "address": "fa:16:3e:3c:7f:93", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b4d441-0c", "ovs_interfaceid": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.882 2 DEBUG nova.network.os_vif_util [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3c:7f:93,bridge_name='br-int',has_traffic_filtering=True,id=57b4d441-0c29-4419-b20b-3b5c4223b7a8,network=Network(03753014-b87c-4672-9d66-fdc254813b6e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57b4d441-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.883 2 DEBUG os_vif [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:7f:93,bridge_name='br-int',has_traffic_filtering=True,id=57b4d441-0c29-4419-b20b-3b5c4223b7a8,network=Network(03753014-b87c-4672-9d66-fdc254813b6e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57b4d441-0c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.885 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.886 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.895 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap57b4d441-0c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.896 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap57b4d441-0c, col_values=(('external_ids', {'iface-id': '57b4d441-0c29-4419-b20b-3b5c4223b7a8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3c:7f:93', 'vm-uuid': 'aefbf308-7f99-4a76-8d5e-54613f6bdf83'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:31 np0005486808 NetworkManager[44885]: <info>  [1760432311.8999] manager: (tap57b4d441-0c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/84)
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.909 2 INFO os_vif [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:7f:93,bridge_name='br-int',has_traffic_filtering=True,id=57b4d441-0c29-4419-b20b-3b5c4223b7a8,network=Network(03753014-b87c-4672-9d66-fdc254813b6e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57b4d441-0c')#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.910 2 DEBUG nova.virt.libvirt.vif [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:58:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-620013762',display_name='tempest-ServersTestMultiNic-server-620013762',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-620013762',id=25,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3566f35c659a45bd9b9bbddf6552ed43',ramdisk_id='',reservation_id='r-xsoyr0gg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-840673976',owner_user_name='tempest-ServersTestMultiNic-840673
976-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:58:23Z,user_data=None,user_id='5aacb60ad29c418c9161e71bb72da036',uuid=aefbf308-7f99-4a76-8d5e-54613f6bdf83,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "address": "fa:16:3e:22:86:b1", "network": {"id": "2534100b-a4f5-4f68-9f75-a1af37008664", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1512787070", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.144", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685df8a6-7b", "ovs_interfaceid": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.911 2 DEBUG nova.network.os_vif_util [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converting VIF {"id": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "address": "fa:16:3e:22:86:b1", "network": {"id": "2534100b-a4f5-4f68-9f75-a1af37008664", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1512787070", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.144", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685df8a6-7b", "ovs_interfaceid": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.912 2 DEBUG nova.network.os_vif_util [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:86:b1,bridge_name='br-int',has_traffic_filtering=True,id=685df8a6-7b64-441e-9a56-4ede8db5faa9,network=Network(2534100b-a4f5-4f68-9f75-a1af37008664),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap685df8a6-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.913 2 DEBUG os_vif [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:86:b1,bridge_name='br-int',has_traffic_filtering=True,id=685df8a6-7b64-441e-9a56-4ede8db5faa9,network=Network(2534100b-a4f5-4f68-9f75-a1af37008664),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap685df8a6-7b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.915 2 DEBUG nova.network.neutron [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Updated VIF entry in instance network info cache for port 685df8a6-7b64-441e-9a56-4ede8db5faa9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.916 2 DEBUG nova.network.neutron [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Updating instance_info_cache with network_info: [{"id": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "address": "fa:16:3e:3c:7f:93", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b4d441-0c", "ovs_interfaceid": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "address": "fa:16:3e:22:86:b1", "network": {"id": "2534100b-a4f5-4f68-9f75-a1af37008664", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1512787070", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.144", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685df8a6-7b", "ovs_interfaceid": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9db6eef3-e4da-4c17-91ea-1c3124906f61", "address": "fa:16:3e:24:11:79", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9db6eef3-e4", "ovs_interfaceid": "9db6eef3-e4da-4c17-91ea-1c3124906f61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.919 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.920 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.924 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap685df8a6-7b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.925 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap685df8a6-7b, col_values=(('external_ids', {'iface-id': '685df8a6-7b64-441e-9a56-4ede8db5faa9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:22:86:b1', 'vm-uuid': 'aefbf308-7f99-4a76-8d5e-54613f6bdf83'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:31 np0005486808 NetworkManager[44885]: <info>  [1760432311.9292] manager: (tap685df8a6-7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/85)
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.936 2 DEBUG oslo_concurrency.lockutils [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-aefbf308-7f99-4a76-8d5e-54613f6bdf83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.937 2 DEBUG nova.compute.manager [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received event network-changed-9db6eef3-e4da-4c17-91ea-1c3124906f61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.938 2 DEBUG nova.compute.manager [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Refreshing instance network info cache due to event network-changed-9db6eef3-e4da-4c17-91ea-1c3124906f61. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.938 2 DEBUG oslo_concurrency.lockutils [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-aefbf308-7f99-4a76-8d5e-54613f6bdf83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.939 2 DEBUG oslo_concurrency.lockutils [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-aefbf308-7f99-4a76-8d5e-54613f6bdf83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.940 2 DEBUG nova.network.neutron [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Refreshing network info cache for port 9db6eef3-e4da-4c17-91ea-1c3124906f61 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.945 2 INFO os_vif [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:86:b1,bridge_name='br-int',has_traffic_filtering=True,id=685df8a6-7b64-441e-9a56-4ede8db5faa9,network=Network(2534100b-a4f5-4f68-9f75-a1af37008664),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap685df8a6-7b')#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.947 2 DEBUG nova.virt.libvirt.vif [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:58:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-620013762',display_name='tempest-ServersTestMultiNic-server-620013762',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-620013762',id=25,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3566f35c659a45bd9b9bbddf6552ed43',ramdisk_id='',reservation_id='r-xsoyr0gg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-840673976',owner_user_name='tempest-ServersTestMultiNic-840673
976-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:58:23Z,user_data=None,user_id='5aacb60ad29c418c9161e71bb72da036',uuid=aefbf308-7f99-4a76-8d5e-54613f6bdf83,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9db6eef3-e4da-4c17-91ea-1c3124906f61", "address": "fa:16:3e:24:11:79", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9db6eef3-e4", "ovs_interfaceid": "9db6eef3-e4da-4c17-91ea-1c3124906f61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.947 2 DEBUG nova.network.os_vif_util [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converting VIF {"id": "9db6eef3-e4da-4c17-91ea-1c3124906f61", "address": "fa:16:3e:24:11:79", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9db6eef3-e4", "ovs_interfaceid": "9db6eef3-e4da-4c17-91ea-1c3124906f61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.949 2 DEBUG nova.network.os_vif_util [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:11:79,bridge_name='br-int',has_traffic_filtering=True,id=9db6eef3-e4da-4c17-91ea-1c3124906f61,network=Network(03753014-b87c-4672-9d66-fdc254813b6e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9db6eef3-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.950 2 DEBUG os_vif [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:11:79,bridge_name='br-int',has_traffic_filtering=True,id=9db6eef3-e4da-4c17-91ea-1c3124906f61,network=Network(03753014-b87c-4672-9d66-fdc254813b6e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9db6eef3-e4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.951 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.952 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.955 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9db6eef3-e4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.955 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9db6eef3-e4, col_values=(('external_ids', {'iface-id': '9db6eef3-e4da-4c17-91ea-1c3124906f61', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:24:11:79', 'vm-uuid': 'aefbf308-7f99-4a76-8d5e-54613f6bdf83'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:31 np0005486808 NetworkManager[44885]: <info>  [1760432311.9592] manager: (tap9db6eef3-e4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/86)
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:31 np0005486808 nova_compute[259627]: 2025-10-14 08:58:31.973 2 INFO os_vif [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:11:79,bridge_name='br-int',has_traffic_filtering=True,id=9db6eef3-e4da-4c17-91ea-1c3124906f61,network=Network(03753014-b87c-4672-9d66-fdc254813b6e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9db6eef3-e4')#033[00m
Oct 14 04:58:32 np0005486808 nova_compute[259627]: 2025-10-14 08:58:32.033 2 DEBUG nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:58:32 np0005486808 nova_compute[259627]: 2025-10-14 08:58:32.034 2 DEBUG nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:58:32 np0005486808 nova_compute[259627]: 2025-10-14 08:58:32.035 2 DEBUG nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] No VIF found with MAC fa:16:3e:3c:7f:93, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 04:58:32 np0005486808 nova_compute[259627]: 2025-10-14 08:58:32.035 2 DEBUG nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] No VIF found with MAC fa:16:3e:22:86:b1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 04:58:32 np0005486808 nova_compute[259627]: 2025-10-14 08:58:32.036 2 DEBUG nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] No VIF found with MAC fa:16:3e:24:11:79, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 04:58:32 np0005486808 nova_compute[259627]: 2025-10-14 08:58:32.037 2 INFO nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Using config drive#033[00m
Oct 14 04:58:32 np0005486808 nova_compute[259627]: 2025-10-14 08:58:32.069 2 DEBUG nova.storage.rbd_utils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] rbd image aefbf308-7f99-4a76-8d5e-54613f6bdf83_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:32 np0005486808 nova_compute[259627]: 2025-10-14 08:58:32.110 2 INFO nova.compute.manager [None req-ff69c675-271b-44e6-9964-e902424590ac 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Pausing#033[00m
Oct 14 04:58:32 np0005486808 nova_compute[259627]: 2025-10-14 08:58:32.112 2 DEBUG nova.objects.instance [None req-ff69c675-271b-44e6-9964-e902424590ac 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lazy-loading 'flavor' on Instance uuid 2826d9ce-d739-49a1-abfa-80cee62173fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:58:32 np0005486808 nova_compute[259627]: 2025-10-14 08:58:32.135 2 DEBUG nova.compute.manager [None req-ff69c675-271b-44e6-9964-e902424590ac 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:58:32 np0005486808 nova_compute[259627]: 2025-10-14 08:58:32.136 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432312.135138, 2826d9ce-d739-49a1-abfa-80cee62173fb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:58:32 np0005486808 nova_compute[259627]: 2025-10-14 08:58:32.136 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] VM Paused (Lifecycle Event)#033[00m
Oct 14 04:58:32 np0005486808 nova_compute[259627]: 2025-10-14 08:58:32.160 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:58:32 np0005486808 nova_compute[259627]: 2025-10-14 08:58:32.164 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:58:32 np0005486808 nova_compute[259627]: 2025-10-14 08:58:32.197 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Oct 14 04:58:32 np0005486808 nova_compute[259627]: 2025-10-14 08:58:32.281 2 DEBUG nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct 14 04:58:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 04:58:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:58:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:58:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_08:58:32
Oct 14 04:58:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 04:58:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 04:58:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['volumes', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', 'backups', '.rgw.root', 'images']
Oct 14 04:58:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 04:58:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:58:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:58:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:58:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:58:32 np0005486808 nova_compute[259627]: 2025-10-14 08:58:32.799 2 DEBUG nova.network.neutron [req-893ab63d-eeb1-46e3-a3ef-fef3c67d672b req-0ad289ae-f221-4b94-a274-d1d3a57830fd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Updated VIF entry in instance network info cache for port cada6b6a-e534-4cd7-8abf-e402059d6964. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 04:58:32 np0005486808 nova_compute[259627]: 2025-10-14 08:58:32.800 2 DEBUG nova.network.neutron [req-893ab63d-eeb1-46e3-a3ef-fef3c67d672b req-0ad289ae-f221-4b94-a274-d1d3a57830fd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Updating instance_info_cache with network_info: [{"id": "fd335735-b88a-42f7-911e-af4b2b9396fb", "address": "fa:16:3e:58:04:b2", "network": {"id": "8e06007a-4993-4328-9612-b43b931e2e3b", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1415539227-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99649891054745d8a5186a1ad099e5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd335735-b8", "ovs_interfaceid": "fd335735-b88a-42f7-911e-af4b2b9396fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cada6b6a-e534-4cd7-8abf-e402059d6964", "address": "fa:16:3e:49:47:df", "network": {"id": "8e06007a-4993-4328-9612-b43b931e2e3b", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1415539227-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99649891054745d8a5186a1ad099e5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcada6b6a-e5", "ovs_interfaceid": "cada6b6a-e534-4cd7-8abf-e402059d6964", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:58:32 np0005486808 nova_compute[259627]: 2025-10-14 08:58:32.806 2 INFO nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Creating config drive at /var/lib/nova/instances/aefbf308-7f99-4a76-8d5e-54613f6bdf83/disk.config#033[00m
Oct 14 04:58:32 np0005486808 nova_compute[259627]: 2025-10-14 08:58:32.815 2 DEBUG oslo_concurrency.processutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aefbf308-7f99-4a76-8d5e-54613f6bdf83/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph7cp_cpl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:32 np0005486808 nova_compute[259627]: 2025-10-14 08:58:32.853 2 DEBUG oslo_concurrency.lockutils [req-893ab63d-eeb1-46e3-a3ef-fef3c67d672b req-0ad289ae-f221-4b94-a274-d1d3a57830fd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-a071857d-db87-4931-95ad-f8c627f74160" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:58:32 np0005486808 nova_compute[259627]: 2025-10-14 08:58:32.889 2 DEBUG nova.compute.manager [req-4c640756-e477-4e7c-9b16-2ad416718617 req-ab99eec8-f678-4528-b112-9209ad38c4eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Received event network-vif-plugged-cada6b6a-e534-4cd7-8abf-e402059d6964 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:58:32 np0005486808 nova_compute[259627]: 2025-10-14 08:58:32.889 2 DEBUG oslo_concurrency.lockutils [req-4c640756-e477-4e7c-9b16-2ad416718617 req-ab99eec8-f678-4528-b112-9209ad38c4eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "a071857d-db87-4931-95ad-f8c627f74160-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:32 np0005486808 nova_compute[259627]: 2025-10-14 08:58:32.890 2 DEBUG oslo_concurrency.lockutils [req-4c640756-e477-4e7c-9b16-2ad416718617 req-ab99eec8-f678-4528-b112-9209ad38c4eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:32 np0005486808 nova_compute[259627]: 2025-10-14 08:58:32.890 2 DEBUG oslo_concurrency.lockutils [req-4c640756-e477-4e7c-9b16-2ad416718617 req-ab99eec8-f678-4528-b112-9209ad38c4eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:32 np0005486808 nova_compute[259627]: 2025-10-14 08:58:32.891 2 DEBUG nova.compute.manager [req-4c640756-e477-4e7c-9b16-2ad416718617 req-ab99eec8-f678-4528-b112-9209ad38c4eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] No waiting events found dispatching network-vif-plugged-cada6b6a-e534-4cd7-8abf-e402059d6964 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:58:32 np0005486808 nova_compute[259627]: 2025-10-14 08:58:32.891 2 WARNING nova.compute.manager [req-4c640756-e477-4e7c-9b16-2ad416718617 req-ab99eec8-f678-4528-b112-9209ad38c4eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Received unexpected event network-vif-plugged-cada6b6a-e534-4cd7-8abf-e402059d6964 for instance with vm_state active and task_state None.#033[00m
Oct 14 04:58:32 np0005486808 nova_compute[259627]: 2025-10-14 08:58:32.892 2 DEBUG nova.compute.manager [req-4c640756-e477-4e7c-9b16-2ad416718617 req-ab99eec8-f678-4528-b112-9209ad38c4eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Received event network-vif-plugged-cada6b6a-e534-4cd7-8abf-e402059d6964 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:58:32 np0005486808 nova_compute[259627]: 2025-10-14 08:58:32.892 2 DEBUG oslo_concurrency.lockutils [req-4c640756-e477-4e7c-9b16-2ad416718617 req-ab99eec8-f678-4528-b112-9209ad38c4eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "a071857d-db87-4931-95ad-f8c627f74160-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:32 np0005486808 nova_compute[259627]: 2025-10-14 08:58:32.893 2 DEBUG oslo_concurrency.lockutils [req-4c640756-e477-4e7c-9b16-2ad416718617 req-ab99eec8-f678-4528-b112-9209ad38c4eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:32 np0005486808 nova_compute[259627]: 2025-10-14 08:58:32.893 2 DEBUG oslo_concurrency.lockutils [req-4c640756-e477-4e7c-9b16-2ad416718617 req-ab99eec8-f678-4528-b112-9209ad38c4eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:32 np0005486808 nova_compute[259627]: 2025-10-14 08:58:32.893 2 DEBUG nova.compute.manager [req-4c640756-e477-4e7c-9b16-2ad416718617 req-ab99eec8-f678-4528-b112-9209ad38c4eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] No waiting events found dispatching network-vif-plugged-cada6b6a-e534-4cd7-8abf-e402059d6964 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:58:32 np0005486808 nova_compute[259627]: 2025-10-14 08:58:32.894 2 WARNING nova.compute.manager [req-4c640756-e477-4e7c-9b16-2ad416718617 req-ab99eec8-f678-4528-b112-9209ad38c4eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Received unexpected event network-vif-plugged-cada6b6a-e534-4cd7-8abf-e402059d6964 for instance with vm_state active and task_state None.#033[00m
Oct 14 04:58:32 np0005486808 nova_compute[259627]: 2025-10-14 08:58:32.950 2 DEBUG oslo_concurrency.processutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aefbf308-7f99-4a76-8d5e-54613f6bdf83/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph7cp_cpl" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 04:58:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:58:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 04:58:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:58:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:58:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:58:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:58:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:58:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:58:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:58:32 np0005486808 nova_compute[259627]: 2025-10-14 08:58:32.991 2 DEBUG nova.storage.rbd_utils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] rbd image aefbf308-7f99-4a76-8d5e-54613f6bdf83_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:32 np0005486808 nova_compute[259627]: 2025-10-14 08:58:32.994 2 DEBUG oslo_concurrency.processutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aefbf308-7f99-4a76-8d5e-54613f6bdf83/disk.config aefbf308-7f99-4a76-8d5e-54613f6bdf83_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.144 2 DEBUG oslo_concurrency.processutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aefbf308-7f99-4a76-8d5e-54613f6bdf83/disk.config aefbf308-7f99-4a76-8d5e-54613f6bdf83_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.145 2 INFO nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Deleting local config drive /var/lib/nova/instances/aefbf308-7f99-4a76-8d5e-54613f6bdf83/disk.config because it was imported into RBD.#033[00m
Oct 14 04:58:33 np0005486808 NetworkManager[44885]: <info>  [1760432313.2028] manager: (tap57b4d441-0c): new Tun device (/org/freedesktop/NetworkManager/Devices/87)
Oct 14 04:58:33 np0005486808 systemd-udevd[291239]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 04:58:33 np0005486808 kernel: tap57b4d441-0c: entered promiscuous mode
Oct 14 04:58:33 np0005486808 kernel: tap685df8a6-7b: entered promiscuous mode
Oct 14 04:58:33 np0005486808 NetworkManager[44885]: <info>  [1760432313.2218] manager: (tap685df8a6-7b): new Tun device (/org/freedesktop/NetworkManager/Devices/88)
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:33 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:33Z|00154|binding|INFO|Claiming lport 685df8a6-7b64-441e-9a56-4ede8db5faa9 for this chassis.
Oct 14 04:58:33 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:33Z|00155|binding|INFO|685df8a6-7b64-441e-9a56-4ede8db5faa9: Claiming fa:16:3e:22:86:b1 10.100.1.144
Oct 14 04:58:33 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:33Z|00156|binding|INFO|Claiming lport 57b4d441-0c29-4419-b20b-3b5c4223b7a8 for this chassis.
Oct 14 04:58:33 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:33Z|00157|binding|INFO|57b4d441-0c29-4419-b20b-3b5c4223b7a8: Claiming fa:16:3e:3c:7f:93 10.100.0.177
Oct 14 04:58:33 np0005486808 NetworkManager[44885]: <info>  [1760432313.2388] device (tap57b4d441-0c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 04:58:33 np0005486808 NetworkManager[44885]: <info>  [1760432313.2421] device (tap57b4d441-0c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 04:58:33 np0005486808 NetworkManager[44885]: <info>  [1760432313.2498] manager: (tap9db6eef3-e4): new Tun device (/org/freedesktop/NetworkManager/Devices/89)
Oct 14 04:58:33 np0005486808 NetworkManager[44885]: <info>  [1760432313.2531] device (tap685df8a6-7b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 04:58:33 np0005486808 NetworkManager[44885]: <info>  [1760432313.2572] device (tap685df8a6-7b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.251 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3c:7f:93 10.100.0.177'], port_security=['fa:16:3e:3c:7f:93 10.100.0.177'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.177/24', 'neutron:device_id': 'aefbf308-7f99-4a76-8d5e-54613f6bdf83', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03753014-b87c-4672-9d66-fdc254813b6e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3566f35c659a45bd9b9bbddf6552ed43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c2a10e31-1c91-4c44-89d5-dfe41c81ad3c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=494b06c3-b496-4326-9e04-09e435735a40, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=57b4d441-0c29-4419-b20b-3b5c4223b7a8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.256 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:86:b1 10.100.1.144'], port_security=['fa:16:3e:22:86:b1 10.100.1.144'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.144/24', 'neutron:device_id': 'aefbf308-7f99-4a76-8d5e-54613f6bdf83', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2534100b-a4f5-4f68-9f75-a1af37008664', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3566f35c659a45bd9b9bbddf6552ed43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c2a10e31-1c91-4c44-89d5-dfe41c81ad3c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b4c418fd-a28b-433f-be67-07c285fde4ec, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=685df8a6-7b64-441e-9a56-4ede8db5faa9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.258 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 57b4d441-0c29-4419-b20b-3b5c4223b7a8 in datapath 03753014-b87c-4672-9d66-fdc254813b6e bound to our chassis#033[00m
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.261 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 03753014-b87c-4672-9d66-fdc254813b6e#033[00m
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.276 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[13a9ed18-26f2-4a19-a475-9b036c210595]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.277 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap03753014-b1 in ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.281 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap03753014-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.281 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[abe08a3d-c945-4a7e-a258-84072b6f8e27]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.282 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2cf32c00-fd34-41c3-b4d5-a5a6ff7d0910]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:33 np0005486808 systemd-machined[214636]: New machine qemu-26-instance-00000019.
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.305 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[048e3be3-4ffb-4b84-83de-4a09efb18ff3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:33 np0005486808 systemd[1]: Started Virtual Machine qemu-26-instance-00000019.
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.327 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fa02c0fe-115d-454c-9409-3a815ee19dff]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:33 np0005486808 kernel: tap9db6eef3-e4: entered promiscuous mode
Oct 14 04:58:33 np0005486808 NetworkManager[44885]: <info>  [1760432313.3513] device (tap9db6eef3-e4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 04:58:33 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:33Z|00158|binding|INFO|Claiming lport 9db6eef3-e4da-4c17-91ea-1c3124906f61 for this chassis.
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:33 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:33Z|00159|binding|INFO|9db6eef3-e4da-4c17-91ea-1c3124906f61: Claiming fa:16:3e:24:11:79 10.100.0.215
Oct 14 04:58:33 np0005486808 NetworkManager[44885]: <info>  [1760432313.3544] device (tap9db6eef3-e4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.360 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:11:79 10.100.0.215'], port_security=['fa:16:3e:24:11:79 10.100.0.215'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.215/24', 'neutron:device_id': 'aefbf308-7f99-4a76-8d5e-54613f6bdf83', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03753014-b87c-4672-9d66-fdc254813b6e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3566f35c659a45bd9b9bbddf6552ed43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c2a10e31-1c91-4c44-89d5-dfe41c81ad3c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=494b06c3-b496-4326-9e04-09e435735a40, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=9db6eef3-e4da-4c17-91ea-1c3124906f61) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:58:33 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:33Z|00160|binding|INFO|Setting lport 685df8a6-7b64-441e-9a56-4ede8db5faa9 ovn-installed in OVS
Oct 14 04:58:33 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:33Z|00161|binding|INFO|Setting lport 685df8a6-7b64-441e-9a56-4ede8db5faa9 up in Southbound
Oct 14 04:58:33 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:33Z|00162|binding|INFO|Setting lport 57b4d441-0c29-4419-b20b-3b5c4223b7a8 ovn-installed in OVS
Oct 14 04:58:33 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:33Z|00163|binding|INFO|Setting lport 57b4d441-0c29-4419-b20b-3b5c4223b7a8 up in Southbound
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.367 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d173dec2-c79a-45d2-b848-3cd1b1dfb743]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:33 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:33Z|00164|binding|INFO|Setting lport 9db6eef3-e4da-4c17-91ea-1c3124906f61 ovn-installed in OVS
Oct 14 04:58:33 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:33Z|00165|binding|INFO|Setting lport 9db6eef3-e4da-4c17-91ea-1c3124906f61 up in Southbound
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.379 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9464890e-a0cb-4608-bb8d-c6382e583474]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:33 np0005486808 NetworkManager[44885]: <info>  [1760432313.3806] manager: (tap03753014-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/90)
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.417 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5239c1b5-17f6-45b4-ab8a-013b2c405f8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.421 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[74426f26-e923-4962-978b-06e011500bdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:33 np0005486808 NetworkManager[44885]: <info>  [1760432313.4454] device (tap03753014-b0): carrier: link connected
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.452 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[0a192c52-5646-4f85-a0b8-3bba2c206bf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.473 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[35d0b3bd-8cc2-4720-a03c-547a6fc966bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap03753014-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:7a:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609220, 'reachable_time': 43338, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291404, 'error': None, 'target': 'ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.490 2 DEBUG nova.network.neutron [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Updated VIF entry in instance network info cache for port 9db6eef3-e4da-4c17-91ea-1c3124906f61. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.491 2 DEBUG nova.network.neutron [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Updating instance_info_cache with network_info: [{"id": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "address": "fa:16:3e:3c:7f:93", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b4d441-0c", "ovs_interfaceid": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "address": "fa:16:3e:22:86:b1", "network": {"id": "2534100b-a4f5-4f68-9f75-a1af37008664", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1512787070", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.144", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685df8a6-7b", "ovs_interfaceid": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9db6eef3-e4da-4c17-91ea-1c3124906f61", "address": "fa:16:3e:24:11:79", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9db6eef3-e4", "ovs_interfaceid": "9db6eef3-e4da-4c17-91ea-1c3124906f61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.507 2 DEBUG oslo_concurrency.lockutils [req-f59cd21f-0de4-490c-93e2-5d80a856a423 req-cc1ac6b7-6b11-4dd9-9089-f20e2a02941e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-aefbf308-7f99-4a76-8d5e-54613f6bdf83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.515 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2a7b4128-b508-4ce5-b159-f5cec453fb66]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb7:7a55'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 609220, 'tstamp': 609220}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291405, 'error': None, 'target': 'ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:33 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1207: 305 pgs: 305 active+clean; 248 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 5.6 MiB/s wr, 204 op/s
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.542 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[809613e6-e83a-48fe-8333-8421cde75b07]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap03753014-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:7a:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609220, 'reachable_time': 43338, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 291406, 'error': None, 'target': 'ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.589 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[380663b0-8171-4da9-821c-459c6f911710]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.638 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2275d122-74fd-42cd-a679-7e5e748d1c09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.639 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03753014-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.639 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.639 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap03753014-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.649 2 DEBUG nova.compute.manager [req-440d7507-9040-40b9-bd4d-f3e90af0adc6 req-28d66ef8-87b2-4200-b55c-6bc8a6132c40 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received event network-vif-plugged-685df8a6-7b64-441e-9a56-4ede8db5faa9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.650 2 DEBUG oslo_concurrency.lockutils [req-440d7507-9040-40b9-bd4d-f3e90af0adc6 req-28d66ef8-87b2-4200-b55c-6bc8a6132c40 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.650 2 DEBUG oslo_concurrency.lockutils [req-440d7507-9040-40b9-bd4d-f3e90af0adc6 req-28d66ef8-87b2-4200-b55c-6bc8a6132c40 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.650 2 DEBUG oslo_concurrency.lockutils [req-440d7507-9040-40b9-bd4d-f3e90af0adc6 req-28d66ef8-87b2-4200-b55c-6bc8a6132c40 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.650 2 DEBUG nova.compute.manager [req-440d7507-9040-40b9-bd4d-f3e90af0adc6 req-28d66ef8-87b2-4200-b55c-6bc8a6132c40 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Processing event network-vif-plugged-685df8a6-7b64-441e-9a56-4ede8db5faa9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.691 2 DEBUG oslo_concurrency.lockutils [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Acquiring lock "a071857d-db87-4931-95ad-f8c627f74160" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.691 2 DEBUG oslo_concurrency.lockutils [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.691 2 DEBUG oslo_concurrency.lockutils [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Acquiring lock "a071857d-db87-4931-95ad-f8c627f74160-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.692 2 DEBUG oslo_concurrency.lockutils [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.692 2 DEBUG oslo_concurrency.lockutils [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.693 2 INFO nova.compute.manager [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Terminating instance#033[00m
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.694 2 DEBUG nova.compute.manager [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:33 np0005486808 NetworkManager[44885]: <info>  [1760432313.6962] manager: (tap03753014-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/91)
Oct 14 04:58:33 np0005486808 kernel: tap03753014-b0: entered promiscuous mode
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.698 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap03753014-b0, col_values=(('external_ids', {'iface-id': '70e65942-3441-4aa4-b413-2595e7186410'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:33 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:33Z|00166|binding|INFO|Releasing lport 70e65942-3441-4aa4-b413-2595e7186410 from this chassis (sb_readonly=0)
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.716 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/03753014-b87c-4672-9d66-fdc254813b6e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/03753014-b87c-4672-9d66-fdc254813b6e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.716 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f2dacca3-0c18-402f-a681-6d69598db594]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.717 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-03753014-b87c-4672-9d66-fdc254813b6e
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/03753014-b87c-4672-9d66-fdc254813b6e.pid.haproxy
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 03753014-b87c-4672-9d66-fdc254813b6e
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.719 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e', 'env', 'PROCESS_TAG=haproxy-03753014-b87c-4672-9d66-fdc254813b6e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/03753014-b87c-4672-9d66-fdc254813b6e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 04:58:33 np0005486808 kernel: tapfd335735-b8 (unregistering): left promiscuous mode
Oct 14 04:58:33 np0005486808 NetworkManager[44885]: <info>  [1760432313.7335] device (tapfd335735-b8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.736 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:33 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:33Z|00167|binding|INFO|Releasing lport fd335735-b88a-42f7-911e-af4b2b9396fb from this chassis (sb_readonly=0)
Oct 14 04:58:33 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:33Z|00168|binding|INFO|Setting lport fd335735-b88a-42f7-911e-af4b2b9396fb down in Southbound
Oct 14 04:58:33 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:33Z|00169|binding|INFO|Removing iface tapfd335735-b8 ovn-installed in OVS
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.754 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:04:b2 10.100.0.3'], port_security=['fa:16:3e:58:04:b2 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'a071857d-db87-4931-95ad-f8c627f74160', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e06007a-4993-4328-9612-b43b931e2e3b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '99649891054745d8a5186a1ad099e5a7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cdd7ee35-5316-4ad1-b1f1-84c66df2ce6e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bb643099-5fd1-493f-8a92-ffa6e64eb2b1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=fd335735-b88a-42f7-911e-af4b2b9396fb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:33 np0005486808 kernel: tapcada6b6a-e5 (unregistering): left promiscuous mode
Oct 14 04:58:33 np0005486808 NetworkManager[44885]: <info>  [1760432313.7883] device (tapcada6b6a-e5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:33 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:33Z|00170|binding|INFO|Releasing lport cada6b6a-e534-4cd7-8abf-e402059d6964 from this chassis (sb_readonly=0)
Oct 14 04:58:33 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:33Z|00171|binding|INFO|Setting lport cada6b6a-e534-4cd7-8abf-e402059d6964 down in Southbound
Oct 14 04:58:33 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:33Z|00172|binding|INFO|Removing iface tapcada6b6a-e5 ovn-installed in OVS
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:33.809 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:47:df 10.100.0.10'], port_security=['fa:16:3e:49:47:df 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a071857d-db87-4931-95ad-f8c627f74160', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e06007a-4993-4328-9612-b43b931e2e3b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '99649891054745d8a5186a1ad099e5a7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cdd7ee35-5316-4ad1-b1f1-84c66df2ce6e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bb643099-5fd1-493f-8a92-ffa6e64eb2b1, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=cada6b6a-e534-4cd7-8abf-e402059d6964) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:33 np0005486808 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000017.scope: Deactivated successfully.
Oct 14 04:58:33 np0005486808 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000017.scope: Consumed 8.893s CPU time.
Oct 14 04:58:33 np0005486808 systemd-machined[214636]: Machine qemu-24-instance-00000017 terminated.
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.952 2 INFO nova.virt.libvirt.driver [-] [instance: a071857d-db87-4931-95ad-f8c627f74160] Instance destroyed successfully.#033[00m
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.953 2 DEBUG nova.objects.instance [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lazy-loading 'resources' on Instance uuid a071857d-db87-4931-95ad-f8c627f74160 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.976 2 DEBUG nova.virt.libvirt.vif [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:58:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-635702397',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-635702397',id=23,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:58:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='99649891054745d8a5186a1ad099e5a7',ramdisk_id='',reservation_id='r-swuxrn8t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-1504567615',owner_user_name='tempest-AttachInterfacesV270Test-1504567615-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:58:26Z,user_data=None,user_id='664abc01a11d458d9644488bf31e47f4',uuid=a071857d-db87-4931-95ad-f8c627f74160,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fd335735-b88a-42f7-911e-af4b2b9396fb", "address": "fa:16:3e:58:04:b2", "network": {"id": "8e06007a-4993-4328-9612-b43b931e2e3b", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1415539227-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99649891054745d8a5186a1ad099e5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd335735-b8", "ovs_interfaceid": "fd335735-b88a-42f7-911e-af4b2b9396fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.976 2 DEBUG nova.network.os_vif_util [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Converting VIF {"id": "fd335735-b88a-42f7-911e-af4b2b9396fb", "address": "fa:16:3e:58:04:b2", "network": {"id": "8e06007a-4993-4328-9612-b43b931e2e3b", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1415539227-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99649891054745d8a5186a1ad099e5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd335735-b8", "ovs_interfaceid": "fd335735-b88a-42f7-911e-af4b2b9396fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.977 2 DEBUG nova.network.os_vif_util [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:58:04:b2,bridge_name='br-int',has_traffic_filtering=True,id=fd335735-b88a-42f7-911e-af4b2b9396fb,network=Network(8e06007a-4993-4328-9612-b43b931e2e3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd335735-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.978 2 DEBUG os_vif [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:58:04:b2,bridge_name='br-int',has_traffic_filtering=True,id=fd335735-b88a-42f7-911e-af4b2b9396fb,network=Network(8e06007a-4993-4328-9612-b43b931e2e3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd335735-b8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.980 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd335735-b8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.989 2 INFO os_vif [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:58:04:b2,bridge_name='br-int',has_traffic_filtering=True,id=fd335735-b88a-42f7-911e-af4b2b9396fb,network=Network(8e06007a-4993-4328-9612-b43b931e2e3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd335735-b8')#033[00m
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.990 2 DEBUG nova.virt.libvirt.vif [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:58:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-635702397',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-635702397',id=23,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:58:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='99649891054745d8a5186a1ad099e5a7',ramdisk_id='',reservation_id='r-swuxrn8t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-1504567615',owner_user_name='tempest-AttachInterfacesV270Test-1504567615-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:58:26Z,user_data=None,user_id='664abc01a11d458d9644488bf31e47f4',uuid=a071857d-db87-4931-95ad-f8c627f74160,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cada6b6a-e534-4cd7-8abf-e402059d6964", "address": "fa:16:3e:49:47:df", "network": {"id": "8e06007a-4993-4328-9612-b43b931e2e3b", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1415539227-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99649891054745d8a5186a1ad099e5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcada6b6a-e5", "ovs_interfaceid": "cada6b6a-e534-4cd7-8abf-e402059d6964", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.991 2 DEBUG nova.network.os_vif_util [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Converting VIF {"id": "cada6b6a-e534-4cd7-8abf-e402059d6964", "address": "fa:16:3e:49:47:df", "network": {"id": "8e06007a-4993-4328-9612-b43b931e2e3b", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1415539227-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99649891054745d8a5186a1ad099e5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcada6b6a-e5", "ovs_interfaceid": "cada6b6a-e534-4cd7-8abf-e402059d6964", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.992 2 DEBUG nova.network.os_vif_util [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:47:df,bridge_name='br-int',has_traffic_filtering=True,id=cada6b6a-e534-4cd7-8abf-e402059d6964,network=Network(8e06007a-4993-4328-9612-b43b931e2e3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcada6b6a-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.992 2 DEBUG os_vif [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:47:df,bridge_name='br-int',has_traffic_filtering=True,id=cada6b6a-e534-4cd7-8abf-e402059d6964,network=Network(8e06007a-4993-4328-9612-b43b931e2e3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcada6b6a-e5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.994 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcada6b6a-e5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:33 np0005486808 nova_compute[259627]: 2025-10-14 08:58:33.998 2 INFO os_vif [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:47:df,bridge_name='br-int',has_traffic_filtering=True,id=cada6b6a-e534-4cd7-8abf-e402059d6964,network=Network(8e06007a-4993-4328-9612-b43b931e2e3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcada6b6a-e5')#033[00m
Oct 14 04:58:34 np0005486808 podman[291529]: 2025-10-14 08:58:34.137081324 +0000 UTC m=+0.064609197 container create 67a451c8f393c839e08ab7ffe9478e8a341478f6b97def760f88579d6647c2ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 04:58:34 np0005486808 systemd[1]: Started libpod-conmon-67a451c8f393c839e08ab7ffe9478e8a341478f6b97def760f88579d6647c2ca.scope.
Oct 14 04:58:34 np0005486808 podman[291529]: 2025-10-14 08:58:34.108426861 +0000 UTC m=+0.035954754 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 04:58:34 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:58:34 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82e1eac5cfa0dfaeaa47633dd3bc85d51f3de862c522038183f2763ceaaa4205/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 04:58:34 np0005486808 podman[291529]: 2025-10-14 08:58:34.22611385 +0000 UTC m=+0.153641723 container init 67a451c8f393c839e08ab7ffe9478e8a341478f6b97def760f88579d6647c2ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 14 04:58:34 np0005486808 podman[291529]: 2025-10-14 08:58:34.233398079 +0000 UTC m=+0.160925952 container start 67a451c8f393c839e08ab7ffe9478e8a341478f6b97def760f88579d6647c2ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e, tcib_managed=true, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 04:58:34 np0005486808 neutron-haproxy-ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e[291545]: [NOTICE]   (291551) : New worker (291553) forked
Oct 14 04:58:34 np0005486808 neutron-haproxy-ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e[291545]: [NOTICE]   (291551) : Loading success.
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.291 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 685df8a6-7b64-441e-9a56-4ede8db5faa9 in datapath 2534100b-a4f5-4f68-9f75-a1af37008664 unbound from our chassis#033[00m
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.294 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2534100b-a4f5-4f68-9f75-a1af37008664#033[00m
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.309 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[17ccbed9-ca71-4c7e-96a8-ce0ecdd99342]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.310 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2534100b-a1 in ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.313 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2534100b-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.313 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[11930c83-9e1a-430c-bc97-a670c8c6e16b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.314 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8a69237e-ac9d-4898-ac05-9892268fb581]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.327 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[b28a135c-5b09-4aed-992c-df9f5b834fb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.345 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2161c94a-724f-4efa-8683-3dfb9b013f3b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:34 np0005486808 nova_compute[259627]: 2025-10-14 08:58:34.372 2 INFO nova.virt.libvirt.driver [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Deleting instance files /var/lib/nova/instances/a071857d-db87-4931-95ad-f8c627f74160_del#033[00m
Oct 14 04:58:34 np0005486808 nova_compute[259627]: 2025-10-14 08:58:34.373 2 INFO nova.virt.libvirt.driver [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Deletion of /var/lib/nova/instances/a071857d-db87-4931-95ad-f8c627f74160_del complete#033[00m
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.378 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[289855a0-e317-4084-9b98-015e2a2c4487]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:34 np0005486808 NetworkManager[44885]: <info>  [1760432314.3907] manager: (tap2534100b-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/92)
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.390 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a7d3c135-a109-4bf0-abb9-fd72fe485161]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:34 np0005486808 systemd-udevd[291569]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 04:58:34 np0005486808 nova_compute[259627]: 2025-10-14 08:58:34.424 2 INFO nova.compute.manager [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 04:58:34 np0005486808 nova_compute[259627]: 2025-10-14 08:58:34.424 2 DEBUG oslo.service.loopingcall [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 04:58:34 np0005486808 nova_compute[259627]: 2025-10-14 08:58:34.425 2 DEBUG nova.compute.manager [-] [instance: a071857d-db87-4931-95ad-f8c627f74160] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 04:58:34 np0005486808 nova_compute[259627]: 2025-10-14 08:58:34.425 2 DEBUG nova.network.neutron [-] [instance: a071857d-db87-4931-95ad-f8c627f74160] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.430 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a19da300-fb18-4b8b-97bd-c352f327dd35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.432 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[719048e5-b21a-4efe-9072-501d8a5b2c70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:34 np0005486808 NetworkManager[44885]: <info>  [1760432314.4559] device (tap2534100b-a0): carrier: link connected
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.462 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[7bd46360-c756-47ac-9d33-d4a2fbbdd42e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.483 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5ab7c79e-432f-4c61-8843-4830d820ed6d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2534100b-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:72:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609321, 'reachable_time': 44734, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291588, 'error': None, 'target': 'ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.502 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[867ab156-52ef-41fd-a7ba-9f896d7487d2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef9:72ee'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 609321, 'tstamp': 609321}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291589, 'error': None, 'target': 'ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.520 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[22c86e79-3aa1-4fc8-a6ee-f6a0d4c1ade7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2534100b-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:72:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609321, 'reachable_time': 44734, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 291590, 'error': None, 'target': 'ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:34 np0005486808 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000016.scope: Deactivated successfully.
Oct 14 04:58:34 np0005486808 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000016.scope: Consumed 12.239s CPU time.
Oct 14 04:58:34 np0005486808 systemd-machined[214636]: Machine qemu-23-instance-00000016 terminated.
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.550 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[14c44308-5eb0-432d-a567-81b15bf29c56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:34 np0005486808 nova_compute[259627]: 2025-10-14 08:58:34.568 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432314.568571, aefbf308-7f99-4a76-8d5e-54613f6bdf83 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:58:34 np0005486808 nova_compute[259627]: 2025-10-14 08:58:34.569 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] VM Started (Lifecycle Event)#033[00m
Oct 14 04:58:34 np0005486808 nova_compute[259627]: 2025-10-14 08:58:34.592 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:58:34 np0005486808 nova_compute[259627]: 2025-10-14 08:58:34.596 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432314.5687027, aefbf308-7f99-4a76-8d5e-54613f6bdf83 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:58:34 np0005486808 nova_compute[259627]: 2025-10-14 08:58:34.596 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] VM Paused (Lifecycle Event)#033[00m
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.609 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6d48bf08-356a-4f72-9a4c-cfeab513dc49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.611 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2534100b-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.611 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.612 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2534100b-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:34 np0005486808 nova_compute[259627]: 2025-10-14 08:58:34.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:34 np0005486808 kernel: tap2534100b-a0: entered promiscuous mode
Oct 14 04:58:34 np0005486808 NetworkManager[44885]: <info>  [1760432314.6145] manager: (tap2534100b-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/93)
Oct 14 04:58:34 np0005486808 nova_compute[259627]: 2025-10-14 08:58:34.617 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:58:34 np0005486808 nova_compute[259627]: 2025-10-14 08:58:34.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.619 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2534100b-a0, col_values=(('external_ids', {'iface-id': 'a903215b-fae6-434d-8681-fcd07d014218'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:34 np0005486808 nova_compute[259627]: 2025-10-14 08:58:34.619 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:58:34 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:34Z|00173|binding|INFO|Releasing lport a903215b-fae6-434d-8681-fcd07d014218 from this chassis (sb_readonly=0)
Oct 14 04:58:34 np0005486808 nova_compute[259627]: 2025-10-14 08:58:34.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:34 np0005486808 nova_compute[259627]: 2025-10-14 08:58:34.638 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:58:34 np0005486808 nova_compute[259627]: 2025-10-14 08:58:34.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.646 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2534100b-a4f5-4f68-9f75-a1af37008664.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2534100b-a4f5-4f68-9f75-a1af37008664.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.647 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[22e37e2c-e2a3-4713-8f3d-9095cf3b8a43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.649 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-2534100b-a4f5-4f68-9f75-a1af37008664
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/2534100b-a4f5-4f68-9f75-a1af37008664.pid.haproxy
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 2534100b-a4f5-4f68-9f75-a1af37008664
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 04:58:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:34.653 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664', 'env', 'PROCESS_TAG=haproxy-2534100b-a4f5-4f68-9f75-a1af37008664', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2534100b-a4f5-4f68-9f75-a1af37008664.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 04:58:35 np0005486808 podman[291625]: 2025-10-14 08:58:35.072930794 +0000 UTC m=+0.064353311 container create 187f372d56c0a9991013d921378d7301575b5440939582de4f4e7f81321a34ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 04:58:35 np0005486808 podman[291625]: 2025-10-14 08:58:35.035917405 +0000 UTC m=+0.027339962 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 04:58:35 np0005486808 systemd[1]: Started libpod-conmon-187f372d56c0a9991013d921378d7301575b5440939582de4f4e7f81321a34ef.scope.
Oct 14 04:58:35 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:58:35 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e7d9746db43f2554dc647176d5a6e0cd2f0b471a69b65de77f8535040dad214/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 04:58:35 np0005486808 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 04:58:35 np0005486808 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 04:58:35 np0005486808 podman[291625]: 2025-10-14 08:58:35.192612133 +0000 UTC m=+0.184034690 container init 187f372d56c0a9991013d921378d7301575b5440939582de4f4e7f81321a34ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 14 04:58:35 np0005486808 podman[291625]: 2025-10-14 08:58:35.20308324 +0000 UTC m=+0.194505747 container start 187f372d56c0a9991013d921378d7301575b5440939582de4f4e7f81321a34ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 14 04:58:35 np0005486808 neutron-haproxy-ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664[291640]: [NOTICE]   (291645) : New worker (291647) forked
Oct 14 04:58:35 np0005486808 neutron-haproxy-ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664[291640]: [NOTICE]   (291645) : Loading success.
Oct 14 04:58:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.263 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 9db6eef3-e4da-4c17-91ea-1c3124906f61 in datapath 03753014-b87c-4672-9d66-fdc254813b6e unbound from our chassis#033[00m
Oct 14 04:58:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.267 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 03753014-b87c-4672-9d66-fdc254813b6e#033[00m
Oct 14 04:58:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.288 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6c6d3716-e1a8-409b-90d6-6576b65168e2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.313 2 INFO nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Instance shutdown successfully after 13 seconds.#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.319 2 INFO nova.virt.libvirt.driver [-] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Instance destroyed successfully.#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.325 2 INFO nova.virt.libvirt.driver [-] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Instance destroyed successfully.#033[00m
Oct 14 04:58:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.328 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a1f1704f-0807-443a-8e2a-29debe029961]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.333 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8ff17cb8-2be7-48e7-bc17-afc683d09d5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.375 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[7c44ad4d-ca4f-4a5c-a88b-36ed80f60e56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.399 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[42db1afc-871b-4173-a88d-262de1bba854]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap03753014-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:7a:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 5, 'rx_bytes': 742, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 5, 'rx_bytes': 742, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609220, 'reachable_time': 43338, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 7, 'inoctets': 644, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 7, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 644, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 7, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291679, 'error': None, 'target': 'ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.421 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a5ffda2c-8442-41b0-aeea-dee9b9b75301]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap03753014-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 609237, 'tstamp': 609237}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291680, 'error': None, 'target': 'ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tap03753014-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 609239, 'tstamp': 609239}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291680, 'error': None, 'target': 'ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.422 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03753014-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.425 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap03753014-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.426 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:58:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.426 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap03753014-b0, col_values=(('external_ids', {'iface-id': '70e65942-3441-4aa4-b413-2595e7186410'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.427 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:58:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.427 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 14 04:58:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.428 162547 INFO neutron.agent.ovn.metadata.agent [-] Port fd335735-b88a-42f7-911e-af4b2b9396fb in datapath 8e06007a-4993-4328-9612-b43b931e2e3b unbound from our chassis#033[00m
Oct 14 04:58:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.429 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8e06007a-4993-4328-9612-b43b931e2e3b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 04:58:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.430 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[672a799f-39c5-4a5f-8687-7fbede3cca39]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.431 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b namespace which is not needed anymore#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.434 2 DEBUG nova.compute.manager [req-48a7a9ad-ba2a-40ba-b94a-45740af0dc81 req-61abbd2f-91c0-449d-9944-d267d5a5c2ac 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Received event network-vif-unplugged-fd335735-b88a-42f7-911e-af4b2b9396fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.434 2 DEBUG oslo_concurrency.lockutils [req-48a7a9ad-ba2a-40ba-b94a-45740af0dc81 req-61abbd2f-91c0-449d-9944-d267d5a5c2ac 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "a071857d-db87-4931-95ad-f8c627f74160-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.435 2 DEBUG oslo_concurrency.lockutils [req-48a7a9ad-ba2a-40ba-b94a-45740af0dc81 req-61abbd2f-91c0-449d-9944-d267d5a5c2ac 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.435 2 DEBUG oslo_concurrency.lockutils [req-48a7a9ad-ba2a-40ba-b94a-45740af0dc81 req-61abbd2f-91c0-449d-9944-d267d5a5c2ac 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.435 2 DEBUG nova.compute.manager [req-48a7a9ad-ba2a-40ba-b94a-45740af0dc81 req-61abbd2f-91c0-449d-9944-d267d5a5c2ac 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] No waiting events found dispatching network-vif-unplugged-fd335735-b88a-42f7-911e-af4b2b9396fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.435 2 DEBUG nova.compute.manager [req-48a7a9ad-ba2a-40ba-b94a-45740af0dc81 req-61abbd2f-91c0-449d-9944-d267d5a5c2ac 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Received event network-vif-unplugged-fd335735-b88a-42f7-911e-af4b2b9396fb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 04:58:35 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1208: 305 pgs: 305 active+clean; 214 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 5.1 MiB/s rd, 5.7 MiB/s wr, 333 op/s
Oct 14 04:58:35 np0005486808 neutron-haproxy-ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b[290916]: [NOTICE]   (290946) : haproxy version is 2.8.14-c23fe91
Oct 14 04:58:35 np0005486808 neutron-haproxy-ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b[290916]: [NOTICE]   (290946) : path to executable is /usr/sbin/haproxy
Oct 14 04:58:35 np0005486808 neutron-haproxy-ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b[290916]: [WARNING]  (290946) : Exiting Master process...
Oct 14 04:58:35 np0005486808 neutron-haproxy-ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b[290916]: [WARNING]  (290946) : Exiting Master process...
Oct 14 04:58:35 np0005486808 neutron-haproxy-ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b[290916]: [ALERT]    (290946) : Current worker (290955) exited with code 143 (Terminated)
Oct 14 04:58:35 np0005486808 neutron-haproxy-ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b[290916]: [WARNING]  (290946) : All workers exited. Exiting... (0)
Oct 14 04:58:35 np0005486808 systemd[1]: libpod-05b21863dda394c5167a0d5e1c6ea769a47954b64e8048136f02537898d58d49.scope: Deactivated successfully.
Oct 14 04:58:35 np0005486808 conmon[290916]: conmon 05b21863dda394c5167a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-05b21863dda394c5167a0d5e1c6ea769a47954b64e8048136f02537898d58d49.scope/container/memory.events
Oct 14 04:58:35 np0005486808 podman[291698]: 2025-10-14 08:58:35.61519894 +0000 UTC m=+0.068270857 container died 05b21863dda394c5167a0d5e1c6ea769a47954b64e8048136f02537898d58d49 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 04:58:35 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-05b21863dda394c5167a0d5e1c6ea769a47954b64e8048136f02537898d58d49-userdata-shm.mount: Deactivated successfully.
Oct 14 04:58:35 np0005486808 systemd[1]: var-lib-containers-storage-overlay-1dbf6ecce0931ae581de6497fdff55fce469de3179658359b1e470c5e9f85c3e-merged.mount: Deactivated successfully.
Oct 14 04:58:35 np0005486808 podman[291698]: 2025-10-14 08:58:35.664556932 +0000 UTC m=+0.117628849 container cleanup 05b21863dda394c5167a0d5e1c6ea769a47954b64e8048136f02537898d58d49 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009)
Oct 14 04:58:35 np0005486808 systemd[1]: libpod-conmon-05b21863dda394c5167a0d5e1c6ea769a47954b64e8048136f02537898d58d49.scope: Deactivated successfully.
Oct 14 04:58:35 np0005486808 podman[291730]: 2025-10-14 08:58:35.752525492 +0000 UTC m=+0.062170497 container remove 05b21863dda394c5167a0d5e1c6ea769a47954b64e8048136f02537898d58d49 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.756 2 INFO nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Deleting instance files /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056_del#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.757 2 INFO nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Deletion of /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056_del complete#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.763 2 DEBUG nova.compute.manager [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received event network-vif-plugged-685df8a6-7b64-441e-9a56-4ede8db5faa9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.763 2 DEBUG oslo_concurrency.lockutils [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.763 2 DEBUG oslo_concurrency.lockutils [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.763 2 DEBUG oslo_concurrency.lockutils [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.763 2 DEBUG nova.compute.manager [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] No event matching network-vif-plugged-685df8a6-7b64-441e-9a56-4ede8db5faa9 in dict_keys([('network-vif-plugged', '57b4d441-0c29-4419-b20b-3b5c4223b7a8'), ('network-vif-plugged', '9db6eef3-e4da-4c17-91ea-1c3124906f61')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.764 2 WARNING nova.compute.manager [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received unexpected event network-vif-plugged-685df8a6-7b64-441e-9a56-4ede8db5faa9 for instance with vm_state building and task_state spawning.#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.764 2 DEBUG nova.compute.manager [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received event network-vif-plugged-57b4d441-0c29-4419-b20b-3b5c4223b7a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.764 2 DEBUG oslo_concurrency.lockutils [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.764 2 DEBUG oslo_concurrency.lockutils [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.764 2 DEBUG oslo_concurrency.lockutils [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.764 2 DEBUG nova.compute.manager [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Processing event network-vif-plugged-57b4d441-0c29-4419-b20b-3b5c4223b7a8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.765 2 DEBUG nova.compute.manager [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received event network-vif-plugged-57b4d441-0c29-4419-b20b-3b5c4223b7a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.765 2 DEBUG oslo_concurrency.lockutils [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.765 2 DEBUG oslo_concurrency.lockutils [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.765 2 DEBUG oslo_concurrency.lockutils [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.765 2 DEBUG nova.compute.manager [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] No event matching network-vif-plugged-57b4d441-0c29-4419-b20b-3b5c4223b7a8 in dict_keys([('network-vif-plugged', '9db6eef3-e4da-4c17-91ea-1c3124906f61')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.765 2 WARNING nova.compute.manager [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received unexpected event network-vif-plugged-57b4d441-0c29-4419-b20b-3b5c4223b7a8 for instance with vm_state building and task_state spawning.#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.766 2 DEBUG nova.compute.manager [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received event network-vif-plugged-9db6eef3-e4da-4c17-91ea-1c3124906f61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.766 2 DEBUG oslo_concurrency.lockutils [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.766 2 DEBUG oslo_concurrency.lockutils [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.766 2 DEBUG oslo_concurrency.lockutils [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.766 2 DEBUG nova.compute.manager [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Processing event network-vif-plugged-9db6eef3-e4da-4c17-91ea-1c3124906f61 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.766 2 DEBUG nova.compute.manager [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received event network-vif-plugged-9db6eef3-e4da-4c17-91ea-1c3124906f61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.767 2 DEBUG oslo_concurrency.lockutils [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.767 2 DEBUG oslo_concurrency.lockutils [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.767 2 DEBUG oslo_concurrency.lockutils [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.767 2 DEBUG nova.compute.manager [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] No waiting events found dispatching network-vif-plugged-9db6eef3-e4da-4c17-91ea-1c3124906f61 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.767 2 WARNING nova.compute.manager [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received unexpected event network-vif-plugged-9db6eef3-e4da-4c17-91ea-1c3124906f61 for instance with vm_state building and task_state spawning.#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.767 2 DEBUG nova.compute.manager [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Received event network-vif-deleted-fd335735-b88a-42f7-911e-af4b2b9396fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.768 2 INFO nova.compute.manager [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Neutron deleted interface fd335735-b88a-42f7-911e-af4b2b9396fb; detaching it from the instance and deleting it from the info cache#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.768 2 DEBUG nova.network.neutron [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Updating instance_info_cache with network_info: [{"id": "cada6b6a-e534-4cd7-8abf-e402059d6964", "address": "fa:16:3e:49:47:df", "network": {"id": "8e06007a-4993-4328-9612-b43b931e2e3b", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1415539227-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99649891054745d8a5186a1ad099e5a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcada6b6a-e5", "ovs_interfaceid": "cada6b6a-e534-4cd7-8abf-e402059d6964", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:58:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.768 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[88f0201c-dd07-4373-b6ed-c5da5466a548]: (4, ('Tue Oct 14 08:58:35 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b (05b21863dda394c5167a0d5e1c6ea769a47954b64e8048136f02537898d58d49)\n05b21863dda394c5167a0d5e1c6ea769a47954b64e8048136f02537898d58d49\nTue Oct 14 08:58:35 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b (05b21863dda394c5167a0d5e1c6ea769a47954b64e8048136f02537898d58d49)\n05b21863dda394c5167a0d5e1c6ea769a47954b64e8048136f02537898d58d49\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.769 2 DEBUG nova.compute.manager [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 04:58:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.770 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[386c765e-020e-4cc1-8bf8-bd8aa788deda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.771 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e06007a-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.774 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432315.7737002, aefbf308-7f99-4a76-8d5e-54613f6bdf83 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.774 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] VM Resumed (Lifecycle Event)#033[00m
Oct 14 04:58:35 np0005486808 kernel: tap8e06007a-40: left promiscuous mode
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.776 2 DEBUG nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.783 2 INFO nova.virt.libvirt.driver [-] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Instance spawned successfully.#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.784 2 DEBUG nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.798 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f0512634-9ab7-4a56-99ba-2846674fb117]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.829 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:58:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.827 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bb0320a5-e861-402b-ae1f-c5f572769e24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.831 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fe95ca25-d2e6-43af-9c1b-00ea5ba3a52c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.832 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:58:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.847 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[57b5269d-dcab-463a-bd63-15b55b7ee157]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 608351, 'reachable_time': 44285, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291746, 'error': None, 'target': 'ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.849 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8e06007a-4993-4328-9612-b43b931e2e3b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 04:58:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.849 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[27541185-afb6-447e-b8f4-d58b58f4aa05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.850 162547 INFO neutron.agent.ovn.metadata.agent [-] Port cada6b6a-e534-4cd7-8abf-e402059d6964 in datapath 8e06007a-4993-4328-9612-b43b931e2e3b unbound from our chassis#033[00m
Oct 14 04:58:35 np0005486808 systemd[1]: run-netns-ovnmeta\x2d8e06007a\x2d4993\x2d4328\x2d9612\x2db43b931e2e3b.mount: Deactivated successfully.
Oct 14 04:58:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.852 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8e06007a-4993-4328-9612-b43b931e2e3b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 04:58:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:35.852 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e95d6eb3-6c94-4547-8457-c0ff0d1fc524]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.861 2 DEBUG nova.compute.manager [req-1a9a852f-bd90-4c59-9bee-bde1186d8e88 req-ff8ed52c-8245-4a3c-8090-a925c05c266c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Detach interface failed, port_id=fd335735-b88a-42f7-911e-af4b2b9396fb, reason: Instance a071857d-db87-4931-95ad-f8c627f74160 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.881 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.900 2 DEBUG nova.network.neutron [-] [instance: a071857d-db87-4931-95ad-f8c627f74160] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.912 2 DEBUG nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.912 2 DEBUG nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.913 2 DEBUG nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.913 2 DEBUG nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.913 2 DEBUG nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.914 2 DEBUG nova.virt.libvirt.driver [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:58:35 np0005486808 nova_compute[259627]: 2025-10-14 08:58:35.945 2 INFO nova.compute.manager [-] [instance: a071857d-db87-4931-95ad-f8c627f74160] Took 1.52 seconds to deallocate network for instance.#033[00m
Oct 14 04:58:36 np0005486808 nova_compute[259627]: 2025-10-14 08:58:36.033 2 INFO nova.compute.manager [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Took 12.49 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 04:58:36 np0005486808 nova_compute[259627]: 2025-10-14 08:58:36.033 2 DEBUG nova.compute.manager [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:58:36 np0005486808 nova_compute[259627]: 2025-10-14 08:58:36.046 2 DEBUG nova.compute.manager [None req-65cbd964-17d7-4e04-a3f0-2ccc6b77dda8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:58:36 np0005486808 nova_compute[259627]: 2025-10-14 08:58:36.047 2 DEBUG oslo_concurrency.lockutils [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:36 np0005486808 nova_compute[259627]: 2025-10-14 08:58:36.047 2 DEBUG oslo_concurrency.lockutils [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:36 np0005486808 nova_compute[259627]: 2025-10-14 08:58:36.129 2 DEBUG nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 04:58:36 np0005486808 nova_compute[259627]: 2025-10-14 08:58:36.130 2 INFO nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Creating image(s)#033[00m
Oct 14 04:58:36 np0005486808 nova_compute[259627]: 2025-10-14 08:58:36.151 2 DEBUG nova.storage.rbd_utils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:36 np0005486808 nova_compute[259627]: 2025-10-14 08:58:36.172 2 DEBUG nova.storage.rbd_utils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:36 np0005486808 nova_compute[259627]: 2025-10-14 08:58:36.195 2 DEBUG nova.storage.rbd_utils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:36 np0005486808 nova_compute[259627]: 2025-10-14 08:58:36.198 2 DEBUG oslo_concurrency.lockutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Acquiring lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:36 np0005486808 nova_compute[259627]: 2025-10-14 08:58:36.201 2 DEBUG oslo_concurrency.lockutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:36 np0005486808 nova_compute[259627]: 2025-10-14 08:58:36.210 2 INFO nova.compute.manager [None req-65cbd964-17d7-4e04-a3f0-2ccc6b77dda8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] instance snapshotting#033[00m
Oct 14 04:58:36 np0005486808 nova_compute[259627]: 2025-10-14 08:58:36.210 2 WARNING nova.compute.manager [None req-65cbd964-17d7-4e04-a3f0-2ccc6b77dda8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] trying to snapshot a non-running instance: (state: 3 expected: 1)#033[00m
Oct 14 04:58:36 np0005486808 nova_compute[259627]: 2025-10-14 08:58:36.221 2 INFO nova.compute.manager [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Took 13.87 seconds to build instance.#033[00m
Oct 14 04:58:36 np0005486808 nova_compute[259627]: 2025-10-14 08:58:36.234 2 DEBUG oslo_concurrency.lockutils [None req-4f0d9dc6-fd45-4fd9-b988-a8286abc449e 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.953s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:36 np0005486808 nova_compute[259627]: 2025-10-14 08:58:36.256 2 DEBUG oslo_concurrency.processutils [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:36 np0005486808 nova_compute[259627]: 2025-10-14 08:58:36.424 2 INFO nova.virt.libvirt.driver [None req-65cbd964-17d7-4e04-a3f0-2ccc6b77dda8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Beginning live snapshot process#033[00m
Oct 14 04:58:36 np0005486808 nova_compute[259627]: 2025-10-14 08:58:36.558 2 DEBUG nova.virt.libvirt.imagebackend [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Image locations are: [{'url': 'rbd://c49aadb6-9b04-5cb1-8f5f-4c91676c568e/images/e2368e3e-f504-40e6-a9d3-67df18c845bb/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://c49aadb6-9b04-5cb1-8f5f-4c91676c568e/images/e2368e3e-f504-40e6-a9d3-67df18c845bb/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct 14 04:58:36 np0005486808 nova_compute[259627]: 2025-10-14 08:58:36.564 2 DEBUG nova.virt.libvirt.imagebackend [None req-65cbd964-17d7-4e04-a3f0-2ccc6b77dda8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] No parent info for a4789543-f429-47d7-9f79-80a9d90a59f9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct 14 04:58:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:58:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3454276195' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:58:36 np0005486808 nova_compute[259627]: 2025-10-14 08:58:36.685 2 DEBUG oslo_concurrency.processutils [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:36 np0005486808 nova_compute[259627]: 2025-10-14 08:58:36.689 2 DEBUG nova.compute.provider_tree [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:58:36 np0005486808 nova_compute[259627]: 2025-10-14 08:58:36.707 2 DEBUG nova.scheduler.client.report [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:58:36 np0005486808 nova_compute[259627]: 2025-10-14 08:58:36.732 2 DEBUG oslo_concurrency.lockutils [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:36 np0005486808 nova_compute[259627]: 2025-10-14 08:58:36.760 2 INFO nova.scheduler.client.report [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Deleted allocations for instance a071857d-db87-4931-95ad-f8c627f74160#033[00m
Oct 14 04:58:36 np0005486808 nova_compute[259627]: 2025-10-14 08:58:36.768 2 DEBUG nova.storage.rbd_utils [None req-65cbd964-17d7-4e04-a3f0-2ccc6b77dda8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] creating snapshot(b279fc5a287f476b829290bb8093336b) on rbd image(2826d9ce-d739-49a1-abfa-80cee62173fb_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 14 04:58:36 np0005486808 nova_compute[259627]: 2025-10-14 08:58:36.863 2 DEBUG oslo_concurrency.lockutils [None req-fb072aba-6358-4a63-9390-751f10a75ce9 664abc01a11d458d9644488bf31e47f4 99649891054745d8a5186a1ad099e5a7 - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.172s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:36 np0005486808 nova_compute[259627]: 2025-10-14 08:58:36.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.449 2 DEBUG oslo_concurrency.lockutils [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquiring lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.450 2 DEBUG oslo_concurrency.lockutils [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.450 2 DEBUG oslo_concurrency.lockutils [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquiring lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.451 2 DEBUG oslo_concurrency.lockutils [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.451 2 DEBUG oslo_concurrency.lockutils [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.452 2 INFO nova.compute.manager [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Terminating instance#033[00m
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.453 2 DEBUG nova.compute.manager [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 04:58:37 np0005486808 kernel: tap57b4d441-0c (unregistering): left promiscuous mode
Oct 14 04:58:37 np0005486808 NetworkManager[44885]: <info>  [1760432317.4913] device (tap57b4d441-0c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 04:58:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 04:58:37 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1209: 305 pgs: 305 active+clean; 214 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 244 op/s
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:37 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:37Z|00174|binding|INFO|Releasing lport 57b4d441-0c29-4419-b20b-3b5c4223b7a8 from this chassis (sb_readonly=0)
Oct 14 04:58:37 np0005486808 kernel: tap685df8a6-7b (unregistering): left promiscuous mode
Oct 14 04:58:37 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:37Z|00175|binding|INFO|Setting lport 57b4d441-0c29-4419-b20b-3b5c4223b7a8 down in Southbound
Oct 14 04:58:37 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:37Z|00176|binding|INFO|Removing iface tap57b4d441-0c ovn-installed in OVS
Oct 14 04:58:37 np0005486808 NetworkManager[44885]: <info>  [1760432317.5626] device (tap685df8a6-7b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 04:58:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:37.563 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3c:7f:93 10.100.0.177'], port_security=['fa:16:3e:3c:7f:93 10.100.0.177'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.177/24', 'neutron:device_id': 'aefbf308-7f99-4a76-8d5e-54613f6bdf83', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03753014-b87c-4672-9d66-fdc254813b6e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3566f35c659a45bd9b9bbddf6552ed43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c2a10e31-1c91-4c44-89d5-dfe41c81ad3c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=494b06c3-b496-4326-9e04-09e435735a40, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=57b4d441-0c29-4419-b20b-3b5c4223b7a8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:58:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:37.564 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 57b4d441-0c29-4419-b20b-3b5c4223b7a8 in datapath 03753014-b87c-4672-9d66-fdc254813b6e unbound from our chassis#033[00m
Oct 14 04:58:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:37.566 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 03753014-b87c-4672-9d66-fdc254813b6e#033[00m
Oct 14 04:58:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:37.578 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e6a246e4-0caa-4b2a-8cfc-fba9564842af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:37 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:37Z|00177|binding|INFO|Releasing lport 685df8a6-7b64-441e-9a56-4ede8db5faa9 from this chassis (sb_readonly=0)
Oct 14 04:58:37 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:37Z|00178|binding|INFO|Setting lport 685df8a6-7b64-441e-9a56-4ede8db5faa9 down in Southbound
Oct 14 04:58:37 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:37Z|00179|binding|INFO|Removing iface tap685df8a6-7b ovn-installed in OVS
Oct 14 04:58:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:37.598 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:86:b1 10.100.1.144'], port_security=['fa:16:3e:22:86:b1 10.100.1.144'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.144/24', 'neutron:device_id': 'aefbf308-7f99-4a76-8d5e-54613f6bdf83', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2534100b-a4f5-4f68-9f75-a1af37008664', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3566f35c659a45bd9b9bbddf6552ed43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c2a10e31-1c91-4c44-89d5-dfe41c81ad3c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b4c418fd-a28b-433f-be67-07c285fde4ec, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=685df8a6-7b64-441e-9a56-4ede8db5faa9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.600 2 DEBUG oslo_concurrency.processutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:37 np0005486808 kernel: tap9db6eef3-e4 (unregistering): left promiscuous mode
Oct 14 04:58:37 np0005486808 NetworkManager[44885]: <info>  [1760432317.6116] device (tap9db6eef3-e4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 04:58:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:37.623 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[70aab4f4-c4f8-49bb-ac63-13e5447eb858]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:37.628 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e8845f10-d645-42f0-84c3-9a5c657ded43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:37 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:37Z|00180|binding|INFO|Releasing lport 9db6eef3-e4da-4c17-91ea-1c3124906f61 from this chassis (sb_readonly=0)
Oct 14 04:58:37 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:37Z|00181|binding|INFO|Setting lport 9db6eef3-e4da-4c17-91ea-1c3124906f61 down in Southbound
Oct 14 04:58:37 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:37Z|00182|binding|INFO|Removing iface tap9db6eef3-e4 ovn-installed in OVS
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:37.642 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:11:79 10.100.0.215'], port_security=['fa:16:3e:24:11:79 10.100.0.215'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.215/24', 'neutron:device_id': 'aefbf308-7f99-4a76-8d5e-54613f6bdf83', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03753014-b87c-4672-9d66-fdc254813b6e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3566f35c659a45bd9b9bbddf6552ed43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c2a10e31-1c91-4c44-89d5-dfe41c81ad3c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=494b06c3-b496-4326-9e04-09e435735a40, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=9db6eef3-e4da-4c17-91ea-1c3124906f61) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.641 2 DEBUG nova.compute.manager [req-f482c03a-861a-46bd-b9d7-d5c5d810cd88 req-6e33b000-2980-40b2-b691-085da5c0d984 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Received event network-vif-plugged-fd335735-b88a-42f7-911e-af4b2b9396fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.642 2 DEBUG oslo_concurrency.lockutils [req-f482c03a-861a-46bd-b9d7-d5c5d810cd88 req-6e33b000-2980-40b2-b691-085da5c0d984 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "a071857d-db87-4931-95ad-f8c627f74160-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.642 2 DEBUG oslo_concurrency.lockutils [req-f482c03a-861a-46bd-b9d7-d5c5d810cd88 req-6e33b000-2980-40b2-b691-085da5c0d984 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.642 2 DEBUG oslo_concurrency.lockutils [req-f482c03a-861a-46bd-b9d7-d5c5d810cd88 req-6e33b000-2980-40b2-b691-085da5c0d984 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.643 2 DEBUG nova.compute.manager [req-f482c03a-861a-46bd-b9d7-d5c5d810cd88 req-6e33b000-2980-40b2-b691-085da5c0d984 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] No waiting events found dispatching network-vif-plugged-fd335735-b88a-42f7-911e-af4b2b9396fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.643 2 WARNING nova.compute.manager [req-f482c03a-861a-46bd-b9d7-d5c5d810cd88 req-6e33b000-2980-40b2-b691-085da5c0d984 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Received unexpected event network-vif-plugged-fd335735-b88a-42f7-911e-af4b2b9396fb for instance with vm_state deleted and task_state None.#033[00m
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.643 2 DEBUG nova.compute.manager [req-f482c03a-861a-46bd-b9d7-d5c5d810cd88 req-6e33b000-2980-40b2-b691-085da5c0d984 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Received event network-vif-unplugged-cada6b6a-e534-4cd7-8abf-e402059d6964 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.643 2 DEBUG oslo_concurrency.lockutils [req-f482c03a-861a-46bd-b9d7-d5c5d810cd88 req-6e33b000-2980-40b2-b691-085da5c0d984 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "a071857d-db87-4931-95ad-f8c627f74160-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.643 2 DEBUG oslo_concurrency.lockutils [req-f482c03a-861a-46bd-b9d7-d5c5d810cd88 req-6e33b000-2980-40b2-b691-085da5c0d984 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.644 2 DEBUG oslo_concurrency.lockutils [req-f482c03a-861a-46bd-b9d7-d5c5d810cd88 req-6e33b000-2980-40b2-b691-085da5c0d984 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.644 2 DEBUG nova.compute.manager [req-f482c03a-861a-46bd-b9d7-d5c5d810cd88 req-6e33b000-2980-40b2-b691-085da5c0d984 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] No waiting events found dispatching network-vif-unplugged-cada6b6a-e534-4cd7-8abf-e402059d6964 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.645 2 WARNING nova.compute.manager [req-f482c03a-861a-46bd-b9d7-d5c5d810cd88 req-6e33b000-2980-40b2-b691-085da5c0d984 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Received unexpected event network-vif-unplugged-cada6b6a-e534-4cd7-8abf-e402059d6964 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.645 2 DEBUG nova.compute.manager [req-f482c03a-861a-46bd-b9d7-d5c5d810cd88 req-6e33b000-2980-40b2-b691-085da5c0d984 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Received event network-vif-plugged-cada6b6a-e534-4cd7-8abf-e402059d6964 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.645 2 DEBUG oslo_concurrency.lockutils [req-f482c03a-861a-46bd-b9d7-d5c5d810cd88 req-6e33b000-2980-40b2-b691-085da5c0d984 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "a071857d-db87-4931-95ad-f8c627f74160-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.645 2 DEBUG oslo_concurrency.lockutils [req-f482c03a-861a-46bd-b9d7-d5c5d810cd88 req-6e33b000-2980-40b2-b691-085da5c0d984 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.645 2 DEBUG oslo_concurrency.lockutils [req-f482c03a-861a-46bd-b9d7-d5c5d810cd88 req-6e33b000-2980-40b2-b691-085da5c0d984 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a071857d-db87-4931-95ad-f8c627f74160-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.646 2 DEBUG nova.compute.manager [req-f482c03a-861a-46bd-b9d7-d5c5d810cd88 req-6e33b000-2980-40b2-b691-085da5c0d984 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] No waiting events found dispatching network-vif-plugged-cada6b6a-e534-4cd7-8abf-e402059d6964 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.646 2 WARNING nova.compute.manager [req-f482c03a-861a-46bd-b9d7-d5c5d810cd88 req-6e33b000-2980-40b2-b691-085da5c0d984 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Received unexpected event network-vif-plugged-cada6b6a-e534-4cd7-8abf-e402059d6964 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:37.661 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[76e871eb-e06b-48b0-8e6d-981b38a2ebc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:37.676 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0cef2f1f-e44e-4fa3-9144-3260a4bd399c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap03753014-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:7a:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609220, 'reachable_time': 43338, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291896, 'error': None, 'target': 'ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.678 2 DEBUG oslo_concurrency.processutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf.part --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.679 2 DEBUG nova.virt.images [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] e2368e3e-f504-40e6-a9d3-67df18c845bb was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.680 2 DEBUG nova.privsep.utils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.680 2 DEBUG oslo_concurrency.processutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf.part /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:37 np0005486808 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000019.scope: Deactivated successfully.
Oct 14 04:58:37 np0005486808 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000019.scope: Consumed 2.811s CPU time.
Oct 14 04:58:37 np0005486808 systemd-machined[214636]: Machine qemu-26-instance-00000019 terminated.
Oct 14 04:58:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:37.692 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[304e13f7-de77-4cfe-beb6-5fa1293eb186]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap03753014-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 609237, 'tstamp': 609237}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291899, 'error': None, 'target': 'ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tap03753014-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 609239, 'tstamp': 609239}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291899, 'error': None, 'target': 'ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:37.693 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03753014-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:37.708 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap03753014-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:37.709 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:58:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:37.709 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap03753014-b0, col_values=(('external_ids', {'iface-id': '70e65942-3441-4aa4-b413-2595e7186410'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:37.710 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:58:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:37.710 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 685df8a6-7b64-441e-9a56-4ede8db5faa9 in datapath 2534100b-a4f5-4f68-9f75-a1af37008664 unbound from our chassis#033[00m
Oct 14 04:58:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:37.711 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2534100b-a4f5-4f68-9f75-a1af37008664, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 04:58:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:37.712 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4c23ef7b-881b-40cc-a859-f98349691eda]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:37.713 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664 namespace which is not needed anymore#033[00m
Oct 14 04:58:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Oct 14 04:58:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Oct 14 04:58:37 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Oct 14 04:58:37 np0005486808 neutron-haproxy-ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664[291640]: [NOTICE]   (291645) : haproxy version is 2.8.14-c23fe91
Oct 14 04:58:37 np0005486808 neutron-haproxy-ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664[291640]: [NOTICE]   (291645) : path to executable is /usr/sbin/haproxy
Oct 14 04:58:37 np0005486808 neutron-haproxy-ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664[291640]: [WARNING]  (291645) : Exiting Master process...
Oct 14 04:58:37 np0005486808 neutron-haproxy-ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664[291640]: [ALERT]    (291645) : Current worker (291647) exited with code 143 (Terminated)
Oct 14 04:58:37 np0005486808 neutron-haproxy-ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664[291640]: [WARNING]  (291645) : All workers exited. Exiting... (0)
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.863 2 DEBUG nova.compute.manager [req-b6e282d1-7f1c-4ed2-bd12-5a822b46bc4c req-171990bd-83ad-4329-9b49-f2e5f14b79a2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a071857d-db87-4931-95ad-f8c627f74160] Received event network-vif-deleted-cada6b6a-e534-4cd7-8abf-e402059d6964 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:58:37 np0005486808 systemd[1]: libpod-187f372d56c0a9991013d921378d7301575b5440939582de4f4e7f81321a34ef.scope: Deactivated successfully.
Oct 14 04:58:37 np0005486808 podman[291932]: 2025-10-14 08:58:37.87081673 +0000 UTC m=+0.054609342 container died 187f372d56c0a9991013d921378d7301575b5440939582de4f4e7f81321a34ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:58:37 np0005486808 NetworkManager[44885]: <info>  [1760432317.8845] manager: (tap685df8a6-7b): new Tun device (/org/freedesktop/NetworkManager/Devices/94)
Oct 14 04:58:37 np0005486808 NetworkManager[44885]: <info>  [1760432317.8980] manager: (tap9db6eef3-e4): new Tun device (/org/freedesktop/NetworkManager/Devices/95)
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.901 2 DEBUG oslo_concurrency.processutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf.part /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf.converted" returned: 0 in 0.221s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.906 2 DEBUG oslo_concurrency.processutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:37 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-187f372d56c0a9991013d921378d7301575b5440939582de4f4e7f81321a34ef-userdata-shm.mount: Deactivated successfully.
Oct 14 04:58:37 np0005486808 systemd[1]: var-lib-containers-storage-overlay-9e7d9746db43f2554dc647176d5a6e0cd2f0b471a69b65de77f8535040dad214-merged.mount: Deactivated successfully.
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.937 2 INFO nova.virt.libvirt.driver [-] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Instance destroyed successfully.#033[00m
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.938 2 DEBUG nova.objects.instance [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lazy-loading 'resources' on Instance uuid aefbf308-7f99-4a76-8d5e-54613f6bdf83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:58:37 np0005486808 podman[291932]: 2025-10-14 08:58:37.943934465 +0000 UTC m=+0.127727047 container cleanup 187f372d56c0a9991013d921378d7301575b5440939582de4f4e7f81321a34ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.954 2 DEBUG nova.virt.libvirt.vif [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:58:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-620013762',display_name='tempest-ServersTestMultiNic-server-620013762',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-620013762',id=25,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:58:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3566f35c659a45bd9b9bbddf6552ed43',ramdisk_id='',reservation_id='r-xsoyr0gg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',i
mage_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-840673976',owner_user_name='tempest-ServersTestMultiNic-840673976-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:58:36Z,user_data=None,user_id='5aacb60ad29c418c9161e71bb72da036',uuid=aefbf308-7f99-4a76-8d5e-54613f6bdf83,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "address": "fa:16:3e:3c:7f:93", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b4d441-0c", "ovs_interfaceid": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.954 2 DEBUG nova.network.os_vif_util [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converting VIF {"id": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "address": "fa:16:3e:3c:7f:93", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b4d441-0c", "ovs_interfaceid": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.958 2 DEBUG nova.network.os_vif_util [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3c:7f:93,bridge_name='br-int',has_traffic_filtering=True,id=57b4d441-0c29-4419-b20b-3b5c4223b7a8,network=Network(03753014-b87c-4672-9d66-fdc254813b6e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57b4d441-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.958 2 DEBUG os_vif [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:7f:93,bridge_name='br-int',has_traffic_filtering=True,id=57b4d441-0c29-4419-b20b-3b5c4223b7a8,network=Network(03753014-b87c-4672-9d66-fdc254813b6e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57b4d441-0c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.961 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap57b4d441-0c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:37 np0005486808 systemd[1]: libpod-conmon-187f372d56c0a9991013d921378d7301575b5440939582de4f4e7f81321a34ef.scope: Deactivated successfully.
Oct 14 04:58:37 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.974 2 DEBUG nova.storage.rbd_utils [None req-65cbd964-17d7-4e04-a3f0-2ccc6b77dda8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] cloning vms/2826d9ce-d739-49a1-abfa-80cee62173fb_disk@b279fc5a287f476b829290bb8093336b to images/58a309a9-ebdf-4853-9550-ca13b12b33e8 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:37.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.001 2 DEBUG oslo_concurrency.processutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf.converted --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.002 2 DEBUG oslo_concurrency.lockutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.801s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:38 np0005486808 podman[291991]: 2025-10-14 08:58:38.017222795 +0000 UTC m=+0.045066368 container remove 187f372d56c0a9991013d921378d7301575b5440939582de4f4e7f81321a34ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009)
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.024 2 DEBUG nova.storage.rbd_utils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:38.024 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3d84dbda-e118-46f7-9cac-730e1ccbaa74]: (4, ('Tue Oct 14 08:58:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664 (187f372d56c0a9991013d921378d7301575b5440939582de4f4e7f81321a34ef)\n187f372d56c0a9991013d921378d7301575b5440939582de4f4e7f81321a34ef\nTue Oct 14 08:58:37 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664 (187f372d56c0a9991013d921378d7301575b5440939582de4f4e7f81321a34ef)\n187f372d56c0a9991013d921378d7301575b5440939582de4f4e7f81321a34ef\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:38.026 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7450ee30-eb43-4605-95c2-aedc6783fa7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:38.027 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2534100b-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.028 2 DEBUG oslo_concurrency.processutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf 51c76e0f-284d-4122-83b4-32c4518b9056_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:38 np0005486808 kernel: tap2534100b-a0: left promiscuous mode
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:38.054 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2a8b789d-db55-4629-b95d-7901af98bbc1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.055 2 INFO os_vif [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:7f:93,bridge_name='br-int',has_traffic_filtering=True,id=57b4d441-0c29-4419-b20b-3b5c4223b7a8,network=Network(03753014-b87c-4672-9d66-fdc254813b6e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57b4d441-0c')#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.056 2 DEBUG nova.virt.libvirt.vif [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:58:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-620013762',display_name='tempest-ServersTestMultiNic-server-620013762',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-620013762',id=25,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:58:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3566f35c659a45bd9b9bbddf6552ed43',ramdisk_id='',reservation_id='r-xsoyr0gg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-840673976',owner_user_name='tempest-ServersTestMultiNic-840673976-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:58:36Z,user_data=None,user_id='5aacb60ad29c418c9161e71bb72da036',uuid=aefbf308-7f99-4a76-8d5e-54613f6bdf83,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "address": "fa:16:3e:22:86:b1", "network": {"id": "2534100b-a4f5-4f68-9f75-a1af37008664", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1512787070", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.144", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685df8a6-7b", "ovs_interfaceid": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.057 2 DEBUG nova.network.os_vif_util [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converting VIF {"id": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "address": "fa:16:3e:22:86:b1", "network": {"id": "2534100b-a4f5-4f68-9f75-a1af37008664", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1512787070", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.144", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685df8a6-7b", "ovs_interfaceid": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.058 2 DEBUG nova.network.os_vif_util [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:86:b1,bridge_name='br-int',has_traffic_filtering=True,id=685df8a6-7b64-441e-9a56-4ede8db5faa9,network=Network(2534100b-a4f5-4f68-9f75-a1af37008664),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap685df8a6-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.058 2 DEBUG os_vif [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:86:b1,bridge_name='br-int',has_traffic_filtering=True,id=685df8a6-7b64-441e-9a56-4ede8db5faa9,network=Network(2534100b-a4f5-4f68-9f75-a1af37008664),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap685df8a6-7b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.061 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap685df8a6-7b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.072 2 INFO os_vif [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:86:b1,bridge_name='br-int',has_traffic_filtering=True,id=685df8a6-7b64-441e-9a56-4ede8db5faa9,network=Network(2534100b-a4f5-4f68-9f75-a1af37008664),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap685df8a6-7b')#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.073 2 DEBUG nova.virt.libvirt.vif [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:58:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-620013762',display_name='tempest-ServersTestMultiNic-server-620013762',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-620013762',id=25,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:58:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3566f35c659a45bd9b9bbddf6552ed43',ramdisk_id='',reservation_id='r-xsoyr0gg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-840673976',owner_user_name='tempest-ServersTestMultiNic-840673976-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:58:36Z,user_data=None,user_id='5aacb60ad29c418c9161e71bb72da036',uuid=aefbf308-7f99-4a76-8d5e-54613f6bdf83,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9db6eef3-e4da-4c17-91ea-1c3124906f61", "address": "fa:16:3e:24:11:79", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9db6eef3-e4", "ovs_interfaceid": "9db6eef3-e4da-4c17-91ea-1c3124906f61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.074 2 DEBUG nova.network.os_vif_util [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converting VIF {"id": "9db6eef3-e4da-4c17-91ea-1c3124906f61", "address": "fa:16:3e:24:11:79", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9db6eef3-e4", "ovs_interfaceid": "9db6eef3-e4da-4c17-91ea-1c3124906f61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.074 2 DEBUG nova.network.os_vif_util [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:11:79,bridge_name='br-int',has_traffic_filtering=True,id=9db6eef3-e4da-4c17-91ea-1c3124906f61,network=Network(03753014-b87c-4672-9d66-fdc254813b6e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9db6eef3-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.074 2 DEBUG os_vif [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:11:79,bridge_name='br-int',has_traffic_filtering=True,id=9db6eef3-e4da-4c17-91ea-1c3124906f61,network=Network(03753014-b87c-4672-9d66-fdc254813b6e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9db6eef3-e4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.076 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9db6eef3-e4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:38.085 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[56bd13aa-e34d-4ca2-a7fa-b1fa34b46125]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:38.086 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[83ad917b-f0fc-42a9-8449-f7995bad0172]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.086 2 DEBUG nova.storage.rbd_utils [None req-65cbd964-17d7-4e04-a3f0-2ccc6b77dda8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] flattening images/58a309a9-ebdf-4853-9550-ca13b12b33e8 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct 14 04:58:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:38.101 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2c3ee7e1-94f7-41dc-a122-1b462b413530]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609313, 'reachable_time': 38909, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292071, 'error': None, 'target': 'ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:38.103 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2534100b-a4f5-4f68-9f75-a1af37008664 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 04:58:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:38.103 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[b5df470d-15fb-49b7-8a6f-7a3a0bd4eae3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:38.104 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 9db6eef3-e4da-4c17-91ea-1c3124906f61 in datapath 03753014-b87c-4672-9d66-fdc254813b6e unbound from our chassis#033[00m
Oct 14 04:58:38 np0005486808 systemd[1]: run-netns-ovnmeta\x2d2534100b\x2da4f5\x2d4f68\x2d9f75\x2da1af37008664.mount: Deactivated successfully.
Oct 14 04:58:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:38.106 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 03753014-b87c-4672-9d66-fdc254813b6e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 04:58:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:38.107 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[35ffd825-e041-429d-97a8-e654bb096543]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:38.108 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e namespace which is not needed anymore#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.126 2 INFO os_vif [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:11:79,bridge_name='br-int',has_traffic_filtering=True,id=9db6eef3-e4da-4c17-91ea-1c3124906f61,network=Network(03753014-b87c-4672-9d66-fdc254813b6e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9db6eef3-e4')#033[00m
Oct 14 04:58:38 np0005486808 neutron-haproxy-ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e[291545]: [NOTICE]   (291551) : haproxy version is 2.8.14-c23fe91
Oct 14 04:58:38 np0005486808 neutron-haproxy-ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e[291545]: [NOTICE]   (291551) : path to executable is /usr/sbin/haproxy
Oct 14 04:58:38 np0005486808 neutron-haproxy-ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e[291545]: [WARNING]  (291551) : Exiting Master process...
Oct 14 04:58:38 np0005486808 neutron-haproxy-ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e[291545]: [ALERT]    (291551) : Current worker (291553) exited with code 143 (Terminated)
Oct 14 04:58:38 np0005486808 neutron-haproxy-ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e[291545]: [WARNING]  (291551) : All workers exited. Exiting... (0)
Oct 14 04:58:38 np0005486808 systemd[1]: libpod-67a451c8f393c839e08ab7ffe9478e8a341478f6b97def760f88579d6647c2ca.scope: Deactivated successfully.
Oct 14 04:58:38 np0005486808 conmon[291545]: conmon 67a451c8f393c839e08a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-67a451c8f393c839e08ab7ffe9478e8a341478f6b97def760f88579d6647c2ca.scope/container/memory.events
Oct 14 04:58:38 np0005486808 podman[292142]: 2025-10-14 08:58:38.290825013 +0000 UTC m=+0.070543693 container died 67a451c8f393c839e08ab7ffe9478e8a341478f6b97def760f88579d6647c2ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team)
Oct 14 04:58:38 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-67a451c8f393c839e08ab7ffe9478e8a341478f6b97def760f88579d6647c2ca-userdata-shm.mount: Deactivated successfully.
Oct 14 04:58:38 np0005486808 systemd[1]: var-lib-containers-storage-overlay-82e1eac5cfa0dfaeaa47633dd3bc85d51f3de862c522038183f2763ceaaa4205-merged.mount: Deactivated successfully.
Oct 14 04:58:38 np0005486808 podman[292142]: 2025-10-14 08:58:38.364915842 +0000 UTC m=+0.144634522 container cleanup 67a451c8f393c839e08ab7ffe9478e8a341478f6b97def760f88579d6647c2ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 14 04:58:38 np0005486808 systemd[1]: libpod-conmon-67a451c8f393c839e08ab7ffe9478e8a341478f6b97def760f88579d6647c2ca.scope: Deactivated successfully.
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.395 2 DEBUG oslo_concurrency.processutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf 51c76e0f-284d-4122-83b4-32c4518b9056_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.367s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:38 np0005486808 podman[292186]: 2025-10-14 08:58:38.431578399 +0000 UTC m=+0.043044588 container remove 67a451c8f393c839e08ab7ffe9478e8a341478f6b97def760f88579d6647c2ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 14 04:58:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:38.437 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ca0e04bf-618a-4e96-a93a-787a75aa1040]: (4, ('Tue Oct 14 08:58:38 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e (67a451c8f393c839e08ab7ffe9478e8a341478f6b97def760f88579d6647c2ca)\n67a451c8f393c839e08ab7ffe9478e8a341478f6b97def760f88579d6647c2ca\nTue Oct 14 08:58:38 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e (67a451c8f393c839e08ab7ffe9478e8a341478f6b97def760f88579d6647c2ca)\n67a451c8f393c839e08ab7ffe9478e8a341478f6b97def760f88579d6647c2ca\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:38.438 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2d113bb0-b213-42ea-8865-36f69f137f92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:38.439 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03753014-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:38 np0005486808 kernel: tap03753014-b0: left promiscuous mode
Oct 14 04:58:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:38.445 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[18151158-fab8-4657-ba44-ab16599f5655]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:38.467 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3c18f7e0-d95d-408c-a8c5-d787be083de1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:38.469 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1f304684-93e2-4da1-b1d1-17d9584628dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.475 2 DEBUG nova.storage.rbd_utils [None req-65cbd964-17d7-4e04-a3f0-2ccc6b77dda8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] removing snapshot(b279fc5a287f476b829290bb8093336b) on rbd image(2826d9ce-d739-49a1-abfa-80cee62173fb_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.481 2 DEBUG nova.storage.rbd_utils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] resizing rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 04:58:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:38.485 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1b2c12e9-c104-4b45-8d79-4ed7025fa970]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609212, 'reachable_time': 40198, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292241, 'error': None, 'target': 'ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:38.488 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-03753014-b87c-4672-9d66-fdc254813b6e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 04:58:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:38.488 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[60ecf702-6052-418b-8510-ce5202dcf638]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.572 2 DEBUG nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.573 2 DEBUG nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Ensure instance console log exists: /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.574 2 DEBUG oslo_concurrency.lockutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.574 2 DEBUG oslo_concurrency.lockutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.574 2 DEBUG oslo_concurrency.lockutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.575 2 DEBUG nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:08Z,direct_url=<?>,disk_format='qcow2',id=e2368e3e-f504-40e6-a9d3-67df18c845bb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:10Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.580 2 WARNING nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.584 2 DEBUG nova.virt.libvirt.host [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.585 2 DEBUG nova.virt.libvirt.host [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.587 2 DEBUG nova.virt.libvirt.host [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.588 2 DEBUG nova.virt.libvirt.host [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.588 2 DEBUG nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.588 2 DEBUG nova.virt.hardware [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:08Z,direct_url=<?>,disk_format='qcow2',id=e2368e3e-f504-40e6-a9d3-67df18c845bb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:10Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.589 2 DEBUG nova.virt.hardware [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.589 2 DEBUG nova.virt.hardware [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.589 2 DEBUG nova.virt.hardware [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.589 2 DEBUG nova.virt.hardware [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.589 2 DEBUG nova.virt.hardware [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.589 2 DEBUG nova.virt.hardware [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.590 2 DEBUG nova.virt.hardware [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.590 2 DEBUG nova.virt.hardware [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.590 2 DEBUG nova.virt.hardware [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.590 2 DEBUG nova.virt.hardware [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.590 2 DEBUG nova.objects.instance [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lazy-loading 'vcpu_model' on Instance uuid 51c76e0f-284d-4122-83b4-32c4518b9056 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.609 2 DEBUG oslo_concurrency.processutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.691 2 INFO nova.virt.libvirt.driver [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Deleting instance files /var/lib/nova/instances/aefbf308-7f99-4a76-8d5e-54613f6bdf83_del#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.693 2 INFO nova.virt.libvirt.driver [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Deletion of /var/lib/nova/instances/aefbf308-7f99-4a76-8d5e-54613f6bdf83_del complete#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.754 2 INFO nova.compute.manager [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Took 1.30 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.755 2 DEBUG oslo.service.loopingcall [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.756 2 DEBUG nova.compute.manager [-] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.757 2 DEBUG nova.network.neutron [-] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 04:58:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Oct 14 04:58:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Oct 14 04:58:38 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Oct 14 04:58:38 np0005486808 nova_compute[259627]: 2025-10-14 08:58:38.880 2 DEBUG nova.storage.rbd_utils [None req-65cbd964-17d7-4e04-a3f0-2ccc6b77dda8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] creating snapshot(snap) on rbd image(58a309a9-ebdf-4853-9550-ca13b12b33e8) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 14 04:58:38 np0005486808 systemd[1]: run-netns-ovnmeta\x2d03753014\x2db87c\x2d4672\x2d9d66\x2dfdc254813b6e.mount: Deactivated successfully.
Oct 14 04:58:39 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:58:39 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3480408461' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:58:39 np0005486808 nova_compute[259627]: 2025-10-14 08:58:39.074 2 DEBUG oslo_concurrency.processutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:39 np0005486808 nova_compute[259627]: 2025-10-14 08:58:39.092 2 DEBUG nova.storage.rbd_utils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:39 np0005486808 nova_compute[259627]: 2025-10-14 08:58:39.095 2 DEBUG oslo_concurrency.processutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:39 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1212: 305 pgs: 305 active+clean; 214 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 192 KiB/s wr, 193 op/s
Oct 14 04:58:39 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:58:39 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2901476574' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:58:39 np0005486808 nova_compute[259627]: 2025-10-14 08:58:39.577 2 DEBUG oslo_concurrency.processutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:39 np0005486808 nova_compute[259627]: 2025-10-14 08:58:39.582 2 DEBUG nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] End _get_guest_xml xml=<domain type="kvm">
Oct 14 04:58:39 np0005486808 nova_compute[259627]:  <uuid>51c76e0f-284d-4122-83b4-32c4518b9056</uuid>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:  <name>instance-00000016</name>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 04:58:39 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:      <nova:name>tempest-ServersAdmin275Test-server-546094612</nova:name>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 08:58:38</nova:creationTime>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 04:58:39 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:        <nova:user uuid="24a7b84f511340ae859b668a0e7becf6">tempest-ServersAdmin275Test-1795131452-project-member</nova:user>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:        <nova:project uuid="61066a48551647f18a4cfb7a7147e7ed">tempest-ServersAdmin275Test-1795131452</nova:project>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="e2368e3e-f504-40e6-a9d3-67df18c845bb"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:      <nova:ports/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <system>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:      <entry name="serial">51c76e0f-284d-4122-83b4-32c4518b9056</entry>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:      <entry name="uuid">51c76e0f-284d-4122-83b4-32c4518b9056</entry>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    </system>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:  <os>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:  </os>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:  <features>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:  </features>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:  </clock>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:  <devices>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 04:58:39 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/51c76e0f-284d-4122-83b4-32c4518b9056_disk">
Oct 14 04:58:39 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:58:39 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 04:58:39 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/51c76e0f-284d-4122-83b4-32c4518b9056_disk.config">
Oct 14 04:58:39 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:58:39 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 04:58:39 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/console.log" append="off"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    </serial>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <video>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    </video>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 04:58:39 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    </rng>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 04:58:39 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 04:58:39 np0005486808 nova_compute[259627]:  </devices>
Oct 14 04:58:39 np0005486808 nova_compute[259627]: </domain>
Oct 14 04:58:39 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 04:58:39 np0005486808 nova_compute[259627]: 2025-10-14 08:58:39.659 2 DEBUG nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:58:39 np0005486808 nova_compute[259627]: 2025-10-14 08:58:39.659 2 DEBUG nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:58:39 np0005486808 nova_compute[259627]: 2025-10-14 08:58:39.660 2 INFO nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Using config drive#033[00m
Oct 14 04:58:39 np0005486808 nova_compute[259627]: 2025-10-14 08:58:39.679 2 DEBUG nova.storage.rbd_utils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:39 np0005486808 nova_compute[259627]: 2025-10-14 08:58:39.697 2 DEBUG nova.objects.instance [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lazy-loading 'ec2_ids' on Instance uuid 51c76e0f-284d-4122-83b4-32c4518b9056 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:58:39 np0005486808 nova_compute[259627]: 2025-10-14 08:58:39.767 2 DEBUG nova.objects.instance [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lazy-loading 'keypairs' on Instance uuid 51c76e0f-284d-4122-83b4-32c4518b9056 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:58:39 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Oct 14 04:58:39 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Oct 14 04:58:39 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Oct 14 04:58:39 np0005486808 nova_compute[259627]: 2025-10-14 08:58:39.987 2 DEBUG nova.compute.manager [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received event network-vif-unplugged-57b4d441-0c29-4419-b20b-3b5c4223b7a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:58:39 np0005486808 nova_compute[259627]: 2025-10-14 08:58:39.988 2 DEBUG oslo_concurrency.lockutils [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:39 np0005486808 nova_compute[259627]: 2025-10-14 08:58:39.989 2 DEBUG oslo_concurrency.lockutils [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:39 np0005486808 nova_compute[259627]: 2025-10-14 08:58:39.989 2 DEBUG oslo_concurrency.lockutils [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:39 np0005486808 nova_compute[259627]: 2025-10-14 08:58:39.990 2 DEBUG nova.compute.manager [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] No waiting events found dispatching network-vif-unplugged-57b4d441-0c29-4419-b20b-3b5c4223b7a8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:58:39 np0005486808 nova_compute[259627]: 2025-10-14 08:58:39.990 2 DEBUG nova.compute.manager [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received event network-vif-unplugged-57b4d441-0c29-4419-b20b-3b5c4223b7a8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 04:58:39 np0005486808 nova_compute[259627]: 2025-10-14 08:58:39.991 2 DEBUG nova.compute.manager [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received event network-vif-plugged-57b4d441-0c29-4419-b20b-3b5c4223b7a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:58:39 np0005486808 nova_compute[259627]: 2025-10-14 08:58:39.991 2 DEBUG oslo_concurrency.lockutils [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:39 np0005486808 nova_compute[259627]: 2025-10-14 08:58:39.992 2 DEBUG oslo_concurrency.lockutils [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:39 np0005486808 nova_compute[259627]: 2025-10-14 08:58:39.992 2 DEBUG oslo_concurrency.lockutils [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:39 np0005486808 nova_compute[259627]: 2025-10-14 08:58:39.993 2 DEBUG nova.compute.manager [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] No waiting events found dispatching network-vif-plugged-57b4d441-0c29-4419-b20b-3b5c4223b7a8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:58:39 np0005486808 nova_compute[259627]: 2025-10-14 08:58:39.993 2 WARNING nova.compute.manager [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received unexpected event network-vif-plugged-57b4d441-0c29-4419-b20b-3b5c4223b7a8 for instance with vm_state active and task_state deleting.#033[00m
Oct 14 04:58:39 np0005486808 nova_compute[259627]: 2025-10-14 08:58:39.994 2 DEBUG nova.compute.manager [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received event network-vif-unplugged-685df8a6-7b64-441e-9a56-4ede8db5faa9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:58:39 np0005486808 nova_compute[259627]: 2025-10-14 08:58:39.994 2 DEBUG oslo_concurrency.lockutils [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:39 np0005486808 nova_compute[259627]: 2025-10-14 08:58:39.995 2 DEBUG oslo_concurrency.lockutils [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:39 np0005486808 nova_compute[259627]: 2025-10-14 08:58:39.995 2 DEBUG oslo_concurrency.lockutils [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:39 np0005486808 nova_compute[259627]: 2025-10-14 08:58:39.996 2 DEBUG nova.compute.manager [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] No waiting events found dispatching network-vif-unplugged-685df8a6-7b64-441e-9a56-4ede8db5faa9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:58:39 np0005486808 nova_compute[259627]: 2025-10-14 08:58:39.996 2 DEBUG nova.compute.manager [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received event network-vif-unplugged-685df8a6-7b64-441e-9a56-4ede8db5faa9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 04:58:39 np0005486808 nova_compute[259627]: 2025-10-14 08:58:39.997 2 DEBUG nova.compute.manager [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received event network-vif-plugged-685df8a6-7b64-441e-9a56-4ede8db5faa9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:58:39 np0005486808 nova_compute[259627]: 2025-10-14 08:58:39.997 2 DEBUG oslo_concurrency.lockutils [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:39 np0005486808 nova_compute[259627]: 2025-10-14 08:58:39.998 2 DEBUG oslo_concurrency.lockutils [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:39 np0005486808 nova_compute[259627]: 2025-10-14 08:58:39.998 2 DEBUG oslo_concurrency.lockutils [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:39 np0005486808 nova_compute[259627]: 2025-10-14 08:58:39.999 2 DEBUG nova.compute.manager [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] No waiting events found dispatching network-vif-plugged-685df8a6-7b64-441e-9a56-4ede8db5faa9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:58:40 np0005486808 nova_compute[259627]: 2025-10-14 08:58:39.999 2 WARNING nova.compute.manager [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received unexpected event network-vif-plugged-685df8a6-7b64-441e-9a56-4ede8db5faa9 for instance with vm_state active and task_state deleting.#033[00m
Oct 14 04:58:40 np0005486808 nova_compute[259627]: 2025-10-14 08:58:40.000 2 DEBUG nova.compute.manager [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received event network-vif-unplugged-9db6eef3-e4da-4c17-91ea-1c3124906f61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:58:40 np0005486808 nova_compute[259627]: 2025-10-14 08:58:40.000 2 DEBUG oslo_concurrency.lockutils [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:40 np0005486808 nova_compute[259627]: 2025-10-14 08:58:40.001 2 DEBUG oslo_concurrency.lockutils [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:40 np0005486808 nova_compute[259627]: 2025-10-14 08:58:40.001 2 DEBUG oslo_concurrency.lockutils [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:40 np0005486808 nova_compute[259627]: 2025-10-14 08:58:40.001 2 DEBUG nova.compute.manager [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] No waiting events found dispatching network-vif-unplugged-9db6eef3-e4da-4c17-91ea-1c3124906f61 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:58:40 np0005486808 nova_compute[259627]: 2025-10-14 08:58:40.002 2 DEBUG nova.compute.manager [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received event network-vif-unplugged-9db6eef3-e4da-4c17-91ea-1c3124906f61 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 04:58:40 np0005486808 nova_compute[259627]: 2025-10-14 08:58:40.002 2 DEBUG nova.compute.manager [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received event network-vif-plugged-9db6eef3-e4da-4c17-91ea-1c3124906f61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:58:40 np0005486808 nova_compute[259627]: 2025-10-14 08:58:40.003 2 DEBUG oslo_concurrency.lockutils [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:40 np0005486808 nova_compute[259627]: 2025-10-14 08:58:40.003 2 DEBUG oslo_concurrency.lockutils [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:40 np0005486808 nova_compute[259627]: 2025-10-14 08:58:40.004 2 DEBUG oslo_concurrency.lockutils [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:40 np0005486808 nova_compute[259627]: 2025-10-14 08:58:40.004 2 DEBUG nova.compute.manager [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] No waiting events found dispatching network-vif-plugged-9db6eef3-e4da-4c17-91ea-1c3124906f61 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:58:40 np0005486808 nova_compute[259627]: 2025-10-14 08:58:40.005 2 WARNING nova.compute.manager [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received unexpected event network-vif-plugged-9db6eef3-e4da-4c17-91ea-1c3124906f61 for instance with vm_state active and task_state deleting.#033[00m
Oct 14 04:58:40 np0005486808 nova_compute[259627]: 2025-10-14 08:58:40.005 2 DEBUG nova.compute.manager [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received event network-vif-deleted-9db6eef3-e4da-4c17-91ea-1c3124906f61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:58:40 np0005486808 nova_compute[259627]: 2025-10-14 08:58:40.006 2 INFO nova.compute.manager [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Neutron deleted interface 9db6eef3-e4da-4c17-91ea-1c3124906f61; detaching it from the instance and deleting it from the info cache#033[00m
Oct 14 04:58:40 np0005486808 nova_compute[259627]: 2025-10-14 08:58:40.006 2 DEBUG nova.network.neutron [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Updating instance_info_cache with network_info: [{"id": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "address": "fa:16:3e:3c:7f:93", "network": {"id": "03753014-b87c-4672-9d66-fdc254813b6e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1943015237", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57b4d441-0c", "ovs_interfaceid": "57b4d441-0c29-4419-b20b-3b5c4223b7a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "address": "fa:16:3e:22:86:b1", "network": {"id": "2534100b-a4f5-4f68-9f75-a1af37008664", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1512787070", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.144", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685df8a6-7b", "ovs_interfaceid": "685df8a6-7b64-441e-9a56-4ede8db5faa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:58:40 np0005486808 nova_compute[259627]: 2025-10-14 08:58:40.011 2 INFO nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Creating config drive at /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/disk.config#033[00m
Oct 14 04:58:40 np0005486808 nova_compute[259627]: 2025-10-14 08:58:40.021 2 DEBUG oslo_concurrency.processutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp89uk52cc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:40 np0005486808 nova_compute[259627]: 2025-10-14 08:58:40.088 2 DEBUG nova.compute.manager [req-6e0d02ef-2ad4-485f-85fd-bb824baceade req-15cb7940-0a81-42ce-bf84-b8396785eaf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Detach interface failed, port_id=9db6eef3-e4da-4c17-91ea-1c3124906f61, reason: Instance aefbf308-7f99-4a76-8d5e-54613f6bdf83 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct 14 04:58:40 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:40Z|00183|binding|INFO|Releasing lport 0616bbde-729a-4cd4-ba39-5fcdf59ece5e from this chassis (sb_readonly=0)
Oct 14 04:58:40 np0005486808 nova_compute[259627]: 2025-10-14 08:58:40.170 2 DEBUG oslo_concurrency.processutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp89uk52cc" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:40 np0005486808 nova_compute[259627]: 2025-10-14 08:58:40.206 2 DEBUG nova.storage.rbd_utils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:40 np0005486808 nova_compute[259627]: 2025-10-14 08:58:40.209 2 DEBUG oslo_concurrency.processutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/disk.config 51c76e0f-284d-4122-83b4-32c4518b9056_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:40 np0005486808 nova_compute[259627]: 2025-10-14 08:58:40.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:40 np0005486808 nova_compute[259627]: 2025-10-14 08:58:40.393 2 DEBUG oslo_concurrency.processutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/disk.config 51c76e0f-284d-4122-83b4-32c4518b9056_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.184s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:40 np0005486808 nova_compute[259627]: 2025-10-14 08:58:40.394 2 INFO nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Deleting local config drive /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/disk.config because it was imported into RBD.#033[00m
Oct 14 04:58:40 np0005486808 nova_compute[259627]: 2025-10-14 08:58:40.483 2 DEBUG nova.network.neutron [-] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:58:40 np0005486808 systemd-machined[214636]: New machine qemu-27-instance-00000016.
Oct 14 04:58:40 np0005486808 nova_compute[259627]: 2025-10-14 08:58:40.503 2 INFO nova.compute.manager [-] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Took 1.75 seconds to deallocate network for instance.#033[00m
Oct 14 04:58:40 np0005486808 systemd[1]: Started Virtual Machine qemu-27-instance-00000016.
Oct 14 04:58:40 np0005486808 nova_compute[259627]: 2025-10-14 08:58:40.548 2 DEBUG oslo_concurrency.lockutils [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:40 np0005486808 nova_compute[259627]: 2025-10-14 08:58:40.548 2 DEBUG oslo_concurrency.lockutils [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:40 np0005486808 podman[292424]: 2025-10-14 08:58:40.573962517 +0000 UTC m=+0.085233474 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 04:58:40 np0005486808 podman[292423]: 2025-10-14 08:58:40.58304309 +0000 UTC m=+0.097404993 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller)
Oct 14 04:58:40 np0005486808 nova_compute[259627]: 2025-10-14 08:58:40.667 2 DEBUG oslo_concurrency.processutils [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:58:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1943102798' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:58:41 np0005486808 nova_compute[259627]: 2025-10-14 08:58:41.096 2 DEBUG oslo_concurrency.processutils [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:41 np0005486808 nova_compute[259627]: 2025-10-14 08:58:41.104 2 DEBUG nova.compute.provider_tree [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:58:41 np0005486808 nova_compute[259627]: 2025-10-14 08:58:41.125 2 DEBUG nova.scheduler.client.report [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:58:41 np0005486808 nova_compute[259627]: 2025-10-14 08:58:41.152 2 DEBUG oslo_concurrency.lockutils [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:41 np0005486808 nova_compute[259627]: 2025-10-14 08:58:41.196 2 INFO nova.scheduler.client.report [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Deleted allocations for instance aefbf308-7f99-4a76-8d5e-54613f6bdf83#033[00m
Oct 14 04:58:41 np0005486808 nova_compute[259627]: 2025-10-14 08:58:41.273 2 DEBUG oslo_concurrency.lockutils [None req-ba0c7700-0074-48e2-95ad-e9da501a1c0d 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "aefbf308-7f99-4a76-8d5e-54613f6bdf83" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:41 np0005486808 nova_compute[259627]: 2025-10-14 08:58:41.408 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for 51c76e0f-284d-4122-83b4-32c4518b9056 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct 14 04:58:41 np0005486808 nova_compute[259627]: 2025-10-14 08:58:41.408 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432321.4078386, 51c76e0f-284d-4122-83b4-32c4518b9056 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:58:41 np0005486808 nova_compute[259627]: 2025-10-14 08:58:41.409 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] VM Resumed (Lifecycle Event)#033[00m
Oct 14 04:58:41 np0005486808 nova_compute[259627]: 2025-10-14 08:58:41.411 2 DEBUG nova.compute.manager [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 04:58:41 np0005486808 nova_compute[259627]: 2025-10-14 08:58:41.412 2 DEBUG nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 04:58:41 np0005486808 nova_compute[259627]: 2025-10-14 08:58:41.415 2 INFO nova.virt.libvirt.driver [-] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Instance spawned successfully.#033[00m
Oct 14 04:58:41 np0005486808 nova_compute[259627]: 2025-10-14 08:58:41.415 2 DEBUG nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 04:58:41 np0005486808 nova_compute[259627]: 2025-10-14 08:58:41.440 2 INFO nova.virt.libvirt.driver [None req-65cbd964-17d7-4e04-a3f0-2ccc6b77dda8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Snapshot image upload complete#033[00m
Oct 14 04:58:41 np0005486808 nova_compute[259627]: 2025-10-14 08:58:41.441 2 INFO nova.compute.manager [None req-65cbd964-17d7-4e04-a3f0-2ccc6b77dda8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Took 5.23 seconds to snapshot the instance on the hypervisor.#033[00m
Oct 14 04:58:41 np0005486808 nova_compute[259627]: 2025-10-14 08:58:41.446 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:58:41 np0005486808 nova_compute[259627]: 2025-10-14 08:58:41.449 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:58:41 np0005486808 nova_compute[259627]: 2025-10-14 08:58:41.456 2 DEBUG nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:58:41 np0005486808 nova_compute[259627]: 2025-10-14 08:58:41.456 2 DEBUG nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:58:41 np0005486808 nova_compute[259627]: 2025-10-14 08:58:41.457 2 DEBUG nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:58:41 np0005486808 nova_compute[259627]: 2025-10-14 08:58:41.457 2 DEBUG nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:58:41 np0005486808 nova_compute[259627]: 2025-10-14 08:58:41.458 2 DEBUG nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:58:41 np0005486808 nova_compute[259627]: 2025-10-14 08:58:41.458 2 DEBUG nova.virt.libvirt.driver [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:58:41 np0005486808 nova_compute[259627]: 2025-10-14 08:58:41.497 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct 14 04:58:41 np0005486808 nova_compute[259627]: 2025-10-14 08:58:41.498 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432321.4106197, 51c76e0f-284d-4122-83b4-32c4518b9056 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:58:41 np0005486808 nova_compute[259627]: 2025-10-14 08:58:41.498 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] VM Started (Lifecycle Event)#033[00m
Oct 14 04:58:41 np0005486808 nova_compute[259627]: 2025-10-14 08:58:41.522 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:58:41 np0005486808 nova_compute[259627]: 2025-10-14 08:58:41.525 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:58:41 np0005486808 nova_compute[259627]: 2025-10-14 08:58:41.551 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct 14 04:58:41 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1214: 305 pgs: 305 active+clean; 180 MiB data, 362 MiB used, 60 GiB / 60 GiB avail; 11 MiB/s rd, 7.1 MiB/s wr, 413 op/s
Oct 14 04:58:41 np0005486808 nova_compute[259627]: 2025-10-14 08:58:41.568 2 DEBUG nova.compute.manager [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:58:41 np0005486808 nova_compute[259627]: 2025-10-14 08:58:41.626 2 DEBUG oslo_concurrency.lockutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:41 np0005486808 nova_compute[259627]: 2025-10-14 08:58:41.627 2 DEBUG oslo_concurrency.lockutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:41 np0005486808 nova_compute[259627]: 2025-10-14 08:58:41.628 2 DEBUG nova.objects.instance [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct 14 04:58:41 np0005486808 nova_compute[259627]: 2025-10-14 08:58:41.689 2 DEBUG oslo_concurrency.lockutils [None req-9c317e18-1307-41b8-8441-f293a6089333 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:41 np0005486808 nova_compute[259627]: 2025-10-14 08:58:41.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 04:58:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Oct 14 04:58:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Oct 14 04:58:42 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Oct 14 04:58:42 np0005486808 nova_compute[259627]: 2025-10-14 08:58:42.839 2 DEBUG nova.compute.manager [req-9f2af65b-a0cd-49b0-9266-d3dc1dce6562 req-fe034459-e12e-4a0a-b17b-b1b14ef5be84 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received event network-vif-deleted-57b4d441-0c29-4419-b20b-3b5c4223b7a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:58:42 np0005486808 nova_compute[259627]: 2025-10-14 08:58:42.840 2 DEBUG nova.compute.manager [req-9f2af65b-a0cd-49b0-9266-d3dc1dce6562 req-fe034459-e12e-4a0a-b17b-b1b14ef5be84 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Received event network-vif-deleted-685df8a6-7b64-441e-9a56-4ede8db5faa9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:58:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 04:58:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:58:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 04:58:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:58:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000694346938692453 of space, bias 1.0, pg target 0.2083040816077359 quantized to 32 (current 32)
Oct 14 04:58:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:58:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:58:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:58:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:58:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:58:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010121097056716806 of space, bias 1.0, pg target 0.3036329117015042 quantized to 32 (current 32)
Oct 14 04:58:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:58:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 04:58:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:58:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:58:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:58:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 04:58:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:58:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 04:58:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:58:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:58:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:58:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 04:58:43 np0005486808 nova_compute[259627]: 2025-10-14 08:58:43.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:58:43 np0005486808 nova_compute[259627]: 2025-10-14 08:58:43.271 2 DEBUG oslo_concurrency.lockutils [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "2826d9ce-d739-49a1-abfa-80cee62173fb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:58:43 np0005486808 nova_compute[259627]: 2025-10-14 08:58:43.272 2 DEBUG oslo_concurrency.lockutils [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "2826d9ce-d739-49a1-abfa-80cee62173fb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:58:43 np0005486808 nova_compute[259627]: 2025-10-14 08:58:43.273 2 DEBUG oslo_concurrency.lockutils [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "2826d9ce-d739-49a1-abfa-80cee62173fb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:58:43 np0005486808 nova_compute[259627]: 2025-10-14 08:58:43.274 2 DEBUG oslo_concurrency.lockutils [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "2826d9ce-d739-49a1-abfa-80cee62173fb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:58:43 np0005486808 nova_compute[259627]: 2025-10-14 08:58:43.274 2 DEBUG oslo_concurrency.lockutils [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "2826d9ce-d739-49a1-abfa-80cee62173fb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:58:43 np0005486808 nova_compute[259627]: 2025-10-14 08:58:43.276 2 INFO nova.compute.manager [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Terminating instance
Oct 14 04:58:43 np0005486808 nova_compute[259627]: 2025-10-14 08:58:43.278 2 DEBUG nova.compute.manager [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 04:58:43 np0005486808 kernel: tap146ca52f-0b (unregistering): left promiscuous mode
Oct 14 04:58:43 np0005486808 NetworkManager[44885]: <info>  [1760432323.3184] device (tap146ca52f-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 04:58:43 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:43Z|00184|binding|INFO|Releasing lport 146ca52f-0b4f-46f0-9153-1120bf1c9e4e from this chassis (sb_readonly=0)
Oct 14 04:58:43 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:43Z|00185|binding|INFO|Setting lport 146ca52f-0b4f-46f0-9153-1120bf1c9e4e down in Southbound
Oct 14 04:58:43 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:43Z|00186|binding|INFO|Removing iface tap146ca52f-0b ovn-installed in OVS
Oct 14 04:58:43 np0005486808 nova_compute[259627]: 2025-10-14 08:58:43.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:58:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:43.342 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:a1:5d 10.100.0.10'], port_security=['fa:16:3e:c7:a1:5d 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '2826d9ce-d739-49a1-abfa-80cee62173fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d87d2d744db48dc8b32bb4bf6847fce', 'neutron:revision_number': '4', 'neutron:security_group_ids': '45598031-691e-407d-b099-ed9702c4e6f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=53047605-2412-4fda-a69f-45911eab5576, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=146ca52f-0b4f-46f0-9153-1120bf1c9e4e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 04:58:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:43.343 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 146ca52f-0b4f-46f0-9153-1120bf1c9e4e in datapath 2322cf7a-0090-40fa-a558-42d84cc6fc2a unbound from our chassis
Oct 14 04:58:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:43.343 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2322cf7a-0090-40fa-a558-42d84cc6fc2a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 04:58:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:43.345 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c67909b3-3304-4995-9006-971f1dc6183e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 04:58:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:43.345 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a namespace which is not needed anymore
Oct 14 04:58:43 np0005486808 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000018.scope: Deactivated successfully.
Oct 14 04:58:43 np0005486808 nova_compute[259627]: 2025-10-14 08:58:43.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:58:43 np0005486808 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000018.scope: Consumed 3.455s CPU time.
Oct 14 04:58:43 np0005486808 systemd-machined[214636]: Machine qemu-25-instance-00000018 terminated.
Oct 14 04:58:43 np0005486808 NetworkManager[44885]: <info>  [1760432323.4956] manager: (tap146ca52f-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/96)
Oct 14 04:58:43 np0005486808 nova_compute[259627]: 2025-10-14 08:58:43.514 2 INFO nova.virt.libvirt.driver [-] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Instance destroyed successfully.
Oct 14 04:58:43 np0005486808 nova_compute[259627]: 2025-10-14 08:58:43.515 2 DEBUG nova.objects.instance [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lazy-loading 'resources' on Instance uuid 2826d9ce-d739-49a1-abfa-80cee62173fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 04:58:43 np0005486808 nova_compute[259627]: 2025-10-14 08:58:43.537 2 DEBUG nova.virt.libvirt.vif [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:58:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1877571750',display_name='tempest-ImagesTestJSON-server-1877571750',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1877571750',id=24,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:58:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='0d87d2d744db48dc8b32bb4bf6847fce',ramdisk_id='',reservation_id='r-3vi6yohy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-168259448',owner_user_name='tempest-ImagesTestJSON-168259448-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:58:41Z,user_data=None,user_id='3a217215c39e41fea2323ff7b3b4e6aa',uuid=2826d9ce-d739-49a1-abfa-80cee62173fb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "146ca52f-0b4f-46f0-9153-1120bf1c9e4e", "address": "fa:16:3e:c7:a1:5d", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146ca52f-0b", "ovs_interfaceid": "146ca52f-0b4f-46f0-9153-1120bf1c9e4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 04:58:43 np0005486808 nova_compute[259627]: 2025-10-14 08:58:43.537 2 DEBUG nova.network.os_vif_util [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converting VIF {"id": "146ca52f-0b4f-46f0-9153-1120bf1c9e4e", "address": "fa:16:3e:c7:a1:5d", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146ca52f-0b", "ovs_interfaceid": "146ca52f-0b4f-46f0-9153-1120bf1c9e4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 04:58:43 np0005486808 nova_compute[259627]: 2025-10-14 08:58:43.538 2 DEBUG nova.network.os_vif_util [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:a1:5d,bridge_name='br-int',has_traffic_filtering=True,id=146ca52f-0b4f-46f0-9153-1120bf1c9e4e,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap146ca52f-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 04:58:43 np0005486808 nova_compute[259627]: 2025-10-14 08:58:43.538 2 DEBUG os_vif [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:a1:5d,bridge_name='br-int',has_traffic_filtering=True,id=146ca52f-0b4f-46f0-9153-1120bf1c9e4e,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap146ca52f-0b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 04:58:43 np0005486808 nova_compute[259627]: 2025-10-14 08:58:43.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:58:43 np0005486808 nova_compute[259627]: 2025-10-14 08:58:43.540 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap146ca52f-0b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 04:58:43 np0005486808 nova_compute[259627]: 2025-10-14 08:58:43.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:58:43 np0005486808 nova_compute[259627]: 2025-10-14 08:58:43.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 04:58:43 np0005486808 nova_compute[259627]: 2025-10-14 08:58:43.548 2 INFO os_vif [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:a1:5d,bridge_name='br-int',has_traffic_filtering=True,id=146ca52f-0b4f-46f0-9153-1120bf1c9e4e,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap146ca52f-0b')
Oct 14 04:58:43 np0005486808 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[291154]: [NOTICE]   (291158) : haproxy version is 2.8.14-c23fe91
Oct 14 04:58:43 np0005486808 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[291154]: [NOTICE]   (291158) : path to executable is /usr/sbin/haproxy
Oct 14 04:58:43 np0005486808 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[291154]: [WARNING]  (291158) : Exiting Master process...
Oct 14 04:58:43 np0005486808 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[291154]: [ALERT]    (291158) : Current worker (291160) exited with code 143 (Terminated)
Oct 14 04:58:43 np0005486808 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[291154]: [WARNING]  (291158) : All workers exited. Exiting... (0)
Oct 14 04:58:43 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1216: 305 pgs: 305 active+clean; 180 MiB data, 362 MiB used, 60 GiB / 60 GiB avail; 11 MiB/s rd, 7.4 MiB/s wr, 432 op/s
Oct 14 04:58:43 np0005486808 systemd[1]: libpod-66379a62590509b0800e9c8e0e2d70b44414f672d17109219a0cfb5b1f7de2cc.scope: Deactivated successfully.
Oct 14 04:58:43 np0005486808 podman[292563]: 2025-10-14 08:58:43.564895492 +0000 UTC m=+0.124840156 container died 66379a62590509b0800e9c8e0e2d70b44414f672d17109219a0cfb5b1f7de2cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 04:58:43 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-66379a62590509b0800e9c8e0e2d70b44414f672d17109219a0cfb5b1f7de2cc-userdata-shm.mount: Deactivated successfully.
Oct 14 04:58:43 np0005486808 systemd[1]: var-lib-containers-storage-overlay-48c703f783893fc5cf0d9cbca7dbc962deb1b04986a44b3368c6b7cd5b11d993-merged.mount: Deactivated successfully.
Oct 14 04:58:43 np0005486808 podman[292563]: 2025-10-14 08:58:43.605851878 +0000 UTC m=+0.165796512 container cleanup 66379a62590509b0800e9c8e0e2d70b44414f672d17109219a0cfb5b1f7de2cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009)
Oct 14 04:58:43 np0005486808 systemd[1]: libpod-conmon-66379a62590509b0800e9c8e0e2d70b44414f672d17109219a0cfb5b1f7de2cc.scope: Deactivated successfully.
Oct 14 04:58:43 np0005486808 podman[292620]: 2025-10-14 08:58:43.686066438 +0000 UTC m=+0.053973777 container remove 66379a62590509b0800e9c8e0e2d70b44414f672d17109219a0cfb5b1f7de2cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:58:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:43.693 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cc636168-fe0f-4a23-adf9-e1ea25ba9885]: (4, ('Tue Oct 14 08:58:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a (66379a62590509b0800e9c8e0e2d70b44414f672d17109219a0cfb5b1f7de2cc)\n66379a62590509b0800e9c8e0e2d70b44414f672d17109219a0cfb5b1f7de2cc\nTue Oct 14 08:58:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a (66379a62590509b0800e9c8e0e2d70b44414f672d17109219a0cfb5b1f7de2cc)\n66379a62590509b0800e9c8e0e2d70b44414f672d17109219a0cfb5b1f7de2cc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 04:58:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:43.695 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1fe0d9f4-a2dc-412c-8732-da9f87ea12de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 04:58:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:43.695 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2322cf7a-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 04:58:43 np0005486808 nova_compute[259627]: 2025-10-14 08:58:43.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:58:43 np0005486808 nova_compute[259627]: 2025-10-14 08:58:43.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:58:43 np0005486808 kernel: tap2322cf7a-00: left promiscuous mode
Oct 14 04:58:43 np0005486808 nova_compute[259627]: 2025-10-14 08:58:43.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:58:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:43.722 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bbb388c4-31a7-455f-ab53-ab7775c355fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 04:58:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:43.759 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[266ca164-e325-4f72-9629-8bbcc30397cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 04:58:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:43.760 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cc3feb87-13ac-469d-84b0-c9890f242c6a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 04:58:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:43.775 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f9fcc8c7-cc78-487c-a6e3-fae2778649de]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 608559, 'reachable_time': 18363, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292635, 'error': None, 'target': 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 04:58:43 np0005486808 systemd[1]: run-netns-ovnmeta\x2d2322cf7a\x2d0090\x2d40fa\x2da558\x2d42d84cc6fc2a.mount: Deactivated successfully.
Oct 14 04:58:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:43.780 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 04:58:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:43.780 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[45cbd338-99e1-4b13-9e48-b49f670811c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 04:58:43 np0005486808 nova_compute[259627]: 2025-10-14 08:58:43.936 2 INFO nova.compute.manager [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Rebuilding instance
Oct 14 04:58:43 np0005486808 nova_compute[259627]: 2025-10-14 08:58:43.945 2 INFO nova.virt.libvirt.driver [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Deleting instance files /var/lib/nova/instances/2826d9ce-d739-49a1-abfa-80cee62173fb_del
Oct 14 04:58:43 np0005486808 nova_compute[259627]: 2025-10-14 08:58:43.947 2 INFO nova.virt.libvirt.driver [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Deletion of /var/lib/nova/instances/2826d9ce-d739-49a1-abfa-80cee62173fb_del complete
Oct 14 04:58:44 np0005486808 nova_compute[259627]: 2025-10-14 08:58:44.023 2 INFO nova.compute.manager [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Took 0.74 seconds to destroy the instance on the hypervisor.
Oct 14 04:58:44 np0005486808 nova_compute[259627]: 2025-10-14 08:58:44.024 2 DEBUG oslo.service.loopingcall [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 04:58:44 np0005486808 nova_compute[259627]: 2025-10-14 08:58:44.024 2 DEBUG nova.compute.manager [-] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 04:58:44 np0005486808 nova_compute[259627]: 2025-10-14 08:58:44.024 2 DEBUG nova.network.neutron [-] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 04:58:44 np0005486808 nova_compute[259627]: 2025-10-14 08:58:44.247 2 DEBUG nova.objects.instance [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 51c76e0f-284d-4122-83b4-32c4518b9056 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 04:58:44 np0005486808 nova_compute[259627]: 2025-10-14 08:58:44.263 2 DEBUG nova.compute.manager [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 04:58:44 np0005486808 nova_compute[259627]: 2025-10-14 08:58:44.309 2 DEBUG nova.objects.instance [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Lazy-loading 'pci_requests' on Instance uuid 51c76e0f-284d-4122-83b4-32c4518b9056 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 04:58:44 np0005486808 nova_compute[259627]: 2025-10-14 08:58:44.327 2 DEBUG nova.objects.instance [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Lazy-loading 'pci_devices' on Instance uuid 51c76e0f-284d-4122-83b4-32c4518b9056 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 04:58:44 np0005486808 nova_compute[259627]: 2025-10-14 08:58:44.340 2 DEBUG nova.objects.instance [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Lazy-loading 'resources' on Instance uuid 51c76e0f-284d-4122-83b4-32c4518b9056 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 04:58:44 np0005486808 nova_compute[259627]: 2025-10-14 08:58:44.352 2 DEBUG nova.objects.instance [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Lazy-loading 'migration_context' on Instance uuid 51c76e0f-284d-4122-83b4-32c4518b9056 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 04:58:44 np0005486808 nova_compute[259627]: 2025-10-14 08:58:44.366 2 DEBUG nova.objects.instance [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 14 04:58:44 np0005486808 nova_compute[259627]: 2025-10-14 08:58:44.369 2 DEBUG nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 14 04:58:45 np0005486808 nova_compute[259627]: 2025-10-14 08:58:44.999 2 DEBUG nova.compute.manager [req-b7eead56-ade6-4322-a167-c48fc7907a1c req-3dd05b49-00f2-4a1b-9ecc-7217130452e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Received event network-vif-unplugged-146ca52f-0b4f-46f0-9153-1120bf1c9e4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 04:58:45 np0005486808 nova_compute[259627]: 2025-10-14 08:58:45.000 2 DEBUG oslo_concurrency.lockutils [req-b7eead56-ade6-4322-a167-c48fc7907a1c req-3dd05b49-00f2-4a1b-9ecc-7217130452e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2826d9ce-d739-49a1-abfa-80cee62173fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:58:45 np0005486808 nova_compute[259627]: 2025-10-14 08:58:45.000 2 DEBUG oslo_concurrency.lockutils [req-b7eead56-ade6-4322-a167-c48fc7907a1c req-3dd05b49-00f2-4a1b-9ecc-7217130452e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2826d9ce-d739-49a1-abfa-80cee62173fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:58:45 np0005486808 nova_compute[259627]: 2025-10-14 08:58:45.001 2 DEBUG oslo_concurrency.lockutils [req-b7eead56-ade6-4322-a167-c48fc7907a1c req-3dd05b49-00f2-4a1b-9ecc-7217130452e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2826d9ce-d739-49a1-abfa-80cee62173fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:58:45 np0005486808 nova_compute[259627]: 2025-10-14 08:58:45.001 2 DEBUG nova.compute.manager [req-b7eead56-ade6-4322-a167-c48fc7907a1c req-3dd05b49-00f2-4a1b-9ecc-7217130452e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] No waiting events found dispatching network-vif-unplugged-146ca52f-0b4f-46f0-9153-1120bf1c9e4e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:58:45 np0005486808 nova_compute[259627]: 2025-10-14 08:58:45.002 2 DEBUG nova.compute.manager [req-b7eead56-ade6-4322-a167-c48fc7907a1c req-3dd05b49-00f2-4a1b-9ecc-7217130452e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Received event network-vif-unplugged-146ca52f-0b4f-46f0-9153-1120bf1c9e4e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 04:58:45 np0005486808 nova_compute[259627]: 2025-10-14 08:58:45.002 2 DEBUG nova.compute.manager [req-b7eead56-ade6-4322-a167-c48fc7907a1c req-3dd05b49-00f2-4a1b-9ecc-7217130452e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Received event network-vif-plugged-146ca52f-0b4f-46f0-9153-1120bf1c9e4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:58:45 np0005486808 nova_compute[259627]: 2025-10-14 08:58:45.003 2 DEBUG oslo_concurrency.lockutils [req-b7eead56-ade6-4322-a167-c48fc7907a1c req-3dd05b49-00f2-4a1b-9ecc-7217130452e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2826d9ce-d739-49a1-abfa-80cee62173fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:45 np0005486808 nova_compute[259627]: 2025-10-14 08:58:45.003 2 DEBUG oslo_concurrency.lockutils [req-b7eead56-ade6-4322-a167-c48fc7907a1c req-3dd05b49-00f2-4a1b-9ecc-7217130452e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2826d9ce-d739-49a1-abfa-80cee62173fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:45 np0005486808 nova_compute[259627]: 2025-10-14 08:58:45.004 2 DEBUG oslo_concurrency.lockutils [req-b7eead56-ade6-4322-a167-c48fc7907a1c req-3dd05b49-00f2-4a1b-9ecc-7217130452e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2826d9ce-d739-49a1-abfa-80cee62173fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:45 np0005486808 nova_compute[259627]: 2025-10-14 08:58:45.004 2 DEBUG nova.compute.manager [req-b7eead56-ade6-4322-a167-c48fc7907a1c req-3dd05b49-00f2-4a1b-9ecc-7217130452e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] No waiting events found dispatching network-vif-plugged-146ca52f-0b4f-46f0-9153-1120bf1c9e4e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:58:45 np0005486808 nova_compute[259627]: 2025-10-14 08:58:45.005 2 WARNING nova.compute.manager [req-b7eead56-ade6-4322-a167-c48fc7907a1c req-3dd05b49-00f2-4a1b-9ecc-7217130452e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Received unexpected event network-vif-plugged-146ca52f-0b4f-46f0-9153-1120bf1c9e4e for instance with vm_state paused and task_state deleting.#033[00m
Oct 14 04:58:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:45.428 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:45 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1217: 305 pgs: 305 active+clean; 88 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 13 MiB/s rd, 6.3 MiB/s wr, 602 op/s
Oct 14 04:58:45 np0005486808 nova_compute[259627]: 2025-10-14 08:58:45.675 2 DEBUG nova.network.neutron [-] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:58:45 np0005486808 nova_compute[259627]: 2025-10-14 08:58:45.707 2 INFO nova.compute.manager [-] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Took 1.68 seconds to deallocate network for instance.#033[00m
Oct 14 04:58:45 np0005486808 nova_compute[259627]: 2025-10-14 08:58:45.765 2 DEBUG oslo_concurrency.lockutils [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:45 np0005486808 nova_compute[259627]: 2025-10-14 08:58:45.766 2 DEBUG oslo_concurrency.lockutils [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:45 np0005486808 nova_compute[259627]: 2025-10-14 08:58:45.777 2 DEBUG nova.compute.manager [req-a4c94432-1834-43a2-8202-15369689fbdc req-61dc2ddc-dbce-4055-825c-470b65b877b3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Received event network-vif-deleted-146ca52f-0b4f-46f0-9153-1120bf1c9e4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:58:45 np0005486808 nova_compute[259627]: 2025-10-14 08:58:45.834 2 DEBUG oslo_concurrency.processutils [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:58:46 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4192626390' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:58:46 np0005486808 nova_compute[259627]: 2025-10-14 08:58:46.277 2 DEBUG oslo_concurrency.processutils [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:46 np0005486808 nova_compute[259627]: 2025-10-14 08:58:46.286 2 DEBUG nova.compute.provider_tree [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:58:46 np0005486808 nova_compute[259627]: 2025-10-14 08:58:46.309 2 DEBUG nova.scheduler.client.report [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:58:46 np0005486808 nova_compute[259627]: 2025-10-14 08:58:46.336 2 DEBUG oslo_concurrency.lockutils [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.570s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:46 np0005486808 nova_compute[259627]: 2025-10-14 08:58:46.371 2 INFO nova.scheduler.client.report [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Deleted allocations for instance 2826d9ce-d739-49a1-abfa-80cee62173fb#033[00m
Oct 14 04:58:46 np0005486808 nova_compute[259627]: 2025-10-14 08:58:46.448 2 DEBUG oslo_concurrency.lockutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:46 np0005486808 nova_compute[259627]: 2025-10-14 08:58:46.448 2 DEBUG oslo_concurrency.lockutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:46 np0005486808 nova_compute[259627]: 2025-10-14 08:58:46.453 2 DEBUG oslo_concurrency.lockutils [None req-5bc867d4-aac6-4834-bce6-e2d8a3112184 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "2826d9ce-d739-49a1-abfa-80cee62173fb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.181s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:46 np0005486808 nova_compute[259627]: 2025-10-14 08:58:46.475 2 DEBUG nova.compute.manager [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 04:58:46 np0005486808 nova_compute[259627]: 2025-10-14 08:58:46.529 2 DEBUG oslo_concurrency.lockutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:46 np0005486808 nova_compute[259627]: 2025-10-14 08:58:46.529 2 DEBUG oslo_concurrency.lockutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:46 np0005486808 nova_compute[259627]: 2025-10-14 08:58:46.537 2 DEBUG nova.virt.hardware [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 04:58:46 np0005486808 nova_compute[259627]: 2025-10-14 08:58:46.538 2 INFO nova.compute.claims [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 04:58:46 np0005486808 nova_compute[259627]: 2025-10-14 08:58:46.644 2 DEBUG oslo_concurrency.processutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:46 np0005486808 nova_compute[259627]: 2025-10-14 08:58:46.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:58:47 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1626172182' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:58:47 np0005486808 nova_compute[259627]: 2025-10-14 08:58:47.105 2 DEBUG oslo_concurrency.processutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:47 np0005486808 nova_compute[259627]: 2025-10-14 08:58:47.114 2 DEBUG nova.compute.provider_tree [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:58:47 np0005486808 nova_compute[259627]: 2025-10-14 08:58:47.133 2 DEBUG nova.scheduler.client.report [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:58:47 np0005486808 nova_compute[259627]: 2025-10-14 08:58:47.158 2 DEBUG oslo_concurrency.lockutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:47 np0005486808 nova_compute[259627]: 2025-10-14 08:58:47.159 2 DEBUG nova.compute.manager [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 04:58:47 np0005486808 nova_compute[259627]: 2025-10-14 08:58:47.207 2 DEBUG nova.compute.manager [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 04:58:47 np0005486808 nova_compute[259627]: 2025-10-14 08:58:47.209 2 DEBUG nova.network.neutron [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 04:58:47 np0005486808 nova_compute[259627]: 2025-10-14 08:58:47.233 2 INFO nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 04:58:47 np0005486808 nova_compute[259627]: 2025-10-14 08:58:47.255 2 DEBUG nova.compute.manager [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 04:58:47 np0005486808 nova_compute[259627]: 2025-10-14 08:58:47.334 2 DEBUG nova.compute.manager [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 04:58:47 np0005486808 nova_compute[259627]: 2025-10-14 08:58:47.337 2 DEBUG nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 04:58:47 np0005486808 nova_compute[259627]: 2025-10-14 08:58:47.337 2 INFO nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Creating image(s)#033[00m
Oct 14 04:58:47 np0005486808 nova_compute[259627]: 2025-10-14 08:58:47.370 2 DEBUG nova.storage.rbd_utils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image 3f3d9640-8200-45d8-ac25-bbc5d016d49f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:47 np0005486808 nova_compute[259627]: 2025-10-14 08:58:47.405 2 DEBUG nova.storage.rbd_utils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image 3f3d9640-8200-45d8-ac25-bbc5d016d49f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:47 np0005486808 nova_compute[259627]: 2025-10-14 08:58:47.438 2 DEBUG nova.storage.rbd_utils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image 3f3d9640-8200-45d8-ac25-bbc5d016d49f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:47 np0005486808 nova_compute[259627]: 2025-10-14 08:58:47.452 2 DEBUG oslo_concurrency.processutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 04:58:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Oct 14 04:58:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Oct 14 04:58:47 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Oct 14 04:58:47 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1219: 305 pgs: 305 active+clean; 88 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 12 MiB/s rd, 5.5 MiB/s wr, 525 op/s
Oct 14 04:58:47 np0005486808 nova_compute[259627]: 2025-10-14 08:58:47.557 2 DEBUG oslo_concurrency.processutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:47 np0005486808 nova_compute[259627]: 2025-10-14 08:58:47.558 2 DEBUG oslo_concurrency.lockutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:47 np0005486808 nova_compute[259627]: 2025-10-14 08:58:47.559 2 DEBUG oslo_concurrency.lockutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:47 np0005486808 nova_compute[259627]: 2025-10-14 08:58:47.559 2 DEBUG oslo_concurrency.lockutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:47 np0005486808 nova_compute[259627]: 2025-10-14 08:58:47.597 2 DEBUG nova.storage.rbd_utils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image 3f3d9640-8200-45d8-ac25-bbc5d016d49f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:47 np0005486808 nova_compute[259627]: 2025-10-14 08:58:47.601 2 DEBUG oslo_concurrency.processutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 3f3d9640-8200-45d8-ac25-bbc5d016d49f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:47 np0005486808 nova_compute[259627]: 2025-10-14 08:58:47.637 2 DEBUG nova.policy [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3a217215c39e41fea2323ff7b3b4e6aa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0d87d2d744db48dc8b32bb4bf6847fce', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 04:58:47 np0005486808 nova_compute[259627]: 2025-10-14 08:58:47.873 2 DEBUG oslo_concurrency.processutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 3f3d9640-8200-45d8-ac25-bbc5d016d49f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:47 np0005486808 nova_compute[259627]: 2025-10-14 08:58:47.925 2 DEBUG nova.storage.rbd_utils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] resizing rbd image 3f3d9640-8200-45d8-ac25-bbc5d016d49f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 04:58:48 np0005486808 nova_compute[259627]: 2025-10-14 08:58:48.017 2 DEBUG nova.objects.instance [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lazy-loading 'migration_context' on Instance uuid 3f3d9640-8200-45d8-ac25-bbc5d016d49f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:58:48 np0005486808 nova_compute[259627]: 2025-10-14 08:58:48.032 2 DEBUG nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 04:58:48 np0005486808 nova_compute[259627]: 2025-10-14 08:58:48.032 2 DEBUG nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Ensure instance console log exists: /var/lib/nova/instances/3f3d9640-8200-45d8-ac25-bbc5d016d49f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 04:58:48 np0005486808 nova_compute[259627]: 2025-10-14 08:58:48.033 2 DEBUG oslo_concurrency.lockutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:48 np0005486808 nova_compute[259627]: 2025-10-14 08:58:48.033 2 DEBUG oslo_concurrency.lockutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:48 np0005486808 nova_compute[259627]: 2025-10-14 08:58:48.033 2 DEBUG oslo_concurrency.lockutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:48 np0005486808 nova_compute[259627]: 2025-10-14 08:58:48.225 2 DEBUG nova.network.neutron [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Successfully created port: bcdd5079-efdb-47f7-99b0-21394b1d16e2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 04:58:48 np0005486808 nova_compute[259627]: 2025-10-14 08:58:48.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:48 np0005486808 nova_compute[259627]: 2025-10-14 08:58:48.938 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432313.9370747, a071857d-db87-4931-95ad-f8c627f74160 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:58:48 np0005486808 nova_compute[259627]: 2025-10-14 08:58:48.938 2 INFO nova.compute.manager [-] [instance: a071857d-db87-4931-95ad-f8c627f74160] VM Stopped (Lifecycle Event)#033[00m
Oct 14 04:58:48 np0005486808 nova_compute[259627]: 2025-10-14 08:58:48.961 2 DEBUG nova.compute.manager [None req-dd6757a8-c68d-4c82-ad37-e2076fc276b3 - - - - - -] [instance: a071857d-db87-4931-95ad-f8c627f74160] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:58:49 np0005486808 nova_compute[259627]: 2025-10-14 08:58:49.500 2 DEBUG nova.network.neutron [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Successfully updated port: bcdd5079-efdb-47f7-99b0-21394b1d16e2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 04:58:49 np0005486808 nova_compute[259627]: 2025-10-14 08:58:49.515 2 DEBUG oslo_concurrency.lockutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "refresh_cache-3f3d9640-8200-45d8-ac25-bbc5d016d49f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:58:49 np0005486808 nova_compute[259627]: 2025-10-14 08:58:49.516 2 DEBUG oslo_concurrency.lockutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquired lock "refresh_cache-3f3d9640-8200-45d8-ac25-bbc5d016d49f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:58:49 np0005486808 nova_compute[259627]: 2025-10-14 08:58:49.516 2 DEBUG nova.network.neutron [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 04:58:49 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1220: 305 pgs: 305 active+clean; 88 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 22 KiB/s wr, 197 op/s
Oct 14 04:58:49 np0005486808 nova_compute[259627]: 2025-10-14 08:58:49.654 2 DEBUG nova.network.neutron [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 04:58:49 np0005486808 nova_compute[259627]: 2025-10-14 08:58:49.813 2 DEBUG oslo_concurrency.lockutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:49 np0005486808 nova_compute[259627]: 2025-10-14 08:58:49.814 2 DEBUG oslo_concurrency.lockutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:49 np0005486808 nova_compute[259627]: 2025-10-14 08:58:49.833 2 DEBUG nova.compute.manager [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 04:58:49 np0005486808 nova_compute[259627]: 2025-10-14 08:58:49.914 2 DEBUG oslo_concurrency.lockutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:49 np0005486808 nova_compute[259627]: 2025-10-14 08:58:49.915 2 DEBUG oslo_concurrency.lockutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:49 np0005486808 nova_compute[259627]: 2025-10-14 08:58:49.925 2 DEBUG nova.virt.hardware [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 04:58:49 np0005486808 nova_compute[259627]: 2025-10-14 08:58:49.926 2 INFO nova.compute.claims [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.060 2 DEBUG oslo_concurrency.processutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.424 2 DEBUG nova.compute.manager [req-2ceacc9e-f9ab-4e1a-a93d-ba939f4ece37 req-e070dd0f-c5a1-412a-bf99-55b9e31a08d0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Received event network-changed-bcdd5079-efdb-47f7-99b0-21394b1d16e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.425 2 DEBUG nova.compute.manager [req-2ceacc9e-f9ab-4e1a-a93d-ba939f4ece37 req-e070dd0f-c5a1-412a-bf99-55b9e31a08d0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Refreshing instance network info cache due to event network-changed-bcdd5079-efdb-47f7-99b0-21394b1d16e2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.426 2 DEBUG oslo_concurrency.lockutils [req-2ceacc9e-f9ab-4e1a-a93d-ba939f4ece37 req-e070dd0f-c5a1-412a-bf99-55b9e31a08d0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-3f3d9640-8200-45d8-ac25-bbc5d016d49f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:58:50 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:58:50 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1160813913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.507 2 DEBUG oslo_concurrency.processutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.517 2 DEBUG nova.compute.provider_tree [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.533 2 DEBUG nova.scheduler.client.report [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.556 2 DEBUG oslo_concurrency.lockutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.558 2 DEBUG nova.compute.manager [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.602 2 DEBUG nova.compute.manager [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.603 2 DEBUG nova.network.neutron [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.621 2 INFO nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.638 2 DEBUG nova.compute.manager [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.711 2 DEBUG nova.compute.manager [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.713 2 DEBUG nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.713 2 INFO nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Creating image(s)#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.737 2 DEBUG nova.storage.rbd_utils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] rbd image 27fa4cf8-c08c-46a2-af8f-17c8980a2317_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.759 2 DEBUG nova.storage.rbd_utils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] rbd image 27fa4cf8-c08c-46a2-af8f-17c8980a2317_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.784 2 DEBUG nova.storage.rbd_utils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] rbd image 27fa4cf8-c08c-46a2-af8f-17c8980a2317_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.787 2 DEBUG oslo_concurrency.processutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.849 2 DEBUG oslo_concurrency.processutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.850 2 DEBUG oslo_concurrency.lockutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.850 2 DEBUG oslo_concurrency.lockutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.851 2 DEBUG oslo_concurrency.lockutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.871 2 DEBUG nova.storage.rbd_utils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] rbd image 27fa4cf8-c08c-46a2-af8f-17c8980a2317_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.874 2 DEBUG oslo_concurrency.processutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 27fa4cf8-c08c-46a2-af8f-17c8980a2317_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.926 2 DEBUG nova.network.neutron [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Updating instance_info_cache with network_info: [{"id": "bcdd5079-efdb-47f7-99b0-21394b1d16e2", "address": "fa:16:3e:da:fb:42", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcdd5079-ef", "ovs_interfaceid": "bcdd5079-efdb-47f7-99b0-21394b1d16e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.930 2 DEBUG nova.policy [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '56f2f9bf9b064a208d9ce5fe732c4ff7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3d3a647aa3914555a8a2c5fd6fe7a543', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.947 2 DEBUG oslo_concurrency.lockutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Releasing lock "refresh_cache-3f3d9640-8200-45d8-ac25-bbc5d016d49f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.948 2 DEBUG nova.compute.manager [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Instance network_info: |[{"id": "bcdd5079-efdb-47f7-99b0-21394b1d16e2", "address": "fa:16:3e:da:fb:42", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcdd5079-ef", "ovs_interfaceid": "bcdd5079-efdb-47f7-99b0-21394b1d16e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.949 2 DEBUG oslo_concurrency.lockutils [req-2ceacc9e-f9ab-4e1a-a93d-ba939f4ece37 req-e070dd0f-c5a1-412a-bf99-55b9e31a08d0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-3f3d9640-8200-45d8-ac25-bbc5d016d49f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.949 2 DEBUG nova.network.neutron [req-2ceacc9e-f9ab-4e1a-a93d-ba939f4ece37 req-e070dd0f-c5a1-412a-bf99-55b9e31a08d0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Refreshing network info cache for port bcdd5079-efdb-47f7-99b0-21394b1d16e2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.952 2 DEBUG nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Start _get_guest_xml network_info=[{"id": "bcdd5079-efdb-47f7-99b0-21394b1d16e2", "address": "fa:16:3e:da:fb:42", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcdd5079-ef", "ovs_interfaceid": "bcdd5079-efdb-47f7-99b0-21394b1d16e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.956 2 WARNING nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.963 2 DEBUG nova.virt.libvirt.host [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.964 2 DEBUG nova.virt.libvirt.host [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.969 2 DEBUG nova.virt.libvirt.host [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.969 2 DEBUG nova.virt.libvirt.host [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.970 2 DEBUG nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.970 2 DEBUG nova.virt.hardware [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.971 2 DEBUG nova.virt.hardware [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.971 2 DEBUG nova.virt.hardware [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.971 2 DEBUG nova.virt.hardware [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.971 2 DEBUG nova.virt.hardware [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.972 2 DEBUG nova.virt.hardware [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.972 2 DEBUG nova.virt.hardware [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.972 2 DEBUG nova.virt.hardware [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.973 2 DEBUG nova.virt.hardware [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.973 2 DEBUG nova.virt.hardware [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.973 2 DEBUG nova.virt.hardware [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 04:58:50 np0005486808 nova_compute[259627]: 2025-10-14 08:58:50.976 2 DEBUG oslo_concurrency.processutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:51 np0005486808 nova_compute[259627]: 2025-10-14 08:58:51.119 2 DEBUG oslo_concurrency.processutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 27fa4cf8-c08c-46a2-af8f-17c8980a2317_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.245s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:51 np0005486808 nova_compute[259627]: 2025-10-14 08:58:51.185 2 DEBUG nova.storage.rbd_utils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] resizing rbd image 27fa4cf8-c08c-46a2-af8f-17c8980a2317_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 04:58:51 np0005486808 nova_compute[259627]: 2025-10-14 08:58:51.273 2 DEBUG nova.objects.instance [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lazy-loading 'migration_context' on Instance uuid 27fa4cf8-c08c-46a2-af8f-17c8980a2317 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:58:51 np0005486808 nova_compute[259627]: 2025-10-14 08:58:51.301 2 DEBUG nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 04:58:51 np0005486808 nova_compute[259627]: 2025-10-14 08:58:51.302 2 DEBUG nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Ensure instance console log exists: /var/lib/nova/instances/27fa4cf8-c08c-46a2-af8f-17c8980a2317/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 04:58:51 np0005486808 nova_compute[259627]: 2025-10-14 08:58:51.302 2 DEBUG oslo_concurrency.lockutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:51 np0005486808 nova_compute[259627]: 2025-10-14 08:58:51.303 2 DEBUG oslo_concurrency.lockutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:51 np0005486808 nova_compute[259627]: 2025-10-14 08:58:51.303 2 DEBUG oslo_concurrency.lockutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:51 np0005486808 nova_compute[259627]: 2025-10-14 08:58:51.400 2 DEBUG oslo_concurrency.lockutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquiring lock "654413e6-01cd-4e54-a271-6b515a8561e6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:51 np0005486808 nova_compute[259627]: 2025-10-14 08:58:51.401 2 DEBUG oslo_concurrency.lockutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:51 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:58:51 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2447736462' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:58:51 np0005486808 nova_compute[259627]: 2025-10-14 08:58:51.418 2 DEBUG nova.compute.manager [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 04:58:51 np0005486808 nova_compute[259627]: 2025-10-14 08:58:51.426 2 DEBUG oslo_concurrency.processutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:51 np0005486808 nova_compute[259627]: 2025-10-14 08:58:51.449 2 DEBUG nova.storage.rbd_utils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image 3f3d9640-8200-45d8-ac25-bbc5d016d49f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:51 np0005486808 nova_compute[259627]: 2025-10-14 08:58:51.454 2 DEBUG oslo_concurrency.processutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:51 np0005486808 nova_compute[259627]: 2025-10-14 08:58:51.516 2 DEBUG oslo_concurrency.lockutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:51 np0005486808 nova_compute[259627]: 2025-10-14 08:58:51.517 2 DEBUG oslo_concurrency.lockutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:51 np0005486808 nova_compute[259627]: 2025-10-14 08:58:51.526 2 DEBUG nova.virt.hardware [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 04:58:51 np0005486808 nova_compute[259627]: 2025-10-14 08:58:51.526 2 INFO nova.compute.claims [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 04:58:51 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1221: 305 pgs: 305 active+clean; 134 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.4 MiB/s wr, 210 op/s
Oct 14 04:58:51 np0005486808 nova_compute[259627]: 2025-10-14 08:58:51.673 2 DEBUG oslo_concurrency.processutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:51 np0005486808 nova_compute[259627]: 2025-10-14 08:58:51.897 2 DEBUG nova.network.neutron [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Successfully created port: 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 04:58:51 np0005486808 nova_compute[259627]: 2025-10-14 08:58:51.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:51 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:58:51 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2673038839' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:58:51 np0005486808 nova_compute[259627]: 2025-10-14 08:58:51.983 2 DEBUG oslo_concurrency.processutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:51 np0005486808 nova_compute[259627]: 2025-10-14 08:58:51.985 2 DEBUG nova.virt.libvirt.vif [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:58:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-107832270',display_name='tempest-ImagesTestJSON-server-107832270',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-107832270',id=26,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d87d2d744db48dc8b32bb4bf6847fce',ramdisk_id='',reservation_id='r-bhq8zg2j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-168259448',owner_user_name='tempest-ImagesTestJSON-168259448-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:58:47Z,user_data=None,user_id='3a217215c39e41fea2323ff7b3b4e6aa',uuid=3f3d9640-8200-45d8-ac25-bbc5d016d49f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bcdd5079-efdb-47f7-99b0-21394b1d16e2", "address": "fa:16:3e:da:fb:42", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcdd5079-ef", "ovs_interfaceid": "bcdd5079-efdb-47f7-99b0-21394b1d16e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 04:58:51 np0005486808 nova_compute[259627]: 2025-10-14 08:58:51.985 2 DEBUG nova.network.os_vif_util [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converting VIF {"id": "bcdd5079-efdb-47f7-99b0-21394b1d16e2", "address": "fa:16:3e:da:fb:42", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcdd5079-ef", "ovs_interfaceid": "bcdd5079-efdb-47f7-99b0-21394b1d16e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:58:51 np0005486808 nova_compute[259627]: 2025-10-14 08:58:51.986 2 DEBUG nova.network.os_vif_util [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:fb:42,bridge_name='br-int',has_traffic_filtering=True,id=bcdd5079-efdb-47f7-99b0-21394b1d16e2,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcdd5079-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:58:51 np0005486808 nova_compute[259627]: 2025-10-14 08:58:51.987 2 DEBUG nova.objects.instance [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lazy-loading 'pci_devices' on Instance uuid 3f3d9640-8200-45d8-ac25-bbc5d016d49f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.009 2 DEBUG nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] End _get_guest_xml xml=<domain type="kvm">
Oct 14 04:58:52 np0005486808 nova_compute[259627]:  <uuid>3f3d9640-8200-45d8-ac25-bbc5d016d49f</uuid>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:  <name>instance-0000001a</name>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 04:58:52 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:      <nova:name>tempest-ImagesTestJSON-server-107832270</nova:name>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 08:58:50</nova:creationTime>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 04:58:52 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:        <nova:user uuid="3a217215c39e41fea2323ff7b3b4e6aa">tempest-ImagesTestJSON-168259448-project-member</nova:user>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:        <nova:project uuid="0d87d2d744db48dc8b32bb4bf6847fce">tempest-ImagesTestJSON-168259448</nova:project>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:        <nova:port uuid="bcdd5079-efdb-47f7-99b0-21394b1d16e2">
Oct 14 04:58:52 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <system>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:      <entry name="serial">3f3d9640-8200-45d8-ac25-bbc5d016d49f</entry>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:      <entry name="uuid">3f3d9640-8200-45d8-ac25-bbc5d016d49f</entry>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    </system>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:  <os>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:  </os>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:  <features>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:  </features>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:  </clock>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:  <devices>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 04:58:52 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/3f3d9640-8200-45d8-ac25-bbc5d016d49f_disk">
Oct 14 04:58:52 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:58:52 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 04:58:52 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/3f3d9640-8200-45d8-ac25-bbc5d016d49f_disk.config">
Oct 14 04:58:52 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:58:52 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 04:58:52 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:da:fb:42"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:      <target dev="tapbcdd5079-ef"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    </interface>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 04:58:52 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/3f3d9640-8200-45d8-ac25-bbc5d016d49f/console.log" append="off"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    </serial>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <video>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    </video>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 04:58:52 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    </rng>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 04:58:52 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 04:58:52 np0005486808 nova_compute[259627]:  </devices>
Oct 14 04:58:52 np0005486808 nova_compute[259627]: </domain>
Oct 14 04:58:52 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.015 2 DEBUG nova.compute.manager [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Preparing to wait for external event network-vif-plugged-bcdd5079-efdb-47f7-99b0-21394b1d16e2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.016 2 DEBUG oslo_concurrency.lockutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.016 2 DEBUG oslo_concurrency.lockutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.016 2 DEBUG oslo_concurrency.lockutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.017 2 DEBUG nova.virt.libvirt.vif [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:58:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-107832270',display_name='tempest-ImagesTestJSON-server-107832270',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-107832270',id=26,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d87d2d744db48dc8b32bb4bf6847fce',ramdisk_id='',reservation_id='r-bhq8zg2j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-168259448',owner_user_name='tempest-ImagesTestJSON-168259448-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:58:47Z,user_data=None,user_id='3a217215c39e41fea2323ff7b3b4e6aa',uuid=3f3d9640-8200-45d8-ac25-bbc5d016d49f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bcdd5079-efdb-47f7-99b0-21394b1d16e2", "address": "fa:16:3e:da:fb:42", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcdd5079-ef", "ovs_interfaceid": "bcdd5079-efdb-47f7-99b0-21394b1d16e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.017 2 DEBUG nova.network.os_vif_util [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converting VIF {"id": "bcdd5079-efdb-47f7-99b0-21394b1d16e2", "address": "fa:16:3e:da:fb:42", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcdd5079-ef", "ovs_interfaceid": "bcdd5079-efdb-47f7-99b0-21394b1d16e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.019 2 DEBUG nova.network.os_vif_util [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:fb:42,bridge_name='br-int',has_traffic_filtering=True,id=bcdd5079-efdb-47f7-99b0-21394b1d16e2,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcdd5079-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.019 2 DEBUG os_vif [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:fb:42,bridge_name='br-int',has_traffic_filtering=True,id=bcdd5079-efdb-47f7-99b0-21394b1d16e2,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcdd5079-ef') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.020 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.021 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.025 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbcdd5079-ef, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.025 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbcdd5079-ef, col_values=(('external_ids', {'iface-id': 'bcdd5079-efdb-47f7-99b0-21394b1d16e2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:da:fb:42', 'vm-uuid': '3f3d9640-8200-45d8-ac25-bbc5d016d49f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:52 np0005486808 NetworkManager[44885]: <info>  [1760432332.0278] manager: (tapbcdd5079-ef): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/97)
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.034 2 INFO os_vif [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:fb:42,bridge_name='br-int',has_traffic_filtering=True,id=bcdd5079-efdb-47f7-99b0-21394b1d16e2,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcdd5079-ef')#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.106 2 DEBUG nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.106 2 DEBUG nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.106 2 DEBUG nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] No VIF found with MAC fa:16:3e:da:fb:42, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.107 2 INFO nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Using config drive#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.130 2 DEBUG nova.storage.rbd_utils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image 3f3d9640-8200-45d8-ac25-bbc5d016d49f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:58:52 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4022107505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.222 2 DEBUG oslo_concurrency.lockutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "eb820455-d45c-4331-9363-124f11537f52" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.222 2 DEBUG oslo_concurrency.lockutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "eb820455-d45c-4331-9363-124f11537f52" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.238 2 DEBUG oslo_concurrency.processutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.244 2 DEBUG nova.compute.manager [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.249 2 DEBUG nova.compute.provider_tree [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.270 2 DEBUG nova.scheduler.client.report [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.293 2 DEBUG oslo_concurrency.lockutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.294 2 DEBUG nova.compute.manager [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.320 2 DEBUG oslo_concurrency.lockutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.321 2 DEBUG oslo_concurrency.lockutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.328 2 DEBUG nova.virt.hardware [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.328 2 INFO nova.compute.claims [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.343 2 DEBUG nova.compute.manager [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.344 2 DEBUG nova.network.neutron [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.371 2 INFO nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.391 2 DEBUG nova.compute.manager [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.424 2 DEBUG nova.network.neutron [req-2ceacc9e-f9ab-4e1a-a93d-ba939f4ece37 req-e070dd0f-c5a1-412a-bf99-55b9e31a08d0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Updated VIF entry in instance network info cache for port bcdd5079-efdb-47f7-99b0-21394b1d16e2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.424 2 DEBUG nova.network.neutron [req-2ceacc9e-f9ab-4e1a-a93d-ba939f4ece37 req-e070dd0f-c5a1-412a-bf99-55b9e31a08d0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Updating instance_info_cache with network_info: [{"id": "bcdd5079-efdb-47f7-99b0-21394b1d16e2", "address": "fa:16:3e:da:fb:42", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcdd5079-ef", "ovs_interfaceid": "bcdd5079-efdb-47f7-99b0-21394b1d16e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.446 2 DEBUG oslo_concurrency.lockutils [req-2ceacc9e-f9ab-4e1a-a93d-ba939f4ece37 req-e070dd0f-c5a1-412a-bf99-55b9e31a08d0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-3f3d9640-8200-45d8-ac25-bbc5d016d49f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.491 2 DEBUG nova.compute.manager [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.493 2 DEBUG nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.493 2 INFO nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Creating image(s)#033[00m
Oct 14 04:58:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.515 2 DEBUG nova.storage.rbd_utils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] rbd image 654413e6-01cd-4e54-a271-6b515a8561e6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.539 2 DEBUG nova.storage.rbd_utils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] rbd image 654413e6-01cd-4e54-a271-6b515a8561e6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.563 2 DEBUG nova.storage.rbd_utils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] rbd image 654413e6-01cd-4e54-a271-6b515a8561e6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.566 2 DEBUG oslo_concurrency.processutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.608 2 DEBUG oslo_concurrency.processutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.646 2 DEBUG oslo_concurrency.processutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.647 2 DEBUG oslo_concurrency.lockutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.647 2 DEBUG oslo_concurrency.lockutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.648 2 DEBUG oslo_concurrency.lockutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.668 2 DEBUG nova.storage.rbd_utils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] rbd image 654413e6-01cd-4e54-a271-6b515a8561e6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.671 2 DEBUG oslo_concurrency.processutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 654413e6-01cd-4e54-a271-6b515a8561e6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.881 2 INFO nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Creating config drive at /var/lib/nova/instances/3f3d9640-8200-45d8-ac25-bbc5d016d49f/disk.config#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.885 2 DEBUG oslo_concurrency.processutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3f3d9640-8200-45d8-ac25-bbc5d016d49f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxg2gle5n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.920 2 DEBUG oslo_concurrency.processutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 654413e6-01cd-4e54-a271-6b515a8561e6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.248s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.965 2 DEBUG nova.policy [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5aacb60ad29c418c9161e71bb72da036', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3566f35c659a45bd9b9bbddf6552ed43', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.971 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432317.9163482, aefbf308-7f99-4a76-8d5e-54613f6bdf83 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:58:52 np0005486808 nova_compute[259627]: 2025-10-14 08:58:52.972 2 INFO nova.compute.manager [-] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] VM Stopped (Lifecycle Event)#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.012 2 DEBUG nova.compute.manager [None req-6dab8613-8a65-4fe2-81d4-3029e3fe75b3 - - - - - -] [instance: aefbf308-7f99-4a76-8d5e-54613f6bdf83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.016 2 DEBUG nova.storage.rbd_utils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] resizing rbd image 654413e6-01cd-4e54-a271-6b515a8561e6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.041 2 DEBUG oslo_concurrency.processutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3f3d9640-8200-45d8-ac25-bbc5d016d49f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxg2gle5n" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.062 2 DEBUG nova.storage.rbd_utils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image 3f3d9640-8200-45d8-ac25-bbc5d016d49f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.065 2 DEBUG oslo_concurrency.processutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3f3d9640-8200-45d8-ac25-bbc5d016d49f/disk.config 3f3d9640-8200-45d8-ac25-bbc5d016d49f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:58:53 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3188578432' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.097 2 DEBUG oslo_concurrency.processutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.104 2 DEBUG nova.compute.provider_tree [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.140 2 DEBUG nova.scheduler.client.report [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.148 2 DEBUG nova.objects.instance [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lazy-loading 'migration_context' on Instance uuid 654413e6-01cd-4e54-a271-6b515a8561e6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.168 2 DEBUG oslo_concurrency.lockutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.169 2 DEBUG nova.compute.manager [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.171 2 DEBUG nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.172 2 DEBUG nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Ensure instance console log exists: /var/lib/nova/instances/654413e6-01cd-4e54-a271-6b515a8561e6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.172 2 DEBUG oslo_concurrency.lockutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.172 2 DEBUG oslo_concurrency.lockutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.172 2 DEBUG oslo_concurrency.lockutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.245 2 DEBUG nova.compute.manager [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.245 2 DEBUG nova.network.neutron [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.249 2 DEBUG oslo_concurrency.processutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3f3d9640-8200-45d8-ac25-bbc5d016d49f/disk.config 3f3d9640-8200-45d8-ac25-bbc5d016d49f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.184s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.249 2 INFO nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Deleting local config drive /var/lib/nova/instances/3f3d9640-8200-45d8-ac25-bbc5d016d49f/disk.config because it was imported into RBD.#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.288 2 INFO nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.307 2 DEBUG nova.compute.manager [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 04:58:53 np0005486808 kernel: tapbcdd5079-ef: entered promiscuous mode
Oct 14 04:58:53 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:53Z|00187|binding|INFO|Claiming lport bcdd5079-efdb-47f7-99b0-21394b1d16e2 for this chassis.
Oct 14 04:58:53 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:53Z|00188|binding|INFO|bcdd5079-efdb-47f7-99b0-21394b1d16e2: Claiming fa:16:3e:da:fb:42 10.100.0.3
Oct 14 04:58:53 np0005486808 NetworkManager[44885]: <info>  [1760432333.3168] manager: (tapbcdd5079-ef): new Tun device (/org/freedesktop/NetworkManager/Devices/98)
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.324 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:fb:42 10.100.0.3'], port_security=['fa:16:3e:da:fb:42 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '3f3d9640-8200-45d8-ac25-bbc5d016d49f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d87d2d744db48dc8b32bb4bf6847fce', 'neutron:revision_number': '2', 'neutron:security_group_ids': '45598031-691e-407d-b099-ed9702c4e6f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=53047605-2412-4fda-a69f-45911eab5576, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=bcdd5079-efdb-47f7-99b0-21394b1d16e2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.327 162547 INFO neutron.agent.ovn.metadata.agent [-] Port bcdd5079-efdb-47f7-99b0-21394b1d16e2 in datapath 2322cf7a-0090-40fa-a558-42d84cc6fc2a bound to our chassis#033[00m
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.330 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2322cf7a-0090-40fa-a558-42d84cc6fc2a#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.339 2 DEBUG nova.network.neutron [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Successfully updated port: 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:53 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:53Z|00189|binding|INFO|Setting lport bcdd5079-efdb-47f7-99b0-21394b1d16e2 ovn-installed in OVS
Oct 14 04:58:53 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:53Z|00190|binding|INFO|Setting lport bcdd5079-efdb-47f7-99b0-21394b1d16e2 up in Southbound
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.346 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[16e34ab0-d4c0-4a69-955c-2801729ecb54]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.346 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2322cf7a-01 in ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.349 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2322cf7a-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.349 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0a5f4a2f-7ff9-4941-b948-af37e2b6fd33]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.350 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b6f4adc0-119d-4b86-8ebc-ea39f72d1f71]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.358 2 DEBUG oslo_concurrency.lockutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "refresh_cache-27fa4cf8-c08c-46a2-af8f-17c8980a2317" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.358 2 DEBUG oslo_concurrency.lockutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquired lock "refresh_cache-27fa4cf8-c08c-46a2-af8f-17c8980a2317" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.358 2 DEBUG nova.network.neutron [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 04:58:53 np0005486808 systemd-machined[214636]: New machine qemu-28-instance-0000001a.
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.365 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[b993204b-6414-48ab-a08d-56075fcb1f91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:53 np0005486808 systemd[1]: Started Virtual Machine qemu-28-instance-0000001a.
Oct 14 04:58:53 np0005486808 systemd-udevd[293384]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 04:58:53 np0005486808 NetworkManager[44885]: <info>  [1760432333.3910] device (tapbcdd5079-ef): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 04:58:53 np0005486808 NetworkManager[44885]: <info>  [1760432333.3918] device (tapbcdd5079-ef): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.395 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b70e17e1-d948-42cf-a043-33985fa75561]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.400 2 DEBUG nova.compute.manager [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.402 2 DEBUG nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.402 2 INFO nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Creating image(s)#033[00m
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.434 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4340e5fd-c94a-418d-bf78-de6018400c92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.434 2 DEBUG nova.storage.rbd_utils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] rbd image eb820455-d45c-4331-9363-124f11537f52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:53 np0005486808 systemd-udevd[293386]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.440 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5011ca39-1b82-4cb0-84ab-52c45e79f492]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:53 np0005486808 NetworkManager[44885]: <info>  [1760432333.4423] manager: (tap2322cf7a-00): new Veth device (/org/freedesktop/NetworkManager/Devices/99)
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.490 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[24bc9399-46f8-4bb0-966c-c7cd1448dbb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.493 2 DEBUG nova.storage.rbd_utils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] rbd image eb820455-d45c-4331-9363-124f11537f52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.499 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[dab3a45d-5398-4b36-8655-2dfce75e9f66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:53 np0005486808 NetworkManager[44885]: <info>  [1760432333.5339] device (tap2322cf7a-00): carrier: link connected
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.547 2 DEBUG nova.storage.rbd_utils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] rbd image eb820455-d45c-4331-9363-124f11537f52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.547 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[fa7d102e-ffc5-43da-af42-78bd3bbb332c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:53 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1222: 305 pgs: 305 active+clean; 134 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 190 op/s
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.559 2 DEBUG oslo_concurrency.processutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.574 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9c55d118-0756-4c77-ace9-60ca6d3d3cea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2322cf7a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d3:95:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611229, 'reachable_time': 23663, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293468, 'error': None, 'target': 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.597 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[da71f4a1-7d4e-447f-8ef4-0957b8d81259]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed3:956c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611229, 'tstamp': 611229}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293470, 'error': None, 'target': 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.606 2 DEBUG nova.policy [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '56f2f9bf9b064a208d9ce5fe732c4ff7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3d3a647aa3914555a8a2c5fd6fe7a543', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.617 2 DEBUG nova.network.neutron [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.617 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cc6cd382-b36a-4805-8700-316e2e7e1141]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2322cf7a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d3:95:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611229, 'reachable_time': 23663, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 293471, 'error': None, 'target': 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.627 2 DEBUG nova.compute.manager [req-7056293d-66ed-4272-8141-6bdaa61933f3 req-015447ea-74e5-4434-b184-7bffb28ff84f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Received event network-changed-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.627 2 DEBUG nova.compute.manager [req-7056293d-66ed-4272-8141-6bdaa61933f3 req-015447ea-74e5-4434-b184-7bffb28ff84f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Refreshing instance network info cache due to event network-changed-0b16cd6a-fe42-4a54-8bbe-810915fcaa93. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.627 2 DEBUG oslo_concurrency.lockutils [req-7056293d-66ed-4272-8141-6bdaa61933f3 req-015447ea-74e5-4434-b184-7bffb28ff84f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-27fa4cf8-c08c-46a2-af8f-17c8980a2317" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.629 2 DEBUG nova.compute.manager [req-9ec8912d-27ad-4ffc-9ac5-900c44e8958c req-11d3b67e-3c48-4374-9104-c54e1f71f306 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Received event network-vif-plugged-bcdd5079-efdb-47f7-99b0-21394b1d16e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.630 2 DEBUG oslo_concurrency.lockutils [req-9ec8912d-27ad-4ffc-9ac5-900c44e8958c req-11d3b67e-3c48-4374-9104-c54e1f71f306 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.630 2 DEBUG oslo_concurrency.lockutils [req-9ec8912d-27ad-4ffc-9ac5-900c44e8958c req-11d3b67e-3c48-4374-9104-c54e1f71f306 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.630 2 DEBUG oslo_concurrency.lockutils [req-9ec8912d-27ad-4ffc-9ac5-900c44e8958c req-11d3b67e-3c48-4374-9104-c54e1f71f306 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.631 2 DEBUG nova.compute.manager [req-9ec8912d-27ad-4ffc-9ac5-900c44e8958c req-11d3b67e-3c48-4374-9104-c54e1f71f306 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Processing event network-vif-plugged-bcdd5079-efdb-47f7-99b0-21394b1d16e2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.646 2 DEBUG oslo_concurrency.processutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.647 2 DEBUG oslo_concurrency.lockutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.648 2 DEBUG oslo_concurrency.lockutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.648 2 DEBUG oslo_concurrency.lockutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.654 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[252727b9-fa8f-45df-936e-48d2bc9683eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.674 2 DEBUG nova.storage.rbd_utils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] rbd image eb820455-d45c-4331-9363-124f11537f52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.684 2 DEBUG oslo_concurrency.processutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf eb820455-d45c-4331-9363-124f11537f52_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.716 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[73310eb9-5682-469a-8458-4557da0bbf1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.717 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2322cf7a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.718 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.718 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2322cf7a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:53 np0005486808 kernel: tap2322cf7a-00: entered promiscuous mode
Oct 14 04:58:53 np0005486808 NetworkManager[44885]: <info>  [1760432333.7205] manager: (tap2322cf7a-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/100)
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.725 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2322cf7a-00, col_values=(('external_ids', {'iface-id': '0616bbde-729a-4cd4-ba39-5fcdf59ece5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:53 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:53Z|00191|binding|INFO|Releasing lport 0616bbde-729a-4cd4-ba39-5fcdf59ece5e from this chassis (sb_readonly=0)
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.729 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2322cf7a-0090-40fa-a558-42d84cc6fc2a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2322cf7a-0090-40fa-a558-42d84cc6fc2a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.730 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1b40cebe-efd4-465b-9633-9db72da7170e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.731 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-2322cf7a-0090-40fa-a558-42d84cc6fc2a
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/2322cf7a-0090-40fa-a558-42d84cc6fc2a.pid.haproxy
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 2322cf7a-0090-40fa-a558-42d84cc6fc2a
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 04:58:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:53.731 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'env', 'PROCESS_TAG=haproxy-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2322cf7a-0090-40fa-a558-42d84cc6fc2a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.923 2 DEBUG oslo_concurrency.processutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf eb820455-d45c-4331-9363-124f11537f52_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.239s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:53 np0005486808 nova_compute[259627]: 2025-10-14 08:58:53.976 2 DEBUG nova.storage.rbd_utils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] resizing rbd image eb820455-d45c-4331-9363-124f11537f52_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 04:58:54 np0005486808 nova_compute[259627]: 2025-10-14 08:58:54.062 2 DEBUG nova.objects.instance [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lazy-loading 'migration_context' on Instance uuid eb820455-d45c-4331-9363-124f11537f52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:58:54 np0005486808 nova_compute[259627]: 2025-10-14 08:58:54.088 2 DEBUG nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 04:58:54 np0005486808 nova_compute[259627]: 2025-10-14 08:58:54.088 2 DEBUG nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Ensure instance console log exists: /var/lib/nova/instances/eb820455-d45c-4331-9363-124f11537f52/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 04:58:54 np0005486808 nova_compute[259627]: 2025-10-14 08:58:54.088 2 DEBUG oslo_concurrency.lockutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:54 np0005486808 nova_compute[259627]: 2025-10-14 08:58:54.089 2 DEBUG oslo_concurrency.lockutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:54 np0005486808 nova_compute[259627]: 2025-10-14 08:58:54.089 2 DEBUG oslo_concurrency.lockutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:54 np0005486808 podman[293655]: 2025-10-14 08:58:54.145856027 +0000 UTC m=+0.047585689 container create 7e46f18b81f63e244f31a3118e4318c7bff3c71cc70198b8fbea4950265415bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 04:58:54 np0005486808 nova_compute[259627]: 2025-10-14 08:58:54.182 2 DEBUG nova.network.neutron [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Successfully created port: 902a062a-858b-4495-936b-47a675567467 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 04:58:54 np0005486808 systemd[1]: Started libpod-conmon-7e46f18b81f63e244f31a3118e4318c7bff3c71cc70198b8fbea4950265415bb.scope.
Oct 14 04:58:54 np0005486808 podman[293655]: 2025-10-14 08:58:54.120238508 +0000 UTC m=+0.021968220 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 04:58:54 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:58:54 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7281a4ba77e5cabb29255d89d41089125e8d4a234511b13cf81fb479f2c3742c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 04:58:54 np0005486808 podman[293655]: 2025-10-14 08:58:54.241173878 +0000 UTC m=+0.142903540 container init 7e46f18b81f63e244f31a3118e4318c7bff3c71cc70198b8fbea4950265415bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 04:58:54 np0005486808 podman[293655]: 2025-10-14 08:58:54.246617512 +0000 UTC m=+0.148347174 container start 7e46f18b81f63e244f31a3118e4318c7bff3c71cc70198b8fbea4950265415bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 04:58:54 np0005486808 nova_compute[259627]: 2025-10-14 08:58:54.262 2 DEBUG oslo_concurrency.lockutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:54 np0005486808 nova_compute[259627]: 2025-10-14 08:58:54.262 2 DEBUG oslo_concurrency.lockutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:54 np0005486808 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[293670]: [NOTICE]   (293674) : New worker (293676) forked
Oct 14 04:58:54 np0005486808 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[293670]: [NOTICE]   (293674) : Loading success.
Oct 14 04:58:54 np0005486808 nova_compute[259627]: 2025-10-14 08:58:54.283 2 DEBUG nova.compute.manager [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 04:58:54 np0005486808 nova_compute[259627]: 2025-10-14 08:58:54.492 2 DEBUG oslo_concurrency.lockutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:54 np0005486808 nova_compute[259627]: 2025-10-14 08:58:54.493 2 DEBUG oslo_concurrency.lockutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:54 np0005486808 nova_compute[259627]: 2025-10-14 08:58:54.501 2 DEBUG nova.virt.hardware [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 04:58:54 np0005486808 nova_compute[259627]: 2025-10-14 08:58:54.502 2 INFO nova.compute.claims [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 04:58:54 np0005486808 nova_compute[259627]: 2025-10-14 08:58:54.536 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432334.535851, 3f3d9640-8200-45d8-ac25-bbc5d016d49f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:58:54 np0005486808 nova_compute[259627]: 2025-10-14 08:58:54.536 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] VM Started (Lifecycle Event)#033[00m
Oct 14 04:58:54 np0005486808 nova_compute[259627]: 2025-10-14 08:58:54.537 2 DEBUG nova.compute.manager [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 04:58:54 np0005486808 nova_compute[259627]: 2025-10-14 08:58:54.542 2 DEBUG nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 04:58:54 np0005486808 nova_compute[259627]: 2025-10-14 08:58:54.545 2 INFO nova.virt.libvirt.driver [-] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Instance spawned successfully.#033[00m
Oct 14 04:58:54 np0005486808 nova_compute[259627]: 2025-10-14 08:58:54.545 2 DEBUG nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 04:58:54 np0005486808 nova_compute[259627]: 2025-10-14 08:58:54.555 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:58:54 np0005486808 nova_compute[259627]: 2025-10-14 08:58:54.558 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:58:54 np0005486808 nova_compute[259627]: 2025-10-14 08:58:54.570 2 DEBUG nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:58:54 np0005486808 nova_compute[259627]: 2025-10-14 08:58:54.570 2 DEBUG nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:58:54 np0005486808 nova_compute[259627]: 2025-10-14 08:58:54.571 2 DEBUG nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:58:54 np0005486808 nova_compute[259627]: 2025-10-14 08:58:54.571 2 DEBUG nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:58:54 np0005486808 nova_compute[259627]: 2025-10-14 08:58:54.571 2 DEBUG nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:58:54 np0005486808 nova_compute[259627]: 2025-10-14 08:58:54.572 2 DEBUG nova.virt.libvirt.driver [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:58:54 np0005486808 nova_compute[259627]: 2025-10-14 08:58:54.596 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:58:54 np0005486808 nova_compute[259627]: 2025-10-14 08:58:54.596 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432334.5375664, 3f3d9640-8200-45d8-ac25-bbc5d016d49f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:58:54 np0005486808 nova_compute[259627]: 2025-10-14 08:58:54.596 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] VM Paused (Lifecycle Event)#033[00m
Oct 14 04:58:54 np0005486808 nova_compute[259627]: 2025-10-14 08:58:54.622 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:58:54 np0005486808 nova_compute[259627]: 2025-10-14 08:58:54.625 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432334.5402305, 3f3d9640-8200-45d8-ac25-bbc5d016d49f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:58:54 np0005486808 nova_compute[259627]: 2025-10-14 08:58:54.625 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] VM Resumed (Lifecycle Event)#033[00m
Oct 14 04:58:54 np0005486808 nova_compute[259627]: 2025-10-14 08:58:54.644 2 INFO nova.compute.manager [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Took 7.31 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 04:58:54 np0005486808 nova_compute[259627]: 2025-10-14 08:58:54.645 2 DEBUG nova.compute.manager [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:58:54 np0005486808 nova_compute[259627]: 2025-10-14 08:58:54.646 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:58:54 np0005486808 nova_compute[259627]: 2025-10-14 08:58:54.646 2 DEBUG nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct 14 04:58:54 np0005486808 nova_compute[259627]: 2025-10-14 08:58:54.656 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:58:54 np0005486808 nova_compute[259627]: 2025-10-14 08:58:54.698 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:58:54 np0005486808 nova_compute[259627]: 2025-10-14 08:58:54.701 2 DEBUG oslo_concurrency.processutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:54 np0005486808 nova_compute[259627]: 2025-10-14 08:58:54.753 2 INFO nova.compute.manager [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Took 8.24 seconds to build instance.#033[00m
Oct 14 04:58:54 np0005486808 nova_compute[259627]: 2025-10-14 08:58:54.784 2 DEBUG oslo_concurrency.lockutils [None req-f7dc8dcd-609d-4acb-85cb-413d894cb14b 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.335s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.121 2 DEBUG nova.network.neutron [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Successfully created port: acc7c80f-8812-4bbf-93f8-cc3f1556b62a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 04:58:55 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:58:55 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3523924861' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.207 2 DEBUG oslo_concurrency.processutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.211 2 DEBUG nova.compute.provider_tree [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.225 2 DEBUG nova.scheduler.client.report [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.291 2 DEBUG oslo_concurrency.lockutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.798s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.292 2 DEBUG nova.compute.manager [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.362 2 DEBUG nova.compute.manager [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.363 2 DEBUG nova.network.neutron [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.392 2 INFO nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.425 2 DEBUG nova.compute.manager [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.467 2 DEBUG nova.network.neutron [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Updating instance_info_cache with network_info: [{"id": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "address": "fa:16:3e:c3:07:ec", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b16cd6a-fe", "ovs_interfaceid": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.505 2 DEBUG oslo_concurrency.lockutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Releasing lock "refresh_cache-27fa4cf8-c08c-46a2-af8f-17c8980a2317" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.505 2 DEBUG nova.compute.manager [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Instance network_info: |[{"id": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "address": "fa:16:3e:c3:07:ec", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b16cd6a-fe", "ovs_interfaceid": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.506 2 DEBUG oslo_concurrency.lockutils [req-7056293d-66ed-4272-8141-6bdaa61933f3 req-015447ea-74e5-4434-b184-7bffb28ff84f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-27fa4cf8-c08c-46a2-af8f-17c8980a2317" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.506 2 DEBUG nova.network.neutron [req-7056293d-66ed-4272-8141-6bdaa61933f3 req-015447ea-74e5-4434-b184-7bffb28ff84f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Refreshing network info cache for port 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.509 2 DEBUG nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Start _get_guest_xml network_info=[{"id": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "address": "fa:16:3e:c3:07:ec", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b16cd6a-fe", "ovs_interfaceid": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.514 2 WARNING nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.527 2 DEBUG nova.virt.libvirt.host [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.528 2 DEBUG nova.virt.libvirt.host [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.531 2 DEBUG nova.virt.libvirt.host [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.532 2 DEBUG nova.virt.libvirt.host [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.532 2 DEBUG nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.532 2 DEBUG nova.virt.hardware [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.533 2 DEBUG nova.virt.hardware [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.533 2 DEBUG nova.virt.hardware [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.533 2 DEBUG nova.virt.hardware [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.534 2 DEBUG nova.virt.hardware [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.534 2 DEBUG nova.virt.hardware [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.534 2 DEBUG nova.virt.hardware [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.534 2 DEBUG nova.virt.hardware [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.534 2 DEBUG nova.virt.hardware [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.535 2 DEBUG nova.virt.hardware [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.535 2 DEBUG nova.virt.hardware [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.538 2 DEBUG oslo_concurrency.processutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:55 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1223: 305 pgs: 305 active+clean; 306 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 803 KiB/s rd, 11 MiB/s wr, 230 op/s
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.572 2 DEBUG nova.compute.manager [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.574 2 DEBUG nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.575 2 INFO nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Creating image(s)#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.601 2 DEBUG nova.storage.rbd_utils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] rbd image 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.633 2 DEBUG nova.storage.rbd_utils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] rbd image 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.658 2 DEBUG nova.storage.rbd_utils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] rbd image 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.666 2 DEBUG oslo_concurrency.processutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.695 2 DEBUG nova.policy [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '56f2f9bf9b064a208d9ce5fe732c4ff7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3d3a647aa3914555a8a2c5fd6fe7a543', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.726 2 DEBUG nova.compute.manager [req-d25a0955-4b63-4c9d-88f0-f8e9c5f41ae9 req-95ad4836-e442-4626-93c9-a60404487d25 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Received event network-vif-plugged-bcdd5079-efdb-47f7-99b0-21394b1d16e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.726 2 DEBUG oslo_concurrency.lockutils [req-d25a0955-4b63-4c9d-88f0-f8e9c5f41ae9 req-95ad4836-e442-4626-93c9-a60404487d25 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.727 2 DEBUG oslo_concurrency.lockutils [req-d25a0955-4b63-4c9d-88f0-f8e9c5f41ae9 req-95ad4836-e442-4626-93c9-a60404487d25 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.727 2 DEBUG oslo_concurrency.lockutils [req-d25a0955-4b63-4c9d-88f0-f8e9c5f41ae9 req-95ad4836-e442-4626-93c9-a60404487d25 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.727 2 DEBUG nova.compute.manager [req-d25a0955-4b63-4c9d-88f0-f8e9c5f41ae9 req-95ad4836-e442-4626-93c9-a60404487d25 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] No waiting events found dispatching network-vif-plugged-bcdd5079-efdb-47f7-99b0-21394b1d16e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.727 2 WARNING nova.compute.manager [req-d25a0955-4b63-4c9d-88f0-f8e9c5f41ae9 req-95ad4836-e442-4626-93c9-a60404487d25 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Received unexpected event network-vif-plugged-bcdd5079-efdb-47f7-99b0-21394b1d16e2 for instance with vm_state active and task_state None.#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.741 2 DEBUG oslo_concurrency.processutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.742 2 DEBUG oslo_concurrency.lockutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.742 2 DEBUG oslo_concurrency.lockutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.742 2 DEBUG oslo_concurrency.lockutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.764 2 DEBUG nova.storage.rbd_utils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] rbd image 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.767 2 DEBUG oslo_concurrency.processutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.887 2 DEBUG nova.network.neutron [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Successfully created port: fa5f1925-a535-45ee-b96e-f79c725d7960 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 04:58:55 np0005486808 nova_compute[259627]: 2025-10-14 08:58:55.993 2 DEBUG oslo_concurrency.processutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.226s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:56 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:58:56 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3769225019' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:58:56 np0005486808 nova_compute[259627]: 2025-10-14 08:58:56.045 2 DEBUG nova.storage.rbd_utils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] resizing rbd image 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 04:58:56 np0005486808 nova_compute[259627]: 2025-10-14 08:58:56.072 2 DEBUG oslo_concurrency.processutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:56 np0005486808 nova_compute[259627]: 2025-10-14 08:58:56.093 2 DEBUG nova.storage.rbd_utils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] rbd image 27fa4cf8-c08c-46a2-af8f-17c8980a2317_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:56 np0005486808 nova_compute[259627]: 2025-10-14 08:58:56.096 2 DEBUG oslo_concurrency.processutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:56 np0005486808 nova_compute[259627]: 2025-10-14 08:58:56.162 2 DEBUG nova.objects.instance [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lazy-loading 'migration_context' on Instance uuid 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:58:56 np0005486808 nova_compute[259627]: 2025-10-14 08:58:56.177 2 DEBUG nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 04:58:56 np0005486808 nova_compute[259627]: 2025-10-14 08:58:56.178 2 DEBUG nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Ensure instance console log exists: /var/lib/nova/instances/82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 04:58:56 np0005486808 nova_compute[259627]: 2025-10-14 08:58:56.178 2 DEBUG oslo_concurrency.lockutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:56 np0005486808 nova_compute[259627]: 2025-10-14 08:58:56.178 2 DEBUG oslo_concurrency.lockutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:56 np0005486808 nova_compute[259627]: 2025-10-14 08:58:56.179 2 DEBUG oslo_concurrency.lockutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:56 np0005486808 nova_compute[259627]: 2025-10-14 08:58:56.193 2 DEBUG nova.network.neutron [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Successfully created port: 2da46865-98ea-42a7-a5cc-44b5bef36a3d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 04:58:56 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:58:56 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1557437425' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:58:56 np0005486808 nova_compute[259627]: 2025-10-14 08:58:56.501 2 DEBUG oslo_concurrency.processutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:56 np0005486808 nova_compute[259627]: 2025-10-14 08:58:56.503 2 DEBUG nova.virt.libvirt.vif [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:58:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1883268496',display_name='tempest-ListServerFiltersTestJSON-instance-1883268496',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1883268496',id=27,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3d3a647aa3914555a8a2c5fd6fe7a543',ramdisk_id='',reservation_id='r-7u6hc4e7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1842486796',owner_user_name='tempest-Li
stServerFiltersTestJSON-1842486796-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:58:50Z,user_data=None,user_id='56f2f9bf9b064a208d9ce5fe732c4ff7',uuid=27fa4cf8-c08c-46a2-af8f-17c8980a2317,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "address": "fa:16:3e:c3:07:ec", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b16cd6a-fe", "ovs_interfaceid": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 04:58:56 np0005486808 nova_compute[259627]: 2025-10-14 08:58:56.504 2 DEBUG nova.network.os_vif_util [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converting VIF {"id": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "address": "fa:16:3e:c3:07:ec", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b16cd6a-fe", "ovs_interfaceid": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:58:56 np0005486808 nova_compute[259627]: 2025-10-14 08:58:56.505 2 DEBUG nova.network.os_vif_util [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:07:ec,bridge_name='br-int',has_traffic_filtering=True,id=0b16cd6a-fe42-4a54-8bbe-810915fcaa93,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b16cd6a-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:58:56 np0005486808 nova_compute[259627]: 2025-10-14 08:58:56.507 2 DEBUG nova.objects.instance [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lazy-loading 'pci_devices' on Instance uuid 27fa4cf8-c08c-46a2-af8f-17c8980a2317 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:58:56 np0005486808 nova_compute[259627]: 2025-10-14 08:58:56.521 2 DEBUG nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] End _get_guest_xml xml=<domain type="kvm">
Oct 14 04:58:56 np0005486808 nova_compute[259627]:  <uuid>27fa4cf8-c08c-46a2-af8f-17c8980a2317</uuid>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:  <name>instance-0000001b</name>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 04:58:56 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:      <nova:name>tempest-ListServerFiltersTestJSON-instance-1883268496</nova:name>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 08:58:55</nova:creationTime>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 04:58:56 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:        <nova:user uuid="56f2f9bf9b064a208d9ce5fe732c4ff7">tempest-ListServerFiltersTestJSON-1842486796-project-member</nova:user>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:        <nova:project uuid="3d3a647aa3914555a8a2c5fd6fe7a543">tempest-ListServerFiltersTestJSON-1842486796</nova:project>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:        <nova:port uuid="0b16cd6a-fe42-4a54-8bbe-810915fcaa93">
Oct 14 04:58:56 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <system>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:      <entry name="serial">27fa4cf8-c08c-46a2-af8f-17c8980a2317</entry>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:      <entry name="uuid">27fa4cf8-c08c-46a2-af8f-17c8980a2317</entry>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    </system>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:  <os>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:  </os>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:  <features>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:  </features>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:  </clock>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:  <devices>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 04:58:56 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/27fa4cf8-c08c-46a2-af8f-17c8980a2317_disk">
Oct 14 04:58:56 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:58:56 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 04:58:56 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/27fa4cf8-c08c-46a2-af8f-17c8980a2317_disk.config">
Oct 14 04:58:56 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:58:56 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 04:58:56 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:c3:07:ec"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:      <target dev="tap0b16cd6a-fe"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    </interface>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 04:58:56 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/27fa4cf8-c08c-46a2-af8f-17c8980a2317/console.log" append="off"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    </serial>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <video>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    </video>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 04:58:56 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    </rng>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 04:58:56 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 04:58:56 np0005486808 nova_compute[259627]:  </devices>
Oct 14 04:58:56 np0005486808 nova_compute[259627]: </domain>
Oct 14 04:58:56 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 04:58:56 np0005486808 nova_compute[259627]: 2025-10-14 08:58:56.521 2 DEBUG nova.compute.manager [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Preparing to wait for external event network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 04:58:56 np0005486808 nova_compute[259627]: 2025-10-14 08:58:56.521 2 DEBUG oslo_concurrency.lockutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:56 np0005486808 nova_compute[259627]: 2025-10-14 08:58:56.521 2 DEBUG oslo_concurrency.lockutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:56 np0005486808 nova_compute[259627]: 2025-10-14 08:58:56.522 2 DEBUG oslo_concurrency.lockutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:56 np0005486808 nova_compute[259627]: 2025-10-14 08:58:56.522 2 DEBUG nova.virt.libvirt.vif [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:58:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1883268496',display_name='tempest-ListServerFiltersTestJSON-instance-1883268496',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1883268496',id=27,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3d3a647aa3914555a8a2c5fd6fe7a543',ramdisk_id='',reservation_id='r-7u6hc4e7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1842486796',owner_user_name='
tempest-ListServerFiltersTestJSON-1842486796-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:58:50Z,user_data=None,user_id='56f2f9bf9b064a208d9ce5fe732c4ff7',uuid=27fa4cf8-c08c-46a2-af8f-17c8980a2317,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "address": "fa:16:3e:c3:07:ec", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b16cd6a-fe", "ovs_interfaceid": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 04:58:56 np0005486808 nova_compute[259627]: 2025-10-14 08:58:56.522 2 DEBUG nova.network.os_vif_util [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converting VIF {"id": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "address": "fa:16:3e:c3:07:ec", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b16cd6a-fe", "ovs_interfaceid": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:58:56 np0005486808 nova_compute[259627]: 2025-10-14 08:58:56.523 2 DEBUG nova.network.os_vif_util [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:07:ec,bridge_name='br-int',has_traffic_filtering=True,id=0b16cd6a-fe42-4a54-8bbe-810915fcaa93,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b16cd6a-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:58:56 np0005486808 nova_compute[259627]: 2025-10-14 08:58:56.523 2 DEBUG os_vif [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:07:ec,bridge_name='br-int',has_traffic_filtering=True,id=0b16cd6a-fe42-4a54-8bbe-810915fcaa93,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b16cd6a-fe') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 04:58:56 np0005486808 nova_compute[259627]: 2025-10-14 08:58:56.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:56 np0005486808 nova_compute[259627]: 2025-10-14 08:58:56.524 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:56 np0005486808 nova_compute[259627]: 2025-10-14 08:58:56.525 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:58:56 np0005486808 nova_compute[259627]: 2025-10-14 08:58:56.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:56 np0005486808 nova_compute[259627]: 2025-10-14 08:58:56.527 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0b16cd6a-fe, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:56 np0005486808 nova_compute[259627]: 2025-10-14 08:58:56.527 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0b16cd6a-fe, col_values=(('external_ids', {'iface-id': '0b16cd6a-fe42-4a54-8bbe-810915fcaa93', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c3:07:ec', 'vm-uuid': '27fa4cf8-c08c-46a2-af8f-17c8980a2317'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:56 np0005486808 nova_compute[259627]: 2025-10-14 08:58:56.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:56 np0005486808 NetworkManager[44885]: <info>  [1760432336.5291] manager: (tap0b16cd6a-fe): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/101)
Oct 14 04:58:56 np0005486808 nova_compute[259627]: 2025-10-14 08:58:56.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 04:58:56 np0005486808 nova_compute[259627]: 2025-10-14 08:58:56.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:56 np0005486808 nova_compute[259627]: 2025-10-14 08:58:56.540 2 INFO os_vif [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:07:ec,bridge_name='br-int',has_traffic_filtering=True,id=0b16cd6a-fe42-4a54-8bbe-810915fcaa93,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b16cd6a-fe')#033[00m
Oct 14 04:58:56 np0005486808 nova_compute[259627]: 2025-10-14 08:58:56.595 2 DEBUG nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:58:56 np0005486808 nova_compute[259627]: 2025-10-14 08:58:56.596 2 DEBUG nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:58:56 np0005486808 nova_compute[259627]: 2025-10-14 08:58:56.596 2 DEBUG nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] No VIF found with MAC fa:16:3e:c3:07:ec, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 04:58:56 np0005486808 nova_compute[259627]: 2025-10-14 08:58:56.597 2 INFO nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Using config drive#033[00m
Oct 14 04:58:56 np0005486808 nova_compute[259627]: 2025-10-14 08:58:56.614 2 DEBUG nova.storage.rbd_utils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] rbd image 27fa4cf8-c08c-46a2-af8f-17c8980a2317_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:56 np0005486808 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000016.scope: Deactivated successfully.
Oct 14 04:58:56 np0005486808 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000016.scope: Consumed 12.571s CPU time.
Oct 14 04:58:56 np0005486808 systemd-machined[214636]: Machine qemu-27-instance-00000016 terminated.
Oct 14 04:58:56 np0005486808 nova_compute[259627]: 2025-10-14 08:58:56.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:57 np0005486808 nova_compute[259627]: 2025-10-14 08:58:57.093 2 INFO nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Creating config drive at /var/lib/nova/instances/27fa4cf8-c08c-46a2-af8f-17c8980a2317/disk.config#033[00m
Oct 14 04:58:57 np0005486808 nova_compute[259627]: 2025-10-14 08:58:57.103 2 DEBUG oslo_concurrency.processutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/27fa4cf8-c08c-46a2-af8f-17c8980a2317/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuxj8b9dc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:57 np0005486808 nova_compute[259627]: 2025-10-14 08:58:57.171 2 DEBUG nova.network.neutron [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Successfully updated port: 2da46865-98ea-42a7-a5cc-44b5bef36a3d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 04:58:57 np0005486808 nova_compute[259627]: 2025-10-14 08:58:57.191 2 DEBUG oslo_concurrency.lockutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "refresh_cache-82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:58:57 np0005486808 nova_compute[259627]: 2025-10-14 08:58:57.192 2 DEBUG oslo_concurrency.lockutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquired lock "refresh_cache-82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:58:57 np0005486808 nova_compute[259627]: 2025-10-14 08:58:57.192 2 DEBUG nova.network.neutron [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 04:58:57 np0005486808 nova_compute[259627]: 2025-10-14 08:58:57.259 2 DEBUG oslo_concurrency.processutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/27fa4cf8-c08c-46a2-af8f-17c8980a2317/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuxj8b9dc" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:57 np0005486808 nova_compute[259627]: 2025-10-14 08:58:57.282 2 DEBUG nova.storage.rbd_utils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] rbd image 27fa4cf8-c08c-46a2-af8f-17c8980a2317_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:57 np0005486808 nova_compute[259627]: 2025-10-14 08:58:57.285 2 DEBUG oslo_concurrency.processutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/27fa4cf8-c08c-46a2-af8f-17c8980a2317/disk.config 27fa4cf8-c08c-46a2-af8f-17c8980a2317_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:57 np0005486808 nova_compute[259627]: 2025-10-14 08:58:57.312 2 DEBUG nova.network.neutron [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Successfully updated port: acc7c80f-8812-4bbf-93f8-cc3f1556b62a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 04:58:57 np0005486808 nova_compute[259627]: 2025-10-14 08:58:57.339 2 DEBUG oslo_concurrency.lockutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "refresh_cache-eb820455-d45c-4331-9363-124f11537f52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:58:57 np0005486808 nova_compute[259627]: 2025-10-14 08:58:57.339 2 DEBUG oslo_concurrency.lockutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquired lock "refresh_cache-eb820455-d45c-4331-9363-124f11537f52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:58:57 np0005486808 nova_compute[259627]: 2025-10-14 08:58:57.339 2 DEBUG nova.network.neutron [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 04:58:57 np0005486808 nova_compute[259627]: 2025-10-14 08:58:57.368 2 DEBUG nova.network.neutron [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 04:58:57 np0005486808 nova_compute[259627]: 2025-10-14 08:58:57.427 2 DEBUG oslo_concurrency.processutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/27fa4cf8-c08c-46a2-af8f-17c8980a2317/disk.config 27fa4cf8-c08c-46a2-af8f-17c8980a2317_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:57 np0005486808 nova_compute[259627]: 2025-10-14 08:58:57.430 2 INFO nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Deleting local config drive /var/lib/nova/instances/27fa4cf8-c08c-46a2-af8f-17c8980a2317/disk.config because it was imported into RBD.#033[00m
Oct 14 04:58:57 np0005486808 kernel: tap0b16cd6a-fe: entered promiscuous mode
Oct 14 04:58:57 np0005486808 NetworkManager[44885]: <info>  [1760432337.4854] manager: (tap0b16cd6a-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/102)
Oct 14 04:58:57 np0005486808 systemd-udevd[293956]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 04:58:57 np0005486808 nova_compute[259627]: 2025-10-14 08:58:57.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:57 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:57Z|00192|binding|INFO|Claiming lport 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 for this chassis.
Oct 14 04:58:57 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:57Z|00193|binding|INFO|0b16cd6a-fe42-4a54-8bbe-810915fcaa93: Claiming fa:16:3e:c3:07:ec 10.100.0.11
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.496 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:07:ec 10.100.0.11'], port_security=['fa:16:3e:c3:07:ec 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '27fa4cf8-c08c-46a2-af8f-17c8980a2317', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3d3a647aa3914555a8a2c5fd6fe7a543', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bf06e246-cec9-4c0e-acc7-df3f274be7f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dfac5894-b2bd-47a3-835a-de3d7c1134b1, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=0b16cd6a-fe42-4a54-8bbe-810915fcaa93) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.498 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 in datapath c4d50d6a-6686-4b50-b1e5-9f71bae17a99 bound to our chassis#033[00m
Oct 14 04:58:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.503 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c4d50d6a-6686-4b50-b1e5-9f71bae17a99#033[00m
Oct 14 04:58:57 np0005486808 NetworkManager[44885]: <info>  [1760432337.5038] device (tap0b16cd6a-fe): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 04:58:57 np0005486808 NetworkManager[44885]: <info>  [1760432337.5057] device (tap0b16cd6a-fe): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.522 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bc1f4e90-6f1a-4f33-852e-26c9b8a82877]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.523 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc4d50d6a-61 in ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 04:58:57 np0005486808 systemd-machined[214636]: New machine qemu-29-instance-0000001b.
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.527 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc4d50d6a-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.527 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ac5f2656-9c5d-40d1-8320-a3b90db3302d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.529 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[855ad7ca-0b67-4ebc-89e4-3e7c09cd4ad6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.547 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[b52e6efe-3ac2-4d54-a612-5f0b1b9f9226]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:57 np0005486808 systemd[1]: Started Virtual Machine qemu-29-instance-0000001b.
Oct 14 04:58:57 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1224: 305 pgs: 305 active+clean; 306 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 798 KiB/s rd, 11 MiB/s wr, 229 op/s
Oct 14 04:58:57 np0005486808 nova_compute[259627]: 2025-10-14 08:58:57.563 2 DEBUG nova.network.neutron [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 04:58:57 np0005486808 nova_compute[259627]: 2025-10-14 08:58:57.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:57 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:57Z|00194|binding|INFO|Setting lport 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 ovn-installed in OVS
Oct 14 04:58:57 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:57Z|00195|binding|INFO|Setting lport 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 up in Southbound
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.569 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f104c0fd-f235-428e-bb7a-6241c14643c4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:57 np0005486808 nova_compute[259627]: 2025-10-14 08:58:57.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:57 np0005486808 nova_compute[259627]: 2025-10-14 08:58:57.584 2 DEBUG nova.compute.manager [req-5642a759-3aca-4f3e-a65d-200013c51ad9 req-c8125f40-1a11-4414-8b96-f256043b2c14 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Received event network-changed-acc7c80f-8812-4bbf-93f8-cc3f1556b62a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:58:57 np0005486808 nova_compute[259627]: 2025-10-14 08:58:57.585 2 DEBUG nova.compute.manager [req-5642a759-3aca-4f3e-a65d-200013c51ad9 req-c8125f40-1a11-4414-8b96-f256043b2c14 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Refreshing instance network info cache due to event network-changed-acc7c80f-8812-4bbf-93f8-cc3f1556b62a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 04:58:57 np0005486808 nova_compute[259627]: 2025-10-14 08:58:57.585 2 DEBUG oslo_concurrency.lockutils [req-5642a759-3aca-4f3e-a65d-200013c51ad9 req-c8125f40-1a11-4414-8b96-f256043b2c14 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-eb820455-d45c-4331-9363-124f11537f52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.598 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[b4622c8b-6ce6-4ad1-b414-785f3003f1d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.607 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[19927cf6-ed44-4b6d-987b-2cf0a8852614]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:57 np0005486808 NetworkManager[44885]: <info>  [1760432337.6096] manager: (tapc4d50d6a-60): new Veth device (/org/freedesktop/NetworkManager/Devices/103)
Oct 14 04:58:57 np0005486808 nova_compute[259627]: 2025-10-14 08:58:57.615 2 DEBUG nova.network.neutron [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Successfully updated port: 902a062a-858b-4495-936b-47a675567467 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 04:58:57 np0005486808 nova_compute[259627]: 2025-10-14 08:58:57.631 2 DEBUG nova.network.neutron [req-7056293d-66ed-4272-8141-6bdaa61933f3 req-015447ea-74e5-4434-b184-7bffb28ff84f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Updated VIF entry in instance network info cache for port 0b16cd6a-fe42-4a54-8bbe-810915fcaa93. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 04:58:57 np0005486808 nova_compute[259627]: 2025-10-14 08:58:57.632 2 DEBUG nova.network.neutron [req-7056293d-66ed-4272-8141-6bdaa61933f3 req-015447ea-74e5-4434-b184-7bffb28ff84f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Updating instance_info_cache with network_info: [{"id": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "address": "fa:16:3e:c3:07:ec", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b16cd6a-fe", "ovs_interfaceid": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.640 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[063ef4b5-8f1d-4e7a-af5a-af8188cee13e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.642 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[be6ffd68-c068-45d6-ae61-e42a6b6b0386]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:57 np0005486808 nova_compute[259627]: 2025-10-14 08:58:57.660 2 DEBUG oslo_concurrency.lockutils [req-7056293d-66ed-4272-8141-6bdaa61933f3 req-015447ea-74e5-4434-b184-7bffb28ff84f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-27fa4cf8-c08c-46a2-af8f-17c8980a2317" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:58:57 np0005486808 NetworkManager[44885]: <info>  [1760432337.6734] device (tapc4d50d6a-60): carrier: link connected
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.680 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4c6413c2-3820-47f8-a920-e9846fb6751c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.697 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d650797d-c83d-4647-9874-2bfc40d55cfa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4d50d6a-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:91:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611643, 'reachable_time': 22346, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294044, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:57 np0005486808 nova_compute[259627]: 2025-10-14 08:58:57.709 2 DEBUG oslo_concurrency.lockutils [None req-86cf384c-5445-45f4-946d-8333c57c1990 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:57 np0005486808 nova_compute[259627]: 2025-10-14 08:58:57.710 2 DEBUG oslo_concurrency.lockutils [None req-86cf384c-5445-45f4-946d-8333c57c1990 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:57 np0005486808 nova_compute[259627]: 2025-10-14 08:58:57.710 2 DEBUG nova.compute.manager [None req-86cf384c-5445-45f4-946d-8333c57c1990 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:58:57 np0005486808 nova_compute[259627]: 2025-10-14 08:58:57.711 2 INFO nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Instance shutdown successfully after 13 seconds.#033[00m
Oct 14 04:58:57 np0005486808 nova_compute[259627]: 2025-10-14 08:58:57.714 2 DEBUG nova.compute.manager [None req-86cf384c-5445-45f4-946d-8333c57c1990 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Oct 14 04:58:57 np0005486808 nova_compute[259627]: 2025-10-14 08:58:57.715 2 DEBUG nova.objects.instance [None req-86cf384c-5445-45f4-946d-8333c57c1990 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lazy-loading 'flavor' on Instance uuid 3f3d9640-8200-45d8-ac25-bbc5d016d49f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.717 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f6f29c75-de64-4fbc-95e6-3564fc01ba6a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9f:914e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611643, 'tstamp': 611643}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294045, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:57 np0005486808 nova_compute[259627]: 2025-10-14 08:58:57.721 2 INFO nova.virt.libvirt.driver [-] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Instance destroyed successfully.#033[00m
Oct 14 04:58:57 np0005486808 nova_compute[259627]: 2025-10-14 08:58:57.727 2 INFO nova.virt.libvirt.driver [-] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Instance destroyed successfully.#033[00m
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.739 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7abb9af9-1d7b-4554-9541-12893098fa64]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4d50d6a-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:91:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 196, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 196, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611643, 'reachable_time': 22346, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294046, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:57 np0005486808 nova_compute[259627]: 2025-10-14 08:58:57.767 2 DEBUG nova.virt.libvirt.driver [None req-86cf384c-5445-45f4-946d-8333c57c1990 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.781 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0204a7a8-5d17-4644-a6c2-c05f888c54f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:57 np0005486808 nova_compute[259627]: 2025-10-14 08:58:57.820 2 DEBUG nova.compute.manager [req-c27f38a9-eaec-48a3-acb6-2b78db5f1ce2 req-31f11ef6-bf94-4c6a-81e6-ac80959e2f76 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Received event network-changed-2da46865-98ea-42a7-a5cc-44b5bef36a3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:58:57 np0005486808 nova_compute[259627]: 2025-10-14 08:58:57.821 2 DEBUG nova.compute.manager [req-c27f38a9-eaec-48a3-acb6-2b78db5f1ce2 req-31f11ef6-bf94-4c6a-81e6-ac80959e2f76 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Refreshing instance network info cache due to event network-changed-2da46865-98ea-42a7-a5cc-44b5bef36a3d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 04:58:57 np0005486808 nova_compute[259627]: 2025-10-14 08:58:57.821 2 DEBUG oslo_concurrency.lockutils [req-c27f38a9-eaec-48a3-acb6-2b78db5f1ce2 req-31f11ef6-bf94-4c6a-81e6-ac80959e2f76 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.866 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[50161bc8-3a5e-4d49-8338-e3f6875772c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.868 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4d50d6a-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.868 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.869 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc4d50d6a-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:57 np0005486808 nova_compute[259627]: 2025-10-14 08:58:57.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:57 np0005486808 NetworkManager[44885]: <info>  [1760432337.8718] manager: (tapc4d50d6a-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/104)
Oct 14 04:58:57 np0005486808 kernel: tapc4d50d6a-60: entered promiscuous mode
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.875 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc4d50d6a-60, col_values=(('external_ids', {'iface-id': '650f034e-5333-49ba-9907-b0409944aee7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:57 np0005486808 nova_compute[259627]: 2025-10-14 08:58:57.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:57 np0005486808 nova_compute[259627]: 2025-10-14 08:58:57.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.878 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c4d50d6a-6686-4b50-b1e5-9f71bae17a99.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c4d50d6a-6686-4b50-b1e5-9f71bae17a99.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.879 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5c72a49b-11fc-433f-9c83-983d38f8ae07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.880 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-c4d50d6a-6686-4b50-b1e5-9f71bae17a99
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/c4d50d6a-6686-4b50-b1e5-9f71bae17a99.pid.haproxy
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID c4d50d6a-6686-4b50-b1e5-9f71bae17a99
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 04:58:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:58:57.881 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'env', 'PROCESS_TAG=haproxy-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c4d50d6a-6686-4b50-b1e5-9f71bae17a99.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 04:58:57 np0005486808 ovn_controller[152662]: 2025-10-14T08:58:57Z|00196|binding|INFO|Releasing lport 650f034e-5333-49ba-9907-b0409944aee7 from this chassis (sb_readonly=0)
Oct 14 04:58:57 np0005486808 nova_compute[259627]: 2025-10-14 08:58:57.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.197 2 INFO nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Deleting instance files /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056_del#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.200 2 INFO nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Deletion of /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056_del complete#033[00m
Oct 14 04:58:58 np0005486808 podman[294097]: 2025-10-14 08:58:58.312063803 +0000 UTC m=+0.060282082 container create 212876b675a73a1ca920ec438915fd467b1dd5dbeb4fe632879e077d977c7782 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 14 04:58:58 np0005486808 systemd[1]: Started libpod-conmon-212876b675a73a1ca920ec438915fd467b1dd5dbeb4fe632879e077d977c7782.scope.
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.356 2 DEBUG nova.network.neutron [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Updating instance_info_cache with network_info: [{"id": "2da46865-98ea-42a7-a5cc-44b5bef36a3d", "address": "fa:16:3e:28:d2:4a", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2da46865-98", "ovs_interfaceid": "2da46865-98ea-42a7-a5cc-44b5bef36a3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.379 2 DEBUG oslo_concurrency.lockutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Releasing lock "refresh_cache-82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:58:58 np0005486808 podman[294097]: 2025-10-14 08:58:58.287377176 +0000 UTC m=+0.035595495 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.380 2 DEBUG nova.compute.manager [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Instance network_info: |[{"id": "2da46865-98ea-42a7-a5cc-44b5bef36a3d", "address": "fa:16:3e:28:d2:4a", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2da46865-98", "ovs_interfaceid": "2da46865-98ea-42a7-a5cc-44b5bef36a3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.380 2 DEBUG oslo_concurrency.lockutils [req-c27f38a9-eaec-48a3-acb6-2b78db5f1ce2 req-31f11ef6-bf94-4c6a-81e6-ac80959e2f76 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.381 2 DEBUG nova.network.neutron [req-c27f38a9-eaec-48a3-acb6-2b78db5f1ce2 req-31f11ef6-bf94-4c6a-81e6-ac80959e2f76 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Refreshing network info cache for port 2da46865-98ea-42a7-a5cc-44b5bef36a3d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 04:58:58 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.384 2 DEBUG nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Start _get_guest_xml network_info=[{"id": "2da46865-98ea-42a7-a5cc-44b5bef36a3d", "address": "fa:16:3e:28:d2:4a", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2da46865-98", "ovs_interfaceid": "2da46865-98ea-42a7-a5cc-44b5bef36a3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 04:58:58 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7915092ded73a0a1e93e8fda5fd9538a3414c7bf089afe2c761ba10dc82e95d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.389 2 DEBUG nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.390 2 INFO nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Creating image(s)#033[00m
Oct 14 04:58:58 np0005486808 podman[294097]: 2025-10-14 08:58:58.404192745 +0000 UTC m=+0.152411054 container init 212876b675a73a1ca920ec438915fd467b1dd5dbeb4fe632879e077d977c7782 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 14 04:58:58 np0005486808 podman[294097]: 2025-10-14 08:58:58.411428483 +0000 UTC m=+0.159646772 container start 212876b675a73a1ca920ec438915fd467b1dd5dbeb4fe632879e077d977c7782 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.426 2 DEBUG nova.storage.rbd_utils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:58 np0005486808 neutron-haproxy-ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99[294112]: [NOTICE]   (294131) : New worker (294143) forked
Oct 14 04:58:58 np0005486808 neutron-haproxy-ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99[294112]: [NOTICE]   (294131) : Loading success.
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.462 2 DEBUG nova.storage.rbd_utils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.488 2 DEBUG nova.storage.rbd_utils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.492 2 DEBUG oslo_concurrency.processutils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.527 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432323.5125923, 2826d9ce-d739-49a1-abfa-80cee62173fb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.528 2 INFO nova.compute.manager [-] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] VM Stopped (Lifecycle Event)#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.540 2 WARNING nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.547 2 DEBUG nova.virt.libvirt.host [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.548 2 DEBUG nova.virt.libvirt.host [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.562 2 DEBUG nova.virt.libvirt.host [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.563 2 DEBUG nova.virt.libvirt.host [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.564 2 DEBUG nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.564 2 DEBUG nova.virt.hardware [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='446f2537-86e9-41eb-8a0d-254e85da4be1',id=5,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.565 2 DEBUG nova.virt.hardware [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.565 2 DEBUG nova.virt.hardware [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.566 2 DEBUG nova.virt.hardware [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.566 2 DEBUG nova.virt.hardware [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.567 2 DEBUG nova.virt.hardware [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.567 2 DEBUG nova.virt.hardware [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.568 2 DEBUG nova.virt.hardware [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.568 2 DEBUG nova.virt.hardware [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.568 2 DEBUG nova.virt.hardware [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.569 2 DEBUG nova.virt.hardware [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.573 2 DEBUG oslo_concurrency.processutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.603 2 DEBUG nova.compute.manager [None req-c9c60327-175e-4545-9dc0-eaecc3c56944 - - - - - -] [instance: 2826d9ce-d739-49a1-abfa-80cee62173fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.605 2 DEBUG oslo_concurrency.processutils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.113s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.606 2 DEBUG oslo_concurrency.lockutils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.607 2 DEBUG oslo_concurrency.lockutils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.607 2 DEBUG oslo_concurrency.lockutils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.632 2 DEBUG nova.storage.rbd_utils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.636 2 DEBUG oslo_concurrency.processutils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 51c76e0f-284d-4122-83b4-32c4518b9056_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.718 2 DEBUG nova.network.neutron [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Updating instance_info_cache with network_info: [{"id": "acc7c80f-8812-4bbf-93f8-cc3f1556b62a", "address": "fa:16:3e:da:46:6c", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacc7c80f-88", "ovs_interfaceid": "acc7c80f-8812-4bbf-93f8-cc3f1556b62a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.848 2 DEBUG oslo_concurrency.lockutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Releasing lock "refresh_cache-eb820455-d45c-4331-9363-124f11537f52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.849 2 DEBUG nova.compute.manager [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Instance network_info: |[{"id": "acc7c80f-8812-4bbf-93f8-cc3f1556b62a", "address": "fa:16:3e:da:46:6c", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacc7c80f-88", "ovs_interfaceid": "acc7c80f-8812-4bbf-93f8-cc3f1556b62a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.850 2 DEBUG oslo_concurrency.lockutils [req-5642a759-3aca-4f3e-a65d-200013c51ad9 req-c8125f40-1a11-4414-8b96-f256043b2c14 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-eb820455-d45c-4331-9363-124f11537f52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.850 2 DEBUG nova.network.neutron [req-5642a759-3aca-4f3e-a65d-200013c51ad9 req-c8125f40-1a11-4414-8b96-f256043b2c14 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Refreshing network info cache for port acc7c80f-8812-4bbf-93f8-cc3f1556b62a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.855 2 DEBUG nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Start _get_guest_xml network_info=[{"id": "acc7c80f-8812-4bbf-93f8-cc3f1556b62a", "address": "fa:16:3e:da:46:6c", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacc7c80f-88", "ovs_interfaceid": "acc7c80f-8812-4bbf-93f8-cc3f1556b62a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:08Z,direct_url=<?>,disk_format='qcow2',id=e2368e3e-f504-40e6-a9d3-67df18c845bb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:10Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'e2368e3e-f504-40e6-a9d3-67df18c845bb'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.861 2 WARNING nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.865 2 DEBUG nova.virt.libvirt.host [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.866 2 DEBUG nova.virt.libvirt.host [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.873 2 DEBUG nova.virt.libvirt.host [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.874 2 DEBUG nova.virt.libvirt.host [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.874 2 DEBUG nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.875 2 DEBUG nova.virt.hardware [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:08Z,direct_url=<?>,disk_format='qcow2',id=e2368e3e-f504-40e6-a9d3-67df18c845bb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:10Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.875 2 DEBUG nova.virt.hardware [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.876 2 DEBUG nova.virt.hardware [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.877 2 DEBUG nova.virt.hardware [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.877 2 DEBUG nova.virt.hardware [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.878 2 DEBUG nova.virt.hardware [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.878 2 DEBUG nova.virt.hardware [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.878 2 DEBUG nova.virt.hardware [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.879 2 DEBUG nova.virt.hardware [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.879 2 DEBUG nova.virt.hardware [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.879 2 DEBUG nova.virt.hardware [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.883 2 DEBUG oslo_concurrency.processutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.913 2 DEBUG oslo_concurrency.processutils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 51c76e0f-284d-4122-83b4-32c4518b9056_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.277s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.971 2 DEBUG nova.storage.rbd_utils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] resizing rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 04:58:58 np0005486808 nova_compute[259627]: 2025-10-14 08:58:58.995 2 DEBUG nova.network.neutron [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Successfully updated port: fa5f1925-a535-45ee-b96e-f79c725d7960 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.009 2 DEBUG oslo_concurrency.lockutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquiring lock "refresh_cache-654413e6-01cd-4e54-a271-6b515a8561e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.009 2 DEBUG oslo_concurrency.lockutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquired lock "refresh_cache-654413e6-01cd-4e54-a271-6b515a8561e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.009 2 DEBUG nova.network.neutron [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 04:58:59 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:58:59 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/375667791' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.062 2 DEBUG oslo_concurrency.processutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.081 2 DEBUG nova.storage.rbd_utils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] rbd image 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.084 2 DEBUG oslo_concurrency.processutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.127 2 DEBUG nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.127 2 DEBUG nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Ensure instance console log exists: /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.128 2 DEBUG oslo_concurrency.lockutils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.128 2 DEBUG oslo_concurrency.lockutils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.129 2 DEBUG oslo_concurrency.lockutils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.131 2 DEBUG nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.134 2 WARNING nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.140 2 DEBUG nova.virt.libvirt.host [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.141 2 DEBUG nova.virt.libvirt.host [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.145 2 DEBUG nova.virt.libvirt.host [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.145 2 DEBUG nova.virt.libvirt.host [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.146 2 DEBUG nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.146 2 DEBUG nova.virt.hardware [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.147 2 DEBUG nova.virt.hardware [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.147 2 DEBUG nova.virt.hardware [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.148 2 DEBUG nova.virt.hardware [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.148 2 DEBUG nova.virt.hardware [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.148 2 DEBUG nova.virt.hardware [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.149 2 DEBUG nova.virt.hardware [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.149 2 DEBUG nova.virt.hardware [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.149 2 DEBUG nova.virt.hardware [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.150 2 DEBUG nova.virt.hardware [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.150 2 DEBUG nova.virt.hardware [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.150 2 DEBUG nova.objects.instance [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 51c76e0f-284d-4122-83b4-32c4518b9056 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.166 2 DEBUG oslo_concurrency.processutils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.192 2 DEBUG nova.network.neutron [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 04:58:59 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:58:59 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2171146991' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.390 2 DEBUG oslo_concurrency.processutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.424 2 DEBUG nova.storage.rbd_utils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] rbd image eb820455-d45c-4331-9363-124f11537f52_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.429 2 DEBUG oslo_concurrency.processutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.475 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432339.475054, 27fa4cf8-c08c-46a2-af8f-17c8980a2317 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.476 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] VM Started (Lifecycle Event)#033[00m
Oct 14 04:58:59 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:58:59 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1816079181' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.513 2 DEBUG oslo_concurrency.processutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.515 2 DEBUG nova.virt.libvirt.vif [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:58:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-208119549',display_name='tempest-ListServerFiltersTestJSON-instance-208119549',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-208119549',id=30,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3d3a647aa3914555a8a2c5fd6fe7a543',ramdisk_id='',reservation_id='r-ldwr4ls0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1842486796',owner_user_name='tempest-ListServerFiltersTestJSON-1842486796-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:58:55Z,user_data=None,user_id='56f2f9bf9b064a208d9ce5fe732c4ff7',uuid=82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2da46865-98ea-42a7-a5cc-44b5bef36a3d", "address": "fa:16:3e:28:d2:4a", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2da46865-98", "ovs_interfaceid": "2da46865-98ea-42a7-a5cc-44b5bef36a3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.516 2 DEBUG nova.network.os_vif_util [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converting VIF {"id": "2da46865-98ea-42a7-a5cc-44b5bef36a3d", "address": "fa:16:3e:28:d2:4a", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2da46865-98", "ovs_interfaceid": "2da46865-98ea-42a7-a5cc-44b5bef36a3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.517 2 DEBUG nova.network.os_vif_util [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:d2:4a,bridge_name='br-int',has_traffic_filtering=True,id=2da46865-98ea-42a7-a5cc-44b5bef36a3d,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2da46865-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.519 2 DEBUG nova.objects.instance [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lazy-loading 'pci_devices' on Instance uuid 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:58:59 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1225: 305 pgs: 305 active+clean; 306 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 669 KiB/s rd, 9.2 MiB/s wr, 192 op/s
Oct 14 04:58:59 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:58:59 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3629039864' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.616 2 DEBUG oslo_concurrency.processutils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.640 2 DEBUG nova.storage.rbd_utils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.646 2 DEBUG oslo_concurrency.processutils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.678 2 DEBUG nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] End _get_guest_xml xml=<domain type="kvm">
Oct 14 04:58:59 np0005486808 nova_compute[259627]:  <uuid>82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9</uuid>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:  <name>instance-0000001e</name>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:  <memory>196608</memory>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <nova:name>tempest-ListServerFiltersTestJSON-instance-208119549</nova:name>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 08:58:58</nova:creationTime>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.micro">
Oct 14 04:58:59 np0005486808 nova_compute[259627]:        <nova:memory>192</nova:memory>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:        <nova:user uuid="56f2f9bf9b064a208d9ce5fe732c4ff7">tempest-ListServerFiltersTestJSON-1842486796-project-member</nova:user>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:        <nova:project uuid="3d3a647aa3914555a8a2c5fd6fe7a543">tempest-ListServerFiltersTestJSON-1842486796</nova:project>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:        <nova:port uuid="2da46865-98ea-42a7-a5cc-44b5bef36a3d">
Oct 14 04:58:59 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <system>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <entry name="serial">82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9</entry>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <entry name="uuid">82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9</entry>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    </system>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:  <os>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:  </os>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:  <features>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:  </features>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:  </clock>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:  <devices>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9_disk">
Oct 14 04:58:59 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:58:59 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9_disk.config">
Oct 14 04:58:59 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:58:59 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:28:d2:4a"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <target dev="tap2da46865-98"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    </interface>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9/console.log" append="off"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    </serial>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <video>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    </video>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    </rng>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:  </devices>
Oct 14 04:58:59 np0005486808 nova_compute[259627]: </domain>
Oct 14 04:58:59 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.678 2 DEBUG nova.compute.manager [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Preparing to wait for external event network-vif-plugged-2da46865-98ea-42a7-a5cc-44b5bef36a3d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.679 2 DEBUG oslo_concurrency.lockutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.679 2 DEBUG oslo_concurrency.lockutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.679 2 DEBUG oslo_concurrency.lockutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.680 2 DEBUG nova.virt.libvirt.vif [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:58:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-208119549',display_name='tempest-ListServerFiltersTestJSON-instance-208119549',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-208119549',id=30,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3d3a647aa3914555a8a2c5fd6fe7a543',ramdisk_id='',reservation_id='r-ldwr4ls0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1842486796',owner_user_name='tem
pest-ListServerFiltersTestJSON-1842486796-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:58:55Z,user_data=None,user_id='56f2f9bf9b064a208d9ce5fe732c4ff7',uuid=82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2da46865-98ea-42a7-a5cc-44b5bef36a3d", "address": "fa:16:3e:28:d2:4a", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2da46865-98", "ovs_interfaceid": "2da46865-98ea-42a7-a5cc-44b5bef36a3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
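The record above shows `devname: "tap2da46865-98"` for port `2da46865-98ea-42a7-a5cc-44b5bef36a3d`. Nova builds that name by prefixing `tap` and truncating the port UUID so the result fits the Linux 15-character interface-name limit. A small sketch of that convention (the constant name here is illustrative, not copied from nova's source):

```python
# Sketch of nova's tap device naming: "tap" + first 11 chars of the
# Neutron port UUID, keeping the name within Linux's IFNAMSIZ limit.
NIC_NAME_LEN = 14  # assumed: 3-char "tap" prefix + 11 UUID chars

def tap_name(port_id: str) -> str:
    """Return the tap device name for a Neutron port UUID (sketch)."""
    return ("tap" + port_id)[:NIC_NAME_LEN]

print(tap_name("2da46865-98ea-42a7-a5cc-44b5bef36a3d"))  # -> tap2da46865-98
```

This reproduces the `devname` and `vif_name` values seen throughout this log.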
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.680 2 DEBUG nova.network.os_vif_util [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converting VIF {"id": "2da46865-98ea-42a7-a5cc-44b5bef36a3d", "address": "fa:16:3e:28:d2:4a", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2da46865-98", "ovs_interfaceid": "2da46865-98ea-42a7-a5cc-44b5bef36a3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
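The VIF payload being converted carries subnet `10.100.0.0/28` with gateway `10.100.0.1` and fixed IP `10.100.0.13`. The stdlib `ipaddress` module can sanity-check that addressing:

```python
import ipaddress

# Addressing from the VIF payload above: a /28 tenant subnet.
net = ipaddress.ip_network("10.100.0.0/28")

assert ipaddress.ip_address("10.100.0.1") in net    # gateway
assert ipaddress.ip_address("10.100.0.13") in net   # fixed IP

print(net.num_addresses)       # 16 addresses in a /28
print(len(list(net.hosts())))  # 14 usable hosts (network/broadcast excluded)
```

A /28 leaves 14 assignable addresses, which is why small tempest networks like this one are sized that way.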
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.681 2 DEBUG nova.network.os_vif_util [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:d2:4a,bridge_name='br-int',has_traffic_filtering=True,id=2da46865-98ea-42a7-a5cc-44b5bef36a3d,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2da46865-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.682 2 DEBUG os_vif [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:d2:4a,bridge_name='br-int',has_traffic_filtering=True,id=2da46865-98ea-42a7-a5cc-44b5bef36a3d,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2da46865-98') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.683 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.684 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.684 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.687 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2da46865-98, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.687 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2da46865-98, col_values=(('external_ids', {'iface-id': '2da46865-98ea-42a7-a5cc-44b5bef36a3d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:28:d2:4a', 'vm-uuid': '82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
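The `DbSetCommand` above writes four `external_ids` keys onto the OVS `Interface` row; `iface-id` is the Neutron port UUID, which ovn-controller matches against the OVN logical switch port to bind the VIF. A sketch of the mapping being written (values taken directly from the logged transaction; the helper itself is illustrative):

```python
# Build the external_ids that os-vif sets on the OVS Interface row (sketch).
# OVN binds the port by matching external_ids:iface-id to its logical port.
def interface_external_ids(port_id: str, mac: str, instance_uuid: str) -> dict:
    return {
        "iface-id": port_id,        # Neutron port UUID == OVN logical port name
        "iface-status": "active",
        "attached-mac": mac,
        "vm-uuid": instance_uuid,   # the Nova instance that owns the VIF
    }

ids = interface_external_ids(
    "2da46865-98ea-42a7-a5cc-44b5bef36a3d",
    "fa:16:3e:28:d2:4a",
    "82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9",
)
print(ids)
```

The `AddBridgeCommand`/`AddPortCommand` pair before it is idempotent (`may_exist=True`), which is why the bridge transaction logs "Transaction caused no change".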
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.689 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432339.4768302, 27fa4cf8-c08c-46a2-af8f-17c8980a2317 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.689 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] VM Paused (Lifecycle Event)#033[00m
Oct 14 04:58:59 np0005486808 NetworkManager[44885]: <info>  [1760432339.6906] manager: (tap2da46865-98): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/105)
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.697 2 INFO os_vif [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:d2:4a,bridge_name='br-int',has_traffic_filtering=True,id=2da46865-98ea-42a7-a5cc-44b5bef36a3d,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2da46865-98')#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.715 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.725 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.744 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
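The sync line above compares "current DB power_state: 0" with "VM power_state: 3". Those are nova's `compute.power_state` integer codes; during spawn the DB still holds NOSTATE while libvirt already reports the guest as paused. A sketch of that mapping (labels as commonly defined in nova; treat the exact strings as an assumption):

```python
# nova.compute.power_state codes as they appear in this log (sketch).
POWER_STATES = {
    0: "pending",    # NOSTATE - DB value before the guest is running
    1: "running",
    3: "paused",     # what libvirt reports here while the guest spawns
    4: "shutdown",
    6: "crashed",
    7: "suspended",
}

print(POWER_STATES[0], "->", POWER_STATES[3])
```

Because `task_state` is still `spawning`, the manager deliberately skips the sync ("the instance has a pending task (spawning). Skip.") rather than acting on the transient paused state.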
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.760 2 DEBUG nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.761 2 DEBUG nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.761 2 DEBUG nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] No VIF found with MAC fa:16:3e:28:d2:4a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.761 2 INFO nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Using config drive#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.782 2 DEBUG nova.storage.rbd_utils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] rbd image 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
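The rbd_utils probe above looks for `82e5f434-…_disk.config` before creating the config drive. The naming convention visible in this log (and again in the guest XML further down) is `<instance-uuid>_disk` for the root disk and `<instance-uuid>_disk.config` for the config drive, both in the `vms` pool. A trivial sketch of that convention:

```python
# Nova-on-Ceph image naming as seen in this log (sketch of the convention).
def rbd_image_names(instance_uuid: str) -> tuple:
    """Return (root disk, config drive) RBD image names for an instance."""
    return (f"{instance_uuid}_disk", f"{instance_uuid}_disk.config")

disk, config = rbd_image_names("82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9")
print(disk)
print(config)
```

The "does not exist" DEBUG line is expected here: the config drive image has not been written yet at this point in the spawn.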
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.815 2 DEBUG nova.compute.manager [req-c1f41de1-d690-485a-9d7f-90d5a70768fb req-942170bb-ca1f-4d0c-a928-a03a8032539a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Received event network-changed-902a062a-858b-4495-936b-47a675567467 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.815 2 DEBUG nova.compute.manager [req-c1f41de1-d690-485a-9d7f-90d5a70768fb req-942170bb-ca1f-4d0c-a928-a03a8032539a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Refreshing instance network info cache due to event network-changed-902a062a-858b-4495-936b-47a675567467. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.815 2 DEBUG oslo_concurrency.lockutils [req-c1f41de1-d690-485a-9d7f-90d5a70768fb req-942170bb-ca1f-4d0c-a928-a03a8032539a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-654413e6-01cd-4e54-a271-6b515a8561e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:58:59 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:58:59 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2841665192' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.881 2 DEBUG oslo_concurrency.processutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.883 2 DEBUG nova.virt.libvirt.vif [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:58:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-195518745',display_name='tempest-ListServerFiltersTestJSON-instance-195518745',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-195518745',id=29,image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3d3a647aa3914555a8a2c5fd6fe7a543',ramdisk_id='',reservation_id='r-0kd7c49h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1842486796',owner_user_name='tempest-ListS
erverFiltersTestJSON-1842486796-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:58:53Z,user_data=None,user_id='56f2f9bf9b064a208d9ce5fe732c4ff7',uuid=eb820455-d45c-4331-9363-124f11537f52,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "acc7c80f-8812-4bbf-93f8-cc3f1556b62a", "address": "fa:16:3e:da:46:6c", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacc7c80f-88", "ovs_interfaceid": "acc7c80f-8812-4bbf-93f8-cc3f1556b62a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
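The VIF metadata reports `"mtu": 1442` on a network flagged `"tunneled": true`. That figure is consistent with a 1500-byte physical MTU minus the 58-byte overhead commonly cited for Geneve-over-IPv4 tunnels in OVN deployments; both the physical MTU and the overhead breakdown are assumptions here, not stated in the log:

```python
# Tunneled-network MTU arithmetic (assumed values: 1500-byte physical MTU,
# 58-byte Geneve-over-IPv4 overhead as commonly cited for OVN).
PHYSICAL_MTU = 1500
GENEVE_OVERHEAD = 58  # assumed: outer Ethernet/IP/UDP + Geneve header/options

tenant_mtu = PHYSICAL_MTU - GENEVE_OVERHEAD
print(tenant_mtu)  # -> 1442, matching the VIF's "mtu" field
```

The reduced MTU is what nova later writes into the guest XML's `<mtu size="1442"/>` element so the guest interface agrees with the overlay.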
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.883 2 DEBUG nova.network.os_vif_util [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converting VIF {"id": "acc7c80f-8812-4bbf-93f8-cc3f1556b62a", "address": "fa:16:3e:da:46:6c", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacc7c80f-88", "ovs_interfaceid": "acc7c80f-8812-4bbf-93f8-cc3f1556b62a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.885 2 DEBUG nova.network.os_vif_util [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:46:6c,bridge_name='br-int',has_traffic_filtering=True,id=acc7c80f-8812-4bbf-93f8-cc3f1556b62a,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacc7c80f-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.886 2 DEBUG nova.objects.instance [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lazy-loading 'pci_devices' on Instance uuid eb820455-d45c-4331-9363-124f11537f52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.945 2 DEBUG nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] End _get_guest_xml xml=<domain type="kvm">
Oct 14 04:58:59 np0005486808 nova_compute[259627]:  <uuid>eb820455-d45c-4331-9363-124f11537f52</uuid>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:  <name>instance-0000001d</name>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <nova:name>tempest-ListServerFiltersTestJSON-instance-195518745</nova:name>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 08:58:58</nova:creationTime>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 04:58:59 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:        <nova:user uuid="56f2f9bf9b064a208d9ce5fe732c4ff7">tempest-ListServerFiltersTestJSON-1842486796-project-member</nova:user>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:        <nova:project uuid="3d3a647aa3914555a8a2c5fd6fe7a543">tempest-ListServerFiltersTestJSON-1842486796</nova:project>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="e2368e3e-f504-40e6-a9d3-67df18c845bb"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:        <nova:port uuid="acc7c80f-8812-4bbf-93f8-cc3f1556b62a">
Oct 14 04:58:59 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <system>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <entry name="serial">eb820455-d45c-4331-9363-124f11537f52</entry>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <entry name="uuid">eb820455-d45c-4331-9363-124f11537f52</entry>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    </system>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:  <os>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:  </os>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:  <features>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:  </features>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:  </clock>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:  <devices>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/eb820455-d45c-4331-9363-124f11537f52_disk">
Oct 14 04:58:59 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:58:59 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/eb820455-d45c-4331-9363-124f11537f52_disk.config">
Oct 14 04:58:59 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:58:59 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:da:46:6c"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <target dev="tapacc7c80f-88"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    </interface>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/eb820455-d45c-4331-9363-124f11537f52/console.log" append="off"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    </serial>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <video>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    </video>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    </rng>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 04:58:59 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 04:58:59 np0005486808 nova_compute[259627]:  </devices>
Oct 14 04:58:59 np0005486808 nova_compute[259627]: </domain>
Oct 14 04:58:59 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
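The guest XML logged above embeds nova's own metadata under the `http://openstack.org/xmlns/libvirt/nova/1.1` namespace inside libvirt's `<metadata>` element. Parsing a trimmed copy shows the relationship between the two memory figures: libvirt's `<memory>` is in KiB while `nova:memory` is the flavor's MiB value:

```python
import xml.etree.ElementTree as ET

# A trimmed copy of the domain XML logged above (not the full document).
XML = """<domain type="kvm">
  <uuid>eb820455-d45c-4331-9363-124f11537f52</uuid>
  <name>instance-0000001d</name>
  <memory>131072</memory>
  <vcpu>1</vcpu>
  <metadata>
    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
      <nova:flavor name="m1.nano">
        <nova:memory>128</nova:memory>
      </nova:flavor>
    </nova:instance>
  </metadata>
</domain>"""

NS = {"nova": "http://openstack.org/xmlns/libvirt/nova/1.1"}
root = ET.fromstring(XML)
kib = int(root.findtext("memory"))                       # libvirt: KiB
mib = int(root.find(".//nova:flavor/nova:memory", NS).text)  # flavor: MiB
assert kib == mib * 1024  # 131072 KiB == 128 MiB
print(root.findtext("name"), kib, mib)
```

The same document also shows the OVN wiring end-to-end: `<interface type="ethernet">` targets `tapacc7c80f-88`, the tap name derived from the port UUID, and both disks point at the `vms/<uuid>_disk*` RBD images probed earlier.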
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.945 2 DEBUG nova.compute.manager [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Preparing to wait for external event network-vif-plugged-acc7c80f-8812-4bbf-93f8-cc3f1556b62a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
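The "Preparing to wait for external event" line shows the key nova registers before resuming the paused guest: Neutron must send `network-vif-plugged-<port-uuid>` once OVN reports the port bound. The key is simply the event name joined to its tag:

```python
# Sketch of nova's external-event key format: "<event-name>-<tag>", where
# the tag for VIF events is the Neutron port UUID.
def external_event_key(name: str, tag: str) -> str:
    return f"{name}-{tag}"

key = external_event_key(
    "network-vif-plugged", "acc7c80f-8812-4bbf-93f8-cc3f1556b62a"
)
print(key)
```

If the event never arrives within the configured timeout, the spawn fails with a vif-plug timeout; seeing the "Preparing to wait" line without a later matching "Received event" is the usual symptom.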
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.946 2 DEBUG oslo_concurrency.lockutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "eb820455-d45c-4331-9363-124f11537f52-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.946 2 DEBUG oslo_concurrency.lockutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "eb820455-d45c-4331-9363-124f11537f52-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.947 2 DEBUG oslo_concurrency.lockutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "eb820455-d45c-4331-9363-124f11537f52-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.948 2 DEBUG nova.virt.libvirt.vif [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:58:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-195518745',display_name='tempest-ListServerFiltersTestJSON-instance-195518745',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-195518745',id=29,image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3d3a647aa3914555a8a2c5fd6fe7a543',ramdisk_id='',reservation_id='r-0kd7c49h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1842486796',owner_user_name='tem
pest-ListServerFiltersTestJSON-1842486796-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:58:53Z,user_data=None,user_id='56f2f9bf9b064a208d9ce5fe732c4ff7',uuid=eb820455-d45c-4331-9363-124f11537f52,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "acc7c80f-8812-4bbf-93f8-cc3f1556b62a", "address": "fa:16:3e:da:46:6c", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacc7c80f-88", "ovs_interfaceid": "acc7c80f-8812-4bbf-93f8-cc3f1556b62a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.949 2 DEBUG nova.network.os_vif_util [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converting VIF {"id": "acc7c80f-8812-4bbf-93f8-cc3f1556b62a", "address": "fa:16:3e:da:46:6c", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacc7c80f-88", "ovs_interfaceid": "acc7c80f-8812-4bbf-93f8-cc3f1556b62a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.950 2 DEBUG nova.network.os_vif_util [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:46:6c,bridge_name='br-int',has_traffic_filtering=True,id=acc7c80f-8812-4bbf-93f8-cc3f1556b62a,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacc7c80f-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.950 2 DEBUG os_vif [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:46:6c,bridge_name='br-int',has_traffic_filtering=True,id=acc7c80f-8812-4bbf-93f8-cc3f1556b62a,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacc7c80f-88') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.956 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.956 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.960 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapacc7c80f-88, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.961 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapacc7c80f-88, col_values=(('external_ids', {'iface-id': 'acc7c80f-8812-4bbf-93f8-cc3f1556b62a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:da:46:6c', 'vm-uuid': 'eb820455-d45c-4331-9363-124f11537f52'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:59 np0005486808 NetworkManager[44885]: <info>  [1760432339.9641] manager: (tapacc7c80f-88): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/106)
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:58:59 np0005486808 nova_compute[259627]: 2025-10-14 08:58:59.973 2 INFO os_vif [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:46:6c,bridge_name='br-int',has_traffic_filtering=True,id=acc7c80f-8812-4bbf-93f8-cc3f1556b62a,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacc7c80f-88')#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.068 2 DEBUG nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.069 2 DEBUG nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.069 2 DEBUG nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] No VIF found with MAC fa:16:3e:da:46:6c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.070 2 INFO nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Using config drive#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.093 2 DEBUG nova.storage.rbd_utils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] rbd image eb820455-d45c-4331-9363-124f11537f52_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:59:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:59:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4235467809' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.150 2 DEBUG oslo_concurrency.processutils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.152 2 DEBUG nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] End _get_guest_xml xml=<domain type="kvm">
Oct 14 04:59:00 np0005486808 nova_compute[259627]:  <uuid>51c76e0f-284d-4122-83b4-32c4518b9056</uuid>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:  <name>instance-00000016</name>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 04:59:00 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:      <nova:name>tempest-ServersAdmin275Test-server-546094612</nova:name>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 08:58:59</nova:creationTime>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 04:59:00 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:        <nova:user uuid="24a7b84f511340ae859b668a0e7becf6">tempest-ServersAdmin275Test-1795131452-project-member</nova:user>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:        <nova:project uuid="61066a48551647f18a4cfb7a7147e7ed">tempest-ServersAdmin275Test-1795131452</nova:project>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:      <nova:ports/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <system>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:      <entry name="serial">51c76e0f-284d-4122-83b4-32c4518b9056</entry>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:      <entry name="uuid">51c76e0f-284d-4122-83b4-32c4518b9056</entry>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    </system>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:  <os>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:  </os>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:  <features>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:  </features>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:  </clock>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:  <devices>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 04:59:00 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/51c76e0f-284d-4122-83b4-32c4518b9056_disk">
Oct 14 04:59:00 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:59:00 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 04:59:00 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/51c76e0f-284d-4122-83b4-32c4518b9056_disk.config">
Oct 14 04:59:00 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:59:00 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 04:59:00 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/console.log" append="off"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    </serial>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <video>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    </video>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 04:59:00 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    </rng>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 04:59:00 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 04:59:00 np0005486808 nova_compute[259627]:  </devices>
Oct 14 04:59:00 np0005486808 nova_compute[259627]: </domain>
Oct 14 04:59:00 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.220 2 DEBUG nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.221 2 DEBUG nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.221 2 INFO nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Using config drive#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.242 2 DEBUG nova.storage.rbd_utils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.271 2 DEBUG nova.objects.instance [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 51c76e0f-284d-4122-83b4-32c4518b9056 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.312 2 DEBUG nova.objects.instance [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Lazy-loading 'keypairs' on Instance uuid 51c76e0f-284d-4122-83b4-32c4518b9056 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.464 2 DEBUG nova.network.neutron [req-c27f38a9-eaec-48a3-acb6-2b78db5f1ce2 req-31f11ef6-bf94-4c6a-81e6-ac80959e2f76 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Updated VIF entry in instance network info cache for port 2da46865-98ea-42a7-a5cc-44b5bef36a3d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.465 2 DEBUG nova.network.neutron [req-c27f38a9-eaec-48a3-acb6-2b78db5f1ce2 req-31f11ef6-bf94-4c6a-81e6-ac80959e2f76 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Updating instance_info_cache with network_info: [{"id": "2da46865-98ea-42a7-a5cc-44b5bef36a3d", "address": "fa:16:3e:28:d2:4a", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2da46865-98", "ovs_interfaceid": "2da46865-98ea-42a7-a5cc-44b5bef36a3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.482 2 DEBUG oslo_concurrency.lockutils [req-c27f38a9-eaec-48a3-acb6-2b78db5f1ce2 req-31f11ef6-bf94-4c6a-81e6-ac80959e2f76 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.593 2 INFO nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Creating config drive at /var/lib/nova/instances/82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9/disk.config#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.602 2 DEBUG oslo_concurrency.processutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe7conj5k execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.669 2 INFO nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Creating config drive at /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/disk.config#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.675 2 DEBUG oslo_concurrency.processutils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz1n0wesz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.713 2 DEBUG nova.compute.manager [req-baafe2d0-5c8c-4477-a7c1-ca72ffd648aa req-173555ac-bd87-4e1e-a45b-7b9c2f08c0a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Received event network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.715 2 DEBUG oslo_concurrency.lockutils [req-baafe2d0-5c8c-4477-a7c1-ca72ffd648aa req-173555ac-bd87-4e1e-a45b-7b9c2f08c0a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.715 2 DEBUG oslo_concurrency.lockutils [req-baafe2d0-5c8c-4477-a7c1-ca72ffd648aa req-173555ac-bd87-4e1e-a45b-7b9c2f08c0a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.715 2 DEBUG oslo_concurrency.lockutils [req-baafe2d0-5c8c-4477-a7c1-ca72ffd648aa req-173555ac-bd87-4e1e-a45b-7b9c2f08c0a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.716 2 DEBUG nova.compute.manager [req-baafe2d0-5c8c-4477-a7c1-ca72ffd648aa req-173555ac-bd87-4e1e-a45b-7b9c2f08c0a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Processing event network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.716 2 DEBUG nova.compute.manager [req-baafe2d0-5c8c-4477-a7c1-ca72ffd648aa req-173555ac-bd87-4e1e-a45b-7b9c2f08c0a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Received event network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.717 2 DEBUG oslo_concurrency.lockutils [req-baafe2d0-5c8c-4477-a7c1-ca72ffd648aa req-173555ac-bd87-4e1e-a45b-7b9c2f08c0a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.717 2 DEBUG oslo_concurrency.lockutils [req-baafe2d0-5c8c-4477-a7c1-ca72ffd648aa req-173555ac-bd87-4e1e-a45b-7b9c2f08c0a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.717 2 DEBUG oslo_concurrency.lockutils [req-baafe2d0-5c8c-4477-a7c1-ca72ffd648aa req-173555ac-bd87-4e1e-a45b-7b9c2f08c0a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.717 2 DEBUG nova.compute.manager [req-baafe2d0-5c8c-4477-a7c1-ca72ffd648aa req-173555ac-bd87-4e1e-a45b-7b9c2f08c0a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] No waiting events found dispatching network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.718 2 WARNING nova.compute.manager [req-baafe2d0-5c8c-4477-a7c1-ca72ffd648aa req-173555ac-bd87-4e1e-a45b-7b9c2f08c0a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Received unexpected event network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 for instance with vm_state building and task_state spawning.#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.726 2 DEBUG nova.compute.manager [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.738 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432340.7378342, 27fa4cf8-c08c-46a2-af8f-17c8980a2317 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.739 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] VM Resumed (Lifecycle Event)#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.742 2 DEBUG nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.756 2 INFO nova.virt.libvirt.driver [-] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Instance spawned successfully.#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.756 2 DEBUG nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.773 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.783 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.785 2 DEBUG oslo_concurrency.processutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe7conj5k" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:00 np0005486808 podman[294605]: 2025-10-14 08:59:00.822829066 +0000 UTC m=+0.109988661 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.830 2 DEBUG nova.storage.rbd_utils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] rbd image 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.835 2 DEBUG oslo_concurrency.processutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9/disk.config 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:00 np0005486808 podman[294609]: 2025-10-14 08:59:00.840872 +0000 UTC m=+0.132282530 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.875 2 DEBUG oslo_concurrency.processutils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz1n0wesz" returned: 0 in 0.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.901 2 DEBUG nova.storage.rbd_utils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] rbd image 51c76e0f-284d-4122-83b4-32c4518b9056_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.904 2 DEBUG oslo_concurrency.processutils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/disk.config 51c76e0f-284d-4122-83b4-32c4518b9056_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.930 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.934 2 DEBUG nova.network.neutron [req-5642a759-3aca-4f3e-a65d-200013c51ad9 req-c8125f40-1a11-4414-8b96-f256043b2c14 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Updated VIF entry in instance network info cache for port acc7c80f-8812-4bbf-93f8-cc3f1556b62a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.935 2 DEBUG nova.network.neutron [req-5642a759-3aca-4f3e-a65d-200013c51ad9 req-c8125f40-1a11-4414-8b96-f256043b2c14 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Updating instance_info_cache with network_info: [{"id": "acc7c80f-8812-4bbf-93f8-cc3f1556b62a", "address": "fa:16:3e:da:46:6c", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacc7c80f-88", "ovs_interfaceid": "acc7c80f-8812-4bbf-93f8-cc3f1556b62a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.940 2 INFO nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Creating config drive at /var/lib/nova/instances/eb820455-d45c-4331-9363-124f11537f52/disk.config#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.946 2 DEBUG oslo_concurrency.processutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/eb820455-d45c-4331-9363-124f11537f52/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0isfzie8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.982 2 DEBUG nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.983 2 DEBUG nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.983 2 DEBUG nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.984 2 DEBUG nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.984 2 DEBUG nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:00 np0005486808 nova_compute[259627]: 2025-10-14 08:59:00.987 2 DEBUG nova.virt.libvirt.driver [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:01 np0005486808 nova_compute[259627]: 2025-10-14 08:59:01.020 2 DEBUG oslo_concurrency.processutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9/disk.config 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.185s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:01 np0005486808 nova_compute[259627]: 2025-10-14 08:59:01.021 2 INFO nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Deleting local config drive /var/lib/nova/instances/82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9/disk.config because it was imported into RBD.#033[00m
Oct 14 04:59:01 np0005486808 nova_compute[259627]: 2025-10-14 08:59:01.036 2 DEBUG oslo_concurrency.lockutils [req-5642a759-3aca-4f3e-a65d-200013c51ad9 req-c8125f40-1a11-4414-8b96-f256043b2c14 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-eb820455-d45c-4331-9363-124f11537f52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:59:01 np0005486808 nova_compute[259627]: 2025-10-14 08:59:01.064 2 INFO nova.compute.manager [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Took 10.35 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 04:59:01 np0005486808 nova_compute[259627]: 2025-10-14 08:59:01.065 2 DEBUG nova.compute.manager [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:01 np0005486808 NetworkManager[44885]: <info>  [1760432341.0795] manager: (tap2da46865-98): new Tun device (/org/freedesktop/NetworkManager/Devices/107)
Oct 14 04:59:01 np0005486808 kernel: tap2da46865-98: entered promiscuous mode
Oct 14 04:59:01 np0005486808 nova_compute[259627]: 2025-10-14 08:59:01.084 2 DEBUG oslo_concurrency.processutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/eb820455-d45c-4331-9363-124f11537f52/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0isfzie8" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:01 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:01Z|00197|binding|INFO|Claiming lport 2da46865-98ea-42a7-a5cc-44b5bef36a3d for this chassis.
Oct 14 04:59:01 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:01Z|00198|binding|INFO|2da46865-98ea-42a7-a5cc-44b5bef36a3d: Claiming fa:16:3e:28:d2:4a 10.100.0.13
Oct 14 04:59:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.109 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:d2:4a 10.100.0.13'], port_security=['fa:16:3e:28:d2:4a 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3d3a647aa3914555a8a2c5fd6fe7a543', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bf06e246-cec9-4c0e-acc7-df3f274be7f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dfac5894-b2bd-47a3-835a-de3d7c1134b1, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=2da46865-98ea-42a7-a5cc-44b5bef36a3d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:59:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.110 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 2da46865-98ea-42a7-a5cc-44b5bef36a3d in datapath c4d50d6a-6686-4b50-b1e5-9f71bae17a99 bound to our chassis#033[00m
Oct 14 04:59:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.111 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c4d50d6a-6686-4b50-b1e5-9f71bae17a99#033[00m
Oct 14 04:59:01 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:01Z|00199|binding|INFO|Setting lport 2da46865-98ea-42a7-a5cc-44b5bef36a3d ovn-installed in OVS
Oct 14 04:59:01 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:01Z|00200|binding|INFO|Setting lport 2da46865-98ea-42a7-a5cc-44b5bef36a3d up in Southbound
Oct 14 04:59:01 np0005486808 systemd-udevd[294841]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 04:59:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.132 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4bc3f742-6d9f-44ca-b46c-4684d211826e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:01 np0005486808 nova_compute[259627]: 2025-10-14 08:59:01.140 2 DEBUG nova.storage.rbd_utils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] rbd image eb820455-d45c-4331-9363-124f11537f52_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:59:01 np0005486808 systemd-machined[214636]: New machine qemu-30-instance-0000001e.
Oct 14 04:59:01 np0005486808 NetworkManager[44885]: <info>  [1760432341.1468] device (tap2da46865-98): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 04:59:01 np0005486808 NetworkManager[44885]: <info>  [1760432341.1476] device (tap2da46865-98): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 04:59:01 np0005486808 systemd[1]: Started Virtual Machine qemu-30-instance-0000001e.
Oct 14 04:59:01 np0005486808 nova_compute[259627]: 2025-10-14 08:59:01.170 2 DEBUG oslo_concurrency.processutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/eb820455-d45c-4331-9363-124f11537f52/disk.config eb820455-d45c-4331-9363-124f11537f52_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.172 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c44ddd1d-7c22-4d5b-b557-987078c340e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.175 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c7aa3fd7-3569-4cad-86aa-67d107e7887a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:01 np0005486808 nova_compute[259627]: 2025-10-14 08:59:01.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.220 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f7d12f8a-482e-43d7-b853-758b1b23c03a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:01 np0005486808 nova_compute[259627]: 2025-10-14 08:59:01.221 2 DEBUG oslo_concurrency.processutils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/disk.config 51c76e0f-284d-4122-83b4-32c4518b9056_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.317s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:01 np0005486808 nova_compute[259627]: 2025-10-14 08:59:01.222 2 INFO nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Deleting local config drive /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056/disk.config because it was imported into RBD.#033[00m
Oct 14 04:59:01 np0005486808 nova_compute[259627]: 2025-10-14 08:59:01.224 2 INFO nova.compute.manager [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Took 11.34 seconds to build instance.#033[00m
Oct 14 04:59:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.239 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[706d20ae-7e55-40cc-abfe-3bdeed30c15d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4d50d6a-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:91:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 832, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 832, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611643, 'reachable_time': 22346, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294860, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:01 np0005486808 nova_compute[259627]: 2025-10-14 08:59:01.242 2 DEBUG oslo_concurrency.lockutils [None req-f879b07c-8a65-40fc-8437-9ed973da565d 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.427s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.256 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[12a41923-cd44-4dcd-aa8c-60c8d5234f7f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc4d50d6a-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611657, 'tstamp': 611657}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294867, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc4d50d6a-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611661, 'tstamp': 611661}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294867, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.258 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4d50d6a-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:01 np0005486808 nova_compute[259627]: 2025-10-14 08:59:01.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:01 np0005486808 nova_compute[259627]: 2025-10-14 08:59:01.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.266 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc4d50d6a-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.267 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:59:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.267 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc4d50d6a-60, col_values=(('external_ids', {'iface-id': '650f034e-5333-49ba-9907-b0409944aee7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.268 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:59:01 np0005486808 systemd-machined[214636]: New machine qemu-31-instance-00000016.
Oct 14 04:59:01 np0005486808 systemd[1]: Started Virtual Machine qemu-31-instance-00000016.
Oct 14 04:59:01 np0005486808 nova_compute[259627]: 2025-10-14 08:59:01.388 2 DEBUG oslo_concurrency.processutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/eb820455-d45c-4331-9363-124f11537f52/disk.config eb820455-d45c-4331-9363-124f11537f52_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.219s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:01 np0005486808 nova_compute[259627]: 2025-10-14 08:59:01.389 2 INFO nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Deleting local config drive /var/lib/nova/instances/eb820455-d45c-4331-9363-124f11537f52/disk.config because it was imported into RBD.#033[00m
Oct 14 04:59:01 np0005486808 kernel: tapacc7c80f-88: entered promiscuous mode
Oct 14 04:59:01 np0005486808 NetworkManager[44885]: <info>  [1760432341.4347] manager: (tapacc7c80f-88): new Tun device (/org/freedesktop/NetworkManager/Devices/108)
Oct 14 04:59:01 np0005486808 systemd-udevd[294850]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 04:59:01 np0005486808 NetworkManager[44885]: <info>  [1760432341.4511] device (tapacc7c80f-88): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 04:59:01 np0005486808 NetworkManager[44885]: <info>  [1760432341.4522] device (tapacc7c80f-88): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 04:59:01 np0005486808 nova_compute[259627]: 2025-10-14 08:59:01.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:01 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:01Z|00201|binding|INFO|Claiming lport acc7c80f-8812-4bbf-93f8-cc3f1556b62a for this chassis.
Oct 14 04:59:01 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:01Z|00202|binding|INFO|acc7c80f-8812-4bbf-93f8-cc3f1556b62a: Claiming fa:16:3e:da:46:6c 10.100.0.14
Oct 14 04:59:01 np0005486808 nova_compute[259627]: 2025-10-14 08:59:01.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.479 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:46:6c 10.100.0.14'], port_security=['fa:16:3e:da:46:6c 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'eb820455-d45c-4331-9363-124f11537f52', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3d3a647aa3914555a8a2c5fd6fe7a543', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bf06e246-cec9-4c0e-acc7-df3f274be7f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dfac5894-b2bd-47a3-835a-de3d7c1134b1, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=acc7c80f-8812-4bbf-93f8-cc3f1556b62a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:59:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.481 162547 INFO neutron.agent.ovn.metadata.agent [-] Port acc7c80f-8812-4bbf-93f8-cc3f1556b62a in datapath c4d50d6a-6686-4b50-b1e5-9f71bae17a99 bound to our chassis#033[00m
Oct 14 04:59:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.482 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c4d50d6a-6686-4b50-b1e5-9f71bae17a99#033[00m
Oct 14 04:59:01 np0005486808 systemd-machined[214636]: New machine qemu-32-instance-0000001d.
Oct 14 04:59:01 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:01Z|00203|binding|INFO|Setting lport acc7c80f-8812-4bbf-93f8-cc3f1556b62a ovn-installed in OVS
Oct 14 04:59:01 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:01Z|00204|binding|INFO|Setting lport acc7c80f-8812-4bbf-93f8-cc3f1556b62a up in Southbound
Oct 14 04:59:01 np0005486808 nova_compute[259627]: 2025-10-14 08:59:01.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:01 np0005486808 systemd[1]: Started Virtual Machine qemu-32-instance-0000001d.
Oct 14 04:59:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.506 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c2829edc-f706-4361-9d03-a94908a5764b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.550 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[0eb11ff9-1d51-44b3-86d0-0cce81fc048c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.560 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[2178aaf2-7369-4150-aafa-2d117be85124]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:01 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1226: 305 pgs: 305 active+clean; 319 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 13 MiB/s wr, 336 op/s
Oct 14 04:59:01 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:59:01 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:59:01 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 04:59:01 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:59:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.606 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[35bcf593-1bca-4d3f-9daf-8d12d4f57abb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:01 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 04:59:01 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:59:01 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 59204c16-b91d-44e5-95b9-9ae085772098 does not exist
Oct 14 04:59:01 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 01099060-9d07-4938-9e55-b92a8dfa4dd1 does not exist
Oct 14 04:59:01 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev dd8fafc5-f278-4409-a8be-0056ff69e3cf does not exist
Oct 14 04:59:01 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 04:59:01 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 04:59:01 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 04:59:01 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:59:01 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 04:59:01 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 04:59:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.627 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[558f6d6b-b36a-46ba-aa4c-f99ae00a808c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4d50d6a-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:91:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 832, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 832, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611643, 'reachable_time': 22346, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294938, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.666 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d0758118-aa99-47c1-a386-41b529a37904]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc4d50d6a-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611657, 'tstamp': 611657}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294940, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc4d50d6a-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611661, 'tstamp': 611661}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294940, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.670 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4d50d6a-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:01 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 04:59:01 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:59:01 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 04:59:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.677 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc4d50d6a-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.677 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:59:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.678 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc4d50d6a-60, col_values=(('external_ids', {'iface-id': '650f034e-5333-49ba-9907-b0409944aee7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:01.678 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:59:01 np0005486808 nova_compute[259627]: 2025-10-14 08:59:01.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:01 np0005486808 nova_compute[259627]: 2025-10-14 08:59:01.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:01 np0005486808 nova_compute[259627]: 2025-10-14 08:59:01.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.232 2 DEBUG nova.compute.manager [req-4e170bb6-5e17-485b-a46a-2b27f7dfc15f req-e8179890-2d35-4296-9538-73c87588762a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Received event network-vif-plugged-acc7c80f-8812-4bbf-93f8-cc3f1556b62a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.232 2 DEBUG oslo_concurrency.lockutils [req-4e170bb6-5e17-485b-a46a-2b27f7dfc15f req-e8179890-2d35-4296-9538-73c87588762a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "eb820455-d45c-4331-9363-124f11537f52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.233 2 DEBUG oslo_concurrency.lockutils [req-4e170bb6-5e17-485b-a46a-2b27f7dfc15f req-e8179890-2d35-4296-9538-73c87588762a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "eb820455-d45c-4331-9363-124f11537f52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.233 2 DEBUG oslo_concurrency.lockutils [req-4e170bb6-5e17-485b-a46a-2b27f7dfc15f req-e8179890-2d35-4296-9538-73c87588762a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "eb820455-d45c-4331-9363-124f11537f52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.233 2 DEBUG nova.compute.manager [req-4e170bb6-5e17-485b-a46a-2b27f7dfc15f req-e8179890-2d35-4296-9538-73c87588762a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Processing event network-vif-plugged-acc7c80f-8812-4bbf-93f8-cc3f1556b62a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.242 2 DEBUG nova.network.neutron [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Updating instance_info_cache with network_info: [{"id": "902a062a-858b-4495-936b-47a675567467", "address": "fa:16:3e:99:b0:d5", "network": {"id": "b65b17d8-22d8-41b1-aa72-fd93aefdff30", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1724838337", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.125", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap902a062a-85", "ovs_interfaceid": "902a062a-858b-4495-936b-47a675567467", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fa5f1925-a535-45ee-b96e-f79c725d7960", "address": "fa:16:3e:9d:59:d2", "network": {"id": "7b0bcfe7-c41f-42d1-9739-3dab148a181f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1841633563", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa5f1925-a5", "ovs_interfaceid": "fa5f1925-a535-45ee-b96e-f79c725d7960", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:59:02 np0005486808 podman[295198]: 2025-10-14 08:59:02.257553128 +0000 UTC m=+0.055656238 container create 71392b55835803ada86e5530f29d34cca73293991592a7a821173a7f09afdf71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.267 2 DEBUG oslo_concurrency.lockutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Releasing lock "refresh_cache-654413e6-01cd-4e54-a271-6b515a8561e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.267 2 DEBUG nova.compute.manager [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Instance network_info: |[{"id": "902a062a-858b-4495-936b-47a675567467", "address": "fa:16:3e:99:b0:d5", "network": {"id": "b65b17d8-22d8-41b1-aa72-fd93aefdff30", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1724838337", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.125", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap902a062a-85", "ovs_interfaceid": "902a062a-858b-4495-936b-47a675567467", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fa5f1925-a535-45ee-b96e-f79c725d7960", "address": "fa:16:3e:9d:59:d2", "network": {"id": "7b0bcfe7-c41f-42d1-9739-3dab148a181f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1841633563", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa5f1925-a5", "ovs_interfaceid": "fa5f1925-a535-45ee-b96e-f79c725d7960", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.268 2 DEBUG oslo_concurrency.lockutils [req-c1f41de1-d690-485a-9d7f-90d5a70768fb req-942170bb-ca1f-4d0c-a928-a03a8032539a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-654413e6-01cd-4e54-a271-6b515a8561e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.268 2 DEBUG nova.network.neutron [req-c1f41de1-d690-485a-9d7f-90d5a70768fb req-942170bb-ca1f-4d0c-a928-a03a8032539a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Refreshing network info cache for port 902a062a-858b-4495-936b-47a675567467 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.271 2 DEBUG nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Start _get_guest_xml network_info=[{"id": "902a062a-858b-4495-936b-47a675567467", "address": "fa:16:3e:99:b0:d5", "network": {"id": "b65b17d8-22d8-41b1-aa72-fd93aefdff30", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1724838337", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.125", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap902a062a-85", "ovs_interfaceid": "902a062a-858b-4495-936b-47a675567467", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fa5f1925-a535-45ee-b96e-f79c725d7960", "address": "fa:16:3e:9d:59:d2", "network": {"id": "7b0bcfe7-c41f-42d1-9739-3dab148a181f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1841633563", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": 
true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa5f1925-a5", "ovs_interfaceid": "fa5f1925-a535-45ee-b96e-f79c725d7960", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.275 2 WARNING nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.280 2 DEBUG nova.virt.libvirt.host [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.281 2 DEBUG nova.virt.libvirt.host [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.284 2 DEBUG nova.virt.libvirt.host [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.285 2 DEBUG nova.virt.libvirt.host [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.285 2 DEBUG nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.285 2 DEBUG nova.virt.hardware [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.286 2 DEBUG nova.virt.hardware [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.286 2 DEBUG nova.virt.hardware [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.286 2 DEBUG nova.virt.hardware [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.286 2 DEBUG nova.virt.hardware [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.287 2 DEBUG nova.virt.hardware [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.287 2 DEBUG nova.virt.hardware [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.287 2 DEBUG nova.virt.hardware [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.287 2 DEBUG nova.virt.hardware [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.287 2 DEBUG nova.virt.hardware [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.288 2 DEBUG nova.virt.hardware [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.291 2 DEBUG oslo_concurrency.processutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:02 np0005486808 systemd[1]: Started libpod-conmon-71392b55835803ada86e5530f29d34cca73293991592a7a821173a7f09afdf71.scope.
Oct 14 04:59:02 np0005486808 podman[295198]: 2025-10-14 08:59:02.233810705 +0000 UTC m=+0.031913835 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:59:02 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:59:02 np0005486808 podman[295198]: 2025-10-14 08:59:02.348887611 +0000 UTC m=+0.146990721 container init 71392b55835803ada86e5530f29d34cca73293991592a7a821173a7f09afdf71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lederberg, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 14 04:59:02 np0005486808 podman[295198]: 2025-10-14 08:59:02.355857972 +0000 UTC m=+0.153961082 container start 71392b55835803ada86e5530f29d34cca73293991592a7a821173a7f09afdf71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:59:02 np0005486808 podman[295198]: 2025-10-14 08:59:02.359923012 +0000 UTC m=+0.158026122 container attach 71392b55835803ada86e5530f29d34cca73293991592a7a821173a7f09afdf71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:59:02 np0005486808 suspicious_lederberg[295212]: 167 167
Oct 14 04:59:02 np0005486808 systemd[1]: libpod-71392b55835803ada86e5530f29d34cca73293991592a7a821173a7f09afdf71.scope: Deactivated successfully.
Oct 14 04:59:02 np0005486808 conmon[295212]: conmon 71392b55835803ada86e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-71392b55835803ada86e5530f29d34cca73293991592a7a821173a7f09afdf71.scope/container/memory.events
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.403 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432342.4034772, 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.405 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] VM Started (Lifecycle Event)#033[00m
Oct 14 04:59:02 np0005486808 podman[295217]: 2025-10-14 08:59:02.41238447 +0000 UTC m=+0.026570004 container died 71392b55835803ada86e5530f29d34cca73293991592a7a821173a7f09afdf71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.433 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:02 np0005486808 systemd[1]: var-lib-containers-storage-overlay-c39da517717033c9b396b95d8fb02eeec0da300b5892765df2772d051a872d46-merged.mount: Deactivated successfully.
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.440 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432342.403573, 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.440 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] VM Paused (Lifecycle Event)#033[00m
Oct 14 04:59:02 np0005486808 podman[295217]: 2025-10-14 08:59:02.453267344 +0000 UTC m=+0.067452858 container remove 71392b55835803ada86e5530f29d34cca73293991592a7a821173a7f09afdf71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:59:02 np0005486808 systemd[1]: libpod-conmon-71392b55835803ada86e5530f29d34cca73293991592a7a821173a7f09afdf71.scope: Deactivated successfully.
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.467 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.474 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.496 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:59:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.542 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432342.541677, eb820455-d45c-4331-9363-124f11537f52 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.542 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: eb820455-d45c-4331-9363-124f11537f52] VM Started (Lifecycle Event)#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.544 2 DEBUG nova.compute.manager [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.552 2 DEBUG nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.560 2 INFO nova.virt.libvirt.driver [-] [instance: eb820455-d45c-4331-9363-124f11537f52] Instance spawned successfully.#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.560 2 DEBUG nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.567 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: eb820455-d45c-4331-9363-124f11537f52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.572 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: eb820455-d45c-4331-9363-124f11537f52] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.582 2 DEBUG nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.582 2 DEBUG nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.583 2 DEBUG nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.583 2 DEBUG nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.584 2 DEBUG nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.584 2 DEBUG nova.virt.libvirt.driver [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.618 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: eb820455-d45c-4331-9363-124f11537f52] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.618 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432342.5417562, eb820455-d45c-4331-9363-124f11537f52 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.618 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: eb820455-d45c-4331-9363-124f11537f52] VM Paused (Lifecycle Event)#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.645 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: eb820455-d45c-4331-9363-124f11537f52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.648 2 DEBUG nova.compute.manager [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.648 2 DEBUG nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.656 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432342.546942, eb820455-d45c-4331-9363-124f11537f52 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.656 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: eb820455-d45c-4331-9363-124f11537f52] VM Resumed (Lifecycle Event)#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.661 2 INFO nova.virt.libvirt.driver [-] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Instance spawned successfully.#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.661 2 DEBUG nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 04:59:02 np0005486808 podman[295255]: 2025-10-14 08:59:02.666666444 +0000 UTC m=+0.059453251 container create 0fb67794b0f1e97e0b1446786810f80be6bc764e0150b19a6964e9a1199223fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 14 04:59:02 np0005486808 systemd[1]: Started libpod-conmon-0fb67794b0f1e97e0b1446786810f80be6bc764e0150b19a6964e9a1199223fe.scope.
Oct 14 04:59:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:59:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.720 2 INFO nova.compute.manager [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Took 9.32 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.720 2 DEBUG nova.compute.manager [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.725 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: eb820455-d45c-4331-9363-124f11537f52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:02 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.733 2 DEBUG nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.733 2 DEBUG nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.734 2 DEBUG nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.734 2 DEBUG nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.735 2 DEBUG nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.735 2 DEBUG nova.virt.libvirt.driver [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:02 np0005486808 podman[295255]: 2025-10-14 08:59:02.640910212 +0000 UTC m=+0.033697049 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:59:02 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf4f73a84c044caea4fd8237703057b294687d57311c5d195f65fb37aa6b7615/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:59:02 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf4f73a84c044caea4fd8237703057b294687d57311c5d195f65fb37aa6b7615/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:59:02 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf4f73a84c044caea4fd8237703057b294687d57311c5d195f65fb37aa6b7615/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:59:02 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf4f73a84c044caea4fd8237703057b294687d57311c5d195f65fb37aa6b7615/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:59:02 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf4f73a84c044caea4fd8237703057b294687d57311c5d195f65fb37aa6b7615/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 04:59:02 np0005486808 podman[295255]: 2025-10-14 08:59:02.755778712 +0000 UTC m=+0.148565559 container init 0fb67794b0f1e97e0b1446786810f80be6bc764e0150b19a6964e9a1199223fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_faraday, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:59:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:59:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:59:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:59:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:59:02 np0005486808 podman[295255]: 2025-10-14 08:59:02.763576644 +0000 UTC m=+0.156363461 container start 0fb67794b0f1e97e0b1446786810f80be6bc764e0150b19a6964e9a1199223fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_faraday, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 14 04:59:02 np0005486808 podman[295255]: 2025-10-14 08:59:02.766933326 +0000 UTC m=+0.159720163 container attach 0fb67794b0f1e97e0b1446786810f80be6bc764e0150b19a6964e9a1199223fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.796 2 DEBUG nova.compute.manager [req-842de978-abe3-4238-9891-3c2e6c6e7a91 req-c05965b9-47bc-457f-be04-9a5cce30387e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Received event network-vif-plugged-2da46865-98ea-42a7-a5cc-44b5bef36a3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.797 2 DEBUG oslo_concurrency.lockutils [req-842de978-abe3-4238-9891-3c2e6c6e7a91 req-c05965b9-47bc-457f-be04-9a5cce30387e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.797 2 DEBUG oslo_concurrency.lockutils [req-842de978-abe3-4238-9891-3c2e6c6e7a91 req-c05965b9-47bc-457f-be04-9a5cce30387e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.797 2 DEBUG oslo_concurrency.lockutils [req-842de978-abe3-4238-9891-3c2e6c6e7a91 req-c05965b9-47bc-457f-be04-9a5cce30387e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.797 2 DEBUG nova.compute.manager [req-842de978-abe3-4238-9891-3c2e6c6e7a91 req-c05965b9-47bc-457f-be04-9a5cce30387e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Processing event network-vif-plugged-2da46865-98ea-42a7-a5cc-44b5bef36a3d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.797 2 DEBUG nova.compute.manager [req-842de978-abe3-4238-9891-3c2e6c6e7a91 req-c05965b9-47bc-457f-be04-9a5cce30387e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Received event network-vif-plugged-2da46865-98ea-42a7-a5cc-44b5bef36a3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.798 2 DEBUG oslo_concurrency.lockutils [req-842de978-abe3-4238-9891-3c2e6c6e7a91 req-c05965b9-47bc-457f-be04-9a5cce30387e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.798 2 DEBUG oslo_concurrency.lockutils [req-842de978-abe3-4238-9891-3c2e6c6e7a91 req-c05965b9-47bc-457f-be04-9a5cce30387e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.798 2 DEBUG oslo_concurrency.lockutils [req-842de978-abe3-4238-9891-3c2e6c6e7a91 req-c05965b9-47bc-457f-be04-9a5cce30387e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.798 2 DEBUG nova.compute.manager [req-842de978-abe3-4238-9891-3c2e6c6e7a91 req-c05965b9-47bc-457f-be04-9a5cce30387e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] No waiting events found dispatching network-vif-plugged-2da46865-98ea-42a7-a5cc-44b5bef36a3d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.798 2 WARNING nova.compute.manager [req-842de978-abe3-4238-9891-3c2e6c6e7a91 req-c05965b9-47bc-457f-be04-9a5cce30387e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Received unexpected event network-vif-plugged-2da46865-98ea-42a7-a5cc-44b5bef36a3d for instance with vm_state building and task_state spawning.#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.802 2 DEBUG nova.compute.manager [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.803 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: eb820455-d45c-4331-9363-124f11537f52] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:59:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:59:02 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3505099324' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.805 2 INFO nova.compute.manager [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Took 10.51 seconds to build instance.#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.807 2 DEBUG nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.812 2 DEBUG nova.compute.manager [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.814 2 INFO nova.virt.libvirt.driver [-] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Instance spawned successfully.#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.814 2 DEBUG nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.828 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for 51c76e0f-284d-4122-83b4-32c4518b9056 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.828 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432342.646253, 51c76e0f-284d-4122-83b4-32c4518b9056 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.828 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] VM Resumed (Lifecycle Event)#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.848 2 DEBUG oslo_concurrency.lockutils [None req-95b78d7a-5ba9-4347-a8d4-831fe01db525 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "eb820455-d45c-4331-9363-124f11537f52" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.852 2 DEBUG nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.853 2 DEBUG nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.853 2 DEBUG nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.854 2 DEBUG nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.854 2 DEBUG nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.855 2 DEBUG nova.virt.libvirt.driver [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.861 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.864 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.872 2 DEBUG oslo_concurrency.processutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.939 2 DEBUG nova.storage.rbd_utils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] rbd image 654413e6-01cd-4e54-a271-6b515a8561e6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:59:02 np0005486808 nova_compute[259627]: 2025-10-14 08:59:02.964 2 DEBUG oslo_concurrency.processutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.016 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432342.64632, 51c76e0f-284d-4122-83b4-32c4518b9056 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.016 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] VM Started (Lifecycle Event)#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.024 2 INFO nova.compute.manager [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Took 7.45 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.024 2 DEBUG nova.compute.manager [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.028 2 DEBUG oslo_concurrency.lockutils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.030 2 DEBUG oslo_concurrency.lockutils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.030 2 DEBUG nova.objects.instance [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.061 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.098 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.156 2 DEBUG oslo_concurrency.lockutils [None req-0f45cfa2-6951-4e62-bd2a-5a273b8baab9 a58356102afc42ad8aa76978f3780a8d e9c558bc15be432f98b055b225999d31 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.169 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432342.80423, 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.169 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] VM Resumed (Lifecycle Event)#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.210 2 INFO nova.compute.manager [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Took 8.88 seconds to build instance.#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.219 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.230 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.243 2 DEBUG oslo_concurrency.lockutils [None req-072578a6-1cab-4b26-a1c2-f8d88282c8d0 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.980s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:59:03 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/794689739' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.557 2 DEBUG oslo_concurrency.processutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.558 2 DEBUG nova.virt.libvirt.vif [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:58:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1217516639',display_name='tempest-ServersTestMultiNic-server-1217516639',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1217516639',id=28,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3566f35c659a45bd9b9bbddf6552ed43',ramdisk_id='',reservation_id='r-1ec16n6w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-840673976',owner_user_name='tempest-ServersTestMultiNic-840673976-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:58:52Z,user_data=None,user_id='5aacb60ad29c418c9161e71bb72da036',uuid=654413e6-01cd-4e54-a271-6b515a8561e6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "902a062a-858b-4495-936b-47a675567467", "address": "fa:16:3e:99:b0:d5", "network": {"id": "b65b17d8-22d8-41b1-aa72-fd93aefdff30", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1724838337", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.125", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap902a062a-85", "ovs_interfaceid": "902a062a-858b-4495-936b-47a675567467", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.559 2 DEBUG nova.network.os_vif_util [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converting VIF {"id": "902a062a-858b-4495-936b-47a675567467", "address": "fa:16:3e:99:b0:d5", "network": {"id": "b65b17d8-22d8-41b1-aa72-fd93aefdff30", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1724838337", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.125", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap902a062a-85", "ovs_interfaceid": "902a062a-858b-4495-936b-47a675567467", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.559 2 DEBUG nova.network.os_vif_util [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:b0:d5,bridge_name='br-int',has_traffic_filtering=True,id=902a062a-858b-4495-936b-47a675567467,network=Network(b65b17d8-22d8-41b1-aa72-fd93aefdff30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap902a062a-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:59:03 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1227: 305 pgs: 305 active+clean; 319 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 11 MiB/s wr, 309 op/s
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.562 2 DEBUG nova.virt.libvirt.vif [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:58:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1217516639',display_name='tempest-ServersTestMultiNic-server-1217516639',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1217516639',id=28,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3566f35c659a45bd9b9bbddf6552ed43',ramdisk_id='',reservation_id='r-1ec16n6w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-840673976',owner_user_name='tempest-ServersTestMultiNic-840673976-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:58:52Z,user_data=None,user_id='5aacb60ad29c418c9161e71bb72da036',uuid=654413e6-01cd-4e54-a271-6b515a8561e6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fa5f1925-a535-45ee-b96e-f79c725d7960", "address": "fa:16:3e:9d:59:d2", "network": {"id": "7b0bcfe7-c41f-42d1-9739-3dab148a181f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1841633563", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa5f1925-a5", "ovs_interfaceid": "fa5f1925-a535-45ee-b96e-f79c725d7960", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.565 2 DEBUG nova.network.os_vif_util [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converting VIF {"id": "fa5f1925-a535-45ee-b96e-f79c725d7960", "address": "fa:16:3e:9d:59:d2", "network": {"id": "7b0bcfe7-c41f-42d1-9739-3dab148a181f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1841633563", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa5f1925-a5", "ovs_interfaceid": "fa5f1925-a535-45ee-b96e-f79c725d7960", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.567 2 DEBUG nova.network.os_vif_util [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:59:d2,bridge_name='br-int',has_traffic_filtering=True,id=fa5f1925-a535-45ee-b96e-f79c725d7960,network=Network(7b0bcfe7-c41f-42d1-9739-3dab148a181f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa5f1925-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.572 2 DEBUG nova.objects.instance [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lazy-loading 'pci_devices' on Instance uuid 654413e6-01cd-4e54-a271-6b515a8561e6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.597 2 DEBUG nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] End _get_guest_xml xml=<domain type="kvm">
Oct 14 04:59:03 np0005486808 nova_compute[259627]:  <uuid>654413e6-01cd-4e54-a271-6b515a8561e6</uuid>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:  <name>instance-0000001c</name>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 04:59:03 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:      <nova:name>tempest-ServersTestMultiNic-server-1217516639</nova:name>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 08:59:02</nova:creationTime>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 04:59:03 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:        <nova:user uuid="5aacb60ad29c418c9161e71bb72da036">tempest-ServersTestMultiNic-840673976-project-member</nova:user>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:        <nova:project uuid="3566f35c659a45bd9b9bbddf6552ed43">tempest-ServersTestMultiNic-840673976</nova:project>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:        <nova:port uuid="902a062a-858b-4495-936b-47a675567467">
Oct 14 04:59:03 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.125" ipVersion="4"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:        <nova:port uuid="fa5f1925-a535-45ee-b96e-f79c725d7960">
Oct 14 04:59:03 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.1.203" ipVersion="4"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <system>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:      <entry name="serial">654413e6-01cd-4e54-a271-6b515a8561e6</entry>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:      <entry name="uuid">654413e6-01cd-4e54-a271-6b515a8561e6</entry>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    </system>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:  <os>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:  </os>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:  <features>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:  </features>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:  </clock>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:  <devices>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 04:59:03 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/654413e6-01cd-4e54-a271-6b515a8561e6_disk">
Oct 14 04:59:03 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:59:03 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 04:59:03 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/654413e6-01cd-4e54-a271-6b515a8561e6_disk.config">
Oct 14 04:59:03 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:59:03 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 04:59:03 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:99:b0:d5"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:      <target dev="tap902a062a-85"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    </interface>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 04:59:03 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:9d:59:d2"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:      <target dev="tapfa5f1925-a5"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    </interface>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 04:59:03 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/654413e6-01cd-4e54-a271-6b515a8561e6/console.log" append="off"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    </serial>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <video>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    </video>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 04:59:03 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    </rng>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 04:59:03 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 04:59:03 np0005486808 nova_compute[259627]:  </devices>
Oct 14 04:59:03 np0005486808 nova_compute[259627]: </domain>
Oct 14 04:59:03 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.601 2 DEBUG nova.compute.manager [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Preparing to wait for external event network-vif-plugged-902a062a-858b-4495-936b-47a675567467 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.602 2 DEBUG oslo_concurrency.lockutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquiring lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.602 2 DEBUG oslo_concurrency.lockutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.602 2 DEBUG oslo_concurrency.lockutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.602 2 DEBUG nova.compute.manager [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Preparing to wait for external event network-vif-plugged-fa5f1925-a535-45ee-b96e-f79c725d7960 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.623 2 DEBUG oslo_concurrency.lockutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquiring lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.623 2 DEBUG oslo_concurrency.lockutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.624 2 DEBUG oslo_concurrency.lockutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.624 2 DEBUG nova.virt.libvirt.vif [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:58:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1217516639',display_name='tempest-ServersTestMultiNic-server-1217516639',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1217516639',id=28,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3566f35c659a45bd9b9bbddf6552ed43',ramdisk_id='',reservation_id='r-1ec16n6w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-840673976',owner_user_name='tempest-ServersTestMultiNic-840673976-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:58:52Z,user_data=None,user_id='5aacb60ad29c418c9161e71bb72da036',uuid=654413e6-01cd-4e54-a271-6b515a8561e6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "902a062a-858b-4495-936b-47a675567467", "address": "fa:16:3e:99:b0:d5", "network": {"id": "b65b17d8-22d8-41b1-aa72-fd93aefdff30", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1724838337", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.125", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap902a062a-85", "ovs_interfaceid": "902a062a-858b-4495-936b-47a675567467", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.624 2 DEBUG nova.network.os_vif_util [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converting VIF {"id": "902a062a-858b-4495-936b-47a675567467", "address": "fa:16:3e:99:b0:d5", "network": {"id": "b65b17d8-22d8-41b1-aa72-fd93aefdff30", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1724838337", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.125", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap902a062a-85", "ovs_interfaceid": "902a062a-858b-4495-936b-47a675567467", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.625 2 DEBUG nova.network.os_vif_util [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:b0:d5,bridge_name='br-int',has_traffic_filtering=True,id=902a062a-858b-4495-936b-47a675567467,network=Network(b65b17d8-22d8-41b1-aa72-fd93aefdff30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap902a062a-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.625 2 DEBUG os_vif [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:b0:d5,bridge_name='br-int',has_traffic_filtering=True,id=902a062a-858b-4495-936b-47a675567467,network=Network(b65b17d8-22d8-41b1-aa72-fd93aefdff30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap902a062a-85') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.626 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.627 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.631 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap902a062a-85, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.632 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap902a062a-85, col_values=(('external_ids', {'iface-id': '902a062a-858b-4495-936b-47a675567467', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:99:b0:d5', 'vm-uuid': '654413e6-01cd-4e54-a271-6b515a8561e6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:03 np0005486808 NetworkManager[44885]: <info>  [1760432343.6343] manager: (tap902a062a-85): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/109)
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.645 2 INFO os_vif [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:b0:d5,bridge_name='br-int',has_traffic_filtering=True,id=902a062a-858b-4495-936b-47a675567467,network=Network(b65b17d8-22d8-41b1-aa72-fd93aefdff30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap902a062a-85')#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.646 2 DEBUG nova.virt.libvirt.vif [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:58:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1217516639',display_name='tempest-ServersTestMultiNic-server-1217516639',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1217516639',id=28,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3566f35c659a45bd9b9bbddf6552ed43',ramdisk_id='',reservation_id='r-1ec16n6w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-840673976',owner_user_name='tempest-ServersTestMultiNic-840673976-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:58:52Z,user_data=None,user_id='5aacb60ad29c418c9161e71bb72da036',uuid=654413e6-01cd-4e54-a271-6b515a8561e6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fa5f1925-a535-45ee-b96e-f79c725d7960", "address": "fa:16:3e:9d:59:d2", "network": {"id": "7b0bcfe7-c41f-42d1-9739-3dab148a181f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1841633563", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa5f1925-a5", "ovs_interfaceid": "fa5f1925-a535-45ee-b96e-f79c725d7960", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.646 2 DEBUG nova.network.os_vif_util [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converting VIF {"id": "fa5f1925-a535-45ee-b96e-f79c725d7960", "address": "fa:16:3e:9d:59:d2", "network": {"id": "7b0bcfe7-c41f-42d1-9739-3dab148a181f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1841633563", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa5f1925-a5", "ovs_interfaceid": "fa5f1925-a535-45ee-b96e-f79c725d7960", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.647 2 DEBUG nova.network.os_vif_util [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:59:d2,bridge_name='br-int',has_traffic_filtering=True,id=fa5f1925-a535-45ee-b96e-f79c725d7960,network=Network(7b0bcfe7-c41f-42d1-9739-3dab148a181f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa5f1925-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.647 2 DEBUG os_vif [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:59:d2,bridge_name='br-int',has_traffic_filtering=True,id=fa5f1925-a535-45ee-b96e-f79c725d7960,network=Network(7b0bcfe7-c41f-42d1-9739-3dab148a181f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa5f1925-a5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.647 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.648 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.650 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa5f1925-a5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.651 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfa5f1925-a5, col_values=(('external_ids', {'iface-id': 'fa5f1925-a535-45ee-b96e-f79c725d7960', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:59:d2', 'vm-uuid': '654413e6-01cd-4e54-a271-6b515a8561e6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:03 np0005486808 NetworkManager[44885]: <info>  [1760432343.6526] manager: (tapfa5f1925-a5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/110)
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.666 2 INFO os_vif [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:59:d2,bridge_name='br-int',has_traffic_filtering=True,id=fa5f1925-a535-45ee-b96e-f79c725d7960,network=Network(7b0bcfe7-c41f-42d1-9739-3dab148a181f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa5f1925-a5')#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.739 2 DEBUG nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.740 2 DEBUG nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.740 2 DEBUG nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] No VIF found with MAC fa:16:3e:99:b0:d5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.740 2 DEBUG nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] No VIF found with MAC fa:16:3e:9d:59:d2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.740 2 INFO nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Using config drive#033[00m
Oct 14 04:59:03 np0005486808 nova_compute[259627]: 2025-10-14 08:59:03.844 2 DEBUG nova.storage.rbd_utils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] rbd image 654413e6-01cd-4e54-a271-6b515a8561e6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:59:03 np0005486808 sleepy_faraday[295271]: --> passed data devices: 0 physical, 3 LVM
Oct 14 04:59:03 np0005486808 sleepy_faraday[295271]: --> relative data size: 1.0
Oct 14 04:59:03 np0005486808 sleepy_faraday[295271]: --> All data devices are unavailable
Oct 14 04:59:04 np0005486808 systemd[1]: libpod-0fb67794b0f1e97e0b1446786810f80be6bc764e0150b19a6964e9a1199223fe.scope: Deactivated successfully.
Oct 14 04:59:04 np0005486808 systemd[1]: libpod-0fb67794b0f1e97e0b1446786810f80be6bc764e0150b19a6964e9a1199223fe.scope: Consumed 1.025s CPU time.
Oct 14 04:59:04 np0005486808 conmon[295271]: conmon 0fb67794b0f1e97e0b14 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0fb67794b0f1e97e0b1446786810f80be6bc764e0150b19a6964e9a1199223fe.scope/container/memory.events
Oct 14 04:59:04 np0005486808 podman[295255]: 2025-10-14 08:59:04.01650615 +0000 UTC m=+1.409292967 container died 0fb67794b0f1e97e0b1446786810f80be6bc764e0150b19a6964e9a1199223fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_faraday, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:59:04 np0005486808 systemd[1]: var-lib-containers-storage-overlay-bf4f73a84c044caea4fd8237703057b294687d57311c5d195f65fb37aa6b7615-merged.mount: Deactivated successfully.
Oct 14 04:59:04 np0005486808 nova_compute[259627]: 2025-10-14 08:59:04.088 2 DEBUG oslo_concurrency.lockutils [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Acquiring lock "51c76e0f-284d-4122-83b4-32c4518b9056" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:04 np0005486808 nova_compute[259627]: 2025-10-14 08:59:04.088 2 DEBUG oslo_concurrency.lockutils [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lock "51c76e0f-284d-4122-83b4-32c4518b9056" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:04 np0005486808 nova_compute[259627]: 2025-10-14 08:59:04.089 2 DEBUG oslo_concurrency.lockutils [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Acquiring lock "51c76e0f-284d-4122-83b4-32c4518b9056-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:04 np0005486808 nova_compute[259627]: 2025-10-14 08:59:04.089 2 DEBUG oslo_concurrency.lockutils [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lock "51c76e0f-284d-4122-83b4-32c4518b9056-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:04 np0005486808 nova_compute[259627]: 2025-10-14 08:59:04.089 2 DEBUG oslo_concurrency.lockutils [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lock "51c76e0f-284d-4122-83b4-32c4518b9056-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:04 np0005486808 nova_compute[259627]: 2025-10-14 08:59:04.090 2 INFO nova.compute.manager [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Terminating instance#033[00m
Oct 14 04:59:04 np0005486808 nova_compute[259627]: 2025-10-14 08:59:04.090 2 DEBUG oslo_concurrency.lockutils [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Acquiring lock "refresh_cache-51c76e0f-284d-4122-83b4-32c4518b9056" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:59:04 np0005486808 nova_compute[259627]: 2025-10-14 08:59:04.091 2 DEBUG oslo_concurrency.lockutils [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Acquired lock "refresh_cache-51c76e0f-284d-4122-83b4-32c4518b9056" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:59:04 np0005486808 nova_compute[259627]: 2025-10-14 08:59:04.091 2 DEBUG nova.network.neutron [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 04:59:04 np0005486808 podman[295255]: 2025-10-14 08:59:04.09226068 +0000 UTC m=+1.485047487 container remove 0fb67794b0f1e97e0b1446786810f80be6bc764e0150b19a6964e9a1199223fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 14 04:59:04 np0005486808 systemd[1]: libpod-conmon-0fb67794b0f1e97e0b1446786810f80be6bc764e0150b19a6964e9a1199223fe.scope: Deactivated successfully.
Oct 14 04:59:04 np0005486808 nova_compute[259627]: 2025-10-14 08:59:04.161 2 DEBUG nova.network.neutron [req-c1f41de1-d690-485a-9d7f-90d5a70768fb req-942170bb-ca1f-4d0c-a928-a03a8032539a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Updated VIF entry in instance network info cache for port 902a062a-858b-4495-936b-47a675567467. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 04:59:04 np0005486808 nova_compute[259627]: 2025-10-14 08:59:04.161 2 DEBUG nova.network.neutron [req-c1f41de1-d690-485a-9d7f-90d5a70768fb req-942170bb-ca1f-4d0c-a928-a03a8032539a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Updating instance_info_cache with network_info: [{"id": "902a062a-858b-4495-936b-47a675567467", "address": "fa:16:3e:99:b0:d5", "network": {"id": "b65b17d8-22d8-41b1-aa72-fd93aefdff30", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1724838337", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.125", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap902a062a-85", "ovs_interfaceid": "902a062a-858b-4495-936b-47a675567467", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fa5f1925-a535-45ee-b96e-f79c725d7960", "address": "fa:16:3e:9d:59:d2", "network": {"id": "7b0bcfe7-c41f-42d1-9739-3dab148a181f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1841633563", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa5f1925-a5", "ovs_interfaceid": "fa5f1925-a535-45ee-b96e-f79c725d7960", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:59:04 np0005486808 nova_compute[259627]: 2025-10-14 08:59:04.182 2 DEBUG oslo_concurrency.lockutils [req-c1f41de1-d690-485a-9d7f-90d5a70768fb req-942170bb-ca1f-4d0c-a928-a03a8032539a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-654413e6-01cd-4e54-a271-6b515a8561e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:59:04 np0005486808 nova_compute[259627]: 2025-10-14 08:59:04.182 2 DEBUG nova.compute.manager [req-c1f41de1-d690-485a-9d7f-90d5a70768fb req-942170bb-ca1f-4d0c-a928-a03a8032539a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Received event network-changed-fa5f1925-a535-45ee-b96e-f79c725d7960 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:59:04 np0005486808 nova_compute[259627]: 2025-10-14 08:59:04.183 2 DEBUG nova.compute.manager [req-c1f41de1-d690-485a-9d7f-90d5a70768fb req-942170bb-ca1f-4d0c-a928-a03a8032539a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Refreshing instance network info cache due to event network-changed-fa5f1925-a535-45ee-b96e-f79c725d7960. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 04:59:04 np0005486808 nova_compute[259627]: 2025-10-14 08:59:04.183 2 DEBUG oslo_concurrency.lockutils [req-c1f41de1-d690-485a-9d7f-90d5a70768fb req-942170bb-ca1f-4d0c-a928-a03a8032539a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-654413e6-01cd-4e54-a271-6b515a8561e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:59:04 np0005486808 nova_compute[259627]: 2025-10-14 08:59:04.183 2 DEBUG oslo_concurrency.lockutils [req-c1f41de1-d690-485a-9d7f-90d5a70768fb req-942170bb-ca1f-4d0c-a928-a03a8032539a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-654413e6-01cd-4e54-a271-6b515a8561e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:59:04 np0005486808 nova_compute[259627]: 2025-10-14 08:59:04.183 2 DEBUG nova.network.neutron [req-c1f41de1-d690-485a-9d7f-90d5a70768fb req-942170bb-ca1f-4d0c-a928-a03a8032539a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Refreshing network info cache for port fa5f1925-a535-45ee-b96e-f79c725d7960 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 04:59:04 np0005486808 nova_compute[259627]: 2025-10-14 08:59:04.319 2 DEBUG nova.network.neutron [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 04:59:04 np0005486808 nova_compute[259627]: 2025-10-14 08:59:04.493 2 INFO nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Creating config drive at /var/lib/nova/instances/654413e6-01cd-4e54-a271-6b515a8561e6/disk.config#033[00m
Oct 14 04:59:04 np0005486808 nova_compute[259627]: 2025-10-14 08:59:04.498 2 DEBUG oslo_concurrency.processutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/654413e6-01cd-4e54-a271-6b515a8561e6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp050kbihl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:04 np0005486808 nova_compute[259627]: 2025-10-14 08:59:04.634 2 DEBUG oslo_concurrency.processutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/654413e6-01cd-4e54-a271-6b515a8561e6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp050kbihl" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:04 np0005486808 nova_compute[259627]: 2025-10-14 08:59:04.659 2 DEBUG nova.storage.rbd_utils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] rbd image 654413e6-01cd-4e54-a271-6b515a8561e6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:59:04 np0005486808 nova_compute[259627]: 2025-10-14 08:59:04.662 2 DEBUG oslo_concurrency.processutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/654413e6-01cd-4e54-a271-6b515a8561e6/disk.config 654413e6-01cd-4e54-a271-6b515a8561e6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:04 np0005486808 podman[295533]: 2025-10-14 08:59:04.788335183 +0000 UTC m=+0.059906622 container create bf2631684d8409ac2f551509b8bff1bce180a8ab633abe358dd55ebaf4535d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_yonath, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:59:04 np0005486808 nova_compute[259627]: 2025-10-14 08:59:04.806 2 DEBUG nova.compute.manager [req-3d553a74-7424-4728-8819-1e7a4ca88bb9 req-f3b73f02-bd26-43f3-ab44-3399cfed09e7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Received event network-vif-plugged-acc7c80f-8812-4bbf-93f8-cc3f1556b62a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:59:04 np0005486808 nova_compute[259627]: 2025-10-14 08:59:04.806 2 DEBUG oslo_concurrency.lockutils [req-3d553a74-7424-4728-8819-1e7a4ca88bb9 req-f3b73f02-bd26-43f3-ab44-3399cfed09e7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "eb820455-d45c-4331-9363-124f11537f52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:04 np0005486808 nova_compute[259627]: 2025-10-14 08:59:04.806 2 DEBUG oslo_concurrency.lockutils [req-3d553a74-7424-4728-8819-1e7a4ca88bb9 req-f3b73f02-bd26-43f3-ab44-3399cfed09e7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "eb820455-d45c-4331-9363-124f11537f52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:04 np0005486808 nova_compute[259627]: 2025-10-14 08:59:04.809 2 DEBUG oslo_concurrency.lockutils [req-3d553a74-7424-4728-8819-1e7a4ca88bb9 req-f3b73f02-bd26-43f3-ab44-3399cfed09e7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "eb820455-d45c-4331-9363-124f11537f52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:04 np0005486808 nova_compute[259627]: 2025-10-14 08:59:04.809 2 DEBUG nova.compute.manager [req-3d553a74-7424-4728-8819-1e7a4ca88bb9 req-f3b73f02-bd26-43f3-ab44-3399cfed09e7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] No waiting events found dispatching network-vif-plugged-acc7c80f-8812-4bbf-93f8-cc3f1556b62a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:59:04 np0005486808 nova_compute[259627]: 2025-10-14 08:59:04.809 2 WARNING nova.compute.manager [req-3d553a74-7424-4728-8819-1e7a4ca88bb9 req-f3b73f02-bd26-43f3-ab44-3399cfed09e7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Received unexpected event network-vif-plugged-acc7c80f-8812-4bbf-93f8-cc3f1556b62a for instance with vm_state active and task_state None.#033[00m
Oct 14 04:59:04 np0005486808 systemd[1]: Started libpod-conmon-bf2631684d8409ac2f551509b8bff1bce180a8ab633abe358dd55ebaf4535d7e.scope.
Oct 14 04:59:04 np0005486808 nova_compute[259627]: 2025-10-14 08:59:04.857 2 DEBUG oslo_concurrency.processutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/654413e6-01cd-4e54-a271-6b515a8561e6/disk.config 654413e6-01cd-4e54-a271-6b515a8561e6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:04 np0005486808 nova_compute[259627]: 2025-10-14 08:59:04.859 2 INFO nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Deleting local config drive /var/lib/nova/instances/654413e6-01cd-4e54-a271-6b515a8561e6/disk.config because it was imported into RBD.#033[00m
Oct 14 04:59:04 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:59:04 np0005486808 podman[295533]: 2025-10-14 08:59:04.76703754 +0000 UTC m=+0.038608999 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:59:04 np0005486808 podman[295533]: 2025-10-14 08:59:04.879636665 +0000 UTC m=+0.151208124 container init bf2631684d8409ac2f551509b8bff1bce180a8ab633abe358dd55ebaf4535d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:59:04 np0005486808 podman[295533]: 2025-10-14 08:59:04.888760489 +0000 UTC m=+0.160331928 container start bf2631684d8409ac2f551509b8bff1bce180a8ab633abe358dd55ebaf4535d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:59:04 np0005486808 podman[295533]: 2025-10-14 08:59:04.893907776 +0000 UTC m=+0.165479235 container attach bf2631684d8409ac2f551509b8bff1bce180a8ab633abe358dd55ebaf4535d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_yonath, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 04:59:04 np0005486808 nifty_yonath[295565]: 167 167
Oct 14 04:59:04 np0005486808 systemd[1]: libpod-bf2631684d8409ac2f551509b8bff1bce180a8ab633abe358dd55ebaf4535d7e.scope: Deactivated successfully.
Oct 14 04:59:04 np0005486808 podman[295533]: 2025-10-14 08:59:04.899239237 +0000 UTC m=+0.170810676 container died bf2631684d8409ac2f551509b8bff1bce180a8ab633abe358dd55ebaf4535d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_yonath, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 14 04:59:04 np0005486808 kernel: tap902a062a-85: entered promiscuous mode
Oct 14 04:59:04 np0005486808 NetworkManager[44885]: <info>  [1760432344.9260] manager: (tap902a062a-85): new Tun device (/org/freedesktop/NetworkManager/Devices/111)
Oct 14 04:59:04 np0005486808 systemd[1]: var-lib-containers-storage-overlay-d0b802cc55b58cbfe12901c139270c1a45029af642c1540c9ddc72bcecd43b56-merged.mount: Deactivated successfully.
Oct 14 04:59:04 np0005486808 podman[295533]: 2025-10-14 08:59:04.958400589 +0000 UTC m=+0.229972028 container remove bf2631684d8409ac2f551509b8bff1bce180a8ab633abe358dd55ebaf4535d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 14 04:59:04 np0005486808 nova_compute[259627]: 2025-10-14 08:59:04.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:04 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:04Z|00205|binding|INFO|Claiming lport 902a062a-858b-4495-936b-47a675567467 for this chassis.
Oct 14 04:59:04 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:04Z|00206|binding|INFO|902a062a-858b-4495-936b-47a675567467: Claiming fa:16:3e:99:b0:d5 10.100.0.125
Oct 14 04:59:04 np0005486808 NetworkManager[44885]: <info>  [1760432344.9646] manager: (tapfa5f1925-a5): new Tun device (/org/freedesktop/NetworkManager/Devices/112)
Oct 14 04:59:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:04.969 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:b0:d5 10.100.0.125'], port_security=['fa:16:3e:99:b0:d5 10.100.0.125'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.125/24', 'neutron:device_id': '654413e6-01cd-4e54-a271-6b515a8561e6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b65b17d8-22d8-41b1-aa72-fd93aefdff30', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3566f35c659a45bd9b9bbddf6552ed43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c2a10e31-1c91-4c44-89d5-dfe41c81ad3c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ba2a7d22-c618-4486-a804-fe221f5826d8, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=902a062a-858b-4495-936b-47a675567467) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:59:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:04.970 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 902a062a-858b-4495-936b-47a675567467 in datapath b65b17d8-22d8-41b1-aa72-fd93aefdff30 bound to our chassis#033[00m
Oct 14 04:59:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:04.972 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b65b17d8-22d8-41b1-aa72-fd93aefdff30#033[00m
Oct 14 04:59:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:04.985 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ee1bc40b-2223-4e11-b02b-3006f612455b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:04.986 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb65b17d8-21 in ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 04:59:04 np0005486808 systemd-udevd[295597]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 04:59:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:04.990 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb65b17d8-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 04:59:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:04.990 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7c3de697-7650-451e-a81a-3dac4a9a40b9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:04.992 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b24e65be-1a4b-46bc-ae65-af77185808e0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:04 np0005486808 systemd-udevd[295600]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 04:59:05 np0005486808 NetworkManager[44885]: <info>  [1760432345.0027] device (tap902a062a-85): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 04:59:05 np0005486808 NetworkManager[44885]: <info>  [1760432345.0036] device (tap902a062a-85): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 04:59:05 np0005486808 systemd-machined[214636]: New machine qemu-33-instance-0000001c.
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:05.017 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[8ad6c9b1-d1b6-4464-a424-ea231b8477a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:05 np0005486808 NetworkManager[44885]: <info>  [1760432345.0196] device (tapfa5f1925-a5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 04:59:05 np0005486808 kernel: tapfa5f1925-a5: entered promiscuous mode
Oct 14 04:59:05 np0005486808 NetworkManager[44885]: <info>  [1760432345.0209] device (tapfa5f1925-a5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 04:59:05 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:05Z|00207|binding|INFO|Claiming lport fa5f1925-a535-45ee-b96e-f79c725d7960 for this chassis.
Oct 14 04:59:05 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:05Z|00208|binding|INFO|fa5f1925-a535-45ee-b96e-f79c725d7960: Claiming fa:16:3e:9d:59:d2 10.100.1.203
Oct 14 04:59:05 np0005486808 nova_compute[259627]: 2025-10-14 08:59:05.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:05 np0005486808 systemd[1]: Started Virtual Machine qemu-33-instance-0000001c.
Oct 14 04:59:05 np0005486808 systemd[1]: libpod-conmon-bf2631684d8409ac2f551509b8bff1bce180a8ab633abe358dd55ebaf4535d7e.scope: Deactivated successfully.
Oct 14 04:59:05 np0005486808 nova_compute[259627]: 2025-10-14 08:59:05.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:05 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:05Z|00209|binding|INFO|Setting lport 902a062a-858b-4495-936b-47a675567467 ovn-installed in OVS
Oct 14 04:59:05 np0005486808 nova_compute[259627]: 2025-10-14 08:59:05.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:05 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:05Z|00210|binding|INFO|Setting lport 902a062a-858b-4495-936b-47a675567467 up in Southbound
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:05.040 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:59:d2 10.100.1.203'], port_security=['fa:16:3e:9d:59:d2 10.100.1.203'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.203/24', 'neutron:device_id': '654413e6-01cd-4e54-a271-6b515a8561e6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b0bcfe7-c41f-42d1-9739-3dab148a181f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3566f35c659a45bd9b9bbddf6552ed43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c2a10e31-1c91-4c44-89d5-dfe41c81ad3c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26d3dea5-d3a1-43cc-a801-df7cba99d5e2, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=fa5f1925-a535-45ee-b96e-f79c725d7960) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:05.048 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a4ba57cc-97df-4191-b376-596c617b221b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:05 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:05Z|00211|binding|INFO|Setting lport fa5f1925-a535-45ee-b96e-f79c725d7960 ovn-installed in OVS
Oct 14 04:59:05 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:05Z|00212|binding|INFO|Setting lport fa5f1925-a535-45ee-b96e-f79c725d7960 up in Southbound
Oct 14 04:59:05 np0005486808 nova_compute[259627]: 2025-10-14 08:59:05.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:05.084 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[6ab9c019-be07-4389-b46d-28c928ad0e12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:05 np0005486808 NetworkManager[44885]: <info>  [1760432345.0962] manager: (tapb65b17d8-20): new Veth device (/org/freedesktop/NetworkManager/Devices/113)
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:05.095 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0c8f0134-1d02-4027-8105-169b6f1c3113]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:05 np0005486808 systemd-udevd[295604]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:05.146 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[9e5690b5-d5a9-4db3-a613-b07532f0d5ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:05.149 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a489eb21-0b7e-42ee-bb43-c77d8f270d22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:05 np0005486808 NetworkManager[44885]: <info>  [1760432345.1924] device (tapb65b17d8-20): carrier: link connected
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:05.202 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ea9b8f79-2c85-4f1b-8036-a55d26fd4755]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:05 np0005486808 podman[295624]: 2025-10-14 08:59:05.209042854 +0000 UTC m=+0.075289430 container create ff3296f6e4bc3cd61276ec42eff8487ba7e2908d0361a6b623decb7489db1a0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:05.241 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1e40b419-245a-40b4-b8b3-c2133d7fc0f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb65b17d8-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:78:91'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 612395, 'reachable_time': 39120, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295649, 'error': None, 'target': 'ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:05 np0005486808 systemd[1]: Started libpod-conmon-ff3296f6e4bc3cd61276ec42eff8487ba7e2908d0361a6b623decb7489db1a0a.scope.
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:05.267 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[67dfa044-9007-46ff-9389-356addff73b2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feed:7891'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 612395, 'tstamp': 612395}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295652, 'error': None, 'target': 'ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:05 np0005486808 podman[295624]: 2025-10-14 08:59:05.180450432 +0000 UTC m=+0.046697038 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:59:05 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:59:05 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b01175819c7b1e2eaee5fac569d9c73e78e8e4f6a01bdfbcab1a48d6bced2fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:59:05 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b01175819c7b1e2eaee5fac569d9c73e78e8e4f6a01bdfbcab1a48d6bced2fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:59:05 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b01175819c7b1e2eaee5fac569d9c73e78e8e4f6a01bdfbcab1a48d6bced2fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:59:05 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b01175819c7b1e2eaee5fac569d9c73e78e8e4f6a01bdfbcab1a48d6bced2fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:05.293 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[79288cb1-bb27-46ba-ae0f-7e233eb2843a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb65b17d8-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:78:91'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 612395, 'reachable_time': 39120, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295656, 'error': None, 'target': 'ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:05 np0005486808 podman[295624]: 2025-10-14 08:59:05.316518693 +0000 UTC m=+0.182765299 container init ff3296f6e4bc3cd61276ec42eff8487ba7e2908d0361a6b623decb7489db1a0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_vaughan, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 04:59:05 np0005486808 podman[295624]: 2025-10-14 08:59:05.32613902 +0000 UTC m=+0.192385596 container start ff3296f6e4bc3cd61276ec42eff8487ba7e2908d0361a6b623decb7489db1a0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_vaughan, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 04:59:05 np0005486808 podman[295624]: 2025-10-14 08:59:05.330102567 +0000 UTC m=+0.196349143 container attach ff3296f6e4bc3cd61276ec42eff8487ba7e2908d0361a6b623decb7489db1a0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_vaughan, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:05.338 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8eba1017-77a3-4c47-861a-a301ca2901a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:05.438 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b13c4a14-5b90-47ec-8c13-c8b5ed220a3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:05.441 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb65b17d8-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:05.441 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:05.442 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb65b17d8-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:05 np0005486808 kernel: tapb65b17d8-20: entered promiscuous mode
Oct 14 04:59:05 np0005486808 NetworkManager[44885]: <info>  [1760432345.4761] manager: (tapb65b17d8-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/114)
Oct 14 04:59:05 np0005486808 nova_compute[259627]: 2025-10-14 08:59:05.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:05.488 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb65b17d8-20, col_values=(('external_ids', {'iface-id': 'a9cc546a-2b36-4c1a-bae3-27bbea3be016'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:05 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:05Z|00213|binding|INFO|Releasing lport a9cc546a-2b36-4c1a-bae3-27bbea3be016 from this chassis (sb_readonly=0)
Oct 14 04:59:05 np0005486808 nova_compute[259627]: 2025-10-14 08:59:05.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:05.494 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b65b17d8-22d8-41b1-aa72-fd93aefdff30.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b65b17d8-22d8-41b1-aa72-fd93aefdff30.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:05.508 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3684156a-302b-400b-ae9c-875e5e63594b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:05 np0005486808 nova_compute[259627]: 2025-10-14 08:59:05.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:05.516 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-b65b17d8-22d8-41b1-aa72-fd93aefdff30
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/b65b17d8-22d8-41b1-aa72-fd93aefdff30.pid.haproxy
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID b65b17d8-22d8-41b1-aa72-fd93aefdff30
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 04:59:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:05.517 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30', 'env', 'PROCESS_TAG=haproxy-b65b17d8-22d8-41b1-aa72-fd93aefdff30', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b65b17d8-22d8-41b1-aa72-fd93aefdff30.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 04:59:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 04:59:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1573849996' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 04:59:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 04:59:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1573849996' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 04:59:05 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1228: 305 pgs: 305 active+clean; 321 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 9.8 MiB/s rd, 11 MiB/s wr, 598 op/s
Oct 14 04:59:05 np0005486808 nova_compute[259627]: 2025-10-14 08:59:05.841 2 DEBUG nova.network.neutron [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:59:05 np0005486808 nova_compute[259627]: 2025-10-14 08:59:05.862 2 DEBUG oslo_concurrency.lockutils [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Releasing lock "refresh_cache-51c76e0f-284d-4122-83b4-32c4518b9056" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:59:05 np0005486808 nova_compute[259627]: 2025-10-14 08:59:05.862 2 DEBUG nova.compute.manager [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 04:59:05 np0005486808 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d00000016.scope: Deactivated successfully.
Oct 14 04:59:05 np0005486808 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d00000016.scope: Consumed 3.820s CPU time.
Oct 14 04:59:05 np0005486808 systemd-machined[214636]: Machine qemu-31-instance-00000016 terminated.
Oct 14 04:59:06 np0005486808 podman[295729]: 2025-10-14 08:59:06.042223474 +0000 UTC m=+0.057670667 container create eefa4b1ee19cb0960184e92bdb0afadfe041d2360042b40b2c6602181b36f7a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, tcib_managed=true)
Oct 14 04:59:06 np0005486808 systemd[1]: Started libpod-conmon-eefa4b1ee19cb0960184e92bdb0afadfe041d2360042b40b2c6602181b36f7a8.scope.
Oct 14 04:59:06 np0005486808 nova_compute[259627]: 2025-10-14 08:59:06.089 2 INFO nova.virt.libvirt.driver [-] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Instance destroyed successfully.#033[00m
Oct 14 04:59:06 np0005486808 nova_compute[259627]: 2025-10-14 08:59:06.091 2 DEBUG nova.objects.instance [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lazy-loading 'resources' on Instance uuid 51c76e0f-284d-4122-83b4-32c4518b9056 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:59:06 np0005486808 podman[295729]: 2025-10-14 08:59:06.015896537 +0000 UTC m=+0.031343760 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 04:59:06 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:59:06 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b2261a989d2df2c2391ffadc57ac59269fc091a983c6619c6804ba5d6a17e8d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 04:59:06 np0005486808 podman[295729]: 2025-10-14 08:59:06.130475441 +0000 UTC m=+0.145922624 container init eefa4b1ee19cb0960184e92bdb0afadfe041d2360042b40b2c6602181b36f7a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 14 04:59:06 np0005486808 podman[295729]: 2025-10-14 08:59:06.136157221 +0000 UTC m=+0.151604404 container start eefa4b1ee19cb0960184e92bdb0afadfe041d2360042b40b2c6602181b36f7a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]: {
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:    "0": [
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:        {
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:            "devices": [
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:                "/dev/loop3"
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:            ],
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:            "lv_name": "ceph_lv0",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:            "lv_size": "21470642176",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:            "name": "ceph_lv0",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:            "tags": {
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:                "ceph.cluster_name": "ceph",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:                "ceph.crush_device_class": "",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:                "ceph.encrypted": "0",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:                "ceph.osd_id": "0",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:                "ceph.type": "block",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:                "ceph.vdo": "0"
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:            },
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:            "type": "block",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:            "vg_name": "ceph_vg0"
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:        }
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:    ],
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:    "1": [
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:        {
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:            "devices": [
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:                "/dev/loop4"
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:            ],
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:            "lv_name": "ceph_lv1",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:            "lv_size": "21470642176",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:            "name": "ceph_lv1",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:            "tags": {
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:                "ceph.cluster_name": "ceph",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:                "ceph.crush_device_class": "",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:                "ceph.encrypted": "0",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:                "ceph.osd_id": "1",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:                "ceph.type": "block",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:                "ceph.vdo": "0"
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:            },
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:            "type": "block",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:            "vg_name": "ceph_vg1"
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:        }
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:    ],
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:    "2": [
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:        {
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:            "devices": [
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:                "/dev/loop5"
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:            ],
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:            "lv_name": "ceph_lv2",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:            "lv_size": "21470642176",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:            "name": "ceph_lv2",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:            "tags": {
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:                "ceph.cephx_lockbox_secret": "",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:                "ceph.cluster_name": "ceph",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:                "ceph.crush_device_class": "",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:                "ceph.encrypted": "0",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:                "ceph.osd_id": "2",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:                "ceph.type": "block",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:                "ceph.vdo": "0"
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:            },
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:            "type": "block",
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:            "vg_name": "ceph_vg2"
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:        }
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]:    ]
Oct 14 04:59:06 np0005486808 cranky_vaughan[295653]: }
Oct 14 04:59:06 np0005486808 neutron-haproxy-ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30[295747]: [NOTICE]   (295760) : New worker (295770) forked
Oct 14 04:59:06 np0005486808 neutron-haproxy-ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30[295747]: [NOTICE]   (295760) : Loading success.
Oct 14 04:59:06 np0005486808 systemd[1]: libpod-ff3296f6e4bc3cd61276ec42eff8487ba7e2908d0361a6b623decb7489db1a0a.scope: Deactivated successfully.
Oct 14 04:59:06 np0005486808 conmon[295653]: conmon ff3296f6e4bc3cd61276 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ff3296f6e4bc3cd61276ec42eff8487ba7e2908d0361a6b623decb7489db1a0a.scope/container/memory.events
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.200 162547 INFO neutron.agent.ovn.metadata.agent [-] Port fa5f1925-a535-45ee-b96e-f79c725d7960 in datapath 7b0bcfe7-c41f-42d1-9739-3dab148a181f unbound from our chassis#033[00m
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.207 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7b0bcfe7-c41f-42d1-9739-3dab148a181f#033[00m
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.218 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3a880936-0638-4877-8834-4de9c3fb7e65]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.218 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7b0bcfe7-c1 in ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.220 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7b0bcfe7-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.220 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6dc54952-ee94-4598-aeec-3134d16d9595]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.221 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[33f6ced0-0357-46ca-b6a1-235483e5da36]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:06 np0005486808 nova_compute[259627]: 2025-10-14 08:59:06.233 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432346.2328403, 654413e6-01cd-4e54-a271-6b515a8561e6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:59:06 np0005486808 nova_compute[259627]: 2025-10-14 08:59:06.233 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] VM Started (Lifecycle Event)#033[00m
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.234 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[582bb1c7-a013-48b2-bf51-91e2877b37fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:06 np0005486808 podman[295779]: 2025-10-14 08:59:06.236559136 +0000 UTC m=+0.037798209 container died ff3296f6e4bc3cd61276ec42eff8487ba7e2908d0361a6b623decb7489db1a0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.250 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6e3acf2d-9db5-4fb1-837d-d071af32e114]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:06 np0005486808 nova_compute[259627]: 2025-10-14 08:59:06.257 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:06 np0005486808 nova_compute[259627]: 2025-10-14 08:59:06.263 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432346.2346213, 654413e6-01cd-4e54-a271-6b515a8561e6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:59:06 np0005486808 nova_compute[259627]: 2025-10-14 08:59:06.263 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] VM Paused (Lifecycle Event)#033[00m
Oct 14 04:59:06 np0005486808 systemd[1]: var-lib-containers-storage-overlay-9b01175819c7b1e2eaee5fac569d9c73e78e8e4f6a01bdfbcab1a48d6bced2fd-merged.mount: Deactivated successfully.
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.277 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[91785e8e-6d34-40ba-a21e-bc15fe0b42ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.288 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5c1fa541-f215-4cf8-b5fe-c16b66876b51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:06 np0005486808 NetworkManager[44885]: <info>  [1760432346.2894] manager: (tap7b0bcfe7-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/115)
Oct 14 04:59:06 np0005486808 nova_compute[259627]: 2025-10-14 08:59:06.289 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:06 np0005486808 systemd-udevd[295640]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 04:59:06 np0005486808 nova_compute[259627]: 2025-10-14 08:59:06.296 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:59:06 np0005486808 podman[295779]: 2025-10-14 08:59:06.326272769 +0000 UTC m=+0.127511832 container remove ff3296f6e4bc3cd61276ec42eff8487ba7e2908d0361a6b623decb7489db1a0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_vaughan, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 14 04:59:06 np0005486808 nova_compute[259627]: 2025-10-14 08:59:06.327 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:59:06 np0005486808 systemd[1]: libpod-conmon-ff3296f6e4bc3cd61276ec42eff8487ba7e2908d0361a6b623decb7489db1a0a.scope: Deactivated successfully.
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.348 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[2183cc1e-0c63-495b-bb35-c7bcb991949b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.357 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[fb78f660-4207-4d7c-89bb-6a5562a4171f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:06 np0005486808 NetworkManager[44885]: <info>  [1760432346.3906] device (tap7b0bcfe7-c0): carrier: link connected
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.397 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[120e53de-090f-4516-9e00-ea3d4496eb2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.416 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cbb5c219-5f56-4e7f-9aa6-91826382f2c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7b0bcfe7-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c1:a0:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 72], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 612515, 'reachable_time': 41221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295819, 'error': None, 'target': 'ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.435 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f00095e6-4661-4f08-9be4-c50bf951ac33]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec1:a0ea'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 612515, 'tstamp': 612515}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295830, 'error': None, 'target': 'ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.458 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fd986a3d-e95b-41eb-96ca-a03c4bb80ca5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7b0bcfe7-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c1:a0:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 72], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 612515, 'reachable_time': 41221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295832, 'error': None, 'target': 'ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.491 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e0d981d1-6755-454f-9e59-f19960be35c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.554 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[201eeb0b-b771-43af-86f2-a15f86a88923]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.556 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b0bcfe7-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.556 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.556 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7b0bcfe7-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:06 np0005486808 NetworkManager[44885]: <info>  [1760432346.5595] manager: (tap7b0bcfe7-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/116)
Oct 14 04:59:06 np0005486808 nova_compute[259627]: 2025-10-14 08:59:06.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:06 np0005486808 kernel: tap7b0bcfe7-c0: entered promiscuous mode
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.563 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7b0bcfe7-c0, col_values=(('external_ids', {'iface-id': '807b5c61-142e-48b3-a22e-e804f0499ef7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:06 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:06Z|00214|binding|INFO|Releasing lport 807b5c61-142e-48b3-a22e-e804f0499ef7 from this chassis (sb_readonly=0)
Oct 14 04:59:06 np0005486808 nova_compute[259627]: 2025-10-14 08:59:06.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:06 np0005486808 nova_compute[259627]: 2025-10-14 08:59:06.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.593 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7b0bcfe7-c41f-42d1-9739-3dab148a181f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7b0bcfe7-c41f-42d1-9739-3dab148a181f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.595 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[20374efa-4ab3-476c-9e26-469dcc116428]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.596 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-7b0bcfe7-c41f-42d1-9739-3dab148a181f
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/7b0bcfe7-c41f-42d1-9739-3dab148a181f.pid.haproxy
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 7b0bcfe7-c41f-42d1-9739-3dab148a181f
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 04:59:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:06.597 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f', 'env', 'PROCESS_TAG=haproxy-7b0bcfe7-c41f-42d1-9739-3dab148a181f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7b0bcfe7-c41f-42d1-9739-3dab148a181f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 04:59:06 np0005486808 nova_compute[259627]: 2025-10-14 08:59:06.705 2 INFO nova.virt.libvirt.driver [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Deleting instance files /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056_del#033[00m
Oct 14 04:59:06 np0005486808 nova_compute[259627]: 2025-10-14 08:59:06.706 2 INFO nova.virt.libvirt.driver [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Deletion of /var/lib/nova/instances/51c76e0f-284d-4122-83b4-32c4518b9056_del complete#033[00m
Oct 14 04:59:06 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:06Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:da:fb:42 10.100.0.3
Oct 14 04:59:06 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:06Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:da:fb:42 10.100.0.3
Oct 14 04:59:06 np0005486808 nova_compute[259627]: 2025-10-14 08:59:06.796 2 DEBUG nova.compute.manager [req-d75ff9b9-6b67-49fd-b910-10c0b8cee8eb req-7a81d7d7-c420-4017-a454-d40254e4db70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Received event network-vif-plugged-902a062a-858b-4495-936b-47a675567467 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:59:06 np0005486808 nova_compute[259627]: 2025-10-14 08:59:06.797 2 DEBUG oslo_concurrency.lockutils [req-d75ff9b9-6b67-49fd-b910-10c0b8cee8eb req-7a81d7d7-c420-4017-a454-d40254e4db70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:06 np0005486808 nova_compute[259627]: 2025-10-14 08:59:06.797 2 DEBUG oslo_concurrency.lockutils [req-d75ff9b9-6b67-49fd-b910-10c0b8cee8eb req-7a81d7d7-c420-4017-a454-d40254e4db70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:06 np0005486808 nova_compute[259627]: 2025-10-14 08:59:06.798 2 DEBUG oslo_concurrency.lockutils [req-d75ff9b9-6b67-49fd-b910-10c0b8cee8eb req-7a81d7d7-c420-4017-a454-d40254e4db70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:06 np0005486808 nova_compute[259627]: 2025-10-14 08:59:06.798 2 DEBUG nova.compute.manager [req-d75ff9b9-6b67-49fd-b910-10c0b8cee8eb req-7a81d7d7-c420-4017-a454-d40254e4db70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Processing event network-vif-plugged-902a062a-858b-4495-936b-47a675567467 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 04:59:06 np0005486808 nova_compute[259627]: 2025-10-14 08:59:06.799 2 INFO nova.compute.manager [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Took 0.94 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 04:59:06 np0005486808 nova_compute[259627]: 2025-10-14 08:59:06.799 2 DEBUG oslo.service.loopingcall [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 04:59:06 np0005486808 nova_compute[259627]: 2025-10-14 08:59:06.800 2 DEBUG nova.compute.manager [-] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 04:59:06 np0005486808 nova_compute[259627]: 2025-10-14 08:59:06.800 2 DEBUG nova.network.neutron [-] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 04:59:06 np0005486808 nova_compute[259627]: 2025-10-14 08:59:06.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:07.017 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:07.018 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:07.019 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:07 np0005486808 podman[295973]: 2025-10-14 08:59:07.078201193 +0000 UTC m=+0.062327142 container create 2cc860753fb25594021f0f771e8c4dfbb790500328fbb70bf769e6bd3ed172aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 14 04:59:07 np0005486808 nova_compute[259627]: 2025-10-14 08:59:07.092 2 DEBUG nova.network.neutron [-] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 04:59:07 np0005486808 nova_compute[259627]: 2025-10-14 08:59:07.109 2 DEBUG nova.network.neutron [-] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:59:07 np0005486808 podman[295979]: 2025-10-14 08:59:07.111025569 +0000 UTC m=+0.081743649 container create c8e9fe974979f1bd0bb09550d097bc5d386b401898a73d18de2b62016ba26a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lehmann, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Oct 14 04:59:07 np0005486808 systemd[1]: Started libpod-conmon-2cc860753fb25594021f0f771e8c4dfbb790500328fbb70bf769e6bd3ed172aa.scope.
Oct 14 04:59:07 np0005486808 nova_compute[259627]: 2025-10-14 08:59:07.127 2 INFO nova.compute.manager [-] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Took 0.33 seconds to deallocate network for instance.#033[00m
Oct 14 04:59:07 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:59:07 np0005486808 podman[295973]: 2025-10-14 08:59:07.053718961 +0000 UTC m=+0.037844930 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 04:59:07 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ee5fee482ca97cde7b330eef7b447de260a6448dd9b51c26764ac8cff20927e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 04:59:07 np0005486808 systemd[1]: Started libpod-conmon-c8e9fe974979f1bd0bb09550d097bc5d386b401898a73d18de2b62016ba26a16.scope.
Oct 14 04:59:07 np0005486808 podman[295979]: 2025-10-14 08:59:07.073770424 +0000 UTC m=+0.044488524 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:59:07 np0005486808 podman[295973]: 2025-10-14 08:59:07.169599867 +0000 UTC m=+0.153725826 container init 2cc860753fb25594021f0f771e8c4dfbb790500328fbb70bf769e6bd3ed172aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 14 04:59:07 np0005486808 podman[295973]: 2025-10-14 08:59:07.178655059 +0000 UTC m=+0.162781018 container start 2cc860753fb25594021f0f771e8c4dfbb790500328fbb70bf769e6bd3ed172aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 14 04:59:07 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:59:07 np0005486808 nova_compute[259627]: 2025-10-14 08:59:07.184 2 DEBUG oslo_concurrency.lockutils [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:07 np0005486808 nova_compute[259627]: 2025-10-14 08:59:07.185 2 DEBUG oslo_concurrency.lockutils [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:07 np0005486808 podman[295979]: 2025-10-14 08:59:07.201040229 +0000 UTC m=+0.171758299 container init c8e9fe974979f1bd0bb09550d097bc5d386b401898a73d18de2b62016ba26a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Oct 14 04:59:07 np0005486808 neutron-haproxy-ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f[296003]: [NOTICE]   (296014) : New worker (296017) forked
Oct 14 04:59:07 np0005486808 neutron-haproxy-ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f[296003]: [NOTICE]   (296014) : Loading success.
Oct 14 04:59:07 np0005486808 podman[295979]: 2025-10-14 08:59:07.207788635 +0000 UTC m=+0.178506715 container start c8e9fe974979f1bd0bb09550d097bc5d386b401898a73d18de2b62016ba26a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:59:07 np0005486808 podman[295979]: 2025-10-14 08:59:07.211715191 +0000 UTC m=+0.182433271 container attach c8e9fe974979f1bd0bb09550d097bc5d386b401898a73d18de2b62016ba26a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:59:07 np0005486808 jolly_lehmann[296010]: 167 167
Oct 14 04:59:07 np0005486808 systemd[1]: libpod-c8e9fe974979f1bd0bb09550d097bc5d386b401898a73d18de2b62016ba26a16.scope: Deactivated successfully.
Oct 14 04:59:07 np0005486808 podman[295979]: 2025-10-14 08:59:07.21533054 +0000 UTC m=+0.186048620 container died c8e9fe974979f1bd0bb09550d097bc5d386b401898a73d18de2b62016ba26a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:59:07 np0005486808 nova_compute[259627]: 2025-10-14 08:59:07.226 2 DEBUG nova.scheduler.client.report [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Refreshing inventories for resource provider 92105e1d-1743-46e3-a494-858b4331398a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 14 04:59:07 np0005486808 systemd[1]: var-lib-containers-storage-overlay-ae668dbdc9ed47c922b5b15d592565495b9ebd0afa22ab974257c776b40a004e-merged.mount: Deactivated successfully.
Oct 14 04:59:07 np0005486808 nova_compute[259627]: 2025-10-14 08:59:07.245 2 DEBUG nova.compute.manager [req-5ae20624-bf06-49c5-8bd0-d8ff9fe73995 req-c5ad0c7f-11b7-4b94-bc69-91b1ef9d2cf7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Received event network-vif-plugged-fa5f1925-a535-45ee-b96e-f79c725d7960 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:59:07 np0005486808 nova_compute[259627]: 2025-10-14 08:59:07.245 2 DEBUG oslo_concurrency.lockutils [req-5ae20624-bf06-49c5-8bd0-d8ff9fe73995 req-c5ad0c7f-11b7-4b94-bc69-91b1ef9d2cf7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:07 np0005486808 nova_compute[259627]: 2025-10-14 08:59:07.246 2 DEBUG oslo_concurrency.lockutils [req-5ae20624-bf06-49c5-8bd0-d8ff9fe73995 req-c5ad0c7f-11b7-4b94-bc69-91b1ef9d2cf7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:07 np0005486808 nova_compute[259627]: 2025-10-14 08:59:07.246 2 DEBUG oslo_concurrency.lockutils [req-5ae20624-bf06-49c5-8bd0-d8ff9fe73995 req-c5ad0c7f-11b7-4b94-bc69-91b1ef9d2cf7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:07 np0005486808 nova_compute[259627]: 2025-10-14 08:59:07.246 2 DEBUG nova.compute.manager [req-5ae20624-bf06-49c5-8bd0-d8ff9fe73995 req-c5ad0c7f-11b7-4b94-bc69-91b1ef9d2cf7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Processing event network-vif-plugged-fa5f1925-a535-45ee-b96e-f79c725d7960 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 04:59:07 np0005486808 nova_compute[259627]: 2025-10-14 08:59:07.246 2 DEBUG nova.compute.manager [req-5ae20624-bf06-49c5-8bd0-d8ff9fe73995 req-c5ad0c7f-11b7-4b94-bc69-91b1ef9d2cf7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Received event network-vif-plugged-fa5f1925-a535-45ee-b96e-f79c725d7960 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:59:07 np0005486808 nova_compute[259627]: 2025-10-14 08:59:07.247 2 DEBUG oslo_concurrency.lockutils [req-5ae20624-bf06-49c5-8bd0-d8ff9fe73995 req-c5ad0c7f-11b7-4b94-bc69-91b1ef9d2cf7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:07 np0005486808 nova_compute[259627]: 2025-10-14 08:59:07.247 2 DEBUG oslo_concurrency.lockutils [req-5ae20624-bf06-49c5-8bd0-d8ff9fe73995 req-c5ad0c7f-11b7-4b94-bc69-91b1ef9d2cf7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:07 np0005486808 nova_compute[259627]: 2025-10-14 08:59:07.247 2 DEBUG oslo_concurrency.lockutils [req-5ae20624-bf06-49c5-8bd0-d8ff9fe73995 req-c5ad0c7f-11b7-4b94-bc69-91b1ef9d2cf7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:07 np0005486808 nova_compute[259627]: 2025-10-14 08:59:07.247 2 DEBUG nova.compute.manager [req-5ae20624-bf06-49c5-8bd0-d8ff9fe73995 req-c5ad0c7f-11b7-4b94-bc69-91b1ef9d2cf7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] No waiting events found dispatching network-vif-plugged-fa5f1925-a535-45ee-b96e-f79c725d7960 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:59:07 np0005486808 nova_compute[259627]: 2025-10-14 08:59:07.248 2 WARNING nova.compute.manager [req-5ae20624-bf06-49c5-8bd0-d8ff9fe73995 req-c5ad0c7f-11b7-4b94-bc69-91b1ef9d2cf7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Received unexpected event network-vif-plugged-fa5f1925-a535-45ee-b96e-f79c725d7960 for instance with vm_state building and task_state spawning.#033[00m
Oct 14 04:59:07 np0005486808 nova_compute[259627]: 2025-10-14 08:59:07.249 2 DEBUG nova.scheduler.client.report [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Updating ProviderTree inventory for provider 92105e1d-1743-46e3-a494-858b4331398a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 14 04:59:07 np0005486808 podman[295979]: 2025-10-14 08:59:07.252666447 +0000 UTC m=+0.223384527 container remove c8e9fe974979f1bd0bb09550d097bc5d386b401898a73d18de2b62016ba26a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 14 04:59:07 np0005486808 nova_compute[259627]: 2025-10-14 08:59:07.252 2 DEBUG nova.compute.provider_tree [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Updating inventory in ProviderTree for provider 92105e1d-1743-46e3-a494-858b4331398a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 14 04:59:07 np0005486808 nova_compute[259627]: 2025-10-14 08:59:07.254 2 DEBUG nova.compute.manager [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 04:59:07 np0005486808 nova_compute[259627]: 2025-10-14 08:59:07.261 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432347.2601912, 654413e6-01cd-4e54-a271-6b515a8561e6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:59:07 np0005486808 nova_compute[259627]: 2025-10-14 08:59:07.262 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] VM Resumed (Lifecycle Event)#033[00m
Oct 14 04:59:07 np0005486808 nova_compute[259627]: 2025-10-14 08:59:07.264 2 DEBUG nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 04:59:07 np0005486808 nova_compute[259627]: 2025-10-14 08:59:07.269 2 INFO nova.virt.libvirt.driver [-] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Instance spawned successfully.#033[00m
Oct 14 04:59:07 np0005486808 nova_compute[259627]: 2025-10-14 08:59:07.270 2 DEBUG nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 04:59:07 np0005486808 nova_compute[259627]: 2025-10-14 08:59:07.274 2 DEBUG nova.scheduler.client.report [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Refreshing aggregate associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 14 04:59:07 np0005486808 nova_compute[259627]: 2025-10-14 08:59:07.294 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:07 np0005486808 nova_compute[259627]: 2025-10-14 08:59:07.299 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:59:07 np0005486808 nova_compute[259627]: 2025-10-14 08:59:07.302 2 DEBUG nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:07 np0005486808 nova_compute[259627]: 2025-10-14 08:59:07.303 2 DEBUG nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:07 np0005486808 systemd[1]: libpod-conmon-c8e9fe974979f1bd0bb09550d097bc5d386b401898a73d18de2b62016ba26a16.scope: Deactivated successfully.
Oct 14 04:59:07 np0005486808 nova_compute[259627]: 2025-10-14 08:59:07.303 2 DEBUG nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:07 np0005486808 nova_compute[259627]: 2025-10-14 08:59:07.304 2 DEBUG nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:07 np0005486808 nova_compute[259627]: 2025-10-14 08:59:07.305 2 DEBUG nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:07 np0005486808 nova_compute[259627]: 2025-10-14 08:59:07.307 2 DEBUG nova.virt.libvirt.driver [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:07 np0005486808 nova_compute[259627]: 2025-10-14 08:59:07.314 2 DEBUG nova.scheduler.client.report [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Refreshing trait associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, traits: COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_ACCELERATORS,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 14 04:59:07 np0005486808 nova_compute[259627]: 2025-10-14 08:59:07.331 2 DEBUG nova.network.neutron [req-c1f41de1-d690-485a-9d7f-90d5a70768fb req-942170bb-ca1f-4d0c-a928-a03a8032539a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Updated VIF entry in instance network info cache for port fa5f1925-a535-45ee-b96e-f79c725d7960. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 04:59:07 np0005486808 nova_compute[259627]: 2025-10-14 08:59:07.332 2 DEBUG nova.network.neutron [req-c1f41de1-d690-485a-9d7f-90d5a70768fb req-942170bb-ca1f-4d0c-a928-a03a8032539a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Updating instance_info_cache with network_info: [{"id": "902a062a-858b-4495-936b-47a675567467", "address": "fa:16:3e:99:b0:d5", "network": {"id": "b65b17d8-22d8-41b1-aa72-fd93aefdff30", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1724838337", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.125", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap902a062a-85", "ovs_interfaceid": "902a062a-858b-4495-936b-47a675567467", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fa5f1925-a535-45ee-b96e-f79c725d7960", "address": "fa:16:3e:9d:59:d2", "network": {"id": "7b0bcfe7-c41f-42d1-9739-3dab148a181f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1841633563", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa5f1925-a5", "ovs_interfaceid": "fa5f1925-a535-45ee-b96e-f79c725d7960", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:59:07 np0005486808 nova_compute[259627]: 2025-10-14 08:59:07.336 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:59:07 np0005486808 nova_compute[259627]: 2025-10-14 08:59:07.348 2 DEBUG oslo_concurrency.lockutils [req-c1f41de1-d690-485a-9d7f-90d5a70768fb req-942170bb-ca1f-4d0c-a928-a03a8032539a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-654413e6-01cd-4e54-a271-6b515a8561e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:59:07 np0005486808 podman[296044]: 2025-10-14 08:59:07.45356313 +0000 UTC m=+0.040658479 container create f6fc0e86a3a5b0c8eacfd5a4f739cd36a5d1043392ec01ff7b8b1140dd811a35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cray, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:59:07 np0005486808 systemd[1]: Started libpod-conmon-f6fc0e86a3a5b0c8eacfd5a4f739cd36a5d1043392ec01ff7b8b1140dd811a35.scope.
Oct 14 04:59:07 np0005486808 nova_compute[259627]: 2025-10-14 08:59:07.503 2 INFO nova.compute.manager [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Took 15.01 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 04:59:07 np0005486808 nova_compute[259627]: 2025-10-14 08:59:07.504 2 DEBUG nova.compute.manager [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 04:59:07 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:59:07 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe99259d5987fbc37765d9822372bbaf9f5af48857c4abfa71da3098c06674f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 04:59:07 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe99259d5987fbc37765d9822372bbaf9f5af48857c4abfa71da3098c06674f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 04:59:07 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe99259d5987fbc37765d9822372bbaf9f5af48857c4abfa71da3098c06674f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 04:59:07 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe99259d5987fbc37765d9822372bbaf9f5af48857c4abfa71da3098c06674f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 04:59:07 np0005486808 podman[296044]: 2025-10-14 08:59:07.439479464 +0000 UTC m=+0.026574833 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 04:59:07 np0005486808 podman[296044]: 2025-10-14 08:59:07.55331944 +0000 UTC m=+0.140414809 container init f6fc0e86a3a5b0c8eacfd5a4f739cd36a5d1043392ec01ff7b8b1140dd811a35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cray, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 14 04:59:07 np0005486808 podman[296044]: 2025-10-14 08:59:07.561161062 +0000 UTC m=+0.148256411 container start f6fc0e86a3a5b0c8eacfd5a4f739cd36a5d1043392ec01ff7b8b1140dd811a35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 04:59:07 np0005486808 podman[296044]: 2025-10-14 08:59:07.56473297 +0000 UTC m=+0.151828319 container attach f6fc0e86a3a5b0c8eacfd5a4f739cd36a5d1043392ec01ff7b8b1140dd811a35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cray, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 04:59:07 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1229: 305 pgs: 305 active+clean; 321 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 9.2 MiB/s rd, 3.9 MiB/s wr, 433 op/s
Oct 14 04:59:07 np0005486808 nova_compute[259627]: 2025-10-14 08:59:07.574 2 DEBUG oslo_concurrency.processutils [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:07 np0005486808 nova_compute[259627]: 2025-10-14 08:59:07.614 2 INFO nova.compute.manager [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Took 16.12 seconds to build instance.#033[00m
Oct 14 04:59:07 np0005486808 nova_compute[259627]: 2025-10-14 08:59:07.631 2 DEBUG oslo_concurrency.lockutils [None req-c1c542f0-41a0-4b53-8e8b-74085e19411c 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.230s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:59:08 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/381741753' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:59:08 np0005486808 nova_compute[259627]: 2025-10-14 08:59:08.067 2 DEBUG oslo_concurrency.processutils [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:08 np0005486808 nova_compute[259627]: 2025-10-14 08:59:08.075 2 DEBUG nova.compute.provider_tree [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:59:08 np0005486808 nova_compute[259627]: 2025-10-14 08:59:08.100 2 DEBUG nova.virt.libvirt.driver [None req-86cf384c-5445-45f4-946d-8333c57c1990 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct 14 04:59:08 np0005486808 nova_compute[259627]: 2025-10-14 08:59:08.102 2 DEBUG nova.scheduler.client.report [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:59:08 np0005486808 nova_compute[259627]: 2025-10-14 08:59:08.252 2 DEBUG oslo_concurrency.lockutils [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:08 np0005486808 nova_compute[259627]: 2025-10-14 08:59:08.405 2 INFO nova.scheduler.client.report [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Deleted allocations for instance 51c76e0f-284d-4122-83b4-32c4518b9056#033[00m
Oct 14 04:59:08 np0005486808 nova_compute[259627]: 2025-10-14 08:59:08.562 2 DEBUG oslo_concurrency.lockutils [None req-65f2d792-7733-415b-8344-254bb261382b 24a7b84f511340ae859b668a0e7becf6 61066a48551647f18a4cfb7a7147e7ed - - default default] Lock "51c76e0f-284d-4122-83b4-32c4518b9056" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.474s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:08 np0005486808 awesome_cray[296058]: {
Oct 14 04:59:08 np0005486808 awesome_cray[296058]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 04:59:08 np0005486808 awesome_cray[296058]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:59:08 np0005486808 awesome_cray[296058]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 04:59:08 np0005486808 awesome_cray[296058]:        "osd_id": 2,
Oct 14 04:59:08 np0005486808 awesome_cray[296058]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 04:59:08 np0005486808 awesome_cray[296058]:        "type": "bluestore"
Oct 14 04:59:08 np0005486808 awesome_cray[296058]:    },
Oct 14 04:59:08 np0005486808 awesome_cray[296058]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 04:59:08 np0005486808 awesome_cray[296058]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:59:08 np0005486808 awesome_cray[296058]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 04:59:08 np0005486808 awesome_cray[296058]:        "osd_id": 1,
Oct 14 04:59:08 np0005486808 awesome_cray[296058]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 04:59:08 np0005486808 awesome_cray[296058]:        "type": "bluestore"
Oct 14 04:59:08 np0005486808 awesome_cray[296058]:    },
Oct 14 04:59:08 np0005486808 awesome_cray[296058]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 04:59:08 np0005486808 awesome_cray[296058]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 04:59:08 np0005486808 awesome_cray[296058]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 04:59:08 np0005486808 awesome_cray[296058]:        "osd_id": 0,
Oct 14 04:59:08 np0005486808 awesome_cray[296058]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 04:59:08 np0005486808 awesome_cray[296058]:        "type": "bluestore"
Oct 14 04:59:08 np0005486808 awesome_cray[296058]:    }
Oct 14 04:59:08 np0005486808 awesome_cray[296058]: }
Oct 14 04:59:08 np0005486808 systemd[1]: libpod-f6fc0e86a3a5b0c8eacfd5a4f739cd36a5d1043392ec01ff7b8b1140dd811a35.scope: Deactivated successfully.
Oct 14 04:59:08 np0005486808 conmon[296058]: conmon f6fc0e86a3a5b0c8eacf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f6fc0e86a3a5b0c8eacfd5a4f739cd36a5d1043392ec01ff7b8b1140dd811a35.scope/container/memory.events
Oct 14 04:59:08 np0005486808 podman[296044]: 2025-10-14 08:59:08.642300371 +0000 UTC m=+1.229395720 container died f6fc0e86a3a5b0c8eacfd5a4f739cd36a5d1043392ec01ff7b8b1140dd811a35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cray, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 04:59:08 np0005486808 nova_compute[259627]: 2025-10-14 08:59:08.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:08 np0005486808 systemd[1]: var-lib-containers-storage-overlay-fe99259d5987fbc37765d9822372bbaf9f5af48857c4abfa71da3098c06674f7-merged.mount: Deactivated successfully.
Oct 14 04:59:08 np0005486808 podman[296044]: 2025-10-14 08:59:08.694085773 +0000 UTC m=+1.281181122 container remove f6fc0e86a3a5b0c8eacfd5a4f739cd36a5d1043392ec01ff7b8b1140dd811a35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cray, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 14 04:59:08 np0005486808 systemd[1]: libpod-conmon-f6fc0e86a3a5b0c8eacfd5a4f739cd36a5d1043392ec01ff7b8b1140dd811a35.scope: Deactivated successfully.
Oct 14 04:59:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 04:59:08 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:59:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 04:59:08 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:59:08 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 18036fd2-3b0e-4a50-bfae-ea198d20e47d does not exist
Oct 14 04:59:08 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev e9b7a5ef-ca1c-4d02-a34b-d3f81914edf9 does not exist
Oct 14 04:59:09 np0005486808 nova_compute[259627]: 2025-10-14 08:59:09.014 2 DEBUG nova.compute.manager [req-e26d17a1-19d5-4e07-97c0-c4c699b6e66a req-1abae74d-ca85-473e-a07f-d06d7e0578cd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Received event network-vif-plugged-902a062a-858b-4495-936b-47a675567467 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:59:09 np0005486808 nova_compute[259627]: 2025-10-14 08:59:09.014 2 DEBUG oslo_concurrency.lockutils [req-e26d17a1-19d5-4e07-97c0-c4c699b6e66a req-1abae74d-ca85-473e-a07f-d06d7e0578cd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:09 np0005486808 nova_compute[259627]: 2025-10-14 08:59:09.014 2 DEBUG oslo_concurrency.lockutils [req-e26d17a1-19d5-4e07-97c0-c4c699b6e66a req-1abae74d-ca85-473e-a07f-d06d7e0578cd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:09 np0005486808 nova_compute[259627]: 2025-10-14 08:59:09.014 2 DEBUG oslo_concurrency.lockutils [req-e26d17a1-19d5-4e07-97c0-c4c699b6e66a req-1abae74d-ca85-473e-a07f-d06d7e0578cd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:09 np0005486808 nova_compute[259627]: 2025-10-14 08:59:09.015 2 DEBUG nova.compute.manager [req-e26d17a1-19d5-4e07-97c0-c4c699b6e66a req-1abae74d-ca85-473e-a07f-d06d7e0578cd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] No waiting events found dispatching network-vif-plugged-902a062a-858b-4495-936b-47a675567467 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:59:09 np0005486808 nova_compute[259627]: 2025-10-14 08:59:09.015 2 WARNING nova.compute.manager [req-e26d17a1-19d5-4e07-97c0-c4c699b6e66a req-1abae74d-ca85-473e-a07f-d06d7e0578cd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Received unexpected event network-vif-plugged-902a062a-858b-4495-936b-47a675567467 for instance with vm_state active and task_state None.#033[00m
Oct 14 04:59:09 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1230: 305 pgs: 305 active+clean; 321 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 9.2 MiB/s rd, 3.9 MiB/s wr, 433 op/s
Oct 14 04:59:09 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:59:09 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 04:59:10 np0005486808 nova_compute[259627]: 2025-10-14 08:59:10.131 2 DEBUG oslo_concurrency.lockutils [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquiring lock "654413e6-01cd-4e54-a271-6b515a8561e6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:10 np0005486808 nova_compute[259627]: 2025-10-14 08:59:10.132 2 DEBUG oslo_concurrency.lockutils [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:10 np0005486808 nova_compute[259627]: 2025-10-14 08:59:10.132 2 DEBUG oslo_concurrency.lockutils [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquiring lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:10 np0005486808 nova_compute[259627]: 2025-10-14 08:59:10.132 2 DEBUG oslo_concurrency.lockutils [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:10 np0005486808 nova_compute[259627]: 2025-10-14 08:59:10.133 2 DEBUG oslo_concurrency.lockutils [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:10 np0005486808 nova_compute[259627]: 2025-10-14 08:59:10.134 2 INFO nova.compute.manager [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Terminating instance#033[00m
Oct 14 04:59:10 np0005486808 nova_compute[259627]: 2025-10-14 08:59:10.135 2 DEBUG nova.compute.manager [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 04:59:10 np0005486808 kernel: tap902a062a-85 (unregistering): left promiscuous mode
Oct 14 04:59:10 np0005486808 NetworkManager[44885]: <info>  [1760432350.1951] device (tap902a062a-85): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 04:59:10 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:10Z|00215|binding|INFO|Releasing lport 902a062a-858b-4495-936b-47a675567467 from this chassis (sb_readonly=0)
Oct 14 04:59:10 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:10Z|00216|binding|INFO|Setting lport 902a062a-858b-4495-936b-47a675567467 down in Southbound
Oct 14 04:59:10 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:10Z|00217|binding|INFO|Removing iface tap902a062a-85 ovn-installed in OVS
Oct 14 04:59:10 np0005486808 nova_compute[259627]: 2025-10-14 08:59:10.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:10 np0005486808 nova_compute[259627]: 2025-10-14 08:59:10.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.212 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:b0:d5 10.100.0.125'], port_security=['fa:16:3e:99:b0:d5 10.100.0.125'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.125/24', 'neutron:device_id': '654413e6-01cd-4e54-a271-6b515a8561e6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b65b17d8-22d8-41b1-aa72-fd93aefdff30', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3566f35c659a45bd9b9bbddf6552ed43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c2a10e31-1c91-4c44-89d5-dfe41c81ad3c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ba2a7d22-c618-4486-a804-fe221f5826d8, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=902a062a-858b-4495-936b-47a675567467) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:59:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.214 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 902a062a-858b-4495-936b-47a675567467 in datapath b65b17d8-22d8-41b1-aa72-fd93aefdff30 unbound from our chassis#033[00m
Oct 14 04:59:10 np0005486808 kernel: tapfa5f1925-a5 (unregistering): left promiscuous mode
Oct 14 04:59:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.216 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b65b17d8-22d8-41b1-aa72-fd93aefdff30, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 04:59:10 np0005486808 NetworkManager[44885]: <info>  [1760432350.2214] device (tapfa5f1925-a5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 04:59:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.218 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9ec9f351-a8c4-49b9-bb9d-1162a25efb30]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.219 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30 namespace which is not needed anymore#033[00m
Oct 14 04:59:10 np0005486808 nova_compute[259627]: 2025-10-14 08:59:10.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:10 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:10Z|00218|binding|INFO|Releasing lport fa5f1925-a535-45ee-b96e-f79c725d7960 from this chassis (sb_readonly=0)
Oct 14 04:59:10 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:10Z|00219|binding|INFO|Setting lport fa5f1925-a535-45ee-b96e-f79c725d7960 down in Southbound
Oct 14 04:59:10 np0005486808 nova_compute[259627]: 2025-10-14 08:59:10.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:10 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:10Z|00220|binding|INFO|Removing iface tapfa5f1925-a5 ovn-installed in OVS
Oct 14 04:59:10 np0005486808 nova_compute[259627]: 2025-10-14 08:59:10.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.263 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:59:d2 10.100.1.203'], port_security=['fa:16:3e:9d:59:d2 10.100.1.203'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.203/24', 'neutron:device_id': '654413e6-01cd-4e54-a271-6b515a8561e6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b0bcfe7-c41f-42d1-9739-3dab148a181f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3566f35c659a45bd9b9bbddf6552ed43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c2a10e31-1c91-4c44-89d5-dfe41c81ad3c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26d3dea5-d3a1-43cc-a801-df7cba99d5e2, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=fa5f1925-a535-45ee-b96e-f79c725d7960) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:59:10 np0005486808 nova_compute[259627]: 2025-10-14 08:59:10.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:10 np0005486808 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Oct 14 04:59:10 np0005486808 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000001c.scope: Consumed 3.688s CPU time.
Oct 14 04:59:10 np0005486808 systemd-machined[214636]: Machine qemu-33-instance-0000001c terminated.
Oct 14 04:59:10 np0005486808 NetworkManager[44885]: <info>  [1760432350.3687] manager: (tapfa5f1925-a5): new Tun device (/org/freedesktop/NetworkManager/Devices/117)
Oct 14 04:59:10 np0005486808 kernel: tapbcdd5079-ef (unregistering): left promiscuous mode
Oct 14 04:59:10 np0005486808 NetworkManager[44885]: <info>  [1760432350.3799] device (tapbcdd5079-ef): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 04:59:10 np0005486808 systemd[1]: libpod-eefa4b1ee19cb0960184e92bdb0afadfe041d2360042b40b2c6602181b36f7a8.scope: Deactivated successfully.
Oct 14 04:59:10 np0005486808 nova_compute[259627]: 2025-10-14 08:59:10.404 2 INFO nova.virt.libvirt.driver [-] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Instance destroyed successfully.#033[00m
Oct 14 04:59:10 np0005486808 nova_compute[259627]: 2025-10-14 08:59:10.404 2 DEBUG nova.objects.instance [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lazy-loading 'resources' on Instance uuid 654413e6-01cd-4e54-a271-6b515a8561e6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:59:10 np0005486808 nova_compute[259627]: 2025-10-14 08:59:10.430 2 DEBUG nova.virt.libvirt.vif [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:58:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1217516639',display_name='tempest-ServersTestMultiNic-server-1217516639',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1217516639',id=28,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:59:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3566f35c659a45bd9b9bbddf6552ed43',ramdisk_id='',reservation_id='r-1ec16n6w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-840673976',owner_user_name='tempest-ServersTestMultiNic-840673976-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:59:07Z,user_data=None,user_id='5aacb60ad29c418c9161e71bb72da036',uuid=654413e6-01cd-4e54-a271-6b515a8561e6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "902a062a-858b-4495-936b-47a675567467", "address": "fa:16:3e:99:b0:d5", "network": {"id": "b65b17d8-22d8-41b1-aa72-fd93aefdff30", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1724838337", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.125", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap902a062a-85", "ovs_interfaceid": "902a062a-858b-4495-936b-47a675567467", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 04:59:10 np0005486808 nova_compute[259627]: 2025-10-14 08:59:10.431 2 DEBUG nova.network.os_vif_util [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converting VIF {"id": "902a062a-858b-4495-936b-47a675567467", "address": "fa:16:3e:99:b0:d5", "network": {"id": "b65b17d8-22d8-41b1-aa72-fd93aefdff30", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1724838337", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.125", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap902a062a-85", "ovs_interfaceid": "902a062a-858b-4495-936b-47a675567467", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:59:10 np0005486808 nova_compute[259627]: 2025-10-14 08:59:10.431 2 DEBUG nova.network.os_vif_util [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:b0:d5,bridge_name='br-int',has_traffic_filtering=True,id=902a062a-858b-4495-936b-47a675567467,network=Network(b65b17d8-22d8-41b1-aa72-fd93aefdff30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap902a062a-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:59:10 np0005486808 nova_compute[259627]: 2025-10-14 08:59:10.432 2 DEBUG os_vif [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:b0:d5,bridge_name='br-int',has_traffic_filtering=True,id=902a062a-858b-4495-936b-47a675567467,network=Network(b65b17d8-22d8-41b1-aa72-fd93aefdff30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap902a062a-85') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 04:59:10 np0005486808 nova_compute[259627]: 2025-10-14 08:59:10.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:10 np0005486808 nova_compute[259627]: 2025-10-14 08:59:10.433 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap902a062a-85, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:10 np0005486808 neutron-haproxy-ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30[295747]: [NOTICE]   (295760) : haproxy version is 2.8.14-c23fe91
Oct 14 04:59:10 np0005486808 neutron-haproxy-ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30[295747]: [NOTICE]   (295760) : path to executable is /usr/sbin/haproxy
Oct 14 04:59:10 np0005486808 conmon[295747]: conmon eefa4b1ee19cb0960184 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eefa4b1ee19cb0960184e92bdb0afadfe041d2360042b40b2c6602181b36f7a8.scope/container/memory.events
Oct 14 04:59:10 np0005486808 neutron-haproxy-ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30[295747]: [WARNING]  (295760) : Exiting Master process...
Oct 14 04:59:10 np0005486808 neutron-haproxy-ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30[295747]: [ALERT]    (295760) : Current worker (295770) exited with code 143 (Terminated)
Oct 14 04:59:10 np0005486808 neutron-haproxy-ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30[295747]: [WARNING]  (295760) : All workers exited. Exiting... (0)
Oct 14 04:59:10 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:10Z|00221|binding|INFO|Releasing lport bcdd5079-efdb-47f7-99b0-21394b1d16e2 from this chassis (sb_readonly=0)
Oct 14 04:59:10 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:10Z|00222|binding|INFO|Setting lport bcdd5079-efdb-47f7-99b0-21394b1d16e2 down in Southbound
Oct 14 04:59:10 np0005486808 nova_compute[259627]: 2025-10-14 08:59:10.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:10 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:10Z|00223|binding|INFO|Removing iface tapbcdd5079-ef ovn-installed in OVS
Oct 14 04:59:10 np0005486808 nova_compute[259627]: 2025-10-14 08:59:10.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 04:59:10 np0005486808 podman[296203]: 2025-10-14 08:59:10.447241403 +0000 UTC m=+0.109815837 container died eefa4b1ee19cb0960184e92bdb0afadfe041d2360042b40b2c6602181b36f7a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:59:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.450 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:fb:42 10.100.0.3'], port_security=['fa:16:3e:da:fb:42 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '3f3d9640-8200-45d8-ac25-bbc5d016d49f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d87d2d744db48dc8b32bb4bf6847fce', 'neutron:revision_number': '4', 'neutron:security_group_ids': '45598031-691e-407d-b099-ed9702c4e6f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=53047605-2412-4fda-a69f-45911eab5576, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=bcdd5079-efdb-47f7-99b0-21394b1d16e2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:59:10 np0005486808 nova_compute[259627]: 2025-10-14 08:59:10.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:10 np0005486808 nova_compute[259627]: 2025-10-14 08:59:10.480 2 INFO os_vif [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:b0:d5,bridge_name='br-int',has_traffic_filtering=True,id=902a062a-858b-4495-936b-47a675567467,network=Network(b65b17d8-22d8-41b1-aa72-fd93aefdff30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap902a062a-85')#033[00m
Oct 14 04:59:10 np0005486808 nova_compute[259627]: 2025-10-14 08:59:10.481 2 DEBUG nova.virt.libvirt.vif [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:58:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1217516639',display_name='tempest-ServersTestMultiNic-server-1217516639',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1217516639',id=28,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:59:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3566f35c659a45bd9b9bbddf6552ed43',ramdisk_id='',reservation_id='r-1ec16n6w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1
',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-840673976',owner_user_name='tempest-ServersTestMultiNic-840673976-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:59:07Z,user_data=None,user_id='5aacb60ad29c418c9161e71bb72da036',uuid=654413e6-01cd-4e54-a271-6b515a8561e6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fa5f1925-a535-45ee-b96e-f79c725d7960", "address": "fa:16:3e:9d:59:d2", "network": {"id": "7b0bcfe7-c41f-42d1-9739-3dab148a181f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1841633563", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa5f1925-a5", "ovs_interfaceid": "fa5f1925-a535-45ee-b96e-f79c725d7960", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 04:59:10 np0005486808 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Oct 14 04:59:10 np0005486808 nova_compute[259627]: 2025-10-14 08:59:10.481 2 DEBUG nova.network.os_vif_util [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converting VIF {"id": "fa5f1925-a535-45ee-b96e-f79c725d7960", "address": "fa:16:3e:9d:59:d2", "network": {"id": "7b0bcfe7-c41f-42d1-9739-3dab148a181f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1841633563", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3566f35c659a45bd9b9bbddf6552ed43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa5f1925-a5", "ovs_interfaceid": "fa5f1925-a535-45ee-b96e-f79c725d7960", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:59:10 np0005486808 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000001a.scope: Consumed 12.554s CPU time.
Oct 14 04:59:10 np0005486808 nova_compute[259627]: 2025-10-14 08:59:10.482 2 DEBUG nova.network.os_vif_util [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:59:d2,bridge_name='br-int',has_traffic_filtering=True,id=fa5f1925-a535-45ee-b96e-f79c725d7960,network=Network(7b0bcfe7-c41f-42d1-9739-3dab148a181f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa5f1925-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:59:10 np0005486808 nova_compute[259627]: 2025-10-14 08:59:10.483 2 DEBUG os_vif [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:59:d2,bridge_name='br-int',has_traffic_filtering=True,id=fa5f1925-a535-45ee-b96e-f79c725d7960,network=Network(7b0bcfe7-c41f-42d1-9739-3dab148a181f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa5f1925-a5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 04:59:10 np0005486808 systemd[1]: var-lib-containers-storage-overlay-1b2261a989d2df2c2391ffadc57ac59269fc091a983c6619c6804ba5d6a17e8d-merged.mount: Deactivated successfully.
Oct 14 04:59:10 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eefa4b1ee19cb0960184e92bdb0afadfe041d2360042b40b2c6602181b36f7a8-userdata-shm.mount: Deactivated successfully.
Oct 14 04:59:10 np0005486808 nova_compute[259627]: 2025-10-14 08:59:10.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:10 np0005486808 nova_compute[259627]: 2025-10-14 08:59:10.487 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa5f1925-a5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:10 np0005486808 nova_compute[259627]: 2025-10-14 08:59:10.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:10 np0005486808 nova_compute[259627]: 2025-10-14 08:59:10.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 04:59:10 np0005486808 systemd-machined[214636]: Machine qemu-28-instance-0000001a terminated.
Oct 14 04:59:10 np0005486808 podman[296203]: 2025-10-14 08:59:10.493556291 +0000 UTC m=+0.156130725 container cleanup eefa4b1ee19cb0960184e92bdb0afadfe041d2360042b40b2c6602181b36f7a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 04:59:10 np0005486808 nova_compute[259627]: 2025-10-14 08:59:10.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:10 np0005486808 nova_compute[259627]: 2025-10-14 08:59:10.497 2 INFO os_vif [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:59:d2,bridge_name='br-int',has_traffic_filtering=True,id=fa5f1925-a535-45ee-b96e-f79c725d7960,network=Network(7b0bcfe7-c41f-42d1-9739-3dab148a181f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa5f1925-a5')#033[00m
Oct 14 04:59:10 np0005486808 systemd[1]: libpod-conmon-eefa4b1ee19cb0960184e92bdb0afadfe041d2360042b40b2c6602181b36f7a8.scope: Deactivated successfully.
Oct 14 04:59:10 np0005486808 podman[296255]: 2025-10-14 08:59:10.563426965 +0000 UTC m=+0.046836260 container remove eefa4b1ee19cb0960184e92bdb0afadfe041d2360042b40b2c6602181b36f7a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 14 04:59:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.575 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e9cdffe2-6d07-4b11-87d1-7c9dafe1a6ad]: (4, ('Tue Oct 14 08:59:10 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30 (eefa4b1ee19cb0960184e92bdb0afadfe041d2360042b40b2c6602181b36f7a8)\neefa4b1ee19cb0960184e92bdb0afadfe041d2360042b40b2c6602181b36f7a8\nTue Oct 14 08:59:10 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30 (eefa4b1ee19cb0960184e92bdb0afadfe041d2360042b40b2c6602181b36f7a8)\neefa4b1ee19cb0960184e92bdb0afadfe041d2360042b40b2c6602181b36f7a8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.580 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[321c6f7c-21c6-47cb-bf3b-a100cdb721c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.581 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb65b17d8-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:10 np0005486808 nova_compute[259627]: 2025-10-14 08:59:10.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:10 np0005486808 kernel: tapb65b17d8-20: left promiscuous mode
Oct 14 04:59:10 np0005486808 nova_compute[259627]: 2025-10-14 08:59:10.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.617 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c5cc2ebc-5cf2-426a-81e6-253e2a1306a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.634 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[240dfe69-7471-4418-9a62-cf7b19a0563a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.635 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ff1b2d38-017b-46a0-bafe-52612be1b306]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.650 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fda534e3-0578-407c-b434-5d4a789ae11d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 612383, 'reachable_time': 27790, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296305, 'error': None, 'target': 'ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:10 np0005486808 systemd[1]: run-netns-ovnmeta\x2db65b17d8\x2d22d8\x2d41b1\x2daa72\x2dfd93aefdff30.mount: Deactivated successfully.
Oct 14 04:59:10 np0005486808 kernel: tapbcdd5079-ef: entered promiscuous mode
Oct 14 04:59:10 np0005486808 kernel: tapbcdd5079-ef (unregistering): left promiscuous mode
Oct 14 04:59:10 np0005486808 NetworkManager[44885]: <info>  [1760432350.6584] manager: (tapbcdd5079-ef): new Tun device (/org/freedesktop/NetworkManager/Devices/118)
Oct 14 04:59:10 np0005486808 nova_compute[259627]: 2025-10-14 08:59:10.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.655 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b65b17d8-22d8-41b1-aa72-fd93aefdff30 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 04:59:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.655 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[8acc7d0f-60e3-4ccf-9bf5-d87d04ee3ffd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.656 162547 INFO neutron.agent.ovn.metadata.agent [-] Port fa5f1925-a535-45ee-b96e-f79c725d7960 in datapath 7b0bcfe7-c41f-42d1-9739-3dab148a181f unbound from our chassis#033[00m
Oct 14 04:59:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.657 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7b0bcfe7-c41f-42d1-9739-3dab148a181f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 04:59:10 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:10Z|00224|binding|INFO|Claiming lport bcdd5079-efdb-47f7-99b0-21394b1d16e2 for this chassis.
Oct 14 04:59:10 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:10Z|00225|binding|INFO|bcdd5079-efdb-47f7-99b0-21394b1d16e2: Claiming fa:16:3e:da:fb:42 10.100.0.3
Oct 14 04:59:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.659 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d73894e2-b20e-4933-9be7-3237dd8d5f00]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.660 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f namespace which is not needed anymore#033[00m
Oct 14 04:59:10 np0005486808 nova_compute[259627]: 2025-10-14 08:59:10.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:10 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:10Z|00226|binding|INFO|Setting lport bcdd5079-efdb-47f7-99b0-21394b1d16e2 ovn-installed in OVS
Oct 14 04:59:10 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:10Z|00227|if_status|INFO|Dropped 2 log messages in last 219 seconds (most recently, 219 seconds ago) due to excessive rate
Oct 14 04:59:10 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:10Z|00228|if_status|INFO|Not setting lport bcdd5079-efdb-47f7-99b0-21394b1d16e2 down as sb is readonly
Oct 14 04:59:10 np0005486808 podman[296284]: 2025-10-14 08:59:10.716255068 +0000 UTC m=+0.106038665 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 14 04:59:10 np0005486808 podman[296285]: 2025-10-14 08:59:10.76396304 +0000 UTC m=+0.148585390 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 04:59:10 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:10Z|00229|binding|INFO|Releasing lport bcdd5079-efdb-47f7-99b0-21394b1d16e2 from this chassis (sb_readonly=0)
Oct 14 04:59:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.777 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:fb:42 10.100.0.3'], port_security=['fa:16:3e:da:fb:42 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '3f3d9640-8200-45d8-ac25-bbc5d016d49f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d87d2d744db48dc8b32bb4bf6847fce', 'neutron:revision_number': '4', 'neutron:security_group_ids': '45598031-691e-407d-b099-ed9702c4e6f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=53047605-2412-4fda-a69f-45911eab5576, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=bcdd5079-efdb-47f7-99b0-21394b1d16e2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:59:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.783 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:fb:42 10.100.0.3'], port_security=['fa:16:3e:da:fb:42 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '3f3d9640-8200-45d8-ac25-bbc5d016d49f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d87d2d744db48dc8b32bb4bf6847fce', 'neutron:revision_number': '4', 'neutron:security_group_ids': '45598031-691e-407d-b099-ed9702c4e6f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=53047605-2412-4fda-a69f-45911eab5576, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=bcdd5079-efdb-47f7-99b0-21394b1d16e2) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:59:10 np0005486808 nova_compute[259627]: 2025-10-14 08:59:10.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:10 np0005486808 neutron-haproxy-ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f[296003]: [NOTICE]   (296014) : haproxy version is 2.8.14-c23fe91
Oct 14 04:59:10 np0005486808 neutron-haproxy-ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f[296003]: [NOTICE]   (296014) : path to executable is /usr/sbin/haproxy
Oct 14 04:59:10 np0005486808 neutron-haproxy-ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f[296003]: [WARNING]  (296014) : Exiting Master process...
Oct 14 04:59:10 np0005486808 neutron-haproxy-ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f[296003]: [ALERT]    (296014) : Current worker (296017) exited with code 143 (Terminated)
Oct 14 04:59:10 np0005486808 neutron-haproxy-ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f[296003]: [WARNING]  (296014) : All workers exited. Exiting... (0)
Oct 14 04:59:10 np0005486808 systemd[1]: libpod-2cc860753fb25594021f0f771e8c4dfbb790500328fbb70bf769e6bd3ed172aa.scope: Deactivated successfully.
Oct 14 04:59:10 np0005486808 podman[296357]: 2025-10-14 08:59:10.854308928 +0000 UTC m=+0.049577868 container died 2cc860753fb25594021f0f771e8c4dfbb790500328fbb70bf769e6bd3ed172aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:59:10 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2cc860753fb25594021f0f771e8c4dfbb790500328fbb70bf769e6bd3ed172aa-userdata-shm.mount: Deactivated successfully.
Oct 14 04:59:10 np0005486808 systemd[1]: var-lib-containers-storage-overlay-8ee5fee482ca97cde7b330eef7b447de260a6448dd9b51c26764ac8cff20927e-merged.mount: Deactivated successfully.
Oct 14 04:59:10 np0005486808 podman[296357]: 2025-10-14 08:59:10.892445455 +0000 UTC m=+0.087714395 container cleanup 2cc860753fb25594021f0f771e8c4dfbb790500328fbb70bf769e6bd3ed172aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 04:59:10 np0005486808 systemd[1]: libpod-conmon-2cc860753fb25594021f0f771e8c4dfbb790500328fbb70bf769e6bd3ed172aa.scope: Deactivated successfully.
Oct 14 04:59:10 np0005486808 podman[296384]: 2025-10-14 08:59:10.956619671 +0000 UTC m=+0.041808798 container remove 2cc860753fb25594021f0f771e8c4dfbb790500328fbb70bf769e6bd3ed172aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 04:59:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.964 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1488e198-503a-4a80-a9d6-cd9167eadccd]: (4, ('Tue Oct 14 08:59:10 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f (2cc860753fb25594021f0f771e8c4dfbb790500328fbb70bf769e6bd3ed172aa)\n2cc860753fb25594021f0f771e8c4dfbb790500328fbb70bf769e6bd3ed172aa\nTue Oct 14 08:59:10 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f (2cc860753fb25594021f0f771e8c4dfbb790500328fbb70bf769e6bd3ed172aa)\n2cc860753fb25594021f0f771e8c4dfbb790500328fbb70bf769e6bd3ed172aa\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.967 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[66b8c958-3027-46cf-9a76-fd64f4a49f2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.968 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b0bcfe7-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:10 np0005486808 nova_compute[259627]: 2025-10-14 08:59:10.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:10 np0005486808 kernel: tap7b0bcfe7-c0: left promiscuous mode
Oct 14 04:59:10 np0005486808 nova_compute[259627]: 2025-10-14 08:59:10.983 2 INFO nova.virt.libvirt.driver [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Deleting instance files /var/lib/nova/instances/654413e6-01cd-4e54-a271-6b515a8561e6_del#033[00m
Oct 14 04:59:10 np0005486808 nova_compute[259627]: 2025-10-14 08:59:10.984 2 INFO nova.virt.libvirt.driver [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Deletion of /var/lib/nova/instances/654413e6-01cd-4e54-a271-6b515a8561e6_del complete#033[00m
Oct 14 04:59:10 np0005486808 nova_compute[259627]: 2025-10-14 08:59:10.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:10.993 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[211518f3-7f03-4421-be0c-34024eaad328]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.020 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[77e38c97-5ba2-4c36-b3e4-577daa888dd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.021 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bc32272b-74a6-4222-85e5-7b3378c073ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:11 np0005486808 nova_compute[259627]: 2025-10-14 08:59:11.029 2 INFO nova.compute.manager [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Took 0.89 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 04:59:11 np0005486808 nova_compute[259627]: 2025-10-14 08:59:11.030 2 DEBUG oslo.service.loopingcall [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 04:59:11 np0005486808 nova_compute[259627]: 2025-10-14 08:59:11.030 2 DEBUG nova.compute.manager [-] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 04:59:11 np0005486808 nova_compute[259627]: 2025-10-14 08:59:11.030 2 DEBUG nova.network.neutron [-] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 04:59:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.035 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a8bdfb47-15ab-44f6-a7fe-e5aa03165f6a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 612503, 'reachable_time': 29925, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296403, 'error': None, 'target': 'ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.036 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7b0bcfe7-c41f-42d1-9739-3dab148a181f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 04:59:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.036 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[72ea90f1-71ec-4b3c-bc23-f87cc5c2bcd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.037 162547 INFO neutron.agent.ovn.metadata.agent [-] Port bcdd5079-efdb-47f7-99b0-21394b1d16e2 in datapath 2322cf7a-0090-40fa-a558-42d84cc6fc2a unbound from our chassis#033[00m
Oct 14 04:59:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.038 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2322cf7a-0090-40fa-a558-42d84cc6fc2a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 04:59:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.038 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[73ebc064-a313-4f6c-a167-db4a6a33d2ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.038 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a namespace which is not needed anymore#033[00m
Oct 14 04:59:11 np0005486808 nova_compute[259627]: 2025-10-14 08:59:11.065 2 DEBUG nova.compute.manager [req-2cf86504-33d9-413e-b0e9-22159e84f00a req-067a612e-c852-43f6-97e4-8dd2a68e4554 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Received event network-vif-unplugged-fa5f1925-a535-45ee-b96e-f79c725d7960 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:59:11 np0005486808 nova_compute[259627]: 2025-10-14 08:59:11.066 2 DEBUG oslo_concurrency.lockutils [req-2cf86504-33d9-413e-b0e9-22159e84f00a req-067a612e-c852-43f6-97e4-8dd2a68e4554 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:11 np0005486808 nova_compute[259627]: 2025-10-14 08:59:11.066 2 DEBUG oslo_concurrency.lockutils [req-2cf86504-33d9-413e-b0e9-22159e84f00a req-067a612e-c852-43f6-97e4-8dd2a68e4554 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:11 np0005486808 nova_compute[259627]: 2025-10-14 08:59:11.066 2 DEBUG oslo_concurrency.lockutils [req-2cf86504-33d9-413e-b0e9-22159e84f00a req-067a612e-c852-43f6-97e4-8dd2a68e4554 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:11 np0005486808 nova_compute[259627]: 2025-10-14 08:59:11.067 2 DEBUG nova.compute.manager [req-2cf86504-33d9-413e-b0e9-22159e84f00a req-067a612e-c852-43f6-97e4-8dd2a68e4554 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] No waiting events found dispatching network-vif-unplugged-fa5f1925-a535-45ee-b96e-f79c725d7960 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:59:11 np0005486808 nova_compute[259627]: 2025-10-14 08:59:11.067 2 DEBUG nova.compute.manager [req-2cf86504-33d9-413e-b0e9-22159e84f00a req-067a612e-c852-43f6-97e4-8dd2a68e4554 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Received event network-vif-unplugged-fa5f1925-a535-45ee-b96e-f79c725d7960 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 04:59:11 np0005486808 nova_compute[259627]: 2025-10-14 08:59:11.121 2 INFO nova.virt.libvirt.driver [None req-86cf384c-5445-45f4-946d-8333c57c1990 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Instance shutdown successfully after 13 seconds.#033[00m
Oct 14 04:59:11 np0005486808 nova_compute[259627]: 2025-10-14 08:59:11.125 2 INFO nova.virt.libvirt.driver [-] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Instance destroyed successfully.#033[00m
Oct 14 04:59:11 np0005486808 nova_compute[259627]: 2025-10-14 08:59:11.126 2 DEBUG nova.objects.instance [None req-86cf384c-5445-45f4-946d-8333c57c1990 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lazy-loading 'numa_topology' on Instance uuid 3f3d9640-8200-45d8-ac25-bbc5d016d49f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:59:11 np0005486808 nova_compute[259627]: 2025-10-14 08:59:11.139 2 DEBUG nova.compute.manager [None req-86cf384c-5445-45f4-946d-8333c57c1990 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:11 np0005486808 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[293670]: [NOTICE]   (293674) : haproxy version is 2.8.14-c23fe91
Oct 14 04:59:11 np0005486808 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[293670]: [NOTICE]   (293674) : path to executable is /usr/sbin/haproxy
Oct 14 04:59:11 np0005486808 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[293670]: [WARNING]  (293674) : Exiting Master process...
Oct 14 04:59:11 np0005486808 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[293670]: [WARNING]  (293674) : Exiting Master process...
Oct 14 04:59:11 np0005486808 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[293670]: [ALERT]    (293674) : Current worker (293676) exited with code 143 (Terminated)
Oct 14 04:59:11 np0005486808 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[293670]: [WARNING]  (293674) : All workers exited. Exiting... (0)
Oct 14 04:59:11 np0005486808 systemd[1]: libpod-7e46f18b81f63e244f31a3118e4318c7bff3c71cc70198b8fbea4950265415bb.scope: Deactivated successfully.
Oct 14 04:59:11 np0005486808 podman[296421]: 2025-10-14 08:59:11.18179827 +0000 UTC m=+0.063390867 container died 7e46f18b81f63e244f31a3118e4318c7bff3c71cc70198b8fbea4950265415bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 04:59:11 np0005486808 nova_compute[259627]: 2025-10-14 08:59:11.181 2 DEBUG oslo_concurrency.lockutils [None req-86cf384c-5445-45f4-946d-8333c57c1990 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.472s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:11 np0005486808 podman[296421]: 2025-10-14 08:59:11.221239229 +0000 UTC m=+0.102831776 container cleanup 7e46f18b81f63e244f31a3118e4318c7bff3c71cc70198b8fbea4950265415bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 04:59:11 np0005486808 systemd[1]: libpod-conmon-7e46f18b81f63e244f31a3118e4318c7bff3c71cc70198b8fbea4950265415bb.scope: Deactivated successfully.
Oct 14 04:59:11 np0005486808 podman[296446]: 2025-10-14 08:59:11.287197408 +0000 UTC m=+0.043867268 container remove 7e46f18b81f63e244f31a3118e4318c7bff3c71cc70198b8fbea4950265415bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3)
Oct 14 04:59:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.292 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2835110b-0560-48fd-8a13-6d9bb61eef8b]: (4, ('Tue Oct 14 08:59:11 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a (7e46f18b81f63e244f31a3118e4318c7bff3c71cc70198b8fbea4950265415bb)\n7e46f18b81f63e244f31a3118e4318c7bff3c71cc70198b8fbea4950265415bb\nTue Oct 14 08:59:11 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a (7e46f18b81f63e244f31a3118e4318c7bff3c71cc70198b8fbea4950265415bb)\n7e46f18b81f63e244f31a3118e4318c7bff3c71cc70198b8fbea4950265415bb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.294 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[34b72f21-d6a6-4444-bc69-7677058f51f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.294 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2322cf7a-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:11 np0005486808 nova_compute[259627]: 2025-10-14 08:59:11.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:11 np0005486808 kernel: tap2322cf7a-00: left promiscuous mode
Oct 14 04:59:11 np0005486808 nova_compute[259627]: 2025-10-14 08:59:11.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.317 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[189c130d-a14a-46ba-a5cb-49a2e581d181]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.341 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8af72eac-5bd1-47ce-9d27-16da14685cd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.342 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d60c3bcf-dd46-4ebc-82ee-9403fcc96a9f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.360 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[68c9eec7-480a-4e5e-8e72-2c63d4f99542]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611218, 'reachable_time': 18994, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296464, 'error': None, 'target': 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.362 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 04:59:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.362 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[d7b5faf9-200a-44c8-aeac-62c42047cbed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 04:59:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.362 162547 INFO neutron.agent.ovn.metadata.agent [-] Port bcdd5079-efdb-47f7-99b0-21394b1d16e2 in datapath 2322cf7a-0090-40fa-a558-42d84cc6fc2a unbound from our chassis
Oct 14 04:59:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.363 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2322cf7a-0090-40fa-a558-42d84cc6fc2a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 04:59:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.364 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9f836f2a-4055-460a-b810-dd1187b82f6d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 04:59:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.364 162547 INFO neutron.agent.ovn.metadata.agent [-] Port bcdd5079-efdb-47f7-99b0-21394b1d16e2 in datapath 2322cf7a-0090-40fa-a558-42d84cc6fc2a unbound from our chassis
Oct 14 04:59:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.365 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2322cf7a-0090-40fa-a558-42d84cc6fc2a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 04:59:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:11.366 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1682d087-7129-4a5e-8b64-53f55e24a7cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 04:59:11 np0005486808 systemd[1]: run-netns-ovnmeta\x2d7b0bcfe7\x2dc41f\x2d42d1\x2d9739\x2d3dab148a181f.mount: Deactivated successfully.
Oct 14 04:59:11 np0005486808 systemd[1]: var-lib-containers-storage-overlay-7281a4ba77e5cabb29255d89d41089125e8d4a234511b13cf81fb479f2c3742c-merged.mount: Deactivated successfully.
Oct 14 04:59:11 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7e46f18b81f63e244f31a3118e4318c7bff3c71cc70198b8fbea4950265415bb-userdata-shm.mount: Deactivated successfully.
Oct 14 04:59:11 np0005486808 systemd[1]: run-netns-ovnmeta\x2d2322cf7a\x2d0090\x2d40fa\x2da558\x2d42d84cc6fc2a.mount: Deactivated successfully.
Oct 14 04:59:11 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1231: 305 pgs: 305 active+clean; 306 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 12 MiB/s rd, 5.8 MiB/s wr, 595 op/s
Oct 14 04:59:11 np0005486808 nova_compute[259627]: 2025-10-14 08:59:11.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:59:12 np0005486808 nova_compute[259627]: 2025-10-14 08:59:12.200 2 DEBUG nova.compute.manager [req-058fbe61-b25c-44de-96c3-894274dc7499 req-09f8642d-cae3-43f1-8640-f3b8195ade82 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Received event network-vif-unplugged-902a062a-858b-4495-936b-47a675567467 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 04:59:12 np0005486808 nova_compute[259627]: 2025-10-14 08:59:12.200 2 DEBUG oslo_concurrency.lockutils [req-058fbe61-b25c-44de-96c3-894274dc7499 req-09f8642d-cae3-43f1-8640-f3b8195ade82 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:59:12 np0005486808 nova_compute[259627]: 2025-10-14 08:59:12.203 2 DEBUG oslo_concurrency.lockutils [req-058fbe61-b25c-44de-96c3-894274dc7499 req-09f8642d-cae3-43f1-8640-f3b8195ade82 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:59:12 np0005486808 nova_compute[259627]: 2025-10-14 08:59:12.204 2 DEBUG oslo_concurrency.lockutils [req-058fbe61-b25c-44de-96c3-894274dc7499 req-09f8642d-cae3-43f1-8640-f3b8195ade82 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:59:12 np0005486808 nova_compute[259627]: 2025-10-14 08:59:12.204 2 DEBUG nova.compute.manager [req-058fbe61-b25c-44de-96c3-894274dc7499 req-09f8642d-cae3-43f1-8640-f3b8195ade82 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] No waiting events found dispatching network-vif-unplugged-902a062a-858b-4495-936b-47a675567467 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 04:59:12 np0005486808 nova_compute[259627]: 2025-10-14 08:59:12.205 2 DEBUG nova.compute.manager [req-058fbe61-b25c-44de-96c3-894274dc7499 req-09f8642d-cae3-43f1-8640-f3b8195ade82 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Received event network-vif-unplugged-902a062a-858b-4495-936b-47a675567467 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 04:59:12 np0005486808 nova_compute[259627]: 2025-10-14 08:59:12.205 2 DEBUG nova.compute.manager [req-058fbe61-b25c-44de-96c3-894274dc7499 req-09f8642d-cae3-43f1-8640-f3b8195ade82 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Received event network-vif-plugged-902a062a-858b-4495-936b-47a675567467 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 04:59:12 np0005486808 nova_compute[259627]: 2025-10-14 08:59:12.206 2 DEBUG oslo_concurrency.lockutils [req-058fbe61-b25c-44de-96c3-894274dc7499 req-09f8642d-cae3-43f1-8640-f3b8195ade82 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:59:12 np0005486808 nova_compute[259627]: 2025-10-14 08:59:12.207 2 DEBUG oslo_concurrency.lockutils [req-058fbe61-b25c-44de-96c3-894274dc7499 req-09f8642d-cae3-43f1-8640-f3b8195ade82 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:59:12 np0005486808 nova_compute[259627]: 2025-10-14 08:59:12.207 2 DEBUG oslo_concurrency.lockutils [req-058fbe61-b25c-44de-96c3-894274dc7499 req-09f8642d-cae3-43f1-8640-f3b8195ade82 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:59:12 np0005486808 nova_compute[259627]: 2025-10-14 08:59:12.208 2 DEBUG nova.compute.manager [req-058fbe61-b25c-44de-96c3-894274dc7499 req-09f8642d-cae3-43f1-8640-f3b8195ade82 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] No waiting events found dispatching network-vif-plugged-902a062a-858b-4495-936b-47a675567467 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 04:59:12 np0005486808 nova_compute[259627]: 2025-10-14 08:59:12.209 2 WARNING nova.compute.manager [req-058fbe61-b25c-44de-96c3-894274dc7499 req-09f8642d-cae3-43f1-8640-f3b8195ade82 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Received unexpected event network-vif-plugged-902a062a-858b-4495-936b-47a675567467 for instance with vm_state active and task_state deleting.
Oct 14 04:59:12 np0005486808 nova_compute[259627]: 2025-10-14 08:59:12.210 2 DEBUG nova.compute.manager [req-058fbe61-b25c-44de-96c3-894274dc7499 req-09f8642d-cae3-43f1-8640-f3b8195ade82 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Received event network-vif-unplugged-bcdd5079-efdb-47f7-99b0-21394b1d16e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 04:59:12 np0005486808 nova_compute[259627]: 2025-10-14 08:59:12.210 2 DEBUG oslo_concurrency.lockutils [req-058fbe61-b25c-44de-96c3-894274dc7499 req-09f8642d-cae3-43f1-8640-f3b8195ade82 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:59:12 np0005486808 nova_compute[259627]: 2025-10-14 08:59:12.211 2 DEBUG oslo_concurrency.lockutils [req-058fbe61-b25c-44de-96c3-894274dc7499 req-09f8642d-cae3-43f1-8640-f3b8195ade82 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:59:12 np0005486808 nova_compute[259627]: 2025-10-14 08:59:12.211 2 DEBUG oslo_concurrency.lockutils [req-058fbe61-b25c-44de-96c3-894274dc7499 req-09f8642d-cae3-43f1-8640-f3b8195ade82 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:59:12 np0005486808 nova_compute[259627]: 2025-10-14 08:59:12.212 2 DEBUG nova.compute.manager [req-058fbe61-b25c-44de-96c3-894274dc7499 req-09f8642d-cae3-43f1-8640-f3b8195ade82 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] No waiting events found dispatching network-vif-unplugged-bcdd5079-efdb-47f7-99b0-21394b1d16e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 04:59:12 np0005486808 nova_compute[259627]: 2025-10-14 08:59:12.212 2 WARNING nova.compute.manager [req-058fbe61-b25c-44de-96c3-894274dc7499 req-09f8642d-cae3-43f1-8640-f3b8195ade82 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Received unexpected event network-vif-unplugged-bcdd5079-efdb-47f7-99b0-21394b1d16e2 for instance with vm_state stopped and task_state None.
Oct 14 04:59:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 04:59:12 np0005486808 nova_compute[259627]: 2025-10-14 08:59:12.813 2 DEBUG nova.compute.manager [None req-2377b0ed-12e1-483e-bd83-fdcfa046722c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 04:59:12 np0005486808 nova_compute[259627]: 2025-10-14 08:59:12.865 2 INFO nova.compute.manager [None req-2377b0ed-12e1-483e-bd83-fdcfa046722c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] instance snapshotting
Oct 14 04:59:12 np0005486808 nova_compute[259627]: 2025-10-14 08:59:12.866 2 WARNING nova.compute.manager [None req-2377b0ed-12e1-483e-bd83-fdcfa046722c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] trying to snapshot a non-running instance: (state: 4 expected: 1)
Oct 14 04:59:13 np0005486808 nova_compute[259627]: 2025-10-14 08:59:13.107 2 DEBUG nova.network.neutron [-] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 04:59:13 np0005486808 nova_compute[259627]: 2025-10-14 08:59:13.126 2 INFO nova.compute.manager [-] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Took 2.10 seconds to deallocate network for instance.
Oct 14 04:59:13 np0005486808 nova_compute[259627]: 2025-10-14 08:59:13.171 2 DEBUG oslo_concurrency.lockutils [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:59:13 np0005486808 nova_compute[259627]: 2025-10-14 08:59:13.172 2 DEBUG oslo_concurrency.lockutils [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:59:13 np0005486808 nova_compute[259627]: 2025-10-14 08:59:13.187 2 INFO nova.virt.libvirt.driver [None req-2377b0ed-12e1-483e-bd83-fdcfa046722c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Beginning cold snapshot process
Oct 14 04:59:13 np0005486808 nova_compute[259627]: 2025-10-14 08:59:13.380 2 DEBUG nova.virt.libvirt.imagebackend [None req-2377b0ed-12e1-483e-bd83-fdcfa046722c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] No parent info for a4789543-f429-47d7-9f79-80a9d90a59f9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 14 04:59:13 np0005486808 nova_compute[259627]: 2025-10-14 08:59:13.401 2 DEBUG oslo_concurrency.processutils [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:59:13 np0005486808 nova_compute[259627]: 2025-10-14 08:59:13.437 2 DEBUG nova.compute.manager [req-d6b1cfdc-2a0a-463f-9f06-84622c3db46c req-9dc45364-077f-4312-a845-93844797c240 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Received event network-vif-plugged-fa5f1925-a535-45ee-b96e-f79c725d7960 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 04:59:13 np0005486808 nova_compute[259627]: 2025-10-14 08:59:13.437 2 DEBUG oslo_concurrency.lockutils [req-d6b1cfdc-2a0a-463f-9f06-84622c3db46c req-9dc45364-077f-4312-a845-93844797c240 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:59:13 np0005486808 nova_compute[259627]: 2025-10-14 08:59:13.438 2 DEBUG oslo_concurrency.lockutils [req-d6b1cfdc-2a0a-463f-9f06-84622c3db46c req-9dc45364-077f-4312-a845-93844797c240 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:59:13 np0005486808 nova_compute[259627]: 2025-10-14 08:59:13.438 2 DEBUG oslo_concurrency.lockutils [req-d6b1cfdc-2a0a-463f-9f06-84622c3db46c req-9dc45364-077f-4312-a845-93844797c240 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:59:13 np0005486808 nova_compute[259627]: 2025-10-14 08:59:13.438 2 DEBUG nova.compute.manager [req-d6b1cfdc-2a0a-463f-9f06-84622c3db46c req-9dc45364-077f-4312-a845-93844797c240 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] No waiting events found dispatching network-vif-plugged-fa5f1925-a535-45ee-b96e-f79c725d7960 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 04:59:13 np0005486808 nova_compute[259627]: 2025-10-14 08:59:13.438 2 WARNING nova.compute.manager [req-d6b1cfdc-2a0a-463f-9f06-84622c3db46c req-9dc45364-077f-4312-a845-93844797c240 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Received unexpected event network-vif-plugged-fa5f1925-a535-45ee-b96e-f79c725d7960 for instance with vm_state deleted and task_state None.
Oct 14 04:59:13 np0005486808 nova_compute[259627]: 2025-10-14 08:59:13.438 2 DEBUG nova.compute.manager [req-d6b1cfdc-2a0a-463f-9f06-84622c3db46c req-9dc45364-077f-4312-a845-93844797c240 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Received event network-vif-deleted-902a062a-858b-4495-936b-47a675567467 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 04:59:13 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1232: 305 pgs: 305 active+clean; 306 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 9.9 MiB/s rd, 2.2 MiB/s wr, 451 op/s
Oct 14 04:59:13 np0005486808 nova_compute[259627]: 2025-10-14 08:59:13.757 2 DEBUG nova.storage.rbd_utils [None req-2377b0ed-12e1-483e-bd83-fdcfa046722c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] creating snapshot(3d0fd75ab46944e19a6ca6507c025d43) on rbd image(3f3d9640-8200-45d8-ac25-bbc5d016d49f_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 14 04:59:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:59:13 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1769614021' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:59:13 np0005486808 nova_compute[259627]: 2025-10-14 08:59:13.887 2 DEBUG oslo_concurrency.processutils [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:59:13 np0005486808 nova_compute[259627]: 2025-10-14 08:59:13.893 2 DEBUG nova.compute.provider_tree [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 04:59:13 np0005486808 nova_compute[259627]: 2025-10-14 08:59:13.917 2 DEBUG nova.scheduler.client.report [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 04:59:13 np0005486808 nova_compute[259627]: 2025-10-14 08:59:13.938 2 DEBUG oslo_concurrency.lockutils [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:59:13 np0005486808 nova_compute[259627]: 2025-10-14 08:59:13.961 2 INFO nova.scheduler.client.report [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Deleted allocations for instance 654413e6-01cd-4e54-a271-6b515a8561e6
Oct 14 04:59:14 np0005486808 nova_compute[259627]: 2025-10-14 08:59:14.016 2 DEBUG oslo_concurrency.lockutils [None req-037cd8ec-61e1-4bb4-a7a2-15a770219af1 5aacb60ad29c418c9161e71bb72da036 3566f35c659a45bd9b9bbddf6552ed43 - - default default] Lock "654413e6-01cd-4e54-a271-6b515a8561e6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.884s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:59:14 np0005486808 nova_compute[259627]: 2025-10-14 08:59:14.321 2 DEBUG nova.compute.manager [req-349abad4-c214-4742-8ead-fe2929230585 req-c2b054a4-aa8d-4ee5-bc64-ad7a9c12b4c4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Received event network-vif-plugged-bcdd5079-efdb-47f7-99b0-21394b1d16e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 04:59:14 np0005486808 nova_compute[259627]: 2025-10-14 08:59:14.321 2 DEBUG oslo_concurrency.lockutils [req-349abad4-c214-4742-8ead-fe2929230585 req-c2b054a4-aa8d-4ee5-bc64-ad7a9c12b4c4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:59:14 np0005486808 nova_compute[259627]: 2025-10-14 08:59:14.322 2 DEBUG oslo_concurrency.lockutils [req-349abad4-c214-4742-8ead-fe2929230585 req-c2b054a4-aa8d-4ee5-bc64-ad7a9c12b4c4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:59:14 np0005486808 nova_compute[259627]: 2025-10-14 08:59:14.322 2 DEBUG oslo_concurrency.lockutils [req-349abad4-c214-4742-8ead-fe2929230585 req-c2b054a4-aa8d-4ee5-bc64-ad7a9c12b4c4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:59:14 np0005486808 nova_compute[259627]: 2025-10-14 08:59:14.322 2 DEBUG nova.compute.manager [req-349abad4-c214-4742-8ead-fe2929230585 req-c2b054a4-aa8d-4ee5-bc64-ad7a9c12b4c4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] No waiting events found dispatching network-vif-plugged-bcdd5079-efdb-47f7-99b0-21394b1d16e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 04:59:14 np0005486808 nova_compute[259627]: 2025-10-14 08:59:14.323 2 WARNING nova.compute.manager [req-349abad4-c214-4742-8ead-fe2929230585 req-c2b054a4-aa8d-4ee5-bc64-ad7a9c12b4c4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Received unexpected event network-vif-plugged-bcdd5079-efdb-47f7-99b0-21394b1d16e2 for instance with vm_state stopped and task_state image_uploading.
Oct 14 04:59:14 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Oct 14 04:59:14 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Oct 14 04:59:14 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Oct 14 04:59:14 np0005486808 nova_compute[259627]: 2025-10-14 08:59:14.817 2 DEBUG oslo_concurrency.lockutils [None req-05b2a378-dafa-4172-b8f9-ea0700bd5367 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:59:14 np0005486808 nova_compute[259627]: 2025-10-14 08:59:14.818 2 DEBUG oslo_concurrency.lockutils [None req-05b2a378-dafa-4172-b8f9-ea0700bd5367 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:59:14 np0005486808 nova_compute[259627]: 2025-10-14 08:59:14.819 2 DEBUG nova.compute.manager [None req-05b2a378-dafa-4172-b8f9-ea0700bd5367 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 04:59:14 np0005486808 nova_compute[259627]: 2025-10-14 08:59:14.825 2 DEBUG nova.compute.manager [None req-05b2a378-dafa-4172-b8f9-ea0700bd5367 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 14 04:59:14 np0005486808 nova_compute[259627]: 2025-10-14 08:59:14.826 2 DEBUG nova.objects.instance [None req-05b2a378-dafa-4172-b8f9-ea0700bd5367 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lazy-loading 'flavor' on Instance uuid 27fa4cf8-c08c-46a2-af8f-17c8980a2317 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 04:59:14 np0005486808 nova_compute[259627]: 2025-10-14 08:59:14.834 2 DEBUG nova.storage.rbd_utils [None req-2377b0ed-12e1-483e-bd83-fdcfa046722c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] cloning vms/3f3d9640-8200-45d8-ac25-bbc5d016d49f_disk@3d0fd75ab46944e19a6ca6507c025d43 to images/0c9ef4ae-621d-4e37-afca-e810a018589c clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 14 04:59:14 np0005486808 nova_compute[259627]: 2025-10-14 08:59:14.895 2 DEBUG nova.virt.libvirt.driver [None req-05b2a378-dafa-4172-b8f9-ea0700bd5367 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 14 04:59:14 np0005486808 nova_compute[259627]: 2025-10-14 08:59:14.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 04:59:15 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:15Z|00230|binding|INFO|Releasing lport 650f034e-5333-49ba-9907-b0409944aee7 from this chassis (sb_readonly=0)
Oct 14 04:59:15 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:15Z|00231|binding|INFO|Removing iface tapbcdd5079-ef ovn-installed in OVS
Oct 14 04:59:15 np0005486808 nova_compute[259627]: 2025-10-14 08:59:15.011 2 DEBUG nova.storage.rbd_utils [None req-2377b0ed-12e1-483e-bd83-fdcfa046722c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] flattening images/0c9ef4ae-621d-4e37-afca-e810a018589c flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 14 04:59:15 np0005486808 nova_compute[259627]: 2025-10-14 08:59:15.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:59:15 np0005486808 nova_compute[259627]: 2025-10-14 08:59:15.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:59:15 np0005486808 nova_compute[259627]: 2025-10-14 08:59:15.297 2 DEBUG nova.storage.rbd_utils [None req-2377b0ed-12e1-483e-bd83-fdcfa046722c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] removing snapshot(3d0fd75ab46944e19a6ca6507c025d43) on rbd image(3f3d9640-8200-45d8-ac25-bbc5d016d49f_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 14 04:59:15 np0005486808 nova_compute[259627]: 2025-10-14 08:59:15.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:59:15 np0005486808 nova_compute[259627]: 2025-10-14 08:59:15.515 2 DEBUG nova.compute.manager [req-6e3464f6-6dfd-4b21-8b00-a3fb9f253dce req-26dfb640-8060-47e4-ad2a-dde3d6b7f5ef 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Received event network-vif-deleted-fa5f1925-a535-45ee-b96e-f79c725d7960 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 04:59:15 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1234: 305 pgs: 305 active+clean; 348 MiB data, 456 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 8.4 MiB/s wr, 371 op/s
Oct 14 04:59:15 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Oct 14 04:59:15 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Oct 14 04:59:15 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Oct 14 04:59:15 np0005486808 nova_compute[259627]: 2025-10-14 08:59:15.854 2 DEBUG nova.storage.rbd_utils [None req-2377b0ed-12e1-483e-bd83-fdcfa046722c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] creating snapshot(snap) on rbd image(0c9ef4ae-621d-4e37-afca-e810a018589c) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 14 04:59:15 np0005486808 nova_compute[259627]: 2025-10-14 08:59:15.972 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:59:15 np0005486808 nova_compute[259627]: 2025-10-14 08:59:15.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:59:15 np0005486808 nova_compute[259627]: 2025-10-14 08:59:15.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:59:15 np0005486808 nova_compute[259627]: 2025-10-14 08:59:15.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 04:59:16 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Oct 14 04:59:16 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Oct 14 04:59:16 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Oct 14 04:59:16 np0005486808 nova_compute[259627]: 2025-10-14 08:59:16.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:16 np0005486808 nova_compute[259627]: 2025-10-14 08:59:16.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:59:17 np0005486808 kernel: tap0b16cd6a-fe (unregistering): left promiscuous mode
Oct 14 04:59:17 np0005486808 NetworkManager[44885]: <info>  [1760432357.1737] device (tap0b16cd6a-fe): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 04:59:17 np0005486808 nova_compute[259627]: 2025-10-14 08:59:17.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:17 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:17Z|00232|binding|INFO|Releasing lport 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 from this chassis (sb_readonly=0)
Oct 14 04:59:17 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:17Z|00233|binding|INFO|Setting lport 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 down in Southbound
Oct 14 04:59:17 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:17Z|00234|binding|INFO|Removing iface tap0b16cd6a-fe ovn-installed in OVS
Oct 14 04:59:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.195 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:07:ec 10.100.0.11'], port_security=['fa:16:3e:c3:07:ec 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '27fa4cf8-c08c-46a2-af8f-17c8980a2317', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3d3a647aa3914555a8a2c5fd6fe7a543', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bf06e246-cec9-4c0e-acc7-df3f274be7f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dfac5894-b2bd-47a3-835a-de3d7c1134b1, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=0b16cd6a-fe42-4a54-8bbe-810915fcaa93) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:59:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.197 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 in datapath c4d50d6a-6686-4b50-b1e5-9f71bae17a99 unbound from our chassis#033[00m
Oct 14 04:59:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.200 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c4d50d6a-6686-4b50-b1e5-9f71bae17a99#033[00m
Oct 14 04:59:17 np0005486808 nova_compute[259627]: 2025-10-14 08:59:17.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.213 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[59652a25-5f43-493f-b2ea-2ebfa3c48c85]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:17 np0005486808 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct 14 04:59:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.248 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5083a639-0194-4340-89c7-b6d300ac4d35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.256 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8bfd525b-9bb8-44dd-8643-5edd2e8528af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:17 np0005486808 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Oct 14 04:59:17 np0005486808 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000001b.scope: Consumed 14.253s CPU time.
Oct 14 04:59:17 np0005486808 systemd-machined[214636]: Machine qemu-29-instance-0000001b terminated.
Oct 14 04:59:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.285 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[11e0d4e8-e457-4d2f-b841-393ae9591ea7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:17 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:17Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:28:d2:4a 10.100.0.13
Oct 14 04:59:17 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:17Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:28:d2:4a 10.100.0.13
Oct 14 04:59:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.307 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3183170d-eb3f-42b1-b37c-bb3724f1916a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4d50d6a-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:91:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 10, 'rx_bytes': 832, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 10, 'rx_bytes': 832, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611643, 'reachable_time': 22346, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296646, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.327 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dcb34aa8-4fd0-47ec-93c6-bf594d67e99a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc4d50d6a-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611657, 'tstamp': 611657}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296647, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc4d50d6a-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611661, 'tstamp': 611661}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296647, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.328 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4d50d6a-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:17 np0005486808 nova_compute[259627]: 2025-10-14 08:59:17.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:17 np0005486808 nova_compute[259627]: 2025-10-14 08:59:17.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.339 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc4d50d6a-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.339 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:59:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.339 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc4d50d6a-60, col_values=(('external_ids', {'iface-id': '650f034e-5333-49ba-9907-b0409944aee7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.339 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:59:17 np0005486808 kernel: tap0b16cd6a-fe: entered promiscuous mode
Oct 14 04:59:17 np0005486808 NetworkManager[44885]: <info>  [1760432357.4155] manager: (tap0b16cd6a-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/119)
Oct 14 04:59:17 np0005486808 kernel: tap0b16cd6a-fe (unregistering): left promiscuous mode
Oct 14 04:59:17 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:17Z|00235|binding|INFO|Claiming lport 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 for this chassis.
Oct 14 04:59:17 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:17Z|00236|binding|INFO|0b16cd6a-fe42-4a54-8bbe-810915fcaa93: Claiming fa:16:3e:c3:07:ec 10.100.0.11
Oct 14 04:59:17 np0005486808 nova_compute[259627]: 2025-10-14 08:59:17.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.432 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:07:ec 10.100.0.11'], port_security=['fa:16:3e:c3:07:ec 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '27fa4cf8-c08c-46a2-af8f-17c8980a2317', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3d3a647aa3914555a8a2c5fd6fe7a543', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bf06e246-cec9-4c0e-acc7-df3f274be7f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dfac5894-b2bd-47a3-835a-de3d7c1134b1, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=0b16cd6a-fe42-4a54-8bbe-810915fcaa93) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:59:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.433 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 in datapath c4d50d6a-6686-4b50-b1e5-9f71bae17a99 bound to our chassis#033[00m
Oct 14 04:59:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.434 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c4d50d6a-6686-4b50-b1e5-9f71bae17a99#033[00m
Oct 14 04:59:17 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:17Z|00237|binding|INFO|Setting lport 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 ovn-installed in OVS
Oct 14 04:59:17 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:17Z|00238|binding|INFO|Setting lport 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 up in Southbound
Oct 14 04:59:17 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:17Z|00239|binding|INFO|Releasing lport 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 from this chassis (sb_readonly=1)
Oct 14 04:59:17 np0005486808 nova_compute[259627]: 2025-10-14 08:59:17.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:17 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:17Z|00240|binding|INFO|Removing iface tap0b16cd6a-fe ovn-installed in OVS
Oct 14 04:59:17 np0005486808 nova_compute[259627]: 2025-10-14 08:59:17.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.451 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[83dd0496-85ce-4e7c-80b0-b87ec206c367]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:17 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:17Z|00241|binding|INFO|Releasing lport 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 from this chassis (sb_readonly=0)
Oct 14 04:59:17 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:17Z|00242|binding|INFO|Setting lport 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 down in Southbound
Oct 14 04:59:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.461 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:07:ec 10.100.0.11'], port_security=['fa:16:3e:c3:07:ec 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '27fa4cf8-c08c-46a2-af8f-17c8980a2317', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3d3a647aa3914555a8a2c5fd6fe7a543', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bf06e246-cec9-4c0e-acc7-df3f274be7f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dfac5894-b2bd-47a3-835a-de3d7c1134b1, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=0b16cd6a-fe42-4a54-8bbe-810915fcaa93) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:59:17 np0005486808 nova_compute[259627]: 2025-10-14 08:59:17.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.481 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[029682cb-350c-4281-8204-fc79dca7b2f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.484 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4c26fa53-dbd2-4664-84ff-5096237f2305]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:17 np0005486808 nova_compute[259627]: 2025-10-14 08:59:17.501 2 DEBUG nova.compute.manager [req-5667cab3-adab-44a6-b01a-312c37a2130d req-1b89d1a9-e579-4754-a420-63036fa00685 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Received event network-vif-unplugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:59:17 np0005486808 nova_compute[259627]: 2025-10-14 08:59:17.501 2 DEBUG oslo_concurrency.lockutils [req-5667cab3-adab-44a6-b01a-312c37a2130d req-1b89d1a9-e579-4754-a420-63036fa00685 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:17 np0005486808 nova_compute[259627]: 2025-10-14 08:59:17.502 2 DEBUG oslo_concurrency.lockutils [req-5667cab3-adab-44a6-b01a-312c37a2130d req-1b89d1a9-e579-4754-a420-63036fa00685 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:17 np0005486808 nova_compute[259627]: 2025-10-14 08:59:17.502 2 DEBUG oslo_concurrency.lockutils [req-5667cab3-adab-44a6-b01a-312c37a2130d req-1b89d1a9-e579-4754-a420-63036fa00685 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:17 np0005486808 nova_compute[259627]: 2025-10-14 08:59:17.502 2 DEBUG nova.compute.manager [req-5667cab3-adab-44a6-b01a-312c37a2130d req-1b89d1a9-e579-4754-a420-63036fa00685 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] No waiting events found dispatching network-vif-unplugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:59:17 np0005486808 nova_compute[259627]: 2025-10-14 08:59:17.502 2 WARNING nova.compute.manager [req-5667cab3-adab-44a6-b01a-312c37a2130d req-1b89d1a9-e579-4754-a420-63036fa00685 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Received unexpected event network-vif-unplugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 for instance with vm_state active and task_state powering-off.#033[00m
Oct 14 04:59:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 04:59:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.514 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[2c07ceb7-5ae3-42b5-9527-1f71969c5a5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.544 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1b2102c0-e624-4f55-b694-980f3b12e5e4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4d50d6a-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:91:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 12, 'rx_bytes': 832, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 12, 'rx_bytes': 832, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611643, 'reachable_time': 22346, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296661, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:17 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1237: 305 pgs: 305 active+clean; 348 MiB data, 456 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 10 MiB/s wr, 294 op/s
Oct 14 04:59:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.580 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bed7a836-39af-425f-8c83-a2328089c1c5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc4d50d6a-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611657, 'tstamp': 611657}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296662, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc4d50d6a-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611661, 'tstamp': 611661}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296662, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.582 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4d50d6a-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:17 np0005486808 nova_compute[259627]: 2025-10-14 08:59:17.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:17 np0005486808 nova_compute[259627]: 2025-10-14 08:59:17.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.632 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc4d50d6a-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.633 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:59:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.635 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc4d50d6a-60, col_values=(('external_ids', {'iface-id': '650f034e-5333-49ba-9907-b0409944aee7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.636 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:59:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.639 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 in datapath c4d50d6a-6686-4b50-b1e5-9f71bae17a99 unbound from our chassis#033[00m
Oct 14 04:59:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.641 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c4d50d6a-6686-4b50-b1e5-9f71bae17a99#033[00m
Oct 14 04:59:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.656 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ed9104b2-8b8b-4bb0-a418-d253ce545431]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.690 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[02675c9c-9ed2-44cd-af65-42f5c1f1c7e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.692 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d73719ef-4eb2-49ca-90a1-0b640c5b9038]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.730 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4a85c07a-24bc-4bc8-955f-da1fdcaf5fc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.752 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0d3381c7-8c1b-4a0c-9350-35d35c269f3c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4d50d6a-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:91:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 14, 'rx_bytes': 832, 'tx_bytes': 780, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 14, 'rx_bytes': 832, 'tx_bytes': 780, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611643, 'reachable_time': 22346, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296670, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.766 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fc760652-7eb3-452a-92dc-9f2e9e9236b8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc4d50d6a-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611657, 'tstamp': 611657}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296671, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc4d50d6a-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611661, 'tstamp': 611661}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296671, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.767 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4d50d6a-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:17 np0005486808 nova_compute[259627]: 2025-10-14 08:59:17.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:17 np0005486808 nova_compute[259627]: 2025-10-14 08:59:17.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.776 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc4d50d6a-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.776 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:59:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.776 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc4d50d6a-60, col_values=(('external_ids', {'iface-id': '650f034e-5333-49ba-9907-b0409944aee7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:17.776 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:59:17 np0005486808 nova_compute[259627]: 2025-10-14 08:59:17.934 2 INFO nova.virt.libvirt.driver [None req-05b2a378-dafa-4172-b8f9-ea0700bd5367 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Instance shutdown successfully after 3 seconds.#033[00m
Oct 14 04:59:17 np0005486808 nova_compute[259627]: 2025-10-14 08:59:17.941 2 INFO nova.virt.libvirt.driver [-] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Instance destroyed successfully.#033[00m
Oct 14 04:59:17 np0005486808 nova_compute[259627]: 2025-10-14 08:59:17.942 2 DEBUG nova.objects.instance [None req-05b2a378-dafa-4172-b8f9-ea0700bd5367 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lazy-loading 'numa_topology' on Instance uuid 27fa4cf8-c08c-46a2-af8f-17c8980a2317 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:59:17 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:17Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:da:46:6c 10.100.0.14
Oct 14 04:59:17 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:17Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:da:46:6c 10.100.0.14
Oct 14 04:59:17 np0005486808 nova_compute[259627]: 2025-10-14 08:59:17.987 2 DEBUG nova.compute.manager [None req-05b2a378-dafa-4172-b8f9-ea0700bd5367 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:18 np0005486808 nova_compute[259627]: 2025-10-14 08:59:18.065 2 DEBUG oslo_concurrency.lockutils [None req-05b2a378-dafa-4172-b8f9-ea0700bd5367 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:18 np0005486808 nova_compute[259627]: 2025-10-14 08:59:18.364 2 INFO nova.virt.libvirt.driver [None req-2377b0ed-12e1-483e-bd83-fdcfa046722c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Snapshot image upload complete#033[00m
Oct 14 04:59:18 np0005486808 nova_compute[259627]: 2025-10-14 08:59:18.364 2 INFO nova.compute.manager [None req-2377b0ed-12e1-483e-bd83-fdcfa046722c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Took 5.50 seconds to snapshot the instance on the hypervisor.#033[00m
Oct 14 04:59:18 np0005486808 nova_compute[259627]: 2025-10-14 08:59:18.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:59:19 np0005486808 nova_compute[259627]: 2025-10-14 08:59:19.003 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:19 np0005486808 nova_compute[259627]: 2025-10-14 08:59:19.004 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:19 np0005486808 nova_compute[259627]: 2025-10-14 08:59:19.004 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:19 np0005486808 nova_compute[259627]: 2025-10-14 08:59:19.005 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 04:59:19 np0005486808 nova_compute[259627]: 2025-10-14 08:59:19.005 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:19 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:59:19 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4190223049' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:59:19 np0005486808 nova_compute[259627]: 2025-10-14 08:59:19.490 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:19 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 348 MiB data, 456 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 10 MiB/s wr, 294 op/s
Oct 14 04:59:19 np0005486808 nova_compute[259627]: 2025-10-14 08:59:19.577 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 04:59:19 np0005486808 nova_compute[259627]: 2025-10-14 08:59:19.577 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 04:59:19 np0005486808 nova_compute[259627]: 2025-10-14 08:59:19.581 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 04:59:19 np0005486808 nova_compute[259627]: 2025-10-14 08:59:19.581 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 04:59:19 np0005486808 nova_compute[259627]: 2025-10-14 08:59:19.584 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 04:59:19 np0005486808 nova_compute[259627]: 2025-10-14 08:59:19.584 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 04:59:19 np0005486808 nova_compute[259627]: 2025-10-14 08:59:19.587 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 04:59:19 np0005486808 nova_compute[259627]: 2025-10-14 08:59:19.588 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 04:59:19 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Oct 14 04:59:19 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Oct 14 04:59:19 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Oct 14 04:59:19 np0005486808 nova_compute[259627]: 2025-10-14 08:59:19.836 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:59:19 np0005486808 nova_compute[259627]: 2025-10-14 08:59:19.840 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3899MB free_disk=59.85548782348633GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 04:59:19 np0005486808 nova_compute[259627]: 2025-10-14 08:59:19.840 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:19 np0005486808 nova_compute[259627]: 2025-10-14 08:59:19.841 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:19 np0005486808 nova_compute[259627]: 2025-10-14 08:59:19.946 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 3f3d9640-8200-45d8-ac25-bbc5d016d49f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 04:59:19 np0005486808 nova_compute[259627]: 2025-10-14 08:59:19.946 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 27fa4cf8-c08c-46a2-af8f-17c8980a2317 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 04:59:19 np0005486808 nova_compute[259627]: 2025-10-14 08:59:19.946 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance eb820455-d45c-4331-9363-124f11537f52 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 04:59:19 np0005486808 nova_compute[259627]: 2025-10-14 08:59:19.946 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 04:59:19 np0005486808 nova_compute[259627]: 2025-10-14 08:59:19.946 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 04:59:19 np0005486808 nova_compute[259627]: 2025-10-14 08:59:19.947 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1088MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.028 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.402 2 DEBUG nova.compute.manager [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Received event network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.403 2 DEBUG oslo_concurrency.lockutils [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.403 2 DEBUG oslo_concurrency.lockutils [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.403 2 DEBUG oslo_concurrency.lockutils [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.403 2 DEBUG nova.compute.manager [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] No waiting events found dispatching network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.403 2 WARNING nova.compute.manager [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Received unexpected event network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 for instance with vm_state stopped and task_state None.#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.403 2 DEBUG nova.compute.manager [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Received event network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.404 2 DEBUG oslo_concurrency.lockutils [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.404 2 DEBUG oslo_concurrency.lockutils [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.404 2 DEBUG oslo_concurrency.lockutils [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.404 2 DEBUG nova.compute.manager [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] No waiting events found dispatching network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.404 2 WARNING nova.compute.manager [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Received unexpected event network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 for instance with vm_state stopped and task_state None.#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.405 2 DEBUG nova.compute.manager [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Received event network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.405 2 DEBUG oslo_concurrency.lockutils [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.405 2 DEBUG oslo_concurrency.lockutils [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.405 2 DEBUG oslo_concurrency.lockutils [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.405 2 DEBUG nova.compute.manager [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] No waiting events found dispatching network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.405 2 WARNING nova.compute.manager [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Received unexpected event network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 for instance with vm_state stopped and task_state None.#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.405 2 DEBUG nova.compute.manager [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Received event network-vif-unplugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.406 2 DEBUG oslo_concurrency.lockutils [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.406 2 DEBUG oslo_concurrency.lockutils [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.406 2 DEBUG oslo_concurrency.lockutils [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.406 2 DEBUG nova.compute.manager [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] No waiting events found dispatching network-vif-unplugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.406 2 WARNING nova.compute.manager [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Received unexpected event network-vif-unplugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 for instance with vm_state stopped and task_state None.#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.406 2 DEBUG nova.compute.manager [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Received event network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.407 2 DEBUG oslo_concurrency.lockutils [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.407 2 DEBUG oslo_concurrency.lockutils [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.407 2 DEBUG oslo_concurrency.lockutils [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.407 2 DEBUG nova.compute.manager [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] No waiting events found dispatching network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.407 2 WARNING nova.compute.manager [req-f03e30ad-40ea-449c-ac20-3416ebe8acba req-8583a09e-cfd0-4179-9893-ee04af3e695e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Received unexpected event network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 for instance with vm_state stopped and task_state None.#033[00m
Oct 14 04:59:20 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:59:20 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4163746131' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.451 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.457 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.471 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.493 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.494 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.581 2 DEBUG oslo_concurrency.lockutils [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.581 2 DEBUG oslo_concurrency.lockutils [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.581 2 DEBUG oslo_concurrency.lockutils [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.581 2 DEBUG oslo_concurrency.lockutils [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.582 2 DEBUG oslo_concurrency.lockutils [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.583 2 INFO nova.compute.manager [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Terminating instance#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.583 2 DEBUG nova.compute.manager [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.589 2 INFO nova.virt.libvirt.driver [-] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Instance destroyed successfully.#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.589 2 DEBUG nova.objects.instance [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lazy-loading 'resources' on Instance uuid 3f3d9640-8200-45d8-ac25-bbc5d016d49f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.612 2 DEBUG nova.virt.libvirt.vif [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:58:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-107832270',display_name='tempest-ImagesTestJSON-server-107832270',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-107832270',id=26,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:58:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='0d87d2d744db48dc8b32bb4bf6847fce',ramdisk_id='',reservation_id='r-bhq8zg2j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0
',owner_project_name='tempest-ImagesTestJSON-168259448',owner_user_name='tempest-ImagesTestJSON-168259448-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:59:18Z,user_data=None,user_id='3a217215c39e41fea2323ff7b3b4e6aa',uuid=3f3d9640-8200-45d8-ac25-bbc5d016d49f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "bcdd5079-efdb-47f7-99b0-21394b1d16e2", "address": "fa:16:3e:da:fb:42", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcdd5079-ef", "ovs_interfaceid": "bcdd5079-efdb-47f7-99b0-21394b1d16e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.613 2 DEBUG nova.network.os_vif_util [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converting VIF {"id": "bcdd5079-efdb-47f7-99b0-21394b1d16e2", "address": "fa:16:3e:da:fb:42", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcdd5079-ef", "ovs_interfaceid": "bcdd5079-efdb-47f7-99b0-21394b1d16e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.613 2 DEBUG nova.network.os_vif_util [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:fb:42,bridge_name='br-int',has_traffic_filtering=True,id=bcdd5079-efdb-47f7-99b0-21394b1d16e2,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcdd5079-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.614 2 DEBUG os_vif [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:fb:42,bridge_name='br-int',has_traffic_filtering=True,id=bcdd5079-efdb-47f7-99b0-21394b1d16e2,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcdd5079-ef') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.616 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbcdd5079-ef, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.624 2 INFO os_vif [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:fb:42,bridge_name='br-int',has_traffic_filtering=True,id=bcdd5079-efdb-47f7-99b0-21394b1d16e2,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcdd5079-ef')#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.674 2 DEBUG nova.objects.instance [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lazy-loading 'flavor' on Instance uuid 27fa4cf8-c08c-46a2-af8f-17c8980a2317 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.693 2 DEBUG oslo_concurrency.lockutils [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "refresh_cache-27fa4cf8-c08c-46a2-af8f-17c8980a2317" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.693 2 DEBUG oslo_concurrency.lockutils [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquired lock "refresh_cache-27fa4cf8-c08c-46a2-af8f-17c8980a2317" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.693 2 DEBUG nova.network.neutron [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 04:59:20 np0005486808 nova_compute[259627]: 2025-10-14 08:59:20.693 2 DEBUG nova.objects.instance [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lazy-loading 'info_cache' on Instance uuid 27fa4cf8-c08c-46a2-af8f-17c8980a2317 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:59:21 np0005486808 nova_compute[259627]: 2025-10-14 08:59:21.043 2 INFO nova.virt.libvirt.driver [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Deleting instance files /var/lib/nova/instances/3f3d9640-8200-45d8-ac25-bbc5d016d49f_del#033[00m
Oct 14 04:59:21 np0005486808 nova_compute[259627]: 2025-10-14 08:59:21.045 2 INFO nova.virt.libvirt.driver [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Deletion of /var/lib/nova/instances/3f3d9640-8200-45d8-ac25-bbc5d016d49f_del complete#033[00m
Oct 14 04:59:21 np0005486808 nova_compute[259627]: 2025-10-14 08:59:21.086 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432346.0813866, 51c76e0f-284d-4122-83b4-32c4518b9056 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:59:21 np0005486808 nova_compute[259627]: 2025-10-14 08:59:21.086 2 INFO nova.compute.manager [-] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] VM Stopped (Lifecycle Event)#033[00m
Oct 14 04:59:21 np0005486808 nova_compute[259627]: 2025-10-14 08:59:21.112 2 DEBUG nova.compute.manager [None req-64299c29-53d2-42c0-9b93-89ff9f5a7685 - - - - - -] [instance: 51c76e0f-284d-4122-83b4-32c4518b9056] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:21 np0005486808 nova_compute[259627]: 2025-10-14 08:59:21.117 2 INFO nova.compute.manager [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Took 0.53 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 04:59:21 np0005486808 nova_compute[259627]: 2025-10-14 08:59:21.118 2 DEBUG oslo.service.loopingcall [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 04:59:21 np0005486808 nova_compute[259627]: 2025-10-14 08:59:21.118 2 DEBUG nova.compute.manager [-] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 04:59:21 np0005486808 nova_compute[259627]: 2025-10-14 08:59:21.118 2 DEBUG nova.network.neutron [-] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 04:59:21 np0005486808 nova_compute[259627]: 2025-10-14 08:59:21.495 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:59:21 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1240: 305 pgs: 2 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 302 active+clean; 372 MiB data, 557 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 10 MiB/s wr, 336 op/s
Oct 14 04:59:21 np0005486808 nova_compute[259627]: 2025-10-14 08:59:21.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:21 np0005486808 nova_compute[259627]: 2025-10-14 08:59:21.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:59:21 np0005486808 nova_compute[259627]: 2025-10-14 08:59:21.977 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 04:59:21 np0005486808 nova_compute[259627]: 2025-10-14 08:59:21.977 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.013 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.013 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-27fa4cf8-c08c-46a2-af8f-17c8980a2317" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.094 2 DEBUG nova.network.neutron [-] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.110 2 INFO nova.compute.manager [-] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Took 0.99 seconds to deallocate network for instance.#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.172 2 DEBUG oslo_concurrency.lockutils [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.173 2 DEBUG oslo_concurrency.lockutils [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.196 2 DEBUG nova.network.neutron [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Updating instance_info_cache with network_info: [{"id": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "address": "fa:16:3e:c3:07:ec", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b16cd6a-fe", "ovs_interfaceid": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.212 2 DEBUG oslo_concurrency.lockutils [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Releasing lock "refresh_cache-27fa4cf8-c08c-46a2-af8f-17c8980a2317" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.213 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-27fa4cf8-c08c-46a2-af8f-17c8980a2317" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.213 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.214 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 27fa4cf8-c08c-46a2-af8f-17c8980a2317 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.247 2 INFO nova.virt.libvirt.driver [-] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Instance destroyed successfully.#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.247 2 DEBUG nova.objects.instance [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lazy-loading 'numa_topology' on Instance uuid 27fa4cf8-c08c-46a2-af8f-17c8980a2317 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.258 2 DEBUG nova.objects.instance [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lazy-loading 'resources' on Instance uuid 27fa4cf8-c08c-46a2-af8f-17c8980a2317 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.269 2 DEBUG nova.virt.libvirt.vif [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:58:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1883268496',display_name='tempest-ListServerFiltersTestJSON-instance-1883268496',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1883268496',id=27,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:59:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='3d3a647aa3914555a8a2c5fd6fe7a543',ramdisk_id='',reservation_id='r-7u6hc4e7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_m
in_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1842486796',owner_user_name='tempest-ListServerFiltersTestJSON-1842486796-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:59:18Z,user_data=None,user_id='56f2f9bf9b064a208d9ce5fe732c4ff7',uuid=27fa4cf8-c08c-46a2-af8f-17c8980a2317,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "address": "fa:16:3e:c3:07:ec", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b16cd6a-fe", "ovs_interfaceid": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.270 2 DEBUG nova.network.os_vif_util [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converting VIF {"id": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "address": "fa:16:3e:c3:07:ec", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b16cd6a-fe", "ovs_interfaceid": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.271 2 DEBUG nova.network.os_vif_util [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:07:ec,bridge_name='br-int',has_traffic_filtering=True,id=0b16cd6a-fe42-4a54-8bbe-810915fcaa93,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b16cd6a-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.271 2 DEBUG os_vif [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:07:ec,bridge_name='br-int',has_traffic_filtering=True,id=0b16cd6a-fe42-4a54-8bbe-810915fcaa93,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b16cd6a-fe') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.273 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b16cd6a-fe, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.278 2 INFO os_vif [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:07:ec,bridge_name='br-int',has_traffic_filtering=True,id=0b16cd6a-fe42-4a54-8bbe-810915fcaa93,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b16cd6a-fe')#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.280 2 DEBUG oslo_concurrency.processutils [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.318 2 DEBUG nova.virt.libvirt.driver [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Start _get_guest_xml network_info=[{"id": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "address": "fa:16:3e:c3:07:ec", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b16cd6a-fe", "ovs_interfaceid": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.324 2 WARNING nova.virt.libvirt.driver [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.331 2 DEBUG nova.virt.libvirt.host [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.332 2 DEBUG nova.virt.libvirt.host [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.343 2 DEBUG nova.virt.libvirt.host [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.344 2 DEBUG nova.virt.libvirt.host [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.344 2 DEBUG nova.virt.libvirt.driver [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.345 2 DEBUG nova.virt.hardware [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.345 2 DEBUG nova.virt.hardware [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.345 2 DEBUG nova.virt.hardware [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.345 2 DEBUG nova.virt.hardware [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.346 2 DEBUG nova.virt.hardware [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.346 2 DEBUG nova.virt.hardware [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.346 2 DEBUG nova.virt.hardware [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.346 2 DEBUG nova.virt.hardware [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.347 2 DEBUG nova.virt.hardware [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.347 2 DEBUG nova.virt.hardware [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.347 2 DEBUG nova.virt.hardware [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.348 2 DEBUG nova.objects.instance [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 27fa4cf8-c08c-46a2-af8f-17c8980a2317 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.365 2 DEBUG oslo_concurrency.processutils [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 04:59:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Oct 14 04:59:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Oct 14 04:59:22 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Oct 14 04:59:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:59:22 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4155859392' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.713 2 DEBUG oslo_concurrency.processutils [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.720 2 DEBUG nova.compute.provider_tree [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.736 2 DEBUG nova.scheduler.client.report [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.762 2 DEBUG oslo_concurrency.lockutils [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:59:22 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1708326235' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.793 2 DEBUG oslo_concurrency.processutils [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.879 2 INFO nova.scheduler.client.report [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Deleted allocations for instance 3f3d9640-8200-45d8-ac25-bbc5d016d49f#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.883 2 DEBUG oslo_concurrency.processutils [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:22 np0005486808 nova_compute[259627]: 2025-10-14 08:59:22.995 2 DEBUG oslo_concurrency.lockutils [None req-850df5a0-5534-4fa4-917b-9a493e36e452 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "3f3d9640-8200-45d8-ac25-bbc5d016d49f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.414s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:23 np0005486808 nova_compute[259627]: 2025-10-14 08:59:23.086 2 DEBUG nova.compute.manager [req-0aa4a025-0914-432a-8b44-21fc71da2109 req-9a3995f9-f6c8-4a68-9e9f-1b3e2e786e33 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Received event network-vif-deleted-bcdd5079-efdb-47f7-99b0-21394b1d16e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:59:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:59:23 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1183545449' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:59:23 np0005486808 nova_compute[259627]: 2025-10-14 08:59:23.316 2 DEBUG oslo_concurrency.processutils [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:23 np0005486808 nova_compute[259627]: 2025-10-14 08:59:23.318 2 DEBUG nova.virt.libvirt.vif [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:58:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1883268496',display_name='tempest-ListServerFiltersTestJSON-instance-1883268496',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1883268496',id=27,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:59:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='3d3a647aa3914555a8a2c5fd6fe7a543',ramdisk_id='',reservation_id='r-7u6hc4e7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_m
in_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1842486796',owner_user_name='tempest-ListServerFiltersTestJSON-1842486796-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:59:18Z,user_data=None,user_id='56f2f9bf9b064a208d9ce5fe732c4ff7',uuid=27fa4cf8-c08c-46a2-af8f-17c8980a2317,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "address": "fa:16:3e:c3:07:ec", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b16cd6a-fe", "ovs_interfaceid": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 04:59:23 np0005486808 nova_compute[259627]: 2025-10-14 08:59:23.318 2 DEBUG nova.network.os_vif_util [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converting VIF {"id": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "address": "fa:16:3e:c3:07:ec", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b16cd6a-fe", "ovs_interfaceid": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:59:23 np0005486808 nova_compute[259627]: 2025-10-14 08:59:23.319 2 DEBUG nova.network.os_vif_util [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:07:ec,bridge_name='br-int',has_traffic_filtering=True,id=0b16cd6a-fe42-4a54-8bbe-810915fcaa93,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b16cd6a-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:59:23 np0005486808 nova_compute[259627]: 2025-10-14 08:59:23.321 2 DEBUG nova.objects.instance [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lazy-loading 'pci_devices' on Instance uuid 27fa4cf8-c08c-46a2-af8f-17c8980a2317 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:59:23 np0005486808 nova_compute[259627]: 2025-10-14 08:59:23.362 2 DEBUG nova.virt.libvirt.driver [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] End _get_guest_xml xml=<domain type="kvm">
Oct 14 04:59:23 np0005486808 nova_compute[259627]:  <uuid>27fa4cf8-c08c-46a2-af8f-17c8980a2317</uuid>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:  <name>instance-0000001b</name>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 04:59:23 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:      <nova:name>tempest-ListServerFiltersTestJSON-instance-1883268496</nova:name>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 08:59:22</nova:creationTime>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 04:59:23 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:        <nova:user uuid="56f2f9bf9b064a208d9ce5fe732c4ff7">tempest-ListServerFiltersTestJSON-1842486796-project-member</nova:user>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:        <nova:project uuid="3d3a647aa3914555a8a2c5fd6fe7a543">tempest-ListServerFiltersTestJSON-1842486796</nova:project>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:        <nova:port uuid="0b16cd6a-fe42-4a54-8bbe-810915fcaa93">
Oct 14 04:59:23 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <system>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:      <entry name="serial">27fa4cf8-c08c-46a2-af8f-17c8980a2317</entry>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:      <entry name="uuid">27fa4cf8-c08c-46a2-af8f-17c8980a2317</entry>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    </system>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:  <os>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:  </os>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:  <features>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:  </features>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:  </clock>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:  <devices>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 04:59:23 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/27fa4cf8-c08c-46a2-af8f-17c8980a2317_disk">
Oct 14 04:59:23 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:59:23 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 04:59:23 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/27fa4cf8-c08c-46a2-af8f-17c8980a2317_disk.config">
Oct 14 04:59:23 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:59:23 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 04:59:23 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:c3:07:ec"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:      <target dev="tap0b16cd6a-fe"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    </interface>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 04:59:23 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/27fa4cf8-c08c-46a2-af8f-17c8980a2317/console.log" append="off"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    </serial>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <video>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    </video>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <input type="keyboard" bus="usb"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 04:59:23 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    </rng>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 04:59:23 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 04:59:23 np0005486808 nova_compute[259627]:  </devices>
Oct 14 04:59:23 np0005486808 nova_compute[259627]: </domain>
Oct 14 04:59:23 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 04:59:23 np0005486808 nova_compute[259627]: 2025-10-14 08:59:23.363 2 DEBUG nova.virt.libvirt.driver [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 04:59:23 np0005486808 nova_compute[259627]: 2025-10-14 08:59:23.363 2 DEBUG nova.virt.libvirt.driver [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 04:59:23 np0005486808 nova_compute[259627]: 2025-10-14 08:59:23.364 2 DEBUG nova.virt.libvirt.vif [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:58:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1883268496',display_name='tempest-ListServerFiltersTestJSON-instance-1883268496',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1883268496',id=27,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:59:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='3d3a647aa3914555a8a2c5fd6fe7a543',ramdisk_id='',reservation_id='r-7u6hc4e7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virti
o',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1842486796',owner_user_name='tempest-ListServerFiltersTestJSON-1842486796-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:59:18Z,user_data=None,user_id='56f2f9bf9b064a208d9ce5fe732c4ff7',uuid=27fa4cf8-c08c-46a2-af8f-17c8980a2317,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "address": "fa:16:3e:c3:07:ec", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b16cd6a-fe", "ovs_interfaceid": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 04:59:23 np0005486808 nova_compute[259627]: 2025-10-14 08:59:23.365 2 DEBUG nova.network.os_vif_util [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converting VIF {"id": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "address": "fa:16:3e:c3:07:ec", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b16cd6a-fe", "ovs_interfaceid": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:59:23 np0005486808 nova_compute[259627]: 2025-10-14 08:59:23.365 2 DEBUG nova.network.os_vif_util [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:07:ec,bridge_name='br-int',has_traffic_filtering=True,id=0b16cd6a-fe42-4a54-8bbe-810915fcaa93,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b16cd6a-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:59:23 np0005486808 nova_compute[259627]: 2025-10-14 08:59:23.366 2 DEBUG os_vif [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:07:ec,bridge_name='br-int',has_traffic_filtering=True,id=0b16cd6a-fe42-4a54-8bbe-810915fcaa93,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b16cd6a-fe') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 04:59:23 np0005486808 nova_compute[259627]: 2025-10-14 08:59:23.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:23 np0005486808 nova_compute[259627]: 2025-10-14 08:59:23.367 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:23 np0005486808 nova_compute[259627]: 2025-10-14 08:59:23.368 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:59:23 np0005486808 nova_compute[259627]: 2025-10-14 08:59:23.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:23 np0005486808 nova_compute[259627]: 2025-10-14 08:59:23.373 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0b16cd6a-fe, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:23 np0005486808 nova_compute[259627]: 2025-10-14 08:59:23.375 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0b16cd6a-fe, col_values=(('external_ids', {'iface-id': '0b16cd6a-fe42-4a54-8bbe-810915fcaa93', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c3:07:ec', 'vm-uuid': '27fa4cf8-c08c-46a2-af8f-17c8980a2317'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:23 np0005486808 NetworkManager[44885]: <info>  [1760432363.3782] manager: (tap0b16cd6a-fe): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/120)
Oct 14 04:59:23 np0005486808 nova_compute[259627]: 2025-10-14 08:59:23.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:23 np0005486808 nova_compute[259627]: 2025-10-14 08:59:23.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 04:59:23 np0005486808 nova_compute[259627]: 2025-10-14 08:59:23.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:23 np0005486808 nova_compute[259627]: 2025-10-14 08:59:23.384 2 INFO os_vif [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:07:ec,bridge_name='br-int',has_traffic_filtering=True,id=0b16cd6a-fe42-4a54-8bbe-810915fcaa93,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b16cd6a-fe')#033[00m
Oct 14 04:59:23 np0005486808 kernel: tap0b16cd6a-fe: entered promiscuous mode
Oct 14 04:59:23 np0005486808 NetworkManager[44885]: <info>  [1760432363.4552] manager: (tap0b16cd6a-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/121)
Oct 14 04:59:23 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:23Z|00243|binding|INFO|Claiming lport 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 for this chassis.
Oct 14 04:59:23 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:23Z|00244|binding|INFO|0b16cd6a-fe42-4a54-8bbe-810915fcaa93: Claiming fa:16:3e:c3:07:ec 10.100.0.11
Oct 14 04:59:23 np0005486808 nova_compute[259627]: 2025-10-14 08:59:23.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:23.464 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:07:ec 10.100.0.11'], port_security=['fa:16:3e:c3:07:ec 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '27fa4cf8-c08c-46a2-af8f-17c8980a2317', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3d3a647aa3914555a8a2c5fd6fe7a543', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'bf06e246-cec9-4c0e-acc7-df3f274be7f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dfac5894-b2bd-47a3-835a-de3d7c1134b1, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=0b16cd6a-fe42-4a54-8bbe-810915fcaa93) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:59:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:23.465 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 in datapath c4d50d6a-6686-4b50-b1e5-9f71bae17a99 bound to our chassis#033[00m
Oct 14 04:59:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:23.466 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c4d50d6a-6686-4b50-b1e5-9f71bae17a99#033[00m
Oct 14 04:59:23 np0005486808 systemd-udevd[296836]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 04:59:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:23.487 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e0e7c767-0718-47aa-a19d-3f42d5af8ac1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:23 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:23Z|00245|binding|INFO|Setting lport 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 ovn-installed in OVS
Oct 14 04:59:23 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:23Z|00246|binding|INFO|Setting lport 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 up in Southbound
Oct 14 04:59:23 np0005486808 nova_compute[259627]: 2025-10-14 08:59:23.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:23 np0005486808 nova_compute[259627]: 2025-10-14 08:59:23.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:23 np0005486808 NetworkManager[44885]: <info>  [1760432363.5008] device (tap0b16cd6a-fe): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 04:59:23 np0005486808 NetworkManager[44885]: <info>  [1760432363.5023] device (tap0b16cd6a-fe): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 04:59:23 np0005486808 systemd-machined[214636]: New machine qemu-34-instance-0000001b.
Oct 14 04:59:23 np0005486808 systemd[1]: Started Virtual Machine qemu-34-instance-0000001b.
Oct 14 04:59:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:23.525 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[b4635890-bd05-451d-9d14-02d7617453b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:23.528 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1e08b011-8b0e-4a4f-b29c-6a741e500317]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:23.559 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[736d29be-abbb-417c-be0a-1cfdf8f69a29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:23 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1242: 305 pgs: 2 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 302 active+clean; 372 MiB data, 557 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 9.2 MiB/s wr, 298 op/s
Oct 14 04:59:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:23.577 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e448dda1-e1d7-41af-a140-3f9b3bb8ffa8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4d50d6a-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:91:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 16, 'rx_bytes': 874, 'tx_bytes': 864, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 16, 'rx_bytes': 874, 'tx_bytes': 864, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611643, 'reachable_time': 22346, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296851, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:23.591 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1fe3e1aa-59c1-4a06-9e78-734c8088ce05]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc4d50d6a-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611657, 'tstamp': 611657}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296853, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc4d50d6a-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611661, 'tstamp': 611661}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296853, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:23.592 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4d50d6a-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:23 np0005486808 nova_compute[259627]: 2025-10-14 08:59:23.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:23.595 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc4d50d6a-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:23.595 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:59:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:23.595 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc4d50d6a-60, col_values=(('external_ids', {'iface-id': '650f034e-5333-49ba-9907-b0409944aee7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:23.595 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:59:24 np0005486808 nova_compute[259627]: 2025-10-14 08:59:24.056 2 DEBUG oslo_concurrency.lockutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "f3dafba3-6472-4921-9ece-b6076172365e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:24 np0005486808 nova_compute[259627]: 2025-10-14 08:59:24.057 2 DEBUG oslo_concurrency.lockutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "f3dafba3-6472-4921-9ece-b6076172365e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:24 np0005486808 nova_compute[259627]: 2025-10-14 08:59:24.096 2 DEBUG nova.compute.manager [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 04:59:24 np0005486808 nova_compute[259627]: 2025-10-14 08:59:24.161 2 DEBUG oslo_concurrency.lockutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:24 np0005486808 nova_compute[259627]: 2025-10-14 08:59:24.162 2 DEBUG oslo_concurrency.lockutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:24 np0005486808 nova_compute[259627]: 2025-10-14 08:59:24.168 2 DEBUG nova.virt.hardware [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 04:59:24 np0005486808 nova_compute[259627]: 2025-10-14 08:59:24.169 2 INFO nova.compute.claims [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 04:59:24 np0005486808 nova_compute[259627]: 2025-10-14 08:59:24.316 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Updating instance_info_cache with network_info: [{"id": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "address": "fa:16:3e:c3:07:ec", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b16cd6a-fe", "ovs_interfaceid": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:59:24 np0005486808 nova_compute[259627]: 2025-10-14 08:59:24.319 2 DEBUG oslo_concurrency.processutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:24 np0005486808 nova_compute[259627]: 2025-10-14 08:59:24.363 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-27fa4cf8-c08c-46a2-af8f-17c8980a2317" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:59:24 np0005486808 nova_compute[259627]: 2025-10-14 08:59:24.363 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 14 04:59:24 np0005486808 nova_compute[259627]: 2025-10-14 08:59:24.364 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:59:24 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:59:24 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3157234727' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:59:24 np0005486808 nova_compute[259627]: 2025-10-14 08:59:24.785 2 DEBUG oslo_concurrency.processutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:24 np0005486808 nova_compute[259627]: 2025-10-14 08:59:24.793 2 DEBUG nova.compute.provider_tree [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:59:24 np0005486808 nova_compute[259627]: 2025-10-14 08:59:24.815 2 DEBUG nova.scheduler.client.report [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:59:24 np0005486808 nova_compute[259627]: 2025-10-14 08:59:24.826 2 DEBUG nova.compute.manager [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 04:59:24 np0005486808 nova_compute[259627]: 2025-10-14 08:59:24.827 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for 27fa4cf8-c08c-46a2-af8f-17c8980a2317 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct 14 04:59:24 np0005486808 nova_compute[259627]: 2025-10-14 08:59:24.828 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432364.8235388, 27fa4cf8-c08c-46a2-af8f-17c8980a2317 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:59:24 np0005486808 nova_compute[259627]: 2025-10-14 08:59:24.829 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] VM Resumed (Lifecycle Event)#033[00m
Oct 14 04:59:24 np0005486808 nova_compute[259627]: 2025-10-14 08:59:24.837 2 INFO nova.virt.libvirt.driver [-] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Instance rebooted successfully.#033[00m
Oct 14 04:59:24 np0005486808 nova_compute[259627]: 2025-10-14 08:59:24.838 2 DEBUG nova.compute.manager [None req-74cacb42-d544-42a0-aa4b-a810c5c1c513 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:24 np0005486808 nova_compute[259627]: 2025-10-14 08:59:24.852 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:24 np0005486808 nova_compute[259627]: 2025-10-14 08:59:24.856 2 DEBUG oslo_concurrency.lockutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:24 np0005486808 nova_compute[259627]: 2025-10-14 08:59:24.857 2 DEBUG nova.compute.manager [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 04:59:24 np0005486808 nova_compute[259627]: 2025-10-14 08:59:24.866 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:59:24 np0005486808 nova_compute[259627]: 2025-10-14 08:59:24.911 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] During sync_power_state the instance has a pending task (powering-on). Skip.#033[00m
Oct 14 04:59:24 np0005486808 nova_compute[259627]: 2025-10-14 08:59:24.912 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432364.8241057, 27fa4cf8-c08c-46a2-af8f-17c8980a2317 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:59:24 np0005486808 nova_compute[259627]: 2025-10-14 08:59:24.913 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] VM Started (Lifecycle Event)#033[00m
Oct 14 04:59:24 np0005486808 nova_compute[259627]: 2025-10-14 08:59:24.940 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:24 np0005486808 nova_compute[259627]: 2025-10-14 08:59:24.944 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:59:24 np0005486808 nova_compute[259627]: 2025-10-14 08:59:24.955 2 DEBUG nova.compute.manager [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 04:59:24 np0005486808 nova_compute[259627]: 2025-10-14 08:59:24.955 2 DEBUG nova.network.neutron [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 04:59:24 np0005486808 nova_compute[259627]: 2025-10-14 08:59:24.982 2 INFO nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 04:59:25 np0005486808 nova_compute[259627]: 2025-10-14 08:59:25.002 2 DEBUG nova.compute.manager [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 04:59:25 np0005486808 nova_compute[259627]: 2025-10-14 08:59:25.096 2 DEBUG nova.compute.manager [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 04:59:25 np0005486808 nova_compute[259627]: 2025-10-14 08:59:25.097 2 DEBUG nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 04:59:25 np0005486808 nova_compute[259627]: 2025-10-14 08:59:25.098 2 INFO nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Creating image(s)#033[00m
Oct 14 04:59:25 np0005486808 nova_compute[259627]: 2025-10-14 08:59:25.120 2 DEBUG nova.storage.rbd_utils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image f3dafba3-6472-4921-9ece-b6076172365e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:59:25 np0005486808 nova_compute[259627]: 2025-10-14 08:59:25.148 2 DEBUG nova.storage.rbd_utils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image f3dafba3-6472-4921-9ece-b6076172365e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:59:25 np0005486808 nova_compute[259627]: 2025-10-14 08:59:25.171 2 DEBUG nova.storage.rbd_utils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image f3dafba3-6472-4921-9ece-b6076172365e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:59:25 np0005486808 nova_compute[259627]: 2025-10-14 08:59:25.175 2 DEBUG oslo_concurrency.processutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:25 np0005486808 nova_compute[259627]: 2025-10-14 08:59:25.273 2 DEBUG oslo_concurrency.processutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:25 np0005486808 nova_compute[259627]: 2025-10-14 08:59:25.276 2 DEBUG oslo_concurrency.lockutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:25 np0005486808 nova_compute[259627]: 2025-10-14 08:59:25.277 2 DEBUG oslo_concurrency.lockutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:25 np0005486808 nova_compute[259627]: 2025-10-14 08:59:25.278 2 DEBUG oslo_concurrency.lockutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:25 np0005486808 nova_compute[259627]: 2025-10-14 08:59:25.307 2 DEBUG nova.storage.rbd_utils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image f3dafba3-6472-4921-9ece-b6076172365e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:59:25 np0005486808 nova_compute[259627]: 2025-10-14 08:59:25.310 2 DEBUG oslo_concurrency.processutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 f3dafba3-6472-4921-9ece-b6076172365e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:25 np0005486808 nova_compute[259627]: 2025-10-14 08:59:25.383 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432350.3816154, 654413e6-01cd-4e54-a271-6b515a8561e6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:59:25 np0005486808 nova_compute[259627]: 2025-10-14 08:59:25.383 2 INFO nova.compute.manager [-] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] VM Stopped (Lifecycle Event)#033[00m
Oct 14 04:59:25 np0005486808 nova_compute[259627]: 2025-10-14 08:59:25.406 2 DEBUG nova.compute.manager [None req-cc76175c-4e61-4e2d-ae2e-e4b8d39581d0 - - - - - -] [instance: 654413e6-01cd-4e54-a271-6b515a8561e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:25 np0005486808 nova_compute[259627]: 2025-10-14 08:59:25.559 2 DEBUG oslo_concurrency.processutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 f3dafba3-6472-4921-9ece-b6076172365e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.248s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:25 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1243: 305 pgs: 305 active+clean; 283 MiB data, 482 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 7.8 MiB/s wr, 329 op/s
Oct 14 04:59:25 np0005486808 nova_compute[259627]: 2025-10-14 08:59:25.621 2 DEBUG nova.storage.rbd_utils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] resizing rbd image f3dafba3-6472-4921-9ece-b6076172365e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 04:59:25 np0005486808 nova_compute[259627]: 2025-10-14 08:59:25.717 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432350.6832848, 3f3d9640-8200-45d8-ac25-bbc5d016d49f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:59:25 np0005486808 nova_compute[259627]: 2025-10-14 08:59:25.717 2 INFO nova.compute.manager [-] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] VM Stopped (Lifecycle Event)#033[00m
Oct 14 04:59:25 np0005486808 nova_compute[259627]: 2025-10-14 08:59:25.725 2 DEBUG nova.objects.instance [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lazy-loading 'migration_context' on Instance uuid f3dafba3-6472-4921-9ece-b6076172365e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:59:25 np0005486808 nova_compute[259627]: 2025-10-14 08:59:25.753 2 DEBUG nova.compute.manager [None req-168f0166-41c9-4407-841c-1f1f2bf95ad7 - - - - - -] [instance: 3f3d9640-8200-45d8-ac25-bbc5d016d49f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:25 np0005486808 nova_compute[259627]: 2025-10-14 08:59:25.755 2 DEBUG nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 04:59:25 np0005486808 nova_compute[259627]: 2025-10-14 08:59:25.756 2 DEBUG nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Ensure instance console log exists: /var/lib/nova/instances/f3dafba3-6472-4921-9ece-b6076172365e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 04:59:25 np0005486808 nova_compute[259627]: 2025-10-14 08:59:25.756 2 DEBUG oslo_concurrency.lockutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:25 np0005486808 nova_compute[259627]: 2025-10-14 08:59:25.757 2 DEBUG oslo_concurrency.lockutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:25 np0005486808 nova_compute[259627]: 2025-10-14 08:59:25.758 2 DEBUG oslo_concurrency.lockutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:25 np0005486808 nova_compute[259627]: 2025-10-14 08:59:25.951 2 DEBUG nova.policy [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3a217215c39e41fea2323ff7b3b4e6aa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0d87d2d744db48dc8b32bb4bf6847fce', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 04:59:26 np0005486808 nova_compute[259627]: 2025-10-14 08:59:26.594 2 DEBUG nova.network.neutron [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Successfully created port: ecb526b7-d1ad-4a75-b851-482702018258 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 04:59:26 np0005486808 nova_compute[259627]: 2025-10-14 08:59:26.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:27 np0005486808 nova_compute[259627]: 2025-10-14 08:59:27.498 2 DEBUG nova.network.neutron [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Successfully updated port: ecb526b7-d1ad-4a75-b851-482702018258 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 04:59:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 04:59:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Oct 14 04:59:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Oct 14 04:59:27 np0005486808 nova_compute[259627]: 2025-10-14 08:59:27.524 2 DEBUG oslo_concurrency.lockutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "refresh_cache-f3dafba3-6472-4921-9ece-b6076172365e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:59:27 np0005486808 nova_compute[259627]: 2025-10-14 08:59:27.524 2 DEBUG oslo_concurrency.lockutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquired lock "refresh_cache-f3dafba3-6472-4921-9ece-b6076172365e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:59:27 np0005486808 nova_compute[259627]: 2025-10-14 08:59:27.524 2 DEBUG nova.network.neutron [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 04:59:27 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Oct 14 04:59:27 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1245: 305 pgs: 305 active+clean; 283 MiB data, 482 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 8.1 MiB/s wr, 340 op/s
Oct 14 04:59:27 np0005486808 nova_compute[259627]: 2025-10-14 08:59:27.593 2 DEBUG nova.compute.manager [req-63c10960-af23-40db-bd0c-3e4b13064f38 req-abb1cbef-3270-4679-aa12-c317869b69bc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Received event network-changed-ecb526b7-d1ad-4a75-b851-482702018258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:59:27 np0005486808 nova_compute[259627]: 2025-10-14 08:59:27.593 2 DEBUG nova.compute.manager [req-63c10960-af23-40db-bd0c-3e4b13064f38 req-abb1cbef-3270-4679-aa12-c317869b69bc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Refreshing instance network info cache due to event network-changed-ecb526b7-d1ad-4a75-b851-482702018258. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 04:59:27 np0005486808 nova_compute[259627]: 2025-10-14 08:59:27.594 2 DEBUG oslo_concurrency.lockutils [req-63c10960-af23-40db-bd0c-3e4b13064f38 req-abb1cbef-3270-4679-aa12-c317869b69bc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-f3dafba3-6472-4921-9ece-b6076172365e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:59:27 np0005486808 nova_compute[259627]: 2025-10-14 08:59:27.712 2 DEBUG nova.network.neutron [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 04:59:28 np0005486808 nova_compute[259627]: 2025-10-14 08:59:28.360 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 04:59:28 np0005486808 nova_compute[259627]: 2025-10-14 08:59:28.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:28 np0005486808 nova_compute[259627]: 2025-10-14 08:59:28.956 2 DEBUG nova.network.neutron [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Updating instance_info_cache with network_info: [{"id": "ecb526b7-d1ad-4a75-b851-482702018258", "address": "fa:16:3e:e5:8b:13", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecb526b7-d1", "ovs_interfaceid": "ecb526b7-d1ad-4a75-b851-482702018258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:59:28 np0005486808 nova_compute[259627]: 2025-10-14 08:59:28.983 2 DEBUG oslo_concurrency.lockutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Releasing lock "refresh_cache-f3dafba3-6472-4921-9ece-b6076172365e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:59:28 np0005486808 nova_compute[259627]: 2025-10-14 08:59:28.983 2 DEBUG nova.compute.manager [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Instance network_info: |[{"id": "ecb526b7-d1ad-4a75-b851-482702018258", "address": "fa:16:3e:e5:8b:13", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecb526b7-d1", "ovs_interfaceid": "ecb526b7-d1ad-4a75-b851-482702018258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 04:59:28 np0005486808 nova_compute[259627]: 2025-10-14 08:59:28.983 2 DEBUG oslo_concurrency.lockutils [req-63c10960-af23-40db-bd0c-3e4b13064f38 req-abb1cbef-3270-4679-aa12-c317869b69bc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-f3dafba3-6472-4921-9ece-b6076172365e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:59:28 np0005486808 nova_compute[259627]: 2025-10-14 08:59:28.983 2 DEBUG nova.network.neutron [req-63c10960-af23-40db-bd0c-3e4b13064f38 req-abb1cbef-3270-4679-aa12-c317869b69bc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Refreshing network info cache for port ecb526b7-d1ad-4a75-b851-482702018258 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 04:59:28 np0005486808 nova_compute[259627]: 2025-10-14 08:59:28.986 2 DEBUG nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Start _get_guest_xml network_info=[{"id": "ecb526b7-d1ad-4a75-b851-482702018258", "address": "fa:16:3e:e5:8b:13", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecb526b7-d1", "ovs_interfaceid": "ecb526b7-d1ad-4a75-b851-482702018258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 04:59:28 np0005486808 nova_compute[259627]: 2025-10-14 08:59:28.989 2 WARNING nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:59:28 np0005486808 nova_compute[259627]: 2025-10-14 08:59:28.995 2 DEBUG nova.virt.libvirt.host [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 04:59:28 np0005486808 nova_compute[259627]: 2025-10-14 08:59:28.995 2 DEBUG nova.virt.libvirt.host [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.000 2 DEBUG nova.virt.libvirt.host [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.001 2 DEBUG nova.virt.libvirt.host [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.001 2 DEBUG nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.001 2 DEBUG nova.virt.hardware [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.002 2 DEBUG nova.virt.hardware [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.002 2 DEBUG nova.virt.hardware [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.002 2 DEBUG nova.virt.hardware [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.002 2 DEBUG nova.virt.hardware [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.003 2 DEBUG nova.virt.hardware [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.003 2 DEBUG nova.virt.hardware [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.003 2 DEBUG nova.virt.hardware [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.003 2 DEBUG nova.virt.hardware [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.004 2 DEBUG nova.virt.hardware [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.004 2 DEBUG nova.virt.hardware [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.006 2 DEBUG oslo_concurrency.processutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:29 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:59:29 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1293226448' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.426 2 DEBUG oslo_concurrency.processutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.460 2 DEBUG nova.storage.rbd_utils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image f3dafba3-6472-4921-9ece-b6076172365e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.465 2 DEBUG oslo_concurrency.processutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:29 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1246: 305 pgs: 305 active+clean; 283 MiB data, 482 MiB used, 60 GiB / 60 GiB avail; 260 KiB/s rd, 53 KiB/s wr, 76 op/s
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.679 2 DEBUG nova.compute.manager [req-4c5b0d95-f053-474f-8f71-7394a52e2a25 req-14b40617-79ca-4d20-8272-07ee53dc2bd1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Received event network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.680 2 DEBUG oslo_concurrency.lockutils [req-4c5b0d95-f053-474f-8f71-7394a52e2a25 req-14b40617-79ca-4d20-8272-07ee53dc2bd1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.680 2 DEBUG oslo_concurrency.lockutils [req-4c5b0d95-f053-474f-8f71-7394a52e2a25 req-14b40617-79ca-4d20-8272-07ee53dc2bd1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.680 2 DEBUG oslo_concurrency.lockutils [req-4c5b0d95-f053-474f-8f71-7394a52e2a25 req-14b40617-79ca-4d20-8272-07ee53dc2bd1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.681 2 DEBUG nova.compute.manager [req-4c5b0d95-f053-474f-8f71-7394a52e2a25 req-14b40617-79ca-4d20-8272-07ee53dc2bd1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] No waiting events found dispatching network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.681 2 WARNING nova.compute.manager [req-4c5b0d95-f053-474f-8f71-7394a52e2a25 req-14b40617-79ca-4d20-8272-07ee53dc2bd1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Received unexpected event network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 for instance with vm_state active and task_state None.#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.682 2 DEBUG nova.compute.manager [req-4c5b0d95-f053-474f-8f71-7394a52e2a25 req-14b40617-79ca-4d20-8272-07ee53dc2bd1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Received event network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.682 2 DEBUG oslo_concurrency.lockutils [req-4c5b0d95-f053-474f-8f71-7394a52e2a25 req-14b40617-79ca-4d20-8272-07ee53dc2bd1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.682 2 DEBUG oslo_concurrency.lockutils [req-4c5b0d95-f053-474f-8f71-7394a52e2a25 req-14b40617-79ca-4d20-8272-07ee53dc2bd1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.682 2 DEBUG oslo_concurrency.lockutils [req-4c5b0d95-f053-474f-8f71-7394a52e2a25 req-14b40617-79ca-4d20-8272-07ee53dc2bd1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.683 2 DEBUG nova.compute.manager [req-4c5b0d95-f053-474f-8f71-7394a52e2a25 req-14b40617-79ca-4d20-8272-07ee53dc2bd1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] No waiting events found dispatching network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.683 2 WARNING nova.compute.manager [req-4c5b0d95-f053-474f-8f71-7394a52e2a25 req-14b40617-79ca-4d20-8272-07ee53dc2bd1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Received unexpected event network-vif-plugged-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 for instance with vm_state active and task_state None.#033[00m
Oct 14 04:59:29 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:59:29 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2911530406' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.879 2 DEBUG oslo_concurrency.processutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.881 2 DEBUG nova.virt.libvirt.vif [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:59:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1482418899',display_name='tempest-ImagesTestJSON-server-1482418899',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1482418899',id=31,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d87d2d744db48dc8b32bb4bf6847fce',ramdisk_id='',reservation_id='r-dol66h7w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-168259448',owner_user_name='tempest-ImagesTestJSON-168259448-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:59:25Z,user_data=None,user_id='3a217215c39e41fea2323ff7b3b4e6aa',uuid=f3dafba3-6472-4921-9ece-b6076172365e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ecb526b7-d1ad-4a75-b851-482702018258", "address": "fa:16:3e:e5:8b:13", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecb526b7-d1", "ovs_interfaceid": "ecb526b7-d1ad-4a75-b851-482702018258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.882 2 DEBUG nova.network.os_vif_util [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converting VIF {"id": "ecb526b7-d1ad-4a75-b851-482702018258", "address": "fa:16:3e:e5:8b:13", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecb526b7-d1", "ovs_interfaceid": "ecb526b7-d1ad-4a75-b851-482702018258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.883 2 DEBUG nova.network.os_vif_util [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:8b:13,bridge_name='br-int',has_traffic_filtering=True,id=ecb526b7-d1ad-4a75-b851-482702018258,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapecb526b7-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.885 2 DEBUG nova.objects.instance [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lazy-loading 'pci_devices' on Instance uuid f3dafba3-6472-4921-9ece-b6076172365e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.904 2 DEBUG nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] End _get_guest_xml xml=<domain type="kvm">
Oct 14 04:59:29 np0005486808 nova_compute[259627]:  <uuid>f3dafba3-6472-4921-9ece-b6076172365e</uuid>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:  <name>instance-0000001f</name>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 04:59:29 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:      <nova:name>tempest-ImagesTestJSON-server-1482418899</nova:name>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 08:59:28</nova:creationTime>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 04:59:29 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:        <nova:user uuid="3a217215c39e41fea2323ff7b3b4e6aa">tempest-ImagesTestJSON-168259448-project-member</nova:user>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:        <nova:project uuid="0d87d2d744db48dc8b32bb4bf6847fce">tempest-ImagesTestJSON-168259448</nova:project>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:        <nova:port uuid="ecb526b7-d1ad-4a75-b851-482702018258">
Oct 14 04:59:29 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <system>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:      <entry name="serial">f3dafba3-6472-4921-9ece-b6076172365e</entry>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:      <entry name="uuid">f3dafba3-6472-4921-9ece-b6076172365e</entry>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    </system>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:  <os>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:  </os>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:  <features>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:  </features>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:  </clock>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:  <devices>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 04:59:29 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/f3dafba3-6472-4921-9ece-b6076172365e_disk">
Oct 14 04:59:29 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:59:29 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 04:59:29 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/f3dafba3-6472-4921-9ece-b6076172365e_disk.config">
Oct 14 04:59:29 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:59:29 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 04:59:29 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:e5:8b:13"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:      <target dev="tapecb526b7-d1"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    </interface>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 04:59:29 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/f3dafba3-6472-4921-9ece-b6076172365e/console.log" append="off"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    </serial>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <video>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    </video>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 04:59:29 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    </rng>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 04:59:29 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 04:59:29 np0005486808 nova_compute[259627]:  </devices>
Oct 14 04:59:29 np0005486808 nova_compute[259627]: </domain>
Oct 14 04:59:29 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.907 2 DEBUG nova.compute.manager [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Preparing to wait for external event network-vif-plugged-ecb526b7-d1ad-4a75-b851-482702018258 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.907 2 DEBUG oslo_concurrency.lockutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "f3dafba3-6472-4921-9ece-b6076172365e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.908 2 DEBUG oslo_concurrency.lockutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "f3dafba3-6472-4921-9ece-b6076172365e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.908 2 DEBUG oslo_concurrency.lockutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "f3dafba3-6472-4921-9ece-b6076172365e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.910 2 DEBUG nova.virt.libvirt.vif [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:59:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1482418899',display_name='tempest-ImagesTestJSON-server-1482418899',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1482418899',id=31,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d87d2d744db48dc8b32bb4bf6847fce',ramdisk_id='',reservation_id='r-dol66h7w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-168259448',owner_user_name='tempest-ImagesTestJSON-168259448-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:59:25Z,user_data=None,user_id='3a217215c39e41fea2323ff7b3b4e6aa',uuid=f3dafba3-6472-4921-9ece-b6076172365e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ecb526b7-d1ad-4a75-b851-482702018258", "address": "fa:16:3e:e5:8b:13", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecb526b7-d1", "ovs_interfaceid": "ecb526b7-d1ad-4a75-b851-482702018258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.910 2 DEBUG nova.network.os_vif_util [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converting VIF {"id": "ecb526b7-d1ad-4a75-b851-482702018258", "address": "fa:16:3e:e5:8b:13", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecb526b7-d1", "ovs_interfaceid": "ecb526b7-d1ad-4a75-b851-482702018258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.911 2 DEBUG nova.network.os_vif_util [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:8b:13,bridge_name='br-int',has_traffic_filtering=True,id=ecb526b7-d1ad-4a75-b851-482702018258,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapecb526b7-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.912 2 DEBUG os_vif [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:8b:13,bridge_name='br-int',has_traffic_filtering=True,id=ecb526b7-d1ad-4a75-b851-482702018258,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapecb526b7-d1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.914 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.915 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.920 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapecb526b7-d1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.921 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapecb526b7-d1, col_values=(('external_ids', {'iface-id': 'ecb526b7-d1ad-4a75-b851-482702018258', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e5:8b:13', 'vm-uuid': 'f3dafba3-6472-4921-9ece-b6076172365e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:29 np0005486808 NetworkManager[44885]: <info>  [1760432369.9259] manager: (tapecb526b7-d1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/122)
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:29 np0005486808 nova_compute[259627]: 2025-10-14 08:59:29.938 2 INFO os_vif [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:8b:13,bridge_name='br-int',has_traffic_filtering=True,id=ecb526b7-d1ad-4a75-b851-482702018258,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapecb526b7-d1')#033[00m
Oct 14 04:59:30 np0005486808 nova_compute[259627]: 2025-10-14 08:59:30.023 2 DEBUG nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:59:30 np0005486808 nova_compute[259627]: 2025-10-14 08:59:30.024 2 DEBUG nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:59:30 np0005486808 nova_compute[259627]: 2025-10-14 08:59:30.025 2 DEBUG nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] No VIF found with MAC fa:16:3e:e5:8b:13, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 04:59:30 np0005486808 nova_compute[259627]: 2025-10-14 08:59:30.026 2 INFO nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Using config drive#033[00m
Oct 14 04:59:30 np0005486808 nova_compute[259627]: 2025-10-14 08:59:30.061 2 DEBUG nova.storage.rbd_utils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image f3dafba3-6472-4921-9ece-b6076172365e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:59:30 np0005486808 nova_compute[259627]: 2025-10-14 08:59:30.191 2 DEBUG nova.network.neutron [req-63c10960-af23-40db-bd0c-3e4b13064f38 req-abb1cbef-3270-4679-aa12-c317869b69bc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Updated VIF entry in instance network info cache for port ecb526b7-d1ad-4a75-b851-482702018258. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 04:59:30 np0005486808 nova_compute[259627]: 2025-10-14 08:59:30.192 2 DEBUG nova.network.neutron [req-63c10960-af23-40db-bd0c-3e4b13064f38 req-abb1cbef-3270-4679-aa12-c317869b69bc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Updating instance_info_cache with network_info: [{"id": "ecb526b7-d1ad-4a75-b851-482702018258", "address": "fa:16:3e:e5:8b:13", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecb526b7-d1", "ovs_interfaceid": "ecb526b7-d1ad-4a75-b851-482702018258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:59:30 np0005486808 nova_compute[259627]: 2025-10-14 08:59:30.221 2 DEBUG oslo_concurrency.lockutils [req-63c10960-af23-40db-bd0c-3e4b13064f38 req-abb1cbef-3270-4679-aa12-c317869b69bc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-f3dafba3-6472-4921-9ece-b6076172365e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:59:30 np0005486808 nova_compute[259627]: 2025-10-14 08:59:30.529 2 INFO nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Creating config drive at /var/lib/nova/instances/f3dafba3-6472-4921-9ece-b6076172365e/disk.config#033[00m
Oct 14 04:59:30 np0005486808 nova_compute[259627]: 2025-10-14 08:59:30.535 2 DEBUG oslo_concurrency.processutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f3dafba3-6472-4921-9ece-b6076172365e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoj38svzi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:59:30 np0005486808 nova_compute[259627]: 2025-10-14 08:59:30.672 2 DEBUG oslo_concurrency.processutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f3dafba3-6472-4921-9ece-b6076172365e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoj38svzi" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:59:30 np0005486808 nova_compute[259627]: 2025-10-14 08:59:30.696 2 DEBUG nova.storage.rbd_utils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image f3dafba3-6472-4921-9ece-b6076172365e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:59:30 np0005486808 nova_compute[259627]: 2025-10-14 08:59:30.699 2 DEBUG oslo_concurrency.processutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f3dafba3-6472-4921-9ece-b6076172365e/disk.config f3dafba3-6472-4921-9ece-b6076172365e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:59:30 np0005486808 nova_compute[259627]: 2025-10-14 08:59:30.856 2 DEBUG oslo_concurrency.processutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f3dafba3-6472-4921-9ece-b6076172365e/disk.config f3dafba3-6472-4921-9ece-b6076172365e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:59:30 np0005486808 nova_compute[259627]: 2025-10-14 08:59:30.857 2 INFO nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Deleting local config drive /var/lib/nova/instances/f3dafba3-6472-4921-9ece-b6076172365e/disk.config because it was imported into RBD.
Oct 14 04:59:30 np0005486808 kernel: tapecb526b7-d1: entered promiscuous mode
Oct 14 04:59:30 np0005486808 NetworkManager[44885]: <info>  [1760432370.9149] manager: (tapecb526b7-d1): new Tun device (/org/freedesktop/NetworkManager/Devices/123)
Oct 14 04:59:30 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:30Z|00247|binding|INFO|Claiming lport ecb526b7-d1ad-4a75-b851-482702018258 for this chassis.
Oct 14 04:59:30 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:30Z|00248|binding|INFO|ecb526b7-d1ad-4a75-b851-482702018258: Claiming fa:16:3e:e5:8b:13 10.100.0.9
Oct 14 04:59:30 np0005486808 nova_compute[259627]: 2025-10-14 08:59:30.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:59:30 np0005486808 nova_compute[259627]: 2025-10-14 08:59:30.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:59:30 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:30.941 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:8b:13 10.100.0.9'], port_security=['fa:16:3e:e5:8b:13 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'f3dafba3-6472-4921-9ece-b6076172365e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d87d2d744db48dc8b32bb4bf6847fce', 'neutron:revision_number': '2', 'neutron:security_group_ids': '45598031-691e-407d-b099-ed9702c4e6f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=53047605-2412-4fda-a69f-45911eab5576, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=ecb526b7-d1ad-4a75-b851-482702018258) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:59:30 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:30.943 162547 INFO neutron.agent.ovn.metadata.agent [-] Port ecb526b7-d1ad-4a75-b851-482702018258 in datapath 2322cf7a-0090-40fa-a558-42d84cc6fc2a bound to our chassis
Oct 14 04:59:30 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:30.954 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2322cf7a-0090-40fa-a558-42d84cc6fc2a
Oct 14 04:59:30 np0005486808 systemd-machined[214636]: New machine qemu-35-instance-0000001f.
Oct 14 04:59:30 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:30.971 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cdf15dd9-c83b-45b9-940c-e1ed0fddc2e6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 04:59:30 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:30.973 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2322cf7a-01 in ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 04:59:30 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:30.976 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2322cf7a-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 04:59:30 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:30.976 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[85a73d3a-ae43-4901-9283-9d305ed9be01]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 04:59:30 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:30.977 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[07753aaf-2347-4627-a03a-1587c9b33a6c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 04:59:30 np0005486808 systemd[1]: Started Virtual Machine qemu-35-instance-0000001f.
Oct 14 04:59:30 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:30.990 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[d89ea0b5-20b4-4e94-a0b7-2cfd336e8e02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:31.017 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5fb70221-83b5-47e5-88d7-fcb34f1dd7c2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 04:59:31 np0005486808 systemd-udevd[297243]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 04:59:31 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:31Z|00249|binding|INFO|Setting lport ecb526b7-d1ad-4a75-b851-482702018258 ovn-installed in OVS
Oct 14 04:59:31 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:31Z|00250|binding|INFO|Setting lport ecb526b7-d1ad-4a75-b851-482702018258 up in Southbound
Oct 14 04:59:31 np0005486808 nova_compute[259627]: 2025-10-14 08:59:31.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:59:31 np0005486808 NetworkManager[44885]: <info>  [1760432371.0368] device (tapecb526b7-d1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 04:59:31 np0005486808 NetworkManager[44885]: <info>  [1760432371.0376] device (tapecb526b7-d1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:31.050 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f5cd3ba2-c4ef-4f37-8f3b-a0a7431fd6c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 04:59:31 np0005486808 NetworkManager[44885]: <info>  [1760432371.0596] manager: (tap2322cf7a-00): new Veth device (/org/freedesktop/NetworkManager/Devices/124)
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:31.061 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c5f541bf-12c8-46d7-94e6-f16e2d822556]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 04:59:31 np0005486808 podman[297216]: 2025-10-14 08:59:31.070324592 +0000 UTC m=+0.100775036 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:31.091 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[db3b85c6-bc80-477b-9b9b-2bdd3ee3621d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:31.093 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[fdd61bbb-9fe2-432d-9f2a-d6d36964a369]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 04:59:31 np0005486808 podman[297217]: 2025-10-14 08:59:31.099891378 +0000 UTC m=+0.130903436 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 04:59:31 np0005486808 NetworkManager[44885]: <info>  [1760432371.1185] device (tap2322cf7a-00): carrier: link connected
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:31.124 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d40835f5-dff0-428a-87d2-c10f98dadd80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:31.138 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d915ba8c-8cd7-4fe7-9cd3-faee787d7c2d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2322cf7a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d3:95:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 79], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 614988, 'reachable_time': 21752, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297290, 'error': None, 'target': 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:31.153 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[89f0de31-255f-4af4-9910-6159c984f213]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed3:956c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 614988, 'tstamp': 614988}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297291, 'error': None, 'target': 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:31.166 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6e18a1a8-7263-427b-ae8a-41e30952ee71]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2322cf7a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d3:95:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 79], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 614988, 'reachable_time': 21752, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 297292, 'error': None, 'target': 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:31.195 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[96b7e563-4c76-4aac-98cb-eee325b8d0e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:31.254 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[405230d6-1dce-4b63-b2a5-64b3f4436210]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:31.255 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2322cf7a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:31.255 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:31.256 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2322cf7a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 04:59:31 np0005486808 NetworkManager[44885]: <info>  [1760432371.3106] manager: (tap2322cf7a-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/125)
Oct 14 04:59:31 np0005486808 kernel: tap2322cf7a-00: entered promiscuous mode
Oct 14 04:59:31 np0005486808 nova_compute[259627]: 2025-10-14 08:59:31.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:31.313 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2322cf7a-00, col_values=(('external_ids', {'iface-id': '0616bbde-729a-4cd4-ba39-5fcdf59ece5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 04:59:31 np0005486808 nova_compute[259627]: 2025-10-14 08:59:31.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:59:31 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:31Z|00251|binding|INFO|Releasing lport 0616bbde-729a-4cd4-ba39-5fcdf59ece5e from this chassis (sb_readonly=0)
Oct 14 04:59:31 np0005486808 nova_compute[259627]: 2025-10-14 08:59:31.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:31.334 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2322cf7a-0090-40fa-a558-42d84cc6fc2a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2322cf7a-0090-40fa-a558-42d84cc6fc2a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:31.335 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3c9585f4-8d8e-41e0-a7f5-9528059882af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:31.336 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-2322cf7a-0090-40fa-a558-42d84cc6fc2a
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/2322cf7a-0090-40fa-a558-42d84cc6fc2a.pid.haproxy
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 2322cf7a-0090-40fa-a558-42d84cc6fc2a
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 04:59:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:31.337 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'env', 'PROCESS_TAG=haproxy-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2322cf7a-0090-40fa-a558-42d84cc6fc2a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 04:59:31 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1247: 305 pgs: 305 active+clean; 325 MiB data, 503 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.4 MiB/s wr, 169 op/s
Oct 14 04:59:31 np0005486808 podman[297365]: 2025-10-14 08:59:31.709958279 +0000 UTC m=+0.048049851 container create d8667928f99fbf2e40ad974ca6d51789a3d04a7a18882e08977011ab8d83cce1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009)
Oct 14 04:59:31 np0005486808 systemd[1]: Started libpod-conmon-d8667928f99fbf2e40ad974ca6d51789a3d04a7a18882e08977011ab8d83cce1.scope.
Oct 14 04:59:31 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:59:31 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b83f526bc4bd45d25326b47f32c13811952e9e26bf37d62c64b21ce07a2169ec/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 04:59:31 np0005486808 podman[297365]: 2025-10-14 08:59:31.685435887 +0000 UTC m=+0.023527479 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 04:59:31 np0005486808 podman[297365]: 2025-10-14 08:59:31.788946148 +0000 UTC m=+0.127037810 container init d8667928f99fbf2e40ad974ca6d51789a3d04a7a18882e08977011ab8d83cce1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:59:31 np0005486808 podman[297365]: 2025-10-14 08:59:31.802230275 +0000 UTC m=+0.140321887 container start d8667928f99fbf2e40ad974ca6d51789a3d04a7a18882e08977011ab8d83cce1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Oct 14 04:59:31 np0005486808 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[297380]: [NOTICE]   (297384) : New worker (297386) forked
Oct 14 04:59:31 np0005486808 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[297380]: [NOTICE]   (297384) : Loading success.
Oct 14 04:59:31 np0005486808 nova_compute[259627]: 2025-10-14 08:59:31.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:31 np0005486808 nova_compute[259627]: 2025-10-14 08:59:31.967 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432371.9670866, f3dafba3-6472-4921-9ece-b6076172365e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:59:31 np0005486808 nova_compute[259627]: 2025-10-14 08:59:31.968 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f3dafba3-6472-4921-9ece-b6076172365e] VM Started (Lifecycle Event)#033[00m
Oct 14 04:59:31 np0005486808 nova_compute[259627]: 2025-10-14 08:59:31.996 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:32 np0005486808 nova_compute[259627]: 2025-10-14 08:59:32.000 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432371.9682617, f3dafba3-6472-4921-9ece-b6076172365e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:59:32 np0005486808 nova_compute[259627]: 2025-10-14 08:59:32.000 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f3dafba3-6472-4921-9ece-b6076172365e] VM Paused (Lifecycle Event)#033[00m
Oct 14 04:59:32 np0005486808 nova_compute[259627]: 2025-10-14 08:59:32.021 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:32 np0005486808 nova_compute[259627]: 2025-10-14 08:59:32.024 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:59:32 np0005486808 nova_compute[259627]: 2025-10-14 08:59:32.046 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f3dafba3-6472-4921-9ece-b6076172365e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:59:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 04:59:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:59:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:59:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_08:59:32
Oct 14 04:59:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 04:59:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 04:59:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['.rgw.root', 'backups', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'images', 'volumes', 'vms', 'cephfs.cephfs.meta']
Oct 14 04:59:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 04:59:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:59:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:59:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 04:59:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 04:59:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 04:59:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 04:59:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:59:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 04:59:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:59:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 04:59:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:59:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 04:59:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:59:32 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 04:59:33 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1248: 305 pgs: 305 active+clean; 325 MiB data, 503 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 153 op/s
Oct 14 04:59:33 np0005486808 nova_compute[259627]: 2025-10-14 08:59:33.784 2 DEBUG oslo_concurrency.lockutils [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:33 np0005486808 nova_compute[259627]: 2025-10-14 08:59:33.785 2 DEBUG oslo_concurrency.lockutils [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:33 np0005486808 nova_compute[259627]: 2025-10-14 08:59:33.786 2 DEBUG oslo_concurrency.lockutils [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:33 np0005486808 nova_compute[259627]: 2025-10-14 08:59:33.786 2 DEBUG oslo_concurrency.lockutils [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:33 np0005486808 nova_compute[259627]: 2025-10-14 08:59:33.786 2 DEBUG oslo_concurrency.lockutils [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:33 np0005486808 nova_compute[259627]: 2025-10-14 08:59:33.787 2 INFO nova.compute.manager [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Terminating instance#033[00m
Oct 14 04:59:33 np0005486808 nova_compute[259627]: 2025-10-14 08:59:33.788 2 DEBUG nova.compute.manager [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 04:59:33 np0005486808 kernel: tap2da46865-98 (unregistering): left promiscuous mode
Oct 14 04:59:33 np0005486808 NetworkManager[44885]: <info>  [1760432373.8487] device (tap2da46865-98): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 04:59:33 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:33Z|00252|binding|INFO|Releasing lport 2da46865-98ea-42a7-a5cc-44b5bef36a3d from this chassis (sb_readonly=0)
Oct 14 04:59:33 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:33Z|00253|binding|INFO|Setting lport 2da46865-98ea-42a7-a5cc-44b5bef36a3d down in Southbound
Oct 14 04:59:33 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:33Z|00254|binding|INFO|Removing iface tap2da46865-98 ovn-installed in OVS
Oct 14 04:59:33 np0005486808 nova_compute[259627]: 2025-10-14 08:59:33.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:33.867 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:d2:4a 10.100.0.13'], port_security=['fa:16:3e:28:d2:4a 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3d3a647aa3914555a8a2c5fd6fe7a543', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bf06e246-cec9-4c0e-acc7-df3f274be7f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dfac5894-b2bd-47a3-835a-de3d7c1134b1, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=2da46865-98ea-42a7-a5cc-44b5bef36a3d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:59:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:33.868 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 2da46865-98ea-42a7-a5cc-44b5bef36a3d in datapath c4d50d6a-6686-4b50-b1e5-9f71bae17a99 unbound from our chassis#033[00m
Oct 14 04:59:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:33.870 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c4d50d6a-6686-4b50-b1e5-9f71bae17a99#033[00m
Oct 14 04:59:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:33.890 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d5f481b5-6877-4192-a24f-8d27b008f68e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:33 np0005486808 nova_compute[259627]: 2025-10-14 08:59:33.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:33 np0005486808 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Oct 14 04:59:33 np0005486808 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000001e.scope: Consumed 15.044s CPU time.
Oct 14 04:59:33 np0005486808 systemd-machined[214636]: Machine qemu-30-instance-0000001e terminated.
Oct 14 04:59:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:33.927 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d1532e2c-86ed-4232-9924-242c102eb91e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:33.930 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1af90aac-58ef-4935-9dfb-2e4fc7590ced]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:33.968 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[70e39bb1-8957-48f2-8086-753121bb153f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:33.990 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3558fc08-6ed5-4374-b846-02f532ae2097]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4d50d6a-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:91:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 18, 'rx_bytes': 1000, 'tx_bytes': 948, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 18, 'rx_bytes': 1000, 'tx_bytes': 948, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611643, 'reachable_time': 22346, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297404, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:34.019 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b3d84689-b16d-4b96-9339-9345c03dd01d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc4d50d6a-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611657, 'tstamp': 611657}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297406, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc4d50d6a-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611661, 'tstamp': 611661}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297406, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:34.025 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4d50d6a-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:34 np0005486808 nova_compute[259627]: 2025-10-14 08:59:34.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:34 np0005486808 nova_compute[259627]: 2025-10-14 08:59:34.031 2 INFO nova.virt.libvirt.driver [-] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Instance destroyed successfully.#033[00m
Oct 14 04:59:34 np0005486808 nova_compute[259627]: 2025-10-14 08:59:34.031 2 DEBUG nova.objects.instance [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lazy-loading 'resources' on Instance uuid 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:59:34 np0005486808 nova_compute[259627]: 2025-10-14 08:59:34.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:34.036 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc4d50d6a-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:34.036 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:59:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:34.036 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc4d50d6a-60, col_values=(('external_ids', {'iface-id': '650f034e-5333-49ba-9907-b0409944aee7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:34.037 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:59:34 np0005486808 nova_compute[259627]: 2025-10-14 08:59:34.043 2 DEBUG nova.virt.libvirt.vif [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:58:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-208119549',display_name='tempest-ListServerFiltersTestJSON-instance-208119549',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-208119549',id=30,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:59:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3d3a647aa3914555a8a2c5fd6fe7a543',ramdisk_id='',reservation_id='r-ldwr4ls0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virt
io',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1842486796',owner_user_name='tempest-ListServerFiltersTestJSON-1842486796-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:59:03Z,user_data=None,user_id='56f2f9bf9b064a208d9ce5fe732c4ff7',uuid=82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2da46865-98ea-42a7-a5cc-44b5bef36a3d", "address": "fa:16:3e:28:d2:4a", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2da46865-98", "ovs_interfaceid": "2da46865-98ea-42a7-a5cc-44b5bef36a3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 04:59:34 np0005486808 nova_compute[259627]: 2025-10-14 08:59:34.044 2 DEBUG nova.network.os_vif_util [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converting VIF {"id": "2da46865-98ea-42a7-a5cc-44b5bef36a3d", "address": "fa:16:3e:28:d2:4a", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2da46865-98", "ovs_interfaceid": "2da46865-98ea-42a7-a5cc-44b5bef36a3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:59:34 np0005486808 nova_compute[259627]: 2025-10-14 08:59:34.044 2 DEBUG nova.network.os_vif_util [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:d2:4a,bridge_name='br-int',has_traffic_filtering=True,id=2da46865-98ea-42a7-a5cc-44b5bef36a3d,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2da46865-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:59:34 np0005486808 nova_compute[259627]: 2025-10-14 08:59:34.045 2 DEBUG os_vif [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:d2:4a,bridge_name='br-int',has_traffic_filtering=True,id=2da46865-98ea-42a7-a5cc-44b5bef36a3d,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2da46865-98') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 04:59:34 np0005486808 nova_compute[259627]: 2025-10-14 08:59:34.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:34 np0005486808 nova_compute[259627]: 2025-10-14 08:59:34.047 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2da46865-98, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:34 np0005486808 nova_compute[259627]: 2025-10-14 08:59:34.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:34 np0005486808 nova_compute[259627]: 2025-10-14 08:59:34.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:34 np0005486808 nova_compute[259627]: 2025-10-14 08:59:34.052 2 INFO os_vif [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:d2:4a,bridge_name='br-int',has_traffic_filtering=True,id=2da46865-98ea-42a7-a5cc-44b5bef36a3d,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2da46865-98')#033[00m
Oct 14 04:59:34 np0005486808 nova_compute[259627]: 2025-10-14 08:59:34.431 2 INFO nova.virt.libvirt.driver [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Deleting instance files /var/lib/nova/instances/82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9_del#033[00m
Oct 14 04:59:34 np0005486808 nova_compute[259627]: 2025-10-14 08:59:34.433 2 INFO nova.virt.libvirt.driver [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Deletion of /var/lib/nova/instances/82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9_del complete#033[00m
Oct 14 04:59:34 np0005486808 nova_compute[259627]: 2025-10-14 08:59:34.485 2 INFO nova.compute.manager [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Took 0.70 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 04:59:34 np0005486808 nova_compute[259627]: 2025-10-14 08:59:34.486 2 DEBUG oslo.service.loopingcall [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 04:59:34 np0005486808 nova_compute[259627]: 2025-10-14 08:59:34.486 2 DEBUG nova.compute.manager [-] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 04:59:34 np0005486808 nova_compute[259627]: 2025-10-14 08:59:34.487 2 DEBUG nova.network.neutron [-] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 04:59:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:35.339 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:59:35 np0005486808 nova_compute[259627]: 2025-10-14 08:59:35.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:35.340 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 14 04:59:35 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1249: 305 pgs: 305 active+clean; 246 MiB data, 461 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 155 op/s
Oct 14 04:59:36 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:36Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c3:07:ec 10.100.0.11
Oct 14 04:59:36 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:36Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c3:07:ec 10.100.0.11
Oct 14 04:59:36 np0005486808 nova_compute[259627]: 2025-10-14 08:59:36.879 2 DEBUG nova.network.neutron [-] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:59:36 np0005486808 nova_compute[259627]: 2025-10-14 08:59:36.900 2 INFO nova.compute.manager [-] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Took 2.41 seconds to deallocate network for instance.#033[00m
Oct 14 04:59:36 np0005486808 nova_compute[259627]: 2025-10-14 08:59:36.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:36 np0005486808 nova_compute[259627]: 2025-10-14 08:59:36.956 2 DEBUG oslo_concurrency.lockutils [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:36 np0005486808 nova_compute[259627]: 2025-10-14 08:59:36.957 2 DEBUG oslo_concurrency.lockutils [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:36 np0005486808 nova_compute[259627]: 2025-10-14 08:59:36.980 2 DEBUG nova.compute.manager [req-295c0c76-6dcc-4348-80f2-9b32b046a165 req-e96e52d0-1e20-4b1f-9e47-148f058bafb4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Received event network-vif-deleted-2da46865-98ea-42a7-a5cc-44b5bef36a3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:59:37 np0005486808 nova_compute[259627]: 2025-10-14 08:59:37.089 2 DEBUG oslo_concurrency.processutils [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 04:59:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:59:37 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/898397418' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:59:37 np0005486808 nova_compute[259627]: 2025-10-14 08:59:37.545 2 DEBUG oslo_concurrency.processutils [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:37 np0005486808 nova_compute[259627]: 2025-10-14 08:59:37.554 2 DEBUG nova.compute.provider_tree [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:59:37 np0005486808 nova_compute[259627]: 2025-10-14 08:59:37.574 2 DEBUG nova.scheduler.client.report [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:59:37 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1250: 305 pgs: 305 active+clean; 246 MiB data, 461 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 154 op/s
Oct 14 04:59:37 np0005486808 nova_compute[259627]: 2025-10-14 08:59:37.600 2 DEBUG oslo_concurrency.lockutils [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:37 np0005486808 nova_compute[259627]: 2025-10-14 08:59:37.622 2 INFO nova.scheduler.client.report [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Deleted allocations for instance 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9#033[00m
Oct 14 04:59:37 np0005486808 nova_compute[259627]: 2025-10-14 08:59:37.703 2 DEBUG oslo_concurrency.lockutils [None req-538be77a-8cbb-417f-8264-4417e6727ffe 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.918s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:38 np0005486808 nova_compute[259627]: 2025-10-14 08:59:38.205 2 DEBUG oslo_concurrency.lockutils [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "eb820455-d45c-4331-9363-124f11537f52" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:38 np0005486808 nova_compute[259627]: 2025-10-14 08:59:38.206 2 DEBUG oslo_concurrency.lockutils [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "eb820455-d45c-4331-9363-124f11537f52" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:38 np0005486808 nova_compute[259627]: 2025-10-14 08:59:38.207 2 DEBUG oslo_concurrency.lockutils [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "eb820455-d45c-4331-9363-124f11537f52-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:38 np0005486808 nova_compute[259627]: 2025-10-14 08:59:38.207 2 DEBUG oslo_concurrency.lockutils [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "eb820455-d45c-4331-9363-124f11537f52-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:38 np0005486808 nova_compute[259627]: 2025-10-14 08:59:38.208 2 DEBUG oslo_concurrency.lockutils [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "eb820455-d45c-4331-9363-124f11537f52-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:38 np0005486808 nova_compute[259627]: 2025-10-14 08:59:38.210 2 INFO nova.compute.manager [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Terminating instance#033[00m
Oct 14 04:59:38 np0005486808 nova_compute[259627]: 2025-10-14 08:59:38.211 2 DEBUG nova.compute.manager [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 04:59:38 np0005486808 kernel: tapacc7c80f-88 (unregistering): left promiscuous mode
Oct 14 04:59:38 np0005486808 NetworkManager[44885]: <info>  [1760432378.2700] device (tapacc7c80f-88): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 04:59:38 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:38Z|00255|binding|INFO|Releasing lport acc7c80f-8812-4bbf-93f8-cc3f1556b62a from this chassis (sb_readonly=0)
Oct 14 04:59:38 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:38Z|00256|binding|INFO|Setting lport acc7c80f-8812-4bbf-93f8-cc3f1556b62a down in Southbound
Oct 14 04:59:38 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:38Z|00257|binding|INFO|Removing iface tapacc7c80f-88 ovn-installed in OVS
Oct 14 04:59:38 np0005486808 nova_compute[259627]: 2025-10-14 08:59:38.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:38.296 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:46:6c 10.100.0.14'], port_security=['fa:16:3e:da:46:6c 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'eb820455-d45c-4331-9363-124f11537f52', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3d3a647aa3914555a8a2c5fd6fe7a543', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bf06e246-cec9-4c0e-acc7-df3f274be7f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dfac5894-b2bd-47a3-835a-de3d7c1134b1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=acc7c80f-8812-4bbf-93f8-cc3f1556b62a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:59:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:38.297 162547 INFO neutron.agent.ovn.metadata.agent [-] Port acc7c80f-8812-4bbf-93f8-cc3f1556b62a in datapath c4d50d6a-6686-4b50-b1e5-9f71bae17a99 unbound from our chassis#033[00m
Oct 14 04:59:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:38.298 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c4d50d6a-6686-4b50-b1e5-9f71bae17a99#033[00m
Oct 14 04:59:38 np0005486808 nova_compute[259627]: 2025-10-14 08:59:38.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:38.317 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[52a0cf4d-e06d-4480-b9f4-da3bfcc203c8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:38 np0005486808 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Oct 14 04:59:38 np0005486808 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000001d.scope: Consumed 14.992s CPU time.
Oct 14 04:59:38 np0005486808 systemd-machined[214636]: Machine qemu-32-instance-0000001d terminated.
Oct 14 04:59:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:38.356 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[28e0e229-a065-4da8-a592-3222682b5db0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:38.360 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[138e83e0-dcae-4008-836d-5d207a1c0a2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:38.393 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[18d40bad-ba20-458d-ae43-a5b638d7d564]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:38.414 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[14862412-4949-4674-86f5-32cc57069159]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4d50d6a-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:91:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 20, 'rx_bytes': 1000, 'tx_bytes': 1032, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 20, 'rx_bytes': 1000, 'tx_bytes': 1032, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611643, 'reachable_time': 22346, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297472, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:38.427 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5a29042c-65d6-46e7-b628-ab9dec82238d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc4d50d6a-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611657, 'tstamp': 611657}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297473, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc4d50d6a-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611661, 'tstamp': 611661}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297473, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:38.428 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4d50d6a-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:38 np0005486808 nova_compute[259627]: 2025-10-14 08:59:38.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:38 np0005486808 nova_compute[259627]: 2025-10-14 08:59:38.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:38.433 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc4d50d6a-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:38.434 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:59:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:38.434 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc4d50d6a-60, col_values=(('external_ids', {'iface-id': '650f034e-5333-49ba-9907-b0409944aee7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:38.434 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:59:38 np0005486808 nova_compute[259627]: 2025-10-14 08:59:38.450 2 INFO nova.virt.libvirt.driver [-] [instance: eb820455-d45c-4331-9363-124f11537f52] Instance destroyed successfully.#033[00m
Oct 14 04:59:38 np0005486808 nova_compute[259627]: 2025-10-14 08:59:38.451 2 DEBUG nova.objects.instance [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lazy-loading 'resources' on Instance uuid eb820455-d45c-4331-9363-124f11537f52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:59:38 np0005486808 nova_compute[259627]: 2025-10-14 08:59:38.467 2 DEBUG nova.virt.libvirt.vif [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:58:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-195518745',display_name='tempest-ListServerFiltersTestJSON-instance-195518745',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-195518745',id=29,image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:59:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3d3a647aa3914555a8a2c5fd6fe7a543',ramdisk_id='',reservation_id='r-0kd7c49h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1842486796',owner_user_name='tempest-ListServerFiltersTestJSON-1842486796-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:59:02Z,user_data=None,user_id='56f2f9bf9b064a208d9ce5fe732c4ff7',uuid=eb820455-d45c-4331-9363-124f11537f52,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "acc7c80f-8812-4bbf-93f8-cc3f1556b62a", "address": "fa:16:3e:da:46:6c", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacc7c80f-88", "ovs_interfaceid": "acc7c80f-8812-4bbf-93f8-cc3f1556b62a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 04:59:38 np0005486808 nova_compute[259627]: 2025-10-14 08:59:38.468 2 DEBUG nova.network.os_vif_util [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converting VIF {"id": "acc7c80f-8812-4bbf-93f8-cc3f1556b62a", "address": "fa:16:3e:da:46:6c", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacc7c80f-88", "ovs_interfaceid": "acc7c80f-8812-4bbf-93f8-cc3f1556b62a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:59:38 np0005486808 nova_compute[259627]: 2025-10-14 08:59:38.469 2 DEBUG nova.network.os_vif_util [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:46:6c,bridge_name='br-int',has_traffic_filtering=True,id=acc7c80f-8812-4bbf-93f8-cc3f1556b62a,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacc7c80f-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:59:38 np0005486808 nova_compute[259627]: 2025-10-14 08:59:38.469 2 DEBUG os_vif [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:46:6c,bridge_name='br-int',has_traffic_filtering=True,id=acc7c80f-8812-4bbf-93f8-cc3f1556b62a,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacc7c80f-88') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 04:59:38 np0005486808 nova_compute[259627]: 2025-10-14 08:59:38.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:38 np0005486808 nova_compute[259627]: 2025-10-14 08:59:38.471 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapacc7c80f-88, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:38 np0005486808 nova_compute[259627]: 2025-10-14 08:59:38.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:38 np0005486808 nova_compute[259627]: 2025-10-14 08:59:38.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 04:59:38 np0005486808 nova_compute[259627]: 2025-10-14 08:59:38.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:38 np0005486808 nova_compute[259627]: 2025-10-14 08:59:38.477 2 INFO os_vif [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:46:6c,bridge_name='br-int',has_traffic_filtering=True,id=acc7c80f-8812-4bbf-93f8-cc3f1556b62a,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacc7c80f-88')#033[00m
Oct 14 04:59:38 np0005486808 nova_compute[259627]: 2025-10-14 08:59:38.841 2 INFO nova.virt.libvirt.driver [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Deleting instance files /var/lib/nova/instances/eb820455-d45c-4331-9363-124f11537f52_del#033[00m
Oct 14 04:59:38 np0005486808 nova_compute[259627]: 2025-10-14 08:59:38.842 2 INFO nova.virt.libvirt.driver [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Deletion of /var/lib/nova/instances/eb820455-d45c-4331-9363-124f11537f52_del complete#033[00m
Oct 14 04:59:38 np0005486808 nova_compute[259627]: 2025-10-14 08:59:38.908 2 INFO nova.compute.manager [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Took 0.70 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 04:59:38 np0005486808 nova_compute[259627]: 2025-10-14 08:59:38.909 2 DEBUG oslo.service.loopingcall [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 04:59:38 np0005486808 nova_compute[259627]: 2025-10-14 08:59:38.909 2 DEBUG nova.compute.manager [-] [instance: eb820455-d45c-4331-9363-124f11537f52] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 04:59:38 np0005486808 nova_compute[259627]: 2025-10-14 08:59:38.909 2 DEBUG nova.network.neutron [-] [instance: eb820455-d45c-4331-9363-124f11537f52] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 04:59:38 np0005486808 nova_compute[259627]: 2025-10-14 08:59:38.983 2 DEBUG nova.compute.manager [req-f82e92a1-ce29-48c6-b6f5-0b050536a0f0 req-b0d62a3d-c667-4047-86c2-6d5cb0fca915 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Received event network-vif-plugged-ecb526b7-d1ad-4a75-b851-482702018258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:59:38 np0005486808 nova_compute[259627]: 2025-10-14 08:59:38.984 2 DEBUG oslo_concurrency.lockutils [req-f82e92a1-ce29-48c6-b6f5-0b050536a0f0 req-b0d62a3d-c667-4047-86c2-6d5cb0fca915 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f3dafba3-6472-4921-9ece-b6076172365e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:38 np0005486808 nova_compute[259627]: 2025-10-14 08:59:38.984 2 DEBUG oslo_concurrency.lockutils [req-f82e92a1-ce29-48c6-b6f5-0b050536a0f0 req-b0d62a3d-c667-4047-86c2-6d5cb0fca915 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f3dafba3-6472-4921-9ece-b6076172365e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:38 np0005486808 nova_compute[259627]: 2025-10-14 08:59:38.984 2 DEBUG oslo_concurrency.lockutils [req-f82e92a1-ce29-48c6-b6f5-0b050536a0f0 req-b0d62a3d-c667-4047-86c2-6d5cb0fca915 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f3dafba3-6472-4921-9ece-b6076172365e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:38 np0005486808 nova_compute[259627]: 2025-10-14 08:59:38.984 2 DEBUG nova.compute.manager [req-f82e92a1-ce29-48c6-b6f5-0b050536a0f0 req-b0d62a3d-c667-4047-86c2-6d5cb0fca915 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Processing event network-vif-plugged-ecb526b7-d1ad-4a75-b851-482702018258 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 04:59:38 np0005486808 nova_compute[259627]: 2025-10-14 08:59:38.985 2 DEBUG nova.compute.manager [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Instance event wait completed in 7 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 04:59:38 np0005486808 nova_compute[259627]: 2025-10-14 08:59:38.990 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432378.9899826, f3dafba3-6472-4921-9ece-b6076172365e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:59:38 np0005486808 nova_compute[259627]: 2025-10-14 08:59:38.990 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f3dafba3-6472-4921-9ece-b6076172365e] VM Resumed (Lifecycle Event)#033[00m
Oct 14 04:59:38 np0005486808 nova_compute[259627]: 2025-10-14 08:59:38.992 2 DEBUG nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 04:59:38 np0005486808 nova_compute[259627]: 2025-10-14 08:59:38.997 2 INFO nova.virt.libvirt.driver [-] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Instance spawned successfully.#033[00m
Oct 14 04:59:38 np0005486808 nova_compute[259627]: 2025-10-14 08:59:38.997 2 DEBUG nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 04:59:39 np0005486808 nova_compute[259627]: 2025-10-14 08:59:39.011 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:39 np0005486808 nova_compute[259627]: 2025-10-14 08:59:39.018 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:59:39 np0005486808 nova_compute[259627]: 2025-10-14 08:59:39.021 2 DEBUG nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:39 np0005486808 nova_compute[259627]: 2025-10-14 08:59:39.021 2 DEBUG nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:39 np0005486808 nova_compute[259627]: 2025-10-14 08:59:39.022 2 DEBUG nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:39 np0005486808 nova_compute[259627]: 2025-10-14 08:59:39.022 2 DEBUG nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:39 np0005486808 nova_compute[259627]: 2025-10-14 08:59:39.023 2 DEBUG nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:39 np0005486808 nova_compute[259627]: 2025-10-14 08:59:39.023 2 DEBUG nova.virt.libvirt.driver [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:39 np0005486808 nova_compute[259627]: 2025-10-14 08:59:39.057 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f3dafba3-6472-4921-9ece-b6076172365e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:59:39 np0005486808 nova_compute[259627]: 2025-10-14 08:59:39.097 2 INFO nova.compute.manager [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Took 14.00 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 04:59:39 np0005486808 nova_compute[259627]: 2025-10-14 08:59:39.097 2 DEBUG nova.compute.manager [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:39 np0005486808 nova_compute[259627]: 2025-10-14 08:59:39.176 2 INFO nova.compute.manager [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Took 15.04 seconds to build instance.#033[00m
Oct 14 04:59:39 np0005486808 nova_compute[259627]: 2025-10-14 08:59:39.193 2 DEBUG oslo_concurrency.lockutils [None req-fb8bea6e-d678-4cf3-8237-bcc9bb9911c1 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "f3dafba3-6472-4921-9ece-b6076172365e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.136s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:39 np0005486808 nova_compute[259627]: 2025-10-14 08:59:39.540 2 DEBUG nova.network.neutron [-] [instance: eb820455-d45c-4331-9363-124f11537f52] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:59:39 np0005486808 nova_compute[259627]: 2025-10-14 08:59:39.559 2 INFO nova.compute.manager [-] [instance: eb820455-d45c-4331-9363-124f11537f52] Took 0.65 seconds to deallocate network for instance.#033[00m
Oct 14 04:59:39 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1251: 305 pgs: 305 active+clean; 246 MiB data, 461 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 129 op/s
Oct 14 04:59:39 np0005486808 nova_compute[259627]: 2025-10-14 08:59:39.596 2 DEBUG oslo_concurrency.lockutils [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:39 np0005486808 nova_compute[259627]: 2025-10-14 08:59:39.597 2 DEBUG oslo_concurrency.lockutils [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:39 np0005486808 nova_compute[259627]: 2025-10-14 08:59:39.640 2 DEBUG nova.compute.manager [req-6fe65eb0-0eb0-4b32-9175-75c40c74e297 req-186fafee-5646-4f3c-8502-30bf4106e92e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: eb820455-d45c-4331-9363-124f11537f52] Received event network-vif-deleted-acc7c80f-8812-4bbf-93f8-cc3f1556b62a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:59:39 np0005486808 nova_compute[259627]: 2025-10-14 08:59:39.703 2 DEBUG oslo_concurrency.processutils [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:40 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:59:40 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1694684315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:59:40 np0005486808 nova_compute[259627]: 2025-10-14 08:59:40.202 2 DEBUG oslo_concurrency.processutils [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:40 np0005486808 nova_compute[259627]: 2025-10-14 08:59:40.208 2 DEBUG nova.compute.provider_tree [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:59:40 np0005486808 nova_compute[259627]: 2025-10-14 08:59:40.237 2 DEBUG nova.scheduler.client.report [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:59:40 np0005486808 nova_compute[259627]: 2025-10-14 08:59:40.304 2 DEBUG oslo_concurrency.lockutils [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:40 np0005486808 nova_compute[259627]: 2025-10-14 08:59:40.338 2 INFO nova.scheduler.client.report [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Deleted allocations for instance eb820455-d45c-4331-9363-124f11537f52#033[00m
Oct 14 04:59:40 np0005486808 nova_compute[259627]: 2025-10-14 08:59:40.415 2 DEBUG oslo_concurrency.lockutils [None req-6f81a48d-cf5a-4748-9ee2-f67e7c6b1865 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "eb820455-d45c-4331-9363-124f11537f52" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.209s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:41.342 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:41 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1252: 305 pgs: 305 active+clean; 167 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 1.8 MiB/s wr, 241 op/s
Oct 14 04:59:41 np0005486808 podman[297527]: 2025-10-14 08:59:41.687309592 +0000 UTC m=+0.075813803 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 04:59:41 np0005486808 podman[297526]: 2025-10-14 08:59:41.77725872 +0000 UTC m=+0.176621418 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 04:59:41 np0005486808 nova_compute[259627]: 2025-10-14 08:59:41.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:42 np0005486808 nova_compute[259627]: 2025-10-14 08:59:42.407 2 DEBUG nova.compute.manager [req-25dadb79-c3ec-4c8e-815f-3ed163628b03 req-8f876e97-8c20-4fa3-a853-2e295847d81b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Received event network-vif-plugged-ecb526b7-d1ad-4a75-b851-482702018258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:59:42 np0005486808 nova_compute[259627]: 2025-10-14 08:59:42.408 2 DEBUG oslo_concurrency.lockutils [req-25dadb79-c3ec-4c8e-815f-3ed163628b03 req-8f876e97-8c20-4fa3-a853-2e295847d81b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f3dafba3-6472-4921-9ece-b6076172365e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:42 np0005486808 nova_compute[259627]: 2025-10-14 08:59:42.408 2 DEBUG oslo_concurrency.lockutils [req-25dadb79-c3ec-4c8e-815f-3ed163628b03 req-8f876e97-8c20-4fa3-a853-2e295847d81b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f3dafba3-6472-4921-9ece-b6076172365e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:42 np0005486808 nova_compute[259627]: 2025-10-14 08:59:42.409 2 DEBUG oslo_concurrency.lockutils [req-25dadb79-c3ec-4c8e-815f-3ed163628b03 req-8f876e97-8c20-4fa3-a853-2e295847d81b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f3dafba3-6472-4921-9ece-b6076172365e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:42 np0005486808 nova_compute[259627]: 2025-10-14 08:59:42.409 2 DEBUG nova.compute.manager [req-25dadb79-c3ec-4c8e-815f-3ed163628b03 req-8f876e97-8c20-4fa3-a853-2e295847d81b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] No waiting events found dispatching network-vif-plugged-ecb526b7-d1ad-4a75-b851-482702018258 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:59:42 np0005486808 nova_compute[259627]: 2025-10-14 08:59:42.409 2 WARNING nova.compute.manager [req-25dadb79-c3ec-4c8e-815f-3ed163628b03 req-8f876e97-8c20-4fa3-a853-2e295847d81b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Received unexpected event network-vif-plugged-ecb526b7-d1ad-4a75-b851-482702018258 for instance with vm_state active and task_state None.#033[00m
Oct 14 04:59:42 np0005486808 nova_compute[259627]: 2025-10-14 08:59:42.446 2 DEBUG oslo_concurrency.lockutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:42 np0005486808 nova_compute[259627]: 2025-10-14 08:59:42.447 2 DEBUG oslo_concurrency.lockutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:42 np0005486808 nova_compute[259627]: 2025-10-14 08:59:42.464 2 DEBUG nova.compute.manager [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 04:59:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 04:59:42 np0005486808 nova_compute[259627]: 2025-10-14 08:59:42.540 2 DEBUG oslo_concurrency.lockutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:42 np0005486808 nova_compute[259627]: 2025-10-14 08:59:42.541 2 DEBUG oslo_concurrency.lockutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:42 np0005486808 nova_compute[259627]: 2025-10-14 08:59:42.561 2 DEBUG nova.virt.hardware [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 04:59:42 np0005486808 nova_compute[259627]: 2025-10-14 08:59:42.562 2 INFO nova.compute.claims [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 04:59:42 np0005486808 nova_compute[259627]: 2025-10-14 08:59:42.568 2 DEBUG oslo_concurrency.lockutils [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:42 np0005486808 nova_compute[259627]: 2025-10-14 08:59:42.569 2 DEBUG oslo_concurrency.lockutils [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:42 np0005486808 nova_compute[259627]: 2025-10-14 08:59:42.570 2 DEBUG oslo_concurrency.lockutils [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:42 np0005486808 nova_compute[259627]: 2025-10-14 08:59:42.571 2 DEBUG oslo_concurrency.lockutils [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:42 np0005486808 nova_compute[259627]: 2025-10-14 08:59:42.572 2 DEBUG oslo_concurrency.lockutils [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:42 np0005486808 nova_compute[259627]: 2025-10-14 08:59:42.573 2 INFO nova.compute.manager [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Terminating instance#033[00m
Oct 14 04:59:42 np0005486808 nova_compute[259627]: 2025-10-14 08:59:42.575 2 DEBUG nova.compute.manager [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 04:59:42 np0005486808 kernel: tap0b16cd6a-fe (unregistering): left promiscuous mode
Oct 14 04:59:42 np0005486808 NetworkManager[44885]: <info>  [1760432382.6275] device (tap0b16cd6a-fe): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 04:59:42 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:42Z|00258|binding|INFO|Releasing lport 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 from this chassis (sb_readonly=0)
Oct 14 04:59:42 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:42Z|00259|binding|INFO|Setting lport 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 down in Southbound
Oct 14 04:59:42 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:42Z|00260|binding|INFO|Removing iface tap0b16cd6a-fe ovn-installed in OVS
Oct 14 04:59:42 np0005486808 nova_compute[259627]: 2025-10-14 08:59:42.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:42 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:42.646 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:07:ec 10.100.0.11'], port_security=['fa:16:3e:c3:07:ec 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '27fa4cf8-c08c-46a2-af8f-17c8980a2317', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3d3a647aa3914555a8a2c5fd6fe7a543', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'bf06e246-cec9-4c0e-acc7-df3f274be7f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dfac5894-b2bd-47a3-835a-de3d7c1134b1, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=0b16cd6a-fe42-4a54-8bbe-810915fcaa93) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:59:42 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:42.647 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 0b16cd6a-fe42-4a54-8bbe-810915fcaa93 in datapath c4d50d6a-6686-4b50-b1e5-9f71bae17a99 unbound from our chassis#033[00m
Oct 14 04:59:42 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:42.649 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c4d50d6a-6686-4b50-b1e5-9f71bae17a99, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 04:59:42 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:42.650 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bdc4e2ef-a626-460a-8152-098706dd1430]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:42 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:42.651 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99 namespace which is not needed anymore#033[00m
Oct 14 04:59:42 np0005486808 nova_compute[259627]: 2025-10-14 08:59:42.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:42 np0005486808 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Oct 14 04:59:42 np0005486808 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000001b.scope: Consumed 13.046s CPU time.
Oct 14 04:59:42 np0005486808 systemd-machined[214636]: Machine qemu-34-instance-0000001b terminated.
Oct 14 04:59:42 np0005486808 nova_compute[259627]: 2025-10-14 08:59:42.727 2 DEBUG nova.objects.instance [None req-af3d57db-6051-4643-905e-4c951fe186b9 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lazy-loading 'pci_devices' on Instance uuid f3dafba3-6472-4921-9ece-b6076172365e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:59:42 np0005486808 nova_compute[259627]: 2025-10-14 08:59:42.740 2 DEBUG oslo_concurrency.processutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:42 np0005486808 nova_compute[259627]: 2025-10-14 08:59:42.791 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432382.791429, f3dafba3-6472-4921-9ece-b6076172365e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:59:42 np0005486808 nova_compute[259627]: 2025-10-14 08:59:42.792 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f3dafba3-6472-4921-9ece-b6076172365e] VM Paused (Lifecycle Event)#033[00m
Oct 14 04:59:42 np0005486808 neutron-haproxy-ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99[294112]: [NOTICE]   (294131) : haproxy version is 2.8.14-c23fe91
Oct 14 04:59:42 np0005486808 neutron-haproxy-ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99[294112]: [NOTICE]   (294131) : path to executable is /usr/sbin/haproxy
Oct 14 04:59:42 np0005486808 neutron-haproxy-ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99[294112]: [WARNING]  (294131) : Exiting Master process...
Oct 14 04:59:42 np0005486808 neutron-haproxy-ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99[294112]: [ALERT]    (294131) : Current worker (294143) exited with code 143 (Terminated)
Oct 14 04:59:42 np0005486808 neutron-haproxy-ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99[294112]: [WARNING]  (294131) : All workers exited. Exiting... (0)
Oct 14 04:59:42 np0005486808 systemd[1]: libpod-212876b675a73a1ca920ec438915fd467b1dd5dbeb4fe632879e077d977c7782.scope: Deactivated successfully.
Oct 14 04:59:42 np0005486808 podman[297593]: 2025-10-14 08:59:42.811001924 +0000 UTC m=+0.062033583 container died 212876b675a73a1ca920ec438915fd467b1dd5dbeb4fe632879e077d977c7782 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 14 04:59:42 np0005486808 nova_compute[259627]: 2025-10-14 08:59:42.814 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:42 np0005486808 nova_compute[259627]: 2025-10-14 08:59:42.828 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:59:42 np0005486808 nova_compute[259627]: 2025-10-14 08:59:42.831 2 INFO nova.virt.libvirt.driver [-] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Instance destroyed successfully.#033[00m
Oct 14 04:59:42 np0005486808 nova_compute[259627]: 2025-10-14 08:59:42.832 2 DEBUG nova.objects.instance [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lazy-loading 'resources' on Instance uuid 27fa4cf8-c08c-46a2-af8f-17c8980a2317 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:59:42 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-212876b675a73a1ca920ec438915fd467b1dd5dbeb4fe632879e077d977c7782-userdata-shm.mount: Deactivated successfully.
Oct 14 04:59:42 np0005486808 systemd[1]: var-lib-containers-storage-overlay-a7915092ded73a0a1e93e8fda5fd9538a3414c7bf089afe2c761ba10dc82e95d-merged.mount: Deactivated successfully.
Oct 14 04:59:42 np0005486808 nova_compute[259627]: 2025-10-14 08:59:42.852 2 DEBUG nova.virt.libvirt.vif [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:58:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1883268496',display_name='tempest-ListServerFiltersTestJSON-instance-1883268496',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1883268496',id=27,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:59:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3d3a647aa3914555a8a2c5fd6fe7a543',ramdisk_id='',reservation_id='r-7u6hc4e7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='v
irtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1842486796',owner_user_name='tempest-ListServerFiltersTestJSON-1842486796-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:59:24Z,user_data=None,user_id='56f2f9bf9b064a208d9ce5fe732c4ff7',uuid=27fa4cf8-c08c-46a2-af8f-17c8980a2317,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "address": "fa:16:3e:c3:07:ec", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b16cd6a-fe", "ovs_interfaceid": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 04:59:42 np0005486808 nova_compute[259627]: 2025-10-14 08:59:42.852 2 DEBUG nova.network.os_vif_util [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converting VIF {"id": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "address": "fa:16:3e:c3:07:ec", "network": {"id": "c4d50d6a-6686-4b50-b1e5-9f71bae17a99", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-414540769-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d3a647aa3914555a8a2c5fd6fe7a543", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b16cd6a-fe", "ovs_interfaceid": "0b16cd6a-fe42-4a54-8bbe-810915fcaa93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:59:42 np0005486808 nova_compute[259627]: 2025-10-14 08:59:42.853 2 DEBUG nova.network.os_vif_util [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:07:ec,bridge_name='br-int',has_traffic_filtering=True,id=0b16cd6a-fe42-4a54-8bbe-810915fcaa93,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b16cd6a-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:59:42 np0005486808 nova_compute[259627]: 2025-10-14 08:59:42.853 2 DEBUG os_vif [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:07:ec,bridge_name='br-int',has_traffic_filtering=True,id=0b16cd6a-fe42-4a54-8bbe-810915fcaa93,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b16cd6a-fe') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 04:59:42 np0005486808 nova_compute[259627]: 2025-10-14 08:59:42.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:42 np0005486808 nova_compute[259627]: 2025-10-14 08:59:42.857 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b16cd6a-fe, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:42 np0005486808 nova_compute[259627]: 2025-10-14 08:59:42.858 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f3dafba3-6472-4921-9ece-b6076172365e] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Oct 14 04:59:42 np0005486808 nova_compute[259627]: 2025-10-14 08:59:42.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:42 np0005486808 nova_compute[259627]: 2025-10-14 08:59:42.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:42 np0005486808 nova_compute[259627]: 2025-10-14 08:59:42.862 2 INFO os_vif [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:07:ec,bridge_name='br-int',has_traffic_filtering=True,id=0b16cd6a-fe42-4a54-8bbe-810915fcaa93,network=Network(c4d50d6a-6686-4b50-b1e5-9f71bae17a99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b16cd6a-fe')#033[00m
Oct 14 04:59:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 04:59:42 np0005486808 podman[297593]: 2025-10-14 08:59:42.886388905 +0000 UTC m=+0.137420564 container cleanup 212876b675a73a1ca920ec438915fd467b1dd5dbeb4fe632879e077d977c7782 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 04:59:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:59:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 04:59:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:59:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001106859860324282 of space, bias 1.0, pg target 0.3320579580972846 quantized to 32 (current 32)
Oct 14 04:59:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:59:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:59:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:59:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:59:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:59:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 14 04:59:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:59:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 04:59:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:59:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:59:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:59:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 04:59:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:59:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 04:59:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:59:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 04:59:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 04:59:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 04:59:42 np0005486808 systemd[1]: libpod-conmon-212876b675a73a1ca920ec438915fd467b1dd5dbeb4fe632879e077d977c7782.scope: Deactivated successfully.
Oct 14 04:59:43 np0005486808 podman[297662]: 2025-10-14 08:59:43.014952122 +0000 UTC m=+0.098590552 container remove 212876b675a73a1ca920ec438915fd467b1dd5dbeb4fe632879e077d977c7782 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 14 04:59:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.020 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ef373034-ae03-44c0-9b9a-da327b0a0b96]: (4, ('Tue Oct 14 08:59:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99 (212876b675a73a1ca920ec438915fd467b1dd5dbeb4fe632879e077d977c7782)\n212876b675a73a1ca920ec438915fd467b1dd5dbeb4fe632879e077d977c7782\nTue Oct 14 08:59:42 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99 (212876b675a73a1ca920ec438915fd467b1dd5dbeb4fe632879e077d977c7782)\n212876b675a73a1ca920ec438915fd467b1dd5dbeb4fe632879e077d977c7782\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.023 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dc8ed956-fb48-43c6-996e-bee168394d57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.023 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4d50d6a-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:43 np0005486808 kernel: tapc4d50d6a-60: left promiscuous mode
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.042 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[46ce3038-3162-436e-874b-9e2f3343a39b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.067 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fa8f1f4b-2597-4ff3-a1e7-2bc94b92d182]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.068 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e0f60605-05fe-411f-9357-fedff9a1b730]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.087 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2b9c246a-9c81-4d3a-acfb-83d22adfdb06]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611635, 'reachable_time': 40378, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297684, 'error': None, 'target': 'ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:43 np0005486808 systemd[1]: run-netns-ovnmeta\x2dc4d50d6a\x2d6686\x2d4b50\x2db1e5\x2d9f71bae17a99.mount: Deactivated successfully.
Oct 14 04:59:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.092 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c4d50d6a-6686-4b50-b1e5-9f71bae17a99 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 04:59:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.092 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[9b22b370-f459-439b-a91f-96f50969cabc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.113 2 DEBUG oslo_concurrency.lockutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Acquiring lock "1af6c158-005b-4f3c-9044-87158e57378d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.113 2 DEBUG oslo_concurrency.lockutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Lock "1af6c158-005b-4f3c-9044-87158e57378d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:43 np0005486808 kernel: tapecb526b7-d1 (unregistering): left promiscuous mode
Oct 14 04:59:43 np0005486808 NetworkManager[44885]: <info>  [1760432383.1474] device (tapecb526b7-d1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.147 2 DEBUG nova.compute.manager [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:43 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:43Z|00261|binding|INFO|Releasing lport ecb526b7-d1ad-4a75-b851-482702018258 from this chassis (sb_readonly=0)
Oct 14 04:59:43 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:43Z|00262|binding|INFO|Setting lport ecb526b7-d1ad-4a75-b851-482702018258 down in Southbound
Oct 14 04:59:43 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:43Z|00263|binding|INFO|Removing iface tapecb526b7-d1 ovn-installed in OVS
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.178 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:8b:13 10.100.0.9'], port_security=['fa:16:3e:e5:8b:13 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'f3dafba3-6472-4921-9ece-b6076172365e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d87d2d744db48dc8b32bb4bf6847fce', 'neutron:revision_number': '4', 'neutron:security_group_ids': '45598031-691e-407d-b099-ed9702c4e6f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=53047605-2412-4fda-a69f-45911eab5576, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=ecb526b7-d1ad-4a75-b851-482702018258) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:59:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.180 162547 INFO neutron.agent.ovn.metadata.agent [-] Port ecb526b7-d1ad-4a75-b851-482702018258 in datapath 2322cf7a-0090-40fa-a558-42d84cc6fc2a unbound from our chassis#033[00m
Oct 14 04:59:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.182 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2322cf7a-0090-40fa-a558-42d84cc6fc2a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 04:59:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.183 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[25dcbd94-66f2-49bb-a2c7-35cd5efbc599]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.184 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a namespace which is not needed anymore#033[00m
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:43 np0005486808 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000001f.scope: Deactivated successfully.
Oct 14 04:59:43 np0005486808 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000001f.scope: Consumed 4.728s CPU time.
Oct 14 04:59:43 np0005486808 systemd-machined[214636]: Machine qemu-35-instance-0000001f terminated.
Oct 14 04:59:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:59:43 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3953465074' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.228 2 DEBUG oslo_concurrency.processutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.233 2 DEBUG nova.compute.provider_tree [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.235 2 DEBUG oslo_concurrency.lockutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.247 2 DEBUG nova.scheduler.client.report [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.273 2 DEBUG oslo_concurrency.lockutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.274 2 DEBUG nova.compute.manager [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.276 2 DEBUG oslo_concurrency.lockutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.041s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.286 2 DEBUG nova.virt.hardware [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.287 2 INFO nova.compute.claims [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 04:59:43 np0005486808 NetworkManager[44885]: <info>  [1760432383.3065] manager: (tapecb526b7-d1): new Tun device (/org/freedesktop/NetworkManager/Devices/126)
Oct 14 04:59:43 np0005486808 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[297380]: [NOTICE]   (297384) : haproxy version is 2.8.14-c23fe91
Oct 14 04:59:43 np0005486808 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[297380]: [NOTICE]   (297384) : path to executable is /usr/sbin/haproxy
Oct 14 04:59:43 np0005486808 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[297380]: [WARNING]  (297384) : Exiting Master process...
Oct 14 04:59:43 np0005486808 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[297380]: [WARNING]  (297384) : Exiting Master process...
Oct 14 04:59:43 np0005486808 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[297380]: [ALERT]    (297384) : Current worker (297386) exited with code 143 (Terminated)
Oct 14 04:59:43 np0005486808 neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a[297380]: [WARNING]  (297384) : All workers exited. Exiting... (0)
Oct 14 04:59:43 np0005486808 systemd[1]: libpod-d8667928f99fbf2e40ad974ca6d51789a3d04a7a18882e08977011ab8d83cce1.scope: Deactivated successfully.
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.320 2 DEBUG nova.compute.manager [None req-af3d57db-6051-4643-905e-4c951fe186b9 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:43 np0005486808 podman[297709]: 2025-10-14 08:59:43.323286704 +0000 UTC m=+0.050051710 container died d8667928f99fbf2e40ad974ca6d51789a3d04a7a18882e08977011ab8d83cce1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.338 2 DEBUG nova.compute.manager [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.339 2 DEBUG nova.network.neutron [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 04:59:43 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d8667928f99fbf2e40ad974ca6d51789a3d04a7a18882e08977011ab8d83cce1-userdata-shm.mount: Deactivated successfully.
Oct 14 04:59:43 np0005486808 systemd[1]: var-lib-containers-storage-overlay-b83f526bc4bd45d25326b47f32c13811952e9e26bf37d62c64b21ce07a2169ec-merged.mount: Deactivated successfully.
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.366 2 INFO nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 04:59:43 np0005486808 podman[297709]: 2025-10-14 08:59:43.368875793 +0000 UTC m=+0.095640799 container cleanup d8667928f99fbf2e40ad974ca6d51789a3d04a7a18882e08977011ab8d83cce1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 14 04:59:43 np0005486808 systemd[1]: libpod-conmon-d8667928f99fbf2e40ad974ca6d51789a3d04a7a18882e08977011ab8d83cce1.scope: Deactivated successfully.
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.398 2 DEBUG nova.compute.manager [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 04:59:43 np0005486808 podman[297750]: 2025-10-14 08:59:43.436176046 +0000 UTC m=+0.043205702 container remove d8667928f99fbf2e40ad974ca6d51789a3d04a7a18882e08977011ab8d83cce1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 04:59:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.442 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[12f2f189-796a-4039-b6f7-aafca1756f51]: (4, ('Tue Oct 14 08:59:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a (d8667928f99fbf2e40ad974ca6d51789a3d04a7a18882e08977011ab8d83cce1)\nd8667928f99fbf2e40ad974ca6d51789a3d04a7a18882e08977011ab8d83cce1\nTue Oct 14 08:59:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a (d8667928f99fbf2e40ad974ca6d51789a3d04a7a18882e08977011ab8d83cce1)\nd8667928f99fbf2e40ad974ca6d51789a3d04a7a18882e08977011ab8d83cce1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.444 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f746ef4a-80be-414b-baa8-60df8e37b995]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.447 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2322cf7a-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.450 2 INFO nova.virt.libvirt.driver [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Deleting instance files /var/lib/nova/instances/27fa4cf8-c08c-46a2-af8f-17c8980a2317_del#033[00m
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.450 2 INFO nova.virt.libvirt.driver [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Deletion of /var/lib/nova/instances/27fa4cf8-c08c-46a2-af8f-17c8980a2317_del complete#033[00m
Oct 14 04:59:43 np0005486808 kernel: tap2322cf7a-00: left promiscuous mode
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.480 2 DEBUG nova.compute.manager [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.481 2 DEBUG nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.481 2 INFO nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Creating image(s)#033[00m
Oct 14 04:59:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.490 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[723f6a83-03ee-4014-aea3-5ec38a4f6e3b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.509 2 DEBUG nova.storage.rbd_utils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:59:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.517 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a827162c-7ae1-41de-8450-f1685bd67851]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.519 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[83dc47b2-d6e1-425a-98c2-ccb820e73cc9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.534 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[41912d64-a70a-4098-b90f-fb5ad0a31c8b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 614980, 'reachable_time': 40063, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297798, 'error': None, 'target': 'ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.536 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2322cf7a-0090-40fa-a558-42d84cc6fc2a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 04:59:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:43.537 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[837c011d-1c8b-4767-aaa1-69dd779bca68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.540 2 DEBUG nova.storage.rbd_utils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.561 2 DEBUG nova.storage.rbd_utils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.564 2 DEBUG oslo_concurrency.processutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:43 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1253: 305 pgs: 305 active+clean; 167 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 27 KiB/s wr, 165 op/s
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.594 2 DEBUG nova.policy [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '56001fe1c9fc432e923f8c57058754db', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ed7ee17abdbe419cb7d7fd0da2cd2068', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.598 2 DEBUG oslo_concurrency.processutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.647 2 DEBUG oslo_concurrency.processutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.648 2 DEBUG oslo_concurrency.lockutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.649 2 DEBUG oslo_concurrency.lockutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.649 2 DEBUG oslo_concurrency.lockutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.672 2 DEBUG nova.storage.rbd_utils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.675 2 DEBUG oslo_concurrency.processutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.715 2 INFO nova.compute.manager [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Took 1.14 seconds to destroy the instance on the hypervisor.
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.716 2 DEBUG oslo.service.loopingcall [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.717 2 DEBUG nova.compute.manager [-] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.718 2 DEBUG nova.network.neutron [-] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 04:59:43 np0005486808 systemd[1]: run-netns-ovnmeta\x2d2322cf7a\x2d0090\x2d40fa\x2da558\x2d42d84cc6fc2a.mount: Deactivated successfully.
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.949 2 DEBUG oslo_concurrency.processutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.274s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:59:43 np0005486808 nova_compute[259627]: 2025-10-14 08:59:43.997 2 DEBUG nova.storage.rbd_utils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] resizing rbd image 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 04:59:44 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:59:44 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2547583162' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.086 2 DEBUG nova.objects.instance [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lazy-loading 'migration_context' on Instance uuid 310ebd88-5fe0-40ad-99dd-c3a1b410d357 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.090 2 DEBUG oslo_concurrency.processutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.095 2 DEBUG nova.compute.provider_tree [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.101 2 DEBUG nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.102 2 DEBUG nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Ensure instance console log exists: /var/lib/nova/instances/310ebd88-5fe0-40ad-99dd-c3a1b410d357/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.102 2 DEBUG oslo_concurrency.lockutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.102 2 DEBUG oslo_concurrency.lockutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.103 2 DEBUG oslo_concurrency.lockutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.108 2 DEBUG nova.scheduler.client.report [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.137 2 DEBUG oslo_concurrency.lockutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.138 2 DEBUG nova.compute.manager [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.185 2 DEBUG nova.compute.manager [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.185 2 DEBUG nova.network.neutron [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.202 2 INFO nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.219 2 DEBUG nova.compute.manager [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.329 2 DEBUG nova.compute.manager [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.332 2 DEBUG nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.332 2 INFO nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Creating image(s)
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.361 2 DEBUG nova.storage.rbd_utils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] rbd image 1af6c158-005b-4f3c-9044-87158e57378d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.386 2 DEBUG nova.storage.rbd_utils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] rbd image 1af6c158-005b-4f3c-9044-87158e57378d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.406 2 DEBUG nova.storage.rbd_utils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] rbd image 1af6c158-005b-4f3c-9044-87158e57378d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.410 2 DEBUG oslo_concurrency.processutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.437 2 DEBUG nova.network.neutron [-] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.441 2 DEBUG nova.policy [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9ec2f781b62446cb98129707144b9d37', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fd273d79854242779e57eece9a65f7c0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.447 2 DEBUG nova.network.neutron [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Successfully created port: 60379992-d75d-4eff-a6bb-5d1615f35475 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.455 2 DEBUG oslo_concurrency.lockutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "b932e3d1-4cf6-4934-9eec-c93284b17b43" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.455 2 DEBUG oslo_concurrency.lockutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "b932e3d1-4cf6-4934-9eec-c93284b17b43" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.486 2 INFO nova.compute.manager [-] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Took 0.77 seconds to deallocate network for instance.
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.492 2 DEBUG nova.compute.manager [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.494 2 DEBUG oslo_concurrency.processutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.495 2 DEBUG oslo_concurrency.lockutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.495 2 DEBUG oslo_concurrency.lockutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.495 2 DEBUG oslo_concurrency.lockutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.519 2 DEBUG nova.storage.rbd_utils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] rbd image 1af6c158-005b-4f3c-9044-87158e57378d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.522 2 DEBUG oslo_concurrency.processutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 1af6c158-005b-4f3c-9044-87158e57378d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.564 2 DEBUG nova.compute.manager [None req-da42a247-7895-4344-aafa-8e3b2e7e24e8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.570 2 DEBUG oslo_concurrency.lockutils [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.571 2 DEBUG oslo_concurrency.lockutils [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.592 2 DEBUG oslo_concurrency.lockutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.614 2 INFO nova.compute.manager [None req-da42a247-7895-4344-aafa-8e3b2e7e24e8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] instance snapshotting
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.615 2 WARNING nova.compute.manager [None req-da42a247-7895-4344-aafa-8e3b2e7e24e8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] trying to snapshot a non-running instance: (state: 4 expected: 1)
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.667 2 DEBUG nova.compute.manager [req-f27054b8-bc5c-41bd-b0b7-23084921ea31 req-23d8353f-0b88-4e67-9e33-fe698629c3f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Received event network-vif-deleted-0b16cd6a-fe42-4a54-8bbe-810915fcaa93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.698 2 DEBUG oslo_concurrency.processutils [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.752 2 DEBUG oslo_concurrency.processutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 1af6c158-005b-4f3c-9044-87158e57378d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.230s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.799 2 DEBUG nova.storage.rbd_utils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] resizing rbd image 1af6c158-005b-4f3c-9044-87158e57378d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.831 2 INFO nova.virt.libvirt.driver [None req-da42a247-7895-4344-aafa-8e3b2e7e24e8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Beginning cold snapshot process
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.883 2 DEBUG nova.objects.instance [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Lazy-loading 'migration_context' on Instance uuid 1af6c158-005b-4f3c-9044-87158e57378d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.916 2 DEBUG nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.917 2 DEBUG nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Ensure instance console log exists: /var/lib/nova/instances/1af6c158-005b-4f3c-9044-87158e57378d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.917 2 DEBUG oslo_concurrency.lockutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.918 2 DEBUG oslo_concurrency.lockutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:59:44 np0005486808 nova_compute[259627]: 2025-10-14 08:59:44.918 2 DEBUG oslo_concurrency.lockutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:59:45 np0005486808 nova_compute[259627]: 2025-10-14 08:59:45.011 2 DEBUG nova.virt.libvirt.imagebackend [None req-da42a247-7895-4344-aafa-8e3b2e7e24e8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] No parent info for a4789543-f429-47d7-9f79-80a9d90a59f9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 14 04:59:45 np0005486808 nova_compute[259627]: 2025-10-14 08:59:45.030 2 DEBUG nova.network.neutron [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Successfully created port: 36158ae1-8367-4859-a407-565fde315649 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 04:59:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:59:45 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1211805633' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:59:45 np0005486808 nova_compute[259627]: 2025-10-14 08:59:45.108 2 DEBUG oslo_concurrency.processutils [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 04:59:45 np0005486808 nova_compute[259627]: 2025-10-14 08:59:45.113 2 DEBUG nova.compute.provider_tree [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 04:59:45 np0005486808 nova_compute[259627]: 2025-10-14 08:59:45.153 2 DEBUG nova.scheduler.client.report [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 04:59:45 np0005486808 nova_compute[259627]: 2025-10-14 08:59:45.172 2 DEBUG nova.network.neutron [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Successfully updated port: 60379992-d75d-4eff-a6bb-5d1615f35475 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 04:59:45 np0005486808 nova_compute[259627]: 2025-10-14 08:59:45.178 2 DEBUG oslo_concurrency.lockutils [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 04:59:45 np0005486808 nova_compute[259627]: 2025-10-14 08:59:45.180 2 DEBUG oslo_concurrency.lockutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 04:59:45 np0005486808 nova_compute[259627]: 2025-10-14 08:59:45.187 2 DEBUG nova.virt.hardware [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 04:59:45 np0005486808 nova_compute[259627]: 2025-10-14 08:59:45.187 2 INFO nova.compute.claims [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Claim successful on node compute-0.ctlplane.example.com
Oct 14 04:59:45 np0005486808 nova_compute[259627]: 2025-10-14 08:59:45.191 2 DEBUG oslo_concurrency.lockutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "refresh_cache-310ebd88-5fe0-40ad-99dd-c3a1b410d357" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 04:59:45 np0005486808 nova_compute[259627]: 2025-10-14 08:59:45.191 2 DEBUG oslo_concurrency.lockutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquired lock "refresh_cache-310ebd88-5fe0-40ad-99dd-c3a1b410d357" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 04:59:45 np0005486808 nova_compute[259627]: 2025-10-14 08:59:45.191 2 DEBUG nova.network.neutron [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 04:59:45 np0005486808 nova_compute[259627]: 2025-10-14 08:59:45.211 2 INFO nova.scheduler.client.report [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Deleted allocations for instance 27fa4cf8-c08c-46a2-af8f-17c8980a2317
Oct 14 04:59:45 np0005486808 nova_compute[259627]: 2025-10-14 08:59:45.278 2 DEBUG nova.compute.manager [req-3c2eab46-0c5a-45f9-b940-900ffd2201b6 req-f0894da6-a494-4984-9a2e-0a209cfa6048 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Received event network-changed-60379992-d75d-4eff-a6bb-5d1615f35475 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 04:59:45 np0005486808 nova_compute[259627]: 2025-10-14 08:59:45.279 2 DEBUG nova.compute.manager [req-3c2eab46-0c5a-45f9-b940-900ffd2201b6 req-f0894da6-a494-4984-9a2e-0a209cfa6048 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Refreshing instance network info cache due to event network-changed-60379992-d75d-4eff-a6bb-5d1615f35475. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 04:59:45 np0005486808 nova_compute[259627]: 2025-10-14 08:59:45.279 2 DEBUG oslo_concurrency.lockutils [req-3c2eab46-0c5a-45f9-b940-900ffd2201b6 req-f0894da6-a494-4984-9a2e-0a209cfa6048 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-310ebd88-5fe0-40ad-99dd-c3a1b410d357" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 04:59:45 np0005486808 nova_compute[259627]: 2025-10-14 08:59:45.285 2 DEBUG oslo_concurrency.lockutils [None req-5ab10450-7f71-4eeb-b74d-abe9456fa876 56f2f9bf9b064a208d9ce5fe732c4ff7 3d3a647aa3914555a8a2c5fd6fe7a543 - - default default] Lock "27fa4cf8-c08c-46a2-af8f-17c8980a2317" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:45 np0005486808 nova_compute[259627]: 2025-10-14 08:59:45.298 2 DEBUG nova.storage.rbd_utils [None req-da42a247-7895-4344-aafa-8e3b2e7e24e8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] creating snapshot(83423e3ebdc24f9ea1f9981cd320a086) on rbd image(f3dafba3-6472-4921-9ece-b6076172365e_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 14 04:59:45 np0005486808 nova_compute[259627]: 2025-10-14 08:59:45.366 2 DEBUG oslo_concurrency.processutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:45 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1254: 305 pgs: 305 active+clean; 180 MiB data, 386 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.6 MiB/s wr, 256 op/s
Oct 14 04:59:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Oct 14 04:59:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Oct 14 04:59:45 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Oct 14 04:59:45 np0005486808 nova_compute[259627]: 2025-10-14 08:59:45.806 2 DEBUG nova.storage.rbd_utils [None req-da42a247-7895-4344-aafa-8e3b2e7e24e8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] cloning vms/f3dafba3-6472-4921-9ece-b6076172365e_disk@83423e3ebdc24f9ea1f9981cd320a086 to images/2762376e-9539-4ed8-bf9b-be2decee774f clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct 14 04:59:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:59:45 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1599579862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:59:45 np0005486808 nova_compute[259627]: 2025-10-14 08:59:45.847 2 DEBUG oslo_concurrency.processutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:45 np0005486808 nova_compute[259627]: 2025-10-14 08:59:45.854 2 DEBUG nova.compute.provider_tree [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:59:45 np0005486808 nova_compute[259627]: 2025-10-14 08:59:45.875 2 DEBUG nova.scheduler.client.report [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:59:45 np0005486808 nova_compute[259627]: 2025-10-14 08:59:45.907 2 DEBUG nova.storage.rbd_utils [None req-da42a247-7895-4344-aafa-8e3b2e7e24e8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] flattening images/2762376e-9539-4ed8-bf9b-be2decee774f flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct 14 04:59:45 np0005486808 nova_compute[259627]: 2025-10-14 08:59:45.952 2 DEBUG nova.network.neutron [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 04:59:45 np0005486808 nova_compute[259627]: 2025-10-14 08:59:45.956 2 DEBUG oslo_concurrency.lockutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:45 np0005486808 nova_compute[259627]: 2025-10-14 08:59:45.957 2 DEBUG nova.compute.manager [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 04:59:46 np0005486808 nova_compute[259627]: 2025-10-14 08:59:46.025 2 DEBUG nova.compute.manager [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 04:59:46 np0005486808 nova_compute[259627]: 2025-10-14 08:59:46.025 2 DEBUG nova.network.neutron [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 04:59:46 np0005486808 nova_compute[259627]: 2025-10-14 08:59:46.055 2 INFO nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 04:59:46 np0005486808 nova_compute[259627]: 2025-10-14 08:59:46.079 2 DEBUG nova.compute.manager [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 04:59:46 np0005486808 nova_compute[259627]: 2025-10-14 08:59:46.119 2 DEBUG nova.storage.rbd_utils [None req-da42a247-7895-4344-aafa-8e3b2e7e24e8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] removing snapshot(83423e3ebdc24f9ea1f9981cd320a086) on rbd image(f3dafba3-6472-4921-9ece-b6076172365e_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct 14 04:59:46 np0005486808 nova_compute[259627]: 2025-10-14 08:59:46.179 2 DEBUG nova.compute.manager [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 04:59:46 np0005486808 nova_compute[259627]: 2025-10-14 08:59:46.180 2 DEBUG nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 04:59:46 np0005486808 nova_compute[259627]: 2025-10-14 08:59:46.180 2 INFO nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Creating image(s)#033[00m
Oct 14 04:59:46 np0005486808 nova_compute[259627]: 2025-10-14 08:59:46.204 2 DEBUG nova.storage.rbd_utils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image b932e3d1-4cf6-4934-9eec-c93284b17b43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:59:46 np0005486808 nova_compute[259627]: 2025-10-14 08:59:46.239 2 DEBUG nova.storage.rbd_utils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image b932e3d1-4cf6-4934-9eec-c93284b17b43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:59:46 np0005486808 nova_compute[259627]: 2025-10-14 08:59:46.273 2 DEBUG nova.storage.rbd_utils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image b932e3d1-4cf6-4934-9eec-c93284b17b43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:59:46 np0005486808 nova_compute[259627]: 2025-10-14 08:59:46.278 2 DEBUG oslo_concurrency.processutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:46 np0005486808 nova_compute[259627]: 2025-10-14 08:59:46.323 2 DEBUG nova.policy [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '56001fe1c9fc432e923f8c57058754db', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ed7ee17abdbe419cb7d7fd0da2cd2068', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 04:59:46 np0005486808 nova_compute[259627]: 2025-10-14 08:59:46.378 2 DEBUG oslo_concurrency.processutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:46 np0005486808 nova_compute[259627]: 2025-10-14 08:59:46.379 2 DEBUG oslo_concurrency.lockutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:46 np0005486808 nova_compute[259627]: 2025-10-14 08:59:46.380 2 DEBUG oslo_concurrency.lockutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:46 np0005486808 nova_compute[259627]: 2025-10-14 08:59:46.380 2 DEBUG oslo_concurrency.lockutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:46 np0005486808 nova_compute[259627]: 2025-10-14 08:59:46.397 2 DEBUG nova.storage.rbd_utils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image b932e3d1-4cf6-4934-9eec-c93284b17b43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:59:46 np0005486808 nova_compute[259627]: 2025-10-14 08:59:46.400 2 DEBUG oslo_concurrency.processutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 b932e3d1-4cf6-4934-9eec-c93284b17b43_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:46 np0005486808 nova_compute[259627]: 2025-10-14 08:59:46.676 2 DEBUG oslo_concurrency.processutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 b932e3d1-4cf6-4934-9eec-c93284b17b43_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.276s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Oct 14 04:59:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Oct 14 04:59:46 np0005486808 nova_compute[259627]: 2025-10-14 08:59:46.730 2 DEBUG nova.storage.rbd_utils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] resizing rbd image b932e3d1-4cf6-4934-9eec-c93284b17b43_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 04:59:46 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Oct 14 04:59:46 np0005486808 nova_compute[259627]: 2025-10-14 08:59:46.770 2 DEBUG nova.compute.manager [req-d6d819a1-3df2-4eae-aef0-b55a196aa155 req-d1235c52-3500-4def-8f16-f480604c530f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Received event network-vif-unplugged-ecb526b7-d1ad-4a75-b851-482702018258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:59:46 np0005486808 nova_compute[259627]: 2025-10-14 08:59:46.770 2 DEBUG oslo_concurrency.lockutils [req-d6d819a1-3df2-4eae-aef0-b55a196aa155 req-d1235c52-3500-4def-8f16-f480604c530f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f3dafba3-6472-4921-9ece-b6076172365e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:46 np0005486808 nova_compute[259627]: 2025-10-14 08:59:46.770 2 DEBUG oslo_concurrency.lockutils [req-d6d819a1-3df2-4eae-aef0-b55a196aa155 req-d1235c52-3500-4def-8f16-f480604c530f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f3dafba3-6472-4921-9ece-b6076172365e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:46 np0005486808 nova_compute[259627]: 2025-10-14 08:59:46.771 2 DEBUG oslo_concurrency.lockutils [req-d6d819a1-3df2-4eae-aef0-b55a196aa155 req-d1235c52-3500-4def-8f16-f480604c530f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f3dafba3-6472-4921-9ece-b6076172365e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:46 np0005486808 nova_compute[259627]: 2025-10-14 08:59:46.771 2 DEBUG nova.compute.manager [req-d6d819a1-3df2-4eae-aef0-b55a196aa155 req-d1235c52-3500-4def-8f16-f480604c530f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] No waiting events found dispatching network-vif-unplugged-ecb526b7-d1ad-4a75-b851-482702018258 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:59:46 np0005486808 nova_compute[259627]: 2025-10-14 08:59:46.771 2 WARNING nova.compute.manager [req-d6d819a1-3df2-4eae-aef0-b55a196aa155 req-d1235c52-3500-4def-8f16-f480604c530f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Received unexpected event network-vif-unplugged-ecb526b7-d1ad-4a75-b851-482702018258 for instance with vm_state suspended and task_state image_uploading.#033[00m
Oct 14 04:59:46 np0005486808 nova_compute[259627]: 2025-10-14 08:59:46.771 2 DEBUG nova.compute.manager [req-d6d819a1-3df2-4eae-aef0-b55a196aa155 req-d1235c52-3500-4def-8f16-f480604c530f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Received event network-vif-plugged-ecb526b7-d1ad-4a75-b851-482702018258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:59:46 np0005486808 nova_compute[259627]: 2025-10-14 08:59:46.771 2 DEBUG oslo_concurrency.lockutils [req-d6d819a1-3df2-4eae-aef0-b55a196aa155 req-d1235c52-3500-4def-8f16-f480604c530f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f3dafba3-6472-4921-9ece-b6076172365e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:46 np0005486808 nova_compute[259627]: 2025-10-14 08:59:46.772 2 DEBUG oslo_concurrency.lockutils [req-d6d819a1-3df2-4eae-aef0-b55a196aa155 req-d1235c52-3500-4def-8f16-f480604c530f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f3dafba3-6472-4921-9ece-b6076172365e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:46 np0005486808 nova_compute[259627]: 2025-10-14 08:59:46.772 2 DEBUG oslo_concurrency.lockutils [req-d6d819a1-3df2-4eae-aef0-b55a196aa155 req-d1235c52-3500-4def-8f16-f480604c530f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f3dafba3-6472-4921-9ece-b6076172365e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:46 np0005486808 nova_compute[259627]: 2025-10-14 08:59:46.772 2 DEBUG nova.compute.manager [req-d6d819a1-3df2-4eae-aef0-b55a196aa155 req-d1235c52-3500-4def-8f16-f480604c530f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] No waiting events found dispatching network-vif-plugged-ecb526b7-d1ad-4a75-b851-482702018258 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:59:46 np0005486808 nova_compute[259627]: 2025-10-14 08:59:46.772 2 WARNING nova.compute.manager [req-d6d819a1-3df2-4eae-aef0-b55a196aa155 req-d1235c52-3500-4def-8f16-f480604c530f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Received unexpected event network-vif-plugged-ecb526b7-d1ad-4a75-b851-482702018258 for instance with vm_state suspended and task_state image_uploading.#033[00m
Oct 14 04:59:46 np0005486808 nova_compute[259627]: 2025-10-14 08:59:46.796 2 DEBUG nova.storage.rbd_utils [None req-da42a247-7895-4344-aafa-8e3b2e7e24e8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] creating snapshot(snap) on rbd image(2762376e-9539-4ed8-bf9b-be2decee774f) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 14 04:59:46 np0005486808 nova_compute[259627]: 2025-10-14 08:59:46.860 2 DEBUG nova.objects.instance [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lazy-loading 'migration_context' on Instance uuid b932e3d1-4cf6-4934-9eec-c93284b17b43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:59:46 np0005486808 nova_compute[259627]: 2025-10-14 08:59:46.877 2 DEBUG nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 04:59:46 np0005486808 nova_compute[259627]: 2025-10-14 08:59:46.877 2 DEBUG nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Ensure instance console log exists: /var/lib/nova/instances/b932e3d1-4cf6-4934-9eec-c93284b17b43/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 04:59:46 np0005486808 nova_compute[259627]: 2025-10-14 08:59:46.878 2 DEBUG oslo_concurrency.lockutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:46 np0005486808 nova_compute[259627]: 2025-10-14 08:59:46.878 2 DEBUG oslo_concurrency.lockutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:46 np0005486808 nova_compute[259627]: 2025-10-14 08:59:46.878 2 DEBUG oslo_concurrency.lockutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:46 np0005486808 nova_compute[259627]: 2025-10-14 08:59:46.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:47 np0005486808 nova_compute[259627]: 2025-10-14 08:59:47.255 2 DEBUG nova.network.neutron [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Successfully updated port: 36158ae1-8367-4859-a407-565fde315649 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 04:59:47 np0005486808 nova_compute[259627]: 2025-10-14 08:59:47.274 2 DEBUG oslo_concurrency.lockutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Acquiring lock "refresh_cache-1af6c158-005b-4f3c-9044-87158e57378d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:59:47 np0005486808 nova_compute[259627]: 2025-10-14 08:59:47.274 2 DEBUG oslo_concurrency.lockutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Acquired lock "refresh_cache-1af6c158-005b-4f3c-9044-87158e57378d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:59:47 np0005486808 nova_compute[259627]: 2025-10-14 08:59:47.274 2 DEBUG nova.network.neutron [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 04:59:47 np0005486808 nova_compute[259627]: 2025-10-14 08:59:47.492 2 DEBUG nova.network.neutron [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 04:59:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 04:59:47 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1257: 305 pgs: 305 active+clean; 180 MiB data, 386 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 5.3 MiB/s wr, 306 op/s
Oct 14 04:59:47 np0005486808 nova_compute[259627]: 2025-10-14 08:59:47.616 2 DEBUG nova.network.neutron [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Updating instance_info_cache with network_info: [{"id": "60379992-d75d-4eff-a6bb-5d1615f35475", "address": "fa:16:3e:47:c2:c5", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60379992-d7", "ovs_interfaceid": "60379992-d75d-4eff-a6bb-5d1615f35475", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:59:47 np0005486808 nova_compute[259627]: 2025-10-14 08:59:47.635 2 DEBUG oslo_concurrency.lockutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Releasing lock "refresh_cache-310ebd88-5fe0-40ad-99dd-c3a1b410d357" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:59:47 np0005486808 nova_compute[259627]: 2025-10-14 08:59:47.636 2 DEBUG nova.compute.manager [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Instance network_info: |[{"id": "60379992-d75d-4eff-a6bb-5d1615f35475", "address": "fa:16:3e:47:c2:c5", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60379992-d7", "ovs_interfaceid": "60379992-d75d-4eff-a6bb-5d1615f35475", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 04:59:47 np0005486808 nova_compute[259627]: 2025-10-14 08:59:47.636 2 DEBUG oslo_concurrency.lockutils [req-3c2eab46-0c5a-45f9-b940-900ffd2201b6 req-f0894da6-a494-4984-9a2e-0a209cfa6048 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-310ebd88-5fe0-40ad-99dd-c3a1b410d357" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:59:47 np0005486808 nova_compute[259627]: 2025-10-14 08:59:47.636 2 DEBUG nova.network.neutron [req-3c2eab46-0c5a-45f9-b940-900ffd2201b6 req-f0894da6-a494-4984-9a2e-0a209cfa6048 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Refreshing network info cache for port 60379992-d75d-4eff-a6bb-5d1615f35475 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 04:59:47 np0005486808 nova_compute[259627]: 2025-10-14 08:59:47.638 2 DEBUG nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Start _get_guest_xml network_info=[{"id": "60379992-d75d-4eff-a6bb-5d1615f35475", "address": "fa:16:3e:47:c2:c5", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60379992-d7", "ovs_interfaceid": "60379992-d75d-4eff-a6bb-5d1615f35475", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 04:59:47 np0005486808 nova_compute[259627]: 2025-10-14 08:59:47.642 2 DEBUG nova.network.neutron [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Successfully created port: 6fb13023-6749-4e1b-b7d9-235dff8e72d4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 04:59:47 np0005486808 nova_compute[259627]: 2025-10-14 08:59:47.647 2 WARNING nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:59:47 np0005486808 nova_compute[259627]: 2025-10-14 08:59:47.653 2 DEBUG nova.virt.libvirt.host [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 04:59:47 np0005486808 nova_compute[259627]: 2025-10-14 08:59:47.654 2 DEBUG nova.virt.libvirt.host [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 04:59:47 np0005486808 nova_compute[259627]: 2025-10-14 08:59:47.660 2 DEBUG nova.virt.libvirt.host [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 04:59:47 np0005486808 nova_compute[259627]: 2025-10-14 08:59:47.660 2 DEBUG nova.virt.libvirt.host [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 04:59:47 np0005486808 nova_compute[259627]: 2025-10-14 08:59:47.660 2 DEBUG nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 04:59:47 np0005486808 nova_compute[259627]: 2025-10-14 08:59:47.661 2 DEBUG nova.virt.hardware [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 04:59:47 np0005486808 nova_compute[259627]: 2025-10-14 08:59:47.661 2 DEBUG nova.virt.hardware [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 04:59:47 np0005486808 nova_compute[259627]: 2025-10-14 08:59:47.661 2 DEBUG nova.virt.hardware [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 04:59:47 np0005486808 nova_compute[259627]: 2025-10-14 08:59:47.661 2 DEBUG nova.virt.hardware [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 04:59:47 np0005486808 nova_compute[259627]: 2025-10-14 08:59:47.662 2 DEBUG nova.virt.hardware [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 04:59:47 np0005486808 nova_compute[259627]: 2025-10-14 08:59:47.662 2 DEBUG nova.virt.hardware [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 04:59:47 np0005486808 nova_compute[259627]: 2025-10-14 08:59:47.662 2 DEBUG nova.virt.hardware [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 04:59:47 np0005486808 nova_compute[259627]: 2025-10-14 08:59:47.662 2 DEBUG nova.virt.hardware [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 04:59:47 np0005486808 nova_compute[259627]: 2025-10-14 08:59:47.663 2 DEBUG nova.virt.hardware [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 04:59:47 np0005486808 nova_compute[259627]: 2025-10-14 08:59:47.663 2 DEBUG nova.virt.hardware [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 04:59:47 np0005486808 nova_compute[259627]: 2025-10-14 08:59:47.663 2 DEBUG nova.virt.hardware [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 04:59:47 np0005486808 nova_compute[259627]: 2025-10-14 08:59:47.665 2 DEBUG oslo_concurrency.processutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Oct 14 04:59:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Oct 14 04:59:47 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Oct 14 04:59:47 np0005486808 nova_compute[259627]: 2025-10-14 08:59:47.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:48 np0005486808 nova_compute[259627]: 2025-10-14 08:59:48.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:59:48 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/442874738' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:59:48 np0005486808 nova_compute[259627]: 2025-10-14 08:59:48.204 2 DEBUG oslo_concurrency.processutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:48 np0005486808 nova_compute[259627]: 2025-10-14 08:59:48.235 2 DEBUG nova.storage.rbd_utils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:59:48 np0005486808 nova_compute[259627]: 2025-10-14 08:59:48.240 2 DEBUG oslo_concurrency.processutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:59:48 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3924961389' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:59:48 np0005486808 nova_compute[259627]: 2025-10-14 08:59:48.692 2 DEBUG oslo_concurrency.processutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:48 np0005486808 nova_compute[259627]: 2025-10-14 08:59:48.694 2 DEBUG nova.virt.libvirt.vif [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:59:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-490112967',display_name='tempest-ServersAdminTestJSON-server-490112967',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-490112967',id=32,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ed7ee17abdbe419cb7d7fd0da2cd2068',ramdisk_id='',reservation_id='r-xf25yl7e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-276167539',owner_user_name='tempest-ServersAdminTestJSON-276167539-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:59:43Z,user_data=None,user_id='56001fe1c9fc432e923f8c57058754db',uuid=310ebd88-5fe0-40ad-99dd-c3a1b410d357,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "60379992-d75d-4eff-a6bb-5d1615f35475", "address": "fa:16:3e:47:c2:c5", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60379992-d7", "ovs_interfaceid": "60379992-d75d-4eff-a6bb-5d1615f35475", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 04:59:48 np0005486808 nova_compute[259627]: 2025-10-14 08:59:48.694 2 DEBUG nova.network.os_vif_util [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Converting VIF {"id": "60379992-d75d-4eff-a6bb-5d1615f35475", "address": "fa:16:3e:47:c2:c5", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60379992-d7", "ovs_interfaceid": "60379992-d75d-4eff-a6bb-5d1615f35475", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:59:48 np0005486808 nova_compute[259627]: 2025-10-14 08:59:48.695 2 DEBUG nova.network.os_vif_util [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:c2:c5,bridge_name='br-int',has_traffic_filtering=True,id=60379992-d75d-4eff-a6bb-5d1615f35475,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60379992-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:59:48 np0005486808 nova_compute[259627]: 2025-10-14 08:59:48.696 2 DEBUG nova.objects.instance [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lazy-loading 'pci_devices' on Instance uuid 310ebd88-5fe0-40ad-99dd-c3a1b410d357 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:59:48 np0005486808 nova_compute[259627]: 2025-10-14 08:59:48.711 2 DEBUG nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] End _get_guest_xml xml=<domain type="kvm">
Oct 14 04:59:48 np0005486808 nova_compute[259627]:  <uuid>310ebd88-5fe0-40ad-99dd-c3a1b410d357</uuid>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:  <name>instance-00000020</name>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 04:59:48 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:      <nova:name>tempest-ServersAdminTestJSON-server-490112967</nova:name>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 08:59:47</nova:creationTime>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 04:59:48 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:        <nova:user uuid="56001fe1c9fc432e923f8c57058754db">tempest-ServersAdminTestJSON-276167539-project-member</nova:user>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:        <nova:project uuid="ed7ee17abdbe419cb7d7fd0da2cd2068">tempest-ServersAdminTestJSON-276167539</nova:project>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:        <nova:port uuid="60379992-d75d-4eff-a6bb-5d1615f35475">
Oct 14 04:59:48 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <system>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:      <entry name="serial">310ebd88-5fe0-40ad-99dd-c3a1b410d357</entry>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:      <entry name="uuid">310ebd88-5fe0-40ad-99dd-c3a1b410d357</entry>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    </system>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:  <os>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:  </os>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:  <features>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:  </features>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:  </clock>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:  <devices>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 04:59:48 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk">
Oct 14 04:59:48 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:59:48 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 04:59:48 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk.config">
Oct 14 04:59:48 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:59:48 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 04:59:48 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:47:c2:c5"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:      <target dev="tap60379992-d7"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    </interface>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 04:59:48 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/310ebd88-5fe0-40ad-99dd-c3a1b410d357/console.log" append="off"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    </serial>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <video>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    </video>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 04:59:48 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    </rng>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 04:59:48 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 04:59:48 np0005486808 nova_compute[259627]:  </devices>
Oct 14 04:59:48 np0005486808 nova_compute[259627]: </domain>
Oct 14 04:59:48 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 04:59:48 np0005486808 nova_compute[259627]: 2025-10-14 08:59:48.712 2 DEBUG nova.compute.manager [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Preparing to wait for external event network-vif-plugged-60379992-d75d-4eff-a6bb-5d1615f35475 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 04:59:48 np0005486808 nova_compute[259627]: 2025-10-14 08:59:48.713 2 DEBUG oslo_concurrency.lockutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:48 np0005486808 nova_compute[259627]: 2025-10-14 08:59:48.713 2 DEBUG oslo_concurrency.lockutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:48 np0005486808 nova_compute[259627]: 2025-10-14 08:59:48.714 2 DEBUG oslo_concurrency.lockutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:48 np0005486808 nova_compute[259627]: 2025-10-14 08:59:48.715 2 DEBUG nova.virt.libvirt.vif [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:59:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-490112967',display_name='tempest-ServersAdminTestJSON-server-490112967',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-490112967',id=32,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ed7ee17abdbe419cb7d7fd0da2cd2068',ramdisk_id='',reservation_id='r-xf25yl7e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-276167539',owner_user_name='tempest-ServersAdminTestJSON-276167539-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:59:43Z,user_data=None,user_id='56001fe1c9fc432e923f8c57058754db',uuid=310ebd88-5fe0-40ad-99dd-c3a1b410d357,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "60379992-d75d-4eff-a6bb-5d1615f35475", "address": "fa:16:3e:47:c2:c5", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60379992-d7", "ovs_interfaceid": "60379992-d75d-4eff-a6bb-5d1615f35475", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 04:59:48 np0005486808 nova_compute[259627]: 2025-10-14 08:59:48.716 2 DEBUG nova.network.os_vif_util [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Converting VIF {"id": "60379992-d75d-4eff-a6bb-5d1615f35475", "address": "fa:16:3e:47:c2:c5", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60379992-d7", "ovs_interfaceid": "60379992-d75d-4eff-a6bb-5d1615f35475", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:59:48 np0005486808 nova_compute[259627]: 2025-10-14 08:59:48.717 2 DEBUG nova.network.os_vif_util [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:c2:c5,bridge_name='br-int',has_traffic_filtering=True,id=60379992-d75d-4eff-a6bb-5d1615f35475,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60379992-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:59:48 np0005486808 nova_compute[259627]: 2025-10-14 08:59:48.718 2 DEBUG os_vif [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:c2:c5,bridge_name='br-int',has_traffic_filtering=True,id=60379992-d75d-4eff-a6bb-5d1615f35475,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60379992-d7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 04:59:48 np0005486808 nova_compute[259627]: 2025-10-14 08:59:48.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:48 np0005486808 nova_compute[259627]: 2025-10-14 08:59:48.724 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:48 np0005486808 nova_compute[259627]: 2025-10-14 08:59:48.725 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:59:48 np0005486808 nova_compute[259627]: 2025-10-14 08:59:48.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:48 np0005486808 nova_compute[259627]: 2025-10-14 08:59:48.730 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60379992-d7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:48 np0005486808 nova_compute[259627]: 2025-10-14 08:59:48.730 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap60379992-d7, col_values=(('external_ids', {'iface-id': '60379992-d75d-4eff-a6bb-5d1615f35475', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:47:c2:c5', 'vm-uuid': '310ebd88-5fe0-40ad-99dd-c3a1b410d357'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:48 np0005486808 nova_compute[259627]: 2025-10-14 08:59:48.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:48 np0005486808 NetworkManager[44885]: <info>  [1760432388.7339] manager: (tap60379992-d7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/127)
Oct 14 04:59:48 np0005486808 nova_compute[259627]: 2025-10-14 08:59:48.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 04:59:48 np0005486808 nova_compute[259627]: 2025-10-14 08:59:48.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:48 np0005486808 nova_compute[259627]: 2025-10-14 08:59:48.741 2 INFO os_vif [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:c2:c5,bridge_name='br-int',has_traffic_filtering=True,id=60379992-d75d-4eff-a6bb-5d1615f35475,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60379992-d7')#033[00m
Oct 14 04:59:48 np0005486808 nova_compute[259627]: 2025-10-14 08:59:48.879 2 DEBUG nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:59:48 np0005486808 nova_compute[259627]: 2025-10-14 08:59:48.879 2 DEBUG nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:59:48 np0005486808 nova_compute[259627]: 2025-10-14 08:59:48.879 2 DEBUG nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] No VIF found with MAC fa:16:3e:47:c2:c5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 04:59:48 np0005486808 nova_compute[259627]: 2025-10-14 08:59:48.880 2 INFO nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Using config drive#033[00m
Oct 14 04:59:48 np0005486808 nova_compute[259627]: 2025-10-14 08:59:48.897 2 DEBUG nova.storage.rbd_utils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.030 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432374.0295033, 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.030 2 INFO nova.compute.manager [-] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] VM Stopped (Lifecycle Event)#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.051 2 DEBUG nova.compute.manager [None req-6319883c-1dc7-412a-83a0-b380cefd926e - - - - - -] [instance: 82e5f434-f1c1-4135-a5c1-ae24c7bb4fc9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.108 2 DEBUG nova.network.neutron [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Updating instance_info_cache with network_info: [{"id": "36158ae1-8367-4859-a407-565fde315649", "address": "fa:16:3e:a4:7e:78", "network": {"id": "a0f6e4de-522c-468f-8b55-9e5064a6cce8", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1503089131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd273d79854242779e57eece9a65f7c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36158ae1-83", "ovs_interfaceid": "36158ae1-8367-4859-a407-565fde315649", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.134 2 DEBUG oslo_concurrency.lockutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Releasing lock "refresh_cache-1af6c158-005b-4f3c-9044-87158e57378d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.134 2 DEBUG nova.compute.manager [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Instance network_info: |[{"id": "36158ae1-8367-4859-a407-565fde315649", "address": "fa:16:3e:a4:7e:78", "network": {"id": "a0f6e4de-522c-468f-8b55-9e5064a6cce8", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1503089131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd273d79854242779e57eece9a65f7c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36158ae1-83", "ovs_interfaceid": "36158ae1-8367-4859-a407-565fde315649", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.137 2 DEBUG nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Start _get_guest_xml network_info=[{"id": "36158ae1-8367-4859-a407-565fde315649", "address": "fa:16:3e:a4:7e:78", "network": {"id": "a0f6e4de-522c-468f-8b55-9e5064a6cce8", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1503089131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd273d79854242779e57eece9a65f7c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36158ae1-83", "ovs_interfaceid": "36158ae1-8367-4859-a407-565fde315649", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.142 2 WARNING nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.145 2 DEBUG nova.network.neutron [req-3c2eab46-0c5a-45f9-b940-900ffd2201b6 req-f0894da6-a494-4984-9a2e-0a209cfa6048 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Updated VIF entry in instance network info cache for port 60379992-d75d-4eff-a6bb-5d1615f35475. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.146 2 DEBUG nova.network.neutron [req-3c2eab46-0c5a-45f9-b940-900ffd2201b6 req-f0894da6-a494-4984-9a2e-0a209cfa6048 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Updating instance_info_cache with network_info: [{"id": "60379992-d75d-4eff-a6bb-5d1615f35475", "address": "fa:16:3e:47:c2:c5", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60379992-d7", "ovs_interfaceid": "60379992-d75d-4eff-a6bb-5d1615f35475", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.156 2 DEBUG nova.virt.libvirt.host [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.157 2 DEBUG nova.virt.libvirt.host [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.160 2 DEBUG nova.virt.libvirt.host [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.160 2 DEBUG nova.virt.libvirt.host [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.161 2 DEBUG nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.161 2 DEBUG nova.virt.hardware [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.162 2 DEBUG nova.virt.hardware [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.162 2 DEBUG nova.virt.hardware [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.162 2 DEBUG nova.virt.hardware [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.162 2 DEBUG nova.virt.hardware [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.162 2 DEBUG nova.virt.hardware [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.163 2 DEBUG nova.virt.hardware [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.163 2 DEBUG nova.virt.hardware [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.163 2 DEBUG nova.virt.hardware [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.163 2 DEBUG nova.virt.hardware [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.164 2 DEBUG nova.virt.hardware [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.167 2 DEBUG oslo_concurrency.processutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.219 2 DEBUG nova.compute.manager [req-c68e433f-2902-43dd-ab84-9d1f79ad0238 req-a04690de-232a-4717-b286-3bd6edcd33b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Received event network-changed-36158ae1-8367-4859-a407-565fde315649 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.220 2 DEBUG nova.compute.manager [req-c68e433f-2902-43dd-ab84-9d1f79ad0238 req-a04690de-232a-4717-b286-3bd6edcd33b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Refreshing instance network info cache due to event network-changed-36158ae1-8367-4859-a407-565fde315649. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.220 2 DEBUG oslo_concurrency.lockutils [req-c68e433f-2902-43dd-ab84-9d1f79ad0238 req-a04690de-232a-4717-b286-3bd6edcd33b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-1af6c158-005b-4f3c-9044-87158e57378d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.220 2 DEBUG oslo_concurrency.lockutils [req-c68e433f-2902-43dd-ab84-9d1f79ad0238 req-a04690de-232a-4717-b286-3bd6edcd33b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-1af6c158-005b-4f3c-9044-87158e57378d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.221 2 DEBUG nova.network.neutron [req-c68e433f-2902-43dd-ab84-9d1f79ad0238 req-a04690de-232a-4717-b286-3bd6edcd33b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Refreshing network info cache for port 36158ae1-8367-4859-a407-565fde315649 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.223 2 DEBUG oslo_concurrency.lockutils [req-3c2eab46-0c5a-45f9-b940-900ffd2201b6 req-f0894da6-a494-4984-9a2e-0a209cfa6048 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-310ebd88-5fe0-40ad-99dd-c3a1b410d357" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.567 2 INFO nova.virt.libvirt.driver [None req-da42a247-7895-4344-aafa-8e3b2e7e24e8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Snapshot image upload complete#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.568 2 INFO nova.compute.manager [None req-da42a247-7895-4344-aafa-8e3b2e7e24e8 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Took 4.95 seconds to snapshot the instance on the hypervisor.#033[00m
Oct 14 04:59:49 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1259: 305 pgs: 305 active+clean; 180 MiB data, 386 MiB used, 60 GiB / 60 GiB avail; 507 KiB/s rd, 7.1 MiB/s wr, 183 op/s
Oct 14 04:59:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:59:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1501644839' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.650 2 DEBUG oslo_concurrency.processutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.675 2 DEBUG nova.storage.rbd_utils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] rbd image 1af6c158-005b-4f3c-9044-87158e57378d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.679 2 DEBUG oslo_concurrency.processutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.725 2 INFO nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Creating config drive at /var/lib/nova/instances/310ebd88-5fe0-40ad-99dd-c3a1b410d357/disk.config#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.730 2 DEBUG oslo_concurrency.processutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/310ebd88-5fe0-40ad-99dd-c3a1b410d357/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxow3px_z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.774 2 DEBUG nova.network.neutron [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Successfully updated port: 6fb13023-6749-4e1b-b7d9-235dff8e72d4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.796 2 DEBUG oslo_concurrency.lockutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "refresh_cache-b932e3d1-4cf6-4934-9eec-c93284b17b43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.796 2 DEBUG oslo_concurrency.lockutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquired lock "refresh_cache-b932e3d1-4cf6-4934-9eec-c93284b17b43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.796 2 DEBUG nova.network.neutron [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.816 2 DEBUG nova.compute.manager [req-cfb5cff7-85f3-415a-a1ff-e81b3e6029b3 req-92d4748c-127c-4e3d-a9eb-32f83bdf1061 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Received event network-changed-6fb13023-6749-4e1b-b7d9-235dff8e72d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.817 2 DEBUG nova.compute.manager [req-cfb5cff7-85f3-415a-a1ff-e81b3e6029b3 req-92d4748c-127c-4e3d-a9eb-32f83bdf1061 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Refreshing instance network info cache due to event network-changed-6fb13023-6749-4e1b-b7d9-235dff8e72d4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.817 2 DEBUG oslo_concurrency.lockutils [req-cfb5cff7-85f3-415a-a1ff-e81b3e6029b3 req-92d4748c-127c-4e3d-a9eb-32f83bdf1061 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-b932e3d1-4cf6-4934-9eec-c93284b17b43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.888 2 DEBUG oslo_concurrency.processutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/310ebd88-5fe0-40ad-99dd-c3a1b410d357/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxow3px_z" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.933 2 DEBUG nova.storage.rbd_utils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:59:49 np0005486808 nova_compute[259627]: 2025-10-14 08:59:49.940 2 DEBUG oslo_concurrency.processutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/310ebd88-5fe0-40ad-99dd-c3a1b410d357/disk.config 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:50 np0005486808 nova_compute[259627]: 2025-10-14 08:59:50.002 2 DEBUG nova.network.neutron [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 04:59:50 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:59:50 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1183158786' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:59:50 np0005486808 nova_compute[259627]: 2025-10-14 08:59:50.160 2 DEBUG oslo_concurrency.processutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:50 np0005486808 nova_compute[259627]: 2025-10-14 08:59:50.162 2 DEBUG nova.virt.libvirt.vif [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:59:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-461517423',display_name='tempest-InstanceActionsTestJSON-server-461517423',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-461517423',id=33,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fd273d79854242779e57eece9a65f7c0',ramdisk_id='',reservation_id='r-al5tk2ok',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-580993042',owner_user_name='tempest-InstanceActionsTestJSON-580993042-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:59:44Z,user_data=None,user_id='9ec2f781b62446cb98129707144b9d37',uuid=1af6c158-005b-4f3c-9044-87158e57378d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "36158ae1-8367-4859-a407-565fde315649", "address": "fa:16:3e:a4:7e:78", "network": {"id": "a0f6e4de-522c-468f-8b55-9e5064a6cce8", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1503089131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd273d79854242779e57eece9a65f7c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36158ae1-83", "ovs_interfaceid": "36158ae1-8367-4859-a407-565fde315649", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 04:59:50 np0005486808 nova_compute[259627]: 2025-10-14 08:59:50.163 2 DEBUG nova.network.os_vif_util [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Converting VIF {"id": "36158ae1-8367-4859-a407-565fde315649", "address": "fa:16:3e:a4:7e:78", "network": {"id": "a0f6e4de-522c-468f-8b55-9e5064a6cce8", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1503089131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd273d79854242779e57eece9a65f7c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36158ae1-83", "ovs_interfaceid": "36158ae1-8367-4859-a407-565fde315649", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:59:50 np0005486808 nova_compute[259627]: 2025-10-14 08:59:50.165 2 DEBUG nova.network.os_vif_util [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:7e:78,bridge_name='br-int',has_traffic_filtering=True,id=36158ae1-8367-4859-a407-565fde315649,network=Network(a0f6e4de-522c-468f-8b55-9e5064a6cce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36158ae1-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:59:50 np0005486808 nova_compute[259627]: 2025-10-14 08:59:50.167 2 DEBUG nova.objects.instance [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1af6c158-005b-4f3c-9044-87158e57378d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:59:50 np0005486808 nova_compute[259627]: 2025-10-14 08:59:50.170 2 DEBUG oslo_concurrency.processutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/310ebd88-5fe0-40ad-99dd-c3a1b410d357/disk.config 310ebd88-5fe0-40ad-99dd-c3a1b410d357_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.230s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:50 np0005486808 nova_compute[259627]: 2025-10-14 08:59:50.171 2 INFO nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Deleting local config drive /var/lib/nova/instances/310ebd88-5fe0-40ad-99dd-c3a1b410d357/disk.config because it was imported into RBD.#033[00m
Oct 14 04:59:50 np0005486808 nova_compute[259627]: 2025-10-14 08:59:50.194 2 DEBUG nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] End _get_guest_xml xml=<domain type="kvm">
Oct 14 04:59:50 np0005486808 nova_compute[259627]:  <uuid>1af6c158-005b-4f3c-9044-87158e57378d</uuid>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:  <name>instance-00000021</name>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 04:59:50 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:      <nova:name>tempest-InstanceActionsTestJSON-server-461517423</nova:name>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 08:59:49</nova:creationTime>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 04:59:50 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:        <nova:user uuid="9ec2f781b62446cb98129707144b9d37">tempest-InstanceActionsTestJSON-580993042-project-member</nova:user>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:        <nova:project uuid="fd273d79854242779e57eece9a65f7c0">tempest-InstanceActionsTestJSON-580993042</nova:project>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:        <nova:port uuid="36158ae1-8367-4859-a407-565fde315649">
Oct 14 04:59:50 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <system>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:      <entry name="serial">1af6c158-005b-4f3c-9044-87158e57378d</entry>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:      <entry name="uuid">1af6c158-005b-4f3c-9044-87158e57378d</entry>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    </system>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:  <os>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:  </os>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:  <features>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:  </features>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:  </clock>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:  <devices>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 04:59:50 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/1af6c158-005b-4f3c-9044-87158e57378d_disk">
Oct 14 04:59:50 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:59:50 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 04:59:50 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/1af6c158-005b-4f3c-9044-87158e57378d_disk.config">
Oct 14 04:59:50 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:59:50 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 04:59:50 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:a4:7e:78"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:      <target dev="tap36158ae1-83"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    </interface>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 04:59:50 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/1af6c158-005b-4f3c-9044-87158e57378d/console.log" append="off"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    </serial>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <video>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    </video>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 04:59:50 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    </rng>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 04:59:50 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 04:59:50 np0005486808 nova_compute[259627]:  </devices>
Oct 14 04:59:50 np0005486808 nova_compute[259627]: </domain>
Oct 14 04:59:50 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 04:59:50 np0005486808 nova_compute[259627]: 2025-10-14 08:59:50.195 2 DEBUG nova.compute.manager [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Preparing to wait for external event network-vif-plugged-36158ae1-8367-4859-a407-565fde315649 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 04:59:50 np0005486808 nova_compute[259627]: 2025-10-14 08:59:50.195 2 DEBUG oslo_concurrency.lockutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Acquiring lock "1af6c158-005b-4f3c-9044-87158e57378d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:50 np0005486808 nova_compute[259627]: 2025-10-14 08:59:50.196 2 DEBUG oslo_concurrency.lockutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Lock "1af6c158-005b-4f3c-9044-87158e57378d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:50 np0005486808 nova_compute[259627]: 2025-10-14 08:59:50.196 2 DEBUG oslo_concurrency.lockutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Lock "1af6c158-005b-4f3c-9044-87158e57378d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:50 np0005486808 nova_compute[259627]: 2025-10-14 08:59:50.197 2 DEBUG nova.virt.libvirt.vif [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:59:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-461517423',display_name='tempest-InstanceActionsTestJSON-server-461517423',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-461517423',id=33,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fd273d79854242779e57eece9a65f7c0',ramdisk_id='',reservation_id='r-al5tk2ok',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-580993042',owner_user_name='tempest-InstanceActionsTestJSON-580993042-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:59:44Z,user_data=None,user_id='9ec2f781b62446cb98129707144b9d37',uuid=1af6c158-005b-4f3c-9044-87158e57378d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "36158ae1-8367-4859-a407-565fde315649", "address": "fa:16:3e:a4:7e:78", "network": {"id": "a0f6e4de-522c-468f-8b55-9e5064a6cce8", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1503089131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd273d79854242779e57eece9a65f7c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36158ae1-83", "ovs_interfaceid": "36158ae1-8367-4859-a407-565fde315649", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 04:59:50 np0005486808 nova_compute[259627]: 2025-10-14 08:59:50.198 2 DEBUG nova.network.os_vif_util [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Converting VIF {"id": "36158ae1-8367-4859-a407-565fde315649", "address": "fa:16:3e:a4:7e:78", "network": {"id": "a0f6e4de-522c-468f-8b55-9e5064a6cce8", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1503089131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd273d79854242779e57eece9a65f7c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36158ae1-83", "ovs_interfaceid": "36158ae1-8367-4859-a407-565fde315649", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:59:50 np0005486808 nova_compute[259627]: 2025-10-14 08:59:50.199 2 DEBUG nova.network.os_vif_util [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:7e:78,bridge_name='br-int',has_traffic_filtering=True,id=36158ae1-8367-4859-a407-565fde315649,network=Network(a0f6e4de-522c-468f-8b55-9e5064a6cce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36158ae1-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:59:50 np0005486808 nova_compute[259627]: 2025-10-14 08:59:50.200 2 DEBUG os_vif [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:7e:78,bridge_name='br-int',has_traffic_filtering=True,id=36158ae1-8367-4859-a407-565fde315649,network=Network(a0f6e4de-522c-468f-8b55-9e5064a6cce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36158ae1-83') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 04:59:50 np0005486808 nova_compute[259627]: 2025-10-14 08:59:50.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:50 np0005486808 nova_compute[259627]: 2025-10-14 08:59:50.202 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:50 np0005486808 nova_compute[259627]: 2025-10-14 08:59:50.202 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:59:50 np0005486808 nova_compute[259627]: 2025-10-14 08:59:50.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:50 np0005486808 nova_compute[259627]: 2025-10-14 08:59:50.209 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap36158ae1-83, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:50 np0005486808 nova_compute[259627]: 2025-10-14 08:59:50.210 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap36158ae1-83, col_values=(('external_ids', {'iface-id': '36158ae1-8367-4859-a407-565fde315649', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a4:7e:78', 'vm-uuid': '1af6c158-005b-4f3c-9044-87158e57378d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:50 np0005486808 NetworkManager[44885]: <info>  [1760432390.2688] manager: (tap36158ae1-83): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/128)
Oct 14 04:59:50 np0005486808 nova_compute[259627]: 2025-10-14 08:59:50.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:50 np0005486808 nova_compute[259627]: 2025-10-14 08:59:50.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 04:59:50 np0005486808 kernel: tap60379992-d7: entered promiscuous mode
Oct 14 04:59:50 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:50Z|00264|binding|INFO|Claiming lport 60379992-d75d-4eff-a6bb-5d1615f35475 for this chassis.
Oct 14 04:59:50 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:50Z|00265|binding|INFO|60379992-d75d-4eff-a6bb-5d1615f35475: Claiming fa:16:3e:47:c2:c5 10.100.0.8
Oct 14 04:59:50 np0005486808 NetworkManager[44885]: <info>  [1760432390.2846] manager: (tap60379992-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/129)
Oct 14 04:59:50 np0005486808 nova_compute[259627]: 2025-10-14 08:59:50.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:50 np0005486808 nova_compute[259627]: 2025-10-14 08:59:50.285 2 INFO os_vif [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:7e:78,bridge_name='br-int',has_traffic_filtering=True,id=36158ae1-8367-4859-a407-565fde315649,network=Network(a0f6e4de-522c-468f-8b55-9e5064a6cce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36158ae1-83')#033[00m
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.296 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:c2:c5 10.100.0.8'], port_security=['fa:16:3e:47:c2:c5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '310ebd88-5fe0-40ad-99dd-c3a1b410d357', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ed7ee17abdbe419cb7d7fd0da2cd2068', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a616a5a0-dd86-4326-bbdf-7cf172de843b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c1013e89-bd11-44b1-be74-a5ce3b4c520f, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=60379992-d75d-4eff-a6bb-5d1615f35475) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.297 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 60379992-d75d-4eff-a6bb-5d1615f35475 in datapath ea0c857a-d31a-43a0-b285-c89c1ddc920a bound to our chassis#033[00m
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.299 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ea0c857a-d31a-43a0-b285-c89c1ddc920a#033[00m
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.311 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a937b47e-1d75-44f5-a073-d7b6636eafbd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.312 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapea0c857a-d1 in ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.314 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapea0c857a-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.314 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[afcd0e61-98f9-4a02-bc9d-fbad8a79f964]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.316 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8406ec3d-5cd4-465b-8a3d-057b528eb220]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.331 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[a590140c-50ca-468d-abf0-c57d7348858e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:50 np0005486808 systemd-machined[214636]: New machine qemu-36-instance-00000020.
Oct 14 04:59:50 np0005486808 systemd[1]: Started Virtual Machine qemu-36-instance-00000020.
Oct 14 04:59:50 np0005486808 nova_compute[259627]: 2025-10-14 08:59:50.353 2 DEBUG nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:59:50 np0005486808 nova_compute[259627]: 2025-10-14 08:59:50.353 2 DEBUG nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:59:50 np0005486808 nova_compute[259627]: 2025-10-14 08:59:50.354 2 DEBUG nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] No VIF found with MAC fa:16:3e:a4:7e:78, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 04:59:50 np0005486808 nova_compute[259627]: 2025-10-14 08:59:50.354 2 INFO nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Using config drive#033[00m
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.358 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0171ef07-7506-4201-9fbe-e9fc15a5ac56]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:50 np0005486808 systemd-udevd[298686]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 04:59:50 np0005486808 NetworkManager[44885]: <info>  [1760432390.3875] device (tap60379992-d7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 04:59:50 np0005486808 NetworkManager[44885]: <info>  [1760432390.3886] device (tap60379992-d7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 04:59:50 np0005486808 nova_compute[259627]: 2025-10-14 08:59:50.403 2 DEBUG nova.storage.rbd_utils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] rbd image 1af6c158-005b-4f3c-9044-87158e57378d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.400 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[87aee32a-776e-4f57-8dca-59ee7d185191]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:50 np0005486808 NetworkManager[44885]: <info>  [1760432390.4103] manager: (tapea0c857a-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/130)
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.409 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[24413726-8a80-4e6c-a6ee-fb9f40fc013c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:50 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:50Z|00266|binding|INFO|Setting lport 60379992-d75d-4eff-a6bb-5d1615f35475 ovn-installed in OVS
Oct 14 04:59:50 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:50Z|00267|binding|INFO|Setting lport 60379992-d75d-4eff-a6bb-5d1615f35475 up in Southbound
Oct 14 04:59:50 np0005486808 nova_compute[259627]: 2025-10-14 08:59:50.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.462 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3e8ef56c-3490-4bcf-94fd-475eb7f06eae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.464 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e06c500f-fc8b-4259-89d9-9c97a76afa95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:50 np0005486808 NetworkManager[44885]: <info>  [1760432390.4980] device (tapea0c857a-d0): carrier: link connected
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.507 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8a0c3e28-fe65-468f-8df3-5329e3455834]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.526 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c229d27c-0e4a-4ca7-9a89-fc0a55445088]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapea0c857a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:51:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616925, 'reachable_time': 21171, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298734, 'error': None, 'target': 'ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.547 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[390937dd-6c84-4515-a6d5-b8c5134886c6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9b:51c4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616925, 'tstamp': 616925}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298735, 'error': None, 'target': 'ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.570 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dd27b37c-d2c1-4c8d-8d45-665f033d7afe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapea0c857a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:51:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616925, 'reachable_time': 21171, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 298736, 'error': None, 'target': 'ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.612 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6d491508-7c98-4af2-a3ef-2b285db1d7a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.692 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d2ef7ab1-d559-4988-a01a-89e31b0a1d85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.694 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea0c857a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.694 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.694 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapea0c857a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:50 np0005486808 nova_compute[259627]: 2025-10-14 08:59:50.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:50 np0005486808 NetworkManager[44885]: <info>  [1760432390.6977] manager: (tapea0c857a-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/131)
Oct 14 04:59:50 np0005486808 kernel: tapea0c857a-d0: entered promiscuous mode
Oct 14 04:59:50 np0005486808 nova_compute[259627]: 2025-10-14 08:59:50.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.709 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapea0c857a-d0, col_values=(('external_ids', {'iface-id': '6baedd76-8a05-42d6-8356-18b586f58672'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:50 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:50Z|00268|binding|INFO|Releasing lport 6baedd76-8a05-42d6-8356-18b586f58672 from this chassis (sb_readonly=0)
Oct 14 04:59:50 np0005486808 nova_compute[259627]: 2025-10-14 08:59:50.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:50 np0005486808 nova_compute[259627]: 2025-10-14 08:59:50.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:50 np0005486808 nova_compute[259627]: 2025-10-14 08:59:50.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.738 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ea0c857a-d31a-43a0-b285-c89c1ddc920a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ea0c857a-d31a-43a0-b285-c89c1ddc920a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.739 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9c6196c4-b953-4a69-bd8a-b9d1dc8ceb18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.740 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-ea0c857a-d31a-43a0-b285-c89c1ddc920a
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/ea0c857a-d31a-43a0-b285-c89c1ddc920a.pid.haproxy
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID ea0c857a-d31a-43a0-b285-c89c1ddc920a
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 04:59:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:50.740 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'env', 'PROCESS_TAG=haproxy-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ea0c857a-d31a-43a0-b285-c89c1ddc920a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 04:59:50 np0005486808 nova_compute[259627]: 2025-10-14 08:59:50.950 2 INFO nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Creating config drive at /var/lib/nova/instances/1af6c158-005b-4f3c-9044-87158e57378d/disk.config#033[00m
Oct 14 04:59:50 np0005486808 nova_compute[259627]: 2025-10-14 08:59:50.955 2 DEBUG oslo_concurrency.processutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1af6c158-005b-4f3c-9044-87158e57378d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3ah2_cug execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:50 np0005486808 nova_compute[259627]: 2025-10-14 08:59:50.986 2 DEBUG nova.network.neutron [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Updating instance_info_cache with network_info: [{"id": "6fb13023-6749-4e1b-b7d9-235dff8e72d4", "address": "fa:16:3e:b3:11:58", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fb13023-67", "ovs_interfaceid": "6fb13023-6749-4e1b-b7d9-235dff8e72d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:59:50 np0005486808 nova_compute[259627]: 2025-10-14 08:59:50.997 2 DEBUG nova.network.neutron [req-c68e433f-2902-43dd-ab84-9d1f79ad0238 req-a04690de-232a-4717-b286-3bd6edcd33b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Updated VIF entry in instance network info cache for port 36158ae1-8367-4859-a407-565fde315649. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 04:59:50 np0005486808 nova_compute[259627]: 2025-10-14 08:59:50.998 2 DEBUG nova.network.neutron [req-c68e433f-2902-43dd-ab84-9d1f79ad0238 req-a04690de-232a-4717-b286-3bd6edcd33b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Updating instance_info_cache with network_info: [{"id": "36158ae1-8367-4859-a407-565fde315649", "address": "fa:16:3e:a4:7e:78", "network": {"id": "a0f6e4de-522c-468f-8b55-9e5064a6cce8", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1503089131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd273d79854242779e57eece9a65f7c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36158ae1-83", "ovs_interfaceid": "36158ae1-8367-4859-a407-565fde315649", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.008 2 DEBUG oslo_concurrency.lockutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Releasing lock "refresh_cache-b932e3d1-4cf6-4934-9eec-c93284b17b43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.009 2 DEBUG nova.compute.manager [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Instance network_info: |[{"id": "6fb13023-6749-4e1b-b7d9-235dff8e72d4", "address": "fa:16:3e:b3:11:58", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fb13023-67", "ovs_interfaceid": "6fb13023-6749-4e1b-b7d9-235dff8e72d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.010 2 DEBUG oslo_concurrency.lockutils [req-cfb5cff7-85f3-415a-a1ff-e81b3e6029b3 req-92d4748c-127c-4e3d-a9eb-32f83bdf1061 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-b932e3d1-4cf6-4934-9eec-c93284b17b43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.010 2 DEBUG nova.network.neutron [req-cfb5cff7-85f3-415a-a1ff-e81b3e6029b3 req-92d4748c-127c-4e3d-a9eb-32f83bdf1061 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Refreshing network info cache for port 6fb13023-6749-4e1b-b7d9-235dff8e72d4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.015 2 DEBUG nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Start _get_guest_xml network_info=[{"id": "6fb13023-6749-4e1b-b7d9-235dff8e72d4", "address": "fa:16:3e:b3:11:58", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fb13023-67", "ovs_interfaceid": "6fb13023-6749-4e1b-b7d9-235dff8e72d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.016 2 DEBUG oslo_concurrency.lockutils [req-c68e433f-2902-43dd-ab84-9d1f79ad0238 req-a04690de-232a-4717-b286-3bd6edcd33b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-1af6c158-005b-4f3c-9044-87158e57378d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.022 2 WARNING nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.030 2 DEBUG nova.virt.libvirt.host [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.030 2 DEBUG nova.virt.libvirt.host [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.034 2 DEBUG nova.virt.libvirt.host [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.035 2 DEBUG nova.virt.libvirt.host [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.035 2 DEBUG nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.035 2 DEBUG nova.virt.hardware [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.036 2 DEBUG nova.virt.hardware [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.036 2 DEBUG nova.virt.hardware [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.036 2 DEBUG nova.virt.hardware [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.036 2 DEBUG nova.virt.hardware [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.036 2 DEBUG nova.virt.hardware [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.036 2 DEBUG nova.virt.hardware [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.037 2 DEBUG nova.virt.hardware [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.037 2 DEBUG nova.virt.hardware [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.037 2 DEBUG nova.virt.hardware [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.037 2 DEBUG nova.virt.hardware [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.039 2 DEBUG oslo_concurrency.processutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.090 2 DEBUG oslo_concurrency.processutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1af6c158-005b-4f3c-9044-87158e57378d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3ah2_cug" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.127 2 DEBUG nova.storage.rbd_utils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] rbd image 1af6c158-005b-4f3c-9044-87158e57378d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.136 2 DEBUG oslo_concurrency.processutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1af6c158-005b-4f3c-9044-87158e57378d/disk.config 1af6c158-005b-4f3c-9044-87158e57378d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:51 np0005486808 podman[298796]: 2025-10-14 08:59:51.220938028 +0000 UTC m=+0.071963028 container create 8d733b7f9d8d52263e3adda59c4055effcbf5244a310580777aaef8bc012d5ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 14 04:59:51 np0005486808 podman[298796]: 2025-10-14 08:59:51.179450189 +0000 UTC m=+0.030475239 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 04:59:51 np0005486808 systemd[1]: Started libpod-conmon-8d733b7f9d8d52263e3adda59c4055effcbf5244a310580777aaef8bc012d5ea.scope.
Oct 14 04:59:51 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.324 2 DEBUG oslo_concurrency.processutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1af6c158-005b-4f3c-9044-87158e57378d/disk.config 1af6c158-005b-4f3c-9044-87158e57378d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.325 2 INFO nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Deleting local config drive /var/lib/nova/instances/1af6c158-005b-4f3c-9044-87158e57378d/disk.config because it was imported into RBD.#033[00m
Oct 14 04:59:51 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f93cef988fd9fccba9ac6e4003db9a58536ec9d10518828f62640c8f85b271e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 04:59:51 np0005486808 podman[298796]: 2025-10-14 08:59:51.346976893 +0000 UTC m=+0.198001913 container init 8d733b7f9d8d52263e3adda59c4055effcbf5244a310580777aaef8bc012d5ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:59:51 np0005486808 podman[298796]: 2025-10-14 08:59:51.356307462 +0000 UTC m=+0.207332432 container start 8d733b7f9d8d52263e3adda59c4055effcbf5244a310580777aaef8bc012d5ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 14 04:59:51 np0005486808 neutron-haproxy-ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a[298887]: [NOTICE]   (298905) : New worker (298910) forked
Oct 14 04:59:51 np0005486808 neutron-haproxy-ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a[298887]: [NOTICE]   (298905) : Loading success.
Oct 14 04:59:51 np0005486808 kernel: tap36158ae1-83: entered promiscuous mode
Oct 14 04:59:51 np0005486808 NetworkManager[44885]: <info>  [1760432391.3940] manager: (tap36158ae1-83): new Tun device (/org/freedesktop/NetworkManager/Devices/132)
Oct 14 04:59:51 np0005486808 NetworkManager[44885]: <info>  [1760432391.4089] device (tap36158ae1-83): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 04:59:51 np0005486808 NetworkManager[44885]: <info>  [1760432391.4097] device (tap36158ae1-83): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.417 2 DEBUG nova.compute.manager [req-4ddc7bbc-afdf-459e-bdee-273a4f641887 req-81d9f47b-eab0-4e24-a2db-b51c9f2d5cb0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Received event network-vif-plugged-60379992-d75d-4eff-a6bb-5d1615f35475 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.418 2 DEBUG oslo_concurrency.lockutils [req-4ddc7bbc-afdf-459e-bdee-273a4f641887 req-81d9f47b-eab0-4e24-a2db-b51c9f2d5cb0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.418 2 DEBUG oslo_concurrency.lockutils [req-4ddc7bbc-afdf-459e-bdee-273a4f641887 req-81d9f47b-eab0-4e24-a2db-b51c9f2d5cb0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.419 2 DEBUG oslo_concurrency.lockutils [req-4ddc7bbc-afdf-459e-bdee-273a4f641887 req-81d9f47b-eab0-4e24-a2db-b51c9f2d5cb0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.419 2 DEBUG nova.compute.manager [req-4ddc7bbc-afdf-459e-bdee-273a4f641887 req-81d9f47b-eab0-4e24-a2db-b51c9f2d5cb0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Processing event network-vif-plugged-60379992-d75d-4eff-a6bb-5d1615f35475 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.419 2 DEBUG nova.compute.manager [req-4ddc7bbc-afdf-459e-bdee-273a4f641887 req-81d9f47b-eab0-4e24-a2db-b51c9f2d5cb0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Received event network-vif-plugged-60379992-d75d-4eff-a6bb-5d1615f35475 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.419 2 DEBUG oslo_concurrency.lockutils [req-4ddc7bbc-afdf-459e-bdee-273a4f641887 req-81d9f47b-eab0-4e24-a2db-b51c9f2d5cb0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.420 2 DEBUG oslo_concurrency.lockutils [req-4ddc7bbc-afdf-459e-bdee-273a4f641887 req-81d9f47b-eab0-4e24-a2db-b51c9f2d5cb0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.420 2 DEBUG oslo_concurrency.lockutils [req-4ddc7bbc-afdf-459e-bdee-273a4f641887 req-81d9f47b-eab0-4e24-a2db-b51c9f2d5cb0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.420 2 DEBUG nova.compute.manager [req-4ddc7bbc-afdf-459e-bdee-273a4f641887 req-81d9f47b-eab0-4e24-a2db-b51c9f2d5cb0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] No waiting events found dispatching network-vif-plugged-60379992-d75d-4eff-a6bb-5d1615f35475 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.421 2 WARNING nova.compute.manager [req-4ddc7bbc-afdf-459e-bdee-273a4f641887 req-81d9f47b-eab0-4e24-a2db-b51c9f2d5cb0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Received unexpected event network-vif-plugged-60379992-d75d-4eff-a6bb-5d1615f35475 for instance with vm_state building and task_state spawning.#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:51 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:51Z|00269|binding|INFO|Claiming lport 36158ae1-8367-4859-a407-565fde315649 for this chassis.
Oct 14 04:59:51 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:51Z|00270|binding|INFO|36158ae1-8367-4859-a407-565fde315649: Claiming fa:16:3e:a4:7e:78 10.100.0.7
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.465 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:7e:78 10.100.0.7'], port_security=['fa:16:3e:a4:7e:78 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1af6c158-005b-4f3c-9044-87158e57378d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0f6e4de-522c-468f-8b55-9e5064a6cce8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd273d79854242779e57eece9a65f7c0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4b2e574d-708f-45c3-a256-0b7fe2d0873c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=65584940-3c3d-4797-80d1-97beab212175, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=36158ae1-8367-4859-a407-565fde315649) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.466 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 36158ae1-8367-4859-a407-565fde315649 in datapath a0f6e4de-522c-468f-8b55-9e5064a6cce8 bound to our chassis#033[00m
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.469 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a0f6e4de-522c-468f-8b55-9e5064a6cce8#033[00m
Oct 14 04:59:51 np0005486808 systemd-machined[214636]: New machine qemu-37-instance-00000021.
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.479 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a9f5c1b7-4332-42d9-8543-7f2f565b6df6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.479 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa0f6e4de-51 in ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.481 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa0f6e4de-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.481 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[df58721d-8466-449d-8f8e-5bb7c3f5b370]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:51 np0005486808 systemd[1]: Started Virtual Machine qemu-37-instance-00000021.
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.481 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f1547ac0-41a3-432d-963b-7e6fc9598430]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.503 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[3a1d5512-7ff4-4e1d-b6ee-aaf2064b0458]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.518 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6d37babc-a7f0-4b23-aa37-3e69f267bfac]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:51 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:59:51 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1817577410' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:51 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:51Z|00271|binding|INFO|Setting lport 36158ae1-8367-4859-a407-565fde315649 ovn-installed in OVS
Oct 14 04:59:51 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:51Z|00272|binding|INFO|Setting lport 36158ae1-8367-4859-a407-565fde315649 up in Southbound
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.551 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ab856e7a-c9f8-46b2-b16c-a01fd9ebcea0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:51 np0005486808 NetworkManager[44885]: <info>  [1760432391.5590] manager: (tapa0f6e4de-50): new Veth device (/org/freedesktop/NetworkManager/Devices/133)
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.557 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[67c97a47-0b6f-417f-8d6b-159edadf8afe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.567 2 DEBUG oslo_concurrency.processutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:51 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1260: 305 pgs: 305 active+clean; 273 MiB data, 454 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 7.1 MiB/s wr, 182 op/s
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.598 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e410052b-5aff-4378-9c06-198a6f1c08f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.602 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[27b83a0e-54c7-4f21-b2c0-37932dfdd0de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.602 2 DEBUG nova.storage.rbd_utils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image b932e3d1-4cf6-4934-9eec-c93284b17b43_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.612 2 DEBUG oslo_concurrency.processutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:51 np0005486808 NetworkManager[44885]: <info>  [1760432391.6247] device (tapa0f6e4de-50): carrier: link connected
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.630 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[92c843ea-5ebd-4b86-8abe-4a80e9d0b657]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.650 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[152eb80a-16cf-4178-a083-e8d4a1e29020]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0f6e4de-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:e4:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 87], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 617038, 'reachable_time': 39186, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298967, 'error': None, 'target': 'ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.670 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e83d068f-809f-49ea-9122-c08a0a817271]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed8:e471'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 617038, 'tstamp': 617038}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298968, 'error': None, 'target': 'ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.686 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fae45989-9d59-4607-9321-f8680d598331]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0f6e4de-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:e4:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 87], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 617038, 'reachable_time': 39186, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 298969, 'error': None, 'target': 'ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.721 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[782f7682-4d98-4891-93f3-ea36e398b476]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.790 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1b0fa856-ce66-497b-ba2b-df7ae34b76c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.791 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0f6e4de-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.792 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.792 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0f6e4de-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:51 np0005486808 NetworkManager[44885]: <info>  [1760432391.7951] manager: (tapa0f6e4de-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/134)
Oct 14 04:59:51 np0005486808 kernel: tapa0f6e4de-50: entered promiscuous mode
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.803 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa0f6e4de-50, col_values=(('external_ids', {'iface-id': 'f946a06b-cc1d-436a-9eac-cf144d4f5ad3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:51 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:51Z|00273|binding|INFO|Releasing lport f946a06b-cc1d-436a-9eac-cf144d4f5ad3 from this chassis (sb_readonly=0)
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.834 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a0f6e4de-522c-468f-8b55-9e5064a6cce8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a0f6e4de-522c-468f-8b55-9e5064a6cce8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.835 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4f4ed024-8932-4077-bbbf-87ad749cfd2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.836 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-a0f6e4de-522c-468f-8b55-9e5064a6cce8
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/a0f6e4de-522c-468f-8b55-9e5064a6cce8.pid.haproxy
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID a0f6e4de-522c-468f-8b55-9e5064a6cce8
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 04:59:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:51.837 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8', 'env', 'PROCESS_TAG=haproxy-a0f6e4de-522c-468f-8b55-9e5064a6cce8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a0f6e4de-522c-468f-8b55-9e5064a6cce8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 04:59:51 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Oct 14 04:59:51 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Oct 14 04:59:51 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.937 2 DEBUG nova.compute.manager [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.939 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432391.9366655, 310ebd88-5fe0-40ad-99dd-c3a1b410d357 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.939 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] VM Started (Lifecycle Event)#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.945 2 DEBUG nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.948 2 INFO nova.virt.libvirt.driver [-] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Instance spawned successfully.#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.948 2 DEBUG nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.970 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.981 2 DEBUG nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.982 2 DEBUG nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.982 2 DEBUG nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.982 2 DEBUG nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.983 2 DEBUG nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.983 2 DEBUG nova.virt.libvirt.driver [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:51 np0005486808 nova_compute[259627]: 2025-10-14 08:59:51.996 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.016 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.017 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432391.9379873, 310ebd88-5fe0-40ad-99dd-c3a1b410d357 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.017 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] VM Paused (Lifecycle Event)#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.046 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.052 2 INFO nova.compute.manager [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Took 8.57 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.052 2 DEBUG nova.compute.manager [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.054 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432391.9428499, 310ebd88-5fe0-40ad-99dd-c3a1b410d357 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.054 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] VM Resumed (Lifecycle Event)#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.089 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.095 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:59:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:59:52 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/836305436' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.128 2 DEBUG oslo_concurrency.processutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.129 2 DEBUG nova.virt.libvirt.vif [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:59:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-618227233',display_name='tempest-ServersAdminTestJSON-server-618227233',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-618227233',id=34,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ed7ee17abdbe419cb7d7fd0da2cd2068',ramdisk_id='',reservation_id='r-wask3hqy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-276167539',owner_user_name='tempest-ServersAdminTestJSON-276167539-p
roject-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:59:46Z,user_data=None,user_id='56001fe1c9fc432e923f8c57058754db',uuid=b932e3d1-4cf6-4934-9eec-c93284b17b43,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6fb13023-6749-4e1b-b7d9-235dff8e72d4", "address": "fa:16:3e:b3:11:58", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fb13023-67", "ovs_interfaceid": "6fb13023-6749-4e1b-b7d9-235dff8e72d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.129 2 DEBUG nova.network.os_vif_util [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Converting VIF {"id": "6fb13023-6749-4e1b-b7d9-235dff8e72d4", "address": "fa:16:3e:b3:11:58", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fb13023-67", "ovs_interfaceid": "6fb13023-6749-4e1b-b7d9-235dff8e72d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.130 2 DEBUG nova.network.os_vif_util [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:11:58,bridge_name='br-int',has_traffic_filtering=True,id=6fb13023-6749-4e1b-b7d9-235dff8e72d4,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fb13023-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.131 2 DEBUG nova.objects.instance [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lazy-loading 'pci_devices' on Instance uuid b932e3d1-4cf6-4934-9eec-c93284b17b43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.133 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.141 2 INFO nova.compute.manager [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: 310ebd88-5fe0-40ad-99dd-c3a1b410d357] Took 9.63 seconds to build instance.#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.143 2 DEBUG nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] End _get_guest_xml xml=<domain type="kvm">
Oct 14 04:59:52 np0005486808 nova_compute[259627]:  <uuid>b932e3d1-4cf6-4934-9eec-c93284b17b43</uuid>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:  <name>instance-00000022</name>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 04:59:52 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:      <nova:name>tempest-ServersAdminTestJSON-server-618227233</nova:name>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 08:59:51</nova:creationTime>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 04:59:52 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:        <nova:user uuid="56001fe1c9fc432e923f8c57058754db">tempest-ServersAdminTestJSON-276167539-project-member</nova:user>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:        <nova:project uuid="ed7ee17abdbe419cb7d7fd0da2cd2068">tempest-ServersAdminTestJSON-276167539</nova:project>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:        <nova:port uuid="6fb13023-6749-4e1b-b7d9-235dff8e72d4">
Oct 14 04:59:52 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <system>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:      <entry name="serial">b932e3d1-4cf6-4934-9eec-c93284b17b43</entry>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:      <entry name="uuid">b932e3d1-4cf6-4934-9eec-c93284b17b43</entry>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    </system>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:  <os>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:  </os>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:  <features>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:  </features>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:  </clock>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:  <devices>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 04:59:52 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/b932e3d1-4cf6-4934-9eec-c93284b17b43_disk">
Oct 14 04:59:52 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:59:52 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 04:59:52 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/b932e3d1-4cf6-4934-9eec-c93284b17b43_disk.config">
Oct 14 04:59:52 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:59:52 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 04:59:52 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:b3:11:58"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:      <target dev="tap6fb13023-67"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    </interface>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 04:59:52 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/b932e3d1-4cf6-4934-9eec-c93284b17b43/console.log" append="off"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    </serial>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <video>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    </video>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 04:59:52 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    </rng>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 04:59:52 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 04:59:52 np0005486808 nova_compute[259627]:  </devices>
Oct 14 04:59:52 np0005486808 nova_compute[259627]: </domain>
Oct 14 04:59:52 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.144 2 DEBUG nova.compute.manager [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Preparing to wait for external event network-vif-plugged-6fb13023-6749-4e1b-b7d9-235dff8e72d4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.144 2 DEBUG oslo_concurrency.lockutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Acquiring lock "b932e3d1-4cf6-4934-9eec-c93284b17b43-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.144 2 DEBUG oslo_concurrency.lockutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "b932e3d1-4cf6-4934-9eec-c93284b17b43-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.144 2 DEBUG oslo_concurrency.lockutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "b932e3d1-4cf6-4934-9eec-c93284b17b43-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.145 2 DEBUG nova.virt.libvirt.vif [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T08:59:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-618227233',display_name='tempest-ServersAdminTestJSON-server-618227233',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-618227233',id=34,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ed7ee17abdbe419cb7d7fd0da2cd2068',ramdisk_id='',reservation_id='r-wask3hqy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-276167539',owner_user_name='tempest-ServersAdminTestJSON-2
76167539-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T08:59:46Z,user_data=None,user_id='56001fe1c9fc432e923f8c57058754db',uuid=b932e3d1-4cf6-4934-9eec-c93284b17b43,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6fb13023-6749-4e1b-b7d9-235dff8e72d4", "address": "fa:16:3e:b3:11:58", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fb13023-67", "ovs_interfaceid": "6fb13023-6749-4e1b-b7d9-235dff8e72d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.145 2 DEBUG nova.network.os_vif_util [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Converting VIF {"id": "6fb13023-6749-4e1b-b7d9-235dff8e72d4", "address": "fa:16:3e:b3:11:58", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fb13023-67", "ovs_interfaceid": "6fb13023-6749-4e1b-b7d9-235dff8e72d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.145 2 DEBUG nova.network.os_vif_util [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:11:58,bridge_name='br-int',has_traffic_filtering=True,id=6fb13023-6749-4e1b-b7d9-235dff8e72d4,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fb13023-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.146 2 DEBUG os_vif [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:11:58,bridge_name='br-int',has_traffic_filtering=True,id=6fb13023-6749-4e1b-b7d9-235dff8e72d4,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fb13023-67') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.146 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.147 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.149 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6fb13023-67, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:52 np0005486808 NetworkManager[44885]: <info>  [1760432392.1523] manager: (tap6fb13023-67): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/135)
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.150 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6fb13023-67, col_values=(('external_ids', {'iface-id': '6fb13023-6749-4e1b-b7d9-235dff8e72d4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b3:11:58', 'vm-uuid': 'b932e3d1-4cf6-4934-9eec-c93284b17b43'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.160 2 INFO os_vif [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:11:58,bridge_name='br-int',has_traffic_filtering=True,id=6fb13023-6749-4e1b-b7d9-235dff8e72d4,network=Network(ea0c857a-d31a-43a0-b285-c89c1ddc920a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fb13023-67')#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.164 2 DEBUG oslo_concurrency.lockutils [None req-b430a768-6454-416f-8e18-1e79ca82a22e 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "310ebd88-5fe0-40ad-99dd-c3a1b410d357" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.206 2 DEBUG nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.206 2 DEBUG nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.206 2 DEBUG nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] No VIF found with MAC fa:16:3e:b3:11:58, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.207 2 INFO nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Using config drive#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.229 2 DEBUG nova.storage.rbd_utils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image b932e3d1-4cf6-4934-9eec-c93284b17b43_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:59:52 np0005486808 podman[299027]: 2025-10-14 08:59:52.285199622 +0000 UTC m=+0.063386557 container create 96b6b042bd0f75a32c3f621b6a4a9de25e979d9f8d11a010c48414210ad4e590 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 04:59:52 np0005486808 systemd[1]: Started libpod-conmon-96b6b042bd0f75a32c3f621b6a4a9de25e979d9f8d11a010c48414210ad4e590.scope.
Oct 14 04:59:52 np0005486808 systemd[1]: Started libcrun container.
Oct 14 04:59:52 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11ba0701e8e75b5e6c026747289ae9028a21947c0206fb85f067034683ad925/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 04:59:52 np0005486808 podman[299027]: 2025-10-14 08:59:52.249200988 +0000 UTC m=+0.027388003 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 04:59:52 np0005486808 podman[299027]: 2025-10-14 08:59:52.356679208 +0000 UTC m=+0.134866163 container init 96b6b042bd0f75a32c3f621b6a4a9de25e979d9f8d11a010c48414210ad4e590 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2)
Oct 14 04:59:52 np0005486808 podman[299027]: 2025-10-14 08:59:52.363167467 +0000 UTC m=+0.141354402 container start 96b6b042bd0f75a32c3f621b6a4a9de25e979d9f8d11a010c48414210ad4e590 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 14 04:59:52 np0005486808 neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8[299101]: [NOTICE]   (299109) : New worker (299112) forked
Oct 14 04:59:52 np0005486808 neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8[299101]: [NOTICE]   (299109) : Loading success.
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.472 2 DEBUG nova.compute.manager [req-387007d8-6831-45f2-ac88-f25686ba3075 req-18d75bd0-9081-4846-acc6-f18195fd30a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Received event network-vif-plugged-36158ae1-8367-4859-a407-565fde315649 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.473 2 DEBUG oslo_concurrency.lockutils [req-387007d8-6831-45f2-ac88-f25686ba3075 req-18d75bd0-9081-4846-acc6-f18195fd30a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1af6c158-005b-4f3c-9044-87158e57378d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.473 2 DEBUG oslo_concurrency.lockutils [req-387007d8-6831-45f2-ac88-f25686ba3075 req-18d75bd0-9081-4846-acc6-f18195fd30a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1af6c158-005b-4f3c-9044-87158e57378d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.473 2 DEBUG oslo_concurrency.lockutils [req-387007d8-6831-45f2-ac88-f25686ba3075 req-18d75bd0-9081-4846-acc6-f18195fd30a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1af6c158-005b-4f3c-9044-87158e57378d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.473 2 DEBUG nova.compute.manager [req-387007d8-6831-45f2-ac88-f25686ba3075 req-18d75bd0-9081-4846-acc6-f18195fd30a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Processing event network-vif-plugged-36158ae1-8367-4859-a407-565fde315649 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 04:59:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 04:59:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Oct 14 04:59:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Oct 14 04:59:52 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.639 2 INFO nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Creating config drive at /var/lib/nova/instances/b932e3d1-4cf6-4934-9eec-c93284b17b43/disk.config#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.645 2 DEBUG oslo_concurrency.processutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b932e3d1-4cf6-4934-9eec-c93284b17b43/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcmqi5jk9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.759 2 DEBUG oslo_concurrency.lockutils [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "f3dafba3-6472-4921-9ece-b6076172365e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.760 2 DEBUG oslo_concurrency.lockutils [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "f3dafba3-6472-4921-9ece-b6076172365e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.760 2 DEBUG oslo_concurrency.lockutils [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "f3dafba3-6472-4921-9ece-b6076172365e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.761 2 DEBUG oslo_concurrency.lockutils [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "f3dafba3-6472-4921-9ece-b6076172365e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.761 2 DEBUG oslo_concurrency.lockutils [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "f3dafba3-6472-4921-9ece-b6076172365e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.763 2 INFO nova.compute.manager [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Terminating instance#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.764 2 DEBUG nova.compute.manager [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.773 2 INFO nova.virt.libvirt.driver [-] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Instance destroyed successfully.#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.773 2 DEBUG nova.objects.instance [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lazy-loading 'resources' on Instance uuid f3dafba3-6472-4921-9ece-b6076172365e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.786 2 DEBUG nova.virt.libvirt.vif [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:59:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1482418899',display_name='tempest-ImagesTestJSON-server-1482418899',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1482418899',id=31,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:59:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='0d87d2d744db48dc8b32bb4bf6847fce',ramdisk_id='',reservation_id='r-dol66h7w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ImagesTestJSON-168259448',owner_user_name='tempest-ImagesTestJSON-168259448-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:59:49Z,user_data=None,user_id='3a217215c39e41fea2323ff7b3b4e6aa',uuid=f3dafba3-6472-4921-9ece-b6076172365e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "ecb526b7-d1ad-4a75-b851-482702018258", "address": "fa:16:3e:e5:8b:13", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecb526b7-d1", "ovs_interfaceid": "ecb526b7-d1ad-4a75-b851-482702018258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.786 2 DEBUG nova.network.os_vif_util [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converting VIF {"id": "ecb526b7-d1ad-4a75-b851-482702018258", "address": "fa:16:3e:e5:8b:13", "network": {"id": "2322cf7a-0090-40fa-a558-42d84cc6fc2a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1932073387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d87d2d744db48dc8b32bb4bf6847fce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecb526b7-d1", "ovs_interfaceid": "ecb526b7-d1ad-4a75-b851-482702018258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.787 2 DEBUG nova.network.os_vif_util [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:8b:13,bridge_name='br-int',has_traffic_filtering=True,id=ecb526b7-d1ad-4a75-b851-482702018258,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapecb526b7-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.788 2 DEBUG os_vif [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:8b:13,bridge_name='br-int',has_traffic_filtering=True,id=ecb526b7-d1ad-4a75-b851-482702018258,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapecb526b7-d1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.792 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapecb526b7-d1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.795 2 DEBUG oslo_concurrency.processutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b932e3d1-4cf6-4934-9eec-c93284b17b43/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcmqi5jk9" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.820 2 DEBUG nova.storage.rbd_utils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] rbd image b932e3d1-4cf6-4934-9eec-c93284b17b43_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.825 2 DEBUG oslo_concurrency.processutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b932e3d1-4cf6-4934-9eec-c93284b17b43/disk.config b932e3d1-4cf6-4934-9eec-c93284b17b43_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.864 2 INFO os_vif [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:8b:13,bridge_name='br-int',has_traffic_filtering=True,id=ecb526b7-d1ad-4a75-b851-482702018258,network=Network(2322cf7a-0090-40fa-a558-42d84cc6fc2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapecb526b7-d1')#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.882 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432392.8676484, 1af6c158-005b-4f3c-9044-87158e57378d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.882 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] VM Started (Lifecycle Event)#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.885 2 DEBUG nova.compute.manager [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.890 2 DEBUG nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.900 2 INFO nova.virt.libvirt.driver [-] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Instance spawned successfully.#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.900 2 DEBUG nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.912 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.918 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.926 2 DEBUG nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.927 2 DEBUG nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.927 2 DEBUG nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.927 2 DEBUG nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.928 2 DEBUG nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.928 2 DEBUG nova.virt.libvirt.driver [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.940 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.941 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432392.8679252, 1af6c158-005b-4f3c-9044-87158e57378d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.941 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] VM Paused (Lifecycle Event)#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.970 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.973 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432392.8897731, 1af6c158-005b-4f3c-9044-87158e57378d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.973 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] VM Resumed (Lifecycle Event)#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.977 2 DEBUG nova.network.neutron [req-cfb5cff7-85f3-415a-a1ff-e81b3e6029b3 req-92d4748c-127c-4e3d-a9eb-32f83bdf1061 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Updated VIF entry in instance network info cache for port 6fb13023-6749-4e1b-b7d9-235dff8e72d4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 04:59:52 np0005486808 nova_compute[259627]: 2025-10-14 08:59:52.977 2 DEBUG nova.network.neutron [req-cfb5cff7-85f3-415a-a1ff-e81b3e6029b3 req-92d4748c-127c-4e3d-a9eb-32f83bdf1061 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Updating instance_info_cache with network_info: [{"id": "6fb13023-6749-4e1b-b7d9-235dff8e72d4", "address": "fa:16:3e:b3:11:58", "network": {"id": "ea0c857a-d31a-43a0-b285-c89c1ddc920a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332731361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7ee17abdbe419cb7d7fd0da2cd2068", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fb13023-67", "ovs_interfaceid": "6fb13023-6749-4e1b-b7d9-235dff8e72d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:59:53 np0005486808 nova_compute[259627]: 2025-10-14 08:59:53.005 2 DEBUG oslo_concurrency.processutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b932e3d1-4cf6-4934-9eec-c93284b17b43/disk.config b932e3d1-4cf6-4934-9eec-c93284b17b43_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:53 np0005486808 nova_compute[259627]: 2025-10-14 08:59:53.005 2 INFO nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Deleting local config drive /var/lib/nova/instances/b932e3d1-4cf6-4934-9eec-c93284b17b43/disk.config because it was imported into RBD.#033[00m
Oct 14 04:59:53 np0005486808 nova_compute[259627]: 2025-10-14 08:59:53.009 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:53 np0005486808 nova_compute[259627]: 2025-10-14 08:59:53.010 2 DEBUG oslo_concurrency.lockutils [req-cfb5cff7-85f3-415a-a1ff-e81b3e6029b3 req-92d4748c-127c-4e3d-a9eb-32f83bdf1061 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-b932e3d1-4cf6-4934-9eec-c93284b17b43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:59:53 np0005486808 nova_compute[259627]: 2025-10-14 08:59:53.014 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:59:53 np0005486808 nova_compute[259627]: 2025-10-14 08:59:53.023 2 INFO nova.compute.manager [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Took 8.69 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 04:59:53 np0005486808 nova_compute[259627]: 2025-10-14 08:59:53.023 2 DEBUG nova.compute.manager [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:53 np0005486808 nova_compute[259627]: 2025-10-14 08:59:53.036 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:59:53 np0005486808 kernel: tap6fb13023-67: entered promiscuous mode
Oct 14 04:59:53 np0005486808 NetworkManager[44885]: <info>  [1760432393.0675] manager: (tap6fb13023-67): new Tun device (/org/freedesktop/NetworkManager/Devices/136)
Oct 14 04:59:53 np0005486808 nova_compute[259627]: 2025-10-14 08:59:53.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:53 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:53Z|00274|binding|INFO|Claiming lport 6fb13023-6749-4e1b-b7d9-235dff8e72d4 for this chassis.
Oct 14 04:59:53 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:53Z|00275|binding|INFO|6fb13023-6749-4e1b-b7d9-235dff8e72d4: Claiming fa:16:3e:b3:11:58 10.100.0.5
Oct 14 04:59:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:53.076 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:11:58 10.100.0.5'], port_security=['fa:16:3e:b3:11:58 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b932e3d1-4cf6-4934-9eec-c93284b17b43', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ed7ee17abdbe419cb7d7fd0da2cd2068', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a616a5a0-dd86-4326-bbdf-7cf172de843b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c1013e89-bd11-44b1-be74-a5ce3b4c520f, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=6fb13023-6749-4e1b-b7d9-235dff8e72d4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:59:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:53.079 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 6fb13023-6749-4e1b-b7d9-235dff8e72d4 in datapath ea0c857a-d31a-43a0-b285-c89c1ddc920a bound to our chassis#033[00m
Oct 14 04:59:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:53.081 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ea0c857a-d31a-43a0-b285-c89c1ddc920a#033[00m
Oct 14 04:59:53 np0005486808 NetworkManager[44885]: <info>  [1760432393.0927] device (tap6fb13023-67): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 04:59:53 np0005486808 NetworkManager[44885]: <info>  [1760432393.0934] device (tap6fb13023-67): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 04:59:53 np0005486808 nova_compute[259627]: 2025-10-14 08:59:53.093 2 INFO nova.compute.manager [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Took 9.88 seconds to build instance.#033[00m
Oct 14 04:59:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:53.097 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[af091888-8156-4b18-b7d1-e41b7ff726df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:53 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:53Z|00276|binding|INFO|Setting lport 6fb13023-6749-4e1b-b7d9-235dff8e72d4 ovn-installed in OVS
Oct 14 04:59:53 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:53Z|00277|binding|INFO|Setting lport 6fb13023-6749-4e1b-b7d9-235dff8e72d4 up in Southbound
Oct 14 04:59:53 np0005486808 systemd-machined[214636]: New machine qemu-38-instance-00000022.
Oct 14 04:59:53 np0005486808 nova_compute[259627]: 2025-10-14 08:59:53.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:53 np0005486808 nova_compute[259627]: 2025-10-14 08:59:53.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:53 np0005486808 systemd[1]: Started Virtual Machine qemu-38-instance-00000022.
Oct 14 04:59:53 np0005486808 nova_compute[259627]: 2025-10-14 08:59:53.116 2 DEBUG oslo_concurrency.lockutils [None req-c36f3357-5aa3-49ce-b2f2-ffcb06483c82 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Lock "1af6c158-005b-4f3c-9044-87158e57378d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:53.140 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[27b7c5dc-53d7-4aa4-8b79-5a441bd40701]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:53.145 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[15f68432-f96a-4000-9272-76f8c923e5c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:53.177 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[13a8f917-b226-4e05-aece-ee56fc6de246]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:53.198 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b70cc78d-abab-49c5-88c0-de601d7dd834]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapea0c857a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:51:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 5, 'rx_bytes': 702, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 5, 'rx_bytes': 702, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616925, 'reachable_time': 21171, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 7, 'inoctets': 604, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 7, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 604, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 7, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299208, 'error': None, 'target': 'ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:53.214 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6e7a9926-eecc-44ca-ad18-2a9af773bed9]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapea0c857a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616940, 'tstamp': 616940}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299210, 'error': None, 'target': 'ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapea0c857a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616944, 'tstamp': 616944}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299210, 'error': None, 'target': 'ovnmeta-ea0c857a-d31a-43a0-b285-c89c1ddc920a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:53.216 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea0c857a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:53 np0005486808 nova_compute[259627]: 2025-10-14 08:59:53.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:53 np0005486808 nova_compute[259627]: 2025-10-14 08:59:53.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:53.221 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapea0c857a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:53.221 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:59:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:53.221 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapea0c857a-d0, col_values=(('external_ids', {'iface-id': '6baedd76-8a05-42d6-8356-18b586f58672'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:53.222 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:59:53 np0005486808 nova_compute[259627]: 2025-10-14 08:59:53.310 2 INFO nova.virt.libvirt.driver [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Deleting instance files /var/lib/nova/instances/f3dafba3-6472-4921-9ece-b6076172365e_del#033[00m
Oct 14 04:59:53 np0005486808 nova_compute[259627]: 2025-10-14 08:59:53.311 2 INFO nova.virt.libvirt.driver [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Deletion of /var/lib/nova/instances/f3dafba3-6472-4921-9ece-b6076172365e_del complete#033[00m
Oct 14 04:59:53 np0005486808 nova_compute[259627]: 2025-10-14 08:59:53.380 2 INFO nova.compute.manager [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Took 0.62 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 04:59:53 np0005486808 nova_compute[259627]: 2025-10-14 08:59:53.380 2 DEBUG oslo.service.loopingcall [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 04:59:53 np0005486808 nova_compute[259627]: 2025-10-14 08:59:53.380 2 DEBUG nova.compute.manager [-] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 04:59:53 np0005486808 nova_compute[259627]: 2025-10-14 08:59:53.381 2 DEBUG nova.network.neutron [-] [instance: f3dafba3-6472-4921-9ece-b6076172365e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 04:59:53 np0005486808 nova_compute[259627]: 2025-10-14 08:59:53.449 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432378.4484634, eb820455-d45c-4331-9363-124f11537f52 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:59:53 np0005486808 nova_compute[259627]: 2025-10-14 08:59:53.450 2 INFO nova.compute.manager [-] [instance: eb820455-d45c-4331-9363-124f11537f52] VM Stopped (Lifecycle Event)#033[00m
Oct 14 04:59:53 np0005486808 nova_compute[259627]: 2025-10-14 08:59:53.471 2 DEBUG nova.compute.manager [None req-04da1bc4-e72c-447c-9151-e66959b83d80 - - - - - -] [instance: eb820455-d45c-4331-9363-124f11537f52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:53 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1263: 305 pgs: 305 active+clean; 273 MiB data, 454 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 7.1 MiB/s wr, 182 op/s
Oct 14 04:59:54 np0005486808 nova_compute[259627]: 2025-10-14 08:59:54.203 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432394.2030919, b932e3d1-4cf6-4934-9eec-c93284b17b43 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:59:54 np0005486808 nova_compute[259627]: 2025-10-14 08:59:54.203 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] VM Started (Lifecycle Event)#033[00m
Oct 14 04:59:54 np0005486808 nova_compute[259627]: 2025-10-14 08:59:54.231 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:54 np0005486808 nova_compute[259627]: 2025-10-14 08:59:54.234 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432394.2049513, b932e3d1-4cf6-4934-9eec-c93284b17b43 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:59:54 np0005486808 nova_compute[259627]: 2025-10-14 08:59:54.234 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] VM Paused (Lifecycle Event)#033[00m
Oct 14 04:59:54 np0005486808 nova_compute[259627]: 2025-10-14 08:59:54.250 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:54 np0005486808 nova_compute[259627]: 2025-10-14 08:59:54.252 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:59:54 np0005486808 nova_compute[259627]: 2025-10-14 08:59:54.268 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.023 2 DEBUG nova.network.neutron [-] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.046 2 INFO nova.compute.manager [-] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Took 1.67 seconds to deallocate network for instance.#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.080 2 DEBUG nova.compute.manager [req-3acb8866-cd95-4571-b672-c8bc3ac13b53 req-68824200-78e2-48c8-91b6-e3c36bbae6d2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Received event network-vif-plugged-36158ae1-8367-4859-a407-565fde315649 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.080 2 DEBUG oslo_concurrency.lockutils [req-3acb8866-cd95-4571-b672-c8bc3ac13b53 req-68824200-78e2-48c8-91b6-e3c36bbae6d2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1af6c158-005b-4f3c-9044-87158e57378d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.081 2 DEBUG oslo_concurrency.lockutils [req-3acb8866-cd95-4571-b672-c8bc3ac13b53 req-68824200-78e2-48c8-91b6-e3c36bbae6d2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1af6c158-005b-4f3c-9044-87158e57378d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.081 2 DEBUG oslo_concurrency.lockutils [req-3acb8866-cd95-4571-b672-c8bc3ac13b53 req-68824200-78e2-48c8-91b6-e3c36bbae6d2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1af6c158-005b-4f3c-9044-87158e57378d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.081 2 DEBUG nova.compute.manager [req-3acb8866-cd95-4571-b672-c8bc3ac13b53 req-68824200-78e2-48c8-91b6-e3c36bbae6d2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] No waiting events found dispatching network-vif-plugged-36158ae1-8367-4859-a407-565fde315649 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.081 2 WARNING nova.compute.manager [req-3acb8866-cd95-4571-b672-c8bc3ac13b53 req-68824200-78e2-48c8-91b6-e3c36bbae6d2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Received unexpected event network-vif-plugged-36158ae1-8367-4859-a407-565fde315649 for instance with vm_state active and task_state None.#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.081 2 DEBUG nova.compute.manager [req-3acb8866-cd95-4571-b672-c8bc3ac13b53 req-68824200-78e2-48c8-91b6-e3c36bbae6d2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Received event network-vif-plugged-6fb13023-6749-4e1b-b7d9-235dff8e72d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.081 2 DEBUG oslo_concurrency.lockutils [req-3acb8866-cd95-4571-b672-c8bc3ac13b53 req-68824200-78e2-48c8-91b6-e3c36bbae6d2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b932e3d1-4cf6-4934-9eec-c93284b17b43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.082 2 DEBUG oslo_concurrency.lockutils [req-3acb8866-cd95-4571-b672-c8bc3ac13b53 req-68824200-78e2-48c8-91b6-e3c36bbae6d2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b932e3d1-4cf6-4934-9eec-c93284b17b43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.082 2 DEBUG oslo_concurrency.lockutils [req-3acb8866-cd95-4571-b672-c8bc3ac13b53 req-68824200-78e2-48c8-91b6-e3c36bbae6d2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b932e3d1-4cf6-4934-9eec-c93284b17b43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.082 2 DEBUG nova.compute.manager [req-3acb8866-cd95-4571-b672-c8bc3ac13b53 req-68824200-78e2-48c8-91b6-e3c36bbae6d2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Processing event network-vif-plugged-6fb13023-6749-4e1b-b7d9-235dff8e72d4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.082 2 DEBUG nova.compute.manager [req-3acb8866-cd95-4571-b672-c8bc3ac13b53 req-68824200-78e2-48c8-91b6-e3c36bbae6d2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Received event network-vif-plugged-6fb13023-6749-4e1b-b7d9-235dff8e72d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.082 2 DEBUG oslo_concurrency.lockutils [req-3acb8866-cd95-4571-b672-c8bc3ac13b53 req-68824200-78e2-48c8-91b6-e3c36bbae6d2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b932e3d1-4cf6-4934-9eec-c93284b17b43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.082 2 DEBUG oslo_concurrency.lockutils [req-3acb8866-cd95-4571-b672-c8bc3ac13b53 req-68824200-78e2-48c8-91b6-e3c36bbae6d2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b932e3d1-4cf6-4934-9eec-c93284b17b43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.082 2 DEBUG oslo_concurrency.lockutils [req-3acb8866-cd95-4571-b672-c8bc3ac13b53 req-68824200-78e2-48c8-91b6-e3c36bbae6d2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b932e3d1-4cf6-4934-9eec-c93284b17b43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.083 2 DEBUG nova.compute.manager [req-3acb8866-cd95-4571-b672-c8bc3ac13b53 req-68824200-78e2-48c8-91b6-e3c36bbae6d2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] No waiting events found dispatching network-vif-plugged-6fb13023-6749-4e1b-b7d9-235dff8e72d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.083 2 WARNING nova.compute.manager [req-3acb8866-cd95-4571-b672-c8bc3ac13b53 req-68824200-78e2-48c8-91b6-e3c36bbae6d2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Received unexpected event network-vif-plugged-6fb13023-6749-4e1b-b7d9-235dff8e72d4 for instance with vm_state building and task_state spawning.#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.083 2 DEBUG nova.compute.manager [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.086 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432395.0865035, b932e3d1-4cf6-4934-9eec-c93284b17b43 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.086 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] VM Resumed (Lifecycle Event)#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.088 2 DEBUG nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.090 2 INFO nova.virt.libvirt.driver [-] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Instance spawned successfully.#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.090 2 DEBUG nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.118 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.121 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.124 2 DEBUG oslo_concurrency.lockutils [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.124 2 DEBUG oslo_concurrency.lockutils [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.130 2 DEBUG nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.131 2 DEBUG nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.131 2 DEBUG nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.131 2 DEBUG nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.132 2 DEBUG nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.132 2 DEBUG nova.virt.libvirt.driver [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.167 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.207 2 INFO nova.compute.manager [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Took 9.03 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.207 2 DEBUG nova.compute.manager [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.250 2 DEBUG oslo_concurrency.processutils [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.287 2 INFO nova.compute.manager [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] [instance: b932e3d1-4cf6-4934-9eec-c93284b17b43] Took 10.71 seconds to build instance.#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.310 2 DEBUG oslo_concurrency.lockutils [None req-f25b52be-60ee-442f-9bdd-fa84e7d5d101 56001fe1c9fc432e923f8c57058754db ed7ee17abdbe419cb7d7fd0da2cd2068 - - default default] Lock "b932e3d1-4cf6-4934-9eec-c93284b17b43" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.854s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.518 2 DEBUG oslo_concurrency.lockutils [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Acquiring lock "1af6c158-005b-4f3c-9044-87158e57378d" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.518 2 DEBUG oslo_concurrency.lockutils [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Lock "1af6c158-005b-4f3c-9044-87158e57378d" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.518 2 INFO nova.compute.manager [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Rebooting instance#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.536 2 DEBUG oslo_concurrency.lockutils [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Acquiring lock "refresh_cache-1af6c158-005b-4f3c-9044-87158e57378d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.537 2 DEBUG oslo_concurrency.lockutils [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Acquired lock "refresh_cache-1af6c158-005b-4f3c-9044-87158e57378d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.537 2 DEBUG nova.network.neutron [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 04:59:55 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1264: 305 pgs: 305 active+clean; 181 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 8.7 MiB/s rd, 5.5 MiB/s wr, 452 op/s
Oct 14 04:59:55 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:59:55 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2740848897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.757 2 DEBUG oslo_concurrency.processutils [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.764 2 DEBUG nova.compute.provider_tree [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.782 2 DEBUG nova.scheduler.client.report [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.806 2 DEBUG oslo_concurrency.lockutils [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.838 2 INFO nova.scheduler.client.report [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Deleted allocations for instance f3dafba3-6472-4921-9ece-b6076172365e#033[00m
Oct 14 04:59:55 np0005486808 nova_compute[259627]: 2025-10-14 08:59:55.921 2 DEBUG oslo_concurrency.lockutils [None req-8c65345a-6190-4ca6-a99e-2274195d8c9f 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "f3dafba3-6472-4921-9ece-b6076172365e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.161s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:56 np0005486808 nova_compute[259627]: 2025-10-14 08:59:56.353 2 DEBUG oslo_concurrency.lockutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "6de921d2-e251-431d-9333-bae44aa81859" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:56 np0005486808 nova_compute[259627]: 2025-10-14 08:59:56.354 2 DEBUG oslo_concurrency.lockutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "6de921d2-e251-431d-9333-bae44aa81859" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:56 np0005486808 nova_compute[259627]: 2025-10-14 08:59:56.383 2 DEBUG nova.compute.manager [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 04:59:56 np0005486808 nova_compute[259627]: 2025-10-14 08:59:56.475 2 DEBUG oslo_concurrency.lockutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:56 np0005486808 nova_compute[259627]: 2025-10-14 08:59:56.476 2 DEBUG oslo_concurrency.lockutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:56 np0005486808 nova_compute[259627]: 2025-10-14 08:59:56.486 2 DEBUG nova.virt.hardware [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 04:59:56 np0005486808 nova_compute[259627]: 2025-10-14 08:59:56.488 2 INFO nova.compute.claims [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 04:59:56 np0005486808 nova_compute[259627]: 2025-10-14 08:59:56.633 2 DEBUG oslo_concurrency.processutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:56 np0005486808 nova_compute[259627]: 2025-10-14 08:59:56.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 04:59:57 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1258774786' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 04:59:57 np0005486808 nova_compute[259627]: 2025-10-14 08:59:57.069 2 DEBUG oslo_concurrency.processutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:57 np0005486808 nova_compute[259627]: 2025-10-14 08:59:57.074 2 DEBUG nova.compute.provider_tree [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 04:59:57 np0005486808 nova_compute[259627]: 2025-10-14 08:59:57.090 2 DEBUG nova.scheduler.client.report [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 04:59:57 np0005486808 nova_compute[259627]: 2025-10-14 08:59:57.122 2 DEBUG oslo_concurrency.lockutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:57 np0005486808 nova_compute[259627]: 2025-10-14 08:59:57.125 2 DEBUG nova.compute.manager [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 04:59:57 np0005486808 nova_compute[259627]: 2025-10-14 08:59:57.193 2 DEBUG nova.compute.manager [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 04:59:57 np0005486808 nova_compute[259627]: 2025-10-14 08:59:57.194 2 DEBUG nova.network.neutron [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 04:59:57 np0005486808 nova_compute[259627]: 2025-10-14 08:59:57.222 2 INFO nova.virt.libvirt.driver [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 04:59:57 np0005486808 nova_compute[259627]: 2025-10-14 08:59:57.253 2 DEBUG nova.compute.manager [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 04:59:57 np0005486808 nova_compute[259627]: 2025-10-14 08:59:57.358 2 DEBUG nova.compute.manager [req-cb8802c7-a0a4-4fa3-81a0-c3e0454005dd req-58dce2ed-9787-495d-900a-6c3f98093051 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Received event network-vif-deleted-ecb526b7-d1ad-4a75-b851-482702018258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:59:57 np0005486808 nova_compute[259627]: 2025-10-14 08:59:57.363 2 DEBUG nova.compute.manager [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 04:59:57 np0005486808 nova_compute[259627]: 2025-10-14 08:59:57.364 2 DEBUG nova.virt.libvirt.driver [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 04:59:57 np0005486808 nova_compute[259627]: 2025-10-14 08:59:57.365 2 INFO nova.virt.libvirt.driver [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Creating image(s)#033[00m
Oct 14 04:59:57 np0005486808 nova_compute[259627]: 2025-10-14 08:59:57.386 2 DEBUG nova.storage.rbd_utils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image 6de921d2-e251-431d-9333-bae44aa81859_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:59:57 np0005486808 nova_compute[259627]: 2025-10-14 08:59:57.415 2 DEBUG nova.storage.rbd_utils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image 6de921d2-e251-431d-9333-bae44aa81859_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:59:57 np0005486808 nova_compute[259627]: 2025-10-14 08:59:57.452 2 DEBUG nova.storage.rbd_utils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image 6de921d2-e251-431d-9333-bae44aa81859_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:59:57 np0005486808 nova_compute[259627]: 2025-10-14 08:59:57.458 2 DEBUG oslo_concurrency.processutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:57 np0005486808 nova_compute[259627]: 2025-10-14 08:59:57.495 2 DEBUG nova.policy [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3a217215c39e41fea2323ff7b3b4e6aa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0d87d2d744db48dc8b32bb4bf6847fce', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 04:59:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 04:59:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Oct 14 04:59:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Oct 14 04:59:57 np0005486808 nova_compute[259627]: 2025-10-14 08:59:57.531 2 DEBUG oslo_concurrency.processutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:57 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Oct 14 04:59:57 np0005486808 nova_compute[259627]: 2025-10-14 08:59:57.534 2 DEBUG oslo_concurrency.lockutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:57 np0005486808 nova_compute[259627]: 2025-10-14 08:59:57.535 2 DEBUG oslo_concurrency.lockutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:57 np0005486808 nova_compute[259627]: 2025-10-14 08:59:57.537 2 DEBUG oslo_concurrency.lockutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:57 np0005486808 nova_compute[259627]: 2025-10-14 08:59:57.581 2 DEBUG nova.storage.rbd_utils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] rbd image 6de921d2-e251-431d-9333-bae44aa81859_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 04:59:57 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1266: 305 pgs: 305 active+clean; 181 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 7.7 MiB/s rd, 55 KiB/s wr, 407 op/s
Oct 14 04:59:57 np0005486808 nova_compute[259627]: 2025-10-14 08:59:57.586 2 DEBUG oslo_concurrency.processutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 6de921d2-e251-431d-9333-bae44aa81859_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:57 np0005486808 nova_compute[259627]: 2025-10-14 08:59:57.698 2 DEBUG nova.network.neutron [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Updating instance_info_cache with network_info: [{"id": "36158ae1-8367-4859-a407-565fde315649", "address": "fa:16:3e:a4:7e:78", "network": {"id": "a0f6e4de-522c-468f-8b55-9e5064a6cce8", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1503089131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd273d79854242779e57eece9a65f7c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36158ae1-83", "ovs_interfaceid": "36158ae1-8367-4859-a407-565fde315649", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 04:59:57 np0005486808 nova_compute[259627]: 2025-10-14 08:59:57.746 2 DEBUG oslo_concurrency.lockutils [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Releasing lock "refresh_cache-1af6c158-005b-4f3c-9044-87158e57378d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 04:59:57 np0005486808 nova_compute[259627]: 2025-10-14 08:59:57.748 2 DEBUG nova.compute.manager [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:57 np0005486808 nova_compute[259627]: 2025-10-14 08:59:57.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:57 np0005486808 nova_compute[259627]: 2025-10-14 08:59:57.830 2 DEBUG oslo_concurrency.processutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 6de921d2-e251-431d-9333-bae44aa81859_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.244s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:57 np0005486808 nova_compute[259627]: 2025-10-14 08:59:57.879 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432382.813722, 27fa4cf8-c08c-46a2-af8f-17c8980a2317 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:59:57 np0005486808 nova_compute[259627]: 2025-10-14 08:59:57.879 2 INFO nova.compute.manager [-] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] VM Stopped (Lifecycle Event)#033[00m
Oct 14 04:59:57 np0005486808 nova_compute[259627]: 2025-10-14 08:59:57.886 2 DEBUG nova.storage.rbd_utils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] resizing rbd image 6de921d2-e251-431d-9333-bae44aa81859_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 04:59:57 np0005486808 nova_compute[259627]: 2025-10-14 08:59:57.917 2 DEBUG nova.compute.manager [None req-ee8e8317-fcf4-4168-96b4-915240082296 - - - - - -] [instance: 27fa4cf8-c08c-46a2-af8f-17c8980a2317] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:57 np0005486808 kernel: tap36158ae1-83 (unregistering): left promiscuous mode
Oct 14 04:59:57 np0005486808 NetworkManager[44885]: <info>  [1760432397.9549] device (tap36158ae1-83): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 04:59:57 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:57Z|00278|binding|INFO|Releasing lport 36158ae1-8367-4859-a407-565fde315649 from this chassis (sb_readonly=0)
Oct 14 04:59:57 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:57Z|00279|binding|INFO|Setting lport 36158ae1-8367-4859-a407-565fde315649 down in Southbound
Oct 14 04:59:57 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:57Z|00280|binding|INFO|Removing iface tap36158ae1-83 ovn-installed in OVS
Oct 14 04:59:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:57.969 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:7e:78 10.100.0.7'], port_security=['fa:16:3e:a4:7e:78 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1af6c158-005b-4f3c-9044-87158e57378d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0f6e4de-522c-468f-8b55-9e5064a6cce8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd273d79854242779e57eece9a65f7c0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4b2e574d-708f-45c3-a256-0b7fe2d0873c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=65584940-3c3d-4797-80d1-97beab212175, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=36158ae1-8367-4859-a407-565fde315649) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:59:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:57.970 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 36158ae1-8367-4859-a407-565fde315649 in datapath a0f6e4de-522c-468f-8b55-9e5064a6cce8 unbound from our chassis#033[00m
Oct 14 04:59:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:57.971 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a0f6e4de-522c-468f-8b55-9e5064a6cce8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 04:59:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:57.972 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[43f36d3c-4a7a-4ce0-8927-824397984e25]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:57.972 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8 namespace which is not needed anymore#033[00m
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:58 np0005486808 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000021.scope: Deactivated successfully.
Oct 14 04:59:58 np0005486808 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000021.scope: Consumed 6.219s CPU time.
Oct 14 04:59:58 np0005486808 systemd-machined[214636]: Machine qemu-37-instance-00000021 terminated.
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.016 2 DEBUG nova.objects.instance [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lazy-loading 'migration_context' on Instance uuid 6de921d2-e251-431d-9333-bae44aa81859 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.030 2 DEBUG nova.virt.libvirt.driver [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.031 2 DEBUG nova.virt.libvirt.driver [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Ensure instance console log exists: /var/lib/nova/instances/6de921d2-e251-431d-9333-bae44aa81859/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.031 2 DEBUG oslo_concurrency.lockutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.031 2 DEBUG oslo_concurrency.lockutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.032 2 DEBUG oslo_concurrency.lockutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:58 np0005486808 neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8[299101]: [NOTICE]   (299109) : haproxy version is 2.8.14-c23fe91
Oct 14 04:59:58 np0005486808 neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8[299101]: [NOTICE]   (299109) : path to executable is /usr/sbin/haproxy
Oct 14 04:59:58 np0005486808 neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8[299101]: [WARNING]  (299109) : Exiting Master process...
Oct 14 04:59:58 np0005486808 neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8[299101]: [ALERT]    (299109) : Current worker (299112) exited with code 143 (Terminated)
Oct 14 04:59:58 np0005486808 neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8[299101]: [WARNING]  (299109) : All workers exited. Exiting... (0)
Oct 14 04:59:58 np0005486808 systemd[1]: libpod-96b6b042bd0f75a32c3f621b6a4a9de25e979d9f8d11a010c48414210ad4e590.scope: Deactivated successfully.
Oct 14 04:59:58 np0005486808 podman[299489]: 2025-10-14 08:59:58.109621696 +0000 UTC m=+0.044654248 container died 96b6b042bd0f75a32c3f621b6a4a9de25e979d9f8d11a010c48414210ad4e590 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009)
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:58 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-96b6b042bd0f75a32c3f621b6a4a9de25e979d9f8d11a010c48414210ad4e590-userdata-shm.mount: Deactivated successfully.
Oct 14 04:59:58 np0005486808 systemd[1]: var-lib-containers-storage-overlay-b11ba0701e8e75b5e6c026747289ae9028a21947c0206fb85f067034683ad925-merged.mount: Deactivated successfully.
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.152 2 INFO nova.virt.libvirt.driver [-] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Instance destroyed successfully.#033[00m
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.153 2 DEBUG nova.objects.instance [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Lazy-loading 'resources' on Instance uuid 1af6c158-005b-4f3c-9044-87158e57378d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:59:58 np0005486808 podman[299489]: 2025-10-14 08:59:58.154148089 +0000 UTC m=+0.089180661 container cleanup 96b6b042bd0f75a32c3f621b6a4a9de25e979d9f8d11a010c48414210ad4e590 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.161 2 DEBUG nova.network.neutron [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Successfully created port: c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.169 2 DEBUG nova.virt.libvirt.vif [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:59:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-461517423',display_name='tempest-InstanceActionsTestJSON-server-461517423',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-461517423',id=33,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:59:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fd273d79854242779e57eece9a65f7c0',ramdisk_id='',reservation_id='r-al5tk2ok',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-580993042',owner_user_name='tempest-InstanceActionsTestJSON-580993042-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:59:57Z,user_data=None,user_id='9ec2f781b62446cb98129707144b9d37',uuid=1af6c158-005b-4f3c-9044-87158e57378d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "36158ae1-8367-4859-a407-565fde315649", "address": "fa:16:3e:a4:7e:78", "network": {"id": "a0f6e4de-522c-468f-8b55-9e5064a6cce8", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1503089131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd273d79854242779e57eece9a65f7c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36158ae1-83", "ovs_interfaceid": "36158ae1-8367-4859-a407-565fde315649", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.169 2 DEBUG nova.network.os_vif_util [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Converting VIF {"id": "36158ae1-8367-4859-a407-565fde315649", "address": "fa:16:3e:a4:7e:78", "network": {"id": "a0f6e4de-522c-468f-8b55-9e5064a6cce8", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1503089131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd273d79854242779e57eece9a65f7c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36158ae1-83", "ovs_interfaceid": "36158ae1-8367-4859-a407-565fde315649", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.170 2 DEBUG nova.network.os_vif_util [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a4:7e:78,bridge_name='br-int',has_traffic_filtering=True,id=36158ae1-8367-4859-a407-565fde315649,network=Network(a0f6e4de-522c-468f-8b55-9e5064a6cce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36158ae1-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.171 2 DEBUG os_vif [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a4:7e:78,bridge_name='br-int',has_traffic_filtering=True,id=36158ae1-8367-4859-a407-565fde315649,network=Network(a0f6e4de-522c-468f-8b55-9e5064a6cce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36158ae1-83') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.174 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap36158ae1-83, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.179 2 INFO os_vif [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a4:7e:78,bridge_name='br-int',has_traffic_filtering=True,id=36158ae1-8367-4859-a407-565fde315649,network=Network(a0f6e4de-522c-468f-8b55-9e5064a6cce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36158ae1-83')#033[00m
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.186 2 DEBUG nova.virt.libvirt.driver [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Start _get_guest_xml network_info=[{"id": "36158ae1-8367-4859-a407-565fde315649", "address": "fa:16:3e:a4:7e:78", "network": {"id": "a0f6e4de-522c-468f-8b55-9e5064a6cce8", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1503089131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd273d79854242779e57eece9a65f7c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36158ae1-83", "ovs_interfaceid": "36158ae1-8367-4859-a407-565fde315649", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.189 2 WARNING nova.virt.libvirt.driver [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 04:59:58 np0005486808 systemd[1]: libpod-conmon-96b6b042bd0f75a32c3f621b6a4a9de25e979d9f8d11a010c48414210ad4e590.scope: Deactivated successfully.
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.196 2 DEBUG nova.virt.libvirt.host [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.196 2 DEBUG nova.virt.libvirt.host [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.199 2 DEBUG nova.virt.libvirt.host [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.200 2 DEBUG nova.virt.libvirt.host [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.200 2 DEBUG nova.virt.libvirt.driver [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.200 2 DEBUG nova.virt.hardware [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.201 2 DEBUG nova.virt.hardware [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.201 2 DEBUG nova.virt.hardware [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.202 2 DEBUG nova.virt.hardware [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.202 2 DEBUG nova.virt.hardware [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.202 2 DEBUG nova.virt.hardware [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.202 2 DEBUG nova.virt.hardware [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.203 2 DEBUG nova.virt.hardware [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.203 2 DEBUG nova.virt.hardware [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.203 2 DEBUG nova.virt.hardware [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.204 2 DEBUG nova.virt.hardware [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.204 2 DEBUG nova.objects.instance [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 1af6c158-005b-4f3c-9044-87158e57378d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.219 2 DEBUG oslo_concurrency.processutils [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:58 np0005486808 podman[299525]: 2025-10-14 08:59:58.219763461 +0000 UTC m=+0.042231858 container remove 96b6b042bd0f75a32c3f621b6a4a9de25e979d9f8d11a010c48414210ad4e590 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 04:59:58 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:58.226 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[19c417cb-0a9c-4c78-8d42-ed52b9819d28]: (4, ('Tue Oct 14 08:59:58 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8 (96b6b042bd0f75a32c3f621b6a4a9de25e979d9f8d11a010c48414210ad4e590)\n96b6b042bd0f75a32c3f621b6a4a9de25e979d9f8d11a010c48414210ad4e590\nTue Oct 14 08:59:58 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8 (96b6b042bd0f75a32c3f621b6a4a9de25e979d9f8d11a010c48414210ad4e590)\n96b6b042bd0f75a32c3f621b6a4a9de25e979d9f8d11a010c48414210ad4e590\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:58 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:58.228 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[96f9f9b2-5bb5-4327-b1b6-84396eaa876d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:58 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:58.231 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0f6e4de-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:58 np0005486808 kernel: tapa0f6e4de-50: left promiscuous mode
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:58 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:58.259 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[58a197bd-7940-4edf-993d-d1e51003075e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:58 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:58.286 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7165e449-f174-42e6-90df-d572966c74f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:58 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:58.287 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9f492c00-648b-4aee-aa80-b224ff6fcb39]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:58 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:58.301 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6a9673e9-f21a-4e3c-81fa-f2581b36978e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 617030, 'reachable_time': 41316, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299540, 'error': None, 'target': 'ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:58 np0005486808 systemd[1]: run-netns-ovnmeta\x2da0f6e4de\x2d522c\x2d468f\x2d8b55\x2d9e5064a6cce8.mount: Deactivated successfully.
Oct 14 04:59:58 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:58.305 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 04:59:58 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:58.305 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[e3a0bb4e-994c-4cc1-bcb6-c114c268e87b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.321 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432383.3203607, f3dafba3-6472-4921-9ece-b6076172365e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.322 2 INFO nova.compute.manager [-] [instance: f3dafba3-6472-4921-9ece-b6076172365e] VM Stopped (Lifecycle Event)#033[00m
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.341 2 DEBUG nova.compute.manager [None req-ac2b7d07-5e51-43bd-97bd-aabdbc0d8d03 - - - - - -] [instance: f3dafba3-6472-4921-9ece-b6076172365e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 04:59:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:59:58 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4100255372' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.677 2 DEBUG oslo_concurrency.processutils [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:58 np0005486808 nova_compute[259627]: 2025-10-14 08:59:58.715 2 DEBUG oslo_concurrency.processutils [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.031 2 DEBUG nova.network.neutron [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Successfully updated port: c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.048 2 DEBUG oslo_concurrency.lockutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquiring lock "refresh_cache-6de921d2-e251-431d-9333-bae44aa81859" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.049 2 DEBUG oslo_concurrency.lockutils [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] Acquired lock "refresh_cache-6de921d2-e251-431d-9333-bae44aa81859" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.049 2 DEBUG nova.network.neutron [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 04:59:59 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 04:59:59 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3144157410' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.141 2 DEBUG nova.compute.manager [req-2f05c9ec-1afd-442b-ba08-ed3ec9e01b5c req-3ed60ff6-84d6-4167-9fd8-6a8c1e51cb55 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Received event network-changed-c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.142 2 DEBUG nova.compute.manager [req-2f05c9ec-1afd-442b-ba08-ed3ec9e01b5c req-3ed60ff6-84d6-4167-9fd8-6a8c1e51cb55 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Refreshing instance network info cache due to event network-changed-c0eb9aa7-6f93-4b3b-aa65-324e1d2b889b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.142 2 DEBUG oslo_concurrency.lockutils [req-2f05c9ec-1afd-442b-ba08-ed3ec9e01b5c req-3ed60ff6-84d6-4167-9fd8-6a8c1e51cb55 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-6de921d2-e251-431d-9333-bae44aa81859" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.151 2 DEBUG oslo_concurrency.processutils [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.153 2 DEBUG nova.virt.libvirt.vif [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:59:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-461517423',display_name='tempest-InstanceActionsTestJSON-server-461517423',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-461517423',id=33,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:59:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fd273d79854242779e57eece9a65f7c0',ramdisk_id='',reservation_id='r-al5tk2ok',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-580993042',owner_user_name='tempest-InstanceActionsTestJSON-580993042-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:59:57Z,user_data=None,user_id='9ec2f781b62446cb98129707144b9d37',uuid=1af6c158-005b-4f3c-9044-87158e57378d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "36158ae1-8367-4859-a407-565fde315649", "address": "fa:16:3e:a4:7e:78", "network": {"id": "a0f6e4de-522c-468f-8b55-9e5064a6cce8", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1503089131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd273d79854242779e57eece9a65f7c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36158ae1-83", "ovs_interfaceid": "36158ae1-8367-4859-a407-565fde315649", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.153 2 DEBUG nova.network.os_vif_util [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Converting VIF {"id": "36158ae1-8367-4859-a407-565fde315649", "address": "fa:16:3e:a4:7e:78", "network": {"id": "a0f6e4de-522c-468f-8b55-9e5064a6cce8", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1503089131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd273d79854242779e57eece9a65f7c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36158ae1-83", "ovs_interfaceid": "36158ae1-8367-4859-a407-565fde315649", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.154 2 DEBUG nova.network.os_vif_util [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a4:7e:78,bridge_name='br-int',has_traffic_filtering=True,id=36158ae1-8367-4859-a407-565fde315649,network=Network(a0f6e4de-522c-468f-8b55-9e5064a6cce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36158ae1-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.155 2 DEBUG nova.objects.instance [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1af6c158-005b-4f3c-9044-87158e57378d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.181 2 DEBUG nova.virt.libvirt.driver [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] End _get_guest_xml xml=<domain type="kvm">
Oct 14 04:59:59 np0005486808 nova_compute[259627]:  <uuid>1af6c158-005b-4f3c-9044-87158e57378d</uuid>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:  <name>instance-00000021</name>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 04:59:59 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:      <nova:name>tempest-InstanceActionsTestJSON-server-461517423</nova:name>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 08:59:58</nova:creationTime>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 04:59:59 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:        <nova:user uuid="9ec2f781b62446cb98129707144b9d37">tempest-InstanceActionsTestJSON-580993042-project-member</nova:user>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:        <nova:project uuid="fd273d79854242779e57eece9a65f7c0">tempest-InstanceActionsTestJSON-580993042</nova:project>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:        <nova:port uuid="36158ae1-8367-4859-a407-565fde315649">
Oct 14 04:59:59 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <system>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:      <entry name="serial">1af6c158-005b-4f3c-9044-87158e57378d</entry>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:      <entry name="uuid">1af6c158-005b-4f3c-9044-87158e57378d</entry>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    </system>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:  <os>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:  </os>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:  <features>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:  </features>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:  </clock>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:  <devices>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 04:59:59 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/1af6c158-005b-4f3c-9044-87158e57378d_disk">
Oct 14 04:59:59 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:59:59 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 04:59:59 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/1af6c158-005b-4f3c-9044-87158e57378d_disk.config">
Oct 14 04:59:59 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:      </source>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 04:59:59 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:      </auth>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    </disk>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 04:59:59 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:a4:7e:78"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:      <target dev="tap36158ae1-83"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    </interface>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 04:59:59 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/1af6c158-005b-4f3c-9044-87158e57378d/console.log" append="off"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    </serial>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <video>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    </video>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <input type="keyboard" bus="usb"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 04:59:59 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    </rng>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 04:59:59 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 04:59:59 np0005486808 nova_compute[259627]:  </devices>
Oct 14 04:59:59 np0005486808 nova_compute[259627]: </domain>
Oct 14 04:59:59 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.182 2 DEBUG nova.virt.libvirt.driver [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] skipping disk for instance-00000021 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.183 2 DEBUG nova.virt.libvirt.driver [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] skipping disk for instance-00000021 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.184 2 DEBUG nova.virt.libvirt.vif [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T08:59:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-461517423',display_name='tempest-InstanceActionsTestJSON-server-461517423',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-461517423',id=33,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T08:59:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='fd273d79854242779e57eece9a65f7c0',ramdisk_id='',reservation_id='r-al5tk2ok',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio
',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-580993042',owner_user_name='tempest-InstanceActionsTestJSON-580993042-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T08:59:57Z,user_data=None,user_id='9ec2f781b62446cb98129707144b9d37',uuid=1af6c158-005b-4f3c-9044-87158e57378d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "36158ae1-8367-4859-a407-565fde315649", "address": "fa:16:3e:a4:7e:78", "network": {"id": "a0f6e4de-522c-468f-8b55-9e5064a6cce8", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1503089131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd273d79854242779e57eece9a65f7c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36158ae1-83", "ovs_interfaceid": "36158ae1-8367-4859-a407-565fde315649", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.184 2 DEBUG nova.network.os_vif_util [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Converting VIF {"id": "36158ae1-8367-4859-a407-565fde315649", "address": "fa:16:3e:a4:7e:78", "network": {"id": "a0f6e4de-522c-468f-8b55-9e5064a6cce8", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1503089131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd273d79854242779e57eece9a65f7c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36158ae1-83", "ovs_interfaceid": "36158ae1-8367-4859-a407-565fde315649", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.185 2 DEBUG nova.network.os_vif_util [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a4:7e:78,bridge_name='br-int',has_traffic_filtering=True,id=36158ae1-8367-4859-a407-565fde315649,network=Network(a0f6e4de-522c-468f-8b55-9e5064a6cce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36158ae1-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.185 2 DEBUG os_vif [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a4:7e:78,bridge_name='br-int',has_traffic_filtering=True,id=36158ae1-8367-4859-a407-565fde315649,network=Network(a0f6e4de-522c-468f-8b55-9e5064a6cce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36158ae1-83') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.186 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.187 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.189 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap36158ae1-83, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.190 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap36158ae1-83, col_values=(('external_ids', {'iface-id': '36158ae1-8367-4859-a407-565fde315649', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a4:7e:78', 'vm-uuid': '1af6c158-005b-4f3c-9044-87158e57378d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:59 np0005486808 NetworkManager[44885]: <info>  [1760432399.1925] manager: (tap36158ae1-83): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/137)
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.196 2 INFO os_vif [None req-bf859fb7-2977-4339-86c4-9ead28512fa0 9ec2f781b62446cb98129707144b9d37 fd273d79854242779e57eece9a65f7c0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a4:7e:78,bridge_name='br-int',has_traffic_filtering=True,id=36158ae1-8367-4859-a407-565fde315649,network=Network(a0f6e4de-522c-468f-8b55-9e5064a6cce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36158ae1-83')#033[00m
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.226 2 DEBUG nova.network.neutron [None req-9de6454c-5c99-4574-a968-9993ad73280c 3a217215c39e41fea2323ff7b3b4e6aa 0d87d2d744db48dc8b32bb4bf6847fce - - default default] [instance: 6de921d2-e251-431d-9333-bae44aa81859] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 04:59:59 np0005486808 kernel: tap36158ae1-83: entered promiscuous mode
Oct 14 04:59:59 np0005486808 systemd-udevd[299458]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:59 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:59Z|00281|binding|INFO|Claiming lport 36158ae1-8367-4859-a407-565fde315649 for this chassis.
Oct 14 04:59:59 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:59Z|00282|binding|INFO|36158ae1-8367-4859-a407-565fde315649: Claiming fa:16:3e:a4:7e:78 10.100.0.7
Oct 14 04:59:59 np0005486808 NetworkManager[44885]: <info>  [1760432399.2549] manager: (tap36158ae1-83): new Tun device (/org/freedesktop/NetworkManager/Devices/138)
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.261 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:7e:78 10.100.0.7'], port_security=['fa:16:3e:a4:7e:78 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1af6c158-005b-4f3c-9044-87158e57378d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0f6e4de-522c-468f-8b55-9e5064a6cce8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd273d79854242779e57eece9a65f7c0', 'neutron:revision_number': '5', 'neutron:security_group_ids': '4b2e574d-708f-45c3-a256-0b7fe2d0873c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=65584940-3c3d-4797-80d1-97beab212175, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=36158ae1-8367-4859-a407-565fde315649) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.263 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 36158ae1-8367-4859-a407-565fde315649 in datapath a0f6e4de-522c-468f-8b55-9e5064a6cce8 bound to our chassis#033[00m
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.265 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a0f6e4de-522c-468f-8b55-9e5064a6cce8#033[00m
Oct 14 04:59:59 np0005486808 NetworkManager[44885]: <info>  [1760432399.2678] device (tap36158ae1-83): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 04:59:59 np0005486808 NetworkManager[44885]: <info>  [1760432399.2691] device (tap36158ae1-83): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 04:59:59 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:59Z|00283|binding|INFO|Setting lport 36158ae1-8367-4859-a407-565fde315649 ovn-installed in OVS
Oct 14 04:59:59 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:59Z|00284|binding|INFO|Setting lport 36158ae1-8367-4859-a407-565fde315649 up in Southbound
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.280 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[257fefcd-e30e-41b3-a04e-fc8d3caa9cbc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.282 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa0f6e4de-51 in ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.284 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa0f6e4de-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.284 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9b681e22-6629-4a45-be01-31c14d5f8829]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.287 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c3bf8722-c5b5-4111-ade8-833fd692029f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:59 np0005486808 systemd-machined[214636]: New machine qemu-39-instance-00000021.
Oct 14 04:59:59 np0005486808 systemd[1]: Started Virtual Machine qemu-39-instance-00000021.
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.303 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[4616b5bb-fae5-4dee-af08-6fce63c08ae5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.336 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[765729e0-e8da-4f14-b502-898fc733549b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.368 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3b53b2e6-b5d4-44f0-9e97-06c10edaaef2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:59 np0005486808 NetworkManager[44885]: <info>  [1760432399.3807] manager: (tapa0f6e4de-50): new Veth device (/org/freedesktop/NetworkManager/Devices/139)
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.380 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4998212e-a42f-4be1-a18c-c2676d044724]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.420 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[cf06f251-d78e-4a0b-84bd-185fe92241d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.422 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d8fd59ed-2239-47cb-be11-8b33c01a3e8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:59 np0005486808 NetworkManager[44885]: <info>  [1760432399.4528] device (tapa0f6e4de-50): carrier: link connected
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.457 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[00794ea5-a768-4360-b893-5e5710c898d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.480 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5ab0a3c7-efc4-4370-bf33-74d4bc0a162e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0f6e4de-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:e4:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 617821, 'reachable_time': 15347, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299649, 'error': None, 'target': 'ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.496 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e0e2396f-cc14-4c96-bb0a-9a801c3e438d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed8:e471'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 617821, 'tstamp': 617821}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299650, 'error': None, 'target': 'ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.515 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0b3e6f0c-7d01-4019-86ca-1ae3a05945c6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0f6e4de-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:e4:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 617821, 'reachable_time': 15347, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 299651, 'error': None, 'target': 'ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.547 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a504dc58-7e7c-43d8-b023-be6dad2ab9ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:59 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1267: 305 pgs: 305 active+clean; 181 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 6.0 MiB/s rd, 43 KiB/s wr, 316 op/s
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.600 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1fee0503-a1ff-460c-b7bd-d347349dd2c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.601 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0f6e4de-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.601 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.602 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0f6e4de-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:59 np0005486808 NetworkManager[44885]: <info>  [1760432399.6044] manager: (tapa0f6e4de-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/140)
Oct 14 04:59:59 np0005486808 kernel: tapa0f6e4de-50: entered promiscuous mode
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.612 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa0f6e4de-50, col_values=(('external_ids', {'iface-id': 'f946a06b-cc1d-436a-9eac-cf144d4f5ad3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 04:59:59 np0005486808 ovn_controller[152662]: 2025-10-14T08:59:59Z|00285|binding|INFO|Releasing lport f946a06b-cc1d-436a-9eac-cf144d4f5ad3 from this chassis (sb_readonly=0)
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.631 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a0f6e4de-522c-468f-8b55-9e5064a6cce8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a0f6e4de-522c-468f-8b55-9e5064a6cce8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.631 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2e74fcfc-32cd-4573-8ca5-2d1ff768a892]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.632 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-a0f6e4de-522c-468f-8b55-9e5064a6cce8
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/a0f6e4de-522c-468f-8b55-9e5064a6cce8.pid.haproxy
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID a0f6e4de-522c-468f-8b55-9e5064a6cce8
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 04:59:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 08:59:59.632 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8', 'env', 'PROCESS_TAG=haproxy-a0f6e4de-522c-468f-8b55-9e5064a6cce8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a0f6e4de-522c-468f-8b55-9e5064a6cce8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.793 2 DEBUG nova.compute.manager [req-3ae26e16-ee78-425c-973a-7666267d2462 req-8df05b49-facd-437f-91ac-a05ee3bca1d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Received event network-vif-unplugged-36158ae1-8367-4859-a407-565fde315649 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.794 2 DEBUG oslo_concurrency.lockutils [req-3ae26e16-ee78-425c-973a-7666267d2462 req-8df05b49-facd-437f-91ac-a05ee3bca1d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1af6c158-005b-4f3c-9044-87158e57378d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.795 2 DEBUG oslo_concurrency.lockutils [req-3ae26e16-ee78-425c-973a-7666267d2462 req-8df05b49-facd-437f-91ac-a05ee3bca1d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1af6c158-005b-4f3c-9044-87158e57378d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.795 2 DEBUG oslo_concurrency.lockutils [req-3ae26e16-ee78-425c-973a-7666267d2462 req-8df05b49-facd-437f-91ac-a05ee3bca1d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1af6c158-005b-4f3c-9044-87158e57378d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.795 2 DEBUG nova.compute.manager [req-3ae26e16-ee78-425c-973a-7666267d2462 req-8df05b49-facd-437f-91ac-a05ee3bca1d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] No waiting events found dispatching network-vif-unplugged-36158ae1-8367-4859-a407-565fde315649 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.796 2 WARNING nova.compute.manager [req-3ae26e16-ee78-425c-973a-7666267d2462 req-8df05b49-facd-437f-91ac-a05ee3bca1d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Received unexpected event network-vif-unplugged-36158ae1-8367-4859-a407-565fde315649 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.796 2 DEBUG nova.compute.manager [req-3ae26e16-ee78-425c-973a-7666267d2462 req-8df05b49-facd-437f-91ac-a05ee3bca1d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Received event network-vif-plugged-36158ae1-8367-4859-a407-565fde315649 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.797 2 DEBUG oslo_concurrency.lockutils [req-3ae26e16-ee78-425c-973a-7666267d2462 req-8df05b49-facd-437f-91ac-a05ee3bca1d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1af6c158-005b-4f3c-9044-87158e57378d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.797 2 DEBUG oslo_concurrency.lockutils [req-3ae26e16-ee78-425c-973a-7666267d2462 req-8df05b49-facd-437f-91ac-a05ee3bca1d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1af6c158-005b-4f3c-9044-87158e57378d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.798 2 DEBUG oslo_concurrency.lockutils [req-3ae26e16-ee78-425c-973a-7666267d2462 req-8df05b49-facd-437f-91ac-a05ee3bca1d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1af6c158-005b-4f3c-9044-87158e57378d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.798 2 DEBUG nova.compute.manager [req-3ae26e16-ee78-425c-973a-7666267d2462 req-8df05b49-facd-437f-91ac-a05ee3bca1d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] No waiting events found dispatching network-vif-plugged-36158ae1-8367-4859-a407-565fde315649 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 04:59:59 np0005486808 nova_compute[259627]: 2025-10-14 08:59:59.798 2 WARNING nova.compute.manager [req-3ae26e16-ee78-425c-973a-7666267d2462 req-8df05b49-facd-437f-91ac-a05ee3bca1d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1af6c158-005b-4f3c-9044-87158e57378d] Received unexpected event network-vif-plugged-36158ae1-8367-4859-a407-565fde315649 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Oct 14 05:00:00 np0005486808 podman[299681]: 2025-10-14 09:00:00.006090616 +0000 UTC m=+0.056764255 container create ed7f78eb3c41dff983c03eb050de066d7f5c31f6c5ba4552fc79e490802b436c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a0f6e4de-522c-468f-8b55-9e5064a6cce8, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:00:00 np0005486808 systemd[1]: Started libpod-conmon-ed7f78eb3c41dff983c03eb050de066d7f5c31f6c5ba4552fc79e490802b436c.scope.
Oct 14 05:01:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:01:43 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/742082035' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:01:43 np0005486808 nova_compute[259627]: 2025-10-14 09:01:43.911 2 INFO nova.compute.manager [None req-fea0fdf5-9c6a-4762-93fa-ef8925083a89 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Get console output#033[00m
Oct 14 05:01:43 np0005486808 nova_compute[259627]: 2025-10-14 09:01:43.918 2 INFO oslo.privsep.daemon [None req-fea0fdf5-9c6a-4762-93fa-ef8925083a89 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpmq09rj3_/privsep.sock']#033[00m
Oct 14 05:01:43 np0005486808 nova_compute[259627]: 2025-10-14 09:01:43.947 2 DEBUG oslo_concurrency.processutils [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:01:43 np0005486808 nova_compute[259627]: 2025-10-14 09:01:43.972 2 DEBUG nova.storage.rbd_utils [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] rbd image f5ecec2f-eb67-49e5-abb8-15e2be8db618_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:01:43 np0005486808 nova_compute[259627]: 2025-10-14 09:01:43.976 2 DEBUG oslo_concurrency.processutils [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:01:44 np0005486808 nova_compute[259627]: 2025-10-14 09:01:44.145 2 INFO nova.virt.libvirt.driver [None req-fe627879-6144-43d3-8e90-66373b21dbfc fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Beginning live snapshot process#033[00m
Oct 14 05:01:44 np0005486808 nova_compute[259627]: 2025-10-14 09:01:44.297 2 DEBUG nova.virt.libvirt.imagebackend [None req-fe627879-6144-43d3-8e90-66373b21dbfc fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] No parent info for a4789543-f429-47d7-9f79-80a9d90a59f9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct 14 05:01:44 np0005486808 rsyslogd[1002]: imjournal: 7799 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct 14 05:01:44 np0005486808 nova_compute[259627]: 2025-10-14 09:01:44.447 2 DEBUG nova.compute.manager [req-b8d53763-3b2a-4b95-8dd8-9d637f109d9b req-8bf0d66b-bfb6-442a-9519-305ad88556f7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:01:44 np0005486808 nova_compute[259627]: 2025-10-14 09:01:44.448 2 DEBUG oslo_concurrency.lockutils [req-b8d53763-3b2a-4b95-8dd8-9d637f109d9b req-8bf0d66b-bfb6-442a-9519-305ad88556f7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:01:44 np0005486808 nova_compute[259627]: 2025-10-14 09:01:44.449 2 DEBUG oslo_concurrency.lockutils [req-b8d53763-3b2a-4b95-8dd8-9d637f109d9b req-8bf0d66b-bfb6-442a-9519-305ad88556f7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:01:44 np0005486808 nova_compute[259627]: 2025-10-14 09:01:44.450 2 DEBUG oslo_concurrency.lockutils [req-b8d53763-3b2a-4b95-8dd8-9d637f109d9b req-8bf0d66b-bfb6-442a-9519-305ad88556f7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:01:44 np0005486808 nova_compute[259627]: 2025-10-14 09:01:44.451 2 DEBUG nova.compute.manager [req-b8d53763-3b2a-4b95-8dd8-9d637f109d9b req-8bf0d66b-bfb6-442a-9519-305ad88556f7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] No waiting events found dispatching network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:01:44 np0005486808 nova_compute[259627]: 2025-10-14 09:01:44.451 2 WARNING nova.compute.manager [req-b8d53763-3b2a-4b95-8dd8-9d637f109d9b req-8bf0d66b-bfb6-442a-9519-305ad88556f7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received unexpected event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:01:44 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:01:44 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1476326120' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:01:44 np0005486808 nova_compute[259627]: 2025-10-14 09:01:44.479 2 DEBUG oslo_concurrency.processutils [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:01:44 np0005486808 nova_compute[259627]: 2025-10-14 09:01:44.482 2 DEBUG nova.objects.instance [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lazy-loading 'pci_devices' on Instance uuid f5ecec2f-eb67-49e5-abb8-15e2be8db618 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:01:44 np0005486808 nova_compute[259627]: 2025-10-14 09:01:44.502 2 DEBUG nova.virt.libvirt.driver [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:01:44 np0005486808 nova_compute[259627]:  <uuid>f5ecec2f-eb67-49e5-abb8-15e2be8db618</uuid>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:  <name>instance-00000030</name>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:01:44 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:      <nova:name>tempest-ServersAdminNegativeTestJSON-server-818396468</nova:name>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:01:43</nova:creationTime>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:01:44 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:        <nova:user uuid="a8208d81c99c41668cc80998cc83bf02">tempest-ServersAdminNegativeTestJSON-114279920-project-member</nova:user>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:        <nova:project uuid="f0dbf2d79ae5410d965cc3670bdd26ba">tempest-ServersAdminNegativeTestJSON-114279920</nova:project>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:      <nova:ports/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:      <entry name="serial">f5ecec2f-eb67-49e5-abb8-15e2be8db618</entry>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:      <entry name="uuid">f5ecec2f-eb67-49e5-abb8-15e2be8db618</entry>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:01:44 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/f5ecec2f-eb67-49e5-abb8-15e2be8db618_disk">
Oct 14 05:01:44 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:01:44 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:01:44 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/f5ecec2f-eb67-49e5-abb8-15e2be8db618_disk.config">
Oct 14 05:01:44 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:01:44 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:01:44 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/f5ecec2f-eb67-49e5-abb8-15e2be8db618/console.log" append="off"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:01:44 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:01:44 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:01:44 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:01:44 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:01:44 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:01:44 np0005486808 nova_compute[259627]: 2025-10-14 09:01:44.554 2 DEBUG nova.storage.rbd_utils [None req-fe627879-6144-43d3-8e90-66373b21dbfc fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] creating snapshot(56598d4525d541c693ae71ee1e24ae4b) on rbd image(333926ec-cf24-467b-b9b1-d1fa70a4feb2_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 14 05:01:44 np0005486808 podman[310570]: 2025-10-14 09:01:44.609312052 +0000 UTC m=+0.056064270 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:01:44 np0005486808 nova_compute[259627]: 2025-10-14 09:01:44.623 2 DEBUG nova.virt.libvirt.driver [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:01:44 np0005486808 nova_compute[259627]: 2025-10-14 09:01:44.623 2 DEBUG nova.virt.libvirt.driver [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:01:44 np0005486808 nova_compute[259627]: 2025-10-14 09:01:44.624 2 INFO nova.virt.libvirt.driver [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Using config drive#033[00m
Oct 14 05:01:44 np0005486808 podman[310569]: 2025-10-14 09:01:44.628253027 +0000 UTC m=+0.082971091 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 14 05:01:44 np0005486808 nova_compute[259627]: 2025-10-14 09:01:44.648 2 DEBUG nova.storage.rbd_utils [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] rbd image f5ecec2f-eb67-49e5-abb8-15e2be8db618_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:01:44 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Oct 14 05:01:44 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Oct 14 05:01:44 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Oct 14 05:01:44 np0005486808 nova_compute[259627]: 2025-10-14 09:01:44.746 2 DEBUG nova.storage.rbd_utils [None req-fe627879-6144-43d3-8e90-66373b21dbfc fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] cloning vms/333926ec-cf24-467b-b9b1-d1fa70a4feb2_disk@56598d4525d541c693ae71ee1e24ae4b to images/44bff268-5d5f-42ed-bb32-357e8412a37d clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct 14 05:01:44 np0005486808 nova_compute[259627]: 2025-10-14 09:01:44.828 2 INFO oslo.privsep.daemon [None req-fea0fdf5-9c6a-4762-93fa-ef8925083a89 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Oct 14 05:01:44 np0005486808 nova_compute[259627]: 2025-10-14 09:01:44.690 20781 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct 14 05:01:44 np0005486808 nova_compute[259627]: 2025-10-14 09:01:44.700 20781 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct 14 05:01:44 np0005486808 nova_compute[259627]: 2025-10-14 09:01:44.705 20781 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Oct 14 05:01:44 np0005486808 nova_compute[259627]: 2025-10-14 09:01:44.705 20781 INFO oslo.privsep.daemon [-] privsep daemon running as pid 20781#033[00m
Oct 14 05:01:44 np0005486808 nova_compute[259627]: 2025-10-14 09:01:44.848 2 DEBUG nova.storage.rbd_utils [None req-fe627879-6144-43d3-8e90-66373b21dbfc fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] flattening images/44bff268-5d5f-42ed-bb32-357e8412a37d flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct 14 05:01:44 np0005486808 nova_compute[259627]: 2025-10-14 09:01:44.907 2 INFO nova.virt.libvirt.driver [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Creating config drive at /var/lib/nova/instances/f5ecec2f-eb67-49e5-abb8-15e2be8db618/disk.config#033[00m
Oct 14 05:01:44 np0005486808 nova_compute[259627]: 2025-10-14 09:01:44.914 2 DEBUG oslo_concurrency.processutils [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f5ecec2f-eb67-49e5-abb8-15e2be8db618/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2p4stps_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:01:44 np0005486808 nova_compute[259627]: 2025-10-14 09:01:44.929 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 14 05:01:45 np0005486808 nova_compute[259627]: 2025-10-14 09:01:45.077 2 DEBUG oslo_concurrency.processutils [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f5ecec2f-eb67-49e5-abb8-15e2be8db618/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2p4stps_" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:01:45 np0005486808 nova_compute[259627]: 2025-10-14 09:01:45.109 2 DEBUG nova.storage.rbd_utils [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] rbd image f5ecec2f-eb67-49e5-abb8-15e2be8db618_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:01:45 np0005486808 nova_compute[259627]: 2025-10-14 09:01:45.115 2 DEBUG oslo_concurrency.processutils [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f5ecec2f-eb67-49e5-abb8-15e2be8db618/disk.config f5ecec2f-eb67-49e5-abb8-15e2be8db618_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:01:45 np0005486808 nova_compute[259627]: 2025-10-14 09:01:45.217 2 DEBUG nova.storage.rbd_utils [None req-fe627879-6144-43d3-8e90-66373b21dbfc fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] removing snapshot(56598d4525d541c693ae71ee1e24ae4b) on rbd image(333926ec-cf24-467b-b9b1-d1fa70a4feb2_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct 14 05:01:45 np0005486808 nova_compute[259627]: 2025-10-14 09:01:45.275 2 DEBUG oslo_concurrency.processutils [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f5ecec2f-eb67-49e5-abb8-15e2be8db618/disk.config f5ecec2f-eb67-49e5-abb8-15e2be8db618_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:01:45 np0005486808 nova_compute[259627]: 2025-10-14 09:01:45.275 2 INFO nova.virt.libvirt.driver [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Deleting local config drive /var/lib/nova/instances/f5ecec2f-eb67-49e5-abb8-15e2be8db618/disk.config because it was imported into RBD.#033[00m
Oct 14 05:01:45 np0005486808 systemd-machined[214636]: New machine qemu-57-instance-00000030.
Oct 14 05:01:45 np0005486808 systemd[1]: Started Virtual Machine qemu-57-instance-00000030.
Oct 14 05:01:45 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1342: 305 pgs: 305 active+clean; 533 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 21 MiB/s rd, 20 MiB/s wr, 936 op/s
Oct 14 05:01:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Oct 14 05:01:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Oct 14 05:01:45 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Oct 14 05:01:45 np0005486808 nova_compute[259627]: 2025-10-14 09:01:45.726 2 DEBUG nova.storage.rbd_utils [None req-fe627879-6144-43d3-8e90-66373b21dbfc fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] creating snapshot(snap) on rbd image(44bff268-5d5f-42ed-bb32-357e8412a37d) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 14 05:01:46 np0005486808 ovn_controller[152662]: 2025-10-14T09:01:46Z|00413|binding|INFO|Releasing lport 0f8f2393-a280-4a6e-ac22-da67bf8d74da from this chassis (sb_readonly=0)
Oct 14 05:01:46 np0005486808 nova_compute[259627]: 2025-10-14 09:01:46.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:01:46 np0005486808 nova_compute[259627]: 2025-10-14 09:01:46.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:01:46 np0005486808 nova_compute[259627]: 2025-10-14 09:01:46.299 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432506.2941918, f5ecec2f-eb67-49e5-abb8-15e2be8db618 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:01:46 np0005486808 nova_compute[259627]: 2025-10-14 09:01:46.299 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:01:46 np0005486808 nova_compute[259627]: 2025-10-14 09:01:46.301 2 DEBUG nova.compute.manager [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:01:46 np0005486808 nova_compute[259627]: 2025-10-14 09:01:46.301 2 DEBUG nova.virt.libvirt.driver [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:01:46 np0005486808 nova_compute[259627]: 2025-10-14 09:01:46.305 2 INFO nova.virt.libvirt.driver [-] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Instance spawned successfully.#033[00m
Oct 14 05:01:46 np0005486808 nova_compute[259627]: 2025-10-14 09:01:46.305 2 DEBUG nova.virt.libvirt.driver [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:01:46 np0005486808 nova_compute[259627]: 2025-10-14 09:01:46.339 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:01:46 np0005486808 nova_compute[259627]: 2025-10-14 09:01:46.352 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:01:46 np0005486808 nova_compute[259627]: 2025-10-14 09:01:46.359 2 DEBUG nova.virt.libvirt.driver [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:01:46 np0005486808 nova_compute[259627]: 2025-10-14 09:01:46.359 2 DEBUG nova.virt.libvirt.driver [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:01:46 np0005486808 nova_compute[259627]: 2025-10-14 09:01:46.359 2 DEBUG nova.virt.libvirt.driver [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:01:46 np0005486808 nova_compute[259627]: 2025-10-14 09:01:46.360 2 DEBUG nova.virt.libvirt.driver [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:01:46 np0005486808 nova_compute[259627]: 2025-10-14 09:01:46.360 2 DEBUG nova.virt.libvirt.driver [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:01:46 np0005486808 nova_compute[259627]: 2025-10-14 09:01:46.360 2 DEBUG nova.virt.libvirt.driver [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:01:46 np0005486808 nova_compute[259627]: 2025-10-14 09:01:46.374 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:01:46 np0005486808 nova_compute[259627]: 2025-10-14 09:01:46.374 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432506.2956293, f5ecec2f-eb67-49e5-abb8-15e2be8db618 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:01:46 np0005486808 nova_compute[259627]: 2025-10-14 09:01:46.374 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] VM Started (Lifecycle Event)#033[00m
Oct 14 05:01:46 np0005486808 nova_compute[259627]: 2025-10-14 09:01:46.418 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:01:46 np0005486808 nova_compute[259627]: 2025-10-14 09:01:46.421 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:01:46 np0005486808 nova_compute[259627]: 2025-10-14 09:01:46.488 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:01:46 np0005486808 nova_compute[259627]: 2025-10-14 09:01:46.534 2 INFO nova.compute.manager [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Took 3.62 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:01:46 np0005486808 nova_compute[259627]: 2025-10-14 09:01:46.535 2 DEBUG nova.compute.manager [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:01:46 np0005486808 nova_compute[259627]: 2025-10-14 09:01:46.674 2 INFO nova.compute.manager [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Took 5.19 seconds to build instance.#033[00m
Oct 14 05:01:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Oct 14 05:01:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Oct 14 05:01:46 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Oct 14 05:01:46 np0005486808 nova_compute[259627]: 2025-10-14 09:01:46.755 2 DEBUG oslo_concurrency.lockutils [None req-7695f723-203f-4f47-881f-99dffbfee27c a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lock "f5ecec2f-eb67-49e5-abb8-15e2be8db618" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.332s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:01:46 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 05:01:46 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 6268 writes, 28K keys, 6268 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 6268 writes, 6268 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1633 writes, 7351 keys, 1633 commit groups, 1.0 writes per commit group, ingest: 9.84 MB, 0.02 MB/s#012Interval WAL: 1633 writes, 1633 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    138.3      0.24              0.11        16    0.015       0      0       0.0       0.0#012  L6      1/0    8.23 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2    209.5    169.8      0.63              0.34        15    0.042     69K   8346       0.0       0.0#012 Sum      1/0    8.23 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2    151.7    161.1      0.87              0.45        31    0.028     69K   8346       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.4    201.7    207.2      0.19              0.09         8    0.024     21K   2569       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    209.5    169.8      0.63              0.34        15    0.042     69K   8346       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    142.4      0.23              0.11        15    0.016       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      7.3      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.032, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.14 GB write, 0.06 MB/s write, 0.13 GB read, 0.06 MB/s read, 0.9 seconds#012Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.06 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5646f3b2b1f0#2 capacity: 304.00 MB usage: 15.40 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000196 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(995,14.84 MB,4.88211%) FilterBlock(32,199.23 KB,0.0640016%) IndexBlock(32,372.81 KB,0.119761%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct 14 05:01:47 np0005486808 nova_compute[259627]: 2025-10-14 09:01:47.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:01:47 np0005486808 nova_compute[259627]: 2025-10-14 09:01:47.266 2 DEBUG nova.objects.instance [None req-a43656ad-bcdb-4ae4-9031-85c5d6a52320 873b066be596492e8aa5f2a9f2a01a7f f93dae4c4a5f4a16921d0424a47c214e - - default default] Lazy-loading 'pci_devices' on Instance uuid f5ecec2f-eb67-49e5-abb8-15e2be8db618 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:01:47 np0005486808 nova_compute[259627]: 2025-10-14 09:01:47.291 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432507.2911167, f5ecec2f-eb67-49e5-abb8-15e2be8db618 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:01:47 np0005486808 nova_compute[259627]: 2025-10-14 09:01:47.291 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:01:47 np0005486808 nova_compute[259627]: 2025-10-14 09:01:47.311 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:01:47 np0005486808 nova_compute[259627]: 2025-10-14 09:01:47.316 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:01:47 np0005486808 nova_compute[259627]: 2025-10-14 09:01:47.339 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Oct 14 05:01:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:01:47.511 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:01:47 np0005486808 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000030.scope: Deactivated successfully.
Oct 14 05:01:47 np0005486808 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000030.scope: Consumed 1.934s CPU time.
Oct 14 05:01:47 np0005486808 systemd-machined[214636]: Machine qemu-57-instance-00000030 terminated.
Oct 14 05:01:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:01:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Oct 14 05:01:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Oct 14 05:01:47 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Oct 14 05:01:47 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1346: 305 pgs: 305 active+clean; 533 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 18 MiB/s rd, 16 MiB/s wr, 571 op/s
Oct 14 05:01:47 np0005486808 nova_compute[259627]: 2025-10-14 09:01:47.632 2 DEBUG nova.compute.manager [None req-a43656ad-bcdb-4ae4-9031-85c5d6a52320 873b066be596492e8aa5f2a9f2a01a7f f93dae4c4a5f4a16921d0424a47c214e - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:01:48 np0005486808 nova_compute[259627]: 2025-10-14 09:01:48.251 2 INFO nova.virt.libvirt.driver [None req-fe627879-6144-43d3-8e90-66373b21dbfc fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Snapshot image upload complete#033[00m
Oct 14 05:01:48 np0005486808 nova_compute[259627]: 2025-10-14 09:01:48.252 2 INFO nova.compute.manager [None req-fe627879-6144-43d3-8e90-66373b21dbfc fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Took 4.41 seconds to snapshot the instance on the hypervisor.#033[00m
Oct 14 05:01:48 np0005486808 nova_compute[259627]: 2025-10-14 09:01:48.580 2 DEBUG oslo_concurrency.lockutils [None req-dec82346-822e-4cc6-8a4d-132918528745 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:01:48 np0005486808 nova_compute[259627]: 2025-10-14 09:01:48.581 2 DEBUG oslo_concurrency.lockutils [None req-dec82346-822e-4cc6-8a4d-132918528745 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:01:48 np0005486808 nova_compute[259627]: 2025-10-14 09:01:48.581 2 DEBUG nova.compute.manager [None req-dec82346-822e-4cc6-8a4d-132918528745 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:01:48 np0005486808 nova_compute[259627]: 2025-10-14 09:01:48.586 2 DEBUG nova.compute.manager [None req-dec82346-822e-4cc6-8a4d-132918528745 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Oct 14 05:01:48 np0005486808 nova_compute[259627]: 2025-10-14 09:01:48.588 2 DEBUG nova.objects.instance [None req-dec82346-822e-4cc6-8a4d-132918528745 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'flavor' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:01:48 np0005486808 nova_compute[259627]: 2025-10-14 09:01:48.627 2 DEBUG nova.virt.libvirt.driver [None req-dec82346-822e-4cc6-8a4d-132918528745 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct 14 05:01:49 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1347: 305 pgs: 305 active+clean; 533 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 14 MiB/s rd, 13 MiB/s wr, 463 op/s
Oct 14 05:01:49 np0005486808 nova_compute[259627]: 2025-10-14 09:01:49.857 2 DEBUG oslo_concurrency.lockutils [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Acquiring lock "f5ecec2f-eb67-49e5-abb8-15e2be8db618" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:01:49 np0005486808 nova_compute[259627]: 2025-10-14 09:01:49.857 2 DEBUG oslo_concurrency.lockutils [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lock "f5ecec2f-eb67-49e5-abb8-15e2be8db618" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:01:49 np0005486808 nova_compute[259627]: 2025-10-14 09:01:49.858 2 DEBUG oslo_concurrency.lockutils [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Acquiring lock "f5ecec2f-eb67-49e5-abb8-15e2be8db618-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:01:49 np0005486808 nova_compute[259627]: 2025-10-14 09:01:49.858 2 DEBUG oslo_concurrency.lockutils [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lock "f5ecec2f-eb67-49e5-abb8-15e2be8db618-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:01:49 np0005486808 nova_compute[259627]: 2025-10-14 09:01:49.858 2 DEBUG oslo_concurrency.lockutils [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lock "f5ecec2f-eb67-49e5-abb8-15e2be8db618-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:01:49 np0005486808 nova_compute[259627]: 2025-10-14 09:01:49.859 2 INFO nova.compute.manager [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Terminating instance#033[00m
Oct 14 05:01:49 np0005486808 nova_compute[259627]: 2025-10-14 09:01:49.860 2 DEBUG oslo_concurrency.lockutils [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Acquiring lock "refresh_cache-f5ecec2f-eb67-49e5-abb8-15e2be8db618" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:01:49 np0005486808 nova_compute[259627]: 2025-10-14 09:01:49.860 2 DEBUG oslo_concurrency.lockutils [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Acquired lock "refresh_cache-f5ecec2f-eb67-49e5-abb8-15e2be8db618" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:01:49 np0005486808 nova_compute[259627]: 2025-10-14 09:01:49.860 2 DEBUG nova.network.neutron [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:01:50 np0005486808 nova_compute[259627]: 2025-10-14 09:01:50.079 2 DEBUG nova.network.neutron [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:01:50 np0005486808 ovn_controller[152662]: 2025-10-14T09:01:50Z|00414|binding|INFO|Releasing lport 0f8f2393-a280-4a6e-ac22-da67bf8d74da from this chassis (sb_readonly=0)
Oct 14 05:01:50 np0005486808 nova_compute[259627]: 2025-10-14 09:01:50.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:01:50 np0005486808 nova_compute[259627]: 2025-10-14 09:01:50.582 2 DEBUG nova.network.neutron [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:01:50 np0005486808 nova_compute[259627]: 2025-10-14 09:01:50.598 2 DEBUG oslo_concurrency.lockutils [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Releasing lock "refresh_cache-f5ecec2f-eb67-49e5-abb8-15e2be8db618" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:01:50 np0005486808 nova_compute[259627]: 2025-10-14 09:01:50.598 2 DEBUG nova.compute.manager [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:01:50 np0005486808 nova_compute[259627]: 2025-10-14 09:01:50.603 2 INFO nova.virt.libvirt.driver [-] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Instance destroyed successfully.#033[00m
Oct 14 05:01:50 np0005486808 nova_compute[259627]: 2025-10-14 09:01:50.604 2 DEBUG nova.objects.instance [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lazy-loading 'resources' on Instance uuid f5ecec2f-eb67-49e5-abb8-15e2be8db618 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:01:50 np0005486808 nova_compute[259627]: 2025-10-14 09:01:50.972 2 INFO nova.virt.libvirt.driver [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Deleting instance files /var/lib/nova/instances/f5ecec2f-eb67-49e5-abb8-15e2be8db618_del#033[00m
Oct 14 05:01:50 np0005486808 nova_compute[259627]: 2025-10-14 09:01:50.974 2 INFO nova.virt.libvirt.driver [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Deletion of /var/lib/nova/instances/f5ecec2f-eb67-49e5-abb8-15e2be8db618_del complete#033[00m
Oct 14 05:01:51 np0005486808 nova_compute[259627]: 2025-10-14 09:01:51.046 2 INFO nova.compute.manager [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Took 0.45 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:01:51 np0005486808 nova_compute[259627]: 2025-10-14 09:01:51.046 2 DEBUG oslo.service.loopingcall [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:01:51 np0005486808 nova_compute[259627]: 2025-10-14 09:01:51.047 2 DEBUG nova.compute.manager [-] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:01:51 np0005486808 nova_compute[259627]: 2025-10-14 09:01:51.047 2 DEBUG nova.network.neutron [-] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:01:51 np0005486808 nova_compute[259627]: 2025-10-14 09:01:51.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:01:51 np0005486808 nova_compute[259627]: 2025-10-14 09:01:51.341 2 DEBUG nova.network.neutron [-] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:01:51 np0005486808 nova_compute[259627]: 2025-10-14 09:01:51.358 2 DEBUG nova.network.neutron [-] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:01:51 np0005486808 nova_compute[259627]: 2025-10-14 09:01:51.376 2 INFO nova.compute.manager [-] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Took 0.33 seconds to deallocate network for instance.#033[00m
Oct 14 05:01:51 np0005486808 nova_compute[259627]: 2025-10-14 09:01:51.435 2 DEBUG oslo_concurrency.lockutils [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:01:51 np0005486808 nova_compute[259627]: 2025-10-14 09:01:51.435 2 DEBUG oslo_concurrency.lockutils [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:01:51 np0005486808 nova_compute[259627]: 2025-10-14 09:01:51.566 2 DEBUG oslo_concurrency.processutils [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:01:51 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1348: 305 pgs: 305 active+clean; 571 MiB data, 717 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 5.0 MiB/s wr, 243 op/s
Oct 14 05:01:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:01:52 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3557958970' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:01:52 np0005486808 nova_compute[259627]: 2025-10-14 09:01:52.054 2 DEBUG oslo_concurrency.processutils [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:01:52 np0005486808 nova_compute[259627]: 2025-10-14 09:01:52.062 2 DEBUG nova.compute.provider_tree [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:01:52 np0005486808 nova_compute[259627]: 2025-10-14 09:01:52.084 2 DEBUG nova.scheduler.client.report [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:01:52 np0005486808 nova_compute[259627]: 2025-10-14 09:01:52.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:01:52 np0005486808 nova_compute[259627]: 2025-10-14 09:01:52.111 2 DEBUG oslo_concurrency.lockutils [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:01:52 np0005486808 nova_compute[259627]: 2025-10-14 09:01:52.146 2 INFO nova.scheduler.client.report [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Deleted allocations for instance f5ecec2f-eb67-49e5-abb8-15e2be8db618#033[00m
Oct 14 05:01:52 np0005486808 nova_compute[259627]: 2025-10-14 09:01:52.223 2 DEBUG oslo_concurrency.lockutils [None req-5ac86ea4-2371-473b-8ceb-cd651a895ee0 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lock "f5ecec2f-eb67-49e5-abb8-15e2be8db618" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.366s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:01:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:01:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Oct 14 05:01:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Oct 14 05:01:52 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Oct 14 05:01:52 np0005486808 nova_compute[259627]: 2025-10-14 09:01:52.869 2 DEBUG oslo_concurrency.lockutils [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Acquiring lock "15778bbd-fbee-44e9-ba12-9884db0e7afb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:01:52 np0005486808 nova_compute[259627]: 2025-10-14 09:01:52.869 2 DEBUG oslo_concurrency.lockutils [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lock "15778bbd-fbee-44e9-ba12-9884db0e7afb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:01:52 np0005486808 nova_compute[259627]: 2025-10-14 09:01:52.870 2 DEBUG oslo_concurrency.lockutils [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Acquiring lock "15778bbd-fbee-44e9-ba12-9884db0e7afb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:01:52 np0005486808 nova_compute[259627]: 2025-10-14 09:01:52.870 2 DEBUG oslo_concurrency.lockutils [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lock "15778bbd-fbee-44e9-ba12-9884db0e7afb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:01:52 np0005486808 nova_compute[259627]: 2025-10-14 09:01:52.870 2 DEBUG oslo_concurrency.lockutils [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lock "15778bbd-fbee-44e9-ba12-9884db0e7afb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:01:52 np0005486808 nova_compute[259627]: 2025-10-14 09:01:52.871 2 INFO nova.compute.manager [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Terminating instance#033[00m
Oct 14 05:01:52 np0005486808 nova_compute[259627]: 2025-10-14 09:01:52.872 2 DEBUG oslo_concurrency.lockutils [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Acquiring lock "refresh_cache-15778bbd-fbee-44e9-ba12-9884db0e7afb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:01:52 np0005486808 nova_compute[259627]: 2025-10-14 09:01:52.872 2 DEBUG oslo_concurrency.lockutils [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Acquired lock "refresh_cache-15778bbd-fbee-44e9-ba12-9884db0e7afb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:01:52 np0005486808 nova_compute[259627]: 2025-10-14 09:01:52.872 2 DEBUG nova.network.neutron [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:01:53 np0005486808 nova_compute[259627]: 2025-10-14 09:01:53.195 2 DEBUG nova.network.neutron [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:01:53 np0005486808 nova_compute[259627]: 2025-10-14 09:01:53.607 2 DEBUG nova.network.neutron [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:01:53 np0005486808 nova_compute[259627]: 2025-10-14 09:01:53.626 2 DEBUG oslo_concurrency.lockutils [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Releasing lock "refresh_cache-15778bbd-fbee-44e9-ba12-9884db0e7afb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:01:53 np0005486808 nova_compute[259627]: 2025-10-14 09:01:53.626 2 DEBUG nova.compute.manager [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:01:53 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1350: 305 pgs: 305 active+clean; 571 MiB data, 717 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 211 op/s
Oct 14 05:01:53 np0005486808 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d0000002f.scope: Deactivated successfully.
Oct 14 05:01:53 np0005486808 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d0000002f.scope: Consumed 12.366s CPU time.
Oct 14 05:01:53 np0005486808 systemd-machined[214636]: Machine qemu-55-instance-0000002f terminated.
Oct 14 05:01:53 np0005486808 ovn_controller[152662]: 2025-10-14T09:01:53Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:be:e2:1b 10.100.0.10
Oct 14 05:01:53 np0005486808 nova_compute[259627]: 2025-10-14 09:01:53.845 2 INFO nova.virt.libvirt.driver [-] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Instance destroyed successfully.#033[00m
Oct 14 05:01:53 np0005486808 nova_compute[259627]: 2025-10-14 09:01:53.845 2 DEBUG nova.objects.instance [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lazy-loading 'resources' on Instance uuid 15778bbd-fbee-44e9-ba12-9884db0e7afb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:01:54 np0005486808 nova_compute[259627]: 2025-10-14 09:01:54.277 2 INFO nova.virt.libvirt.driver [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Deleting instance files /var/lib/nova/instances/15778bbd-fbee-44e9-ba12-9884db0e7afb_del#033[00m
Oct 14 05:01:54 np0005486808 nova_compute[259627]: 2025-10-14 09:01:54.278 2 INFO nova.virt.libvirt.driver [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Deletion of /var/lib/nova/instances/15778bbd-fbee-44e9-ba12-9884db0e7afb_del complete#033[00m
Oct 14 05:01:54 np0005486808 nova_compute[259627]: 2025-10-14 09:01:54.329 2 INFO nova.compute.manager [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Took 0.70 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:01:54 np0005486808 nova_compute[259627]: 2025-10-14 09:01:54.330 2 DEBUG oslo.service.loopingcall [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:01:54 np0005486808 nova_compute[259627]: 2025-10-14 09:01:54.331 2 DEBUG nova.compute.manager [-] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:01:54 np0005486808 nova_compute[259627]: 2025-10-14 09:01:54.331 2 DEBUG nova.network.neutron [-] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:01:54 np0005486808 nova_compute[259627]: 2025-10-14 09:01:54.853 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432499.8486285, c5dc9921-0deb-4a3f-83d2-703f8b5f1f37 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:01:54 np0005486808 nova_compute[259627]: 2025-10-14 09:01:54.853 2 INFO nova.compute.manager [-] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:01:54 np0005486808 nova_compute[259627]: 2025-10-14 09:01:54.881 2 DEBUG nova.compute.manager [None req-a12e7eff-6ea4-4d4b-855d-b9f30261f910 - - - - - -] [instance: c5dc9921-0deb-4a3f-83d2-703f8b5f1f37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:01:54 np0005486808 nova_compute[259627]: 2025-10-14 09:01:54.899 2 DEBUG nova.network.neutron [-] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:01:54 np0005486808 nova_compute[259627]: 2025-10-14 09:01:54.921 2 DEBUG nova.network.neutron [-] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:01:54 np0005486808 nova_compute[259627]: 2025-10-14 09:01:54.942 2 INFO nova.compute.manager [-] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Took 0.61 seconds to deallocate network for instance.#033[00m
Oct 14 05:01:55 np0005486808 nova_compute[259627]: 2025-10-14 09:01:55.005 2 DEBUG oslo_concurrency.lockutils [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:01:55 np0005486808 nova_compute[259627]: 2025-10-14 09:01:55.005 2 DEBUG oslo_concurrency.lockutils [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:01:55 np0005486808 nova_compute[259627]: 2025-10-14 09:01:55.111 2 DEBUG oslo_concurrency.processutils [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:01:55 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:01:55 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/701239625' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:01:55 np0005486808 nova_compute[259627]: 2025-10-14 09:01:55.554 2 DEBUG oslo_concurrency.processutils [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:01:55 np0005486808 nova_compute[259627]: 2025-10-14 09:01:55.560 2 DEBUG nova.compute.provider_tree [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:01:55 np0005486808 nova_compute[259627]: 2025-10-14 09:01:55.579 2 DEBUG nova.scheduler.client.report [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:01:55 np0005486808 nova_compute[259627]: 2025-10-14 09:01:55.615 2 DEBUG oslo_concurrency.lockutils [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:01:55 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1351: 305 pgs: 305 active+clean; 451 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.8 MiB/s wr, 342 op/s
Oct 14 05:01:55 np0005486808 nova_compute[259627]: 2025-10-14 09:01:55.650 2 INFO nova.scheduler.client.report [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Deleted allocations for instance 15778bbd-fbee-44e9-ba12-9884db0e7afb#033[00m
Oct 14 05:01:55 np0005486808 nova_compute[259627]: 2025-10-14 09:01:55.707 2 DEBUG oslo_concurrency.lockutils [None req-a56d8d13-06f4-4462-9261-c6689375b5d9 a8208d81c99c41668cc80998cc83bf02 f0dbf2d79ae5410d965cc3670bdd26ba - - default default] Lock "15778bbd-fbee-44e9-ba12-9884db0e7afb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:01:56 np0005486808 nova_compute[259627]: 2025-10-14 09:01:56.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:01:57 np0005486808 nova_compute[259627]: 2025-10-14 09:01:57.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:01:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:01:57 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1352: 305 pgs: 305 active+clean; 451 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.1 MiB/s wr, 276 op/s
Oct 14 05:01:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Oct 14 05:01:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Oct 14 05:01:57 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Oct 14 05:01:58 np0005486808 nova_compute[259627]: 2025-10-14 09:01:58.682 2 DEBUG nova.virt.libvirt.driver [None req-dec82346-822e-4cc6-8a4d-132918528745 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct 14 05:01:59 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1354: 305 pgs: 305 active+clean; 451 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 910 KiB/s rd, 96 KiB/s wr, 162 op/s
Oct 14 05:01:59 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Oct 14 05:01:59 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Oct 14 05:01:59 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Oct 14 05:02:00 np0005486808 kernel: tap8ec905f0-b7 (unregistering): left promiscuous mode
Oct 14 05:02:00 np0005486808 NetworkManager[44885]: <info>  [1760432520.9233] device (tap8ec905f0-b7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:02:00 np0005486808 ovn_controller[152662]: 2025-10-14T09:02:00Z|00415|binding|INFO|Releasing lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 from this chassis (sb_readonly=0)
Oct 14 05:02:00 np0005486808 nova_compute[259627]: 2025-10-14 09:02:00.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:00 np0005486808 ovn_controller[152662]: 2025-10-14T09:02:00Z|00416|binding|INFO|Setting lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 down in Southbound
Oct 14 05:02:00 np0005486808 ovn_controller[152662]: 2025-10-14T09:02:00Z|00417|binding|INFO|Removing iface tap8ec905f0-b7 ovn-installed in OVS
Oct 14 05:02:00 np0005486808 nova_compute[259627]: 2025-10-14 09:02:00.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:00.976 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:e2:1b 10.100.0.10'], port_security=['fa:16:3e:be:e2:1b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'de383510-2de3-40bd-b479-c0010b3f2d1c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c6eb8a4-6604-462a-8730-43f3742053a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '368d762ed02e459d892ad1e5488c2871', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'b04f9b87-06e3-4b99-859f-b095869a50f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.247', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d063443f-d68f-4763-a6fc-fc1631e3cde1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:02:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:00.977 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 in datapath 7c6eb8a4-6604-462a-8730-43f3742053a7 unbound from our chassis#033[00m
Oct 14 05:02:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:00.978 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7c6eb8a4-6604-462a-8730-43f3742053a7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:02:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:00.979 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a652163c-afd2-4d32-aead-cc3a984ec147]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:00.980 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 namespace which is not needed anymore#033[00m
Oct 14 05:02:01 np0005486808 nova_compute[259627]: 2025-10-14 09:02:00.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:01 np0005486808 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d0000002b.scope: Deactivated successfully.
Oct 14 05:02:01 np0005486808 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d0000002b.scope: Consumed 12.695s CPU time.
Oct 14 05:02:01 np0005486808 systemd-machined[214636]: Machine qemu-56-instance-0000002b terminated.
Oct 14 05:02:01 np0005486808 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[310266]: [NOTICE]   (310270) : haproxy version is 2.8.14-c23fe91
Oct 14 05:02:01 np0005486808 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[310266]: [NOTICE]   (310270) : path to executable is /usr/sbin/haproxy
Oct 14 05:02:01 np0005486808 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[310266]: [ALERT]    (310270) : Current worker (310273) exited with code 143 (Terminated)
Oct 14 05:02:01 np0005486808 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[310266]: [WARNING]  (310270) : All workers exited. Exiting... (0)
Oct 14 05:02:01 np0005486808 systemd[1]: libpod-087f99062d1885c9b9b612aabcc94d330569bb6d89ebc0f48764682f773a303a.scope: Deactivated successfully.
Oct 14 05:02:01 np0005486808 podman[310951]: 2025-10-14 09:02:01.153103408 +0000 UTC m=+0.068353731 container died 087f99062d1885c9b9b612aabcc94d330569bb6d89ebc0f48764682f773a303a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:02:01 np0005486808 nova_compute[259627]: 2025-10-14 09:02:01.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:01 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-087f99062d1885c9b9b612aabcc94d330569bb6d89ebc0f48764682f773a303a-userdata-shm.mount: Deactivated successfully.
Oct 14 05:02:01 np0005486808 systemd[1]: var-lib-containers-storage-overlay-9b4be05b7989e5d3ca75dc62592041c7de6672de773f72392d1c6ba933683c24-merged.mount: Deactivated successfully.
Oct 14 05:02:01 np0005486808 podman[310951]: 2025-10-14 09:02:01.189235267 +0000 UTC m=+0.104485580 container cleanup 087f99062d1885c9b9b612aabcc94d330569bb6d89ebc0f48764682f773a303a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 05:02:01 np0005486808 nova_compute[259627]: 2025-10-14 09:02:01.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:01 np0005486808 nova_compute[259627]: 2025-10-14 09:02:01.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:01 np0005486808 systemd[1]: libpod-conmon-087f99062d1885c9b9b612aabcc94d330569bb6d89ebc0f48764682f773a303a.scope: Deactivated successfully.
Oct 14 05:02:01 np0005486808 podman[310983]: 2025-10-14 09:02:01.273128949 +0000 UTC m=+0.055645349 container remove 087f99062d1885c9b9b612aabcc94d330569bb6d89ebc0f48764682f773a303a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:02:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:01.279 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4c3463c3-7942-428b-b51b-dab7f16175f9]: (4, ('Tue Oct 14 09:02:01 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 (087f99062d1885c9b9b612aabcc94d330569bb6d89ebc0f48764682f773a303a)\n087f99062d1885c9b9b612aabcc94d330569bb6d89ebc0f48764682f773a303a\nTue Oct 14 09:02:01 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 (087f99062d1885c9b9b612aabcc94d330569bb6d89ebc0f48764682f773a303a)\n087f99062d1885c9b9b612aabcc94d330569bb6d89ebc0f48764682f773a303a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:01.281 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6e7dc0b1-50a4-446f-9217-f5c98b02655b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:01.281 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c6eb8a4-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:02:01 np0005486808 nova_compute[259627]: 2025-10-14 09:02:01.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:01 np0005486808 kernel: tap7c6eb8a4-60: left promiscuous mode
Oct 14 05:02:01 np0005486808 nova_compute[259627]: 2025-10-14 09:02:01.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:01.308 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c0efad08-24a0-41ba-b2a9-de19e05150cf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:01 np0005486808 nova_compute[259627]: 2025-10-14 09:02:01.310 2 DEBUG nova.compute.manager [req-0f05712d-3ebd-4467-bc2f-d1c399583db4 req-368bfd9a-c6d9-4657-bce6-032ea3a2242a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-unplugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:02:01 np0005486808 nova_compute[259627]: 2025-10-14 09:02:01.310 2 DEBUG oslo_concurrency.lockutils [req-0f05712d-3ebd-4467-bc2f-d1c399583db4 req-368bfd9a-c6d9-4657-bce6-032ea3a2242a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:02:01 np0005486808 nova_compute[259627]: 2025-10-14 09:02:01.310 2 DEBUG oslo_concurrency.lockutils [req-0f05712d-3ebd-4467-bc2f-d1c399583db4 req-368bfd9a-c6d9-4657-bce6-032ea3a2242a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:02:01 np0005486808 nova_compute[259627]: 2025-10-14 09:02:01.311 2 DEBUG oslo_concurrency.lockutils [req-0f05712d-3ebd-4467-bc2f-d1c399583db4 req-368bfd9a-c6d9-4657-bce6-032ea3a2242a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:02:01 np0005486808 nova_compute[259627]: 2025-10-14 09:02:01.311 2 DEBUG nova.compute.manager [req-0f05712d-3ebd-4467-bc2f-d1c399583db4 req-368bfd9a-c6d9-4657-bce6-032ea3a2242a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] No waiting events found dispatching network-vif-unplugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:02:01 np0005486808 nova_compute[259627]: 2025-10-14 09:02:01.311 2 WARNING nova.compute.manager [req-0f05712d-3ebd-4467-bc2f-d1c399583db4 req-368bfd9a-c6d9-4657-bce6-032ea3a2242a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received unexpected event network-vif-unplugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for instance with vm_state active and task_state powering-off.#033[00m
Oct 14 05:02:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:01.347 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[60eff4f5-30c8-4f1b-8929-d5e563561710]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:01.349 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[17ab300c-f636-40d6-88df-a9037afa3fd3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:01.362 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[82a2b9e8-3a4b-463a-95c7-3791f483739b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 628014, 'reachable_time': 20762, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311008, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:01 np0005486808 systemd[1]: run-netns-ovnmeta\x2d7c6eb8a4\x2d6604\x2d462a\x2d8730\x2d43f3742053a7.mount: Deactivated successfully.
Oct 14 05:02:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:01.366 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:02:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:01.366 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[4b9e991e-1f48-445e-a68c-1a64e1af9a10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:01 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1356: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 346 MiB data, 600 MiB used, 59 GiB / 60 GiB avail; 959 KiB/s rd, 138 KiB/s wr, 233 op/s
Oct 14 05:02:01 np0005486808 nova_compute[259627]: 2025-10-14 09:02:01.695 2 INFO nova.virt.libvirt.driver [None req-dec82346-822e-4cc6-8a4d-132918528745 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Instance shutdown successfully after 13 seconds.#033[00m
Oct 14 05:02:01 np0005486808 nova_compute[259627]: 2025-10-14 09:02:01.701 2 INFO nova.virt.libvirt.driver [-] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Instance destroyed successfully.#033[00m
Oct 14 05:02:01 np0005486808 nova_compute[259627]: 2025-10-14 09:02:01.701 2 DEBUG nova.objects.instance [None req-dec82346-822e-4cc6-8a4d-132918528745 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'numa_topology' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:02:01 np0005486808 nova_compute[259627]: 2025-10-14 09:02:01.715 2 DEBUG nova.compute.manager [None req-dec82346-822e-4cc6-8a4d-132918528745 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:02:01 np0005486808 nova_compute[259627]: 2025-10-14 09:02:01.760 2 DEBUG oslo_concurrency.lockutils [None req-dec82346-822e-4cc6-8a4d-132918528745 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:02:01 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Oct 14 05:02:01 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Oct 14 05:02:01 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Oct 14 05:02:02 np0005486808 nova_compute[259627]: 2025-10-14 09:02:02.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:02 np0005486808 nova_compute[259627]: 2025-10-14 09:02:02.485 2 DEBUG oslo_concurrency.lockutils [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Acquiring lock "6a03ef41-3cc5-48d2-8796-369687ac6a10" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:02:02 np0005486808 nova_compute[259627]: 2025-10-14 09:02:02.487 2 DEBUG oslo_concurrency.lockutils [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lock "6a03ef41-3cc5-48d2-8796-369687ac6a10" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:02:02 np0005486808 nova_compute[259627]: 2025-10-14 09:02:02.488 2 DEBUG oslo_concurrency.lockutils [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Acquiring lock "6a03ef41-3cc5-48d2-8796-369687ac6a10-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:02:02 np0005486808 nova_compute[259627]: 2025-10-14 09:02:02.488 2 DEBUG oslo_concurrency.lockutils [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lock "6a03ef41-3cc5-48d2-8796-369687ac6a10-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:02:02 np0005486808 nova_compute[259627]: 2025-10-14 09:02:02.489 2 DEBUG oslo_concurrency.lockutils [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lock "6a03ef41-3cc5-48d2-8796-369687ac6a10-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:02:02 np0005486808 nova_compute[259627]: 2025-10-14 09:02:02.491 2 INFO nova.compute.manager [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Terminating instance#033[00m
Oct 14 05:02:02 np0005486808 nova_compute[259627]: 2025-10-14 09:02:02.493 2 DEBUG oslo_concurrency.lockutils [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Acquiring lock "refresh_cache-6a03ef41-3cc5-48d2-8796-369687ac6a10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:02:02 np0005486808 nova_compute[259627]: 2025-10-14 09:02:02.493 2 DEBUG oslo_concurrency.lockutils [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Acquired lock "refresh_cache-6a03ef41-3cc5-48d2-8796-369687ac6a10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:02:02 np0005486808 nova_compute[259627]: 2025-10-14 09:02:02.495 2 DEBUG nova.network.neutron [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:02:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:02:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Oct 14 05:02:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Oct 14 05:02:02 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Oct 14 05:02:02 np0005486808 nova_compute[259627]: 2025-10-14 09:02:02.633 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432507.6316822, f5ecec2f-eb67-49e5-abb8-15e2be8db618 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:02:02 np0005486808 nova_compute[259627]: 2025-10-14 09:02:02.633 2 INFO nova.compute.manager [-] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:02:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:02:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:02:02 np0005486808 nova_compute[259627]: 2025-10-14 09:02:02.744 2 DEBUG nova.compute.manager [None req-73ae561a-ea9c-4d91-b002-84bd8a6699fa - - - - - -] [instance: f5ecec2f-eb67-49e5-abb8-15e2be8db618] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:02:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:02:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:02:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:02:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:02:02 np0005486808 nova_compute[259627]: 2025-10-14 09:02:02.869 2 DEBUG nova.network.neutron [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:02:03 np0005486808 nova_compute[259627]: 2025-10-14 09:02:03.436 2 DEBUG nova.objects.instance [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'flavor' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:02:03 np0005486808 nova_compute[259627]: 2025-10-14 09:02:03.456 2 DEBUG oslo_concurrency.lockutils [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:02:03 np0005486808 nova_compute[259627]: 2025-10-14 09:02:03.456 2 DEBUG oslo_concurrency.lockutils [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquired lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:02:03 np0005486808 nova_compute[259627]: 2025-10-14 09:02:03.456 2 DEBUG nova.network.neutron [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:02:03 np0005486808 nova_compute[259627]: 2025-10-14 09:02:03.457 2 DEBUG nova.objects.instance [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'info_cache' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:02:03 np0005486808 nova_compute[259627]: 2025-10-14 09:02:03.470 2 DEBUG nova.compute.manager [req-68237a80-3206-44fc-b531-2da235417d93 req-c965a987-dc9f-4303-9a6c-8b727d737c23 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:02:03 np0005486808 nova_compute[259627]: 2025-10-14 09:02:03.470 2 DEBUG oslo_concurrency.lockutils [req-68237a80-3206-44fc-b531-2da235417d93 req-c965a987-dc9f-4303-9a6c-8b727d737c23 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:02:03 np0005486808 nova_compute[259627]: 2025-10-14 09:02:03.471 2 DEBUG oslo_concurrency.lockutils [req-68237a80-3206-44fc-b531-2da235417d93 req-c965a987-dc9f-4303-9a6c-8b727d737c23 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:02:03 np0005486808 nova_compute[259627]: 2025-10-14 09:02:03.471 2 DEBUG oslo_concurrency.lockutils [req-68237a80-3206-44fc-b531-2da235417d93 req-c965a987-dc9f-4303-9a6c-8b727d737c23 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:02:03 np0005486808 nova_compute[259627]: 2025-10-14 09:02:03.471 2 DEBUG nova.compute.manager [req-68237a80-3206-44fc-b531-2da235417d93 req-c965a987-dc9f-4303-9a6c-8b727d737c23 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] No waiting events found dispatching network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:02:03 np0005486808 nova_compute[259627]: 2025-10-14 09:02:03.471 2 WARNING nova.compute.manager [req-68237a80-3206-44fc-b531-2da235417d93 req-c965a987-dc9f-4303-9a6c-8b727d737c23 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received unexpected event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for instance with vm_state stopped and task_state powering-on.#033[00m
Oct 14 05:02:03 np0005486808 nova_compute[259627]: 2025-10-14 09:02:03.503 2 DEBUG nova.network.neutron [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:02:03 np0005486808 nova_compute[259627]: 2025-10-14 09:02:03.519 2 DEBUG oslo_concurrency.lockutils [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Releasing lock "refresh_cache-6a03ef41-3cc5-48d2-8796-369687ac6a10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:02:03 np0005486808 nova_compute[259627]: 2025-10-14 09:02:03.520 2 DEBUG nova.compute.manager [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:02:03 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1359: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 346 MiB data, 600 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 57 KiB/s wr, 97 op/s
Oct 14 05:02:03 np0005486808 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d0000002d.scope: Deactivated successfully.
Oct 14 05:02:03 np0005486808 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d0000002d.scope: Consumed 12.845s CPU time.
Oct 14 05:02:03 np0005486808 systemd-machined[214636]: Machine qemu-53-instance-0000002d terminated.
Oct 14 05:02:03 np0005486808 nova_compute[259627]: 2025-10-14 09:02:03.740 2 INFO nova.virt.libvirt.driver [-] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Instance destroyed successfully.#033[00m
Oct 14 05:02:03 np0005486808 nova_compute[259627]: 2025-10-14 09:02:03.741 2 DEBUG nova.objects.instance [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lazy-loading 'resources' on Instance uuid 6a03ef41-3cc5-48d2-8796-369687ac6a10 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:02:03 np0005486808 podman[311010]: 2025-10-14 09:02:03.7965177 +0000 UTC m=+0.058864998 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:02:03 np0005486808 podman[311009]: 2025-10-14 09:02:03.802008205 +0000 UTC m=+0.062324703 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:02:04 np0005486808 nova_compute[259627]: 2025-10-14 09:02:04.422 2 INFO nova.virt.libvirt.driver [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Deleting instance files /var/lib/nova/instances/6a03ef41-3cc5-48d2-8796-369687ac6a10_del#033[00m
Oct 14 05:02:04 np0005486808 nova_compute[259627]: 2025-10-14 09:02:04.423 2 INFO nova.virt.libvirt.driver [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Deletion of /var/lib/nova/instances/6a03ef41-3cc5-48d2-8796-369687ac6a10_del complete#033[00m
Oct 14 05:02:04 np0005486808 nova_compute[259627]: 2025-10-14 09:02:04.467 2 INFO nova.compute.manager [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Took 0.95 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:02:04 np0005486808 nova_compute[259627]: 2025-10-14 09:02:04.468 2 DEBUG oslo.service.loopingcall [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:02:04 np0005486808 nova_compute[259627]: 2025-10-14 09:02:04.468 2 DEBUG nova.compute.manager [-] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:02:04 np0005486808 nova_compute[259627]: 2025-10-14 09:02:04.468 2 DEBUG nova.network.neutron [-] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:02:04 np0005486808 nova_compute[259627]: 2025-10-14 09:02:04.781 2 DEBUG nova.network.neutron [-] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:02:04 np0005486808 nova_compute[259627]: 2025-10-14 09:02:04.804 2 DEBUG nova.network.neutron [-] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:02:04 np0005486808 nova_compute[259627]: 2025-10-14 09:02:04.819 2 INFO nova.compute.manager [-] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Took 0.35 seconds to deallocate network for instance.#033[00m
Oct 14 05:02:04 np0005486808 nova_compute[259627]: 2025-10-14 09:02:04.871 2 DEBUG oslo_concurrency.lockutils [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:02:04 np0005486808 nova_compute[259627]: 2025-10-14 09:02:04.872 2 DEBUG oslo_concurrency.lockutils [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:02:04 np0005486808 nova_compute[259627]: 2025-10-14 09:02:04.969 2 DEBUG oslo_concurrency.processutils [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:02:05 np0005486808 nova_compute[259627]: 2025-10-14 09:02:05.266 2 DEBUG nova.network.neutron [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Updating instance_info_cache with network_info: [{"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:02:05 np0005486808 nova_compute[259627]: 2025-10-14 09:02:05.297 2 DEBUG oslo_concurrency.lockutils [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Releasing lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:02:05 np0005486808 nova_compute[259627]: 2025-10-14 09:02:05.328 2 INFO nova.virt.libvirt.driver [-] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Instance destroyed successfully.#033[00m
Oct 14 05:02:05 np0005486808 nova_compute[259627]: 2025-10-14 09:02:05.328 2 DEBUG nova.objects.instance [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'numa_topology' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:02:05 np0005486808 nova_compute[259627]: 2025-10-14 09:02:05.345 2 DEBUG nova.objects.instance [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'resources' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:02:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:02:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3340642275' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:02:05 np0005486808 nova_compute[259627]: 2025-10-14 09:02:05.362 2 DEBUG nova.virt.libvirt.vif [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1794713901',display_name='tempest-ServerActionsTestJSON-server-1794713901',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1794713901',id=43,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLYFNlP3hTYeUpT8nc14lRftSHASrG8vP9AGnXtKYdUjMqVtIeIad+zh+sJMfad25mvb2X38sw9juFrm0nqlru2kjBIYbt6/N3JxbHEKwTt4Dj077tGdx4oZ3+9/NHyjkQ==',key_name='tempest-keypair-1333367407',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:01:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='368d762ed02e459d892ad1e5488c2871',ramdisk_id='',reservation_id='r-cj1cuo6c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1593617559',owner_user_name='tempest-ServerActionsTestJSON-1593617559-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:02:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aa32af91355a41198fd57121e5c70ec2',uuid=de383510-2de3-40bd-b479-c0010b3f2d1c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:02:05 np0005486808 nova_compute[259627]: 2025-10-14 09:02:05.362 2 DEBUG nova.network.os_vif_util [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converting VIF {"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:02:05 np0005486808 nova_compute[259627]: 2025-10-14 09:02:05.363 2 DEBUG nova.network.os_vif_util [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:02:05 np0005486808 nova_compute[259627]: 2025-10-14 09:02:05.363 2 DEBUG os_vif [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:02:05 np0005486808 nova_compute[259627]: 2025-10-14 09:02:05.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:05 np0005486808 nova_compute[259627]: 2025-10-14 09:02:05.366 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8ec905f0-b7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:02:05 np0005486808 nova_compute[259627]: 2025-10-14 09:02:05.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:05 np0005486808 nova_compute[259627]: 2025-10-14 09:02:05.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:02:05 np0005486808 nova_compute[259627]: 2025-10-14 09:02:05.374 2 INFO os_vif [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7')#033[00m
Oct 14 05:02:05 np0005486808 nova_compute[259627]: 2025-10-14 09:02:05.375 2 DEBUG oslo_concurrency.processutils [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:02:05 np0005486808 nova_compute[259627]: 2025-10-14 09:02:05.382 2 DEBUG nova.virt.libvirt.driver [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Start _get_guest_xml network_info=[{"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:02:05 np0005486808 nova_compute[259627]: 2025-10-14 09:02:05.384 2 DEBUG nova.compute.provider_tree [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:02:05 np0005486808 nova_compute[259627]: 2025-10-14 09:02:05.388 2 WARNING nova.virt.libvirt.driver [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:02:05 np0005486808 nova_compute[259627]: 2025-10-14 09:02:05.393 2 DEBUG nova.virt.libvirt.host [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:02:05 np0005486808 nova_compute[259627]: 2025-10-14 09:02:05.394 2 DEBUG nova.virt.libvirt.host [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:02:05 np0005486808 nova_compute[259627]: 2025-10-14 09:02:05.398 2 DEBUG nova.scheduler.client.report [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:02:05 np0005486808 nova_compute[259627]: 2025-10-14 09:02:05.401 2 DEBUG nova.virt.libvirt.host [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:02:05 np0005486808 nova_compute[259627]: 2025-10-14 09:02:05.402 2 DEBUG nova.virt.libvirt.host [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:02:05 np0005486808 nova_compute[259627]: 2025-10-14 09:02:05.402 2 DEBUG nova.virt.libvirt.driver [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:02:05 np0005486808 nova_compute[259627]: 2025-10-14 09:02:05.402 2 DEBUG nova.virt.hardware [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:02:05 np0005486808 nova_compute[259627]: 2025-10-14 09:02:05.403 2 DEBUG nova.virt.hardware [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:02:05 np0005486808 nova_compute[259627]: 2025-10-14 09:02:05.403 2 DEBUG nova.virt.hardware [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:02:05 np0005486808 nova_compute[259627]: 2025-10-14 09:02:05.403 2 DEBUG nova.virt.hardware [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:02:05 np0005486808 nova_compute[259627]: 2025-10-14 09:02:05.403 2 DEBUG nova.virt.hardware [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:02:05 np0005486808 nova_compute[259627]: 2025-10-14 09:02:05.404 2 DEBUG nova.virt.hardware [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:02:05 np0005486808 nova_compute[259627]: 2025-10-14 09:02:05.404 2 DEBUG nova.virt.hardware [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:02:05 np0005486808 nova_compute[259627]: 2025-10-14 09:02:05.404 2 DEBUG nova.virt.hardware [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:02:05 np0005486808 nova_compute[259627]: 2025-10-14 09:02:05.404 2 DEBUG nova.virt.hardware [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:02:05 np0005486808 nova_compute[259627]: 2025-10-14 09:02:05.405 2 DEBUG nova.virt.hardware [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:02:05 np0005486808 nova_compute[259627]: 2025-10-14 09:02:05.405 2 DEBUG nova.virt.hardware [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:02:05 np0005486808 nova_compute[259627]: 2025-10-14 09:02:05.405 2 DEBUG nova.objects.instance [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'vcpu_model' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:02:05 np0005486808 nova_compute[259627]: 2025-10-14 09:02:05.419 2 DEBUG oslo_concurrency.processutils [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:02:05 np0005486808 nova_compute[259627]: 2025-10-14 09:02:05.452 2 DEBUG oslo_concurrency.lockutils [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:02:05 np0005486808 nova_compute[259627]: 2025-10-14 09:02:05.489 2 INFO nova.scheduler.client.report [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Deleted allocations for instance 6a03ef41-3cc5-48d2-8796-369687ac6a10#033[00m
Oct 14 05:02:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:02:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/793083822' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:02:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:02:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/793083822' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:02:05 np0005486808 nova_compute[259627]: 2025-10-14 09:02:05.566 2 DEBUG oslo_concurrency.lockutils [None req-c4b4d93e-dc62-4724-882e-4ebf6713c9e2 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lock "6a03ef41-3cc5-48d2-8796-369687ac6a10" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:02:05 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1360: 305 pgs: 305 active+clean; 202 MiB data, 519 MiB used, 59 GiB / 60 GiB avail; 134 KiB/s rd, 61 KiB/s wr, 196 op/s
Oct 14 05:02:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:02:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1401632240' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:02:05 np0005486808 nova_compute[259627]: 2025-10-14 09:02:05.835 2 DEBUG oslo_concurrency.processutils [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:02:05 np0005486808 nova_compute[259627]: 2025-10-14 09:02:05.882 2 DEBUG oslo_concurrency.processutils [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:02:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:02:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2373227759' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:02:06 np0005486808 nova_compute[259627]: 2025-10-14 09:02:06.328 2 DEBUG oslo_concurrency.processutils [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:02:06 np0005486808 nova_compute[259627]: 2025-10-14 09:02:06.330 2 DEBUG nova.virt.libvirt.vif [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1794713901',display_name='tempest-ServerActionsTestJSON-server-1794713901',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1794713901',id=43,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLYFNlP3hTYeUpT8nc14lRftSHASrG8vP9AGnXtKYdUjMqVtIeIad+zh+sJMfad25mvb2X38sw9juFrm0nqlru2kjBIYbt6/N3JxbHEKwTt4Dj077tGdx4oZ3+9/NHyjkQ==',key_name='tempest-keypair-1333367407',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:01:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='368d762ed02e459d892ad1e5488c2871',ramdisk_id='',reservation_id='r-cj1cuo6c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1593617559',owner_user_name='tempest-ServerActionsTestJSON-1593617559-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:02:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aa32af91355a41198fd57121e5c70ec2',uuid=de383510-2de3-40bd-b479-c0010b3f2d1c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:02:06 np0005486808 nova_compute[259627]: 2025-10-14 09:02:06.331 2 DEBUG nova.network.os_vif_util [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converting VIF {"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:02:06 np0005486808 nova_compute[259627]: 2025-10-14 09:02:06.332 2 DEBUG nova.network.os_vif_util [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:02:06 np0005486808 nova_compute[259627]: 2025-10-14 09:02:06.334 2 DEBUG nova.objects.instance [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'pci_devices' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:02:06 np0005486808 nova_compute[259627]: 2025-10-14 09:02:06.353 2 DEBUG nova.virt.libvirt.driver [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:02:06 np0005486808 nova_compute[259627]:  <uuid>de383510-2de3-40bd-b479-c0010b3f2d1c</uuid>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:  <name>instance-0000002b</name>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:02:06 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:      <nova:name>tempest-ServerActionsTestJSON-server-1794713901</nova:name>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:02:05</nova:creationTime>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:02:06 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:        <nova:user uuid="aa32af91355a41198fd57121e5c70ec2">tempest-ServerActionsTestJSON-1593617559-project-member</nova:user>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:        <nova:project uuid="368d762ed02e459d892ad1e5488c2871">tempest-ServerActionsTestJSON-1593617559</nova:project>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:        <nova:port uuid="8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2">
Oct 14 05:02:06 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:      <entry name="serial">de383510-2de3-40bd-b479-c0010b3f2d1c</entry>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:      <entry name="uuid">de383510-2de3-40bd-b479-c0010b3f2d1c</entry>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:02:06 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/de383510-2de3-40bd-b479-c0010b3f2d1c_disk">
Oct 14 05:02:06 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:02:06 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:02:06 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/de383510-2de3-40bd-b479-c0010b3f2d1c_disk.config">
Oct 14 05:02:06 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:02:06 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:02:06 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:be:e2:1b"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:      <target dev="tap8ec905f0-b7"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:02:06 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/de383510-2de3-40bd-b479-c0010b3f2d1c/console.log" append="off"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <input type="keyboard" bus="usb"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:02:06 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:02:06 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:02:06 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:02:06 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:02:06 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:02:06 np0005486808 nova_compute[259627]: 2025-10-14 09:02:06.356 2 DEBUG nova.virt.libvirt.driver [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:02:06 np0005486808 nova_compute[259627]: 2025-10-14 09:02:06.356 2 DEBUG nova.virt.libvirt.driver [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:02:06 np0005486808 nova_compute[259627]: 2025-10-14 09:02:06.357 2 DEBUG nova.virt.libvirt.vif [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1794713901',display_name='tempest-ServerActionsTestJSON-server-1794713901',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1794713901',id=43,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLYFNlP3hTYeUpT8nc14lRftSHASrG8vP9AGnXtKYdUjMqVtIeIad+zh+sJMfad25mvb2X38sw9juFrm0nqlru2kjBIYbt6/N3JxbHEKwTt4Dj077tGdx4oZ3+9/NHyjkQ==',key_name='tempest-keypair-1333367407',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:01:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='368d762ed02e459d892ad1e5488c2871',ramdisk_id='',reservation_id='r-cj1cuo6c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1593617559',owner_user_name='tempest-ServerActionsTestJSON-1593617559-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:02:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aa32af91355a41198fd57121e5c70ec2',uuid=de383510-2de3-40bd-b479-c0010b3f2d1c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:02:06 np0005486808 nova_compute[259627]: 2025-10-14 09:02:06.358 2 DEBUG nova.network.os_vif_util [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converting VIF {"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:02:06 np0005486808 nova_compute[259627]: 2025-10-14 09:02:06.358 2 DEBUG nova.network.os_vif_util [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:02:06 np0005486808 nova_compute[259627]: 2025-10-14 09:02:06.359 2 DEBUG os_vif [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:02:06 np0005486808 nova_compute[259627]: 2025-10-14 09:02:06.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:06 np0005486808 nova_compute[259627]: 2025-10-14 09:02:06.360 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:02:06 np0005486808 nova_compute[259627]: 2025-10-14 09:02:06.361 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:02:06 np0005486808 nova_compute[259627]: 2025-10-14 09:02:06.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:06 np0005486808 nova_compute[259627]: 2025-10-14 09:02:06.364 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8ec905f0-b7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:02:06 np0005486808 nova_compute[259627]: 2025-10-14 09:02:06.365 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8ec905f0-b7, col_values=(('external_ids', {'iface-id': '8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:be:e2:1b', 'vm-uuid': 'de383510-2de3-40bd-b479-c0010b3f2d1c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:02:06 np0005486808 nova_compute[259627]: 2025-10-14 09:02:06.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:06 np0005486808 NetworkManager[44885]: <info>  [1760432526.3701] manager: (tap8ec905f0-b7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/186)
Oct 14 05:02:06 np0005486808 nova_compute[259627]: 2025-10-14 09:02:06.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:02:06 np0005486808 nova_compute[259627]: 2025-10-14 09:02:06.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:06 np0005486808 nova_compute[259627]: 2025-10-14 09:02:06.375 2 INFO os_vif [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7')#033[00m
Oct 14 05:02:06 np0005486808 nova_compute[259627]: 2025-10-14 09:02:06.406 2 DEBUG oslo_concurrency.lockutils [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Acquiring lock "333926ec-cf24-467b-b9b1-d1fa70a4feb2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:02:06 np0005486808 nova_compute[259627]: 2025-10-14 09:02:06.407 2 DEBUG oslo_concurrency.lockutils [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lock "333926ec-cf24-467b-b9b1-d1fa70a4feb2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:02:06 np0005486808 nova_compute[259627]: 2025-10-14 09:02:06.408 2 DEBUG oslo_concurrency.lockutils [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Acquiring lock "333926ec-cf24-467b-b9b1-d1fa70a4feb2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:02:06 np0005486808 nova_compute[259627]: 2025-10-14 09:02:06.408 2 DEBUG oslo_concurrency.lockutils [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lock "333926ec-cf24-467b-b9b1-d1fa70a4feb2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:02:06 np0005486808 nova_compute[259627]: 2025-10-14 09:02:06.408 2 DEBUG oslo_concurrency.lockutils [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lock "333926ec-cf24-467b-b9b1-d1fa70a4feb2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:02:06 np0005486808 nova_compute[259627]: 2025-10-14 09:02:06.409 2 INFO nova.compute.manager [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Terminating instance#033[00m
Oct 14 05:02:06 np0005486808 nova_compute[259627]: 2025-10-14 09:02:06.410 2 DEBUG oslo_concurrency.lockutils [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Acquiring lock "refresh_cache-333926ec-cf24-467b-b9b1-d1fa70a4feb2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:02:06 np0005486808 nova_compute[259627]: 2025-10-14 09:02:06.410 2 DEBUG oslo_concurrency.lockutils [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Acquired lock "refresh_cache-333926ec-cf24-467b-b9b1-d1fa70a4feb2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:02:06 np0005486808 nova_compute[259627]: 2025-10-14 09:02:06.411 2 DEBUG nova.network.neutron [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:02:06 np0005486808 kernel: tap8ec905f0-b7: entered promiscuous mode
Oct 14 05:02:06 np0005486808 NetworkManager[44885]: <info>  [1760432526.4519] manager: (tap8ec905f0-b7): new Tun device (/org/freedesktop/NetworkManager/Devices/187)
Oct 14 05:02:06 np0005486808 nova_compute[259627]: 2025-10-14 09:02:06.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:06 np0005486808 ovn_controller[152662]: 2025-10-14T09:02:06Z|00418|binding|INFO|Claiming lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for this chassis.
Oct 14 05:02:06 np0005486808 ovn_controller[152662]: 2025-10-14T09:02:06Z|00419|binding|INFO|8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2: Claiming fa:16:3e:be:e2:1b 10.100.0.10
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.466 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:e2:1b 10.100.0.10'], port_security=['fa:16:3e:be:e2:1b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'de383510-2de3-40bd-b479-c0010b3f2d1c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c6eb8a4-6604-462a-8730-43f3742053a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '368d762ed02e459d892ad1e5488c2871', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'b04f9b87-06e3-4b99-859f-b095869a50f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.247'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d063443f-d68f-4763-a6fc-fc1631e3cde1, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.468 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 in datapath 7c6eb8a4-6604-462a-8730-43f3742053a7 bound to our chassis#033[00m
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.470 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7c6eb8a4-6604-462a-8730-43f3742053a7#033[00m
Oct 14 05:02:06 np0005486808 ovn_controller[152662]: 2025-10-14T09:02:06Z|00420|binding|INFO|Setting lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 ovn-installed in OVS
Oct 14 05:02:06 np0005486808 ovn_controller[152662]: 2025-10-14T09:02:06Z|00421|binding|INFO|Setting lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 up in Southbound
Oct 14 05:02:06 np0005486808 nova_compute[259627]: 2025-10-14 09:02:06.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:06 np0005486808 systemd-udevd[311167]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:02:06 np0005486808 systemd-machined[214636]: New machine qemu-58-instance-0000002b.
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.489 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fe0ddf9c-6a87-450b-98fe-1bcf993864c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.490 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7c6eb8a4-61 in ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.492 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7c6eb8a4-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.492 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e0eb5401-8f70-4b48-ad33-e533ca237ab2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.492 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d8d07268-d87b-45d4-82d7-75866d54ba5e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:06 np0005486808 NetworkManager[44885]: <info>  [1760432526.5014] device (tap8ec905f0-b7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:02:06 np0005486808 NetworkManager[44885]: <info>  [1760432526.5029] device (tap8ec905f0-b7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:02:06 np0005486808 systemd[1]: Started Virtual Machine qemu-58-instance-0000002b.
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.512 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[6a4fd39d-eb5a-42ee-89bd-9cbdb2467531]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.543 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d31e8f25-31ca-4778-8da8-d257e38b818c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.594 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[38742248-42eb-4e79-97a0-1bfe1b3a85a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:06 np0005486808 NetworkManager[44885]: <info>  [1760432526.6051] manager: (tap7c6eb8a4-60): new Veth device (/org/freedesktop/NetworkManager/Devices/188)
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.605 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2d65f4ad-7113-4191-a285-7099f8184795]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.641 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3e2c044f-3b6f-497a-af77-49d653b37e3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.645 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[77c8cdbb-2630-4086-8677-2af75176119d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:06 np0005486808 NetworkManager[44885]: <info>  [1760432526.6751] device (tap7c6eb8a4-60): carrier: link connected
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.683 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[655f5591-b0d7-47e1-8f2f-e9334f674207]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.704 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ccd70405-eda6-490c-a355-5b817cdb6aa7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c6eb8a4-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:62:70:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 127], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630543, 'reachable_time': 20601, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311202, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.726 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d5eaa3ec-ab2f-4e9e-9155-9ca0c5d9fe31]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe62:70fd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630543, 'tstamp': 630543}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311203, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.748 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[eb91610a-9d90-41de-af83-eb552ce617aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c6eb8a4-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:62:70:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 127], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630543, 'reachable_time': 20601, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311204, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.788 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f85e6cf8-63e9-4322-933a-470b626f40b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.866 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a707ed85-f579-447a-a6bc-08e70f2c9a8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.868 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c6eb8a4-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.868 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.869 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c6eb8a4-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:02:06 np0005486808 nova_compute[259627]: 2025-10-14 09:02:06.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:06 np0005486808 kernel: tap7c6eb8a4-60: entered promiscuous mode
Oct 14 05:02:06 np0005486808 NetworkManager[44885]: <info>  [1760432526.8731] manager: (tap7c6eb8a4-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/189)
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.876 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7c6eb8a4-60, col_values=(('external_ids', {'iface-id': '0f8f2393-a280-4a6e-ac22-da67bf8d74da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:02:06 np0005486808 ovn_controller[152662]: 2025-10-14T09:02:06Z|00422|binding|INFO|Releasing lport 0f8f2393-a280-4a6e-ac22-da67bf8d74da from this chassis (sb_readonly=0)
Oct 14 05:02:06 np0005486808 nova_compute[259627]: 2025-10-14 09:02:06.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:06 np0005486808 nova_compute[259627]: 2025-10-14 09:02:06.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.902 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7c6eb8a4-6604-462a-8730-43f3742053a7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7c6eb8a4-6604-462a-8730-43f3742053a7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.903 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9647d1ac-e513-4640-8fed-468bddea9d16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.903 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-7c6eb8a4-6604-462a-8730-43f3742053a7
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/7c6eb8a4-6604-462a-8730-43f3742053a7.pid.haproxy
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 7c6eb8a4-6604-462a-8730-43f3742053a7
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:02:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:06.904 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'env', 'PROCESS_TAG=haproxy-7c6eb8a4-6604-462a-8730-43f3742053a7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7c6eb8a4-6604-462a-8730-43f3742053a7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:02:06 np0005486808 nova_compute[259627]: 2025-10-14 09:02:06.974 2 DEBUG nova.network.neutron [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:02:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:07.020 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:02:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:07.021 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:02:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:07.022 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:02:07 np0005486808 nova_compute[259627]: 2025-10-14 09:02:07.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:07 np0005486808 nova_compute[259627]: 2025-10-14 09:02:07.295 2 DEBUG nova.compute.manager [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:02:07 np0005486808 nova_compute[259627]: 2025-10-14 09:02:07.296 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for de383510-2de3-40bd-b479-c0010b3f2d1c due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct 14 05:02:07 np0005486808 nova_compute[259627]: 2025-10-14 09:02:07.296 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432527.29517, de383510-2de3-40bd-b479-c0010b3f2d1c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:02:07 np0005486808 nova_compute[259627]: 2025-10-14 09:02:07.296 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:02:07 np0005486808 nova_compute[259627]: 2025-10-14 09:02:07.301 2 INFO nova.virt.libvirt.driver [-] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Instance rebooted successfully.#033[00m
Oct 14 05:02:07 np0005486808 nova_compute[259627]: 2025-10-14 09:02:07.302 2 DEBUG nova.compute.manager [None req-9ad76964-6521-4dfa-a0fd-4364be6aa028 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:02:07 np0005486808 podman[311278]: 2025-10-14 09:02:07.302686496 +0000 UTC m=+0.066245800 container create ab32ae27bddbf58ecddb1f3f251f2fd69a9fb7fa105056c4865076a9132477e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:02:07 np0005486808 nova_compute[259627]: 2025-10-14 09:02:07.325 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:02:07 np0005486808 nova_compute[259627]: 2025-10-14 09:02:07.329 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:02:07 np0005486808 nova_compute[259627]: 2025-10-14 09:02:07.333 2 DEBUG nova.compute.manager [req-9f05155f-7eac-4759-a4ca-13663549a8c0 req-eff67994-ac2a-4f69-972d-4642ccea8c0e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:02:07 np0005486808 nova_compute[259627]: 2025-10-14 09:02:07.334 2 DEBUG oslo_concurrency.lockutils [req-9f05155f-7eac-4759-a4ca-13663549a8c0 req-eff67994-ac2a-4f69-972d-4642ccea8c0e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:02:07 np0005486808 nova_compute[259627]: 2025-10-14 09:02:07.334 2 DEBUG oslo_concurrency.lockutils [req-9f05155f-7eac-4759-a4ca-13663549a8c0 req-eff67994-ac2a-4f69-972d-4642ccea8c0e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:02:07 np0005486808 nova_compute[259627]: 2025-10-14 09:02:07.334 2 DEBUG oslo_concurrency.lockutils [req-9f05155f-7eac-4759-a4ca-13663549a8c0 req-eff67994-ac2a-4f69-972d-4642ccea8c0e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:02:07 np0005486808 nova_compute[259627]: 2025-10-14 09:02:07.334 2 DEBUG nova.compute.manager [req-9f05155f-7eac-4759-a4ca-13663549a8c0 req-eff67994-ac2a-4f69-972d-4642ccea8c0e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] No waiting events found dispatching network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:02:07 np0005486808 nova_compute[259627]: 2025-10-14 09:02:07.334 2 WARNING nova.compute.manager [req-9f05155f-7eac-4759-a4ca-13663549a8c0 req-eff67994-ac2a-4f69-972d-4642ccea8c0e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received unexpected event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for instance with vm_state stopped and task_state powering-on.#033[00m
Oct 14 05:02:07 np0005486808 systemd[1]: Started libpod-conmon-ab32ae27bddbf58ecddb1f3f251f2fd69a9fb7fa105056c4865076a9132477e6.scope.
Oct 14 05:02:07 np0005486808 nova_compute[259627]: 2025-10-14 09:02:07.351 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] During sync_power_state the instance has a pending task (powering-on). Skip.#033[00m
Oct 14 05:02:07 np0005486808 nova_compute[259627]: 2025-10-14 09:02:07.351 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432527.2958403, de383510-2de3-40bd-b479-c0010b3f2d1c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:02:07 np0005486808 nova_compute[259627]: 2025-10-14 09:02:07.351 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] VM Started (Lifecycle Event)#033[00m
Oct 14 05:02:07 np0005486808 podman[311278]: 2025-10-14 09:02:07.266943957 +0000 UTC m=+0.030503261 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:02:07 np0005486808 nova_compute[259627]: 2025-10-14 09:02:07.371 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:02:07 np0005486808 nova_compute[259627]: 2025-10-14 09:02:07.374 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:02:07 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:02:07 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87bfab2c1e353f2bf941f3b9e1e0de39f9c4d7dd1ebe63d54aaa2d4f59edff49/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:02:07 np0005486808 nova_compute[259627]: 2025-10-14 09:02:07.428 2 DEBUG nova.network.neutron [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:02:07 np0005486808 podman[311278]: 2025-10-14 09:02:07.442487713 +0000 UTC m=+0.206047037 container init ab32ae27bddbf58ecddb1f3f251f2fd69a9fb7fa105056c4865076a9132477e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009)
Oct 14 05:02:07 np0005486808 nova_compute[259627]: 2025-10-14 09:02:07.448 2 DEBUG oslo_concurrency.lockutils [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Releasing lock "refresh_cache-333926ec-cf24-467b-b9b1-d1fa70a4feb2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:02:07 np0005486808 nova_compute[259627]: 2025-10-14 09:02:07.449 2 DEBUG nova.compute.manager [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:02:07 np0005486808 podman[311278]: 2025-10-14 09:02:07.450401057 +0000 UTC m=+0.213960331 container start ab32ae27bddbf58ecddb1f3f251f2fd69a9fb7fa105056c4865076a9132477e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 14 05:02:07 np0005486808 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[311293]: [NOTICE]   (311297) : New worker (311299) forked
Oct 14 05:02:07 np0005486808 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[311293]: [NOTICE]   (311297) : Loading success.
Oct 14 05:02:07 np0005486808 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d0000002c.scope: Deactivated successfully.
Oct 14 05:02:07 np0005486808 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d0000002c.scope: Consumed 13.496s CPU time.
Oct 14 05:02:07 np0005486808 systemd-machined[214636]: Machine qemu-52-instance-0000002c terminated.
Oct 14 05:02:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:02:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Oct 14 05:02:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Oct 14 05:02:07 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Oct 14 05:02:07 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1362: 305 pgs: 305 active+clean; 202 MiB data, 519 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 4.7 KiB/s wr, 101 op/s
Oct 14 05:02:07 np0005486808 nova_compute[259627]: 2025-10-14 09:02:07.667 2 INFO nova.virt.libvirt.driver [-] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Instance destroyed successfully.#033[00m
Oct 14 05:02:07 np0005486808 nova_compute[259627]: 2025-10-14 09:02:07.667 2 DEBUG nova.objects.instance [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lazy-loading 'resources' on Instance uuid 333926ec-cf24-467b-b9b1-d1fa70a4feb2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:02:08 np0005486808 nova_compute[259627]: 2025-10-14 09:02:08.126 2 INFO nova.virt.libvirt.driver [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Deleting instance files /var/lib/nova/instances/333926ec-cf24-467b-b9b1-d1fa70a4feb2_del#033[00m
Oct 14 05:02:08 np0005486808 nova_compute[259627]: 2025-10-14 09:02:08.127 2 INFO nova.virt.libvirt.driver [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Deletion of /var/lib/nova/instances/333926ec-cf24-467b-b9b1-d1fa70a4feb2_del complete#033[00m
Oct 14 05:02:08 np0005486808 nova_compute[259627]: 2025-10-14 09:02:08.184 2 INFO nova.compute.manager [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:02:08 np0005486808 nova_compute[259627]: 2025-10-14 09:02:08.185 2 DEBUG oslo.service.loopingcall [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:02:08 np0005486808 nova_compute[259627]: 2025-10-14 09:02:08.185 2 DEBUG nova.compute.manager [-] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:02:08 np0005486808 nova_compute[259627]: 2025-10-14 09:02:08.185 2 DEBUG nova.network.neutron [-] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:02:08 np0005486808 nova_compute[259627]: 2025-10-14 09:02:08.528 2 DEBUG nova.network.neutron [-] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:02:08 np0005486808 nova_compute[259627]: 2025-10-14 09:02:08.542 2 DEBUG nova.network.neutron [-] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:02:08 np0005486808 nova_compute[259627]: 2025-10-14 09:02:08.559 2 INFO nova.compute.manager [-] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Took 0.37 seconds to deallocate network for instance.#033[00m
Oct 14 05:02:08 np0005486808 nova_compute[259627]: 2025-10-14 09:02:08.609 2 DEBUG oslo_concurrency.lockutils [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:02:08 np0005486808 nova_compute[259627]: 2025-10-14 09:02:08.609 2 DEBUG oslo_concurrency.lockutils [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:02:08 np0005486808 nova_compute[259627]: 2025-10-14 09:02:08.677 2 DEBUG oslo_concurrency.processutils [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:02:08 np0005486808 nova_compute[259627]: 2025-10-14 09:02:08.844 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432513.8432994, 15778bbd-fbee-44e9-ba12-9884db0e7afb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:02:08 np0005486808 nova_compute[259627]: 2025-10-14 09:02:08.845 2 INFO nova.compute.manager [-] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:02:08 np0005486808 nova_compute[259627]: 2025-10-14 09:02:08.864 2 DEBUG nova.compute.manager [None req-e849734f-f36c-40ce-8b36-8665a06bd042 - - - - - -] [instance: 15778bbd-fbee-44e9-ba12-9884db0e7afb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:02:09 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:02:09 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/104698738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:02:09 np0005486808 nova_compute[259627]: 2025-10-14 09:02:09.126 2 DEBUG oslo_concurrency.processutils [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:02:09 np0005486808 nova_compute[259627]: 2025-10-14 09:02:09.134 2 DEBUG nova.compute.provider_tree [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:02:09 np0005486808 nova_compute[259627]: 2025-10-14 09:02:09.161 2 DEBUG nova.scheduler.client.report [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:02:09 np0005486808 nova_compute[259627]: 2025-10-14 09:02:09.199 2 DEBUG oslo_concurrency.lockutils [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:02:09 np0005486808 nova_compute[259627]: 2025-10-14 09:02:09.223 2 INFO nova.scheduler.client.report [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Deleted allocations for instance 333926ec-cf24-467b-b9b1-d1fa70a4feb2#033[00m
Oct 14 05:02:09 np0005486808 nova_compute[259627]: 2025-10-14 09:02:09.293 2 DEBUG oslo_concurrency.lockutils [None req-2cfbb229-01f1-4275-8dfe-f7a9af3efa63 fcce8bcba0284dda83483c56b84f3c0b 996304c8692b4264873203558d043ea2 - - default default] Lock "333926ec-cf24-467b-b9b1-d1fa70a4feb2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.885s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:02:09 np0005486808 nova_compute[259627]: 2025-10-14 09:02:09.573 2 DEBUG nova.compute.manager [req-47f558f2-782e-4220-b133-2f9e2f93de66 req-efebf50d-ba4d-4652-8cb8-5b65a8ec482a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:02:09 np0005486808 nova_compute[259627]: 2025-10-14 09:02:09.573 2 DEBUG oslo_concurrency.lockutils [req-47f558f2-782e-4220-b133-2f9e2f93de66 req-efebf50d-ba4d-4652-8cb8-5b65a8ec482a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:02:09 np0005486808 nova_compute[259627]: 2025-10-14 09:02:09.573 2 DEBUG oslo_concurrency.lockutils [req-47f558f2-782e-4220-b133-2f9e2f93de66 req-efebf50d-ba4d-4652-8cb8-5b65a8ec482a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:02:09 np0005486808 nova_compute[259627]: 2025-10-14 09:02:09.574 2 DEBUG oslo_concurrency.lockutils [req-47f558f2-782e-4220-b133-2f9e2f93de66 req-efebf50d-ba4d-4652-8cb8-5b65a8ec482a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:02:09 np0005486808 nova_compute[259627]: 2025-10-14 09:02:09.574 2 DEBUG nova.compute.manager [req-47f558f2-782e-4220-b133-2f9e2f93de66 req-efebf50d-ba4d-4652-8cb8-5b65a8ec482a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] No waiting events found dispatching network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:02:09 np0005486808 nova_compute[259627]: 2025-10-14 09:02:09.574 2 WARNING nova.compute.manager [req-47f558f2-782e-4220-b133-2f9e2f93de66 req-efebf50d-ba4d-4652-8cb8-5b65a8ec482a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received unexpected event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:02:09 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1363: 305 pgs: 305 active+clean; 202 MiB data, 519 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 3.6 KiB/s wr, 77 op/s
Oct 14 05:02:10 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Oct 14 05:02:10 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Oct 14 05:02:10 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Oct 14 05:02:11 np0005486808 nova_compute[259627]: 2025-10-14 09:02:11.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:11 np0005486808 nova_compute[259627]: 2025-10-14 09:02:11.423 2 INFO nova.compute.manager [None req-bd9712f3-4522-41ab-9b35-19d5f9f70aa3 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Pausing#033[00m
Oct 14 05:02:11 np0005486808 nova_compute[259627]: 2025-10-14 09:02:11.424 2 DEBUG nova.objects.instance [None req-bd9712f3-4522-41ab-9b35-19d5f9f70aa3 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'flavor' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:02:11 np0005486808 nova_compute[259627]: 2025-10-14 09:02:11.448 2 DEBUG nova.compute.manager [None req-bd9712f3-4522-41ab-9b35-19d5f9f70aa3 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:02:11 np0005486808 nova_compute[259627]: 2025-10-14 09:02:11.450 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432531.448608, de383510-2de3-40bd-b479-c0010b3f2d1c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:02:11 np0005486808 nova_compute[259627]: 2025-10-14 09:02:11.450 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:02:11 np0005486808 nova_compute[259627]: 2025-10-14 09:02:11.477 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:02:11 np0005486808 nova_compute[259627]: 2025-10-14 09:02:11.481 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:02:11 np0005486808 nova_compute[259627]: 2025-10-14 09:02:11.526 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Oct 14 05:02:11 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Oct 14 05:02:11 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Oct 14 05:02:11 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Oct 14 05:02:11 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 305 active+clean; 123 MiB data, 476 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 7.3 KiB/s wr, 251 op/s
Oct 14 05:02:12 np0005486808 nova_compute[259627]: 2025-10-14 09:02:12.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:02:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Oct 14 05:02:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Oct 14 05:02:12 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Oct 14 05:02:13 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 123 MiB data, 476 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 7.3 KiB/s wr, 251 op/s
Oct 14 05:02:14 np0005486808 nova_compute[259627]: 2025-10-14 09:02:14.323 2 INFO nova.compute.manager [None req-23959e8a-65dc-4a61-9312-1e002c0bb5a1 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Unpausing#033[00m
Oct 14 05:02:14 np0005486808 nova_compute[259627]: 2025-10-14 09:02:14.324 2 DEBUG nova.objects.instance [None req-23959e8a-65dc-4a61-9312-1e002c0bb5a1 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'flavor' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:02:14 np0005486808 nova_compute[259627]: 2025-10-14 09:02:14.360 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432534.3604438, de383510-2de3-40bd-b479-c0010b3f2d1c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:02:14 np0005486808 nova_compute[259627]: 2025-10-14 09:02:14.361 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:02:14 np0005486808 virtqemud[259351]: argument unsupported: QEMU guest agent is not configured
Oct 14 05:02:14 np0005486808 nova_compute[259627]: 2025-10-14 09:02:14.368 2 DEBUG nova.virt.libvirt.guest [None req-23959e8a-65dc-4a61-9312-1e002c0bb5a1 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct 14 05:02:14 np0005486808 nova_compute[259627]: 2025-10-14 09:02:14.369 2 DEBUG nova.compute.manager [None req-23959e8a-65dc-4a61-9312-1e002c0bb5a1 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:02:14 np0005486808 nova_compute[259627]: 2025-10-14 09:02:14.380 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:02:14 np0005486808 nova_compute[259627]: 2025-10-14 09:02:14.384 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:02:14 np0005486808 nova_compute[259627]: 2025-10-14 09:02:14.409 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] During sync_power_state the instance has a pending task (unpausing). Skip.#033[00m
Oct 14 05:02:15 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1369: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 15 KiB/s wr, 389 op/s
Oct 14 05:02:15 np0005486808 podman[311354]: 2025-10-14 09:02:15.68617884 +0000 UTC m=+0.077461085 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 14 05:02:15 np0005486808 podman[311353]: 2025-10-14 09:02:15.730996582 +0000 UTC m=+0.129371172 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2)
Oct 14 05:02:15 np0005486808 nova_compute[259627]: 2025-10-14 09:02:15.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:02:16 np0005486808 nova_compute[259627]: 2025-10-14 09:02:16.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:17 np0005486808 nova_compute[259627]: 2025-10-14 09:02:17.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:02:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Oct 14 05:02:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Oct 14 05:02:17 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Oct 14 05:02:17 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1371: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 7.3 KiB/s wr, 137 op/s
Oct 14 05:02:17 np0005486808 nova_compute[259627]: 2025-10-14 09:02:17.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:18 np0005486808 nova_compute[259627]: 2025-10-14 09:02:18.739 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432523.7385423, 6a03ef41-3cc5-48d2-8796-369687ac6a10 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:02:18 np0005486808 nova_compute[259627]: 2025-10-14 09:02:18.741 2 INFO nova.compute.manager [-] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:02:18 np0005486808 nova_compute[259627]: 2025-10-14 09:02:18.775 2 DEBUG nova.compute.manager [None req-d33ac651-06f7-45f4-a734-3396fc03802f - - - - - -] [instance: 6a03ef41-3cc5-48d2-8796-369687ac6a10] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:02:18 np0005486808 nova_compute[259627]: 2025-10-14 09:02:18.974 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:02:19 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1372: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 5.5 KiB/s wr, 103 op/s
Oct 14 05:02:19 np0005486808 nova_compute[259627]: 2025-10-14 09:02:19.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:02:19 np0005486808 nova_compute[259627]: 2025-10-14 09:02:19.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:02:19 np0005486808 nova_compute[259627]: 2025-10-14 09:02:19.980 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:02:19 np0005486808 nova_compute[259627]: 2025-10-14 09:02:19.980 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:02:20 np0005486808 nova_compute[259627]: 2025-10-14 09:02:20.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:20 np0005486808 nova_compute[259627]: 2025-10-14 09:02:20.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:02:21 np0005486808 nova_compute[259627]: 2025-10-14 09:02:21.002 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:02:21 np0005486808 nova_compute[259627]: 2025-10-14 09:02:21.003 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:02:21 np0005486808 nova_compute[259627]: 2025-10-14 09:02:21.003 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:02:21 np0005486808 nova_compute[259627]: 2025-10-14 09:02:21.004 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:02:21 np0005486808 nova_compute[259627]: 2025-10-14 09:02:21.004 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:02:21 np0005486808 nova_compute[259627]: 2025-10-14 09:02:21.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:21 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:02:21 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1691515915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:02:21 np0005486808 nova_compute[259627]: 2025-10-14 09:02:21.460 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:02:21 np0005486808 nova_compute[259627]: 2025-10-14 09:02:21.545 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:02:21 np0005486808 nova_compute[259627]: 2025-10-14 09:02:21.546 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:02:21 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1373: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 117 KiB/s rd, 6.0 KiB/s wr, 98 op/s
Oct 14 05:02:21 np0005486808 nova_compute[259627]: 2025-10-14 09:02:21.679 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:02:21 np0005486808 nova_compute[259627]: 2025-10-14 09:02:21.680 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3961MB free_disk=59.942630767822266GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:02:21 np0005486808 nova_compute[259627]: 2025-10-14 09:02:21.680 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:02:21 np0005486808 nova_compute[259627]: 2025-10-14 09:02:21.681 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:02:21 np0005486808 nova_compute[259627]: 2025-10-14 09:02:21.761 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance de383510-2de3-40bd-b479-c0010b3f2d1c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:02:21 np0005486808 nova_compute[259627]: 2025-10-14 09:02:21.761 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:02:21 np0005486808 nova_compute[259627]: 2025-10-14 09:02:21.761 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:02:21 np0005486808 nova_compute[259627]: 2025-10-14 09:02:21.808 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:02:21 np0005486808 ovn_controller[152662]: 2025-10-14T09:02:21Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:be:e2:1b 10.100.0.10
Oct 14 05:02:22 np0005486808 nova_compute[259627]: 2025-10-14 09:02:22.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:02:22 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4036963260' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:02:22 np0005486808 nova_compute[259627]: 2025-10-14 09:02:22.233 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:02:22 np0005486808 nova_compute[259627]: 2025-10-14 09:02:22.240 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:02:22 np0005486808 nova_compute[259627]: 2025-10-14 09:02:22.267 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:02:22 np0005486808 nova_compute[259627]: 2025-10-14 09:02:22.305 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:02:22 np0005486808 nova_compute[259627]: 2025-10-14 09:02:22.306 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:02:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:02:22 np0005486808 nova_compute[259627]: 2025-10-14 09:02:22.666 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432527.6653194, 333926ec-cf24-467b-b9b1-d1fa70a4feb2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:02:22 np0005486808 nova_compute[259627]: 2025-10-14 09:02:22.666 2 INFO nova.compute.manager [-] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:02:22 np0005486808 nova_compute[259627]: 2025-10-14 09:02:22.735 2 DEBUG nova.compute.manager [None req-2e6cb24c-3a7a-46a8-a5a9-6dbd39f94e74 - - - - - -] [instance: 333926ec-cf24-467b-b9b1-d1fa70a4feb2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:02:23 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1374: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 5.4 KiB/s wr, 88 op/s
Oct 14 05:02:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:02:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:02:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:02:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:02:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:02:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:02:25 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 962703ad-148d-439c-84fa-162ca297d84c does not exist
Oct 14 05:02:25 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 59aaea5b-eae7-4285-8106-bec33bb28e18 does not exist
Oct 14 05:02:25 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev a815fb60-2811-4271-8913-bf9986ab71fa does not exist
Oct 14 05:02:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:02:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:02:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:02:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:02:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:02:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:02:25 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1375: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 640 KiB/s rd, 13 KiB/s wr, 53 op/s
Oct 14 05:02:25 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:02:25 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:02:25 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:02:25 np0005486808 podman[311715]: 2025-10-14 09:02:25.862383398 +0000 UTC m=+0.097845366 container create a81c63fe1725eade62319234cb666e854b9b05e314dcebd8d6218e1ec35a4fda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:02:25 np0005486808 podman[311715]: 2025-10-14 09:02:25.800460136 +0000 UTC m=+0.035922114 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:02:25 np0005486808 systemd[1]: Started libpod-conmon-a81c63fe1725eade62319234cb666e854b9b05e314dcebd8d6218e1ec35a4fda.scope.
Oct 14 05:02:25 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:02:25 np0005486808 podman[311715]: 2025-10-14 09:02:25.979372385 +0000 UTC m=+0.214834383 container init a81c63fe1725eade62319234cb666e854b9b05e314dcebd8d6218e1ec35a4fda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:02:25 np0005486808 podman[311715]: 2025-10-14 09:02:25.986970372 +0000 UTC m=+0.222432340 container start a81c63fe1725eade62319234cb666e854b9b05e314dcebd8d6218e1ec35a4fda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lumiere, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:02:25 np0005486808 brave_lumiere[311731]: 167 167
Oct 14 05:02:25 np0005486808 systemd[1]: libpod-a81c63fe1725eade62319234cb666e854b9b05e314dcebd8d6218e1ec35a4fda.scope: Deactivated successfully.
Oct 14 05:02:26 np0005486808 podman[311715]: 2025-10-14 09:02:26.004577125 +0000 UTC m=+0.240039143 container attach a81c63fe1725eade62319234cb666e854b9b05e314dcebd8d6218e1ec35a4fda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lumiere, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 14 05:02:26 np0005486808 podman[311715]: 2025-10-14 09:02:26.006139643 +0000 UTC m=+0.241601631 container died a81c63fe1725eade62319234cb666e854b9b05e314dcebd8d6218e1ec35a4fda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lumiere, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:02:26 np0005486808 systemd[1]: var-lib-containers-storage-overlay-2c97b4ef3ad9edd5c353ea609f574c31a2417d5416dc9ff18dd9eb1f53096e0b-merged.mount: Deactivated successfully.
Oct 14 05:02:26 np0005486808 podman[311715]: 2025-10-14 09:02:26.183900483 +0000 UTC m=+0.419362451 container remove a81c63fe1725eade62319234cb666e854b9b05e314dcebd8d6218e1ec35a4fda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lumiere, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:02:26 np0005486808 systemd[1]: libpod-conmon-a81c63fe1725eade62319234cb666e854b9b05e314dcebd8d6218e1ec35a4fda.scope: Deactivated successfully.
Oct 14 05:02:26 np0005486808 nova_compute[259627]: 2025-10-14 09:02:26.307 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:02:26 np0005486808 nova_compute[259627]: 2025-10-14 09:02:26.309 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:02:26 np0005486808 nova_compute[259627]: 2025-10-14 09:02:26.309 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 05:02:26 np0005486808 podman[311756]: 2025-10-14 09:02:26.374276844 +0000 UTC m=+0.050044711 container create 351ff6e4c2e99bff4a7f47e84b2a11667295d2ccd5558571b64dceda72e3a8a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 14 05:02:26 np0005486808 nova_compute[259627]: 2025-10-14 09:02:26.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:26 np0005486808 systemd[1]: Started libpod-conmon-351ff6e4c2e99bff4a7f47e84b2a11667295d2ccd5558571b64dceda72e3a8a1.scope.
Oct 14 05:02:26 np0005486808 podman[311756]: 2025-10-14 09:02:26.347194868 +0000 UTC m=+0.022962775 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:02:26 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:02:26 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9bb19c1046bcb9a765cc9baa4e85c3d187190f1dffdf75ba81578374adcac48/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:02:26 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9bb19c1046bcb9a765cc9baa4e85c3d187190f1dffdf75ba81578374adcac48/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:02:26 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9bb19c1046bcb9a765cc9baa4e85c3d187190f1dffdf75ba81578374adcac48/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:02:26 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9bb19c1046bcb9a765cc9baa4e85c3d187190f1dffdf75ba81578374adcac48/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:02:26 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9bb19c1046bcb9a765cc9baa4e85c3d187190f1dffdf75ba81578374adcac48/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:02:26 np0005486808 podman[311756]: 2025-10-14 09:02:26.478818995 +0000 UTC m=+0.154586882 container init 351ff6e4c2e99bff4a7f47e84b2a11667295d2ccd5558571b64dceda72e3a8a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shockley, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:02:26 np0005486808 podman[311756]: 2025-10-14 09:02:26.48515081 +0000 UTC m=+0.160918687 container start 351ff6e4c2e99bff4a7f47e84b2a11667295d2ccd5558571b64dceda72e3a8a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shockley, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 14 05:02:26 np0005486808 podman[311756]: 2025-10-14 09:02:26.496054808 +0000 UTC m=+0.171822725 container attach 351ff6e4c2e99bff4a7f47e84b2a11667295d2ccd5558571b64dceda72e3a8a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shockley, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 14 05:02:26 np0005486808 nova_compute[259627]: 2025-10-14 09:02:26.581 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:02:26 np0005486808 nova_compute[259627]: 2025-10-14 09:02:26.581 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:02:26 np0005486808 nova_compute[259627]: 2025-10-14 09:02:26.582 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 14 05:02:26 np0005486808 nova_compute[259627]: 2025-10-14 09:02:26.582 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:02:27 np0005486808 nova_compute[259627]: 2025-10-14 09:02:27.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:27 np0005486808 crazy_shockley[311773]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:02:27 np0005486808 crazy_shockley[311773]: --> relative data size: 1.0
Oct 14 05:02:27 np0005486808 crazy_shockley[311773]: --> All data devices are unavailable
Oct 14 05:02:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:02:27 np0005486808 systemd[1]: libpod-351ff6e4c2e99bff4a7f47e84b2a11667295d2ccd5558571b64dceda72e3a8a1.scope: Deactivated successfully.
Oct 14 05:02:27 np0005486808 systemd[1]: libpod-351ff6e4c2e99bff4a7f47e84b2a11667295d2ccd5558571b64dceda72e3a8a1.scope: Consumed 1.030s CPU time.
Oct 14 05:02:27 np0005486808 podman[311756]: 2025-10-14 09:02:27.583537455 +0000 UTC m=+1.259305372 container died 351ff6e4c2e99bff4a7f47e84b2a11667295d2ccd5558571b64dceda72e3a8a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shockley, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 14 05:02:27 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1376: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 636 KiB/s rd, 13 KiB/s wr, 52 op/s
Oct 14 05:02:27 np0005486808 systemd[1]: var-lib-containers-storage-overlay-c9bb19c1046bcb9a765cc9baa4e85c3d187190f1dffdf75ba81578374adcac48-merged.mount: Deactivated successfully.
Oct 14 05:02:27 np0005486808 podman[311756]: 2025-10-14 09:02:27.695282783 +0000 UTC m=+1.371050650 container remove 351ff6e4c2e99bff4a7f47e84b2a11667295d2ccd5558571b64dceda72e3a8a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shockley, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 14 05:02:27 np0005486808 systemd[1]: libpod-conmon-351ff6e4c2e99bff4a7f47e84b2a11667295d2ccd5558571b64dceda72e3a8a1.scope: Deactivated successfully.
Oct 14 05:02:28 np0005486808 podman[311956]: 2025-10-14 09:02:28.410398115 +0000 UTC m=+0.056916250 container create 3f3abee674a680782d2b3e623a2e9f879d558835fe85d1d7a515ff685bed1a5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_borg, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 14 05:02:28 np0005486808 systemd[1]: Started libpod-conmon-3f3abee674a680782d2b3e623a2e9f879d558835fe85d1d7a515ff685bed1a5f.scope.
Oct 14 05:02:28 np0005486808 podman[311956]: 2025-10-14 09:02:28.381957946 +0000 UTC m=+0.028476111 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:02:28 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:02:28 np0005486808 podman[311956]: 2025-10-14 09:02:28.501051914 +0000 UTC m=+0.147570039 container init 3f3abee674a680782d2b3e623a2e9f879d558835fe85d1d7a515ff685bed1a5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:02:28 np0005486808 podman[311956]: 2025-10-14 09:02:28.508975559 +0000 UTC m=+0.155493674 container start 3f3abee674a680782d2b3e623a2e9f879d558835fe85d1d7a515ff685bed1a5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 14 05:02:28 np0005486808 focused_borg[311973]: 167 167
Oct 14 05:02:28 np0005486808 systemd[1]: libpod-3f3abee674a680782d2b3e623a2e9f879d558835fe85d1d7a515ff685bed1a5f.scope: Deactivated successfully.
Oct 14 05:02:28 np0005486808 podman[311956]: 2025-10-14 09:02:28.520863491 +0000 UTC m=+0.167381636 container attach 3f3abee674a680782d2b3e623a2e9f879d558835fe85d1d7a515ff685bed1a5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 14 05:02:28 np0005486808 podman[311956]: 2025-10-14 09:02:28.52120928 +0000 UTC m=+0.167727395 container died 3f3abee674a680782d2b3e623a2e9f879d558835fe85d1d7a515ff685bed1a5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_borg, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 14 05:02:28 np0005486808 nova_compute[259627]: 2025-10-14 09:02:28.531 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Updating instance_info_cache with network_info: [{"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:02:28 np0005486808 nova_compute[259627]: 2025-10-14 09:02:28.550 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:02:28 np0005486808 nova_compute[259627]: 2025-10-14 09:02:28.551 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 14 05:02:28 np0005486808 nova_compute[259627]: 2025-10-14 09:02:28.552 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:02:28 np0005486808 nova_compute[259627]: 2025-10-14 09:02:28.552 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:02:28 np0005486808 systemd[1]: var-lib-containers-storage-overlay-ebf3bc561a275aafede5d37eee356d629fa0679de8415fc19cf02944ff6b89fb-merged.mount: Deactivated successfully.
Oct 14 05:02:28 np0005486808 podman[311956]: 2025-10-14 09:02:28.640264177 +0000 UTC m=+0.286782332 container remove 3f3abee674a680782d2b3e623a2e9f879d558835fe85d1d7a515ff685bed1a5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 14 05:02:28 np0005486808 systemd[1]: libpod-conmon-3f3abee674a680782d2b3e623a2e9f879d558835fe85d1d7a515ff685bed1a5f.scope: Deactivated successfully.
Oct 14 05:02:28 np0005486808 podman[311997]: 2025-10-14 09:02:28.831309124 +0000 UTC m=+0.052962583 container create 8de6a954bc9214eddc7d05a273fa97588fdb7d0e6875685dc97036faf93d9046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:02:28 np0005486808 systemd[1]: Started libpod-conmon-8de6a954bc9214eddc7d05a273fa97588fdb7d0e6875685dc97036faf93d9046.scope.
Oct 14 05:02:28 np0005486808 podman[311997]: 2025-10-14 09:02:28.805224593 +0000 UTC m=+0.026878102 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:02:28 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:02:28 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c101f4539c5d576d3707776cb5d149273b66f8b7a6dc6023ce06227012df4bed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:02:28 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c101f4539c5d576d3707776cb5d149273b66f8b7a6dc6023ce06227012df4bed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:02:28 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c101f4539c5d576d3707776cb5d149273b66f8b7a6dc6023ce06227012df4bed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:02:28 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c101f4539c5d576d3707776cb5d149273b66f8b7a6dc6023ce06227012df4bed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:02:28 np0005486808 podman[311997]: 2025-10-14 09:02:28.937841863 +0000 UTC m=+0.159495402 container init 8de6a954bc9214eddc7d05a273fa97588fdb7d0e6875685dc97036faf93d9046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:02:28 np0005486808 podman[311997]: 2025-10-14 09:02:28.952294019 +0000 UTC m=+0.173947508 container start 8de6a954bc9214eddc7d05a273fa97588fdb7d0e6875685dc97036faf93d9046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_faraday, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:02:28 np0005486808 podman[311997]: 2025-10-14 09:02:28.975085429 +0000 UTC m=+0.196738928 container attach 8de6a954bc9214eddc7d05a273fa97588fdb7d0e6875685dc97036faf93d9046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_faraday, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:02:29 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1377: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 533 KiB/s rd, 11 KiB/s wr, 44 op/s
Oct 14 05:02:29 np0005486808 clever_faraday[312013]: {
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:    "0": [
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:        {
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:            "devices": [
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:                "/dev/loop3"
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:            ],
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:            "lv_name": "ceph_lv0",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:            "lv_size": "21470642176",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:            "name": "ceph_lv0",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:            "tags": {
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:                "ceph.cluster_name": "ceph",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:                "ceph.crush_device_class": "",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:                "ceph.encrypted": "0",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:                "ceph.osd_id": "0",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:                "ceph.type": "block",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:                "ceph.vdo": "0"
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:            },
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:            "type": "block",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:            "vg_name": "ceph_vg0"
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:        }
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:    ],
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:    "1": [
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:        {
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:            "devices": [
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:                "/dev/loop4"
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:            ],
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:            "lv_name": "ceph_lv1",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:            "lv_size": "21470642176",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:            "name": "ceph_lv1",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:            "tags": {
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:                "ceph.cluster_name": "ceph",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:                "ceph.crush_device_class": "",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:                "ceph.encrypted": "0",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:                "ceph.osd_id": "1",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:                "ceph.type": "block",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:                "ceph.vdo": "0"
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:            },
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:            "type": "block",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:            "vg_name": "ceph_vg1"
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:        }
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:    ],
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:    "2": [
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:        {
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:            "devices": [
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:                "/dev/loop5"
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:            ],
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:            "lv_name": "ceph_lv2",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:            "lv_size": "21470642176",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:            "name": "ceph_lv2",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:            "tags": {
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:                "ceph.cluster_name": "ceph",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:                "ceph.crush_device_class": "",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:                "ceph.encrypted": "0",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:                "ceph.osd_id": "2",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:                "ceph.type": "block",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:                "ceph.vdo": "0"
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:            },
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:            "type": "block",
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:            "vg_name": "ceph_vg2"
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:        }
Oct 14 05:02:29 np0005486808 clever_faraday[312013]:    ]
Oct 14 05:02:29 np0005486808 clever_faraday[312013]: }
Oct 14 05:02:29 np0005486808 systemd[1]: libpod-8de6a954bc9214eddc7d05a273fa97588fdb7d0e6875685dc97036faf93d9046.scope: Deactivated successfully.
Oct 14 05:02:29 np0005486808 podman[311997]: 2025-10-14 09:02:29.848960365 +0000 UTC m=+1.070613834 container died 8de6a954bc9214eddc7d05a273fa97588fdb7d0e6875685dc97036faf93d9046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_faraday, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 14 05:02:29 np0005486808 systemd[1]: var-lib-containers-storage-overlay-c101f4539c5d576d3707776cb5d149273b66f8b7a6dc6023ce06227012df4bed-merged.mount: Deactivated successfully.
Oct 14 05:02:29 np0005486808 podman[311997]: 2025-10-14 09:02:29.929191997 +0000 UTC m=+1.150845496 container remove 8de6a954bc9214eddc7d05a273fa97588fdb7d0e6875685dc97036faf93d9046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 14 05:02:29 np0005486808 systemd[1]: libpod-conmon-8de6a954bc9214eddc7d05a273fa97588fdb7d0e6875685dc97036faf93d9046.scope: Deactivated successfully.
Oct 14 05:02:30 np0005486808 podman[312175]: 2025-10-14 09:02:30.599228442 +0000 UTC m=+0.046970706 container create c7d5ea9e01bd71e7957423c201e5a0f52d688ae8123cf2153c4cb63041e7f684 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 14 05:02:30 np0005486808 systemd[1]: Started libpod-conmon-c7d5ea9e01bd71e7957423c201e5a0f52d688ae8123cf2153c4cb63041e7f684.scope.
Oct 14 05:02:30 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:02:30 np0005486808 podman[312175]: 2025-10-14 09:02:30.57680851 +0000 UTC m=+0.024550754 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:02:30 np0005486808 podman[312175]: 2025-10-14 09:02:30.691819458 +0000 UTC m=+0.139561702 container init c7d5ea9e01bd71e7957423c201e5a0f52d688ae8123cf2153c4cb63041e7f684 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 14 05:02:30 np0005486808 podman[312175]: 2025-10-14 09:02:30.703965067 +0000 UTC m=+0.151707301 container start c7d5ea9e01bd71e7957423c201e5a0f52d688ae8123cf2153c4cb63041e7f684 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 14 05:02:30 np0005486808 podman[312175]: 2025-10-14 09:02:30.707700238 +0000 UTC m=+0.155442502 container attach c7d5ea9e01bd71e7957423c201e5a0f52d688ae8123cf2153c4cb63041e7f684 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 14 05:02:30 np0005486808 busy_maxwell[312191]: 167 167
Oct 14 05:02:30 np0005486808 systemd[1]: libpod-c7d5ea9e01bd71e7957423c201e5a0f52d688ae8123cf2153c4cb63041e7f684.scope: Deactivated successfully.
Oct 14 05:02:30 np0005486808 conmon[312191]: conmon c7d5ea9e01bd71e79574 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c7d5ea9e01bd71e7957423c201e5a0f52d688ae8123cf2153c4cb63041e7f684.scope/container/memory.events
Oct 14 05:02:30 np0005486808 podman[312175]: 2025-10-14 09:02:30.713485661 +0000 UTC m=+0.161227955 container died c7d5ea9e01bd71e7957423c201e5a0f52d688ae8123cf2153c4cb63041e7f684 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Oct 14 05:02:30 np0005486808 systemd[1]: var-lib-containers-storage-overlay-90ec6ca40dcc481486dc66b368804c0bd1bd2a227b0a8d1c42c0a91794cdb63b-merged.mount: Deactivated successfully.
Oct 14 05:02:30 np0005486808 podman[312175]: 2025-10-14 09:02:30.783361979 +0000 UTC m=+0.231104233 container remove c7d5ea9e01bd71e7957423c201e5a0f52d688ae8123cf2153c4cb63041e7f684 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:02:30 np0005486808 systemd[1]: libpod-conmon-c7d5ea9e01bd71e7957423c201e5a0f52d688ae8123cf2153c4cb63041e7f684.scope: Deactivated successfully.
Oct 14 05:02:30 np0005486808 podman[312216]: 2025-10-14 09:02:30.988159924 +0000 UTC m=+0.038416445 container create 6913a6f396f1cc07d80e98a9d6a3a2e421ec316da3970923d83b3ccdb61630ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hoover, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:02:31 np0005486808 systemd[1]: Started libpod-conmon-6913a6f396f1cc07d80e98a9d6a3a2e421ec316da3970923d83b3ccdb61630ca.scope.
Oct 14 05:02:31 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:02:31 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a27fd5e43c38c3b91e10868ec246bc444375bfd3f0799ba58001ad1136bc1ef7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:02:31 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a27fd5e43c38c3b91e10868ec246bc444375bfd3f0799ba58001ad1136bc1ef7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:02:31 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a27fd5e43c38c3b91e10868ec246bc444375bfd3f0799ba58001ad1136bc1ef7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:02:31 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a27fd5e43c38c3b91e10868ec246bc444375bfd3f0799ba58001ad1136bc1ef7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:02:31 np0005486808 podman[312216]: 2025-10-14 09:02:30.970191732 +0000 UTC m=+0.020448243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:02:31 np0005486808 podman[312216]: 2025-10-14 09:02:31.078458473 +0000 UTC m=+0.128714984 container init 6913a6f396f1cc07d80e98a9d6a3a2e421ec316da3970923d83b3ccdb61630ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hoover, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 14 05:02:31 np0005486808 podman[312216]: 2025-10-14 09:02:31.086430799 +0000 UTC m=+0.136687290 container start 6913a6f396f1cc07d80e98a9d6a3a2e421ec316da3970923d83b3ccdb61630ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 05:02:31 np0005486808 podman[312216]: 2025-10-14 09:02:31.089993217 +0000 UTC m=+0.140249738 container attach 6913a6f396f1cc07d80e98a9d6a3a2e421ec316da3970923d83b3ccdb61630ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:02:31 np0005486808 nova_compute[259627]: 2025-10-14 09:02:31.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:31 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1378: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 533 KiB/s rd, 21 KiB/s wr, 44 op/s
Oct 14 05:02:32 np0005486808 amazing_hoover[312232]: {
Oct 14 05:02:32 np0005486808 amazing_hoover[312232]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:02:32 np0005486808 amazing_hoover[312232]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:02:32 np0005486808 amazing_hoover[312232]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:02:32 np0005486808 amazing_hoover[312232]:        "osd_id": 2,
Oct 14 05:02:32 np0005486808 amazing_hoover[312232]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:02:32 np0005486808 amazing_hoover[312232]:        "type": "bluestore"
Oct 14 05:02:32 np0005486808 amazing_hoover[312232]:    },
Oct 14 05:02:32 np0005486808 amazing_hoover[312232]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:02:32 np0005486808 amazing_hoover[312232]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:02:32 np0005486808 amazing_hoover[312232]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:02:32 np0005486808 amazing_hoover[312232]:        "osd_id": 1,
Oct 14 05:02:32 np0005486808 amazing_hoover[312232]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:02:32 np0005486808 amazing_hoover[312232]:        "type": "bluestore"
Oct 14 05:02:32 np0005486808 amazing_hoover[312232]:    },
Oct 14 05:02:32 np0005486808 amazing_hoover[312232]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:02:32 np0005486808 amazing_hoover[312232]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:02:32 np0005486808 amazing_hoover[312232]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:02:32 np0005486808 amazing_hoover[312232]:        "osd_id": 0,
Oct 14 05:02:32 np0005486808 amazing_hoover[312232]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:02:32 np0005486808 amazing_hoover[312232]:        "type": "bluestore"
Oct 14 05:02:32 np0005486808 amazing_hoover[312232]:    }
Oct 14 05:02:32 np0005486808 amazing_hoover[312232]: }
Oct 14 05:02:32 np0005486808 nova_compute[259627]: 2025-10-14 09:02:32.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:32 np0005486808 systemd[1]: libpod-6913a6f396f1cc07d80e98a9d6a3a2e421ec316da3970923d83b3ccdb61630ca.scope: Deactivated successfully.
Oct 14 05:02:32 np0005486808 conmon[312232]: conmon 6913a6f396f1cc07d80e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6913a6f396f1cc07d80e98a9d6a3a2e421ec316da3970923d83b3ccdb61630ca.scope/container/memory.events
Oct 14 05:02:32 np0005486808 systemd[1]: libpod-6913a6f396f1cc07d80e98a9d6a3a2e421ec316da3970923d83b3ccdb61630ca.scope: Consumed 1.052s CPU time.
Oct 14 05:02:32 np0005486808 podman[312216]: 2025-10-14 09:02:32.136669971 +0000 UTC m=+1.186926472 container died 6913a6f396f1cc07d80e98a9d6a3a2e421ec316da3970923d83b3ccdb61630ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:02:32 np0005486808 systemd[1]: var-lib-containers-storage-overlay-a27fd5e43c38c3b91e10868ec246bc444375bfd3f0799ba58001ad1136bc1ef7-merged.mount: Deactivated successfully.
Oct 14 05:02:32 np0005486808 podman[312216]: 2025-10-14 09:02:32.278133669 +0000 UTC m=+1.328390160 container remove 6913a6f396f1cc07d80e98a9d6a3a2e421ec316da3970923d83b3ccdb61630ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hoover, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:02:32 np0005486808 systemd[1]: libpod-conmon-6913a6f396f1cc07d80e98a9d6a3a2e421ec316da3970923d83b3ccdb61630ca.scope: Deactivated successfully.
Oct 14 05:02:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:02:32 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:02:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:02:32 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:02:32 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 473321c2-10e0-4ace-ac05-3b901317db64 does not exist
Oct 14 05:02:32 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev af3cac45-41d7-4a50-884c-0174968db107 does not exist
Oct 14 05:02:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:02:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:02:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:02:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:02:32
Oct 14 05:02:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:02:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:02:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['vms', '.rgw.root', 'images', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta']
Oct 14 05:02:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:02:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:02:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:02:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:02:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:02:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:02:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:02:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:02:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:02:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:02:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:02:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:02:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:02:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:02:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:02:33 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:02:33 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:02:33 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1379: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 496 KiB/s rd, 20 KiB/s wr, 40 op/s
Oct 14 05:02:33 np0005486808 ovn_controller[152662]: 2025-10-14T09:02:33Z|00423|binding|INFO|Releasing lport 0f8f2393-a280-4a6e-ac22-da67bf8d74da from this chassis (sb_readonly=0)
Oct 14 05:02:33 np0005486808 nova_compute[259627]: 2025-10-14 09:02:33.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:34 np0005486808 nova_compute[259627]: 2025-10-14 09:02:34.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:34 np0005486808 podman[312329]: 2025-10-14 09:02:34.655309186 +0000 UTC m=+0.057525706 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=iscsid, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 14 05:02:34 np0005486808 podman[312328]: 2025-10-14 09:02:34.677223025 +0000 UTC m=+0.081813553 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS)
Oct 14 05:02:35 np0005486808 nova_compute[259627]: 2025-10-14 09:02:35.112 2 DEBUG oslo_concurrency.lockutils [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:02:35 np0005486808 nova_compute[259627]: 2025-10-14 09:02:35.112 2 DEBUG oslo_concurrency.lockutils [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:02:35 np0005486808 nova_compute[259627]: 2025-10-14 09:02:35.113 2 INFO nova.compute.manager [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Rebooting instance#033[00m
Oct 14 05:02:35 np0005486808 nova_compute[259627]: 2025-10-14 09:02:35.139 2 DEBUG oslo_concurrency.lockutils [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:02:35 np0005486808 nova_compute[259627]: 2025-10-14 09:02:35.140 2 DEBUG oslo_concurrency.lockutils [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquired lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:02:35 np0005486808 nova_compute[259627]: 2025-10-14 09:02:35.140 2 DEBUG nova.network.neutron [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:02:35 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1380: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 498 KiB/s rd, 34 KiB/s wr, 42 op/s
Oct 14 05:02:36 np0005486808 nova_compute[259627]: 2025-10-14 09:02:36.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:37 np0005486808 nova_compute[259627]: 2025-10-14 09:02:37.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:37 np0005486808 nova_compute[259627]: 2025-10-14 09:02:37.155 2 DEBUG nova.network.neutron [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Updating instance_info_cache with network_info: [{"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:02:37 np0005486808 nova_compute[259627]: 2025-10-14 09:02:37.182 2 DEBUG oslo_concurrency.lockutils [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Releasing lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:02:37 np0005486808 nova_compute[259627]: 2025-10-14 09:02:37.184 2 DEBUG nova.compute.manager [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:02:37 np0005486808 nova_compute[259627]: 2025-10-14 09:02:37.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:37 np0005486808 kernel: tap8ec905f0-b7 (unregistering): left promiscuous mode
Oct 14 05:02:37 np0005486808 NetworkManager[44885]: <info>  [1760432557.3901] device (tap8ec905f0-b7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:02:37 np0005486808 ovn_controller[152662]: 2025-10-14T09:02:37Z|00424|binding|INFO|Releasing lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 from this chassis (sb_readonly=0)
Oct 14 05:02:37 np0005486808 nova_compute[259627]: 2025-10-14 09:02:37.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:37 np0005486808 ovn_controller[152662]: 2025-10-14T09:02:37Z|00425|binding|INFO|Setting lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 down in Southbound
Oct 14 05:02:37 np0005486808 ovn_controller[152662]: 2025-10-14T09:02:37Z|00426|binding|INFO|Removing iface tap8ec905f0-b7 ovn-installed in OVS
Oct 14 05:02:37 np0005486808 nova_compute[259627]: 2025-10-14 09:02:37.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:37.455 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:e2:1b 10.100.0.10'], port_security=['fa:16:3e:be:e2:1b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'de383510-2de3-40bd-b479-c0010b3f2d1c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c6eb8a4-6604-462a-8730-43f3742053a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '368d762ed02e459d892ad1e5488c2871', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'b04f9b87-06e3-4b99-859f-b095869a50f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.247', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d063443f-d68f-4763-a6fc-fc1631e3cde1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:02:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:37.457 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 in datapath 7c6eb8a4-6604-462a-8730-43f3742053a7 unbound from our chassis#033[00m
Oct 14 05:02:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:37.458 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7c6eb8a4-6604-462a-8730-43f3742053a7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:02:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:37.459 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8f107c14-0a88-4923-8364-7812c3d0c22e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:37.460 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 namespace which is not needed anymore#033[00m
Oct 14 05:02:37 np0005486808 nova_compute[259627]: 2025-10-14 09:02:37.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:37 np0005486808 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d0000002b.scope: Deactivated successfully.
Oct 14 05:02:37 np0005486808 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d0000002b.scope: Consumed 12.884s CPU time.
Oct 14 05:02:37 np0005486808 systemd-machined[214636]: Machine qemu-58-instance-0000002b terminated.
Oct 14 05:02:37 np0005486808 nova_compute[259627]: 2025-10-14 09:02:37.555 2 INFO nova.virt.libvirt.driver [-] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Instance destroyed successfully.#033[00m
Oct 14 05:02:37 np0005486808 nova_compute[259627]: 2025-10-14 09:02:37.556 2 DEBUG nova.objects.instance [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'resources' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:02:37 np0005486808 nova_compute[259627]: 2025-10-14 09:02:37.570 2 DEBUG nova.virt.libvirt.vif [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1794713901',display_name='tempest-ServerActionsTestJSON-server-1794713901',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1794713901',id=43,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLYFNlP3hTYeUpT8nc14lRftSHASrG8vP9AGnXtKYdUjMqVtIeIad+zh+sJMfad25mvb2X38sw9juFrm0nqlru2kjBIYbt6/N3JxbHEKwTt4Dj077tGdx4oZ3+9/NHyjkQ==',key_name='tempest-keypair-1333367407',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:01:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='368d762ed02e459d892ad1e5488c2871',ramdisk_id='',reservation_id='r-cj1cuo6c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1593617559',owner_user_name='tempest-ServerActionsTestJSON-1593617559-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:02:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aa32af91355a41198fd57121e5c70ec2',uuid=de383510-2de3-40bd-b479-c0010b3f2d1c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:02:37 np0005486808 nova_compute[259627]: 2025-10-14 09:02:37.570 2 DEBUG nova.network.os_vif_util [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converting VIF {"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:02:37 np0005486808 nova_compute[259627]: 2025-10-14 09:02:37.571 2 DEBUG nova.network.os_vif_util [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:02:37 np0005486808 nova_compute[259627]: 2025-10-14 09:02:37.571 2 DEBUG os_vif [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:02:37 np0005486808 nova_compute[259627]: 2025-10-14 09:02:37.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:37 np0005486808 nova_compute[259627]: 2025-10-14 09:02:37.573 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8ec905f0-b7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:02:37 np0005486808 nova_compute[259627]: 2025-10-14 09:02:37.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:37 np0005486808 nova_compute[259627]: 2025-10-14 09:02:37.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:37 np0005486808 nova_compute[259627]: 2025-10-14 09:02:37.577 2 INFO os_vif [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7')#033[00m
Oct 14 05:02:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:02:37 np0005486808 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[311293]: [NOTICE]   (311297) : haproxy version is 2.8.14-c23fe91
Oct 14 05:02:37 np0005486808 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[311293]: [NOTICE]   (311297) : path to executable is /usr/sbin/haproxy
Oct 14 05:02:37 np0005486808 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[311293]: [WARNING]  (311297) : Exiting Master process...
Oct 14 05:02:37 np0005486808 nova_compute[259627]: 2025-10-14 09:02:37.583 2 DEBUG nova.virt.libvirt.driver [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Start _get_guest_xml network_info=[{"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:02:37 np0005486808 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[311293]: [ALERT]    (311297) : Current worker (311299) exited with code 143 (Terminated)
Oct 14 05:02:37 np0005486808 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[311293]: [WARNING]  (311297) : All workers exited. Exiting... (0)
Oct 14 05:02:37 np0005486808 systemd[1]: libpod-ab32ae27bddbf58ecddb1f3f251f2fd69a9fb7fa105056c4865076a9132477e6.scope: Deactivated successfully.
Oct 14 05:02:37 np0005486808 nova_compute[259627]: 2025-10-14 09:02:37.588 2 WARNING nova.virt.libvirt.driver [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:02:37 np0005486808 podman[312394]: 2025-10-14 09:02:37.593481116 +0000 UTC m=+0.046608137 container died ab32ae27bddbf58ecddb1f3f251f2fd69a9fb7fa105056c4865076a9132477e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 05:02:37 np0005486808 nova_compute[259627]: 2025-10-14 09:02:37.594 2 DEBUG nova.virt.libvirt.host [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:02:37 np0005486808 nova_compute[259627]: 2025-10-14 09:02:37.595 2 DEBUG nova.virt.libvirt.host [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:02:37 np0005486808 nova_compute[259627]: 2025-10-14 09:02:37.599 2 DEBUG nova.virt.libvirt.host [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:02:37 np0005486808 nova_compute[259627]: 2025-10-14 09:02:37.599 2 DEBUG nova.virt.libvirt.host [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:02:37 np0005486808 nova_compute[259627]: 2025-10-14 09:02:37.599 2 DEBUG nova.virt.libvirt.driver [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:02:37 np0005486808 nova_compute[259627]: 2025-10-14 09:02:37.600 2 DEBUG nova.virt.hardware [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:02:37 np0005486808 nova_compute[259627]: 2025-10-14 09:02:37.600 2 DEBUG nova.virt.hardware [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:02:37 np0005486808 nova_compute[259627]: 2025-10-14 09:02:37.600 2 DEBUG nova.virt.hardware [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:02:37 np0005486808 nova_compute[259627]: 2025-10-14 09:02:37.601 2 DEBUG nova.virt.hardware [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:02:37 np0005486808 nova_compute[259627]: 2025-10-14 09:02:37.601 2 DEBUG nova.virt.hardware [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:02:37 np0005486808 nova_compute[259627]: 2025-10-14 09:02:37.601 2 DEBUG nova.virt.hardware [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:02:37 np0005486808 nova_compute[259627]: 2025-10-14 09:02:37.601 2 DEBUG nova.virt.hardware [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:02:37 np0005486808 nova_compute[259627]: 2025-10-14 09:02:37.602 2 DEBUG nova.virt.hardware [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:02:37 np0005486808 nova_compute[259627]: 2025-10-14 09:02:37.602 2 DEBUG nova.virt.hardware [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:02:37 np0005486808 nova_compute[259627]: 2025-10-14 09:02:37.602 2 DEBUG nova.virt.hardware [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:02:37 np0005486808 nova_compute[259627]: 2025-10-14 09:02:37.603 2 DEBUG nova.virt.hardware [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:02:37 np0005486808 nova_compute[259627]: 2025-10-14 09:02:37.603 2 DEBUG nova.objects.instance [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'vcpu_model' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:02:37 np0005486808 nova_compute[259627]: 2025-10-14 09:02:37.619 2 DEBUG oslo_concurrency.processutils [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:02:37 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ab32ae27bddbf58ecddb1f3f251f2fd69a9fb7fa105056c4865076a9132477e6-userdata-shm.mount: Deactivated successfully.
Oct 14 05:02:37 np0005486808 systemd[1]: var-lib-containers-storage-overlay-87bfab2c1e353f2bf941f3b9e1e0de39f9c4d7dd1ebe63d54aaa2d4f59edff49-merged.mount: Deactivated successfully.
Oct 14 05:02:37 np0005486808 podman[312394]: 2025-10-14 09:02:37.640671657 +0000 UTC m=+0.093798668 container cleanup ab32ae27bddbf58ecddb1f3f251f2fd69a9fb7fa105056c4865076a9132477e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 05:02:37 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1381: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 24 KiB/s wr, 3 op/s
Oct 14 05:02:37 np0005486808 systemd[1]: libpod-conmon-ab32ae27bddbf58ecddb1f3f251f2fd69a9fb7fa105056c4865076a9132477e6.scope: Deactivated successfully.
Oct 14 05:02:37 np0005486808 podman[312435]: 2025-10-14 09:02:37.704132917 +0000 UTC m=+0.042566608 container remove ab32ae27bddbf58ecddb1f3f251f2fd69a9fb7fa105056c4865076a9132477e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2)
Oct 14 05:02:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:37.712 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[77074e26-c1ec-4856-a42a-245b929ac869]: (4, ('Tue Oct 14 09:02:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 (ab32ae27bddbf58ecddb1f3f251f2fd69a9fb7fa105056c4865076a9132477e6)\nab32ae27bddbf58ecddb1f3f251f2fd69a9fb7fa105056c4865076a9132477e6\nTue Oct 14 09:02:37 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 (ab32ae27bddbf58ecddb1f3f251f2fd69a9fb7fa105056c4865076a9132477e6)\nab32ae27bddbf58ecddb1f3f251f2fd69a9fb7fa105056c4865076a9132477e6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:37.714 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[700a38ec-b656-45ca-b334-ad990b3f7468]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:37.714 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c6eb8a4-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:02:37 np0005486808 nova_compute[259627]: 2025-10-14 09:02:37.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:37 np0005486808 kernel: tap7c6eb8a4-60: left promiscuous mode
Oct 14 05:02:37 np0005486808 nova_compute[259627]: 2025-10-14 09:02:37.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:37.744 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[616ed5bb-37dd-4b3a-84be-dedb66bbb965]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:37.765 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cb8a3794-c370-4a30-83fe-d05afb5cd381]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:37.767 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9c00199a-ece1-4af9-a1cb-58482db23cd7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:37.783 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f8626e6f-2bb0-4227-aa1c-e88eb03fdcec]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630534, 'reachable_time': 22969, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312469, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:37 np0005486808 systemd[1]: run-netns-ovnmeta\x2d7c6eb8a4\x2d6604\x2d462a\x2d8730\x2d43f3742053a7.mount: Deactivated successfully.
Oct 14 05:02:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:37.786 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:02:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:37.786 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[2d8e784a-51fb-4a7a-9b62-42ee63deffda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:02:38 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1125458685' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:02:38 np0005486808 nova_compute[259627]: 2025-10-14 09:02:38.079 2 DEBUG oslo_concurrency.processutils [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:02:38 np0005486808 nova_compute[259627]: 2025-10-14 09:02:38.112 2 DEBUG oslo_concurrency.processutils [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:02:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:02:38 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/975625554' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:02:38 np0005486808 nova_compute[259627]: 2025-10-14 09:02:38.623 2 DEBUG oslo_concurrency.processutils [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:02:38 np0005486808 nova_compute[259627]: 2025-10-14 09:02:38.625 2 DEBUG nova.virt.libvirt.vif [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1794713901',display_name='tempest-ServerActionsTestJSON-server-1794713901',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1794713901',id=43,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLYFNlP3hTYeUpT8nc14lRftSHASrG8vP9AGnXtKYdUjMqVtIeIad+zh+sJMfad25mvb2X38sw9juFrm0nqlru2kjBIYbt6/N3JxbHEKwTt4Dj077tGdx4oZ3+9/NHyjkQ==',key_name='tempest-keypair-1333367407',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:01:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='368d762ed02e459d892ad1e5488c2871',ramdisk_id='',reservation_id='r-cj1cuo6c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1593617559',owner_user_name='tempest-ServerActionsTestJSON-1593617559-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:02:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aa32af91355a41198fd57121e5c70ec2',uuid=de383510-2de3-40bd-b479-c0010b3f2d1c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:02:38 np0005486808 nova_compute[259627]: 2025-10-14 09:02:38.626 2 DEBUG nova.network.os_vif_util [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converting VIF {"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:02:38 np0005486808 nova_compute[259627]: 2025-10-14 09:02:38.627 2 DEBUG nova.network.os_vif_util [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:02:38 np0005486808 nova_compute[259627]: 2025-10-14 09:02:38.629 2 DEBUG nova.objects.instance [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'pci_devices' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:02:38 np0005486808 nova_compute[259627]: 2025-10-14 09:02:38.647 2 DEBUG nova.virt.libvirt.driver [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:02:38 np0005486808 nova_compute[259627]:  <uuid>de383510-2de3-40bd-b479-c0010b3f2d1c</uuid>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:  <name>instance-0000002b</name>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:02:38 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:      <nova:name>tempest-ServerActionsTestJSON-server-1794713901</nova:name>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:02:37</nova:creationTime>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:02:38 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:        <nova:user uuid="aa32af91355a41198fd57121e5c70ec2">tempest-ServerActionsTestJSON-1593617559-project-member</nova:user>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:        <nova:project uuid="368d762ed02e459d892ad1e5488c2871">tempest-ServerActionsTestJSON-1593617559</nova:project>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:        <nova:port uuid="8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2">
Oct 14 05:02:38 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:      <entry name="serial">de383510-2de3-40bd-b479-c0010b3f2d1c</entry>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:      <entry name="uuid">de383510-2de3-40bd-b479-c0010b3f2d1c</entry>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:02:38 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/de383510-2de3-40bd-b479-c0010b3f2d1c_disk">
Oct 14 05:02:38 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:02:38 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:02:38 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/de383510-2de3-40bd-b479-c0010b3f2d1c_disk.config">
Oct 14 05:02:38 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:02:38 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:02:38 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:be:e2:1b"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:      <target dev="tap8ec905f0-b7"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:02:38 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/de383510-2de3-40bd-b479-c0010b3f2d1c/console.log" append="off"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <input type="keyboard" bus="usb"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:02:38 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:02:38 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:02:38 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:02:38 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:02:38 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:02:38 np0005486808 nova_compute[259627]: 2025-10-14 09:02:38.650 2 DEBUG nova.virt.libvirt.driver [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:02:38 np0005486808 nova_compute[259627]: 2025-10-14 09:02:38.651 2 DEBUG nova.virt.libvirt.driver [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:02:38 np0005486808 nova_compute[259627]: 2025-10-14 09:02:38.653 2 DEBUG nova.virt.libvirt.vif [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1794713901',display_name='tempest-ServerActionsTestJSON-server-1794713901',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1794713901',id=43,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLYFNlP3hTYeUpT8nc14lRftSHASrG8vP9AGnXtKYdUjMqVtIeIad+zh+sJMfad25mvb2X38sw9juFrm0nqlru2kjBIYbt6/N3JxbHEKwTt4Dj077tGdx4oZ3+9/NHyjkQ==',key_name='tempest-keypair-1333367407',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:01:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='368d762ed02e459d892ad1e5488c2871',ramdisk_id='',reservation_id='r-cj1cuo6c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1593617559',owner_user_name='tempest-ServerActionsTestJSON-1593617559-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:02:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aa32af91355a41198fd57121e5c70ec2',uuid=de383510-2de3-40bd-b479-c0010b3f2d1c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:02:38 np0005486808 nova_compute[259627]: 2025-10-14 09:02:38.654 2 DEBUG nova.network.os_vif_util [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converting VIF {"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:02:38 np0005486808 nova_compute[259627]: 2025-10-14 09:02:38.655 2 DEBUG nova.network.os_vif_util [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:02:38 np0005486808 nova_compute[259627]: 2025-10-14 09:02:38.655 2 DEBUG os_vif [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:02:38 np0005486808 nova_compute[259627]: 2025-10-14 09:02:38.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:38 np0005486808 nova_compute[259627]: 2025-10-14 09:02:38.657 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:02:38 np0005486808 nova_compute[259627]: 2025-10-14 09:02:38.658 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:02:38 np0005486808 nova_compute[259627]: 2025-10-14 09:02:38.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:38 np0005486808 nova_compute[259627]: 2025-10-14 09:02:38.662 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8ec905f0-b7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:02:38 np0005486808 nova_compute[259627]: 2025-10-14 09:02:38.663 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8ec905f0-b7, col_values=(('external_ids', {'iface-id': '8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:be:e2:1b', 'vm-uuid': 'de383510-2de3-40bd-b479-c0010b3f2d1c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:02:38 np0005486808 NetworkManager[44885]: <info>  [1760432558.7112] manager: (tap8ec905f0-b7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/190)
Oct 14 05:02:38 np0005486808 nova_compute[259627]: 2025-10-14 09:02:38.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:38 np0005486808 nova_compute[259627]: 2025-10-14 09:02:38.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:02:38 np0005486808 nova_compute[259627]: 2025-10-14 09:02:38.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:38 np0005486808 nova_compute[259627]: 2025-10-14 09:02:38.716 2 INFO os_vif [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7')#033[00m
Oct 14 05:02:38 np0005486808 kernel: tap8ec905f0-b7: entered promiscuous mode
Oct 14 05:02:38 np0005486808 NetworkManager[44885]: <info>  [1760432558.7969] manager: (tap8ec905f0-b7): new Tun device (/org/freedesktop/NetworkManager/Devices/191)
Oct 14 05:02:38 np0005486808 systemd-udevd[312371]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:02:38 np0005486808 ovn_controller[152662]: 2025-10-14T09:02:38Z|00427|binding|INFO|Claiming lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for this chassis.
Oct 14 05:02:38 np0005486808 ovn_controller[152662]: 2025-10-14T09:02:38Z|00428|binding|INFO|8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2: Claiming fa:16:3e:be:e2:1b 10.100.0.10
Oct 14 05:02:38 np0005486808 nova_compute[259627]: 2025-10-14 09:02:38.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:38 np0005486808 NetworkManager[44885]: <info>  [1760432558.8188] device (tap8ec905f0-b7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:02:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:38.816 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:e2:1b 10.100.0.10'], port_security=['fa:16:3e:be:e2:1b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'de383510-2de3-40bd-b479-c0010b3f2d1c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c6eb8a4-6604-462a-8730-43f3742053a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '368d762ed02e459d892ad1e5488c2871', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'b04f9b87-06e3-4b99-859f-b095869a50f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.247', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d063443f-d68f-4763-a6fc-fc1631e3cde1, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:02:38 np0005486808 NetworkManager[44885]: <info>  [1760432558.8203] device (tap8ec905f0-b7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:02:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:38.820 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 in datapath 7c6eb8a4-6604-462a-8730-43f3742053a7 bound to our chassis#033[00m
Oct 14 05:02:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:38.821 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7c6eb8a4-6604-462a-8730-43f3742053a7#033[00m
Oct 14 05:02:38 np0005486808 ovn_controller[152662]: 2025-10-14T09:02:38Z|00429|binding|INFO|Setting lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 ovn-installed in OVS
Oct 14 05:02:38 np0005486808 ovn_controller[152662]: 2025-10-14T09:02:38Z|00430|binding|INFO|Setting lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 up in Southbound
Oct 14 05:02:38 np0005486808 nova_compute[259627]: 2025-10-14 09:02:38.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:38 np0005486808 nova_compute[259627]: 2025-10-14 09:02:38.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:38.831 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[112103ed-ea69-4c24-9778-de3ca4da2ebd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:38.833 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7c6eb8a4-61 in ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:02:38 np0005486808 systemd-machined[214636]: New machine qemu-59-instance-0000002b.
Oct 14 05:02:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:38.838 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7c6eb8a4-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:02:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:38.838 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4cb540b4-8414-4cc5-b705-315bcaa242e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:38.839 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d8d5b97c-51d2-4b1c-9c28-5942806aa673]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:38.850 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[341d8424-d80f-4ceb-b9b7-a74e66ce0a8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:38 np0005486808 systemd[1]: Started Virtual Machine qemu-59-instance-0000002b.
Oct 14 05:02:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:38.862 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3c6ae106-1ae9-4ad1-9490-266bfb3dd44e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:38.913 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[0eed56d8-81cc-4a50-b50d-f76b029c3e2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:38.919 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e581e25f-f129-4f41-8338-3469360b9a71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:38 np0005486808 NetworkManager[44885]: <info>  [1760432558.9209] manager: (tap7c6eb8a4-60): new Veth device (/org/freedesktop/NetworkManager/Devices/192)
Oct 14 05:02:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:38.960 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[2acf514e-218b-40ff-8943-d850ab49bedf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:38.965 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[9982b274-6bfe-49b2-8b2e-79bdceac3d3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:38 np0005486808 NetworkManager[44885]: <info>  [1760432558.9913] device (tap7c6eb8a4-60): carrier: link connected
Oct 14 05:02:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:38.999 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[41af675a-b1ab-4a5b-982f-23bbfd0cd486]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:39.016 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[515895c8-0dc3-496d-a6fa-7f0b0b509eb1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c6eb8a4-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:62:70:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 130], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633775, 'reachable_time': 24446, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312556, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:39.031 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b0b45aa3-1bf7-48b6-bc7e-dd75926290c9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe62:70fd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 633775, 'tstamp': 633775}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312557, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:39.046 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9796f1aa-3192-474c-9fe9-512287c195e3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c6eb8a4-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:62:70:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 130], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633775, 'reachable_time': 24446, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 312558, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:39.090 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e32a5e0a-efa9-4a5f-b429-37a6bad21314]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:39.162 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[143d2519-effb-4934-b40a-f4af4824df74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:39.164 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c6eb8a4-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:02:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:39.164 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:02:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:39.164 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c6eb8a4-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:02:39 np0005486808 nova_compute[259627]: 2025-10-14 09:02:39.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:39 np0005486808 NetworkManager[44885]: <info>  [1760432559.1673] manager: (tap7c6eb8a4-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/193)
Oct 14 05:02:39 np0005486808 kernel: tap7c6eb8a4-60: entered promiscuous mode
Oct 14 05:02:39 np0005486808 nova_compute[259627]: 2025-10-14 09:02:39.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:39.171 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7c6eb8a4-60, col_values=(('external_ids', {'iface-id': '0f8f2393-a280-4a6e-ac22-da67bf8d74da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:02:39 np0005486808 ovn_controller[152662]: 2025-10-14T09:02:39Z|00431|binding|INFO|Releasing lport 0f8f2393-a280-4a6e-ac22-da67bf8d74da from this chassis (sb_readonly=0)
Oct 14 05:02:39 np0005486808 nova_compute[259627]: 2025-10-14 09:02:39.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:39.196 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7c6eb8a4-6604-462a-8730-43f3742053a7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7c6eb8a4-6604-462a-8730-43f3742053a7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:02:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:39.198 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[baf7973e-4334-4a71-bdaf-272e5c9e06c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:02:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:39.199 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:02:39 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:02:39 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:02:39 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-7c6eb8a4-6604-462a-8730-43f3742053a7
Oct 14 05:02:39 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:02:39 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:02:39 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:02:39 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/7c6eb8a4-6604-462a-8730-43f3742053a7.pid.haproxy
Oct 14 05:02:39 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:02:39 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:02:39 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:02:39 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:02:39 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:02:39 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:02:39 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:02:39 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:02:39 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:02:39 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:02:39 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:02:39 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:02:39 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:02:39 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:02:39 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:02:39 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:02:39 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:02:39 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:02:39 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:02:39 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:02:39 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 7c6eb8a4-6604-462a-8730-43f3742053a7
Oct 14 05:02:39 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:02:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:39.200 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'env', 'PROCESS_TAG=haproxy-7c6eb8a4-6604-462a-8730-43f3742053a7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7c6eb8a4-6604-462a-8730-43f3742053a7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:02:39 np0005486808 podman[312628]: 2025-10-14 09:02:39.579912435 +0000 UTC m=+0.052257755 container create 123d2746f7e500462fc42138fd3b250c46021cde1bff8d295624c6f4403c53c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 05:02:39 np0005486808 systemd[1]: Started libpod-conmon-123d2746f7e500462fc42138fd3b250c46021cde1bff8d295624c6f4403c53c6.scope.
Oct 14 05:02:39 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1382: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 24 KiB/s wr, 3 op/s
Oct 14 05:02:39 np0005486808 podman[312628]: 2025-10-14 09:02:39.550618855 +0000 UTC m=+0.022964205 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:02:39 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:02:39 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b394997ed82cbbabad23e931b57790be0862fb55a64f523707c20df554392e6a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:02:39 np0005486808 podman[312628]: 2025-10-14 09:02:39.678191422 +0000 UTC m=+0.150536762 container init 123d2746f7e500462fc42138fd3b250c46021cde1bff8d295624c6f4403c53c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 14 05:02:39 np0005486808 podman[312628]: 2025-10-14 09:02:39.68340717 +0000 UTC m=+0.155752480 container start 123d2746f7e500462fc42138fd3b250c46021cde1bff8d295624c6f4403c53c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 14 05:02:39 np0005486808 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[312645]: [NOTICE]   (312649) : New worker (312651) forked
Oct 14 05:02:39 np0005486808 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[312645]: [NOTICE]   (312649) : Loading success.
Oct 14 05:02:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:40.014 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:02:40 np0005486808 nova_compute[259627]: 2025-10-14 09:02:40.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:40.015 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 14 05:02:40 np0005486808 nova_compute[259627]: 2025-10-14 09:02:40.138 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for de383510-2de3-40bd-b479-c0010b3f2d1c due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct 14 05:02:40 np0005486808 nova_compute[259627]: 2025-10-14 09:02:40.138 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432560.137207, de383510-2de3-40bd-b479-c0010b3f2d1c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:02:40 np0005486808 nova_compute[259627]: 2025-10-14 09:02:40.138 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:02:40 np0005486808 nova_compute[259627]: 2025-10-14 09:02:40.141 2 DEBUG nova.compute.manager [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:02:40 np0005486808 nova_compute[259627]: 2025-10-14 09:02:40.146 2 INFO nova.virt.libvirt.driver [-] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Instance rebooted successfully.#033[00m
Oct 14 05:02:40 np0005486808 nova_compute[259627]: 2025-10-14 09:02:40.146 2 DEBUG nova.compute.manager [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:02:40 np0005486808 nova_compute[259627]: 2025-10-14 09:02:40.155 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:02:40 np0005486808 nova_compute[259627]: 2025-10-14 09:02:40.158 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:02:40 np0005486808 nova_compute[259627]: 2025-10-14 09:02:40.189 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Oct 14 05:02:40 np0005486808 nova_compute[259627]: 2025-10-14 09:02:40.189 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432560.1374612, de383510-2de3-40bd-b479-c0010b3f2d1c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:02:40 np0005486808 nova_compute[259627]: 2025-10-14 09:02:40.189 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] VM Started (Lifecycle Event)#033[00m
Oct 14 05:02:40 np0005486808 nova_compute[259627]: 2025-10-14 09:02:40.200 2 DEBUG oslo_concurrency.lockutils [None req-120b10e4-a62a-421d-8f74-0b27cc17f938 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 5.088s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:02:40 np0005486808 nova_compute[259627]: 2025-10-14 09:02:40.220 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:02:40 np0005486808 nova_compute[259627]: 2025-10-14 09:02:40.224 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:02:41 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1383: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 8.9 KiB/s rd, 24 KiB/s wr, 10 op/s
Oct 14 05:02:41 np0005486808 nova_compute[259627]: 2025-10-14 09:02:41.984 2 DEBUG nova.compute.manager [req-aed2b03e-ce80-4432-a286-8cb0ef7c1fed req-7fcde271-2ef0-411f-890b-d50e723b2b1d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-unplugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:02:41 np0005486808 nova_compute[259627]: 2025-10-14 09:02:41.984 2 DEBUG oslo_concurrency.lockutils [req-aed2b03e-ce80-4432-a286-8cb0ef7c1fed req-7fcde271-2ef0-411f-890b-d50e723b2b1d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:02:41 np0005486808 nova_compute[259627]: 2025-10-14 09:02:41.985 2 DEBUG oslo_concurrency.lockutils [req-aed2b03e-ce80-4432-a286-8cb0ef7c1fed req-7fcde271-2ef0-411f-890b-d50e723b2b1d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:02:41 np0005486808 nova_compute[259627]: 2025-10-14 09:02:41.985 2 DEBUG oslo_concurrency.lockutils [req-aed2b03e-ce80-4432-a286-8cb0ef7c1fed req-7fcde271-2ef0-411f-890b-d50e723b2b1d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:02:41 np0005486808 nova_compute[259627]: 2025-10-14 09:02:41.986 2 DEBUG nova.compute.manager [req-aed2b03e-ce80-4432-a286-8cb0ef7c1fed req-7fcde271-2ef0-411f-890b-d50e723b2b1d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] No waiting events found dispatching network-vif-unplugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:02:41 np0005486808 nova_compute[259627]: 2025-10-14 09:02:41.986 2 WARNING nova.compute.manager [req-aed2b03e-ce80-4432-a286-8cb0ef7c1fed req-7fcde271-2ef0-411f-890b-d50e723b2b1d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received unexpected event network-vif-unplugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:02:42 np0005486808 nova_compute[259627]: 2025-10-14 09:02:42.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:02:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:02:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:02:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:02:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:02:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007613715447352999 of space, bias 1.0, pg target 0.22841146342058996 quantized to 32 (current 32)
Oct 14 05:02:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:02:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:02:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:02:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:02:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:02:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 14 05:02:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:02:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:02:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:02:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:02:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:02:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:02:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:02:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:02:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:02:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:02:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:02:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:02:43 np0005486808 nova_compute[259627]: 2025-10-14 09:02:43.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:43 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1384: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 8.9 KiB/s rd, 14 KiB/s wr, 10 op/s
Oct 14 05:02:43 np0005486808 nova_compute[259627]: 2025-10-14 09:02:43.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:44 np0005486808 nova_compute[259627]: 2025-10-14 09:02:44.086 2 DEBUG nova.compute.manager [req-b16ac64e-8886-465d-9bdb-4e4a4afb9a83 req-970e310f-1648-4517-a66a-26df4f46bfeb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:02:44 np0005486808 nova_compute[259627]: 2025-10-14 09:02:44.087 2 DEBUG oslo_concurrency.lockutils [req-b16ac64e-8886-465d-9bdb-4e4a4afb9a83 req-970e310f-1648-4517-a66a-26df4f46bfeb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:02:44 np0005486808 nova_compute[259627]: 2025-10-14 09:02:44.087 2 DEBUG oslo_concurrency.lockutils [req-b16ac64e-8886-465d-9bdb-4e4a4afb9a83 req-970e310f-1648-4517-a66a-26df4f46bfeb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:02:44 np0005486808 nova_compute[259627]: 2025-10-14 09:02:44.087 2 DEBUG oslo_concurrency.lockutils [req-b16ac64e-8886-465d-9bdb-4e4a4afb9a83 req-970e310f-1648-4517-a66a-26df4f46bfeb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:02:44 np0005486808 nova_compute[259627]: 2025-10-14 09:02:44.088 2 DEBUG nova.compute.manager [req-b16ac64e-8886-465d-9bdb-4e4a4afb9a83 req-970e310f-1648-4517-a66a-26df4f46bfeb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] No waiting events found dispatching network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:02:44 np0005486808 nova_compute[259627]: 2025-10-14 09:02:44.088 2 WARNING nova.compute.manager [req-b16ac64e-8886-465d-9bdb-4e4a4afb9a83 req-970e310f-1648-4517-a66a-26df4f46bfeb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received unexpected event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:02:44 np0005486808 nova_compute[259627]: 2025-10-14 09:02:44.088 2 DEBUG nova.compute.manager [req-b16ac64e-8886-465d-9bdb-4e4a4afb9a83 req-970e310f-1648-4517-a66a-26df4f46bfeb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:02:44 np0005486808 nova_compute[259627]: 2025-10-14 09:02:44.088 2 DEBUG oslo_concurrency.lockutils [req-b16ac64e-8886-465d-9bdb-4e4a4afb9a83 req-970e310f-1648-4517-a66a-26df4f46bfeb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:02:44 np0005486808 nova_compute[259627]: 2025-10-14 09:02:44.089 2 DEBUG oslo_concurrency.lockutils [req-b16ac64e-8886-465d-9bdb-4e4a4afb9a83 req-970e310f-1648-4517-a66a-26df4f46bfeb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:02:44 np0005486808 nova_compute[259627]: 2025-10-14 09:02:44.089 2 DEBUG oslo_concurrency.lockutils [req-b16ac64e-8886-465d-9bdb-4e4a4afb9a83 req-970e310f-1648-4517-a66a-26df4f46bfeb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:02:44 np0005486808 nova_compute[259627]: 2025-10-14 09:02:44.089 2 DEBUG nova.compute.manager [req-b16ac64e-8886-465d-9bdb-4e4a4afb9a83 req-970e310f-1648-4517-a66a-26df4f46bfeb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] No waiting events found dispatching network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:02:44 np0005486808 nova_compute[259627]: 2025-10-14 09:02:44.090 2 WARNING nova.compute.manager [req-b16ac64e-8886-465d-9bdb-4e4a4afb9a83 req-970e310f-1648-4517-a66a-26df4f46bfeb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received unexpected event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:02:44 np0005486808 nova_compute[259627]: 2025-10-14 09:02:44.090 2 DEBUG nova.compute.manager [req-b16ac64e-8886-465d-9bdb-4e4a4afb9a83 req-970e310f-1648-4517-a66a-26df4f46bfeb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:02:44 np0005486808 nova_compute[259627]: 2025-10-14 09:02:44.090 2 DEBUG oslo_concurrency.lockutils [req-b16ac64e-8886-465d-9bdb-4e4a4afb9a83 req-970e310f-1648-4517-a66a-26df4f46bfeb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:02:44 np0005486808 nova_compute[259627]: 2025-10-14 09:02:44.090 2 DEBUG oslo_concurrency.lockutils [req-b16ac64e-8886-465d-9bdb-4e4a4afb9a83 req-970e310f-1648-4517-a66a-26df4f46bfeb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:02:44 np0005486808 nova_compute[259627]: 2025-10-14 09:02:44.091 2 DEBUG oslo_concurrency.lockutils [req-b16ac64e-8886-465d-9bdb-4e4a4afb9a83 req-970e310f-1648-4517-a66a-26df4f46bfeb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:02:44 np0005486808 nova_compute[259627]: 2025-10-14 09:02:44.091 2 DEBUG nova.compute.manager [req-b16ac64e-8886-465d-9bdb-4e4a4afb9a83 req-970e310f-1648-4517-a66a-26df4f46bfeb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] No waiting events found dispatching network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:02:44 np0005486808 nova_compute[259627]: 2025-10-14 09:02:44.091 2 WARNING nova.compute.manager [req-b16ac64e-8886-465d-9bdb-4e4a4afb9a83 req-970e310f-1648-4517-a66a-26df4f46bfeb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received unexpected event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:02:45 np0005486808 nova_compute[259627]: 2025-10-14 09:02:45.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:45 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1385: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 14 05:02:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:02:46.017 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:02:46 np0005486808 podman[312661]: 2025-10-14 09:02:46.647254127 +0000 UTC m=+0.062704273 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009)
Oct 14 05:02:46 np0005486808 podman[312660]: 2025-10-14 09:02:46.706547085 +0000 UTC m=+0.121403386 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 14 05:02:47 np0005486808 nova_compute[259627]: 2025-10-14 09:02:47.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:02:47 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1386: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 71 op/s
Oct 14 05:02:48 np0005486808 nova_compute[259627]: 2025-10-14 09:02:48.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:49 np0005486808 ovn_controller[152662]: 2025-10-14T09:02:49Z|00432|binding|INFO|Releasing lport 0f8f2393-a280-4a6e-ac22-da67bf8d74da from this chassis (sb_readonly=0)
Oct 14 05:02:49 np0005486808 nova_compute[259627]: 2025-10-14 09:02:49.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:49 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1387: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 71 op/s
Oct 14 05:02:51 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1388: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 73 op/s
Oct 14 05:02:52 np0005486808 nova_compute[259627]: 2025-10-14 09:02:52.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:02:52 np0005486808 ovn_controller[152662]: 2025-10-14T09:02:52Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:be:e2:1b 10.100.0.10
Oct 14 05:02:53 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1389: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 65 op/s
Oct 14 05:02:53 np0005486808 nova_compute[259627]: 2025-10-14 09:02:53.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:55 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 9.3 KiB/s wr, 107 op/s
Oct 14 05:02:57 np0005486808 nova_compute[259627]: 2025-10-14 09:02:57.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:02:57 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1391: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 533 KiB/s rd, 9.3 KiB/s wr, 44 op/s
Oct 14 05:02:58 np0005486808 nova_compute[259627]: 2025-10-14 09:02:58.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:58 np0005486808 ovn_controller[152662]: 2025-10-14T09:02:58Z|00433|binding|INFO|Releasing lport 0f8f2393-a280-4a6e-ac22-da67bf8d74da from this chassis (sb_readonly=0)
Oct 14 05:02:58 np0005486808 nova_compute[259627]: 2025-10-14 09:02:58.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:02:59 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1392: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 533 KiB/s rd, 9.3 KiB/s wr, 44 op/s
Oct 14 05:03:01 np0005486808 nova_compute[259627]: 2025-10-14 09:03:01.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:01 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1393: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 533 KiB/s rd, 20 KiB/s wr, 44 op/s
Oct 14 05:03:02 np0005486808 nova_compute[259627]: 2025-10-14 09:03:02.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:03:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:03:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:03:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:03:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:03:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:03:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:03:03 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1394: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 526 KiB/s rd, 20 KiB/s wr, 42 op/s
Oct 14 05:03:03 np0005486808 nova_compute[259627]: 2025-10-14 09:03:03.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:04 np0005486808 nova_compute[259627]: 2025-10-14 09:03:04.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:03:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2589874852' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:03:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:03:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2589874852' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:03:05 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1395: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 526 KiB/s rd, 22 KiB/s wr, 43 op/s
Oct 14 05:03:05 np0005486808 podman[312704]: 2025-10-14 09:03:05.667488239 +0000 UTC m=+0.085206446 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Oct 14 05:03:05 np0005486808 podman[312705]: 2025-10-14 09:03:05.667514749 +0000 UTC m=+0.074508673 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 05:03:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:07.021 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:03:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:07.021 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:03:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:07.022 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:03:07 np0005486808 nova_compute[259627]: 2025-10-14 09:03:07.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:07 np0005486808 nova_compute[259627]: 2025-10-14 09:03:07.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:03:07 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1396: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s wr, 0 op/s
Oct 14 05:03:08 np0005486808 nova_compute[259627]: 2025-10-14 09:03:08.476 2 DEBUG oslo_concurrency.lockutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Acquiring lock "e0bc2109-5f5c-4797-98c7-866f2d11f513" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:03:08 np0005486808 nova_compute[259627]: 2025-10-14 09:03:08.476 2 DEBUG oslo_concurrency.lockutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Lock "e0bc2109-5f5c-4797-98c7-866f2d11f513" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:03:08 np0005486808 nova_compute[259627]: 2025-10-14 09:03:08.501 2 DEBUG nova.compute.manager [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:03:08 np0005486808 nova_compute[259627]: 2025-10-14 09:03:08.580 2 DEBUG oslo_concurrency.lockutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:03:08 np0005486808 nova_compute[259627]: 2025-10-14 09:03:08.580 2 DEBUG oslo_concurrency.lockutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:03:08 np0005486808 nova_compute[259627]: 2025-10-14 09:03:08.588 2 DEBUG nova.virt.hardware [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:03:08 np0005486808 nova_compute[259627]: 2025-10-14 09:03:08.589 2 INFO nova.compute.claims [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:03:08 np0005486808 nova_compute[259627]: 2025-10-14 09:03:08.698 2 DEBUG oslo_concurrency.processutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:03:08 np0005486808 nova_compute[259627]: 2025-10-14 09:03:08.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:09 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:03:09 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1451314201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:03:09 np0005486808 nova_compute[259627]: 2025-10-14 09:03:09.147 2 DEBUG oslo_concurrency.processutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:03:09 np0005486808 nova_compute[259627]: 2025-10-14 09:03:09.153 2 DEBUG nova.compute.provider_tree [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:03:09 np0005486808 nova_compute[259627]: 2025-10-14 09:03:09.183 2 DEBUG nova.scheduler.client.report [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:03:09 np0005486808 nova_compute[259627]: 2025-10-14 09:03:09.214 2 DEBUG oslo_concurrency.lockutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:03:09 np0005486808 nova_compute[259627]: 2025-10-14 09:03:09.215 2 DEBUG nova.compute.manager [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:03:09 np0005486808 nova_compute[259627]: 2025-10-14 09:03:09.265 2 DEBUG nova.compute.manager [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:03:09 np0005486808 nova_compute[259627]: 2025-10-14 09:03:09.265 2 DEBUG nova.network.neutron [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:03:09 np0005486808 nova_compute[259627]: 2025-10-14 09:03:09.289 2 INFO nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:03:09 np0005486808 nova_compute[259627]: 2025-10-14 09:03:09.308 2 DEBUG nova.compute.manager [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:03:09 np0005486808 nova_compute[259627]: 2025-10-14 09:03:09.406 2 DEBUG nova.compute.manager [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 05:03:09 np0005486808 nova_compute[259627]: 2025-10-14 09:03:09.407 2 DEBUG nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 05:03:09 np0005486808 nova_compute[259627]: 2025-10-14 09:03:09.407 2 INFO nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Creating image(s)
Oct 14 05:03:09 np0005486808 nova_compute[259627]: 2025-10-14 09:03:09.431 2 DEBUG nova.storage.rbd_utils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] rbd image e0bc2109-5f5c-4797-98c7-866f2d11f513_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:03:09 np0005486808 nova_compute[259627]: 2025-10-14 09:03:09.453 2 DEBUG nova.storage.rbd_utils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] rbd image e0bc2109-5f5c-4797-98c7-866f2d11f513_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:03:09 np0005486808 nova_compute[259627]: 2025-10-14 09:03:09.472 2 DEBUG nova.storage.rbd_utils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] rbd image e0bc2109-5f5c-4797-98c7-866f2d11f513_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:03:09 np0005486808 nova_compute[259627]: 2025-10-14 09:03:09.476 2 DEBUG oslo_concurrency.processutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:03:09 np0005486808 nova_compute[259627]: 2025-10-14 09:03:09.542 2 DEBUG oslo_concurrency.processutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:03:09 np0005486808 nova_compute[259627]: 2025-10-14 09:03:09.543 2 DEBUG oslo_concurrency.lockutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:03:09 np0005486808 nova_compute[259627]: 2025-10-14 09:03:09.544 2 DEBUG oslo_concurrency.lockutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:03:09 np0005486808 nova_compute[259627]: 2025-10-14 09:03:09.544 2 DEBUG oslo_concurrency.lockutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:03:09 np0005486808 nova_compute[259627]: 2025-10-14 09:03:09.563 2 DEBUG nova.storage.rbd_utils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] rbd image e0bc2109-5f5c-4797-98c7-866f2d11f513_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:03:09 np0005486808 nova_compute[259627]: 2025-10-14 09:03:09.566 2 DEBUG oslo_concurrency.processutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 e0bc2109-5f5c-4797-98c7-866f2d11f513_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:03:09 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1397: 305 pgs: 305 active+clean; 123 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s wr, 0 op/s
Oct 14 05:03:09 np0005486808 nova_compute[259627]: 2025-10-14 09:03:09.761 2 DEBUG nova.policy [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f302a20e13b14bb999539ee5df041036', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '197096bf838b4b289aed810f1495a6c5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 05:03:09 np0005486808 nova_compute[259627]: 2025-10-14 09:03:09.836 2 DEBUG oslo_concurrency.processutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 e0bc2109-5f5c-4797-98c7-866f2d11f513_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.270s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:03:09 np0005486808 nova_compute[259627]: 2025-10-14 09:03:09.915 2 DEBUG nova.storage.rbd_utils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] resizing rbd image e0bc2109-5f5c-4797-98c7-866f2d11f513_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 05:03:09 np0005486808 nova_compute[259627]: 2025-10-14 09:03:09.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:03:09 np0005486808 nova_compute[259627]: 2025-10-14 09:03:09.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 14 05:03:10 np0005486808 nova_compute[259627]: 2025-10-14 09:03:10.024 2 DEBUG nova.objects.instance [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Lazy-loading 'migration_context' on Instance uuid e0bc2109-5f5c-4797-98c7-866f2d11f513 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:03:10 np0005486808 nova_compute[259627]: 2025-10-14 09:03:10.048 2 DEBUG nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 05:03:10 np0005486808 nova_compute[259627]: 2025-10-14 09:03:10.048 2 DEBUG nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Ensure instance console log exists: /var/lib/nova/instances/e0bc2109-5f5c-4797-98c7-866f2d11f513/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 05:03:10 np0005486808 nova_compute[259627]: 2025-10-14 09:03:10.049 2 DEBUG oslo_concurrency.lockutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:03:10 np0005486808 nova_compute[259627]: 2025-10-14 09:03:10.050 2 DEBUG oslo_concurrency.lockutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:03:10 np0005486808 nova_compute[259627]: 2025-10-14 09:03:10.050 2 DEBUG oslo_concurrency.lockutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:03:10 np0005486808 nova_compute[259627]: 2025-10-14 09:03:10.753 2 DEBUG nova.network.neutron [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Successfully created port: ec24b957-093d-460e-a2cf-925bbfd2d421 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 05:03:10 np0005486808 nova_compute[259627]: 2025-10-14 09:03:10.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:03:11 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1398: 305 pgs: 305 active+clean; 169 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 14 05:03:11 np0005486808 nova_compute[259627]: 2025-10-14 09:03:11.718 2 DEBUG nova.network.neutron [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Successfully updated port: ec24b957-093d-460e-a2cf-925bbfd2d421 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 05:03:11 np0005486808 nova_compute[259627]: 2025-10-14 09:03:11.747 2 DEBUG oslo_concurrency.lockutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Acquiring lock "refresh_cache-e0bc2109-5f5c-4797-98c7-866f2d11f513" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 05:03:11 np0005486808 nova_compute[259627]: 2025-10-14 09:03:11.747 2 DEBUG oslo_concurrency.lockutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Acquired lock "refresh_cache-e0bc2109-5f5c-4797-98c7-866f2d11f513" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 05:03:11 np0005486808 nova_compute[259627]: 2025-10-14 09:03:11.748 2 DEBUG nova.network.neutron [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 05:03:11 np0005486808 nova_compute[259627]: 2025-10-14 09:03:11.895 2 DEBUG nova.compute.manager [req-90a4705e-a2f9-4992-8985-a4e3465016e8 req-fd20d0fb-719a-4fbf-bc0b-5d666380f08d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Received event network-changed-ec24b957-093d-460e-a2cf-925bbfd2d421 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:03:11 np0005486808 nova_compute[259627]: 2025-10-14 09:03:11.896 2 DEBUG nova.compute.manager [req-90a4705e-a2f9-4992-8985-a4e3465016e8 req-fd20d0fb-719a-4fbf-bc0b-5d666380f08d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Refreshing instance network info cache due to event network-changed-ec24b957-093d-460e-a2cf-925bbfd2d421. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 05:03:11 np0005486808 nova_compute[259627]: 2025-10-14 09:03:11.896 2 DEBUG oslo_concurrency.lockutils [req-90a4705e-a2f9-4992-8985-a4e3465016e8 req-fd20d0fb-719a-4fbf-bc0b-5d666380f08d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e0bc2109-5f5c-4797-98c7-866f2d11f513" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 05:03:12 np0005486808 nova_compute[259627]: 2025-10-14 09:03:12.061 2 DEBUG nova.network.neutron [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 05:03:12 np0005486808 nova_compute[259627]: 2025-10-14 09:03:12.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:03:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:03:13 np0005486808 nova_compute[259627]: 2025-10-14 09:03:13.120 2 DEBUG nova.network.neutron [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Updating instance_info_cache with network_info: [{"id": "ec24b957-093d-460e-a2cf-925bbfd2d421", "address": "fa:16:3e:b8:b3:92", "network": {"id": "17ac22f4-a94a-4a44-af02-3207d6bbc30c", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1411673589-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "197096bf838b4b289aed810f1495a6c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec24b957-09", "ovs_interfaceid": "ec24b957-093d-460e-a2cf-925bbfd2d421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 05:03:13 np0005486808 nova_compute[259627]: 2025-10-14 09:03:13.149 2 DEBUG oslo_concurrency.lockutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Releasing lock "refresh_cache-e0bc2109-5f5c-4797-98c7-866f2d11f513" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 05:03:13 np0005486808 nova_compute[259627]: 2025-10-14 09:03:13.150 2 DEBUG nova.compute.manager [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Instance network_info: |[{"id": "ec24b957-093d-460e-a2cf-925bbfd2d421", "address": "fa:16:3e:b8:b3:92", "network": {"id": "17ac22f4-a94a-4a44-af02-3207d6bbc30c", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1411673589-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "197096bf838b4b289aed810f1495a6c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec24b957-09", "ovs_interfaceid": "ec24b957-093d-460e-a2cf-925bbfd2d421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 05:03:13 np0005486808 nova_compute[259627]: 2025-10-14 09:03:13.151 2 DEBUG oslo_concurrency.lockutils [req-90a4705e-a2f9-4992-8985-a4e3465016e8 req-fd20d0fb-719a-4fbf-bc0b-5d666380f08d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e0bc2109-5f5c-4797-98c7-866f2d11f513" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 05:03:13 np0005486808 nova_compute[259627]: 2025-10-14 09:03:13.152 2 DEBUG nova.network.neutron [req-90a4705e-a2f9-4992-8985-a4e3465016e8 req-fd20d0fb-719a-4fbf-bc0b-5d666380f08d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Refreshing network info cache for port ec24b957-093d-460e-a2cf-925bbfd2d421 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 05:03:13 np0005486808 nova_compute[259627]: 2025-10-14 09:03:13.158 2 DEBUG nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Start _get_guest_xml network_info=[{"id": "ec24b957-093d-460e-a2cf-925bbfd2d421", "address": "fa:16:3e:b8:b3:92", "network": {"id": "17ac22f4-a94a-4a44-af02-3207d6bbc30c", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1411673589-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "197096bf838b4b289aed810f1495a6c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec24b957-09", "ovs_interfaceid": "ec24b957-093d-460e-a2cf-925bbfd2d421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 05:03:13 np0005486808 nova_compute[259627]: 2025-10-14 09:03:13.165 2 WARNING nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 05:03:13 np0005486808 nova_compute[259627]: 2025-10-14 09:03:13.176 2 DEBUG nova.virt.libvirt.host [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 05:03:13 np0005486808 nova_compute[259627]: 2025-10-14 09:03:13.177 2 DEBUG nova.virt.libvirt.host [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 05:03:13 np0005486808 nova_compute[259627]: 2025-10-14 09:03:13.182 2 DEBUG nova.virt.libvirt.host [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 05:03:13 np0005486808 nova_compute[259627]: 2025-10-14 09:03:13.182 2 DEBUG nova.virt.libvirt.host [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 05:03:13 np0005486808 nova_compute[259627]: 2025-10-14 09:03:13.183 2 DEBUG nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 05:03:13 np0005486808 nova_compute[259627]: 2025-10-14 09:03:13.184 2 DEBUG nova.virt.hardware [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 05:03:13 np0005486808 nova_compute[259627]: 2025-10-14 09:03:13.185 2 DEBUG nova.virt.hardware [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 05:03:13 np0005486808 nova_compute[259627]: 2025-10-14 09:03:13.185 2 DEBUG nova.virt.hardware [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 05:03:13 np0005486808 nova_compute[259627]: 2025-10-14 09:03:13.186 2 DEBUG nova.virt.hardware [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 05:03:13 np0005486808 nova_compute[259627]: 2025-10-14 09:03:13.186 2 DEBUG nova.virt.hardware [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 05:03:13 np0005486808 nova_compute[259627]: 2025-10-14 09:03:13.186 2 DEBUG nova.virt.hardware [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 05:03:13 np0005486808 nova_compute[259627]: 2025-10-14 09:03:13.187 2 DEBUG nova.virt.hardware [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 05:03:13 np0005486808 nova_compute[259627]: 2025-10-14 09:03:13.187 2 DEBUG nova.virt.hardware [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 05:03:13 np0005486808 nova_compute[259627]: 2025-10-14 09:03:13.188 2 DEBUG nova.virt.hardware [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 05:03:13 np0005486808 nova_compute[259627]: 2025-10-14 09:03:13.188 2 DEBUG nova.virt.hardware [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 05:03:13 np0005486808 nova_compute[259627]: 2025-10-14 09:03:13.189 2 DEBUG nova.virt.hardware [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 05:03:13 np0005486808 nova_compute[259627]: 2025-10-14 09:03:13.193 2 DEBUG oslo_concurrency.processutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:03:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:03:13 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/958762108' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:03:13 np0005486808 nova_compute[259627]: 2025-10-14 09:03:13.647 2 DEBUG oslo_concurrency.processutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:03:13 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1399: 305 pgs: 305 active+clean; 169 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:03:13 np0005486808 nova_compute[259627]: 2025-10-14 09:03:13.673 2 DEBUG nova.storage.rbd_utils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] rbd image e0bc2109-5f5c-4797-98c7-866f2d11f513_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:03:13 np0005486808 nova_compute[259627]: 2025-10-14 09:03:13.677 2 DEBUG oslo_concurrency.processutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:03:13 np0005486808 nova_compute[259627]: 2025-10-14 09:03:13.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:03:14 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:03:14 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4056529507' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:03:14 np0005486808 nova_compute[259627]: 2025-10-14 09:03:14.115 2 DEBUG oslo_concurrency.processutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:03:14 np0005486808 nova_compute[259627]: 2025-10-14 09:03:14.117 2 DEBUG nova.virt.libvirt.vif [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:03:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-664198077',display_name='tempest-ImagesOneServerTestJSON-server-664198077',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-664198077',id=49,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='197096bf838b4b289aed810f1495a6c5',ramdisk_id='',reservation_id='r-youk5hnv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-1747208535',owner_user_name='tempest-ImagesOneServerTest
JSON-1747208535-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:03:09Z,user_data=None,user_id='f302a20e13b14bb999539ee5df041036',uuid=e0bc2109-5f5c-4797-98c7-866f2d11f513,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ec24b957-093d-460e-a2cf-925bbfd2d421", "address": "fa:16:3e:b8:b3:92", "network": {"id": "17ac22f4-a94a-4a44-af02-3207d6bbc30c", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1411673589-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "197096bf838b4b289aed810f1495a6c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec24b957-09", "ovs_interfaceid": "ec24b957-093d-460e-a2cf-925bbfd2d421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 05:03:14 np0005486808 nova_compute[259627]: 2025-10-14 09:03:14.117 2 DEBUG nova.network.os_vif_util [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Converting VIF {"id": "ec24b957-093d-460e-a2cf-925bbfd2d421", "address": "fa:16:3e:b8:b3:92", "network": {"id": "17ac22f4-a94a-4a44-af02-3207d6bbc30c", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1411673589-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "197096bf838b4b289aed810f1495a6c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec24b957-09", "ovs_interfaceid": "ec24b957-093d-460e-a2cf-925bbfd2d421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:03:14 np0005486808 nova_compute[259627]: 2025-10-14 09:03:14.118 2 DEBUG nova.network.os_vif_util [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:b3:92,bridge_name='br-int',has_traffic_filtering=True,id=ec24b957-093d-460e-a2cf-925bbfd2d421,network=Network(17ac22f4-a94a-4a44-af02-3207d6bbc30c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec24b957-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:03:14 np0005486808 nova_compute[259627]: 2025-10-14 09:03:14.119 2 DEBUG nova.objects.instance [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Lazy-loading 'pci_devices' on Instance uuid e0bc2109-5f5c-4797-98c7-866f2d11f513 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:03:14 np0005486808 nova_compute[259627]: 2025-10-14 09:03:14.141 2 DEBUG nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:03:14 np0005486808 nova_compute[259627]:  <uuid>e0bc2109-5f5c-4797-98c7-866f2d11f513</uuid>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:  <name>instance-00000031</name>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:03:14 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:      <nova:name>tempest-ImagesOneServerTestJSON-server-664198077</nova:name>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:03:13</nova:creationTime>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:03:14 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:        <nova:user uuid="f302a20e13b14bb999539ee5df041036">tempest-ImagesOneServerTestJSON-1747208535-project-member</nova:user>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:        <nova:project uuid="197096bf838b4b289aed810f1495a6c5">tempest-ImagesOneServerTestJSON-1747208535</nova:project>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:        <nova:port uuid="ec24b957-093d-460e-a2cf-925bbfd2d421">
Oct 14 05:03:14 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:      <entry name="serial">e0bc2109-5f5c-4797-98c7-866f2d11f513</entry>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:      <entry name="uuid">e0bc2109-5f5c-4797-98c7-866f2d11f513</entry>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:03:14 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/e0bc2109-5f5c-4797-98c7-866f2d11f513_disk">
Oct 14 05:03:14 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:03:14 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:03:14 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/e0bc2109-5f5c-4797-98c7-866f2d11f513_disk.config">
Oct 14 05:03:14 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:03:14 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:03:14 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:b8:b3:92"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:      <target dev="tapec24b957-09"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:03:14 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/e0bc2109-5f5c-4797-98c7-866f2d11f513/console.log" append="off"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:03:14 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:03:14 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:03:14 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:03:14 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:03:14 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:03:14 np0005486808 nova_compute[259627]: 2025-10-14 09:03:14.143 2 DEBUG nova.compute.manager [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Preparing to wait for external event network-vif-plugged-ec24b957-093d-460e-a2cf-925bbfd2d421 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:03:14 np0005486808 nova_compute[259627]: 2025-10-14 09:03:14.143 2 DEBUG oslo_concurrency.lockutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Acquiring lock "e0bc2109-5f5c-4797-98c7-866f2d11f513-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:03:14 np0005486808 nova_compute[259627]: 2025-10-14 09:03:14.143 2 DEBUG oslo_concurrency.lockutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Lock "e0bc2109-5f5c-4797-98c7-866f2d11f513-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:03:14 np0005486808 nova_compute[259627]: 2025-10-14 09:03:14.144 2 DEBUG oslo_concurrency.lockutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Lock "e0bc2109-5f5c-4797-98c7-866f2d11f513-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:03:14 np0005486808 nova_compute[259627]: 2025-10-14 09:03:14.144 2 DEBUG nova.virt.libvirt.vif [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:03:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-664198077',display_name='tempest-ImagesOneServerTestJSON-server-664198077',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-664198077',id=49,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='197096bf838b4b289aed810f1495a6c5',ramdisk_id='',reservation_id='r-youk5hnv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-1747208535',owner_user_name='tempest-ImagesOne
ServerTestJSON-1747208535-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:03:09Z,user_data=None,user_id='f302a20e13b14bb999539ee5df041036',uuid=e0bc2109-5f5c-4797-98c7-866f2d11f513,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ec24b957-093d-460e-a2cf-925bbfd2d421", "address": "fa:16:3e:b8:b3:92", "network": {"id": "17ac22f4-a94a-4a44-af02-3207d6bbc30c", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1411673589-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "197096bf838b4b289aed810f1495a6c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec24b957-09", "ovs_interfaceid": "ec24b957-093d-460e-a2cf-925bbfd2d421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:03:14 np0005486808 nova_compute[259627]: 2025-10-14 09:03:14.145 2 DEBUG nova.network.os_vif_util [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Converting VIF {"id": "ec24b957-093d-460e-a2cf-925bbfd2d421", "address": "fa:16:3e:b8:b3:92", "network": {"id": "17ac22f4-a94a-4a44-af02-3207d6bbc30c", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1411673589-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "197096bf838b4b289aed810f1495a6c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec24b957-09", "ovs_interfaceid": "ec24b957-093d-460e-a2cf-925bbfd2d421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:03:14 np0005486808 nova_compute[259627]: 2025-10-14 09:03:14.146 2 DEBUG nova.network.os_vif_util [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:b3:92,bridge_name='br-int',has_traffic_filtering=True,id=ec24b957-093d-460e-a2cf-925bbfd2d421,network=Network(17ac22f4-a94a-4a44-af02-3207d6bbc30c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec24b957-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:03:14 np0005486808 nova_compute[259627]: 2025-10-14 09:03:14.146 2 DEBUG os_vif [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:b3:92,bridge_name='br-int',has_traffic_filtering=True,id=ec24b957-093d-460e-a2cf-925bbfd2d421,network=Network(17ac22f4-a94a-4a44-af02-3207d6bbc30c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec24b957-09') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:03:14 np0005486808 nova_compute[259627]: 2025-10-14 09:03:14.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:14 np0005486808 nova_compute[259627]: 2025-10-14 09:03:14.147 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:03:14 np0005486808 nova_compute[259627]: 2025-10-14 09:03:14.148 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:03:14 np0005486808 nova_compute[259627]: 2025-10-14 09:03:14.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:14 np0005486808 nova_compute[259627]: 2025-10-14 09:03:14.152 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec24b957-09, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:03:14 np0005486808 nova_compute[259627]: 2025-10-14 09:03:14.152 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapec24b957-09, col_values=(('external_ids', {'iface-id': 'ec24b957-093d-460e-a2cf-925bbfd2d421', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b8:b3:92', 'vm-uuid': 'e0bc2109-5f5c-4797-98c7-866f2d11f513'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:03:14 np0005486808 nova_compute[259627]: 2025-10-14 09:03:14.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:14 np0005486808 NetworkManager[44885]: <info>  [1760432594.1997] manager: (tapec24b957-09): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/194)
Oct 14 05:03:14 np0005486808 nova_compute[259627]: 2025-10-14 09:03:14.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:03:14 np0005486808 nova_compute[259627]: 2025-10-14 09:03:14.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:14 np0005486808 nova_compute[259627]: 2025-10-14 09:03:14.208 2 INFO os_vif [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:b3:92,bridge_name='br-int',has_traffic_filtering=True,id=ec24b957-093d-460e-a2cf-925bbfd2d421,network=Network(17ac22f4-a94a-4a44-af02-3207d6bbc30c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec24b957-09')#033[00m
Oct 14 05:03:14 np0005486808 nova_compute[259627]: 2025-10-14 09:03:14.263 2 DEBUG nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:03:14 np0005486808 nova_compute[259627]: 2025-10-14 09:03:14.264 2 DEBUG nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:03:14 np0005486808 nova_compute[259627]: 2025-10-14 09:03:14.264 2 DEBUG nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] No VIF found with MAC fa:16:3e:b8:b3:92, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:03:14 np0005486808 nova_compute[259627]: 2025-10-14 09:03:14.265 2 INFO nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Using config drive#033[00m
Oct 14 05:03:14 np0005486808 nova_compute[259627]: 2025-10-14 09:03:14.300 2 DEBUG nova.storage.rbd_utils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] rbd image e0bc2109-5f5c-4797-98c7-866f2d11f513_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:03:14 np0005486808 nova_compute[259627]: 2025-10-14 09:03:14.719 2 DEBUG nova.network.neutron [req-90a4705e-a2f9-4992-8985-a4e3465016e8 req-fd20d0fb-719a-4fbf-bc0b-5d666380f08d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Updated VIF entry in instance network info cache for port ec24b957-093d-460e-a2cf-925bbfd2d421. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:03:14 np0005486808 nova_compute[259627]: 2025-10-14 09:03:14.720 2 DEBUG nova.network.neutron [req-90a4705e-a2f9-4992-8985-a4e3465016e8 req-fd20d0fb-719a-4fbf-bc0b-5d666380f08d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Updating instance_info_cache with network_info: [{"id": "ec24b957-093d-460e-a2cf-925bbfd2d421", "address": "fa:16:3e:b8:b3:92", "network": {"id": "17ac22f4-a94a-4a44-af02-3207d6bbc30c", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1411673589-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "197096bf838b4b289aed810f1495a6c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec24b957-09", "ovs_interfaceid": "ec24b957-093d-460e-a2cf-925bbfd2d421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:03:14 np0005486808 nova_compute[259627]: 2025-10-14 09:03:14.742 2 DEBUG oslo_concurrency.lockutils [req-90a4705e-a2f9-4992-8985-a4e3465016e8 req-fd20d0fb-719a-4fbf-bc0b-5d666380f08d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-e0bc2109-5f5c-4797-98c7-866f2d11f513" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:03:14 np0005486808 nova_compute[259627]: 2025-10-14 09:03:14.764 2 INFO nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Creating config drive at /var/lib/nova/instances/e0bc2109-5f5c-4797-98c7-866f2d11f513/disk.config#033[00m
Oct 14 05:03:14 np0005486808 nova_compute[259627]: 2025-10-14 09:03:14.774 2 DEBUG oslo_concurrency.processutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e0bc2109-5f5c-4797-98c7-866f2d11f513/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgaksstgg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:03:14 np0005486808 nova_compute[259627]: 2025-10-14 09:03:14.936 2 DEBUG oslo_concurrency.processutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e0bc2109-5f5c-4797-98c7-866f2d11f513/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgaksstgg" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:03:14 np0005486808 nova_compute[259627]: 2025-10-14 09:03:14.980 2 DEBUG nova.storage.rbd_utils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] rbd image e0bc2109-5f5c-4797-98c7-866f2d11f513_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:03:14 np0005486808 nova_compute[259627]: 2025-10-14 09:03:14.986 2 DEBUG oslo_concurrency.processutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e0bc2109-5f5c-4797-98c7-866f2d11f513/disk.config e0bc2109-5f5c-4797-98c7-866f2d11f513_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:03:15 np0005486808 nova_compute[259627]: 2025-10-14 09:03:15.155 2 DEBUG oslo_concurrency.processutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e0bc2109-5f5c-4797-98c7-866f2d11f513/disk.config e0bc2109-5f5c-4797-98c7-866f2d11f513_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:03:15 np0005486808 nova_compute[259627]: 2025-10-14 09:03:15.156 2 INFO nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Deleting local config drive /var/lib/nova/instances/e0bc2109-5f5c-4797-98c7-866f2d11f513/disk.config because it was imported into RBD.#033[00m
Oct 14 05:03:15 np0005486808 NetworkManager[44885]: <info>  [1760432595.2123] manager: (tapec24b957-09): new Tun device (/org/freedesktop/NetworkManager/Devices/195)
Oct 14 05:03:15 np0005486808 kernel: tapec24b957-09: entered promiscuous mode
Oct 14 05:03:15 np0005486808 ovn_controller[152662]: 2025-10-14T09:03:15Z|00434|binding|INFO|Claiming lport ec24b957-093d-460e-a2cf-925bbfd2d421 for this chassis.
Oct 14 05:03:15 np0005486808 ovn_controller[152662]: 2025-10-14T09:03:15Z|00435|binding|INFO|ec24b957-093d-460e-a2cf-925bbfd2d421: Claiming fa:16:3e:b8:b3:92 10.100.0.8
Oct 14 05:03:15 np0005486808 nova_compute[259627]: 2025-10-14 09:03:15.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.223 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:b3:92 10.100.0.8'], port_security=['fa:16:3e:b8:b3:92 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e0bc2109-5f5c-4797-98c7-866f2d11f513', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17ac22f4-a94a-4a44-af02-3207d6bbc30c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '197096bf838b4b289aed810f1495a6c5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '43932228-589c-42e3-996e-587f7969918e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f7b5f101-b0f2-4232-8035-0864b53402a3, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=ec24b957-093d-460e-a2cf-925bbfd2d421) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.224 162547 INFO neutron.agent.ovn.metadata.agent [-] Port ec24b957-093d-460e-a2cf-925bbfd2d421 in datapath 17ac22f4-a94a-4a44-af02-3207d6bbc30c bound to our chassis#033[00m
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.225 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 17ac22f4-a94a-4a44-af02-3207d6bbc30c#033[00m
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.236 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2f4a0e49-1270-4bed-9021-7c81d5cc8ab7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.237 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap17ac22f4-a1 in ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.239 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap17ac22f4-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.239 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7bf11bf4-db18-4bf4-ab6b-8fd6dad10875]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.240 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9abbdddf-8f37-4415-90ce-6368bf31c12e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:15 np0005486808 ovn_controller[152662]: 2025-10-14T09:03:15Z|00436|binding|INFO|Setting lport ec24b957-093d-460e-a2cf-925bbfd2d421 ovn-installed in OVS
Oct 14 05:03:15 np0005486808 ovn_controller[152662]: 2025-10-14T09:03:15Z|00437|binding|INFO|Setting lport ec24b957-093d-460e-a2cf-925bbfd2d421 up in Southbound
Oct 14 05:03:15 np0005486808 systemd-machined[214636]: New machine qemu-60-instance-00000031.
Oct 14 05:03:15 np0005486808 nova_compute[259627]: 2025-10-14 09:03:15.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.252 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[1efd37da-9841-4048-9b77-6f9453fc6212]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:15 np0005486808 nova_compute[259627]: 2025-10-14 09:03:15.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:15 np0005486808 systemd[1]: Started Virtual Machine qemu-60-instance-00000031.
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.275 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fba52819-f2ad-4969-bf71-15f45dada596]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:15 np0005486808 systemd-udevd[313070]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:03:15 np0005486808 NetworkManager[44885]: <info>  [1760432595.2900] device (tapec24b957-09): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:03:15 np0005486808 NetworkManager[44885]: <info>  [1760432595.2909] device (tapec24b957-09): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.303 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8ca19724-3ff9-490c-a722-97e02a12fc3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.312 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d472a8d7-8bc1-4e48-bffa-d3483e606678]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:15 np0005486808 NetworkManager[44885]: <info>  [1760432595.3131] manager: (tap17ac22f4-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/196)
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.344 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[adfc49ee-abfe-467f-af61-49d7b67a62ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.348 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[bc19d1f7-5a73-4ee8-9183-1bd43df28510]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:15 np0005486808 NetworkManager[44885]: <info>  [1760432595.3708] device (tap17ac22f4-a0): carrier: link connected
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.375 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[2eb78aa0-9018-4739-8c02-84499071860e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.390 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[88e7809b-91c2-4c6f-a22e-f79fc5bd8171]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap17ac22f4-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d7:2e:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 132], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 637413, 'reachable_time': 38862, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313100, 'error': None, 'target': 'ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.403 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a5227661-4da6-4911-b915-c8288d87ca39]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed7:2ece'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 637413, 'tstamp': 637413}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313101, 'error': None, 'target': 'ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.423 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9e4ce3b6-d676-4ebb-b648-1d48231614c1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap17ac22f4-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d7:2e:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 132], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 637413, 'reachable_time': 38862, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 313109, 'error': None, 'target': 'ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.453 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6c5432dc-da3c-4505-b25b-ac57b9e7697a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:15 np0005486808 nova_compute[259627]: 2025-10-14 09:03:15.511 2 DEBUG nova.compute.manager [req-6cc07c3f-bbf9-4dbd-a2fd-9f177946ed5d req-4630216f-985b-4e3b-acf9-457d33dbd238 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Received event network-vif-plugged-ec24b957-093d-460e-a2cf-925bbfd2d421 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:03:15 np0005486808 nova_compute[259627]: 2025-10-14 09:03:15.511 2 DEBUG oslo_concurrency.lockutils [req-6cc07c3f-bbf9-4dbd-a2fd-9f177946ed5d req-4630216f-985b-4e3b-acf9-457d33dbd238 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e0bc2109-5f5c-4797-98c7-866f2d11f513-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:03:15 np0005486808 nova_compute[259627]: 2025-10-14 09:03:15.512 2 DEBUG oslo_concurrency.lockutils [req-6cc07c3f-bbf9-4dbd-a2fd-9f177946ed5d req-4630216f-985b-4e3b-acf9-457d33dbd238 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e0bc2109-5f5c-4797-98c7-866f2d11f513-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:03:15 np0005486808 nova_compute[259627]: 2025-10-14 09:03:15.512 2 DEBUG oslo_concurrency.lockutils [req-6cc07c3f-bbf9-4dbd-a2fd-9f177946ed5d req-4630216f-985b-4e3b-acf9-457d33dbd238 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e0bc2109-5f5c-4797-98c7-866f2d11f513-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:03:15 np0005486808 nova_compute[259627]: 2025-10-14 09:03:15.512 2 DEBUG nova.compute.manager [req-6cc07c3f-bbf9-4dbd-a2fd-9f177946ed5d req-4630216f-985b-4e3b-acf9-457d33dbd238 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Processing event network-vif-plugged-ec24b957-093d-460e-a2cf-925bbfd2d421 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.544 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[df449c85-a2ad-4775-966d-350361734dad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.547 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17ac22f4-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.547 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.548 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap17ac22f4-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:03:15 np0005486808 NetworkManager[44885]: <info>  [1760432595.5509] manager: (tap17ac22f4-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/197)
Oct 14 05:03:15 np0005486808 nova_compute[259627]: 2025-10-14 09:03:15.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:15 np0005486808 kernel: tap17ac22f4-a0: entered promiscuous mode
Oct 14 05:03:15 np0005486808 nova_compute[259627]: 2025-10-14 09:03:15.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.556 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap17ac22f4-a0, col_values=(('external_ids', {'iface-id': '90534a6a-0aa1-48a2-852b-3056843e4924'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:03:15 np0005486808 nova_compute[259627]: 2025-10-14 09:03:15.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:15 np0005486808 ovn_controller[152662]: 2025-10-14T09:03:15Z|00438|binding|INFO|Releasing lport 90534a6a-0aa1-48a2-852b-3056843e4924 from this chassis (sb_readonly=0)
Oct 14 05:03:15 np0005486808 nova_compute[259627]: 2025-10-14 09:03:15.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:15 np0005486808 nova_compute[259627]: 2025-10-14 09:03:15.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.576 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/17ac22f4-a94a-4a44-af02-3207d6bbc30c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/17ac22f4-a94a-4a44-af02-3207d6bbc30c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.578 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fdc1b56c-2c04-47df-b51d-db23bc853700]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.579 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-17ac22f4-a94a-4a44-af02-3207d6bbc30c
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/17ac22f4-a94a-4a44-af02-3207d6bbc30c.pid.haproxy
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 17ac22f4-a94a-4a44-af02-3207d6bbc30c
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:03:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:15.580 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c', 'env', 'PROCESS_TAG=haproxy-17ac22f4-a94a-4a44-af02-3207d6bbc30c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/17ac22f4-a94a-4a44-af02-3207d6bbc30c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:03:15 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1400: 305 pgs: 305 active+clean; 169 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 14 05:03:15 np0005486808 nova_compute[259627]: 2025-10-14 09:03:15.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:03:16 np0005486808 nova_compute[259627]: 2025-10-14 09:03:16.020 2 DEBUG nova.compute.manager [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:03:16 np0005486808 nova_compute[259627]: 2025-10-14 09:03:16.021 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432596.01992, e0bc2109-5f5c-4797-98c7-866f2d11f513 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:03:16 np0005486808 nova_compute[259627]: 2025-10-14 09:03:16.022 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] VM Started (Lifecycle Event)#033[00m
Oct 14 05:03:16 np0005486808 nova_compute[259627]: 2025-10-14 09:03:16.031 2 DEBUG nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:03:16 np0005486808 nova_compute[259627]: 2025-10-14 09:03:16.037 2 INFO nova.virt.libvirt.driver [-] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Instance spawned successfully.#033[00m
Oct 14 05:03:16 np0005486808 nova_compute[259627]: 2025-10-14 09:03:16.037 2 DEBUG nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:03:16 np0005486808 podman[313176]: 2025-10-14 09:03:16.059233165 +0000 UTC m=+0.084324854 container create 2d35b2e3998973554c79c4e1e783f709fa07a5abbe161bc633633c00b56e9fed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 14 05:03:16 np0005486808 nova_compute[259627]: 2025-10-14 09:03:16.061 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:03:16 np0005486808 nova_compute[259627]: 2025-10-14 09:03:16.067 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:03:16 np0005486808 nova_compute[259627]: 2025-10-14 09:03:16.070 2 DEBUG nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:03:16 np0005486808 nova_compute[259627]: 2025-10-14 09:03:16.070 2 DEBUG nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:03:16 np0005486808 nova_compute[259627]: 2025-10-14 09:03:16.071 2 DEBUG nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:03:16 np0005486808 nova_compute[259627]: 2025-10-14 09:03:16.071 2 DEBUG nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:03:16 np0005486808 nova_compute[259627]: 2025-10-14 09:03:16.071 2 DEBUG nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:03:16 np0005486808 nova_compute[259627]: 2025-10-14 09:03:16.072 2 DEBUG nova.virt.libvirt.driver [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:03:16 np0005486808 nova_compute[259627]: 2025-10-14 09:03:16.098 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:03:16 np0005486808 nova_compute[259627]: 2025-10-14 09:03:16.098 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432596.0209563, e0bc2109-5f5c-4797-98c7-866f2d11f513 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:03:16 np0005486808 nova_compute[259627]: 2025-10-14 09:03:16.098 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:03:16 np0005486808 systemd[1]: Started libpod-conmon-2d35b2e3998973554c79c4e1e783f709fa07a5abbe161bc633633c00b56e9fed.scope.
Oct 14 05:03:16 np0005486808 podman[313176]: 2025-10-14 09:03:16.027204977 +0000 UTC m=+0.052296666 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:03:16 np0005486808 nova_compute[259627]: 2025-10-14 09:03:16.128 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:03:16 np0005486808 nova_compute[259627]: 2025-10-14 09:03:16.131 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432596.0301926, e0bc2109-5f5c-4797-98c7-866f2d11f513 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:03:16 np0005486808 nova_compute[259627]: 2025-10-14 09:03:16.131 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:03:16 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:03:16 np0005486808 nova_compute[259627]: 2025-10-14 09:03:16.139 2 INFO nova.compute.manager [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Took 6.73 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:03:16 np0005486808 nova_compute[259627]: 2025-10-14 09:03:16.139 2 DEBUG nova.compute.manager [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:03:16 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0f27f2d659583d74823866ac940d88f699c60f5ba72cd023b094e06f8a3931c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:03:16 np0005486808 nova_compute[259627]: 2025-10-14 09:03:16.146 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:03:16 np0005486808 nova_compute[259627]: 2025-10-14 09:03:16.148 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:03:16 np0005486808 nova_compute[259627]: 2025-10-14 09:03:16.188 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:03:16 np0005486808 podman[313176]: 2025-10-14 09:03:16.197342431 +0000 UTC m=+0.222434130 container init 2d35b2e3998973554c79c4e1e783f709fa07a5abbe161bc633633c00b56e9fed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 14 05:03:16 np0005486808 podman[313176]: 2025-10-14 09:03:16.203371589 +0000 UTC m=+0.228463288 container start 2d35b2e3998973554c79c4e1e783f709fa07a5abbe161bc633633c00b56e9fed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 14 05:03:16 np0005486808 nova_compute[259627]: 2025-10-14 09:03:16.207 2 INFO nova.compute.manager [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Took 7.66 seconds to build instance.#033[00m
Oct 14 05:03:16 np0005486808 nova_compute[259627]: 2025-10-14 09:03:16.222 2 DEBUG oslo_concurrency.lockutils [None req-9c300dd1-af2f-4919-a9b7-bdcf16d387e8 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Lock "e0bc2109-5f5c-4797-98c7-866f2d11f513" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:03:16 np0005486808 neutron-haproxy-ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c[313189]: [NOTICE]   (313193) : New worker (313195) forked
Oct 14 05:03:16 np0005486808 neutron-haproxy-ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c[313189]: [NOTICE]   (313193) : Loading success.
Oct 14 05:03:16 np0005486808 nova_compute[259627]: 2025-10-14 09:03:16.989 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:03:17 np0005486808 nova_compute[259627]: 2025-10-14 09:03:17.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:17 np0005486808 nova_compute[259627]: 2025-10-14 09:03:17.460 2 DEBUG nova.compute.manager [None req-7b9e28ff-4705-4591-a1b6-5c9b8293a6f0 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:03:17 np0005486808 nova_compute[259627]: 2025-10-14 09:03:17.500 2 INFO nova.compute.manager [None req-7b9e28ff-4705-4591-a1b6-5c9b8293a6f0 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] instance snapshotting#033[00m
Oct 14 05:03:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:03:17 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1401: 305 pgs: 305 active+clean; 169 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 14 05:03:17 np0005486808 podman[313205]: 2025-10-14 09:03:17.693348092 +0000 UTC m=+0.086819886 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:03:17 np0005486808 podman[313204]: 2025-10-14 09:03:17.727300607 +0000 UTC m=+0.124855041 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 05:03:17 np0005486808 nova_compute[259627]: 2025-10-14 09:03:17.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:03:17 np0005486808 nova_compute[259627]: 2025-10-14 09:03:17.977 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 14 05:03:17 np0005486808 nova_compute[259627]: 2025-10-14 09:03:17.997 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 14 05:03:18 np0005486808 nova_compute[259627]: 2025-10-14 09:03:18.050 2 INFO nova.virt.libvirt.driver [None req-7b9e28ff-4705-4591-a1b6-5c9b8293a6f0 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Beginning live snapshot process#033[00m
Oct 14 05:03:18 np0005486808 nova_compute[259627]: 2025-10-14 09:03:18.281 2 DEBUG nova.virt.libvirt.imagebackend [None req-7b9e28ff-4705-4591-a1b6-5c9b8293a6f0 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] No parent info for a4789543-f429-47d7-9f79-80a9d90a59f9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct 14 05:03:18 np0005486808 nova_compute[259627]: 2025-10-14 09:03:18.399 2 DEBUG nova.compute.manager [req-450404e1-aa30-42d4-b948-6c333ee15559 req-49dff267-0a51-49d6-b8f9-548d4b5a5c83 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Received event network-vif-plugged-ec24b957-093d-460e-a2cf-925bbfd2d421 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:03:18 np0005486808 nova_compute[259627]: 2025-10-14 09:03:18.400 2 DEBUG oslo_concurrency.lockutils [req-450404e1-aa30-42d4-b948-6c333ee15559 req-49dff267-0a51-49d6-b8f9-548d4b5a5c83 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e0bc2109-5f5c-4797-98c7-866f2d11f513-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:03:18 np0005486808 nova_compute[259627]: 2025-10-14 09:03:18.400 2 DEBUG oslo_concurrency.lockutils [req-450404e1-aa30-42d4-b948-6c333ee15559 req-49dff267-0a51-49d6-b8f9-548d4b5a5c83 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e0bc2109-5f5c-4797-98c7-866f2d11f513-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:03:18 np0005486808 nova_compute[259627]: 2025-10-14 09:03:18.400 2 DEBUG oslo_concurrency.lockutils [req-450404e1-aa30-42d4-b948-6c333ee15559 req-49dff267-0a51-49d6-b8f9-548d4b5a5c83 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e0bc2109-5f5c-4797-98c7-866f2d11f513-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:03:18 np0005486808 nova_compute[259627]: 2025-10-14 09:03:18.401 2 DEBUG nova.compute.manager [req-450404e1-aa30-42d4-b948-6c333ee15559 req-49dff267-0a51-49d6-b8f9-548d4b5a5c83 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] No waiting events found dispatching network-vif-plugged-ec24b957-093d-460e-a2cf-925bbfd2d421 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:03:18 np0005486808 nova_compute[259627]: 2025-10-14 09:03:18.401 2 WARNING nova.compute.manager [req-450404e1-aa30-42d4-b948-6c333ee15559 req-49dff267-0a51-49d6-b8f9-548d4b5a5c83 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Received unexpected event network-vif-plugged-ec24b957-093d-460e-a2cf-925bbfd2d421 for instance with vm_state active and task_state image_uploading.#033[00m
Oct 14 05:03:18 np0005486808 nova_compute[259627]: 2025-10-14 09:03:18.572 2 DEBUG nova.storage.rbd_utils [None req-7b9e28ff-4705-4591-a1b6-5c9b8293a6f0 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] creating snapshot(ec9a431003054ee8811038a4d62f7339) on rbd image(e0bc2109-5f5c-4797-98c7-866f2d11f513_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 14 05:03:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Oct 14 05:03:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Oct 14 05:03:18 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Oct 14 05:03:18 np0005486808 nova_compute[259627]: 2025-10-14 09:03:18.949 2 DEBUG nova.storage.rbd_utils [None req-7b9e28ff-4705-4591-a1b6-5c9b8293a6f0 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] cloning vms/e0bc2109-5f5c-4797-98c7-866f2d11f513_disk@ec9a431003054ee8811038a4d62f7339 to images/72577407-a8f9-488a-b219-b5d5d896d73d clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct 14 05:03:19 np0005486808 nova_compute[259627]: 2025-10-14 09:03:19.103 2 DEBUG nova.storage.rbd_utils [None req-7b9e28ff-4705-4591-a1b6-5c9b8293a6f0 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] flattening images/72577407-a8f9-488a-b219-b5d5d896d73d flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct 14 05:03:19 np0005486808 nova_compute[259627]: 2025-10-14 09:03:19.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:19 np0005486808 nova_compute[259627]: 2025-10-14 09:03:19.242 2 DEBUG oslo_concurrency.lockutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:03:19 np0005486808 nova_compute[259627]: 2025-10-14 09:03:19.243 2 DEBUG oslo_concurrency.lockutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:03:19 np0005486808 nova_compute[259627]: 2025-10-14 09:03:19.262 2 DEBUG nova.compute.manager [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:03:19 np0005486808 nova_compute[259627]: 2025-10-14 09:03:19.404 2 DEBUG nova.storage.rbd_utils [None req-7b9e28ff-4705-4591-a1b6-5c9b8293a6f0 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] removing snapshot(ec9a431003054ee8811038a4d62f7339) on rbd image(e0bc2109-5f5c-4797-98c7-866f2d11f513_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct 14 05:03:19 np0005486808 nova_compute[259627]: 2025-10-14 09:03:19.460 2 DEBUG oslo_concurrency.lockutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:03:19 np0005486808 nova_compute[259627]: 2025-10-14 09:03:19.461 2 DEBUG oslo_concurrency.lockutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:03:19 np0005486808 nova_compute[259627]: 2025-10-14 09:03:19.470 2 DEBUG nova.virt.hardware [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:03:19 np0005486808 nova_compute[259627]: 2025-10-14 09:03:19.471 2 INFO nova.compute.claims [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:03:19 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1403: 305 pgs: 305 active+clean; 169 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.1 MiB/s wr, 36 op/s
Oct 14 05:03:19 np0005486808 nova_compute[259627]: 2025-10-14 09:03:19.700 2 DEBUG oslo_concurrency.processutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:03:19 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Oct 14 05:03:19 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Oct 14 05:03:19 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Oct 14 05:03:19 np0005486808 nova_compute[259627]: 2025-10-14 09:03:19.994 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:03:19 np0005486808 nova_compute[259627]: 2025-10-14 09:03:19.995 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:03:19 np0005486808 nova_compute[259627]: 2025-10-14 09:03:19.995 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:03:19 np0005486808 nova_compute[259627]: 2025-10-14 09:03:19.996 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:03:20 np0005486808 nova_compute[259627]: 2025-10-14 09:03:20.081 2 DEBUG nova.storage.rbd_utils [None req-7b9e28ff-4705-4591-a1b6-5c9b8293a6f0 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] creating snapshot(snap) on rbd image(72577407-a8f9-488a-b219-b5d5d896d73d) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 14 05:03:20 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:03:20 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1494532258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:03:20 np0005486808 nova_compute[259627]: 2025-10-14 09:03:20.259 2 DEBUG oslo_concurrency.processutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:03:20 np0005486808 nova_compute[259627]: 2025-10-14 09:03:20.267 2 DEBUG nova.compute.provider_tree [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:03:20 np0005486808 nova_compute[259627]: 2025-10-14 09:03:20.294 2 DEBUG nova.scheduler.client.report [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:03:20 np0005486808 nova_compute[259627]: 2025-10-14 09:03:20.355 2 DEBUG oslo_concurrency.lockutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.895s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:03:20 np0005486808 nova_compute[259627]: 2025-10-14 09:03:20.357 2 DEBUG nova.compute.manager [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:03:20 np0005486808 nova_compute[259627]: 2025-10-14 09:03:20.418 2 DEBUG nova.compute.manager [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:03:20 np0005486808 nova_compute[259627]: 2025-10-14 09:03:20.419 2 DEBUG nova.network.neutron [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:03:20 np0005486808 nova_compute[259627]: 2025-10-14 09:03:20.441 2 INFO nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:03:20 np0005486808 nova_compute[259627]: 2025-10-14 09:03:20.457 2 DEBUG nova.compute.manager [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:03:20 np0005486808 nova_compute[259627]: 2025-10-14 09:03:20.543 2 DEBUG nova.compute.manager [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:03:20 np0005486808 nova_compute[259627]: 2025-10-14 09:03:20.545 2 DEBUG nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:03:20 np0005486808 nova_compute[259627]: 2025-10-14 09:03:20.545 2 INFO nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Creating image(s)#033[00m
Oct 14 05:03:20 np0005486808 nova_compute[259627]: 2025-10-14 09:03:20.573 2 DEBUG nova.storage.rbd_utils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] rbd image dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:03:20 np0005486808 nova_compute[259627]: 2025-10-14 09:03:20.609 2 DEBUG nova.storage.rbd_utils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] rbd image dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:03:20 np0005486808 nova_compute[259627]: 2025-10-14 09:03:20.643 2 DEBUG nova.storage.rbd_utils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] rbd image dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:03:20 np0005486808 nova_compute[259627]: 2025-10-14 09:03:20.649 2 DEBUG oslo_concurrency.processutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:03:20 np0005486808 nova_compute[259627]: 2025-10-14 09:03:20.736 2 DEBUG oslo_concurrency.processutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:03:20 np0005486808 nova_compute[259627]: 2025-10-14 09:03:20.739 2 DEBUG oslo_concurrency.lockutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:03:20 np0005486808 nova_compute[259627]: 2025-10-14 09:03:20.740 2 DEBUG oslo_concurrency.lockutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:03:20 np0005486808 nova_compute[259627]: 2025-10-14 09:03:20.740 2 DEBUG oslo_concurrency.lockutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:03:20 np0005486808 nova_compute[259627]: 2025-10-14 09:03:20.768 2 DEBUG nova.storage.rbd_utils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] rbd image dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:03:20 np0005486808 nova_compute[259627]: 2025-10-14 09:03:20.773 2 DEBUG oslo_concurrency.processutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:03:20 np0005486808 nova_compute[259627]: 2025-10-14 09:03:20.897 2 DEBUG nova.policy [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'aa32af91355a41198fd57121e5c70ec2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '368d762ed02e459d892ad1e5488c2871', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:03:20 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Oct 14 05:03:20 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Oct 14 05:03:20 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Oct 14 05:03:21 np0005486808 nova_compute[259627]: 2025-10-14 09:03:21.117 2 DEBUG oslo_concurrency.processutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.344s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:03:21 np0005486808 nova_compute[259627]: 2025-10-14 09:03:21.177 2 DEBUG nova.storage.rbd_utils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] resizing rbd image dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:03:21 np0005486808 nova_compute[259627]: 2025-10-14 09:03:21.265 2 DEBUG nova.objects.instance [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'migration_context' on Instance uuid dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:03:21 np0005486808 nova_compute[259627]: 2025-10-14 09:03:21.281 2 DEBUG nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:03:21 np0005486808 nova_compute[259627]: 2025-10-14 09:03:21.281 2 DEBUG nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Ensure instance console log exists: /var/lib/nova/instances/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:03:21 np0005486808 nova_compute[259627]: 2025-10-14 09:03:21.282 2 DEBUG oslo_concurrency.lockutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:03:21 np0005486808 nova_compute[259627]: 2025-10-14 09:03:21.282 2 DEBUG oslo_concurrency.lockutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:03:21 np0005486808 nova_compute[259627]: 2025-10-14 09:03:21.283 2 DEBUG oslo_concurrency.lockutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:03:21 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1406: 305 pgs: 4 active+clean+snaptrim, 11 active+clean+snaptrim_wait, 290 active+clean; 215 MiB data, 519 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 3.6 MiB/s wr, 245 op/s
Oct 14 05:03:21 np0005486808 nova_compute[259627]: 2025-10-14 09:03:21.888 2 DEBUG nova.network.neutron [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Successfully created port: d1066ec7-d932-4d99-aff7-7f7e80c54724 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:03:21 np0005486808 nova_compute[259627]: 2025-10-14 09:03:21.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:03:21 np0005486808 nova_compute[259627]: 2025-10-14 09:03:21.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:03:22 np0005486808 nova_compute[259627]: 2025-10-14 09:03:22.001 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:03:22 np0005486808 nova_compute[259627]: 2025-10-14 09:03:22.002 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:03:22 np0005486808 nova_compute[259627]: 2025-10-14 09:03:22.003 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:03:22 np0005486808 nova_compute[259627]: 2025-10-14 09:03:22.003 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:03:22 np0005486808 nova_compute[259627]: 2025-10-14 09:03:22.003 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:03:22 np0005486808 nova_compute[259627]: 2025-10-14 09:03:22.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:03:22 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1178111964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:03:22 np0005486808 nova_compute[259627]: 2025-10-14 09:03:22.455 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:03:22 np0005486808 nova_compute[259627]: 2025-10-14 09:03:22.585 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000031 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:03:22 np0005486808 nova_compute[259627]: 2025-10-14 09:03:22.585 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000031 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:03:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:03:22 np0005486808 nova_compute[259627]: 2025-10-14 09:03:22.589 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:03:22 np0005486808 nova_compute[259627]: 2025-10-14 09:03:22.590 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:03:22 np0005486808 nova_compute[259627]: 2025-10-14 09:03:22.730 2 DEBUG nova.network.neutron [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Successfully updated port: d1066ec7-d932-4d99-aff7-7f7e80c54724 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:03:22 np0005486808 nova_compute[259627]: 2025-10-14 09:03:22.743 2 DEBUG oslo_concurrency.lockutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "refresh_cache-dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:03:22 np0005486808 nova_compute[259627]: 2025-10-14 09:03:22.744 2 DEBUG oslo_concurrency.lockutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquired lock "refresh_cache-dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:03:22 np0005486808 nova_compute[259627]: 2025-10-14 09:03:22.744 2 DEBUG nova.network.neutron [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:03:22 np0005486808 nova_compute[259627]: 2025-10-14 09:03:22.770 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:03:22 np0005486808 nova_compute[259627]: 2025-10-14 09:03:22.771 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3835MB free_disk=59.92169189453125GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:03:22 np0005486808 nova_compute[259627]: 2025-10-14 09:03:22.771 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:03:22 np0005486808 nova_compute[259627]: 2025-10-14 09:03:22.772 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:03:22 np0005486808 nova_compute[259627]: 2025-10-14 09:03:22.818 2 DEBUG nova.compute.manager [req-6a3a9b33-071e-45f5-a0c3-208a7ce7f382 req-7cc2fc0e-01d0-41b9-bfb5-77567be59efa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Received event network-changed-d1066ec7-d932-4d99-aff7-7f7e80c54724 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:03:22 np0005486808 nova_compute[259627]: 2025-10-14 09:03:22.819 2 DEBUG nova.compute.manager [req-6a3a9b33-071e-45f5-a0c3-208a7ce7f382 req-7cc2fc0e-01d0-41b9-bfb5-77567be59efa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Refreshing instance network info cache due to event network-changed-d1066ec7-d932-4d99-aff7-7f7e80c54724. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:03:22 np0005486808 nova_compute[259627]: 2025-10-14 09:03:22.819 2 DEBUG oslo_concurrency.lockutils [req-6a3a9b33-071e-45f5-a0c3-208a7ce7f382 req-7cc2fc0e-01d0-41b9-bfb5-77567be59efa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:03:22 np0005486808 nova_compute[259627]: 2025-10-14 09:03:22.851 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance de383510-2de3-40bd-b479-c0010b3f2d1c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:03:22 np0005486808 nova_compute[259627]: 2025-10-14 09:03:22.851 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance e0bc2109-5f5c-4797-98c7-866f2d11f513 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:03:22 np0005486808 nova_compute[259627]: 2025-10-14 09:03:22.852 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:03:22 np0005486808 nova_compute[259627]: 2025-10-14 09:03:22.852 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:03:22 np0005486808 nova_compute[259627]: 2025-10-14 09:03:22.852 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:03:22 np0005486808 nova_compute[259627]: 2025-10-14 09:03:22.897 2 DEBUG nova.network.neutron [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:03:22 np0005486808 nova_compute[259627]: 2025-10-14 09:03:22.929 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:03:23 np0005486808 nova_compute[259627]: 2025-10-14 09:03:23.014 2 INFO nova.virt.libvirt.driver [None req-7b9e28ff-4705-4591-a1b6-5c9b8293a6f0 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Snapshot image upload complete#033[00m
Oct 14 05:03:23 np0005486808 nova_compute[259627]: 2025-10-14 09:03:23.016 2 INFO nova.compute.manager [None req-7b9e28ff-4705-4591-a1b6-5c9b8293a6f0 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Took 5.51 seconds to snapshot the instance on the hypervisor.#033[00m
Oct 14 05:03:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:03:23 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3378566224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:03:23 np0005486808 nova_compute[259627]: 2025-10-14 09:03:23.459 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:03:23 np0005486808 nova_compute[259627]: 2025-10-14 09:03:23.467 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:03:23 np0005486808 nova_compute[259627]: 2025-10-14 09:03:23.503 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:03:23 np0005486808 nova_compute[259627]: 2025-10-14 09:03:23.541 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:03:23 np0005486808 nova_compute[259627]: 2025-10-14 09:03:23.542 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:03:23 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1407: 305 pgs: 4 active+clean+snaptrim, 11 active+clean+snaptrim_wait, 290 active+clean; 215 MiB data, 519 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 3.6 MiB/s wr, 245 op/s
Oct 14 05:03:24 np0005486808 nova_compute[259627]: 2025-10-14 09:03:24.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:24 np0005486808 nova_compute[259627]: 2025-10-14 09:03:24.473 2 DEBUG nova.network.neutron [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Updating instance_info_cache with network_info: [{"id": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "address": "fa:16:3e:ba:e4:cb", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1066ec7-d9", "ovs_interfaceid": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:03:24 np0005486808 nova_compute[259627]: 2025-10-14 09:03:24.500 2 DEBUG oslo_concurrency.lockutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Releasing lock "refresh_cache-dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:03:24 np0005486808 nova_compute[259627]: 2025-10-14 09:03:24.500 2 DEBUG nova.compute.manager [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Instance network_info: |[{"id": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "address": "fa:16:3e:ba:e4:cb", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1066ec7-d9", "ovs_interfaceid": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:03:24 np0005486808 nova_compute[259627]: 2025-10-14 09:03:24.501 2 DEBUG oslo_concurrency.lockutils [req-6a3a9b33-071e-45f5-a0c3-208a7ce7f382 req-7cc2fc0e-01d0-41b9-bfb5-77567be59efa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:03:24 np0005486808 nova_compute[259627]: 2025-10-14 09:03:24.501 2 DEBUG nova.network.neutron [req-6a3a9b33-071e-45f5-a0c3-208a7ce7f382 req-7cc2fc0e-01d0-41b9-bfb5-77567be59efa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Refreshing network info cache for port d1066ec7-d932-4d99-aff7-7f7e80c54724 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:03:24 np0005486808 nova_compute[259627]: 2025-10-14 09:03:24.505 2 DEBUG nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Start _get_guest_xml network_info=[{"id": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "address": "fa:16:3e:ba:e4:cb", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1066ec7-d9", "ovs_interfaceid": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:03:24 np0005486808 nova_compute[259627]: 2025-10-14 09:03:24.511 2 WARNING nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:03:24 np0005486808 nova_compute[259627]: 2025-10-14 09:03:24.519 2 DEBUG nova.virt.libvirt.host [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:03:24 np0005486808 nova_compute[259627]: 2025-10-14 09:03:24.520 2 DEBUG nova.virt.libvirt.host [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:03:24 np0005486808 nova_compute[259627]: 2025-10-14 09:03:24.524 2 DEBUG nova.virt.libvirt.host [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:03:24 np0005486808 nova_compute[259627]: 2025-10-14 09:03:24.525 2 DEBUG nova.virt.libvirt.host [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:03:24 np0005486808 nova_compute[259627]: 2025-10-14 09:03:24.525 2 DEBUG nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:03:24 np0005486808 nova_compute[259627]: 2025-10-14 09:03:24.525 2 DEBUG nova.virt.hardware [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:03:24 np0005486808 nova_compute[259627]: 2025-10-14 09:03:24.526 2 DEBUG nova.virt.hardware [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:03:24 np0005486808 nova_compute[259627]: 2025-10-14 09:03:24.526 2 DEBUG nova.virt.hardware [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:03:24 np0005486808 nova_compute[259627]: 2025-10-14 09:03:24.526 2 DEBUG nova.virt.hardware [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:03:24 np0005486808 nova_compute[259627]: 2025-10-14 09:03:24.526 2 DEBUG nova.virt.hardware [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:03:24 np0005486808 nova_compute[259627]: 2025-10-14 09:03:24.527 2 DEBUG nova.virt.hardware [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:03:24 np0005486808 nova_compute[259627]: 2025-10-14 09:03:24.527 2 DEBUG nova.virt.hardware [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:03:24 np0005486808 nova_compute[259627]: 2025-10-14 09:03:24.527 2 DEBUG nova.virt.hardware [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:03:24 np0005486808 nova_compute[259627]: 2025-10-14 09:03:24.527 2 DEBUG nova.virt.hardware [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:03:24 np0005486808 nova_compute[259627]: 2025-10-14 09:03:24.527 2 DEBUG nova.virt.hardware [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:03:24 np0005486808 nova_compute[259627]: 2025-10-14 09:03:24.528 2 DEBUG nova.virt.hardware [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:03:24 np0005486808 nova_compute[259627]: 2025-10-14 09:03:24.530 2 DEBUG oslo_concurrency.processutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:03:24 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:03:24 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3667482358' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:03:24 np0005486808 nova_compute[259627]: 2025-10-14 09:03:24.996 2 DEBUG oslo_concurrency.processutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:03:25 np0005486808 nova_compute[259627]: 2025-10-14 09:03:25.016 2 DEBUG nova.storage.rbd_utils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] rbd image dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:03:25 np0005486808 nova_compute[259627]: 2025-10-14 09:03:25.020 2 DEBUG oslo_concurrency.processutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:03:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:03:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1077889941' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:03:25 np0005486808 nova_compute[259627]: 2025-10-14 09:03:25.451 2 DEBUG oslo_concurrency.processutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:03:25 np0005486808 nova_compute[259627]: 2025-10-14 09:03:25.453 2 DEBUG nova.virt.libvirt.vif [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:03:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1481502960',display_name='tempest-tempest.common.compute-instance-1481502960',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1481502960',id=50,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='368d762ed02e459d892ad1e5488c2871',ramdisk_id='',reservation_id='r-hqb4aruj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1593617559',owner_user_name='tempest-ServerActionsTestJSON-1593617559-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:03:20Z,user_data=None,user_id='aa32af91355a41198fd57121e5c70ec2',uuid=dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "address": "fa:16:3e:ba:e4:cb", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1066ec7-d9", "ovs_interfaceid": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:03:25 np0005486808 nova_compute[259627]: 2025-10-14 09:03:25.453 2 DEBUG nova.network.os_vif_util [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converting VIF {"id": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "address": "fa:16:3e:ba:e4:cb", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1066ec7-d9", "ovs_interfaceid": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:03:25 np0005486808 nova_compute[259627]: 2025-10-14 09:03:25.454 2 DEBUG nova.network.os_vif_util [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e4:cb,bridge_name='br-int',has_traffic_filtering=True,id=d1066ec7-d932-4d99-aff7-7f7e80c54724,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1066ec7-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:03:25 np0005486808 nova_compute[259627]: 2025-10-14 09:03:25.455 2 DEBUG nova.objects.instance [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'pci_devices' on Instance uuid dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:03:25 np0005486808 nova_compute[259627]: 2025-10-14 09:03:25.483 2 DEBUG nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:03:25 np0005486808 nova_compute[259627]:  <uuid>dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9</uuid>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:  <name>instance-00000032</name>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:03:25 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:      <nova:name>tempest-tempest.common.compute-instance-1481502960</nova:name>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:03:24</nova:creationTime>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:03:25 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:        <nova:user uuid="aa32af91355a41198fd57121e5c70ec2">tempest-ServerActionsTestJSON-1593617559-project-member</nova:user>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:        <nova:project uuid="368d762ed02e459d892ad1e5488c2871">tempest-ServerActionsTestJSON-1593617559</nova:project>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:        <nova:port uuid="d1066ec7-d932-4d99-aff7-7f7e80c54724">
Oct 14 05:03:25 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:      <entry name="serial">dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9</entry>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:      <entry name="uuid">dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9</entry>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:03:25 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk">
Oct 14 05:03:25 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:03:25 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:03:25 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk.config">
Oct 14 05:03:25 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:03:25 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:03:25 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:ba:e4:cb"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:      <target dev="tapd1066ec7-d9"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:03:25 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9/console.log" append="off"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:03:25 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:03:25 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:03:25 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:03:25 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:03:25 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:03:25 np0005486808 nova_compute[259627]: 2025-10-14 09:03:25.486 2 DEBUG nova.compute.manager [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Preparing to wait for external event network-vif-plugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:03:25 np0005486808 nova_compute[259627]: 2025-10-14 09:03:25.487 2 DEBUG oslo_concurrency.lockutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:03:25 np0005486808 nova_compute[259627]: 2025-10-14 09:03:25.488 2 DEBUG oslo_concurrency.lockutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:03:25 np0005486808 nova_compute[259627]: 2025-10-14 09:03:25.488 2 DEBUG oslo_concurrency.lockutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:03:25 np0005486808 nova_compute[259627]: 2025-10-14 09:03:25.490 2 DEBUG nova.virt.libvirt.vif [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:03:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1481502960',display_name='tempest-tempest.common.compute-instance-1481502960',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1481502960',id=50,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='368d762ed02e459d892ad1e5488c2871',ramdisk_id='',reservation_id='r-hqb4aruj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1593617559',owner_user_name='tempest-Serve
rActionsTestJSON-1593617559-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:03:20Z,user_data=None,user_id='aa32af91355a41198fd57121e5c70ec2',uuid=dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "address": "fa:16:3e:ba:e4:cb", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1066ec7-d9", "ovs_interfaceid": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:03:25 np0005486808 nova_compute[259627]: 2025-10-14 09:03:25.490 2 DEBUG nova.network.os_vif_util [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converting VIF {"id": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "address": "fa:16:3e:ba:e4:cb", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1066ec7-d9", "ovs_interfaceid": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:03:25 np0005486808 nova_compute[259627]: 2025-10-14 09:03:25.492 2 DEBUG nova.network.os_vif_util [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e4:cb,bridge_name='br-int',has_traffic_filtering=True,id=d1066ec7-d932-4d99-aff7-7f7e80c54724,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1066ec7-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:03:25 np0005486808 nova_compute[259627]: 2025-10-14 09:03:25.492 2 DEBUG os_vif [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e4:cb,bridge_name='br-int',has_traffic_filtering=True,id=d1066ec7-d932-4d99-aff7-7f7e80c54724,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1066ec7-d9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:03:25 np0005486808 nova_compute[259627]: 2025-10-14 09:03:25.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:25 np0005486808 nova_compute[259627]: 2025-10-14 09:03:25.494 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:03:25 np0005486808 nova_compute[259627]: 2025-10-14 09:03:25.495 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:03:25 np0005486808 nova_compute[259627]: 2025-10-14 09:03:25.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:25 np0005486808 nova_compute[259627]: 2025-10-14 09:03:25.500 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd1066ec7-d9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:03:25 np0005486808 nova_compute[259627]: 2025-10-14 09:03:25.501 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd1066ec7-d9, col_values=(('external_ids', {'iface-id': 'd1066ec7-d932-4d99-aff7-7f7e80c54724', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ba:e4:cb', 'vm-uuid': 'dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:03:25 np0005486808 nova_compute[259627]: 2025-10-14 09:03:25.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:25 np0005486808 NetworkManager[44885]: <info>  [1760432605.5050] manager: (tapd1066ec7-d9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/198)
Oct 14 05:03:25 np0005486808 nova_compute[259627]: 2025-10-14 09:03:25.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:03:25 np0005486808 nova_compute[259627]: 2025-10-14 09:03:25.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:25 np0005486808 nova_compute[259627]: 2025-10-14 09:03:25.515 2 INFO os_vif [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e4:cb,bridge_name='br-int',has_traffic_filtering=True,id=d1066ec7-d932-4d99-aff7-7f7e80c54724,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1066ec7-d9')#033[00m
Oct 14 05:03:25 np0005486808 nova_compute[259627]: 2025-10-14 09:03:25.592 2 DEBUG nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:03:25 np0005486808 nova_compute[259627]: 2025-10-14 09:03:25.592 2 DEBUG nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:03:25 np0005486808 nova_compute[259627]: 2025-10-14 09:03:25.593 2 DEBUG nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] No VIF found with MAC fa:16:3e:ba:e4:cb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:03:25 np0005486808 nova_compute[259627]: 2025-10-14 09:03:25.593 2 INFO nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Using config drive#033[00m
Oct 14 05:03:25 np0005486808 nova_compute[259627]: 2025-10-14 09:03:25.621 2 DEBUG nova.storage.rbd_utils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] rbd image dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:03:25 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1408: 305 pgs: 305 active+clean; 262 MiB data, 540 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 6.3 MiB/s wr, 289 op/s
Oct 14 05:03:26 np0005486808 ovn_controller[152662]: 2025-10-14T09:03:26Z|00439|binding|INFO|Releasing lport 90534a6a-0aa1-48a2-852b-3056843e4924 from this chassis (sb_readonly=0)
Oct 14 05:03:26 np0005486808 ovn_controller[152662]: 2025-10-14T09:03:26Z|00440|binding|INFO|Releasing lport 0f8f2393-a280-4a6e-ac22-da67bf8d74da from this chassis (sb_readonly=0)
Oct 14 05:03:26 np0005486808 nova_compute[259627]: 2025-10-14 09:03:26.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:26 np0005486808 nova_compute[259627]: 2025-10-14 09:03:26.328 2 INFO nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Creating config drive at /var/lib/nova/instances/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9/disk.config#033[00m
Oct 14 05:03:26 np0005486808 nova_compute[259627]: 2025-10-14 09:03:26.338 2 DEBUG oslo_concurrency.processutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwe02j6kv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:03:26 np0005486808 nova_compute[259627]: 2025-10-14 09:03:26.486 2 DEBUG oslo_concurrency.processutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwe02j6kv" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:03:26 np0005486808 nova_compute[259627]: 2025-10-14 09:03:26.514 2 DEBUG nova.storage.rbd_utils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] rbd image dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:03:26 np0005486808 nova_compute[259627]: 2025-10-14 09:03:26.518 2 DEBUG oslo_concurrency.processutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9/disk.config dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:03:26 np0005486808 nova_compute[259627]: 2025-10-14 09:03:26.550 2 DEBUG nova.network.neutron [req-6a3a9b33-071e-45f5-a0c3-208a7ce7f382 req-7cc2fc0e-01d0-41b9-bfb5-77567be59efa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Updated VIF entry in instance network info cache for port d1066ec7-d932-4d99-aff7-7f7e80c54724. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:03:26 np0005486808 nova_compute[259627]: 2025-10-14 09:03:26.551 2 DEBUG nova.network.neutron [req-6a3a9b33-071e-45f5-a0c3-208a7ce7f382 req-7cc2fc0e-01d0-41b9-bfb5-77567be59efa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Updating instance_info_cache with network_info: [{"id": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "address": "fa:16:3e:ba:e4:cb", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1066ec7-d9", "ovs_interfaceid": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:03:26 np0005486808 nova_compute[259627]: 2025-10-14 09:03:26.570 2 DEBUG oslo_concurrency.lockutils [req-6a3a9b33-071e-45f5-a0c3-208a7ce7f382 req-7cc2fc0e-01d0-41b9-bfb5-77567be59efa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:03:26 np0005486808 nova_compute[259627]: 2025-10-14 09:03:26.670 2 DEBUG oslo_concurrency.processutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9/disk.config dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:03:26 np0005486808 nova_compute[259627]: 2025-10-14 09:03:26.671 2 INFO nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Deleting local config drive /var/lib/nova/instances/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9/disk.config because it was imported into RBD.#033[00m
Oct 14 05:03:26 np0005486808 kernel: tapd1066ec7-d9: entered promiscuous mode
Oct 14 05:03:26 np0005486808 NetworkManager[44885]: <info>  [1760432606.7163] manager: (tapd1066ec7-d9): new Tun device (/org/freedesktop/NetworkManager/Devices/199)
Oct 14 05:03:26 np0005486808 ovn_controller[152662]: 2025-10-14T09:03:26Z|00441|binding|INFO|Claiming lport d1066ec7-d932-4d99-aff7-7f7e80c54724 for this chassis.
Oct 14 05:03:26 np0005486808 ovn_controller[152662]: 2025-10-14T09:03:26Z|00442|binding|INFO|d1066ec7-d932-4d99-aff7-7f7e80c54724: Claiming fa:16:3e:ba:e4:cb 10.100.0.7
Oct 14 05:03:26 np0005486808 nova_compute[259627]: 2025-10-14 09:03:26.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:26.726 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:e4:cb 10.100.0.7'], port_security=['fa:16:3e:ba:e4:cb 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c6eb8a4-6604-462a-8730-43f3742053a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '368d762ed02e459d892ad1e5488c2871', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0e3897ff-4300-4387-bfc4-36acf3f6c752', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d063443f-d68f-4763-a6fc-fc1631e3cde1, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=d1066ec7-d932-4d99-aff7-7f7e80c54724) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:03:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:26.728 162547 INFO neutron.agent.ovn.metadata.agent [-] Port d1066ec7-d932-4d99-aff7-7f7e80c54724 in datapath 7c6eb8a4-6604-462a-8730-43f3742053a7 bound to our chassis#033[00m
Oct 14 05:03:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:26.729 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7c6eb8a4-6604-462a-8730-43f3742053a7#033[00m
Oct 14 05:03:26 np0005486808 ovn_controller[152662]: 2025-10-14T09:03:26Z|00443|binding|INFO|Setting lport d1066ec7-d932-4d99-aff7-7f7e80c54724 ovn-installed in OVS
Oct 14 05:03:26 np0005486808 ovn_controller[152662]: 2025-10-14T09:03:26Z|00444|binding|INFO|Setting lport d1066ec7-d932-4d99-aff7-7f7e80c54724 up in Southbound
Oct 14 05:03:26 np0005486808 nova_compute[259627]: 2025-10-14 09:03:26.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:26 np0005486808 nova_compute[259627]: 2025-10-14 09:03:26.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:26 np0005486808 systemd-udevd[313760]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:03:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:26.752 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[db47fdfd-d0c5-4806-a2a0-2960c05b5153]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:26 np0005486808 NetworkManager[44885]: <info>  [1760432606.7579] device (tapd1066ec7-d9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:03:26 np0005486808 NetworkManager[44885]: <info>  [1760432606.7586] device (tapd1066ec7-d9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:03:26 np0005486808 systemd-machined[214636]: New machine qemu-61-instance-00000032.
Oct 14 05:03:26 np0005486808 systemd[1]: Started Virtual Machine qemu-61-instance-00000032.
Oct 14 05:03:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:26.786 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[95faa109-5686-4f4c-a654-14f062658dcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:26.789 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[79218951-7899-4646-b041-2e3eecc14d1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:26.827 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1eeb58f3-610f-4a14-a18f-d92098154bd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:26.854 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[06a95283-777f-4994-be60-5d4b4e0b7869]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c6eb8a4-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:62:70:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 130], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633775, 'reachable_time': 24446, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313773, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:26.873 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d794515b-7249-4d1b-8a32-45e2cc5bb63d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7c6eb8a4-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 633788, 'tstamp': 633788}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313775, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7c6eb8a4-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 633791, 'tstamp': 633791}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313775, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:26.875 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c6eb8a4-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:03:26 np0005486808 nova_compute[259627]: 2025-10-14 09:03:26.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:26 np0005486808 nova_compute[259627]: 2025-10-14 09:03:26.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:26.878 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c6eb8a4-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:03:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:26.879 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:03:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:26.879 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7c6eb8a4-60, col_values=(('external_ids', {'iface-id': '0f8f2393-a280-4a6e-ac22-da67bf8d74da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:03:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:26.880 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:03:26 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Oct 14 05:03:26 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Oct 14 05:03:26 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Oct 14 05:03:27 np0005486808 nova_compute[259627]: 2025-10-14 09:03:27.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:27 np0005486808 nova_compute[259627]: 2025-10-14 09:03:27.335 2 DEBUG nova.compute.manager [req-2daa6508-c044-4394-a246-e8cd2c8182f0 req-36b19759-38fa-470e-8aaf-1e092380dcfe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Received event network-vif-plugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:03:27 np0005486808 nova_compute[259627]: 2025-10-14 09:03:27.336 2 DEBUG oslo_concurrency.lockutils [req-2daa6508-c044-4394-a246-e8cd2c8182f0 req-36b19759-38fa-470e-8aaf-1e092380dcfe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:03:27 np0005486808 nova_compute[259627]: 2025-10-14 09:03:27.336 2 DEBUG oslo_concurrency.lockutils [req-2daa6508-c044-4394-a246-e8cd2c8182f0 req-36b19759-38fa-470e-8aaf-1e092380dcfe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:03:27 np0005486808 nova_compute[259627]: 2025-10-14 09:03:27.336 2 DEBUG oslo_concurrency.lockutils [req-2daa6508-c044-4394-a246-e8cd2c8182f0 req-36b19759-38fa-470e-8aaf-1e092380dcfe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:03:27 np0005486808 nova_compute[259627]: 2025-10-14 09:03:27.336 2 DEBUG nova.compute.manager [req-2daa6508-c044-4394-a246-e8cd2c8182f0 req-36b19759-38fa-470e-8aaf-1e092380dcfe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Processing event network-vif-plugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:03:27 np0005486808 nova_compute[259627]: 2025-10-14 09:03:27.542 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:03:27 np0005486808 nova_compute[259627]: 2025-10-14 09:03:27.543 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:03:27 np0005486808 nova_compute[259627]: 2025-10-14 09:03:27.544 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 05:03:27 np0005486808 nova_compute[259627]: 2025-10-14 09:03:27.581 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct 14 05:03:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:03:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Oct 14 05:03:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Oct 14 05:03:27 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Oct 14 05:03:27 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1411: 305 pgs: 305 active+clean; 262 MiB data, 540 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 3.2 MiB/s wr, 73 op/s
Oct 14 05:03:27 np0005486808 nova_compute[259627]: 2025-10-14 09:03:27.848 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:03:27 np0005486808 nova_compute[259627]: 2025-10-14 09:03:27.848 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:03:27 np0005486808 nova_compute[259627]: 2025-10-14 09:03:27.848 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 14 05:03:27 np0005486808 nova_compute[259627]: 2025-10-14 09:03:27.849 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:03:27 np0005486808 nova_compute[259627]: 2025-10-14 09:03:27.905 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432607.9047358, dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:03:27 np0005486808 nova_compute[259627]: 2025-10-14 09:03:27.905 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] VM Started (Lifecycle Event)#033[00m
Oct 14 05:03:27 np0005486808 nova_compute[259627]: 2025-10-14 09:03:27.907 2 DEBUG nova.compute.manager [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:03:27 np0005486808 nova_compute[259627]: 2025-10-14 09:03:27.915 2 DEBUG nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:03:27 np0005486808 nova_compute[259627]: 2025-10-14 09:03:27.918 2 INFO nova.virt.libvirt.driver [-] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Instance spawned successfully.#033[00m
Oct 14 05:03:27 np0005486808 nova_compute[259627]: 2025-10-14 09:03:27.919 2 DEBUG nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:03:27 np0005486808 nova_compute[259627]: 2025-10-14 09:03:27.925 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:03:27 np0005486808 nova_compute[259627]: 2025-10-14 09:03:27.927 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:03:27 np0005486808 nova_compute[259627]: 2025-10-14 09:03:27.939 2 DEBUG nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:03:27 np0005486808 nova_compute[259627]: 2025-10-14 09:03:27.939 2 DEBUG nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:03:27 np0005486808 nova_compute[259627]: 2025-10-14 09:03:27.940 2 DEBUG nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:03:27 np0005486808 nova_compute[259627]: 2025-10-14 09:03:27.940 2 DEBUG nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:03:27 np0005486808 nova_compute[259627]: 2025-10-14 09:03:27.941 2 DEBUG nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:03:27 np0005486808 nova_compute[259627]: 2025-10-14 09:03:27.941 2 DEBUG nova.virt.libvirt.driver [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:03:27 np0005486808 nova_compute[259627]: 2025-10-14 09:03:27.948 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:03:27 np0005486808 nova_compute[259627]: 2025-10-14 09:03:27.949 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432607.9049146, dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:03:27 np0005486808 nova_compute[259627]: 2025-10-14 09:03:27.949 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:03:27 np0005486808 nova_compute[259627]: 2025-10-14 09:03:27.971 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:03:27 np0005486808 nova_compute[259627]: 2025-10-14 09:03:27.973 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432607.91331, dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:03:27 np0005486808 nova_compute[259627]: 2025-10-14 09:03:27.973 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:03:28 np0005486808 nova_compute[259627]: 2025-10-14 09:03:28.002 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:03:28 np0005486808 nova_compute[259627]: 2025-10-14 09:03:28.005 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:03:28 np0005486808 nova_compute[259627]: 2025-10-14 09:03:28.024 2 INFO nova.compute.manager [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Took 7.48 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:03:28 np0005486808 nova_compute[259627]: 2025-10-14 09:03:28.024 2 DEBUG nova.compute.manager [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:03:28 np0005486808 nova_compute[259627]: 2025-10-14 09:03:28.025 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:03:28 np0005486808 nova_compute[259627]: 2025-10-14 09:03:28.089 2 INFO nova.compute.manager [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Took 8.78 seconds to build instance.#033[00m
Oct 14 05:03:28 np0005486808 nova_compute[259627]: 2025-10-14 09:03:28.109 2 DEBUG oslo_concurrency.lockutils [None req-0e7e65d5-9f5e-4de6-a1ac-0afa514c70d9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.866s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:03:28 np0005486808 ovn_controller[152662]: 2025-10-14T09:03:28Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b8:b3:92 10.100.0.8
Oct 14 05:03:28 np0005486808 ovn_controller[152662]: 2025-10-14T09:03:28Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b8:b3:92 10.100.0.8
Oct 14 05:03:28 np0005486808 nova_compute[259627]: 2025-10-14 09:03:28.662 2 DEBUG nova.compute.manager [None req-82976428-57f8-4d6f-81b5-cab4bc2e746d f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:03:28 np0005486808 nova_compute[259627]: 2025-10-14 09:03:28.713 2 INFO nova.compute.manager [None req-82976428-57f8-4d6f-81b5-cab4bc2e746d f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] instance snapshotting#033[00m
Oct 14 05:03:29 np0005486808 nova_compute[259627]: 2025-10-14 09:03:29.042 2 INFO nova.virt.libvirt.driver [None req-82976428-57f8-4d6f-81b5-cab4bc2e746d f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Beginning live snapshot process#033[00m
Oct 14 05:03:29 np0005486808 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 05:03:29 np0005486808 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 18K writes, 71K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.03 MB/s#012Cumulative WAL: 18K writes, 5948 syncs, 3.06 writes per sync, written: 0.07 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 12K writes, 46K keys, 12K commit groups, 1.0 writes per commit group, ingest: 48.06 MB, 0.08 MB/s#012Interval WAL: 12K writes, 4849 syncs, 2.51 writes per sync, written: 0.05 GB, 0.08 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 05:03:29 np0005486808 nova_compute[259627]: 2025-10-14 09:03:29.221 2 DEBUG nova.virt.libvirt.imagebackend [None req-82976428-57f8-4d6f-81b5-cab4bc2e746d f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] No parent info for a4789543-f429-47d7-9f79-80a9d90a59f9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct 14 05:03:29 np0005486808 nova_compute[259627]: 2025-10-14 09:03:29.529 2 DEBUG nova.compute.manager [req-2ef443aa-d15a-4221-9629-4e3c375af1ed req-31a1ccce-21e8-47cd-adbf-d2d4415105a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Received event network-vif-plugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:03:29 np0005486808 nova_compute[259627]: 2025-10-14 09:03:29.530 2 DEBUG oslo_concurrency.lockutils [req-2ef443aa-d15a-4221-9629-4e3c375af1ed req-31a1ccce-21e8-47cd-adbf-d2d4415105a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:03:29 np0005486808 nova_compute[259627]: 2025-10-14 09:03:29.531 2 DEBUG oslo_concurrency.lockutils [req-2ef443aa-d15a-4221-9629-4e3c375af1ed req-31a1ccce-21e8-47cd-adbf-d2d4415105a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:03:29 np0005486808 nova_compute[259627]: 2025-10-14 09:03:29.531 2 DEBUG oslo_concurrency.lockutils [req-2ef443aa-d15a-4221-9629-4e3c375af1ed req-31a1ccce-21e8-47cd-adbf-d2d4415105a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:03:29 np0005486808 nova_compute[259627]: 2025-10-14 09:03:29.532 2 DEBUG nova.compute.manager [req-2ef443aa-d15a-4221-9629-4e3c375af1ed req-31a1ccce-21e8-47cd-adbf-d2d4415105a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] No waiting events found dispatching network-vif-plugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:03:29 np0005486808 nova_compute[259627]: 2025-10-14 09:03:29.532 2 WARNING nova.compute.manager [req-2ef443aa-d15a-4221-9629-4e3c375af1ed req-31a1ccce-21e8-47cd-adbf-d2d4415105a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Received unexpected event network-vif-plugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:03:29 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1412: 305 pgs: 305 active+clean; 262 MiB data, 540 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 2.7 MiB/s wr, 61 op/s
Oct 14 05:03:29 np0005486808 nova_compute[259627]: 2025-10-14 09:03:29.932 2 DEBUG nova.storage.rbd_utils [None req-82976428-57f8-4d6f-81b5-cab4bc2e746d f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] creating snapshot(2af6a09057ae4279bdcc856d858c397f) on rbd image(e0bc2109-5f5c-4797-98c7-866f2d11f513_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 14 05:03:29 np0005486808 nova_compute[259627]: 2025-10-14 09:03:29.980 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Updating instance_info_cache with network_info: [{"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:03:29 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e193 do_prune osdmap full prune enabled
Oct 14 05:03:30 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e194 e194: 3 total, 3 up, 3 in
Oct 14 05:03:30 np0005486808 nova_compute[259627]: 2025-10-14 09:03:30.001 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:03:30 np0005486808 nova_compute[259627]: 2025-10-14 09:03:30.001 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 14 05:03:30 np0005486808 nova_compute[259627]: 2025-10-14 09:03:30.002 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:03:30 np0005486808 nova_compute[259627]: 2025-10-14 09:03:30.002 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:03:30 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e194: 3 total, 3 up, 3 in
Oct 14 05:03:30 np0005486808 nova_compute[259627]: 2025-10-14 09:03:30.053 2 DEBUG nova.storage.rbd_utils [None req-82976428-57f8-4d6f-81b5-cab4bc2e746d f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] cloning vms/e0bc2109-5f5c-4797-98c7-866f2d11f513_disk@2af6a09057ae4279bdcc856d858c397f to images/03a1c53c-9452-4b5f-b9d0-29539ac9e6c6 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct 14 05:03:30 np0005486808 nova_compute[259627]: 2025-10-14 09:03:30.135 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:03:30 np0005486808 nova_compute[259627]: 2025-10-14 09:03:30.160 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Triggering sync for uuid de383510-2de3-40bd-b479-c0010b3f2d1c _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct 14 05:03:30 np0005486808 nova_compute[259627]: 2025-10-14 09:03:30.160 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Triggering sync for uuid e0bc2109-5f5c-4797-98c7-866f2d11f513 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct 14 05:03:30 np0005486808 nova_compute[259627]: 2025-10-14 09:03:30.160 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Triggering sync for uuid dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct 14 05:03:30 np0005486808 nova_compute[259627]: 2025-10-14 09:03:30.161 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:03:30 np0005486808 nova_compute[259627]: 2025-10-14 09:03:30.161 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:03:30 np0005486808 nova_compute[259627]: 2025-10-14 09:03:30.161 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "e0bc2109-5f5c-4797-98c7-866f2d11f513" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:03:30 np0005486808 nova_compute[259627]: 2025-10-14 09:03:30.162 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "e0bc2109-5f5c-4797-98c7-866f2d11f513" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:03:30 np0005486808 nova_compute[259627]: 2025-10-14 09:03:30.162 2 INFO nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] During sync_power_state the instance has a pending task (image_uploading). Skip.#033[00m
Oct 14 05:03:30 np0005486808 nova_compute[259627]: 2025-10-14 09:03:30.162 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "e0bc2109-5f5c-4797-98c7-866f2d11f513" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:03:30 np0005486808 nova_compute[259627]: 2025-10-14 09:03:30.163 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:03:30 np0005486808 nova_compute[259627]: 2025-10-14 09:03:30.163 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:03:30 np0005486808 nova_compute[259627]: 2025-10-14 09:03:30.170 2 DEBUG nova.storage.rbd_utils [None req-82976428-57f8-4d6f-81b5-cab4bc2e746d f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] flattening images/03a1c53c-9452-4b5f-b9d0-29539ac9e6c6 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct 14 05:03:30 np0005486808 nova_compute[259627]: 2025-10-14 09:03:30.255 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:03:30 np0005486808 nova_compute[259627]: 2025-10-14 09:03:30.256 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.093s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:03:30 np0005486808 nova_compute[259627]: 2025-10-14 09:03:30.473 2 DEBUG nova.storage.rbd_utils [None req-82976428-57f8-4d6f-81b5-cab4bc2e746d f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] removing snapshot(2af6a09057ae4279bdcc856d858c397f) on rbd image(e0bc2109-5f5c-4797-98c7-866f2d11f513_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct 14 05:03:30 np0005486808 nova_compute[259627]: 2025-10-14 09:03:30.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:31 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e194 do_prune osdmap full prune enabled
Oct 14 05:03:31 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e195 e195: 3 total, 3 up, 3 in
Oct 14 05:03:31 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e195: 3 total, 3 up, 3 in
Oct 14 05:03:31 np0005486808 nova_compute[259627]: 2025-10-14 09:03:31.066 2 DEBUG nova.storage.rbd_utils [None req-82976428-57f8-4d6f-81b5-cab4bc2e746d f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] creating snapshot(snap) on rbd image(03a1c53c-9452-4b5f-b9d0-29539ac9e6c6) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 14 05:03:31 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1415: 305 pgs: 305 active+clean; 285 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 12 MiB/s wr, 547 op/s
Oct 14 05:03:31 np0005486808 nova_compute[259627]: 2025-10-14 09:03:31.961 2 INFO nova.compute.manager [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Rebuilding instance#033[00m
Oct 14 05:03:32 np0005486808 nova_compute[259627]: 2025-10-14 09:03:32.001 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:03:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e195 do_prune osdmap full prune enabled
Oct 14 05:03:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e196 e196: 3 total, 3 up, 3 in
Oct 14 05:03:32 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e196: 3 total, 3 up, 3 in
Oct 14 05:03:32 np0005486808 nova_compute[259627]: 2025-10-14 09:03:32.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:32 np0005486808 nova_compute[259627]: 2025-10-14 09:03:32.221 2 DEBUG nova.objects.instance [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'trusted_certs' on Instance uuid dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:03:32 np0005486808 nova_compute[259627]: 2025-10-14 09:03:32.236 2 DEBUG nova.compute.manager [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:03:32 np0005486808 nova_compute[259627]: 2025-10-14 09:03:32.295 2 DEBUG nova.objects.instance [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'pci_requests' on Instance uuid dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:03:32 np0005486808 nova_compute[259627]: 2025-10-14 09:03:32.307 2 DEBUG nova.objects.instance [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'pci_devices' on Instance uuid dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:03:32 np0005486808 nova_compute[259627]: 2025-10-14 09:03:32.319 2 DEBUG nova.objects.instance [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'resources' on Instance uuid dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:03:32 np0005486808 nova_compute[259627]: 2025-10-14 09:03:32.339 2 DEBUG nova.objects.instance [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'migration_context' on Instance uuid dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:03:32 np0005486808 nova_compute[259627]: 2025-10-14 09:03:32.354 2 DEBUG nova.objects.instance [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct 14 05:03:32 np0005486808 nova_compute[259627]: 2025-10-14 09:03:32.358 2 DEBUG nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct 14 05:03:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:03:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e196 do_prune osdmap full prune enabled
Oct 14 05:03:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e197 e197: 3 total, 3 up, 3 in
Oct 14 05:03:32 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e197: 3 total, 3 up, 3 in
Oct 14 05:03:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:03:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:03:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:03:32
Oct 14 05:03:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:03:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:03:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['.mgr', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'vms', 'backups', 'cephfs.cephfs.data', 'images']
Oct 14 05:03:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:03:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:03:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:03:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:03:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:03:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:03:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:03:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:03:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:03:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:03:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:03:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:03:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:03:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:03:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:03:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:03:33 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:03:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:03:33 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:03:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:03:33 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:03:33 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 097101cc-f3e2-4a4e-be06-0d77e2cd731b does not exist
Oct 14 05:03:33 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 6f1e9222-7c98-4a36-a2d2-7b69ece705d1 does not exist
Oct 14 05:03:33 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 29a49308-7056-4d18-b833-90d996acc8c0 does not exist
Oct 14 05:03:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:03:33 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:03:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:03:33 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:03:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:03:33 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:03:33 np0005486808 nova_compute[259627]: 2025-10-14 09:03:33.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:33 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1418: 305 pgs: 305 active+clean; 285 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 14 MiB/s rd, 14 MiB/s wr, 642 op/s
Oct 14 05:03:33 np0005486808 nova_compute[259627]: 2025-10-14 09:03:33.862 2 INFO nova.virt.libvirt.driver [None req-82976428-57f8-4d6f-81b5-cab4bc2e746d f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Snapshot image upload complete#033[00m
Oct 14 05:03:33 np0005486808 nova_compute[259627]: 2025-10-14 09:03:33.863 2 INFO nova.compute.manager [None req-82976428-57f8-4d6f-81b5-cab4bc2e746d f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Took 5.15 seconds to snapshot the instance on the hypervisor.#033[00m
Oct 14 05:03:34 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:03:34 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:03:34 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:03:34 np0005486808 podman[314229]: 2025-10-14 09:03:34.095545877 +0000 UTC m=+0.049624241 container create a05546547dd6590ca339d93a1ac4fa2979f2a343e19a82049da317b73796b6f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_knuth, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 14 05:03:34 np0005486808 systemd[1]: Started libpod-conmon-a05546547dd6590ca339d93a1ac4fa2979f2a343e19a82049da317b73796b6f3.scope.
Oct 14 05:03:34 np0005486808 podman[314229]: 2025-10-14 09:03:34.072261435 +0000 UTC m=+0.026339799 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:03:34 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:03:34 np0005486808 podman[314229]: 2025-10-14 09:03:34.203242185 +0000 UTC m=+0.157320549 container init a05546547dd6590ca339d93a1ac4fa2979f2a343e19a82049da317b73796b6f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_knuth, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:03:34 np0005486808 podman[314229]: 2025-10-14 09:03:34.211925119 +0000 UTC m=+0.166003463 container start a05546547dd6590ca339d93a1ac4fa2979f2a343e19a82049da317b73796b6f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_knuth, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 14 05:03:34 np0005486808 podman[314229]: 2025-10-14 09:03:34.216024609 +0000 UTC m=+0.170102973 container attach a05546547dd6590ca339d93a1ac4fa2979f2a343e19a82049da317b73796b6f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_knuth, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 14 05:03:34 np0005486808 optimistic_knuth[314245]: 167 167
Oct 14 05:03:34 np0005486808 systemd[1]: libpod-a05546547dd6590ca339d93a1ac4fa2979f2a343e19a82049da317b73796b6f3.scope: Deactivated successfully.
Oct 14 05:03:34 np0005486808 conmon[314245]: conmon a05546547dd6590ca339 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a05546547dd6590ca339d93a1ac4fa2979f2a343e19a82049da317b73796b6f3.scope/container/memory.events
Oct 14 05:03:34 np0005486808 podman[314229]: 2025-10-14 09:03:34.220777706 +0000 UTC m=+0.174856060 container died a05546547dd6590ca339d93a1ac4fa2979f2a343e19a82049da317b73796b6f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_knuth, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 14 05:03:34 np0005486808 systemd[1]: var-lib-containers-storage-overlay-69ddd5c392fb4b2841d7d90b2cd5e1ce1bd71a01e8b92a455db2491b6aa66e6b-merged.mount: Deactivated successfully.
Oct 14 05:03:34 np0005486808 podman[314229]: 2025-10-14 09:03:34.265816004 +0000 UTC m=+0.219894348 container remove a05546547dd6590ca339d93a1ac4fa2979f2a343e19a82049da317b73796b6f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:03:34 np0005486808 systemd[1]: libpod-conmon-a05546547dd6590ca339d93a1ac4fa2979f2a343e19a82049da317b73796b6f3.scope: Deactivated successfully.
Oct 14 05:03:34 np0005486808 podman[314269]: 2025-10-14 09:03:34.46090021 +0000 UTC m=+0.038021346 container create 318b54e9f51dfc72a0a3522b62f761f0e1281a5867109a7c7a74477d27345bcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 14 05:03:34 np0005486808 systemd[1]: Started libpod-conmon-318b54e9f51dfc72a0a3522b62f761f0e1281a5867109a7c7a74477d27345bcc.scope.
Oct 14 05:03:34 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:03:34 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9909f8af47df212e6ab354b00f7672253ec4351cc80ca0a0939fc8f910613455/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:03:34 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9909f8af47df212e6ab354b00f7672253ec4351cc80ca0a0939fc8f910613455/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:03:34 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9909f8af47df212e6ab354b00f7672253ec4351cc80ca0a0939fc8f910613455/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:03:34 np0005486808 podman[314269]: 2025-10-14 09:03:34.445343338 +0000 UTC m=+0.022464494 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:03:34 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9909f8af47df212e6ab354b00f7672253ec4351cc80ca0a0939fc8f910613455/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:03:34 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9909f8af47df212e6ab354b00f7672253ec4351cc80ca0a0939fc8f910613455/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:03:34 np0005486808 podman[314269]: 2025-10-14 09:03:34.556739066 +0000 UTC m=+0.133860242 container init 318b54e9f51dfc72a0a3522b62f761f0e1281a5867109a7c7a74477d27345bcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_germain, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Oct 14 05:03:34 np0005486808 podman[314269]: 2025-10-14 09:03:34.562384495 +0000 UTC m=+0.139505631 container start 318b54e9f51dfc72a0a3522b62f761f0e1281a5867109a7c7a74477d27345bcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_germain, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:03:34 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 05:03:34 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 19K writes, 76K keys, 19K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.03 MB/s#012Cumulative WAL: 19K writes, 6199 syncs, 3.11 writes per sync, written: 0.07 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 12K writes, 47K keys, 12K commit groups, 1.0 writes per commit group, ingest: 49.38 MB, 0.08 MB/s#012Interval WAL: 12K writes, 4738 syncs, 2.55 writes per sync, written: 0.05 GB, 0.08 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 05:03:34 np0005486808 podman[314269]: 2025-10-14 09:03:34.577335033 +0000 UTC m=+0.154456179 container attach 318b54e9f51dfc72a0a3522b62f761f0e1281a5867109a7c7a74477d27345bcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_germain, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 14 05:03:35 np0005486808 nova_compute[259627]: 2025-10-14 09:03:35.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:35 np0005486808 suspicious_germain[314286]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:03:35 np0005486808 suspicious_germain[314286]: --> relative data size: 1.0
Oct 14 05:03:35 np0005486808 suspicious_germain[314286]: --> All data devices are unavailable
Oct 14 05:03:35 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1419: 305 pgs: 305 active+clean; 327 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 13 MiB/s rd, 13 MiB/s wr, 528 op/s
Oct 14 05:03:35 np0005486808 systemd[1]: libpod-318b54e9f51dfc72a0a3522b62f761f0e1281a5867109a7c7a74477d27345bcc.scope: Deactivated successfully.
Oct 14 05:03:35 np0005486808 podman[314269]: 2025-10-14 09:03:35.702811404 +0000 UTC m=+1.279932560 container died 318b54e9f51dfc72a0a3522b62f761f0e1281a5867109a7c7a74477d27345bcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_germain, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 14 05:03:35 np0005486808 systemd[1]: libpod-318b54e9f51dfc72a0a3522b62f761f0e1281a5867109a7c7a74477d27345bcc.scope: Consumed 1.056s CPU time.
Oct 14 05:03:35 np0005486808 systemd[1]: var-lib-containers-storage-overlay-9909f8af47df212e6ab354b00f7672253ec4351cc80ca0a0939fc8f910613455-merged.mount: Deactivated successfully.
Oct 14 05:03:35 np0005486808 podman[314269]: 2025-10-14 09:03:35.776206998 +0000 UTC m=+1.353328134 container remove 318b54e9f51dfc72a0a3522b62f761f0e1281a5867109a7c7a74477d27345bcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Oct 14 05:03:35 np0005486808 systemd[1]: libpod-conmon-318b54e9f51dfc72a0a3522b62f761f0e1281a5867109a7c7a74477d27345bcc.scope: Deactivated successfully.
Oct 14 05:03:35 np0005486808 podman[314323]: 2025-10-14 09:03:35.818307784 +0000 UTC m=+0.070105705 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:03:35 np0005486808 podman[314315]: 2025-10-14 09:03:35.844361774 +0000 UTC m=+0.109711638 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 14 05:03:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e197 do_prune osdmap full prune enabled
Oct 14 05:03:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e198 e198: 3 total, 3 up, 3 in
Oct 14 05:03:36 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e198: 3 total, 3 up, 3 in
Oct 14 05:03:36 np0005486808 podman[314506]: 2025-10-14 09:03:36.421963346 +0000 UTC m=+0.047032448 container create 44180dd25a3b4424ff155fd9a16d98a830dea6486e18045e72f1091e911dd6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:03:36 np0005486808 systemd[1]: Started libpod-conmon-44180dd25a3b4424ff155fd9a16d98a830dea6486e18045e72f1091e911dd6f5.scope.
Oct 14 05:03:36 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:03:36 np0005486808 podman[314506]: 2025-10-14 09:03:36.396149091 +0000 UTC m=+0.021218243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:03:36 np0005486808 podman[314506]: 2025-10-14 09:03:36.516538951 +0000 UTC m=+0.141608073 container init 44180dd25a3b4424ff155fd9a16d98a830dea6486e18045e72f1091e911dd6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lehmann, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 14 05:03:36 np0005486808 podman[314506]: 2025-10-14 09:03:36.524141048 +0000 UTC m=+0.149210150 container start 44180dd25a3b4424ff155fd9a16d98a830dea6486e18045e72f1091e911dd6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 14 05:03:36 np0005486808 priceless_lehmann[314522]: 167 167
Oct 14 05:03:36 np0005486808 systemd[1]: libpod-44180dd25a3b4424ff155fd9a16d98a830dea6486e18045e72f1091e911dd6f5.scope: Deactivated successfully.
Oct 14 05:03:36 np0005486808 podman[314506]: 2025-10-14 09:03:36.531064668 +0000 UTC m=+0.156133790 container attach 44180dd25a3b4424ff155fd9a16d98a830dea6486e18045e72f1091e911dd6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lehmann, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:03:36 np0005486808 conmon[314522]: conmon 44180dd25a3b4424ff15 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-44180dd25a3b4424ff155fd9a16d98a830dea6486e18045e72f1091e911dd6f5.scope/container/memory.events
Oct 14 05:03:36 np0005486808 podman[314506]: 2025-10-14 09:03:36.532268038 +0000 UTC m=+0.157337140 container died 44180dd25a3b4424ff155fd9a16d98a830dea6486e18045e72f1091e911dd6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lehmann, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 14 05:03:36 np0005486808 systemd[1]: var-lib-containers-storage-overlay-55c20d58613d94b246b9551e6c77513bf642172b0189d31023452c72f6119fcc-merged.mount: Deactivated successfully.
Oct 14 05:03:36 np0005486808 podman[314506]: 2025-10-14 09:03:36.576923876 +0000 UTC m=+0.201992978 container remove 44180dd25a3b4424ff155fd9a16d98a830dea6486e18045e72f1091e911dd6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lehmann, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:03:36 np0005486808 systemd[1]: libpod-conmon-44180dd25a3b4424ff155fd9a16d98a830dea6486e18045e72f1091e911dd6f5.scope: Deactivated successfully.
Oct 14 05:03:36 np0005486808 podman[314545]: 2025-10-14 09:03:36.751479948 +0000 UTC m=+0.038010996 container create 6e0ac239049ee9f02b8b4d22c4278ec3259ca482b23f12da131d172e50201391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:03:36 np0005486808 systemd[1]: Started libpod-conmon-6e0ac239049ee9f02b8b4d22c4278ec3259ca482b23f12da131d172e50201391.scope.
Oct 14 05:03:36 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:03:36 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb19d5391c275edbd84971213124be18c190e7f231074cc87b57e7eccc1e8b32/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:03:36 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb19d5391c275edbd84971213124be18c190e7f231074cc87b57e7eccc1e8b32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:03:36 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb19d5391c275edbd84971213124be18c190e7f231074cc87b57e7eccc1e8b32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:03:36 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb19d5391c275edbd84971213124be18c190e7f231074cc87b57e7eccc1e8b32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:03:36 np0005486808 podman[314545]: 2025-10-14 09:03:36.735540776 +0000 UTC m=+0.022071854 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:03:36 np0005486808 podman[314545]: 2025-10-14 09:03:36.835224997 +0000 UTC m=+0.121756075 container init 6e0ac239049ee9f02b8b4d22c4278ec3259ca482b23f12da131d172e50201391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:03:36 np0005486808 podman[314545]: 2025-10-14 09:03:36.84269133 +0000 UTC m=+0.129222388 container start 6e0ac239049ee9f02b8b4d22c4278ec3259ca482b23f12da131d172e50201391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chandrasekhar, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:03:36 np0005486808 podman[314545]: 2025-10-14 09:03:36.847060518 +0000 UTC m=+0.133591626 container attach 6e0ac239049ee9f02b8b4d22c4278ec3259ca482b23f12da131d172e50201391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chandrasekhar, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 14 05:03:36 np0005486808 nova_compute[259627]: 2025-10-14 09:03:36.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:37 np0005486808 nova_compute[259627]: 2025-10-14 09:03:37.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:37 np0005486808 nova_compute[259627]: 2025-10-14 09:03:37.251 2 DEBUG oslo_concurrency.lockutils [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Acquiring lock "e0bc2109-5f5c-4797-98c7-866f2d11f513" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:03:37 np0005486808 nova_compute[259627]: 2025-10-14 09:03:37.251 2 DEBUG oslo_concurrency.lockutils [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Lock "e0bc2109-5f5c-4797-98c7-866f2d11f513" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:03:37 np0005486808 nova_compute[259627]: 2025-10-14 09:03:37.252 2 DEBUG oslo_concurrency.lockutils [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Acquiring lock "e0bc2109-5f5c-4797-98c7-866f2d11f513-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:03:37 np0005486808 nova_compute[259627]: 2025-10-14 09:03:37.252 2 DEBUG oslo_concurrency.lockutils [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Lock "e0bc2109-5f5c-4797-98c7-866f2d11f513-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:03:37 np0005486808 nova_compute[259627]: 2025-10-14 09:03:37.252 2 DEBUG oslo_concurrency.lockutils [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Lock "e0bc2109-5f5c-4797-98c7-866f2d11f513-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:03:37 np0005486808 nova_compute[259627]: 2025-10-14 09:03:37.253 2 INFO nova.compute.manager [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Terminating instance#033[00m
Oct 14 05:03:37 np0005486808 nova_compute[259627]: 2025-10-14 09:03:37.254 2 DEBUG nova.compute.manager [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:03:37 np0005486808 kernel: tapec24b957-09 (unregistering): left promiscuous mode
Oct 14 05:03:37 np0005486808 NetworkManager[44885]: <info>  [1760432617.3158] device (tapec24b957-09): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:03:37 np0005486808 nova_compute[259627]: 2025-10-14 09:03:37.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:37 np0005486808 ovn_controller[152662]: 2025-10-14T09:03:37Z|00445|binding|INFO|Releasing lport ec24b957-093d-460e-a2cf-925bbfd2d421 from this chassis (sb_readonly=0)
Oct 14 05:03:37 np0005486808 ovn_controller[152662]: 2025-10-14T09:03:37Z|00446|binding|INFO|Setting lport ec24b957-093d-460e-a2cf-925bbfd2d421 down in Southbound
Oct 14 05:03:37 np0005486808 ovn_controller[152662]: 2025-10-14T09:03:37Z|00447|binding|INFO|Removing iface tapec24b957-09 ovn-installed in OVS
Oct 14 05:03:37 np0005486808 nova_compute[259627]: 2025-10-14 09:03:37.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:37.339 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:b3:92 10.100.0.8'], port_security=['fa:16:3e:b8:b3:92 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e0bc2109-5f5c-4797-98c7-866f2d11f513', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17ac22f4-a94a-4a44-af02-3207d6bbc30c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '197096bf838b4b289aed810f1495a6c5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '43932228-589c-42e3-996e-587f7969918e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f7b5f101-b0f2-4232-8035-0864b53402a3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=ec24b957-093d-460e-a2cf-925bbfd2d421) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:03:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:37.340 162547 INFO neutron.agent.ovn.metadata.agent [-] Port ec24b957-093d-460e-a2cf-925bbfd2d421 in datapath 17ac22f4-a94a-4a44-af02-3207d6bbc30c unbound from our chassis#033[00m
Oct 14 05:03:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:37.342 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 17ac22f4-a94a-4a44-af02-3207d6bbc30c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:03:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:37.344 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b9a882c0-9aa6-466b-85d9-542dd97b52a1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:37.345 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c namespace which is not needed anymore#033[00m
Oct 14 05:03:37 np0005486808 nova_compute[259627]: 2025-10-14 09:03:37.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:37 np0005486808 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000031.scope: Deactivated successfully.
Oct 14 05:03:37 np0005486808 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000031.scope: Consumed 12.428s CPU time.
Oct 14 05:03:37 np0005486808 systemd-machined[214636]: Machine qemu-60-instance-00000031 terminated.
Oct 14 05:03:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e198 do_prune osdmap full prune enabled
Oct 14 05:03:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e199 e199: 3 total, 3 up, 3 in
Oct 14 05:03:37 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e199: 3 total, 3 up, 3 in
Oct 14 05:03:37 np0005486808 kernel: tapec24b957-09: entered promiscuous mode
Oct 14 05:03:37 np0005486808 NetworkManager[44885]: <info>  [1760432617.4712] manager: (tapec24b957-09): new Tun device (/org/freedesktop/NetworkManager/Devices/200)
Oct 14 05:03:37 np0005486808 ovn_controller[152662]: 2025-10-14T09:03:37Z|00448|binding|INFO|Claiming lport ec24b957-093d-460e-a2cf-925bbfd2d421 for this chassis.
Oct 14 05:03:37 np0005486808 ovn_controller[152662]: 2025-10-14T09:03:37Z|00449|binding|INFO|ec24b957-093d-460e-a2cf-925bbfd2d421: Claiming fa:16:3e:b8:b3:92 10.100.0.8
Oct 14 05:03:37 np0005486808 nova_compute[259627]: 2025-10-14 09:03:37.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:37 np0005486808 kernel: tapec24b957-09 (unregistering): left promiscuous mode
Oct 14 05:03:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:37.496 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:b3:92 10.100.0.8'], port_security=['fa:16:3e:b8:b3:92 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e0bc2109-5f5c-4797-98c7-866f2d11f513', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17ac22f4-a94a-4a44-af02-3207d6bbc30c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '197096bf838b4b289aed810f1495a6c5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '43932228-589c-42e3-996e-587f7969918e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f7b5f101-b0f2-4232-8035-0864b53402a3, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=ec24b957-093d-460e-a2cf-925bbfd2d421) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:03:37 np0005486808 nova_compute[259627]: 2025-10-14 09:03:37.498 2 INFO nova.virt.libvirt.driver [-] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Instance destroyed successfully.#033[00m
Oct 14 05:03:37 np0005486808 nova_compute[259627]: 2025-10-14 09:03:37.498 2 DEBUG nova.objects.instance [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Lazy-loading 'resources' on Instance uuid e0bc2109-5f5c-4797-98c7-866f2d11f513 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:03:37 np0005486808 neutron-haproxy-ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c[313189]: [NOTICE]   (313193) : haproxy version is 2.8.14-c23fe91
Oct 14 05:03:37 np0005486808 neutron-haproxy-ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c[313189]: [NOTICE]   (313193) : path to executable is /usr/sbin/haproxy
Oct 14 05:03:37 np0005486808 neutron-haproxy-ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c[313189]: [WARNING]  (313193) : Exiting Master process...
Oct 14 05:03:37 np0005486808 neutron-haproxy-ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c[313189]: [ALERT]    (313193) : Current worker (313195) exited with code 143 (Terminated)
Oct 14 05:03:37 np0005486808 neutron-haproxy-ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c[313189]: [WARNING]  (313193) : All workers exited. Exiting... (0)
Oct 14 05:03:37 np0005486808 systemd[1]: libpod-2d35b2e3998973554c79c4e1e783f709fa07a5abbe161bc633633c00b56e9fed.scope: Deactivated successfully.
Oct 14 05:03:37 np0005486808 conmon[313189]: conmon 2d35b2e3998973554c79 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2d35b2e3998973554c79c4e1e783f709fa07a5abbe161bc633633c00b56e9fed.scope/container/memory.events
Oct 14 05:03:37 np0005486808 nova_compute[259627]: 2025-10-14 09:03:37.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:37 np0005486808 ovn_controller[152662]: 2025-10-14T09:03:37Z|00450|binding|INFO|Releasing lport ec24b957-093d-460e-a2cf-925bbfd2d421 from this chassis (sb_readonly=0)
Oct 14 05:03:37 np0005486808 nova_compute[259627]: 2025-10-14 09:03:37.513 2 DEBUG nova.virt.libvirt.vif [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:03:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-664198077',display_name='tempest-ImagesOneServerTestJSON-server-664198077',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-664198077',id=49,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:03:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='197096bf838b4b289aed810f1495a6c5',ramdisk_id='',reservation_id='r-youk5hnv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerTestJSON-1747208535',owner_user_name='tempest-ImagesOneServerTestJSON-1747208535-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:03:33Z,user_data=None,user_id='f302a20e13b14bb999539ee5df041036',uuid=e0bc2109-5f5c-4797-98c7-866f2d11f513,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ec24b957-093d-460e-a2cf-925bbfd2d421", "address": "fa:16:3e:b8:b3:92", "network": {"id": "17ac22f4-a94a-4a44-af02-3207d6bbc30c", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1411673589-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "197096bf838b4b289aed810f1495a6c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec24b957-09", "ovs_interfaceid": "ec24b957-093d-460e-a2cf-925bbfd2d421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:03:37 np0005486808 nova_compute[259627]: 2025-10-14 09:03:37.513 2 DEBUG nova.network.os_vif_util [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Converting VIF {"id": "ec24b957-093d-460e-a2cf-925bbfd2d421", "address": "fa:16:3e:b8:b3:92", "network": {"id": "17ac22f4-a94a-4a44-af02-3207d6bbc30c", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1411673589-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "197096bf838b4b289aed810f1495a6c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec24b957-09", "ovs_interfaceid": "ec24b957-093d-460e-a2cf-925bbfd2d421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:03:37 np0005486808 nova_compute[259627]: 2025-10-14 09:03:37.515 2 DEBUG nova.network.os_vif_util [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:b3:92,bridge_name='br-int',has_traffic_filtering=True,id=ec24b957-093d-460e-a2cf-925bbfd2d421,network=Network(17ac22f4-a94a-4a44-af02-3207d6bbc30c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec24b957-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:03:37 np0005486808 nova_compute[259627]: 2025-10-14 09:03:37.515 2 DEBUG os_vif [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:b3:92,bridge_name='br-int',has_traffic_filtering=True,id=ec24b957-093d-460e-a2cf-925bbfd2d421,network=Network(17ac22f4-a94a-4a44-af02-3207d6bbc30c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec24b957-09') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:03:37 np0005486808 nova_compute[259627]: 2025-10-14 09:03:37.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:37 np0005486808 nova_compute[259627]: 2025-10-14 09:03:37.517 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec24b957-09, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:03:37 np0005486808 podman[314591]: 2025-10-14 09:03:37.518827914 +0000 UTC m=+0.070419482 container died 2d35b2e3998973554c79c4e1e783f709fa07a5abbe161bc633633c00b56e9fed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 05:03:37 np0005486808 nova_compute[259627]: 2025-10-14 09:03:37.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:37 np0005486808 nova_compute[259627]: 2025-10-14 09:03:37.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:03:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:37.525 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:b3:92 10.100.0.8'], port_security=['fa:16:3e:b8:b3:92 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e0bc2109-5f5c-4797-98c7-866f2d11f513', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17ac22f4-a94a-4a44-af02-3207d6bbc30c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '197096bf838b4b289aed810f1495a6c5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '43932228-589c-42e3-996e-587f7969918e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f7b5f101-b0f2-4232-8035-0864b53402a3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=ec24b957-093d-460e-a2cf-925bbfd2d421) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:03:37 np0005486808 nova_compute[259627]: 2025-10-14 09:03:37.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:37 np0005486808 nova_compute[259627]: 2025-10-14 09:03:37.530 2 INFO os_vif [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:b3:92,bridge_name='br-int',has_traffic_filtering=True,id=ec24b957-093d-460e-a2cf-925bbfd2d421,network=Network(17ac22f4-a94a-4a44-af02-3207d6bbc30c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec24b957-09')#033[00m
Oct 14 05:03:37 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2d35b2e3998973554c79c4e1e783f709fa07a5abbe161bc633633c00b56e9fed-userdata-shm.mount: Deactivated successfully.
Oct 14 05:03:37 np0005486808 systemd[1]: var-lib-containers-storage-overlay-a0f27f2d659583d74823866ac940d88f699c60f5ba72cd023b094e06f8a3931c-merged.mount: Deactivated successfully.
Oct 14 05:03:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:03:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e199 do_prune osdmap full prune enabled
Oct 14 05:03:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e200 e200: 3 total, 3 up, 3 in
Oct 14 05:03:37 np0005486808 podman[314591]: 2025-10-14 09:03:37.631449005 +0000 UTC m=+0.183040563 container cleanup 2d35b2e3998973554c79c4e1e783f709fa07a5abbe161bc633633c00b56e9fed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]: {
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:    "0": [
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:        {
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:            "devices": [
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:                "/dev/loop3"
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:            ],
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:            "lv_name": "ceph_lv0",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:            "lv_size": "21470642176",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:            "name": "ceph_lv0",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:            "tags": {
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:                "ceph.cluster_name": "ceph",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:                "ceph.crush_device_class": "",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:                "ceph.encrypted": "0",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:                "ceph.osd_id": "0",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:                "ceph.type": "block",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:                "ceph.vdo": "0"
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:            },
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:            "type": "block",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:            "vg_name": "ceph_vg0"
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:        }
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:    ],
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:    "1": [
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:        {
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:            "devices": [
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:                "/dev/loop4"
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:            ],
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:            "lv_name": "ceph_lv1",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:            "lv_size": "21470642176",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:            "name": "ceph_lv1",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:            "tags": {
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:                "ceph.cluster_name": "ceph",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:                "ceph.crush_device_class": "",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:                "ceph.encrypted": "0",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:                "ceph.osd_id": "1",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:                "ceph.type": "block",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:                "ceph.vdo": "0"
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:            },
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:            "type": "block",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:            "vg_name": "ceph_vg1"
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:        }
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:    ],
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:    "2": [
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:        {
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:            "devices": [
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:                "/dev/loop5"
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:            ],
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:            "lv_name": "ceph_lv2",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:            "lv_size": "21470642176",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:            "name": "ceph_lv2",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:            "tags": {
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:                "ceph.cluster_name": "ceph",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:                "ceph.crush_device_class": "",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:                "ceph.encrypted": "0",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:                "ceph.osd_id": "2",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:                "ceph.type": "block",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:                "ceph.vdo": "0"
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:            },
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:            "type": "block",
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:            "vg_name": "ceph_vg2"
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:        }
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]:    ]
Oct 14 05:03:37 np0005486808 funny_chandrasekhar[314563]: }
Oct 14 05:03:37 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e200: 3 total, 3 up, 3 in
Oct 14 05:03:37 np0005486808 systemd[1]: libpod-conmon-2d35b2e3998973554c79c4e1e783f709fa07a5abbe161bc633633c00b56e9fed.scope: Deactivated successfully.
Oct 14 05:03:37 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1423: 305 pgs: 305 active+clean; 327 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.5 MiB/s wr, 83 op/s
Oct 14 05:03:37 np0005486808 systemd[1]: libpod-6e0ac239049ee9f02b8b4d22c4278ec3259ca482b23f12da131d172e50201391.scope: Deactivated successfully.
Oct 14 05:03:37 np0005486808 podman[314545]: 2025-10-14 09:03:37.671414536 +0000 UTC m=+0.957945594 container died 6e0ac239049ee9f02b8b4d22c4278ec3259ca482b23f12da131d172e50201391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:03:37 np0005486808 systemd[1]: var-lib-containers-storage-overlay-fb19d5391c275edbd84971213124be18c190e7f231074cc87b57e7eccc1e8b32-merged.mount: Deactivated successfully.
Oct 14 05:03:37 np0005486808 podman[314643]: 2025-10-14 09:03:37.720738938 +0000 UTC m=+0.052343037 container remove 2d35b2e3998973554c79c4e1e783f709fa07a5abbe161bc633633c00b56e9fed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 14 05:03:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:37.728 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[defa7945-225b-43bf-a2d9-3ee5c7a527dd]: (4, ('Tue Oct 14 09:03:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c (2d35b2e3998973554c79c4e1e783f709fa07a5abbe161bc633633c00b56e9fed)\n2d35b2e3998973554c79c4e1e783f709fa07a5abbe161bc633633c00b56e9fed\nTue Oct 14 09:03:37 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c (2d35b2e3998973554c79c4e1e783f709fa07a5abbe161bc633633c00b56e9fed)\n2d35b2e3998973554c79c4e1e783f709fa07a5abbe161bc633633c00b56e9fed\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:37.731 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[693aa322-4bd3-4c9d-a847-87c0de9a4923]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:37.732 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17ac22f4-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:03:37 np0005486808 kernel: tap17ac22f4-a0: left promiscuous mode
Oct 14 05:03:37 np0005486808 nova_compute[259627]: 2025-10-14 09:03:37.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:37 np0005486808 podman[314545]: 2025-10-14 09:03:37.742485713 +0000 UTC m=+1.029016771 container remove 6e0ac239049ee9f02b8b4d22c4278ec3259ca482b23f12da131d172e50201391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Oct 14 05:03:37 np0005486808 systemd[1]: libpod-conmon-6e0ac239049ee9f02b8b4d22c4278ec3259ca482b23f12da131d172e50201391.scope: Deactivated successfully.
Oct 14 05:03:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:37.758 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e0251ad9-b43f-4723-b118-42551af44172]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:37 np0005486808 nova_compute[259627]: 2025-10-14 09:03:37.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:37.792 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1317886b-7355-44ed-ac96-b8d1a7cb9843]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:37.795 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1b98da0b-38ba-468f-99d1-ff20c4a461a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:37.810 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e9b935ff-05aa-4424-b273-c685a20aec2f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 637406, 'reachable_time': 18355, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314677, 'error': None, 'target': 'ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:37 np0005486808 systemd[1]: run-netns-ovnmeta\x2d17ac22f4\x2da94a\x2d4a44\x2daf02\x2d3207d6bbc30c.mount: Deactivated successfully.
Oct 14 05:03:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:37.813 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-17ac22f4-a94a-4a44-af02-3207d6bbc30c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:03:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:37.813 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[2d1cabe8-835f-44c2-96c3-7833e53c4fff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:37.816 162547 INFO neutron.agent.ovn.metadata.agent [-] Port ec24b957-093d-460e-a2cf-925bbfd2d421 in datapath 17ac22f4-a94a-4a44-af02-3207d6bbc30c unbound from our chassis#033[00m
Oct 14 05:03:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:37.818 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 17ac22f4-a94a-4a44-af02-3207d6bbc30c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:03:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:37.818 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3b112c84-0e8d-440a-859e-fd29d07545d2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:37.819 162547 INFO neutron.agent.ovn.metadata.agent [-] Port ec24b957-093d-460e-a2cf-925bbfd2d421 in datapath 17ac22f4-a94a-4a44-af02-3207d6bbc30c unbound from our chassis#033[00m
Oct 14 05:03:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:37.820 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 17ac22f4-a94a-4a44-af02-3207d6bbc30c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:03:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:37.821 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f09731ae-e82d-4e68-943b-c3ab79850612]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:38 np0005486808 nova_compute[259627]: 2025-10-14 09:03:38.057 2 INFO nova.virt.libvirt.driver [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Deleting instance files /var/lib/nova/instances/e0bc2109-5f5c-4797-98c7-866f2d11f513_del#033[00m
Oct 14 05:03:38 np0005486808 nova_compute[259627]: 2025-10-14 09:03:38.058 2 INFO nova.virt.libvirt.driver [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Deletion of /var/lib/nova/instances/e0bc2109-5f5c-4797-98c7-866f2d11f513_del complete#033[00m
Oct 14 05:03:38 np0005486808 nova_compute[259627]: 2025-10-14 09:03:38.116 2 INFO nova.compute.manager [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Took 0.86 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:03:38 np0005486808 nova_compute[259627]: 2025-10-14 09:03:38.116 2 DEBUG oslo.service.loopingcall [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:03:38 np0005486808 nova_compute[259627]: 2025-10-14 09:03:38.117 2 DEBUG nova.compute.manager [-] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:03:38 np0005486808 nova_compute[259627]: 2025-10-14 09:03:38.117 2 DEBUG nova.network.neutron [-] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:03:38 np0005486808 podman[314812]: 2025-10-14 09:03:38.369167801 +0000 UTC m=+0.048885853 container create 056837eaaa9e966815e18c016cf7c693fc2dd9a4ad420e1c5f0ad80da9d23828 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:03:38 np0005486808 systemd[1]: Started libpod-conmon-056837eaaa9e966815e18c016cf7c693fc2dd9a4ad420e1c5f0ad80da9d23828.scope.
Oct 14 05:03:38 np0005486808 podman[314812]: 2025-10-14 09:03:38.349717533 +0000 UTC m=+0.029435595 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:03:38 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:03:38 np0005486808 podman[314812]: 2025-10-14 09:03:38.477596957 +0000 UTC m=+0.157315029 container init 056837eaaa9e966815e18c016cf7c693fc2dd9a4ad420e1c5f0ad80da9d23828 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_merkle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 14 05:03:38 np0005486808 podman[314812]: 2025-10-14 09:03:38.48622932 +0000 UTC m=+0.165947332 container start 056837eaaa9e966815e18c016cf7c693fc2dd9a4ad420e1c5f0ad80da9d23828 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:03:38 np0005486808 podman[314812]: 2025-10-14 09:03:38.492154985 +0000 UTC m=+0.171873047 container attach 056837eaaa9e966815e18c016cf7c693fc2dd9a4ad420e1c5f0ad80da9d23828 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 14 05:03:38 np0005486808 bold_merkle[314828]: 167 167
Oct 14 05:03:38 np0005486808 systemd[1]: libpod-056837eaaa9e966815e18c016cf7c693fc2dd9a4ad420e1c5f0ad80da9d23828.scope: Deactivated successfully.
Oct 14 05:03:38 np0005486808 podman[314812]: 2025-10-14 09:03:38.497473686 +0000 UTC m=+0.177191698 container died 056837eaaa9e966815e18c016cf7c693fc2dd9a4ad420e1c5f0ad80da9d23828 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_merkle, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:03:38 np0005486808 systemd[1]: var-lib-containers-storage-overlay-1ace89244b44c3003ba7262e9f0b76d9c34dfc1e2306e1bb9c822a9ab49baf77-merged.mount: Deactivated successfully.
Oct 14 05:03:38 np0005486808 podman[314812]: 2025-10-14 09:03:38.546897261 +0000 UTC m=+0.226615273 container remove 056837eaaa9e966815e18c016cf7c693fc2dd9a4ad420e1c5f0ad80da9d23828 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 14 05:03:38 np0005486808 systemd[1]: libpod-conmon-056837eaaa9e966815e18c016cf7c693fc2dd9a4ad420e1c5f0ad80da9d23828.scope: Deactivated successfully.
Oct 14 05:03:38 np0005486808 podman[314853]: 2025-10-14 09:03:38.727796359 +0000 UTC m=+0.044802573 container create 452eb90a05a7257aa1dee4ac2dfb8f473ddfbee4df864013134fef60ca693415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 14 05:03:38 np0005486808 systemd[1]: Started libpod-conmon-452eb90a05a7257aa1dee4ac2dfb8f473ddfbee4df864013134fef60ca693415.scope.
Oct 14 05:03:38 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:03:38 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f4abce03d93b81154ba28c5170f29ded231cc1aa44e4926ae9f1a6a39156508/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:03:38 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f4abce03d93b81154ba28c5170f29ded231cc1aa44e4926ae9f1a6a39156508/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:03:38 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f4abce03d93b81154ba28c5170f29ded231cc1aa44e4926ae9f1a6a39156508/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:03:38 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f4abce03d93b81154ba28c5170f29ded231cc1aa44e4926ae9f1a6a39156508/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:03:38 np0005486808 podman[314853]: 2025-10-14 09:03:38.709845328 +0000 UTC m=+0.026851562 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:03:38 np0005486808 podman[314853]: 2025-10-14 09:03:38.805636863 +0000 UTC m=+0.122643097 container init 452eb90a05a7257aa1dee4ac2dfb8f473ddfbee4df864013134fef60ca693415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:03:38 np0005486808 podman[314853]: 2025-10-14 09:03:38.811949808 +0000 UTC m=+0.128956022 container start 452eb90a05a7257aa1dee4ac2dfb8f473ddfbee4df864013134fef60ca693415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_archimedes, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:03:38 np0005486808 podman[314853]: 2025-10-14 09:03:38.815217588 +0000 UTC m=+0.132223852 container attach 452eb90a05a7257aa1dee4ac2dfb8f473ddfbee4df864013134fef60ca693415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_archimedes, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 14 05:03:39 np0005486808 nova_compute[259627]: 2025-10-14 09:03:39.393 2 DEBUG nova.network.neutron [-] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:03:39 np0005486808 nova_compute[259627]: 2025-10-14 09:03:39.417 2 INFO nova.compute.manager [-] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Took 1.30 seconds to deallocate network for instance.#033[00m
Oct 14 05:03:39 np0005486808 nova_compute[259627]: 2025-10-14 09:03:39.474 2 DEBUG oslo_concurrency.lockutils [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:03:39 np0005486808 nova_compute[259627]: 2025-10-14 09:03:39.475 2 DEBUG oslo_concurrency.lockutils [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:03:39 np0005486808 nova_compute[259627]: 2025-10-14 09:03:39.540 2 DEBUG nova.compute.manager [req-88538669-7805-4271-bd0b-689c2c37ac80 req-6a82fc19-be16-4856-b06f-8ec2d7a009b2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Received event network-vif-deleted-ec24b957-093d-460e-a2cf-925bbfd2d421 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:03:39 np0005486808 nova_compute[259627]: 2025-10-14 09:03:39.561 2 DEBUG oslo_concurrency.processutils [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:03:39 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1424: 305 pgs: 305 active+clean; 327 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.0 MiB/s wr, 70 op/s
Oct 14 05:03:39 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 05:03:39 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 16K writes, 66K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s#012Cumulative WAL: 16K writes, 5485 syncs, 3.09 writes per sync, written: 0.06 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 10K writes, 40K keys, 10K commit groups, 1.0 writes per commit group, ingest: 41.20 MB, 0.07 MB/s#012Interval WAL: 10K writes, 4359 syncs, 2.48 writes per sync, written: 0.04 GB, 0.07 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 05:03:39 np0005486808 funny_archimedes[314870]: {
Oct 14 05:03:39 np0005486808 funny_archimedes[314870]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:03:39 np0005486808 funny_archimedes[314870]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:03:39 np0005486808 funny_archimedes[314870]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:03:39 np0005486808 funny_archimedes[314870]:        "osd_id": 2,
Oct 14 05:03:39 np0005486808 funny_archimedes[314870]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:03:39 np0005486808 funny_archimedes[314870]:        "type": "bluestore"
Oct 14 05:03:39 np0005486808 funny_archimedes[314870]:    },
Oct 14 05:03:39 np0005486808 funny_archimedes[314870]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:03:39 np0005486808 funny_archimedes[314870]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:03:39 np0005486808 funny_archimedes[314870]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:03:39 np0005486808 funny_archimedes[314870]:        "osd_id": 1,
Oct 14 05:03:39 np0005486808 funny_archimedes[314870]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:03:39 np0005486808 funny_archimedes[314870]:        "type": "bluestore"
Oct 14 05:03:39 np0005486808 funny_archimedes[314870]:    },
Oct 14 05:03:39 np0005486808 funny_archimedes[314870]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:03:39 np0005486808 funny_archimedes[314870]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:03:39 np0005486808 funny_archimedes[314870]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:03:39 np0005486808 funny_archimedes[314870]:        "osd_id": 0,
Oct 14 05:03:39 np0005486808 funny_archimedes[314870]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:03:39 np0005486808 funny_archimedes[314870]:        "type": "bluestore"
Oct 14 05:03:39 np0005486808 funny_archimedes[314870]:    }
Oct 14 05:03:39 np0005486808 funny_archimedes[314870]: }
Oct 14 05:03:39 np0005486808 systemd[1]: libpod-452eb90a05a7257aa1dee4ac2dfb8f473ddfbee4df864013134fef60ca693415.scope: Deactivated successfully.
Oct 14 05:03:39 np0005486808 podman[314853]: 2025-10-14 09:03:39.818502695 +0000 UTC m=+1.135508919 container died 452eb90a05a7257aa1dee4ac2dfb8f473ddfbee4df864013134fef60ca693415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:03:39 np0005486808 systemd[1]: var-lib-containers-storage-overlay-5f4abce03d93b81154ba28c5170f29ded231cc1aa44e4926ae9f1a6a39156508-merged.mount: Deactivated successfully.
Oct 14 05:03:39 np0005486808 podman[314853]: 2025-10-14 09:03:39.879928675 +0000 UTC m=+1.196934889 container remove 452eb90a05a7257aa1dee4ac2dfb8f473ddfbee4df864013134fef60ca693415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 05:03:39 np0005486808 systemd[1]: libpod-conmon-452eb90a05a7257aa1dee4ac2dfb8f473ddfbee4df864013134fef60ca693415.scope: Deactivated successfully.
Oct 14 05:03:39 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:03:39 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:03:39 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:03:39 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:03:39 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev d27e72a3-6703-4150-8a05-07e01c2038f5 does not exist
Oct 14 05:03:39 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 1a418c26-05a4-4720-906c-4e5eb1164e93 does not exist
Oct 14 05:03:40 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:03:40 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2747435737' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:03:40 np0005486808 nova_compute[259627]: 2025-10-14 09:03:40.062 2 DEBUG oslo_concurrency.processutils [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:03:40 np0005486808 nova_compute[259627]: 2025-10-14 09:03:40.068 2 DEBUG nova.compute.provider_tree [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:03:40 np0005486808 nova_compute[259627]: 2025-10-14 09:03:40.090 2 DEBUG nova.scheduler.client.report [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:03:40 np0005486808 nova_compute[259627]: 2025-10-14 09:03:40.184 2 DEBUG oslo_concurrency.lockutils [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:03:40 np0005486808 nova_compute[259627]: 2025-10-14 09:03:40.215 2 INFO nova.scheduler.client.report [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Deleted allocations for instance e0bc2109-5f5c-4797-98c7-866f2d11f513#033[00m
Oct 14 05:03:40 np0005486808 nova_compute[259627]: 2025-10-14 09:03:40.281 2 DEBUG oslo_concurrency.lockutils [None req-7a19ac8f-4ddb-437c-adff-f57fc9b7b807 f302a20e13b14bb999539ee5df041036 197096bf838b4b289aed810f1495a6c5 - - default default] Lock "e0bc2109-5f5c-4797-98c7-866f2d11f513" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.030s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:03:40 np0005486808 ceph-mgr[74543]: [devicehealth INFO root] Check health
Oct 14 05:03:40 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:03:40 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:03:40 np0005486808 ovn_controller[152662]: 2025-10-14T09:03:40Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ba:e4:cb 10.100.0.7
Oct 14 05:03:40 np0005486808 ovn_controller[152662]: 2025-10-14T09:03:40Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ba:e4:cb 10.100.0.7
Oct 14 05:03:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:41.068 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:03:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:41.070 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 14 05:03:41 np0005486808 nova_compute[259627]: 2025-10-14 09:03:41.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:41 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1425: 305 pgs: 305 active+clean; 202 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 709 KiB/s rd, 4.3 MiB/s wr, 247 op/s
Oct 14 05:03:42 np0005486808 nova_compute[259627]: 2025-10-14 09:03:42.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:42 np0005486808 nova_compute[259627]: 2025-10-14 09:03:42.412 2 DEBUG nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct 14 05:03:42 np0005486808 nova_compute[259627]: 2025-10-14 09:03:42.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:03:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e200 do_prune osdmap full prune enabled
Oct 14 05:03:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e201 e201: 3 total, 3 up, 3 in
Oct 14 05:03:42 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e201: 3 total, 3 up, 3 in
Oct 14 05:03:42 np0005486808 ovn_controller[152662]: 2025-10-14T09:03:42Z|00451|binding|INFO|Releasing lport 0f8f2393-a280-4a6e-ac22-da67bf8d74da from this chassis (sb_readonly=0)
Oct 14 05:03:42 np0005486808 nova_compute[259627]: 2025-10-14 09:03:42.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:03:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:03:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:03:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:03:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015175922421371362 of space, bias 1.0, pg target 0.4552776726411409 quantized to 32 (current 32)
Oct 14 05:03:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:03:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:03:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:03:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:03:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:03:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006663670272514163 of space, bias 1.0, pg target 0.19991010817542487 quantized to 32 (current 32)
Oct 14 05:03:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:03:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:03:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:03:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:03:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:03:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:03:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:03:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:03:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:03:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:03:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:03:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:03:43 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1427: 305 pgs: 305 active+clean; 202 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 681 KiB/s rd, 4.1 MiB/s wr, 237 op/s
Oct 14 05:03:43 np0005486808 ovn_controller[152662]: 2025-10-14T09:03:43Z|00452|binding|INFO|Releasing lport 0f8f2393-a280-4a6e-ac22-da67bf8d74da from this chassis (sb_readonly=0)
Oct 14 05:03:43 np0005486808 nova_compute[259627]: 2025-10-14 09:03:43.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:44 np0005486808 kernel: tapd1066ec7-d9 (unregistering): left promiscuous mode
Oct 14 05:03:44 np0005486808 NetworkManager[44885]: <info>  [1760432624.7222] device (tapd1066ec7-d9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:03:44 np0005486808 nova_compute[259627]: 2025-10-14 09:03:44.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:44 np0005486808 ovn_controller[152662]: 2025-10-14T09:03:44Z|00453|binding|INFO|Releasing lport d1066ec7-d932-4d99-aff7-7f7e80c54724 from this chassis (sb_readonly=0)
Oct 14 05:03:44 np0005486808 ovn_controller[152662]: 2025-10-14T09:03:44Z|00454|binding|INFO|Setting lport d1066ec7-d932-4d99-aff7-7f7e80c54724 down in Southbound
Oct 14 05:03:44 np0005486808 ovn_controller[152662]: 2025-10-14T09:03:44Z|00455|binding|INFO|Removing iface tapd1066ec7-d9 ovn-installed in OVS
Oct 14 05:03:44 np0005486808 nova_compute[259627]: 2025-10-14 09:03:44.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:44.757 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:e4:cb 10.100.0.7'], port_security=['fa:16:3e:ba:e4:cb 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c6eb8a4-6604-462a-8730-43f3742053a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '368d762ed02e459d892ad1e5488c2871', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0e3897ff-4300-4387-bfc4-36acf3f6c752', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d063443f-d68f-4763-a6fc-fc1631e3cde1, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=d1066ec7-d932-4d99-aff7-7f7e80c54724) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:03:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:44.758 162547 INFO neutron.agent.ovn.metadata.agent [-] Port d1066ec7-d932-4d99-aff7-7f7e80c54724 in datapath 7c6eb8a4-6604-462a-8730-43f3742053a7 unbound from our chassis#033[00m
Oct 14 05:03:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:44.759 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7c6eb8a4-6604-462a-8730-43f3742053a7#033[00m
Oct 14 05:03:44 np0005486808 nova_compute[259627]: 2025-10-14 09:03:44.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:44 np0005486808 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000032.scope: Deactivated successfully.
Oct 14 05:03:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:44.780 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ca23a335-5a49-490b-a2bd-f81ba21df87f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:44 np0005486808 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000032.scope: Consumed 12.853s CPU time.
Oct 14 05:03:44 np0005486808 systemd-machined[214636]: Machine qemu-61-instance-00000032 terminated.
Oct 14 05:03:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:44.811 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1fc26749-eff9-49dc-880b-aa9091a782f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:44.815 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f7b5ac4c-94d1-48d4-864e-c04df80c264f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:44.844 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ed81fbeb-0a4f-4b33-9cbb-88438574347e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:44.862 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e5531dba-8f4b-4e7f-b21d-2ea07a3140f1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c6eb8a4-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:62:70:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 130], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633775, 'reachable_time': 24446, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314999, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:44.879 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[40fb4e1a-52f8-4de6-9bc1-e3d90272f3e8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7c6eb8a4-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 633788, 'tstamp': 633788}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315000, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7c6eb8a4-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 633791, 'tstamp': 633791}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315000, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:44.881 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c6eb8a4-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:03:44 np0005486808 nova_compute[259627]: 2025-10-14 09:03:44.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:44 np0005486808 nova_compute[259627]: 2025-10-14 09:03:44.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:44.887 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c6eb8a4-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:03:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:44.887 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:03:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:44.887 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7c6eb8a4-60, col_values=(('external_ids', {'iface-id': '0f8f2393-a280-4a6e-ac22-da67bf8d74da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:03:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:44.888 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:03:44 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e201 do_prune osdmap full prune enabled
Oct 14 05:03:44 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e202 e202: 3 total, 3 up, 3 in
Oct 14 05:03:44 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e202: 3 total, 3 up, 3 in
Oct 14 05:03:44 np0005486808 nova_compute[259627]: 2025-10-14 09:03:44.981 2 DEBUG nova.compute.manager [req-5fc091f0-a8b8-489b-bff5-bf41e137c271 req-45ec606c-324b-4b59-85af-71cbcdf5fe79 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Received event network-vif-unplugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:03:44 np0005486808 nova_compute[259627]: 2025-10-14 09:03:44.982 2 DEBUG oslo_concurrency.lockutils [req-5fc091f0-a8b8-489b-bff5-bf41e137c271 req-45ec606c-324b-4b59-85af-71cbcdf5fe79 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:03:44 np0005486808 nova_compute[259627]: 2025-10-14 09:03:44.982 2 DEBUG oslo_concurrency.lockutils [req-5fc091f0-a8b8-489b-bff5-bf41e137c271 req-45ec606c-324b-4b59-85af-71cbcdf5fe79 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:03:44 np0005486808 nova_compute[259627]: 2025-10-14 09:03:44.983 2 DEBUG oslo_concurrency.lockutils [req-5fc091f0-a8b8-489b-bff5-bf41e137c271 req-45ec606c-324b-4b59-85af-71cbcdf5fe79 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:03:44 np0005486808 nova_compute[259627]: 2025-10-14 09:03:44.983 2 DEBUG nova.compute.manager [req-5fc091f0-a8b8-489b-bff5-bf41e137c271 req-45ec606c-324b-4b59-85af-71cbcdf5fe79 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] No waiting events found dispatching network-vif-unplugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:03:44 np0005486808 nova_compute[259627]: 2025-10-14 09:03:44.983 2 WARNING nova.compute.manager [req-5fc091f0-a8b8-489b-bff5-bf41e137c271 req-45ec606c-324b-4b59-85af-71cbcdf5fe79 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Received unexpected event network-vif-unplugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 for instance with vm_state active and task_state rebuilding.#033[00m
Oct 14 05:03:45 np0005486808 nova_compute[259627]: 2025-10-14 09:03:45.430 2 INFO nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Instance shutdown successfully after 13 seconds.#033[00m
Oct 14 05:03:45 np0005486808 nova_compute[259627]: 2025-10-14 09:03:45.437 2 INFO nova.virt.libvirt.driver [-] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Instance destroyed successfully.#033[00m
Oct 14 05:03:45 np0005486808 nova_compute[259627]: 2025-10-14 09:03:45.443 2 INFO nova.virt.libvirt.driver [-] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Instance destroyed successfully.#033[00m
Oct 14 05:03:45 np0005486808 nova_compute[259627]: 2025-10-14 09:03:45.444 2 DEBUG nova.virt.libvirt.vif [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:03:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1481502960',display_name='tempest-ServerActionsTestJSON-server-1331294446',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1481502960',id=50,image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:03:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='368d762ed02e459d892ad1e5488c2871',ramdisk_id='',reservation_id='r-hqb4aruj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1593617559',owner_user_name='tempest-ServerActionsTestJSON-1593617559-proj
ect-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:03:31Z,user_data=None,user_id='aa32af91355a41198fd57121e5c70ec2',uuid=dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "address": "fa:16:3e:ba:e4:cb", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1066ec7-d9", "ovs_interfaceid": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:03:45 np0005486808 nova_compute[259627]: 2025-10-14 09:03:45.445 2 DEBUG nova.network.os_vif_util [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converting VIF {"id": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "address": "fa:16:3e:ba:e4:cb", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1066ec7-d9", "ovs_interfaceid": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:03:45 np0005486808 nova_compute[259627]: 2025-10-14 09:03:45.446 2 DEBUG nova.network.os_vif_util [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e4:cb,bridge_name='br-int',has_traffic_filtering=True,id=d1066ec7-d932-4d99-aff7-7f7e80c54724,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1066ec7-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:03:45 np0005486808 nova_compute[259627]: 2025-10-14 09:03:45.447 2 DEBUG os_vif [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e4:cb,bridge_name='br-int',has_traffic_filtering=True,id=d1066ec7-d932-4d99-aff7-7f7e80c54724,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1066ec7-d9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:03:45 np0005486808 nova_compute[259627]: 2025-10-14 09:03:45.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:45 np0005486808 nova_compute[259627]: 2025-10-14 09:03:45.449 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd1066ec7-d9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:03:45 np0005486808 nova_compute[259627]: 2025-10-14 09:03:45.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:45 np0005486808 nova_compute[259627]: 2025-10-14 09:03:45.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:03:45 np0005486808 nova_compute[259627]: 2025-10-14 09:03:45.457 2 INFO os_vif [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e4:cb,bridge_name='br-int',has_traffic_filtering=True,id=d1066ec7-d932-4d99-aff7-7f7e80c54724,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1066ec7-d9')#033[00m
Oct 14 05:03:45 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1429: 305 pgs: 6 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 295 active+clean; 226 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 583 KiB/s rd, 17 MiB/s wr, 259 op/s
Oct 14 05:03:45 np0005486808 nova_compute[259627]: 2025-10-14 09:03:45.900 2 INFO nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Deleting instance files /var/lib/nova/instances/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_del#033[00m
Oct 14 05:03:45 np0005486808 nova_compute[259627]: 2025-10-14 09:03:45.901 2 INFO nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Deletion of /var/lib/nova/instances/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_del complete#033[00m
Oct 14 05:03:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e202 do_prune osdmap full prune enabled
Oct 14 05:03:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e203 e203: 3 total, 3 up, 3 in
Oct 14 05:03:45 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Oct 14 05:03:46 np0005486808 nova_compute[259627]: 2025-10-14 09:03:46.065 2 DEBUG nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:03:46 np0005486808 nova_compute[259627]: 2025-10-14 09:03:46.066 2 INFO nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Creating image(s)#033[00m
Oct 14 05:03:46 np0005486808 nova_compute[259627]: 2025-10-14 09:03:46.088 2 DEBUG nova.storage.rbd_utils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] rbd image dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:03:46 np0005486808 nova_compute[259627]: 2025-10-14 09:03:46.113 2 DEBUG nova.storage.rbd_utils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] rbd image dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:03:46 np0005486808 nova_compute[259627]: 2025-10-14 09:03:46.139 2 DEBUG nova.storage.rbd_utils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] rbd image dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:03:46 np0005486808 nova_compute[259627]: 2025-10-14 09:03:46.143 2 DEBUG oslo_concurrency.processutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:03:46 np0005486808 nova_compute[259627]: 2025-10-14 09:03:46.207 2 DEBUG oslo_concurrency.processutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:03:46 np0005486808 nova_compute[259627]: 2025-10-14 09:03:46.208 2 DEBUG oslo_concurrency.lockutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:03:46 np0005486808 nova_compute[259627]: 2025-10-14 09:03:46.208 2 DEBUG oslo_concurrency.lockutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:03:46 np0005486808 nova_compute[259627]: 2025-10-14 09:03:46.209 2 DEBUG oslo_concurrency.lockutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:03:46 np0005486808 nova_compute[259627]: 2025-10-14 09:03:46.236 2 DEBUG nova.storage.rbd_utils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] rbd image dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:03:46 np0005486808 nova_compute[259627]: 2025-10-14 09:03:46.242 2 DEBUG oslo_concurrency.processutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:03:46 np0005486808 nova_compute[259627]: 2025-10-14 09:03:46.494 2 DEBUG oslo_concurrency.processutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.253s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:03:46 np0005486808 nova_compute[259627]: 2025-10-14 09:03:46.582 2 DEBUG nova.storage.rbd_utils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] resizing rbd image dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:03:46 np0005486808 nova_compute[259627]: 2025-10-14 09:03:46.672 2 DEBUG nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:03:46 np0005486808 nova_compute[259627]: 2025-10-14 09:03:46.673 2 DEBUG nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Ensure instance console log exists: /var/lib/nova/instances/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:03:46 np0005486808 nova_compute[259627]: 2025-10-14 09:03:46.674 2 DEBUG oslo_concurrency.lockutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:03:46 np0005486808 nova_compute[259627]: 2025-10-14 09:03:46.674 2 DEBUG oslo_concurrency.lockutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:03:46 np0005486808 nova_compute[259627]: 2025-10-14 09:03:46.675 2 DEBUG oslo_concurrency.lockutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:03:46 np0005486808 nova_compute[259627]: 2025-10-14 09:03:46.678 2 DEBUG nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Start _get_guest_xml network_info=[{"id": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "address": "fa:16:3e:ba:e4:cb", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1066ec7-d9", "ovs_interfaceid": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:08Z,direct_url=<?>,disk_format='qcow2',id=e2368e3e-f504-40e6-a9d3-67df18c845bb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:10Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:03:46 np0005486808 nova_compute[259627]: 2025-10-14 09:03:46.682 2 WARNING nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Oct 14 05:03:46 np0005486808 nova_compute[259627]: 2025-10-14 09:03:46.687 2 DEBUG nova.virt.libvirt.host [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:03:46 np0005486808 nova_compute[259627]: 2025-10-14 09:03:46.688 2 DEBUG nova.virt.libvirt.host [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:03:46 np0005486808 nova_compute[259627]: 2025-10-14 09:03:46.692 2 DEBUG nova.virt.libvirt.host [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:03:46 np0005486808 nova_compute[259627]: 2025-10-14 09:03:46.692 2 DEBUG nova.virt.libvirt.host [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:03:46 np0005486808 nova_compute[259627]: 2025-10-14 09:03:46.692 2 DEBUG nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:03:46 np0005486808 nova_compute[259627]: 2025-10-14 09:03:46.693 2 DEBUG nova.virt.hardware [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:08Z,direct_url=<?>,disk_format='qcow2',id=e2368e3e-f504-40e6-a9d3-67df18c845bb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:10Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:03:46 np0005486808 nova_compute[259627]: 2025-10-14 09:03:46.693 2 DEBUG nova.virt.hardware [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:03:46 np0005486808 nova_compute[259627]: 2025-10-14 09:03:46.694 2 DEBUG nova.virt.hardware [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:03:46 np0005486808 nova_compute[259627]: 2025-10-14 09:03:46.694 2 DEBUG nova.virt.hardware [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:03:46 np0005486808 nova_compute[259627]: 2025-10-14 09:03:46.694 2 DEBUG nova.virt.hardware [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:03:46 np0005486808 nova_compute[259627]: 2025-10-14 09:03:46.695 2 DEBUG nova.virt.hardware [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:03:46 np0005486808 nova_compute[259627]: 2025-10-14 09:03:46.695 2 DEBUG nova.virt.hardware [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:03:46 np0005486808 nova_compute[259627]: 2025-10-14 09:03:46.695 2 DEBUG nova.virt.hardware [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:03:46 np0005486808 nova_compute[259627]: 2025-10-14 09:03:46.696 2 DEBUG nova.virt.hardware [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:03:46 np0005486808 nova_compute[259627]: 2025-10-14 09:03:46.696 2 DEBUG nova.virt.hardware [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:03:46 np0005486808 nova_compute[259627]: 2025-10-14 09:03:46.696 2 DEBUG nova.virt.hardware [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:03:46 np0005486808 nova_compute[259627]: 2025-10-14 09:03:46.697 2 DEBUG nova.objects.instance [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'vcpu_model' on Instance uuid dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:03:46 np0005486808 nova_compute[259627]: 2025-10-14 09:03:46.719 2 DEBUG oslo_concurrency.processutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:03:47 np0005486808 nova_compute[259627]: 2025-10-14 09:03:47.136 2 DEBUG nova.compute.manager [req-028a353d-05cd-4e6c-aa8e-974cd331a1bd req-badc62d4-89be-48f5-828b-86794c8cde3f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Received event network-vif-plugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:03:47 np0005486808 nova_compute[259627]: 2025-10-14 09:03:47.137 2 DEBUG oslo_concurrency.lockutils [req-028a353d-05cd-4e6c-aa8e-974cd331a1bd req-badc62d4-89be-48f5-828b-86794c8cde3f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:03:47 np0005486808 nova_compute[259627]: 2025-10-14 09:03:47.138 2 DEBUG oslo_concurrency.lockutils [req-028a353d-05cd-4e6c-aa8e-974cd331a1bd req-badc62d4-89be-48f5-828b-86794c8cde3f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:03:47 np0005486808 nova_compute[259627]: 2025-10-14 09:03:47.138 2 DEBUG oslo_concurrency.lockutils [req-028a353d-05cd-4e6c-aa8e-974cd331a1bd req-badc62d4-89be-48f5-828b-86794c8cde3f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:03:47 np0005486808 nova_compute[259627]: 2025-10-14 09:03:47.138 2 DEBUG nova.compute.manager [req-028a353d-05cd-4e6c-aa8e-974cd331a1bd req-badc62d4-89be-48f5-828b-86794c8cde3f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] No waiting events found dispatching network-vif-plugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:03:47 np0005486808 nova_compute[259627]: 2025-10-14 09:03:47.139 2 WARNING nova.compute.manager [req-028a353d-05cd-4e6c-aa8e-974cd331a1bd req-badc62d4-89be-48f5-828b-86794c8cde3f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Received unexpected event network-vif-plugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Oct 14 05:03:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:03:47 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3510293326' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:03:47 np0005486808 nova_compute[259627]: 2025-10-14 09:03:47.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:47 np0005486808 nova_compute[259627]: 2025-10-14 09:03:47.207 2 DEBUG oslo_concurrency.processutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:03:47 np0005486808 nova_compute[259627]: 2025-10-14 09:03:47.240 2 DEBUG nova.storage.rbd_utils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] rbd image dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:03:47 np0005486808 nova_compute[259627]: 2025-10-14 09:03:47.244 2 DEBUG oslo_concurrency.processutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:03:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:03:47 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1431: 305 pgs: 6 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 295 active+clean; 226 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 19 MiB/s wr, 98 op/s
Oct 14 05:03:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:03:47 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1047009119' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:03:47 np0005486808 nova_compute[259627]: 2025-10-14 09:03:47.697 2 DEBUG oslo_concurrency.processutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:03:47 np0005486808 nova_compute[259627]: 2025-10-14 09:03:47.698 2 DEBUG nova.virt.libvirt.vif [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-14T09:03:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1481502960',display_name='tempest-ServerActionsTestJSON-server-1331294446',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1481502960',id=50,image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:03:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='368d762ed02e459d892ad1e5488c2871',ramdisk_id='',reservation_id='r-hqb4aruj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1593617559',owner_user_name=
'tempest-ServerActionsTestJSON-1593617559-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:03:46Z,user_data=None,user_id='aa32af91355a41198fd57121e5c70ec2',uuid=dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "address": "fa:16:3e:ba:e4:cb", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1066ec7-d9", "ovs_interfaceid": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:03:47 np0005486808 nova_compute[259627]: 2025-10-14 09:03:47.699 2 DEBUG nova.network.os_vif_util [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converting VIF {"id": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "address": "fa:16:3e:ba:e4:cb", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1066ec7-d9", "ovs_interfaceid": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:03:47 np0005486808 nova_compute[259627]: 2025-10-14 09:03:47.700 2 DEBUG nova.network.os_vif_util [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e4:cb,bridge_name='br-int',has_traffic_filtering=True,id=d1066ec7-d932-4d99-aff7-7f7e80c54724,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1066ec7-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:03:47 np0005486808 nova_compute[259627]: 2025-10-14 09:03:47.702 2 DEBUG nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:03:47 np0005486808 nova_compute[259627]:  <uuid>dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9</uuid>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:  <name>instance-00000032</name>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:03:47 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:      <nova:name>tempest-ServerActionsTestJSON-server-1331294446</nova:name>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:03:46</nova:creationTime>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:03:47 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:        <nova:user uuid="aa32af91355a41198fd57121e5c70ec2">tempest-ServerActionsTestJSON-1593617559-project-member</nova:user>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:        <nova:project uuid="368d762ed02e459d892ad1e5488c2871">tempest-ServerActionsTestJSON-1593617559</nova:project>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="e2368e3e-f504-40e6-a9d3-67df18c845bb"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:        <nova:port uuid="d1066ec7-d932-4d99-aff7-7f7e80c54724">
Oct 14 05:03:47 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:      <entry name="serial">dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9</entry>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:      <entry name="uuid">dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9</entry>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:03:47 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk">
Oct 14 05:03:47 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:03:47 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:03:47 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk.config">
Oct 14 05:03:47 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:03:47 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:03:47 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:ba:e4:cb"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:      <target dev="tapd1066ec7-d9"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:03:47 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9/console.log" append="off"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:03:47 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:03:47 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:03:47 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:03:47 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:03:47 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:03:47 np0005486808 nova_compute[259627]: 2025-10-14 09:03:47.703 2 DEBUG nova.compute.manager [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Preparing to wait for external event network-vif-plugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:03:47 np0005486808 nova_compute[259627]: 2025-10-14 09:03:47.703 2 DEBUG oslo_concurrency.lockutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:03:47 np0005486808 nova_compute[259627]: 2025-10-14 09:03:47.704 2 DEBUG oslo_concurrency.lockutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:03:47 np0005486808 nova_compute[259627]: 2025-10-14 09:03:47.704 2 DEBUG oslo_concurrency.lockutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:03:47 np0005486808 nova_compute[259627]: 2025-10-14 09:03:47.704 2 DEBUG nova.virt.libvirt.vif [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-14T09:03:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1481502960',display_name='tempest-ServerActionsTestJSON-server-1331294446',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1481502960',id=50,image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:03:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='368d762ed02e459d892ad1e5488c2871',ramdisk_id='',reservation_id='r-hqb4aruj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1593617559',owner_user_name=
'tempest-ServerActionsTestJSON-1593617559-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:03:46Z,user_data=None,user_id='aa32af91355a41198fd57121e5c70ec2',uuid=dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "address": "fa:16:3e:ba:e4:cb", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1066ec7-d9", "ovs_interfaceid": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:03:47 np0005486808 nova_compute[259627]: 2025-10-14 09:03:47.705 2 DEBUG nova.network.os_vif_util [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converting VIF {"id": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "address": "fa:16:3e:ba:e4:cb", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1066ec7-d9", "ovs_interfaceid": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:03:47 np0005486808 nova_compute[259627]: 2025-10-14 09:03:47.705 2 DEBUG nova.network.os_vif_util [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e4:cb,bridge_name='br-int',has_traffic_filtering=True,id=d1066ec7-d932-4d99-aff7-7f7e80c54724,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1066ec7-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:03:47 np0005486808 nova_compute[259627]: 2025-10-14 09:03:47.705 2 DEBUG os_vif [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e4:cb,bridge_name='br-int',has_traffic_filtering=True,id=d1066ec7-d932-4d99-aff7-7f7e80c54724,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1066ec7-d9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:03:47 np0005486808 nova_compute[259627]: 2025-10-14 09:03:47.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:47 np0005486808 nova_compute[259627]: 2025-10-14 09:03:47.706 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:03:47 np0005486808 nova_compute[259627]: 2025-10-14 09:03:47.707 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:03:47 np0005486808 nova_compute[259627]: 2025-10-14 09:03:47.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:47 np0005486808 nova_compute[259627]: 2025-10-14 09:03:47.709 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd1066ec7-d9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:03:47 np0005486808 nova_compute[259627]: 2025-10-14 09:03:47.709 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd1066ec7-d9, col_values=(('external_ids', {'iface-id': 'd1066ec7-d932-4d99-aff7-7f7e80c54724', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ba:e4:cb', 'vm-uuid': 'dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:03:47 np0005486808 nova_compute[259627]: 2025-10-14 09:03:47.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:47 np0005486808 NetworkManager[44885]: <info>  [1760432627.7120] manager: (tapd1066ec7-d9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/201)
Oct 14 05:03:47 np0005486808 nova_compute[259627]: 2025-10-14 09:03:47.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:03:47 np0005486808 nova_compute[259627]: 2025-10-14 09:03:47.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:47 np0005486808 nova_compute[259627]: 2025-10-14 09:03:47.720 2 INFO os_vif [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e4:cb,bridge_name='br-int',has_traffic_filtering=True,id=d1066ec7-d932-4d99-aff7-7f7e80c54724,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1066ec7-d9')#033[00m
Oct 14 05:03:47 np0005486808 nova_compute[259627]: 2025-10-14 09:03:47.791 2 DEBUG nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:03:47 np0005486808 nova_compute[259627]: 2025-10-14 09:03:47.792 2 DEBUG nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:03:47 np0005486808 nova_compute[259627]: 2025-10-14 09:03:47.793 2 DEBUG nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] No VIF found with MAC fa:16:3e:ba:e4:cb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:03:47 np0005486808 nova_compute[259627]: 2025-10-14 09:03:47.794 2 INFO nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Using config drive#033[00m
Oct 14 05:03:47 np0005486808 nova_compute[259627]: 2025-10-14 09:03:47.839 2 DEBUG nova.storage.rbd_utils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] rbd image dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:03:47 np0005486808 nova_compute[259627]: 2025-10-14 09:03:47.859 2 DEBUG nova.objects.instance [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'ec2_ids' on Instance uuid dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:03:47 np0005486808 podman[315264]: 2025-10-14 09:03:47.871229696 +0000 UTC m=+0.095247473 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 
Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:03:47 np0005486808 nova_compute[259627]: 2025-10-14 09:03:47.895 2 DEBUG nova.objects.instance [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'keypairs' on Instance uuid dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:03:47 np0005486808 podman[315265]: 2025-10-14 09:03:47.900997318 +0000 UTC m=+0.123717173 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller)
Oct 14 05:03:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:48.072 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:03:48 np0005486808 nova_compute[259627]: 2025-10-14 09:03:48.384 2 INFO nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Creating config drive at /var/lib/nova/instances/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9/disk.config#033[00m
Oct 14 05:03:48 np0005486808 nova_compute[259627]: 2025-10-14 09:03:48.393 2 DEBUG oslo_concurrency.processutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7zcyj4oo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:03:48 np0005486808 nova_compute[259627]: 2025-10-14 09:03:48.549 2 DEBUG oslo_concurrency.processutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7zcyj4oo" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:03:48 np0005486808 nova_compute[259627]: 2025-10-14 09:03:48.590 2 DEBUG nova.storage.rbd_utils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] rbd image dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:03:48 np0005486808 nova_compute[259627]: 2025-10-14 09:03:48.596 2 DEBUG oslo_concurrency.processutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9/disk.config dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:03:48 np0005486808 nova_compute[259627]: 2025-10-14 09:03:48.793 2 DEBUG oslo_concurrency.processutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9/disk.config dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:03:48 np0005486808 nova_compute[259627]: 2025-10-14 09:03:48.794 2 INFO nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Deleting local config drive /var/lib/nova/instances/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9/disk.config because it was imported into RBD.#033[00m
Oct 14 05:03:48 np0005486808 NetworkManager[44885]: <info>  [1760432628.8646] manager: (tapd1066ec7-d9): new Tun device (/org/freedesktop/NetworkManager/Devices/202)
Oct 14 05:03:48 np0005486808 kernel: tapd1066ec7-d9: entered promiscuous mode
Oct 14 05:03:48 np0005486808 ovn_controller[152662]: 2025-10-14T09:03:48Z|00456|binding|INFO|Claiming lport d1066ec7-d932-4d99-aff7-7f7e80c54724 for this chassis.
Oct 14 05:03:48 np0005486808 ovn_controller[152662]: 2025-10-14T09:03:48Z|00457|binding|INFO|d1066ec7-d932-4d99-aff7-7f7e80c54724: Claiming fa:16:3e:ba:e4:cb 10.100.0.7
Oct 14 05:03:48 np0005486808 nova_compute[259627]: 2025-10-14 09:03:48.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:48.916 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:e4:cb 10.100.0.7'], port_security=['fa:16:3e:ba:e4:cb 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c6eb8a4-6604-462a-8730-43f3742053a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '368d762ed02e459d892ad1e5488c2871', 'neutron:revision_number': '5', 'neutron:security_group_ids': '0e3897ff-4300-4387-bfc4-36acf3f6c752', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d063443f-d68f-4763-a6fc-fc1631e3cde1, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=d1066ec7-d932-4d99-aff7-7f7e80c54724) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:03:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:48.917 162547 INFO neutron.agent.ovn.metadata.agent [-] Port d1066ec7-d932-4d99-aff7-7f7e80c54724 in datapath 7c6eb8a4-6604-462a-8730-43f3742053a7 bound to our chassis#033[00m
Oct 14 05:03:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:48.918 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7c6eb8a4-6604-462a-8730-43f3742053a7#033[00m
Oct 14 05:03:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:48.938 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bbbe5a6b-1e3e-4235-8af8-1c70fd119388]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:48 np0005486808 systemd-machined[214636]: New machine qemu-62-instance-00000032.
Oct 14 05:03:48 np0005486808 ovn_controller[152662]: 2025-10-14T09:03:48Z|00458|binding|INFO|Setting lport d1066ec7-d932-4d99-aff7-7f7e80c54724 ovn-installed in OVS
Oct 14 05:03:48 np0005486808 ovn_controller[152662]: 2025-10-14T09:03:48Z|00459|binding|INFO|Setting lport d1066ec7-d932-4d99-aff7-7f7e80c54724 up in Southbound
Oct 14 05:03:48 np0005486808 nova_compute[259627]: 2025-10-14 09:03:48.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:48 np0005486808 systemd[1]: Started Virtual Machine qemu-62-instance-00000032.
Oct 14 05:03:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:48.978 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a81b7eb2-e865-4f71-ac2e-c51a3e721ba3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:48 np0005486808 systemd-udevd[315382]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:03:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:48.982 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4fa1734c-5ce0-4537-a965-47675e4cd3f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:49 np0005486808 NetworkManager[44885]: <info>  [1760432628.9995] device (tapd1066ec7-d9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:03:49 np0005486808 NetworkManager[44885]: <info>  [1760432629.0004] device (tapd1066ec7-d9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:03:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:49.016 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[73c7d8a4-c912-465f-9ae9-a13c13381480]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:49.031 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d448a334-ba8f-4655-9691-96e8c21a15bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c6eb8a4-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:62:70:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 130], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633775, 'reachable_time': 24446, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315390, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:49.048 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5fef6653-b1cd-42a4-9b6e-737ce5f5e331]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7c6eb8a4-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 633788, 'tstamp': 633788}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315393, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7c6eb8a4-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 633791, 'tstamp': 633791}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315393, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:49.049 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c6eb8a4-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:03:49 np0005486808 nova_compute[259627]: 2025-10-14 09:03:49.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:49 np0005486808 nova_compute[259627]: 2025-10-14 09:03:49.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:49.054 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c6eb8a4-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:03:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:49.054 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:03:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:49.055 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7c6eb8a4-60, col_values=(('external_ids', {'iface-id': '0f8f2393-a280-4a6e-ac22-da67bf8d74da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:03:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:49.055 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:03:49 np0005486808 nova_compute[259627]: 2025-10-14 09:03:49.314 2 DEBUG nova.compute.manager [req-911fd691-8691-49df-831a-1bf2f3d5b28e req-4cbd9bda-193a-4020-8aa3-a4ca6fa92146 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Received event network-vif-plugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:03:49 np0005486808 nova_compute[259627]: 2025-10-14 09:03:49.315 2 DEBUG oslo_concurrency.lockutils [req-911fd691-8691-49df-831a-1bf2f3d5b28e req-4cbd9bda-193a-4020-8aa3-a4ca6fa92146 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:03:49 np0005486808 nova_compute[259627]: 2025-10-14 09:03:49.315 2 DEBUG oslo_concurrency.lockutils [req-911fd691-8691-49df-831a-1bf2f3d5b28e req-4cbd9bda-193a-4020-8aa3-a4ca6fa92146 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:03:49 np0005486808 nova_compute[259627]: 2025-10-14 09:03:49.316 2 DEBUG oslo_concurrency.lockutils [req-911fd691-8691-49df-831a-1bf2f3d5b28e req-4cbd9bda-193a-4020-8aa3-a4ca6fa92146 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:03:49 np0005486808 nova_compute[259627]: 2025-10-14 09:03:49.317 2 DEBUG nova.compute.manager [req-911fd691-8691-49df-831a-1bf2f3d5b28e req-4cbd9bda-193a-4020-8aa3-a4ca6fa92146 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Processing event network-vif-plugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:03:49 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1432: 305 pgs: 6 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 295 active+clean; 226 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 16 MiB/s wr, 84 op/s
Oct 14 05:03:50 np0005486808 nova_compute[259627]: 2025-10-14 09:03:50.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:50 np0005486808 nova_compute[259627]: 2025-10-14 09:03:50.187 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct 14 05:03:50 np0005486808 nova_compute[259627]: 2025-10-14 09:03:50.188 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432630.186601, dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:03:50 np0005486808 nova_compute[259627]: 2025-10-14 09:03:50.188 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] VM Started (Lifecycle Event)#033[00m
Oct 14 05:03:50 np0005486808 nova_compute[259627]: 2025-10-14 09:03:50.191 2 DEBUG nova.compute.manager [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:03:50 np0005486808 nova_compute[259627]: 2025-10-14 09:03:50.194 2 DEBUG nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:03:50 np0005486808 nova_compute[259627]: 2025-10-14 09:03:50.199 2 INFO nova.virt.libvirt.driver [-] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Instance spawned successfully.#033[00m
Oct 14 05:03:50 np0005486808 nova_compute[259627]: 2025-10-14 09:03:50.199 2 DEBUG nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:03:50 np0005486808 nova_compute[259627]: 2025-10-14 09:03:50.215 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:03:50 np0005486808 nova_compute[259627]: 2025-10-14 09:03:50.221 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:03:50 np0005486808 nova_compute[259627]: 2025-10-14 09:03:50.226 2 DEBUG nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:03:50 np0005486808 nova_compute[259627]: 2025-10-14 09:03:50.228 2 DEBUG nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:03:50 np0005486808 nova_compute[259627]: 2025-10-14 09:03:50.229 2 DEBUG nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:03:50 np0005486808 nova_compute[259627]: 2025-10-14 09:03:50.229 2 DEBUG nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:03:50 np0005486808 nova_compute[259627]: 2025-10-14 09:03:50.230 2 DEBUG nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:03:50 np0005486808 nova_compute[259627]: 2025-10-14 09:03:50.230 2 DEBUG nova.virt.libvirt.driver [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:03:50 np0005486808 nova_compute[259627]: 2025-10-14 09:03:50.269 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct 14 05:03:50 np0005486808 nova_compute[259627]: 2025-10-14 09:03:50.270 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432630.1867893, dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:03:50 np0005486808 nova_compute[259627]: 2025-10-14 09:03:50.270 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:03:50 np0005486808 nova_compute[259627]: 2025-10-14 09:03:50.292 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:03:50 np0005486808 nova_compute[259627]: 2025-10-14 09:03:50.295 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432630.1939597, dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:03:50 np0005486808 nova_compute[259627]: 2025-10-14 09:03:50.296 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:03:50 np0005486808 nova_compute[259627]: 2025-10-14 09:03:50.303 2 DEBUG nova.compute.manager [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:03:50 np0005486808 nova_compute[259627]: 2025-10-14 09:03:50.316 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:03:50 np0005486808 nova_compute[259627]: 2025-10-14 09:03:50.319 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:03:50 np0005486808 nova_compute[259627]: 2025-10-14 09:03:50.339 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct 14 05:03:50 np0005486808 nova_compute[259627]: 2025-10-14 09:03:50.367 2 DEBUG oslo_concurrency.lockutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:03:50 np0005486808 nova_compute[259627]: 2025-10-14 09:03:50.367 2 DEBUG oslo_concurrency.lockutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:03:50 np0005486808 nova_compute[259627]: 2025-10-14 09:03:50.367 2 DEBUG nova.objects.instance [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct 14 05:03:50 np0005486808 nova_compute[259627]: 2025-10-14 09:03:50.423 2 DEBUG oslo_concurrency.lockutils [None req-31d5050a-2806-41b9-b968-56a0c8ae69fa aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:03:51 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e203 do_prune osdmap full prune enabled
Oct 14 05:03:51 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e204 e204: 3 total, 3 up, 3 in
Oct 14 05:03:51 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e204: 3 total, 3 up, 3 in
Oct 14 05:03:51 np0005486808 nova_compute[259627]: 2025-10-14 09:03:51.460 2 DEBUG nova.compute.manager [req-a3eae20f-5ce2-4153-a8b7-8bc6bd75858e req-86ad07b5-028b-48cc-8913-9670234193f7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Received event network-vif-plugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:03:51 np0005486808 nova_compute[259627]: 2025-10-14 09:03:51.461 2 DEBUG oslo_concurrency.lockutils [req-a3eae20f-5ce2-4153-a8b7-8bc6bd75858e req-86ad07b5-028b-48cc-8913-9670234193f7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:03:51 np0005486808 nova_compute[259627]: 2025-10-14 09:03:51.461 2 DEBUG oslo_concurrency.lockutils [req-a3eae20f-5ce2-4153-a8b7-8bc6bd75858e req-86ad07b5-028b-48cc-8913-9670234193f7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:03:51 np0005486808 nova_compute[259627]: 2025-10-14 09:03:51.462 2 DEBUG oslo_concurrency.lockutils [req-a3eae20f-5ce2-4153-a8b7-8bc6bd75858e req-86ad07b5-028b-48cc-8913-9670234193f7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:03:51 np0005486808 nova_compute[259627]: 2025-10-14 09:03:51.462 2 DEBUG nova.compute.manager [req-a3eae20f-5ce2-4153-a8b7-8bc6bd75858e req-86ad07b5-028b-48cc-8913-9670234193f7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] No waiting events found dispatching network-vif-plugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:03:51 np0005486808 nova_compute[259627]: 2025-10-14 09:03:51.463 2 WARNING nova.compute.manager [req-a3eae20f-5ce2-4153-a8b7-8bc6bd75858e req-86ad07b5-028b-48cc-8913-9670234193f7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Received unexpected event network-vif-plugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:03:51 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1434: 305 pgs: 305 active+clean; 169 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 171 KiB/s rd, 20 MiB/s wr, 250 op/s
Oct 14 05:03:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e204 do_prune osdmap full prune enabled
Oct 14 05:03:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e205 e205: 3 total, 3 up, 3 in
Oct 14 05:03:52 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e205: 3 total, 3 up, 3 in
Oct 14 05:03:52 np0005486808 nova_compute[259627]: 2025-10-14 09:03:52.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:52 np0005486808 nova_compute[259627]: 2025-10-14 09:03:52.497 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432617.496041, e0bc2109-5f5c-4797-98c7-866f2d11f513 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:03:52 np0005486808 nova_compute[259627]: 2025-10-14 09:03:52.498 2 INFO nova.compute.manager [-] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:03:52 np0005486808 nova_compute[259627]: 2025-10-14 09:03:52.519 2 DEBUG nova.compute.manager [None req-efd05d70-8ac7-429a-a0ee-90c067b76042 - - - - - -] [instance: e0bc2109-5f5c-4797-98c7-866f2d11f513] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:03:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:03:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e205 do_prune osdmap full prune enabled
Oct 14 05:03:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e206 e206: 3 total, 3 up, 3 in
Oct 14 05:03:52 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e206: 3 total, 3 up, 3 in
Oct 14 05:03:52 np0005486808 nova_compute[259627]: 2025-10-14 09:03:52.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:53 np0005486808 nova_compute[259627]: 2025-10-14 09:03:53.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e206 do_prune osdmap full prune enabled
Oct 14 05:03:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e207 e207: 3 total, 3 up, 3 in
Oct 14 05:03:53 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e207: 3 total, 3 up, 3 in
Oct 14 05:03:53 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1438: 305 pgs: 305 active+clean; 169 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 182 KiB/s rd, 5.4 MiB/s wr, 271 op/s
Oct 14 05:03:54 np0005486808 nova_compute[259627]: 2025-10-14 09:03:54.519 2 DEBUG oslo_concurrency.lockutils [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:03:54 np0005486808 nova_compute[259627]: 2025-10-14 09:03:54.520 2 DEBUG oslo_concurrency.lockutils [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:03:54 np0005486808 nova_compute[259627]: 2025-10-14 09:03:54.521 2 DEBUG oslo_concurrency.lockutils [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:03:54 np0005486808 nova_compute[259627]: 2025-10-14 09:03:54.521 2 DEBUG oslo_concurrency.lockutils [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:03:54 np0005486808 nova_compute[259627]: 2025-10-14 09:03:54.521 2 DEBUG oslo_concurrency.lockutils [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:03:54 np0005486808 nova_compute[259627]: 2025-10-14 09:03:54.523 2 INFO nova.compute.manager [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Terminating instance#033[00m
Oct 14 05:03:54 np0005486808 nova_compute[259627]: 2025-10-14 09:03:54.524 2 DEBUG nova.compute.manager [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:03:54 np0005486808 kernel: tapd1066ec7-d9 (unregistering): left promiscuous mode
Oct 14 05:03:54 np0005486808 NetworkManager[44885]: <info>  [1760432634.5734] device (tapd1066ec7-d9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:03:54 np0005486808 nova_compute[259627]: 2025-10-14 09:03:54.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:54 np0005486808 ovn_controller[152662]: 2025-10-14T09:03:54Z|00460|binding|INFO|Releasing lport d1066ec7-d932-4d99-aff7-7f7e80c54724 from this chassis (sb_readonly=0)
Oct 14 05:03:54 np0005486808 ovn_controller[152662]: 2025-10-14T09:03:54Z|00461|binding|INFO|Setting lport d1066ec7-d932-4d99-aff7-7f7e80c54724 down in Southbound
Oct 14 05:03:54 np0005486808 ovn_controller[152662]: 2025-10-14T09:03:54Z|00462|binding|INFO|Removing iface tapd1066ec7-d9 ovn-installed in OVS
Oct 14 05:03:54 np0005486808 nova_compute[259627]: 2025-10-14 09:03:54.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:54.593 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:e4:cb 10.100.0.7'], port_security=['fa:16:3e:ba:e4:cb 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c6eb8a4-6604-462a-8730-43f3742053a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '368d762ed02e459d892ad1e5488c2871', 'neutron:revision_number': '6', 'neutron:security_group_ids': '0e3897ff-4300-4387-bfc4-36acf3f6c752', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d063443f-d68f-4763-a6fc-fc1631e3cde1, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=d1066ec7-d932-4d99-aff7-7f7e80c54724) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:03:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:54.594 162547 INFO neutron.agent.ovn.metadata.agent [-] Port d1066ec7-d932-4d99-aff7-7f7e80c54724 in datapath 7c6eb8a4-6604-462a-8730-43f3742053a7 unbound from our chassis#033[00m
Oct 14 05:03:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:54.595 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7c6eb8a4-6604-462a-8730-43f3742053a7#033[00m
Oct 14 05:03:54 np0005486808 nova_compute[259627]: 2025-10-14 09:03:54.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:54.621 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[85bfd522-9a5d-4671-8a15-619f13c87bd7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:54 np0005486808 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000032.scope: Deactivated successfully.
Oct 14 05:03:54 np0005486808 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000032.scope: Consumed 5.586s CPU time.
Oct 14 05:03:54 np0005486808 systemd-machined[214636]: Machine qemu-62-instance-00000032 terminated.
Oct 14 05:03:54 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e207 do_prune osdmap full prune enabled
Oct 14 05:03:54 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e208 e208: 3 total, 3 up, 3 in
Oct 14 05:03:54 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e208: 3 total, 3 up, 3 in
Oct 14 05:03:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:54.662 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[b15bf109-c2d2-40be-a556-5cbf2d172a4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:54.665 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[dc806668-b34a-41c0-887b-f5d5651d19fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:54.694 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[03e3b7e5-a143-47ca-9c94-b7d1fc7d6651]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:54.712 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a29a4c55-f2d8-4746-90b0-b404c3695243]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c6eb8a4-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:62:70:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 130], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633775, 'reachable_time': 24446, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315448, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:54.728 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b8db2b76-f935-4e2d-9a14-e455eb1e5f64]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7c6eb8a4-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 633788, 'tstamp': 633788}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315449, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7c6eb8a4-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 633791, 'tstamp': 633791}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315449, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:03:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:54.730 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c6eb8a4-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:03:54 np0005486808 nova_compute[259627]: 2025-10-14 09:03:54.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:54 np0005486808 nova_compute[259627]: 2025-10-14 09:03:54.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:54.738 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c6eb8a4-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:03:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:54.738 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:03:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:54.739 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7c6eb8a4-60, col_values=(('external_ids', {'iface-id': '0f8f2393-a280-4a6e-ac22-da67bf8d74da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:03:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:03:54.739 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:03:54 np0005486808 nova_compute[259627]: 2025-10-14 09:03:54.759 2 INFO nova.virt.libvirt.driver [-] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Instance destroyed successfully.#033[00m
Oct 14 05:03:54 np0005486808 nova_compute[259627]: 2025-10-14 09:03:54.760 2 DEBUG nova.objects.instance [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'resources' on Instance uuid dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:03:54 np0005486808 nova_compute[259627]: 2025-10-14 09:03:54.781 2 DEBUG nova.virt.libvirt.vif [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-14T09:03:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1481502960',display_name='tempest-ServerActionsTestJSON-server-1331294446',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1481502960',id=50,image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:03:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='368d762ed02e459d892ad1e5488c2871',ramdisk_id='',reservation_id='r-hqb4aruj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio'
,image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1593617559',owner_user_name='tempest-ServerActionsTestJSON-1593617559-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:03:50Z,user_data=None,user_id='aa32af91355a41198fd57121e5c70ec2',uuid=dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "address": "fa:16:3e:ba:e4:cb", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1066ec7-d9", "ovs_interfaceid": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:03:54 np0005486808 nova_compute[259627]: 2025-10-14 09:03:54.782 2 DEBUG nova.network.os_vif_util [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converting VIF {"id": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "address": "fa:16:3e:ba:e4:cb", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1066ec7-d9", "ovs_interfaceid": "d1066ec7-d932-4d99-aff7-7f7e80c54724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:03:54 np0005486808 nova_compute[259627]: 2025-10-14 09:03:54.783 2 DEBUG nova.network.os_vif_util [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e4:cb,bridge_name='br-int',has_traffic_filtering=True,id=d1066ec7-d932-4d99-aff7-7f7e80c54724,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1066ec7-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:03:54 np0005486808 nova_compute[259627]: 2025-10-14 09:03:54.784 2 DEBUG os_vif [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e4:cb,bridge_name='br-int',has_traffic_filtering=True,id=d1066ec7-d932-4d99-aff7-7f7e80c54724,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1066ec7-d9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:03:54 np0005486808 nova_compute[259627]: 2025-10-14 09:03:54.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:54 np0005486808 nova_compute[259627]: 2025-10-14 09:03:54.787 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd1066ec7-d9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:03:54 np0005486808 nova_compute[259627]: 2025-10-14 09:03:54.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:54 np0005486808 nova_compute[259627]: 2025-10-14 09:03:54.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:03:54 np0005486808 nova_compute[259627]: 2025-10-14 09:03:54.797 2 DEBUG nova.compute.manager [req-0c1afaad-7053-488e-bf61-92446001ad44 req-0f7a834f-3a15-4a20-ba55-029778ce4c5d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Received event network-vif-unplugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:03:54 np0005486808 nova_compute[259627]: 2025-10-14 09:03:54.798 2 DEBUG oslo_concurrency.lockutils [req-0c1afaad-7053-488e-bf61-92446001ad44 req-0f7a834f-3a15-4a20-ba55-029778ce4c5d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:03:54 np0005486808 nova_compute[259627]: 2025-10-14 09:03:54.798 2 DEBUG oslo_concurrency.lockutils [req-0c1afaad-7053-488e-bf61-92446001ad44 req-0f7a834f-3a15-4a20-ba55-029778ce4c5d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:03:54 np0005486808 nova_compute[259627]: 2025-10-14 09:03:54.799 2 DEBUG oslo_concurrency.lockutils [req-0c1afaad-7053-488e-bf61-92446001ad44 req-0f7a834f-3a15-4a20-ba55-029778ce4c5d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:03:54 np0005486808 nova_compute[259627]: 2025-10-14 09:03:54.799 2 DEBUG nova.compute.manager [req-0c1afaad-7053-488e-bf61-92446001ad44 req-0f7a834f-3a15-4a20-ba55-029778ce4c5d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] No waiting events found dispatching network-vif-unplugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:03:54 np0005486808 nova_compute[259627]: 2025-10-14 09:03:54.800 2 DEBUG nova.compute.manager [req-0c1afaad-7053-488e-bf61-92446001ad44 req-0f7a834f-3a15-4a20-ba55-029778ce4c5d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Received event network-vif-unplugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:03:54 np0005486808 nova_compute[259627]: 2025-10-14 09:03:54.800 2 INFO os_vif [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e4:cb,bridge_name='br-int',has_traffic_filtering=True,id=d1066ec7-d932-4d99-aff7-7f7e80c54724,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1066ec7-d9')#033[00m
Oct 14 05:03:55 np0005486808 nova_compute[259627]: 2025-10-14 09:03:55.170 2 INFO nova.virt.libvirt.driver [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Deleting instance files /var/lib/nova/instances/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_del#033[00m
Oct 14 05:03:55 np0005486808 nova_compute[259627]: 2025-10-14 09:03:55.171 2 INFO nova.virt.libvirt.driver [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Deletion of /var/lib/nova/instances/dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9_del complete#033[00m
Oct 14 05:03:55 np0005486808 nova_compute[259627]: 2025-10-14 09:03:55.239 2 INFO nova.compute.manager [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Took 0.71 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:03:55 np0005486808 nova_compute[259627]: 2025-10-14 09:03:55.239 2 DEBUG oslo.service.loopingcall [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:03:55 np0005486808 nova_compute[259627]: 2025-10-14 09:03:55.240 2 DEBUG nova.compute.manager [-] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:03:55 np0005486808 nova_compute[259627]: 2025-10-14 09:03:55.240 2 DEBUG nova.network.neutron [-] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:03:55 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e208 do_prune osdmap full prune enabled
Oct 14 05:03:55 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e209 e209: 3 total, 3 up, 3 in
Oct 14 05:03:55 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e209: 3 total, 3 up, 3 in
Oct 14 05:03:55 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1441: 305 pgs: 305 active+clean; 123 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 6.5 MiB/s rd, 49 KiB/s wr, 481 op/s
Oct 14 05:03:55 np0005486808 nova_compute[259627]: 2025-10-14 09:03:55.868 2 DEBUG nova.network.neutron [-] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:03:55 np0005486808 nova_compute[259627]: 2025-10-14 09:03:55.894 2 INFO nova.compute.manager [-] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Took 0.65 seconds to deallocate network for instance.#033[00m
Oct 14 05:03:55 np0005486808 nova_compute[259627]: 2025-10-14 09:03:55.939 2 DEBUG oslo_concurrency.lockutils [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:03:55 np0005486808 nova_compute[259627]: 2025-10-14 09:03:55.940 2 DEBUG oslo_concurrency.lockutils [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:03:56 np0005486808 nova_compute[259627]: 2025-10-14 09:03:56.026 2 DEBUG oslo_concurrency.processutils [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:03:56 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:03:56 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3283183907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:03:56 np0005486808 nova_compute[259627]: 2025-10-14 09:03:56.481 2 DEBUG oslo_concurrency.processutils [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:03:56 np0005486808 nova_compute[259627]: 2025-10-14 09:03:56.487 2 DEBUG nova.compute.provider_tree [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 05:03:56 np0005486808 nova_compute[259627]: 2025-10-14 09:03:56.510 2 DEBUG nova.scheduler.client.report [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 05:03:56 np0005486808 nova_compute[259627]: 2025-10-14 09:03:56.539 2 DEBUG oslo_concurrency.lockutils [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:03:56 np0005486808 nova_compute[259627]: 2025-10-14 09:03:56.573 2 INFO nova.scheduler.client.report [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Deleted allocations for instance dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9
Oct 14 05:03:56 np0005486808 nova_compute[259627]: 2025-10-14 09:03:56.653 2 DEBUG oslo_concurrency.lockutils [None req-36100f1e-e914-44b9-9a4e-e286786cab1a aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.133s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:03:56 np0005486808 nova_compute[259627]: 2025-10-14 09:03:56.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:03:56 np0005486808 nova_compute[259627]: 2025-10-14 09:03:56.928 2 DEBUG nova.compute.manager [req-130b3d9c-d85d-4351-936f-b4eb80683c55 req-74eb1b3e-5cab-4eab-b4dd-8e7f54f39156 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Received event network-vif-plugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:03:56 np0005486808 nova_compute[259627]: 2025-10-14 09:03:56.929 2 DEBUG oslo_concurrency.lockutils [req-130b3d9c-d85d-4351-936f-b4eb80683c55 req-74eb1b3e-5cab-4eab-b4dd-8e7f54f39156 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:03:56 np0005486808 nova_compute[259627]: 2025-10-14 09:03:56.929 2 DEBUG oslo_concurrency.lockutils [req-130b3d9c-d85d-4351-936f-b4eb80683c55 req-74eb1b3e-5cab-4eab-b4dd-8e7f54f39156 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:03:56 np0005486808 nova_compute[259627]: 2025-10-14 09:03:56.930 2 DEBUG oslo_concurrency.lockutils [req-130b3d9c-d85d-4351-936f-b4eb80683c55 req-74eb1b3e-5cab-4eab-b4dd-8e7f54f39156 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:03:56 np0005486808 nova_compute[259627]: 2025-10-14 09:03:56.930 2 DEBUG nova.compute.manager [req-130b3d9c-d85d-4351-936f-b4eb80683c55 req-74eb1b3e-5cab-4eab-b4dd-8e7f54f39156 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] No waiting events found dispatching network-vif-plugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 05:03:56 np0005486808 nova_compute[259627]: 2025-10-14 09:03:56.931 2 WARNING nova.compute.manager [req-130b3d9c-d85d-4351-936f-b4eb80683c55 req-74eb1b3e-5cab-4eab-b4dd-8e7f54f39156 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Received unexpected event network-vif-plugged-d1066ec7-d932-4d99-aff7-7f7e80c54724 for instance with vm_state deleted and task_state None.
Oct 14 05:03:56 np0005486808 nova_compute[259627]: 2025-10-14 09:03:56.931 2 DEBUG nova.compute.manager [req-130b3d9c-d85d-4351-936f-b4eb80683c55 req-74eb1b3e-5cab-4eab-b4dd-8e7f54f39156 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Received event network-vif-deleted-d1066ec7-d932-4d99-aff7-7f7e80c54724 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:03:57 np0005486808 nova_compute[259627]: 2025-10-14 09:03:57.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:03:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:03:57 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1442: 305 pgs: 305 active+clean; 123 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 35 KiB/s wr, 347 op/s
Oct 14 05:03:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e209 do_prune osdmap full prune enabled
Oct 14 05:03:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e210 e210: 3 total, 3 up, 3 in
Oct 14 05:03:57 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e210: 3 total, 3 up, 3 in
Oct 14 05:03:58 np0005486808 nova_compute[259627]: 2025-10-14 09:03:58.951 2 DEBUG oslo_concurrency.lockutils [None req-5cb9e994-186f-4619-9290-c0e327c31690 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:03:58 np0005486808 nova_compute[259627]: 2025-10-14 09:03:58.951 2 DEBUG oslo_concurrency.lockutils [None req-5cb9e994-186f-4619-9290-c0e327c31690 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:03:58 np0005486808 nova_compute[259627]: 2025-10-14 09:03:58.951 2 DEBUG nova.compute.manager [None req-5cb9e994-186f-4619-9290-c0e327c31690 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:03:58 np0005486808 nova_compute[259627]: 2025-10-14 09:03:58.955 2 DEBUG nova.compute.manager [None req-5cb9e994-186f-4619-9290-c0e327c31690 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 14 05:03:58 np0005486808 nova_compute[259627]: 2025-10-14 09:03:58.956 2 DEBUG nova.objects.instance [None req-5cb9e994-186f-4619-9290-c0e327c31690 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'flavor' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:03:58 np0005486808 nova_compute[259627]: 2025-10-14 09:03:58.982 2 DEBUG nova.virt.libvirt.driver [None req-5cb9e994-186f-4619-9290-c0e327c31690 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 14 05:03:59 np0005486808 nova_compute[259627]: 2025-10-14 09:03:59.483 2 DEBUG oslo_concurrency.lockutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:03:59 np0005486808 nova_compute[259627]: 2025-10-14 09:03:59.483 2 DEBUG oslo_concurrency.lockutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:03:59 np0005486808 nova_compute[259627]: 2025-10-14 09:03:59.499 2 DEBUG nova.compute.manager [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 05:03:59 np0005486808 nova_compute[259627]: 2025-10-14 09:03:59.568 2 DEBUG oslo_concurrency.lockutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:03:59 np0005486808 nova_compute[259627]: 2025-10-14 09:03:59.569 2 DEBUG oslo_concurrency.lockutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:03:59 np0005486808 nova_compute[259627]: 2025-10-14 09:03:59.574 2 DEBUG nova.virt.hardware [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 05:03:59 np0005486808 nova_compute[259627]: 2025-10-14 09:03:59.574 2 INFO nova.compute.claims [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Claim successful on node compute-0.ctlplane.example.com
Oct 14 05:03:59 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1444: 305 pgs: 305 active+clean; 123 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 30 KiB/s wr, 291 op/s
Oct 14 05:03:59 np0005486808 nova_compute[259627]: 2025-10-14 09:03:59.699 2 DEBUG oslo_concurrency.processutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:03:59 np0005486808 nova_compute[259627]: 2025-10-14 09:03:59.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:04:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:04:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2916589333' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:04:00 np0005486808 nova_compute[259627]: 2025-10-14 09:04:00.163 2 DEBUG oslo_concurrency.processutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:04:00 np0005486808 nova_compute[259627]: 2025-10-14 09:04:00.169 2 DEBUG nova.compute.provider_tree [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 05:04:00 np0005486808 nova_compute[259627]: 2025-10-14 09:04:00.191 2 DEBUG nova.scheduler.client.report [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 05:04:00 np0005486808 nova_compute[259627]: 2025-10-14 09:04:00.251 2 DEBUG oslo_concurrency.lockutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:04:00 np0005486808 nova_compute[259627]: 2025-10-14 09:04:00.252 2 DEBUG nova.compute.manager [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 05:04:00 np0005486808 nova_compute[259627]: 2025-10-14 09:04:00.351 2 DEBUG nova.compute.manager [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 05:04:00 np0005486808 nova_compute[259627]: 2025-10-14 09:04:00.351 2 DEBUG nova.network.neutron [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 05:04:00 np0005486808 nova_compute[259627]: 2025-10-14 09:04:00.373 2 INFO nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 05:04:00 np0005486808 nova_compute[259627]: 2025-10-14 09:04:00.393 2 DEBUG nova.compute.manager [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 05:04:00 np0005486808 nova_compute[259627]: 2025-10-14 09:04:00.574 2 DEBUG nova.compute.manager [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 05:04:00 np0005486808 nova_compute[259627]: 2025-10-14 09:04:00.576 2 DEBUG nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 05:04:00 np0005486808 nova_compute[259627]: 2025-10-14 09:04:00.576 2 INFO nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Creating image(s)
Oct 14 05:04:00 np0005486808 nova_compute[259627]: 2025-10-14 09:04:00.599 2 DEBUG nova.storage.rbd_utils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image 3c8eac8e-dcfa-41ba-9c01-1185061baf5d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:04:00 np0005486808 nova_compute[259627]: 2025-10-14 09:04:00.622 2 DEBUG nova.storage.rbd_utils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image 3c8eac8e-dcfa-41ba-9c01-1185061baf5d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:04:00 np0005486808 nova_compute[259627]: 2025-10-14 09:04:00.644 2 DEBUG nova.storage.rbd_utils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image 3c8eac8e-dcfa-41ba-9c01-1185061baf5d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:04:00 np0005486808 nova_compute[259627]: 2025-10-14 09:04:00.648 2 DEBUG oslo_concurrency.processutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:04:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e210 do_prune osdmap full prune enabled
Oct 14 05:04:00 np0005486808 nova_compute[259627]: 2025-10-14 09:04:00.695 2 DEBUG nova.policy [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8b0f0f2991214b9caed6f475a149c342', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8945d892653642ac9c3d894f8bc3619d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 05:04:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e211 e211: 3 total, 3 up, 3 in
Oct 14 05:04:00 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e211: 3 total, 3 up, 3 in
Oct 14 05:04:00 np0005486808 nova_compute[259627]: 2025-10-14 09:04:00.721 2 DEBUG oslo_concurrency.processutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:04:00 np0005486808 nova_compute[259627]: 2025-10-14 09:04:00.722 2 DEBUG oslo_concurrency.lockutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:04:00 np0005486808 nova_compute[259627]: 2025-10-14 09:04:00.723 2 DEBUG oslo_concurrency.lockutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:04:00 np0005486808 nova_compute[259627]: 2025-10-14 09:04:00.723 2 DEBUG oslo_concurrency.lockutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:04:00 np0005486808 nova_compute[259627]: 2025-10-14 09:04:00.748 2 DEBUG nova.storage.rbd_utils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image 3c8eac8e-dcfa-41ba-9c01-1185061baf5d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:04:00 np0005486808 nova_compute[259627]: 2025-10-14 09:04:00.751 2 DEBUG oslo_concurrency.processutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 3c8eac8e-dcfa-41ba-9c01-1185061baf5d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:04:00 np0005486808 nova_compute[259627]: 2025-10-14 09:04:00.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:04:00 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Oct 14 05:04:01 np0005486808 nova_compute[259627]: 2025-10-14 09:04:01.046 2 DEBUG oslo_concurrency.processutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 3c8eac8e-dcfa-41ba-9c01-1185061baf5d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.295s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:04:01 np0005486808 nova_compute[259627]: 2025-10-14 09:04:01.126 2 DEBUG nova.storage.rbd_utils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] resizing rbd image 3c8eac8e-dcfa-41ba-9c01-1185061baf5d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 05:04:01 np0005486808 kernel: tap8ec905f0-b7 (unregistering): left promiscuous mode
Oct 14 05:04:01 np0005486808 NetworkManager[44885]: <info>  [1760432641.2221] device (tap8ec905f0-b7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:04:01 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:01Z|00463|binding|INFO|Releasing lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 from this chassis (sb_readonly=0)
Oct 14 05:04:01 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:01Z|00464|binding|INFO|Setting lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 down in Southbound
Oct 14 05:04:01 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:01Z|00465|binding|INFO|Removing iface tap8ec905f0-b7 ovn-installed in OVS
Oct 14 05:04:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:01.245 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:e2:1b 10.100.0.10'], port_security=['fa:16:3e:be:e2:1b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'de383510-2de3-40bd-b479-c0010b3f2d1c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c6eb8a4-6604-462a-8730-43f3742053a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '368d762ed02e459d892ad1e5488c2871', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'b04f9b87-06e3-4b99-859f-b095869a50f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.247', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d063443f-d68f-4763-a6fc-fc1631e3cde1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 05:04:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:01.246 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 in datapath 7c6eb8a4-6604-462a-8730-43f3742053a7 unbound from our chassis
Oct 14 05:04:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:01.247 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7c6eb8a4-6604-462a-8730-43f3742053a7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 05:04:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:01.248 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[16e8c3be-9ec0-469f-a3f3-5f5ba5f0630a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:04:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:01.249 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 namespace which is not needed anymore
Oct 14 05:04:01 np0005486808 nova_compute[259627]: 2025-10-14 09:04:01.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:04:01 np0005486808 nova_compute[259627]: 2025-10-14 09:04:01.271 2 DEBUG nova.objects.instance [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'migration_context' on Instance uuid 3c8eac8e-dcfa-41ba-9c01-1185061baf5d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:04:01 np0005486808 nova_compute[259627]: 2025-10-14 09:04:01.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:04:01 np0005486808 nova_compute[259627]: 2025-10-14 09:04:01.294 2 DEBUG nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 05:04:01 np0005486808 nova_compute[259627]: 2025-10-14 09:04:01.295 2 DEBUG nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Ensure instance console log exists: /var/lib/nova/instances/3c8eac8e-dcfa-41ba-9c01-1185061baf5d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 05:04:01 np0005486808 nova_compute[259627]: 2025-10-14 09:04:01.295 2 DEBUG oslo_concurrency.lockutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:04:01 np0005486808 nova_compute[259627]: 2025-10-14 09:04:01.296 2 DEBUG oslo_concurrency.lockutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:04:01 np0005486808 nova_compute[259627]: 2025-10-14 09:04:01.296 2 DEBUG oslo_concurrency.lockutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:04:01 np0005486808 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d0000002b.scope: Deactivated successfully.
Oct 14 05:04:01 np0005486808 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d0000002b.scope: Consumed 16.264s CPU time.
Oct 14 05:04:01 np0005486808 systemd-machined[214636]: Machine qemu-59-instance-0000002b terminated.
Oct 14 05:04:01 np0005486808 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[312645]: [NOTICE]   (312649) : haproxy version is 2.8.14-c23fe91
Oct 14 05:04:01 np0005486808 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[312645]: [NOTICE]   (312649) : path to executable is /usr/sbin/haproxy
Oct 14 05:04:01 np0005486808 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[312645]: [WARNING]  (312649) : Exiting Master process...
Oct 14 05:04:01 np0005486808 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[312645]: [ALERT]    (312649) : Current worker (312651) exited with code 143 (Terminated)
Oct 14 05:04:01 np0005486808 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[312645]: [WARNING]  (312649) : All workers exited. Exiting... (0)
Oct 14 05:04:01 np0005486808 systemd[1]: libpod-123d2746f7e500462fc42138fd3b250c46021cde1bff8d295624c6f4403c53c6.scope: Deactivated successfully.
Oct 14 05:04:01 np0005486808 podman[315715]: 2025-10-14 09:04:01.403724894 +0000 UTC m=+0.048359540 container died 123d2746f7e500462fc42138fd3b250c46021cde1bff8d295624c6f4403c53c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 05:04:01 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-123d2746f7e500462fc42138fd3b250c46021cde1bff8d295624c6f4403c53c6-userdata-shm.mount: Deactivated successfully.
Oct 14 05:04:01 np0005486808 systemd[1]: var-lib-containers-storage-overlay-b394997ed82cbbabad23e931b57790be0862fb55a64f523707c20df554392e6a-merged.mount: Deactivated successfully.
Oct 14 05:04:01 np0005486808 podman[315715]: 2025-10-14 09:04:01.455971009 +0000 UTC m=+0.100605635 container cleanup 123d2746f7e500462fc42138fd3b250c46021cde1bff8d295624c6f4403c53c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:04:01 np0005486808 systemd[1]: libpod-conmon-123d2746f7e500462fc42138fd3b250c46021cde1bff8d295624c6f4403c53c6.scope: Deactivated successfully.
Oct 14 05:04:01 np0005486808 nova_compute[259627]: 2025-10-14 09:04:01.510 2 DEBUG nova.compute.manager [req-ccacdefe-73b9-4ffd-9022-a076de0f195a req-d6561182-b391-4ce3-9092-c1af6a7f1016 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-unplugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:04:01 np0005486808 nova_compute[259627]: 2025-10-14 09:04:01.511 2 DEBUG oslo_concurrency.lockutils [req-ccacdefe-73b9-4ffd-9022-a076de0f195a req-d6561182-b391-4ce3-9092-c1af6a7f1016 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:01 np0005486808 nova_compute[259627]: 2025-10-14 09:04:01.511 2 DEBUG oslo_concurrency.lockutils [req-ccacdefe-73b9-4ffd-9022-a076de0f195a req-d6561182-b391-4ce3-9092-c1af6a7f1016 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:01 np0005486808 nova_compute[259627]: 2025-10-14 09:04:01.511 2 DEBUG oslo_concurrency.lockutils [req-ccacdefe-73b9-4ffd-9022-a076de0f195a req-d6561182-b391-4ce3-9092-c1af6a7f1016 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:01 np0005486808 nova_compute[259627]: 2025-10-14 09:04:01.511 2 DEBUG nova.compute.manager [req-ccacdefe-73b9-4ffd-9022-a076de0f195a req-d6561182-b391-4ce3-9092-c1af6a7f1016 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] No waiting events found dispatching network-vif-unplugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:04:01 np0005486808 nova_compute[259627]: 2025-10-14 09:04:01.513 2 WARNING nova.compute.manager [req-ccacdefe-73b9-4ffd-9022-a076de0f195a req-d6561182-b391-4ce3-9092-c1af6a7f1016 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received unexpected event network-vif-unplugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for instance with vm_state active and task_state powering-off.#033[00m
Oct 14 05:04:01 np0005486808 podman[315747]: 2025-10-14 09:04:01.517947843 +0000 UTC m=+0.041485241 container remove 123d2746f7e500462fc42138fd3b250c46021cde1bff8d295624c6f4403c53c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 14 05:04:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:01.523 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8a9722ad-69f4-459c-82e2-230a6417f59c]: (4, ('Tue Oct 14 09:04:01 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 (123d2746f7e500462fc42138fd3b250c46021cde1bff8d295624c6f4403c53c6)\n123d2746f7e500462fc42138fd3b250c46021cde1bff8d295624c6f4403c53c6\nTue Oct 14 09:04:01 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 (123d2746f7e500462fc42138fd3b250c46021cde1bff8d295624c6f4403c53c6)\n123d2746f7e500462fc42138fd3b250c46021cde1bff8d295624c6f4403c53c6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:01.525 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4f25aea4-fef7-4e30-a3a6-807adce82155]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:01.525 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c6eb8a4-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:01 np0005486808 kernel: tap7c6eb8a4-60: left promiscuous mode
Oct 14 05:04:01 np0005486808 nova_compute[259627]: 2025-10-14 09:04:01.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:01 np0005486808 nova_compute[259627]: 2025-10-14 09:04:01.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:01.551 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[539da1f5-05fe-4a4d-a4e1-9785d0158dec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:01.589 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0b3b0f85-19df-4011-9537-41c32bb1413f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:01.590 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8ff1b751-f043-4ea4-8746-d47c1d42a779]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:01.605 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ce4cca35-4b91-4340-8e6a-332d1411265a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633766, 'reachable_time': 17100, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315775, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:01 np0005486808 systemd[1]: run-netns-ovnmeta\x2d7c6eb8a4\x2d6604\x2d462a\x2d8730\x2d43f3742053a7.mount: Deactivated successfully.
Oct 14 05:04:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:01.610 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:04:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:01.610 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[55c5f20f-c8d9-46f9-8a5d-91b52583dcfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:01 np0005486808 nova_compute[259627]: 2025-10-14 09:04:01.641 2 DEBUG nova.network.neutron [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Successfully created port: 09d03bcf-f719-4ec4-91a0-3c14e350a342 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:04:01 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1446: 305 pgs: 305 active+clean; 123 MiB data, 495 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 19 KiB/s wr, 41 op/s
Oct 14 05:04:01 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e211 do_prune osdmap full prune enabled
Oct 14 05:04:01 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e212 e212: 3 total, 3 up, 3 in
Oct 14 05:04:01 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e212: 3 total, 3 up, 3 in
Oct 14 05:04:02 np0005486808 nova_compute[259627]: 2025-10-14 09:04:02.003 2 INFO nova.virt.libvirt.driver [None req-5cb9e994-186f-4619-9290-c0e327c31690 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Instance shutdown successfully after 3 seconds.#033[00m
Oct 14 05:04:02 np0005486808 nova_compute[259627]: 2025-10-14 09:04:02.008 2 INFO nova.virt.libvirt.driver [-] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Instance destroyed successfully.#033[00m
Oct 14 05:04:02 np0005486808 nova_compute[259627]: 2025-10-14 09:04:02.009 2 DEBUG nova.objects.instance [None req-5cb9e994-186f-4619-9290-c0e327c31690 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'numa_topology' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:04:02 np0005486808 nova_compute[259627]: 2025-10-14 09:04:02.027 2 DEBUG nova.compute.manager [None req-5cb9e994-186f-4619-9290-c0e327c31690 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:04:02 np0005486808 nova_compute[259627]: 2025-10-14 09:04:02.082 2 DEBUG oslo_concurrency.lockutils [None req-5cb9e994-186f-4619-9290-c0e327c31690 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.131s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:02 np0005486808 nova_compute[259627]: 2025-10-14 09:04:02.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:02 np0005486808 nova_compute[259627]: 2025-10-14 09:04:02.535 2 DEBUG nova.network.neutron [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Successfully updated port: 09d03bcf-f719-4ec4-91a0-3c14e350a342 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:04:02 np0005486808 nova_compute[259627]: 2025-10-14 09:04:02.549 2 DEBUG oslo_concurrency.lockutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "refresh_cache-3c8eac8e-dcfa-41ba-9c01-1185061baf5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:04:02 np0005486808 nova_compute[259627]: 2025-10-14 09:04:02.549 2 DEBUG oslo_concurrency.lockutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquired lock "refresh_cache-3c8eac8e-dcfa-41ba-9c01-1185061baf5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:04:02 np0005486808 nova_compute[259627]: 2025-10-14 09:04:02.550 2 DEBUG nova.network.neutron [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:04:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:04:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:04:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:04:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e212 do_prune osdmap full prune enabled
Oct 14 05:04:02 np0005486808 nova_compute[259627]: 2025-10-14 09:04:02.750 2 DEBUG nova.network.neutron [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:04:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:04:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:04:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:04:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:04:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e213 e213: 3 total, 3 up, 3 in
Oct 14 05:04:02 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e213: 3 total, 3 up, 3 in
Oct 14 05:04:03 np0005486808 nova_compute[259627]: 2025-10-14 09:04:03.005 2 DEBUG nova.objects.instance [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'flavor' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:04:03 np0005486808 nova_compute[259627]: 2025-10-14 09:04:03.035 2 DEBUG oslo_concurrency.lockutils [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:04:03 np0005486808 nova_compute[259627]: 2025-10-14 09:04:03.036 2 DEBUG oslo_concurrency.lockutils [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquired lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:04:03 np0005486808 nova_compute[259627]: 2025-10-14 09:04:03.036 2 DEBUG nova.network.neutron [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:04:03 np0005486808 nova_compute[259627]: 2025-10-14 09:04:03.037 2 DEBUG nova.objects.instance [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'info_cache' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:04:03 np0005486808 nova_compute[259627]: 2025-10-14 09:04:03.610 2 DEBUG nova.compute.manager [req-4a843b07-f7b6-4414-9d29-bf626a49c2ab req-1923fd7b-0c23-452f-b15d-d4ae42fc7c2c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:04:03 np0005486808 nova_compute[259627]: 2025-10-14 09:04:03.611 2 DEBUG oslo_concurrency.lockutils [req-4a843b07-f7b6-4414-9d29-bf626a49c2ab req-1923fd7b-0c23-452f-b15d-d4ae42fc7c2c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:03 np0005486808 nova_compute[259627]: 2025-10-14 09:04:03.612 2 DEBUG oslo_concurrency.lockutils [req-4a843b07-f7b6-4414-9d29-bf626a49c2ab req-1923fd7b-0c23-452f-b15d-d4ae42fc7c2c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:03 np0005486808 nova_compute[259627]: 2025-10-14 09:04:03.612 2 DEBUG oslo_concurrency.lockutils [req-4a843b07-f7b6-4414-9d29-bf626a49c2ab req-1923fd7b-0c23-452f-b15d-d4ae42fc7c2c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:03 np0005486808 nova_compute[259627]: 2025-10-14 09:04:03.613 2 DEBUG nova.compute.manager [req-4a843b07-f7b6-4414-9d29-bf626a49c2ab req-1923fd7b-0c23-452f-b15d-d4ae42fc7c2c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] No waiting events found dispatching network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:04:03 np0005486808 nova_compute[259627]: 2025-10-14 09:04:03.613 2 WARNING nova.compute.manager [req-4a843b07-f7b6-4414-9d29-bf626a49c2ab req-1923fd7b-0c23-452f-b15d-d4ae42fc7c2c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received unexpected event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for instance with vm_state stopped and task_state powering-on.#033[00m
Oct 14 05:04:03 np0005486808 nova_compute[259627]: 2025-10-14 09:04:03.614 2 DEBUG nova.compute.manager [req-4a843b07-f7b6-4414-9d29-bf626a49c2ab req-1923fd7b-0c23-452f-b15d-d4ae42fc7c2c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received event network-changed-09d03bcf-f719-4ec4-91a0-3c14e350a342 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:04:03 np0005486808 nova_compute[259627]: 2025-10-14 09:04:03.614 2 DEBUG nova.compute.manager [req-4a843b07-f7b6-4414-9d29-bf626a49c2ab req-1923fd7b-0c23-452f-b15d-d4ae42fc7c2c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Refreshing instance network info cache due to event network-changed-09d03bcf-f719-4ec4-91a0-3c14e350a342. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:04:03 np0005486808 nova_compute[259627]: 2025-10-14 09:04:03.615 2 DEBUG oslo_concurrency.lockutils [req-4a843b07-f7b6-4414-9d29-bf626a49c2ab req-1923fd7b-0c23-452f-b15d-d4ae42fc7c2c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-3c8eac8e-dcfa-41ba-9c01-1185061baf5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:04:03 np0005486808 nova_compute[259627]: 2025-10-14 09:04:03.660 2 DEBUG nova.network.neutron [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Updating instance_info_cache with network_info: [{"id": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "address": "fa:16:3e:48:7e:a5", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09d03bcf-f7", "ovs_interfaceid": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:04:03 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1449: 305 pgs: 305 active+clean; 123 MiB data, 495 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 19 KiB/s wr, 42 op/s
Oct 14 05:04:03 np0005486808 nova_compute[259627]: 2025-10-14 09:04:03.682 2 DEBUG oslo_concurrency.lockutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Releasing lock "refresh_cache-3c8eac8e-dcfa-41ba-9c01-1185061baf5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:04:03 np0005486808 nova_compute[259627]: 2025-10-14 09:04:03.682 2 DEBUG nova.compute.manager [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Instance network_info: |[{"id": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "address": "fa:16:3e:48:7e:a5", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09d03bcf-f7", "ovs_interfaceid": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:04:03 np0005486808 nova_compute[259627]: 2025-10-14 09:04:03.682 2 DEBUG oslo_concurrency.lockutils [req-4a843b07-f7b6-4414-9d29-bf626a49c2ab req-1923fd7b-0c23-452f-b15d-d4ae42fc7c2c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-3c8eac8e-dcfa-41ba-9c01-1185061baf5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:04:03 np0005486808 nova_compute[259627]: 2025-10-14 09:04:03.683 2 DEBUG nova.network.neutron [req-4a843b07-f7b6-4414-9d29-bf626a49c2ab req-1923fd7b-0c23-452f-b15d-d4ae42fc7c2c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Refreshing network info cache for port 09d03bcf-f719-4ec4-91a0-3c14e350a342 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:04:03 np0005486808 nova_compute[259627]: 2025-10-14 09:04:03.686 2 DEBUG nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Start _get_guest_xml network_info=[{"id": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "address": "fa:16:3e:48:7e:a5", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09d03bcf-f7", "ovs_interfaceid": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:04:03 np0005486808 nova_compute[259627]: 2025-10-14 09:04:03.693 2 WARNING nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:04:03 np0005486808 nova_compute[259627]: 2025-10-14 09:04:03.698 2 DEBUG nova.virt.libvirt.host [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:04:03 np0005486808 nova_compute[259627]: 2025-10-14 09:04:03.699 2 DEBUG nova.virt.libvirt.host [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:04:03 np0005486808 nova_compute[259627]: 2025-10-14 09:04:03.702 2 DEBUG nova.virt.libvirt.host [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:04:03 np0005486808 nova_compute[259627]: 2025-10-14 09:04:03.702 2 DEBUG nova.virt.libvirt.host [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:04:03 np0005486808 nova_compute[259627]: 2025-10-14 09:04:03.703 2 DEBUG nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:04:03 np0005486808 nova_compute[259627]: 2025-10-14 09:04:03.703 2 DEBUG nova.virt.hardware [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:04:03 np0005486808 nova_compute[259627]: 2025-10-14 09:04:03.704 2 DEBUG nova.virt.hardware [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:04:03 np0005486808 nova_compute[259627]: 2025-10-14 09:04:03.704 2 DEBUG nova.virt.hardware [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:04:03 np0005486808 nova_compute[259627]: 2025-10-14 09:04:03.704 2 DEBUG nova.virt.hardware [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:04:03 np0005486808 nova_compute[259627]: 2025-10-14 09:04:03.704 2 DEBUG nova.virt.hardware [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:04:03 np0005486808 nova_compute[259627]: 2025-10-14 09:04:03.705 2 DEBUG nova.virt.hardware [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:04:03 np0005486808 nova_compute[259627]: 2025-10-14 09:04:03.705 2 DEBUG nova.virt.hardware [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:04:03 np0005486808 nova_compute[259627]: 2025-10-14 09:04:03.705 2 DEBUG nova.virt.hardware [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:04:03 np0005486808 nova_compute[259627]: 2025-10-14 09:04:03.706 2 DEBUG nova.virt.hardware [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:04:03 np0005486808 nova_compute[259627]: 2025-10-14 09:04:03.706 2 DEBUG nova.virt.hardware [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:04:03 np0005486808 nova_compute[259627]: 2025-10-14 09:04:03.706 2 DEBUG nova.virt.hardware [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:04:03 np0005486808 nova_compute[259627]: 2025-10-14 09:04:03.710 2 DEBUG oslo_concurrency.processutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:04:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e213 do_prune osdmap full prune enabled
Oct 14 05:04:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e214 e214: 3 total, 3 up, 3 in
Oct 14 05:04:03 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e214: 3 total, 3 up, 3 in
Oct 14 05:04:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:04:04 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/371345489' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.153 2 DEBUG oslo_concurrency.processutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.178 2 DEBUG nova.storage.rbd_utils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image 3c8eac8e-dcfa-41ba-9c01-1185061baf5d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.182 2 DEBUG oslo_concurrency.processutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.408 2 DEBUG nova.network.neutron [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Updating instance_info_cache with network_info: [{"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.430 2 DEBUG oslo_concurrency.lockutils [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Releasing lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.454 2 INFO nova.virt.libvirt.driver [-] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Instance destroyed successfully.#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.455 2 DEBUG nova.objects.instance [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'numa_topology' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.466 2 DEBUG nova.objects.instance [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'resources' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.476 2 DEBUG nova.virt.libvirt.vif [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1794713901',display_name='tempest-ServerActionsTestJSON-server-1794713901',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1794713901',id=43,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLYFNlP3hTYeUpT8nc14lRftSHASrG8vP9AGnXtKYdUjMqVtIeIad+zh+sJMfad25mvb2X38sw9juFrm0nqlru2kjBIYbt6/N3JxbHEKwTt4Dj077tGdx4oZ3+9/NHyjkQ==',key_name='tempest-keypair-1333367407',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:01:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='368d762ed02e459d892ad1e5488c2871',ramdisk_id='',reservation_id='r-cj1cuo6c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1593617559',owner_user_name='tempest-ServerActionsTestJSON-1593617559-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aa32af91355a41198fd57121e5c70ec2',uuid=de383510-2de3-40bd-b479-c0010b3f2d1c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.477 2 DEBUG nova.network.os_vif_util [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converting VIF {"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.477 2 DEBUG nova.network.os_vif_util [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.478 2 DEBUG os_vif [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.480 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8ec905f0-b7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.486 2 INFO os_vif [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7')#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.492 2 DEBUG nova.virt.libvirt.driver [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Start _get_guest_xml network_info=[{"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.495 2 WARNING nova.virt.libvirt.driver [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.501 2 DEBUG nova.virt.libvirt.host [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.502 2 DEBUG nova.virt.libvirt.host [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.505 2 DEBUG nova.virt.libvirt.host [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.505 2 DEBUG nova.virt.libvirt.host [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.505 2 DEBUG nova.virt.libvirt.driver [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.505 2 DEBUG nova.virt.hardware [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.506 2 DEBUG nova.virt.hardware [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.506 2 DEBUG nova.virt.hardware [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.506 2 DEBUG nova.virt.hardware [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.506 2 DEBUG nova.virt.hardware [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.507 2 DEBUG nova.virt.hardware [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.507 2 DEBUG nova.virt.hardware [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.507 2 DEBUG nova.virt.hardware [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.507 2 DEBUG nova.virt.hardware [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.507 2 DEBUG nova.virt.hardware [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.508 2 DEBUG nova.virt.hardware [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.508 2 DEBUG nova.objects.instance [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'vcpu_model' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.523 2 DEBUG oslo_concurrency.processutils [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:04:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:04:04 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3495734449' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.632 2 DEBUG oslo_concurrency.processutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.634 2 DEBUG nova.virt.libvirt.vif [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:03:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-880311128',display_name='tempest-AttachInterfacesTestJSON-server-880311128',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-880311128',id=51,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHJeC/x7PID63fa+XdM1puH3fFuAGCvZuvVgTscxZNhVHcJaK6qOoyzTmbsI3sgqD2j7CCpP6E241ZXLsn//8ic8awTSD83uMaP7AvsnDddi9mfK+5p/w2g8zmdvw2fjPQ==',key_name='tempest-keypair-1232666789',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-vco8xn00',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:04:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=3c8eac8e-dcfa-41ba-9c01-1185061baf5d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "address": "fa:16:3e:48:7e:a5", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09d03bcf-f7", "ovs_interfaceid": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.634 2 DEBUG nova.network.os_vif_util [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "address": "fa:16:3e:48:7e:a5", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09d03bcf-f7", "ovs_interfaceid": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.635 2 DEBUG nova.network.os_vif_util [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:7e:a5,bridge_name='br-int',has_traffic_filtering=True,id=09d03bcf-f719-4ec4-91a0-3c14e350a342,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09d03bcf-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.636 2 DEBUG nova.objects.instance [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'pci_devices' on Instance uuid 3c8eac8e-dcfa-41ba-9c01-1185061baf5d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.663 2 DEBUG nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:04:04 np0005486808 nova_compute[259627]:  <uuid>3c8eac8e-dcfa-41ba-9c01-1185061baf5d</uuid>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:  <name>instance-00000033</name>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:04:04 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:      <nova:name>tempest-AttachInterfacesTestJSON-server-880311128</nova:name>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:04:03</nova:creationTime>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:04:04 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:        <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:        <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:        <nova:port uuid="09d03bcf-f719-4ec4-91a0-3c14e350a342">
Oct 14 05:04:04 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:      <entry name="serial">3c8eac8e-dcfa-41ba-9c01-1185061baf5d</entry>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:      <entry name="uuid">3c8eac8e-dcfa-41ba-9c01-1185061baf5d</entry>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:04:04 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/3c8eac8e-dcfa-41ba-9c01-1185061baf5d_disk">
Oct 14 05:04:04 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:04:04 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:04:04 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/3c8eac8e-dcfa-41ba-9c01-1185061baf5d_disk.config">
Oct 14 05:04:04 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:04:04 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:04:04 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:48:7e:a5"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:      <target dev="tap09d03bcf-f7"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:04:04 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/3c8eac8e-dcfa-41ba-9c01-1185061baf5d/console.log" append="off"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:04:04 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:04:04 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:04:04 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:04:04 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:04:04 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.669 2 DEBUG nova.compute.manager [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Preparing to wait for external event network-vif-plugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.669 2 DEBUG oslo_concurrency.lockutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.670 2 DEBUG oslo_concurrency.lockutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.670 2 DEBUG oslo_concurrency.lockutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.671 2 DEBUG nova.virt.libvirt.vif [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:03:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-880311128',display_name='tempest-AttachInterfacesTestJSON-server-880311128',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-880311128',id=51,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHJeC/x7PID63fa+XdM1puH3fFuAGCvZuvVgTscxZNhVHcJaK6qOoyzTmbsI3sgqD2j7CCpP6E241ZXLsn//8ic8awTSD83uMaP7AvsnDddi9mfK+5p/w2g8zmdvw2fjPQ==',key_name='tempest-keypair-1232666789',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-vco8xn00',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:04:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=3c8eac8e-dcfa-41ba-9c01-1185061baf5d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "address": "fa:16:3e:48:7e:a5", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09d03bcf-f7", "ovs_interfaceid": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.671 2 DEBUG nova.network.os_vif_util [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "address": "fa:16:3e:48:7e:a5", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09d03bcf-f7", "ovs_interfaceid": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.672 2 DEBUG nova.network.os_vif_util [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:7e:a5,bridge_name='br-int',has_traffic_filtering=True,id=09d03bcf-f719-4ec4-91a0-3c14e350a342,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09d03bcf-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.672 2 DEBUG os_vif [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:7e:a5,bridge_name='br-int',has_traffic_filtering=True,id=09d03bcf-f719-4ec4-91a0-3c14e350a342,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09d03bcf-f7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.673 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.673 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.676 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap09d03bcf-f7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.677 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap09d03bcf-f7, col_values=(('external_ids', {'iface-id': '09d03bcf-f719-4ec4-91a0-3c14e350a342', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:48:7e:a5', 'vm-uuid': '3c8eac8e-dcfa-41ba-9c01-1185061baf5d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:04 np0005486808 NetworkManager[44885]: <info>  [1760432644.6795] manager: (tap09d03bcf-f7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/203)
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.684 2 INFO os_vif [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:7e:a5,bridge_name='br-int',has_traffic_filtering=True,id=09d03bcf-f719-4ec4-91a0-3c14e350a342,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09d03bcf-f7')#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.738 2 DEBUG nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.739 2 DEBUG nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.739 2 DEBUG nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No VIF found with MAC fa:16:3e:48:7e:a5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.740 2 INFO nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Using config drive#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.764 2 DEBUG nova.storage.rbd_utils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image 3c8eac8e-dcfa-41ba-9c01-1185061baf5d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:04:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e214 do_prune osdmap full prune enabled
Oct 14 05:04:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e215 e215: 3 total, 3 up, 3 in
Oct 14 05:04:04 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e215: 3 total, 3 up, 3 in
Oct 14 05:04:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:04:04 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3063659031' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.960 2 DEBUG oslo_concurrency.processutils [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.993 2 DEBUG nova.network.neutron [req-4a843b07-f7b6-4414-9d29-bf626a49c2ab req-1923fd7b-0c23-452f-b15d-d4ae42fc7c2c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Updated VIF entry in instance network info cache for port 09d03bcf-f719-4ec4-91a0-3c14e350a342. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.993 2 DEBUG nova.network.neutron [req-4a843b07-f7b6-4414-9d29-bf626a49c2ab req-1923fd7b-0c23-452f-b15d-d4ae42fc7c2c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Updating instance_info_cache with network_info: [{"id": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "address": "fa:16:3e:48:7e:a5", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09d03bcf-f7", "ovs_interfaceid": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:04:04 np0005486808 nova_compute[259627]: 2025-10-14 09:04:04.998 2 DEBUG oslo_concurrency.processutils [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.027 2 DEBUG oslo_concurrency.lockutils [req-4a843b07-f7b6-4414-9d29-bf626a49c2ab req-1923fd7b-0c23-452f-b15d-d4ae42fc7c2c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-3c8eac8e-dcfa-41ba-9c01-1185061baf5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.105 2 INFO nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Creating config drive at /var/lib/nova/instances/3c8eac8e-dcfa-41ba-9c01-1185061baf5d/disk.config#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.115 2 DEBUG oslo_concurrency.processutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3c8eac8e-dcfa-41ba-9c01-1185061baf5d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3pwvt98p execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.259 2 DEBUG oslo_concurrency.processutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3c8eac8e-dcfa-41ba-9c01-1185061baf5d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3pwvt98p" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.288 2 DEBUG nova.storage.rbd_utils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image 3c8eac8e-dcfa-41ba-9c01-1185061baf5d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.291 2 DEBUG oslo_concurrency.processutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3c8eac8e-dcfa-41ba-9c01-1185061baf5d/disk.config 3c8eac8e-dcfa-41ba-9c01-1185061baf5d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:04:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:04:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/292753674' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.415 2 DEBUG oslo_concurrency.processutils [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.417 2 DEBUG nova.virt.libvirt.vif [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1794713901',display_name='tempest-ServerActionsTestJSON-server-1794713901',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1794713901',id=43,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLYFNlP3hTYeUpT8nc14lRftSHASrG8vP9AGnXtKYdUjMqVtIeIad+zh+sJMfad25mvb2X38sw9juFrm0nqlru2kjBIYbt6/N3JxbHEKwTt4Dj077tGdx4oZ3+9/NHyjkQ==',key_name='tempest-keypair-1333367407',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:01:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='368d762ed02e459d892ad1e5488c2871',ramdisk_id='',reservation_id='r-cj1cuo6c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1593617559',owner_user_name='tempest-ServerActionsTestJSON-1593617559-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aa32af91355a41198fd57121e5c70ec2',uuid=de383510-2de3-40bd-b479-c0010b3f2d1c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.417 2 DEBUG nova.network.os_vif_util [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converting VIF {"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.418 2 DEBUG nova.network.os_vif_util [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.419 2 DEBUG nova.objects.instance [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'pci_devices' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.434 2 DEBUG nova.virt.libvirt.driver [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:04:05 np0005486808 nova_compute[259627]:  <uuid>de383510-2de3-40bd-b479-c0010b3f2d1c</uuid>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:  <name>instance-0000002b</name>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:04:05 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:      <nova:name>tempest-ServerActionsTestJSON-server-1794713901</nova:name>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:04:04</nova:creationTime>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:04:05 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:        <nova:user uuid="aa32af91355a41198fd57121e5c70ec2">tempest-ServerActionsTestJSON-1593617559-project-member</nova:user>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:        <nova:project uuid="368d762ed02e459d892ad1e5488c2871">tempest-ServerActionsTestJSON-1593617559</nova:project>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:        <nova:port uuid="8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2">
Oct 14 05:04:05 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:      <entry name="serial">de383510-2de3-40bd-b479-c0010b3f2d1c</entry>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:      <entry name="uuid">de383510-2de3-40bd-b479-c0010b3f2d1c</entry>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:04:05 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/de383510-2de3-40bd-b479-c0010b3f2d1c_disk">
Oct 14 05:04:05 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:04:05 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:04:05 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/de383510-2de3-40bd-b479-c0010b3f2d1c_disk.config">
Oct 14 05:04:05 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:04:05 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:04:05 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:be:e2:1b"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:      <target dev="tap8ec905f0-b7"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:04:05 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/de383510-2de3-40bd-b479-c0010b3f2d1c/console.log" append="off"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <input type="keyboard" bus="usb"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:04:05 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:04:05 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:04:05 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:04:05 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:04:05 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.435 2 DEBUG nova.virt.libvirt.driver [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.436 2 DEBUG nova.virt.libvirt.driver [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.436 2 DEBUG nova.virt.libvirt.vif [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1794713901',display_name='tempest-ServerActionsTestJSON-server-1794713901',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1794713901',id=43,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLYFNlP3hTYeUpT8nc14lRftSHASrG8vP9AGnXtKYdUjMqVtIeIad+zh+sJMfad25mvb2X38sw9juFrm0nqlru2kjBIYbt6/N3JxbHEKwTt4Dj077tGdx4oZ3+9/NHyjkQ==',key_name='tempest-keypair-1333367407',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:01:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='368d762ed02e459d892ad1e5488c2871',ramdisk_id='',reservation_id='r-cj1cuo6c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1593617559',owner_user_name='tempest-ServerActionsTestJSON-1593617559-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aa32af91355a41198fd57121e5c70ec2',uuid=de383510-2de3-40bd-b479-c0010b3f2d1c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.437 2 DEBUG nova.network.os_vif_util [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converting VIF {"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.437 2 DEBUG nova.network.os_vif_util [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.437 2 DEBUG os_vif [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.438 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.439 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.443 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8ec905f0-b7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.443 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8ec905f0-b7, col_values=(('external_ids', {'iface-id': '8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:be:e2:1b', 'vm-uuid': 'de383510-2de3-40bd-b479-c0010b3f2d1c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:05 np0005486808 NetworkManager[44885]: <info>  [1760432645.4459] manager: (tap8ec905f0-b7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/204)
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.455 2 INFO os_vif [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7')#033[00m
Oct 14 05:04:05 np0005486808 NetworkManager[44885]: <info>  [1760432645.5238] manager: (tap8ec905f0-b7): new Tun device (/org/freedesktop/NetworkManager/Devices/205)
Oct 14 05:04:05 np0005486808 kernel: tap8ec905f0-b7: entered promiscuous mode
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:05 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:05Z|00466|binding|INFO|Claiming lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for this chassis.
Oct 14 05:04:05 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:05Z|00467|binding|INFO|8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2: Claiming fa:16:3e:be:e2:1b 10.100.0.10
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.537 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:e2:1b 10.100.0.10'], port_security=['fa:16:3e:be:e2:1b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'de383510-2de3-40bd-b479-c0010b3f2d1c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c6eb8a4-6604-462a-8730-43f3742053a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '368d762ed02e459d892ad1e5488c2871', 'neutron:revision_number': '11', 'neutron:security_group_ids': 'b04f9b87-06e3-4b99-859f-b095869a50f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.247'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d063443f-d68f-4763-a6fc-fc1631e3cde1, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.538 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 in datapath 7c6eb8a4-6604-462a-8730-43f3742053a7 bound to our chassis#033[00m
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.539 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7c6eb8a4-6604-462a-8730-43f3742053a7#033[00m
Oct 14 05:04:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:04:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2918559064' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.551 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0b55295c-18d8-4347-a670-4f04a7bb42b9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.549 2 DEBUG oslo_concurrency.processutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3c8eac8e-dcfa-41ba-9c01-1185061baf5d/disk.config 3c8eac8e-dcfa-41ba-9c01-1185061baf5d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.258s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.552 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7c6eb8a4-61 in ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.550 2 INFO nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Deleting local config drive /var/lib/nova/instances/3c8eac8e-dcfa-41ba-9c01-1185061baf5d/disk.config because it was imported into RBD.#033[00m
Oct 14 05:04:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:04:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2918559064' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.554 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7c6eb8a4-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.554 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f5d8a1e9-46d8-4467-9abb-eb145be19564]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:05 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:05Z|00468|binding|INFO|Setting lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 ovn-installed in OVS
Oct 14 05:04:05 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:05Z|00469|binding|INFO|Setting lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 up in Southbound
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.555 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0f7483f5-ae23-4b9d-8ba6-3d0b69dbe69e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:05 np0005486808 systemd-machined[214636]: New machine qemu-63-instance-0000002b.
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.569 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[0ea9c7dd-eeab-44c4-887a-d047272d4fbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:05 np0005486808 systemd-udevd[315980]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:04:05 np0005486808 NetworkManager[44885]: <info>  [1760432645.5866] device (tap8ec905f0-b7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:04:05 np0005486808 NetworkManager[44885]: <info>  [1760432645.5877] device (tap8ec905f0-b7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:04:05 np0005486808 systemd[1]: Started Virtual Machine qemu-63-instance-0000002b.
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.588 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[07789fe6-9207-445b-b0d0-cca72a24621d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.614 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[19ff0ecc-88d8-4cd5-ad58-ef67b1fadf3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:05 np0005486808 NetworkManager[44885]: <info>  [1760432645.6220] manager: (tap7c6eb8a4-60): new Veth device (/org/freedesktop/NetworkManager/Devices/206)
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.621 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[76d3be43-bcc5-48a2-85df-65dcad2c6ff1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:05 np0005486808 systemd-udevd[315986]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:04:05 np0005486808 kernel: tap09d03bcf-f7: entered promiscuous mode
Oct 14 05:04:05 np0005486808 NetworkManager[44885]: <info>  [1760432645.6352] manager: (tap09d03bcf-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/207)
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:05 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:05Z|00470|binding|INFO|Claiming lport 09d03bcf-f719-4ec4-91a0-3c14e350a342 for this chassis.
Oct 14 05:04:05 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:05Z|00471|binding|INFO|09d03bcf-f719-4ec4-91a0-3c14e350a342: Claiming fa:16:3e:48:7e:a5 10.100.0.8
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.648 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:7e:a5 10.100.0.8'], port_security=['fa:16:3e:48:7e:a5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '3c8eac8e-dcfa-41ba-9c01-1185061baf5d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2d149f-aebf-406a-aed2-5161dd22b079', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8945d892653642ac9c3d894f8bc3619d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c6f10764-f6df-4d21-b829-68562680623d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e933e805-d9b6-497a-9ba3-5f7153390420, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=09d03bcf-f719-4ec4-91a0-3c14e350a342) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:04:05 np0005486808 systemd-udevd[316008]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:04:05 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:05Z|00472|binding|INFO|Setting lport 09d03bcf-f719-4ec4-91a0-3c14e350a342 ovn-installed in OVS
Oct 14 05:04:05 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:05Z|00473|binding|INFO|Setting lport 09d03bcf-f719-4ec4-91a0-3c14e350a342 up in Southbound
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.664 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[badd8954-20f3-405a-8a0f-80733c1706ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:05 np0005486808 NetworkManager[44885]: <info>  [1760432645.6686] device (tap09d03bcf-f7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:04:05 np0005486808 NetworkManager[44885]: <info>  [1760432645.6695] device (tap09d03bcf-f7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.670 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f7690746-286b-415e-942e-68238157e8dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:05 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1452: 305 pgs: 305 active+clean; 169 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 286 KiB/s rd, 5.3 MiB/s wr, 401 op/s
Oct 14 05:04:05 np0005486808 systemd-machined[214636]: New machine qemu-64-instance-00000033.
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.695 2 DEBUG oslo_concurrency.lockutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Acquiring lock "e038df86-1323-4b04-afae-9fe68c98c22c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.696 2 DEBUG oslo_concurrency.lockutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Lock "e038df86-1323-4b04-afae-9fe68c98c22c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:05 np0005486808 NetworkManager[44885]: <info>  [1760432645.6977] device (tap7c6eb8a4-60): carrier: link connected
Oct 14 05:04:05 np0005486808 systemd[1]: Started Virtual Machine qemu-64-instance-00000033.
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.705 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f3c63953-2eb4-4b4b-9490-b20899a26f86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.717 2 DEBUG nova.compute.manager [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.728 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c2e7bd21-e1f6-457a-9d9a-6a2574afcb2b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c6eb8a4-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:62:70:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 140], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642445, 'reachable_time': 24937, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316026, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.746 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[322282eb-cf0d-44c5-b91b-21b42754d5b4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe62:70fd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642445, 'tstamp': 642445}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316032, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.763 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4d137cde-be04-489d-8c24-841c67e7a5a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c6eb8a4-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:62:70:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 140], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642445, 'reachable_time': 24937, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 316033, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e215 do_prune osdmap full prune enabled
Oct 14 05:04:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e216 e216: 3 total, 3 up, 3 in
Oct 14 05:04:05 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e216: 3 total, 3 up, 3 in
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.814 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d85a2892-3c8e-48dc-8728-750c79fc5852]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.821 2 DEBUG nova.compute.manager [req-4b4f1500-1c30-4b40-a4dc-74b89f342a94 req-8fa138de-509b-4139-963d-bd9b937a671d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.822 2 DEBUG oslo_concurrency.lockutils [req-4b4f1500-1c30-4b40-a4dc-74b89f342a94 req-8fa138de-509b-4139-963d-bd9b937a671d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.822 2 DEBUG oslo_concurrency.lockutils [req-4b4f1500-1c30-4b40-a4dc-74b89f342a94 req-8fa138de-509b-4139-963d-bd9b937a671d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.822 2 DEBUG oslo_concurrency.lockutils [req-4b4f1500-1c30-4b40-a4dc-74b89f342a94 req-8fa138de-509b-4139-963d-bd9b937a671d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.823 2 DEBUG nova.compute.manager [req-4b4f1500-1c30-4b40-a4dc-74b89f342a94 req-8fa138de-509b-4139-963d-bd9b937a671d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] No waiting events found dispatching network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.823 2 WARNING nova.compute.manager [req-4b4f1500-1c30-4b40-a4dc-74b89f342a94 req-8fa138de-509b-4139-963d-bd9b937a671d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received unexpected event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for instance with vm_state stopped and task_state powering-on.#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.826 2 DEBUG oslo_concurrency.lockutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.826 2 DEBUG oslo_concurrency.lockutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.834 2 DEBUG nova.virt.hardware [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.835 2 INFO nova.compute.claims [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.874 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a7c49725-bb73-4591-a724-e7f28f41dc7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.875 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c6eb8a4-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.876 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.876 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c6eb8a4-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:05 np0005486808 NetworkManager[44885]: <info>  [1760432645.8783] manager: (tap7c6eb8a4-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/208)
Oct 14 05:04:05 np0005486808 kernel: tap7c6eb8a4-60: entered promiscuous mode
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.880 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7c6eb8a4-60, col_values=(('external_ids', {'iface-id': '0f8f2393-a280-4a6e-ac22-da67bf8d74da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:05 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:05Z|00474|binding|INFO|Releasing lport 0f8f2393-a280-4a6e-ac22-da67bf8d74da from this chassis (sb_readonly=0)
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.896 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7c6eb8a4-6604-462a-8730-43f3742053a7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7c6eb8a4-6604-462a-8730-43f3742053a7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.898 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c300575d-465a-4743-bcc2-fea554af3374]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.899 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-7c6eb8a4-6604-462a-8730-43f3742053a7
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/7c6eb8a4-6604-462a-8730-43f3742053a7.pid.haproxy
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 7c6eb8a4-6604-462a-8730-43f3742053a7
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.900 2 DEBUG nova.compute.manager [req-62d4dff4-3dd5-40a6-9ff0-e9756c6b3387 req-123df882-1c3a-43d2-868d-bd948f52c64b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received event network-vif-plugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.900 2 DEBUG oslo_concurrency.lockutils [req-62d4dff4-3dd5-40a6-9ff0-e9756c6b3387 req-123df882-1c3a-43d2-868d-bd948f52c64b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.901 2 DEBUG oslo_concurrency.lockutils [req-62d4dff4-3dd5-40a6-9ff0-e9756c6b3387 req-123df882-1c3a-43d2-868d-bd948f52c64b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:05.901 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'env', 'PROCESS_TAG=haproxy-7c6eb8a4-6604-462a-8730-43f3742053a7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7c6eb8a4-6604-462a-8730-43f3742053a7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.901 2 DEBUG oslo_concurrency.lockutils [req-62d4dff4-3dd5-40a6-9ff0-e9756c6b3387 req-123df882-1c3a-43d2-868d-bd948f52c64b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.901 2 DEBUG nova.compute.manager [req-62d4dff4-3dd5-40a6-9ff0-e9756c6b3387 req-123df882-1c3a-43d2-868d-bd948f52c64b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Processing event network-vif-plugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:04:05 np0005486808 nova_compute[259627]: 2025-10-14 09:04:05.982 2 DEBUG oslo_concurrency.processutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:04:06 np0005486808 podman[316170]: 2025-10-14 09:04:06.369738962 +0000 UTC m=+0.071160761 container create fb10a26491d9f1f52952733b6c13d4afb70c9ddfe04e9a01c7736b4306db2246 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:04:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:04:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/675114764' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:04:06 np0005486808 podman[316170]: 2025-10-14 09:04:06.338860692 +0000 UTC m=+0.040282531 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:04:06 np0005486808 systemd[1]: Started libpod-conmon-fb10a26491d9f1f52952733b6c13d4afb70c9ddfe04e9a01c7736b4306db2246.scope.
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.448 2 DEBUG oslo_concurrency.processutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.460 2 DEBUG nova.compute.provider_tree [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:04:06 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:04:06 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8301634e7ba8182a15159584c4f3aa16339d156ed90bf5af3faa623c7d035d54/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:04:06 np0005486808 podman[316184]: 2025-10-14 09:04:06.485137439 +0000 UTC m=+0.071978911 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.491 2 DEBUG nova.scheduler.client.report [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:04:06 np0005486808 podman[316170]: 2025-10-14 09:04:06.493165286 +0000 UTC m=+0.194587105 container init fb10a26491d9f1f52952733b6c13d4afb70c9ddfe04e9a01c7736b4306db2246 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 14 05:04:06 np0005486808 podman[316170]: 2025-10-14 09:04:06.498790835 +0000 UTC m=+0.200212634 container start fb10a26491d9f1f52952733b6c13d4afb70c9ddfe04e9a01c7736b4306db2246 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 14 05:04:06 np0005486808 podman[316183]: 2025-10-14 09:04:06.512693827 +0000 UTC m=+0.097218302 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.519 2 DEBUG oslo_concurrency.lockutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:06 np0005486808 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[316208]: [NOTICE]   (316227) : New worker (316230) forked
Oct 14 05:04:06 np0005486808 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[316208]: [NOTICE]   (316227) : Loading success.
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.519 2 DEBUG nova.compute.manager [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.524 2 DEBUG nova.compute.manager [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.525 2 DEBUG nova.compute.manager [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.525 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432646.5244722, 3c8eac8e-dcfa-41ba-9c01-1185061baf5d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.526 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] VM Started (Lifecycle Event)#033[00m
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.532 2 DEBUG nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.536 2 INFO nova.virt.libvirt.driver [-] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Instance rebooted successfully.#033[00m
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.536 2 DEBUG nova.compute.manager [None req-1171c1d7-1e80-4587-b43a-006ac7325cd9 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.537 2 INFO nova.virt.libvirt.driver [-] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Instance spawned successfully.#033[00m
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.537 2 DEBUG nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.562 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 09d03bcf-f719-4ec4-91a0-3c14e350a342 in datapath fc2d149f-aebf-406a-aed2-5161dd22b079 unbound from our chassis#033[00m
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.564 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc2d149f-aebf-406a-aed2-5161dd22b079#033[00m
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.574 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.574 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b69e1da0-5d01-42dc-be84-1f619af67ff8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.575 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfc2d149f-a1 in ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.577 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfc2d149f-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.577 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9c74b36a-7c7a-4600-a1cd-ed0c59531fd3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.578 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.579 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2ee1b09b-563d-4f0c-96f6-f0685432a61e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.595 2 DEBUG nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.595 2 DEBUG nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.595 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[e6d13c65-2fe1-438f-a027-14df14bc9449]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.595 2 DEBUG nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.596 2 DEBUG nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.596 2 DEBUG nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.596 2 DEBUG nova.virt.libvirt.driver [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.607 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.607 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432646.5255911, 3c8eac8e-dcfa-41ba-9c01-1185061baf5d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.607 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] VM Paused (Lifecycle Event)
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.618 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8d6385c6-31cd-4cec-8a79-cff24cf6e585]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.624 2 DEBUG nova.compute.manager [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.624 2 DEBUG nova.network.neutron [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.637 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.640 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for de383510-2de3-40bd-b479-c0010b3f2d1c due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.640 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432646.5273564, de383510-2de3-40bd-b479-c0010b3f2d1c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.640 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] VM Resumed (Lifecycle Event)
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.642 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[49146578-8d02-4406-af0c-0074f85d1a1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.649 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e9b3152a-6630-4f82-919e-60afd918d371]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:04:06 np0005486808 systemd-udevd[316010]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:04:06 np0005486808 NetworkManager[44885]: <info>  [1760432646.6502] manager: (tapfc2d149f-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/209)
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.658 2 INFO nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.665 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.668 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.675 2 INFO nova.compute.manager [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Took 6.10 seconds to spawn the instance on the hypervisor.
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.675 2 DEBUG nova.compute.manager [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.680 2 DEBUG nova.compute.manager [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.681 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ab6f6847-5866-4243-bc31-89db5e3275ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.684 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[17b1d391-8091-44a9-b244-0566cca5b489]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:04:06 np0005486808 NetworkManager[44885]: <info>  [1760432646.7071] device (tapfc2d149f-a0): carrier: link connected
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.712 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[bcdfe06e-dbfb-446f-8a5c-0e0ca07af71c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.713 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432646.5324605, de383510-2de3-40bd-b479-c0010b3f2d1c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.713 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] VM Started (Lifecycle Event)
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.730 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[361e1a3a-b4c2-4575-97fc-e2c1032b6f08]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc2d149f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:e7:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 142], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642546, 'reachable_time': 32167, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316249, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.745 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.749 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.749 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[033d1f2a-853f-45bd-995c-a8fb28305745]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe32:e73e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642546, 'tstamp': 642546}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316250, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.764 2 INFO nova.compute.manager [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Took 7.22 seconds to build instance.
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.769 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a93009c4-883c-4e13-8ba9-ae63b2d137da]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc2d149f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:e7:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 142], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642546, 'reachable_time': 32167, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 316251, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.780 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432646.5330865, 3c8eac8e-dcfa-41ba-9c01-1185061baf5d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.781 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] VM Resumed (Lifecycle Event)
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.796 2 DEBUG oslo_concurrency.lockutils [None req-e52e8ceb-86bd-4a62-b796-f35f28c3ece3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.313s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.802 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.802 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ebfe5e95-53dd-4beb-bf4b-4dccc4e5da07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.804 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.807 2 DEBUG nova.compute.manager [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.808 2 DEBUG nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.809 2 INFO nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Creating image(s)
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.832 2 DEBUG nova.storage.rbd_utils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] rbd image e038df86-1323-4b04-afae-9fe68c98c22c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.862 2 DEBUG nova.storage.rbd_utils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] rbd image e038df86-1323-4b04-afae-9fe68c98c22c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.871 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[980f086b-f67f-43fe-9a2f-8a7a7349ef57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.873 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc2d149f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.873 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.874 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc2d149f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:04:06 np0005486808 NetworkManager[44885]: <info>  [1760432646.8774] manager: (tapfc2d149f-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/210)
Oct 14 05:04:06 np0005486808 kernel: tapfc2d149f-a0: entered promiscuous mode
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.879 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc2d149f-a0, col_values=(('external_ids', {'iface-id': '156432ee-35b5-40a9-aded-8066933d9972'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:04:06 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:06Z|00475|binding|INFO|Releasing lport 156432ee-35b5-40a9-aded-8066933d9972 from this chassis (sb_readonly=0)
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.887 2 DEBUG nova.storage.rbd_utils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] rbd image e038df86-1323-4b04-afae-9fe68c98c22c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.891 2 DEBUG oslo_concurrency.processutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.905 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fc2d149f-aebf-406a-aed2-5161dd22b079.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fc2d149f-aebf-406a-aed2-5161dd22b079.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.906 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d74be060-c40f-41ba-a7c5-c218871ce118]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.907 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-fc2d149f-aebf-406a-aed2-5161dd22b079
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/fc2d149f-aebf-406a-aed2-5161dd22b079.pid.haproxy
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID fc2d149f-aebf-406a-aed2-5161dd22b079
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 05:04:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:06.909 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'env', 'PROCESS_TAG=haproxy-fc2d149f-aebf-406a-aed2-5161dd22b079', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fc2d149f-aebf-406a-aed2-5161dd22b079.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.931 2 DEBUG nova.policy [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4907b291c4c64d2eb768d0036817a00b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e520d17b20a44440b176c2297c35286a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.967 2 DEBUG oslo_concurrency.processutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.967 2 DEBUG oslo_concurrency.lockutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.968 2 DEBUG oslo_concurrency.lockutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.968 2 DEBUG oslo_concurrency.lockutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.987 2 DEBUG nova.storage.rbd_utils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] rbd image e038df86-1323-4b04-afae-9fe68c98c22c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:04:06 np0005486808 nova_compute[259627]: 2025-10-14 09:04:06.989 2 DEBUG oslo_concurrency.processutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 e038df86-1323-4b04-afae-9fe68c98c22c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:04:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:07.022 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:07.023 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:07.024 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:07 np0005486808 nova_compute[259627]: 2025-10-14 09:04:07.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:07 np0005486808 nova_compute[259627]: 2025-10-14 09:04:07.239 2 DEBUG oslo_concurrency.processutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 e038df86-1323-4b04-afae-9fe68c98c22c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.249s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:04:07 np0005486808 nova_compute[259627]: 2025-10-14 09:04:07.295 2 DEBUG nova.storage.rbd_utils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] resizing rbd image e038df86-1323-4b04-afae-9fe68c98c22c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:04:07 np0005486808 podman[316377]: 2025-10-14 09:04:07.335033915 +0000 UTC m=+0.072432402 container create 167ba1bfdc1cafb6e28fcf09a8c5ee914fcbfc9ccc96a10d234f92e59e8337d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3)
Oct 14 05:04:07 np0005486808 systemd[1]: Started libpod-conmon-167ba1bfdc1cafb6e28fcf09a8c5ee914fcbfc9ccc96a10d234f92e59e8337d6.scope.
Oct 14 05:04:07 np0005486808 podman[316377]: 2025-10-14 09:04:07.30350945 +0000 UTC m=+0.040907947 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:04:07 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:04:07 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f6ef93dddd05609592d54febe7e864f7239987c158375c17766114f7714daa3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:04:07 np0005486808 podman[316377]: 2025-10-14 09:04:07.436771117 +0000 UTC m=+0.174169624 container init 167ba1bfdc1cafb6e28fcf09a8c5ee914fcbfc9ccc96a10d234f92e59e8337d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:04:07 np0005486808 podman[316377]: 2025-10-14 09:04:07.4454395 +0000 UTC m=+0.182837987 container start 167ba1bfdc1cafb6e28fcf09a8c5ee914fcbfc9ccc96a10d234f92e59e8337d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 14 05:04:07 np0005486808 nova_compute[259627]: 2025-10-14 09:04:07.461 2 DEBUG nova.objects.instance [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Lazy-loading 'migration_context' on Instance uuid e038df86-1323-4b04-afae-9fe68c98c22c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:04:07 np0005486808 neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079[316446]: [NOTICE]   (316466) : New worker (316470) forked
Oct 14 05:04:07 np0005486808 neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079[316446]: [NOTICE]   (316466) : Loading success.
Oct 14 05:04:07 np0005486808 nova_compute[259627]: 2025-10-14 09:04:07.476 2 DEBUG nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:04:07 np0005486808 nova_compute[259627]: 2025-10-14 09:04:07.477 2 DEBUG nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Ensure instance console log exists: /var/lib/nova/instances/e038df86-1323-4b04-afae-9fe68c98c22c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:04:07 np0005486808 nova_compute[259627]: 2025-10-14 09:04:07.477 2 DEBUG oslo_concurrency.lockutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:07 np0005486808 nova_compute[259627]: 2025-10-14 09:04:07.478 2 DEBUG oslo_concurrency.lockutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:07 np0005486808 nova_compute[259627]: 2025-10-14 09:04:07.478 2 DEBUG oslo_concurrency.lockutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:04:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e216 do_prune osdmap full prune enabled
Oct 14 05:04:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e217 e217: 3 total, 3 up, 3 in
Oct 14 05:04:07 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e217: 3 total, 3 up, 3 in
Oct 14 05:04:07 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1455: 305 pgs: 305 active+clean; 169 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 286 KiB/s rd, 5.3 MiB/s wr, 401 op/s
Oct 14 05:04:07 np0005486808 nova_compute[259627]: 2025-10-14 09:04:07.969 2 DEBUG nova.network.neutron [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Successfully created port: 4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:04:08 np0005486808 nova_compute[259627]: 2025-10-14 09:04:08.216 2 DEBUG nova.compute.manager [req-260486ea-3ee0-461c-a7d9-5c79ee0a6f53 req-c3d0efb2-34e0-4404-a3a1-16dbb37e0a7d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:04:08 np0005486808 nova_compute[259627]: 2025-10-14 09:04:08.217 2 DEBUG oslo_concurrency.lockutils [req-260486ea-3ee0-461c-a7d9-5c79ee0a6f53 req-c3d0efb2-34e0-4404-a3a1-16dbb37e0a7d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:08 np0005486808 nova_compute[259627]: 2025-10-14 09:04:08.218 2 DEBUG oslo_concurrency.lockutils [req-260486ea-3ee0-461c-a7d9-5c79ee0a6f53 req-c3d0efb2-34e0-4404-a3a1-16dbb37e0a7d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:08 np0005486808 nova_compute[259627]: 2025-10-14 09:04:08.218 2 DEBUG oslo_concurrency.lockutils [req-260486ea-3ee0-461c-a7d9-5c79ee0a6f53 req-c3d0efb2-34e0-4404-a3a1-16dbb37e0a7d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:08 np0005486808 nova_compute[259627]: 2025-10-14 09:04:08.219 2 DEBUG nova.compute.manager [req-260486ea-3ee0-461c-a7d9-5c79ee0a6f53 req-c3d0efb2-34e0-4404-a3a1-16dbb37e0a7d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] No waiting events found dispatching network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:04:08 np0005486808 nova_compute[259627]: 2025-10-14 09:04:08.219 2 WARNING nova.compute.manager [req-260486ea-3ee0-461c-a7d9-5c79ee0a6f53 req-c3d0efb2-34e0-4404-a3a1-16dbb37e0a7d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received unexpected event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:04:08 np0005486808 nova_compute[259627]: 2025-10-14 09:04:08.221 2 DEBUG nova.compute.manager [req-2de1bf1d-88cd-4357-93a5-7618305617d6 req-95974312-357e-4a45-8250-70cd2ca3e38a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received event network-vif-plugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:04:08 np0005486808 nova_compute[259627]: 2025-10-14 09:04:08.222 2 DEBUG oslo_concurrency.lockutils [req-2de1bf1d-88cd-4357-93a5-7618305617d6 req-95974312-357e-4a45-8250-70cd2ca3e38a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:08 np0005486808 nova_compute[259627]: 2025-10-14 09:04:08.223 2 DEBUG oslo_concurrency.lockutils [req-2de1bf1d-88cd-4357-93a5-7618305617d6 req-95974312-357e-4a45-8250-70cd2ca3e38a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:08 np0005486808 nova_compute[259627]: 2025-10-14 09:04:08.223 2 DEBUG oslo_concurrency.lockutils [req-2de1bf1d-88cd-4357-93a5-7618305617d6 req-95974312-357e-4a45-8250-70cd2ca3e38a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:08 np0005486808 nova_compute[259627]: 2025-10-14 09:04:08.224 2 DEBUG nova.compute.manager [req-2de1bf1d-88cd-4357-93a5-7618305617d6 req-95974312-357e-4a45-8250-70cd2ca3e38a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] No waiting events found dispatching network-vif-plugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:04:08 np0005486808 nova_compute[259627]: 2025-10-14 09:04:08.225 2 WARNING nova.compute.manager [req-2de1bf1d-88cd-4357-93a5-7618305617d6 req-95974312-357e-4a45-8250-70cd2ca3e38a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received unexpected event network-vif-plugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:04:09 np0005486808 nova_compute[259627]: 2025-10-14 09:04:09.041 2 DEBUG nova.network.neutron [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Successfully updated port: 4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:04:09 np0005486808 nova_compute[259627]: 2025-10-14 09:04:09.058 2 DEBUG oslo_concurrency.lockutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Acquiring lock "refresh_cache-e038df86-1323-4b04-afae-9fe68c98c22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:04:09 np0005486808 nova_compute[259627]: 2025-10-14 09:04:09.058 2 DEBUG oslo_concurrency.lockutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Acquired lock "refresh_cache-e038df86-1323-4b04-afae-9fe68c98c22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:04:09 np0005486808 nova_compute[259627]: 2025-10-14 09:04:09.058 2 DEBUG nova.network.neutron [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:04:09 np0005486808 nova_compute[259627]: 2025-10-14 09:04:09.178 2 DEBUG nova.network.neutron [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:04:09 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1456: 305 pgs: 305 active+clean; 169 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 194 KiB/s rd, 3.6 MiB/s wr, 272 op/s
Oct 14 05:04:09 np0005486808 nova_compute[259627]: 2025-10-14 09:04:09.758 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432634.757353, dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:04:09 np0005486808 nova_compute[259627]: 2025-10-14 09:04:09.759 2 INFO nova.compute.manager [-] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:04:09 np0005486808 nova_compute[259627]: 2025-10-14 09:04:09.778 2 DEBUG nova.compute.manager [None req-919cec2c-8cd9-41f2-bfb8-c9c338aef802 - - - - - -] [instance: dcaa085e-cf7e-4a34-9728-4f12a1bc9ac9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:04:10 np0005486808 nova_compute[259627]: 2025-10-14 09:04:10.162 2 DEBUG nova.network.neutron [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Updating instance_info_cache with network_info: [{"id": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "address": "fa:16:3e:fb:80:d4", "network": {"id": "e964bc94-eb23-4bb9-b6af-2d14c0f7d764", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-577710959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e520d17b20a44440b176c2297c35286a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f8c1944-ec", "ovs_interfaceid": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:04:10 np0005486808 nova_compute[259627]: 2025-10-14 09:04:10.186 2 DEBUG oslo_concurrency.lockutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Releasing lock "refresh_cache-e038df86-1323-4b04-afae-9fe68c98c22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:04:10 np0005486808 nova_compute[259627]: 2025-10-14 09:04:10.186 2 DEBUG nova.compute.manager [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Instance network_info: |[{"id": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "address": "fa:16:3e:fb:80:d4", "network": {"id": "e964bc94-eb23-4bb9-b6af-2d14c0f7d764", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-577710959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e520d17b20a44440b176c2297c35286a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f8c1944-ec", "ovs_interfaceid": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:04:10 np0005486808 nova_compute[259627]: 2025-10-14 09:04:10.188 2 DEBUG nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Start _get_guest_xml network_info=[{"id": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "address": "fa:16:3e:fb:80:d4", "network": {"id": "e964bc94-eb23-4bb9-b6af-2d14c0f7d764", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-577710959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e520d17b20a44440b176c2297c35286a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f8c1944-ec", "ovs_interfaceid": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:04:10 np0005486808 nova_compute[259627]: 2025-10-14 09:04:10.193 2 WARNING nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:04:10 np0005486808 nova_compute[259627]: 2025-10-14 09:04:10.204 2 DEBUG nova.virt.libvirt.host [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:04:10 np0005486808 nova_compute[259627]: 2025-10-14 09:04:10.205 2 DEBUG nova.virt.libvirt.host [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:04:10 np0005486808 nova_compute[259627]: 2025-10-14 09:04:10.208 2 DEBUG nova.virt.libvirt.host [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:04:10 np0005486808 nova_compute[259627]: 2025-10-14 09:04:10.208 2 DEBUG nova.virt.libvirt.host [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:04:10 np0005486808 nova_compute[259627]: 2025-10-14 09:04:10.209 2 DEBUG nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:04:10 np0005486808 nova_compute[259627]: 2025-10-14 09:04:10.209 2 DEBUG nova.virt.hardware [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:04:10 np0005486808 nova_compute[259627]: 2025-10-14 09:04:10.209 2 DEBUG nova.virt.hardware [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:04:10 np0005486808 nova_compute[259627]: 2025-10-14 09:04:10.210 2 DEBUG nova.virt.hardware [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:04:10 np0005486808 nova_compute[259627]: 2025-10-14 09:04:10.210 2 DEBUG nova.virt.hardware [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:04:10 np0005486808 nova_compute[259627]: 2025-10-14 09:04:10.210 2 DEBUG nova.virt.hardware [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:04:10 np0005486808 nova_compute[259627]: 2025-10-14 09:04:10.210 2 DEBUG nova.virt.hardware [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:04:10 np0005486808 nova_compute[259627]: 2025-10-14 09:04:10.210 2 DEBUG nova.virt.hardware [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:04:10 np0005486808 nova_compute[259627]: 2025-10-14 09:04:10.211 2 DEBUG nova.virt.hardware [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:04:10 np0005486808 nova_compute[259627]: 2025-10-14 09:04:10.211 2 DEBUG nova.virt.hardware [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:04:10 np0005486808 nova_compute[259627]: 2025-10-14 09:04:10.211 2 DEBUG nova.virt.hardware [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:04:10 np0005486808 nova_compute[259627]: 2025-10-14 09:04:10.211 2 DEBUG nova.virt.hardware [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:04:10 np0005486808 nova_compute[259627]: 2025-10-14 09:04:10.214 2 DEBUG oslo_concurrency.processutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:04:10 np0005486808 nova_compute[259627]: 2025-10-14 09:04:10.310 2 DEBUG nova.compute.manager [req-55a0c90b-e0d7-442d-9c11-bf6e95ef17e5 req-b75dcbe4-5dc5-4472-a9af-b8f9215cde93 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received event network-changed-09d03bcf-f719-4ec4-91a0-3c14e350a342 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:04:10 np0005486808 nova_compute[259627]: 2025-10-14 09:04:10.310 2 DEBUG nova.compute.manager [req-55a0c90b-e0d7-442d-9c11-bf6e95ef17e5 req-b75dcbe4-5dc5-4472-a9af-b8f9215cde93 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Refreshing instance network info cache due to event network-changed-09d03bcf-f719-4ec4-91a0-3c14e350a342. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:04:10 np0005486808 nova_compute[259627]: 2025-10-14 09:04:10.311 2 DEBUG oslo_concurrency.lockutils [req-55a0c90b-e0d7-442d-9c11-bf6e95ef17e5 req-b75dcbe4-5dc5-4472-a9af-b8f9215cde93 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-3c8eac8e-dcfa-41ba-9c01-1185061baf5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:04:10 np0005486808 nova_compute[259627]: 2025-10-14 09:04:10.311 2 DEBUG oslo_concurrency.lockutils [req-55a0c90b-e0d7-442d-9c11-bf6e95ef17e5 req-b75dcbe4-5dc5-4472-a9af-b8f9215cde93 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-3c8eac8e-dcfa-41ba-9c01-1185061baf5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:04:10 np0005486808 nova_compute[259627]: 2025-10-14 09:04:10.311 2 DEBUG nova.network.neutron [req-55a0c90b-e0d7-442d-9c11-bf6e95ef17e5 req-b75dcbe4-5dc5-4472-a9af-b8f9215cde93 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Refreshing network info cache for port 09d03bcf-f719-4ec4-91a0-3c14e350a342 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:04:10 np0005486808 nova_compute[259627]: 2025-10-14 09:04:10.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:10 np0005486808 nova_compute[259627]: 2025-10-14 09:04:10.514 2 DEBUG nova.objects.instance [None req-a2bcc681-167a-46c3-a9a5-3a1afb58fa73 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'pci_devices' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:04:10 np0005486808 nova_compute[259627]: 2025-10-14 09:04:10.541 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432650.53431, de383510-2de3-40bd-b479-c0010b3f2d1c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:04:10 np0005486808 nova_compute[259627]: 2025-10-14 09:04:10.542 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:04:10 np0005486808 nova_compute[259627]: 2025-10-14 09:04:10.557 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:04:10 np0005486808 nova_compute[259627]: 2025-10-14 09:04:10.568 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:04:10 np0005486808 nova_compute[259627]: 2025-10-14 09:04:10.584 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Oct 14 05:04:10 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:04:10 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3197499560' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:04:10 np0005486808 nova_compute[259627]: 2025-10-14 09:04:10.686 2 DEBUG oslo_concurrency.processutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:04:10 np0005486808 nova_compute[259627]: 2025-10-14 09:04:10.705 2 DEBUG nova.storage.rbd_utils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] rbd image e038df86-1323-4b04-afae-9fe68c98c22c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:04:10 np0005486808 nova_compute[259627]: 2025-10-14 09:04:10.708 2 DEBUG oslo_concurrency.processutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:04:10 np0005486808 kernel: tap8ec905f0-b7 (unregistering): left promiscuous mode
Oct 14 05:04:10 np0005486808 NetworkManager[44885]: <info>  [1760432650.9741] device (tap8ec905f0-b7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:04:10 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:10Z|00476|binding|INFO|Releasing lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 from this chassis (sb_readonly=0)
Oct 14 05:04:10 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:10Z|00477|binding|INFO|Setting lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 down in Southbound
Oct 14 05:04:10 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:10Z|00478|binding|INFO|Removing iface tap8ec905f0-b7 ovn-installed in OVS
Oct 14 05:04:10 np0005486808 nova_compute[259627]: 2025-10-14 09:04:10.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:11.003 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:e2:1b 10.100.0.10'], port_security=['fa:16:3e:be:e2:1b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'de383510-2de3-40bd-b479-c0010b3f2d1c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c6eb8a4-6604-462a-8730-43f3742053a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '368d762ed02e459d892ad1e5488c2871', 'neutron:revision_number': '12', 'neutron:security_group_ids': 'b04f9b87-06e3-4b99-859f-b095869a50f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.247', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d063443f-d68f-4763-a6fc-fc1631e3cde1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:04:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:11.004 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 in datapath 7c6eb8a4-6604-462a-8730-43f3742053a7 unbound from our chassis#033[00m
Oct 14 05:04:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:11.006 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7c6eb8a4-6604-462a-8730-43f3742053a7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:04:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:11.007 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ccf99e61-3bc4-4a4a-889f-632ad6fd15a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:11.008 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 namespace which is not needed anymore#033[00m
Oct 14 05:04:11 np0005486808 nova_compute[259627]: 2025-10-14 09:04:11.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:11 np0005486808 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d0000002b.scope: Deactivated successfully.
Oct 14 05:04:11 np0005486808 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d0000002b.scope: Consumed 4.898s CPU time.
Oct 14 05:04:11 np0005486808 systemd-machined[214636]: Machine qemu-63-instance-0000002b terminated.
Oct 14 05:04:11 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:04:11 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2705211513' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:04:11 np0005486808 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[316208]: [NOTICE]   (316227) : haproxy version is 2.8.14-c23fe91
Oct 14 05:04:11 np0005486808 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[316208]: [NOTICE]   (316227) : path to executable is /usr/sbin/haproxy
Oct 14 05:04:11 np0005486808 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[316208]: [WARNING]  (316227) : Exiting Master process...
Oct 14 05:04:11 np0005486808 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[316208]: [ALERT]    (316227) : Current worker (316230) exited with code 143 (Terminated)
Oct 14 05:04:11 np0005486808 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[316208]: [WARNING]  (316227) : All workers exited. Exiting... (0)
Oct 14 05:04:11 np0005486808 systemd[1]: libpod-fb10a26491d9f1f52952733b6c13d4afb70c9ddfe04e9a01c7736b4306db2246.scope: Deactivated successfully.
Oct 14 05:04:11 np0005486808 conmon[316208]: conmon fb10a26491d9f1f52952 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fb10a26491d9f1f52952733b6c13d4afb70c9ddfe04e9a01c7736b4306db2246.scope/container/memory.events
Oct 14 05:04:11 np0005486808 nova_compute[259627]: 2025-10-14 09:04:11.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:11 np0005486808 podman[316566]: 2025-10-14 09:04:11.150740691 +0000 UTC m=+0.051547849 container died fb10a26491d9f1f52952733b6c13d4afb70c9ddfe04e9a01c7736b4306db2246 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:04:11 np0005486808 nova_compute[259627]: 2025-10-14 09:04:11.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:11 np0005486808 nova_compute[259627]: 2025-10-14 09:04:11.156 2 DEBUG nova.compute.manager [None req-a2bcc681-167a-46c3-a9a5-3a1afb58fa73 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:04:11 np0005486808 nova_compute[259627]: 2025-10-14 09:04:11.161 2 DEBUG oslo_concurrency.processutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:04:11 np0005486808 nova_compute[259627]: 2025-10-14 09:04:11.162 2 DEBUG nova.virt.libvirt.vif [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:04:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1327010057',display_name='tempest-AttachInterfacesUnderV243Test-server-1327010057',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1327010057',id=52,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEoL8Eizmz78I7kJk+2faYxVDwYlZ7Qa0JVnSyW4HvPt6t6qpenjELhDNQJBBgBLKQxH+hNzILHY6YG4gLNrrM0gadWtg4ztrg1o/Wi2tCk6CtSq2N27wHKOX5s993gLcg==',key_name='tempest-keypair-1836165188',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e520d17b20a44440b176c2297c35286a',ramdisk_id='',reservation_id='r-oukj60f0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1413244718',owner_user_name='tempest-AttachInterfacesUnderV243Test-1413244718-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:04:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4907b291c4c64d2eb768d0036817a00b',uuid=e038df86-1323-4b04-afae-9fe68c98c22c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "address": "fa:16:3e:fb:80:d4", "network": {"id": "e964bc94-eb23-4bb9-b6af-2d14c0f7d764", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-577710959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e520d17b20a44440b176c2297c35286a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f8c1944-ec", "ovs_interfaceid": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:04:11 np0005486808 nova_compute[259627]: 2025-10-14 09:04:11.163 2 DEBUG nova.network.os_vif_util [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Converting VIF {"id": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "address": "fa:16:3e:fb:80:d4", "network": {"id": "e964bc94-eb23-4bb9-b6af-2d14c0f7d764", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-577710959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e520d17b20a44440b176c2297c35286a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f8c1944-ec", "ovs_interfaceid": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:04:11 np0005486808 nova_compute[259627]: 2025-10-14 09:04:11.163 2 DEBUG nova.network.os_vif_util [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:80:d4,bridge_name='br-int',has_traffic_filtering=True,id=4f8c1944-ec5d-4de3-82f1-19760c6b4dd4,network=Network(e964bc94-eb23-4bb9-b6af-2d14c0f7d764),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f8c1944-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:04:11 np0005486808 nova_compute[259627]: 2025-10-14 09:04:11.165 2 DEBUG nova.objects.instance [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Lazy-loading 'pci_devices' on Instance uuid e038df86-1323-4b04-afae-9fe68c98c22c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:04:11 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fb10a26491d9f1f52952733b6c13d4afb70c9ddfe04e9a01c7736b4306db2246-userdata-shm.mount: Deactivated successfully.
Oct 14 05:04:11 np0005486808 systemd[1]: var-lib-containers-storage-overlay-8301634e7ba8182a15159584c4f3aa16339d156ed90bf5af3faa623c7d035d54-merged.mount: Deactivated successfully.
Oct 14 05:04:11 np0005486808 nova_compute[259627]: 2025-10-14 09:04:11.190 2 DEBUG nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:04:11 np0005486808 nova_compute[259627]:  <uuid>e038df86-1323-4b04-afae-9fe68c98c22c</uuid>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:  <name>instance-00000034</name>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:04:11 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:      <nova:name>tempest-AttachInterfacesUnderV243Test-server-1327010057</nova:name>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:04:10</nova:creationTime>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:04:11 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:        <nova:user uuid="4907b291c4c64d2eb768d0036817a00b">tempest-AttachInterfacesUnderV243Test-1413244718-project-member</nova:user>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:        <nova:project uuid="e520d17b20a44440b176c2297c35286a">tempest-AttachInterfacesUnderV243Test-1413244718</nova:project>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:        <nova:port uuid="4f8c1944-ec5d-4de3-82f1-19760c6b4dd4">
Oct 14 05:04:11 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:      <entry name="serial">e038df86-1323-4b04-afae-9fe68c98c22c</entry>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:      <entry name="uuid">e038df86-1323-4b04-afae-9fe68c98c22c</entry>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:04:11 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/e038df86-1323-4b04-afae-9fe68c98c22c_disk">
Oct 14 05:04:11 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:04:11 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:04:11 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/e038df86-1323-4b04-afae-9fe68c98c22c_disk.config">
Oct 14 05:04:11 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:04:11 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:04:11 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:fb:80:d4"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:      <target dev="tap4f8c1944-ec"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:04:11 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/e038df86-1323-4b04-afae-9fe68c98c22c/console.log" append="off"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:04:11 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:04:11 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:04:11 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:04:11 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:04:11 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:04:11 np0005486808 nova_compute[259627]: 2025-10-14 09:04:11.191 2 DEBUG nova.compute.manager [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Preparing to wait for external event network-vif-plugged-4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:04:11 np0005486808 nova_compute[259627]: 2025-10-14 09:04:11.191 2 DEBUG oslo_concurrency.lockutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Acquiring lock "e038df86-1323-4b04-afae-9fe68c98c22c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:11 np0005486808 nova_compute[259627]: 2025-10-14 09:04:11.191 2 DEBUG oslo_concurrency.lockutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Lock "e038df86-1323-4b04-afae-9fe68c98c22c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:11 np0005486808 nova_compute[259627]: 2025-10-14 09:04:11.191 2 DEBUG oslo_concurrency.lockutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Lock "e038df86-1323-4b04-afae-9fe68c98c22c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:11 np0005486808 nova_compute[259627]: 2025-10-14 09:04:11.192 2 DEBUG nova.virt.libvirt.vif [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:04:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1327010057',display_name='tempest-AttachInterfacesUnderV243Test-server-1327010057',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1327010057',id=52,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEoL8Eizmz78I7kJk+2faYxVDwYlZ7Qa0JVnSyW4HvPt6t6qpenjELhDNQJBBgBLKQxH+hNzILHY6YG4gLNrrM0gadWtg4ztrg1o/Wi2tCk6CtSq2N27wHKOX5s993gLcg==',key_name='tempest-keypair-1836165188',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e520d17b20a44440b176c2297c35286a',ramdisk_id='',reservation_id='r-oukj60f0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1413244718',owner_user_name='tempest-AttachInterfacesUnderV243Test-1413244718-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:04:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4907b291c4c64d2eb768d0036817a00b',uuid=e038df86-1323-4b04-afae-9fe68c98c22c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "address": "fa:16:3e:fb:80:d4", "network": {"id": "e964bc94-eb23-4bb9-b6af-2d14c0f7d764", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-577710959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e520d17b20a44440b176c2297c35286a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f8c1944-ec", "ovs_interfaceid": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:04:11 np0005486808 nova_compute[259627]: 2025-10-14 09:04:11.192 2 DEBUG nova.network.os_vif_util [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Converting VIF {"id": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "address": "fa:16:3e:fb:80:d4", "network": {"id": "e964bc94-eb23-4bb9-b6af-2d14c0f7d764", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-577710959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e520d17b20a44440b176c2297c35286a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f8c1944-ec", "ovs_interfaceid": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:04:11 np0005486808 nova_compute[259627]: 2025-10-14 09:04:11.193 2 DEBUG nova.network.os_vif_util [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:80:d4,bridge_name='br-int',has_traffic_filtering=True,id=4f8c1944-ec5d-4de3-82f1-19760c6b4dd4,network=Network(e964bc94-eb23-4bb9-b6af-2d14c0f7d764),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f8c1944-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:04:11 np0005486808 nova_compute[259627]: 2025-10-14 09:04:11.193 2 DEBUG os_vif [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:80:d4,bridge_name='br-int',has_traffic_filtering=True,id=4f8c1944-ec5d-4de3-82f1-19760c6b4dd4,network=Network(e964bc94-eb23-4bb9-b6af-2d14c0f7d764),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f8c1944-ec') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:04:11 np0005486808 nova_compute[259627]: 2025-10-14 09:04:11.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:11 np0005486808 nova_compute[259627]: 2025-10-14 09:04:11.194 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:11 np0005486808 nova_compute[259627]: 2025-10-14 09:04:11.194 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:04:11 np0005486808 podman[316566]: 2025-10-14 09:04:11.195826939 +0000 UTC m=+0.096634107 container cleanup fb10a26491d9f1f52952733b6c13d4afb70c9ddfe04e9a01c7736b4306db2246 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 14 05:04:11 np0005486808 nova_compute[259627]: 2025-10-14 09:04:11.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:11 np0005486808 nova_compute[259627]: 2025-10-14 09:04:11.196 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f8c1944-ec, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:11 np0005486808 nova_compute[259627]: 2025-10-14 09:04:11.196 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4f8c1944-ec, col_values=(('external_ids', {'iface-id': '4f8c1944-ec5d-4de3-82f1-19760c6b4dd4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fb:80:d4', 'vm-uuid': 'e038df86-1323-4b04-afae-9fe68c98c22c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:11 np0005486808 nova_compute[259627]: 2025-10-14 09:04:11.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:11 np0005486808 NetworkManager[44885]: <info>  [1760432651.1985] manager: (tap4f8c1944-ec): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/211)
Oct 14 05:04:11 np0005486808 nova_compute[259627]: 2025-10-14 09:04:11.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:04:11 np0005486808 systemd[1]: libpod-conmon-fb10a26491d9f1f52952733b6c13d4afb70c9ddfe04e9a01c7736b4306db2246.scope: Deactivated successfully.
Oct 14 05:04:11 np0005486808 nova_compute[259627]: 2025-10-14 09:04:11.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:11 np0005486808 nova_compute[259627]: 2025-10-14 09:04:11.205 2 INFO os_vif [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:80:d4,bridge_name='br-int',has_traffic_filtering=True,id=4f8c1944-ec5d-4de3-82f1-19760c6b4dd4,network=Network(e964bc94-eb23-4bb9-b6af-2d14c0f7d764),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f8c1944-ec')#033[00m
Oct 14 05:04:11 np0005486808 nova_compute[259627]: 2025-10-14 09:04:11.267 2 DEBUG nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:04:11 np0005486808 nova_compute[259627]: 2025-10-14 09:04:11.267 2 DEBUG nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:04:11 np0005486808 nova_compute[259627]: 2025-10-14 09:04:11.267 2 DEBUG nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] No VIF found with MAC fa:16:3e:fb:80:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:04:11 np0005486808 podman[316607]: 2025-10-14 09:04:11.2678827 +0000 UTC m=+0.046097085 container remove fb10a26491d9f1f52952733b6c13d4afb70c9ddfe04e9a01c7736b4306db2246 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 14 05:04:11 np0005486808 nova_compute[259627]: 2025-10-14 09:04:11.268 2 INFO nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Using config drive#033[00m
Oct 14 05:04:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:11.279 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[38ce6d25-dcc5-40ed-995b-188bd2898478]: (4, ('Tue Oct 14 09:04:11 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 (fb10a26491d9f1f52952733b6c13d4afb70c9ddfe04e9a01c7736b4306db2246)\nfb10a26491d9f1f52952733b6c13d4afb70c9ddfe04e9a01c7736b4306db2246\nTue Oct 14 09:04:11 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 (fb10a26491d9f1f52952733b6c13d4afb70c9ddfe04e9a01c7736b4306db2246)\nfb10a26491d9f1f52952733b6c13d4afb70c9ddfe04e9a01c7736b4306db2246\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:11.281 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[92fa0b98-c7a6-4cd3-a484-cabe738709a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:11.282 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c6eb8a4-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:11 np0005486808 kernel: tap7c6eb8a4-60: left promiscuous mode
Oct 14 05:04:11 np0005486808 nova_compute[259627]: 2025-10-14 09:04:11.296 2 DEBUG nova.storage.rbd_utils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] rbd image e038df86-1323-4b04-afae-9fe68c98c22c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:04:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:11.311 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f14cc1af-51f1-49ad-a5f1-d8c5818ca6d3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:11 np0005486808 nova_compute[259627]: 2025-10-14 09:04:11.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:11.338 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4ba31b3d-70f2-49e8-a43f-14b3d95ecd53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:11.341 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[de14f6ce-27b5-4a2c-b364-c6c5669bcf11]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:11.357 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d9380612-9d07-4e59-9802-5ade09921f2a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642437, 'reachable_time': 44473, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316647, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:11 np0005486808 systemd[1]: run-netns-ovnmeta\x2d7c6eb8a4\x2d6604\x2d462a\x2d8730\x2d43f3742053a7.mount: Deactivated successfully.
Oct 14 05:04:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:11.361 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:04:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:11.361 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[b6ed95b2-e952-4b95-9c41-40be868e774a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:11 np0005486808 nova_compute[259627]: 2025-10-14 09:04:11.500 2 DEBUG nova.compute.manager [req-e5f2f79d-d0ff-4741-a2c6-2f9672cc8764 req-d0398d22-7442-45f6-b1ca-fe2117f0ef7a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-unplugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:04:11 np0005486808 nova_compute[259627]: 2025-10-14 09:04:11.500 2 DEBUG oslo_concurrency.lockutils [req-e5f2f79d-d0ff-4741-a2c6-2f9672cc8764 req-d0398d22-7442-45f6-b1ca-fe2117f0ef7a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:11 np0005486808 nova_compute[259627]: 2025-10-14 09:04:11.501 2 DEBUG oslo_concurrency.lockutils [req-e5f2f79d-d0ff-4741-a2c6-2f9672cc8764 req-d0398d22-7442-45f6-b1ca-fe2117f0ef7a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:11 np0005486808 nova_compute[259627]: 2025-10-14 09:04:11.501 2 DEBUG oslo_concurrency.lockutils [req-e5f2f79d-d0ff-4741-a2c6-2f9672cc8764 req-d0398d22-7442-45f6-b1ca-fe2117f0ef7a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:11 np0005486808 nova_compute[259627]: 2025-10-14 09:04:11.501 2 DEBUG nova.compute.manager [req-e5f2f79d-d0ff-4741-a2c6-2f9672cc8764 req-d0398d22-7442-45f6-b1ca-fe2117f0ef7a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] No waiting events found dispatching network-vif-unplugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:04:11 np0005486808 nova_compute[259627]: 2025-10-14 09:04:11.501 2 WARNING nova.compute.manager [req-e5f2f79d-d0ff-4741-a2c6-2f9672cc8764 req-d0398d22-7442-45f6-b1ca-fe2117f0ef7a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received unexpected event network-vif-unplugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for instance with vm_state suspended and task_state None.#033[00m
Oct 14 05:04:11 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1457: 305 pgs: 305 active+clean; 215 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 6.2 MiB/s wr, 563 op/s
Oct 14 05:04:11 np0005486808 nova_compute[259627]: 2025-10-14 09:04:11.686 2 INFO nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Creating config drive at /var/lib/nova/instances/e038df86-1323-4b04-afae-9fe68c98c22c/disk.config#033[00m
Oct 14 05:04:11 np0005486808 nova_compute[259627]: 2025-10-14 09:04:11.692 2 DEBUG oslo_concurrency.processutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e038df86-1323-4b04-afae-9fe68c98c22c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn8lj4pft execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:04:11 np0005486808 nova_compute[259627]: 2025-10-14 09:04:11.826 2 DEBUG oslo_concurrency.processutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e038df86-1323-4b04-afae-9fe68c98c22c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn8lj4pft" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:04:11 np0005486808 nova_compute[259627]: 2025-10-14 09:04:11.852 2 DEBUG nova.storage.rbd_utils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] rbd image e038df86-1323-4b04-afae-9fe68c98c22c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:04:11 np0005486808 nova_compute[259627]: 2025-10-14 09:04:11.855 2 DEBUG oslo_concurrency.processutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e038df86-1323-4b04-afae-9fe68c98c22c/disk.config e038df86-1323-4b04-afae-9fe68c98c22c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:04:11 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Oct 14 05:04:11 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:11.935947) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 05:04:11 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Oct 14 05:04:11 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432651936002, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2612, "num_deletes": 529, "total_data_size": 3416722, "memory_usage": 3475456, "flush_reason": "Manual Compaction"}
Oct 14 05:04:11 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Oct 14 05:04:11 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432651952285, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3349392, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27928, "largest_seqno": 30539, "table_properties": {"data_size": 3337760, "index_size": 7166, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3397, "raw_key_size": 27116, "raw_average_key_size": 20, "raw_value_size": 3312510, "raw_average_value_size": 2481, "num_data_blocks": 311, "num_entries": 1335, "num_filter_entries": 1335, "num_deletions": 529, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760432482, "oldest_key_time": 1760432482, "file_creation_time": 1760432651, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:04:11 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 16375 microseconds, and 7398 cpu microseconds.
Oct 14 05:04:11 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:04:11 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:11.952328) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3349392 bytes OK
Oct 14 05:04:11 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:11.952349) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Oct 14 05:04:11 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:11.954967) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Oct 14 05:04:11 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:11.954984) EVENT_LOG_v1 {"time_micros": 1760432651954978, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 05:04:11 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:11.955001) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 05:04:11 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3404533, prev total WAL file size 3404533, number of live WAL files 2.
Oct 14 05:04:11 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:04:11 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:11.956144) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Oct 14 05:04:11 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 05:04:11 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3270KB)], [62(8428KB)]
Oct 14 05:04:11 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432651956212, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 11980335, "oldest_snapshot_seqno": -1}
Oct 14 05:04:12 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5543 keys, 10230631 bytes, temperature: kUnknown
Oct 14 05:04:12 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432652004149, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 10230631, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10189175, "index_size": 26498, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13893, "raw_key_size": 139135, "raw_average_key_size": 25, "raw_value_size": 10085005, "raw_average_value_size": 1819, "num_data_blocks": 1083, "num_entries": 5543, "num_filter_entries": 5543, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760432651, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:04:12 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:04:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:12.004346) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 10230631 bytes
Oct 14 05:04:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:12.008824) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 249.6 rd, 213.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.2 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(6.6) write-amplify(3.1) OK, records in: 6600, records dropped: 1057 output_compression: NoCompression
Oct 14 05:04:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:12.008844) EVENT_LOG_v1 {"time_micros": 1760432652008834, "job": 34, "event": "compaction_finished", "compaction_time_micros": 47998, "compaction_time_cpu_micros": 21010, "output_level": 6, "num_output_files": 1, "total_output_size": 10230631, "num_input_records": 6600, "num_output_records": 5543, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 05:04:12 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:04:12 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432652009544, "job": 34, "event": "table_file_deletion", "file_number": 64}
Oct 14 05:04:12 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:04:12 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432652010731, "job": 34, "event": "table_file_deletion", "file_number": 62}
Oct 14 05:04:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:11.956039) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:04:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:12.010781) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:04:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:12.010786) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:04:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:12.010788) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:04:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:12.010790) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:04:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:12.010792) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:04:12 np0005486808 nova_compute[259627]: 2025-10-14 09:04:12.060 2 DEBUG oslo_concurrency.processutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e038df86-1323-4b04-afae-9fe68c98c22c/disk.config e038df86-1323-4b04-afae-9fe68c98c22c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.205s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:04:12 np0005486808 nova_compute[259627]: 2025-10-14 09:04:12.061 2 INFO nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Deleting local config drive /var/lib/nova/instances/e038df86-1323-4b04-afae-9fe68c98c22c/disk.config because it was imported into RBD.#033[00m
Oct 14 05:04:12 np0005486808 kernel: tap4f8c1944-ec: entered promiscuous mode
Oct 14 05:04:12 np0005486808 systemd-udevd[316545]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:04:12 np0005486808 NetworkManager[44885]: <info>  [1760432652.1099] manager: (tap4f8c1944-ec): new Tun device (/org/freedesktop/NetworkManager/Devices/212)
Oct 14 05:04:12 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:12Z|00479|binding|INFO|Claiming lport 4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 for this chassis.
Oct 14 05:04:12 np0005486808 nova_compute[259627]: 2025-10-14 09:04:12.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:12 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:12Z|00480|binding|INFO|4f8c1944-ec5d-4de3-82f1-19760c6b4dd4: Claiming fa:16:3e:fb:80:d4 10.100.0.7
Oct 14 05:04:12 np0005486808 NetworkManager[44885]: <info>  [1760432652.1241] device (tap4f8c1944-ec): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.121 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:80:d4 10.100.0.7'], port_security=['fa:16:3e:fb:80:d4 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e038df86-1323-4b04-afae-9fe68c98c22c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e964bc94-eb23-4bb9-b6af-2d14c0f7d764', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e520d17b20a44440b176c2297c35286a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dfbcaeb0-59cf-4ea6-aad2-32a400918089', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8c03d4d8-729d-49db-b443-08ab27defcda, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=4f8c1944-ec5d-4de3-82f1-19760c6b4dd4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.122 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 in datapath e964bc94-eb23-4bb9-b6af-2d14c0f7d764 bound to our chassis#033[00m
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.124 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e964bc94-eb23-4bb9-b6af-2d14c0f7d764#033[00m
Oct 14 05:04:12 np0005486808 NetworkManager[44885]: <info>  [1760432652.1253] device (tap4f8c1944-ec): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.134 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8106ade3-b0c7-4750-8cf0-691b1b251272]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.136 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape964bc94-e1 in ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.140 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape964bc94-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.140 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[007e7b43-5aec-457e-90dc-3eef11599246]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.143 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6c466e13-59a9-4205-b16c-a1188a139728]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:12 np0005486808 systemd-machined[214636]: New machine qemu-65-instance-00000034.
Oct 14 05:04:12 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:12Z|00481|binding|INFO|Setting lport 4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 ovn-installed in OVS
Oct 14 05:04:12 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:12Z|00482|binding|INFO|Setting lport 4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 up in Southbound
Oct 14 05:04:12 np0005486808 nova_compute[259627]: 2025-10-14 09:04:12.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:12 np0005486808 nova_compute[259627]: 2025-10-14 09:04:12.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.161 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[5d8664c9-aeaa-4852-9a61-0c8e75eca6f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:12 np0005486808 systemd[1]: Started Virtual Machine qemu-65-instance-00000034.
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.184 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[52a6d86b-ef41-4cb2-9fc8-07187a65cc94]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.212 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e1e401cd-4397-4b26-b993-1af6655bed49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:12 np0005486808 NetworkManager[44885]: <info>  [1760432652.2265] manager: (tape964bc94-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/213)
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.225 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9ab719fd-614b-4573-8a44-ee420663fab9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:12 np0005486808 nova_compute[259627]: 2025-10-14 09:04:12.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:12 np0005486808 nova_compute[259627]: 2025-10-14 09:04:12.242 2 DEBUG nova.network.neutron [req-55a0c90b-e0d7-442d-9c11-bf6e95ef17e5 req-b75dcbe4-5dc5-4472-a9af-b8f9215cde93 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Updated VIF entry in instance network info cache for port 09d03bcf-f719-4ec4-91a0-3c14e350a342. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:04:12 np0005486808 nova_compute[259627]: 2025-10-14 09:04:12.243 2 DEBUG nova.network.neutron [req-55a0c90b-e0d7-442d-9c11-bf6e95ef17e5 req-b75dcbe4-5dc5-4472-a9af-b8f9215cde93 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Updating instance_info_cache with network_info: [{"id": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "address": "fa:16:3e:48:7e:a5", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09d03bcf-f7", "ovs_interfaceid": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:04:12 np0005486808 nova_compute[259627]: 2025-10-14 09:04:12.269 2 DEBUG oslo_concurrency.lockutils [req-55a0c90b-e0d7-442d-9c11-bf6e95ef17e5 req-b75dcbe4-5dc5-4472-a9af-b8f9215cde93 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-3c8eac8e-dcfa-41ba-9c01-1185061baf5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:04:12 np0005486808 nova_compute[259627]: 2025-10-14 09:04:12.270 2 DEBUG nova.compute.manager [req-55a0c90b-e0d7-442d-9c11-bf6e95ef17e5 req-b75dcbe4-5dc5-4472-a9af-b8f9215cde93 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Received event network-changed-4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:04:12 np0005486808 nova_compute[259627]: 2025-10-14 09:04:12.270 2 DEBUG nova.compute.manager [req-55a0c90b-e0d7-442d-9c11-bf6e95ef17e5 req-b75dcbe4-5dc5-4472-a9af-b8f9215cde93 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Refreshing instance network info cache due to event network-changed-4f8c1944-ec5d-4de3-82f1-19760c6b4dd4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:04:12 np0005486808 nova_compute[259627]: 2025-10-14 09:04:12.270 2 DEBUG oslo_concurrency.lockutils [req-55a0c90b-e0d7-442d-9c11-bf6e95ef17e5 req-b75dcbe4-5dc5-4472-a9af-b8f9215cde93 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e038df86-1323-4b04-afae-9fe68c98c22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:04:12 np0005486808 nova_compute[259627]: 2025-10-14 09:04:12.271 2 DEBUG oslo_concurrency.lockutils [req-55a0c90b-e0d7-442d-9c11-bf6e95ef17e5 req-b75dcbe4-5dc5-4472-a9af-b8f9215cde93 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e038df86-1323-4b04-afae-9fe68c98c22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:04:12 np0005486808 nova_compute[259627]: 2025-10-14 09:04:12.271 2 DEBUG nova.network.neutron [req-55a0c90b-e0d7-442d-9c11-bf6e95ef17e5 req-b75dcbe4-5dc5-4472-a9af-b8f9215cde93 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Refreshing network info cache for port 4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.271 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[336403d9-b402-4c3e-a97b-acce9d74fdd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.275 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a5ecf19a-5859-44cf-a86f-40ab00ce3550]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:12 np0005486808 NetworkManager[44885]: <info>  [1760432652.3014] device (tape964bc94-e0): carrier: link connected
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.311 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ef882f33-b618-41bc-aef0-0b797be442a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.334 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5d5f53a6-c7d9-4215-bb25-760171a8b7f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape964bc94-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:2c:10'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 145], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 643106, 'reachable_time': 27840, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316735, 'error': None, 'target': 'ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.355 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[19885a57-f883-4fb1-8c19-27bc9bbd2f11]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febb:2c10'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 643106, 'tstamp': 643106}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316736, 'error': None, 'target': 'ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.374 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[971c94d6-7932-4dda-a927-d52e7ac6a5ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape964bc94-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:2c:10'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 145], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 643106, 'reachable_time': 27840, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 316737, 'error': None, 'target': 'ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.406 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[76d72c4e-7197-4998-a499-2f428b0b4aa2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.456 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[26f36671-bb20-4f6e-8d41-52c575543d61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.457 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape964bc94-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.458 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.458 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape964bc94-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:12 np0005486808 NetworkManager[44885]: <info>  [1760432652.4605] manager: (tape964bc94-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/214)
Oct 14 05:04:12 np0005486808 kernel: tape964bc94-e0: entered promiscuous mode
Oct 14 05:04:12 np0005486808 nova_compute[259627]: 2025-10-14 09:04:12.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.465 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape964bc94-e0, col_values=(('external_ids', {'iface-id': 'bcf643fa-2c1a-444e-ad03-f473ae6c9565'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:12 np0005486808 nova_compute[259627]: 2025-10-14 09:04:12.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:12 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:12Z|00483|binding|INFO|Releasing lport bcf643fa-2c1a-444e-ad03-f473ae6c9565 from this chassis (sb_readonly=0)
Oct 14 05:04:12 np0005486808 nova_compute[259627]: 2025-10-14 09:04:12.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:12 np0005486808 nova_compute[259627]: 2025-10-14 09:04:12.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.488 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e964bc94-eb23-4bb9-b6af-2d14c0f7d764.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e964bc94-eb23-4bb9-b6af-2d14c0f7d764.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.489 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[61f10717-e2dd-46de-bb35-a84d9b9c226e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.490 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-e964bc94-eb23-4bb9-b6af-2d14c0f7d764
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/e964bc94-eb23-4bb9-b6af-2d14c0f7d764.pid.haproxy
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID e964bc94-eb23-4bb9-b6af-2d14c0f7d764
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:04:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:12.492 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764', 'env', 'PROCESS_TAG=haproxy-e964bc94-eb23-4bb9-b6af-2d14c0f7d764', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e964bc94-eb23-4bb9-b6af-2d14c0f7d764.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:04:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:04:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e217 do_prune osdmap full prune enabled
Oct 14 05:04:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 e218: 3 total, 3 up, 3 in
Oct 14 05:04:12 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e218: 3 total, 3 up, 3 in
Oct 14 05:04:12 np0005486808 podman[316813]: 2025-10-14 09:04:12.849805874 +0000 UTC m=+0.047398036 container create a036452adf9bb3dedf800ea7a14af4cf510fe781db3de8f1af0fc0e5f1281436 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 14 05:04:12 np0005486808 systemd[1]: Started libpod-conmon-a036452adf9bb3dedf800ea7a14af4cf510fe781db3de8f1af0fc0e5f1281436.scope.
Oct 14 05:04:12 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:04:12 np0005486808 podman[316813]: 2025-10-14 09:04:12.825374473 +0000 UTC m=+0.022966685 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:04:12 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7bf3898c6dcba520ee1a299c42284442b12fb92ae3baa3607f44b8c13a94b30/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:04:12 np0005486808 podman[316813]: 2025-10-14 09:04:12.931640496 +0000 UTC m=+0.129232668 container init a036452adf9bb3dedf800ea7a14af4cf510fe781db3de8f1af0fc0e5f1281436 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:04:12 np0005486808 podman[316813]: 2025-10-14 09:04:12.938779792 +0000 UTC m=+0.136371954 container start a036452adf9bb3dedf800ea7a14af4cf510fe781db3de8f1af0fc0e5f1281436 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 14 05:04:12 np0005486808 neutron-haproxy-ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764[316831]: [NOTICE]   (316835) : New worker (316837) forked
Oct 14 05:04:12 np0005486808 neutron-haproxy-ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764[316831]: [NOTICE]   (316835) : Loading success.
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.148 2 INFO nova.compute.manager [None req-f50359d9-f394-47b9-a68f-1bb052c4aa72 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Resuming#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.149 2 DEBUG nova.objects.instance [None req-f50359d9-f394-47b9-a68f-1bb052c4aa72 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'flavor' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.187 2 DEBUG oslo_concurrency.lockutils [None req-f50359d9-f394-47b9-a68f-1bb052c4aa72 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.187 2 DEBUG oslo_concurrency.lockutils [None req-f50359d9-f394-47b9-a68f-1bb052c4aa72 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquired lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.187 2 DEBUG nova.network.neutron [None req-f50359d9-f394-47b9-a68f-1bb052c4aa72 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.240 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432653.2396312, e038df86-1323-4b04-afae-9fe68c98c22c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.240 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] VM Started (Lifecycle Event)#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.262 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.266 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432653.2398152, e038df86-1323-4b04-afae-9fe68c98c22c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.267 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.283 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.286 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.308 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.590 2 DEBUG nova.compute.manager [req-dd7302f0-f2a3-4704-ab3b-7056a841162b req-19bf9844-b73e-4034-bd81-aec0c142c40c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.590 2 DEBUG oslo_concurrency.lockutils [req-dd7302f0-f2a3-4704-ab3b-7056a841162b req-19bf9844-b73e-4034-bd81-aec0c142c40c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.590 2 DEBUG oslo_concurrency.lockutils [req-dd7302f0-f2a3-4704-ab3b-7056a841162b req-19bf9844-b73e-4034-bd81-aec0c142c40c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.590 2 DEBUG oslo_concurrency.lockutils [req-dd7302f0-f2a3-4704-ab3b-7056a841162b req-19bf9844-b73e-4034-bd81-aec0c142c40c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.591 2 DEBUG nova.compute.manager [req-dd7302f0-f2a3-4704-ab3b-7056a841162b req-19bf9844-b73e-4034-bd81-aec0c142c40c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] No waiting events found dispatching network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.591 2 WARNING nova.compute.manager [req-dd7302f0-f2a3-4704-ab3b-7056a841162b req-19bf9844-b73e-4034-bd81-aec0c142c40c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received unexpected event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for instance with vm_state suspended and task_state resuming.#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.591 2 DEBUG nova.compute.manager [req-dd7302f0-f2a3-4704-ab3b-7056a841162b req-19bf9844-b73e-4034-bd81-aec0c142c40c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Received event network-vif-plugged-4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.591 2 DEBUG oslo_concurrency.lockutils [req-dd7302f0-f2a3-4704-ab3b-7056a841162b req-19bf9844-b73e-4034-bd81-aec0c142c40c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e038df86-1323-4b04-afae-9fe68c98c22c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.591 2 DEBUG oslo_concurrency.lockutils [req-dd7302f0-f2a3-4704-ab3b-7056a841162b req-19bf9844-b73e-4034-bd81-aec0c142c40c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e038df86-1323-4b04-afae-9fe68c98c22c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.591 2 DEBUG oslo_concurrency.lockutils [req-dd7302f0-f2a3-4704-ab3b-7056a841162b req-19bf9844-b73e-4034-bd81-aec0c142c40c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e038df86-1323-4b04-afae-9fe68c98c22c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.591 2 DEBUG nova.compute.manager [req-dd7302f0-f2a3-4704-ab3b-7056a841162b req-19bf9844-b73e-4034-bd81-aec0c142c40c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Processing event network-vif-plugged-4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.592 2 DEBUG nova.compute.manager [req-dd7302f0-f2a3-4704-ab3b-7056a841162b req-19bf9844-b73e-4034-bd81-aec0c142c40c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Received event network-vif-plugged-4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.592 2 DEBUG oslo_concurrency.lockutils [req-dd7302f0-f2a3-4704-ab3b-7056a841162b req-19bf9844-b73e-4034-bd81-aec0c142c40c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e038df86-1323-4b04-afae-9fe68c98c22c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.592 2 DEBUG oslo_concurrency.lockutils [req-dd7302f0-f2a3-4704-ab3b-7056a841162b req-19bf9844-b73e-4034-bd81-aec0c142c40c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e038df86-1323-4b04-afae-9fe68c98c22c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.592 2 DEBUG oslo_concurrency.lockutils [req-dd7302f0-f2a3-4704-ab3b-7056a841162b req-19bf9844-b73e-4034-bd81-aec0c142c40c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e038df86-1323-4b04-afae-9fe68c98c22c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.592 2 DEBUG nova.compute.manager [req-dd7302f0-f2a3-4704-ab3b-7056a841162b req-19bf9844-b73e-4034-bd81-aec0c142c40c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] No waiting events found dispatching network-vif-plugged-4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.592 2 WARNING nova.compute.manager [req-dd7302f0-f2a3-4704-ab3b-7056a841162b req-19bf9844-b73e-4034-bd81-aec0c142c40c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Received unexpected event network-vif-plugged-4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 for instance with vm_state building and task_state spawning.#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.593 2 DEBUG nova.compute.manager [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.596 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432653.5961614, e038df86-1323-4b04-afae-9fe68c98c22c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.596 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.598 2 DEBUG nova.network.neutron [req-55a0c90b-e0d7-442d-9c11-bf6e95ef17e5 req-b75dcbe4-5dc5-4472-a9af-b8f9215cde93 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Updated VIF entry in instance network info cache for port 4f8c1944-ec5d-4de3-82f1-19760c6b4dd4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.598 2 DEBUG nova.network.neutron [req-55a0c90b-e0d7-442d-9c11-bf6e95ef17e5 req-b75dcbe4-5dc5-4472-a9af-b8f9215cde93 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Updating instance_info_cache with network_info: [{"id": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "address": "fa:16:3e:fb:80:d4", "network": {"id": "e964bc94-eb23-4bb9-b6af-2d14c0f7d764", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-577710959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e520d17b20a44440b176c2297c35286a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f8c1944-ec", "ovs_interfaceid": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.600 2 DEBUG nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.603 2 INFO nova.virt.libvirt.driver [-] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Instance spawned successfully.#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.603 2 DEBUG nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.619 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.623 2 DEBUG oslo_concurrency.lockutils [req-55a0c90b-e0d7-442d-9c11-bf6e95ef17e5 req-b75dcbe4-5dc5-4472-a9af-b8f9215cde93 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-e038df86-1323-4b04-afae-9fe68c98c22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.627 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.632 2 DEBUG nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.633 2 DEBUG nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.633 2 DEBUG nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.633 2 DEBUG nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.634 2 DEBUG nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.634 2 DEBUG nova.virt.libvirt.driver [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.655 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:04:13 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1459: 305 pgs: 305 active+clean; 215 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 2.7 MiB/s wr, 289 op/s
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.697 2 INFO nova.compute.manager [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Took 6.89 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.697 2 DEBUG nova.compute.manager [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.770 2 INFO nova.compute.manager [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Took 7.98 seconds to build instance.#033[00m
Oct 14 05:04:13 np0005486808 nova_compute[259627]: 2025-10-14 09:04:13.793 2 DEBUG oslo_concurrency.lockutils [None req-5930cd7d-1f44-4a60-8c9f-7446630269e0 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Lock "e038df86-1323-4b04-afae-9fe68c98c22c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.097s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:14 np0005486808 nova_compute[259627]: 2025-10-14 09:04:14.979 2 DEBUG nova.network.neutron [None req-f50359d9-f394-47b9-a68f-1bb052c4aa72 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Updating instance_info_cache with network_info: [{"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:04:14 np0005486808 nova_compute[259627]: 2025-10-14 09:04:14.996 2 DEBUG oslo_concurrency.lockutils [None req-f50359d9-f394-47b9-a68f-1bb052c4aa72 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Releasing lock "refresh_cache-de383510-2de3-40bd-b479-c0010b3f2d1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:04:15 np0005486808 nova_compute[259627]: 2025-10-14 09:04:15.000 2 DEBUG nova.virt.libvirt.vif [None req-f50359d9-f394-47b9-a68f-1bb052c4aa72 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1794713901',display_name='tempest-ServerActionsTestJSON-server-1794713901',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1794713901',id=43,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLYFNlP3hTYeUpT8nc14lRftSHASrG8vP9AGnXtKYdUjMqVtIeIad+zh+sJMfad25mvb2X38sw9juFrm0nqlru2kjBIYbt6/N3JxbHEKwTt4Dj077tGdx4oZ3+9/NHyjkQ==',key_name='tempest-keypair-1333367407',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:01:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='368d762ed02e459d892ad1e5488c2871',ramdisk_id='',reservation_id='r-cj1cuo6c',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1593617559',owner_user_name='tempest-ServerActionsTestJSON-1593617559-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aa32af91355a41198fd57121e5c70ec2',uuid=de383510-2de3-40bd-b479-c0010b3f2d1c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:04:15 np0005486808 nova_compute[259627]: 2025-10-14 09:04:15.001 2 DEBUG nova.network.os_vif_util [None req-f50359d9-f394-47b9-a68f-1bb052c4aa72 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converting VIF {"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:04:15 np0005486808 nova_compute[259627]: 2025-10-14 09:04:15.001 2 DEBUG nova.network.os_vif_util [None req-f50359d9-f394-47b9-a68f-1bb052c4aa72 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:04:15 np0005486808 nova_compute[259627]: 2025-10-14 09:04:15.002 2 DEBUG os_vif [None req-f50359d9-f394-47b9-a68f-1bb052c4aa72 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:04:15 np0005486808 nova_compute[259627]: 2025-10-14 09:04:15.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:15 np0005486808 nova_compute[259627]: 2025-10-14 09:04:15.003 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:15 np0005486808 nova_compute[259627]: 2025-10-14 09:04:15.003 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:04:15 np0005486808 nova_compute[259627]: 2025-10-14 09:04:15.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:15 np0005486808 nova_compute[259627]: 2025-10-14 09:04:15.005 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8ec905f0-b7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:15 np0005486808 nova_compute[259627]: 2025-10-14 09:04:15.006 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8ec905f0-b7, col_values=(('external_ids', {'iface-id': '8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:be:e2:1b', 'vm-uuid': 'de383510-2de3-40bd-b479-c0010b3f2d1c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:15 np0005486808 nova_compute[259627]: 2025-10-14 09:04:15.006 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:04:15 np0005486808 nova_compute[259627]: 2025-10-14 09:04:15.007 2 INFO os_vif [None req-f50359d9-f394-47b9-a68f-1bb052c4aa72 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7')#033[00m
Oct 14 05:04:15 np0005486808 nova_compute[259627]: 2025-10-14 09:04:15.026 2 DEBUG nova.objects.instance [None req-f50359d9-f394-47b9-a68f-1bb052c4aa72 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'numa_topology' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:04:15 np0005486808 NetworkManager[44885]: <info>  [1760432655.0983] manager: (tap8ec905f0-b7): new Tun device (/org/freedesktop/NetworkManager/Devices/215)
Oct 14 05:04:15 np0005486808 kernel: tap8ec905f0-b7: entered promiscuous mode
Oct 14 05:04:15 np0005486808 nova_compute[259627]: 2025-10-14 09:04:15.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:15 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:15Z|00484|binding|INFO|Claiming lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for this chassis.
Oct 14 05:04:15 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:15Z|00485|binding|INFO|8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2: Claiming fa:16:3e:be:e2:1b 10.100.0.10
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.114 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:e2:1b 10.100.0.10'], port_security=['fa:16:3e:be:e2:1b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'de383510-2de3-40bd-b479-c0010b3f2d1c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c6eb8a4-6604-462a-8730-43f3742053a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '368d762ed02e459d892ad1e5488c2871', 'neutron:revision_number': '13', 'neutron:security_group_ids': 'b04f9b87-06e3-4b99-859f-b095869a50f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.247'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d063443f-d68f-4763-a6fc-fc1631e3cde1, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.116 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 in datapath 7c6eb8a4-6604-462a-8730-43f3742053a7 bound to our chassis#033[00m
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.118 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7c6eb8a4-6604-462a-8730-43f3742053a7#033[00m
Oct 14 05:04:15 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:15Z|00486|binding|INFO|Setting lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 ovn-installed in OVS
Oct 14 05:04:15 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:15Z|00487|binding|INFO|Setting lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 up in Southbound
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.130 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d9e331b1-4b36-453e-b0a8-38fc01345b29]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.131 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7c6eb8a4-61 in ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:04:15 np0005486808 nova_compute[259627]: 2025-10-14 09:04:15.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.136 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7c6eb8a4-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.136 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f151d475-75f5-4ade-a9a2-373178afec13]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:15 np0005486808 nova_compute[259627]: 2025-10-14 09:04:15.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.141 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d81f2785-49fd-498a-b979-d068a4b2ea36]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:15 np0005486808 systemd-machined[214636]: New machine qemu-66-instance-0000002b.
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.161 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[62c85fa7-d635-4ab0-87e3-8251764ac726]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:15 np0005486808 systemd[1]: Started Virtual Machine qemu-66-instance-0000002b.
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.176 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bc619c9e-6ad1-4d86-8648-115c2d60f412]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:15 np0005486808 systemd-udevd[316864]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:04:15 np0005486808 NetworkManager[44885]: <info>  [1760432655.2090] device (tap8ec905f0-b7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:04:15 np0005486808 NetworkManager[44885]: <info>  [1760432655.2099] device (tap8ec905f0-b7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.211 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[202fbd0d-9d3a-428d-9659-5ba0caeb26b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.217 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9aab6e59-4286-4a50-8352-070eb7ac0e39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:15 np0005486808 NetworkManager[44885]: <info>  [1760432655.2234] manager: (tap7c6eb8a4-60): new Veth device (/org/freedesktop/NetworkManager/Devices/216)
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.262 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[cd737a75-e947-419d-8c20-5c451a062ff0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.267 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[b606665a-d0f8-45b3-9d0a-211f1f2d7865]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:15 np0005486808 NetworkManager[44885]: <info>  [1760432655.2925] device (tap7c6eb8a4-60): carrier: link connected
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.303 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e884e930-eee0-45c0-9fe4-b7a04dcebf9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.321 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[27414336-5c80-4941-bcae-7bc1fc10bbc8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c6eb8a4-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:62:70:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 147], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 643405, 'reachable_time': 44987, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316893, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.338 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[52e65582-023a-46a0-9193-9d612fecbfe6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe62:70fd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 643405, 'tstamp': 643405}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316894, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.358 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c684ce15-acdc-4d52-b741-fa814a4c1653]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c6eb8a4-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:62:70:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 147], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 643405, 'reachable_time': 44987, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 316895, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.390 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b6f21281-9ac5-428e-9888-c115c612cc16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.445 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b07e4f39-2d63-46f6-9e60-075ee0ea7df0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.447 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c6eb8a4-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.447 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.448 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c6eb8a4-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:15 np0005486808 nova_compute[259627]: 2025-10-14 09:04:15.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:15 np0005486808 NetworkManager[44885]: <info>  [1760432655.4505] manager: (tap7c6eb8a4-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/217)
Oct 14 05:04:15 np0005486808 kernel: tap7c6eb8a4-60: entered promiscuous mode
Oct 14 05:04:15 np0005486808 nova_compute[259627]: 2025-10-14 09:04:15.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.457 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7c6eb8a4-60, col_values=(('external_ids', {'iface-id': '0f8f2393-a280-4a6e-ac22-da67bf8d74da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:15 np0005486808 nova_compute[259627]: 2025-10-14 09:04:15.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:15 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:15Z|00488|binding|INFO|Releasing lport 0f8f2393-a280-4a6e-ac22-da67bf8d74da from this chassis (sb_readonly=0)
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.462 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7c6eb8a4-6604-462a-8730-43f3742053a7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7c6eb8a4-6604-462a-8730-43f3742053a7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.468 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[aec749a2-d3d9-4213-88ce-9f01420a18bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.473 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-7c6eb8a4-6604-462a-8730-43f3742053a7
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/7c6eb8a4-6604-462a-8730-43f3742053a7.pid.haproxy
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 7c6eb8a4-6604-462a-8730-43f3742053a7
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:04:15 np0005486808 nova_compute[259627]: 2025-10-14 09:04:15.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:15.478 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'env', 'PROCESS_TAG=haproxy-7c6eb8a4-6604-462a-8730-43f3742053a7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7c6eb8a4-6604-462a-8730-43f3742053a7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:04:15 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1460: 305 pgs: 305 active+clean; 216 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 8.6 MiB/s rd, 2.7 MiB/s wr, 393 op/s
Oct 14 05:04:15 np0005486808 nova_compute[259627]: 2025-10-14 09:04:15.791 2 DEBUG nova.compute.manager [req-1f067e30-933b-4d36-8e8e-8d1ce6151443 req-ecc9ba8b-ae27-4dda-80f8-bc311d491dc1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:04:15 np0005486808 nova_compute[259627]: 2025-10-14 09:04:15.795 2 DEBUG oslo_concurrency.lockutils [req-1f067e30-933b-4d36-8e8e-8d1ce6151443 req-ecc9ba8b-ae27-4dda-80f8-bc311d491dc1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:15 np0005486808 nova_compute[259627]: 2025-10-14 09:04:15.795 2 DEBUG oslo_concurrency.lockutils [req-1f067e30-933b-4d36-8e8e-8d1ce6151443 req-ecc9ba8b-ae27-4dda-80f8-bc311d491dc1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:15 np0005486808 nova_compute[259627]: 2025-10-14 09:04:15.795 2 DEBUG oslo_concurrency.lockutils [req-1f067e30-933b-4d36-8e8e-8d1ce6151443 req-ecc9ba8b-ae27-4dda-80f8-bc311d491dc1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:15 np0005486808 nova_compute[259627]: 2025-10-14 09:04:15.795 2 DEBUG nova.compute.manager [req-1f067e30-933b-4d36-8e8e-8d1ce6151443 req-ecc9ba8b-ae27-4dda-80f8-bc311d491dc1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] No waiting events found dispatching network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:04:15 np0005486808 nova_compute[259627]: 2025-10-14 09:04:15.796 2 WARNING nova.compute.manager [req-1f067e30-933b-4d36-8e8e-8d1ce6151443 req-ecc9ba8b-ae27-4dda-80f8-bc311d491dc1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received unexpected event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for instance with vm_state suspended and task_state resuming.#033[00m
Oct 14 05:04:15 np0005486808 nova_compute[259627]: 2025-10-14 09:04:15.796 2 DEBUG nova.compute.manager [req-1f067e30-933b-4d36-8e8e-8d1ce6151443 req-ecc9ba8b-ae27-4dda-80f8-bc311d491dc1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:04:15 np0005486808 nova_compute[259627]: 2025-10-14 09:04:15.796 2 DEBUG oslo_concurrency.lockutils [req-1f067e30-933b-4d36-8e8e-8d1ce6151443 req-ecc9ba8b-ae27-4dda-80f8-bc311d491dc1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:15 np0005486808 nova_compute[259627]: 2025-10-14 09:04:15.797 2 DEBUG oslo_concurrency.lockutils [req-1f067e30-933b-4d36-8e8e-8d1ce6151443 req-ecc9ba8b-ae27-4dda-80f8-bc311d491dc1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:15 np0005486808 nova_compute[259627]: 2025-10-14 09:04:15.797 2 DEBUG oslo_concurrency.lockutils [req-1f067e30-933b-4d36-8e8e-8d1ce6151443 req-ecc9ba8b-ae27-4dda-80f8-bc311d491dc1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:15 np0005486808 nova_compute[259627]: 2025-10-14 09:04:15.797 2 DEBUG nova.compute.manager [req-1f067e30-933b-4d36-8e8e-8d1ce6151443 req-ecc9ba8b-ae27-4dda-80f8-bc311d491dc1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] No waiting events found dispatching network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:04:15 np0005486808 nova_compute[259627]: 2025-10-14 09:04:15.797 2 WARNING nova.compute.manager [req-1f067e30-933b-4d36-8e8e-8d1ce6151443 req-ecc9ba8b-ae27-4dda-80f8-bc311d491dc1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received unexpected event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for instance with vm_state suspended and task_state resuming.#033[00m
Oct 14 05:04:15 np0005486808 podman[316925]: 2025-10-14 09:04:15.867731845 +0000 UTC m=+0.055857854 container create c0ed9eadfcd606c190c0ebd50e869d1b2e5bdc8b1b2637e4d1935e35db04bf97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:04:15 np0005486808 systemd[1]: Started libpod-conmon-c0ed9eadfcd606c190c0ebd50e869d1b2e5bdc8b1b2637e4d1935e35db04bf97.scope.
Oct 14 05:04:15 np0005486808 podman[316925]: 2025-10-14 09:04:15.833737059 +0000 UTC m=+0.021863108 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:04:15 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:04:15 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4085b80beecbc83e40c99bbc55ff06db8151404b833b2eae4640168b2b3becb0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:04:15 np0005486808 podman[316925]: 2025-10-14 09:04:15.970963043 +0000 UTC m=+0.159089082 container init c0ed9eadfcd606c190c0ebd50e869d1b2e5bdc8b1b2637e4d1935e35db04bf97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 05:04:15 np0005486808 podman[316925]: 2025-10-14 09:04:15.977714359 +0000 UTC m=+0.165840398 container start c0ed9eadfcd606c190c0ebd50e869d1b2e5bdc8b1b2637e4d1935e35db04bf97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 14 05:04:16 np0005486808 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[316951]: [NOTICE]   (316981) : New worker (316987) forked
Oct 14 05:04:16 np0005486808 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[316951]: [NOTICE]   (316981) : Loading success.
Oct 14 05:04:16 np0005486808 nova_compute[259627]: 2025-10-14 09:04:16.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:16 np0005486808 nova_compute[259627]: 2025-10-14 09:04:16.507 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for de383510-2de3-40bd-b479-c0010b3f2d1c due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct 14 05:04:16 np0005486808 nova_compute[259627]: 2025-10-14 09:04:16.508 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432656.5071824, de383510-2de3-40bd-b479-c0010b3f2d1c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:04:16 np0005486808 nova_compute[259627]: 2025-10-14 09:04:16.508 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] VM Started (Lifecycle Event)#033[00m
Oct 14 05:04:16 np0005486808 nova_compute[259627]: 2025-10-14 09:04:16.523 2 DEBUG nova.compute.manager [None req-f50359d9-f394-47b9-a68f-1bb052c4aa72 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:04:16 np0005486808 nova_compute[259627]: 2025-10-14 09:04:16.524 2 DEBUG nova.objects.instance [None req-f50359d9-f394-47b9-a68f-1bb052c4aa72 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'pci_devices' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:04:16 np0005486808 nova_compute[259627]: 2025-10-14 09:04:16.543 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:04:16 np0005486808 nova_compute[259627]: 2025-10-14 09:04:16.547 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:04:16 np0005486808 nova_compute[259627]: 2025-10-14 09:04:16.550 2 INFO nova.virt.libvirt.driver [-] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Instance running successfully.#033[00m
Oct 14 05:04:16 np0005486808 virtqemud[259351]: argument unsupported: QEMU guest agent is not configured
Oct 14 05:04:16 np0005486808 nova_compute[259627]: 2025-10-14 09:04:16.553 2 DEBUG nova.virt.libvirt.guest [None req-f50359d9-f394-47b9-a68f-1bb052c4aa72 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct 14 05:04:16 np0005486808 nova_compute[259627]: 2025-10-14 09:04:16.553 2 DEBUG nova.compute.manager [None req-f50359d9-f394-47b9-a68f-1bb052c4aa72 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:04:16 np0005486808 nova_compute[259627]: 2025-10-14 09:04:16.573 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Oct 14 05:04:16 np0005486808 nova_compute[259627]: 2025-10-14 09:04:16.574 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432656.5109954, de383510-2de3-40bd-b479-c0010b3f2d1c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:04:16 np0005486808 nova_compute[259627]: 2025-10-14 09:04:16.574 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:04:16 np0005486808 nova_compute[259627]: 2025-10-14 09:04:16.596 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:04:16 np0005486808 nova_compute[259627]: 2025-10-14 09:04:16.600 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:04:17 np0005486808 nova_compute[259627]: 2025-10-14 09:04:17.042 2 DEBUG nova.compute.manager [req-9fa77482-3c2a-44bc-b5f4-bf7c1dba100d req-26a2649b-2d5a-4c0b-905f-ca4af90e3f5d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Received event network-changed-4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:04:17 np0005486808 nova_compute[259627]: 2025-10-14 09:04:17.042 2 DEBUG nova.compute.manager [req-9fa77482-3c2a-44bc-b5f4-bf7c1dba100d req-26a2649b-2d5a-4c0b-905f-ca4af90e3f5d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Refreshing instance network info cache due to event network-changed-4f8c1944-ec5d-4de3-82f1-19760c6b4dd4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:04:17 np0005486808 nova_compute[259627]: 2025-10-14 09:04:17.043 2 DEBUG oslo_concurrency.lockutils [req-9fa77482-3c2a-44bc-b5f4-bf7c1dba100d req-26a2649b-2d5a-4c0b-905f-ca4af90e3f5d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e038df86-1323-4b04-afae-9fe68c98c22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:04:17 np0005486808 nova_compute[259627]: 2025-10-14 09:04:17.043 2 DEBUG oslo_concurrency.lockutils [req-9fa77482-3c2a-44bc-b5f4-bf7c1dba100d req-26a2649b-2d5a-4c0b-905f-ca4af90e3f5d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e038df86-1323-4b04-afae-9fe68c98c22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:04:17 np0005486808 nova_compute[259627]: 2025-10-14 09:04:17.043 2 DEBUG nova.network.neutron [req-9fa77482-3c2a-44bc-b5f4-bf7c1dba100d req-26a2649b-2d5a-4c0b-905f-ca4af90e3f5d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Refreshing network info cache for port 4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:04:17 np0005486808 nova_compute[259627]: 2025-10-14 09:04:17.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:04:17 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1461: 305 pgs: 305 active+clean; 216 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 2.2 MiB/s wr, 316 op/s
Oct 14 05:04:17 np0005486808 nova_compute[259627]: 2025-10-14 09:04:17.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:04:18 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:18Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:48:7e:a5 10.100.0.8
Oct 14 05:04:18 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:18Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:48:7e:a5 10.100.0.8
Oct 14 05:04:18 np0005486808 nova_compute[259627]: 2025-10-14 09:04:18.599 2 DEBUG nova.network.neutron [req-9fa77482-3c2a-44bc-b5f4-bf7c1dba100d req-26a2649b-2d5a-4c0b-905f-ca4af90e3f5d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Updated VIF entry in instance network info cache for port 4f8c1944-ec5d-4de3-82f1-19760c6b4dd4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:04:18 np0005486808 nova_compute[259627]: 2025-10-14 09:04:18.600 2 DEBUG nova.network.neutron [req-9fa77482-3c2a-44bc-b5f4-bf7c1dba100d req-26a2649b-2d5a-4c0b-905f-ca4af90e3f5d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Updating instance_info_cache with network_info: [{"id": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "address": "fa:16:3e:fb:80:d4", "network": {"id": "e964bc94-eb23-4bb9-b6af-2d14c0f7d764", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-577710959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e520d17b20a44440b176c2297c35286a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f8c1944-ec", "ovs_interfaceid": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:04:18 np0005486808 nova_compute[259627]: 2025-10-14 09:04:18.618 2 DEBUG oslo_concurrency.lockutils [req-9fa77482-3c2a-44bc-b5f4-bf7c1dba100d req-26a2649b-2d5a-4c0b-905f-ca4af90e3f5d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-e038df86-1323-4b04-afae-9fe68c98c22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:04:18 np0005486808 podman[316997]: 2025-10-14 09:04:18.692888196 +0000 UTC m=+0.108723734 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:04:18 np0005486808 podman[316998]: 2025-10-14 09:04:18.694407614 +0000 UTC m=+0.106288185 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 05:04:19 np0005486808 nova_compute[259627]: 2025-10-14 09:04:19.091 2 DEBUG oslo_concurrency.lockutils [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:19 np0005486808 nova_compute[259627]: 2025-10-14 09:04:19.092 2 DEBUG oslo_concurrency.lockutils [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:19 np0005486808 nova_compute[259627]: 2025-10-14 09:04:19.092 2 DEBUG oslo_concurrency.lockutils [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:19 np0005486808 nova_compute[259627]: 2025-10-14 09:04:19.092 2 DEBUG oslo_concurrency.lockutils [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:19 np0005486808 nova_compute[259627]: 2025-10-14 09:04:19.092 2 DEBUG oslo_concurrency.lockutils [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:19 np0005486808 nova_compute[259627]: 2025-10-14 09:04:19.093 2 INFO nova.compute.manager [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Terminating instance#033[00m
Oct 14 05:04:19 np0005486808 nova_compute[259627]: 2025-10-14 09:04:19.094 2 DEBUG nova.compute.manager [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:04:19 np0005486808 kernel: tap8ec905f0-b7 (unregistering): left promiscuous mode
Oct 14 05:04:19 np0005486808 NetworkManager[44885]: <info>  [1760432659.1410] device (tap8ec905f0-b7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:04:19 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:19Z|00489|binding|INFO|Releasing lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 from this chassis (sb_readonly=0)
Oct 14 05:04:19 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:19Z|00490|binding|INFO|Setting lport 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 down in Southbound
Oct 14 05:04:19 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:19Z|00491|binding|INFO|Removing iface tap8ec905f0-b7 ovn-installed in OVS
Oct 14 05:04:19 np0005486808 nova_compute[259627]: 2025-10-14 09:04:19.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:19.170 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:e2:1b 10.100.0.10'], port_security=['fa:16:3e:be:e2:1b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'de383510-2de3-40bd-b479-c0010b3f2d1c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c6eb8a4-6604-462a-8730-43f3742053a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '368d762ed02e459d892ad1e5488c2871', 'neutron:revision_number': '14', 'neutron:security_group_ids': 'b04f9b87-06e3-4b99-859f-b095869a50f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.247', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d063443f-d68f-4763-a6fc-fc1631e3cde1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:04:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:19.171 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 in datapath 7c6eb8a4-6604-462a-8730-43f3742053a7 unbound from our chassis#033[00m
Oct 14 05:04:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:19.173 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7c6eb8a4-6604-462a-8730-43f3742053a7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:04:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:19.174 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[60ffc238-f780-4a34-8ef8-2a7944aebc1c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:19.174 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 namespace which is not needed anymore#033[00m
Oct 14 05:04:19 np0005486808 nova_compute[259627]: 2025-10-14 09:04:19.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:19 np0005486808 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000002b.scope: Deactivated successfully.
Oct 14 05:04:19 np0005486808 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000002b.scope: Consumed 3.817s CPU time.
Oct 14 05:04:19 np0005486808 systemd-machined[214636]: Machine qemu-66-instance-0000002b terminated.
Oct 14 05:04:19 np0005486808 nova_compute[259627]: 2025-10-14 09:04:19.334 2 INFO nova.virt.libvirt.driver [-] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Instance destroyed successfully.#033[00m
Oct 14 05:04:19 np0005486808 nova_compute[259627]: 2025-10-14 09:04:19.335 2 DEBUG nova.objects.instance [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lazy-loading 'resources' on Instance uuid de383510-2de3-40bd-b479-c0010b3f2d1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:04:19 np0005486808 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[316951]: [NOTICE]   (316981) : haproxy version is 2.8.14-c23fe91
Oct 14 05:04:19 np0005486808 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[316951]: [NOTICE]   (316981) : path to executable is /usr/sbin/haproxy
Oct 14 05:04:19 np0005486808 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[316951]: [WARNING]  (316981) : Exiting Master process...
Oct 14 05:04:19 np0005486808 nova_compute[259627]: 2025-10-14 09:04:19.351 2 DEBUG nova.virt.libvirt.vif [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1794713901',display_name='tempest-ServerActionsTestJSON-server-1794713901',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1794713901',id=43,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLYFNlP3hTYeUpT8nc14lRftSHASrG8vP9AGnXtKYdUjMqVtIeIad+zh+sJMfad25mvb2X38sw9juFrm0nqlru2kjBIYbt6/N3JxbHEKwTt4Dj077tGdx4oZ3+9/NHyjkQ==',key_name='tempest-keypair-1333367407',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:01:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='368d762ed02e459d892ad1e5488c2871',ramdisk_id='',reservation_id='r-cj1cuo6c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1593617559',owner_user_name='tempest-ServerActionsTestJSON-1593617559-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aa32af91355a41198fd57121e5c70ec2',uuid=de383510-2de3-40bd-b479-c0010b3f2d1c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:04:19 np0005486808 nova_compute[259627]: 2025-10-14 09:04:19.352 2 DEBUG nova.network.os_vif_util [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converting VIF {"id": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "address": "fa:16:3e:be:e2:1b", "network": {"id": "7c6eb8a4-6604-462a-8730-43f3742053a7", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-167587328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368d762ed02e459d892ad1e5488c2871", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec905f0-b7", "ovs_interfaceid": "8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:04:19 np0005486808 nova_compute[259627]: 2025-10-14 09:04:19.352 2 DEBUG nova.network.os_vif_util [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:04:19 np0005486808 nova_compute[259627]: 2025-10-14 09:04:19.353 2 DEBUG os_vif [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:04:19 np0005486808 systemd[1]: libpod-c0ed9eadfcd606c190c0ebd50e869d1b2e5bdc8b1b2637e4d1935e35db04bf97.scope: Deactivated successfully.
Oct 14 05:04:19 np0005486808 nova_compute[259627]: 2025-10-14 09:04:19.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:19 np0005486808 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[316951]: [ALERT]    (316981) : Current worker (316987) exited with code 143 (Terminated)
Oct 14 05:04:19 np0005486808 neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7[316951]: [WARNING]  (316981) : All workers exited. Exiting... (0)
Oct 14 05:04:19 np0005486808 nova_compute[259627]: 2025-10-14 09:04:19.355 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8ec905f0-b7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:19 np0005486808 conmon[316951]: conmon c0ed9eadfcd606c190c0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c0ed9eadfcd606c190c0ebd50e869d1b2e5bdc8b1b2637e4d1935e35db04bf97.scope/container/memory.events
Oct 14 05:04:19 np0005486808 nova_compute[259627]: 2025-10-14 09:04:19.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:19 np0005486808 podman[317061]: 2025-10-14 09:04:19.359761402 +0000 UTC m=+0.060841967 container died c0ed9eadfcd606c190c0ebd50e869d1b2e5bdc8b1b2637e4d1935e35db04bf97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 05:04:19 np0005486808 nova_compute[259627]: 2025-10-14 09:04:19.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:04:19 np0005486808 nova_compute[259627]: 2025-10-14 09:04:19.365 2 INFO os_vif [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:e2:1b,bridge_name='br-int',has_traffic_filtering=True,id=8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2,network=Network(7c6eb8a4-6604-462a-8730-43f3742053a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec905f0-b7')#033[00m
Oct 14 05:04:19 np0005486808 nova_compute[259627]: 2025-10-14 09:04:19.414 2 DEBUG nova.compute.manager [req-d64ca07c-41f4-45c2-9ecd-3b5498fb1db8 req-b184a85e-05dd-4af3-ad55-3c3232611617 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-unplugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:04:19 np0005486808 nova_compute[259627]: 2025-10-14 09:04:19.415 2 DEBUG oslo_concurrency.lockutils [req-d64ca07c-41f4-45c2-9ecd-3b5498fb1db8 req-b184a85e-05dd-4af3-ad55-3c3232611617 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:19 np0005486808 nova_compute[259627]: 2025-10-14 09:04:19.415 2 DEBUG oslo_concurrency.lockutils [req-d64ca07c-41f4-45c2-9ecd-3b5498fb1db8 req-b184a85e-05dd-4af3-ad55-3c3232611617 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:19 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c0ed9eadfcd606c190c0ebd50e869d1b2e5bdc8b1b2637e4d1935e35db04bf97-userdata-shm.mount: Deactivated successfully.
Oct 14 05:04:19 np0005486808 systemd[1]: var-lib-containers-storage-overlay-4085b80beecbc83e40c99bbc55ff06db8151404b833b2eae4640168b2b3becb0-merged.mount: Deactivated successfully.
Oct 14 05:04:19 np0005486808 nova_compute[259627]: 2025-10-14 09:04:19.416 2 DEBUG oslo_concurrency.lockutils [req-d64ca07c-41f4-45c2-9ecd-3b5498fb1db8 req-b184a85e-05dd-4af3-ad55-3c3232611617 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:19 np0005486808 nova_compute[259627]: 2025-10-14 09:04:19.421 2 DEBUG nova.compute.manager [req-d64ca07c-41f4-45c2-9ecd-3b5498fb1db8 req-b184a85e-05dd-4af3-ad55-3c3232611617 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] No waiting events found dispatching network-vif-unplugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:04:19 np0005486808 nova_compute[259627]: 2025-10-14 09:04:19.421 2 DEBUG nova.compute.manager [req-d64ca07c-41f4-45c2-9ecd-3b5498fb1db8 req-b184a85e-05dd-4af3-ad55-3c3232611617 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-unplugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:04:19 np0005486808 podman[317061]: 2025-10-14 09:04:19.434252334 +0000 UTC m=+0.135332879 container cleanup c0ed9eadfcd606c190c0ebd50e869d1b2e5bdc8b1b2637e4d1935e35db04bf97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:04:19 np0005486808 systemd[1]: libpod-conmon-c0ed9eadfcd606c190c0ebd50e869d1b2e5bdc8b1b2637e4d1935e35db04bf97.scope: Deactivated successfully.
Oct 14 05:04:19 np0005486808 podman[317116]: 2025-10-14 09:04:19.495499289 +0000 UTC m=+0.038037696 container remove c0ed9eadfcd606c190c0ebd50e869d1b2e5bdc8b1b2637e4d1935e35db04bf97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:04:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:19.501 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[01beda27-41b0-4a84-971d-f7e82b4cf92e]: (4, ('Tue Oct 14 09:04:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 (c0ed9eadfcd606c190c0ebd50e869d1b2e5bdc8b1b2637e4d1935e35db04bf97)\nc0ed9eadfcd606c190c0ebd50e869d1b2e5bdc8b1b2637e4d1935e35db04bf97\nTue Oct 14 09:04:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 (c0ed9eadfcd606c190c0ebd50e869d1b2e5bdc8b1b2637e4d1935e35db04bf97)\nc0ed9eadfcd606c190c0ebd50e869d1b2e5bdc8b1b2637e4d1935e35db04bf97\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:19.502 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ee8e3219-b228-49bb-b78e-81b2a6ea4e7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:19.503 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c6eb8a4-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:19 np0005486808 nova_compute[259627]: 2025-10-14 09:04:19.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:19 np0005486808 kernel: tap7c6eb8a4-60: left promiscuous mode
Oct 14 05:04:19 np0005486808 nova_compute[259627]: 2025-10-14 09:04:19.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:19.521 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[01897d1b-179c-447b-9025-861e8476acef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:19.554 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[96be702e-e042-4339-be3c-9855756e83ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:19.556 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0b68542e-c823-4438-8c8f-dd529cd6e437]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:19.577 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[216c2a44-9b5e-472b-87ce-3b67dae34592]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 643396, 'reachable_time': 28838, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317133, 'error': None, 'target': 'ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:19 np0005486808 systemd[1]: run-netns-ovnmeta\x2d7c6eb8a4\x2d6604\x2d462a\x2d8730\x2d43f3742053a7.mount: Deactivated successfully.
Oct 14 05:04:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:19.580 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7c6eb8a4-6604-462a-8730-43f3742053a7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:04:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:19.581 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[191aaa96-eee4-424c-9d28-a6b7e74f815a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:19 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1462: 305 pgs: 305 active+clean; 216 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 2.2 MiB/s wr, 316 op/s
Oct 14 05:04:19 np0005486808 nova_compute[259627]: 2025-10-14 09:04:19.781 2 INFO nova.virt.libvirt.driver [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Deleting instance files /var/lib/nova/instances/de383510-2de3-40bd-b479-c0010b3f2d1c_del#033[00m
Oct 14 05:04:19 np0005486808 nova_compute[259627]: 2025-10-14 09:04:19.782 2 INFO nova.virt.libvirt.driver [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Deletion of /var/lib/nova/instances/de383510-2de3-40bd-b479-c0010b3f2d1c_del complete#033[00m
Oct 14 05:04:19 np0005486808 nova_compute[259627]: 2025-10-14 09:04:19.885 2 INFO nova.compute.manager [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:04:19 np0005486808 nova_compute[259627]: 2025-10-14 09:04:19.886 2 DEBUG oslo.service.loopingcall [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:04:19 np0005486808 nova_compute[259627]: 2025-10-14 09:04:19.887 2 DEBUG nova.compute.manager [-] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:04:19 np0005486808 nova_compute[259627]: 2025-10-14 09:04:19.887 2 DEBUG nova.network.neutron [-] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:04:20 np0005486808 nova_compute[259627]: 2025-10-14 09:04:20.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:04:20 np0005486808 nova_compute[259627]: 2025-10-14 09:04:20.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:04:21 np0005486808 nova_compute[259627]: 2025-10-14 09:04:21.063 2 DEBUG nova.network.neutron [-] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:04:21 np0005486808 nova_compute[259627]: 2025-10-14 09:04:21.084 2 INFO nova.compute.manager [-] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Took 1.20 seconds to deallocate network for instance.#033[00m
Oct 14 05:04:21 np0005486808 nova_compute[259627]: 2025-10-14 09:04:21.120 2 DEBUG nova.compute.manager [req-be2304d6-7415-4da6-84e4-1cda39ba1827 req-506e0de1-7515-47e6-8bcb-8aabc6ec5c0a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-deleted-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:04:21 np0005486808 nova_compute[259627]: 2025-10-14 09:04:21.128 2 DEBUG oslo_concurrency.lockutils [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:21 np0005486808 nova_compute[259627]: 2025-10-14 09:04:21.129 2 DEBUG oslo_concurrency.lockutils [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:21 np0005486808 nova_compute[259627]: 2025-10-14 09:04:21.171 2 DEBUG nova.scheduler.client.report [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Refreshing inventories for resource provider 92105e1d-1743-46e3-a494-858b4331398a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 14 05:04:21 np0005486808 nova_compute[259627]: 2025-10-14 09:04:21.188 2 DEBUG nova.scheduler.client.report [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Updating ProviderTree inventory for provider 92105e1d-1743-46e3-a494-858b4331398a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 14 05:04:21 np0005486808 nova_compute[259627]: 2025-10-14 09:04:21.189 2 DEBUG nova.compute.provider_tree [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Updating inventory in ProviderTree for provider 92105e1d-1743-46e3-a494-858b4331398a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 14 05:04:21 np0005486808 nova_compute[259627]: 2025-10-14 09:04:21.208 2 DEBUG nova.scheduler.client.report [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Refreshing aggregate associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 14 05:04:21 np0005486808 nova_compute[259627]: 2025-10-14 09:04:21.245 2 DEBUG nova.scheduler.client.report [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Refreshing trait associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, traits: COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_ACCELERATORS,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 14 05:04:21 np0005486808 nova_compute[259627]: 2025-10-14 09:04:21.529 2 DEBUG oslo_concurrency.processutils [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:04:21 np0005486808 nova_compute[259627]: 2025-10-14 09:04:21.605 2 DEBUG nova.compute.manager [req-013e9db2-113e-4ed7-9798-694ceed68b90 req-776f15fb-ea1b-4ec6-aaf1-586049b9ae26 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:04:21 np0005486808 nova_compute[259627]: 2025-10-14 09:04:21.606 2 DEBUG oslo_concurrency.lockutils [req-013e9db2-113e-4ed7-9798-694ceed68b90 req-776f15fb-ea1b-4ec6-aaf1-586049b9ae26 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:21 np0005486808 nova_compute[259627]: 2025-10-14 09:04:21.607 2 DEBUG oslo_concurrency.lockutils [req-013e9db2-113e-4ed7-9798-694ceed68b90 req-776f15fb-ea1b-4ec6-aaf1-586049b9ae26 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:21 np0005486808 nova_compute[259627]: 2025-10-14 09:04:21.608 2 DEBUG oslo_concurrency.lockutils [req-013e9db2-113e-4ed7-9798-694ceed68b90 req-776f15fb-ea1b-4ec6-aaf1-586049b9ae26 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:21 np0005486808 nova_compute[259627]: 2025-10-14 09:04:21.608 2 DEBUG nova.compute.manager [req-013e9db2-113e-4ed7-9798-694ceed68b90 req-776f15fb-ea1b-4ec6-aaf1-586049b9ae26 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] No waiting events found dispatching network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:04:21 np0005486808 nova_compute[259627]: 2025-10-14 09:04:21.609 2 WARNING nova.compute.manager [req-013e9db2-113e-4ed7-9798-694ceed68b90 req-776f15fb-ea1b-4ec6-aaf1-586049b9ae26 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Received unexpected event network-vif-plugged-8ec905f0-b7ad-4a4a-9dee-cb59a95bafe2 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:04:21 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1463: 305 pgs: 305 active+clean; 167 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.6 MiB/s wr, 205 op/s
Oct 14 05:04:21 np0005486808 nova_compute[259627]: 2025-10-14 09:04:21.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:04:21 np0005486808 nova_compute[259627]: 2025-10-14 09:04:21.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:04:21 np0005486808 nova_compute[259627]: 2025-10-14 09:04:21.980 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:04:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:04:22 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2453293817' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:04:22 np0005486808 nova_compute[259627]: 2025-10-14 09:04:22.023 2 DEBUG oslo_concurrency.processutils [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:04:22 np0005486808 nova_compute[259627]: 2025-10-14 09:04:22.029 2 DEBUG nova.compute.provider_tree [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:04:22 np0005486808 nova_compute[259627]: 2025-10-14 09:04:22.050 2 DEBUG nova.scheduler.client.report [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:04:22 np0005486808 nova_compute[259627]: 2025-10-14 09:04:22.082 2 DEBUG oslo_concurrency.lockutils [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.952s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:22 np0005486808 nova_compute[259627]: 2025-10-14 09:04:22.108 2 INFO nova.scheduler.client.report [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Deleted allocations for instance de383510-2de3-40bd-b479-c0010b3f2d1c#033[00m
Oct 14 05:04:22 np0005486808 nova_compute[259627]: 2025-10-14 09:04:22.178 2 DEBUG oslo_concurrency.lockutils [None req-11c3638f-a5da-4529-abc0-1d6100c23d25 aa32af91355a41198fd57121e5c70ec2 368d762ed02e459d892ad1e5488c2871 - - default default] Lock "de383510-2de3-40bd-b479-c0010b3f2d1c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.086s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:22 np0005486808 nova_compute[259627]: 2025-10-14 09:04:22.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:04:22 np0005486808 nova_compute[259627]: 2025-10-14 09:04:22.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:04:23 np0005486808 nova_compute[259627]: 2025-10-14 09:04:23.016 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:23 np0005486808 nova_compute[259627]: 2025-10-14 09:04:23.017 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:23 np0005486808 nova_compute[259627]: 2025-10-14 09:04:23.017 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:23 np0005486808 nova_compute[259627]: 2025-10-14 09:04:23.018 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:04:23 np0005486808 nova_compute[259627]: 2025-10-14 09:04:23.018 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:04:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:04:23 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1895446000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:04:23 np0005486808 nova_compute[259627]: 2025-10-14 09:04:23.446 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:04:23 np0005486808 nova_compute[259627]: 2025-10-14 09:04:23.508 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000034 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:04:23 np0005486808 nova_compute[259627]: 2025-10-14 09:04:23.508 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000034 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:04:23 np0005486808 nova_compute[259627]: 2025-10-14 09:04:23.512 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000033 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:04:23 np0005486808 nova_compute[259627]: 2025-10-14 09:04:23.512 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000033 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:04:23 np0005486808 nova_compute[259627]: 2025-10-14 09:04:23.662 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:04:23 np0005486808 nova_compute[259627]: 2025-10-14 09:04:23.663 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3753MB free_disk=59.921939849853516GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:04:23 np0005486808 nova_compute[259627]: 2025-10-14 09:04:23.663 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:23 np0005486808 nova_compute[259627]: 2025-10-14 09:04:23.664 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:23 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1464: 305 pgs: 305 active+clean; 167 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.3 MiB/s wr, 186 op/s
Oct 14 05:04:23 np0005486808 nova_compute[259627]: 2025-10-14 09:04:23.742 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 3c8eac8e-dcfa-41ba-9c01-1185061baf5d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:04:23 np0005486808 nova_compute[259627]: 2025-10-14 09:04:23.743 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance e038df86-1323-4b04-afae-9fe68c98c22c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:04:23 np0005486808 nova_compute[259627]: 2025-10-14 09:04:23.743 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:04:23 np0005486808 nova_compute[259627]: 2025-10-14 09:04:23.743 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:04:23 np0005486808 nova_compute[259627]: 2025-10-14 09:04:23.812 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:04:24 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:04:24 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3495202466' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:04:24 np0005486808 nova_compute[259627]: 2025-10-14 09:04:24.253 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:04:24 np0005486808 nova_compute[259627]: 2025-10-14 09:04:24.258 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:04:24 np0005486808 nova_compute[259627]: 2025-10-14 09:04:24.272 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:04:24 np0005486808 nova_compute[259627]: 2025-10-14 09:04:24.308 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:04:24 np0005486808 nova_compute[259627]: 2025-10-14 09:04:24.308 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:24 np0005486808 nova_compute[259627]: 2025-10-14 09:04:24.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:25 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:25Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fb:80:d4 10.100.0.7
Oct 14 05:04:25 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:25Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fb:80:d4 10.100.0.7
Oct 14 05:04:25 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1465: 305 pgs: 305 active+clean; 189 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.2 MiB/s wr, 215 op/s
Oct 14 05:04:27 np0005486808 nova_compute[259627]: 2025-10-14 09:04:27.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:27 np0005486808 nova_compute[259627]: 2025-10-14 09:04:27.310 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:04:27 np0005486808 nova_compute[259627]: 2025-10-14 09:04:27.310 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:04:27 np0005486808 nova_compute[259627]: 2025-10-14 09:04:27.341 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 14 05:04:27 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:27Z|00492|binding|INFO|Releasing lport 156432ee-35b5-40a9-aded-8066933d9972 from this chassis (sb_readonly=0)
Oct 14 05:04:27 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:27Z|00493|binding|INFO|Releasing lport bcf643fa-2c1a-444e-ad03-f473ae6c9565 from this chassis (sb_readonly=0)
Oct 14 05:04:27 np0005486808 nova_compute[259627]: 2025-10-14 09:04:27.605 2 DEBUG oslo_concurrency.lockutils [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "interface-3c8eac8e-dcfa-41ba-9c01-1185061baf5d-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:27 np0005486808 nova_compute[259627]: 2025-10-14 09:04:27.606 2 DEBUG oslo_concurrency.lockutils [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "interface-3c8eac8e-dcfa-41ba-9c01-1185061baf5d-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:27 np0005486808 nova_compute[259627]: 2025-10-14 09:04:27.607 2 DEBUG nova.objects.instance [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'flavor' on Instance uuid 3c8eac8e-dcfa-41ba-9c01-1185061baf5d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:04:27 np0005486808 nova_compute[259627]: 2025-10-14 09:04:27.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:04:27 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1466: 305 pgs: 305 active+clean; 189 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 608 KiB/s rd, 4.2 MiB/s wr, 141 op/s
Oct 14 05:04:27 np0005486808 nova_compute[259627]: 2025-10-14 09:04:27.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:04:28 np0005486808 nova_compute[259627]: 2025-10-14 09:04:28.073 2 DEBUG nova.objects.instance [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'pci_requests' on Instance uuid 3c8eac8e-dcfa-41ba-9c01-1185061baf5d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:04:28 np0005486808 nova_compute[259627]: 2025-10-14 09:04:28.088 2 DEBUG nova.network.neutron [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:04:28 np0005486808 nova_compute[259627]: 2025-10-14 09:04:28.293 2 DEBUG nova.policy [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8b0f0f2991214b9caed6f475a149c342', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8945d892653642ac9c3d894f8bc3619d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:04:28 np0005486808 nova_compute[259627]: 2025-10-14 09:04:28.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:04:29 np0005486808 nova_compute[259627]: 2025-10-14 09:04:29.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:29 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1467: 305 pgs: 305 active+clean; 189 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 608 KiB/s rd, 4.2 MiB/s wr, 141 op/s
Oct 14 05:04:29 np0005486808 nova_compute[259627]: 2025-10-14 09:04:29.768 2 DEBUG nova.network.neutron [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Successfully created port: 194dc9cd-03af-4e2c-b8d6-107081204a25 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:04:30 np0005486808 nova_compute[259627]: 2025-10-14 09:04:30.875 2 DEBUG nova.network.neutron [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Successfully updated port: 194dc9cd-03af-4e2c-b8d6-107081204a25 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:04:30 np0005486808 nova_compute[259627]: 2025-10-14 09:04:30.896 2 DEBUG oslo_concurrency.lockutils [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "refresh_cache-3c8eac8e-dcfa-41ba-9c01-1185061baf5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:04:30 np0005486808 nova_compute[259627]: 2025-10-14 09:04:30.897 2 DEBUG oslo_concurrency.lockutils [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquired lock "refresh_cache-3c8eac8e-dcfa-41ba-9c01-1185061baf5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:04:30 np0005486808 nova_compute[259627]: 2025-10-14 09:04:30.898 2 DEBUG nova.network.neutron [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:04:31 np0005486808 nova_compute[259627]: 2025-10-14 09:04:31.033 2 DEBUG nova.compute.manager [req-c1a00de1-ea2b-4561-b3fa-f53164ebe3f3 req-5cdc9e9f-267b-4d80-acd6-f427cad6007e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received event network-changed-194dc9cd-03af-4e2c-b8d6-107081204a25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:04:31 np0005486808 nova_compute[259627]: 2025-10-14 09:04:31.034 2 DEBUG nova.compute.manager [req-c1a00de1-ea2b-4561-b3fa-f53164ebe3f3 req-5cdc9e9f-267b-4d80-acd6-f427cad6007e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Refreshing instance network info cache due to event network-changed-194dc9cd-03af-4e2c-b8d6-107081204a25. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:04:31 np0005486808 nova_compute[259627]: 2025-10-14 09:04:31.034 2 DEBUG oslo_concurrency.lockutils [req-c1a00de1-ea2b-4561-b3fa-f53164ebe3f3 req-5cdc9e9f-267b-4d80-acd6-f427cad6007e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-3c8eac8e-dcfa-41ba-9c01-1185061baf5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:04:31 np0005486808 nova_compute[259627]: 2025-10-14 09:04:31.116 2 WARNING nova.network.neutron [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] fc2d149f-aebf-406a-aed2-5161dd22b079 already exists in list: networks containing: ['fc2d149f-aebf-406a-aed2-5161dd22b079']. ignoring it#033[00m
Oct 14 05:04:31 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1468: 305 pgs: 305 active+clean; 200 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 699 KiB/s rd, 4.3 MiB/s wr, 159 op/s
Oct 14 05:04:32 np0005486808 nova_compute[259627]: 2025-10-14 09:04:32.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:04:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:04:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:04:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:04:32
Oct 14 05:04:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:04:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:04:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'default.rgw.log', 'vms', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups', 'volumes', 'default.rgw.control']
Oct 14 05:04:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:04:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:04:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:04:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:04:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:04:32 np0005486808 nova_compute[259627]: 2025-10-14 09:04:32.970 2 DEBUG nova.network.neutron [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Updating instance_info_cache with network_info: [{"id": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "address": "fa:16:3e:48:7e:a5", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09d03bcf-f7", "ovs_interfaceid": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "194dc9cd-03af-4e2c-b8d6-107081204a25", "address": "fa:16:3e:5f:f7:fb", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194dc9cd-03", "ovs_interfaceid": "194dc9cd-03af-4e2c-b8d6-107081204a25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:04:32 np0005486808 nova_compute[259627]: 2025-10-14 09:04:32.995 2 DEBUG oslo_concurrency.lockutils [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Releasing lock "refresh_cache-3c8eac8e-dcfa-41ba-9c01-1185061baf5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:04:32 np0005486808 nova_compute[259627]: 2025-10-14 09:04:32.997 2 DEBUG oslo_concurrency.lockutils [req-c1a00de1-ea2b-4561-b3fa-f53164ebe3f3 req-5cdc9e9f-267b-4d80-acd6-f427cad6007e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-3c8eac8e-dcfa-41ba-9c01-1185061baf5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:04:32 np0005486808 nova_compute[259627]: 2025-10-14 09:04:32.997 2 DEBUG nova.network.neutron [req-c1a00de1-ea2b-4561-b3fa-f53164ebe3f3 req-5cdc9e9f-267b-4d80-acd6-f427cad6007e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Refreshing network info cache for port 194dc9cd-03af-4e2c-b8d6-107081204a25 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:04:33 np0005486808 nova_compute[259627]: 2025-10-14 09:04:33.001 2 DEBUG nova.virt.libvirt.vif [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:03:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-880311128',display_name='tempest-AttachInterfacesTestJSON-server-880311128',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-880311128',id=51,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHJeC/x7PID63fa+XdM1puH3fFuAGCvZuvVgTscxZNhVHcJaK6qOoyzTmbsI3sgqD2j7CCpP6E241ZXLsn//8ic8awTSD83uMaP7AvsnDddi9mfK+5p/w2g8zmdvw2fjPQ==',key_name='tempest-keypair-1232666789',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:04:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-vco8xn00',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=3c8eac8e-dcfa-41ba-9c01-1185061baf5d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "194dc9cd-03af-4e2c-b8d6-107081204a25", "address": "fa:16:3e:5f:f7:fb", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194dc9cd-03", "ovs_interfaceid": "194dc9cd-03af-4e2c-b8d6-107081204a25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:04:33 np0005486808 nova_compute[259627]: 2025-10-14 09:04:33.001 2 DEBUG nova.network.os_vif_util [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "194dc9cd-03af-4e2c-b8d6-107081204a25", "address": "fa:16:3e:5f:f7:fb", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194dc9cd-03", "ovs_interfaceid": "194dc9cd-03af-4e2c-b8d6-107081204a25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:04:33 np0005486808 nova_compute[259627]: 2025-10-14 09:04:33.003 2 DEBUG nova.network.os_vif_util [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:f7:fb,bridge_name='br-int',has_traffic_filtering=True,id=194dc9cd-03af-4e2c-b8d6-107081204a25,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap194dc9cd-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:04:33 np0005486808 nova_compute[259627]: 2025-10-14 09:04:33.003 2 DEBUG os_vif [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:f7:fb,bridge_name='br-int',has_traffic_filtering=True,id=194dc9cd-03af-4e2c-b8d6-107081204a25,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap194dc9cd-03') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:04:33 np0005486808 nova_compute[259627]: 2025-10-14 09:04:33.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:33 np0005486808 nova_compute[259627]: 2025-10-14 09:04:33.004 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:33 np0005486808 nova_compute[259627]: 2025-10-14 09:04:33.005 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:04:33 np0005486808 nova_compute[259627]: 2025-10-14 09:04:33.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:33 np0005486808 nova_compute[259627]: 2025-10-14 09:04:33.011 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap194dc9cd-03, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:33 np0005486808 nova_compute[259627]: 2025-10-14 09:04:33.012 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap194dc9cd-03, col_values=(('external_ids', {'iface-id': '194dc9cd-03af-4e2c-b8d6-107081204a25', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5f:f7:fb', 'vm-uuid': '3c8eac8e-dcfa-41ba-9c01-1185061baf5d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:04:33 np0005486808 NetworkManager[44885]: <info>  [1760432673.0141] manager: (tap194dc9cd-03): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/218)
Oct 14 05:04:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:04:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:04:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:04:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:04:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:04:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:04:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:04:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:04:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:04:33 np0005486808 nova_compute[259627]: 2025-10-14 09:04:33.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:33 np0005486808 nova_compute[259627]: 2025-10-14 09:04:33.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:04:33 np0005486808 nova_compute[259627]: 2025-10-14 09:04:33.023 2 INFO os_vif [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:f7:fb,bridge_name='br-int',has_traffic_filtering=True,id=194dc9cd-03af-4e2c-b8d6-107081204a25,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap194dc9cd-03')#033[00m
Oct 14 05:04:33 np0005486808 nova_compute[259627]: 2025-10-14 09:04:33.024 2 DEBUG nova.virt.libvirt.vif [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:03:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-880311128',display_name='tempest-AttachInterfacesTestJSON-server-880311128',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-880311128',id=51,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHJeC/x7PID63fa+XdM1puH3fFuAGCvZuvVgTscxZNhVHcJaK6qOoyzTmbsI3sgqD2j7CCpP6E241ZXLsn//8ic8awTSD83uMaP7AvsnDddi9mfK+5p/w2g8zmdvw2fjPQ==',key_name='tempest-keypair-1232666789',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:04:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-vco8xn00',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=3c8eac8e-dcfa-41ba-9c01-1185061baf5d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "194dc9cd-03af-4e2c-b8d6-107081204a25", "address": "fa:16:3e:5f:f7:fb", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194dc9cd-03", "ovs_interfaceid": "194dc9cd-03af-4e2c-b8d6-107081204a25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:04:33 np0005486808 nova_compute[259627]: 2025-10-14 09:04:33.024 2 DEBUG nova.network.os_vif_util [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "194dc9cd-03af-4e2c-b8d6-107081204a25", "address": "fa:16:3e:5f:f7:fb", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194dc9cd-03", "ovs_interfaceid": "194dc9cd-03af-4e2c-b8d6-107081204a25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:04:33 np0005486808 nova_compute[259627]: 2025-10-14 09:04:33.025 2 DEBUG nova.network.os_vif_util [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:f7:fb,bridge_name='br-int',has_traffic_filtering=True,id=194dc9cd-03af-4e2c-b8d6-107081204a25,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap194dc9cd-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:04:33 np0005486808 nova_compute[259627]: 2025-10-14 09:04:33.027 2 DEBUG nova.virt.libvirt.guest [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] attach device xml: <interface type="ethernet">
Oct 14 05:04:33 np0005486808 nova_compute[259627]:  <mac address="fa:16:3e:5f:f7:fb"/>
Oct 14 05:04:33 np0005486808 nova_compute[259627]:  <model type="virtio"/>
Oct 14 05:04:33 np0005486808 nova_compute[259627]:  <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:04:33 np0005486808 nova_compute[259627]:  <mtu size="1442"/>
Oct 14 05:04:33 np0005486808 nova_compute[259627]:  <target dev="tap194dc9cd-03"/>
Oct 14 05:04:33 np0005486808 nova_compute[259627]: </interface>
Oct 14 05:04:33 np0005486808 nova_compute[259627]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct 14 05:04:33 np0005486808 NetworkManager[44885]: <info>  [1760432673.0405] manager: (tap194dc9cd-03): new Tun device (/org/freedesktop/NetworkManager/Devices/219)
Oct 14 05:04:33 np0005486808 kernel: tap194dc9cd-03: entered promiscuous mode
Oct 14 05:04:33 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:33Z|00494|binding|INFO|Claiming lport 194dc9cd-03af-4e2c-b8d6-107081204a25 for this chassis.
Oct 14 05:04:33 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:33Z|00495|binding|INFO|194dc9cd-03af-4e2c-b8d6-107081204a25: Claiming fa:16:3e:5f:f7:fb 10.100.0.14
Oct 14 05:04:33 np0005486808 nova_compute[259627]: 2025-10-14 09:04:33.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:33.049 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:f7:fb 10.100.0.14'], port_security=['fa:16:3e:5f:f7:fb 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '3c8eac8e-dcfa-41ba-9c01-1185061baf5d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2d149f-aebf-406a-aed2-5161dd22b079', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8945d892653642ac9c3d894f8bc3619d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '643bc47b-384e-4818-a06b-8ecfda630b09', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e933e805-d9b6-497a-9ba3-5f7153390420, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=194dc9cd-03af-4e2c-b8d6-107081204a25) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:04:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:33.053 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 194dc9cd-03af-4e2c-b8d6-107081204a25 in datapath fc2d149f-aebf-406a-aed2-5161dd22b079 bound to our chassis#033[00m
Oct 14 05:04:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:33.054 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc2d149f-aebf-406a-aed2-5161dd22b079#033[00m
Oct 14 05:04:33 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:33Z|00496|binding|INFO|Setting lport 194dc9cd-03af-4e2c-b8d6-107081204a25 ovn-installed in OVS
Oct 14 05:04:33 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:33Z|00497|binding|INFO|Setting lport 194dc9cd-03af-4e2c-b8d6-107081204a25 up in Southbound
Oct 14 05:04:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:33.076 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[db394068-a4fb-44a8-9be8-35f4aaeb8368]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:33 np0005486808 nova_compute[259627]: 2025-10-14 09:04:33.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:33 np0005486808 systemd-udevd[317211]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:04:33 np0005486808 NetworkManager[44885]: <info>  [1760432673.1106] device (tap194dc9cd-03): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:04:33 np0005486808 NetworkManager[44885]: <info>  [1760432673.1129] device (tap194dc9cd-03): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:04:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:33.121 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d9e921f0-7a1e-4ea4-ba46-ed87857c0820]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:33 np0005486808 nova_compute[259627]: 2025-10-14 09:04:33.122 2 DEBUG nova.virt.libvirt.driver [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:04:33 np0005486808 nova_compute[259627]: 2025-10-14 09:04:33.122 2 DEBUG nova.virt.libvirt.driver [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:04:33 np0005486808 nova_compute[259627]: 2025-10-14 09:04:33.123 2 DEBUG nova.virt.libvirt.driver [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No VIF found with MAC fa:16:3e:48:7e:a5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:04:33 np0005486808 nova_compute[259627]: 2025-10-14 09:04:33.123 2 DEBUG nova.virt.libvirt.driver [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No VIF found with MAC fa:16:3e:5f:f7:fb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:04:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:33.125 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ea50318c-ca23-4328-b9cf-21216de8d085]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:33 np0005486808 nova_compute[259627]: 2025-10-14 09:04:33.143 2 DEBUG nova.virt.libvirt.guest [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:04:33 np0005486808 nova_compute[259627]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:04:33 np0005486808 nova_compute[259627]:  <nova:name>tempest-AttachInterfacesTestJSON-server-880311128</nova:name>
Oct 14 05:04:33 np0005486808 nova_compute[259627]:  <nova:creationTime>2025-10-14 09:04:33</nova:creationTime>
Oct 14 05:04:33 np0005486808 nova_compute[259627]:  <nova:flavor name="m1.nano">
Oct 14 05:04:33 np0005486808 nova_compute[259627]:    <nova:memory>128</nova:memory>
Oct 14 05:04:33 np0005486808 nova_compute[259627]:    <nova:disk>1</nova:disk>
Oct 14 05:04:33 np0005486808 nova_compute[259627]:    <nova:swap>0</nova:swap>
Oct 14 05:04:33 np0005486808 nova_compute[259627]:    <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:04:33 np0005486808 nova_compute[259627]:    <nova:vcpus>1</nova:vcpus>
Oct 14 05:04:33 np0005486808 nova_compute[259627]:  </nova:flavor>
Oct 14 05:04:33 np0005486808 nova_compute[259627]:  <nova:owner>
Oct 14 05:04:33 np0005486808 nova_compute[259627]:    <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 05:04:33 np0005486808 nova_compute[259627]:    <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 05:04:33 np0005486808 nova_compute[259627]:  </nova:owner>
Oct 14 05:04:33 np0005486808 nova_compute[259627]:  <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:04:33 np0005486808 nova_compute[259627]:  <nova:ports>
Oct 14 05:04:33 np0005486808 nova_compute[259627]:    <nova:port uuid="09d03bcf-f719-4ec4-91a0-3c14e350a342">
Oct 14 05:04:33 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 14 05:04:33 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:04:33 np0005486808 nova_compute[259627]:    <nova:port uuid="194dc9cd-03af-4e2c-b8d6-107081204a25">
Oct 14 05:04:33 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 14 05:04:33 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:04:33 np0005486808 nova_compute[259627]:  </nova:ports>
Oct 14 05:04:33 np0005486808 nova_compute[259627]: </nova:instance>
Oct 14 05:04:33 np0005486808 nova_compute[259627]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct 14 05:04:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:33.154 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[02720ab9-0740-491d-a072-5cfa496d3a86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:33 np0005486808 nova_compute[259627]: 2025-10-14 09:04:33.164 2 DEBUG oslo_concurrency.lockutils [None req-e31680fe-feb0-4071-9314-551e933f7935 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "interface-3c8eac8e-dcfa-41ba-9c01-1185061baf5d-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:33.168 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c8de1f2e-47ce-4870-b329-d27df913e96e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc2d149f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:e7:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 142], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642546, 'reachable_time': 32167, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317217, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:33.186 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[56c4368f-b955-44dc-899e-ddeabc36ce9a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642559, 'tstamp': 642559}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317218, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642562, 'tstamp': 642562}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317218, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:33.188 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc2d149f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:33 np0005486808 nova_compute[259627]: 2025-10-14 09:04:33.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:33 np0005486808 nova_compute[259627]: 2025-10-14 09:04:33.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:33.191 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc2d149f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:33.191 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:04:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:33.192 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc2d149f-a0, col_values=(('external_ids', {'iface-id': '156432ee-35b5-40a9-aded-8066933d9972'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:33.192 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:04:33 np0005486808 nova_compute[259627]: 2025-10-14 09:04:33.450 2 DEBUG nova.compute.manager [req-1245325e-20cd-49ad-b8c3-e65361b3edbe req-32658566-b9c2-45ec-a48b-16b111df6119 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received event network-vif-plugged-194dc9cd-03af-4e2c-b8d6-107081204a25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:04:33 np0005486808 nova_compute[259627]: 2025-10-14 09:04:33.450 2 DEBUG oslo_concurrency.lockutils [req-1245325e-20cd-49ad-b8c3-e65361b3edbe req-32658566-b9c2-45ec-a48b-16b111df6119 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:33 np0005486808 nova_compute[259627]: 2025-10-14 09:04:33.451 2 DEBUG oslo_concurrency.lockutils [req-1245325e-20cd-49ad-b8c3-e65361b3edbe req-32658566-b9c2-45ec-a48b-16b111df6119 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:33 np0005486808 nova_compute[259627]: 2025-10-14 09:04:33.451 2 DEBUG oslo_concurrency.lockutils [req-1245325e-20cd-49ad-b8c3-e65361b3edbe req-32658566-b9c2-45ec-a48b-16b111df6119 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:33 np0005486808 nova_compute[259627]: 2025-10-14 09:04:33.452 2 DEBUG nova.compute.manager [req-1245325e-20cd-49ad-b8c3-e65361b3edbe req-32658566-b9c2-45ec-a48b-16b111df6119 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] No waiting events found dispatching network-vif-plugged-194dc9cd-03af-4e2c-b8d6-107081204a25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:04:33 np0005486808 nova_compute[259627]: 2025-10-14 09:04:33.452 2 WARNING nova.compute.manager [req-1245325e-20cd-49ad-b8c3-e65361b3edbe req-32658566-b9c2-45ec-a48b-16b111df6119 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received unexpected event network-vif-plugged-194dc9cd-03af-4e2c-b8d6-107081204a25 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:04:33 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1469: 305 pgs: 305 active+clean; 200 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 311 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 14 05:04:34 np0005486808 nova_compute[259627]: 2025-10-14 09:04:34.246 2 DEBUG nova.objects.instance [None req-c8980aa4-5084-4acd-8050-212b7bed928b 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Lazy-loading 'flavor' on Instance uuid e038df86-1323-4b04-afae-9fe68c98c22c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:04:34 np0005486808 nova_compute[259627]: 2025-10-14 09:04:34.285 2 DEBUG oslo_concurrency.lockutils [None req-c8980aa4-5084-4acd-8050-212b7bed928b 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Acquiring lock "refresh_cache-e038df86-1323-4b04-afae-9fe68c98c22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:04:34 np0005486808 nova_compute[259627]: 2025-10-14 09:04:34.286 2 DEBUG oslo_concurrency.lockutils [None req-c8980aa4-5084-4acd-8050-212b7bed928b 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Acquired lock "refresh_cache-e038df86-1323-4b04-afae-9fe68c98c22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:04:34 np0005486808 nova_compute[259627]: 2025-10-14 09:04:34.331 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432659.3304415, de383510-2de3-40bd-b479-c0010b3f2d1c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:04:34 np0005486808 nova_compute[259627]: 2025-10-14 09:04:34.332 2 INFO nova.compute.manager [-] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:04:34 np0005486808 nova_compute[259627]: 2025-10-14 09:04:34.353 2 DEBUG nova.compute.manager [None req-f30420e9-2a73-4f1c-981e-e81862b060d3 - - - - - -] [instance: de383510-2de3-40bd-b479-c0010b3f2d1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:04:35 np0005486808 nova_compute[259627]: 2025-10-14 09:04:35.140 2 DEBUG oslo_concurrency.lockutils [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "interface-3c8eac8e-dcfa-41ba-9c01-1185061baf5d-194dc9cd-03af-4e2c-b8d6-107081204a25" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:35 np0005486808 nova_compute[259627]: 2025-10-14 09:04:35.140 2 DEBUG oslo_concurrency.lockutils [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "interface-3c8eac8e-dcfa-41ba-9c01-1185061baf5d-194dc9cd-03af-4e2c-b8d6-107081204a25" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:35 np0005486808 nova_compute[259627]: 2025-10-14 09:04:35.165 2 DEBUG nova.objects.instance [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'flavor' on Instance uuid 3c8eac8e-dcfa-41ba-9c01-1185061baf5d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:04:35 np0005486808 nova_compute[259627]: 2025-10-14 09:04:35.193 2 DEBUG nova.virt.libvirt.vif [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:03:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-880311128',display_name='tempest-AttachInterfacesTestJSON-server-880311128',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-880311128',id=51,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHJeC/x7PID63fa+XdM1puH3fFuAGCvZuvVgTscxZNhVHcJaK6qOoyzTmbsI3sgqD2j7CCpP6E241ZXLsn//8ic8awTSD83uMaP7AvsnDddi9mfK+5p/w2g8zmdvw2fjPQ==',key_name='tempest-keypair-1232666789',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:04:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-vco8xn00',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=3c8eac8e-dcfa-41ba-9c01-1185061baf5d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "194dc9cd-03af-4e2c-b8d6-107081204a25", "address": "fa:16:3e:5f:f7:fb", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194dc9cd-03", "ovs_interfaceid": "194dc9cd-03af-4e2c-b8d6-107081204a25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:04:35 np0005486808 nova_compute[259627]: 2025-10-14 09:04:35.194 2 DEBUG nova.network.os_vif_util [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "194dc9cd-03af-4e2c-b8d6-107081204a25", "address": "fa:16:3e:5f:f7:fb", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194dc9cd-03", "ovs_interfaceid": "194dc9cd-03af-4e2c-b8d6-107081204a25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:04:35 np0005486808 nova_compute[259627]: 2025-10-14 09:04:35.195 2 DEBUG nova.network.os_vif_util [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:f7:fb,bridge_name='br-int',has_traffic_filtering=True,id=194dc9cd-03af-4e2c-b8d6-107081204a25,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap194dc9cd-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:04:35 np0005486808 nova_compute[259627]: 2025-10-14 09:04:35.199 2 DEBUG nova.virt.libvirt.guest [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:5f:f7:fb"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap194dc9cd-03"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 14 05:04:35 np0005486808 nova_compute[259627]: 2025-10-14 09:04:35.203 2 DEBUG nova.virt.libvirt.guest [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:5f:f7:fb"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap194dc9cd-03"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 14 05:04:35 np0005486808 nova_compute[259627]: 2025-10-14 09:04:35.207 2 DEBUG nova.virt.libvirt.driver [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Attempting to detach device tap194dc9cd-03 from instance 3c8eac8e-dcfa-41ba-9c01-1185061baf5d from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct 14 05:04:35 np0005486808 nova_compute[259627]: 2025-10-14 09:04:35.207 2 DEBUG nova.virt.libvirt.guest [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] detach device xml: <interface type="ethernet">
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <mac address="fa:16:3e:5f:f7:fb"/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <model type="virtio"/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <mtu size="1442"/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <target dev="tap194dc9cd-03"/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]: </interface>
Oct 14 05:04:35 np0005486808 nova_compute[259627]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 14 05:04:35 np0005486808 nova_compute[259627]: 2025-10-14 09:04:35.214 2 DEBUG nova.virt.libvirt.guest [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:5f:f7:fb"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap194dc9cd-03"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 14 05:04:35 np0005486808 nova_compute[259627]: 2025-10-14 09:04:35.218 2 DEBUG nova.virt.libvirt.guest [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:5f:f7:fb"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap194dc9cd-03"/></interface>not found in domain: <domain type='kvm' id='64'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <name>instance-00000033</name>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <uuid>3c8eac8e-dcfa-41ba-9c01-1185061baf5d</uuid>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <nova:name>tempest-AttachInterfacesTestJSON-server-880311128</nova:name>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <nova:creationTime>2025-10-14 09:04:33</nova:creationTime>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <nova:flavor name="m1.nano">
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <nova:memory>128</nova:memory>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <nova:disk>1</nova:disk>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <nova:swap>0</nova:swap>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <nova:vcpus>1</nova:vcpus>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  </nova:flavor>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <nova:owner>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  </nova:owner>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <nova:ports>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <nova:port uuid="09d03bcf-f719-4ec4-91a0-3c14e350a342">
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <nova:port uuid="194dc9cd-03af-4e2c-b8d6-107081204a25">
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  </nova:ports>
Oct 14 05:04:35 np0005486808 nova_compute[259627]: </nova:instance>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <memory unit='KiB'>131072</memory>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <vcpu placement='static'>1</vcpu>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <resource>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <partition>/machine</partition>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  </resource>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <sysinfo type='smbios'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <entry name='manufacturer'>RDO</entry>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <entry name='product'>OpenStack Compute</entry>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <entry name='serial'>3c8eac8e-dcfa-41ba-9c01-1185061baf5d</entry>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <entry name='uuid'>3c8eac8e-dcfa-41ba-9c01-1185061baf5d</entry>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <entry name='family'>Virtual Machine</entry>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <boot dev='hd'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <smbios mode='sysinfo'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <vmcoreinfo state='on'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <cpu mode='custom' match='exact' check='full'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <model fallback='forbid'>EPYC-Rome</model>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <vendor>AMD</vendor>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='require' name='x2apic'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='require' name='tsc-deadline'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='require' name='hypervisor'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='require' name='tsc_adjust'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='require' name='spec-ctrl'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='require' name='stibp'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='require' name='arch-capabilities'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='require' name='ssbd'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='require' name='cmp_legacy'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='require' name='overflow-recov'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='require' name='succor'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='require' name='ibrs'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='require' name='amd-ssbd'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='require' name='virt-ssbd'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='disable' name='lbrv'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='disable' name='tsc-scale'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='disable' name='vmcb-clean'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='disable' name='flushbyasid'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='disable' name='pause-filter'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='disable' name='pfthreshold'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='disable' name='svme-addr-chk'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='require' name='lfence-always-serializing'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='require' name='rdctl-no'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='require' name='mds-no'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='require' name='pschange-mc-no'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='require' name='gds-no'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='require' name='rfds-no'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='disable' name='xsaves'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='disable' name='svm'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='require' name='topoext'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='disable' name='npt'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='disable' name='nrip-save'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <clock offset='utc'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <timer name='pit' tickpolicy='delay'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <timer name='rtc' tickpolicy='catchup'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <timer name='hpet' present='no'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <on_poweroff>destroy</on_poweroff>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <on_reboot>restart</on_reboot>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <on_crash>destroy</on_crash>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <disk type='network' device='disk'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <driver name='qemu' type='raw' cache='none'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <auth username='openstack'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:        <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <source protocol='rbd' name='vms/3c8eac8e-dcfa-41ba-9c01-1185061baf5d_disk' index='2'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:        <host name='192.168.122.100' port='6789'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target dev='vda' bus='virtio'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='virtio-disk0'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <disk type='network' device='cdrom'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <driver name='qemu' type='raw' cache='none'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <auth username='openstack'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:        <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <source protocol='rbd' name='vms/3c8eac8e-dcfa-41ba-9c01-1185061baf5d_disk.config' index='1'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:        <host name='192.168.122.100' port='6789'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target dev='sda' bus='sata'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <readonly/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='sata0-0-0'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='0' model='pcie-root'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pcie.0'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='1' port='0x10'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.1'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='2' port='0x11'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.2'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='3' port='0x12'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.3'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='4' port='0x13'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.4'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='5' port='0x14'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.5'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='6' port='0x15'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.6'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='7' port='0x16'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.7'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='8' port='0x17'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.8'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='9' port='0x18'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.9'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='10' port='0x19'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.10'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='11' port='0x1a'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.11'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='12' port='0x1b'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.12'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='13' port='0x1c'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.13'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='14' port='0x1d'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.14'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='15' port='0x1e'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.15'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='16' port='0x1f'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.16'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='17' port='0x20'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.17'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='18' port='0x21'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.18'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='19' port='0x22'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.19'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='20' port='0x23'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.20'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='21' port='0x24'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.21'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='22' port='0x25'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.22'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='23' port='0x26'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.23'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='24' port='0x27'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.24'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='25' port='0x28'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.25'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-pci-bridge'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.26'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='usb'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='sata' index='0'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='ide'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <interface type='ethernet'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <mac address='fa:16:3e:48:7e:a5'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target dev='tap09d03bcf-f7'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model type='virtio'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <driver name='vhost' rx_queue_size='512'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <mtu size='1442'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='net0'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <interface type='ethernet'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <mac address='fa:16:3e:5f:f7:fb'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target dev='tap194dc9cd-03'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model type='virtio'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <driver name='vhost' rx_queue_size='512'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <mtu size='1442'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='net1'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <serial type='pty'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <source path='/dev/pts/1'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <log file='/var/lib/nova/instances/3c8eac8e-dcfa-41ba-9c01-1185061baf5d/console.log' append='off'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target type='isa-serial' port='0'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:        <model name='isa-serial'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      </target>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='serial0'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <console type='pty' tty='/dev/pts/1'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <source path='/dev/pts/1'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <log file='/var/lib/nova/instances/3c8eac8e-dcfa-41ba-9c01-1185061baf5d/console.log' append='off'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target type='serial' port='0'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='serial0'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </console>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <input type='tablet' bus='usb'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='input0'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='usb' bus='0' port='1'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <input type='mouse' bus='ps2'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='input1'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <input type='keyboard' bus='ps2'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='input2'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <listen type='address' address='::0'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </graphics>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <audio id='1' type='none'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model type='virtio' heads='1' primary='yes'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='video0'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <watchdog model='itco' action='reset'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='watchdog0'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </watchdog>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <memballoon model='virtio'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <stats period='10'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='balloon0'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <rng model='virtio'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <backend model='random'>/dev/urandom</backend>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='rng0'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <label>system_u:system_r:svirt_t:s0:c247,c365</label>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c247,c365</imagelabel>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  </seclabel>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <label>+107:+107</label>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <imagelabel>+107:+107</imagelabel>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  </seclabel>
Oct 14 05:04:35 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:04:35 np0005486808 nova_compute[259627]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct 14 05:04:35 np0005486808 nova_compute[259627]: 2025-10-14 09:04:35.219 2 INFO nova.virt.libvirt.driver [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully detached device tap194dc9cd-03 from instance 3c8eac8e-dcfa-41ba-9c01-1185061baf5d from the persistent domain config.#033[00m
Oct 14 05:04:35 np0005486808 nova_compute[259627]: 2025-10-14 09:04:35.220 2 DEBUG nova.virt.libvirt.driver [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] (1/8): Attempting to detach device tap194dc9cd-03 with device alias net1 from instance 3c8eac8e-dcfa-41ba-9c01-1185061baf5d from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct 14 05:04:35 np0005486808 nova_compute[259627]: 2025-10-14 09:04:35.220 2 DEBUG nova.virt.libvirt.guest [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] detach device xml: <interface type="ethernet">
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <mac address="fa:16:3e:5f:f7:fb"/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <model type="virtio"/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <mtu size="1442"/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <target dev="tap194dc9cd-03"/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]: </interface>
Oct 14 05:04:35 np0005486808 nova_compute[259627]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 14 05:04:35 np0005486808 nova_compute[259627]: 2025-10-14 09:04:35.293 2 DEBUG nova.network.neutron [req-c1a00de1-ea2b-4561-b3fa-f53164ebe3f3 req-5cdc9e9f-267b-4d80-acd6-f427cad6007e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Updated VIF entry in instance network info cache for port 194dc9cd-03af-4e2c-b8d6-107081204a25. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:04:35 np0005486808 nova_compute[259627]: 2025-10-14 09:04:35.294 2 DEBUG nova.network.neutron [req-c1a00de1-ea2b-4561-b3fa-f53164ebe3f3 req-5cdc9e9f-267b-4d80-acd6-f427cad6007e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Updating instance_info_cache with network_info: [{"id": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "address": "fa:16:3e:48:7e:a5", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09d03bcf-f7", "ovs_interfaceid": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "194dc9cd-03af-4e2c-b8d6-107081204a25", "address": "fa:16:3e:5f:f7:fb", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194dc9cd-03", "ovs_interfaceid": "194dc9cd-03af-4e2c-b8d6-107081204a25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:04:35 np0005486808 nova_compute[259627]: 2025-10-14 09:04:35.317 2 DEBUG oslo_concurrency.lockutils [req-c1a00de1-ea2b-4561-b3fa-f53164ebe3f3 req-5cdc9e9f-267b-4d80-acd6-f427cad6007e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-3c8eac8e-dcfa-41ba-9c01-1185061baf5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:04:35 np0005486808 kernel: tap194dc9cd-03 (unregistering): left promiscuous mode
Oct 14 05:04:35 np0005486808 NetworkManager[44885]: <info>  [1760432675.3376] device (tap194dc9cd-03): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:04:35 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:35Z|00498|binding|INFO|Releasing lport 194dc9cd-03af-4e2c-b8d6-107081204a25 from this chassis (sb_readonly=0)
Oct 14 05:04:35 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:35Z|00499|binding|INFO|Setting lport 194dc9cd-03af-4e2c-b8d6-107081204a25 down in Southbound
Oct 14 05:04:35 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:35Z|00500|binding|INFO|Removing iface tap194dc9cd-03 ovn-installed in OVS
Oct 14 05:04:35 np0005486808 nova_compute[259627]: 2025-10-14 09:04:35.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:35.357 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:f7:fb 10.100.0.14'], port_security=['fa:16:3e:5f:f7:fb 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '3c8eac8e-dcfa-41ba-9c01-1185061baf5d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2d149f-aebf-406a-aed2-5161dd22b079', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8945d892653642ac9c3d894f8bc3619d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '643bc47b-384e-4818-a06b-8ecfda630b09', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e933e805-d9b6-497a-9ba3-5f7153390420, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=194dc9cd-03af-4e2c-b8d6-107081204a25) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:04:35 np0005486808 nova_compute[259627]: 2025-10-14 09:04:35.358 2 DEBUG nova.virt.libvirt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Received event <DeviceRemovedEvent: 1760432675.3579726, 3c8eac8e-dcfa-41ba-9c01-1185061baf5d => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct 14 05:04:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:35.358 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 194dc9cd-03af-4e2c-b8d6-107081204a25 in datapath fc2d149f-aebf-406a-aed2-5161dd22b079 unbound from our chassis#033[00m
Oct 14 05:04:35 np0005486808 nova_compute[259627]: 2025-10-14 09:04:35.360 2 DEBUG nova.virt.libvirt.driver [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Start waiting for the detach event from libvirt for device tap194dc9cd-03 with device alias net1 for instance 3c8eac8e-dcfa-41ba-9c01-1185061baf5d _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct 14 05:04:35 np0005486808 nova_compute[259627]: 2025-10-14 09:04:35.360 2 DEBUG nova.virt.libvirt.guest [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:5f:f7:fb"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap194dc9cd-03"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 14 05:04:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:35.360 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc2d149f-aebf-406a-aed2-5161dd22b079#033[00m
Oct 14 05:04:35 np0005486808 nova_compute[259627]: 2025-10-14 09:04:35.364 2 DEBUG nova.virt.libvirt.guest [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:5f:f7:fb"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap194dc9cd-03"/></interface>not found in domain: <domain type='kvm' id='64'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <name>instance-00000033</name>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <uuid>3c8eac8e-dcfa-41ba-9c01-1185061baf5d</uuid>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <nova:name>tempest-AttachInterfacesTestJSON-server-880311128</nova:name>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <nova:creationTime>2025-10-14 09:04:33</nova:creationTime>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <nova:flavor name="m1.nano">
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <nova:memory>128</nova:memory>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <nova:disk>1</nova:disk>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <nova:swap>0</nova:swap>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <nova:vcpus>1</nova:vcpus>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  </nova:flavor>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <nova:owner>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  </nova:owner>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <nova:ports>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <nova:port uuid="09d03bcf-f719-4ec4-91a0-3c14e350a342">
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <nova:port uuid="194dc9cd-03af-4e2c-b8d6-107081204a25">
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  </nova:ports>
Oct 14 05:04:35 np0005486808 nova_compute[259627]: </nova:instance>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <memory unit='KiB'>131072</memory>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <vcpu placement='static'>1</vcpu>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <resource>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <partition>/machine</partition>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  </resource>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <sysinfo type='smbios'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <entry name='manufacturer'>RDO</entry>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <entry name='product'>OpenStack Compute</entry>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <entry name='serial'>3c8eac8e-dcfa-41ba-9c01-1185061baf5d</entry>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <entry name='uuid'>3c8eac8e-dcfa-41ba-9c01-1185061baf5d</entry>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <entry name='family'>Virtual Machine</entry>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <boot dev='hd'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <smbios mode='sysinfo'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <vmcoreinfo state='on'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <cpu mode='custom' match='exact' check='full'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <model fallback='forbid'>EPYC-Rome</model>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <vendor>AMD</vendor>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='require' name='x2apic'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='require' name='tsc-deadline'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='require' name='hypervisor'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='require' name='tsc_adjust'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='require' name='spec-ctrl'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='require' name='stibp'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='require' name='arch-capabilities'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='require' name='ssbd'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='require' name='cmp_legacy'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='require' name='overflow-recov'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='require' name='succor'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='require' name='ibrs'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='require' name='amd-ssbd'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='require' name='virt-ssbd'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='disable' name='lbrv'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='disable' name='tsc-scale'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='disable' name='vmcb-clean'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='disable' name='flushbyasid'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='disable' name='pause-filter'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='disable' name='pfthreshold'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='disable' name='svme-addr-chk'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='require' name='lfence-always-serializing'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='require' name='rdctl-no'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='require' name='mds-no'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='require' name='pschange-mc-no'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='require' name='gds-no'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='require' name='rfds-no'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='disable' name='xsaves'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='disable' name='svm'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='require' name='topoext'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='disable' name='npt'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <feature policy='disable' name='nrip-save'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <clock offset='utc'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <timer name='pit' tickpolicy='delay'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <timer name='rtc' tickpolicy='catchup'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <timer name='hpet' present='no'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <on_poweroff>destroy</on_poweroff>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <on_reboot>restart</on_reboot>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <on_crash>destroy</on_crash>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <disk type='network' device='disk'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <driver name='qemu' type='raw' cache='none'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <auth username='openstack'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:        <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <source protocol='rbd' name='vms/3c8eac8e-dcfa-41ba-9c01-1185061baf5d_disk' index='2'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:        <host name='192.168.122.100' port='6789'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target dev='vda' bus='virtio'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='virtio-disk0'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <disk type='network' device='cdrom'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <driver name='qemu' type='raw' cache='none'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <auth username='openstack'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:        <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <source protocol='rbd' name='vms/3c8eac8e-dcfa-41ba-9c01-1185061baf5d_disk.config' index='1'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:        <host name='192.168.122.100' port='6789'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target dev='sda' bus='sata'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <readonly/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='sata0-0-0'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='0' model='pcie-root'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pcie.0'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='1' port='0x10'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.1'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='2' port='0x11'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.2'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='3' port='0x12'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.3'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='4' port='0x13'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.4'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='5' port='0x14'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.5'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='6' port='0x15'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.6'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='7' port='0x16'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.7'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='8' port='0x17'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.8'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='9' port='0x18'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.9'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='10' port='0x19'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.10'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='11' port='0x1a'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.11'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='12' port='0x1b'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.12'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='13' port='0x1c'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.13'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='14' port='0x1d'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.14'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='15' port='0x1e'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.15'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='16' port='0x1f'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.16'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='17' port='0x20'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.17'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='18' port='0x21'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.18'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='19' port='0x22'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.19'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='20' port='0x23'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.20'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='21' port='0x24'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.21'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='22' port='0x25'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.22'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='23' port='0x26'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.23'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='24' port='0x27'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.24'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target chassis='25' port='0x28'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.25'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model name='pcie-pci-bridge'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='pci.26'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='usb'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <controller type='sata' index='0'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='ide'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <interface type='ethernet'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <mac address='fa:16:3e:48:7e:a5'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target dev='tap09d03bcf-f7'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model type='virtio'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <driver name='vhost' rx_queue_size='512'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <mtu size='1442'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='net0'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <serial type='pty'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <source path='/dev/pts/1'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <log file='/var/lib/nova/instances/3c8eac8e-dcfa-41ba-9c01-1185061baf5d/console.log' append='off'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target type='isa-serial' port='0'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:        <model name='isa-serial'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      </target>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='serial0'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <console type='pty' tty='/dev/pts/1'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <source path='/dev/pts/1'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <log file='/var/lib/nova/instances/3c8eac8e-dcfa-41ba-9c01-1185061baf5d/console.log' append='off'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <target type='serial' port='0'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='serial0'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </console>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <input type='tablet' bus='usb'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='input0'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='usb' bus='0' port='1'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <input type='mouse' bus='ps2'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='input1'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <input type='keyboard' bus='ps2'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='input2'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <listen type='address' address='::0'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </graphics>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <audio id='1' type='none'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <model type='virtio' heads='1' primary='yes'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='video0'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <watchdog model='itco' action='reset'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='watchdog0'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </watchdog>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <memballoon model='virtio'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <stats period='10'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='balloon0'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <rng model='virtio'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <backend model='random'>/dev/urandom</backend>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <alias name='rng0'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <label>system_u:system_r:svirt_t:s0:c247,c365</label>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c247,c365</imagelabel>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  </seclabel>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <label>+107:+107</label>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <imagelabel>+107:+107</imagelabel>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  </seclabel>
Oct 14 05:04:35 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:04:35 np0005486808 nova_compute[259627]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct 14 05:04:35 np0005486808 nova_compute[259627]: 2025-10-14 09:04:35.364 2 INFO nova.virt.libvirt.driver [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully detached device tap194dc9cd-03 from instance 3c8eac8e-dcfa-41ba-9c01-1185061baf5d from the live domain config.#033[00m
Oct 14 05:04:35 np0005486808 nova_compute[259627]: 2025-10-14 09:04:35.365 2 DEBUG nova.virt.libvirt.vif [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:03:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-880311128',display_name='tempest-AttachInterfacesTestJSON-server-880311128',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-880311128',id=51,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHJeC/x7PID63fa+XdM1puH3fFuAGCvZuvVgTscxZNhVHcJaK6qOoyzTmbsI3sgqD2j7CCpP6E241ZXLsn//8ic8awTSD83uMaP7AvsnDddi9mfK+5p/w2g8zmdvw2fjPQ==',key_name='tempest-keypair-1232666789',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:04:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-vco8xn00',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=3c8eac8e-dcfa-41ba-9c01-1185061baf5d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "194dc9cd-03af-4e2c-b8d6-107081204a25", "address": "fa:16:3e:5f:f7:fb", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194dc9cd-03", "ovs_interfaceid": "194dc9cd-03af-4e2c-b8d6-107081204a25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:04:35 np0005486808 nova_compute[259627]: 2025-10-14 09:04:35.365 2 DEBUG nova.network.os_vif_util [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "194dc9cd-03af-4e2c-b8d6-107081204a25", "address": "fa:16:3e:5f:f7:fb", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194dc9cd-03", "ovs_interfaceid": "194dc9cd-03af-4e2c-b8d6-107081204a25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:04:35 np0005486808 nova_compute[259627]: 2025-10-14 09:04:35.368 2 DEBUG nova.network.os_vif_util [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:f7:fb,bridge_name='br-int',has_traffic_filtering=True,id=194dc9cd-03af-4e2c-b8d6-107081204a25,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap194dc9cd-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:04:35 np0005486808 nova_compute[259627]: 2025-10-14 09:04:35.369 2 DEBUG os_vif [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:f7:fb,bridge_name='br-int',has_traffic_filtering=True,id=194dc9cd-03af-4e2c-b8d6-107081204a25,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap194dc9cd-03') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:04:35 np0005486808 nova_compute[259627]: 2025-10-14 09:04:35.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:35 np0005486808 nova_compute[259627]: 2025-10-14 09:04:35.371 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap194dc9cd-03, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:35 np0005486808 nova_compute[259627]: 2025-10-14 09:04:35.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:35 np0005486808 nova_compute[259627]: 2025-10-14 09:04:35.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:04:35 np0005486808 nova_compute[259627]: 2025-10-14 09:04:35.379 2 INFO os_vif [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:f7:fb,bridge_name='br-int',has_traffic_filtering=True,id=194dc9cd-03af-4e2c-b8d6-107081204a25,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap194dc9cd-03')#033[00m
Oct 14 05:04:35 np0005486808 nova_compute[259627]: 2025-10-14 09:04:35.380 2 DEBUG nova.virt.libvirt.guest [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <nova:name>tempest-AttachInterfacesTestJSON-server-880311128</nova:name>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <nova:creationTime>2025-10-14 09:04:35</nova:creationTime>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <nova:flavor name="m1.nano">
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <nova:memory>128</nova:memory>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <nova:disk>1</nova:disk>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <nova:swap>0</nova:swap>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <nova:vcpus>1</nova:vcpus>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  </nova:flavor>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <nova:owner>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  </nova:owner>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  <nova:ports>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    <nova:port uuid="09d03bcf-f719-4ec4-91a0-3c14e350a342">
Oct 14 05:04:35 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:04:35 np0005486808 nova_compute[259627]:  </nova:ports>
Oct 14 05:04:35 np0005486808 nova_compute[259627]: </nova:instance>
Oct 14 05:04:35 np0005486808 nova_compute[259627]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct 14 05:04:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:35.394 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6dbba2fe-e9cc-4225-a02c-d6417fa54866]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:35.439 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[865acbdb-f6d4-42ed-a762-0b2fed629e40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:35.443 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[02ee724d-54dd-4ec7-8e8a-c3ee46f4c13a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:35.475 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[2bda4eb9-a08d-4566-a00a-5866b938ffd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:35.494 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4429ce0c-0800-4709-94ed-44f8855286e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc2d149f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:e7:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 142], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642546, 'reachable_time': 32167, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317228, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:35.511 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e4fac5ee-465b-475f-9273-e3065c0c3028]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642559, 'tstamp': 642559}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317229, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642562, 'tstamp': 642562}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317229, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:35.513 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc2d149f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:35 np0005486808 nova_compute[259627]: 2025-10-14 09:04:35.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:35 np0005486808 nova_compute[259627]: 2025-10-14 09:04:35.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:35.516 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc2d149f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:35.516 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:04:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:35.516 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc2d149f-a0, col_values=(('external_ids', {'iface-id': '156432ee-35b5-40a9-aded-8066933d9972'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:35.516 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:04:35 np0005486808 nova_compute[259627]: 2025-10-14 09:04:35.682 2 DEBUG nova.compute.manager [req-3142e40d-5615-43c3-8234-34d5d8b4a9b4 req-f161c668-f9b7-4254-a0d7-59651d4d96b4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received event network-vif-plugged-194dc9cd-03af-4e2c-b8d6-107081204a25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:04:35 np0005486808 nova_compute[259627]: 2025-10-14 09:04:35.682 2 DEBUG oslo_concurrency.lockutils [req-3142e40d-5615-43c3-8234-34d5d8b4a9b4 req-f161c668-f9b7-4254-a0d7-59651d4d96b4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:35 np0005486808 nova_compute[259627]: 2025-10-14 09:04:35.683 2 DEBUG oslo_concurrency.lockutils [req-3142e40d-5615-43c3-8234-34d5d8b4a9b4 req-f161c668-f9b7-4254-a0d7-59651d4d96b4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:35 np0005486808 nova_compute[259627]: 2025-10-14 09:04:35.683 2 DEBUG oslo_concurrency.lockutils [req-3142e40d-5615-43c3-8234-34d5d8b4a9b4 req-f161c668-f9b7-4254-a0d7-59651d4d96b4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:35 np0005486808 nova_compute[259627]: 2025-10-14 09:04:35.683 2 DEBUG nova.compute.manager [req-3142e40d-5615-43c3-8234-34d5d8b4a9b4 req-f161c668-f9b7-4254-a0d7-59651d4d96b4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] No waiting events found dispatching network-vif-plugged-194dc9cd-03af-4e2c-b8d6-107081204a25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:04:35 np0005486808 nova_compute[259627]: 2025-10-14 09:04:35.683 2 WARNING nova.compute.manager [req-3142e40d-5615-43c3-8234-34d5d8b4a9b4 req-f161c668-f9b7-4254-a0d7-59651d4d96b4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received unexpected event network-vif-plugged-194dc9cd-03af-4e2c-b8d6-107081204a25 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:04:35 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1470: 305 pgs: 305 active+clean; 200 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 312 KiB/s rd, 2.2 MiB/s wr, 62 op/s
Oct 14 05:04:36 np0005486808 nova_compute[259627]: 2025-10-14 09:04:36.289 2 DEBUG nova.network.neutron [None req-c8980aa4-5084-4acd-8050-212b7bed928b 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:04:36 np0005486808 nova_compute[259627]: 2025-10-14 09:04:36.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:36 np0005486808 podman[317230]: 2025-10-14 09:04:36.664834225 +0000 UTC m=+0.064356833 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 05:04:36 np0005486808 podman[317231]: 2025-10-14 09:04:36.6670683 +0000 UTC m=+0.062664462 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 14 05:04:36 np0005486808 nova_compute[259627]: 2025-10-14 09:04:36.877 2 DEBUG oslo_concurrency.lockutils [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "refresh_cache-3c8eac8e-dcfa-41ba-9c01-1185061baf5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:04:36 np0005486808 nova_compute[259627]: 2025-10-14 09:04:36.878 2 DEBUG oslo_concurrency.lockutils [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquired lock "refresh_cache-3c8eac8e-dcfa-41ba-9c01-1185061baf5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:04:36 np0005486808 nova_compute[259627]: 2025-10-14 09:04:36.878 2 DEBUG nova.network.neutron [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.057 2 DEBUG nova.compute.manager [req-8d30da72-76e5-4139-8fb7-5b1c51ecf4c1 req-4b170249-c985-4543-b0e4-b5f1ed1fd201 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received event network-vif-deleted-194dc9cd-03af-4e2c-b8d6-107081204a25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.058 2 INFO nova.compute.manager [req-8d30da72-76e5-4139-8fb7-5b1c51ecf4c1 req-4b170249-c985-4543-b0e4-b5f1ed1fd201 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Neutron deleted interface 194dc9cd-03af-4e2c-b8d6-107081204a25; detaching it from the instance and deleting it from the info cache#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.058 2 DEBUG nova.network.neutron [req-8d30da72-76e5-4139-8fb7-5b1c51ecf4c1 req-4b170249-c985-4543-b0e4-b5f1ed1fd201 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Updating instance_info_cache with network_info: [{"id": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "address": "fa:16:3e:48:7e:a5", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09d03bcf-f7", "ovs_interfaceid": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.087 2 DEBUG nova.objects.instance [req-8d30da72-76e5-4139-8fb7-5b1c51ecf4c1 req-4b170249-c985-4543-b0e4-b5f1ed1fd201 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lazy-loading 'system_metadata' on Instance uuid 3c8eac8e-dcfa-41ba-9c01-1185061baf5d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.137 2 DEBUG nova.objects.instance [req-8d30da72-76e5-4139-8fb7-5b1c51ecf4c1 req-4b170249-c985-4543-b0e4-b5f1ed1fd201 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lazy-loading 'flavor' on Instance uuid 3c8eac8e-dcfa-41ba-9c01-1185061baf5d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.179 2 DEBUG nova.virt.libvirt.vif [req-8d30da72-76e5-4139-8fb7-5b1c51ecf4c1 req-4b170249-c985-4543-b0e4-b5f1ed1fd201 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:03:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-880311128',display_name='tempest-AttachInterfacesTestJSON-server-880311128',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-880311128',id=51,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHJeC/x7PID63fa+XdM1puH3fFuAGCvZuvVgTscxZNhVHcJaK6qOoyzTmbsI3sgqD2j7CCpP6E241ZXLsn//8ic8awTSD83uMaP7AvsnDddi9mfK+5p/w2g8zmdvw2fjPQ==',key_name='tempest-keypair-1232666789',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:04:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-vco8xn00',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=3c8eac8e-dcfa-41ba-9c01-1185061baf5d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "194dc9cd-03af-4e2c-b8d6-107081204a25", "address": "fa:16:3e:5f:f7:fb", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194dc9cd-03", "ovs_interfaceid": "194dc9cd-03af-4e2c-b8d6-107081204a25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.180 2 DEBUG nova.network.os_vif_util [req-8d30da72-76e5-4139-8fb7-5b1c51ecf4c1 req-4b170249-c985-4543-b0e4-b5f1ed1fd201 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Converting VIF {"id": "194dc9cd-03af-4e2c-b8d6-107081204a25", "address": "fa:16:3e:5f:f7:fb", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194dc9cd-03", "ovs_interfaceid": "194dc9cd-03af-4e2c-b8d6-107081204a25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.181 2 DEBUG nova.network.os_vif_util [req-8d30da72-76e5-4139-8fb7-5b1c51ecf4c1 req-4b170249-c985-4543-b0e4-b5f1ed1fd201 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5f:f7:fb,bridge_name='br-int',has_traffic_filtering=True,id=194dc9cd-03af-4e2c-b8d6-107081204a25,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap194dc9cd-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.185 2 DEBUG nova.virt.libvirt.guest [req-8d30da72-76e5-4139-8fb7-5b1c51ecf4c1 req-4b170249-c985-4543-b0e4-b5f1ed1fd201 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:5f:f7:fb"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap194dc9cd-03"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.190 2 DEBUG nova.virt.libvirt.guest [req-8d30da72-76e5-4139-8fb7-5b1c51ecf4c1 req-4b170249-c985-4543-b0e4-b5f1ed1fd201 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:5f:f7:fb"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap194dc9cd-03"/></interface>not found in domain: <domain type='kvm' id='64'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <name>instance-00000033</name>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <uuid>3c8eac8e-dcfa-41ba-9c01-1185061baf5d</uuid>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <nova:name>tempest-AttachInterfacesTestJSON-server-880311128</nova:name>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <nova:creationTime>2025-10-14 09:04:35</nova:creationTime>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <nova:flavor name="m1.nano">
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <nova:memory>128</nova:memory>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <nova:disk>1</nova:disk>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <nova:swap>0</nova:swap>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <nova:vcpus>1</nova:vcpus>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  </nova:flavor>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <nova:owner>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  </nova:owner>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <nova:ports>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <nova:port uuid="09d03bcf-f719-4ec4-91a0-3c14e350a342">
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  </nova:ports>
Oct 14 05:04:37 np0005486808 nova_compute[259627]: </nova:instance>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <memory unit='KiB'>131072</memory>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <vcpu placement='static'>1</vcpu>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <resource>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <partition>/machine</partition>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  </resource>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <sysinfo type='smbios'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <entry name='manufacturer'>RDO</entry>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <entry name='product'>OpenStack Compute</entry>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <entry name='serial'>3c8eac8e-dcfa-41ba-9c01-1185061baf5d</entry>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <entry name='uuid'>3c8eac8e-dcfa-41ba-9c01-1185061baf5d</entry>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <entry name='family'>Virtual Machine</entry>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <boot dev='hd'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <smbios mode='sysinfo'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <vmcoreinfo state='on'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <cpu mode='custom' match='exact' check='full'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <model fallback='forbid'>EPYC-Rome</model>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <vendor>AMD</vendor>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='require' name='x2apic'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='require' name='tsc-deadline'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='require' name='hypervisor'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='require' name='tsc_adjust'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='require' name='spec-ctrl'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='require' name='stibp'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='require' name='arch-capabilities'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='require' name='ssbd'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='require' name='cmp_legacy'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='require' name='overflow-recov'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='require' name='succor'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='require' name='ibrs'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='require' name='amd-ssbd'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='require' name='virt-ssbd'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='disable' name='lbrv'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='disable' name='tsc-scale'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='disable' name='vmcb-clean'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='disable' name='flushbyasid'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='disable' name='pause-filter'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='disable' name='pfthreshold'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='disable' name='svme-addr-chk'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='require' name='lfence-always-serializing'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='require' name='rdctl-no'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='require' name='mds-no'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='require' name='pschange-mc-no'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='require' name='gds-no'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='require' name='rfds-no'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='disable' name='xsaves'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='disable' name='svm'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='require' name='topoext'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='disable' name='npt'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='disable' name='nrip-save'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <clock offset='utc'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <timer name='pit' tickpolicy='delay'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <timer name='rtc' tickpolicy='catchup'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <timer name='hpet' present='no'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <on_poweroff>destroy</on_poweroff>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <on_reboot>restart</on_reboot>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <on_crash>destroy</on_crash>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <disk type='network' device='disk'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <driver name='qemu' type='raw' cache='none'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <auth username='openstack'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:        <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <source protocol='rbd' name='vms/3c8eac8e-dcfa-41ba-9c01-1185061baf5d_disk' index='2'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:        <host name='192.168.122.100' port='6789'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target dev='vda' bus='virtio'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='virtio-disk0'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <disk type='network' device='cdrom'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <driver name='qemu' type='raw' cache='none'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <auth username='openstack'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:        <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <source protocol='rbd' name='vms/3c8eac8e-dcfa-41ba-9c01-1185061baf5d_disk.config' index='1'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:        <host name='192.168.122.100' port='6789'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target dev='sda' bus='sata'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <readonly/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='sata0-0-0'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='0' model='pcie-root'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pcie.0'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='1' port='0x10'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.1'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='2' port='0x11'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.2'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='3' port='0x12'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.3'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='4' port='0x13'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.4'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='5' port='0x14'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.5'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='6' port='0x15'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.6'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='7' port='0x16'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.7'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='8' port='0x17'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.8'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='9' port='0x18'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.9'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='10' port='0x19'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.10'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='11' port='0x1a'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.11'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='12' port='0x1b'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.12'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='13' port='0x1c'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.13'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='14' port='0x1d'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.14'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='15' port='0x1e'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.15'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='16' port='0x1f'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.16'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='17' port='0x20'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.17'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='18' port='0x21'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.18'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='19' port='0x22'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.19'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='20' port='0x23'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.20'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='21' port='0x24'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.21'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='22' port='0x25'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.22'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='23' port='0x26'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.23'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='24' port='0x27'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.24'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='25' port='0x28'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.25'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-pci-bridge'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.26'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='usb'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='sata' index='0'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='ide'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <interface type='ethernet'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <mac address='fa:16:3e:48:7e:a5'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target dev='tap09d03bcf-f7'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model type='virtio'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <driver name='vhost' rx_queue_size='512'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <mtu size='1442'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='net0'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <serial type='pty'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <source path='/dev/pts/1'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <log file='/var/lib/nova/instances/3c8eac8e-dcfa-41ba-9c01-1185061baf5d/console.log' append='off'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target type='isa-serial' port='0'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:        <model name='isa-serial'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      </target>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='serial0'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <console type='pty' tty='/dev/pts/1'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <source path='/dev/pts/1'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <log file='/var/lib/nova/instances/3c8eac8e-dcfa-41ba-9c01-1185061baf5d/console.log' append='off'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target type='serial' port='0'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='serial0'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </console>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <input type='tablet' bus='usb'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='input0'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='usb' bus='0' port='1'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <input type='mouse' bus='ps2'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='input1'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <input type='keyboard' bus='ps2'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='input2'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <listen type='address' address='::0'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </graphics>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <audio id='1' type='none'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model type='virtio' heads='1' primary='yes'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='video0'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <watchdog model='itco' action='reset'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='watchdog0'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </watchdog>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <memballoon model='virtio'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <stats period='10'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='balloon0'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <rng model='virtio'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <backend model='random'>/dev/urandom</backend>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='rng0'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <label>system_u:system_r:svirt_t:s0:c247,c365</label>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c247,c365</imagelabel>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  </seclabel>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <label>+107:+107</label>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <imagelabel>+107:+107</imagelabel>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  </seclabel>
Oct 14 05:04:37 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:04:37 np0005486808 nova_compute[259627]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.191 2 DEBUG nova.virt.libvirt.guest [req-8d30da72-76e5-4139-8fb7-5b1c51ecf4c1 req-4b170249-c985-4543-b0e4-b5f1ed1fd201 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:5f:f7:fb"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap194dc9cd-03"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.199 2 DEBUG nova.virt.libvirt.guest [req-8d30da72-76e5-4139-8fb7-5b1c51ecf4c1 req-4b170249-c985-4543-b0e4-b5f1ed1fd201 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:5f:f7:fb"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap194dc9cd-03"/></interface> not found in domain: <domain type='kvm' id='64'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <name>instance-00000033</name>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <uuid>3c8eac8e-dcfa-41ba-9c01-1185061baf5d</uuid>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <nova:name>tempest-AttachInterfacesTestJSON-server-880311128</nova:name>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <nova:creationTime>2025-10-14 09:04:35</nova:creationTime>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <nova:flavor name="m1.nano">
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <nova:memory>128</nova:memory>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <nova:disk>1</nova:disk>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <nova:swap>0</nova:swap>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <nova:vcpus>1</nova:vcpus>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  </nova:flavor>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <nova:owner>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  </nova:owner>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <nova:ports>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <nova:port uuid="09d03bcf-f719-4ec4-91a0-3c14e350a342">
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  </nova:ports>
Oct 14 05:04:37 np0005486808 nova_compute[259627]: </nova:instance>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <memory unit='KiB'>131072</memory>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <vcpu placement='static'>1</vcpu>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <resource>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <partition>/machine</partition>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  </resource>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <sysinfo type='smbios'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <entry name='manufacturer'>RDO</entry>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <entry name='product'>OpenStack Compute</entry>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <entry name='serial'>3c8eac8e-dcfa-41ba-9c01-1185061baf5d</entry>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <entry name='uuid'>3c8eac8e-dcfa-41ba-9c01-1185061baf5d</entry>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <entry name='family'>Virtual Machine</entry>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <boot dev='hd'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <smbios mode='sysinfo'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <vmcoreinfo state='on'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <cpu mode='custom' match='exact' check='full'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <model fallback='forbid'>EPYC-Rome</model>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <vendor>AMD</vendor>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='require' name='x2apic'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='require' name='tsc-deadline'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='require' name='hypervisor'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='require' name='tsc_adjust'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='require' name='spec-ctrl'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='require' name='stibp'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='require' name='arch-capabilities'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='require' name='ssbd'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='require' name='cmp_legacy'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='require' name='overflow-recov'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='require' name='succor'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='require' name='ibrs'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='require' name='amd-ssbd'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='require' name='virt-ssbd'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='disable' name='lbrv'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='disable' name='tsc-scale'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='disable' name='vmcb-clean'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='disable' name='flushbyasid'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='disable' name='pause-filter'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='disable' name='pfthreshold'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='disable' name='svme-addr-chk'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='require' name='lfence-always-serializing'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='require' name='rdctl-no'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='require' name='mds-no'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='require' name='pschange-mc-no'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='require' name='gds-no'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='require' name='rfds-no'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='disable' name='xsaves'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='disable' name='svm'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='require' name='topoext'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='disable' name='npt'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <feature policy='disable' name='nrip-save'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <clock offset='utc'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <timer name='pit' tickpolicy='delay'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <timer name='rtc' tickpolicy='catchup'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <timer name='hpet' present='no'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <on_poweroff>destroy</on_poweroff>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <on_reboot>restart</on_reboot>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <on_crash>destroy</on_crash>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <disk type='network' device='disk'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <driver name='qemu' type='raw' cache='none'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <auth username='openstack'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:        <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <source protocol='rbd' name='vms/3c8eac8e-dcfa-41ba-9c01-1185061baf5d_disk' index='2'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:        <host name='192.168.122.100' port='6789'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target dev='vda' bus='virtio'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='virtio-disk0'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <disk type='network' device='cdrom'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <driver name='qemu' type='raw' cache='none'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <auth username='openstack'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:        <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <source protocol='rbd' name='vms/3c8eac8e-dcfa-41ba-9c01-1185061baf5d_disk.config' index='1'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:        <host name='192.168.122.100' port='6789'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target dev='sda' bus='sata'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <readonly/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='sata0-0-0'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='0' model='pcie-root'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pcie.0'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='1' port='0x10'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.1'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='2' port='0x11'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.2'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='3' port='0x12'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.3'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='4' port='0x13'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.4'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='5' port='0x14'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.5'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='6' port='0x15'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.6'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='7' port='0x16'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.7'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='8' port='0x17'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.8'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='9' port='0x18'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.9'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='10' port='0x19'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.10'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='11' port='0x1a'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.11'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='12' port='0x1b'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.12'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='13' port='0x1c'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.13'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='14' port='0x1d'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.14'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='15' port='0x1e'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.15'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='16' port='0x1f'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.16'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='17' port='0x20'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.17'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='18' port='0x21'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.18'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='19' port='0x22'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.19'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='20' port='0x23'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.20'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='21' port='0x24'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.21'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='22' port='0x25'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.22'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='23' port='0x26'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.23'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='24' port='0x27'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.24'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target chassis='25' port='0x28'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.25'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model name='pcie-pci-bridge'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='pci.26'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='usb'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <controller type='sata' index='0'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='ide'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <interface type='ethernet'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <mac address='fa:16:3e:48:7e:a5'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target dev='tap09d03bcf-f7'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model type='virtio'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <driver name='vhost' rx_queue_size='512'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <mtu size='1442'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='net0'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <serial type='pty'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <source path='/dev/pts/1'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <log file='/var/lib/nova/instances/3c8eac8e-dcfa-41ba-9c01-1185061baf5d/console.log' append='off'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target type='isa-serial' port='0'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:        <model name='isa-serial'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      </target>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='serial0'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <console type='pty' tty='/dev/pts/1'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <source path='/dev/pts/1'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <log file='/var/lib/nova/instances/3c8eac8e-dcfa-41ba-9c01-1185061baf5d/console.log' append='off'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <target type='serial' port='0'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='serial0'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </console>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <input type='tablet' bus='usb'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='input0'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='usb' bus='0' port='1'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <input type='mouse' bus='ps2'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='input1'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <input type='keyboard' bus='ps2'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='input2'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <listen type='address' address='::0'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </graphics>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <audio id='1' type='none'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <model type='virtio' heads='1' primary='yes'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='video0'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <watchdog model='itco' action='reset'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='watchdog0'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </watchdog>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <memballoon model='virtio'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <stats period='10'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='balloon0'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <rng model='virtio'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <backend model='random'>/dev/urandom</backend>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <alias name='rng0'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <label>system_u:system_r:svirt_t:s0:c247,c365</label>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c247,c365</imagelabel>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  </seclabel>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <label>+107:+107</label>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <imagelabel>+107:+107</imagelabel>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  </seclabel>
Oct 14 05:04:37 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:04:37 np0005486808 nova_compute[259627]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.200 2 WARNING nova.virt.libvirt.driver [req-8d30da72-76e5-4139-8fb7-5b1c51ecf4c1 req-4b170249-c985-4543-b0e4-b5f1ed1fd201 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Detaching interface fa:16:3e:5f:f7:fb failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap194dc9cd-03' not found.
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.201 2 DEBUG nova.virt.libvirt.vif [req-8d30da72-76e5-4139-8fb7-5b1c51ecf4c1 req-4b170249-c985-4543-b0e4-b5f1ed1fd201 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:03:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-880311128',display_name='tempest-AttachInterfacesTestJSON-server-880311128',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-880311128',id=51,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHJeC/x7PID63fa+XdM1puH3fFuAGCvZuvVgTscxZNhVHcJaK6qOoyzTmbsI3sgqD2j7CCpP6E241ZXLsn//8ic8awTSD83uMaP7AvsnDddi9mfK+5p/w2g8zmdvw2fjPQ==',key_name='tempest-keypair-1232666789',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:04:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-vco8xn00',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=3c8eac8e-dcfa-41ba-9c01-1185061baf5d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "194dc9cd-03af-4e2c-b8d6-107081204a25", "address": "fa:16:3e:5f:f7:fb", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194dc9cd-03", "ovs_interfaceid": "194dc9cd-03af-4e2c-b8d6-107081204a25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.201 2 DEBUG nova.network.os_vif_util [req-8d30da72-76e5-4139-8fb7-5b1c51ecf4c1 req-4b170249-c985-4543-b0e4-b5f1ed1fd201 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Converting VIF {"id": "194dc9cd-03af-4e2c-b8d6-107081204a25", "address": "fa:16:3e:5f:f7:fb", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194dc9cd-03", "ovs_interfaceid": "194dc9cd-03af-4e2c-b8d6-107081204a25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.202 2 DEBUG nova.network.os_vif_util [req-8d30da72-76e5-4139-8fb7-5b1c51ecf4c1 req-4b170249-c985-4543-b0e4-b5f1ed1fd201 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5f:f7:fb,bridge_name='br-int',has_traffic_filtering=True,id=194dc9cd-03af-4e2c-b8d6-107081204a25,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap194dc9cd-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.203 2 DEBUG os_vif [req-8d30da72-76e5-4139-8fb7-5b1c51ecf4c1 req-4b170249-c985-4543-b0e4-b5f1ed1fd201 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5f:f7:fb,bridge_name='br-int',has_traffic_filtering=True,id=194dc9cd-03af-4e2c-b8d6-107081204a25,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap194dc9cd-03') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.211 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap194dc9cd-03, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.212 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.214 2 INFO os_vif [req-8d30da72-76e5-4139-8fb7-5b1c51ecf4c1 req-4b170249-c985-4543-b0e4-b5f1ed1fd201 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5f:f7:fb,bridge_name='br-int',has_traffic_filtering=True,id=194dc9cd-03af-4e2c-b8d6-107081204a25,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap194dc9cd-03')#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.215 2 DEBUG nova.virt.libvirt.guest [req-8d30da72-76e5-4139-8fb7-5b1c51ecf4c1 req-4b170249-c985-4543-b0e4-b5f1ed1fd201 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <nova:name>tempest-AttachInterfacesTestJSON-server-880311128</nova:name>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <nova:creationTime>2025-10-14 09:04:37</nova:creationTime>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <nova:flavor name="m1.nano">
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <nova:memory>128</nova:memory>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <nova:disk>1</nova:disk>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <nova:swap>0</nova:swap>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <nova:vcpus>1</nova:vcpus>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  </nova:flavor>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <nova:owner>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  </nova:owner>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  <nova:ports>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    <nova:port uuid="09d03bcf-f719-4ec4-91a0-3c14e350a342">
Oct 14 05:04:37 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:04:37 np0005486808 nova_compute[259627]:  </nova:ports>
Oct 14 05:04:37 np0005486808 nova_compute[259627]: </nova:instance>
Oct 14 05:04:37 np0005486808 nova_compute[259627]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.296 2 DEBUG oslo_concurrency.lockutils [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.296 2 DEBUG oslo_concurrency.lockutils [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.297 2 DEBUG oslo_concurrency.lockutils [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.297 2 DEBUG oslo_concurrency.lockutils [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.297 2 DEBUG oslo_concurrency.lockutils [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.298 2 INFO nova.compute.manager [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Terminating instance#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.299 2 DEBUG nova.compute.manager [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:04:37 np0005486808 kernel: tap09d03bcf-f7 (unregistering): left promiscuous mode
Oct 14 05:04:37 np0005486808 NetworkManager[44885]: <info>  [1760432677.3676] device (tap09d03bcf-f7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:04:37 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:37Z|00501|binding|INFO|Releasing lport 09d03bcf-f719-4ec4-91a0-3c14e350a342 from this chassis (sb_readonly=0)
Oct 14 05:04:37 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:37Z|00502|binding|INFO|Setting lport 09d03bcf-f719-4ec4-91a0-3c14e350a342 down in Southbound
Oct 14 05:04:37 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:37Z|00503|binding|INFO|Removing iface tap09d03bcf-f7 ovn-installed in OVS
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:37.407 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:7e:a5 10.100.0.8'], port_security=['fa:16:3e:48:7e:a5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '3c8eac8e-dcfa-41ba-9c01-1185061baf5d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2d149f-aebf-406a-aed2-5161dd22b079', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8945d892653642ac9c3d894f8bc3619d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c6f10764-f6df-4d21-b829-68562680623d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.216'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e933e805-d9b6-497a-9ba3-5f7153390420, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=09d03bcf-f719-4ec4-91a0-3c14e350a342) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:04:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:37.408 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 09d03bcf-f719-4ec4-91a0-3c14e350a342 in datapath fc2d149f-aebf-406a-aed2-5161dd22b079 unbound from our chassis#033[00m
Oct 14 05:04:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:37.409 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fc2d149f-aebf-406a-aed2-5161dd22b079, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:04:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:37.410 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[014c12f8-24c4-4981-814e-aaaba87e246a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:37.411 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079 namespace which is not needed anymore#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:37 np0005486808 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000033.scope: Deactivated successfully.
Oct 14 05:04:37 np0005486808 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000033.scope: Consumed 12.886s CPU time.
Oct 14 05:04:37 np0005486808 systemd-machined[214636]: Machine qemu-64-instance-00000033 terminated.
Oct 14 05:04:37 np0005486808 kernel: tap09d03bcf-f7: entered promiscuous mode
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.516 2 DEBUG nova.network.neutron [None req-c8980aa4-5084-4acd-8050-212b7bed928b 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Updating instance_info_cache with network_info: [{"id": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "address": "fa:16:3e:fb:80:d4", "network": {"id": "e964bc94-eb23-4bb9-b6af-2d14c0f7d764", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-577710959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e520d17b20a44440b176c2297c35286a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f8c1944-ec", "ovs_interfaceid": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:04:37 np0005486808 systemd-udevd[317276]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:04:37 np0005486808 NetworkManager[44885]: <info>  [1760432677.5212] manager: (tap09d03bcf-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/220)
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:37 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:37Z|00504|binding|INFO|Claiming lport 09d03bcf-f719-4ec4-91a0-3c14e350a342 for this chassis.
Oct 14 05:04:37 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:37Z|00505|binding|INFO|09d03bcf-f719-4ec4-91a0-3c14e350a342: Claiming fa:16:3e:48:7e:a5 10.100.0.8
Oct 14 05:04:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:37.529 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:7e:a5 10.100.0.8'], port_security=['fa:16:3e:48:7e:a5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '3c8eac8e-dcfa-41ba-9c01-1185061baf5d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2d149f-aebf-406a-aed2-5161dd22b079', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8945d892653642ac9c3d894f8bc3619d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c6f10764-f6df-4d21-b829-68562680623d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.216'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e933e805-d9b6-497a-9ba3-5f7153390420, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=09d03bcf-f719-4ec4-91a0-3c14e350a342) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:04:37 np0005486808 kernel: tap09d03bcf-f7 (unregistering): left promiscuous mode
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.534 2 DEBUG oslo_concurrency.lockutils [None req-c8980aa4-5084-4acd-8050-212b7bed928b 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Releasing lock "refresh_cache-e038df86-1323-4b04-afae-9fe68c98c22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.535 2 DEBUG nova.compute.manager [None req-c8980aa4-5084-4acd-8050-212b7bed928b 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.535 2 DEBUG nova.compute.manager [None req-c8980aa4-5084-4acd-8050-212b7bed928b 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] network_info to inject: |[{"id": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "address": "fa:16:3e:fb:80:d4", "network": {"id": "e964bc94-eb23-4bb9-b6af-2d14c0f7d764", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-577710959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e520d17b20a44440b176c2297c35286a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f8c1944-ec", "ovs_interfaceid": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Oct 14 05:04:37 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:37Z|00506|binding|INFO|Setting lport 09d03bcf-f719-4ec4-91a0-3c14e350a342 ovn-installed in OVS
Oct 14 05:04:37 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:37Z|00507|binding|INFO|Setting lport 09d03bcf-f719-4ec4-91a0-3c14e350a342 up in Southbound
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:37 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:37Z|00508|binding|INFO|Releasing lport 09d03bcf-f719-4ec4-91a0-3c14e350a342 from this chassis (sb_readonly=1)
Oct 14 05:04:37 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:37Z|00509|if_status|INFO|Dropped 2 log messages in last 246 seconds (most recently, 246 seconds ago) due to excessive rate
Oct 14 05:04:37 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:37Z|00510|if_status|INFO|Not setting lport 09d03bcf-f719-4ec4-91a0-3c14e350a342 down as sb is readonly
Oct 14 05:04:37 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:37Z|00511|binding|INFO|Releasing lport 09d03bcf-f719-4ec4-91a0-3c14e350a342 from this chassis (sb_readonly=0)
Oct 14 05:04:37 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:37Z|00512|binding|INFO|Removing iface tap09d03bcf-f7 ovn-installed in OVS
Oct 14 05:04:37 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:37Z|00513|binding|INFO|Setting lport 09d03bcf-f719-4ec4-91a0-3c14e350a342 down in Southbound
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.556 2 INFO nova.virt.libvirt.driver [-] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Instance destroyed successfully.#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.557 2 DEBUG nova.objects.instance [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'resources' on Instance uuid 3c8eac8e-dcfa-41ba-9c01-1185061baf5d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:04:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:37.562 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:7e:a5 10.100.0.8'], port_security=['fa:16:3e:48:7e:a5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '3c8eac8e-dcfa-41ba-9c01-1185061baf5d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2d149f-aebf-406a-aed2-5161dd22b079', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8945d892653642ac9c3d894f8bc3619d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c6f10764-f6df-4d21-b829-68562680623d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.216'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e933e805-d9b6-497a-9ba3-5f7153390420, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=09d03bcf-f719-4ec4-91a0-3c14e350a342) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:04:37 np0005486808 neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079[316446]: [NOTICE]   (316466) : haproxy version is 2.8.14-c23fe91
Oct 14 05:04:37 np0005486808 neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079[316446]: [NOTICE]   (316466) : path to executable is /usr/sbin/haproxy
Oct 14 05:04:37 np0005486808 neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079[316446]: [WARNING]  (316466) : Exiting Master process...
Oct 14 05:04:37 np0005486808 neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079[316446]: [WARNING]  (316466) : Exiting Master process...
Oct 14 05:04:37 np0005486808 systemd[1]: libpod-167ba1bfdc1cafb6e28fcf09a8c5ee914fcbfc9ccc96a10d234f92e59e8337d6.scope: Deactivated successfully.
Oct 14 05:04:37 np0005486808 neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079[316446]: [ALERT]    (316466) : Current worker (316470) exited with code 143 (Terminated)
Oct 14 05:04:37 np0005486808 neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079[316446]: [WARNING]  (316466) : All workers exited. Exiting... (0)
Oct 14 05:04:37 np0005486808 conmon[316446]: conmon 167ba1bfdc1cafb6e28f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-167ba1bfdc1cafb6e28fcf09a8c5ee914fcbfc9ccc96a10d234f92e59e8337d6.scope/container/memory.events
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:37 np0005486808 podman[317295]: 2025-10-14 09:04:37.574000108 +0000 UTC m=+0.058428407 container died 167ba1bfdc1cafb6e28fcf09a8c5ee914fcbfc9ccc96a10d234f92e59e8337d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2)
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.583 2 DEBUG nova.virt.libvirt.vif [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:03:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-880311128',display_name='tempest-AttachInterfacesTestJSON-server-880311128',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-880311128',id=51,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHJeC/x7PID63fa+XdM1puH3fFuAGCvZuvVgTscxZNhVHcJaK6qOoyzTmbsI3sgqD2j7CCpP6E241ZXLsn//8ic8awTSD83uMaP7AvsnDddi9mfK+5p/w2g8zmdvw2fjPQ==',key_name='tempest-keypair-1232666789',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:04:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-vco8xn00',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=3c8eac8e-dcfa-41ba-9c01-1185061baf5d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "address": "fa:16:3e:48:7e:a5", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09d03bcf-f7", "ovs_interfaceid": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.583 2 DEBUG nova.network.os_vif_util [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "address": "fa:16:3e:48:7e:a5", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09d03bcf-f7", "ovs_interfaceid": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.584 2 DEBUG nova.network.os_vif_util [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:48:7e:a5,bridge_name='br-int',has_traffic_filtering=True,id=09d03bcf-f719-4ec4-91a0-3c14e350a342,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09d03bcf-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.584 2 DEBUG os_vif [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:7e:a5,bridge_name='br-int',has_traffic_filtering=True,id=09d03bcf-f719-4ec4-91a0-3c14e350a342,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09d03bcf-f7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.586 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09d03bcf-f7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.592 2 INFO os_vif [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:7e:a5,bridge_name='br-int',has_traffic_filtering=True,id=09d03bcf-f719-4ec4-91a0-3c14e350a342,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09d03bcf-f7')#033[00m
Oct 14 05:04:37 np0005486808 systemd[1]: var-lib-containers-storage-overlay-7f6ef93dddd05609592d54febe7e864f7239987c158375c17766114f7714daa3-merged.mount: Deactivated successfully.
Oct 14 05:04:37 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-167ba1bfdc1cafb6e28fcf09a8c5ee914fcbfc9ccc96a10d234f92e59e8337d6-userdata-shm.mount: Deactivated successfully.
Oct 14 05:04:37 np0005486808 podman[317295]: 2025-10-14 09:04:37.618163574 +0000 UTC m=+0.102591873 container cleanup 167ba1bfdc1cafb6e28fcf09a8c5ee914fcbfc9ccc96a10d234f92e59e8337d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 14 05:04:37 np0005486808 systemd[1]: libpod-conmon-167ba1bfdc1cafb6e28fcf09a8c5ee914fcbfc9ccc96a10d234f92e59e8337d6.scope: Deactivated successfully.
Oct 14 05:04:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:04:37 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1471: 305 pgs: 305 active+clean; 200 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 92 KiB/s rd, 110 KiB/s wr, 18 op/s
Oct 14 05:04:37 np0005486808 podman[317344]: 2025-10-14 09:04:37.706955907 +0000 UTC m=+0.054319256 container remove 167ba1bfdc1cafb6e28fcf09a8c5ee914fcbfc9ccc96a10d234f92e59e8337d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:04:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:37.712 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[79a58077-9dd4-4d1c-b4af-ca5f67a4943a]: (4, ('Tue Oct 14 09:04:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079 (167ba1bfdc1cafb6e28fcf09a8c5ee914fcbfc9ccc96a10d234f92e59e8337d6)\n167ba1bfdc1cafb6e28fcf09a8c5ee914fcbfc9ccc96a10d234f92e59e8337d6\nTue Oct 14 09:04:37 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079 (167ba1bfdc1cafb6e28fcf09a8c5ee914fcbfc9ccc96a10d234f92e59e8337d6)\n167ba1bfdc1cafb6e28fcf09a8c5ee914fcbfc9ccc96a10d234f92e59e8337d6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:37.714 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[32e1385d-e783-4b1f-b0a9-82a86f8137aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:37.715 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc2d149f-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:37 np0005486808 kernel: tapfc2d149f-a0: left promiscuous mode
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:37.725 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0f75c01f-6c65-45ee-9bcb-10133315eb89]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:37.752 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d1fdb4d3-36eb-4aec-b27b-2cef8863dcb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:37.753 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dfffda8e-d2d4-47ae-a543-f2eeb4bdec9d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:37.771 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0386ad51-dac0-44f3-8ef9-3ddec6a0aa61]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642540, 'reachable_time': 44268, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317362, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:37.775 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:04:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:37.775 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[16ed3af3-e068-40cc-860c-3190dc746d57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:37 np0005486808 systemd[1]: run-netns-ovnmeta\x2dfc2d149f\x2daebf\x2d406a\x2daed2\x2d5161dd22b079.mount: Deactivated successfully.
Oct 14 05:04:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:37.776 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 09d03bcf-f719-4ec4-91a0-3c14e350a342 in datapath fc2d149f-aebf-406a-aed2-5161dd22b079 unbound from our chassis#033[00m
Oct 14 05:04:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:37.779 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fc2d149f-aebf-406a-aed2-5161dd22b079, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:04:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:37.780 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d37b3e1b-c837-4420-ae9f-a53d4bf5ad7c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:37.781 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 09d03bcf-f719-4ec4-91a0-3c14e350a342 in datapath fc2d149f-aebf-406a-aed2-5161dd22b079 unbound from our chassis#033[00m
Oct 14 05:04:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:37.783 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fc2d149f-aebf-406a-aed2-5161dd22b079, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:04:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:37.784 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4dc72115-d228-4ef6-8de9-7ad6cbddc464]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.856 2 DEBUG nova.compute.manager [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received event network-vif-unplugged-194dc9cd-03af-4e2c-b8d6-107081204a25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.856 2 DEBUG oslo_concurrency.lockutils [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.857 2 DEBUG oslo_concurrency.lockutils [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.857 2 DEBUG oslo_concurrency.lockutils [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.857 2 DEBUG nova.compute.manager [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] No waiting events found dispatching network-vif-unplugged-194dc9cd-03af-4e2c-b8d6-107081204a25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.857 2 DEBUG nova.compute.manager [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received event network-vif-unplugged-194dc9cd-03af-4e2c-b8d6-107081204a25 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.857 2 DEBUG nova.compute.manager [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received event network-vif-plugged-194dc9cd-03af-4e2c-b8d6-107081204a25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.858 2 DEBUG oslo_concurrency.lockutils [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.858 2 DEBUG oslo_concurrency.lockutils [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.858 2 DEBUG oslo_concurrency.lockutils [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.858 2 DEBUG nova.compute.manager [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] No waiting events found dispatching network-vif-plugged-194dc9cd-03af-4e2c-b8d6-107081204a25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.858 2 WARNING nova.compute.manager [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received unexpected event network-vif-plugged-194dc9cd-03af-4e2c-b8d6-107081204a25 for instance with vm_state active and task_state deleting.#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.858 2 DEBUG nova.compute.manager [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Received event network-changed-4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.859 2 DEBUG nova.compute.manager [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Refreshing instance network info cache due to event network-changed-4f8c1944-ec5d-4de3-82f1-19760c6b4dd4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.859 2 DEBUG oslo_concurrency.lockutils [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e038df86-1323-4b04-afae-9fe68c98c22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.859 2 DEBUG oslo_concurrency.lockutils [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e038df86-1323-4b04-afae-9fe68c98c22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.859 2 DEBUG nova.network.neutron [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Refreshing network info cache for port 4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.984 2 INFO nova.virt.libvirt.driver [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Deleting instance files /var/lib/nova/instances/3c8eac8e-dcfa-41ba-9c01-1185061baf5d_del#033[00m
Oct 14 05:04:37 np0005486808 nova_compute[259627]: 2025-10-14 09:04:37.985 2 INFO nova.virt.libvirt.driver [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Deletion of /var/lib/nova/instances/3c8eac8e-dcfa-41ba-9c01-1185061baf5d_del complete#033[00m
Oct 14 05:04:38 np0005486808 nova_compute[259627]: 2025-10-14 09:04:38.147 2 INFO nova.compute.manager [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Took 0.85 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:04:38 np0005486808 nova_compute[259627]: 2025-10-14 09:04:38.148 2 DEBUG oslo.service.loopingcall [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:04:38 np0005486808 nova_compute[259627]: 2025-10-14 09:04:38.155 2 DEBUG nova.compute.manager [-] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:04:38 np0005486808 nova_compute[259627]: 2025-10-14 09:04:38.155 2 DEBUG nova.network.neutron [-] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:04:39 np0005486808 nova_compute[259627]: 2025-10-14 09:04:39.675 2 INFO nova.network.neutron [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Port 194dc9cd-03af-4e2c-b8d6-107081204a25 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Oct 14 05:04:39 np0005486808 nova_compute[259627]: 2025-10-14 09:04:39.676 2 DEBUG nova.network.neutron [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Updating instance_info_cache with network_info: [{"id": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "address": "fa:16:3e:48:7e:a5", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09d03bcf-f7", "ovs_interfaceid": "09d03bcf-f719-4ec4-91a0-3c14e350a342", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:04:39 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1472: 305 pgs: 305 active+clean; 200 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 92 KiB/s rd, 110 KiB/s wr, 18 op/s
Oct 14 05:04:39 np0005486808 nova_compute[259627]: 2025-10-14 09:04:39.706 2 DEBUG oslo_concurrency.lockutils [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Releasing lock "refresh_cache-3c8eac8e-dcfa-41ba-9c01-1185061baf5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:04:39 np0005486808 nova_compute[259627]: 2025-10-14 09:04:39.754 2 DEBUG oslo_concurrency.lockutils [None req-41b722ba-8b5c-4cc3-8449-654c8d75664f 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "interface-3c8eac8e-dcfa-41ba-9c01-1185061baf5d-194dc9cd-03af-4e2c-b8d6-107081204a25" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:39 np0005486808 nova_compute[259627]: 2025-10-14 09:04:39.908 2 DEBUG nova.network.neutron [-] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:04:39 np0005486808 nova_compute[259627]: 2025-10-14 09:04:39.923 2 DEBUG nova.objects.instance [None req-77405f18-bf68-4de8-bec8-f1699284fe07 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Lazy-loading 'flavor' on Instance uuid e038df86-1323-4b04-afae-9fe68c98c22c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:04:39 np0005486808 nova_compute[259627]: 2025-10-14 09:04:39.927 2 INFO nova.compute.manager [-] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Took 1.77 seconds to deallocate network for instance.#033[00m
Oct 14 05:04:39 np0005486808 nova_compute[259627]: 2025-10-14 09:04:39.956 2 DEBUG oslo_concurrency.lockutils [None req-77405f18-bf68-4de8-bec8-f1699284fe07 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Acquiring lock "refresh_cache-e038df86-1323-4b04-afae-9fe68c98c22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:04:39 np0005486808 nova_compute[259627]: 2025-10-14 09:04:39.985 2 DEBUG oslo_concurrency.lockutils [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:39 np0005486808 nova_compute[259627]: 2025-10-14 09:04:39.985 2 DEBUG oslo_concurrency.lockutils [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.040 2 DEBUG nova.compute.manager [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received event network-vif-plugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.041 2 DEBUG oslo_concurrency.lockutils [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.041 2 DEBUG oslo_concurrency.lockutils [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.041 2 DEBUG oslo_concurrency.lockutils [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.041 2 DEBUG nova.compute.manager [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] No waiting events found dispatching network-vif-plugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.041 2 WARNING nova.compute.manager [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received unexpected event network-vif-plugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.041 2 DEBUG nova.compute.manager [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received event network-vif-plugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.042 2 DEBUG oslo_concurrency.lockutils [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.042 2 DEBUG oslo_concurrency.lockutils [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.042 2 DEBUG oslo_concurrency.lockutils [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.042 2 DEBUG nova.compute.manager [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] No waiting events found dispatching network-vif-plugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.042 2 WARNING nova.compute.manager [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received unexpected event network-vif-plugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.042 2 DEBUG nova.compute.manager [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received event network-vif-unplugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.043 2 DEBUG oslo_concurrency.lockutils [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.043 2 DEBUG oslo_concurrency.lockutils [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.043 2 DEBUG oslo_concurrency.lockutils [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.043 2 DEBUG nova.compute.manager [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] No waiting events found dispatching network-vif-unplugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.043 2 WARNING nova.compute.manager [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received unexpected event network-vif-unplugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.043 2 DEBUG nova.compute.manager [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received event network-vif-plugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.044 2 DEBUG oslo_concurrency.lockutils [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.044 2 DEBUG oslo_concurrency.lockutils [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.044 2 DEBUG oslo_concurrency.lockutils [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.044 2 DEBUG nova.compute.manager [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] No waiting events found dispatching network-vif-plugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.044 2 WARNING nova.compute.manager [req-e0a1f05f-50f1-4de9-ac01-8d43a20f736a req-6399ccb0-f598-4429-8fb1-8e63a645be74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received unexpected event network-vif-plugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.072 2 DEBUG oslo_concurrency.processutils [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.176 2 DEBUG nova.network.neutron [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Updated VIF entry in instance network info cache for port 4f8c1944-ec5d-4de3-82f1-19760c6b4dd4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.177 2 DEBUG nova.network.neutron [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Updating instance_info_cache with network_info: [{"id": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "address": "fa:16:3e:fb:80:d4", "network": {"id": "e964bc94-eb23-4bb9-b6af-2d14c0f7d764", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-577710959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e520d17b20a44440b176c2297c35286a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f8c1944-ec", "ovs_interfaceid": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.194 2 DEBUG oslo_concurrency.lockutils [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-e038df86-1323-4b04-afae-9fe68c98c22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.195 2 DEBUG nova.compute.manager [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received event network-vif-unplugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.195 2 DEBUG oslo_concurrency.lockutils [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.195 2 DEBUG oslo_concurrency.lockutils [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.196 2 DEBUG oslo_concurrency.lockutils [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.196 2 DEBUG nova.compute.manager [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] No waiting events found dispatching network-vif-unplugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.197 2 DEBUG nova.compute.manager [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received event network-vif-unplugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.198 2 DEBUG nova.compute.manager [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received event network-vif-plugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.198 2 DEBUG oslo_concurrency.lockutils [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.198 2 DEBUG oslo_concurrency.lockutils [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.198 2 DEBUG oslo_concurrency.lockutils [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.203 2 DEBUG nova.compute.manager [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] No waiting events found dispatching network-vif-plugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.204 2 WARNING nova.compute.manager [req-9507d0d4-59dd-4e7d-ad2a-e9b6023bca15 req-99d79600-a272-4ccd-86a8-d62d27cf7f27 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received unexpected event network-vif-plugged-09d03bcf-f719-4ec4-91a0-3c14e350a342 for instance with vm_state active and task_state deleting.#033[00m
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.204 2 DEBUG oslo_concurrency.lockutils [None req-77405f18-bf68-4de8-bec8-f1699284fe07 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Acquired lock "refresh_cache-e038df86-1323-4b04-afae-9fe68c98c22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:04:40 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:04:40 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1377394577' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.541 2 DEBUG oslo_concurrency.processutils [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.546 2 DEBUG nova.compute.provider_tree [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.571 2 DEBUG nova.scheduler.client.report [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.603 2 DEBUG oslo_concurrency.lockutils [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:40 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:04:40 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:04:40 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:04:40 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.632 2 INFO nova.scheduler.client.report [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Deleted allocations for instance 3c8eac8e-dcfa-41ba-9c01-1185061baf5d#033[00m
Oct 14 05:04:40 np0005486808 nova_compute[259627]: 2025-10-14 09:04:40.699 2 DEBUG oslo_concurrency.lockutils [None req-69761c97-1066-4b72-a8c3-17431dfdc6cd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "3c8eac8e-dcfa-41ba-9c01-1185061baf5d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.403s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:41 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:04:41 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:04:41 np0005486808 nova_compute[259627]: 2025-10-14 09:04:41.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 14 05:04:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 14 05:04:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:04:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:04:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:04:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:04:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:04:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:04:41 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 64edb15e-1350-41f6-a412-c158d20679fe does not exist
Oct 14 05:04:41 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev fb5d74a8-ad22-4e2f-a4cc-28a9a4ad79c8 does not exist
Oct 14 05:04:41 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev ac65fe4d-22be-4cf1-9b7a-e588513f9fce does not exist
Oct 14 05:04:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:04:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:04:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:04:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:04:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:04:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:04:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:41.512 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:04:41 np0005486808 nova_compute[259627]: 2025-10-14 09:04:41.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:41.514 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 14 05:04:41 np0005486808 nova_compute[259627]: 2025-10-14 09:04:41.577 2 DEBUG nova.network.neutron [None req-77405f18-bf68-4de8-bec8-f1699284fe07 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:04:41 np0005486808 nova_compute[259627]: 2025-10-14 09:04:41.659 2 DEBUG nova.compute.manager [req-85611c4a-335a-4979-9638-696387ab8bf6 req-4d53921d-2ce3-43ee-acf9-01efbd62cb3d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Received event network-changed-4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:04:41 np0005486808 nova_compute[259627]: 2025-10-14 09:04:41.659 2 DEBUG nova.compute.manager [req-85611c4a-335a-4979-9638-696387ab8bf6 req-4d53921d-2ce3-43ee-acf9-01efbd62cb3d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Refreshing instance network info cache due to event network-changed-4f8c1944-ec5d-4de3-82f1-19760c6b4dd4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:04:41 np0005486808 nova_compute[259627]: 2025-10-14 09:04:41.660 2 DEBUG oslo_concurrency.lockutils [req-85611c4a-335a-4979-9638-696387ab8bf6 req-4d53921d-2ce3-43ee-acf9-01efbd62cb3d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e038df86-1323-4b04-afae-9fe68c98c22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:04:41 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1473: 305 pgs: 305 active+clean; 121 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 146 KiB/s rd, 114 KiB/s wr, 105 op/s
Oct 14 05:04:42 np0005486808 podman[317776]: 2025-10-14 09:04:42.055211515 +0000 UTC m=+0.041461460 container create 59d5d97d93eed29c00ead58c46895eb506aa6fcf3b35470d3661299c348ab68f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_edison, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:04:42 np0005486808 systemd[1]: Started libpod-conmon-59d5d97d93eed29c00ead58c46895eb506aa6fcf3b35470d3661299c348ab68f.scope.
Oct 14 05:04:42 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:04:42 np0005486808 podman[317776]: 2025-10-14 09:04:42.03670135 +0000 UTC m=+0.022951335 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:04:42 np0005486808 podman[317776]: 2025-10-14 09:04:42.146182732 +0000 UTC m=+0.132432717 container init 59d5d97d93eed29c00ead58c46895eb506aa6fcf3b35470d3661299c348ab68f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_edison, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:04:42 np0005486808 podman[317776]: 2025-10-14 09:04:42.153803799 +0000 UTC m=+0.140053764 container start 59d5d97d93eed29c00ead58c46895eb506aa6fcf3b35470d3661299c348ab68f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_edison, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:04:42 np0005486808 nova_compute[259627]: 2025-10-14 09:04:42.152 2 DEBUG nova.compute.manager [req-f72d75b5-46d2-414a-b667-223f5cf49cce req-0d7b2539-c0ec-42da-98a8-e4e838fde150 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Received event network-vif-deleted-09d03bcf-f719-4ec4-91a0-3c14e350a342 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:04:42 np0005486808 podman[317776]: 2025-10-14 09:04:42.157629693 +0000 UTC m=+0.143879638 container attach 59d5d97d93eed29c00ead58c46895eb506aa6fcf3b35470d3661299c348ab68f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Oct 14 05:04:42 np0005486808 cool_edison[317793]: 167 167
Oct 14 05:04:42 np0005486808 podman[317776]: 2025-10-14 09:04:42.158659228 +0000 UTC m=+0.144909173 container died 59d5d97d93eed29c00ead58c46895eb506aa6fcf3b35470d3661299c348ab68f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 14 05:04:42 np0005486808 systemd[1]: libpod-59d5d97d93eed29c00ead58c46895eb506aa6fcf3b35470d3661299c348ab68f.scope: Deactivated successfully.
Oct 14 05:04:42 np0005486808 systemd[1]: var-lib-containers-storage-overlay-151d15ba7a3999ab1fddf68fab7262dce235815330dfc81cff9456f3f8eac6d4-merged.mount: Deactivated successfully.
Oct 14 05:04:42 np0005486808 podman[317776]: 2025-10-14 09:04:42.211650771 +0000 UTC m=+0.197900716 container remove 59d5d97d93eed29c00ead58c46895eb506aa6fcf3b35470d3661299c348ab68f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_edison, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:04:42 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 14 05:04:42 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:04:42 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:04:42 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:04:42 np0005486808 systemd[1]: libpod-conmon-59d5d97d93eed29c00ead58c46895eb506aa6fcf3b35470d3661299c348ab68f.scope: Deactivated successfully.
Oct 14 05:04:42 np0005486808 nova_compute[259627]: 2025-10-14 09:04:42.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:42 np0005486808 podman[317817]: 2025-10-14 09:04:42.373344987 +0000 UTC m=+0.039374069 container create eaaa6a24ab2126dd0802caacb28a1d7c47c05c53cd8a059ed5b7efb9292b4992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_solomon, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 14 05:04:42 np0005486808 systemd[1]: Started libpod-conmon-eaaa6a24ab2126dd0802caacb28a1d7c47c05c53cd8a059ed5b7efb9292b4992.scope.
Oct 14 05:04:42 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:04:42 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41b48ee913fb1e4ef71280ddc26fb7e26a7d00e83e57eb2404442d52130fe60c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:04:42 np0005486808 podman[317817]: 2025-10-14 09:04:42.356709798 +0000 UTC m=+0.022738900 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:04:42 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41b48ee913fb1e4ef71280ddc26fb7e26a7d00e83e57eb2404442d52130fe60c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:04:42 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41b48ee913fb1e4ef71280ddc26fb7e26a7d00e83e57eb2404442d52130fe60c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:04:42 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41b48ee913fb1e4ef71280ddc26fb7e26a7d00e83e57eb2404442d52130fe60c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:04:42 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41b48ee913fb1e4ef71280ddc26fb7e26a7d00e83e57eb2404442d52130fe60c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:04:42 np0005486808 podman[317817]: 2025-10-14 09:04:42.474091554 +0000 UTC m=+0.140120666 container init eaaa6a24ab2126dd0802caacb28a1d7c47c05c53cd8a059ed5b7efb9292b4992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:04:42 np0005486808 podman[317817]: 2025-10-14 09:04:42.480949663 +0000 UTC m=+0.146978755 container start eaaa6a24ab2126dd0802caacb28a1d7c47c05c53cd8a059ed5b7efb9292b4992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_solomon, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 14 05:04:42 np0005486808 podman[317817]: 2025-10-14 09:04:42.484863069 +0000 UTC m=+0.150892181 container attach eaaa6a24ab2126dd0802caacb28a1d7c47c05c53cd8a059ed5b7efb9292b4992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:04:42 np0005486808 nova_compute[259627]: 2025-10-14 09:04:42.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:04:42 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Oct 14 05:04:42 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:42.653832) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 05:04:42 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Oct 14 05:04:42 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432682653866, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 531, "num_deletes": 251, "total_data_size": 508241, "memory_usage": 518768, "flush_reason": "Manual Compaction"}
Oct 14 05:04:42 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Oct 14 05:04:42 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432682658212, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 384812, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30540, "largest_seqno": 31070, "table_properties": {"data_size": 382026, "index_size": 758, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7617, "raw_average_key_size": 20, "raw_value_size": 376193, "raw_average_value_size": 1033, "num_data_blocks": 33, "num_entries": 364, "num_filter_entries": 364, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760432652, "oldest_key_time": 1760432652, "file_creation_time": 1760432682, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:04:42 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 4403 microseconds, and 1559 cpu microseconds.
Oct 14 05:04:42 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:04:42 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:42.658239) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 384812 bytes OK
Oct 14 05:04:42 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:42.658253) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Oct 14 05:04:42 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:42.659746) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Oct 14 05:04:42 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:42.659755) EVENT_LOG_v1 {"time_micros": 1760432682659752, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 05:04:42 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:42.659767) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 05:04:42 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 505197, prev total WAL file size 505197, number of live WAL files 2.
Oct 14 05:04:42 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:04:42 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:42.660197) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303030' seq:72057594037927935, type:22 .. '6D6772737461740031323531' seq:0, type:0; will stop at (end)
Oct 14 05:04:42 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 05:04:42 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(375KB)], [65(9990KB)]
Oct 14 05:04:42 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432682660228, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 10615443, "oldest_snapshot_seqno": -1}
Oct 14 05:04:42 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 5398 keys, 7432390 bytes, temperature: kUnknown
Oct 14 05:04:42 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432682700579, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 7432390, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7396384, "index_size": 21432, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13509, "raw_key_size": 136314, "raw_average_key_size": 25, "raw_value_size": 7299152, "raw_average_value_size": 1352, "num_data_blocks": 873, "num_entries": 5398, "num_filter_entries": 5398, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760432682, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:04:42 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:04:42 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:42.700790) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 7432390 bytes
Oct 14 05:04:42 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:42.702407) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 262.5 rd, 183.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.8 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(46.9) write-amplify(19.3) OK, records in: 5907, records dropped: 509 output_compression: NoCompression
Oct 14 05:04:42 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:42.702421) EVENT_LOG_v1 {"time_micros": 1760432682702414, "job": 36, "event": "compaction_finished", "compaction_time_micros": 40440, "compaction_time_cpu_micros": 18068, "output_level": 6, "num_output_files": 1, "total_output_size": 7432390, "num_input_records": 5907, "num_output_records": 5398, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 05:04:42 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:04:42 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432682702569, "job": 36, "event": "table_file_deletion", "file_number": 67}
Oct 14 05:04:42 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:04:42 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432682703961, "job": 36, "event": "table_file_deletion", "file_number": 65}
Oct 14 05:04:42 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:42.660122) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:04:42 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:42.704050) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:04:42 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:42.704055) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:04:42 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:42.704058) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:04:42 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:42.704060) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:04:42 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:04:42.704061) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:04:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:04:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:04:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:04:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:04:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000759845367747607 of space, bias 1.0, pg target 0.2279536103242821 quantized to 32 (current 32)
Oct 14 05:04:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:04:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:04:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:04:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:04:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:04:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 14 05:04:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:04:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:04:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:04:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:04:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:04:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:04:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:04:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:04:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:04:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:04:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:04:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:04:43 np0005486808 nova_compute[259627]: 2025-10-14 09:04:43.419 2 DEBUG nova.network.neutron [None req-77405f18-bf68-4de8-bec8-f1699284fe07 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Updating instance_info_cache with network_info: [{"id": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "address": "fa:16:3e:fb:80:d4", "network": {"id": "e964bc94-eb23-4bb9-b6af-2d14c0f7d764", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-577710959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e520d17b20a44440b176c2297c35286a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f8c1944-ec", "ovs_interfaceid": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:04:43 np0005486808 nova_compute[259627]: 2025-10-14 09:04:43.442 2 DEBUG oslo_concurrency.lockutils [None req-77405f18-bf68-4de8-bec8-f1699284fe07 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Releasing lock "refresh_cache-e038df86-1323-4b04-afae-9fe68c98c22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:04:43 np0005486808 nova_compute[259627]: 2025-10-14 09:04:43.443 2 DEBUG nova.compute.manager [None req-77405f18-bf68-4de8-bec8-f1699284fe07 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Oct 14 05:04:43 np0005486808 nova_compute[259627]: 2025-10-14 09:04:43.443 2 DEBUG nova.compute.manager [None req-77405f18-bf68-4de8-bec8-f1699284fe07 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] network_info to inject: |[{"id": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "address": "fa:16:3e:fb:80:d4", "network": {"id": "e964bc94-eb23-4bb9-b6af-2d14c0f7d764", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-577710959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e520d17b20a44440b176c2297c35286a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f8c1944-ec", "ovs_interfaceid": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Oct 14 05:04:43 np0005486808 nova_compute[259627]: 2025-10-14 09:04:43.446 2 DEBUG oslo_concurrency.lockutils [req-85611c4a-335a-4979-9638-696387ab8bf6 req-4d53921d-2ce3-43ee-acf9-01efbd62cb3d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e038df86-1323-4b04-afae-9fe68c98c22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:04:43 np0005486808 nova_compute[259627]: 2025-10-14 09:04:43.446 2 DEBUG nova.network.neutron [req-85611c4a-335a-4979-9638-696387ab8bf6 req-4d53921d-2ce3-43ee-acf9-01efbd62cb3d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Refreshing network info cache for port 4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:04:43 np0005486808 practical_solomon[317834]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:04:43 np0005486808 practical_solomon[317834]: --> relative data size: 1.0
Oct 14 05:04:43 np0005486808 practical_solomon[317834]: --> All data devices are unavailable
Oct 14 05:04:43 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1474: 305 pgs: 305 active+clean; 121 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 16 KiB/s wr, 87 op/s
Oct 14 05:04:43 np0005486808 systemd[1]: libpod-eaaa6a24ab2126dd0802caacb28a1d7c47c05c53cd8a059ed5b7efb9292b4992.scope: Deactivated successfully.
Oct 14 05:04:43 np0005486808 systemd[1]: libpod-eaaa6a24ab2126dd0802caacb28a1d7c47c05c53cd8a059ed5b7efb9292b4992.scope: Consumed 1.161s CPU time.
Oct 14 05:04:43 np0005486808 conmon[317834]: conmon eaaa6a24ab2126dd0802 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eaaa6a24ab2126dd0802caacb28a1d7c47c05c53cd8a059ed5b7efb9292b4992.scope/container/memory.events
Oct 14 05:04:43 np0005486808 podman[317817]: 2025-10-14 09:04:43.717572276 +0000 UTC m=+1.383601358 container died eaaa6a24ab2126dd0802caacb28a1d7c47c05c53cd8a059ed5b7efb9292b4992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_solomon, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:04:43 np0005486808 systemd[1]: var-lib-containers-storage-overlay-41b48ee913fb1e4ef71280ddc26fb7e26a7d00e83e57eb2404442d52130fe60c-merged.mount: Deactivated successfully.
Oct 14 05:04:43 np0005486808 podman[317817]: 2025-10-14 09:04:43.790596482 +0000 UTC m=+1.456625564 container remove eaaa6a24ab2126dd0802caacb28a1d7c47c05c53cd8a059ed5b7efb9292b4992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:04:43 np0005486808 systemd[1]: libpod-conmon-eaaa6a24ab2126dd0802caacb28a1d7c47c05c53cd8a059ed5b7efb9292b4992.scope: Deactivated successfully.
Oct 14 05:04:44 np0005486808 podman[318016]: 2025-10-14 09:04:44.489784562 +0000 UTC m=+0.040609839 container create 3b4e8a89e6f584c57b7cf6b429f67c8f21d865803573e97f67efa31c808adc2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_austin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:04:44 np0005486808 systemd[1]: Started libpod-conmon-3b4e8a89e6f584c57b7cf6b429f67c8f21d865803573e97f67efa31c808adc2e.scope.
Oct 14 05:04:44 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:04:44 np0005486808 podman[318016]: 2025-10-14 09:04:44.470790095 +0000 UTC m=+0.021615402 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:04:44 np0005486808 podman[318016]: 2025-10-14 09:04:44.574363502 +0000 UTC m=+0.125188789 container init 3b4e8a89e6f584c57b7cf6b429f67c8f21d865803573e97f67efa31c808adc2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_austin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:04:44 np0005486808 podman[318016]: 2025-10-14 09:04:44.582514372 +0000 UTC m=+0.133339649 container start 3b4e8a89e6f584c57b7cf6b429f67c8f21d865803573e97f67efa31c808adc2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_austin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 14 05:04:44 np0005486808 infallible_austin[318033]: 167 167
Oct 14 05:04:44 np0005486808 systemd[1]: libpod-3b4e8a89e6f584c57b7cf6b429f67c8f21d865803573e97f67efa31c808adc2e.scope: Deactivated successfully.
Oct 14 05:04:44 np0005486808 podman[318016]: 2025-10-14 09:04:44.624388532 +0000 UTC m=+0.175213909 container attach 3b4e8a89e6f584c57b7cf6b429f67c8f21d865803573e97f67efa31c808adc2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:04:44 np0005486808 podman[318016]: 2025-10-14 09:04:44.624955426 +0000 UTC m=+0.175780713 container died 3b4e8a89e6f584c57b7cf6b429f67c8f21d865803573e97f67efa31c808adc2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 14 05:04:44 np0005486808 systemd[1]: var-lib-containers-storage-overlay-a9444ddadd17af2c923978a01c31124bf4633214eab3e50b74c29cc84a1b4479-merged.mount: Deactivated successfully.
Oct 14 05:04:44 np0005486808 podman[318016]: 2025-10-14 09:04:44.685121915 +0000 UTC m=+0.235947202 container remove 3b4e8a89e6f584c57b7cf6b429f67c8f21d865803573e97f67efa31c808adc2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_austin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 14 05:04:44 np0005486808 systemd[1]: libpod-conmon-3b4e8a89e6f584c57b7cf6b429f67c8f21d865803573e97f67efa31c808adc2e.scope: Deactivated successfully.
Oct 14 05:04:44 np0005486808 nova_compute[259627]: 2025-10-14 09:04:44.729 2 DEBUG oslo_concurrency.lockutils [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Acquiring lock "e038df86-1323-4b04-afae-9fe68c98c22c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:44 np0005486808 nova_compute[259627]: 2025-10-14 09:04:44.732 2 DEBUG oslo_concurrency.lockutils [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Lock "e038df86-1323-4b04-afae-9fe68c98c22c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:44 np0005486808 nova_compute[259627]: 2025-10-14 09:04:44.732 2 DEBUG oslo_concurrency.lockutils [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Acquiring lock "e038df86-1323-4b04-afae-9fe68c98c22c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:44 np0005486808 nova_compute[259627]: 2025-10-14 09:04:44.733 2 DEBUG oslo_concurrency.lockutils [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Lock "e038df86-1323-4b04-afae-9fe68c98c22c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:44 np0005486808 nova_compute[259627]: 2025-10-14 09:04:44.733 2 DEBUG oslo_concurrency.lockutils [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Lock "e038df86-1323-4b04-afae-9fe68c98c22c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:44 np0005486808 nova_compute[259627]: 2025-10-14 09:04:44.734 2 INFO nova.compute.manager [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Terminating instance#033[00m
Oct 14 05:04:44 np0005486808 nova_compute[259627]: 2025-10-14 09:04:44.735 2 DEBUG nova.compute.manager [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:04:44 np0005486808 kernel: tap4f8c1944-ec (unregistering): left promiscuous mode
Oct 14 05:04:44 np0005486808 NetworkManager[44885]: <info>  [1760432684.7956] device (tap4f8c1944-ec): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:04:44 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:44Z|00514|binding|INFO|Releasing lport 4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 from this chassis (sb_readonly=0)
Oct 14 05:04:44 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:44Z|00515|binding|INFO|Setting lport 4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 down in Southbound
Oct 14 05:04:44 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:44Z|00516|binding|INFO|Removing iface tap4f8c1944-ec ovn-installed in OVS
Oct 14 05:04:44 np0005486808 nova_compute[259627]: 2025-10-14 09:04:44.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:44.812 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:80:d4 10.100.0.7'], port_security=['fa:16:3e:fb:80:d4 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e038df86-1323-4b04-afae-9fe68c98c22c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e964bc94-eb23-4bb9-b6af-2d14c0f7d764', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e520d17b20a44440b176c2297c35286a', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'dfbcaeb0-59cf-4ea6-aad2-32a400918089', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.190'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8c03d4d8-729d-49db-b443-08ab27defcda, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=4f8c1944-ec5d-4de3-82f1-19760c6b4dd4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:04:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:44.814 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 in datapath e964bc94-eb23-4bb9-b6af-2d14c0f7d764 unbound from our chassis#033[00m
Oct 14 05:04:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:44.816 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e964bc94-eb23-4bb9-b6af-2d14c0f7d764, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:04:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:44.818 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[188e5e8a-9c0d-4096-b04c-db058a149b63]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:44.818 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764 namespace which is not needed anymore#033[00m
Oct 14 05:04:44 np0005486808 nova_compute[259627]: 2025-10-14 09:04:44.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:44 np0005486808 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000034.scope: Deactivated successfully.
Oct 14 05:04:44 np0005486808 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000034.scope: Consumed 13.534s CPU time.
Oct 14 05:04:44 np0005486808 systemd-machined[214636]: Machine qemu-65-instance-00000034 terminated.
Oct 14 05:04:44 np0005486808 podman[318068]: 2025-10-14 09:04:44.932475037 +0000 UTC m=+0.059058773 container create 342c51a3b1ea399f0460e22e233166681a8fc50fab6cdd0ff9e74474638ce7d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_roentgen, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:04:44 np0005486808 nova_compute[259627]: 2025-10-14 09:04:44.972 2 INFO nova.virt.libvirt.driver [-] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Instance destroyed successfully.#033[00m
Oct 14 05:04:44 np0005486808 nova_compute[259627]: 2025-10-14 09:04:44.973 2 DEBUG nova.objects.instance [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Lazy-loading 'resources' on Instance uuid e038df86-1323-4b04-afae-9fe68c98c22c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:04:44 np0005486808 systemd[1]: Started libpod-conmon-342c51a3b1ea399f0460e22e233166681a8fc50fab6cdd0ff9e74474638ce7d9.scope.
Oct 14 05:04:44 np0005486808 neutron-haproxy-ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764[316831]: [NOTICE]   (316835) : haproxy version is 2.8.14-c23fe91
Oct 14 05:04:44 np0005486808 neutron-haproxy-ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764[316831]: [NOTICE]   (316835) : path to executable is /usr/sbin/haproxy
Oct 14 05:04:44 np0005486808 neutron-haproxy-ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764[316831]: [WARNING]  (316835) : Exiting Master process...
Oct 14 05:04:44 np0005486808 neutron-haproxy-ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764[316831]: [WARNING]  (316835) : Exiting Master process...
Oct 14 05:04:44 np0005486808 neutron-haproxy-ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764[316831]: [ALERT]    (316835) : Current worker (316837) exited with code 143 (Terminated)
Oct 14 05:04:44 np0005486808 neutron-haproxy-ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764[316831]: [WARNING]  (316835) : All workers exited. Exiting... (0)
Oct 14 05:04:44 np0005486808 systemd[1]: libpod-a036452adf9bb3dedf800ea7a14af4cf510fe781db3de8f1af0fc0e5f1281436.scope: Deactivated successfully.
Oct 14 05:04:44 np0005486808 nova_compute[259627]: 2025-10-14 09:04:44.997 2 DEBUG nova.virt.libvirt.vif [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:04:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1327010057',display_name='tempest-AttachInterfacesUnderV243Test-server-1327010057',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1327010057',id=52,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEoL8Eizmz78I7kJk+2faYxVDwYlZ7Qa0JVnSyW4HvPt6t6qpenjELhDNQJBBgBLKQxH+hNzILHY6YG4gLNrrM0gadWtg4ztrg1o/Wi2tCk6CtSq2N27wHKOX5s993gLcg==',key_name='tempest-keypair-1836165188',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:04:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e520d17b20a44440b176c2297c35286a',ramdisk_id='',reservation_id='r-oukj60f0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-1413244718',owner_user_name='tempest-AttachInterfacesUnderV243Test-1413244718-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:43Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4907b291c4c64d2eb768d0036817a00b',uuid=e038df86-1323-4b04-afae-9fe68c98c22c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "address": "fa:16:3e:fb:80:d4", "network": {"id": "e964bc94-eb23-4bb9-b6af-2d14c0f7d764", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-577710959-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e520d17b20a44440b176c2297c35286a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f8c1944-ec", "ovs_interfaceid": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:04:45 np0005486808 nova_compute[259627]: 2025-10-14 09:04:44.999 2 DEBUG nova.network.os_vif_util [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Converting VIF {"id": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "address": "fa:16:3e:fb:80:d4", "network": {"id": "e964bc94-eb23-4bb9-b6af-2d14c0f7d764", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-577710959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e520d17b20a44440b176c2297c35286a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f8c1944-ec", "ovs_interfaceid": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:04:45 np0005486808 nova_compute[259627]: 2025-10-14 09:04:45.000 2 DEBUG nova.network.os_vif_util [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fb:80:d4,bridge_name='br-int',has_traffic_filtering=True,id=4f8c1944-ec5d-4de3-82f1-19760c6b4dd4,network=Network(e964bc94-eb23-4bb9-b6af-2d14c0f7d764),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f8c1944-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:04:45 np0005486808 podman[318068]: 2025-10-14 09:04:44.905648557 +0000 UTC m=+0.032232293 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:04:45 np0005486808 nova_compute[259627]: 2025-10-14 09:04:45.000 2 DEBUG os_vif [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fb:80:d4,bridge_name='br-int',has_traffic_filtering=True,id=4f8c1944-ec5d-4de3-82f1-19760c6b4dd4,network=Network(e964bc94-eb23-4bb9-b6af-2d14c0f7d764),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f8c1944-ec') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:04:45 np0005486808 nova_compute[259627]: 2025-10-14 09:04:45.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:45 np0005486808 nova_compute[259627]: 2025-10-14 09:04:45.002 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f8c1944-ec, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:45 np0005486808 podman[318096]: 2025-10-14 09:04:45.003912903 +0000 UTC m=+0.064690521 container died a036452adf9bb3dedf800ea7a14af4cf510fe781db3de8f1af0fc0e5f1281436 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 14 05:04:45 np0005486808 nova_compute[259627]: 2025-10-14 09:04:45.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:45 np0005486808 nova_compute[259627]: 2025-10-14 09:04:45.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:04:45 np0005486808 nova_compute[259627]: 2025-10-14 09:04:45.009 2 INFO os_vif [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fb:80:d4,bridge_name='br-int',has_traffic_filtering=True,id=4f8c1944-ec5d-4de3-82f1-19760c6b4dd4,network=Network(e964bc94-eb23-4bb9-b6af-2d14c0f7d764),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f8c1944-ec')#033[00m
Oct 14 05:04:45 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:04:45 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0856283cf378d79a7e9897247cf7a3c254d6cc79a79408a6c9956ff22096f8a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:04:45 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0856283cf378d79a7e9897247cf7a3c254d6cc79a79408a6c9956ff22096f8a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:04:45 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0856283cf378d79a7e9897247cf7a3c254d6cc79a79408a6c9956ff22096f8a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:04:45 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0856283cf378d79a7e9897247cf7a3c254d6cc79a79408a6c9956ff22096f8a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:04:45 np0005486808 podman[318068]: 2025-10-14 09:04:45.048193262 +0000 UTC m=+0.174777048 container init 342c51a3b1ea399f0460e22e233166681a8fc50fab6cdd0ff9e74474638ce7d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:04:45 np0005486808 podman[318068]: 2025-10-14 09:04:45.056149128 +0000 UTC m=+0.182732854 container start 342c51a3b1ea399f0460e22e233166681a8fc50fab6cdd0ff9e74474638ce7d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:04:45 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a036452adf9bb3dedf800ea7a14af4cf510fe781db3de8f1af0fc0e5f1281436-userdata-shm.mount: Deactivated successfully.
Oct 14 05:04:45 np0005486808 systemd[1]: var-lib-containers-storage-overlay-f7bf3898c6dcba520ee1a299c42284442b12fb92ae3baa3607f44b8c13a94b30-merged.mount: Deactivated successfully.
Oct 14 05:04:45 np0005486808 podman[318068]: 2025-10-14 09:04:45.068214054 +0000 UTC m=+0.194797780 container attach 342c51a3b1ea399f0460e22e233166681a8fc50fab6cdd0ff9e74474638ce7d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_roentgen, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 14 05:04:45 np0005486808 podman[318096]: 2025-10-14 09:04:45.072891849 +0000 UTC m=+0.133669447 container cleanup a036452adf9bb3dedf800ea7a14af4cf510fe781db3de8f1af0fc0e5f1281436 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 14 05:04:45 np0005486808 systemd[1]: libpod-conmon-a036452adf9bb3dedf800ea7a14af4cf510fe781db3de8f1af0fc0e5f1281436.scope: Deactivated successfully.
Oct 14 05:04:45 np0005486808 nova_compute[259627]: 2025-10-14 09:04:45.141 2 DEBUG nova.network.neutron [req-85611c4a-335a-4979-9638-696387ab8bf6 req-4d53921d-2ce3-43ee-acf9-01efbd62cb3d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Updated VIF entry in instance network info cache for port 4f8c1944-ec5d-4de3-82f1-19760c6b4dd4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:04:45 np0005486808 nova_compute[259627]: 2025-10-14 09:04:45.142 2 DEBUG nova.network.neutron [req-85611c4a-335a-4979-9638-696387ab8bf6 req-4d53921d-2ce3-43ee-acf9-01efbd62cb3d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Updating instance_info_cache with network_info: [{"id": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "address": "fa:16:3e:fb:80:d4", "network": {"id": "e964bc94-eb23-4bb9-b6af-2d14c0f7d764", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-577710959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e520d17b20a44440b176c2297c35286a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f8c1944-ec", "ovs_interfaceid": "4f8c1944-ec5d-4de3-82f1-19760c6b4dd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:04:45 np0005486808 podman[318161]: 2025-10-14 09:04:45.156942706 +0000 UTC m=+0.051182710 container remove a036452adf9bb3dedf800ea7a14af4cf510fe781db3de8f1af0fc0e5f1281436 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 14 05:04:45 np0005486808 nova_compute[259627]: 2025-10-14 09:04:45.162 2 DEBUG oslo_concurrency.lockutils [req-85611c4a-335a-4979-9638-696387ab8bf6 req-4d53921d-2ce3-43ee-acf9-01efbd62cb3d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-e038df86-1323-4b04-afae-9fe68c98c22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:04:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:45.162 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2a60d988-abf5-496a-ad2c-a6e8f908e0b7]: (4, ('Tue Oct 14 09:04:44 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764 (a036452adf9bb3dedf800ea7a14af4cf510fe781db3de8f1af0fc0e5f1281436)\na036452adf9bb3dedf800ea7a14af4cf510fe781db3de8f1af0fc0e5f1281436\nTue Oct 14 09:04:45 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764 (a036452adf9bb3dedf800ea7a14af4cf510fe781db3de8f1af0fc0e5f1281436)\na036452adf9bb3dedf800ea7a14af4cf510fe781db3de8f1af0fc0e5f1281436\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:45.165 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ade080df-8f08-4d51-a2d5-638edac27dae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:45.166 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape964bc94-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:45 np0005486808 kernel: tape964bc94-e0: left promiscuous mode
Oct 14 05:04:45 np0005486808 nova_compute[259627]: 2025-10-14 09:04:45.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:45 np0005486808 nova_compute[259627]: 2025-10-14 09:04:45.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:45.195 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[398a5f8e-cf05-491c-b051-e9aab1a05501]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:45.227 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[149ffd2d-92a7-4644-8fe6-7f0395abf22f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:45.229 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[47f9412c-95e0-4749-a0f4-e3bddb53e3b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:45.256 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ec64dec2-44e7-4098-a053-12bf1c983021]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 643096, 'reachable_time': 25660, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318175, 'error': None, 'target': 'ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:45.258 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e964bc94-eb23-4bb9-b6af-2d14c0f7d764 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:04:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:45.258 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[cf44bed7-cbcf-4e4f-b2cd-13be280674b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:45 np0005486808 systemd[1]: run-netns-ovnmeta\x2de964bc94\x2deb23\x2d4bb9\x2db6af\x2d2d14c0f7d764.mount: Deactivated successfully.
Oct 14 05:04:45 np0005486808 nova_compute[259627]: 2025-10-14 09:04:45.464 2 INFO nova.virt.libvirt.driver [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Deleting instance files /var/lib/nova/instances/e038df86-1323-4b04-afae-9fe68c98c22c_del#033[00m
Oct 14 05:04:45 np0005486808 nova_compute[259627]: 2025-10-14 09:04:45.465 2 INFO nova.virt.libvirt.driver [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Deletion of /var/lib/nova/instances/e038df86-1323-4b04-afae-9fe68c98c22c_del complete#033[00m
Oct 14 05:04:45 np0005486808 nova_compute[259627]: 2025-10-14 09:04:45.518 2 INFO nova.compute.manager [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:04:45 np0005486808 nova_compute[259627]: 2025-10-14 09:04:45.519 2 DEBUG oslo.service.loopingcall [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:04:45 np0005486808 nova_compute[259627]: 2025-10-14 09:04:45.519 2 DEBUG nova.compute.manager [-] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:04:45 np0005486808 nova_compute[259627]: 2025-10-14 09:04:45.519 2 DEBUG nova.network.neutron [-] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:04:45 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1475: 305 pgs: 305 active+clean; 73 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 17 KiB/s wr, 99 op/s
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]: {
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:    "0": [
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:        {
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:            "devices": [
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:                "/dev/loop3"
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:            ],
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:            "lv_name": "ceph_lv0",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:            "lv_size": "21470642176",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:            "name": "ceph_lv0",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:            "tags": {
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:                "ceph.cluster_name": "ceph",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:                "ceph.crush_device_class": "",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:                "ceph.encrypted": "0",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:                "ceph.osd_id": "0",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:                "ceph.type": "block",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:                "ceph.vdo": "0"
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:            },
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:            "type": "block",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:            "vg_name": "ceph_vg0"
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:        }
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:    ],
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:    "1": [
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:        {
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:            "devices": [
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:                "/dev/loop4"
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:            ],
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:            "lv_name": "ceph_lv1",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:            "lv_size": "21470642176",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:            "name": "ceph_lv1",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:            "tags": {
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:                "ceph.cluster_name": "ceph",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:                "ceph.crush_device_class": "",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:                "ceph.encrypted": "0",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:                "ceph.osd_id": "1",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:                "ceph.type": "block",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:                "ceph.vdo": "0"
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:            },
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:            "type": "block",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:            "vg_name": "ceph_vg1"
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:        }
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:    ],
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:    "2": [
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:        {
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:            "devices": [
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:                "/dev/loop5"
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:            ],
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:            "lv_name": "ceph_lv2",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:            "lv_size": "21470642176",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:            "name": "ceph_lv2",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:            "tags": {
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:                "ceph.cluster_name": "ceph",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:                "ceph.crush_device_class": "",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:                "ceph.encrypted": "0",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:                "ceph.osd_id": "2",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:                "ceph.type": "block",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:                "ceph.vdo": "0"
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:            },
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:            "type": "block",
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:            "vg_name": "ceph_vg2"
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:        }
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]:    ]
Oct 14 05:04:45 np0005486808 ecstatic_roentgen[318121]: }
Oct 14 05:04:45 np0005486808 systemd[1]: libpod-342c51a3b1ea399f0460e22e233166681a8fc50fab6cdd0ff9e74474638ce7d9.scope: Deactivated successfully.
Oct 14 05:04:45 np0005486808 podman[318068]: 2025-10-14 09:04:45.832407293 +0000 UTC m=+0.958991029 container died 342c51a3b1ea399f0460e22e233166681a8fc50fab6cdd0ff9e74474638ce7d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_roentgen, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:04:45 np0005486808 systemd[1]: var-lib-containers-storage-overlay-a0856283cf378d79a7e9897247cf7a3c254d6cc79a79408a6c9956ff22096f8a-merged.mount: Deactivated successfully.
Oct 14 05:04:45 np0005486808 podman[318068]: 2025-10-14 09:04:45.897865553 +0000 UTC m=+1.024449249 container remove 342c51a3b1ea399f0460e22e233166681a8fc50fab6cdd0ff9e74474638ce7d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_roentgen, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 14 05:04:45 np0005486808 systemd[1]: libpod-conmon-342c51a3b1ea399f0460e22e233166681a8fc50fab6cdd0ff9e74474638ce7d9.scope: Deactivated successfully.
Oct 14 05:04:46 np0005486808 nova_compute[259627]: 2025-10-14 09:04:46.404 2 DEBUG nova.network.neutron [-] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:04:46 np0005486808 nova_compute[259627]: 2025-10-14 09:04:46.431 2 INFO nova.compute.manager [-] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Took 0.91 seconds to deallocate network for instance.#033[00m
Oct 14 05:04:46 np0005486808 nova_compute[259627]: 2025-10-14 09:04:46.494 2 DEBUG oslo_concurrency.lockutils [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:46 np0005486808 nova_compute[259627]: 2025-10-14 09:04:46.495 2 DEBUG oslo_concurrency.lockutils [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:46 np0005486808 nova_compute[259627]: 2025-10-14 09:04:46.571 2 DEBUG oslo_concurrency.processutils [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:04:46 np0005486808 podman[318339]: 2025-10-14 09:04:46.724032156 +0000 UTC m=+0.043381748 container create a26f3446342fe5f4fb5a56b9ca3db3662353ce605f670c7895f8b07505a34604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 14 05:04:46 np0005486808 systemd[1]: Started libpod-conmon-a26f3446342fe5f4fb5a56b9ca3db3662353ce605f670c7895f8b07505a34604.scope.
Oct 14 05:04:46 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:04:46 np0005486808 podman[318339]: 2025-10-14 09:04:46.704026404 +0000 UTC m=+0.023376026 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:04:46 np0005486808 podman[318339]: 2025-10-14 09:04:46.810322567 +0000 UTC m=+0.129672189 container init a26f3446342fe5f4fb5a56b9ca3db3662353ce605f670c7895f8b07505a34604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 14 05:04:46 np0005486808 podman[318339]: 2025-10-14 09:04:46.817130654 +0000 UTC m=+0.136480266 container start a26f3446342fe5f4fb5a56b9ca3db3662353ce605f670c7895f8b07505a34604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 14 05:04:46 np0005486808 kind_kepler[318374]: 167 167
Oct 14 05:04:46 np0005486808 podman[318339]: 2025-10-14 09:04:46.821880431 +0000 UTC m=+0.141230103 container attach a26f3446342fe5f4fb5a56b9ca3db3662353ce605f670c7895f8b07505a34604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 14 05:04:46 np0005486808 systemd[1]: libpod-a26f3446342fe5f4fb5a56b9ca3db3662353ce605f670c7895f8b07505a34604.scope: Deactivated successfully.
Oct 14 05:04:46 np0005486808 podman[318339]: 2025-10-14 09:04:46.822676461 +0000 UTC m=+0.142026053 container died a26f3446342fe5f4fb5a56b9ca3db3662353ce605f670c7895f8b07505a34604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_kepler, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:04:46 np0005486808 systemd[1]: var-lib-containers-storage-overlay-669081a5f319c5adcf99d7323d93857029afe17de2f02a3d0d8fccfaf4c2ab00-merged.mount: Deactivated successfully.
Oct 14 05:04:46 np0005486808 podman[318339]: 2025-10-14 09:04:46.861860544 +0000 UTC m=+0.181210136 container remove a26f3446342fe5f4fb5a56b9ca3db3662353ce605f670c7895f8b07505a34604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_kepler, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 14 05:04:46 np0005486808 nova_compute[259627]: 2025-10-14 09:04:46.867 2 DEBUG nova.compute.manager [req-d20e3b74-c664-4e61-902d-55dcec15b960 req-94adbed2-27b5-4d39-a03f-873b4681c62c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Received event network-vif-plugged-4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:04:46 np0005486808 nova_compute[259627]: 2025-10-14 09:04:46.867 2 DEBUG oslo_concurrency.lockutils [req-d20e3b74-c664-4e61-902d-55dcec15b960 req-94adbed2-27b5-4d39-a03f-873b4681c62c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e038df86-1323-4b04-afae-9fe68c98c22c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:46 np0005486808 nova_compute[259627]: 2025-10-14 09:04:46.868 2 DEBUG oslo_concurrency.lockutils [req-d20e3b74-c664-4e61-902d-55dcec15b960 req-94adbed2-27b5-4d39-a03f-873b4681c62c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e038df86-1323-4b04-afae-9fe68c98c22c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:46 np0005486808 nova_compute[259627]: 2025-10-14 09:04:46.868 2 DEBUG oslo_concurrency.lockutils [req-d20e3b74-c664-4e61-902d-55dcec15b960 req-94adbed2-27b5-4d39-a03f-873b4681c62c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e038df86-1323-4b04-afae-9fe68c98c22c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:46 np0005486808 nova_compute[259627]: 2025-10-14 09:04:46.868 2 DEBUG nova.compute.manager [req-d20e3b74-c664-4e61-902d-55dcec15b960 req-94adbed2-27b5-4d39-a03f-873b4681c62c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] No waiting events found dispatching network-vif-plugged-4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:04:46 np0005486808 nova_compute[259627]: 2025-10-14 09:04:46.868 2 WARNING nova.compute.manager [req-d20e3b74-c664-4e61-902d-55dcec15b960 req-94adbed2-27b5-4d39-a03f-873b4681c62c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Received unexpected event network-vif-plugged-4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:04:46 np0005486808 systemd[1]: libpod-conmon-a26f3446342fe5f4fb5a56b9ca3db3662353ce605f670c7895f8b07505a34604.scope: Deactivated successfully.
Oct 14 05:04:46 np0005486808 nova_compute[259627]: 2025-10-14 09:04:46.936 2 DEBUG nova.compute.manager [req-e0b42417-c412-444c-a4bf-4f8c387bb3c6 req-c2fbe515-f97a-4b86-ad58-ed719c4870e0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Received event network-vif-deleted-4f8c1944-ec5d-4de3-82f1-19760c6b4dd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:04:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:04:47 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2105800694' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:04:47 np0005486808 nova_compute[259627]: 2025-10-14 09:04:47.019 2 DEBUG oslo_concurrency.processutils [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:04:47 np0005486808 nova_compute[259627]: 2025-10-14 09:04:47.028 2 DEBUG nova.compute.provider_tree [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:04:47 np0005486808 nova_compute[259627]: 2025-10-14 09:04:47.048 2 DEBUG nova.scheduler.client.report [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:04:47 np0005486808 podman[318398]: 2025-10-14 09:04:47.055508884 +0000 UTC m=+0.070548444 container create 3e21757b148d2ac393daae0297aba4fa5a42281df00d351821e79923b12cf5de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hertz, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:04:47 np0005486808 nova_compute[259627]: 2025-10-14 09:04:47.084 2 DEBUG oslo_concurrency.lockutils [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:47 np0005486808 systemd[1]: Started libpod-conmon-3e21757b148d2ac393daae0297aba4fa5a42281df00d351821e79923b12cf5de.scope.
Oct 14 05:04:47 np0005486808 nova_compute[259627]: 2025-10-14 09:04:47.116 2 INFO nova.scheduler.client.report [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Deleted allocations for instance e038df86-1323-4b04-afae-9fe68c98c22c#033[00m
Oct 14 05:04:47 np0005486808 podman[318398]: 2025-10-14 09:04:47.02685595 +0000 UTC m=+0.041895470 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:04:47 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:04:47 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e67e4b92f1e67f68d2cdfb2cb402cc800814ee9f9eba3d0a2943244ae352b352/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:04:47 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e67e4b92f1e67f68d2cdfb2cb402cc800814ee9f9eba3d0a2943244ae352b352/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:04:47 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e67e4b92f1e67f68d2cdfb2cb402cc800814ee9f9eba3d0a2943244ae352b352/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:04:47 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e67e4b92f1e67f68d2cdfb2cb402cc800814ee9f9eba3d0a2943244ae352b352/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:04:47 np0005486808 podman[318398]: 2025-10-14 09:04:47.157476482 +0000 UTC m=+0.172516022 container init 3e21757b148d2ac393daae0297aba4fa5a42281df00d351821e79923b12cf5de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:04:47 np0005486808 podman[318398]: 2025-10-14 09:04:47.174433218 +0000 UTC m=+0.189472778 container start 3e21757b148d2ac393daae0297aba4fa5a42281df00d351821e79923b12cf5de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 14 05:04:47 np0005486808 podman[318398]: 2025-10-14 09:04:47.178531899 +0000 UTC m=+0.193571429 container attach 3e21757b148d2ac393daae0297aba4fa5a42281df00d351821e79923b12cf5de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:04:47 np0005486808 nova_compute[259627]: 2025-10-14 09:04:47.221 2 DEBUG oslo_concurrency.lockutils [None req-cf3d410e-d2f9-4ad8-a024-d71b33b25430 4907b291c4c64d2eb768d0036817a00b e520d17b20a44440b176c2297c35286a - - default default] Lock "e038df86-1323-4b04-afae-9fe68c98c22c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.490s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:47 np0005486808 nova_compute[259627]: 2025-10-14 09:04:47.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:47 np0005486808 nova_compute[259627]: 2025-10-14 09:04:47.370 2 DEBUG oslo_concurrency.lockutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:47 np0005486808 nova_compute[259627]: 2025-10-14 09:04:47.374 2 DEBUG oslo_concurrency.lockutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:47 np0005486808 nova_compute[259627]: 2025-10-14 09:04:47.394 2 DEBUG nova.compute.manager [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:04:47 np0005486808 nova_compute[259627]: 2025-10-14 09:04:47.483 2 DEBUG oslo_concurrency.lockutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:47 np0005486808 nova_compute[259627]: 2025-10-14 09:04:47.486 2 DEBUG oslo_concurrency.lockutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:47 np0005486808 nova_compute[259627]: 2025-10-14 09:04:47.497 2 DEBUG nova.virt.hardware [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:04:47 np0005486808 nova_compute[259627]: 2025-10-14 09:04:47.498 2 INFO nova.compute.claims [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:04:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:47.515 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:47 np0005486808 nova_compute[259627]: 2025-10-14 09:04:47.616 2 DEBUG oslo_concurrency.processutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:04:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:04:47 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1476: 305 pgs: 305 active+clean; 73 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 4.1 KiB/s wr, 99 op/s
Oct 14 05:04:47 np0005486808 nova_compute[259627]: 2025-10-14 09:04:47.858 2 DEBUG oslo_concurrency.lockutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "47257c6e-4d10-4d8e-af5a-b57db20048ea" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:47 np0005486808 nova_compute[259627]: 2025-10-14 09:04:47.859 2 DEBUG oslo_concurrency.lockutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:47 np0005486808 nova_compute[259627]: 2025-10-14 09:04:47.878 2 DEBUG nova.compute.manager [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:04:47 np0005486808 nova_compute[259627]: 2025-10-14 09:04:47.959 2 DEBUG oslo_concurrency.lockutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:04:48 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/575635753' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:04:48 np0005486808 nova_compute[259627]: 2025-10-14 09:04:48.107 2 DEBUG oslo_concurrency.processutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:04:48 np0005486808 nova_compute[259627]: 2025-10-14 09:04:48.112 2 DEBUG nova.compute.provider_tree [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:04:48 np0005486808 nova_compute[259627]: 2025-10-14 09:04:48.131 2 DEBUG nova.scheduler.client.report [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:04:48 np0005486808 nova_compute[259627]: 2025-10-14 09:04:48.150 2 DEBUG oslo_concurrency.lockutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:48 np0005486808 nova_compute[259627]: 2025-10-14 09:04:48.150 2 DEBUG nova.compute.manager [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:04:48 np0005486808 nova_compute[259627]: 2025-10-14 09:04:48.152 2 DEBUG oslo_concurrency.lockutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:48 np0005486808 nova_compute[259627]: 2025-10-14 09:04:48.159 2 DEBUG nova.virt.hardware [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:04:48 np0005486808 nova_compute[259627]: 2025-10-14 09:04:48.159 2 INFO nova.compute.claims [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:04:48 np0005486808 infallible_hertz[318416]: {
Oct 14 05:04:48 np0005486808 infallible_hertz[318416]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:04:48 np0005486808 infallible_hertz[318416]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:04:48 np0005486808 infallible_hertz[318416]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:04:48 np0005486808 infallible_hertz[318416]:        "osd_id": 2,
Oct 14 05:04:48 np0005486808 infallible_hertz[318416]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:04:48 np0005486808 infallible_hertz[318416]:        "type": "bluestore"
Oct 14 05:04:48 np0005486808 infallible_hertz[318416]:    },
Oct 14 05:04:48 np0005486808 infallible_hertz[318416]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:04:48 np0005486808 infallible_hertz[318416]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:04:48 np0005486808 infallible_hertz[318416]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:04:48 np0005486808 infallible_hertz[318416]:        "osd_id": 1,
Oct 14 05:04:48 np0005486808 infallible_hertz[318416]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:04:48 np0005486808 infallible_hertz[318416]:        "type": "bluestore"
Oct 14 05:04:48 np0005486808 infallible_hertz[318416]:    },
Oct 14 05:04:48 np0005486808 infallible_hertz[318416]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:04:48 np0005486808 infallible_hertz[318416]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:04:48 np0005486808 infallible_hertz[318416]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:04:48 np0005486808 infallible_hertz[318416]:        "osd_id": 0,
Oct 14 05:04:48 np0005486808 infallible_hertz[318416]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:04:48 np0005486808 infallible_hertz[318416]:        "type": "bluestore"
Oct 14 05:04:48 np0005486808 infallible_hertz[318416]:    }
Oct 14 05:04:48 np0005486808 infallible_hertz[318416]: }
Oct 14 05:04:48 np0005486808 nova_compute[259627]: 2025-10-14 09:04:48.212 2 DEBUG nova.compute.manager [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:04:48 np0005486808 nova_compute[259627]: 2025-10-14 09:04:48.214 2 DEBUG nova.network.neutron [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:04:48 np0005486808 systemd[1]: libpod-3e21757b148d2ac393daae0297aba4fa5a42281df00d351821e79923b12cf5de.scope: Deactivated successfully.
Oct 14 05:04:48 np0005486808 podman[318398]: 2025-10-14 09:04:48.219103304 +0000 UTC m=+1.234142864 container died 3e21757b148d2ac393daae0297aba4fa5a42281df00d351821e79923b12cf5de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hertz, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 14 05:04:48 np0005486808 systemd[1]: libpod-3e21757b148d2ac393daae0297aba4fa5a42281df00d351821e79923b12cf5de.scope: Consumed 1.047s CPU time.
Oct 14 05:04:48 np0005486808 nova_compute[259627]: 2025-10-14 09:04:48.242 2 INFO nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:04:48 np0005486808 systemd[1]: var-lib-containers-storage-overlay-e67e4b92f1e67f68d2cdfb2cb402cc800814ee9f9eba3d0a2943244ae352b352-merged.mount: Deactivated successfully.
Oct 14 05:04:48 np0005486808 nova_compute[259627]: 2025-10-14 09:04:48.275 2 DEBUG nova.compute.manager [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:04:48 np0005486808 podman[318398]: 2025-10-14 09:04:48.276250599 +0000 UTC m=+1.291290119 container remove 3e21757b148d2ac393daae0297aba4fa5a42281df00d351821e79923b12cf5de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 05:04:48 np0005486808 systemd[1]: libpod-conmon-3e21757b148d2ac393daae0297aba4fa5a42281df00d351821e79923b12cf5de.scope: Deactivated successfully.
Oct 14 05:04:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:04:48 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:04:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:04:48 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:04:48 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 43bfb54d-6621-4c8f-9561-3d1a3e3b0989 does not exist
Oct 14 05:04:48 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev ff13fbef-07d3-4797-ab9f-737deb07e2d5 does not exist
Oct 14 05:04:48 np0005486808 nova_compute[259627]: 2025-10-14 09:04:48.344 2 DEBUG oslo_concurrency.processutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:04:48 np0005486808 nova_compute[259627]: 2025-10-14 09:04:48.375 2 DEBUG nova.compute.manager [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:04:48 np0005486808 nova_compute[259627]: 2025-10-14 09:04:48.376 2 DEBUG nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:04:48 np0005486808 nova_compute[259627]: 2025-10-14 09:04:48.377 2 INFO nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Creating image(s)#033[00m
Oct 14 05:04:48 np0005486808 nova_compute[259627]: 2025-10-14 09:04:48.399 2 DEBUG nova.storage.rbd_utils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image 1ce7a863-d0bf-4ea3-80f5-18675b16ac93_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:04:48 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:04:48 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:04:48 np0005486808 nova_compute[259627]: 2025-10-14 09:04:48.428 2 DEBUG nova.storage.rbd_utils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image 1ce7a863-d0bf-4ea3-80f5-18675b16ac93_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:04:48 np0005486808 nova_compute[259627]: 2025-10-14 09:04:48.449 2 DEBUG nova.storage.rbd_utils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image 1ce7a863-d0bf-4ea3-80f5-18675b16ac93_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:04:48 np0005486808 nova_compute[259627]: 2025-10-14 09:04:48.452 2 DEBUG oslo_concurrency.processutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:04:48 np0005486808 nova_compute[259627]: 2025-10-14 09:04:48.516 2 DEBUG oslo_concurrency.processutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:04:48 np0005486808 nova_compute[259627]: 2025-10-14 09:04:48.517 2 DEBUG oslo_concurrency.lockutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:48 np0005486808 nova_compute[259627]: 2025-10-14 09:04:48.517 2 DEBUG oslo_concurrency.lockutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:48 np0005486808 nova_compute[259627]: 2025-10-14 09:04:48.518 2 DEBUG oslo_concurrency.lockutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:48 np0005486808 nova_compute[259627]: 2025-10-14 09:04:48.535 2 DEBUG nova.storage.rbd_utils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image 1ce7a863-d0bf-4ea3-80f5-18675b16ac93_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:04:48 np0005486808 nova_compute[259627]: 2025-10-14 09:04:48.537 2 DEBUG oslo_concurrency.processutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 1ce7a863-d0bf-4ea3-80f5-18675b16ac93_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:04:48 np0005486808 nova_compute[259627]: 2025-10-14 09:04:48.772 2 DEBUG oslo_concurrency.processutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 1ce7a863-d0bf-4ea3-80f5-18675b16ac93_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.235s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:04:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:04:48 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1887727024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:04:48 np0005486808 nova_compute[259627]: 2025-10-14 09:04:48.807 2 DEBUG oslo_concurrency.processutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:04:48 np0005486808 nova_compute[259627]: 2025-10-14 09:04:48.849 2 DEBUG nova.storage.rbd_utils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] resizing rbd image 1ce7a863-d0bf-4ea3-80f5-18675b16ac93_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:04:48 np0005486808 nova_compute[259627]: 2025-10-14 09:04:48.888 2 DEBUG nova.compute.provider_tree [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:04:48 np0005486808 nova_compute[259627]: 2025-10-14 09:04:48.905 2 DEBUG nova.scheduler.client.report [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:04:48 np0005486808 nova_compute[259627]: 2025-10-14 09:04:48.953 2 DEBUG oslo_concurrency.lockutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:48 np0005486808 nova_compute[259627]: 2025-10-14 09:04:48.954 2 DEBUG nova.compute.manager [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:04:48 np0005486808 nova_compute[259627]: 2025-10-14 09:04:48.961 2 DEBUG nova.objects.instance [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lazy-loading 'migration_context' on Instance uuid 1ce7a863-d0bf-4ea3-80f5-18675b16ac93 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:04:48 np0005486808 nova_compute[259627]: 2025-10-14 09:04:48.989 2 DEBUG nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:04:48 np0005486808 nova_compute[259627]: 2025-10-14 09:04:48.989 2 DEBUG nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Ensure instance console log exists: /var/lib/nova/instances/1ce7a863-d0bf-4ea3-80f5-18675b16ac93/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:04:48 np0005486808 nova_compute[259627]: 2025-10-14 09:04:48.990 2 DEBUG oslo_concurrency.lockutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:48 np0005486808 nova_compute[259627]: 2025-10-14 09:04:48.991 2 DEBUG oslo_concurrency.lockutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:48 np0005486808 nova_compute[259627]: 2025-10-14 09:04:48.991 2 DEBUG oslo_concurrency.lockutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:49 np0005486808 nova_compute[259627]: 2025-10-14 09:04:49.008 2 DEBUG nova.compute.manager [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:04:49 np0005486808 nova_compute[259627]: 2025-10-14 09:04:49.009 2 DEBUG nova.network.neutron [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:04:49 np0005486808 nova_compute[259627]: 2025-10-14 09:04:49.032 2 INFO nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:04:49 np0005486808 nova_compute[259627]: 2025-10-14 09:04:49.056 2 DEBUG nova.compute.manager [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:04:49 np0005486808 nova_compute[259627]: 2025-10-14 09:04:49.126 2 DEBUG nova.policy [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd952679a4e6a4fc6bacf42c02d3e92d0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4e47722c609640d3a70fee8dd6ff94cc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:04:49 np0005486808 nova_compute[259627]: 2025-10-14 09:04:49.189 2 DEBUG nova.compute.manager [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:04:49 np0005486808 nova_compute[259627]: 2025-10-14 09:04:49.191 2 DEBUG nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:04:49 np0005486808 nova_compute[259627]: 2025-10-14 09:04:49.192 2 INFO nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Creating image(s)#033[00m
Oct 14 05:04:49 np0005486808 nova_compute[259627]: 2025-10-14 09:04:49.218 2 DEBUG nova.storage.rbd_utils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image 47257c6e-4d10-4d8e-af5a-b57db20048ea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:04:49 np0005486808 nova_compute[259627]: 2025-10-14 09:04:49.239 2 DEBUG nova.storage.rbd_utils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image 47257c6e-4d10-4d8e-af5a-b57db20048ea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:04:49 np0005486808 nova_compute[259627]: 2025-10-14 09:04:49.264 2 DEBUG nova.storage.rbd_utils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image 47257c6e-4d10-4d8e-af5a-b57db20048ea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:04:49 np0005486808 nova_compute[259627]: 2025-10-14 09:04:49.268 2 DEBUG oslo_concurrency.processutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:04:49 np0005486808 nova_compute[259627]: 2025-10-14 09:04:49.310 2 DEBUG nova.policy [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8b0f0f2991214b9caed6f475a149c342', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8945d892653642ac9c3d894f8bc3619d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:04:49 np0005486808 nova_compute[259627]: 2025-10-14 09:04:49.359 2 DEBUG oslo_concurrency.processutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:04:49 np0005486808 nova_compute[259627]: 2025-10-14 09:04:49.360 2 DEBUG oslo_concurrency.lockutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:49 np0005486808 nova_compute[259627]: 2025-10-14 09:04:49.361 2 DEBUG oslo_concurrency.lockutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:49 np0005486808 nova_compute[259627]: 2025-10-14 09:04:49.361 2 DEBUG oslo_concurrency.lockutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:49 np0005486808 nova_compute[259627]: 2025-10-14 09:04:49.389 2 DEBUG nova.storage.rbd_utils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image 47257c6e-4d10-4d8e-af5a-b57db20048ea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:04:49 np0005486808 nova_compute[259627]: 2025-10-14 09:04:49.393 2 DEBUG oslo_concurrency.processutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 47257c6e-4d10-4d8e-af5a-b57db20048ea_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:04:49 np0005486808 podman[318817]: 2025-10-14 09:04:49.664097171 +0000 UTC m=+0.070561186 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 05:04:49 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1477: 305 pgs: 305 active+clean; 73 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 4.1 KiB/s wr, 99 op/s
Oct 14 05:04:49 np0005486808 nova_compute[259627]: 2025-10-14 09:04:49.704 2 DEBUG oslo_concurrency.processutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 47257c6e-4d10-4d8e-af5a-b57db20048ea_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.310s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:04:49 np0005486808 podman[318816]: 2025-10-14 09:04:49.725263625 +0000 UTC m=+0.123100718 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:04:49 np0005486808 nova_compute[259627]: 2025-10-14 09:04:49.755 2 DEBUG nova.storage.rbd_utils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] resizing rbd image 47257c6e-4d10-4d8e-af5a-b57db20048ea_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:04:49 np0005486808 nova_compute[259627]: 2025-10-14 09:04:49.830 2 DEBUG nova.objects.instance [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'migration_context' on Instance uuid 47257c6e-4d10-4d8e-af5a-b57db20048ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:04:49 np0005486808 nova_compute[259627]: 2025-10-14 09:04:49.853 2 DEBUG nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:04:49 np0005486808 nova_compute[259627]: 2025-10-14 09:04:49.854 2 DEBUG nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Ensure instance console log exists: /var/lib/nova/instances/47257c6e-4d10-4d8e-af5a-b57db20048ea/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:04:49 np0005486808 nova_compute[259627]: 2025-10-14 09:04:49.854 2 DEBUG oslo_concurrency.lockutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:49 np0005486808 nova_compute[259627]: 2025-10-14 09:04:49.855 2 DEBUG oslo_concurrency.lockutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:49 np0005486808 nova_compute[259627]: 2025-10-14 09:04:49.855 2 DEBUG oslo_concurrency.lockutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:50 np0005486808 nova_compute[259627]: 2025-10-14 09:04:50.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:50 np0005486808 nova_compute[259627]: 2025-10-14 09:04:50.275 2 DEBUG nova.network.neutron [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Successfully created port: 58429c4c-bdab-4d51-8440-95fb6e0fab00 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:04:50 np0005486808 nova_compute[259627]: 2025-10-14 09:04:50.515 2 DEBUG nova.network.neutron [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Successfully created port: 971d99c2-5a60-4cac-8f99-e819d71e419c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:04:51 np0005486808 nova_compute[259627]: 2025-10-14 09:04:51.300 2 DEBUG nova.network.neutron [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Successfully updated port: 971d99c2-5a60-4cac-8f99-e819d71e419c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:04:51 np0005486808 nova_compute[259627]: 2025-10-14 09:04:51.317 2 DEBUG oslo_concurrency.lockutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:04:51 np0005486808 nova_compute[259627]: 2025-10-14 09:04:51.317 2 DEBUG oslo_concurrency.lockutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquired lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:04:51 np0005486808 nova_compute[259627]: 2025-10-14 09:04:51.317 2 DEBUG nova.network.neutron [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:04:51 np0005486808 nova_compute[259627]: 2025-10-14 09:04:51.351 2 DEBUG nova.network.neutron [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Successfully updated port: 58429c4c-bdab-4d51-8440-95fb6e0fab00 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:04:51 np0005486808 nova_compute[259627]: 2025-10-14 09:04:51.364 2 DEBUG oslo_concurrency.lockutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "refresh_cache-1ce7a863-d0bf-4ea3-80f5-18675b16ac93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:04:51 np0005486808 nova_compute[259627]: 2025-10-14 09:04:51.364 2 DEBUG oslo_concurrency.lockutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquired lock "refresh_cache-1ce7a863-d0bf-4ea3-80f5-18675b16ac93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:04:51 np0005486808 nova_compute[259627]: 2025-10-14 09:04:51.365 2 DEBUG nova.network.neutron [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:04:51 np0005486808 nova_compute[259627]: 2025-10-14 09:04:51.413 2 DEBUG nova.compute.manager [req-53c5855b-48e3-49ba-80b2-9d725c97d068 req-7121235a-3875-4843-914d-bd515e829902 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-changed-971d99c2-5a60-4cac-8f99-e819d71e419c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:04:51 np0005486808 nova_compute[259627]: 2025-10-14 09:04:51.413 2 DEBUG nova.compute.manager [req-53c5855b-48e3-49ba-80b2-9d725c97d068 req-7121235a-3875-4843-914d-bd515e829902 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Refreshing instance network info cache due to event network-changed-971d99c2-5a60-4cac-8f99-e819d71e419c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:04:51 np0005486808 nova_compute[259627]: 2025-10-14 09:04:51.413 2 DEBUG oslo_concurrency.lockutils [req-53c5855b-48e3-49ba-80b2-9d725c97d068 req-7121235a-3875-4843-914d-bd515e829902 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:04:51 np0005486808 nova_compute[259627]: 2025-10-14 09:04:51.516 2 DEBUG nova.compute.manager [req-6a3b91ee-b0ee-4fe0-ab87-d67a73ae97a3 req-da0629c3-ace9-4d55-9cd1-514a7b0fc421 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Received event network-changed-58429c4c-bdab-4d51-8440-95fb6e0fab00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:04:51 np0005486808 nova_compute[259627]: 2025-10-14 09:04:51.516 2 DEBUG nova.compute.manager [req-6a3b91ee-b0ee-4fe0-ab87-d67a73ae97a3 req-da0629c3-ace9-4d55-9cd1-514a7b0fc421 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Refreshing instance network info cache due to event network-changed-58429c4c-bdab-4d51-8440-95fb6e0fab00. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:04:51 np0005486808 nova_compute[259627]: 2025-10-14 09:04:51.516 2 DEBUG oslo_concurrency.lockutils [req-6a3b91ee-b0ee-4fe0-ab87-d67a73ae97a3 req-da0629c3-ace9-4d55-9cd1-514a7b0fc421 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-1ce7a863-d0bf-4ea3-80f5-18675b16ac93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:04:51 np0005486808 nova_compute[259627]: 2025-10-14 09:04:51.555 2 DEBUG nova.network.neutron [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:04:51 np0005486808 nova_compute[259627]: 2025-10-14 09:04:51.563 2 DEBUG nova.network.neutron [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:04:51 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1478: 305 pgs: 305 active+clean; 134 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 108 KiB/s rd, 3.5 MiB/s wr, 169 op/s
Oct 14 05:04:52 np0005486808 nova_compute[259627]: 2025-10-14 09:04:52.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:52 np0005486808 nova_compute[259627]: 2025-10-14 09:04:52.553 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432677.5520945, 3c8eac8e-dcfa-41ba-9c01-1185061baf5d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:04:52 np0005486808 nova_compute[259627]: 2025-10-14 09:04:52.554 2 INFO nova.compute.manager [-] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:04:52 np0005486808 nova_compute[259627]: 2025-10-14 09:04:52.576 2 DEBUG nova.compute.manager [None req-e06b9750-cf6b-499e-a701-deea2b3443b8 - - - - - -] [instance: 3c8eac8e-dcfa-41ba-9c01-1185061baf5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:04:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:04:53 np0005486808 nova_compute[259627]: 2025-10-14 09:04:53.379 2 DEBUG nova.network.neutron [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Updating instance_info_cache with network_info: [{"id": "971d99c2-5a60-4cac-8f99-e819d71e419c", "address": "fa:16:3e:9a:79:ab", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap971d99c2-5a", "ovs_interfaceid": "971d99c2-5a60-4cac-8f99-e819d71e419c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
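The `network_info` payload logged in the cache update above is plain JSON. A sketch of extracting the fields an operator usually wants from it (port ID, MAC, fixed IPs, MTU); the literal below is a trimmed copy of the entry in the log, not the full structure:

```python
import json

# Trimmed copy of the network_info entry from the instance_info_cache update.
network_info = json.loads("""
[{"id": "971d99c2-5a60-4cac-8f99-e819d71e419c",
  "address": "fa:16:3e:9a:79:ab",
  "devname": "tap971d99c2-5a",
  "vnic_type": "normal",
  "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079",
    "bridge": "br-int",
    "subnets": [{"cidr": "10.100.0.0/28",
      "ips": [{"address": "10.100.0.3", "type": "fixed"}]}],
    "meta": {"mtu": 1442}}}]
""")

for vif in network_info:
    # Collect every fixed IP across all subnets on this VIF.
    ips = [ip["address"]
           for subnet in vif["network"]["subnets"]
           for ip in subnet["ips"]]
    print(vif["id"], vif["address"], ips, vif["network"]["meta"]["mtu"])
```

Note the MTU of 1442: the 58-byte reduction from 1500 is the Geneve encapsulation overhead on this OVN tunneled network (`"tunneled": true` in the full entry).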
Oct 14 05:04:53 np0005486808 nova_compute[259627]: 2025-10-14 09:04:53.406 2 DEBUG oslo_concurrency.lockutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Releasing lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:04:53 np0005486808 nova_compute[259627]: 2025-10-14 09:04:53.407 2 DEBUG nova.compute.manager [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Instance network_info: |[{"id": "971d99c2-5a60-4cac-8f99-e819d71e419c", "address": "fa:16:3e:9a:79:ab", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap971d99c2-5a", "ovs_interfaceid": "971d99c2-5a60-4cac-8f99-e819d71e419c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:04:53 np0005486808 nova_compute[259627]: 2025-10-14 09:04:53.408 2 DEBUG oslo_concurrency.lockutils [req-53c5855b-48e3-49ba-80b2-9d725c97d068 req-7121235a-3875-4843-914d-bd515e829902 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:04:53 np0005486808 nova_compute[259627]: 2025-10-14 09:04:53.408 2 DEBUG nova.network.neutron [req-53c5855b-48e3-49ba-80b2-9d725c97d068 req-7121235a-3875-4843-914d-bd515e829902 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Refreshing network info cache for port 971d99c2-5a60-4cac-8f99-e819d71e419c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:04:53 np0005486808 nova_compute[259627]: 2025-10-14 09:04:53.413 2 DEBUG nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Start _get_guest_xml network_info=[{"id": "971d99c2-5a60-4cac-8f99-e819d71e419c", "address": "fa:16:3e:9a:79:ab", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap971d99c2-5a", "ovs_interfaceid": "971d99c2-5a60-4cac-8f99-e819d71e419c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:04:53 np0005486808 nova_compute[259627]: 2025-10-14 09:04:53.420 2 WARNING nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:04:53 np0005486808 nova_compute[259627]: 2025-10-14 09:04:53.434 2 DEBUG nova.virt.libvirt.host [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:04:53 np0005486808 nova_compute[259627]: 2025-10-14 09:04:53.435 2 DEBUG nova.virt.libvirt.host [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:04:53 np0005486808 nova_compute[259627]: 2025-10-14 09:04:53.439 2 DEBUG nova.virt.libvirt.host [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:04:53 np0005486808 nova_compute[259627]: 2025-10-14 09:04:53.440 2 DEBUG nova.virt.libvirt.host [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:04:53 np0005486808 nova_compute[259627]: 2025-10-14 09:04:53.441 2 DEBUG nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:04:53 np0005486808 nova_compute[259627]: 2025-10-14 09:04:53.441 2 DEBUG nova.virt.hardware [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:04:53 np0005486808 nova_compute[259627]: 2025-10-14 09:04:53.442 2 DEBUG nova.virt.hardware [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:04:53 np0005486808 nova_compute[259627]: 2025-10-14 09:04:53.443 2 DEBUG nova.virt.hardware [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:04:53 np0005486808 nova_compute[259627]: 2025-10-14 09:04:53.443 2 DEBUG nova.virt.hardware [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:04:53 np0005486808 nova_compute[259627]: 2025-10-14 09:04:53.444 2 DEBUG nova.virt.hardware [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:04:53 np0005486808 nova_compute[259627]: 2025-10-14 09:04:53.444 2 DEBUG nova.virt.hardware [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:04:53 np0005486808 nova_compute[259627]: 2025-10-14 09:04:53.445 2 DEBUG nova.virt.hardware [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:04:53 np0005486808 nova_compute[259627]: 2025-10-14 09:04:53.445 2 DEBUG nova.virt.hardware [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:04:53 np0005486808 nova_compute[259627]: 2025-10-14 09:04:53.446 2 DEBUG nova.virt.hardware [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:04:53 np0005486808 nova_compute[259627]: 2025-10-14 09:04:53.446 2 DEBUG nova.virt.hardware [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:04:53 np0005486808 nova_compute[259627]: 2025-10-14 09:04:53.447 2 DEBUG nova.virt.hardware [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
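The nova.virt.hardware lines above enumerate every (sockets, cores, threads) split of the flavor's vCPU count within the reported limits (65536 each, since neither flavor nor image constrains topology), then sort by preference. A simplified re-implementation of the enumeration step (this is an illustrative analogue, not nova's actual `_get_possible_cpu_topologies`, which also handles preferred orderings and NUMA constraints):

```python
def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                        max_threads=65536):
    """Enumerate (sockets, cores, threads) triples whose product equals
    vcpus, within the per-dimension limits seen in the log."""
    topos = []
    for sockets in range(1, min(vcpus, max_sockets) + 1):
        if vcpus % sockets:
            continue
        rest = vcpus // sockets
        for cores in range(1, min(rest, max_cores) + 1):
            if rest % cores:
                continue
            threads = rest // cores
            if threads <= max_threads:
                topos.append((sockets, cores, threads))
    return topos

print(possible_topologies(1))  # → [(1, 1, 1)]
```

For the 1-vCPU `m1.nano` flavor only 1:1:1 is possible, which matches the "Got 1 possible topologies" line; larger flavors yield multiple candidates that the preference sort then orders.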
Oct 14 05:04:53 np0005486808 nova_compute[259627]: 2025-10-14 09:04:53.452 2 DEBUG oslo_concurrency.processutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:04:53 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1479: 305 pgs: 305 active+clean; 134 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 3.5 MiB/s wr, 82 op/s
Oct 14 05:04:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:04:53 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1971864206' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:04:53 np0005486808 nova_compute[259627]: 2025-10-14 09:04:53.942 2 DEBUG oslo_concurrency.processutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:04:53 np0005486808 nova_compute[259627]: 2025-10-14 09:04:53.974 2 DEBUG nova.storage.rbd_utils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image 47257c6e-4d10-4d8e-af5a-b57db20048ea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:04:53 np0005486808 nova_compute[259627]: 2025-10-14 09:04:53.979 2 DEBUG oslo_concurrency.processutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.221 2 DEBUG nova.network.neutron [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Updating instance_info_cache with network_info: [{"id": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "address": "fa:16:3e:53:a5:80", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58429c4c-bd", "ovs_interfaceid": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.248 2 DEBUG oslo_concurrency.lockutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Releasing lock "refresh_cache-1ce7a863-d0bf-4ea3-80f5-18675b16ac93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.249 2 DEBUG nova.compute.manager [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Instance network_info: |[{"id": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "address": "fa:16:3e:53:a5:80", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58429c4c-bd", "ovs_interfaceid": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.250 2 DEBUG oslo_concurrency.lockutils [req-6a3b91ee-b0ee-4fe0-ab87-d67a73ae97a3 req-da0629c3-ace9-4d55-9cd1-514a7b0fc421 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-1ce7a863-d0bf-4ea3-80f5-18675b16ac93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.251 2 DEBUG nova.network.neutron [req-6a3b91ee-b0ee-4fe0-ab87-d67a73ae97a3 req-da0629c3-ace9-4d55-9cd1-514a7b0fc421 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Refreshing network info cache for port 58429c4c-bdab-4d51-8440-95fb6e0fab00 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.255 2 DEBUG nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Start _get_guest_xml network_info=[{"id": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "address": "fa:16:3e:53:a5:80", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58429c4c-bd", "ovs_interfaceid": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.260 2 WARNING nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.265 2 DEBUG nova.virt.libvirt.host [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.266 2 DEBUG nova.virt.libvirt.host [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.268 2 DEBUG nova.virt.libvirt.host [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.269 2 DEBUG nova.virt.libvirt.host [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.269 2 DEBUG nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.270 2 DEBUG nova.virt.hardware [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.270 2 DEBUG nova.virt.hardware [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.271 2 DEBUG nova.virt.hardware [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.271 2 DEBUG nova.virt.hardware [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.272 2 DEBUG nova.virt.hardware [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.272 2 DEBUG nova.virt.hardware [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.272 2 DEBUG nova.virt.hardware [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.273 2 DEBUG nova.virt.hardware [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.273 2 DEBUG nova.virt.hardware [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.274 2 DEBUG nova.virt.hardware [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.274 2 DEBUG nova.virt.hardware [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.278 2 DEBUG oslo_concurrency.processutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:04:54 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:04:54 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1743216292' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.491 2 DEBUG oslo_concurrency.processutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.493 2 DEBUG nova.virt.libvirt.vif [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:04:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-766543543',display_name='tempest-AttachInterfacesTestJSON-server-766543543',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-766543543',id=54,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJQrZCrZaptXuotj+57tBn3/OGSvBqxv5B+H31iXFpSI71zcU5A1JwpwDksSj1zemT2KJ76DfcdGv/IVNbfXVnjr4ntmsdxnv83TIXCvdXfabQIpcV4CIqh+n8+7v/6SjQ==',key_name='tempest-keypair-1747841174',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-fxki54rp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:04:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=47257c6e-4d10-4d8e-af5a-b57db20048ea,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "971d99c2-5a60-4cac-8f99-e819d71e419c", "address": "fa:16:3e:9a:79:ab", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap971d99c2-5a", "ovs_interfaceid": "971d99c2-5a60-4cac-8f99-e819d71e419c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.493 2 DEBUG nova.network.os_vif_util [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "971d99c2-5a60-4cac-8f99-e819d71e419c", "address": "fa:16:3e:9a:79:ab", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap971d99c2-5a", "ovs_interfaceid": "971d99c2-5a60-4cac-8f99-e819d71e419c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.494 2 DEBUG nova.network.os_vif_util [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9a:79:ab,bridge_name='br-int',has_traffic_filtering=True,id=971d99c2-5a60-4cac-8f99-e819d71e419c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap971d99c2-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.495 2 DEBUG nova.objects.instance [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'pci_devices' on Instance uuid 47257c6e-4d10-4d8e-af5a-b57db20048ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.515 2 DEBUG nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:04:54 np0005486808 nova_compute[259627]:  <uuid>47257c6e-4d10-4d8e-af5a-b57db20048ea</uuid>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:  <name>instance-00000036</name>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:04:54 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:      <nova:name>tempest-AttachInterfacesTestJSON-server-766543543</nova:name>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:04:53</nova:creationTime>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:04:54 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:        <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:        <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:        <nova:port uuid="971d99c2-5a60-4cac-8f99-e819d71e419c">
Oct 14 05:04:54 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:      <entry name="serial">47257c6e-4d10-4d8e-af5a-b57db20048ea</entry>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:      <entry name="uuid">47257c6e-4d10-4d8e-af5a-b57db20048ea</entry>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:04:54 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/47257c6e-4d10-4d8e-af5a-b57db20048ea_disk">
Oct 14 05:04:54 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:04:54 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:04:54 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/47257c6e-4d10-4d8e-af5a-b57db20048ea_disk.config">
Oct 14 05:04:54 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:04:54 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:04:54 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:9a:79:ab"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:      <target dev="tap971d99c2-5a"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:04:54 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/47257c6e-4d10-4d8e-af5a-b57db20048ea/console.log" append="off"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:04:54 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:04:54 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:04:54 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:04:54 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:04:54 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.521 2 DEBUG nova.compute.manager [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Preparing to wait for external event network-vif-plugged-971d99c2-5a60-4cac-8f99-e819d71e419c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.521 2 DEBUG oslo_concurrency.lockutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.521 2 DEBUG oslo_concurrency.lockutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.521 2 DEBUG oslo_concurrency.lockutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.522 2 DEBUG nova.virt.libvirt.vif [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:04:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-766543543',display_name='tempest-AttachInterfacesTestJSON-server-766543543',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-766543543',id=54,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJQrZCrZaptXuotj+57tBn3/OGSvBqxv5B+H31iXFpSI71zcU5A1JwpwDksSj1zemT2KJ76DfcdGv/IVNbfXVnjr4ntmsdxnv83TIXCvdXfabQIpcV4CIqh+n8+7v/6SjQ==',key_name='tempest-keypair-1747841174',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-fxki54rp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:04:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=47257c6e-4d10-4d8e-af5a-b57db20048ea,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "971d99c2-5a60-4cac-8f99-e819d71e419c", "address": "fa:16:3e:9a:79:ab", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap971d99c2-5a", "ovs_interfaceid": "971d99c2-5a60-4cac-8f99-e819d71e419c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.522 2 DEBUG nova.network.os_vif_util [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "971d99c2-5a60-4cac-8f99-e819d71e419c", "address": "fa:16:3e:9a:79:ab", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap971d99c2-5a", "ovs_interfaceid": "971d99c2-5a60-4cac-8f99-e819d71e419c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.522 2 DEBUG nova.network.os_vif_util [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9a:79:ab,bridge_name='br-int',has_traffic_filtering=True,id=971d99c2-5a60-4cac-8f99-e819d71e419c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap971d99c2-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.525 2 DEBUG os_vif [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:79:ab,bridge_name='br-int',has_traffic_filtering=True,id=971d99c2-5a60-4cac-8f99-e819d71e419c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap971d99c2-5a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.526 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.526 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.530 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap971d99c2-5a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.530 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap971d99c2-5a, col_values=(('external_ids', {'iface-id': '971d99c2-5a60-4cac-8f99-e819d71e419c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9a:79:ab', 'vm-uuid': '47257c6e-4d10-4d8e-af5a-b57db20048ea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:54 np0005486808 NetworkManager[44885]: <info>  [1760432694.5336] manager: (tap971d99c2-5a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/221)
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.539 2 INFO os_vif [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:79:ab,bridge_name='br-int',has_traffic_filtering=True,id=971d99c2-5a60-4cac-8f99-e819d71e419c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap971d99c2-5a')#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.596 2 DEBUG nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.597 2 DEBUG nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.597 2 DEBUG nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No VIF found with MAC fa:16:3e:9a:79:ab, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.598 2 INFO nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Using config drive#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.618 2 DEBUG nova.storage.rbd_utils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image 47257c6e-4d10-4d8e-af5a-b57db20048ea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:04:54 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:04:54 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2218737721' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.761 2 DEBUG oslo_concurrency.processutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.791 2 DEBUG nova.storage.rbd_utils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image 1ce7a863-d0bf-4ea3-80f5-18675b16ac93_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:04:54 np0005486808 nova_compute[259627]: 2025-10-14 09:04:54.797 2 DEBUG oslo_concurrency.processutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.103 2 INFO nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Creating config drive at /var/lib/nova/instances/47257c6e-4d10-4d8e-af5a-b57db20048ea/disk.config#033[00m
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.116 2 DEBUG oslo_concurrency.processutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/47257c6e-4d10-4d8e-af5a-b57db20048ea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps9th7ndf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.280 2 DEBUG oslo_concurrency.processutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/47257c6e-4d10-4d8e-af5a-b57db20048ea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps9th7ndf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:04:55 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:04:55 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/460046403' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.319 2 DEBUG nova.storage.rbd_utils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image 47257c6e-4d10-4d8e-af5a-b57db20048ea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.323 2 DEBUG oslo_concurrency.processutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/47257c6e-4d10-4d8e-af5a-b57db20048ea/disk.config 47257c6e-4d10-4d8e-af5a-b57db20048ea_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.364 2 DEBUG oslo_concurrency.processutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.365 2 DEBUG nova.virt.libvirt.vif [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:04:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1496520165',display_name='tempest-ServerActionsTestOtherA-server-1496520165',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1496520165',id=53,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHn+mln6XiHS3Dbrh5f5r23+s3Q61qobcQwb2UzGhsgS1DhTJSEpJGmS/ZP0w8jiE9rcTktB/Gz7RvHBySi5EJz+HH+wa+mTFVBHeaIG5cz8L5ypIzO20Wa3eu2dAxGK5A==',key_name='tempest-keypair-1288175355',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4e47722c609640d3a70fee8dd6ff94cc',ramdisk_id='',reservation_id='r-60w53hyn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-894139105',owner_user_name='tempest-ServerActionsTestOtherA-894139105-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:04:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d952679a4e6a4fc6bacf42c02d3e92d0',uuid=1ce7a863-d0bf-4ea3-80f5-18675b16ac93,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "address": "fa:16:3e:53:a5:80", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58429c4c-bd", "ovs_interfaceid": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.366 2 DEBUG nova.network.os_vif_util [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converting VIF {"id": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "address": "fa:16:3e:53:a5:80", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58429c4c-bd", "ovs_interfaceid": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.367 2 DEBUG nova.network.os_vif_util [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:a5:80,bridge_name='br-int',has_traffic_filtering=True,id=58429c4c-bdab-4d51-8440-95fb6e0fab00,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58429c4c-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.369 2 DEBUG nova.objects.instance [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lazy-loading 'pci_devices' on Instance uuid 1ce7a863-d0bf-4ea3-80f5-18675b16ac93 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.386 2 DEBUG nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:04:55 np0005486808 nova_compute[259627]:  <uuid>1ce7a863-d0bf-4ea3-80f5-18675b16ac93</uuid>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:  <name>instance-00000035</name>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:04:55 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:      <nova:name>tempest-ServerActionsTestOtherA-server-1496520165</nova:name>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:04:54</nova:creationTime>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:04:55 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:        <nova:user uuid="d952679a4e6a4fc6bacf42c02d3e92d0">tempest-ServerActionsTestOtherA-894139105-project-member</nova:user>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:        <nova:project uuid="4e47722c609640d3a70fee8dd6ff94cc">tempest-ServerActionsTestOtherA-894139105</nova:project>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:        <nova:port uuid="58429c4c-bdab-4d51-8440-95fb6e0fab00">
Oct 14 05:04:55 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:      <entry name="serial">1ce7a863-d0bf-4ea3-80f5-18675b16ac93</entry>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:      <entry name="uuid">1ce7a863-d0bf-4ea3-80f5-18675b16ac93</entry>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:04:55 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/1ce7a863-d0bf-4ea3-80f5-18675b16ac93_disk">
Oct 14 05:04:55 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:04:55 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:04:55 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/1ce7a863-d0bf-4ea3-80f5-18675b16ac93_disk.config">
Oct 14 05:04:55 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:04:55 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:04:55 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:53:a5:80"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:      <target dev="tap58429c4c-bd"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:04:55 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/1ce7a863-d0bf-4ea3-80f5-18675b16ac93/console.log" append="off"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:04:55 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:04:55 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:04:55 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:04:55 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:04:55 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.394 2 DEBUG nova.compute.manager [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Preparing to wait for external event network-vif-plugged-58429c4c-bdab-4d51-8440-95fb6e0fab00 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.394 2 DEBUG oslo_concurrency.lockutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.394 2 DEBUG oslo_concurrency.lockutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.394 2 DEBUG oslo_concurrency.lockutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.395 2 DEBUG nova.virt.libvirt.vif [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:04:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1496520165',display_name='tempest-ServerActionsTestOtherA-server-1496520165',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1496520165',id=53,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHn+mln6XiHS3Dbrh5f5r23+s3Q61qobcQwb2UzGhsgS1DhTJSEpJGmS/ZP0w8jiE9rcTktB/Gz7RvHBySi5EJz+HH+wa+mTFVBHeaIG5cz8L5ypIzO20Wa3eu2dAxGK5A==',key_name='tempest-keypair-1288175355',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4e47722c609640d3a70fee8dd6ff94cc',ramdisk_id='',reservation_id='r-60w53hyn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-894139105',owner_user_name='tempest-ServerActionsTestOtherA-894139105-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:04:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d952679a4e6a4fc6bacf42c02d3e92d0',uuid=1ce7a863-d0bf-4ea3-80f5-18675b16ac93,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "address": "fa:16:3e:53:a5:80", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58429c4c-bd", "ovs_interfaceid": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.395 2 DEBUG nova.network.os_vif_util [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converting VIF {"id": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "address": "fa:16:3e:53:a5:80", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58429c4c-bd", "ovs_interfaceid": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.396 2 DEBUG nova.network.os_vif_util [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:a5:80,bridge_name='br-int',has_traffic_filtering=True,id=58429c4c-bdab-4d51-8440-95fb6e0fab00,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58429c4c-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.397 2 DEBUG os_vif [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:a5:80,bridge_name='br-int',has_traffic_filtering=True,id=58429c4c-bdab-4d51-8440-95fb6e0fab00,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58429c4c-bd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.399 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.400 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.403 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58429c4c-bd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.404 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap58429c4c-bd, col_values=(('external_ids', {'iface-id': '58429c4c-bdab-4d51-8440-95fb6e0fab00', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:53:a5:80', 'vm-uuid': '1ce7a863-d0bf-4ea3-80f5-18675b16ac93'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:55 np0005486808 NetworkManager[44885]: <info>  [1760432695.4064] manager: (tap58429c4c-bd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/222)
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.480 2 DEBUG oslo_concurrency.processutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/47257c6e-4d10-4d8e-af5a-b57db20048ea/disk.config 47257c6e-4d10-4d8e-af5a-b57db20048ea_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.480 2 INFO nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Deleting local config drive /var/lib/nova/instances/47257c6e-4d10-4d8e-af5a-b57db20048ea/disk.config because it was imported into RBD.#033[00m
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.505 2 INFO os_vif [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:a5:80,bridge_name='br-int',has_traffic_filtering=True,id=58429c4c-bdab-4d51-8440-95fb6e0fab00,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58429c4c-bd')#033[00m
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:55 np0005486808 NetworkManager[44885]: <info>  [1760432695.5385] manager: (tap971d99c2-5a): new Tun device (/org/freedesktop/NetworkManager/Devices/223)
Oct 14 05:04:55 np0005486808 kernel: tap971d99c2-5a: entered promiscuous mode
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:55 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:55Z|00517|binding|INFO|Claiming lport 971d99c2-5a60-4cac-8f99-e819d71e419c for this chassis.
Oct 14 05:04:55 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:55Z|00518|binding|INFO|971d99c2-5a60-4cac-8f99-e819d71e419c: Claiming fa:16:3e:9a:79:ab 10.100.0.3
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.550 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:79:ab 10.100.0.3'], port_security=['fa:16:3e:9a:79:ab 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '47257c6e-4d10-4d8e-af5a-b57db20048ea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2d149f-aebf-406a-aed2-5161dd22b079', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8945d892653642ac9c3d894f8bc3619d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c96b2336-ed00-4da6-b121-ce1c9aa6f017', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e933e805-d9b6-497a-9ba3-5f7153390420, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=971d99c2-5a60-4cac-8f99-e819d71e419c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.552 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 971d99c2-5a60-4cac-8f99-e819d71e419c in datapath fc2d149f-aebf-406a-aed2-5161dd22b079 bound to our chassis#033[00m
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.553 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc2d149f-aebf-406a-aed2-5161dd22b079#033[00m
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.564 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[84c1bf6c-7a60-4e8b-9d77-d0220961d135]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.565 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfc2d149f-a1 in ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.566 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfc2d149f-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.566 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2e9acc79-2ebe-4314-b343-0b1ea71dbbcf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.567 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0846a2ba-bcb9-4948-b9c8-d5abfce2d104]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.565 2 DEBUG nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.566 2 DEBUG nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.566 2 DEBUG nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] No VIF found with MAC fa:16:3e:53:a5:80, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.566 2 INFO nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Using config drive#033[00m
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.579 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[64b8b4fb-3d3e-4dad-9b6e-63fb506d22a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:55 np0005486808 systemd-machined[214636]: New machine qemu-67-instance-00000036.
Oct 14 05:04:55 np0005486808 systemd[1]: Started Virtual Machine qemu-67-instance-00000036.
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.600 2 DEBUG nova.storage.rbd_utils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image 1ce7a863-d0bf-4ea3-80f5-18675b16ac93_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:04:55 np0005486808 systemd-udevd[319154]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.605 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[26ccb375-d5ad-4d8b-933d-db110e1db7b8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:55 np0005486808 NetworkManager[44885]: <info>  [1760432695.6269] device (tap971d99c2-5a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:04:55 np0005486808 NetworkManager[44885]: <info>  [1760432695.6285] device (tap971d99c2-5a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.637 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[9004165e-f9f7-4bcd-951f-aaafc5196771]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.644 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a37dab74-74ff-4e29-a6de-a2fdf18e91a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:55 np0005486808 NetworkManager[44885]: <info>  [1760432695.6468] manager: (tapfc2d149f-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/224)
Oct 14 05:04:55 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:55Z|00519|binding|INFO|Setting lport 971d99c2-5a60-4cac-8f99-e819d71e419c ovn-installed in OVS
Oct 14 05:04:55 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:55Z|00520|binding|INFO|Setting lport 971d99c2-5a60-4cac-8f99-e819d71e419c up in Southbound
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.685 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4dff792e-feb7-492a-83d2-4f96daebcd69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.690 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[9eb8ad08-1aa0-4ad1-8d83-1af6e58c31f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:55 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1480: 305 pgs: 305 active+clean; 134 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 3.5 MiB/s wr, 83 op/s
Oct 14 05:04:55 np0005486808 NetworkManager[44885]: <info>  [1760432695.7115] device (tapfc2d149f-a0): carrier: link connected
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.713 2 DEBUG nova.network.neutron [req-53c5855b-48e3-49ba-80b2-9d725c97d068 req-7121235a-3875-4843-914d-bd515e829902 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Updated VIF entry in instance network info cache for port 971d99c2-5a60-4cac-8f99-e819d71e419c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.713 2 DEBUG nova.network.neutron [req-53c5855b-48e3-49ba-80b2-9d725c97d068 req-7121235a-3875-4843-914d-bd515e829902 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Updating instance_info_cache with network_info: [{"id": "971d99c2-5a60-4cac-8f99-e819d71e419c", "address": "fa:16:3e:9a:79:ab", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap971d99c2-5a", "ovs_interfaceid": "971d99c2-5a60-4cac-8f99-e819d71e419c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.718 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a5c51fd0-8ec7-4824-aaa1-3512d2caedbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.731 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[88bd7c63-f57f-416c-affe-550e260ab681]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc2d149f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:e7:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647447, 'reachable_time': 19128, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319189, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.740 2 DEBUG oslo_concurrency.lockutils [req-53c5855b-48e3-49ba-80b2-9d725c97d068 req-7121235a-3875-4843-914d-bd515e829902 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.746 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3e1a7034-2103-4a23-bbdc-467525b35da6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe32:e73e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647447, 'tstamp': 647447}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319190, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.763 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a770bc77-6923-4a5d-8285-553b420408fa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc2d149f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:e7:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647447, 'reachable_time': 19128, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 319198, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.796 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[457f0635-b6a6-49f8-92ee-ac11e1992d38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.879 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[243685fd-1512-4a7e-b59e-1ccd6ea87abd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.880 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc2d149f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.881 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.882 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc2d149f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:55 np0005486808 NetworkManager[44885]: <info>  [1760432695.9215] manager: (tapfc2d149f-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/225)
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.923 2 DEBUG nova.network.neutron [req-6a3b91ee-b0ee-4fe0-ab87-d67a73ae97a3 req-da0629c3-ace9-4d55-9cd1-514a7b0fc421 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Updated VIF entry in instance network info cache for port 58429c4c-bdab-4d51-8440-95fb6e0fab00. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.923 2 DEBUG nova.network.neutron [req-6a3b91ee-b0ee-4fe0-ab87-d67a73ae97a3 req-da0629c3-ace9-4d55-9cd1-514a7b0fc421 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Updating instance_info_cache with network_info: [{"id": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "address": "fa:16:3e:53:a5:80", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58429c4c-bd", "ovs_interfaceid": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:04:55 np0005486808 kernel: tapfc2d149f-a0: entered promiscuous mode
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.927 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc2d149f-a0, col_values=(('external_ids', {'iface-id': '156432ee-35b5-40a9-aded-8066933d9972'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:55 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:55Z|00521|binding|INFO|Releasing lport 156432ee-35b5-40a9-aded-8066933d9972 from this chassis (sb_readonly=0)
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.940 2 DEBUG oslo_concurrency.lockutils [req-6a3b91ee-b0ee-4fe0-ab87-d67a73ae97a3 req-da0629c3-ace9-4d55-9cd1-514a7b0fc421 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-1ce7a863-d0bf-4ea3-80f5-18675b16ac93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:55 np0005486808 nova_compute[259627]: 2025-10-14 09:04:55.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.951 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fc2d149f-aebf-406a-aed2-5161dd22b079.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fc2d149f-aebf-406a-aed2-5161dd22b079.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.952 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6afe58ff-f0d4-40bc-9f3c-c5470ff1cff0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.952 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-fc2d149f-aebf-406a-aed2-5161dd22b079
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/fc2d149f-aebf-406a-aed2-5161dd22b079.pid.haproxy
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID fc2d149f-aebf-406a-aed2-5161dd22b079
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:04:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:55.953 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'env', 'PROCESS_TAG=haproxy-fc2d149f-aebf-406a-aed2-5161dd22b079', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fc2d149f-aebf-406a-aed2-5161dd22b079.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:04:56 np0005486808 nova_compute[259627]: 2025-10-14 09:04:56.071 2 INFO nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Creating config drive at /var/lib/nova/instances/1ce7a863-d0bf-4ea3-80f5-18675b16ac93/disk.config#033[00m
Oct 14 05:04:56 np0005486808 nova_compute[259627]: 2025-10-14 09:04:56.078 2 DEBUG oslo_concurrency.processutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1ce7a863-d0bf-4ea3-80f5-18675b16ac93/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0t4ksl32 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:04:56 np0005486808 nova_compute[259627]: 2025-10-14 09:04:56.137 2 DEBUG nova.compute.manager [req-ec8ccaee-da92-48d9-9fcf-4b476afc9b6c req-8d252e40-5090-41eb-95d0-b3c0b297d93f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-vif-plugged-971d99c2-5a60-4cac-8f99-e819d71e419c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:04:56 np0005486808 nova_compute[259627]: 2025-10-14 09:04:56.138 2 DEBUG oslo_concurrency.lockutils [req-ec8ccaee-da92-48d9-9fcf-4b476afc9b6c req-8d252e40-5090-41eb-95d0-b3c0b297d93f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:04:56 np0005486808 nova_compute[259627]: 2025-10-14 09:04:56.138 2 DEBUG oslo_concurrency.lockutils [req-ec8ccaee-da92-48d9-9fcf-4b476afc9b6c req-8d252e40-5090-41eb-95d0-b3c0b297d93f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:04:56 np0005486808 nova_compute[259627]: 2025-10-14 09:04:56.138 2 DEBUG oslo_concurrency.lockutils [req-ec8ccaee-da92-48d9-9fcf-4b476afc9b6c req-8d252e40-5090-41eb-95d0-b3c0b297d93f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:56 np0005486808 nova_compute[259627]: 2025-10-14 09:04:56.138 2 DEBUG nova.compute.manager [req-ec8ccaee-da92-48d9-9fcf-4b476afc9b6c req-8d252e40-5090-41eb-95d0-b3c0b297d93f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Processing event network-vif-plugged-971d99c2-5a60-4cac-8f99-e819d71e419c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:04:56 np0005486808 nova_compute[259627]: 2025-10-14 09:04:56.215 2 DEBUG oslo_concurrency.processutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1ce7a863-d0bf-4ea3-80f5-18675b16ac93/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0t4ksl32" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:04:56 np0005486808 nova_compute[259627]: 2025-10-14 09:04:56.239 2 DEBUG nova.storage.rbd_utils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image 1ce7a863-d0bf-4ea3-80f5-18675b16ac93_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:04:56 np0005486808 nova_compute[259627]: 2025-10-14 09:04:56.242 2 DEBUG oslo_concurrency.processutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1ce7a863-d0bf-4ea3-80f5-18675b16ac93/disk.config 1ce7a863-d0bf-4ea3-80f5-18675b16ac93_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:04:56 np0005486808 podman[319291]: 2025-10-14 09:04:56.308981356 +0000 UTC m=+0.041458410 container create 7a551c3cce38d761683a15909fa8c9299ff9f816597b206d46d50efd286d6eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 05:04:56 np0005486808 systemd[1]: Started libpod-conmon-7a551c3cce38d761683a15909fa8c9299ff9f816597b206d46d50efd286d6eb3.scope.
Oct 14 05:04:56 np0005486808 nova_compute[259627]: 2025-10-14 09:04:56.340 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432696.3396842, 47257c6e-4d10-4d8e-af5a-b57db20048ea => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:04:56 np0005486808 nova_compute[259627]: 2025-10-14 09:04:56.341 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] VM Started (Lifecycle Event)#033[00m
Oct 14 05:04:56 np0005486808 nova_compute[259627]: 2025-10-14 09:04:56.344 2 DEBUG nova.compute.manager [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:04:56 np0005486808 nova_compute[259627]: 2025-10-14 09:04:56.348 2 DEBUG nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:04:56 np0005486808 nova_compute[259627]: 2025-10-14 09:04:56.355 2 INFO nova.virt.libvirt.driver [-] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Instance spawned successfully.#033[00m
Oct 14 05:04:56 np0005486808 nova_compute[259627]: 2025-10-14 09:04:56.355 2 DEBUG nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:04:56 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:04:56 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85f03912ee5e1d4e48b9e053b6890e84bd070633917fc91412b280f99baf2363/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:04:56 np0005486808 podman[319291]: 2025-10-14 09:04:56.373234726 +0000 UTC m=+0.105711800 container init 7a551c3cce38d761683a15909fa8c9299ff9f816597b206d46d50efd286d6eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 14 05:04:56 np0005486808 nova_compute[259627]: 2025-10-14 09:04:56.375 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:04:56 np0005486808 podman[319291]: 2025-10-14 09:04:56.378570107 +0000 UTC m=+0.111047161 container start 7a551c3cce38d761683a15909fa8c9299ff9f816597b206d46d50efd286d6eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 05:04:56 np0005486808 podman[319291]: 2025-10-14 09:04:56.287775745 +0000 UTC m=+0.020252819 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:04:56 np0005486808 nova_compute[259627]: 2025-10-14 09:04:56.382 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:04:56 np0005486808 nova_compute[259627]: 2025-10-14 09:04:56.388 2 DEBUG nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:04:56 np0005486808 nova_compute[259627]: 2025-10-14 09:04:56.389 2 DEBUG nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:04:56 np0005486808 nova_compute[259627]: 2025-10-14 09:04:56.389 2 DEBUG nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:04:56 np0005486808 nova_compute[259627]: 2025-10-14 09:04:56.389 2 DEBUG nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:04:56 np0005486808 nova_compute[259627]: 2025-10-14 09:04:56.390 2 DEBUG nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:04:56 np0005486808 nova_compute[259627]: 2025-10-14 09:04:56.390 2 DEBUG nova.virt.libvirt.driver [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:04:56 np0005486808 neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079[319321]: [NOTICE]   (319328) : New worker (319330) forked
Oct 14 05:04:56 np0005486808 neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079[319321]: [NOTICE]   (319328) : Loading success.
Oct 14 05:04:56 np0005486808 nova_compute[259627]: 2025-10-14 09:04:56.416 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:04:56 np0005486808 nova_compute[259627]: 2025-10-14 09:04:56.416 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432696.3398838, 47257c6e-4d10-4d8e-af5a-b57db20048ea => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:04:56 np0005486808 nova_compute[259627]: 2025-10-14 09:04:56.416 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:04:56 np0005486808 nova_compute[259627]: 2025-10-14 09:04:56.423 2 DEBUG oslo_concurrency.processutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1ce7a863-d0bf-4ea3-80f5-18675b16ac93/disk.config 1ce7a863-d0bf-4ea3-80f5-18675b16ac93_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:04:56 np0005486808 nova_compute[259627]: 2025-10-14 09:04:56.424 2 INFO nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Deleting local config drive /var/lib/nova/instances/1ce7a863-d0bf-4ea3-80f5-18675b16ac93/disk.config because it was imported into RBD.#033[00m
Oct 14 05:04:56 np0005486808 nova_compute[259627]: 2025-10-14 09:04:56.441 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:04:56 np0005486808 nova_compute[259627]: 2025-10-14 09:04:56.446 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432696.3480465, 47257c6e-4d10-4d8e-af5a-b57db20048ea => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:04:56 np0005486808 nova_compute[259627]: 2025-10-14 09:04:56.447 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:04:56 np0005486808 nova_compute[259627]: 2025-10-14 09:04:56.449 2 INFO nova.compute.manager [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Took 7.26 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:04:56 np0005486808 nova_compute[259627]: 2025-10-14 09:04:56.450 2 DEBUG nova.compute.manager [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:04:56 np0005486808 nova_compute[259627]: 2025-10-14 09:04:56.461 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:04:56 np0005486808 nova_compute[259627]: 2025-10-14 09:04:56.467 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:04:56 np0005486808 kernel: tap58429c4c-bd: entered promiscuous mode
Oct 14 05:04:56 np0005486808 nova_compute[259627]: 2025-10-14 09:04:56.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:56 np0005486808 systemd-udevd[319184]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:04:56 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:56Z|00522|binding|INFO|Claiming lport 58429c4c-bdab-4d51-8440-95fb6e0fab00 for this chassis.
Oct 14 05:04:56 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:56Z|00523|binding|INFO|58429c4c-bdab-4d51-8440-95fb6e0fab00: Claiming fa:16:3e:53:a5:80 10.100.0.3
Oct 14 05:04:56 np0005486808 NetworkManager[44885]: <info>  [1760432696.4812] manager: (tap58429c4c-bd): new Tun device (/org/freedesktop/NetworkManager/Devices/226)
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.489 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:a5:80 10.100.0.3'], port_security=['fa:16:3e:53:a5:80 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '1ce7a863-d0bf-4ea3-80f5-18675b16ac93', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3b87118-f516-4f2d-8696-aa7290af9d83', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e47722c609640d3a70fee8dd6ff94cc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f73d8240-1201-4e28-9385-26f0dd3955ff', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07d19cad-5a0d-49ee-a7bf-91e279c3b03d, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=58429c4c-bdab-4d51-8440-95fb6e0fab00) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.490 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 58429c4c-bdab-4d51-8440-95fb6e0fab00 in datapath f3b87118-f516-4f2d-8696-aa7290af9d83 bound to our chassis#033[00m
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.491 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3b87118-f516-4f2d-8696-aa7290af9d83#033[00m
Oct 14 05:04:56 np0005486808 nova_compute[259627]: 2025-10-14 09:04:56.492 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:04:56 np0005486808 NetworkManager[44885]: <info>  [1760432696.4979] device (tap58429c4c-bd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:04:56 np0005486808 NetworkManager[44885]: <info>  [1760432696.4987] device (tap58429c4c-bd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.502 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cee8e013-d14b-4c47-b0c9-26c0926837b2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.502 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf3b87118-f1 in ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.504 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf3b87118-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.504 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fe50a562-e558-45ec-8620-51476a0deeb5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.505 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[770b029d-6550-41d9-acb8-0de9a436cee7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:56 np0005486808 systemd-machined[214636]: New machine qemu-68-instance-00000035.
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.516 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[0637f4a5-a866-4ab8-ac8d-8cf73d69f443]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:56 np0005486808 nova_compute[259627]: 2025-10-14 09:04:56.524 2 INFO nova.compute.manager [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Took 8.58 seconds to build instance.#033[00m
Oct 14 05:04:56 np0005486808 nova_compute[259627]: 2025-10-14 09:04:56.538 2 DEBUG oslo_concurrency.lockutils [None req-fc167e60-406f-42d5-9a12-142220b90d1a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.545 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f0123846-582c-4f22-a298-de67e87822e7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:56 np0005486808 systemd[1]: Started Virtual Machine qemu-68-instance-00000035.
Oct 14 05:04:56 np0005486808 nova_compute[259627]: 2025-10-14 09:04:56.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:56 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:56Z|00524|binding|INFO|Setting lport 58429c4c-bdab-4d51-8440-95fb6e0fab00 ovn-installed in OVS
Oct 14 05:04:56 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:56Z|00525|binding|INFO|Setting lport 58429c4c-bdab-4d51-8440-95fb6e0fab00 up in Southbound
Oct 14 05:04:56 np0005486808 nova_compute[259627]: 2025-10-14 09:04:56.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.589 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8905815d-155c-4041-ad4b-f8738a1b862f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.594 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[be01a6c2-deb2-412e-b990-78376f91033d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:56 np0005486808 NetworkManager[44885]: <info>  [1760432696.5951] manager: (tapf3b87118-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/227)
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.626 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ad9c4591-5e3c-456d-a61a-7f863f125fa4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.629 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ac860218-bea7-43e5-9289-a607ce083673]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:56 np0005486808 NetworkManager[44885]: <info>  [1760432696.6470] device (tapf3b87118-f0): carrier: link connected
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.652 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e8bcf45d-2f42-47de-bb77-6ad4623de003]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.679 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3f2de1b4-cc87-48e4-9f55-36c7b1dff1df]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3b87118-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:43:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647540, 'reachable_time': 24755, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319369, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.693 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a05d899b-054c-4853-8db8-672fe9353d19]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe87:43f6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647540, 'tstamp': 647540}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319370, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.705 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[29b7710c-39c0-4f73-8eba-2df4905384b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3b87118-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:43:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647540, 'reachable_time': 24755, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 319371, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.735 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f51ad508-9adc-43f5-8156-9dc12f3de803]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.804 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dccbcfd9-260d-413d-a9bb-f8a2ecb02a21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.806 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3b87118-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.806 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.807 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3b87118-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:56 np0005486808 nova_compute[259627]: 2025-10-14 09:04:56.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:56 np0005486808 NetworkManager[44885]: <info>  [1760432696.8103] manager: (tapf3b87118-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/228)
Oct 14 05:04:56 np0005486808 kernel: tapf3b87118-f0: entered promiscuous mode
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.821 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3b87118-f0, col_values=(('external_ids', {'iface-id': 'a7f44223-dee5-4a2f-b975-1f04f03b78f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:04:56 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:56Z|00526|binding|INFO|Releasing lport a7f44223-dee5-4a2f-b975-1f04f03b78f7 from this chassis (sb_readonly=0)
Oct 14 05:04:56 np0005486808 nova_compute[259627]: 2025-10-14 09:04:56.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.829 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f3b87118-f516-4f2d-8696-aa7290af9d83.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f3b87118-f516-4f2d-8696-aa7290af9d83.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.830 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3df8fe5d-da64-41f5-82d7-580cd80052bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.831 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-f3b87118-f516-4f2d-8696-aa7290af9d83
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/f3b87118-f516-4f2d-8696-aa7290af9d83.pid.haproxy
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID f3b87118-f516-4f2d-8696-aa7290af9d83
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:04:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:04:56.832 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'env', 'PROCESS_TAG=haproxy-f3b87118-f516-4f2d-8696-aa7290af9d83', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f3b87118-f516-4f2d-8696-aa7290af9d83.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:04:56 np0005486808 nova_compute[259627]: 2025-10-14 09:04:56.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:57 np0005486808 podman[319445]: 2025-10-14 09:04:57.246676431 +0000 UTC m=+0.051892267 container create 5ba6fea99da591b3014bdc8d9ea7cd148b4696e847c5038c822c0abbf14bd30b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 14 05:04:57 np0005486808 nova_compute[259627]: 2025-10-14 09:04:57.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:04:57 np0005486808 systemd[1]: Started libpod-conmon-5ba6fea99da591b3014bdc8d9ea7cd148b4696e847c5038c822c0abbf14bd30b.scope.
Oct 14 05:04:57 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:04:57 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c59812b275a7f08f721c19ad28122ba9e2d64d1426f442032d01ef6fa1360d6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:04:57 np0005486808 podman[319445]: 2025-10-14 09:04:57.219970455 +0000 UTC m=+0.025186341 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:04:57 np0005486808 podman[319445]: 2025-10-14 09:04:57.317072152 +0000 UTC m=+0.122288008 container init 5ba6fea99da591b3014bdc8d9ea7cd148b4696e847c5038c822c0abbf14bd30b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 05:04:57 np0005486808 podman[319445]: 2025-10-14 09:04:57.323362097 +0000 UTC m=+0.128577933 container start 5ba6fea99da591b3014bdc8d9ea7cd148b4696e847c5038c822c0abbf14bd30b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 14 05:04:57 np0005486808 neutron-haproxy-ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83[319460]: [NOTICE]   (319464) : New worker (319466) forked
Oct 14 05:04:57 np0005486808 neutron-haproxy-ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83[319460]: [NOTICE]   (319464) : Loading success.
Oct 14 05:04:57 np0005486808 nova_compute[259627]: 2025-10-14 09:04:57.444 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432697.4445388, 1ce7a863-d0bf-4ea3-80f5-18675b16ac93 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:04:57 np0005486808 nova_compute[259627]: 2025-10-14 09:04:57.445 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] VM Started (Lifecycle Event)#033[00m
Oct 14 05:04:57 np0005486808 nova_compute[259627]: 2025-10-14 09:04:57.495 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:04:57 np0005486808 nova_compute[259627]: 2025-10-14 09:04:57.499 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432697.4446344, 1ce7a863-d0bf-4ea3-80f5-18675b16ac93 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:04:57 np0005486808 nova_compute[259627]: 2025-10-14 09:04:57.501 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:04:57 np0005486808 nova_compute[259627]: 2025-10-14 09:04:57.521 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:04:57 np0005486808 nova_compute[259627]: 2025-10-14 09:04:57.525 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:04:57 np0005486808 nova_compute[259627]: 2025-10-14 09:04:57.546 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:04:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:04:57 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1481: 305 pgs: 305 active+clean; 134 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 MiB/s wr, 71 op/s
Oct 14 05:04:58 np0005486808 nova_compute[259627]: 2025-10-14 09:04:58.236 2 DEBUG nova.compute.manager [req-33b01b59-861c-42fa-b77d-ffdd1687bf17 req-80cc02fb-9b96-4215-a909-912224c91490 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-vif-plugged-971d99c2-5a60-4cac-8f99-e819d71e419c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:04:58 np0005486808 nova_compute[259627]: 2025-10-14 09:04:58.237 2 DEBUG oslo_concurrency.lockutils [req-33b01b59-861c-42fa-b77d-ffdd1687bf17 req-80cc02fb-9b96-4215-a909-912224c91490 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:04:58 np0005486808 nova_compute[259627]: 2025-10-14 09:04:58.238 2 DEBUG oslo_concurrency.lockutils [req-33b01b59-861c-42fa-b77d-ffdd1687bf17 req-80cc02fb-9b96-4215-a909-912224c91490 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:04:58 np0005486808 nova_compute[259627]: 2025-10-14 09:04:58.239 2 DEBUG oslo_concurrency.lockutils [req-33b01b59-861c-42fa-b77d-ffdd1687bf17 req-80cc02fb-9b96-4215-a909-912224c91490 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:04:58 np0005486808 nova_compute[259627]: 2025-10-14 09:04:58.239 2 DEBUG nova.compute.manager [req-33b01b59-861c-42fa-b77d-ffdd1687bf17 req-80cc02fb-9b96-4215-a909-912224c91490 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] No waiting events found dispatching network-vif-plugged-971d99c2-5a60-4cac-8f99-e819d71e419c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 05:04:58 np0005486808 nova_compute[259627]: 2025-10-14 09:04:58.240 2 WARNING nova.compute.manager [req-33b01b59-861c-42fa-b77d-ffdd1687bf17 req-80cc02fb-9b96-4215-a909-912224c91490 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received unexpected event network-vif-plugged-971d99c2-5a60-4cac-8f99-e819d71e419c for instance with vm_state active and task_state None.
Oct 14 05:04:58 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:58Z|00527|binding|INFO|Releasing lport 156432ee-35b5-40a9-aded-8066933d9972 from this chassis (sb_readonly=0)
Oct 14 05:04:58 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:58Z|00528|binding|INFO|Releasing lport a7f44223-dee5-4a2f-b975-1f04f03b78f7 from this chassis (sb_readonly=0)
Oct 14 05:04:58 np0005486808 NetworkManager[44885]: <info>  [1760432698.7622] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/229)
Oct 14 05:04:58 np0005486808 nova_compute[259627]: 2025-10-14 09:04:58.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:04:58 np0005486808 NetworkManager[44885]: <info>  [1760432698.7645] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/230)
Oct 14 05:04:58 np0005486808 nova_compute[259627]: 2025-10-14 09:04:58.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:04:58 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:58Z|00529|binding|INFO|Releasing lport 156432ee-35b5-40a9-aded-8066933d9972 from this chassis (sb_readonly=0)
Oct 14 05:04:58 np0005486808 ovn_controller[152662]: 2025-10-14T09:04:58Z|00530|binding|INFO|Releasing lport a7f44223-dee5-4a2f-b975-1f04f03b78f7 from this chassis (sb_readonly=0)
Oct 14 05:04:58 np0005486808 nova_compute[259627]: 2025-10-14 09:04:58.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:04:59 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1482: 305 pgs: 305 active+clean; 134 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 MiB/s wr, 71 op/s
Oct 14 05:04:59 np0005486808 nova_compute[259627]: 2025-10-14 09:04:59.971 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432684.9695382, e038df86-1323-4b04-afae-9fe68c98c22c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:04:59 np0005486808 nova_compute[259627]: 2025-10-14 09:04:59.972 2 INFO nova.compute.manager [-] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] VM Stopped (Lifecycle Event)
Oct 14 05:04:59 np0005486808 nova_compute[259627]: 2025-10-14 09:04:59.992 2 DEBUG nova.compute.manager [None req-9be7d0c4-fd42-495a-bc55-ce86df4f0d9a - - - - - -] [instance: e038df86-1323-4b04-afae-9fe68c98c22c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:05:00 np0005486808 nova_compute[259627]: 2025-10-14 09:05:00.342 2 DEBUG nova.compute.manager [req-fdee0557-a2fc-43c2-b802-871c8b485dc1 req-4d4a0d2b-edef-4edf-bec9-a08f06fab450 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-changed-971d99c2-5a60-4cac-8f99-e819d71e419c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:05:00 np0005486808 nova_compute[259627]: 2025-10-14 09:05:00.343 2 DEBUG nova.compute.manager [req-fdee0557-a2fc-43c2-b802-871c8b485dc1 req-4d4a0d2b-edef-4edf-bec9-a08f06fab450 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Refreshing instance network info cache due to event network-changed-971d99c2-5a60-4cac-8f99-e819d71e419c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 05:05:00 np0005486808 nova_compute[259627]: 2025-10-14 09:05:00.344 2 DEBUG oslo_concurrency.lockutils [req-fdee0557-a2fc-43c2-b802-871c8b485dc1 req-4d4a0d2b-edef-4edf-bec9-a08f06fab450 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 05:05:00 np0005486808 nova_compute[259627]: 2025-10-14 09:05:00.344 2 DEBUG oslo_concurrency.lockutils [req-fdee0557-a2fc-43c2-b802-871c8b485dc1 req-4d4a0d2b-edef-4edf-bec9-a08f06fab450 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 05:05:00 np0005486808 nova_compute[259627]: 2025-10-14 09:05:00.345 2 DEBUG nova.network.neutron [req-fdee0557-a2fc-43c2-b802-871c8b485dc1 req-4d4a0d2b-edef-4edf-bec9-a08f06fab450 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Refreshing network info cache for port 971d99c2-5a60-4cac-8f99-e819d71e419c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 05:05:00 np0005486808 nova_compute[259627]: 2025-10-14 09:05:00.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:05:01 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1483: 305 pgs: 305 active+clean; 134 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 154 op/s
Oct 14 05:05:02 np0005486808 nova_compute[259627]: 2025-10-14 09:05:02.040 2 DEBUG nova.network.neutron [req-fdee0557-a2fc-43c2-b802-871c8b485dc1 req-4d4a0d2b-edef-4edf-bec9-a08f06fab450 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Updated VIF entry in instance network info cache for port 971d99c2-5a60-4cac-8f99-e819d71e419c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 05:05:02 np0005486808 nova_compute[259627]: 2025-10-14 09:05:02.041 2 DEBUG nova.network.neutron [req-fdee0557-a2fc-43c2-b802-871c8b485dc1 req-4d4a0d2b-edef-4edf-bec9-a08f06fab450 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Updating instance_info_cache with network_info: [{"id": "971d99c2-5a60-4cac-8f99-e819d71e419c", "address": "fa:16:3e:9a:79:ab", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap971d99c2-5a", "ovs_interfaceid": "971d99c2-5a60-4cac-8f99-e819d71e419c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 05:05:02 np0005486808 nova_compute[259627]: 2025-10-14 09:05:02.064 2 DEBUG oslo_concurrency.lockutils [req-fdee0557-a2fc-43c2-b802-871c8b485dc1 req-4d4a0d2b-edef-4edf-bec9-a08f06fab450 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 05:05:02 np0005486808 nova_compute[259627]: 2025-10-14 09:05:02.116 2 DEBUG nova.compute.manager [req-08a9b2ea-28ae-48cd-95af-5ee2491bd5d6 req-4a8abe34-a1f3-4e5b-83e5-0e431220d1dc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Received event network-vif-plugged-58429c4c-bdab-4d51-8440-95fb6e0fab00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:05:02 np0005486808 nova_compute[259627]: 2025-10-14 09:05:02.117 2 DEBUG oslo_concurrency.lockutils [req-08a9b2ea-28ae-48cd-95af-5ee2491bd5d6 req-4a8abe34-a1f3-4e5b-83e5-0e431220d1dc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:05:02 np0005486808 nova_compute[259627]: 2025-10-14 09:05:02.118 2 DEBUG oslo_concurrency.lockutils [req-08a9b2ea-28ae-48cd-95af-5ee2491bd5d6 req-4a8abe34-a1f3-4e5b-83e5-0e431220d1dc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:05:02 np0005486808 nova_compute[259627]: 2025-10-14 09:05:02.119 2 DEBUG oslo_concurrency.lockutils [req-08a9b2ea-28ae-48cd-95af-5ee2491bd5d6 req-4a8abe34-a1f3-4e5b-83e5-0e431220d1dc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:05:02 np0005486808 nova_compute[259627]: 2025-10-14 09:05:02.119 2 DEBUG nova.compute.manager [req-08a9b2ea-28ae-48cd-95af-5ee2491bd5d6 req-4a8abe34-a1f3-4e5b-83e5-0e431220d1dc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Processing event network-vif-plugged-58429c4c-bdab-4d51-8440-95fb6e0fab00 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 05:05:02 np0005486808 nova_compute[259627]: 2025-10-14 09:05:02.120 2 DEBUG nova.compute.manager [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 05:05:02 np0005486808 nova_compute[259627]: 2025-10-14 09:05:02.124 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432702.1243072, 1ce7a863-d0bf-4ea3-80f5-18675b16ac93 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:05:02 np0005486808 nova_compute[259627]: 2025-10-14 09:05:02.125 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] VM Resumed (Lifecycle Event)
Oct 14 05:05:02 np0005486808 nova_compute[259627]: 2025-10-14 09:05:02.127 2 DEBUG nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 05:05:02 np0005486808 nova_compute[259627]: 2025-10-14 09:05:02.132 2 INFO nova.virt.libvirt.driver [-] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Instance spawned successfully.
Oct 14 05:05:02 np0005486808 nova_compute[259627]: 2025-10-14 09:05:02.132 2 DEBUG nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 05:05:02 np0005486808 nova_compute[259627]: 2025-10-14 09:05:02.155 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:05:02 np0005486808 nova_compute[259627]: 2025-10-14 09:05:02.158 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 05:05:02 np0005486808 nova_compute[259627]: 2025-10-14 09:05:02.164 2 DEBUG nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:05:02 np0005486808 nova_compute[259627]: 2025-10-14 09:05:02.165 2 DEBUG nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:05:02 np0005486808 nova_compute[259627]: 2025-10-14 09:05:02.165 2 DEBUG nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:05:02 np0005486808 nova_compute[259627]: 2025-10-14 09:05:02.165 2 DEBUG nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:05:02 np0005486808 nova_compute[259627]: 2025-10-14 09:05:02.166 2 DEBUG nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:05:02 np0005486808 nova_compute[259627]: 2025-10-14 09:05:02.166 2 DEBUG nova.virt.libvirt.driver [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:05:02 np0005486808 nova_compute[259627]: 2025-10-14 09:05:02.204 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 05:05:02 np0005486808 nova_compute[259627]: 2025-10-14 09:05:02.241 2 INFO nova.compute.manager [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Took 13.87 seconds to spawn the instance on the hypervisor.
Oct 14 05:05:02 np0005486808 nova_compute[259627]: 2025-10-14 09:05:02.243 2 DEBUG nova.compute.manager [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:05:02 np0005486808 nova_compute[259627]: 2025-10-14 09:05:02.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:05:02 np0005486808 nova_compute[259627]: 2025-10-14 09:05:02.320 2 INFO nova.compute.manager [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Took 14.87 seconds to build instance.
Oct 14 05:05:02 np0005486808 nova_compute[259627]: 2025-10-14 09:05:02.347 2 DEBUG oslo_concurrency.lockutils [None req-19abdae1-f926-4192-9ace-d10a77aa09a5 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.973s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:05:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:05:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:05:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:05:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:05:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:05:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:05:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:05:03 np0005486808 nova_compute[259627]: 2025-10-14 09:05:03.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:05:03 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1484: 305 pgs: 305 active+clean; 134 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 29 KiB/s wr, 83 op/s
Oct 14 05:05:04 np0005486808 nova_compute[259627]: 2025-10-14 09:05:04.238 2 DEBUG nova.compute.manager [req-1416b009-f3a0-4755-a0c1-4d4ee8f19798 req-1c8af559-a8fa-4126-ae83-c0e57b0abc36 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Received event network-vif-plugged-58429c4c-bdab-4d51-8440-95fb6e0fab00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:05:04 np0005486808 nova_compute[259627]: 2025-10-14 09:05:04.239 2 DEBUG oslo_concurrency.lockutils [req-1416b009-f3a0-4755-a0c1-4d4ee8f19798 req-1c8af559-a8fa-4126-ae83-c0e57b0abc36 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:05:04 np0005486808 nova_compute[259627]: 2025-10-14 09:05:04.240 2 DEBUG oslo_concurrency.lockutils [req-1416b009-f3a0-4755-a0c1-4d4ee8f19798 req-1c8af559-a8fa-4126-ae83-c0e57b0abc36 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:05:04 np0005486808 nova_compute[259627]: 2025-10-14 09:05:04.241 2 DEBUG oslo_concurrency.lockutils [req-1416b009-f3a0-4755-a0c1-4d4ee8f19798 req-1c8af559-a8fa-4126-ae83-c0e57b0abc36 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:05:04 np0005486808 nova_compute[259627]: 2025-10-14 09:05:04.241 2 DEBUG nova.compute.manager [req-1416b009-f3a0-4755-a0c1-4d4ee8f19798 req-1c8af559-a8fa-4126-ae83-c0e57b0abc36 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] No waiting events found dispatching network-vif-plugged-58429c4c-bdab-4d51-8440-95fb6e0fab00 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 05:05:04 np0005486808 nova_compute[259627]: 2025-10-14 09:05:04.242 2 WARNING nova.compute.manager [req-1416b009-f3a0-4755-a0c1-4d4ee8f19798 req-1c8af559-a8fa-4126-ae83-c0e57b0abc36 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Received unexpected event network-vif-plugged-58429c4c-bdab-4d51-8440-95fb6e0fab00 for instance with vm_state active and task_state None.
Oct 14 05:05:05 np0005486808 nova_compute[259627]: 2025-10-14 09:05:05.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:05:05 np0005486808 nova_compute[259627]: 2025-10-14 09:05:05.485 2 DEBUG nova.compute.manager [req-58ca31a3-974c-4021-9b35-1967b1e1fb52 req-45515b9a-03f7-4224-bb1f-9ce62132760a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Received event network-changed-58429c4c-bdab-4d51-8440-95fb6e0fab00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:05:05 np0005486808 nova_compute[259627]: 2025-10-14 09:05:05.485 2 DEBUG nova.compute.manager [req-58ca31a3-974c-4021-9b35-1967b1e1fb52 req-45515b9a-03f7-4224-bb1f-9ce62132760a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Refreshing instance network info cache due to event network-changed-58429c4c-bdab-4d51-8440-95fb6e0fab00. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 05:05:05 np0005486808 nova_compute[259627]: 2025-10-14 09:05:05.486 2 DEBUG oslo_concurrency.lockutils [req-58ca31a3-974c-4021-9b35-1967b1e1fb52 req-45515b9a-03f7-4224-bb1f-9ce62132760a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-1ce7a863-d0bf-4ea3-80f5-18675b16ac93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 05:05:05 np0005486808 nova_compute[259627]: 2025-10-14 09:05:05.486 2 DEBUG oslo_concurrency.lockutils [req-58ca31a3-974c-4021-9b35-1967b1e1fb52 req-45515b9a-03f7-4224-bb1f-9ce62132760a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-1ce7a863-d0bf-4ea3-80f5-18675b16ac93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 05:05:05 np0005486808 nova_compute[259627]: 2025-10-14 09:05:05.487 2 DEBUG nova.network.neutron [req-58ca31a3-974c-4021-9b35-1967b1e1fb52 req-45515b9a-03f7-4224-bb1f-9ce62132760a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Refreshing network info cache for port 58429c4c-bdab-4d51-8440-95fb6e0fab00 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 05:05:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:05:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3543546620' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:05:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:05:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3543546620' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:05:05 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1485: 305 pgs: 305 active+clean; 134 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 29 KiB/s wr, 147 op/s
Oct 14 05:05:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:07.023 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:05:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:07.024 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:05:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:07.024 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:05:07 np0005486808 podman[319476]: 2025-10-14 09:05:07.12721482 +0000 UTC m=+0.063214326 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 05:05:07 np0005486808 podman[319477]: 2025-10-14 09:05:07.135734759 +0000 UTC m=+0.065676176 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 05:05:07 np0005486808 nova_compute[259627]: 2025-10-14 09:05:07.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:05:07 np0005486808 nova_compute[259627]: 2025-10-14 09:05:07.269 2 DEBUG nova.network.neutron [req-58ca31a3-974c-4021-9b35-1967b1e1fb52 req-45515b9a-03f7-4224-bb1f-9ce62132760a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Updated VIF entry in instance network info cache for port 58429c4c-bdab-4d51-8440-95fb6e0fab00. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 05:05:07 np0005486808 nova_compute[259627]: 2025-10-14 09:05:07.269 2 DEBUG nova.network.neutron [req-58ca31a3-974c-4021-9b35-1967b1e1fb52 req-45515b9a-03f7-4224-bb1f-9ce62132760a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Updating instance_info_cache with network_info: [{"id": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "address": "fa:16:3e:53:a5:80", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58429c4c-bd", "ovs_interfaceid": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 05:05:07 np0005486808 nova_compute[259627]: 2025-10-14 09:05:07.292 2 DEBUG oslo_concurrency.lockutils [req-58ca31a3-974c-4021-9b35-1967b1e1fb52 req-45515b9a-03f7-4224-bb1f-9ce62132760a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-1ce7a863-d0bf-4ea3-80f5-18675b16ac93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:05:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:05:07 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1486: 305 pgs: 305 active+clean; 134 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 29 KiB/s wr, 146 op/s
Oct 14 05:05:08 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:08Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9a:79:ab 10.100.0.3
Oct 14 05:05:08 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:08Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9a:79:ab 10.100.0.3
Oct 14 05:05:08 np0005486808 nova_compute[259627]: 2025-10-14 09:05:08.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:09 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1487: 305 pgs: 305 active+clean; 134 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 29 KiB/s wr, 146 op/s
Oct 14 05:05:10 np0005486808 nova_compute[259627]: 2025-10-14 09:05:10.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:11 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1488: 305 pgs: 305 active+clean; 167 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 211 op/s
Oct 14 05:05:12 np0005486808 nova_compute[259627]: 2025-10-14 09:05:12.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:05:13 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:13Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:53:a5:80 10.100.0.3
Oct 14 05:05:13 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:13Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:53:a5:80 10.100.0.3
Oct 14 05:05:13 np0005486808 nova_compute[259627]: 2025-10-14 09:05:13.655 2 DEBUG oslo_concurrency.lockutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Acquiring lock "65c7e6ed-131f-4bca-af69-a1241d048bdb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:05:13 np0005486808 nova_compute[259627]: 2025-10-14 09:05:13.656 2 DEBUG oslo_concurrency.lockutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Lock "65c7e6ed-131f-4bca-af69-a1241d048bdb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:05:13 np0005486808 nova_compute[259627]: 2025-10-14 09:05:13.674 2 DEBUG nova.compute.manager [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:05:13 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1489: 305 pgs: 305 active+clean; 167 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Oct 14 05:05:13 np0005486808 nova_compute[259627]: 2025-10-14 09:05:13.794 2 DEBUG oslo_concurrency.lockutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:05:13 np0005486808 nova_compute[259627]: 2025-10-14 09:05:13.796 2 DEBUG oslo_concurrency.lockutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:05:13 np0005486808 nova_compute[259627]: 2025-10-14 09:05:13.811 2 DEBUG nova.virt.hardware [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:05:13 np0005486808 nova_compute[259627]: 2025-10-14 09:05:13.812 2 INFO nova.compute.claims [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:05:13 np0005486808 nova_compute[259627]: 2025-10-14 09:05:13.999 2 DEBUG oslo_concurrency.processutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:05:14 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:05:14 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2894978399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:05:14 np0005486808 nova_compute[259627]: 2025-10-14 09:05:14.474 2 DEBUG oslo_concurrency.processutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:05:14 np0005486808 nova_compute[259627]: 2025-10-14 09:05:14.479 2 DEBUG nova.compute.provider_tree [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:05:14 np0005486808 nova_compute[259627]: 2025-10-14 09:05:14.500 2 DEBUG nova.scheduler.client.report [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:05:14 np0005486808 nova_compute[259627]: 2025-10-14 09:05:14.530 2 DEBUG oslo_concurrency.lockutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:05:14 np0005486808 nova_compute[259627]: 2025-10-14 09:05:14.531 2 DEBUG nova.compute.manager [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:05:14 np0005486808 nova_compute[259627]: 2025-10-14 09:05:14.590 2 DEBUG nova.compute.manager [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:05:14 np0005486808 nova_compute[259627]: 2025-10-14 09:05:14.591 2 DEBUG nova.network.neutron [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:05:14 np0005486808 nova_compute[259627]: 2025-10-14 09:05:14.608 2 INFO nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:05:14 np0005486808 nova_compute[259627]: 2025-10-14 09:05:14.628 2 DEBUG nova.compute.manager [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:05:14 np0005486808 nova_compute[259627]: 2025-10-14 09:05:14.717 2 DEBUG nova.compute.manager [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:05:14 np0005486808 nova_compute[259627]: 2025-10-14 09:05:14.718 2 DEBUG nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:05:14 np0005486808 nova_compute[259627]: 2025-10-14 09:05:14.718 2 INFO nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Creating image(s)#033[00m
Oct 14 05:05:14 np0005486808 nova_compute[259627]: 2025-10-14 09:05:14.743 2 DEBUG nova.storage.rbd_utils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] rbd image 65c7e6ed-131f-4bca-af69-a1241d048bdb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:05:14 np0005486808 nova_compute[259627]: 2025-10-14 09:05:14.768 2 DEBUG nova.storage.rbd_utils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] rbd image 65c7e6ed-131f-4bca-af69-a1241d048bdb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:05:14 np0005486808 nova_compute[259627]: 2025-10-14 09:05:14.791 2 DEBUG nova.storage.rbd_utils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] rbd image 65c7e6ed-131f-4bca-af69-a1241d048bdb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:05:14 np0005486808 nova_compute[259627]: 2025-10-14 09:05:14.796 2 DEBUG oslo_concurrency.processutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:05:14 np0005486808 nova_compute[259627]: 2025-10-14 09:05:14.881 2 DEBUG oslo_concurrency.processutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:05:14 np0005486808 nova_compute[259627]: 2025-10-14 09:05:14.882 2 DEBUG oslo_concurrency.lockutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:05:14 np0005486808 nova_compute[259627]: 2025-10-14 09:05:14.883 2 DEBUG oslo_concurrency.lockutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:05:14 np0005486808 nova_compute[259627]: 2025-10-14 09:05:14.883 2 DEBUG oslo_concurrency.lockutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:05:14 np0005486808 nova_compute[259627]: 2025-10-14 09:05:14.929 2 DEBUG nova.storage.rbd_utils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] rbd image 65c7e6ed-131f-4bca-af69-a1241d048bdb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:05:14 np0005486808 nova_compute[259627]: 2025-10-14 09:05:14.934 2 DEBUG oslo_concurrency.processutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 65c7e6ed-131f-4bca-af69-a1241d048bdb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:05:14 np0005486808 nova_compute[259627]: 2025-10-14 09:05:14.998 2 DEBUG nova.policy [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '64a22b9370d049c0b189508f3f58f0ca', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '55e6e0201a064f1390a998f830140354', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:05:15 np0005486808 nova_compute[259627]: 2025-10-14 09:05:15.228 2 DEBUG oslo_concurrency.processutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 65c7e6ed-131f-4bca-af69-a1241d048bdb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:05:15 np0005486808 nova_compute[259627]: 2025-10-14 09:05:15.317 2 DEBUG nova.storage.rbd_utils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] resizing rbd image 65c7e6ed-131f-4bca-af69-a1241d048bdb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:05:15 np0005486808 nova_compute[259627]: 2025-10-14 09:05:15.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:15 np0005486808 nova_compute[259627]: 2025-10-14 09:05:15.440 2 DEBUG nova.objects.instance [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Lazy-loading 'migration_context' on Instance uuid 65c7e6ed-131f-4bca-af69-a1241d048bdb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:05:15 np0005486808 nova_compute[259627]: 2025-10-14 09:05:15.462 2 DEBUG nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:05:15 np0005486808 nova_compute[259627]: 2025-10-14 09:05:15.463 2 DEBUG nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Ensure instance console log exists: /var/lib/nova/instances/65c7e6ed-131f-4bca-af69-a1241d048bdb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:05:15 np0005486808 nova_compute[259627]: 2025-10-14 09:05:15.464 2 DEBUG oslo_concurrency.lockutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:05:15 np0005486808 nova_compute[259627]: 2025-10-14 09:05:15.465 2 DEBUG oslo_concurrency.lockutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:05:15 np0005486808 nova_compute[259627]: 2025-10-14 09:05:15.465 2 DEBUG oslo_concurrency.lockutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:05:15 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1490: 305 pgs: 305 active+clean; 224 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.4 MiB/s wr, 217 op/s
Oct 14 05:05:15 np0005486808 nova_compute[259627]: 2025-10-14 09:05:15.807 2 DEBUG nova.network.neutron [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Successfully created port: 282dfd9e-9e84-450c-a306-8bc55428feb4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:05:16 np0005486808 nova_compute[259627]: 2025-10-14 09:05:16.246 2 DEBUG oslo_concurrency.lockutils [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "interface-47257c6e-4d10-4d8e-af5a-b57db20048ea-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:05:16 np0005486808 nova_compute[259627]: 2025-10-14 09:05:16.248 2 DEBUG oslo_concurrency.lockutils [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "interface-47257c6e-4d10-4d8e-af5a-b57db20048ea-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:05:16 np0005486808 nova_compute[259627]: 2025-10-14 09:05:16.248 2 DEBUG nova.objects.instance [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'flavor' on Instance uuid 47257c6e-4d10-4d8e-af5a-b57db20048ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:05:16 np0005486808 nova_compute[259627]: 2025-10-14 09:05:16.280 2 DEBUG nova.objects.instance [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'pci_requests' on Instance uuid 47257c6e-4d10-4d8e-af5a-b57db20048ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:05:16 np0005486808 nova_compute[259627]: 2025-10-14 09:05:16.299 2 DEBUG nova.network.neutron [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:05:16 np0005486808 nova_compute[259627]: 2025-10-14 09:05:16.781 2 DEBUG nova.policy [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8b0f0f2991214b9caed6f475a149c342', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8945d892653642ac9c3d894f8bc3619d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:05:16 np0005486808 nova_compute[259627]: 2025-10-14 09:05:16.872 2 DEBUG nova.network.neutron [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Successfully updated port: 282dfd9e-9e84-450c-a306-8bc55428feb4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:05:16 np0005486808 nova_compute[259627]: 2025-10-14 09:05:16.894 2 DEBUG oslo_concurrency.lockutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Acquiring lock "refresh_cache-65c7e6ed-131f-4bca-af69-a1241d048bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:05:16 np0005486808 nova_compute[259627]: 2025-10-14 09:05:16.895 2 DEBUG oslo_concurrency.lockutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Acquired lock "refresh_cache-65c7e6ed-131f-4bca-af69-a1241d048bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:05:16 np0005486808 nova_compute[259627]: 2025-10-14 09:05:16.895 2 DEBUG nova.network.neutron [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:05:16 np0005486808 nova_compute[259627]: 2025-10-14 09:05:16.970 2 DEBUG nova.compute.manager [req-ab09288e-0e5f-4955-95ed-1552b848b6d3 req-8663a462-1055-4319-ab85-596d2825084f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Received event network-changed-282dfd9e-9e84-450c-a306-8bc55428feb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:05:16 np0005486808 nova_compute[259627]: 2025-10-14 09:05:16.970 2 DEBUG nova.compute.manager [req-ab09288e-0e5f-4955-95ed-1552b848b6d3 req-8663a462-1055-4319-ab85-596d2825084f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Refreshing instance network info cache due to event network-changed-282dfd9e-9e84-450c-a306-8bc55428feb4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:05:16 np0005486808 nova_compute[259627]: 2025-10-14 09:05:16.971 2 DEBUG oslo_concurrency.lockutils [req-ab09288e-0e5f-4955-95ed-1552b848b6d3 req-8663a462-1055-4319-ab85-596d2825084f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-65c7e6ed-131f-4bca-af69-a1241d048bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:05:17 np0005486808 nova_compute[259627]: 2025-10-14 09:05:17.069 2 DEBUG nova.network.neutron [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:05:17 np0005486808 nova_compute[259627]: 2025-10-14 09:05:17.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:17 np0005486808 nova_compute[259627]: 2025-10-14 09:05:17.437 2 DEBUG nova.network.neutron [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Successfully created port: e3bc3ac3-6147-40d0-a19c-df111dcf23a5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:05:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:05:17 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1491: 305 pgs: 305 active+clean; 224 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 652 KiB/s rd, 5.4 MiB/s wr, 153 op/s
Oct 14 05:05:18 np0005486808 nova_compute[259627]: 2025-10-14 09:05:18.151 2 DEBUG nova.network.neutron [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Updating instance_info_cache with network_info: [{"id": "282dfd9e-9e84-450c-a306-8bc55428feb4", "address": "fa:16:3e:de:50:93", "network": {"id": "77fb75b2-483b-47a5-99a5-ae91248b8ed8", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-2140863924-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55e6e0201a064f1390a998f830140354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282dfd9e-9e", "ovs_interfaceid": "282dfd9e-9e84-450c-a306-8bc55428feb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:05:18 np0005486808 nova_compute[259627]: 2025-10-14 09:05:18.159 2 DEBUG nova.network.neutron [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Successfully updated port: e3bc3ac3-6147-40d0-a19c-df111dcf23a5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:05:18 np0005486808 nova_compute[259627]: 2025-10-14 09:05:18.171 2 DEBUG oslo_concurrency.lockutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Releasing lock "refresh_cache-65c7e6ed-131f-4bca-af69-a1241d048bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:05:18 np0005486808 nova_compute[259627]: 2025-10-14 09:05:18.172 2 DEBUG nova.compute.manager [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Instance network_info: |[{"id": "282dfd9e-9e84-450c-a306-8bc55428feb4", "address": "fa:16:3e:de:50:93", "network": {"id": "77fb75b2-483b-47a5-99a5-ae91248b8ed8", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-2140863924-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55e6e0201a064f1390a998f830140354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282dfd9e-9e", "ovs_interfaceid": "282dfd9e-9e84-450c-a306-8bc55428feb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:05:18 np0005486808 nova_compute[259627]: 2025-10-14 09:05:18.172 2 DEBUG oslo_concurrency.lockutils [req-ab09288e-0e5f-4955-95ed-1552b848b6d3 req-8663a462-1055-4319-ab85-596d2825084f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-65c7e6ed-131f-4bca-af69-a1241d048bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:05:18 np0005486808 nova_compute[259627]: 2025-10-14 09:05:18.172 2 DEBUG nova.network.neutron [req-ab09288e-0e5f-4955-95ed-1552b848b6d3 req-8663a462-1055-4319-ab85-596d2825084f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Refreshing network info cache for port 282dfd9e-9e84-450c-a306-8bc55428feb4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:05:18 np0005486808 nova_compute[259627]: 2025-10-14 09:05:18.175 2 DEBUG nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Start _get_guest_xml network_info=[{"id": "282dfd9e-9e84-450c-a306-8bc55428feb4", "address": "fa:16:3e:de:50:93", "network": {"id": "77fb75b2-483b-47a5-99a5-ae91248b8ed8", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-2140863924-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55e6e0201a064f1390a998f830140354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282dfd9e-9e", "ovs_interfaceid": "282dfd9e-9e84-450c-a306-8bc55428feb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:05:18 np0005486808 nova_compute[259627]: 2025-10-14 09:05:18.177 2 DEBUG oslo_concurrency.lockutils [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:05:18 np0005486808 nova_compute[259627]: 2025-10-14 09:05:18.177 2 DEBUG oslo_concurrency.lockutils [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquired lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:05:18 np0005486808 nova_compute[259627]: 2025-10-14 09:05:18.178 2 DEBUG nova.network.neutron [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:05:18 np0005486808 nova_compute[259627]: 2025-10-14 09:05:18.182 2 WARNING nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:05:18 np0005486808 nova_compute[259627]: 2025-10-14 09:05:18.187 2 DEBUG nova.virt.libvirt.host [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:05:18 np0005486808 nova_compute[259627]: 2025-10-14 09:05:18.188 2 DEBUG nova.virt.libvirt.host [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:05:18 np0005486808 nova_compute[259627]: 2025-10-14 09:05:18.201 2 DEBUG nova.virt.libvirt.host [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:05:18 np0005486808 nova_compute[259627]: 2025-10-14 09:05:18.202 2 DEBUG nova.virt.libvirt.host [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:05:18 np0005486808 nova_compute[259627]: 2025-10-14 09:05:18.202 2 DEBUG nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:05:18 np0005486808 nova_compute[259627]: 2025-10-14 09:05:18.203 2 DEBUG nova.virt.hardware [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:05:18 np0005486808 nova_compute[259627]: 2025-10-14 09:05:18.204 2 DEBUG nova.virt.hardware [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:05:18 np0005486808 nova_compute[259627]: 2025-10-14 09:05:18.204 2 DEBUG nova.virt.hardware [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:05:18 np0005486808 nova_compute[259627]: 2025-10-14 09:05:18.205 2 DEBUG nova.virt.hardware [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:05:18 np0005486808 nova_compute[259627]: 2025-10-14 09:05:18.205 2 DEBUG nova.virt.hardware [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:05:18 np0005486808 nova_compute[259627]: 2025-10-14 09:05:18.206 2 DEBUG nova.virt.hardware [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:05:18 np0005486808 nova_compute[259627]: 2025-10-14 09:05:18.206 2 DEBUG nova.virt.hardware [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:05:18 np0005486808 nova_compute[259627]: 2025-10-14 09:05:18.207 2 DEBUG nova.virt.hardware [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:05:18 np0005486808 nova_compute[259627]: 2025-10-14 09:05:18.207 2 DEBUG nova.virt.hardware [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:05:18 np0005486808 nova_compute[259627]: 2025-10-14 09:05:18.207 2 DEBUG nova.virt.hardware [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:05:18 np0005486808 nova_compute[259627]: 2025-10-14 09:05:18.208 2 DEBUG nova.virt.hardware [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:05:18 np0005486808 nova_compute[259627]: 2025-10-14 09:05:18.212 2 DEBUG oslo_concurrency.processutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:05:18 np0005486808 nova_compute[259627]: 2025-10-14 09:05:18.342 2 WARNING nova.network.neutron [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] fc2d149f-aebf-406a-aed2-5161dd22b079 already exists in list: networks containing: ['fc2d149f-aebf-406a-aed2-5161dd22b079']. ignoring it#033[00m
Oct 14 05:05:18 np0005486808 nova_compute[259627]: 2025-10-14 09:05:18.427 2 DEBUG nova.compute.manager [req-0c7393a9-714f-4565-88ed-7b182557e0c7 req-8a06e4e4-ef4b-43d9-8643-bb14f65edda3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-changed-e3bc3ac3-6147-40d0-a19c-df111dcf23a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:05:18 np0005486808 nova_compute[259627]: 2025-10-14 09:05:18.428 2 DEBUG nova.compute.manager [req-0c7393a9-714f-4565-88ed-7b182557e0c7 req-8a06e4e4-ef4b-43d9-8643-bb14f65edda3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Refreshing instance network info cache due to event network-changed-e3bc3ac3-6147-40d0-a19c-df111dcf23a5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:05:18 np0005486808 nova_compute[259627]: 2025-10-14 09:05:18.428 2 DEBUG oslo_concurrency.lockutils [req-0c7393a9-714f-4565-88ed-7b182557e0c7 req-8a06e4e4-ef4b-43d9-8643-bb14f65edda3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:05:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:05:18 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2727335350' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:05:18 np0005486808 nova_compute[259627]: 2025-10-14 09:05:18.707 2 DEBUG oslo_concurrency.processutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:05:18 np0005486808 nova_compute[259627]: 2025-10-14 09:05:18.742 2 DEBUG nova.storage.rbd_utils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] rbd image 65c7e6ed-131f-4bca-af69-a1241d048bdb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:05:18 np0005486808 nova_compute[259627]: 2025-10-14 09:05:18.747 2 DEBUG oslo_concurrency.processutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:05:19 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:05:19 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2178235096' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:05:19 np0005486808 nova_compute[259627]: 2025-10-14 09:05:19.249 2 DEBUG oslo_concurrency.processutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:05:19 np0005486808 nova_compute[259627]: 2025-10-14 09:05:19.252 2 DEBUG nova.virt.libvirt.vif [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:05:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-100646485',display_name='tempest-ServersTestManualDisk-server-100646485',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-100646485',id=55,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBFq4tcZMNkmwexYMay7CcSUnt45X5jGbu/ngQCrGssdHqitjMlfE2R1DP+cztwj+Jbcg255ZB+kgmwp3pbM6el/CrOnrVr2V0onKRN9dF6T7lO2ORJc789YDLKzPg0Nog==',key_name='tempest-keypair-532072477',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='55e6e0201a064f1390a998f830140354',ramdisk_id='',reservation_id='r-5wf160ka',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-748280037',owner_user_name='tempest-ServersTestManualDisk-748280037-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:05:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='64a22b9370d049c0b189508f3f58f0ca',uuid=65c7e6ed-131f-4bca-af69-a1241d048bdb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "282dfd9e-9e84-450c-a306-8bc55428feb4", "address": "fa:16:3e:de:50:93", "network": {"id": "77fb75b2-483b-47a5-99a5-ae91248b8ed8", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-2140863924-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55e6e0201a064f1390a998f830140354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282dfd9e-9e", "ovs_interfaceid": "282dfd9e-9e84-450c-a306-8bc55428feb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:05:19 np0005486808 nova_compute[259627]: 2025-10-14 09:05:19.253 2 DEBUG nova.network.os_vif_util [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Converting VIF {"id": "282dfd9e-9e84-450c-a306-8bc55428feb4", "address": "fa:16:3e:de:50:93", "network": {"id": "77fb75b2-483b-47a5-99a5-ae91248b8ed8", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-2140863924-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55e6e0201a064f1390a998f830140354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282dfd9e-9e", "ovs_interfaceid": "282dfd9e-9e84-450c-a306-8bc55428feb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:05:19 np0005486808 nova_compute[259627]: 2025-10-14 09:05:19.254 2 DEBUG nova.network.os_vif_util [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:50:93,bridge_name='br-int',has_traffic_filtering=True,id=282dfd9e-9e84-450c-a306-8bc55428feb4,network=Network(77fb75b2-483b-47a5-99a5-ae91248b8ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap282dfd9e-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:05:19 np0005486808 nova_compute[259627]: 2025-10-14 09:05:19.257 2 DEBUG nova.objects.instance [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Lazy-loading 'pci_devices' on Instance uuid 65c7e6ed-131f-4bca-af69-a1241d048bdb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:05:19 np0005486808 nova_compute[259627]: 2025-10-14 09:05:19.285 2 DEBUG nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:05:19 np0005486808 nova_compute[259627]:  <uuid>65c7e6ed-131f-4bca-af69-a1241d048bdb</uuid>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:  <name>instance-00000037</name>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:05:19 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:      <nova:name>tempest-ServersTestManualDisk-server-100646485</nova:name>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:05:18</nova:creationTime>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:05:19 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:        <nova:user uuid="64a22b9370d049c0b189508f3f58f0ca">tempest-ServersTestManualDisk-748280037-project-member</nova:user>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:        <nova:project uuid="55e6e0201a064f1390a998f830140354">tempest-ServersTestManualDisk-748280037</nova:project>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:        <nova:port uuid="282dfd9e-9e84-450c-a306-8bc55428feb4">
Oct 14 05:05:19 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:      <entry name="serial">65c7e6ed-131f-4bca-af69-a1241d048bdb</entry>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:      <entry name="uuid">65c7e6ed-131f-4bca-af69-a1241d048bdb</entry>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:05:19 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/65c7e6ed-131f-4bca-af69-a1241d048bdb_disk">
Oct 14 05:05:19 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:05:19 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:05:19 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/65c7e6ed-131f-4bca-af69-a1241d048bdb_disk.config">
Oct 14 05:05:19 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:05:19 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:05:19 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:de:50:93"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:      <target dev="tap282dfd9e-9e"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:05:19 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/65c7e6ed-131f-4bca-af69-a1241d048bdb/console.log" append="off"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:05:19 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:05:19 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:05:19 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:05:19 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:05:19 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:05:19 np0005486808 nova_compute[259627]: 2025-10-14 09:05:19.288 2 DEBUG nova.compute.manager [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Preparing to wait for external event network-vif-plugged-282dfd9e-9e84-450c-a306-8bc55428feb4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:05:19 np0005486808 nova_compute[259627]: 2025-10-14 09:05:19.288 2 DEBUG oslo_concurrency.lockutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Acquiring lock "65c7e6ed-131f-4bca-af69-a1241d048bdb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:05:19 np0005486808 nova_compute[259627]: 2025-10-14 09:05:19.289 2 DEBUG oslo_concurrency.lockutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Lock "65c7e6ed-131f-4bca-af69-a1241d048bdb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:05:19 np0005486808 nova_compute[259627]: 2025-10-14 09:05:19.289 2 DEBUG oslo_concurrency.lockutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Lock "65c7e6ed-131f-4bca-af69-a1241d048bdb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:05:19 np0005486808 nova_compute[259627]: 2025-10-14 09:05:19.290 2 DEBUG nova.virt.libvirt.vif [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:05:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-100646485',display_name='tempest-ServersTestManualDisk-server-100646485',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-100646485',id=55,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBFq4tcZMNkmwexYMay7CcSUnt45X5jGbu/ngQCrGssdHqitjMlfE2R1DP+cztwj+Jbcg255ZB+kgmwp3pbM6el/CrOnrVr2V0onKRN9dF6T7lO2ORJc789YDLKzPg0Nog==',key_name='tempest-keypair-532072477',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='55e6e0201a064f1390a998f830140354',ramdisk_id='',reservation_id='r-5wf160ka',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-748280037',owner_user_name='tempest-ServersTestManualDisk-748280037-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:05:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='64a22b9370d049c0b189508f3f58f0ca',uuid=65c7e6ed-131f-4bca-af69-a1241d048bdb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "282dfd9e-9e84-450c-a306-8bc55428feb4", "address": "fa:16:3e:de:50:93", "network": {"id": "77fb75b2-483b-47a5-99a5-ae91248b8ed8", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-2140863924-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55e6e0201a064f1390a998f830140354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282dfd9e-9e", "ovs_interfaceid": "282dfd9e-9e84-450c-a306-8bc55428feb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:05:19 np0005486808 nova_compute[259627]: 2025-10-14 09:05:19.291 2 DEBUG nova.network.os_vif_util [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Converting VIF {"id": "282dfd9e-9e84-450c-a306-8bc55428feb4", "address": "fa:16:3e:de:50:93", "network": {"id": "77fb75b2-483b-47a5-99a5-ae91248b8ed8", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-2140863924-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55e6e0201a064f1390a998f830140354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282dfd9e-9e", "ovs_interfaceid": "282dfd9e-9e84-450c-a306-8bc55428feb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:05:19 np0005486808 nova_compute[259627]: 2025-10-14 09:05:19.292 2 DEBUG nova.network.os_vif_util [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:50:93,bridge_name='br-int',has_traffic_filtering=True,id=282dfd9e-9e84-450c-a306-8bc55428feb4,network=Network(77fb75b2-483b-47a5-99a5-ae91248b8ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap282dfd9e-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:05:19 np0005486808 nova_compute[259627]: 2025-10-14 09:05:19.293 2 DEBUG os_vif [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:50:93,bridge_name='br-int',has_traffic_filtering=True,id=282dfd9e-9e84-450c-a306-8bc55428feb4,network=Network(77fb75b2-483b-47a5-99a5-ae91248b8ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap282dfd9e-9e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:05:19 np0005486808 nova_compute[259627]: 2025-10-14 09:05:19.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:19 np0005486808 nova_compute[259627]: 2025-10-14 09:05:19.295 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:19 np0005486808 nova_compute[259627]: 2025-10-14 09:05:19.296 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:05:19 np0005486808 nova_compute[259627]: 2025-10-14 09:05:19.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:19 np0005486808 nova_compute[259627]: 2025-10-14 09:05:19.302 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap282dfd9e-9e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:19 np0005486808 nova_compute[259627]: 2025-10-14 09:05:19.304 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap282dfd9e-9e, col_values=(('external_ids', {'iface-id': '282dfd9e-9e84-450c-a306-8bc55428feb4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:de:50:93', 'vm-uuid': '65c7e6ed-131f-4bca-af69-a1241d048bdb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:19 np0005486808 nova_compute[259627]: 2025-10-14 09:05:19.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:19 np0005486808 NetworkManager[44885]: <info>  [1760432719.3070] manager: (tap282dfd9e-9e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/231)
Oct 14 05:05:19 np0005486808 nova_compute[259627]: 2025-10-14 09:05:19.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:05:19 np0005486808 nova_compute[259627]: 2025-10-14 09:05:19.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:19 np0005486808 nova_compute[259627]: 2025-10-14 09:05:19.318 2 INFO os_vif [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:50:93,bridge_name='br-int',has_traffic_filtering=True,id=282dfd9e-9e84-450c-a306-8bc55428feb4,network=Network(77fb75b2-483b-47a5-99a5-ae91248b8ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap282dfd9e-9e')#033[00m
Oct 14 05:05:19 np0005486808 nova_compute[259627]: 2025-10-14 09:05:19.395 2 DEBUG nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:05:19 np0005486808 nova_compute[259627]: 2025-10-14 09:05:19.396 2 DEBUG nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:05:19 np0005486808 nova_compute[259627]: 2025-10-14 09:05:19.397 2 DEBUG nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] No VIF found with MAC fa:16:3e:de:50:93, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:05:19 np0005486808 nova_compute[259627]: 2025-10-14 09:05:19.397 2 INFO nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Using config drive#033[00m
Oct 14 05:05:19 np0005486808 nova_compute[259627]: 2025-10-14 09:05:19.431 2 DEBUG nova.storage.rbd_utils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] rbd image 65c7e6ed-131f-4bca-af69-a1241d048bdb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:05:19 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1492: 305 pgs: 305 active+clean; 224 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 652 KiB/s rd, 5.4 MiB/s wr, 153 op/s
Oct 14 05:05:19 np0005486808 nova_compute[259627]: 2025-10-14 09:05:19.839 2 INFO nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Creating config drive at /var/lib/nova/instances/65c7e6ed-131f-4bca-af69-a1241d048bdb/disk.config#033[00m
Oct 14 05:05:19 np0005486808 nova_compute[259627]: 2025-10-14 09:05:19.852 2 DEBUG oslo_concurrency.processutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/65c7e6ed-131f-4bca-af69-a1241d048bdb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu7ac9h4b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:05:19 np0005486808 nova_compute[259627]: 2025-10-14 09:05:19.906 2 DEBUG nova.network.neutron [req-ab09288e-0e5f-4955-95ed-1552b848b6d3 req-8663a462-1055-4319-ab85-596d2825084f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Updated VIF entry in instance network info cache for port 282dfd9e-9e84-450c-a306-8bc55428feb4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:05:19 np0005486808 nova_compute[259627]: 2025-10-14 09:05:19.907 2 DEBUG nova.network.neutron [req-ab09288e-0e5f-4955-95ed-1552b848b6d3 req-8663a462-1055-4319-ab85-596d2825084f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Updating instance_info_cache with network_info: [{"id": "282dfd9e-9e84-450c-a306-8bc55428feb4", "address": "fa:16:3e:de:50:93", "network": {"id": "77fb75b2-483b-47a5-99a5-ae91248b8ed8", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-2140863924-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55e6e0201a064f1390a998f830140354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282dfd9e-9e", "ovs_interfaceid": "282dfd9e-9e84-450c-a306-8bc55428feb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:05:19 np0005486808 nova_compute[259627]: 2025-10-14 09:05:19.926 2 DEBUG oslo_concurrency.lockutils [req-ab09288e-0e5f-4955-95ed-1552b848b6d3 req-8663a462-1055-4319-ab85-596d2825084f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-65c7e6ed-131f-4bca-af69-a1241d048bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:05:19 np0005486808 nova_compute[259627]: 2025-10-14 09:05:19.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.019 2 DEBUG oslo_concurrency.processutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/65c7e6ed-131f-4bca-af69-a1241d048bdb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu7ac9h4b" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.066 2 DEBUG nova.storage.rbd_utils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] rbd image 65c7e6ed-131f-4bca-af69-a1241d048bdb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.070 2 DEBUG oslo_concurrency.processutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/65c7e6ed-131f-4bca-af69-a1241d048bdb/disk.config 65c7e6ed-131f-4bca-af69-a1241d048bdb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.316 2 DEBUG oslo_concurrency.processutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/65c7e6ed-131f-4bca-af69-a1241d048bdb/disk.config 65c7e6ed-131f-4bca-af69-a1241d048bdb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.246s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.318 2 INFO nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Deleting local config drive /var/lib/nova/instances/65c7e6ed-131f-4bca-af69-a1241d048bdb/disk.config because it was imported into RBD.#033[00m
Oct 14 05:05:20 np0005486808 kernel: tap282dfd9e-9e: entered promiscuous mode
Oct 14 05:05:20 np0005486808 NetworkManager[44885]: <info>  [1760432720.3758] manager: (tap282dfd9e-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/232)
Oct 14 05:05:20 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:20Z|00531|binding|INFO|Claiming lport 282dfd9e-9e84-450c-a306-8bc55428feb4 for this chassis.
Oct 14 05:05:20 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:20Z|00532|binding|INFO|282dfd9e-9e84-450c-a306-8bc55428feb4: Claiming fa:16:3e:de:50:93 10.100.0.10
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.385 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:50:93 10.100.0.10'], port_security=['fa:16:3e:de:50:93 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '65c7e6ed-131f-4bca-af69-a1241d048bdb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77fb75b2-483b-47a5-99a5-ae91248b8ed8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '55e6e0201a064f1390a998f830140354', 'neutron:revision_number': '2', 'neutron:security_group_ids': '98209f09-275f-46ee-a2c6-16214403e3de', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8d21053-8b98-4816-ad89-107cc4743794, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=282dfd9e-9e84-450c-a306-8bc55428feb4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.387 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 282dfd9e-9e84-450c-a306-8bc55428feb4 in datapath 77fb75b2-483b-47a5-99a5-ae91248b8ed8 bound to our chassis#033[00m
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.389 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 77fb75b2-483b-47a5-99a5-ae91248b8ed8#033[00m
Oct 14 05:05:20 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:20Z|00533|binding|INFO|Setting lport 282dfd9e-9e84-450c-a306-8bc55428feb4 ovn-installed in OVS
Oct 14 05:05:20 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:20Z|00534|binding|INFO|Setting lport 282dfd9e-9e84-450c-a306-8bc55428feb4 up in Southbound
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.401 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1ff1e6d8-b370-485a-9b11-4a6a8ec2c9a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.403 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap77fb75b2-41 in ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.405 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap77fb75b2-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.405 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e68ae350-3d1a-4e29-8f18-1fef832515f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.406 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dfb337d5-b4b6-4a4d-9bf4-3805f38be0a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:20 np0005486808 systemd-machined[214636]: New machine qemu-69-instance-00000037.
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.421 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[bd1b811a-43ff-4d6e-bc73-e32068d3cdf0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:20 np0005486808 systemd[1]: Started Virtual Machine qemu-69-instance-00000037.
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.439 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[aa4cbf94-cd9b-49d3-bde3-70338a03660a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:20 np0005486808 systemd-udevd[319865]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:05:20 np0005486808 NetworkManager[44885]: <info>  [1760432720.4599] device (tap282dfd9e-9e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:05:20 np0005486808 NetworkManager[44885]: <info>  [1760432720.4623] device (tap282dfd9e-9e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:05:20 np0005486808 podman[319839]: 2025-10-14 09:05:20.476307072 +0000 UTC m=+0.065040303 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.476 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[780c1315-d5e6-4ec6-a936-7eff9266caef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:20 np0005486808 NetworkManager[44885]: <info>  [1760432720.4883] manager: (tap77fb75b2-40): new Veth device (/org/freedesktop/NetworkManager/Devices/233)
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.487 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5299bd08-63d6-405c-a4cf-488c7883dacd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:20 np0005486808 podman[319837]: 2025-10-14 09:05:20.512820831 +0000 UTC m=+0.101612483 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.523 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[316fb446-40f4-43b2-93e5-59c97ede23a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.526 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d3480b53-be5d-4eb2-83d3-f3f8eb79de53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:20 np0005486808 NetworkManager[44885]: <info>  [1760432720.5540] device (tap77fb75b2-40): carrier: link connected
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.559 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[92ce509a-9787-4da5-ae95-c3b6ceba1d09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.576 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[687d236e-0705-4f1c-b3e3-d09cab6f42ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77fb75b2-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:16:e4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649931, 'reachable_time': 44929, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319915, 'error': None, 'target': 'ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.593 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cacb4314-fe02-4472-965c-085cb45ee493]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe79:16e4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 649931, 'tstamp': 649931}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319916, 'error': None, 'target': 'ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.608 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2a9f8c94-83ec-4d35-b15a-1e2fa50ec9b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77fb75b2-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:16:e4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649931, 'reachable_time': 44929, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 319917, 'error': None, 'target': 'ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.640 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0ad9f010-7a0c-49d3-a4d4-24ac23439e06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.699 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9ba158c3-0473-454a-84f7-db2fce86bc2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.700 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77fb75b2-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.701 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.701 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap77fb75b2-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:20 np0005486808 NetworkManager[44885]: <info>  [1760432720.7043] manager: (tap77fb75b2-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/234)
Oct 14 05:05:20 np0005486808 kernel: tap77fb75b2-40: entered promiscuous mode
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.718 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap77fb75b2-40, col_values=(('external_ids', {'iface-id': '43fea039-06e3-4d15-8c57-031bfdc08664'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:20 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:20Z|00535|binding|INFO|Releasing lport 43fea039-06e3-4d15-8c57-031bfdc08664 from this chassis (sb_readonly=0)
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.724 2 DEBUG nova.compute.manager [req-f8b91534-e8f9-44e0-bb7f-86ce1e9234fa req-5231b6aa-5ac2-45b9-9706-70bdc631b472 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Received event network-vif-plugged-282dfd9e-9e84-450c-a306-8bc55428feb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.724 2 DEBUG oslo_concurrency.lockutils [req-f8b91534-e8f9-44e0-bb7f-86ce1e9234fa req-5231b6aa-5ac2-45b9-9706-70bdc631b472 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "65c7e6ed-131f-4bca-af69-a1241d048bdb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.725 2 DEBUG oslo_concurrency.lockutils [req-f8b91534-e8f9-44e0-bb7f-86ce1e9234fa req-5231b6aa-5ac2-45b9-9706-70bdc631b472 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "65c7e6ed-131f-4bca-af69-a1241d048bdb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.725 2 DEBUG oslo_concurrency.lockutils [req-f8b91534-e8f9-44e0-bb7f-86ce1e9234fa req-5231b6aa-5ac2-45b9-9706-70bdc631b472 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "65c7e6ed-131f-4bca-af69-a1241d048bdb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.725 2 DEBUG nova.compute.manager [req-f8b91534-e8f9-44e0-bb7f-86ce1e9234fa req-5231b6aa-5ac2-45b9-9706-70bdc631b472 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Processing event network-vif-plugged-282dfd9e-9e84-450c-a306-8bc55428feb4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.737 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/77fb75b2-483b-47a5-99a5-ae91248b8ed8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/77fb75b2-483b-47a5-99a5-ae91248b8ed8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.738 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4e6b4088-f5da-429d-9c83-85c6c3e2d044]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.739 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-77fb75b2-483b-47a5-99a5-ae91248b8ed8
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/77fb75b2-483b-47a5-99a5-ae91248b8ed8.pid.haproxy
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 77fb75b2-483b-47a5-99a5-ae91248b8ed8
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.741 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8', 'env', 'PROCESS_TAG=haproxy-77fb75b2-483b-47a5-99a5-ae91248b8ed8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/77fb75b2-483b-47a5-99a5-ae91248b8ed8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.811 2 DEBUG nova.network.neutron [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Updating instance_info_cache with network_info: [{"id": "971d99c2-5a60-4cac-8f99-e819d71e419c", "address": "fa:16:3e:9a:79:ab", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap971d99c2-5a", "ovs_interfaceid": "971d99c2-5a60-4cac-8f99-e819d71e419c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "address": "fa:16:3e:54:94:02", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3bc3ac3-61", "ovs_interfaceid": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.834 2 DEBUG oslo_concurrency.lockutils [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Releasing lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.836 2 DEBUG oslo_concurrency.lockutils [req-0c7393a9-714f-4565-88ed-7b182557e0c7 req-8a06e4e4-ef4b-43d9-8643-bb14f65edda3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.837 2 DEBUG nova.network.neutron [req-0c7393a9-714f-4565-88ed-7b182557e0c7 req-8a06e4e4-ef4b-43d9-8643-bb14f65edda3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Refreshing network info cache for port e3bc3ac3-6147-40d0-a19c-df111dcf23a5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.844 2 DEBUG nova.virt.libvirt.vif [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:04:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-766543543',display_name='tempest-AttachInterfacesTestJSON-server-766543543',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-766543543',id=54,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJQrZCrZaptXuotj+57tBn3/OGSvBqxv5B+H31iXFpSI71zcU5A1JwpwDksSj1zemT2KJ76DfcdGv/IVNbfXVnjr4ntmsdxnv83TIXCvdXfabQIpcV4CIqh+n8+7v/6SjQ==',key_name='tempest-keypair-1747841174',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:04:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-fxki54rp',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=47257c6e-4d10-4d8e-af5a-b57db20048ea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "address": "fa:16:3e:54:94:02", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3bc3ac3-61", "ovs_interfaceid": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.845 2 DEBUG nova.network.os_vif_util [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "address": "fa:16:3e:54:94:02", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3bc3ac3-61", "ovs_interfaceid": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.846 2 DEBUG nova.network.os_vif_util [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:94:02,bridge_name='br-int',has_traffic_filtering=True,id=e3bc3ac3-6147-40d0-a19c-df111dcf23a5,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3bc3ac3-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.846 2 DEBUG os_vif [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:94:02,bridge_name='br-int',has_traffic_filtering=True,id=e3bc3ac3-6147-40d0-a19c-df111dcf23a5,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3bc3ac3-61') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.847 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.848 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.850 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape3bc3ac3-61, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.851 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape3bc3ac3-61, col_values=(('external_ids', {'iface-id': 'e3bc3ac3-6147-40d0-a19c-df111dcf23a5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:54:94:02', 'vm-uuid': '47257c6e-4d10-4d8e-af5a-b57db20048ea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:20 np0005486808 NetworkManager[44885]: <info>  [1760432720.8532] manager: (tape3bc3ac3-61): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/235)
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.859 2 INFO os_vif [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:94:02,bridge_name='br-int',has_traffic_filtering=True,id=e3bc3ac3-6147-40d0-a19c-df111dcf23a5,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3bc3ac3-61')#033[00m
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.860 2 DEBUG nova.virt.libvirt.vif [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:04:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-766543543',display_name='tempest-AttachInterfacesTestJSON-server-766543543',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-766543543',id=54,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJQrZCrZaptXuotj+57tBn3/OGSvBqxv5B+H31iXFpSI71zcU5A1JwpwDksSj1zemT2KJ76DfcdGv/IVNbfXVnjr4ntmsdxnv83TIXCvdXfabQIpcV4CIqh+n8+7v/6SjQ==',key_name='tempest-keypair-1747841174',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:04:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-fxki54rp',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=47257c6e-4d10-4d8e-af5a-b57db20048ea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "address": "fa:16:3e:54:94:02", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3bc3ac3-61", "ovs_interfaceid": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.861 2 DEBUG nova.network.os_vif_util [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "address": "fa:16:3e:54:94:02", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3bc3ac3-61", "ovs_interfaceid": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.861 2 DEBUG nova.network.os_vif_util [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:94:02,bridge_name='br-int',has_traffic_filtering=True,id=e3bc3ac3-6147-40d0-a19c-df111dcf23a5,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3bc3ac3-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.864 2 DEBUG nova.virt.libvirt.guest [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] attach device xml: <interface type="ethernet">
Oct 14 05:05:20 np0005486808 nova_compute[259627]:  <mac address="fa:16:3e:54:94:02"/>
Oct 14 05:05:20 np0005486808 nova_compute[259627]:  <model type="virtio"/>
Oct 14 05:05:20 np0005486808 nova_compute[259627]:  <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:05:20 np0005486808 nova_compute[259627]:  <mtu size="1442"/>
Oct 14 05:05:20 np0005486808 nova_compute[259627]:  <target dev="tape3bc3ac3-61"/>
Oct 14 05:05:20 np0005486808 nova_compute[259627]: </interface>
Oct 14 05:05:20 np0005486808 nova_compute[259627]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct 14 05:05:20 np0005486808 kernel: tape3bc3ac3-61: entered promiscuous mode
Oct 14 05:05:20 np0005486808 NetworkManager[44885]: <info>  [1760432720.8745] manager: (tape3bc3ac3-61): new Tun device (/org/freedesktop/NetworkManager/Devices/236)
Oct 14 05:05:20 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:20Z|00536|binding|INFO|Claiming lport e3bc3ac3-6147-40d0-a19c-df111dcf23a5 for this chassis.
Oct 14 05:05:20 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:20Z|00537|binding|INFO|e3bc3ac3-6147-40d0-a19c-df111dcf23a5: Claiming fa:16:3e:54:94:02 10.100.0.4
Oct 14 05:05:20 np0005486808 systemd-udevd[319904]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:20.887 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:94:02 10.100.0.4'], port_security=['fa:16:3e:54:94:02 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '47257c6e-4d10-4d8e-af5a-b57db20048ea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2d149f-aebf-406a-aed2-5161dd22b079', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8945d892653642ac9c3d894f8bc3619d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '643bc47b-384e-4818-a06b-8ecfda630b09', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e933e805-d9b6-497a-9ba3-5f7153390420, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=e3bc3ac3-6147-40d0-a19c-df111dcf23a5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:05:20 np0005486808 NetworkManager[44885]: <info>  [1760432720.8941] device (tape3bc3ac3-61): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:05:20 np0005486808 NetworkManager[44885]: <info>  [1760432720.8951] device (tape3bc3ac3-61): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:05:20 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:20Z|00538|binding|INFO|Setting lport e3bc3ac3-6147-40d0-a19c-df111dcf23a5 ovn-installed in OVS
Oct 14 05:05:20 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:20Z|00539|binding|INFO|Setting lport e3bc3ac3-6147-40d0-a19c-df111dcf23a5 up in Southbound
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.964 2 DEBUG nova.virt.libvirt.driver [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.964 2 DEBUG nova.virt.libvirt.driver [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.964 2 DEBUG nova.virt.libvirt.driver [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No VIF found with MAC fa:16:3e:9a:79:ab, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.965 2 DEBUG nova.virt.libvirt.driver [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No VIF found with MAC fa:16:3e:54:94:02, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:05:20 np0005486808 nova_compute[259627]: 2025-10-14 09:05:20.992 2 DEBUG nova.virt.libvirt.guest [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:05:20 np0005486808 nova_compute[259627]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:05:20 np0005486808 nova_compute[259627]:  <nova:name>tempest-AttachInterfacesTestJSON-server-766543543</nova:name>
Oct 14 05:05:20 np0005486808 nova_compute[259627]:  <nova:creationTime>2025-10-14 09:05:20</nova:creationTime>
Oct 14 05:05:20 np0005486808 nova_compute[259627]:  <nova:flavor name="m1.nano">
Oct 14 05:05:20 np0005486808 nova_compute[259627]:    <nova:memory>128</nova:memory>
Oct 14 05:05:20 np0005486808 nova_compute[259627]:    <nova:disk>1</nova:disk>
Oct 14 05:05:20 np0005486808 nova_compute[259627]:    <nova:swap>0</nova:swap>
Oct 14 05:05:20 np0005486808 nova_compute[259627]:    <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:05:20 np0005486808 nova_compute[259627]:    <nova:vcpus>1</nova:vcpus>
Oct 14 05:05:20 np0005486808 nova_compute[259627]:  </nova:flavor>
Oct 14 05:05:20 np0005486808 nova_compute[259627]:  <nova:owner>
Oct 14 05:05:20 np0005486808 nova_compute[259627]:    <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 05:05:20 np0005486808 nova_compute[259627]:    <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 05:05:20 np0005486808 nova_compute[259627]:  </nova:owner>
Oct 14 05:05:20 np0005486808 nova_compute[259627]:  <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:05:20 np0005486808 nova_compute[259627]:  <nova:ports>
Oct 14 05:05:20 np0005486808 nova_compute[259627]:    <nova:port uuid="971d99c2-5a60-4cac-8f99-e819d71e419c">
Oct 14 05:05:20 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 14 05:05:20 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:05:20 np0005486808 nova_compute[259627]:    <nova:port uuid="e3bc3ac3-6147-40d0-a19c-df111dcf23a5">
Oct 14 05:05:20 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 14 05:05:20 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:05:20 np0005486808 nova_compute[259627]:  </nova:ports>
Oct 14 05:05:20 np0005486808 nova_compute[259627]: </nova:instance>
Oct 14 05:05:20 np0005486808 nova_compute[259627]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct 14 05:05:21 np0005486808 nova_compute[259627]: 2025-10-14 09:05:21.024 2 DEBUG oslo_concurrency.lockutils [None req-f1c6ce23-cf61-4384-9863-5f586d1d8396 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "interface-47257c6e-4d10-4d8e-af5a-b57db20048ea-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 4.777s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:05:21 np0005486808 podman[319953]: 2025-10-14 09:05:21.127624482 +0000 UTC m=+0.066813656 container create 4891121de78e076c2453309749fcecf35a0d3ead085622b9efd8eb66b64aeb4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 14 05:05:21 np0005486808 podman[319953]: 2025-10-14 09:05:21.087221287 +0000 UTC m=+0.026410491 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:05:21 np0005486808 systemd[1]: Started libpod-conmon-4891121de78e076c2453309749fcecf35a0d3ead085622b9efd8eb66b64aeb4d.scope.
Oct 14 05:05:21 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:05:21 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc8ef72225e5c259b1a8bbd67e6e8947df3e042df52c25d7dffcdfcd0e7b97cd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:05:21 np0005486808 podman[319953]: 2025-10-14 09:05:21.217530286 +0000 UTC m=+0.156719480 container init 4891121de78e076c2453309749fcecf35a0d3ead085622b9efd8eb66b64aeb4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 14 05:05:21 np0005486808 podman[319953]: 2025-10-14 09:05:21.224108289 +0000 UTC m=+0.163297453 container start 4891121de78e076c2453309749fcecf35a0d3ead085622b9efd8eb66b64aeb4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 14 05:05:21 np0005486808 neutron-haproxy-ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8[320010]: [NOTICE]   (320015) : New worker (320017) forked
Oct 14 05:05:21 np0005486808 neutron-haproxy-ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8[320010]: [NOTICE]   (320015) : Loading success.
Oct 14 05:05:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:21.292 162547 INFO neutron.agent.ovn.metadata.agent [-] Port e3bc3ac3-6147-40d0-a19c-df111dcf23a5 in datapath fc2d149f-aebf-406a-aed2-5161dd22b079 unbound from our chassis#033[00m
Oct 14 05:05:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:21.294 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc2d149f-aebf-406a-aed2-5161dd22b079#033[00m
Oct 14 05:05:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:21.314 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a6049538-3459-46cc-9b4b-364fb6dc6e07]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:21.347 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1f2bf88c-2efb-4ba7-a721-0dcfdf8a667c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:21.350 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5dc36a96-2a4a-4b1e-ade4-30b4542086e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:21.377 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ab4646b6-6108-4466-9ad8-0b871770dfc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:21.395 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[344aa24b-5f55-48a3-8f64-cf0b89b87382]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc2d149f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:e7:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647447, 'reachable_time': 19128, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320031, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:21.410 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f988e5c1-ab05-41b7-a548-644914e0c080]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647459, 'tstamp': 647459}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320032, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647463, 'tstamp': 647463}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320032, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:21.412 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc2d149f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:21 np0005486808 nova_compute[259627]: 2025-10-14 09:05:21.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:21.416 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc2d149f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:21.416 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:05:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:21.417 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc2d149f-a0, col_values=(('external_ids', {'iface-id': '156432ee-35b5-40a9-aded-8066933d9972'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:21.417 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:05:21 np0005486808 nova_compute[259627]: 2025-10-14 09:05:21.694 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432721.694389, 65c7e6ed-131f-4bca-af69-a1241d048bdb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:05:21 np0005486808 nova_compute[259627]: 2025-10-14 09:05:21.695 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] VM Started (Lifecycle Event)#033[00m
Oct 14 05:05:21 np0005486808 nova_compute[259627]: 2025-10-14 09:05:21.698 2 DEBUG nova.compute.manager [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:05:21 np0005486808 nova_compute[259627]: 2025-10-14 09:05:21.701 2 DEBUG nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:05:21 np0005486808 nova_compute[259627]: 2025-10-14 09:05:21.704 2 INFO nova.virt.libvirt.driver [-] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Instance spawned successfully.#033[00m
Oct 14 05:05:21 np0005486808 nova_compute[259627]: 2025-10-14 09:05:21.704 2 DEBUG nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:05:21 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1493: 305 pgs: 305 active+clean; 247 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 652 KiB/s rd, 6.1 MiB/s wr, 156 op/s
Oct 14 05:05:21 np0005486808 nova_compute[259627]: 2025-10-14 09:05:21.738 2 DEBUG nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:05:21 np0005486808 nova_compute[259627]: 2025-10-14 09:05:21.738 2 DEBUG nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:05:21 np0005486808 nova_compute[259627]: 2025-10-14 09:05:21.739 2 DEBUG nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:05:21 np0005486808 nova_compute[259627]: 2025-10-14 09:05:21.740 2 DEBUG nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:05:21 np0005486808 nova_compute[259627]: 2025-10-14 09:05:21.740 2 DEBUG nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:05:21 np0005486808 nova_compute[259627]: 2025-10-14 09:05:21.741 2 DEBUG nova.virt.libvirt.driver [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:05:21 np0005486808 nova_compute[259627]: 2025-10-14 09:05:21.745 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:05:21 np0005486808 nova_compute[259627]: 2025-10-14 09:05:21.748 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:05:21 np0005486808 nova_compute[259627]: 2025-10-14 09:05:21.813 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:05:21 np0005486808 nova_compute[259627]: 2025-10-14 09:05:21.814 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432721.694497, 65c7e6ed-131f-4bca-af69-a1241d048bdb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:05:21 np0005486808 nova_compute[259627]: 2025-10-14 09:05:21.814 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:05:21 np0005486808 nova_compute[259627]: 2025-10-14 09:05:21.841 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:05:21 np0005486808 nova_compute[259627]: 2025-10-14 09:05:21.845 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432721.701136, 65c7e6ed-131f-4bca-af69-a1241d048bdb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:05:21 np0005486808 nova_compute[259627]: 2025-10-14 09:05:21.845 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:05:21 np0005486808 nova_compute[259627]: 2025-10-14 09:05:21.850 2 INFO nova.compute.manager [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Took 7.13 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:05:21 np0005486808 nova_compute[259627]: 2025-10-14 09:05:21.851 2 DEBUG nova.compute.manager [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:05:21 np0005486808 nova_compute[259627]: 2025-10-14 09:05:21.865 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:05:21 np0005486808 nova_compute[259627]: 2025-10-14 09:05:21.868 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:05:21 np0005486808 nova_compute[259627]: 2025-10-14 09:05:21.895 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:05:21 np0005486808 nova_compute[259627]: 2025-10-14 09:05:21.914 2 INFO nova.compute.manager [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Took 8.16 seconds to build instance.#033[00m
Oct 14 05:05:21 np0005486808 nova_compute[259627]: 2025-10-14 09:05:21.928 2 DEBUG oslo_concurrency.lockutils [None req-2aed99fb-edb7-4d89-93bc-1c03d35f9804 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Lock "65c7e6ed-131f-4bca-af69-a1241d048bdb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.273s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:05:21 np0005486808 nova_compute[259627]: 2025-10-14 09:05:21.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:05:21 np0005486808 nova_compute[259627]: 2025-10-14 09:05:21.977 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:05:22 np0005486808 nova_compute[259627]: 2025-10-14 09:05:22.258 2 DEBUG nova.compute.manager [req-26d8ff9f-75d8-4868-930c-21cdc33b4a2b req-2d9575dc-6d5e-47bf-9d1c-3eddda0e70ce 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-vif-plugged-e3bc3ac3-6147-40d0-a19c-df111dcf23a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:05:22 np0005486808 nova_compute[259627]: 2025-10-14 09:05:22.258 2 DEBUG oslo_concurrency.lockutils [req-26d8ff9f-75d8-4868-930c-21cdc33b4a2b req-2d9575dc-6d5e-47bf-9d1c-3eddda0e70ce 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:05:22 np0005486808 nova_compute[259627]: 2025-10-14 09:05:22.259 2 DEBUG oslo_concurrency.lockutils [req-26d8ff9f-75d8-4868-930c-21cdc33b4a2b req-2d9575dc-6d5e-47bf-9d1c-3eddda0e70ce 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:05:22 np0005486808 nova_compute[259627]: 2025-10-14 09:05:22.260 2 DEBUG oslo_concurrency.lockutils [req-26d8ff9f-75d8-4868-930c-21cdc33b4a2b req-2d9575dc-6d5e-47bf-9d1c-3eddda0e70ce 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:05:22 np0005486808 nova_compute[259627]: 2025-10-14 09:05:22.260 2 DEBUG nova.compute.manager [req-26d8ff9f-75d8-4868-930c-21cdc33b4a2b req-2d9575dc-6d5e-47bf-9d1c-3eddda0e70ce 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] No waiting events found dispatching network-vif-plugged-e3bc3ac3-6147-40d0-a19c-df111dcf23a5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:05:22 np0005486808 nova_compute[259627]: 2025-10-14 09:05:22.261 2 WARNING nova.compute.manager [req-26d8ff9f-75d8-4868-930c-21cdc33b4a2b req-2d9575dc-6d5e-47bf-9d1c-3eddda0e70ce 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received unexpected event network-vif-plugged-e3bc3ac3-6147-40d0-a19c-df111dcf23a5 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:05:22 np0005486808 nova_compute[259627]: 2025-10-14 09:05:22.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:05:22 np0005486808 nova_compute[259627]: 2025-10-14 09:05:22.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:05:22 np0005486808 nova_compute[259627]: 2025-10-14 09:05:22.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:05:23 np0005486808 nova_compute[259627]: 2025-10-14 09:05:23.003 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:05:23 np0005486808 nova_compute[259627]: 2025-10-14 09:05:23.004 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:05:23 np0005486808 nova_compute[259627]: 2025-10-14 09:05:23.004 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:05:23 np0005486808 nova_compute[259627]: 2025-10-14 09:05:23.004 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:05:23 np0005486808 nova_compute[259627]: 2025-10-14 09:05:23.005 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:05:23 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:23Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:54:94:02 10.100.0.4
Oct 14 05:05:23 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:23Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:54:94:02 10.100.0.4
Oct 14 05:05:23 np0005486808 nova_compute[259627]: 2025-10-14 09:05:23.228 2 DEBUG nova.compute.manager [req-a73c5357-9237-4c17-b459-c80553936cf0 req-5f59dff7-d981-47c6-a260-383ac89d4c90 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Received event network-vif-plugged-282dfd9e-9e84-450c-a306-8bc55428feb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:05:23 np0005486808 nova_compute[259627]: 2025-10-14 09:05:23.229 2 DEBUG oslo_concurrency.lockutils [req-a73c5357-9237-4c17-b459-c80553936cf0 req-5f59dff7-d981-47c6-a260-383ac89d4c90 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "65c7e6ed-131f-4bca-af69-a1241d048bdb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:05:23 np0005486808 nova_compute[259627]: 2025-10-14 09:05:23.230 2 DEBUG oslo_concurrency.lockutils [req-a73c5357-9237-4c17-b459-c80553936cf0 req-5f59dff7-d981-47c6-a260-383ac89d4c90 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "65c7e6ed-131f-4bca-af69-a1241d048bdb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:05:23 np0005486808 nova_compute[259627]: 2025-10-14 09:05:23.230 2 DEBUG oslo_concurrency.lockutils [req-a73c5357-9237-4c17-b459-c80553936cf0 req-5f59dff7-d981-47c6-a260-383ac89d4c90 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "65c7e6ed-131f-4bca-af69-a1241d048bdb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:05:23 np0005486808 nova_compute[259627]: 2025-10-14 09:05:23.230 2 DEBUG nova.compute.manager [req-a73c5357-9237-4c17-b459-c80553936cf0 req-5f59dff7-d981-47c6-a260-383ac89d4c90 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] No waiting events found dispatching network-vif-plugged-282dfd9e-9e84-450c-a306-8bc55428feb4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:05:23 np0005486808 nova_compute[259627]: 2025-10-14 09:05:23.231 2 WARNING nova.compute.manager [req-a73c5357-9237-4c17-b459-c80553936cf0 req-5f59dff7-d981-47c6-a260-383ac89d4c90 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Received unexpected event network-vif-plugged-282dfd9e-9e84-450c-a306-8bc55428feb4 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:05:23 np0005486808 nova_compute[259627]: 2025-10-14 09:05:23.345 2 DEBUG oslo_concurrency.lockutils [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "interface-47257c6e-4d10-4d8e-af5a-b57db20048ea-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:05:23 np0005486808 nova_compute[259627]: 2025-10-14 09:05:23.345 2 DEBUG oslo_concurrency.lockutils [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "interface-47257c6e-4d10-4d8e-af5a-b57db20048ea-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:05:23 np0005486808 nova_compute[259627]: 2025-10-14 09:05:23.346 2 DEBUG nova.objects.instance [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'flavor' on Instance uuid 47257c6e-4d10-4d8e-af5a-b57db20048ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:05:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:05:23 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4066470348' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:05:23 np0005486808 nova_compute[259627]: 2025-10-14 09:05:23.523 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:05:23 np0005486808 nova_compute[259627]: 2025-10-14 09:05:23.616 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000037 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:05:23 np0005486808 nova_compute[259627]: 2025-10-14 09:05:23.617 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000037 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:05:23 np0005486808 nova_compute[259627]: 2025-10-14 09:05:23.619 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:05:23 np0005486808 nova_compute[259627]: 2025-10-14 09:05:23.620 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:05:23 np0005486808 nova_compute[259627]: 2025-10-14 09:05:23.622 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000036 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:05:23 np0005486808 nova_compute[259627]: 2025-10-14 09:05:23.622 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000036 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:05:23 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1494: 305 pgs: 305 active+clean; 247 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Oct 14 05:05:23 np0005486808 nova_compute[259627]: 2025-10-14 09:05:23.805 2 DEBUG nova.objects.instance [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'pci_requests' on Instance uuid 47257c6e-4d10-4d8e-af5a-b57db20048ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:05:23 np0005486808 nova_compute[259627]: 2025-10-14 09:05:23.825 2 DEBUG nova.network.neutron [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:05:23 np0005486808 nova_compute[259627]: 2025-10-14 09:05:23.838 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:05:23 np0005486808 nova_compute[259627]: 2025-10-14 09:05:23.839 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3556MB free_disk=59.876243591308594GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:05:23 np0005486808 nova_compute[259627]: 2025-10-14 09:05:23.839 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:05:23 np0005486808 nova_compute[259627]: 2025-10-14 09:05:23.840 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:05:23 np0005486808 nova_compute[259627]: 2025-10-14 09:05:23.931 2 DEBUG nova.network.neutron [req-0c7393a9-714f-4565-88ed-7b182557e0c7 req-8a06e4e4-ef4b-43d9-8643-bb14f65edda3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Updated VIF entry in instance network info cache for port e3bc3ac3-6147-40d0-a19c-df111dcf23a5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:05:23 np0005486808 nova_compute[259627]: 2025-10-14 09:05:23.931 2 DEBUG nova.network.neutron [req-0c7393a9-714f-4565-88ed-7b182557e0c7 req-8a06e4e4-ef4b-43d9-8643-bb14f65edda3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Updating instance_info_cache with network_info: [{"id": "971d99c2-5a60-4cac-8f99-e819d71e419c", "address": "fa:16:3e:9a:79:ab", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap971d99c2-5a", "ovs_interfaceid": "971d99c2-5a60-4cac-8f99-e819d71e419c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "address": "fa:16:3e:54:94:02", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3bc3ac3-61", "ovs_interfaceid": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:05:23 np0005486808 nova_compute[259627]: 2025-10-14 09:05:23.947 2 DEBUG oslo_concurrency.lockutils [req-0c7393a9-714f-4565-88ed-7b182557e0c7 req-8a06e4e4-ef4b-43d9-8643-bb14f65edda3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:05:23 np0005486808 nova_compute[259627]: 2025-10-14 09:05:23.976 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 1ce7a863-d0bf-4ea3-80f5-18675b16ac93 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:05:23 np0005486808 nova_compute[259627]: 2025-10-14 09:05:23.977 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 47257c6e-4d10-4d8e-af5a-b57db20048ea actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:05:23 np0005486808 nova_compute[259627]: 2025-10-14 09:05:23.977 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 65c7e6ed-131f-4bca-af69-a1241d048bdb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:05:23 np0005486808 nova_compute[259627]: 2025-10-14 09:05:23.979 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:05:23 np0005486808 nova_compute[259627]: 2025-10-14 09:05:23.979 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:05:24 np0005486808 nova_compute[259627]: 2025-10-14 09:05:24.026 2 DEBUG nova.policy [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8b0f0f2991214b9caed6f475a149c342', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8945d892653642ac9c3d894f8bc3619d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:05:24 np0005486808 nova_compute[259627]: 2025-10-14 09:05:24.064 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:05:24 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:05:24 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/546375787' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:05:24 np0005486808 nova_compute[259627]: 2025-10-14 09:05:24.520 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:05:24 np0005486808 nova_compute[259627]: 2025-10-14 09:05:24.525 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:05:24 np0005486808 nova_compute[259627]: 2025-10-14 09:05:24.541 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:05:24 np0005486808 nova_compute[259627]: 2025-10-14 09:05:24.560 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:05:24 np0005486808 nova_compute[259627]: 2025-10-14 09:05:24.560 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:05:25 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1495: 305 pgs: 305 active+clean; 247 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Oct 14 05:05:25 np0005486808 nova_compute[259627]: 2025-10-14 09:05:25.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:26 np0005486808 nova_compute[259627]: 2025-10-14 09:05:26.201 2 DEBUG nova.network.neutron [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Successfully created port: b624404b-6681-4fc8-a870-dc9418e2de0f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:05:27 np0005486808 nova_compute[259627]: 2025-10-14 09:05:27.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:27 np0005486808 nova_compute[259627]: 2025-10-14 09:05:27.291 2 DEBUG nova.compute.manager [req-ae1621d3-df36-4282-b160-9d4b50dffdbc req-20fbc73a-9635-4a6c-ab18-03b14136eecf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-vif-plugged-e3bc3ac3-6147-40d0-a19c-df111dcf23a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:05:27 np0005486808 nova_compute[259627]: 2025-10-14 09:05:27.291 2 DEBUG oslo_concurrency.lockutils [req-ae1621d3-df36-4282-b160-9d4b50dffdbc req-20fbc73a-9635-4a6c-ab18-03b14136eecf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:05:27 np0005486808 nova_compute[259627]: 2025-10-14 09:05:27.292 2 DEBUG oslo_concurrency.lockutils [req-ae1621d3-df36-4282-b160-9d4b50dffdbc req-20fbc73a-9635-4a6c-ab18-03b14136eecf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:05:27 np0005486808 nova_compute[259627]: 2025-10-14 09:05:27.292 2 DEBUG oslo_concurrency.lockutils [req-ae1621d3-df36-4282-b160-9d4b50dffdbc req-20fbc73a-9635-4a6c-ab18-03b14136eecf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:05:27 np0005486808 nova_compute[259627]: 2025-10-14 09:05:27.293 2 DEBUG nova.compute.manager [req-ae1621d3-df36-4282-b160-9d4b50dffdbc req-20fbc73a-9635-4a6c-ab18-03b14136eecf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] No waiting events found dispatching network-vif-plugged-e3bc3ac3-6147-40d0-a19c-df111dcf23a5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:05:27 np0005486808 nova_compute[259627]: 2025-10-14 09:05:27.293 2 WARNING nova.compute.manager [req-ae1621d3-df36-4282-b160-9d4b50dffdbc req-20fbc73a-9635-4a6c-ab18-03b14136eecf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received unexpected event network-vif-plugged-e3bc3ac3-6147-40d0-a19c-df111dcf23a5 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:05:27 np0005486808 nova_compute[259627]: 2025-10-14 09:05:27.634 2 DEBUG nova.network.neutron [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Successfully updated port: b624404b-6681-4fc8-a870-dc9418e2de0f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:05:27 np0005486808 nova_compute[259627]: 2025-10-14 09:05:27.655 2 DEBUG oslo_concurrency.lockutils [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:05:27 np0005486808 nova_compute[259627]: 2025-10-14 09:05:27.656 2 DEBUG oslo_concurrency.lockutils [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquired lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:05:27 np0005486808 nova_compute[259627]: 2025-10-14 09:05:27.656 2 DEBUG nova.network.neutron [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:05:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:05:27 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1496: 305 pgs: 305 active+clean; 247 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 717 KiB/s wr, 76 op/s
Oct 14 05:05:27 np0005486808 nova_compute[259627]: 2025-10-14 09:05:27.871 2 WARNING nova.network.neutron [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] fc2d149f-aebf-406a-aed2-5161dd22b079 already exists in list: networks containing: ['fc2d149f-aebf-406a-aed2-5161dd22b079']. ignoring it#033[00m
Oct 14 05:05:27 np0005486808 nova_compute[259627]: 2025-10-14 09:05:27.871 2 WARNING nova.network.neutron [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] fc2d149f-aebf-406a-aed2-5161dd22b079 already exists in list: networks containing: ['fc2d149f-aebf-406a-aed2-5161dd22b079']. ignoring it#033[00m
Oct 14 05:05:28 np0005486808 nova_compute[259627]: 2025-10-14 09:05:28.989 2 DEBUG nova.compute.manager [req-c0b4a458-49fe-4a5d-bc42-c15c187610af req-cd876a1a-f52c-45f0-9f8a-49d405f0234c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Received event network-changed-282dfd9e-9e84-450c-a306-8bc55428feb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:05:28 np0005486808 nova_compute[259627]: 2025-10-14 09:05:28.989 2 DEBUG nova.compute.manager [req-c0b4a458-49fe-4a5d-bc42-c15c187610af req-cd876a1a-f52c-45f0-9f8a-49d405f0234c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Refreshing instance network info cache due to event network-changed-282dfd9e-9e84-450c-a306-8bc55428feb4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:05:28 np0005486808 nova_compute[259627]: 2025-10-14 09:05:28.990 2 DEBUG oslo_concurrency.lockutils [req-c0b4a458-49fe-4a5d-bc42-c15c187610af req-cd876a1a-f52c-45f0-9f8a-49d405f0234c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-65c7e6ed-131f-4bca-af69-a1241d048bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:05:28 np0005486808 nova_compute[259627]: 2025-10-14 09:05:28.990 2 DEBUG oslo_concurrency.lockutils [req-c0b4a458-49fe-4a5d-bc42-c15c187610af req-cd876a1a-f52c-45f0-9f8a-49d405f0234c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-65c7e6ed-131f-4bca-af69-a1241d048bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:05:28 np0005486808 nova_compute[259627]: 2025-10-14 09:05:28.990 2 DEBUG nova.network.neutron [req-c0b4a458-49fe-4a5d-bc42-c15c187610af req-cd876a1a-f52c-45f0-9f8a-49d405f0234c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Refreshing network info cache for port 282dfd9e-9e84-450c-a306-8bc55428feb4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:05:29 np0005486808 nova_compute[259627]: 2025-10-14 09:05:29.560 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:05:29 np0005486808 nova_compute[259627]: 2025-10-14 09:05:29.561 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:05:29 np0005486808 nova_compute[259627]: 2025-10-14 09:05:29.562 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 05:05:29 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1497: 305 pgs: 305 active+clean; 247 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 717 KiB/s wr, 76 op/s
Oct 14 05:05:30 np0005486808 nova_compute[259627]: 2025-10-14 09:05:30.037 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-1ce7a863-d0bf-4ea3-80f5-18675b16ac93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:05:30 np0005486808 nova_compute[259627]: 2025-10-14 09:05:30.039 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-1ce7a863-d0bf-4ea3-80f5-18675b16ac93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:05:30 np0005486808 nova_compute[259627]: 2025-10-14 09:05:30.040 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 14 05:05:30 np0005486808 nova_compute[259627]: 2025-10-14 09:05:30.040 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1ce7a863-d0bf-4ea3-80f5-18675b16ac93 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:05:30 np0005486808 nova_compute[259627]: 2025-10-14 09:05:30.598 2 DEBUG nova.network.neutron [req-c0b4a458-49fe-4a5d-bc42-c15c187610af req-cd876a1a-f52c-45f0-9f8a-49d405f0234c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Updated VIF entry in instance network info cache for port 282dfd9e-9e84-450c-a306-8bc55428feb4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:05:30 np0005486808 nova_compute[259627]: 2025-10-14 09:05:30.599 2 DEBUG nova.network.neutron [req-c0b4a458-49fe-4a5d-bc42-c15c187610af req-cd876a1a-f52c-45f0-9f8a-49d405f0234c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Updating instance_info_cache with network_info: [{"id": "282dfd9e-9e84-450c-a306-8bc55428feb4", "address": "fa:16:3e:de:50:93", "network": {"id": "77fb75b2-483b-47a5-99a5-ae91248b8ed8", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-2140863924-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55e6e0201a064f1390a998f830140354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282dfd9e-9e", "ovs_interfaceid": "282dfd9e-9e84-450c-a306-8bc55428feb4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:05:30 np0005486808 nova_compute[259627]: 2025-10-14 09:05:30.617 2 DEBUG oslo_concurrency.lockutils [req-c0b4a458-49fe-4a5d-bc42-c15c187610af req-cd876a1a-f52c-45f0-9f8a-49d405f0234c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-65c7e6ed-131f-4bca-af69-a1241d048bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:05:30 np0005486808 nova_compute[259627]: 2025-10-14 09:05:30.741 2 DEBUG nova.compute.manager [req-c70ba67e-64f8-40f0-970a-b1e4fe5f6c6a req-46faa1ce-6199-4fbb-a531-32a1b4b3ca76 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-changed-b624404b-6681-4fc8-a870-dc9418e2de0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:05:30 np0005486808 nova_compute[259627]: 2025-10-14 09:05:30.742 2 DEBUG nova.compute.manager [req-c70ba67e-64f8-40f0-970a-b1e4fe5f6c6a req-46faa1ce-6199-4fbb-a531-32a1b4b3ca76 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Refreshing instance network info cache due to event network-changed-b624404b-6681-4fc8-a870-dc9418e2de0f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:05:30 np0005486808 nova_compute[259627]: 2025-10-14 09:05:30.743 2 DEBUG oslo_concurrency.lockutils [req-c70ba67e-64f8-40f0-970a-b1e4fe5f6c6a req-46faa1ce-6199-4fbb-a531-32a1b4b3ca76 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:05:30 np0005486808 nova_compute[259627]: 2025-10-14 09:05:30.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:31 np0005486808 nova_compute[259627]: 2025-10-14 09:05:31.244 2 DEBUG nova.network.neutron [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Updating instance_info_cache with network_info: [{"id": "971d99c2-5a60-4cac-8f99-e819d71e419c", "address": "fa:16:3e:9a:79:ab", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap971d99c2-5a", "ovs_interfaceid": "971d99c2-5a60-4cac-8f99-e819d71e419c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "address": "fa:16:3e:54:94:02", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3bc3ac3-61", "ovs_interfaceid": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b624404b-6681-4fc8-a870-dc9418e2de0f", "address": "fa:16:3e:bb:02:94", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb624404b-66", "ovs_interfaceid": "b624404b-6681-4fc8-a870-dc9418e2de0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:05:31 np0005486808 nova_compute[259627]: 2025-10-14 09:05:31.265 2 DEBUG oslo_concurrency.lockutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "ec31b9ab-88ab-4085-a46b-76cb9825061a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:05:31 np0005486808 nova_compute[259627]: 2025-10-14 09:05:31.266 2 DEBUG oslo_concurrency.lockutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:05:31 np0005486808 nova_compute[259627]: 2025-10-14 09:05:31.270 2 DEBUG oslo_concurrency.lockutils [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Releasing lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:05:31 np0005486808 nova_compute[259627]: 2025-10-14 09:05:31.272 2 DEBUG oslo_concurrency.lockutils [req-c70ba67e-64f8-40f0-970a-b1e4fe5f6c6a req-46faa1ce-6199-4fbb-a531-32a1b4b3ca76 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:05:31 np0005486808 nova_compute[259627]: 2025-10-14 09:05:31.273 2 DEBUG nova.network.neutron [req-c70ba67e-64f8-40f0-970a-b1e4fe5f6c6a req-46faa1ce-6199-4fbb-a531-32a1b4b3ca76 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Refreshing network info cache for port b624404b-6681-4fc8-a870-dc9418e2de0f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:05:31 np0005486808 nova_compute[259627]: 2025-10-14 09:05:31.278 2 DEBUG nova.virt.libvirt.vif [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:04:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-766543543',display_name='tempest-AttachInterfacesTestJSON-server-766543543',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-766543543',id=54,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJQrZCrZaptXuotj+57tBn3/OGSvBqxv5B+H31iXFpSI71zcU5A1JwpwDksSj1zemT2KJ76DfcdGv/IVNbfXVnjr4ntmsdxnv83TIXCvdXfabQIpcV4CIqh+n8+7v/6SjQ==',key_name='tempest-keypair-1747841174',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:04:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-fxki54rp',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=47257c6e-4d10-4d8e-af5a-b57db20048ea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b624404b-6681-4fc8-a870-dc9418e2de0f", "address": "fa:16:3e:bb:02:94", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb624404b-66", "ovs_interfaceid": "b624404b-6681-4fc8-a870-dc9418e2de0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:05:31 np0005486808 nova_compute[259627]: 2025-10-14 09:05:31.279 2 DEBUG nova.network.os_vif_util [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "b624404b-6681-4fc8-a870-dc9418e2de0f", "address": "fa:16:3e:bb:02:94", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb624404b-66", "ovs_interfaceid": "b624404b-6681-4fc8-a870-dc9418e2de0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:05:31 np0005486808 nova_compute[259627]: 2025-10-14 09:05:31.280 2 DEBUG nova.network.os_vif_util [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:02:94,bridge_name='br-int',has_traffic_filtering=True,id=b624404b-6681-4fc8-a870-dc9418e2de0f,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb624404b-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:05:31 np0005486808 nova_compute[259627]: 2025-10-14 09:05:31.281 2 DEBUG os_vif [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:02:94,bridge_name='br-int',has_traffic_filtering=True,id=b624404b-6681-4fc8-a870-dc9418e2de0f,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb624404b-66') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:05:31 np0005486808 nova_compute[259627]: 2025-10-14 09:05:31.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:31 np0005486808 nova_compute[259627]: 2025-10-14 09:05:31.284 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:31 np0005486808 nova_compute[259627]: 2025-10-14 09:05:31.285 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:05:31 np0005486808 nova_compute[259627]: 2025-10-14 09:05:31.296 2 DEBUG nova.compute.manager [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:05:31 np0005486808 nova_compute[259627]: 2025-10-14 09:05:31.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:31 np0005486808 nova_compute[259627]: 2025-10-14 09:05:31.301 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb624404b-66, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:31 np0005486808 nova_compute[259627]: 2025-10-14 09:05:31.302 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb624404b-66, col_values=(('external_ids', {'iface-id': 'b624404b-6681-4fc8-a870-dc9418e2de0f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bb:02:94', 'vm-uuid': '47257c6e-4d10-4d8e-af5a-b57db20048ea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:31 np0005486808 nova_compute[259627]: 2025-10-14 09:05:31.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:31 np0005486808 NetworkManager[44885]: <info>  [1760432731.3054] manager: (tapb624404b-66): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/237)
Oct 14 05:05:31 np0005486808 nova_compute[259627]: 2025-10-14 09:05:31.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:05:31 np0005486808 nova_compute[259627]: 2025-10-14 09:05:31.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:31 np0005486808 nova_compute[259627]: 2025-10-14 09:05:31.317 2 INFO os_vif [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:02:94,bridge_name='br-int',has_traffic_filtering=True,id=b624404b-6681-4fc8-a870-dc9418e2de0f,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb624404b-66')#033[00m
Oct 14 05:05:31 np0005486808 nova_compute[259627]: 2025-10-14 09:05:31.318 2 DEBUG nova.virt.libvirt.vif [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:04:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-766543543',display_name='tempest-AttachInterfacesTestJSON-server-766543543',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-766543543',id=54,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJQrZCrZaptXuotj+57tBn3/OGSvBqxv5B+H31iXFpSI71zcU5A1JwpwDksSj1zemT2KJ76DfcdGv/IVNbfXVnjr4ntmsdxnv83TIXCvdXfabQIpcV4CIqh+n8+7v/6SjQ==',key_name='tempest-keypair-1747841174',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:04:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-fxki54rp',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=47257c6e-4d10-4d8e-af5a-b57db20048ea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b624404b-6681-4fc8-a870-dc9418e2de0f", "address": "fa:16:3e:bb:02:94", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb624404b-66", "ovs_interfaceid": "b624404b-6681-4fc8-a870-dc9418e2de0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:05:31 np0005486808 nova_compute[259627]: 2025-10-14 09:05:31.318 2 DEBUG nova.network.os_vif_util [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "b624404b-6681-4fc8-a870-dc9418e2de0f", "address": "fa:16:3e:bb:02:94", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb624404b-66", "ovs_interfaceid": "b624404b-6681-4fc8-a870-dc9418e2de0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:05:31 np0005486808 nova_compute[259627]: 2025-10-14 09:05:31.319 2 DEBUG nova.network.os_vif_util [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:02:94,bridge_name='br-int',has_traffic_filtering=True,id=b624404b-6681-4fc8-a870-dc9418e2de0f,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb624404b-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:05:31 np0005486808 nova_compute[259627]: 2025-10-14 09:05:31.321 2 DEBUG nova.virt.libvirt.guest [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] attach device xml: <interface type="ethernet">
Oct 14 05:05:31 np0005486808 nova_compute[259627]:  <mac address="fa:16:3e:bb:02:94"/>
Oct 14 05:05:31 np0005486808 nova_compute[259627]:  <model type="virtio"/>
Oct 14 05:05:31 np0005486808 nova_compute[259627]:  <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:05:31 np0005486808 nova_compute[259627]:  <mtu size="1442"/>
Oct 14 05:05:31 np0005486808 nova_compute[259627]:  <target dev="tapb624404b-66"/>
Oct 14 05:05:31 np0005486808 nova_compute[259627]: </interface>
Oct 14 05:05:31 np0005486808 nova_compute[259627]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct 14 05:05:31 np0005486808 kernel: tapb624404b-66: entered promiscuous mode
Oct 14 05:05:31 np0005486808 NetworkManager[44885]: <info>  [1760432731.3464] manager: (tapb624404b-66): new Tun device (/org/freedesktop/NetworkManager/Devices/238)
Oct 14 05:05:31 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:31Z|00540|binding|INFO|Claiming lport b624404b-6681-4fc8-a870-dc9418e2de0f for this chassis.
Oct 14 05:05:31 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:31Z|00541|binding|INFO|b624404b-6681-4fc8-a870-dc9418e2de0f: Claiming fa:16:3e:bb:02:94 10.100.0.9
Oct 14 05:05:31 np0005486808 nova_compute[259627]: 2025-10-14 09:05:31.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:31.359 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:02:94 10.100.0.9'], port_security=['fa:16:3e:bb:02:94 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '47257c6e-4d10-4d8e-af5a-b57db20048ea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2d149f-aebf-406a-aed2-5161dd22b079', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8945d892653642ac9c3d894f8bc3619d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '643bc47b-384e-4818-a06b-8ecfda630b09', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e933e805-d9b6-497a-9ba3-5f7153390420, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=b624404b-6681-4fc8-a870-dc9418e2de0f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:05:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:31.362 162547 INFO neutron.agent.ovn.metadata.agent [-] Port b624404b-6681-4fc8-a870-dc9418e2de0f in datapath fc2d149f-aebf-406a-aed2-5161dd22b079 bound to our chassis#033[00m
Oct 14 05:05:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:31.366 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc2d149f-aebf-406a-aed2-5161dd22b079#033[00m
Oct 14 05:05:31 np0005486808 nova_compute[259627]: 2025-10-14 09:05:31.368 2 DEBUG oslo_concurrency.lockutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:05:31 np0005486808 nova_compute[259627]: 2025-10-14 09:05:31.369 2 DEBUG oslo_concurrency.lockutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:05:31 np0005486808 nova_compute[259627]: 2025-10-14 09:05:31.377 2 DEBUG nova.virt.hardware [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:05:31 np0005486808 nova_compute[259627]: 2025-10-14 09:05:31.378 2 INFO nova.compute.claims [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:05:31 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:31Z|00542|binding|INFO|Setting lport b624404b-6681-4fc8-a870-dc9418e2de0f ovn-installed in OVS
Oct 14 05:05:31 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:31Z|00543|binding|INFO|Setting lport b624404b-6681-4fc8-a870-dc9418e2de0f up in Southbound
Oct 14 05:05:31 np0005486808 nova_compute[259627]: 2025-10-14 09:05:31.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:31 np0005486808 nova_compute[259627]: 2025-10-14 09:05:31.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:31.396 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[216c130a-7112-4e78-a96f-ec3d8e7b8df8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:31 np0005486808 systemd-udevd[320085]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:05:31 np0005486808 NetworkManager[44885]: <info>  [1760432731.4260] device (tapb624404b-66): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:05:31 np0005486808 NetworkManager[44885]: <info>  [1760432731.4274] device (tapb624404b-66): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:05:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:31.454 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[9f4d5aad-ac26-4ad1-a127-7a0a1355bb20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:31.459 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[92413a12-66c9-4972-86ad-76306d8f6503]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:31 np0005486808 nova_compute[259627]: 2025-10-14 09:05:31.474 2 DEBUG nova.virt.libvirt.driver [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:05:31 np0005486808 nova_compute[259627]: 2025-10-14 09:05:31.475 2 DEBUG nova.virt.libvirt.driver [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:05:31 np0005486808 nova_compute[259627]: 2025-10-14 09:05:31.476 2 DEBUG nova.virt.libvirt.driver [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No VIF found with MAC fa:16:3e:9a:79:ab, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:05:31 np0005486808 nova_compute[259627]: 2025-10-14 09:05:31.477 2 DEBUG nova.virt.libvirt.driver [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No VIF found with MAC fa:16:3e:54:94:02, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:05:31 np0005486808 nova_compute[259627]: 2025-10-14 09:05:31.477 2 DEBUG nova.virt.libvirt.driver [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No VIF found with MAC fa:16:3e:bb:02:94, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:05:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:31.495 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[29dfe148-4f73-4b17-8217-226249fd8ae3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:31 np0005486808 nova_compute[259627]: 2025-10-14 09:05:31.505 2 DEBUG nova.virt.libvirt.guest [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:05:31 np0005486808 nova_compute[259627]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:05:31 np0005486808 nova_compute[259627]:  <nova:name>tempest-AttachInterfacesTestJSON-server-766543543</nova:name>
Oct 14 05:05:31 np0005486808 nova_compute[259627]:  <nova:creationTime>2025-10-14 09:05:31</nova:creationTime>
Oct 14 05:05:31 np0005486808 nova_compute[259627]:  <nova:flavor name="m1.nano">
Oct 14 05:05:31 np0005486808 nova_compute[259627]:    <nova:memory>128</nova:memory>
Oct 14 05:05:31 np0005486808 nova_compute[259627]:    <nova:disk>1</nova:disk>
Oct 14 05:05:31 np0005486808 nova_compute[259627]:    <nova:swap>0</nova:swap>
Oct 14 05:05:31 np0005486808 nova_compute[259627]:    <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:05:31 np0005486808 nova_compute[259627]:    <nova:vcpus>1</nova:vcpus>
Oct 14 05:05:31 np0005486808 nova_compute[259627]:  </nova:flavor>
Oct 14 05:05:31 np0005486808 nova_compute[259627]:  <nova:owner>
Oct 14 05:05:31 np0005486808 nova_compute[259627]:    <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 05:05:31 np0005486808 nova_compute[259627]:    <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 05:05:31 np0005486808 nova_compute[259627]:  </nova:owner>
Oct 14 05:05:31 np0005486808 nova_compute[259627]:  <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:05:31 np0005486808 nova_compute[259627]:  <nova:ports>
Oct 14 05:05:31 np0005486808 nova_compute[259627]:    <nova:port uuid="971d99c2-5a60-4cac-8f99-e819d71e419c">
Oct 14 05:05:31 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 14 05:05:31 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:05:31 np0005486808 nova_compute[259627]:    <nova:port uuid="e3bc3ac3-6147-40d0-a19c-df111dcf23a5">
Oct 14 05:05:31 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 14 05:05:31 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:05:31 np0005486808 nova_compute[259627]:    <nova:port uuid="b624404b-6681-4fc8-a870-dc9418e2de0f">
Oct 14 05:05:31 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 14 05:05:31 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:05:31 np0005486808 nova_compute[259627]:  </nova:ports>
Oct 14 05:05:31 np0005486808 nova_compute[259627]: </nova:instance>
Oct 14 05:05:31 np0005486808 nova_compute[259627]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct 14 05:05:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:31.515 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d170d8ed-4503-470a-9680-89db84052cfb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc2d149f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:e7:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647447, 'reachable_time': 19128, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320092, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:31 np0005486808 nova_compute[259627]: 2025-10-14 09:05:31.536 2 DEBUG oslo_concurrency.lockutils [None req-b839d101-c7d4-4ca0-a654-7b22359ff5e3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "interface-47257c6e-4d10-4d8e-af5a-b57db20048ea-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 8.191s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:05:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:31.540 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[af54e1ad-26fb-4325-bf52-0908ad594719]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647459, 'tstamp': 647459}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320093, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647463, 'tstamp': 647463}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320093, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:31.543 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc2d149f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:31 np0005486808 nova_compute[259627]: 2025-10-14 09:05:31.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:31 np0005486808 nova_compute[259627]: 2025-10-14 09:05:31.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:31.548 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc2d149f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:31.549 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:05:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:31.549 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc2d149f-a0, col_values=(('external_ids', {'iface-id': '156432ee-35b5-40a9-aded-8066933d9972'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:31.549 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:05:31 np0005486808 nova_compute[259627]: 2025-10-14 09:05:31.552 2 DEBUG oslo_concurrency.processutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:05:31 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1498: 305 pgs: 305 active+clean; 247 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 718 KiB/s wr, 76 op/s
Oct 14 05:05:31 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:05:31 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4242352205' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:05:31 np0005486808 nova_compute[259627]: 2025-10-14 09:05:31.995 2 DEBUG oslo_concurrency.processutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:05:32 np0005486808 nova_compute[259627]: 2025-10-14 09:05:32.000 2 DEBUG nova.compute.provider_tree [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:05:32 np0005486808 nova_compute[259627]: 2025-10-14 09:05:32.027 2 DEBUG nova.scheduler.client.report [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:05:32 np0005486808 nova_compute[259627]: 2025-10-14 09:05:32.099 2 DEBUG oslo_concurrency.lockutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:05:32 np0005486808 nova_compute[259627]: 2025-10-14 09:05:32.100 2 DEBUG nova.compute.manager [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:05:32 np0005486808 nova_compute[259627]: 2025-10-14 09:05:32.152 2 DEBUG nova.compute.manager [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:05:32 np0005486808 nova_compute[259627]: 2025-10-14 09:05:32.152 2 DEBUG nova.network.neutron [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:05:32 np0005486808 nova_compute[259627]: 2025-10-14 09:05:32.169 2 INFO nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:05:32 np0005486808 nova_compute[259627]: 2025-10-14 09:05:32.183 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Updating instance_info_cache with network_info: [{"id": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "address": "fa:16:3e:53:a5:80", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58429c4c-bd", "ovs_interfaceid": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:05:32 np0005486808 nova_compute[259627]: 2025-10-14 09:05:32.198 2 DEBUG nova.compute.manager [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:05:32 np0005486808 nova_compute[259627]: 2025-10-14 09:05:32.202 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-1ce7a863-d0bf-4ea3-80f5-18675b16ac93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:05:32 np0005486808 nova_compute[259627]: 2025-10-14 09:05:32.202 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 14 05:05:32 np0005486808 nova_compute[259627]: 2025-10-14 09:05:32.202 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:05:32 np0005486808 nova_compute[259627]: 2025-10-14 09:05:32.203 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:05:32 np0005486808 nova_compute[259627]: 2025-10-14 09:05:32.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:32 np0005486808 nova_compute[259627]: 2025-10-14 09:05:32.286 2 DEBUG nova.compute.manager [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:05:32 np0005486808 nova_compute[259627]: 2025-10-14 09:05:32.288 2 DEBUG nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:05:32 np0005486808 nova_compute[259627]: 2025-10-14 09:05:32.289 2 INFO nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Creating image(s)#033[00m
Oct 14 05:05:32 np0005486808 nova_compute[259627]: 2025-10-14 09:05:32.325 2 DEBUG nova.storage.rbd_utils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image ec31b9ab-88ab-4085-a46b-76cb9825061a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:05:32 np0005486808 nova_compute[259627]: 2025-10-14 09:05:32.360 2 DEBUG nova.storage.rbd_utils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image ec31b9ab-88ab-4085-a46b-76cb9825061a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:05:32 np0005486808 nova_compute[259627]: 2025-10-14 09:05:32.390 2 DEBUG nova.storage.rbd_utils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image ec31b9ab-88ab-4085-a46b-76cb9825061a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:05:32 np0005486808 nova_compute[259627]: 2025-10-14 09:05:32.394 2 DEBUG oslo_concurrency.processutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:05:32 np0005486808 nova_compute[259627]: 2025-10-14 09:05:32.436 2 DEBUG nova.policy [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd952679a4e6a4fc6bacf42c02d3e92d0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4e47722c609640d3a70fee8dd6ff94cc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:05:32 np0005486808 nova_compute[259627]: 2025-10-14 09:05:32.519 2 DEBUG oslo_concurrency.processutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:05:32 np0005486808 nova_compute[259627]: 2025-10-14 09:05:32.520 2 DEBUG oslo_concurrency.lockutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:05:32 np0005486808 nova_compute[259627]: 2025-10-14 09:05:32.521 2 DEBUG oslo_concurrency.lockutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:05:32 np0005486808 nova_compute[259627]: 2025-10-14 09:05:32.522 2 DEBUG oslo_concurrency.lockutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:05:32 np0005486808 nova_compute[259627]: 2025-10-14 09:05:32.548 2 DEBUG nova.storage.rbd_utils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image ec31b9ab-88ab-4085-a46b-76cb9825061a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:05:32 np0005486808 nova_compute[259627]: 2025-10-14 09:05:32.554 2 DEBUG oslo_concurrency.processutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 ec31b9ab-88ab-4085-a46b-76cb9825061a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:05:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:05:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:05:32 np0005486808 nova_compute[259627]: 2025-10-14 09:05:32.724 2 DEBUG nova.compute.manager [req-0ec652a0-09fc-46a7-b9a1-09f313c720c4 req-5d7c0b4f-3dbc-4483-bdc5-d7ae457087c1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-vif-plugged-b624404b-6681-4fc8-a870-dc9418e2de0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:05:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:05:32 np0005486808 nova_compute[259627]: 2025-10-14 09:05:32.726 2 DEBUG oslo_concurrency.lockutils [req-0ec652a0-09fc-46a7-b9a1-09f313c720c4 req-5d7c0b4f-3dbc-4483-bdc5-d7ae457087c1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:05:32 np0005486808 nova_compute[259627]: 2025-10-14 09:05:32.727 2 DEBUG oslo_concurrency.lockutils [req-0ec652a0-09fc-46a7-b9a1-09f313c720c4 req-5d7c0b4f-3dbc-4483-bdc5-d7ae457087c1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:05:32 np0005486808 nova_compute[259627]: 2025-10-14 09:05:32.728 2 DEBUG oslo_concurrency.lockutils [req-0ec652a0-09fc-46a7-b9a1-09f313c720c4 req-5d7c0b4f-3dbc-4483-bdc5-d7ae457087c1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:05:32 np0005486808 nova_compute[259627]: 2025-10-14 09:05:32.729 2 DEBUG nova.compute.manager [req-0ec652a0-09fc-46a7-b9a1-09f313c720c4 req-5d7c0b4f-3dbc-4483-bdc5-d7ae457087c1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] No waiting events found dispatching network-vif-plugged-b624404b-6681-4fc8-a870-dc9418e2de0f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:05:32 np0005486808 nova_compute[259627]: 2025-10-14 09:05:32.730 2 WARNING nova.compute.manager [req-0ec652a0-09fc-46a7-b9a1-09f313c720c4 req-5d7c0b4f-3dbc-4483-bdc5-d7ae457087c1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received unexpected event network-vif-plugged-b624404b-6681-4fc8-a870-dc9418e2de0f for instance with vm_state active and task_state None.#033[00m
Oct 14 05:05:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:05:32
Oct 14 05:05:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:05:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:05:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['images', '.mgr', 'volumes', 'default.rgw.control', '.rgw.root', 'backups', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta']
Oct 14 05:05:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:05:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:05:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:05:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:05:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:05:32 np0005486808 nova_compute[259627]: 2025-10-14 09:05:32.848 2 DEBUG oslo_concurrency.processutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 ec31b9ab-88ab-4085-a46b-76cb9825061a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:05:32 np0005486808 nova_compute[259627]: 2025-10-14 09:05:32.911 2 DEBUG nova.storage.rbd_utils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] resizing rbd image ec31b9ab-88ab-4085-a46b-76cb9825061a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:05:32 np0005486808 nova_compute[259627]: 2025-10-14 09:05:32.996 2 DEBUG nova.objects.instance [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lazy-loading 'migration_context' on Instance uuid ec31b9ab-88ab-4085-a46b-76cb9825061a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:05:33 np0005486808 nova_compute[259627]: 2025-10-14 09:05:33.016 2 DEBUG nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:05:33 np0005486808 nova_compute[259627]: 2025-10-14 09:05:33.016 2 DEBUG nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Ensure instance console log exists: /var/lib/nova/instances/ec31b9ab-88ab-4085-a46b-76cb9825061a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:05:33 np0005486808 nova_compute[259627]: 2025-10-14 09:05:33.017 2 DEBUG oslo_concurrency.lockutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:05:33 np0005486808 nova_compute[259627]: 2025-10-14 09:05:33.017 2 DEBUG oslo_concurrency.lockutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:05:33 np0005486808 nova_compute[259627]: 2025-10-14 09:05:33.017 2 DEBUG oslo_concurrency.lockutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:05:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:05:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:05:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:05:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:05:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:05:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:05:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:05:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:05:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:05:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:05:33 np0005486808 nova_compute[259627]: 2025-10-14 09:05:33.095 2 DEBUG nova.network.neutron [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Successfully created port: 31797867-f0bc-4632-a658-bd0fae609c23 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:05:33 np0005486808 nova_compute[259627]: 2025-10-14 09:05:33.404 2 DEBUG nova.network.neutron [req-c70ba67e-64f8-40f0-970a-b1e4fe5f6c6a req-46faa1ce-6199-4fbb-a531-32a1b4b3ca76 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Updated VIF entry in instance network info cache for port b624404b-6681-4fc8-a870-dc9418e2de0f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:05:33 np0005486808 nova_compute[259627]: 2025-10-14 09:05:33.405 2 DEBUG nova.network.neutron [req-c70ba67e-64f8-40f0-970a-b1e4fe5f6c6a req-46faa1ce-6199-4fbb-a531-32a1b4b3ca76 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Updating instance_info_cache with network_info: [{"id": "971d99c2-5a60-4cac-8f99-e819d71e419c", "address": "fa:16:3e:9a:79:ab", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap971d99c2-5a", "ovs_interfaceid": "971d99c2-5a60-4cac-8f99-e819d71e419c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "address": "fa:16:3e:54:94:02", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3bc3ac3-61", "ovs_interfaceid": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b624404b-6681-4fc8-a870-dc9418e2de0f", "address": "fa:16:3e:bb:02:94", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb624404b-66", "ovs_interfaceid": "b624404b-6681-4fc8-a870-dc9418e2de0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:05:33 np0005486808 nova_compute[259627]: 2025-10-14 09:05:33.430 2 DEBUG oslo_concurrency.lockutils [req-c70ba67e-64f8-40f0-970a-b1e4fe5f6c6a req-46faa1ce-6199-4fbb-a531-32a1b4b3ca76 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:05:33 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1499: 305 pgs: 305 active+clean; 247 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 5.2 KiB/s wr, 73 op/s
Oct 14 05:05:33 np0005486808 nova_compute[259627]: 2025-10-14 09:05:33.844 2 DEBUG nova.network.neutron [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Successfully updated port: 31797867-f0bc-4632-a658-bd0fae609c23 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:05:33 np0005486808 nova_compute[259627]: 2025-10-14 09:05:33.871 2 DEBUG oslo_concurrency.lockutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "refresh_cache-ec31b9ab-88ab-4085-a46b-76cb9825061a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:05:33 np0005486808 nova_compute[259627]: 2025-10-14 09:05:33.871 2 DEBUG oslo_concurrency.lockutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquired lock "refresh_cache-ec31b9ab-88ab-4085-a46b-76cb9825061a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:05:33 np0005486808 nova_compute[259627]: 2025-10-14 09:05:33.871 2 DEBUG nova.network.neutron [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:05:33 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:33Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:de:50:93 10.100.0.10
Oct 14 05:05:33 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:33Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:de:50:93 10.100.0.10
Oct 14 05:05:34 np0005486808 nova_compute[259627]: 2025-10-14 09:05:34.028 2 DEBUG nova.compute.manager [req-deea44ea-5ab1-4c22-bcbb-8d05cf9673f9 req-0aa19476-444d-4121-9d1a-709a0c5f5567 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Received event network-changed-31797867-f0bc-4632-a658-bd0fae609c23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:05:34 np0005486808 nova_compute[259627]: 2025-10-14 09:05:34.029 2 DEBUG nova.compute.manager [req-deea44ea-5ab1-4c22-bcbb-8d05cf9673f9 req-0aa19476-444d-4121-9d1a-709a0c5f5567 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Refreshing instance network info cache due to event network-changed-31797867-f0bc-4632-a658-bd0fae609c23. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:05:34 np0005486808 nova_compute[259627]: 2025-10-14 09:05:34.029 2 DEBUG oslo_concurrency.lockutils [req-deea44ea-5ab1-4c22-bcbb-8d05cf9673f9 req-0aa19476-444d-4121-9d1a-709a0c5f5567 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-ec31b9ab-88ab-4085-a46b-76cb9825061a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:05:34 np0005486808 nova_compute[259627]: 2025-10-14 09:05:34.070 2 DEBUG nova.network.neutron [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:05:34 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:34Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bb:02:94 10.100.0.9
Oct 14 05:05:34 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:34Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bb:02:94 10.100.0.9
Oct 14 05:05:34 np0005486808 nova_compute[259627]: 2025-10-14 09:05:34.806 2 DEBUG nova.compute.manager [req-5d21e53d-f485-4bbc-bd30-4a6585238b41 req-223fe539-43f7-40c9-87c1-05d880c86560 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-vif-plugged-b624404b-6681-4fc8-a870-dc9418e2de0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:05:34 np0005486808 nova_compute[259627]: 2025-10-14 09:05:34.807 2 DEBUG oslo_concurrency.lockutils [req-5d21e53d-f485-4bbc-bd30-4a6585238b41 req-223fe539-43f7-40c9-87c1-05d880c86560 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:05:34 np0005486808 nova_compute[259627]: 2025-10-14 09:05:34.807 2 DEBUG oslo_concurrency.lockutils [req-5d21e53d-f485-4bbc-bd30-4a6585238b41 req-223fe539-43f7-40c9-87c1-05d880c86560 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:05:34 np0005486808 nova_compute[259627]: 2025-10-14 09:05:34.808 2 DEBUG oslo_concurrency.lockutils [req-5d21e53d-f485-4bbc-bd30-4a6585238b41 req-223fe539-43f7-40c9-87c1-05d880c86560 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:05:34 np0005486808 nova_compute[259627]: 2025-10-14 09:05:34.808 2 DEBUG nova.compute.manager [req-5d21e53d-f485-4bbc-bd30-4a6585238b41 req-223fe539-43f7-40c9-87c1-05d880c86560 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] No waiting events found dispatching network-vif-plugged-b624404b-6681-4fc8-a870-dc9418e2de0f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:05:34 np0005486808 nova_compute[259627]: 2025-10-14 09:05:34.809 2 WARNING nova.compute.manager [req-5d21e53d-f485-4bbc-bd30-4a6585238b41 req-223fe539-43f7-40c9-87c1-05d880c86560 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received unexpected event network-vif-plugged-b624404b-6681-4fc8-a870-dc9418e2de0f for instance with vm_state active and task_state None.#033[00m
Oct 14 05:05:35 np0005486808 nova_compute[259627]: 2025-10-14 09:05:35.390 2 DEBUG nova.network.neutron [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Updating instance_info_cache with network_info: [{"id": "31797867-f0bc-4632-a658-bd0fae609c23", "address": "fa:16:3e:01:3d:6b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31797867-f0", "ovs_interfaceid": "31797867-f0bc-4632-a658-bd0fae609c23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:05:35 np0005486808 nova_compute[259627]: 2025-10-14 09:05:35.409 2 DEBUG oslo_concurrency.lockutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Releasing lock "refresh_cache-ec31b9ab-88ab-4085-a46b-76cb9825061a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:05:35 np0005486808 nova_compute[259627]: 2025-10-14 09:05:35.410 2 DEBUG nova.compute.manager [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Instance network_info: |[{"id": "31797867-f0bc-4632-a658-bd0fae609c23", "address": "fa:16:3e:01:3d:6b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31797867-f0", "ovs_interfaceid": "31797867-f0bc-4632-a658-bd0fae609c23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:05:35 np0005486808 nova_compute[259627]: 2025-10-14 09:05:35.411 2 DEBUG oslo_concurrency.lockutils [req-deea44ea-5ab1-4c22-bcbb-8d05cf9673f9 req-0aa19476-444d-4121-9d1a-709a0c5f5567 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-ec31b9ab-88ab-4085-a46b-76cb9825061a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:05:35 np0005486808 nova_compute[259627]: 2025-10-14 09:05:35.411 2 DEBUG nova.network.neutron [req-deea44ea-5ab1-4c22-bcbb-8d05cf9673f9 req-0aa19476-444d-4121-9d1a-709a0c5f5567 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Refreshing network info cache for port 31797867-f0bc-4632-a658-bd0fae609c23 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:05:35 np0005486808 nova_compute[259627]: 2025-10-14 09:05:35.416 2 DEBUG nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Start _get_guest_xml network_info=[{"id": "31797867-f0bc-4632-a658-bd0fae609c23", "address": "fa:16:3e:01:3d:6b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31797867-f0", "ovs_interfaceid": "31797867-f0bc-4632-a658-bd0fae609c23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:05:35 np0005486808 nova_compute[259627]: 2025-10-14 09:05:35.423 2 WARNING nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:05:35 np0005486808 nova_compute[259627]: 2025-10-14 09:05:35.429 2 DEBUG nova.virt.libvirt.host [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:05:35 np0005486808 nova_compute[259627]: 2025-10-14 09:05:35.430 2 DEBUG nova.virt.libvirt.host [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:05:35 np0005486808 nova_compute[259627]: 2025-10-14 09:05:35.434 2 DEBUG nova.virt.libvirt.host [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:05:35 np0005486808 nova_compute[259627]: 2025-10-14 09:05:35.435 2 DEBUG nova.virt.libvirt.host [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:05:35 np0005486808 nova_compute[259627]: 2025-10-14 09:05:35.435 2 DEBUG nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:05:35 np0005486808 nova_compute[259627]: 2025-10-14 09:05:35.436 2 DEBUG nova.virt.hardware [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:05:35 np0005486808 nova_compute[259627]: 2025-10-14 09:05:35.437 2 DEBUG nova.virt.hardware [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:05:35 np0005486808 nova_compute[259627]: 2025-10-14 09:05:35.437 2 DEBUG nova.virt.hardware [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:05:35 np0005486808 nova_compute[259627]: 2025-10-14 09:05:35.438 2 DEBUG nova.virt.hardware [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:05:35 np0005486808 nova_compute[259627]: 2025-10-14 09:05:35.438 2 DEBUG nova.virt.hardware [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:05:35 np0005486808 nova_compute[259627]: 2025-10-14 09:05:35.439 2 DEBUG nova.virt.hardware [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:05:35 np0005486808 nova_compute[259627]: 2025-10-14 09:05:35.440 2 DEBUG nova.virt.hardware [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:05:35 np0005486808 nova_compute[259627]: 2025-10-14 09:05:35.440 2 DEBUG nova.virt.hardware [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:05:35 np0005486808 nova_compute[259627]: 2025-10-14 09:05:35.441 2 DEBUG nova.virt.hardware [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:05:35 np0005486808 nova_compute[259627]: 2025-10-14 09:05:35.441 2 DEBUG nova.virt.hardware [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:05:35 np0005486808 nova_compute[259627]: 2025-10-14 09:05:35.442 2 DEBUG nova.virt.hardware [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:05:35 np0005486808 nova_compute[259627]: 2025-10-14 09:05:35.447 2 DEBUG oslo_concurrency.processutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:05:35 np0005486808 nova_compute[259627]: 2025-10-14 09:05:35.529 2 DEBUG oslo_concurrency.lockutils [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "interface-47257c6e-4d10-4d8e-af5a-b57db20048ea-be6943f6-df97-4a84-854b-858cc7d3ea1a" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:05:35 np0005486808 nova_compute[259627]: 2025-10-14 09:05:35.530 2 DEBUG oslo_concurrency.lockutils [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "interface-47257c6e-4d10-4d8e-af5a-b57db20048ea-be6943f6-df97-4a84-854b-858cc7d3ea1a" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:05:35 np0005486808 nova_compute[259627]: 2025-10-14 09:05:35.531 2 DEBUG nova.objects.instance [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'flavor' on Instance uuid 47257c6e-4d10-4d8e-af5a-b57db20048ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:05:35 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1500: 305 pgs: 305 active+clean; 326 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Oct 14 05:05:35 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:05:35 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1078694914' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:05:35 np0005486808 nova_compute[259627]: 2025-10-14 09:05:35.944 2 DEBUG oslo_concurrency.processutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:05:35 np0005486808 nova_compute[259627]: 2025-10-14 09:05:35.970 2 DEBUG nova.storage.rbd_utils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image ec31b9ab-88ab-4085-a46b-76cb9825061a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:05:35 np0005486808 nova_compute[259627]: 2025-10-14 09:05:35.974 2 DEBUG oslo_concurrency.processutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:05:36 np0005486808 nova_compute[259627]: 2025-10-14 09:05:36.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:05:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2944170748' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:05:36 np0005486808 nova_compute[259627]: 2025-10-14 09:05:36.416 2 DEBUG oslo_concurrency.processutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:05:36 np0005486808 nova_compute[259627]: 2025-10-14 09:05:36.418 2 DEBUG nova.virt.libvirt.vif [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:05:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-412231426',display_name='tempest-tempest.common.compute-instance-412231426',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-412231426',id=56,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4e47722c609640d3a70fee8dd6ff94cc',ramdisk_id='',reservation_id='r-v50tykwv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-894139105',owner_user_name='tempest-ServerActionsTest
OtherA-894139105-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:05:32Z,user_data=None,user_id='d952679a4e6a4fc6bacf42c02d3e92d0',uuid=ec31b9ab-88ab-4085-a46b-76cb9825061a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "31797867-f0bc-4632-a658-bd0fae609c23", "address": "fa:16:3e:01:3d:6b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31797867-f0", "ovs_interfaceid": "31797867-f0bc-4632-a658-bd0fae609c23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:05:36 np0005486808 nova_compute[259627]: 2025-10-14 09:05:36.419 2 DEBUG nova.network.os_vif_util [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converting VIF {"id": "31797867-f0bc-4632-a658-bd0fae609c23", "address": "fa:16:3e:01:3d:6b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31797867-f0", "ovs_interfaceid": "31797867-f0bc-4632-a658-bd0fae609c23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:05:36 np0005486808 nova_compute[259627]: 2025-10-14 09:05:36.421 2 DEBUG nova.network.os_vif_util [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:3d:6b,bridge_name='br-int',has_traffic_filtering=True,id=31797867-f0bc-4632-a658-bd0fae609c23,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31797867-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:05:36 np0005486808 nova_compute[259627]: 2025-10-14 09:05:36.423 2 DEBUG nova.objects.instance [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lazy-loading 'pci_devices' on Instance uuid ec31b9ab-88ab-4085-a46b-76cb9825061a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:05:36 np0005486808 nova_compute[259627]: 2025-10-14 09:05:36.441 2 DEBUG nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:05:36 np0005486808 nova_compute[259627]:  <uuid>ec31b9ab-88ab-4085-a46b-76cb9825061a</uuid>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:  <name>instance-00000038</name>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:05:36 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:      <nova:name>tempest-tempest.common.compute-instance-412231426</nova:name>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:05:35</nova:creationTime>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:05:36 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:        <nova:user uuid="d952679a4e6a4fc6bacf42c02d3e92d0">tempest-ServerActionsTestOtherA-894139105-project-member</nova:user>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:        <nova:project uuid="4e47722c609640d3a70fee8dd6ff94cc">tempest-ServerActionsTestOtherA-894139105</nova:project>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:        <nova:port uuid="31797867-f0bc-4632-a658-bd0fae609c23">
Oct 14 05:05:36 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:      <entry name="serial">ec31b9ab-88ab-4085-a46b-76cb9825061a</entry>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:      <entry name="uuid">ec31b9ab-88ab-4085-a46b-76cb9825061a</entry>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:05:36 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/ec31b9ab-88ab-4085-a46b-76cb9825061a_disk">
Oct 14 05:05:36 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:05:36 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:05:36 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/ec31b9ab-88ab-4085-a46b-76cb9825061a_disk.config">
Oct 14 05:05:36 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:05:36 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:05:36 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:01:3d:6b"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:      <target dev="tap31797867-f0"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:05:36 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/ec31b9ab-88ab-4085-a46b-76cb9825061a/console.log" append="off"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:05:36 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:05:36 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:05:36 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:05:36 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:05:36 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:05:36 np0005486808 nova_compute[259627]: 2025-10-14 09:05:36.444 2 DEBUG nova.compute.manager [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Preparing to wait for external event network-vif-plugged-31797867-f0bc-4632-a658-bd0fae609c23 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:05:36 np0005486808 nova_compute[259627]: 2025-10-14 09:05:36.445 2 DEBUG oslo_concurrency.lockutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:05:36 np0005486808 nova_compute[259627]: 2025-10-14 09:05:36.445 2 DEBUG oslo_concurrency.lockutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:05:36 np0005486808 nova_compute[259627]: 2025-10-14 09:05:36.446 2 DEBUG oslo_concurrency.lockutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:05:36 np0005486808 nova_compute[259627]: 2025-10-14 09:05:36.448 2 DEBUG nova.virt.libvirt.vif [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:05:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-412231426',display_name='tempest-tempest.common.compute-instance-412231426',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-412231426',id=56,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4e47722c609640d3a70fee8dd6ff94cc',ramdisk_id='',reservation_id='r-v50tykwv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-894139105',owner_user_name='tempest-ServerActionsTestOtherA-894139105-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:05:32Z,user_data=None,user_id='d952679a4e6a4fc6bacf42c02d3e92d0',uuid=ec31b9ab-88ab-4085-a46b-76cb9825061a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "31797867-f0bc-4632-a658-bd0fae609c23", "address": "fa:16:3e:01:3d:6b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31797867-f0", "ovs_interfaceid": "31797867-f0bc-4632-a658-bd0fae609c23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:05:36 np0005486808 nova_compute[259627]: 2025-10-14 09:05:36.449 2 DEBUG nova.network.os_vif_util [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converting VIF {"id": "31797867-f0bc-4632-a658-bd0fae609c23", "address": "fa:16:3e:01:3d:6b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31797867-f0", "ovs_interfaceid": "31797867-f0bc-4632-a658-bd0fae609c23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:05:36 np0005486808 nova_compute[259627]: 2025-10-14 09:05:36.450 2 DEBUG nova.network.os_vif_util [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:3d:6b,bridge_name='br-int',has_traffic_filtering=True,id=31797867-f0bc-4632-a658-bd0fae609c23,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31797867-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:05:36 np0005486808 nova_compute[259627]: 2025-10-14 09:05:36.451 2 DEBUG os_vif [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:3d:6b,bridge_name='br-int',has_traffic_filtering=True,id=31797867-f0bc-4632-a658-bd0fae609c23,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31797867-f0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:05:36 np0005486808 nova_compute[259627]: 2025-10-14 09:05:36.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:36 np0005486808 nova_compute[259627]: 2025-10-14 09:05:36.452 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:36 np0005486808 nova_compute[259627]: 2025-10-14 09:05:36.453 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:05:36 np0005486808 nova_compute[259627]: 2025-10-14 09:05:36.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:36 np0005486808 nova_compute[259627]: 2025-10-14 09:05:36.458 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31797867-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:36 np0005486808 nova_compute[259627]: 2025-10-14 09:05:36.459 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap31797867-f0, col_values=(('external_ids', {'iface-id': '31797867-f0bc-4632-a658-bd0fae609c23', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:01:3d:6b', 'vm-uuid': 'ec31b9ab-88ab-4085-a46b-76cb9825061a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:36 np0005486808 nova_compute[259627]: 2025-10-14 09:05:36.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:36 np0005486808 NetworkManager[44885]: <info>  [1760432736.4625] manager: (tap31797867-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/239)
Oct 14 05:05:36 np0005486808 nova_compute[259627]: 2025-10-14 09:05:36.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:05:36 np0005486808 nova_compute[259627]: 2025-10-14 09:05:36.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:36 np0005486808 nova_compute[259627]: 2025-10-14 09:05:36.473 2 INFO os_vif [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:3d:6b,bridge_name='br-int',has_traffic_filtering=True,id=31797867-f0bc-4632-a658-bd0fae609c23,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31797867-f0')#033[00m
Oct 14 05:05:36 np0005486808 nova_compute[259627]: 2025-10-14 09:05:36.537 2 DEBUG nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:05:36 np0005486808 nova_compute[259627]: 2025-10-14 09:05:36.538 2 DEBUG nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:05:36 np0005486808 nova_compute[259627]: 2025-10-14 09:05:36.539 2 DEBUG nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] No VIF found with MAC fa:16:3e:01:3d:6b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:05:36 np0005486808 nova_compute[259627]: 2025-10-14 09:05:36.539 2 INFO nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Using config drive#033[00m
Oct 14 05:05:36 np0005486808 nova_compute[259627]: 2025-10-14 09:05:36.572 2 DEBUG nova.storage.rbd_utils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image ec31b9ab-88ab-4085-a46b-76cb9825061a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:05:36 np0005486808 nova_compute[259627]: 2025-10-14 09:05:36.820 2 DEBUG nova.objects.instance [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'pci_requests' on Instance uuid 47257c6e-4d10-4d8e-af5a-b57db20048ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:05:36 np0005486808 nova_compute[259627]: 2025-10-14 09:05:36.851 2 DEBUG nova.network.neutron [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:05:37 np0005486808 nova_compute[259627]: 2025-10-14 09:05:37.239 2 INFO nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Creating config drive at /var/lib/nova/instances/ec31b9ab-88ab-4085-a46b-76cb9825061a/disk.config#033[00m
Oct 14 05:05:37 np0005486808 nova_compute[259627]: 2025-10-14 09:05:37.249 2 DEBUG oslo_concurrency.processutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ec31b9ab-88ab-4085-a46b-76cb9825061a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfl5ihuf9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:05:37 np0005486808 nova_compute[259627]: 2025-10-14 09:05:37.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:37 np0005486808 nova_compute[259627]: 2025-10-14 09:05:37.407 2 DEBUG oslo_concurrency.processutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ec31b9ab-88ab-4085-a46b-76cb9825061a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfl5ihuf9" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:05:37 np0005486808 nova_compute[259627]: 2025-10-14 09:05:37.438 2 DEBUG nova.storage.rbd_utils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image ec31b9ab-88ab-4085-a46b-76cb9825061a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:05:37 np0005486808 nova_compute[259627]: 2025-10-14 09:05:37.443 2 DEBUG oslo_concurrency.processutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ec31b9ab-88ab-4085-a46b-76cb9825061a/disk.config ec31b9ab-88ab-4085-a46b-76cb9825061a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:05:37 np0005486808 nova_compute[259627]: 2025-10-14 09:05:37.540 2 DEBUG nova.policy [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8b0f0f2991214b9caed6f475a149c342', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8945d892653642ac9c3d894f8bc3619d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:05:37 np0005486808 nova_compute[259627]: 2025-10-14 09:05:37.546 2 DEBUG nova.network.neutron [req-deea44ea-5ab1-4c22-bcbb-8d05cf9673f9 req-0aa19476-444d-4121-9d1a-709a0c5f5567 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Updated VIF entry in instance network info cache for port 31797867-f0bc-4632-a658-bd0fae609c23. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:05:37 np0005486808 nova_compute[259627]: 2025-10-14 09:05:37.547 2 DEBUG nova.network.neutron [req-deea44ea-5ab1-4c22-bcbb-8d05cf9673f9 req-0aa19476-444d-4121-9d1a-709a0c5f5567 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Updating instance_info_cache with network_info: [{"id": "31797867-f0bc-4632-a658-bd0fae609c23", "address": "fa:16:3e:01:3d:6b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31797867-f0", "ovs_interfaceid": "31797867-f0bc-4632-a658-bd0fae609c23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:05:37 np0005486808 nova_compute[259627]: 2025-10-14 09:05:37.581 2 DEBUG oslo_concurrency.lockutils [req-deea44ea-5ab1-4c22-bcbb-8d05cf9673f9 req-0aa19476-444d-4121-9d1a-709a0c5f5567 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-ec31b9ab-88ab-4085-a46b-76cb9825061a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:05:37 np0005486808 nova_compute[259627]: 2025-10-14 09:05:37.636 2 DEBUG oslo_concurrency.processutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ec31b9ab-88ab-4085-a46b-76cb9825061a/disk.config ec31b9ab-88ab-4085-a46b-76cb9825061a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:05:37 np0005486808 nova_compute[259627]: 2025-10-14 09:05:37.637 2 INFO nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Deleting local config drive /var/lib/nova/instances/ec31b9ab-88ab-4085-a46b-76cb9825061a/disk.config because it was imported into RBD.#033[00m
Oct 14 05:05:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:05:37 np0005486808 podman[320401]: 2025-10-14 09:05:37.667919503 +0000 UTC m=+0.078350470 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 14 05:05:37 np0005486808 podman[320405]: 2025-10-14 09:05:37.668621191 +0000 UTC m=+0.074527977 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 14 05:05:37 np0005486808 kernel: tap31797867-f0: entered promiscuous mode
Oct 14 05:05:37 np0005486808 NetworkManager[44885]: <info>  [1760432737.6892] manager: (tap31797867-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/240)
Oct 14 05:05:37 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:37Z|00544|binding|INFO|Claiming lport 31797867-f0bc-4632-a658-bd0fae609c23 for this chassis.
Oct 14 05:05:37 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:37Z|00545|binding|INFO|31797867-f0bc-4632-a658-bd0fae609c23: Claiming fa:16:3e:01:3d:6b 10.100.0.5
Oct 14 05:05:37 np0005486808 nova_compute[259627]: 2025-10-14 09:05:37.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:37.701 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:3d:6b 10.100.0.5'], port_security=['fa:16:3e:01:3d:6b 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ec31b9ab-88ab-4085-a46b-76cb9825061a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3b87118-f516-4f2d-8696-aa7290af9d83', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e47722c609640d3a70fee8dd6ff94cc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a1858677-a8a5-4b0c-b70b-3875847a67c8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07d19cad-5a0d-49ee-a7bf-91e279c3b03d, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=31797867-f0bc-4632-a658-bd0fae609c23) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:05:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:37.702 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 31797867-f0bc-4632-a658-bd0fae609c23 in datapath f3b87118-f516-4f2d-8696-aa7290af9d83 bound to our chassis#033[00m
Oct 14 05:05:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:37.704 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3b87118-f516-4f2d-8696-aa7290af9d83#033[00m
Oct 14 05:05:37 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1501: 305 pgs: 305 active+clean; 326 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Oct 14 05:05:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:37.720 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4fc76a4d-e989-4fa1-869e-17355d966261]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:37 np0005486808 systemd-udevd[320457]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:05:37 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:37Z|00546|binding|INFO|Setting lport 31797867-f0bc-4632-a658-bd0fae609c23 ovn-installed in OVS
Oct 14 05:05:37 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:37Z|00547|binding|INFO|Setting lport 31797867-f0bc-4632-a658-bd0fae609c23 up in Southbound
Oct 14 05:05:37 np0005486808 nova_compute[259627]: 2025-10-14 09:05:37.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:37 np0005486808 systemd-machined[214636]: New machine qemu-70-instance-00000038.
Oct 14 05:05:37 np0005486808 nova_compute[259627]: 2025-10-14 09:05:37.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:37 np0005486808 NetworkManager[44885]: <info>  [1760432737.7418] device (tap31797867-f0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:05:37 np0005486808 NetworkManager[44885]: <info>  [1760432737.7426] device (tap31797867-f0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:05:37 np0005486808 systemd[1]: Started Virtual Machine qemu-70-instance-00000038.
Oct 14 05:05:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:37.761 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[38ed69bf-17ee-4971-9e83-956dae57dd2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:37.764 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[37d00dd9-737f-4d86-8439-806a210bed45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:37.792 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5e13cd21-df20-4980-bfcd-95aff84c453f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:37.811 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b046edea-1984-4c80-ad51-3d5855d31262]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3b87118-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:43:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647540, 'reachable_time': 24755, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320469, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:37.828 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b1bb134a-f7b9-483f-a20b-47ac5f8c4f27]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf3b87118-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647552, 'tstamp': 647552}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320470, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf3b87118-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647555, 'tstamp': 647555}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320470, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:37.829 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3b87118-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:37 np0005486808 nova_compute[259627]: 2025-10-14 09:05:37.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:37 np0005486808 nova_compute[259627]: 2025-10-14 09:05:37.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:37.833 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3b87118-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:37.834 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:05:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:37.834 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3b87118-f0, col_values=(('external_ids', {'iface-id': 'a7f44223-dee5-4a2f-b975-1f04f03b78f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:37.834 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:05:38 np0005486808 nova_compute[259627]: 2025-10-14 09:05:38.529 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432738.528594, ec31b9ab-88ab-4085-a46b-76cb9825061a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:05:38 np0005486808 nova_compute[259627]: 2025-10-14 09:05:38.530 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] VM Started (Lifecycle Event)#033[00m
Oct 14 05:05:38 np0005486808 nova_compute[259627]: 2025-10-14 09:05:38.551 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:05:38 np0005486808 nova_compute[259627]: 2025-10-14 09:05:38.556 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432738.5288987, ec31b9ab-88ab-4085-a46b-76cb9825061a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:05:38 np0005486808 nova_compute[259627]: 2025-10-14 09:05:38.557 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:05:38 np0005486808 nova_compute[259627]: 2025-10-14 09:05:38.573 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:05:38 np0005486808 nova_compute[259627]: 2025-10-14 09:05:38.578 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:05:38 np0005486808 nova_compute[259627]: 2025-10-14 09:05:38.598 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:05:38 np0005486808 nova_compute[259627]: 2025-10-14 09:05:38.614 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:05:38 np0005486808 nova_compute[259627]: 2025-10-14 09:05:38.834 2 DEBUG nova.network.neutron [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Successfully updated port: be6943f6-df97-4a84-854b-858cc7d3ea1a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:05:38 np0005486808 nova_compute[259627]: 2025-10-14 09:05:38.862 2 DEBUG oslo_concurrency.lockutils [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:05:38 np0005486808 nova_compute[259627]: 2025-10-14 09:05:38.862 2 DEBUG oslo_concurrency.lockutils [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquired lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:05:38 np0005486808 nova_compute[259627]: 2025-10-14 09:05:38.863 2 DEBUG nova.network.neutron [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:05:38 np0005486808 nova_compute[259627]: 2025-10-14 09:05:38.935 2 DEBUG nova.compute.manager [req-203aecfe-ae4b-4775-a71e-cee18813aaab req-a613fc90-b079-4c32-bf01-82f25eb27e9f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-changed-be6943f6-df97-4a84-854b-858cc7d3ea1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:05:38 np0005486808 nova_compute[259627]: 2025-10-14 09:05:38.936 2 DEBUG nova.compute.manager [req-203aecfe-ae4b-4775-a71e-cee18813aaab req-a613fc90-b079-4c32-bf01-82f25eb27e9f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Refreshing instance network info cache due to event network-changed-be6943f6-df97-4a84-854b-858cc7d3ea1a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:05:38 np0005486808 nova_compute[259627]: 2025-10-14 09:05:38.936 2 DEBUG oslo_concurrency.lockutils [req-203aecfe-ae4b-4775-a71e-cee18813aaab req-a613fc90-b079-4c32-bf01-82f25eb27e9f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:05:39 np0005486808 nova_compute[259627]: 2025-10-14 09:05:39.115 2 WARNING nova.network.neutron [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] fc2d149f-aebf-406a-aed2-5161dd22b079 already exists in list: networks containing: ['fc2d149f-aebf-406a-aed2-5161dd22b079']. ignoring it#033[00m
Oct 14 05:05:39 np0005486808 nova_compute[259627]: 2025-10-14 09:05:39.116 2 WARNING nova.network.neutron [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] fc2d149f-aebf-406a-aed2-5161dd22b079 already exists in list: networks containing: ['fc2d149f-aebf-406a-aed2-5161dd22b079']. ignoring it#033[00m
Oct 14 05:05:39 np0005486808 nova_compute[259627]: 2025-10-14 09:05:39.117 2 WARNING nova.network.neutron [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] fc2d149f-aebf-406a-aed2-5161dd22b079 already exists in list: networks containing: ['fc2d149f-aebf-406a-aed2-5161dd22b079']. ignoring it#033[00m
Oct 14 05:05:39 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1502: 305 pgs: 305 active+clean; 326 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Oct 14 05:05:41 np0005486808 nova_compute[259627]: 2025-10-14 09:05:41.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:41 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1503: 305 pgs: 305 active+clean; 326 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 353 KiB/s rd, 3.9 MiB/s wr, 103 op/s
Oct 14 05:05:42 np0005486808 nova_compute[259627]: 2025-10-14 09:05:42.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:05:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:05:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:05:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:05:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:05:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002629157814840138 of space, bias 1.0, pg target 0.7887473444520414 quantized to 32 (current 32)
Oct 14 05:05:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:05:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:05:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:05:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:05:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:05:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 14 05:05:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:05:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:05:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:05:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:05:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:05:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:05:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:05:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:05:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:05:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:05:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:05:42 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:05:43 np0005486808 nova_compute[259627]: 2025-10-14 09:05:43.680 2 DEBUG nova.network.neutron [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Updating instance_info_cache with network_info: [{"id": "971d99c2-5a60-4cac-8f99-e819d71e419c", "address": "fa:16:3e:9a:79:ab", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap971d99c2-5a", "ovs_interfaceid": "971d99c2-5a60-4cac-8f99-e819d71e419c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "address": "fa:16:3e:54:94:02", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3bc3ac3-61", "ovs_interfaceid": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b624404b-6681-4fc8-a870-dc9418e2de0f", "address": "fa:16:3e:bb:02:94", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb624404b-66", "ovs_interfaceid": "b624404b-6681-4fc8-a870-dc9418e2de0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "be6943f6-df97-4a84-854b-858cc7d3ea1a", "address": "fa:16:3e:a4:ae:0e", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe6943f6-df", "ovs_interfaceid": "be6943f6-df97-4a84-854b-858cc7d3ea1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:05:43 np0005486808 nova_compute[259627]: 2025-10-14 09:05:43.701 2 DEBUG oslo_concurrency.lockutils [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Releasing lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:05:43 np0005486808 nova_compute[259627]: 2025-10-14 09:05:43.703 2 DEBUG oslo_concurrency.lockutils [req-203aecfe-ae4b-4775-a71e-cee18813aaab req-a613fc90-b079-4c32-bf01-82f25eb27e9f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:05:43 np0005486808 nova_compute[259627]: 2025-10-14 09:05:43.703 2 DEBUG nova.network.neutron [req-203aecfe-ae4b-4775-a71e-cee18813aaab req-a613fc90-b079-4c32-bf01-82f25eb27e9f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Refreshing network info cache for port be6943f6-df97-4a84-854b-858cc7d3ea1a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:05:43 np0005486808 nova_compute[259627]: 2025-10-14 09:05:43.708 2 DEBUG nova.virt.libvirt.vif [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:04:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-766543543',display_name='tempest-AttachInterfacesTestJSON-server-766543543',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-766543543',id=54,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJQrZCrZaptXuotj+57tBn3/OGSvBqxv5B+H31iXFpSI71zcU5A1JwpwDksSj1zemT2KJ76DfcdGv/IVNbfXVnjr4ntmsdxnv83TIXCvdXfabQIpcV4CIqh+n8+7v/6SjQ==',key_name='tempest-keypair-1747841174',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:04:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-fxki54rp',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=47257c6e-4d10-4d8e-af5a-b57db20048ea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "be6943f6-df97-4a84-854b-858cc7d3ea1a", "address": "fa:16:3e:a4:ae:0e", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe6943f6-df", "ovs_interfaceid": "be6943f6-df97-4a84-854b-858cc7d3ea1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:05:43 np0005486808 nova_compute[259627]: 2025-10-14 09:05:43.709 2 DEBUG nova.network.os_vif_util [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "be6943f6-df97-4a84-854b-858cc7d3ea1a", "address": "fa:16:3e:a4:ae:0e", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe6943f6-df", "ovs_interfaceid": "be6943f6-df97-4a84-854b-858cc7d3ea1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:05:43 np0005486808 nova_compute[259627]: 2025-10-14 09:05:43.710 2 DEBUG nova.network.os_vif_util [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:ae:0e,bridge_name='br-int',has_traffic_filtering=True,id=be6943f6-df97-4a84-854b-858cc7d3ea1a,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe6943f6-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:05:43 np0005486808 nova_compute[259627]: 2025-10-14 09:05:43.711 2 DEBUG os_vif [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:ae:0e,bridge_name='br-int',has_traffic_filtering=True,id=be6943f6-df97-4a84-854b-858cc7d3ea1a,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe6943f6-df') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:05:43 np0005486808 nova_compute[259627]: 2025-10-14 09:05:43.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:43 np0005486808 nova_compute[259627]: 2025-10-14 09:05:43.712 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:43 np0005486808 nova_compute[259627]: 2025-10-14 09:05:43.713 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:05:43 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1504: 305 pgs: 305 active+clean; 326 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 353 KiB/s rd, 3.9 MiB/s wr, 103 op/s
Oct 14 05:05:43 np0005486808 nova_compute[259627]: 2025-10-14 09:05:43.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:43 np0005486808 nova_compute[259627]: 2025-10-14 09:05:43.717 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbe6943f6-df, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:43 np0005486808 nova_compute[259627]: 2025-10-14 09:05:43.718 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbe6943f6-df, col_values=(('external_ids', {'iface-id': 'be6943f6-df97-4a84-854b-858cc7d3ea1a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a4:ae:0e', 'vm-uuid': '47257c6e-4d10-4d8e-af5a-b57db20048ea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:43 np0005486808 nova_compute[259627]: 2025-10-14 09:05:43.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:43 np0005486808 NetworkManager[44885]: <info>  [1760432743.7515] manager: (tapbe6943f6-df): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/241)
Oct 14 05:05:43 np0005486808 nova_compute[259627]: 2025-10-14 09:05:43.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:05:43 np0005486808 nova_compute[259627]: 2025-10-14 09:05:43.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:43 np0005486808 nova_compute[259627]: 2025-10-14 09:05:43.761 2 INFO os_vif [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:ae:0e,bridge_name='br-int',has_traffic_filtering=True,id=be6943f6-df97-4a84-854b-858cc7d3ea1a,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe6943f6-df')#033[00m
Oct 14 05:05:43 np0005486808 nova_compute[259627]: 2025-10-14 09:05:43.762 2 DEBUG nova.virt.libvirt.vif [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:04:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-766543543',display_name='tempest-AttachInterfacesTestJSON-server-766543543',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-766543543',id=54,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJQrZCrZaptXuotj+57tBn3/OGSvBqxv5B+H31iXFpSI71zcU5A1JwpwDksSj1zemT2KJ76DfcdGv/IVNbfXVnjr4ntmsdxnv83TIXCvdXfabQIpcV4CIqh+n8+7v/6SjQ==',key_name='tempest-keypair-1747841174',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:04:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-fxki54rp',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=47257c6e-4d10-4d8e-af5a-b57db20048ea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "be6943f6-df97-4a84-854b-858cc7d3ea1a", "address": "fa:16:3e:a4:ae:0e", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe6943f6-df", "ovs_interfaceid": "be6943f6-df97-4a84-854b-858cc7d3ea1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:05:43 np0005486808 nova_compute[259627]: 2025-10-14 09:05:43.763 2 DEBUG nova.network.os_vif_util [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "be6943f6-df97-4a84-854b-858cc7d3ea1a", "address": "fa:16:3e:a4:ae:0e", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe6943f6-df", "ovs_interfaceid": "be6943f6-df97-4a84-854b-858cc7d3ea1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:05:43 np0005486808 nova_compute[259627]: 2025-10-14 09:05:43.764 2 DEBUG nova.network.os_vif_util [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:ae:0e,bridge_name='br-int',has_traffic_filtering=True,id=be6943f6-df97-4a84-854b-858cc7d3ea1a,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe6943f6-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:05:43 np0005486808 nova_compute[259627]: 2025-10-14 09:05:43.768 2 DEBUG nova.virt.libvirt.guest [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] attach device xml: <interface type="ethernet">
Oct 14 05:05:43 np0005486808 nova_compute[259627]:  <mac address="fa:16:3e:a4:ae:0e"/>
Oct 14 05:05:43 np0005486808 nova_compute[259627]:  <model type="virtio"/>
Oct 14 05:05:43 np0005486808 nova_compute[259627]:  <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:05:43 np0005486808 nova_compute[259627]:  <mtu size="1442"/>
Oct 14 05:05:43 np0005486808 nova_compute[259627]:  <target dev="tapbe6943f6-df"/>
Oct 14 05:05:43 np0005486808 nova_compute[259627]: </interface>
Oct 14 05:05:43 np0005486808 nova_compute[259627]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct 14 05:05:43 np0005486808 kernel: tapbe6943f6-df: entered promiscuous mode
Oct 14 05:05:43 np0005486808 NetworkManager[44885]: <info>  [1760432743.7878] manager: (tapbe6943f6-df): new Tun device (/org/freedesktop/NetworkManager/Devices/242)
Oct 14 05:05:43 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:43Z|00548|binding|INFO|Claiming lport be6943f6-df97-4a84-854b-858cc7d3ea1a for this chassis.
Oct 14 05:05:43 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:43Z|00549|binding|INFO|be6943f6-df97-4a84-854b-858cc7d3ea1a: Claiming fa:16:3e:a4:ae:0e 10.100.0.14
Oct 14 05:05:43 np0005486808 nova_compute[259627]: 2025-10-14 09:05:43.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:43.802 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:ae:0e 10.100.0.14'], port_security=['fa:16:3e:a4:ae:0e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-4377624', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '47257c6e-4d10-4d8e-af5a-b57db20048ea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2d149f-aebf-406a-aed2-5161dd22b079', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-4377624', 'neutron:project_id': '8945d892653642ac9c3d894f8bc3619d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '643bc47b-384e-4818-a06b-8ecfda630b09', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e933e805-d9b6-497a-9ba3-5f7153390420, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=be6943f6-df97-4a84-854b-858cc7d3ea1a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:05:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:43.805 162547 INFO neutron.agent.ovn.metadata.agent [-] Port be6943f6-df97-4a84-854b-858cc7d3ea1a in datapath fc2d149f-aebf-406a-aed2-5161dd22b079 bound to our chassis#033[00m
Oct 14 05:05:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:43.809 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc2d149f-aebf-406a-aed2-5161dd22b079#033[00m
Oct 14 05:05:43 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:43Z|00550|binding|INFO|Setting lport be6943f6-df97-4a84-854b-858cc7d3ea1a ovn-installed in OVS
Oct 14 05:05:43 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:43Z|00551|binding|INFO|Setting lport be6943f6-df97-4a84-854b-858cc7d3ea1a up in Southbound
Oct 14 05:05:43 np0005486808 nova_compute[259627]: 2025-10-14 09:05:43.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:43 np0005486808 nova_compute[259627]: 2025-10-14 09:05:43.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:43.834 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c1009ad7-e8ed-4885-b347-4c101924dd5f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:43 np0005486808 systemd-udevd[320522]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:05:43 np0005486808 NetworkManager[44885]: <info>  [1760432743.8743] device (tapbe6943f6-df): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:05:43 np0005486808 NetworkManager[44885]: <info>  [1760432743.8759] device (tapbe6943f6-df): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:05:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:43.878 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[fa55af88-ecdb-4028-9f2f-52ee6cb2635f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:43.883 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[413abdce-a805-4f7b-8a3b-90fa35720837]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:43 np0005486808 nova_compute[259627]: 2025-10-14 09:05:43.918 2 DEBUG nova.virt.libvirt.driver [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:05:43 np0005486808 nova_compute[259627]: 2025-10-14 09:05:43.919 2 DEBUG nova.virt.libvirt.driver [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:05:43 np0005486808 nova_compute[259627]: 2025-10-14 09:05:43.919 2 DEBUG nova.virt.libvirt.driver [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No VIF found with MAC fa:16:3e:9a:79:ab, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:05:43 np0005486808 nova_compute[259627]: 2025-10-14 09:05:43.920 2 DEBUG nova.virt.libvirt.driver [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No VIF found with MAC fa:16:3e:54:94:02, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:05:43 np0005486808 nova_compute[259627]: 2025-10-14 09:05:43.921 2 DEBUG nova.virt.libvirt.driver [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No VIF found with MAC fa:16:3e:bb:02:94, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:05:43 np0005486808 nova_compute[259627]: 2025-10-14 09:05:43.921 2 DEBUG nova.virt.libvirt.driver [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No VIF found with MAC fa:16:3e:a4:ae:0e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:05:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:43.922 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[75bc0043-8508-40d9-8ec6-88e27c65eaa9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:43.947 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c1fda5e3-467e-4d3c-895a-12722dd78789]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc2d149f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:e7:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 9, 'rx_bytes': 1084, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 9, 'rx_bytes': 1084, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647447, 'reachable_time': 19128, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320528, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:43 np0005486808 nova_compute[259627]: 2025-10-14 09:05:43.963 2 DEBUG nova.virt.libvirt.guest [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:05:43 np0005486808 nova_compute[259627]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:05:43 np0005486808 nova_compute[259627]:  <nova:name>tempest-AttachInterfacesTestJSON-server-766543543</nova:name>
Oct 14 05:05:43 np0005486808 nova_compute[259627]:  <nova:creationTime>2025-10-14 09:05:43</nova:creationTime>
Oct 14 05:05:43 np0005486808 nova_compute[259627]:  <nova:flavor name="m1.nano">
Oct 14 05:05:43 np0005486808 nova_compute[259627]:    <nova:memory>128</nova:memory>
Oct 14 05:05:43 np0005486808 nova_compute[259627]:    <nova:disk>1</nova:disk>
Oct 14 05:05:43 np0005486808 nova_compute[259627]:    <nova:swap>0</nova:swap>
Oct 14 05:05:43 np0005486808 nova_compute[259627]:    <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:05:43 np0005486808 nova_compute[259627]:    <nova:vcpus>1</nova:vcpus>
Oct 14 05:05:43 np0005486808 nova_compute[259627]:  </nova:flavor>
Oct 14 05:05:43 np0005486808 nova_compute[259627]:  <nova:owner>
Oct 14 05:05:43 np0005486808 nova_compute[259627]:    <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 05:05:43 np0005486808 nova_compute[259627]:    <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 05:05:43 np0005486808 nova_compute[259627]:  </nova:owner>
Oct 14 05:05:43 np0005486808 nova_compute[259627]:  <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:05:43 np0005486808 nova_compute[259627]:  <nova:ports>
Oct 14 05:05:43 np0005486808 nova_compute[259627]:    <nova:port uuid="971d99c2-5a60-4cac-8f99-e819d71e419c">
Oct 14 05:05:43 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 14 05:05:43 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:05:43 np0005486808 nova_compute[259627]:    <nova:port uuid="e3bc3ac3-6147-40d0-a19c-df111dcf23a5">
Oct 14 05:05:43 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 14 05:05:43 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:05:43 np0005486808 nova_compute[259627]:    <nova:port uuid="b624404b-6681-4fc8-a870-dc9418e2de0f">
Oct 14 05:05:43 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 14 05:05:43 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:05:43 np0005486808 nova_compute[259627]:    <nova:port uuid="be6943f6-df97-4a84-854b-858cc7d3ea1a">
Oct 14 05:05:43 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 14 05:05:43 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:05:43 np0005486808 nova_compute[259627]:  </nova:ports>
Oct 14 05:05:43 np0005486808 nova_compute[259627]: </nova:instance>
Oct 14 05:05:43 np0005486808 nova_compute[259627]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct 14 05:05:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:43.968 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5475455e-6a04-4ac4-9656-849ca3fdaf03]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647459, 'tstamp': 647459}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320529, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647463, 'tstamp': 647463}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320529, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:43.970 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc2d149f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:43 np0005486808 nova_compute[259627]: 2025-10-14 09:05:43.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:43 np0005486808 nova_compute[259627]: 2025-10-14 09:05:43.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:43.974 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc2d149f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:43.974 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:05:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:43.974 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc2d149f-a0, col_values=(('external_ids', {'iface-id': '156432ee-35b5-40a9-aded-8066933d9972'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:43.975 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:05:44 np0005486808 nova_compute[259627]: 2025-10-14 09:05:44.000 2 DEBUG oslo_concurrency.lockutils [None req-a55d4471-0c48-490f-bea6-3261eec70016 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "interface-47257c6e-4d10-4d8e-af5a-b57db20048ea-be6943f6-df97-4a84-854b-858cc7d3ea1a" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 8.469s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:05:44 np0005486808 nova_compute[259627]: 2025-10-14 09:05:44.653 2 DEBUG nova.compute.manager [req-f9678190-bed6-432d-b498-6c24d0e5ba38 req-e5c4f3f9-6709-47a1-a30d-fad8d54ddb26 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-vif-plugged-be6943f6-df97-4a84-854b-858cc7d3ea1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:05:44 np0005486808 nova_compute[259627]: 2025-10-14 09:05:44.654 2 DEBUG oslo_concurrency.lockutils [req-f9678190-bed6-432d-b498-6c24d0e5ba38 req-e5c4f3f9-6709-47a1-a30d-fad8d54ddb26 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:05:44 np0005486808 nova_compute[259627]: 2025-10-14 09:05:44.654 2 DEBUG oslo_concurrency.lockutils [req-f9678190-bed6-432d-b498-6c24d0e5ba38 req-e5c4f3f9-6709-47a1-a30d-fad8d54ddb26 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:05:44 np0005486808 nova_compute[259627]: 2025-10-14 09:05:44.655 2 DEBUG oslo_concurrency.lockutils [req-f9678190-bed6-432d-b498-6c24d0e5ba38 req-e5c4f3f9-6709-47a1-a30d-fad8d54ddb26 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:05:44 np0005486808 nova_compute[259627]: 2025-10-14 09:05:44.655 2 DEBUG nova.compute.manager [req-f9678190-bed6-432d-b498-6c24d0e5ba38 req-e5c4f3f9-6709-47a1-a30d-fad8d54ddb26 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] No waiting events found dispatching network-vif-plugged-be6943f6-df97-4a84-854b-858cc7d3ea1a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:05:44 np0005486808 nova_compute[259627]: 2025-10-14 09:05:44.656 2 WARNING nova.compute.manager [req-f9678190-bed6-432d-b498-6c24d0e5ba38 req-e5c4f3f9-6709-47a1-a30d-fad8d54ddb26 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received unexpected event network-vif-plugged-be6943f6-df97-4a84-854b-858cc7d3ea1a for instance with vm_state active and task_state None.#033[00m
Oct 14 05:05:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:44.771 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:05:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:44.772 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 14 05:05:44 np0005486808 nova_compute[259627]: 2025-10-14 09:05:44.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:45 np0005486808 nova_compute[259627]: 2025-10-14 09:05:45.483 2 DEBUG nova.compute.manager [req-29868350-49f9-4820-a782-8a031a506e9f req-e53e4ce6-965a-4a25-bb87-b1dfed6d8f86 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Received event network-vif-plugged-31797867-f0bc-4632-a658-bd0fae609c23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:05:45 np0005486808 nova_compute[259627]: 2025-10-14 09:05:45.484 2 DEBUG oslo_concurrency.lockutils [req-29868350-49f9-4820-a782-8a031a506e9f req-e53e4ce6-965a-4a25-bb87-b1dfed6d8f86 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:05:45 np0005486808 nova_compute[259627]: 2025-10-14 09:05:45.485 2 DEBUG oslo_concurrency.lockutils [req-29868350-49f9-4820-a782-8a031a506e9f req-e53e4ce6-965a-4a25-bb87-b1dfed6d8f86 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:05:45 np0005486808 nova_compute[259627]: 2025-10-14 09:05:45.485 2 DEBUG oslo_concurrency.lockutils [req-29868350-49f9-4820-a782-8a031a506e9f req-e53e4ce6-965a-4a25-bb87-b1dfed6d8f86 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:05:45 np0005486808 nova_compute[259627]: 2025-10-14 09:05:45.486 2 DEBUG nova.compute.manager [req-29868350-49f9-4820-a782-8a031a506e9f req-e53e4ce6-965a-4a25-bb87-b1dfed6d8f86 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Processing event network-vif-plugged-31797867-f0bc-4632-a658-bd0fae609c23 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:05:45 np0005486808 nova_compute[259627]: 2025-10-14 09:05:45.487 2 DEBUG nova.compute.manager [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:05:45 np0005486808 nova_compute[259627]: 2025-10-14 09:05:45.498 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432745.497848, ec31b9ab-88ab-4085-a46b-76cb9825061a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:05:45 np0005486808 nova_compute[259627]: 2025-10-14 09:05:45.498 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:05:45 np0005486808 nova_compute[259627]: 2025-10-14 09:05:45.501 2 DEBUG nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:05:45 np0005486808 nova_compute[259627]: 2025-10-14 09:05:45.506 2 INFO nova.virt.libvirt.driver [-] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Instance spawned successfully.#033[00m
Oct 14 05:05:45 np0005486808 nova_compute[259627]: 2025-10-14 09:05:45.507 2 DEBUG nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:05:45 np0005486808 nova_compute[259627]: 2025-10-14 09:05:45.525 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:05:45 np0005486808 nova_compute[259627]: 2025-10-14 09:05:45.533 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:05:45 np0005486808 nova_compute[259627]: 2025-10-14 09:05:45.538 2 DEBUG nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:05:45 np0005486808 nova_compute[259627]: 2025-10-14 09:05:45.539 2 DEBUG nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:05:45 np0005486808 nova_compute[259627]: 2025-10-14 09:05:45.540 2 DEBUG nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:05:45 np0005486808 nova_compute[259627]: 2025-10-14 09:05:45.540 2 DEBUG nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:05:45 np0005486808 nova_compute[259627]: 2025-10-14 09:05:45.541 2 DEBUG nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:05:45 np0005486808 nova_compute[259627]: 2025-10-14 09:05:45.542 2 DEBUG nova.virt.libvirt.driver [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:05:45 np0005486808 nova_compute[259627]: 2025-10-14 09:05:45.556 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:05:45 np0005486808 nova_compute[259627]: 2025-10-14 09:05:45.610 2 INFO nova.compute.manager [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Took 13.32 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:05:45 np0005486808 nova_compute[259627]: 2025-10-14 09:05:45.610 2 DEBUG nova.compute.manager [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:05:45 np0005486808 nova_compute[259627]: 2025-10-14 09:05:45.671 2 INFO nova.compute.manager [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Took 14.33 seconds to build instance.#033[00m
Oct 14 05:05:45 np0005486808 nova_compute[259627]: 2025-10-14 09:05:45.687 2 DEBUG oslo_concurrency.lockutils [None req-f5644573-b9c1-4a9d-91bf-2e5614a8ef19 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.420s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:05:45 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1505: 305 pgs: 305 active+clean; 326 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 354 KiB/s rd, 3.9 MiB/s wr, 104 op/s
Oct 14 05:05:45 np0005486808 nova_compute[259627]: 2025-10-14 09:05:45.942 2 DEBUG nova.network.neutron [req-203aecfe-ae4b-4775-a71e-cee18813aaab req-a613fc90-b079-4c32-bf01-82f25eb27e9f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Updated VIF entry in instance network info cache for port be6943f6-df97-4a84-854b-858cc7d3ea1a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:05:45 np0005486808 nova_compute[259627]: 2025-10-14 09:05:45.943 2 DEBUG nova.network.neutron [req-203aecfe-ae4b-4775-a71e-cee18813aaab req-a613fc90-b079-4c32-bf01-82f25eb27e9f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Updating instance_info_cache with network_info: [{"id": "971d99c2-5a60-4cac-8f99-e819d71e419c", "address": "fa:16:3e:9a:79:ab", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap971d99c2-5a", "ovs_interfaceid": "971d99c2-5a60-4cac-8f99-e819d71e419c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "address": "fa:16:3e:54:94:02", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3bc3ac3-61", "ovs_interfaceid": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b624404b-6681-4fc8-a870-dc9418e2de0f", "address": "fa:16:3e:bb:02:94", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb624404b-66", "ovs_interfaceid": "b624404b-6681-4fc8-a870-dc9418e2de0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "be6943f6-df97-4a84-854b-858cc7d3ea1a", "address": "fa:16:3e:a4:ae:0e", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe6943f6-df", "ovs_interfaceid": "be6943f6-df97-4a84-854b-858cc7d3ea1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:05:45 np0005486808 nova_compute[259627]: 2025-10-14 09:05:45.964 2 DEBUG oslo_concurrency.lockutils [req-203aecfe-ae4b-4775-a71e-cee18813aaab req-a613fc90-b079-4c32-bf01-82f25eb27e9f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.247 2 DEBUG oslo_concurrency.lockutils [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Acquiring lock "65c7e6ed-131f-4bca-af69-a1241d048bdb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.247 2 DEBUG oslo_concurrency.lockutils [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Lock "65c7e6ed-131f-4bca-af69-a1241d048bdb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.247 2 DEBUG oslo_concurrency.lockutils [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Acquiring lock "65c7e6ed-131f-4bca-af69-a1241d048bdb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.248 2 DEBUG oslo_concurrency.lockutils [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Lock "65c7e6ed-131f-4bca-af69-a1241d048bdb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.248 2 DEBUG oslo_concurrency.lockutils [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Lock "65c7e6ed-131f-4bca-af69-a1241d048bdb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.249 2 INFO nova.compute.manager [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Terminating instance#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.250 2 DEBUG nova.compute.manager [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:05:46 np0005486808 kernel: tap282dfd9e-9e (unregistering): left promiscuous mode
Oct 14 05:05:46 np0005486808 NetworkManager[44885]: <info>  [1760432746.3040] device (tap282dfd9e-9e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:05:46 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:46Z|00552|binding|INFO|Releasing lport 282dfd9e-9e84-450c-a306-8bc55428feb4 from this chassis (sb_readonly=0)
Oct 14 05:05:46 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:46Z|00553|binding|INFO|Setting lport 282dfd9e-9e84-450c-a306-8bc55428feb4 down in Southbound
Oct 14 05:05:46 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:46Z|00554|binding|INFO|Removing iface tap282dfd9e-9e ovn-installed in OVS
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.376 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:50:93 10.100.0.10'], port_security=['fa:16:3e:de:50:93 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '65c7e6ed-131f-4bca-af69-a1241d048bdb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77fb75b2-483b-47a5-99a5-ae91248b8ed8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '55e6e0201a064f1390a998f830140354', 'neutron:revision_number': '4', 'neutron:security_group_ids': '98209f09-275f-46ee-a2c6-16214403e3de', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.193'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8d21053-8b98-4816-ad89-107cc4743794, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=282dfd9e-9e84-450c-a306-8bc55428feb4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:05:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.377 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 282dfd9e-9e84-450c-a306-8bc55428feb4 in datapath 77fb75b2-483b-47a5-99a5-ae91248b8ed8 unbound from our chassis#033[00m
Oct 14 05:05:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.379 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 77fb75b2-483b-47a5-99a5-ae91248b8ed8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:05:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.380 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2f6ed1ff-4b40-4c52-b20e-014a1f584aeb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.381 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8 namespace which is not needed anymore#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.382 2 DEBUG oslo_concurrency.lockutils [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "interface-47257c6e-4d10-4d8e-af5a-b57db20048ea-e3bc3ac3-6147-40d0-a19c-df111dcf23a5" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.382 2 DEBUG oslo_concurrency.lockutils [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "interface-47257c6e-4d10-4d8e-af5a-b57db20048ea-e3bc3ac3-6147-40d0-a19c-df111dcf23a5" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:05:46 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:46Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a4:ae:0e 10.100.0.14
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:46 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:46Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a4:ae:0e 10.100.0.14
Oct 14 05:05:46 np0005486808 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d00000037.scope: Deactivated successfully.
Oct 14 05:05:46 np0005486808 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d00000037.scope: Consumed 12.805s CPU time.
Oct 14 05:05:46 np0005486808 systemd-machined[214636]: Machine qemu-69-instance-00000037 terminated.
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.401 2 DEBUG nova.objects.instance [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'flavor' on Instance uuid 47257c6e-4d10-4d8e-af5a-b57db20048ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.423 2 DEBUG nova.virt.libvirt.vif [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:04:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-766543543',display_name='tempest-AttachInterfacesTestJSON-server-766543543',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-766543543',id=54,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJQrZCrZaptXuotj+57tBn3/OGSvBqxv5B+H31iXFpSI71zcU5A1JwpwDksSj1zemT2KJ76DfcdGv/IVNbfXVnjr4ntmsdxnv83TIXCvdXfabQIpcV4CIqh+n8+7v/6SjQ==',key_name='tempest-keypair-1747841174',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:04:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-fxki54rp',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=47257c6e-4d10-4d8e-af5a-b57db20048ea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "address": "fa:16:3e:54:94:02", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3bc3ac3-61", "ovs_interfaceid": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.423 2 DEBUG nova.network.os_vif_util [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "address": "fa:16:3e:54:94:02", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3bc3ac3-61", "ovs_interfaceid": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.424 2 DEBUG nova.network.os_vif_util [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:54:94:02,bridge_name='br-int',has_traffic_filtering=True,id=e3bc3ac3-6147-40d0-a19c-df111dcf23a5,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3bc3ac3-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.430 2 DEBUG nova.virt.libvirt.guest [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:54:94:02"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape3bc3ac3-61"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.432 2 DEBUG nova.virt.libvirt.guest [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:54:94:02"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape3bc3ac3-61"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.436 2 DEBUG nova.virt.libvirt.driver [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Attempting to detach device tape3bc3ac3-61 from instance 47257c6e-4d10-4d8e-af5a-b57db20048ea from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.437 2 DEBUG nova.virt.libvirt.guest [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] detach device xml: <interface type="ethernet">
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <mac address="fa:16:3e:54:94:02"/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <model type="virtio"/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <mtu size="1442"/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <target dev="tape3bc3ac3-61"/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]: </interface>
Oct 14 05:05:46 np0005486808 nova_compute[259627]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.443 2 DEBUG nova.virt.libvirt.guest [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:54:94:02"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape3bc3ac3-61"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.447 2 DEBUG nova.virt.libvirt.guest [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:54:94:02"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape3bc3ac3-61"/></interface>not found in domain: <domain type='kvm' id='67'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <name>instance-00000036</name>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <uuid>47257c6e-4d10-4d8e-af5a-b57db20048ea</uuid>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <nova:name>tempest-AttachInterfacesTestJSON-server-766543543</nova:name>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <nova:creationTime>2025-10-14 09:05:43</nova:creationTime>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <nova:flavor name="m1.nano">
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <nova:memory>128</nova:memory>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <nova:disk>1</nova:disk>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <nova:swap>0</nova:swap>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <nova:vcpus>1</nova:vcpus>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  </nova:flavor>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <nova:owner>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  </nova:owner>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <nova:ports>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <nova:port uuid="971d99c2-5a60-4cac-8f99-e819d71e419c">
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <nova:port uuid="e3bc3ac3-6147-40d0-a19c-df111dcf23a5">
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <nova:port uuid="b624404b-6681-4fc8-a870-dc9418e2de0f">
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <nova:port uuid="be6943f6-df97-4a84-854b-858cc7d3ea1a">
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  </nova:ports>
Oct 14 05:05:46 np0005486808 nova_compute[259627]: </nova:instance>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <memory unit='KiB'>131072</memory>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <vcpu placement='static'>1</vcpu>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <resource>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <partition>/machine</partition>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  </resource>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <sysinfo type='smbios'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <entry name='manufacturer'>RDO</entry>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <entry name='product'>OpenStack Compute</entry>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <entry name='serial'>47257c6e-4d10-4d8e-af5a-b57db20048ea</entry>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <entry name='uuid'>47257c6e-4d10-4d8e-af5a-b57db20048ea</entry>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <entry name='family'>Virtual Machine</entry>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <boot dev='hd'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <smbios mode='sysinfo'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <vmcoreinfo state='on'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <cpu mode='custom' match='exact' check='full'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <model fallback='forbid'>EPYC-Rome</model>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <vendor>AMD</vendor>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='require' name='x2apic'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='require' name='tsc-deadline'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='require' name='hypervisor'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='require' name='tsc_adjust'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='require' name='spec-ctrl'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='require' name='stibp'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='require' name='arch-capabilities'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='require' name='ssbd'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='require' name='cmp_legacy'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='require' name='overflow-recov'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='require' name='succor'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='require' name='ibrs'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='require' name='amd-ssbd'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='require' name='virt-ssbd'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='disable' name='lbrv'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='disable' name='tsc-scale'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='disable' name='vmcb-clean'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='disable' name='flushbyasid'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='disable' name='pause-filter'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='disable' name='pfthreshold'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='disable' name='svme-addr-chk'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='require' name='lfence-always-serializing'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='require' name='rdctl-no'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='require' name='mds-no'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='require' name='pschange-mc-no'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='require' name='gds-no'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='require' name='rfds-no'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='disable' name='xsaves'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='disable' name='svm'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='require' name='topoext'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='disable' name='npt'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='disable' name='nrip-save'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <clock offset='utc'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <timer name='pit' tickpolicy='delay'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <timer name='rtc' tickpolicy='catchup'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <timer name='hpet' present='no'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <on_poweroff>destroy</on_poweroff>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <on_reboot>restart</on_reboot>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <on_crash>destroy</on_crash>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <disk type='network' device='disk'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <driver name='qemu' type='raw' cache='none'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <auth username='openstack'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:        <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <source protocol='rbd' name='vms/47257c6e-4d10-4d8e-af5a-b57db20048ea_disk' index='2'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:        <host name='192.168.122.100' port='6789'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target dev='vda' bus='virtio'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='virtio-disk0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <disk type='network' device='cdrom'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <driver name='qemu' type='raw' cache='none'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <auth username='openstack'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:        <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <source protocol='rbd' name='vms/47257c6e-4d10-4d8e-af5a-b57db20048ea_disk.config' index='1'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:        <host name='192.168.122.100' port='6789'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target dev='sda' bus='sata'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <readonly/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='sata0-0-0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='0' model='pcie-root'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pcie.0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='1' port='0x10'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.1'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='2' port='0x11'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.2'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='3' port='0x12'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.3'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='4' port='0x13'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.4'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='5' port='0x14'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.5'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='6' port='0x15'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.6'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='7' port='0x16'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.7'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='8' port='0x17'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.8'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='9' port='0x18'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.9'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='10' port='0x19'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.10'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='11' port='0x1a'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.11'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='12' port='0x1b'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.12'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='13' port='0x1c'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.13'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='14' port='0x1d'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.14'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='15' port='0x1e'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.15'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='16' port='0x1f'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.16'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='17' port='0x20'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.17'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='18' port='0x21'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.18'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='19' port='0x22'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.19'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='20' port='0x23'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.20'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='21' port='0x24'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.21'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='22' port='0x25'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.22'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='23' port='0x26'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.23'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='24' port='0x27'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.24'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='25' port='0x28'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.25'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-pci-bridge'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.26'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='usb'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='sata' index='0'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='ide'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <interface type='ethernet'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <mac address='fa:16:3e:9a:79:ab'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target dev='tap971d99c2-5a'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model type='virtio'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <driver name='vhost' rx_queue_size='512'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <mtu size='1442'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='net0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <interface type='ethernet'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <mac address='fa:16:3e:54:94:02'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target dev='tape3bc3ac3-61'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model type='virtio'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <driver name='vhost' rx_queue_size='512'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <mtu size='1442'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='net1'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <interface type='ethernet'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <mac address='fa:16:3e:bb:02:94'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target dev='tapb624404b-66'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model type='virtio'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <driver name='vhost' rx_queue_size='512'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <mtu size='1442'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='net2'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <interface type='ethernet'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <mac address='fa:16:3e:a4:ae:0e'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target dev='tapbe6943f6-df'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model type='virtio'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <driver name='vhost' rx_queue_size='512'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <mtu size='1442'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='net3'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <serial type='pty'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <source path='/dev/pts/0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <log file='/var/lib/nova/instances/47257c6e-4d10-4d8e-af5a-b57db20048ea/console.log' append='off'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target type='isa-serial' port='0'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:        <model name='isa-serial'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      </target>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='serial0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <console type='pty' tty='/dev/pts/0'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <source path='/dev/pts/0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <log file='/var/lib/nova/instances/47257c6e-4d10-4d8e-af5a-b57db20048ea/console.log' append='off'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target type='serial' port='0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='serial0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </console>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <input type='tablet' bus='usb'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='input0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='usb' bus='0' port='1'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <input type='mouse' bus='ps2'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='input1'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <input type='keyboard' bus='ps2'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='input2'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <listen type='address' address='::0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </graphics>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <audio id='1' type='none'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model type='virtio' heads='1' primary='yes'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='video0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <watchdog model='itco' action='reset'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='watchdog0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </watchdog>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <memballoon model='virtio'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <stats period='10'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='balloon0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <rng model='virtio'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <backend model='random'>/dev/urandom</backend>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='rng0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <label>system_u:system_r:svirt_t:s0:c314,c342</label>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c314,c342</imagelabel>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  </seclabel>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <label>+107:+107</label>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <imagelabel>+107:+107</imagelabel>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  </seclabel>
Oct 14 05:05:46 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:05:46 np0005486808 nova_compute[259627]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.450 2 INFO nova.virt.libvirt.driver [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully detached device tape3bc3ac3-61 from instance 47257c6e-4d10-4d8e-af5a-b57db20048ea from the persistent domain config.
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.451 2 DEBUG nova.virt.libvirt.driver [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] (1/8): Attempting to detach device tape3bc3ac3-61 with device alias net1 from instance 47257c6e-4d10-4d8e-af5a-b57db20048ea from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.451 2 DEBUG nova.virt.libvirt.guest [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] detach device xml: <interface type="ethernet">
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <mac address="fa:16:3e:54:94:02"/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <model type="virtio"/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <mtu size="1442"/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <target dev="tape3bc3ac3-61"/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]: </interface>
Oct 14 05:05:46 np0005486808 nova_compute[259627]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.492 2 INFO nova.virt.libvirt.driver [-] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Instance destroyed successfully.
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.493 2 DEBUG nova.objects.instance [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Lazy-loading 'resources' on Instance uuid 65c7e6ed-131f-4bca-af69-a1241d048bdb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.507 2 DEBUG nova.virt.libvirt.vif [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:05:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-100646485',display_name='tempest-ServersTestManualDisk-server-100646485',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-100646485',id=55,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBFq4tcZMNkmwexYMay7CcSUnt45X5jGbu/ngQCrGssdHqitjMlfE2R1DP+cztwj+Jbcg255ZB+kgmwp3pbM6el/CrOnrVr2V0onKRN9dF6T7lO2ORJc789YDLKzPg0Nog==',key_name='tempest-keypair-532072477',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:05:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='55e6e0201a064f1390a998f830140354',ramdisk_id='',reservation_id='r-5wf160ka',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-748280037',owner_user_name='tempest-ServersTestManualDisk-748280037-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:05:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='64a22b9370d049c0b189508f3f58f0ca',uuid=65c7e6ed-131f-4bca-af69-a1241d048bdb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "282dfd9e-9e84-450c-a306-8bc55428feb4", "address": "fa:16:3e:de:50:93", "network": {"id": "77fb75b2-483b-47a5-99a5-ae91248b8ed8", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-2140863924-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55e6e0201a064f1390a998f830140354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282dfd9e-9e", "ovs_interfaceid": "282dfd9e-9e84-450c-a306-8bc55428feb4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.507 2 DEBUG nova.network.os_vif_util [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Converting VIF {"id": "282dfd9e-9e84-450c-a306-8bc55428feb4", "address": "fa:16:3e:de:50:93", "network": {"id": "77fb75b2-483b-47a5-99a5-ae91248b8ed8", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-2140863924-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55e6e0201a064f1390a998f830140354", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282dfd9e-9e", "ovs_interfaceid": "282dfd9e-9e84-450c-a306-8bc55428feb4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.508 2 DEBUG nova.network.os_vif_util [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:de:50:93,bridge_name='br-int',has_traffic_filtering=True,id=282dfd9e-9e84-450c-a306-8bc55428feb4,network=Network(77fb75b2-483b-47a5-99a5-ae91248b8ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap282dfd9e-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.508 2 DEBUG os_vif [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:de:50:93,bridge_name='br-int',has_traffic_filtering=True,id=282dfd9e-9e84-450c-a306-8bc55428feb4,network=Network(77fb75b2-483b-47a5-99a5-ae91248b8ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap282dfd9e-9e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.510 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap282dfd9e-9e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.517 2 INFO os_vif [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:de:50:93,bridge_name='br-int',has_traffic_filtering=True,id=282dfd9e-9e84-450c-a306-8bc55428feb4,network=Network(77fb75b2-483b-47a5-99a5-ae91248b8ed8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap282dfd9e-9e')
Oct 14 05:05:46 np0005486808 kernel: tape3bc3ac3-61 (unregistering): left promiscuous mode
Oct 14 05:05:46 np0005486808 NetworkManager[44885]: <info>  [1760432746.5574] device (tape3bc3ac3-61): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.569 2 DEBUG nova.virt.libvirt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Received event <DeviceRemovedEvent: 1760432746.5693483, 47257c6e-4d10-4d8e-af5a-b57db20048ea => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 14 05:05:46 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:46Z|00555|binding|INFO|Releasing lport e3bc3ac3-6147-40d0-a19c-df111dcf23a5 from this chassis (sb_readonly=0)
Oct 14 05:05:46 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:46Z|00556|binding|INFO|Setting lport e3bc3ac3-6147-40d0-a19c-df111dcf23a5 down in Southbound
Oct 14 05:05:46 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:46Z|00557|binding|INFO|Removing iface tape3bc3ac3-61 ovn-installed in OVS
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.579 2 DEBUG nova.virt.libvirt.driver [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Start waiting for the detach event from libvirt for device tape3bc3ac3-61 with device alias net1 for instance 47257c6e-4d10-4d8e-af5a-b57db20048ea _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 14 05:05:46 np0005486808 neutron-haproxy-ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8[320010]: [NOTICE]   (320015) : haproxy version is 2.8.14-c23fe91
Oct 14 05:05:46 np0005486808 neutron-haproxy-ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8[320010]: [NOTICE]   (320015) : path to executable is /usr/sbin/haproxy
Oct 14 05:05:46 np0005486808 neutron-haproxy-ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8[320010]: [WARNING]  (320015) : Exiting Master process...
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.580 2 DEBUG nova.virt.libvirt.guest [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:54:94:02"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape3bc3ac3-61"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 14 05:05:46 np0005486808 neutron-haproxy-ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8[320010]: [ALERT]    (320015) : Current worker (320017) exited with code 143 (Terminated)
Oct 14 05:05:46 np0005486808 neutron-haproxy-ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8[320010]: [WARNING]  (320015) : All workers exited. Exiting... (0)
Oct 14 05:05:46 np0005486808 systemd[1]: libpod-4891121de78e076c2453309749fcecf35a0d3ead085622b9efd8eb66b64aeb4d.scope: Deactivated successfully.
Oct 14 05:05:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.587 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:94:02 10.100.0.4'], port_security=['fa:16:3e:54:94:02 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '47257c6e-4d10-4d8e-af5a-b57db20048ea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2d149f-aebf-406a-aed2-5161dd22b079', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8945d892653642ac9c3d894f8bc3619d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '643bc47b-384e-4818-a06b-8ecfda630b09', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e933e805-d9b6-497a-9ba3-5f7153390420, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=e3bc3ac3-6147-40d0-a19c-df111dcf23a5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 05:05:46 np0005486808 podman[320558]: 2025-10-14 09:05:46.59491666 +0000 UTC m=+0.089830413 container died 4891121de78e076c2453309749fcecf35a0d3ead085622b9efd8eb66b64aeb4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.598 2 DEBUG nova.virt.libvirt.guest [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:54:94:02"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape3bc3ac3-61"/></interface>not found in domain: <domain type='kvm' id='67'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <name>instance-00000036</name>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <uuid>47257c6e-4d10-4d8e-af5a-b57db20048ea</uuid>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <nova:name>tempest-AttachInterfacesTestJSON-server-766543543</nova:name>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <nova:creationTime>2025-10-14 09:05:43</nova:creationTime>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <nova:flavor name="m1.nano">
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <nova:memory>128</nova:memory>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <nova:disk>1</nova:disk>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <nova:swap>0</nova:swap>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <nova:vcpus>1</nova:vcpus>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  </nova:flavor>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <nova:owner>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  </nova:owner>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <nova:ports>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <nova:port uuid="971d99c2-5a60-4cac-8f99-e819d71e419c">
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <nova:port uuid="e3bc3ac3-6147-40d0-a19c-df111dcf23a5">
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <nova:port uuid="b624404b-6681-4fc8-a870-dc9418e2de0f">
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <nova:port uuid="be6943f6-df97-4a84-854b-858cc7d3ea1a">
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  </nova:ports>
Oct 14 05:05:46 np0005486808 nova_compute[259627]: </nova:instance>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <memory unit='KiB'>131072</memory>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <vcpu placement='static'>1</vcpu>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <resource>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <partition>/machine</partition>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  </resource>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <sysinfo type='smbios'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <entry name='manufacturer'>RDO</entry>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <entry name='product'>OpenStack Compute</entry>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <entry name='serial'>47257c6e-4d10-4d8e-af5a-b57db20048ea</entry>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <entry name='uuid'>47257c6e-4d10-4d8e-af5a-b57db20048ea</entry>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <entry name='family'>Virtual Machine</entry>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <boot dev='hd'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <smbios mode='sysinfo'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <vmcoreinfo state='on'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <cpu mode='custom' match='exact' check='full'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <model fallback='forbid'>EPYC-Rome</model>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <vendor>AMD</vendor>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='require' name='x2apic'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='require' name='tsc-deadline'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='require' name='hypervisor'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='require' name='tsc_adjust'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='require' name='spec-ctrl'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='require' name='stibp'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='require' name='arch-capabilities'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='require' name='ssbd'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='require' name='cmp_legacy'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='require' name='overflow-recov'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='require' name='succor'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='require' name='ibrs'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='require' name='amd-ssbd'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='require' name='virt-ssbd'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='disable' name='lbrv'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='disable' name='tsc-scale'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='disable' name='vmcb-clean'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='disable' name='flushbyasid'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='disable' name='pause-filter'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='disable' name='pfthreshold'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='disable' name='svme-addr-chk'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='require' name='lfence-always-serializing'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='require' name='rdctl-no'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='require' name='mds-no'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='require' name='pschange-mc-no'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='require' name='gds-no'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='require' name='rfds-no'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='disable' name='xsaves'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='disable' name='svm'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='require' name='topoext'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='disable' name='npt'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <feature policy='disable' name='nrip-save'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <clock offset='utc'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <timer name='pit' tickpolicy='delay'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <timer name='rtc' tickpolicy='catchup'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <timer name='hpet' present='no'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <on_poweroff>destroy</on_poweroff>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <on_reboot>restart</on_reboot>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <on_crash>destroy</on_crash>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <disk type='network' device='disk'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <driver name='qemu' type='raw' cache='none'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <auth username='openstack'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:        <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <source protocol='rbd' name='vms/47257c6e-4d10-4d8e-af5a-b57db20048ea_disk' index='2'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:        <host name='192.168.122.100' port='6789'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target dev='vda' bus='virtio'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='virtio-disk0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <disk type='network' device='cdrom'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <driver name='qemu' type='raw' cache='none'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <auth username='openstack'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:        <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <source protocol='rbd' name='vms/47257c6e-4d10-4d8e-af5a-b57db20048ea_disk.config' index='1'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:        <host name='192.168.122.100' port='6789'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target dev='sda' bus='sata'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <readonly/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='sata0-0-0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='0' model='pcie-root'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pcie.0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='1' port='0x10'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.1'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='2' port='0x11'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.2'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='3' port='0x12'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.3'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='4' port='0x13'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.4'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='5' port='0x14'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.5'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='6' port='0x15'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.6'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='7' port='0x16'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.7'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='8' port='0x17'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.8'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='9' port='0x18'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.9'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='10' port='0x19'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.10'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='11' port='0x1a'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.11'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='12' port='0x1b'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.12'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='13' port='0x1c'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.13'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='14' port='0x1d'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.14'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='15' port='0x1e'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.15'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='16' port='0x1f'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.16'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='17' port='0x20'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.17'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='18' port='0x21'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.18'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='19' port='0x22'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.19'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='20' port='0x23'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.20'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='21' port='0x24'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.21'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='22' port='0x25'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.22'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='23' port='0x26'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.23'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='24' port='0x27'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.24'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target chassis='25' port='0x28'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.25'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model name='pcie-pci-bridge'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='pci.26'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='usb'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <controller type='sata' index='0'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='ide'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <interface type='ethernet'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <mac address='fa:16:3e:9a:79:ab'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target dev='tap971d99c2-5a'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model type='virtio'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <driver name='vhost' rx_queue_size='512'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <mtu size='1442'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='net0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <interface type='ethernet'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <mac address='fa:16:3e:bb:02:94'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target dev='tapb624404b-66'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model type='virtio'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <driver name='vhost' rx_queue_size='512'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <mtu size='1442'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='net2'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <interface type='ethernet'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <mac address='fa:16:3e:a4:ae:0e'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target dev='tapbe6943f6-df'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model type='virtio'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <driver name='vhost' rx_queue_size='512'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <mtu size='1442'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='net3'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <serial type='pty'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <source path='/dev/pts/0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <log file='/var/lib/nova/instances/47257c6e-4d10-4d8e-af5a-b57db20048ea/console.log' append='off'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target type='isa-serial' port='0'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:        <model name='isa-serial'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      </target>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='serial0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <console type='pty' tty='/dev/pts/0'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <source path='/dev/pts/0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <log file='/var/lib/nova/instances/47257c6e-4d10-4d8e-af5a-b57db20048ea/console.log' append='off'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <target type='serial' port='0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='serial0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </console>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <input type='tablet' bus='usb'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='input0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='usb' bus='0' port='1'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <input type='mouse' bus='ps2'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='input1'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <input type='keyboard' bus='ps2'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='input2'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <listen type='address' address='::0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </graphics>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <audio id='1' type='none'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <model type='virtio' heads='1' primary='yes'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='video0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <watchdog model='itco' action='reset'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='watchdog0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </watchdog>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <memballoon model='virtio'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <stats period='10'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='balloon0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <rng model='virtio'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <backend model='random'>/dev/urandom</backend>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <alias name='rng0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <label>system_u:system_r:svirt_t:s0:c314,c342</label>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c314,c342</imagelabel>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  </seclabel>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <label>+107:+107</label>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <imagelabel>+107:+107</imagelabel>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  </seclabel>
Oct 14 05:05:46 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:05:46 np0005486808 nova_compute[259627]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.599 2 INFO nova.virt.libvirt.driver [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully detached device tape3bc3ac3-61 from instance 47257c6e-4d10-4d8e-af5a-b57db20048ea from the live domain config.#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.600 2 DEBUG nova.virt.libvirt.vif [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:04:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-766543543',display_name='tempest-AttachInterfacesTestJSON-server-766543543',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-766543543',id=54,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJQrZCrZaptXuotj+57tBn3/OGSvBqxv5B+H31iXFpSI71zcU5A1JwpwDksSj1zemT2KJ76DfcdGv/IVNbfXVnjr4ntmsdxnv83TIXCvdXfabQIpcV4CIqh+n8+7v/6SjQ==',key_name='tempest-keypair-1747841174',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:04:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-fxki54rp',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=47257c6e-4d10-4d8e-af5a-b57db20048ea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "address": "fa:16:3e:54:94:02", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3bc3ac3-61", "ovs_interfaceid": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.601 2 DEBUG nova.network.os_vif_util [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "address": "fa:16:3e:54:94:02", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3bc3ac3-61", "ovs_interfaceid": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.602 2 DEBUG nova.network.os_vif_util [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:54:94:02,bridge_name='br-int',has_traffic_filtering=True,id=e3bc3ac3-6147-40d0-a19c-df111dcf23a5,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3bc3ac3-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.603 2 DEBUG os_vif [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:94:02,bridge_name='br-int',has_traffic_filtering=True,id=e3bc3ac3-6147-40d0-a19c-df111dcf23a5,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3bc3ac3-61') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.605 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape3bc3ac3-61, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.612 2 INFO os_vif [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:94:02,bridge_name='br-int',has_traffic_filtering=True,id=e3bc3ac3-6147-40d0-a19c-df111dcf23a5,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3bc3ac3-61')#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.613 2 DEBUG nova.virt.libvirt.guest [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <nova:name>tempest-AttachInterfacesTestJSON-server-766543543</nova:name>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <nova:creationTime>2025-10-14 09:05:46</nova:creationTime>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <nova:flavor name="m1.nano">
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <nova:memory>128</nova:memory>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <nova:disk>1</nova:disk>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <nova:swap>0</nova:swap>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <nova:vcpus>1</nova:vcpus>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  </nova:flavor>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <nova:owner>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  </nova:owner>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  <nova:ports>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <nova:port uuid="971d99c2-5a60-4cac-8f99-e819d71e419c">
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <nova:port uuid="b624404b-6681-4fc8-a870-dc9418e2de0f">
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    <nova:port uuid="be6943f6-df97-4a84-854b-858cc7d3ea1a">
Oct 14 05:05:46 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:05:46 np0005486808 nova_compute[259627]:  </nova:ports>
Oct 14 05:05:46 np0005486808 nova_compute[259627]: </nova:instance>
Oct 14 05:05:46 np0005486808 nova_compute[259627]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct 14 05:05:46 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4891121de78e076c2453309749fcecf35a0d3ead085622b9efd8eb66b64aeb4d-userdata-shm.mount: Deactivated successfully.
Oct 14 05:05:46 np0005486808 systemd[1]: var-lib-containers-storage-overlay-bc8ef72225e5c259b1a8bbd67e6e8947df3e042df52c25d7dffcdfcd0e7b97cd-merged.mount: Deactivated successfully.
Oct 14 05:05:46 np0005486808 podman[320558]: 2025-10-14 09:05:46.646598403 +0000 UTC m=+0.141512156 container cleanup 4891121de78e076c2453309749fcecf35a0d3ead085622b9efd8eb66b64aeb4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 05:05:46 np0005486808 systemd[1]: libpod-conmon-4891121de78e076c2453309749fcecf35a0d3ead085622b9efd8eb66b64aeb4d.scope: Deactivated successfully.
Oct 14 05:05:46 np0005486808 podman[320607]: 2025-10-14 09:05:46.712114896 +0000 UTC m=+0.042131478 container remove 4891121de78e076c2453309749fcecf35a0d3ead085622b9efd8eb66b64aeb4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 05:05:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.723 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d15b8f0e-a6d3-4fe6-91ce-67c024c07df3]: (4, ('Tue Oct 14 09:05:46 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8 (4891121de78e076c2453309749fcecf35a0d3ead085622b9efd8eb66b64aeb4d)\n4891121de78e076c2453309749fcecf35a0d3ead085622b9efd8eb66b64aeb4d\nTue Oct 14 09:05:46 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8 (4891121de78e076c2453309749fcecf35a0d3ead085622b9efd8eb66b64aeb4d)\n4891121de78e076c2453309749fcecf35a0d3ead085622b9efd8eb66b64aeb4d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.725 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a2abaf2a-87ee-4bb8-a4b2-88dd5c5c098f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.726 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77fb75b2-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:46 np0005486808 kernel: tap77fb75b2-40: left promiscuous mode
Oct 14 05:05:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.736 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5c4b1df0-582e-4e37-8e3e-2f10d7364c7f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.766 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[de879427-2cff-4fa1-8210-f2ae941c0494]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.767 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2efda798-bb92-4a39-a761-3b4ee4d6772d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.782 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7f23c88d-3d27-4df2-9c2f-a63079ce4436]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649923, 'reachable_time': 42307, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320622, 'error': None, 'target': 'ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:46 np0005486808 systemd[1]: run-netns-ovnmeta\x2d77fb75b2\x2d483b\x2d47a5\x2d99a5\x2dae91248b8ed8.mount: Deactivated successfully.
Oct 14 05:05:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.786 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-77fb75b2-483b-47a5-99a5-ae91248b8ed8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:05:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.786 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[3b7ef62e-6f2c-4aa1-be82-4c84993434fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.787 162547 INFO neutron.agent.ovn.metadata.agent [-] Port e3bc3ac3-6147-40d0-a19c-df111dcf23a5 in datapath fc2d149f-aebf-406a-aed2-5161dd22b079 unbound from our chassis#033[00m
Oct 14 05:05:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.789 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc2d149f-aebf-406a-aed2-5161dd22b079#033[00m
Oct 14 05:05:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.806 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e1474c50-2308-4562-a915-35987ed8d302]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.846 2 DEBUG nova.compute.manager [req-d8b59368-ff0d-4b96-913a-89709495df09 req-44dea7d5-cb50-4469-b83b-ea3abfee54f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-vif-plugged-be6943f6-df97-4a84-854b-858cc7d3ea1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.847 2 DEBUG oslo_concurrency.lockutils [req-d8b59368-ff0d-4b96-913a-89709495df09 req-44dea7d5-cb50-4469-b83b-ea3abfee54f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.848 2 DEBUG oslo_concurrency.lockutils [req-d8b59368-ff0d-4b96-913a-89709495df09 req-44dea7d5-cb50-4469-b83b-ea3abfee54f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:05:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.847 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[6a8b2511-45d1-49b5-ba94-235d2281658f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.848 2 DEBUG oslo_concurrency.lockutils [req-d8b59368-ff0d-4b96-913a-89709495df09 req-44dea7d5-cb50-4469-b83b-ea3abfee54f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.848 2 DEBUG nova.compute.manager [req-d8b59368-ff0d-4b96-913a-89709495df09 req-44dea7d5-cb50-4469-b83b-ea3abfee54f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] No waiting events found dispatching network-vif-plugged-be6943f6-df97-4a84-854b-858cc7d3ea1a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.848 2 WARNING nova.compute.manager [req-d8b59368-ff0d-4b96-913a-89709495df09 req-44dea7d5-cb50-4469-b83b-ea3abfee54f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received unexpected event network-vif-plugged-be6943f6-df97-4a84-854b-858cc7d3ea1a for instance with vm_state active and task_state None.#033[00m
Oct 14 05:05:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.852 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3c9c53a2-71a2-4d48-b59a-97f49ece8d2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.884 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1a0db64d-65de-4638-a4f1-3326086cca03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.909 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1321533e-1eee-46e1-b800-29be6c80ebde]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc2d149f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:e7:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 11, 'rx_bytes': 1084, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 11, 'rx_bytes': 1084, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647447, 'reachable_time': 19128, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320631, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.929 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a5dd413c-75e8-4a18-bc49-3a0f059268bc]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647459, 'tstamp': 647459}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320632, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647463, 'tstamp': 647463}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320632, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.930 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc2d149f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.934 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc2d149f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.934 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:05:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.934 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc2d149f-a0, col_values=(('external_ids', {'iface-id': '156432ee-35b5-40a9-aded-8066933d9972'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:46.935 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.960 2 DEBUG oslo_concurrency.lockutils [None req-132eae2f-601c-4eec-98ad-88e0ceb8bea4 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "ec31b9ab-88ab-4085-a46b-76cb9825061a" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.961 2 DEBUG oslo_concurrency.lockutils [None req-132eae2f-601c-4eec-98ad-88e0ceb8bea4 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.961 2 DEBUG nova.compute.manager [None req-132eae2f-601c-4eec-98ad-88e0ceb8bea4 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.964 2 DEBUG nova.compute.manager [None req-132eae2f-601c-4eec-98ad-88e0ceb8bea4 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.965 2 DEBUG nova.objects.instance [None req-132eae2f-601c-4eec-98ad-88e0ceb8bea4 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lazy-loading 'flavor' on Instance uuid ec31b9ab-88ab-4085-a46b-76cb9825061a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.975 2 INFO nova.virt.libvirt.driver [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Deleting instance files /var/lib/nova/instances/65c7e6ed-131f-4bca-af69-a1241d048bdb_del#033[00m
Oct 14 05:05:46 np0005486808 nova_compute[259627]: 2025-10-14 09:05:46.976 2 INFO nova.virt.libvirt.driver [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Deletion of /var/lib/nova/instances/65c7e6ed-131f-4bca-af69-a1241d048bdb_del complete#033[00m
Oct 14 05:05:47 np0005486808 nova_compute[259627]: 2025-10-14 09:05:47.007 2 DEBUG nova.virt.libvirt.driver [None req-132eae2f-601c-4eec-98ad-88e0ceb8bea4 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct 14 05:05:47 np0005486808 nova_compute[259627]: 2025-10-14 09:05:47.034 2 INFO nova.compute.manager [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:05:47 np0005486808 nova_compute[259627]: 2025-10-14 09:05:47.034 2 DEBUG oslo.service.loopingcall [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:05:47 np0005486808 nova_compute[259627]: 2025-10-14 09:05:47.035 2 DEBUG nova.compute.manager [-] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:05:47 np0005486808 nova_compute[259627]: 2025-10-14 09:05:47.035 2 DEBUG nova.network.neutron [-] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:05:47 np0005486808 nova_compute[259627]: 2025-10-14 09:05:47.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:47 np0005486808 nova_compute[259627]: 2025-10-14 09:05:47.541 2 DEBUG oslo_concurrency.lockutils [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:05:47 np0005486808 nova_compute[259627]: 2025-10-14 09:05:47.541 2 DEBUG oslo_concurrency.lockutils [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquired lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:05:47 np0005486808 nova_compute[259627]: 2025-10-14 09:05:47.542 2 DEBUG nova.network.neutron [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:05:47 np0005486808 nova_compute[259627]: 2025-10-14 09:05:47.629 2 DEBUG nova.compute.manager [req-f49ab6d4-8054-470f-ae7e-857d6f70261c req-ed96da9b-6b9c-4897-abf7-0cb49a0d1d52 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Received event network-vif-plugged-31797867-f0bc-4632-a658-bd0fae609c23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:05:47 np0005486808 nova_compute[259627]: 2025-10-14 09:05:47.629 2 DEBUG oslo_concurrency.lockutils [req-f49ab6d4-8054-470f-ae7e-857d6f70261c req-ed96da9b-6b9c-4897-abf7-0cb49a0d1d52 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:05:47 np0005486808 nova_compute[259627]: 2025-10-14 09:05:47.629 2 DEBUG oslo_concurrency.lockutils [req-f49ab6d4-8054-470f-ae7e-857d6f70261c req-ed96da9b-6b9c-4897-abf7-0cb49a0d1d52 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:05:47 np0005486808 nova_compute[259627]: 2025-10-14 09:05:47.629 2 DEBUG oslo_concurrency.lockutils [req-f49ab6d4-8054-470f-ae7e-857d6f70261c req-ed96da9b-6b9c-4897-abf7-0cb49a0d1d52 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:05:47 np0005486808 nova_compute[259627]: 2025-10-14 09:05:47.630 2 DEBUG nova.compute.manager [req-f49ab6d4-8054-470f-ae7e-857d6f70261c req-ed96da9b-6b9c-4897-abf7-0cb49a0d1d52 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] No waiting events found dispatching network-vif-plugged-31797867-f0bc-4632-a658-bd0fae609c23 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:05:47 np0005486808 nova_compute[259627]: 2025-10-14 09:05:47.630 2 WARNING nova.compute.manager [req-f49ab6d4-8054-470f-ae7e-857d6f70261c req-ed96da9b-6b9c-4897-abf7-0cb49a0d1d52 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Received unexpected event network-vif-plugged-31797867-f0bc-4632-a658-bd0fae609c23 for instance with vm_state active and task_state powering-off.#033[00m
Oct 14 05:05:47 np0005486808 nova_compute[259627]: 2025-10-14 09:05:47.630 2 DEBUG nova.compute.manager [req-f49ab6d4-8054-470f-ae7e-857d6f70261c req-ed96da9b-6b9c-4897-abf7-0cb49a0d1d52 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Received event network-vif-unplugged-282dfd9e-9e84-450c-a306-8bc55428feb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:05:47 np0005486808 nova_compute[259627]: 2025-10-14 09:05:47.630 2 DEBUG oslo_concurrency.lockutils [req-f49ab6d4-8054-470f-ae7e-857d6f70261c req-ed96da9b-6b9c-4897-abf7-0cb49a0d1d52 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "65c7e6ed-131f-4bca-af69-a1241d048bdb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:05:47 np0005486808 nova_compute[259627]: 2025-10-14 09:05:47.630 2 DEBUG oslo_concurrency.lockutils [req-f49ab6d4-8054-470f-ae7e-857d6f70261c req-ed96da9b-6b9c-4897-abf7-0cb49a0d1d52 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "65c7e6ed-131f-4bca-af69-a1241d048bdb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:05:47 np0005486808 nova_compute[259627]: 2025-10-14 09:05:47.630 2 DEBUG oslo_concurrency.lockutils [req-f49ab6d4-8054-470f-ae7e-857d6f70261c req-ed96da9b-6b9c-4897-abf7-0cb49a0d1d52 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "65c7e6ed-131f-4bca-af69-a1241d048bdb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:05:47 np0005486808 nova_compute[259627]: 2025-10-14 09:05:47.631 2 DEBUG nova.compute.manager [req-f49ab6d4-8054-470f-ae7e-857d6f70261c req-ed96da9b-6b9c-4897-abf7-0cb49a0d1d52 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] No waiting events found dispatching network-vif-unplugged-282dfd9e-9e84-450c-a306-8bc55428feb4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:05:47 np0005486808 nova_compute[259627]: 2025-10-14 09:05:47.631 2 DEBUG nova.compute.manager [req-f49ab6d4-8054-470f-ae7e-857d6f70261c req-ed96da9b-6b9c-4897-abf7-0cb49a0d1d52 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Received event network-vif-unplugged-282dfd9e-9e84-450c-a306-8bc55428feb4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:05:47 np0005486808 nova_compute[259627]: 2025-10-14 09:05:47.631 2 DEBUG nova.compute.manager [req-f49ab6d4-8054-470f-ae7e-857d6f70261c req-ed96da9b-6b9c-4897-abf7-0cb49a0d1d52 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Received event network-vif-plugged-282dfd9e-9e84-450c-a306-8bc55428feb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:05:47 np0005486808 nova_compute[259627]: 2025-10-14 09:05:47.631 2 DEBUG oslo_concurrency.lockutils [req-f49ab6d4-8054-470f-ae7e-857d6f70261c req-ed96da9b-6b9c-4897-abf7-0cb49a0d1d52 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "65c7e6ed-131f-4bca-af69-a1241d048bdb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:05:47 np0005486808 nova_compute[259627]: 2025-10-14 09:05:47.631 2 DEBUG oslo_concurrency.lockutils [req-f49ab6d4-8054-470f-ae7e-857d6f70261c req-ed96da9b-6b9c-4897-abf7-0cb49a0d1d52 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "65c7e6ed-131f-4bca-af69-a1241d048bdb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:05:47 np0005486808 nova_compute[259627]: 2025-10-14 09:05:47.631 2 DEBUG oslo_concurrency.lockutils [req-f49ab6d4-8054-470f-ae7e-857d6f70261c req-ed96da9b-6b9c-4897-abf7-0cb49a0d1d52 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "65c7e6ed-131f-4bca-af69-a1241d048bdb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:05:47 np0005486808 nova_compute[259627]: 2025-10-14 09:05:47.632 2 DEBUG nova.compute.manager [req-f49ab6d4-8054-470f-ae7e-857d6f70261c req-ed96da9b-6b9c-4897-abf7-0cb49a0d1d52 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] No waiting events found dispatching network-vif-plugged-282dfd9e-9e84-450c-a306-8bc55428feb4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:05:47 np0005486808 nova_compute[259627]: 2025-10-14 09:05:47.632 2 WARNING nova.compute.manager [req-f49ab6d4-8054-470f-ae7e-857d6f70261c req-ed96da9b-6b9c-4897-abf7-0cb49a0d1d52 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Received unexpected event network-vif-plugged-282dfd9e-9e84-450c-a306-8bc55428feb4 for instance with vm_state active and task_state deleting.#033[00m
Oct 14 05:05:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:05:47 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1506: 305 pgs: 305 active+clean; 326 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 7.9 KiB/s rd, 37 KiB/s wr, 12 op/s
Oct 14 05:05:48 np0005486808 nova_compute[259627]: 2025-10-14 09:05:48.010 2 DEBUG nova.network.neutron [-] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:05:48 np0005486808 nova_compute[259627]: 2025-10-14 09:05:48.030 2 INFO nova.compute.manager [-] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Took 1.00 seconds to deallocate network for instance.#033[00m
Oct 14 05:05:48 np0005486808 nova_compute[259627]: 2025-10-14 09:05:48.084 2 DEBUG oslo_concurrency.lockutils [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:05:48 np0005486808 nova_compute[259627]: 2025-10-14 09:05:48.085 2 DEBUG oslo_concurrency.lockutils [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:05:48 np0005486808 nova_compute[259627]: 2025-10-14 09:05:48.186 2 DEBUG oslo_concurrency.processutils [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:05:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:05:48 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/990524553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:05:48 np0005486808 nova_compute[259627]: 2025-10-14 09:05:48.670 2 DEBUG oslo_concurrency.processutils [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:05:48 np0005486808 nova_compute[259627]: 2025-10-14 09:05:48.681 2 DEBUG nova.compute.provider_tree [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:05:48 np0005486808 nova_compute[259627]: 2025-10-14 09:05:48.709 2 DEBUG nova.scheduler.client.report [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:05:48 np0005486808 nova_compute[259627]: 2025-10-14 09:05:48.747 2 DEBUG oslo_concurrency.lockutils [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:05:48 np0005486808 nova_compute[259627]: 2025-10-14 09:05:48.781 2 INFO nova.scheduler.client.report [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Deleted allocations for instance 65c7e6ed-131f-4bca-af69-a1241d048bdb#033[00m
Oct 14 05:05:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:48.826 162547 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 0e55393a-241b-4e02-88ac-00433344f70e with type ""#033[00m
Oct 14 05:05:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:48.827 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:ae:0e 10.100.0.14'], port_security=['fa:16:3e:a4:ae:0e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-4377624', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '47257c6e-4d10-4d8e-af5a-b57db20048ea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2d149f-aebf-406a-aed2-5161dd22b079', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-4377624', 'neutron:project_id': '8945d892653642ac9c3d894f8bc3619d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '643bc47b-384e-4818-a06b-8ecfda630b09', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e933e805-d9b6-497a-9ba3-5f7153390420, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=be6943f6-df97-4a84-854b-858cc7d3ea1a) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:05:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:48.828 162547 INFO neutron.agent.ovn.metadata.agent [-] Port be6943f6-df97-4a84-854b-858cc7d3ea1a in datapath fc2d149f-aebf-406a-aed2-5161dd22b079 unbound from our chassis#033[00m
Oct 14 05:05:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:48.830 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc2d149f-aebf-406a-aed2-5161dd22b079#033[00m
Oct 14 05:05:48 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:48Z|00558|binding|INFO|Removing iface tapbe6943f6-df ovn-installed in OVS
Oct 14 05:05:48 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:48Z|00559|binding|INFO|Removing lport be6943f6-df97-4a84-854b-858cc7d3ea1a ovn-installed in OVS
Oct 14 05:05:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:48.876 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[aa8d4988-93d3-4214-b8fe-84014cf0cd75]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:48 np0005486808 nova_compute[259627]: 2025-10-14 09:05:48.877 2 INFO nova.network.neutron [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Port e3bc3ac3-6147-40d0-a19c-df111dcf23a5 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Oct 14 05:05:48 np0005486808 nova_compute[259627]: 2025-10-14 09:05:48.888 2 DEBUG oslo_concurrency.lockutils [None req-4d64a793-cbd3-4be1-9919-01d80a98ba00 64a22b9370d049c0b189508f3f58f0ca 55e6e0201a064f1390a998f830140354 - - default default] Lock "65c7e6ed-131f-4bca-af69-a1241d048bdb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:05:48 np0005486808 nova_compute[259627]: 2025-10-14 09:05:48.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:48.915 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[51e06dcb-082b-4124-914b-f451ed2dd85d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:48.919 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[b2a55b1c-6829-426c-a519-a6ba9a2965b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:48 np0005486808 nova_compute[259627]: 2025-10-14 09:05:48.945 2 DEBUG nova.compute.manager [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-vif-unplugged-e3bc3ac3-6147-40d0-a19c-df111dcf23a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:05:48 np0005486808 nova_compute[259627]: 2025-10-14 09:05:48.946 2 DEBUG oslo_concurrency.lockutils [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:05:48 np0005486808 nova_compute[259627]: 2025-10-14 09:05:48.946 2 DEBUG oslo_concurrency.lockutils [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:05:48 np0005486808 nova_compute[259627]: 2025-10-14 09:05:48.947 2 DEBUG oslo_concurrency.lockutils [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:05:48 np0005486808 nova_compute[259627]: 2025-10-14 09:05:48.947 2 DEBUG nova.compute.manager [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] No waiting events found dispatching network-vif-unplugged-e3bc3ac3-6147-40d0-a19c-df111dcf23a5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:05:48 np0005486808 nova_compute[259627]: 2025-10-14 09:05:48.948 2 WARNING nova.compute.manager [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received unexpected event network-vif-unplugged-e3bc3ac3-6147-40d0-a19c-df111dcf23a5 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:05:48 np0005486808 nova_compute[259627]: 2025-10-14 09:05:48.948 2 DEBUG nova.compute.manager [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-vif-plugged-e3bc3ac3-6147-40d0-a19c-df111dcf23a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:05:48 np0005486808 nova_compute[259627]: 2025-10-14 09:05:48.948 2 DEBUG oslo_concurrency.lockutils [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:05:48 np0005486808 nova_compute[259627]: 2025-10-14 09:05:48.949 2 DEBUG oslo_concurrency.lockutils [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:05:48 np0005486808 nova_compute[259627]: 2025-10-14 09:05:48.949 2 DEBUG oslo_concurrency.lockutils [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:05:48 np0005486808 nova_compute[259627]: 2025-10-14 09:05:48.949 2 DEBUG nova.compute.manager [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] No waiting events found dispatching network-vif-plugged-e3bc3ac3-6147-40d0-a19c-df111dcf23a5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:05:48 np0005486808 nova_compute[259627]: 2025-10-14 09:05:48.950 2 WARNING nova.compute.manager [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received unexpected event network-vif-plugged-e3bc3ac3-6147-40d0-a19c-df111dcf23a5 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:05:48 np0005486808 nova_compute[259627]: 2025-10-14 09:05:48.950 2 DEBUG nova.compute.manager [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-vif-deleted-e3bc3ac3-6147-40d0-a19c-df111dcf23a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:05:48 np0005486808 nova_compute[259627]: 2025-10-14 09:05:48.951 2 INFO nova.compute.manager [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Neutron deleted interface e3bc3ac3-6147-40d0-a19c-df111dcf23a5; detaching it from the instance and deleting it from the info cache#033[00m
Oct 14 05:05:48 np0005486808 nova_compute[259627]: 2025-10-14 09:05:48.951 2 DEBUG nova.network.neutron [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Updating instance_info_cache with network_info: [{"id": "971d99c2-5a60-4cac-8f99-e819d71e419c", "address": "fa:16:3e:9a:79:ab", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap971d99c2-5a", "ovs_interfaceid": "971d99c2-5a60-4cac-8f99-e819d71e419c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b624404b-6681-4fc8-a870-dc9418e2de0f", "address": "fa:16:3e:bb:02:94", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb624404b-66", "ovs_interfaceid": "b624404b-6681-4fc8-a870-dc9418e2de0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "be6943f6-df97-4a84-854b-858cc7d3ea1a", "address": "fa:16:3e:a4:ae:0e", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe6943f6-df", "ovs_interfaceid": "be6943f6-df97-4a84-854b-858cc7d3ea1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:05:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:48.958 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[cd3d1b34-47d6-44c4-b912-fac68af83c04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:48 np0005486808 nova_compute[259627]: 2025-10-14 09:05:48.973 2 DEBUG nova.objects.instance [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lazy-loading 'system_metadata' on Instance uuid 47257c6e-4d10-4d8e-af5a-b57db20048ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:05:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:48.982 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[142d9c40-16bf-48ea-b865-c6f3d6ca2fc5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc2d149f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:e7:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 13, 'rx_bytes': 1084, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 13, 'rx_bytes': 1084, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647447, 'reachable_time': 19128, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320762, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.002 2 DEBUG nova.objects.instance [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lazy-loading 'flavor' on Instance uuid 47257c6e-4d10-4d8e-af5a-b57db20048ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:05:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.003 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0d079179-1bc3-45dd-90dd-c96aedac38ff]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647459, 'tstamp': 647459}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320772, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647463, 'tstamp': 647463}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320772, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.005 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc2d149f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.009 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc2d149f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.010 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.011 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc2d149f-a0, col_values=(('external_ids', {'iface-id': '156432ee-35b5-40a9-aded-8066933d9972'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.011 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.029 2 DEBUG nova.virt.libvirt.vif [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:04:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-766543543',display_name='tempest-AttachInterfacesTestJSON-server-766543543',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-766543543',id=54,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJQrZCrZaptXuotj+57tBn3/OGSvBqxv5B+H31iXFpSI71zcU5A1JwpwDksSj1zemT2KJ76DfcdGv/IVNbfXVnjr4ntmsdxnv83TIXCvdXfabQIpcV4CIqh+n8+7v/6SjQ==',key_name='tempest-keypair-1747841174',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:04:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-fxki54rp',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=47257c6e-4d10-4d8e-af5a-b57db20048ea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "address": "fa:16:3e:54:94:02", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3bc3ac3-61", "ovs_interfaceid": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.029 2 DEBUG nova.network.os_vif_util [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Converting VIF {"id": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "address": "fa:16:3e:54:94:02", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3bc3ac3-61", "ovs_interfaceid": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.030 2 DEBUG nova.network.os_vif_util [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:54:94:02,bridge_name='br-int',has_traffic_filtering=True,id=e3bc3ac3-6147-40d0-a19c-df111dcf23a5,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3bc3ac3-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.035 2 DEBUG nova.virt.libvirt.guest [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:54:94:02"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape3bc3ac3-61"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.040 2 DEBUG nova.virt.libvirt.guest [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:54:94:02"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape3bc3ac3-61"/></interface>not found in domain: <domain type='kvm' id='67'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <name>instance-00000036</name>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <uuid>47257c6e-4d10-4d8e-af5a-b57db20048ea</uuid>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <nova:name>tempest-AttachInterfacesTestJSON-server-766543543</nova:name>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <nova:creationTime>2025-10-14 09:05:46</nova:creationTime>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <nova:flavor name="m1.nano">
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <nova:memory>128</nova:memory>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <nova:disk>1</nova:disk>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <nova:swap>0</nova:swap>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <nova:vcpus>1</nova:vcpus>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  </nova:flavor>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <nova:owner>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  </nova:owner>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <nova:ports>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <nova:port uuid="971d99c2-5a60-4cac-8f99-e819d71e419c">
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <nova:port uuid="b624404b-6681-4fc8-a870-dc9418e2de0f">
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <nova:port uuid="be6943f6-df97-4a84-854b-858cc7d3ea1a">
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  </nova:ports>
Oct 14 05:05:49 np0005486808 nova_compute[259627]: </nova:instance>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <memory unit='KiB'>131072</memory>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <vcpu placement='static'>1</vcpu>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <resource>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <partition>/machine</partition>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  </resource>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <sysinfo type='smbios'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <entry name='manufacturer'>RDO</entry>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <entry name='product'>OpenStack Compute</entry>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <entry name='serial'>47257c6e-4d10-4d8e-af5a-b57db20048ea</entry>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <entry name='uuid'>47257c6e-4d10-4d8e-af5a-b57db20048ea</entry>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <entry name='family'>Virtual Machine</entry>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <boot dev='hd'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <smbios mode='sysinfo'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <vmcoreinfo state='on'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <cpu mode='custom' match='exact' check='full'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <model fallback='forbid'>EPYC-Rome</model>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <vendor>AMD</vendor>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='require' name='x2apic'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='require' name='tsc-deadline'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='require' name='hypervisor'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='require' name='tsc_adjust'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='require' name='spec-ctrl'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='require' name='stibp'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='require' name='arch-capabilities'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='require' name='ssbd'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='require' name='cmp_legacy'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='require' name='overflow-recov'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='require' name='succor'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='require' name='ibrs'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='require' name='amd-ssbd'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='require' name='virt-ssbd'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='disable' name='lbrv'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='disable' name='tsc-scale'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='disable' name='vmcb-clean'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='disable' name='flushbyasid'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='disable' name='pause-filter'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='disable' name='pfthreshold'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='disable' name='svme-addr-chk'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='require' name='lfence-always-serializing'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='require' name='rdctl-no'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='require' name='mds-no'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='require' name='pschange-mc-no'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='require' name='gds-no'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='require' name='rfds-no'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='disable' name='xsaves'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='disable' name='svm'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='require' name='topoext'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='disable' name='npt'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='disable' name='nrip-save'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <clock offset='utc'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <timer name='pit' tickpolicy='delay'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <timer name='rtc' tickpolicy='catchup'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <timer name='hpet' present='no'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <on_poweroff>destroy</on_poweroff>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <on_reboot>restart</on_reboot>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <on_crash>destroy</on_crash>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <disk type='network' device='disk'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <driver name='qemu' type='raw' cache='none'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <auth username='openstack'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:        <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <source protocol='rbd' name='vms/47257c6e-4d10-4d8e-af5a-b57db20048ea_disk' index='2'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:        <host name='192.168.122.100' port='6789'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target dev='vda' bus='virtio'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='virtio-disk0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <disk type='network' device='cdrom'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <driver name='qemu' type='raw' cache='none'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <auth username='openstack'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:        <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <source protocol='rbd' name='vms/47257c6e-4d10-4d8e-af5a-b57db20048ea_disk.config' index='1'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:        <host name='192.168.122.100' port='6789'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target dev='sda' bus='sata'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <readonly/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='sata0-0-0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='0' model='pcie-root'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pcie.0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='1' port='0x10'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.1'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='2' port='0x11'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.2'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='3' port='0x12'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.3'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='4' port='0x13'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.4'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='5' port='0x14'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.5'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='6' port='0x15'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.6'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='7' port='0x16'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.7'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='8' port='0x17'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.8'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='9' port='0x18'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.9'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='10' port='0x19'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.10'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='11' port='0x1a'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.11'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='12' port='0x1b'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.12'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='13' port='0x1c'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.13'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='14' port='0x1d'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.14'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='15' port='0x1e'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.15'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='16' port='0x1f'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.16'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='17' port='0x20'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.17'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='18' port='0x21'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.18'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='19' port='0x22'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.19'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='20' port='0x23'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.20'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='21' port='0x24'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.21'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='22' port='0x25'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.22'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='23' port='0x26'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.23'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='24' port='0x27'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.24'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='25' port='0x28'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.25'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-pci-bridge'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.26'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='usb'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='sata' index='0'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='ide'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <interface type='ethernet'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <mac address='fa:16:3e:9a:79:ab'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target dev='tap971d99c2-5a'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model type='virtio'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <driver name='vhost' rx_queue_size='512'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <mtu size='1442'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='net0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <interface type='ethernet'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <mac address='fa:16:3e:bb:02:94'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target dev='tapb624404b-66'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model type='virtio'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <driver name='vhost' rx_queue_size='512'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <mtu size='1442'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='net2'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <interface type='ethernet'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <mac address='fa:16:3e:a4:ae:0e'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target dev='tapbe6943f6-df'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model type='virtio'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <driver name='vhost' rx_queue_size='512'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <mtu size='1442'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='net3'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <serial type='pty'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <source path='/dev/pts/0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <log file='/var/lib/nova/instances/47257c6e-4d10-4d8e-af5a-b57db20048ea/console.log' append='off'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target type='isa-serial' port='0'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:        <model name='isa-serial'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      </target>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='serial0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <console type='pty' tty='/dev/pts/0'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <source path='/dev/pts/0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <log file='/var/lib/nova/instances/47257c6e-4d10-4d8e-af5a-b57db20048ea/console.log' append='off'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target type='serial' port='0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='serial0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </console>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <input type='tablet' bus='usb'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='input0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='usb' bus='0' port='1'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <input type='mouse' bus='ps2'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='input1'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <input type='keyboard' bus='ps2'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='input2'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <listen type='address' address='::0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </graphics>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <audio id='1' type='none'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model type='virtio' heads='1' primary='yes'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='video0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <watchdog model='itco' action='reset'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='watchdog0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </watchdog>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <memballoon model='virtio'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <stats period='10'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='balloon0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <rng model='virtio'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <backend model='random'>/dev/urandom</backend>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='rng0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <label>system_u:system_r:svirt_t:s0:c314,c342</label>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c314,c342</imagelabel>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  </seclabel>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <label>+107:+107</label>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <imagelabel>+107:+107</imagelabel>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  </seclabel>
Oct 14 05:05:49 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:05:49 np0005486808 nova_compute[259627]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.043 2 DEBUG nova.virt.libvirt.guest [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:54:94:02"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape3bc3ac3-61"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.045 2 DEBUG oslo_concurrency.lockutils [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "47257c6e-4d10-4d8e-af5a-b57db20048ea" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.046 2 DEBUG oslo_concurrency.lockutils [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.046 2 DEBUG oslo_concurrency.lockutils [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.047 2 DEBUG oslo_concurrency.lockutils [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.047 2 DEBUG oslo_concurrency.lockutils [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.048 2 INFO nova.compute.manager [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Terminating instance
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.049 2 DEBUG nova.compute.manager [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.057 2 DEBUG nova.virt.libvirt.guest [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:54:94:02"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape3bc3ac3-61"/></interface>not found in domain: <domain type='kvm' id='67'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <name>instance-00000036</name>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <uuid>47257c6e-4d10-4d8e-af5a-b57db20048ea</uuid>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <nova:name>tempest-AttachInterfacesTestJSON-server-766543543</nova:name>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <nova:creationTime>2025-10-14 09:05:46</nova:creationTime>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <nova:flavor name="m1.nano">
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <nova:memory>128</nova:memory>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <nova:disk>1</nova:disk>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <nova:swap>0</nova:swap>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <nova:vcpus>1</nova:vcpus>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  </nova:flavor>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <nova:owner>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  </nova:owner>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <nova:ports>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <nova:port uuid="971d99c2-5a60-4cac-8f99-e819d71e419c">
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <nova:port uuid="b624404b-6681-4fc8-a870-dc9418e2de0f">
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <nova:port uuid="be6943f6-df97-4a84-854b-858cc7d3ea1a">
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  </nova:ports>
Oct 14 05:05:49 np0005486808 nova_compute[259627]: </nova:instance>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <memory unit='KiB'>131072</memory>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <vcpu placement='static'>1</vcpu>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <resource>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <partition>/machine</partition>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  </resource>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <sysinfo type='smbios'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <entry name='manufacturer'>RDO</entry>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <entry name='product'>OpenStack Compute</entry>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <entry name='serial'>47257c6e-4d10-4d8e-af5a-b57db20048ea</entry>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <entry name='uuid'>47257c6e-4d10-4d8e-af5a-b57db20048ea</entry>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <entry name='family'>Virtual Machine</entry>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <boot dev='hd'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <smbios mode='sysinfo'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <vmcoreinfo state='on'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <cpu mode='custom' match='exact' check='full'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <model fallback='forbid'>EPYC-Rome</model>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <vendor>AMD</vendor>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='require' name='x2apic'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='require' name='tsc-deadline'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='require' name='hypervisor'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='require' name='tsc_adjust'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='require' name='spec-ctrl'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='require' name='stibp'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='require' name='arch-capabilities'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='require' name='ssbd'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='require' name='cmp_legacy'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='require' name='overflow-recov'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='require' name='succor'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='require' name='ibrs'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='require' name='amd-ssbd'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='require' name='virt-ssbd'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='disable' name='lbrv'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='disable' name='tsc-scale'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='disable' name='vmcb-clean'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='disable' name='flushbyasid'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='disable' name='pause-filter'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='disable' name='pfthreshold'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='disable' name='svme-addr-chk'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='require' name='lfence-always-serializing'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='require' name='rdctl-no'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='require' name='mds-no'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='require' name='pschange-mc-no'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='require' name='gds-no'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='require' name='rfds-no'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='disable' name='xsaves'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='disable' name='svm'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='require' name='topoext'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='disable' name='npt'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <feature policy='disable' name='nrip-save'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <clock offset='utc'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <timer name='pit' tickpolicy='delay'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <timer name='rtc' tickpolicy='catchup'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <timer name='hpet' present='no'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <on_poweroff>destroy</on_poweroff>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <on_reboot>restart</on_reboot>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <on_crash>destroy</on_crash>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <disk type='network' device='disk'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <driver name='qemu' type='raw' cache='none'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <auth username='openstack'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:        <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <source protocol='rbd' name='vms/47257c6e-4d10-4d8e-af5a-b57db20048ea_disk' index='2'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:        <host name='192.168.122.100' port='6789'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target dev='vda' bus='virtio'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='virtio-disk0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <disk type='network' device='cdrom'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <driver name='qemu' type='raw' cache='none'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <auth username='openstack'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:        <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <source protocol='rbd' name='vms/47257c6e-4d10-4d8e-af5a-b57db20048ea_disk.config' index='1'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:        <host name='192.168.122.100' port='6789'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target dev='sda' bus='sata'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <readonly/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='sata0-0-0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='0' model='pcie-root'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pcie.0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='1' port='0x10'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.1'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='2' port='0x11'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.2'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='3' port='0x12'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.3'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='4' port='0x13'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.4'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='5' port='0x14'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.5'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='6' port='0x15'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.6'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='7' port='0x16'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.7'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='8' port='0x17'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.8'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='9' port='0x18'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.9'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='10' port='0x19'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.10'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='11' port='0x1a'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.11'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='12' port='0x1b'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.12'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='13' port='0x1c'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.13'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='14' port='0x1d'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.14'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='15' port='0x1e'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.15'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='16' port='0x1f'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.16'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='17' port='0x20'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.17'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='18' port='0x21'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.18'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='19' port='0x22'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.19'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='20' port='0x23'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.20'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='21' port='0x24'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.21'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='22' port='0x25'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.22'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='23' port='0x26'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.23'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='24' port='0x27'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.24'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target chassis='25' port='0x28'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.25'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model name='pcie-pci-bridge'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='pci.26'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='usb'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <controller type='sata' index='0'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='ide'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <interface type='ethernet'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <mac address='fa:16:3e:9a:79:ab'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target dev='tap971d99c2-5a'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model type='virtio'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <driver name='vhost' rx_queue_size='512'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <mtu size='1442'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='net0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <interface type='ethernet'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <mac address='fa:16:3e:bb:02:94'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target dev='tapb624404b-66'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model type='virtio'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <driver name='vhost' rx_queue_size='512'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <mtu size='1442'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='net2'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <interface type='ethernet'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <mac address='fa:16:3e:a4:ae:0e'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target dev='tapbe6943f6-df'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model type='virtio'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <driver name='vhost' rx_queue_size='512'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <mtu size='1442'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='net3'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <serial type='pty'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <source path='/dev/pts/0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <log file='/var/lib/nova/instances/47257c6e-4d10-4d8e-af5a-b57db20048ea/console.log' append='off'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target type='isa-serial' port='0'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:        <model name='isa-serial'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      </target>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='serial0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <console type='pty' tty='/dev/pts/0'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <source path='/dev/pts/0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <log file='/var/lib/nova/instances/47257c6e-4d10-4d8e-af5a-b57db20048ea/console.log' append='off'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <target type='serial' port='0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='serial0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </console>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <input type='tablet' bus='usb'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='input0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='usb' bus='0' port='1'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <input type='mouse' bus='ps2'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='input1'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <input type='keyboard' bus='ps2'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='input2'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <listen type='address' address='::0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </graphics>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <audio id='1' type='none'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <model type='virtio' heads='1' primary='yes'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='video0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <watchdog model='itco' action='reset'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='watchdog0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </watchdog>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <memballoon model='virtio'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <stats period='10'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='balloon0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <rng model='virtio'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <backend model='random'>/dev/urandom</backend>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <alias name='rng0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <label>system_u:system_r:svirt_t:s0:c314,c342</label>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c314,c342</imagelabel>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  </seclabel>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <label>+107:+107</label>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <imagelabel>+107:+107</imagelabel>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  </seclabel>
Oct 14 05:05:49 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:05:49 np0005486808 nova_compute[259627]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.061 2 WARNING nova.virt.libvirt.driver [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Detaching interface fa:16:3e:54:94:02 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tape3bc3ac3-61' not found.#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.063 2 DEBUG nova.virt.libvirt.vif [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:04:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-766543543',display_name='tempest-AttachInterfacesTestJSON-server-766543543',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-766543543',id=54,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJQrZCrZaptXuotj+57tBn3/OGSvBqxv5B+H31iXFpSI71zcU5A1JwpwDksSj1zemT2KJ76DfcdGv/IVNbfXVnjr4ntmsdxnv83TIXCvdXfabQIpcV4CIqh+n8+7v/6SjQ==',key_name='tempest-keypair-1747841174',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:04:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-fxki54rp',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=47257c6e-4d10-4d8e-af5a-b57db20048ea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "address": "fa:16:3e:54:94:02", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3bc3ac3-61", "ovs_interfaceid": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.063 2 DEBUG nova.network.os_vif_util [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Converting VIF {"id": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "address": "fa:16:3e:54:94:02", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3bc3ac3-61", "ovs_interfaceid": "e3bc3ac3-6147-40d0-a19c-df111dcf23a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.065 2 DEBUG nova.network.os_vif_util [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:54:94:02,bridge_name='br-int',has_traffic_filtering=True,id=e3bc3ac3-6147-40d0-a19c-df111dcf23a5,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3bc3ac3-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.066 2 DEBUG os_vif [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:94:02,bridge_name='br-int',has_traffic_filtering=True,id=e3bc3ac3-6147-40d0-a19c-df111dcf23a5,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3bc3ac3-61') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.068 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape3bc3ac3-61, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.069 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.071 2 INFO os_vif [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:94:02,bridge_name='br-int',has_traffic_filtering=True,id=e3bc3ac3-6147-40d0-a19c-df111dcf23a5,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3bc3ac3-61')#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.072 2 DEBUG nova.virt.libvirt.guest [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <nova:name>tempest-AttachInterfacesTestJSON-server-766543543</nova:name>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <nova:creationTime>2025-10-14 09:05:49</nova:creationTime>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <nova:flavor name="m1.nano">
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <nova:memory>128</nova:memory>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <nova:disk>1</nova:disk>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <nova:swap>0</nova:swap>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <nova:vcpus>1</nova:vcpus>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  </nova:flavor>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <nova:owner>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  </nova:owner>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  <nova:ports>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <nova:port uuid="971d99c2-5a60-4cac-8f99-e819d71e419c">
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <nova:port uuid="b624404b-6681-4fc8-a870-dc9418e2de0f">
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    <nova:port uuid="be6943f6-df97-4a84-854b-858cc7d3ea1a">
Oct 14 05:05:49 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:05:49 np0005486808 nova_compute[259627]:  </nova:ports>
Oct 14 05:05:49 np0005486808 nova_compute[259627]: </nova:instance>
Oct 14 05:05:49 np0005486808 nova_compute[259627]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.075 2 DEBUG nova.compute.manager [req-47dc7a75-dcac-4a29-ad18-69b3dc59c6a8 req-c62babbc-8b6f-428b-b18e-4975f5c31b10 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Received event network-vif-deleted-282dfd9e-9e84-450c-a306-8bc55428feb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:05:49 np0005486808 kernel: tap971d99c2-5a (unregistering): left promiscuous mode
Oct 14 05:05:49 np0005486808 NetworkManager[44885]: <info>  [1760432749.1254] device (tap971d99c2-5a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:05:49 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:49Z|00560|binding|INFO|Releasing lport 971d99c2-5a60-4cac-8f99-e819d71e419c from this chassis (sb_readonly=0)
Oct 14 05:05:49 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:49Z|00561|binding|INFO|Setting lport 971d99c2-5a60-4cac-8f99-e819d71e419c down in Southbound
Oct 14 05:05:49 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:49Z|00562|binding|INFO|Removing iface tap971d99c2-5a ovn-installed in OVS
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.149 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:79:ab 10.100.0.3'], port_security=['fa:16:3e:9a:79:ab 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '47257c6e-4d10-4d8e-af5a-b57db20048ea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2d149f-aebf-406a-aed2-5161dd22b079', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8945d892653642ac9c3d894f8bc3619d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c96b2336-ed00-4da6-b121-ce1c9aa6f017', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.209'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e933e805-d9b6-497a-9ba3-5f7153390420, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=971d99c2-5a60-4cac-8f99-e819d71e419c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:05:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.151 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 971d99c2-5a60-4cac-8f99-e819d71e419c in datapath fc2d149f-aebf-406a-aed2-5161dd22b079 unbound from our chassis#033[00m
Oct 14 05:05:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.154 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc2d149f-aebf-406a-aed2-5161dd22b079#033[00m
Oct 14 05:05:49 np0005486808 kernel: tapb624404b-66 (unregistering): left promiscuous mode
Oct 14 05:05:49 np0005486808 NetworkManager[44885]: <info>  [1760432749.1653] device (tapb624404b-66): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.174 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5312c828-f5c5-43de-9618-02ad51c39bfd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:49 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:49Z|00563|binding|INFO|Releasing lport b624404b-6681-4fc8-a870-dc9418e2de0f from this chassis (sb_readonly=0)
Oct 14 05:05:49 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:49Z|00564|binding|INFO|Setting lport b624404b-6681-4fc8-a870-dc9418e2de0f down in Southbound
Oct 14 05:05:49 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:49Z|00565|binding|INFO|Removing iface tapb624404b-66 ovn-installed in OVS
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.183 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:02:94 10.100.0.9'], port_security=['fa:16:3e:bb:02:94 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '47257c6e-4d10-4d8e-af5a-b57db20048ea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2d149f-aebf-406a-aed2-5161dd22b079', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8945d892653642ac9c3d894f8bc3619d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '643bc47b-384e-4818-a06b-8ecfda630b09', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e933e805-d9b6-497a-9ba3-5f7153390420, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=b624404b-6681-4fc8-a870-dc9418e2de0f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:05:49 np0005486808 kernel: tapbe6943f6-df (unregistering): left promiscuous mode
Oct 14 05:05:49 np0005486808 NetworkManager[44885]: <info>  [1760432749.2048] device (tapbe6943f6-df): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.223 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[def7ee9f-9db3-4b05-ac1b-558473daeaca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.227 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[9d90a313-c399-443c-a413-afe743c5c818]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.256 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[27db2119-07d4-4b91-b779-dd1a3a97e210]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:49 np0005486808 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d00000036.scope: Deactivated successfully.
Oct 14 05:05:49 np0005486808 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d00000036.scope: Consumed 14.566s CPU time.
Oct 14 05:05:49 np0005486808 systemd-machined[214636]: Machine qemu-67-instance-00000036 terminated.
Oct 14 05:05:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.283 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c1f61496-b923-42a6-9d23-553ab9df2513]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc2d149f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:e7:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 15, 'rx_bytes': 1084, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 15, 'rx_bytes': 1084, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647447, 'reachable_time': 19128, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320801, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.301 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cff62ce2-7d35-4014-b56c-bf31ca34f74e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647459, 'tstamp': 647459}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320804, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647463, 'tstamp': 647463}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320804, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.303 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc2d149f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:49 np0005486808 NetworkManager[44885]: <info>  [1760432749.3096] manager: (tapb624404b-66): new Tun device (/org/freedesktop/NetworkManager/Devices/243)
Oct 14 05:05:49 np0005486808 NetworkManager[44885]: <info>  [1760432749.3169] manager: (tapbe6943f6-df): new Tun device (/org/freedesktop/NetworkManager/Devices/244)
Oct 14 05:05:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.324 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc2d149f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.324 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:05:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.325 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc2d149f-a0, col_values=(('external_ids', {'iface-id': '156432ee-35b5-40a9-aded-8066933d9972'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.325 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:05:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.327 162547 INFO neutron.agent.ovn.metadata.agent [-] Port b624404b-6681-4fc8-a870-dc9418e2de0f in datapath fc2d149f-aebf-406a-aed2-5161dd22b079 unbound from our chassis#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.331 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fc2d149f-aebf-406a-aed2-5161dd22b079, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:05:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.332 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e30b31a8-c25c-4fad-a877-c7702eee1123]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.333 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079 namespace which is not needed anymore#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.347 2 INFO nova.virt.libvirt.driver [-] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Instance destroyed successfully.#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.347 2 DEBUG nova.objects.instance [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'resources' on Instance uuid 47257c6e-4d10-4d8e-af5a-b57db20048ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.379 2 DEBUG nova.virt.libvirt.vif [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:04:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-766543543',display_name='tempest-AttachInterfacesTestJSON-server-766543543',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-766543543',id=54,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJQrZCrZaptXuotj+57tBn3/OGSvBqxv5B+H31iXFpSI71zcU5A1JwpwDksSj1zemT2KJ76DfcdGv/IVNbfXVnjr4ntmsdxnv83TIXCvdXfabQIpcV4CIqh+n8+7v/6SjQ==',key_name='tempest-keypair-1747841174',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:04:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-fxki54rp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=47257c6e-4d10-4d8e-af5a-b57db20048ea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "971d99c2-5a60-4cac-8f99-e819d71e419c", "address": "fa:16:3e:9a:79:ab", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap971d99c2-5a", "ovs_interfaceid": "971d99c2-5a60-4cac-8f99-e819d71e419c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.379 2 DEBUG nova.network.os_vif_util [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "971d99c2-5a60-4cac-8f99-e819d71e419c", "address": "fa:16:3e:9a:79:ab", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap971d99c2-5a", "ovs_interfaceid": "971d99c2-5a60-4cac-8f99-e819d71e419c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.380 2 DEBUG nova.network.os_vif_util [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9a:79:ab,bridge_name='br-int',has_traffic_filtering=True,id=971d99c2-5a60-4cac-8f99-e819d71e419c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap971d99c2-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.381 2 DEBUG os_vif [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9a:79:ab,bridge_name='br-int',has_traffic_filtering=True,id=971d99c2-5a60-4cac-8f99-e819d71e419c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap971d99c2-5a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.382 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap971d99c2-5a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.393 2 INFO os_vif [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9a:79:ab,bridge_name='br-int',has_traffic_filtering=True,id=971d99c2-5a60-4cac-8f99-e819d71e419c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap971d99c2-5a')#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.394 2 DEBUG nova.virt.libvirt.vif [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:04:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-766543543',display_name='tempest-AttachInterfacesTestJSON-server-766543543',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-766543543',id=54,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJQrZCrZaptXuotj+57tBn3/OGSvBqxv5B+H31iXFpSI71zcU5A1JwpwDksSj1zemT2KJ76DfcdGv/IVNbfXVnjr4ntmsdxnv83TIXCvdXfabQIpcV4CIqh+n8+7v/6SjQ==',key_name='tempest-keypair-1747841174',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:04:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-fxki54rp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=47257c6e-4d10-4d8e-af5a-b57db20048ea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b624404b-6681-4fc8-a870-dc9418e2de0f", "address": "fa:16:3e:bb:02:94", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb624404b-66", "ovs_interfaceid": "b624404b-6681-4fc8-a870-dc9418e2de0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.394 2 DEBUG nova.network.os_vif_util [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "b624404b-6681-4fc8-a870-dc9418e2de0f", "address": "fa:16:3e:bb:02:94", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb624404b-66", "ovs_interfaceid": "b624404b-6681-4fc8-a870-dc9418e2de0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.395 2 DEBUG nova.network.os_vif_util [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bb:02:94,bridge_name='br-int',has_traffic_filtering=True,id=b624404b-6681-4fc8-a870-dc9418e2de0f,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb624404b-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.395 2 DEBUG os_vif [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bb:02:94,bridge_name='br-int',has_traffic_filtering=True,id=b624404b-6681-4fc8-a870-dc9418e2de0f,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb624404b-66') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.397 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb624404b-66, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.403 2 INFO os_vif [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bb:02:94,bridge_name='br-int',has_traffic_filtering=True,id=b624404b-6681-4fc8-a870-dc9418e2de0f,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb624404b-66')#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.404 2 DEBUG nova.virt.libvirt.vif [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:04:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-766543543',display_name='tempest-AttachInterfacesTestJSON-server-766543543',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-766543543',id=54,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJQrZCrZaptXuotj+57tBn3/OGSvBqxv5B+H31iXFpSI71zcU5A1JwpwDksSj1zemT2KJ76DfcdGv/IVNbfXVnjr4ntmsdxnv83TIXCvdXfabQIpcV4CIqh+n8+7v/6SjQ==',key_name='tempest-keypair-1747841174',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:04:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-fxki54rp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:04:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=47257c6e-4d10-4d8e-af5a-b57db20048ea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "be6943f6-df97-4a84-854b-858cc7d3ea1a", "address": "fa:16:3e:a4:ae:0e", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe6943f6-df", "ovs_interfaceid": "be6943f6-df97-4a84-854b-858cc7d3ea1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.404 2 DEBUG nova.network.os_vif_util [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "be6943f6-df97-4a84-854b-858cc7d3ea1a", "address": "fa:16:3e:a4:ae:0e", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe6943f6-df", "ovs_interfaceid": "be6943f6-df97-4a84-854b-858cc7d3ea1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.405 2 DEBUG nova.network.os_vif_util [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:ae:0e,bridge_name='br-int',has_traffic_filtering=True,id=be6943f6-df97-4a84-854b-858cc7d3ea1a,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe6943f6-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.406 2 DEBUG os_vif [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:ae:0e,bridge_name='br-int',has_traffic_filtering=True,id=be6943f6-df97-4a84-854b-858cc7d3ea1a,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe6943f6-df') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.408 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe6943f6-df, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.414 2 INFO os_vif [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:ae:0e,bridge_name='br-int',has_traffic_filtering=True,id=be6943f6-df97-4a84-854b-858cc7d3ea1a,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe6943f6-df')#033[00m
Oct 14 05:05:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:05:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:05:49 np0005486808 neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079[319321]: [NOTICE]   (319328) : haproxy version is 2.8.14-c23fe91
Oct 14 05:05:49 np0005486808 neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079[319321]: [NOTICE]   (319328) : path to executable is /usr/sbin/haproxy
Oct 14 05:05:49 np0005486808 neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079[319321]: [WARNING]  (319328) : Exiting Master process...
Oct 14 05:05:49 np0005486808 neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079[319321]: [ALERT]    (319328) : Current worker (319330) exited with code 143 (Terminated)
Oct 14 05:05:49 np0005486808 neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079[319321]: [WARNING]  (319328) : All workers exited. Exiting... (0)
Oct 14 05:05:49 np0005486808 systemd[1]: libpod-7a551c3cce38d761683a15909fa8c9299ff9f816597b206d46d50efd286d6eb3.scope: Deactivated successfully.
Oct 14 05:05:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:05:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:05:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:05:49 np0005486808 podman[320876]: 2025-10-14 09:05:49.510521263 +0000 UTC m=+0.050273569 container died 7a551c3cce38d761683a15909fa8c9299ff9f816597b206d46d50efd286d6eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 05:05:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:05:49 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev e626c556-85d5-404a-8e2e-b7dc993cefd4 does not exist
Oct 14 05:05:49 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev e1add75d-7bb7-41bd-ad9d-8219cc6836fd does not exist
Oct 14 05:05:49 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 8a2c7db3-d1fc-4234-b570-814908acebbd does not exist
Oct 14 05:05:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:05:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:05:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:05:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:05:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:05:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:05:49 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7a551c3cce38d761683a15909fa8c9299ff9f816597b206d46d50efd286d6eb3-userdata-shm.mount: Deactivated successfully.
Oct 14 05:05:49 np0005486808 systemd[1]: var-lib-containers-storage-overlay-85f03912ee5e1d4e48b9e053b6890e84bd070633917fc91412b280f99baf2363-merged.mount: Deactivated successfully.
Oct 14 05:05:49 np0005486808 podman[320876]: 2025-10-14 09:05:49.556581848 +0000 UTC m=+0.096334144 container cleanup 7a551c3cce38d761683a15909fa8c9299ff9f816597b206d46d50efd286d6eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 05:05:49 np0005486808 systemd[1]: libpod-conmon-7a551c3cce38d761683a15909fa8c9299ff9f816597b206d46d50efd286d6eb3.scope: Deactivated successfully.
Oct 14 05:05:49 np0005486808 podman[320929]: 2025-10-14 09:05:49.635539312 +0000 UTC m=+0.058434690 container remove 7a551c3cce38d761683a15909fa8c9299ff9f816597b206d46d50efd286d6eb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:05:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.643 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5d32840c-1212-4a1d-b23b-72ca7a8555ad]: (4, ('Tue Oct 14 09:05:49 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079 (7a551c3cce38d761683a15909fa8c9299ff9f816597b206d46d50efd286d6eb3)\n7a551c3cce38d761683a15909fa8c9299ff9f816597b206d46d50efd286d6eb3\nTue Oct 14 09:05:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079 (7a551c3cce38d761683a15909fa8c9299ff9f816597b206d46d50efd286d6eb3)\n7a551c3cce38d761683a15909fa8c9299ff9f816597b206d46d50efd286d6eb3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.646 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f62f71b6-2ce9-4b47-b720-93894ceb8b4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.647 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc2d149f-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:49 np0005486808 kernel: tapfc2d149f-a0: left promiscuous mode
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.674 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[60d73d7d-6ba3-4237-a6e4-1fe3affa48f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.694 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[df039852-63de-4763-9738-f31916cc927e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.696 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[851ff194-a27f-4898-a361-b9b22ab38ed6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.712 2 DEBUG nova.compute.manager [req-912e0ebc-4d3a-4b7c-83f0-a3abe84a3ca3 req-04a5e7cb-76f8-47d2-b9d0-413724f205af 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-vif-unplugged-971d99c2-5a60-4cac-8f99-e819d71e419c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.712 2 DEBUG oslo_concurrency.lockutils [req-912e0ebc-4d3a-4b7c-83f0-a3abe84a3ca3 req-04a5e7cb-76f8-47d2-b9d0-413724f205af 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.713 2 DEBUG oslo_concurrency.lockutils [req-912e0ebc-4d3a-4b7c-83f0-a3abe84a3ca3 req-04a5e7cb-76f8-47d2-b9d0-413724f205af 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.713 2 DEBUG oslo_concurrency.lockutils [req-912e0ebc-4d3a-4b7c-83f0-a3abe84a3ca3 req-04a5e7cb-76f8-47d2-b9d0-413724f205af 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.713 2 DEBUG nova.compute.manager [req-912e0ebc-4d3a-4b7c-83f0-a3abe84a3ca3 req-04a5e7cb-76f8-47d2-b9d0-413724f205af 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] No waiting events found dispatching network-vif-unplugged-971d99c2-5a60-4cac-8f99-e819d71e419c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.713 2 DEBUG nova.compute.manager [req-912e0ebc-4d3a-4b7c-83f0-a3abe84a3ca3 req-04a5e7cb-76f8-47d2-b9d0-413724f205af 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-vif-unplugged-971d99c2-5a60-4cac-8f99-e819d71e419c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.714 2 DEBUG nova.compute.manager [req-912e0ebc-4d3a-4b7c-83f0-a3abe84a3ca3 req-04a5e7cb-76f8-47d2-b9d0-413724f205af 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-vif-plugged-971d99c2-5a60-4cac-8f99-e819d71e419c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.714 2 DEBUG oslo_concurrency.lockutils [req-912e0ebc-4d3a-4b7c-83f0-a3abe84a3ca3 req-04a5e7cb-76f8-47d2-b9d0-413724f205af 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.714 2 DEBUG oslo_concurrency.lockutils [req-912e0ebc-4d3a-4b7c-83f0-a3abe84a3ca3 req-04a5e7cb-76f8-47d2-b9d0-413724f205af 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:05:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.714 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f961b8d0-b59d-4393-bc1e-3ab92fa24f15]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647439, 'reachable_time': 28959, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320996, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.715 2 DEBUG oslo_concurrency.lockutils [req-912e0ebc-4d3a-4b7c-83f0-a3abe84a3ca3 req-04a5e7cb-76f8-47d2-b9d0-413724f205af 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.715 2 DEBUG nova.compute.manager [req-912e0ebc-4d3a-4b7c-83f0-a3abe84a3ca3 req-04a5e7cb-76f8-47d2-b9d0-413724f205af 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] No waiting events found dispatching network-vif-plugged-971d99c2-5a60-4cac-8f99-e819d71e419c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.715 2 WARNING nova.compute.manager [req-912e0ebc-4d3a-4b7c-83f0-a3abe84a3ca3 req-04a5e7cb-76f8-47d2-b9d0-413724f205af 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received unexpected event network-vif-plugged-971d99c2-5a60-4cac-8f99-e819d71e419c for instance with vm_state active and task_state deleting.#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.715 2 DEBUG nova.compute.manager [req-912e0ebc-4d3a-4b7c-83f0-a3abe84a3ca3 req-04a5e7cb-76f8-47d2-b9d0-413724f205af 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-vif-unplugged-b624404b-6681-4fc8-a870-dc9418e2de0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.716 2 DEBUG oslo_concurrency.lockutils [req-912e0ebc-4d3a-4b7c-83f0-a3abe84a3ca3 req-04a5e7cb-76f8-47d2-b9d0-413724f205af 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.716 2 DEBUG oslo_concurrency.lockutils [req-912e0ebc-4d3a-4b7c-83f0-a3abe84a3ca3 req-04a5e7cb-76f8-47d2-b9d0-413724f205af 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.716 2 DEBUG oslo_concurrency.lockutils [req-912e0ebc-4d3a-4b7c-83f0-a3abe84a3ca3 req-04a5e7cb-76f8-47d2-b9d0-413724f205af 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:05:49 np0005486808 systemd[1]: run-netns-ovnmeta\x2dfc2d149f\x2daebf\x2d406a\x2daed2\x2d5161dd22b079.mount: Deactivated successfully.
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.717 2 DEBUG nova.compute.manager [req-912e0ebc-4d3a-4b7c-83f0-a3abe84a3ca3 req-04a5e7cb-76f8-47d2-b9d0-413724f205af 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] No waiting events found dispatching network-vif-unplugged-b624404b-6681-4fc8-a870-dc9418e2de0f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:05:49 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1507: 305 pgs: 305 active+clean; 326 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 7.9 KiB/s rd, 37 KiB/s wr, 12 op/s
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.717 2 DEBUG nova.compute.manager [req-912e0ebc-4d3a-4b7c-83f0-a3abe84a3ca3 req-04a5e7cb-76f8-47d2-b9d0-413724f205af 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-vif-unplugged-b624404b-6681-4fc8-a870-dc9418e2de0f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:05:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.718 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:05:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:49.718 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[13a9c168-3405-4de1-b6ce-05b26f392044]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.852 2 INFO nova.virt.libvirt.driver [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Deleting instance files /var/lib/nova/instances/47257c6e-4d10-4d8e-af5a-b57db20048ea_del#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.853 2 INFO nova.virt.libvirt.driver [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Deletion of /var/lib/nova/instances/47257c6e-4d10-4d8e-af5a-b57db20048ea_del complete#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.914 2 INFO nova.compute.manager [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Took 0.86 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.915 2 DEBUG oslo.service.loopingcall [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.916 2 DEBUG nova.compute.manager [-] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:05:49 np0005486808 nova_compute[259627]: 2025-10-14 09:05:49.916 2 DEBUG nova.network.neutron [-] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:05:49 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:05:49 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:05:49 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:05:50 np0005486808 podman[321064]: 2025-10-14 09:05:50.107228139 +0000 UTC m=+0.037048634 container create b5d8a2a28e097d584ae7137389fdcdb8a677adf6424b0fb02d999f5c437b8856 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_nobel, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Oct 14 05:05:50 np0005486808 systemd[1]: Started libpod-conmon-b5d8a2a28e097d584ae7137389fdcdb8a677adf6424b0fb02d999f5c437b8856.scope.
Oct 14 05:05:50 np0005486808 podman[321064]: 2025-10-14 09:05:50.091987893 +0000 UTC m=+0.021808408 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:05:50 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:05:50 np0005486808 podman[321064]: 2025-10-14 09:05:50.207089938 +0000 UTC m=+0.136910443 container init b5d8a2a28e097d584ae7137389fdcdb8a677adf6424b0fb02d999f5c437b8856 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_nobel, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:05:50 np0005486808 podman[321064]: 2025-10-14 09:05:50.217089884 +0000 UTC m=+0.146910409 container start b5d8a2a28e097d584ae7137389fdcdb8a677adf6424b0fb02d999f5c437b8856 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_nobel, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:05:50 np0005486808 podman[321064]: 2025-10-14 09:05:50.220997021 +0000 UTC m=+0.150817536 container attach b5d8a2a28e097d584ae7137389fdcdb8a677adf6424b0fb02d999f5c437b8856 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:05:50 np0005486808 busy_nobel[321080]: 167 167
Oct 14 05:05:50 np0005486808 systemd[1]: libpod-b5d8a2a28e097d584ae7137389fdcdb8a677adf6424b0fb02d999f5c437b8856.scope: Deactivated successfully.
Oct 14 05:05:50 np0005486808 podman[321064]: 2025-10-14 09:05:50.223167154 +0000 UTC m=+0.152987649 container died b5d8a2a28e097d584ae7137389fdcdb8a677adf6424b0fb02d999f5c437b8856 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_nobel, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Oct 14 05:05:50 np0005486808 systemd[1]: var-lib-containers-storage-overlay-a2a91237c91560e6ddf1a565d87879be4336ff570fa77eab1610301310c752ab-merged.mount: Deactivated successfully.
Oct 14 05:05:50 np0005486808 podman[321064]: 2025-10-14 09:05:50.262658747 +0000 UTC m=+0.192479252 container remove b5d8a2a28e097d584ae7137389fdcdb8a677adf6424b0fb02d999f5c437b8856 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_nobel, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:05:50 np0005486808 systemd[1]: libpod-conmon-b5d8a2a28e097d584ae7137389fdcdb8a677adf6424b0fb02d999f5c437b8856.scope: Deactivated successfully.
Oct 14 05:05:50 np0005486808 nova_compute[259627]: 2025-10-14 09:05:50.317 2 DEBUG neutronclient.v2_0.client [-] Error message: {"NeutronError": {"type": "PortNotFound", "message": "Port be6943f6-df97-4a84-854b-858cc7d3ea1a could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Oct 14 05:05:50 np0005486808 nova_compute[259627]: 2025-10-14 09:05:50.318 2 DEBUG nova.network.neutron [-] Unable to show port be6943f6-df97-4a84-854b-858cc7d3ea1a as it no longer exists. _unbind_ports /usr/lib/python3.9/site-packages/nova/network/neutron.py:666#033[00m
Oct 14 05:05:50 np0005486808 podman[321103]: 2025-10-14 09:05:50.512329445 +0000 UTC m=+0.074939566 container create 4491b97dd11836285fda951cc3a08b68571dbdb0967fc1afd2890a29b7137f6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_pascal, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:05:50 np0005486808 systemd[1]: Started libpod-conmon-4491b97dd11836285fda951cc3a08b68571dbdb0967fc1afd2890a29b7137f6c.scope.
Oct 14 05:05:50 np0005486808 podman[321103]: 2025-10-14 09:05:50.487539565 +0000 UTC m=+0.050149686 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:05:50 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:05:50 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af863575d0e18abe218bca6d0bd5ecf07c1d32ffd7a883805ec7f69f3bb3e71b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:05:50 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af863575d0e18abe218bca6d0bd5ecf07c1d32ffd7a883805ec7f69f3bb3e71b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:05:50 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af863575d0e18abe218bca6d0bd5ecf07c1d32ffd7a883805ec7f69f3bb3e71b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:05:50 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af863575d0e18abe218bca6d0bd5ecf07c1d32ffd7a883805ec7f69f3bb3e71b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:05:50 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af863575d0e18abe218bca6d0bd5ecf07c1d32ffd7a883805ec7f69f3bb3e71b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:05:50 np0005486808 podman[321103]: 2025-10-14 09:05:50.631421978 +0000 UTC m=+0.194032109 container init 4491b97dd11836285fda951cc3a08b68571dbdb0967fc1afd2890a29b7137f6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_pascal, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:05:50 np0005486808 podman[321103]: 2025-10-14 09:05:50.638615136 +0000 UTC m=+0.201225217 container start 4491b97dd11836285fda951cc3a08b68571dbdb0967fc1afd2890a29b7137f6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 14 05:05:50 np0005486808 podman[321103]: 2025-10-14 09:05:50.642698296 +0000 UTC m=+0.205308407 container attach 4491b97dd11836285fda951cc3a08b68571dbdb0967fc1afd2890a29b7137f6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:05:50 np0005486808 podman[321119]: 2025-10-14 09:05:50.643752042 +0000 UTC m=+0.085397754 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 14 05:05:50 np0005486808 podman[321117]: 2025-10-14 09:05:50.687450368 +0000 UTC m=+0.131841978 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 05:05:51 np0005486808 nova_compute[259627]: 2025-10-14 09:05:51.079 2 DEBUG nova.network.neutron [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Updating instance_info_cache with network_info: [{"id": "971d99c2-5a60-4cac-8f99-e819d71e419c", "address": "fa:16:3e:9a:79:ab", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap971d99c2-5a", "ovs_interfaceid": "971d99c2-5a60-4cac-8f99-e819d71e419c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b624404b-6681-4fc8-a870-dc9418e2de0f", "address": "fa:16:3e:bb:02:94", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb624404b-66", "ovs_interfaceid": "b624404b-6681-4fc8-a870-dc9418e2de0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "be6943f6-df97-4a84-854b-858cc7d3ea1a", "address": "fa:16:3e:a4:ae:0e", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe6943f6-df", "ovs_interfaceid": "be6943f6-df97-4a84-854b-858cc7d3ea1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:05:51 np0005486808 nova_compute[259627]: 2025-10-14 09:05:51.103 2 DEBUG oslo_concurrency.lockutils [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Releasing lock "refresh_cache-47257c6e-4d10-4d8e-af5a-b57db20048ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:05:51 np0005486808 nova_compute[259627]: 2025-10-14 09:05:51.137 2 DEBUG oslo_concurrency.lockutils [None req-6cbe471d-9e15-4b51-bd04-f7d25da1ec49 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "interface-47257c6e-4d10-4d8e-af5a-b57db20048ea-e3bc3ac3-6147-40d0-a19c-df111dcf23a5" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:05:51 np0005486808 nova_compute[259627]: 2025-10-14 09:05:51.191 2 DEBUG nova.compute.manager [req-9add1c9a-008a-40f9-a517-2700431f6bd6 req-933a9304-8036-4206-9777-20625cd897d1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-vif-deleted-be6943f6-df97-4a84-854b-858cc7d3ea1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:05:51 np0005486808 nova_compute[259627]: 2025-10-14 09:05:51.192 2 INFO nova.compute.manager [req-9add1c9a-008a-40f9-a517-2700431f6bd6 req-933a9304-8036-4206-9777-20625cd897d1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Neutron deleted interface be6943f6-df97-4a84-854b-858cc7d3ea1a; detaching it from the instance and deleting it from the info cache#033[00m
Oct 14 05:05:51 np0005486808 nova_compute[259627]: 2025-10-14 09:05:51.193 2 DEBUG nova.network.neutron [req-9add1c9a-008a-40f9-a517-2700431f6bd6 req-933a9304-8036-4206-9777-20625cd897d1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Updating instance_info_cache with network_info: [{"id": "971d99c2-5a60-4cac-8f99-e819d71e419c", "address": "fa:16:3e:9a:79:ab", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap971d99c2-5a", "ovs_interfaceid": "971d99c2-5a60-4cac-8f99-e819d71e419c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b624404b-6681-4fc8-a870-dc9418e2de0f", "address": "fa:16:3e:bb:02:94", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb624404b-66", "ovs_interfaceid": "b624404b-6681-4fc8-a870-dc9418e2de0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:05:51 np0005486808 nova_compute[259627]: 2025-10-14 09:05:51.219 2 DEBUG nova.compute.manager [req-9add1c9a-008a-40f9-a517-2700431f6bd6 req-933a9304-8036-4206-9777-20625cd897d1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Detach interface failed, port_id=be6943f6-df97-4a84-854b-858cc7d3ea1a, reason: Instance 47257c6e-4d10-4d8e-af5a-b57db20048ea could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct 14 05:05:51 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1508: 305 pgs: 305 active+clean; 167 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 40 KiB/s wr, 131 op/s
Oct 14 05:05:51 np0005486808 distracted_pascal[321132]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:05:51 np0005486808 distracted_pascal[321132]: --> relative data size: 1.0
Oct 14 05:05:51 np0005486808 distracted_pascal[321132]: --> All data devices are unavailable
Oct 14 05:05:51 np0005486808 systemd[1]: libpod-4491b97dd11836285fda951cc3a08b68571dbdb0967fc1afd2890a29b7137f6c.scope: Deactivated successfully.
Oct 14 05:05:51 np0005486808 systemd[1]: libpod-4491b97dd11836285fda951cc3a08b68571dbdb0967fc1afd2890a29b7137f6c.scope: Consumed 1.083s CPU time.
Oct 14 05:05:51 np0005486808 podman[321103]: 2025-10-14 09:05:51.793811714 +0000 UTC m=+1.356421825 container died 4491b97dd11836285fda951cc3a08b68571dbdb0967fc1afd2890a29b7137f6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 05:05:51 np0005486808 systemd[1]: var-lib-containers-storage-overlay-af863575d0e18abe218bca6d0bd5ecf07c1d32ffd7a883805ec7f69f3bb3e71b-merged.mount: Deactivated successfully.
Oct 14 05:05:51 np0005486808 nova_compute[259627]: 2025-10-14 09:05:51.832 2 DEBUG nova.compute.manager [req-7a40f110-454c-4ef0-a4e0-ec870662757e req-871bcf16-b738-462a-8184-a59a90117f21 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-vif-plugged-b624404b-6681-4fc8-a870-dc9418e2de0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:05:51 np0005486808 nova_compute[259627]: 2025-10-14 09:05:51.833 2 DEBUG oslo_concurrency.lockutils [req-7a40f110-454c-4ef0-a4e0-ec870662757e req-871bcf16-b738-462a-8184-a59a90117f21 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:05:51 np0005486808 nova_compute[259627]: 2025-10-14 09:05:51.833 2 DEBUG oslo_concurrency.lockutils [req-7a40f110-454c-4ef0-a4e0-ec870662757e req-871bcf16-b738-462a-8184-a59a90117f21 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:05:51 np0005486808 nova_compute[259627]: 2025-10-14 09:05:51.833 2 DEBUG oslo_concurrency.lockutils [req-7a40f110-454c-4ef0-a4e0-ec870662757e req-871bcf16-b738-462a-8184-a59a90117f21 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:05:51 np0005486808 nova_compute[259627]: 2025-10-14 09:05:51.835 2 DEBUG nova.compute.manager [req-7a40f110-454c-4ef0-a4e0-ec870662757e req-871bcf16-b738-462a-8184-a59a90117f21 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] No waiting events found dispatching network-vif-plugged-b624404b-6681-4fc8-a870-dc9418e2de0f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:05:51 np0005486808 nova_compute[259627]: 2025-10-14 09:05:51.836 2 WARNING nova.compute.manager [req-7a40f110-454c-4ef0-a4e0-ec870662757e req-871bcf16-b738-462a-8184-a59a90117f21 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received unexpected event network-vif-plugged-b624404b-6681-4fc8-a870-dc9418e2de0f for instance with vm_state active and task_state deleting.#033[00m
Oct 14 05:05:51 np0005486808 podman[321103]: 2025-10-14 09:05:51.857629316 +0000 UTC m=+1.420239397 container remove 4491b97dd11836285fda951cc3a08b68571dbdb0967fc1afd2890a29b7137f6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 14 05:05:51 np0005486808 systemd[1]: libpod-conmon-4491b97dd11836285fda951cc3a08b68571dbdb0967fc1afd2890a29b7137f6c.scope: Deactivated successfully.
Oct 14 05:05:52 np0005486808 nova_compute[259627]: 2025-10-14 09:05:52.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:52 np0005486808 nova_compute[259627]: 2025-10-14 09:05:52.401 2 DEBUG nova.network.neutron [-] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:05:52 np0005486808 nova_compute[259627]: 2025-10-14 09:05:52.428 2 INFO nova.compute.manager [-] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Took 2.51 seconds to deallocate network for instance.#033[00m
Oct 14 05:05:52 np0005486808 nova_compute[259627]: 2025-10-14 09:05:52.498 2 DEBUG oslo_concurrency.lockutils [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:05:52 np0005486808 nova_compute[259627]: 2025-10-14 09:05:52.498 2 DEBUG oslo_concurrency.lockutils [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:05:52 np0005486808 podman[321345]: 2025-10-14 09:05:52.607404271 +0000 UTC m=+0.045778678 container create 17474af3a58e7e73a91998799d6e1816607d5ab6351abd7c43aff4e6f533ccb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_satoshi, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:05:52 np0005486808 nova_compute[259627]: 2025-10-14 09:05:52.607 2 DEBUG oslo_concurrency.processutils [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:05:52 np0005486808 systemd[1]: Started libpod-conmon-17474af3a58e7e73a91998799d6e1816607d5ab6351abd7c43aff4e6f533ccb3.scope.
Oct 14 05:05:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:05:52 np0005486808 podman[321345]: 2025-10-14 09:05:52.583766509 +0000 UTC m=+0.022140966 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:05:52 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:05:52 np0005486808 podman[321345]: 2025-10-14 09:05:52.718507278 +0000 UTC m=+0.156881705 container init 17474af3a58e7e73a91998799d6e1816607d5ab6351abd7c43aff4e6f533ccb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:05:52 np0005486808 podman[321345]: 2025-10-14 09:05:52.728968525 +0000 UTC m=+0.167342932 container start 17474af3a58e7e73a91998799d6e1816607d5ab6351abd7c43aff4e6f533ccb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 05:05:52 np0005486808 podman[321345]: 2025-10-14 09:05:52.732166264 +0000 UTC m=+0.170540671 container attach 17474af3a58e7e73a91998799d6e1816607d5ab6351abd7c43aff4e6f533ccb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:05:52 np0005486808 lucid_satoshi[321363]: 167 167
Oct 14 05:05:52 np0005486808 systemd[1]: libpod-17474af3a58e7e73a91998799d6e1816607d5ab6351abd7c43aff4e6f533ccb3.scope: Deactivated successfully.
Oct 14 05:05:52 np0005486808 podman[321345]: 2025-10-14 09:05:52.736031359 +0000 UTC m=+0.174405776 container died 17474af3a58e7e73a91998799d6e1816607d5ab6351abd7c43aff4e6f533ccb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_satoshi, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:05:52 np0005486808 systemd[1]: var-lib-containers-storage-overlay-31fcad3c161ead70133cb8861b37b5d1ca947bb44f35f1a757bb6fe91677b8ff-merged.mount: Deactivated successfully.
Oct 14 05:05:52 np0005486808 podman[321345]: 2025-10-14 09:05:52.773399859 +0000 UTC m=+0.211774276 container remove 17474af3a58e7e73a91998799d6e1816607d5ab6351abd7c43aff4e6f533ccb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_satoshi, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 14 05:05:52 np0005486808 systemd[1]: libpod-conmon-17474af3a58e7e73a91998799d6e1816607d5ab6351abd7c43aff4e6f533ccb3.scope: Deactivated successfully.
Oct 14 05:05:53 np0005486808 podman[321404]: 2025-10-14 09:05:53.004562112 +0000 UTC m=+0.054615916 container create 30cd1b98728fc4e3a46085d2eb865b5ccee743c0397f69cd3ae7d53db6aa491a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 14 05:05:53 np0005486808 systemd[1]: Started libpod-conmon-30cd1b98728fc4e3a46085d2eb865b5ccee743c0397f69cd3ae7d53db6aa491a.scope.
Oct 14 05:05:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:05:53 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4273249229' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:05:53 np0005486808 podman[321404]: 2025-10-14 09:05:52.977969438 +0000 UTC m=+0.028023332 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:05:53 np0005486808 nova_compute[259627]: 2025-10-14 09:05:53.080 2 DEBUG oslo_concurrency.processutils [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:05:53 np0005486808 nova_compute[259627]: 2025-10-14 09:05:53.091 2 DEBUG nova.compute.provider_tree [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:05:53 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:05:53 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4482da78a91e7f8021ed624beef42956cfcab71a067d37c282899f7df8d6975f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:05:53 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4482da78a91e7f8021ed624beef42956cfcab71a067d37c282899f7df8d6975f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:05:53 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4482da78a91e7f8021ed624beef42956cfcab71a067d37c282899f7df8d6975f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:05:53 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4482da78a91e7f8021ed624beef42956cfcab71a067d37c282899f7df8d6975f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:05:53 np0005486808 nova_compute[259627]: 2025-10-14 09:05:53.113 2 DEBUG nova.scheduler.client.report [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:05:53 np0005486808 podman[321404]: 2025-10-14 09:05:53.119971325 +0000 UTC m=+0.170025149 container init 30cd1b98728fc4e3a46085d2eb865b5ccee743c0397f69cd3ae7d53db6aa491a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:05:53 np0005486808 podman[321404]: 2025-10-14 09:05:53.132544534 +0000 UTC m=+0.182598368 container start 30cd1b98728fc4e3a46085d2eb865b5ccee743c0397f69cd3ae7d53db6aa491a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_roentgen, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:05:53 np0005486808 podman[321404]: 2025-10-14 09:05:53.137052895 +0000 UTC m=+0.187106719 container attach 30cd1b98728fc4e3a46085d2eb865b5ccee743c0397f69cd3ae7d53db6aa491a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 14 05:05:53 np0005486808 nova_compute[259627]: 2025-10-14 09:05:53.158 2 DEBUG oslo_concurrency.lockutils [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:05:53 np0005486808 nova_compute[259627]: 2025-10-14 09:05:53.186 2 INFO nova.scheduler.client.report [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Deleted allocations for instance 47257c6e-4d10-4d8e-af5a-b57db20048ea#033[00m
Oct 14 05:05:53 np0005486808 nova_compute[259627]: 2025-10-14 09:05:53.291 2 DEBUG oslo_concurrency.lockutils [None req-1d40ca53-ef45-4e96-826e-531fd35404bd 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "47257c6e-4d10-4d8e-af5a-b57db20048ea" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.245s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:05:53 np0005486808 nova_compute[259627]: 2025-10-14 09:05:53.336 2 DEBUG nova.compute.manager [req-4c24f124-f3bd-411c-984c-573a990e904e req-7d0ba147-42a2-4942-8668-c5041f55e739 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-vif-deleted-b624404b-6681-4fc8-a870-dc9418e2de0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:05:53 np0005486808 nova_compute[259627]: 2025-10-14 09:05:53.337 2 DEBUG nova.compute.manager [req-4c24f124-f3bd-411c-984c-573a990e904e req-7d0ba147-42a2-4942-8668-c5041f55e739 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Received event network-vif-deleted-971d99c2-5a60-4cac-8f99-e819d71e419c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:05:53 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1509: 305 pgs: 305 active+clean; 167 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 9.3 KiB/s wr, 120 op/s
Oct 14 05:05:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:53.774 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]: {
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:    "0": [
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:        {
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:            "devices": [
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:                "/dev/loop3"
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:            ],
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:            "lv_name": "ceph_lv0",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:            "lv_size": "21470642176",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:            "name": "ceph_lv0",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:            "tags": {
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:                "ceph.cluster_name": "ceph",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:                "ceph.crush_device_class": "",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:                "ceph.encrypted": "0",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:                "ceph.osd_id": "0",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:                "ceph.type": "block",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:                "ceph.vdo": "0"
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:            },
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:            "type": "block",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:            "vg_name": "ceph_vg0"
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:        }
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:    ],
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:    "1": [
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:        {
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:            "devices": [
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:                "/dev/loop4"
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:            ],
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:            "lv_name": "ceph_lv1",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:            "lv_size": "21470642176",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:            "name": "ceph_lv1",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:            "tags": {
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:                "ceph.cluster_name": "ceph",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:                "ceph.crush_device_class": "",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:                "ceph.encrypted": "0",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:                "ceph.osd_id": "1",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:                "ceph.type": "block",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:                "ceph.vdo": "0"
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:            },
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:            "type": "block",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:            "vg_name": "ceph_vg1"
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:        }
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:    ],
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:    "2": [
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:        {
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:            "devices": [
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:                "/dev/loop5"
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:            ],
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:            "lv_name": "ceph_lv2",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:            "lv_size": "21470642176",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:            "name": "ceph_lv2",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:            "tags": {
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:                "ceph.cluster_name": "ceph",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:                "ceph.crush_device_class": "",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:                "ceph.encrypted": "0",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:                "ceph.osd_id": "2",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:                "ceph.type": "block",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:                "ceph.vdo": "0"
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:            },
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:            "type": "block",
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:            "vg_name": "ceph_vg2"
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:        }
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]:    ]
Oct 14 05:05:53 np0005486808 naughty_roentgen[321419]: }
Oct 14 05:05:53 np0005486808 systemd[1]: libpod-30cd1b98728fc4e3a46085d2eb865b5ccee743c0397f69cd3ae7d53db6aa491a.scope: Deactivated successfully.
Oct 14 05:05:53 np0005486808 podman[321404]: 2025-10-14 09:05:53.911681173 +0000 UTC m=+0.961735027 container died 30cd1b98728fc4e3a46085d2eb865b5ccee743c0397f69cd3ae7d53db6aa491a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_roentgen, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 14 05:05:53 np0005486808 systemd[1]: var-lib-containers-storage-overlay-4482da78a91e7f8021ed624beef42956cfcab71a067d37c282899f7df8d6975f-merged.mount: Deactivated successfully.
Oct 14 05:05:53 np0005486808 podman[321404]: 2025-10-14 09:05:53.996361868 +0000 UTC m=+1.046415712 container remove 30cd1b98728fc4e3a46085d2eb865b5ccee743c0397f69cd3ae7d53db6aa491a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:05:54 np0005486808 systemd[1]: libpod-conmon-30cd1b98728fc4e3a46085d2eb865b5ccee743c0397f69cd3ae7d53db6aa491a.scope: Deactivated successfully.
Oct 14 05:05:54 np0005486808 nova_compute[259627]: 2025-10-14 09:05:54.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:54 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:54Z|00566|binding|INFO|Releasing lport a7f44223-dee5-4a2f-b975-1f04f03b78f7 from this chassis (sb_readonly=0)
Oct 14 05:05:54 np0005486808 nova_compute[259627]: 2025-10-14 09:05:54.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:54 np0005486808 podman[321580]: 2025-10-14 09:05:54.776222494 +0000 UTC m=+0.047069230 container create b3ae6da816a8a12ea0a58ade8791ed4e5dee68f853b3499dacdd7e10acb46232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wiles, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:05:54 np0005486808 systemd[1]: Started libpod-conmon-b3ae6da816a8a12ea0a58ade8791ed4e5dee68f853b3499dacdd7e10acb46232.scope.
Oct 14 05:05:54 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:05:54 np0005486808 podman[321580]: 2025-10-14 09:05:54.757893013 +0000 UTC m=+0.028739799 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:05:54 np0005486808 podman[321580]: 2025-10-14 09:05:54.854385759 +0000 UTC m=+0.125232535 container init b3ae6da816a8a12ea0a58ade8791ed4e5dee68f853b3499dacdd7e10acb46232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wiles, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 14 05:05:54 np0005486808 podman[321580]: 2025-10-14 09:05:54.865420341 +0000 UTC m=+0.136267087 container start b3ae6da816a8a12ea0a58ade8791ed4e5dee68f853b3499dacdd7e10acb46232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wiles, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 14 05:05:54 np0005486808 podman[321580]: 2025-10-14 09:05:54.869114912 +0000 UTC m=+0.139961698 container attach b3ae6da816a8a12ea0a58ade8791ed4e5dee68f853b3499dacdd7e10acb46232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:05:54 np0005486808 charming_wiles[321597]: 167 167
Oct 14 05:05:54 np0005486808 systemd[1]: libpod-b3ae6da816a8a12ea0a58ade8791ed4e5dee68f853b3499dacdd7e10acb46232.scope: Deactivated successfully.
Oct 14 05:05:54 np0005486808 podman[321580]: 2025-10-14 09:05:54.871367967 +0000 UTC m=+0.142214753 container died b3ae6da816a8a12ea0a58ade8791ed4e5dee68f853b3499dacdd7e10acb46232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Oct 14 05:05:54 np0005486808 systemd[1]: var-lib-containers-storage-overlay-009bb0019069b9a70717fd636c9d836b03ad0a55afa41f021eaffef872c079f5-merged.mount: Deactivated successfully.
Oct 14 05:05:54 np0005486808 podman[321580]: 2025-10-14 09:05:54.911502836 +0000 UTC m=+0.182349582 container remove b3ae6da816a8a12ea0a58ade8791ed4e5dee68f853b3499dacdd7e10acb46232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wiles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 14 05:05:54 np0005486808 systemd[1]: libpod-conmon-b3ae6da816a8a12ea0a58ade8791ed4e5dee68f853b3499dacdd7e10acb46232.scope: Deactivated successfully.
Oct 14 05:05:55 np0005486808 podman[321621]: 2025-10-14 09:05:55.131532894 +0000 UTC m=+0.062009849 container create 206f99410824ebbbd75a093bddfb312004972ccc677d09c779fdf085616a55f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_burnell, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 14 05:05:55 np0005486808 systemd[1]: Started libpod-conmon-206f99410824ebbbd75a093bddfb312004972ccc677d09c779fdf085616a55f5.scope.
Oct 14 05:05:55 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:05:55 np0005486808 podman[321621]: 2025-10-14 09:05:55.106839595 +0000 UTC m=+0.037316640 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:05:55 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f1be7dae2d998ee5791d7061f420967fa622ddc585e3d5c2b95045eea425985/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:05:55 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f1be7dae2d998ee5791d7061f420967fa622ddc585e3d5c2b95045eea425985/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:05:55 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f1be7dae2d998ee5791d7061f420967fa622ddc585e3d5c2b95045eea425985/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:05:55 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f1be7dae2d998ee5791d7061f420967fa622ddc585e3d5c2b95045eea425985/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:05:55 np0005486808 podman[321621]: 2025-10-14 09:05:55.222382001 +0000 UTC m=+0.152858996 container init 206f99410824ebbbd75a093bddfb312004972ccc677d09c779fdf085616a55f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_burnell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 14 05:05:55 np0005486808 podman[321621]: 2025-10-14 09:05:55.240487097 +0000 UTC m=+0.170964052 container start 206f99410824ebbbd75a093bddfb312004972ccc677d09c779fdf085616a55f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_burnell, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:05:55 np0005486808 podman[321621]: 2025-10-14 09:05:55.24467098 +0000 UTC m=+0.175147975 container attach 206f99410824ebbbd75a093bddfb312004972ccc677d09c779fdf085616a55f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_burnell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 14 05:05:55 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1510: 305 pgs: 305 active+clean; 167 MiB data, 548 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 9.3 KiB/s wr, 120 op/s
Oct 14 05:05:56 np0005486808 angry_burnell[321638]: {
Oct 14 05:05:56 np0005486808 angry_burnell[321638]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:05:56 np0005486808 angry_burnell[321638]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:05:56 np0005486808 angry_burnell[321638]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:05:56 np0005486808 angry_burnell[321638]:        "osd_id": 2,
Oct 14 05:05:56 np0005486808 angry_burnell[321638]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:05:56 np0005486808 angry_burnell[321638]:        "type": "bluestore"
Oct 14 05:05:56 np0005486808 angry_burnell[321638]:    },
Oct 14 05:05:56 np0005486808 angry_burnell[321638]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:05:56 np0005486808 angry_burnell[321638]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:05:56 np0005486808 angry_burnell[321638]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:05:56 np0005486808 angry_burnell[321638]:        "osd_id": 1,
Oct 14 05:05:56 np0005486808 angry_burnell[321638]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:05:56 np0005486808 angry_burnell[321638]:        "type": "bluestore"
Oct 14 05:05:56 np0005486808 angry_burnell[321638]:    },
Oct 14 05:05:56 np0005486808 angry_burnell[321638]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:05:56 np0005486808 angry_burnell[321638]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:05:56 np0005486808 angry_burnell[321638]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:05:56 np0005486808 angry_burnell[321638]:        "osd_id": 0,
Oct 14 05:05:56 np0005486808 angry_burnell[321638]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:05:56 np0005486808 angry_burnell[321638]:        "type": "bluestore"
Oct 14 05:05:56 np0005486808 angry_burnell[321638]:    }
Oct 14 05:05:56 np0005486808 angry_burnell[321638]: }
Oct 14 05:05:56 np0005486808 systemd[1]: libpod-206f99410824ebbbd75a093bddfb312004972ccc677d09c779fdf085616a55f5.scope: Deactivated successfully.
Oct 14 05:05:56 np0005486808 systemd[1]: libpod-206f99410824ebbbd75a093bddfb312004972ccc677d09c779fdf085616a55f5.scope: Consumed 1.033s CPU time.
Oct 14 05:05:56 np0005486808 podman[321672]: 2025-10-14 09:05:56.330773038 +0000 UTC m=+0.022643369 container died 206f99410824ebbbd75a093bddfb312004972ccc677d09c779fdf085616a55f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:05:56 np0005486808 systemd[1]: var-lib-containers-storage-overlay-3f1be7dae2d998ee5791d7061f420967fa622ddc585e3d5c2b95045eea425985-merged.mount: Deactivated successfully.
Oct 14 05:05:56 np0005486808 podman[321672]: 2025-10-14 09:05:56.386225994 +0000 UTC m=+0.078096285 container remove 206f99410824ebbbd75a093bddfb312004972ccc677d09c779fdf085616a55f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_burnell, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 14 05:05:56 np0005486808 systemd[1]: libpod-conmon-206f99410824ebbbd75a093bddfb312004972ccc677d09c779fdf085616a55f5.scope: Deactivated successfully.
Oct 14 05:05:56 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:05:56 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:05:56 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:05:56 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:05:56 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 513df777-fbca-47a6-9604-d15c356c4c0c does not exist
Oct 14 05:05:56 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 320f7e08-2c41-41ed-8536-052e0e1a8fb1 does not exist
Oct 14 05:05:56 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:56Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:01:3d:6b 10.100.0.5
Oct 14 05:05:56 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:56Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:01:3d:6b 10.100.0.5
Oct 14 05:05:57 np0005486808 nova_compute[259627]: 2025-10-14 09:05:57.075 2 DEBUG nova.virt.libvirt.driver [None req-132eae2f-601c-4eec-98ad-88e0ceb8bea4 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct 14 05:05:57 np0005486808 nova_compute[259627]: 2025-10-14 09:05:57.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:57 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:05:57 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:05:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:05:57 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1511: 305 pgs: 305 active+clean; 167 MiB data, 548 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 KiB/s wr, 119 op/s
Oct 14 05:05:59 np0005486808 nova_compute[259627]: 2025-10-14 09:05:59.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:59 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1512: 305 pgs: 305 active+clean; 167 MiB data, 548 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 KiB/s wr, 119 op/s
Oct 14 05:05:59 np0005486808 kernel: tap31797867-f0 (unregistering): left promiscuous mode
Oct 14 05:05:59 np0005486808 NetworkManager[44885]: <info>  [1760432759.9560] device (tap31797867-f0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:05:59 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:59Z|00567|binding|INFO|Releasing lport 31797867-f0bc-4632-a658-bd0fae609c23 from this chassis (sb_readonly=0)
Oct 14 05:05:59 np0005486808 nova_compute[259627]: 2025-10-14 09:05:59.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:05:59 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:59Z|00568|binding|INFO|Setting lport 31797867-f0bc-4632-a658-bd0fae609c23 down in Southbound
Oct 14 05:05:59 np0005486808 ovn_controller[152662]: 2025-10-14T09:05:59Z|00569|binding|INFO|Removing iface tap31797867-f0 ovn-installed in OVS
Oct 14 05:05:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:59.980 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:3d:6b 10.100.0.5'], port_security=['fa:16:3e:01:3d:6b 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ec31b9ab-88ab-4085-a46b-76cb9825061a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3b87118-f516-4f2d-8696-aa7290af9d83', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e47722c609640d3a70fee8dd6ff94cc', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a1858677-a8a5-4b0c-b70b-3875847a67c8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07d19cad-5a0d-49ee-a7bf-91e279c3b03d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=31797867-f0bc-4632-a658-bd0fae609c23) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:05:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:59.982 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 31797867-f0bc-4632-a658-bd0fae609c23 in datapath f3b87118-f516-4f2d-8696-aa7290af9d83 unbound from our chassis#033[00m
Oct 14 05:05:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:05:59.985 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3b87118-f516-4f2d-8696-aa7290af9d83#033[00m
Oct 14 05:06:00 np0005486808 nova_compute[259627]: 2025-10-14 09:06:00.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:00.012 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[71043b8a-f09f-43a4-aea9-7f5eb9414fc3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:00 np0005486808 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d00000038.scope: Deactivated successfully.
Oct 14 05:06:00 np0005486808 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d00000038.scope: Consumed 12.366s CPU time.
Oct 14 05:06:00 np0005486808 systemd-machined[214636]: Machine qemu-70-instance-00000038 terminated.
Oct 14 05:06:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:00.050 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1a24d984-75f8-4e31-8cc2-13b91ce88388]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:00.053 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[2bba4ff7-f1cc-4f26-8025-a4dc9cbc6e52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:00.095 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1075c7c0-6659-4a65-a436-bc754cc5a7d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:00.116 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[01e5c3d2-568c-4a56-a764-8e31be967520]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3b87118-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:43:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647540, 'reachable_time': 24755, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321749, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:00.146 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1e64c342-0833-47a5-a332-1442463af942]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf3b87118-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647552, 'tstamp': 647552}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321750, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf3b87118-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647555, 'tstamp': 647555}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321750, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:00.148 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3b87118-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:00 np0005486808 nova_compute[259627]: 2025-10-14 09:06:00.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:00 np0005486808 nova_compute[259627]: 2025-10-14 09:06:00.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:00.155 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3b87118-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:00.156 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:06:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:00.157 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3b87118-f0, col_values=(('external_ids', {'iface-id': 'a7f44223-dee5-4a2f-b975-1f04f03b78f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:00.158 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:06:00 np0005486808 nova_compute[259627]: 2025-10-14 09:06:00.195 2 INFO nova.virt.libvirt.driver [None req-132eae2f-601c-4eec-98ad-88e0ceb8bea4 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Instance shutdown successfully after 13 seconds.#033[00m
Oct 14 05:06:00 np0005486808 nova_compute[259627]: 2025-10-14 09:06:00.201 2 INFO nova.virt.libvirt.driver [-] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Instance destroyed successfully.#033[00m
Oct 14 05:06:00 np0005486808 nova_compute[259627]: 2025-10-14 09:06:00.202 2 DEBUG nova.objects.instance [None req-132eae2f-601c-4eec-98ad-88e0ceb8bea4 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lazy-loading 'numa_topology' on Instance uuid ec31b9ab-88ab-4085-a46b-76cb9825061a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:06:00 np0005486808 nova_compute[259627]: 2025-10-14 09:06:00.217 2 DEBUG nova.compute.manager [req-8e9bb163-9c7b-4ff7-bcac-ced074977563 req-dbc5516f-4ff0-4536-943d-db0dd99e920d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Received event network-vif-unplugged-31797867-f0bc-4632-a658-bd0fae609c23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:06:00 np0005486808 nova_compute[259627]: 2025-10-14 09:06:00.217 2 DEBUG oslo_concurrency.lockutils [req-8e9bb163-9c7b-4ff7-bcac-ced074977563 req-dbc5516f-4ff0-4536-943d-db0dd99e920d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:00 np0005486808 nova_compute[259627]: 2025-10-14 09:06:00.217 2 DEBUG oslo_concurrency.lockutils [req-8e9bb163-9c7b-4ff7-bcac-ced074977563 req-dbc5516f-4ff0-4536-943d-db0dd99e920d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:00 np0005486808 nova_compute[259627]: 2025-10-14 09:06:00.218 2 DEBUG oslo_concurrency.lockutils [req-8e9bb163-9c7b-4ff7-bcac-ced074977563 req-dbc5516f-4ff0-4536-943d-db0dd99e920d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:00 np0005486808 nova_compute[259627]: 2025-10-14 09:06:00.218 2 DEBUG nova.compute.manager [req-8e9bb163-9c7b-4ff7-bcac-ced074977563 req-dbc5516f-4ff0-4536-943d-db0dd99e920d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] No waiting events found dispatching network-vif-unplugged-31797867-f0bc-4632-a658-bd0fae609c23 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:06:00 np0005486808 nova_compute[259627]: 2025-10-14 09:06:00.218 2 WARNING nova.compute.manager [req-8e9bb163-9c7b-4ff7-bcac-ced074977563 req-dbc5516f-4ff0-4536-943d-db0dd99e920d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Received unexpected event network-vif-unplugged-31797867-f0bc-4632-a658-bd0fae609c23 for instance with vm_state active and task_state powering-off.#033[00m
Oct 14 05:06:00 np0005486808 nova_compute[259627]: 2025-10-14 09:06:00.220 2 DEBUG nova.compute.manager [None req-132eae2f-601c-4eec-98ad-88e0ceb8bea4 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:06:00 np0005486808 nova_compute[259627]: 2025-10-14 09:06:00.282 2 DEBUG oslo_concurrency.lockutils [None req-132eae2f-601c-4eec-98ad-88e0ceb8bea4 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.322s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:01 np0005486808 nova_compute[259627]: 2025-10-14 09:06:01.485 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432746.483594, 65c7e6ed-131f-4bca-af69-a1241d048bdb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:06:01 np0005486808 nova_compute[259627]: 2025-10-14 09:06:01.486 2 INFO nova.compute.manager [-] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:06:01 np0005486808 nova_compute[259627]: 2025-10-14 09:06:01.510 2 DEBUG nova.compute.manager [None req-7beafdc1-0bdb-44b7-b400-48b22d2d1265 - - - - - -] [instance: 65c7e6ed-131f-4bca-af69-a1241d048bdb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:06:01 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1513: 305 pgs: 305 active+clean; 200 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 185 op/s
Oct 14 05:06:01 np0005486808 nova_compute[259627]: 2025-10-14 09:06:01.934 2 INFO nova.compute.manager [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Rebuilding instance#033[00m
Oct 14 05:06:01 np0005486808 nova_compute[259627]: 2025-10-14 09:06:01.992 2 DEBUG oslo_concurrency.lockutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:01 np0005486808 nova_compute[259627]: 2025-10-14 09:06:01.992 2 DEBUG oslo_concurrency.lockutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:02 np0005486808 nova_compute[259627]: 2025-10-14 09:06:02.019 2 DEBUG nova.compute.manager [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:06:02 np0005486808 nova_compute[259627]: 2025-10-14 09:06:02.108 2 DEBUG oslo_concurrency.lockutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:02 np0005486808 nova_compute[259627]: 2025-10-14 09:06:02.109 2 DEBUG oslo_concurrency.lockutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:02 np0005486808 nova_compute[259627]: 2025-10-14 09:06:02.117 2 DEBUG nova.virt.hardware [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:06:02 np0005486808 nova_compute[259627]: 2025-10-14 09:06:02.117 2 INFO nova.compute.claims [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:06:02 np0005486808 nova_compute[259627]: 2025-10-14 09:06:02.280 2 DEBUG nova.objects.instance [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lazy-loading 'trusted_certs' on Instance uuid ec31b9ab-88ab-4085-a46b-76cb9825061a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:06:02 np0005486808 nova_compute[259627]: 2025-10-14 09:06:02.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:02 np0005486808 nova_compute[259627]: 2025-10-14 09:06:02.298 2 DEBUG nova.compute.manager [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:06:02 np0005486808 nova_compute[259627]: 2025-10-14 09:06:02.301 2 DEBUG oslo_concurrency.processutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:06:02 np0005486808 nova_compute[259627]: 2025-10-14 09:06:02.352 2 DEBUG nova.compute.manager [req-365626f2-2d9e-4194-9fe7-76d6cc903760 req-535e6bf5-ec9b-4144-9017-cd8627d55467 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Received event network-vif-plugged-31797867-f0bc-4632-a658-bd0fae609c23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:06:02 np0005486808 nova_compute[259627]: 2025-10-14 09:06:02.353 2 DEBUG oslo_concurrency.lockutils [req-365626f2-2d9e-4194-9fe7-76d6cc903760 req-535e6bf5-ec9b-4144-9017-cd8627d55467 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:02 np0005486808 nova_compute[259627]: 2025-10-14 09:06:02.353 2 DEBUG oslo_concurrency.lockutils [req-365626f2-2d9e-4194-9fe7-76d6cc903760 req-535e6bf5-ec9b-4144-9017-cd8627d55467 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:02 np0005486808 nova_compute[259627]: 2025-10-14 09:06:02.354 2 DEBUG oslo_concurrency.lockutils [req-365626f2-2d9e-4194-9fe7-76d6cc903760 req-535e6bf5-ec9b-4144-9017-cd8627d55467 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:02 np0005486808 nova_compute[259627]: 2025-10-14 09:06:02.354 2 DEBUG nova.compute.manager [req-365626f2-2d9e-4194-9fe7-76d6cc903760 req-535e6bf5-ec9b-4144-9017-cd8627d55467 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] No waiting events found dispatching network-vif-plugged-31797867-f0bc-4632-a658-bd0fae609c23 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:06:02 np0005486808 nova_compute[259627]: 2025-10-14 09:06:02.355 2 WARNING nova.compute.manager [req-365626f2-2d9e-4194-9fe7-76d6cc903760 req-535e6bf5-ec9b-4144-9017-cd8627d55467 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Received unexpected event network-vif-plugged-31797867-f0bc-4632-a658-bd0fae609c23 for instance with vm_state stopped and task_state rebuilding.#033[00m
Oct 14 05:06:02 np0005486808 nova_compute[259627]: 2025-10-14 09:06:02.406 2 DEBUG nova.objects.instance [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lazy-loading 'pci_requests' on Instance uuid ec31b9ab-88ab-4085-a46b-76cb9825061a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:06:02 np0005486808 nova_compute[259627]: 2025-10-14 09:06:02.418 2 DEBUG nova.objects.instance [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lazy-loading 'pci_devices' on Instance uuid ec31b9ab-88ab-4085-a46b-76cb9825061a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:06:02 np0005486808 nova_compute[259627]: 2025-10-14 09:06:02.429 2 DEBUG nova.objects.instance [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lazy-loading 'resources' on Instance uuid ec31b9ab-88ab-4085-a46b-76cb9825061a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:06:02 np0005486808 nova_compute[259627]: 2025-10-14 09:06:02.437 2 DEBUG nova.objects.instance [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lazy-loading 'migration_context' on Instance uuid ec31b9ab-88ab-4085-a46b-76cb9825061a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:06:02 np0005486808 nova_compute[259627]: 2025-10-14 09:06:02.453 2 DEBUG nova.objects.instance [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct 14 05:06:02 np0005486808 nova_compute[259627]: 2025-10-14 09:06:02.456 2 INFO nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Instance already shutdown.#033[00m
Oct 14 05:06:02 np0005486808 nova_compute[259627]: 2025-10-14 09:06:02.463 2 INFO nova.virt.libvirt.driver [-] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Instance destroyed successfully.#033[00m
Oct 14 05:06:02 np0005486808 nova_compute[259627]: 2025-10-14 09:06:02.469 2 INFO nova.virt.libvirt.driver [-] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Instance destroyed successfully.#033[00m
Oct 14 05:06:02 np0005486808 nova_compute[259627]: 2025-10-14 09:06:02.470 2 DEBUG nova.virt.libvirt.vif [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:05:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-412231426',display_name='tempest-tempest.common.compute-instance-412231426',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-412231426',id=56,image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:05:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='4e47722c609640d3a70fee8dd6ff94cc',ramdisk_id='',reservation_id='r-v50tykwv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-894139105',owner_user_name='tempest-ServerActionsTestOtherA-894139105-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:06:01Z,user_data=None,user_id='d952679a4e6a4fc6bacf42c02d3e92d0',uuid=ec31b9ab-88ab-4085-a46b-76cb9825061a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "31797867-f0bc-4632-a658-bd0fae609c23", "address": "fa:16:3e:01:3d:6b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31797867-f0", "ovs_interfaceid": "31797867-f0bc-4632-a658-bd0fae609c23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:06:02 np0005486808 nova_compute[259627]: 2025-10-14 09:06:02.470 2 DEBUG nova.network.os_vif_util [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converting VIF {"id": "31797867-f0bc-4632-a658-bd0fae609c23", "address": "fa:16:3e:01:3d:6b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31797867-f0", "ovs_interfaceid": "31797867-f0bc-4632-a658-bd0fae609c23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:06:02 np0005486808 nova_compute[259627]: 2025-10-14 09:06:02.471 2 DEBUG nova.network.os_vif_util [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:3d:6b,bridge_name='br-int',has_traffic_filtering=True,id=31797867-f0bc-4632-a658-bd0fae609c23,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31797867-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:06:02 np0005486808 nova_compute[259627]: 2025-10-14 09:06:02.471 2 DEBUG os_vif [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:3d:6b,bridge_name='br-int',has_traffic_filtering=True,id=31797867-f0bc-4632-a658-bd0fae609c23,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31797867-f0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:06:02 np0005486808 nova_compute[259627]: 2025-10-14 09:06:02.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:02 np0005486808 nova_compute[259627]: 2025-10-14 09:06:02.474 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31797867-f0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:02 np0005486808 nova_compute[259627]: 2025-10-14 09:06:02.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:02 np0005486808 nova_compute[259627]: 2025-10-14 09:06:02.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:02 np0005486808 nova_compute[259627]: 2025-10-14 09:06:02.479 2 INFO os_vif [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:3d:6b,bridge_name='br-int',has_traffic_filtering=True,id=31797867-f0bc-4632-a658-bd0fae609c23,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31797867-f0')#033[00m
Oct 14 05:06:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:06:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:06:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:06:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:06:02 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1390791536' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:06:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:06:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:06:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:06:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:06:02 np0005486808 nova_compute[259627]: 2025-10-14 09:06:02.779 2 DEBUG oslo_concurrency.processutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:02 np0005486808 nova_compute[259627]: 2025-10-14 09:06:02.786 2 DEBUG nova.compute.provider_tree [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:06:02 np0005486808 nova_compute[259627]: 2025-10-14 09:06:02.813 2 DEBUG nova.scheduler.client.report [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:06:02 np0005486808 nova_compute[259627]: 2025-10-14 09:06:02.830 2 DEBUG oslo_concurrency.lockutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:02 np0005486808 nova_compute[259627]: 2025-10-14 09:06:02.831 2 DEBUG nova.compute.manager [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:06:02 np0005486808 nova_compute[259627]: 2025-10-14 09:06:02.872 2 DEBUG nova.compute.manager [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:06:02 np0005486808 nova_compute[259627]: 2025-10-14 09:06:02.872 2 DEBUG nova.network.neutron [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:06:02 np0005486808 nova_compute[259627]: 2025-10-14 09:06:02.879 2 INFO nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Deleting instance files /var/lib/nova/instances/ec31b9ab-88ab-4085-a46b-76cb9825061a_del#033[00m
Oct 14 05:06:02 np0005486808 nova_compute[259627]: 2025-10-14 09:06:02.880 2 INFO nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Deletion of /var/lib/nova/instances/ec31b9ab-88ab-4085-a46b-76cb9825061a_del complete#033[00m
Oct 14 05:06:02 np0005486808 nova_compute[259627]: 2025-10-14 09:06:02.887 2 INFO nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:06:02 np0005486808 nova_compute[259627]: 2025-10-14 09:06:02.915 2 DEBUG nova.compute.manager [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:06:03 np0005486808 nova_compute[259627]: 2025-10-14 09:06:03.020 2 DEBUG nova.compute.manager [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:06:03 np0005486808 nova_compute[259627]: 2025-10-14 09:06:03.021 2 DEBUG nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:06:03 np0005486808 nova_compute[259627]: 2025-10-14 09:06:03.022 2 INFO nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Creating image(s)#033[00m
Oct 14 05:06:03 np0005486808 nova_compute[259627]: 2025-10-14 09:06:03.039 2 DEBUG nova.storage.rbd_utils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:06:03 np0005486808 nova_compute[259627]: 2025-10-14 09:06:03.057 2 DEBUG nova.storage.rbd_utils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:06:03 np0005486808 nova_compute[259627]: 2025-10-14 09:06:03.078 2 DEBUG nova.storage.rbd_utils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:06:03 np0005486808 nova_compute[259627]: 2025-10-14 09:06:03.081 2 DEBUG oslo_concurrency.processutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:06:03 np0005486808 nova_compute[259627]: 2025-10-14 09:06:03.122 2 DEBUG nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:06:03 np0005486808 nova_compute[259627]: 2025-10-14 09:06:03.123 2 INFO nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Creating image(s)#033[00m
Oct 14 05:06:03 np0005486808 nova_compute[259627]: 2025-10-14 09:06:03.144 2 DEBUG nova.storage.rbd_utils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image ec31b9ab-88ab-4085-a46b-76cb9825061a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:06:03 np0005486808 nova_compute[259627]: 2025-10-14 09:06:03.164 2 DEBUG nova.storage.rbd_utils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image ec31b9ab-88ab-4085-a46b-76cb9825061a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:06:03 np0005486808 nova_compute[259627]: 2025-10-14 09:06:03.184 2 DEBUG nova.storage.rbd_utils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image ec31b9ab-88ab-4085-a46b-76cb9825061a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:06:03 np0005486808 nova_compute[259627]: 2025-10-14 09:06:03.188 2 DEBUG oslo_concurrency.processutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:06:03 np0005486808 nova_compute[259627]: 2025-10-14 09:06:03.230 2 DEBUG nova.policy [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8b0f0f2991214b9caed6f475a149c342', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8945d892653642ac9c3d894f8bc3619d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:06:03 np0005486808 nova_compute[259627]: 2025-10-14 09:06:03.233 2 DEBUG oslo_concurrency.processutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:03 np0005486808 nova_compute[259627]: 2025-10-14 09:06:03.233 2 DEBUG oslo_concurrency.lockutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:03 np0005486808 nova_compute[259627]: 2025-10-14 09:06:03.234 2 DEBUG oslo_concurrency.lockutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:03 np0005486808 nova_compute[259627]: 2025-10-14 09:06:03.234 2 DEBUG oslo_concurrency.lockutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:03 np0005486808 nova_compute[259627]: 2025-10-14 09:06:03.254 2 DEBUG nova.storage.rbd_utils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:06:03 np0005486808 nova_compute[259627]: 2025-10-14 09:06:03.257 2 DEBUG oslo_concurrency.processutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:06:03 np0005486808 nova_compute[259627]: 2025-10-14 09:06:03.285 2 DEBUG oslo_concurrency.processutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:03 np0005486808 nova_compute[259627]: 2025-10-14 09:06:03.286 2 DEBUG oslo_concurrency.lockutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:03 np0005486808 nova_compute[259627]: 2025-10-14 09:06:03.286 2 DEBUG oslo_concurrency.lockutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:03 np0005486808 nova_compute[259627]: 2025-10-14 09:06:03.287 2 DEBUG oslo_concurrency.lockutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:03 np0005486808 nova_compute[259627]: 2025-10-14 09:06:03.310 2 DEBUG nova.storage.rbd_utils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image ec31b9ab-88ab-4085-a46b-76cb9825061a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:06:03 np0005486808 nova_compute[259627]: 2025-10-14 09:06:03.313 2 DEBUG oslo_concurrency.processutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf ec31b9ab-88ab-4085-a46b-76cb9825061a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:06:03 np0005486808 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Oct 14 05:06:03 np0005486808 nova_compute[259627]: 2025-10-14 09:06:03.624 2 DEBUG oslo_concurrency.processutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.367s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:03 np0005486808 nova_compute[259627]: 2025-10-14 09:06:03.678 2 DEBUG oslo_concurrency.processutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf ec31b9ab-88ab-4085-a46b-76cb9825061a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.364s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:03 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1514: 305 pgs: 305 active+clean; 200 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 14 05:06:03 np0005486808 nova_compute[259627]: 2025-10-14 09:06:03.740 2 DEBUG nova.storage.rbd_utils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] resizing rbd image a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:06:03 np0005486808 nova_compute[259627]: 2025-10-14 09:06:03.786 2 DEBUG nova.storage.rbd_utils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] resizing rbd image ec31b9ab-88ab-4085-a46b-76cb9825061a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:06:03 np0005486808 nova_compute[259627]: 2025-10-14 09:06:03.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:03 np0005486808 nova_compute[259627]: 2025-10-14 09:06:03.883 2 DEBUG nova.objects.instance [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'migration_context' on Instance uuid a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:06:03 np0005486808 nova_compute[259627]: 2025-10-14 09:06:03.923 2 DEBUG nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:06:03 np0005486808 nova_compute[259627]: 2025-10-14 09:06:03.924 2 DEBUG nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Ensure instance console log exists: /var/lib/nova/instances/ec31b9ab-88ab-4085-a46b-76cb9825061a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:06:03 np0005486808 nova_compute[259627]: 2025-10-14 09:06:03.925 2 DEBUG oslo_concurrency.lockutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:03 np0005486808 nova_compute[259627]: 2025-10-14 09:06:03.925 2 DEBUG oslo_concurrency.lockutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:03 np0005486808 nova_compute[259627]: 2025-10-14 09:06:03.926 2 DEBUG oslo_concurrency.lockutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:03 np0005486808 nova_compute[259627]: 2025-10-14 09:06:03.928 2 DEBUG nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Start _get_guest_xml network_info=[{"id": "31797867-f0bc-4632-a658-bd0fae609c23", "address": "fa:16:3e:01:3d:6b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31797867-f0", "ovs_interfaceid": "31797867-f0bc-4632-a658-bd0fae609c23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:08Z,direct_url=<?>,disk_format='qcow2',id=e2368e3e-f504-40e6-a9d3-67df18c845bb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:10Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:06:03 np0005486808 nova_compute[259627]: 2025-10-14 09:06:03.932 2 WARNING nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Oct 14 05:06:04 np0005486808 nova_compute[259627]: 2025-10-14 09:06:04.030 2 DEBUG nova.virt.libvirt.host [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:06:04 np0005486808 nova_compute[259627]: 2025-10-14 09:06:04.031 2 DEBUG nova.virt.libvirt.host [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:06:04 np0005486808 nova_compute[259627]: 2025-10-14 09:06:04.062 2 DEBUG nova.virt.libvirt.host [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:06:04 np0005486808 nova_compute[259627]: 2025-10-14 09:06:04.063 2 DEBUG nova.virt.libvirt.host [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:06:04 np0005486808 nova_compute[259627]: 2025-10-14 09:06:04.063 2 DEBUG nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:06:04 np0005486808 nova_compute[259627]: 2025-10-14 09:06:04.063 2 DEBUG nova.virt.hardware [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:08Z,direct_url=<?>,disk_format='qcow2',id=e2368e3e-f504-40e6-a9d3-67df18c845bb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:10Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:06:04 np0005486808 nova_compute[259627]: 2025-10-14 09:06:04.064 2 DEBUG nova.virt.hardware [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:06:04 np0005486808 nova_compute[259627]: 2025-10-14 09:06:04.064 2 DEBUG nova.virt.hardware [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:06:04 np0005486808 nova_compute[259627]: 2025-10-14 09:06:04.065 2 DEBUG nova.virt.hardware [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:06:04 np0005486808 nova_compute[259627]: 2025-10-14 09:06:04.065 2 DEBUG nova.virt.hardware [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:06:04 np0005486808 nova_compute[259627]: 2025-10-14 09:06:04.065 2 DEBUG nova.virt.hardware [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:06:04 np0005486808 nova_compute[259627]: 2025-10-14 09:06:04.066 2 DEBUG nova.virt.hardware [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:06:04 np0005486808 nova_compute[259627]: 2025-10-14 09:06:04.066 2 DEBUG nova.virt.hardware [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:06:04 np0005486808 nova_compute[259627]: 2025-10-14 09:06:04.067 2 DEBUG nova.virt.hardware [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:06:04 np0005486808 nova_compute[259627]: 2025-10-14 09:06:04.067 2 DEBUG nova.virt.hardware [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:06:04 np0005486808 nova_compute[259627]: 2025-10-14 09:06:04.067 2 DEBUG nova.virt.hardware [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:06:04 np0005486808 nova_compute[259627]: 2025-10-14 09:06:04.068 2 DEBUG nova.objects.instance [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lazy-loading 'vcpu_model' on Instance uuid ec31b9ab-88ab-4085-a46b-76cb9825061a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:06:04 np0005486808 nova_compute[259627]: 2025-10-14 09:06:04.108 2 DEBUG nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:06:04 np0005486808 nova_compute[259627]: 2025-10-14 09:06:04.108 2 DEBUG nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Ensure instance console log exists: /var/lib/nova/instances/a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:06:04 np0005486808 nova_compute[259627]: 2025-10-14 09:06:04.109 2 DEBUG oslo_concurrency.lockutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:04 np0005486808 nova_compute[259627]: 2025-10-14 09:06:04.109 2 DEBUG oslo_concurrency.lockutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:04 np0005486808 nova_compute[259627]: 2025-10-14 09:06:04.110 2 DEBUG oslo_concurrency.lockutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:04 np0005486808 nova_compute[259627]: 2025-10-14 09:06:04.122 2 DEBUG oslo_concurrency.processutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:06:04 np0005486808 nova_compute[259627]: 2025-10-14 09:06:04.346 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432749.342114, 47257c6e-4d10-4d8e-af5a-b57db20048ea => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:06:04 np0005486808 nova_compute[259627]: 2025-10-14 09:06:04.348 2 INFO nova.compute.manager [-] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:06:04 np0005486808 nova_compute[259627]: 2025-10-14 09:06:04.371 2 DEBUG nova.compute.manager [None req-7204f379-0335-478a-b117-3ebf82d8f1c3 - - - - - -] [instance: 47257c6e-4d10-4d8e-af5a-b57db20048ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:06:04 np0005486808 nova_compute[259627]: 2025-10-14 09:06:04.516 2 DEBUG nova.network.neutron [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Successfully created port: dffa5a1f-657b-498e-bbe5-6540fead7fb6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:06:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:06:04 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2297244430' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:06:04 np0005486808 nova_compute[259627]: 2025-10-14 09:06:04.591 2 DEBUG oslo_concurrency.processutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:04 np0005486808 nova_compute[259627]: 2025-10-14 09:06:04.617 2 DEBUG nova.storage.rbd_utils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image ec31b9ab-88ab-4085-a46b-76cb9825061a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:06:04 np0005486808 nova_compute[259627]: 2025-10-14 09:06:04.621 2 DEBUG oslo_concurrency.processutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:06:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:06:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/528567082' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:06:05 np0005486808 nova_compute[259627]: 2025-10-14 09:06:05.062 2 DEBUG oslo_concurrency.processutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:05 np0005486808 nova_compute[259627]: 2025-10-14 09:06:05.064 2 DEBUG nova.virt.libvirt.vif [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-14T09:05:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-412231426',display_name='tempest-tempest.common.compute-instance-412231426',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-412231426',id=56,image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:05:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='4e47722c609640d3a70fee8dd6ff94cc',ramdisk_id='',reservation_id='r-v50tykwv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-894139105',owner_user_name='tempest-Server
ActionsTestOtherA-894139105-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:06:02Z,user_data=None,user_id='d952679a4e6a4fc6bacf42c02d3e92d0',uuid=ec31b9ab-88ab-4085-a46b-76cb9825061a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "31797867-f0bc-4632-a658-bd0fae609c23", "address": "fa:16:3e:01:3d:6b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31797867-f0", "ovs_interfaceid": "31797867-f0bc-4632-a658-bd0fae609c23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:06:05 np0005486808 nova_compute[259627]: 2025-10-14 09:06:05.065 2 DEBUG nova.network.os_vif_util [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converting VIF {"id": "31797867-f0bc-4632-a658-bd0fae609c23", "address": "fa:16:3e:01:3d:6b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31797867-f0", "ovs_interfaceid": "31797867-f0bc-4632-a658-bd0fae609c23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:06:05 np0005486808 nova_compute[259627]: 2025-10-14 09:06:05.067 2 DEBUG nova.network.os_vif_util [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:3d:6b,bridge_name='br-int',has_traffic_filtering=True,id=31797867-f0bc-4632-a658-bd0fae609c23,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31797867-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:06:05 np0005486808 nova_compute[259627]: 2025-10-14 09:06:05.073 2 DEBUG nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:06:05 np0005486808 nova_compute[259627]:  <uuid>ec31b9ab-88ab-4085-a46b-76cb9825061a</uuid>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:  <name>instance-00000038</name>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:06:05 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:      <nova:name>tempest-tempest.common.compute-instance-412231426</nova:name>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:06:03</nova:creationTime>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:06:05 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:        <nova:user uuid="d952679a4e6a4fc6bacf42c02d3e92d0">tempest-ServerActionsTestOtherA-894139105-project-member</nova:user>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:        <nova:project uuid="4e47722c609640d3a70fee8dd6ff94cc">tempest-ServerActionsTestOtherA-894139105</nova:project>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="e2368e3e-f504-40e6-a9d3-67df18c845bb"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:        <nova:port uuid="31797867-f0bc-4632-a658-bd0fae609c23">
Oct 14 05:06:05 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:      <entry name="serial">ec31b9ab-88ab-4085-a46b-76cb9825061a</entry>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:      <entry name="uuid">ec31b9ab-88ab-4085-a46b-76cb9825061a</entry>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:06:05 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/ec31b9ab-88ab-4085-a46b-76cb9825061a_disk">
Oct 14 05:06:05 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:06:05 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:06:05 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/ec31b9ab-88ab-4085-a46b-76cb9825061a_disk.config">
Oct 14 05:06:05 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:06:05 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:06:05 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:01:3d:6b"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:      <target dev="tap31797867-f0"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:06:05 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/ec31b9ab-88ab-4085-a46b-76cb9825061a/console.log" append="off"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:06:05 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:06:05 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:06:05 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:06:05 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:06:05 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:06:05 np0005486808 nova_compute[259627]: 2025-10-14 09:06:05.075 2 DEBUG nova.compute.manager [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Preparing to wait for external event network-vif-plugged-31797867-f0bc-4632-a658-bd0fae609c23 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:06:05 np0005486808 nova_compute[259627]: 2025-10-14 09:06:05.076 2 DEBUG oslo_concurrency.lockutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:05 np0005486808 nova_compute[259627]: 2025-10-14 09:06:05.077 2 DEBUG oslo_concurrency.lockutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:05 np0005486808 nova_compute[259627]: 2025-10-14 09:06:05.077 2 DEBUG oslo_concurrency.lockutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:05 np0005486808 nova_compute[259627]: 2025-10-14 09:06:05.079 2 DEBUG nova.virt.libvirt.vif [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-14T09:05:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-412231426',display_name='tempest-tempest.common.compute-instance-412231426',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-412231426',id=56,image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:05:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='4e47722c609640d3a70fee8dd6ff94cc',ramdisk_id='',reservation_id='r-v50tykwv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-894139105',owner_user_name='tempest-Server
ActionsTestOtherA-894139105-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:06:02Z,user_data=None,user_id='d952679a4e6a4fc6bacf42c02d3e92d0',uuid=ec31b9ab-88ab-4085-a46b-76cb9825061a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "31797867-f0bc-4632-a658-bd0fae609c23", "address": "fa:16:3e:01:3d:6b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31797867-f0", "ovs_interfaceid": "31797867-f0bc-4632-a658-bd0fae609c23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:06:05 np0005486808 nova_compute[259627]: 2025-10-14 09:06:05.079 2 DEBUG nova.network.os_vif_util [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converting VIF {"id": "31797867-f0bc-4632-a658-bd0fae609c23", "address": "fa:16:3e:01:3d:6b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31797867-f0", "ovs_interfaceid": "31797867-f0bc-4632-a658-bd0fae609c23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:06:05 np0005486808 nova_compute[259627]: 2025-10-14 09:06:05.080 2 DEBUG nova.network.os_vif_util [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:3d:6b,bridge_name='br-int',has_traffic_filtering=True,id=31797867-f0bc-4632-a658-bd0fae609c23,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31797867-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:06:05 np0005486808 nova_compute[259627]: 2025-10-14 09:06:05.081 2 DEBUG os_vif [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:3d:6b,bridge_name='br-int',has_traffic_filtering=True,id=31797867-f0bc-4632-a658-bd0fae609c23,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31797867-f0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:06:05 np0005486808 nova_compute[259627]: 2025-10-14 09:06:05.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:05 np0005486808 nova_compute[259627]: 2025-10-14 09:06:05.083 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:05 np0005486808 nova_compute[259627]: 2025-10-14 09:06:05.084 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:06:05 np0005486808 nova_compute[259627]: 2025-10-14 09:06:05.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:05 np0005486808 nova_compute[259627]: 2025-10-14 09:06:05.088 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31797867-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:05 np0005486808 nova_compute[259627]: 2025-10-14 09:06:05.088 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap31797867-f0, col_values=(('external_ids', {'iface-id': '31797867-f0bc-4632-a658-bd0fae609c23', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:01:3d:6b', 'vm-uuid': 'ec31b9ab-88ab-4085-a46b-76cb9825061a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:05 np0005486808 nova_compute[259627]: 2025-10-14 09:06:05.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:05 np0005486808 NetworkManager[44885]: <info>  [1760432765.0917] manager: (tap31797867-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/245)
Oct 14 05:06:05 np0005486808 nova_compute[259627]: 2025-10-14 09:06:05.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:06:05 np0005486808 nova_compute[259627]: 2025-10-14 09:06:05.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:05 np0005486808 nova_compute[259627]: 2025-10-14 09:06:05.097 2 INFO os_vif [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:3d:6b,bridge_name='br-int',has_traffic_filtering=True,id=31797867-f0bc-4632-a658-bd0fae609c23,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31797867-f0')#033[00m
Oct 14 05:06:05 np0005486808 nova_compute[259627]: 2025-10-14 09:06:05.166 2 DEBUG nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:06:05 np0005486808 nova_compute[259627]: 2025-10-14 09:06:05.167 2 DEBUG nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:06:05 np0005486808 nova_compute[259627]: 2025-10-14 09:06:05.168 2 DEBUG nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] No VIF found with MAC fa:16:3e:01:3d:6b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:06:05 np0005486808 nova_compute[259627]: 2025-10-14 09:06:05.168 2 INFO nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Using config drive#033[00m
Oct 14 05:06:05 np0005486808 nova_compute[259627]: 2025-10-14 09:06:05.204 2 DEBUG nova.storage.rbd_utils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image ec31b9ab-88ab-4085-a46b-76cb9825061a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:06:05 np0005486808 nova_compute[259627]: 2025-10-14 09:06:05.229 2 DEBUG nova.objects.instance [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lazy-loading 'ec2_ids' on Instance uuid ec31b9ab-88ab-4085-a46b-76cb9825061a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:06:05 np0005486808 nova_compute[259627]: 2025-10-14 09:06:05.277 2 DEBUG nova.objects.instance [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lazy-loading 'keypairs' on Instance uuid ec31b9ab-88ab-4085-a46b-76cb9825061a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:06:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:06:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/651548379' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:06:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:06:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/651548379' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:06:05 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1515: 305 pgs: 305 active+clean; 213 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 379 KiB/s rd, 5.7 MiB/s wr, 147 op/s
Oct 14 05:06:06 np0005486808 nova_compute[259627]: 2025-10-14 09:06:06.389 2 DEBUG nova.network.neutron [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Successfully updated port: dffa5a1f-657b-498e-bbe5-6540fead7fb6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:06:06 np0005486808 nova_compute[259627]: 2025-10-14 09:06:06.414 2 INFO nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Creating config drive at /var/lib/nova/instances/ec31b9ab-88ab-4085-a46b-76cb9825061a/disk.config#033[00m
Oct 14 05:06:06 np0005486808 nova_compute[259627]: 2025-10-14 09:06:06.423 2 DEBUG oslo_concurrency.processutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ec31b9ab-88ab-4085-a46b-76cb9825061a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6cd_7y4e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:06:06 np0005486808 nova_compute[259627]: 2025-10-14 09:06:06.461 2 DEBUG oslo_concurrency.lockutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:06:06 np0005486808 nova_compute[259627]: 2025-10-14 09:06:06.462 2 DEBUG oslo_concurrency.lockutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquired lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:06:06 np0005486808 nova_compute[259627]: 2025-10-14 09:06:06.463 2 DEBUG nova.network.neutron [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:06:06 np0005486808 nova_compute[259627]: 2025-10-14 09:06:06.512 2 DEBUG nova.compute.manager [req-67da2cf9-7eb6-467e-9c70-977fab2d0e4a req-122c47f3-79c5-4368-85bc-c1ae63a0b8c1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Received event network-changed-dffa5a1f-657b-498e-bbe5-6540fead7fb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:06:06 np0005486808 nova_compute[259627]: 2025-10-14 09:06:06.513 2 DEBUG nova.compute.manager [req-67da2cf9-7eb6-467e-9c70-977fab2d0e4a req-122c47f3-79c5-4368-85bc-c1ae63a0b8c1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Refreshing instance network info cache due to event network-changed-dffa5a1f-657b-498e-bbe5-6540fead7fb6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:06:06 np0005486808 nova_compute[259627]: 2025-10-14 09:06:06.513 2 DEBUG oslo_concurrency.lockutils [req-67da2cf9-7eb6-467e-9c70-977fab2d0e4a req-122c47f3-79c5-4368-85bc-c1ae63a0b8c1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:06:06 np0005486808 nova_compute[259627]: 2025-10-14 09:06:06.565 2 DEBUG oslo_concurrency.processutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ec31b9ab-88ab-4085-a46b-76cb9825061a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6cd_7y4e" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:06 np0005486808 nova_compute[259627]: 2025-10-14 09:06:06.589 2 DEBUG nova.storage.rbd_utils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image ec31b9ab-88ab-4085-a46b-76cb9825061a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:06:06 np0005486808 nova_compute[259627]: 2025-10-14 09:06:06.593 2 DEBUG oslo_concurrency.processutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ec31b9ab-88ab-4085-a46b-76cb9825061a/disk.config ec31b9ab-88ab-4085-a46b-76cb9825061a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:06:06 np0005486808 nova_compute[259627]: 2025-10-14 09:06:06.620 2 DEBUG nova.network.neutron [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:06:06 np0005486808 nova_compute[259627]: 2025-10-14 09:06:06.764 2 DEBUG oslo_concurrency.processutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ec31b9ab-88ab-4085-a46b-76cb9825061a/disk.config ec31b9ab-88ab-4085-a46b-76cb9825061a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:06 np0005486808 nova_compute[259627]: 2025-10-14 09:06:06.765 2 INFO nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Deleting local config drive /var/lib/nova/instances/ec31b9ab-88ab-4085-a46b-76cb9825061a/disk.config because it was imported into RBD.#033[00m
Oct 14 05:06:06 np0005486808 kernel: tap31797867-f0: entered promiscuous mode
Oct 14 05:06:06 np0005486808 NetworkManager[44885]: <info>  [1760432766.8304] manager: (tap31797867-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/246)
Oct 14 05:06:06 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:06Z|00570|binding|INFO|Claiming lport 31797867-f0bc-4632-a658-bd0fae609c23 for this chassis.
Oct 14 05:06:06 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:06Z|00571|binding|INFO|31797867-f0bc-4632-a658-bd0fae609c23: Claiming fa:16:3e:01:3d:6b 10.100.0.5
Oct 14 05:06:06 np0005486808 nova_compute[259627]: 2025-10-14 09:06:06.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:06.840 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:3d:6b 10.100.0.5'], port_security=['fa:16:3e:01:3d:6b 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ec31b9ab-88ab-4085-a46b-76cb9825061a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3b87118-f516-4f2d-8696-aa7290af9d83', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e47722c609640d3a70fee8dd6ff94cc', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'a1858677-a8a5-4b0c-b70b-3875847a67c8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07d19cad-5a0d-49ee-a7bf-91e279c3b03d, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=31797867-f0bc-4632-a658-bd0fae609c23) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:06:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:06.841 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 31797867-f0bc-4632-a658-bd0fae609c23 in datapath f3b87118-f516-4f2d-8696-aa7290af9d83 bound to our chassis#033[00m
Oct 14 05:06:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:06.843 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3b87118-f516-4f2d-8696-aa7290af9d83#033[00m
Oct 14 05:06:06 np0005486808 systemd-udevd[322272]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:06:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:06.864 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c8ce6e97-195c-41bf-bea0-fc72ae95c090]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:06 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:06Z|00572|binding|INFO|Setting lport 31797867-f0bc-4632-a658-bd0fae609c23 ovn-installed in OVS
Oct 14 05:06:06 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:06Z|00573|binding|INFO|Setting lport 31797867-f0bc-4632-a658-bd0fae609c23 up in Southbound
Oct 14 05:06:06 np0005486808 nova_compute[259627]: 2025-10-14 09:06:06.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:06 np0005486808 systemd-machined[214636]: New machine qemu-71-instance-00000038.
Oct 14 05:06:06 np0005486808 NetworkManager[44885]: <info>  [1760432766.8810] device (tap31797867-f0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:06:06 np0005486808 NetworkManager[44885]: <info>  [1760432766.8818] device (tap31797867-f0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:06:06 np0005486808 systemd[1]: Started Virtual Machine qemu-71-instance-00000038.
Oct 14 05:06:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:06.900 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[0b263b14-8b9e-4066-a723-45d58d204427]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:06.905 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[b96b2597-c7e2-4e3d-9b8e-0e3a4a259db0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:06.930 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c79ef636-ff32-4be3-b943-91b856b02ea7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:06.950 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[367f7224-72f5-4c79-bd4c-4b9aced97d59]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3b87118-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:43:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647540, 'reachable_time': 24755, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322285, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:06.969 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f44213b2-cc9c-4f3a-93ce-64d2a666ad46]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf3b87118-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647552, 'tstamp': 647552}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322287, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf3b87118-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647555, 'tstamp': 647555}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322287, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:06.971 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3b87118-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:06 np0005486808 nova_compute[259627]: 2025-10-14 09:06:06.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:06.973 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3b87118-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:06.974 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:06:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:06.974 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3b87118-f0, col_values=(('external_ids', {'iface-id': 'a7f44223-dee5-4a2f-b975-1f04f03b78f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:06.974 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:06:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:07.024 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:07.025 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:07.025 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:07 np0005486808 nova_compute[259627]: 2025-10-14 09:06:07.217 2 DEBUG nova.compute.manager [req-d97bd3df-f1e0-440b-938b-318b71ac5f77 req-578ffb40-1c8e-45cd-80ca-495b85b4c4ae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Received event network-vif-plugged-31797867-f0bc-4632-a658-bd0fae609c23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:06:07 np0005486808 nova_compute[259627]: 2025-10-14 09:06:07.218 2 DEBUG oslo_concurrency.lockutils [req-d97bd3df-f1e0-440b-938b-318b71ac5f77 req-578ffb40-1c8e-45cd-80ca-495b85b4c4ae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:07 np0005486808 nova_compute[259627]: 2025-10-14 09:06:07.218 2 DEBUG oslo_concurrency.lockutils [req-d97bd3df-f1e0-440b-938b-318b71ac5f77 req-578ffb40-1c8e-45cd-80ca-495b85b4c4ae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:07 np0005486808 nova_compute[259627]: 2025-10-14 09:06:07.219 2 DEBUG oslo_concurrency.lockutils [req-d97bd3df-f1e0-440b-938b-318b71ac5f77 req-578ffb40-1c8e-45cd-80ca-495b85b4c4ae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:07 np0005486808 nova_compute[259627]: 2025-10-14 09:06:07.220 2 DEBUG nova.compute.manager [req-d97bd3df-f1e0-440b-938b-318b71ac5f77 req-578ffb40-1c8e-45cd-80ca-495b85b4c4ae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Processing event network-vif-plugged-31797867-f0bc-4632-a658-bd0fae609c23 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:06:07 np0005486808 nova_compute[259627]: 2025-10-14 09:06:07.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:07 np0005486808 nova_compute[259627]: 2025-10-14 09:06:07.385 2 DEBUG nova.network.neutron [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Updating instance_info_cache with network_info: [{"id": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "address": "fa:16:3e:b4:40:de", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdffa5a1f-65", "ovs_interfaceid": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:06:07 np0005486808 nova_compute[259627]: 2025-10-14 09:06:07.422 2 DEBUG oslo_concurrency.lockutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Releasing lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:06:07 np0005486808 nova_compute[259627]: 2025-10-14 09:06:07.423 2 DEBUG nova.compute.manager [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Instance network_info: |[{"id": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "address": "fa:16:3e:b4:40:de", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdffa5a1f-65", "ovs_interfaceid": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:06:07 np0005486808 nova_compute[259627]: 2025-10-14 09:06:07.424 2 DEBUG oslo_concurrency.lockutils [req-67da2cf9-7eb6-467e-9c70-977fab2d0e4a req-122c47f3-79c5-4368-85bc-c1ae63a0b8c1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:06:07 np0005486808 nova_compute[259627]: 2025-10-14 09:06:07.424 2 DEBUG nova.network.neutron [req-67da2cf9-7eb6-467e-9c70-977fab2d0e4a req-122c47f3-79c5-4368-85bc-c1ae63a0b8c1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Refreshing network info cache for port dffa5a1f-657b-498e-bbe5-6540fead7fb6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:06:07 np0005486808 nova_compute[259627]: 2025-10-14 09:06:07.426 2 DEBUG nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Start _get_guest_xml network_info=[{"id": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "address": "fa:16:3e:b4:40:de", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdffa5a1f-65", "ovs_interfaceid": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:06:07 np0005486808 nova_compute[259627]: 2025-10-14 09:06:07.433 2 WARNING nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:06:07 np0005486808 nova_compute[259627]: 2025-10-14 09:06:07.437 2 DEBUG nova.virt.libvirt.host [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:06:07 np0005486808 nova_compute[259627]: 2025-10-14 09:06:07.437 2 DEBUG nova.virt.libvirt.host [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:06:07 np0005486808 nova_compute[259627]: 2025-10-14 09:06:07.444 2 DEBUG nova.virt.libvirt.host [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:06:07 np0005486808 nova_compute[259627]: 2025-10-14 09:06:07.446 2 DEBUG nova.virt.libvirt.host [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:06:07 np0005486808 nova_compute[259627]: 2025-10-14 09:06:07.447 2 DEBUG nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:06:07 np0005486808 nova_compute[259627]: 2025-10-14 09:06:07.447 2 DEBUG nova.virt.hardware [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:06:07 np0005486808 nova_compute[259627]: 2025-10-14 09:06:07.447 2 DEBUG nova.virt.hardware [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:06:07 np0005486808 nova_compute[259627]: 2025-10-14 09:06:07.448 2 DEBUG nova.virt.hardware [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:06:07 np0005486808 nova_compute[259627]: 2025-10-14 09:06:07.448 2 DEBUG nova.virt.hardware [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:06:07 np0005486808 nova_compute[259627]: 2025-10-14 09:06:07.448 2 DEBUG nova.virt.hardware [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:06:07 np0005486808 nova_compute[259627]: 2025-10-14 09:06:07.448 2 DEBUG nova.virt.hardware [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:06:07 np0005486808 nova_compute[259627]: 2025-10-14 09:06:07.448 2 DEBUG nova.virt.hardware [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:06:07 np0005486808 nova_compute[259627]: 2025-10-14 09:06:07.449 2 DEBUG nova.virt.hardware [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:06:07 np0005486808 nova_compute[259627]: 2025-10-14 09:06:07.449 2 DEBUG nova.virt.hardware [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:06:07 np0005486808 nova_compute[259627]: 2025-10-14 09:06:07.449 2 DEBUG nova.virt.hardware [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:06:07 np0005486808 nova_compute[259627]: 2025-10-14 09:06:07.449 2 DEBUG nova.virt.hardware [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:06:07 np0005486808 nova_compute[259627]: 2025-10-14 09:06:07.452 2 DEBUG oslo_concurrency.processutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:06:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:06:07 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1516: 305 pgs: 305 active+clean; 213 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 379 KiB/s rd, 5.7 MiB/s wr, 147 op/s
Oct 14 05:06:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:06:07 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4292982815' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:06:07 np0005486808 nova_compute[259627]: 2025-10-14 09:06:07.921 2 DEBUG oslo_concurrency.processutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:07 np0005486808 nova_compute[259627]: 2025-10-14 09:06:07.944 2 DEBUG nova.storage.rbd_utils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:06:07 np0005486808 nova_compute[259627]: 2025-10-14 09:06:07.948 2 DEBUG oslo_concurrency.processutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:06:07 np0005486808 nova_compute[259627]: 2025-10-14 09:06:07.975 2 DEBUG nova.compute.manager [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:06:07 np0005486808 nova_compute[259627]: 2025-10-14 09:06:07.976 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for ec31b9ab-88ab-4085-a46b-76cb9825061a due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct 14 05:06:07 np0005486808 nova_compute[259627]: 2025-10-14 09:06:07.976 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432767.9354346, ec31b9ab-88ab-4085-a46b-76cb9825061a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:06:07 np0005486808 nova_compute[259627]: 2025-10-14 09:06:07.976 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] VM Started (Lifecycle Event)#033[00m
Oct 14 05:06:07 np0005486808 nova_compute[259627]: 2025-10-14 09:06:07.980 2 DEBUG nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:06:07 np0005486808 nova_compute[259627]: 2025-10-14 09:06:07.982 2 INFO nova.virt.libvirt.driver [-] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Instance spawned successfully.#033[00m
Oct 14 05:06:07 np0005486808 nova_compute[259627]: 2025-10-14 09:06:07.982 2 DEBUG nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.020 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.022 2 DEBUG nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.022 2 DEBUG nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.023 2 DEBUG nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.023 2 DEBUG nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.023 2 DEBUG nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.024 2 DEBUG nova.virt.libvirt.driver [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.029 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.062 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.063 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432767.9356983, ec31b9ab-88ab-4085-a46b-76cb9825061a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.063 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.090 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.094 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432767.979517, ec31b9ab-88ab-4085-a46b-76cb9825061a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.094 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.111 2 DEBUG nova.compute.manager [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.119 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.121 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.153 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.161 2 INFO nova.compute.manager [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] bringing vm to original state: 'stopped'#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.220 2 DEBUG oslo_concurrency.lockutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "ec31b9ab-88ab-4085-a46b-76cb9825061a" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.221 2 DEBUG oslo_concurrency.lockutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.221 2 DEBUG nova.compute.manager [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.225 2 DEBUG nova.compute.manager [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Oct 14 05:06:08 np0005486808 kernel: tap31797867-f0 (unregistering): left promiscuous mode
Oct 14 05:06:08 np0005486808 NetworkManager[44885]: <info>  [1760432768.2667] device (tap31797867-f0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:08 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:08Z|00574|binding|INFO|Releasing lport 31797867-f0bc-4632-a658-bd0fae609c23 from this chassis (sb_readonly=0)
Oct 14 05:06:08 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:08Z|00575|binding|INFO|Setting lport 31797867-f0bc-4632-a658-bd0fae609c23 down in Southbound
Oct 14 05:06:08 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:08Z|00576|binding|INFO|Removing iface tap31797867-f0 ovn-installed in OVS
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:08.294 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:3d:6b 10.100.0.5'], port_security=['fa:16:3e:01:3d:6b 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ec31b9ab-88ab-4085-a46b-76cb9825061a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3b87118-f516-4f2d-8696-aa7290af9d83', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e47722c609640d3a70fee8dd6ff94cc', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'a1858677-a8a5-4b0c-b70b-3875847a67c8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07d19cad-5a0d-49ee-a7bf-91e279c3b03d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=31797867-f0bc-4632-a658-bd0fae609c23) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:06:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:08.296 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 31797867-f0bc-4632-a658-bd0fae609c23 in datapath f3b87118-f516-4f2d-8696-aa7290af9d83 unbound from our chassis#033[00m
Oct 14 05:06:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:08.299 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3b87118-f516-4f2d-8696-aa7290af9d83#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:08 np0005486808 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d00000038.scope: Deactivated successfully.
Oct 14 05:06:08 np0005486808 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d00000038.scope: Consumed 1.110s CPU time.
Oct 14 05:06:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:08.313 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4f32c478-88ed-4f04-b040-7ec3fe8bd947]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:08 np0005486808 systemd-machined[214636]: Machine qemu-71-instance-00000038 terminated.
Oct 14 05:06:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:08.339 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[9ac1bf6d-680a-4ed4-be3d-5e272e6bc4fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:08.342 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[2e5e7b53-9aaf-4851-8359-3d4287c95b52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:08 np0005486808 podman[322390]: 2025-10-14 09:06:08.358719273 +0000 UTC m=+0.065815541 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:06:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:06:08 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3327380294' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:06:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:08.378 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[463a2360-204b-48b5-b3c7-237105f7f287]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:08 np0005486808 podman[322395]: 2025-10-14 09:06:08.386690712 +0000 UTC m=+0.082991875 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.388 2 DEBUG oslo_concurrency.processutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.389 2 DEBUG nova.virt.libvirt.vif [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:06:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1758133214',display_name='tempest-tempest.common.compute-instance-1758133214',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1758133214',id=57,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP68NH14S3HRbU1S6/VMc0nOUsEVOZ3tz6VZKHo1Z+OFyqqDVRUrwJaEU67IXpbnQXXmmP8KKv7jtqoNjZqCUWnJrm5ENg2LKrvLCxr2iywwS6vcmTe+HUgI45UTIKYhOw==',key_name='tempest-keypair-1916836153',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-6x4gt3p2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:06:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "address": "fa:16:3e:b4:40:de", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdffa5a1f-65", "ovs_interfaceid": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.390 2 DEBUG nova.network.os_vif_util [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "address": "fa:16:3e:b4:40:de", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdffa5a1f-65", "ovs_interfaceid": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.391 2 DEBUG nova.network.os_vif_util [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:40:de,bridge_name='br-int',has_traffic_filtering=True,id=dffa5a1f-657b-498e-bbe5-6540fead7fb6,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdffa5a1f-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.392 2 DEBUG nova.objects.instance [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'pci_devices' on Instance uuid a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:06:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:08.404 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3cccba09-16b6-4c15-ae8b-91170c167e46]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3b87118-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:43:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647540, 'reachable_time': 24755, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322440, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.408 2 DEBUG nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:06:08 np0005486808 nova_compute[259627]:  <uuid>a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8</uuid>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:  <name>instance-00000039</name>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:06:08 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:      <nova:name>tempest-tempest.common.compute-instance-1758133214</nova:name>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:06:07</nova:creationTime>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:06:08 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:        <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:        <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:        <nova:port uuid="dffa5a1f-657b-498e-bbe5-6540fead7fb6">
Oct 14 05:06:08 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:      <entry name="serial">a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8</entry>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:      <entry name="uuid">a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8</entry>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:06:08 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8_disk">
Oct 14 05:06:08 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:06:08 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:06:08 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8_disk.config">
Oct 14 05:06:08 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:06:08 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:06:08 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:b4:40:de"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:      <target dev="tapdffa5a1f-65"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:06:08 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8/console.log" append="off"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:06:08 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:06:08 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:06:08 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:06:08 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:06:08 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.409 2 DEBUG nova.compute.manager [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Preparing to wait for external event network-vif-plugged-dffa5a1f-657b-498e-bbe5-6540fead7fb6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.409 2 DEBUG oslo_concurrency.lockutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.409 2 DEBUG oslo_concurrency.lockutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.410 2 DEBUG oslo_concurrency.lockutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.410 2 DEBUG nova.virt.libvirt.vif [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:06:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1758133214',display_name='tempest-tempest.common.compute-instance-1758133214',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1758133214',id=57,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP68NH14S3HRbU1S6/VMc0nOUsEVOZ3tz6VZKHo1Z+OFyqqDVRUrwJaEU67IXpbnQXXmmP8KKv7jtqoNjZqCUWnJrm5ENg2LKrvLCxr2iywwS6vcmTe+HUgI45UTIKYhOw==',key_name='tempest-keypair-1916836153',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-6x4gt3p2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:06:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "address": "fa:16:3e:b4:40:de", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdffa5a1f-65", "ovs_interfaceid": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.411 2 DEBUG nova.network.os_vif_util [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "address": "fa:16:3e:b4:40:de", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdffa5a1f-65", "ovs_interfaceid": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.411 2 DEBUG nova.network.os_vif_util [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:40:de,bridge_name='br-int',has_traffic_filtering=True,id=dffa5a1f-657b-498e-bbe5-6540fead7fb6,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdffa5a1f-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.412 2 DEBUG os_vif [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:40:de,bridge_name='br-int',has_traffic_filtering=True,id=dffa5a1f-657b-498e-bbe5-6540fead7fb6,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdffa5a1f-65') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.413 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.413 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.416 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdffa5a1f-65, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.416 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdffa5a1f-65, col_values=(('external_ids', {'iface-id': 'dffa5a1f-657b-498e-bbe5-6540fead7fb6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b4:40:de', 'vm-uuid': 'a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:08 np0005486808 NetworkManager[44885]: <info>  [1760432768.4188] manager: (tapdffa5a1f-65): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/247)
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:06:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:08.421 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[56a3fa7d-ebb9-4dcc-b595-b11712020405]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf3b87118-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647552, 'tstamp': 647552}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322441, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf3b87118-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647555, 'tstamp': 647555}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322441, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:08.423 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3b87118-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.426 2 INFO os_vif [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:40:de,bridge_name='br-int',has_traffic_filtering=True,id=dffa5a1f-657b-498e-bbe5-6540fead7fb6,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdffa5a1f-65')#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:08.433 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3b87118-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:08.436 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:06:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:08.438 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3b87118-f0, col_values=(('external_ids', {'iface-id': 'a7f44223-dee5-4a2f-b975-1f04f03b78f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:08.438 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:06:08 np0005486808 NetworkManager[44885]: <info>  [1760432768.4463] manager: (tap31797867-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/248)
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.453 2 INFO nova.virt.libvirt.driver [-] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Instance destroyed successfully.#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.454 2 DEBUG nova.compute.manager [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.515 2 DEBUG oslo_concurrency.lockutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 0.295s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.523 2 DEBUG nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.524 2 DEBUG nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.524 2 DEBUG nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No VIF found with MAC fa:16:3e:b4:40:de, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.524 2 INFO nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Using config drive#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.553 2 DEBUG nova.storage.rbd_utils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.562 2 DEBUG oslo_concurrency.lockutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.563 2 DEBUG oslo_concurrency.lockutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.563 2 DEBUG nova.objects.instance [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.661 2 DEBUG oslo_concurrency.lockutils [None req-d2d239ba-b2bf-49ee-a3c7-62ec7af16c79 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.098s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.984 2 DEBUG nova.network.neutron [req-67da2cf9-7eb6-467e-9c70-977fab2d0e4a req-122c47f3-79c5-4368-85bc-c1ae63a0b8c1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Updated VIF entry in instance network info cache for port dffa5a1f-657b-498e-bbe5-6540fead7fb6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:06:08 np0005486808 nova_compute[259627]: 2025-10-14 09:06:08.985 2 DEBUG nova.network.neutron [req-67da2cf9-7eb6-467e-9c70-977fab2d0e4a req-122c47f3-79c5-4368-85bc-c1ae63a0b8c1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Updating instance_info_cache with network_info: [{"id": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "address": "fa:16:3e:b4:40:de", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdffa5a1f-65", "ovs_interfaceid": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:06:09 np0005486808 nova_compute[259627]: 2025-10-14 09:06:09.004 2 DEBUG oslo_concurrency.lockutils [req-67da2cf9-7eb6-467e-9c70-977fab2d0e4a req-122c47f3-79c5-4368-85bc-c1ae63a0b8c1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:06:09 np0005486808 nova_compute[259627]: 2025-10-14 09:06:09.113 2 INFO nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Creating config drive at /var/lib/nova/instances/a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8/disk.config#033[00m
Oct 14 05:06:09 np0005486808 nova_compute[259627]: 2025-10-14 09:06:09.123 2 DEBUG oslo_concurrency.processutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfxhlo5n0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:06:09 np0005486808 nova_compute[259627]: 2025-10-14 09:06:09.271 2 DEBUG oslo_concurrency.processutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfxhlo5n0" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:09 np0005486808 nova_compute[259627]: 2025-10-14 09:06:09.315 2 DEBUG nova.storage.rbd_utils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:06:09 np0005486808 nova_compute[259627]: 2025-10-14 09:06:09.319 2 DEBUG oslo_concurrency.processutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8/disk.config a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:06:09 np0005486808 nova_compute[259627]: 2025-10-14 09:06:09.375 2 DEBUG nova.compute.manager [req-47aa5ea0-511f-4dfe-bf46-9c6c13c7539f req-99584f61-5ed8-4c38-afef-940671fd9999 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Received event network-vif-plugged-31797867-f0bc-4632-a658-bd0fae609c23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:06:09 np0005486808 nova_compute[259627]: 2025-10-14 09:06:09.376 2 DEBUG oslo_concurrency.lockutils [req-47aa5ea0-511f-4dfe-bf46-9c6c13c7539f req-99584f61-5ed8-4c38-afef-940671fd9999 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:09 np0005486808 nova_compute[259627]: 2025-10-14 09:06:09.377 2 DEBUG oslo_concurrency.lockutils [req-47aa5ea0-511f-4dfe-bf46-9c6c13c7539f req-99584f61-5ed8-4c38-afef-940671fd9999 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:09 np0005486808 nova_compute[259627]: 2025-10-14 09:06:09.378 2 DEBUG oslo_concurrency.lockutils [req-47aa5ea0-511f-4dfe-bf46-9c6c13c7539f req-99584f61-5ed8-4c38-afef-940671fd9999 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:09 np0005486808 nova_compute[259627]: 2025-10-14 09:06:09.378 2 DEBUG nova.compute.manager [req-47aa5ea0-511f-4dfe-bf46-9c6c13c7539f req-99584f61-5ed8-4c38-afef-940671fd9999 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] No waiting events found dispatching network-vif-plugged-31797867-f0bc-4632-a658-bd0fae609c23 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:06:09 np0005486808 nova_compute[259627]: 2025-10-14 09:06:09.379 2 WARNING nova.compute.manager [req-47aa5ea0-511f-4dfe-bf46-9c6c13c7539f req-99584f61-5ed8-4c38-afef-940671fd9999 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Received unexpected event network-vif-plugged-31797867-f0bc-4632-a658-bd0fae609c23 for instance with vm_state stopped and task_state None.#033[00m
Oct 14 05:06:09 np0005486808 nova_compute[259627]: 2025-10-14 09:06:09.380 2 DEBUG nova.compute.manager [req-47aa5ea0-511f-4dfe-bf46-9c6c13c7539f req-99584f61-5ed8-4c38-afef-940671fd9999 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Received event network-vif-unplugged-31797867-f0bc-4632-a658-bd0fae609c23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:06:09 np0005486808 nova_compute[259627]: 2025-10-14 09:06:09.380 2 DEBUG oslo_concurrency.lockutils [req-47aa5ea0-511f-4dfe-bf46-9c6c13c7539f req-99584f61-5ed8-4c38-afef-940671fd9999 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:09 np0005486808 nova_compute[259627]: 2025-10-14 09:06:09.381 2 DEBUG oslo_concurrency.lockutils [req-47aa5ea0-511f-4dfe-bf46-9c6c13c7539f req-99584f61-5ed8-4c38-afef-940671fd9999 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:09 np0005486808 nova_compute[259627]: 2025-10-14 09:06:09.382 2 DEBUG oslo_concurrency.lockutils [req-47aa5ea0-511f-4dfe-bf46-9c6c13c7539f req-99584f61-5ed8-4c38-afef-940671fd9999 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:09 np0005486808 nova_compute[259627]: 2025-10-14 09:06:09.382 2 DEBUG nova.compute.manager [req-47aa5ea0-511f-4dfe-bf46-9c6c13c7539f req-99584f61-5ed8-4c38-afef-940671fd9999 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] No waiting events found dispatching network-vif-unplugged-31797867-f0bc-4632-a658-bd0fae609c23 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:06:09 np0005486808 nova_compute[259627]: 2025-10-14 09:06:09.383 2 WARNING nova.compute.manager [req-47aa5ea0-511f-4dfe-bf46-9c6c13c7539f req-99584f61-5ed8-4c38-afef-940671fd9999 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Received unexpected event network-vif-unplugged-31797867-f0bc-4632-a658-bd0fae609c23 for instance with vm_state stopped and task_state None.#033[00m
Oct 14 05:06:09 np0005486808 nova_compute[259627]: 2025-10-14 09:06:09.384 2 DEBUG nova.compute.manager [req-47aa5ea0-511f-4dfe-bf46-9c6c13c7539f req-99584f61-5ed8-4c38-afef-940671fd9999 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Received event network-vif-plugged-31797867-f0bc-4632-a658-bd0fae609c23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:06:09 np0005486808 nova_compute[259627]: 2025-10-14 09:06:09.384 2 DEBUG oslo_concurrency.lockutils [req-47aa5ea0-511f-4dfe-bf46-9c6c13c7539f req-99584f61-5ed8-4c38-afef-940671fd9999 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:09 np0005486808 nova_compute[259627]: 2025-10-14 09:06:09.385 2 DEBUG oslo_concurrency.lockutils [req-47aa5ea0-511f-4dfe-bf46-9c6c13c7539f req-99584f61-5ed8-4c38-afef-940671fd9999 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:09 np0005486808 nova_compute[259627]: 2025-10-14 09:06:09.386 2 DEBUG oslo_concurrency.lockutils [req-47aa5ea0-511f-4dfe-bf46-9c6c13c7539f req-99584f61-5ed8-4c38-afef-940671fd9999 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:09 np0005486808 nova_compute[259627]: 2025-10-14 09:06:09.386 2 DEBUG nova.compute.manager [req-47aa5ea0-511f-4dfe-bf46-9c6c13c7539f req-99584f61-5ed8-4c38-afef-940671fd9999 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] No waiting events found dispatching network-vif-plugged-31797867-f0bc-4632-a658-bd0fae609c23 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:06:09 np0005486808 nova_compute[259627]: 2025-10-14 09:06:09.387 2 WARNING nova.compute.manager [req-47aa5ea0-511f-4dfe-bf46-9c6c13c7539f req-99584f61-5ed8-4c38-afef-940671fd9999 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Received unexpected event network-vif-plugged-31797867-f0bc-4632-a658-bd0fae609c23 for instance with vm_state stopped and task_state None.#033[00m
Oct 14 05:06:09 np0005486808 nova_compute[259627]: 2025-10-14 09:06:09.517 2 DEBUG oslo_concurrency.processutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8/disk.config a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:09 np0005486808 nova_compute[259627]: 2025-10-14 09:06:09.518 2 INFO nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Deleting local config drive /var/lib/nova/instances/a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8/disk.config because it was imported into RBD.#033[00m
Oct 14 05:06:09 np0005486808 NetworkManager[44885]: <info>  [1760432769.5841] manager: (tapdffa5a1f-65): new Tun device (/org/freedesktop/NetworkManager/Devices/249)
Oct 14 05:06:09 np0005486808 systemd-udevd[322277]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:06:09 np0005486808 kernel: tapdffa5a1f-65: entered promiscuous mode
Oct 14 05:06:09 np0005486808 nova_compute[259627]: 2025-10-14 09:06:09.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:09 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:09Z|00577|binding|INFO|Claiming lport dffa5a1f-657b-498e-bbe5-6540fead7fb6 for this chassis.
Oct 14 05:06:09 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:09Z|00578|binding|INFO|dffa5a1f-657b-498e-bbe5-6540fead7fb6: Claiming fa:16:3e:b4:40:de 10.100.0.8
Oct 14 05:06:09 np0005486808 nova_compute[259627]: 2025-10-14 09:06:09.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.605 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:40:de 10.100.0.8'], port_security=['fa:16:3e:b4:40:de 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2d149f-aebf-406a-aed2-5161dd22b079', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8945d892653642ac9c3d894f8bc3619d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e6192a40-cbd8-43eb-9955-4fede99ddb79', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e933e805-d9b6-497a-9ba3-5f7153390420, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=dffa5a1f-657b-498e-bbe5-6540fead7fb6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.607 162547 INFO neutron.agent.ovn.metadata.agent [-] Port dffa5a1f-657b-498e-bbe5-6540fead7fb6 in datapath fc2d149f-aebf-406a-aed2-5161dd22b079 bound to our chassis#033[00m
Oct 14 05:06:09 np0005486808 NetworkManager[44885]: <info>  [1760432769.6074] device (tapdffa5a1f-65): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.608 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc2d149f-aebf-406a-aed2-5161dd22b079#033[00m
Oct 14 05:06:09 np0005486808 NetworkManager[44885]: <info>  [1760432769.6105] device (tapdffa5a1f-65): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:06:09 np0005486808 systemd-machined[214636]: New machine qemu-72-instance-00000039.
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.626 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[260f0b8f-5acd-4740-8acf-7def4ebcafeb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.627 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfc2d149f-a1 in ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:06:09 np0005486808 nova_compute[259627]: 2025-10-14 09:06:09.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:09 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:09Z|00579|binding|INFO|Setting lport dffa5a1f-657b-498e-bbe5-6540fead7fb6 up in Southbound
Oct 14 05:06:09 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:09Z|00580|binding|INFO|Setting lport dffa5a1f-657b-498e-bbe5-6540fead7fb6 ovn-installed in OVS
Oct 14 05:06:09 np0005486808 nova_compute[259627]: 2025-10-14 09:06:09.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.630 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfc2d149f-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.630 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b12460ce-3873-458a-971d-8baecf560a3f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.631 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b3a92cde-f141-4e25-90d9-e20436e88940]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:09 np0005486808 systemd[1]: Started Virtual Machine qemu-72-instance-00000039.
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.649 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[40bc4ee7-6b9f-46bc-a5fd-a7b39cefdd6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.668 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c0c2760e-2eb5-47bd-96e0-f54813032ecc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.703 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[2ed334ad-96c7-48dd-8e10-ece9f5181485]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.709 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6347d271-5f30-45b9-81fc-d3b37d4f6717]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:09 np0005486808 NetworkManager[44885]: <info>  [1760432769.7142] manager: (tapfc2d149f-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/250)
Oct 14 05:06:09 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1517: 305 pgs: 305 active+clean; 213 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 379 KiB/s rd, 5.7 MiB/s wr, 147 op/s
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.754 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[572114a0-1cc4-4e86-9954-94f872e5dcf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.756 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[9a693c6a-aa7a-42f0-ba0a-1d4090e73c89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:09 np0005486808 NetworkManager[44885]: <info>  [1760432769.7813] device (tapfc2d149f-a0): carrier: link connected
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.785 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8966265a-595b-4902-bc00-20074f99e212]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.804 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[45a0b58b-304b-4371-bbdf-32d80b65cccf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc2d149f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:e7:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 654854, 'reachable_time': 26994, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322561, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.820 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6d07ef93-f315-4105-bbb1-f8a9f4857e7c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe32:e73e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 654854, 'tstamp': 654854}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322562, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.837 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6437eb22-466d-4df0-9ac6-9218db57a5fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc2d149f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:e7:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 654854, 'reachable_time': 26994, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 322563, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.866 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[93a77616-6792-4dea-a5a3-a1f46f838e2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.922 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b27565be-df30-42f2-89d6-f8cd945a9d8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.923 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc2d149f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.924 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.924 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc2d149f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:09 np0005486808 nova_compute[259627]: 2025-10-14 09:06:09.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:09 np0005486808 NetworkManager[44885]: <info>  [1760432769.9270] manager: (tapfc2d149f-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/251)
Oct 14 05:06:09 np0005486808 kernel: tapfc2d149f-a0: entered promiscuous mode
Oct 14 05:06:09 np0005486808 nova_compute[259627]: 2025-10-14 09:06:09.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.934 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc2d149f-a0, col_values=(('external_ids', {'iface-id': '156432ee-35b5-40a9-aded-8066933d9972'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:09 np0005486808 nova_compute[259627]: 2025-10-14 09:06:09.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:09 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:09Z|00581|binding|INFO|Releasing lport 156432ee-35b5-40a9-aded-8066933d9972 from this chassis (sb_readonly=0)
Oct 14 05:06:09 np0005486808 nova_compute[259627]: 2025-10-14 09:06:09.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:09 np0005486808 nova_compute[259627]: 2025-10-14 09:06:09.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.963 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fc2d149f-aebf-406a-aed2-5161dd22b079.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fc2d149f-aebf-406a-aed2-5161dd22b079.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.964 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4bbc39f1-ebab-4f8c-90fa-38587495581c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.964 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-fc2d149f-aebf-406a-aed2-5161dd22b079
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/fc2d149f-aebf-406a-aed2-5161dd22b079.pid.haproxy
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID fc2d149f-aebf-406a-aed2-5161dd22b079
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:06:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:09.965 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'env', 'PROCESS_TAG=haproxy-fc2d149f-aebf-406a-aed2-5161dd22b079', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fc2d149f-aebf-406a-aed2-5161dd22b079.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:06:10 np0005486808 podman[322643]: 2025-10-14 09:06:10.402221319 +0000 UTC m=+0.059754553 container create 0965aa24674b7a429e2b571bb5d6315cd793a560543408ad41a34da8f5fb4777 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:06:10 np0005486808 systemd[1]: Started libpod-conmon-0965aa24674b7a429e2b571bb5d6315cd793a560543408ad41a34da8f5fb4777.scope.
Oct 14 05:06:10 np0005486808 podman[322643]: 2025-10-14 09:06:10.366393767 +0000 UTC m=+0.023927051 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:06:10 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:06:10 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d2b9ef6811a39d51e9705ae92738e1dfb3f318ac7ca2b4d0ea3d271eb3fe0ad/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:06:10 np0005486808 podman[322643]: 2025-10-14 09:06:10.482572718 +0000 UTC m=+0.140105972 container init 0965aa24674b7a429e2b571bb5d6315cd793a560543408ad41a34da8f5fb4777 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:06:10 np0005486808 podman[322643]: 2025-10-14 09:06:10.488058423 +0000 UTC m=+0.145591657 container start 0965aa24674b7a429e2b571bb5d6315cd793a560543408ad41a34da8f5fb4777 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 14 05:06:10 np0005486808 neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079[322658]: [NOTICE]   (322662) : New worker (322664) forked
Oct 14 05:06:10 np0005486808 neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079[322658]: [NOTICE]   (322662) : Loading success.
Oct 14 05:06:10 np0005486808 nova_compute[259627]: 2025-10-14 09:06:10.545 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432770.5443027, a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:06:10 np0005486808 nova_compute[259627]: 2025-10-14 09:06:10.545 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] VM Started (Lifecycle Event)#033[00m
Oct 14 05:06:10 np0005486808 nova_compute[259627]: 2025-10-14 09:06:10.569 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:06:10 np0005486808 nova_compute[259627]: 2025-10-14 09:06:10.573 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432770.544775, a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:06:10 np0005486808 nova_compute[259627]: 2025-10-14 09:06:10.574 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:06:10 np0005486808 nova_compute[259627]: 2025-10-14 09:06:10.591 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:06:10 np0005486808 nova_compute[259627]: 2025-10-14 09:06:10.594 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:06:10 np0005486808 nova_compute[259627]: 2025-10-14 09:06:10.615 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:06:11 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1518: 305 pgs: 305 active+clean; 214 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 393 KiB/s rd, 5.7 MiB/s wr, 167 op/s
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.525 2 DEBUG nova.compute.manager [req-7f078095-0e65-44c1-8548-87267424c212 req-533f6a61-408b-46b5-8e87-b4fd1fff87fe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Received event network-vif-plugged-dffa5a1f-657b-498e-bbe5-6540fead7fb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.526 2 DEBUG oslo_concurrency.lockutils [req-7f078095-0e65-44c1-8548-87267424c212 req-533f6a61-408b-46b5-8e87-b4fd1fff87fe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.526 2 DEBUG oslo_concurrency.lockutils [req-7f078095-0e65-44c1-8548-87267424c212 req-533f6a61-408b-46b5-8e87-b4fd1fff87fe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.527 2 DEBUG oslo_concurrency.lockutils [req-7f078095-0e65-44c1-8548-87267424c212 req-533f6a61-408b-46b5-8e87-b4fd1fff87fe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.527 2 DEBUG nova.compute.manager [req-7f078095-0e65-44c1-8548-87267424c212 req-533f6a61-408b-46b5-8e87-b4fd1fff87fe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Processing event network-vif-plugged-dffa5a1f-657b-498e-bbe5-6540fead7fb6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.528 2 DEBUG nova.compute.manager [req-7f078095-0e65-44c1-8548-87267424c212 req-533f6a61-408b-46b5-8e87-b4fd1fff87fe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Received event network-vif-plugged-dffa5a1f-657b-498e-bbe5-6540fead7fb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.528 2 DEBUG oslo_concurrency.lockutils [req-7f078095-0e65-44c1-8548-87267424c212 req-533f6a61-408b-46b5-8e87-b4fd1fff87fe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.529 2 DEBUG oslo_concurrency.lockutils [req-7f078095-0e65-44c1-8548-87267424c212 req-533f6a61-408b-46b5-8e87-b4fd1fff87fe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.530 2 DEBUG oslo_concurrency.lockutils [req-7f078095-0e65-44c1-8548-87267424c212 req-533f6a61-408b-46b5-8e87-b4fd1fff87fe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.530 2 DEBUG nova.compute.manager [req-7f078095-0e65-44c1-8548-87267424c212 req-533f6a61-408b-46b5-8e87-b4fd1fff87fe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] No waiting events found dispatching network-vif-plugged-dffa5a1f-657b-498e-bbe5-6540fead7fb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.531 2 WARNING nova.compute.manager [req-7f078095-0e65-44c1-8548-87267424c212 req-533f6a61-408b-46b5-8e87-b4fd1fff87fe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Received unexpected event network-vif-plugged-dffa5a1f-657b-498e-bbe5-6540fead7fb6 for instance with vm_state building and task_state spawning.#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.532 2 DEBUG nova.compute.manager [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.536 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432772.5359058, a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.536 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.540 2 DEBUG nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.545 2 INFO nova.virt.libvirt.driver [-] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Instance spawned successfully.#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.545 2 DEBUG nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.570 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.579 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.586 2 DEBUG nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.587 2 DEBUG nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.587 2 DEBUG nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.588 2 DEBUG nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.589 2 DEBUG nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.589 2 DEBUG nova.virt.libvirt.driver [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.619 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.657 2 INFO nova.compute.manager [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Took 9.64 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.657 2 DEBUG nova.compute.manager [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:06:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.732 2 DEBUG oslo_concurrency.lockutils [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "ec31b9ab-88ab-4085-a46b-76cb9825061a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.732 2 DEBUG oslo_concurrency.lockutils [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.733 2 DEBUG oslo_concurrency.lockutils [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.733 2 DEBUG oslo_concurrency.lockutils [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.733 2 DEBUG oslo_concurrency.lockutils [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.734 2 INFO nova.compute.manager [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Terminating instance#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.736 2 DEBUG nova.compute.manager [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.743 2 INFO nova.virt.libvirt.driver [-] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Instance destroyed successfully.#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.745 2 DEBUG nova.objects.instance [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lazy-loading 'resources' on Instance uuid ec31b9ab-88ab-4085-a46b-76cb9825061a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.756 2 INFO nova.compute.manager [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Took 10.69 seconds to build instance.#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.773 2 DEBUG nova.virt.libvirt.vif [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-14T09:05:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-412231426',display_name='tempest-tempest.common.compute-instance-412231426',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-412231426',id=56,image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:06:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='4e47722c609640d3a70fee8dd6ff94cc',ramdisk_id='',reservation_id='r-v50tykwv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mo
del='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-894139105',owner_user_name='tempest-ServerActionsTestOtherA-894139105-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:06:08Z,user_data=None,user_id='d952679a4e6a4fc6bacf42c02d3e92d0',uuid=ec31b9ab-88ab-4085-a46b-76cb9825061a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "31797867-f0bc-4632-a658-bd0fae609c23", "address": "fa:16:3e:01:3d:6b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31797867-f0", "ovs_interfaceid": "31797867-f0bc-4632-a658-bd0fae609c23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.774 2 DEBUG nova.network.os_vif_util [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converting VIF {"id": "31797867-f0bc-4632-a658-bd0fae609c23", "address": "fa:16:3e:01:3d:6b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31797867-f0", "ovs_interfaceid": "31797867-f0bc-4632-a658-bd0fae609c23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.775 2 DEBUG nova.network.os_vif_util [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:3d:6b,bridge_name='br-int',has_traffic_filtering=True,id=31797867-f0bc-4632-a658-bd0fae609c23,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31797867-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.775 2 DEBUG os_vif [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:3d:6b,bridge_name='br-int',has_traffic_filtering=True,id=31797867-f0bc-4632-a658-bd0fae609c23,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31797867-f0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.779 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31797867-f0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.780 2 DEBUG oslo_concurrency.lockutils [None req-681cd631-f2d8-4083-9916-5d4d5442b09b 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.788s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:06:12 np0005486808 nova_compute[259627]: 2025-10-14 09:06:12.786 2 INFO os_vif [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:3d:6b,bridge_name='br-int',has_traffic_filtering=True,id=31797867-f0bc-4632-a658-bd0fae609c23,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31797867-f0')#033[00m
Oct 14 05:06:13 np0005486808 nova_compute[259627]: 2025-10-14 09:06:13.196 2 INFO nova.virt.libvirt.driver [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Deleting instance files /var/lib/nova/instances/ec31b9ab-88ab-4085-a46b-76cb9825061a_del#033[00m
Oct 14 05:06:13 np0005486808 nova_compute[259627]: 2025-10-14 09:06:13.198 2 INFO nova.virt.libvirt.driver [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Deletion of /var/lib/nova/instances/ec31b9ab-88ab-4085-a46b-76cb9825061a_del complete#033[00m
Oct 14 05:06:13 np0005486808 nova_compute[259627]: 2025-10-14 09:06:13.282 2 INFO nova.compute.manager [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Took 0.55 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:06:13 np0005486808 nova_compute[259627]: 2025-10-14 09:06:13.283 2 DEBUG oslo.service.loopingcall [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:06:13 np0005486808 nova_compute[259627]: 2025-10-14 09:06:13.284 2 DEBUG nova.compute.manager [-] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:06:13 np0005486808 nova_compute[259627]: 2025-10-14 09:06:13.285 2 DEBUG nova.network.neutron [-] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:06:13 np0005486808 nova_compute[259627]: 2025-10-14 09:06:13.577 2 DEBUG oslo_concurrency.lockutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:13 np0005486808 nova_compute[259627]: 2025-10-14 09:06:13.578 2 DEBUG oslo_concurrency.lockutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:13 np0005486808 nova_compute[259627]: 2025-10-14 09:06:13.600 2 DEBUG nova.compute.manager [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:06:13 np0005486808 nova_compute[259627]: 2025-10-14 09:06:13.682 2 DEBUG oslo_concurrency.lockutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:13 np0005486808 nova_compute[259627]: 2025-10-14 09:06:13.683 2 DEBUG oslo_concurrency.lockutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:13 np0005486808 nova_compute[259627]: 2025-10-14 09:06:13.695 2 DEBUG nova.virt.hardware [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:06:13 np0005486808 nova_compute[259627]: 2025-10-14 09:06:13.696 2 INFO nova.compute.claims [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:06:13 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1519: 305 pgs: 305 active+clean; 214 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 3.6 MiB/s wr, 101 op/s
Oct 14 05:06:13 np0005486808 nova_compute[259627]: 2025-10-14 09:06:13.868 2 DEBUG oslo_concurrency.processutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:06:13 np0005486808 nova_compute[259627]: 2025-10-14 09:06:13.916 2 DEBUG nova.network.neutron [-] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:06:13 np0005486808 nova_compute[259627]: 2025-10-14 09:06:13.934 2 INFO nova.compute.manager [-] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Took 0.65 seconds to deallocate network for instance.#033[00m
Oct 14 05:06:13 np0005486808 nova_compute[259627]: 2025-10-14 09:06:13.976 2 DEBUG oslo_concurrency.lockutils [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:14 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:06:14 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4270437326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:06:14 np0005486808 nova_compute[259627]: 2025-10-14 09:06:14.351 2 DEBUG oslo_concurrency.processutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:14 np0005486808 nova_compute[259627]: 2025-10-14 09:06:14.357 2 DEBUG nova.compute.provider_tree [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:06:14 np0005486808 nova_compute[259627]: 2025-10-14 09:06:14.372 2 DEBUG nova.scheduler.client.report [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:06:14 np0005486808 nova_compute[259627]: 2025-10-14 09:06:14.398 2 DEBUG oslo_concurrency.lockutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:14 np0005486808 nova_compute[259627]: 2025-10-14 09:06:14.399 2 DEBUG nova.compute.manager [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:06:14 np0005486808 nova_compute[259627]: 2025-10-14 09:06:14.403 2 DEBUG oslo_concurrency.lockutils [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.428s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:14 np0005486808 nova_compute[259627]: 2025-10-14 09:06:14.467 2 DEBUG nova.compute.manager [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:06:14 np0005486808 nova_compute[259627]: 2025-10-14 09:06:14.468 2 DEBUG nova.network.neutron [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:06:14 np0005486808 nova_compute[259627]: 2025-10-14 09:06:14.488 2 INFO nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:06:14 np0005486808 nova_compute[259627]: 2025-10-14 09:06:14.511 2 DEBUG nova.compute.manager [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:06:14 np0005486808 nova_compute[259627]: 2025-10-14 09:06:14.544 2 DEBUG oslo_concurrency.processutils [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:06:14 np0005486808 nova_compute[259627]: 2025-10-14 09:06:14.645 2 DEBUG nova.compute.manager [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:06:14 np0005486808 nova_compute[259627]: 2025-10-14 09:06:14.648 2 DEBUG nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:06:14 np0005486808 nova_compute[259627]: 2025-10-14 09:06:14.649 2 INFO nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Creating image(s)#033[00m
Oct 14 05:06:14 np0005486808 nova_compute[259627]: 2025-10-14 09:06:14.685 2 DEBUG nova.storage.rbd_utils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:06:14 np0005486808 nova_compute[259627]: 2025-10-14 09:06:14.717 2 DEBUG nova.storage.rbd_utils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:06:14 np0005486808 nova_compute[259627]: 2025-10-14 09:06:14.740 2 DEBUG nova.storage.rbd_utils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:06:14 np0005486808 nova_compute[259627]: 2025-10-14 09:06:14.743 2 DEBUG oslo_concurrency.processutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:06:14 np0005486808 nova_compute[259627]: 2025-10-14 09:06:14.781 2 DEBUG nova.policy [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a72439ec330b476ca4bb358682159b61', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd39581efff7d48fb83412ca1f615d412', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:06:14 np0005486808 nova_compute[259627]: 2025-10-14 09:06:14.786 2 DEBUG nova.compute.manager [req-b1510d54-4efa-4083-b982-b6ddd24a2d2f req-da68a5f3-ce98-4562-812c-c7d56eb33f72 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Received event network-vif-deleted-31797867-f0bc-4632-a658-bd0fae609c23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:06:14 np0005486808 nova_compute[259627]: 2025-10-14 09:06:14.822 2 DEBUG oslo_concurrency.processutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:14 np0005486808 nova_compute[259627]: 2025-10-14 09:06:14.823 2 DEBUG oslo_concurrency.lockutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:14 np0005486808 nova_compute[259627]: 2025-10-14 09:06:14.824 2 DEBUG oslo_concurrency.lockutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:14 np0005486808 nova_compute[259627]: 2025-10-14 09:06:14.824 2 DEBUG oslo_concurrency.lockutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:14 np0005486808 nova_compute[259627]: 2025-10-14 09:06:14.849 2 DEBUG nova.storage.rbd_utils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:06:14 np0005486808 nova_compute[259627]: 2025-10-14 09:06:14.852 2 DEBUG oslo_concurrency.processutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:06:14 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:06:14 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/69457542' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:06:14 np0005486808 nova_compute[259627]: 2025-10-14 09:06:14.976 2 DEBUG oslo_concurrency.processutils [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:14 np0005486808 nova_compute[259627]: 2025-10-14 09:06:14.982 2 DEBUG nova.compute.provider_tree [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:06:15 np0005486808 nova_compute[259627]: 2025-10-14 09:06:15.006 2 DEBUG nova.scheduler.client.report [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:06:15 np0005486808 nova_compute[259627]: 2025-10-14 09:06:15.045 2 DEBUG oslo_concurrency.lockutils [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:15 np0005486808 nova_compute[259627]: 2025-10-14 09:06:15.089 2 INFO nova.scheduler.client.report [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Deleted allocations for instance ec31b9ab-88ab-4085-a46b-76cb9825061a#033[00m
Oct 14 05:06:15 np0005486808 nova_compute[259627]: 2025-10-14 09:06:15.145 2 DEBUG oslo_concurrency.processutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:15 np0005486808 nova_compute[259627]: 2025-10-14 09:06:15.199 2 DEBUG oslo_concurrency.lockutils [None req-166fc99d-8a10-48c0-8c66-4187af98fd8e d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "ec31b9ab-88ab-4085-a46b-76cb9825061a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.467s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:15 np0005486808 nova_compute[259627]: 2025-10-14 09:06:15.205 2 DEBUG nova.storage.rbd_utils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] resizing rbd image 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:06:15 np0005486808 nova_compute[259627]: 2025-10-14 09:06:15.292 2 DEBUG nova.objects.instance [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lazy-loading 'migration_context' on Instance uuid 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:06:15 np0005486808 nova_compute[259627]: 2025-10-14 09:06:15.309 2 DEBUG nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:06:15 np0005486808 nova_compute[259627]: 2025-10-14 09:06:15.310 2 DEBUG nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Ensure instance console log exists: /var/lib/nova/instances/97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:06:15 np0005486808 nova_compute[259627]: 2025-10-14 09:06:15.310 2 DEBUG oslo_concurrency.lockutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:15 np0005486808 nova_compute[259627]: 2025-10-14 09:06:15.310 2 DEBUG oslo_concurrency.lockutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:15 np0005486808 nova_compute[259627]: 2025-10-14 09:06:15.311 2 DEBUG oslo_concurrency.lockutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:15 np0005486808 nova_compute[259627]: 2025-10-14 09:06:15.473 2 DEBUG nova.network.neutron [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Successfully created port: 1550cd45-1c1e-4505-8762-fb1668990b8f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:06:15 np0005486808 nova_compute[259627]: 2025-10-14 09:06:15.514 2 DEBUG nova.compute.manager [req-554431a8-d8c8-434a-a779-4a07ca2a9a3b req-9d4ece7e-c96b-48e4-a8ac-c67b01702d9b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Received event network-changed-dffa5a1f-657b-498e-bbe5-6540fead7fb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:06:15 np0005486808 nova_compute[259627]: 2025-10-14 09:06:15.515 2 DEBUG nova.compute.manager [req-554431a8-d8c8-434a-a779-4a07ca2a9a3b req-9d4ece7e-c96b-48e4-a8ac-c67b01702d9b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Refreshing instance network info cache due to event network-changed-dffa5a1f-657b-498e-bbe5-6540fead7fb6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:06:15 np0005486808 nova_compute[259627]: 2025-10-14 09:06:15.516 2 DEBUG oslo_concurrency.lockutils [req-554431a8-d8c8-434a-a779-4a07ca2a9a3b req-9d4ece7e-c96b-48e4-a8ac-c67b01702d9b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:06:15 np0005486808 nova_compute[259627]: 2025-10-14 09:06:15.516 2 DEBUG oslo_concurrency.lockutils [req-554431a8-d8c8-434a-a779-4a07ca2a9a3b req-9d4ece7e-c96b-48e4-a8ac-c67b01702d9b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:06:15 np0005486808 nova_compute[259627]: 2025-10-14 09:06:15.516 2 DEBUG nova.network.neutron [req-554431a8-d8c8-434a-a779-4a07ca2a9a3b req-9d4ece7e-c96b-48e4-a8ac-c67b01702d9b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Refreshing network info cache for port dffa5a1f-657b-498e-bbe5-6540fead7fb6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:06:15 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1520: 305 pgs: 305 active+clean; 200 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.8 MiB/s wr, 218 op/s
Oct 14 05:06:16 np0005486808 nova_compute[259627]: 2025-10-14 09:06:16.245 2 DEBUG nova.network.neutron [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Successfully updated port: 1550cd45-1c1e-4505-8762-fb1668990b8f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:06:16 np0005486808 nova_compute[259627]: 2025-10-14 09:06:16.266 2 DEBUG oslo_concurrency.lockutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "refresh_cache-97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:06:16 np0005486808 nova_compute[259627]: 2025-10-14 09:06:16.267 2 DEBUG oslo_concurrency.lockutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquired lock "refresh_cache-97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:06:16 np0005486808 nova_compute[259627]: 2025-10-14 09:06:16.267 2 DEBUG nova.network.neutron [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:06:16 np0005486808 nova_compute[259627]: 2025-10-14 09:06:16.519 2 DEBUG nova.network.neutron [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:06:17 np0005486808 nova_compute[259627]: 2025-10-14 09:06:17.002 2 DEBUG nova.compute.manager [req-eb3ab860-32b2-4d78-b25b-62b4c4f4975f req-22e98623-ca12-4710-98c8-91910e53ecbb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Received event network-changed-1550cd45-1c1e-4505-8762-fb1668990b8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:06:17 np0005486808 nova_compute[259627]: 2025-10-14 09:06:17.003 2 DEBUG nova.compute.manager [req-eb3ab860-32b2-4d78-b25b-62b4c4f4975f req-22e98623-ca12-4710-98c8-91910e53ecbb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Refreshing instance network info cache due to event network-changed-1550cd45-1c1e-4505-8762-fb1668990b8f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:06:17 np0005486808 nova_compute[259627]: 2025-10-14 09:06:17.003 2 DEBUG oslo_concurrency.lockutils [req-eb3ab860-32b2-4d78-b25b-62b4c4f4975f req-22e98623-ca12-4710-98c8-91910e53ecbb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:06:17 np0005486808 nova_compute[259627]: 2025-10-14 09:06:17.142 2 DEBUG nova.network.neutron [req-554431a8-d8c8-434a-a779-4a07ca2a9a3b req-9d4ece7e-c96b-48e4-a8ac-c67b01702d9b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Updated VIF entry in instance network info cache for port dffa5a1f-657b-498e-bbe5-6540fead7fb6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:06:17 np0005486808 nova_compute[259627]: 2025-10-14 09:06:17.143 2 DEBUG nova.network.neutron [req-554431a8-d8c8-434a-a779-4a07ca2a9a3b req-9d4ece7e-c96b-48e4-a8ac-c67b01702d9b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Updating instance_info_cache with network_info: [{"id": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "address": "fa:16:3e:b4:40:de", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdffa5a1f-65", "ovs_interfaceid": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:06:17 np0005486808 nova_compute[259627]: 2025-10-14 09:06:17.167 2 DEBUG oslo_concurrency.lockutils [req-554431a8-d8c8-434a-a779-4a07ca2a9a3b req-9d4ece7e-c96b-48e4-a8ac-c67b01702d9b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:06:17 np0005486808 nova_compute[259627]: 2025-10-14 09:06:17.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:06:17 np0005486808 nova_compute[259627]: 2025-10-14 09:06:17.719 2 DEBUG nova.network.neutron [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Updating instance_info_cache with network_info: [{"id": "1550cd45-1c1e-4505-8762-fb1668990b8f", "address": "fa:16:3e:c5:61:e3", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1550cd45-1c", "ovs_interfaceid": "1550cd45-1c1e-4505-8762-fb1668990b8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:06:17 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1521: 305 pgs: 305 active+clean; 200 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 136 op/s
Oct 14 05:06:17 np0005486808 nova_compute[259627]: 2025-10-14 09:06:17.760 2 DEBUG oslo_concurrency.lockutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Releasing lock "refresh_cache-97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:06:17 np0005486808 nova_compute[259627]: 2025-10-14 09:06:17.761 2 DEBUG nova.compute.manager [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Instance network_info: |[{"id": "1550cd45-1c1e-4505-8762-fb1668990b8f", "address": "fa:16:3e:c5:61:e3", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1550cd45-1c", "ovs_interfaceid": "1550cd45-1c1e-4505-8762-fb1668990b8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:06:17 np0005486808 nova_compute[259627]: 2025-10-14 09:06:17.763 2 DEBUG oslo_concurrency.lockutils [req-eb3ab860-32b2-4d78-b25b-62b4c4f4975f req-22e98623-ca12-4710-98c8-91910e53ecbb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:06:17 np0005486808 nova_compute[259627]: 2025-10-14 09:06:17.763 2 DEBUG nova.network.neutron [req-eb3ab860-32b2-4d78-b25b-62b4c4f4975f req-22e98623-ca12-4710-98c8-91910e53ecbb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Refreshing network info cache for port 1550cd45-1c1e-4505-8762-fb1668990b8f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:06:17 np0005486808 nova_compute[259627]: 2025-10-14 09:06:17.768 2 DEBUG nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Start _get_guest_xml network_info=[{"id": "1550cd45-1c1e-4505-8762-fb1668990b8f", "address": "fa:16:3e:c5:61:e3", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1550cd45-1c", "ovs_interfaceid": "1550cd45-1c1e-4505-8762-fb1668990b8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:06:17 np0005486808 nova_compute[259627]: 2025-10-14 09:06:17.775 2 WARNING nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:06:17 np0005486808 nova_compute[259627]: 2025-10-14 09:06:17.781 2 DEBUG nova.virt.libvirt.host [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:06:17 np0005486808 nova_compute[259627]: 2025-10-14 09:06:17.782 2 DEBUG nova.virt.libvirt.host [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:06:17 np0005486808 nova_compute[259627]: 2025-10-14 09:06:17.795 2 DEBUG nova.virt.libvirt.host [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:06:17 np0005486808 nova_compute[259627]: 2025-10-14 09:06:17.796 2 DEBUG nova.virt.libvirt.host [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:06:17 np0005486808 nova_compute[259627]: 2025-10-14 09:06:17.797 2 DEBUG nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:06:17 np0005486808 nova_compute[259627]: 2025-10-14 09:06:17.797 2 DEBUG nova.virt.hardware [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:06:17 np0005486808 nova_compute[259627]: 2025-10-14 09:06:17.798 2 DEBUG nova.virt.hardware [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:06:17 np0005486808 nova_compute[259627]: 2025-10-14 09:06:17.799 2 DEBUG nova.virt.hardware [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:06:17 np0005486808 nova_compute[259627]: 2025-10-14 09:06:17.800 2 DEBUG nova.virt.hardware [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:06:17 np0005486808 nova_compute[259627]: 2025-10-14 09:06:17.800 2 DEBUG nova.virt.hardware [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:06:17 np0005486808 nova_compute[259627]: 2025-10-14 09:06:17.801 2 DEBUG nova.virt.hardware [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:06:17 np0005486808 nova_compute[259627]: 2025-10-14 09:06:17.801 2 DEBUG nova.virt.hardware [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:06:17 np0005486808 nova_compute[259627]: 2025-10-14 09:06:17.802 2 DEBUG nova.virt.hardware [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:06:17 np0005486808 nova_compute[259627]: 2025-10-14 09:06:17.802 2 DEBUG nova.virt.hardware [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:06:17 np0005486808 nova_compute[259627]: 2025-10-14 09:06:17.803 2 DEBUG nova.virt.hardware [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:06:17 np0005486808 nova_compute[259627]: 2025-10-14 09:06:17.804 2 DEBUG nova.virt.hardware [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:06:17 np0005486808 nova_compute[259627]: 2025-10-14 09:06:17.808 2 DEBUG oslo_concurrency.processutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:06:17 np0005486808 nova_compute[259627]: 2025-10-14 09:06:17.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:17 np0005486808 nova_compute[259627]: 2025-10-14 09:06:17.882 2 DEBUG oslo_concurrency.lockutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:17 np0005486808 nova_compute[259627]: 2025-10-14 09:06:17.883 2 DEBUG oslo_concurrency.lockutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:17 np0005486808 nova_compute[259627]: 2025-10-14 09:06:17.904 2 DEBUG nova.compute.manager [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:06:17 np0005486808 nova_compute[259627]: 2025-10-14 09:06:17.974 2 DEBUG oslo_concurrency.lockutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:17 np0005486808 nova_compute[259627]: 2025-10-14 09:06:17.974 2 DEBUG oslo_concurrency.lockutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:17 np0005486808 nova_compute[259627]: 2025-10-14 09:06:17.981 2 DEBUG nova.virt.hardware [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:06:17 np0005486808 nova_compute[259627]: 2025-10-14 09:06:17.982 2 INFO nova.compute.claims [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:06:18 np0005486808 nova_compute[259627]: 2025-10-14 09:06:18.148 2 DEBUG oslo_concurrency.processutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:06:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:06:18 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/510970167' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:06:18 np0005486808 nova_compute[259627]: 2025-10-14 09:06:18.266 2 DEBUG oslo_concurrency.processutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:18 np0005486808 nova_compute[259627]: 2025-10-14 09:06:18.290 2 DEBUG nova.storage.rbd_utils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:06:18 np0005486808 nova_compute[259627]: 2025-10-14 09:06:18.294 2 DEBUG oslo_concurrency.processutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:06:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:06:18 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2757796224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:06:18 np0005486808 nova_compute[259627]: 2025-10-14 09:06:18.674 2 DEBUG oslo_concurrency.processutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:18 np0005486808 nova_compute[259627]: 2025-10-14 09:06:18.683 2 DEBUG nova.compute.provider_tree [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:06:18 np0005486808 nova_compute[259627]: 2025-10-14 09:06:18.707 2 DEBUG nova.scheduler.client.report [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:06:18 np0005486808 nova_compute[259627]: 2025-10-14 09:06:18.728 2 DEBUG nova.network.neutron [req-eb3ab860-32b2-4d78-b25b-62b4c4f4975f req-22e98623-ca12-4710-98c8-91910e53ecbb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Updated VIF entry in instance network info cache for port 1550cd45-1c1e-4505-8762-fb1668990b8f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:06:18 np0005486808 nova_compute[259627]: 2025-10-14 09:06:18.729 2 DEBUG nova.network.neutron [req-eb3ab860-32b2-4d78-b25b-62b4c4f4975f req-22e98623-ca12-4710-98c8-91910e53ecbb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Updating instance_info_cache with network_info: [{"id": "1550cd45-1c1e-4505-8762-fb1668990b8f", "address": "fa:16:3e:c5:61:e3", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1550cd45-1c", "ovs_interfaceid": "1550cd45-1c1e-4505-8762-fb1668990b8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:06:18 np0005486808 nova_compute[259627]: 2025-10-14 09:06:18.738 2 DEBUG oslo_concurrency.lockutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:18 np0005486808 nova_compute[259627]: 2025-10-14 09:06:18.739 2 DEBUG nova.compute.manager [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:06:18 np0005486808 nova_compute[259627]: 2025-10-14 09:06:18.746 2 DEBUG oslo_concurrency.lockutils [req-eb3ab860-32b2-4d78-b25b-62b4c4f4975f req-22e98623-ca12-4710-98c8-91910e53ecbb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:06:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:06:18 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3568574214' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:06:18 np0005486808 nova_compute[259627]: 2025-10-14 09:06:18.798 2 DEBUG nova.compute.manager [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:06:18 np0005486808 nova_compute[259627]: 2025-10-14 09:06:18.799 2 DEBUG nova.network.neutron [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:06:18 np0005486808 nova_compute[259627]: 2025-10-14 09:06:18.812 2 DEBUG oslo_concurrency.processutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:18 np0005486808 nova_compute[259627]: 2025-10-14 09:06:18.814 2 DEBUG nova.virt.libvirt.vif [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:06:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1220502864',display_name='tempest-DeleteServersTestJSON-server-1220502864',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1220502864',id=58,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d39581efff7d48fb83412ca1f615d412',ramdisk_id='',reservation_id='r-a7evba7v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-555285866',owner_user_name='tempest-DeleteServersTestJSON-555285866-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:06:14Z,user_data=None,user_id='a72439ec330b476ca4bb358682159b61',uuid=97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1550cd45-1c1e-4505-8762-fb1668990b8f", "address": "fa:16:3e:c5:61:e3", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1550cd45-1c", "ovs_interfaceid": "1550cd45-1c1e-4505-8762-fb1668990b8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:06:18 np0005486808 nova_compute[259627]: 2025-10-14 09:06:18.815 2 DEBUG nova.network.os_vif_util [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converting VIF {"id": "1550cd45-1c1e-4505-8762-fb1668990b8f", "address": "fa:16:3e:c5:61:e3", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1550cd45-1c", "ovs_interfaceid": "1550cd45-1c1e-4505-8762-fb1668990b8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:06:18 np0005486808 nova_compute[259627]: 2025-10-14 09:06:18.816 2 DEBUG nova.network.os_vif_util [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:61:e3,bridge_name='br-int',has_traffic_filtering=True,id=1550cd45-1c1e-4505-8762-fb1668990b8f,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1550cd45-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:06:18 np0005486808 nova_compute[259627]: 2025-10-14 09:06:18.818 2 DEBUG nova.objects.instance [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lazy-loading 'pci_devices' on Instance uuid 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:06:18 np0005486808 nova_compute[259627]: 2025-10-14 09:06:18.821 2 INFO nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:06:18 np0005486808 nova_compute[259627]: 2025-10-14 09:06:18.839 2 DEBUG nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:06:18 np0005486808 nova_compute[259627]:  <uuid>97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce</uuid>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:  <name>instance-0000003a</name>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:06:18 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:      <nova:name>tempest-DeleteServersTestJSON-server-1220502864</nova:name>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:06:17</nova:creationTime>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:06:18 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:        <nova:user uuid="a72439ec330b476ca4bb358682159b61">tempest-DeleteServersTestJSON-555285866-project-member</nova:user>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:        <nova:project uuid="d39581efff7d48fb83412ca1f615d412">tempest-DeleteServersTestJSON-555285866</nova:project>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:        <nova:port uuid="1550cd45-1c1e-4505-8762-fb1668990b8f">
Oct 14 05:06:18 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:      <entry name="serial">97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce</entry>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:      <entry name="uuid">97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce</entry>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:06:18 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce_disk">
Oct 14 05:06:18 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:06:18 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:06:18 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce_disk.config">
Oct 14 05:06:18 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:06:18 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:06:18 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:c5:61:e3"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:      <target dev="tap1550cd45-1c"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:06:18 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce/console.log" append="off"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:06:18 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:06:18 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:06:18 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:06:18 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:06:18 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:06:18 np0005486808 nova_compute[259627]: 2025-10-14 09:06:18.850 2 DEBUG nova.compute.manager [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Preparing to wait for external event network-vif-plugged-1550cd45-1c1e-4505-8762-fb1668990b8f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:06:18 np0005486808 nova_compute[259627]: 2025-10-14 09:06:18.850 2 DEBUG oslo_concurrency.lockutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:18 np0005486808 nova_compute[259627]: 2025-10-14 09:06:18.851 2 DEBUG oslo_concurrency.lockutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:18 np0005486808 nova_compute[259627]: 2025-10-14 09:06:18.851 2 DEBUG oslo_concurrency.lockutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:18 np0005486808 nova_compute[259627]: 2025-10-14 09:06:18.852 2 DEBUG nova.virt.libvirt.vif [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:06:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1220502864',display_name='tempest-DeleteServersTestJSON-server-1220502864',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1220502864',id=58,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d39581efff7d48fb83412ca1f615d412',ramdisk_id='',reservation_id='r-a7evba7v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-555285866',owner_user_name='tempest-DeleteServersTestJSON-555285866-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:06:14Z,user_data=None,user_id='a72439ec330b476ca4bb358682159b61',uuid=97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1550cd45-1c1e-4505-8762-fb1668990b8f", "address": "fa:16:3e:c5:61:e3", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1550cd45-1c", "ovs_interfaceid": "1550cd45-1c1e-4505-8762-fb1668990b8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:06:18 np0005486808 nova_compute[259627]: 2025-10-14 09:06:18.853 2 DEBUG nova.network.os_vif_util [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converting VIF {"id": "1550cd45-1c1e-4505-8762-fb1668990b8f", "address": "fa:16:3e:c5:61:e3", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1550cd45-1c", "ovs_interfaceid": "1550cd45-1c1e-4505-8762-fb1668990b8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:06:18 np0005486808 nova_compute[259627]: 2025-10-14 09:06:18.854 2 DEBUG nova.network.os_vif_util [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:61:e3,bridge_name='br-int',has_traffic_filtering=True,id=1550cd45-1c1e-4505-8762-fb1668990b8f,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1550cd45-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:06:18 np0005486808 nova_compute[259627]: 2025-10-14 09:06:18.854 2 DEBUG os_vif [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:61:e3,bridge_name='br-int',has_traffic_filtering=True,id=1550cd45-1c1e-4505-8762-fb1668990b8f,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1550cd45-1c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:06:18 np0005486808 nova_compute[259627]: 2025-10-14 09:06:18.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:18 np0005486808 nova_compute[259627]: 2025-10-14 09:06:18.860 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:18 np0005486808 nova_compute[259627]: 2025-10-14 09:06:18.861 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:06:18 np0005486808 nova_compute[259627]: 2025-10-14 09:06:18.861 2 DEBUG nova.compute.manager [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:06:18 np0005486808 nova_compute[259627]: 2025-10-14 09:06:18.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:18 np0005486808 nova_compute[259627]: 2025-10-14 09:06:18.867 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1550cd45-1c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:18 np0005486808 nova_compute[259627]: 2025-10-14 09:06:18.868 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1550cd45-1c, col_values=(('external_ids', {'iface-id': '1550cd45-1c1e-4505-8762-fb1668990b8f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c5:61:e3', 'vm-uuid': '97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:18 np0005486808 nova_compute[259627]: 2025-10-14 09:06:18.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:18 np0005486808 NetworkManager[44885]: <info>  [1760432778.9140] manager: (tap1550cd45-1c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/252)
Oct 14 05:06:18 np0005486808 nova_compute[259627]: 2025-10-14 09:06:18.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:06:18 np0005486808 nova_compute[259627]: 2025-10-14 09:06:18.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:18 np0005486808 nova_compute[259627]: 2025-10-14 09:06:18.925 2 INFO os_vif [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:61:e3,bridge_name='br-int',has_traffic_filtering=True,id=1550cd45-1c1e-4505-8762-fb1668990b8f,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1550cd45-1c')#033[00m
Oct 14 05:06:18 np0005486808 nova_compute[259627]: 2025-10-14 09:06:18.951 2 DEBUG nova.compute.manager [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:06:18 np0005486808 nova_compute[259627]: 2025-10-14 09:06:18.953 2 DEBUG nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:06:18 np0005486808 nova_compute[259627]: 2025-10-14 09:06:18.954 2 INFO nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Creating image(s)#033[00m
Oct 14 05:06:18 np0005486808 nova_compute[259627]: 2025-10-14 09:06:18.991 2 DEBUG nova.storage.rbd_utils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image 2189eac5-238f-4f09-ae1c-1cf47c3b6030_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:06:19 np0005486808 nova_compute[259627]: 2025-10-14 09:06:19.027 2 DEBUG nova.storage.rbd_utils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image 2189eac5-238f-4f09-ae1c-1cf47c3b6030_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:06:19 np0005486808 nova_compute[259627]: 2025-10-14 09:06:19.062 2 DEBUG nova.storage.rbd_utils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image 2189eac5-238f-4f09-ae1c-1cf47c3b6030_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:06:19 np0005486808 nova_compute[259627]: 2025-10-14 09:06:19.067 2 DEBUG oslo_concurrency.processutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:06:19 np0005486808 nova_compute[259627]: 2025-10-14 09:06:19.136 2 DEBUG nova.policy [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8b0f0f2991214b9caed6f475a149c342', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8945d892653642ac9c3d894f8bc3619d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:06:19 np0005486808 nova_compute[259627]: 2025-10-14 09:06:19.160 2 DEBUG oslo_concurrency.processutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:19 np0005486808 nova_compute[259627]: 2025-10-14 09:06:19.160 2 DEBUG oslo_concurrency.lockutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:19 np0005486808 nova_compute[259627]: 2025-10-14 09:06:19.161 2 DEBUG oslo_concurrency.lockutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:19 np0005486808 nova_compute[259627]: 2025-10-14 09:06:19.161 2 DEBUG oslo_concurrency.lockutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:19 np0005486808 nova_compute[259627]: 2025-10-14 09:06:19.184 2 DEBUG nova.storage.rbd_utils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image 2189eac5-238f-4f09-ae1c-1cf47c3b6030_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:06:19 np0005486808 nova_compute[259627]: 2025-10-14 09:06:19.188 2 DEBUG oslo_concurrency.processutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 2189eac5-238f-4f09-ae1c-1cf47c3b6030_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:06:19 np0005486808 nova_compute[259627]: 2025-10-14 09:06:19.220 2 DEBUG nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:06:19 np0005486808 nova_compute[259627]: 2025-10-14 09:06:19.221 2 DEBUG nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:06:19 np0005486808 nova_compute[259627]: 2025-10-14 09:06:19.221 2 DEBUG nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] No VIF found with MAC fa:16:3e:c5:61:e3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:06:19 np0005486808 nova_compute[259627]: 2025-10-14 09:06:19.222 2 INFO nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Using config drive#033[00m
Oct 14 05:06:19 np0005486808 nova_compute[259627]: 2025-10-14 09:06:19.248 2 DEBUG nova.storage.rbd_utils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:06:19 np0005486808 nova_compute[259627]: 2025-10-14 09:06:19.455 2 DEBUG oslo_concurrency.processutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 2189eac5-238f-4f09-ae1c-1cf47c3b6030_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.267s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:19 np0005486808 nova_compute[259627]: 2025-10-14 09:06:19.515 2 DEBUG nova.storage.rbd_utils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] resizing rbd image 2189eac5-238f-4f09-ae1c-1cf47c3b6030_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:06:19 np0005486808 nova_compute[259627]: 2025-10-14 09:06:19.652 2 DEBUG nova.objects.instance [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'migration_context' on Instance uuid 2189eac5-238f-4f09-ae1c-1cf47c3b6030 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:06:19 np0005486808 nova_compute[259627]: 2025-10-14 09:06:19.664 2 DEBUG nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:06:19 np0005486808 nova_compute[259627]: 2025-10-14 09:06:19.665 2 DEBUG nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Ensure instance console log exists: /var/lib/nova/instances/2189eac5-238f-4f09-ae1c-1cf47c3b6030/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:06:19 np0005486808 nova_compute[259627]: 2025-10-14 09:06:19.666 2 DEBUG oslo_concurrency.lockutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:19 np0005486808 nova_compute[259627]: 2025-10-14 09:06:19.666 2 DEBUG oslo_concurrency.lockutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:19 np0005486808 nova_compute[259627]: 2025-10-14 09:06:19.667 2 DEBUG oslo_concurrency.lockutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:19 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1522: 305 pgs: 305 active+clean; 200 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 136 op/s
Oct 14 05:06:19 np0005486808 nova_compute[259627]: 2025-10-14 09:06:19.781 2 INFO nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Creating config drive at /var/lib/nova/instances/97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce/disk.config#033[00m
Oct 14 05:06:19 np0005486808 nova_compute[259627]: 2025-10-14 09:06:19.791 2 DEBUG oslo_concurrency.processutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuvfr0hdm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:06:19 np0005486808 nova_compute[259627]: 2025-10-14 09:06:19.954 2 DEBUG oslo_concurrency.processutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuvfr0hdm" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:19 np0005486808 nova_compute[259627]: 2025-10-14 09:06:19.998 2 DEBUG nova.storage.rbd_utils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:06:20 np0005486808 nova_compute[259627]: 2025-10-14 09:06:20.004 2 DEBUG oslo_concurrency.processutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce/disk.config 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:06:20 np0005486808 nova_compute[259627]: 2025-10-14 09:06:20.046 2 DEBUG nova.network.neutron [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Successfully created port: 350a3bec-5dbd-4a83-8d80-5796be0319fd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:06:20 np0005486808 nova_compute[259627]: 2025-10-14 09:06:20.182 2 DEBUG oslo_concurrency.processutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce/disk.config 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:20 np0005486808 nova_compute[259627]: 2025-10-14 09:06:20.183 2 INFO nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Deleting local config drive /var/lib/nova/instances/97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce/disk.config because it was imported into RBD.#033[00m
Oct 14 05:06:20 np0005486808 NetworkManager[44885]: <info>  [1760432780.2250] manager: (tap1550cd45-1c): new Tun device (/org/freedesktop/NetworkManager/Devices/253)
Oct 14 05:06:20 np0005486808 kernel: tap1550cd45-1c: entered promiscuous mode
Oct 14 05:06:20 np0005486808 nova_compute[259627]: 2025-10-14 09:06:20.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:20 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:20Z|00582|binding|INFO|Claiming lport 1550cd45-1c1e-4505-8762-fb1668990b8f for this chassis.
Oct 14 05:06:20 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:20Z|00583|binding|INFO|1550cd45-1c1e-4505-8762-fb1668990b8f: Claiming fa:16:3e:c5:61:e3 10.100.0.9
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.278 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:61:e3 10.100.0.9'], port_security=['fa:16:3e:c5:61:e3 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd39581efff7d48fb83412ca1f615d412', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dd5e08c6-d8c1-45fb-85db-6ce5c46fea39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c165cf5-3bad-4110-8321-7f7bda9723ad, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=1550cd45-1c1e-4505-8762-fb1668990b8f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.279 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 1550cd45-1c1e-4505-8762-fb1668990b8f in datapath 0a07d59e-be8b-4d41-a103-fb5a64bf6f88 bound to our chassis#033[00m
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.280 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0a07d59e-be8b-4d41-a103-fb5a64bf6f88#033[00m
Oct 14 05:06:20 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:20Z|00584|binding|INFO|Setting lport 1550cd45-1c1e-4505-8762-fb1668990b8f ovn-installed in OVS
Oct 14 05:06:20 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:20Z|00585|binding|INFO|Setting lport 1550cd45-1c1e-4505-8762-fb1668990b8f up in Southbound
Oct 14 05:06:20 np0005486808 nova_compute[259627]: 2025-10-14 09:06:20.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.300 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e7c81774-9bc6-42a1-a32f-795f022fc506]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.301 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0a07d59e-b1 in ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:06:20 np0005486808 systemd-machined[214636]: New machine qemu-73-instance-0000003a.
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.303 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0a07d59e-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.303 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[931ae6e0-2572-4d68-aeef-b9880caf1d3a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.304 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dc2c3fac-b5e7-4347-a0bc-b116691cd3a2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:20 np0005486808 systemd[1]: Started Virtual Machine qemu-73-instance-0000003a.
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.314 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[ce274265-7eb9-4868-ad3e-7b4ba7af8af4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:20 np0005486808 systemd-udevd[323229]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.326 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8fcf0b1b-cc4b-4bce-ab16-72c31d6ec2a2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:20 np0005486808 NetworkManager[44885]: <info>  [1760432780.3421] device (tap1550cd45-1c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:06:20 np0005486808 NetworkManager[44885]: <info>  [1760432780.3429] device (tap1550cd45-1c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.360 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[96a6ef19-39ee-4494-a7ec-6723523bb5d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.365 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[10af2398-9734-4465-b47e-7a26a896094d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:20 np0005486808 NetworkManager[44885]: <info>  [1760432780.3662] manager: (tap0a07d59e-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/254)
Oct 14 05:06:20 np0005486808 systemd-udevd[323233]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.408 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[180a7f93-12d1-45d7-9d38-d9db2e160b33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.411 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4a50f6cc-8781-4ebb-b05a-d78e42a90b87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:20 np0005486808 NetworkManager[44885]: <info>  [1760432780.4315] device (tap0a07d59e-b0): carrier: link connected
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.435 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[7505253c-1888-4ebc-9e39-177b6aa12fb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.455 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b92e6009-66ee-4217-9a14-1c584a31a343]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0a07d59e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:2e:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 172], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655919, 'reachable_time': 33913, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323259, 'error': None, 'target': 'ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.471 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[16d51723-c8c1-4fec-8811-3795b67a8c36]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefa:2e1a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 655919, 'tstamp': 655919}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323260, 'error': None, 'target': 'ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.493 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8cd93ee1-8268-47ec-9227-d02ae541eb26]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0a07d59e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:2e:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 172], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655919, 'reachable_time': 33913, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 323261, 'error': None, 'target': 'ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.528 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a4115a7c-c777-49d2-8c99-5f8cb10cbdae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:20 np0005486808 nova_compute[259627]: 2025-10-14 09:06:20.601 2 DEBUG nova.compute.manager [req-b52afbf6-46a5-4cba-a48d-f9ee2ec2d506 req-138f48fd-516f-40e0-840c-6c1aedd2dcf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Received event network-vif-plugged-1550cd45-1c1e-4505-8762-fb1668990b8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:06:20 np0005486808 nova_compute[259627]: 2025-10-14 09:06:20.603 2 DEBUG oslo_concurrency.lockutils [req-b52afbf6-46a5-4cba-a48d-f9ee2ec2d506 req-138f48fd-516f-40e0-840c-6c1aedd2dcf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:20 np0005486808 nova_compute[259627]: 2025-10-14 09:06:20.603 2 DEBUG oslo_concurrency.lockutils [req-b52afbf6-46a5-4cba-a48d-f9ee2ec2d506 req-138f48fd-516f-40e0-840c-6c1aedd2dcf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:20 np0005486808 nova_compute[259627]: 2025-10-14 09:06:20.604 2 DEBUG oslo_concurrency.lockutils [req-b52afbf6-46a5-4cba-a48d-f9ee2ec2d506 req-138f48fd-516f-40e0-840c-6c1aedd2dcf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:20 np0005486808 nova_compute[259627]: 2025-10-14 09:06:20.605 2 DEBUG nova.compute.manager [req-b52afbf6-46a5-4cba-a48d-f9ee2ec2d506 req-138f48fd-516f-40e0-840c-6c1aedd2dcf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Processing event network-vif-plugged-1550cd45-1c1e-4505-8762-fb1668990b8f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.616 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2e25d245-c90b-4304-ad3c-4f2c8ea05625]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.617 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a07d59e-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.618 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.618 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0a07d59e-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:20 np0005486808 nova_compute[259627]: 2025-10-14 09:06:20.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:20 np0005486808 kernel: tap0a07d59e-b0: entered promiscuous mode
Oct 14 05:06:20 np0005486808 NetworkManager[44885]: <info>  [1760432780.6209] manager: (tap0a07d59e-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/255)
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.622 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0a07d59e-b0, col_values=(('external_ids', {'iface-id': '31ed66d8-7c3d-4486-83f3-5ccb9a199aa1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:20 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:20Z|00586|binding|INFO|Releasing lport 31ed66d8-7c3d-4486-83f3-5ccb9a199aa1 from this chassis (sb_readonly=0)
Oct 14 05:06:20 np0005486808 nova_compute[259627]: 2025-10-14 09:06:20.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:20 np0005486808 nova_compute[259627]: 2025-10-14 09:06:20.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.639 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0a07d59e-be8b-4d41-a103-fb5a64bf6f88.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0a07d59e-be8b-4d41-a103-fb5a64bf6f88.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.641 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1a5916cc-49ec-491c-b920-c78c7a48862b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.642 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-0a07d59e-be8b-4d41-a103-fb5a64bf6f88
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/0a07d59e-be8b-4d41-a103-fb5a64bf6f88.pid.haproxy
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 0a07d59e-be8b-4d41-a103-fb5a64bf6f88
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:06:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:20.643 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'env', 'PROCESS_TAG=haproxy-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0a07d59e-be8b-4d41-a103-fb5a64bf6f88.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:06:20 np0005486808 nova_compute[259627]: 2025-10-14 09:06:20.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:06:20 np0005486808 nova_compute[259627]: 2025-10-14 09:06:20.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:06:21 np0005486808 podman[323335]: 2025-10-14 09:06:21.089060287 +0000 UTC m=+0.065171836 container create f152fd1e1eb1fa828c6a2cae8c2de75044db7a8c8ecf17faacf9e65472d66aeb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 05:06:21 np0005486808 podman[323335]: 2025-10-14 09:06:21.057235133 +0000 UTC m=+0.033346642 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:06:21 np0005486808 systemd[1]: Started libpod-conmon-f152fd1e1eb1fa828c6a2cae8c2de75044db7a8c8ecf17faacf9e65472d66aeb.scope.
Oct 14 05:06:21 np0005486808 nova_compute[259627]: 2025-10-14 09:06:21.176 2 DEBUG nova.network.neutron [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Successfully updated port: 350a3bec-5dbd-4a83-8d80-5796be0319fd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:06:21 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:06:21 np0005486808 nova_compute[259627]: 2025-10-14 09:06:21.195 2 DEBUG oslo_concurrency.lockutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:06:21 np0005486808 nova_compute[259627]: 2025-10-14 09:06:21.195 2 DEBUG oslo_concurrency.lockutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquired lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:06:21 np0005486808 nova_compute[259627]: 2025-10-14 09:06:21.195 2 DEBUG nova.network.neutron [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:06:21 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfe46c2d430d558e2727a9bdbf703eb7324aebea7de1e5e4f99b74c1e21eb6fb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:06:21 np0005486808 podman[323335]: 2025-10-14 09:06:21.223265252 +0000 UTC m=+0.199376771 container init f152fd1e1eb1fa828c6a2cae8c2de75044db7a8c8ecf17faacf9e65472d66aeb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:06:21 np0005486808 podman[323335]: 2025-10-14 09:06:21.232637323 +0000 UTC m=+0.208748822 container start f152fd1e1eb1fa828c6a2cae8c2de75044db7a8c8ecf17faacf9e65472d66aeb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 14 05:06:21 np0005486808 podman[323348]: 2025-10-14 09:06:21.236776145 +0000 UTC m=+0.089369452 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Oct 14 05:06:21 np0005486808 nova_compute[259627]: 2025-10-14 09:06:21.247 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432781.2466352, 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:06:21 np0005486808 nova_compute[259627]: 2025-10-14 09:06:21.247 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] VM Started (Lifecycle Event)#033[00m
Oct 14 05:06:21 np0005486808 nova_compute[259627]: 2025-10-14 09:06:21.249 2 DEBUG nova.compute.manager [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:06:21 np0005486808 nova_compute[259627]: 2025-10-14 09:06:21.252 2 DEBUG nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:06:21 np0005486808 nova_compute[259627]: 2025-10-14 09:06:21.255 2 INFO nova.virt.libvirt.driver [-] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Instance spawned successfully.#033[00m
Oct 14 05:06:21 np0005486808 nova_compute[259627]: 2025-10-14 09:06:21.255 2 DEBUG nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:06:21 np0005486808 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[323363]: [NOTICE]   (323388) : New worker (323394) forked
Oct 14 05:06:21 np0005486808 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[323363]: [NOTICE]   (323388) : Loading success.
Oct 14 05:06:21 np0005486808 nova_compute[259627]: 2025-10-14 09:06:21.273 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:06:21 np0005486808 nova_compute[259627]: 2025-10-14 09:06:21.275 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:06:21 np0005486808 nova_compute[259627]: 2025-10-14 09:06:21.289 2 DEBUG nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:06:21 np0005486808 nova_compute[259627]: 2025-10-14 09:06:21.289 2 DEBUG nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:06:21 np0005486808 nova_compute[259627]: 2025-10-14 09:06:21.289 2 DEBUG nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:06:21 np0005486808 nova_compute[259627]: 2025-10-14 09:06:21.290 2 DEBUG nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:06:21 np0005486808 nova_compute[259627]: 2025-10-14 09:06:21.291 2 DEBUG nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:06:21 np0005486808 nova_compute[259627]: 2025-10-14 09:06:21.292 2 DEBUG nova.virt.libvirt.driver [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:06:21 np0005486808 nova_compute[259627]: 2025-10-14 09:06:21.296 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:06:21 np0005486808 nova_compute[259627]: 2025-10-14 09:06:21.297 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432781.2471485, 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:06:21 np0005486808 nova_compute[259627]: 2025-10-14 09:06:21.297 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:06:21 np0005486808 podman[323347]: 2025-10-14 09:06:21.307778473 +0000 UTC m=+0.163110988 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 05:06:21 np0005486808 nova_compute[259627]: 2025-10-14 09:06:21.325 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:06:21 np0005486808 nova_compute[259627]: 2025-10-14 09:06:21.328 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432781.2509403, 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:06:21 np0005486808 nova_compute[259627]: 2025-10-14 09:06:21.328 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:06:21 np0005486808 nova_compute[259627]: 2025-10-14 09:06:21.354 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:06:21 np0005486808 nova_compute[259627]: 2025-10-14 09:06:21.356 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 05:06:21 np0005486808 nova_compute[259627]: 2025-10-14 09:06:21.365 2 INFO nova.compute.manager [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Took 6.72 seconds to spawn the instance on the hypervisor.
Oct 14 05:06:21 np0005486808 nova_compute[259627]: 2025-10-14 09:06:21.366 2 DEBUG nova.compute.manager [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:06:21 np0005486808 nova_compute[259627]: 2025-10-14 09:06:21.374 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 05:06:21 np0005486808 nova_compute[259627]: 2025-10-14 09:06:21.380 2 DEBUG nova.network.neutron [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 05:06:21 np0005486808 nova_compute[259627]: 2025-10-14 09:06:21.419 2 INFO nova.compute.manager [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Took 7.77 seconds to build instance.
Oct 14 05:06:21 np0005486808 nova_compute[259627]: 2025-10-14 09:06:21.436 2 DEBUG oslo_concurrency.lockutils [None req-f84099ac-01fd-41c3-a436-a72e958173e5 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.858s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:06:21 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1523: 305 pgs: 305 active+clean; 260 MiB data, 615 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 167 op/s
Oct 14 05:06:22 np0005486808 nova_compute[259627]: 2025-10-14 09:06:22.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:06:22 np0005486808 nova_compute[259627]: 2025-10-14 09:06:22.423 2 DEBUG nova.network.neutron [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Updating instance_info_cache with network_info: [{"id": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "address": "fa:16:3e:9c:3f:ae", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap350a3bec-5d", "ovs_interfaceid": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 05:06:22 np0005486808 nova_compute[259627]: 2025-10-14 09:06:22.451 2 DEBUG oslo_concurrency.lockutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Releasing lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 05:06:22 np0005486808 nova_compute[259627]: 2025-10-14 09:06:22.452 2 DEBUG nova.compute.manager [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Instance network_info: |[{"id": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "address": "fa:16:3e:9c:3f:ae", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap350a3bec-5d", "ovs_interfaceid": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 05:06:22 np0005486808 nova_compute[259627]: 2025-10-14 09:06:22.454 2 DEBUG nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Start _get_guest_xml network_info=[{"id": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "address": "fa:16:3e:9c:3f:ae", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap350a3bec-5d", "ovs_interfaceid": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 05:06:22 np0005486808 nova_compute[259627]: 2025-10-14 09:06:22.459 2 WARNING nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 05:06:22 np0005486808 nova_compute[259627]: 2025-10-14 09:06:22.467 2 DEBUG nova.virt.libvirt.host [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 05:06:22 np0005486808 nova_compute[259627]: 2025-10-14 09:06:22.468 2 DEBUG nova.virt.libvirt.host [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 05:06:22 np0005486808 nova_compute[259627]: 2025-10-14 09:06:22.472 2 DEBUG nova.virt.libvirt.host [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 05:06:22 np0005486808 nova_compute[259627]: 2025-10-14 09:06:22.473 2 DEBUG nova.virt.libvirt.host [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 05:06:22 np0005486808 nova_compute[259627]: 2025-10-14 09:06:22.473 2 DEBUG nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 05:06:22 np0005486808 nova_compute[259627]: 2025-10-14 09:06:22.473 2 DEBUG nova.virt.hardware [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 05:06:22 np0005486808 nova_compute[259627]: 2025-10-14 09:06:22.474 2 DEBUG nova.virt.hardware [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 05:06:22 np0005486808 nova_compute[259627]: 2025-10-14 09:06:22.474 2 DEBUG nova.virt.hardware [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 05:06:22 np0005486808 nova_compute[259627]: 2025-10-14 09:06:22.475 2 DEBUG nova.virt.hardware [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 05:06:22 np0005486808 nova_compute[259627]: 2025-10-14 09:06:22.475 2 DEBUG nova.virt.hardware [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 05:06:22 np0005486808 nova_compute[259627]: 2025-10-14 09:06:22.475 2 DEBUG nova.virt.hardware [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 05:06:22 np0005486808 nova_compute[259627]: 2025-10-14 09:06:22.476 2 DEBUG nova.virt.hardware [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 05:06:22 np0005486808 nova_compute[259627]: 2025-10-14 09:06:22.476 2 DEBUG nova.virt.hardware [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 05:06:22 np0005486808 nova_compute[259627]: 2025-10-14 09:06:22.476 2 DEBUG nova.virt.hardware [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 05:06:22 np0005486808 nova_compute[259627]: 2025-10-14 09:06:22.477 2 DEBUG nova.virt.hardware [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 05:06:22 np0005486808 nova_compute[259627]: 2025-10-14 09:06:22.477 2 DEBUG nova.virt.hardware [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 05:06:22 np0005486808 nova_compute[259627]: 2025-10-14 09:06:22.481 2 DEBUG oslo_concurrency.processutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:06:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:06:22 np0005486808 nova_compute[259627]: 2025-10-14 09:06:22.722 2 DEBUG nova.compute.manager [req-df36d1ce-7bb4-4e8d-87d7-9a816eae4ac4 req-4b769a66-cf70-433b-9d28-3844893820f6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Received event network-vif-plugged-1550cd45-1c1e-4505-8762-fb1668990b8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:06:22 np0005486808 nova_compute[259627]: 2025-10-14 09:06:22.722 2 DEBUG oslo_concurrency.lockutils [req-df36d1ce-7bb4-4e8d-87d7-9a816eae4ac4 req-4b769a66-cf70-433b-9d28-3844893820f6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:06:22 np0005486808 nova_compute[259627]: 2025-10-14 09:06:22.723 2 DEBUG oslo_concurrency.lockutils [req-df36d1ce-7bb4-4e8d-87d7-9a816eae4ac4 req-4b769a66-cf70-433b-9d28-3844893820f6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:06:22 np0005486808 nova_compute[259627]: 2025-10-14 09:06:22.723 2 DEBUG oslo_concurrency.lockutils [req-df36d1ce-7bb4-4e8d-87d7-9a816eae4ac4 req-4b769a66-cf70-433b-9d28-3844893820f6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:06:22 np0005486808 nova_compute[259627]: 2025-10-14 09:06:22.723 2 DEBUG nova.compute.manager [req-df36d1ce-7bb4-4e8d-87d7-9a816eae4ac4 req-4b769a66-cf70-433b-9d28-3844893820f6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] No waiting events found dispatching network-vif-plugged-1550cd45-1c1e-4505-8762-fb1668990b8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 05:06:22 np0005486808 nova_compute[259627]: 2025-10-14 09:06:22.723 2 WARNING nova.compute.manager [req-df36d1ce-7bb4-4e8d-87d7-9a816eae4ac4 req-4b769a66-cf70-433b-9d28-3844893820f6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Received unexpected event network-vif-plugged-1550cd45-1c1e-4505-8762-fb1668990b8f for instance with vm_state active and task_state None.
Oct 14 05:06:22 np0005486808 nova_compute[259627]: 2025-10-14 09:06:22.723 2 DEBUG nova.compute.manager [req-df36d1ce-7bb4-4e8d-87d7-9a816eae4ac4 req-4b769a66-cf70-433b-9d28-3844893820f6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Received event network-changed-350a3bec-5dbd-4a83-8d80-5796be0319fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:06:22 np0005486808 nova_compute[259627]: 2025-10-14 09:06:22.724 2 DEBUG nova.compute.manager [req-df36d1ce-7bb4-4e8d-87d7-9a816eae4ac4 req-4b769a66-cf70-433b-9d28-3844893820f6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Refreshing instance network info cache due to event network-changed-350a3bec-5dbd-4a83-8d80-5796be0319fd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 05:06:22 np0005486808 nova_compute[259627]: 2025-10-14 09:06:22.724 2 DEBUG oslo_concurrency.lockutils [req-df36d1ce-7bb4-4e8d-87d7-9a816eae4ac4 req-4b769a66-cf70-433b-9d28-3844893820f6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 05:06:22 np0005486808 nova_compute[259627]: 2025-10-14 09:06:22.724 2 DEBUG oslo_concurrency.lockutils [req-df36d1ce-7bb4-4e8d-87d7-9a816eae4ac4 req-4b769a66-cf70-433b-9d28-3844893820f6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 05:06:22 np0005486808 nova_compute[259627]: 2025-10-14 09:06:22.724 2 DEBUG nova.network.neutron [req-df36d1ce-7bb4-4e8d-87d7-9a816eae4ac4 req-4b769a66-cf70-433b-9d28-3844893820f6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Refreshing network info cache for port 350a3bec-5dbd-4a83-8d80-5796be0319fd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 05:06:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:06:22 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2504711210' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:06:22 np0005486808 nova_compute[259627]: 2025-10-14 09:06:22.909 2 DEBUG oslo_concurrency.processutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:06:22 np0005486808 nova_compute[259627]: 2025-10-14 09:06:22.929 2 DEBUG nova.storage.rbd_utils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image 2189eac5-238f-4f09-ae1c-1cf47c3b6030_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:06:22 np0005486808 nova_compute[259627]: 2025-10-14 09:06:22.933 2 DEBUG oslo_concurrency.processutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:06:22 np0005486808 nova_compute[259627]: 2025-10-14 09:06:22.974 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:06:22 np0005486808 nova_compute[259627]: 2025-10-14 09:06:22.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:06:22 np0005486808 nova_compute[259627]: 2025-10-14 09:06:22.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.004 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.004 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.004 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.005 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.005 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.054 2 DEBUG oslo_concurrency.lockutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.054 2 DEBUG oslo_concurrency.lockutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.073 2 DEBUG nova.compute.manager [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.130 2 DEBUG oslo_concurrency.lockutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.131 2 DEBUG oslo_concurrency.lockutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.139 2 DEBUG nova.virt.hardware [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.139 2 INFO nova.compute.claims [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Claim successful on node compute-0.ctlplane.example.com
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.288 2 DEBUG oslo_concurrency.lockutils [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.289 2 DEBUG oslo_concurrency.lockutils [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.291 2 DEBUG oslo_concurrency.lockutils [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.291 2 DEBUG oslo_concurrency.lockutils [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.292 2 DEBUG oslo_concurrency.lockutils [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.293 2 INFO nova.compute.manager [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Terminating instance
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.294 2 DEBUG nova.compute.manager [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 05:06:23 np0005486808 kernel: tap1550cd45-1c (unregistering): left promiscuous mode
Oct 14 05:06:23 np0005486808 NetworkManager[44885]: <info>  [1760432783.3502] device (tap1550cd45-1c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.350 2 DEBUG oslo_concurrency.processutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:06:23 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:23Z|00587|binding|INFO|Releasing lport 1550cd45-1c1e-4505-8762-fb1668990b8f from this chassis (sb_readonly=0)
Oct 14 05:06:23 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:23Z|00588|binding|INFO|Setting lport 1550cd45-1c1e-4505-8762-fb1668990b8f down in Southbound
Oct 14 05:06:23 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:23Z|00589|binding|INFO|Removing iface tap1550cd45-1c ovn-installed in OVS
Oct 14 05:06:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:23.367 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:61:e3 10.100.0.9'], port_security=['fa:16:3e:c5:61:e3 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd39581efff7d48fb83412ca1f615d412', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dd5e08c6-d8c1-45fb-85db-6ce5c46fea39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c165cf5-3bad-4110-8321-7f7bda9723ad, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=1550cd45-1c1e-4505-8762-fb1668990b8f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:06:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:23.368 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 1550cd45-1c1e-4505-8762-fb1668990b8f in datapath 0a07d59e-be8b-4d41-a103-fb5a64bf6f88 unbound from our chassis#033[00m
Oct 14 05:06:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:23.369 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0a07d59e-be8b-4d41-a103-fb5a64bf6f88, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:06:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:23.371 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8f5de8ab-2995-4eaf-af55-ef797764113b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:23.371 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88 namespace which is not needed anymore#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:23 np0005486808 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d0000003a.scope: Deactivated successfully.
Oct 14 05:06:23 np0005486808 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d0000003a.scope: Consumed 2.894s CPU time.
Oct 14 05:06:23 np0005486808 systemd-machined[214636]: Machine qemu-73-instance-0000003a terminated.
Oct 14 05:06:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:06:23 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2989097755' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.429 2 DEBUG oslo_concurrency.processutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.430 2 DEBUG nova.virt.libvirt.vif [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:06:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2131193327',display_name='tempest-tempest.common.compute-instance-2131193327',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2131193327',id=59,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP68NH14S3HRbU1S6/VMc0nOUsEVOZ3tz6VZKHo1Z+OFyqqDVRUrwJaEU67IXpbnQXXmmP8KKv7jtqoNjZqCUWnJrm5ENg2LKrvLCxr2iywwS6vcmTe+HUgI45UTIKYhOw==',key_name='tempest-keypair-1916836153',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-dzeywdhl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:06:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=2189eac5-238f-4f09-ae1c-1cf47c3b6030,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "address": "fa:16:3e:9c:3f:ae", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap350a3bec-5d", "ovs_interfaceid": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.430 2 DEBUG nova.network.os_vif_util [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "address": "fa:16:3e:9c:3f:ae", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap350a3bec-5d", "ovs_interfaceid": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.431 2 DEBUG nova.network.os_vif_util [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:3f:ae,bridge_name='br-int',has_traffic_filtering=True,id=350a3bec-5dbd-4a83-8d80-5796be0319fd,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap350a3bec-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.433 2 DEBUG nova.objects.instance [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'pci_devices' on Instance uuid 2189eac5-238f-4f09-ae1c-1cf47c3b6030 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.451 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432768.4496799, ec31b9ab-88ab-4085-a46b-76cb9825061a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.451 2 INFO nova.compute.manager [-] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:06:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:06:23 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3424773956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.460 2 DEBUG nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:06:23 np0005486808 nova_compute[259627]:  <uuid>2189eac5-238f-4f09-ae1c-1cf47c3b6030</uuid>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:  <name>instance-0000003b</name>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:06:23 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:      <nova:name>tempest-tempest.common.compute-instance-2131193327</nova:name>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:06:22</nova:creationTime>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:06:23 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:        <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:        <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:        <nova:port uuid="350a3bec-5dbd-4a83-8d80-5796be0319fd">
Oct 14 05:06:23 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:      <entry name="serial">2189eac5-238f-4f09-ae1c-1cf47c3b6030</entry>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:      <entry name="uuid">2189eac5-238f-4f09-ae1c-1cf47c3b6030</entry>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:06:23 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/2189eac5-238f-4f09-ae1c-1cf47c3b6030_disk">
Oct 14 05:06:23 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:06:23 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:06:23 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/2189eac5-238f-4f09-ae1c-1cf47c3b6030_disk.config">
Oct 14 05:06:23 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:06:23 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:06:23 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:9c:3f:ae"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:      <target dev="tap350a3bec-5d"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:06:23 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/2189eac5-238f-4f09-ae1c-1cf47c3b6030/console.log" append="off"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:06:23 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:06:23 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:06:23 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:06:23 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:06:23 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.467 2 DEBUG nova.compute.manager [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Preparing to wait for external event network-vif-plugged-350a3bec-5dbd-4a83-8d80-5796be0319fd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.468 2 DEBUG oslo_concurrency.lockutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.468 2 DEBUG oslo_concurrency.lockutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.469 2 DEBUG oslo_concurrency.lockutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.470 2 DEBUG nova.virt.libvirt.vif [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:06:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2131193327',display_name='tempest-tempest.common.compute-instance-2131193327',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2131193327',id=59,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP68NH14S3HRbU1S6/VMc0nOUsEVOZ3tz6VZKHo1Z+OFyqqDVRUrwJaEU67IXpbnQXXmmP8KKv7jtqoNjZqCUWnJrm5ENg2LKrvLCxr2iywwS6vcmTe+HUgI45UTIKYhOw==',key_name='tempest-keypair-1916836153',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-dzeywdhl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:06:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=2189eac5-238f-4f09-ae1c-1cf47c3b6030,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "address": "fa:16:3e:9c:3f:ae", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap350a3bec-5d", "ovs_interfaceid": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.470 2 DEBUG nova.network.os_vif_util [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "address": "fa:16:3e:9c:3f:ae", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap350a3bec-5d", "ovs_interfaceid": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.471 2 DEBUG nova.network.os_vif_util [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:3f:ae,bridge_name='br-int',has_traffic_filtering=True,id=350a3bec-5dbd-4a83-8d80-5796be0319fd,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap350a3bec-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.471 2 DEBUG os_vif [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:3f:ae,bridge_name='br-int',has_traffic_filtering=True,id=350a3bec-5dbd-4a83-8d80-5796be0319fd,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap350a3bec-5d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.472 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.473 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.476 2 DEBUG nova.compute.manager [None req-6fdef719-9673-41b2-b5ce-10bea6d7191d - - - - - -] [instance: ec31b9ab-88ab-4085-a46b-76cb9825061a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.477 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap350a3bec-5d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.478 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap350a3bec-5d, col_values=(('external_ids', {'iface-id': '350a3bec-5dbd-4a83-8d80-5796be0319fd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9c:3f:ae', 'vm-uuid': '2189eac5-238f-4f09-ae1c-1cf47c3b6030'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.479 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:23 np0005486808 NetworkManager[44885]: <info>  [1760432783.4805] manager: (tap350a3bec-5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/256)
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.488 2 INFO os_vif [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:3f:ae,bridge_name='br-int',has_traffic_filtering=True,id=350a3bec-5dbd-4a83-8d80-5796be0319fd,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap350a3bec-5d')#033[00m
Oct 14 05:06:23 np0005486808 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[323363]: [NOTICE]   (323388) : haproxy version is 2.8.14-c23fe91
Oct 14 05:06:23 np0005486808 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[323363]: [NOTICE]   (323388) : path to executable is /usr/sbin/haproxy
Oct 14 05:06:23 np0005486808 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[323363]: [WARNING]  (323388) : Exiting Master process...
Oct 14 05:06:23 np0005486808 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[323363]: [WARNING]  (323388) : Exiting Master process...
Oct 14 05:06:23 np0005486808 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[323363]: [ALERT]    (323388) : Current worker (323394) exited with code 143 (Terminated)
Oct 14 05:06:23 np0005486808 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[323363]: [WARNING]  (323388) : All workers exited. Exiting... (0)
Oct 14 05:06:23 np0005486808 systemd[1]: libpod-f152fd1e1eb1fa828c6a2cae8c2de75044db7a8c8ecf17faacf9e65472d66aeb.scope: Deactivated successfully.
Oct 14 05:06:23 np0005486808 podman[323512]: 2025-10-14 09:06:23.517412501 +0000 UTC m=+0.051985301 container died f152fd1e1eb1fa828c6a2cae8c2de75044db7a8c8ecf17faacf9e65472d66aeb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.538 2 INFO nova.virt.libvirt.driver [-] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Instance destroyed successfully.#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.539 2 DEBUG nova.objects.instance [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lazy-loading 'resources' on Instance uuid 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:06:23 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f152fd1e1eb1fa828c6a2cae8c2de75044db7a8c8ecf17faacf9e65472d66aeb-userdata-shm.mount: Deactivated successfully.
Oct 14 05:06:23 np0005486808 systemd[1]: var-lib-containers-storage-overlay-cfe46c2d430d558e2727a9bdbf703eb7324aebea7de1e5e4f99b74c1e21eb6fb-merged.mount: Deactivated successfully.
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.556 2 DEBUG nova.virt.libvirt.vif [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:06:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1220502864',display_name='tempest-DeleteServersTestJSON-server-1220502864',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1220502864',id=58,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:06:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d39581efff7d48fb83412ca1f615d412',ramdisk_id='',reservation_id='r-a7evba7v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_d
isk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-555285866',owner_user_name='tempest-DeleteServersTestJSON-555285866-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:06:21Z,user_data=None,user_id='a72439ec330b476ca4bb358682159b61',uuid=97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1550cd45-1c1e-4505-8762-fb1668990b8f", "address": "fa:16:3e:c5:61:e3", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1550cd45-1c", "ovs_interfaceid": "1550cd45-1c1e-4505-8762-fb1668990b8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.557 2 DEBUG nova.network.os_vif_util [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converting VIF {"id": "1550cd45-1c1e-4505-8762-fb1668990b8f", "address": "fa:16:3e:c5:61:e3", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1550cd45-1c", "ovs_interfaceid": "1550cd45-1c1e-4505-8762-fb1668990b8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.558 2 DEBUG nova.network.os_vif_util [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:61:e3,bridge_name='br-int',has_traffic_filtering=True,id=1550cd45-1c1e-4505-8762-fb1668990b8f,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1550cd45-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.558 2 DEBUG os_vif [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:61:e3,bridge_name='br-int',has_traffic_filtering=True,id=1550cd45-1c1e-4505-8762-fb1668990b8f,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1550cd45-1c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.561 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1550cd45-1c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.574 2 INFO os_vif [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:61:e3,bridge_name='br-int',has_traffic_filtering=True,id=1550cd45-1c1e-4505-8762-fb1668990b8f,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1550cd45-1c')#033[00m
Oct 14 05:06:23 np0005486808 podman[323512]: 2025-10-14 09:06:23.579472009 +0000 UTC m=+0.114044799 container cleanup f152fd1e1eb1fa828c6a2cae8c2de75044db7a8c8ecf17faacf9e65472d66aeb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.600 2 DEBUG nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.600 2 DEBUG nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.601 2 DEBUG nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No VIF found with MAC fa:16:3e:9c:3f:ae, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.602 2 INFO nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Using config drive#033[00m
Oct 14 05:06:23 np0005486808 systemd[1]: libpod-conmon-f152fd1e1eb1fa828c6a2cae8c2de75044db7a8c8ecf17faacf9e65472d66aeb.scope: Deactivated successfully.
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.628 2 DEBUG nova.storage.rbd_utils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image 2189eac5-238f-4f09-ae1c-1cf47c3b6030_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:06:23 np0005486808 podman[323592]: 2025-10-14 09:06:23.669470855 +0000 UTC m=+0.063028342 container remove f152fd1e1eb1fa828c6a2cae8c2de75044db7a8c8ecf17faacf9e65472d66aeb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:06:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:23.677 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[91c83b0f-d956-425e-9964-16a2424a2fa9]: (4, ('Tue Oct 14 09:06:23 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88 (f152fd1e1eb1fa828c6a2cae8c2de75044db7a8c8ecf17faacf9e65472d66aeb)\nf152fd1e1eb1fa828c6a2cae8c2de75044db7a8c8ecf17faacf9e65472d66aeb\nTue Oct 14 09:06:23 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88 (f152fd1e1eb1fa828c6a2cae8c2de75044db7a8c8ecf17faacf9e65472d66aeb)\nf152fd1e1eb1fa828c6a2cae8c2de75044db7a8c8ecf17faacf9e65472d66aeb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:23.679 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c2f2d926-a85c-415d-9ac5-fed947636ca0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:23.680 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a07d59e-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:23 np0005486808 kernel: tap0a07d59e-b0: left promiscuous mode
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.701 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000003a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.702 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000003a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.709 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.709 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:23.712 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4d75fcaf-f2fc-4466-9c89-0ff331af734d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.713 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.713 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.717 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000003b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.717 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000003b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:06:23 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1524: 305 pgs: 305 active+clean; 260 MiB data, 615 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 147 op/s
Oct 14 05:06:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:23.734 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[15a32f35-5d2a-4a2f-a98c-6156a19ef6af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:23.736 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b84a851e-401a-4a94-9cf5-16cc6b2c3a3c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:23.758 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9962208e-f0be-4f98-a927-1b371fa5beda]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655911, 'reachable_time': 21762, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323633, 'error': None, 'target': 'ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:23 np0005486808 systemd[1]: run-netns-ovnmeta\x2d0a07d59e\x2dbe8b\x2d4d41\x2da103\x2dfb5a64bf6f88.mount: Deactivated successfully.
Oct 14 05:06:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:23.763 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:06:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:23.763 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[9ef93816-bf2f-45b2-93ce-294c724dcb07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:06:23 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3843737089' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.915 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.916 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3652MB free_disk=59.8802490234375GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.916 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.925 2 DEBUG oslo_concurrency.processutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.930 2 DEBUG nova.compute.provider_tree [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.943 2 DEBUG nova.scheduler.client.report [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.967 2 DEBUG oslo_concurrency.lockutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.967 2 DEBUG nova.compute.manager [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:06:23 np0005486808 nova_compute[259627]: 2025-10-14 09:06:23.970 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.053s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.041 2 DEBUG nova.compute.manager [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.041 2 DEBUG nova.network.neutron [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.057 2 INFO nova.virt.libvirt.driver [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Deleting instance files /var/lib/nova/instances/97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce_del#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.058 2 INFO nova.virt.libvirt.driver [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Deletion of /var/lib/nova/instances/97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce_del complete#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.062 2 INFO nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.078 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 1ce7a863-d0bf-4ea3-80f5-18675b16ac93 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.078 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.079 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.079 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 2189eac5-238f-4f09-ae1c-1cf47c3b6030 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.079 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance dd55716e-2330-42a4-8963-33bdc9c7bbf8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.079 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.080 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.084 2 DEBUG nova.compute.manager [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.104 2 INFO nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Creating config drive at /var/lib/nova/instances/2189eac5-238f-4f09-ae1c-1cf47c3b6030/disk.config#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.114 2 DEBUG oslo_concurrency.processutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2189eac5-238f-4f09-ae1c-1cf47c3b6030/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbnheo7sr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.157 2 INFO nova.compute.manager [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Took 0.86 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.158 2 DEBUG oslo.service.loopingcall [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.158 2 DEBUG nova.compute.manager [-] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.159 2 DEBUG nova.network.neutron [-] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.201 2 DEBUG nova.compute.manager [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.202 2 DEBUG nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.203 2 INFO nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Creating image(s)#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.223 2 DEBUG nova.storage.rbd_utils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image dd55716e-2330-42a4-8963-33bdc9c7bbf8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.248 2 DEBUG nova.storage.rbd_utils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image dd55716e-2330-42a4-8963-33bdc9c7bbf8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.275 2 DEBUG nova.storage.rbd_utils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image dd55716e-2330-42a4-8963-33bdc9c7bbf8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.279 2 DEBUG oslo_concurrency.processutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.327 2 DEBUG nova.policy [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd952679a4e6a4fc6bacf42c02d3e92d0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4e47722c609640d3a70fee8dd6ff94cc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.329 2 DEBUG oslo_concurrency.processutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2189eac5-238f-4f09-ae1c-1cf47c3b6030/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbnheo7sr" returned: 0 in 0.215s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.362 2 DEBUG nova.storage.rbd_utils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] rbd image 2189eac5-238f-4f09-ae1c-1cf47c3b6030_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.367 2 DEBUG oslo_concurrency.processutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2189eac5-238f-4f09-ae1c-1cf47c3b6030/disk.config 2189eac5-238f-4f09-ae1c-1cf47c3b6030_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.410 2 DEBUG nova.network.neutron [req-df36d1ce-7bb4-4e8d-87d7-9a816eae4ac4 req-4b769a66-cf70-433b-9d28-3844893820f6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Updated VIF entry in instance network info cache for port 350a3bec-5dbd-4a83-8d80-5796be0319fd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.411 2 DEBUG nova.network.neutron [req-df36d1ce-7bb4-4e8d-87d7-9a816eae4ac4 req-4b769a66-cf70-433b-9d28-3844893820f6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Updating instance_info_cache with network_info: [{"id": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "address": "fa:16:3e:9c:3f:ae", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap350a3bec-5d", "ovs_interfaceid": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.419 2 DEBUG oslo_concurrency.processutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.420 2 DEBUG oslo_concurrency.lockutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.421 2 DEBUG oslo_concurrency.lockutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.421 2 DEBUG oslo_concurrency.lockutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.448 2 DEBUG nova.storage.rbd_utils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image dd55716e-2330-42a4-8963-33bdc9c7bbf8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.453 2 DEBUG oslo_concurrency.processutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 dd55716e-2330-42a4-8963-33bdc9c7bbf8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.513 2 DEBUG oslo_concurrency.lockutils [req-df36d1ce-7bb4-4e8d-87d7-9a816eae4ac4 req-4b769a66-cf70-433b-9d28-3844893820f6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.530 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.583 2 DEBUG oslo_concurrency.processutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2189eac5-238f-4f09-ae1c-1cf47c3b6030/disk.config 2189eac5-238f-4f09-ae1c-1cf47c3b6030_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.216s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.585 2 INFO nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Deleting local config drive /var/lib/nova/instances/2189eac5-238f-4f09-ae1c-1cf47c3b6030/disk.config because it was imported into RBD.#033[00m
Oct 14 05:06:24 np0005486808 kernel: tap350a3bec-5d: entered promiscuous mode
Oct 14 05:06:24 np0005486808 NetworkManager[44885]: <info>  [1760432784.6465] manager: (tap350a3bec-5d): new Tun device (/org/freedesktop/NetworkManager/Devices/257)
Oct 14 05:06:24 np0005486808 systemd-udevd[323497]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:24 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:24Z|00590|binding|INFO|Claiming lport 350a3bec-5dbd-4a83-8d80-5796be0319fd for this chassis.
Oct 14 05:06:24 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:24Z|00591|binding|INFO|350a3bec-5dbd-4a83-8d80-5796be0319fd: Claiming fa:16:3e:9c:3f:ae 10.100.0.6
Oct 14 05:06:24 np0005486808 NetworkManager[44885]: <info>  [1760432784.6588] device (tap350a3bec-5d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:06:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:24.657 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:3f:ae 10.100.0.6'], port_security=['fa:16:3e:9c:3f:ae 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2189eac5-238f-4f09-ae1c-1cf47c3b6030', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2d149f-aebf-406a-aed2-5161dd22b079', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8945d892653642ac9c3d894f8bc3619d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e6192a40-cbd8-43eb-9955-4fede99ddb79', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e933e805-d9b6-497a-9ba3-5f7153390420, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=350a3bec-5dbd-4a83-8d80-5796be0319fd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:06:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:24.659 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 350a3bec-5dbd-4a83-8d80-5796be0319fd in datapath fc2d149f-aebf-406a-aed2-5161dd22b079 bound to our chassis#033[00m
Oct 14 05:06:24 np0005486808 NetworkManager[44885]: <info>  [1760432784.6604] device (tap350a3bec-5d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:06:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:24.662 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc2d149f-aebf-406a-aed2-5161dd22b079#033[00m
Oct 14 05:06:24 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:24Z|00592|binding|INFO|Setting lport 350a3bec-5dbd-4a83-8d80-5796be0319fd ovn-installed in OVS
Oct 14 05:06:24 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:24Z|00593|binding|INFO|Setting lport 350a3bec-5dbd-4a83-8d80-5796be0319fd up in Southbound
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:24.683 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dd7dc762-1897-48cf-b94d-da6e8b40a937]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:24 np0005486808 systemd-machined[214636]: New machine qemu-74-instance-0000003b.
Oct 14 05:06:24 np0005486808 systemd[1]: Started Virtual Machine qemu-74-instance-0000003b.
Oct 14 05:06:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:24.721 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[01120e2e-1356-4e74-9611-31c6334a2348]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:24.727 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[313b1781-f359-4d35-a6b1-56612ff5680e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:24.757 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[b98cf3b2-0680-4504-88f5-4dc9027589e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:24.774 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[08a4f83a-c187-4015-9bd2-7fa3d32b1e89]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc2d149f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:e7:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 654854, 'reachable_time': 41314, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323814, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.782 2 DEBUG oslo_concurrency.processutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 dd55716e-2330-42a4-8963-33bdc9c7bbf8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:24.791 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bb87511f-ec7a-4bb2-b5b5-ef45a5076cd2]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 654865, 'tstamp': 654865}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323816, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 654867, 'tstamp': 654867}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323816, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:24.793 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc2d149f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:24.796 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc2d149f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:24.796 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:06:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:24.796 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc2d149f-a0, col_values=(('external_ids', {'iface-id': '156432ee-35b5-40a9-aded-8066933d9972'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:24.797 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.839 2 DEBUG nova.storage.rbd_utils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] resizing rbd image dd55716e-2330-42a4-8963-33bdc9c7bbf8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.867 2 DEBUG nova.compute.manager [req-92c9545a-ff41-4a5e-b132-b6c5ac034899 req-dfa0f997-6298-4009-bb9b-60d264d9167b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Received event network-vif-unplugged-1550cd45-1c1e-4505-8762-fb1668990b8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.867 2 DEBUG oslo_concurrency.lockutils [req-92c9545a-ff41-4a5e-b132-b6c5ac034899 req-dfa0f997-6298-4009-bb9b-60d264d9167b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.868 2 DEBUG oslo_concurrency.lockutils [req-92c9545a-ff41-4a5e-b132-b6c5ac034899 req-dfa0f997-6298-4009-bb9b-60d264d9167b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.868 2 DEBUG oslo_concurrency.lockutils [req-92c9545a-ff41-4a5e-b132-b6c5ac034899 req-dfa0f997-6298-4009-bb9b-60d264d9167b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.868 2 DEBUG nova.compute.manager [req-92c9545a-ff41-4a5e-b132-b6c5ac034899 req-dfa0f997-6298-4009-bb9b-60d264d9167b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] No waiting events found dispatching network-vif-unplugged-1550cd45-1c1e-4505-8762-fb1668990b8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.868 2 DEBUG nova.compute.manager [req-92c9545a-ff41-4a5e-b132-b6c5ac034899 req-dfa0f997-6298-4009-bb9b-60d264d9167b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Received event network-vif-unplugged-1550cd45-1c1e-4505-8762-fb1668990b8f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.868 2 DEBUG nova.compute.manager [req-92c9545a-ff41-4a5e-b132-b6c5ac034899 req-dfa0f997-6298-4009-bb9b-60d264d9167b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Received event network-vif-plugged-1550cd45-1c1e-4505-8762-fb1668990b8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.869 2 DEBUG oslo_concurrency.lockutils [req-92c9545a-ff41-4a5e-b132-b6c5ac034899 req-dfa0f997-6298-4009-bb9b-60d264d9167b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.869 2 DEBUG oslo_concurrency.lockutils [req-92c9545a-ff41-4a5e-b132-b6c5ac034899 req-dfa0f997-6298-4009-bb9b-60d264d9167b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.869 2 DEBUG oslo_concurrency.lockutils [req-92c9545a-ff41-4a5e-b132-b6c5ac034899 req-dfa0f997-6298-4009-bb9b-60d264d9167b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.869 2 DEBUG nova.compute.manager [req-92c9545a-ff41-4a5e-b132-b6c5ac034899 req-dfa0f997-6298-4009-bb9b-60d264d9167b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] No waiting events found dispatching network-vif-plugged-1550cd45-1c1e-4505-8762-fb1668990b8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.869 2 WARNING nova.compute.manager [req-92c9545a-ff41-4a5e-b132-b6c5ac034899 req-dfa0f997-6298-4009-bb9b-60d264d9167b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Received unexpected event network-vif-plugged-1550cd45-1c1e-4505-8762-fb1668990b8f for instance with vm_state active and task_state deleting.#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.942 2 DEBUG nova.network.neutron [-] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.952 2 DEBUG nova.objects.instance [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lazy-loading 'migration_context' on Instance uuid dd55716e-2330-42a4-8963-33bdc9c7bbf8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.966 2 INFO nova.compute.manager [-] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Took 0.81 seconds to deallocate network for instance.#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.971 2 DEBUG nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.971 2 DEBUG nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Ensure instance console log exists: /var/lib/nova/instances/dd55716e-2330-42a4-8963-33bdc9c7bbf8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.972 2 DEBUG oslo_concurrency.lockutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.972 2 DEBUG oslo_concurrency.lockutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:24 np0005486808 nova_compute[259627]: 2025-10-14 09:06:24.972 2 DEBUG oslo_concurrency.lockutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:25 np0005486808 nova_compute[259627]: 2025-10-14 09:06:25.001 2 DEBUG oslo_concurrency.lockutils [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:06:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4286670197' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:06:25 np0005486808 nova_compute[259627]: 2025-10-14 09:06:25.018 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:25 np0005486808 nova_compute[259627]: 2025-10-14 09:06:25.024 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:06:25 np0005486808 nova_compute[259627]: 2025-10-14 09:06:25.037 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:06:25 np0005486808 nova_compute[259627]: 2025-10-14 09:06:25.056 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:06:25 np0005486808 nova_compute[259627]: 2025-10-14 09:06:25.056 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.086s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:25 np0005486808 nova_compute[259627]: 2025-10-14 09:06:25.056 2 DEBUG oslo_concurrency.lockutils [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.055s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:25 np0005486808 nova_compute[259627]: 2025-10-14 09:06:25.069 2 DEBUG nova.network.neutron [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Successfully created port: deb48802-bfee-42af-882c-3632b1fbb2cf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:06:25 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:25Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b4:40:de 10.100.0.8
Oct 14 05:06:25 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:25Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b4:40:de 10.100.0.8
Oct 14 05:06:25 np0005486808 nova_compute[259627]: 2025-10-14 09:06:25.234 2 DEBUG oslo_concurrency.processutils [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:06:25 np0005486808 nova_compute[259627]: 2025-10-14 09:06:25.277 2 DEBUG nova.compute.manager [req-e99c7aba-7c7f-4a63-a270-4070fdbe5519 req-f4925044-c723-4ab1-87a7-2d7be0564b47 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Received event network-vif-plugged-350a3bec-5dbd-4a83-8d80-5796be0319fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:06:25 np0005486808 nova_compute[259627]: 2025-10-14 09:06:25.277 2 DEBUG oslo_concurrency.lockutils [req-e99c7aba-7c7f-4a63-a270-4070fdbe5519 req-f4925044-c723-4ab1-87a7-2d7be0564b47 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:25 np0005486808 nova_compute[259627]: 2025-10-14 09:06:25.278 2 DEBUG oslo_concurrency.lockutils [req-e99c7aba-7c7f-4a63-a270-4070fdbe5519 req-f4925044-c723-4ab1-87a7-2d7be0564b47 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:25 np0005486808 nova_compute[259627]: 2025-10-14 09:06:25.278 2 DEBUG oslo_concurrency.lockutils [req-e99c7aba-7c7f-4a63-a270-4070fdbe5519 req-f4925044-c723-4ab1-87a7-2d7be0564b47 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:25 np0005486808 nova_compute[259627]: 2025-10-14 09:06:25.278 2 DEBUG nova.compute.manager [req-e99c7aba-7c7f-4a63-a270-4070fdbe5519 req-f4925044-c723-4ab1-87a7-2d7be0564b47 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Processing event network-vif-plugged-350a3bec-5dbd-4a83-8d80-5796be0319fd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:06:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:06:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2414840640' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:06:25 np0005486808 nova_compute[259627]: 2025-10-14 09:06:25.699 2 DEBUG oslo_concurrency.processutils [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:25 np0005486808 nova_compute[259627]: 2025-10-14 09:06:25.707 2 DEBUG nova.compute.provider_tree [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:06:25 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1525: 305 pgs: 305 active+clean; 291 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 7.5 MiB/s wr, 339 op/s
Oct 14 05:06:25 np0005486808 nova_compute[259627]: 2025-10-14 09:06:25.742 2 DEBUG nova.scheduler.client.report [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:06:25 np0005486808 nova_compute[259627]: 2025-10-14 09:06:25.775 2 DEBUG oslo_concurrency.lockutils [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:25 np0005486808 nova_compute[259627]: 2025-10-14 09:06:25.812 2 INFO nova.scheduler.client.report [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Deleted allocations for instance 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce#033[00m
Oct 14 05:06:25 np0005486808 nova_compute[259627]: 2025-10-14 09:06:25.893 2 DEBUG nova.network.neutron [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Successfully updated port: deb48802-bfee-42af-882c-3632b1fbb2cf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:06:25 np0005486808 nova_compute[259627]: 2025-10-14 09:06:25.897 2 DEBUG oslo_concurrency.lockutils [None req-a9e906bd-4bf8-4025-a1c1-dd1e50507aa3 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:25 np0005486808 nova_compute[259627]: 2025-10-14 09:06:25.913 2 DEBUG oslo_concurrency.lockutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "refresh_cache-dd55716e-2330-42a4-8963-33bdc9c7bbf8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:06:25 np0005486808 nova_compute[259627]: 2025-10-14 09:06:25.913 2 DEBUG oslo_concurrency.lockutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquired lock "refresh_cache-dd55716e-2330-42a4-8963-33bdc9c7bbf8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:06:25 np0005486808 nova_compute[259627]: 2025-10-14 09:06:25.915 2 DEBUG nova.network.neutron [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:06:26 np0005486808 nova_compute[259627]: 2025-10-14 09:06:26.057 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:06:26 np0005486808 nova_compute[259627]: 2025-10-14 09:06:26.058 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:06:26 np0005486808 nova_compute[259627]: 2025-10-14 09:06:26.111 2 DEBUG nova.network.neutron [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:06:26 np0005486808 nova_compute[259627]: 2025-10-14 09:06:26.263 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432786.2628214, 2189eac5-238f-4f09-ae1c-1cf47c3b6030 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:06:26 np0005486808 nova_compute[259627]: 2025-10-14 09:06:26.263 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] VM Started (Lifecycle Event)#033[00m
Oct 14 05:06:26 np0005486808 nova_compute[259627]: 2025-10-14 09:06:26.266 2 DEBUG nova.compute.manager [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:06:26 np0005486808 nova_compute[259627]: 2025-10-14 09:06:26.270 2 DEBUG nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:06:26 np0005486808 nova_compute[259627]: 2025-10-14 09:06:26.273 2 INFO nova.virt.libvirt.driver [-] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Instance spawned successfully.#033[00m
Oct 14 05:06:26 np0005486808 nova_compute[259627]: 2025-10-14 09:06:26.273 2 DEBUG nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:06:26 np0005486808 nova_compute[259627]: 2025-10-14 09:06:26.288 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:06:26 np0005486808 nova_compute[259627]: 2025-10-14 09:06:26.297 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:06:26 np0005486808 nova_compute[259627]: 2025-10-14 09:06:26.303 2 DEBUG nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:06:26 np0005486808 nova_compute[259627]: 2025-10-14 09:06:26.304 2 DEBUG nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:06:26 np0005486808 nova_compute[259627]: 2025-10-14 09:06:26.305 2 DEBUG nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:06:26 np0005486808 nova_compute[259627]: 2025-10-14 09:06:26.305 2 DEBUG nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:06:26 np0005486808 nova_compute[259627]: 2025-10-14 09:06:26.306 2 DEBUG nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:06:26 np0005486808 nova_compute[259627]: 2025-10-14 09:06:26.307 2 DEBUG nova.virt.libvirt.driver [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:06:26 np0005486808 nova_compute[259627]: 2025-10-14 09:06:26.322 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:06:26 np0005486808 nova_compute[259627]: 2025-10-14 09:06:26.323 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432786.2629719, 2189eac5-238f-4f09-ae1c-1cf47c3b6030 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:06:26 np0005486808 nova_compute[259627]: 2025-10-14 09:06:26.323 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:06:26 np0005486808 nova_compute[259627]: 2025-10-14 09:06:26.365 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:06:26 np0005486808 nova_compute[259627]: 2025-10-14 09:06:26.369 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432786.2688997, 2189eac5-238f-4f09-ae1c-1cf47c3b6030 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:06:26 np0005486808 nova_compute[259627]: 2025-10-14 09:06:26.369 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:06:26 np0005486808 nova_compute[259627]: 2025-10-14 09:06:26.402 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:06:26 np0005486808 nova_compute[259627]: 2025-10-14 09:06:26.406 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:06:26 np0005486808 nova_compute[259627]: 2025-10-14 09:06:26.421 2 INFO nova.compute.manager [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Took 7.47 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:06:26 np0005486808 nova_compute[259627]: 2025-10-14 09:06:26.421 2 DEBUG nova.compute.manager [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:06:26 np0005486808 nova_compute[259627]: 2025-10-14 09:06:26.434 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:06:26 np0005486808 nova_compute[259627]: 2025-10-14 09:06:26.516 2 INFO nova.compute.manager [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Took 8.56 seconds to build instance.#033[00m
Oct 14 05:06:26 np0005486808 nova_compute[259627]: 2025-10-14 09:06:26.534 2 DEBUG oslo_concurrency.lockutils [None req-ff0bac45-e7dc-4510-9d47-968500b01ff7 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:27 np0005486808 nova_compute[259627]: 2025-10-14 09:06:27.030 2 DEBUG nova.network.neutron [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Updating instance_info_cache with network_info: [{"id": "deb48802-bfee-42af-882c-3632b1fbb2cf", "address": "fa:16:3e:e0:3f:2b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb48802-bf", "ovs_interfaceid": "deb48802-bfee-42af-882c-3632b1fbb2cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:06:27 np0005486808 nova_compute[259627]: 2025-10-14 09:06:27.060 2 DEBUG oslo_concurrency.lockutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Releasing lock "refresh_cache-dd55716e-2330-42a4-8963-33bdc9c7bbf8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:06:27 np0005486808 nova_compute[259627]: 2025-10-14 09:06:27.060 2 DEBUG nova.compute.manager [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Instance network_info: |[{"id": "deb48802-bfee-42af-882c-3632b1fbb2cf", "address": "fa:16:3e:e0:3f:2b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb48802-bf", "ovs_interfaceid": "deb48802-bfee-42af-882c-3632b1fbb2cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:06:27 np0005486808 nova_compute[259627]: 2025-10-14 09:06:27.065 2 DEBUG nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Start _get_guest_xml network_info=[{"id": "deb48802-bfee-42af-882c-3632b1fbb2cf", "address": "fa:16:3e:e0:3f:2b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb48802-bf", "ovs_interfaceid": "deb48802-bfee-42af-882c-3632b1fbb2cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:06:27 np0005486808 nova_compute[259627]: 2025-10-14 09:06:27.069 2 WARNING nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:06:27 np0005486808 nova_compute[259627]: 2025-10-14 09:06:27.075 2 DEBUG nova.virt.libvirt.host [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:06:27 np0005486808 nova_compute[259627]: 2025-10-14 09:06:27.076 2 DEBUG nova.virt.libvirt.host [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:06:27 np0005486808 nova_compute[259627]: 2025-10-14 09:06:27.083 2 DEBUG nova.virt.libvirt.host [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:06:27 np0005486808 nova_compute[259627]: 2025-10-14 09:06:27.084 2 DEBUG nova.virt.libvirt.host [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:06:27 np0005486808 nova_compute[259627]: 2025-10-14 09:06:27.084 2 DEBUG nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:06:27 np0005486808 nova_compute[259627]: 2025-10-14 09:06:27.084 2 DEBUG nova.virt.hardware [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:06:27 np0005486808 nova_compute[259627]: 2025-10-14 09:06:27.085 2 DEBUG nova.virt.hardware [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:06:27 np0005486808 nova_compute[259627]: 2025-10-14 09:06:27.086 2 DEBUG nova.virt.hardware [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:06:27 np0005486808 nova_compute[259627]: 2025-10-14 09:06:27.086 2 DEBUG nova.virt.hardware [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:06:27 np0005486808 nova_compute[259627]: 2025-10-14 09:06:27.086 2 DEBUG nova.virt.hardware [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:06:27 np0005486808 nova_compute[259627]: 2025-10-14 09:06:27.087 2 DEBUG nova.virt.hardware [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:06:27 np0005486808 nova_compute[259627]: 2025-10-14 09:06:27.087 2 DEBUG nova.virt.hardware [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:06:27 np0005486808 nova_compute[259627]: 2025-10-14 09:06:27.087 2 DEBUG nova.virt.hardware [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:06:27 np0005486808 nova_compute[259627]: 2025-10-14 09:06:27.088 2 DEBUG nova.virt.hardware [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:06:27 np0005486808 nova_compute[259627]: 2025-10-14 09:06:27.088 2 DEBUG nova.virt.hardware [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:06:27 np0005486808 nova_compute[259627]: 2025-10-14 09:06:27.088 2 DEBUG nova.virt.hardware [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:06:27 np0005486808 nova_compute[259627]: 2025-10-14 09:06:27.093 2 DEBUG oslo_concurrency.processutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:06:27 np0005486808 nova_compute[259627]: 2025-10-14 09:06:27.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:27 np0005486808 nova_compute[259627]: 2025-10-14 09:06:27.331 2 DEBUG nova.compute.manager [req-1ea95aa5-9c3d-4cc9-aeab-fe841a04816d req-aeb823b2-2bf5-4331-8a7e-26aabb410479 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Received event network-vif-deleted-1550cd45-1c1e-4505-8762-fb1668990b8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:06:27 np0005486808 nova_compute[259627]: 2025-10-14 09:06:27.332 2 DEBUG nova.compute.manager [req-1ea95aa5-9c3d-4cc9-aeab-fe841a04816d req-aeb823b2-2bf5-4331-8a7e-26aabb410479 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Received event network-changed-deb48802-bfee-42af-882c-3632b1fbb2cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:06:27 np0005486808 nova_compute[259627]: 2025-10-14 09:06:27.332 2 DEBUG nova.compute.manager [req-1ea95aa5-9c3d-4cc9-aeab-fe841a04816d req-aeb823b2-2bf5-4331-8a7e-26aabb410479 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Refreshing instance network info cache due to event network-changed-deb48802-bfee-42af-882c-3632b1fbb2cf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:06:27 np0005486808 nova_compute[259627]: 2025-10-14 09:06:27.332 2 DEBUG oslo_concurrency.lockutils [req-1ea95aa5-9c3d-4cc9-aeab-fe841a04816d req-aeb823b2-2bf5-4331-8a7e-26aabb410479 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-dd55716e-2330-42a4-8963-33bdc9c7bbf8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:06:27 np0005486808 nova_compute[259627]: 2025-10-14 09:06:27.333 2 DEBUG oslo_concurrency.lockutils [req-1ea95aa5-9c3d-4cc9-aeab-fe841a04816d req-aeb823b2-2bf5-4331-8a7e-26aabb410479 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-dd55716e-2330-42a4-8963-33bdc9c7bbf8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:06:27 np0005486808 nova_compute[259627]: 2025-10-14 09:06:27.333 2 DEBUG nova.network.neutron [req-1ea95aa5-9c3d-4cc9-aeab-fe841a04816d req-aeb823b2-2bf5-4331-8a7e-26aabb410479 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Refreshing network info cache for port deb48802-bfee-42af-882c-3632b1fbb2cf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:06:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:06:27 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/230294850' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:06:27 np0005486808 nova_compute[259627]: 2025-10-14 09:06:27.557 2 DEBUG oslo_concurrency.processutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:27 np0005486808 nova_compute[259627]: 2025-10-14 09:06:27.576 2 DEBUG nova.storage.rbd_utils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image dd55716e-2330-42a4-8963-33bdc9c7bbf8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:06:27 np0005486808 nova_compute[259627]: 2025-10-14 09:06:27.579 2 DEBUG oslo_concurrency.processutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:06:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:06:27 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1526: 305 pgs: 305 active+clean; 291 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 6.2 MiB/s wr, 222 op/s
Oct 14 05:06:27 np0005486808 nova_compute[259627]: 2025-10-14 09:06:27.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:06:27 np0005486808 nova_compute[259627]: 2025-10-14 09:06:27.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:06:28 np0005486808 nova_compute[259627]: 2025-10-14 09:06:28.041 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 14 05:06:28 np0005486808 nova_compute[259627]: 2025-10-14 09:06:28.041 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:06:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:06:28 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/440266417' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:06:28 np0005486808 nova_compute[259627]: 2025-10-14 09:06:28.106 2 DEBUG oslo_concurrency.processutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:28 np0005486808 nova_compute[259627]: 2025-10-14 09:06:28.108 2 DEBUG nova.virt.libvirt.vif [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:06:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-138998221',display_name='tempest-ServerActionsTestOtherA-server-138998221',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-138998221',id=60,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4e47722c609640d3a70fee8dd6ff94cc',ramdisk_id='',reservation_id='r-kgnyrp0u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-894139105',owner_user_name='tempest-ServerActionsTestOth
erA-894139105-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:06:24Z,user_data=None,user_id='d952679a4e6a4fc6bacf42c02d3e92d0',uuid=dd55716e-2330-42a4-8963-33bdc9c7bbf8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "deb48802-bfee-42af-882c-3632b1fbb2cf", "address": "fa:16:3e:e0:3f:2b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb48802-bf", "ovs_interfaceid": "deb48802-bfee-42af-882c-3632b1fbb2cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:06:28 np0005486808 nova_compute[259627]: 2025-10-14 09:06:28.109 2 DEBUG nova.network.os_vif_util [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converting VIF {"id": "deb48802-bfee-42af-882c-3632b1fbb2cf", "address": "fa:16:3e:e0:3f:2b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb48802-bf", "ovs_interfaceid": "deb48802-bfee-42af-882c-3632b1fbb2cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:06:28 np0005486808 nova_compute[259627]: 2025-10-14 09:06:28.111 2 DEBUG nova.network.os_vif_util [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:3f:2b,bridge_name='br-int',has_traffic_filtering=True,id=deb48802-bfee-42af-882c-3632b1fbb2cf,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdeb48802-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:06:28 np0005486808 nova_compute[259627]: 2025-10-14 09:06:28.113 2 DEBUG nova.objects.instance [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lazy-loading 'pci_devices' on Instance uuid dd55716e-2330-42a4-8963-33bdc9c7bbf8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:06:28 np0005486808 nova_compute[259627]: 2025-10-14 09:06:28.130 2 DEBUG nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:06:28 np0005486808 nova_compute[259627]:  <uuid>dd55716e-2330-42a4-8963-33bdc9c7bbf8</uuid>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:  <name>instance-0000003c</name>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:06:28 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:      <nova:name>tempest-ServerActionsTestOtherA-server-138998221</nova:name>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:06:27</nova:creationTime>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:06:28 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:        <nova:user uuid="d952679a4e6a4fc6bacf42c02d3e92d0">tempest-ServerActionsTestOtherA-894139105-project-member</nova:user>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:        <nova:project uuid="4e47722c609640d3a70fee8dd6ff94cc">tempest-ServerActionsTestOtherA-894139105</nova:project>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:        <nova:port uuid="deb48802-bfee-42af-882c-3632b1fbb2cf">
Oct 14 05:06:28 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:      <entry name="serial">dd55716e-2330-42a4-8963-33bdc9c7bbf8</entry>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:      <entry name="uuid">dd55716e-2330-42a4-8963-33bdc9c7bbf8</entry>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:06:28 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/dd55716e-2330-42a4-8963-33bdc9c7bbf8_disk">
Oct 14 05:06:28 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:06:28 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:06:28 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/dd55716e-2330-42a4-8963-33bdc9c7bbf8_disk.config">
Oct 14 05:06:28 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:06:28 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:06:28 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:e0:3f:2b"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:      <target dev="tapdeb48802-bf"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:06:28 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/dd55716e-2330-42a4-8963-33bdc9c7bbf8/console.log" append="off"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:06:28 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:06:28 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:06:28 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:06:28 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:06:28 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:06:28 np0005486808 nova_compute[259627]: 2025-10-14 09:06:28.130 2 DEBUG nova.compute.manager [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Preparing to wait for external event network-vif-plugged-deb48802-bfee-42af-882c-3632b1fbb2cf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:06:28 np0005486808 nova_compute[259627]: 2025-10-14 09:06:28.131 2 DEBUG oslo_concurrency.lockutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:28 np0005486808 nova_compute[259627]: 2025-10-14 09:06:28.131 2 DEBUG oslo_concurrency.lockutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:28 np0005486808 nova_compute[259627]: 2025-10-14 09:06:28.132 2 DEBUG oslo_concurrency.lockutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:28 np0005486808 nova_compute[259627]: 2025-10-14 09:06:28.133 2 DEBUG nova.virt.libvirt.vif [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:06:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-138998221',display_name='tempest-ServerActionsTestOtherA-server-138998221',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-138998221',id=60,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4e47722c609640d3a70fee8dd6ff94cc',ramdisk_id='',reservation_id='r-kgnyrp0u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-894139105',owner_user_name='tempest-ServerActionsTestOtherA-894139105-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:06:24Z,user_data=None,user_id='d952679a4e6a4fc6bacf42c02d3e92d0',uuid=dd55716e-2330-42a4-8963-33bdc9c7bbf8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "deb48802-bfee-42af-882c-3632b1fbb2cf", "address": "fa:16:3e:e0:3f:2b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb48802-bf", "ovs_interfaceid": "deb48802-bfee-42af-882c-3632b1fbb2cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:06:28 np0005486808 nova_compute[259627]: 2025-10-14 09:06:28.133 2 DEBUG nova.network.os_vif_util [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converting VIF {"id": "deb48802-bfee-42af-882c-3632b1fbb2cf", "address": "fa:16:3e:e0:3f:2b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb48802-bf", "ovs_interfaceid": "deb48802-bfee-42af-882c-3632b1fbb2cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:06:28 np0005486808 nova_compute[259627]: 2025-10-14 09:06:28.134 2 DEBUG nova.network.os_vif_util [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:3f:2b,bridge_name='br-int',has_traffic_filtering=True,id=deb48802-bfee-42af-882c-3632b1fbb2cf,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdeb48802-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:06:28 np0005486808 nova_compute[259627]: 2025-10-14 09:06:28.134 2 DEBUG os_vif [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:3f:2b,bridge_name='br-int',has_traffic_filtering=True,id=deb48802-bfee-42af-882c-3632b1fbb2cf,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdeb48802-bf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:06:28 np0005486808 nova_compute[259627]: 2025-10-14 09:06:28.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:28 np0005486808 nova_compute[259627]: 2025-10-14 09:06:28.136 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:28 np0005486808 nova_compute[259627]: 2025-10-14 09:06:28.137 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:06:28 np0005486808 nova_compute[259627]: 2025-10-14 09:06:28.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:28 np0005486808 nova_compute[259627]: 2025-10-14 09:06:28.140 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdeb48802-bf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:28 np0005486808 nova_compute[259627]: 2025-10-14 09:06:28.141 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdeb48802-bf, col_values=(('external_ids', {'iface-id': 'deb48802-bfee-42af-882c-3632b1fbb2cf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e0:3f:2b', 'vm-uuid': 'dd55716e-2330-42a4-8963-33bdc9c7bbf8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:28 np0005486808 nova_compute[259627]: 2025-10-14 09:06:28.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:28 np0005486808 NetworkManager[44885]: <info>  [1760432788.1448] manager: (tapdeb48802-bf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/258)
Oct 14 05:06:28 np0005486808 nova_compute[259627]: 2025-10-14 09:06:28.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:06:28 np0005486808 nova_compute[259627]: 2025-10-14 09:06:28.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:28 np0005486808 nova_compute[259627]: 2025-10-14 09:06:28.154 2 INFO os_vif [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:3f:2b,bridge_name='br-int',has_traffic_filtering=True,id=deb48802-bfee-42af-882c-3632b1fbb2cf,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdeb48802-bf')#033[00m
Oct 14 05:06:28 np0005486808 nova_compute[259627]: 2025-10-14 09:06:28.183 2 DEBUG nova.compute.manager [req-f5ae087b-46ed-40f9-8a3b-fd04bc8beda6 req-82495ac5-90b8-4ea8-a72b-239232056c30 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Received event network-vif-plugged-350a3bec-5dbd-4a83-8d80-5796be0319fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:06:28 np0005486808 nova_compute[259627]: 2025-10-14 09:06:28.183 2 DEBUG oslo_concurrency.lockutils [req-f5ae087b-46ed-40f9-8a3b-fd04bc8beda6 req-82495ac5-90b8-4ea8-a72b-239232056c30 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:28 np0005486808 nova_compute[259627]: 2025-10-14 09:06:28.183 2 DEBUG oslo_concurrency.lockutils [req-f5ae087b-46ed-40f9-8a3b-fd04bc8beda6 req-82495ac5-90b8-4ea8-a72b-239232056c30 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:28 np0005486808 nova_compute[259627]: 2025-10-14 09:06:28.183 2 DEBUG oslo_concurrency.lockutils [req-f5ae087b-46ed-40f9-8a3b-fd04bc8beda6 req-82495ac5-90b8-4ea8-a72b-239232056c30 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:28 np0005486808 nova_compute[259627]: 2025-10-14 09:06:28.184 2 DEBUG nova.compute.manager [req-f5ae087b-46ed-40f9-8a3b-fd04bc8beda6 req-82495ac5-90b8-4ea8-a72b-239232056c30 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] No waiting events found dispatching network-vif-plugged-350a3bec-5dbd-4a83-8d80-5796be0319fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:06:28 np0005486808 nova_compute[259627]: 2025-10-14 09:06:28.184 2 WARNING nova.compute.manager [req-f5ae087b-46ed-40f9-8a3b-fd04bc8beda6 req-82495ac5-90b8-4ea8-a72b-239232056c30 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Received unexpected event network-vif-plugged-350a3bec-5dbd-4a83-8d80-5796be0319fd for instance with vm_state active and task_state None.#033[00m
Oct 14 05:06:28 np0005486808 nova_compute[259627]: 2025-10-14 09:06:28.203 2 DEBUG nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:06:28 np0005486808 nova_compute[259627]: 2025-10-14 09:06:28.205 2 DEBUG nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:06:28 np0005486808 nova_compute[259627]: 2025-10-14 09:06:28.205 2 DEBUG nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] No VIF found with MAC fa:16:3e:e0:3f:2b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:06:28 np0005486808 nova_compute[259627]: 2025-10-14 09:06:28.206 2 INFO nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Using config drive#033[00m
Oct 14 05:06:28 np0005486808 nova_compute[259627]: 2025-10-14 09:06:28.230 2 DEBUG nova.storage.rbd_utils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image dd55716e-2330-42a4-8963-33bdc9c7bbf8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:06:29 np0005486808 nova_compute[259627]: 2025-10-14 09:06:29.370 2 INFO nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Creating config drive at /var/lib/nova/instances/dd55716e-2330-42a4-8963-33bdc9c7bbf8/disk.config#033[00m
Oct 14 05:06:29 np0005486808 nova_compute[259627]: 2025-10-14 09:06:29.374 2 DEBUG oslo_concurrency.processutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dd55716e-2330-42a4-8963-33bdc9c7bbf8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpizfx1fli execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:06:29 np0005486808 nova_compute[259627]: 2025-10-14 09:06:29.506 2 DEBUG oslo_concurrency.processutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dd55716e-2330-42a4-8963-33bdc9c7bbf8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpizfx1fli" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:29 np0005486808 nova_compute[259627]: 2025-10-14 09:06:29.532 2 DEBUG nova.storage.rbd_utils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] rbd image dd55716e-2330-42a4-8963-33bdc9c7bbf8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:06:29 np0005486808 nova_compute[259627]: 2025-10-14 09:06:29.535 2 DEBUG oslo_concurrency.processutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dd55716e-2330-42a4-8963-33bdc9c7bbf8/disk.config dd55716e-2330-42a4-8963-33bdc9c7bbf8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:06:29 np0005486808 nova_compute[259627]: 2025-10-14 09:06:29.690 2 DEBUG nova.network.neutron [req-1ea95aa5-9c3d-4cc9-aeab-fe841a04816d req-aeb823b2-2bf5-4331-8a7e-26aabb410479 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Updated VIF entry in instance network info cache for port deb48802-bfee-42af-882c-3632b1fbb2cf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:06:29 np0005486808 nova_compute[259627]: 2025-10-14 09:06:29.692 2 DEBUG nova.network.neutron [req-1ea95aa5-9c3d-4cc9-aeab-fe841a04816d req-aeb823b2-2bf5-4331-8a7e-26aabb410479 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Updating instance_info_cache with network_info: [{"id": "deb48802-bfee-42af-882c-3632b1fbb2cf", "address": "fa:16:3e:e0:3f:2b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb48802-bf", "ovs_interfaceid": "deb48802-bfee-42af-882c-3632b1fbb2cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:06:29 np0005486808 nova_compute[259627]: 2025-10-14 09:06:29.728 2 DEBUG oslo_concurrency.lockutils [req-1ea95aa5-9c3d-4cc9-aeab-fe841a04816d req-aeb823b2-2bf5-4331-8a7e-26aabb410479 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-dd55716e-2330-42a4-8963-33bdc9c7bbf8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:06:29 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1527: 305 pgs: 305 active+clean; 291 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 6.2 MiB/s wr, 222 op/s
Oct 14 05:06:29 np0005486808 nova_compute[259627]: 2025-10-14 09:06:29.758 2 DEBUG oslo_concurrency.processutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dd55716e-2330-42a4-8963-33bdc9c7bbf8/disk.config dd55716e-2330-42a4-8963-33bdc9c7bbf8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.223s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:29 np0005486808 nova_compute[259627]: 2025-10-14 09:06:29.759 2 INFO nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Deleting local config drive /var/lib/nova/instances/dd55716e-2330-42a4-8963-33bdc9c7bbf8/disk.config because it was imported into RBD.#033[00m
Oct 14 05:06:29 np0005486808 NetworkManager[44885]: <info>  [1760432789.8202] manager: (tapdeb48802-bf): new Tun device (/org/freedesktop/NetworkManager/Devices/259)
Oct 14 05:06:29 np0005486808 kernel: tapdeb48802-bf: entered promiscuous mode
Oct 14 05:06:29 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:29Z|00594|binding|INFO|Claiming lport deb48802-bfee-42af-882c-3632b1fbb2cf for this chassis.
Oct 14 05:06:29 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:29Z|00595|binding|INFO|deb48802-bfee-42af-882c-3632b1fbb2cf: Claiming fa:16:3e:e0:3f:2b 10.100.0.9
Oct 14 05:06:29 np0005486808 nova_compute[259627]: 2025-10-14 09:06:29.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:29 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:29.834 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:3f:2b 10.100.0.9'], port_security=['fa:16:3e:e0:3f:2b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'dd55716e-2330-42a4-8963-33bdc9c7bbf8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3b87118-f516-4f2d-8696-aa7290af9d83', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e47722c609640d3a70fee8dd6ff94cc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a1858677-a8a5-4b0c-b70b-3875847a67c8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07d19cad-5a0d-49ee-a7bf-91e279c3b03d, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=deb48802-bfee-42af-882c-3632b1fbb2cf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:06:29 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:29.835 162547 INFO neutron.agent.ovn.metadata.agent [-] Port deb48802-bfee-42af-882c-3632b1fbb2cf in datapath f3b87118-f516-4f2d-8696-aa7290af9d83 bound to our chassis#033[00m
Oct 14 05:06:29 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:29.839 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3b87118-f516-4f2d-8696-aa7290af9d83#033[00m
Oct 14 05:06:29 np0005486808 systemd-udevd[324092]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:06:29 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:29.868 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6b154422-7708-47c6-9c0c-66c750ee150a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:29 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:29Z|00596|binding|INFO|Setting lport deb48802-bfee-42af-882c-3632b1fbb2cf up in Southbound
Oct 14 05:06:29 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:29Z|00597|binding|INFO|Setting lport deb48802-bfee-42af-882c-3632b1fbb2cf ovn-installed in OVS
Oct 14 05:06:29 np0005486808 nova_compute[259627]: 2025-10-14 09:06:29.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:29 np0005486808 nova_compute[259627]: 2025-10-14 09:06:29.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:29 np0005486808 nova_compute[259627]: 2025-10-14 09:06:29.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:29 np0005486808 systemd-machined[214636]: New machine qemu-75-instance-0000003c.
Oct 14 05:06:29 np0005486808 NetworkManager[44885]: <info>  [1760432789.8819] device (tapdeb48802-bf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:06:29 np0005486808 NetworkManager[44885]: <info>  [1760432789.8830] device (tapdeb48802-bf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:06:29 np0005486808 systemd[1]: Started Virtual Machine qemu-75-instance-0000003c.
Oct 14 05:06:29 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:29.902 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a23b2566-d9c7-45e8-9a71-ca5c19749bf0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:29 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:29.906 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[aadac84c-cbe6-4af6-8515-21b419de18c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:29 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:29.936 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[899c5ce2-ae09-4ebf-bd65-573abb45fea2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:29 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:29.955 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[eb94a0f9-3857-46f8-92eb-868842c8861a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3b87118-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:43:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 916, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 916, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647540, 'reachable_time': 35719, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324104, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:29 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:29.974 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a30fb6ba-42a8-4be2-ba28-e0d0e3c493f2]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf3b87118-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647552, 'tstamp': 647552}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324106, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf3b87118-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647555, 'tstamp': 647555}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324106, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:29 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:29.976 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3b87118-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:29 np0005486808 nova_compute[259627]: 2025-10-14 09:06:29.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:29 np0005486808 nova_compute[259627]: 2025-10-14 09:06:29.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:29 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:29.979 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3b87118-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:29 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:29.979 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:06:29 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:29.979 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3b87118-f0, col_values=(('external_ids', {'iface-id': 'a7f44223-dee5-4a2f-b975-1f04f03b78f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:29 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:29.979 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:06:30 np0005486808 nova_compute[259627]: 2025-10-14 09:06:30.049 2 DEBUG nova.compute.manager [req-667fd1d2-47be-4f15-a16f-2c59b7f20a8f req-247ac8a2-59af-4c9c-8cb2-1b09dbf6b3d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Received event network-changed-dffa5a1f-657b-498e-bbe5-6540fead7fb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:06:30 np0005486808 nova_compute[259627]: 2025-10-14 09:06:30.050 2 DEBUG nova.compute.manager [req-667fd1d2-47be-4f15-a16f-2c59b7f20a8f req-247ac8a2-59af-4c9c-8cb2-1b09dbf6b3d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Refreshing instance network info cache due to event network-changed-dffa5a1f-657b-498e-bbe5-6540fead7fb6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:06:30 np0005486808 nova_compute[259627]: 2025-10-14 09:06:30.050 2 DEBUG oslo_concurrency.lockutils [req-667fd1d2-47be-4f15-a16f-2c59b7f20a8f req-247ac8a2-59af-4c9c-8cb2-1b09dbf6b3d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:06:30 np0005486808 nova_compute[259627]: 2025-10-14 09:06:30.051 2 DEBUG oslo_concurrency.lockutils [req-667fd1d2-47be-4f15-a16f-2c59b7f20a8f req-247ac8a2-59af-4c9c-8cb2-1b09dbf6b3d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:06:30 np0005486808 nova_compute[259627]: 2025-10-14 09:06:30.051 2 DEBUG nova.network.neutron [req-667fd1d2-47be-4f15-a16f-2c59b7f20a8f req-247ac8a2-59af-4c9c-8cb2-1b09dbf6b3d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Refreshing network info cache for port dffa5a1f-657b-498e-bbe5-6540fead7fb6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:06:30 np0005486808 nova_compute[259627]: 2025-10-14 09:06:30.961 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432790.9609225, dd55716e-2330-42a4-8963-33bdc9c7bbf8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:06:30 np0005486808 nova_compute[259627]: 2025-10-14 09:06:30.962 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] VM Started (Lifecycle Event)#033[00m
Oct 14 05:06:30 np0005486808 nova_compute[259627]: 2025-10-14 09:06:30.985 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:06:30 np0005486808 nova_compute[259627]: 2025-10-14 09:06:30.990 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432790.961066, dd55716e-2330-42a4-8963-33bdc9c7bbf8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:06:30 np0005486808 nova_compute[259627]: 2025-10-14 09:06:30.990 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:06:31 np0005486808 nova_compute[259627]: 2025-10-14 09:06:31.008 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:06:31 np0005486808 nova_compute[259627]: 2025-10-14 09:06:31.012 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:06:31 np0005486808 nova_compute[259627]: 2025-10-14 09:06:31.035 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:06:31 np0005486808 nova_compute[259627]: 2025-10-14 09:06:31.322 2 DEBUG nova.network.neutron [req-667fd1d2-47be-4f15-a16f-2c59b7f20a8f req-247ac8a2-59af-4c9c-8cb2-1b09dbf6b3d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Updated VIF entry in instance network info cache for port dffa5a1f-657b-498e-bbe5-6540fead7fb6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:06:31 np0005486808 nova_compute[259627]: 2025-10-14 09:06:31.323 2 DEBUG nova.network.neutron [req-667fd1d2-47be-4f15-a16f-2c59b7f20a8f req-247ac8a2-59af-4c9c-8cb2-1b09dbf6b3d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Updating instance_info_cache with network_info: [{"id": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "address": "fa:16:3e:b4:40:de", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdffa5a1f-65", "ovs_interfaceid": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:06:31 np0005486808 nova_compute[259627]: 2025-10-14 09:06:31.348 2 DEBUG oslo_concurrency.lockutils [req-667fd1d2-47be-4f15-a16f-2c59b7f20a8f req-247ac8a2-59af-4c9c-8cb2-1b09dbf6b3d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:06:31 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1528: 305 pgs: 305 active+clean; 293 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 6.3 MiB/s wr, 299 op/s
Oct 14 05:06:31 np0005486808 nova_compute[259627]: 2025-10-14 09:06:31.924 2 DEBUG oslo_concurrency.lockutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "73d6be04-84dc-4b80-81f8-a9bbf9938051" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:31 np0005486808 nova_compute[259627]: 2025-10-14 09:06:31.925 2 DEBUG oslo_concurrency.lockutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "73d6be04-84dc-4b80-81f8-a9bbf9938051" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:31 np0005486808 nova_compute[259627]: 2025-10-14 09:06:31.946 2 DEBUG nova.compute.manager [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:06:31 np0005486808 nova_compute[259627]: 2025-10-14 09:06:31.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:06:32 np0005486808 nova_compute[259627]: 2025-10-14 09:06:32.028 2 DEBUG oslo_concurrency.lockutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:32 np0005486808 nova_compute[259627]: 2025-10-14 09:06:32.028 2 DEBUG oslo_concurrency.lockutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:32 np0005486808 nova_compute[259627]: 2025-10-14 09:06:32.037 2 DEBUG nova.virt.hardware [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:06:32 np0005486808 nova_compute[259627]: 2025-10-14 09:06:32.038 2 INFO nova.compute.claims [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:06:32 np0005486808 nova_compute[259627]: 2025-10-14 09:06:32.175 2 DEBUG nova.compute.manager [req-81a023e9-249f-40d2-9dce-9dac0ed1fb01 req-9b61009e-7ce2-4571-9bb6-ca80a54174ce 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Received event network-changed-350a3bec-5dbd-4a83-8d80-5796be0319fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:06:32 np0005486808 nova_compute[259627]: 2025-10-14 09:06:32.176 2 DEBUG nova.compute.manager [req-81a023e9-249f-40d2-9dce-9dac0ed1fb01 req-9b61009e-7ce2-4571-9bb6-ca80a54174ce 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Refreshing instance network info cache due to event network-changed-350a3bec-5dbd-4a83-8d80-5796be0319fd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:06:32 np0005486808 nova_compute[259627]: 2025-10-14 09:06:32.176 2 DEBUG oslo_concurrency.lockutils [req-81a023e9-249f-40d2-9dce-9dac0ed1fb01 req-9b61009e-7ce2-4571-9bb6-ca80a54174ce 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:06:32 np0005486808 nova_compute[259627]: 2025-10-14 09:06:32.177 2 DEBUG oslo_concurrency.lockutils [req-81a023e9-249f-40d2-9dce-9dac0ed1fb01 req-9b61009e-7ce2-4571-9bb6-ca80a54174ce 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:06:32 np0005486808 nova_compute[259627]: 2025-10-14 09:06:32.177 2 DEBUG nova.network.neutron [req-81a023e9-249f-40d2-9dce-9dac0ed1fb01 req-9b61009e-7ce2-4571-9bb6-ca80a54174ce 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Refreshing network info cache for port 350a3bec-5dbd-4a83-8d80-5796be0319fd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:06:32 np0005486808 nova_compute[259627]: 2025-10-14 09:06:32.240 2 DEBUG oslo_concurrency.processutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:06:32 np0005486808 nova_compute[259627]: 2025-10-14 09:06:32.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:06:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:06:32 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3670485446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:06:32 np0005486808 nova_compute[259627]: 2025-10-14 09:06:32.699 2 DEBUG oslo_concurrency.processutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:32 np0005486808 nova_compute[259627]: 2025-10-14 09:06:32.706 2 DEBUG nova.compute.provider_tree [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:06:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:06:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:06:32 np0005486808 nova_compute[259627]: 2025-10-14 09:06:32.728 2 DEBUG nova.scheduler.client.report [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:06:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:06:32
Oct 14 05:06:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:06:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:06:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'vms', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'volumes', 'backups', 'images', '.mgr', 'cephfs.cephfs.meta']
Oct 14 05:06:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:06:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:06:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:06:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:06:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:06:32 np0005486808 nova_compute[259627]: 2025-10-14 09:06:32.766 2 DEBUG oslo_concurrency.lockutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:06:32 np0005486808 nova_compute[259627]: 2025-10-14 09:06:32.767 2 DEBUG nova.compute.manager [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 05:06:32 np0005486808 nova_compute[259627]: 2025-10-14 09:06:32.811 2 DEBUG nova.compute.manager [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 05:06:32 np0005486808 nova_compute[259627]: 2025-10-14 09:06:32.811 2 DEBUG nova.network.neutron [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 05:06:32 np0005486808 nova_compute[259627]: 2025-10-14 09:06:32.829 2 INFO nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 05:06:32 np0005486808 nova_compute[259627]: 2025-10-14 09:06:32.849 2 DEBUG nova.compute.manager [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 05:06:32 np0005486808 nova_compute[259627]: 2025-10-14 09:06:32.949 2 DEBUG nova.compute.manager [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 05:06:32 np0005486808 nova_compute[259627]: 2025-10-14 09:06:32.950 2 DEBUG nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 05:06:32 np0005486808 nova_compute[259627]: 2025-10-14 09:06:32.951 2 INFO nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Creating image(s)
Oct 14 05:06:32 np0005486808 nova_compute[259627]: 2025-10-14 09:06:32.969 2 DEBUG nova.storage.rbd_utils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 73d6be04-84dc-4b80-81f8-a9bbf9938051_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:06:32 np0005486808 nova_compute[259627]: 2025-10-14 09:06:32.991 2 DEBUG nova.storage.rbd_utils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 73d6be04-84dc-4b80-81f8-a9bbf9938051_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:06:33 np0005486808 nova_compute[259627]: 2025-10-14 09:06:33.011 2 DEBUG nova.storage.rbd_utils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 73d6be04-84dc-4b80-81f8-a9bbf9938051_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:06:33 np0005486808 nova_compute[259627]: 2025-10-14 09:06:33.015 2 DEBUG oslo_concurrency.processutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:06:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:06:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:06:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:06:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:06:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:06:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:06:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:06:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:06:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:06:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:06:33 np0005486808 nova_compute[259627]: 2025-10-14 09:06:33.102 2 DEBUG nova.policy [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a72439ec330b476ca4bb358682159b61', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd39581efff7d48fb83412ca1f615d412', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 05:06:33 np0005486808 nova_compute[259627]: 2025-10-14 09:06:33.111 2 DEBUG oslo_concurrency.processutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:06:33 np0005486808 nova_compute[259627]: 2025-10-14 09:06:33.112 2 DEBUG oslo_concurrency.lockutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:06:33 np0005486808 nova_compute[259627]: 2025-10-14 09:06:33.112 2 DEBUG oslo_concurrency.lockutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:06:33 np0005486808 nova_compute[259627]: 2025-10-14 09:06:33.112 2 DEBUG oslo_concurrency.lockutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:06:33 np0005486808 nova_compute[259627]: 2025-10-14 09:06:33.130 2 DEBUG nova.storage.rbd_utils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 73d6be04-84dc-4b80-81f8-a9bbf9938051_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:06:33 np0005486808 nova_compute[259627]: 2025-10-14 09:06:33.133 2 DEBUG oslo_concurrency.processutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 73d6be04-84dc-4b80-81f8-a9bbf9938051_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:06:33 np0005486808 nova_compute[259627]: 2025-10-14 09:06:33.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:06:33 np0005486808 nova_compute[259627]: 2025-10-14 09:06:33.386 2 DEBUG oslo_concurrency.processutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 73d6be04-84dc-4b80-81f8-a9bbf9938051_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.253s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:06:33 np0005486808 nova_compute[259627]: 2025-10-14 09:06:33.444 2 DEBUG nova.storage.rbd_utils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] resizing rbd image 73d6be04-84dc-4b80-81f8-a9bbf9938051_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 05:06:33 np0005486808 nova_compute[259627]: 2025-10-14 09:06:33.475 2 DEBUG oslo_concurrency.lockutils [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "interface-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-df1ec4d8-f543-4899-9d98-b60a6a46cc7c" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:06:33 np0005486808 nova_compute[259627]: 2025-10-14 09:06:33.476 2 DEBUG oslo_concurrency.lockutils [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "interface-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-df1ec4d8-f543-4899-9d98-b60a6a46cc7c" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:06:33 np0005486808 nova_compute[259627]: 2025-10-14 09:06:33.476 2 DEBUG nova.objects.instance [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'flavor' on Instance uuid a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:06:33 np0005486808 nova_compute[259627]: 2025-10-14 09:06:33.556 2 DEBUG nova.objects.instance [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lazy-loading 'migration_context' on Instance uuid 73d6be04-84dc-4b80-81f8-a9bbf9938051 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:06:33 np0005486808 nova_compute[259627]: 2025-10-14 09:06:33.572 2 DEBUG nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 05:06:33 np0005486808 nova_compute[259627]: 2025-10-14 09:06:33.572 2 DEBUG nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Ensure instance console log exists: /var/lib/nova/instances/73d6be04-84dc-4b80-81f8-a9bbf9938051/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 05:06:33 np0005486808 nova_compute[259627]: 2025-10-14 09:06:33.573 2 DEBUG oslo_concurrency.lockutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:06:33 np0005486808 nova_compute[259627]: 2025-10-14 09:06:33.573 2 DEBUG oslo_concurrency.lockutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:06:33 np0005486808 nova_compute[259627]: 2025-10-14 09:06:33.573 2 DEBUG oslo_concurrency.lockutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:06:33 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1529: 305 pgs: 305 active+clean; 293 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 269 op/s
Oct 14 05:06:33 np0005486808 nova_compute[259627]: 2025-10-14 09:06:33.956 2 DEBUG nova.network.neutron [req-81a023e9-249f-40d2-9dce-9dac0ed1fb01 req-9b61009e-7ce2-4571-9bb6-ca80a54174ce 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Updated VIF entry in instance network info cache for port 350a3bec-5dbd-4a83-8d80-5796be0319fd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 05:06:33 np0005486808 nova_compute[259627]: 2025-10-14 09:06:33.957 2 DEBUG nova.network.neutron [req-81a023e9-249f-40d2-9dce-9dac0ed1fb01 req-9b61009e-7ce2-4571-9bb6-ca80a54174ce 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Updating instance_info_cache with network_info: [{"id": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "address": "fa:16:3e:9c:3f:ae", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap350a3bec-5d", "ovs_interfaceid": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 05:06:33 np0005486808 nova_compute[259627]: 2025-10-14 09:06:33.961 2 DEBUG nova.network.neutron [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Successfully created port: 26dd6404-2018-471b-a387-2b045d236164 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 05:06:33 np0005486808 nova_compute[259627]: 2025-10-14 09:06:33.983 2 DEBUG oslo_concurrency.lockutils [req-81a023e9-249f-40d2-9dce-9dac0ed1fb01 req-9b61009e-7ce2-4571-9bb6-ca80a54174ce 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 05:06:34 np0005486808 nova_compute[259627]: 2025-10-14 09:06:34.072 2 DEBUG nova.objects.instance [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'pci_requests' on Instance uuid a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:06:34 np0005486808 nova_compute[259627]: 2025-10-14 09:06:34.086 2 DEBUG nova.network.neutron [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 05:06:35 np0005486808 nova_compute[259627]: 2025-10-14 09:06:35.069 2 DEBUG nova.policy [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8b0f0f2991214b9caed6f475a149c342', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8945d892653642ac9c3d894f8bc3619d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 05:06:35 np0005486808 nova_compute[259627]: 2025-10-14 09:06:35.370 2 DEBUG nova.network.neutron [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Successfully updated port: 26dd6404-2018-471b-a387-2b045d236164 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 05:06:35 np0005486808 nova_compute[259627]: 2025-10-14 09:06:35.385 2 DEBUG oslo_concurrency.lockutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "refresh_cache-73d6be04-84dc-4b80-81f8-a9bbf9938051" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 05:06:35 np0005486808 nova_compute[259627]: 2025-10-14 09:06:35.385 2 DEBUG oslo_concurrency.lockutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquired lock "refresh_cache-73d6be04-84dc-4b80-81f8-a9bbf9938051" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 05:06:35 np0005486808 nova_compute[259627]: 2025-10-14 09:06:35.386 2 DEBUG nova.network.neutron [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 05:06:35 np0005486808 nova_compute[259627]: 2025-10-14 09:06:35.408 2 DEBUG nova.compute.manager [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Received event network-vif-plugged-deb48802-bfee-42af-882c-3632b1fbb2cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:06:35 np0005486808 nova_compute[259627]: 2025-10-14 09:06:35.409 2 DEBUG oslo_concurrency.lockutils [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:06:35 np0005486808 nova_compute[259627]: 2025-10-14 09:06:35.409 2 DEBUG oslo_concurrency.lockutils [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:06:35 np0005486808 nova_compute[259627]: 2025-10-14 09:06:35.410 2 DEBUG oslo_concurrency.lockutils [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:06:35 np0005486808 nova_compute[259627]: 2025-10-14 09:06:35.410 2 DEBUG nova.compute.manager [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Processing event network-vif-plugged-deb48802-bfee-42af-882c-3632b1fbb2cf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 05:06:35 np0005486808 nova_compute[259627]: 2025-10-14 09:06:35.411 2 DEBUG nova.compute.manager [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Received event network-changed-350a3bec-5dbd-4a83-8d80-5796be0319fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:06:35 np0005486808 nova_compute[259627]: 2025-10-14 09:06:35.412 2 DEBUG nova.compute.manager [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Refreshing instance network info cache due to event network-changed-350a3bec-5dbd-4a83-8d80-5796be0319fd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 05:06:35 np0005486808 nova_compute[259627]: 2025-10-14 09:06:35.412 2 DEBUG oslo_concurrency.lockutils [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 05:06:35 np0005486808 nova_compute[259627]: 2025-10-14 09:06:35.413 2 DEBUG oslo_concurrency.lockutils [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 05:06:35 np0005486808 nova_compute[259627]: 2025-10-14 09:06:35.413 2 DEBUG nova.network.neutron [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Refreshing network info cache for port 350a3bec-5dbd-4a83-8d80-5796be0319fd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 05:06:35 np0005486808 nova_compute[259627]: 2025-10-14 09:06:35.416 2 DEBUG nova.compute.manager [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:06:35 np0005486808 nova_compute[259627]: 2025-10-14 09:06:35.422 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432795.421274, dd55716e-2330-42a4-8963-33bdc9c7bbf8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:06:35 np0005486808 nova_compute[259627]: 2025-10-14 09:06:35.422 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:06:35 np0005486808 nova_compute[259627]: 2025-10-14 09:06:35.424 2 DEBUG nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:06:35 np0005486808 nova_compute[259627]: 2025-10-14 09:06:35.431 2 INFO nova.virt.libvirt.driver [-] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Instance spawned successfully.#033[00m
Oct 14 05:06:35 np0005486808 nova_compute[259627]: 2025-10-14 09:06:35.431 2 DEBUG nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:06:35 np0005486808 nova_compute[259627]: 2025-10-14 09:06:35.449 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:06:35 np0005486808 nova_compute[259627]: 2025-10-14 09:06:35.471 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:06:35 np0005486808 nova_compute[259627]: 2025-10-14 09:06:35.475 2 DEBUG nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:06:35 np0005486808 nova_compute[259627]: 2025-10-14 09:06:35.476 2 DEBUG nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:06:35 np0005486808 nova_compute[259627]: 2025-10-14 09:06:35.477 2 DEBUG nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:06:35 np0005486808 nova_compute[259627]: 2025-10-14 09:06:35.477 2 DEBUG nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:06:35 np0005486808 nova_compute[259627]: 2025-10-14 09:06:35.478 2 DEBUG nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:06:35 np0005486808 nova_compute[259627]: 2025-10-14 09:06:35.478 2 DEBUG nova.virt.libvirt.driver [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:06:35 np0005486808 nova_compute[259627]: 2025-10-14 09:06:35.527 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:06:35 np0005486808 nova_compute[259627]: 2025-10-14 09:06:35.568 2 INFO nova.compute.manager [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Took 11.37 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:06:35 np0005486808 nova_compute[259627]: 2025-10-14 09:06:35.569 2 DEBUG nova.compute.manager [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:06:35 np0005486808 nova_compute[259627]: 2025-10-14 09:06:35.690 2 INFO nova.compute.manager [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Took 12.58 seconds to build instance.#033[00m
Oct 14 05:06:35 np0005486808 nova_compute[259627]: 2025-10-14 09:06:35.705 2 DEBUG nova.network.neutron [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:06:35 np0005486808 nova_compute[259627]: 2025-10-14 09:06:35.715 2 DEBUG oslo_concurrency.lockutils [None req-0fe27df0-33db-45d6-aac4-9b90abf8d6c3 d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:35 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1530: 305 pgs: 305 active+clean; 339 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 301 op/s
Oct 14 05:06:35 np0005486808 nova_compute[259627]: 2025-10-14 09:06:35.970 2 DEBUG nova.network.neutron [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Successfully updated port: df1ec4d8-f543-4899-9d98-b60a6a46cc7c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:06:35 np0005486808 nova_compute[259627]: 2025-10-14 09:06:35.999 2 DEBUG oslo_concurrency.lockutils [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:06:36 np0005486808 nova_compute[259627]: 2025-10-14 09:06:36.000 2 DEBUG oslo_concurrency.lockutils [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquired lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:06:36 np0005486808 nova_compute[259627]: 2025-10-14 09:06:36.000 2 DEBUG nova.network.neutron [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:06:36 np0005486808 nova_compute[259627]: 2025-10-14 09:06:36.108 2 DEBUG nova.compute.manager [req-d47760ce-3470-490d-9a3f-3985bd84be00 req-079a45d1-3533-480f-97f0-fa9a542ef5cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Received event network-changed-26dd6404-2018-471b-a387-2b045d236164 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:06:36 np0005486808 nova_compute[259627]: 2025-10-14 09:06:36.108 2 DEBUG nova.compute.manager [req-d47760ce-3470-490d-9a3f-3985bd84be00 req-079a45d1-3533-480f-97f0-fa9a542ef5cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Refreshing instance network info cache due to event network-changed-26dd6404-2018-471b-a387-2b045d236164. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:06:36 np0005486808 nova_compute[259627]: 2025-10-14 09:06:36.109 2 DEBUG oslo_concurrency.lockutils [req-d47760ce-3470-490d-9a3f-3985bd84be00 req-079a45d1-3533-480f-97f0-fa9a542ef5cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-73d6be04-84dc-4b80-81f8-a9bbf9938051" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:06:36 np0005486808 nova_compute[259627]: 2025-10-14 09:06:36.177 2 WARNING nova.network.neutron [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] fc2d149f-aebf-406a-aed2-5161dd22b079 already exists in list: networks containing: ['fc2d149f-aebf-406a-aed2-5161dd22b079']. ignoring it#033[00m
Oct 14 05:06:36 np0005486808 nova_compute[259627]: 2025-10-14 09:06:36.628 2 DEBUG nova.network.neutron [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Updating instance_info_cache with network_info: [{"id": "26dd6404-2018-471b-a387-2b045d236164", "address": "fa:16:3e:95:a0:6d", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26dd6404-20", "ovs_interfaceid": "26dd6404-2018-471b-a387-2b045d236164", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:06:36 np0005486808 nova_compute[259627]: 2025-10-14 09:06:36.635 2 DEBUG nova.network.neutron [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Updated VIF entry in instance network info cache for port 350a3bec-5dbd-4a83-8d80-5796be0319fd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:06:36 np0005486808 nova_compute[259627]: 2025-10-14 09:06:36.635 2 DEBUG nova.network.neutron [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Updating instance_info_cache with network_info: [{"id": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "address": "fa:16:3e:9c:3f:ae", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap350a3bec-5d", "ovs_interfaceid": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:06:36 np0005486808 nova_compute[259627]: 2025-10-14 09:06:36.651 2 DEBUG oslo_concurrency.lockutils [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:06:36 np0005486808 nova_compute[259627]: 2025-10-14 09:06:36.651 2 DEBUG nova.compute.manager [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Received event network-changed-dffa5a1f-657b-498e-bbe5-6540fead7fb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:06:36 np0005486808 nova_compute[259627]: 2025-10-14 09:06:36.652 2 DEBUG nova.compute.manager [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Refreshing instance network info cache due to event network-changed-dffa5a1f-657b-498e-bbe5-6540fead7fb6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:06:36 np0005486808 nova_compute[259627]: 2025-10-14 09:06:36.652 2 DEBUG oslo_concurrency.lockutils [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:06:36 np0005486808 nova_compute[259627]: 2025-10-14 09:06:36.653 2 DEBUG oslo_concurrency.lockutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Releasing lock "refresh_cache-73d6be04-84dc-4b80-81f8-a9bbf9938051" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:06:36 np0005486808 nova_compute[259627]: 2025-10-14 09:06:36.654 2 DEBUG nova.compute.manager [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Instance network_info: |[{"id": "26dd6404-2018-471b-a387-2b045d236164", "address": "fa:16:3e:95:a0:6d", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26dd6404-20", "ovs_interfaceid": "26dd6404-2018-471b-a387-2b045d236164", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:06:36 np0005486808 nova_compute[259627]: 2025-10-14 09:06:36.654 2 DEBUG oslo_concurrency.lockutils [req-d47760ce-3470-490d-9a3f-3985bd84be00 req-079a45d1-3533-480f-97f0-fa9a542ef5cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-73d6be04-84dc-4b80-81f8-a9bbf9938051" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:06:36 np0005486808 nova_compute[259627]: 2025-10-14 09:06:36.655 2 DEBUG nova.network.neutron [req-d47760ce-3470-490d-9a3f-3985bd84be00 req-079a45d1-3533-480f-97f0-fa9a542ef5cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Refreshing network info cache for port 26dd6404-2018-471b-a387-2b045d236164 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:06:36 np0005486808 nova_compute[259627]: 2025-10-14 09:06:36.658 2 DEBUG nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Start _get_guest_xml network_info=[{"id": "26dd6404-2018-471b-a387-2b045d236164", "address": "fa:16:3e:95:a0:6d", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26dd6404-20", "ovs_interfaceid": "26dd6404-2018-471b-a387-2b045d236164", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:06:36 np0005486808 nova_compute[259627]: 2025-10-14 09:06:36.662 2 WARNING nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:06:36 np0005486808 nova_compute[259627]: 2025-10-14 09:06:36.666 2 DEBUG nova.virt.libvirt.host [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:06:36 np0005486808 nova_compute[259627]: 2025-10-14 09:06:36.667 2 DEBUG nova.virt.libvirt.host [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:06:36 np0005486808 nova_compute[259627]: 2025-10-14 09:06:36.673 2 DEBUG nova.virt.libvirt.host [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:06:36 np0005486808 nova_compute[259627]: 2025-10-14 09:06:36.674 2 DEBUG nova.virt.libvirt.host [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:06:36 np0005486808 nova_compute[259627]: 2025-10-14 09:06:36.674 2 DEBUG nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:06:36 np0005486808 nova_compute[259627]: 2025-10-14 09:06:36.675 2 DEBUG nova.virt.hardware [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:06:36 np0005486808 nova_compute[259627]: 2025-10-14 09:06:36.675 2 DEBUG nova.virt.hardware [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:06:36 np0005486808 nova_compute[259627]: 2025-10-14 09:06:36.676 2 DEBUG nova.virt.hardware [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:06:36 np0005486808 nova_compute[259627]: 2025-10-14 09:06:36.676 2 DEBUG nova.virt.hardware [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:06:36 np0005486808 nova_compute[259627]: 2025-10-14 09:06:36.676 2 DEBUG nova.virt.hardware [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:06:36 np0005486808 nova_compute[259627]: 2025-10-14 09:06:36.677 2 DEBUG nova.virt.hardware [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:06:36 np0005486808 nova_compute[259627]: 2025-10-14 09:06:36.677 2 DEBUG nova.virt.hardware [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:06:36 np0005486808 nova_compute[259627]: 2025-10-14 09:06:36.677 2 DEBUG nova.virt.hardware [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:06:36 np0005486808 nova_compute[259627]: 2025-10-14 09:06:36.677 2 DEBUG nova.virt.hardware [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:06:36 np0005486808 nova_compute[259627]: 2025-10-14 09:06:36.678 2 DEBUG nova.virt.hardware [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:06:36 np0005486808 nova_compute[259627]: 2025-10-14 09:06:36.678 2 DEBUG nova.virt.hardware [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:06:36 np0005486808 nova_compute[259627]: 2025-10-14 09:06:36.680 2 DEBUG oslo_concurrency.processutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:06:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:06:37 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/302745539' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.163 2 DEBUG oslo_concurrency.processutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.184 2 DEBUG nova.storage.rbd_utils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 73d6be04-84dc-4b80-81f8-a9bbf9938051_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.188 2 DEBUG oslo_concurrency.processutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:06:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:06:37 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/80336960' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.723 2 DEBUG oslo_concurrency.processutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.725 2 DEBUG nova.virt.libvirt.vif [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:06:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-715549256',display_name='tempest-DeleteServersTestJSON-server-715549256',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-715549256',id=62,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d39581efff7d48fb83412ca1f615d412',ramdisk_id='',reservation_id='r-vdf1bahn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-555285866',owner_user_name='tempest-DeleteServersTestJSON-555285866-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:06:32Z,user_data=None,user_id='a72439ec330b476ca4bb358682159b61',uuid=73d6be04-84dc-4b80-81f8-a9bbf9938051,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "26dd6404-2018-471b-a387-2b045d236164", "address": "fa:16:3e:95:a0:6d", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26dd6404-20", "ovs_interfaceid": "26dd6404-2018-471b-a387-2b045d236164", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.726 2 DEBUG nova.network.os_vif_util [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converting VIF {"id": "26dd6404-2018-471b-a387-2b045d236164", "address": "fa:16:3e:95:a0:6d", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26dd6404-20", "ovs_interfaceid": "26dd6404-2018-471b-a387-2b045d236164", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.727 2 DEBUG nova.network.os_vif_util [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:a0:6d,bridge_name='br-int',has_traffic_filtering=True,id=26dd6404-2018-471b-a387-2b045d236164,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26dd6404-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.729 2 DEBUG nova.objects.instance [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lazy-loading 'pci_devices' on Instance uuid 73d6be04-84dc-4b80-81f8-a9bbf9938051 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:06:37 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1531: 305 pgs: 305 active+clean; 339 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.744 2 DEBUG nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:06:37 np0005486808 nova_compute[259627]:  <uuid>73d6be04-84dc-4b80-81f8-a9bbf9938051</uuid>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:  <name>instance-0000003e</name>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:06:37 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:      <nova:name>tempest-DeleteServersTestJSON-server-715549256</nova:name>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:06:36</nova:creationTime>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:06:37 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:        <nova:user uuid="a72439ec330b476ca4bb358682159b61">tempest-DeleteServersTestJSON-555285866-project-member</nova:user>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:        <nova:project uuid="d39581efff7d48fb83412ca1f615d412">tempest-DeleteServersTestJSON-555285866</nova:project>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:        <nova:port uuid="26dd6404-2018-471b-a387-2b045d236164">
Oct 14 05:06:37 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:      <entry name="serial">73d6be04-84dc-4b80-81f8-a9bbf9938051</entry>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:      <entry name="uuid">73d6be04-84dc-4b80-81f8-a9bbf9938051</entry>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:06:37 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/73d6be04-84dc-4b80-81f8-a9bbf9938051_disk">
Oct 14 05:06:37 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:06:37 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:06:37 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/73d6be04-84dc-4b80-81f8-a9bbf9938051_disk.config">
Oct 14 05:06:37 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:06:37 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:06:37 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:95:a0:6d"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:      <target dev="tap26dd6404-20"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:06:37 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/73d6be04-84dc-4b80-81f8-a9bbf9938051/console.log" append="off"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:06:37 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:06:37 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:06:37 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:06:37 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.746 2 DEBUG nova.compute.manager [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Preparing to wait for external event network-vif-plugged-26dd6404-2018-471b-a387-2b045d236164 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.747 2 DEBUG oslo_concurrency.lockutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "73d6be04-84dc-4b80-81f8-a9bbf9938051-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.747 2 DEBUG oslo_concurrency.lockutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "73d6be04-84dc-4b80-81f8-a9bbf9938051-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.748 2 DEBUG oslo_concurrency.lockutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "73d6be04-84dc-4b80-81f8-a9bbf9938051-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.750 2 DEBUG nova.virt.libvirt.vif [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:06:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-715549256',display_name='tempest-DeleteServersTestJSON-server-715549256',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-715549256',id=62,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d39581efff7d48fb83412ca1f615d412',ramdisk_id='',reservation_id='r-vdf1bahn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-555285866',owner_user_name='tempest-DeleteServersTestJSON-555285866-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:06:32Z,user_data=None,user_id='a72439ec330b476ca4bb358682159b61',uuid=73d6be04-84dc-4b80-81f8-a9bbf9938051,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "26dd6404-2018-471b-a387-2b045d236164", "address": "fa:16:3e:95:a0:6d", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26dd6404-20", "ovs_interfaceid": "26dd6404-2018-471b-a387-2b045d236164", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.750 2 DEBUG nova.network.os_vif_util [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converting VIF {"id": "26dd6404-2018-471b-a387-2b045d236164", "address": "fa:16:3e:95:a0:6d", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26dd6404-20", "ovs_interfaceid": "26dd6404-2018-471b-a387-2b045d236164", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.752 2 DEBUG nova.network.os_vif_util [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:a0:6d,bridge_name='br-int',has_traffic_filtering=True,id=26dd6404-2018-471b-a387-2b045d236164,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26dd6404-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.752 2 DEBUG os_vif [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:a0:6d,bridge_name='br-int',has_traffic_filtering=True,id=26dd6404-2018-471b-a387-2b045d236164,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26dd6404-20') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.754 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.755 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.763 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap26dd6404-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.763 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap26dd6404-20, col_values=(('external_ids', {'iface-id': '26dd6404-2018-471b-a387-2b045d236164', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:95:a0:6d', 'vm-uuid': '73d6be04-84dc-4b80-81f8-a9bbf9938051'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:37 np0005486808 NetworkManager[44885]: <info>  [1760432797.7655] manager: (tap26dd6404-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/260)
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.772 2 INFO os_vif [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:a0:6d,bridge_name='br-int',has_traffic_filtering=True,id=26dd6404-2018-471b-a387-2b045d236164,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26dd6404-20')#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.815 2 DEBUG nova.network.neutron [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Updating instance_info_cache with network_info: [{"id": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "address": "fa:16:3e:b4:40:de", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdffa5a1f-65", "ovs_interfaceid": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.819 2 DEBUG nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.820 2 DEBUG nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.820 2 DEBUG nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] No VIF found with MAC fa:16:3e:95:a0:6d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.820 2 INFO nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Using config drive#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.838 2 DEBUG nova.storage.rbd_utils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 73d6be04-84dc-4b80-81f8-a9bbf9938051_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.844 2 DEBUG oslo_concurrency.lockutils [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Releasing lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.844 2 DEBUG oslo_concurrency.lockutils [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.845 2 DEBUG nova.network.neutron [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Refreshing network info cache for port dffa5a1f-657b-498e-bbe5-6540fead7fb6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.849 2 DEBUG nova.virt.libvirt.vif [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:06:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1758133214',display_name='tempest-tempest.common.compute-instance-1758133214',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1758133214',id=57,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP68NH14S3HRbU1S6/VMc0nOUsEVOZ3tz6VZKHo1Z+OFyqqDVRUrwJaEU67IXpbnQXXmmP8KKv7jtqoNjZqCUWnJrm5ENg2LKrvLCxr2iywwS6vcmTe+HUgI45UTIKYhOw==',key_name='tempest-keypair-1916836153',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:06:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-6x4gt3p2',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:06:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.850 2 DEBUG nova.network.os_vif_util [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.850 2 DEBUG nova.network.os_vif_util [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:db:60,bridge_name='br-int',has_traffic_filtering=True,id=df1ec4d8-f543-4899-9d98-b60a6a46cc7c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf1ec4d8-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.851 2 DEBUG os_vif [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:db:60,bridge_name='br-int',has_traffic_filtering=True,id=df1ec4d8-f543-4899-9d98-b60a6a46cc7c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf1ec4d8-f5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.852 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.853 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.868 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdf1ec4d8-f5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.870 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdf1ec4d8-f5, col_values=(('external_ids', {'iface-id': 'df1ec4d8-f543-4899-9d98-b60a6a46cc7c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:38:db:60', 'vm-uuid': 'a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:37 np0005486808 NetworkManager[44885]: <info>  [1760432797.9247] manager: (tapdf1ec4d8-f5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/261)
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.941 2 DEBUG nova.compute.manager [req-8916f69c-4dea-474f-a0c6-7140f1c881e8 req-c7ddecf3-e5dd-44ba-9938-a2c44ca1e366 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Received event network-changed-df1ec4d8-f543-4899-9d98-b60a6a46cc7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.941 2 DEBUG nova.compute.manager [req-8916f69c-4dea-474f-a0c6-7140f1c881e8 req-c7ddecf3-e5dd-44ba-9938-a2c44ca1e366 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Refreshing instance network info cache due to event network-changed-df1ec4d8-f543-4899-9d98-b60a6a46cc7c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.941 2 DEBUG oslo_concurrency.lockutils [req-8916f69c-4dea-474f-a0c6-7140f1c881e8 req-c7ddecf3-e5dd-44ba-9938-a2c44ca1e366 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.943 2 INFO os_vif [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:db:60,bridge_name='br-int',has_traffic_filtering=True,id=df1ec4d8-f543-4899-9d98-b60a6a46cc7c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf1ec4d8-f5')#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.944 2 DEBUG nova.virt.libvirt.vif [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:06:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1758133214',display_name='tempest-tempest.common.compute-instance-1758133214',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1758133214',id=57,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP68NH14S3HRbU1S6/VMc0nOUsEVOZ3tz6VZKHo1Z+OFyqqDVRUrwJaEU67IXpbnQXXmmP8KKv7jtqoNjZqCUWnJrm5ENg2LKrvLCxr2iywwS6vcmTe+HUgI45UTIKYhOw==',key_name='tempest-keypair-1916836153',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:06:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-6x4gt3p2',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:06:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.945 2 DEBUG nova.network.os_vif_util [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.945 2 DEBUG nova.network.os_vif_util [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:db:60,bridge_name='br-int',has_traffic_filtering=True,id=df1ec4d8-f543-4899-9d98-b60a6a46cc7c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf1ec4d8-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.949 2 DEBUG nova.virt.libvirt.guest [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] attach device xml: <interface type="ethernet">
Oct 14 05:06:37 np0005486808 nova_compute[259627]:  <mac address="fa:16:3e:38:db:60"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:  <model type="virtio"/>
Oct 14 05:06:37 np0005486808 kernel: tapdf1ec4d8-f5: entered promiscuous mode
Oct 14 05:06:37 np0005486808 nova_compute[259627]:  <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:  <mtu size="1442"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]:  <target dev="tapdf1ec4d8-f5"/>
Oct 14 05:06:37 np0005486808 nova_compute[259627]: </interface>
Oct 14 05:06:37 np0005486808 nova_compute[259627]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct 14 05:06:37 np0005486808 NetworkManager[44885]: <info>  [1760432797.9637] manager: (tapdf1ec4d8-f5): new Tun device (/org/freedesktop/NetworkManager/Devices/262)
Oct 14 05:06:37 np0005486808 nova_compute[259627]: 2025-10-14 09:06:37.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:37 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:37Z|00598|binding|INFO|Claiming lport df1ec4d8-f543-4899-9d98-b60a6a46cc7c for this chassis.
Oct 14 05:06:37 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:37Z|00599|binding|INFO|df1ec4d8-f543-4899-9d98-b60a6a46cc7c: Claiming fa:16:3e:38:db:60 10.100.0.12
Oct 14 05:06:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:37.981 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:db:60 10.100.0.12'], port_security=['fa:16:3e:38:db:60 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-278583686', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2d149f-aebf-406a-aed2-5161dd22b079', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-278583686', 'neutron:project_id': '8945d892653642ac9c3d894f8bc3619d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '643bc47b-384e-4818-a06b-8ecfda630b09', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e933e805-d9b6-497a-9ba3-5f7153390420, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=df1ec4d8-f543-4899-9d98-b60a6a46cc7c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:06:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:37.985 162547 INFO neutron.agent.ovn.metadata.agent [-] Port df1ec4d8-f543-4899-9d98-b60a6a46cc7c in datapath fc2d149f-aebf-406a-aed2-5161dd22b079 bound to our chassis#033[00m
Oct 14 05:06:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:37.987 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc2d149f-aebf-406a-aed2-5161dd22b079#033[00m
Oct 14 05:06:38 np0005486808 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 05:06:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.013 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b5a8cd74-a8a2-4a16-8337-49ba6e3f3820]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:38 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:38Z|00600|binding|INFO|Setting lport df1ec4d8-f543-4899-9d98-b60a6a46cc7c ovn-installed in OVS
Oct 14 05:06:38 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:38Z|00601|binding|INFO|Setting lport df1ec4d8-f543-4899-9d98-b60a6a46cc7c up in Southbound
Oct 14 05:06:38 np0005486808 nova_compute[259627]: 2025-10-14 09:06:38.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:38 np0005486808 systemd-udevd[324432]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:06:38 np0005486808 nova_compute[259627]: 2025-10-14 09:06:38.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:38 np0005486808 NetworkManager[44885]: <info>  [1760432798.0514] device (tapdf1ec4d8-f5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:06:38 np0005486808 NetworkManager[44885]: <info>  [1760432798.0526] device (tapdf1ec4d8-f5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:06:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.067 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5439e85b-90ea-4e69-8628-9ddc1f8336d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.074 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[58d03d06-0dd6-4bd5-a0a8-26f9f7308957]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:38 np0005486808 nova_compute[259627]: 2025-10-14 09:06:38.094 2 DEBUG nova.virt.libvirt.driver [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:06:38 np0005486808 nova_compute[259627]: 2025-10-14 09:06:38.095 2 DEBUG nova.virt.libvirt.driver [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:06:38 np0005486808 nova_compute[259627]: 2025-10-14 09:06:38.095 2 DEBUG nova.virt.libvirt.driver [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No VIF found with MAC fa:16:3e:b4:40:de, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:06:38 np0005486808 nova_compute[259627]: 2025-10-14 09:06:38.095 2 DEBUG nova.virt.libvirt.driver [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No VIF found with MAC fa:16:3e:38:db:60, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:06:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.118 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f099b6fa-5cb7-42bd-8197-79369eac313c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:38 np0005486808 nova_compute[259627]: 2025-10-14 09:06:38.139 2 DEBUG nova.virt.libvirt.guest [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:06:38 np0005486808 nova_compute[259627]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:06:38 np0005486808 nova_compute[259627]:  <nova:name>tempest-tempest.common.compute-instance-1758133214</nova:name>
Oct 14 05:06:38 np0005486808 nova_compute[259627]:  <nova:creationTime>2025-10-14 09:06:38</nova:creationTime>
Oct 14 05:06:38 np0005486808 nova_compute[259627]:  <nova:flavor name="m1.nano">
Oct 14 05:06:38 np0005486808 nova_compute[259627]:    <nova:memory>128</nova:memory>
Oct 14 05:06:38 np0005486808 nova_compute[259627]:    <nova:disk>1</nova:disk>
Oct 14 05:06:38 np0005486808 nova_compute[259627]:    <nova:swap>0</nova:swap>
Oct 14 05:06:38 np0005486808 nova_compute[259627]:    <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:06:38 np0005486808 nova_compute[259627]:    <nova:vcpus>1</nova:vcpus>
Oct 14 05:06:38 np0005486808 nova_compute[259627]:  </nova:flavor>
Oct 14 05:06:38 np0005486808 nova_compute[259627]:  <nova:owner>
Oct 14 05:06:38 np0005486808 nova_compute[259627]:    <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 05:06:38 np0005486808 nova_compute[259627]:    <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 05:06:38 np0005486808 nova_compute[259627]:  </nova:owner>
Oct 14 05:06:38 np0005486808 nova_compute[259627]:  <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:06:38 np0005486808 nova_compute[259627]:  <nova:ports>
Oct 14 05:06:38 np0005486808 nova_compute[259627]:    <nova:port uuid="dffa5a1f-657b-498e-bbe5-6540fead7fb6">
Oct 14 05:06:38 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 14 05:06:38 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:06:38 np0005486808 nova_compute[259627]:    <nova:port uuid="df1ec4d8-f543-4899-9d98-b60a6a46cc7c">
Oct 14 05:06:38 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 14 05:06:38 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:06:38 np0005486808 nova_compute[259627]:  </nova:ports>
Oct 14 05:06:38 np0005486808 nova_compute[259627]: </nova:instance>
Oct 14 05:06:38 np0005486808 nova_compute[259627]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct 14 05:06:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.156 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[039723d2-1aef-475f-b20c-2b66a2f000a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc2d149f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:e7:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 654854, 'reachable_time': 41314, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324441, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:38 np0005486808 nova_compute[259627]: 2025-10-14 09:06:38.172 2 DEBUG oslo_concurrency.lockutils [None req-4c94f742-d115-48b0-b0ee-0fd95df438bc 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "interface-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-df1ec4d8-f543-4899-9d98-b60a6a46cc7c" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 4.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.183 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2be36841-9e71-4797-8539-d7357b15502b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 654865, 'tstamp': 654865}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324442, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 654867, 'tstamp': 654867}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324442, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.189 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc2d149f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:38 np0005486808 nova_compute[259627]: 2025-10-14 09:06:38.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:38 np0005486808 nova_compute[259627]: 2025-10-14 09:06:38.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.196 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc2d149f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.197 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:06:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.197 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc2d149f-a0, col_values=(('external_ids', {'iface-id': '156432ee-35b5-40a9-aded-8066933d9972'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.197 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:06:38 np0005486808 nova_compute[259627]: 2025-10-14 09:06:38.290 2 INFO nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Creating config drive at /var/lib/nova/instances/73d6be04-84dc-4b80-81f8-a9bbf9938051/disk.config#033[00m
Oct 14 05:06:38 np0005486808 nova_compute[259627]: 2025-10-14 09:06:38.300 2 DEBUG oslo_concurrency.processutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/73d6be04-84dc-4b80-81f8-a9bbf9938051/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzbfwhdte execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:06:38 np0005486808 nova_compute[259627]: 2025-10-14 09:06:38.452 2 DEBUG nova.network.neutron [req-d47760ce-3470-490d-9a3f-3985bd84be00 req-079a45d1-3533-480f-97f0-fa9a542ef5cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Updated VIF entry in instance network info cache for port 26dd6404-2018-471b-a387-2b045d236164. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:06:38 np0005486808 nova_compute[259627]: 2025-10-14 09:06:38.453 2 DEBUG nova.network.neutron [req-d47760ce-3470-490d-9a3f-3985bd84be00 req-079a45d1-3533-480f-97f0-fa9a542ef5cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Updating instance_info_cache with network_info: [{"id": "26dd6404-2018-471b-a387-2b045d236164", "address": "fa:16:3e:95:a0:6d", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26dd6404-20", "ovs_interfaceid": "26dd6404-2018-471b-a387-2b045d236164", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:06:38 np0005486808 nova_compute[259627]: 2025-10-14 09:06:38.463 2 DEBUG oslo_concurrency.processutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/73d6be04-84dc-4b80-81f8-a9bbf9938051/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzbfwhdte" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:38 np0005486808 nova_compute[259627]: 2025-10-14 09:06:38.504 2 DEBUG nova.storage.rbd_utils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 73d6be04-84dc-4b80-81f8-a9bbf9938051_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:06:38 np0005486808 nova_compute[259627]: 2025-10-14 09:06:38.510 2 DEBUG oslo_concurrency.processutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/73d6be04-84dc-4b80-81f8-a9bbf9938051/disk.config 73d6be04-84dc-4b80-81f8-a9bbf9938051_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:06:38 np0005486808 nova_compute[259627]: 2025-10-14 09:06:38.579 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432783.5275261, 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:06:38 np0005486808 nova_compute[259627]: 2025-10-14 09:06:38.580 2 INFO nova.compute.manager [-] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:06:38 np0005486808 nova_compute[259627]: 2025-10-14 09:06:38.583 2 DEBUG oslo_concurrency.lockutils [req-d47760ce-3470-490d-9a3f-3985bd84be00 req-079a45d1-3533-480f-97f0-fa9a542ef5cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-73d6be04-84dc-4b80-81f8-a9bbf9938051" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:06:38 np0005486808 nova_compute[259627]: 2025-10-14 09:06:38.603 2 DEBUG nova.compute.manager [None req-58735ac5-659f-4bd0-84bd-3d42920154d2 - - - - - -] [instance: 97dc4b0f-55a5-44d7-bf0e-0b6fea3565ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:06:38 np0005486808 podman[324466]: 2025-10-14 09:06:38.669888145 +0000 UTC m=+0.074126006 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 14 05:06:38 np0005486808 podman[324467]: 2025-10-14 09:06:38.682291691 +0000 UTC m=+0.069082492 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, org.label-schema.schema-version=1.0)
Oct 14 05:06:38 np0005486808 nova_compute[259627]: 2025-10-14 09:06:38.733 2 DEBUG oslo_concurrency.processutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/73d6be04-84dc-4b80-81f8-a9bbf9938051/disk.config 73d6be04-84dc-4b80-81f8-a9bbf9938051_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.223s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:38 np0005486808 nova_compute[259627]: 2025-10-14 09:06:38.734 2 INFO nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Deleting local config drive /var/lib/nova/instances/73d6be04-84dc-4b80-81f8-a9bbf9938051/disk.config because it was imported into RBD.#033[00m
Oct 14 05:06:38 np0005486808 kernel: tap26dd6404-20: entered promiscuous mode
Oct 14 05:06:38 np0005486808 NetworkManager[44885]: <info>  [1760432798.7928] manager: (tap26dd6404-20): new Tun device (/org/freedesktop/NetworkManager/Devices/263)
Oct 14 05:06:38 np0005486808 systemd-udevd[324436]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:06:38 np0005486808 nova_compute[259627]: 2025-10-14 09:06:38.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:38 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:38Z|00602|binding|INFO|Claiming lport 26dd6404-2018-471b-a387-2b045d236164 for this chassis.
Oct 14 05:06:38 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:38Z|00603|binding|INFO|26dd6404-2018-471b-a387-2b045d236164: Claiming fa:16:3e:95:a0:6d 10.100.0.12
Oct 14 05:06:38 np0005486808 nova_compute[259627]: 2025-10-14 09:06:38.802 2 DEBUG nova.compute.manager [req-05afcb3c-8612-4b17-8d60-5a7575edb2d5 req-75197282-af0e-4d08-91e6-296b5b1deac6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Received event network-vif-plugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:06:38 np0005486808 nova_compute[259627]: 2025-10-14 09:06:38.803 2 DEBUG oslo_concurrency.lockutils [req-05afcb3c-8612-4b17-8d60-5a7575edb2d5 req-75197282-af0e-4d08-91e6-296b5b1deac6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:38 np0005486808 nova_compute[259627]: 2025-10-14 09:06:38.803 2 DEBUG oslo_concurrency.lockutils [req-05afcb3c-8612-4b17-8d60-5a7575edb2d5 req-75197282-af0e-4d08-91e6-296b5b1deac6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:38 np0005486808 nova_compute[259627]: 2025-10-14 09:06:38.803 2 DEBUG oslo_concurrency.lockutils [req-05afcb3c-8612-4b17-8d60-5a7575edb2d5 req-75197282-af0e-4d08-91e6-296b5b1deac6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:38 np0005486808 nova_compute[259627]: 2025-10-14 09:06:38.803 2 DEBUG nova.compute.manager [req-05afcb3c-8612-4b17-8d60-5a7575edb2d5 req-75197282-af0e-4d08-91e6-296b5b1deac6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] No waiting events found dispatching network-vif-plugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:06:38 np0005486808 nova_compute[259627]: 2025-10-14 09:06:38.803 2 WARNING nova.compute.manager [req-05afcb3c-8612-4b17-8d60-5a7575edb2d5 req-75197282-af0e-4d08-91e6-296b5b1deac6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Received unexpected event network-vif-plugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c for instance with vm_state active and task_state None.#033[00m
Oct 14 05:06:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.808 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:a0:6d 10.100.0.12'], port_security=['fa:16:3e:95:a0:6d 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '73d6be04-84dc-4b80-81f8-a9bbf9938051', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd39581efff7d48fb83412ca1f615d412', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dd5e08c6-d8c1-45fb-85db-6ce5c46fea39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c165cf5-3bad-4110-8321-7f7bda9723ad, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=26dd6404-2018-471b-a387-2b045d236164) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:06:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.809 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 26dd6404-2018-471b-a387-2b045d236164 in datapath 0a07d59e-be8b-4d41-a103-fb5a64bf6f88 bound to our chassis#033[00m
Oct 14 05:06:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.810 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0a07d59e-be8b-4d41-a103-fb5a64bf6f88#033[00m
Oct 14 05:06:38 np0005486808 NetworkManager[44885]: <info>  [1760432798.8134] device (tap26dd6404-20): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:06:38 np0005486808 NetworkManager[44885]: <info>  [1760432798.8154] device (tap26dd6404-20): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:06:38 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:38Z|00604|binding|INFO|Setting lport 26dd6404-2018-471b-a387-2b045d236164 ovn-installed in OVS
Oct 14 05:06:38 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:38Z|00605|binding|INFO|Setting lport 26dd6404-2018-471b-a387-2b045d236164 up in Southbound
Oct 14 05:06:38 np0005486808 nova_compute[259627]: 2025-10-14 09:06:38.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:38 np0005486808 nova_compute[259627]: 2025-10-14 09:06:38.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.822 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[68b12a3a-99e7-4fa2-bfa4-4bed2c19b994]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.829 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0a07d59e-b1 in ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:06:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.835 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0a07d59e-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:06:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.835 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9773d949-5c64-43c9-8dd0-b8d817c42a4c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:38 np0005486808 systemd-machined[214636]: New machine qemu-76-instance-0000003e.
Oct 14 05:06:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.844 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f351b6ed-36e0-43d7-abed-8787a619dffb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.858 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[618d36e3-75bb-4ed3-aa08-53c4a9be5021]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:38 np0005486808 systemd[1]: Started Virtual Machine qemu-76-instance-0000003e.
Oct 14 05:06:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.882 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[05aa377c-da5f-4944-9a8b-44cacc5a80a1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.911 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[323b0c7d-bc87-440b-867a-282473219ece]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.917 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1bd4dde8-c93f-4f85-a8b8-a25398b1a4bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:38 np0005486808 NetworkManager[44885]: <info>  [1760432798.9193] manager: (tap0a07d59e-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/264)
Oct 14 05:06:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.962 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[bffeab60-ea6a-480a-bd05-07b9f32e072e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:38.965 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3978780b-1b50-4c72-9796-f2818b50a9a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:38 np0005486808 NetworkManager[44885]: <info>  [1760432798.9913] device (tap0a07d59e-b0): carrier: link connected
Oct 14 05:06:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:39.003 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ba71907c-c197-4b37-b215-8f06b68b8cf8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:39.022 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[627a1e21-f952-450b-8e9f-26d896d03d02]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0a07d59e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:2e:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 178], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 657775, 'reachable_time': 18074, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324565, 'error': None, 'target': 'ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:39.053 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1627ee3b-ea20-49d2-95d2-2e463f1d529b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefa:2e1a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 657775, 'tstamp': 657775}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324566, 'error': None, 'target': 'ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:39.076 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[12221c71-9daf-4522-8ba4-f6621128e9a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0a07d59e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:2e:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 178], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 657775, 'reachable_time': 18074, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 324567, 'error': None, 'target': 'ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:39 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:39Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9c:3f:ae 10.100.0.6
Oct 14 05:06:39 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:39Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9c:3f:ae 10.100.0.6
Oct 14 05:06:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:39.134 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3e581e01-437e-4c7c-8cab-7bd177e85c72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:39.246 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e1352d88-6aa1-4511-a46c-250e929c8a8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:39.248 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a07d59e-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:39.248 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:06:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:39.248 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0a07d59e-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:39 np0005486808 NetworkManager[44885]: <info>  [1760432799.2518] manager: (tap0a07d59e-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/265)
Oct 14 05:06:39 np0005486808 kernel: tap0a07d59e-b0: entered promiscuous mode
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:39.260 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0a07d59e-b0, col_values=(('external_ids', {'iface-id': '31ed66d8-7c3d-4486-83f3-5ccb9a199aa1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:39 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:39Z|00606|binding|INFO|Releasing lport 31ed66d8-7c3d-4486-83f3-5ccb9a199aa1 from this chassis (sb_readonly=0)
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:39.293 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0a07d59e-be8b-4d41-a103-fb5a64bf6f88.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0a07d59e-be8b-4d41-a103-fb5a64bf6f88.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:06:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:39.294 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8397ca08-0357-49a4-a8e6-16fbc0414dde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:39.295 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:06:39 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:06:39 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:06:39 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-0a07d59e-be8b-4d41-a103-fb5a64bf6f88
Oct 14 05:06:39 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:06:39 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:06:39 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:06:39 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/0a07d59e-be8b-4d41-a103-fb5a64bf6f88.pid.haproxy
Oct 14 05:06:39 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:06:39 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:06:39 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:06:39 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:06:39 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:06:39 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:06:39 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:06:39 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:06:39 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:06:39 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:06:39 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:06:39 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:06:39 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:06:39 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:06:39 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:06:39 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:06:39 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:06:39 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:06:39 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:06:39 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:06:39 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 0a07d59e-be8b-4d41-a103-fb5a64bf6f88
Oct 14 05:06:39 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:06:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:39.296 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'env', 'PROCESS_TAG=haproxy-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0a07d59e-be8b-4d41-a103-fb5a64bf6f88.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.308 2 DEBUG oslo_concurrency.lockutils [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "interface-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-df1ec4d8-f543-4899-9d98-b60a6a46cc7c" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.308 2 DEBUG oslo_concurrency.lockutils [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "interface-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-df1ec4d8-f543-4899-9d98-b60a6a46cc7c" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.328 2 DEBUG nova.objects.instance [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'flavor' on Instance uuid a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.352 2 DEBUG nova.virt.libvirt.vif [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:06:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1758133214',display_name='tempest-tempest.common.compute-instance-1758133214',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1758133214',id=57,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP68NH14S3HRbU1S6/VMc0nOUsEVOZ3tz6VZKHo1Z+OFyqqDVRUrwJaEU67IXpbnQXXmmP8KKv7jtqoNjZqCUWnJrm5ENg2LKrvLCxr2iywwS6vcmTe+HUgI45UTIKYhOw==',key_name='tempest-keypair-1916836153',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:06:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-6x4gt3p2',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:06:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.352 2 DEBUG nova.network.os_vif_util [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.361 2 DEBUG nova.network.os_vif_util [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:db:60,bridge_name='br-int',has_traffic_filtering=True,id=df1ec4d8-f543-4899-9d98-b60a6a46cc7c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf1ec4d8-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.366 2 DEBUG nova.virt.libvirt.guest [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:38:db:60"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdf1ec4d8-f5"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.369 2 DEBUG nova.virt.libvirt.guest [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:38:db:60"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdf1ec4d8-f5"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.374 2 DEBUG nova.virt.libvirt.driver [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Attempting to detach device tapdf1ec4d8-f5 from instance a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.374 2 DEBUG nova.virt.libvirt.guest [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] detach device xml: <interface type="ethernet">
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <mac address="fa:16:3e:38:db:60"/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <model type="virtio"/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <mtu size="1442"/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <target dev="tapdf1ec4d8-f5"/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]: </interface>
Oct 14 05:06:39 np0005486808 nova_compute[259627]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.383 2 DEBUG nova.virt.libvirt.guest [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:38:db:60"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdf1ec4d8-f5"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.386 2 DEBUG nova.virt.libvirt.guest [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:38:db:60"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdf1ec4d8-f5"/></interface>not found in domain: <domain type='kvm' id='72'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <name>instance-00000039</name>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <uuid>a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8</uuid>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <nova:name>tempest-tempest.common.compute-instance-1758133214</nova:name>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <nova:creationTime>2025-10-14 09:06:38</nova:creationTime>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <nova:flavor name="m1.nano">
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <nova:memory>128</nova:memory>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <nova:disk>1</nova:disk>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <nova:swap>0</nova:swap>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <nova:vcpus>1</nova:vcpus>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  </nova:flavor>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <nova:owner>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  </nova:owner>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <nova:ports>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <nova:port uuid="dffa5a1f-657b-498e-bbe5-6540fead7fb6">
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <nova:port uuid="df1ec4d8-f543-4899-9d98-b60a6a46cc7c">
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  </nova:ports>
Oct 14 05:06:39 np0005486808 nova_compute[259627]: </nova:instance>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <memory unit='KiB'>131072</memory>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <vcpu placement='static'>1</vcpu>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <resource>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <partition>/machine</partition>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  </resource>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <sysinfo type='smbios'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <entry name='manufacturer'>RDO</entry>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <entry name='product'>OpenStack Compute</entry>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <entry name='serial'>a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8</entry>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <entry name='uuid'>a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8</entry>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <entry name='family'>Virtual Machine</entry>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <boot dev='hd'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <smbios mode='sysinfo'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <vmcoreinfo state='on'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <cpu mode='custom' match='exact' check='full'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <model fallback='forbid'>EPYC-Rome</model>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <vendor>AMD</vendor>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='require' name='x2apic'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='require' name='tsc-deadline'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='require' name='hypervisor'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='require' name='tsc_adjust'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='require' name='spec-ctrl'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='require' name='stibp'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='require' name='arch-capabilities'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='require' name='ssbd'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='require' name='cmp_legacy'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='require' name='overflow-recov'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='require' name='succor'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='require' name='ibrs'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='require' name='amd-ssbd'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='require' name='virt-ssbd'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='disable' name='lbrv'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='disable' name='tsc-scale'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='disable' name='vmcb-clean'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='disable' name='flushbyasid'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='disable' name='pause-filter'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='disable' name='pfthreshold'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='disable' name='svme-addr-chk'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='require' name='lfence-always-serializing'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='require' name='rdctl-no'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='require' name='mds-no'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='require' name='pschange-mc-no'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='require' name='gds-no'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='require' name='rfds-no'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='disable' name='xsaves'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='disable' name='svm'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='require' name='topoext'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='disable' name='npt'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='disable' name='nrip-save'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <clock offset='utc'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <timer name='pit' tickpolicy='delay'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <timer name='rtc' tickpolicy='catchup'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <timer name='hpet' present='no'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <on_poweroff>destroy</on_poweroff>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <on_reboot>restart</on_reboot>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <on_crash>destroy</on_crash>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <disk type='network' device='disk'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <driver name='qemu' type='raw' cache='none'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <auth username='openstack'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:        <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <source protocol='rbd' name='vms/a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8_disk' index='2'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:        <host name='192.168.122.100' port='6789'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target dev='vda' bus='virtio'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='virtio-disk0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <disk type='network' device='cdrom'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <driver name='qemu' type='raw' cache='none'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <auth username='openstack'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:        <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <source protocol='rbd' name='vms/a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8_disk.config' index='1'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:        <host name='192.168.122.100' port='6789'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target dev='sda' bus='sata'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <readonly/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='sata0-0-0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='0' model='pcie-root'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pcie.0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='1' port='0x10'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.1'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='2' port='0x11'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.2'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='3' port='0x12'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.3'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='4' port='0x13'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.4'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='5' port='0x14'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.5'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='6' port='0x15'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.6'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='7' port='0x16'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.7'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='8' port='0x17'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.8'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='9' port='0x18'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.9'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='10' port='0x19'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.10'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='11' port='0x1a'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.11'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='12' port='0x1b'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.12'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='13' port='0x1c'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.13'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='14' port='0x1d'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.14'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='15' port='0x1e'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.15'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='16' port='0x1f'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.16'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='17' port='0x20'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.17'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='18' port='0x21'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.18'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='19' port='0x22'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.19'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='20' port='0x23'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.20'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='21' port='0x24'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.21'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='22' port='0x25'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.22'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='23' port='0x26'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.23'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='24' port='0x27'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.24'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='25' port='0x28'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.25'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-pci-bridge'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.26'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='usb'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='sata' index='0'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='ide'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <interface type='ethernet'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <mac address='fa:16:3e:b4:40:de'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target dev='tapdffa5a1f-65'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model type='virtio'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <driver name='vhost' rx_queue_size='512'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <mtu size='1442'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='net0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <interface type='ethernet'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <mac address='fa:16:3e:38:db:60'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target dev='tapdf1ec4d8-f5'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model type='virtio'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <driver name='vhost' rx_queue_size='512'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <mtu size='1442'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='net1'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <serial type='pty'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <source path='/dev/pts/0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <log file='/var/lib/nova/instances/a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8/console.log' append='off'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target type='isa-serial' port='0'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:        <model name='isa-serial'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      </target>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='serial0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <console type='pty' tty='/dev/pts/0'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <source path='/dev/pts/0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <log file='/var/lib/nova/instances/a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8/console.log' append='off'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target type='serial' port='0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='serial0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </console>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <input type='tablet' bus='usb'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='input0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='usb' bus='0' port='1'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <input type='mouse' bus='ps2'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='input1'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <input type='keyboard' bus='ps2'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='input2'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <listen type='address' address='::0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </graphics>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <audio id='1' type='none'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model type='virtio' heads='1' primary='yes'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='video0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <watchdog model='itco' action='reset'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='watchdog0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </watchdog>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <memballoon model='virtio'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <stats period='10'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='balloon0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <rng model='virtio'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <backend model='random'>/dev/urandom</backend>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='rng0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <label>system_u:system_r:svirt_t:s0:c397,c877</label>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c397,c877</imagelabel>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  </seclabel>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <label>+107:+107</label>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <imagelabel>+107:+107</imagelabel>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  </seclabel>
Oct 14 05:06:39 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:06:39 np0005486808 nova_compute[259627]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.387 2 INFO nova.virt.libvirt.driver [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully detached device tapdf1ec4d8-f5 from instance a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8 from the persistent domain config.#033[00m
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.387 2 DEBUG nova.virt.libvirt.driver [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] (1/8): Attempting to detach device tapdf1ec4d8-f5 with device alias net1 from instance a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.388 2 DEBUG nova.virt.libvirt.guest [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] detach device xml: <interface type="ethernet">
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <mac address="fa:16:3e:38:db:60"/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <model type="virtio"/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <mtu size="1442"/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <target dev="tapdf1ec4d8-f5"/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]: </interface>
Oct 14 05:06:39 np0005486808 nova_compute[259627]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.471 2 DEBUG nova.network.neutron [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Updated VIF entry in instance network info cache for port dffa5a1f-657b-498e-bbe5-6540fead7fb6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.472 2 DEBUG nova.network.neutron [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Updating instance_info_cache with network_info: [{"id": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "address": "fa:16:3e:b4:40:de", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdffa5a1f-65", "ovs_interfaceid": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:06:39 np0005486808 kernel: tapdf1ec4d8-f5 (unregistering): left promiscuous mode
Oct 14 05:06:39 np0005486808 NetworkManager[44885]: <info>  [1760432799.5029] device (tapdf1ec4d8-f5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.507 2 DEBUG oslo_concurrency.lockutils [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.507 2 DEBUG nova.compute.manager [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Received event network-vif-plugged-deb48802-bfee-42af-882c-3632b1fbb2cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.508 2 DEBUG oslo_concurrency.lockutils [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.508 2 DEBUG oslo_concurrency.lockutils [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.509 2 DEBUG oslo_concurrency.lockutils [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.509 2 DEBUG nova.compute.manager [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] No waiting events found dispatching network-vif-plugged-deb48802-bfee-42af-882c-3632b1fbb2cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.509 2 WARNING nova.compute.manager [req-2dcae301-e495-4c86-aa4c-9c4ca4b82c5d req-4574ad6e-5185-40fd-9f59-c52e6e3f2682 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Received unexpected event network-vif-plugged-deb48802-bfee-42af-882c-3632b1fbb2cf for instance with vm_state building and task_state spawning.#033[00m
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.510 2 DEBUG oslo_concurrency.lockutils [req-8916f69c-4dea-474f-a0c6-7140f1c881e8 req-c7ddecf3-e5dd-44ba-9938-a2c44ca1e366 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.513 2 DEBUG nova.network.neutron [req-8916f69c-4dea-474f-a0c6-7140f1c881e8 req-c7ddecf3-e5dd-44ba-9938-a2c44ca1e366 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Refreshing network info cache for port df1ec4d8-f543-4899-9d98-b60a6a46cc7c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:06:39 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:39Z|00607|binding|INFO|Releasing lport df1ec4d8-f543-4899-9d98-b60a6a46cc7c from this chassis (sb_readonly=0)
Oct 14 05:06:39 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:39Z|00608|binding|INFO|Setting lport df1ec4d8-f543-4899-9d98-b60a6a46cc7c down in Southbound
Oct 14 05:06:39 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:39Z|00609|binding|INFO|Removing iface tapdf1ec4d8-f5 ovn-installed in OVS
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:06:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:39.525 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:db:60 10.100.0.12'], port_security=['fa:16:3e:38:db:60 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-278583686', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2d149f-aebf-406a-aed2-5161dd22b079', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-278583686', 'neutron:project_id': '8945d892653642ac9c3d894f8bc3619d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '643bc47b-384e-4818-a06b-8ecfda630b09', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e933e805-d9b6-497a-9ba3-5f7153390420, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=df1ec4d8-f543-4899-9d98-b60a6a46cc7c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.527 2 DEBUG nova.virt.libvirt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Received event <DeviceRemovedEvent: 1760432799.5268383, a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.527 2 DEBUG nova.virt.libvirt.driver [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Start waiting for the detach event from libvirt for device tapdf1ec4d8-f5 with device alias net1 for instance a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.528 2 DEBUG nova.virt.libvirt.guest [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:38:db:60"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdf1ec4d8-f5"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.536 2 DEBUG nova.virt.libvirt.guest [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:38:db:60"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdf1ec4d8-f5"/></interface> not found in domain: <domain type='kvm' id='72'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <name>instance-00000039</name>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <uuid>a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8</uuid>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <nova:name>tempest-tempest.common.compute-instance-1758133214</nova:name>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <nova:creationTime>2025-10-14 09:06:38</nova:creationTime>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <nova:flavor name="m1.nano">
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <nova:memory>128</nova:memory>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <nova:disk>1</nova:disk>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <nova:swap>0</nova:swap>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <nova:vcpus>1</nova:vcpus>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  </nova:flavor>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <nova:owner>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  </nova:owner>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <nova:ports>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <nova:port uuid="dffa5a1f-657b-498e-bbe5-6540fead7fb6">
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <nova:port uuid="df1ec4d8-f543-4899-9d98-b60a6a46cc7c">
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  </nova:ports>
Oct 14 05:06:39 np0005486808 nova_compute[259627]: </nova:instance>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <memory unit='KiB'>131072</memory>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <vcpu placement='static'>1</vcpu>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <resource>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <partition>/machine</partition>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  </resource>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <sysinfo type='smbios'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <entry name='manufacturer'>RDO</entry>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <entry name='product'>OpenStack Compute</entry>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <entry name='serial'>a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8</entry>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <entry name='uuid'>a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8</entry>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <entry name='family'>Virtual Machine</entry>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <boot dev='hd'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <smbios mode='sysinfo'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <vmcoreinfo state='on'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <cpu mode='custom' match='exact' check='full'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <model fallback='forbid'>EPYC-Rome</model>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <vendor>AMD</vendor>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='require' name='x2apic'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='require' name='tsc-deadline'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='require' name='hypervisor'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='require' name='tsc_adjust'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='require' name='spec-ctrl'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='require' name='stibp'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='require' name='arch-capabilities'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='require' name='ssbd'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='require' name='cmp_legacy'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='require' name='overflow-recov'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='require' name='succor'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='require' name='ibrs'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='require' name='amd-ssbd'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='require' name='virt-ssbd'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='disable' name='lbrv'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='disable' name='tsc-scale'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='disable' name='vmcb-clean'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='disable' name='flushbyasid'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='disable' name='pause-filter'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='disable' name='pfthreshold'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='disable' name='svme-addr-chk'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='require' name='lfence-always-serializing'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='require' name='rdctl-no'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='require' name='mds-no'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='require' name='pschange-mc-no'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='require' name='gds-no'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='require' name='rfds-no'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='disable' name='xsaves'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='disable' name='svm'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='require' name='topoext'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='disable' name='npt'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <feature policy='disable' name='nrip-save'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <clock offset='utc'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <timer name='pit' tickpolicy='delay'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <timer name='rtc' tickpolicy='catchup'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <timer name='hpet' present='no'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <on_poweroff>destroy</on_poweroff>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <on_reboot>restart</on_reboot>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <on_crash>destroy</on_crash>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <disk type='network' device='disk'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <driver name='qemu' type='raw' cache='none'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <auth username='openstack'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:        <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <source protocol='rbd' name='vms/a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8_disk' index='2'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:        <host name='192.168.122.100' port='6789'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target dev='vda' bus='virtio'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='virtio-disk0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <disk type='network' device='cdrom'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <driver name='qemu' type='raw' cache='none'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <auth username='openstack'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:        <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <source protocol='rbd' name='vms/a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8_disk.config' index='1'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:        <host name='192.168.122.100' port='6789'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target dev='sda' bus='sata'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <readonly/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='sata0-0-0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='0' model='pcie-root'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pcie.0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='1' port='0x10'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.1'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='2' port='0x11'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.2'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='3' port='0x12'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.3'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='4' port='0x13'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.4'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='5' port='0x14'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.5'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='6' port='0x15'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.6'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='7' port='0x16'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.7'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='8' port='0x17'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.8'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='9' port='0x18'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.9'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='10' port='0x19'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.10'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='11' port='0x1a'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.11'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='12' port='0x1b'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.12'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='13' port='0x1c'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.13'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='14' port='0x1d'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.14'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='15' port='0x1e'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.15'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='16' port='0x1f'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.16'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='17' port='0x20'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.17'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='18' port='0x21'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.18'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='19' port='0x22'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.19'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='20' port='0x23'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.20'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='21' port='0x24'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.21'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='22' port='0x25'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.22'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='23' port='0x26'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.23'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='24' port='0x27'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.24'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target chassis='25' port='0x28'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.25'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model name='pcie-pci-bridge'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='pci.26'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='usb'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <controller type='sata' index='0'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='ide'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <interface type='ethernet'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <mac address='fa:16:3e:b4:40:de'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target dev='tapdffa5a1f-65'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model type='virtio'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <driver name='vhost' rx_queue_size='512'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <mtu size='1442'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='net0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <serial type='pty'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <source path='/dev/pts/0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <log file='/var/lib/nova/instances/a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8/console.log' append='off'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target type='isa-serial' port='0'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:        <model name='isa-serial'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      </target>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='serial0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <console type='pty' tty='/dev/pts/0'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <source path='/dev/pts/0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <log file='/var/lib/nova/instances/a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8/console.log' append='off'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <target type='serial' port='0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='serial0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </console>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <input type='tablet' bus='usb'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='input0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='usb' bus='0' port='1'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <input type='mouse' bus='ps2'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='input1'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <input type='keyboard' bus='ps2'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='input2'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <listen type='address' address='::0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </graphics>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <audio id='1' type='none'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <model type='virtio' heads='1' primary='yes'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='video0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <watchdog model='itco' action='reset'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='watchdog0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </watchdog>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <memballoon model='virtio'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <stats period='10'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='balloon0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <rng model='virtio'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <backend model='random'>/dev/urandom</backend>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <alias name='rng0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <label>system_u:system_r:svirt_t:s0:c397,c877</label>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c397,c877</imagelabel>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  </seclabel>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <label>+107:+107</label>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <imagelabel>+107:+107</imagelabel>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  </seclabel>
Oct 14 05:06:39 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:06:39 np0005486808 nova_compute[259627]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.536 2 INFO nova.virt.libvirt.driver [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully detached device tapdf1ec4d8-f5 from instance a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8 from the live domain config.#033[00m
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.537 2 DEBUG nova.virt.libvirt.vif [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:06:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1758133214',display_name='tempest-tempest.common.compute-instance-1758133214',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1758133214',id=57,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP68NH14S3HRbU1S6/VMc0nOUsEVOZ3tz6VZKHo1Z+OFyqqDVRUrwJaEU67IXpbnQXXmmP8KKv7jtqoNjZqCUWnJrm5ENg2LKrvLCxr2iywwS6vcmTe+HUgI45UTIKYhOw==',key_name='tempest-keypair-1916836153',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:06:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-6x4gt3p2',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:06:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.537 2 DEBUG nova.network.os_vif_util [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.538 2 DEBUG nova.network.os_vif_util [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:db:60,bridge_name='br-int',has_traffic_filtering=True,id=df1ec4d8-f543-4899-9d98-b60a6a46cc7c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf1ec4d8-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.538 2 DEBUG os_vif [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:db:60,bridge_name='br-int',has_traffic_filtering=True,id=df1ec4d8-f543-4899-9d98-b60a6a46cc7c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf1ec4d8-f5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.541 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf1ec4d8-f5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.550 2 INFO os_vif [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:db:60,bridge_name='br-int',has_traffic_filtering=True,id=df1ec4d8-f543-4899-9d98-b60a6a46cc7c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf1ec4d8-f5')#033[00m
Oct 14 05:06:39 np0005486808 nova_compute[259627]: 2025-10-14 09:06:39.551 2 DEBUG nova.virt.libvirt.guest [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <nova:name>tempest-tempest.common.compute-instance-1758133214</nova:name>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <nova:creationTime>2025-10-14 09:06:39</nova:creationTime>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <nova:flavor name="m1.nano">
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <nova:memory>128</nova:memory>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <nova:disk>1</nova:disk>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <nova:swap>0</nova:swap>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <nova:vcpus>1</nova:vcpus>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  </nova:flavor>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <nova:owner>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  </nova:owner>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  <nova:ports>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    <nova:port uuid="dffa5a1f-657b-498e-bbe5-6540fead7fb6">
Oct 14 05:06:39 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:06:39 np0005486808 nova_compute[259627]:  </nova:ports>
Oct 14 05:06:39 np0005486808 nova_compute[259627]: </nova:instance>
Oct 14 05:06:39 np0005486808 nova_compute[259627]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct 14 05:06:39 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1532: 305 pgs: 305 active+clean; 339 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Oct 14 05:06:39 np0005486808 podman[324602]: 2025-10-14 09:06:39.869321685 +0000 UTC m=+0.112654296 container create 026240acfbe22bf6f7403a85591f060a0c3cc116a2c8351b8706a41ff6c53ecd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 05:06:39 np0005486808 podman[324602]: 2025-10-14 09:06:39.806661461 +0000 UTC m=+0.049994132 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:06:39 np0005486808 systemd[1]: Started libpod-conmon-026240acfbe22bf6f7403a85591f060a0c3cc116a2c8351b8706a41ff6c53ecd.scope.
Oct 14 05:06:39 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:06:39 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00ebe327e9b036dd3fe73fd0e1546dc84881dfa7dffba911ed9994c43ab1d96a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:06:40 np0005486808 podman[324602]: 2025-10-14 09:06:40.003744815 +0000 UTC m=+0.247077446 container init 026240acfbe22bf6f7403a85591f060a0c3cc116a2c8351b8706a41ff6c53ecd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 14 05:06:40 np0005486808 podman[324602]: 2025-10-14 09:06:40.010643665 +0000 UTC m=+0.253976266 container start 026240acfbe22bf6f7403a85591f060a0c3cc116a2c8351b8706a41ff6c53ecd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 14 05:06:40 np0005486808 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[324617]: [NOTICE]   (324621) : New worker (324623) forked
Oct 14 05:06:40 np0005486808 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[324617]: [NOTICE]   (324621) : Loading success.
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.109 162547 INFO neutron.agent.ovn.metadata.agent [-] Port df1ec4d8-f543-4899-9d98-b60a6a46cc7c in datapath fc2d149f-aebf-406a-aed2-5161dd22b079 unbound from our chassis#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.110 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc2d149f-aebf-406a-aed2-5161dd22b079#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.121 2 DEBUG nova.compute.manager [req-bf0a7f32-df0c-48e5-ac5d-48640f97aee6 req-3ab1d71a-19f5-4075-9dae-232044b7c2e1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Received event network-changed-deb48802-bfee-42af-882c-3632b1fbb2cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.122 2 DEBUG nova.compute.manager [req-bf0a7f32-df0c-48e5-ac5d-48640f97aee6 req-3ab1d71a-19f5-4075-9dae-232044b7c2e1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Refreshing instance network info cache due to event network-changed-deb48802-bfee-42af-882c-3632b1fbb2cf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.122 2 DEBUG oslo_concurrency.lockutils [req-bf0a7f32-df0c-48e5-ac5d-48640f97aee6 req-3ab1d71a-19f5-4075-9dae-232044b7c2e1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-dd55716e-2330-42a4-8963-33bdc9c7bbf8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.122 2 DEBUG oslo_concurrency.lockutils [req-bf0a7f32-df0c-48e5-ac5d-48640f97aee6 req-3ab1d71a-19f5-4075-9dae-232044b7c2e1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-dd55716e-2330-42a4-8963-33bdc9c7bbf8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.122 2 DEBUG nova.network.neutron [req-bf0a7f32-df0c-48e5-ac5d-48640f97aee6 req-3ab1d71a-19f5-4075-9dae-232044b7c2e1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Refreshing network info cache for port deb48802-bfee-42af-882c-3632b1fbb2cf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.124 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[27ebe0f6-3ee1-402d-9d38-4d10025bd6e5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.150 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d8c4143b-c17b-432a-8256-16accd1026ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.153 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[04b04c9f-0bb9-4d20-a624-1092d53e7751]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.186 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[68762ebe-b631-4b67-ad4e-23fc1e224361]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.205 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[37c05338-da39-4565-8d49-a64f279655f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc2d149f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:e7:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 654854, 'reachable_time': 41314, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324637, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.215 2 DEBUG oslo_concurrency.lockutils [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.216 2 DEBUG oslo_concurrency.lockutils [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.216 2 DEBUG oslo_concurrency.lockutils [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.216 2 DEBUG oslo_concurrency.lockutils [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.216 2 DEBUG oslo_concurrency.lockutils [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.218 2 INFO nova.compute.manager [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Terminating instance#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.218 2 DEBUG nova.compute.manager [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.222 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a6aec952-db56-47c7-95c2-c5f8930a6158]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 654865, 'tstamp': 654865}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324638, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 654867, 'tstamp': 654867}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324638, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.225 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc2d149f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.230 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc2d149f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.231 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.231 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc2d149f-a0, col_values=(('external_ids', {'iface-id': '156432ee-35b5-40a9-aded-8066933d9972'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.231 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:06:40 np0005486808 kernel: tapdeb48802-bf (unregistering): left promiscuous mode
Oct 14 05:06:40 np0005486808 NetworkManager[44885]: <info>  [1760432800.2681] device (tapdeb48802-bf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:06:40 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:40Z|00610|binding|INFO|Releasing lport deb48802-bfee-42af-882c-3632b1fbb2cf from this chassis (sb_readonly=0)
Oct 14 05:06:40 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:40Z|00611|binding|INFO|Setting lport deb48802-bfee-42af-882c-3632b1fbb2cf down in Southbound
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:40 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:40Z|00612|binding|INFO|Removing iface tapdeb48802-bf ovn-installed in OVS
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.290 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:3f:2b 10.100.0.9'], port_security=['fa:16:3e:e0:3f:2b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'dd55716e-2330-42a4-8963-33bdc9c7bbf8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3b87118-f516-4f2d-8696-aa7290af9d83', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e47722c609640d3a70fee8dd6ff94cc', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07d19cad-5a0d-49ee-a7bf-91e279c3b03d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=deb48802-bfee-42af-882c-3632b1fbb2cf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.291 162547 INFO neutron.agent.ovn.metadata.agent [-] Port deb48802-bfee-42af-882c-3632b1fbb2cf in datapath f3b87118-f516-4f2d-8696-aa7290af9d83 unbound from our chassis#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.293 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3b87118-f516-4f2d-8696-aa7290af9d83#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.310 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[88a40c6c-6ad5-4492-907c-18a5141e8a32]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:40 np0005486808 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d0000003c.scope: Deactivated successfully.
Oct 14 05:06:40 np0005486808 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d0000003c.scope: Consumed 5.852s CPU time.
Oct 14 05:06:40 np0005486808 systemd-machined[214636]: Machine qemu-75-instance-0000003c terminated.
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.351 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ad729ba7-896e-4901-a8b0-be76db64d7ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.354 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[6f23a12c-a10c-49c0-8c88-2519db38182e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.393 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[55b8a40c-dd93-48d2-8f71-f3135c8a10a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.411 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e7048650-72cd-43c1-aefa-992523d6f8d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3b87118-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:43:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 916, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 916, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647540, 'reachable_time': 35719, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324690, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.428 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b4a8d2d5-0a69-4fa9-9029-44986c2d3714]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf3b87118-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647552, 'tstamp': 647552}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324692, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf3b87118-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647555, 'tstamp': 647555}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324692, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.430 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3b87118-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.439 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3b87118-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.440 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.440 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3b87118-f0, col_values=(('external_ids', {'iface-id': 'a7f44223-dee5-4a2f-b975-1f04f03b78f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.440 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:06:40 np0005486808 kernel: tapdeb48802-bf: entered promiscuous mode
Oct 14 05:06:40 np0005486808 NetworkManager[44885]: <info>  [1760432800.4430] manager: (tapdeb48802-bf): new Tun device (/org/freedesktop/NetworkManager/Devices/266)
Oct 14 05:06:40 np0005486808 kernel: tapdeb48802-bf (unregistering): left promiscuous mode
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:40 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:40Z|00613|binding|INFO|Claiming lport deb48802-bfee-42af-882c-3632b1fbb2cf for this chassis.
Oct 14 05:06:40 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:40Z|00614|binding|INFO|deb48802-bfee-42af-882c-3632b1fbb2cf: Claiming fa:16:3e:e0:3f:2b 10.100.0.9
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.457 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:3f:2b 10.100.0.9'], port_security=['fa:16:3e:e0:3f:2b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'dd55716e-2330-42a4-8963-33bdc9c7bbf8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3b87118-f516-4f2d-8696-aa7290af9d83', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e47722c609640d3a70fee8dd6ff94cc', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07d19cad-5a0d-49ee-a7bf-91e279c3b03d, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=deb48802-bfee-42af-882c-3632b1fbb2cf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.458 162547 INFO neutron.agent.ovn.metadata.agent [-] Port deb48802-bfee-42af-882c-3632b1fbb2cf in datapath f3b87118-f516-4f2d-8696-aa7290af9d83 bound to our chassis#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.459 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3b87118-f516-4f2d-8696-aa7290af9d83#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.474 2 INFO nova.virt.libvirt.driver [-] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Instance destroyed successfully.#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.474 2 DEBUG nova.objects.instance [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lazy-loading 'resources' on Instance uuid dd55716e-2330-42a4-8963-33bdc9c7bbf8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:06:40 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:40Z|00615|binding|INFO|Setting lport deb48802-bfee-42af-882c-3632b1fbb2cf ovn-installed in OVS
Oct 14 05:06:40 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:40Z|00616|binding|INFO|Setting lport deb48802-bfee-42af-882c-3632b1fbb2cf up in Southbound
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.478 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[449c8e2c-4928-4c25-b17d-4f1c910fccef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:40 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:40Z|00617|binding|INFO|Releasing lport deb48802-bfee-42af-882c-3632b1fbb2cf from this chassis (sb_readonly=1)
Oct 14 05:06:40 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:40Z|00618|binding|INFO|Removing iface tapdeb48802-bf ovn-installed in OVS
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:40 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:40Z|00619|if_status|INFO|Not setting lport deb48802-bfee-42af-882c-3632b1fbb2cf down as sb is readonly
Oct 14 05:06:40 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:40Z|00620|binding|INFO|Releasing lport deb48802-bfee-42af-882c-3632b1fbb2cf from this chassis (sb_readonly=0)
Oct 14 05:06:40 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:40Z|00621|binding|INFO|Setting lport deb48802-bfee-42af-882c-3632b1fbb2cf down in Southbound
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.497 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:3f:2b 10.100.0.9'], port_security=['fa:16:3e:e0:3f:2b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'dd55716e-2330-42a4-8963-33bdc9c7bbf8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3b87118-f516-4f2d-8696-aa7290af9d83', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e47722c609640d3a70fee8dd6ff94cc', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07d19cad-5a0d-49ee-a7bf-91e279c3b03d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=deb48802-bfee-42af-882c-3632b1fbb2cf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.507 2 DEBUG nova.virt.libvirt.vif [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:06:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-138998221',display_name='tempest-ServerActionsTestOtherA-server-138998221',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-138998221',id=60,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:06:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4e47722c609640d3a70fee8dd6ff94cc',ramdisk_id='',reservation_id='r-kgnyrp0u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-894139105',owner_user_name='tempest-ServerActionsTestOtherA-894139105-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:06:35Z,user_data=None,user_id='d952679a4e6a4fc6bacf42c02d3e92d0',uuid=dd55716e-2330-42a4-8963-33bdc9c7bbf8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "deb48802-bfee-42af-882c-3632b1fbb2cf", "address": "fa:16:3e:e0:3f:2b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb48802-bf", "ovs_interfaceid": "deb48802-bfee-42af-882c-3632b1fbb2cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.507 2 DEBUG nova.network.os_vif_util [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converting VIF {"id": "deb48802-bfee-42af-882c-3632b1fbb2cf", "address": "fa:16:3e:e0:3f:2b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb48802-bf", "ovs_interfaceid": "deb48802-bfee-42af-882c-3632b1fbb2cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.508 2 DEBUG nova.network.os_vif_util [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:3f:2b,bridge_name='br-int',has_traffic_filtering=True,id=deb48802-bfee-42af-882c-3632b1fbb2cf,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdeb48802-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.508 2 DEBUG os_vif [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:3f:2b,bridge_name='br-int',has_traffic_filtering=True,id=deb48802-bfee-42af-882c-3632b1fbb2cf,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdeb48802-bf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.509 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdeb48802-bf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.517 2 INFO os_vif [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:3f:2b,bridge_name='br-int',has_traffic_filtering=True,id=deb48802-bfee-42af-882c-3632b1fbb2cf,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdeb48802-bf')#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.527 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1662c53a-ac81-4d41-ae07-745a5de23cfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.530 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[142804ff-5b36-4a00-a1eb-b71ecdf26c17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.566 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[41e994f2-c619-457f-8a35-03278f299dff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.583 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6aba2fde-b222-4222-8bb5-c0ad9e416e8e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3b87118-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:43:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 17, 'rx_bytes': 916, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 17, 'rx_bytes': 916, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647540, 'reachable_time': 35719, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324727, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.601 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[90bd0788-e696-4048-8abf-6fe9ae6ffcb1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf3b87118-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647552, 'tstamp': 647552}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324728, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf3b87118-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647555, 'tstamp': 647555}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324728, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.602 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3b87118-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.604 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3b87118-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.604 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.605 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3b87118-f0, col_values=(('external_ids', {'iface-id': 'a7f44223-dee5-4a2f-b975-1f04f03b78f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.605 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.606 162547 INFO neutron.agent.ovn.metadata.agent [-] Port deb48802-bfee-42af-882c-3632b1fbb2cf in datapath f3b87118-f516-4f2d-8696-aa7290af9d83 unbound from our chassis#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.607 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3b87118-f516-4f2d-8696-aa7290af9d83#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.621 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[253ae6a6-9bd8-49cd-8c44-c6b415de2b79]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.649 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[68154786-4523-43d6-9343-4d2ead5e1515]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.653 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1003c6db-ff7f-44c5-a589-201ae1415b73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.682 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[9c9ded30-fae1-44d9-9165-a96ae70c1f3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.710 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1610eb89-9b82-40fb-beb8-13487c40ee65]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3b87118-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:43:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 19, 'rx_bytes': 916, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 19, 'rx_bytes': 916, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647540, 'reachable_time': 35719, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324734, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.726 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c334b8b0-7798-4d30-b0c8-d5596f2bbaa6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf3b87118-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647552, 'tstamp': 647552}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324736, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf3b87118-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647555, 'tstamp': 647555}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324736, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.728 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3b87118-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.732 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3b87118-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.733 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.733 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3b87118-f0, col_values=(('external_ids', {'iface-id': 'a7f44223-dee5-4a2f-b975-1f04f03b78f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:40.734 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.872 2 DEBUG nova.network.neutron [req-8916f69c-4dea-474f-a0c6-7140f1c881e8 req-c7ddecf3-e5dd-44ba-9938-a2c44ca1e366 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Updated VIF entry in instance network info cache for port df1ec4d8-f543-4899-9d98-b60a6a46cc7c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.872 2 DEBUG nova.network.neutron [req-8916f69c-4dea-474f-a0c6-7140f1c881e8 req-c7ddecf3-e5dd-44ba-9938-a2c44ca1e366 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Updating instance_info_cache with network_info: [{"id": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "address": "fa:16:3e:b4:40:de", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdffa5a1f-65", "ovs_interfaceid": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.881 2 DEBUG nova.compute.manager [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Received event network-vif-plugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.881 2 DEBUG oslo_concurrency.lockutils [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.881 2 DEBUG oslo_concurrency.lockutils [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.882 2 DEBUG oslo_concurrency.lockutils [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.882 2 DEBUG nova.compute.manager [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] No waiting events found dispatching network-vif-plugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.882 2 WARNING nova.compute.manager [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Received unexpected event network-vif-plugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c for instance with vm_state active and task_state None.#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.882 2 DEBUG nova.compute.manager [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Received event network-vif-plugged-26dd6404-2018-471b-a387-2b045d236164 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.883 2 DEBUG oslo_concurrency.lockutils [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "73d6be04-84dc-4b80-81f8-a9bbf9938051-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.883 2 DEBUG oslo_concurrency.lockutils [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "73d6be04-84dc-4b80-81f8-a9bbf9938051-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.883 2 DEBUG oslo_concurrency.lockutils [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "73d6be04-84dc-4b80-81f8-a9bbf9938051-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.883 2 DEBUG nova.compute.manager [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Processing event network-vif-plugged-26dd6404-2018-471b-a387-2b045d236164 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.884 2 DEBUG nova.compute.manager [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Received event network-vif-plugged-26dd6404-2018-471b-a387-2b045d236164 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.884 2 DEBUG oslo_concurrency.lockutils [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "73d6be04-84dc-4b80-81f8-a9bbf9938051-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.884 2 DEBUG oslo_concurrency.lockutils [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "73d6be04-84dc-4b80-81f8-a9bbf9938051-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.884 2 DEBUG oslo_concurrency.lockutils [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "73d6be04-84dc-4b80-81f8-a9bbf9938051-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.885 2 DEBUG nova.compute.manager [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] No waiting events found dispatching network-vif-plugged-26dd6404-2018-471b-a387-2b045d236164 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.885 2 WARNING nova.compute.manager [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Received unexpected event network-vif-plugged-26dd6404-2018-471b-a387-2b045d236164 for instance with vm_state building and task_state spawning.#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.885 2 DEBUG nova.compute.manager [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Received event network-vif-unplugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.886 2 DEBUG oslo_concurrency.lockutils [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.886 2 DEBUG oslo_concurrency.lockutils [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.886 2 DEBUG oslo_concurrency.lockutils [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.886 2 DEBUG nova.compute.manager [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] No waiting events found dispatching network-vif-unplugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.887 2 WARNING nova.compute.manager [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Received unexpected event network-vif-unplugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c for instance with vm_state active and task_state None.#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.887 2 DEBUG nova.compute.manager [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Received event network-vif-plugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.887 2 DEBUG oslo_concurrency.lockutils [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.887 2 DEBUG oslo_concurrency.lockutils [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.887 2 DEBUG oslo_concurrency.lockutils [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.888 2 DEBUG nova.compute.manager [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] No waiting events found dispatching network-vif-plugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.888 2 WARNING nova.compute.manager [req-b9bbd1ee-5558-4368-93d0-b9aa82568ff8 req-c984ee21-d15b-4843-8027-24dadc09a262 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Received unexpected event network-vif-plugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c for instance with vm_state active and task_state None.
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.902 2 DEBUG oslo_concurrency.lockutils [req-8916f69c-4dea-474f-a0c6-7140f1c881e8 req-c7ddecf3-e5dd-44ba-9938-a2c44ca1e366 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.913 2 INFO nova.virt.libvirt.driver [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Deleting instance files /var/lib/nova/instances/dd55716e-2330-42a4-8963-33bdc9c7bbf8_del
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.914 2 INFO nova.virt.libvirt.driver [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Deletion of /var/lib/nova/instances/dd55716e-2330-42a4-8963-33bdc9c7bbf8_del complete
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.957 2 DEBUG nova.compute.manager [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.958 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432800.9565084, 73d6be04-84dc-4b80-81f8-a9bbf9938051 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.958 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] VM Started (Lifecycle Event)
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.963 2 DEBUG nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.968 2 INFO nova.virt.libvirt.driver [-] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Instance spawned successfully.
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.968 2 DEBUG nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 05:06:40 np0005486808 nova_compute[259627]: 2025-10-14 09:06:40.990 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:06:41 np0005486808 nova_compute[259627]: 2025-10-14 09:06:41.009 2 INFO nova.compute.manager [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Took 0.79 seconds to destroy the instance on the hypervisor.
Oct 14 05:06:41 np0005486808 nova_compute[259627]: 2025-10-14 09:06:41.010 2 DEBUG oslo.service.loopingcall [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 05:06:41 np0005486808 nova_compute[259627]: 2025-10-14 09:06:41.011 2 DEBUG nova.compute.manager [-] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 05:06:41 np0005486808 nova_compute[259627]: 2025-10-14 09:06:41.011 2 DEBUG nova.network.neutron [-] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 05:06:41 np0005486808 nova_compute[259627]: 2025-10-14 09:06:41.022 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 05:06:41 np0005486808 nova_compute[259627]: 2025-10-14 09:06:41.030 2 DEBUG nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:06:41 np0005486808 nova_compute[259627]: 2025-10-14 09:06:41.031 2 DEBUG nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:06:41 np0005486808 nova_compute[259627]: 2025-10-14 09:06:41.032 2 DEBUG nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:06:41 np0005486808 nova_compute[259627]: 2025-10-14 09:06:41.033 2 DEBUG nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:06:41 np0005486808 nova_compute[259627]: 2025-10-14 09:06:41.034 2 DEBUG nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:06:41 np0005486808 nova_compute[259627]: 2025-10-14 09:06:41.035 2 DEBUG nova.virt.libvirt.driver [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:06:41 np0005486808 nova_compute[259627]: 2025-10-14 09:06:41.041 2 DEBUG oslo_concurrency.lockutils [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 05:06:41 np0005486808 nova_compute[259627]: 2025-10-14 09:06:41.042 2 DEBUG oslo_concurrency.lockutils [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquired lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 05:06:41 np0005486808 nova_compute[259627]: 2025-10-14 09:06:41.042 2 DEBUG nova.network.neutron [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 05:06:41 np0005486808 nova_compute[259627]: 2025-10-14 09:06:41.053 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 05:06:41 np0005486808 nova_compute[259627]: 2025-10-14 09:06:41.054 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432800.9566607, 73d6be04-84dc-4b80-81f8-a9bbf9938051 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:06:41 np0005486808 nova_compute[259627]: 2025-10-14 09:06:41.054 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] VM Paused (Lifecycle Event)
Oct 14 05:06:41 np0005486808 nova_compute[259627]: 2025-10-14 09:06:41.084 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:06:41 np0005486808 nova_compute[259627]: 2025-10-14 09:06:41.087 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432800.9620774, 73d6be04-84dc-4b80-81f8-a9bbf9938051 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:06:41 np0005486808 nova_compute[259627]: 2025-10-14 09:06:41.088 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] VM Resumed (Lifecycle Event)
Oct 14 05:06:41 np0005486808 nova_compute[259627]: 2025-10-14 09:06:41.106 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:06:41 np0005486808 nova_compute[259627]: 2025-10-14 09:06:41.110 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 05:06:41 np0005486808 nova_compute[259627]: 2025-10-14 09:06:41.115 2 INFO nova.compute.manager [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Took 8.17 seconds to spawn the instance on the hypervisor.
Oct 14 05:06:41 np0005486808 nova_compute[259627]: 2025-10-14 09:06:41.116 2 DEBUG nova.compute.manager [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:06:41 np0005486808 nova_compute[259627]: 2025-10-14 09:06:41.131 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 05:06:41 np0005486808 nova_compute[259627]: 2025-10-14 09:06:41.181 2 INFO nova.compute.manager [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Took 9.19 seconds to build instance.
Oct 14 05:06:41 np0005486808 nova_compute[259627]: 2025-10-14 09:06:41.200 2 DEBUG oslo_concurrency.lockutils [None req-c06e3778-14df-4952-bd96-91aea25a0a55 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "73d6be04-84dc-4b80-81f8-a9bbf9938051" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.275s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:06:41 np0005486808 nova_compute[259627]: 2025-10-14 09:06:41.431 2 DEBUG nova.network.neutron [req-bf0a7f32-df0c-48e5-ac5d-48640f97aee6 req-3ab1d71a-19f5-4075-9dae-232044b7c2e1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Updated VIF entry in instance network info cache for port deb48802-bfee-42af-882c-3632b1fbb2cf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 05:06:41 np0005486808 nova_compute[259627]: 2025-10-14 09:06:41.432 2 DEBUG nova.network.neutron [req-bf0a7f32-df0c-48e5-ac5d-48640f97aee6 req-3ab1d71a-19f5-4075-9dae-232044b7c2e1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Updating instance_info_cache with network_info: [{"id": "deb48802-bfee-42af-882c-3632b1fbb2cf", "address": "fa:16:3e:e0:3f:2b", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb48802-bf", "ovs_interfaceid": "deb48802-bfee-42af-882c-3632b1fbb2cf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 05:06:41 np0005486808 nova_compute[259627]: 2025-10-14 09:06:41.453 2 DEBUG oslo_concurrency.lockutils [req-bf0a7f32-df0c-48e5-ac5d-48640f97aee6 req-3ab1d71a-19f5-4075-9dae-232044b7c2e1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-dd55716e-2330-42a4-8963-33bdc9c7bbf8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 05:06:41 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1533: 305 pgs: 305 active+clean; 372 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.0 MiB/s wr, 245 op/s
Oct 14 05:06:42 np0005486808 nova_compute[259627]: 2025-10-14 09:06:42.281 2 DEBUG nova.compute.manager [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Received event network-vif-unplugged-deb48802-bfee-42af-882c-3632b1fbb2cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:06:42 np0005486808 nova_compute[259627]: 2025-10-14 09:06:42.282 2 DEBUG oslo_concurrency.lockutils [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:06:42 np0005486808 nova_compute[259627]: 2025-10-14 09:06:42.282 2 DEBUG oslo_concurrency.lockutils [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:06:42 np0005486808 nova_compute[259627]: 2025-10-14 09:06:42.282 2 DEBUG oslo_concurrency.lockutils [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:06:42 np0005486808 nova_compute[259627]: 2025-10-14 09:06:42.282 2 DEBUG nova.compute.manager [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] No waiting events found dispatching network-vif-unplugged-deb48802-bfee-42af-882c-3632b1fbb2cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 05:06:42 np0005486808 nova_compute[259627]: 2025-10-14 09:06:42.282 2 DEBUG nova.compute.manager [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Received event network-vif-unplugged-deb48802-bfee-42af-882c-3632b1fbb2cf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 05:06:42 np0005486808 nova_compute[259627]: 2025-10-14 09:06:42.283 2 DEBUG nova.compute.manager [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Received event network-vif-plugged-deb48802-bfee-42af-882c-3632b1fbb2cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:06:42 np0005486808 nova_compute[259627]: 2025-10-14 09:06:42.283 2 DEBUG oslo_concurrency.lockutils [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:06:42 np0005486808 nova_compute[259627]: 2025-10-14 09:06:42.283 2 DEBUG oslo_concurrency.lockutils [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:06:42 np0005486808 nova_compute[259627]: 2025-10-14 09:06:42.283 2 DEBUG oslo_concurrency.lockutils [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:06:42 np0005486808 nova_compute[259627]: 2025-10-14 09:06:42.283 2 DEBUG nova.compute.manager [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] No waiting events found dispatching network-vif-plugged-deb48802-bfee-42af-882c-3632b1fbb2cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 05:06:42 np0005486808 nova_compute[259627]: 2025-10-14 09:06:42.283 2 WARNING nova.compute.manager [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Received unexpected event network-vif-plugged-deb48802-bfee-42af-882c-3632b1fbb2cf for instance with vm_state active and task_state deleting.
Oct 14 05:06:42 np0005486808 nova_compute[259627]: 2025-10-14 09:06:42.283 2 DEBUG nova.compute.manager [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Received event network-vif-plugged-deb48802-bfee-42af-882c-3632b1fbb2cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:06:42 np0005486808 nova_compute[259627]: 2025-10-14 09:06:42.284 2 DEBUG oslo_concurrency.lockutils [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:06:42 np0005486808 nova_compute[259627]: 2025-10-14 09:06:42.284 2 DEBUG oslo_concurrency.lockutils [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:06:42 np0005486808 nova_compute[259627]: 2025-10-14 09:06:42.284 2 DEBUG oslo_concurrency.lockutils [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:06:42 np0005486808 nova_compute[259627]: 2025-10-14 09:06:42.284 2 DEBUG nova.compute.manager [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] No waiting events found dispatching network-vif-plugged-deb48802-bfee-42af-882c-3632b1fbb2cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 05:06:42 np0005486808 nova_compute[259627]: 2025-10-14 09:06:42.284 2 WARNING nova.compute.manager [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Received unexpected event network-vif-plugged-deb48802-bfee-42af-882c-3632b1fbb2cf for instance with vm_state active and task_state deleting.
Oct 14 05:06:42 np0005486808 nova_compute[259627]: 2025-10-14 09:06:42.284 2 DEBUG nova.compute.manager [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Received event network-vif-plugged-deb48802-bfee-42af-882c-3632b1fbb2cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:06:42 np0005486808 nova_compute[259627]: 2025-10-14 09:06:42.284 2 DEBUG oslo_concurrency.lockutils [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:06:42 np0005486808 nova_compute[259627]: 2025-10-14 09:06:42.285 2 DEBUG oslo_concurrency.lockutils [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:06:42 np0005486808 nova_compute[259627]: 2025-10-14 09:06:42.285 2 DEBUG oslo_concurrency.lockutils [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:06:42 np0005486808 nova_compute[259627]: 2025-10-14 09:06:42.285 2 DEBUG nova.compute.manager [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] No waiting events found dispatching network-vif-plugged-deb48802-bfee-42af-882c-3632b1fbb2cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 05:06:42 np0005486808 nova_compute[259627]: 2025-10-14 09:06:42.285 2 WARNING nova.compute.manager [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Received unexpected event network-vif-plugged-deb48802-bfee-42af-882c-3632b1fbb2cf for instance with vm_state active and task_state deleting.
Oct 14 05:06:42 np0005486808 nova_compute[259627]: 2025-10-14 09:06:42.285 2 DEBUG nova.compute.manager [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Received event network-vif-unplugged-deb48802-bfee-42af-882c-3632b1fbb2cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:06:42 np0005486808 nova_compute[259627]: 2025-10-14 09:06:42.285 2 DEBUG oslo_concurrency.lockutils [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:06:42 np0005486808 nova_compute[259627]: 2025-10-14 09:06:42.285 2 DEBUG oslo_concurrency.lockutils [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:06:42 np0005486808 nova_compute[259627]: 2025-10-14 09:06:42.286 2 DEBUG oslo_concurrency.lockutils [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:06:42 np0005486808 nova_compute[259627]: 2025-10-14 09:06:42.286 2 DEBUG nova.compute.manager [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] No waiting events found dispatching network-vif-unplugged-deb48802-bfee-42af-882c-3632b1fbb2cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 05:06:42 np0005486808 nova_compute[259627]: 2025-10-14 09:06:42.286 2 DEBUG nova.compute.manager [req-2124a950-5ebe-47f0-8fce-f6c5626523b5 req-823eab60-4849-4226-a246-1fb591e9e484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Received event network-vif-unplugged-deb48802-bfee-42af-882c-3632b1fbb2cf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 05:06:42 np0005486808 nova_compute[259627]: 2025-10-14 09:06:42.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:06:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:06:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:06:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:06:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:06:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:06:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029738194512274533 of space, bias 1.0, pg target 0.892145835368236 quantized to 32 (current 32)
Oct 14 05:06:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:06:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:06:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:06:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:06:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:06:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 14 05:06:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:06:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:06:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:06:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:06:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:06:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:06:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:06:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:06:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:06:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:06:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:06:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:06:43 np0005486808 nova_compute[259627]: 2025-10-14 09:06:43.168 2 DEBUG nova.network.neutron [-] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:06:43 np0005486808 nova_compute[259627]: 2025-10-14 09:06:43.193 2 INFO nova.compute.manager [-] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Took 2.18 seconds to deallocate network for instance.#033[00m
Oct 14 05:06:43 np0005486808 nova_compute[259627]: 2025-10-14 09:06:43.250 2 DEBUG oslo_concurrency.lockutils [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:43 np0005486808 nova_compute[259627]: 2025-10-14 09:06:43.251 2 DEBUG oslo_concurrency.lockutils [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:43 np0005486808 nova_compute[259627]: 2025-10-14 09:06:43.392 2 DEBUG oslo_concurrency.processutils [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:06:43 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1534: 305 pgs: 305 active+clean; 372 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 168 op/s
Oct 14 05:06:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:06:43 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/317642531' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:06:43 np0005486808 nova_compute[259627]: 2025-10-14 09:06:43.889 2 DEBUG oslo_concurrency.processutils [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:43 np0005486808 nova_compute[259627]: 2025-10-14 09:06:43.895 2 DEBUG nova.compute.provider_tree [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:06:43 np0005486808 nova_compute[259627]: 2025-10-14 09:06:43.910 2 DEBUG nova.scheduler.client.report [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:06:43 np0005486808 nova_compute[259627]: 2025-10-14 09:06:43.925 2 DEBUG oslo_concurrency.lockutils [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:43 np0005486808 nova_compute[259627]: 2025-10-14 09:06:43.949 2 INFO nova.scheduler.client.report [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Deleted allocations for instance dd55716e-2330-42a4-8963-33bdc9c7bbf8#033[00m
Oct 14 05:06:43 np0005486808 nova_compute[259627]: 2025-10-14 09:06:43.999 2 DEBUG oslo_concurrency.lockutils [None req-7a28f01f-b0cd-496a-a841-0b92b99ec64c d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:44 np0005486808 nova_compute[259627]: 2025-10-14 09:06:44.132 2 INFO nova.network.neutron [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Port df1ec4d8-f543-4899-9d98-b60a6a46cc7c from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Oct 14 05:06:44 np0005486808 nova_compute[259627]: 2025-10-14 09:06:44.133 2 DEBUG nova.network.neutron [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Updating instance_info_cache with network_info: [{"id": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "address": "fa:16:3e:b4:40:de", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdffa5a1f-65", "ovs_interfaceid": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:06:44 np0005486808 nova_compute[259627]: 2025-10-14 09:06:44.150 2 DEBUG oslo_concurrency.lockutils [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Releasing lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:06:44 np0005486808 nova_compute[259627]: 2025-10-14 09:06:44.174 2 DEBUG oslo_concurrency.lockutils [None req-b965d0f9-3b39-42cb-9fa1-3fad67988ed3 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "interface-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-df1ec4d8-f543-4899-9d98-b60a6a46cc7c" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.865s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:44 np0005486808 nova_compute[259627]: 2025-10-14 09:06:44.233 2 INFO nova.compute.manager [None req-d673ad26-1c19-4c12-9521-927131ab42de a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Pausing#033[00m
Oct 14 05:06:44 np0005486808 nova_compute[259627]: 2025-10-14 09:06:44.235 2 DEBUG nova.objects.instance [None req-d673ad26-1c19-4c12-9521-927131ab42de a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lazy-loading 'flavor' on Instance uuid 73d6be04-84dc-4b80-81f8-a9bbf9938051 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:06:44 np0005486808 nova_compute[259627]: 2025-10-14 09:06:44.264 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432804.2639887, 73d6be04-84dc-4b80-81f8-a9bbf9938051 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:06:44 np0005486808 nova_compute[259627]: 2025-10-14 09:06:44.264 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:06:44 np0005486808 nova_compute[259627]: 2025-10-14 09:06:44.270 2 DEBUG nova.compute.manager [None req-d673ad26-1c19-4c12-9521-927131ab42de a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:06:44 np0005486808 nova_compute[259627]: 2025-10-14 09:06:44.283 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:06:44 np0005486808 nova_compute[259627]: 2025-10-14 09:06:44.287 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:06:44 np0005486808 nova_compute[259627]: 2025-10-14 09:06:44.324 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Oct 14 05:06:44 np0005486808 nova_compute[259627]: 2025-10-14 09:06:44.383 2 DEBUG nova.compute.manager [req-99f0620a-9434-4398-9791-041be0771cb6 req-8f86781f-ebfd-4f07-a45c-a55ff16f32cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Received event network-vif-plugged-deb48802-bfee-42af-882c-3632b1fbb2cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:06:44 np0005486808 nova_compute[259627]: 2025-10-14 09:06:44.384 2 DEBUG oslo_concurrency.lockutils [req-99f0620a-9434-4398-9791-041be0771cb6 req-8f86781f-ebfd-4f07-a45c-a55ff16f32cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:44 np0005486808 nova_compute[259627]: 2025-10-14 09:06:44.385 2 DEBUG oslo_concurrency.lockutils [req-99f0620a-9434-4398-9791-041be0771cb6 req-8f86781f-ebfd-4f07-a45c-a55ff16f32cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:44 np0005486808 nova_compute[259627]: 2025-10-14 09:06:44.385 2 DEBUG oslo_concurrency.lockutils [req-99f0620a-9434-4398-9791-041be0771cb6 req-8f86781f-ebfd-4f07-a45c-a55ff16f32cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "dd55716e-2330-42a4-8963-33bdc9c7bbf8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:44 np0005486808 nova_compute[259627]: 2025-10-14 09:06:44.386 2 DEBUG nova.compute.manager [req-99f0620a-9434-4398-9791-041be0771cb6 req-8f86781f-ebfd-4f07-a45c-a55ff16f32cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] No waiting events found dispatching network-vif-plugged-deb48802-bfee-42af-882c-3632b1fbb2cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:06:44 np0005486808 nova_compute[259627]: 2025-10-14 09:06:44.386 2 WARNING nova.compute.manager [req-99f0620a-9434-4398-9791-041be0771cb6 req-8f86781f-ebfd-4f07-a45c-a55ff16f32cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Received unexpected event network-vif-plugged-deb48802-bfee-42af-882c-3632b1fbb2cf for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:06:44 np0005486808 nova_compute[259627]: 2025-10-14 09:06:44.387 2 DEBUG nova.compute.manager [req-99f0620a-9434-4398-9791-041be0771cb6 req-8f86781f-ebfd-4f07-a45c-a55ff16f32cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Received event network-vif-deleted-deb48802-bfee-42af-882c-3632b1fbb2cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:06:44 np0005486808 nova_compute[259627]: 2025-10-14 09:06:44.872 2 DEBUG oslo_concurrency.lockutils [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:44 np0005486808 nova_compute[259627]: 2025-10-14 09:06:44.873 2 DEBUG oslo_concurrency.lockutils [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:44 np0005486808 nova_compute[259627]: 2025-10-14 09:06:44.873 2 DEBUG oslo_concurrency.lockutils [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:44 np0005486808 nova_compute[259627]: 2025-10-14 09:06:44.874 2 DEBUG oslo_concurrency.lockutils [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:44 np0005486808 nova_compute[259627]: 2025-10-14 09:06:44.874 2 DEBUG oslo_concurrency.lockutils [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:44 np0005486808 nova_compute[259627]: 2025-10-14 09:06:44.875 2 INFO nova.compute.manager [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Terminating instance#033[00m
Oct 14 05:06:44 np0005486808 nova_compute[259627]: 2025-10-14 09:06:44.877 2 DEBUG nova.compute.manager [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:06:44 np0005486808 kernel: tap58429c4c-bd (unregistering): left promiscuous mode
Oct 14 05:06:44 np0005486808 NetworkManager[44885]: <info>  [1760432804.9518] device (tap58429c4c-bd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:06:44 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:44Z|00622|binding|INFO|Releasing lport 58429c4c-bdab-4d51-8440-95fb6e0fab00 from this chassis (sb_readonly=0)
Oct 14 05:06:44 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:44Z|00623|binding|INFO|Setting lport 58429c4c-bdab-4d51-8440-95fb6e0fab00 down in Southbound
Oct 14 05:06:44 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:44Z|00624|binding|INFO|Removing iface tap58429c4c-bd ovn-installed in OVS
Oct 14 05:06:44 np0005486808 nova_compute[259627]: 2025-10-14 09:06:44.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:44 np0005486808 nova_compute[259627]: 2025-10-14 09:06:44.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:44.978 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:a5:80 10.100.0.3'], port_security=['fa:16:3e:53:a5:80 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '1ce7a863-d0bf-4ea3-80f5-18675b16ac93', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3b87118-f516-4f2d-8696-aa7290af9d83', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e47722c609640d3a70fee8dd6ff94cc', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f73d8240-1201-4e28-9385-26f0dd3955ff', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.189'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07d19cad-5a0d-49ee-a7bf-91e279c3b03d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=58429c4c-bdab-4d51-8440-95fb6e0fab00) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:06:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:44.979 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 58429c4c-bdab-4d51-8440-95fb6e0fab00 in datapath f3b87118-f516-4f2d-8696-aa7290af9d83 unbound from our chassis#033[00m
Oct 14 05:06:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:44.980 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f3b87118-f516-4f2d-8696-aa7290af9d83, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:06:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:44.981 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[544cdfae-da75-480c-b6f3-7d0bf1e609ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:44.981 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83 namespace which is not needed anymore#033[00m
Oct 14 05:06:45 np0005486808 nova_compute[259627]: 2025-10-14 09:06:45.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:45 np0005486808 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d00000035.scope: Deactivated successfully.
Oct 14 05:06:45 np0005486808 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d00000035.scope: Consumed 16.481s CPU time.
Oct 14 05:06:45 np0005486808 systemd-machined[214636]: Machine qemu-68-instance-00000035 terminated.
Oct 14 05:06:45 np0005486808 neutron-haproxy-ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83[319460]: [NOTICE]   (319464) : haproxy version is 2.8.14-c23fe91
Oct 14 05:06:45 np0005486808 neutron-haproxy-ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83[319460]: [NOTICE]   (319464) : path to executable is /usr/sbin/haproxy
Oct 14 05:06:45 np0005486808 neutron-haproxy-ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83[319460]: [WARNING]  (319464) : Exiting Master process...
Oct 14 05:06:45 np0005486808 neutron-haproxy-ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83[319460]: [ALERT]    (319464) : Current worker (319466) exited with code 143 (Terminated)
Oct 14 05:06:45 np0005486808 neutron-haproxy-ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83[319460]: [WARNING]  (319464) : All workers exited. Exiting... (0)
Oct 14 05:06:45 np0005486808 systemd[1]: libpod-5ba6fea99da591b3014bdc8d9ea7cd148b4696e847c5038c822c0abbf14bd30b.scope: Deactivated successfully.
Oct 14 05:06:45 np0005486808 nova_compute[259627]: 2025-10-14 09:06:45.121 2 INFO nova.virt.libvirt.driver [-] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Instance destroyed successfully.#033[00m
Oct 14 05:06:45 np0005486808 nova_compute[259627]: 2025-10-14 09:06:45.121 2 DEBUG nova.objects.instance [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lazy-loading 'resources' on Instance uuid 1ce7a863-d0bf-4ea3-80f5-18675b16ac93 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:06:45 np0005486808 podman[324782]: 2025-10-14 09:06:45.123652464 +0000 UTC m=+0.044743402 container died 5ba6fea99da591b3014bdc8d9ea7cd148b4696e847c5038c822c0abbf14bd30b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009)
Oct 14 05:06:45 np0005486808 nova_compute[259627]: 2025-10-14 09:06:45.139 2 DEBUG nova.compute.manager [req-bb19f989-ebe3-4f1d-a7d6-bea6a65e8cfb req-b5755d17-863c-4bb4-a134-4c448f76baed 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Received event network-vif-unplugged-58429c4c-bdab-4d51-8440-95fb6e0fab00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:06:45 np0005486808 nova_compute[259627]: 2025-10-14 09:06:45.139 2 DEBUG oslo_concurrency.lockutils [req-bb19f989-ebe3-4f1d-a7d6-bea6a65e8cfb req-b5755d17-863c-4bb4-a134-4c448f76baed 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:45 np0005486808 nova_compute[259627]: 2025-10-14 09:06:45.139 2 DEBUG oslo_concurrency.lockutils [req-bb19f989-ebe3-4f1d-a7d6-bea6a65e8cfb req-b5755d17-863c-4bb4-a134-4c448f76baed 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:45 np0005486808 nova_compute[259627]: 2025-10-14 09:06:45.140 2 DEBUG oslo_concurrency.lockutils [req-bb19f989-ebe3-4f1d-a7d6-bea6a65e8cfb req-b5755d17-863c-4bb4-a134-4c448f76baed 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:45 np0005486808 nova_compute[259627]: 2025-10-14 09:06:45.140 2 DEBUG nova.compute.manager [req-bb19f989-ebe3-4f1d-a7d6-bea6a65e8cfb req-b5755d17-863c-4bb4-a134-4c448f76baed 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] No waiting events found dispatching network-vif-unplugged-58429c4c-bdab-4d51-8440-95fb6e0fab00 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:06:45 np0005486808 nova_compute[259627]: 2025-10-14 09:06:45.140 2 DEBUG nova.compute.manager [req-bb19f989-ebe3-4f1d-a7d6-bea6a65e8cfb req-b5755d17-863c-4bb4-a134-4c448f76baed 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Received event network-vif-unplugged-58429c4c-bdab-4d51-8440-95fb6e0fab00 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:06:45 np0005486808 nova_compute[259627]: 2025-10-14 09:06:45.142 2 DEBUG nova.virt.libvirt.vif [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:04:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1496520165',display_name='tempest-ServerActionsTestOtherA-server-1496520165',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1496520165',id=53,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHn+mln6XiHS3Dbrh5f5r23+s3Q61qobcQwb2UzGhsgS1DhTJSEpJGmS/ZP0w8jiE9rcTktB/Gz7RvHBySi5EJz+HH+wa+mTFVBHeaIG5cz8L5ypIzO20Wa3eu2dAxGK5A==',key_name='tempest-keypair-1288175355',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:05:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4e47722c609640d3a70fee8dd6ff94cc',ramdisk_id='',reservation_id='r-60w53hyn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-894139105',owner_user_name='tempest-ServerActionsTestOtherA-894139105-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:05:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d952679a4e6a4fc6bacf42c02d3e92d0',uuid=1ce7a863-d0bf-4ea3-80f5-18675b16ac93,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "address": "fa:16:3e:53:a5:80", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58429c4c-bd", "ovs_interfaceid": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:06:45 np0005486808 nova_compute[259627]: 2025-10-14 09:06:45.142 2 DEBUG nova.network.os_vif_util [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converting VIF {"id": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "address": "fa:16:3e:53:a5:80", "network": {"id": "f3b87118-f516-4f2d-8696-aa7290af9d83", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1427102137-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e47722c609640d3a70fee8dd6ff94cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58429c4c-bd", "ovs_interfaceid": "58429c4c-bdab-4d51-8440-95fb6e0fab00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:06:45 np0005486808 nova_compute[259627]: 2025-10-14 09:06:45.143 2 DEBUG nova.network.os_vif_util [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:53:a5:80,bridge_name='br-int',has_traffic_filtering=True,id=58429c4c-bdab-4d51-8440-95fb6e0fab00,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58429c4c-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:06:45 np0005486808 nova_compute[259627]: 2025-10-14 09:06:45.143 2 DEBUG os_vif [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:a5:80,bridge_name='br-int',has_traffic_filtering=True,id=58429c4c-bdab-4d51-8440-95fb6e0fab00,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58429c4c-bd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:06:45 np0005486808 nova_compute[259627]: 2025-10-14 09:06:45.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:45 np0005486808 nova_compute[259627]: 2025-10-14 09:06:45.146 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58429c4c-bd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:45 np0005486808 nova_compute[259627]: 2025-10-14 09:06:45.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:45 np0005486808 nova_compute[259627]: 2025-10-14 09:06:45.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:06:45 np0005486808 nova_compute[259627]: 2025-10-14 09:06:45.152 2 INFO os_vif [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:a5:80,bridge_name='br-int',has_traffic_filtering=True,id=58429c4c-bdab-4d51-8440-95fb6e0fab00,network=Network(f3b87118-f516-4f2d-8696-aa7290af9d83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58429c4c-bd')#033[00m
Oct 14 05:06:45 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5ba6fea99da591b3014bdc8d9ea7cd148b4696e847c5038c822c0abbf14bd30b-userdata-shm.mount: Deactivated successfully.
Oct 14 05:06:45 np0005486808 systemd[1]: var-lib-containers-storage-overlay-6c59812b275a7f08f721c19ad28122ba9e2d64d1426f442032d01ef6fa1360d6-merged.mount: Deactivated successfully.
Oct 14 05:06:45 np0005486808 podman[324782]: 2025-10-14 09:06:45.177749546 +0000 UTC m=+0.098840494 container cleanup 5ba6fea99da591b3014bdc8d9ea7cd148b4696e847c5038c822c0abbf14bd30b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 05:06:45 np0005486808 systemd[1]: libpod-conmon-5ba6fea99da591b3014bdc8d9ea7cd148b4696e847c5038c822c0abbf14bd30b.scope: Deactivated successfully.
Oct 14 05:06:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:45.245 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:06:45 np0005486808 podman[324836]: 2025-10-14 09:06:45.262239077 +0000 UTC m=+0.054933644 container remove 5ba6fea99da591b3014bdc8d9ea7cd148b4696e847c5038c822c0abbf14bd30b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:06:45 np0005486808 nova_compute[259627]: 2025-10-14 09:06:45.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:45.280 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[13d93744-f6f0-4ce3-ad3a-12121c2a2b1d]: (4, ('Tue Oct 14 09:06:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83 (5ba6fea99da591b3014bdc8d9ea7cd148b4696e847c5038c822c0abbf14bd30b)\n5ba6fea99da591b3014bdc8d9ea7cd148b4696e847c5038c822c0abbf14bd30b\nTue Oct 14 09:06:45 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83 (5ba6fea99da591b3014bdc8d9ea7cd148b4696e847c5038c822c0abbf14bd30b)\n5ba6fea99da591b3014bdc8d9ea7cd148b4696e847c5038c822c0abbf14bd30b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:45.282 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[84b9a732-3b13-42d2-9243-ebdf2a16c823]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:45.283 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3b87118-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:45 np0005486808 nova_compute[259627]: 2025-10-14 09:06:45.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:45 np0005486808 kernel: tapf3b87118-f0: left promiscuous mode
Oct 14 05:06:45 np0005486808 nova_compute[259627]: 2025-10-14 09:06:45.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:45.319 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cfb4de62-e69b-46d0-bfe8-3d3fdc75b982]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:45.349 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[231e6f90-204f-411e-9bef-d5ffcdb39bdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:45.351 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[aaa99f71-1268-4c7c-9409-d7fcabe04d2f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:45.370 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[45343fca-1b34-4673-ba6a-fbadf0dde668]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647534, 'reachable_time': 25406, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324854, 'error': None, 'target': 'ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:45.372 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f3b87118-f516-4f2d-8696-aa7290af9d83 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:06:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:45.372 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[9dfa4f97-b4ef-468d-8237-7ebff1e4e65a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:45.373 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 14 05:06:45 np0005486808 systemd[1]: run-netns-ovnmeta\x2df3b87118\x2df516\x2d4f2d\x2d8696\x2daa7290af9d83.mount: Deactivated successfully.
Oct 14 05:06:45 np0005486808 nova_compute[259627]: 2025-10-14 09:06:45.488 2 DEBUG oslo_concurrency.lockutils [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "interface-2189eac5-238f-4f09-ae1c-1cf47c3b6030-df1ec4d8-f543-4899-9d98-b60a6a46cc7c" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:45 np0005486808 nova_compute[259627]: 2025-10-14 09:06:45.489 2 DEBUG oslo_concurrency.lockutils [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "interface-2189eac5-238f-4f09-ae1c-1cf47c3b6030-df1ec4d8-f543-4899-9d98-b60a6a46cc7c" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:45 np0005486808 nova_compute[259627]: 2025-10-14 09:06:45.489 2 DEBUG nova.objects.instance [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'flavor' on Instance uuid 2189eac5-238f-4f09-ae1c-1cf47c3b6030 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:06:45 np0005486808 nova_compute[259627]: 2025-10-14 09:06:45.534 2 INFO nova.virt.libvirt.driver [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Deleting instance files /var/lib/nova/instances/1ce7a863-d0bf-4ea3-80f5-18675b16ac93_del#033[00m
Oct 14 05:06:45 np0005486808 nova_compute[259627]: 2025-10-14 09:06:45.534 2 INFO nova.virt.libvirt.driver [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Deletion of /var/lib/nova/instances/1ce7a863-d0bf-4ea3-80f5-18675b16ac93_del complete#033[00m
Oct 14 05:06:45 np0005486808 nova_compute[259627]: 2025-10-14 09:06:45.579 2 INFO nova.compute.manager [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Took 0.70 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:06:45 np0005486808 nova_compute[259627]: 2025-10-14 09:06:45.579 2 DEBUG oslo.service.loopingcall [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:06:45 np0005486808 nova_compute[259627]: 2025-10-14 09:06:45.580 2 DEBUG nova.compute.manager [-] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:06:45 np0005486808 nova_compute[259627]: 2025-10-14 09:06:45.580 2 DEBUG nova.network.neutron [-] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:06:45 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1535: 305 pgs: 305 active+clean; 309 MiB data, 692 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 280 op/s
Oct 14 05:06:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:46.374 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:46 np0005486808 nova_compute[259627]: 2025-10-14 09:06:46.566 2 DEBUG nova.objects.instance [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'pci_requests' on Instance uuid 2189eac5-238f-4f09-ae1c-1cf47c3b6030 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:06:46 np0005486808 nova_compute[259627]: 2025-10-14 09:06:46.591 2 DEBUG nova.network.neutron [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:06:46 np0005486808 nova_compute[259627]: 2025-10-14 09:06:46.660 2 DEBUG nova.compute.manager [req-855995c1-bfac-45d7-a89d-a891f30cf57f req-1ec9d3ec-5fc8-415f-9324-cd18fc049072 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Received event network-changed-dffa5a1f-657b-498e-bbe5-6540fead7fb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:06:46 np0005486808 nova_compute[259627]: 2025-10-14 09:06:46.661 2 DEBUG nova.compute.manager [req-855995c1-bfac-45d7-a89d-a891f30cf57f req-1ec9d3ec-5fc8-415f-9324-cd18fc049072 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Refreshing instance network info cache due to event network-changed-dffa5a1f-657b-498e-bbe5-6540fead7fb6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:06:46 np0005486808 nova_compute[259627]: 2025-10-14 09:06:46.661 2 DEBUG oslo_concurrency.lockutils [req-855995c1-bfac-45d7-a89d-a891f30cf57f req-1ec9d3ec-5fc8-415f-9324-cd18fc049072 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:06:46 np0005486808 nova_compute[259627]: 2025-10-14 09:06:46.661 2 DEBUG oslo_concurrency.lockutils [req-855995c1-bfac-45d7-a89d-a891f30cf57f req-1ec9d3ec-5fc8-415f-9324-cd18fc049072 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:06:46 np0005486808 nova_compute[259627]: 2025-10-14 09:06:46.662 2 DEBUG nova.network.neutron [req-855995c1-bfac-45d7-a89d-a891f30cf57f req-1ec9d3ec-5fc8-415f-9324-cd18fc049072 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Refreshing network info cache for port dffa5a1f-657b-498e-bbe5-6540fead7fb6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:06:47 np0005486808 nova_compute[259627]: 2025-10-14 09:06:47.181 2 DEBUG nova.network.neutron [-] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:06:47 np0005486808 nova_compute[259627]: 2025-10-14 09:06:47.197 2 INFO nova.compute.manager [-] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Took 1.62 seconds to deallocate network for instance.#033[00m
Oct 14 05:06:47 np0005486808 nova_compute[259627]: 2025-10-14 09:06:47.207 2 DEBUG oslo_concurrency.lockutils [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "73d6be04-84dc-4b80-81f8-a9bbf9938051" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:47 np0005486808 nova_compute[259627]: 2025-10-14 09:06:47.207 2 DEBUG oslo_concurrency.lockutils [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "73d6be04-84dc-4b80-81f8-a9bbf9938051" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:47 np0005486808 nova_compute[259627]: 2025-10-14 09:06:47.208 2 DEBUG oslo_concurrency.lockutils [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "73d6be04-84dc-4b80-81f8-a9bbf9938051-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:47 np0005486808 nova_compute[259627]: 2025-10-14 09:06:47.208 2 DEBUG oslo_concurrency.lockutils [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "73d6be04-84dc-4b80-81f8-a9bbf9938051-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:47 np0005486808 nova_compute[259627]: 2025-10-14 09:06:47.209 2 DEBUG oslo_concurrency.lockutils [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "73d6be04-84dc-4b80-81f8-a9bbf9938051-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:47 np0005486808 nova_compute[259627]: 2025-10-14 09:06:47.210 2 INFO nova.compute.manager [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Terminating instance#033[00m
Oct 14 05:06:47 np0005486808 nova_compute[259627]: 2025-10-14 09:06:47.212 2 DEBUG nova.compute.manager [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:06:47 np0005486808 nova_compute[259627]: 2025-10-14 09:06:47.242 2 DEBUG oslo_concurrency.lockutils [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:47 np0005486808 nova_compute[259627]: 2025-10-14 09:06:47.242 2 DEBUG oslo_concurrency.lockutils [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:47 np0005486808 kernel: tap26dd6404-20 (unregistering): left promiscuous mode
Oct 14 05:06:47 np0005486808 NetworkManager[44885]: <info>  [1760432807.2622] device (tap26dd6404-20): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:06:47 np0005486808 nova_compute[259627]: 2025-10-14 09:06:47.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:47 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:47Z|00625|binding|INFO|Releasing lport 26dd6404-2018-471b-a387-2b045d236164 from this chassis (sb_readonly=0)
Oct 14 05:06:47 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:47Z|00626|binding|INFO|Setting lport 26dd6404-2018-471b-a387-2b045d236164 down in Southbound
Oct 14 05:06:47 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:47Z|00627|binding|INFO|Removing iface tap26dd6404-20 ovn-installed in OVS
Oct 14 05:06:47 np0005486808 nova_compute[259627]: 2025-10-14 09:06:47.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:47.294 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:a0:6d 10.100.0.12'], port_security=['fa:16:3e:95:a0:6d 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '73d6be04-84dc-4b80-81f8-a9bbf9938051', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd39581efff7d48fb83412ca1f615d412', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dd5e08c6-d8c1-45fb-85db-6ce5c46fea39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c165cf5-3bad-4110-8321-7f7bda9723ad, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=26dd6404-2018-471b-a387-2b045d236164) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:06:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:47.295 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 26dd6404-2018-471b-a387-2b045d236164 in datapath 0a07d59e-be8b-4d41-a103-fb5a64bf6f88 unbound from our chassis
Oct 14 05:06:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:47.296 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0a07d59e-be8b-4d41-a103-fb5a64bf6f88, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 05:06:47 np0005486808 nova_compute[259627]: 2025-10-14 09:06:47.296 2 DEBUG nova.compute.manager [req-9a0c4ed4-7f9f-436b-9e53-88030f966b8a req-e726e686-2c12-438b-80db-a3ab50a2d271 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Received event network-vif-plugged-58429c4c-bdab-4d51-8440-95fb6e0fab00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:06:47 np0005486808 nova_compute[259627]: 2025-10-14 09:06:47.297 2 DEBUG oslo_concurrency.lockutils [req-9a0c4ed4-7f9f-436b-9e53-88030f966b8a req-e726e686-2c12-438b-80db-a3ab50a2d271 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:06:47 np0005486808 nova_compute[259627]: 2025-10-14 09:06:47.297 2 DEBUG oslo_concurrency.lockutils [req-9a0c4ed4-7f9f-436b-9e53-88030f966b8a req-e726e686-2c12-438b-80db-a3ab50a2d271 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:06:47 np0005486808 nova_compute[259627]: 2025-10-14 09:06:47.298 2 DEBUG oslo_concurrency.lockutils [req-9a0c4ed4-7f9f-436b-9e53-88030f966b8a req-e726e686-2c12-438b-80db-a3ab50a2d271 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:06:47 np0005486808 nova_compute[259627]: 2025-10-14 09:06:47.298 2 DEBUG nova.compute.manager [req-9a0c4ed4-7f9f-436b-9e53-88030f966b8a req-e726e686-2c12-438b-80db-a3ab50a2d271 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] No waiting events found dispatching network-vif-plugged-58429c4c-bdab-4d51-8440-95fb6e0fab00 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 05:06:47 np0005486808 nova_compute[259627]: 2025-10-14 09:06:47.298 2 WARNING nova.compute.manager [req-9a0c4ed4-7f9f-436b-9e53-88030f966b8a req-e726e686-2c12-438b-80db-a3ab50a2d271 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Received unexpected event network-vif-plugged-58429c4c-bdab-4d51-8440-95fb6e0fab00 for instance with vm_state deleted and task_state None.
Oct 14 05:06:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:47.299 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cff322e0-c925-44ee-9cef-3cb2de5751e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:06:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:47.300 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88 namespace which is not needed anymore
Oct 14 05:06:47 np0005486808 nova_compute[259627]: 2025-10-14 09:06:47.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:06:47 np0005486808 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d0000003e.scope: Deactivated successfully.
Oct 14 05:06:47 np0005486808 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d0000003e.scope: Consumed 5.270s CPU time.
Oct 14 05:06:47 np0005486808 systemd-machined[214636]: Machine qemu-76-instance-0000003e terminated.
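The `machine-qemu\x2dinstance\x2d...` scope and `run-netns-ovnmeta\x2d...` mount unit names above are systemd-escaped strings: systemd replaces `/` with `-` and escapes a literal `-` (and other reserved bytes) as `\xNN` when embedding names in unit names, which is why systemd-machined reports the same machine as plain `qemu-76-instance-0000003e`. A minimal decoder covering only the escapes seen in this log (the full rules are in systemd.unit(5); the function name is ours):

```python
import re

def systemd_unescape(name: str) -> str:
    """Reverse the systemd unit-name escaping seen in this journal:
    drop the unit suffix, turn '-' back into '/', decode '\\xNN' bytes."""
    base = re.sub(r'\.(scope|mount|service|slice)$', '', name)
    # '-' separates path components; a real '-' was escaped as \x2d.
    return '/'.join(
        re.sub(r'\\x([0-9a-fA-F]{2})',
               lambda m: chr(int(m.group(1), 16)), part)
        for part in base.split('-')
    )
```

Running it on the two escaped names above recovers `machine/qemu-76-instance-0000003e` and `run/netns/ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88`, matching the machined and neutron namespace names logged elsewhere in this capture.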
Oct 14 05:06:47 np0005486808 nova_compute[259627]: 2025-10-14 09:06:47.361 2 DEBUG oslo_concurrency.processutils [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:06:47 np0005486808 nova_compute[259627]: 2025-10-14 09:06:47.458 2 INFO nova.virt.libvirt.driver [-] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Instance destroyed successfully.
Oct 14 05:06:47 np0005486808 nova_compute[259627]: 2025-10-14 09:06:47.460 2 DEBUG nova.objects.instance [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lazy-loading 'resources' on Instance uuid 73d6be04-84dc-4b80-81f8-a9bbf9938051 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:06:47 np0005486808 nova_compute[259627]: 2025-10-14 09:06:47.487 2 DEBUG nova.virt.libvirt.vif [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:06:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-715549256',display_name='tempest-DeleteServersTestJSON-server-715549256',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-715549256',id=62,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:06:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='d39581efff7d48fb83412ca1f615d412',ramdisk_id='',reservation_id='r-vdf1bahn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-555285866',owner_user_name='tempest-DeleteServersTestJSON-555285866-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:06:44Z,user_data=None,user_id='a72439ec330b476ca4bb358682159b61',uuid=73d6be04-84dc-4b80-81f8-a9bbf9938051,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "26dd6404-2018-471b-a387-2b045d236164", "address": "fa:16:3e:95:a0:6d", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26dd6404-20", "ovs_interfaceid": "26dd6404-2018-471b-a387-2b045d236164", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 05:06:47 np0005486808 nova_compute[259627]: 2025-10-14 09:06:47.488 2 DEBUG nova.network.os_vif_util [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converting VIF {"id": "26dd6404-2018-471b-a387-2b045d236164", "address": "fa:16:3e:95:a0:6d", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26dd6404-20", "ovs_interfaceid": "26dd6404-2018-471b-a387-2b045d236164", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 05:06:47 np0005486808 nova_compute[259627]: 2025-10-14 09:06:47.489 2 DEBUG nova.network.os_vif_util [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:a0:6d,bridge_name='br-int',has_traffic_filtering=True,id=26dd6404-2018-471b-a387-2b045d236164,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26dd6404-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 05:06:47 np0005486808 nova_compute[259627]: 2025-10-14 09:06:47.490 2 DEBUG os_vif [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:a0:6d,bridge_name='br-int',has_traffic_filtering=True,id=26dd6404-2018-471b-a387-2b045d236164,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26dd6404-20') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 05:06:47 np0005486808 nova_compute[259627]: 2025-10-14 09:06:47.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:06:47 np0005486808 nova_compute[259627]: 2025-10-14 09:06:47.493 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26dd6404-20, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:06:47 np0005486808 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[324617]: [NOTICE]   (324621) : haproxy version is 2.8.14-c23fe91
Oct 14 05:06:47 np0005486808 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[324617]: [NOTICE]   (324621) : path to executable is /usr/sbin/haproxy
Oct 14 05:06:47 np0005486808 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[324617]: [WARNING]  (324621) : Exiting Master process...
Oct 14 05:06:47 np0005486808 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[324617]: [ALERT]    (324621) : Current worker (324623) exited with code 143 (Terminated)
Oct 14 05:06:47 np0005486808 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[324617]: [WARNING]  (324621) : All workers exited. Exiting... (0)
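The haproxy worker's exit code 143 above follows the common 128+N convention for a process killed by signal N, here SIGTERM (15) delivered during the container stop that the later podman/systemd lines record. A quick check of that arithmetic (the helper name is ours):

```python
import signal

def exit_code_to_signal(code: int):
    """Map a 128+N exit status back to the signal that killed the process,
    or None if the process exited normally (code <= 128)."""
    return signal.Signals(code - 128) if code > 128 else None
```

So `exit_code_to_signal(143)` yields `signal.SIGTERM`, consistent with a graceful container shutdown rather than a crash.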
Oct 14 05:06:47 np0005486808 nova_compute[259627]: 2025-10-14 09:06:47.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:06:47 np0005486808 nova_compute[259627]: 2025-10-14 09:06:47.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 05:06:47 np0005486808 systemd[1]: libpod-026240acfbe22bf6f7403a85591f060a0c3cc116a2c8351b8706a41ff6c53ecd.scope: Deactivated successfully.
Oct 14 05:06:47 np0005486808 nova_compute[259627]: 2025-10-14 09:06:47.503 2 INFO os_vif [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:a0:6d,bridge_name='br-int',has_traffic_filtering=True,id=26dd6404-2018-471b-a387-2b045d236164,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26dd6404-20')
Oct 14 05:06:47 np0005486808 podman[324878]: 2025-10-14 09:06:47.508379534 +0000 UTC m=+0.075689695 container died 026240acfbe22bf6f7403a85591f060a0c3cc116a2c8351b8706a41ff6c53ecd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:06:47 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-026240acfbe22bf6f7403a85591f060a0c3cc116a2c8351b8706a41ff6c53ecd-userdata-shm.mount: Deactivated successfully.
Oct 14 05:06:47 np0005486808 systemd[1]: var-lib-containers-storage-overlay-00ebe327e9b036dd3fe73fd0e1546dc84881dfa7dffba911ed9994c43ab1d96a-merged.mount: Deactivated successfully.
Oct 14 05:06:47 np0005486808 podman[324878]: 2025-10-14 09:06:47.549510717 +0000 UTC m=+0.116820868 container cleanup 026240acfbe22bf6f7403a85591f060a0c3cc116a2c8351b8706a41ff6c53ecd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009)
Oct 14 05:06:47 np0005486808 systemd[1]: libpod-conmon-026240acfbe22bf6f7403a85591f060a0c3cc116a2c8351b8706a41ff6c53ecd.scope: Deactivated successfully.
Oct 14 05:06:47 np0005486808 podman[324950]: 2025-10-14 09:06:47.633989367 +0000 UTC m=+0.053765055 container remove 026240acfbe22bf6f7403a85591f060a0c3cc116a2c8351b8706a41ff6c53ecd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 14 05:06:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:47.639 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a66ac1b2-18c8-4ebf-8d7f-92d63a59ab16]: (4, ('Tue Oct 14 09:06:47 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88 (026240acfbe22bf6f7403a85591f060a0c3cc116a2c8351b8706a41ff6c53ecd)\n026240acfbe22bf6f7403a85591f060a0c3cc116a2c8351b8706a41ff6c53ecd\nTue Oct 14 09:06:47 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88 (026240acfbe22bf6f7403a85591f060a0c3cc116a2c8351b8706a41ff6c53ecd)\n026240acfbe22bf6f7403a85591f060a0c3cc116a2c8351b8706a41ff6c53ecd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:06:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:47.641 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7fd460d3-80d4-46a8-b63f-7ac8c8ee0e65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:06:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:47.642 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a07d59e-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:06:47 np0005486808 nova_compute[259627]: 2025-10-14 09:06:47.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:06:47 np0005486808 kernel: tap0a07d59e-b0: left promiscuous mode
Oct 14 05:06:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:47.649 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7ebc68f0-58fb-4ba2-880f-dc1e0f638e90]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:06:47 np0005486808 nova_compute[259627]: 2025-10-14 09:06:47.656 2 DEBUG nova.policy [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8b0f0f2991214b9caed6f475a149c342', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8945d892653642ac9c3d894f8bc3619d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 05:06:47 np0005486808 nova_compute[259627]: 2025-10-14 09:06:47.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:06:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:06:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:47.679 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9ef6bf79-8b1f-4a64-852e-3a922840b71e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:06:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:47.680 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4be7d14e-4c93-4bac-a2ec-848799ad5cbe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:06:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:47.694 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1894dc91-4fcf-4529-bdef-057d980dd2ef]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 657766, 'reachable_time': 39793, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324967, 'error': None, 'target': 'ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:06:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:47.696 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 05:06:47 np0005486808 systemd[1]: run-netns-ovnmeta\x2d0a07d59e\x2dbe8b\x2d4d41\x2da103\x2dfb5a64bf6f88.mount: Deactivated successfully.
Oct 14 05:06:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:47.696 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[fa289942-25c1-475e-93c0-f2b6dd947adf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:06:47 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1536: 305 pgs: 305 active+clean; 309 MiB data, 692 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 248 op/s
Oct 14 05:06:47 np0005486808 nova_compute[259627]: 2025-10-14 09:06:47.916 2 INFO nova.virt.libvirt.driver [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Deleting instance files /var/lib/nova/instances/73d6be04-84dc-4b80-81f8-a9bbf9938051_del
Oct 14 05:06:47 np0005486808 nova_compute[259627]: 2025-10-14 09:06:47.917 2 INFO nova.virt.libvirt.driver [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Deletion of /var/lib/nova/instances/73d6be04-84dc-4b80-81f8-a9bbf9938051_del complete
Oct 14 05:06:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:06:47 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3819414765' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:06:47 np0005486808 nova_compute[259627]: 2025-10-14 09:06:47.941 2 DEBUG oslo_concurrency.processutils [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:06:47 np0005486808 nova_compute[259627]: 2025-10-14 09:06:47.946 2 DEBUG nova.compute.provider_tree [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 05:06:47 np0005486808 nova_compute[259627]: 2025-10-14 09:06:47.958 2 INFO nova.compute.manager [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Took 0.75 seconds to destroy the instance on the hypervisor.
Oct 14 05:06:47 np0005486808 nova_compute[259627]: 2025-10-14 09:06:47.959 2 DEBUG oslo.service.loopingcall [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 05:06:47 np0005486808 nova_compute[259627]: 2025-10-14 09:06:47.959 2 DEBUG nova.compute.manager [-] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 05:06:47 np0005486808 nova_compute[259627]: 2025-10-14 09:06:47.959 2 DEBUG nova.network.neutron [-] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 05:06:47 np0005486808 nova_compute[259627]: 2025-10-14 09:06:47.962 2 DEBUG nova.scheduler.client.report [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 05:06:47 np0005486808 nova_compute[259627]: 2025-10-14 09:06:47.982 2 DEBUG oslo_concurrency.lockutils [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:06:48 np0005486808 nova_compute[259627]: 2025-10-14 09:06:48.034 2 INFO nova.scheduler.client.report [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Deleted allocations for instance 1ce7a863-d0bf-4ea3-80f5-18675b16ac93
Oct 14 05:06:48 np0005486808 nova_compute[259627]: 2025-10-14 09:06:48.108 2 DEBUG oslo_concurrency.lockutils [None req-f2bf6072-67c3-4ed1-95a2-41265bc9aabf d952679a4e6a4fc6bacf42c02d3e92d0 4e47722c609640d3a70fee8dd6ff94cc - - default default] Lock "1ce7a863-d0bf-4ea3-80f5-18675b16ac93" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.235s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:06:48 np0005486808 nova_compute[259627]: 2025-10-14 09:06:48.744 2 DEBUG nova.network.neutron [-] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 05:06:48 np0005486808 nova_compute[259627]: 2025-10-14 09:06:48.761 2 INFO nova.compute.manager [-] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Took 0.80 seconds to deallocate network for instance.
Oct 14 05:06:48 np0005486808 nova_compute[259627]: 2025-10-14 09:06:48.902 2 DEBUG nova.compute.manager [req-c5aff32e-cd7b-4146-9f9b-5974489fc49d req-eaba8b44-a8f1-46ea-9cc5-992308f66606 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Received event network-vif-deleted-58429c4c-bdab-4d51-8440-95fb6e0fab00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:06:48 np0005486808 nova_compute[259627]: 2025-10-14 09:06:48.937 2 DEBUG oslo_concurrency.lockutils [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:06:48 np0005486808 nova_compute[259627]: 2025-10-14 09:06:48.937 2 DEBUG oslo_concurrency.lockutils [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:06:49 np0005486808 nova_compute[259627]: 2025-10-14 09:06:49.020 2 DEBUG oslo_concurrency.processutils [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:06:49 np0005486808 nova_compute[259627]: 2025-10-14 09:06:49.064 2 DEBUG nova.network.neutron [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Successfully updated port: df1ec4d8-f543-4899-9d98-b60a6a46cc7c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 05:06:49 np0005486808 nova_compute[259627]: 2025-10-14 09:06:49.071 2 DEBUG nova.network.neutron [req-855995c1-bfac-45d7-a89d-a891f30cf57f req-1ec9d3ec-5fc8-415f-9324-cd18fc049072 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Updated VIF entry in instance network info cache for port dffa5a1f-657b-498e-bbe5-6540fead7fb6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 05:06:49 np0005486808 nova_compute[259627]: 2025-10-14 09:06:49.072 2 DEBUG nova.network.neutron [req-855995c1-bfac-45d7-a89d-a891f30cf57f req-1ec9d3ec-5fc8-415f-9324-cd18fc049072 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Updating instance_info_cache with network_info: [{"id": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "address": "fa:16:3e:b4:40:de", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdffa5a1f-65", "ovs_interfaceid": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:06:49 np0005486808 nova_compute[259627]: 2025-10-14 09:06:49.088 2 DEBUG oslo_concurrency.lockutils [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:06:49 np0005486808 nova_compute[259627]: 2025-10-14 09:06:49.089 2 DEBUG oslo_concurrency.lockutils [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquired lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:06:49 np0005486808 nova_compute[259627]: 2025-10-14 09:06:49.090 2 DEBUG nova.network.neutron [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:06:49 np0005486808 nova_compute[259627]: 2025-10-14 09:06:49.093 2 DEBUG oslo_concurrency.lockutils [req-855995c1-bfac-45d7-a89d-a891f30cf57f req-1ec9d3ec-5fc8-415f-9324-cd18fc049072 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:06:49 np0005486808 nova_compute[259627]: 2025-10-14 09:06:49.094 2 DEBUG nova.compute.manager [req-855995c1-bfac-45d7-a89d-a891f30cf57f req-1ec9d3ec-5fc8-415f-9324-cd18fc049072 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Received event network-changed-350a3bec-5dbd-4a83-8d80-5796be0319fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:06:49 np0005486808 nova_compute[259627]: 2025-10-14 09:06:49.094 2 DEBUG nova.compute.manager [req-855995c1-bfac-45d7-a89d-a891f30cf57f req-1ec9d3ec-5fc8-415f-9324-cd18fc049072 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Refreshing instance network info cache due to event network-changed-350a3bec-5dbd-4a83-8d80-5796be0319fd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:06:49 np0005486808 nova_compute[259627]: 2025-10-14 09:06:49.095 2 DEBUG oslo_concurrency.lockutils [req-855995c1-bfac-45d7-a89d-a891f30cf57f req-1ec9d3ec-5fc8-415f-9324-cd18fc049072 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:06:49 np0005486808 nova_compute[259627]: 2025-10-14 09:06:49.341 2 WARNING nova.network.neutron [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] fc2d149f-aebf-406a-aed2-5161dd22b079 already exists in list: networks containing: ['fc2d149f-aebf-406a-aed2-5161dd22b079']. ignoring it#033[00m
Oct 14 05:06:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:06:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1186077694' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:06:49 np0005486808 nova_compute[259627]: 2025-10-14 09:06:49.523 2 DEBUG oslo_concurrency.processutils [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:49 np0005486808 nova_compute[259627]: 2025-10-14 09:06:49.531 2 DEBUG nova.compute.provider_tree [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:06:49 np0005486808 nova_compute[259627]: 2025-10-14 09:06:49.562 2 DEBUG nova.scheduler.client.report [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:06:49 np0005486808 nova_compute[259627]: 2025-10-14 09:06:49.594 2 DEBUG oslo_concurrency.lockutils [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:49 np0005486808 nova_compute[259627]: 2025-10-14 09:06:49.623 2 DEBUG nova.compute.manager [req-74fbce0d-e4fd-4863-a1ec-76adfcc81f27 req-c545aed6-9fb5-4b16-bcce-e6899bb4fe78 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Received event network-vif-unplugged-26dd6404-2018-471b-a387-2b045d236164 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:06:49 np0005486808 nova_compute[259627]: 2025-10-14 09:06:49.624 2 DEBUG oslo_concurrency.lockutils [req-74fbce0d-e4fd-4863-a1ec-76adfcc81f27 req-c545aed6-9fb5-4b16-bcce-e6899bb4fe78 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "73d6be04-84dc-4b80-81f8-a9bbf9938051-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:49 np0005486808 nova_compute[259627]: 2025-10-14 09:06:49.624 2 DEBUG oslo_concurrency.lockutils [req-74fbce0d-e4fd-4863-a1ec-76adfcc81f27 req-c545aed6-9fb5-4b16-bcce-e6899bb4fe78 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "73d6be04-84dc-4b80-81f8-a9bbf9938051-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:49 np0005486808 nova_compute[259627]: 2025-10-14 09:06:49.624 2 DEBUG oslo_concurrency.lockutils [req-74fbce0d-e4fd-4863-a1ec-76adfcc81f27 req-c545aed6-9fb5-4b16-bcce-e6899bb4fe78 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "73d6be04-84dc-4b80-81f8-a9bbf9938051-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:49 np0005486808 nova_compute[259627]: 2025-10-14 09:06:49.624 2 DEBUG nova.compute.manager [req-74fbce0d-e4fd-4863-a1ec-76adfcc81f27 req-c545aed6-9fb5-4b16-bcce-e6899bb4fe78 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] No waiting events found dispatching network-vif-unplugged-26dd6404-2018-471b-a387-2b045d236164 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:06:49 np0005486808 nova_compute[259627]: 2025-10-14 09:06:49.625 2 WARNING nova.compute.manager [req-74fbce0d-e4fd-4863-a1ec-76adfcc81f27 req-c545aed6-9fb5-4b16-bcce-e6899bb4fe78 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Received unexpected event network-vif-unplugged-26dd6404-2018-471b-a387-2b045d236164 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:06:49 np0005486808 nova_compute[259627]: 2025-10-14 09:06:49.625 2 DEBUG nova.compute.manager [req-74fbce0d-e4fd-4863-a1ec-76adfcc81f27 req-c545aed6-9fb5-4b16-bcce-e6899bb4fe78 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Received event network-vif-plugged-26dd6404-2018-471b-a387-2b045d236164 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:06:49 np0005486808 nova_compute[259627]: 2025-10-14 09:06:49.625 2 DEBUG oslo_concurrency.lockutils [req-74fbce0d-e4fd-4863-a1ec-76adfcc81f27 req-c545aed6-9fb5-4b16-bcce-e6899bb4fe78 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "73d6be04-84dc-4b80-81f8-a9bbf9938051-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:49 np0005486808 nova_compute[259627]: 2025-10-14 09:06:49.625 2 DEBUG oslo_concurrency.lockutils [req-74fbce0d-e4fd-4863-a1ec-76adfcc81f27 req-c545aed6-9fb5-4b16-bcce-e6899bb4fe78 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "73d6be04-84dc-4b80-81f8-a9bbf9938051-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:49 np0005486808 nova_compute[259627]: 2025-10-14 09:06:49.626 2 DEBUG oslo_concurrency.lockutils [req-74fbce0d-e4fd-4863-a1ec-76adfcc81f27 req-c545aed6-9fb5-4b16-bcce-e6899bb4fe78 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "73d6be04-84dc-4b80-81f8-a9bbf9938051-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:49 np0005486808 nova_compute[259627]: 2025-10-14 09:06:49.626 2 DEBUG nova.compute.manager [req-74fbce0d-e4fd-4863-a1ec-76adfcc81f27 req-c545aed6-9fb5-4b16-bcce-e6899bb4fe78 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] No waiting events found dispatching network-vif-plugged-26dd6404-2018-471b-a387-2b045d236164 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:06:49 np0005486808 nova_compute[259627]: 2025-10-14 09:06:49.626 2 WARNING nova.compute.manager [req-74fbce0d-e4fd-4863-a1ec-76adfcc81f27 req-c545aed6-9fb5-4b16-bcce-e6899bb4fe78 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Received unexpected event network-vif-plugged-26dd6404-2018-471b-a387-2b045d236164 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:06:49 np0005486808 nova_compute[259627]: 2025-10-14 09:06:49.627 2 INFO nova.scheduler.client.report [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Deleted allocations for instance 73d6be04-84dc-4b80-81f8-a9bbf9938051#033[00m
Oct 14 05:06:49 np0005486808 nova_compute[259627]: 2025-10-14 09:06:49.698 2 DEBUG oslo_concurrency.lockutils [None req-c98468f6-4258-4050-949c-049c9ad2d299 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "73d6be04-84dc-4b80-81f8-a9bbf9938051" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.491s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:49 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1537: 305 pgs: 305 active+clean; 309 MiB data, 692 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 248 op/s
Oct 14 05:06:50 np0005486808 nova_compute[259627]: 2025-10-14 09:06:50.894 2 DEBUG nova.compute.manager [req-9ab2f455-455a-452f-a560-8669676d9786 req-a2971ba0-4069-4a6a-b172-a9810d6accc3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Received event network-vif-deleted-26dd6404-2018-471b-a387-2b045d236164 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:06:50 np0005486808 nova_compute[259627]: 2025-10-14 09:06:50.894 2 DEBUG nova.compute.manager [req-9ab2f455-455a-452f-a560-8669676d9786 req-a2971ba0-4069-4a6a-b172-a9810d6accc3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Received event network-changed-df1ec4d8-f543-4899-9d98-b60a6a46cc7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:06:50 np0005486808 nova_compute[259627]: 2025-10-14 09:06:50.895 2 DEBUG nova.compute.manager [req-9ab2f455-455a-452f-a560-8669676d9786 req-a2971ba0-4069-4a6a-b172-a9810d6accc3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Refreshing instance network info cache due to event network-changed-df1ec4d8-f543-4899-9d98-b60a6a46cc7c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:06:50 np0005486808 nova_compute[259627]: 2025-10-14 09:06:50.895 2 DEBUG oslo_concurrency.lockutils [req-9ab2f455-455a-452f-a560-8669676d9786 req-a2971ba0-4069-4a6a-b172-a9810d6accc3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:06:51 np0005486808 podman[324994]: 2025-10-14 09:06:51.67146846 +0000 UTC m=+0.077164042 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:06:51 np0005486808 podman[324993]: 2025-10-14 09:06:51.711991028 +0000 UTC m=+0.117743961 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 05:06:51 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1538: 305 pgs: 305 active+clean; 200 MiB data, 624 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 283 op/s
Oct 14 05:06:52 np0005486808 nova_compute[259627]: 2025-10-14 09:06:52.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:52 np0005486808 nova_compute[259627]: 2025-10-14 09:06:52.390 2 DEBUG nova.network.neutron [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Updating instance_info_cache with network_info: [{"id": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "address": "fa:16:3e:9c:3f:ae", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap350a3bec-5d", "ovs_interfaceid": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:06:52 np0005486808 nova_compute[259627]: 2025-10-14 09:06:52.424 2 DEBUG oslo_concurrency.lockutils [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Releasing lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:06:52 np0005486808 nova_compute[259627]: 2025-10-14 09:06:52.426 2 DEBUG oslo_concurrency.lockutils [req-855995c1-bfac-45d7-a89d-a891f30cf57f req-1ec9d3ec-5fc8-415f-9324-cd18fc049072 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:06:52 np0005486808 nova_compute[259627]: 2025-10-14 09:06:52.427 2 DEBUG nova.network.neutron [req-855995c1-bfac-45d7-a89d-a891f30cf57f req-1ec9d3ec-5fc8-415f-9324-cd18fc049072 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Refreshing network info cache for port 350a3bec-5dbd-4a83-8d80-5796be0319fd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:06:52 np0005486808 nova_compute[259627]: 2025-10-14 09:06:52.433 2 DEBUG nova.virt.libvirt.vif [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:06:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2131193327',display_name='tempest-tempest.common.compute-instance-2131193327',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2131193327',id=59,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP68NH14S3HRbU1S6/VMc0nOUsEVOZ3tz6VZKHo1Z+OFyqqDVRUrwJaEU67IXpbnQXXmmP8KKv7jtqoNjZqCUWnJrm5ENg2LKrvLCxr2iywwS6vcmTe+HUgI45UTIKYhOw==',key_name='tempest-keypair-1916836153',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:06:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-dzeywdhl',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:06:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=2189eac5-238f-4f09-ae1c-1cf47c3b6030,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:06:52 np0005486808 nova_compute[259627]: 2025-10-14 09:06:52.434 2 DEBUG nova.network.os_vif_util [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:06:52 np0005486808 nova_compute[259627]: 2025-10-14 09:06:52.436 2 DEBUG nova.network.os_vif_util [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:db:60,bridge_name='br-int',has_traffic_filtering=True,id=df1ec4d8-f543-4899-9d98-b60a6a46cc7c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf1ec4d8-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:06:52 np0005486808 nova_compute[259627]: 2025-10-14 09:06:52.437 2 DEBUG os_vif [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:db:60,bridge_name='br-int',has_traffic_filtering=True,id=df1ec4d8-f543-4899-9d98-b60a6a46cc7c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf1ec4d8-f5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:06:52 np0005486808 nova_compute[259627]: 2025-10-14 09:06:52.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:52 np0005486808 nova_compute[259627]: 2025-10-14 09:06:52.439 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:52 np0005486808 nova_compute[259627]: 2025-10-14 09:06:52.440 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:06:52 np0005486808 nova_compute[259627]: 2025-10-14 09:06:52.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:52 np0005486808 nova_compute[259627]: 2025-10-14 09:06:52.444 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdf1ec4d8-f5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:52 np0005486808 nova_compute[259627]: 2025-10-14 09:06:52.445 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdf1ec4d8-f5, col_values=(('external_ids', {'iface-id': 'df1ec4d8-f543-4899-9d98-b60a6a46cc7c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:38:db:60', 'vm-uuid': '2189eac5-238f-4f09-ae1c-1cf47c3b6030'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:52 np0005486808 nova_compute[259627]: 2025-10-14 09:06:52.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:52 np0005486808 NetworkManager[44885]: <info>  [1760432812.4483] manager: (tapdf1ec4d8-f5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/267)
Oct 14 05:06:52 np0005486808 nova_compute[259627]: 2025-10-14 09:06:52.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:06:52 np0005486808 nova_compute[259627]: 2025-10-14 09:06:52.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:52 np0005486808 nova_compute[259627]: 2025-10-14 09:06:52.455 2 INFO os_vif [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:db:60,bridge_name='br-int',has_traffic_filtering=True,id=df1ec4d8-f543-4899-9d98-b60a6a46cc7c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf1ec4d8-f5')#033[00m
Oct 14 05:06:52 np0005486808 nova_compute[259627]: 2025-10-14 09:06:52.457 2 DEBUG nova.virt.libvirt.vif [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:06:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2131193327',display_name='tempest-tempest.common.compute-instance-2131193327',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2131193327',id=59,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP68NH14S3HRbU1S6/VMc0nOUsEVOZ3tz6VZKHo1Z+OFyqqDVRUrwJaEU67IXpbnQXXmmP8KKv7jtqoNjZqCUWnJrm5ENg2LKrvLCxr2iywwS6vcmTe+HUgI45UTIKYhOw==',key_name='tempest-keypair-1916836153',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:06:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-dzeywdhl',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:06:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=2189eac5-238f-4f09-ae1c-1cf47c3b6030,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:06:52 np0005486808 nova_compute[259627]: 2025-10-14 09:06:52.457 2 DEBUG nova.network.os_vif_util [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:06:52 np0005486808 nova_compute[259627]: 2025-10-14 09:06:52.458 2 DEBUG nova.network.os_vif_util [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:db:60,bridge_name='br-int',has_traffic_filtering=True,id=df1ec4d8-f543-4899-9d98-b60a6a46cc7c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf1ec4d8-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:06:52 np0005486808 nova_compute[259627]: 2025-10-14 09:06:52.466 2 DEBUG nova.virt.libvirt.guest [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] attach device xml: <interface type="ethernet">
Oct 14 05:06:52 np0005486808 nova_compute[259627]:  <mac address="fa:16:3e:38:db:60"/>
Oct 14 05:06:52 np0005486808 nova_compute[259627]:  <model type="virtio"/>
Oct 14 05:06:52 np0005486808 nova_compute[259627]:  <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:06:52 np0005486808 nova_compute[259627]:  <mtu size="1442"/>
Oct 14 05:06:52 np0005486808 nova_compute[259627]:  <target dev="tapdf1ec4d8-f5"/>
Oct 14 05:06:52 np0005486808 nova_compute[259627]: </interface>
Oct 14 05:06:52 np0005486808 nova_compute[259627]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct 14 05:06:52 np0005486808 kernel: tapdf1ec4d8-f5: entered promiscuous mode
Oct 14 05:06:52 np0005486808 NetworkManager[44885]: <info>  [1760432812.4808] manager: (tapdf1ec4d8-f5): new Tun device (/org/freedesktop/NetworkManager/Devices/268)
Oct 14 05:06:52 np0005486808 systemd-udevd[325041]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:06:52 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:52Z|00628|binding|INFO|Claiming lport df1ec4d8-f543-4899-9d98-b60a6a46cc7c for this chassis.
Oct 14 05:06:52 np0005486808 nova_compute[259627]: 2025-10-14 09:06:52.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:52 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:52Z|00629|binding|INFO|df1ec4d8-f543-4899-9d98-b60a6a46cc7c: Claiming fa:16:3e:38:db:60 10.100.0.12
Oct 14 05:06:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:52.532 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:db:60 10.100.0.12'], port_security=['fa:16:3e:38:db:60 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-278583686', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '2189eac5-238f-4f09-ae1c-1cf47c3b6030', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2d149f-aebf-406a-aed2-5161dd22b079', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-278583686', 'neutron:project_id': '8945d892653642ac9c3d894f8bc3619d', 'neutron:revision_number': '7', 'neutron:security_group_ids': '643bc47b-384e-4818-a06b-8ecfda630b09', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e933e805-d9b6-497a-9ba3-5f7153390420, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=df1ec4d8-f543-4899-9d98-b60a6a46cc7c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:06:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:52.533 162547 INFO neutron.agent.ovn.metadata.agent [-] Port df1ec4d8-f543-4899-9d98-b60a6a46cc7c in datapath fc2d149f-aebf-406a-aed2-5161dd22b079 bound to our chassis#033[00m
Oct 14 05:06:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:52.534 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc2d149f-aebf-406a-aed2-5161dd22b079#033[00m
Oct 14 05:06:52 np0005486808 NetworkManager[44885]: <info>  [1760432812.5450] device (tapdf1ec4d8-f5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:06:52 np0005486808 NetworkManager[44885]: <info>  [1760432812.5457] device (tapdf1ec4d8-f5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:06:52 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:52Z|00630|binding|INFO|Setting lport df1ec4d8-f543-4899-9d98-b60a6a46cc7c ovn-installed in OVS
Oct 14 05:06:52 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:52Z|00631|binding|INFO|Setting lport df1ec4d8-f543-4899-9d98-b60a6a46cc7c up in Southbound
Oct 14 05:06:52 np0005486808 nova_compute[259627]: 2025-10-14 09:06:52.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:52.553 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1faebdad-a438-4394-b9d0-022734ef13c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:52.593 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[111b7490-4d1f-4f3b-be36-6e1ff98cadcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:52.600 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[894dedba-8680-4323-a998-197920e0462f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:52 np0005486808 nova_compute[259627]: 2025-10-14 09:06:52.612 2 DEBUG nova.virt.libvirt.driver [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:06:52 np0005486808 nova_compute[259627]: 2025-10-14 09:06:52.613 2 DEBUG nova.virt.libvirt.driver [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:06:52 np0005486808 nova_compute[259627]: 2025-10-14 09:06:52.613 2 DEBUG nova.virt.libvirt.driver [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No VIF found with MAC fa:16:3e:9c:3f:ae, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:06:52 np0005486808 nova_compute[259627]: 2025-10-14 09:06:52.613 2 DEBUG nova.virt.libvirt.driver [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] No VIF found with MAC fa:16:3e:38:db:60, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:06:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:52.635 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[be6986ea-c8d3-42da-8d9c-4b0ab413a53d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:52 np0005486808 nova_compute[259627]: 2025-10-14 09:06:52.644 2 DEBUG nova.virt.libvirt.guest [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:06:52 np0005486808 nova_compute[259627]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:06:52 np0005486808 nova_compute[259627]:  <nova:name>tempest-tempest.common.compute-instance-2131193327</nova:name>
Oct 14 05:06:52 np0005486808 nova_compute[259627]:  <nova:creationTime>2025-10-14 09:06:52</nova:creationTime>
Oct 14 05:06:52 np0005486808 nova_compute[259627]:  <nova:flavor name="m1.nano">
Oct 14 05:06:52 np0005486808 nova_compute[259627]:    <nova:memory>128</nova:memory>
Oct 14 05:06:52 np0005486808 nova_compute[259627]:    <nova:disk>1</nova:disk>
Oct 14 05:06:52 np0005486808 nova_compute[259627]:    <nova:swap>0</nova:swap>
Oct 14 05:06:52 np0005486808 nova_compute[259627]:    <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:06:52 np0005486808 nova_compute[259627]:    <nova:vcpus>1</nova:vcpus>
Oct 14 05:06:52 np0005486808 nova_compute[259627]:  </nova:flavor>
Oct 14 05:06:52 np0005486808 nova_compute[259627]:  <nova:owner>
Oct 14 05:06:52 np0005486808 nova_compute[259627]:    <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 05:06:52 np0005486808 nova_compute[259627]:    <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 05:06:52 np0005486808 nova_compute[259627]:  </nova:owner>
Oct 14 05:06:52 np0005486808 nova_compute[259627]:  <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:06:52 np0005486808 nova_compute[259627]:  <nova:ports>
Oct 14 05:06:52 np0005486808 nova_compute[259627]:    <nova:port uuid="350a3bec-5dbd-4a83-8d80-5796be0319fd">
Oct 14 05:06:52 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 14 05:06:52 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:06:52 np0005486808 nova_compute[259627]:    <nova:port uuid="df1ec4d8-f543-4899-9d98-b60a6a46cc7c">
Oct 14 05:06:52 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 14 05:06:52 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:06:52 np0005486808 nova_compute[259627]:  </nova:ports>
Oct 14 05:06:52 np0005486808 nova_compute[259627]: </nova:instance>
Oct 14 05:06:52 np0005486808 nova_compute[259627]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct 14 05:06:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:52.666 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dcfe2d4e-f7c7-4585-a6e1-928c506fe467]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc2d149f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:e7:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 1000, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 1000, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 654854, 'reachable_time': 41314, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325050, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:52 np0005486808 nova_compute[259627]: 2025-10-14 09:06:52.672 2 DEBUG oslo_concurrency.lockutils [None req-c58422a2-c359-4976-b12b-9e63820b2564 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "interface-2189eac5-238f-4f09-ae1c-1cf47c3b6030-df1ec4d8-f543-4899-9d98-b60a6a46cc7c" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.183s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:06:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:52.687 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a01ac541-da31-496d-b05e-7221a40350a2]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 654865, 'tstamp': 654865}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325051, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 654867, 'tstamp': 654867}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325051, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:52.689 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc2d149f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:52 np0005486808 nova_compute[259627]: 2025-10-14 09:06:52.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:52 np0005486808 nova_compute[259627]: 2025-10-14 09:06:52.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:52.693 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc2d149f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:52.693 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:06:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:52.694 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc2d149f-a0, col_values=(('external_ids', {'iface-id': '156432ee-35b5-40a9-aded-8066933d9972'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:52.694 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:06:52 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:52Z|00632|binding|INFO|Releasing lport 156432ee-35b5-40a9-aded-8066933d9972 from this chassis (sb_readonly=0)
Oct 14 05:06:52 np0005486808 nova_compute[259627]: 2025-10-14 09:06:52.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:53 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1539: 305 pgs: 305 active+clean; 200 MiB data, 624 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 17 KiB/s wr, 146 op/s
Oct 14 05:06:54 np0005486808 nova_compute[259627]: 2025-10-14 09:06:54.104 2 DEBUG oslo_concurrency.lockutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "c8d53ba7-c60f-4e5c-899f-fd95996ea742" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:54 np0005486808 nova_compute[259627]: 2025-10-14 09:06:54.105 2 DEBUG oslo_concurrency.lockutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "c8d53ba7-c60f-4e5c-899f-fd95996ea742" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:54 np0005486808 nova_compute[259627]: 2025-10-14 09:06:54.124 2 DEBUG nova.compute.manager [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:06:54 np0005486808 nova_compute[259627]: 2025-10-14 09:06:54.206 2 DEBUG oslo_concurrency.lockutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:54 np0005486808 nova_compute[259627]: 2025-10-14 09:06:54.206 2 DEBUG oslo_concurrency.lockutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:54 np0005486808 nova_compute[259627]: 2025-10-14 09:06:54.216 2 DEBUG nova.virt.hardware [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:06:54 np0005486808 nova_compute[259627]: 2025-10-14 09:06:54.217 2 INFO nova.compute.claims [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:06:54 np0005486808 nova_compute[259627]: 2025-10-14 09:06:54.341 2 DEBUG oslo_concurrency.processutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:06:54 np0005486808 nova_compute[259627]: 2025-10-14 09:06:54.395 2 DEBUG nova.network.neutron [req-855995c1-bfac-45d7-a89d-a891f30cf57f req-1ec9d3ec-5fc8-415f-9324-cd18fc049072 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Updated VIF entry in instance network info cache for port 350a3bec-5dbd-4a83-8d80-5796be0319fd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:06:54 np0005486808 nova_compute[259627]: 2025-10-14 09:06:54.396 2 DEBUG nova.network.neutron [req-855995c1-bfac-45d7-a89d-a891f30cf57f req-1ec9d3ec-5fc8-415f-9324-cd18fc049072 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Updating instance_info_cache with network_info: [{"id": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "address": "fa:16:3e:9c:3f:ae", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap350a3bec-5d", "ovs_interfaceid": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:06:54 np0005486808 nova_compute[259627]: 2025-10-14 09:06:54.435 2 DEBUG oslo_concurrency.lockutils [req-855995c1-bfac-45d7-a89d-a891f30cf57f req-1ec9d3ec-5fc8-415f-9324-cd18fc049072 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:06:54 np0005486808 nova_compute[259627]: 2025-10-14 09:06:54.436 2 DEBUG oslo_concurrency.lockutils [req-9ab2f455-455a-452f-a560-8669676d9786 req-a2971ba0-4069-4a6a-b172-a9810d6accc3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:06:54 np0005486808 nova_compute[259627]: 2025-10-14 09:06:54.437 2 DEBUG nova.network.neutron [req-9ab2f455-455a-452f-a560-8669676d9786 req-a2971ba0-4069-4a6a-b172-a9810d6accc3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Refreshing network info cache for port df1ec4d8-f543-4899-9d98-b60a6a46cc7c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:06:54 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:06:54 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/737460509' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:06:54 np0005486808 nova_compute[259627]: 2025-10-14 09:06:54.841 2 DEBUG oslo_concurrency.processutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:54 np0005486808 nova_compute[259627]: 2025-10-14 09:06:54.849 2 DEBUG nova.compute.provider_tree [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:06:54 np0005486808 nova_compute[259627]: 2025-10-14 09:06:54.870 2 DEBUG nova.scheduler.client.report [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:06:54 np0005486808 nova_compute[259627]: 2025-10-14 09:06:54.896 2 DEBUG oslo_concurrency.lockutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:54 np0005486808 nova_compute[259627]: 2025-10-14 09:06:54.897 2 DEBUG nova.compute.manager [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:06:54 np0005486808 nova_compute[259627]: 2025-10-14 09:06:54.947 2 DEBUG nova.compute.manager [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:06:54 np0005486808 nova_compute[259627]: 2025-10-14 09:06:54.948 2 DEBUG nova.network.neutron [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:06:54 np0005486808 nova_compute[259627]: 2025-10-14 09:06:54.971 2 INFO nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:06:55 np0005486808 nova_compute[259627]: 2025-10-14 09:06:55.003 2 DEBUG nova.compute.manager [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:06:55 np0005486808 nova_compute[259627]: 2025-10-14 09:06:55.132 2 DEBUG nova.compute.manager [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:06:55 np0005486808 nova_compute[259627]: 2025-10-14 09:06:55.133 2 DEBUG nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:06:55 np0005486808 nova_compute[259627]: 2025-10-14 09:06:55.134 2 INFO nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Creating image(s)#033[00m
Oct 14 05:06:55 np0005486808 nova_compute[259627]: 2025-10-14 09:06:55.161 2 DEBUG nova.storage.rbd_utils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image c8d53ba7-c60f-4e5c-899f-fd95996ea742_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:06:55 np0005486808 nova_compute[259627]: 2025-10-14 09:06:55.191 2 DEBUG nova.storage.rbd_utils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image c8d53ba7-c60f-4e5c-899f-fd95996ea742_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:06:55 np0005486808 nova_compute[259627]: 2025-10-14 09:06:55.220 2 DEBUG nova.storage.rbd_utils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image c8d53ba7-c60f-4e5c-899f-fd95996ea742_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:06:55 np0005486808 nova_compute[259627]: 2025-10-14 09:06:55.224 2 DEBUG oslo_concurrency.processutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:06:55 np0005486808 nova_compute[259627]: 2025-10-14 09:06:55.275 2 DEBUG nova.policy [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a72439ec330b476ca4bb358682159b61', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd39581efff7d48fb83412ca1f615d412', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:06:55 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:55Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:38:db:60 10.100.0.12
Oct 14 05:06:55 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:55Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:38:db:60 10.100.0.12
Oct 14 05:06:55 np0005486808 nova_compute[259627]: 2025-10-14 09:06:55.324 2 DEBUG oslo_concurrency.processutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:55 np0005486808 nova_compute[259627]: 2025-10-14 09:06:55.324 2 DEBUG oslo_concurrency.lockutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:55 np0005486808 nova_compute[259627]: 2025-10-14 09:06:55.325 2 DEBUG oslo_concurrency.lockutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:55 np0005486808 nova_compute[259627]: 2025-10-14 09:06:55.326 2 DEBUG oslo_concurrency.lockutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:55 np0005486808 nova_compute[259627]: 2025-10-14 09:06:55.350 2 DEBUG nova.storage.rbd_utils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image c8d53ba7-c60f-4e5c-899f-fd95996ea742_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:06:55 np0005486808 nova_compute[259627]: 2025-10-14 09:06:55.355 2 DEBUG oslo_concurrency.processutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 c8d53ba7-c60f-4e5c-899f-fd95996ea742_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:06:55 np0005486808 nova_compute[259627]: 2025-10-14 09:06:55.468 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432800.4674513, dd55716e-2330-42a4-8963-33bdc9c7bbf8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:06:55 np0005486808 nova_compute[259627]: 2025-10-14 09:06:55.469 2 INFO nova.compute.manager [-] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:06:55 np0005486808 nova_compute[259627]: 2025-10-14 09:06:55.501 2 DEBUG nova.compute.manager [None req-2e68dc7e-0c7d-460d-b35e-7ca2d32c5203 - - - - - -] [instance: dd55716e-2330-42a4-8963-33bdc9c7bbf8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:06:55 np0005486808 nova_compute[259627]: 2025-10-14 09:06:55.643 2 DEBUG oslo_concurrency.processutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 c8d53ba7-c60f-4e5c-899f-fd95996ea742_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.288s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:55 np0005486808 nova_compute[259627]: 2025-10-14 09:06:55.708 2 DEBUG nova.storage.rbd_utils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] resizing rbd image c8d53ba7-c60f-4e5c-899f-fd95996ea742_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:06:55 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1540: 305 pgs: 305 active+clean; 208 MiB data, 624 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 89 KiB/s wr, 157 op/s
Oct 14 05:06:55 np0005486808 nova_compute[259627]: 2025-10-14 09:06:55.827 2 DEBUG nova.objects.instance [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lazy-loading 'migration_context' on Instance uuid c8d53ba7-c60f-4e5c-899f-fd95996ea742 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:06:55 np0005486808 nova_compute[259627]: 2025-10-14 09:06:55.841 2 DEBUG nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:06:55 np0005486808 nova_compute[259627]: 2025-10-14 09:06:55.842 2 DEBUG nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Ensure instance console log exists: /var/lib/nova/instances/c8d53ba7-c60f-4e5c-899f-fd95996ea742/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:06:55 np0005486808 nova_compute[259627]: 2025-10-14 09:06:55.843 2 DEBUG oslo_concurrency.lockutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:55 np0005486808 nova_compute[259627]: 2025-10-14 09:06:55.844 2 DEBUG oslo_concurrency.lockutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:55 np0005486808 nova_compute[259627]: 2025-10-14 09:06:55.844 2 DEBUG oslo_concurrency.lockutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:56 np0005486808 nova_compute[259627]: 2025-10-14 09:06:56.212 2 DEBUG nova.network.neutron [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Successfully created port: 93e6162a-d037-4440-9c8c-1cb9b293f249 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:06:56 np0005486808 nova_compute[259627]: 2025-10-14 09:06:56.702 2 DEBUG nova.network.neutron [req-9ab2f455-455a-452f-a560-8669676d9786 req-a2971ba0-4069-4a6a-b172-a9810d6accc3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Updated VIF entry in instance network info cache for port df1ec4d8-f543-4899-9d98-b60a6a46cc7c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:06:56 np0005486808 nova_compute[259627]: 2025-10-14 09:06:56.703 2 DEBUG nova.network.neutron [req-9ab2f455-455a-452f-a560-8669676d9786 req-a2971ba0-4069-4a6a-b172-a9810d6accc3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Updating instance_info_cache with network_info: [{"id": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "address": "fa:16:3e:9c:3f:ae", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap350a3bec-5d", "ovs_interfaceid": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:06:56 np0005486808 nova_compute[259627]: 2025-10-14 09:06:56.734 2 DEBUG oslo_concurrency.lockutils [req-9ab2f455-455a-452f-a560-8669676d9786 req-a2971ba0-4069-4a6a-b172-a9810d6accc3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:06:56 np0005486808 nova_compute[259627]: 2025-10-14 09:06:56.941 2 DEBUG nova.compute.manager [req-3d661b63-e68d-4265-9270-12385b449894 req-abf9133c-c9ee-4eda-88f5-fcabf30234ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Received event network-vif-plugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:06:56 np0005486808 nova_compute[259627]: 2025-10-14 09:06:56.942 2 DEBUG oslo_concurrency.lockutils [req-3d661b63-e68d-4265-9270-12385b449894 req-abf9133c-c9ee-4eda-88f5-fcabf30234ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:56 np0005486808 nova_compute[259627]: 2025-10-14 09:06:56.942 2 DEBUG oslo_concurrency.lockutils [req-3d661b63-e68d-4265-9270-12385b449894 req-abf9133c-c9ee-4eda-88f5-fcabf30234ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:56 np0005486808 nova_compute[259627]: 2025-10-14 09:06:56.942 2 DEBUG oslo_concurrency.lockutils [req-3d661b63-e68d-4265-9270-12385b449894 req-abf9133c-c9ee-4eda-88f5-fcabf30234ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:56 np0005486808 nova_compute[259627]: 2025-10-14 09:06:56.942 2 DEBUG nova.compute.manager [req-3d661b63-e68d-4265-9270-12385b449894 req-abf9133c-c9ee-4eda-88f5-fcabf30234ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] No waiting events found dispatching network-vif-plugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:06:56 np0005486808 nova_compute[259627]: 2025-10-14 09:06:56.942 2 WARNING nova.compute.manager [req-3d661b63-e68d-4265-9270-12385b449894 req-abf9133c-c9ee-4eda-88f5-fcabf30234ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Received unexpected event network-vif-plugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c for instance with vm_state active and task_state None.#033[00m
Oct 14 05:06:57 np0005486808 nova_compute[259627]: 2025-10-14 09:06:57.259 2 DEBUG nova.network.neutron [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Successfully updated port: 93e6162a-d037-4440-9c8c-1cb9b293f249 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:06:57 np0005486808 nova_compute[259627]: 2025-10-14 09:06:57.277 2 DEBUG oslo_concurrency.lockutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "refresh_cache-c8d53ba7-c60f-4e5c-899f-fd95996ea742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:06:57 np0005486808 nova_compute[259627]: 2025-10-14 09:06:57.278 2 DEBUG oslo_concurrency.lockutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquired lock "refresh_cache-c8d53ba7-c60f-4e5c-899f-fd95996ea742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:06:57 np0005486808 nova_compute[259627]: 2025-10-14 09:06:57.278 2 DEBUG nova.network.neutron [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 05:06:57 np0005486808 nova_compute[259627]: 2025-10-14 09:06:57.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:06:57 np0005486808 nova_compute[259627]: 2025-10-14 09:06:57.368 2 DEBUG nova.compute.manager [req-8f090518-51c4-4f4e-8942-84c6cb1406d0 req-57c6fb81-5c16-4623-ac24-2cd6c68ed593 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Received event network-changed-93e6162a-d037-4440-9c8c-1cb9b293f249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:06:57 np0005486808 nova_compute[259627]: 2025-10-14 09:06:57.368 2 DEBUG nova.compute.manager [req-8f090518-51c4-4f4e-8942-84c6cb1406d0 req-57c6fb81-5c16-4623-ac24-2cd6c68ed593 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Refreshing instance network info cache due to event network-changed-93e6162a-d037-4440-9c8c-1cb9b293f249. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 05:06:57 np0005486808 nova_compute[259627]: 2025-10-14 09:06:57.369 2 DEBUG oslo_concurrency.lockutils [req-8f090518-51c4-4f4e-8942-84c6cb1406d0 req-57c6fb81-5c16-4623-ac24-2cd6c68ed593 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-c8d53ba7-c60f-4e5c-899f-fd95996ea742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 05:06:57 np0005486808 nova_compute[259627]: 2025-10-14 09:06:57.448 2 DEBUG nova.network.neutron [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 05:06:57 np0005486808 nova_compute[259627]: 2025-10-14 09:06:57.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:06:57 np0005486808 podman[325414]: 2025-10-14 09:06:57.643629187 +0000 UTC m=+0.098432135 container exec c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 14 05:06:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:06:57 np0005486808 podman[325414]: 2025-10-14 09:06:57.745337032 +0000 UTC m=+0.200139870 container exec_died c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:06:57 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1541: 305 pgs: 305 active+clean; 208 MiB data, 624 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 74 KiB/s wr, 45 op/s
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.164 2 DEBUG oslo_concurrency.lockutils [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "interface-2189eac5-238f-4f09-ae1c-1cf47c3b6030-df1ec4d8-f543-4899-9d98-b60a6a46cc7c" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.164 2 DEBUG oslo_concurrency.lockutils [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "interface-2189eac5-238f-4f09-ae1c-1cf47c3b6030-df1ec4d8-f543-4899-9d98-b60a6a46cc7c" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.186 2 DEBUG nova.objects.instance [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'flavor' on Instance uuid 2189eac5-238f-4f09-ae1c-1cf47c3b6030 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.211 2 DEBUG nova.virt.libvirt.vif [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:06:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2131193327',display_name='tempest-tempest.common.compute-instance-2131193327',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2131193327',id=59,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP68NH14S3HRbU1S6/VMc0nOUsEVOZ3tz6VZKHo1Z+OFyqqDVRUrwJaEU67IXpbnQXXmmP8KKv7jtqoNjZqCUWnJrm5ENg2LKrvLCxr2iywwS6vcmTe+HUgI45UTIKYhOw==',key_name='tempest-keypair-1916836153',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:06:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-dzeywdhl',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:06:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=2189eac5-238f-4f09-ae1c-1cf47c3b6030,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.211 2 DEBUG nova.network.os_vif_util [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.213 2 DEBUG nova.network.os_vif_util [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:db:60,bridge_name='br-int',has_traffic_filtering=True,id=df1ec4d8-f543-4899-9d98-b60a6a46cc7c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf1ec4d8-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.216 2 DEBUG nova.virt.libvirt.guest [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:38:db:60"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdf1ec4d8-f5"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.219 2 DEBUG nova.virt.libvirt.guest [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:38:db:60"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdf1ec4d8-f5"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.225 2 DEBUG nova.virt.libvirt.driver [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Attempting to detach device tapdf1ec4d8-f5 from instance 2189eac5-238f-4f09-ae1c-1cf47c3b6030 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.226 2 DEBUG nova.virt.libvirt.guest [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] detach device xml: <interface type="ethernet">
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <mac address="fa:16:3e:38:db:60"/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <model type="virtio"/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <mtu size="1442"/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <target dev="tapdf1ec4d8-f5"/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]: </interface>
Oct 14 05:06:58 np0005486808 nova_compute[259627]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.234 2 DEBUG nova.virt.libvirt.guest [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:38:db:60"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdf1ec4d8-f5"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.238 2 DEBUG nova.virt.libvirt.guest [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:38:db:60"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdf1ec4d8-f5"/></interface> not found in domain: <domain type='kvm' id='74'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <name>instance-0000003b</name>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <uuid>2189eac5-238f-4f09-ae1c-1cf47c3b6030</uuid>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <nova:name>tempest-tempest.common.compute-instance-2131193327</nova:name>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <nova:creationTime>2025-10-14 09:06:52</nova:creationTime>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <nova:flavor name="m1.nano">
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <nova:memory>128</nova:memory>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <nova:disk>1</nova:disk>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <nova:swap>0</nova:swap>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <nova:vcpus>1</nova:vcpus>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  </nova:flavor>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <nova:owner>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  </nova:owner>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <nova:ports>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <nova:port uuid="350a3bec-5dbd-4a83-8d80-5796be0319fd">
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <nova:port uuid="df1ec4d8-f543-4899-9d98-b60a6a46cc7c">
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  </nova:ports>
Oct 14 05:06:58 np0005486808 nova_compute[259627]: </nova:instance>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <memory unit='KiB'>131072</memory>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <vcpu placement='static'>1</vcpu>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <resource>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <partition>/machine</partition>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  </resource>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <sysinfo type='smbios'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <entry name='manufacturer'>RDO</entry>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <entry name='product'>OpenStack Compute</entry>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <entry name='serial'>2189eac5-238f-4f09-ae1c-1cf47c3b6030</entry>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <entry name='uuid'>2189eac5-238f-4f09-ae1c-1cf47c3b6030</entry>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <entry name='family'>Virtual Machine</entry>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <boot dev='hd'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <smbios mode='sysinfo'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <vmcoreinfo state='on'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <cpu mode='custom' match='exact' check='full'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <model fallback='forbid'>EPYC-Rome</model>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <vendor>AMD</vendor>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='require' name='x2apic'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='require' name='tsc-deadline'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='require' name='hypervisor'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='require' name='tsc_adjust'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='require' name='spec-ctrl'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='require' name='stibp'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='require' name='arch-capabilities'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='require' name='ssbd'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='require' name='cmp_legacy'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='require' name='overflow-recov'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='require' name='succor'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='require' name='ibrs'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='require' name='amd-ssbd'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='require' name='virt-ssbd'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='disable' name='lbrv'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='disable' name='tsc-scale'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='disable' name='vmcb-clean'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='disable' name='flushbyasid'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='disable' name='pause-filter'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='disable' name='pfthreshold'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='disable' name='svme-addr-chk'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='require' name='lfence-always-serializing'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='require' name='rdctl-no'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='require' name='mds-no'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='require' name='pschange-mc-no'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='require' name='gds-no'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='require' name='rfds-no'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='disable' name='xsaves'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='disable' name='svm'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='require' name='topoext'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='disable' name='npt'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='disable' name='nrip-save'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <clock offset='utc'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <timer name='pit' tickpolicy='delay'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <timer name='rtc' tickpolicy='catchup'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <timer name='hpet' present='no'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <on_poweroff>destroy</on_poweroff>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <on_reboot>restart</on_reboot>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <on_crash>destroy</on_crash>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <disk type='network' device='disk'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <driver name='qemu' type='raw' cache='none'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <auth username='openstack'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:        <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <source protocol='rbd' name='vms/2189eac5-238f-4f09-ae1c-1cf47c3b6030_disk' index='2'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:        <host name='192.168.122.100' port='6789'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target dev='vda' bus='virtio'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='virtio-disk0'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <disk type='network' device='cdrom'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <driver name='qemu' type='raw' cache='none'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <auth username='openstack'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:        <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <source protocol='rbd' name='vms/2189eac5-238f-4f09-ae1c-1cf47c3b6030_disk.config' index='1'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:        <host name='192.168.122.100' port='6789'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target dev='sda' bus='sata'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <readonly/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='sata0-0-0'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='0' model='pcie-root'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pcie.0'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='1' port='0x10'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.1'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='2' port='0x11'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.2'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='3' port='0x12'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.3'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='4' port='0x13'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.4'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='5' port='0x14'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.5'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='6' port='0x15'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.6'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='7' port='0x16'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.7'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='8' port='0x17'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.8'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='9' port='0x18'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.9'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='10' port='0x19'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.10'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='11' port='0x1a'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.11'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='12' port='0x1b'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.12'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='13' port='0x1c'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.13'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='14' port='0x1d'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.14'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='15' port='0x1e'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.15'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='16' port='0x1f'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.16'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='17' port='0x20'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.17'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='18' port='0x21'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.18'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='19' port='0x22'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.19'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='20' port='0x23'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.20'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='21' port='0x24'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.21'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='22' port='0x25'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.22'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='23' port='0x26'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.23'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='24' port='0x27'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.24'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='25' port='0x28'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.25'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-pci-bridge'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.26'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='usb'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='sata' index='0'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='ide'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <interface type='ethernet'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <mac address='fa:16:3e:9c:3f:ae'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target dev='tap350a3bec-5d'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model type='virtio'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <driver name='vhost' rx_queue_size='512'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <mtu size='1442'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='net0'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <interface type='ethernet'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <mac address='fa:16:3e:38:db:60'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target dev='tapdf1ec4d8-f5'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model type='virtio'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <driver name='vhost' rx_queue_size='512'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <mtu size='1442'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='net1'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <serial type='pty'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <source path='/dev/pts/2'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <log file='/var/lib/nova/instances/2189eac5-238f-4f09-ae1c-1cf47c3b6030/console.log' append='off'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target type='isa-serial' port='0'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:        <model name='isa-serial'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      </target>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='serial0'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <console type='pty' tty='/dev/pts/2'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <source path='/dev/pts/2'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <log file='/var/lib/nova/instances/2189eac5-238f-4f09-ae1c-1cf47c3b6030/console.log' append='off'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target type='serial' port='0'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='serial0'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </console>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <input type='tablet' bus='usb'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='input0'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='usb' bus='0' port='1'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <input type='mouse' bus='ps2'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='input1'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <input type='keyboard' bus='ps2'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='input2'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <listen type='address' address='::0'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </graphics>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <audio id='1' type='none'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model type='virtio' heads='1' primary='yes'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='video0'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <watchdog model='itco' action='reset'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='watchdog0'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </watchdog>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <memballoon model='virtio'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <stats period='10'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='balloon0'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <rng model='virtio'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <backend model='random'>/dev/urandom</backend>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='rng0'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <label>system_u:system_r:svirt_t:s0:c87,c674</label>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c87,c674</imagelabel>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  </seclabel>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <label>+107:+107</label>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <imagelabel>+107:+107</imagelabel>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  </seclabel>
Oct 14 05:06:58 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:06:58 np0005486808 nova_compute[259627]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.239 2 INFO nova.virt.libvirt.driver [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully detached device tapdf1ec4d8-f5 from instance 2189eac5-238f-4f09-ae1c-1cf47c3b6030 from the persistent domain config.#033[00m
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.239 2 DEBUG nova.virt.libvirt.driver [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] (1/8): Attempting to detach device tapdf1ec4d8-f5 with device alias net1 from instance 2189eac5-238f-4f09-ae1c-1cf47c3b6030 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.240 2 DEBUG nova.virt.libvirt.guest [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] detach device xml: <interface type="ethernet">
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <mac address="fa:16:3e:38:db:60"/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <model type="virtio"/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <mtu size="1442"/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <target dev="tapdf1ec4d8-f5"/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]: </interface>
Oct 14 05:06:58 np0005486808 nova_compute[259627]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 14 05:06:58 np0005486808 kernel: tapdf1ec4d8-f5 (unregistering): left promiscuous mode
Oct 14 05:06:58 np0005486808 NetworkManager[44885]: <info>  [1760432818.3459] device (tapdf1ec4d8-f5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:58 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:58Z|00633|binding|INFO|Releasing lport df1ec4d8-f543-4899-9d98-b60a6a46cc7c from this chassis (sb_readonly=0)
Oct 14 05:06:58 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:58Z|00634|binding|INFO|Setting lport df1ec4d8-f543-4899-9d98-b60a6a46cc7c down in Southbound
Oct 14 05:06:58 np0005486808 ovn_controller[152662]: 2025-10-14T09:06:58Z|00635|binding|INFO|Removing iface tapdf1ec4d8-f5 ovn-installed in OVS
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.370 2 DEBUG nova.virt.libvirt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Received event <DeviceRemovedEvent: 1760432818.3700905, 2189eac5-238f-4f09-ae1c-1cf47c3b6030 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.371 2 DEBUG nova.virt.libvirt.driver [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Start waiting for the detach event from libvirt for device tapdf1ec4d8-f5 with device alias net1 for instance 2189eac5-238f-4f09-ae1c-1cf47c3b6030 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.372 2 DEBUG nova.virt.libvirt.guest [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:38:db:60"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdf1ec4d8-f5"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 14 05:06:58 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:58.374 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:db:60 10.100.0.12'], port_security=['fa:16:3e:38:db:60 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-278583686', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '2189eac5-238f-4f09-ae1c-1cf47c3b6030', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2d149f-aebf-406a-aed2-5161dd22b079', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-278583686', 'neutron:project_id': '8945d892653642ac9c3d894f8bc3619d', 'neutron:revision_number': '9', 'neutron:security_group_ids': '643bc47b-384e-4818-a06b-8ecfda630b09', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e933e805-d9b6-497a-9ba3-5f7153390420, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=df1ec4d8-f543-4899-9d98-b60a6a46cc7c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:06:58 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:58.375 162547 INFO neutron.agent.ovn.metadata.agent [-] Port df1ec4d8-f543-4899-9d98-b60a6a46cc7c in datapath fc2d149f-aebf-406a-aed2-5161dd22b079 unbound from our chassis#033[00m
Oct 14 05:06:58 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:58.378 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc2d149f-aebf-406a-aed2-5161dd22b079#033[00m
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.378 2 DEBUG nova.virt.libvirt.guest [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:38:db:60"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdf1ec4d8-f5"/></interface>not found in domain: <domain type='kvm' id='74'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <name>instance-0000003b</name>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <uuid>2189eac5-238f-4f09-ae1c-1cf47c3b6030</uuid>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <nova:name>tempest-tempest.common.compute-instance-2131193327</nova:name>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <nova:creationTime>2025-10-14 09:06:52</nova:creationTime>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <nova:flavor name="m1.nano">
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <nova:memory>128</nova:memory>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <nova:disk>1</nova:disk>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <nova:swap>0</nova:swap>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <nova:vcpus>1</nova:vcpus>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  </nova:flavor>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <nova:owner>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  </nova:owner>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <nova:ports>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <nova:port uuid="350a3bec-5dbd-4a83-8d80-5796be0319fd">
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <nova:port uuid="df1ec4d8-f543-4899-9d98-b60a6a46cc7c">
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  </nova:ports>
Oct 14 05:06:58 np0005486808 nova_compute[259627]: </nova:instance>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <memory unit='KiB'>131072</memory>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <vcpu placement='static'>1</vcpu>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <resource>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <partition>/machine</partition>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  </resource>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <sysinfo type='smbios'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <entry name='manufacturer'>RDO</entry>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <entry name='product'>OpenStack Compute</entry>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <entry name='serial'>2189eac5-238f-4f09-ae1c-1cf47c3b6030</entry>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <entry name='uuid'>2189eac5-238f-4f09-ae1c-1cf47c3b6030</entry>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <entry name='family'>Virtual Machine</entry>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <boot dev='hd'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <smbios mode='sysinfo'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <vmcoreinfo state='on'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <cpu mode='custom' match='exact' check='full'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <model fallback='forbid'>EPYC-Rome</model>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <vendor>AMD</vendor>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='require' name='x2apic'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='require' name='tsc-deadline'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='require' name='hypervisor'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='require' name='tsc_adjust'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='require' name='spec-ctrl'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='require' name='stibp'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='require' name='arch-capabilities'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='require' name='ssbd'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='require' name='cmp_legacy'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='require' name='overflow-recov'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='require' name='succor'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='require' name='ibrs'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='require' name='amd-ssbd'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='require' name='virt-ssbd'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='disable' name='lbrv'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='disable' name='tsc-scale'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='disable' name='vmcb-clean'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='disable' name='flushbyasid'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='disable' name='pause-filter'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='disable' name='pfthreshold'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='disable' name='svme-addr-chk'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='require' name='lfence-always-serializing'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='require' name='rdctl-no'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='require' name='mds-no'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='require' name='pschange-mc-no'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='require' name='gds-no'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='require' name='rfds-no'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='disable' name='xsaves'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='disable' name='svm'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='require' name='topoext'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='disable' name='npt'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <feature policy='disable' name='nrip-save'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <clock offset='utc'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <timer name='pit' tickpolicy='delay'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <timer name='rtc' tickpolicy='catchup'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <timer name='hpet' present='no'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <on_poweroff>destroy</on_poweroff>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <on_reboot>restart</on_reboot>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <on_crash>destroy</on_crash>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <disk type='network' device='disk'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <driver name='qemu' type='raw' cache='none'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <auth username='openstack'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:        <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <source protocol='rbd' name='vms/2189eac5-238f-4f09-ae1c-1cf47c3b6030_disk' index='2'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:        <host name='192.168.122.100' port='6789'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target dev='vda' bus='virtio'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='virtio-disk0'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <disk type='network' device='cdrom'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <driver name='qemu' type='raw' cache='none'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <auth username='openstack'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:        <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <source protocol='rbd' name='vms/2189eac5-238f-4f09-ae1c-1cf47c3b6030_disk.config' index='1'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:        <host name='192.168.122.100' port='6789'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target dev='sda' bus='sata'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <readonly/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='sata0-0-0'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='0' model='pcie-root'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pcie.0'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='1' port='0x10'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.1'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='2' port='0x11'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.2'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='3' port='0x12'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.3'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='4' port='0x13'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.4'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='5' port='0x14'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.5'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='6' port='0x15'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.6'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='7' port='0x16'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.7'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='8' port='0x17'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.8'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='9' port='0x18'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.9'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='10' port='0x19'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.10'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='11' port='0x1a'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.11'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='12' port='0x1b'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.12'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='13' port='0x1c'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.13'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='14' port='0x1d'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.14'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='15' port='0x1e'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.15'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='16' port='0x1f'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.16'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='17' port='0x20'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.17'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='18' port='0x21'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.18'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='19' port='0x22'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.19'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='20' port='0x23'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.20'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='21' port='0x24'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.21'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='22' port='0x25'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.22'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='23' port='0x26'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.23'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='24' port='0x27'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.24'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target chassis='25' port='0x28'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.25'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model name='pcie-pci-bridge'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='pci.26'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='usb'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <controller type='sata' index='0'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='ide'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <interface type='ethernet'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <mac address='fa:16:3e:9c:3f:ae'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target dev='tap350a3bec-5d'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model type='virtio'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <driver name='vhost' rx_queue_size='512'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <mtu size='1442'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='net0'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <serial type='pty'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <source path='/dev/pts/2'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <log file='/var/lib/nova/instances/2189eac5-238f-4f09-ae1c-1cf47c3b6030/console.log' append='off'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target type='isa-serial' port='0'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:        <model name='isa-serial'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      </target>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='serial0'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <console type='pty' tty='/dev/pts/2'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <source path='/dev/pts/2'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <log file='/var/lib/nova/instances/2189eac5-238f-4f09-ae1c-1cf47c3b6030/console.log' append='off'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <target type='serial' port='0'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='serial0'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </console>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <input type='tablet' bus='usb'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='input0'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='usb' bus='0' port='1'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <input type='mouse' bus='ps2'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='input1'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <input type='keyboard' bus='ps2'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='input2'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <listen type='address' address='::0'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </graphics>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <audio id='1' type='none'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <model type='virtio' heads='1' primary='yes'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='video0'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <watchdog model='itco' action='reset'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='watchdog0'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </watchdog>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <memballoon model='virtio'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <stats period='10'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='balloon0'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <rng model='virtio'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <backend model='random'>/dev/urandom</backend>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <alias name='rng0'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <label>system_u:system_r:svirt_t:s0:c87,c674</label>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c87,c674</imagelabel>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  </seclabel>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <label>+107:+107</label>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <imagelabel>+107:+107</imagelabel>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  </seclabel>
Oct 14 05:06:58 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:06:58 np0005486808 nova_compute[259627]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.379 2 INFO nova.virt.libvirt.driver [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully detached device tapdf1ec4d8-f5 from instance 2189eac5-238f-4f09-ae1c-1cf47c3b6030 from the live domain config.#033[00m
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.380 2 DEBUG nova.virt.libvirt.vif [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:06:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2131193327',display_name='tempest-tempest.common.compute-instance-2131193327',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2131193327',id=59,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP68NH14S3HRbU1S6/VMc0nOUsEVOZ3tz6VZKHo1Z+OFyqqDVRUrwJaEU67IXpbnQXXmmP8KKv7jtqoNjZqCUWnJrm5ENg2LKrvLCxr2iywwS6vcmTe+HUgI45UTIKYhOw==',key_name='tempest-keypair-1916836153',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:06:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-dzeywdhl',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:06:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=2189eac5-238f-4f09-ae1c-1cf47c3b6030,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.380 2 DEBUG nova.network.os_vif_util [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.381 2 DEBUG nova.network.os_vif_util [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:db:60,bridge_name='br-int',has_traffic_filtering=True,id=df1ec4d8-f543-4899-9d98-b60a6a46cc7c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf1ec4d8-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.382 2 DEBUG os_vif [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:db:60,bridge_name='br-int',has_traffic_filtering=True,id=df1ec4d8-f543-4899-9d98-b60a6a46cc7c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf1ec4d8-f5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.385 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf1ec4d8-f5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.395 2 INFO os_vif [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:db:60,bridge_name='br-int',has_traffic_filtering=True,id=df1ec4d8-f543-4899-9d98-b60a6a46cc7c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf1ec4d8-f5')
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.396 2 DEBUG nova.virt.libvirt.guest [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <nova:name>tempest-tempest.common.compute-instance-2131193327</nova:name>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <nova:creationTime>2025-10-14 09:06:58</nova:creationTime>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <nova:flavor name="m1.nano">
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <nova:memory>128</nova:memory>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <nova:disk>1</nova:disk>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <nova:swap>0</nova:swap>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <nova:vcpus>1</nova:vcpus>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  </nova:flavor>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <nova:owner>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <nova:user uuid="8b0f0f2991214b9caed6f475a149c342">tempest-AttachInterfacesTestJSON-193524374-project-member</nova:user>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <nova:project uuid="8945d892653642ac9c3d894f8bc3619d">tempest-AttachInterfacesTestJSON-193524374</nova:project>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  </nova:owner>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  <nova:ports>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    <nova:port uuid="350a3bec-5dbd-4a83-8d80-5796be0319fd">
Oct 14 05:06:58 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:06:58 np0005486808 nova_compute[259627]:  </nova:ports>
Oct 14 05:06:58 np0005486808 nova_compute[259627]: </nova:instance>
Oct 14 05:06:58 np0005486808 nova_compute[259627]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 14 05:06:58 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:58.398 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b9546b44-9c1f-4c4f-90ca-df6cf57044d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:06:58 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:58.433 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f39c19ef-88a4-4cb5-aa62-fe11bf116c4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:06:58 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:58.436 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5c4bac93-517a-4bac-999f-61203d97480a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:06:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:06:58 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:06:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:06:58 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:58.468 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ae564788-0044-4918-90ae-0c8860a71f4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:06:58 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:06:58 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:58.483 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[84cac4af-f6c4-414d-a099-68d42e6f7375]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc2d149f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:e7:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 1000, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 1000, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 654854, 'reachable_time': 41314, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325585, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:58 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:58.497 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[18bcfa63-2bce-4379-8d37-a09084ecc168]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 654865, 'tstamp': 654865}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325592, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 654867, 'tstamp': 654867}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325592, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:06:58 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:58.499 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc2d149f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:06:58 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:58.502 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc2d149f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:06:58 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:58.502 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 05:06:58 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:58.502 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc2d149f-a0, col_values=(('external_ids', {'iface-id': '156432ee-35b5-40a9-aded-8066933d9972'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:06:58 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:06:58.503 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.584 2 DEBUG nova.network.neutron [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Updating instance_info_cache with network_info: [{"id": "93e6162a-d037-4440-9c8c-1cb9b293f249", "address": "fa:16:3e:13:f7:ed", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93e6162a-d0", "ovs_interfaceid": "93e6162a-d037-4440-9c8c-1cb9b293f249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.600 2 DEBUG oslo_concurrency.lockutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Releasing lock "refresh_cache-c8d53ba7-c60f-4e5c-899f-fd95996ea742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.601 2 DEBUG nova.compute.manager [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Instance network_info: |[{"id": "93e6162a-d037-4440-9c8c-1cb9b293f249", "address": "fa:16:3e:13:f7:ed", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93e6162a-d0", "ovs_interfaceid": "93e6162a-d037-4440-9c8c-1cb9b293f249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.603 2 DEBUG oslo_concurrency.lockutils [req-8f090518-51c4-4f4e-8942-84c6cb1406d0 req-57c6fb81-5c16-4623-ac24-2cd6c68ed593 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-c8d53ba7-c60f-4e5c-899f-fd95996ea742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.604 2 DEBUG nova.network.neutron [req-8f090518-51c4-4f4e-8942-84c6cb1406d0 req-57c6fb81-5c16-4623-ac24-2cd6c68ed593 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Refreshing network info cache for port 93e6162a-d037-4440-9c8c-1cb9b293f249 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.611 2 DEBUG nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Start _get_guest_xml network_info=[{"id": "93e6162a-d037-4440-9c8c-1cb9b293f249", "address": "fa:16:3e:13:f7:ed", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93e6162a-d0", "ovs_interfaceid": "93e6162a-d037-4440-9c8c-1cb9b293f249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.624 2 WARNING nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.631 2 DEBUG nova.virt.libvirt.host [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.633 2 DEBUG nova.virt.libvirt.host [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.642 2 DEBUG nova.virt.libvirt.host [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.644 2 DEBUG nova.virt.libvirt.host [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.645 2 DEBUG nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.645 2 DEBUG nova.virt.hardware [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.646 2 DEBUG nova.virt.hardware [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.646 2 DEBUG nova.virt.hardware [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.647 2 DEBUG nova.virt.hardware [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.647 2 DEBUG nova.virt.hardware [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.648 2 DEBUG nova.virt.hardware [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.648 2 DEBUG nova.virt.hardware [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.648 2 DEBUG nova.virt.hardware [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.649 2 DEBUG nova.virt.hardware [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.649 2 DEBUG nova.virt.hardware [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.649 2 DEBUG nova.virt.hardware [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 05:06:58 np0005486808 nova_compute[259627]: 2025-10-14 09:06:58.654 2 DEBUG oslo_concurrency.processutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:06:59 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:06:59 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1543137327' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.092 2 DEBUG oslo_concurrency.processutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.119 2 DEBUG nova.storage.rbd_utils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image c8d53ba7-c60f-4e5c-899f-fd95996ea742_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.122 2 DEBUG oslo_concurrency.processutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.201 2 DEBUG nova.compute.manager [req-f540840b-3cd2-4f71-9b5a-ffb12345f989 req-3378f23e-2e7c-438e-b55b-fe937f886478 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Received event network-vif-plugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.201 2 DEBUG oslo_concurrency.lockutils [req-f540840b-3cd2-4f71-9b5a-ffb12345f989 req-3378f23e-2e7c-438e-b55b-fe937f886478 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.202 2 DEBUG oslo_concurrency.lockutils [req-f540840b-3cd2-4f71-9b5a-ffb12345f989 req-3378f23e-2e7c-438e-b55b-fe937f886478 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.202 2 DEBUG oslo_concurrency.lockutils [req-f540840b-3cd2-4f71-9b5a-ffb12345f989 req-3378f23e-2e7c-438e-b55b-fe937f886478 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.202 2 DEBUG nova.compute.manager [req-f540840b-3cd2-4f71-9b5a-ffb12345f989 req-3378f23e-2e7c-438e-b55b-fe937f886478 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] No waiting events found dispatching network-vif-plugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.203 2 WARNING nova.compute.manager [req-f540840b-3cd2-4f71-9b5a-ffb12345f989 req-3378f23e-2e7c-438e-b55b-fe937f886478 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Received unexpected event network-vif-plugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c for instance with vm_state active and task_state None.
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.203 2 DEBUG nova.compute.manager [req-f540840b-3cd2-4f71-9b5a-ffb12345f989 req-3378f23e-2e7c-438e-b55b-fe937f886478 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Received event network-vif-unplugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.203 2 DEBUG oslo_concurrency.lockutils [req-f540840b-3cd2-4f71-9b5a-ffb12345f989 req-3378f23e-2e7c-438e-b55b-fe937f886478 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.203 2 DEBUG oslo_concurrency.lockutils [req-f540840b-3cd2-4f71-9b5a-ffb12345f989 req-3378f23e-2e7c-438e-b55b-fe937f886478 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.203 2 DEBUG oslo_concurrency.lockutils [req-f540840b-3cd2-4f71-9b5a-ffb12345f989 req-3378f23e-2e7c-438e-b55b-fe937f886478 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.204 2 DEBUG nova.compute.manager [req-f540840b-3cd2-4f71-9b5a-ffb12345f989 req-3378f23e-2e7c-438e-b55b-fe937f886478 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] No waiting events found dispatching network-vif-unplugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.204 2 WARNING nova.compute.manager [req-f540840b-3cd2-4f71-9b5a-ffb12345f989 req-3378f23e-2e7c-438e-b55b-fe937f886478 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Received unexpected event network-vif-unplugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c for instance with vm_state active and task_state None.#033[00m
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.204 2 DEBUG nova.compute.manager [req-f540840b-3cd2-4f71-9b5a-ffb12345f989 req-3378f23e-2e7c-438e-b55b-fe937f886478 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Received event network-vif-plugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.204 2 DEBUG oslo_concurrency.lockutils [req-f540840b-3cd2-4f71-9b5a-ffb12345f989 req-3378f23e-2e7c-438e-b55b-fe937f886478 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.205 2 DEBUG oslo_concurrency.lockutils [req-f540840b-3cd2-4f71-9b5a-ffb12345f989 req-3378f23e-2e7c-438e-b55b-fe937f886478 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.205 2 DEBUG oslo_concurrency.lockutils [req-f540840b-3cd2-4f71-9b5a-ffb12345f989 req-3378f23e-2e7c-438e-b55b-fe937f886478 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.205 2 DEBUG nova.compute.manager [req-f540840b-3cd2-4f71-9b5a-ffb12345f989 req-3378f23e-2e7c-438e-b55b-fe937f886478 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] No waiting events found dispatching network-vif-plugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.205 2 WARNING nova.compute.manager [req-f540840b-3cd2-4f71-9b5a-ffb12345f989 req-3378f23e-2e7c-438e-b55b-fe937f886478 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Received unexpected event network-vif-plugged-df1ec4d8-f543-4899-9d98-b60a6a46cc7c for instance with vm_state active and task_state None.#033[00m
Oct 14 05:06:59 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:06:59 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:06:59 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:06:59 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:06:59 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:06:59 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:06:59 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 88bd820f-451c-4a7f-aa33-881c67a4960b does not exist
Oct 14 05:06:59 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 31fe626f-6c82-4b46-a17d-13b4d0ea4349 does not exist
Oct 14 05:06:59 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev fb9b39aa-4b7c-4ac2-b09a-7718b7f44dcd does not exist
Oct 14 05:06:59 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:06:59 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:06:59 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:06:59 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:06:59 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:06:59 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:06:59 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:06:59 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:06:59 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:06:59 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:06:59 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:06:59 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:06:59 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4204475229' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.621 2 DEBUG oslo_concurrency.processutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.625 2 DEBUG nova.virt.libvirt.vif [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:06:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1969379492',display_name='tempest-DeleteServersTestJSON-server-1969379492',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1969379492',id=63,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d39581efff7d48fb83412ca1f615d412',ramdisk_id='',reservation_id='r-399cflte',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-555285866',owner_user_name='tempest-DeleteServersTestJSON-555285866-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:06:55Z,user_data=None,user_id='a72439ec330b476ca4bb358682159b61',uuid=c8d53ba7-c60f-4e5c-899f-fd95996ea742,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "93e6162a-d037-4440-9c8c-1cb9b293f249", "address": "fa:16:3e:13:f7:ed", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93e6162a-d0", "ovs_interfaceid": "93e6162a-d037-4440-9c8c-1cb9b293f249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.626 2 DEBUG nova.network.os_vif_util [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converting VIF {"id": "93e6162a-d037-4440-9c8c-1cb9b293f249", "address": "fa:16:3e:13:f7:ed", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93e6162a-d0", "ovs_interfaceid": "93e6162a-d037-4440-9c8c-1cb9b293f249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.628 2 DEBUG nova.network.os_vif_util [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:f7:ed,bridge_name='br-int',has_traffic_filtering=True,id=93e6162a-d037-4440-9c8c-1cb9b293f249,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93e6162a-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.631 2 DEBUG nova.objects.instance [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lazy-loading 'pci_devices' on Instance uuid c8d53ba7-c60f-4e5c-899f-fd95996ea742 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.655 2 DEBUG nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:06:59 np0005486808 nova_compute[259627]:  <uuid>c8d53ba7-c60f-4e5c-899f-fd95996ea742</uuid>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:  <name>instance-0000003f</name>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:06:59 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:      <nova:name>tempest-DeleteServersTestJSON-server-1969379492</nova:name>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:06:58</nova:creationTime>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:06:59 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:        <nova:user uuid="a72439ec330b476ca4bb358682159b61">tempest-DeleteServersTestJSON-555285866-project-member</nova:user>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:        <nova:project uuid="d39581efff7d48fb83412ca1f615d412">tempest-DeleteServersTestJSON-555285866</nova:project>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:        <nova:port uuid="93e6162a-d037-4440-9c8c-1cb9b293f249">
Oct 14 05:06:59 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:      <entry name="serial">c8d53ba7-c60f-4e5c-899f-fd95996ea742</entry>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:      <entry name="uuid">c8d53ba7-c60f-4e5c-899f-fd95996ea742</entry>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:06:59 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/c8d53ba7-c60f-4e5c-899f-fd95996ea742_disk">
Oct 14 05:06:59 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:06:59 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:06:59 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/c8d53ba7-c60f-4e5c-899f-fd95996ea742_disk.config">
Oct 14 05:06:59 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:06:59 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:06:59 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:13:f7:ed"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:      <target dev="tap93e6162a-d0"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:06:59 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/c8d53ba7-c60f-4e5c-899f-fd95996ea742/console.log" append="off"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:06:59 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:06:59 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:06:59 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:06:59 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:06:59 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.656 2 DEBUG nova.compute.manager [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Preparing to wait for external event network-vif-plugged-93e6162a-d037-4440-9c8c-1cb9b293f249 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.657 2 DEBUG oslo_concurrency.lockutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "c8d53ba7-c60f-4e5c-899f-fd95996ea742-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.657 2 DEBUG oslo_concurrency.lockutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "c8d53ba7-c60f-4e5c-899f-fd95996ea742-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.658 2 DEBUG oslo_concurrency.lockutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "c8d53ba7-c60f-4e5c-899f-fd95996ea742-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.660 2 DEBUG nova.virt.libvirt.vif [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:06:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1969379492',display_name='tempest-DeleteServersTestJSON-server-1969379492',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1969379492',id=63,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d39581efff7d48fb83412ca1f615d412',ramdisk_id='',reservation_id='r-399cflte',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-555285866',owner_user_name='tempest-DeleteServersTestJSON-555285866-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:06:55Z,user_data=None,user_id='a72439ec330b476ca4bb358682159b61',uuid=c8d53ba7-c60f-4e5c-899f-fd95996ea742,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "93e6162a-d037-4440-9c8c-1cb9b293f249", "address": "fa:16:3e:13:f7:ed", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93e6162a-d0", "ovs_interfaceid": "93e6162a-d037-4440-9c8c-1cb9b293f249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.660 2 DEBUG nova.network.os_vif_util [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converting VIF {"id": "93e6162a-d037-4440-9c8c-1cb9b293f249", "address": "fa:16:3e:13:f7:ed", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93e6162a-d0", "ovs_interfaceid": "93e6162a-d037-4440-9c8c-1cb9b293f249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.661 2 DEBUG nova.network.os_vif_util [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:f7:ed,bridge_name='br-int',has_traffic_filtering=True,id=93e6162a-d037-4440-9c8c-1cb9b293f249,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93e6162a-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.662 2 DEBUG os_vif [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:f7:ed,bridge_name='br-int',has_traffic_filtering=True,id=93e6162a-d037-4440-9c8c-1cb9b293f249,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93e6162a-d0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.664 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.665 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.677 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap93e6162a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.678 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap93e6162a-d0, col_values=(('external_ids', {'iface-id': '93e6162a-d037-4440-9c8c-1cb9b293f249', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:13:f7:ed', 'vm-uuid': 'c8d53ba7-c60f-4e5c-899f-fd95996ea742'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:06:59 np0005486808 NetworkManager[44885]: <info>  [1760432819.6821] manager: (tap93e6162a-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/269)
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.692 2 INFO os_vif [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:f7:ed,bridge_name='br-int',has_traffic_filtering=True,id=93e6162a-d037-4440-9c8c-1cb9b293f249,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93e6162a-d0')#033[00m
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.751 2 DEBUG nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.752 2 DEBUG nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.752 2 DEBUG nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] No VIF found with MAC fa:16:3e:13:f7:ed, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.753 2 INFO nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Using config drive#033[00m
Oct 14 05:06:59 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1542: 305 pgs: 305 active+clean; 208 MiB data, 624 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 74 KiB/s wr, 45 op/s
Oct 14 05:06:59 np0005486808 nova_compute[259627]: 2025-10-14 09:06:59.787 2 DEBUG nova.storage.rbd_utils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image c8d53ba7-c60f-4e5c-899f-fd95996ea742_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:07:00 np0005486808 nova_compute[259627]: 2025-10-14 09:07:00.116 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432805.115147, 1ce7a863-d0bf-4ea3-80f5-18675b16ac93 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:07:00 np0005486808 nova_compute[259627]: 2025-10-14 09:07:00.117 2 INFO nova.compute.manager [-] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:07:00 np0005486808 nova_compute[259627]: 2025-10-14 09:07:00.145 2 DEBUG nova.compute.manager [None req-26f19152-467c-4624-8edd-49861307e52e - - - - - -] [instance: 1ce7a863-d0bf-4ea3-80f5-18675b16ac93] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:07:00 np0005486808 podman[325942]: 2025-10-14 09:07:00.225431638 +0000 UTC m=+0.060040749 container create 2c8420a61a5cbd7a903d27c33fbd380b0c68b3d1de48de4f314188da7b121fe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_snyder, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:07:00 np0005486808 systemd[1]: Started libpod-conmon-2c8420a61a5cbd7a903d27c33fbd380b0c68b3d1de48de4f314188da7b121fe3.scope.
Oct 14 05:07:00 np0005486808 podman[325942]: 2025-10-14 09:07:00.205158499 +0000 UTC m=+0.039767640 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:07:00 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:07:00 np0005486808 podman[325942]: 2025-10-14 09:07:00.333422848 +0000 UTC m=+0.168032029 container init 2c8420a61a5cbd7a903d27c33fbd380b0c68b3d1de48de4f314188da7b121fe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_snyder, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 14 05:07:00 np0005486808 podman[325942]: 2025-10-14 09:07:00.346609213 +0000 UTC m=+0.181218314 container start 2c8420a61a5cbd7a903d27c33fbd380b0c68b3d1de48de4f314188da7b121fe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_snyder, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 14 05:07:00 np0005486808 podman[325942]: 2025-10-14 09:07:00.35015382 +0000 UTC m=+0.184762981 container attach 2c8420a61a5cbd7a903d27c33fbd380b0c68b3d1de48de4f314188da7b121fe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:07:00 np0005486808 condescending_snyder[325956]: 167 167
Oct 14 05:07:00 np0005486808 systemd[1]: libpod-2c8420a61a5cbd7a903d27c33fbd380b0c68b3d1de48de4f314188da7b121fe3.scope: Deactivated successfully.
Oct 14 05:07:00 np0005486808 podman[325942]: 2025-10-14 09:07:00.35542119 +0000 UTC m=+0.190030331 container died 2c8420a61a5cbd7a903d27c33fbd380b0c68b3d1de48de4f314188da7b121fe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:07:00 np0005486808 systemd[1]: var-lib-containers-storage-overlay-ca784fa99719cf0a1715223159a56a3be4c3e34e6880899df326d954924e1da7-merged.mount: Deactivated successfully.
Oct 14 05:07:00 np0005486808 podman[325942]: 2025-10-14 09:07:00.412617938 +0000 UTC m=+0.247227049 container remove 2c8420a61a5cbd7a903d27c33fbd380b0c68b3d1de48de4f314188da7b121fe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 14 05:07:00 np0005486808 systemd[1]: libpod-conmon-2c8420a61a5cbd7a903d27c33fbd380b0c68b3d1de48de4f314188da7b121fe3.scope: Deactivated successfully.
Oct 14 05:07:00 np0005486808 podman[325981]: 2025-10-14 09:07:00.616221322 +0000 UTC m=+0.044767743 container create 4422c108f94c6a1d0de0424482aa008c0f795a02dbab7191df66f73ef5c2d9d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_black, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 14 05:07:00 np0005486808 podman[325981]: 2025-10-14 09:07:00.595859921 +0000 UTC m=+0.024406312 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:07:00 np0005486808 systemd[1]: Started libpod-conmon-4422c108f94c6a1d0de0424482aa008c0f795a02dbab7191df66f73ef5c2d9d3.scope.
Oct 14 05:07:00 np0005486808 nova_compute[259627]: 2025-10-14 09:07:00.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:00 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:07:00 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3489470a31c6ea86aa2b03cf10cd0a194066e3e54c47c38781a7a9fd32824514/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:07:00 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3489470a31c6ea86aa2b03cf10cd0a194066e3e54c47c38781a7a9fd32824514/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:07:00 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3489470a31c6ea86aa2b03cf10cd0a194066e3e54c47c38781a7a9fd32824514/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:07:00 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3489470a31c6ea86aa2b03cf10cd0a194066e3e54c47c38781a7a9fd32824514/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:07:00 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3489470a31c6ea86aa2b03cf10cd0a194066e3e54c47c38781a7a9fd32824514/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:07:00 np0005486808 nova_compute[259627]: 2025-10-14 09:07:00.740 2 DEBUG nova.network.neutron [req-8f090518-51c4-4f4e-8942-84c6cb1406d0 req-57c6fb81-5c16-4623-ac24-2cd6c68ed593 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Updated VIF entry in instance network info cache for port 93e6162a-d037-4440-9c8c-1cb9b293f249. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:07:00 np0005486808 nova_compute[259627]: 2025-10-14 09:07:00.740 2 DEBUG nova.network.neutron [req-8f090518-51c4-4f4e-8942-84c6cb1406d0 req-57c6fb81-5c16-4623-ac24-2cd6c68ed593 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Updating instance_info_cache with network_info: [{"id": "93e6162a-d037-4440-9c8c-1cb9b293f249", "address": "fa:16:3e:13:f7:ed", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93e6162a-d0", "ovs_interfaceid": "93e6162a-d037-4440-9c8c-1cb9b293f249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:07:00 np0005486808 podman[325981]: 2025-10-14 09:07:00.743964828 +0000 UTC m=+0.172511229 container init 4422c108f94c6a1d0de0424482aa008c0f795a02dbab7191df66f73ef5c2d9d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_black, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Oct 14 05:07:00 np0005486808 podman[325981]: 2025-10-14 09:07:00.753466852 +0000 UTC m=+0.182013233 container start 4422c108f94c6a1d0de0424482aa008c0f795a02dbab7191df66f73ef5c2d9d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:07:00 np0005486808 podman[325981]: 2025-10-14 09:07:00.757370738 +0000 UTC m=+0.185917149 container attach 4422c108f94c6a1d0de0424482aa008c0f795a02dbab7191df66f73ef5c2d9d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 14 05:07:00 np0005486808 nova_compute[259627]: 2025-10-14 09:07:00.780 2 DEBUG oslo_concurrency.lockutils [req-8f090518-51c4-4f4e-8942-84c6cb1406d0 req-57c6fb81-5c16-4623-ac24-2cd6c68ed593 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-c8d53ba7-c60f-4e5c-899f-fd95996ea742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:07:00 np0005486808 nova_compute[259627]: 2025-10-14 09:07:00.796 2 DEBUG oslo_concurrency.lockutils [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:07:00 np0005486808 nova_compute[259627]: 2025-10-14 09:07:00.797 2 DEBUG oslo_concurrency.lockutils [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquired lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:07:00 np0005486808 nova_compute[259627]: 2025-10-14 09:07:00.797 2 DEBUG nova.network.neutron [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:07:00 np0005486808 nova_compute[259627]: 2025-10-14 09:07:00.997 2 INFO nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Creating config drive at /var/lib/nova/instances/c8d53ba7-c60f-4e5c-899f-fd95996ea742/disk.config#033[00m
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.007 2 DEBUG oslo_concurrency.processutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c8d53ba7-c60f-4e5c-899f-fd95996ea742/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn7kes8fn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.157 2 DEBUG oslo_concurrency.processutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c8d53ba7-c60f-4e5c-899f-fd95996ea742/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn7kes8fn" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.184 2 DEBUG nova.storage.rbd_utils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image c8d53ba7-c60f-4e5c-899f-fd95996ea742_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.188 2 DEBUG oslo_concurrency.processutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c8d53ba7-c60f-4e5c-899f-fd95996ea742/disk.config c8d53ba7-c60f-4e5c-899f-fd95996ea742_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.242 2 DEBUG oslo_concurrency.lockutils [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.243 2 DEBUG oslo_concurrency.lockutils [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.243 2 DEBUG oslo_concurrency.lockutils [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.243 2 DEBUG oslo_concurrency.lockutils [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.243 2 DEBUG oslo_concurrency.lockutils [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.244 2 INFO nova.compute.manager [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Terminating instance#033[00m
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.245 2 DEBUG nova.compute.manager [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:07:01 np0005486808 kernel: tap350a3bec-5d (unregistering): left promiscuous mode
Oct 14 05:07:01 np0005486808 NetworkManager[44885]: <info>  [1760432821.3051] device (tap350a3bec-5d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:07:01 np0005486808 ovn_controller[152662]: 2025-10-14T09:07:01Z|00636|binding|INFO|Releasing lport 350a3bec-5dbd-4a83-8d80-5796be0319fd from this chassis (sb_readonly=0)
Oct 14 05:07:01 np0005486808 ovn_controller[152662]: 2025-10-14T09:07:01Z|00637|binding|INFO|Setting lport 350a3bec-5dbd-4a83-8d80-5796be0319fd down in Southbound
Oct 14 05:07:01 np0005486808 ovn_controller[152662]: 2025-10-14T09:07:01Z|00638|binding|INFO|Removing iface tap350a3bec-5d ovn-installed in OVS
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.331 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:3f:ae 10.100.0.6'], port_security=['fa:16:3e:9c:3f:ae 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2189eac5-238f-4f09-ae1c-1cf47c3b6030', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2d149f-aebf-406a-aed2-5161dd22b079', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8945d892653642ac9c3d894f8bc3619d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e6192a40-cbd8-43eb-9955-4fede99ddb79', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.247'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e933e805-d9b6-497a-9ba3-5f7153390420, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=350a3bec-5dbd-4a83-8d80-5796be0319fd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.333 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 350a3bec-5dbd-4a83-8d80-5796be0319fd in datapath fc2d149f-aebf-406a-aed2-5161dd22b079 unbound from our chassis#033[00m
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.336 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc2d149f-aebf-406a-aed2-5161dd22b079#033[00m
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.359 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4d761962-34e4-4cdd-b30b-b650dfe2e7b1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:01 np0005486808 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d0000003b.scope: Deactivated successfully.
Oct 14 05:07:01 np0005486808 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d0000003b.scope: Consumed 14.114s CPU time.
Oct 14 05:07:01 np0005486808 systemd-machined[214636]: Machine qemu-74-instance-0000003b terminated.
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.418 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[6a5abc18-24eb-4296-8a3b-a60bdf934b40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.423 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c160f1ef-84f8-49b8-821a-028e5b989f7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.427 2 DEBUG oslo_concurrency.processutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c8d53ba7-c60f-4e5c-899f-fd95996ea742/disk.config c8d53ba7-c60f-4e5c-899f-fd95996ea742_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.240s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.428 2 INFO nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Deleting local config drive /var/lib/nova/instances/c8d53ba7-c60f-4e5c-899f-fd95996ea742/disk.config because it was imported into RBD.#033[00m
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.458 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[6e1ed5d4-5082-4d13-9a6a-d0bb865d4964]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.474 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e234958a-0765-4ad0-aff1-f93a3e09a06d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc2d149f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:e7:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 1000, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 1000, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 654854, 'reachable_time': 41314, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326061, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.499 2 INFO nova.virt.libvirt.driver [-] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Instance destroyed successfully.#033[00m
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.499 2 DEBUG nova.objects.instance [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'resources' on Instance uuid 2189eac5-238f-4f09-ae1c-1cf47c3b6030 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:07:01 np0005486808 NetworkManager[44885]: <info>  [1760432821.5048] manager: (tap93e6162a-d0): new Tun device (/org/freedesktop/NetworkManager/Devices/270)
Oct 14 05:07:01 np0005486808 systemd-udevd[326085]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.508 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7e5109b6-0761-4c44-9ced-43e27887d87e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 654865, 'tstamp': 654865}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326073, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfc2d149f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 654867, 'tstamp': 654867}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326073, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.510 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc2d149f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:07:01 np0005486808 kernel: tap93e6162a-d0: entered promiscuous mode
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.514 2 DEBUG nova.virt.libvirt.vif [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:06:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2131193327',display_name='tempest-tempest.common.compute-instance-2131193327',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2131193327',id=59,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP68NH14S3HRbU1S6/VMc0nOUsEVOZ3tz6VZKHo1Z+OFyqqDVRUrwJaEU67IXpbnQXXmmP8KKv7jtqoNjZqCUWnJrm5ENg2LKrvLCxr2iywwS6vcmTe+HUgI45UTIKYhOw==',key_name='tempest-keypair-1916836153',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:06:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-dzeywdhl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:06:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=2189eac5-238f-4f09-ae1c-1cf47c3b6030,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "address": "fa:16:3e:9c:3f:ae", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap350a3bec-5d", "ovs_interfaceid": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.514 2 DEBUG nova.network.os_vif_util [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "address": "fa:16:3e:9c:3f:ae", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap350a3bec-5d", "ovs_interfaceid": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.515 2 DEBUG nova.network.os_vif_util [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9c:3f:ae,bridge_name='br-int',has_traffic_filtering=True,id=350a3bec-5dbd-4a83-8d80-5796be0319fd,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap350a3bec-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.515 2 DEBUG os_vif [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:3f:ae,bridge_name='br-int',has_traffic_filtering=True,id=350a3bec-5dbd-4a83-8d80-5796be0319fd,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap350a3bec-5d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.517 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap350a3bec-5d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:01 np0005486808 NetworkManager[44885]: <info>  [1760432821.5215] device (tap93e6162a-d0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:07:01 np0005486808 ovn_controller[152662]: 2025-10-14T09:07:01Z|00639|binding|INFO|Claiming lport 93e6162a-d037-4440-9c8c-1cb9b293f249 for this chassis.
Oct 14 05:07:01 np0005486808 ovn_controller[152662]: 2025-10-14T09:07:01Z|00640|binding|INFO|93e6162a-d037-4440-9c8c-1cb9b293f249: Claiming fa:16:3e:13:f7:ed 10.100.0.13
Oct 14 05:07:01 np0005486808 NetworkManager[44885]: <info>  [1760432821.5226] device (tap93e6162a-d0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.532 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:f7:ed 10.100.0.13'], port_security=['fa:16:3e:13:f7:ed 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'c8d53ba7-c60f-4e5c-899f-fd95996ea742', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd39581efff7d48fb83412ca1f615d412', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dd5e08c6-d8c1-45fb-85db-6ce5c46fea39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c165cf5-3bad-4110-8321-7f7bda9723ad, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=93e6162a-d037-4440-9c8c-1cb9b293f249) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:07:01 np0005486808 ovn_controller[152662]: 2025-10-14T09:07:01Z|00641|binding|INFO|Setting lport 93e6162a-d037-4440-9c8c-1cb9b293f249 ovn-installed in OVS
Oct 14 05:07:01 np0005486808 ovn_controller[152662]: 2025-10-14T09:07:01Z|00642|binding|INFO|Setting lport 93e6162a-d037-4440-9c8c-1cb9b293f249 up in Southbound
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.553 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc2d149f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.553 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.554 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc2d149f-a0, col_values=(('external_ids', {'iface-id': '156432ee-35b5-40a9-aded-8066933d9972'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.554 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.555 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 93e6162a-d037-4440-9c8c-1cb9b293f249 in datapath 0a07d59e-be8b-4d41-a103-fb5a64bf6f88 bound to our chassis#033[00m
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.555 2 INFO os_vif [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:3f:ae,bridge_name='br-int',has_traffic_filtering=True,id=350a3bec-5dbd-4a83-8d80-5796be0319fd,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap350a3bec-5d')#033[00m
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.556 2 DEBUG nova.virt.libvirt.vif [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:06:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2131193327',display_name='tempest-tempest.common.compute-instance-2131193327',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2131193327',id=59,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP68NH14S3HRbU1S6/VMc0nOUsEVOZ3tz6VZKHo1Z+OFyqqDVRUrwJaEU67IXpbnQXXmmP8KKv7jtqoNjZqCUWnJrm5ENg2LKrvLCxr2iywwS6vcmTe+HUgI45UTIKYhOw==',key_name='tempest-keypair-1916836153',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:06:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-dzeywdhl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:06:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=2189eac5-238f-4f09-ae1c-1cf47c3b6030,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.556 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0a07d59e-be8b-4d41-a103-fb5a64bf6f88#033[00m
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.556 2 DEBUG nova.network.os_vif_util [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "address": "fa:16:3e:38:db:60", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf1ec4d8-f5", "ovs_interfaceid": "df1ec4d8-f543-4899-9d98-b60a6a46cc7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.557 2 DEBUG nova.network.os_vif_util [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:db:60,bridge_name='br-int',has_traffic_filtering=True,id=df1ec4d8-f543-4899-9d98-b60a6a46cc7c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf1ec4d8-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.557 2 DEBUG os_vif [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:db:60,bridge_name='br-int',has_traffic_filtering=True,id=df1ec4d8-f543-4899-9d98-b60a6a46cc7c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf1ec4d8-f5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.559 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf1ec4d8-f5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.559 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.561 2 INFO os_vif [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:db:60,bridge_name='br-int',has_traffic_filtering=True,id=df1ec4d8-f543-4899-9d98-b60a6a46cc7c,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf1ec4d8-f5')#033[00m
Oct 14 05:07:01 np0005486808 systemd-machined[214636]: New machine qemu-77-instance-0000003f.
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.568 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[55623d5c-e7c4-4478-8800-fa751827bc78]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.569 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0a07d59e-b1 in ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.570 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0a07d59e-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.571 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[04c6920b-c0fa-469b-a7d5-365801dd9248]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.571 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c4e85350-26c8-49ab-8ffa-cf3a48307929]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:01 np0005486808 systemd[1]: Started Virtual Machine qemu-77-instance-0000003f.
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.587 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[23aa2b60-ab27-44b1-a327-7aaff715328b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.601 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0d8a58a9-8245-4f34-90c8-31779fb8bfd6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.629 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e62667d1-412a-4b67-8ca3-35a4752dde56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.637 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c5c568cc-2ba4-41c0-b67a-3664da9fa653]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:01 np0005486808 NetworkManager[44885]: <info>  [1760432821.6384] manager: (tap0a07d59e-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/271)
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.669 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[9ba4775b-0ceb-43b6-afdd-a5365a4d4e5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.672 2 DEBUG nova.compute.manager [req-e50e99a7-7ae1-4e52-9f55-32911d2d73ed req-3d24af0b-2106-4041-9da2-d569b73079ee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Received event network-vif-unplugged-350a3bec-5dbd-4a83-8d80-5796be0319fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.673 2 DEBUG oslo_concurrency.lockutils [req-e50e99a7-7ae1-4e52-9f55-32911d2d73ed req-3d24af0b-2106-4041-9da2-d569b73079ee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.673 2 DEBUG oslo_concurrency.lockutils [req-e50e99a7-7ae1-4e52-9f55-32911d2d73ed req-3d24af0b-2106-4041-9da2-d569b73079ee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.673 2 DEBUG oslo_concurrency.lockutils [req-e50e99a7-7ae1-4e52-9f55-32911d2d73ed req-3d24af0b-2106-4041-9da2-d569b73079ee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.673 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3287b4bf-1e38-45cd-b3b4-21661b5156f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.673 2 DEBUG nova.compute.manager [req-e50e99a7-7ae1-4e52-9f55-32911d2d73ed req-3d24af0b-2106-4041-9da2-d569b73079ee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] No waiting events found dispatching network-vif-unplugged-350a3bec-5dbd-4a83-8d80-5796be0319fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.673 2 DEBUG nova.compute.manager [req-e50e99a7-7ae1-4e52-9f55-32911d2d73ed req-3d24af0b-2106-4041-9da2-d569b73079ee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Received event network-vif-unplugged-350a3bec-5dbd-4a83-8d80-5796be0319fd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:07:01 np0005486808 NetworkManager[44885]: <info>  [1760432821.7035] device (tap0a07d59e-b0): carrier: link connected
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.708 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f5c2629d-b819-49f7-8634-b6f9fd8df6c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.725 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6866a829-d2d0-446a-8f6a-cd43519d7ed6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0a07d59e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:2e:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 185], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 660046, 'reachable_time': 29093, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326152, 'error': None, 'target': 'ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.748 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[761a8d1d-e3b6-4c71-923a-1a28cd57c9cd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefa:2e1a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 660046, 'tstamp': 660046}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326155, 'error': None, 'target': 'ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:01 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1543: 305 pgs: 305 active+clean; 246 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 1.8 MiB/s wr, 63 op/s
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.771 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ed30fc04-b496-4fb6-828d-ee35eea984f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0a07d59e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:2e:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 185], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 660046, 'reachable_time': 29093, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 326156, 'error': None, 'target': 'ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:01 np0005486808 epic_black[325998]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:07:01 np0005486808 epic_black[325998]: --> relative data size: 1.0
Oct 14 05:07:01 np0005486808 epic_black[325998]: --> All data devices are unavailable
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.806 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c941cb75-6649-4e9a-af56-52f88fdbce8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:01 np0005486808 systemd[1]: libpod-4422c108f94c6a1d0de0424482aa008c0f795a02dbab7191df66f73ef5c2d9d3.scope: Deactivated successfully.
Oct 14 05:07:01 np0005486808 conmon[325998]: conmon 4422c108f94c6a1d0de0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4422c108f94c6a1d0de0424482aa008c0f795a02dbab7191df66f73ef5c2d9d3.scope/container/memory.events
Oct 14 05:07:01 np0005486808 podman[325981]: 2025-10-14 09:07:01.833414279 +0000 UTC m=+1.261960670 container died 4422c108f94c6a1d0de0424482aa008c0f795a02dbab7191df66f73ef5c2d9d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_black, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 14 05:07:01 np0005486808 systemd[1]: var-lib-containers-storage-overlay-3489470a31c6ea86aa2b03cf10cd0a194066e3e54c47c38781a7a9fd32824514-merged.mount: Deactivated successfully.
Oct 14 05:07:01 np0005486808 podman[325981]: 2025-10-14 09:07:01.896084362 +0000 UTC m=+1.324630743 container remove 4422c108f94c6a1d0de0424482aa008c0f795a02dbab7191df66f73ef5c2d9d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 05:07:01 np0005486808 systemd[1]: libpod-conmon-4422c108f94c6a1d0de0424482aa008c0f795a02dbab7191df66f73ef5c2d9d3.scope: Deactivated successfully.
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.910 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0096230b-9213-49c9-a309-f811cd147f31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.912 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a07d59e-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.913 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.913 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0a07d59e-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:01 np0005486808 kernel: tap0a07d59e-b0: entered promiscuous mode
Oct 14 05:07:01 np0005486808 NetworkManager[44885]: <info>  [1760432821.9715] manager: (tap0a07d59e-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/272)
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.974 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0a07d59e-b0, col_values=(('external_ids', {'iface-id': '31ed66d8-7c3d-4486-83f3-5ccb9a199aa1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:07:01 np0005486808 ovn_controller[152662]: 2025-10-14T09:07:01Z|00643|binding|INFO|Releasing lport 31ed66d8-7c3d-4486-83f3-5ccb9a199aa1 from this chassis (sb_readonly=0)
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.978 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0a07d59e-be8b-4d41-a103-fb5a64bf6f88.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0a07d59e-be8b-4d41-a103-fb5a64bf6f88.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.979 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[046f6f94-bd15-4d2f-9c8c-fbcab087e455]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.980 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-0a07d59e-be8b-4d41-a103-fb5a64bf6f88
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/0a07d59e-be8b-4d41-a103-fb5a64bf6f88.pid.haproxy
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 0a07d59e-be8b-4d41-a103-fb5a64bf6f88
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:07:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:01.981 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'env', 'PROCESS_TAG=haproxy-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0a07d59e-be8b-4d41-a103-fb5a64bf6f88.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:07:01 np0005486808 nova_compute[259627]: 2025-10-14 09:07:01.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:02 np0005486808 nova_compute[259627]: 2025-10-14 09:07:02.070 2 INFO nova.virt.libvirt.driver [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Deleting instance files /var/lib/nova/instances/2189eac5-238f-4f09-ae1c-1cf47c3b6030_del#033[00m
Oct 14 05:07:02 np0005486808 nova_compute[259627]: 2025-10-14 09:07:02.071 2 INFO nova.virt.libvirt.driver [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Deletion of /var/lib/nova/instances/2189eac5-238f-4f09-ae1c-1cf47c3b6030_del complete#033[00m
Oct 14 05:07:02 np0005486808 nova_compute[259627]: 2025-10-14 09:07:02.130 2 INFO nova.compute.manager [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Took 0.88 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:07:02 np0005486808 nova_compute[259627]: 2025-10-14 09:07:02.131 2 DEBUG oslo.service.loopingcall [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:07:02 np0005486808 nova_compute[259627]: 2025-10-14 09:07:02.131 2 DEBUG nova.compute.manager [-] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:07:02 np0005486808 nova_compute[259627]: 2025-10-14 09:07:02.131 2 DEBUG nova.network.neutron [-] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:07:02 np0005486808 nova_compute[259627]: 2025-10-14 09:07:02.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:02 np0005486808 podman[326346]: 2025-10-14 09:07:02.360209822 +0000 UTC m=+0.049545641 container create 92ee2c50ef7d28250bae192eadf650bd5d0cef129ed1f7cc245610d055aefec1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:07:02 np0005486808 systemd[1]: Started libpod-conmon-92ee2c50ef7d28250bae192eadf650bd5d0cef129ed1f7cc245610d055aefec1.scope.
Oct 14 05:07:02 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:07:02 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5f8a07d67777e3bae26757fdb9f288e4a77887d356ee44526796fcfd258e250/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:07:02 np0005486808 podman[326346]: 2025-10-14 09:07:02.337468832 +0000 UTC m=+0.026804681 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:07:02 np0005486808 podman[326346]: 2025-10-14 09:07:02.436885131 +0000 UTC m=+0.126220960 container init 92ee2c50ef7d28250bae192eadf650bd5d0cef129ed1f7cc245610d055aefec1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 14 05:07:02 np0005486808 podman[326346]: 2025-10-14 09:07:02.441661099 +0000 UTC m=+0.130996928 container start 92ee2c50ef7d28250bae192eadf650bd5d0cef129ed1f7cc245610d055aefec1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:07:02 np0005486808 nova_compute[259627]: 2025-10-14 09:07:02.453 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432807.4486067, 73d6be04-84dc-4b80-81f8-a9bbf9938051 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:07:02 np0005486808 nova_compute[259627]: 2025-10-14 09:07:02.453 2 INFO nova.compute.manager [-] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:07:02 np0005486808 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[326384]: [NOTICE]   (326390) : New worker (326392) forked
Oct 14 05:07:02 np0005486808 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[326384]: [NOTICE]   (326390) : Loading success.
Oct 14 05:07:02 np0005486808 nova_compute[259627]: 2025-10-14 09:07:02.476 2 DEBUG nova.compute.manager [None req-0f87be0d-b499-42af-8c6a-97b195807800 - - - - - -] [instance: 73d6be04-84dc-4b80-81f8-a9bbf9938051] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:07:02 np0005486808 podman[326415]: 2025-10-14 09:07:02.575825763 +0000 UTC m=+0.042063877 container create 82fc413ec974b31b4866a9edbdb1fe58113bfbf57f95d851a8760d9a546cecfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:07:02 np0005486808 systemd[1]: Started libpod-conmon-82fc413ec974b31b4866a9edbdb1fe58113bfbf57f95d851a8760d9a546cecfa.scope.
Oct 14 05:07:02 np0005486808 podman[326415]: 2025-10-14 09:07:02.55824085 +0000 UTC m=+0.024478984 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:07:02 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:07:02 np0005486808 podman[326415]: 2025-10-14 09:07:02.674587145 +0000 UTC m=+0.140825309 container init 82fc413ec974b31b4866a9edbdb1fe58113bfbf57f95d851a8760d9a546cecfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_taussig, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 14 05:07:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:07:02 np0005486808 podman[326415]: 2025-10-14 09:07:02.686721714 +0000 UTC m=+0.152959848 container start 82fc413ec974b31b4866a9edbdb1fe58113bfbf57f95d851a8760d9a546cecfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_taussig, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:07:02 np0005486808 podman[326415]: 2025-10-14 09:07:02.689718188 +0000 UTC m=+0.155956352 container attach 82fc413ec974b31b4866a9edbdb1fe58113bfbf57f95d851a8760d9a546cecfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 05:07:02 np0005486808 stoic_taussig[326431]: 167 167
Oct 14 05:07:02 np0005486808 systemd[1]: libpod-82fc413ec974b31b4866a9edbdb1fe58113bfbf57f95d851a8760d9a546cecfa.scope: Deactivated successfully.
Oct 14 05:07:02 np0005486808 podman[326415]: 2025-10-14 09:07:02.696692019 +0000 UTC m=+0.162930163 container died 82fc413ec974b31b4866a9edbdb1fe58113bfbf57f95d851a8760d9a546cecfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:07:02 np0005486808 systemd[1]: var-lib-containers-storage-overlay-498faadb83c147a9a25133648bf65545008379dfa5661921ba75760583ea92e4-merged.mount: Deactivated successfully.
Oct 14 05:07:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:07:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:07:02 np0005486808 podman[326415]: 2025-10-14 09:07:02.744326492 +0000 UTC m=+0.210564616 container remove 82fc413ec974b31b4866a9edbdb1fe58113bfbf57f95d851a8760d9a546cecfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Oct 14 05:07:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:07:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:07:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:07:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:07:02 np0005486808 systemd[1]: libpod-conmon-82fc413ec974b31b4866a9edbdb1fe58113bfbf57f95d851a8760d9a546cecfa.scope: Deactivated successfully.
Oct 14 05:07:02 np0005486808 nova_compute[259627]: 2025-10-14 09:07:02.788 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432822.7872407, c8d53ba7-c60f-4e5c-899f-fd95996ea742 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:07:02 np0005486808 nova_compute[259627]: 2025-10-14 09:07:02.789 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] VM Started (Lifecycle Event)#033[00m
Oct 14 05:07:02 np0005486808 nova_compute[259627]: 2025-10-14 09:07:02.823 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:07:02 np0005486808 nova_compute[259627]: 2025-10-14 09:07:02.827 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432822.787327, c8d53ba7-c60f-4e5c-899f-fd95996ea742 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:07:02 np0005486808 nova_compute[259627]: 2025-10-14 09:07:02.827 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:07:02 np0005486808 nova_compute[259627]: 2025-10-14 09:07:02.851 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:07:02 np0005486808 nova_compute[259627]: 2025-10-14 09:07:02.855 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:07:02 np0005486808 nova_compute[259627]: 2025-10-14 09:07:02.875 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:07:03 np0005486808 podman[326453]: 2025-10-14 09:07:03.004474359 +0000 UTC m=+0.061209218 container create b78143c8355e10002fc86ce044455eea6cead2b18249ee134cbd00f75897ea4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Oct 14 05:07:03 np0005486808 systemd[1]: Started libpod-conmon-b78143c8355e10002fc86ce044455eea6cead2b18249ee134cbd00f75897ea4b.scope.
Oct 14 05:07:03 np0005486808 podman[326453]: 2025-10-14 09:07:02.975729891 +0000 UTC m=+0.032464810 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:07:03 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:07:03 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5eb8e17047fe15406f852744907350edb2f66a5d921d637a5927ee16a04d804/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:07:03 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5eb8e17047fe15406f852744907350edb2f66a5d921d637a5927ee16a04d804/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:07:03 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5eb8e17047fe15406f852744907350edb2f66a5d921d637a5927ee16a04d804/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:07:03 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5eb8e17047fe15406f852744907350edb2f66a5d921d637a5927ee16a04d804/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:07:03 np0005486808 podman[326453]: 2025-10-14 09:07:03.111505834 +0000 UTC m=+0.168240733 container init b78143c8355e10002fc86ce044455eea6cead2b18249ee134cbd00f75897ea4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:07:03 np0005486808 podman[326453]: 2025-10-14 09:07:03.11984637 +0000 UTC m=+0.176581229 container start b78143c8355e10002fc86ce044455eea6cead2b18249ee134cbd00f75897ea4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_colden, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 14 05:07:03 np0005486808 podman[326453]: 2025-10-14 09:07:03.123568381 +0000 UTC m=+0.180303240 container attach b78143c8355e10002fc86ce044455eea6cead2b18249ee134cbd00f75897ea4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:07:03 np0005486808 nova_compute[259627]: 2025-10-14 09:07:03.531 2 INFO nova.network.neutron [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Port df1ec4d8-f543-4899-9d98-b60a6a46cc7c from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Oct 14 05:07:03 np0005486808 nova_compute[259627]: 2025-10-14 09:07:03.532 2 DEBUG nova.network.neutron [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Updating instance_info_cache with network_info: [{"id": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "address": "fa:16:3e:9c:3f:ae", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap350a3bec-5d", "ovs_interfaceid": "350a3bec-5dbd-4a83-8d80-5796be0319fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:07:03 np0005486808 nova_compute[259627]: 2025-10-14 09:07:03.552 2 DEBUG oslo_concurrency.lockutils [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Releasing lock "refresh_cache-2189eac5-238f-4f09-ae1c-1cf47c3b6030" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:07:03 np0005486808 nova_compute[259627]: 2025-10-14 09:07:03.586 2 DEBUG oslo_concurrency.lockutils [None req-855f2c99-03b4-4a39-9152-9bfb73c7fd15 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "interface-2189eac5-238f-4f09-ae1c-1cf47c3b6030-df1ec4d8-f543-4899-9d98-b60a6a46cc7c" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 5.421s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:07:03 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1544: 305 pgs: 305 active+clean; 246 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:07:03 np0005486808 nova_compute[259627]: 2025-10-14 09:07:03.766 2 DEBUG nova.compute.manager [req-19e3e364-ddae-4d72-890d-2c1c68618827 req-ee3de58f-185c-4c5b-8c34-e4e67897776f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Received event network-vif-plugged-350a3bec-5dbd-4a83-8d80-5796be0319fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:07:03 np0005486808 nova_compute[259627]: 2025-10-14 09:07:03.766 2 DEBUG oslo_concurrency.lockutils [req-19e3e364-ddae-4d72-890d-2c1c68618827 req-ee3de58f-185c-4c5b-8c34-e4e67897776f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:07:03 np0005486808 nova_compute[259627]: 2025-10-14 09:07:03.767 2 DEBUG oslo_concurrency.lockutils [req-19e3e364-ddae-4d72-890d-2c1c68618827 req-ee3de58f-185c-4c5b-8c34-e4e67897776f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:07:03 np0005486808 nova_compute[259627]: 2025-10-14 09:07:03.767 2 DEBUG oslo_concurrency.lockutils [req-19e3e364-ddae-4d72-890d-2c1c68618827 req-ee3de58f-185c-4c5b-8c34-e4e67897776f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:07:03 np0005486808 nova_compute[259627]: 2025-10-14 09:07:03.767 2 DEBUG nova.compute.manager [req-19e3e364-ddae-4d72-890d-2c1c68618827 req-ee3de58f-185c-4c5b-8c34-e4e67897776f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] No waiting events found dispatching network-vif-plugged-350a3bec-5dbd-4a83-8d80-5796be0319fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:07:03 np0005486808 nova_compute[259627]: 2025-10-14 09:07:03.767 2 WARNING nova.compute.manager [req-19e3e364-ddae-4d72-890d-2c1c68618827 req-ee3de58f-185c-4c5b-8c34-e4e67897776f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Received unexpected event network-vif-plugged-350a3bec-5dbd-4a83-8d80-5796be0319fd for instance with vm_state active and task_state deleting.#033[00m
Oct 14 05:07:03 np0005486808 nova_compute[259627]: 2025-10-14 09:07:03.768 2 DEBUG nova.compute.manager [req-19e3e364-ddae-4d72-890d-2c1c68618827 req-ee3de58f-185c-4c5b-8c34-e4e67897776f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Received event network-vif-plugged-93e6162a-d037-4440-9c8c-1cb9b293f249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:07:03 np0005486808 nova_compute[259627]: 2025-10-14 09:07:03.768 2 DEBUG oslo_concurrency.lockutils [req-19e3e364-ddae-4d72-890d-2c1c68618827 req-ee3de58f-185c-4c5b-8c34-e4e67897776f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c8d53ba7-c60f-4e5c-899f-fd95996ea742-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:07:03 np0005486808 nova_compute[259627]: 2025-10-14 09:07:03.768 2 DEBUG oslo_concurrency.lockutils [req-19e3e364-ddae-4d72-890d-2c1c68618827 req-ee3de58f-185c-4c5b-8c34-e4e67897776f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c8d53ba7-c60f-4e5c-899f-fd95996ea742-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:07:03 np0005486808 nova_compute[259627]: 2025-10-14 09:07:03.768 2 DEBUG oslo_concurrency.lockutils [req-19e3e364-ddae-4d72-890d-2c1c68618827 req-ee3de58f-185c-4c5b-8c34-e4e67897776f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c8d53ba7-c60f-4e5c-899f-fd95996ea742-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:07:03 np0005486808 nova_compute[259627]: 2025-10-14 09:07:03.769 2 DEBUG nova.compute.manager [req-19e3e364-ddae-4d72-890d-2c1c68618827 req-ee3de58f-185c-4c5b-8c34-e4e67897776f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Processing event network-vif-plugged-93e6162a-d037-4440-9c8c-1cb9b293f249 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:07:03 np0005486808 nova_compute[259627]: 2025-10-14 09:07:03.769 2 DEBUG nova.compute.manager [req-19e3e364-ddae-4d72-890d-2c1c68618827 req-ee3de58f-185c-4c5b-8c34-e4e67897776f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Received event network-vif-plugged-93e6162a-d037-4440-9c8c-1cb9b293f249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:07:03 np0005486808 nova_compute[259627]: 2025-10-14 09:07:03.769 2 DEBUG oslo_concurrency.lockutils [req-19e3e364-ddae-4d72-890d-2c1c68618827 req-ee3de58f-185c-4c5b-8c34-e4e67897776f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c8d53ba7-c60f-4e5c-899f-fd95996ea742-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:07:03 np0005486808 nova_compute[259627]: 2025-10-14 09:07:03.770 2 DEBUG oslo_concurrency.lockutils [req-19e3e364-ddae-4d72-890d-2c1c68618827 req-ee3de58f-185c-4c5b-8c34-e4e67897776f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c8d53ba7-c60f-4e5c-899f-fd95996ea742-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:07:03 np0005486808 nova_compute[259627]: 2025-10-14 09:07:03.770 2 DEBUG oslo_concurrency.lockutils [req-19e3e364-ddae-4d72-890d-2c1c68618827 req-ee3de58f-185c-4c5b-8c34-e4e67897776f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c8d53ba7-c60f-4e5c-899f-fd95996ea742-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:07:03 np0005486808 nova_compute[259627]: 2025-10-14 09:07:03.771 2 DEBUG nova.compute.manager [req-19e3e364-ddae-4d72-890d-2c1c68618827 req-ee3de58f-185c-4c5b-8c34-e4e67897776f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] No waiting events found dispatching network-vif-plugged-93e6162a-d037-4440-9c8c-1cb9b293f249 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 05:07:03 np0005486808 nova_compute[259627]: 2025-10-14 09:07:03.771 2 WARNING nova.compute.manager [req-19e3e364-ddae-4d72-890d-2c1c68618827 req-ee3de58f-185c-4c5b-8c34-e4e67897776f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Received unexpected event network-vif-plugged-93e6162a-d037-4440-9c8c-1cb9b293f249 for instance with vm_state building and task_state spawning.
Oct 14 05:07:03 np0005486808 nova_compute[259627]: 2025-10-14 09:07:03.772 2 DEBUG nova.compute.manager [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 05:07:03 np0005486808 nova_compute[259627]: 2025-10-14 09:07:03.776 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432823.7761073, c8d53ba7-c60f-4e5c-899f-fd95996ea742 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:07:03 np0005486808 nova_compute[259627]: 2025-10-14 09:07:03.776 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] VM Resumed (Lifecycle Event)
Oct 14 05:07:03 np0005486808 nova_compute[259627]: 2025-10-14 09:07:03.778 2 DEBUG nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 05:07:03 np0005486808 nova_compute[259627]: 2025-10-14 09:07:03.781 2 INFO nova.virt.libvirt.driver [-] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Instance spawned successfully.
Oct 14 05:07:03 np0005486808 nova_compute[259627]: 2025-10-14 09:07:03.781 2 DEBUG nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 05:07:03 np0005486808 nova_compute[259627]: 2025-10-14 09:07:03.800 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:07:03 np0005486808 nova_compute[259627]: 2025-10-14 09:07:03.806 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 05:07:03 np0005486808 nova_compute[259627]: 2025-10-14 09:07:03.810 2 DEBUG nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:07:03 np0005486808 nova_compute[259627]: 2025-10-14 09:07:03.810 2 DEBUG nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:07:03 np0005486808 nova_compute[259627]: 2025-10-14 09:07:03.811 2 DEBUG nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:07:03 np0005486808 nova_compute[259627]: 2025-10-14 09:07:03.811 2 DEBUG nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:07:03 np0005486808 nova_compute[259627]: 2025-10-14 09:07:03.812 2 DEBUG nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:07:03 np0005486808 nova_compute[259627]: 2025-10-14 09:07:03.812 2 DEBUG nova.virt.libvirt.driver [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:07:03 np0005486808 nova_compute[259627]: 2025-10-14 09:07:03.835 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 05:07:03 np0005486808 nova_compute[259627]: 2025-10-14 09:07:03.865 2 INFO nova.compute.manager [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Took 8.73 seconds to spawn the instance on the hypervisor.
Oct 14 05:07:03 np0005486808 nova_compute[259627]: 2025-10-14 09:07:03.866 2 DEBUG nova.compute.manager [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:07:03 np0005486808 nova_compute[259627]: 2025-10-14 09:07:03.930 2 INFO nova.compute.manager [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Took 9.75 seconds to build instance.
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]: {
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:    "0": [
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:        {
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:            "devices": [
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:                "/dev/loop3"
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:            ],
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:            "lv_name": "ceph_lv0",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:            "lv_size": "21470642176",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:            "name": "ceph_lv0",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:            "tags": {
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:                "ceph.cluster_name": "ceph",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:                "ceph.crush_device_class": "",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:                "ceph.encrypted": "0",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:                "ceph.osd_id": "0",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:                "ceph.type": "block",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:                "ceph.vdo": "0"
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:            },
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:            "type": "block",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:            "vg_name": "ceph_vg0"
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:        }
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:    ],
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:    "1": [
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:        {
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:            "devices": [
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:                "/dev/loop4"
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:            ],
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:            "lv_name": "ceph_lv1",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:            "lv_size": "21470642176",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:            "name": "ceph_lv1",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:            "tags": {
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:                "ceph.cluster_name": "ceph",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:                "ceph.crush_device_class": "",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:                "ceph.encrypted": "0",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:                "ceph.osd_id": "1",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:                "ceph.type": "block",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:                "ceph.vdo": "0"
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:            },
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:            "type": "block",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:            "vg_name": "ceph_vg1"
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:        }
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:    ],
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:    "2": [
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:        {
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:            "devices": [
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:                "/dev/loop5"
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:            ],
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:            "lv_name": "ceph_lv2",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:            "lv_size": "21470642176",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:            "name": "ceph_lv2",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:            "tags": {
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:                "ceph.cluster_name": "ceph",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:                "ceph.crush_device_class": "",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:                "ceph.encrypted": "0",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:                "ceph.osd_id": "2",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:                "ceph.type": "block",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:                "ceph.vdo": "0"
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:            },
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:            "type": "block",
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:            "vg_name": "ceph_vg2"
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:        }
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]:    ]
Oct 14 05:07:03 np0005486808 peaceful_colden[326471]: }
Oct 14 05:07:03 np0005486808 nova_compute[259627]: 2025-10-14 09:07:03.961 2 DEBUG oslo_concurrency.lockutils [None req-21652fce-9b88-4f6e-95f8-98293d003fc8 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "c8d53ba7-c60f-4e5c-899f-fd95996ea742" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.856s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:07:03 np0005486808 systemd[1]: libpod-b78143c8355e10002fc86ce044455eea6cead2b18249ee134cbd00f75897ea4b.scope: Deactivated successfully.
Oct 14 05:07:03 np0005486808 podman[326453]: 2025-10-14 09:07:03.963537638 +0000 UTC m=+1.020272457 container died b78143c8355e10002fc86ce044455eea6cead2b18249ee134cbd00f75897ea4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_colden, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:07:03 np0005486808 systemd[1]: var-lib-containers-storage-overlay-e5eb8e17047fe15406f852744907350edb2f66a5d921d637a5927ee16a04d804-merged.mount: Deactivated successfully.
Oct 14 05:07:04 np0005486808 podman[326453]: 2025-10-14 09:07:04.039082798 +0000 UTC m=+1.095817617 container remove b78143c8355e10002fc86ce044455eea6cead2b18249ee134cbd00f75897ea4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_colden, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:07:04 np0005486808 systemd[1]: libpod-conmon-b78143c8355e10002fc86ce044455eea6cead2b18249ee134cbd00f75897ea4b.scope: Deactivated successfully.
Oct 14 05:07:04 np0005486808 podman[326631]: 2025-10-14 09:07:04.712433691 +0000 UTC m=+0.037830122 container create 0b24b025199f87d72f67bcc1e8cf5ae5e8fc53bacdd2bd76d3c1b42e3e80cf28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_boyd, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Oct 14 05:07:04 np0005486808 systemd[1]: Started libpod-conmon-0b24b025199f87d72f67bcc1e8cf5ae5e8fc53bacdd2bd76d3c1b42e3e80cf28.scope.
Oct 14 05:07:04 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:07:04 np0005486808 podman[326631]: 2025-10-14 09:07:04.789333125 +0000 UTC m=+0.114729586 container init 0b24b025199f87d72f67bcc1e8cf5ae5e8fc53bacdd2bd76d3c1b42e3e80cf28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_boyd, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:07:04 np0005486808 podman[326631]: 2025-10-14 09:07:04.696643952 +0000 UTC m=+0.022040393 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:07:04 np0005486808 podman[326631]: 2025-10-14 09:07:04.794886352 +0000 UTC m=+0.120282783 container start 0b24b025199f87d72f67bcc1e8cf5ae5e8fc53bacdd2bd76d3c1b42e3e80cf28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 14 05:07:04 np0005486808 podman[326631]: 2025-10-14 09:07:04.798146842 +0000 UTC m=+0.123543293 container attach 0b24b025199f87d72f67bcc1e8cf5ae5e8fc53bacdd2bd76d3c1b42e3e80cf28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_boyd, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:07:04 np0005486808 tender_boyd[326648]: 167 167
Oct 14 05:07:04 np0005486808 systemd[1]: libpod-0b24b025199f87d72f67bcc1e8cf5ae5e8fc53bacdd2bd76d3c1b42e3e80cf28.scope: Deactivated successfully.
Oct 14 05:07:04 np0005486808 podman[326631]: 2025-10-14 09:07:04.799710581 +0000 UTC m=+0.125107042 container died 0b24b025199f87d72f67bcc1e8cf5ae5e8fc53bacdd2bd76d3c1b42e3e80cf28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_boyd, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:07:04 np0005486808 systemd[1]: var-lib-containers-storage-overlay-5edda76d64926d90015d1bb673852e248a0bfe5f0b8e13db4e854b075878724e-merged.mount: Deactivated successfully.
Oct 14 05:07:04 np0005486808 podman[326631]: 2025-10-14 09:07:04.831657507 +0000 UTC m=+0.157053938 container remove 0b24b025199f87d72f67bcc1e8cf5ae5e8fc53bacdd2bd76d3c1b42e3e80cf28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_boyd, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 14 05:07:04 np0005486808 systemd[1]: libpod-conmon-0b24b025199f87d72f67bcc1e8cf5ae5e8fc53bacdd2bd76d3c1b42e3e80cf28.scope: Deactivated successfully.
Oct 14 05:07:05 np0005486808 podman[326673]: 2025-10-14 09:07:05.027002368 +0000 UTC m=+0.041797320 container create 1e60f86bcbad9fa2e431c43770923842cdcb140ef1d033fbc7504033e4b3c66e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:07:05 np0005486808 systemd[1]: Started libpod-conmon-1e60f86bcbad9fa2e431c43770923842cdcb140ef1d033fbc7504033e4b3c66e.scope.
Oct 14 05:07:05 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:07:05 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cec5f5138eda5a20fcb10ca1f15523490ba3e1ec8c836377b5060d64630c36cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:07:05 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cec5f5138eda5a20fcb10ca1f15523490ba3e1ec8c836377b5060d64630c36cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:07:05 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cec5f5138eda5a20fcb10ca1f15523490ba3e1ec8c836377b5060d64630c36cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:07:05 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cec5f5138eda5a20fcb10ca1f15523490ba3e1ec8c836377b5060d64630c36cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:07:05 np0005486808 podman[326673]: 2025-10-14 09:07:05.007322914 +0000 UTC m=+0.022117886 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:07:05 np0005486808 podman[326673]: 2025-10-14 09:07:05.108796983 +0000 UTC m=+0.123591935 container init 1e60f86bcbad9fa2e431c43770923842cdcb140ef1d033fbc7504033e4b3c66e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 14 05:07:05 np0005486808 podman[326673]: 2025-10-14 09:07:05.114211796 +0000 UTC m=+0.129006748 container start 1e60f86bcbad9fa2e431c43770923842cdcb140ef1d033fbc7504033e4b3c66e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:07:05 np0005486808 podman[326673]: 2025-10-14 09:07:05.117286592 +0000 UTC m=+0.132081574 container attach 1e60f86bcbad9fa2e431c43770923842cdcb140ef1d033fbc7504033e4b3c66e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hawking, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:07:05 np0005486808 nova_compute[259627]: 2025-10-14 09:07:05.334 2 DEBUG oslo_concurrency.lockutils [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "c8d53ba7-c60f-4e5c-899f-fd95996ea742" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:07:05 np0005486808 nova_compute[259627]: 2025-10-14 09:07:05.335 2 DEBUG oslo_concurrency.lockutils [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "c8d53ba7-c60f-4e5c-899f-fd95996ea742" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:07:05 np0005486808 nova_compute[259627]: 2025-10-14 09:07:05.335 2 INFO nova.compute.manager [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Shelving
Oct 14 05:07:05 np0005486808 nova_compute[259627]: 2025-10-14 09:07:05.361 2 DEBUG nova.virt.libvirt.driver [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 14 05:07:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:07:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2368455642' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:07:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:07:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2368455642' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:07:05 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1545: 305 pgs: 305 active+clean; 167 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 129 op/s
Oct 14 05:07:06 np0005486808 sad_hawking[326690]: {
Oct 14 05:07:06 np0005486808 sad_hawking[326690]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:07:06 np0005486808 sad_hawking[326690]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:07:06 np0005486808 sad_hawking[326690]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:07:06 np0005486808 sad_hawking[326690]:        "osd_id": 2,
Oct 14 05:07:06 np0005486808 sad_hawking[326690]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:07:06 np0005486808 sad_hawking[326690]:        "type": "bluestore"
Oct 14 05:07:06 np0005486808 sad_hawking[326690]:    },
Oct 14 05:07:06 np0005486808 sad_hawking[326690]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:07:06 np0005486808 sad_hawking[326690]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:07:06 np0005486808 sad_hawking[326690]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:07:06 np0005486808 sad_hawking[326690]:        "osd_id": 1,
Oct 14 05:07:06 np0005486808 sad_hawking[326690]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:07:06 np0005486808 sad_hawking[326690]:        "type": "bluestore"
Oct 14 05:07:06 np0005486808 sad_hawking[326690]:    },
Oct 14 05:07:06 np0005486808 sad_hawking[326690]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:07:06 np0005486808 sad_hawking[326690]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:07:06 np0005486808 sad_hawking[326690]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:07:06 np0005486808 sad_hawking[326690]:        "osd_id": 0,
Oct 14 05:07:06 np0005486808 sad_hawking[326690]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:07:06 np0005486808 sad_hawking[326690]:        "type": "bluestore"
Oct 14 05:07:06 np0005486808 sad_hawking[326690]:    }
Oct 14 05:07:06 np0005486808 sad_hawking[326690]: }
Oct 14 05:07:06 np0005486808 systemd[1]: libpod-1e60f86bcbad9fa2e431c43770923842cdcb140ef1d033fbc7504033e4b3c66e.scope: Deactivated successfully.
Oct 14 05:07:06 np0005486808 podman[326673]: 2025-10-14 09:07:06.090001497 +0000 UTC m=+1.104796439 container died 1e60f86bcbad9fa2e431c43770923842cdcb140ef1d033fbc7504033e4b3c66e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hawking, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:07:06 np0005486808 systemd[1]: var-lib-containers-storage-overlay-cec5f5138eda5a20fcb10ca1f15523490ba3e1ec8c836377b5060d64630c36cb-merged.mount: Deactivated successfully.
Oct 14 05:07:06 np0005486808 podman[326673]: 2025-10-14 09:07:06.30610762 +0000 UTC m=+1.320902572 container remove 1e60f86bcbad9fa2e431c43770923842cdcb140ef1d033fbc7504033e4b3c66e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 05:07:06 np0005486808 systemd[1]: libpod-conmon-1e60f86bcbad9fa2e431c43770923842cdcb140ef1d033fbc7504033e4b3c66e.scope: Deactivated successfully.
Oct 14 05:07:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:07:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:07:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:07:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:07:06 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 3fde45db-e93e-4225-bdb9-9fcd731233f5 does not exist
Oct 14 05:07:06 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 32b68a08-15ad-4ad4-a555-0df41ca7eb73 does not exist
Oct 14 05:07:06 np0005486808 nova_compute[259627]: 2025-10-14 09:07:06.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:06 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:07:06 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:07:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:07.025 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:07:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:07.026 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:07:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:07.028 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:07:07 np0005486808 nova_compute[259627]: 2025-10-14 09:07:07.185 2 DEBUG nova.network.neutron [-] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:07:07 np0005486808 nova_compute[259627]: 2025-10-14 09:07:07.213 2 INFO nova.compute.manager [-] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Took 5.08 seconds to deallocate network for instance.#033[00m
Oct 14 05:07:07 np0005486808 nova_compute[259627]: 2025-10-14 09:07:07.268 2 DEBUG oslo_concurrency.lockutils [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:07:07 np0005486808 nova_compute[259627]: 2025-10-14 09:07:07.268 2 DEBUG oslo_concurrency.lockutils [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:07:07 np0005486808 nova_compute[259627]: 2025-10-14 09:07:07.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:07 np0005486808 nova_compute[259627]: 2025-10-14 09:07:07.388 2 DEBUG nova.compute.manager [req-74caf18f-c655-4ef4-92f6-1fa4f23bbab8 req-95789484-10ed-4041-8c97-7eec329f7f60 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Received event network-vif-deleted-350a3bec-5dbd-4a83-8d80-5796be0319fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:07:07 np0005486808 nova_compute[259627]: 2025-10-14 09:07:07.403 2 DEBUG oslo_concurrency.processutils [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:07:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:07:07 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1546: 305 pgs: 305 active+clean; 167 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 118 op/s
Oct 14 05:07:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:07:07 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2918270055' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:07:07 np0005486808 nova_compute[259627]: 2025-10-14 09:07:07.808 2 DEBUG oslo_concurrency.processutils [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:07:07 np0005486808 nova_compute[259627]: 2025-10-14 09:07:07.819 2 DEBUG nova.compute.provider_tree [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:07:07 np0005486808 nova_compute[259627]: 2025-10-14 09:07:07.860 2 DEBUG nova.scheduler.client.report [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:07:07 np0005486808 nova_compute[259627]: 2025-10-14 09:07:07.891 2 DEBUG oslo_concurrency.lockutils [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:07:07 np0005486808 nova_compute[259627]: 2025-10-14 09:07:07.951 2 INFO nova.scheduler.client.report [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Deleted allocations for instance 2189eac5-238f-4f09-ae1c-1cf47c3b6030#033[00m
Oct 14 05:07:08 np0005486808 nova_compute[259627]: 2025-10-14 09:07:08.033 2 DEBUG oslo_concurrency.lockutils [None req-d54c6bcf-6d67-4d6e-b5ca-5c0bc1b1d281 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "2189eac5-238f-4f09-ae1c-1cf47c3b6030" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:07:08 np0005486808 nova_compute[259627]: 2025-10-14 09:07:08.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:08 np0005486808 nova_compute[259627]: 2025-10-14 09:07:08.427 2 DEBUG oslo_concurrency.lockutils [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:07:08 np0005486808 nova_compute[259627]: 2025-10-14 09:07:08.428 2 DEBUG oslo_concurrency.lockutils [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:07:08 np0005486808 nova_compute[259627]: 2025-10-14 09:07:08.428 2 DEBUG oslo_concurrency.lockutils [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:07:08 np0005486808 nova_compute[259627]: 2025-10-14 09:07:08.428 2 DEBUG oslo_concurrency.lockutils [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:07:08 np0005486808 nova_compute[259627]: 2025-10-14 09:07:08.429 2 DEBUG oslo_concurrency.lockutils [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:07:08 np0005486808 nova_compute[259627]: 2025-10-14 09:07:08.430 2 INFO nova.compute.manager [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Terminating instance#033[00m
Oct 14 05:07:08 np0005486808 nova_compute[259627]: 2025-10-14 09:07:08.431 2 DEBUG nova.compute.manager [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:07:08 np0005486808 kernel: tapdffa5a1f-65 (unregistering): left promiscuous mode
Oct 14 05:07:08 np0005486808 NetworkManager[44885]: <info>  [1760432828.5013] device (tapdffa5a1f-65): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:07:08 np0005486808 nova_compute[259627]: 2025-10-14 09:07:08.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:08 np0005486808 ovn_controller[152662]: 2025-10-14T09:07:08Z|00644|binding|INFO|Releasing lport dffa5a1f-657b-498e-bbe5-6540fead7fb6 from this chassis (sb_readonly=0)
Oct 14 05:07:08 np0005486808 ovn_controller[152662]: 2025-10-14T09:07:08Z|00645|binding|INFO|Setting lport dffa5a1f-657b-498e-bbe5-6540fead7fb6 down in Southbound
Oct 14 05:07:08 np0005486808 ovn_controller[152662]: 2025-10-14T09:07:08Z|00646|binding|INFO|Removing iface tapdffa5a1f-65 ovn-installed in OVS
Oct 14 05:07:08 np0005486808 nova_compute[259627]: 2025-10-14 09:07:08.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:08.528 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:40:de 10.100.0.8'], port_security=['fa:16:3e:b4:40:de 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2d149f-aebf-406a-aed2-5161dd22b079', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8945d892653642ac9c3d894f8bc3619d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e6192a40-cbd8-43eb-9955-4fede99ddb79', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e933e805-d9b6-497a-9ba3-5f7153390420, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=dffa5a1f-657b-498e-bbe5-6540fead7fb6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:07:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:08.529 162547 INFO neutron.agent.ovn.metadata.agent [-] Port dffa5a1f-657b-498e-bbe5-6540fead7fb6 in datapath fc2d149f-aebf-406a-aed2-5161dd22b079 unbound from our chassis#033[00m
Oct 14 05:07:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:08.530 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fc2d149f-aebf-406a-aed2-5161dd22b079, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:07:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:08.531 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bd05692f-7cff-4f1f-b700-3efff555e451]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:08.539 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079 namespace which is not needed anymore#033[00m
Oct 14 05:07:08 np0005486808 nova_compute[259627]: 2025-10-14 09:07:08.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:08 np0005486808 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d00000039.scope: Deactivated successfully.
Oct 14 05:07:08 np0005486808 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d00000039.scope: Consumed 15.091s CPU time.
Oct 14 05:07:08 np0005486808 systemd-machined[214636]: Machine qemu-72-instance-00000039 terminated.
Oct 14 05:07:08 np0005486808 nova_compute[259627]: 2025-10-14 09:07:08.687 2 INFO nova.virt.libvirt.driver [-] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Instance destroyed successfully.#033[00m
Oct 14 05:07:08 np0005486808 nova_compute[259627]: 2025-10-14 09:07:08.689 2 DEBUG nova.objects.instance [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lazy-loading 'resources' on Instance uuid a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:07:08 np0005486808 nova_compute[259627]: 2025-10-14 09:07:08.711 2 DEBUG nova.virt.libvirt.vif [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:06:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1758133214',display_name='tempest-tempest.common.compute-instance-1758133214',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1758133214',id=57,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP68NH14S3HRbU1S6/VMc0nOUsEVOZ3tz6VZKHo1Z+OFyqqDVRUrwJaEU67IXpbnQXXmmP8KKv7jtqoNjZqCUWnJrm5ENg2LKrvLCxr2iywwS6vcmTe+HUgI45UTIKYhOw==',key_name='tempest-keypair-1916836153',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:06:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8945d892653642ac9c3d894f8bc3619d',ramdisk_id='',reservation_id='r-6x4gt3p2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-193524374',owner_user_name='tempest-AttachInterfacesTestJSON-193524374-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:06:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8b0f0f2991214b9caed6f475a149c342',uuid=a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "address": "fa:16:3e:b4:40:de", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdffa5a1f-65", "ovs_interfaceid": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:07:08 np0005486808 nova_compute[259627]: 2025-10-14 09:07:08.712 2 DEBUG nova.network.os_vif_util [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converting VIF {"id": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "address": "fa:16:3e:b4:40:de", "network": {"id": "fc2d149f-aebf-406a-aed2-5161dd22b079", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1146646001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8945d892653642ac9c3d894f8bc3619d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdffa5a1f-65", "ovs_interfaceid": "dffa5a1f-657b-498e-bbe5-6540fead7fb6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:07:08 np0005486808 nova_compute[259627]: 2025-10-14 09:07:08.713 2 DEBUG nova.network.os_vif_util [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b4:40:de,bridge_name='br-int',has_traffic_filtering=True,id=dffa5a1f-657b-498e-bbe5-6540fead7fb6,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdffa5a1f-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:07:08 np0005486808 nova_compute[259627]: 2025-10-14 09:07:08.713 2 DEBUG os_vif [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b4:40:de,bridge_name='br-int',has_traffic_filtering=True,id=dffa5a1f-657b-498e-bbe5-6540fead7fb6,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdffa5a1f-65') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:07:08 np0005486808 nova_compute[259627]: 2025-10-14 09:07:08.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:08 np0005486808 nova_compute[259627]: 2025-10-14 09:07:08.723 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdffa5a1f-65, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:07:08 np0005486808 nova_compute[259627]: 2025-10-14 09:07:08.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:08 np0005486808 nova_compute[259627]: 2025-10-14 09:07:08.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:07:08 np0005486808 nova_compute[259627]: 2025-10-14 09:07:08.738 2 INFO os_vif [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b4:40:de,bridge_name='br-int',has_traffic_filtering=True,id=dffa5a1f-657b-498e-bbe5-6540fead7fb6,network=Network(fc2d149f-aebf-406a-aed2-5161dd22b079),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdffa5a1f-65')#033[00m
Oct 14 05:07:08 np0005486808 neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079[322658]: [NOTICE]   (322662) : haproxy version is 2.8.14-c23fe91
Oct 14 05:07:08 np0005486808 neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079[322658]: [NOTICE]   (322662) : path to executable is /usr/sbin/haproxy
Oct 14 05:07:08 np0005486808 neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079[322658]: [WARNING]  (322662) : Exiting Master process...
Oct 14 05:07:08 np0005486808 neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079[322658]: [ALERT]    (322662) : Current worker (322664) exited with code 143 (Terminated)
Oct 14 05:07:08 np0005486808 neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079[322658]: [WARNING]  (322662) : All workers exited. Exiting... (0)
Oct 14 05:07:08 np0005486808 systemd[1]: libpod-0965aa24674b7a429e2b571bb5d6315cd793a560543408ad41a34da8f5fb4777.scope: Deactivated successfully.
Oct 14 05:07:08 np0005486808 podman[326836]: 2025-10-14 09:07:08.756350382 +0000 UTC m=+0.074035894 container died 0965aa24674b7a429e2b571bb5d6315cd793a560543408ad41a34da8f5fb4777 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 14 05:07:08 np0005486808 systemd[1]: var-lib-containers-storage-overlay-5d2b9ef6811a39d51e9705ae92738e1dfb3f318ac7ca2b4d0ea3d271eb3fe0ad-merged.mount: Deactivated successfully.
Oct 14 05:07:08 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0965aa24674b7a429e2b571bb5d6315cd793a560543408ad41a34da8f5fb4777-userdata-shm.mount: Deactivated successfully.
Oct 14 05:07:08 np0005486808 podman[326836]: 2025-10-14 09:07:08.808676621 +0000 UTC m=+0.126362133 container cleanup 0965aa24674b7a429e2b571bb5d6315cd793a560543408ad41a34da8f5fb4777 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 05:07:08 np0005486808 podman[326852]: 2025-10-14 09:07:08.810381223 +0000 UTC m=+0.081467677 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3)
Oct 14 05:07:08 np0005486808 podman[326853]: 2025-10-14 09:07:08.818464052 +0000 UTC m=+0.083782144 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Oct 14 05:07:08 np0005486808 systemd[1]: libpod-conmon-0965aa24674b7a429e2b571bb5d6315cd793a560543408ad41a34da8f5fb4777.scope: Deactivated successfully.
Oct 14 05:07:08 np0005486808 podman[326924]: 2025-10-14 09:07:08.886223021 +0000 UTC m=+0.053974711 container remove 0965aa24674b7a429e2b571bb5d6315cd793a560543408ad41a34da8f5fb4777 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:07:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:08.893 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a2ad9d36-00c9-4105-8269-8be3462095de]: (4, ('Tue Oct 14 09:07:08 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079 (0965aa24674b7a429e2b571bb5d6315cd793a560543408ad41a34da8f5fb4777)\n0965aa24674b7a429e2b571bb5d6315cd793a560543408ad41a34da8f5fb4777\nTue Oct 14 09:07:08 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079 (0965aa24674b7a429e2b571bb5d6315cd793a560543408ad41a34da8f5fb4777)\n0965aa24674b7a429e2b571bb5d6315cd793a560543408ad41a34da8f5fb4777\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:08.895 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e533ec11-e723-47a8-b9f8-7725e38f540c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:08.896 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc2d149f-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:07:08 np0005486808 kernel: tapfc2d149f-a0: left promiscuous mode
Oct 14 05:07:08 np0005486808 nova_compute[259627]: 2025-10-14 09:07:08.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:08 np0005486808 nova_compute[259627]: 2025-10-14 09:07:08.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:08.916 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[23434a19-3690-461f-b9d5-5123c27766d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:08.952 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[15c4170a-636e-46d9-88dc-83df76842f48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:08.954 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0730380c-c421-4fbf-97f3-0b386cc75d90]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:08.979 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c494b9a3-b121-468e-b935-6295f3087bf1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 654845, 'reachable_time': 37215, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326939, 'error': None, 'target': 'ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:08.982 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fc2d149f-aebf-406a-aed2-5161dd22b079 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:07:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:08.982 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[97921249-f904-4ee0-bd9d-dd991b1dc0e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:08 np0005486808 systemd[1]: run-netns-ovnmeta\x2dfc2d149f\x2daebf\x2d406a\x2daed2\x2d5161dd22b079.mount: Deactivated successfully.
Oct 14 05:07:09 np0005486808 nova_compute[259627]: 2025-10-14 09:07:09.354 2 INFO nova.virt.libvirt.driver [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Deleting instance files /var/lib/nova/instances/a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8_del#033[00m
Oct 14 05:07:09 np0005486808 nova_compute[259627]: 2025-10-14 09:07:09.355 2 INFO nova.virt.libvirt.driver [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Deletion of /var/lib/nova/instances/a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8_del complete#033[00m
Oct 14 05:07:09 np0005486808 nova_compute[259627]: 2025-10-14 09:07:09.427 2 INFO nova.compute.manager [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Took 1.00 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:07:09 np0005486808 nova_compute[259627]: 2025-10-14 09:07:09.428 2 DEBUG oslo.service.loopingcall [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:07:09 np0005486808 nova_compute[259627]: 2025-10-14 09:07:09.429 2 DEBUG nova.compute.manager [-] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:07:09 np0005486808 nova_compute[259627]: 2025-10-14 09:07:09.429 2 DEBUG nova.network.neutron [-] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:07:09 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1547: 305 pgs: 305 active+clean; 167 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 118 op/s
Oct 14 05:07:10 np0005486808 nova_compute[259627]: 2025-10-14 09:07:10.462 2 DEBUG nova.network.neutron [-] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:07:10 np0005486808 nova_compute[259627]: 2025-10-14 09:07:10.489 2 INFO nova.compute.manager [-] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Took 1.06 seconds to deallocate network for instance.#033[00m
Oct 14 05:07:10 np0005486808 nova_compute[259627]: 2025-10-14 09:07:10.549 2 DEBUG oslo_concurrency.lockutils [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:07:10 np0005486808 nova_compute[259627]: 2025-10-14 09:07:10.549 2 DEBUG oslo_concurrency.lockutils [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:07:10 np0005486808 nova_compute[259627]: 2025-10-14 09:07:10.563 2 DEBUG nova.compute.manager [req-7c2c448f-f8d4-40b8-9ad1-6e238694daef req-64fa60b9-d74c-4f2a-bd11-eb28a042eef0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Received event network-vif-deleted-dffa5a1f-657b-498e-bbe5-6540fead7fb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:07:10 np0005486808 nova_compute[259627]: 2025-10-14 09:07:10.616 2 DEBUG oslo_concurrency.processutils [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:07:11 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:07:11 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1537167008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:07:11 np0005486808 nova_compute[259627]: 2025-10-14 09:07:11.041 2 DEBUG oslo_concurrency.processutils [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:07:11 np0005486808 nova_compute[259627]: 2025-10-14 09:07:11.048 2 DEBUG nova.compute.provider_tree [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:07:11 np0005486808 nova_compute[259627]: 2025-10-14 09:07:11.085 2 DEBUG nova.scheduler.client.report [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:07:11 np0005486808 nova_compute[259627]: 2025-10-14 09:07:11.122 2 DEBUG oslo_concurrency.lockutils [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.572s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:07:11 np0005486808 nova_compute[259627]: 2025-10-14 09:07:11.171 2 INFO nova.scheduler.client.report [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Deleted allocations for instance a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8#033[00m
Oct 14 05:07:11 np0005486808 nova_compute[259627]: 2025-10-14 09:07:11.264 2 DEBUG oslo_concurrency.lockutils [None req-8de98daa-cc73-444a-a33d-7e106bb2de6a 8b0f0f2991214b9caed6f475a149c342 8945d892653642ac9c3d894f8bc3619d - - default default] Lock "a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:07:11 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1548: 305 pgs: 305 active+clean; 88 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.7 MiB/s wr, 146 op/s
Oct 14 05:07:12 np0005486808 nova_compute[259627]: 2025-10-14 09:07:12.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:07:13 np0005486808 nova_compute[259627]: 2025-10-14 09:07:13.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:13 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1549: 305 pgs: 305 active+clean; 88 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 129 op/s
Oct 14 05:07:15 np0005486808 nova_compute[259627]: 2025-10-14 09:07:15.406 2 DEBUG nova.virt.libvirt.driver [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct 14 05:07:15 np0005486808 nova_compute[259627]: 2025-10-14 09:07:15.421 2 DEBUG oslo_concurrency.lockutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquiring lock "e065d857-2df9-4199-aa98-41ca3c436bad" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:07:15 np0005486808 nova_compute[259627]: 2025-10-14 09:07:15.422 2 DEBUG oslo_concurrency.lockutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "e065d857-2df9-4199-aa98-41ca3c436bad" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:07:15 np0005486808 nova_compute[259627]: 2025-10-14 09:07:15.440 2 DEBUG nova.compute.manager [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:07:15 np0005486808 nova_compute[259627]: 2025-10-14 09:07:15.517 2 DEBUG oslo_concurrency.lockutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:07:15 np0005486808 nova_compute[259627]: 2025-10-14 09:07:15.518 2 DEBUG oslo_concurrency.lockutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:07:15 np0005486808 nova_compute[259627]: 2025-10-14 09:07:15.524 2 DEBUG nova.virt.hardware [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:07:15 np0005486808 nova_compute[259627]: 2025-10-14 09:07:15.524 2 INFO nova.compute.claims [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:07:15 np0005486808 nova_compute[259627]: 2025-10-14 09:07:15.623 2 DEBUG oslo_concurrency.processutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:07:15 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1550: 305 pgs: 305 active+clean; 109 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 172 op/s
Oct 14 05:07:15 np0005486808 ovn_controller[152662]: 2025-10-14T09:07:15Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:13:f7:ed 10.100.0.13
Oct 14 05:07:15 np0005486808 ovn_controller[152662]: 2025-10-14T09:07:15Z|00074|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:13:f7:ed 10.100.0.13
Oct 14 05:07:16 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:07:16 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1570839737' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:07:16 np0005486808 nova_compute[259627]: 2025-10-14 09:07:16.048 2 DEBUG oslo_concurrency.processutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:07:16 np0005486808 nova_compute[259627]: 2025-10-14 09:07:16.054 2 DEBUG nova.compute.provider_tree [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:07:16 np0005486808 nova_compute[259627]: 2025-10-14 09:07:16.073 2 DEBUG nova.scheduler.client.report [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:07:16 np0005486808 nova_compute[259627]: 2025-10-14 09:07:16.106 2 DEBUG oslo_concurrency.lockutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:07:16 np0005486808 nova_compute[259627]: 2025-10-14 09:07:16.107 2 DEBUG nova.compute.manager [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:07:16 np0005486808 nova_compute[259627]: 2025-10-14 09:07:16.185 2 DEBUG nova.compute.manager [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:07:16 np0005486808 nova_compute[259627]: 2025-10-14 09:07:16.186 2 DEBUG nova.network.neutron [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:07:16 np0005486808 nova_compute[259627]: 2025-10-14 09:07:16.215 2 INFO nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:07:16 np0005486808 nova_compute[259627]: 2025-10-14 09:07:16.236 2 DEBUG nova.compute.manager [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:07:16 np0005486808 nova_compute[259627]: 2025-10-14 09:07:16.377 2 DEBUG nova.compute.manager [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:07:16 np0005486808 nova_compute[259627]: 2025-10-14 09:07:16.378 2 DEBUG nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:07:16 np0005486808 nova_compute[259627]: 2025-10-14 09:07:16.379 2 INFO nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Creating image(s)#033[00m
Oct 14 05:07:16 np0005486808 nova_compute[259627]: 2025-10-14 09:07:16.403 2 DEBUG nova.storage.rbd_utils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] rbd image e065d857-2df9-4199-aa98-41ca3c436bad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:07:16 np0005486808 nova_compute[259627]: 2025-10-14 09:07:16.429 2 DEBUG nova.storage.rbd_utils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] rbd image e065d857-2df9-4199-aa98-41ca3c436bad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:07:16 np0005486808 nova_compute[259627]: 2025-10-14 09:07:16.461 2 DEBUG nova.storage.rbd_utils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] rbd image e065d857-2df9-4199-aa98-41ca3c436bad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:07:16 np0005486808 nova_compute[259627]: 2025-10-14 09:07:16.465 2 DEBUG oslo_concurrency.processutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:07:16 np0005486808 nova_compute[259627]: 2025-10-14 09:07:16.497 2 DEBUG nova.policy [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '695c749a8dce4506a31e2cec4f02876b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4bda6775f81f403e83269a5f798c9853', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:07:16 np0005486808 nova_compute[259627]: 2025-10-14 09:07:16.504 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432821.496341, 2189eac5-238f-4f09-ae1c-1cf47c3b6030 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:07:16 np0005486808 nova_compute[259627]: 2025-10-14 09:07:16.505 2 INFO nova.compute.manager [-] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:07:16 np0005486808 nova_compute[259627]: 2025-10-14 09:07:16.525 2 DEBUG nova.compute.manager [None req-be725eb6-9ea9-41d1-820b-75daebfa89dc - - - - - -] [instance: 2189eac5-238f-4f09-ae1c-1cf47c3b6030] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:07:16 np0005486808 nova_compute[259627]: 2025-10-14 09:07:16.538 2 DEBUG oslo_concurrency.processutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:07:16 np0005486808 nova_compute[259627]: 2025-10-14 09:07:16.539 2 DEBUG oslo_concurrency.lockutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:07:16 np0005486808 nova_compute[259627]: 2025-10-14 09:07:16.539 2 DEBUG oslo_concurrency.lockutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:07:16 np0005486808 nova_compute[259627]: 2025-10-14 09:07:16.539 2 DEBUG oslo_concurrency.lockutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:07:16 np0005486808 nova_compute[259627]: 2025-10-14 09:07:16.560 2 DEBUG nova.storage.rbd_utils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] rbd image e065d857-2df9-4199-aa98-41ca3c436bad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:07:16 np0005486808 nova_compute[259627]: 2025-10-14 09:07:16.563 2 DEBUG oslo_concurrency.processutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 e065d857-2df9-4199-aa98-41ca3c436bad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:07:16 np0005486808 nova_compute[259627]: 2025-10-14 09:07:16.829 2 DEBUG oslo_concurrency.processutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 e065d857-2df9-4199-aa98-41ca3c436bad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.266s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:07:16 np0005486808 nova_compute[259627]: 2025-10-14 09:07:16.894 2 DEBUG nova.storage.rbd_utils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] resizing rbd image e065d857-2df9-4199-aa98-41ca3c436bad_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:07:16 np0005486808 nova_compute[259627]: 2025-10-14 09:07:16.981 2 DEBUG nova.objects.instance [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lazy-loading 'migration_context' on Instance uuid e065d857-2df9-4199-aa98-41ca3c436bad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:07:16 np0005486808 nova_compute[259627]: 2025-10-14 09:07:16.993 2 DEBUG nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:07:16 np0005486808 nova_compute[259627]: 2025-10-14 09:07:16.993 2 DEBUG nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Ensure instance console log exists: /var/lib/nova/instances/e065d857-2df9-4199-aa98-41ca3c436bad/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:07:16 np0005486808 nova_compute[259627]: 2025-10-14 09:07:16.993 2 DEBUG oslo_concurrency.lockutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:07:16 np0005486808 nova_compute[259627]: 2025-10-14 09:07:16.994 2 DEBUG oslo_concurrency.lockutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:07:16 np0005486808 nova_compute[259627]: 2025-10-14 09:07:16.994 2 DEBUG oslo_concurrency.lockutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:07:17 np0005486808 nova_compute[259627]: 2025-10-14 09:07:17.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:07:17 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1551: 305 pgs: 305 active+clean; 109 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 254 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Oct 14 05:07:17 np0005486808 nova_compute[259627]: 2025-10-14 09:07:17.891 2 DEBUG nova.network.neutron [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Successfully created port: e18648ba-6112-40fa-85f6-bdf82a012079 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:07:18 np0005486808 nova_compute[259627]: 2025-10-14 09:07:18.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:18 np0005486808 nova_compute[259627]: 2025-10-14 09:07:18.852 2 DEBUG nova.network.neutron [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Successfully updated port: e18648ba-6112-40fa-85f6-bdf82a012079 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:07:18 np0005486808 nova_compute[259627]: 2025-10-14 09:07:18.866 2 DEBUG oslo_concurrency.lockutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquiring lock "refresh_cache-e065d857-2df9-4199-aa98-41ca3c436bad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:07:18 np0005486808 nova_compute[259627]: 2025-10-14 09:07:18.866 2 DEBUG oslo_concurrency.lockutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquired lock "refresh_cache-e065d857-2df9-4199-aa98-41ca3c436bad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:07:18 np0005486808 nova_compute[259627]: 2025-10-14 09:07:18.866 2 DEBUG nova.network.neutron [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:07:18 np0005486808 nova_compute[259627]: 2025-10-14 09:07:18.972 2 DEBUG nova.compute.manager [req-a7d5f779-4abe-4596-8ae3-5e31059778bc req-945f8585-9151-4a0a-a18a-25120e212d54 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Received event network-changed-e18648ba-6112-40fa-85f6-bdf82a012079 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:07:18 np0005486808 nova_compute[259627]: 2025-10-14 09:07:18.973 2 DEBUG nova.compute.manager [req-a7d5f779-4abe-4596-8ae3-5e31059778bc req-945f8585-9151-4a0a-a18a-25120e212d54 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Refreshing instance network info cache due to event network-changed-e18648ba-6112-40fa-85f6-bdf82a012079. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:07:18 np0005486808 nova_compute[259627]: 2025-10-14 09:07:18.974 2 DEBUG oslo_concurrency.lockutils [req-a7d5f779-4abe-4596-8ae3-5e31059778bc req-945f8585-9151-4a0a-a18a-25120e212d54 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e065d857-2df9-4199-aa98-41ca3c436bad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:07:19 np0005486808 nova_compute[259627]: 2025-10-14 09:07:19.046 2 DEBUG nova.network.neutron [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:07:19 np0005486808 ovn_controller[152662]: 2025-10-14T09:07:19Z|00647|binding|INFO|Releasing lport 31ed66d8-7c3d-4486-83f3-5ccb9a199aa1 from this chassis (sb_readonly=0)
Oct 14 05:07:19 np0005486808 nova_compute[259627]: 2025-10-14 09:07:19.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:19 np0005486808 kernel: tap93e6162a-d0 (unregistering): left promiscuous mode
Oct 14 05:07:19 np0005486808 NetworkManager[44885]: <info>  [1760432839.2944] device (tap93e6162a-d0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:07:19 np0005486808 nova_compute[259627]: 2025-10-14 09:07:19.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:19 np0005486808 ovn_controller[152662]: 2025-10-14T09:07:19Z|00648|binding|INFO|Releasing lport 31ed66d8-7c3d-4486-83f3-5ccb9a199aa1 from this chassis (sb_readonly=0)
Oct 14 05:07:19 np0005486808 ovn_controller[152662]: 2025-10-14T09:07:19Z|00649|binding|INFO|Releasing lport 93e6162a-d037-4440-9c8c-1cb9b293f249 from this chassis (sb_readonly=0)
Oct 14 05:07:19 np0005486808 ovn_controller[152662]: 2025-10-14T09:07:19Z|00650|binding|INFO|Removing iface tap93e6162a-d0 ovn-installed in OVS
Oct 14 05:07:19 np0005486808 ovn_controller[152662]: 2025-10-14T09:07:19Z|00651|binding|INFO|Setting lport 93e6162a-d037-4440-9c8c-1cb9b293f249 down in Southbound
Oct 14 05:07:19 np0005486808 nova_compute[259627]: 2025-10-14 09:07:19.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:19 np0005486808 nova_compute[259627]: 2025-10-14 09:07:19.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:19.323 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:f7:ed 10.100.0.13'], port_security=['fa:16:3e:13:f7:ed 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'c8d53ba7-c60f-4e5c-899f-fd95996ea742', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd39581efff7d48fb83412ca1f615d412', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dd5e08c6-d8c1-45fb-85db-6ce5c46fea39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c165cf5-3bad-4110-8321-7f7bda9723ad, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=93e6162a-d037-4440-9c8c-1cb9b293f249) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:07:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:19.324 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 93e6162a-d037-4440-9c8c-1cb9b293f249 in datapath 0a07d59e-be8b-4d41-a103-fb5a64bf6f88 unbound from our chassis#033[00m
Oct 14 05:07:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:19.325 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0a07d59e-be8b-4d41-a103-fb5a64bf6f88, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:07:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:19.326 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c0613a9a-8ce4-402e-90e5-d356f9c09b5f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:19.327 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88 namespace which is not needed anymore#033[00m
Oct 14 05:07:19 np0005486808 nova_compute[259627]: 2025-10-14 09:07:19.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:19 np0005486808 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d0000003f.scope: Deactivated successfully.
Oct 14 05:07:19 np0005486808 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d0000003f.scope: Consumed 13.659s CPU time.
Oct 14 05:07:19 np0005486808 systemd-machined[214636]: Machine qemu-77-instance-0000003f terminated.
Oct 14 05:07:19 np0005486808 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[326384]: [NOTICE]   (326390) : haproxy version is 2.8.14-c23fe91
Oct 14 05:07:19 np0005486808 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[326384]: [NOTICE]   (326390) : path to executable is /usr/sbin/haproxy
Oct 14 05:07:19 np0005486808 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[326384]: [WARNING]  (326390) : Exiting Master process...
Oct 14 05:07:19 np0005486808 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[326384]: [WARNING]  (326390) : Exiting Master process...
Oct 14 05:07:19 np0005486808 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[326384]: [ALERT]    (326390) : Current worker (326392) exited with code 143 (Terminated)
Oct 14 05:07:19 np0005486808 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[326384]: [WARNING]  (326390) : All workers exited. Exiting... (0)
Oct 14 05:07:19 np0005486808 systemd[1]: libpod-92ee2c50ef7d28250bae192eadf650bd5d0cef129ed1f7cc245610d055aefec1.scope: Deactivated successfully.
Oct 14 05:07:19 np0005486808 podman[327175]: 2025-10-14 09:07:19.492890365 +0000 UTC m=+0.052801531 container died 92ee2c50ef7d28250bae192eadf650bd5d0cef129ed1f7cc245610d055aefec1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct 14 05:07:19 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-92ee2c50ef7d28250bae192eadf650bd5d0cef129ed1f7cc245610d055aefec1-userdata-shm.mount: Deactivated successfully.
Oct 14 05:07:19 np0005486808 systemd[1]: var-lib-containers-storage-overlay-c5f8a07d67777e3bae26757fdb9f288e4a77887d356ee44526796fcfd258e250-merged.mount: Deactivated successfully.
Oct 14 05:07:19 np0005486808 podman[327175]: 2025-10-14 09:07:19.547974942 +0000 UTC m=+0.107886128 container cleanup 92ee2c50ef7d28250bae192eadf650bd5d0cef129ed1f7cc245610d055aefec1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 14 05:07:19 np0005486808 systemd[1]: libpod-conmon-92ee2c50ef7d28250bae192eadf650bd5d0cef129ed1f7cc245610d055aefec1.scope: Deactivated successfully.
Oct 14 05:07:19 np0005486808 nova_compute[259627]: 2025-10-14 09:07:19.573 2 INFO nova.virt.libvirt.driver [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Instance shutdown successfully after 14 seconds.#033[00m
Oct 14 05:07:19 np0005486808 nova_compute[259627]: 2025-10-14 09:07:19.581 2 INFO nova.virt.libvirt.driver [-] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Instance destroyed successfully.#033[00m
Oct 14 05:07:19 np0005486808 nova_compute[259627]: 2025-10-14 09:07:19.582 2 DEBUG nova.objects.instance [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lazy-loading 'numa_topology' on Instance uuid c8d53ba7-c60f-4e5c-899f-fd95996ea742 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:07:19 np0005486808 podman[327211]: 2025-10-14 09:07:19.640132051 +0000 UTC m=+0.061410583 container remove 92ee2c50ef7d28250bae192eadf650bd5d0cef129ed1f7cc245610d055aefec1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:07:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:19.646 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b6f4cb37-9544-4817-8377-a9c845b00de1]: (4, ('Tue Oct 14 09:07:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88 (92ee2c50ef7d28250bae192eadf650bd5d0cef129ed1f7cc245610d055aefec1)\n92ee2c50ef7d28250bae192eadf650bd5d0cef129ed1f7cc245610d055aefec1\nTue Oct 14 09:07:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88 (92ee2c50ef7d28250bae192eadf650bd5d0cef129ed1f7cc245610d055aefec1)\n92ee2c50ef7d28250bae192eadf650bd5d0cef129ed1f7cc245610d055aefec1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:19.648 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8860767b-08b7-4164-8bf7-2ad56e58a316]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:19.650 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a07d59e-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:07:19 np0005486808 nova_compute[259627]: 2025-10-14 09:07:19.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:19 np0005486808 kernel: tap0a07d59e-b0: left promiscuous mode
Oct 14 05:07:19 np0005486808 nova_compute[259627]: 2025-10-14 09:07:19.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:19.677 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ed549f94-1c50-40d4-98e8-9f0086aee950]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:19.704 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[37bb19a8-0256-49f3-9993-c7e817f85975]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:19.706 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a0283821-7bff-4f1f-8f6d-42ebc5a3695e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:19.735 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[96f0754b-29d7-495c-a367-98ef21f39d4d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 660038, 'reachable_time': 17866, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327235, 'error': None, 'target': 'ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:19.738 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:07:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:19.739 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[4201e93f-320f-4aa0-85e9-50053550c98b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:19 np0005486808 systemd[1]: run-netns-ovnmeta\x2d0a07d59e\x2dbe8b\x2d4d41\x2da103\x2dfb5a64bf6f88.mount: Deactivated successfully.
Oct 14 05:07:19 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1552: 305 pgs: 305 active+clean; 109 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 254 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Oct 14 05:07:19 np0005486808 nova_compute[259627]: 2025-10-14 09:07:19.802 2 DEBUG nova.network.neutron [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Updating instance_info_cache with network_info: [{"id": "e18648ba-6112-40fa-85f6-bdf82a012079", "address": "fa:16:3e:0b:9e:35", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18648ba-61", "ovs_interfaceid": "e18648ba-6112-40fa-85f6-bdf82a012079", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:07:19 np0005486808 nova_compute[259627]: 2025-10-14 09:07:19.825 2 DEBUG oslo_concurrency.lockutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Releasing lock "refresh_cache-e065d857-2df9-4199-aa98-41ca3c436bad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:07:19 np0005486808 nova_compute[259627]: 2025-10-14 09:07:19.825 2 DEBUG nova.compute.manager [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Instance network_info: |[{"id": "e18648ba-6112-40fa-85f6-bdf82a012079", "address": "fa:16:3e:0b:9e:35", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18648ba-61", "ovs_interfaceid": "e18648ba-6112-40fa-85f6-bdf82a012079", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:07:19 np0005486808 nova_compute[259627]: 2025-10-14 09:07:19.826 2 DEBUG oslo_concurrency.lockutils [req-a7d5f779-4abe-4596-8ae3-5e31059778bc req-945f8585-9151-4a0a-a18a-25120e212d54 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e065d857-2df9-4199-aa98-41ca3c436bad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:07:19 np0005486808 nova_compute[259627]: 2025-10-14 09:07:19.826 2 DEBUG nova.network.neutron [req-a7d5f779-4abe-4596-8ae3-5e31059778bc req-945f8585-9151-4a0a-a18a-25120e212d54 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Refreshing network info cache for port e18648ba-6112-40fa-85f6-bdf82a012079 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:07:19 np0005486808 nova_compute[259627]: 2025-10-14 09:07:19.831 2 DEBUG nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Start _get_guest_xml network_info=[{"id": "e18648ba-6112-40fa-85f6-bdf82a012079", "address": "fa:16:3e:0b:9e:35", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18648ba-61", "ovs_interfaceid": "e18648ba-6112-40fa-85f6-bdf82a012079", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:07:19 np0005486808 nova_compute[259627]: 2025-10-14 09:07:19.839 2 WARNING nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:07:19 np0005486808 nova_compute[259627]: 2025-10-14 09:07:19.851 2 DEBUG nova.virt.libvirt.host [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:07:19 np0005486808 nova_compute[259627]: 2025-10-14 09:07:19.852 2 DEBUG nova.virt.libvirt.host [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:07:19 np0005486808 nova_compute[259627]: 2025-10-14 09:07:19.857 2 DEBUG nova.virt.libvirt.host [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:07:19 np0005486808 nova_compute[259627]: 2025-10-14 09:07:19.858 2 DEBUG nova.virt.libvirt.host [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:07:19 np0005486808 nova_compute[259627]: 2025-10-14 09:07:19.859 2 DEBUG nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:07:19 np0005486808 nova_compute[259627]: 2025-10-14 09:07:19.859 2 DEBUG nova.virt.hardware [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:07:19 np0005486808 nova_compute[259627]: 2025-10-14 09:07:19.860 2 DEBUG nova.virt.hardware [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:07:19 np0005486808 nova_compute[259627]: 2025-10-14 09:07:19.861 2 DEBUG nova.virt.hardware [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:07:19 np0005486808 nova_compute[259627]: 2025-10-14 09:07:19.861 2 DEBUG nova.virt.hardware [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:07:19 np0005486808 nova_compute[259627]: 2025-10-14 09:07:19.862 2 DEBUG nova.virt.hardware [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:07:19 np0005486808 nova_compute[259627]: 2025-10-14 09:07:19.862 2 DEBUG nova.virt.hardware [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:07:19 np0005486808 nova_compute[259627]: 2025-10-14 09:07:19.862 2 DEBUG nova.virt.hardware [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:07:19 np0005486808 nova_compute[259627]: 2025-10-14 09:07:19.863 2 DEBUG nova.virt.hardware [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:07:19 np0005486808 nova_compute[259627]: 2025-10-14 09:07:19.863 2 DEBUG nova.virt.hardware [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:07:19 np0005486808 nova_compute[259627]: 2025-10-14 09:07:19.864 2 DEBUG nova.virt.hardware [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:07:19 np0005486808 nova_compute[259627]: 2025-10-14 09:07:19.864 2 DEBUG nova.virt.hardware [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:07:19 np0005486808 nova_compute[259627]: 2025-10-14 09:07:19.869 2 DEBUG oslo_concurrency.processutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:07:19 np0005486808 nova_compute[259627]: 2025-10-14 09:07:19.921 2 INFO nova.virt.libvirt.driver [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Beginning cold snapshot process#033[00m
Oct 14 05:07:20 np0005486808 nova_compute[259627]: 2025-10-14 09:07:20.080 2 DEBUG nova.virt.libvirt.imagebackend [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] No parent info for a4789543-f429-47d7-9f79-80a9d90a59f9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct 14 05:07:20 np0005486808 nova_compute[259627]: 2025-10-14 09:07:20.335 2 DEBUG nova.storage.rbd_utils [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] creating snapshot(0ec03d9c9ffa40e4b2f3296f6173a878) on rbd image(c8d53ba7-c60f-4e5c-899f-fd95996ea742_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 14 05:07:20 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:07:20 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3620390014' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:07:20 np0005486808 nova_compute[259627]: 2025-10-14 09:07:20.405 2 DEBUG oslo_concurrency.processutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:07:20 np0005486808 nova_compute[259627]: 2025-10-14 09:07:20.424 2 DEBUG nova.storage.rbd_utils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] rbd image e065d857-2df9-4199-aa98-41ca3c436bad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:07:20 np0005486808 nova_compute[259627]: 2025-10-14 09:07:20.428 2 DEBUG oslo_concurrency.processutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:07:20 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e218 do_prune osdmap full prune enabled
Oct 14 05:07:20 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e219 e219: 3 total, 3 up, 3 in
Oct 14 05:07:20 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e219: 3 total, 3 up, 3 in
Oct 14 05:07:20 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:07:20 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/107769105' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:07:20 np0005486808 nova_compute[259627]: 2025-10-14 09:07:20.869 2 DEBUG oslo_concurrency.processutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:07:20 np0005486808 nova_compute[259627]: 2025-10-14 09:07:20.872 2 DEBUG nova.virt.libvirt.vif [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:07:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1278548098',display_name='tempest-ServerActionsTestOtherB-server-1278548098',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1278548098',id=64,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN/kVgKkHzFM6KgYtJMEi52k+/MuBrPIt79IRFIgFmNTlVvXooEFluDr37nozPBAZXSiIdNHa7h8jeIafiglGDw1A5mNs3hIQ2Rxweba0GKcdCWJKvOM6RPyHsBm/r09+g==',key_name='tempest-keypair-1307751836',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4bda6775f81f403e83269a5f798c9853',ramdisk_id='',reservation_id='r-j6ifs0px',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-381012378',owner_user_name='tempest-ServerActionsTestOtherB-381012378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:07:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='695c749a8dce4506a31e2cec4f02876b',uuid=e065d857-2df9-4199-aa98-41ca3c436bad,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e18648ba-6112-40fa-85f6-bdf82a012079", "address": "fa:16:3e:0b:9e:35", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18648ba-61", "ovs_interfaceid": "e18648ba-6112-40fa-85f6-bdf82a012079", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:07:20 np0005486808 nova_compute[259627]: 2025-10-14 09:07:20.872 2 DEBUG nova.network.os_vif_util [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Converting VIF {"id": "e18648ba-6112-40fa-85f6-bdf82a012079", "address": "fa:16:3e:0b:9e:35", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18648ba-61", "ovs_interfaceid": "e18648ba-6112-40fa-85f6-bdf82a012079", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:07:20 np0005486808 nova_compute[259627]: 2025-10-14 09:07:20.874 2 DEBUG nova.network.os_vif_util [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:9e:35,bridge_name='br-int',has_traffic_filtering=True,id=e18648ba-6112-40fa-85f6-bdf82a012079,network=Network(9d540b01-e9c4-4dc5-9a51-94512ad9a409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape18648ba-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:07:20 np0005486808 nova_compute[259627]: 2025-10-14 09:07:20.876 2 DEBUG nova.objects.instance [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lazy-loading 'pci_devices' on Instance uuid e065d857-2df9-4199-aa98-41ca3c436bad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:07:20 np0005486808 nova_compute[259627]: 2025-10-14 09:07:20.894 2 DEBUG nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:07:20 np0005486808 nova_compute[259627]:  <uuid>e065d857-2df9-4199-aa98-41ca3c436bad</uuid>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:  <name>instance-00000040</name>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:07:20 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:      <nova:name>tempest-ServerActionsTestOtherB-server-1278548098</nova:name>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:07:19</nova:creationTime>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:07:20 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:        <nova:user uuid="695c749a8dce4506a31e2cec4f02876b">tempest-ServerActionsTestOtherB-381012378-project-member</nova:user>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:        <nova:project uuid="4bda6775f81f403e83269a5f798c9853">tempest-ServerActionsTestOtherB-381012378</nova:project>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:        <nova:port uuid="e18648ba-6112-40fa-85f6-bdf82a012079">
Oct 14 05:07:20 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:      <entry name="serial">e065d857-2df9-4199-aa98-41ca3c436bad</entry>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:      <entry name="uuid">e065d857-2df9-4199-aa98-41ca3c436bad</entry>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:07:20 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/e065d857-2df9-4199-aa98-41ca3c436bad_disk">
Oct 14 05:07:20 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:07:20 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:07:20 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/e065d857-2df9-4199-aa98-41ca3c436bad_disk.config">
Oct 14 05:07:20 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:07:20 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:07:20 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:0b:9e:35"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:      <target dev="tape18648ba-61"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:07:20 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/e065d857-2df9-4199-aa98-41ca3c436bad/console.log" append="off"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:07:20 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:07:20 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:07:20 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:07:20 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:07:20 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:07:20 np0005486808 nova_compute[259627]: 2025-10-14 09:07:20.896 2 DEBUG nova.compute.manager [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Preparing to wait for external event network-vif-plugged-e18648ba-6112-40fa-85f6-bdf82a012079 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:07:20 np0005486808 nova_compute[259627]: 2025-10-14 09:07:20.896 2 DEBUG oslo_concurrency.lockutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Acquiring lock "e065d857-2df9-4199-aa98-41ca3c436bad-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:07:20 np0005486808 nova_compute[259627]: 2025-10-14 09:07:20.897 2 DEBUG oslo_concurrency.lockutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "e065d857-2df9-4199-aa98-41ca3c436bad-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:07:20 np0005486808 nova_compute[259627]: 2025-10-14 09:07:20.897 2 DEBUG oslo_concurrency.lockutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "e065d857-2df9-4199-aa98-41ca3c436bad-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:07:20 np0005486808 nova_compute[259627]: 2025-10-14 09:07:20.898 2 DEBUG nova.virt.libvirt.vif [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:07:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1278548098',display_name='tempest-ServerActionsTestOtherB-server-1278548098',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1278548098',id=64,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN/kVgKkHzFM6KgYtJMEi52k+/MuBrPIt79IRFIgFmNTlVvXooEFluDr37nozPBAZXSiIdNHa7h8jeIafiglGDw1A5mNs3hIQ2Rxweba0GKcdCWJKvOM6RPyHsBm/r09+g==',key_name='tempest-keypair-1307751836',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4bda6775f81f403e83269a5f798c9853',ramdisk_id='',reservation_id='r-j6ifs0px',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-381012378',owner_user_name='tempest-ServerActionsTestOtherB-381012378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:07:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='695c749a8dce4506a31e2cec4f02876b',uuid=e065d857-2df9-4199-aa98-41ca3c436bad,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e18648ba-6112-40fa-85f6-bdf82a012079", "address": "fa:16:3e:0b:9e:35", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18648ba-61", "ovs_interfaceid": "e18648ba-6112-40fa-85f6-bdf82a012079", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:07:20 np0005486808 nova_compute[259627]: 2025-10-14 09:07:20.899 2 DEBUG nova.network.os_vif_util [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Converting VIF {"id": "e18648ba-6112-40fa-85f6-bdf82a012079", "address": "fa:16:3e:0b:9e:35", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18648ba-61", "ovs_interfaceid": "e18648ba-6112-40fa-85f6-bdf82a012079", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:07:20 np0005486808 nova_compute[259627]: 2025-10-14 09:07:20.899 2 DEBUG nova.network.os_vif_util [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:9e:35,bridge_name='br-int',has_traffic_filtering=True,id=e18648ba-6112-40fa-85f6-bdf82a012079,network=Network(9d540b01-e9c4-4dc5-9a51-94512ad9a409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape18648ba-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:07:20 np0005486808 nova_compute[259627]: 2025-10-14 09:07:20.900 2 DEBUG os_vif [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:9e:35,bridge_name='br-int',has_traffic_filtering=True,id=e18648ba-6112-40fa-85f6-bdf82a012079,network=Network(9d540b01-e9c4-4dc5-9a51-94512ad9a409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape18648ba-61') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:07:20 np0005486808 nova_compute[259627]: 2025-10-14 09:07:20.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:20 np0005486808 nova_compute[259627]: 2025-10-14 09:07:20.902 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:07:20 np0005486808 nova_compute[259627]: 2025-10-14 09:07:20.903 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:07:20 np0005486808 nova_compute[259627]: 2025-10-14 09:07:20.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:20 np0005486808 nova_compute[259627]: 2025-10-14 09:07:20.907 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape18648ba-61, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:07:20 np0005486808 nova_compute[259627]: 2025-10-14 09:07:20.908 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape18648ba-61, col_values=(('external_ids', {'iface-id': 'e18648ba-6112-40fa-85f6-bdf82a012079', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0b:9e:35', 'vm-uuid': 'e065d857-2df9-4199-aa98-41ca3c436bad'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:07:20 np0005486808 nova_compute[259627]: 2025-10-14 09:07:20.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:20 np0005486808 NetworkManager[44885]: <info>  [1760432840.9115] manager: (tape18648ba-61): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/273)
Oct 14 05:07:20 np0005486808 nova_compute[259627]: 2025-10-14 09:07:20.920 2 DEBUG nova.storage.rbd_utils [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] cloning vms/c8d53ba7-c60f-4e5c-899f-fd95996ea742_disk@0ec03d9c9ffa40e4b2f3296f6173a878 to images/64285d7a-1987-45f5-9cf9-f0e67a4d8856 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct 14 05:07:20 np0005486808 nova_compute[259627]: 2025-10-14 09:07:20.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:20 np0005486808 nova_compute[259627]: 2025-10-14 09:07:20.969 2 INFO os_vif [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:9e:35,bridge_name='br-int',has_traffic_filtering=True,id=e18648ba-6112-40fa-85f6-bdf82a012079,network=Network(9d540b01-e9c4-4dc5-9a51-94512ad9a409),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape18648ba-61')#033[00m
Oct 14 05:07:20 np0005486808 nova_compute[259627]: 2025-10-14 09:07:20.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:07:21 np0005486808 nova_compute[259627]: 2025-10-14 09:07:21.043 2 DEBUG nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:07:21 np0005486808 nova_compute[259627]: 2025-10-14 09:07:21.044 2 DEBUG nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:07:21 np0005486808 nova_compute[259627]: 2025-10-14 09:07:21.044 2 DEBUG nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] No VIF found with MAC fa:16:3e:0b:9e:35, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:07:21 np0005486808 nova_compute[259627]: 2025-10-14 09:07:21.045 2 INFO nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Using config drive#033[00m
Oct 14 05:07:21 np0005486808 nova_compute[259627]: 2025-10-14 09:07:21.077 2 DEBUG nova.storage.rbd_utils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] rbd image e065d857-2df9-4199-aa98-41ca3c436bad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:07:21 np0005486808 nova_compute[259627]: 2025-10-14 09:07:21.092 2 DEBUG nova.storage.rbd_utils [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] flattening images/64285d7a-1987-45f5-9cf9-f0e67a4d8856 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct 14 05:07:21 np0005486808 nova_compute[259627]: 2025-10-14 09:07:21.500 2 DEBUG nova.compute.manager [req-3e7aa7a7-98f1-4e95-9715-b7510752d56a req-ae085b6d-b8e4-4c15-889a-9ec37d9b4e72 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Received event network-vif-unplugged-93e6162a-d037-4440-9c8c-1cb9b293f249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:07:21 np0005486808 nova_compute[259627]: 2025-10-14 09:07:21.500 2 DEBUG oslo_concurrency.lockutils [req-3e7aa7a7-98f1-4e95-9715-b7510752d56a req-ae085b6d-b8e4-4c15-889a-9ec37d9b4e72 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c8d53ba7-c60f-4e5c-899f-fd95996ea742-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:07:21 np0005486808 nova_compute[259627]: 2025-10-14 09:07:21.501 2 DEBUG oslo_concurrency.lockutils [req-3e7aa7a7-98f1-4e95-9715-b7510752d56a req-ae085b6d-b8e4-4c15-889a-9ec37d9b4e72 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c8d53ba7-c60f-4e5c-899f-fd95996ea742-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:07:21 np0005486808 nova_compute[259627]: 2025-10-14 09:07:21.501 2 DEBUG oslo_concurrency.lockutils [req-3e7aa7a7-98f1-4e95-9715-b7510752d56a req-ae085b6d-b8e4-4c15-889a-9ec37d9b4e72 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c8d53ba7-c60f-4e5c-899f-fd95996ea742-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:07:21 np0005486808 nova_compute[259627]: 2025-10-14 09:07:21.501 2 DEBUG nova.compute.manager [req-3e7aa7a7-98f1-4e95-9715-b7510752d56a req-ae085b6d-b8e4-4c15-889a-9ec37d9b4e72 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] No waiting events found dispatching network-vif-unplugged-93e6162a-d037-4440-9c8c-1cb9b293f249 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:07:21 np0005486808 nova_compute[259627]: 2025-10-14 09:07:21.501 2 WARNING nova.compute.manager [req-3e7aa7a7-98f1-4e95-9715-b7510752d56a req-ae085b6d-b8e4-4c15-889a-9ec37d9b4e72 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Received unexpected event network-vif-unplugged-93e6162a-d037-4440-9c8c-1cb9b293f249 for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Oct 14 05:07:21 np0005486808 nova_compute[259627]: 2025-10-14 09:07:21.574 2 DEBUG nova.storage.rbd_utils [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] removing snapshot(0ec03d9c9ffa40e4b2f3296f6173a878) on rbd image(c8d53ba7-c60f-4e5c-899f-fd95996ea742_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct 14 05:07:21 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1554: 305 pgs: 305 active+clean; 167 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 491 KiB/s rd, 4.7 MiB/s wr, 118 op/s
Oct 14 05:07:21 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e219 do_prune osdmap full prune enabled
Oct 14 05:07:21 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e220 e220: 3 total, 3 up, 3 in
Oct 14 05:07:21 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e220: 3 total, 3 up, 3 in
Oct 14 05:07:21 np0005486808 nova_compute[259627]: 2025-10-14 09:07:21.890 2 DEBUG nova.storage.rbd_utils [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] creating snapshot(snap) on rbd image(64285d7a-1987-45f5-9cf9-f0e67a4d8856) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 14 05:07:21 np0005486808 nova_compute[259627]: 2025-10-14 09:07:21.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:07:22 np0005486808 nova_compute[259627]: 2025-10-14 09:07:22.141 2 DEBUG nova.network.neutron [req-a7d5f779-4abe-4596-8ae3-5e31059778bc req-945f8585-9151-4a0a-a18a-25120e212d54 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Updated VIF entry in instance network info cache for port e18648ba-6112-40fa-85f6-bdf82a012079. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:07:22 np0005486808 nova_compute[259627]: 2025-10-14 09:07:22.142 2 DEBUG nova.network.neutron [req-a7d5f779-4abe-4596-8ae3-5e31059778bc req-945f8585-9151-4a0a-a18a-25120e212d54 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Updating instance_info_cache with network_info: [{"id": "e18648ba-6112-40fa-85f6-bdf82a012079", "address": "fa:16:3e:0b:9e:35", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18648ba-61", "ovs_interfaceid": "e18648ba-6112-40fa-85f6-bdf82a012079", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:07:22 np0005486808 nova_compute[259627]: 2025-10-14 09:07:22.160 2 DEBUG oslo_concurrency.lockutils [req-a7d5f779-4abe-4596-8ae3-5e31059778bc req-945f8585-9151-4a0a-a18a-25120e212d54 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-e065d857-2df9-4199-aa98-41ca3c436bad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:07:22 np0005486808 nova_compute[259627]: 2025-10-14 09:07:22.186 2 INFO nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Creating config drive at /var/lib/nova/instances/e065d857-2df9-4199-aa98-41ca3c436bad/disk.config#033[00m
Oct 14 05:07:22 np0005486808 nova_compute[259627]: 2025-10-14 09:07:22.191 2 DEBUG oslo_concurrency.processutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e065d857-2df9-4199-aa98-41ca3c436bad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp85ehh5sb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:07:22 np0005486808 nova_compute[259627]: 2025-10-14 09:07:22.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:22 np0005486808 nova_compute[259627]: 2025-10-14 09:07:22.348 2 DEBUG oslo_concurrency.processutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e065d857-2df9-4199-aa98-41ca3c436bad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp85ehh5sb" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:07:22 np0005486808 nova_compute[259627]: 2025-10-14 09:07:22.379 2 DEBUG nova.storage.rbd_utils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] rbd image e065d857-2df9-4199-aa98-41ca3c436bad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:07:22 np0005486808 nova_compute[259627]: 2025-10-14 09:07:22.384 2 DEBUG oslo_concurrency.processutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e065d857-2df9-4199-aa98-41ca3c436bad/disk.config e065d857-2df9-4199-aa98-41ca3c436bad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:07:22 np0005486808 nova_compute[259627]: 2025-10-14 09:07:22.589 2 DEBUG oslo_concurrency.processutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e065d857-2df9-4199-aa98-41ca3c436bad/disk.config e065d857-2df9-4199-aa98-41ca3c436bad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.205s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:07:22 np0005486808 nova_compute[259627]: 2025-10-14 09:07:22.590 2 INFO nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Deleting local config drive /var/lib/nova/instances/e065d857-2df9-4199-aa98-41ca3c436bad/disk.config because it was imported into RBD.#033[00m
Oct 14 05:07:22 np0005486808 kernel: tape18648ba-61: entered promiscuous mode
Oct 14 05:07:22 np0005486808 NetworkManager[44885]: <info>  [1760432842.6495] manager: (tape18648ba-61): new Tun device (/org/freedesktop/NetworkManager/Devices/274)
Oct 14 05:07:22 np0005486808 ovn_controller[152662]: 2025-10-14T09:07:22Z|00652|binding|INFO|Claiming lport e18648ba-6112-40fa-85f6-bdf82a012079 for this chassis.
Oct 14 05:07:22 np0005486808 nova_compute[259627]: 2025-10-14 09:07:22.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:22 np0005486808 ovn_controller[152662]: 2025-10-14T09:07:22Z|00653|binding|INFO|e18648ba-6112-40fa-85f6-bdf82a012079: Claiming fa:16:3e:0b:9e:35 10.100.0.9
Oct 14 05:07:22 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.663 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:9e:35 10.100.0.9'], port_security=['fa:16:3e:0b:9e:35 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e065d857-2df9-4199-aa98-41ca3c436bad', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9d540b01-e9c4-4dc5-9a51-94512ad9a409', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4bda6775f81f403e83269a5f798c9853', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'baab55cf-9843-49b9-a43b-28ca1ab122c2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=88e90b59-4c4c-42c1-a4ed-574ac64367e5, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=e18648ba-6112-40fa-85f6-bdf82a012079) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:07:22 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.664 162547 INFO neutron.agent.ovn.metadata.agent [-] Port e18648ba-6112-40fa-85f6-bdf82a012079 in datapath 9d540b01-e9c4-4dc5-9a51-94512ad9a409 bound to our chassis#033[00m
Oct 14 05:07:22 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.666 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9d540b01-e9c4-4dc5-9a51-94512ad9a409#033[00m
Oct 14 05:07:22 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.679 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ba9883f8-3397-42d6-860c-3bf33d181c83]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:22 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.681 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9d540b01-e1 in ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:07:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:07:22 np0005486808 systemd-machined[214636]: New machine qemu-78-instance-00000040.
Oct 14 05:07:22 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.683 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9d540b01-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:07:22 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.683 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9978c729-1597-48c4-bb26-61dd715ad288]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:22 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.684 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c0bb5ce0-de24-4739-b664-3463c4a63f76]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:22 np0005486808 podman[327502]: 2025-10-14 09:07:22.689457158 +0000 UTC m=+0.093981836 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:07:22 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.700 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[323d18cc-cbbc-4f96-bffb-d11b97d61a74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:22 np0005486808 systemd[1]: Started Virtual Machine qemu-78-instance-00000040.
Oct 14 05:07:22 np0005486808 systemd-udevd[327562]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:07:22 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.725 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[72267736-3691-4925-b339-771102f2cc97]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:22 np0005486808 nova_compute[259627]: 2025-10-14 09:07:22.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:22 np0005486808 ovn_controller[152662]: 2025-10-14T09:07:22Z|00654|binding|INFO|Setting lport e18648ba-6112-40fa-85f6-bdf82a012079 ovn-installed in OVS
Oct 14 05:07:22 np0005486808 ovn_controller[152662]: 2025-10-14T09:07:22Z|00655|binding|INFO|Setting lport e18648ba-6112-40fa-85f6-bdf82a012079 up in Southbound
Oct 14 05:07:22 np0005486808 podman[327501]: 2025-10-14 09:07:22.738090125 +0000 UTC m=+0.141860944 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 14 05:07:22 np0005486808 NetworkManager[44885]: <info>  [1760432842.7422] device (tape18648ba-61): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:07:22 np0005486808 NetworkManager[44885]: <info>  [1760432842.7433] device (tape18648ba-61): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:07:22 np0005486808 nova_compute[259627]: 2025-10-14 09:07:22.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:22 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.761 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[cdcc2f1d-0d4f-4a73-bfb0-1c8dc0862df7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:22 np0005486808 systemd-udevd[327573]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:07:22 np0005486808 NetworkManager[44885]: <info>  [1760432842.7670] manager: (tap9d540b01-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/275)
Oct 14 05:07:22 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.768 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[71ecb441-2634-4630-bc58-c4687b912e34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:22 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.796 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ff0256ed-8efd-450d-b066-4250373be3ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:22 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.799 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[91b71b32-a17e-4733-ad2f-5d877c1c0981]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:22 np0005486808 NetworkManager[44885]: <info>  [1760432842.8197] device (tap9d540b01-e0): carrier: link connected
Oct 14 05:07:22 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.825 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[fda78a3f-147b-4035-a5b6-9dbae9f6df83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:22 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.839 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e26d7dcc-7503-4eb7-bf9c-83181bf89450]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9d540b01-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:6a:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 189], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662158, 'reachable_time': 22381, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327597, 'error': None, 'target': 'ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e220 do_prune osdmap full prune enabled
Oct 14 05:07:22 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.856 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f70463fa-cb58-4e13-a85f-8e8364e57479]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe63:6aec'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662158, 'tstamp': 662158}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327598, 'error': None, 'target': 'ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e221 e221: 3 total, 3 up, 3 in
Oct 14 05:07:22 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.870 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[17ebc2c2-ace0-460c-870a-bdb9d04e323a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9d540b01-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:6a:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 189], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662158, 'reachable_time': 22381, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 327599, 'error': None, 'target': 'ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:22 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e221: 3 total, 3 up, 3 in
Oct 14 05:07:22 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.903 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[82698df5-b05f-4fea-8126-9356de91dd93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:22 np0005486808 nova_compute[259627]: 2025-10-14 09:07:22.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:07:22 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.981 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6c4a500e-89a5-48b1-92ea-86c6c978839d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:22 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.982 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d540b01-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:07:22 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.983 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:07:22 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.983 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9d540b01-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:07:22 np0005486808 NetworkManager[44885]: <info>  [1760432842.9864] manager: (tap9d540b01-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/276)
Oct 14 05:07:22 np0005486808 kernel: tap9d540b01-e0: entered promiscuous mode
Oct 14 05:07:22 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:22.992 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9d540b01-e0, col_values=(('external_ids', {'iface-id': 'fcca615a-5470-4880-844d-73adc425bce1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:07:22 np0005486808 ovn_controller[152662]: 2025-10-14T09:07:22Z|00656|binding|INFO|Releasing lport fcca615a-5470-4880-844d-73adc425bce1 from this chassis (sb_readonly=0)
Oct 14 05:07:23 np0005486808 nova_compute[259627]: 2025-10-14 09:07:23.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:23 np0005486808 nova_compute[259627]: 2025-10-14 09:07:23.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:23.060 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9d540b01-e9c4-4dc5-9a51-94512ad9a409.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9d540b01-e9c4-4dc5-9a51-94512ad9a409.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:07:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:23.062 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[591caace-b4df-4d4f-93a6-15e9e7264bcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:23.063 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:07:23 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:07:23 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:07:23 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-9d540b01-e9c4-4dc5-9a51-94512ad9a409
Oct 14 05:07:23 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:07:23 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:07:23 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:07:23 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/9d540b01-e9c4-4dc5-9a51-94512ad9a409.pid.haproxy
Oct 14 05:07:23 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:07:23 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:07:23 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:07:23 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:07:23 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:07:23 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:07:23 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:07:23 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:07:23 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:07:23 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:07:23 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:07:23 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:07:23 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:07:23 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:07:23 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:07:23 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:07:23 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:07:23 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:07:23 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:07:23 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:07:23 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 9d540b01-e9c4-4dc5-9a51-94512ad9a409
Oct 14 05:07:23 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:07:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:23.067 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409', 'env', 'PROCESS_TAG=haproxy-9d540b01-e9c4-4dc5-9a51-94512ad9a409', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9d540b01-e9c4-4dc5-9a51-94512ad9a409.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:07:23 np0005486808 nova_compute[259627]: 2025-10-14 09:07:23.163 2 DEBUG nova.compute.manager [req-34525acc-0d9b-4ebc-bb56-12bcfdec9f58 req-7afacfb9-3892-4370-b029-e68c60e5862c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Received event network-vif-plugged-e18648ba-6112-40fa-85f6-bdf82a012079 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:07:23 np0005486808 nova_compute[259627]: 2025-10-14 09:07:23.164 2 DEBUG oslo_concurrency.lockutils [req-34525acc-0d9b-4ebc-bb56-12bcfdec9f58 req-7afacfb9-3892-4370-b029-e68c60e5862c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e065d857-2df9-4199-aa98-41ca3c436bad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:07:23 np0005486808 nova_compute[259627]: 2025-10-14 09:07:23.164 2 DEBUG oslo_concurrency.lockutils [req-34525acc-0d9b-4ebc-bb56-12bcfdec9f58 req-7afacfb9-3892-4370-b029-e68c60e5862c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e065d857-2df9-4199-aa98-41ca3c436bad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:07:23 np0005486808 nova_compute[259627]: 2025-10-14 09:07:23.166 2 DEBUG oslo_concurrency.lockutils [req-34525acc-0d9b-4ebc-bb56-12bcfdec9f58 req-7afacfb9-3892-4370-b029-e68c60e5862c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e065d857-2df9-4199-aa98-41ca3c436bad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:07:23 np0005486808 nova_compute[259627]: 2025-10-14 09:07:23.167 2 DEBUG nova.compute.manager [req-34525acc-0d9b-4ebc-bb56-12bcfdec9f58 req-7afacfb9-3892-4370-b029-e68c60e5862c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Processing event network-vif-plugged-e18648ba-6112-40fa-85f6-bdf82a012079 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:07:23 np0005486808 podman[327677]: 2025-10-14 09:07:23.452662454 +0000 UTC m=+0.072834675 container create d7ba7ebc0e91d20684f519c582dc85351814f5e3e7324b64ad8f9ffcff668f4a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:07:23 np0005486808 systemd[1]: Started libpod-conmon-d7ba7ebc0e91d20684f519c582dc85351814f5e3e7324b64ad8f9ffcff668f4a.scope.
Oct 14 05:07:23 np0005486808 podman[327677]: 2025-10-14 09:07:23.409757077 +0000 UTC m=+0.029929398 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:07:23 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:07:23 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/670696918cb8387e8757572bf34cc188f3c339f0a3acc9d8f14f93b3255809dc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:07:23 np0005486808 podman[327677]: 2025-10-14 09:07:23.549942499 +0000 UTC m=+0.170114720 container init d7ba7ebc0e91d20684f519c582dc85351814f5e3e7324b64ad8f9ffcff668f4a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:07:23 np0005486808 podman[327677]: 2025-10-14 09:07:23.559390612 +0000 UTC m=+0.179562833 container start d7ba7ebc0e91d20684f519c582dc85351814f5e3e7324b64ad8f9ffcff668f4a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 14 05:07:23 np0005486808 neutron-haproxy-ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409[327692]: [NOTICE]   (327696) : New worker (327698) forked
Oct 14 05:07:23 np0005486808 neutron-haproxy-ovnmeta-9d540b01-e9c4-4dc5-9a51-94512ad9a409[327692]: [NOTICE]   (327696) : Loading success.
Oct 14 05:07:23 np0005486808 nova_compute[259627]: 2025-10-14 09:07:23.606 2 DEBUG nova.compute.manager [req-8a624555-d31d-497f-9ebb-94bebd0f798f req-419e2f16-55c9-449e-a97f-fcfc1a2f1386 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Received event network-vif-plugged-93e6162a-d037-4440-9c8c-1cb9b293f249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:07:23 np0005486808 nova_compute[259627]: 2025-10-14 09:07:23.606 2 DEBUG oslo_concurrency.lockutils [req-8a624555-d31d-497f-9ebb-94bebd0f798f req-419e2f16-55c9-449e-a97f-fcfc1a2f1386 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c8d53ba7-c60f-4e5c-899f-fd95996ea742-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:07:23 np0005486808 nova_compute[259627]: 2025-10-14 09:07:23.606 2 DEBUG oslo_concurrency.lockutils [req-8a624555-d31d-497f-9ebb-94bebd0f798f req-419e2f16-55c9-449e-a97f-fcfc1a2f1386 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c8d53ba7-c60f-4e5c-899f-fd95996ea742-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:07:23 np0005486808 nova_compute[259627]: 2025-10-14 09:07:23.606 2 DEBUG oslo_concurrency.lockutils [req-8a624555-d31d-497f-9ebb-94bebd0f798f req-419e2f16-55c9-449e-a97f-fcfc1a2f1386 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c8d53ba7-c60f-4e5c-899f-fd95996ea742-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:07:23 np0005486808 nova_compute[259627]: 2025-10-14 09:07:23.606 2 DEBUG nova.compute.manager [req-8a624555-d31d-497f-9ebb-94bebd0f798f req-419e2f16-55c9-449e-a97f-fcfc1a2f1386 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] No waiting events found dispatching network-vif-plugged-93e6162a-d037-4440-9c8c-1cb9b293f249 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:07:23 np0005486808 nova_compute[259627]: 2025-10-14 09:07:23.607 2 WARNING nova.compute.manager [req-8a624555-d31d-497f-9ebb-94bebd0f798f req-419e2f16-55c9-449e-a97f-fcfc1a2f1386 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Received unexpected event network-vif-plugged-93e6162a-d037-4440-9c8c-1cb9b293f249 for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Oct 14 05:07:23 np0005486808 nova_compute[259627]: 2025-10-14 09:07:23.686 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432828.6852503, a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:07:23 np0005486808 nova_compute[259627]: 2025-10-14 09:07:23.687 2 INFO nova.compute.manager [-] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:07:23 np0005486808 nova_compute[259627]: 2025-10-14 09:07:23.711 2 DEBUG nova.compute.manager [None req-713cd2c8-4ff2-4fd7-873c-844becfb6b10 - - - - - -] [instance: a6fdf3ba-aaa5-4c33-9f9d-d951cf22bfb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:07:23 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1557: 305 pgs: 305 active+clean; 167 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 348 KiB/s rd, 3.8 MiB/s wr, 110 op/s
Oct 14 05:07:23 np0005486808 nova_compute[259627]: 2025-10-14 09:07:23.792 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432843.7921097, e065d857-2df9-4199-aa98-41ca3c436bad => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:07:23 np0005486808 nova_compute[259627]: 2025-10-14 09:07:23.792 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] VM Started (Lifecycle Event)#033[00m
Oct 14 05:07:23 np0005486808 nova_compute[259627]: 2025-10-14 09:07:23.794 2 DEBUG nova.compute.manager [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:07:23 np0005486808 nova_compute[259627]: 2025-10-14 09:07:23.800 2 DEBUG nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:07:23 np0005486808 nova_compute[259627]: 2025-10-14 09:07:23.805 2 INFO nova.virt.libvirt.driver [-] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Instance spawned successfully.#033[00m
Oct 14 05:07:23 np0005486808 nova_compute[259627]: 2025-10-14 09:07:23.805 2 DEBUG nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:07:23 np0005486808 nova_compute[259627]: 2025-10-14 09:07:23.816 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:07:23 np0005486808 nova_compute[259627]: 2025-10-14 09:07:23.827 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:07:23 np0005486808 nova_compute[259627]: 2025-10-14 09:07:23.834 2 DEBUG nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:07:23 np0005486808 nova_compute[259627]: 2025-10-14 09:07:23.835 2 DEBUG nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:07:23 np0005486808 nova_compute[259627]: 2025-10-14 09:07:23.836 2 DEBUG nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:07:23 np0005486808 nova_compute[259627]: 2025-10-14 09:07:23.836 2 DEBUG nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:07:23 np0005486808 nova_compute[259627]: 2025-10-14 09:07:23.837 2 DEBUG nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:07:23 np0005486808 nova_compute[259627]: 2025-10-14 09:07:23.838 2 DEBUG nova.virt.libvirt.driver [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:07:23 np0005486808 nova_compute[259627]: 2025-10-14 09:07:23.850 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:07:23 np0005486808 nova_compute[259627]: 2025-10-14 09:07:23.851 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432843.796003, e065d857-2df9-4199-aa98-41ca3c436bad => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:07:23 np0005486808 nova_compute[259627]: 2025-10-14 09:07:23.851 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:07:23 np0005486808 nova_compute[259627]: 2025-10-14 09:07:23.880 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:07:23 np0005486808 nova_compute[259627]: 2025-10-14 09:07:23.885 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432843.8001757, e065d857-2df9-4199-aa98-41ca3c436bad => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:07:23 np0005486808 nova_compute[259627]: 2025-10-14 09:07:23.885 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:07:23 np0005486808 nova_compute[259627]: 2025-10-14 09:07:23.908 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:07:23 np0005486808 nova_compute[259627]: 2025-10-14 09:07:23.912 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:07:23 np0005486808 nova_compute[259627]: 2025-10-14 09:07:23.916 2 INFO nova.compute.manager [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Took 7.54 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:07:23 np0005486808 nova_compute[259627]: 2025-10-14 09:07:23.916 2 DEBUG nova.compute.manager [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:07:23 np0005486808 nova_compute[259627]: 2025-10-14 09:07:23.929 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:07:23 np0005486808 nova_compute[259627]: 2025-10-14 09:07:23.973 2 INFO nova.compute.manager [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Took 8.47 seconds to build instance.#033[00m
Oct 14 05:07:23 np0005486808 nova_compute[259627]: 2025-10-14 09:07:23.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:07:23 np0005486808 nova_compute[259627]: 2025-10-14 09:07:23.989 2 DEBUG oslo_concurrency.lockutils [None req-0bd2df33-2769-4fb0-bc4c-a3a09fe79e76 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lock "e065d857-2df9-4199-aa98-41ca3c436bad" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:07:24 np0005486808 nova_compute[259627]: 2025-10-14 09:07:24.680 2 INFO nova.virt.libvirt.driver [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Snapshot image upload complete#033[00m
Oct 14 05:07:24 np0005486808 nova_compute[259627]: 2025-10-14 09:07:24.680 2 DEBUG nova.compute.manager [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:07:24 np0005486808 nova_compute[259627]: 2025-10-14 09:07:24.727 2 INFO nova.compute.manager [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Shelve offloading#033[00m
Oct 14 05:07:24 np0005486808 nova_compute[259627]: 2025-10-14 09:07:24.733 2 INFO nova.virt.libvirt.driver [-] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Instance destroyed successfully.#033[00m
Oct 14 05:07:24 np0005486808 nova_compute[259627]: 2025-10-14 09:07:24.733 2 DEBUG nova.compute.manager [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:07:24 np0005486808 nova_compute[259627]: 2025-10-14 09:07:24.736 2 DEBUG oslo_concurrency.lockutils [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "refresh_cache-c8d53ba7-c60f-4e5c-899f-fd95996ea742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:07:24 np0005486808 nova_compute[259627]: 2025-10-14 09:07:24.736 2 DEBUG oslo_concurrency.lockutils [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquired lock "refresh_cache-c8d53ba7-c60f-4e5c-899f-fd95996ea742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:07:24 np0005486808 nova_compute[259627]: 2025-10-14 09:07:24.736 2 DEBUG nova.network.neutron [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:07:24 np0005486808 nova_compute[259627]: 2025-10-14 09:07:24.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:07:24 np0005486808 nova_compute[259627]: 2025-10-14 09:07:24.997 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:07:24 np0005486808 nova_compute[259627]: 2025-10-14 09:07:24.997 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:07:24 np0005486808 nova_compute[259627]: 2025-10-14 09:07:24.998 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:07:24 np0005486808 nova_compute[259627]: 2025-10-14 09:07:24.998 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:07:24 np0005486808 nova_compute[259627]: 2025-10-14 09:07:24.998 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:07:25 np0005486808 nova_compute[259627]: 2025-10-14 09:07:25.265 2 DEBUG nova.compute.manager [req-319179a8-3a80-4a0b-83bc-7b798ee8d8fd req-18e79236-b022-4501-bc08-73d722291fab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Received event network-vif-plugged-e18648ba-6112-40fa-85f6-bdf82a012079 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:07:25 np0005486808 nova_compute[259627]: 2025-10-14 09:07:25.266 2 DEBUG oslo_concurrency.lockutils [req-319179a8-3a80-4a0b-83bc-7b798ee8d8fd req-18e79236-b022-4501-bc08-73d722291fab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e065d857-2df9-4199-aa98-41ca3c436bad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:07:25 np0005486808 nova_compute[259627]: 2025-10-14 09:07:25.267 2 DEBUG oslo_concurrency.lockutils [req-319179a8-3a80-4a0b-83bc-7b798ee8d8fd req-18e79236-b022-4501-bc08-73d722291fab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e065d857-2df9-4199-aa98-41ca3c436bad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:07:25 np0005486808 nova_compute[259627]: 2025-10-14 09:07:25.268 2 DEBUG oslo_concurrency.lockutils [req-319179a8-3a80-4a0b-83bc-7b798ee8d8fd req-18e79236-b022-4501-bc08-73d722291fab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e065d857-2df9-4199-aa98-41ca3c436bad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:07:25 np0005486808 nova_compute[259627]: 2025-10-14 09:07:25.268 2 DEBUG nova.compute.manager [req-319179a8-3a80-4a0b-83bc-7b798ee8d8fd req-18e79236-b022-4501-bc08-73d722291fab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] No waiting events found dispatching network-vif-plugged-e18648ba-6112-40fa-85f6-bdf82a012079 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:07:25 np0005486808 nova_compute[259627]: 2025-10-14 09:07:25.268 2 WARNING nova.compute.manager [req-319179a8-3a80-4a0b-83bc-7b798ee8d8fd req-18e79236-b022-4501-bc08-73d722291fab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Received unexpected event network-vif-plugged-e18648ba-6112-40fa-85f6-bdf82a012079 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:07:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:07:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2982781137' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:07:25 np0005486808 nova_compute[259627]: 2025-10-14 09:07:25.457 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:07:25 np0005486808 nova_compute[259627]: 2025-10-14 09:07:25.521 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000040 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:07:25 np0005486808 nova_compute[259627]: 2025-10-14 09:07:25.522 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000040 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:07:25 np0005486808 nova_compute[259627]: 2025-10-14 09:07:25.526 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000003f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:07:25 np0005486808 nova_compute[259627]: 2025-10-14 09:07:25.527 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000003f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:07:25 np0005486808 nova_compute[259627]: 2025-10-14 09:07:25.679 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:07:25 np0005486808 nova_compute[259627]: 2025-10-14 09:07:25.680 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3863MB free_disk=59.921966552734375GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:07:25 np0005486808 nova_compute[259627]: 2025-10-14 09:07:25.680 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:07:25 np0005486808 nova_compute[259627]: 2025-10-14 09:07:25.681 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:07:25 np0005486808 nova_compute[259627]: 2025-10-14 09:07:25.759 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance c8d53ba7-c60f-4e5c-899f-fd95996ea742 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:07:25 np0005486808 nova_compute[259627]: 2025-10-14 09:07:25.759 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance e065d857-2df9-4199-aa98-41ca3c436bad actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:07:25 np0005486808 nova_compute[259627]: 2025-10-14 09:07:25.760 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:07:25 np0005486808 nova_compute[259627]: 2025-10-14 09:07:25.761 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:07:25 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1558: 305 pgs: 305 active+clean; 246 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 12 MiB/s wr, 415 op/s
Oct 14 05:07:25 np0005486808 nova_compute[259627]: 2025-10-14 09:07:25.820 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:07:25 np0005486808 nova_compute[259627]: 2025-10-14 09:07:25.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:26 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:07:26 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1828970195' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:07:26 np0005486808 nova_compute[259627]: 2025-10-14 09:07:26.310 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:07:26 np0005486808 nova_compute[259627]: 2025-10-14 09:07:26.320 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:07:26 np0005486808 nova_compute[259627]: 2025-10-14 09:07:26.351 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:07:26 np0005486808 nova_compute[259627]: 2025-10-14 09:07:26.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:26 np0005486808 NetworkManager[44885]: <info>  [1760432846.3609] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/277)
Oct 14 05:07:26 np0005486808 NetworkManager[44885]: <info>  [1760432846.3629] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/278)
Oct 14 05:07:26 np0005486808 nova_compute[259627]: 2025-10-14 09:07:26.389 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:07:26 np0005486808 nova_compute[259627]: 2025-10-14 09:07:26.389 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:07:26 np0005486808 nova_compute[259627]: 2025-10-14 09:07:26.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:26 np0005486808 ovn_controller[152662]: 2025-10-14T09:07:26Z|00657|binding|INFO|Releasing lport fcca615a-5470-4880-844d-73adc425bce1 from this chassis (sb_readonly=0)
Oct 14 05:07:26 np0005486808 nova_compute[259627]: 2025-10-14 09:07:26.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:26 np0005486808 nova_compute[259627]: 2025-10-14 09:07:26.706 2 DEBUG nova.compute.manager [req-14231968-17b5-455d-94a3-ebda6b503b0f req-a9c31bde-c241-40d9-92f7-46d9e34fc8b5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Received event network-changed-e18648ba-6112-40fa-85f6-bdf82a012079 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:07:26 np0005486808 nova_compute[259627]: 2025-10-14 09:07:26.707 2 DEBUG nova.compute.manager [req-14231968-17b5-455d-94a3-ebda6b503b0f req-a9c31bde-c241-40d9-92f7-46d9e34fc8b5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Refreshing instance network info cache due to event network-changed-e18648ba-6112-40fa-85f6-bdf82a012079. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:07:26 np0005486808 nova_compute[259627]: 2025-10-14 09:07:26.707 2 DEBUG oslo_concurrency.lockutils [req-14231968-17b5-455d-94a3-ebda6b503b0f req-a9c31bde-c241-40d9-92f7-46d9e34fc8b5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e065d857-2df9-4199-aa98-41ca3c436bad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:07:26 np0005486808 nova_compute[259627]: 2025-10-14 09:07:26.708 2 DEBUG oslo_concurrency.lockutils [req-14231968-17b5-455d-94a3-ebda6b503b0f req-a9c31bde-c241-40d9-92f7-46d9e34fc8b5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e065d857-2df9-4199-aa98-41ca3c436bad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:07:26 np0005486808 nova_compute[259627]: 2025-10-14 09:07:26.708 2 DEBUG nova.network.neutron [req-14231968-17b5-455d-94a3-ebda6b503b0f req-a9c31bde-c241-40d9-92f7-46d9e34fc8b5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Refreshing network info cache for port e18648ba-6112-40fa-85f6-bdf82a012079 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:07:26 np0005486808 nova_compute[259627]: 2025-10-14 09:07:26.806 2 DEBUG nova.network.neutron [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Updating instance_info_cache with network_info: [{"id": "93e6162a-d037-4440-9c8c-1cb9b293f249", "address": "fa:16:3e:13:f7:ed", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93e6162a-d0", "ovs_interfaceid": "93e6162a-d037-4440-9c8c-1cb9b293f249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:07:26 np0005486808 nova_compute[259627]: 2025-10-14 09:07:26.829 2 DEBUG oslo_concurrency.lockutils [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Releasing lock "refresh_cache-c8d53ba7-c60f-4e5c-899f-fd95996ea742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:07:27 np0005486808 nova_compute[259627]: 2025-10-14 09:07:27.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:27 np0005486808 nova_compute[259627]: 2025-10-14 09:07:27.391 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:07:27 np0005486808 nova_compute[259627]: 2025-10-14 09:07:27.391 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:07:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:07:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e221 do_prune osdmap full prune enabled
Oct 14 05:07:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e222 e222: 3 total, 3 up, 3 in
Oct 14 05:07:27 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e222: 3 total, 3 up, 3 in
Oct 14 05:07:27 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1560: 305 pgs: 305 active+clean; 246 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 7.8 MiB/s wr, 305 op/s
Oct 14 05:07:27 np0005486808 nova_compute[259627]: 2025-10-14 09:07:27.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:07:27 np0005486808 nova_compute[259627]: 2025-10-14 09:07:27.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:07:27 np0005486808 nova_compute[259627]: 2025-10-14 09:07:27.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 05:07:27 np0005486808 nova_compute[259627]: 2025-10-14 09:07:27.996 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-c8d53ba7-c60f-4e5c-899f-fd95996ea742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:07:27 np0005486808 nova_compute[259627]: 2025-10-14 09:07:27.996 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-c8d53ba7-c60f-4e5c-899f-fd95996ea742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:07:27 np0005486808 nova_compute[259627]: 2025-10-14 09:07:27.996 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 14 05:07:27 np0005486808 nova_compute[259627]: 2025-10-14 09:07:27.997 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c8d53ba7-c60f-4e5c-899f-fd95996ea742 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:07:28 np0005486808 nova_compute[259627]: 2025-10-14 09:07:28.772 2 INFO nova.virt.libvirt.driver [-] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Instance destroyed successfully.#033[00m
Oct 14 05:07:28 np0005486808 nova_compute[259627]: 2025-10-14 09:07:28.773 2 DEBUG nova.objects.instance [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lazy-loading 'resources' on Instance uuid c8d53ba7-c60f-4e5c-899f-fd95996ea742 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:07:28 np0005486808 nova_compute[259627]: 2025-10-14 09:07:28.791 2 DEBUG nova.virt.libvirt.vif [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:06:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1969379492',display_name='tempest-DeleteServersTestJSON-server-1969379492',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1969379492',id=63,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:07:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d39581efff7d48fb83412ca1f615d412',ramdisk_id='',reservation_id='r-399cflte',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_d
isk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-555285866',owner_user_name='tempest-DeleteServersTestJSON-555285866-project-member',shelved_at='2025-10-14T09:07:24.680673',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='64285d7a-1987-45f5-9cf9-f0e67a4d8856'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:07:20Z,user_data=None,user_id='a72439ec330b476ca4bb358682159b61',uuid=c8d53ba7-c60f-4e5c-899f-fd95996ea742,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "93e6162a-d037-4440-9c8c-1cb9b293f249", "address": "fa:16:3e:13:f7:ed", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93e6162a-d0", "ovs_interfaceid": "93e6162a-d037-4440-9c8c-1cb9b293f249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:07:28 np0005486808 nova_compute[259627]: 2025-10-14 09:07:28.791 2 DEBUG nova.network.os_vif_util [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converting VIF {"id": "93e6162a-d037-4440-9c8c-1cb9b293f249", "address": "fa:16:3e:13:f7:ed", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93e6162a-d0", "ovs_interfaceid": "93e6162a-d037-4440-9c8c-1cb9b293f249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:07:28 np0005486808 nova_compute[259627]: 2025-10-14 09:07:28.793 2 DEBUG nova.network.os_vif_util [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:f7:ed,bridge_name='br-int',has_traffic_filtering=True,id=93e6162a-d037-4440-9c8c-1cb9b293f249,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93e6162a-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:07:28 np0005486808 nova_compute[259627]: 2025-10-14 09:07:28.793 2 DEBUG os_vif [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:f7:ed,bridge_name='br-int',has_traffic_filtering=True,id=93e6162a-d037-4440-9c8c-1cb9b293f249,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93e6162a-d0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:07:28 np0005486808 nova_compute[259627]: 2025-10-14 09:07:28.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:28 np0005486808 nova_compute[259627]: 2025-10-14 09:07:28.797 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93e6162a-d0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:07:28 np0005486808 nova_compute[259627]: 2025-10-14 09:07:28.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:28 np0005486808 nova_compute[259627]: 2025-10-14 09:07:28.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:07:28 np0005486808 nova_compute[259627]: 2025-10-14 09:07:28.815 2 INFO os_vif [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:f7:ed,bridge_name='br-int',has_traffic_filtering=True,id=93e6162a-d037-4440-9c8c-1cb9b293f249,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93e6162a-d0')#033[00m
Oct 14 05:07:28 np0005486808 nova_compute[259627]: 2025-10-14 09:07:28.872 2 DEBUG nova.compute.manager [req-53844b2b-abb2-43ac-9bbc-f3dfee100ff5 req-39deca61-8cea-4d9a-a37c-c19ae38e1285 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Received event network-changed-93e6162a-d037-4440-9c8c-1cb9b293f249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:07:28 np0005486808 nova_compute[259627]: 2025-10-14 09:07:28.873 2 DEBUG nova.compute.manager [req-53844b2b-abb2-43ac-9bbc-f3dfee100ff5 req-39deca61-8cea-4d9a-a37c-c19ae38e1285 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Refreshing instance network info cache due to event network-changed-93e6162a-d037-4440-9c8c-1cb9b293f249. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:07:28 np0005486808 nova_compute[259627]: 2025-10-14 09:07:28.874 2 DEBUG oslo_concurrency.lockutils [req-53844b2b-abb2-43ac-9bbc-f3dfee100ff5 req-39deca61-8cea-4d9a-a37c-c19ae38e1285 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-c8d53ba7-c60f-4e5c-899f-fd95996ea742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:07:28 np0005486808 nova_compute[259627]: 2025-10-14 09:07:28.889 2 DEBUG nova.network.neutron [req-14231968-17b5-455d-94a3-ebda6b503b0f req-a9c31bde-c241-40d9-92f7-46d9e34fc8b5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Updated VIF entry in instance network info cache for port e18648ba-6112-40fa-85f6-bdf82a012079. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:07:28 np0005486808 nova_compute[259627]: 2025-10-14 09:07:28.890 2 DEBUG nova.network.neutron [req-14231968-17b5-455d-94a3-ebda6b503b0f req-a9c31bde-c241-40d9-92f7-46d9e34fc8b5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Updating instance_info_cache with network_info: [{"id": "e18648ba-6112-40fa-85f6-bdf82a012079", "address": "fa:16:3e:0b:9e:35", "network": {"id": "9d540b01-e9c4-4dc5-9a51-94512ad9a409", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-617314423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bda6775f81f403e83269a5f798c9853", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18648ba-61", "ovs_interfaceid": "e18648ba-6112-40fa-85f6-bdf82a012079", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:07:28 np0005486808 nova_compute[259627]: 2025-10-14 09:07:28.919 2 DEBUG oslo_concurrency.lockutils [req-14231968-17b5-455d-94a3-ebda6b503b0f req-a9c31bde-c241-40d9-92f7-46d9e34fc8b5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-e065d857-2df9-4199-aa98-41ca3c436bad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:07:29 np0005486808 nova_compute[259627]: 2025-10-14 09:07:29.341 2 INFO nova.virt.libvirt.driver [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Deleting instance files /var/lib/nova/instances/c8d53ba7-c60f-4e5c-899f-fd95996ea742_del#033[00m
Oct 14 05:07:29 np0005486808 nova_compute[259627]: 2025-10-14 09:07:29.342 2 INFO nova.virt.libvirt.driver [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Deletion of /var/lib/nova/instances/c8d53ba7-c60f-4e5c-899f-fd95996ea742_del complete#033[00m
Oct 14 05:07:29 np0005486808 nova_compute[259627]: 2025-10-14 09:07:29.457 2 INFO nova.scheduler.client.report [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Deleted allocations for instance c8d53ba7-c60f-4e5c-899f-fd95996ea742#033[00m
Oct 14 05:07:29 np0005486808 nova_compute[259627]: 2025-10-14 09:07:29.511 2 DEBUG oslo_concurrency.lockutils [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:07:29 np0005486808 nova_compute[259627]: 2025-10-14 09:07:29.511 2 DEBUG oslo_concurrency.lockutils [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:07:29 np0005486808 nova_compute[259627]: 2025-10-14 09:07:29.564 2 DEBUG oslo_concurrency.processutils [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:07:29 np0005486808 nova_compute[259627]: 2025-10-14 09:07:29.622 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Updating instance_info_cache with network_info: [{"id": "93e6162a-d037-4440-9c8c-1cb9b293f249", "address": "fa:16:3e:13:f7:ed", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93e6162a-d0", "ovs_interfaceid": "93e6162a-d037-4440-9c8c-1cb9b293f249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 05:07:29 np0005486808 nova_compute[259627]: 2025-10-14 09:07:29.657 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-c8d53ba7-c60f-4e5c-899f-fd95996ea742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 05:07:29 np0005486808 nova_compute[259627]: 2025-10-14 09:07:29.658 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 14 05:07:29 np0005486808 nova_compute[259627]: 2025-10-14 09:07:29.659 2 DEBUG oslo_concurrency.lockutils [req-53844b2b-abb2-43ac-9bbc-f3dfee100ff5 req-39deca61-8cea-4d9a-a37c-c19ae38e1285 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-c8d53ba7-c60f-4e5c-899f-fd95996ea742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 05:07:29 np0005486808 nova_compute[259627]: 2025-10-14 09:07:29.660 2 DEBUG nova.network.neutron [req-53844b2b-abb2-43ac-9bbc-f3dfee100ff5 req-39deca61-8cea-4d9a-a37c-c19ae38e1285 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Refreshing network info cache for port 93e6162a-d037-4440-9c8c-1cb9b293f249 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 05:07:29 np0005486808 nova_compute[259627]: 2025-10-14 09:07:29.662 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:07:29 np0005486808 nova_compute[259627]: 2025-10-14 09:07:29.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:07:29 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1561: 305 pgs: 305 active+clean; 246 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 8.6 MiB/s rd, 5.9 MiB/s wr, 231 op/s
Oct 14 05:07:30 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:07:30 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3148408731' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:07:30 np0005486808 nova_compute[259627]: 2025-10-14 09:07:30.085 2 DEBUG oslo_concurrency.processutils [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:07:30 np0005486808 nova_compute[259627]: 2025-10-14 09:07:30.091 2 DEBUG nova.compute.provider_tree [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 05:07:30 np0005486808 nova_compute[259627]: 2025-10-14 09:07:30.112 2 DEBUG nova.scheduler.client.report [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 05:07:30 np0005486808 nova_compute[259627]: 2025-10-14 09:07:30.144 2 DEBUG oslo_concurrency.lockutils [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:07:30 np0005486808 nova_compute[259627]: 2025-10-14 09:07:30.212 2 DEBUG oslo_concurrency.lockutils [None req-9801e33e-0b90-430b-a8ff-704013352641 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "c8d53ba7-c60f-4e5c-899f-fd95996ea742" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 24.877s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:07:30 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e222 do_prune osdmap full prune enabled
Oct 14 05:07:30 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e223 e223: 3 total, 3 up, 3 in
Oct 14 05:07:30 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e223: 3 total, 3 up, 3 in
Oct 14 05:07:30 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Oct 14 05:07:31 np0005486808 nova_compute[259627]: 2025-10-14 09:07:31.514 2 DEBUG nova.network.neutron [req-53844b2b-abb2-43ac-9bbc-f3dfee100ff5 req-39deca61-8cea-4d9a-a37c-c19ae38e1285 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Updated VIF entry in instance network info cache for port 93e6162a-d037-4440-9c8c-1cb9b293f249. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 05:07:31 np0005486808 nova_compute[259627]: 2025-10-14 09:07:31.515 2 DEBUG nova.network.neutron [req-53844b2b-abb2-43ac-9bbc-f3dfee100ff5 req-39deca61-8cea-4d9a-a37c-c19ae38e1285 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Updating instance_info_cache with network_info: [{"id": "93e6162a-d037-4440-9c8c-1cb9b293f249", "address": "fa:16:3e:13:f7:ed", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": null, "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap93e6162a-d0", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 05:07:31 np0005486808 nova_compute[259627]: 2025-10-14 09:07:31.531 2 DEBUG oslo_concurrency.lockutils [req-53844b2b-abb2-43ac-9bbc-f3dfee100ff5 req-39deca61-8cea-4d9a-a37c-c19ae38e1285 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-c8d53ba7-c60f-4e5c-899f-fd95996ea742" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 05:07:31 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1563: 305 pgs: 305 active+clean; 167 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 8.8 MiB/s rd, 5.8 MiB/s wr, 277 op/s
Oct 14 05:07:32 np0005486808 nova_compute[259627]: 2025-10-14 09:07:32.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:07:32 np0005486808 nova_compute[259627]: 2025-10-14 09:07:32.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:07:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:07:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:07:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:07:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:07:32
Oct 14 05:07:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:07:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:07:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'volumes', 'vms', '.rgw.root', '.mgr', 'default.rgw.log', 'images', 'cephfs.cephfs.meta']
Oct 14 05:07:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:07:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:07:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:07:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:07:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:07:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:07:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:07:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:07:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:07:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:07:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:07:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:07:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:07:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:07:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:07:33 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1564: 305 pgs: 305 active+clean; 167 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 234 KiB/s rd, 1.7 KiB/s wr, 47 op/s
Oct 14 05:07:33 np0005486808 nova_compute[259627]: 2025-10-14 09:07:33.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:07:33 np0005486808 nova_compute[259627]: 2025-10-14 09:07:33.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:07:34 np0005486808 nova_compute[259627]: 2025-10-14 09:07:34.445 2 DEBUG oslo_concurrency.lockutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:07:34 np0005486808 nova_compute[259627]: 2025-10-14 09:07:34.446 2 DEBUG oslo_concurrency.lockutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:07:34 np0005486808 nova_compute[259627]: 2025-10-14 09:07:34.464 2 DEBUG nova.compute.manager [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 05:07:34 np0005486808 nova_compute[259627]: 2025-10-14 09:07:34.566 2 DEBUG oslo_concurrency.lockutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:07:34 np0005486808 nova_compute[259627]: 2025-10-14 09:07:34.567 2 DEBUG oslo_concurrency.lockutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:07:34 np0005486808 nova_compute[259627]: 2025-10-14 09:07:34.574 2 DEBUG nova.virt.hardware [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 05:07:34 np0005486808 nova_compute[259627]: 2025-10-14 09:07:34.575 2 INFO nova.compute.claims [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Claim successful on node compute-0.ctlplane.example.com
Oct 14 05:07:34 np0005486808 nova_compute[259627]: 2025-10-14 09:07:34.578 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760432839.572321, c8d53ba7-c60f-4e5c-899f-fd95996ea742 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:07:34 np0005486808 nova_compute[259627]: 2025-10-14 09:07:34.578 2 INFO nova.compute.manager [-] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] VM Stopped (Lifecycle Event)
Oct 14 05:07:34 np0005486808 nova_compute[259627]: 2025-10-14 09:07:34.616 2 DEBUG nova.compute.manager [None req-eba3def5-debe-48f8-8117-be9e13e18554 - - - - - -] [instance: c8d53ba7-c60f-4e5c-899f-fd95996ea742] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:07:34 np0005486808 nova_compute[259627]: 2025-10-14 09:07:34.727 2 DEBUG oslo_concurrency.processutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:07:34 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Oct 14 05:07:34 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:07:34.953904) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 05:07:34 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Oct 14 05:07:34 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432854953954, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 1825, "num_deletes": 252, "total_data_size": 2767758, "memory_usage": 2817408, "flush_reason": "Manual Compaction"}
Oct 14 05:07:34 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Oct 14 05:07:35 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432855116847, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 2705619, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31071, "largest_seqno": 32895, "table_properties": {"data_size": 2697358, "index_size": 4947, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 17817, "raw_average_key_size": 20, "raw_value_size": 2680512, "raw_average_value_size": 3066, "num_data_blocks": 220, "num_entries": 874, "num_filter_entries": 874, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760432683, "oldest_key_time": 1760432683, "file_creation_time": 1760432854, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:07:35 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 162972 microseconds, and 5377 cpu microseconds.
Oct 14 05:07:35 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:07:35 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:07:35.116882) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 2705619 bytes OK
Oct 14 05:07:35 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:07:35.116899) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Oct 14 05:07:35 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:07:35.143223) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Oct 14 05:07:35 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:07:35.143265) EVENT_LOG_v1 {"time_micros": 1760432855143253, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 05:07:35 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:07:35.143315) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 05:07:35 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 2759903, prev total WAL file size 2759903, number of live WAL files 2.
Oct 14 05:07:35 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:07:35 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:07:35.146076) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Oct 14 05:07:35 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 05:07:35 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(2642KB)], [68(7258KB)]
Oct 14 05:07:35 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432855146156, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 10138009, "oldest_snapshot_seqno": -1}
Oct 14 05:07:35 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:07:35 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3469170859' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:07:35 np0005486808 nova_compute[259627]: 2025-10-14 09:07:35.285 2 DEBUG oslo_concurrency.processutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:07:35 np0005486808 nova_compute[259627]: 2025-10-14 09:07:35.294 2 DEBUG nova.compute.provider_tree [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 05:07:35 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 5752 keys, 8516455 bytes, temperature: kUnknown
Oct 14 05:07:35 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432855305280, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 8516455, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8477216, "index_size": 23769, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14405, "raw_key_size": 144502, "raw_average_key_size": 25, "raw_value_size": 8373086, "raw_average_value_size": 1455, "num_data_blocks": 967, "num_entries": 5752, "num_filter_entries": 5752, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760432855, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:07:35 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:07:35 np0005486808 nova_compute[259627]: 2025-10-14 09:07:35.314 2 DEBUG nova.scheduler.client.report [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 05:07:35 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:07:35.305571) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 8516455 bytes
Oct 14 05:07:35 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:07:35.323831) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 63.7 rd, 53.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 7.1 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(6.9) write-amplify(3.1) OK, records in: 6272, records dropped: 520 output_compression: NoCompression
Oct 14 05:07:35 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:07:35.323863) EVENT_LOG_v1 {"time_micros": 1760432855323850, "job": 38, "event": "compaction_finished", "compaction_time_micros": 159255, "compaction_time_cpu_micros": 32612, "output_level": 6, "num_output_files": 1, "total_output_size": 8516455, "num_input_records": 6272, "num_output_records": 5752, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 05:07:35 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:07:35 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432855324918, "job": 38, "event": "table_file_deletion", "file_number": 70}
Oct 14 05:07:35 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:07:35 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760432855326931, "job": 38, "event": "table_file_deletion", "file_number": 68}
Oct 14 05:07:35 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:07:35.145885) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:07:35 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:07:35.327097) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:07:35 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:07:35.327105) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:07:35 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:07:35.327109) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:07:35 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:07:35.327112) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:07:35 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:07:35.327115) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:07:35 np0005486808 nova_compute[259627]: 2025-10-14 09:07:35.347 2 DEBUG oslo_concurrency.lockutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:07:35 np0005486808 nova_compute[259627]: 2025-10-14 09:07:35.349 2 DEBUG nova.compute.manager [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 05:07:35 np0005486808 nova_compute[259627]: 2025-10-14 09:07:35.405 2 DEBUG nova.compute.manager [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 05:07:35 np0005486808 nova_compute[259627]: 2025-10-14 09:07:35.405 2 DEBUG nova.network.neutron [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 05:07:35 np0005486808 nova_compute[259627]: 2025-10-14 09:07:35.427 2 INFO nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 05:07:35 np0005486808 nova_compute[259627]: 2025-10-14 09:07:35.451 2 DEBUG nova.compute.manager [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 05:07:35 np0005486808 nova_compute[259627]: 2025-10-14 09:07:35.539 2 DEBUG nova.compute.manager [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 05:07:35 np0005486808 nova_compute[259627]: 2025-10-14 09:07:35.541 2 DEBUG nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 05:07:35 np0005486808 nova_compute[259627]: 2025-10-14 09:07:35.541 2 INFO nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Creating image(s)
Oct 14 05:07:35 np0005486808 nova_compute[259627]: 2025-10-14 09:07:35.571 2 DEBUG nova.storage.rbd_utils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:07:35 np0005486808 nova_compute[259627]: 2025-10-14 09:07:35.602 2 DEBUG nova.storage.rbd_utils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:07:35 np0005486808 nova_compute[259627]: 2025-10-14 09:07:35.626 2 DEBUG nova.storage.rbd_utils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:07:35 np0005486808 nova_compute[259627]: 2025-10-14 09:07:35.629 2 DEBUG oslo_concurrency.processutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:07:35 np0005486808 nova_compute[259627]: 2025-10-14 09:07:35.712 2 DEBUG oslo_concurrency.processutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:07:35 np0005486808 nova_compute[259627]: 2025-10-14 09:07:35.713 2 DEBUG oslo_concurrency.lockutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:07:35 np0005486808 nova_compute[259627]: 2025-10-14 09:07:35.714 2 DEBUG oslo_concurrency.lockutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:07:35 np0005486808 nova_compute[259627]: 2025-10-14 09:07:35.714 2 DEBUG oslo_concurrency.lockutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:07:35 np0005486808 nova_compute[259627]: 2025-10-14 09:07:35.734 2 DEBUG nova.storage.rbd_utils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:07:35 np0005486808 nova_compute[259627]: 2025-10-14 09:07:35.737 2 DEBUG oslo_concurrency.processutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:07:35 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1565: 305 pgs: 305 active+clean; 106 MiB data, 580 MiB used, 59 GiB / 60 GiB avail; 300 KiB/s rd, 2.5 MiB/s wr, 113 op/s
Oct 14 05:07:35 np0005486808 nova_compute[259627]: 2025-10-14 09:07:35.847 2 DEBUG nova.policy [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a72439ec330b476ca4bb358682159b61', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd39581efff7d48fb83412ca1f615d412', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:07:36 np0005486808 nova_compute[259627]: 2025-10-14 09:07:36.003 2 DEBUG oslo_concurrency.processutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.266s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:07:36 np0005486808 nova_compute[259627]: 2025-10-14 09:07:36.081 2 DEBUG nova.storage.rbd_utils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] resizing rbd image 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:07:36 np0005486808 ovn_controller[152662]: 2025-10-14T09:07:36Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0b:9e:35 10.100.0.9
Oct 14 05:07:36 np0005486808 ovn_controller[152662]: 2025-10-14T09:07:36Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0b:9e:35 10.100.0.9
Oct 14 05:07:36 np0005486808 nova_compute[259627]: 2025-10-14 09:07:36.176 2 DEBUG nova.objects.instance [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lazy-loading 'migration_context' on Instance uuid 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:07:36 np0005486808 nova_compute[259627]: 2025-10-14 09:07:36.191 2 DEBUG nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:07:36 np0005486808 nova_compute[259627]: 2025-10-14 09:07:36.191 2 DEBUG nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Ensure instance console log exists: /var/lib/nova/instances/58b61a7a-1a2e-4e3a-9444-3a89da64c5f3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:07:36 np0005486808 nova_compute[259627]: 2025-10-14 09:07:36.192 2 DEBUG oslo_concurrency.lockutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:07:36 np0005486808 nova_compute[259627]: 2025-10-14 09:07:36.192 2 DEBUG oslo_concurrency.lockutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:07:36 np0005486808 nova_compute[259627]: 2025-10-14 09:07:36.193 2 DEBUG oslo_concurrency.lockutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:07:36 np0005486808 nova_compute[259627]: 2025-10-14 09:07:36.642 2 DEBUG nova.network.neutron [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Successfully created port: 1dc4ac40-9a94-49bf-a098-664b98599004 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:07:37 np0005486808 nova_compute[259627]: 2025-10-14 09:07:37.229 2 DEBUG oslo_concurrency.lockutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Acquiring lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:07:37 np0005486808 nova_compute[259627]: 2025-10-14 09:07:37.230 2 DEBUG oslo_concurrency.lockutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:07:37 np0005486808 nova_compute[259627]: 2025-10-14 09:07:37.245 2 DEBUG nova.compute.manager [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:07:37 np0005486808 nova_compute[259627]: 2025-10-14 09:07:37.302 2 DEBUG oslo_concurrency.lockutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:07:37 np0005486808 nova_compute[259627]: 2025-10-14 09:07:37.303 2 DEBUG oslo_concurrency.lockutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:07:37 np0005486808 nova_compute[259627]: 2025-10-14 09:07:37.310 2 DEBUG nova.virt.hardware [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:07:37 np0005486808 nova_compute[259627]: 2025-10-14 09:07:37.310 2 INFO nova.compute.claims [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:07:37 np0005486808 nova_compute[259627]: 2025-10-14 09:07:37.350 2 DEBUG nova.network.neutron [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Successfully updated port: 1dc4ac40-9a94-49bf-a098-664b98599004 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:07:37 np0005486808 nova_compute[259627]: 2025-10-14 09:07:37.371 2 DEBUG oslo_concurrency.lockutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "refresh_cache-58b61a7a-1a2e-4e3a-9444-3a89da64c5f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:07:37 np0005486808 nova_compute[259627]: 2025-10-14 09:07:37.372 2 DEBUG oslo_concurrency.lockutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquired lock "refresh_cache-58b61a7a-1a2e-4e3a-9444-3a89da64c5f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:07:37 np0005486808 nova_compute[259627]: 2025-10-14 09:07:37.372 2 DEBUG nova.network.neutron [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:07:37 np0005486808 nova_compute[259627]: 2025-10-14 09:07:37.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:37 np0005486808 nova_compute[259627]: 2025-10-14 09:07:37.446 2 DEBUG nova.compute.manager [req-6ec3329f-7a07-4e0c-ae10-b5e561891e15 req-b70d3e1e-2140-462f-9fce-de57ea714f2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Received event network-changed-1dc4ac40-9a94-49bf-a098-664b98599004 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:07:37 np0005486808 nova_compute[259627]: 2025-10-14 09:07:37.447 2 DEBUG nova.compute.manager [req-6ec3329f-7a07-4e0c-ae10-b5e561891e15 req-b70d3e1e-2140-462f-9fce-de57ea714f2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Refreshing instance network info cache due to event network-changed-1dc4ac40-9a94-49bf-a098-664b98599004. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:07:37 np0005486808 nova_compute[259627]: 2025-10-14 09:07:37.447 2 DEBUG oslo_concurrency.lockutils [req-6ec3329f-7a07-4e0c-ae10-b5e561891e15 req-b70d3e1e-2140-462f-9fce-de57ea714f2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-58b61a7a-1a2e-4e3a-9444-3a89da64c5f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:07:37 np0005486808 nova_compute[259627]: 2025-10-14 09:07:37.456 2 DEBUG oslo_concurrency.processutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:07:37 np0005486808 nova_compute[259627]: 2025-10-14 09:07:37.543 2 DEBUG nova.network.neutron [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:07:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:07:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e223 do_prune osdmap full prune enabled
Oct 14 05:07:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e224 e224: 3 total, 3 up, 3 in
Oct 14 05:07:37 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e224: 3 total, 3 up, 3 in
Oct 14 05:07:37 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1567: 305 pgs: 305 active+clean; 106 MiB data, 580 MiB used, 59 GiB / 60 GiB avail; 303 KiB/s rd, 2.5 MiB/s wr, 114 op/s
Oct 14 05:07:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:07:37 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2935533830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:07:37 np0005486808 nova_compute[259627]: 2025-10-14 09:07:37.979 2 DEBUG oslo_concurrency.processutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:07:37 np0005486808 nova_compute[259627]: 2025-10-14 09:07:37.987 2 DEBUG nova.compute.provider_tree [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.005 2 DEBUG nova.scheduler.client.report [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.037 2 DEBUG oslo_concurrency.lockutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.038 2 DEBUG nova.compute.manager [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.105 2 DEBUG nova.compute.manager [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.106 2 DEBUG nova.network.neutron [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.132 2 INFO nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.167 2 DEBUG nova.compute.manager [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.298 2 DEBUG nova.compute.manager [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.300 2 DEBUG nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.300 2 INFO nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Creating image(s)#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.334 2 DEBUG nova.storage.rbd_utils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] rbd image 8a58a504-85a5-44e6-b815-99abb4ca2fc8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.363 2 DEBUG nova.storage.rbd_utils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] rbd image 8a58a504-85a5-44e6-b815-99abb4ca2fc8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.389 2 DEBUG nova.storage.rbd_utils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] rbd image 8a58a504-85a5-44e6-b815-99abb4ca2fc8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.392 2 DEBUG oslo_concurrency.processutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.425 2 DEBUG nova.network.neutron [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Updating instance_info_cache with network_info: [{"id": "1dc4ac40-9a94-49bf-a098-664b98599004", "address": "fa:16:3e:0d:81:c4", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1dc4ac40-9a", "ovs_interfaceid": "1dc4ac40-9a94-49bf-a098-664b98599004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.429 2 DEBUG nova.policy [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '40a7a5045f164fb3bc6f8ae8a40f6bac', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f9883ad901fc41c2a340b171d7165a0e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.473 2 DEBUG oslo_concurrency.processutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.473 2 DEBUG oslo_concurrency.lockutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.474 2 DEBUG oslo_concurrency.lockutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.475 2 DEBUG oslo_concurrency.lockutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.497 2 DEBUG nova.storage.rbd_utils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] rbd image 8a58a504-85a5-44e6-b815-99abb4ca2fc8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.500 2 DEBUG oslo_concurrency.processutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 8a58a504-85a5-44e6-b815-99abb4ca2fc8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.656 2 DEBUG oslo_concurrency.lockutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Releasing lock "refresh_cache-58b61a7a-1a2e-4e3a-9444-3a89da64c5f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.657 2 DEBUG nova.compute.manager [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Instance network_info: |[{"id": "1dc4ac40-9a94-49bf-a098-664b98599004", "address": "fa:16:3e:0d:81:c4", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1dc4ac40-9a", "ovs_interfaceid": "1dc4ac40-9a94-49bf-a098-664b98599004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.658 2 DEBUG oslo_concurrency.lockutils [req-6ec3329f-7a07-4e0c-ae10-b5e561891e15 req-b70d3e1e-2140-462f-9fce-de57ea714f2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-58b61a7a-1a2e-4e3a-9444-3a89da64c5f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.658 2 DEBUG nova.network.neutron [req-6ec3329f-7a07-4e0c-ae10-b5e561891e15 req-b70d3e1e-2140-462f-9fce-de57ea714f2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Refreshing network info cache for port 1dc4ac40-9a94-49bf-a098-664b98599004 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.663 2 DEBUG nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Start _get_guest_xml network_info=[{"id": "1dc4ac40-9a94-49bf-a098-664b98599004", "address": "fa:16:3e:0d:81:c4", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1dc4ac40-9a", "ovs_interfaceid": "1dc4ac40-9a94-49bf-a098-664b98599004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.668 2 WARNING nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.678 2 DEBUG nova.virt.libvirt.host [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.679 2 DEBUG nova.virt.libvirt.host [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.684 2 DEBUG nova.virt.libvirt.host [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.685 2 DEBUG nova.virt.libvirt.host [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.685 2 DEBUG nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.686 2 DEBUG nova.virt.hardware [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.687 2 DEBUG nova.virt.hardware [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.687 2 DEBUG nova.virt.hardware [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.688 2 DEBUG nova.virt.hardware [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.689 2 DEBUG nova.virt.hardware [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.689 2 DEBUG nova.virt.hardware [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.689 2 DEBUG nova.virt.hardware [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.690 2 DEBUG nova.virt.hardware [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.690 2 DEBUG nova.virt.hardware [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.691 2 DEBUG nova.virt.hardware [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.691 2 DEBUG nova.virt.hardware [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.698 2 DEBUG oslo_concurrency.processutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:07:38 np0005486808 nova_compute[259627]: 2025-10-14 09:07:38.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:39 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:07:39 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3434854446' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:07:39 np0005486808 nova_compute[259627]: 2025-10-14 09:07:39.207 2 DEBUG oslo_concurrency.processutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:07:39 np0005486808 nova_compute[259627]: 2025-10-14 09:07:39.231 2 DEBUG nova.storage.rbd_utils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:07:39 np0005486808 nova_compute[259627]: 2025-10-14 09:07:39.235 2 DEBUG oslo_concurrency.processutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:07:39 np0005486808 nova_compute[259627]: 2025-10-14 09:07:39.428 2 DEBUG oslo_concurrency.processutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 8a58a504-85a5-44e6-b815-99abb4ca2fc8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.928s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:07:39 np0005486808 nova_compute[259627]: 2025-10-14 09:07:39.505 2 DEBUG nova.storage.rbd_utils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] resizing rbd image 8a58a504-85a5-44e6-b815-99abb4ca2fc8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:07:39 np0005486808 nova_compute[259627]: 2025-10-14 09:07:39.646 2 DEBUG nova.objects.instance [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Lazy-loading 'migration_context' on Instance uuid 8a58a504-85a5-44e6-b815-99abb4ca2fc8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:07:39 np0005486808 nova_compute[259627]: 2025-10-14 09:07:39.669 2 DEBUG nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:07:39 np0005486808 nova_compute[259627]: 2025-10-14 09:07:39.670 2 DEBUG nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Ensure instance console log exists: /var/lib/nova/instances/8a58a504-85a5-44e6-b815-99abb4ca2fc8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:07:39 np0005486808 nova_compute[259627]: 2025-10-14 09:07:39.671 2 DEBUG oslo_concurrency.lockutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:07:39 np0005486808 nova_compute[259627]: 2025-10-14 09:07:39.671 2 DEBUG oslo_concurrency.lockutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:07:39 np0005486808 nova_compute[259627]: 2025-10-14 09:07:39.672 2 DEBUG oslo_concurrency.lockutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:07:39 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:07:39 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/673018092' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:07:39 np0005486808 podman[328214]: 2025-10-14 09:07:39.685476424 +0000 UTC m=+0.087426314 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 05:07:39 np0005486808 podman[328215]: 2025-10-14 09:07:39.688188481 +0000 UTC m=+0.081432007 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:07:39 np0005486808 nova_compute[259627]: 2025-10-14 09:07:39.699 2 DEBUG oslo_concurrency.processutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:07:39 np0005486808 nova_compute[259627]: 2025-10-14 09:07:39.700 2 DEBUG nova.virt.libvirt.vif [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:07:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-2035284822',display_name='tempest-DeleteServersTestJSON-server-2035284822',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-2035284822',id=65,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d39581efff7d48fb83412ca1f615d412',ramdisk_id='',reservation_id='r-47ja83kl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-555285866',owner_user_name='tempest-DeleteServersTestJSON-555285866-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:07:35Z,user_data=None,user_id='a72439ec330b476ca4bb358682159b61',uuid=58b61a7a-1a2e-4e3a-9444-3a89da64c5f3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1dc4ac40-9a94-49bf-a098-664b98599004", "address": "fa:16:3e:0d:81:c4", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1dc4ac40-9a", "ovs_interfaceid": "1dc4ac40-9a94-49bf-a098-664b98599004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:07:39 np0005486808 nova_compute[259627]: 2025-10-14 09:07:39.700 2 DEBUG nova.network.os_vif_util [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converting VIF {"id": "1dc4ac40-9a94-49bf-a098-664b98599004", "address": "fa:16:3e:0d:81:c4", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1dc4ac40-9a", "ovs_interfaceid": "1dc4ac40-9a94-49bf-a098-664b98599004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:07:39 np0005486808 nova_compute[259627]: 2025-10-14 09:07:39.701 2 DEBUG nova.network.os_vif_util [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:81:c4,bridge_name='br-int',has_traffic_filtering=True,id=1dc4ac40-9a94-49bf-a098-664b98599004,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1dc4ac40-9a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:07:39 np0005486808 nova_compute[259627]: 2025-10-14 09:07:39.702 2 DEBUG nova.objects.instance [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lazy-loading 'pci_devices' on Instance uuid 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:07:39 np0005486808 nova_compute[259627]: 2025-10-14 09:07:39.726 2 DEBUG nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:07:39 np0005486808 nova_compute[259627]:  <uuid>58b61a7a-1a2e-4e3a-9444-3a89da64c5f3</uuid>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:  <name>instance-00000041</name>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:07:39 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:      <nova:name>tempest-DeleteServersTestJSON-server-2035284822</nova:name>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:07:38</nova:creationTime>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:07:39 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:        <nova:user uuid="a72439ec330b476ca4bb358682159b61">tempest-DeleteServersTestJSON-555285866-project-member</nova:user>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:        <nova:project uuid="d39581efff7d48fb83412ca1f615d412">tempest-DeleteServersTestJSON-555285866</nova:project>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:        <nova:port uuid="1dc4ac40-9a94-49bf-a098-664b98599004">
Oct 14 05:07:39 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:      <entry name="serial">58b61a7a-1a2e-4e3a-9444-3a89da64c5f3</entry>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:      <entry name="uuid">58b61a7a-1a2e-4e3a-9444-3a89da64c5f3</entry>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:07:39 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/58b61a7a-1a2e-4e3a-9444-3a89da64c5f3_disk">
Oct 14 05:07:39 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:07:39 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:07:39 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/58b61a7a-1a2e-4e3a-9444-3a89da64c5f3_disk.config">
Oct 14 05:07:39 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:07:39 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:07:39 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:0d:81:c4"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:      <target dev="tap1dc4ac40-9a"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:07:39 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/58b61a7a-1a2e-4e3a-9444-3a89da64c5f3/console.log" append="off"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:07:39 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:07:39 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:07:39 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:07:39 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:07:39 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:07:39 np0005486808 nova_compute[259627]: 2025-10-14 09:07:39.727 2 DEBUG nova.compute.manager [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Preparing to wait for external event network-vif-plugged-1dc4ac40-9a94-49bf-a098-664b98599004 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:07:39 np0005486808 nova_compute[259627]: 2025-10-14 09:07:39.727 2 DEBUG oslo_concurrency.lockutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:07:39 np0005486808 nova_compute[259627]: 2025-10-14 09:07:39.727 2 DEBUG oslo_concurrency.lockutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:07:39 np0005486808 nova_compute[259627]: 2025-10-14 09:07:39.728 2 DEBUG oslo_concurrency.lockutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:07:39 np0005486808 nova_compute[259627]: 2025-10-14 09:07:39.728 2 DEBUG nova.virt.libvirt.vif [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:07:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-2035284822',display_name='tempest-DeleteServersTestJSON-server-2035284822',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-2035284822',id=65,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d39581efff7d48fb83412ca1f615d412',ramdisk_id='',reservation_id='r-47ja83kl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-555285866',owner_user_name='tempest-DeleteServersTestJSON-555285866-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:07:35Z,user_data=None,user_id='a72439ec330b476ca4bb358682159b61',uuid=58b61a7a-1a2e-4e3a-9444-3a89da64c5f3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1dc4ac40-9a94-49bf-a098-664b98599004", "address": "fa:16:3e:0d:81:c4", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1dc4ac40-9a", "ovs_interfaceid": "1dc4ac40-9a94-49bf-a098-664b98599004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:07:39 np0005486808 nova_compute[259627]: 2025-10-14 09:07:39.729 2 DEBUG nova.network.os_vif_util [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converting VIF {"id": "1dc4ac40-9a94-49bf-a098-664b98599004", "address": "fa:16:3e:0d:81:c4", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1dc4ac40-9a", "ovs_interfaceid": "1dc4ac40-9a94-49bf-a098-664b98599004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:07:39 np0005486808 nova_compute[259627]: 2025-10-14 09:07:39.729 2 DEBUG nova.network.os_vif_util [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:81:c4,bridge_name='br-int',has_traffic_filtering=True,id=1dc4ac40-9a94-49bf-a098-664b98599004,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1dc4ac40-9a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:07:39 np0005486808 nova_compute[259627]: 2025-10-14 09:07:39.730 2 DEBUG os_vif [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:81:c4,bridge_name='br-int',has_traffic_filtering=True,id=1dc4ac40-9a94-49bf-a098-664b98599004,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1dc4ac40-9a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:07:39 np0005486808 nova_compute[259627]: 2025-10-14 09:07:39.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:39 np0005486808 nova_compute[259627]: 2025-10-14 09:07:39.731 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:07:39 np0005486808 nova_compute[259627]: 2025-10-14 09:07:39.731 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:07:39 np0005486808 nova_compute[259627]: 2025-10-14 09:07:39.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:39 np0005486808 nova_compute[259627]: 2025-10-14 09:07:39.734 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1dc4ac40-9a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:07:39 np0005486808 nova_compute[259627]: 2025-10-14 09:07:39.734 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1dc4ac40-9a, col_values=(('external_ids', {'iface-id': '1dc4ac40-9a94-49bf-a098-664b98599004', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0d:81:c4', 'vm-uuid': '58b61a7a-1a2e-4e3a-9444-3a89da64c5f3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:07:39 np0005486808 nova_compute[259627]: 2025-10-14 09:07:39.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:39 np0005486808 NetworkManager[44885]: <info>  [1760432859.7367] manager: (tap1dc4ac40-9a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/279)
Oct 14 05:07:39 np0005486808 nova_compute[259627]: 2025-10-14 09:07:39.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:07:39 np0005486808 nova_compute[259627]: 2025-10-14 09:07:39.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:39 np0005486808 nova_compute[259627]: 2025-10-14 09:07:39.742 2 INFO os_vif [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:81:c4,bridge_name='br-int',has_traffic_filtering=True,id=1dc4ac40-9a94-49bf-a098-664b98599004,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1dc4ac40-9a')#033[00m
Oct 14 05:07:39 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1568: 305 pgs: 305 active+clean; 106 MiB data, 580 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 2.3 MiB/s wr, 59 op/s
Oct 14 05:07:39 np0005486808 nova_compute[259627]: 2025-10-14 09:07:39.787 2 DEBUG nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:07:39 np0005486808 nova_compute[259627]: 2025-10-14 09:07:39.787 2 DEBUG nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:07:39 np0005486808 nova_compute[259627]: 2025-10-14 09:07:39.788 2 DEBUG nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] No VIF found with MAC fa:16:3e:0d:81:c4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:07:39 np0005486808 nova_compute[259627]: 2025-10-14 09:07:39.788 2 INFO nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Using config drive#033[00m
Oct 14 05:07:39 np0005486808 nova_compute[259627]: 2025-10-14 09:07:39.814 2 DEBUG nova.storage.rbd_utils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:07:39 np0005486808 nova_compute[259627]: 2025-10-14 09:07:39.820 2 DEBUG nova.network.neutron [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Successfully created port: 2d7c67a0-10d0-4de1-a430-60e038fcf537 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:07:40 np0005486808 nova_compute[259627]: 2025-10-14 09:07:40.339 2 INFO nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Creating config drive at /var/lib/nova/instances/58b61a7a-1a2e-4e3a-9444-3a89da64c5f3/disk.config#033[00m
Oct 14 05:07:40 np0005486808 nova_compute[259627]: 2025-10-14 09:07:40.347 2 DEBUG oslo_concurrency.processutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/58b61a7a-1a2e-4e3a-9444-3a89da64c5f3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc0f95boe execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:07:40 np0005486808 nova_compute[259627]: 2025-10-14 09:07:40.506 2 DEBUG oslo_concurrency.processutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/58b61a7a-1a2e-4e3a-9444-3a89da64c5f3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc0f95boe" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:07:40 np0005486808 nova_compute[259627]: 2025-10-14 09:07:40.531 2 DEBUG nova.storage.rbd_utils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:07:40 np0005486808 nova_compute[259627]: 2025-10-14 09:07:40.534 2 DEBUG oslo_concurrency.processutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/58b61a7a-1a2e-4e3a-9444-3a89da64c5f3/disk.config 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:07:40 np0005486808 nova_compute[259627]: 2025-10-14 09:07:40.746 2 DEBUG oslo_concurrency.processutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/58b61a7a-1a2e-4e3a-9444-3a89da64c5f3/disk.config 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.211s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:07:40 np0005486808 nova_compute[259627]: 2025-10-14 09:07:40.747 2 INFO nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Deleting local config drive /var/lib/nova/instances/58b61a7a-1a2e-4e3a-9444-3a89da64c5f3/disk.config because it was imported into RBD.#033[00m
Oct 14 05:07:40 np0005486808 nova_compute[259627]: 2025-10-14 09:07:40.771 2 DEBUG nova.network.neutron [req-6ec3329f-7a07-4e0c-ae10-b5e561891e15 req-b70d3e1e-2140-462f-9fce-de57ea714f2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Updated VIF entry in instance network info cache for port 1dc4ac40-9a94-49bf-a098-664b98599004. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:07:40 np0005486808 nova_compute[259627]: 2025-10-14 09:07:40.772 2 DEBUG nova.network.neutron [req-6ec3329f-7a07-4e0c-ae10-b5e561891e15 req-b70d3e1e-2140-462f-9fce-de57ea714f2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Updating instance_info_cache with network_info: [{"id": "1dc4ac40-9a94-49bf-a098-664b98599004", "address": "fa:16:3e:0d:81:c4", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1dc4ac40-9a", "ovs_interfaceid": "1dc4ac40-9a94-49bf-a098-664b98599004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:07:40 np0005486808 nova_compute[259627]: 2025-10-14 09:07:40.798 2 DEBUG oslo_concurrency.lockutils [req-6ec3329f-7a07-4e0c-ae10-b5e561891e15 req-b70d3e1e-2140-462f-9fce-de57ea714f2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-58b61a7a-1a2e-4e3a-9444-3a89da64c5f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:07:40 np0005486808 kernel: tap1dc4ac40-9a: entered promiscuous mode
Oct 14 05:07:40 np0005486808 NetworkManager[44885]: <info>  [1760432860.8137] manager: (tap1dc4ac40-9a): new Tun device (/org/freedesktop/NetworkManager/Devices/280)
Oct 14 05:07:40 np0005486808 nova_compute[259627]: 2025-10-14 09:07:40.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:40 np0005486808 ovn_controller[152662]: 2025-10-14T09:07:40Z|00658|binding|INFO|Claiming lport 1dc4ac40-9a94-49bf-a098-664b98599004 for this chassis.
Oct 14 05:07:40 np0005486808 ovn_controller[152662]: 2025-10-14T09:07:40Z|00659|binding|INFO|1dc4ac40-9a94-49bf-a098-664b98599004: Claiming fa:16:3e:0d:81:c4 10.100.0.8
Oct 14 05:07:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:40.828 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:81:c4 10.100.0.8'], port_security=['fa:16:3e:0d:81:c4 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '58b61a7a-1a2e-4e3a-9444-3a89da64c5f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd39581efff7d48fb83412ca1f615d412', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dd5e08c6-d8c1-45fb-85db-6ce5c46fea39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c165cf5-3bad-4110-8321-7f7bda9723ad, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=1dc4ac40-9a94-49bf-a098-664b98599004) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:07:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:40.830 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 1dc4ac40-9a94-49bf-a098-664b98599004 in datapath 0a07d59e-be8b-4d41-a103-fb5a64bf6f88 bound to our chassis#033[00m
Oct 14 05:07:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:40.833 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0a07d59e-be8b-4d41-a103-fb5a64bf6f88#033[00m
Oct 14 05:07:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:40.847 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cd95ffce-2f26-400f-bd3d-523867fed3b5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:40.848 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0a07d59e-b1 in ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:07:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:40.851 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0a07d59e-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:07:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:40.851 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1d089a92-3d67-4eb2-9bcc-af6b31d5c629]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:40.852 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ed6ba1c3-1cf3-4a16-827b-ad125b2a1d53]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:40 np0005486808 ovn_controller[152662]: 2025-10-14T09:07:40Z|00660|binding|INFO|Setting lport 1dc4ac40-9a94-49bf-a098-664b98599004 ovn-installed in OVS
Oct 14 05:07:40 np0005486808 ovn_controller[152662]: 2025-10-14T09:07:40Z|00661|binding|INFO|Setting lport 1dc4ac40-9a94-49bf-a098-664b98599004 up in Southbound
Oct 14 05:07:40 np0005486808 nova_compute[259627]: 2025-10-14 09:07:40.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:40 np0005486808 systemd-udevd[328346]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:07:40 np0005486808 nova_compute[259627]: 2025-10-14 09:07:40.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:40 np0005486808 systemd-machined[214636]: New machine qemu-79-instance-00000041.
Oct 14 05:07:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:40.867 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[ac7c6fb8-93ab-4a1d-baad-f0ea72f5df10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:40 np0005486808 NetworkManager[44885]: <info>  [1760432860.8768] device (tap1dc4ac40-9a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:07:40 np0005486808 NetworkManager[44885]: <info>  [1760432860.8778] device (tap1dc4ac40-9a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:07:40 np0005486808 systemd[1]: Started Virtual Machine qemu-79-instance-00000041.
Oct 14 05:07:40 np0005486808 nova_compute[259627]: 2025-10-14 09:07:40.881 2 DEBUG nova.network.neutron [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Successfully updated port: 2d7c67a0-10d0-4de1-a430-60e038fcf537 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:07:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:40.896 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4844db9b-a422-42a7-8310-b3da24378811]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:40 np0005486808 nova_compute[259627]: 2025-10-14 09:07:40.898 2 DEBUG oslo_concurrency.lockutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Acquiring lock "refresh_cache-8a58a504-85a5-44e6-b815-99abb4ca2fc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:07:40 np0005486808 nova_compute[259627]: 2025-10-14 09:07:40.899 2 DEBUG oslo_concurrency.lockutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Acquired lock "refresh_cache-8a58a504-85a5-44e6-b815-99abb4ca2fc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:07:40 np0005486808 nova_compute[259627]: 2025-10-14 09:07:40.899 2 DEBUG nova.network.neutron [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:07:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:40.927 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a0e59c8d-8f08-448e-8cb1-42ae3c8962f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:40.932 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[140d3c18-0bb8-4b4e-b62a-baba91a25db8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:40 np0005486808 NetworkManager[44885]: <info>  [1760432860.9331] manager: (tap0a07d59e-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/281)
Oct 14 05:07:40 np0005486808 systemd-udevd[328349]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:07:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:40.973 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[24d4cddf-cbef-4987-b342-44849877768c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:40.976 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d6a4eadc-79d8-485a-ab31-4f2d31646b46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:40 np0005486808 nova_compute[259627]: 2025-10-14 09:07:40.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:07:40 np0005486808 nova_compute[259627]: 2025-10-14 09:07:40.993 2 DEBUG nova.compute.manager [req-b344db00-4be9-4c1c-8afa-ed3fb1fbbae6 req-a8b71dae-21aa-41af-9e21-6235b1dadebc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Received event network-changed-2d7c67a0-10d0-4de1-a430-60e038fcf537 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:07:40 np0005486808 nova_compute[259627]: 2025-10-14 09:07:40.994 2 DEBUG nova.compute.manager [req-b344db00-4be9-4c1c-8afa-ed3fb1fbbae6 req-a8b71dae-21aa-41af-9e21-6235b1dadebc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Refreshing instance network info cache due to event network-changed-2d7c67a0-10d0-4de1-a430-60e038fcf537. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:07:40 np0005486808 nova_compute[259627]: 2025-10-14 09:07:40.994 2 DEBUG oslo_concurrency.lockutils [req-b344db00-4be9-4c1c-8afa-ed3fb1fbbae6 req-a8b71dae-21aa-41af-9e21-6235b1dadebc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-8a58a504-85a5-44e6-b815-99abb4ca2fc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:07:40 np0005486808 NetworkManager[44885]: <info>  [1760432860.9968] device (tap0a07d59e-b0): carrier: link connected
Oct 14 05:07:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:41.002 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[31d9581b-11a8-41d4-a10f-fc4975c6e848]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:41.022 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e91161fa-fd59-475e-b808-2eef186d2fdc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0a07d59e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:2e:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 191], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663975, 'reachable_time': 38303, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328377, 'error': None, 'target': 'ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:41.038 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[121782ad-6040-4c43-90c7-094d3dec68fd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefa:2e1a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663975, 'tstamp': 663975}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 328378, 'error': None, 'target': 'ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:41.061 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1d146ffd-c679-4632-8d3c-477687f7517a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0a07d59e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:2e:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 1, 'rx_bytes': 306, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 1, 'rx_bytes': 306, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 191], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663975, 'reachable_time': 38303, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 264, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 264, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 328379, 'error': None, 'target': 'ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:41.106 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1c99f2ea-bdf1-4211-b419-b990f3a51cdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:41 np0005486808 nova_compute[259627]: 2025-10-14 09:07:41.162 2 DEBUG nova.network.neutron [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:07:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:41.180 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[aa3b4229-99d8-4e1b-a73d-f7bde3ddfa9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:41.182 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a07d59e-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:07:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:41.183 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:07:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:41.184 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0a07d59e-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:07:41 np0005486808 kernel: tap0a07d59e-b0: entered promiscuous mode
Oct 14 05:07:41 np0005486808 NetworkManager[44885]: <info>  [1760432861.1873] manager: (tap0a07d59e-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/282)
Oct 14 05:07:41 np0005486808 nova_compute[259627]: 2025-10-14 09:07:41.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:41.201 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0a07d59e-b0, col_values=(('external_ids', {'iface-id': '31ed66d8-7c3d-4486-83f3-5ccb9a199aa1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:07:41 np0005486808 ovn_controller[152662]: 2025-10-14T09:07:41Z|00662|binding|INFO|Releasing lport 31ed66d8-7c3d-4486-83f3-5ccb9a199aa1 from this chassis (sb_readonly=0)
Oct 14 05:07:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:41.210 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0a07d59e-be8b-4d41-a103-fb5a64bf6f88.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0a07d59e-be8b-4d41-a103-fb5a64bf6f88.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:07:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:41.212 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6992e8b7-a464-43e5-9bfb-3b821bb58705]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:41.213 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:07:41 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:07:41 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:07:41 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-0a07d59e-be8b-4d41-a103-fb5a64bf6f88
Oct 14 05:07:41 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:07:41 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:07:41 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:07:41 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/0a07d59e-be8b-4d41-a103-fb5a64bf6f88.pid.haproxy
Oct 14 05:07:41 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:07:41 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:07:41 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:07:41 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:07:41 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:07:41 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:07:41 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:07:41 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:07:41 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:07:41 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:07:41 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:07:41 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:07:41 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:07:41 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:07:41 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:07:41 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:07:41 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:07:41 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:07:41 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:07:41 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:07:41 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 0a07d59e-be8b-4d41-a103-fb5a64bf6f88
Oct 14 05:07:41 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:07:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:41.214 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'env', 'PROCESS_TAG=haproxy-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0a07d59e-be8b-4d41-a103-fb5a64bf6f88.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:07:41 np0005486808 nova_compute[259627]: 2025-10-14 09:07:41.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:41 np0005486808 podman[328411]: 2025-10-14 09:07:41.664096492 +0000 UTC m=+0.082853171 container create 3ed3db388e1811d2fe2943e10fec34c81206373b3495c340a2ce64fb3c1dddf6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009)
Oct 14 05:07:41 np0005486808 podman[328411]: 2025-10-14 09:07:41.616069609 +0000 UTC m=+0.034826348 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:07:41 np0005486808 systemd[1]: Started libpod-conmon-3ed3db388e1811d2fe2943e10fec34c81206373b3495c340a2ce64fb3c1dddf6.scope.
Oct 14 05:07:41 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:07:41 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ae4ef622a69186680b309d3a53a544ac1d19dfd134061b342523d307a4ae263/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:07:41 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1569: 305 pgs: 305 active+clean; 213 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 511 KiB/s rd, 6.8 MiB/s wr, 171 op/s
Oct 14 05:07:41 np0005486808 podman[328411]: 2025-10-14 09:07:41.792488824 +0000 UTC m=+0.211245553 container init 3ed3db388e1811d2fe2943e10fec34c81206373b3495c340a2ce64fb3c1dddf6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 05:07:41 np0005486808 podman[328411]: 2025-10-14 09:07:41.803192338 +0000 UTC m=+0.221949007 container start 3ed3db388e1811d2fe2943e10fec34c81206373b3495c340a2ce64fb3c1dddf6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:07:41 np0005486808 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[328467]: [NOTICE]   (328472) : New worker (328474) forked
Oct 14 05:07:41 np0005486808 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[328467]: [NOTICE]   (328472) : Loading success.
Oct 14 05:07:42 np0005486808 nova_compute[259627]: 2025-10-14 09:07:42.071 2 DEBUG nova.network.neutron [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Updating instance_info_cache with network_info: [{"id": "2d7c67a0-10d0-4de1-a430-60e038fcf537", "address": "fa:16:3e:b6:5f:31", "network": {"id": "df3e044c-d77c-4323-a9d7-2b0425933df0", "bridge": "br-int", "label": "tempest-ServersTestJSON-338264538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9883ad901fc41c2a340b171d7165a0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d7c67a0-10", "ovs_interfaceid": "2d7c67a0-10d0-4de1-a430-60e038fcf537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:07:42 np0005486808 nova_compute[259627]: 2025-10-14 09:07:42.089 2 DEBUG oslo_concurrency.lockutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Releasing lock "refresh_cache-8a58a504-85a5-44e6-b815-99abb4ca2fc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:07:42 np0005486808 nova_compute[259627]: 2025-10-14 09:07:42.090 2 DEBUG nova.compute.manager [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Instance network_info: |[{"id": "2d7c67a0-10d0-4de1-a430-60e038fcf537", "address": "fa:16:3e:b6:5f:31", "network": {"id": "df3e044c-d77c-4323-a9d7-2b0425933df0", "bridge": "br-int", "label": "tempest-ServersTestJSON-338264538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9883ad901fc41c2a340b171d7165a0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d7c67a0-10", "ovs_interfaceid": "2d7c67a0-10d0-4de1-a430-60e038fcf537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:07:42 np0005486808 nova_compute[259627]: 2025-10-14 09:07:42.091 2 DEBUG oslo_concurrency.lockutils [req-b344db00-4be9-4c1c-8afa-ed3fb1fbbae6 req-a8b71dae-21aa-41af-9e21-6235b1dadebc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-8a58a504-85a5-44e6-b815-99abb4ca2fc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:07:42 np0005486808 nova_compute[259627]: 2025-10-14 09:07:42.092 2 DEBUG nova.network.neutron [req-b344db00-4be9-4c1c-8afa-ed3fb1fbbae6 req-a8b71dae-21aa-41af-9e21-6235b1dadebc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Refreshing network info cache for port 2d7c67a0-10d0-4de1-a430-60e038fcf537 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:07:42 np0005486808 nova_compute[259627]: 2025-10-14 09:07:42.097 2 DEBUG nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Start _get_guest_xml network_info=[{"id": "2d7c67a0-10d0-4de1-a430-60e038fcf537", "address": "fa:16:3e:b6:5f:31", "network": {"id": "df3e044c-d77c-4323-a9d7-2b0425933df0", "bridge": "br-int", "label": "tempest-ServersTestJSON-338264538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9883ad901fc41c2a340b171d7165a0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d7c67a0-10", "ovs_interfaceid": "2d7c67a0-10d0-4de1-a430-60e038fcf537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:07:42 np0005486808 nova_compute[259627]: 2025-10-14 09:07:42.105 2 WARNING nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:07:42 np0005486808 nova_compute[259627]: 2025-10-14 09:07:42.114 2 DEBUG nova.virt.libvirt.host [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:07:42 np0005486808 nova_compute[259627]: 2025-10-14 09:07:42.115 2 DEBUG nova.virt.libvirt.host [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:07:42 np0005486808 nova_compute[259627]: 2025-10-14 09:07:42.120 2 DEBUG nova.virt.libvirt.host [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:07:42 np0005486808 nova_compute[259627]: 2025-10-14 09:07:42.121 2 DEBUG nova.virt.libvirt.host [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:07:42 np0005486808 nova_compute[259627]: 2025-10-14 09:07:42.122 2 DEBUG nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:07:42 np0005486808 nova_compute[259627]: 2025-10-14 09:07:42.122 2 DEBUG nova.virt.hardware [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:07:42 np0005486808 nova_compute[259627]: 2025-10-14 09:07:42.123 2 DEBUG nova.virt.hardware [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:07:42 np0005486808 nova_compute[259627]: 2025-10-14 09:07:42.124 2 DEBUG nova.virt.hardware [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:07:42 np0005486808 nova_compute[259627]: 2025-10-14 09:07:42.124 2 DEBUG nova.virt.hardware [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:07:42 np0005486808 nova_compute[259627]: 2025-10-14 09:07:42.125 2 DEBUG nova.virt.hardware [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:07:42 np0005486808 nova_compute[259627]: 2025-10-14 09:07:42.125 2 DEBUG nova.virt.hardware [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:07:42 np0005486808 nova_compute[259627]: 2025-10-14 09:07:42.126 2 DEBUG nova.virt.hardware [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:07:42 np0005486808 nova_compute[259627]: 2025-10-14 09:07:42.127 2 DEBUG nova.virt.hardware [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:07:42 np0005486808 nova_compute[259627]: 2025-10-14 09:07:42.127 2 DEBUG nova.virt.hardware [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:07:42 np0005486808 nova_compute[259627]: 2025-10-14 09:07:42.128 2 DEBUG nova.virt.hardware [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:07:42 np0005486808 nova_compute[259627]: 2025-10-14 09:07:42.129 2 DEBUG nova.virt.hardware [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:07:42 np0005486808 nova_compute[259627]: 2025-10-14 09:07:42.133 2 DEBUG oslo_concurrency.processutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:07:42 np0005486808 nova_compute[259627]: 2025-10-14 09:07:42.287 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432862.2867455, 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:07:42 np0005486808 nova_compute[259627]: 2025-10-14 09:07:42.288 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] VM Started (Lifecycle Event)#033[00m
Oct 14 05:07:42 np0005486808 nova_compute[259627]: 2025-10-14 09:07:42.319 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:07:42 np0005486808 nova_compute[259627]: 2025-10-14 09:07:42.325 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432862.288286, 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:07:42 np0005486808 nova_compute[259627]: 2025-10-14 09:07:42.326 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:07:42 np0005486808 nova_compute[259627]: 2025-10-14 09:07:42.348 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:07:42 np0005486808 nova_compute[259627]: 2025-10-14 09:07:42.351 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:07:42 np0005486808 nova_compute[259627]: 2025-10-14 09:07:42.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:42 np0005486808 nova_compute[259627]: 2025-10-14 09:07:42.381 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:07:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:07:42 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/38985500' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:07:42 np0005486808 nova_compute[259627]: 2025-10-14 09:07:42.648 2 DEBUG oslo_concurrency.processutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:07:42 np0005486808 nova_compute[259627]: 2025-10-14 09:07:42.682 2 DEBUG nova.storage.rbd_utils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] rbd image 8a58a504-85a5-44e6-b815-99abb4ca2fc8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:07:42 np0005486808 nova_compute[259627]: 2025-10-14 09:07:42.686 2 DEBUG oslo_concurrency.processutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:07:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:07:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:07:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:07:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:07:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:07:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0014490414591065964 of space, bias 1.0, pg target 0.43471243773197893 quantized to 32 (current 32)
Oct 14 05:07:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:07:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:07:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:07:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:07:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:07:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 14 05:07:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:07:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:07:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:07:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:07:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:07:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:07:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:07:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:07:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:07:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:07:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:07:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:07:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:07:43 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/182743236' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:07:43 np0005486808 nova_compute[259627]: 2025-10-14 09:07:43.167 2 DEBUG oslo_concurrency.processutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:07:43 np0005486808 nova_compute[259627]: 2025-10-14 09:07:43.172 2 DEBUG nova.virt.libvirt.vif [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:07:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1860549828',display_name='tempest-ServersTestJSON-server-1860549828',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1860549828',id=66,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJPBjCQeMHoUcI/GHoNmPZT13/aYbG1GbRQyp9yA777AdaKu728JKactgc6o+aymRL18NOp98nhjzfD96xfaginRg8v3g0mj8wP4FAzm7DJKAkKlO+Gseanq7GXJCgToDw==',key_name='tempest-keypair-1047945045',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f9883ad901fc41c2a340b171d7165a0e',ramdisk_id='',reservation_id='r-owfgitz4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1568732941',owner_user_name='tempest-ServersTestJSON-1568732941-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:07:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='40a7a5045f164fb3bc6f8ae8a40f6bac',uuid=8a58a504-85a5-44e6-b815-99abb4ca2fc8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2d7c67a0-10d0-4de1-a430-60e038fcf537", "address": "fa:16:3e:b6:5f:31", "network": {"id": "df3e044c-d77c-4323-a9d7-2b0425933df0", "bridge": "br-int", "label": "tempest-ServersTestJSON-338264538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9883ad901fc41c2a340b171d7165a0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d7c67a0-10", "ovs_interfaceid": "2d7c67a0-10d0-4de1-a430-60e038fcf537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:07:43 np0005486808 nova_compute[259627]: 2025-10-14 09:07:43.173 2 DEBUG nova.network.os_vif_util [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Converting VIF {"id": "2d7c67a0-10d0-4de1-a430-60e038fcf537", "address": "fa:16:3e:b6:5f:31", "network": {"id": "df3e044c-d77c-4323-a9d7-2b0425933df0", "bridge": "br-int", "label": "tempest-ServersTestJSON-338264538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9883ad901fc41c2a340b171d7165a0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d7c67a0-10", "ovs_interfaceid": "2d7c67a0-10d0-4de1-a430-60e038fcf537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:07:43 np0005486808 nova_compute[259627]: 2025-10-14 09:07:43.174 2 DEBUG nova.network.os_vif_util [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b6:5f:31,bridge_name='br-int',has_traffic_filtering=True,id=2d7c67a0-10d0-4de1-a430-60e038fcf537,network=Network(df3e044c-d77c-4323-a9d7-2b0425933df0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d7c67a0-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:07:43 np0005486808 nova_compute[259627]: 2025-10-14 09:07:43.176 2 DEBUG nova.objects.instance [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Lazy-loading 'pci_devices' on Instance uuid 8a58a504-85a5-44e6-b815-99abb4ca2fc8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:07:43 np0005486808 nova_compute[259627]: 2025-10-14 09:07:43.201 2 DEBUG nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:07:43 np0005486808 nova_compute[259627]:  <uuid>8a58a504-85a5-44e6-b815-99abb4ca2fc8</uuid>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:  <name>instance-00000042</name>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:07:43 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:      <nova:name>tempest-ServersTestJSON-server-1860549828</nova:name>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:07:42</nova:creationTime>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:07:43 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:        <nova:user uuid="40a7a5045f164fb3bc6f8ae8a40f6bac">tempest-ServersTestJSON-1568732941-project-member</nova:user>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:        <nova:project uuid="f9883ad901fc41c2a340b171d7165a0e">tempest-ServersTestJSON-1568732941</nova:project>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:        <nova:port uuid="2d7c67a0-10d0-4de1-a430-60e038fcf537">
Oct 14 05:07:43 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:      <entry name="serial">8a58a504-85a5-44e6-b815-99abb4ca2fc8</entry>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:      <entry name="uuid">8a58a504-85a5-44e6-b815-99abb4ca2fc8</entry>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:07:43 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/8a58a504-85a5-44e6-b815-99abb4ca2fc8_disk">
Oct 14 05:07:43 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:07:43 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:07:43 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/8a58a504-85a5-44e6-b815-99abb4ca2fc8_disk.config">
Oct 14 05:07:43 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:07:43 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:07:43 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:b6:5f:31"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:      <target dev="tap2d7c67a0-10"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:07:43 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/8a58a504-85a5-44e6-b815-99abb4ca2fc8/console.log" append="off"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:07:43 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:07:43 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:07:43 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:07:43 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:07:43 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:07:43 np0005486808 nova_compute[259627]: 2025-10-14 09:07:43.202 2 DEBUG nova.compute.manager [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Preparing to wait for external event network-vif-plugged-2d7c67a0-10d0-4de1-a430-60e038fcf537 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:07:43 np0005486808 nova_compute[259627]: 2025-10-14 09:07:43.203 2 DEBUG oslo_concurrency.lockutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Acquiring lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:07:43 np0005486808 nova_compute[259627]: 2025-10-14 09:07:43.203 2 DEBUG oslo_concurrency.lockutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:07:43 np0005486808 nova_compute[259627]: 2025-10-14 09:07:43.203 2 DEBUG oslo_concurrency.lockutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:07:43 np0005486808 nova_compute[259627]: 2025-10-14 09:07:43.204 2 DEBUG nova.virt.libvirt.vif [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:07:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1860549828',display_name='tempest-ServersTestJSON-server-1860549828',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1860549828',id=66,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJPBjCQeMHoUcI/GHoNmPZT13/aYbG1GbRQyp9yA777AdaKu728JKactgc6o+aymRL18NOp98nhjzfD96xfaginRg8v3g0mj8wP4FAzm7DJKAkKlO+Gseanq7GXJCgToDw==',key_name='tempest-keypair-1047945045',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f9883ad901fc41c2a340b171d7165a0e',ramdisk_id='',reservation_id='r-owfgitz4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1568732941',owner_user_name='tempest-ServersTestJSON-1568732941-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:07:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='40a7a5045f164fb3bc6f8ae8a40f6bac',uuid=8a58a504-85a5-44e6-b815-99abb4ca2fc8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2d7c67a0-10d0-4de1-a430-60e038fcf537", "address": "fa:16:3e:b6:5f:31", "network": {"id": "df3e044c-d77c-4323-a9d7-2b0425933df0", "bridge": "br-int", "label": "tempest-ServersTestJSON-338264538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9883ad901fc41c2a340b171d7165a0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d7c67a0-10", "ovs_interfaceid": "2d7c67a0-10d0-4de1-a430-60e038fcf537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:07:43 np0005486808 nova_compute[259627]: 2025-10-14 09:07:43.205 2 DEBUG nova.network.os_vif_util [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Converting VIF {"id": "2d7c67a0-10d0-4de1-a430-60e038fcf537", "address": "fa:16:3e:b6:5f:31", "network": {"id": "df3e044c-d77c-4323-a9d7-2b0425933df0", "bridge": "br-int", "label": "tempest-ServersTestJSON-338264538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9883ad901fc41c2a340b171d7165a0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d7c67a0-10", "ovs_interfaceid": "2d7c67a0-10d0-4de1-a430-60e038fcf537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:07:43 np0005486808 nova_compute[259627]: 2025-10-14 09:07:43.206 2 DEBUG nova.network.os_vif_util [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b6:5f:31,bridge_name='br-int',has_traffic_filtering=True,id=2d7c67a0-10d0-4de1-a430-60e038fcf537,network=Network(df3e044c-d77c-4323-a9d7-2b0425933df0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d7c67a0-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:07:43 np0005486808 nova_compute[259627]: 2025-10-14 09:07:43.206 2 DEBUG os_vif [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:5f:31,bridge_name='br-int',has_traffic_filtering=True,id=2d7c67a0-10d0-4de1-a430-60e038fcf537,network=Network(df3e044c-d77c-4323-a9d7-2b0425933df0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d7c67a0-10') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:07:43 np0005486808 nova_compute[259627]: 2025-10-14 09:07:43.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:43 np0005486808 nova_compute[259627]: 2025-10-14 09:07:43.207 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:07:43 np0005486808 nova_compute[259627]: 2025-10-14 09:07:43.208 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:07:43 np0005486808 nova_compute[259627]: 2025-10-14 09:07:43.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:43 np0005486808 nova_compute[259627]: 2025-10-14 09:07:43.211 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2d7c67a0-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:07:43 np0005486808 nova_compute[259627]: 2025-10-14 09:07:43.212 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2d7c67a0-10, col_values=(('external_ids', {'iface-id': '2d7c67a0-10d0-4de1-a430-60e038fcf537', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b6:5f:31', 'vm-uuid': '8a58a504-85a5-44e6-b815-99abb4ca2fc8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:07:43 np0005486808 NetworkManager[44885]: <info>  [1760432863.2508] manager: (tap2d7c67a0-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/283)
Oct 14 05:07:43 np0005486808 nova_compute[259627]: 2025-10-14 09:07:43.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:43 np0005486808 nova_compute[259627]: 2025-10-14 09:07:43.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:07:43 np0005486808 nova_compute[259627]: 2025-10-14 09:07:43.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:43 np0005486808 nova_compute[259627]: 2025-10-14 09:07:43.261 2 INFO os_vif [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:5f:31,bridge_name='br-int',has_traffic_filtering=True,id=2d7c67a0-10d0-4de1-a430-60e038fcf537,network=Network(df3e044c-d77c-4323-a9d7-2b0425933df0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d7c67a0-10')#033[00m
Oct 14 05:07:43 np0005486808 nova_compute[259627]: 2025-10-14 09:07:43.328 2 DEBUG nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:07:43 np0005486808 nova_compute[259627]: 2025-10-14 09:07:43.328 2 DEBUG nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:07:43 np0005486808 nova_compute[259627]: 2025-10-14 09:07:43.329 2 DEBUG nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] No VIF found with MAC fa:16:3e:b6:5f:31, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:07:43 np0005486808 nova_compute[259627]: 2025-10-14 09:07:43.329 2 INFO nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Using config drive#033[00m
Oct 14 05:07:43 np0005486808 nova_compute[259627]: 2025-10-14 09:07:43.357 2 DEBUG nova.storage.rbd_utils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] rbd image 8a58a504-85a5-44e6-b815-99abb4ca2fc8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:07:43 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1570: 305 pgs: 305 active+clean; 213 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 511 KiB/s rd, 6.8 MiB/s wr, 171 op/s
Oct 14 05:07:43 np0005486808 nova_compute[259627]: 2025-10-14 09:07:43.961 2 INFO nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Creating config drive at /var/lib/nova/instances/8a58a504-85a5-44e6-b815-99abb4ca2fc8/disk.config#033[00m
Oct 14 05:07:43 np0005486808 nova_compute[259627]: 2025-10-14 09:07:43.973 2 DEBUG oslo_concurrency.processutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8a58a504-85a5-44e6-b815-99abb4ca2fc8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq8mdkh22 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:07:44 np0005486808 nova_compute[259627]: 2025-10-14 09:07:44.134 2 DEBUG oslo_concurrency.processutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8a58a504-85a5-44e6-b815-99abb4ca2fc8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq8mdkh22" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:07:44 np0005486808 nova_compute[259627]: 2025-10-14 09:07:44.171 2 DEBUG nova.storage.rbd_utils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] rbd image 8a58a504-85a5-44e6-b815-99abb4ca2fc8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:07:44 np0005486808 nova_compute[259627]: 2025-10-14 09:07:44.177 2 DEBUG oslo_concurrency.processutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8a58a504-85a5-44e6-b815-99abb4ca2fc8/disk.config 8a58a504-85a5-44e6-b815-99abb4ca2fc8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:07:44 np0005486808 nova_compute[259627]: 2025-10-14 09:07:44.392 2 DEBUG oslo_concurrency.processutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8a58a504-85a5-44e6-b815-99abb4ca2fc8/disk.config 8a58a504-85a5-44e6-b815-99abb4ca2fc8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.215s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:07:44 np0005486808 nova_compute[259627]: 2025-10-14 09:07:44.394 2 INFO nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Deleting local config drive /var/lib/nova/instances/8a58a504-85a5-44e6-b815-99abb4ca2fc8/disk.config because it was imported into RBD.#033[00m
Oct 14 05:07:44 np0005486808 NetworkManager[44885]: <info>  [1760432864.4682] manager: (tap2d7c67a0-10): new Tun device (/org/freedesktop/NetworkManager/Devices/284)
Oct 14 05:07:44 np0005486808 kernel: tap2d7c67a0-10: entered promiscuous mode
Oct 14 05:07:44 np0005486808 ovn_controller[152662]: 2025-10-14T09:07:44Z|00663|binding|INFO|Claiming lport 2d7c67a0-10d0-4de1-a430-60e038fcf537 for this chassis.
Oct 14 05:07:44 np0005486808 ovn_controller[152662]: 2025-10-14T09:07:44Z|00664|binding|INFO|2d7c67a0-10d0-4de1-a430-60e038fcf537: Claiming fa:16:3e:b6:5f:31 10.100.0.3
Oct 14 05:07:44 np0005486808 nova_compute[259627]: 2025-10-14 09:07:44.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.532 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b6:5f:31 10.100.0.3'], port_security=['fa:16:3e:b6:5f:31 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8a58a504-85a5-44e6-b815-99abb4ca2fc8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df3e044c-d77c-4323-a9d7-2b0425933df0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f9883ad901fc41c2a340b171d7165a0e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '38f16233-f765-495d-b104-7721931f5384', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b8aeb3f0-29e9-44d2-ad79-ad7dad88caab, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=2d7c67a0-10d0-4de1-a430-60e038fcf537) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.534 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 2d7c67a0-10d0-4de1-a430-60e038fcf537 in datapath df3e044c-d77c-4323-a9d7-2b0425933df0 bound to our chassis#033[00m
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.536 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network df3e044c-d77c-4323-a9d7-2b0425933df0#033[00m
Oct 14 05:07:44 np0005486808 systemd-udevd[328619]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.549 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[045e2ecd-d6a1-473a-b751-bc2c1c2a5f52]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.551 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdf3e044c-d1 in ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.552 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdf3e044c-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.553 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e96524c6-0230-48b4-852b-f61150439b3a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:44 np0005486808 ovn_controller[152662]: 2025-10-14T09:07:44Z|00665|binding|INFO|Setting lport 2d7c67a0-10d0-4de1-a430-60e038fcf537 ovn-installed in OVS
Oct 14 05:07:44 np0005486808 ovn_controller[152662]: 2025-10-14T09:07:44Z|00666|binding|INFO|Setting lport 2d7c67a0-10d0-4de1-a430-60e038fcf537 up in Southbound
Oct 14 05:07:44 np0005486808 nova_compute[259627]: 2025-10-14 09:07:44.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.553 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[70814f65-df00-48fa-8e16-0c8c8ee44b5b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:44 np0005486808 systemd-machined[214636]: New machine qemu-80-instance-00000042.
Oct 14 05:07:44 np0005486808 NetworkManager[44885]: <info>  [1760432864.5651] device (tap2d7c67a0-10): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:07:44 np0005486808 NetworkManager[44885]: <info>  [1760432864.5664] device (tap2d7c67a0-10): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.570 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[ddb76f71-7d96-423d-9016-83b1bcce7fc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:44 np0005486808 systemd[1]: Started Virtual Machine qemu-80-instance-00000042.
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.596 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f457a325-81d3-4883-a454-6a54a116daba]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.627 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e1723f40-072e-495b-a558-467e7d27109d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:44 np0005486808 NetworkManager[44885]: <info>  [1760432864.6355] manager: (tapdf3e044c-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/285)
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.634 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c9abff52-a3a3-41eb-8183-4e8b359ddafe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.679 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[64d76807-1ef8-424e-8aa6-98bad444f061]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.682 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[0d3c9438-6e76-44e7-9bcf-5bf8bdbfcacd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:44 np0005486808 NetworkManager[44885]: <info>  [1760432864.7089] device (tapdf3e044c-d0): carrier: link connected
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.717 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[11e2decd-0cdc-4d38-b343-865b73de2450]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.738 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0ab1e218-9699-400a-ba7b-17472cd73f3c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdf3e044c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:34:44:29'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 193], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664347, 'reachable_time': 17983, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328652, 'error': None, 'target': 'ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.765 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8da52f18-d28c-4337-ada8-138e8884cb72]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe34:4429'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 664347, 'tstamp': 664347}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 328653, 'error': None, 'target': 'ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.788 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bd1d6f85-3936-4cab-9e10-d5e389ff381a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdf3e044c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:34:44:29'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 193], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664347, 'reachable_time': 17983, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 328654, 'error': None, 'target': 'ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.835 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d11cedfd-32ba-482c-aead-5855ffc4d135]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.900 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[34fe9948-5c8e-467b-81e0-d1b2c5f4e000]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.902 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf3e044c-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.903 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.904 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdf3e044c-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:07:44 np0005486808 NetworkManager[44885]: <info>  [1760432864.9078] manager: (tapdf3e044c-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/286)
Oct 14 05:07:44 np0005486808 kernel: tapdf3e044c-d0: entered promiscuous mode
Oct 14 05:07:44 np0005486808 nova_compute[259627]: 2025-10-14 09:07:44.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.912 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdf3e044c-d0, col_values=(('external_ids', {'iface-id': '7ee05a46-b477-4dd4-add4-3044883f8018'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:07:44 np0005486808 nova_compute[259627]: 2025-10-14 09:07:44.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:44 np0005486808 ovn_controller[152662]: 2025-10-14T09:07:44Z|00667|binding|INFO|Releasing lport 7ee05a46-b477-4dd4-add4-3044883f8018 from this chassis (sb_readonly=0)
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.915 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/df3e044c-d77c-4323-a9d7-2b0425933df0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/df3e044c-d77c-4323-a9d7-2b0425933df0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.920 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[108ab0ce-b410-4e24-93bd-98446f79fa09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.922 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-df3e044c-d77c-4323-a9d7-2b0425933df0
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/df3e044c-d77c-4323-a9d7-2b0425933df0.pid.haproxy
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID df3e044c-d77c-4323-a9d7-2b0425933df0
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:07:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:44.923 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0', 'env', 'PROCESS_TAG=haproxy-df3e044c-d77c-4323-a9d7-2b0425933df0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/df3e044c-d77c-4323-a9d7-2b0425933df0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:07:44 np0005486808 nova_compute[259627]: 2025-10-14 09:07:44.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:45 np0005486808 nova_compute[259627]: 2025-10-14 09:07:45.135 2 DEBUG nova.network.neutron [req-b344db00-4be9-4c1c-8afa-ed3fb1fbbae6 req-a8b71dae-21aa-41af-9e21-6235b1dadebc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Updated VIF entry in instance network info cache for port 2d7c67a0-10d0-4de1-a430-60e038fcf537. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:07:45 np0005486808 nova_compute[259627]: 2025-10-14 09:07:45.137 2 DEBUG nova.network.neutron [req-b344db00-4be9-4c1c-8afa-ed3fb1fbbae6 req-a8b71dae-21aa-41af-9e21-6235b1dadebc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Updating instance_info_cache with network_info: [{"id": "2d7c67a0-10d0-4de1-a430-60e038fcf537", "address": "fa:16:3e:b6:5f:31", "network": {"id": "df3e044c-d77c-4323-a9d7-2b0425933df0", "bridge": "br-int", "label": "tempest-ServersTestJSON-338264538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9883ad901fc41c2a340b171d7165a0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d7c67a0-10", "ovs_interfaceid": "2d7c67a0-10d0-4de1-a430-60e038fcf537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:07:45 np0005486808 nova_compute[259627]: 2025-10-14 09:07:45.162 2 DEBUG oslo_concurrency.lockutils [req-b344db00-4be9-4c1c-8afa-ed3fb1fbbae6 req-a8b71dae-21aa-41af-9e21-6235b1dadebc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-8a58a504-85a5-44e6-b815-99abb4ca2fc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:07:45 np0005486808 podman[328728]: 2025-10-14 09:07:45.353206345 +0000 UTC m=+0.054557354 container create 5d02d8391ff1827db34b998999ac13c1154619e05e1156557b36b6b8c09afd3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 05:07:45 np0005486808 systemd[1]: Started libpod-conmon-5d02d8391ff1827db34b998999ac13c1154619e05e1156557b36b6b8c09afd3b.scope.
Oct 14 05:07:45 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:07:45 np0005486808 podman[328728]: 2025-10-14 09:07:45.327997184 +0000 UTC m=+0.029348183 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:07:45 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18ff3ed9243abc8a60fd8bac38117c2979ed9d105d233f8f10fb601e76b89352/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:07:45 np0005486808 podman[328728]: 2025-10-14 09:07:45.438828494 +0000 UTC m=+0.140179523 container init 5d02d8391ff1827db34b998999ac13c1154619e05e1156557b36b6b8c09afd3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 05:07:45 np0005486808 podman[328728]: 2025-10-14 09:07:45.444715909 +0000 UTC m=+0.146066918 container start 5d02d8391ff1827db34b998999ac13c1154619e05e1156557b36b6b8c09afd3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:07:45 np0005486808 neutron-haproxy-ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0[328743]: [NOTICE]   (328747) : New worker (328749) forked
Oct 14 05:07:45 np0005486808 neutron-haproxy-ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0[328743]: [NOTICE]   (328747) : Loading success.
Oct 14 05:07:45 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1571: 305 pgs: 305 active+clean; 214 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 472 KiB/s rd, 4.8 MiB/s wr, 140 op/s
Oct 14 05:07:45 np0005486808 nova_compute[259627]: 2025-10-14 09:07:45.789 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432865.7882369, 8a58a504-85a5-44e6-b815-99abb4ca2fc8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:07:45 np0005486808 nova_compute[259627]: 2025-10-14 09:07:45.789 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] VM Started (Lifecycle Event)#033[00m
Oct 14 05:07:45 np0005486808 nova_compute[259627]: 2025-10-14 09:07:45.819 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:07:45 np0005486808 nova_compute[259627]: 2025-10-14 09:07:45.824 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432865.7893763, 8a58a504-85a5-44e6-b815-99abb4ca2fc8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:07:45 np0005486808 nova_compute[259627]: 2025-10-14 09:07:45.825 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:07:45 np0005486808 nova_compute[259627]: 2025-10-14 09:07:45.851 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:07:45 np0005486808 nova_compute[259627]: 2025-10-14 09:07:45.855 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:07:45 np0005486808 nova_compute[259627]: 2025-10-14 09:07:45.891 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:07:46 np0005486808 nova_compute[259627]: 2025-10-14 09:07:46.616 2 DEBUG nova.compute.manager [None req-747a28f4-a835-43e3-843e-da3b34c92b5e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:07:46 np0005486808 nova_compute[259627]: 2025-10-14 09:07:46.656 2 INFO nova.compute.manager [None req-747a28f4-a835-43e3-843e-da3b34c92b5e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] instance snapshotting#033[00m
Oct 14 05:07:46 np0005486808 nova_compute[259627]: 2025-10-14 09:07:46.657 2 DEBUG nova.objects.instance [None req-747a28f4-a835-43e3-843e-da3b34c92b5e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lazy-loading 'flavor' on Instance uuid e065d857-2df9-4199-aa98-41ca3c436bad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:07:46 np0005486808 nova_compute[259627]: 2025-10-14 09:07:46.798 2 DEBUG nova.compute.manager [req-5a82374e-0eed-4793-85de-f88abfd10841 req-7887a4ef-6899-49b6-bf11-af1a246188a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Received event network-vif-plugged-2d7c67a0-10d0-4de1-a430-60e038fcf537 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:07:46 np0005486808 nova_compute[259627]: 2025-10-14 09:07:46.799 2 DEBUG oslo_concurrency.lockutils [req-5a82374e-0eed-4793-85de-f88abfd10841 req-7887a4ef-6899-49b6-bf11-af1a246188a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:07:46 np0005486808 nova_compute[259627]: 2025-10-14 09:07:46.800 2 DEBUG oslo_concurrency.lockutils [req-5a82374e-0eed-4793-85de-f88abfd10841 req-7887a4ef-6899-49b6-bf11-af1a246188a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:07:46 np0005486808 nova_compute[259627]: 2025-10-14 09:07:46.801 2 DEBUG oslo_concurrency.lockutils [req-5a82374e-0eed-4793-85de-f88abfd10841 req-7887a4ef-6899-49b6-bf11-af1a246188a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:07:46 np0005486808 nova_compute[259627]: 2025-10-14 09:07:46.801 2 DEBUG nova.compute.manager [req-5a82374e-0eed-4793-85de-f88abfd10841 req-7887a4ef-6899-49b6-bf11-af1a246188a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Processing event network-vif-plugged-2d7c67a0-10d0-4de1-a430-60e038fcf537 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:07:46 np0005486808 nova_compute[259627]: 2025-10-14 09:07:46.803 2 DEBUG nova.compute.manager [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:07:46 np0005486808 nova_compute[259627]: 2025-10-14 09:07:46.807 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432866.8070023, 8a58a504-85a5-44e6-b815-99abb4ca2fc8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:07:46 np0005486808 nova_compute[259627]: 2025-10-14 09:07:46.807 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:07:46 np0005486808 nova_compute[259627]: 2025-10-14 09:07:46.810 2 DEBUG nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:07:46 np0005486808 nova_compute[259627]: 2025-10-14 09:07:46.815 2 INFO nova.virt.libvirt.driver [-] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Instance spawned successfully.#033[00m
Oct 14 05:07:46 np0005486808 nova_compute[259627]: 2025-10-14 09:07:46.816 2 DEBUG nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:07:46 np0005486808 nova_compute[259627]: 2025-10-14 09:07:46.850 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:07:46 np0005486808 nova_compute[259627]: 2025-10-14 09:07:46.860 2 DEBUG nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:07:46 np0005486808 nova_compute[259627]: 2025-10-14 09:07:46.861 2 DEBUG nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:07:46 np0005486808 nova_compute[259627]: 2025-10-14 09:07:46.862 2 DEBUG nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:07:46 np0005486808 nova_compute[259627]: 2025-10-14 09:07:46.863 2 DEBUG nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:07:46 np0005486808 nova_compute[259627]: 2025-10-14 09:07:46.864 2 DEBUG nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:07:46 np0005486808 nova_compute[259627]: 2025-10-14 09:07:46.865 2 DEBUG nova.virt.libvirt.driver [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:07:46 np0005486808 nova_compute[259627]: 2025-10-14 09:07:46.872 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:07:46 np0005486808 nova_compute[259627]: 2025-10-14 09:07:46.927 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:07:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:46.934 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:07:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:46.936 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 14 05:07:46 np0005486808 nova_compute[259627]: 2025-10-14 09:07:46.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:46 np0005486808 nova_compute[259627]: 2025-10-14 09:07:46.943 2 INFO nova.compute.manager [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Took 8.64 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:07:46 np0005486808 nova_compute[259627]: 2025-10-14 09:07:46.944 2 DEBUG nova.compute.manager [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:07:46 np0005486808 nova_compute[259627]: 2025-10-14 09:07:46.982 2 INFO nova.virt.libvirt.driver [None req-747a28f4-a835-43e3-843e-da3b34c92b5e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Beginning live snapshot process#033[00m
Oct 14 05:07:47 np0005486808 nova_compute[259627]: 2025-10-14 09:07:47.009 2 INFO nova.compute.manager [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Took 9.72 seconds to build instance.#033[00m
Oct 14 05:07:47 np0005486808 nova_compute[259627]: 2025-10-14 09:07:47.034 2 DEBUG oslo_concurrency.lockutils [None req-629caff2-e928-4953-a0ce-273428ee282e 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:07:47 np0005486808 nova_compute[259627]: 2025-10-14 09:07:47.121 2 DEBUG nova.virt.libvirt.imagebackend [None req-747a28f4-a835-43e3-843e-da3b34c92b5e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] No parent info for a4789543-f429-47d7-9f79-80a9d90a59f9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct 14 05:07:47 np0005486808 nova_compute[259627]: 2025-10-14 09:07:47.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:47 np0005486808 nova_compute[259627]: 2025-10-14 09:07:47.390 2 DEBUG nova.storage.rbd_utils [None req-747a28f4-a835-43e3-843e-da3b34c92b5e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] creating snapshot(8269a39a6fd14d9d8ed1b9ad0f12c4f5) on rbd image(e065d857-2df9-4199-aa98-41ca3c436bad_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 14 05:07:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:07:47 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1572: 305 pgs: 305 active+clean; 214 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 468 KiB/s rd, 4.8 MiB/s wr, 139 op/s
Oct 14 05:07:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e224 do_prune osdmap full prune enabled
Oct 14 05:07:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e225 e225: 3 total, 3 up, 3 in
Oct 14 05:07:48 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e225: 3 total, 3 up, 3 in
Oct 14 05:07:48 np0005486808 nova_compute[259627]: 2025-10-14 09:07:48.047 2 DEBUG nova.storage.rbd_utils [None req-747a28f4-a835-43e3-843e-da3b34c92b5e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] cloning vms/e065d857-2df9-4199-aa98-41ca3c436bad_disk@8269a39a6fd14d9d8ed1b9ad0f12c4f5 to images/4d823dba-1513-4c47-907f-2858c26de3c1 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct 14 05:07:48 np0005486808 nova_compute[259627]: 2025-10-14 09:07:48.091 2 DEBUG nova.compute.manager [req-984e67bb-31df-45af-915c-1236d89fd33d req-71010bed-08e1-404c-a8c6-28ba254a2594 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Received event network-vif-plugged-1dc4ac40-9a94-49bf-a098-664b98599004 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:07:48 np0005486808 nova_compute[259627]: 2025-10-14 09:07:48.092 2 DEBUG oslo_concurrency.lockutils [req-984e67bb-31df-45af-915c-1236d89fd33d req-71010bed-08e1-404c-a8c6-28ba254a2594 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:07:48 np0005486808 nova_compute[259627]: 2025-10-14 09:07:48.092 2 DEBUG oslo_concurrency.lockutils [req-984e67bb-31df-45af-915c-1236d89fd33d req-71010bed-08e1-404c-a8c6-28ba254a2594 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:07:48 np0005486808 nova_compute[259627]: 2025-10-14 09:07:48.093 2 DEBUG oslo_concurrency.lockutils [req-984e67bb-31df-45af-915c-1236d89fd33d req-71010bed-08e1-404c-a8c6-28ba254a2594 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:07:48 np0005486808 nova_compute[259627]: 2025-10-14 09:07:48.094 2 DEBUG nova.compute.manager [req-984e67bb-31df-45af-915c-1236d89fd33d req-71010bed-08e1-404c-a8c6-28ba254a2594 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Processing event network-vif-plugged-1dc4ac40-9a94-49bf-a098-664b98599004 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:07:48 np0005486808 nova_compute[259627]: 2025-10-14 09:07:48.095 2 DEBUG nova.compute.manager [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:07:48 np0005486808 nova_compute[259627]: 2025-10-14 09:07:48.100 2 DEBUG nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:07:48 np0005486808 nova_compute[259627]: 2025-10-14 09:07:48.102 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760432868.1023886, 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:07:48 np0005486808 nova_compute[259627]: 2025-10-14 09:07:48.103 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:07:48 np0005486808 nova_compute[259627]: 2025-10-14 09:07:48.107 2 INFO nova.virt.libvirt.driver [-] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Instance spawned successfully.#033[00m
Oct 14 05:07:48 np0005486808 nova_compute[259627]: 2025-10-14 09:07:48.108 2 DEBUG nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:07:48 np0005486808 nova_compute[259627]: 2025-10-14 09:07:48.133 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:07:48 np0005486808 nova_compute[259627]: 2025-10-14 09:07:48.141 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:07:48 np0005486808 nova_compute[259627]: 2025-10-14 09:07:48.146 2 DEBUG nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:07:48 np0005486808 nova_compute[259627]: 2025-10-14 09:07:48.147 2 DEBUG nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:07:48 np0005486808 nova_compute[259627]: 2025-10-14 09:07:48.148 2 DEBUG nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:07:48 np0005486808 nova_compute[259627]: 2025-10-14 09:07:48.148 2 DEBUG nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:07:48 np0005486808 nova_compute[259627]: 2025-10-14 09:07:48.149 2 DEBUG nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:07:48 np0005486808 nova_compute[259627]: 2025-10-14 09:07:48.151 2 DEBUG nova.virt.libvirt.driver [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:07:48 np0005486808 nova_compute[259627]: 2025-10-14 09:07:48.170 2 DEBUG nova.storage.rbd_utils [None req-747a28f4-a835-43e3-843e-da3b34c92b5e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] flattening images/4d823dba-1513-4c47-907f-2858c26de3c1 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct 14 05:07:48 np0005486808 nova_compute[259627]: 2025-10-14 09:07:48.220 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:07:48 np0005486808 nova_compute[259627]: 2025-10-14 09:07:48.236 2 INFO nova.compute.manager [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Took 12.70 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:07:48 np0005486808 nova_compute[259627]: 2025-10-14 09:07:48.238 2 DEBUG nova.compute.manager [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:07:48 np0005486808 nova_compute[259627]: 2025-10-14 09:07:48.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:48 np0005486808 nova_compute[259627]: 2025-10-14 09:07:48.308 2 INFO nova.compute.manager [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Took 13.80 seconds to build instance.#033[00m
Oct 14 05:07:48 np0005486808 nova_compute[259627]: 2025-10-14 09:07:48.328 2 DEBUG oslo_concurrency.lockutils [None req-d0067cd6-a023-4d27-b133-f4ccb27b6873 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:07:48 np0005486808 nova_compute[259627]: 2025-10-14 09:07:48.547 2 DEBUG nova.storage.rbd_utils [None req-747a28f4-a835-43e3-843e-da3b34c92b5e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] removing snapshot(8269a39a6fd14d9d8ed1b9ad0f12c4f5) on rbd image(e065d857-2df9-4199-aa98-41ca3c436bad_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct 14 05:07:48 np0005486808 nova_compute[259627]: 2025-10-14 09:07:48.945 2 DEBUG nova.compute.manager [req-b7224b85-92b3-4a93-9d19-cf55e3bcb2ce req-63089f25-6a15-4ce6-9a6f-5f1c0525190b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Received event network-vif-plugged-2d7c67a0-10d0-4de1-a430-60e038fcf537 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:07:48 np0005486808 nova_compute[259627]: 2025-10-14 09:07:48.946 2 DEBUG oslo_concurrency.lockutils [req-b7224b85-92b3-4a93-9d19-cf55e3bcb2ce req-63089f25-6a15-4ce6-9a6f-5f1c0525190b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:07:48 np0005486808 nova_compute[259627]: 2025-10-14 09:07:48.946 2 DEBUG oslo_concurrency.lockutils [req-b7224b85-92b3-4a93-9d19-cf55e3bcb2ce req-63089f25-6a15-4ce6-9a6f-5f1c0525190b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:07:48 np0005486808 nova_compute[259627]: 2025-10-14 09:07:48.946 2 DEBUG oslo_concurrency.lockutils [req-b7224b85-92b3-4a93-9d19-cf55e3bcb2ce req-63089f25-6a15-4ce6-9a6f-5f1c0525190b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:07:48 np0005486808 nova_compute[259627]: 2025-10-14 09:07:48.946 2 DEBUG nova.compute.manager [req-b7224b85-92b3-4a93-9d19-cf55e3bcb2ce req-63089f25-6a15-4ce6-9a6f-5f1c0525190b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] No waiting events found dispatching network-vif-plugged-2d7c67a0-10d0-4de1-a430-60e038fcf537 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:07:48 np0005486808 nova_compute[259627]: 2025-10-14 09:07:48.947 2 WARNING nova.compute.manager [req-b7224b85-92b3-4a93-9d19-cf55e3bcb2ce req-63089f25-6a15-4ce6-9a6f-5f1c0525190b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Received unexpected event network-vif-plugged-2d7c67a0-10d0-4de1-a430-60e038fcf537 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:07:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e225 do_prune osdmap full prune enabled
Oct 14 05:07:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e226 e226: 3 total, 3 up, 3 in
Oct 14 05:07:49 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e226: 3 total, 3 up, 3 in
Oct 14 05:07:49 np0005486808 nova_compute[259627]: 2025-10-14 09:07:49.041 2 DEBUG nova.storage.rbd_utils [None req-747a28f4-a835-43e3-843e-da3b34c92b5e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] creating snapshot(snap) on rbd image(4d823dba-1513-4c47-907f-2858c26de3c1) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 14 05:07:49 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1575: 305 pgs: 305 active+clean; 214 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 59 KiB/s wr, 27 op/s
Oct 14 05:07:50 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e226 do_prune osdmap full prune enabled
Oct 14 05:07:50 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e227 e227: 3 total, 3 up, 3 in
Oct 14 05:07:50 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e227: 3 total, 3 up, 3 in
Oct 14 05:07:50 np0005486808 nova_compute[259627]: 2025-10-14 09:07:50.210 2 DEBUG nova.compute.manager [req-7793ce39-ec53-4b26-8efa-1b2e02cacf64 req-9f8c4958-abf9-4dd1-83aa-84fa72ee9ff3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Received event network-vif-plugged-1dc4ac40-9a94-49bf-a098-664b98599004 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:07:50 np0005486808 nova_compute[259627]: 2025-10-14 09:07:50.211 2 DEBUG oslo_concurrency.lockutils [req-7793ce39-ec53-4b26-8efa-1b2e02cacf64 req-9f8c4958-abf9-4dd1-83aa-84fa72ee9ff3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:07:50 np0005486808 nova_compute[259627]: 2025-10-14 09:07:50.211 2 DEBUG oslo_concurrency.lockutils [req-7793ce39-ec53-4b26-8efa-1b2e02cacf64 req-9f8c4958-abf9-4dd1-83aa-84fa72ee9ff3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:07:50 np0005486808 nova_compute[259627]: 2025-10-14 09:07:50.211 2 DEBUG oslo_concurrency.lockutils [req-7793ce39-ec53-4b26-8efa-1b2e02cacf64 req-9f8c4958-abf9-4dd1-83aa-84fa72ee9ff3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:07:50 np0005486808 nova_compute[259627]: 2025-10-14 09:07:50.211 2 DEBUG nova.compute.manager [req-7793ce39-ec53-4b26-8efa-1b2e02cacf64 req-9f8c4958-abf9-4dd1-83aa-84fa72ee9ff3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] No waiting events found dispatching network-vif-plugged-1dc4ac40-9a94-49bf-a098-664b98599004 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:07:50 np0005486808 nova_compute[259627]: 2025-10-14 09:07:50.212 2 WARNING nova.compute.manager [req-7793ce39-ec53-4b26-8efa-1b2e02cacf64 req-9f8c4958-abf9-4dd1-83aa-84fa72ee9ff3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Received unexpected event network-vif-plugged-1dc4ac40-9a94-49bf-a098-664b98599004 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:07:50 np0005486808 nova_compute[259627]: 2025-10-14 09:07:50.289 2 DEBUG oslo_concurrency.lockutils [None req-723a0627-6d36-4fc8-a381-9238f81dce19 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:07:50 np0005486808 nova_compute[259627]: 2025-10-14 09:07:50.290 2 DEBUG oslo_concurrency.lockutils [None req-723a0627-6d36-4fc8-a381-9238f81dce19 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:07:50 np0005486808 nova_compute[259627]: 2025-10-14 09:07:50.290 2 DEBUG nova.compute.manager [None req-723a0627-6d36-4fc8-a381-9238f81dce19 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:07:50 np0005486808 nova_compute[259627]: 2025-10-14 09:07:50.293 2 DEBUG nova.compute.manager [None req-723a0627-6d36-4fc8-a381-9238f81dce19 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Oct 14 05:07:50 np0005486808 nova_compute[259627]: 2025-10-14 09:07:50.294 2 DEBUG nova.objects.instance [None req-723a0627-6d36-4fc8-a381-9238f81dce19 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lazy-loading 'flavor' on Instance uuid 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:07:50 np0005486808 nova_compute[259627]: 2025-10-14 09:07:50.314 2 DEBUG nova.virt.libvirt.driver [None req-723a0627-6d36-4fc8-a381-9238f81dce19 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct 14 05:07:51 np0005486808 nova_compute[259627]: 2025-10-14 09:07:51.432 2 INFO nova.virt.libvirt.driver [None req-747a28f4-a835-43e3-843e-da3b34c92b5e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Snapshot image upload complete#033[00m
Oct 14 05:07:51 np0005486808 nova_compute[259627]: 2025-10-14 09:07:51.434 2 INFO nova.compute.manager [None req-747a28f4-a835-43e3-843e-da3b34c92b5e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Took 4.74 seconds to snapshot the instance on the hypervisor.#033[00m
Oct 14 05:07:51 np0005486808 nova_compute[259627]: 2025-10-14 09:07:51.731 2 DEBUG nova.compute.manager [None req-747a28f4-a835-43e3-843e-da3b34c92b5e 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Found 1 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450#033[00m
Oct 14 05:07:51 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1577: 305 pgs: 305 active+clean; 293 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 15 MiB/s rd, 7.8 MiB/s wr, 413 op/s
Oct 14 05:07:52 np0005486808 nova_compute[259627]: 2025-10-14 09:07:52.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:07:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e227 do_prune osdmap full prune enabled
Oct 14 05:07:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e228 e228: 3 total, 3 up, 3 in
Oct 14 05:07:52 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e228: 3 total, 3 up, 3 in
Oct 14 05:07:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:07:52.939 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:07:53 np0005486808 nova_compute[259627]: 2025-10-14 09:07:53.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:53 np0005486808 podman[328900]: 2025-10-14 09:07:53.665820012 +0000 UTC m=+0.073468680 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 14 05:07:53 np0005486808 podman[328899]: 2025-10-14 09:07:53.702132866 +0000 UTC m=+0.115049984 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller)
Oct 14 05:07:53 np0005486808 nova_compute[259627]: 2025-10-14 09:07:53.730 2 DEBUG nova.compute.manager [req-0a2695f8-7c32-4cff-8fb0-8cbf29567543 req-c58125f2-8fd1-4093-88f3-feb1be6a3d0b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Received event network-changed-2d7c67a0-10d0-4de1-a430-60e038fcf537 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:07:53 np0005486808 nova_compute[259627]: 2025-10-14 09:07:53.730 2 DEBUG nova.compute.manager [req-0a2695f8-7c32-4cff-8fb0-8cbf29567543 req-c58125f2-8fd1-4093-88f3-feb1be6a3d0b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Refreshing instance network info cache due to event network-changed-2d7c67a0-10d0-4de1-a430-60e038fcf537. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:07:53 np0005486808 nova_compute[259627]: 2025-10-14 09:07:53.731 2 DEBUG oslo_concurrency.lockutils [req-0a2695f8-7c32-4cff-8fb0-8cbf29567543 req-c58125f2-8fd1-4093-88f3-feb1be6a3d0b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-8a58a504-85a5-44e6-b815-99abb4ca2fc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:07:53 np0005486808 nova_compute[259627]: 2025-10-14 09:07:53.731 2 DEBUG oslo_concurrency.lockutils [req-0a2695f8-7c32-4cff-8fb0-8cbf29567543 req-c58125f2-8fd1-4093-88f3-feb1be6a3d0b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-8a58a504-85a5-44e6-b815-99abb4ca2fc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:07:53 np0005486808 nova_compute[259627]: 2025-10-14 09:07:53.732 2 DEBUG nova.network.neutron [req-0a2695f8-7c32-4cff-8fb0-8cbf29567543 req-c58125f2-8fd1-4093-88f3-feb1be6a3d0b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Refreshing network info cache for port 2d7c67a0-10d0-4de1-a430-60e038fcf537 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:07:53 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1579: 305 pgs: 305 active+clean; 293 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 16 MiB/s rd, 8.1 MiB/s wr, 429 op/s
Oct 14 05:07:55 np0005486808 nova_compute[259627]: 2025-10-14 09:07:55.651 2 DEBUG nova.compute.manager [None req-a0d8b344-ead5-4283-889c-d93ec916ee44 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:07:55 np0005486808 nova_compute[259627]: 2025-10-14 09:07:55.727 2 INFO nova.compute.manager [None req-a0d8b344-ead5-4283-889c-d93ec916ee44 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] instance snapshotting#033[00m
Oct 14 05:07:55 np0005486808 nova_compute[259627]: 2025-10-14 09:07:55.728 2 DEBUG nova.objects.instance [None req-a0d8b344-ead5-4283-889c-d93ec916ee44 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lazy-loading 'flavor' on Instance uuid e065d857-2df9-4199-aa98-41ca3c436bad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:07:55 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1580: 305 pgs: 305 active+clean; 293 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 14 MiB/s rd, 6.9 MiB/s wr, 386 op/s
Oct 14 05:07:56 np0005486808 nova_compute[259627]: 2025-10-14 09:07:56.033 2 INFO nova.virt.libvirt.driver [None req-a0d8b344-ead5-4283-889c-d93ec916ee44 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Beginning live snapshot process#033[00m
Oct 14 05:07:56 np0005486808 nova_compute[259627]: 2025-10-14 09:07:56.223 2 DEBUG nova.virt.libvirt.imagebackend [None req-a0d8b344-ead5-4283-889c-d93ec916ee44 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] No parent info for a4789543-f429-47d7-9f79-80a9d90a59f9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct 14 05:07:56 np0005486808 nova_compute[259627]: 2025-10-14 09:07:56.446 2 DEBUG nova.network.neutron [req-0a2695f8-7c32-4cff-8fb0-8cbf29567543 req-c58125f2-8fd1-4093-88f3-feb1be6a3d0b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Updated VIF entry in instance network info cache for port 2d7c67a0-10d0-4de1-a430-60e038fcf537. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:07:56 np0005486808 nova_compute[259627]: 2025-10-14 09:07:56.447 2 DEBUG nova.network.neutron [req-0a2695f8-7c32-4cff-8fb0-8cbf29567543 req-c58125f2-8fd1-4093-88f3-feb1be6a3d0b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Updating instance_info_cache with network_info: [{"id": "2d7c67a0-10d0-4de1-a430-60e038fcf537", "address": "fa:16:3e:b6:5f:31", "network": {"id": "df3e044c-d77c-4323-a9d7-2b0425933df0", "bridge": "br-int", "label": "tempest-ServersTestJSON-338264538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9883ad901fc41c2a340b171d7165a0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d7c67a0-10", "ovs_interfaceid": "2d7c67a0-10d0-4de1-a430-60e038fcf537", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:07:56 np0005486808 nova_compute[259627]: 2025-10-14 09:07:56.477 2 DEBUG oslo_concurrency.lockutils [req-0a2695f8-7c32-4cff-8fb0-8cbf29567543 req-c58125f2-8fd1-4093-88f3-feb1be6a3d0b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-8a58a504-85a5-44e6-b815-99abb4ca2fc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:07:56 np0005486808 nova_compute[259627]: 2025-10-14 09:07:56.510 2 DEBUG nova.storage.rbd_utils [None req-a0d8b344-ead5-4283-889c-d93ec916ee44 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] creating snapshot(4ccb5be71bd24d81a2819bc7d0631fd9) on rbd image(e065d857-2df9-4199-aa98-41ca3c436bad_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 14 05:07:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e228 do_prune osdmap full prune enabled
Oct 14 05:07:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e229 e229: 3 total, 3 up, 3 in
Oct 14 05:07:57 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e229: 3 total, 3 up, 3 in
Oct 14 05:07:57 np0005486808 nova_compute[259627]: 2025-10-14 09:07:57.154 2 DEBUG nova.storage.rbd_utils [None req-a0d8b344-ead5-4283-889c-d93ec916ee44 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] cloning vms/e065d857-2df9-4199-aa98-41ca3c436bad_disk@4ccb5be71bd24d81a2819bc7d0631fd9 to images/71b4e8a7-8830-426f-b3a3-9271d6f6992b clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct 14 05:07:57 np0005486808 nova_compute[259627]: 2025-10-14 09:07:57.292 2 DEBUG nova.storage.rbd_utils [None req-a0d8b344-ead5-4283-889c-d93ec916ee44 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] flattening images/71b4e8a7-8830-426f-b3a3-9271d6f6992b flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct 14 05:07:57 np0005486808 nova_compute[259627]: 2025-10-14 09:07:57.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e229 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:07:57 np0005486808 nova_compute[259627]: 2025-10-14 09:07:57.719 2 DEBUG nova.storage.rbd_utils [None req-a0d8b344-ead5-4283-889c-d93ec916ee44 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] removing snapshot(4ccb5be71bd24d81a2819bc7d0631fd9) on rbd image(e065d857-2df9-4199-aa98-41ca3c436bad_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct 14 05:07:57 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1582: 305 pgs: 305 active+clean; 293 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 6.0 MiB/s wr, 337 op/s
Oct 14 05:07:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e229 do_prune osdmap full prune enabled
Oct 14 05:07:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e230 e230: 3 total, 3 up, 3 in
Oct 14 05:07:58 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e230: 3 total, 3 up, 3 in
Oct 14 05:07:58 np0005486808 nova_compute[259627]: 2025-10-14 09:07:58.148 2 DEBUG nova.storage.rbd_utils [None req-a0d8b344-ead5-4283-889c-d93ec916ee44 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] creating snapshot(snap) on rbd image(71b4e8a7-8830-426f-b3a3-9271d6f6992b) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 14 05:07:58 np0005486808 nova_compute[259627]: 2025-10-14 09:07:58.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:07:58 np0005486808 ovn_controller[152662]: 2025-10-14T09:07:58Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b6:5f:31 10.100.0.3
Oct 14 05:07:58 np0005486808 ovn_controller[152662]: 2025-10-14T09:07:58Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b6:5f:31 10.100.0.3
Oct 14 05:07:59 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e230 do_prune osdmap full prune enabled
Oct 14 05:07:59 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e231 e231: 3 total, 3 up, 3 in
Oct 14 05:07:59 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e231: 3 total, 3 up, 3 in
Oct 14 05:07:59 np0005486808 ovn_controller[152662]: 2025-10-14T09:07:59Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0d:81:c4 10.100.0.8
Oct 14 05:07:59 np0005486808 ovn_controller[152662]: 2025-10-14T09:07:59Z|00080|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0d:81:c4 10.100.0.8
Oct 14 05:07:59 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1585: 305 pgs: 305 active+clean; 293 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 2.0 KiB/s wr, 22 op/s
Oct 14 05:08:00 np0005486808 nova_compute[259627]: 2025-10-14 09:08:00.377 2 DEBUG nova.virt.libvirt.driver [None req-723a0627-6d36-4fc8-a381-9238f81dce19 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct 14 05:08:00 np0005486808 nova_compute[259627]: 2025-10-14 09:08:00.664 2 INFO nova.virt.libvirt.driver [None req-a0d8b344-ead5-4283-889c-d93ec916ee44 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Snapshot image upload complete#033[00m
Oct 14 05:08:00 np0005486808 nova_compute[259627]: 2025-10-14 09:08:00.664 2 INFO nova.compute.manager [None req-a0d8b344-ead5-4283-889c-d93ec916ee44 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Took 4.92 seconds to snapshot the instance on the hypervisor.#033[00m
Oct 14 05:08:00 np0005486808 nova_compute[259627]: 2025-10-14 09:08:00.937 2 DEBUG nova.compute.manager [None req-a0d8b344-ead5-4283-889c-d93ec916ee44 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Found 2 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450#033[00m
Oct 14 05:08:01 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1586: 305 pgs: 305 active+clean; 432 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 8.9 MiB/s rd, 16 MiB/s wr, 389 op/s
Oct 14 05:08:02 np0005486808 nova_compute[259627]: 2025-10-14 09:08:02.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:08:02 np0005486808 nova_compute[259627]: 2025-10-14 09:08:02.442 2 DEBUG nova.compute.manager [None req-2e7f67ba-9c69-4032-bcb0-c69027d0f293 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:08:02 np0005486808 nova_compute[259627]: 2025-10-14 09:08:02.488 2 INFO nova.compute.manager [None req-2e7f67ba-9c69-4032-bcb0-c69027d0f293 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] instance snapshotting#033[00m
Oct 14 05:08:02 np0005486808 nova_compute[259627]: 2025-10-14 09:08:02.489 2 DEBUG nova.objects.instance [None req-2e7f67ba-9c69-4032-bcb0-c69027d0f293 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] Lazy-loading 'flavor' on Instance uuid e065d857-2df9-4199-aa98-41ca3c436bad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:08:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:08:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e231 do_prune osdmap full prune enabled
Oct 14 05:08:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e232 e232: 3 total, 3 up, 3 in
Oct 14 05:08:02 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e232: 3 total, 3 up, 3 in
Oct 14 05:08:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:08:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:08:02 np0005486808 nova_compute[259627]: 2025-10-14 09:08:02.759 2 INFO nova.virt.libvirt.driver [None req-2e7f67ba-9c69-4032-bcb0-c69027d0f293 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Beginning live snapshot process#033[00m
Oct 14 05:08:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:08:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:08:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:08:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:08:02 np0005486808 kernel: tap1dc4ac40-9a (unregistering): left promiscuous mode
Oct 14 05:08:02 np0005486808 NetworkManager[44885]: <info>  [1760432882.8673] device (tap1dc4ac40-9a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:08:02 np0005486808 ovn_controller[152662]: 2025-10-14T09:08:02Z|00668|binding|INFO|Releasing lport 1dc4ac40-9a94-49bf-a098-664b98599004 from this chassis (sb_readonly=0)
Oct 14 05:08:02 np0005486808 ovn_controller[152662]: 2025-10-14T09:08:02Z|00669|binding|INFO|Setting lport 1dc4ac40-9a94-49bf-a098-664b98599004 down in Southbound
Oct 14 05:08:02 np0005486808 ovn_controller[152662]: 2025-10-14T09:08:02Z|00670|binding|INFO|Removing iface tap1dc4ac40-9a ovn-installed in OVS
Oct 14 05:08:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:08:02.898 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:81:c4 10.100.0.8'], port_security=['fa:16:3e:0d:81:c4 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '58b61a7a-1a2e-4e3a-9444-3a89da64c5f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd39581efff7d48fb83412ca1f615d412', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dd5e08c6-d8c1-45fb-85db-6ce5c46fea39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c165cf5-3bad-4110-8321-7f7bda9723ad, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=1dc4ac40-9a94-49bf-a098-664b98599004) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:08:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:08:02.903 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 1dc4ac40-9a94-49bf-a098-664b98599004 in datapath 0a07d59e-be8b-4d41-a103-fb5a64bf6f88 unbound from our chassis#033[00m
Oct 14 05:08:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:08:02.905 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0a07d59e-be8b-4d41-a103-fb5a64bf6f88, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:08:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:08:02.908 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fdaf8160-309e-4316-8dfb-7d757ee42ca0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:08:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:08:02.909 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88 namespace which is not needed anymore#033[00m
Oct 14 05:08:02 np0005486808 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d00000041.scope: Deactivated successfully.
Oct 14 05:08:02 np0005486808 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d00000041.scope: Consumed 12.999s CPU time.
Oct 14 05:08:02 np0005486808 systemd-machined[214636]: Machine qemu-79-instance-00000041 terminated.
Oct 14 05:08:02 np0005486808 nova_compute[259627]: 2025-10-14 09:08:02.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:08:02 np0005486808 nova_compute[259627]: 2025-10-14 09:08:02.967 2 DEBUG nova.virt.libvirt.imagebackend [None req-2e7f67ba-9c69-4032-bcb0-c69027d0f293 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] No parent info for a4789543-f429-47d7-9f79-80a9d90a59f9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct 14 05:08:03 np0005486808 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[328467]: [NOTICE]   (328472) : haproxy version is 2.8.14-c23fe91
Oct 14 05:08:03 np0005486808 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[328467]: [NOTICE]   (328472) : path to executable is /usr/sbin/haproxy
Oct 14 05:08:03 np0005486808 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[328467]: [WARNING]  (328472) : Exiting Master process...
Oct 14 05:08:03 np0005486808 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[328467]: [ALERT]    (328472) : Current worker (328474) exited with code 143 (Terminated)
Oct 14 05:08:03 np0005486808 neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88[328467]: [WARNING]  (328472) : All workers exited. Exiting... (0)
Oct 14 05:08:03 np0005486808 systemd[1]: libpod-3ed3db388e1811d2fe2943e10fec34c81206373b3495c340a2ce64fb3c1dddf6.scope: Deactivated successfully.
Oct 14 05:08:03 np0005486808 podman[329144]: 2025-10-14 09:08:03.098749971 +0000 UTC m=+0.053772215 container died 3ed3db388e1811d2fe2943e10fec34c81206373b3495c340a2ce64fb3c1dddf6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 14 05:08:03 np0005486808 systemd[1]: var-lib-containers-storage-overlay-1ae4ef622a69186680b309d3a53a544ac1d19dfd134061b342523d307a4ae263-merged.mount: Deactivated successfully.
Oct 14 05:08:03 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3ed3db388e1811d2fe2943e10fec34c81206373b3495c340a2ce64fb3c1dddf6-userdata-shm.mount: Deactivated successfully.
Oct 14 05:08:03 np0005486808 podman[329144]: 2025-10-14 09:08:03.141827782 +0000 UTC m=+0.096850006 container cleanup 3ed3db388e1811d2fe2943e10fec34c81206373b3495c340a2ce64fb3c1dddf6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 14 05:08:03 np0005486808 systemd[1]: libpod-conmon-3ed3db388e1811d2fe2943e10fec34c81206373b3495c340a2ce64fb3c1dddf6.scope: Deactivated successfully.
Oct 14 05:08:03 np0005486808 nova_compute[259627]: 2025-10-14 09:08:03.213 2 DEBUG nova.storage.rbd_utils [None req-2e7f67ba-9c69-4032-bcb0-c69027d0f293 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] creating snapshot(646213f4699848f899233453071617e5) on rbd image(e065d857-2df9-4199-aa98-41ca3c436bad_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 14 05:08:03 np0005486808 podman[329183]: 2025-10-14 09:08:03.23758406 +0000 UTC m=+0.062303935 container remove 3ed3db388e1811d2fe2943e10fec34c81206373b3495c340a2ce64fb3c1dddf6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 14 05:08:03 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:08:03.319 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[11cb850d-f471-4304-837d-853250d9707b]: (4, ('Tue Oct 14 09:08:03 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88 (3ed3db388e1811d2fe2943e10fec34c81206373b3495c340a2ce64fb3c1dddf6)\n3ed3db388e1811d2fe2943e10fec34c81206373b3495c340a2ce64fb3c1dddf6\nTue Oct 14 09:08:03 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88 (3ed3db388e1811d2fe2943e10fec34c81206373b3495c340a2ce64fb3c1dddf6)\n3ed3db388e1811d2fe2943e10fec34c81206373b3495c340a2ce64fb3c1dddf6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:08:03 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:08:03.320 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4fb9e86c-96f9-4cd8-bd54-ca3721744814]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:08:03 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:08:03.322 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a07d59e-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:08:03 np0005486808 kernel: tap0a07d59e-b0: left promiscuous mode
Oct 14 05:08:03 np0005486808 nova_compute[259627]: 2025-10-14 09:08:03.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:08:03 np0005486808 nova_compute[259627]: 2025-10-14 09:08:03.327 2 DEBUG nova.compute.manager [req-ea258189-9eef-4861-9af9-178f81ec79cd req-ccdc7b2b-85e7-4b8b-b843-c8c2a3e1d513 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Received event network-vif-unplugged-1dc4ac40-9a94-49bf-a098-664b98599004 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:08:03 np0005486808 nova_compute[259627]: 2025-10-14 09:08:03.328 2 DEBUG oslo_concurrency.lockutils [req-ea258189-9eef-4861-9af9-178f81ec79cd req-ccdc7b2b-85e7-4b8b-b843-c8c2a3e1d513 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:08:03 np0005486808 nova_compute[259627]: 2025-10-14 09:08:03.328 2 DEBUG oslo_concurrency.lockutils [req-ea258189-9eef-4861-9af9-178f81ec79cd req-ccdc7b2b-85e7-4b8b-b843-c8c2a3e1d513 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:08:03 np0005486808 nova_compute[259627]: 2025-10-14 09:08:03.328 2 DEBUG oslo_concurrency.lockutils [req-ea258189-9eef-4861-9af9-178f81ec79cd req-ccdc7b2b-85e7-4b8b-b843-c8c2a3e1d513 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:08:03 np0005486808 nova_compute[259627]: 2025-10-14 09:08:03.328 2 DEBUG nova.compute.manager [req-ea258189-9eef-4861-9af9-178f81ec79cd req-ccdc7b2b-85e7-4b8b-b843-c8c2a3e1d513 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] No waiting events found dispatching network-vif-unplugged-1dc4ac40-9a94-49bf-a098-664b98599004 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:08:03 np0005486808 nova_compute[259627]: 2025-10-14 09:08:03.328 2 WARNING nova.compute.manager [req-ea258189-9eef-4861-9af9-178f81ec79cd req-ccdc7b2b-85e7-4b8b-b843-c8c2a3e1d513 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Received unexpected event network-vif-unplugged-1dc4ac40-9a94-49bf-a098-664b98599004 for instance with vm_state active and task_state powering-off.#033[00m
Oct 14 05:08:03 np0005486808 nova_compute[259627]: 2025-10-14 09:08:03.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:08:03 np0005486808 nova_compute[259627]: 2025-10-14 09:08:03.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:08:03 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:08:03.345 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[841e1498-0d55-4790-8b5c-0e984eb65647]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:08:03 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:08:03.368 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7090f9e4-ddd2-4bae-a663-2df1a623f3fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:08:03 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:08:03.369 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b0566a7b-4238-49ee-b1a4-df94a0cb38a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:08:03 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:08:03.383 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9414bd68-8f8b-48e1-9388-48e6266630be]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663968, 'reachable_time': 22368, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329225, 'error': None, 'target': 'ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:08:03 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:08:03.386 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0a07d59e-be8b-4d41-a103-fb5a64bf6f88 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:08:03 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:08:03.386 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[3d17b999-6b81-4a87-8817-39c0e26843b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:08:03 np0005486808 systemd[1]: run-netns-ovnmeta\x2d0a07d59e\x2dbe8b\x2d4d41\x2da103\x2dfb5a64bf6f88.mount: Deactivated successfully.
Oct 14 05:08:03 np0005486808 nova_compute[259627]: 2025-10-14 09:08:03.395 2 INFO nova.virt.libvirt.driver [None req-723a0627-6d36-4fc8-a381-9238f81dce19 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Instance shutdown successfully after 13 seconds.#033[00m
Oct 14 05:08:03 np0005486808 nova_compute[259627]: 2025-10-14 09:08:03.399 2 INFO nova.virt.libvirt.driver [-] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Instance destroyed successfully.#033[00m
Oct 14 05:08:03 np0005486808 nova_compute[259627]: 2025-10-14 09:08:03.399 2 DEBUG nova.objects.instance [None req-723a0627-6d36-4fc8-a381-9238f81dce19 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lazy-loading 'numa_topology' on Instance uuid 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:08:03 np0005486808 nova_compute[259627]: 2025-10-14 09:08:03.412 2 DEBUG nova.compute.manager [None req-723a0627-6d36-4fc8-a381-9238f81dce19 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:08:03 np0005486808 nova_compute[259627]: 2025-10-14 09:08:03.459 2 DEBUG oslo_concurrency.lockutils [None req-723a0627-6d36-4fc8-a381-9238f81dce19 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.169s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:08:03 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1588: 305 pgs: 305 active+clean; 432 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 8.9 MiB/s rd, 16 MiB/s wr, 389 op/s
Oct 14 05:08:04 np0005486808 nova_compute[259627]: 2025-10-14 09:08:04.051 2 DEBUG oslo_concurrency.lockutils [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:08:04 np0005486808 nova_compute[259627]: 2025-10-14 09:08:04.052 2 DEBUG oslo_concurrency.lockutils [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:08:04 np0005486808 nova_compute[259627]: 2025-10-14 09:08:04.052 2 DEBUG oslo_concurrency.lockutils [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:08:04 np0005486808 nova_compute[259627]: 2025-10-14 09:08:04.052 2 DEBUG oslo_concurrency.lockutils [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:08:04 np0005486808 nova_compute[259627]: 2025-10-14 09:08:04.053 2 DEBUG oslo_concurrency.lockutils [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:08:04 np0005486808 nova_compute[259627]: 2025-10-14 09:08:04.054 2 INFO nova.compute.manager [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Terminating instance#033[00m
Oct 14 05:08:04 np0005486808 nova_compute[259627]: 2025-10-14 09:08:04.055 2 DEBUG nova.compute.manager [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:08:04 np0005486808 nova_compute[259627]: 2025-10-14 09:08:04.061 2 INFO nova.virt.libvirt.driver [-] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Instance destroyed successfully.#033[00m
Oct 14 05:08:04 np0005486808 nova_compute[259627]: 2025-10-14 09:08:04.061 2 DEBUG nova.objects.instance [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lazy-loading 'resources' on Instance uuid 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:08:04 np0005486808 nova_compute[259627]: 2025-10-14 09:08:04.075 2 DEBUG nova.virt.libvirt.vif [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:07:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-2035284822',display_name='tempest-DeleteServersTestJSON-server-2035284822',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-2035284822',id=65,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:07:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d39581efff7d48fb83412ca1f615d412',ramdisk_id='',reservation_id='r-47ja83kl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-555285866',owner_user_name='tempest-DeleteServersTestJSON-555285866-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:08:03Z,user_data=None,user_id='a72439ec330b476ca4bb358682159b61',uuid=58b61a7a-1a2e-4e3a-9444-3a89da64c5f3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "1dc4ac40-9a94-49bf-a098-664b98599004", "address": "fa:16:3e:0d:81:c4", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1dc4ac40-9a", "ovs_interfaceid": "1dc4ac40-9a94-49bf-a098-664b98599004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:08:04 np0005486808 nova_compute[259627]: 2025-10-14 09:08:04.076 2 DEBUG nova.network.os_vif_util [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converting VIF {"id": "1dc4ac40-9a94-49bf-a098-664b98599004", "address": "fa:16:3e:0d:81:c4", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1dc4ac40-9a", "ovs_interfaceid": "1dc4ac40-9a94-49bf-a098-664b98599004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:08:04 np0005486808 nova_compute[259627]: 2025-10-14 09:08:04.077 2 DEBUG nova.network.os_vif_util [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:81:c4,bridge_name='br-int',has_traffic_filtering=True,id=1dc4ac40-9a94-49bf-a098-664b98599004,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1dc4ac40-9a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:08:04 np0005486808 nova_compute[259627]: 2025-10-14 09:08:04.077 2 DEBUG os_vif [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:81:c4,bridge_name='br-int',has_traffic_filtering=True,id=1dc4ac40-9a94-49bf-a098-664b98599004,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1dc4ac40-9a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:08:04 np0005486808 nova_compute[259627]: 2025-10-14 09:08:04.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:08:04 np0005486808 nova_compute[259627]: 2025-10-14 09:08:04.080 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1dc4ac40-9a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:08:04 np0005486808 nova_compute[259627]: 2025-10-14 09:08:04.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:08:04 np0005486808 nova_compute[259627]: 2025-10-14 09:08:04.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:08:04 np0005486808 nova_compute[259627]: 2025-10-14 09:08:04.086 2 INFO os_vif [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:81:c4,bridge_name='br-int',has_traffic_filtering=True,id=1dc4ac40-9a94-49bf-a098-664b98599004,network=Network(0a07d59e-be8b-4d41-a103-fb5a64bf6f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1dc4ac40-9a')#033[00m
Oct 14 05:08:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e232 do_prune osdmap full prune enabled
Oct 14 05:08:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e233 e233: 3 total, 3 up, 3 in
Oct 14 05:08:04 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e233: 3 total, 3 up, 3 in
Oct 14 05:08:04 np0005486808 nova_compute[259627]: 2025-10-14 09:08:04.247 2 DEBUG nova.storage.rbd_utils [None req-2e7f67ba-9c69-4032-bcb0-c69027d0f293 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] cloning vms/e065d857-2df9-4199-aa98-41ca3c436bad_disk@646213f4699848f899233453071617e5 to images/0dc1ba26-0588-4dc2-8af6-e697669fc950 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct 14 05:08:04 np0005486808 nova_compute[259627]: 2025-10-14 09:08:04.369 2 DEBUG nova.storage.rbd_utils [None req-2e7f67ba-9c69-4032-bcb0-c69027d0f293 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] flattening images/0dc1ba26-0588-4dc2-8af6-e697669fc950 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct 14 05:08:04 np0005486808 nova_compute[259627]: 2025-10-14 09:08:04.712 2 INFO nova.virt.libvirt.driver [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Deleting instance files /var/lib/nova/instances/58b61a7a-1a2e-4e3a-9444-3a89da64c5f3_del#033[00m
Oct 14 05:08:04 np0005486808 nova_compute[259627]: 2025-10-14 09:08:04.712 2 INFO nova.virt.libvirt.driver [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Deletion of /var/lib/nova/instances/58b61a7a-1a2e-4e3a-9444-3a89da64c5f3_del complete#033[00m
Oct 14 05:08:04 np0005486808 nova_compute[259627]: 2025-10-14 09:08:04.725 2 DEBUG nova.storage.rbd_utils [None req-2e7f67ba-9c69-4032-bcb0-c69027d0f293 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] removing snapshot(646213f4699848f899233453071617e5) on rbd image(e065d857-2df9-4199-aa98-41ca3c436bad_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct 14 05:08:04 np0005486808 nova_compute[259627]: 2025-10-14 09:08:04.787 2 INFO nova.compute.manager [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:08:04 np0005486808 nova_compute[259627]: 2025-10-14 09:08:04.788 2 DEBUG oslo.service.loopingcall [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:08:04 np0005486808 nova_compute[259627]: 2025-10-14 09:08:04.788 2 DEBUG nova.compute.manager [-] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:08:04 np0005486808 nova_compute[259627]: 2025-10-14 09:08:04.789 2 DEBUG nova.network.neutron [-] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:08:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e233 do_prune osdmap full prune enabled
Oct 14 05:08:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e234 e234: 3 total, 3 up, 3 in
Oct 14 05:08:05 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e234: 3 total, 3 up, 3 in
Oct 14 05:08:05 np0005486808 nova_compute[259627]: 2025-10-14 09:08:05.226 2 DEBUG nova.storage.rbd_utils [None req-2e7f67ba-9c69-4032-bcb0-c69027d0f293 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] creating snapshot(snap) on rbd image(0dc1ba26-0588-4dc2-8af6-e697669fc950) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 14 05:08:05 np0005486808 nova_compute[259627]: 2025-10-14 09:08:05.424 2 DEBUG nova.compute.manager [req-e5d5a8ff-a69c-42ca-ba06-f0a259b12765 req-203c45e9-f7df-49d8-ab86-2000d4fd0bf8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Received event network-vif-plugged-1dc4ac40-9a94-49bf-a098-664b98599004 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:08:05 np0005486808 nova_compute[259627]: 2025-10-14 09:08:05.424 2 DEBUG oslo_concurrency.lockutils [req-e5d5a8ff-a69c-42ca-ba06-f0a259b12765 req-203c45e9-f7df-49d8-ab86-2000d4fd0bf8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:08:05 np0005486808 nova_compute[259627]: 2025-10-14 09:08:05.425 2 DEBUG oslo_concurrency.lockutils [req-e5d5a8ff-a69c-42ca-ba06-f0a259b12765 req-203c45e9-f7df-49d8-ab86-2000d4fd0bf8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:08:05 np0005486808 nova_compute[259627]: 2025-10-14 09:08:05.426 2 DEBUG oslo_concurrency.lockutils [req-e5d5a8ff-a69c-42ca-ba06-f0a259b12765 req-203c45e9-f7df-49d8-ab86-2000d4fd0bf8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:08:05 np0005486808 nova_compute[259627]: 2025-10-14 09:08:05.426 2 DEBUG nova.compute.manager [req-e5d5a8ff-a69c-42ca-ba06-f0a259b12765 req-203c45e9-f7df-49d8-ab86-2000d4fd0bf8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] No waiting events found dispatching network-vif-plugged-1dc4ac40-9a94-49bf-a098-664b98599004 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:08:05 np0005486808 nova_compute[259627]: 2025-10-14 09:08:05.427 2 WARNING nova.compute.manager [req-e5d5a8ff-a69c-42ca-ba06-f0a259b12765 req-203c45e9-f7df-49d8-ab86-2000d4fd0bf8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Received unexpected event network-vif-plugged-1dc4ac40-9a94-49bf-a098-664b98599004 for instance with vm_state stopped and task_state deleting.#033[00m
Oct 14 05:08:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:08:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3566661624' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:08:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:08:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3566661624' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:08:05 np0005486808 nova_compute[259627]: 2025-10-14 09:08:05.613 2 DEBUG nova.network.neutron [-] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:08:05 np0005486808 nova_compute[259627]: 2025-10-14 09:08:05.630 2 INFO nova.compute.manager [-] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Took 0.84 seconds to deallocate network for instance.#033[00m
Oct 14 05:08:05 np0005486808 nova_compute[259627]: 2025-10-14 09:08:05.680 2 DEBUG oslo_concurrency.lockutils [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:08:05 np0005486808 nova_compute[259627]: 2025-10-14 09:08:05.681 2 DEBUG oslo_concurrency.lockutils [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:08:05 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1591: 305 pgs: 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 293 active+clean; 438 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 17 MiB/s rd, 24 MiB/s wr, 633 op/s
Oct 14 05:08:05 np0005486808 nova_compute[259627]: 2025-10-14 09:08:05.849 2 DEBUG oslo_concurrency.processutils [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:08:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e234 do_prune osdmap full prune enabled
Oct 14 05:08:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e235 e235: 3 total, 3 up, 3 in
Oct 14 05:08:06 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e235: 3 total, 3 up, 3 in
Oct 14 05:08:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:08:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1209689776' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:08:06 np0005486808 nova_compute[259627]: 2025-10-14 09:08:06.295 2 DEBUG oslo_concurrency.processutils [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:08:06 np0005486808 nova_compute[259627]: 2025-10-14 09:08:06.305 2 DEBUG nova.compute.provider_tree [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:08:06 np0005486808 nova_compute[259627]: 2025-10-14 09:08:06.330 2 DEBUG nova.scheduler.client.report [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:08:06 np0005486808 nova_compute[259627]: 2025-10-14 09:08:06.353 2 DEBUG oslo_concurrency.lockutils [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:08:06 np0005486808 nova_compute[259627]: 2025-10-14 09:08:06.382 2 INFO nova.scheduler.client.report [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Deleted allocations for instance 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3#033[00m
Oct 14 05:08:06 np0005486808 nova_compute[259627]: 2025-10-14 09:08:06.467 2 DEBUG oslo_concurrency.lockutils [None req-fd0a5e33-77e3-4298-97cb-9f29903a6089 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "58b61a7a-1a2e-4e3a-9444-3a89da64c5f3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.415s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:08:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:08:07.026 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:08:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:08:07.026 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:08:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:08:07.028 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:08:07 np0005486808 nova_compute[259627]: 2025-10-14 09:08:07.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:08:07 np0005486808 nova_compute[259627]: 2025-10-14 09:08:07.529 2 DEBUG nova.compute.manager [req-73dc8602-ac19-446d-8dc6-187466d6aa6a req-6126bc4d-c521-4da2-99ed-bf2af8bd2687 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 58b61a7a-1a2e-4e3a-9444-3a89da64c5f3] Received event network-vif-deleted-1dc4ac40-9a94-49bf-a098-664b98599004 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:08:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:08:07 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:08:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:08:07 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:08:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:08:07 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:08:07 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 1704267d-3faa-49a0-a0d8-c233c578e95f does not exist
Oct 14 05:08:07 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 4b559b02-acae-4197-9af7-41d866e9bd68 does not exist
Oct 14 05:08:07 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 800c208f-8024-40be-9cba-425996bd2a51 does not exist
Oct 14 05:08:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:08:07 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:08:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:08:07 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:08:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:08:07 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:08:07 np0005486808 nova_compute[259627]: 2025-10-14 09:08:07.692 2 INFO nova.virt.libvirt.driver [None req-2e7f67ba-9c69-4032-bcb0-c69027d0f293 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Snapshot image upload complete#033[00m
Oct 14 05:08:07 np0005486808 nova_compute[259627]: 2025-10-14 09:08:07.693 2 INFO nova.compute.manager [None req-2e7f67ba-9c69-4032-bcb0-c69027d0f293 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Took 5.18 seconds to snapshot the instance on the hypervisor.#033[00m
Oct 14 05:08:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:08:07 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1593: 305 pgs: 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 293 active+clean; 438 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 9.5 MiB/s rd, 9.3 MiB/s wr, 289 op/s
Oct 14 05:08:08 np0005486808 nova_compute[259627]: 2025-10-14 09:08:08.018 2 DEBUG nova.compute.manager [None req-2e7f67ba-9c69-4032-bcb0-c69027d0f293 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Found 3 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450#033[00m
Oct 14 05:08:08 np0005486808 nova_compute[259627]: 2025-10-14 09:08:08.019 2 DEBUG nova.compute.manager [None req-2e7f67ba-9c69-4032-bcb0-c69027d0f293 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Rotating out 1 backups _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4458#033[00m
Oct 14 05:08:08 np0005486808 nova_compute[259627]: 2025-10-14 09:08:08.019 2 DEBUG nova.compute.manager [None req-2e7f67ba-9c69-4032-bcb0-c69027d0f293 695c749a8dce4506a31e2cec4f02876b 4bda6775f81f403e83269a5f798c9853 - - default default] [instance: e065d857-2df9-4199-aa98-41ca3c436bad] Deleting image 4d823dba-1513-4c47-907f-2858c26de3c1 _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4463#033[00m
Oct 14 05:08:08 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:08:08 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:08:08 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:08:08 np0005486808 podman[329631]: 2025-10-14 09:08:08.28361002 +0000 UTC m=+0.049668695 container create b6282746cc1e2580e2ea2ab1ba9a0d77a4313e6afd4216ecb82cf76048b10791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_swartz, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:08:08 np0005486808 systemd[1]: Started libpod-conmon-b6282746cc1e2580e2ea2ab1ba9a0d77a4313e6afd4216ecb82cf76048b10791.scope.
Oct 14 05:08:08 np0005486808 podman[329631]: 2025-10-14 09:08:08.263399442 +0000 UTC m=+0.029458157 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:08:08 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:08:08 np0005486808 podman[329631]: 2025-10-14 09:08:08.383004387 +0000 UTC m=+0.149063062 container init b6282746cc1e2580e2ea2ab1ba9a0d77a4313e6afd4216ecb82cf76048b10791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_swartz, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 14 05:08:08 np0005486808 podman[329631]: 2025-10-14 09:08:08.390633355 +0000 UTC m=+0.156692050 container start b6282746cc1e2580e2ea2ab1ba9a0d77a4313e6afd4216ecb82cf76048b10791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 14 05:08:08 np0005486808 podman[329631]: 2025-10-14 09:08:08.394443779 +0000 UTC m=+0.160502464 container attach b6282746cc1e2580e2ea2ab1ba9a0d77a4313e6afd4216ecb82cf76048b10791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Oct 14 05:08:08 np0005486808 determined_swartz[329649]: 167 167
Oct 14 05:08:08 np0005486808 systemd[1]: libpod-b6282746cc1e2580e2ea2ab1ba9a0d77a4313e6afd4216ecb82cf76048b10791.scope: Deactivated successfully.
Oct 14 05:08:08 np0005486808 podman[329631]: 2025-10-14 09:08:08.398788106 +0000 UTC m=+0.164846801 container died b6282746cc1e2580e2ea2ab1ba9a0d77a4313e6afd4216ecb82cf76048b10791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_swartz, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 14 05:08:08 np0005486808 systemd[1]: var-lib-containers-storage-overlay-186e18801a1ec1db3f37637e3894e08760957ca5c530d0429ededd8c2656f4ff-merged.mount: Deactivated successfully.
Oct 14 05:08:08 np0005486808 podman[329631]: 2025-10-14 09:08:08.447118276 +0000 UTC m=+0.213176951 container remove b6282746cc1e2580e2ea2ab1ba9a0d77a4313e6afd4216ecb82cf76048b10791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_swartz, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 14 05:08:08 np0005486808 systemd[1]: libpod-conmon-b6282746cc1e2580e2ea2ab1ba9a0d77a4313e6afd4216ecb82cf76048b10791.scope: Deactivated successfully.
Oct 14 05:08:08 np0005486808 podman[329673]: 2025-10-14 09:08:08.722102249 +0000 UTC m=+0.073779598 container create abb3bcbd9db9b756923f3dbd0171d47009e515896c29c10f29cbde182ae5a1f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_sanderson, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:08:08 np0005486808 systemd[1]: Started libpod-conmon-abb3bcbd9db9b756923f3dbd0171d47009e515896c29c10f29cbde182ae5a1f8.scope.
Oct 14 05:08:08 np0005486808 podman[329673]: 2025-10-14 09:08:08.691825523 +0000 UTC m=+0.043502912 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:08:08 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:08:08 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14271cfda9c02fb47d7f1fbb1be3eb81c22ebb9669934f26b9e773ac27511367/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:08:08 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14271cfda9c02fb47d7f1fbb1be3eb81c22ebb9669934f26b9e773ac27511367/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:08:08 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14271cfda9c02fb47d7f1fbb1be3eb81c22ebb9669934f26b9e773ac27511367/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:08:08 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14271cfda9c02fb47d7f1fbb1be3eb81c22ebb9669934f26b9e773ac27511367/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:08:08 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14271cfda9c02fb47d7f1fbb1be3eb81c22ebb9669934f26b9e773ac27511367/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:08:08 np0005486808 podman[329673]: 2025-10-14 09:08:08.839985802 +0000 UTC m=+0.191663141 container init abb3bcbd9db9b756923f3dbd0171d47009e515896c29c10f29cbde182ae5a1f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 14 05:08:08 np0005486808 podman[329673]: 2025-10-14 09:08:08.84964889 +0000 UTC m=+0.201326239 container start abb3bcbd9db9b756923f3dbd0171d47009e515896c29c10f29cbde182ae5a1f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:08:08 np0005486808 podman[329673]: 2025-10-14 09:08:08.854338675 +0000 UTC m=+0.206016004 container attach abb3bcbd9db9b756923f3dbd0171d47009e515896c29c10f29cbde182ae5a1f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_sanderson, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 14 05:08:09 np0005486808 nova_compute[259627]: 2025-10-14 09:08:09.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:08:09 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e235 do_prune osdmap full prune enabled
Oct 14 05:08:09 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e236 e236: 3 total, 3 up, 3 in
Oct 14 05:08:09 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e236: 3 total, 3 up, 3 in
Oct 14 05:08:09 np0005486808 nova_compute[259627]: 2025-10-14 09:08:09.544 2 DEBUG oslo_concurrency.lockutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "9512792b-fd02-459a-8377-c2815c130684" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:08:09 np0005486808 nova_compute[259627]: 2025-10-14 09:08:09.545 2 DEBUG oslo_concurrency.lockutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "9512792b-fd02-459a-8377-c2815c130684" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:08:09 np0005486808 nova_compute[259627]: 2025-10-14 09:08:09.566 2 DEBUG nova.compute.manager [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:08:09 np0005486808 nova_compute[259627]: 2025-10-14 09:08:09.635 2 DEBUG oslo_concurrency.lockutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:08:09 np0005486808 nova_compute[259627]: 2025-10-14 09:08:09.635 2 DEBUG oslo_concurrency.lockutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:08:09 np0005486808 nova_compute[259627]: 2025-10-14 09:08:09.644 2 DEBUG nova.virt.hardware [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:08:09 np0005486808 nova_compute[259627]: 2025-10-14 09:08:09.645 2 INFO nova.compute.claims [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:08:09 np0005486808 nova_compute[259627]: 2025-10-14 09:08:09.775 2 DEBUG oslo_concurrency.processutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:08:09 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1595: 305 pgs: 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 293 active+clean; 438 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 8.6 MiB/s rd, 8.5 MiB/s wr, 262 op/s
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.042 2 DEBUG oslo_concurrency.lockutils [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Acquiring lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.043 2 DEBUG oslo_concurrency.lockutils [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.044 2 DEBUG oslo_concurrency.lockutils [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Acquiring lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.044 2 DEBUG oslo_concurrency.lockutils [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.045 2 DEBUG oslo_concurrency.lockutils [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.048 2 INFO nova.compute.manager [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Terminating instance#033[00m
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.050 2 DEBUG nova.compute.manager [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:08:10 np0005486808 frosty_sanderson[329691]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:08:10 np0005486808 frosty_sanderson[329691]: --> relative data size: 1.0
Oct 14 05:08:10 np0005486808 frosty_sanderson[329691]: --> All data devices are unavailable
Oct 14 05:08:10 np0005486808 systemd[1]: libpod-abb3bcbd9db9b756923f3dbd0171d47009e515896c29c10f29cbde182ae5a1f8.scope: Deactivated successfully.
Oct 14 05:08:10 np0005486808 systemd[1]: libpod-abb3bcbd9db9b756923f3dbd0171d47009e515896c29c10f29cbde182ae5a1f8.scope: Consumed 1.143s CPU time.
Oct 14 05:08:10 np0005486808 podman[329673]: 2025-10-14 09:08:10.11610707 +0000 UTC m=+1.467784379 container died abb3bcbd9db9b756923f3dbd0171d47009e515896c29c10f29cbde182ae5a1f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_sanderson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 14 05:08:10 np0005486808 kernel: tap2d7c67a0-10 (unregistering): left promiscuous mode
Oct 14 05:08:10 np0005486808 NetworkManager[44885]: <info>  [1760432890.1318] device (tap2d7c67a0-10): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:08:10 np0005486808 systemd[1]: var-lib-containers-storage-overlay-14271cfda9c02fb47d7f1fbb1be3eb81c22ebb9669934f26b9e773ac27511367-merged.mount: Deactivated successfully.
Oct 14 05:08:10 np0005486808 ovn_controller[152662]: 2025-10-14T09:08:10Z|00671|binding|INFO|Releasing lport 2d7c67a0-10d0-4de1-a430-60e038fcf537 from this chassis (sb_readonly=0)
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:08:10 np0005486808 ovn_controller[152662]: 2025-10-14T09:08:10Z|00672|binding|INFO|Setting lport 2d7c67a0-10d0-4de1-a430-60e038fcf537 down in Southbound
Oct 14 05:08:10 np0005486808 ovn_controller[152662]: 2025-10-14T09:08:10Z|00673|binding|INFO|Removing iface tap2d7c67a0-10 ovn-installed in OVS
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:08:10 np0005486808 podman[329673]: 2025-10-14 09:08:10.176524698 +0000 UTC m=+1.528201997 container remove abb3bcbd9db9b756923f3dbd0171d47009e515896c29c10f29cbde182ae5a1f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_sanderson, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:08:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:08:10.177 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b6:5f:31 10.100.0.3'], port_security=['fa:16:3e:b6:5f:31 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8a58a504-85a5-44e6-b815-99abb4ca2fc8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df3e044c-d77c-4323-a9d7-2b0425933df0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f9883ad901fc41c2a340b171d7165a0e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '38f16233-f765-495d-b104-7721931f5384', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.203'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b8aeb3f0-29e9-44d2-ad79-ad7dad88caab, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=2d7c67a0-10d0-4de1-a430-60e038fcf537) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:08:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:08:10.178 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 2d7c67a0-10d0-4de1-a430-60e038fcf537 in datapath df3e044c-d77c-4323-a9d7-2b0425933df0 unbound from our chassis#033[00m
Oct 14 05:08:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:08:10.180 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network df3e044c-d77c-4323-a9d7-2b0425933df0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:08:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:08:10.181 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[58a0146c-8013-4ff0-9a19-da24aefe086c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:08:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:08:10.182 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0 namespace which is not needed anymore#033[00m
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:08:10 np0005486808 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d00000042.scope: Deactivated successfully.
Oct 14 05:08:10 np0005486808 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d00000042.scope: Consumed 14.580s CPU time.
Oct 14 05:08:10 np0005486808 systemd[1]: libpod-conmon-abb3bcbd9db9b756923f3dbd0171d47009e515896c29c10f29cbde182ae5a1f8.scope: Deactivated successfully.
Oct 14 05:08:10 np0005486808 systemd-machined[214636]: Machine qemu-80-instance-00000042 terminated.
Oct 14 05:08:10 np0005486808 podman[329741]: 2025-10-14 09:08:10.256888167 +0000 UTC m=+0.108623056 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:08:10 np0005486808 podman[329743]: 2025-10-14 09:08:10.260965987 +0000 UTC m=+0.100213609 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 14 05:08:10 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:08:10 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3120816586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.309 2 INFO nova.virt.libvirt.driver [-] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Instance destroyed successfully.#033[00m
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.310 2 DEBUG nova.objects.instance [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Lazy-loading 'resources' on Instance uuid 8a58a504-85a5-44e6-b815-99abb4ca2fc8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.316 2 DEBUG oslo_concurrency.processutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.323 2 DEBUG nova.compute.provider_tree [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:08:10 np0005486808 neutron-haproxy-ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0[328743]: [NOTICE]   (328747) : haproxy version is 2.8.14-c23fe91
Oct 14 05:08:10 np0005486808 neutron-haproxy-ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0[328743]: [NOTICE]   (328747) : path to executable is /usr/sbin/haproxy
Oct 14 05:08:10 np0005486808 neutron-haproxy-ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0[328743]: [WARNING]  (328747) : Exiting Master process...
Oct 14 05:08:10 np0005486808 neutron-haproxy-ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0[328743]: [WARNING]  (328747) : Exiting Master process...
Oct 14 05:08:10 np0005486808 neutron-haproxy-ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0[328743]: [ALERT]    (328747) : Current worker (328749) exited with code 143 (Terminated)
Oct 14 05:08:10 np0005486808 neutron-haproxy-ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0[328743]: [WARNING]  (328747) : All workers exited. Exiting... (0)
Oct 14 05:08:10 np0005486808 systemd[1]: libpod-5d02d8391ff1827db34b998999ac13c1154619e05e1156557b36b6b8c09afd3b.scope: Deactivated successfully.
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.330 2 DEBUG nova.virt.libvirt.vif [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:07:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1860549828',display_name='tempest-ServersTestJSON-server-1860549828',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1860549828',id=66,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJPBjCQeMHoUcI/GHoNmPZT13/aYbG1GbRQyp9yA777AdaKu728JKactgc6o+aymRL18NOp98nhjzfD96xfaginRg8v3g0mj8wP4FAzm7DJKAkKlO+Gseanq7GXJCgToDw==',key_name='tempest-keypair-1047945045',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:07:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f9883ad901fc41c2a340b171d7165a0e',ramdisk_id='',reservation_id='r-owfgitz4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1568732941',owner_user_name='tempest-ServersTestJSON-1568732941-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:07:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='40a7a5045f164fb3bc6f8ae8a40f6bac',uuid=8a58a504-85a5-44e6-b815-99abb4ca2fc8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2d7c67a0-10d0-4de1-a430-60e038fcf537", "address": "fa:16:3e:b6:5f:31", "network": {"id": "df3e044c-d77c-4323-a9d7-2b0425933df0", "bridge": "br-int", "label": "tempest-ServersTestJSON-338264538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9883ad901fc41c2a340b171d7165a0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d7c67a0-10", "ovs_interfaceid": "2d7c67a0-10d0-4de1-a430-60e038fcf537", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.331 2 DEBUG nova.network.os_vif_util [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Converting VIF {"id": "2d7c67a0-10d0-4de1-a430-60e038fcf537", "address": "fa:16:3e:b6:5f:31", "network": {"id": "df3e044c-d77c-4323-a9d7-2b0425933df0", "bridge": "br-int", "label": "tempest-ServersTestJSON-338264538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9883ad901fc41c2a340b171d7165a0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d7c67a0-10", "ovs_interfaceid": "2d7c67a0-10d0-4de1-a430-60e038fcf537", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.331 2 DEBUG nova.network.os_vif_util [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b6:5f:31,bridge_name='br-int',has_traffic_filtering=True,id=2d7c67a0-10d0-4de1-a430-60e038fcf537,network=Network(df3e044c-d77c-4323-a9d7-2b0425933df0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d7c67a0-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:08:10 np0005486808 podman[329837]: 2025-10-14 09:08:10.331991636 +0000 UTC m=+0.045226795 container died 5d02d8391ff1827db34b998999ac13c1154619e05e1156557b36b6b8c09afd3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.332 2 DEBUG os_vif [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b6:5f:31,bridge_name='br-int',has_traffic_filtering=True,id=2d7c67a0-10d0-4de1-a430-60e038fcf537,network=Network(df3e044c-d77c-4323-a9d7-2b0425933df0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d7c67a0-10') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.334 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d7c67a0-10, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.339 2 INFO os_vif [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b6:5f:31,bridge_name='br-int',has_traffic_filtering=True,id=2d7c67a0-10d0-4de1-a430-60e038fcf537,network=Network(df3e044c-d77c-4323-a9d7-2b0425933df0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d7c67a0-10')#033[00m
Oct 14 05:08:10 np0005486808 systemd[1]: var-lib-containers-storage-overlay-18ff3ed9243abc8a60fd8bac38117c2979ed9d105d233f8f10fb601e76b89352-merged.mount: Deactivated successfully.
Oct 14 05:08:10 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5d02d8391ff1827db34b998999ac13c1154619e05e1156557b36b6b8c09afd3b-userdata-shm.mount: Deactivated successfully.
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.368 2 DEBUG nova.scheduler.client.report [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:08:10 np0005486808 podman[329837]: 2025-10-14 09:08:10.370737991 +0000 UTC m=+0.083973150 container cleanup 5d02d8391ff1827db34b998999ac13c1154619e05e1156557b36b6b8c09afd3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:08:10 np0005486808 systemd[1]: libpod-conmon-5d02d8391ff1827db34b998999ac13c1154619e05e1156557b36b6b8c09afd3b.scope: Deactivated successfully.
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.398 2 DEBUG oslo_concurrency.lockutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.399 2 DEBUG nova.compute.manager [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:08:10 np0005486808 podman[329942]: 2025-10-14 09:08:10.443299028 +0000 UTC m=+0.050749221 container remove 5d02d8391ff1827db34b998999ac13c1154619e05e1156557b36b6b8c09afd3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.446 2 DEBUG nova.compute.manager [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.447 2 DEBUG nova.network.neutron [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:08:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:08:10.452 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2bf654df-ce30-41cd-a484-eae3a3f95442]: (4, ('Tue Oct 14 09:08:10 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0 (5d02d8391ff1827db34b998999ac13c1154619e05e1156557b36b6b8c09afd3b)\n5d02d8391ff1827db34b998999ac13c1154619e05e1156557b36b6b8c09afd3b\nTue Oct 14 09:08:10 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0 (5d02d8391ff1827db34b998999ac13c1154619e05e1156557b36b6b8c09afd3b)\n5d02d8391ff1827db34b998999ac13c1154619e05e1156557b36b6b8c09afd3b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:08:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:08:10.453 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[08b230af-fa1c-4485-b2a4-43847492cc07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:08:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:08:10.454 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf3e044c-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:08:10 np0005486808 kernel: tapdf3e044c-d0: left promiscuous mode
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.466 2 DEBUG nova.compute.manager [req-44607642-f72d-43d8-8613-763d1da2ad60 req-4746acfd-b1ac-493c-81b5-4735ab7378ec 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Received event network-vif-unplugged-2d7c67a0-10d0-4de1-a430-60e038fcf537 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.466 2 DEBUG oslo_concurrency.lockutils [req-44607642-f72d-43d8-8613-763d1da2ad60 req-4746acfd-b1ac-493c-81b5-4735ab7378ec 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.467 2 DEBUG oslo_concurrency.lockutils [req-44607642-f72d-43d8-8613-763d1da2ad60 req-4746acfd-b1ac-493c-81b5-4735ab7378ec 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.467 2 DEBUG oslo_concurrency.lockutils [req-44607642-f72d-43d8-8613-763d1da2ad60 req-4746acfd-b1ac-493c-81b5-4735ab7378ec 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.467 2 DEBUG nova.compute.manager [req-44607642-f72d-43d8-8613-763d1da2ad60 req-4746acfd-b1ac-493c-81b5-4735ab7378ec 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] No waiting events found dispatching network-vif-unplugged-2d7c67a0-10d0-4de1-a430-60e038fcf537 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.467 2 DEBUG nova.compute.manager [req-44607642-f72d-43d8-8613-763d1da2ad60 req-4746acfd-b1ac-493c-81b5-4735ab7378ec 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Received event network-vif-unplugged-2d7c67a0-10d0-4de1-a430-60e038fcf537 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.471 2 INFO nova.virt.libvirt.driver [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:08:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:08:10.474 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6df5c1a4-f5cd-4564-acc1-10d9a01153df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:08:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:08:10.499 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[24e5a6fe-4617-4b37-bca9-dffdbc126640]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:08:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:08:10.502 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cd6761c6-9a32-496f-bd1f-745eac736a54]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.508 2 DEBUG nova.compute.manager [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:08:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:08:10.520 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e6a9bec1-57f0-42ac-b55c-2d17e9fd69d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664338, 'reachable_time': 32890, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329989, 'error': None, 'target': 'ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:08:10 np0005486808 systemd[1]: run-netns-ovnmeta\x2ddf3e044c\x2dd77c\x2d4323\x2da9d7\x2d2b0425933df0.mount: Deactivated successfully.
Oct 14 05:08:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:08:10.523 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-df3e044c-d77c-4323-a9d7-2b0425933df0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:08:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:08:10.524 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[8aae0a5c-11f3-4759-b046-3363f03dda00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.606 2 DEBUG nova.compute.manager [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.607 2 DEBUG nova.virt.libvirt.driver [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.608 2 INFO nova.virt.libvirt.driver [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Creating image(s)#033[00m
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.631 2 DEBUG nova.storage.rbd_utils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 9512792b-fd02-459a-8377-c2815c130684_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.652 2 DEBUG nova.storage.rbd_utils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 9512792b-fd02-459a-8377-c2815c130684_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.674 2 DEBUG nova.storage.rbd_utils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 9512792b-fd02-459a-8377-c2815c130684_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.677 2 DEBUG oslo_concurrency.processutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.708 2 DEBUG nova.policy [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a72439ec330b476ca4bb358682159b61', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd39581efff7d48fb83412ca1f615d412', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.745 2 DEBUG oslo_concurrency.processutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.746 2 DEBUG oslo_concurrency.lockutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.747 2 DEBUG oslo_concurrency.lockutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.747 2 DEBUG oslo_concurrency.lockutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.768 2 DEBUG nova.storage.rbd_utils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] rbd image 9512792b-fd02-459a-8377-c2815c130684_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.774 2 DEBUG oslo_concurrency.processutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 9512792b-fd02-459a-8377-c2815c130684_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:08:10 np0005486808 podman[330085]: 2025-10-14 09:08:10.798793252 +0000 UTC m=+0.044822414 container create c397790cf39a4dd655d7f0782b378e43cb9341eb7704640e3c617d5f45c8bde0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bhabha, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 14 05:08:10 np0005486808 systemd[1]: Started libpod-conmon-c397790cf39a4dd655d7f0782b378e43cb9341eb7704640e3c617d5f45c8bde0.scope.
Oct 14 05:08:10 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:08:10 np0005486808 podman[330085]: 2025-10-14 09:08:10.778073822 +0000 UTC m=+0.024103004 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:08:10 np0005486808 podman[330085]: 2025-10-14 09:08:10.89128737 +0000 UTC m=+0.137316562 container init c397790cf39a4dd655d7f0782b378e43cb9341eb7704640e3c617d5f45c8bde0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:08:10 np0005486808 podman[330085]: 2025-10-14 09:08:10.90225036 +0000 UTC m=+0.148279522 container start c397790cf39a4dd655d7f0782b378e43cb9341eb7704640e3c617d5f45c8bde0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:08:10 np0005486808 podman[330085]: 2025-10-14 09:08:10.906941616 +0000 UTC m=+0.152970878 container attach c397790cf39a4dd655d7f0782b378e43cb9341eb7704640e3c617d5f45c8bde0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 14 05:08:10 np0005486808 dazzling_bhabha[330121]: 167 167
Oct 14 05:08:10 np0005486808 systemd[1]: libpod-c397790cf39a4dd655d7f0782b378e43cb9341eb7704640e3c617d5f45c8bde0.scope: Deactivated successfully.
Oct 14 05:08:10 np0005486808 podman[330085]: 2025-10-14 09:08:10.908832143 +0000 UTC m=+0.154861305 container died c397790cf39a4dd655d7f0782b378e43cb9341eb7704640e3c617d5f45c8bde0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bhabha, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.923 2 INFO nova.virt.libvirt.driver [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Deleting instance files /var/lib/nova/instances/8a58a504-85a5-44e6-b815-99abb4ca2fc8_del#033[00m
Oct 14 05:08:10 np0005486808 nova_compute[259627]: 2025-10-14 09:08:10.925 2 INFO nova.virt.libvirt.driver [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Deletion of /var/lib/nova/instances/8a58a504-85a5-44e6-b815-99abb4ca2fc8_del complete#033[00m
Oct 14 05:08:10 np0005486808 podman[330085]: 2025-10-14 09:08:10.965070507 +0000 UTC m=+0.211099669 container remove c397790cf39a4dd655d7f0782b378e43cb9341eb7704640e3c617d5f45c8bde0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 14 05:08:10 np0005486808 systemd[1]: libpod-conmon-c397790cf39a4dd655d7f0782b378e43cb9341eb7704640e3c617d5f45c8bde0.scope: Deactivated successfully.
Oct 14 05:08:11 np0005486808 nova_compute[259627]: 2025-10-14 09:08:11.009 2 INFO nova.compute.manager [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Took 0.96 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:08:11 np0005486808 nova_compute[259627]: 2025-10-14 09:08:11.009 2 DEBUG oslo.service.loopingcall [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:08:11 np0005486808 nova_compute[259627]: 2025-10-14 09:08:11.010 2 DEBUG nova.compute.manager [-] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:08:11 np0005486808 nova_compute[259627]: 2025-10-14 09:08:11.010 2 DEBUG nova.network.neutron [-] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:08:11 np0005486808 nova_compute[259627]: 2025-10-14 09:08:11.052 2 DEBUG oslo_concurrency.processutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 9512792b-fd02-459a-8377-c2815c130684_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.278s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:08:11 np0005486808 nova_compute[259627]: 2025-10-14 09:08:11.126 2 DEBUG nova.storage.rbd_utils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] resizing rbd image 9512792b-fd02-459a-8377-c2815c130684_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:08:11 np0005486808 podman[330181]: 2025-10-14 09:08:11.138558089 +0000 UTC m=+0.046170698 container create 44d0c59f83c9faa91ecfe6e419ca7081f70c21e974a2a4b4b532f63c4fb5a454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_nightingale, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 14 05:08:11 np0005486808 systemd[1]: var-lib-containers-storage-overlay-716d01a2a33a9c8f91bc7cddf4d80919629e78d4b79a94c61bc4f95e70a75d8a-merged.mount: Deactivated successfully.
Oct 14 05:08:11 np0005486808 systemd[1]: Started libpod-conmon-44d0c59f83c9faa91ecfe6e419ca7081f70c21e974a2a4b4b532f63c4fb5a454.scope.
Oct 14 05:08:11 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:08:11 np0005486808 podman[330181]: 2025-10-14 09:08:11.120528995 +0000 UTC m=+0.028141644 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:08:11 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abcb2b83750f548e1284412cdc1de5b8df5d849e041651f9a42b98a15f58d761/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:08:11 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abcb2b83750f548e1284412cdc1de5b8df5d849e041651f9a42b98a15f58d761/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:08:11 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abcb2b83750f548e1284412cdc1de5b8df5d849e041651f9a42b98a15f58d761/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:08:11 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abcb2b83750f548e1284412cdc1de5b8df5d849e041651f9a42b98a15f58d761/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:08:11 np0005486808 podman[330181]: 2025-10-14 09:08:11.236338217 +0000 UTC m=+0.143950896 container init 44d0c59f83c9faa91ecfe6e419ca7081f70c21e974a2a4b4b532f63c4fb5a454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:08:11 np0005486808 nova_compute[259627]: 2025-10-14 09:08:11.239 2 DEBUG nova.objects.instance [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lazy-loading 'migration_context' on Instance uuid 9512792b-fd02-459a-8377-c2815c130684 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:08:11 np0005486808 podman[330181]: 2025-10-14 09:08:11.24661158 +0000 UTC m=+0.154224199 container start 44d0c59f83c9faa91ecfe6e419ca7081f70c21e974a2a4b4b532f63c4fb5a454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 14 05:08:11 np0005486808 podman[330181]: 2025-10-14 09:08:11.250872245 +0000 UTC m=+0.158484944 container attach 44d0c59f83c9faa91ecfe6e419ca7081f70c21e974a2a4b4b532f63c4fb5a454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_nightingale, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 14 05:08:11 np0005486808 nova_compute[259627]: 2025-10-14 09:08:11.262 2 DEBUG nova.virt.libvirt.driver [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:08:11 np0005486808 nova_compute[259627]: 2025-10-14 09:08:11.263 2 DEBUG nova.virt.libvirt.driver [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Ensure instance console log exists: /var/lib/nova/instances/9512792b-fd02-459a-8377-c2815c130684/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:08:11 np0005486808 nova_compute[259627]: 2025-10-14 09:08:11.263 2 DEBUG oslo_concurrency.lockutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:08:11 np0005486808 nova_compute[259627]: 2025-10-14 09:08:11.264 2 DEBUG oslo_concurrency.lockutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:08:11 np0005486808 nova_compute[259627]: 2025-10-14 09:08:11.264 2 DEBUG oslo_concurrency.lockutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:08:11 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e236 do_prune osdmap full prune enabled
Oct 14 05:08:11 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e237 e237: 3 total, 3 up, 3 in
Oct 14 05:08:11 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e237: 3 total, 3 up, 3 in
Oct 14 05:08:11 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1597: 305 pgs: 305 active+clean; 339 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 33 KiB/s wr, 82 op/s
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]: {
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:    "0": [
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:        {
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:            "devices": [
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:                "/dev/loop3"
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:            ],
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:            "lv_name": "ceph_lv0",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:            "lv_size": "21470642176",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:            "name": "ceph_lv0",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:            "tags": {
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:                "ceph.cluster_name": "ceph",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:                "ceph.crush_device_class": "",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:                "ceph.encrypted": "0",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:                "ceph.osd_id": "0",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:                "ceph.type": "block",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:                "ceph.vdo": "0"
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:            },
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:            "type": "block",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:            "vg_name": "ceph_vg0"
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:        }
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:    ],
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:    "1": [
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:        {
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:            "devices": [
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:                "/dev/loop4"
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:            ],
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:            "lv_name": "ceph_lv1",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:            "lv_size": "21470642176",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:            "name": "ceph_lv1",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:            "tags": {
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:                "ceph.cluster_name": "ceph",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:                "ceph.crush_device_class": "",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:                "ceph.encrypted": "0",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:                "ceph.osd_id": "1",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:                "ceph.type": "block",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:                "ceph.vdo": "0"
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:            },
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:            "type": "block",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:            "vg_name": "ceph_vg1"
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:        }
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:    ],
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:    "2": [
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:        {
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:            "devices": [
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:                "/dev/loop5"
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:            ],
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:            "lv_name": "ceph_lv2",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:            "lv_size": "21470642176",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:            "name": "ceph_lv2",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:            "tags": {
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:                "ceph.cluster_name": "ceph",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:                "ceph.crush_device_class": "",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:                "ceph.encrypted": "0",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:                "ceph.osd_id": "2",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:                "ceph.type": "block",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:                "ceph.vdo": "0"
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:            },
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:            "type": "block",
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:            "vg_name": "ceph_vg2"
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:        }
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]:    ]
Oct 14 05:08:12 np0005486808 hopeful_nightingale[330234]: }
Oct 14 05:08:12 np0005486808 nova_compute[259627]: 2025-10-14 09:08:12.098 2 DEBUG nova.network.neutron [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Successfully created port: 1c4cb1d0-4a75-436b-965e-910994f941f6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:08:12 np0005486808 systemd[1]: libpod-44d0c59f83c9faa91ecfe6e419ca7081f70c21e974a2a4b4b532f63c4fb5a454.scope: Deactivated successfully.
Oct 14 05:08:12 np0005486808 podman[330181]: 2025-10-14 09:08:12.13584482 +0000 UTC m=+1.043457439 container died 44d0c59f83c9faa91ecfe6e419ca7081f70c21e974a2a4b4b532f63c4fb5a454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:08:12 np0005486808 systemd[1]: var-lib-containers-storage-overlay-abcb2b83750f548e1284412cdc1de5b8df5d849e041651f9a42b98a15f58d761-merged.mount: Deactivated successfully.
Oct 14 05:08:12 np0005486808 podman[330181]: 2025-10-14 09:08:12.203768143 +0000 UTC m=+1.111380752 container remove 44d0c59f83c9faa91ecfe6e419ca7081f70c21e974a2a4b4b532f63c4fb5a454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_nightingale, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 14 05:08:12 np0005486808 systemd[1]: libpod-conmon-44d0c59f83c9faa91ecfe6e419ca7081f70c21e974a2a4b4b532f63c4fb5a454.scope: Deactivated successfully.
Oct 14 05:08:12 np0005486808 nova_compute[259627]: 2025-10-14 09:08:12.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:08:12 np0005486808 nova_compute[259627]: 2025-10-14 09:08:12.580 2 DEBUG nova.compute.manager [req-84e6100c-6718-4acf-aa57-72982b3ba251 req-2880fcd9-d244-4af6-8765-2df8322db760 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Received event network-vif-plugged-2d7c67a0-10d0-4de1-a430-60e038fcf537 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:08:12 np0005486808 nova_compute[259627]: 2025-10-14 09:08:12.580 2 DEBUG oslo_concurrency.lockutils [req-84e6100c-6718-4acf-aa57-72982b3ba251 req-2880fcd9-d244-4af6-8765-2df8322db760 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:08:12 np0005486808 nova_compute[259627]: 2025-10-14 09:08:12.581 2 DEBUG oslo_concurrency.lockutils [req-84e6100c-6718-4acf-aa57-72982b3ba251 req-2880fcd9-d244-4af6-8765-2df8322db760 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:08:12 np0005486808 nova_compute[259627]: 2025-10-14 09:08:12.581 2 DEBUG oslo_concurrency.lockutils [req-84e6100c-6718-4acf-aa57-72982b3ba251 req-2880fcd9-d244-4af6-8765-2df8322db760 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:08:12 np0005486808 nova_compute[259627]: 2025-10-14 09:08:12.581 2 DEBUG nova.compute.manager [req-84e6100c-6718-4acf-aa57-72982b3ba251 req-2880fcd9-d244-4af6-8765-2df8322db760 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] No waiting events found dispatching network-vif-plugged-2d7c67a0-10d0-4de1-a430-60e038fcf537 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:08:12 np0005486808 nova_compute[259627]: 2025-10-14 09:08:12.581 2 WARNING nova.compute.manager [req-84e6100c-6718-4acf-aa57-72982b3ba251 req-2880fcd9-d244-4af6-8765-2df8322db760 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Received unexpected event network-vif-plugged-2d7c67a0-10d0-4de1-a430-60e038fcf537 for instance with vm_state active and task_state deleting.#033[00m
Oct 14 05:08:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e237 do_prune osdmap full prune enabled
Oct 14 05:08:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e238 e238: 3 total, 3 up, 3 in
Oct 14 05:08:12 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e238: 3 total, 3 up, 3 in
Oct 14 05:08:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:08:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e238 do_prune osdmap full prune enabled
Oct 14 05:08:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e239 e239: 3 total, 3 up, 3 in
Oct 14 05:08:12 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e239: 3 total, 3 up, 3 in
Oct 14 05:08:12 np0005486808 nova_compute[259627]: 2025-10-14 09:08:12.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:08:12 np0005486808 nova_compute[259627]: 2025-10-14 09:08:12.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 14 05:08:13 np0005486808 podman[330418]: 2025-10-14 09:08:13.155123422 +0000 UTC m=+0.072433754 container create ae62a4ba604867d11a1575132e08b08411528fed824e286011b694803dc581d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_bose, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 14 05:08:13 np0005486808 systemd[1]: Started libpod-conmon-ae62a4ba604867d11a1575132e08b08411528fed824e286011b694803dc581d1.scope.
Oct 14 05:08:13 np0005486808 podman[330418]: 2025-10-14 09:08:13.12661162 +0000 UTC m=+0.043922012 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:08:13 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:08:13 np0005486808 podman[330418]: 2025-10-14 09:08:13.259215086 +0000 UTC m=+0.176525458 container init ae62a4ba604867d11a1575132e08b08411528fed824e286011b694803dc581d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_bose, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:08:13 np0005486808 podman[330418]: 2025-10-14 09:08:13.268247438 +0000 UTC m=+0.185557770 container start ae62a4ba604867d11a1575132e08b08411528fed824e286011b694803dc581d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:08:13 np0005486808 podman[330418]: 2025-10-14 09:08:13.272684118 +0000 UTC m=+0.189994500 container attach ae62a4ba604867d11a1575132e08b08411528fed824e286011b694803dc581d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:08:13 np0005486808 zen_bose[330434]: 167 167
Oct 14 05:08:13 np0005486808 systemd[1]: libpod-ae62a4ba604867d11a1575132e08b08411528fed824e286011b694803dc581d1.scope: Deactivated successfully.
Oct 14 05:08:13 np0005486808 podman[330418]: 2025-10-14 09:08:13.279089545 +0000 UTC m=+0.196399907 container died ae62a4ba604867d11a1575132e08b08411528fed824e286011b694803dc581d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_bose, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 14 05:08:13 np0005486808 systemd[1]: var-lib-containers-storage-overlay-405d2089b4f3d21fb561626d20247e354995d2206fe8bcceaefa47b3d8c2e42d-merged.mount: Deactivated successfully.
Oct 14 05:08:13 np0005486808 podman[330418]: 2025-10-14 09:08:13.339121444 +0000 UTC m=+0.256431766 container remove ae62a4ba604867d11a1575132e08b08411528fed824e286011b694803dc581d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_bose, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:08:13 np0005486808 systemd[1]: libpod-conmon-ae62a4ba604867d11a1575132e08b08411528fed824e286011b694803dc581d1.scope: Deactivated successfully.
Oct 14 05:08:13 np0005486808 podman[330458]: 2025-10-14 09:08:13.557151113 +0000 UTC m=+0.045005149 container create 5109e769ebb815308c8da55a68fd91edb3b2e56d787ed21a234b721467190bc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gates, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 14 05:08:13 np0005486808 systemd[1]: Started libpod-conmon-5109e769ebb815308c8da55a68fd91edb3b2e56d787ed21a234b721467190bc6.scope.
Oct 14 05:08:13 np0005486808 podman[330458]: 2025-10-14 09:08:13.538929455 +0000 UTC m=+0.026783531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:08:13 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:08:13 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2153f7690c5386b0dcdd623d84639df483eb75202d3d19957e138eb75c00876e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:08:13 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2153f7690c5386b0dcdd623d84639df483eb75202d3d19957e138eb75c00876e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:08:13 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2153f7690c5386b0dcdd623d84639df483eb75202d3d19957e138eb75c00876e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:08:13 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2153f7690c5386b0dcdd623d84639df483eb75202d3d19957e138eb75c00876e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:08:13 np0005486808 podman[330458]: 2025-10-14 09:08:13.66543262 +0000 UTC m=+0.153286736 container init 5109e769ebb815308c8da55a68fd91edb3b2e56d787ed21a234b721467190bc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gates, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 14 05:08:13 np0005486808 podman[330458]: 2025-10-14 09:08:13.672539695 +0000 UTC m=+0.160393721 container start 5109e769ebb815308c8da55a68fd91edb3b2e56d787ed21a234b721467190bc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gates, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:08:13 np0005486808 podman[330458]: 2025-10-14 09:08:13.675883847 +0000 UTC m=+0.163737923 container attach 5109e769ebb815308c8da55a68fd91edb3b2e56d787ed21a234b721467190bc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gates, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:08:13 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1600: 305 pgs: 305 active+clean; 339 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 44 KiB/s wr, 109 op/s
Oct 14 05:08:13 np0005486808 nova_compute[259627]: 2025-10-14 09:08:13.957 2 DEBUG nova.network.neutron [-] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:08:13 np0005486808 nova_compute[259627]: 2025-10-14 09:08:13.977 2 INFO nova.compute.manager [-] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Took 2.97 seconds to deallocate network for instance.#033[00m
Oct 14 05:08:14 np0005486808 nova_compute[259627]: 2025-10-14 09:08:14.030 2 DEBUG oslo_concurrency.lockutils [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:08:14 np0005486808 nova_compute[259627]: 2025-10-14 09:08:14.031 2 DEBUG oslo_concurrency.lockutils [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:08:14 np0005486808 nova_compute[259627]: 2025-10-14 09:08:14.069 2 DEBUG nova.compute.manager [req-391e3273-d6cf-4ac3-b7da-854de9da06d7 req-7de161ba-b6f5-4d16-bfab-c8a135290f17 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8a58a504-85a5-44e6-b815-99abb4ca2fc8] Received event network-vif-deleted-2d7c67a0-10d0-4de1-a430-60e038fcf537 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:08:14 np0005486808 nova_compute[259627]: 2025-10-14 09:08:14.106 2 DEBUG oslo_concurrency.processutils [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:08:14 np0005486808 nova_compute[259627]: 2025-10-14 09:08:14.295 2 DEBUG nova.network.neutron [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Successfully updated port: 1c4cb1d0-4a75-436b-965e-910994f941f6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:08:14 np0005486808 nova_compute[259627]: 2025-10-14 09:08:14.318 2 DEBUG oslo_concurrency.lockutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquiring lock "refresh_cache-9512792b-fd02-459a-8377-c2815c130684" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:08:14 np0005486808 nova_compute[259627]: 2025-10-14 09:08:14.319 2 DEBUG oslo_concurrency.lockutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Acquired lock "refresh_cache-9512792b-fd02-459a-8377-c2815c130684" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:08:14 np0005486808 nova_compute[259627]: 2025-10-14 09:08:14.319 2 DEBUG nova.network.neutron [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:08:14 np0005486808 nova_compute[259627]: 2025-10-14 09:08:14.509 2 DEBUG nova.network.neutron [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:08:14 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:08:14 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1454732831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:08:14 np0005486808 nova_compute[259627]: 2025-10-14 09:08:14.557 2 DEBUG oslo_concurrency.processutils [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:08:14 np0005486808 nova_compute[259627]: 2025-10-14 09:08:14.566 2 DEBUG nova.compute.provider_tree [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:08:14 np0005486808 nova_compute[259627]: 2025-10-14 09:08:14.586 2 DEBUG nova.scheduler.client.report [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:08:14 np0005486808 nova_compute[259627]: 2025-10-14 09:08:14.611 2 DEBUG oslo_concurrency.lockutils [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:08:14 np0005486808 nova_compute[259627]: 2025-10-14 09:08:14.650 2 INFO nova.scheduler.client.report [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Deleted allocations for instance 8a58a504-85a5-44e6-b815-99abb4ca2fc8#033[00m
Oct 14 05:08:14 np0005486808 serene_gates[330474]: {
Oct 14 05:08:14 np0005486808 serene_gates[330474]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:08:14 np0005486808 serene_gates[330474]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:08:14 np0005486808 serene_gates[330474]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:08:14 np0005486808 serene_gates[330474]:        "osd_id": 2,
Oct 14 05:08:14 np0005486808 serene_gates[330474]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:08:14 np0005486808 serene_gates[330474]:        "type": "bluestore"
Oct 14 05:08:14 np0005486808 serene_gates[330474]:    },
Oct 14 05:08:14 np0005486808 serene_gates[330474]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:08:14 np0005486808 serene_gates[330474]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:08:14 np0005486808 serene_gates[330474]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:08:14 np0005486808 serene_gates[330474]:        "osd_id": 1,
Oct 14 05:08:14 np0005486808 serene_gates[330474]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:08:14 np0005486808 serene_gates[330474]:        "type": "bluestore"
Oct 14 05:08:14 np0005486808 serene_gates[330474]:    },
Oct 14 05:08:14 np0005486808 serene_gates[330474]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:08:14 np0005486808 serene_gates[330474]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:08:14 np0005486808 serene_gates[330474]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:08:14 np0005486808 serene_gates[330474]:        "osd_id": 0,
Oct 14 05:08:14 np0005486808 serene_gates[330474]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:08:14 np0005486808 serene_gates[330474]:        "type": "bluestore"
Oct 14 05:08:14 np0005486808 serene_gates[330474]:    }
Oct 14 05:08:14 np0005486808 serene_gates[330474]: }
Oct 14 05:08:14 np0005486808 nova_compute[259627]: 2025-10-14 09:08:14.718 2 DEBUG oslo_concurrency.lockutils [None req-db464125-5a5e-4429-9bda-8f4faf7ff641 40a7a5045f164fb3bc6f8ae8a40f6bac f9883ad901fc41c2a340b171d7165a0e - - default default] Lock "8a58a504-85a5-44e6-b815-99abb4ca2fc8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:08:14 np0005486808 systemd[1]: libpod-5109e769ebb815308c8da55a68fd91edb3b2e56d787ed21a234b721467190bc6.scope: Deactivated successfully.
Oct 14 05:08:14 np0005486808 systemd[1]: libpod-5109e769ebb815308c8da55a68fd91edb3b2e56d787ed21a234b721467190bc6.scope: Consumed 1.084s CPU time.
Oct 14 05:08:14 np0005486808 podman[330458]: 2025-10-14 09:08:14.762109198 +0000 UTC m=+1.249963244 container died 5109e769ebb815308c8da55a68fd91edb3b2e56d787ed21a234b721467190bc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:08:14 np0005486808 systemd[1]: var-lib-containers-storage-overlay-2153f7690c5386b0dcdd623d84639df483eb75202d3d19957e138eb75c00876e-merged.mount: Deactivated successfully.
Oct 14 05:08:14 np0005486808 podman[330458]: 2025-10-14 09:08:14.843282827 +0000 UTC m=+1.331136883 container remove 5109e769ebb815308c8da55a68fd91edb3b2e56d787ed21a234b721467190bc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:08:14 np0005486808 systemd[1]: libpod-conmon-5109e769ebb815308c8da55a68fd91edb3b2e56d787ed21a234b721467190bc6.scope: Deactivated successfully.
Oct 14 05:08:14 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:08:14 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:08:14 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:08:14 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:08:14 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev eb7ae002-7cd9-49ec-aecf-1ebfb1887d68 does not exist
Oct 14 05:08:14 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 1bd5afc2-1141-42f6-b8d1-c9a2ebae3473 does not exist
Oct 14 05:08:15 np0005486808 nova_compute[259627]: 2025-10-14 09:08:15.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:08:15 np0005486808 nova_compute[259627]: 2025-10-14 09:08:15.584 2 DEBUG nova.network.neutron [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Updating instance_info_cache with network_info: [{"id": "1c4cb1d0-4a75-436b-965e-910994f941f6", "address": "fa:16:3e:3c:c5:30", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c4cb1d0-4a", "ovs_interfaceid": "1c4cb1d0-4a75-436b-965e-910994f941f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:08:15 np0005486808 nova_compute[259627]: 2025-10-14 09:08:15.606 2 DEBUG oslo_concurrency.lockutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Releasing lock "refresh_cache-9512792b-fd02-459a-8377-c2815c130684" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:08:15 np0005486808 nova_compute[259627]: 2025-10-14 09:08:15.606 2 DEBUG nova.compute.manager [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Instance network_info: |[{"id": "1c4cb1d0-4a75-436b-965e-910994f941f6", "address": "fa:16:3e:3c:c5:30", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c4cb1d0-4a", "ovs_interfaceid": "1c4cb1d0-4a75-436b-965e-910994f941f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:08:15 np0005486808 nova_compute[259627]: 2025-10-14 09:08:15.608 2 DEBUG nova.virt.libvirt.driver [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] [instance: 9512792b-fd02-459a-8377-c2815c130684] Start _get_guest_xml network_info=[{"id": "1c4cb1d0-4a75-436b-965e-910994f941f6", "address": "fa:16:3e:3c:c5:30", "network": {"id": "0a07d59e-be8b-4d41-a103-fb5a64bf6f88", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1825604540-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d39581efff7d48fb83412ca1f615d412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c4cb1d0-4a", "ovs_interfaceid": "1c4cb1d0-4a75-436b-965e-910994f941f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:08:15 np0005486808 nova_compute[259627]: 2025-10-14 09:08:15.613 2 WARNING nova.virt.libvirt.driver [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:08:15 np0005486808 nova_compute[259627]: 2025-10-14 09:08:15.620 2 DEBUG nova.virt.libvirt.host [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:08:15 np0005486808 nova_compute[259627]: 2025-10-14 09:08:15.621 2 DEBUG nova.virt.libvirt.host [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:08:15 np0005486808 nova_compute[259627]: 2025-10-14 09:08:15.624 2 DEBUG nova.virt.libvirt.host [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:08:15 np0005486808 nova_compute[259627]: 2025-10-14 09:08:15.625 2 DEBUG nova.virt.libvirt.host [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:08:15 np0005486808 nova_compute[259627]: 2025-10-14 09:08:15.625 2 DEBUG nova.virt.libvirt.driver [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:08:15 np0005486808 nova_compute[259627]: 2025-10-14 09:08:15.625 2 DEBUG nova.virt.hardware [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:08:15 np0005486808 nova_compute[259627]: 2025-10-14 09:08:15.625 2 DEBUG nova.virt.hardware [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:08:15 np0005486808 nova_compute[259627]: 2025-10-14 09:08:15.626 2 DEBUG nova.virt.hardware [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:08:15 np0005486808 nova_compute[259627]: 2025-10-14 09:08:15.626 2 DEBUG nova.virt.hardware [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:08:15 np0005486808 nova_compute[259627]: 2025-10-14 09:08:15.626 2 DEBUG nova.virt.hardware [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:08:15 np0005486808 nova_compute[259627]: 2025-10-14 09:08:15.626 2 DEBUG nova.virt.hardware [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:08:15 np0005486808 nova_compute[259627]: 2025-10-14 09:08:15.626 2 DEBUG nova.virt.hardware [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:08:15 np0005486808 nova_compute[259627]: 2025-10-14 09:08:15.626 2 DEBUG nova.virt.hardware [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:08:15 np0005486808 nova_compute[259627]: 2025-10-14 09:08:15.627 2 DEBUG nova.virt.hardware [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:08:15 np0005486808 nova_compute[259627]: 2025-10-14 09:08:15.627 2 DEBUG nova.virt.hardware [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:08:15 np0005486808 nova_compute[259627]: 2025-10-14 09:08:15.627 2 DEBUG nova.virt.hardware [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:08:15 np0005486808 nova_compute[259627]: 2025-10-14 09:08:15.630 2 DEBUG oslo_concurrency.processutils [None req-d9dd81b0-aec4-4b87-8b8c-c20ebca67df1 a72439ec330b476ca4bb358682159b61 d39581efff7d48fb83412ca1f615d412 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:08:15 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1601: 305 pgs: 305 active+clean; 167 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 188 KiB/s rd, 3.6 MiB/s wr, 281 op/s
Oct 14 05:08:15 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:08:15 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:08:16 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:11:44 np0005486808 ovn_controller[152662]: 2025-10-14T09:11:44Z|00861|binding|INFO|Setting lport c9b0841b-401d-4f72-aa51-209173353afe ovn-installed in OVS
Oct 14 05:11:44 np0005486808 ovn_controller[152662]: 2025-10-14T09:11:44Z|00862|binding|INFO|Setting lport c9b0841b-401d-4f72-aa51-209173353afe up in Southbound
Oct 14 05:11:44 np0005486808 ovn_controller[152662]: 2025-10-14T09:11:44Z|00863|binding|INFO|Releasing lport c9b0841b-401d-4f72-aa51-209173353afe from this chassis (sb_readonly=1)
Oct 14 05:11:44 np0005486808 ovn_controller[152662]: 2025-10-14T09:11:44Z|00864|if_status|INFO|Dropped 1 log messages in last 304 seconds (most recently, 304 seconds ago) due to excessive rate
Oct 14 05:11:44 np0005486808 ovn_controller[152662]: 2025-10-14T09:11:44Z|00865|if_status|INFO|Not setting lport c9b0841b-401d-4f72-aa51-209173353afe down as sb is readonly
Oct 14 05:11:44 np0005486808 ovn_controller[152662]: 2025-10-14T09:11:44Z|00866|binding|INFO|Removing iface tapc9b0841b-40 ovn-installed in OVS
Oct 14 05:11:44 np0005486808 ovn_controller[152662]: 2025-10-14T09:11:44Z|00867|binding|INFO|Releasing lport c9b0841b-401d-4f72-aa51-209173353afe from this chassis (sb_readonly=0)
Oct 14 05:11:44 np0005486808 ovn_controller[152662]: 2025-10-14T09:11:44Z|00868|binding|INFO|Setting lport c9b0841b-401d-4f72-aa51-209173353afe down in Southbound
Oct 14 05:11:44 np0005486808 nova_compute[259627]: 2025-10-14 09:11:44.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:11:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.746 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:e1:a1 10.100.0.10'], port_security=['fa:16:3e:6d:e1:a1 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'ab77dbf7-4458-4b16-a2e7-ed73be047838', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d2f1337472a4407869e16f2271280ef', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6cb91c5c-0b16-4305-8e1d-968c62c3d3c6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9bea81a-55e6-4c5b-a0e6-3add7d9a646a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=c9b0841b-401d-4f72-aa51-209173353afe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:11:44 np0005486808 nova_compute[259627]: 2025-10-14 09:11:44.748 2 DEBUG nova.virt.libvirt.vif [None req-8493fb9e-36e7-43bd-b606-f680248c507f bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:11:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-792258866',display_name='tempest-ListServersNegativeTestJSON-server-792258866-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-792258866-1',id=83,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:11:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6d2f1337472a4407869e16f2271280ef',ramdisk_id='',reservation_id='r-f3phfmf2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-1864242738',owner_user_name='tempest-ListServersNegativeTestJSON-1864242738-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:11:39Z,user_data=None,user_id='bca591d7e37e4881bc4de44ee172b2f4',uuid=ab77dbf7-4458-4b16-a2e7-ed73be047838,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c9b0841b-401d-4f72-aa51-209173353afe", "address": "fa:16:3e:6d:e1:a1", "network": {"id": "7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-526226666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d2f1337472a4407869e16f2271280ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9b0841b-40", "ovs_interfaceid": "c9b0841b-401d-4f72-aa51-209173353afe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:11:44 np0005486808 nova_compute[259627]: 2025-10-14 09:11:44.748 2 DEBUG nova.network.os_vif_util [None req-8493fb9e-36e7-43bd-b606-f680248c507f bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Converting VIF {"id": "c9b0841b-401d-4f72-aa51-209173353afe", "address": "fa:16:3e:6d:e1:a1", "network": {"id": "7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-526226666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d2f1337472a4407869e16f2271280ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9b0841b-40", "ovs_interfaceid": "c9b0841b-401d-4f72-aa51-209173353afe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:11:44 np0005486808 nova_compute[259627]: 2025-10-14 09:11:44.749 2 DEBUG nova.network.os_vif_util [None req-8493fb9e-36e7-43bd-b606-f680248c507f bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6d:e1:a1,bridge_name='br-int',has_traffic_filtering=True,id=c9b0841b-401d-4f72-aa51-209173353afe,network=Network(7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9b0841b-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:11:44 np0005486808 nova_compute[259627]: 2025-10-14 09:11:44.749 2 DEBUG os_vif [None req-8493fb9e-36e7-43bd-b606-f680248c507f bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6d:e1:a1,bridge_name='br-int',has_traffic_filtering=True,id=c9b0841b-401d-4f72-aa51-209173353afe,network=Network(7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9b0841b-40') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:11:44 np0005486808 nova_compute[259627]: 2025-10-14 09:11:44.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:11:44 np0005486808 nova_compute[259627]: 2025-10-14 09:11:44.751 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc9b0841b-40, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:11:44 np0005486808 nova_compute[259627]: 2025-10-14 09:11:44.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:11:44 np0005486808 nova_compute[259627]: 2025-10-14 09:11:44.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:11:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.759 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7b7bccdd-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:11:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.760 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:11:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.760 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7b7bccdd-40, col_values=(('external_ids', {'iface-id': '8bb0b8f1-e510-498b-862e-2d74544dc8a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:11:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.761 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:11:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.763 162547 INFO neutron.agent.ovn.metadata.agent [-] Port c9b0841b-401d-4f72-aa51-209173353afe in datapath 7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57 unbound from our chassis#033[00m
Oct 14 05:11:44 np0005486808 nova_compute[259627]: 2025-10-14 09:11:44.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:11:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.764 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57#033[00m
Oct 14 05:11:44 np0005486808 nova_compute[259627]: 2025-10-14 09:11:44.767 2 INFO os_vif [None req-8493fb9e-36e7-43bd-b606-f680248c507f bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6d:e1:a1,bridge_name='br-int',has_traffic_filtering=True,id=c9b0841b-401d-4f72-aa51-209173353afe,network=Network(7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9b0841b-40')#033[00m
Oct 14 05:11:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.782 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[38d138bd-38de-4741-8b56-f796641e2975]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:11:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.817 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3cd9ddec-309e-47f1-9304-37d03c3442c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:11:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.820 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4d91c704-7ee0-4233-b3c3-696325105cfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:11:44 np0005486808 podman[345814]: 2025-10-14 09:11:44.840665015 +0000 UTC m=+0.088288195 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:11:44 np0005486808 podman[345812]: 2025-10-14 09:11:44.85794176 +0000 UTC m=+0.110097202 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:11:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.859 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[96e631cd-b17a-4b72-b118-522c64d1da52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:11:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.878 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[19e26bd4-cb53-4f1e-a6ad-44046e03f45f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7b7bccdd-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:80:80'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 832, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 832, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 249], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687671, 'reachable_time': 27157, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345872, 'error': None, 'target': 'ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:11:44 np0005486808 nova_compute[259627]: 2025-10-14 09:11:44.890 2 DEBUG nova.compute.manager [req-ad4692a2-889d-47de-bb5f-ca2c87865c58 req-2fcbd82d-1f55-4f76-b744-ce5ba0502bf8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Received event network-vif-unplugged-c9b0841b-401d-4f72-aa51-209173353afe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:11:44 np0005486808 nova_compute[259627]: 2025-10-14 09:11:44.890 2 DEBUG oslo_concurrency.lockutils [req-ad4692a2-889d-47de-bb5f-ca2c87865c58 req-2fcbd82d-1f55-4f76-b744-ce5ba0502bf8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:11:44 np0005486808 nova_compute[259627]: 2025-10-14 09:11:44.890 2 DEBUG oslo_concurrency.lockutils [req-ad4692a2-889d-47de-bb5f-ca2c87865c58 req-2fcbd82d-1f55-4f76-b744-ce5ba0502bf8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:11:44 np0005486808 nova_compute[259627]: 2025-10-14 09:11:44.891 2 DEBUG oslo_concurrency.lockutils [req-ad4692a2-889d-47de-bb5f-ca2c87865c58 req-2fcbd82d-1f55-4f76-b744-ce5ba0502bf8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:11:44 np0005486808 nova_compute[259627]: 2025-10-14 09:11:44.891 2 DEBUG nova.compute.manager [req-ad4692a2-889d-47de-bb5f-ca2c87865c58 req-2fcbd82d-1f55-4f76-b744-ce5ba0502bf8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] No waiting events found dispatching network-vif-unplugged-c9b0841b-401d-4f72-aa51-209173353afe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:11:44 np0005486808 nova_compute[259627]: 2025-10-14 09:11:44.892 2 DEBUG nova.compute.manager [req-ad4692a2-889d-47de-bb5f-ca2c87865c58 req-2fcbd82d-1f55-4f76-b744-ce5ba0502bf8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Received event network-vif-unplugged-c9b0841b-401d-4f72-aa51-209173353afe for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:11:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.901 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4fe9daed-d78f-4185-8445-e4290eff02be]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7b7bccdd-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687688, 'tstamp': 687688}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345876, 'error': None, 'target': 'ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7b7bccdd-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687695, 'tstamp': 687695}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345876, 'error': None, 'target': 'ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:11:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.904 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b7bccdd-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:11:44 np0005486808 nova_compute[259627]: 2025-10-14 09:11:44.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:11:44 np0005486808 nova_compute[259627]: 2025-10-14 09:11:44.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:11:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.907 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7b7bccdd-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:11:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.908 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:11:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.908 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7b7bccdd-40, col_values=(('external_ids', {'iface-id': '8bb0b8f1-e510-498b-862e-2d74544dc8a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:11:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.909 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:11:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.910 162547 INFO neutron.agent.ovn.metadata.agent [-] Port c9b0841b-401d-4f72-aa51-209173353afe in datapath 7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57 unbound from our chassis#033[00m
Oct 14 05:11:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.912 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57#033[00m
Oct 14 05:11:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.929 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[21dd3b47-bc0f-452c-ba2b-22e58352b3d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:11:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.968 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[51b1f586-33e7-4f65-bfd2-2abc1b640fd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:11:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:44.972 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f1dbe09c-4fb9-47a7-9f57-459b57caf7d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:11:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:45.006 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[10dc728b-642f-466f-bd85-569d68ea0393]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:11:45 np0005486808 rsyslogd[1002]: imjournal: 11491 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct 14 05:11:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:45.051 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0beea3a8-fc0a-456a-8903-842fa4f6a797]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7b7bccdd-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:80:80'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 832, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 832, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 249], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687671, 'reachable_time': 27157, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345882, 'error': None, 'target': 'ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:11:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:45.073 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5aaa5cc1-b595-4811-b355-40ebd707945a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7b7bccdd-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687688, 'tstamp': 687688}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345883, 'error': None, 'target': 'ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7b7bccdd-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687695, 'tstamp': 687695}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345883, 'error': None, 'target': 'ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:11:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:45.076 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b7bccdd-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:11:45 np0005486808 nova_compute[259627]: 2025-10-14 09:11:45.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:11:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:45.078 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7b7bccdd-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:11:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:45.079 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:11:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:45.079 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7b7bccdd-40, col_values=(('external_ids', {'iface-id': '8bb0b8f1-e510-498b-862e-2d74544dc8a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:11:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:45.079 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:11:45 np0005486808 nova_compute[259627]: 2025-10-14 09:11:45.238 2 INFO nova.virt.libvirt.driver [None req-8493fb9e-36e7-43bd-b606-f680248c507f bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Deleting instance files /var/lib/nova/instances/ab77dbf7-4458-4b16-a2e7-ed73be047838_del#033[00m
Oct 14 05:11:45 np0005486808 nova_compute[259627]: 2025-10-14 09:11:45.240 2 INFO nova.virt.libvirt.driver [None req-8493fb9e-36e7-43bd-b606-f680248c507f bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Deletion of /var/lib/nova/instances/ab77dbf7-4458-4b16-a2e7-ed73be047838_del complete#033[00m
Oct 14 05:11:45 np0005486808 nova_compute[259627]: 2025-10-14 09:11:45.298 2 INFO nova.compute.manager [None req-8493fb9e-36e7-43bd-b606-f680248c507f bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Took 0.83 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:11:45 np0005486808 nova_compute[259627]: 2025-10-14 09:11:45.299 2 DEBUG oslo.service.loopingcall [None req-8493fb9e-36e7-43bd-b606-f680248c507f bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:11:45 np0005486808 nova_compute[259627]: 2025-10-14 09:11:45.299 2 DEBUG nova.compute.manager [-] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:11:45 np0005486808 nova_compute[259627]: 2025-10-14 09:11:45.299 2 DEBUG nova.network.neutron [-] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:11:45 np0005486808 ovn_controller[152662]: 2025-10-14T09:11:45Z|00089|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1a:5e:c8 10.100.0.5
Oct 14 05:11:45 np0005486808 ovn_controller[152662]: 2025-10-14T09:11:45Z|00090|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1a:5e:c8 10.100.0.5
Oct 14 05:11:45 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1719: 305 pgs: 305 active+clean; 350 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 9.6 MiB/s wr, 656 op/s
Oct 14 05:11:45 np0005486808 nova_compute[259627]: 2025-10-14 09:11:45.979 2 DEBUG nova.network.neutron [-] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:11:45 np0005486808 nova_compute[259627]: 2025-10-14 09:11:45.997 2 INFO nova.compute.manager [-] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Took 0.70 seconds to deallocate network for instance.#033[00m
Oct 14 05:11:46 np0005486808 nova_compute[259627]: 2025-10-14 09:11:46.057 2 DEBUG oslo_concurrency.lockutils [None req-8493fb9e-36e7-43bd-b606-f680248c507f bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:11:46 np0005486808 nova_compute[259627]: 2025-10-14 09:11:46.057 2 DEBUG oslo_concurrency.lockutils [None req-8493fb9e-36e7-43bd-b606-f680248c507f bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:11:46 np0005486808 nova_compute[259627]: 2025-10-14 09:11:46.069 2 DEBUG nova.compute.manager [req-4d84d474-ddc8-4f15-bf22-2adfc94f0b35 req-0e5ef6dc-4f64-4ba9-bd1b-ecb1545a0acc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Received event network-vif-deleted-c9b0841b-401d-4f72-aa51-209173353afe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:11:46 np0005486808 nova_compute[259627]: 2025-10-14 09:11:46.220 2 DEBUG oslo_concurrency.processutils [None req-8493fb9e-36e7-43bd-b606-f680248c507f bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:11:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:11:46 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4254349499' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:11:46 np0005486808 nova_compute[259627]: 2025-10-14 09:11:46.714 2 DEBUG oslo_concurrency.processutils [None req-8493fb9e-36e7-43bd-b606-f680248c507f bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:11:46 np0005486808 nova_compute[259627]: 2025-10-14 09:11:46.727 2 DEBUG nova.compute.provider_tree [None req-8493fb9e-36e7-43bd-b606-f680248c507f bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:11:46 np0005486808 nova_compute[259627]: 2025-10-14 09:11:46.751 2 DEBUG nova.scheduler.client.report [None req-8493fb9e-36e7-43bd-b606-f680248c507f bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:11:46 np0005486808 nova_compute[259627]: 2025-10-14 09:11:46.779 2 DEBUG oslo_concurrency.lockutils [None req-8493fb9e-36e7-43bd-b606-f680248c507f bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:11:46 np0005486808 nova_compute[259627]: 2025-10-14 09:11:46.801 2 INFO nova.scheduler.client.report [None req-8493fb9e-36e7-43bd-b606-f680248c507f bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Deleted allocations for instance ab77dbf7-4458-4b16-a2e7-ed73be047838#033[00m
Oct 14 05:11:46 np0005486808 nova_compute[259627]: 2025-10-14 09:11:46.864 2 DEBUG oslo_concurrency.lockutils [None req-8493fb9e-36e7-43bd-b606-f680248c507f bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "ab77dbf7-4458-4b16-a2e7-ed73be047838" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.399s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:11:46 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 05:11:46 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.0 total, 600.0 interval#012Cumulative writes: 7937 writes, 35K keys, 7937 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s#012Cumulative WAL: 7937 writes, 7937 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1669 writes, 7750 keys, 1669 commit groups, 1.0 writes per commit group, ingest: 9.97 MB, 0.02 MB/s#012Interval WAL: 1669 writes, 1669 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     94.3      0.45              0.14        21    0.021       0      0       0.0       0.0#012  L6      1/0    9.45 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.5    182.7    149.9      1.00              0.48        20    0.050    100K    11K       0.0       0.0#012 Sum      1/0    9.45 MB   0.0      0.2     0.0      0.1       0.2      0.1       0.0   4.5    126.2    132.6      1.45              0.62        41    0.035    100K    11K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.7     88.1     90.2      0.58              0.18        10    0.058     31K   3151       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0    182.7    149.9      1.00              0.48        20    0.050    100K    11K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     95.7      0.44              0.14        20    0.022       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      7.3      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3000.0 total, 600.0 interval#012Flush(GB): cumulative 0.041, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.19 GB write, 0.06 MB/s write, 0.18 GB read, 0.06 MB/s read, 1.5 seconds#012Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.6 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5646f3b2b1f0#2 capacity: 304.00 MB usage: 22.02 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000158 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(1452,21.23 MB,6.98202%) FilterBlock(42,290.30 KB,0.0932543%) IndexBlock(42,526.34 KB,0.169081%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct 14 05:11:47 np0005486808 nova_compute[259627]: 2025-10-14 09:11:47.118 2 DEBUG nova.compute.manager [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Received event network-vif-plugged-c9b0841b-401d-4f72-aa51-209173353afe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:11:47 np0005486808 nova_compute[259627]: 2025-10-14 09:11:47.119 2 DEBUG oslo_concurrency.lockutils [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:11:47 np0005486808 nova_compute[259627]: 2025-10-14 09:11:47.120 2 DEBUG oslo_concurrency.lockutils [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:11:47 np0005486808 nova_compute[259627]: 2025-10-14 09:11:47.120 2 DEBUG oslo_concurrency.lockutils [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:11:47 np0005486808 nova_compute[259627]: 2025-10-14 09:11:47.121 2 DEBUG nova.compute.manager [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] No waiting events found dispatching network-vif-plugged-c9b0841b-401d-4f72-aa51-209173353afe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:11:47 np0005486808 nova_compute[259627]: 2025-10-14 09:11:47.121 2 WARNING nova.compute.manager [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Received unexpected event network-vif-plugged-c9b0841b-401d-4f72-aa51-209173353afe for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:11:47 np0005486808 nova_compute[259627]: 2025-10-14 09:11:47.121 2 DEBUG nova.compute.manager [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Received event network-vif-plugged-c9b0841b-401d-4f72-aa51-209173353afe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:11:47 np0005486808 nova_compute[259627]: 2025-10-14 09:11:47.122 2 DEBUG oslo_concurrency.lockutils [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:11:47 np0005486808 nova_compute[259627]: 2025-10-14 09:11:47.122 2 DEBUG oslo_concurrency.lockutils [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:11:47 np0005486808 nova_compute[259627]: 2025-10-14 09:11:47.123 2 DEBUG oslo_concurrency.lockutils [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:11:47 np0005486808 nova_compute[259627]: 2025-10-14 09:11:47.123 2 DEBUG nova.compute.manager [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] No waiting events found dispatching network-vif-plugged-c9b0841b-401d-4f72-aa51-209173353afe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:11:47 np0005486808 nova_compute[259627]: 2025-10-14 09:11:47.123 2 WARNING nova.compute.manager [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Received unexpected event network-vif-plugged-c9b0841b-401d-4f72-aa51-209173353afe for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:11:47 np0005486808 nova_compute[259627]: 2025-10-14 09:11:47.124 2 DEBUG nova.compute.manager [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Received event network-vif-plugged-c9b0841b-401d-4f72-aa51-209173353afe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:11:47 np0005486808 nova_compute[259627]: 2025-10-14 09:11:47.124 2 DEBUG oslo_concurrency.lockutils [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:11:47 np0005486808 nova_compute[259627]: 2025-10-14 09:11:47.125 2 DEBUG oslo_concurrency.lockutils [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:11:47 np0005486808 nova_compute[259627]: 2025-10-14 09:11:47.125 2 DEBUG oslo_concurrency.lockutils [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:11:47 np0005486808 nova_compute[259627]: 2025-10-14 09:11:47.125 2 DEBUG nova.compute.manager [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] No waiting events found dispatching network-vif-plugged-c9b0841b-401d-4f72-aa51-209173353afe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:11:47 np0005486808 nova_compute[259627]: 2025-10-14 09:11:47.126 2 WARNING nova.compute.manager [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Received unexpected event network-vif-plugged-c9b0841b-401d-4f72-aa51-209173353afe for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:11:47 np0005486808 nova_compute[259627]: 2025-10-14 09:11:47.126 2 DEBUG nova.compute.manager [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Received event network-vif-unplugged-c9b0841b-401d-4f72-aa51-209173353afe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:11:47 np0005486808 nova_compute[259627]: 2025-10-14 09:11:47.127 2 DEBUG oslo_concurrency.lockutils [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:11:47 np0005486808 nova_compute[259627]: 2025-10-14 09:11:47.127 2 DEBUG oslo_concurrency.lockutils [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:11:47 np0005486808 nova_compute[259627]: 2025-10-14 09:11:47.128 2 DEBUG oslo_concurrency.lockutils [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:11:47 np0005486808 nova_compute[259627]: 2025-10-14 09:11:47.128 2 DEBUG nova.compute.manager [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] No waiting events found dispatching network-vif-unplugged-c9b0841b-401d-4f72-aa51-209173353afe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:11:47 np0005486808 nova_compute[259627]: 2025-10-14 09:11:47.128 2 WARNING nova.compute.manager [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Received unexpected event network-vif-unplugged-c9b0841b-401d-4f72-aa51-209173353afe for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:11:47 np0005486808 nova_compute[259627]: 2025-10-14 09:11:47.129 2 DEBUG nova.compute.manager [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Received event network-vif-plugged-c9b0841b-401d-4f72-aa51-209173353afe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:11:47 np0005486808 nova_compute[259627]: 2025-10-14 09:11:47.129 2 DEBUG oslo_concurrency.lockutils [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:11:47 np0005486808 nova_compute[259627]: 2025-10-14 09:11:47.129 2 DEBUG oslo_concurrency.lockutils [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:11:47 np0005486808 nova_compute[259627]: 2025-10-14 09:11:47.130 2 DEBUG oslo_concurrency.lockutils [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ab77dbf7-4458-4b16-a2e7-ed73be047838-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:11:47 np0005486808 nova_compute[259627]: 2025-10-14 09:11:47.130 2 DEBUG nova.compute.manager [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] No waiting events found dispatching network-vif-plugged-c9b0841b-401d-4f72-aa51-209173353afe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:11:47 np0005486808 nova_compute[259627]: 2025-10-14 09:11:47.131 2 WARNING nova.compute.manager [req-605dfbb0-4aa3-4adc-893c-fb9c037a97dc req-92e3b04a-3386-427f-b8bd-15604d1f0774 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Received unexpected event network-vif-plugged-c9b0841b-401d-4f72-aa51-209173353afe for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:11:47 np0005486808 nova_compute[259627]: 2025-10-14 09:11:47.497 2 INFO nova.compute.manager [None req-8482ca0e-7f08-4d01-a98d-1b05db4bcb53 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Get console output#033[00m
Oct 14 05:11:47 np0005486808 nova_compute[259627]: 2025-10-14 09:11:47.504 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 14 05:11:47 np0005486808 nova_compute[259627]: 2025-10-14 09:11:47.746 2 INFO nova.compute.manager [None req-12f90051-391d-48ea-8fbb-12aa75a3f8d2 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Pausing#033[00m
Oct 14 05:11:47 np0005486808 nova_compute[259627]: 2025-10-14 09:11:47.748 2 DEBUG nova.objects.instance [None req-12f90051-391d-48ea-8fbb-12aa75a3f8d2 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'flavor' on Instance uuid 290980d2-08b4-4029-a1c3-becd3457a410 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:11:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:11:47 np0005486808 nova_compute[259627]: 2025-10-14 09:11:47.782 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433107.7814875, 290980d2-08b4-4029-a1c3-becd3457a410 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:11:47 np0005486808 nova_compute[259627]: 2025-10-14 09:11:47.782 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:11:47 np0005486808 nova_compute[259627]: 2025-10-14 09:11:47.784 2 DEBUG nova.compute.manager [None req-12f90051-391d-48ea-8fbb-12aa75a3f8d2 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:11:47 np0005486808 nova_compute[259627]: 2025-10-14 09:11:47.817 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:11:47 np0005486808 nova_compute[259627]: 2025-10-14 09:11:47.822 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:11:47 np0005486808 nova_compute[259627]: 2025-10-14 09:11:47.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:11:47 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1720: 305 pgs: 305 active+clean; 350 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 4.2 MiB/s wr, 428 op/s
Oct 14 05:11:47 np0005486808 nova_compute[259627]: 2025-10-14 09:11:47.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:11:49 np0005486808 nova_compute[259627]: 2025-10-14 09:11:49.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:11:49 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1721: 305 pgs: 305 active+clean; 350 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 4.2 MiB/s wr, 428 op/s
Oct 14 05:11:49 np0005486808 nova_compute[259627]: 2025-10-14 09:11:49.994 2 DEBUG oslo_concurrency.lockutils [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Acquiring lock "ed2aee1e-f632-4d7f-ae03-f5d9c41e9104" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:11:49 np0005486808 nova_compute[259627]: 2025-10-14 09:11:49.994 2 DEBUG oslo_concurrency.lockutils [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "ed2aee1e-f632-4d7f-ae03-f5d9c41e9104" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:11:49 np0005486808 nova_compute[259627]: 2025-10-14 09:11:49.994 2 DEBUG oslo_concurrency.lockutils [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Acquiring lock "ed2aee1e-f632-4d7f-ae03-f5d9c41e9104-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:11:49 np0005486808 nova_compute[259627]: 2025-10-14 09:11:49.995 2 DEBUG oslo_concurrency.lockutils [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "ed2aee1e-f632-4d7f-ae03-f5d9c41e9104-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:11:49 np0005486808 nova_compute[259627]: 2025-10-14 09:11:49.995 2 DEBUG oslo_concurrency.lockutils [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "ed2aee1e-f632-4d7f-ae03-f5d9c41e9104-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:11:49 np0005486808 nova_compute[259627]: 2025-10-14 09:11:49.996 2 INFO nova.compute.manager [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Terminating instance#033[00m
Oct 14 05:11:49 np0005486808 nova_compute[259627]: 2025-10-14 09:11:49.997 2 DEBUG nova.compute.manager [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:11:50 np0005486808 kernel: tap8fb091af-e4 (unregistering): left promiscuous mode
Oct 14 05:11:50 np0005486808 NetworkManager[44885]: <info>  [1760433110.0447] device (tap8fb091af-e4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:11:50 np0005486808 ovn_controller[152662]: 2025-10-14T09:11:50Z|00869|binding|INFO|Releasing lport 8fb091af-e492-4374-b3c2-7ab4157389a6 from this chassis (sb_readonly=0)
Oct 14 05:11:50 np0005486808 ovn_controller[152662]: 2025-10-14T09:11:50Z|00870|binding|INFO|Setting lport 8fb091af-e492-4374-b3c2-7ab4157389a6 down in Southbound
Oct 14 05:11:50 np0005486808 ovn_controller[152662]: 2025-10-14T09:11:50Z|00871|binding|INFO|Removing iface tap8fb091af-e4 ovn-installed in OVS
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:11:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.134 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fa:b2:b6 10.100.0.6'], port_security=['fa:16:3e:fa:b2:b6 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'ed2aee1e-f632-4d7f-ae03-f5d9c41e9104', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d2f1337472a4407869e16f2271280ef', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6cb91c5c-0b16-4305-8e1d-968c62c3d3c6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9bea81a-55e6-4c5b-a0e6-3add7d9a646a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=8fb091af-e492-4374-b3c2-7ab4157389a6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:11:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.135 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 8fb091af-e492-4374-b3c2-7ab4157389a6 in datapath 7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57 unbound from our chassis#033[00m
Oct 14 05:11:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.140 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57#033[00m
Oct 14 05:11:50 np0005486808 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d00000054.scope: Deactivated successfully.
Oct 14 05:11:50 np0005486808 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d00000054.scope: Consumed 11.236s CPU time.
Oct 14 05:11:50 np0005486808 systemd-machined[214636]: Machine qemu-104-instance-00000054 terminated.
Oct 14 05:11:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.163 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b4b3f98f-4fe3-4729-aace-332cf73df229]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:11:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.197 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8e446223-f12a-4eb9-8ae1-d73114bf4cb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:11:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.201 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3a7ee235-6633-4658-b543-e81f9c735425]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.241 2 INFO nova.virt.libvirt.driver [-] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Instance destroyed successfully.#033[00m
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.242 2 DEBUG nova.objects.instance [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lazy-loading 'resources' on Instance uuid ed2aee1e-f632-4d7f-ae03-f5d9c41e9104 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:11:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.245 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[770fea98-dc18-4e1c-9b93-5108e41c023f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.266 2 DEBUG nova.virt.libvirt.vif [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:11:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-792258866',display_name='tempest-ListServersNegativeTestJSON-server-792258866-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-792258866-2',id=84,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-10-14T09:11:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6d2f1337472a4407869e16f2271280ef',ramdisk_id='',reservation_id='r-f3phfmf2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='
virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-1864242738',owner_user_name='tempest-ListServersNegativeTestJSON-1864242738-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:11:39Z,user_data=None,user_id='bca591d7e37e4881bc4de44ee172b2f4',uuid=ed2aee1e-f632-4d7f-ae03-f5d9c41e9104,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8fb091af-e492-4374-b3c2-7ab4157389a6", "address": "fa:16:3e:fa:b2:b6", "network": {"id": "7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-526226666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d2f1337472a4407869e16f2271280ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8fb091af-e4", "ovs_interfaceid": "8fb091af-e492-4374-b3c2-7ab4157389a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.267 2 DEBUG nova.network.os_vif_util [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Converting VIF {"id": "8fb091af-e492-4374-b3c2-7ab4157389a6", "address": "fa:16:3e:fa:b2:b6", "network": {"id": "7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-526226666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d2f1337472a4407869e16f2271280ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8fb091af-e4", "ovs_interfaceid": "8fb091af-e492-4374-b3c2-7ab4157389a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.268 2 DEBUG nova.network.os_vif_util [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fa:b2:b6,bridge_name='br-int',has_traffic_filtering=True,id=8fb091af-e492-4374-b3c2-7ab4157389a6,network=Network(7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8fb091af-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.268 2 DEBUG os_vif [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:b2:b6,bridge_name='br-int',has_traffic_filtering=True,id=8fb091af-e492-4374-b3c2-7ab4157389a6,network=Network(7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8fb091af-e4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.270 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8fb091af-e4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:11:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.275 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a30fcdd0-1ef3-409f-953a-636287532609]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7b7bccdd-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:80:80'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 832, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 832, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 249], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687671, 'reachable_time': 27157, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345929, 'error': None, 'target': 'ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.277 2 INFO os_vif [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:b2:b6,bridge_name='br-int',has_traffic_filtering=True,id=8fb091af-e492-4374-b3c2-7ab4157389a6,network=Network(7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8fb091af-e4')#033[00m
Oct 14 05:11:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.302 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8df2b51b-dff4-4332-bda7-73d517ed5891]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7b7bccdd-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687688, 'tstamp': 687688}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345934, 'error': None, 'target': 'ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7b7bccdd-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687695, 'tstamp': 687695}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345934, 'error': None, 'target': 'ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:11:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.305 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b7bccdd-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:11:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.310 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7b7bccdd-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:11:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.310 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:11:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.311 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7b7bccdd-40, col_values=(('external_ids', {'iface-id': '8bb0b8f1-e510-498b-862e-2d74544dc8a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:11:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.311 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.315 2 DEBUG oslo_concurrency.lockutils [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Acquiring lock "62fe725d-b24a-477a-a275-06d2cd960aaf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.315 2 DEBUG oslo_concurrency.lockutils [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "62fe725d-b24a-477a-a275-06d2cd960aaf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.316 2 DEBUG oslo_concurrency.lockutils [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Acquiring lock "62fe725d-b24a-477a-a275-06d2cd960aaf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.316 2 DEBUG oslo_concurrency.lockutils [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "62fe725d-b24a-477a-a275-06d2cd960aaf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.316 2 DEBUG oslo_concurrency.lockutils [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "62fe725d-b24a-477a-a275-06d2cd960aaf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.317 2 INFO nova.compute.manager [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Terminating instance#033[00m
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.318 2 DEBUG nova.compute.manager [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:11:50 np0005486808 kernel: tap4006119e-fa (unregistering): left promiscuous mode
Oct 14 05:11:50 np0005486808 NetworkManager[44885]: <info>  [1760433110.3620] device (tap4006119e-fa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:11:50 np0005486808 ovn_controller[152662]: 2025-10-14T09:11:50Z|00872|binding|INFO|Releasing lport 4006119e-fa08-4095-bba7-d338c82ac066 from this chassis (sb_readonly=0)
Oct 14 05:11:50 np0005486808 ovn_controller[152662]: 2025-10-14T09:11:50Z|00873|binding|INFO|Setting lport 4006119e-fa08-4095-bba7-d338c82ac066 down in Southbound
Oct 14 05:11:50 np0005486808 ovn_controller[152662]: 2025-10-14T09:11:50Z|00874|binding|INFO|Removing iface tap4006119e-fa ovn-installed in OVS
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:11:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.388 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:3e:79 10.100.0.11'], port_security=['fa:16:3e:be:3e:79 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '62fe725d-b24a-477a-a275-06d2cd960aaf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d2f1337472a4407869e16f2271280ef', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6cb91c5c-0b16-4305-8e1d-968c62c3d3c6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9bea81a-55e6-4c5b-a0e6-3add7d9a646a, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=4006119e-fa08-4095-bba7-d338c82ac066) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:11:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.389 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 4006119e-fa08-4095-bba7-d338c82ac066 in datapath 7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57 unbound from our chassis#033[00m
Oct 14 05:11:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.390 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:11:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.392 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[363379de-1c6e-4c72-b722-f525a9ca9740]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:11:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.396 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57 namespace which is not needed anymore#033[00m
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:11:50 np0005486808 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d00000055.scope: Deactivated successfully.
Oct 14 05:11:50 np0005486808 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d00000055.scope: Consumed 9.797s CPU time.
Oct 14 05:11:50 np0005486808 systemd-machined[214636]: Machine qemu-105-instance-00000055 terminated.
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.434 2 DEBUG nova.compute.manager [req-3578af30-a264-4f85-9b55-cc6702ba094b req-1f7aef61-b963-4719-bac4-0db59f18bee8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Received event network-vif-unplugged-8fb091af-e492-4374-b3c2-7ab4157389a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.435 2 DEBUG oslo_concurrency.lockutils [req-3578af30-a264-4f85-9b55-cc6702ba094b req-1f7aef61-b963-4719-bac4-0db59f18bee8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ed2aee1e-f632-4d7f-ae03-f5d9c41e9104-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.436 2 DEBUG oslo_concurrency.lockutils [req-3578af30-a264-4f85-9b55-cc6702ba094b req-1f7aef61-b963-4719-bac4-0db59f18bee8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ed2aee1e-f632-4d7f-ae03-f5d9c41e9104-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.436 2 DEBUG oslo_concurrency.lockutils [req-3578af30-a264-4f85-9b55-cc6702ba094b req-1f7aef61-b963-4719-bac4-0db59f18bee8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ed2aee1e-f632-4d7f-ae03-f5d9c41e9104-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.437 2 DEBUG nova.compute.manager [req-3578af30-a264-4f85-9b55-cc6702ba094b req-1f7aef61-b963-4719-bac4-0db59f18bee8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] No waiting events found dispatching network-vif-unplugged-8fb091af-e492-4374-b3c2-7ab4157389a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.437 2 DEBUG nova.compute.manager [req-3578af30-a264-4f85-9b55-cc6702ba094b req-1f7aef61-b963-4719-bac4-0db59f18bee8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Received event network-vif-unplugged-8fb091af-e492-4374-b3c2-7ab4157389a6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:11:50 np0005486808 neutron-haproxy-ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57[345545]: [NOTICE]   (345552) : haproxy version is 2.8.14-c23fe91
Oct 14 05:11:50 np0005486808 neutron-haproxy-ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57[345545]: [NOTICE]   (345552) : path to executable is /usr/sbin/haproxy
Oct 14 05:11:50 np0005486808 neutron-haproxy-ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57[345545]: [WARNING]  (345552) : Exiting Master process...
Oct 14 05:11:50 np0005486808 neutron-haproxy-ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57[345545]: [WARNING]  (345552) : Exiting Master process...
Oct 14 05:11:50 np0005486808 systemd-udevd[345910]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:11:50 np0005486808 kernel: tap4006119e-fa: entered promiscuous mode
Oct 14 05:11:50 np0005486808 NetworkManager[44885]: <info>  [1760433110.5440] manager: (tap4006119e-fa): new Tun device (/org/freedesktop/NetworkManager/Devices/363)
Oct 14 05:11:50 np0005486808 neutron-haproxy-ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57[345545]: [ALERT]    (345552) : Current worker (345554) exited with code 143 (Terminated)
Oct 14 05:11:50 np0005486808 neutron-haproxy-ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57[345545]: [WARNING]  (345552) : All workers exited. Exiting... (0)
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:11:50 np0005486808 ovn_controller[152662]: 2025-10-14T09:11:50Z|00875|binding|INFO|Claiming lport 4006119e-fa08-4095-bba7-d338c82ac066 for this chassis.
Oct 14 05:11:50 np0005486808 kernel: tap4006119e-fa (unregistering): left promiscuous mode
Oct 14 05:11:50 np0005486808 ovn_controller[152662]: 2025-10-14T09:11:50Z|00876|binding|INFO|4006119e-fa08-4095-bba7-d338c82ac066: Claiming fa:16:3e:be:3e:79 10.100.0.11
Oct 14 05:11:50 np0005486808 systemd[1]: libpod-46e961f84509ad1e08e7633937b736f7f97a5ded7fe16b514f3800a097fe7a89.scope: Deactivated successfully.
Oct 14 05:11:50 np0005486808 conmon[345545]: conmon 46e961f84509ad1e08e7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-46e961f84509ad1e08e7633937b736f7f97a5ded7fe16b514f3800a097fe7a89.scope/container/memory.events
Oct 14 05:11:50 np0005486808 podman[345970]: 2025-10-14 09:11:50.556736706 +0000 UTC m=+0.063817693 container died 46e961f84509ad1e08e7633937b736f7f97a5ded7fe16b514f3800a097fe7a89 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 05:11:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.567 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:3e:79 10.100.0.11'], port_security=['fa:16:3e:be:3e:79 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '62fe725d-b24a-477a-a275-06d2cd960aaf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d2f1337472a4407869e16f2271280ef', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6cb91c5c-0b16-4305-8e1d-968c62c3d3c6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9bea81a-55e6-4c5b-a0e6-3add7d9a646a, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=4006119e-fa08-4095-bba7-d338c82ac066) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.571 2 INFO nova.virt.libvirt.driver [-] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Instance destroyed successfully.#033[00m
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.571 2 DEBUG nova.objects.instance [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lazy-loading 'resources' on Instance uuid 62fe725d-b24a-477a-a275-06d2cd960aaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:11:50 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-46e961f84509ad1e08e7633937b736f7f97a5ded7fe16b514f3800a097fe7a89-userdata-shm.mount: Deactivated successfully.
Oct 14 05:11:50 np0005486808 systemd[1]: var-lib-containers-storage-overlay-f890f6c75aac33a3bd8ad553f6090efd1c052d214d952027d2bd4c2c78b53ada-merged.mount: Deactivated successfully.
Oct 14 05:11:50 np0005486808 ovn_controller[152662]: 2025-10-14T09:11:50Z|00877|binding|INFO|Setting lport 4006119e-fa08-4095-bba7-d338c82ac066 ovn-installed in OVS
Oct 14 05:11:50 np0005486808 ovn_controller[152662]: 2025-10-14T09:11:50Z|00878|binding|INFO|Setting lport 4006119e-fa08-4095-bba7-d338c82ac066 up in Southbound
Oct 14 05:11:50 np0005486808 ovn_controller[152662]: 2025-10-14T09:11:50Z|00879|binding|INFO|Releasing lport 4006119e-fa08-4095-bba7-d338c82ac066 from this chassis (sb_readonly=1)
Oct 14 05:11:50 np0005486808 ovn_controller[152662]: 2025-10-14T09:11:50Z|00880|binding|INFO|Removing iface tap4006119e-fa ovn-installed in OVS
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:11:50 np0005486808 ovn_controller[152662]: 2025-10-14T09:11:50Z|00881|binding|INFO|Releasing lport 4006119e-fa08-4095-bba7-d338c82ac066 from this chassis (sb_readonly=0)
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.594 2 DEBUG nova.virt.libvirt.vif [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:11:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-792258866',display_name='tempest-ListServersNegativeTestJSON-server-792258866-3',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-792258866-3',id=85,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=2,launched_at=2025-10-14T09:11:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6d2f1337472a4407869e16f2271280ef',ramdisk_id='',reservation_id='r-f3phfmf2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-1864242738',owner_user_name='tempest-ListServersNegativeTestJSON-1864242738-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:11:41Z,user_data=None,user_id='bca591d7e37e4881bc4de44ee172b2f4',uuid=62fe725d-b24a-477a-a275-06d2cd960aaf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4006119e-fa08-4095-bba7-d338c82ac066", "address": "fa:16:3e:be:3e:79", "network": {"id": "7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-526226666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d2f1337472a4407869e16f2271280ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4006119e-fa", "ovs_interfaceid": "4006119e-fa08-4095-bba7-d338c82ac066", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.595 2 DEBUG nova.network.os_vif_util [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Converting VIF {"id": "4006119e-fa08-4095-bba7-d338c82ac066", "address": "fa:16:3e:be:3e:79", "network": {"id": "7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-526226666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d2f1337472a4407869e16f2271280ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4006119e-fa", "ovs_interfaceid": "4006119e-fa08-4095-bba7-d338c82ac066", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:11:50 np0005486808 ovn_controller[152662]: 2025-10-14T09:11:50Z|00882|binding|INFO|Setting lport 4006119e-fa08-4095-bba7-d338c82ac066 down in Southbound
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.595 2 DEBUG nova.network.os_vif_util [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:3e:79,bridge_name='br-int',has_traffic_filtering=True,id=4006119e-fa08-4095-bba7-d338c82ac066,network=Network(7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4006119e-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.596 2 DEBUG os_vif [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:3e:79,bridge_name='br-int',has_traffic_filtering=True,id=4006119e-fa08-4095-bba7-d338c82ac066,network=Network(7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4006119e-fa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.598 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4006119e-fa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:11:50 np0005486808 podman[345970]: 2025-10-14 09:11:50.600110494 +0000 UTC m=+0.107191481 container cleanup 46e961f84509ad1e08e7633937b736f7f97a5ded7fe16b514f3800a097fe7a89 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:11:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.607 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:3e:79 10.100.0.11'], port_security=['fa:16:3e:be:3e:79 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '62fe725d-b24a-477a-a275-06d2cd960aaf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d2f1337472a4407869e16f2271280ef', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6cb91c5c-0b16-4305-8e1d-968c62c3d3c6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9bea81a-55e6-4c5b-a0e6-3add7d9a646a, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=4006119e-fa08-4095-bba7-d338c82ac066) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:11:50 np0005486808 systemd[1]: libpod-conmon-46e961f84509ad1e08e7633937b736f7f97a5ded7fe16b514f3800a097fe7a89.scope: Deactivated successfully.
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.619 2 INFO os_vif [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:3e:79,bridge_name='br-int',has_traffic_filtering=True,id=4006119e-fa08-4095-bba7-d338c82ac066,network=Network(7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4006119e-fa')#033[00m
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.683 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433095.655628, d1c0470c-5f74-43c3-a206-07147fa01d5e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.683 2 INFO nova.compute.manager [-] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:11:50 np0005486808 podman[346002]: 2025-10-14 09:11:50.683556689 +0000 UTC m=+0.048266009 container remove 46e961f84509ad1e08e7633937b736f7f97a5ded7fe16b514f3800a097fe7a89 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 14 05:11:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.692 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f71cbf70-7e6d-4f83-a024-a591930d5302]: (4, ('Tue Oct 14 09:11:50 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57 (46e961f84509ad1e08e7633937b736f7f97a5ded7fe16b514f3800a097fe7a89)\n46e961f84509ad1e08e7633937b736f7f97a5ded7fe16b514f3800a097fe7a89\nTue Oct 14 09:11:50 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57 (46e961f84509ad1e08e7633937b736f7f97a5ded7fe16b514f3800a097fe7a89)\n46e961f84509ad1e08e7633937b736f7f97a5ded7fe16b514f3800a097fe7a89\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:11:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.697 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a029ed42-bc3e-4b6a-b55a-4dbb1dfcf4c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:11:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.701 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b7bccdd-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:11:50 np0005486808 kernel: tap7b7bccdd-40: left promiscuous mode
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.705 2 DEBUG nova.compute.manager [None req-7cf7bd16-ce9a-4c4d-b745-0475e483aefe - - - - - -] [instance: d1c0470c-5f74-43c3-a206-07147fa01d5e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:11:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.712 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[122617be-aaa0-463d-b9aa-b426f00f5c53]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:11:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.736 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[03cafd5c-7a45-4743-b983-9b406aee6197]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:11:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.737 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6ebb7c4a-43ec-458f-af63-680d4dc65c49]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:11:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.764 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7eeb8f58-9bd0-4dab-8715-fe4fa5075a4d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687661, 'reachable_time': 42969, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346034, 'error': None, 'target': 'ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:11:50 np0005486808 systemd[1]: run-netns-ovnmeta\x2d7b7bccdd\x2d4e3c\x2d4e4b\x2da2d8\x2d8f7cf1dbab57.mount: Deactivated successfully.
Oct 14 05:11:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.770 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:11:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.770 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[43d17145-e24d-49b3-be13-6af8de1f63e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:11:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.771 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 4006119e-fa08-4095-bba7-d338c82ac066 in datapath 7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57 unbound from our chassis#033[00m
Oct 14 05:11:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.773 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:11:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.773 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b49d7389-dd27-49ef-96f9-97ecb4760028]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:11:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.774 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 4006119e-fa08-4095-bba7-d338c82ac066 in datapath 7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57 unbound from our chassis#033[00m
Oct 14 05:11:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.776 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7b7bccdd-4e3c-4e4b-a2d8-8f7cf1dbab57, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:11:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:50.776 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f61744ba-b0be-4930-b5df-8aeeb1e436c1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.785 2 INFO nova.virt.libvirt.driver [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Deleting instance files /var/lib/nova/instances/ed2aee1e-f632-4d7f-ae03-f5d9c41e9104_del#033[00m
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.786 2 INFO nova.virt.libvirt.driver [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Deletion of /var/lib/nova/instances/ed2aee1e-f632-4d7f-ae03-f5d9c41e9104_del complete#033[00m
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.829 2 INFO nova.compute.manager [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Took 0.83 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.830 2 DEBUG oslo.service.loopingcall [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.830 2 DEBUG nova.compute.manager [-] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:11:50 np0005486808 nova_compute[259627]: 2025-10-14 09:11:50.831 2 DEBUG nova.network.neutron [-] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:11:51 np0005486808 nova_compute[259627]: 2025-10-14 09:11:51.035 2 INFO nova.virt.libvirt.driver [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Deleting instance files /var/lib/nova/instances/62fe725d-b24a-477a-a275-06d2cd960aaf_del#033[00m
Oct 14 05:11:51 np0005486808 nova_compute[259627]: 2025-10-14 09:11:51.036 2 INFO nova.virt.libvirt.driver [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Deletion of /var/lib/nova/instances/62fe725d-b24a-477a-a275-06d2cd960aaf_del complete#033[00m
Oct 14 05:11:51 np0005486808 nova_compute[259627]: 2025-10-14 09:11:51.084 2 INFO nova.compute.manager [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Took 0.77 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:11:51 np0005486808 nova_compute[259627]: 2025-10-14 09:11:51.085 2 DEBUG oslo.service.loopingcall [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:11:51 np0005486808 nova_compute[259627]: 2025-10-14 09:11:51.085 2 DEBUG nova.compute.manager [-] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:11:51 np0005486808 nova_compute[259627]: 2025-10-14 09:11:51.086 2 DEBUG nova.network.neutron [-] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:11:51 np0005486808 nova_compute[259627]: 2025-10-14 09:11:51.544 2 DEBUG nova.compute.manager [req-493506f6-083a-47aa-8b6b-c47de4c89185 req-d3aa670d-ce47-4ffe-b2e6-40c5d452cb7a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Received event network-vif-unplugged-4006119e-fa08-4095-bba7-d338c82ac066 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:11:51 np0005486808 nova_compute[259627]: 2025-10-14 09:11:51.544 2 DEBUG oslo_concurrency.lockutils [req-493506f6-083a-47aa-8b6b-c47de4c89185 req-d3aa670d-ce47-4ffe-b2e6-40c5d452cb7a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "62fe725d-b24a-477a-a275-06d2cd960aaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:11:51 np0005486808 nova_compute[259627]: 2025-10-14 09:11:51.544 2 DEBUG oslo_concurrency.lockutils [req-493506f6-083a-47aa-8b6b-c47de4c89185 req-d3aa670d-ce47-4ffe-b2e6-40c5d452cb7a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "62fe725d-b24a-477a-a275-06d2cd960aaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:11:51 np0005486808 nova_compute[259627]: 2025-10-14 09:11:51.545 2 DEBUG oslo_concurrency.lockutils [req-493506f6-083a-47aa-8b6b-c47de4c89185 req-d3aa670d-ce47-4ffe-b2e6-40c5d452cb7a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "62fe725d-b24a-477a-a275-06d2cd960aaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:11:51 np0005486808 nova_compute[259627]: 2025-10-14 09:11:51.545 2 DEBUG nova.compute.manager [req-493506f6-083a-47aa-8b6b-c47de4c89185 req-d3aa670d-ce47-4ffe-b2e6-40c5d452cb7a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] No waiting events found dispatching network-vif-unplugged-4006119e-fa08-4095-bba7-d338c82ac066 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:11:51 np0005486808 nova_compute[259627]: 2025-10-14 09:11:51.545 2 DEBUG nova.compute.manager [req-493506f6-083a-47aa-8b6b-c47de4c89185 req-d3aa670d-ce47-4ffe-b2e6-40c5d452cb7a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Received event network-vif-unplugged-4006119e-fa08-4095-bba7-d338c82ac066 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:11:51 np0005486808 nova_compute[259627]: 2025-10-14 09:11:51.694 2 INFO nova.compute.manager [None req-370ac7de-a92b-4ded-8491-9abf4c6f9131 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Get console output#033[00m
Oct 14 05:11:51 np0005486808 nova_compute[259627]: 2025-10-14 09:11:51.702 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 14 05:11:51 np0005486808 nova_compute[259627]: 2025-10-14 09:11:51.852 2 INFO nova.compute.manager [None req-7cb6c8d7-6aeb-4f5d-89a4-1c9ad1ddf5d9 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Unpausing#033[00m
Oct 14 05:11:51 np0005486808 nova_compute[259627]: 2025-10-14 09:11:51.853 2 DEBUG nova.objects.instance [None req-7cb6c8d7-6aeb-4f5d-89a4-1c9ad1ddf5d9 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'flavor' on Instance uuid 290980d2-08b4-4029-a1c3-becd3457a410 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:11:51 np0005486808 nova_compute[259627]: 2025-10-14 09:11:51.880 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433111.8806348, 290980d2-08b4-4029-a1c3-becd3457a410 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:11:51 np0005486808 nova_compute[259627]: 2025-10-14 09:11:51.881 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:11:51 np0005486808 virtqemud[259351]: argument unsupported: QEMU guest agent is not configured
Oct 14 05:11:51 np0005486808 nova_compute[259627]: 2025-10-14 09:11:51.888 2 DEBUG nova.virt.libvirt.guest [None req-7cb6c8d7-6aeb-4f5d-89a4-1c9ad1ddf5d9 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct 14 05:11:51 np0005486808 nova_compute[259627]: 2025-10-14 09:11:51.889 2 DEBUG nova.compute.manager [None req-7cb6c8d7-6aeb-4f5d-89a4-1c9ad1ddf5d9 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:11:51 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1722: 305 pgs: 305 active+clean; 330 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 8.4 MiB/s rd, 4.3 MiB/s wr, 468 op/s
Oct 14 05:11:51 np0005486808 nova_compute[259627]: 2025-10-14 09:11:51.903 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:11:51 np0005486808 nova_compute[259627]: 2025-10-14 09:11:51.907 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:11:51 np0005486808 nova_compute[259627]: 2025-10-14 09:11:51.940 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] During sync_power_state the instance has a pending task (unpausing). Skip.#033[00m
Oct 14 05:11:52 np0005486808 nova_compute[259627]: 2025-10-14 09:11:52.515 2 DEBUG nova.compute.manager [req-ccedff70-cc7e-4036-bd43-15ad3969670c req-0d226e1c-1e49-4fa7-8107-f9be53616209 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Received event network-vif-plugged-8fb091af-e492-4374-b3c2-7ab4157389a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:11:52 np0005486808 nova_compute[259627]: 2025-10-14 09:11:52.516 2 DEBUG oslo_concurrency.lockutils [req-ccedff70-cc7e-4036-bd43-15ad3969670c req-0d226e1c-1e49-4fa7-8107-f9be53616209 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ed2aee1e-f632-4d7f-ae03-f5d9c41e9104-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:11:52 np0005486808 nova_compute[259627]: 2025-10-14 09:11:52.516 2 DEBUG oslo_concurrency.lockutils [req-ccedff70-cc7e-4036-bd43-15ad3969670c req-0d226e1c-1e49-4fa7-8107-f9be53616209 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ed2aee1e-f632-4d7f-ae03-f5d9c41e9104-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:11:52 np0005486808 nova_compute[259627]: 2025-10-14 09:11:52.516 2 DEBUG oslo_concurrency.lockutils [req-ccedff70-cc7e-4036-bd43-15ad3969670c req-0d226e1c-1e49-4fa7-8107-f9be53616209 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ed2aee1e-f632-4d7f-ae03-f5d9c41e9104-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:11:52 np0005486808 nova_compute[259627]: 2025-10-14 09:11:52.516 2 DEBUG nova.compute.manager [req-ccedff70-cc7e-4036-bd43-15ad3969670c req-0d226e1c-1e49-4fa7-8107-f9be53616209 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] No waiting events found dispatching network-vif-plugged-8fb091af-e492-4374-b3c2-7ab4157389a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:11:52 np0005486808 nova_compute[259627]: 2025-10-14 09:11:52.517 2 WARNING nova.compute.manager [req-ccedff70-cc7e-4036-bd43-15ad3969670c req-0d226e1c-1e49-4fa7-8107-f9be53616209 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Received unexpected event network-vif-plugged-8fb091af-e492-4374-b3c2-7ab4157389a6 for instance with vm_state active and task_state deleting.#033[00m
Oct 14 05:11:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:11:52 np0005486808 nova_compute[259627]: 2025-10-14 09:11:52.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:11:52 np0005486808 nova_compute[259627]: 2025-10-14 09:11:52.933 2 DEBUG nova.virt.libvirt.driver [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct 14 05:11:53 np0005486808 nova_compute[259627]: 2025-10-14 09:11:53.238 2 DEBUG nova.network.neutron [-] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:11:53 np0005486808 nova_compute[259627]: 2025-10-14 09:11:53.272 2 INFO nova.compute.manager [-] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Took 2.19 seconds to deallocate network for instance.#033[00m
Oct 14 05:11:53 np0005486808 nova_compute[259627]: 2025-10-14 09:11:53.312 2 DEBUG oslo_concurrency.lockutils [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:11:53 np0005486808 nova_compute[259627]: 2025-10-14 09:11:53.313 2 DEBUG oslo_concurrency.lockutils [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:11:53 np0005486808 nova_compute[259627]: 2025-10-14 09:11:53.362 2 DEBUG nova.network.neutron [-] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:11:53 np0005486808 nova_compute[259627]: 2025-10-14 09:11:53.404 2 INFO nova.compute.manager [-] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Took 2.57 seconds to deallocate network for instance.#033[00m
Oct 14 05:11:53 np0005486808 nova_compute[259627]: 2025-10-14 09:11:53.454 2 DEBUG oslo_concurrency.lockutils [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:11:53 np0005486808 nova_compute[259627]: 2025-10-14 09:11:53.484 2 DEBUG oslo_concurrency.processutils [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:11:53 np0005486808 nova_compute[259627]: 2025-10-14 09:11:53.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:11:53 np0005486808 nova_compute[259627]: 2025-10-14 09:11:53.662 2 DEBUG nova.compute.manager [req-a397fd1f-c5c9-4894-af03-72d97506e190 req-a05f06ca-4518-4e72-a0ef-ce4112bb9ee3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Received event network-vif-plugged-4006119e-fa08-4095-bba7-d338c82ac066 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:11:53 np0005486808 nova_compute[259627]: 2025-10-14 09:11:53.663 2 DEBUG oslo_concurrency.lockutils [req-a397fd1f-c5c9-4894-af03-72d97506e190 req-a05f06ca-4518-4e72-a0ef-ce4112bb9ee3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "62fe725d-b24a-477a-a275-06d2cd960aaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:11:53 np0005486808 nova_compute[259627]: 2025-10-14 09:11:53.664 2 DEBUG oslo_concurrency.lockutils [req-a397fd1f-c5c9-4894-af03-72d97506e190 req-a05f06ca-4518-4e72-a0ef-ce4112bb9ee3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "62fe725d-b24a-477a-a275-06d2cd960aaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:11:53 np0005486808 nova_compute[259627]: 2025-10-14 09:11:53.665 2 DEBUG oslo_concurrency.lockutils [req-a397fd1f-c5c9-4894-af03-72d97506e190 req-a05f06ca-4518-4e72-a0ef-ce4112bb9ee3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "62fe725d-b24a-477a-a275-06d2cd960aaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:11:53 np0005486808 nova_compute[259627]: 2025-10-14 09:11:53.665 2 DEBUG nova.compute.manager [req-a397fd1f-c5c9-4894-af03-72d97506e190 req-a05f06ca-4518-4e72-a0ef-ce4112bb9ee3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] No waiting events found dispatching network-vif-plugged-4006119e-fa08-4095-bba7-d338c82ac066 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:11:53 np0005486808 nova_compute[259627]: 2025-10-14 09:11:53.666 2 WARNING nova.compute.manager [req-a397fd1f-c5c9-4894-af03-72d97506e190 req-a05f06ca-4518-4e72-a0ef-ce4112bb9ee3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Received unexpected event network-vif-plugged-4006119e-fa08-4095-bba7-d338c82ac066 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:11:53 np0005486808 nova_compute[259627]: 2025-10-14 09:11:53.667 2 DEBUG nova.compute.manager [req-a397fd1f-c5c9-4894-af03-72d97506e190 req-a05f06ca-4518-4e72-a0ef-ce4112bb9ee3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Received event network-vif-plugged-4006119e-fa08-4095-bba7-d338c82ac066 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:11:53 np0005486808 nova_compute[259627]: 2025-10-14 09:11:53.668 2 DEBUG oslo_concurrency.lockutils [req-a397fd1f-c5c9-4894-af03-72d97506e190 req-a05f06ca-4518-4e72-a0ef-ce4112bb9ee3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "62fe725d-b24a-477a-a275-06d2cd960aaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:11:53 np0005486808 nova_compute[259627]: 2025-10-14 09:11:53.668 2 DEBUG oslo_concurrency.lockutils [req-a397fd1f-c5c9-4894-af03-72d97506e190 req-a05f06ca-4518-4e72-a0ef-ce4112bb9ee3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "62fe725d-b24a-477a-a275-06d2cd960aaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:11:53 np0005486808 nova_compute[259627]: 2025-10-14 09:11:53.669 2 DEBUG oslo_concurrency.lockutils [req-a397fd1f-c5c9-4894-af03-72d97506e190 req-a05f06ca-4518-4e72-a0ef-ce4112bb9ee3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "62fe725d-b24a-477a-a275-06d2cd960aaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:11:53 np0005486808 nova_compute[259627]: 2025-10-14 09:11:53.670 2 DEBUG nova.compute.manager [req-a397fd1f-c5c9-4894-af03-72d97506e190 req-a05f06ca-4518-4e72-a0ef-ce4112bb9ee3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] No waiting events found dispatching network-vif-plugged-4006119e-fa08-4095-bba7-d338c82ac066 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:11:53 np0005486808 nova_compute[259627]: 2025-10-14 09:11:53.670 2 WARNING nova.compute.manager [req-a397fd1f-c5c9-4894-af03-72d97506e190 req-a05f06ca-4518-4e72-a0ef-ce4112bb9ee3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Received unexpected event network-vif-plugged-4006119e-fa08-4095-bba7-d338c82ac066 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:11:53 np0005486808 nova_compute[259627]: 2025-10-14 09:11:53.671 2 DEBUG nova.compute.manager [req-a397fd1f-c5c9-4894-af03-72d97506e190 req-a05f06ca-4518-4e72-a0ef-ce4112bb9ee3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Received event network-vif-deleted-4006119e-fa08-4095-bba7-d338c82ac066 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:11:53 np0005486808 nova_compute[259627]: 2025-10-14 09:11:53.672 2 DEBUG nova.compute.manager [req-a397fd1f-c5c9-4894-af03-72d97506e190 req-a05f06ca-4518-4e72-a0ef-ce4112bb9ee3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Received event network-vif-deleted-8fb091af-e492-4374-b3c2-7ab4157389a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:11:53 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1723: 305 pgs: 305 active+clean; 330 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 6.5 MiB/s rd, 2.2 MiB/s wr, 320 op/s
Oct 14 05:11:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:11:53 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2556266415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:11:53 np0005486808 nova_compute[259627]: 2025-10-14 09:11:53.995 2 DEBUG oslo_concurrency.processutils [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:11:54 np0005486808 nova_compute[259627]: 2025-10-14 09:11:54.002 2 DEBUG nova.compute.provider_tree [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:11:54 np0005486808 nova_compute[259627]: 2025-10-14 09:11:54.044 2 DEBUG nova.scheduler.client.report [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:11:54 np0005486808 nova_compute[259627]: 2025-10-14 09:11:54.081 2 DEBUG oslo_concurrency.lockutils [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:11:54 np0005486808 nova_compute[259627]: 2025-10-14 09:11:54.083 2 DEBUG oslo_concurrency.lockutils [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:11:54 np0005486808 nova_compute[259627]: 2025-10-14 09:11:54.107 2 INFO nova.scheduler.client.report [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Deleted allocations for instance 62fe725d-b24a-477a-a275-06d2cd960aaf#033[00m
Oct 14 05:11:54 np0005486808 nova_compute[259627]: 2025-10-14 09:11:54.211 2 DEBUG oslo_concurrency.lockutils [None req-08fa8d85-acf4-418a-9393-76cd98f9e289 bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "62fe725d-b24a-477a-a275-06d2cd960aaf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.895s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:11:54 np0005486808 nova_compute[259627]: 2025-10-14 09:11:54.234 2 DEBUG oslo_concurrency.processutils [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:11:54 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:11:54 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2384852915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:11:54 np0005486808 nova_compute[259627]: 2025-10-14 09:11:54.781 2 DEBUG oslo_concurrency.processutils [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:11:54 np0005486808 nova_compute[259627]: 2025-10-14 09:11:54.787 2 DEBUG nova.compute.provider_tree [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:11:54 np0005486808 nova_compute[259627]: 2025-10-14 09:11:54.807 2 DEBUG nova.scheduler.client.report [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:11:54 np0005486808 nova_compute[259627]: 2025-10-14 09:11:54.832 2 DEBUG oslo_concurrency.lockutils [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:11:54 np0005486808 nova_compute[259627]: 2025-10-14 09:11:54.871 2 INFO nova.scheduler.client.report [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Deleted allocations for instance ed2aee1e-f632-4d7f-ae03-f5d9c41e9104#033[00m
Oct 14 05:11:54 np0005486808 nova_compute[259627]: 2025-10-14 09:11:54.950 2 DEBUG oslo_concurrency.lockutils [None req-2f5e2d2c-030d-4d4f-87f8-449e8e1cee2c bca591d7e37e4881bc4de44ee172b2f4 6d2f1337472a4407869e16f2271280ef - - default default] Lock "ed2aee1e-f632-4d7f-ae03-f5d9c41e9104" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.956s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:11:55 np0005486808 nova_compute[259627]: 2025-10-14 09:11:55.385 2 INFO nova.compute.manager [None req-ac9fd134-ff5c-444b-af36-42a88ab82c00 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Get console output#033[00m
Oct 14 05:11:55 np0005486808 nova_compute[259627]: 2025-10-14 09:11:55.391 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 14 05:11:55 np0005486808 nova_compute[259627]: 2025-10-14 09:11:55.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:11:55 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1724: 305 pgs: 305 active+clean; 268 MiB data, 746 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 4.3 MiB/s wr, 411 op/s
Oct 14 05:11:56 np0005486808 ovn_controller[152662]: 2025-10-14T09:11:56Z|00091|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2b:08:ac 10.100.0.4
Oct 14 05:11:56 np0005486808 ovn_controller[152662]: 2025-10-14T09:11:56Z|00092|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2b:08:ac 10.100.0.4
Oct 14 05:11:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:56.860 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:11:56 np0005486808 nova_compute[259627]: 2025-10-14 09:11:56.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:11:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:56.863 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 14 05:11:57 np0005486808 nova_compute[259627]: 2025-10-14 09:11:57.211 2 DEBUG nova.compute.manager [req-d96ec37f-46dd-4ca6-b3d4-60d10e0ca218 req-d6e116af-ffa4-4e87-bb17-a77d56c97ffe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Received event network-changed-106349ae-cfaa-43ec-9bda-16f36a6ac3d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:11:57 np0005486808 nova_compute[259627]: 2025-10-14 09:11:57.212 2 DEBUG nova.compute.manager [req-d96ec37f-46dd-4ca6-b3d4-60d10e0ca218 req-d6e116af-ffa4-4e87-bb17-a77d56c97ffe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Refreshing instance network info cache due to event network-changed-106349ae-cfaa-43ec-9bda-16f36a6ac3d6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:11:57 np0005486808 nova_compute[259627]: 2025-10-14 09:11:57.213 2 DEBUG oslo_concurrency.lockutils [req-d96ec37f-46dd-4ca6-b3d4-60d10e0ca218 req-d6e116af-ffa4-4e87-bb17-a77d56c97ffe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-290980d2-08b4-4029-a1c3-becd3457a410" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:11:57 np0005486808 nova_compute[259627]: 2025-10-14 09:11:57.213 2 DEBUG oslo_concurrency.lockutils [req-d96ec37f-46dd-4ca6-b3d4-60d10e0ca218 req-d6e116af-ffa4-4e87-bb17-a77d56c97ffe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-290980d2-08b4-4029-a1c3-becd3457a410" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:11:57 np0005486808 nova_compute[259627]: 2025-10-14 09:11:57.214 2 DEBUG nova.network.neutron [req-d96ec37f-46dd-4ca6-b3d4-60d10e0ca218 req-d6e116af-ffa4-4e87-bb17-a77d56c97ffe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Refreshing network info cache for port 106349ae-cfaa-43ec-9bda-16f36a6ac3d6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:11:57 np0005486808 nova_compute[259627]: 2025-10-14 09:11:57.305 2 DEBUG oslo_concurrency.lockutils [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "290980d2-08b4-4029-a1c3-becd3457a410" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:11:57 np0005486808 nova_compute[259627]: 2025-10-14 09:11:57.306 2 DEBUG oslo_concurrency.lockutils [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "290980d2-08b4-4029-a1c3-becd3457a410" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:11:57 np0005486808 nova_compute[259627]: 2025-10-14 09:11:57.307 2 DEBUG oslo_concurrency.lockutils [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "290980d2-08b4-4029-a1c3-becd3457a410-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:11:57 np0005486808 nova_compute[259627]: 2025-10-14 09:11:57.307 2 DEBUG oslo_concurrency.lockutils [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "290980d2-08b4-4029-a1c3-becd3457a410-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:11:57 np0005486808 nova_compute[259627]: 2025-10-14 09:11:57.308 2 DEBUG oslo_concurrency.lockutils [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "290980d2-08b4-4029-a1c3-becd3457a410-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:11:57 np0005486808 nova_compute[259627]: 2025-10-14 09:11:57.310 2 INFO nova.compute.manager [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Terminating instance#033[00m
Oct 14 05:11:57 np0005486808 nova_compute[259627]: 2025-10-14 09:11:57.311 2 DEBUG nova.compute.manager [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:11:57 np0005486808 kernel: tap106349ae-cf (unregistering): left promiscuous mode
Oct 14 05:11:57 np0005486808 NetworkManager[44885]: <info>  [1760433117.3711] device (tap106349ae-cf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:11:57 np0005486808 ovn_controller[152662]: 2025-10-14T09:11:57Z|00883|binding|INFO|Releasing lport 106349ae-cfaa-43ec-9bda-16f36a6ac3d6 from this chassis (sb_readonly=0)
Oct 14 05:11:57 np0005486808 nova_compute[259627]: 2025-10-14 09:11:57.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:11:57 np0005486808 ovn_controller[152662]: 2025-10-14T09:11:57Z|00884|binding|INFO|Setting lport 106349ae-cfaa-43ec-9bda-16f36a6ac3d6 down in Southbound
Oct 14 05:11:57 np0005486808 ovn_controller[152662]: 2025-10-14T09:11:57Z|00885|binding|INFO|Removing iface tap106349ae-cf ovn-installed in OVS
Oct 14 05:11:57 np0005486808 nova_compute[259627]: 2025-10-14 09:11:57.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:11:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:57.402 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:23:80 10.100.0.11'], port_security=['fa:16:3e:07:23:80 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '290980d2-08b4-4029-a1c3-becd3457a410', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b2389730-ae66-46f4-aea3-6a67311703e9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d24993a343a425dbddac7e32be0c86b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fef16ebb-8e3c-4cd9-b046-59c4109d4508', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=45135636-9d1b-4fd5-951f-3d4d3d97b1e9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=106349ae-cfaa-43ec-9bda-16f36a6ac3d6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:11:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:57.404 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 106349ae-cfaa-43ec-9bda-16f36a6ac3d6 in datapath b2389730-ae66-46f4-aea3-6a67311703e9 unbound from our chassis#033[00m
Oct 14 05:11:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:57.406 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b2389730-ae66-46f4-aea3-6a67311703e9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:11:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:57.407 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2473e52a-d4b5-4350-a0dd-4e824771fd10]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:11:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:57.407 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b2389730-ae66-46f4-aea3-6a67311703e9 namespace which is not needed anymore#033[00m
Oct 14 05:11:57 np0005486808 nova_compute[259627]: 2025-10-14 09:11:57.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:11:57 np0005486808 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d0000004f.scope: Deactivated successfully.
Oct 14 05:11:57 np0005486808 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d0000004f.scope: Consumed 13.312s CPU time.
Oct 14 05:11:57 np0005486808 systemd-machined[214636]: Machine qemu-99-instance-0000004f terminated.
Oct 14 05:11:57 np0005486808 neutron-haproxy-ovnmeta-b2389730-ae66-46f4-aea3-6a67311703e9[342575]: [NOTICE]   (342597) : haproxy version is 2.8.14-c23fe91
Oct 14 05:11:57 np0005486808 neutron-haproxy-ovnmeta-b2389730-ae66-46f4-aea3-6a67311703e9[342575]: [NOTICE]   (342597) : path to executable is /usr/sbin/haproxy
Oct 14 05:11:57 np0005486808 neutron-haproxy-ovnmeta-b2389730-ae66-46f4-aea3-6a67311703e9[342575]: [WARNING]  (342597) : Exiting Master process...
Oct 14 05:11:57 np0005486808 neutron-haproxy-ovnmeta-b2389730-ae66-46f4-aea3-6a67311703e9[342575]: [ALERT]    (342597) : Current worker (342599) exited with code 143 (Terminated)
Oct 14 05:11:57 np0005486808 neutron-haproxy-ovnmeta-b2389730-ae66-46f4-aea3-6a67311703e9[342575]: [WARNING]  (342597) : All workers exited. Exiting... (0)
Oct 14 05:11:57 np0005486808 systemd[1]: libpod-519b57c4f6e8e6329ae7dfadf3bdc5647959e8a378c5b633579c3a73a934e8dc.scope: Deactivated successfully.
Oct 14 05:11:57 np0005486808 podman[346105]: 2025-10-14 09:11:57.537610825 +0000 UTC m=+0.042945448 container died 519b57c4f6e8e6329ae7dfadf3bdc5647959e8a378c5b633579c3a73a934e8dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-b2389730-ae66-46f4-aea3-6a67311703e9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 14 05:11:57 np0005486808 nova_compute[259627]: 2025-10-14 09:11:57.557 2 INFO nova.virt.libvirt.driver [-] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Instance destroyed successfully.#033[00m
Oct 14 05:11:57 np0005486808 nova_compute[259627]: 2025-10-14 09:11:57.557 2 DEBUG nova.objects.instance [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'resources' on Instance uuid 290980d2-08b4-4029-a1c3-becd3457a410 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:11:57 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-519b57c4f6e8e6329ae7dfadf3bdc5647959e8a378c5b633579c3a73a934e8dc-userdata-shm.mount: Deactivated successfully.
Oct 14 05:11:57 np0005486808 systemd[1]: var-lib-containers-storage-overlay-f5d3769d648c5314a067d97a55b05a97e81509531f785e9c48153acfc6141c6f-merged.mount: Deactivated successfully.
Oct 14 05:11:57 np0005486808 nova_compute[259627]: 2025-10-14 09:11:57.574 2 DEBUG nova.virt.libvirt.vif [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:11:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2139849378',display_name='tempest-TestNetworkAdvancedServerOps-server-2139849378',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2139849378',id=79,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEhJsgZxd+OWpD7zaKfBBpwRrzG5y2svlcIl3JOB/vCmtEivnvjVbVGSOAYp3d5tHvQ3QI5rFGEYwlQUr4eteTREKUKNxmmu7QDjX9h1ezH3YG5N/CgGPytwUcQasRNPJg==',key_name='tempest-TestNetworkAdvancedServerOps-1442263360',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:11:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2d24993a343a425dbddac7e32be0c86b',ramdisk_id='',reservation_id='r-se9y79xx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-94788416',owner_user_name='tempest-TestNetworkAdvancedServerOps-94788416-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:11:51Z,user_data=None,user_id='e992bcb79c4946a8985e3df25eb216ca',uuid=290980d2-08b4-4029-a1c3-becd3457a410,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "106349ae-cfaa-43ec-9bda-16f36a6ac3d6", "address": "fa:16:3e:07:23:80", "network": {"id": "b2389730-ae66-46f4-aea3-6a67311703e9", "bridge": "br-int", "label": "tempest-network-smoke--924972702", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap106349ae-cf", "ovs_interfaceid": "106349ae-cfaa-43ec-9bda-16f36a6ac3d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:11:57 np0005486808 nova_compute[259627]: 2025-10-14 09:11:57.574 2 DEBUG nova.network.os_vif_util [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converting VIF {"id": "106349ae-cfaa-43ec-9bda-16f36a6ac3d6", "address": "fa:16:3e:07:23:80", "network": {"id": "b2389730-ae66-46f4-aea3-6a67311703e9", "bridge": "br-int", "label": "tempest-network-smoke--924972702", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap106349ae-cf", "ovs_interfaceid": "106349ae-cfaa-43ec-9bda-16f36a6ac3d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:11:57 np0005486808 nova_compute[259627]: 2025-10-14 09:11:57.575 2 DEBUG nova.network.os_vif_util [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:07:23:80,bridge_name='br-int',has_traffic_filtering=True,id=106349ae-cfaa-43ec-9bda-16f36a6ac3d6,network=Network(b2389730-ae66-46f4-aea3-6a67311703e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap106349ae-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:11:57 np0005486808 nova_compute[259627]: 2025-10-14 09:11:57.575 2 DEBUG os_vif [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:07:23:80,bridge_name='br-int',has_traffic_filtering=True,id=106349ae-cfaa-43ec-9bda-16f36a6ac3d6,network=Network(b2389730-ae66-46f4-aea3-6a67311703e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap106349ae-cf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:11:57 np0005486808 nova_compute[259627]: 2025-10-14 09:11:57.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:11:57 np0005486808 podman[346105]: 2025-10-14 09:11:57.577145839 +0000 UTC m=+0.082480472 container cleanup 519b57c4f6e8e6329ae7dfadf3bdc5647959e8a378c5b633579c3a73a934e8dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-b2389730-ae66-46f4-aea3-6a67311703e9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 05:11:57 np0005486808 nova_compute[259627]: 2025-10-14 09:11:57.577 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap106349ae-cf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:11:57 np0005486808 nova_compute[259627]: 2025-10-14 09:11:57.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:11:57 np0005486808 nova_compute[259627]: 2025-10-14 09:11:57.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:11:57 np0005486808 nova_compute[259627]: 2025-10-14 09:11:57.583 2 INFO os_vif [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:07:23:80,bridge_name='br-int',has_traffic_filtering=True,id=106349ae-cfaa-43ec-9bda-16f36a6ac3d6,network=Network(b2389730-ae66-46f4-aea3-6a67311703e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap106349ae-cf')#033[00m
Oct 14 05:11:57 np0005486808 systemd[1]: libpod-conmon-519b57c4f6e8e6329ae7dfadf3bdc5647959e8a378c5b633579c3a73a934e8dc.scope: Deactivated successfully.
Oct 14 05:11:57 np0005486808 podman[346150]: 2025-10-14 09:11:57.647974843 +0000 UTC m=+0.044411415 container remove 519b57c4f6e8e6329ae7dfadf3bdc5647959e8a378c5b633579c3a73a934e8dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-b2389730-ae66-46f4-aea3-6a67311703e9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009)
Oct 14 05:11:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:57.657 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[40886af5-ebb5-4698-b6eb-9bf8954cbe27]: (4, ('Tue Oct 14 09:11:57 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b2389730-ae66-46f4-aea3-6a67311703e9 (519b57c4f6e8e6329ae7dfadf3bdc5647959e8a378c5b633579c3a73a934e8dc)\n519b57c4f6e8e6329ae7dfadf3bdc5647959e8a378c5b633579c3a73a934e8dc\nTue Oct 14 09:11:57 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b2389730-ae66-46f4-aea3-6a67311703e9 (519b57c4f6e8e6329ae7dfadf3bdc5647959e8a378c5b633579c3a73a934e8dc)\n519b57c4f6e8e6329ae7dfadf3bdc5647959e8a378c5b633579c3a73a934e8dc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:11:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:57.659 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[89168fc5-f086-43de-a4d8-88e8a8b704e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:11:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:57.661 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb2389730-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:11:57 np0005486808 kernel: tapb2389730-a0: left promiscuous mode
Oct 14 05:11:57 np0005486808 nova_compute[259627]: 2025-10-14 09:11:57.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:11:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:57.673 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b9b3ebd7-93b8-4c84-b1d6-f204fd510bc1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:11:57 np0005486808 nova_compute[259627]: 2025-10-14 09:11:57.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:11:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:57.702 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ba4142d2-8d96-4d11-9f1c-a2e2637150c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:11:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:57.703 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[afd35977-7ac0-46ad-850a-c86e8f16637b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:11:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:57.720 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a8e20157-1633-432c-931b-3c088554d6a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686404, 'reachable_time': 28202, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346181, 'error': None, 'target': 'ovnmeta-b2389730-ae66-46f4-aea3-6a67311703e9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:11:57 np0005486808 systemd[1]: run-netns-ovnmeta\x2db2389730\x2dae66\x2d46f4\x2daea3\x2d6a67311703e9.mount: Deactivated successfully.
Oct 14 05:11:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:57.726 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b2389730-ae66-46f4-aea3-6a67311703e9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:11:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:11:57.726 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[7eda5965-7af6-40cc-a5e9-262831385fc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:11:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:11:57 np0005486808 podman[346180]: 2025-10-14 09:11:57.784156597 +0000 UTC m=+0.058366058 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent)
Oct 14 05:11:57 np0005486808 podman[346177]: 2025-10-14 09:11:57.850268845 +0000 UTC m=+0.135812425 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 14 05:11:57 np0005486808 nova_compute[259627]: 2025-10-14 09:11:57.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:11:57 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1725: 305 pgs: 305 active+clean; 268 MiB data, 746 MiB used, 59 GiB / 60 GiB avail; 291 KiB/s rd, 2.2 MiB/s wr, 130 op/s
Oct 14 05:11:58 np0005486808 nova_compute[259627]: 2025-10-14 09:11:58.008 2 INFO nova.virt.libvirt.driver [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Deleting instance files /var/lib/nova/instances/290980d2-08b4-4029-a1c3-becd3457a410_del#033[00m
Oct 14 05:11:58 np0005486808 nova_compute[259627]: 2025-10-14 09:11:58.008 2 INFO nova.virt.libvirt.driver [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Deletion of /var/lib/nova/instances/290980d2-08b4-4029-a1c3-becd3457a410_del complete#033[00m
Oct 14 05:11:58 np0005486808 nova_compute[259627]: 2025-10-14 09:11:58.064 2 INFO nova.compute.manager [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:11:58 np0005486808 nova_compute[259627]: 2025-10-14 09:11:58.065 2 DEBUG oslo.service.loopingcall [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:11:58 np0005486808 nova_compute[259627]: 2025-10-14 09:11:58.066 2 DEBUG nova.compute.manager [-] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:11:58 np0005486808 nova_compute[259627]: 2025-10-14 09:11:58.066 2 DEBUG nova.network.neutron [-] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:11:59 np0005486808 ovn_controller[152662]: 2025-10-14T09:11:59Z|00886|binding|INFO|Releasing lport 970f2645-7ec3-4b7f-8527-871800c728d8 from this chassis (sb_readonly=0)
Oct 14 05:11:59 np0005486808 nova_compute[259627]: 2025-10-14 09:11:59.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:11:59 np0005486808 ovn_controller[152662]: 2025-10-14T09:11:59Z|00887|binding|INFO|Releasing lport 970f2645-7ec3-4b7f-8527-871800c728d8 from this chassis (sb_readonly=0)
Oct 14 05:11:59 np0005486808 nova_compute[259627]: 2025-10-14 09:11:59.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:11:59 np0005486808 nova_compute[259627]: 2025-10-14 09:11:59.428 2 DEBUG nova.network.neutron [-] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:11:59 np0005486808 nova_compute[259627]: 2025-10-14 09:11:59.465 2 INFO nova.compute.manager [-] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Took 1.40 seconds to deallocate network for instance.#033[00m
Oct 14 05:11:59 np0005486808 nova_compute[259627]: 2025-10-14 09:11:59.516 2 DEBUG nova.compute.manager [req-7f99e947-9e06-4f64-8cca-f84d49c8902e req-04e98afc-848d-4a67-807c-6305af4cbb54 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Received event network-vif-deleted-106349ae-cfaa-43ec-9bda-16f36a6ac3d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:11:59 np0005486808 nova_compute[259627]: 2025-10-14 09:11:59.536 2 DEBUG oslo_concurrency.lockutils [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:11:59 np0005486808 nova_compute[259627]: 2025-10-14 09:11:59.536 2 DEBUG oslo_concurrency.lockutils [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:11:59 np0005486808 nova_compute[259627]: 2025-10-14 09:11:59.646 2 DEBUG oslo_concurrency.processutils [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:11:59 np0005486808 nova_compute[259627]: 2025-10-14 09:11:59.714 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433104.7121098, ab77dbf7-4458-4b16-a2e7-ed73be047838 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:11:59 np0005486808 nova_compute[259627]: 2025-10-14 09:11:59.714 2 INFO nova.compute.manager [-] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:11:59 np0005486808 nova_compute[259627]: 2025-10-14 09:11:59.735 2 DEBUG nova.compute.manager [None req-6c9acea0-f74e-4e3a-b75d-0fcd07dd8726 - - - - - -] [instance: ab77dbf7-4458-4b16-a2e7-ed73be047838] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:11:59 np0005486808 nova_compute[259627]: 2025-10-14 09:11:59.777 2 DEBUG nova.network.neutron [req-d96ec37f-46dd-4ca6-b3d4-60d10e0ca218 req-d6e116af-ffa4-4e87-bb17-a77d56c97ffe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Updated VIF entry in instance network info cache for port 106349ae-cfaa-43ec-9bda-16f36a6ac3d6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:11:59 np0005486808 nova_compute[259627]: 2025-10-14 09:11:59.778 2 DEBUG nova.network.neutron [req-d96ec37f-46dd-4ca6-b3d4-60d10e0ca218 req-d6e116af-ffa4-4e87-bb17-a77d56c97ffe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Updating instance_info_cache with network_info: [{"id": "106349ae-cfaa-43ec-9bda-16f36a6ac3d6", "address": "fa:16:3e:07:23:80", "network": {"id": "b2389730-ae66-46f4-aea3-6a67311703e9", "bridge": "br-int", "label": "tempest-network-smoke--924972702", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap106349ae-cf", "ovs_interfaceid": "106349ae-cfaa-43ec-9bda-16f36a6ac3d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:11:59 np0005486808 nova_compute[259627]: 2025-10-14 09:11:59.796 2 DEBUG oslo_concurrency.lockutils [req-d96ec37f-46dd-4ca6-b3d4-60d10e0ca218 req-d6e116af-ffa4-4e87-bb17-a77d56c97ffe 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-290980d2-08b4-4029-a1c3-becd3457a410" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:11:59 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1726: 305 pgs: 305 active+clean; 268 MiB data, 746 MiB used, 59 GiB / 60 GiB avail; 291 KiB/s rd, 2.2 MiB/s wr, 130 op/s
Oct 14 05:12:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:12:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3860183317' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:12:00 np0005486808 nova_compute[259627]: 2025-10-14 09:12:00.107 2 DEBUG oslo_concurrency.processutils [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:00 np0005486808 nova_compute[259627]: 2025-10-14 09:12:00.116 2 DEBUG nova.compute.provider_tree [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:12:00 np0005486808 nova_compute[259627]: 2025-10-14 09:12:00.134 2 DEBUG nova.scheduler.client.report [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:12:00 np0005486808 nova_compute[259627]: 2025-10-14 09:12:00.169 2 DEBUG oslo_concurrency.lockutils [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:00 np0005486808 nova_compute[259627]: 2025-10-14 09:12:00.205 2 INFO nova.scheduler.client.report [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Deleted allocations for instance 290980d2-08b4-4029-a1c3-becd3457a410#033[00m
Oct 14 05:12:00 np0005486808 nova_compute[259627]: 2025-10-14 09:12:00.417 2 DEBUG oslo_concurrency.lockutils [None req-4262b288-e1ea-42c7-9345-a0ea0f228b27 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "290980d2-08b4-4029-a1c3-becd3457a410" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:01 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1727: 305 pgs: 305 active+clean; 200 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 404 KiB/s rd, 2.2 MiB/s wr, 178 op/s
Oct 14 05:12:02 np0005486808 nova_compute[259627]: 2025-10-14 09:12:02.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:12:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:12:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:12:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:12:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:12:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:12:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:12:02 np0005486808 nova_compute[259627]: 2025-10-14 09:12:02.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:03 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1728: 305 pgs: 305 active+clean; 200 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 248 KiB/s rd, 2.1 MiB/s wr, 139 op/s
Oct 14 05:12:03 np0005486808 nova_compute[259627]: 2025-10-14 09:12:03.982 2 DEBUG nova.virt.libvirt.driver [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct 14 05:12:04 np0005486808 nova_compute[259627]: 2025-10-14 09:12:04.874 2 DEBUG oslo_concurrency.lockutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "2534f8b9-e832-4b78-ada4-e551429bdc75" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:04 np0005486808 nova_compute[259627]: 2025-10-14 09:12:04.875 2 DEBUG oslo_concurrency.lockutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:04 np0005486808 nova_compute[259627]: 2025-10-14 09:12:04.909 2 DEBUG nova.compute.manager [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:12:04 np0005486808 nova_compute[259627]: 2025-10-14 09:12:04.993 2 DEBUG oslo_concurrency.lockutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:04 np0005486808 nova_compute[259627]: 2025-10-14 09:12:04.994 2 DEBUG oslo_concurrency.lockutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:05 np0005486808 nova_compute[259627]: 2025-10-14 09:12:05.002 2 DEBUG nova.virt.hardware [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:12:05 np0005486808 nova_compute[259627]: 2025-10-14 09:12:05.002 2 INFO nova.compute.claims [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:12:05 np0005486808 nova_compute[259627]: 2025-10-14 09:12:05.152 2 DEBUG oslo_concurrency.processutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:05 np0005486808 nova_compute[259627]: 2025-10-14 09:12:05.239 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433110.2384858, ed2aee1e-f632-4d7f-ae03-f5d9c41e9104 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:12:05 np0005486808 nova_compute[259627]: 2025-10-14 09:12:05.240 2 INFO nova.compute.manager [-] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:12:05 np0005486808 nova_compute[259627]: 2025-10-14 09:12:05.267 2 DEBUG nova.compute.manager [None req-01c83d0d-08f1-4ac7-8f95-0ce1abda8691 - - - - - -] [instance: ed2aee1e-f632-4d7f-ae03-f5d9c41e9104] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:12:05 np0005486808 nova_compute[259627]: 2025-10-14 09:12:05.566 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433110.565926, 62fe725d-b24a-477a-a275-06d2cd960aaf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:12:05 np0005486808 nova_compute[259627]: 2025-10-14 09:12:05.567 2 INFO nova.compute.manager [-] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:12:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:12:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/712468327' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:12:05 np0005486808 nova_compute[259627]: 2025-10-14 09:12:05.586 2 DEBUG oslo_concurrency.processutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:05 np0005486808 nova_compute[259627]: 2025-10-14 09:12:05.593 2 DEBUG nova.compute.provider_tree [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:12:05 np0005486808 nova_compute[259627]: 2025-10-14 09:12:05.697 2 DEBUG nova.compute.manager [None req-45e68e01-a61f-4783-ba94-7fa2bf0ee9df - - - - - -] [instance: 62fe725d-b24a-477a-a275-06d2cd960aaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:12:05 np0005486808 nova_compute[259627]: 2025-10-14 09:12:05.700 2 DEBUG nova.scheduler.client.report [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:12:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:12:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1078909986' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:12:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:12:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1078909986' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:12:05 np0005486808 nova_compute[259627]: 2025-10-14 09:12:05.726 2 DEBUG oslo_concurrency.lockutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:05 np0005486808 nova_compute[259627]: 2025-10-14 09:12:05.727 2 DEBUG nova.compute.manager [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:12:05 np0005486808 nova_compute[259627]: 2025-10-14 09:12:05.772 2 DEBUG nova.compute.manager [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:12:05 np0005486808 nova_compute[259627]: 2025-10-14 09:12:05.772 2 DEBUG nova.network.neutron [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:12:05 np0005486808 nova_compute[259627]: 2025-10-14 09:12:05.793 2 INFO nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:12:05 np0005486808 nova_compute[259627]: 2025-10-14 09:12:05.812 2 DEBUG nova.compute.manager [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:12:05 np0005486808 nova_compute[259627]: 2025-10-14 09:12:05.891 2 DEBUG nova.compute.manager [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:12:05 np0005486808 nova_compute[259627]: 2025-10-14 09:12:05.892 2 DEBUG nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:12:05 np0005486808 nova_compute[259627]: 2025-10-14 09:12:05.893 2 INFO nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Creating image(s)#033[00m
Oct 14 05:12:05 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1729: 305 pgs: 305 active+clean; 200 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 249 KiB/s rd, 2.2 MiB/s wr, 142 op/s
Oct 14 05:12:05 np0005486808 nova_compute[259627]: 2025-10-14 09:12:05.914 2 DEBUG nova.storage.rbd_utils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] rbd image 2534f8b9-e832-4b78-ada4-e551429bdc75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:05 np0005486808 nova_compute[259627]: 2025-10-14 09:12:05.936 2 DEBUG nova.storage.rbd_utils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] rbd image 2534f8b9-e832-4b78-ada4-e551429bdc75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:05 np0005486808 nova_compute[259627]: 2025-10-14 09:12:05.958 2 DEBUG nova.storage.rbd_utils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] rbd image 2534f8b9-e832-4b78-ada4-e551429bdc75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:05 np0005486808 nova_compute[259627]: 2025-10-14 09:12:05.964 2 DEBUG oslo_concurrency.processutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:06 np0005486808 nova_compute[259627]: 2025-10-14 09:12:06.061 2 DEBUG oslo_concurrency.processutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:06 np0005486808 nova_compute[259627]: 2025-10-14 09:12:06.062 2 DEBUG oslo_concurrency.lockutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:06 np0005486808 nova_compute[259627]: 2025-10-14 09:12:06.062 2 DEBUG oslo_concurrency.lockutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:06 np0005486808 nova_compute[259627]: 2025-10-14 09:12:06.063 2 DEBUG oslo_concurrency.lockutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:06 np0005486808 nova_compute[259627]: 2025-10-14 09:12:06.083 2 DEBUG nova.storage.rbd_utils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] rbd image 2534f8b9-e832-4b78-ada4-e551429bdc75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:06 np0005486808 nova_compute[259627]: 2025-10-14 09:12:06.086 2 DEBUG oslo_concurrency.processutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 2534f8b9-e832-4b78-ada4-e551429bdc75_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:06 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Oct 14 05:12:06 np0005486808 kernel: tap7ce99440-fa (unregistering): left promiscuous mode
Oct 14 05:12:06 np0005486808 NetworkManager[44885]: <info>  [1760433126.2682] device (tap7ce99440-fa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:12:06 np0005486808 nova_compute[259627]: 2025-10-14 09:12:06.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:06 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:06Z|00888|binding|INFO|Releasing lport 7ce99440-fa49-4876-bb38-fce631d40400 from this chassis (sb_readonly=0)
Oct 14 05:12:06 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:06Z|00889|binding|INFO|Setting lport 7ce99440-fa49-4876-bb38-fce631d40400 down in Southbound
Oct 14 05:12:06 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:06Z|00890|binding|INFO|Removing iface tap7ce99440-fa ovn-installed in OVS
Oct 14 05:12:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:06.289 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:08:ac 10.100.0.4'], port_security=['fa:16:3e:2b:08:ac 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1141f79e-2e47-40f1-91b0-275a9fac765c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15e6e95c-6cd2-4631-98f6-9ed276458c39', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f10ae705d9a34608a922683282b952b5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '908d59ff-8d75-4488-a358-b4621923e921', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d2a29a9f-f981-430b-8a8f-ff453e5ae1a6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=7ce99440-fa49-4876-bb38-fce631d40400) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:12:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:06.291 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 7ce99440-fa49-4876-bb38-fce631d40400 in datapath 15e6e95c-6cd2-4631-98f6-9ed276458c39 unbound from our chassis#033[00m
Oct 14 05:12:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:06.292 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 15e6e95c-6cd2-4631-98f6-9ed276458c39#033[00m
Oct 14 05:12:06 np0005486808 nova_compute[259627]: 2025-10-14 09:12:06.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:06.315 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f5399713-90a2-4c73-9e06-a2f97a7698cf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:06.348 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[0992ef2d-c3e8-4e63-bd81-9ba7ce7a03c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:06.351 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[cd071236-e3ce-490a-ba9c-3623be67e13b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:06 np0005486808 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d00000052.scope: Deactivated successfully.
Oct 14 05:12:06 np0005486808 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d00000052.scope: Consumed 16.797s CPU time.
Oct 14 05:12:06 np0005486808 systemd-machined[214636]: Machine qemu-102-instance-00000052 terminated.
Oct 14 05:12:06 np0005486808 nova_compute[259627]: 2025-10-14 09:12:06.370 2 DEBUG oslo_concurrency.processutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 2534f8b9-e832-4b78-ada4-e551429bdc75_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.284s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:06.385 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[6c89295c-7333-4574-bfc9-ea3a3bc04014]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:06.408 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[913fde3e-d92d-4a95-96cd-be0530ffb2ff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap15e6e95c-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:6b:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 8, 'rx_bytes': 1000, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 8, 'rx_bytes': 1000, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 243], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686914, 'reachable_time': 20198, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346393, 'error': None, 'target': 'ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:06.426 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0bb1f9aa-80d5-44c0-91bc-e45eb24dc5d2]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap15e6e95c-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 686938, 'tstamp': 686938}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346411, 'error': None, 'target': 'ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap15e6e95c-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 686941, 'tstamp': 686941}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346411, 'error': None, 'target': 'ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:06 np0005486808 nova_compute[259627]: 2025-10-14 09:12:06.428 2 DEBUG nova.policy [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '92e59e145f6942b78d0ffbebc4d89e76', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '517aafb84156407c8672042097e3ef4f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:12:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:06.428 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15e6e95c-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:06 np0005486808 nova_compute[259627]: 2025-10-14 09:12:06.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:06.435 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap15e6e95c-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:06.436 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:12:06 np0005486808 nova_compute[259627]: 2025-10-14 09:12:06.436 2 DEBUG nova.storage.rbd_utils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] resizing rbd image 2534f8b9-e832-4b78-ada4-e551429bdc75_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:12:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:06.436 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap15e6e95c-60, col_values=(('external_ids', {'iface-id': '970f2645-7ec3-4b7f-8527-871800c728d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:06.437 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:12:06 np0005486808 nova_compute[259627]: 2025-10-14 09:12:06.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:06 np0005486808 nova_compute[259627]: 2025-10-14 09:12:06.530 2 DEBUG nova.objects.instance [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lazy-loading 'migration_context' on Instance uuid 2534f8b9-e832-4b78-ada4-e551429bdc75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:12:06 np0005486808 nova_compute[259627]: 2025-10-14 09:12:06.546 2 DEBUG nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:12:06 np0005486808 nova_compute[259627]: 2025-10-14 09:12:06.547 2 DEBUG nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Ensure instance console log exists: /var/lib/nova/instances/2534f8b9-e832-4b78-ada4-e551429bdc75/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:12:06 np0005486808 nova_compute[259627]: 2025-10-14 09:12:06.547 2 DEBUG oslo_concurrency.lockutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:06 np0005486808 nova_compute[259627]: 2025-10-14 09:12:06.548 2 DEBUG oslo_concurrency.lockutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:06 np0005486808 nova_compute[259627]: 2025-10-14 09:12:06.548 2 DEBUG oslo_concurrency.lockutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:06 np0005486808 nova_compute[259627]: 2025-10-14 09:12:06.852 2 DEBUG nova.compute.manager [req-0aab396c-b5ab-463a-8dbb-35a015f7e8a6 req-3843d1d3-9975-41a6-96c7-ebf2326d6e18 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Received event network-vif-unplugged-7ce99440-fa49-4876-bb38-fce631d40400 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:12:06 np0005486808 nova_compute[259627]: 2025-10-14 09:12:06.853 2 DEBUG oslo_concurrency.lockutils [req-0aab396c-b5ab-463a-8dbb-35a015f7e8a6 req-3843d1d3-9975-41a6-96c7-ebf2326d6e18 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:06 np0005486808 nova_compute[259627]: 2025-10-14 09:12:06.853 2 DEBUG oslo_concurrency.lockutils [req-0aab396c-b5ab-463a-8dbb-35a015f7e8a6 req-3843d1d3-9975-41a6-96c7-ebf2326d6e18 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:06 np0005486808 nova_compute[259627]: 2025-10-14 09:12:06.854 2 DEBUG oslo_concurrency.lockutils [req-0aab396c-b5ab-463a-8dbb-35a015f7e8a6 req-3843d1d3-9975-41a6-96c7-ebf2326d6e18 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:06 np0005486808 nova_compute[259627]: 2025-10-14 09:12:06.854 2 DEBUG nova.compute.manager [req-0aab396c-b5ab-463a-8dbb-35a015f7e8a6 req-3843d1d3-9975-41a6-96c7-ebf2326d6e18 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] No waiting events found dispatching network-vif-unplugged-7ce99440-fa49-4876-bb38-fce631d40400 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:12:06 np0005486808 nova_compute[259627]: 2025-10-14 09:12:06.854 2 WARNING nova.compute.manager [req-0aab396c-b5ab-463a-8dbb-35a015f7e8a6 req-3843d1d3-9975-41a6-96c7-ebf2326d6e18 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Received unexpected event network-vif-unplugged-7ce99440-fa49-4876-bb38-fce631d40400 for instance with vm_state active and task_state rescuing.#033[00m
Oct 14 05:12:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:06.864 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:07 np0005486808 nova_compute[259627]: 2025-10-14 09:12:07.018 2 INFO nova.virt.libvirt.driver [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Instance shutdown successfully after 24 seconds.#033[00m
Oct 14 05:12:07 np0005486808 nova_compute[259627]: 2025-10-14 09:12:07.025 2 INFO nova.virt.libvirt.driver [-] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Instance destroyed successfully.#033[00m
Oct 14 05:12:07 np0005486808 nova_compute[259627]: 2025-10-14 09:12:07.026 2 DEBUG nova.objects.instance [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lazy-loading 'numa_topology' on Instance uuid 1141f79e-2e47-40f1-91b0-275a9fac765c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:12:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:07.028 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:07.029 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:07.029 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:07 np0005486808 nova_compute[259627]: 2025-10-14 09:12:07.049 2 INFO nova.virt.libvirt.driver [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Attempting rescue#033[00m
Oct 14 05:12:07 np0005486808 nova_compute[259627]: 2025-10-14 09:12:07.051 2 DEBUG nova.virt.libvirt.driver [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Oct 14 05:12:07 np0005486808 nova_compute[259627]: 2025-10-14 09:12:07.055 2 DEBUG nova.virt.libvirt.driver [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Oct 14 05:12:07 np0005486808 nova_compute[259627]: 2025-10-14 09:12:07.056 2 INFO nova.virt.libvirt.driver [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Creating image(s)#033[00m
Oct 14 05:12:07 np0005486808 nova_compute[259627]: 2025-10-14 09:12:07.101 2 DEBUG nova.storage.rbd_utils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] rbd image 1141f79e-2e47-40f1-91b0-275a9fac765c_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:07 np0005486808 nova_compute[259627]: 2025-10-14 09:12:07.105 2 DEBUG nova.objects.instance [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 1141f79e-2e47-40f1-91b0-275a9fac765c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:12:07 np0005486808 nova_compute[259627]: 2025-10-14 09:12:07.145 2 DEBUG nova.storage.rbd_utils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] rbd image 1141f79e-2e47-40f1-91b0-275a9fac765c_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:07 np0005486808 nova_compute[259627]: 2025-10-14 09:12:07.164 2 DEBUG nova.storage.rbd_utils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] rbd image 1141f79e-2e47-40f1-91b0-275a9fac765c_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:07 np0005486808 nova_compute[259627]: 2025-10-14 09:12:07.168 2 DEBUG oslo_concurrency.processutils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:07 np0005486808 nova_compute[259627]: 2025-10-14 09:12:07.225 2 DEBUG nova.network.neutron [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Successfully created port: 4f827284-f357-43c5-bdde-c69731b52914 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:12:07 np0005486808 nova_compute[259627]: 2025-10-14 09:12:07.240 2 DEBUG oslo_concurrency.processutils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:07 np0005486808 nova_compute[259627]: 2025-10-14 09:12:07.241 2 DEBUG oslo_concurrency.lockutils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:07 np0005486808 nova_compute[259627]: 2025-10-14 09:12:07.241 2 DEBUG oslo_concurrency.lockutils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:07 np0005486808 nova_compute[259627]: 2025-10-14 09:12:07.241 2 DEBUG oslo_concurrency.lockutils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:07 np0005486808 nova_compute[259627]: 2025-10-14 09:12:07.260 2 DEBUG nova.storage.rbd_utils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] rbd image 1141f79e-2e47-40f1-91b0-275a9fac765c_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:07 np0005486808 nova_compute[259627]: 2025-10-14 09:12:07.263 2 DEBUG oslo_concurrency.processutils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 1141f79e-2e47-40f1-91b0-275a9fac765c_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:07 np0005486808 nova_compute[259627]: 2025-10-14 09:12:07.546 2 DEBUG oslo_concurrency.processutils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 1141f79e-2e47-40f1-91b0-275a9fac765c_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.282s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:07 np0005486808 nova_compute[259627]: 2025-10-14 09:12:07.547 2 DEBUG nova.objects.instance [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lazy-loading 'migration_context' on Instance uuid 1141f79e-2e47-40f1-91b0-275a9fac765c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:12:07 np0005486808 nova_compute[259627]: 2025-10-14 09:12:07.560 2 DEBUG nova.virt.libvirt.driver [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:12:07 np0005486808 nova_compute[259627]: 2025-10-14 09:12:07.561 2 DEBUG nova.virt.libvirt.driver [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Start _get_guest_xml network_info=[{"id": "7ce99440-fa49-4876-bb38-fce631d40400", "address": "fa:16:3e:2b:08:ac", "network": {"id": "15e6e95c-6cd2-4631-98f6-9ed276458c39", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1009216431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1009216431-network", "vif_mac": "fa:16:3e:2b:08:ac"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f10ae705d9a34608a922683282b952b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ce99440-fa", "ovs_interfaceid": "7ce99440-fa49-4876-bb38-fce631d40400", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:12:07 np0005486808 nova_compute[259627]: 2025-10-14 09:12:07.561 2 DEBUG nova.objects.instance [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lazy-loading 'resources' on Instance uuid 1141f79e-2e47-40f1-91b0-275a9fac765c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:12:07 np0005486808 nova_compute[259627]: 2025-10-14 09:12:07.578 2 WARNING nova.virt.libvirt.driver [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:12:07 np0005486808 nova_compute[259627]: 2025-10-14 09:12:07.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:07 np0005486808 nova_compute[259627]: 2025-10-14 09:12:07.584 2 DEBUG nova.virt.libvirt.host [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:12:07 np0005486808 nova_compute[259627]: 2025-10-14 09:12:07.585 2 DEBUG nova.virt.libvirt.host [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:12:07 np0005486808 nova_compute[259627]: 2025-10-14 09:12:07.587 2 DEBUG nova.virt.libvirt.host [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:12:07 np0005486808 nova_compute[259627]: 2025-10-14 09:12:07.588 2 DEBUG nova.virt.libvirt.host [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:12:07 np0005486808 nova_compute[259627]: 2025-10-14 09:12:07.588 2 DEBUG nova.virt.libvirt.driver [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:12:07 np0005486808 nova_compute[259627]: 2025-10-14 09:12:07.588 2 DEBUG nova.virt.hardware [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:12:07 np0005486808 nova_compute[259627]: 2025-10-14 09:12:07.589 2 DEBUG nova.virt.hardware [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:12:07 np0005486808 nova_compute[259627]: 2025-10-14 09:12:07.589 2 DEBUG nova.virt.hardware [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:12:07 np0005486808 nova_compute[259627]: 2025-10-14 09:12:07.590 2 DEBUG nova.virt.hardware [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:12:07 np0005486808 nova_compute[259627]: 2025-10-14 09:12:07.590 2 DEBUG nova.virt.hardware [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:12:07 np0005486808 nova_compute[259627]: 2025-10-14 09:12:07.590 2 DEBUG nova.virt.hardware [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:12:07 np0005486808 nova_compute[259627]: 2025-10-14 09:12:07.590 2 DEBUG nova.virt.hardware [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:12:07 np0005486808 nova_compute[259627]: 2025-10-14 09:12:07.590 2 DEBUG nova.virt.hardware [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:12:07 np0005486808 nova_compute[259627]: 2025-10-14 09:12:07.591 2 DEBUG nova.virt.hardware [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:12:07 np0005486808 nova_compute[259627]: 2025-10-14 09:12:07.591 2 DEBUG nova.virt.hardware [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:12:07 np0005486808 nova_compute[259627]: 2025-10-14 09:12:07.591 2 DEBUG nova.virt.hardware [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:12:07 np0005486808 nova_compute[259627]: 2025-10-14 09:12:07.591 2 DEBUG nova.objects.instance [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 1141f79e-2e47-40f1-91b0-275a9fac765c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:12:07 np0005486808 nova_compute[259627]: 2025-10-14 09:12:07.606 2 DEBUG oslo_concurrency.processutils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:12:07 np0005486808 nova_compute[259627]: 2025-10-14 09:12:07.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:07 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1730: 305 pgs: 305 active+clean; 200 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 114 KiB/s rd, 108 KiB/s wr, 50 op/s
Oct 14 05:12:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:12:08 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1344636585' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:12:08 np0005486808 nova_compute[259627]: 2025-10-14 09:12:08.162 2 DEBUG oslo_concurrency.processutils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:08 np0005486808 nova_compute[259627]: 2025-10-14 09:12:08.164 2 DEBUG oslo_concurrency.processutils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:08 np0005486808 nova_compute[259627]: 2025-10-14 09:12:08.219 2 DEBUG nova.network.neutron [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Successfully updated port: 4f827284-f357-43c5-bdde-c69731b52914 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:12:08 np0005486808 nova_compute[259627]: 2025-10-14 09:12:08.240 2 DEBUG oslo_concurrency.lockutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:12:08 np0005486808 nova_compute[259627]: 2025-10-14 09:12:08.240 2 DEBUG oslo_concurrency.lockutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquired lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:12:08 np0005486808 nova_compute[259627]: 2025-10-14 09:12:08.240 2 DEBUG nova.network.neutron [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:12:08 np0005486808 nova_compute[259627]: 2025-10-14 09:12:08.516 2 DEBUG nova.network.neutron [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:12:08 np0005486808 nova_compute[259627]: 2025-10-14 09:12:08.560 2 DEBUG nova.compute.manager [req-e18c3c15-496b-4e24-a4ce-1c34aeceabf1 req-8541c6a7-68ab-405b-a9ec-8742a1f222e9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received event network-changed-4f827284-f357-43c5-bdde-c69731b52914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:12:08 np0005486808 nova_compute[259627]: 2025-10-14 09:12:08.561 2 DEBUG nova.compute.manager [req-e18c3c15-496b-4e24-a4ce-1c34aeceabf1 req-8541c6a7-68ab-405b-a9ec-8742a1f222e9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Refreshing instance network info cache due to event network-changed-4f827284-f357-43c5-bdde-c69731b52914. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:12:08 np0005486808 nova_compute[259627]: 2025-10-14 09:12:08.561 2 DEBUG oslo_concurrency.lockutils [req-e18c3c15-496b-4e24-a4ce-1c34aeceabf1 req-8541c6a7-68ab-405b-a9ec-8742a1f222e9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:12:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:12:08 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2272901987' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:12:08 np0005486808 nova_compute[259627]: 2025-10-14 09:12:08.693 2 DEBUG oslo_concurrency.processutils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:08 np0005486808 nova_compute[259627]: 2025-10-14 09:12:08.695 2 DEBUG oslo_concurrency.processutils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:08 np0005486808 nova_compute[259627]: 2025-10-14 09:12:08.955 2 DEBUG nova.compute.manager [req-3887c8d7-41e5-4939-8f46-e92d0e411cf3 req-c6290408-fcf9-4465-8aa0-7b647c452da6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Received event network-vif-plugged-7ce99440-fa49-4876-bb38-fce631d40400 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:12:08 np0005486808 nova_compute[259627]: 2025-10-14 09:12:08.956 2 DEBUG oslo_concurrency.lockutils [req-3887c8d7-41e5-4939-8f46-e92d0e411cf3 req-c6290408-fcf9-4465-8aa0-7b647c452da6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:08 np0005486808 nova_compute[259627]: 2025-10-14 09:12:08.956 2 DEBUG oslo_concurrency.lockutils [req-3887c8d7-41e5-4939-8f46-e92d0e411cf3 req-c6290408-fcf9-4465-8aa0-7b647c452da6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:08 np0005486808 nova_compute[259627]: 2025-10-14 09:12:08.956 2 DEBUG oslo_concurrency.lockutils [req-3887c8d7-41e5-4939-8f46-e92d0e411cf3 req-c6290408-fcf9-4465-8aa0-7b647c452da6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:08 np0005486808 nova_compute[259627]: 2025-10-14 09:12:08.956 2 DEBUG nova.compute.manager [req-3887c8d7-41e5-4939-8f46-e92d0e411cf3 req-c6290408-fcf9-4465-8aa0-7b647c452da6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] No waiting events found dispatching network-vif-plugged-7ce99440-fa49-4876-bb38-fce631d40400 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:12:08 np0005486808 nova_compute[259627]: 2025-10-14 09:12:08.957 2 WARNING nova.compute.manager [req-3887c8d7-41e5-4939-8f46-e92d0e411cf3 req-c6290408-fcf9-4465-8aa0-7b647c452da6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Received unexpected event network-vif-plugged-7ce99440-fa49-4876-bb38-fce631d40400 for instance with vm_state active and task_state rescuing.#033[00m
Oct 14 05:12:09 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:12:09 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2962736074' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:12:09 np0005486808 nova_compute[259627]: 2025-10-14 09:12:09.130 2 DEBUG oslo_concurrency.processutils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:09 np0005486808 nova_compute[259627]: 2025-10-14 09:12:09.131 2 DEBUG nova.virt.libvirt.vif [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:11:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-624290930',display_name='tempest-ServerRescueNegativeTestJSON-server-624290930',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-624290930',id=82,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:11:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f10ae705d9a34608a922683282b952b5',ramdisk_id='',reservation_id='r-njr7lxft',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1031174086',owner_user_name='tempest-ServerRescueNegativeTestJSON-1031174086-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:11:40Z,user_data=None,user_id='aa1425f7fdfc4218bdabfe2458cd1c60',uuid=1141f79e-2e47-40f1-91b0-275a9fac765c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7ce99440-fa49-4876-bb38-fce631d40400", "address": "fa:16:3e:2b:08:ac", "network": {"id": "15e6e95c-6cd2-4631-98f6-9ed276458c39", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1009216431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1009216431-network", "vif_mac": "fa:16:3e:2b:08:ac"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f10ae705d9a34608a922683282b952b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ce99440-fa", "ovs_interfaceid": "7ce99440-fa49-4876-bb38-fce631d40400", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:12:09 np0005486808 nova_compute[259627]: 2025-10-14 09:12:09.131 2 DEBUG nova.network.os_vif_util [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Converting VIF {"id": "7ce99440-fa49-4876-bb38-fce631d40400", "address": "fa:16:3e:2b:08:ac", "network": {"id": "15e6e95c-6cd2-4631-98f6-9ed276458c39", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1009216431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1009216431-network", "vif_mac": "fa:16:3e:2b:08:ac"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f10ae705d9a34608a922683282b952b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ce99440-fa", "ovs_interfaceid": "7ce99440-fa49-4876-bb38-fce631d40400", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:12:09 np0005486808 nova_compute[259627]: 2025-10-14 09:12:09.132 2 DEBUG nova.network.os_vif_util [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2b:08:ac,bridge_name='br-int',has_traffic_filtering=True,id=7ce99440-fa49-4876-bb38-fce631d40400,network=Network(15e6e95c-6cd2-4631-98f6-9ed276458c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7ce99440-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:12:09 np0005486808 nova_compute[259627]: 2025-10-14 09:12:09.133 2 DEBUG nova.objects.instance [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1141f79e-2e47-40f1-91b0-275a9fac765c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:12:09 np0005486808 nova_compute[259627]: 2025-10-14 09:12:09.151 2 DEBUG nova.virt.libvirt.driver [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:12:09 np0005486808 nova_compute[259627]:  <uuid>1141f79e-2e47-40f1-91b0-275a9fac765c</uuid>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:  <name>instance-00000052</name>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:12:09 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:      <nova:name>tempest-ServerRescueNegativeTestJSON-server-624290930</nova:name>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:12:07</nova:creationTime>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:12:09 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:        <nova:user uuid="aa1425f7fdfc4218bdabfe2458cd1c60">tempest-ServerRescueNegativeTestJSON-1031174086-project-member</nova:user>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:        <nova:project uuid="f10ae705d9a34608a922683282b952b5">tempest-ServerRescueNegativeTestJSON-1031174086</nova:project>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:        <nova:port uuid="7ce99440-fa49-4876-bb38-fce631d40400">
Oct 14 05:12:09 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:      <entry name="serial">1141f79e-2e47-40f1-91b0-275a9fac765c</entry>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:      <entry name="uuid">1141f79e-2e47-40f1-91b0-275a9fac765c</entry>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:12:09 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/1141f79e-2e47-40f1-91b0-275a9fac765c_disk.rescue">
Oct 14 05:12:09 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:12:09 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:12:09 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/1141f79e-2e47-40f1-91b0-275a9fac765c_disk">
Oct 14 05:12:09 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:12:09 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:      <target dev="vdb" bus="virtio"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:12:09 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/1141f79e-2e47-40f1-91b0-275a9fac765c_disk.config.rescue">
Oct 14 05:12:09 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:12:09 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:12:09 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:2b:08:ac"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:      <target dev="tap7ce99440-fa"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:12:09 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/1141f79e-2e47-40f1-91b0-275a9fac765c/console.log" append="off"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:12:09 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:12:09 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:12:09 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:12:09 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:12:09 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:12:09 np0005486808 nova_compute[259627]: 2025-10-14 09:12:09.165 2 INFO nova.virt.libvirt.driver [-] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Instance destroyed successfully.#033[00m
Oct 14 05:12:09 np0005486808 nova_compute[259627]: 2025-10-14 09:12:09.215 2 DEBUG nova.virt.libvirt.driver [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:12:09 np0005486808 nova_compute[259627]: 2025-10-14 09:12:09.216 2 DEBUG nova.virt.libvirt.driver [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:12:09 np0005486808 nova_compute[259627]: 2025-10-14 09:12:09.217 2 DEBUG nova.virt.libvirt.driver [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:12:09 np0005486808 nova_compute[259627]: 2025-10-14 09:12:09.218 2 DEBUG nova.virt.libvirt.driver [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] No VIF found with MAC fa:16:3e:2b:08:ac, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:12:09 np0005486808 nova_compute[259627]: 2025-10-14 09:12:09.219 2 INFO nova.virt.libvirt.driver [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Using config drive#033[00m
Oct 14 05:12:09 np0005486808 nova_compute[259627]: 2025-10-14 09:12:09.260 2 DEBUG nova.storage.rbd_utils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] rbd image 1141f79e-2e47-40f1-91b0-275a9fac765c_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:09 np0005486808 nova_compute[259627]: 2025-10-14 09:12:09.291 2 DEBUG nova.objects.instance [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 1141f79e-2e47-40f1-91b0-275a9fac765c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:12:09 np0005486808 nova_compute[259627]: 2025-10-14 09:12:09.323 2 DEBUG nova.objects.instance [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lazy-loading 'keypairs' on Instance uuid 1141f79e-2e47-40f1-91b0-275a9fac765c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:12:09 np0005486808 nova_compute[259627]: 2025-10-14 09:12:09.678 2 INFO nova.virt.libvirt.driver [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Creating config drive at /var/lib/nova/instances/1141f79e-2e47-40f1-91b0-275a9fac765c/disk.config.rescue#033[00m
Oct 14 05:12:09 np0005486808 nova_compute[259627]: 2025-10-14 09:12:09.688 2 DEBUG oslo_concurrency.processutils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1141f79e-2e47-40f1-91b0-275a9fac765c/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo4qu42cl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:09 np0005486808 nova_compute[259627]: 2025-10-14 09:12:09.834 2 DEBUG oslo_concurrency.processutils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1141f79e-2e47-40f1-91b0-275a9fac765c/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo4qu42cl" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:09 np0005486808 nova_compute[259627]: 2025-10-14 09:12:09.863 2 DEBUG nova.storage.rbd_utils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] rbd image 1141f79e-2e47-40f1-91b0-275a9fac765c_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:09 np0005486808 nova_compute[259627]: 2025-10-14 09:12:09.868 2 DEBUG oslo_concurrency.processutils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1141f79e-2e47-40f1-91b0-275a9fac765c/disk.config.rescue 1141f79e-2e47-40f1-91b0-275a9fac765c_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:09 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1731: 305 pgs: 305 active+clean; 200 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 114 KiB/s rd, 108 KiB/s wr, 50 op/s
Oct 14 05:12:09 np0005486808 nova_compute[259627]: 2025-10-14 09:12:09.955 2 DEBUG nova.network.neutron [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Updating instance_info_cache with network_info: [{"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:12:09 np0005486808 nova_compute[259627]: 2025-10-14 09:12:09.976 2 DEBUG oslo_concurrency.lockutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Releasing lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:12:09 np0005486808 nova_compute[259627]: 2025-10-14 09:12:09.977 2 DEBUG nova.compute.manager [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Instance network_info: |[{"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:12:09 np0005486808 nova_compute[259627]: 2025-10-14 09:12:09.978 2 DEBUG oslo_concurrency.lockutils [req-e18c3c15-496b-4e24-a4ce-1c34aeceabf1 req-8541c6a7-68ab-405b-a9ec-8742a1f222e9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:12:09 np0005486808 nova_compute[259627]: 2025-10-14 09:12:09.978 2 DEBUG nova.network.neutron [req-e18c3c15-496b-4e24-a4ce-1c34aeceabf1 req-8541c6a7-68ab-405b-a9ec-8742a1f222e9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Refreshing network info cache for port 4f827284-f357-43c5-bdde-c69731b52914 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:12:09 np0005486808 nova_compute[259627]: 2025-10-14 09:12:09.983 2 DEBUG nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Start _get_guest_xml network_info=[{"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:12:09 np0005486808 nova_compute[259627]: 2025-10-14 09:12:09.991 2 WARNING nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:12:09 np0005486808 nova_compute[259627]: 2025-10-14 09:12:09.996 2 DEBUG nova.virt.libvirt.host [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:12:09 np0005486808 nova_compute[259627]: 2025-10-14 09:12:09.997 2 DEBUG nova.virt.libvirt.host [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:12:10 np0005486808 nova_compute[259627]: 2025-10-14 09:12:10.023 2 DEBUG nova.virt.libvirt.host [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:12:10 np0005486808 nova_compute[259627]: 2025-10-14 09:12:10.023 2 DEBUG nova.virt.libvirt.host [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:12:10 np0005486808 nova_compute[259627]: 2025-10-14 09:12:10.024 2 DEBUG nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:12:10 np0005486808 nova_compute[259627]: 2025-10-14 09:12:10.024 2 DEBUG nova.virt.hardware [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:12:10 np0005486808 nova_compute[259627]: 2025-10-14 09:12:10.025 2 DEBUG nova.virt.hardware [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:12:10 np0005486808 nova_compute[259627]: 2025-10-14 09:12:10.025 2 DEBUG nova.virt.hardware [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:12:10 np0005486808 nova_compute[259627]: 2025-10-14 09:12:10.025 2 DEBUG nova.virt.hardware [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:12:10 np0005486808 nova_compute[259627]: 2025-10-14 09:12:10.026 2 DEBUG nova.virt.hardware [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:12:10 np0005486808 nova_compute[259627]: 2025-10-14 09:12:10.026 2 DEBUG nova.virt.hardware [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:12:10 np0005486808 nova_compute[259627]: 2025-10-14 09:12:10.026 2 DEBUG nova.virt.hardware [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:12:10 np0005486808 nova_compute[259627]: 2025-10-14 09:12:10.026 2 DEBUG nova.virt.hardware [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:12:10 np0005486808 nova_compute[259627]: 2025-10-14 09:12:10.027 2 DEBUG nova.virt.hardware [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:12:10 np0005486808 nova_compute[259627]: 2025-10-14 09:12:10.027 2 DEBUG nova.virt.hardware [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:12:10 np0005486808 nova_compute[259627]: 2025-10-14 09:12:10.027 2 DEBUG nova.virt.hardware [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:12:10 np0005486808 nova_compute[259627]: 2025-10-14 09:12:10.031 2 DEBUG oslo_concurrency.processutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:10 np0005486808 nova_compute[259627]: 2025-10-14 09:12:10.080 2 DEBUG oslo_concurrency.processutils [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1141f79e-2e47-40f1-91b0-275a9fac765c/disk.config.rescue 1141f79e-2e47-40f1-91b0-275a9fac765c_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.212s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:10 np0005486808 nova_compute[259627]: 2025-10-14 09:12:10.081 2 INFO nova.virt.libvirt.driver [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Deleting local config drive /var/lib/nova/instances/1141f79e-2e47-40f1-91b0-275a9fac765c/disk.config.rescue because it was imported into RBD.#033[00m
Oct 14 05:12:10 np0005486808 kernel: tap7ce99440-fa: entered promiscuous mode
Oct 14 05:12:10 np0005486808 NetworkManager[44885]: <info>  [1760433130.1378] manager: (tap7ce99440-fa): new Tun device (/org/freedesktop/NetworkManager/Devices/364)
Oct 14 05:12:10 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:10Z|00891|binding|INFO|Claiming lport 7ce99440-fa49-4876-bb38-fce631d40400 for this chassis.
Oct 14 05:12:10 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:10Z|00892|binding|INFO|7ce99440-fa49-4876-bb38-fce631d40400: Claiming fa:16:3e:2b:08:ac 10.100.0.4
Oct 14 05:12:10 np0005486808 nova_compute[259627]: 2025-10-14 09:12:10.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:10 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:10Z|00893|binding|INFO|Setting lport 7ce99440-fa49-4876-bb38-fce631d40400 ovn-installed in OVS
Oct 14 05:12:10 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:10Z|00894|binding|INFO|Setting lport 7ce99440-fa49-4876-bb38-fce631d40400 up in Southbound
Oct 14 05:12:10 np0005486808 nova_compute[259627]: 2025-10-14 09:12:10.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:10.177 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:08:ac 10.100.0.4'], port_security=['fa:16:3e:2b:08:ac 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1141f79e-2e47-40f1-91b0-275a9fac765c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15e6e95c-6cd2-4631-98f6-9ed276458c39', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f10ae705d9a34608a922683282b952b5', 'neutron:revision_number': '5', 'neutron:security_group_ids': '908d59ff-8d75-4488-a358-b4621923e921', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d2a29a9f-f981-430b-8a8f-ff453e5ae1a6, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=7ce99440-fa49-4876-bb38-fce631d40400) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:12:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:10.178 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 7ce99440-fa49-4876-bb38-fce631d40400 in datapath 15e6e95c-6cd2-4631-98f6-9ed276458c39 bound to our chassis#033[00m
Oct 14 05:12:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:10.180 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 15e6e95c-6cd2-4631-98f6-9ed276458c39#033[00m
Oct 14 05:12:10 np0005486808 systemd-udevd[346695]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:12:10 np0005486808 systemd-machined[214636]: New machine qemu-106-instance-00000052.
Oct 14 05:12:10 np0005486808 NetworkManager[44885]: <info>  [1760433130.2088] device (tap7ce99440-fa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:12:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:10.208 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0974daa4-0c34-4429-88f2-efd937ee79fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:10 np0005486808 NetworkManager[44885]: <info>  [1760433130.2098] device (tap7ce99440-fa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:12:10 np0005486808 systemd[1]: Started Virtual Machine qemu-106-instance-00000052.
Oct 14 05:12:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:10.246 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[7c18133c-8dec-48ad-9c87-5f307552f6b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:10.250 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d5fcae90-8924-46e3-9f0c-53bc1dedb9ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:10.286 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[66101c9c-3394-4e17-82f7-ae4220e7df1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:10.305 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e71c40df-1ee7-411e-aeab-4c467163f76c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap15e6e95c-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:6b:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 10, 'rx_bytes': 1000, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 10, 'rx_bytes': 1000, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 243], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686914, 'reachable_time': 20198, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346729, 'error': None, 'target': 'ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:10.322 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[98859a24-71ed-4096-bc44-ac703bed3900]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap15e6e95c-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 686938, 'tstamp': 686938}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346730, 'error': None, 'target': 'ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap15e6e95c-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 686941, 'tstamp': 686941}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346730, 'error': None, 'target': 'ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:10.324 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15e6e95c-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:10 np0005486808 nova_compute[259627]: 2025-10-14 09:12:10.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:10.327 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap15e6e95c-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:10.328 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:12:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:10.328 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap15e6e95c-60, col_values=(('external_ids', {'iface-id': '970f2645-7ec3-4b7f-8527-871800c728d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:10.329 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:12:10 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:12:10 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2229050100' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:12:10 np0005486808 nova_compute[259627]: 2025-10-14 09:12:10.555 2 DEBUG oslo_concurrency.processutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:10 np0005486808 nova_compute[259627]: 2025-10-14 09:12:10.577 2 DEBUG nova.storage.rbd_utils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] rbd image 2534f8b9-e832-4b78-ada4-e551429bdc75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:10 np0005486808 nova_compute[259627]: 2025-10-14 09:12:10.581 2 DEBUG oslo_concurrency.processutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:10 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:12:10 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/831358236' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.006 2 DEBUG oslo_concurrency.processutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.009 2 DEBUG nova.virt.libvirt.vif [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:12:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-17250352',display_name='tempest-ServersNegativeTestJSON-server-17250352',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-17250352',id=86,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='517aafb84156407c8672042097e3ef4f',ramdisk_id='',reservation_id='r-rj00rja0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1475695514',owner_user_name='tempest-ServersNegativeTestJSO
N-1475695514-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:12:05Z,user_data=None,user_id='92e59e145f6942b78d0ffbebc4d89e76',uuid=2534f8b9-e832-4b78-ada4-e551429bdc75,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.009 2 DEBUG nova.network.os_vif_util [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Converting VIF {"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.011 2 DEBUG nova.network.os_vif_util [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:d7:f7,bridge_name='br-int',has_traffic_filtering=True,id=4f827284-f357-43c5-bdde-c69731b52914,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f827284-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.013 2 DEBUG nova.objects.instance [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lazy-loading 'pci_devices' on Instance uuid 2534f8b9-e832-4b78-ada4-e551429bdc75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.031 2 DEBUG nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:12:11 np0005486808 nova_compute[259627]:  <uuid>2534f8b9-e832-4b78-ada4-e551429bdc75</uuid>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:  <name>instance-00000056</name>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:12:11 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:      <nova:name>tempest-ServersNegativeTestJSON-server-17250352</nova:name>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:12:09</nova:creationTime>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:12:11 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:        <nova:user uuid="92e59e145f6942b78d0ffbebc4d89e76">tempest-ServersNegativeTestJSON-1475695514-project-member</nova:user>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:        <nova:project uuid="517aafb84156407c8672042097e3ef4f">tempest-ServersNegativeTestJSON-1475695514</nova:project>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:        <nova:port uuid="4f827284-f357-43c5-bdde-c69731b52914">
Oct 14 05:12:11 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:      <entry name="serial">2534f8b9-e832-4b78-ada4-e551429bdc75</entry>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:      <entry name="uuid">2534f8b9-e832-4b78-ada4-e551429bdc75</entry>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:12:11 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/2534f8b9-e832-4b78-ada4-e551429bdc75_disk">
Oct 14 05:12:11 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:12:11 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:12:11 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/2534f8b9-e832-4b78-ada4-e551429bdc75_disk.config">
Oct 14 05:12:11 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:12:11 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:12:11 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:8b:d7:f7"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:      <target dev="tap4f827284-f3"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:12:11 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/2534f8b9-e832-4b78-ada4-e551429bdc75/console.log" append="off"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:12:11 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:12:11 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:12:11 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:12:11 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:12:11 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.032 2 DEBUG nova.compute.manager [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Preparing to wait for external event network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.033 2 DEBUG oslo_concurrency.lockutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.034 2 DEBUG oslo_concurrency.lockutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.034 2 DEBUG oslo_concurrency.lockutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.036 2 DEBUG nova.virt.libvirt.vif [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:12:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-17250352',display_name='tempest-ServersNegativeTestJSON-server-17250352',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-17250352',id=86,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='517aafb84156407c8672042097e3ef4f',ramdisk_id='',reservation_id='r-rj00rja0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1475695514',owner_user_name='tempest-ServersNegat
iveTestJSON-1475695514-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:12:05Z,user_data=None,user_id='92e59e145f6942b78d0ffbebc4d89e76',uuid=2534f8b9-e832-4b78-ada4-e551429bdc75,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.036 2 DEBUG nova.network.os_vif_util [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Converting VIF {"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.037 2 DEBUG nova.network.os_vif_util [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:d7:f7,bridge_name='br-int',has_traffic_filtering=True,id=4f827284-f357-43c5-bdde-c69731b52914,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f827284-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.038 2 DEBUG os_vif [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:d7:f7,bridge_name='br-int',has_traffic_filtering=True,id=4f827284-f357-43c5-bdde-c69731b52914,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f827284-f3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.041 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.041 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.047 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f827284-f3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.047 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4f827284-f3, col_values=(('external_ids', {'iface-id': '4f827284-f357-43c5-bdde-c69731b52914', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8b:d7:f7', 'vm-uuid': '2534f8b9-e832-4b78-ada4-e551429bdc75'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:11 np0005486808 NetworkManager[44885]: <info>  [1760433131.0505] manager: (tap4f827284-f3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/365)
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.060 2 INFO os_vif [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:d7:f7,bridge_name='br-int',has_traffic_filtering=True,id=4f827284-f357-43c5-bdde-c69731b52914,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f827284-f3')#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.085 2 DEBUG nova.compute.manager [req-686ea9b9-796c-404e-a14a-38e9da280964 req-6ccdde76-5e18-4581-b1be-076dd34099bb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Received event network-vif-plugged-7ce99440-fa49-4876-bb38-fce631d40400 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.086 2 DEBUG oslo_concurrency.lockutils [req-686ea9b9-796c-404e-a14a-38e9da280964 req-6ccdde76-5e18-4581-b1be-076dd34099bb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.086 2 DEBUG oslo_concurrency.lockutils [req-686ea9b9-796c-404e-a14a-38e9da280964 req-6ccdde76-5e18-4581-b1be-076dd34099bb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.086 2 DEBUG oslo_concurrency.lockutils [req-686ea9b9-796c-404e-a14a-38e9da280964 req-6ccdde76-5e18-4581-b1be-076dd34099bb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.086 2 DEBUG nova.compute.manager [req-686ea9b9-796c-404e-a14a-38e9da280964 req-6ccdde76-5e18-4581-b1be-076dd34099bb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] No waiting events found dispatching network-vif-plugged-7ce99440-fa49-4876-bb38-fce631d40400 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.087 2 WARNING nova.compute.manager [req-686ea9b9-796c-404e-a14a-38e9da280964 req-6ccdde76-5e18-4581-b1be-076dd34099bb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Received unexpected event network-vif-plugged-7ce99440-fa49-4876-bb38-fce631d40400 for instance with vm_state active and task_state rescuing.#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.087 2 DEBUG nova.compute.manager [req-686ea9b9-796c-404e-a14a-38e9da280964 req-6ccdde76-5e18-4581-b1be-076dd34099bb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Received event network-vif-plugged-7ce99440-fa49-4876-bb38-fce631d40400 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.087 2 DEBUG oslo_concurrency.lockutils [req-686ea9b9-796c-404e-a14a-38e9da280964 req-6ccdde76-5e18-4581-b1be-076dd34099bb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.087 2 DEBUG oslo_concurrency.lockutils [req-686ea9b9-796c-404e-a14a-38e9da280964 req-6ccdde76-5e18-4581-b1be-076dd34099bb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.087 2 DEBUG oslo_concurrency.lockutils [req-686ea9b9-796c-404e-a14a-38e9da280964 req-6ccdde76-5e18-4581-b1be-076dd34099bb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.088 2 DEBUG nova.compute.manager [req-686ea9b9-796c-404e-a14a-38e9da280964 req-6ccdde76-5e18-4581-b1be-076dd34099bb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] No waiting events found dispatching network-vif-plugged-7ce99440-fa49-4876-bb38-fce631d40400 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.088 2 WARNING nova.compute.manager [req-686ea9b9-796c-404e-a14a-38e9da280964 req-6ccdde76-5e18-4581-b1be-076dd34099bb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Received unexpected event network-vif-plugged-7ce99440-fa49-4876-bb38-fce631d40400 for instance with vm_state active and task_state rescuing.#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.127 2 DEBUG nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.128 2 DEBUG nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.128 2 DEBUG nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] No VIF found with MAC fa:16:3e:8b:d7:f7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.128 2 INFO nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Using config drive#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.153 2 DEBUG nova.storage.rbd_utils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] rbd image 2534f8b9-e832-4b78-ada4-e551429bdc75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.517 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for 1141f79e-2e47-40f1-91b0-275a9fac765c due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.518 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433131.517287, 1141f79e-2e47-40f1-91b0-275a9fac765c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.518 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.522 2 DEBUG nova.compute.manager [None req-dec3b3d6-f097-4270-addc-313554a28d11 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.565 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.569 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.600 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.600 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433131.5199463, 1141f79e-2e47-40f1-91b0-275a9fac765c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.601 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] VM Started (Lifecycle Event)#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.617 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:12:11 np0005486808 nova_compute[259627]: 2025-10-14 09:12:11.620 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:12:11 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1732: 305 pgs: 305 active+clean; 293 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 142 KiB/s rd, 3.7 MiB/s wr, 99 op/s
Oct 14 05:12:12 np0005486808 nova_compute[259627]: 2025-10-14 09:12:12.554 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433117.552755, 290980d2-08b4-4029-a1c3-becd3457a410 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:12:12 np0005486808 nova_compute[259627]: 2025-10-14 09:12:12.554 2 INFO nova.compute.manager [-] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:12:12 np0005486808 nova_compute[259627]: 2025-10-14 09:12:12.579 2 DEBUG nova.compute.manager [None req-4e80ac2d-c88e-487a-a4a7-7a853cca56ee - - - - - -] [instance: 290980d2-08b4-4029-a1c3-becd3457a410] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:12:12 np0005486808 nova_compute[259627]: 2025-10-14 09:12:12.671 2 INFO nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Creating config drive at /var/lib/nova/instances/2534f8b9-e832-4b78-ada4-e551429bdc75/disk.config#033[00m
Oct 14 05:12:12 np0005486808 nova_compute[259627]: 2025-10-14 09:12:12.677 2 DEBUG oslo_concurrency.processutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2534f8b9-e832-4b78-ada4-e551429bdc75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4cb_lo9u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:12:12 np0005486808 nova_compute[259627]: 2025-10-14 09:12:12.827 2 DEBUG oslo_concurrency.processutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2534f8b9-e832-4b78-ada4-e551429bdc75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4cb_lo9u" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:12 np0005486808 nova_compute[259627]: 2025-10-14 09:12:12.874 2 DEBUG nova.storage.rbd_utils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] rbd image 2534f8b9-e832-4b78-ada4-e551429bdc75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:12 np0005486808 nova_compute[259627]: 2025-10-14 09:12:12.880 2 DEBUG oslo_concurrency.processutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2534f8b9-e832-4b78-ada4-e551429bdc75/disk.config 2534f8b9-e832-4b78-ada4-e551429bdc75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:12 np0005486808 nova_compute[259627]: 2025-10-14 09:12:12.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:13 np0005486808 nova_compute[259627]: 2025-10-14 09:12:13.057 2 DEBUG oslo_concurrency.processutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2534f8b9-e832-4b78-ada4-e551429bdc75/disk.config 2534f8b9-e832-4b78-ada4-e551429bdc75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:13 np0005486808 nova_compute[259627]: 2025-10-14 09:12:13.058 2 INFO nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Deleting local config drive /var/lib/nova/instances/2534f8b9-e832-4b78-ada4-e551429bdc75/disk.config because it was imported into RBD.#033[00m
Oct 14 05:12:13 np0005486808 kernel: tap4f827284-f3: entered promiscuous mode
Oct 14 05:12:13 np0005486808 NetworkManager[44885]: <info>  [1760433133.1242] manager: (tap4f827284-f3): new Tun device (/org/freedesktop/NetworkManager/Devices/366)
Oct 14 05:12:13 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:13Z|00895|binding|INFO|Claiming lport 4f827284-f357-43c5-bdde-c69731b52914 for this chassis.
Oct 14 05:12:13 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:13Z|00896|binding|INFO|4f827284-f357-43c5-bdde-c69731b52914: Claiming fa:16:3e:8b:d7:f7 10.100.0.7
Oct 14 05:12:13 np0005486808 nova_compute[259627]: 2025-10-14 09:12:13.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.180 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:d7:f7 10.100.0.7'], port_security=['fa:16:3e:8b:d7:f7 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2534f8b9-e832-4b78-ada4-e551429bdc75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '517aafb84156407c8672042097e3ef4f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '572acc55-453a-444a-ab8d-a15e14283f7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=927296e1-b389-4596-b9be-8cf735b93ca2, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=4f827284-f357-43c5-bdde-c69731b52914) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.183 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 4f827284-f357-43c5-bdde-c69731b52914 in datapath a49b41b4-2559-4a22-a274-a6c7bbe75f2c bound to our chassis#033[00m
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.185 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a49b41b4-2559-4a22-a274-a6c7bbe75f2c#033[00m
Oct 14 05:12:13 np0005486808 NetworkManager[44885]: <info>  [1760433133.1986] device (tap4f827284-f3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:12:13 np0005486808 NetworkManager[44885]: <info>  [1760433133.2005] device (tap4f827284-f3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.212 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dbad8275-c74a-4226-9942-21021150aca0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.213 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa49b41b4-21 in ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:12:13 np0005486808 systemd-machined[214636]: New machine qemu-107-instance-00000056.
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.216 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa49b41b4-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.216 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[81669ea8-a53f-4394-89e2-842265fa0df9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.220 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[39b49bde-800e-46f1-8499-b2a7c3af0a0f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:13 np0005486808 systemd[1]: Started Virtual Machine qemu-107-instance-00000056.
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.239 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[96fc59ea-9b4e-4479-9752-1592b6d9706c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:13 np0005486808 nova_compute[259627]: 2025-10-14 09:12:13.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.276 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5a6238d4-0a67-4f5f-b60a-04002fa1de71]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:13 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:13Z|00897|binding|INFO|Setting lport 4f827284-f357-43c5-bdde-c69731b52914 ovn-installed in OVS
Oct 14 05:12:13 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:13Z|00898|binding|INFO|Setting lport 4f827284-f357-43c5-bdde-c69731b52914 up in Southbound
Oct 14 05:12:13 np0005486808 nova_compute[259627]: 2025-10-14 09:12:13.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.308 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[98c71466-8809-4493-80ab-e56c32259f1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:13 np0005486808 NetworkManager[44885]: <info>  [1760433133.3145] manager: (tapa49b41b4-20): new Veth device (/org/freedesktop/NetworkManager/Devices/367)
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.314 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4cfb9253-bb0a-4464-82ac-61519a518f0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:13 np0005486808 systemd-udevd[346920]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.361 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5fcb9480-66b1-4848-9c4c-a30ac7a6b71e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.364 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[049d73a4-69ef-4835-9682-ed32d55fe28f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:13 np0005486808 nova_compute[259627]: 2025-10-14 09:12:13.388 2 DEBUG nova.network.neutron [req-e18c3c15-496b-4e24-a4ce-1c34aeceabf1 req-8541c6a7-68ab-405b-a9ec-8742a1f222e9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Updated VIF entry in instance network info cache for port 4f827284-f357-43c5-bdde-c69731b52914. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:12:13 np0005486808 nova_compute[259627]: 2025-10-14 09:12:13.388 2 DEBUG nova.network.neutron [req-e18c3c15-496b-4e24-a4ce-1c34aeceabf1 req-8541c6a7-68ab-405b-a9ec-8742a1f222e9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Updating instance_info_cache with network_info: [{"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:12:13 np0005486808 NetworkManager[44885]: <info>  [1760433133.3893] device (tapa49b41b4-20): carrier: link connected
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.400 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[46a1706a-ec1c-4a49-8403-7392233c48f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.416 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ad67ee35-7868-4bdf-ba4d-3cdf288fa5e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa49b41b4-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:5b:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 259], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691215, 'reachable_time': 16714, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346939, 'error': None, 'target': 'ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:13 np0005486808 nova_compute[259627]: 2025-10-14 09:12:13.419 2 DEBUG oslo_concurrency.lockutils [req-e18c3c15-496b-4e24-a4ce-1c34aeceabf1 req-8541c6a7-68ab-405b-a9ec-8742a1f222e9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.431 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[758fd87f-8614-47a2-99f4-dc7c0312c018]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1d:5b6b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 691215, 'tstamp': 691215}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346940, 'error': None, 'target': 'ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.448 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b6af0847-2333-4280-bf21-fbfa09d25c18]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa49b41b4-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:5b:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 259], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691215, 'reachable_time': 16714, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 346941, 'error': None, 'target': 'ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.479 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0d125a32-e81c-4a4b-b3a7-5980814da3f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.543 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a56944c3-e15a-451f-b269-4b8653732bb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.545 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa49b41b4-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.545 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.545 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa49b41b4-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:13 np0005486808 nova_compute[259627]: 2025-10-14 09:12:13.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:13 np0005486808 NetworkManager[44885]: <info>  [1760433133.5481] manager: (tapa49b41b4-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/368)
Oct 14 05:12:13 np0005486808 kernel: tapa49b41b4-20: entered promiscuous mode
Oct 14 05:12:13 np0005486808 nova_compute[259627]: 2025-10-14 09:12:13.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.551 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa49b41b4-20, col_values=(('external_ids', {'iface-id': '61fe5571-a8eb-446a-8c4c-1f6f6758b146'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:13 np0005486808 nova_compute[259627]: 2025-10-14 09:12:13.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:13 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:13Z|00899|binding|INFO|Releasing lport 61fe5571-a8eb-446a-8c4c-1f6f6758b146 from this chassis (sb_readonly=0)
Oct 14 05:12:13 np0005486808 nova_compute[259627]: 2025-10-14 09:12:13.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.554 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a49b41b4-2559-4a22-a274-a6c7bbe75f2c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a49b41b4-2559-4a22-a274-a6c7bbe75f2c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.556 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a73130c2-2444-44f1-8adf-7124c5f4550a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.557 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-a49b41b4-2559-4a22-a274-a6c7bbe75f2c
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/a49b41b4-2559-4a22-a274-a6c7bbe75f2c.pid.haproxy
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID a49b41b4-2559-4a22-a274-a6c7bbe75f2c
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:12:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:13.557 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'env', 'PROCESS_TAG=haproxy-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a49b41b4-2559-4a22-a274-a6c7bbe75f2c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:12:13 np0005486808 nova_compute[259627]: 2025-10-14 09:12:13.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:13 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1733: 305 pgs: 305 active+clean; 293 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 3.6 MiB/s wr, 51 op/s
Oct 14 05:12:14 np0005486808 podman[347015]: 2025-10-14 09:12:14.013200377 +0000 UTC m=+0.086334677 container create 993e2ded677d0879cc8ca199b149c7650aea4311c2053b17479e0731c08d9864 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 14 05:12:14 np0005486808 podman[347015]: 2025-10-14 09:12:13.96539384 +0000 UTC m=+0.038528210 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:12:14 np0005486808 systemd[1]: Started libpod-conmon-993e2ded677d0879cc8ca199b149c7650aea4311c2053b17479e0731c08d9864.scope.
Oct 14 05:12:14 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:12:14 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a5b768a60b37b396b810a626d7725822e051f41211fb6861ae019660974357e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:12:14 np0005486808 podman[347015]: 2025-10-14 09:12:14.107660513 +0000 UTC m=+0.180794853 container init 993e2ded677d0879cc8ca199b149c7650aea4311c2053b17479e0731c08d9864 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 05:12:14 np0005486808 podman[347015]: 2025-10-14 09:12:14.11727722 +0000 UTC m=+0.190411530 container start 993e2ded677d0879cc8ca199b149c7650aea4311c2053b17479e0731c08d9864 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:12:14 np0005486808 nova_compute[259627]: 2025-10-14 09:12:14.136 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433134.135862, 2534f8b9-e832-4b78-ada4-e551429bdc75 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:12:14 np0005486808 nova_compute[259627]: 2025-10-14 09:12:14.136 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] VM Started (Lifecycle Event)#033[00m
Oct 14 05:12:14 np0005486808 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[347030]: [NOTICE]   (347034) : New worker (347036) forked
Oct 14 05:12:14 np0005486808 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[347030]: [NOTICE]   (347034) : Loading success.
Oct 14 05:12:14 np0005486808 nova_compute[259627]: 2025-10-14 09:12:14.168 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:12:14 np0005486808 nova_compute[259627]: 2025-10-14 09:12:14.172 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433134.136061, 2534f8b9-e832-4b78-ada4-e551429bdc75 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:12:14 np0005486808 nova_compute[259627]: 2025-10-14 09:12:14.172 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:12:14 np0005486808 nova_compute[259627]: 2025-10-14 09:12:14.198 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:12:14 np0005486808 nova_compute[259627]: 2025-10-14 09:12:14.202 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:12:14 np0005486808 nova_compute[259627]: 2025-10-14 09:12:14.226 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:12:14 np0005486808 nova_compute[259627]: 2025-10-14 09:12:14.568 2 INFO nova.compute.manager [None req-d8b55714-50d7-4e4c-937f-1db455c68914 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Pausing#033[00m
Oct 14 05:12:14 np0005486808 nova_compute[259627]: 2025-10-14 09:12:14.569 2 DEBUG nova.objects.instance [None req-d8b55714-50d7-4e4c-937f-1db455c68914 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lazy-loading 'flavor' on Instance uuid 70e3c250-cd38-4718-9a7f-0fbf7bf471fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:12:14 np0005486808 nova_compute[259627]: 2025-10-14 09:12:14.604 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433134.6047902, 70e3c250-cd38-4718-9a7f-0fbf7bf471fe => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:12:14 np0005486808 nova_compute[259627]: 2025-10-14 09:12:14.605 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:12:14 np0005486808 nova_compute[259627]: 2025-10-14 09:12:14.607 2 DEBUG nova.compute.manager [None req-d8b55714-50d7-4e4c-937f-1db455c68914 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:12:14 np0005486808 nova_compute[259627]: 2025-10-14 09:12:14.627 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:12:14 np0005486808 nova_compute[259627]: 2025-10-14 09:12:14.632 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:12:14 np0005486808 nova_compute[259627]: 2025-10-14 09:12:14.654 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Oct 14 05:12:15 np0005486808 podman[347046]: 2025-10-14 09:12:15.69683769 +0000 UTC m=+0.095170365 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 05:12:15 np0005486808 podman[347045]: 2025-10-14 09:12:15.706522388 +0000 UTC m=+0.110693977 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 14 05:12:15 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1734: 305 pgs: 305 active+clean; 293 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 133 op/s
Oct 14 05:12:16 np0005486808 nova_compute[259627]: 2025-10-14 09:12:16.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:17 np0005486808 nova_compute[259627]: 2025-10-14 09:12:17.122 2 INFO nova.compute.manager [None req-770b5ff0-1343-49c9-972e-2396b40bab6d aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Unpausing#033[00m
Oct 14 05:12:17 np0005486808 nova_compute[259627]: 2025-10-14 09:12:17.123 2 DEBUG nova.objects.instance [None req-770b5ff0-1343-49c9-972e-2396b40bab6d aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lazy-loading 'flavor' on Instance uuid 70e3c250-cd38-4718-9a7f-0fbf7bf471fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:12:17 np0005486808 nova_compute[259627]: 2025-10-14 09:12:17.149 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433137.1494572, 70e3c250-cd38-4718-9a7f-0fbf7bf471fe => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:12:17 np0005486808 nova_compute[259627]: 2025-10-14 09:12:17.149 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:12:17 np0005486808 virtqemud[259351]: argument unsupported: QEMU guest agent is not configured
Oct 14 05:12:17 np0005486808 nova_compute[259627]: 2025-10-14 09:12:17.153 2 DEBUG nova.virt.libvirt.guest [None req-770b5ff0-1343-49c9-972e-2396b40bab6d aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct 14 05:12:17 np0005486808 nova_compute[259627]: 2025-10-14 09:12:17.153 2 DEBUG nova.compute.manager [None req-770b5ff0-1343-49c9-972e-2396b40bab6d aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:12:17 np0005486808 nova_compute[259627]: 2025-10-14 09:12:17.175 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:12:17 np0005486808 nova_compute[259627]: 2025-10-14 09:12:17.177 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:12:17 np0005486808 nova_compute[259627]: 2025-10-14 09:12:17.206 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] During sync_power_state the instance has a pending task (unpausing). Skip.#033[00m
Oct 14 05:12:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:12:17 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1735: 305 pgs: 305 active+clean; 293 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 129 op/s
Oct 14 05:12:17 np0005486808 nova_compute[259627]: 2025-10-14 09:12:17.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:18 np0005486808 nova_compute[259627]: 2025-10-14 09:12:18.757 2 DEBUG nova.compute.manager [req-c1e2c84c-bf81-4aff-928a-2197569568f6 req-adefe643-e70e-4366-82bb-ace31311a7f1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received event network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:12:18 np0005486808 nova_compute[259627]: 2025-10-14 09:12:18.758 2 DEBUG oslo_concurrency.lockutils [req-c1e2c84c-bf81-4aff-928a-2197569568f6 req-adefe643-e70e-4366-82bb-ace31311a7f1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:18 np0005486808 nova_compute[259627]: 2025-10-14 09:12:18.758 2 DEBUG oslo_concurrency.lockutils [req-c1e2c84c-bf81-4aff-928a-2197569568f6 req-adefe643-e70e-4366-82bb-ace31311a7f1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:18 np0005486808 nova_compute[259627]: 2025-10-14 09:12:18.759 2 DEBUG oslo_concurrency.lockutils [req-c1e2c84c-bf81-4aff-928a-2197569568f6 req-adefe643-e70e-4366-82bb-ace31311a7f1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:18 np0005486808 nova_compute[259627]: 2025-10-14 09:12:18.760 2 DEBUG nova.compute.manager [req-c1e2c84c-bf81-4aff-928a-2197569568f6 req-adefe643-e70e-4366-82bb-ace31311a7f1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Processing event network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:12:18 np0005486808 nova_compute[259627]: 2025-10-14 09:12:18.761 2 DEBUG nova.compute.manager [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:12:18 np0005486808 nova_compute[259627]: 2025-10-14 09:12:18.765 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433138.7648256, 2534f8b9-e832-4b78-ada4-e551429bdc75 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:12:18 np0005486808 nova_compute[259627]: 2025-10-14 09:12:18.765 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:12:18 np0005486808 nova_compute[259627]: 2025-10-14 09:12:18.768 2 DEBUG nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:12:18 np0005486808 nova_compute[259627]: 2025-10-14 09:12:18.772 2 INFO nova.virt.libvirt.driver [-] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Instance spawned successfully.#033[00m
Oct 14 05:12:18 np0005486808 nova_compute[259627]: 2025-10-14 09:12:18.773 2 DEBUG nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:12:18 np0005486808 nova_compute[259627]: 2025-10-14 09:12:18.799 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:12:18 np0005486808 nova_compute[259627]: 2025-10-14 09:12:18.808 2 DEBUG nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:12:18 np0005486808 nova_compute[259627]: 2025-10-14 09:12:18.809 2 DEBUG nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:12:18 np0005486808 nova_compute[259627]: 2025-10-14 09:12:18.810 2 DEBUG nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:12:18 np0005486808 nova_compute[259627]: 2025-10-14 09:12:18.810 2 DEBUG nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:12:18 np0005486808 nova_compute[259627]: 2025-10-14 09:12:18.811 2 DEBUG nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:12:18 np0005486808 nova_compute[259627]: 2025-10-14 09:12:18.812 2 DEBUG nova.virt.libvirt.driver [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:12:18 np0005486808 nova_compute[259627]: 2025-10-14 09:12:18.819 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:12:18 np0005486808 nova_compute[259627]: 2025-10-14 09:12:18.851 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:12:18 np0005486808 nova_compute[259627]: 2025-10-14 09:12:18.883 2 INFO nova.compute.manager [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Took 12.99 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:12:18 np0005486808 nova_compute[259627]: 2025-10-14 09:12:18.883 2 DEBUG nova.compute.manager [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:12:18 np0005486808 nova_compute[259627]: 2025-10-14 09:12:18.948 2 INFO nova.compute.manager [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Took 13.98 seconds to build instance.#033[00m
Oct 14 05:12:18 np0005486808 nova_compute[259627]: 2025-10-14 09:12:18.968 2 DEBUG oslo_concurrency.lockutils [None req-7fdc655d-f3a8-4088-94fb-a94a378cde85 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.093s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:18 np0005486808 nova_compute[259627]: 2025-10-14 09:12:18.990 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:18 np0005486808 nova_compute[259627]: 2025-10-14 09:12:18.990 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:19 np0005486808 nova_compute[259627]: 2025-10-14 09:12:19.022 2 DEBUG nova.compute.manager [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:12:19 np0005486808 nova_compute[259627]: 2025-10-14 09:12:19.040 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:19 np0005486808 nova_compute[259627]: 2025-10-14 09:12:19.041 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:19 np0005486808 nova_compute[259627]: 2025-10-14 09:12:19.068 2 DEBUG nova.compute.manager [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:12:19 np0005486808 nova_compute[259627]: 2025-10-14 09:12:19.151 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:19 np0005486808 nova_compute[259627]: 2025-10-14 09:12:19.152 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:19 np0005486808 nova_compute[259627]: 2025-10-14 09:12:19.161 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:12:19 np0005486808 nova_compute[259627]: 2025-10-14 09:12:19.161 2 INFO nova.compute.claims [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:12:19 np0005486808 nova_compute[259627]: 2025-10-14 09:12:19.169 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:19 np0005486808 nova_compute[259627]: 2025-10-14 09:12:19.369 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:19 np0005486808 nova_compute[259627]: 2025-10-14 09:12:19.811 2 DEBUG oslo_concurrency.lockutils [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Acquiring lock "1141f79e-2e47-40f1-91b0-275a9fac765c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:19 np0005486808 nova_compute[259627]: 2025-10-14 09:12:19.811 2 DEBUG oslo_concurrency.lockutils [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "1141f79e-2e47-40f1-91b0-275a9fac765c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:19 np0005486808 nova_compute[259627]: 2025-10-14 09:12:19.812 2 DEBUG oslo_concurrency.lockutils [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Acquiring lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:19 np0005486808 nova_compute[259627]: 2025-10-14 09:12:19.812 2 DEBUG oslo_concurrency.lockutils [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:19 np0005486808 nova_compute[259627]: 2025-10-14 09:12:19.812 2 DEBUG oslo_concurrency.lockutils [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:19 np0005486808 nova_compute[259627]: 2025-10-14 09:12:19.814 2 INFO nova.compute.manager [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Terminating instance#033[00m
Oct 14 05:12:19 np0005486808 nova_compute[259627]: 2025-10-14 09:12:19.816 2 DEBUG nova.compute.manager [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:12:19 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:12:19 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4011629649' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:12:19 np0005486808 kernel: tap7ce99440-fa (unregistering): left promiscuous mode
Oct 14 05:12:19 np0005486808 NetworkManager[44885]: <info>  [1760433139.8601] device (tap7ce99440-fa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:12:19 np0005486808 nova_compute[259627]: 2025-10-14 09:12:19.866 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:19 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:19Z|00900|binding|INFO|Releasing lport 7ce99440-fa49-4876-bb38-fce631d40400 from this chassis (sb_readonly=0)
Oct 14 05:12:19 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:19Z|00901|binding|INFO|Setting lport 7ce99440-fa49-4876-bb38-fce631d40400 down in Southbound
Oct 14 05:12:19 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:19Z|00902|binding|INFO|Removing iface tap7ce99440-fa ovn-installed in OVS
Oct 14 05:12:19 np0005486808 nova_compute[259627]: 2025-10-14 09:12:19.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:19.879 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:08:ac 10.100.0.4'], port_security=['fa:16:3e:2b:08:ac 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1141f79e-2e47-40f1-91b0-275a9fac765c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15e6e95c-6cd2-4631-98f6-9ed276458c39', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f10ae705d9a34608a922683282b952b5', 'neutron:revision_number': '6', 'neutron:security_group_ids': '908d59ff-8d75-4488-a358-b4621923e921', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d2a29a9f-f981-430b-8a8f-ff453e5ae1a6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=7ce99440-fa49-4876-bb38-fce631d40400) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:12:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:19.880 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 7ce99440-fa49-4876-bb38-fce631d40400 in datapath 15e6e95c-6cd2-4631-98f6-9ed276458c39 unbound from our chassis#033[00m
Oct 14 05:12:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:19.882 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 15e6e95c-6cd2-4631-98f6-9ed276458c39#033[00m
Oct 14 05:12:19 np0005486808 nova_compute[259627]: 2025-10-14 09:12:19.897 2 DEBUG nova.compute.provider_tree [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:12:19 np0005486808 nova_compute[259627]: 2025-10-14 09:12:19.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:19 np0005486808 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d00000052.scope: Deactivated successfully.
Oct 14 05:12:19 np0005486808 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d00000052.scope: Consumed 9.710s CPU time.
Oct 14 05:12:19 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1736: 305 pgs: 305 active+clean; 293 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 129 op/s
Oct 14 05:12:19 np0005486808 systemd-machined[214636]: Machine qemu-106-instance-00000052 terminated.
Oct 14 05:12:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:19.913 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fc135c42-77bf-458f-b76d-a7a64c3dd2fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:19 np0005486808 nova_compute[259627]: 2025-10-14 09:12:19.918 2 DEBUG nova.scheduler.client.report [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:12:19 np0005486808 nova_compute[259627]: 2025-10-14 09:12:19.943 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:19 np0005486808 nova_compute[259627]: 2025-10-14 09:12:19.944 2 DEBUG nova.compute.manager [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:12:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:19.945 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[52f999af-5ec3-4b22-b20f-8ef8ba9e4a29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:19.948 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d1d2668f-44ce-4c7c-bcb7-023b3acbb558]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:19 np0005486808 nova_compute[259627]: 2025-10-14 09:12:19.950 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:19 np0005486808 nova_compute[259627]: 2025-10-14 09:12:19.959 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:12:19 np0005486808 nova_compute[259627]: 2025-10-14 09:12:19.959 2 INFO nova.compute.claims [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:12:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:19.985 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[b872de77-1c75-45b3-a700-10bf5f691315]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:19 np0005486808 nova_compute[259627]: 2025-10-14 09:12:19.993 2 DEBUG nova.compute.manager [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:12:19 np0005486808 nova_compute[259627]: 2025-10-14 09:12:19.994 2 DEBUG nova.network.neutron [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:12:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:20.014 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[90dcba27-85cf-4c08-b0d0-c901d5ef3206]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap15e6e95c-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:6b:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 12, 'rx_bytes': 1000, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 12, 'rx_bytes': 1000, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 243], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686914, 'reachable_time': 20198, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347118, 'error': None, 'target': 'ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.030 2 INFO nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:12:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:20.043 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bf3c76c5-081a-43e8-9489-f3f276e782a7]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap15e6e95c-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 686938, 'tstamp': 686938}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347119, 'error': None, 'target': 'ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap15e6e95c-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 686941, 'tstamp': 686941}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347119, 'error': None, 'target': 'ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:20.045 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15e6e95c-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.056 2 DEBUG nova.compute.manager [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:12:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:20.058 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap15e6e95c-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:20.059 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:12:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:20.060 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap15e6e95c-60, col_values=(('external_ids', {'iface-id': '970f2645-7ec3-4b7f-8527-871800c728d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:20.060 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.067 2 INFO nova.virt.libvirt.driver [-] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Instance destroyed successfully.#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.068 2 DEBUG nova.objects.instance [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lazy-loading 'resources' on Instance uuid 1141f79e-2e47-40f1-91b0-275a9fac765c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.098 2 DEBUG nova.virt.libvirt.vif [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:11:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-624290930',display_name='tempest-ServerRescueNegativeTestJSON-server-624290930',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-624290930',id=82,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:12:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f10ae705d9a34608a922683282b952b5',ramdisk_id='',reservation_id='r-njr7lxft',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='v
irtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1031174086',owner_user_name='tempest-ServerRescueNegativeTestJSON-1031174086-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:12:11Z,user_data=None,user_id='aa1425f7fdfc4218bdabfe2458cd1c60',uuid=1141f79e-2e47-40f1-91b0-275a9fac765c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "7ce99440-fa49-4876-bb38-fce631d40400", "address": "fa:16:3e:2b:08:ac", "network": {"id": "15e6e95c-6cd2-4631-98f6-9ed276458c39", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1009216431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f10ae705d9a34608a922683282b952b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ce99440-fa", "ovs_interfaceid": "7ce99440-fa49-4876-bb38-fce631d40400", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.098 2 DEBUG nova.network.os_vif_util [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Converting VIF {"id": "7ce99440-fa49-4876-bb38-fce631d40400", "address": "fa:16:3e:2b:08:ac", "network": {"id": "15e6e95c-6cd2-4631-98f6-9ed276458c39", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1009216431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f10ae705d9a34608a922683282b952b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ce99440-fa", "ovs_interfaceid": "7ce99440-fa49-4876-bb38-fce631d40400", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.099 2 DEBUG nova.network.os_vif_util [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2b:08:ac,bridge_name='br-int',has_traffic_filtering=True,id=7ce99440-fa49-4876-bb38-fce631d40400,network=Network(15e6e95c-6cd2-4631-98f6-9ed276458c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7ce99440-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.100 2 DEBUG os_vif [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2b:08:ac,bridge_name='br-int',has_traffic_filtering=True,id=7ce99440-fa49-4876-bb38-fce631d40400,network=Network(15e6e95c-6cd2-4631-98f6-9ed276458c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7ce99440-fa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.105 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ce99440-fa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.111 2 INFO os_vif [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2b:08:ac,bridge_name='br-int',has_traffic_filtering=True,id=7ce99440-fa49-4876-bb38-fce631d40400,network=Network(15e6e95c-6cd2-4631-98f6-9ed276458c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7ce99440-fa')#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.180 2 DEBUG nova.compute.manager [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.183 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.184 2 INFO nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Creating image(s)#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.207 2 DEBUG nova.storage.rbd_utils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image 16c1b8b8-cda9-45f9-994f-3f102eb85e1e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.227 2 DEBUG nova.storage.rbd_utils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image 16c1b8b8-cda9-45f9-994f-3f102eb85e1e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.250 2 DEBUG nova.storage.rbd_utils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image 16c1b8b8-cda9-45f9-994f-3f102eb85e1e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.254 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.318 2 DEBUG nova.policy [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f50b95774c384c5a8414b197ed5d7b82', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5a057db932754d6eae91f0d2f359f1ff', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.323 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.363 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.365 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.365 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.366 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.391 2 DEBUG nova.storage.rbd_utils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image 16c1b8b8-cda9-45f9-994f-3f102eb85e1e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.395 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 16c1b8b8-cda9-45f9-994f-3f102eb85e1e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.715 2 DEBUG nova.compute.manager [req-403689e3-213e-442b-9ca9-a735b5ceee7d req-5d9ea70f-3394-4e00-b6bc-fc5b2462fd70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Received event network-vif-unplugged-7ce99440-fa49-4876-bb38-fce631d40400 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.715 2 DEBUG oslo_concurrency.lockutils [req-403689e3-213e-442b-9ca9-a735b5ceee7d req-5d9ea70f-3394-4e00-b6bc-fc5b2462fd70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.715 2 DEBUG oslo_concurrency.lockutils [req-403689e3-213e-442b-9ca9-a735b5ceee7d req-5d9ea70f-3394-4e00-b6bc-fc5b2462fd70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.716 2 DEBUG oslo_concurrency.lockutils [req-403689e3-213e-442b-9ca9-a735b5ceee7d req-5d9ea70f-3394-4e00-b6bc-fc5b2462fd70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.716 2 DEBUG nova.compute.manager [req-403689e3-213e-442b-9ca9-a735b5ceee7d req-5d9ea70f-3394-4e00-b6bc-fc5b2462fd70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] No waiting events found dispatching network-vif-unplugged-7ce99440-fa49-4876-bb38-fce631d40400 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.716 2 DEBUG nova.compute.manager [req-403689e3-213e-442b-9ca9-a735b5ceee7d req-5d9ea70f-3394-4e00-b6bc-fc5b2462fd70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Received event network-vif-unplugged-7ce99440-fa49-4876-bb38-fce631d40400 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.717 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 16c1b8b8-cda9-45f9-994f-3f102eb85e1e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.322s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:20 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:12:20 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/770145722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.797 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.806 2 DEBUG nova.storage.rbd_utils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] resizing rbd image 16c1b8b8-cda9-45f9-994f-3f102eb85e1e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.852 2 DEBUG nova.compute.provider_tree [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.871 2 DEBUG nova.scheduler.client.report [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.923 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.973s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.924 2 DEBUG nova.compute.manager [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.932 2 DEBUG nova.objects.instance [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lazy-loading 'migration_context' on Instance uuid 16c1b8b8-cda9-45f9-994f-3f102eb85e1e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.952 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.952 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Ensure instance console log exists: /var/lib/nova/instances/16c1b8b8-cda9-45f9-994f-3f102eb85e1e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.952 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.953 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.953 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.980 2 DEBUG nova.compute.manager [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.981 2 DEBUG nova.network.neutron [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:12:20 np0005486808 nova_compute[259627]: 2025-10-14 09:12:20.998 2 INFO nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:12:21 np0005486808 nova_compute[259627]: 2025-10-14 09:12:21.016 2 DEBUG nova.compute.manager [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:12:21 np0005486808 nova_compute[259627]: 2025-10-14 09:12:21.033 2 DEBUG nova.compute.manager [req-a2901f3b-c347-4221-8719-54b8fafc13a7 req-cb1fa33f-032d-4ac9-a1a6-66edf9e29083 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received event network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:12:21 np0005486808 nova_compute[259627]: 2025-10-14 09:12:21.033 2 DEBUG oslo_concurrency.lockutils [req-a2901f3b-c347-4221-8719-54b8fafc13a7 req-cb1fa33f-032d-4ac9-a1a6-66edf9e29083 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:21 np0005486808 nova_compute[259627]: 2025-10-14 09:12:21.034 2 DEBUG oslo_concurrency.lockutils [req-a2901f3b-c347-4221-8719-54b8fafc13a7 req-cb1fa33f-032d-4ac9-a1a6-66edf9e29083 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:21 np0005486808 nova_compute[259627]: 2025-10-14 09:12:21.034 2 DEBUG oslo_concurrency.lockutils [req-a2901f3b-c347-4221-8719-54b8fafc13a7 req-cb1fa33f-032d-4ac9-a1a6-66edf9e29083 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:21 np0005486808 nova_compute[259627]: 2025-10-14 09:12:21.034 2 DEBUG nova.compute.manager [req-a2901f3b-c347-4221-8719-54b8fafc13a7 req-cb1fa33f-032d-4ac9-a1a6-66edf9e29083 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] No waiting events found dispatching network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:12:21 np0005486808 nova_compute[259627]: 2025-10-14 09:12:21.034 2 WARNING nova.compute.manager [req-a2901f3b-c347-4221-8719-54b8fafc13a7 req-cb1fa33f-032d-4ac9-a1a6-66edf9e29083 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received unexpected event network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:12:21 np0005486808 nova_compute[259627]: 2025-10-14 09:12:21.113 2 DEBUG nova.compute.manager [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:12:21 np0005486808 nova_compute[259627]: 2025-10-14 09:12:21.115 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:12:21 np0005486808 nova_compute[259627]: 2025-10-14 09:12:21.115 2 INFO nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Creating image(s)#033[00m
Oct 14 05:12:21 np0005486808 nova_compute[259627]: 2025-10-14 09:12:21.155 2 DEBUG nova.storage.rbd_utils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image ab89bfba-67b2-4767-90f2-7ef5dab476c0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:21 np0005486808 nova_compute[259627]: 2025-10-14 09:12:21.180 2 DEBUG nova.storage.rbd_utils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image ab89bfba-67b2-4767-90f2-7ef5dab476c0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:21 np0005486808 nova_compute[259627]: 2025-10-14 09:12:21.210 2 DEBUG nova.storage.rbd_utils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image ab89bfba-67b2-4767-90f2-7ef5dab476c0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:21 np0005486808 nova_compute[259627]: 2025-10-14 09:12:21.214 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:21 np0005486808 nova_compute[259627]: 2025-10-14 09:12:21.259 2 DEBUG nova.policy [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f50b95774c384c5a8414b197ed5d7b82', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5a057db932754d6eae91f0d2f359f1ff', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:12:21 np0005486808 nova_compute[259627]: 2025-10-14 09:12:21.271 2 INFO nova.virt.libvirt.driver [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Deleting instance files /var/lib/nova/instances/1141f79e-2e47-40f1-91b0-275a9fac765c_del#033[00m
Oct 14 05:12:21 np0005486808 nova_compute[259627]: 2025-10-14 09:12:21.272 2 INFO nova.virt.libvirt.driver [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Deletion of /var/lib/nova/instances/1141f79e-2e47-40f1-91b0-275a9fac765c_del complete#033[00m
Oct 14 05:12:21 np0005486808 nova_compute[259627]: 2025-10-14 09:12:21.297 2 DEBUG nova.network.neutron [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Successfully created port: b0cc5216-2023-4add-97a9-4bafe30fd8c3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:12:21 np0005486808 nova_compute[259627]: 2025-10-14 09:12:21.304 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:21 np0005486808 nova_compute[259627]: 2025-10-14 09:12:21.304 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:21 np0005486808 nova_compute[259627]: 2025-10-14 09:12:21.305 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:21 np0005486808 nova_compute[259627]: 2025-10-14 09:12:21.305 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:21 np0005486808 nova_compute[259627]: 2025-10-14 09:12:21.327 2 DEBUG nova.storage.rbd_utils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image ab89bfba-67b2-4767-90f2-7ef5dab476c0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:21 np0005486808 nova_compute[259627]: 2025-10-14 09:12:21.332 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 ab89bfba-67b2-4767-90f2-7ef5dab476c0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:21 np0005486808 nova_compute[259627]: 2025-10-14 09:12:21.386 2 INFO nova.compute.manager [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Took 1.57 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:12:21 np0005486808 nova_compute[259627]: 2025-10-14 09:12:21.387 2 DEBUG oslo.service.loopingcall [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:12:21 np0005486808 nova_compute[259627]: 2025-10-14 09:12:21.387 2 DEBUG nova.compute.manager [-] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:12:21 np0005486808 nova_compute[259627]: 2025-10-14 09:12:21.387 2 DEBUG nova.network.neutron [-] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:12:21 np0005486808 nova_compute[259627]: 2025-10-14 09:12:21.648 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 ab89bfba-67b2-4767-90f2-7ef5dab476c0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.316s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:21 np0005486808 nova_compute[259627]: 2025-10-14 09:12:21.728 2 DEBUG nova.storage.rbd_utils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] resizing rbd image ab89bfba-67b2-4767-90f2-7ef5dab476c0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:12:21 np0005486808 nova_compute[259627]: 2025-10-14 09:12:21.865 2 DEBUG nova.objects.instance [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lazy-loading 'migration_context' on Instance uuid ab89bfba-67b2-4767-90f2-7ef5dab476c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:12:21 np0005486808 nova_compute[259627]: 2025-10-14 09:12:21.888 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:12:21 np0005486808 nova_compute[259627]: 2025-10-14 09:12:21.889 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Ensure instance console log exists: /var/lib/nova/instances/ab89bfba-67b2-4767-90f2-7ef5dab476c0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:12:21 np0005486808 nova_compute[259627]: 2025-10-14 09:12:21.890 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:21 np0005486808 nova_compute[259627]: 2025-10-14 09:12:21.891 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:21 np0005486808 nova_compute[259627]: 2025-10-14 09:12:21.891 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:21 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1737: 305 pgs: 305 active+clean; 277 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 206 op/s
Oct 14 05:12:21 np0005486808 nova_compute[259627]: 2025-10-14 09:12:21.926 2 DEBUG nova.network.neutron [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Successfully created port: 1037d287-c167-4691-9393-55be86ecbab2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:12:22 np0005486808 nova_compute[259627]: 2025-10-14 09:12:22.660 2 DEBUG oslo_concurrency.lockutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "4774788b-1dc2-40c6-87d0-db4e3f54a609" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:22 np0005486808 nova_compute[259627]: 2025-10-14 09:12:22.661 2 DEBUG oslo_concurrency.lockutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "4774788b-1dc2-40c6-87d0-db4e3f54a609" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:22 np0005486808 nova_compute[259627]: 2025-10-14 09:12:22.686 2 DEBUG nova.compute.manager [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:12:22 np0005486808 nova_compute[259627]: 2025-10-14 09:12:22.781 2 DEBUG oslo_concurrency.lockutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:22 np0005486808 nova_compute[259627]: 2025-10-14 09:12:22.781 2 DEBUG oslo_concurrency.lockutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:12:22 np0005486808 nova_compute[259627]: 2025-10-14 09:12:22.787 2 DEBUG nova.virt.hardware [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:12:22 np0005486808 nova_compute[259627]: 2025-10-14 09:12:22.788 2 INFO nova.compute.claims [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:12:22 np0005486808 nova_compute[259627]: 2025-10-14 09:12:22.812 2 DEBUG nova.network.neutron [-] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:12:22 np0005486808 nova_compute[259627]: 2025-10-14 09:12:22.820 2 DEBUG nova.compute.manager [req-eb06740f-9ab0-430c-8156-632875b89e02 req-a1b77354-0add-4fc7-991f-2ce66977b46d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Received event network-vif-plugged-7ce99440-fa49-4876-bb38-fce631d40400 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:12:22 np0005486808 nova_compute[259627]: 2025-10-14 09:12:22.820 2 DEBUG oslo_concurrency.lockutils [req-eb06740f-9ab0-430c-8156-632875b89e02 req-a1b77354-0add-4fc7-991f-2ce66977b46d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:22 np0005486808 nova_compute[259627]: 2025-10-14 09:12:22.820 2 DEBUG oslo_concurrency.lockutils [req-eb06740f-9ab0-430c-8156-632875b89e02 req-a1b77354-0add-4fc7-991f-2ce66977b46d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:22 np0005486808 nova_compute[259627]: 2025-10-14 09:12:22.821 2 DEBUG oslo_concurrency.lockutils [req-eb06740f-9ab0-430c-8156-632875b89e02 req-a1b77354-0add-4fc7-991f-2ce66977b46d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1141f79e-2e47-40f1-91b0-275a9fac765c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:22 np0005486808 nova_compute[259627]: 2025-10-14 09:12:22.821 2 DEBUG nova.compute.manager [req-eb06740f-9ab0-430c-8156-632875b89e02 req-a1b77354-0add-4fc7-991f-2ce66977b46d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] No waiting events found dispatching network-vif-plugged-7ce99440-fa49-4876-bb38-fce631d40400 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:12:22 np0005486808 nova_compute[259627]: 2025-10-14 09:12:22.821 2 WARNING nova.compute.manager [req-eb06740f-9ab0-430c-8156-632875b89e02 req-a1b77354-0add-4fc7-991f-2ce66977b46d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Received unexpected event network-vif-plugged-7ce99440-fa49-4876-bb38-fce631d40400 for instance with vm_state rescued and task_state deleting.#033[00m
Oct 14 05:12:22 np0005486808 nova_compute[259627]: 2025-10-14 09:12:22.849 2 INFO nova.compute.manager [-] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Took 1.46 seconds to deallocate network for instance.#033[00m
Oct 14 05:12:22 np0005486808 nova_compute[259627]: 2025-10-14 09:12:22.857 2 DEBUG nova.network.neutron [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Successfully updated port: b0cc5216-2023-4add-97a9-4bafe30fd8c3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:12:22 np0005486808 nova_compute[259627]: 2025-10-14 09:12:22.897 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "refresh_cache-16c1b8b8-cda9-45f9-994f-3f102eb85e1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:12:22 np0005486808 nova_compute[259627]: 2025-10-14 09:12:22.897 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquired lock "refresh_cache-16c1b8b8-cda9-45f9-994f-3f102eb85e1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:12:22 np0005486808 nova_compute[259627]: 2025-10-14 09:12:22.897 2 DEBUG nova.network.neutron [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:12:22 np0005486808 nova_compute[259627]: 2025-10-14 09:12:22.906 2 DEBUG oslo_concurrency.lockutils [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:22 np0005486808 nova_compute[259627]: 2025-10-14 09:12:22.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:23 np0005486808 nova_compute[259627]: 2025-10-14 09:12:23.046 2 DEBUG oslo_concurrency.processutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:23 np0005486808 nova_compute[259627]: 2025-10-14 09:12:23.117 2 DEBUG nova.compute.manager [req-b64e8aa8-cd40-40ed-9dec-eae8af35a2f1 req-bb893528-5658-40b2-8cc8-aab02ca5b407 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Received event network-vif-deleted-7ce99440-fa49-4876-bb38-fce631d40400 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:12:23 np0005486808 nova_compute[259627]: 2025-10-14 09:12:23.118 2 DEBUG nova.compute.manager [req-b64e8aa8-cd40-40ed-9dec-eae8af35a2f1 req-bb893528-5658-40b2-8cc8-aab02ca5b407 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Received event network-changed-b0cc5216-2023-4add-97a9-4bafe30fd8c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:12:23 np0005486808 nova_compute[259627]: 2025-10-14 09:12:23.119 2 DEBUG nova.compute.manager [req-b64e8aa8-cd40-40ed-9dec-eae8af35a2f1 req-bb893528-5658-40b2-8cc8-aab02ca5b407 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Refreshing instance network info cache due to event network-changed-b0cc5216-2023-4add-97a9-4bafe30fd8c3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:12:23 np0005486808 nova_compute[259627]: 2025-10-14 09:12:23.119 2 DEBUG oslo_concurrency.lockutils [req-b64e8aa8-cd40-40ed-9dec-eae8af35a2f1 req-bb893528-5658-40b2-8cc8-aab02ca5b407 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-16c1b8b8-cda9-45f9-994f-3f102eb85e1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:12:23 np0005486808 nova_compute[259627]: 2025-10-14 09:12:23.249 2 DEBUG nova.network.neutron [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Successfully updated port: 1037d287-c167-4691-9393-55be86ecbab2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:12:23 np0005486808 nova_compute[259627]: 2025-10-14 09:12:23.283 2 DEBUG nova.network.neutron [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:12:23 np0005486808 nova_compute[259627]: 2025-10-14 09:12:23.291 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "refresh_cache-ab89bfba-67b2-4767-90f2-7ef5dab476c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:12:23 np0005486808 nova_compute[259627]: 2025-10-14 09:12:23.291 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquired lock "refresh_cache-ab89bfba-67b2-4767-90f2-7ef5dab476c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:12:23 np0005486808 nova_compute[259627]: 2025-10-14 09:12:23.291 2 DEBUG nova.network.neutron [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:12:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:12:23 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3034072506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:12:23 np0005486808 nova_compute[259627]: 2025-10-14 09:12:23.502 2 DEBUG oslo_concurrency.processutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:23 np0005486808 nova_compute[259627]: 2025-10-14 09:12:23.511 2 DEBUG nova.compute.provider_tree [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:12:23 np0005486808 nova_compute[259627]: 2025-10-14 09:12:23.528 2 DEBUG nova.network.neutron [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:12:23 np0005486808 nova_compute[259627]: 2025-10-14 09:12:23.534 2 DEBUG nova.scheduler.client.report [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:12:23 np0005486808 nova_compute[259627]: 2025-10-14 09:12:23.562 2 DEBUG oslo_concurrency.lockutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:23 np0005486808 nova_compute[259627]: 2025-10-14 09:12:23.563 2 DEBUG nova.compute.manager [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:12:23 np0005486808 nova_compute[259627]: 2025-10-14 09:12:23.567 2 DEBUG oslo_concurrency.lockutils [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:23 np0005486808 nova_compute[259627]: 2025-10-14 09:12:23.631 2 DEBUG nova.compute.manager [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:12:23 np0005486808 nova_compute[259627]: 2025-10-14 09:12:23.632 2 DEBUG nova.network.neutron [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:12:23 np0005486808 nova_compute[259627]: 2025-10-14 09:12:23.655 2 INFO nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:12:23 np0005486808 nova_compute[259627]: 2025-10-14 09:12:23.677 2 DEBUG nova.compute.manager [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:12:23 np0005486808 nova_compute[259627]: 2025-10-14 09:12:23.731 2 DEBUG oslo_concurrency.processutils [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:23 np0005486808 nova_compute[259627]: 2025-10-14 09:12:23.832 2 DEBUG nova.policy [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '92e59e145f6942b78d0ffbebc4d89e76', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '517aafb84156407c8672042097e3ef4f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:12:23 np0005486808 nova_compute[259627]: 2025-10-14 09:12:23.836 2 DEBUG nova.compute.manager [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:12:23 np0005486808 nova_compute[259627]: 2025-10-14 09:12:23.838 2 DEBUG nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:12:23 np0005486808 nova_compute[259627]: 2025-10-14 09:12:23.838 2 INFO nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Creating image(s)#033[00m
Oct 14 05:12:23 np0005486808 nova_compute[259627]: 2025-10-14 09:12:23.862 2 DEBUG nova.storage.rbd_utils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] rbd image 4774788b-1dc2-40c6-87d0-db4e3f54a609_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:23 np0005486808 nova_compute[259627]: 2025-10-14 09:12:23.887 2 DEBUG nova.storage.rbd_utils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] rbd image 4774788b-1dc2-40c6-87d0-db4e3f54a609_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:23 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1738: 305 pgs: 305 active+clean; 277 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 69 KiB/s wr, 158 op/s
Oct 14 05:12:23 np0005486808 nova_compute[259627]: 2025-10-14 09:12:23.921 2 DEBUG nova.storage.rbd_utils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] rbd image 4774788b-1dc2-40c6-87d0-db4e3f54a609_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:23 np0005486808 nova_compute[259627]: 2025-10-14 09:12:23.925 2 DEBUG oslo_concurrency.processutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.025 2 DEBUG oslo_concurrency.processutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.026 2 DEBUG oslo_concurrency.lockutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.027 2 DEBUG oslo_concurrency.lockutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.027 2 DEBUG oslo_concurrency.lockutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.047 2 DEBUG nova.storage.rbd_utils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] rbd image 4774788b-1dc2-40c6-87d0-db4e3f54a609_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.051 2 DEBUG oslo_concurrency.processutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 4774788b-1dc2-40c6-87d0-db4e3f54a609_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:24 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:12:24 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3552245106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.230 2 DEBUG oslo_concurrency.processutils [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.260 2 DEBUG nova.compute.provider_tree [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.288 2 DEBUG nova.scheduler.client.report [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.318 2 DEBUG oslo_concurrency.lockutils [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.336 2 DEBUG nova.network.neutron [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Updating instance_info_cache with network_info: [{"id": "b0cc5216-2023-4add-97a9-4bafe30fd8c3", "address": "fa:16:3e:52:4e:ee", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0cc5216-20", "ovs_interfaceid": "b0cc5216-2023-4add-97a9-4bafe30fd8c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.346 2 DEBUG oslo_concurrency.processutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 4774788b-1dc2-40c6-87d0-db4e3f54a609_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.295s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.386 2 INFO nova.scheduler.client.report [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Deleted allocations for instance 1141f79e-2e47-40f1-91b0-275a9fac765c#033[00m
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.388 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Releasing lock "refresh_cache-16c1b8b8-cda9-45f9-994f-3f102eb85e1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.388 2 DEBUG nova.compute.manager [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Instance network_info: |[{"id": "b0cc5216-2023-4add-97a9-4bafe30fd8c3", "address": "fa:16:3e:52:4e:ee", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0cc5216-20", "ovs_interfaceid": "b0cc5216-2023-4add-97a9-4bafe30fd8c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.390 2 DEBUG oslo_concurrency.lockutils [req-b64e8aa8-cd40-40ed-9dec-eae8af35a2f1 req-bb893528-5658-40b2-8cc8-aab02ca5b407 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-16c1b8b8-cda9-45f9-994f-3f102eb85e1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.391 2 DEBUG nova.network.neutron [req-b64e8aa8-cd40-40ed-9dec-eae8af35a2f1 req-bb893528-5658-40b2-8cc8-aab02ca5b407 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Refreshing network info cache for port b0cc5216-2023-4add-97a9-4bafe30fd8c3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.396 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Start _get_guest_xml network_info=[{"id": "b0cc5216-2023-4add-97a9-4bafe30fd8c3", "address": "fa:16:3e:52:4e:ee", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0cc5216-20", "ovs_interfaceid": "b0cc5216-2023-4add-97a9-4bafe30fd8c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.446 2 DEBUG nova.storage.rbd_utils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] resizing rbd image 4774788b-1dc2-40c6-87d0-db4e3f54a609_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.492 2 WARNING nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.496 2 DEBUG nova.virt.libvirt.host [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.497 2 DEBUG nova.virt.libvirt.host [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.501 2 DEBUG nova.virt.libvirt.host [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.502 2 DEBUG nova.virt.libvirt.host [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.502 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.502 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.503 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.503 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.503 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.503 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.503 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.504 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.504 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.504 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.504 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.504 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.507 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.563 2 DEBUG oslo_concurrency.lockutils [None req-dc5811c3-b0e8-43dc-bd2a-90c601cbd6a7 aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "1141f79e-2e47-40f1-91b0-275a9fac765c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.626 2 DEBUG nova.objects.instance [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lazy-loading 'migration_context' on Instance uuid 4774788b-1dc2-40c6-87d0-db4e3f54a609 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.642 2 DEBUG nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.643 2 DEBUG nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Ensure instance console log exists: /var/lib/nova/instances/4774788b-1dc2-40c6-87d0-db4e3f54a609/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.643 2 DEBUG oslo_concurrency.lockutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.644 2 DEBUG oslo_concurrency.lockutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.644 2 DEBUG oslo_concurrency.lockutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.645 2 DEBUG nova.network.neutron [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Successfully created port: 26bcc700-59c6-4e79-904d-988cd11152c8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:12:24 np0005486808 nova_compute[259627]: 2025-10-14 09:12:24.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.025 2 DEBUG nova.network.neutron [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Updating instance_info_cache with network_info: [{"id": "1037d287-c167-4691-9393-55be86ecbab2", "address": "fa:16:3e:08:ea:3e", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1037d287-c1", "ovs_interfaceid": "1037d287-c167-4691-9393-55be86ecbab2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.049 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Releasing lock "refresh_cache-ab89bfba-67b2-4767-90f2-7ef5dab476c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.050 2 DEBUG nova.compute.manager [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Instance network_info: |[{"id": "1037d287-c167-4691-9393-55be86ecbab2", "address": "fa:16:3e:08:ea:3e", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1037d287-c1", "ovs_interfaceid": "1037d287-c167-4691-9393-55be86ecbab2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.054 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Start _get_guest_xml network_info=[{"id": "1037d287-c167-4691-9393-55be86ecbab2", "address": "fa:16:3e:08:ea:3e", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1037d287-c1", "ovs_interfaceid": "1037d287-c167-4691-9393-55be86ecbab2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.060 2 WARNING nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.066 2 DEBUG nova.virt.libvirt.host [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:12:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:12:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3541068764' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.068 2 DEBUG nova.virt.libvirt.host [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.071 2 DEBUG nova.virt.libvirt.host [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.072 2 DEBUG nova.virt.libvirt.host [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.073 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.073 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.074 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.074 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.075 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.075 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.076 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.076 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.077 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.077 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.078 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.078 2 DEBUG nova.virt.hardware [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.083 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.125 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.152 2 DEBUG nova.storage.rbd_utils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image 16c1b8b8-cda9-45f9-994f-3f102eb85e1e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.157 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.377 2 DEBUG nova.compute.manager [req-9a426975-42fd-4497-a4fc-4b853b057ebc req-1df1fe81-05bd-4ba2-ad17-1885ca1372c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Received event network-changed-1037d287-c167-4691-9393-55be86ecbab2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.378 2 DEBUG nova.compute.manager [req-9a426975-42fd-4497-a4fc-4b853b057ebc req-1df1fe81-05bd-4ba2-ad17-1885ca1372c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Refreshing instance network info cache due to event network-changed-1037d287-c167-4691-9393-55be86ecbab2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.379 2 DEBUG oslo_concurrency.lockutils [req-9a426975-42fd-4497-a4fc-4b853b057ebc req-1df1fe81-05bd-4ba2-ad17-1885ca1372c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-ab89bfba-67b2-4767-90f2-7ef5dab476c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.379 2 DEBUG oslo_concurrency.lockutils [req-9a426975-42fd-4497-a4fc-4b853b057ebc req-1df1fe81-05bd-4ba2-ad17-1885ca1372c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-ab89bfba-67b2-4767-90f2-7ef5dab476c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.380 2 DEBUG nova.network.neutron [req-9a426975-42fd-4497-a4fc-4b853b057ebc req-1df1fe81-05bd-4ba2-ad17-1885ca1372c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Refreshing network info cache for port 1037d287-c167-4691-9393-55be86ecbab2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:12:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:12:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1852324471' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.560 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.584 2 DEBUG nova.storage.rbd_utils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image ab89bfba-67b2-4767-90f2-7ef5dab476c0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.588 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:12:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/749511871' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.638 2 DEBUG nova.network.neutron [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Successfully updated port: 26bcc700-59c6-4e79-904d-988cd11152c8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.644 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.646 2 DEBUG nova.virt.libvirt.vif [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:12:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1135349362',display_name='tempest-tempest.common.compute-instance-1135349362-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1135349362-1',id=87,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5a057db932754d6eae91f0d2f359f1ff',ramdisk_id='',reservation_id='r-zo7br5c0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-2115206001',owner_user_name='tempest-MultipleCr
eateTestJSON-2115206001-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:12:20Z,user_data=None,user_id='f50b95774c384c5a8414b197ed5d7b82',uuid=16c1b8b8-cda9-45f9-994f-3f102eb85e1e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b0cc5216-2023-4add-97a9-4bafe30fd8c3", "address": "fa:16:3e:52:4e:ee", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0cc5216-20", "ovs_interfaceid": "b0cc5216-2023-4add-97a9-4bafe30fd8c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.647 2 DEBUG nova.network.os_vif_util [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converting VIF {"id": "b0cc5216-2023-4add-97a9-4bafe30fd8c3", "address": "fa:16:3e:52:4e:ee", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0cc5216-20", "ovs_interfaceid": "b0cc5216-2023-4add-97a9-4bafe30fd8c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.648 2 DEBUG nova.network.os_vif_util [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:4e:ee,bridge_name='br-int',has_traffic_filtering=True,id=b0cc5216-2023-4add-97a9-4bafe30fd8c3,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0cc5216-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.649 2 DEBUG nova.objects.instance [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lazy-loading 'pci_devices' on Instance uuid 16c1b8b8-cda9-45f9-994f-3f102eb85e1e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.670 2 DEBUG oslo_concurrency.lockutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "refresh_cache-4774788b-1dc2-40c6-87d0-db4e3f54a609" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.670 2 DEBUG oslo_concurrency.lockutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquired lock "refresh_cache-4774788b-1dc2-40c6-87d0-db4e3f54a609" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.670 2 DEBUG nova.network.neutron [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.675 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:12:25 np0005486808 nova_compute[259627]:  <uuid>16c1b8b8-cda9-45f9-994f-3f102eb85e1e</uuid>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:  <name>instance-00000057</name>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:12:25 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:      <nova:name>tempest-tempest.common.compute-instance-1135349362-1</nova:name>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:12:24</nova:creationTime>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:12:25 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:        <nova:user uuid="f50b95774c384c5a8414b197ed5d7b82">tempest-MultipleCreateTestJSON-2115206001-project-member</nova:user>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:        <nova:project uuid="5a057db932754d6eae91f0d2f359f1ff">tempest-MultipleCreateTestJSON-2115206001</nova:project>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:        <nova:port uuid="b0cc5216-2023-4add-97a9-4bafe30fd8c3">
Oct 14 05:12:25 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:      <entry name="serial">16c1b8b8-cda9-45f9-994f-3f102eb85e1e</entry>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:      <entry name="uuid">16c1b8b8-cda9-45f9-994f-3f102eb85e1e</entry>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:12:25 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/16c1b8b8-cda9-45f9-994f-3f102eb85e1e_disk">
Oct 14 05:12:25 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:12:25 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:12:25 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/16c1b8b8-cda9-45f9-994f-3f102eb85e1e_disk.config">
Oct 14 05:12:25 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:12:25 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:12:25 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:52:4e:ee"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:      <target dev="tapb0cc5216-20"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:12:25 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/16c1b8b8-cda9-45f9-994f-3f102eb85e1e/console.log" append="off"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:12:25 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:12:25 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:12:25 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:12:25 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:12:25 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.677 2 DEBUG nova.compute.manager [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Preparing to wait for external event network-vif-plugged-b0cc5216-2023-4add-97a9-4bafe30fd8c3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.677 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.678 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.678 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.679 2 DEBUG nova.virt.libvirt.vif [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:12:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1135349362',display_name='tempest-tempest.common.compute-instance-1135349362-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1135349362-1',id=87,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5a057db932754d6eae91f0d2f359f1ff',ramdisk_id='',reservation_id='r-zo7br5c0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-2115206001',owner_user_name='tempest-
MultipleCreateTestJSON-2115206001-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:12:20Z,user_data=None,user_id='f50b95774c384c5a8414b197ed5d7b82',uuid=16c1b8b8-cda9-45f9-994f-3f102eb85e1e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b0cc5216-2023-4add-97a9-4bafe30fd8c3", "address": "fa:16:3e:52:4e:ee", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0cc5216-20", "ovs_interfaceid": "b0cc5216-2023-4add-97a9-4bafe30fd8c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.680 2 DEBUG nova.network.os_vif_util [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converting VIF {"id": "b0cc5216-2023-4add-97a9-4bafe30fd8c3", "address": "fa:16:3e:52:4e:ee", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0cc5216-20", "ovs_interfaceid": "b0cc5216-2023-4add-97a9-4bafe30fd8c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.681 2 DEBUG nova.network.os_vif_util [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:4e:ee,bridge_name='br-int',has_traffic_filtering=True,id=b0cc5216-2023-4add-97a9-4bafe30fd8c3,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0cc5216-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.682 2 DEBUG os_vif [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:4e:ee,bridge_name='br-int',has_traffic_filtering=True,id=b0cc5216-2023-4add-97a9-4bafe30fd8c3,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0cc5216-20') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.683 2 DEBUG oslo_concurrency.lockutils [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Acquiring lock "70e3c250-cd38-4718-9a7f-0fbf7bf471fe" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.683 2 DEBUG oslo_concurrency.lockutils [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "70e3c250-cd38-4718-9a7f-0fbf7bf471fe" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.683 2 DEBUG oslo_concurrency.lockutils [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Acquiring lock "70e3c250-cd38-4718-9a7f-0fbf7bf471fe-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.684 2 DEBUG oslo_concurrency.lockutils [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "70e3c250-cd38-4718-9a7f-0fbf7bf471fe-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.684 2 DEBUG oslo_concurrency.lockutils [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "70e3c250-cd38-4718-9a7f-0fbf7bf471fe-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.686 2 INFO nova.compute.manager [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Terminating instance#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.687 2 DEBUG nova.compute.manager [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.689 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.689 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.697 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb0cc5216-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.697 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb0cc5216-20, col_values=(('external_ids', {'iface-id': 'b0cc5216-2023-4add-97a9-4bafe30fd8c3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:52:4e:ee', 'vm-uuid': '16c1b8b8-cda9-45f9-994f-3f102eb85e1e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:25 np0005486808 NetworkManager[44885]: <info>  [1760433145.7001] manager: (tapb0cc5216-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/369)
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.707 2 INFO os_vif [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:4e:ee,bridge_name='br-int',has_traffic_filtering=True,id=b0cc5216-2023-4add-97a9-4bafe30fd8c3,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0cc5216-20')#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.717 2 DEBUG nova.compute.manager [req-ac608442-b94d-4664-aeff-b75fbff5c4c7 req-5e6c74a1-66d1-4db8-b791-c4b89c2d26d1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Received event network-changed-26bcc700-59c6-4e79-904d-988cd11152c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.717 2 DEBUG nova.compute.manager [req-ac608442-b94d-4664-aeff-b75fbff5c4c7 req-5e6c74a1-66d1-4db8-b791-c4b89c2d26d1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Refreshing instance network info cache due to event network-changed-26bcc700-59c6-4e79-904d-988cd11152c8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.717 2 DEBUG oslo_concurrency.lockutils [req-ac608442-b94d-4664-aeff-b75fbff5c4c7 req-5e6c74a1-66d1-4db8-b791-c4b89c2d26d1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-4774788b-1dc2-40c6-87d0-db4e3f54a609" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:12:25 np0005486808 kernel: tapf027d20e-66 (unregistering): left promiscuous mode
Oct 14 05:12:25 np0005486808 NetworkManager[44885]: <info>  [1760433145.7638] device (tapf027d20e-66): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.776 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.777 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.778 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] No VIF found with MAC fa:16:3e:52:4e:ee, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.778 2 INFO nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Using config drive#033[00m
Oct 14 05:12:25 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:25Z|00903|binding|INFO|Releasing lport f027d20e-665b-4bd0-836c-7e8edb2b6bf7 from this chassis (sb_readonly=0)
Oct 14 05:12:25 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:25Z|00904|binding|INFO|Setting lport f027d20e-665b-4bd0-836c-7e8edb2b6bf7 down in Southbound
Oct 14 05:12:25 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:25Z|00905|binding|INFO|Removing iface tapf027d20e-66 ovn-installed in OVS
Oct 14 05:12:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:25.833 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:5e:c8 10.100.0.5'], port_security=['fa:16:3e:1a:5e:c8 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '70e3c250-cd38-4718-9a7f-0fbf7bf471fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15e6e95c-6cd2-4631-98f6-9ed276458c39', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f10ae705d9a34608a922683282b952b5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '908d59ff-8d75-4488-a358-b4621923e921', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d2a29a9f-f981-430b-8a8f-ff453e5ae1a6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=f027d20e-665b-4bd0-836c-7e8edb2b6bf7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:12:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:25.835 162547 INFO neutron.agent.ovn.metadata.agent [-] Port f027d20e-665b-4bd0-836c-7e8edb2b6bf7 in datapath 15e6e95c-6cd2-4631-98f6-9ed276458c39 unbound from our chassis#033[00m
Oct 14 05:12:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:25.836 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 15e6e95c-6cd2-4631-98f6-9ed276458c39, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.845 2 DEBUG nova.storage.rbd_utils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image 16c1b8b8-cda9-45f9-994f-3f102eb85e1e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:25 np0005486808 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d00000050.scope: Deactivated successfully.
Oct 14 05:12:25 np0005486808 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d00000050.scope: Consumed 15.581s CPU time.
Oct 14 05:12:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:25.837 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1e3ba941-96fe-41ed-aecd-80fcf8900198]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:25.849 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39 namespace which is not needed anymore#033[00m
Oct 14 05:12:25 np0005486808 systemd-machined[214636]: Machine qemu-100-instance-00000050 terminated.
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.900 2 DEBUG nova.network.neutron [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:12:25 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1739: 305 pgs: 305 active+clean; 306 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.3 MiB/s wr, 280 op/s
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.941 2 DEBUG nova.network.neutron [req-b64e8aa8-cd40-40ed-9dec-eae8af35a2f1 req-bb893528-5658-40b2-8cc8-aab02ca5b407 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Updated VIF entry in instance network info cache for port b0cc5216-2023-4add-97a9-4bafe30fd8c3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.942 2 DEBUG nova.network.neutron [req-b64e8aa8-cd40-40ed-9dec-eae8af35a2f1 req-bb893528-5658-40b2-8cc8-aab02ca5b407 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Updating instance_info_cache with network_info: [{"id": "b0cc5216-2023-4add-97a9-4bafe30fd8c3", "address": "fa:16:3e:52:4e:ee", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0cc5216-20", "ovs_interfaceid": "b0cc5216-2023-4add-97a9-4bafe30fd8c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.952 2 INFO nova.virt.libvirt.driver [-] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Instance destroyed successfully.#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.953 2 DEBUG nova.objects.instance [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lazy-loading 'resources' on Instance uuid 70e3c250-cd38-4718-9a7f-0fbf7bf471fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.964 2 DEBUG oslo_concurrency.lockutils [req-b64e8aa8-cd40-40ed-9dec-eae8af35a2f1 req-bb893528-5658-40b2-8cc8-aab02ca5b407 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-16c1b8b8-cda9-45f9-994f-3f102eb85e1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.966 2 DEBUG nova.virt.libvirt.vif [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:11:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1354708936',display_name='tempest-ServerRescueNegativeTestJSON-server-1354708936',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1354708936',id=80,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:11:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f10ae705d9a34608a922683282b952b5',ramdisk_id='',reservation_id='r-vlzoqsi0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model
='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1031174086',owner_user_name='tempest-ServerRescueNegativeTestJSON-1031174086-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:12:17Z,user_data=None,user_id='aa1425f7fdfc4218bdabfe2458cd1c60',uuid=70e3c250-cd38-4718-9a7f-0fbf7bf471fe,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f027d20e-665b-4bd0-836c-7e8edb2b6bf7", "address": "fa:16:3e:1a:5e:c8", "network": {"id": "15e6e95c-6cd2-4631-98f6-9ed276458c39", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1009216431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f10ae705d9a34608a922683282b952b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf027d20e-66", "ovs_interfaceid": "f027d20e-665b-4bd0-836c-7e8edb2b6bf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.966 2 DEBUG nova.network.os_vif_util [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Converting VIF {"id": "f027d20e-665b-4bd0-836c-7e8edb2b6bf7", "address": "fa:16:3e:1a:5e:c8", "network": {"id": "15e6e95c-6cd2-4631-98f6-9ed276458c39", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1009216431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f10ae705d9a34608a922683282b952b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf027d20e-66", "ovs_interfaceid": "f027d20e-665b-4bd0-836c-7e8edb2b6bf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.967 2 DEBUG nova.network.os_vif_util [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:5e:c8,bridge_name='br-int',has_traffic_filtering=True,id=f027d20e-665b-4bd0-836c-7e8edb2b6bf7,network=Network(15e6e95c-6cd2-4631-98f6-9ed276458c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf027d20e-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.967 2 DEBUG os_vif [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:5e:c8,bridge_name='br-int',has_traffic_filtering=True,id=f027d20e-665b-4bd0-836c-7e8edb2b6bf7,network=Network(15e6e95c-6cd2-4631-98f6-9ed276458c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf027d20e-66') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.970 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf027d20e-66, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.974 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:25 np0005486808 nova_compute[259627]: 2025-10-14 09:12:25.985 2 INFO os_vif [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:5e:c8,bridge_name='br-int',has_traffic_filtering=True,id=f027d20e-665b-4bd0-836c-7e8edb2b6bf7,network=Network(15e6e95c-6cd2-4631-98f6-9ed276458c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf027d20e-66')#033[00m
Oct 14 05:12:26 np0005486808 neutron-haproxy-ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39[343540]: [NOTICE]   (343564) : haproxy version is 2.8.14-c23fe91
Oct 14 05:12:26 np0005486808 neutron-haproxy-ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39[343540]: [NOTICE]   (343564) : path to executable is /usr/sbin/haproxy
Oct 14 05:12:26 np0005486808 neutron-haproxy-ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39[343540]: [WARNING]  (343564) : Exiting Master process...
Oct 14 05:12:26 np0005486808 neutron-haproxy-ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39[343540]: [WARNING]  (343564) : Exiting Master process...
Oct 14 05:12:26 np0005486808 neutron-haproxy-ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39[343540]: [ALERT]    (343564) : Current worker (343566) exited with code 143 (Terminated)
Oct 14 05:12:26 np0005486808 neutron-haproxy-ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39[343540]: [WARNING]  (343564) : All workers exited. Exiting... (0)
Oct 14 05:12:26 np0005486808 systemd[1]: libpod-73a3caf9dda04a098e80cc6811994799b0a2aa8839a0f13aaaca5d02a3ca0ef7.scope: Deactivated successfully.
Oct 14 05:12:26 np0005486808 podman[347894]: 2025-10-14 09:12:26.036890786 +0000 UTC m=+0.067295508 container died 73a3caf9dda04a098e80cc6811994799b0a2aa8839a0f13aaaca5d02a3ca0ef7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.058 2 DEBUG oslo_concurrency.lockutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "9e354e27-d674-43c3-890b-caf8731cb827" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.058 2 DEBUG oslo_concurrency.lockutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:26 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-73a3caf9dda04a098e80cc6811994799b0a2aa8839a0f13aaaca5d02a3ca0ef7-userdata-shm.mount: Deactivated successfully.
Oct 14 05:12:26 np0005486808 systemd[1]: var-lib-containers-storage-overlay-52a5ab24f7466258d49f499f0da82e5d8b268704cbd206b0609a9e9ddb16bcab-merged.mount: Deactivated successfully.
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.073 2 DEBUG nova.compute.manager [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:12:26 np0005486808 podman[347894]: 2025-10-14 09:12:26.083894374 +0000 UTC m=+0.114299096 container cleanup 73a3caf9dda04a098e80cc6811994799b0a2aa8839a0f13aaaca5d02a3ca0ef7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 05:12:26 np0005486808 systemd[1]: libpod-conmon-73a3caf9dda04a098e80cc6811994799b0a2aa8839a0f13aaaca5d02a3ca0ef7.scope: Deactivated successfully.
Oct 14 05:12:26 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:12:26 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2684799037' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.131 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.132 2 DEBUG nova.virt.libvirt.vif [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:12:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1135349362',display_name='tempest-tempest.common.compute-instance-1135349362-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1135349362-2',id=88,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5a057db932754d6eae91f0d2f359f1ff',ramdisk_id='',reservation_id='r-zo7br5c0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-2115206001',owner_user_name='tempest-MultipleCr
eateTestJSON-2115206001-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:12:21Z,user_data=None,user_id='f50b95774c384c5a8414b197ed5d7b82',uuid=ab89bfba-67b2-4767-90f2-7ef5dab476c0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1037d287-c167-4691-9393-55be86ecbab2", "address": "fa:16:3e:08:ea:3e", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1037d287-c1", "ovs_interfaceid": "1037d287-c167-4691-9393-55be86ecbab2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.132 2 DEBUG nova.network.os_vif_util [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converting VIF {"id": "1037d287-c167-4691-9393-55be86ecbab2", "address": "fa:16:3e:08:ea:3e", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1037d287-c1", "ovs_interfaceid": "1037d287-c167-4691-9393-55be86ecbab2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.133 2 DEBUG nova.network.os_vif_util [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:ea:3e,bridge_name='br-int',has_traffic_filtering=True,id=1037d287-c167-4691-9393-55be86ecbab2,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1037d287-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.134 2 DEBUG nova.objects.instance [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lazy-loading 'pci_devices' on Instance uuid ab89bfba-67b2-4767-90f2-7ef5dab476c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.148 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:12:26 np0005486808 nova_compute[259627]:  <uuid>ab89bfba-67b2-4767-90f2-7ef5dab476c0</uuid>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:  <name>instance-00000058</name>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:12:26 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:      <nova:name>tempest-tempest.common.compute-instance-1135349362-2</nova:name>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:12:25</nova:creationTime>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:12:26 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:        <nova:user uuid="f50b95774c384c5a8414b197ed5d7b82">tempest-MultipleCreateTestJSON-2115206001-project-member</nova:user>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:        <nova:project uuid="5a057db932754d6eae91f0d2f359f1ff">tempest-MultipleCreateTestJSON-2115206001</nova:project>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:        <nova:port uuid="1037d287-c167-4691-9393-55be86ecbab2">
Oct 14 05:12:26 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:      <entry name="serial">ab89bfba-67b2-4767-90f2-7ef5dab476c0</entry>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:      <entry name="uuid">ab89bfba-67b2-4767-90f2-7ef5dab476c0</entry>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:12:26 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/ab89bfba-67b2-4767-90f2-7ef5dab476c0_disk">
Oct 14 05:12:26 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:12:26 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:12:26 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/ab89bfba-67b2-4767-90f2-7ef5dab476c0_disk.config">
Oct 14 05:12:26 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:12:26 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:12:26 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:08:ea:3e"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:      <target dev="tap1037d287-c1"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:12:26 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/ab89bfba-67b2-4767-90f2-7ef5dab476c0/console.log" append="off"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:12:26 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:12:26 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:12:26 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:12:26 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:12:26 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.148 2 DEBUG nova.compute.manager [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Preparing to wait for external event network-vif-plugged-1037d287-c167-4691-9393-55be86ecbab2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.148 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.149 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.149 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.150 2 DEBUG nova.virt.libvirt.vif [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:12:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1135349362',display_name='tempest-tempest.common.compute-instance-1135349362-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1135349362-2',id=88,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5a057db932754d6eae91f0d2f359f1ff',ramdisk_id='',reservation_id='r-zo7br5c0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-2115206001',owner_user_name='tempest-
MultipleCreateTestJSON-2115206001-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:12:21Z,user_data=None,user_id='f50b95774c384c5a8414b197ed5d7b82',uuid=ab89bfba-67b2-4767-90f2-7ef5dab476c0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1037d287-c167-4691-9393-55be86ecbab2", "address": "fa:16:3e:08:ea:3e", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1037d287-c1", "ovs_interfaceid": "1037d287-c167-4691-9393-55be86ecbab2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.150 2 DEBUG nova.network.os_vif_util [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converting VIF {"id": "1037d287-c167-4691-9393-55be86ecbab2", "address": "fa:16:3e:08:ea:3e", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1037d287-c1", "ovs_interfaceid": "1037d287-c167-4691-9393-55be86ecbab2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.151 2 DEBUG nova.network.os_vif_util [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:ea:3e,bridge_name='br-int',has_traffic_filtering=True,id=1037d287-c167-4691-9393-55be86ecbab2,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1037d287-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.152 2 DEBUG os_vif [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:ea:3e,bridge_name='br-int',has_traffic_filtering=True,id=1037d287-c167-4691-9393-55be86ecbab2,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1037d287-c1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.153 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.153 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.158 2 DEBUG oslo_concurrency.lockutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.158 2 DEBUG oslo_concurrency.lockutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:26 np0005486808 podman[347939]: 2025-10-14 09:12:26.159630359 +0000 UTC m=+0.052159475 container remove 73a3caf9dda04a098e80cc6811994799b0a2aa8839a0f13aaaca5d02a3ca0ef7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.160 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1037d287-c1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.160 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1037d287-c1, col_values=(('external_ids', {'iface-id': '1037d287-c167-4691-9393-55be86ecbab2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:08:ea:3e', 'vm-uuid': 'ab89bfba-67b2-4767-90f2-7ef5dab476c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:26 np0005486808 NetworkManager[44885]: <info>  [1760433146.1629] manager: (tap1037d287-c1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/370)
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.171 2 INFO os_vif [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:ea:3e,bridge_name='br-int',has_traffic_filtering=True,id=1037d287-c167-4691-9393-55be86ecbab2,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1037d287-c1')#033[00m
Oct 14 05:12:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.175 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5912bbef-d71f-4d6b-b87a-a58c46428da5]: (4, ('Tue Oct 14 09:12:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39 (73a3caf9dda04a098e80cc6811994799b0a2aa8839a0f13aaaca5d02a3ca0ef7)\n73a3caf9dda04a098e80cc6811994799b0a2aa8839a0f13aaaca5d02a3ca0ef7\nTue Oct 14 09:12:26 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39 (73a3caf9dda04a098e80cc6811994799b0a2aa8839a0f13aaaca5d02a3ca0ef7)\n73a3caf9dda04a098e80cc6811994799b0a2aa8839a0f13aaaca5d02a3ca0ef7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.176 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[144e75cf-df58-415e-bf40-0e31ddec41f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.177 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15e6e95c-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:26 np0005486808 kernel: tap15e6e95c-60: left promiscuous mode
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.187 2 DEBUG nova.virt.hardware [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.188 2 INFO nova.compute.claims [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.202 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ba906da6-c73f-4ff3-9576-1aea58c8c6f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.223 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5685ed45-13da-46de-a8b4-aa2b52f846bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.224 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[eb30d3f3-bc49-4ad1-9fa8-074b51c05ebd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.239 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.240 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.240 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] No VIF found with MAC fa:16:3e:08:ea:3e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.240 2 INFO nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Using config drive#033[00m
Oct 14 05:12:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.242 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1e1b41c2-7c2f-4213-b53a-3dc18a5bad03]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686905, 'reachable_time': 22784, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347967, 'error': None, 'target': 'ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:26 np0005486808 systemd[1]: run-netns-ovnmeta\x2d15e6e95c\x2d6cd2\x2d4631\x2d98f6\x2d9ed276458c39.mount: Deactivated successfully.
Oct 14 05:12:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.247 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-15e6e95c-6cd2-4631-98f6-9ed276458c39 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:12:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.248 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[5d588f75-c96e-42c9-9c65-3b08bbba7ab8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.277 2 DEBUG nova.storage.rbd_utils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image ab89bfba-67b2-4767-90f2-7ef5dab476c0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.286 2 INFO nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Creating config drive at /var/lib/nova/instances/16c1b8b8-cda9-45f9-994f-3f102eb85e1e/disk.config#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.296 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/16c1b8b8-cda9-45f9-994f-3f102eb85e1e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpatg9khtx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.399 2 INFO nova.virt.libvirt.driver [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Deleting instance files /var/lib/nova/instances/70e3c250-cd38-4718-9a7f-0fbf7bf471fe_del#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.400 2 INFO nova.virt.libvirt.driver [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Deletion of /var/lib/nova/instances/70e3c250-cd38-4718-9a7f-0fbf7bf471fe_del complete#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.446 2 INFO nova.compute.manager [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Took 0.76 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.447 2 DEBUG oslo.service.loopingcall [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.449 2 DEBUG nova.compute.manager [-] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.449 2 DEBUG nova.network.neutron [-] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.450 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/16c1b8b8-cda9-45f9-994f-3f102eb85e1e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpatg9khtx" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.472 2 DEBUG nova.storage.rbd_utils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image 16c1b8b8-cda9-45f9-994f-3f102eb85e1e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.476 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/16c1b8b8-cda9-45f9-994f-3f102eb85e1e/disk.config 16c1b8b8-cda9-45f9-994f-3f102eb85e1e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.542 2 DEBUG oslo_concurrency.processutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.670 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/16c1b8b8-cda9-45f9-994f-3f102eb85e1e/disk.config 16c1b8b8-cda9-45f9-994f-3f102eb85e1e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.671 2 INFO nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Deleting local config drive /var/lib/nova/instances/16c1b8b8-cda9-45f9-994f-3f102eb85e1e/disk.config because it was imported into RBD.#033[00m
Oct 14 05:12:26 np0005486808 kernel: tapb0cc5216-20: entered promiscuous mode
Oct 14 05:12:26 np0005486808 systemd-udevd[347860]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:26 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:26Z|00906|binding|INFO|Claiming lport b0cc5216-2023-4add-97a9-4bafe30fd8c3 for this chassis.
Oct 14 05:12:26 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:26Z|00907|binding|INFO|b0cc5216-2023-4add-97a9-4bafe30fd8c3: Claiming fa:16:3e:52:4e:ee 10.100.0.14
Oct 14 05:12:26 np0005486808 NetworkManager[44885]: <info>  [1760433146.7531] manager: (tapb0cc5216-20): new Tun device (/org/freedesktop/NetworkManager/Devices/371)
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.765 2 INFO nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Creating config drive at /var/lib/nova/instances/ab89bfba-67b2-4767-90f2-7ef5dab476c0/disk.config#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.769 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ab89bfba-67b2-4767-90f2-7ef5dab476c0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_d4jcbsn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.768 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:4e:ee 10.100.0.14'], port_security=['fa:16:3e:52:4e:ee 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '16c1b8b8-cda9-45f9-994f-3f102eb85e1e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0506bb08-7957-44ca-9a0f-014c548c7b40', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5a057db932754d6eae91f0d2f359f1ff', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9b9d7c47-8a9b-4622-8756-36a8a6e40174', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bc552e0c-ff32-4755-9304-f9703ae8cc71, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=b0cc5216-2023-4add-97a9-4bafe30fd8c3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:12:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.770 162547 INFO neutron.agent.ovn.metadata.agent [-] Port b0cc5216-2023-4add-97a9-4bafe30fd8c3 in datapath 0506bb08-7957-44ca-9a0f-014c548c7b40 bound to our chassis#033[00m
Oct 14 05:12:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.771 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0506bb08-7957-44ca-9a0f-014c548c7b40#033[00m
Oct 14 05:12:26 np0005486808 NetworkManager[44885]: <info>  [1760433146.7792] device (tapb0cc5216-20): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:12:26 np0005486808 NetworkManager[44885]: <info>  [1760433146.7823] device (tapb0cc5216-20): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:12:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.800 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[67438e9a-6338-4066-a34e-7c5bb6b68b8a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.801 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0506bb08-71 in ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:12:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.803 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0506bb08-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:12:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.804 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b3a6243c-63d3-4cf2-93f5-5bce5f535e72]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.804 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7c567fb6-d519-4023-b31c-1ee2ec2dfca7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:26 np0005486808 systemd-machined[214636]: New machine qemu-108-instance-00000057.
Oct 14 05:12:26 np0005486808 systemd[1]: Started Virtual Machine qemu-108-instance-00000057.
Oct 14 05:12:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.821 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[a11174d4-9edf-4a04-9290-efedb87f0a45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.854 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[52f0e096-4aeb-49b8-ba32-3f4df3f7b152]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:26 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:26Z|00908|binding|INFO|Setting lport b0cc5216-2023-4add-97a9-4bafe30fd8c3 ovn-installed in OVS
Oct 14 05:12:26 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:26Z|00909|binding|INFO|Setting lport b0cc5216-2023-4add-97a9-4bafe30fd8c3 up in Southbound
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.893 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5570dd81-c1f3-4ab5-8cc7-11cb76d39f17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.911 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6f7e67d0-7f60-4361-bc1b-8c8861acffb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:26 np0005486808 NetworkManager[44885]: <info>  [1760433146.9129] manager: (tap0506bb08-70): new Veth device (/org/freedesktop/NetworkManager/Devices/372)
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.920 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ab89bfba-67b2-4767-90f2-7ef5dab476c0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_d4jcbsn" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.957 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e0d5cf87-0cff-4317-85bd-698133206b0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.961 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[77890ec3-62df-4478-a7c2-c187f0f974f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.982 2 DEBUG nova.storage.rbd_utils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image ab89bfba-67b2-4767-90f2-7ef5dab476c0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:26 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:12:26 np0005486808 NetworkManager[44885]: <info>  [1760433146.9933] device (tap0506bb08-70): carrier: link connected
Oct 14 05:12:26 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3286221075' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:12:26 np0005486808 nova_compute[259627]: 2025-10-14 09:12:26.995 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ab89bfba-67b2-4767-90f2-7ef5dab476c0/disk.config ab89bfba-67b2-4767-90f2-7ef5dab476c0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:26.999 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[0e25fa4c-cef2-466a-90cc-de53bb675581]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.035 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[63bea90d-a09a-45a3-90b2-f544a281de85]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0506bb08-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:c3:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 263], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 692575, 'reachable_time': 26707, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348118, 'error': None, 'target': 'ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.042 2 DEBUG oslo_concurrency.processutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.052 2 DEBUG nova.compute.provider_tree [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.056 2 DEBUG nova.network.neutron [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Updating instance_info_cache with network_info: [{"id": "26bcc700-59c6-4e79-904d-988cd11152c8", "address": "fa:16:3e:46:94:f3", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26bcc700-59", "ovs_interfaceid": "26bcc700-59c6-4e79-904d-988cd11152c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.061 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8af3493e-df76-4ad2-92b7-fbcb76c68b17]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb7:c30c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 692575, 'tstamp': 692575}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348120, 'error': None, 'target': 'ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.063 2 DEBUG nova.network.neutron [req-9a426975-42fd-4497-a4fc-4b853b057ebc req-1df1fe81-05bd-4ba2-ad17-1885ca1372c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Updated VIF entry in instance network info cache for port 1037d287-c167-4691-9393-55be86ecbab2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.063 2 DEBUG nova.network.neutron [req-9a426975-42fd-4497-a4fc-4b853b057ebc req-1df1fe81-05bd-4ba2-ad17-1885ca1372c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Updating instance_info_cache with network_info: [{"id": "1037d287-c167-4691-9393-55be86ecbab2", "address": "fa:16:3e:08:ea:3e", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1037d287-c1", "ovs_interfaceid": "1037d287-c167-4691-9393-55be86ecbab2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.090 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[eef4573c-342c-47cd-af0f-29a0d6f0de28]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0506bb08-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:c3:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 263], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 692575, 'reachable_time': 26707, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 348121, 'error': None, 'target': 'ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.096 2 DEBUG nova.scheduler.client.report [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.100 2 DEBUG oslo_concurrency.lockutils [req-9a426975-42fd-4497-a4fc-4b853b057ebc req-1df1fe81-05bd-4ba2-ad17-1885ca1372c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-ab89bfba-67b2-4767-90f2-7ef5dab476c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.101 2 DEBUG oslo_concurrency.lockutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Releasing lock "refresh_cache-4774788b-1dc2-40c6-87d0-db4e3f54a609" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.101 2 DEBUG nova.compute.manager [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Instance network_info: |[{"id": "26bcc700-59c6-4e79-904d-988cd11152c8", "address": "fa:16:3e:46:94:f3", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26bcc700-59", "ovs_interfaceid": "26bcc700-59c6-4e79-904d-988cd11152c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.102 2 DEBUG oslo_concurrency.lockutils [req-ac608442-b94d-4664-aeff-b75fbff5c4c7 req-5e6c74a1-66d1-4db8-b791-c4b89c2d26d1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-4774788b-1dc2-40c6-87d0-db4e3f54a609" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.102 2 DEBUG nova.network.neutron [req-ac608442-b94d-4664-aeff-b75fbff5c4c7 req-5e6c74a1-66d1-4db8-b791-c4b89c2d26d1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Refreshing network info cache for port 26bcc700-59c6-4e79-904d-988cd11152c8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.105 2 DEBUG nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Start _get_guest_xml network_info=[{"id": "26bcc700-59c6-4e79-904d-988cd11152c8", "address": "fa:16:3e:46:94:f3", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26bcc700-59", "ovs_interfaceid": "26bcc700-59c6-4e79-904d-988cd11152c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.124 2 DEBUG oslo_concurrency.lockutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.965s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.125 2 DEBUG nova.compute.manager [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.127 2 WARNING nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.136 2 DEBUG nova.virt.libvirt.host [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.136 2 DEBUG nova.virt.libvirt.host [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.139 2 DEBUG nova.virt.libvirt.host [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.140 2 DEBUG nova.virt.libvirt.host [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.140 2 DEBUG nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.141 2 DEBUG nova.virt.hardware [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.141 2 DEBUG nova.virt.hardware [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.142 2 DEBUG nova.virt.hardware [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.142 2 DEBUG nova.virt.hardware [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.142 2 DEBUG nova.virt.hardware [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.142 2 DEBUG nova.virt.hardware [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.143 2 DEBUG nova.virt.hardware [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.143 2 DEBUG nova.virt.hardware [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.143 2 DEBUG nova.virt.hardware [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.143 2 DEBUG nova.virt.hardware [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.144 2 DEBUG nova.virt.hardware [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.147 2 DEBUG oslo_concurrency.processutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.148 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[68ec43b4-958d-44cc-9974-a201c33b01f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.208 2 DEBUG oslo_concurrency.processutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ab89bfba-67b2-4767-90f2-7ef5dab476c0/disk.config ab89bfba-67b2-4767-90f2-7ef5dab476c0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.213s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.209 2 DEBUG nova.compute.manager [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.210 2 DEBUG nova.network.neutron [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.213 2 INFO nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Deleting local config drive /var/lib/nova/instances/ab89bfba-67b2-4767-90f2-7ef5dab476c0/disk.config because it was imported into RBD.
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.229 2 INFO nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.242 2 DEBUG nova.network.neutron [-] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.241 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[246229e7-6d07-4117-97de-5eaf9fdacf8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.243 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0506bb08-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.243 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.243 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0506bb08-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:12:27 np0005486808 NetworkManager[44885]: <info>  [1760433147.2464] manager: (tap0506bb08-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/373)
Oct 14 05:12:27 np0005486808 kernel: tap0506bb08-70: entered promiscuous mode
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.254 2 DEBUG nova.compute.manager [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.254 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0506bb08-70, col_values=(('external_ids', {'iface-id': '6cfe11a6-55c2-4d2e-880b-8832ad317040'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:12:27 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:27Z|00910|binding|INFO|Releasing lport 6cfe11a6-55c2-4d2e-880b-8832ad317040 from this chassis (sb_readonly=0)
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.264 2 INFO nova.compute.manager [-] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Took 0.81 seconds to deallocate network for instance.
Oct 14 05:12:27 np0005486808 NetworkManager[44885]: <info>  [1760433147.2681] manager: (tap1037d287-c1): new Tun device (/org/freedesktop/NetworkManager/Devices/374)
Oct 14 05:12:27 np0005486808 systemd-udevd[348091]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.281 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0506bb08-7957-44ca-9a0f-014c548c7b40.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0506bb08-7957-44ca-9a0f-014c548c7b40.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:12:27 np0005486808 NetworkManager[44885]: <info>  [1760433147.2827] device (tap1037d287-c1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:12:27 np0005486808 kernel: tap1037d287-c1: entered promiscuous mode
Oct 14 05:12:27 np0005486808 NetworkManager[44885]: <info>  [1760433147.2840] device (tap1037d287-c1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.282 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[534869fc-de70-49a2-be92-91da630b2b77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.283 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-0506bb08-7957-44ca-9a0f-014c548c7b40
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/0506bb08-7957-44ca-9a0f-014c548c7b40.pid.haproxy
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 0506bb08-7957-44ca-9a0f-014c548c7b40
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.283 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40', 'env', 'PROCESS_TAG=haproxy-0506bb08-7957-44ca-9a0f-014c548c7b40', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0506bb08-7957-44ca-9a0f-014c548c7b40.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 05:12:27 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:27Z|00911|binding|INFO|Claiming lport 1037d287-c167-4691-9393-55be86ecbab2 for this chassis.
Oct 14 05:12:27 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:27Z|00912|binding|INFO|1037d287-c167-4691-9393-55be86ecbab2: Claiming fa:16:3e:08:ea:3e 10.100.0.5
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.296 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:ea:3e 10.100.0.5'], port_security=['fa:16:3e:08:ea:3e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ab89bfba-67b2-4767-90f2-7ef5dab476c0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0506bb08-7957-44ca-9a0f-014c548c7b40', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5a057db932754d6eae91f0d2f359f1ff', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9b9d7c47-8a9b-4622-8756-36a8a6e40174', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bc552e0c-ff32-4755-9304-f9703ae8cc71, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=1037d287-c167-4691-9393-55be86ecbab2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 05:12:27 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:27Z|00913|binding|INFO|Setting lport 1037d287-c167-4691-9393-55be86ecbab2 ovn-installed in OVS
Oct 14 05:12:27 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:27Z|00914|binding|INFO|Setting lport 1037d287-c167-4691-9393-55be86ecbab2 up in Southbound
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:12:27 np0005486808 systemd-machined[214636]: New machine qemu-109-instance-00000058.
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.316 2 DEBUG oslo_concurrency.lockutils [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.316 2 DEBUG oslo_concurrency.lockutils [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:12:27 np0005486808 systemd[1]: Started Virtual Machine qemu-109-instance-00000058.
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.364 2 DEBUG nova.compute.manager [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.365 2 DEBUG nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.366 2 INFO nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Creating image(s)
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.398 2 DEBUG nova.storage.rbd_utils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 9e354e27-d674-43c3-890b-caf8731cb827_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.421 2 DEBUG nova.storage.rbd_utils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 9e354e27-d674-43c3-890b-caf8731cb827_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.445 2 DEBUG nova.storage.rbd_utils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 9e354e27-d674-43c3-890b-caf8731cb827_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.448 2 DEBUG oslo_concurrency.processutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.497 2 DEBUG nova.policy [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e992bcb79c4946a8985e3df25eb216ca', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2d24993a343a425dbddac7e32be0c86b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.503 2 DEBUG nova.compute.manager [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Received event network-vif-unplugged-f027d20e-665b-4bd0-836c-7e8edb2b6bf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.504 2 DEBUG oslo_concurrency.lockutils [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "70e3c250-cd38-4718-9a7f-0fbf7bf471fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.504 2 DEBUG oslo_concurrency.lockutils [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "70e3c250-cd38-4718-9a7f-0fbf7bf471fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.504 2 DEBUG oslo_concurrency.lockutils [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "70e3c250-cd38-4718-9a7f-0fbf7bf471fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.504 2 DEBUG nova.compute.manager [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] No waiting events found dispatching network-vif-unplugged-f027d20e-665b-4bd0-836c-7e8edb2b6bf7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.505 2 WARNING nova.compute.manager [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Received unexpected event network-vif-unplugged-f027d20e-665b-4bd0-836c-7e8edb2b6bf7 for instance with vm_state deleted and task_state None.
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.505 2 DEBUG nova.compute.manager [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Received event network-vif-plugged-f027d20e-665b-4bd0-836c-7e8edb2b6bf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.505 2 DEBUG oslo_concurrency.lockutils [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "70e3c250-cd38-4718-9a7f-0fbf7bf471fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.506 2 DEBUG oslo_concurrency.lockutils [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "70e3c250-cd38-4718-9a7f-0fbf7bf471fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.506 2 DEBUG oslo_concurrency.lockutils [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "70e3c250-cd38-4718-9a7f-0fbf7bf471fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.506 2 DEBUG nova.compute.manager [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] No waiting events found dispatching network-vif-plugged-f027d20e-665b-4bd0-836c-7e8edb2b6bf7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.506 2 WARNING nova.compute.manager [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Received unexpected event network-vif-plugged-f027d20e-665b-4bd0-836c-7e8edb2b6bf7 for instance with vm_state deleted and task_state None.
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.507 2 DEBUG nova.compute.manager [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Received event network-vif-plugged-b0cc5216-2023-4add-97a9-4bafe30fd8c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.507 2 DEBUG oslo_concurrency.lockutils [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.507 2 DEBUG oslo_concurrency.lockutils [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.507 2 DEBUG oslo_concurrency.lockutils [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.508 2 DEBUG nova.compute.manager [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Processing event network-vif-plugged-b0cc5216-2023-4add-97a9-4bafe30fd8c3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.508 2 DEBUG nova.compute.manager [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Received event network-vif-plugged-b0cc5216-2023-4add-97a9-4bafe30fd8c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.508 2 DEBUG oslo_concurrency.lockutils [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.508 2 DEBUG oslo_concurrency.lockutils [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.509 2 DEBUG oslo_concurrency.lockutils [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.509 2 DEBUG nova.compute.manager [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] No waiting events found dispatching network-vif-plugged-b0cc5216-2023-4add-97a9-4bafe30fd8c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.509 2 WARNING nova.compute.manager [req-b11cd026-eeab-4b69-9f3a-000b447cb44e req-e5567907-deee-487b-9eca-81368c50914e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Received unexpected event network-vif-plugged-b0cc5216-2023-4add-97a9-4bafe30fd8c3 for instance with vm_state building and task_state spawning.
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.545 2 DEBUG oslo_concurrency.processutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.546 2 DEBUG oslo_concurrency.lockutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.546 2 DEBUG oslo_concurrency.lockutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.547 2 DEBUG oslo_concurrency.lockutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.577 2 DEBUG nova.storage.rbd_utils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 9e354e27-d674-43c3-890b-caf8731cb827_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.587 2 DEBUG oslo_concurrency.processutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 9e354e27-d674-43c3-890b-caf8731cb827_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:27 np0005486808 podman[348308]: 2025-10-14 09:12:27.65593395 +0000 UTC m=+0.065330820 container create 7864abcb4c7372f2fc3dfe30d8613c349bc29b148d20a8988df0f393ca14b0c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3)
Oct 14 05:12:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:12:27 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1320841696' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:12:27 np0005486808 systemd[1]: Started libpod-conmon-7864abcb4c7372f2fc3dfe30d8613c349bc29b148d20a8988df0f393ca14b0c4.scope.
Oct 14 05:12:27 np0005486808 podman[348308]: 2025-10-14 09:12:27.625785637 +0000 UTC m=+0.035182557 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.720 2 DEBUG oslo_concurrency.processutils [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:27 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:12:27 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31049def98d964650864fccfae84a5b472505ff9f0ff1c2e6df68f12f050e09d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:12:27 np0005486808 podman[348308]: 2025-10-14 09:12:27.750177571 +0000 UTC m=+0.159574491 container init 7864abcb4c7372f2fc3dfe30d8613c349bc29b148d20a8988df0f393ca14b0c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 14 05:12:27 np0005486808 podman[348308]: 2025-10-14 09:12:27.75541222 +0000 UTC m=+0.164809110 container start 7864abcb4c7372f2fc3dfe30d8613c349bc29b148d20a8988df0f393ca14b0c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.771 2 DEBUG oslo_concurrency.processutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:27 np0005486808 neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40[348398]: [NOTICE]   (348411) : New worker (348420) forked
Oct 14 05:12:27 np0005486808 neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40[348398]: [NOTICE]   (348411) : Loading success.
Oct 14 05:12:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.812 2 DEBUG nova.storage.rbd_utils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] rbd image 4774788b-1dc2-40c6-87d0-db4e3f54a609_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.826 2 DEBUG oslo_concurrency.processutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.830 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 1037d287-c167-4691-9393-55be86ecbab2 in datapath 0506bb08-7957-44ca-9a0f-014c548c7b40 unbound from our chassis#033[00m
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.833 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0506bb08-7957-44ca-9a0f-014c548c7b40#033[00m
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.864 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0cd386ea-fa5c-4934-b4fe-74d941f4bd01]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.894 2 DEBUG nova.compute.manager [req-3a256bfd-c96d-418e-a6b9-1dfdfe710423 req-b1becbdd-1b49-430d-bd66-827c835c8745 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Received event network-vif-deleted-f027d20e-665b-4bd0-836c-7e8edb2b6bf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:12:27 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1740: 305 pgs: 305 active+clean; 306 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 199 op/s
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.916 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ebdad488-057b-4943-b6eb-3f45b3c8c325]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.921 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[674c1197-5fcd-45b6-8b0f-fec3c4fde67d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.952 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a3467f58-e333-4923-88eb-893ab423ca96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:27 np0005486808 nova_compute[259627]: 2025-10-14 09:12:27.956 2 DEBUG oslo_concurrency.processutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 9e354e27-d674-43c3-890b-caf8731cb827_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.369s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.978 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f037273c-5073-4c8b-817b-01548f526ebf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0506bb08-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:c3:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 306, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 306, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 263], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 692575, 'reachable_time': 26707, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 264, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 264, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348467, 'error': None, 'target': 'ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.992 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d96ac93e-e0b9-41cc-a807-8162d0e3b251]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0506bb08-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 692594, 'tstamp': 692594}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348483, 'error': None, 'target': 'ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0506bb08-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 692598, 'tstamp': 692598}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348483, 'error': None, 'target': 'ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.993 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0506bb08-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.996 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0506bb08-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.996 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.997 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0506bb08-70, col_values=(('external_ids', {'iface-id': '6cfe11a6-55c2-4d2e-880b-8832ad317040'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:27 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:27.997 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.046 2 DEBUG nova.storage.rbd_utils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] resizing rbd image 9e354e27-d674-43c3-890b-caf8731cb827_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.146 2 DEBUG nova.objects.instance [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'migration_context' on Instance uuid 9e354e27-d674-43c3-890b-caf8731cb827 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.163 2 DEBUG nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.164 2 DEBUG nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Ensure instance console log exists: /var/lib/nova/instances/9e354e27-d674-43c3-890b-caf8731cb827/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.164 2 DEBUG oslo_concurrency.lockutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.164 2 DEBUG oslo_concurrency.lockutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.165 2 DEBUG oslo_concurrency.lockutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.237 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433148.2373714, 16c1b8b8-cda9-45f9-994f-3f102eb85e1e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.238 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] VM Started (Lifecycle Event)#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.241 2 DEBUG nova.compute.manager [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.251 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.259 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.264 2 INFO nova.virt.libvirt.driver [-] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Instance spawned successfully.#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.264 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:12:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:12:28 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/755585408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.272 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.291 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.292 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433148.2374704, 16c1b8b8-cda9-45f9-994f-3f102eb85e1e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.292 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.295 2 DEBUG oslo_concurrency.processutils [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.299 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.299 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.300 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.301 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.302 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.303 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.313 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.324 2 DEBUG nova.compute.provider_tree [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.331 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433148.2509418, 16c1b8b8-cda9-45f9-994f-3f102eb85e1e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.332 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.361 2 DEBUG nova.scheduler.client.report [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.366 2 DEBUG nova.network.neutron [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Successfully created port: ce4eb1a6-2221-4519-98fa-44a39da77b71 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.369 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.373 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:12:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:12:28 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1764184911' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.390 2 INFO nova.compute.manager [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Took 8.21 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.390 2 DEBUG nova.compute.manager [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.395 2 DEBUG oslo_concurrency.lockutils [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.399 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.399 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433148.3402908, ab89bfba-67b2-4767-90f2-7ef5dab476c0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.399 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] VM Started (Lifecycle Event)#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.405 2 DEBUG oslo_concurrency.processutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.407 2 DEBUG nova.virt.libvirt.vif [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:12:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1339714429',display_name='tempest-ServersNegativeTestJSON-server-1339714429',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1339714429',id=89,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='517aafb84156407c8672042097e3ef4f',ramdisk_id='',reservation_id='r-3mdh02k5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1475695514',owner_user_name='tempest-ServersNegativeTestJSON-1475695514-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:12:23Z,user_data=None,user_id='92e59e145f6942b78d0ffbebc4d89e76',uuid=4774788b-1dc2-40c6-87d0-db4e3f54a609,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "26bcc700-59c6-4e79-904d-988cd11152c8", "address": "fa:16:3e:46:94:f3", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26bcc700-59", "ovs_interfaceid": "26bcc700-59c6-4e79-904d-988cd11152c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.407 2 DEBUG nova.network.os_vif_util [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Converting VIF {"id": "26bcc700-59c6-4e79-904d-988cd11152c8", "address": "fa:16:3e:46:94:f3", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26bcc700-59", "ovs_interfaceid": "26bcc700-59c6-4e79-904d-988cd11152c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.408 2 DEBUG nova.network.os_vif_util [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:94:f3,bridge_name='br-int',has_traffic_filtering=True,id=26bcc700-59c6-4e79-904d-988cd11152c8,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26bcc700-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.408 2 DEBUG nova.objects.instance [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lazy-loading 'pci_devices' on Instance uuid 4774788b-1dc2-40c6-87d0-db4e3f54a609 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.419 2 INFO nova.scheduler.client.report [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Deleted allocations for instance 70e3c250-cd38-4718-9a7f-0fbf7bf471fe#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.426 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.430 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433148.3403986, ab89bfba-67b2-4767-90f2-7ef5dab476c0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.430 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.433 2 DEBUG nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:12:28 np0005486808 nova_compute[259627]:  <uuid>4774788b-1dc2-40c6-87d0-db4e3f54a609</uuid>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:  <name>instance-00000059</name>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:12:28 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:      <nova:name>tempest-ServersNegativeTestJSON-server-1339714429</nova:name>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:12:27</nova:creationTime>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:12:28 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:        <nova:user uuid="92e59e145f6942b78d0ffbebc4d89e76">tempest-ServersNegativeTestJSON-1475695514-project-member</nova:user>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:        <nova:project uuid="517aafb84156407c8672042097e3ef4f">tempest-ServersNegativeTestJSON-1475695514</nova:project>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:        <nova:port uuid="26bcc700-59c6-4e79-904d-988cd11152c8">
Oct 14 05:12:28 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:      <entry name="serial">4774788b-1dc2-40c6-87d0-db4e3f54a609</entry>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:      <entry name="uuid">4774788b-1dc2-40c6-87d0-db4e3f54a609</entry>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:12:28 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/4774788b-1dc2-40c6-87d0-db4e3f54a609_disk">
Oct 14 05:12:28 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:12:28 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:12:28 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/4774788b-1dc2-40c6-87d0-db4e3f54a609_disk.config">
Oct 14 05:12:28 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:12:28 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:12:28 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:46:94:f3"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:      <target dev="tap26bcc700-59"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:12:28 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/4774788b-1dc2-40c6-87d0-db4e3f54a609/console.log" append="off"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:12:28 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:12:28 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:12:28 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:12:28 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:12:28 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.434 2 DEBUG nova.compute.manager [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Preparing to wait for external event network-vif-plugged-26bcc700-59c6-4e79-904d-988cd11152c8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.435 2 DEBUG oslo_concurrency.lockutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "4774788b-1dc2-40c6-87d0-db4e3f54a609-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.435 2 DEBUG oslo_concurrency.lockutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "4774788b-1dc2-40c6-87d0-db4e3f54a609-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.435 2 DEBUG oslo_concurrency.lockutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "4774788b-1dc2-40c6-87d0-db4e3f54a609-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.436 2 DEBUG nova.virt.libvirt.vif [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:12:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1339714429',display_name='tempest-ServersNegativeTestJSON-server-1339714429',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1339714429',id=89,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='517aafb84156407c8672042097e3ef4f',ramdisk_id='',reservation_id='r-3mdh02k5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1475695514',owner_user_name='tempest-ServersNegativeTestJSON-1475695514-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:12:23Z,user_data=None,user_id='92e59e145f6942b78d0ffbebc4d89e76',uuid=4774788b-1dc2-40c6-87d0-db4e3f54a609,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "26bcc700-59c6-4e79-904d-988cd11152c8", "address": "fa:16:3e:46:94:f3", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26bcc700-59", "ovs_interfaceid": "26bcc700-59c6-4e79-904d-988cd11152c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.436 2 DEBUG nova.network.os_vif_util [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Converting VIF {"id": "26bcc700-59c6-4e79-904d-988cd11152c8", "address": "fa:16:3e:46:94:f3", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26bcc700-59", "ovs_interfaceid": "26bcc700-59c6-4e79-904d-988cd11152c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.437 2 DEBUG nova.network.os_vif_util [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:94:f3,bridge_name='br-int',has_traffic_filtering=True,id=26bcc700-59c6-4e79-904d-988cd11152c8,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26bcc700-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.437 2 DEBUG os_vif [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:94:f3,bridge_name='br-int',has_traffic_filtering=True,id=26bcc700-59c6-4e79-904d-988cd11152c8,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26bcc700-59') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.438 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.439 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.441 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap26bcc700-59, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.442 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap26bcc700-59, col_values=(('external_ids', {'iface-id': '26bcc700-59c6-4e79-904d-988cd11152c8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:46:94:f3', 'vm-uuid': '4774788b-1dc2-40c6-87d0-db4e3f54a609'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:28 np0005486808 NetworkManager[44885]: <info>  [1760433148.4758] manager: (tap26bcc700-59): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/375)
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.481 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.487 2 INFO os_vif [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:94:f3,bridge_name='br-int',has_traffic_filtering=True,id=26bcc700-59c6-4e79-904d-988cd11152c8,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26bcc700-59')#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.489 2 INFO nova.compute.manager [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Took 9.37 seconds to build instance.#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.499 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.519 2 DEBUG nova.network.neutron [req-ac608442-b94d-4664-aeff-b75fbff5c4c7 req-5e6c74a1-66d1-4db8-b791-c4b89c2d26d1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Updated VIF entry in instance network info cache for port 26bcc700-59c6-4e79-904d-988cd11152c8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.520 2 DEBUG nova.network.neutron [req-ac608442-b94d-4664-aeff-b75fbff5c4c7 req-5e6c74a1-66d1-4db8-b791-c4b89c2d26d1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Updating instance_info_cache with network_info: [{"id": "26bcc700-59c6-4e79-904d-988cd11152c8", "address": "fa:16:3e:46:94:f3", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26bcc700-59", "ovs_interfaceid": "26bcc700-59c6-4e79-904d-988cd11152c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.523 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.532s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.523 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.535 2 DEBUG oslo_concurrency.lockutils [req-ac608442-b94d-4664-aeff-b75fbff5c4c7 req-5e6c74a1-66d1-4db8-b791-c4b89c2d26d1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-4774788b-1dc2-40c6-87d0-db4e3f54a609" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.540 2 DEBUG oslo_concurrency.lockutils [None req-a31b10f2-a679-4a2a-96a8-8b3224bd655b aa1425f7fdfc4218bdabfe2458cd1c60 f10ae705d9a34608a922683282b952b5 - - default default] Lock "70e3c250-cd38-4718-9a7f-0fbf7bf471fe" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.857s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.552 2 DEBUG nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.552 2 DEBUG nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.552 2 DEBUG nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] No VIF found with MAC fa:16:3e:46:94:f3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.553 2 INFO nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Using config drive#033[00m
Oct 14 05:12:28 np0005486808 podman[348568]: 2025-10-14 09:12:28.581469743 +0000 UTC m=+0.062784617 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.599 2 DEBUG nova.storage.rbd_utils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] rbd image 4774788b-1dc2-40c6-87d0-db4e3f54a609_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:28 np0005486808 podman[348566]: 2025-10-14 09:12:28.627389444 +0000 UTC m=+0.101245604 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:12:28 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:28.999 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.000 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.000 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.001 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.001 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.048 2 INFO nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Creating config drive at /var/lib/nova/instances/4774788b-1dc2-40c6-87d0-db4e3f54a609/disk.config#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.053 2 DEBUG oslo_concurrency.processutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4774788b-1dc2-40c6-87d0-db4e3f54a609/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps3bg1nyf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.139 2 DEBUG nova.network.neutron [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Successfully updated port: ce4eb1a6-2221-4519-98fa-44a39da77b71 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.159 2 DEBUG oslo_concurrency.lockutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "refresh_cache-9e354e27-d674-43c3-890b-caf8731cb827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.159 2 DEBUG oslo_concurrency.lockutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquired lock "refresh_cache-9e354e27-d674-43c3-890b-caf8731cb827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.160 2 DEBUG nova.network.neutron [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.218 2 DEBUG oslo_concurrency.processutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4774788b-1dc2-40c6-87d0-db4e3f54a609/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps3bg1nyf" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.264 2 DEBUG nova.storage.rbd_utils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] rbd image 4774788b-1dc2-40c6-87d0-db4e3f54a609_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.270 2 DEBUG oslo_concurrency.processutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4774788b-1dc2-40c6-87d0-db4e3f54a609/disk.config 4774788b-1dc2-40c6-87d0-db4e3f54a609_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.317 2 DEBUG nova.network.neutron [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.564 2 DEBUG oslo_concurrency.processutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4774788b-1dc2-40c6-87d0-db4e3f54a609/disk.config 4774788b-1dc2-40c6-87d0-db4e3f54a609_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.565 2 INFO nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Deleting local config drive /var/lib/nova/instances/4774788b-1dc2-40c6-87d0-db4e3f54a609/disk.config because it was imported into RBD.#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.591 2 DEBUG nova.compute.manager [req-9089c12b-f94d-4cd8-87e0-45be370da97d req-ddf09d97-d189-4cc4-8b6e-75be352f6b22 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Received event network-vif-plugged-1037d287-c167-4691-9393-55be86ecbab2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.592 2 DEBUG oslo_concurrency.lockutils [req-9089c12b-f94d-4cd8-87e0-45be370da97d req-ddf09d97-d189-4cc4-8b6e-75be352f6b22 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.592 2 DEBUG oslo_concurrency.lockutils [req-9089c12b-f94d-4cd8-87e0-45be370da97d req-ddf09d97-d189-4cc4-8b6e-75be352f6b22 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:29 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.592 2 DEBUG oslo_concurrency.lockutils [req-9089c12b-f94d-4cd8-87e0-45be370da97d req-ddf09d97-d189-4cc4-8b6e-75be352f6b22 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:29 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2369680037' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.593 2 DEBUG nova.compute.manager [req-9089c12b-f94d-4cd8-87e0-45be370da97d req-ddf09d97-d189-4cc4-8b6e-75be352f6b22 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Processing event network-vif-plugged-1037d287-c167-4691-9393-55be86ecbab2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.594 2 DEBUG nova.compute.manager [req-9089c12b-f94d-4cd8-87e0-45be370da97d req-ddf09d97-d189-4cc4-8b6e-75be352f6b22 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Received event network-vif-plugged-1037d287-c167-4691-9393-55be86ecbab2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.594 2 DEBUG oslo_concurrency.lockutils [req-9089c12b-f94d-4cd8-87e0-45be370da97d req-ddf09d97-d189-4cc4-8b6e-75be352f6b22 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.594 2 DEBUG oslo_concurrency.lockutils [req-9089c12b-f94d-4cd8-87e0-45be370da97d req-ddf09d97-d189-4cc4-8b6e-75be352f6b22 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.595 2 DEBUG oslo_concurrency.lockutils [req-9089c12b-f94d-4cd8-87e0-45be370da97d req-ddf09d97-d189-4cc4-8b6e-75be352f6b22 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.595 2 DEBUG nova.compute.manager [req-9089c12b-f94d-4cd8-87e0-45be370da97d req-ddf09d97-d189-4cc4-8b6e-75be352f6b22 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] No waiting events found dispatching network-vif-plugged-1037d287-c167-4691-9393-55be86ecbab2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.595 2 WARNING nova.compute.manager [req-9089c12b-f94d-4cd8-87e0-45be370da97d req-ddf09d97-d189-4cc4-8b6e-75be352f6b22 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Received unexpected event network-vif-plugged-1037d287-c167-4691-9393-55be86ecbab2 for instance with vm_state building and task_state spawning.#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.596 2 DEBUG nova.compute.manager [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.610 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433149.6104703, ab89bfba-67b2-4767-90f2-7ef5dab476c0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.611 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.613 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.621 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.633 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.643 2 INFO nova.virt.libvirt.driver [-] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Instance spawned successfully.#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.644 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.646 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:12:29 np0005486808 NetworkManager[44885]: <info>  [1760433149.6573] manager: (tap26bcc700-59): new Tun device (/org/freedesktop/NetworkManager/Devices/376)
Oct 14 05:12:29 np0005486808 kernel: tap26bcc700-59: entered promiscuous mode
Oct 14 05:12:29 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:29Z|00915|binding|INFO|Claiming lport 26bcc700-59c6-4e79-904d-988cd11152c8 for this chassis.
Oct 14 05:12:29 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:29Z|00916|binding|INFO|26bcc700-59c6-4e79-904d-988cd11152c8: Claiming fa:16:3e:46:94:f3 10.100.0.13
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:29 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:29.670 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:94:f3 10.100.0.13'], port_security=['fa:16:3e:46:94:f3 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '4774788b-1dc2-40c6-87d0-db4e3f54a609', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '517aafb84156407c8672042097e3ef4f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '572acc55-453a-444a-ab8d-a15e14283f7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=927296e1-b389-4596-b9be-8cf735b93ca2, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=26bcc700-59c6-4e79-904d-988cd11152c8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:12:29 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:29.672 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 26bcc700-59c6-4e79-904d-988cd11152c8 in datapath a49b41b4-2559-4a22-a274-a6c7bbe75f2c bound to our chassis#033[00m
Oct 14 05:12:29 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:29.675 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a49b41b4-2559-4a22-a274-a6c7bbe75f2c#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.682 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:29 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:29Z|00917|binding|INFO|Setting lport 26bcc700-59c6-4e79-904d-988cd11152c8 ovn-installed in OVS
Oct 14 05:12:29 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:29Z|00918|binding|INFO|Setting lport 26bcc700-59c6-4e79-904d-988cd11152c8 up in Southbound
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.698 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.698 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.699 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.699 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.700 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.700 2 DEBUG nova.virt.libvirt.driver [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:29 np0005486808 systemd-udevd[348703]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:12:29 np0005486808 systemd-machined[214636]: New machine qemu-110-instance-00000059.
Oct 14 05:12:29 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:29.712 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f04d1d2f-42ce-4f36-968e-96ba05a17a02]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:29 np0005486808 NetworkManager[44885]: <info>  [1760433149.7218] device (tap26bcc700-59): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:12:29 np0005486808 NetworkManager[44885]: <info>  [1760433149.7225] device (tap26bcc700-59): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:12:29 np0005486808 systemd[1]: Started Virtual Machine qemu-110-instance-00000059.
Oct 14 05:12:29 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:29.750 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[57d282fb-dfaa-4bc1-ad96-732f72195b11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:29 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:29.754 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a941a3de-b190-4654-89c0-a682dc5602a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.785 2 INFO nova.compute.manager [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Took 8.67 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.785 2 DEBUG nova.compute.manager [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:12:29 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:29.794 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[04af6222-75ed-4abb-a0c1-40b0392e1826]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:29 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:29.831 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2ca12aa7-0715-4389-ab28-23512eb5b0a7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa49b41b4-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:5b:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 259], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691215, 'reachable_time': 16714, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348716, 'error': None, 'target': 'ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:29 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:29.856 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e8d48790-0b46-4a6e-8285-65090308b0c8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa49b41b4-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 691226, 'tstamp': 691226}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348717, 'error': None, 'target': 'ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa49b41b4-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 691229, 'tstamp': 691229}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348717, 'error': None, 'target': 'ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:29 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:29.858 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa49b41b4-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:29 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:29.865 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa49b41b4-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:29 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:29.865 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:12:29 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:29.866 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa49b41b4-20, col_values=(('external_ids', {'iface-id': '61fe5571-a8eb-446a-8c4c-1f6f6758b146'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:29 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:29.867 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.894 2 INFO nova.compute.manager [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Took 10.75 seconds to build instance.#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.899 2 DEBUG nova.compute.manager [req-e3287fff-2c2c-451b-9915-169f72028a6a req-b24b6db1-5df2-4384-8f48-eafe76a39094 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Received event network-changed-ce4eb1a6-2221-4519-98fa-44a39da77b71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.900 2 DEBUG nova.compute.manager [req-e3287fff-2c2c-451b-9915-169f72028a6a req-b24b6db1-5df2-4384-8f48-eafe76a39094 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Refreshing instance network info cache due to event network-changed-ce4eb1a6-2221-4519-98fa-44a39da77b71. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.900 2 DEBUG oslo_concurrency.lockutils [req-e3287fff-2c2c-451b-9915-169f72028a6a req-b24b6db1-5df2-4384-8f48-eafe76a39094 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-9e354e27-d674-43c3-890b-caf8731cb827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.904 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.904 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.912 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.913 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:12:29 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1741: 305 pgs: 305 active+clean; 306 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 199 op/s
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.916 2 DEBUG oslo_concurrency.lockutils [None req-5655bc77-492d-4bec-80d0-f5fabc596dbb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.875s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.918 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.918 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.922 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000058 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.922 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000058 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.929 2 DEBUG nova.network.neutron [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Updating instance_info_cache with network_info: [{"id": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "address": "fa:16:3e:7f:9c:bd", "network": {"id": "7cb8e394-ebca-4b27-8174-62c6b6f3a7da", "bridge": "br-int", "label": "tempest-network-smoke--1640681957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce4eb1a6-22", "ovs_interfaceid": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.948 2 DEBUG oslo_concurrency.lockutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Releasing lock "refresh_cache-9e354e27-d674-43c3-890b-caf8731cb827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.948 2 DEBUG nova.compute.manager [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Instance network_info: |[{"id": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "address": "fa:16:3e:7f:9c:bd", "network": {"id": "7cb8e394-ebca-4b27-8174-62c6b6f3a7da", "bridge": "br-int", "label": "tempest-network-smoke--1640681957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce4eb1a6-22", "ovs_interfaceid": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.949 2 DEBUG oslo_concurrency.lockutils [req-e3287fff-2c2c-451b-9915-169f72028a6a req-b24b6db1-5df2-4384-8f48-eafe76a39094 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-9e354e27-d674-43c3-890b-caf8731cb827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.949 2 DEBUG nova.network.neutron [req-e3287fff-2c2c-451b-9915-169f72028a6a req-b24b6db1-5df2-4384-8f48-eafe76a39094 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Refreshing network info cache for port ce4eb1a6-2221-4519-98fa-44a39da77b71 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.951 2 DEBUG nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Start _get_guest_xml network_info=[{"id": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "address": "fa:16:3e:7f:9c:bd", "network": {"id": "7cb8e394-ebca-4b27-8174-62c6b6f3a7da", "bridge": "br-int", "label": "tempest-network-smoke--1640681957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce4eb1a6-22", "ovs_interfaceid": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.954 2 WARNING nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.959 2 DEBUG nova.virt.libvirt.host [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.960 2 DEBUG nova.virt.libvirt.host [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.966 2 DEBUG nova.virt.libvirt.host [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.966 2 DEBUG nova.virt.libvirt.host [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.966 2 DEBUG nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.967 2 DEBUG nova.virt.hardware [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.967 2 DEBUG nova.virt.hardware [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.967 2 DEBUG nova.virt.hardware [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.967 2 DEBUG nova.virt.hardware [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.967 2 DEBUG nova.virt.hardware [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.968 2 DEBUG nova.virt.hardware [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.968 2 DEBUG nova.virt.hardware [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.968 2 DEBUG nova.virt.hardware [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.968 2 DEBUG nova.virt.hardware [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.968 2 DEBUG nova.virt.hardware [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.968 2 DEBUG nova.virt.hardware [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:12:29 np0005486808 nova_compute[259627]: 2025-10-14 09:12:29.970 2 DEBUG oslo_concurrency.processutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:30 np0005486808 nova_compute[259627]: 2025-10-14 09:12:30.258 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:12:30 np0005486808 nova_compute[259627]: 2025-10-14 09:12:30.259 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3563MB free_disk=59.85959243774414GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:12:30 np0005486808 nova_compute[259627]: 2025-10-14 09:12:30.259 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:30 np0005486808 nova_compute[259627]: 2025-10-14 09:12:30.259 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:30 np0005486808 nova_compute[259627]: 2025-10-14 09:12:30.372 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 2534f8b9-e832-4b78-ada4-e551429bdc75 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:12:30 np0005486808 nova_compute[259627]: 2025-10-14 09:12:30.373 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 16c1b8b8-cda9-45f9-994f-3f102eb85e1e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:12:30 np0005486808 nova_compute[259627]: 2025-10-14 09:12:30.373 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance ab89bfba-67b2-4767-90f2-7ef5dab476c0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:12:30 np0005486808 nova_compute[259627]: 2025-10-14 09:12:30.373 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 4774788b-1dc2-40c6-87d0-db4e3f54a609 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:12:30 np0005486808 nova_compute[259627]: 2025-10-14 09:12:30.374 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 9e354e27-d674-43c3-890b-caf8731cb827 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:12:30 np0005486808 nova_compute[259627]: 2025-10-14 09:12:30.374 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:12:30 np0005486808 nova_compute[259627]: 2025-10-14 09:12:30.374 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:12:30 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:12:30 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1667893954' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:12:30 np0005486808 nova_compute[259627]: 2025-10-14 09:12:30.495 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:30 np0005486808 nova_compute[259627]: 2025-10-14 09:12:30.534 2 DEBUG oslo_concurrency.processutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:30 np0005486808 nova_compute[259627]: 2025-10-14 09:12:30.573 2 DEBUG nova.storage.rbd_utils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 9e354e27-d674-43c3-890b-caf8731cb827_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:30 np0005486808 nova_compute[259627]: 2025-10-14 09:12:30.577 2 DEBUG oslo_concurrency.processutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:30 np0005486808 nova_compute[259627]: 2025-10-14 09:12:30.619 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433150.5444643, 4774788b-1dc2-40c6-87d0-db4e3f54a609 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:12:30 np0005486808 nova_compute[259627]: 2025-10-14 09:12:30.620 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] VM Started (Lifecycle Event)#033[00m
Oct 14 05:12:30 np0005486808 nova_compute[259627]: 2025-10-14 09:12:30.644 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:12:30 np0005486808 nova_compute[259627]: 2025-10-14 09:12:30.648 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433150.5445905, 4774788b-1dc2-40c6-87d0-db4e3f54a609 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:12:30 np0005486808 nova_compute[259627]: 2025-10-14 09:12:30.649 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:12:30 np0005486808 nova_compute[259627]: 2025-10-14 09:12:30.683 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:12:30 np0005486808 nova_compute[259627]: 2025-10-14 09:12:30.687 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:12:30 np0005486808 nova_compute[259627]: 2025-10-14 09:12:30.711 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:12:30 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:12:30 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/699491605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:12:30 np0005486808 nova_compute[259627]: 2025-10-14 09:12:30.941 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:30 np0005486808 nova_compute[259627]: 2025-10-14 09:12:30.945 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:12:30 np0005486808 nova_compute[259627]: 2025-10-14 09:12:30.958 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:12:30 np0005486808 nova_compute[259627]: 2025-10-14 09:12:30.979 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:12:30 np0005486808 nova_compute[259627]: 2025-10-14 09:12:30.979 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:31 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:12:31 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1050899084' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.022 2 DEBUG oslo_concurrency.processutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.023 2 DEBUG nova.virt.libvirt.vif [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:12:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-461250371',display_name='tempest-TestNetworkAdvancedServerOps-server-461250371',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-461250371',id=90,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCcVmsUIwo920oPyHLmJGrkrYCQ5UunB9Yv0/Buc9cCiSWB2ZOXdvOp0s2cEsPfEAttRSx6VdIWgt0joL5sdVyP2CI3WgYA2zF+RirB/x5531ApwlIJzNgUQx7hgxyfijg==',key_name='tempest-TestNetworkAdvancedServerOps-806517333',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d24993a343a425dbddac7e32be0c86b',ramdisk_id='',reservation_id='r-dt6vqls1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-94788416',owner_user_name='tempest-TestNetworkAdvancedServerOps-94788416-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:12:27Z,user_data=None,user_id='e992bcb79c4946a8985e3df25eb216ca',uuid=9e354e27-d674-43c3-890b-caf8731cb827,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "address": "fa:16:3e:7f:9c:bd", "network": {"id": "7cb8e394-ebca-4b27-8174-62c6b6f3a7da", "bridge": "br-int", "label": "tempest-network-smoke--1640681957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce4eb1a6-22", "ovs_interfaceid": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.023 2 DEBUG nova.network.os_vif_util [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converting VIF {"id": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "address": "fa:16:3e:7f:9c:bd", "network": {"id": "7cb8e394-ebca-4b27-8174-62c6b6f3a7da", "bridge": "br-int", "label": "tempest-network-smoke--1640681957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce4eb1a6-22", "ovs_interfaceid": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.024 2 DEBUG nova.network.os_vif_util [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:9c:bd,bridge_name='br-int',has_traffic_filtering=True,id=ce4eb1a6-2221-4519-98fa-44a39da77b71,network=Network(7cb8e394-ebca-4b27-8174-62c6b6f3a7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce4eb1a6-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.025 2 DEBUG nova.objects.instance [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'pci_devices' on Instance uuid 9e354e27-d674-43c3-890b-caf8731cb827 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.042 2 DEBUG nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:12:31 np0005486808 nova_compute[259627]:  <uuid>9e354e27-d674-43c3-890b-caf8731cb827</uuid>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:  <name>instance-0000005a</name>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:12:31 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-461250371</nova:name>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:12:29</nova:creationTime>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:12:31 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:        <nova:user uuid="e992bcb79c4946a8985e3df25eb216ca">tempest-TestNetworkAdvancedServerOps-94788416-project-member</nova:user>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:        <nova:project uuid="2d24993a343a425dbddac7e32be0c86b">tempest-TestNetworkAdvancedServerOps-94788416</nova:project>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:        <nova:port uuid="ce4eb1a6-2221-4519-98fa-44a39da77b71">
Oct 14 05:12:31 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:      <entry name="serial">9e354e27-d674-43c3-890b-caf8731cb827</entry>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:      <entry name="uuid">9e354e27-d674-43c3-890b-caf8731cb827</entry>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:12:31 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/9e354e27-d674-43c3-890b-caf8731cb827_disk">
Oct 14 05:12:31 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:12:31 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:12:31 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/9e354e27-d674-43c3-890b-caf8731cb827_disk.config">
Oct 14 05:12:31 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:12:31 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:12:31 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:7f:9c:bd"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:      <target dev="tapce4eb1a6-22"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:12:31 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/9e354e27-d674-43c3-890b-caf8731cb827/console.log" append="off"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:12:31 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:12:31 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:12:31 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:12:31 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:12:31 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.043 2 DEBUG nova.compute.manager [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Preparing to wait for external event network-vif-plugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.043 2 DEBUG oslo_concurrency.lockutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "9e354e27-d674-43c3-890b-caf8731cb827-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.044 2 DEBUG oslo_concurrency.lockutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.044 2 DEBUG oslo_concurrency.lockutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.045 2 DEBUG nova.virt.libvirt.vif [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:12:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-461250371',display_name='tempest-TestNetworkAdvancedServerOps-server-461250371',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-461250371',id=90,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCcVmsUIwo920oPyHLmJGrkrYCQ5UunB9Yv0/Buc9cCiSWB2ZOXdvOp0s2cEsPfEAttRSx6VdIWgt0joL5sdVyP2CI3WgYA2zF+RirB/x5531ApwlIJzNgUQx7hgxyfijg==',key_name='tempest-TestNetworkAdvancedServerOps-806517333',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d24993a343a425dbddac7e32be0c86b',ramdisk_id='',reservation_id='r-dt6vqls1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-94788416',owner_user_name='tempest-TestNetworkAdvancedServerOps-94788416-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:12:27Z,user_data=None,user_id='e992bcb79c4946a8985e3df25eb216ca',uuid=9e354e27-d674-43c3-890b-caf8731cb827,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "address": "fa:16:3e:7f:9c:bd", "network": {"id": "7cb8e394-ebca-4b27-8174-62c6b6f3a7da", "bridge": "br-int", "label": "tempest-network-smoke--1640681957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce4eb1a6-22", "ovs_interfaceid": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.045 2 DEBUG nova.network.os_vif_util [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converting VIF {"id": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "address": "fa:16:3e:7f:9c:bd", "network": {"id": "7cb8e394-ebca-4b27-8174-62c6b6f3a7da", "bridge": "br-int", "label": "tempest-network-smoke--1640681957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce4eb1a6-22", "ovs_interfaceid": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.045 2 DEBUG nova.network.os_vif_util [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:9c:bd,bridge_name='br-int',has_traffic_filtering=True,id=ce4eb1a6-2221-4519-98fa-44a39da77b71,network=Network(7cb8e394-ebca-4b27-8174-62c6b6f3a7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce4eb1a6-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.046 2 DEBUG os_vif [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:9c:bd,bridge_name='br-int',has_traffic_filtering=True,id=ce4eb1a6-2221-4519-98fa-44a39da77b71,network=Network(7cb8e394-ebca-4b27-8174-62c6b6f3a7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce4eb1a6-22') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.047 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.047 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.050 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce4eb1a6-22, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.050 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapce4eb1a6-22, col_values=(('external_ids', {'iface-id': 'ce4eb1a6-2221-4519-98fa-44a39da77b71', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7f:9c:bd', 'vm-uuid': '9e354e27-d674-43c3-890b-caf8731cb827'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:31 np0005486808 NetworkManager[44885]: <info>  [1760433151.1022] manager: (tapce4eb1a6-22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/377)
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.107 2 INFO os_vif [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:9c:bd,bridge_name='br-int',has_traffic_filtering=True,id=ce4eb1a6-2221-4519-98fa-44a39da77b71,network=Network(7cb8e394-ebca-4b27-8174-62c6b6f3a7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce4eb1a6-22')#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.173 2 DEBUG nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.173 2 DEBUG nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.174 2 DEBUG nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] No VIF found with MAC fa:16:3e:7f:9c:bd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.175 2 INFO nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Using config drive#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.213 2 DEBUG nova.storage.rbd_utils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 9e354e27-d674-43c3-890b-caf8731cb827_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.239 2 DEBUG nova.network.neutron [req-e3287fff-2c2c-451b-9915-169f72028a6a req-b24b6db1-5df2-4384-8f48-eafe76a39094 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Updated VIF entry in instance network info cache for port ce4eb1a6-2221-4519-98fa-44a39da77b71. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.240 2 DEBUG nova.network.neutron [req-e3287fff-2c2c-451b-9915-169f72028a6a req-b24b6db1-5df2-4384-8f48-eafe76a39094 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Updating instance_info_cache with network_info: [{"id": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "address": "fa:16:3e:7f:9c:bd", "network": {"id": "7cb8e394-ebca-4b27-8174-62c6b6f3a7da", "bridge": "br-int", "label": "tempest-network-smoke--1640681957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce4eb1a6-22", "ovs_interfaceid": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.274 2 DEBUG oslo_concurrency.lockutils [req-e3287fff-2c2c-451b-9915-169f72028a6a req-b24b6db1-5df2-4384-8f48-eafe76a39094 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-9e354e27-d674-43c3-890b-caf8731cb827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:12:31 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:31Z|00919|binding|INFO|Releasing lport 61fe5571-a8eb-446a-8c4c-1f6f6758b146 from this chassis (sb_readonly=0)
Oct 14 05:12:31 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:31Z|00920|binding|INFO|Releasing lport 6cfe11a6-55c2-4d2e-880b-8832ad317040 from this chassis (sb_readonly=0)
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.613 2 INFO nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Creating config drive at /var/lib/nova/instances/9e354e27-d674-43c3-890b-caf8731cb827/disk.config#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.618 2 DEBUG oslo_concurrency.processutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9e354e27-d674-43c3-890b-caf8731cb827/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6ff1dqto execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:31 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:31Z|00093|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8b:d7:f7 10.100.0.7
Oct 14 05:12:31 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:31Z|00094|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8b:d7:f7 10.100.0.7
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.698 2 DEBUG nova.compute.manager [req-a326863f-2b9e-4e6d-9a48-0d4c88f77c62 req-79da8ed1-77d0-48c2-9e64-b08262dd69eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Received event network-vif-plugged-26bcc700-59c6-4e79-904d-988cd11152c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.699 2 DEBUG oslo_concurrency.lockutils [req-a326863f-2b9e-4e6d-9a48-0d4c88f77c62 req-79da8ed1-77d0-48c2-9e64-b08262dd69eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4774788b-1dc2-40c6-87d0-db4e3f54a609-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.699 2 DEBUG oslo_concurrency.lockutils [req-a326863f-2b9e-4e6d-9a48-0d4c88f77c62 req-79da8ed1-77d0-48c2-9e64-b08262dd69eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4774788b-1dc2-40c6-87d0-db4e3f54a609-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.699 2 DEBUG oslo_concurrency.lockutils [req-a326863f-2b9e-4e6d-9a48-0d4c88f77c62 req-79da8ed1-77d0-48c2-9e64-b08262dd69eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4774788b-1dc2-40c6-87d0-db4e3f54a609-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.700 2 DEBUG nova.compute.manager [req-a326863f-2b9e-4e6d-9a48-0d4c88f77c62 req-79da8ed1-77d0-48c2-9e64-b08262dd69eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Processing event network-vif-plugged-26bcc700-59c6-4e79-904d-988cd11152c8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.700 2 DEBUG nova.compute.manager [req-a326863f-2b9e-4e6d-9a48-0d4c88f77c62 req-79da8ed1-77d0-48c2-9e64-b08262dd69eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Received event network-vif-plugged-26bcc700-59c6-4e79-904d-988cd11152c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.700 2 DEBUG oslo_concurrency.lockutils [req-a326863f-2b9e-4e6d-9a48-0d4c88f77c62 req-79da8ed1-77d0-48c2-9e64-b08262dd69eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4774788b-1dc2-40c6-87d0-db4e3f54a609-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.701 2 DEBUG oslo_concurrency.lockutils [req-a326863f-2b9e-4e6d-9a48-0d4c88f77c62 req-79da8ed1-77d0-48c2-9e64-b08262dd69eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4774788b-1dc2-40c6-87d0-db4e3f54a609-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.701 2 DEBUG oslo_concurrency.lockutils [req-a326863f-2b9e-4e6d-9a48-0d4c88f77c62 req-79da8ed1-77d0-48c2-9e64-b08262dd69eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4774788b-1dc2-40c6-87d0-db4e3f54a609-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.701 2 DEBUG nova.compute.manager [req-a326863f-2b9e-4e6d-9a48-0d4c88f77c62 req-79da8ed1-77d0-48c2-9e64-b08262dd69eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] No waiting events found dispatching network-vif-plugged-26bcc700-59c6-4e79-904d-988cd11152c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.701 2 WARNING nova.compute.manager [req-a326863f-2b9e-4e6d-9a48-0d4c88f77c62 req-79da8ed1-77d0-48c2-9e64-b08262dd69eb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Received unexpected event network-vif-plugged-26bcc700-59c6-4e79-904d-988cd11152c8 for instance with vm_state building and task_state spawning.#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.702 2 DEBUG nova.compute.manager [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.706 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433151.7058816, 4774788b-1dc2-40c6-87d0-db4e3f54a609 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.706 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.708 2 DEBUG nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.711 2 INFO nova.virt.libvirt.driver [-] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Instance spawned successfully.#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.712 2 DEBUG nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.738 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.742 2 DEBUG nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.743 2 DEBUG nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.743 2 DEBUG nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.744 2 DEBUG nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.744 2 DEBUG nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.744 2 DEBUG nova.virt.libvirt.driver [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.748 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.763 2 DEBUG oslo_concurrency.processutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9e354e27-d674-43c3-890b-caf8731cb827/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6ff1dqto" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.785 2 DEBUG nova.storage.rbd_utils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 9e354e27-d674-43c3-890b-caf8731cb827_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.788 2 DEBUG oslo_concurrency.processutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9e354e27-d674-43c3-890b-caf8731cb827/disk.config 9e354e27-d674-43c3-890b-caf8731cb827_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.831 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.833 2 INFO nova.compute.manager [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Took 8.00 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.833 2 DEBUG nova.compute.manager [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.848 2 DEBUG oslo_concurrency.lockutils [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.849 2 DEBUG oslo_concurrency.lockutils [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.849 2 DEBUG oslo_concurrency.lockutils [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.849 2 DEBUG oslo_concurrency.lockutils [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.850 2 DEBUG oslo_concurrency.lockutils [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.852 2 INFO nova.compute.manager [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Terminating instance#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.852 2 DEBUG nova.compute.manager [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:12:31 np0005486808 kernel: tapb0cc5216-20 (unregistering): left promiscuous mode
Oct 14 05:12:31 np0005486808 NetworkManager[44885]: <info>  [1760433151.9028] device (tapb0cc5216-20): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.910 2 INFO nova.compute.manager [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Took 9.15 seconds to build instance.#033[00m
Oct 14 05:12:31 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1742: 305 pgs: 305 active+clean; 295 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 9.2 MiB/s wr, 400 op/s
Oct 14 05:12:31 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:31Z|00921|binding|INFO|Releasing lport b0cc5216-2023-4add-97a9-4bafe30fd8c3 from this chassis (sb_readonly=0)
Oct 14 05:12:31 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:31Z|00922|binding|INFO|Setting lport b0cc5216-2023-4add-97a9-4bafe30fd8c3 down in Southbound
Oct 14 05:12:31 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:31Z|00923|binding|INFO|Removing iface tapb0cc5216-20 ovn-installed in OVS
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:31.936 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:4e:ee 10.100.0.14'], port_security=['fa:16:3e:52:4e:ee 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '16c1b8b8-cda9-45f9-994f-3f102eb85e1e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0506bb08-7957-44ca-9a0f-014c548c7b40', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5a057db932754d6eae91f0d2f359f1ff', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9b9d7c47-8a9b-4622-8756-36a8a6e40174', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bc552e0c-ff32-4755-9304-f9703ae8cc71, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=b0cc5216-2023-4add-97a9-4bafe30fd8c3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.936 2 DEBUG oslo_concurrency.lockutils [None req-a72089ae-5950-498a-afbb-e926a18a9907 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "4774788b-1dc2-40c6-87d0-db4e3f54a609" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.275s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:31.938 162547 INFO neutron.agent.ovn.metadata.agent [-] Port b0cc5216-2023-4add-97a9-4bafe30fd8c3 in datapath 0506bb08-7957-44ca-9a0f-014c548c7b40 unbound from our chassis#033[00m
Oct 14 05:12:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:31.940 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0506bb08-7957-44ca-9a0f-014c548c7b40#033[00m
Oct 14 05:12:31 np0005486808 nova_compute[259627]: 2025-10-14 09:12:31.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:31.954 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f60024ef-4980-4503-9f17-a47bfa648a51]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:31 np0005486808 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d00000057.scope: Deactivated successfully.
Oct 14 05:12:31 np0005486808 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d00000057.scope: Consumed 4.704s CPU time.
Oct 14 05:12:31 np0005486808 systemd-machined[214636]: Machine qemu-108-instance-00000057 terminated.
Oct 14 05:12:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:31.990 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[25e6a651-ee82-4418-aa2f-88a4d4f2c060]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:31.992 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[39b76fcf-8731-4ecb-a029-14d1f0a7cdc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:32 np0005486808 nova_compute[259627]: 2025-10-14 09:12:32.008 2 DEBUG oslo_concurrency.lockutils [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:32 np0005486808 nova_compute[259627]: 2025-10-14 09:12:32.009 2 DEBUG oslo_concurrency.lockutils [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:32 np0005486808 nova_compute[259627]: 2025-10-14 09:12:32.009 2 DEBUG oslo_concurrency.lockutils [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:32 np0005486808 nova_compute[259627]: 2025-10-14 09:12:32.009 2 DEBUG oslo_concurrency.lockutils [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:32 np0005486808 nova_compute[259627]: 2025-10-14 09:12:32.009 2 DEBUG oslo_concurrency.lockutils [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:32 np0005486808 nova_compute[259627]: 2025-10-14 09:12:32.011 2 INFO nova.compute.manager [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Terminating instance#033[00m
Oct 14 05:12:32 np0005486808 nova_compute[259627]: 2025-10-14 09:12:32.011 2 DEBUG nova.compute.manager [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:12:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:32.025 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[077bbb5c-b57d-47cf-9173-f1cbdcf91a8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:32.043 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0fdccda3-319e-44b6-a887-480d63538274]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0506bb08-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:c3:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 263], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 692575, 'reachable_time': 26707, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348915, 'error': None, 'target': 'ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:32.064 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[26bf25c5-8ec1-464b-a9a4-b23a5f6fd6b4]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0506bb08-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 692594, 'tstamp': 692594}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348916, 'error': None, 'target': 'ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0506bb08-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 692598, 'tstamp': 692598}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348916, 'error': None, 'target': 'ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:32.067 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0506bb08-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:32 np0005486808 nova_compute[259627]: 2025-10-14 09:12:32.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:32 np0005486808 nova_compute[259627]: 2025-10-14 09:12:32.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:32.078 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0506bb08-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:32.079 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:12:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:32.080 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0506bb08-70, col_values=(('external_ids', {'iface-id': '6cfe11a6-55c2-4d2e-880b-8832ad317040'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:32.081 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:12:32 np0005486808 nova_compute[259627]: 2025-10-14 09:12:32.089 2 INFO nova.virt.libvirt.driver [-] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Instance destroyed successfully.#033[00m
Oct 14 05:12:32 np0005486808 nova_compute[259627]: 2025-10-14 09:12:32.089 2 DEBUG nova.objects.instance [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lazy-loading 'resources' on Instance uuid 16c1b8b8-cda9-45f9-994f-3f102eb85e1e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:12:32 np0005486808 nova_compute[259627]: 2025-10-14 09:12:32.106 2 DEBUG nova.virt.libvirt.vif [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:12:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1135349362',display_name='tempest-tempest.common.compute-instance-1135349362-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1135349362-1',id=87,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:12:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5a057db932754d6eae91f0d2f359f1ff',ramdisk_id='',reservation_id='r-zo7br5c0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-2115206001',owner_user_name='tempest-MultipleCreateTestJSON-2115206001-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:12:28Z,user_data=None,user_id='f50b95774c384c5a8414b197ed5d7b82',uuid=16c1b8b8-cda9-45f9-994f-3f102eb85e1e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b0cc5216-2023-4add-97a9-4bafe30fd8c3", "address": "fa:16:3e:52:4e:ee", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0cc5216-20", "ovs_interfaceid": "b0cc5216-2023-4add-97a9-4bafe30fd8c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:12:32 np0005486808 nova_compute[259627]: 2025-10-14 09:12:32.108 2 DEBUG nova.network.os_vif_util [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converting VIF {"id": "b0cc5216-2023-4add-97a9-4bafe30fd8c3", "address": "fa:16:3e:52:4e:ee", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0cc5216-20", "ovs_interfaceid": "b0cc5216-2023-4add-97a9-4bafe30fd8c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:12:32 np0005486808 nova_compute[259627]: 2025-10-14 09:12:32.109 2 DEBUG nova.network.os_vif_util [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:4e:ee,bridge_name='br-int',has_traffic_filtering=True,id=b0cc5216-2023-4add-97a9-4bafe30fd8c3,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0cc5216-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:12:32 np0005486808 nova_compute[259627]: 2025-10-14 09:12:32.110 2 DEBUG os_vif [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:4e:ee,bridge_name='br-int',has_traffic_filtering=True,id=b0cc5216-2023-4add-97a9-4bafe30fd8c3,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0cc5216-20') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:12:32 np0005486808 nova_compute[259627]: 2025-10-14 09:12:32.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:32 np0005486808 nova_compute[259627]: 2025-10-14 09:12:32.113 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb0cc5216-20, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:32 np0005486808 nova_compute[259627]: 2025-10-14 09:12:32.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:32 np0005486808 nova_compute[259627]: 2025-10-14 09:12:32.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:12:32 np0005486808 kernel: tap1037d287-c1 (unregistering): left promiscuous mode
Oct 14 05:12:32 np0005486808 NetworkManager[44885]: <info>  [1760433152.1523] device (tap1037d287-c1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:12:32 np0005486808 nova_compute[259627]: 2025-10-14 09:12:32.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:32 np0005486808 nova_compute[259627]: 2025-10-14 09:12:32.157 2 INFO os_vif [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:4e:ee,bridge_name='br-int',has_traffic_filtering=True,id=b0cc5216-2023-4add-97a9-4bafe30fd8c3,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0cc5216-20')#033[00m
Oct 14 05:12:32 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:32Z|00924|binding|INFO|Releasing lport 1037d287-c167-4691-9393-55be86ecbab2 from this chassis (sb_readonly=0)
Oct 14 05:12:32 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:32Z|00925|binding|INFO|Setting lport 1037d287-c167-4691-9393-55be86ecbab2 down in Southbound
Oct 14 05:12:32 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:32Z|00926|binding|INFO|Removing iface tap1037d287-c1 ovn-installed in OVS
Oct 14 05:12:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:32.177 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:ea:3e 10.100.0.5'], port_security=['fa:16:3e:08:ea:3e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ab89bfba-67b2-4767-90f2-7ef5dab476c0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0506bb08-7957-44ca-9a0f-014c548c7b40', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5a057db932754d6eae91f0d2f359f1ff', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9b9d7c47-8a9b-4622-8756-36a8a6e40174', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bc552e0c-ff32-4755-9304-f9703ae8cc71, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=1037d287-c167-4691-9393-55be86ecbab2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:12:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:32.178 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 1037d287-c167-4691-9393-55be86ecbab2 in datapath 0506bb08-7957-44ca-9a0f-014c548c7b40 unbound from our chassis#033[00m
Oct 14 05:12:32 np0005486808 nova_compute[259627]: 2025-10-14 09:12:32.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:32.179 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0506bb08-7957-44ca-9a0f-014c548c7b40, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:12:32 np0005486808 nova_compute[259627]: 2025-10-14 09:12:32.179 2 DEBUG oslo_concurrency.processutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9e354e27-d674-43c3-890b-caf8731cb827/disk.config 9e354e27-d674-43c3-890b-caf8731cb827_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.391s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:32 np0005486808 nova_compute[259627]: 2025-10-14 09:12:32.180 2 INFO nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Deleting local config drive /var/lib/nova/instances/9e354e27-d674-43c3-890b-caf8731cb827/disk.config because it was imported into RBD.#033[00m
Oct 14 05:12:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:32.180 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[37142f37-1bad-4104-b73c-89013b86a4c4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:32.181 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40 namespace which is not needed anymore#033[00m
Oct 14 05:12:32 np0005486808 nova_compute[259627]: 2025-10-14 09:12:32.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:32 np0005486808 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d00000058.scope: Deactivated successfully.
Oct 14 05:12:32 np0005486808 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d00000058.scope: Consumed 3.208s CPU time.
Oct 14 05:12:32 np0005486808 systemd-machined[214636]: Machine qemu-109-instance-00000058 terminated.
Oct 14 05:12:32 np0005486808 kernel: tapce4eb1a6-22: entered promiscuous mode
Oct 14 05:12:32 np0005486808 NetworkManager[44885]: <info>  [1760433152.2566] manager: (tapce4eb1a6-22): new Tun device (/org/freedesktop/NetworkManager/Devices/378)
Oct 14 05:12:32 np0005486808 nova_compute[259627]: 2025-10-14 09:12:32.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:32 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:32Z|00927|binding|INFO|Claiming lport ce4eb1a6-2221-4519-98fa-44a39da77b71 for this chassis.
Oct 14 05:12:32 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:32Z|00928|binding|INFO|ce4eb1a6-2221-4519-98fa-44a39da77b71: Claiming fa:16:3e:7f:9c:bd 10.100.0.7
Oct 14 05:12:32 np0005486808 NetworkManager[44885]: <info>  [1760433152.2682] device (tapce4eb1a6-22): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:12:32 np0005486808 NetworkManager[44885]: <info>  [1760433152.2692] device (tapce4eb1a6-22): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:12:32 np0005486808 nova_compute[259627]: 2025-10-14 09:12:32.268 2 INFO nova.virt.libvirt.driver [-] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Instance destroyed successfully.#033[00m
Oct 14 05:12:32 np0005486808 nova_compute[259627]: 2025-10-14 09:12:32.268 2 DEBUG nova.objects.instance [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lazy-loading 'resources' on Instance uuid ab89bfba-67b2-4767-90f2-7ef5dab476c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:12:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:32.276 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:9c:bd 10.100.0.7'], port_security=['fa:16:3e:7f:9c:bd 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9e354e27-d674-43c3-890b-caf8731cb827', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7cb8e394-ebca-4b27-8174-62c6b6f3a7da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d24993a343a425dbddac7e32be0c86b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c85ef4e1-bf02-447d-8de0-60f2d978738d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b7211eb9-3be4-4007-bf83-d7812e6ec9fe, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=ce4eb1a6-2221-4519-98fa-44a39da77b71) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:12:32 np0005486808 nova_compute[259627]: 2025-10-14 09:12:32.283 2 DEBUG nova.virt.libvirt.vif [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:12:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1135349362',display_name='tempest-tempest.common.compute-instance-1135349362-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1135349362-2',id=88,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-10-14T09:12:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5a057db932754d6eae91f0d2f359f1ff',ramdisk_id='',reservation_id='r-zo7br5c0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio
',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-2115206001',owner_user_name='tempest-MultipleCreateTestJSON-2115206001-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:12:29Z,user_data=None,user_id='f50b95774c384c5a8414b197ed5d7b82',uuid=ab89bfba-67b2-4767-90f2-7ef5dab476c0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1037d287-c167-4691-9393-55be86ecbab2", "address": "fa:16:3e:08:ea:3e", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1037d287-c1", "ovs_interfaceid": "1037d287-c167-4691-9393-55be86ecbab2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:12:32 np0005486808 nova_compute[259627]: 2025-10-14 09:12:32.283 2 DEBUG nova.network.os_vif_util [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converting VIF {"id": "1037d287-c167-4691-9393-55be86ecbab2", "address": "fa:16:3e:08:ea:3e", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1037d287-c1", "ovs_interfaceid": "1037d287-c167-4691-9393-55be86ecbab2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:12:32 np0005486808 nova_compute[259627]: 2025-10-14 09:12:32.284 2 DEBUG nova.network.os_vif_util [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:ea:3e,bridge_name='br-int',has_traffic_filtering=True,id=1037d287-c167-4691-9393-55be86ecbab2,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1037d287-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:12:32 np0005486808 nova_compute[259627]: 2025-10-14 09:12:32.284 2 DEBUG os_vif [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:ea:3e,bridge_name='br-int',has_traffic_filtering=True,id=1037d287-c167-4691-9393-55be86ecbab2,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1037d287-c1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:12:32 np0005486808 nova_compute[259627]: 2025-10-14 09:12:32.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:32 np0005486808 nova_compute[259627]: 2025-10-14 09:12:32.285 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1037d287-c1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:32 np0005486808 nova_compute[259627]: 2025-10-14 09:12:32.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:12:32 np0005486808 systemd-machined[214636]: New machine qemu-111-instance-0000005a.
Oct 14 05:12:32 np0005486808 systemd[1]: Started Virtual Machine qemu-111-instance-0000005a.
Oct 14 05:12:32 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:32Z|00929|binding|INFO|Setting lport ce4eb1a6-2221-4519-98fa-44a39da77b71 ovn-installed in OVS
Oct 14 05:12:32 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:32Z|00930|binding|INFO|Setting lport ce4eb1a6-2221-4519-98fa-44a39da77b71 up in Southbound
Oct 14 05:12:32 np0005486808 nova_compute[259627]: 2025-10-14 09:12:32.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:32 np0005486808 nova_compute[259627]: 2025-10-14 09:12:32.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:32 np0005486808 nova_compute[259627]: 2025-10-14 09:12:32.352 2 INFO os_vif [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:ea:3e,bridge_name='br-int',has_traffic_filtering=True,id=1037d287-c167-4691-9393-55be86ecbab2,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1037d287-c1')#033[00m
Oct 14 05:12:32 np0005486808 neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40[348398]: [NOTICE]   (348411) : haproxy version is 2.8.14-c23fe91
Oct 14 05:12:32 np0005486808 neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40[348398]: [NOTICE]   (348411) : path to executable is /usr/sbin/haproxy
Oct 14 05:12:32 np0005486808 neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40[348398]: [WARNING]  (348411) : Exiting Master process...
Oct 14 05:12:32 np0005486808 neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40[348398]: [ALERT]    (348411) : Current worker (348420) exited with code 143 (Terminated)
Oct 14 05:12:32 np0005486808 neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40[348398]: [WARNING]  (348411) : All workers exited. Exiting... (0)
Oct 14 05:12:32 np0005486808 systemd[1]: libpod-7864abcb4c7372f2fc3dfe30d8613c349bc29b148d20a8988df0f393ca14b0c4.scope: Deactivated successfully.
Oct 14 05:12:32 np0005486808 podman[348997]: 2025-10-14 09:12:32.379258942 +0000 UTC m=+0.065756901 container died 7864abcb4c7372f2fc3dfe30d8613c349bc29b148d20a8988df0f393ca14b0c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009)
Oct 14 05:12:32 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7864abcb4c7372f2fc3dfe30d8613c349bc29b148d20a8988df0f393ca14b0c4-userdata-shm.mount: Deactivated successfully.
Oct 14 05:12:32 np0005486808 systemd[1]: var-lib-containers-storage-overlay-31049def98d964650864fccfae84a5b472505ff9f0ff1c2e6df68f12f050e09d-merged.mount: Deactivated successfully.
Oct 14 05:12:32 np0005486808 podman[348997]: 2025-10-14 09:12:32.701477857 +0000 UTC m=+0.387975806 container cleanup 7864abcb4c7372f2fc3dfe30d8613c349bc29b148d20a8988df0f393ca14b0c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 14 05:12:32 np0005486808 systemd[1]: libpod-conmon-7864abcb4c7372f2fc3dfe30d8613c349bc29b148d20a8988df0f393ca14b0c4.scope: Deactivated successfully.
Oct 14 05:12:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:12:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:12:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:12:32
Oct 14 05:12:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:12:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:12:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'images', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', 'vms', '.mgr']
Oct 14 05:12:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:12:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:12:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:12:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:12:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:12:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:12:32 np0005486808 podman[349050]: 2025-10-14 09:12:32.902669072 +0000 UTC m=+0.161081648 container remove 7864abcb4c7372f2fc3dfe30d8613c349bc29b148d20a8988df0f393ca14b0c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:12:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:32.913 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fb2f5d7f-bafe-4272-9152-0a6c42a37286]: (4, ('Tue Oct 14 09:12:32 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40 (7864abcb4c7372f2fc3dfe30d8613c349bc29b148d20a8988df0f393ca14b0c4)\n7864abcb4c7372f2fc3dfe30d8613c349bc29b148d20a8988df0f393ca14b0c4\nTue Oct 14 09:12:32 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40 (7864abcb4c7372f2fc3dfe30d8613c349bc29b148d20a8988df0f393ca14b0c4)\n7864abcb4c7372f2fc3dfe30d8613c349bc29b148d20a8988df0f393ca14b0c4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:32.915 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d18cdc9d-db92-4d04-8129-b54d236c113c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:32.917 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0506bb08-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:32 np0005486808 kernel: tap0506bb08-70: left promiscuous mode
Oct 14 05:12:32 np0005486808 nova_compute[259627]: 2025-10-14 09:12:32.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:32 np0005486808 nova_compute[259627]: 2025-10-14 09:12:32.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:32.956 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e301bca4-60b7-4de8-99b2-99ade3aa01ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:32.981 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8ec16076-0d4b-4d4a-8dd2-957d9b9b6310]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:32.986 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[98834985-9c57-4cde-a1f5-c34abd3777e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.011 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[703cc39c-47bd-47cb-a6db-53b510a1eef4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 692564, 'reachable_time': 25965, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349062, 'error': None, 'target': 'ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.014 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.014 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[c28901ae-fb4a-4c10-b475-1390de9903a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.016 162547 INFO neutron.agent.ovn.metadata.agent [-] Port ce4eb1a6-2221-4519-98fa-44a39da77b71 in datapath 7cb8e394-ebca-4b27-8174-62c6b6f3a7da unbound from our chassis#033[00m
Oct 14 05:12:33 np0005486808 systemd[1]: run-netns-ovnmeta\x2d0506bb08\x2d7957\x2d44ca\x2d9a0f\x2d014c548c7b40.mount: Deactivated successfully.
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.020 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7cb8e394-ebca-4b27-8174-62c6b6f3a7da#033[00m
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.033 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cf2aae7a-5927-4625-b975-cb6c878115e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.034 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7cb8e394-e1 in ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.035 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7cb8e394-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.036 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9ec52e96-c2d7-4599-ad1b-60098cb1f73d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.037 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0987ec56-d33e-4c0f-b7b3-433c77058d42]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.050 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[9ee802f9-b0b5-49f7-b4ef-fadd71496a84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.073 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3851e309-e7d9-4e6e-8a20-ab50a90924d1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.104 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ac79c5a1-4471-44d5-86f1-41589940ac12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:33 np0005486808 systemd-udevd[349064]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:12:33 np0005486808 NetworkManager[44885]: <info>  [1760433153.1128] manager: (tap7cb8e394-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/379)
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.112 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3963d85a-6b33-4949-87ef-87280d5e24ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:12:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:12:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:12:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:12:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:12:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:12:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:12:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:12:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:12:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.167 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a5cf3462-2de3-4d7f-b89d-17cec09e4cb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.170 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[6e667455-6fa3-4edc-9859-f4de653b83e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:33 np0005486808 NetworkManager[44885]: <info>  [1760433153.2061] device (tap7cb8e394-e0): carrier: link connected
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.217 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[37b6fde3-8afb-4a4c-bf79-00e5c83330bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.239 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[416443f3-657c-4ce6-8943-71835eb4f3ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7cb8e394-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:0c:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 269], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693196, 'reachable_time': 25964, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349091, 'error': None, 'target': 'ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.260 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f2ff5ae1-b12c-4d0c-8712-0db2932badaa]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe52:c43'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 693196, 'tstamp': 693196}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349092, 'error': None, 'target': 'ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.291 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[228b9829-62af-41db-864d-4c5c80f76aa3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7cb8e394-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:0c:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 269], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693196, 'reachable_time': 25964, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 349093, 'error': None, 'target': 'ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.346 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[de463239-2447-431c-bf59-e994da3e90de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.417 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[015b3301-6a9a-4858-9791-defe29bbbd5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.418 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7cb8e394-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.418 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.419 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7cb8e394-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:33 np0005486808 NetworkManager[44885]: <info>  [1760433153.4213] manager: (tap7cb8e394-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/380)
Oct 14 05:12:33 np0005486808 kernel: tap7cb8e394-e0: entered promiscuous mode
Oct 14 05:12:33 np0005486808 nova_compute[259627]: 2025-10-14 09:12:33.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.423 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7cb8e394-e0, col_values=(('external_ids', {'iface-id': 'abbcb164-8856-47e0-a7b9-984d66daedac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:33 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:33Z|00931|binding|INFO|Releasing lport abbcb164-8856-47e0-a7b9-984d66daedac from this chassis (sb_readonly=0)
Oct 14 05:12:33 np0005486808 nova_compute[259627]: 2025-10-14 09:12:33.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:33 np0005486808 nova_compute[259627]: 2025-10-14 09:12:33.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.451 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7cb8e394-ebca-4b27-8174-62c6b6f3a7da.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7cb8e394-ebca-4b27-8174-62c6b6f3a7da.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.452 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[321d65e5-7b2f-4989-863c-3046ad7c1705]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.453 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-7cb8e394-ebca-4b27-8174-62c6b6f3a7da
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/7cb8e394-ebca-4b27-8174-62c6b6f3a7da.pid.haproxy
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 7cb8e394-ebca-4b27-8174-62c6b6f3a7da
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.457 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da', 'env', 'PROCESS_TAG=haproxy-7cb8e394-ebca-4b27-8174-62c6b6f3a7da', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7cb8e394-ebca-4b27-8174-62c6b6f3a7da.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:12:33 np0005486808 nova_compute[259627]: 2025-10-14 09:12:33.821 2 DEBUG oslo_concurrency.lockutils [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "4774788b-1dc2-40c6-87d0-db4e3f54a609" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:33 np0005486808 nova_compute[259627]: 2025-10-14 09:12:33.822 2 DEBUG oslo_concurrency.lockutils [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "4774788b-1dc2-40c6-87d0-db4e3f54a609" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:33 np0005486808 nova_compute[259627]: 2025-10-14 09:12:33.823 2 DEBUG oslo_concurrency.lockutils [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "4774788b-1dc2-40c6-87d0-db4e3f54a609-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:33 np0005486808 nova_compute[259627]: 2025-10-14 09:12:33.823 2 DEBUG oslo_concurrency.lockutils [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "4774788b-1dc2-40c6-87d0-db4e3f54a609-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:33 np0005486808 nova_compute[259627]: 2025-10-14 09:12:33.824 2 DEBUG oslo_concurrency.lockutils [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "4774788b-1dc2-40c6-87d0-db4e3f54a609-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:33 np0005486808 nova_compute[259627]: 2025-10-14 09:12:33.825 2 INFO nova.compute.manager [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Terminating instance#033[00m
Oct 14 05:12:33 np0005486808 nova_compute[259627]: 2025-10-14 09:12:33.826 2 DEBUG nova.compute.manager [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:12:33 np0005486808 kernel: tap26bcc700-59 (unregistering): left promiscuous mode
Oct 14 05:12:33 np0005486808 nova_compute[259627]: 2025-10-14 09:12:33.902 2 DEBUG nova.compute.manager [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Received event network-vif-unplugged-b0cc5216-2023-4add-97a9-4bafe30fd8c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:12:33 np0005486808 nova_compute[259627]: 2025-10-14 09:12:33.903 2 DEBUG oslo_concurrency.lockutils [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:33 np0005486808 nova_compute[259627]: 2025-10-14 09:12:33.904 2 DEBUG oslo_concurrency.lockutils [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:33 np0005486808 nova_compute[259627]: 2025-10-14 09:12:33.904 2 DEBUG oslo_concurrency.lockutils [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:33 np0005486808 nova_compute[259627]: 2025-10-14 09:12:33.905 2 DEBUG nova.compute.manager [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] No waiting events found dispatching network-vif-unplugged-b0cc5216-2023-4add-97a9-4bafe30fd8c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:12:33 np0005486808 nova_compute[259627]: 2025-10-14 09:12:33.906 2 DEBUG nova.compute.manager [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Received event network-vif-unplugged-b0cc5216-2023-4add-97a9-4bafe30fd8c3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:12:33 np0005486808 nova_compute[259627]: 2025-10-14 09:12:33.906 2 DEBUG nova.compute.manager [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Received event network-vif-plugged-b0cc5216-2023-4add-97a9-4bafe30fd8c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:12:33 np0005486808 nova_compute[259627]: 2025-10-14 09:12:33.907 2 DEBUG oslo_concurrency.lockutils [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:33 np0005486808 NetworkManager[44885]: <info>  [1760433153.9079] device (tap26bcc700-59): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:12:33 np0005486808 nova_compute[259627]: 2025-10-14 09:12:33.908 2 DEBUG oslo_concurrency.lockutils [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:33 np0005486808 nova_compute[259627]: 2025-10-14 09:12:33.908 2 DEBUG oslo_concurrency.lockutils [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:33 np0005486808 nova_compute[259627]: 2025-10-14 09:12:33.909 2 DEBUG nova.compute.manager [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] No waiting events found dispatching network-vif-plugged-b0cc5216-2023-4add-97a9-4bafe30fd8c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:12:33 np0005486808 nova_compute[259627]: 2025-10-14 09:12:33.910 2 WARNING nova.compute.manager [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Received unexpected event network-vif-plugged-b0cc5216-2023-4add-97a9-4bafe30fd8c3 for instance with vm_state active and task_state deleting.#033[00m
Oct 14 05:12:33 np0005486808 nova_compute[259627]: 2025-10-14 09:12:33.910 2 DEBUG nova.compute.manager [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Received event network-vif-unplugged-1037d287-c167-4691-9393-55be86ecbab2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:12:33 np0005486808 nova_compute[259627]: 2025-10-14 09:12:33.911 2 DEBUG oslo_concurrency.lockutils [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:33 np0005486808 nova_compute[259627]: 2025-10-14 09:12:33.912 2 DEBUG oslo_concurrency.lockutils [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:33 np0005486808 nova_compute[259627]: 2025-10-14 09:12:33.912 2 DEBUG oslo_concurrency.lockutils [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:33 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:33Z|00932|binding|INFO|Releasing lport 26bcc700-59c6-4e79-904d-988cd11152c8 from this chassis (sb_readonly=0)
Oct 14 05:12:33 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:33Z|00933|binding|INFO|Setting lport 26bcc700-59c6-4e79-904d-988cd11152c8 down in Southbound
Oct 14 05:12:33 np0005486808 nova_compute[259627]: 2025-10-14 09:12:33.913 2 DEBUG nova.compute.manager [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] No waiting events found dispatching network-vif-unplugged-1037d287-c167-4691-9393-55be86ecbab2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:12:33 np0005486808 nova_compute[259627]: 2025-10-14 09:12:33.914 2 DEBUG nova.compute.manager [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Received event network-vif-unplugged-1037d287-c167-4691-9393-55be86ecbab2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:12:33 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:33Z|00934|binding|INFO|Removing iface tap26bcc700-59 ovn-installed in OVS
Oct 14 05:12:33 np0005486808 nova_compute[259627]: 2025-10-14 09:12:33.915 2 DEBUG nova.compute.manager [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Received event network-vif-plugged-1037d287-c167-4691-9393-55be86ecbab2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:12:33 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1743: 305 pgs: 305 active+clean; 295 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 9.1 MiB/s wr, 323 op/s
Oct 14 05:12:33 np0005486808 nova_compute[259627]: 2025-10-14 09:12:33.916 2 DEBUG oslo_concurrency.lockutils [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:33 np0005486808 nova_compute[259627]: 2025-10-14 09:12:33.917 2 DEBUG oslo_concurrency.lockutils [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:33 np0005486808 nova_compute[259627]: 2025-10-14 09:12:33.918 2 DEBUG oslo_concurrency.lockutils [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:33 np0005486808 nova_compute[259627]: 2025-10-14 09:12:33.918 2 DEBUG nova.compute.manager [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] No waiting events found dispatching network-vif-plugged-1037d287-c167-4691-9393-55be86ecbab2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:12:33 np0005486808 nova_compute[259627]: 2025-10-14 09:12:33.918 2 WARNING nova.compute.manager [req-66f023a7-a84f-46c8-9f1b-83cf0a791f50 req-eedc9666-3c0d-402b-8226-e1224a4a07f2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Received unexpected event network-vif-plugged-1037d287-c167-4691-9393-55be86ecbab2 for instance with vm_state active and task_state deleting.#033[00m
Oct 14 05:12:33 np0005486808 nova_compute[259627]: 2025-10-14 09:12:33.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:33 np0005486808 podman[349167]: 2025-10-14 09:12:33.921031172 +0000 UTC m=+0.104851963 container create d1745c1e7bf13cb23652734496b35f2bad619e414bc795125c091f410d559958 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 14 05:12:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:33.925 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:94:f3 10.100.0.13'], port_security=['fa:16:3e:46:94:f3 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '4774788b-1dc2-40c6-87d0-db4e3f54a609', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '517aafb84156407c8672042097e3ef4f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '572acc55-453a-444a-ab8d-a15e14283f7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=927296e1-b389-4596-b9be-8cf735b93ca2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=26bcc700-59c6-4e79-904d-988cd11152c8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:12:33 np0005486808 podman[349167]: 2025-10-14 09:12:33.851441008 +0000 UTC m=+0.035261829 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:12:33 np0005486808 nova_compute[259627]: 2025-10-14 09:12:33.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:33 np0005486808 systemd[1]: Started libpod-conmon-d1745c1e7bf13cb23652734496b35f2bad619e414bc795125c091f410d559958.scope.
Oct 14 05:12:33 np0005486808 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d00000059.scope: Deactivated successfully.
Oct 14 05:12:33 np0005486808 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d00000059.scope: Consumed 2.816s CPU time.
Oct 14 05:12:33 np0005486808 nova_compute[259627]: 2025-10-14 09:12:33.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:12:33 np0005486808 nova_compute[259627]: 2025-10-14 09:12:33.981 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:12:33 np0005486808 nova_compute[259627]: 2025-10-14 09:12:33.981 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 05:12:33 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:12:33 np0005486808 systemd-machined[214636]: Machine qemu-110-instance-00000059 terminated.
Oct 14 05:12:33 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29994210cca99f3be79ccbe5f31a62bef181ac26761ce349073eb7da91ec9974/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:12:34 np0005486808 nova_compute[259627]: 2025-10-14 09:12:34.015 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Oct 14 05:12:34 np0005486808 nova_compute[259627]: 2025-10-14 09:12:34.016 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Oct 14 05:12:34 np0005486808 nova_compute[259627]: 2025-10-14 09:12:34.016 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Oct 14 05:12:34 np0005486808 nova_compute[259627]: 2025-10-14 09:12:34.016 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct 14 05:12:34 np0005486808 podman[349167]: 2025-10-14 09:12:34.018921953 +0000 UTC m=+0.202742784 container init d1745c1e7bf13cb23652734496b35f2bad619e414bc795125c091f410d559958 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:12:34 np0005486808 podman[349167]: 2025-10-14 09:12:34.025928595 +0000 UTC m=+0.209749396 container start d1745c1e7bf13cb23652734496b35f2bad619e414bc795125c091f410d559958 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:12:34 np0005486808 neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da[349186]: [NOTICE]   (349192) : New worker (349199) forked
Oct 14 05:12:34 np0005486808 neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da[349186]: [NOTICE]   (349192) : Loading success.
Oct 14 05:12:34 np0005486808 nova_compute[259627]: 2025-10-14 09:12:34.067 2 INFO nova.virt.libvirt.driver [-] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Instance destroyed successfully.#033[00m
Oct 14 05:12:34 np0005486808 nova_compute[259627]: 2025-10-14 09:12:34.068 2 DEBUG nova.objects.instance [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lazy-loading 'resources' on Instance uuid 4774788b-1dc2-40c6-87d0-db4e3f54a609 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:12:34 np0005486808 nova_compute[259627]: 2025-10-14 09:12:34.080 2 DEBUG nova.virt.libvirt.vif [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:12:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1339714429',display_name='tempest-ServersNegativeTestJSON-server-1339714429',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1339714429',id=89,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:12:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='517aafb84156407c8672042097e3ef4f',ramdisk_id='',reservation_id='r-3mdh02k5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1475695514',owner_user_name='tempest-ServersNegativeTestJSON-1475695514-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:12:31Z,user_data=None,user_id='92e59e145f6942b78d0ffbebc4d89e76',uuid=4774788b-1dc2-40c6-87d0-db4e3f54a609,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "26bcc700-59c6-4e79-904d-988cd11152c8", "address": "fa:16:3e:46:94:f3", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26bcc700-59", "ovs_interfaceid": "26bcc700-59c6-4e79-904d-988cd11152c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:12:34 np0005486808 nova_compute[259627]: 2025-10-14 09:12:34.080 2 DEBUG nova.network.os_vif_util [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Converting VIF {"id": "26bcc700-59c6-4e79-904d-988cd11152c8", "address": "fa:16:3e:46:94:f3", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26bcc700-59", "ovs_interfaceid": "26bcc700-59c6-4e79-904d-988cd11152c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:12:34 np0005486808 nova_compute[259627]: 2025-10-14 09:12:34.081 2 DEBUG nova.network.os_vif_util [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:94:f3,bridge_name='br-int',has_traffic_filtering=True,id=26bcc700-59c6-4e79-904d-988cd11152c8,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26bcc700-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:12:34 np0005486808 nova_compute[259627]: 2025-10-14 09:12:34.082 2 DEBUG os_vif [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:94:f3,bridge_name='br-int',has_traffic_filtering=True,id=26bcc700-59c6-4e79-904d-988cd11152c8,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26bcc700-59') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:12:34 np0005486808 nova_compute[259627]: 2025-10-14 09:12:34.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:34 np0005486808 nova_compute[259627]: 2025-10-14 09:12:34.084 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26bcc700-59, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:34 np0005486808 nova_compute[259627]: 2025-10-14 09:12:34.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:34 np0005486808 nova_compute[259627]: 2025-10-14 09:12:34.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:34 np0005486808 nova_compute[259627]: 2025-10-14 09:12:34.089 2 INFO os_vif [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:94:f3,bridge_name='br-int',has_traffic_filtering=True,id=26bcc700-59c6-4e79-904d-988cd11152c8,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26bcc700-59')#033[00m
Oct 14 05:12:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:34.152 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 26bcc700-59c6-4e79-904d-988cd11152c8 in datapath a49b41b4-2559-4a22-a274-a6c7bbe75f2c unbound from our chassis#033[00m
Oct 14 05:12:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:34.154 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a49b41b4-2559-4a22-a274-a6c7bbe75f2c#033[00m
Oct 14 05:12:34 np0005486808 nova_compute[259627]: 2025-10-14 09:12:34.165 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:12:34 np0005486808 nova_compute[259627]: 2025-10-14 09:12:34.166 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:12:34 np0005486808 nova_compute[259627]: 2025-10-14 09:12:34.166 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 14 05:12:34 np0005486808 nova_compute[259627]: 2025-10-14 09:12:34.167 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2534f8b9-e832-4b78-ada4-e551429bdc75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:12:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:34.174 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3e2fb44c-fc27-479a-b43c-31784997ed49]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:34.211 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[0853ac76-005e-4a18-88c4-a99d4a833265]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:34.215 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[9d5516e8-6464-43cd-b836-38fe6994233c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:34.249 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[813f4d67-2dd5-4128-a66e-2bec2903cae8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:34.276 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[36cb50d9-d158-410d-8af4-8f290c1c04f1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa49b41b4-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:5b:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 259], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691215, 'reachable_time': 16714, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349236, 'error': None, 'target': 'ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:34 np0005486808 nova_compute[259627]: 2025-10-14 09:12:34.284 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433154.28369, 9e354e27-d674-43c3-890b-caf8731cb827 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:12:34 np0005486808 nova_compute[259627]: 2025-10-14 09:12:34.285 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] VM Started (Lifecycle Event)#033[00m
Oct 14 05:12:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:34.301 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[010723e9-756b-40c4-8455-084e33b436f0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa49b41b4-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 691226, 'tstamp': 691226}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349237, 'error': None, 'target': 'ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa49b41b4-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 691229, 'tstamp': 691229}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349237, 'error': None, 'target': 'ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:34.304 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa49b41b4-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:34.310 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa49b41b4-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:34.310 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:12:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:34.311 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa49b41b4-20, col_values=(('external_ids', {'iface-id': '61fe5571-a8eb-446a-8c4c-1f6f6758b146'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:34 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:34.311 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:12:34 np0005486808 nova_compute[259627]: 2025-10-14 09:12:34.311 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:12:34 np0005486808 nova_compute[259627]: 2025-10-14 09:12:34.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:34 np0005486808 nova_compute[259627]: 2025-10-14 09:12:34.317 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433154.2838418, 9e354e27-d674-43c3-890b-caf8731cb827 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:12:34 np0005486808 nova_compute[259627]: 2025-10-14 09:12:34.317 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:12:34 np0005486808 nova_compute[259627]: 2025-10-14 09:12:34.334 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:12:34 np0005486808 nova_compute[259627]: 2025-10-14 09:12:34.337 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:12:34 np0005486808 nova_compute[259627]: 2025-10-14 09:12:34.363 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:12:34 np0005486808 nova_compute[259627]: 2025-10-14 09:12:34.485 2 INFO nova.virt.libvirt.driver [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Deleting instance files /var/lib/nova/instances/16c1b8b8-cda9-45f9-994f-3f102eb85e1e_del#033[00m
Oct 14 05:12:34 np0005486808 nova_compute[259627]: 2025-10-14 09:12:34.485 2 INFO nova.virt.libvirt.driver [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Deletion of /var/lib/nova/instances/16c1b8b8-cda9-45f9-994f-3f102eb85e1e_del complete#033[00m
Oct 14 05:12:34 np0005486808 nova_compute[259627]: 2025-10-14 09:12:34.507 2 INFO nova.virt.libvirt.driver [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Deleting instance files /var/lib/nova/instances/ab89bfba-67b2-4767-90f2-7ef5dab476c0_del#033[00m
Oct 14 05:12:34 np0005486808 nova_compute[259627]: 2025-10-14 09:12:34.508 2 INFO nova.virt.libvirt.driver [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Deletion of /var/lib/nova/instances/ab89bfba-67b2-4767-90f2-7ef5dab476c0_del complete#033[00m
Oct 14 05:12:34 np0005486808 nova_compute[259627]: 2025-10-14 09:12:34.546 2 INFO nova.compute.manager [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Took 2.69 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:12:34 np0005486808 nova_compute[259627]: 2025-10-14 09:12:34.546 2 DEBUG oslo.service.loopingcall [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:12:34 np0005486808 nova_compute[259627]: 2025-10-14 09:12:34.547 2 DEBUG nova.compute.manager [-] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:12:34 np0005486808 nova_compute[259627]: 2025-10-14 09:12:34.547 2 DEBUG nova.network.neutron [-] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:12:34 np0005486808 nova_compute[259627]: 2025-10-14 09:12:34.558 2 INFO nova.compute.manager [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Took 2.55 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:12:34 np0005486808 nova_compute[259627]: 2025-10-14 09:12:34.559 2 DEBUG oslo.service.loopingcall [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:12:34 np0005486808 nova_compute[259627]: 2025-10-14 09:12:34.559 2 DEBUG nova.compute.manager [-] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:12:34 np0005486808 nova_compute[259627]: 2025-10-14 09:12:34.559 2 DEBUG nova.network.neutron [-] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:12:34 np0005486808 nova_compute[259627]: 2025-10-14 09:12:34.962 2 INFO nova.virt.libvirt.driver [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Deleting instance files /var/lib/nova/instances/4774788b-1dc2-40c6-87d0-db4e3f54a609_del#033[00m
Oct 14 05:12:34 np0005486808 nova_compute[259627]: 2025-10-14 09:12:34.962 2 INFO nova.virt.libvirt.driver [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Deletion of /var/lib/nova/instances/4774788b-1dc2-40c6-87d0-db4e3f54a609_del complete#033[00m
Oct 14 05:12:35 np0005486808 nova_compute[259627]: 2025-10-14 09:12:35.013 2 INFO nova.compute.manager [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Took 1.19 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:12:35 np0005486808 nova_compute[259627]: 2025-10-14 09:12:35.014 2 DEBUG oslo.service.loopingcall [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:12:35 np0005486808 nova_compute[259627]: 2025-10-14 09:12:35.014 2 DEBUG nova.compute.manager [-] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:12:35 np0005486808 nova_compute[259627]: 2025-10-14 09:12:35.015 2 DEBUG nova.network.neutron [-] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:12:35 np0005486808 nova_compute[259627]: 2025-10-14 09:12:35.050 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433140.0497344, 1141f79e-2e47-40f1-91b0-275a9fac765c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:12:35 np0005486808 nova_compute[259627]: 2025-10-14 09:12:35.051 2 INFO nova.compute.manager [-] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:12:35 np0005486808 nova_compute[259627]: 2025-10-14 09:12:35.078 2 DEBUG nova.compute.manager [None req-8c4c46dc-7e81-4226-9358-ebac38a5613a - - - - - -] [instance: 1141f79e-2e47-40f1-91b0-275a9fac765c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:12:35 np0005486808 nova_compute[259627]: 2025-10-14 09:12:35.791 2 DEBUG nova.network.neutron [-] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:12:35 np0005486808 nova_compute[259627]: 2025-10-14 09:12:35.819 2 INFO nova.compute.manager [-] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Took 1.27 seconds to deallocate network for instance.#033[00m
Oct 14 05:12:35 np0005486808 nova_compute[259627]: 2025-10-14 09:12:35.897 2 DEBUG nova.compute.manager [req-c622664c-053a-4324-9492-efcc16cfd84a req-dc44b5c8-a2a4-4bb8-8821-b76dd3846618 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Received event network-vif-deleted-b0cc5216-2023-4add-97a9-4bafe30fd8c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:12:35 np0005486808 nova_compute[259627]: 2025-10-14 09:12:35.908 2 DEBUG oslo_concurrency.lockutils [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:35 np0005486808 nova_compute[259627]: 2025-10-14 09:12:35.909 2 DEBUG oslo_concurrency.lockutils [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:35 np0005486808 nova_compute[259627]: 2025-10-14 09:12:35.911 2 DEBUG nova.network.neutron [-] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:12:35 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1744: 305 pgs: 305 active+clean; 167 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 9.2 MiB/s wr, 549 op/s
Oct 14 05:12:35 np0005486808 nova_compute[259627]: 2025-10-14 09:12:35.928 2 INFO nova.compute.manager [-] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Took 1.37 seconds to deallocate network for instance.#033[00m
Oct 14 05:12:35 np0005486808 nova_compute[259627]: 2025-10-14 09:12:35.985 2 DEBUG oslo_concurrency.lockutils [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:35 np0005486808 nova_compute[259627]: 2025-10-14 09:12:35.995 2 DEBUG nova.compute.manager [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Received event network-vif-plugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:12:35 np0005486808 nova_compute[259627]: 2025-10-14 09:12:35.996 2 DEBUG oslo_concurrency.lockutils [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9e354e27-d674-43c3-890b-caf8731cb827-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:35 np0005486808 nova_compute[259627]: 2025-10-14 09:12:35.996 2 DEBUG oslo_concurrency.lockutils [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:35 np0005486808 nova_compute[259627]: 2025-10-14 09:12:35.996 2 DEBUG oslo_concurrency.lockutils [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:35 np0005486808 nova_compute[259627]: 2025-10-14 09:12:35.997 2 DEBUG nova.compute.manager [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Processing event network-vif-plugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:12:35 np0005486808 nova_compute[259627]: 2025-10-14 09:12:35.997 2 DEBUG nova.compute.manager [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Received event network-vif-plugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:12:35 np0005486808 nova_compute[259627]: 2025-10-14 09:12:35.997 2 DEBUG oslo_concurrency.lockutils [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9e354e27-d674-43c3-890b-caf8731cb827-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:35 np0005486808 nova_compute[259627]: 2025-10-14 09:12:35.997 2 DEBUG oslo_concurrency.lockutils [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:35 np0005486808 nova_compute[259627]: 2025-10-14 09:12:35.998 2 DEBUG oslo_concurrency.lockutils [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:35 np0005486808 nova_compute[259627]: 2025-10-14 09:12:35.998 2 DEBUG nova.compute.manager [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] No waiting events found dispatching network-vif-plugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:12:35 np0005486808 nova_compute[259627]: 2025-10-14 09:12:35.998 2 WARNING nova.compute.manager [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Received unexpected event network-vif-plugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 for instance with vm_state building and task_state spawning.#033[00m
Oct 14 05:12:35 np0005486808 nova_compute[259627]: 2025-10-14 09:12:35.998 2 DEBUG nova.compute.manager [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Received event network-vif-unplugged-26bcc700-59c6-4e79-904d-988cd11152c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:12:35 np0005486808 nova_compute[259627]: 2025-10-14 09:12:35.999 2 DEBUG oslo_concurrency.lockutils [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4774788b-1dc2-40c6-87d0-db4e3f54a609-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:35 np0005486808 nova_compute[259627]: 2025-10-14 09:12:35.999 2 DEBUG oslo_concurrency.lockutils [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4774788b-1dc2-40c6-87d0-db4e3f54a609-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:35 np0005486808 nova_compute[259627]: 2025-10-14 09:12:35.999 2 DEBUG oslo_concurrency.lockutils [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4774788b-1dc2-40c6-87d0-db4e3f54a609-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:35.999 2 DEBUG nova.compute.manager [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] No waiting events found dispatching network-vif-unplugged-26bcc700-59c6-4e79-904d-988cd11152c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:36.000 2 DEBUG nova.compute.manager [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Received event network-vif-unplugged-26bcc700-59c6-4e79-904d-988cd11152c8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:36.000 2 DEBUG nova.compute.manager [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Received event network-vif-plugged-26bcc700-59c6-4e79-904d-988cd11152c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:36.000 2 DEBUG oslo_concurrency.lockutils [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4774788b-1dc2-40c6-87d0-db4e3f54a609-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:36.001 2 DEBUG oslo_concurrency.lockutils [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4774788b-1dc2-40c6-87d0-db4e3f54a609-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:36.001 2 DEBUG oslo_concurrency.lockutils [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4774788b-1dc2-40c6-87d0-db4e3f54a609-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:36.001 2 DEBUG nova.compute.manager [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] No waiting events found dispatching network-vif-plugged-26bcc700-59c6-4e79-904d-988cd11152c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:36.001 2 WARNING nova.compute.manager [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Received unexpected event network-vif-plugged-26bcc700-59c6-4e79-904d-988cd11152c8 for instance with vm_state active and task_state deleting.#033[00m
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:36.002 2 DEBUG nova.compute.manager [req-7d475305-b7a9-4e04-97d0-b864f10ddfa9 req-5d1e7e75-99f5-4e49-9759-586ffaf5697b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Received event network-vif-deleted-1037d287-c167-4691-9393-55be86ecbab2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:36.002 2 DEBUG nova.compute.manager [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:36.008 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433156.0080884, 9e354e27-d674-43c3-890b-caf8731cb827 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:36.008 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:36.010 2 DEBUG nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:36.015 2 INFO nova.virt.libvirt.driver [-] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Instance spawned successfully.#033[00m
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:36.016 2 DEBUG nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:36.034 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:36.043 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:36.046 2 DEBUG oslo_concurrency.processutils [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:36.087 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:36.092 2 DEBUG nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:36.093 2 DEBUG nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:36.094 2 DEBUG nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:36.095 2 DEBUG nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:36.096 2 DEBUG nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:36.096 2 DEBUG nova.virt.libvirt.driver [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:36.151 2 INFO nova.compute.manager [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Took 8.79 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:36.151 2 DEBUG nova.compute.manager [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:36.224 2 INFO nova.compute.manager [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Took 10.09 seconds to build instance.#033[00m
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:36.239 2 DEBUG oslo_concurrency.lockutils [None req-b75dd271-ad48-41a5-9599-347edffd512e e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.181s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:36.344 2 DEBUG nova.network.neutron [-] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:36.369 2 INFO nova.compute.manager [-] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Took 1.35 seconds to deallocate network for instance.#033[00m
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:36.416 2 DEBUG oslo_concurrency.lockutils [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:12:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2005298739' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:36.490 2 DEBUG oslo_concurrency.processutils [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:36.495 2 DEBUG nova.compute.provider_tree [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:36.511 2 DEBUG nova.scheduler.client.report [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:36.531 2 DEBUG oslo_concurrency.lockutils [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:36.534 2 DEBUG oslo_concurrency.lockutils [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.548s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:36.557 2 INFO nova.scheduler.client.report [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Deleted allocations for instance 16c1b8b8-cda9-45f9-994f-3f102eb85e1e#033[00m
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:36.604 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Updating instance_info_cache with network_info: [{"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:36.623 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:36.624 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:36.624 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:36.624 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:36.625 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:36.670 2 DEBUG oslo_concurrency.lockutils [None req-310ff42a-3315-4908-ab7d-acf37666d217 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "16c1b8b8-cda9-45f9-994f-3f102eb85e1e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:36 np0005486808 nova_compute[259627]: 2025-10-14 09:12:36.691 2 DEBUG oslo_concurrency.processutils [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:12:37 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1640992374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:12:37 np0005486808 nova_compute[259627]: 2025-10-14 09:12:37.226 2 DEBUG oslo_concurrency.processutils [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:37 np0005486808 nova_compute[259627]: 2025-10-14 09:12:37.231 2 DEBUG nova.compute.provider_tree [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:12:37 np0005486808 nova_compute[259627]: 2025-10-14 09:12:37.250 2 DEBUG nova.scheduler.client.report [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:12:37 np0005486808 nova_compute[259627]: 2025-10-14 09:12:37.279 2 DEBUG oslo_concurrency.lockutils [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:37 np0005486808 nova_compute[259627]: 2025-10-14 09:12:37.282 2 DEBUG oslo_concurrency.lockutils [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.865s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:37 np0005486808 nova_compute[259627]: 2025-10-14 09:12:37.332 2 INFO nova.scheduler.client.report [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Deleted allocations for instance ab89bfba-67b2-4767-90f2-7ef5dab476c0#033[00m
Oct 14 05:12:37 np0005486808 nova_compute[259627]: 2025-10-14 09:12:37.394 2 DEBUG oslo_concurrency.processutils [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:37 np0005486808 nova_compute[259627]: 2025-10-14 09:12:37.445 2 DEBUG oslo_concurrency.lockutils [None req-76ab26df-adf6-4e51-ab0c-ae107f5f7b7b f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "ab89bfba-67b2-4767-90f2-7ef5dab476c0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.437s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:12:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:12:37 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1967470313' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:12:37 np0005486808 nova_compute[259627]: 2025-10-14 09:12:37.880 2 DEBUG oslo_concurrency.processutils [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:37 np0005486808 nova_compute[259627]: 2025-10-14 09:12:37.885 2 DEBUG nova.compute.provider_tree [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:12:37 np0005486808 nova_compute[259627]: 2025-10-14 09:12:37.900 2 DEBUG nova.scheduler.client.report [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:12:37 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1745: 305 pgs: 305 active+clean; 167 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 4.0 MiB/s wr, 427 op/s
Oct 14 05:12:37 np0005486808 nova_compute[259627]: 2025-10-14 09:12:37.920 2 DEBUG oslo_concurrency.lockutils [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:37 np0005486808 nova_compute[259627]: 2025-10-14 09:12:37.947 2 INFO nova.scheduler.client.report [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Deleted allocations for instance 4774788b-1dc2-40c6-87d0-db4e3f54a609#033[00m
Oct 14 05:12:37 np0005486808 nova_compute[259627]: 2025-10-14 09:12:37.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:37 np0005486808 nova_compute[259627]: 2025-10-14 09:12:37.988 2 DEBUG nova.compute.manager [req-6c227c34-b233-447f-9a0c-fbb887903fbd req-7ca744c6-19c3-49d7-8aee-e6e83e72cbb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Received event network-vif-deleted-26bcc700-59c6-4e79-904d-988cd11152c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:12:38 np0005486808 nova_compute[259627]: 2025-10-14 09:12:38.007 2 DEBUG oslo_concurrency.lockutils [None req-96b38244-981d-4604-804d-e512c13f72bb 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "4774788b-1dc2-40c6-87d0-db4e3f54a609" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.184s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:39 np0005486808 nova_compute[259627]: 2025-10-14 09:12:39.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:39 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:39Z|00935|binding|INFO|Releasing lport 61fe5571-a8eb-446a-8c4c-1f6f6758b146 from this chassis (sb_readonly=0)
Oct 14 05:12:39 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:39Z|00936|binding|INFO|Releasing lport abbcb164-8856-47e0-a7b9-984d66daedac from this chassis (sb_readonly=0)
Oct 14 05:12:39 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1746: 305 pgs: 305 active+clean; 167 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 4.0 MiB/s wr, 427 op/s
Oct 14 05:12:39 np0005486808 nova_compute[259627]: 2025-10-14 09:12:39.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:39 np0005486808 NetworkManager[44885]: <info>  [1760433159.9233] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/381)
Oct 14 05:12:39 np0005486808 NetworkManager[44885]: <info>  [1760433159.9242] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/382)
Oct 14 05:12:39 np0005486808 nova_compute[259627]: 2025-10-14 09:12:39.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:12:40 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:40Z|00937|binding|INFO|Releasing lport 61fe5571-a8eb-446a-8c4c-1f6f6758b146 from this chassis (sb_readonly=0)
Oct 14 05:12:40 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:40Z|00938|binding|INFO|Releasing lport abbcb164-8856-47e0-a7b9-984d66daedac from this chassis (sb_readonly=0)
Oct 14 05:12:40 np0005486808 nova_compute[259627]: 2025-10-14 09:12:40.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:40 np0005486808 nova_compute[259627]: 2025-10-14 09:12:40.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:40 np0005486808 nova_compute[259627]: 2025-10-14 09:12:40.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:40 np0005486808 nova_compute[259627]: 2025-10-14 09:12:40.511 2 DEBUG nova.compute.manager [req-37f9743a-215e-4eb6-8a0b-53aadf39db3e req-f18eaf23-2901-4adf-a11d-020df9eaaecb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Received event network-changed-ce4eb1a6-2221-4519-98fa-44a39da77b71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:12:40 np0005486808 nova_compute[259627]: 2025-10-14 09:12:40.512 2 DEBUG nova.compute.manager [req-37f9743a-215e-4eb6-8a0b-53aadf39db3e req-f18eaf23-2901-4adf-a11d-020df9eaaecb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Refreshing instance network info cache due to event network-changed-ce4eb1a6-2221-4519-98fa-44a39da77b71. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:12:40 np0005486808 nova_compute[259627]: 2025-10-14 09:12:40.512 2 DEBUG oslo_concurrency.lockutils [req-37f9743a-215e-4eb6-8a0b-53aadf39db3e req-f18eaf23-2901-4adf-a11d-020df9eaaecb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-9e354e27-d674-43c3-890b-caf8731cb827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:12:40 np0005486808 nova_compute[259627]: 2025-10-14 09:12:40.513 2 DEBUG oslo_concurrency.lockutils [req-37f9743a-215e-4eb6-8a0b-53aadf39db3e req-f18eaf23-2901-4adf-a11d-020df9eaaecb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-9e354e27-d674-43c3-890b-caf8731cb827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:12:40 np0005486808 nova_compute[259627]: 2025-10-14 09:12:40.513 2 DEBUG nova.network.neutron [req-37f9743a-215e-4eb6-8a0b-53aadf39db3e req-f18eaf23-2901-4adf-a11d-020df9eaaecb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Refreshing network info cache for port ce4eb1a6-2221-4519-98fa-44a39da77b71 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:12:40 np0005486808 nova_compute[259627]: 2025-10-14 09:12:40.832 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "4ad76124-48eb-467e-9a6f-951235efdb35" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:40 np0005486808 nova_compute[259627]: 2025-10-14 09:12:40.834 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "4ad76124-48eb-467e-9a6f-951235efdb35" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:40 np0005486808 nova_compute[259627]: 2025-10-14 09:12:40.861 2 DEBUG nova.compute.manager [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:12:40 np0005486808 nova_compute[259627]: 2025-10-14 09:12:40.879 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "bf38daab-2994-41c4-a44f-91e466acf68e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:40 np0005486808 nova_compute[259627]: 2025-10-14 09:12:40.880 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "bf38daab-2994-41c4-a44f-91e466acf68e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:40 np0005486808 nova_compute[259627]: 2025-10-14 09:12:40.912 2 DEBUG nova.compute.manager [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:12:40 np0005486808 nova_compute[259627]: 2025-10-14 09:12:40.947 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:40 np0005486808 nova_compute[259627]: 2025-10-14 09:12:40.948 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:40 np0005486808 nova_compute[259627]: 2025-10-14 09:12:40.948 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433145.9477327, 70e3c250-cd38-4718-9a7f-0fbf7bf471fe => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:12:40 np0005486808 nova_compute[259627]: 2025-10-14 09:12:40.949 2 INFO nova.compute.manager [-] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:12:40 np0005486808 nova_compute[259627]: 2025-10-14 09:12:40.966 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:12:40 np0005486808 nova_compute[259627]: 2025-10-14 09:12:40.966 2 INFO nova.compute.claims [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:12:40 np0005486808 nova_compute[259627]: 2025-10-14 09:12:40.977 2 DEBUG nova.compute.manager [None req-79f3e3f6-bb8d-455f-b9b3-172eba258e12 - - - - - -] [instance: 70e3c250-cd38-4718-9a7f-0fbf7bf471fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:12:40 np0005486808 nova_compute[259627]: 2025-10-14 09:12:40.997 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:41 np0005486808 nova_compute[259627]: 2025-10-14 09:12:41.135 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:12:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:12:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:12:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:12:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:12:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:12:41 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 84c455ae-9c4b-449c-88bb-d83c202c8cc1 does not exist
Oct 14 05:12:41 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 0a13b36f-297f-48fa-8e9d-d5fd40f44bab does not exist
Oct 14 05:12:41 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 3d7d6c5c-e3ed-4899-a46f-c19195fa3f00 does not exist
Oct 14 05:12:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:12:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:12:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:12:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:12:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:12:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:12:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:12:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/790926568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:12:41 np0005486808 nova_compute[259627]: 2025-10-14 09:12:41.688 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:12:41 np0005486808 nova_compute[259627]: 2025-10-14 09:12:41.698 2 DEBUG nova.compute.provider_tree [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 05:12:41 np0005486808 nova_compute[259627]: 2025-10-14 09:12:41.722 2 DEBUG nova.scheduler.client.report [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 05:12:41 np0005486808 nova_compute[259627]: 2025-10-14 09:12:41.751 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:12:41 np0005486808 nova_compute[259627]: 2025-10-14 09:12:41.752 2 DEBUG nova.compute.manager [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 05:12:41 np0005486808 nova_compute[259627]: 2025-10-14 09:12:41.757 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:12:41 np0005486808 nova_compute[259627]: 2025-10-14 09:12:41.771 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 05:12:41 np0005486808 nova_compute[259627]: 2025-10-14 09:12:41.772 2 INFO nova.compute.claims [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Claim successful on node compute-0.ctlplane.example.com
Oct 14 05:12:41 np0005486808 nova_compute[259627]: 2025-10-14 09:12:41.839 2 DEBUG nova.compute.manager [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 05:12:41 np0005486808 nova_compute[259627]: 2025-10-14 09:12:41.840 2 DEBUG nova.network.neutron [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 05:12:41 np0005486808 nova_compute[259627]: 2025-10-14 09:12:41.859 2 INFO nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 05:12:41 np0005486808 nova_compute[259627]: 2025-10-14 09:12:41.891 2 DEBUG nova.compute.manager [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 05:12:41 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1747: 305 pgs: 305 active+clean; 167 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 4.0 MiB/s wr, 491 op/s
Oct 14 05:12:41 np0005486808 nova_compute[259627]: 2025-10-14 09:12:41.964 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:12:42 np0005486808 nova_compute[259627]: 2025-10-14 09:12:42.018 2 DEBUG nova.compute.manager [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 05:12:42 np0005486808 nova_compute[259627]: 2025-10-14 09:12:42.021 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 05:12:42 np0005486808 nova_compute[259627]: 2025-10-14 09:12:42.021 2 INFO nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Creating image(s)
Oct 14 05:12:42 np0005486808 nova_compute[259627]: 2025-10-14 09:12:42.051 2 DEBUG nova.storage.rbd_utils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image 4ad76124-48eb-467e-9a6f-951235efdb35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:12:42 np0005486808 nova_compute[259627]: 2025-10-14 09:12:42.083 2 DEBUG nova.storage.rbd_utils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image 4ad76124-48eb-467e-9a6f-951235efdb35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:12:42 np0005486808 nova_compute[259627]: 2025-10-14 09:12:42.119 2 DEBUG nova.storage.rbd_utils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image 4ad76124-48eb-467e-9a6f-951235efdb35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:12:42 np0005486808 nova_compute[259627]: 2025-10-14 09:12:42.123 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:12:42 np0005486808 podman[349637]: 2025-10-14 09:12:42.162790305 +0000 UTC m=+0.064411117 container create 138caf6f8485a3bf99af2e05c2b8c030424b7c71a610d9715b9a4941a490ea84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_jemison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:12:42 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:12:42 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:12:42 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:12:42 np0005486808 systemd[1]: Started libpod-conmon-138caf6f8485a3bf99af2e05c2b8c030424b7c71a610d9715b9a4941a490ea84.scope.
Oct 14 05:12:42 np0005486808 podman[349637]: 2025-10-14 09:12:42.132657673 +0000 UTC m=+0.034278525 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:12:42 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:12:42 np0005486808 nova_compute[259627]: 2025-10-14 09:12:42.229 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:12:42 np0005486808 nova_compute[259627]: 2025-10-14 09:12:42.234 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:12:42 np0005486808 nova_compute[259627]: 2025-10-14 09:12:42.235 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:12:42 np0005486808 nova_compute[259627]: 2025-10-14 09:12:42.235 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:12:42 np0005486808 podman[349637]: 2025-10-14 09:12:42.248768002 +0000 UTC m=+0.150388854 container init 138caf6f8485a3bf99af2e05c2b8c030424b7c71a610d9715b9a4941a490ea84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_jemison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:12:42 np0005486808 podman[349637]: 2025-10-14 09:12:42.256445021 +0000 UTC m=+0.158065823 container start 138caf6f8485a3bf99af2e05c2b8c030424b7c71a610d9715b9a4941a490ea84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:12:42 np0005486808 nova_compute[259627]: 2025-10-14 09:12:42.257 2 DEBUG nova.storage.rbd_utils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image 4ad76124-48eb-467e-9a6f-951235efdb35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:12:42 np0005486808 podman[349637]: 2025-10-14 09:12:42.260229935 +0000 UTC m=+0.161850787 container attach 138caf6f8485a3bf99af2e05c2b8c030424b7c71a610d9715b9a4941a490ea84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 14 05:12:42 np0005486808 nova_compute[259627]: 2025-10-14 09:12:42.261 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 4ad76124-48eb-467e-9a6f-951235efdb35_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:12:42 np0005486808 wonderful_jemison[349689]: 167 167
Oct 14 05:12:42 np0005486808 systemd[1]: libpod-138caf6f8485a3bf99af2e05c2b8c030424b7c71a610d9715b9a4941a490ea84.scope: Deactivated successfully.
Oct 14 05:12:42 np0005486808 conmon[349689]: conmon 138caf6f8485a3bf99af <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-138caf6f8485a3bf99af2e05c2b8c030424b7c71a610d9715b9a4941a490ea84.scope/container/memory.events
Oct 14 05:12:42 np0005486808 podman[349637]: 2025-10-14 09:12:42.263975057 +0000 UTC m=+0.165595859 container died 138caf6f8485a3bf99af2e05c2b8c030424b7c71a610d9715b9a4941a490ea84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 14 05:12:42 np0005486808 systemd[1]: var-lib-containers-storage-overlay-76a8f15d405d8dfacfeabbb6217c37d1a9383af6c7ddb984c0f6518cd5fc7919-merged.mount: Deactivated successfully.
Oct 14 05:12:42 np0005486808 podman[349637]: 2025-10-14 09:12:42.30188049 +0000 UTC m=+0.203501292 container remove 138caf6f8485a3bf99af2e05c2b8c030424b7c71a610d9715b9a4941a490ea84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_jemison, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:12:42 np0005486808 systemd[1]: libpod-conmon-138caf6f8485a3bf99af2e05c2b8c030424b7c71a610d9715b9a4941a490ea84.scope: Deactivated successfully.
Oct 14 05:12:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:12:42 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2606086695' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:12:42 np0005486808 nova_compute[259627]: 2025-10-14 09:12:42.497 2 DEBUG nova.policy [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f50b95774c384c5a8414b197ed5d7b82', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5a057db932754d6eae91f0d2f359f1ff', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 05:12:42 np0005486808 nova_compute[259627]: 2025-10-14 09:12:42.504 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:12:42 np0005486808 podman[349751]: 2025-10-14 09:12:42.507934945 +0000 UTC m=+0.052771111 container create 56db4ef906d513715ba626baae083910fc18ce8ecce5fe6e25d2b3810320cba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kare, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 14 05:12:42 np0005486808 nova_compute[259627]: 2025-10-14 09:12:42.518 2 DEBUG nova.compute.provider_tree [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 05:12:42 np0005486808 nova_compute[259627]: 2025-10-14 09:12:42.562 2 DEBUG nova.scheduler.client.report [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 05:12:42 np0005486808 systemd[1]: Started libpod-conmon-56db4ef906d513715ba626baae083910fc18ce8ecce5fe6e25d2b3810320cba2.scope.
Oct 14 05:12:42 np0005486808 podman[349751]: 2025-10-14 09:12:42.476748767 +0000 UTC m=+0.021584953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:12:42 np0005486808 nova_compute[259627]: 2025-10-14 09:12:42.586 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:12:42 np0005486808 nova_compute[259627]: 2025-10-14 09:12:42.587 2 DEBUG nova.compute.manager [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 05:12:42 np0005486808 nova_compute[259627]: 2025-10-14 09:12:42.590 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 4ad76124-48eb-467e-9a6f-951235efdb35_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:12:42 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:12:42 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b650fa13359511ca08adb33818554ff31dcee4a110e63a2f118798fc551917cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:12:42 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b650fa13359511ca08adb33818554ff31dcee4a110e63a2f118798fc551917cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:12:42 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b650fa13359511ca08adb33818554ff31dcee4a110e63a2f118798fc551917cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:12:42 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b650fa13359511ca08adb33818554ff31dcee4a110e63a2f118798fc551917cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:12:42 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b650fa13359511ca08adb33818554ff31dcee4a110e63a2f118798fc551917cb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:12:42 np0005486808 podman[349751]: 2025-10-14 09:12:42.619040881 +0000 UTC m=+0.163877097 container init 56db4ef906d513715ba626baae083910fc18ce8ecce5fe6e25d2b3810320cba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 14 05:12:42 np0005486808 podman[349751]: 2025-10-14 09:12:42.626511115 +0000 UTC m=+0.171347301 container start 56db4ef906d513715ba626baae083910fc18ce8ecce5fe6e25d2b3810320cba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:12:42 np0005486808 podman[349751]: 2025-10-14 09:12:42.630884313 +0000 UTC m=+0.175720519 container attach 56db4ef906d513715ba626baae083910fc18ce8ecce5fe6e25d2b3810320cba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kare, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 14 05:12:42 np0005486808 nova_compute[259627]: 2025-10-14 09:12:42.643 2 DEBUG nova.compute.manager [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 05:12:42 np0005486808 nova_compute[259627]: 2025-10-14 09:12:42.644 2 DEBUG nova.network.neutron [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 05:12:42 np0005486808 nova_compute[259627]: 2025-10-14 09:12:42.674 2 INFO nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 05:12:42 np0005486808 nova_compute[259627]: 2025-10-14 09:12:42.681 2 DEBUG nova.storage.rbd_utils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] resizing rbd image 4ad76124-48eb-467e-9a6f-951235efdb35_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 05:12:42 np0005486808 nova_compute[259627]: 2025-10-14 09:12:42.704 2 DEBUG nova.compute.manager [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 05:12:42 np0005486808 nova_compute[259627]: 2025-10-14 09:12:42.764 2 DEBUG nova.objects.instance [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lazy-loading 'migration_context' on Instance uuid 4ad76124-48eb-467e-9a6f-951235efdb35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:12:42 np0005486808 nova_compute[259627]: 2025-10-14 09:12:42.786 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 05:12:42 np0005486808 nova_compute[259627]: 2025-10-14 09:12:42.786 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Ensure instance console log exists: /var/lib/nova/instances/4ad76124-48eb-467e-9a6f-951235efdb35/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 05:12:42 np0005486808 nova_compute[259627]: 2025-10-14 09:12:42.787 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:12:42 np0005486808 nova_compute[259627]: 2025-10-14 09:12:42.787 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:12:42 np0005486808 nova_compute[259627]: 2025-10-14 09:12:42.787 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:12:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:12:42 np0005486808 nova_compute[259627]: 2025-10-14 09:12:42.804 2 DEBUG nova.compute.manager [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 05:12:42 np0005486808 nova_compute[259627]: 2025-10-14 09:12:42.805 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 05:12:42 np0005486808 nova_compute[259627]: 2025-10-14 09:12:42.806 2 INFO nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Creating image(s)
Oct 14 05:12:42 np0005486808 nova_compute[259627]: 2025-10-14 09:12:42.821 2 DEBUG nova.storage.rbd_utils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image bf38daab-2994-41c4-a44f-91e466acf68e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:12:42 np0005486808 nova_compute[259627]: 2025-10-14 09:12:42.844 2 DEBUG nova.storage.rbd_utils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image bf38daab-2994-41c4-a44f-91e466acf68e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:12:42 np0005486808 nova_compute[259627]: 2025-10-14 09:12:42.863 2 DEBUG nova.storage.rbd_utils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image bf38daab-2994-41c4-a44f-91e466acf68e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:12:42 np0005486808 nova_compute[259627]: 2025-10-14 09:12:42.865 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:12:42 np0005486808 nova_compute[259627]: 2025-10-14 09:12:42.932 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:12:42 np0005486808 nova_compute[259627]: 2025-10-14 09:12:42.933 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:12:42 np0005486808 nova_compute[259627]: 2025-10-14 09:12:42.933 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:12:42 np0005486808 nova_compute[259627]: 2025-10-14 09:12:42.934 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:12:42 np0005486808 nova_compute[259627]: 2025-10-14 09:12:42.955 2 DEBUG nova.storage.rbd_utils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image bf38daab-2994-41c4-a44f-91e466acf68e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:12:42 np0005486808 nova_compute[259627]: 2025-10-14 09:12:42.958 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 bf38daab-2994-41c4-a44f-91e466acf68e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:12:42 np0005486808 nova_compute[259627]: 2025-10-14 09:12:42.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:12:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:12:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:12:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:12:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:12:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001106605497493 of space, bias 1.0, pg target 0.3319816492479 quantized to 32 (current 32)
Oct 14 05:12:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:12:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:12:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:12:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:12:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:12:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 14 05:12:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:12:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:12:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:12:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:12:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:12:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:12:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:12:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:12:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:12:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:12:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:12:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:12:43 np0005486808 nova_compute[259627]: 2025-10-14 09:12:43.224 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 bf38daab-2994-41c4-a44f-91e466acf68e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.266s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:12:43 np0005486808 nova_compute[259627]: 2025-10-14 09:12:43.316 2 DEBUG nova.storage.rbd_utils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] resizing rbd image bf38daab-2994-41c4-a44f-91e466acf68e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 05:12:43 np0005486808 nova_compute[259627]: 2025-10-14 09:12:43.363 2 DEBUG nova.network.neutron [req-37f9743a-215e-4eb6-8a0b-53aadf39db3e req-f18eaf23-2901-4adf-a11d-020df9eaaecb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Updated VIF entry in instance network info cache for port ce4eb1a6-2221-4519-98fa-44a39da77b71. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 05:12:43 np0005486808 nova_compute[259627]: 2025-10-14 09:12:43.364 2 DEBUG nova.network.neutron [req-37f9743a-215e-4eb6-8a0b-53aadf39db3e req-f18eaf23-2901-4adf-a11d-020df9eaaecb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Updating instance_info_cache with network_info: [{"id": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "address": "fa:16:3e:7f:9c:bd", "network": {"id": "7cb8e394-ebca-4b27-8174-62c6b6f3a7da", "bridge": "br-int", "label": "tempest-network-smoke--1640681957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce4eb1a6-22", "ovs_interfaceid": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 05:12:43 np0005486808 nova_compute[259627]: 2025-10-14 09:12:43.439 2 DEBUG nova.policy [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f50b95774c384c5a8414b197ed5d7b82', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5a057db932754d6eae91f0d2f359f1ff', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 05:12:43 np0005486808 nova_compute[259627]: 2025-10-14 09:12:43.447 2 DEBUG oslo_concurrency.lockutils [req-37f9743a-215e-4eb6-8a0b-53aadf39db3e req-f18eaf23-2901-4adf-a11d-020df9eaaecb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-9e354e27-d674-43c3-890b-caf8731cb827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 05:12:43 np0005486808 nova_compute[259627]: 2025-10-14 09:12:43.455 2 DEBUG nova.objects.instance [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lazy-loading 'migration_context' on Instance uuid bf38daab-2994-41c4-a44f-91e466acf68e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:12:43 np0005486808 nova_compute[259627]: 2025-10-14 09:12:43.474 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 05:12:43 np0005486808 nova_compute[259627]: 2025-10-14 09:12:43.474 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Ensure instance console log exists: /var/lib/nova/instances/bf38daab-2994-41c4-a44f-91e466acf68e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 05:12:43 np0005486808 nova_compute[259627]: 2025-10-14 09:12:43.476 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:12:43 np0005486808 nova_compute[259627]: 2025-10-14 09:12:43.477 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:12:43 np0005486808 nova_compute[259627]: 2025-10-14 09:12:43.478 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:12:43 np0005486808 lucid_kare[349770]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:12:43 np0005486808 lucid_kare[349770]: --> relative data size: 1.0
Oct 14 05:12:43 np0005486808 lucid_kare[349770]: --> All data devices are unavailable
Oct 14 05:12:43 np0005486808 systemd[1]: libpod-56db4ef906d513715ba626baae083910fc18ce8ecce5fe6e25d2b3810320cba2.scope: Deactivated successfully.
Oct 14 05:12:43 np0005486808 podman[349751]: 2025-10-14 09:12:43.703651932 +0000 UTC m=+1.248488098 container died 56db4ef906d513715ba626baae083910fc18ce8ecce5fe6e25d2b3810320cba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kare, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 14 05:12:43 np0005486808 systemd[1]: var-lib-containers-storage-overlay-b650fa13359511ca08adb33818554ff31dcee4a110e63a2f118798fc551917cb-merged.mount: Deactivated successfully.
Oct 14 05:12:43 np0005486808 podman[349751]: 2025-10-14 09:12:43.758359779 +0000 UTC m=+1.303195945 container remove 56db4ef906d513715ba626baae083910fc18ce8ecce5fe6e25d2b3810320cba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kare, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 14 05:12:43 np0005486808 systemd[1]: libpod-conmon-56db4ef906d513715ba626baae083910fc18ce8ecce5fe6e25d2b3810320cba2.scope: Deactivated successfully.
Oct 14 05:12:43 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1748: 305 pgs: 305 active+clean; 167 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 119 KiB/s wr, 290 op/s
Oct 14 05:12:44 np0005486808 nova_compute[259627]: 2025-10-14 09:12:44.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:12:44 np0005486808 nova_compute[259627]: 2025-10-14 09:12:44.598 2 DEBUG nova.network.neutron [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Successfully created port: 66a0c3b8-73ab-490e-a3d4-06827c574cb6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 05:12:44 np0005486808 podman[350191]: 2025-10-14 09:12:44.663714816 +0000 UTC m=+0.061799143 container create 04aa0f4913e27e6bca232dd343a17d0e79c94183755f5cbe4c96e826834bab5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:12:44 np0005486808 systemd[1]: Started libpod-conmon-04aa0f4913e27e6bca232dd343a17d0e79c94183755f5cbe4c96e826834bab5a.scope.
Oct 14 05:12:44 np0005486808 podman[350191]: 2025-10-14 09:12:44.633133663 +0000 UTC m=+0.031218040 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:12:44 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:12:44 np0005486808 podman[350191]: 2025-10-14 09:12:44.752490502 +0000 UTC m=+0.150574829 container init 04aa0f4913e27e6bca232dd343a17d0e79c94183755f5cbe4c96e826834bab5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_panini, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:12:44 np0005486808 podman[350191]: 2025-10-14 09:12:44.759787422 +0000 UTC m=+0.157871749 container start 04aa0f4913e27e6bca232dd343a17d0e79c94183755f5cbe4c96e826834bab5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_panini, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:12:44 np0005486808 podman[350191]: 2025-10-14 09:12:44.763635417 +0000 UTC m=+0.161719714 container attach 04aa0f4913e27e6bca232dd343a17d0e79c94183755f5cbe4c96e826834bab5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_panini, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 14 05:12:44 np0005486808 hopeful_panini[350207]: 167 167
Oct 14 05:12:44 np0005486808 systemd[1]: libpod-04aa0f4913e27e6bca232dd343a17d0e79c94183755f5cbe4c96e826834bab5a.scope: Deactivated successfully.
Oct 14 05:12:44 np0005486808 podman[350191]: 2025-10-14 09:12:44.768700181 +0000 UTC m=+0.166784508 container died 04aa0f4913e27e6bca232dd343a17d0e79c94183755f5cbe4c96e826834bab5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_panini, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 14 05:12:44 np0005486808 systemd[1]: var-lib-containers-storage-overlay-ebec7d2f0e69c7c43d133c5abe68f8e2c950f19bf7a9365e76cb803411609dc5-merged.mount: Deactivated successfully.
Oct 14 05:12:44 np0005486808 podman[350191]: 2025-10-14 09:12:44.819262947 +0000 UTC m=+0.217347244 container remove 04aa0f4913e27e6bca232dd343a17d0e79c94183755f5cbe4c96e826834bab5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_panini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 14 05:12:44 np0005486808 systemd[1]: libpod-conmon-04aa0f4913e27e6bca232dd343a17d0e79c94183755f5cbe4c96e826834bab5a.scope: Deactivated successfully.
Oct 14 05:12:45 np0005486808 podman[350231]: 2025-10-14 09:12:45.069673503 +0000 UTC m=+0.054222006 container create 6c28cfec94b50b7820e4b36691d5a42eeb9834313ccddd73fd6e2e1c6912b7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:12:45 np0005486808 systemd[1]: Started libpod-conmon-6c28cfec94b50b7820e4b36691d5a42eeb9834313ccddd73fd6e2e1c6912b7f2.scope.
Oct 14 05:12:45 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:12:45 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f52b86f6300ba98eb7fb4e7de16f1e3c1a82480349f9ae4ebbe281594ab33d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:12:45 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f52b86f6300ba98eb7fb4e7de16f1e3c1a82480349f9ae4ebbe281594ab33d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:12:45 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f52b86f6300ba98eb7fb4e7de16f1e3c1a82480349f9ae4ebbe281594ab33d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:12:45 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f52b86f6300ba98eb7fb4e7de16f1e3c1a82480349f9ae4ebbe281594ab33d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:12:45 np0005486808 podman[350231]: 2025-10-14 09:12:45.049484126 +0000 UTC m=+0.034032649 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:12:45 np0005486808 podman[350231]: 2025-10-14 09:12:45.163817242 +0000 UTC m=+0.148365795 container init 6c28cfec94b50b7820e4b36691d5a42eeb9834313ccddd73fd6e2e1c6912b7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 14 05:12:45 np0005486808 podman[350231]: 2025-10-14 09:12:45.175406377 +0000 UTC m=+0.159954920 container start 6c28cfec94b50b7820e4b36691d5a42eeb9834313ccddd73fd6e2e1c6912b7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:12:45 np0005486808 podman[350231]: 2025-10-14 09:12:45.179504508 +0000 UTC m=+0.164053041 container attach 6c28cfec94b50b7820e4b36691d5a42eeb9834313ccddd73fd6e2e1c6912b7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:12:45 np0005486808 nova_compute[259627]: 2025-10-14 09:12:45.528 2 DEBUG nova.network.neutron [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Successfully created port: 59075c43-66a8-4a9c-a693-31f83575b355 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]: {
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:    "0": [
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:        {
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:            "devices": [
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:                "/dev/loop3"
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:            ],
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:            "lv_name": "ceph_lv0",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:            "lv_size": "21470642176",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:            "name": "ceph_lv0",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:            "tags": {
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:                "ceph.cluster_name": "ceph",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:                "ceph.crush_device_class": "",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:                "ceph.encrypted": "0",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:                "ceph.osd_id": "0",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:                "ceph.type": "block",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:                "ceph.vdo": "0"
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:            },
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:            "type": "block",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:            "vg_name": "ceph_vg0"
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:        }
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:    ],
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:    "1": [
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:        {
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:            "devices": [
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:                "/dev/loop4"
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:            ],
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:            "lv_name": "ceph_lv1",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:            "lv_size": "21470642176",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:            "name": "ceph_lv1",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:            "tags": {
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:                "ceph.cluster_name": "ceph",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:                "ceph.crush_device_class": "",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:                "ceph.encrypted": "0",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:                "ceph.osd_id": "1",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:                "ceph.type": "block",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:                "ceph.vdo": "0"
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:            },
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:            "type": "block",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:            "vg_name": "ceph_vg1"
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:        }
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:    ],
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:    "2": [
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:        {
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:            "devices": [
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:                "/dev/loop5"
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:            ],
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:            "lv_name": "ceph_lv2",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:            "lv_size": "21470642176",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:            "name": "ceph_lv2",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:            "tags": {
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:                "ceph.cluster_name": "ceph",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:                "ceph.crush_device_class": "",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:                "ceph.encrypted": "0",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:                "ceph.osd_id": "2",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:                "ceph.type": "block",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:                "ceph.vdo": "0"
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:            },
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:            "type": "block",
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:            "vg_name": "ceph_vg2"
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:        }
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]:    ]
Oct 14 05:12:45 np0005486808 pensive_goldstine[350248]: }
Oct 14 05:12:45 np0005486808 systemd[1]: libpod-6c28cfec94b50b7820e4b36691d5a42eeb9834313ccddd73fd6e2e1c6912b7f2.scope: Deactivated successfully.
Oct 14 05:12:45 np0005486808 podman[350231]: 2025-10-14 09:12:45.902948695 +0000 UTC m=+0.887497198 container died 6c28cfec94b50b7820e4b36691d5a42eeb9834313ccddd73fd6e2e1c6912b7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_goldstine, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:12:45 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1749: 305 pgs: 305 active+clean; 260 MiB data, 749 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 3.7 MiB/s wr, 344 op/s
Oct 14 05:12:45 np0005486808 systemd[1]: var-lib-containers-storage-overlay-4f52b86f6300ba98eb7fb4e7de16f1e3c1a82480349f9ae4ebbe281594ab33d1-merged.mount: Deactivated successfully.
Oct 14 05:12:45 np0005486808 podman[350231]: 2025-10-14 09:12:45.974283342 +0000 UTC m=+0.958831845 container remove 6c28cfec94b50b7820e4b36691d5a42eeb9834313ccddd73fd6e2e1c6912b7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 14 05:12:45 np0005486808 systemd[1]: libpod-conmon-6c28cfec94b50b7820e4b36691d5a42eeb9834313ccddd73fd6e2e1c6912b7f2.scope: Deactivated successfully.
Oct 14 05:12:46 np0005486808 podman[350258]: 2025-10-14 09:12:46.036789271 +0000 UTC m=+0.097050351 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 05:12:46 np0005486808 podman[350265]: 2025-10-14 09:12:46.053487222 +0000 UTC m=+0.106224577 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 14 05:12:46 np0005486808 nova_compute[259627]: 2025-10-14 09:12:46.664 2 DEBUG nova.network.neutron [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Successfully updated port: 66a0c3b8-73ab-490e-a3d4-06827c574cb6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 05:12:46 np0005486808 nova_compute[259627]: 2025-10-14 09:12:46.724 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "refresh_cache-4ad76124-48eb-467e-9a6f-951235efdb35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 05:12:46 np0005486808 nova_compute[259627]: 2025-10-14 09:12:46.724 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquired lock "refresh_cache-4ad76124-48eb-467e-9a6f-951235efdb35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 05:12:46 np0005486808 nova_compute[259627]: 2025-10-14 09:12:46.724 2 DEBUG nova.network.neutron [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 05:12:46 np0005486808 podman[350442]: 2025-10-14 09:12:46.729453119 +0000 UTC m=+0.058059651 container create 28750fe3c1480f286b28baa42f1646e9b537753289ebdaa641052361d9afe3a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 14 05:12:46 np0005486808 systemd[1]: Started libpod-conmon-28750fe3c1480f286b28baa42f1646e9b537753289ebdaa641052361d9afe3a9.scope.
Oct 14 05:12:46 np0005486808 podman[350442]: 2025-10-14 09:12:46.696920928 +0000 UTC m=+0.025527490 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:12:46 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:12:46 np0005486808 podman[350442]: 2025-10-14 09:12:46.815723823 +0000 UTC m=+0.144330375 container init 28750fe3c1480f286b28baa42f1646e9b537753289ebdaa641052361d9afe3a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Oct 14 05:12:46 np0005486808 podman[350442]: 2025-10-14 09:12:46.82329025 +0000 UTC m=+0.151896772 container start 28750fe3c1480f286b28baa42f1646e9b537753289ebdaa641052361d9afe3a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 14 05:12:46 np0005486808 podman[350442]: 2025-10-14 09:12:46.826515269 +0000 UTC m=+0.155121811 container attach 28750fe3c1480f286b28baa42f1646e9b537753289ebdaa641052361d9afe3a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:12:46 np0005486808 tender_wescoff[350458]: 167 167
Oct 14 05:12:46 np0005486808 systemd[1]: libpod-28750fe3c1480f286b28baa42f1646e9b537753289ebdaa641052361d9afe3a9.scope: Deactivated successfully.
Oct 14 05:12:46 np0005486808 conmon[350458]: conmon 28750fe3c1480f286b28 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-28750fe3c1480f286b28baa42f1646e9b537753289ebdaa641052361d9afe3a9.scope/container/memory.events
Oct 14 05:12:46 np0005486808 podman[350442]: 2025-10-14 09:12:46.831093582 +0000 UTC m=+0.159700124 container died 28750fe3c1480f286b28baa42f1646e9b537753289ebdaa641052361d9afe3a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wescoff, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:12:46 np0005486808 systemd[1]: var-lib-containers-storage-overlay-88b7d172d6a7f4633f91e8b1053ca53aa26841a3b3013715bf70c5c2da6e5e04-merged.mount: Deactivated successfully.
Oct 14 05:12:46 np0005486808 podman[350442]: 2025-10-14 09:12:46.86876518 +0000 UTC m=+0.197371702 container remove 28750fe3c1480f286b28baa42f1646e9b537753289ebdaa641052361d9afe3a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wescoff, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:12:46 np0005486808 systemd[1]: libpod-conmon-28750fe3c1480f286b28baa42f1646e9b537753289ebdaa641052361d9afe3a9.scope: Deactivated successfully.
Oct 14 05:12:47 np0005486808 podman[350482]: 2025-10-14 09:12:47.046447756 +0000 UTC m=+0.044648231 container create dab42b37ac013b5b84eb46d40f07160b97f955d75bd7796540bf0879829f5a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cray, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:12:47 np0005486808 nova_compute[259627]: 2025-10-14 09:12:47.086 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433152.0860417, 16c1b8b8-cda9-45f9-994f-3f102eb85e1e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:12:47 np0005486808 nova_compute[259627]: 2025-10-14 09:12:47.087 2 INFO nova.compute.manager [-] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] VM Stopped (Lifecycle Event)
Oct 14 05:12:47 np0005486808 systemd[1]: Started libpod-conmon-dab42b37ac013b5b84eb46d40f07160b97f955d75bd7796540bf0879829f5a71.scope.
Oct 14 05:12:47 np0005486808 nova_compute[259627]: 2025-10-14 09:12:47.121 2 DEBUG nova.compute.manager [None req-fa3a1d4a-35d2-41ff-9850-2dba7124a0ac - - - - - -] [instance: 16c1b8b8-cda9-45f9-994f-3f102eb85e1e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:12:47 np0005486808 podman[350482]: 2025-10-14 09:12:47.028461723 +0000 UTC m=+0.026662228 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:12:47 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:12:47 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7148b960e77d113014da8121cd7a7e3394799cd627fb9535d1fc023ed754238/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:12:47 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7148b960e77d113014da8121cd7a7e3394799cd627fb9535d1fc023ed754238/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:12:47 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7148b960e77d113014da8121cd7a7e3394799cd627fb9535d1fc023ed754238/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:12:47 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7148b960e77d113014da8121cd7a7e3394799cd627fb9535d1fc023ed754238/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:12:47 np0005486808 podman[350482]: 2025-10-14 09:12:47.16194133 +0000 UTC m=+0.160141825 container init dab42b37ac013b5b84eb46d40f07160b97f955d75bd7796540bf0879829f5a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cray, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:12:47 np0005486808 podman[350482]: 2025-10-14 09:12:47.168794949 +0000 UTC m=+0.166995424 container start dab42b37ac013b5b84eb46d40f07160b97f955d75bd7796540bf0879829f5a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cray, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 14 05:12:47 np0005486808 podman[350482]: 2025-10-14 09:12:47.171847274 +0000 UTC m=+0.170047849 container attach dab42b37ac013b5b84eb46d40f07160b97f955d75bd7796540bf0879829f5a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cray, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:12:47 np0005486808 nova_compute[259627]: 2025-10-14 09:12:47.249 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433152.2486112, ab89bfba-67b2-4767-90f2-7ef5dab476c0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:12:47 np0005486808 nova_compute[259627]: 2025-10-14 09:12:47.250 2 INFO nova.compute.manager [-] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] VM Stopped (Lifecycle Event)
Oct 14 05:12:47 np0005486808 nova_compute[259627]: 2025-10-14 09:12:47.275 2 DEBUG nova.compute.manager [None req-8ecfc52c-3e2f-4dbd-aa92-38631fdd99cf - - - - - -] [instance: ab89bfba-67b2-4767-90f2-7ef5dab476c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:12:47 np0005486808 nova_compute[259627]: 2025-10-14 09:12:47.284 2 DEBUG nova.network.neutron [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 05:12:47 np0005486808 nova_compute[259627]: 2025-10-14 09:12:47.349 2 DEBUG nova.compute.manager [req-1d82724b-92de-4952-bc77-49a16597735c req-fe665076-85f7-4e9e-99b9-18c264d855db 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Received event network-changed-66a0c3b8-73ab-490e-a3d4-06827c574cb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:12:47 np0005486808 nova_compute[259627]: 2025-10-14 09:12:47.349 2 DEBUG nova.compute.manager [req-1d82724b-92de-4952-bc77-49a16597735c req-fe665076-85f7-4e9e-99b9-18c264d855db 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Refreshing instance network info cache due to event network-changed-66a0c3b8-73ab-490e-a3d4-06827c574cb6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 05:12:47 np0005486808 nova_compute[259627]: 2025-10-14 09:12:47.349 2 DEBUG oslo_concurrency.lockutils [req-1d82724b-92de-4952-bc77-49a16597735c req-fe665076-85f7-4e9e-99b9-18c264d855db 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-4ad76124-48eb-467e-9a6f-951235efdb35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 05:12:47 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:47Z|00095|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7f:9c:bd 10.100.0.7
Oct 14 05:12:47 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:47Z|00096|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7f:9c:bd 10.100.0.7
Oct 14 05:12:47 np0005486808 nova_compute[259627]: 2025-10-14 09:12:47.652 2 DEBUG nova.network.neutron [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Successfully updated port: 59075c43-66a8-4a9c-a693-31f83575b355 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:12:47 np0005486808 nova_compute[259627]: 2025-10-14 09:12:47.670 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "refresh_cache-bf38daab-2994-41c4-a44f-91e466acf68e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:12:47 np0005486808 nova_compute[259627]: 2025-10-14 09:12:47.671 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquired lock "refresh_cache-bf38daab-2994-41c4-a44f-91e466acf68e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:12:47 np0005486808 nova_compute[259627]: 2025-10-14 09:12:47.671 2 DEBUG nova.network.neutron [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:12:47 np0005486808 nova_compute[259627]: 2025-10-14 09:12:47.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:12:47 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1750: 305 pgs: 305 active+clean; 260 MiB data, 749 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 118 op/s
Oct 14 05:12:47 np0005486808 nova_compute[259627]: 2025-10-14 09:12:47.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:48 np0005486808 beautiful_cray[350499]: {
Oct 14 05:12:48 np0005486808 beautiful_cray[350499]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:12:48 np0005486808 beautiful_cray[350499]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:12:48 np0005486808 beautiful_cray[350499]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:12:48 np0005486808 beautiful_cray[350499]:        "osd_id": 2,
Oct 14 05:12:48 np0005486808 beautiful_cray[350499]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:12:48 np0005486808 beautiful_cray[350499]:        "type": "bluestore"
Oct 14 05:12:48 np0005486808 beautiful_cray[350499]:    },
Oct 14 05:12:48 np0005486808 beautiful_cray[350499]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:12:48 np0005486808 beautiful_cray[350499]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:12:48 np0005486808 beautiful_cray[350499]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:12:48 np0005486808 beautiful_cray[350499]:        "osd_id": 1,
Oct 14 05:12:48 np0005486808 beautiful_cray[350499]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:12:48 np0005486808 beautiful_cray[350499]:        "type": "bluestore"
Oct 14 05:12:48 np0005486808 beautiful_cray[350499]:    },
Oct 14 05:12:48 np0005486808 beautiful_cray[350499]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:12:48 np0005486808 beautiful_cray[350499]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:12:48 np0005486808 beautiful_cray[350499]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:12:48 np0005486808 beautiful_cray[350499]:        "osd_id": 0,
Oct 14 05:12:48 np0005486808 beautiful_cray[350499]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:12:48 np0005486808 beautiful_cray[350499]:        "type": "bluestore"
Oct 14 05:12:48 np0005486808 beautiful_cray[350499]:    }
Oct 14 05:12:48 np0005486808 beautiful_cray[350499]: }
Oct 14 05:12:48 np0005486808 nova_compute[259627]: 2025-10-14 09:12:48.112 2 DEBUG nova.network.neutron [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:12:48 np0005486808 systemd[1]: libpod-dab42b37ac013b5b84eb46d40f07160b97f955d75bd7796540bf0879829f5a71.scope: Deactivated successfully.
Oct 14 05:12:48 np0005486808 podman[350482]: 2025-10-14 09:12:48.136941602 +0000 UTC m=+1.135142077 container died dab42b37ac013b5b84eb46d40f07160b97f955d75bd7796540bf0879829f5a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cray, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 14 05:12:48 np0005486808 systemd[1]: var-lib-containers-storage-overlay-c7148b960e77d113014da8121cd7a7e3394799cd627fb9535d1fc023ed754238-merged.mount: Deactivated successfully.
Oct 14 05:12:48 np0005486808 podman[350482]: 2025-10-14 09:12:48.189840185 +0000 UTC m=+1.188040670 container remove dab42b37ac013b5b84eb46d40f07160b97f955d75bd7796540bf0879829f5a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cray, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 14 05:12:48 np0005486808 systemd[1]: libpod-conmon-dab42b37ac013b5b84eb46d40f07160b97f955d75bd7796540bf0879829f5a71.scope: Deactivated successfully.
Oct 14 05:12:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:12:48 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:12:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:12:48 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:12:48 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev b8dc6c5f-ba32-4311-938e-5a8dec5d43e0 does not exist
Oct 14 05:12:48 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 128872c2-1252-4289-b87c-855758ca5bf2 does not exist
Oct 14 05:12:48 np0005486808 nova_compute[259627]: 2025-10-14 09:12:48.490 2 DEBUG nova.network.neutron [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Updating instance_info_cache with network_info: [{"id": "66a0c3b8-73ab-490e-a3d4-06827c574cb6", "address": "fa:16:3e:c2:ff:73", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66a0c3b8-73", "ovs_interfaceid": "66a0c3b8-73ab-490e-a3d4-06827c574cb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:12:48 np0005486808 nova_compute[259627]: 2025-10-14 09:12:48.512 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Releasing lock "refresh_cache-4ad76124-48eb-467e-9a6f-951235efdb35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:12:48 np0005486808 nova_compute[259627]: 2025-10-14 09:12:48.513 2 DEBUG nova.compute.manager [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Instance network_info: |[{"id": "66a0c3b8-73ab-490e-a3d4-06827c574cb6", "address": "fa:16:3e:c2:ff:73", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66a0c3b8-73", "ovs_interfaceid": "66a0c3b8-73ab-490e-a3d4-06827c574cb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:12:48 np0005486808 nova_compute[259627]: 2025-10-14 09:12:48.513 2 DEBUG oslo_concurrency.lockutils [req-1d82724b-92de-4952-bc77-49a16597735c req-fe665076-85f7-4e9e-99b9-18c264d855db 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-4ad76124-48eb-467e-9a6f-951235efdb35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:12:48 np0005486808 nova_compute[259627]: 2025-10-14 09:12:48.513 2 DEBUG nova.network.neutron [req-1d82724b-92de-4952-bc77-49a16597735c req-fe665076-85f7-4e9e-99b9-18c264d855db 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Refreshing network info cache for port 66a0c3b8-73ab-490e-a3d4-06827c574cb6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:12:48 np0005486808 nova_compute[259627]: 2025-10-14 09:12:48.515 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Start _get_guest_xml network_info=[{"id": "66a0c3b8-73ab-490e-a3d4-06827c574cb6", "address": "fa:16:3e:c2:ff:73", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66a0c3b8-73", "ovs_interfaceid": "66a0c3b8-73ab-490e-a3d4-06827c574cb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:12:48 np0005486808 nova_compute[259627]: 2025-10-14 09:12:48.521 2 WARNING nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:12:48 np0005486808 nova_compute[259627]: 2025-10-14 09:12:48.527 2 DEBUG nova.virt.libvirt.host [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:12:48 np0005486808 nova_compute[259627]: 2025-10-14 09:12:48.527 2 DEBUG nova.virt.libvirt.host [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:12:48 np0005486808 nova_compute[259627]: 2025-10-14 09:12:48.536 2 DEBUG nova.virt.libvirt.host [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:12:48 np0005486808 nova_compute[259627]: 2025-10-14 09:12:48.536 2 DEBUG nova.virt.libvirt.host [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:12:48 np0005486808 nova_compute[259627]: 2025-10-14 09:12:48.537 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:12:48 np0005486808 nova_compute[259627]: 2025-10-14 09:12:48.537 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:12:48 np0005486808 nova_compute[259627]: 2025-10-14 09:12:48.538 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:12:48 np0005486808 nova_compute[259627]: 2025-10-14 09:12:48.538 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:12:48 np0005486808 nova_compute[259627]: 2025-10-14 09:12:48.538 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:12:48 np0005486808 nova_compute[259627]: 2025-10-14 09:12:48.538 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:12:48 np0005486808 nova_compute[259627]: 2025-10-14 09:12:48.539 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:12:48 np0005486808 nova_compute[259627]: 2025-10-14 09:12:48.539 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:12:48 np0005486808 nova_compute[259627]: 2025-10-14 09:12:48.539 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:12:48 np0005486808 nova_compute[259627]: 2025-10-14 09:12:48.539 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:12:48 np0005486808 nova_compute[259627]: 2025-10-14 09:12:48.539 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:12:48 np0005486808 nova_compute[259627]: 2025-10-14 09:12:48.540 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:12:48 np0005486808 nova_compute[259627]: 2025-10-14 09:12:48.543 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:12:48 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1486944178' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:12:48 np0005486808 nova_compute[259627]: 2025-10-14 09:12:48.956 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:48 np0005486808 nova_compute[259627]: 2025-10-14 09:12:48.993 2 DEBUG nova.storage.rbd_utils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image 4ad76124-48eb-467e-9a6f-951235efdb35_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.000 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.064 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433154.0633433, 4774788b-1dc2-40c6-87d0-db4e3f54a609 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.065 2 INFO nova.compute.manager [-] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.085 2 DEBUG nova.compute.manager [None req-230b8606-0aef-4961-939d-f8e3a97975a9 - - - - - -] [instance: 4774788b-1dc2-40c6-87d0-db4e3f54a609] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:49 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:12:49 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.345 2 DEBUG nova.network.neutron [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Updating instance_info_cache with network_info: [{"id": "59075c43-66a8-4a9c-a693-31f83575b355", "address": "fa:16:3e:63:b1:fa", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59075c43-66", "ovs_interfaceid": "59075c43-66a8-4a9c-a693-31f83575b355", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.372 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Releasing lock "refresh_cache-bf38daab-2994-41c4-a44f-91e466acf68e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.373 2 DEBUG nova.compute.manager [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Instance network_info: |[{"id": "59075c43-66a8-4a9c-a693-31f83575b355", "address": "fa:16:3e:63:b1:fa", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59075c43-66", "ovs_interfaceid": "59075c43-66a8-4a9c-a693-31f83575b355", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.378 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Start _get_guest_xml network_info=[{"id": "59075c43-66a8-4a9c-a693-31f83575b355", "address": "fa:16:3e:63:b1:fa", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59075c43-66", "ovs_interfaceid": "59075c43-66a8-4a9c-a693-31f83575b355", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.383 2 WARNING nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.390 2 DEBUG nova.virt.libvirt.host [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.391 2 DEBUG nova.virt.libvirt.host [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.395 2 DEBUG nova.virt.libvirt.host [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.396 2 DEBUG nova.virt.libvirt.host [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.396 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.397 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.398 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.398 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.398 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.399 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.399 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.400 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.400 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.401 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.401 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.402 2 DEBUG nova.virt.hardware [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.409 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:12:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2425337834' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.523 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.526 2 DEBUG nova.virt.libvirt.vif [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:12:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1291705969',display_name='tempest-MultipleCreateTestJSON-server-1291705969-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1291705969-1',id=91,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5a057db932754d6eae91f0d2f359f1ff',ramdisk_id='',reservation_id='r-tm8i0hhm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-2115206001',owner_user_name='tempest-MultipleCreateTe
stJSON-2115206001-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:12:41Z,user_data=None,user_id='f50b95774c384c5a8414b197ed5d7b82',uuid=4ad76124-48eb-467e-9a6f-951235efdb35,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "66a0c3b8-73ab-490e-a3d4-06827c574cb6", "address": "fa:16:3e:c2:ff:73", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66a0c3b8-73", "ovs_interfaceid": "66a0c3b8-73ab-490e-a3d4-06827c574cb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.527 2 DEBUG nova.network.os_vif_util [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converting VIF {"id": "66a0c3b8-73ab-490e-a3d4-06827c574cb6", "address": "fa:16:3e:c2:ff:73", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66a0c3b8-73", "ovs_interfaceid": "66a0c3b8-73ab-490e-a3d4-06827c574cb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.528 2 DEBUG nova.network.os_vif_util [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:ff:73,bridge_name='br-int',has_traffic_filtering=True,id=66a0c3b8-73ab-490e-a3d4-06827c574cb6,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66a0c3b8-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.531 2 DEBUG nova.objects.instance [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lazy-loading 'pci_devices' on Instance uuid 4ad76124-48eb-467e-9a6f-951235efdb35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.549 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:12:49 np0005486808 nova_compute[259627]:  <uuid>4ad76124-48eb-467e-9a6f-951235efdb35</uuid>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:  <name>instance-0000005b</name>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:12:49 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:      <nova:name>tempest-MultipleCreateTestJSON-server-1291705969-1</nova:name>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:12:48</nova:creationTime>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:12:49 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:        <nova:user uuid="f50b95774c384c5a8414b197ed5d7b82">tempest-MultipleCreateTestJSON-2115206001-project-member</nova:user>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:        <nova:project uuid="5a057db932754d6eae91f0d2f359f1ff">tempest-MultipleCreateTestJSON-2115206001</nova:project>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:        <nova:port uuid="66a0c3b8-73ab-490e-a3d4-06827c574cb6">
Oct 14 05:12:49 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:      <entry name="serial">4ad76124-48eb-467e-9a6f-951235efdb35</entry>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:      <entry name="uuid">4ad76124-48eb-467e-9a6f-951235efdb35</entry>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:12:49 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/4ad76124-48eb-467e-9a6f-951235efdb35_disk">
Oct 14 05:12:49 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:12:49 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:12:49 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/4ad76124-48eb-467e-9a6f-951235efdb35_disk.config">
Oct 14 05:12:49 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:12:49 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:12:49 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:c2:ff:73"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:      <target dev="tap66a0c3b8-73"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:12:49 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/4ad76124-48eb-467e-9a6f-951235efdb35/console.log" append="off"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:12:49 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:12:49 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:12:49 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:12:49 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:12:49 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.550 2 DEBUG nova.compute.manager [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Preparing to wait for external event network-vif-plugged-66a0c3b8-73ab-490e-a3d4-06827c574cb6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.551 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "4ad76124-48eb-467e-9a6f-951235efdb35-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.552 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "4ad76124-48eb-467e-9a6f-951235efdb35-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.552 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "4ad76124-48eb-467e-9a6f-951235efdb35-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.554 2 DEBUG nova.virt.libvirt.vif [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:12:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1291705969',display_name='tempest-MultipleCreateTestJSON-server-1291705969-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1291705969-1',id=91,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5a057db932754d6eae91f0d2f359f1ff',ramdisk_id='',reservation_id='r-tm8i0hhm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-2115206001',owner_user_name='tempest-Multip
leCreateTestJSON-2115206001-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:12:41Z,user_data=None,user_id='f50b95774c384c5a8414b197ed5d7b82',uuid=4ad76124-48eb-467e-9a6f-951235efdb35,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "66a0c3b8-73ab-490e-a3d4-06827c574cb6", "address": "fa:16:3e:c2:ff:73", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66a0c3b8-73", "ovs_interfaceid": "66a0c3b8-73ab-490e-a3d4-06827c574cb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.554 2 DEBUG nova.network.os_vif_util [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converting VIF {"id": "66a0c3b8-73ab-490e-a3d4-06827c574cb6", "address": "fa:16:3e:c2:ff:73", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66a0c3b8-73", "ovs_interfaceid": "66a0c3b8-73ab-490e-a3d4-06827c574cb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.555 2 DEBUG nova.network.os_vif_util [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:ff:73,bridge_name='br-int',has_traffic_filtering=True,id=66a0c3b8-73ab-490e-a3d4-06827c574cb6,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66a0c3b8-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.556 2 DEBUG os_vif [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:ff:73,bridge_name='br-int',has_traffic_filtering=True,id=66a0c3b8-73ab-490e-a3d4-06827c574cb6,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66a0c3b8-73') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.558 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.559 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.563 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap66a0c3b8-73, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.563 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap66a0c3b8-73, col_values=(('external_ids', {'iface-id': '66a0c3b8-73ab-490e-a3d4-06827c574cb6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c2:ff:73', 'vm-uuid': '4ad76124-48eb-467e-9a6f-951235efdb35'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:49 np0005486808 NetworkManager[44885]: <info>  [1760433169.5666] manager: (tap66a0c3b8-73): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/383)
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.578 2 INFO os_vif [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:ff:73,bridge_name='br-int',has_traffic_filtering=True,id=66a0c3b8-73ab-490e-a3d4-06827c574cb6,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66a0c3b8-73')#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.650 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.651 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.651 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] No VIF found with MAC fa:16:3e:c2:ff:73, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.651 2 INFO nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Using config drive#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.676 2 DEBUG nova.storage.rbd_utils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image 4ad76124-48eb-467e-9a6f-951235efdb35_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:12:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/69948782' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:12:49 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1751: 305 pgs: 305 active+clean; 260 MiB data, 749 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 118 op/s
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.940 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.968 2 DEBUG nova.storage.rbd_utils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image bf38daab-2994-41c4-a44f-91e466acf68e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:49 np0005486808 nova_compute[259627]: 2025-10-14 09:12:49.973 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.444 2 DEBUG nova.compute.manager [req-41b2e1be-aee9-416d-9739-18d3cece0d76 req-a977f58d-87d8-4673-9d40-83b0981015bf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Received event network-changed-59075c43-66a8-4a9c-a693-31f83575b355 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.444 2 DEBUG nova.compute.manager [req-41b2e1be-aee9-416d-9739-18d3cece0d76 req-a977f58d-87d8-4673-9d40-83b0981015bf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Refreshing instance network info cache due to event network-changed-59075c43-66a8-4a9c-a693-31f83575b355. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.445 2 DEBUG oslo_concurrency.lockutils [req-41b2e1be-aee9-416d-9739-18d3cece0d76 req-a977f58d-87d8-4673-9d40-83b0981015bf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-bf38daab-2994-41c4-a44f-91e466acf68e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.445 2 DEBUG oslo_concurrency.lockutils [req-41b2e1be-aee9-416d-9739-18d3cece0d76 req-a977f58d-87d8-4673-9d40-83b0981015bf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-bf38daab-2994-41c4-a44f-91e466acf68e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.445 2 DEBUG nova.network.neutron [req-41b2e1be-aee9-416d-9739-18d3cece0d76 req-a977f58d-87d8-4673-9d40-83b0981015bf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Refreshing network info cache for port 59075c43-66a8-4a9c-a693-31f83575b355 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:12:50 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:12:50 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4080435401' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.487 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.488 2 DEBUG nova.virt.libvirt.vif [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:12:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1291705969',display_name='tempest-MultipleCreateTestJSON-server-1291705969-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1291705969-2',id=92,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5a057db932754d6eae91f0d2f359f1ff',ramdisk_id='',reservation_id='r-tm8i0hhm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-2115206001',owner_user_name='tempest-MultipleCreateTe
stJSON-2115206001-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:12:42Z,user_data=None,user_id='f50b95774c384c5a8414b197ed5d7b82',uuid=bf38daab-2994-41c4-a44f-91e466acf68e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "59075c43-66a8-4a9c-a693-31f83575b355", "address": "fa:16:3e:63:b1:fa", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59075c43-66", "ovs_interfaceid": "59075c43-66a8-4a9c-a693-31f83575b355", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.489 2 DEBUG nova.network.os_vif_util [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converting VIF {"id": "59075c43-66a8-4a9c-a693-31f83575b355", "address": "fa:16:3e:63:b1:fa", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59075c43-66", "ovs_interfaceid": "59075c43-66a8-4a9c-a693-31f83575b355", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.490 2 DEBUG nova.network.os_vif_util [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:b1:fa,bridge_name='br-int',has_traffic_filtering=True,id=59075c43-66a8-4a9c-a693-31f83575b355,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59075c43-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.492 2 DEBUG nova.objects.instance [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lazy-loading 'pci_devices' on Instance uuid bf38daab-2994-41c4-a44f-91e466acf68e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.515 2 INFO nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Creating config drive at /var/lib/nova/instances/4ad76124-48eb-467e-9a6f-951235efdb35/disk.config#033[00m
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.520 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4ad76124-48eb-467e-9a6f-951235efdb35/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpklf_2xrm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.572 2 DEBUG nova.network.neutron [req-1d82724b-92de-4952-bc77-49a16597735c req-fe665076-85f7-4e9e-99b9-18c264d855db 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Updated VIF entry in instance network info cache for port 66a0c3b8-73ab-490e-a3d4-06827c574cb6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.573 2 DEBUG nova.network.neutron [req-1d82724b-92de-4952-bc77-49a16597735c req-fe665076-85f7-4e9e-99b9-18c264d855db 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Updating instance_info_cache with network_info: [{"id": "66a0c3b8-73ab-490e-a3d4-06827c574cb6", "address": "fa:16:3e:c2:ff:73", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66a0c3b8-73", "ovs_interfaceid": "66a0c3b8-73ab-490e-a3d4-06827c574cb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.577 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:12:50 np0005486808 nova_compute[259627]:  <uuid>bf38daab-2994-41c4-a44f-91e466acf68e</uuid>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:  <name>instance-0000005c</name>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:12:50 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:      <nova:name>tempest-MultipleCreateTestJSON-server-1291705969-2</nova:name>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:12:49</nova:creationTime>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:12:50 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:        <nova:user uuid="f50b95774c384c5a8414b197ed5d7b82">tempest-MultipleCreateTestJSON-2115206001-project-member</nova:user>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:        <nova:project uuid="5a057db932754d6eae91f0d2f359f1ff">tempest-MultipleCreateTestJSON-2115206001</nova:project>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:        <nova:port uuid="59075c43-66a8-4a9c-a693-31f83575b355">
Oct 14 05:12:50 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:      <entry name="serial">bf38daab-2994-41c4-a44f-91e466acf68e</entry>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:      <entry name="uuid">bf38daab-2994-41c4-a44f-91e466acf68e</entry>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:12:50 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/bf38daab-2994-41c4-a44f-91e466acf68e_disk">
Oct 14 05:12:50 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:12:50 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:12:50 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/bf38daab-2994-41c4-a44f-91e466acf68e_disk.config">
Oct 14 05:12:50 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:12:50 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:12:50 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:63:b1:fa"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:      <target dev="tap59075c43-66"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:12:50 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/bf38daab-2994-41c4-a44f-91e466acf68e/console.log" append="off"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:12:50 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:12:50 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:12:50 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:12:50 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:12:50 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.578 2 DEBUG nova.compute.manager [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Preparing to wait for external event network-vif-plugged-59075c43-66a8-4a9c-a693-31f83575b355 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.578 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "bf38daab-2994-41c4-a44f-91e466acf68e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.579 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "bf38daab-2994-41c4-a44f-91e466acf68e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.579 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "bf38daab-2994-41c4-a44f-91e466acf68e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.580 2 DEBUG nova.virt.libvirt.vif [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:12:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1291705969',display_name='tempest-MultipleCreateTestJSON-server-1291705969-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1291705969-2',id=92,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5a057db932754d6eae91f0d2f359f1ff',ramdisk_id='',reservation_id='r-tm8i0hhm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-2115206001',owner_user_name='tempest-Multip
leCreateTestJSON-2115206001-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:12:42Z,user_data=None,user_id='f50b95774c384c5a8414b197ed5d7b82',uuid=bf38daab-2994-41c4-a44f-91e466acf68e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "59075c43-66a8-4a9c-a693-31f83575b355", "address": "fa:16:3e:63:b1:fa", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59075c43-66", "ovs_interfaceid": "59075c43-66a8-4a9c-a693-31f83575b355", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.580 2 DEBUG nova.network.os_vif_util [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converting VIF {"id": "59075c43-66a8-4a9c-a693-31f83575b355", "address": "fa:16:3e:63:b1:fa", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59075c43-66", "ovs_interfaceid": "59075c43-66a8-4a9c-a693-31f83575b355", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.581 2 DEBUG nova.network.os_vif_util [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:b1:fa,bridge_name='br-int',has_traffic_filtering=True,id=59075c43-66a8-4a9c-a693-31f83575b355,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59075c43-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.582 2 DEBUG os_vif [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:b1:fa,bridge_name='br-int',has_traffic_filtering=True,id=59075c43-66a8-4a9c-a693-31f83575b355,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59075c43-66') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.584 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.584 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.589 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap59075c43-66, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.589 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap59075c43-66, col_values=(('external_ids', {'iface-id': '59075c43-66a8-4a9c-a693-31f83575b355', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:63:b1:fa', 'vm-uuid': 'bf38daab-2994-41c4-a44f-91e466acf68e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:12:50 np0005486808 NetworkManager[44885]: <info>  [1760433170.5927] manager: (tap59075c43-66): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/384)
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.596 2 DEBUG oslo_concurrency.lockutils [req-1d82724b-92de-4952-bc77-49a16597735c req-fe665076-85f7-4e9e-99b9-18c264d855db 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-4ad76124-48eb-467e-9a6f-951235efdb35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.599 2 INFO os_vif [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:b1:fa,bridge_name='br-int',has_traffic_filtering=True,id=59075c43-66a8-4a9c-a693-31f83575b355,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59075c43-66')#033[00m
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.652 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.652 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.653 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] No VIF found with MAC fa:16:3e:63:b1:fa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.653 2 INFO nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Using config drive#033[00m
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.678 2 DEBUG nova.storage.rbd_utils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image bf38daab-2994-41c4-a44f-91e466acf68e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.684 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4ad76124-48eb-467e-9a6f-951235efdb35/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpklf_2xrm" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.713 2 DEBUG nova.storage.rbd_utils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image 4ad76124-48eb-467e-9a6f-951235efdb35_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.717 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4ad76124-48eb-467e-9a6f-951235efdb35/disk.config 4ad76124-48eb-467e-9a6f-951235efdb35_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.883 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4ad76124-48eb-467e-9a6f-951235efdb35/disk.config 4ad76124-48eb-467e-9a6f-951235efdb35_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.884 2 INFO nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Deleting local config drive /var/lib/nova/instances/4ad76124-48eb-467e-9a6f-951235efdb35/disk.config because it was imported into RBD.#033[00m
Oct 14 05:12:50 np0005486808 NetworkManager[44885]: <info>  [1760433170.9357] manager: (tap66a0c3b8-73): new Tun device (/org/freedesktop/NetworkManager/Devices/385)
Oct 14 05:12:50 np0005486808 kernel: tap66a0c3b8-73: entered promiscuous mode
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:50 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:50Z|00939|binding|INFO|Claiming lport 66a0c3b8-73ab-490e-a3d4-06827c574cb6 for this chassis.
Oct 14 05:12:50 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:50Z|00940|binding|INFO|66a0c3b8-73ab-490e-a3d4-06827c574cb6: Claiming fa:16:3e:c2:ff:73 10.100.0.11
Oct 14 05:12:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:50.954 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:ff:73 10.100.0.11'], port_security=['fa:16:3e:c2:ff:73 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '4ad76124-48eb-467e-9a6f-951235efdb35', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0506bb08-7957-44ca-9a0f-014c548c7b40', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5a057db932754d6eae91f0d2f359f1ff', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9b9d7c47-8a9b-4622-8756-36a8a6e40174', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bc552e0c-ff32-4755-9304-f9703ae8cc71, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=66a0c3b8-73ab-490e-a3d4-06827c574cb6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:12:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:50.955 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 66a0c3b8-73ab-490e-a3d4-06827c574cb6 in datapath 0506bb08-7957-44ca-9a0f-014c548c7b40 bound to our chassis#033[00m
Oct 14 05:12:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:50.958 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0506bb08-7957-44ca-9a0f-014c548c7b40#033[00m
Oct 14 05:12:50 np0005486808 systemd-udevd[350812]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:12:50 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:50Z|00941|binding|INFO|Setting lport 66a0c3b8-73ab-490e-a3d4-06827c574cb6 ovn-installed in OVS
Oct 14 05:12:50 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:50Z|00942|binding|INFO|Setting lport 66a0c3b8-73ab-490e-a3d4-06827c574cb6 up in Southbound
Oct 14 05:12:50 np0005486808 nova_compute[259627]: 2025-10-14 09:12:50.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:50.983 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5a08e688-c7e3-4a25-bd58-d7849dd4e37a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:50.984 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0506bb08-71 in ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:12:50 np0005486808 systemd-machined[214636]: New machine qemu-112-instance-0000005b.
Oct 14 05:12:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:50.987 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0506bb08-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:12:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:50.987 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[643ace4e-88b1-4d7e-b4e0-4992db52b96e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:50.988 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[adb6947e-1ab1-4074-906e-3a8488519dc9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:50 np0005486808 NetworkManager[44885]: <info>  [1760433170.9990] device (tap66a0c3b8-73): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:12:51 np0005486808 systemd[1]: Started Virtual Machine qemu-112-instance-0000005b.
Oct 14 05:12:51 np0005486808 NetworkManager[44885]: <info>  [1760433171.0006] device (tap66a0c3b8-73): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.008 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[c3f0ac13-0b45-4092-bb36-ed23d2206845]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.042 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b2c662e2-be3a-4739-b9c6-6b7d0cd2a682]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.083 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[41ba1ffb-6441-4f5a-b2e9-a672781fac84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:51 np0005486808 systemd-udevd[350817]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.092 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[46e56416-49e5-4bec-8b63-fb58cca25e4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:51 np0005486808 NetworkManager[44885]: <info>  [1760433171.0943] manager: (tap0506bb08-70): new Veth device (/org/freedesktop/NetworkManager/Devices/386)
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.143 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e0979797-4844-4d71-94d7-d12340c3469b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.148 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f2ac72b0-289d-4720-a25e-e267785fa588]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:51 np0005486808 NetworkManager[44885]: <info>  [1760433171.1845] device (tap0506bb08-70): carrier: link connected
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.195 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[7451cdde-a4f6-4b3f-87d1-b607a63d3b15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.221 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ee0b0510-0024-4181-8b3c-2e833a13c4e5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0506bb08-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:c3:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 272], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694994, 'reachable_time': 19321, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350850, 'error': None, 'target': 'ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.235 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[326f99a3-64d5-4d91-b6c4-617366b53627]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb7:c30c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 694994, 'tstamp': 694994}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350851, 'error': None, 'target': 'ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.253 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4fdf2807-03a6-474b-aca6-045db2f39ca8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0506bb08-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:c3:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 272], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694994, 'reachable_time': 19321, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 350852, 'error': None, 'target': 'ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.282 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[40f0e841-541c-4eb4-8d69-0e9964a6f7f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.343 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a4a00bf7-c684-4425-aec6-6cdef23784fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.344 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0506bb08-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.345 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.345 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0506bb08-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:51 np0005486808 nova_compute[259627]: 2025-10-14 09:12:51.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:51 np0005486808 NetworkManager[44885]: <info>  [1760433171.3483] manager: (tap0506bb08-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/387)
Oct 14 05:12:51 np0005486808 kernel: tap0506bb08-70: entered promiscuous mode
Oct 14 05:12:51 np0005486808 nova_compute[259627]: 2025-10-14 09:12:51.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.357 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0506bb08-70, col_values=(('external_ids', {'iface-id': '6cfe11a6-55c2-4d2e-880b-8832ad317040'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:51 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:51Z|00943|binding|INFO|Releasing lport 6cfe11a6-55c2-4d2e-880b-8832ad317040 from this chassis (sb_readonly=0)
Oct 14 05:12:51 np0005486808 nova_compute[259627]: 2025-10-14 09:12:51.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:51 np0005486808 nova_compute[259627]: 2025-10-14 09:12:51.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:51 np0005486808 nova_compute[259627]: 2025-10-14 09:12:51.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.395 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0506bb08-7957-44ca-9a0f-014c548c7b40.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0506bb08-7957-44ca-9a0f-014c548c7b40.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.396 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8a165815-c99b-4eac-992e-83739e3e3b9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.397 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-0506bb08-7957-44ca-9a0f-014c548c7b40
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/0506bb08-7957-44ca-9a0f-014c548c7b40.pid.haproxy
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 0506bb08-7957-44ca-9a0f-014c548c7b40
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.398 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40', 'env', 'PROCESS_TAG=haproxy-0506bb08-7957-44ca-9a0f-014c548c7b40', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0506bb08-7957-44ca-9a0f-014c548c7b40.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:12:51 np0005486808 nova_compute[259627]: 2025-10-14 09:12:51.531 2 INFO nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Creating config drive at /var/lib/nova/instances/bf38daab-2994-41c4-a44f-91e466acf68e/disk.config#033[00m
Oct 14 05:12:51 np0005486808 nova_compute[259627]: 2025-10-14 09:12:51.540 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bf38daab-2994-41c4-a44f-91e466acf68e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptlcv2m7e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:51 np0005486808 nova_compute[259627]: 2025-10-14 09:12:51.684 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bf38daab-2994-41c4-a44f-91e466acf68e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptlcv2m7e" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:51 np0005486808 nova_compute[259627]: 2025-10-14 09:12:51.728 2 DEBUG nova.storage.rbd_utils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] rbd image bf38daab-2994-41c4-a44f-91e466acf68e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:51 np0005486808 nova_compute[259627]: 2025-10-14 09:12:51.732 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bf38daab-2994-41c4-a44f-91e466acf68e/disk.config bf38daab-2994-41c4-a44f-91e466acf68e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:51 np0005486808 nova_compute[259627]: 2025-10-14 09:12:51.789 2 DEBUG oslo_concurrency.lockutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Acquiring lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:51 np0005486808 nova_compute[259627]: 2025-10-14 09:12:51.790 2 DEBUG oslo_concurrency.lockutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:51 np0005486808 podman[350945]: 2025-10-14 09:12:51.794924158 +0000 UTC m=+0.057864296 container create e64283190618996fe6a56ec5ac0a7ad4952127dd38b0e0a0b9cc6a12bc81b18e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 14 05:12:51 np0005486808 nova_compute[259627]: 2025-10-14 09:12:51.806 2 DEBUG nova.compute.manager [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:12:51 np0005486808 systemd[1]: Started libpod-conmon-e64283190618996fe6a56ec5ac0a7ad4952127dd38b0e0a0b9cc6a12bc81b18e.scope.
Oct 14 05:12:51 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:12:51 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8eaa5e67a067367ecbaeb8ed40f166de47bd4cb062e63d1c476cf7e27a74873/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:12:51 np0005486808 podman[350945]: 2025-10-14 09:12:51.859416866 +0000 UTC m=+0.122357024 container init e64283190618996fe6a56ec5ac0a7ad4952127dd38b0e0a0b9cc6a12bc81b18e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:12:51 np0005486808 podman[350945]: 2025-10-14 09:12:51.764285754 +0000 UTC m=+0.027225902 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:12:51 np0005486808 podman[350945]: 2025-10-14 09:12:51.865737472 +0000 UTC m=+0.128677610 container start e64283190618996fe6a56ec5ac0a7ad4952127dd38b0e0a0b9cc6a12bc81b18e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:12:51 np0005486808 nova_compute[259627]: 2025-10-14 09:12:51.883 2 DEBUG oslo_concurrency.lockutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:51 np0005486808 nova_compute[259627]: 2025-10-14 09:12:51.884 2 DEBUG oslo_concurrency.lockutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:51 np0005486808 neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40[350965]: [NOTICE]   (350984) : New worker (350989) forked
Oct 14 05:12:51 np0005486808 neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40[350965]: [NOTICE]   (350984) : Loading success.
Oct 14 05:12:51 np0005486808 nova_compute[259627]: 2025-10-14 09:12:51.892 2 DEBUG nova.virt.hardware [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:12:51 np0005486808 nova_compute[259627]: 2025-10-14 09:12:51.892 2 INFO nova.compute.claims [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:12:51 np0005486808 nova_compute[259627]: 2025-10-14 09:12:51.922 2 DEBUG oslo_concurrency.processutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bf38daab-2994-41c4-a44f-91e466acf68e/disk.config bf38daab-2994-41c4-a44f-91e466acf68e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:51 np0005486808 nova_compute[259627]: 2025-10-14 09:12:51.922 2 INFO nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Deleting local config drive /var/lib/nova/instances/bf38daab-2994-41c4-a44f-91e466acf68e/disk.config because it was imported into RBD.#033[00m
Oct 14 05:12:51 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1752: 305 pgs: 305 active+clean; 292 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 181 op/s
Oct 14 05:12:51 np0005486808 nova_compute[259627]: 2025-10-14 09:12:51.948 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433171.9482193, 4ad76124-48eb-467e-9a6f-951235efdb35 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:12:51 np0005486808 nova_compute[259627]: 2025-10-14 09:12:51.949 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] VM Started (Lifecycle Event)#033[00m
Oct 14 05:12:51 np0005486808 kernel: tap59075c43-66: entered promiscuous mode
Oct 14 05:12:51 np0005486808 NetworkManager[44885]: <info>  [1760433171.9690] manager: (tap59075c43-66): new Tun device (/org/freedesktop/NetworkManager/Devices/388)
Oct 14 05:12:51 np0005486808 systemd-udevd[350844]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:12:51 np0005486808 nova_compute[259627]: 2025-10-14 09:12:51.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:51 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:51Z|00944|binding|INFO|Claiming lport 59075c43-66a8-4a9c-a693-31f83575b355 for this chassis.
Oct 14 05:12:51 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:51Z|00945|binding|INFO|59075c43-66a8-4a9c-a693-31f83575b355: Claiming fa:16:3e:63:b1:fa 10.100.0.3
Oct 14 05:12:51 np0005486808 nova_compute[259627]: 2025-10-14 09:12:51.975 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.977 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:b1:fa 10.100.0.3'], port_security=['fa:16:3e:63:b1:fa 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'bf38daab-2994-41c4-a44f-91e466acf68e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0506bb08-7957-44ca-9a0f-014c548c7b40', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5a057db932754d6eae91f0d2f359f1ff', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9b9d7c47-8a9b-4622-8756-36a8a6e40174', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bc552e0c-ff32-4755-9304-f9703ae8cc71, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=59075c43-66a8-4a9c-a693-31f83575b355) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.979 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 59075c43-66a8-4a9c-a693-31f83575b355 in datapath 0506bb08-7957-44ca-9a0f-014c548c7b40 bound to our chassis#033[00m
Oct 14 05:12:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:51.982 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0506bb08-7957-44ca-9a0f-014c548c7b40#033[00m
Oct 14 05:12:51 np0005486808 NetworkManager[44885]: <info>  [1760433171.9838] device (tap59075c43-66): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:12:51 np0005486808 NetworkManager[44885]: <info>  [1760433171.9846] device (tap59075c43-66): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:12:51 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:51Z|00946|binding|INFO|Setting lport 59075c43-66a8-4a9c-a693-31f83575b355 ovn-installed in OVS
Oct 14 05:12:51 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:51Z|00947|binding|INFO|Setting lport 59075c43-66a8-4a9c-a693-31f83575b355 up in Southbound
Oct 14 05:12:51 np0005486808 nova_compute[259627]: 2025-10-14 09:12:51.991 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433171.9484267, 4ad76124-48eb-467e-9a6f-951235efdb35 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:12:51 np0005486808 nova_compute[259627]: 2025-10-14 09:12:51.992 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:12:51 np0005486808 nova_compute[259627]: 2025-10-14 09:12:51.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:51 np0005486808 nova_compute[259627]: 2025-10-14 09:12:51.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:52.001 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c9d40ccd-10df-45ba-97e8-c80fc0471c5c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.008 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:12:52 np0005486808 systemd-machined[214636]: New machine qemu-113-instance-0000005c.
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.015 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:12:52 np0005486808 systemd[1]: Started Virtual Machine qemu-113-instance-0000005c.
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.031 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:12:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:52.040 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d46d2b53-81f6-4066-89cb-22635603b80c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:52.043 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[17828b12-26d4-4311-b08e-da6874901395]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.053 2 DEBUG oslo_concurrency.processutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:52.069 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[442e27c9-ea9a-48ee-b8b3-35c1de8d3fb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:52.088 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[85f576b2-77ed-4261-afe8-98ec03327514]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0506bb08-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:c3:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 4, 'rx_bytes': 306, 'tx_bytes': 264, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 4, 'rx_bytes': 306, 'tx_bytes': 264, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 272], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694994, 'reachable_time': 19321, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 264, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 264, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351023, 'error': None, 'target': 'ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:52.105 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7aa234be-608e-456b-a052-df0bb4f19f85]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0506bb08-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 695007, 'tstamp': 695007}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351025, 'error': None, 'target': 'ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0506bb08-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 695010, 'tstamp': 695010}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351025, 'error': None, 'target': 'ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:52.107 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0506bb08-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:52.110 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0506bb08-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:52.110 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:12:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:52.110 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0506bb08-70, col_values=(('external_ids', {'iface-id': '6cfe11a6-55c2-4d2e-880b-8832ad317040'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:52.110 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:12:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:12:52 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/761441399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.491 2 DEBUG oslo_concurrency.processutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.500 2 DEBUG nova.compute.provider_tree [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.535 2 DEBUG nova.scheduler.client.report [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.577 2 DEBUG oslo_concurrency.lockutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.577 2 DEBUG nova.compute.manager [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.620 2 DEBUG nova.compute.manager [req-5127e236-5c39-4fc0-8d95-6d5b78d22236 req-0db8213f-80c1-45cf-9b53-5f1ead663130 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Received event network-vif-plugged-66a0c3b8-73ab-490e-a3d4-06827c574cb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.620 2 DEBUG oslo_concurrency.lockutils [req-5127e236-5c39-4fc0-8d95-6d5b78d22236 req-0db8213f-80c1-45cf-9b53-5f1ead663130 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4ad76124-48eb-467e-9a6f-951235efdb35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.621 2 DEBUG oslo_concurrency.lockutils [req-5127e236-5c39-4fc0-8d95-6d5b78d22236 req-0db8213f-80c1-45cf-9b53-5f1ead663130 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4ad76124-48eb-467e-9a6f-951235efdb35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.621 2 DEBUG oslo_concurrency.lockutils [req-5127e236-5c39-4fc0-8d95-6d5b78d22236 req-0db8213f-80c1-45cf-9b53-5f1ead663130 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4ad76124-48eb-467e-9a6f-951235efdb35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.621 2 DEBUG nova.compute.manager [req-5127e236-5c39-4fc0-8d95-6d5b78d22236 req-0db8213f-80c1-45cf-9b53-5f1ead663130 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Processing event network-vif-plugged-66a0c3b8-73ab-490e-a3d4-06827c574cb6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.622 2 DEBUG nova.compute.manager [req-5127e236-5c39-4fc0-8d95-6d5b78d22236 req-0db8213f-80c1-45cf-9b53-5f1ead663130 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Received event network-vif-plugged-66a0c3b8-73ab-490e-a3d4-06827c574cb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.622 2 DEBUG oslo_concurrency.lockutils [req-5127e236-5c39-4fc0-8d95-6d5b78d22236 req-0db8213f-80c1-45cf-9b53-5f1ead663130 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4ad76124-48eb-467e-9a6f-951235efdb35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.622 2 DEBUG oslo_concurrency.lockutils [req-5127e236-5c39-4fc0-8d95-6d5b78d22236 req-0db8213f-80c1-45cf-9b53-5f1ead663130 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4ad76124-48eb-467e-9a6f-951235efdb35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.622 2 DEBUG oslo_concurrency.lockutils [req-5127e236-5c39-4fc0-8d95-6d5b78d22236 req-0db8213f-80c1-45cf-9b53-5f1ead663130 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4ad76124-48eb-467e-9a6f-951235efdb35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.623 2 DEBUG nova.compute.manager [req-5127e236-5c39-4fc0-8d95-6d5b78d22236 req-0db8213f-80c1-45cf-9b53-5f1ead663130 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] No waiting events found dispatching network-vif-plugged-66a0c3b8-73ab-490e-a3d4-06827c574cb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.623 2 WARNING nova.compute.manager [req-5127e236-5c39-4fc0-8d95-6d5b78d22236 req-0db8213f-80c1-45cf-9b53-5f1ead663130 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Received unexpected event network-vif-plugged-66a0c3b8-73ab-490e-a3d4-06827c574cb6 for instance with vm_state building and task_state spawning.#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.624 2 DEBUG nova.compute.manager [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.628 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433172.6284063, 4ad76124-48eb-467e-9a6f-951235efdb35 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.628 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.630 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.633 2 INFO nova.virt.libvirt.driver [-] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Instance spawned successfully.#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.634 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.639 2 DEBUG nova.compute.manager [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.639 2 DEBUG nova.network.neutron [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.646 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.650 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.667 2 INFO nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.673 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.674 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.674 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.675 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.675 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.675 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.683 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.696 2 DEBUG nova.compute.manager [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.748 2 INFO nova.compute.manager [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Took 10.73 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.748 2 DEBUG nova.compute.manager [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:12:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.819 2 DEBUG nova.compute.manager [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.821 2 DEBUG nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.821 2 INFO nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Creating image(s)#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.842 2 DEBUG nova.storage.rbd_utils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] rbd image 69c5d250-71a4-47d5-a3ce-5b606ee9c692_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.864 2 DEBUG nova.storage.rbd_utils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] rbd image 69c5d250-71a4-47d5-a3ce-5b606ee9c692_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.885 2 DEBUG nova.storage.rbd_utils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] rbd image 69c5d250-71a4-47d5-a3ce-5b606ee9c692_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.888 2 DEBUG oslo_concurrency.processutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.923 2 DEBUG nova.network.neutron [req-41b2e1be-aee9-416d-9739-18d3cece0d76 req-a977f58d-87d8-4673-9d40-83b0981015bf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Updated VIF entry in instance network info cache for port 59075c43-66a8-4a9c-a693-31f83575b355. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.924 2 DEBUG nova.network.neutron [req-41b2e1be-aee9-416d-9739-18d3cece0d76 req-a977f58d-87d8-4673-9d40-83b0981015bf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Updating instance_info_cache with network_info: [{"id": "59075c43-66a8-4a9c-a693-31f83575b355", "address": "fa:16:3e:63:b1:fa", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59075c43-66", "ovs_interfaceid": "59075c43-66a8-4a9c-a693-31f83575b355", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.927 2 DEBUG nova.policy [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '21d93d87742344a1b7662df0d97a69b2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fd094b0f9acb49ca8b1f295403a44ec3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.936 2 INFO nova.compute.manager [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Took 12.01 seconds to build instance.#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.956 2 DEBUG oslo_concurrency.lockutils [req-41b2e1be-aee9-416d-9739-18d3cece0d76 req-a977f58d-87d8-4673-9d40-83b0981015bf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-bf38daab-2994-41c4-a44f-91e466acf68e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.957 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "4ad76124-48eb-467e-9a6f-951235efdb35" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.123s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.958 2 DEBUG oslo_concurrency.processutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.958 2 DEBUG oslo_concurrency.lockutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.959 2 DEBUG oslo_concurrency.lockutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.959 2 DEBUG oslo_concurrency.lockutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.981 2 DEBUG nova.storage.rbd_utils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] rbd image 69c5d250-71a4-47d5-a3ce-5b606ee9c692_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:52 np0005486808 nova_compute[259627]: 2025-10-14 09:12:52.984 2 DEBUG oslo_concurrency.processutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 69c5d250-71a4-47d5-a3ce-5b606ee9c692_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:53 np0005486808 nova_compute[259627]: 2025-10-14 09:12:53.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:53 np0005486808 nova_compute[259627]: 2025-10-14 09:12:53.055 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433173.0553296, bf38daab-2994-41c4-a44f-91e466acf68e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:12:53 np0005486808 nova_compute[259627]: 2025-10-14 09:12:53.056 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] VM Started (Lifecycle Event)#033[00m
Oct 14 05:12:53 np0005486808 nova_compute[259627]: 2025-10-14 09:12:53.078 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:12:53 np0005486808 nova_compute[259627]: 2025-10-14 09:12:53.083 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433173.055622, bf38daab-2994-41c4-a44f-91e466acf68e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:12:53 np0005486808 nova_compute[259627]: 2025-10-14 09:12:53.084 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:12:53 np0005486808 nova_compute[259627]: 2025-10-14 09:12:53.236 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:12:53 np0005486808 nova_compute[259627]: 2025-10-14 09:12:53.241 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:12:53 np0005486808 nova_compute[259627]: 2025-10-14 09:12:53.255 2 DEBUG oslo_concurrency.processutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 69c5d250-71a4-47d5-a3ce-5b606ee9c692_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.271s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:53 np0005486808 nova_compute[259627]: 2025-10-14 09:12:53.287 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:12:53 np0005486808 nova_compute[259627]: 2025-10-14 09:12:53.330 2 DEBUG nova.storage.rbd_utils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] resizing rbd image 69c5d250-71a4-47d5-a3ce-5b606ee9c692_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:12:53 np0005486808 nova_compute[259627]: 2025-10-14 09:12:53.452 2 DEBUG nova.objects.instance [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Lazy-loading 'migration_context' on Instance uuid 69c5d250-71a4-47d5-a3ce-5b606ee9c692 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:12:53 np0005486808 nova_compute[259627]: 2025-10-14 09:12:53.470 2 DEBUG nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:12:53 np0005486808 nova_compute[259627]: 2025-10-14 09:12:53.471 2 DEBUG nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Ensure instance console log exists: /var/lib/nova/instances/69c5d250-71a4-47d5-a3ce-5b606ee9c692/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:12:53 np0005486808 nova_compute[259627]: 2025-10-14 09:12:53.472 2 DEBUG oslo_concurrency.lockutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:53 np0005486808 nova_compute[259627]: 2025-10-14 09:12:53.472 2 DEBUG oslo_concurrency.lockutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:53 np0005486808 nova_compute[259627]: 2025-10-14 09:12:53.472 2 DEBUG oslo_concurrency.lockutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:53 np0005486808 nova_compute[259627]: 2025-10-14 09:12:53.783 2 DEBUG nova.network.neutron [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Successfully created port: 63189572-78bc-4d3a-8135-659f8c39ce7d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:12:53 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1753: 305 pgs: 305 active+clean; 292 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 359 KiB/s rd, 5.7 MiB/s wr, 117 op/s
Oct 14 05:12:54 np0005486808 nova_compute[259627]: 2025-10-14 09:12:54.404 2 INFO nova.compute.manager [None req-bb03464d-8d0a-4c8d-9076-b051429c8030 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Get console output#033[00m
Oct 14 05:12:54 np0005486808 nova_compute[259627]: 2025-10-14 09:12:54.408 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 14 05:12:54 np0005486808 nova_compute[259627]: 2025-10-14 09:12:54.644 2 DEBUG oslo_concurrency.lockutils [None req-99235e08-0667-4fd6-8732-cf4828e0a0a4 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "9e354e27-d674-43c3-890b-caf8731cb827" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:54 np0005486808 nova_compute[259627]: 2025-10-14 09:12:54.644 2 DEBUG oslo_concurrency.lockutils [None req-99235e08-0667-4fd6-8732-cf4828e0a0a4 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:54 np0005486808 nova_compute[259627]: 2025-10-14 09:12:54.644 2 INFO nova.compute.manager [None req-99235e08-0667-4fd6-8732-cf4828e0a0a4 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Rebooting instance#033[00m
Oct 14 05:12:54 np0005486808 nova_compute[259627]: 2025-10-14 09:12:54.660 2 DEBUG oslo_concurrency.lockutils [None req-99235e08-0667-4fd6-8732-cf4828e0a0a4 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "refresh_cache-9e354e27-d674-43c3-890b-caf8731cb827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:12:54 np0005486808 nova_compute[259627]: 2025-10-14 09:12:54.660 2 DEBUG oslo_concurrency.lockutils [None req-99235e08-0667-4fd6-8732-cf4828e0a0a4 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquired lock "refresh_cache-9e354e27-d674-43c3-890b-caf8731cb827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:12:54 np0005486808 nova_compute[259627]: 2025-10-14 09:12:54.660 2 DEBUG nova.network.neutron [None req-99235e08-0667-4fd6-8732-cf4828e0a0a4 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:12:54 np0005486808 nova_compute[259627]: 2025-10-14 09:12:54.696 2 DEBUG nova.compute.manager [req-ccb9f4cc-3a2c-4382-a6e7-c28e02b1dbf1 req-763df4ce-a8d9-4a07-872a-3216dc6ac8c6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Received event network-vif-plugged-59075c43-66a8-4a9c-a693-31f83575b355 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:12:54 np0005486808 nova_compute[259627]: 2025-10-14 09:12:54.696 2 DEBUG oslo_concurrency.lockutils [req-ccb9f4cc-3a2c-4382-a6e7-c28e02b1dbf1 req-763df4ce-a8d9-4a07-872a-3216dc6ac8c6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "bf38daab-2994-41c4-a44f-91e466acf68e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:54 np0005486808 nova_compute[259627]: 2025-10-14 09:12:54.697 2 DEBUG oslo_concurrency.lockutils [req-ccb9f4cc-3a2c-4382-a6e7-c28e02b1dbf1 req-763df4ce-a8d9-4a07-872a-3216dc6ac8c6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bf38daab-2994-41c4-a44f-91e466acf68e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:54 np0005486808 nova_compute[259627]: 2025-10-14 09:12:54.697 2 DEBUG oslo_concurrency.lockutils [req-ccb9f4cc-3a2c-4382-a6e7-c28e02b1dbf1 req-763df4ce-a8d9-4a07-872a-3216dc6ac8c6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bf38daab-2994-41c4-a44f-91e466acf68e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:54 np0005486808 nova_compute[259627]: 2025-10-14 09:12:54.697 2 DEBUG nova.compute.manager [req-ccb9f4cc-3a2c-4382-a6e7-c28e02b1dbf1 req-763df4ce-a8d9-4a07-872a-3216dc6ac8c6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Processing event network-vif-plugged-59075c43-66a8-4a9c-a693-31f83575b355 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:12:54 np0005486808 nova_compute[259627]: 2025-10-14 09:12:54.697 2 DEBUG nova.compute.manager [req-ccb9f4cc-3a2c-4382-a6e7-c28e02b1dbf1 req-763df4ce-a8d9-4a07-872a-3216dc6ac8c6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Received event network-vif-plugged-59075c43-66a8-4a9c-a693-31f83575b355 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:12:54 np0005486808 nova_compute[259627]: 2025-10-14 09:12:54.697 2 DEBUG oslo_concurrency.lockutils [req-ccb9f4cc-3a2c-4382-a6e7-c28e02b1dbf1 req-763df4ce-a8d9-4a07-872a-3216dc6ac8c6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "bf38daab-2994-41c4-a44f-91e466acf68e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:54 np0005486808 nova_compute[259627]: 2025-10-14 09:12:54.697 2 DEBUG oslo_concurrency.lockutils [req-ccb9f4cc-3a2c-4382-a6e7-c28e02b1dbf1 req-763df4ce-a8d9-4a07-872a-3216dc6ac8c6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bf38daab-2994-41c4-a44f-91e466acf68e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:54 np0005486808 nova_compute[259627]: 2025-10-14 09:12:54.698 2 DEBUG oslo_concurrency.lockutils [req-ccb9f4cc-3a2c-4382-a6e7-c28e02b1dbf1 req-763df4ce-a8d9-4a07-872a-3216dc6ac8c6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bf38daab-2994-41c4-a44f-91e466acf68e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:54 np0005486808 nova_compute[259627]: 2025-10-14 09:12:54.698 2 DEBUG nova.compute.manager [req-ccb9f4cc-3a2c-4382-a6e7-c28e02b1dbf1 req-763df4ce-a8d9-4a07-872a-3216dc6ac8c6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] No waiting events found dispatching network-vif-plugged-59075c43-66a8-4a9c-a693-31f83575b355 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:12:54 np0005486808 nova_compute[259627]: 2025-10-14 09:12:54.698 2 WARNING nova.compute.manager [req-ccb9f4cc-3a2c-4382-a6e7-c28e02b1dbf1 req-763df4ce-a8d9-4a07-872a-3216dc6ac8c6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Received unexpected event network-vif-plugged-59075c43-66a8-4a9c-a693-31f83575b355 for instance with vm_state building and task_state spawning.#033[00m
Oct 14 05:12:54 np0005486808 nova_compute[259627]: 2025-10-14 09:12:54.698 2 DEBUG nova.compute.manager [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:12:54 np0005486808 nova_compute[259627]: 2025-10-14 09:12:54.702 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:12:54 np0005486808 nova_compute[259627]: 2025-10-14 09:12:54.704 2 INFO nova.virt.libvirt.driver [-] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Instance spawned successfully.#033[00m
Oct 14 05:12:54 np0005486808 nova_compute[259627]: 2025-10-14 09:12:54.704 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:12:54 np0005486808 nova_compute[259627]: 2025-10-14 09:12:54.706 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433174.7061656, bf38daab-2994-41c4-a44f-91e466acf68e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:12:54 np0005486808 nova_compute[259627]: 2025-10-14 09:12:54.706 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:12:54 np0005486808 nova_compute[259627]: 2025-10-14 09:12:54.723 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:12:54 np0005486808 nova_compute[259627]: 2025-10-14 09:12:54.727 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:12:54 np0005486808 nova_compute[259627]: 2025-10-14 09:12:54.727 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:12:54 np0005486808 nova_compute[259627]: 2025-10-14 09:12:54.727 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:12:54 np0005486808 nova_compute[259627]: 2025-10-14 09:12:54.728 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:12:54 np0005486808 nova_compute[259627]: 2025-10-14 09:12:54.728 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:12:54 np0005486808 nova_compute[259627]: 2025-10-14 09:12:54.728 2 DEBUG nova.virt.libvirt.driver [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:12:54 np0005486808 nova_compute[259627]: 2025-10-14 09:12:54.731 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:12:54 np0005486808 nova_compute[259627]: 2025-10-14 09:12:54.760 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:12:54 np0005486808 nova_compute[259627]: 2025-10-14 09:12:54.790 2 INFO nova.compute.manager [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Took 11.99 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:12:54 np0005486808 nova_compute[259627]: 2025-10-14 09:12:54.791 2 DEBUG nova.compute.manager [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:12:54 np0005486808 nova_compute[259627]: 2025-10-14 09:12:54.840 2 INFO nova.compute.manager [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Took 13.87 seconds to build instance.#033[00m
Oct 14 05:12:54 np0005486808 nova_compute[259627]: 2025-10-14 09:12:54.856 2 DEBUG oslo_concurrency.lockutils [None req-d67ae9a9-8872-45e6-a01d-218f8f376014 f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "bf38daab-2994-41c4-a44f-91e466acf68e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.976s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:55 np0005486808 nova_compute[259627]: 2025-10-14 09:12:55.302 2 DEBUG nova.network.neutron [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Successfully updated port: 63189572-78bc-4d3a-8135-659f8c39ce7d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:12:55 np0005486808 nova_compute[259627]: 2025-10-14 09:12:55.320 2 DEBUG oslo_concurrency.lockutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Acquiring lock "refresh_cache-69c5d250-71a4-47d5-a3ce-5b606ee9c692" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:12:55 np0005486808 nova_compute[259627]: 2025-10-14 09:12:55.320 2 DEBUG oslo_concurrency.lockutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Acquired lock "refresh_cache-69c5d250-71a4-47d5-a3ce-5b606ee9c692" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:12:55 np0005486808 nova_compute[259627]: 2025-10-14 09:12:55.321 2 DEBUG nova.network.neutron [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:12:55 np0005486808 nova_compute[259627]: 2025-10-14 09:12:55.492 2 DEBUG nova.network.neutron [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:12:55 np0005486808 nova_compute[259627]: 2025-10-14 09:12:55.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:55 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1754: 305 pgs: 305 active+clean; 339 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 7.5 MiB/s wr, 232 op/s
Oct 14 05:12:56 np0005486808 nova_compute[259627]: 2025-10-14 09:12:56.369 2 DEBUG nova.network.neutron [None req-99235e08-0667-4fd6-8732-cf4828e0a0a4 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Updating instance_info_cache with network_info: [{"id": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "address": "fa:16:3e:7f:9c:bd", "network": {"id": "7cb8e394-ebca-4b27-8174-62c6b6f3a7da", "bridge": "br-int", "label": "tempest-network-smoke--1640681957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce4eb1a6-22", "ovs_interfaceid": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:12:56 np0005486808 nova_compute[259627]: 2025-10-14 09:12:56.390 2 DEBUG oslo_concurrency.lockutils [None req-99235e08-0667-4fd6-8732-cf4828e0a0a4 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Releasing lock "refresh_cache-9e354e27-d674-43c3-890b-caf8731cb827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:12:56 np0005486808 nova_compute[259627]: 2025-10-14 09:12:56.393 2 DEBUG nova.compute.manager [None req-99235e08-0667-4fd6-8732-cf4828e0a0a4 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:12:56 np0005486808 nova_compute[259627]: 2025-10-14 09:12:56.551 2 DEBUG nova.network.neutron [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Updating instance_info_cache with network_info: [{"id": "63189572-78bc-4d3a-8135-659f8c39ce7d", "address": "fa:16:3e:4a:ab:40", "network": {"id": "84e9c235-fb90-472a-8cac-f7ae999c18dd", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-148072854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd094b0f9acb49ca8b1f295403a44ec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63189572-78", "ovs_interfaceid": "63189572-78bc-4d3a-8135-659f8c39ce7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:12:56 np0005486808 nova_compute[259627]: 2025-10-14 09:12:56.567 2 DEBUG oslo_concurrency.lockutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Releasing lock "refresh_cache-69c5d250-71a4-47d5-a3ce-5b606ee9c692" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:12:56 np0005486808 nova_compute[259627]: 2025-10-14 09:12:56.568 2 DEBUG nova.compute.manager [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Instance network_info: |[{"id": "63189572-78bc-4d3a-8135-659f8c39ce7d", "address": "fa:16:3e:4a:ab:40", "network": {"id": "84e9c235-fb90-472a-8cac-f7ae999c18dd", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-148072854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd094b0f9acb49ca8b1f295403a44ec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63189572-78", "ovs_interfaceid": "63189572-78bc-4d3a-8135-659f8c39ce7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:12:56 np0005486808 nova_compute[259627]: 2025-10-14 09:12:56.574 2 DEBUG nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Start _get_guest_xml network_info=[{"id": "63189572-78bc-4d3a-8135-659f8c39ce7d", "address": "fa:16:3e:4a:ab:40", "network": {"id": "84e9c235-fb90-472a-8cac-f7ae999c18dd", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-148072854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd094b0f9acb49ca8b1f295403a44ec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63189572-78", "ovs_interfaceid": "63189572-78bc-4d3a-8135-659f8c39ce7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:12:56 np0005486808 nova_compute[259627]: 2025-10-14 09:12:56.582 2 WARNING nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:12:56 np0005486808 nova_compute[259627]: 2025-10-14 09:12:56.588 2 DEBUG nova.virt.libvirt.host [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:12:56 np0005486808 nova_compute[259627]: 2025-10-14 09:12:56.589 2 DEBUG nova.virt.libvirt.host [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:12:56 np0005486808 nova_compute[259627]: 2025-10-14 09:12:56.594 2 DEBUG nova.virt.libvirt.host [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:12:56 np0005486808 nova_compute[259627]: 2025-10-14 09:12:56.595 2 DEBUG nova.virt.libvirt.host [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:12:56 np0005486808 nova_compute[259627]: 2025-10-14 09:12:56.596 2 DEBUG nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:12:56 np0005486808 nova_compute[259627]: 2025-10-14 09:12:56.597 2 DEBUG nova.virt.hardware [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:12:56 np0005486808 nova_compute[259627]: 2025-10-14 09:12:56.598 2 DEBUG nova.virt.hardware [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:12:56 np0005486808 nova_compute[259627]: 2025-10-14 09:12:56.598 2 DEBUG nova.virt.hardware [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:12:56 np0005486808 nova_compute[259627]: 2025-10-14 09:12:56.599 2 DEBUG nova.virt.hardware [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:12:56 np0005486808 nova_compute[259627]: 2025-10-14 09:12:56.599 2 DEBUG nova.virt.hardware [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:12:56 np0005486808 nova_compute[259627]: 2025-10-14 09:12:56.600 2 DEBUG nova.virt.hardware [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:12:56 np0005486808 nova_compute[259627]: 2025-10-14 09:12:56.600 2 DEBUG nova.virt.hardware [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:12:56 np0005486808 nova_compute[259627]: 2025-10-14 09:12:56.601 2 DEBUG nova.virt.hardware [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:12:56 np0005486808 nova_compute[259627]: 2025-10-14 09:12:56.601 2 DEBUG nova.virt.hardware [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:12:56 np0005486808 nova_compute[259627]: 2025-10-14 09:12:56.602 2 DEBUG nova.virt.hardware [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:12:56 np0005486808 nova_compute[259627]: 2025-10-14 09:12:56.602 2 DEBUG nova.virt.hardware [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:12:56 np0005486808 nova_compute[259627]: 2025-10-14 09:12:56.608 2 DEBUG oslo_concurrency.processutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:56 np0005486808 nova_compute[259627]: 2025-10-14 09:12:56.889 2 DEBUG nova.compute.manager [req-6e00c1ea-6018-4f74-a001-3f8a27189fc8 req-5fdd63f1-2795-464b-9afd-59ce716946c3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Received event network-changed-63189572-78bc-4d3a-8135-659f8c39ce7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:12:56 np0005486808 nova_compute[259627]: 2025-10-14 09:12:56.891 2 DEBUG nova.compute.manager [req-6e00c1ea-6018-4f74-a001-3f8a27189fc8 req-5fdd63f1-2795-464b-9afd-59ce716946c3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Refreshing instance network info cache due to event network-changed-63189572-78bc-4d3a-8135-659f8c39ce7d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:12:56 np0005486808 nova_compute[259627]: 2025-10-14 09:12:56.891 2 DEBUG oslo_concurrency.lockutils [req-6e00c1ea-6018-4f74-a001-3f8a27189fc8 req-5fdd63f1-2795-464b-9afd-59ce716946c3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-69c5d250-71a4-47d5-a3ce-5b606ee9c692" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:12:56 np0005486808 nova_compute[259627]: 2025-10-14 09:12:56.892 2 DEBUG oslo_concurrency.lockutils [req-6e00c1ea-6018-4f74-a001-3f8a27189fc8 req-5fdd63f1-2795-464b-9afd-59ce716946c3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-69c5d250-71a4-47d5-a3ce-5b606ee9c692" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:12:56 np0005486808 nova_compute[259627]: 2025-10-14 09:12:56.893 2 DEBUG nova.network.neutron [req-6e00c1ea-6018-4f74-a001-3f8a27189fc8 req-5fdd63f1-2795-464b-9afd-59ce716946c3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Refreshing network info cache for port 63189572-78bc-4d3a-8135-659f8c39ce7d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.034 2 DEBUG oslo_concurrency.lockutils [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "4ad76124-48eb-467e-9a6f-951235efdb35" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.034 2 DEBUG oslo_concurrency.lockutils [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "4ad76124-48eb-467e-9a6f-951235efdb35" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.035 2 DEBUG oslo_concurrency.lockutils [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "4ad76124-48eb-467e-9a6f-951235efdb35-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.035 2 DEBUG oslo_concurrency.lockutils [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "4ad76124-48eb-467e-9a6f-951235efdb35-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.035 2 DEBUG oslo_concurrency.lockutils [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "4ad76124-48eb-467e-9a6f-951235efdb35-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.037 2 INFO nova.compute.manager [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Terminating instance#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.038 2 DEBUG nova.compute.manager [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:12:57 np0005486808 kernel: tap66a0c3b8-73 (unregistering): left promiscuous mode
Oct 14 05:12:57 np0005486808 NetworkManager[44885]: <info>  [1760433177.0784] device (tap66a0c3b8-73): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:12:57 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:57Z|00948|binding|INFO|Releasing lport 66a0c3b8-73ab-490e-a3d4-06827c574cb6 from this chassis (sb_readonly=0)
Oct 14 05:12:57 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:57Z|00949|binding|INFO|Setting lport 66a0c3b8-73ab-490e-a3d4-06827c574cb6 down in Southbound
Oct 14 05:12:57 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:57Z|00950|binding|INFO|Removing iface tap66a0c3b8-73 ovn-installed in OVS
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.093 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:ff:73 10.100.0.11'], port_security=['fa:16:3e:c2:ff:73 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '4ad76124-48eb-467e-9a6f-951235efdb35', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0506bb08-7957-44ca-9a0f-014c548c7b40', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5a057db932754d6eae91f0d2f359f1ff', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9b9d7c47-8a9b-4622-8756-36a8a6e40174', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bc552e0c-ff32-4755-9304-f9703ae8cc71, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=66a0c3b8-73ab-490e-a3d4-06827c574cb6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:12:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.095 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 66a0c3b8-73ab-490e-a3d4-06827c574cb6 in datapath 0506bb08-7957-44ca-9a0f-014c548c7b40 unbound from our chassis#033[00m
Oct 14 05:12:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.097 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0506bb08-7957-44ca-9a0f-014c548c7b40#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:57 np0005486808 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d0000005b.scope: Deactivated successfully.
Oct 14 05:12:57 np0005486808 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d0000005b.scope: Consumed 5.341s CPU time.
Oct 14 05:12:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.116 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c05eb111-6454-4f02-9cbd-e48fa5bed71b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:57 np0005486808 systemd-machined[214636]: Machine qemu-112-instance-0000005b terminated.
Oct 14 05:12:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:12:57 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/641616359' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.149 2 DEBUG oslo_concurrency.processutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.160 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3e1a7d63-bb40-47e2-88b4-bee97c96a389]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.164 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[dd183ba5-9c72-49df-a707-502aa2e8c748]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.187 2 DEBUG nova.storage.rbd_utils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] rbd image 69c5d250-71a4-47d5-a3ce-5b606ee9c692_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.192 2 DEBUG oslo_concurrency.processutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.201 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ab1c097b-33f0-4812-a6f3-262f7732b8a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.220 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0659b154-1a15-48c3-ba9a-642e6eba8916]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0506bb08-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:c3:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 272], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694994, 'reachable_time': 19321, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351308, 'error': None, 'target': 'ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.266 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e6648f5b-dafc-4537-a1ac-2dda59dc938f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0506bb08-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 695007, 'tstamp': 695007}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351310, 'error': None, 'target': 'ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0506bb08-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 695010, 'tstamp': 695010}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351310, 'error': None, 'target': 'ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.269 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0506bb08-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.279 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0506bb08-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.280 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:12:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.280 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0506bb08-70, col_values=(('external_ids', {'iface-id': '6cfe11a6-55c2-4d2e-880b-8832ad317040'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.281 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.287 2 INFO nova.virt.libvirt.driver [-] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Instance destroyed successfully.#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.289 2 DEBUG nova.objects.instance [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lazy-loading 'resources' on Instance uuid 4ad76124-48eb-467e-9a6f-951235efdb35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.306 2 DEBUG nova.virt.libvirt.vif [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:12:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1291705969',display_name='tempest-MultipleCreateTestJSON-server-1291705969-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1291705969-1',id=91,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:12:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5a057db932754d6eae91f0d2f359f1ff',ramdisk_id='',reservation_id='r-tm8i0hhm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-2115206001',owner_user_name='tempest-MultipleCreateTestJSON-2115206001-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:12:52Z,user_data=None,user_id='f50b95774c384c5a8414b197ed5d7b82',uuid=4ad76124-48eb-467e-9a6f-951235efdb35,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "66a0c3b8-73ab-490e-a3d4-06827c574cb6", "address": "fa:16:3e:c2:ff:73", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66a0c3b8-73", "ovs_interfaceid": "66a0c3b8-73ab-490e-a3d4-06827c574cb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.306 2 DEBUG nova.network.os_vif_util [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converting VIF {"id": "66a0c3b8-73ab-490e-a3d4-06827c574cb6", "address": "fa:16:3e:c2:ff:73", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66a0c3b8-73", "ovs_interfaceid": "66a0c3b8-73ab-490e-a3d4-06827c574cb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.308 2 DEBUG nova.network.os_vif_util [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:ff:73,bridge_name='br-int',has_traffic_filtering=True,id=66a0c3b8-73ab-490e-a3d4-06827c574cb6,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66a0c3b8-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.309 2 DEBUG os_vif [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:ff:73,bridge_name='br-int',has_traffic_filtering=True,id=66a0c3b8-73ab-490e-a3d4-06827c574cb6,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66a0c3b8-73') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.312 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap66a0c3b8-73, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.320 2 INFO os_vif [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:ff:73,bridge_name='br-int',has_traffic_filtering=True,id=66a0c3b8-73ab-490e-a3d4-06827c574cb6,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66a0c3b8-73')#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.369 2 DEBUG oslo_concurrency.lockutils [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "bf38daab-2994-41c4-a44f-91e466acf68e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.370 2 DEBUG oslo_concurrency.lockutils [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "bf38daab-2994-41c4-a44f-91e466acf68e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.371 2 DEBUG oslo_concurrency.lockutils [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "bf38daab-2994-41c4-a44f-91e466acf68e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.371 2 DEBUG oslo_concurrency.lockutils [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "bf38daab-2994-41c4-a44f-91e466acf68e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.372 2 DEBUG oslo_concurrency.lockutils [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "bf38daab-2994-41c4-a44f-91e466acf68e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.375 2 INFO nova.compute.manager [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Terminating instance#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.377 2 DEBUG nova.compute.manager [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:12:57 np0005486808 kernel: tap59075c43-66 (unregistering): left promiscuous mode
Oct 14 05:12:57 np0005486808 NetworkManager[44885]: <info>  [1760433177.4224] device (tap59075c43-66): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:12:57 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:57Z|00951|binding|INFO|Releasing lport 59075c43-66a8-4a9c-a693-31f83575b355 from this chassis (sb_readonly=0)
Oct 14 05:12:57 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:57Z|00952|binding|INFO|Setting lport 59075c43-66a8-4a9c-a693-31f83575b355 down in Southbound
Oct 14 05:12:57 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:57Z|00953|binding|INFO|Removing iface tap59075c43-66 ovn-installed in OVS
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.439 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:b1:fa 10.100.0.3'], port_security=['fa:16:3e:63:b1:fa 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'bf38daab-2994-41c4-a44f-91e466acf68e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0506bb08-7957-44ca-9a0f-014c548c7b40', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5a057db932754d6eae91f0d2f359f1ff', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9b9d7c47-8a9b-4622-8756-36a8a6e40174', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bc552e0c-ff32-4755-9304-f9703ae8cc71, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=59075c43-66a8-4a9c-a693-31f83575b355) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:12:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.440 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 59075c43-66a8-4a9c-a693-31f83575b355 in datapath 0506bb08-7957-44ca-9a0f-014c548c7b40 unbound from our chassis#033[00m
Oct 14 05:12:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.442 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0506bb08-7957-44ca-9a0f-014c548c7b40, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:12:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.442 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ba30d5c2-bbb4-4eab-bc9d-e38acbf7c527]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.443 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40 namespace which is not needed anymore#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:57 np0005486808 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d0000005c.scope: Deactivated successfully.
Oct 14 05:12:57 np0005486808 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d0000005c.scope: Consumed 3.686s CPU time.
Oct 14 05:12:57 np0005486808 systemd-machined[214636]: Machine qemu-113-instance-0000005c terminated.
Oct 14 05:12:57 np0005486808 neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40[350965]: [NOTICE]   (350984) : haproxy version is 2.8.14-c23fe91
Oct 14 05:12:57 np0005486808 neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40[350965]: [NOTICE]   (350984) : path to executable is /usr/sbin/haproxy
Oct 14 05:12:57 np0005486808 neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40[350965]: [ALERT]    (350984) : Current worker (350989) exited with code 143 (Terminated)
Oct 14 05:12:57 np0005486808 neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40[350965]: [WARNING]  (350984) : All workers exited. Exiting... (0)
Oct 14 05:12:57 np0005486808 systemd[1]: libpod-e64283190618996fe6a56ec5ac0a7ad4952127dd38b0e0a0b9cc6a12bc81b18e.scope: Deactivated successfully.
Oct 14 05:12:57 np0005486808 conmon[350965]: conmon e64283190618996fe6a5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e64283190618996fe6a56ec5ac0a7ad4952127dd38b0e0a0b9cc6a12bc81b18e.scope/container/memory.events
Oct 14 05:12:57 np0005486808 podman[351378]: 2025-10-14 09:12:57.607442365 +0000 UTC m=+0.053295994 container died e64283190618996fe6a56ec5ac0a7ad4952127dd38b0e0a0b9cc6a12bc81b18e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.627 2 INFO nova.virt.libvirt.driver [-] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Instance destroyed successfully.#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.628 2 DEBUG nova.objects.instance [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lazy-loading 'resources' on Instance uuid bf38daab-2994-41c4-a44f-91e466acf68e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:12:57 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e64283190618996fe6a56ec5ac0a7ad4952127dd38b0e0a0b9cc6a12bc81b18e-userdata-shm.mount: Deactivated successfully.
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.645 2 DEBUG nova.virt.libvirt.vif [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:12:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1291705969',display_name='tempest-MultipleCreateTestJSON-server-1291705969-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1291705969-2',id=92,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-10-14T09:12:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5a057db932754d6eae91f0d2f359f1ff',ramdisk_id='',reservation_id='r-tm8i0hhm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-2115206001',owner_user_name='tempest-MultipleCreateTestJSON-2115206001-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:12:54Z,user_data=None,user_id='f50b95774c384c5a8414b197ed5d7b82',uuid=bf38daab-2994-41c4-a44f-91e466acf68e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "59075c43-66a8-4a9c-a693-31f83575b355", "address": "fa:16:3e:63:b1:fa", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59075c43-66", "ovs_interfaceid": "59075c43-66a8-4a9c-a693-31f83575b355", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.645 2 DEBUG nova.network.os_vif_util [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converting VIF {"id": "59075c43-66a8-4a9c-a693-31f83575b355", "address": "fa:16:3e:63:b1:fa", "network": {"id": "0506bb08-7957-44ca-9a0f-014c548c7b40", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-901776874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a057db932754d6eae91f0d2f359f1ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59075c43-66", "ovs_interfaceid": "59075c43-66a8-4a9c-a693-31f83575b355", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.646 2 DEBUG nova.network.os_vif_util [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:b1:fa,bridge_name='br-int',has_traffic_filtering=True,id=59075c43-66a8-4a9c-a693-31f83575b355,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59075c43-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:12:57 np0005486808 systemd[1]: var-lib-containers-storage-overlay-b8eaa5e67a067367ecbaeb8ed40f166de47bd4cb062e63d1c476cf7e27a74873-merged.mount: Deactivated successfully.
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.647 2 DEBUG os_vif [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:b1:fa,bridge_name='br-int',has_traffic_filtering=True,id=59075c43-66a8-4a9c-a693-31f83575b355,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59075c43-66') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.648 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap59075c43-66, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.653 2 INFO os_vif [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:b1:fa,bridge_name='br-int',has_traffic_filtering=True,id=59075c43-66a8-4a9c-a693-31f83575b355,network=Network(0506bb08-7957-44ca-9a0f-014c548c7b40),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59075c43-66')#033[00m
Oct 14 05:12:57 np0005486808 podman[351378]: 2025-10-14 09:12:57.658099722 +0000 UTC m=+0.103953361 container cleanup e64283190618996fe6a56ec5ac0a7ad4952127dd38b0e0a0b9cc6a12bc81b18e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 05:12:57 np0005486808 systemd[1]: libpod-conmon-e64283190618996fe6a56ec5ac0a7ad4952127dd38b0e0a0b9cc6a12bc81b18e.scope: Deactivated successfully.
Oct 14 05:12:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:12:57 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4163474572' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.718 2 DEBUG oslo_concurrency.processutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.720 2 DEBUG nova.virt.libvirt.vif [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:12:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-1449697808',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-1449697808',id=93,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fd094b0f9acb49ca8b1f295403a44ec3',ramdisk_id='',reservation_id='r-0pjevjco',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-648965588',owner_user_name='tempest-ServerTagsTestJSON-648965588-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:12:52Z,user_data=None,user_id='21d93d87742344a1b7662df0d97a69b2',uuid=69c5d250-71a4-47d5-a3ce-5b606ee9c692,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "63189572-78bc-4d3a-8135-659f8c39ce7d", "address": "fa:16:3e:4a:ab:40", "network": {"id": "84e9c235-fb90-472a-8cac-f7ae999c18dd", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-148072854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd094b0f9acb49ca8b1f295403a44ec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63189572-78", "ovs_interfaceid": "63189572-78bc-4d3a-8135-659f8c39ce7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.721 2 DEBUG nova.network.os_vif_util [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Converting VIF {"id": "63189572-78bc-4d3a-8135-659f8c39ce7d", "address": "fa:16:3e:4a:ab:40", "network": {"id": "84e9c235-fb90-472a-8cac-f7ae999c18dd", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-148072854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd094b0f9acb49ca8b1f295403a44ec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63189572-78", "ovs_interfaceid": "63189572-78bc-4d3a-8135-659f8c39ce7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.721 2 DEBUG nova.network.os_vif_util [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:ab:40,bridge_name='br-int',has_traffic_filtering=True,id=63189572-78bc-4d3a-8135-659f8c39ce7d,network=Network(84e9c235-fb90-472a-8cac-f7ae999c18dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63189572-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.723 2 DEBUG nova.objects.instance [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 69c5d250-71a4-47d5-a3ce-5b606ee9c692 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:12:57 np0005486808 podman[351429]: 2025-10-14 09:12:57.725199425 +0000 UTC m=+0.044351534 container remove e64283190618996fe6a56ec5ac0a7ad4952127dd38b0e0a0b9cc6a12bc81b18e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 14 05:12:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.729 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.774 2 DEBUG nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:12:57 np0005486808 nova_compute[259627]:  <uuid>69c5d250-71a4-47d5-a3ce-5b606ee9c692</uuid>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:  <name>instance-0000005d</name>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:12:57 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:      <nova:name>tempest-ServerTagsTestJSON-server-1449697808</nova:name>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:12:56</nova:creationTime>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:12:57 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:        <nova:user uuid="21d93d87742344a1b7662df0d97a69b2">tempest-ServerTagsTestJSON-648965588-project-member</nova:user>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:        <nova:project uuid="fd094b0f9acb49ca8b1f295403a44ec3">tempest-ServerTagsTestJSON-648965588</nova:project>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:        <nova:port uuid="63189572-78bc-4d3a-8135-659f8c39ce7d">
Oct 14 05:12:57 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:      <entry name="serial">69c5d250-71a4-47d5-a3ce-5b606ee9c692</entry>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:      <entry name="uuid">69c5d250-71a4-47d5-a3ce-5b606ee9c692</entry>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:12:57 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/69c5d250-71a4-47d5-a3ce-5b606ee9c692_disk">
Oct 14 05:12:57 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:12:57 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:12:57 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/69c5d250-71a4-47d5-a3ce-5b606ee9c692_disk.config">
Oct 14 05:12:57 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:12:57 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:12:57 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:4a:ab:40"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:      <target dev="tap63189572-78"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:12:57 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/69c5d250-71a4-47d5-a3ce-5b606ee9c692/console.log" append="off"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:12:57 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:12:57 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:12:57 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:12:57 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:12:57 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.774 2 DEBUG nova.compute.manager [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Preparing to wait for external event network-vif-plugged-63189572-78bc-4d3a-8135-659f8c39ce7d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.774 2 DEBUG oslo_concurrency.lockutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Acquiring lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.775 2 DEBUG oslo_concurrency.lockutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.775 2 DEBUG oslo_concurrency.lockutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.775 2 DEBUG nova.virt.libvirt.vif [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:12:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-1449697808',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-1449697808',id=93,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fd094b0f9acb49ca8b1f295403a44ec3',ramdisk_id='',reservation_id='r-0pjevjco',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-648965588',owner_user_name='tempest-ServerTagsTestJSON-648965588-project-member'},tags=TagList,task_state
='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:12:52Z,user_data=None,user_id='21d93d87742344a1b7662df0d97a69b2',uuid=69c5d250-71a4-47d5-a3ce-5b606ee9c692,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "63189572-78bc-4d3a-8135-659f8c39ce7d", "address": "fa:16:3e:4a:ab:40", "network": {"id": "84e9c235-fb90-472a-8cac-f7ae999c18dd", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-148072854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd094b0f9acb49ca8b1f295403a44ec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63189572-78", "ovs_interfaceid": "63189572-78bc-4d3a-8135-659f8c39ce7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.776 2 DEBUG nova.network.os_vif_util [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Converting VIF {"id": "63189572-78bc-4d3a-8135-659f8c39ce7d", "address": "fa:16:3e:4a:ab:40", "network": {"id": "84e9c235-fb90-472a-8cac-f7ae999c18dd", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-148072854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd094b0f9acb49ca8b1f295403a44ec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63189572-78", "ovs_interfaceid": "63189572-78bc-4d3a-8135-659f8c39ce7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.776 2 DEBUG nova.network.os_vif_util [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:ab:40,bridge_name='br-int',has_traffic_filtering=True,id=63189572-78bc-4d3a-8135-659f8c39ce7d,network=Network(84e9c235-fb90-472a-8cac-f7ae999c18dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63189572-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.776 2 DEBUG os_vif [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:ab:40,bridge_name='br-int',has_traffic_filtering=True,id=63189572-78bc-4d3a-8135-659f8c39ce7d,network=Network(84e9c235-fb90-472a-8cac-f7ae999c18dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63189572-78') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:12:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.776 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dd52fe2b-23ed-45e7-8e48-03ea757d562f]: (4, ('Tue Oct 14 09:12:57 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40 (e64283190618996fe6a56ec5ac0a7ad4952127dd38b0e0a0b9cc6a12bc81b18e)\ne64283190618996fe6a56ec5ac0a7ad4952127dd38b0e0a0b9cc6a12bc81b18e\nTue Oct 14 09:12:57 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40 (e64283190618996fe6a56ec5ac0a7ad4952127dd38b0e0a0b9cc6a12bc81b18e)\ne64283190618996fe6a56ec5ac0a7ad4952127dd38b0e0a0b9cc6a12bc81b18e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.777 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.777 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:12:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.778 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c39bb614-a239-477a-86a6-50f8e79ec965]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.779 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0506bb08-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.780 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap63189572-78, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.780 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap63189572-78, col_values=(('external_ids', {'iface-id': '63189572-78bc-4d3a-8135-659f8c39ce7d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4a:ab:40', 'vm-uuid': '69c5d250-71a4-47d5-a3ce-5b606ee9c692'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:57 np0005486808 kernel: tap0506bb08-70: left promiscuous mode
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:12:57 np0005486808 NetworkManager[44885]: <info>  [1760433177.7839] manager: (tap63189572-78): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/389)
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.789 2 INFO nova.virt.libvirt.driver [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Deleting instance files /var/lib/nova/instances/4ad76124-48eb-467e-9a6f-951235efdb35_del#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.790 2 INFO nova.virt.libvirt.driver [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Deletion of /var/lib/nova/instances/4ad76124-48eb-467e-9a6f-951235efdb35_del complete#033[00m
Oct 14 05:12:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.808 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e30464bc-983e-4d42-85e9-6af240b5340a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.810 2 INFO os_vif [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:ab:40,bridge_name='br-int',has_traffic_filtering=True,id=63189572-78bc-4d3a-8135-659f8c39ce7d,network=Network(84e9c235-fb90-472a-8cac-f7ae999c18dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63189572-78')#033[00m
Oct 14 05:12:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.836 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6f406c42-a36c-4b68-ab79-b87adeebd58c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.838 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0aefb180-f7df-4954-aa87-1d0a159f4aeb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.850 2 INFO nova.compute.manager [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Took 0.81 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.850 2 DEBUG oslo.service.loopingcall [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.851 2 DEBUG nova.compute.manager [-] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.851 2 DEBUG nova.network.neutron [-] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:12:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.854 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c009871a-d52b-4d8a-91c3-66618eb61a55]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694983, 'reachable_time': 29378, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351453, 'error': None, 'target': 'ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:57 np0005486808 systemd[1]: run-netns-ovnmeta\x2d0506bb08\x2d7957\x2d44ca\x2d9a0f\x2d014c548c7b40.mount: Deactivated successfully.
Oct 14 05:12:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.857 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0506bb08-7957-44ca-9a0f-014c548c7b40 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:12:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.857 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[9a4a7902-87b7-43e0-8f51-b96e10a90f52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:57.862 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.865 2 DEBUG nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.866 2 DEBUG nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.866 2 DEBUG nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] No VIF found with MAC fa:16:3e:4a:ab:40, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.866 2 INFO nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Using config drive#033[00m
Oct 14 05:12:57 np0005486808 nova_compute[259627]: 2025-10-14 09:12:57.892 2 DEBUG nova.storage.rbd_utils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] rbd image 69c5d250-71a4-47d5-a3ce-5b606ee9c692_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:57 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1755: 305 pgs: 305 active+clean; 339 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 178 op/s
Oct 14 05:12:58 np0005486808 nova_compute[259627]: 2025-10-14 09:12:58.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:58 np0005486808 nova_compute[259627]: 2025-10-14 09:12:58.064 2 INFO nova.virt.libvirt.driver [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Deleting instance files /var/lib/nova/instances/bf38daab-2994-41c4-a44f-91e466acf68e_del#033[00m
Oct 14 05:12:58 np0005486808 nova_compute[259627]: 2025-10-14 09:12:58.065 2 INFO nova.virt.libvirt.driver [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Deletion of /var/lib/nova/instances/bf38daab-2994-41c4-a44f-91e466acf68e_del complete#033[00m
Oct 14 05:12:58 np0005486808 nova_compute[259627]: 2025-10-14 09:12:58.114 2 INFO nova.compute.manager [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Took 0.74 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:12:58 np0005486808 nova_compute[259627]: 2025-10-14 09:12:58.115 2 DEBUG oslo.service.loopingcall [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:12:58 np0005486808 nova_compute[259627]: 2025-10-14 09:12:58.115 2 DEBUG nova.compute.manager [-] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:12:58 np0005486808 nova_compute[259627]: 2025-10-14 09:12:58.115 2 DEBUG nova.network.neutron [-] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:12:58 np0005486808 nova_compute[259627]: 2025-10-14 09:12:58.434 2 INFO nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Creating config drive at /var/lib/nova/instances/69c5d250-71a4-47d5-a3ce-5b606ee9c692/disk.config#033[00m
Oct 14 05:12:58 np0005486808 nova_compute[259627]: 2025-10-14 09:12:58.442 2 DEBUG oslo_concurrency.processutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/69c5d250-71a4-47d5-a3ce-5b606ee9c692/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4w8jo8gr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:58 np0005486808 nova_compute[259627]: 2025-10-14 09:12:58.605 2 DEBUG oslo_concurrency.processutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/69c5d250-71a4-47d5-a3ce-5b606ee9c692/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4w8jo8gr" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:58 np0005486808 nova_compute[259627]: 2025-10-14 09:12:58.641 2 DEBUG nova.storage.rbd_utils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] rbd image 69c5d250-71a4-47d5-a3ce-5b606ee9c692_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:12:58 np0005486808 nova_compute[259627]: 2025-10-14 09:12:58.647 2 DEBUG oslo_concurrency.processutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/69c5d250-71a4-47d5-a3ce-5b606ee9c692/disk.config 69c5d250-71a4-47d5-a3ce-5b606ee9c692_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:58 np0005486808 nova_compute[259627]: 2025-10-14 09:12:58.701 2 DEBUG nova.network.neutron [req-6e00c1ea-6018-4f74-a001-3f8a27189fc8 req-5fdd63f1-2795-464b-9afd-59ce716946c3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Updated VIF entry in instance network info cache for port 63189572-78bc-4d3a-8135-659f8c39ce7d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:12:58 np0005486808 nova_compute[259627]: 2025-10-14 09:12:58.702 2 DEBUG nova.network.neutron [req-6e00c1ea-6018-4f74-a001-3f8a27189fc8 req-5fdd63f1-2795-464b-9afd-59ce716946c3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Updating instance_info_cache with network_info: [{"id": "63189572-78bc-4d3a-8135-659f8c39ce7d", "address": "fa:16:3e:4a:ab:40", "network": {"id": "84e9c235-fb90-472a-8cac-f7ae999c18dd", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-148072854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd094b0f9acb49ca8b1f295403a44ec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63189572-78", "ovs_interfaceid": "63189572-78bc-4d3a-8135-659f8c39ce7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:12:58 np0005486808 nova_compute[259627]: 2025-10-14 09:12:58.724 2 DEBUG oslo_concurrency.lockutils [req-6e00c1ea-6018-4f74-a001-3f8a27189fc8 req-5fdd63f1-2795-464b-9afd-59ce716946c3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-69c5d250-71a4-47d5-a3ce-5b606ee9c692" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:12:58 np0005486808 kernel: tapce4eb1a6-22 (unregistering): left promiscuous mode
Oct 14 05:12:58 np0005486808 NetworkManager[44885]: <info>  [1760433178.8521] device (tapce4eb1a6-22): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:12:58 np0005486808 virtqemud[259351]: End of file while reading data: Input/output error
Oct 14 05:12:58 np0005486808 nova_compute[259627]: 2025-10-14 09:12:58.871 2 DEBUG oslo_concurrency.processutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/69c5d250-71a4-47d5-a3ce-5b606ee9c692/disk.config 69c5d250-71a4-47d5-a3ce-5b606ee9c692_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.225s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:58 np0005486808 nova_compute[259627]: 2025-10-14 09:12:58.873 2 INFO nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Deleting local config drive /var/lib/nova/instances/69c5d250-71a4-47d5-a3ce-5b606ee9c692/disk.config because it was imported into RBD.#033[00m
Oct 14 05:12:58 np0005486808 nova_compute[259627]: 2025-10-14 09:12:58.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:58 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:58Z|00954|binding|INFO|Releasing lport ce4eb1a6-2221-4519-98fa-44a39da77b71 from this chassis (sb_readonly=0)
Oct 14 05:12:58 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:58Z|00955|binding|INFO|Setting lport ce4eb1a6-2221-4519-98fa-44a39da77b71 down in Southbound
Oct 14 05:12:58 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:58Z|00956|binding|INFO|Removing iface tapce4eb1a6-22 ovn-installed in OVS
Oct 14 05:12:58 np0005486808 nova_compute[259627]: 2025-10-14 09:12:58.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:58 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:58.922 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:9c:bd 10.100.0.7'], port_security=['fa:16:3e:7f:9c:bd 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9e354e27-d674-43c3-890b-caf8731cb827', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7cb8e394-ebca-4b27-8174-62c6b6f3a7da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d24993a343a425dbddac7e32be0c86b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c85ef4e1-bf02-447d-8de0-60f2d978738d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.177'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b7211eb9-3be4-4007-bf83-d7812e6ec9fe, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=ce4eb1a6-2221-4519-98fa-44a39da77b71) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:12:58 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:58.923 162547 INFO neutron.agent.ovn.metadata.agent [-] Port ce4eb1a6-2221-4519-98fa-44a39da77b71 in datapath 7cb8e394-ebca-4b27-8174-62c6b6f3a7da unbound from our chassis#033[00m
Oct 14 05:12:58 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:58.925 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7cb8e394-ebca-4b27-8174-62c6b6f3a7da, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:12:58 np0005486808 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d0000005a.scope: Deactivated successfully.
Oct 14 05:12:58 np0005486808 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d0000005a.scope: Consumed 13.308s CPU time.
Oct 14 05:12:58 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:58.928 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[29644bec-32c0-4d98-965a-c9f40e2b473b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:58 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:58.928 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da namespace which is not needed anymore#033[00m
Oct 14 05:12:58 np0005486808 systemd-machined[214636]: Machine qemu-111-instance-0000005a terminated.
Oct 14 05:12:58 np0005486808 nova_compute[259627]: 2025-10-14 09:12:58.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:58 np0005486808 kernel: tap63189572-78: entered promiscuous mode
Oct 14 05:12:58 np0005486808 NetworkManager[44885]: <info>  [1760433178.9450] manager: (tap63189572-78): new Tun device (/org/freedesktop/NetworkManager/Devices/390)
Oct 14 05:12:58 np0005486808 systemd-udevd[351281]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:12:58 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:58Z|00957|binding|INFO|Claiming lport 63189572-78bc-4d3a-8135-659f8c39ce7d for this chassis.
Oct 14 05:12:58 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:58Z|00958|binding|INFO|63189572-78bc-4d3a-8135-659f8c39ce7d: Claiming fa:16:3e:4a:ab:40 10.100.0.11
Oct 14 05:12:58 np0005486808 nova_compute[259627]: 2025-10-14 09:12:58.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:58 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:58.958 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:ab:40 10.100.0.11'], port_security=['fa:16:3e:4a:ab:40 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '69c5d250-71a4-47d5-a3ce-5b606ee9c692', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84e9c235-fb90-472a-8cac-f7ae999c18dd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd094b0f9acb49ca8b1f295403a44ec3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '10487db5-e210-4216-ae01-596f4a7c7ce6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9fab9082-797f-44f0-9d82-af22d5b6d133, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=63189572-78bc-4d3a-8135-659f8c39ce7d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:12:58 np0005486808 NetworkManager[44885]: <info>  [1760433178.9609] device (tap63189572-78): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:12:58 np0005486808 NetworkManager[44885]: <info>  [1760433178.9618] device (tap63189572-78): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:12:58 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:58Z|00959|binding|INFO|Setting lport 63189572-78bc-4d3a-8135-659f8c39ce7d ovn-installed in OVS
Oct 14 05:12:58 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:58Z|00960|binding|INFO|Setting lport 63189572-78bc-4d3a-8135-659f8c39ce7d up in Southbound
Oct 14 05:12:58 np0005486808 nova_compute[259627]: 2025-10-14 09:12:58.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:58 np0005486808 nova_compute[259627]: 2025-10-14 09:12:58.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:58 np0005486808 systemd-machined[214636]: New machine qemu-114-instance-0000005d.
Oct 14 05:12:58 np0005486808 podman[351520]: 2025-10-14 09:12:58.990533727 +0000 UTC m=+0.096422446 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:12:59 np0005486808 systemd[1]: Started Virtual Machine qemu-114-instance-0000005d.
Oct 14 05:12:59 np0005486808 podman[351516]: 2025-10-14 09:12:59.007911285 +0000 UTC m=+0.123705048 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true)
Oct 14 05:12:59 np0005486808 neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da[349186]: [NOTICE]   (349192) : haproxy version is 2.8.14-c23fe91
Oct 14 05:12:59 np0005486808 neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da[349186]: [NOTICE]   (349192) : path to executable is /usr/sbin/haproxy
Oct 14 05:12:59 np0005486808 neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da[349186]: [WARNING]  (349192) : Exiting Master process...
Oct 14 05:12:59 np0005486808 neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da[349186]: [ALERT]    (349192) : Current worker (349199) exited with code 143 (Terminated)
Oct 14 05:12:59 np0005486808 neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da[349186]: [WARNING]  (349192) : All workers exited. Exiting... (0)
Oct 14 05:12:59 np0005486808 systemd[1]: libpod-d1745c1e7bf13cb23652734496b35f2bad619e414bc795125c091f410d559958.scope: Deactivated successfully.
Oct 14 05:12:59 np0005486808 NetworkManager[44885]: <info>  [1760433179.0898] manager: (tapce4eb1a6-22): new Tun device (/org/freedesktop/NetworkManager/Devices/391)
Oct 14 05:12:59 np0005486808 podman[351594]: 2025-10-14 09:12:59.096173979 +0000 UTC m=+0.053817407 container died d1745c1e7bf13cb23652734496b35f2bad619e414bc795125c091f410d559958 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.096 2 DEBUG nova.network.neutron [-] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.117 2 DEBUG nova.network.neutron [-] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.135 2 INFO nova.compute.manager [-] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Took 1.28 seconds to deallocate network for instance.#033[00m
Oct 14 05:12:59 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d1745c1e7bf13cb23652734496b35f2bad619e414bc795125c091f410d559958-userdata-shm.mount: Deactivated successfully.
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.142 2 DEBUG nova.compute.manager [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Received event network-vif-unplugged-66a0c3b8-73ab-490e-a3d4-06827c574cb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:12:59 np0005486808 systemd[1]: var-lib-containers-storage-overlay-29994210cca99f3be79ccbe5f31a62bef181ac26761ce349073eb7da91ec9974-merged.mount: Deactivated successfully.
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.144 2 DEBUG oslo_concurrency.lockutils [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4ad76124-48eb-467e-9a6f-951235efdb35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.144 2 DEBUG oslo_concurrency.lockutils [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4ad76124-48eb-467e-9a6f-951235efdb35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.144 2 DEBUG oslo_concurrency.lockutils [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4ad76124-48eb-467e-9a6f-951235efdb35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.145 2 DEBUG nova.compute.manager [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] No waiting events found dispatching network-vif-unplugged-66a0c3b8-73ab-490e-a3d4-06827c574cb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.145 2 DEBUG nova.compute.manager [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Received event network-vif-unplugged-66a0c3b8-73ab-490e-a3d4-06827c574cb6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.145 2 DEBUG nova.compute.manager [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Received event network-vif-plugged-66a0c3b8-73ab-490e-a3d4-06827c574cb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.145 2 DEBUG oslo_concurrency.lockutils [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4ad76124-48eb-467e-9a6f-951235efdb35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.146 2 DEBUG oslo_concurrency.lockutils [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4ad76124-48eb-467e-9a6f-951235efdb35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.146 2 DEBUG oslo_concurrency.lockutils [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4ad76124-48eb-467e-9a6f-951235efdb35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.146 2 DEBUG nova.compute.manager [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] No waiting events found dispatching network-vif-plugged-66a0c3b8-73ab-490e-a3d4-06827c574cb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.146 2 WARNING nova.compute.manager [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Received unexpected event network-vif-plugged-66a0c3b8-73ab-490e-a3d4-06827c574cb6 for instance with vm_state active and task_state deleting.#033[00m
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.147 2 DEBUG nova.compute.manager [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Received event network-vif-unplugged-59075c43-66a8-4a9c-a693-31f83575b355 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.147 2 DEBUG oslo_concurrency.lockutils [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "bf38daab-2994-41c4-a44f-91e466acf68e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.147 2 DEBUG oslo_concurrency.lockutils [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bf38daab-2994-41c4-a44f-91e466acf68e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.147 2 DEBUG oslo_concurrency.lockutils [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bf38daab-2994-41c4-a44f-91e466acf68e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.148 2 DEBUG nova.compute.manager [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] No waiting events found dispatching network-vif-unplugged-59075c43-66a8-4a9c-a693-31f83575b355 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.148 2 DEBUG nova.compute.manager [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Received event network-vif-unplugged-59075c43-66a8-4a9c-a693-31f83575b355 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.148 2 DEBUG nova.compute.manager [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Received event network-vif-plugged-59075c43-66a8-4a9c-a693-31f83575b355 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.149 2 DEBUG oslo_concurrency.lockutils [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "bf38daab-2994-41c4-a44f-91e466acf68e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.149 2 DEBUG oslo_concurrency.lockutils [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bf38daab-2994-41c4-a44f-91e466acf68e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.149 2 DEBUG oslo_concurrency.lockutils [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bf38daab-2994-41c4-a44f-91e466acf68e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.149 2 DEBUG nova.compute.manager [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] No waiting events found dispatching network-vif-plugged-59075c43-66a8-4a9c-a693-31f83575b355 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.150 2 WARNING nova.compute.manager [req-56cdb1ab-60db-4812-b227-de4cb576bb47 req-e1b226d4-7357-423d-9425-4a4164e7472c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Received unexpected event network-vif-plugged-59075c43-66a8-4a9c-a693-31f83575b355 for instance with vm_state active and task_state deleting.#033[00m
Oct 14 05:12:59 np0005486808 podman[351594]: 2025-10-14 09:12:59.151143072 +0000 UTC m=+0.108786500 container cleanup d1745c1e7bf13cb23652734496b35f2bad619e414bc795125c091f410d559958 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_managed=true)
Oct 14 05:12:59 np0005486808 systemd[1]: libpod-conmon-d1745c1e7bf13cb23652734496b35f2bad619e414bc795125c091f410d559958.scope: Deactivated successfully.
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.159 2 INFO nova.compute.manager [-] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Took 1.04 seconds to deallocate network for instance.#033[00m
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.196 2 DEBUG oslo_concurrency.lockutils [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.197 2 DEBUG oslo_concurrency.lockutils [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.214 2 DEBUG oslo_concurrency.lockutils [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.218 2 DEBUG nova.compute.manager [req-f7a9bd74-92da-456b-b9bd-e9d96917c3f4 req-f4c28cbe-9c94-4ee6-bd9c-f087ccf610a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Received event network-vif-unplugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.218 2 DEBUG oslo_concurrency.lockutils [req-f7a9bd74-92da-456b-b9bd-e9d96917c3f4 req-f4c28cbe-9c94-4ee6-bd9c-f087ccf610a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9e354e27-d674-43c3-890b-caf8731cb827-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.219 2 DEBUG oslo_concurrency.lockutils [req-f7a9bd74-92da-456b-b9bd-e9d96917c3f4 req-f4c28cbe-9c94-4ee6-bd9c-f087ccf610a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.219 2 DEBUG oslo_concurrency.lockutils [req-f7a9bd74-92da-456b-b9bd-e9d96917c3f4 req-f4c28cbe-9c94-4ee6-bd9c-f087ccf610a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.219 2 DEBUG nova.compute.manager [req-f7a9bd74-92da-456b-b9bd-e9d96917c3f4 req-f4c28cbe-9c94-4ee6-bd9c-f087ccf610a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] No waiting events found dispatching network-vif-unplugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.220 2 WARNING nova.compute.manager [req-f7a9bd74-92da-456b-b9bd-e9d96917c3f4 req-f4c28cbe-9c94-4ee6-bd9c-f087ccf610a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Received unexpected event network-vif-unplugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 for instance with vm_state active and task_state reboot_started.#033[00m
Oct 14 05:12:59 np0005486808 podman[351636]: 2025-10-14 09:12:59.22780672 +0000 UTC m=+0.051091149 container remove d1745c1e7bf13cb23652734496b35f2bad619e414bc795125c091f410d559958 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2)
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.233 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e00a4297-d395-406d-a427-a982acec65ee]: (4, ('Tue Oct 14 09:12:59 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da (d1745c1e7bf13cb23652734496b35f2bad619e414bc795125c091f410d559958)\nd1745c1e7bf13cb23652734496b35f2bad619e414bc795125c091f410d559958\nTue Oct 14 09:12:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da (d1745c1e7bf13cb23652734496b35f2bad619e414bc795125c091f410d559958)\nd1745c1e7bf13cb23652734496b35f2bad619e414bc795125c091f410d559958\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.235 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c0e36e88-01ea-4532-813d-61a818fefe34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.235 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7cb8e394-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:59 np0005486808 kernel: tap7cb8e394-e0: left promiscuous mode
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.259 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c88a942f-2b14-4b98-a678-08a003b66e97]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.284 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8ae19393-f472-4b84-8c74-e67773157ad3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.285 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2be99da9-58ce-435f-9465-cd44b41464f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.310 2 DEBUG oslo_concurrency.processutils [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.325 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2658afa1-03df-4b21-9134-b7c6c8e34ec1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693185, 'reachable_time': 16629, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351667, 'error': None, 'target': 'ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.328 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.328 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[ac1ac215-cc8c-4e16-9eaf-9d153863a92f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.330 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 63189572-78bc-4d3a-8135-659f8c39ce7d in datapath 84e9c235-fb90-472a-8cac-f7ae999c18dd unbound from our chassis#033[00m
Oct 14 05:12:59 np0005486808 systemd[1]: run-netns-ovnmeta\x2d7cb8e394\x2debca\x2d4b27\x2d8174\x2d62c6b6f3a7da.mount: Deactivated successfully.
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.333 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 84e9c235-fb90-472a-8cac-f7ae999c18dd#033[00m
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.348 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[be7a8d6e-55bf-4422-bf20-6ba5c1fa3722]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.349 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap84e9c235-f1 in ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.351 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap84e9c235-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.351 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e73c46d6-6582-46da-8f15-6afe746aa910]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.353 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[78526ea0-734a-45fb-849a-9d969908f28c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.372 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[8db44d6d-2139-4417-9b5a-6adc3dc2d96d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.392 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5ccfbd61-99e5-4554-9b0d-2f556c630991]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.420 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[dd6f4bbc-ae91-49c0-9857-0a387b060547]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:59 np0005486808 NetworkManager[44885]: <info>  [1760433179.4283] manager: (tap84e9c235-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/392)
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.428 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a79a8dc8-5ad8-4bd4-bfb5-c5b39774c977]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.474 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[22f2eff7-0e34-4bd1-9ab0-e547f8255e78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.478 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c14d1dbb-fb93-46da-96a0-dacfad9ca5a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:59 np0005486808 NetworkManager[44885]: <info>  [1760433179.5019] device (tap84e9c235-f0): carrier: link connected
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.510 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ae3653ff-6730-40fa-ba8e-f21dc06828fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.538 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5e129c79-a2b3-40d7-afda-1e237d404d12]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84e9c235-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:ac:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 278], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 695826, 'reachable_time': 43866, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351743, 'error': None, 'target': 'ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.555 2 INFO nova.virt.libvirt.driver [None req-99235e08-0667-4fd6-8732-cf4828e0a0a4 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Instance shutdown successfully.#033[00m
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.566 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[db1eebf6-4343-4181-872f-fe8d155c9246]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6e:ac64'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 695826, 'tstamp': 695826}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351744, 'error': None, 'target': 'ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.594 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e654d127-add3-4f54-b5f2-0160947807cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84e9c235-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:ac:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 278], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 695826, 'reachable_time': 43866, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 351746, 'error': None, 'target': 'ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:59 np0005486808 NetworkManager[44885]: <info>  [1760433179.6256] manager: (tapce4eb1a6-22): new Tun device (/org/freedesktop/NetworkManager/Devices/393)
Oct 14 05:12:59 np0005486808 systemd-udevd[351718]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:12:59 np0005486808 kernel: tapce4eb1a6-22: entered promiscuous mode
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:59 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:59Z|00961|binding|INFO|Claiming lport ce4eb1a6-2221-4519-98fa-44a39da77b71 for this chassis.
Oct 14 05:12:59 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:59Z|00962|binding|INFO|ce4eb1a6-2221-4519-98fa-44a39da77b71: Claiming fa:16:3e:7f:9c:bd 10.100.0.7
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.639 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:9c:bd 10.100.0.7'], port_security=['fa:16:3e:7f:9c:bd 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9e354e27-d674-43c3-890b-caf8731cb827', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7cb8e394-ebca-4b27-8174-62c6b6f3a7da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d24993a343a425dbddac7e32be0c86b', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'c85ef4e1-bf02-447d-8de0-60f2d978738d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.177'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b7211eb9-3be4-4007-bf83-d7812e6ec9fe, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=ce4eb1a6-2221-4519-98fa-44a39da77b71) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:12:59 np0005486808 NetworkManager[44885]: <info>  [1760433179.6418] device (tapce4eb1a6-22): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:12:59 np0005486808 NetworkManager[44885]: <info>  [1760433179.6425] device (tapce4eb1a6-22): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.647 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1fd4da9b-0901-4495-b57d-7ba7f273e256]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:59 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:59Z|00963|binding|INFO|Setting lport ce4eb1a6-2221-4519-98fa-44a39da77b71 ovn-installed in OVS
Oct 14 05:12:59 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:59Z|00964|binding|INFO|Setting lport ce4eb1a6-2221-4519-98fa-44a39da77b71 up in Southbound
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:59 np0005486808 systemd-machined[214636]: New machine qemu-115-instance-0000005a.
Oct 14 05:12:59 np0005486808 systemd[1]: Started Virtual Machine qemu-115-instance-0000005a.
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.703 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9eb93d9c-d1b6-426f-8353-e2aab03ffcf2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.704 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84e9c235-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.705 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.705 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84e9c235-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:59 np0005486808 NetworkManager[44885]: <info>  [1760433179.7088] manager: (tap84e9c235-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/394)
Oct 14 05:12:59 np0005486808 kernel: tap84e9c235-f0: entered promiscuous mode
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.712 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap84e9c235-f0, col_values=(('external_ids', {'iface-id': '89fbd290-97fc-455c-8b54-34d7628205e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:59 np0005486808 ovn_controller[152662]: 2025-10-14T09:12:59Z|00965|binding|INFO|Releasing lport 89fbd290-97fc-455c-8b54-34d7628205e3 from this chassis (sb_readonly=0)
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.734 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/84e9c235-fb90-472a-8cac-f7ae999c18dd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/84e9c235-fb90-472a-8cac-f7ae999c18dd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.735 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e0d14054-f1f1-4de9-81cb-abfa1085089b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.737 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-84e9c235-fb90-472a-8cac-f7ae999c18dd
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/84e9c235-fb90-472a-8cac-f7ae999c18dd.pid.haproxy
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 84e9c235-fb90-472a-8cac-f7ae999c18dd
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:12:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:12:59.738 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd', 'env', 'PROCESS_TAG=haproxy-84e9c235-fb90-472a-8cac-f7ae999c18dd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/84e9c235-fb90-472a-8cac-f7ae999c18dd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:12:59 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:12:59 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1482927218' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.780 2 DEBUG oslo_concurrency.processutils [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.785 2 DEBUG nova.compute.provider_tree [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.801 2 DEBUG nova.scheduler.client.report [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.821 2 DEBUG oslo_concurrency.lockutils [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.823 2 DEBUG oslo_concurrency.lockutils [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.860 2 INFO nova.scheduler.client.report [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Deleted allocations for instance 4ad76124-48eb-467e-9a6f-951235efdb35#033[00m
Oct 14 05:12:59 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1756: 305 pgs: 305 active+clean; 339 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 178 op/s
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.932 2 DEBUG oslo_concurrency.lockutils [None req-cc7bc6f5-f401-4881-8210-d7c83572631f f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "4ad76124-48eb-467e-9a6f-951235efdb35" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.898s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:12:59 np0005486808 nova_compute[259627]: 2025-10-14 09:12:59.957 2 DEBUG oslo_concurrency.processutils [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:13:00 np0005486808 nova_compute[259627]: 2025-10-14 09:13:00.006 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433180.0061214, 69c5d250-71a4-47d5-a3ce-5b606ee9c692 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:13:00 np0005486808 nova_compute[259627]: 2025-10-14 09:13:00.007 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] VM Started (Lifecycle Event)#033[00m
Oct 14 05:13:00 np0005486808 nova_compute[259627]: 2025-10-14 09:13:00.030 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:13:00 np0005486808 nova_compute[259627]: 2025-10-14 09:13:00.037 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433180.0063717, 69c5d250-71a4-47d5-a3ce-5b606ee9c692 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:13:00 np0005486808 nova_compute[259627]: 2025-10-14 09:13:00.037 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:13:00 np0005486808 nova_compute[259627]: 2025-10-14 09:13:00.066 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:13:00 np0005486808 nova_compute[259627]: 2025-10-14 09:13:00.070 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:13:00 np0005486808 nova_compute[259627]: 2025-10-14 09:13:00.086 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:13:00 np0005486808 podman[351798]: 2025-10-14 09:13:00.117549493 +0000 UTC m=+0.062095021 container create 14321f61322052b4bbb57cf9392611bace413015cd068ac6e1bb558b247c87dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:13:00 np0005486808 systemd[1]: Started libpod-conmon-14321f61322052b4bbb57cf9392611bace413015cd068ac6e1bb558b247c87dd.scope.
Oct 14 05:13:00 np0005486808 podman[351798]: 2025-10-14 09:13:00.093035039 +0000 UTC m=+0.037580597 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:13:00 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:13:00 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09f0fb587d1d33dedf087780ab0b81814fff666e633a1b92da6547d217849fe2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:13:00 np0005486808 podman[351798]: 2025-10-14 09:13:00.211863675 +0000 UTC m=+0.156409253 container init 14321f61322052b4bbb57cf9392611bace413015cd068ac6e1bb558b247c87dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 05:13:00 np0005486808 podman[351798]: 2025-10-14 09:13:00.217273969 +0000 UTC m=+0.161819517 container start 14321f61322052b4bbb57cf9392611bace413015cd068ac6e1bb558b247c87dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 14 05:13:00 np0005486808 neutron-haproxy-ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd[351832]: [NOTICE]   (351836) : New worker (351838) forked
Oct 14 05:13:00 np0005486808 neutron-haproxy-ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd[351832]: [NOTICE]   (351836) : Loading success.
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.295 162547 INFO neutron.agent.ovn.metadata.agent [-] Port ce4eb1a6-2221-4519-98fa-44a39da77b71 in datapath 7cb8e394-ebca-4b27-8174-62c6b6f3a7da unbound from our chassis#033[00m
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.298 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7cb8e394-ebca-4b27-8174-62c6b6f3a7da#033[00m
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.309 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[52e016bd-8530-4ada-a9a9-b48eb7e7f85b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.310 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7cb8e394-e1 in ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.312 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7cb8e394-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.312 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2c9608da-f375-4c1a-869a-b0cc25b08632]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.313 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[36ae6675-b4c3-42af-8d8b-bdfc54edb838]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.324 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[57231915-169c-4709-95d7-6cce4d8c744a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.345 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[45764ea8-e268-4a5d-8630-07bff044f0d3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.372 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[050bcd29-6dba-4e6a-8872-c51e78bea2d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:00 np0005486808 NetworkManager[44885]: <info>  [1760433180.3792] manager: (tap7cb8e394-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/395)
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.379 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[53923f25-3f4b-4b3f-a5cb-9b5bdb2a0798]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:00 np0005486808 systemd-udevd[351854]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.430 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[71302f0e-6291-4c21-ab27-0ab0cdf8a6c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.434 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[999706e1-06f9-4b34-bd58-ffa00ab1cb07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:13:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2263679068' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:13:00 np0005486808 NetworkManager[44885]: <info>  [1760433180.4649] device (tap7cb8e394-e0): carrier: link connected
Oct 14 05:13:00 np0005486808 nova_compute[259627]: 2025-10-14 09:13:00.474 2 DEBUG oslo_concurrency.processutils [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.474 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[aa910f4b-0445-4cc5-ad67-2f3225d9b197]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:00 np0005486808 nova_compute[259627]: 2025-10-14 09:13:00.480 2 DEBUG nova.compute.provider_tree [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.493 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c15350fb-c0fb-46a3-83fb-0f4fea0affc8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7cb8e394-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:0c:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 280], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 695922, 'reachable_time': 30146, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351875, 'error': None, 'target': 'ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:00 np0005486808 nova_compute[259627]: 2025-10-14 09:13:00.495 2 DEBUG nova.scheduler.client.report [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.510 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[05e80341-0e3f-42be-8570-436f3f0b0402]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe52:c43'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 695922, 'tstamp': 695922}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351876, 'error': None, 'target': 'ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:00 np0005486808 nova_compute[259627]: 2025-10-14 09:13:00.513 2 DEBUG oslo_concurrency.lockutils [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.529 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[07bc643e-50f8-41ff-b725-c908420f9b69]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7cb8e394-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:0c:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 280], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 695922, 'reachable_time': 30146, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 351877, 'error': None, 'target': 'ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:00 np0005486808 nova_compute[259627]: 2025-10-14 09:13:00.537 2 INFO nova.scheduler.client.report [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Deleted allocations for instance bf38daab-2994-41c4-a44f-91e466acf68e#033[00m
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.557 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c9d11565-71c3-48a9-9fc5-f50dd353903f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:00 np0005486808 nova_compute[259627]: 2025-10-14 09:13:00.604 2 DEBUG oslo_concurrency.lockutils [None req-309dff00-5b99-4de5-b31c-f0d14d4a83fb f50b95774c384c5a8414b197ed5d7b82 5a057db932754d6eae91f0d2f359f1ff - - default default] Lock "bf38daab-2994-41c4-a44f-91e466acf68e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.234s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.612 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ecb4ea7c-2b64-41fe-9ea0-03ceb1a2cf37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.613 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7cb8e394-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.614 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.615 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7cb8e394-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:13:00 np0005486808 NetworkManager[44885]: <info>  [1760433180.6239] manager: (tap7cb8e394-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/396)
Oct 14 05:13:00 np0005486808 kernel: tap7cb8e394-e0: entered promiscuous mode
Oct 14 05:13:00 np0005486808 nova_compute[259627]: 2025-10-14 09:13:00.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.630 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7cb8e394-e0, col_values=(('external_ids', {'iface-id': 'abbcb164-8856-47e0-a7b9-984d66daedac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:13:00 np0005486808 ovn_controller[152662]: 2025-10-14T09:13:00Z|00966|binding|INFO|Releasing lport abbcb164-8856-47e0-a7b9-984d66daedac from this chassis (sb_readonly=0)
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.635 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7cb8e394-ebca-4b27-8174-62c6b6f3a7da.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7cb8e394-ebca-4b27-8174-62c6b6f3a7da.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.636 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[63c1bacb-9e25-4df1-8e52-75f3a2a0f427]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.637 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-7cb8e394-ebca-4b27-8174-62c6b6f3a7da
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/7cb8e394-ebca-4b27-8174-62c6b6f3a7da.pid.haproxy
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 7cb8e394-ebca-4b27-8174-62c6b6f3a7da
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:13:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:00.640 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da', 'env', 'PROCESS_TAG=haproxy-7cb8e394-ebca-4b27-8174-62c6b6f3a7da', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7cb8e394-ebca-4b27-8174-62c6b6f3a7da.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:13:00 np0005486808 nova_compute[259627]: 2025-10-14 09:13:00.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:01 np0005486808 podman[351951]: 2025-10-14 09:13:01.050403166 +0000 UTC m=+0.068918379 container create 0cf7bcdd9615bb7e33b88d7cee6e1494013d7f2c9874e170dd3c4be919a61bfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009)
Oct 14 05:13:01 np0005486808 podman[351951]: 2025-10-14 09:13:01.011341054 +0000 UTC m=+0.029856287 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:13:01 np0005486808 systemd[1]: Started libpod-conmon-0cf7bcdd9615bb7e33b88d7cee6e1494013d7f2c9874e170dd3c4be919a61bfd.scope.
Oct 14 05:13:01 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:13:01 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb65901d3e784ae2e04a1afc9a7cae7d402065e9d57d7aaf3b11e83a0ac7ab76/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:13:01 np0005486808 podman[351951]: 2025-10-14 09:13:01.171104348 +0000 UTC m=+0.189619601 container init 0cf7bcdd9615bb7e33b88d7cee6e1494013d7f2c9874e170dd3c4be919a61bfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:13:01 np0005486808 podman[351951]: 2025-10-14 09:13:01.176533352 +0000 UTC m=+0.195048575 container start 0cf7bcdd9615bb7e33b88d7cee6e1494013d7f2c9874e170dd3c4be919a61bfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:13:01 np0005486808 neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da[351966]: [NOTICE]   (351970) : New worker (351972) forked
Oct 14 05:13:01 np0005486808 neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da[351966]: [NOTICE]   (351970) : Loading success.
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.379 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for 9e354e27-d674-43c3-890b-caf8731cb827 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.380 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433181.3791194, 9e354e27-d674-43c3-890b-caf8731cb827 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.380 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.387 2 INFO nova.virt.libvirt.driver [-] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Instance running successfully.#033[00m
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.388 2 INFO nova.virt.libvirt.driver [None req-99235e08-0667-4fd6-8732-cf4828e0a0a4 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Instance soft rebooted successfully.#033[00m
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.388 2 DEBUG nova.compute.manager [None req-99235e08-0667-4fd6-8732-cf4828e0a0a4 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.422 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.425 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.453 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] During sync_power_state the instance has a pending task (reboot_started). Skip.#033[00m
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.453 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433181.3820546, 9e354e27-d674-43c3-890b-caf8731cb827 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.454 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] VM Started (Lifecycle Event)#033[00m
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.464 2 DEBUG oslo_concurrency.lockutils [None req-99235e08-0667-4fd6-8732-cf4828e0a0a4 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 6.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.479 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.482 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.554 2 DEBUG nova.compute.manager [req-37ebbd07-3660-4f25-9df8-76a5b652b073 req-83544a5f-e9b4-4249-8bd9-c8719440f1d5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Received event network-vif-deleted-66a0c3b8-73ab-490e-a3d4-06827c574cb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.554 2 DEBUG nova.compute.manager [req-37ebbd07-3660-4f25-9df8-76a5b652b073 req-83544a5f-e9b4-4249-8bd9-c8719440f1d5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Received event network-vif-deleted-59075c43-66a8-4a9c-a693-31f83575b355 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.590 2 DEBUG nova.compute.manager [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Received event network-vif-plugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.591 2 DEBUG oslo_concurrency.lockutils [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9e354e27-d674-43c3-890b-caf8731cb827-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.591 2 DEBUG oslo_concurrency.lockutils [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.591 2 DEBUG oslo_concurrency.lockutils [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.592 2 DEBUG nova.compute.manager [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] No waiting events found dispatching network-vif-plugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.592 2 WARNING nova.compute.manager [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Received unexpected event network-vif-plugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.593 2 DEBUG nova.compute.manager [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Received event network-vif-plugged-63189572-78bc-4d3a-8135-659f8c39ce7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.593 2 DEBUG oslo_concurrency.lockutils [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.593 2 DEBUG oslo_concurrency.lockutils [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.594 2 DEBUG oslo_concurrency.lockutils [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.594 2 DEBUG nova.compute.manager [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Processing event network-vif-plugged-63189572-78bc-4d3a-8135-659f8c39ce7d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.594 2 DEBUG nova.compute.manager [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Received event network-vif-plugged-63189572-78bc-4d3a-8135-659f8c39ce7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.595 2 DEBUG oslo_concurrency.lockutils [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.595 2 DEBUG oslo_concurrency.lockutils [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.595 2 DEBUG oslo_concurrency.lockutils [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.596 2 DEBUG nova.compute.manager [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] No waiting events found dispatching network-vif-plugged-63189572-78bc-4d3a-8135-659f8c39ce7d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.596 2 WARNING nova.compute.manager [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Received unexpected event network-vif-plugged-63189572-78bc-4d3a-8135-659f8c39ce7d for instance with vm_state building and task_state spawning.
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.597 2 DEBUG nova.compute.manager [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Received event network-vif-plugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.597 2 DEBUG oslo_concurrency.lockutils [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9e354e27-d674-43c3-890b-caf8731cb827-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.598 2 DEBUG oslo_concurrency.lockutils [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.598 2 DEBUG oslo_concurrency.lockutils [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.598 2 DEBUG nova.compute.manager [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] No waiting events found dispatching network-vif-plugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.598 2 WARNING nova.compute.manager [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Received unexpected event network-vif-plugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 for instance with vm_state active and task_state None.
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.599 2 DEBUG nova.compute.manager [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Received event network-vif-plugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.599 2 DEBUG oslo_concurrency.lockutils [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9e354e27-d674-43c3-890b-caf8731cb827-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.599 2 DEBUG oslo_concurrency.lockutils [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.600 2 DEBUG oslo_concurrency.lockutils [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.600 2 DEBUG nova.compute.manager [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] No waiting events found dispatching network-vif-plugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.600 2 WARNING nova.compute.manager [req-3982769b-1ba5-4611-8694-8ba2b0229b76 req-57557a26-4c64-4d47-b3a0-be38980ed4f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Received unexpected event network-vif-plugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 for instance with vm_state active and task_state None.
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.602 2 DEBUG nova.compute.manager [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.607 2 DEBUG nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.607 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433181.6068354, 69c5d250-71a4-47d5-a3ce-5b606ee9c692 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.608 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] VM Resumed (Lifecycle Event)
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.614 2 INFO nova.virt.libvirt.driver [-] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Instance spawned successfully.
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.615 2 DEBUG nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.629 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.643 2 DEBUG nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.644 2 DEBUG nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.645 2 DEBUG nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.646 2 DEBUG nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.646 2 DEBUG nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.647 2 DEBUG nova.virt.libvirt.driver [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.652 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.690 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.716 2 INFO nova.compute.manager [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Took 8.90 seconds to spawn the instance on the hypervisor.
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.717 2 DEBUG nova.compute.manager [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.814 2 INFO nova.compute.manager [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Took 9.96 seconds to build instance.
Oct 14 05:13:01 np0005486808 nova_compute[259627]: 2025-10-14 09:13:01.838 2 DEBUG oslo_concurrency.lockutils [None req-2c3ea568-7a4c-40f8-9ac4-42a22000322d 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:13:01 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1757: 305 pgs: 305 active+clean; 246 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.0 MiB/s wr, 304 op/s
Oct 14 05:13:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:13:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:13:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:13:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:13:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:13:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:13:02 np0005486808 nova_compute[259627]: 2025-10-14 09:13:02.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:13:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:13:02 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Oct 14 05:13:02 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:02.829320) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 05:13:02 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Oct 14 05:13:02 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433182829407, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 1575, "num_deletes": 250, "total_data_size": 2256590, "memory_usage": 2293880, "flush_reason": "Manual Compaction"}
Oct 14 05:13:02 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Oct 14 05:13:02 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433182842174, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 1340912, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35401, "largest_seqno": 36975, "table_properties": {"data_size": 1335477, "index_size": 2511, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 15272, "raw_average_key_size": 21, "raw_value_size": 1323174, "raw_average_value_size": 1827, "num_data_blocks": 113, "num_entries": 724, "num_filter_entries": 724, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760433037, "oldest_key_time": 1760433037, "file_creation_time": 1760433182, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:13:02 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 12923 microseconds, and 8632 cpu microseconds.
Oct 14 05:13:02 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:13:02 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:02.842252) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 1340912 bytes OK
Oct 14 05:13:02 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:02.842279) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Oct 14 05:13:02 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:02.844067) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Oct 14 05:13:02 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:02.844088) EVENT_LOG_v1 {"time_micros": 1760433182844081, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 05:13:02 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:02.844111) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 05:13:02 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 2249611, prev total WAL file size 2249611, number of live WAL files 2.
Oct 14 05:13:02 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:13:02 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:02.845163) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323530' seq:72057594037927935, type:22 .. '6D6772737461740031353031' seq:0, type:0; will stop at (end)
Oct 14 05:13:02 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 05:13:02 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(1309KB)], [77(9676KB)]
Oct 14 05:13:02 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433182845197, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 11249159, "oldest_snapshot_seqno": -1}
Oct 14 05:13:02 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6283 keys, 8824563 bytes, temperature: kUnknown
Oct 14 05:13:02 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433182891269, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 8824563, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8782661, "index_size": 25088, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15749, "raw_key_size": 157979, "raw_average_key_size": 25, "raw_value_size": 8670062, "raw_average_value_size": 1379, "num_data_blocks": 1019, "num_entries": 6283, "num_filter_entries": 6283, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760433182, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:13:02 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:13:02 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:02.891660) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 8824563 bytes
Oct 14 05:13:02 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:02.893578) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 243.7 rd, 191.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 9.4 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(15.0) write-amplify(6.6) OK, records in: 6731, records dropped: 448 output_compression: NoCompression
Oct 14 05:13:02 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:02.893609) EVENT_LOG_v1 {"time_micros": 1760433182893595, "job": 44, "event": "compaction_finished", "compaction_time_micros": 46165, "compaction_time_cpu_micros": 20741, "output_level": 6, "num_output_files": 1, "total_output_size": 8824563, "num_input_records": 6731, "num_output_records": 6283, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 05:13:02 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:13:02 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433182894260, "job": 44, "event": "table_file_deletion", "file_number": 79}
Oct 14 05:13:02 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:13:02 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433182897713, "job": 44, "event": "table_file_deletion", "file_number": 77}
Oct 14 05:13:02 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:02.845108) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:13:02 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:02.897768) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:13:02 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:02.897779) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:13:02 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:02.897783) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:13:02 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:02.897787) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:13:02 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:02.897791) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:13:03 np0005486808 nova_compute[259627]: 2025-10-14 09:13:03.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:13:03 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1758: 305 pgs: 305 active+clean; 246 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 241 op/s
Oct 14 05:13:05 np0005486808 nova_compute[259627]: 2025-10-14 09:13:05.192 2 DEBUG oslo_concurrency.lockutils [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Acquiring lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:13:05 np0005486808 nova_compute[259627]: 2025-10-14 09:13:05.193 2 DEBUG oslo_concurrency.lockutils [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:13:05 np0005486808 nova_compute[259627]: 2025-10-14 09:13:05.193 2 DEBUG oslo_concurrency.lockutils [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Acquiring lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:13:05 np0005486808 nova_compute[259627]: 2025-10-14 09:13:05.193 2 DEBUG oslo_concurrency.lockutils [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:13:05 np0005486808 nova_compute[259627]: 2025-10-14 09:13:05.193 2 DEBUG oslo_concurrency.lockutils [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:13:05 np0005486808 nova_compute[259627]: 2025-10-14 09:13:05.194 2 INFO nova.compute.manager [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Terminating instance
Oct 14 05:13:05 np0005486808 nova_compute[259627]: 2025-10-14 09:13:05.195 2 DEBUG nova.compute.manager [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 05:13:05 np0005486808 kernel: tap63189572-78 (unregistering): left promiscuous mode
Oct 14 05:13:05 np0005486808 NetworkManager[44885]: <info>  [1760433185.2612] device (tap63189572-78): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:13:05 np0005486808 ovn_controller[152662]: 2025-10-14T09:13:05Z|00967|binding|INFO|Releasing lport 63189572-78bc-4d3a-8135-659f8c39ce7d from this chassis (sb_readonly=0)
Oct 14 05:13:05 np0005486808 ovn_controller[152662]: 2025-10-14T09:13:05Z|00968|binding|INFO|Setting lport 63189572-78bc-4d3a-8135-659f8c39ce7d down in Southbound
Oct 14 05:13:05 np0005486808 ovn_controller[152662]: 2025-10-14T09:13:05Z|00969|binding|INFO|Removing iface tap63189572-78 ovn-installed in OVS
Oct 14 05:13:05 np0005486808 nova_compute[259627]: 2025-10-14 09:13:05.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:13:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:05.283 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:ab:40 10.100.0.11'], port_security=['fa:16:3e:4a:ab:40 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '69c5d250-71a4-47d5-a3ce-5b606ee9c692', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84e9c235-fb90-472a-8cac-f7ae999c18dd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd094b0f9acb49ca8b1f295403a44ec3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '10487db5-e210-4216-ae01-596f4a7c7ce6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9fab9082-797f-44f0-9d82-af22d5b6d133, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=63189572-78bc-4d3a-8135-659f8c39ce7d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 05:13:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:05.284 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 63189572-78bc-4d3a-8135-659f8c39ce7d in datapath 84e9c235-fb90-472a-8cac-f7ae999c18dd unbound from our chassis
Oct 14 05:13:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:05.286 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 84e9c235-fb90-472a-8cac-f7ae999c18dd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 05:13:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:05.287 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9d0b13b2-8d03-4091-9975-acd35d6e4728]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:13:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:05.288 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd namespace which is not needed anymore
Oct 14 05:13:05 np0005486808 nova_compute[259627]: 2025-10-14 09:13:05.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:13:05 np0005486808 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Oct 14 05:13:05 np0005486808 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d0000005d.scope: Consumed 4.507s CPU time.
Oct 14 05:13:05 np0005486808 systemd-machined[214636]: Machine qemu-114-instance-0000005d terminated.
Oct 14 05:13:05 np0005486808 nova_compute[259627]: 2025-10-14 09:13:05.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:05 np0005486808 nova_compute[259627]: 2025-10-14 09:13:05.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:05 np0005486808 nova_compute[259627]: 2025-10-14 09:13:05.426 2 INFO nova.virt.libvirt.driver [-] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Instance destroyed successfully.#033[00m
Oct 14 05:13:05 np0005486808 nova_compute[259627]: 2025-10-14 09:13:05.426 2 DEBUG nova.objects.instance [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Lazy-loading 'resources' on Instance uuid 69c5d250-71a4-47d5-a3ce-5b606ee9c692 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:13:05 np0005486808 neutron-haproxy-ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd[351832]: [NOTICE]   (351836) : haproxy version is 2.8.14-c23fe91
Oct 14 05:13:05 np0005486808 neutron-haproxy-ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd[351832]: [NOTICE]   (351836) : path to executable is /usr/sbin/haproxy
Oct 14 05:13:05 np0005486808 neutron-haproxy-ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd[351832]: [WARNING]  (351836) : Exiting Master process...
Oct 14 05:13:05 np0005486808 nova_compute[259627]: 2025-10-14 09:13:05.440 2 DEBUG nova.virt.libvirt.vif [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:12:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-1449697808',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-1449697808',id=93,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:13:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fd094b0f9acb49ca8b1f295403a44ec3',ramdisk_id='',reservation_id='r-0pjevjco',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerTagsTestJSON-648965588',owner_user_name='tempest-ServerTagsTestJSON-648965588-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:13:01Z,user_data=None,user_id='21d93d87742344a1b7662df0d97a69b2',uuid=69c5d250-71a4-47d5-a3ce-5b606ee9c692,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "63189572-78bc-4d3a-8135-659f8c39ce7d", "address": "fa:16:3e:4a:ab:40", "network": {"id": "84e9c235-fb90-472a-8cac-f7ae999c18dd", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-148072854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd094b0f9acb49ca8b1f295403a44ec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63189572-78", "ovs_interfaceid": "63189572-78bc-4d3a-8135-659f8c39ce7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:13:05 np0005486808 neutron-haproxy-ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd[351832]: [ALERT]    (351836) : Current worker (351838) exited with code 143 (Terminated)
Oct 14 05:13:05 np0005486808 neutron-haproxy-ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd[351832]: [WARNING]  (351836) : All workers exited. Exiting... (0)
Oct 14 05:13:05 np0005486808 nova_compute[259627]: 2025-10-14 09:13:05.441 2 DEBUG nova.network.os_vif_util [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Converting VIF {"id": "63189572-78bc-4d3a-8135-659f8c39ce7d", "address": "fa:16:3e:4a:ab:40", "network": {"id": "84e9c235-fb90-472a-8cac-f7ae999c18dd", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-148072854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd094b0f9acb49ca8b1f295403a44ec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63189572-78", "ovs_interfaceid": "63189572-78bc-4d3a-8135-659f8c39ce7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:13:05 np0005486808 nova_compute[259627]: 2025-10-14 09:13:05.441 2 DEBUG nova.network.os_vif_util [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:ab:40,bridge_name='br-int',has_traffic_filtering=True,id=63189572-78bc-4d3a-8135-659f8c39ce7d,network=Network(84e9c235-fb90-472a-8cac-f7ae999c18dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63189572-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:13:05 np0005486808 nova_compute[259627]: 2025-10-14 09:13:05.442 2 DEBUG os_vif [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:ab:40,bridge_name='br-int',has_traffic_filtering=True,id=63189572-78bc-4d3a-8135-659f8c39ce7d,network=Network(84e9c235-fb90-472a-8cac-f7ae999c18dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63189572-78') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:13:05 np0005486808 systemd[1]: libpod-14321f61322052b4bbb57cf9392611bace413015cd068ac6e1bb558b247c87dd.scope: Deactivated successfully.
Oct 14 05:13:05 np0005486808 nova_compute[259627]: 2025-10-14 09:13:05.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:05 np0005486808 nova_compute[259627]: 2025-10-14 09:13:05.445 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap63189572-78, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:13:05 np0005486808 nova_compute[259627]: 2025-10-14 09:13:05.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:05 np0005486808 podman[352004]: 2025-10-14 09:13:05.449952344 +0000 UTC m=+0.059021694 container died 14321f61322052b4bbb57cf9392611bace413015cd068ac6e1bb558b247c87dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 14 05:13:05 np0005486808 nova_compute[259627]: 2025-10-14 09:13:05.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:13:05 np0005486808 nova_compute[259627]: 2025-10-14 09:13:05.455 2 INFO os_vif [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:ab:40,bridge_name='br-int',has_traffic_filtering=True,id=63189572-78bc-4d3a-8135-659f8c39ce7d,network=Network(84e9c235-fb90-472a-8cac-f7ae999c18dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63189572-78')#033[00m
Oct 14 05:13:05 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-14321f61322052b4bbb57cf9392611bace413015cd068ac6e1bb558b247c87dd-userdata-shm.mount: Deactivated successfully.
Oct 14 05:13:05 np0005486808 systemd[1]: var-lib-containers-storage-overlay-09f0fb587d1d33dedf087780ab0b81814fff666e633a1b92da6547d217849fe2-merged.mount: Deactivated successfully.
Oct 14 05:13:05 np0005486808 podman[352004]: 2025-10-14 09:13:05.492178034 +0000 UTC m=+0.101247384 container cleanup 14321f61322052b4bbb57cf9392611bace413015cd068ac6e1bb558b247c87dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:13:05 np0005486808 systemd[1]: libpod-conmon-14321f61322052b4bbb57cf9392611bace413015cd068ac6e1bb558b247c87dd.scope: Deactivated successfully.
Oct 14 05:13:05 np0005486808 podman[352059]: 2025-10-14 09:13:05.564754852 +0000 UTC m=+0.045769389 container remove 14321f61322052b4bbb57cf9392611bace413015cd068ac6e1bb558b247c87dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 14 05:13:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:05.571 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c9aac613-835d-468d-8d35-292d9a1471c3]: (4, ('Tue Oct 14 09:13:05 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd (14321f61322052b4bbb57cf9392611bace413015cd068ac6e1bb558b247c87dd)\n14321f61322052b4bbb57cf9392611bace413015cd068ac6e1bb558b247c87dd\nTue Oct 14 09:13:05 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd (14321f61322052b4bbb57cf9392611bace413015cd068ac6e1bb558b247c87dd)\n14321f61322052b4bbb57cf9392611bace413015cd068ac6e1bb558b247c87dd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:05.573 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5ffcdfde-3601-450e-8842-50591177b03c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:05.573 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84e9c235-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:13:05 np0005486808 kernel: tap84e9c235-f0: left promiscuous mode
Oct 14 05:13:05 np0005486808 nova_compute[259627]: 2025-10-14 09:13:05.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:05 np0005486808 nova_compute[259627]: 2025-10-14 09:13:05.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:05.600 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1ae4597e-29fe-4952-a76f-f783694c6972]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:13:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4147017175' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:13:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:05.627 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d5bebf9f-d28a-47dc-94c1-f2b89527b2ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:13:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4147017175' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:13:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:05.628 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[74d3495d-da57-439c-a8c3-29020a472077]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:05.644 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ae1c145d-938b-45b1-806a-bfd54d6bd6b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 695817, 'reachable_time': 22286, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352074, 'error': None, 'target': 'ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:05 np0005486808 systemd[1]: run-netns-ovnmeta\x2d84e9c235\x2dfb90\x2d472a\x2d8cac\x2df7ae999c18dd.mount: Deactivated successfully.
Oct 14 05:13:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:05.646 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-84e9c235-fb90-472a-8cac-f7ae999c18dd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:13:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:05.646 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[7737ecbe-3bbf-47d4-9a2a-49e0d45fab6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:05 np0005486808 nova_compute[259627]: 2025-10-14 09:13:05.842 2 INFO nova.virt.libvirt.driver [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Deleting instance files /var/lib/nova/instances/69c5d250-71a4-47d5-a3ce-5b606ee9c692_del#033[00m
Oct 14 05:13:05 np0005486808 nova_compute[259627]: 2025-10-14 09:13:05.843 2 INFO nova.virt.libvirt.driver [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Deletion of /var/lib/nova/instances/69c5d250-71a4-47d5-a3ce-5b606ee9c692_del complete#033[00m
Oct 14 05:13:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:05.863 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:13:05 np0005486808 nova_compute[259627]: 2025-10-14 09:13:05.914 2 INFO nova.compute.manager [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:13:05 np0005486808 nova_compute[259627]: 2025-10-14 09:13:05.916 2 DEBUG oslo.service.loopingcall [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:13:05 np0005486808 nova_compute[259627]: 2025-10-14 09:13:05.916 2 DEBUG nova.compute.manager [-] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:13:05 np0005486808 nova_compute[259627]: 2025-10-14 09:13:05.916 2 DEBUG nova.network.neutron [-] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:13:05 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1759: 305 pgs: 305 active+clean; 229 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 1.8 MiB/s wr, 382 op/s
Oct 14 05:13:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:07.029 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:13:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:07.029 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:13:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:07.030 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:13:07 np0005486808 ovn_controller[152662]: 2025-10-14T09:13:07Z|00970|binding|INFO|Releasing lport 61fe5571-a8eb-446a-8c4c-1f6f6758b146 from this chassis (sb_readonly=0)
Oct 14 05:13:07 np0005486808 ovn_controller[152662]: 2025-10-14T09:13:07Z|00971|binding|INFO|Releasing lport abbcb164-8856-47e0-a7b9-984d66daedac from this chassis (sb_readonly=0)
Oct 14 05:13:07 np0005486808 nova_compute[259627]: 2025-10-14 09:13:07.797 2 DEBUG nova.compute.manager [req-c4ce80b7-a335-478b-87f9-a5ae57380905 req-556dcb3d-d86f-4c1a-bf92-1090739dcf58 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Received event network-vif-unplugged-63189572-78bc-4d3a-8135-659f8c39ce7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:13:07 np0005486808 nova_compute[259627]: 2025-10-14 09:13:07.797 2 DEBUG oslo_concurrency.lockutils [req-c4ce80b7-a335-478b-87f9-a5ae57380905 req-556dcb3d-d86f-4c1a-bf92-1090739dcf58 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:13:07 np0005486808 nova_compute[259627]: 2025-10-14 09:13:07.798 2 DEBUG oslo_concurrency.lockutils [req-c4ce80b7-a335-478b-87f9-a5ae57380905 req-556dcb3d-d86f-4c1a-bf92-1090739dcf58 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:13:07 np0005486808 nova_compute[259627]: 2025-10-14 09:13:07.798 2 DEBUG oslo_concurrency.lockutils [req-c4ce80b7-a335-478b-87f9-a5ae57380905 req-556dcb3d-d86f-4c1a-bf92-1090739dcf58 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:13:07 np0005486808 nova_compute[259627]: 2025-10-14 09:13:07.798 2 DEBUG nova.compute.manager [req-c4ce80b7-a335-478b-87f9-a5ae57380905 req-556dcb3d-d86f-4c1a-bf92-1090739dcf58 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] No waiting events found dispatching network-vif-unplugged-63189572-78bc-4d3a-8135-659f8c39ce7d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:13:07 np0005486808 nova_compute[259627]: 2025-10-14 09:13:07.798 2 DEBUG nova.compute.manager [req-c4ce80b7-a335-478b-87f9-a5ae57380905 req-556dcb3d-d86f-4c1a-bf92-1090739dcf58 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Received event network-vif-unplugged-63189572-78bc-4d3a-8135-659f8c39ce7d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:13:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:13:07 np0005486808 nova_compute[259627]: 2025-10-14 09:13:07.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:07 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1760: 305 pgs: 305 active+clean; 229 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 34 KiB/s wr, 267 op/s
Oct 14 05:13:07 np0005486808 nova_compute[259627]: 2025-10-14 09:13:07.999 2 DEBUG nova.network.neutron [-] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:13:08 np0005486808 nova_compute[259627]: 2025-10-14 09:13:08.018 2 INFO nova.compute.manager [-] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Took 2.10 seconds to deallocate network for instance.#033[00m
Oct 14 05:13:08 np0005486808 nova_compute[259627]: 2025-10-14 09:13:08.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:08 np0005486808 nova_compute[259627]: 2025-10-14 09:13:08.058 2 DEBUG oslo_concurrency.lockutils [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:13:08 np0005486808 nova_compute[259627]: 2025-10-14 09:13:08.059 2 DEBUG oslo_concurrency.lockutils [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:13:08 np0005486808 nova_compute[259627]: 2025-10-14 09:13:08.081 2 DEBUG nova.compute.manager [req-f340ee66-6f8f-42a6-a511-669659ddfb5e req-63b39487-3446-4bbd-b58f-35a537d7c510 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Received event network-vif-deleted-63189572-78bc-4d3a-8135-659f8c39ce7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:13:08 np0005486808 nova_compute[259627]: 2025-10-14 09:13:08.139 2 DEBUG oslo_concurrency.processutils [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:13:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:13:08 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1874453173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:13:08 np0005486808 nova_compute[259627]: 2025-10-14 09:13:08.549 2 DEBUG oslo_concurrency.processutils [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:13:08 np0005486808 nova_compute[259627]: 2025-10-14 09:13:08.555 2 DEBUG nova.compute.provider_tree [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:13:08 np0005486808 nova_compute[259627]: 2025-10-14 09:13:08.574 2 DEBUG nova.scheduler.client.report [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:13:08 np0005486808 nova_compute[259627]: 2025-10-14 09:13:08.601 2 DEBUG oslo_concurrency.lockutils [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.542s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:13:08 np0005486808 nova_compute[259627]: 2025-10-14 09:13:08.634 2 INFO nova.scheduler.client.report [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Deleted allocations for instance 69c5d250-71a4-47d5-a3ce-5b606ee9c692#033[00m
Oct 14 05:13:08 np0005486808 nova_compute[259627]: 2025-10-14 09:13:08.733 2 DEBUG oslo_concurrency.lockutils [None req-4be8627d-8e09-4cfd-8425-f61a103d6d92 21d93d87742344a1b7662df0d97a69b2 fd094b0f9acb49ca8b1f295403a44ec3 - - default default] Lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.540s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:13:09 np0005486808 nova_compute[259627]: 2025-10-14 09:13:09.914 2 DEBUG nova.compute.manager [req-e2fcf457-5c7d-4488-84aa-2cb08da1feb5 req-089b4028-bcb5-46c8-91d0-de6d8ca1bc6e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Received event network-vif-plugged-63189572-78bc-4d3a-8135-659f8c39ce7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:13:09 np0005486808 nova_compute[259627]: 2025-10-14 09:13:09.915 2 DEBUG oslo_concurrency.lockutils [req-e2fcf457-5c7d-4488-84aa-2cb08da1feb5 req-089b4028-bcb5-46c8-91d0-de6d8ca1bc6e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:13:09 np0005486808 nova_compute[259627]: 2025-10-14 09:13:09.915 2 DEBUG oslo_concurrency.lockutils [req-e2fcf457-5c7d-4488-84aa-2cb08da1feb5 req-089b4028-bcb5-46c8-91d0-de6d8ca1bc6e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:13:09 np0005486808 nova_compute[259627]: 2025-10-14 09:13:09.915 2 DEBUG oslo_concurrency.lockutils [req-e2fcf457-5c7d-4488-84aa-2cb08da1feb5 req-089b4028-bcb5-46c8-91d0-de6d8ca1bc6e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "69c5d250-71a4-47d5-a3ce-5b606ee9c692-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:13:09 np0005486808 nova_compute[259627]: 2025-10-14 09:13:09.915 2 DEBUG nova.compute.manager [req-e2fcf457-5c7d-4488-84aa-2cb08da1feb5 req-089b4028-bcb5-46c8-91d0-de6d8ca1bc6e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] No waiting events found dispatching network-vif-plugged-63189572-78bc-4d3a-8135-659f8c39ce7d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:13:09 np0005486808 nova_compute[259627]: 2025-10-14 09:13:09.915 2 WARNING nova.compute.manager [req-e2fcf457-5c7d-4488-84aa-2cb08da1feb5 req-089b4028-bcb5-46c8-91d0-de6d8ca1bc6e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Received unexpected event network-vif-plugged-63189572-78bc-4d3a-8135-659f8c39ce7d for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:13:09 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1761: 305 pgs: 305 active+clean; 229 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 34 KiB/s wr, 267 op/s
Oct 14 05:13:10 np0005486808 nova_compute[259627]: 2025-10-14 09:13:10.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:11 np0005486808 ovn_controller[152662]: 2025-10-14T09:13:11Z|00972|binding|INFO|Releasing lport 61fe5571-a8eb-446a-8c4c-1f6f6758b146 from this chassis (sb_readonly=0)
Oct 14 05:13:11 np0005486808 ovn_controller[152662]: 2025-10-14T09:13:11Z|00973|binding|INFO|Releasing lport abbcb164-8856-47e0-a7b9-984d66daedac from this chassis (sb_readonly=0)
Oct 14 05:13:11 np0005486808 nova_compute[259627]: 2025-10-14 09:13:11.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:11 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1762: 305 pgs: 305 active+clean; 200 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 36 KiB/s wr, 285 op/s
Oct 14 05:13:12 np0005486808 nova_compute[259627]: 2025-10-14 09:13:12.282 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433177.2761786, 4ad76124-48eb-467e-9a6f-951235efdb35 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:13:12 np0005486808 nova_compute[259627]: 2025-10-14 09:13:12.282 2 INFO nova.compute.manager [-] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:13:12 np0005486808 nova_compute[259627]: 2025-10-14 09:13:12.301 2 DEBUG nova.compute.manager [None req-1b6de2f0-c753-4cd6-8dbf-e5d9e0209af7 - - - - - -] [instance: 4ad76124-48eb-467e-9a6f-951235efdb35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:13:12 np0005486808 nova_compute[259627]: 2025-10-14 09:13:12.623 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433177.6219695, bf38daab-2994-41c4-a44f-91e466acf68e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:13:12 np0005486808 nova_compute[259627]: 2025-10-14 09:13:12.624 2 INFO nova.compute.manager [-] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:13:12 np0005486808 nova_compute[259627]: 2025-10-14 09:13:12.639 2 DEBUG nova.compute.manager [None req-6ffc8255-f604-4e0b-8513-9cef3200cae7 - - - - - -] [instance: bf38daab-2994-41c4-a44f-91e466acf68e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:13:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:13:13 np0005486808 nova_compute[259627]: 2025-10-14 09:13:13.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:13 np0005486808 ovn_controller[152662]: 2025-10-14T09:13:13Z|00097|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7f:9c:bd 10.100.0.7
Oct 14 05:13:13 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1763: 305 pgs: 305 active+clean; 200 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.5 KiB/s wr, 159 op/s
Oct 14 05:13:15 np0005486808 nova_compute[259627]: 2025-10-14 09:13:15.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:15 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1764: 305 pgs: 305 active+clean; 200 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 15 KiB/s wr, 202 op/s
Oct 14 05:13:16 np0005486808 nova_compute[259627]: 2025-10-14 09:13:16.317 2 INFO nova.compute.manager [None req-22cc4328-8bda-4035-abc4-5e559b02d24e 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Pausing#033[00m
Oct 14 05:13:16 np0005486808 nova_compute[259627]: 2025-10-14 09:13:16.319 2 DEBUG nova.objects.instance [None req-22cc4328-8bda-4035-abc4-5e559b02d24e 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lazy-loading 'flavor' on Instance uuid 2534f8b9-e832-4b78-ada4-e551429bdc75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:13:16 np0005486808 nova_compute[259627]: 2025-10-14 09:13:16.352 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433196.3518085, 2534f8b9-e832-4b78-ada4-e551429bdc75 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:13:16 np0005486808 nova_compute[259627]: 2025-10-14 09:13:16.352 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:13:16 np0005486808 nova_compute[259627]: 2025-10-14 09:13:16.354 2 DEBUG nova.compute.manager [None req-22cc4328-8bda-4035-abc4-5e559b02d24e 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:13:16 np0005486808 nova_compute[259627]: 2025-10-14 09:13:16.380 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:13:16 np0005486808 nova_compute[259627]: 2025-10-14 09:13:16.387 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:13:16 np0005486808 nova_compute[259627]: 2025-10-14 09:13:16.415 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Oct 14 05:13:16 np0005486808 podman[352098]: 2025-10-14 09:13:16.667260858 +0000 UTC m=+0.080453712 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 14 05:13:16 np0005486808 podman[352099]: 2025-10-14 09:13:16.672247201 +0000 UTC m=+0.078680019 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid)
Oct 14 05:13:16 np0005486808 nova_compute[259627]: 2025-10-14 09:13:16.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:13:16 np0005486808 nova_compute[259627]: 2025-10-14 09:13:16.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 14 05:13:17 np0005486808 nova_compute[259627]: 2025-10-14 09:13:17.178 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:13:17 np0005486808 nova_compute[259627]: 2025-10-14 09:13:17.288 2 INFO nova.compute.manager [None req-f137f94f-93e6-4596-bf78-5823e16a02bd 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Unpausing#033[00m
Oct 14 05:13:17 np0005486808 nova_compute[259627]: 2025-10-14 09:13:17.291 2 DEBUG nova.objects.instance [None req-f137f94f-93e6-4596-bf78-5823e16a02bd 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lazy-loading 'flavor' on Instance uuid 2534f8b9-e832-4b78-ada4-e551429bdc75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:13:17 np0005486808 nova_compute[259627]: 2025-10-14 09:13:17.331 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433197.330903, 2534f8b9-e832-4b78-ada4-e551429bdc75 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:13:17 np0005486808 nova_compute[259627]: 2025-10-14 09:13:17.331 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:13:17 np0005486808 virtqemud[259351]: argument unsupported: QEMU guest agent is not configured
Oct 14 05:13:17 np0005486808 nova_compute[259627]: 2025-10-14 09:13:17.340 2 DEBUG nova.virt.libvirt.guest [None req-f137f94f-93e6-4596-bf78-5823e16a02bd 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct 14 05:13:17 np0005486808 nova_compute[259627]: 2025-10-14 09:13:17.341 2 DEBUG nova.compute.manager [None req-f137f94f-93e6-4596-bf78-5823e16a02bd 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:13:17 np0005486808 nova_compute[259627]: 2025-10-14 09:13:17.372 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:13:17 np0005486808 nova_compute[259627]: 2025-10-14 09:13:17.378 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:13:17 np0005486808 nova_compute[259627]: 2025-10-14 09:13:17.418 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] During sync_power_state the instance has a pending task (unpausing). Skip.#033[00m
Oct 14 05:13:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:13:17 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1765: 305 pgs: 305 active+clean; 200 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 541 KiB/s rd, 13 KiB/s wr, 61 op/s
Oct 14 05:13:18 np0005486808 nova_compute[259627]: 2025-10-14 09:13:18.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:19 np0005486808 nova_compute[259627]: 2025-10-14 09:13:19.051 2 INFO nova.compute.manager [None req-24960b08-3c49-4313-8a93-c290d9923c81 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Get console output#033[00m
Oct 14 05:13:19 np0005486808 nova_compute[259627]: 2025-10-14 09:13:19.057 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 14 05:13:19 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1766: 305 pgs: 305 active+clean; 200 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 541 KiB/s rd, 13 KiB/s wr, 61 op/s
Oct 14 05:13:20 np0005486808 nova_compute[259627]: 2025-10-14 09:13:20.005 2 DEBUG oslo_concurrency.lockutils [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "9e354e27-d674-43c3-890b-caf8731cb827" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:13:20 np0005486808 nova_compute[259627]: 2025-10-14 09:13:20.007 2 DEBUG oslo_concurrency.lockutils [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:13:20 np0005486808 nova_compute[259627]: 2025-10-14 09:13:20.007 2 DEBUG oslo_concurrency.lockutils [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "9e354e27-d674-43c3-890b-caf8731cb827-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:13:20 np0005486808 nova_compute[259627]: 2025-10-14 09:13:20.008 2 DEBUG oslo_concurrency.lockutils [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:13:20 np0005486808 nova_compute[259627]: 2025-10-14 09:13:20.009 2 DEBUG oslo_concurrency.lockutils [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:13:20 np0005486808 nova_compute[259627]: 2025-10-14 09:13:20.011 2 INFO nova.compute.manager [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Terminating instance#033[00m
Oct 14 05:13:20 np0005486808 nova_compute[259627]: 2025-10-14 09:13:20.012 2 DEBUG nova.compute.manager [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:13:20 np0005486808 kernel: tapce4eb1a6-22 (unregistering): left promiscuous mode
Oct 14 05:13:20 np0005486808 NetworkManager[44885]: <info>  [1760433200.0726] device (tapce4eb1a6-22): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:13:20 np0005486808 ovn_controller[152662]: 2025-10-14T09:13:20Z|00974|binding|INFO|Releasing lport ce4eb1a6-2221-4519-98fa-44a39da77b71 from this chassis (sb_readonly=0)
Oct 14 05:13:20 np0005486808 ovn_controller[152662]: 2025-10-14T09:13:20Z|00975|binding|INFO|Setting lport ce4eb1a6-2221-4519-98fa-44a39da77b71 down in Southbound
Oct 14 05:13:20 np0005486808 ovn_controller[152662]: 2025-10-14T09:13:20Z|00976|binding|INFO|Removing iface tapce4eb1a6-22 ovn-installed in OVS
Oct 14 05:13:20 np0005486808 nova_compute[259627]: 2025-10-14 09:13:20.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:20 np0005486808 nova_compute[259627]: 2025-10-14 09:13:20.096 2 DEBUG nova.compute.manager [req-2cb9d5bd-1b31-44e6-8356-0f162903b167 req-ecbfc925-75d0-4731-8cd8-bbd29f99c9fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Received event network-changed-ce4eb1a6-2221-4519-98fa-44a39da77b71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:13:20 np0005486808 nova_compute[259627]: 2025-10-14 09:13:20.097 2 DEBUG nova.compute.manager [req-2cb9d5bd-1b31-44e6-8356-0f162903b167 req-ecbfc925-75d0-4731-8cd8-bbd29f99c9fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Refreshing instance network info cache due to event network-changed-ce4eb1a6-2221-4519-98fa-44a39da77b71. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:13:20 np0005486808 nova_compute[259627]: 2025-10-14 09:13:20.097 2 DEBUG oslo_concurrency.lockutils [req-2cb9d5bd-1b31-44e6-8356-0f162903b167 req-ecbfc925-75d0-4731-8cd8-bbd29f99c9fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-9e354e27-d674-43c3-890b-caf8731cb827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:13:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:20.098 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:9c:bd 10.100.0.7'], port_security=['fa:16:3e:7f:9c:bd 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9e354e27-d674-43c3-890b-caf8731cb827', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7cb8e394-ebca-4b27-8174-62c6b6f3a7da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d24993a343a425dbddac7e32be0c86b', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'c85ef4e1-bf02-447d-8de0-60f2d978738d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b7211eb9-3be4-4007-bf83-d7812e6ec9fe, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=ce4eb1a6-2221-4519-98fa-44a39da77b71) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:13:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:20.099 162547 INFO neutron.agent.ovn.metadata.agent [-] Port ce4eb1a6-2221-4519-98fa-44a39da77b71 in datapath 7cb8e394-ebca-4b27-8174-62c6b6f3a7da unbound from our chassis#033[00m
Oct 14 05:13:20 np0005486808 nova_compute[259627]: 2025-10-14 09:13:20.098 2 DEBUG oslo_concurrency.lockutils [req-2cb9d5bd-1b31-44e6-8356-0f162903b167 req-ecbfc925-75d0-4731-8cd8-bbd29f99c9fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-9e354e27-d674-43c3-890b-caf8731cb827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:13:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:20.100 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7cb8e394-ebca-4b27-8174-62c6b6f3a7da, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:13:20 np0005486808 nova_compute[259627]: 2025-10-14 09:13:20.099 2 DEBUG nova.network.neutron [req-2cb9d5bd-1b31-44e6-8356-0f162903b167 req-ecbfc925-75d0-4731-8cd8-bbd29f99c9fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Refreshing network info cache for port ce4eb1a6-2221-4519-98fa-44a39da77b71 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:13:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:20.101 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8e525b56-fd62-4a2b-82ef-d25a0ea4970e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:20.101 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da namespace which is not needed anymore#033[00m
Oct 14 05:13:20 np0005486808 nova_compute[259627]: 2025-10-14 09:13:20.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:20 np0005486808 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d0000005a.scope: Deactivated successfully.
Oct 14 05:13:20 np0005486808 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d0000005a.scope: Consumed 13.224s CPU time.
Oct 14 05:13:20 np0005486808 systemd-machined[214636]: Machine qemu-115-instance-0000005a terminated.
Oct 14 05:13:20 np0005486808 nova_compute[259627]: 2025-10-14 09:13:20.264 2 INFO nova.virt.libvirt.driver [-] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Instance destroyed successfully.#033[00m
Oct 14 05:13:20 np0005486808 nova_compute[259627]: 2025-10-14 09:13:20.266 2 DEBUG nova.objects.instance [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'resources' on Instance uuid 9e354e27-d674-43c3-890b-caf8731cb827 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:13:20 np0005486808 neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da[351966]: [NOTICE]   (351970) : haproxy version is 2.8.14-c23fe91
Oct 14 05:13:20 np0005486808 neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da[351966]: [NOTICE]   (351970) : path to executable is /usr/sbin/haproxy
Oct 14 05:13:20 np0005486808 neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da[351966]: [WARNING]  (351970) : Exiting Master process...
Oct 14 05:13:20 np0005486808 neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da[351966]: [ALERT]    (351970) : Current worker (351972) exited with code 143 (Terminated)
Oct 14 05:13:20 np0005486808 neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da[351966]: [WARNING]  (351970) : All workers exited. Exiting... (0)
Oct 14 05:13:20 np0005486808 systemd[1]: libpod-0cf7bcdd9615bb7e33b88d7cee6e1494013d7f2c9874e170dd3c4be919a61bfd.scope: Deactivated successfully.
Oct 14 05:13:20 np0005486808 nova_compute[259627]: 2025-10-14 09:13:20.292 2 DEBUG nova.virt.libvirt.vif [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:12:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-461250371',display_name='tempest-TestNetworkAdvancedServerOps-server-461250371',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-461250371',id=90,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCcVmsUIwo920oPyHLmJGrkrYCQ5UunB9Yv0/Buc9cCiSWB2ZOXdvOp0s2cEsPfEAttRSx6VdIWgt0joL5sdVyP2CI3WgYA2zF+RirB/x5531ApwlIJzNgUQx7hgxyfijg==',key_name='tempest-TestNetworkAdvancedServerOps-806517333',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:12:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2d24993a343a425dbddac7e32be0c86b',ramdisk_id='',reservation_id='r-dt6vqls1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-94788416',owner_user_name='tempest-TestNetworkAdvancedServerOps-94788416-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:13:01Z,user_data=None,user_id='e992bcb79c4946a8985e3df25eb216ca',uuid=9e354e27-d674-43c3-890b-caf8731cb827,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "address": "fa:16:3e:7f:9c:bd", "network": {"id": "7cb8e394-ebca-4b27-8174-62c6b6f3a7da", "bridge": "br-int", "label": "tempest-network-smoke--1640681957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce4eb1a6-22", "ovs_interfaceid": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:13:20 np0005486808 nova_compute[259627]: 2025-10-14 09:13:20.293 2 DEBUG nova.network.os_vif_util [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converting VIF {"id": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "address": "fa:16:3e:7f:9c:bd", "network": {"id": "7cb8e394-ebca-4b27-8174-62c6b6f3a7da", "bridge": "br-int", "label": "tempest-network-smoke--1640681957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce4eb1a6-22", "ovs_interfaceid": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:13:20 np0005486808 nova_compute[259627]: 2025-10-14 09:13:20.294 2 DEBUG nova.network.os_vif_util [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7f:9c:bd,bridge_name='br-int',has_traffic_filtering=True,id=ce4eb1a6-2221-4519-98fa-44a39da77b71,network=Network(7cb8e394-ebca-4b27-8174-62c6b6f3a7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce4eb1a6-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:13:20 np0005486808 nova_compute[259627]: 2025-10-14 09:13:20.294 2 DEBUG os_vif [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7f:9c:bd,bridge_name='br-int',has_traffic_filtering=True,id=ce4eb1a6-2221-4519-98fa-44a39da77b71,network=Network(7cb8e394-ebca-4b27-8174-62c6b6f3a7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce4eb1a6-22') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:13:20 np0005486808 nova_compute[259627]: 2025-10-14 09:13:20.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:20 np0005486808 nova_compute[259627]: 2025-10-14 09:13:20.296 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce4eb1a6-22, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:13:20 np0005486808 podman[352163]: 2025-10-14 09:13:20.29786524 +0000 UTC m=+0.065434273 container died 0cf7bcdd9615bb7e33b88d7cee6e1494013d7f2c9874e170dd3c4be919a61bfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 05:13:20 np0005486808 nova_compute[259627]: 2025-10-14 09:13:20.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:20 np0005486808 nova_compute[259627]: 2025-10-14 09:13:20.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:13:20 np0005486808 nova_compute[259627]: 2025-10-14 09:13:20.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:20 np0005486808 nova_compute[259627]: 2025-10-14 09:13:20.337 2 INFO os_vif [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7f:9c:bd,bridge_name='br-int',has_traffic_filtering=True,id=ce4eb1a6-2221-4519-98fa-44a39da77b71,network=Network(7cb8e394-ebca-4b27-8174-62c6b6f3a7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce4eb1a6-22')#033[00m
Oct 14 05:13:20 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0cf7bcdd9615bb7e33b88d7cee6e1494013d7f2c9874e170dd3c4be919a61bfd-userdata-shm.mount: Deactivated successfully.
Oct 14 05:13:20 np0005486808 systemd[1]: var-lib-containers-storage-overlay-cb65901d3e784ae2e04a1afc9a7cae7d402065e9d57d7aaf3b11e83a0ac7ab76-merged.mount: Deactivated successfully.
Oct 14 05:13:20 np0005486808 podman[352163]: 2025-10-14 09:13:20.380779352 +0000 UTC m=+0.148348365 container cleanup 0cf7bcdd9615bb7e33b88d7cee6e1494013d7f2c9874e170dd3c4be919a61bfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:13:20 np0005486808 systemd[1]: libpod-conmon-0cf7bcdd9615bb7e33b88d7cee6e1494013d7f2c9874e170dd3c4be919a61bfd.scope: Deactivated successfully.
Oct 14 05:13:20 np0005486808 nova_compute[259627]: 2025-10-14 09:13:20.424 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433185.4242532, 69c5d250-71a4-47d5-a3ce-5b606ee9c692 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:13:20 np0005486808 nova_compute[259627]: 2025-10-14 09:13:20.427 2 INFO nova.compute.manager [-] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:13:20 np0005486808 podman[352220]: 2025-10-14 09:13:20.459713326 +0000 UTC m=+0.053415577 container remove 0cf7bcdd9615bb7e33b88d7cee6e1494013d7f2c9874e170dd3c4be919a61bfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:13:20 np0005486808 nova_compute[259627]: 2025-10-14 09:13:20.464 2 DEBUG nova.compute.manager [None req-f94bd0d2-51ae-4d69-bccb-f2ea3df69216 - - - - - -] [instance: 69c5d250-71a4-47d5-a3ce-5b606ee9c692] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:13:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:20.471 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ab0af50a-09c6-430e-ac22-eee9faa697dc]: (4, ('Tue Oct 14 09:13:20 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da (0cf7bcdd9615bb7e33b88d7cee6e1494013d7f2c9874e170dd3c4be919a61bfd)\n0cf7bcdd9615bb7e33b88d7cee6e1494013d7f2c9874e170dd3c4be919a61bfd\nTue Oct 14 09:13:20 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da (0cf7bcdd9615bb7e33b88d7cee6e1494013d7f2c9874e170dd3c4be919a61bfd)\n0cf7bcdd9615bb7e33b88d7cee6e1494013d7f2c9874e170dd3c4be919a61bfd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:20.473 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6c726f53-d548-4a56-b5e7-b96daff0f980]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:20.474 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7cb8e394-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:13:20 np0005486808 nova_compute[259627]: 2025-10-14 09:13:20.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:20 np0005486808 kernel: tap7cb8e394-e0: left promiscuous mode
Oct 14 05:13:20 np0005486808 nova_compute[259627]: 2025-10-14 09:13:20.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:20.489 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cb73ade5-4a0d-46c8-9187-f45c0db3e055]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:20 np0005486808 nova_compute[259627]: 2025-10-14 09:13:20.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:20.518 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a09a6277-2cd2-48df-883c-2d12206d5db3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:20.519 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2ec57b8c-3155-4284-abfe-09edc0f493b2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:20.543 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[77b2991f-bbb0-4542-a32a-65cf31d172fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 695912, 'reachable_time': 38739, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352236, 'error': None, 'target': 'ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:20 np0005486808 systemd[1]: run-netns-ovnmeta\x2d7cb8e394\x2debca\x2d4b27\x2d8174\x2d62c6b6f3a7da.mount: Deactivated successfully.
Oct 14 05:13:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:20.546 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7cb8e394-ebca-4b27-8174-62c6b6f3a7da deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:13:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:20.546 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[c539de68-bc75-4637-8598-9b0356b6d26e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:20 np0005486808 nova_compute[259627]: 2025-10-14 09:13:20.755 2 INFO nova.virt.libvirt.driver [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Deleting instance files /var/lib/nova/instances/9e354e27-d674-43c3-890b-caf8731cb827_del#033[00m
Oct 14 05:13:20 np0005486808 nova_compute[259627]: 2025-10-14 09:13:20.755 2 INFO nova.virt.libvirt.driver [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Deletion of /var/lib/nova/instances/9e354e27-d674-43c3-890b-caf8731cb827_del complete#033[00m
Oct 14 05:13:20 np0005486808 nova_compute[259627]: 2025-10-14 09:13:20.824 2 INFO nova.compute.manager [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Took 0.81 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:13:20 np0005486808 nova_compute[259627]: 2025-10-14 09:13:20.825 2 DEBUG oslo.service.loopingcall [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:13:20 np0005486808 nova_compute[259627]: 2025-10-14 09:13:20.825 2 DEBUG nova.compute.manager [-] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:13:20 np0005486808 nova_compute[259627]: 2025-10-14 09:13:20.826 2 DEBUG nova.network.neutron [-] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:13:21 np0005486808 nova_compute[259627]: 2025-10-14 09:13:21.589 2 DEBUG nova.network.neutron [req-2cb9d5bd-1b31-44e6-8356-0f162903b167 req-ecbfc925-75d0-4731-8cd8-bbd29f99c9fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Updated VIF entry in instance network info cache for port ce4eb1a6-2221-4519-98fa-44a39da77b71. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:13:21 np0005486808 nova_compute[259627]: 2025-10-14 09:13:21.590 2 DEBUG nova.network.neutron [req-2cb9d5bd-1b31-44e6-8356-0f162903b167 req-ecbfc925-75d0-4731-8cd8-bbd29f99c9fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Updating instance_info_cache with network_info: [{"id": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "address": "fa:16:3e:7f:9c:bd", "network": {"id": "7cb8e394-ebca-4b27-8174-62c6b6f3a7da", "bridge": "br-int", "label": "tempest-network-smoke--1640681957", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce4eb1a6-22", "ovs_interfaceid": "ce4eb1a6-2221-4519-98fa-44a39da77b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:13:21 np0005486808 nova_compute[259627]: 2025-10-14 09:13:21.611 2 DEBUG oslo_concurrency.lockutils [req-2cb9d5bd-1b31-44e6-8356-0f162903b167 req-ecbfc925-75d0-4731-8cd8-bbd29f99c9fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-9e354e27-d674-43c3-890b-caf8731cb827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:13:21 np0005486808 nova_compute[259627]: 2025-10-14 09:13:21.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:21 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1767: 305 pgs: 305 active+clean; 163 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 543 KiB/s rd, 32 KiB/s wr, 65 op/s
Oct 14 05:13:21 np0005486808 nova_compute[259627]: 2025-10-14 09:13:21.959 2 DEBUG nova.network.neutron [-] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:13:21 np0005486808 nova_compute[259627]: 2025-10-14 09:13:21.981 2 INFO nova.compute.manager [-] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Took 1.16 seconds to deallocate network for instance.#033[00m
Oct 14 05:13:22 np0005486808 nova_compute[259627]: 2025-10-14 09:13:22.036 2 DEBUG oslo_concurrency.lockutils [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:13:22 np0005486808 nova_compute[259627]: 2025-10-14 09:13:22.036 2 DEBUG oslo_concurrency.lockutils [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:13:22 np0005486808 nova_compute[259627]: 2025-10-14 09:13:22.374 2 DEBUG nova.compute.manager [req-8309c113-4691-4288-a0c5-2925069e3df8 req-69ae693c-e197-4b9c-9db3-8ba575a567a9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Received event network-vif-deleted-ce4eb1a6-2221-4519-98fa-44a39da77b71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:13:22 np0005486808 nova_compute[259627]: 2025-10-14 09:13:22.398 2 DEBUG oslo_concurrency.processutils [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:13:22 np0005486808 nova_compute[259627]: 2025-10-14 09:13:22.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:22 np0005486808 nova_compute[259627]: 2025-10-14 09:13:22.512 2 DEBUG nova.compute.manager [req-460f4a22-c011-45a8-9924-2da52ddce64f req-ccaef871-66ed-4a8d-99a9-e80719548c9d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Received event network-vif-unplugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:13:22 np0005486808 nova_compute[259627]: 2025-10-14 09:13:22.512 2 DEBUG oslo_concurrency.lockutils [req-460f4a22-c011-45a8-9924-2da52ddce64f req-ccaef871-66ed-4a8d-99a9-e80719548c9d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9e354e27-d674-43c3-890b-caf8731cb827-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:13:22 np0005486808 nova_compute[259627]: 2025-10-14 09:13:22.512 2 DEBUG oslo_concurrency.lockutils [req-460f4a22-c011-45a8-9924-2da52ddce64f req-ccaef871-66ed-4a8d-99a9-e80719548c9d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:13:22 np0005486808 nova_compute[259627]: 2025-10-14 09:13:22.513 2 DEBUG oslo_concurrency.lockutils [req-460f4a22-c011-45a8-9924-2da52ddce64f req-ccaef871-66ed-4a8d-99a9-e80719548c9d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:13:22 np0005486808 nova_compute[259627]: 2025-10-14 09:13:22.513 2 DEBUG nova.compute.manager [req-460f4a22-c011-45a8-9924-2da52ddce64f req-ccaef871-66ed-4a8d-99a9-e80719548c9d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] No waiting events found dispatching network-vif-unplugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:13:22 np0005486808 nova_compute[259627]: 2025-10-14 09:13:22.513 2 WARNING nova.compute.manager [req-460f4a22-c011-45a8-9924-2da52ddce64f req-ccaef871-66ed-4a8d-99a9-e80719548c9d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Received unexpected event network-vif-unplugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:13:22 np0005486808 nova_compute[259627]: 2025-10-14 09:13:22.513 2 DEBUG nova.compute.manager [req-460f4a22-c011-45a8-9924-2da52ddce64f req-ccaef871-66ed-4a8d-99a9-e80719548c9d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Received event network-vif-plugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:13:22 np0005486808 nova_compute[259627]: 2025-10-14 09:13:22.513 2 DEBUG oslo_concurrency.lockutils [req-460f4a22-c011-45a8-9924-2da52ddce64f req-ccaef871-66ed-4a8d-99a9-e80719548c9d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9e354e27-d674-43c3-890b-caf8731cb827-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:13:22 np0005486808 nova_compute[259627]: 2025-10-14 09:13:22.513 2 DEBUG oslo_concurrency.lockutils [req-460f4a22-c011-45a8-9924-2da52ddce64f req-ccaef871-66ed-4a8d-99a9-e80719548c9d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:13:22 np0005486808 nova_compute[259627]: 2025-10-14 09:13:22.514 2 DEBUG oslo_concurrency.lockutils [req-460f4a22-c011-45a8-9924-2da52ddce64f req-ccaef871-66ed-4a8d-99a9-e80719548c9d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:13:22 np0005486808 nova_compute[259627]: 2025-10-14 09:13:22.514 2 DEBUG nova.compute.manager [req-460f4a22-c011-45a8-9924-2da52ddce64f req-ccaef871-66ed-4a8d-99a9-e80719548c9d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] No waiting events found dispatching network-vif-plugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:13:22 np0005486808 nova_compute[259627]: 2025-10-14 09:13:22.514 2 WARNING nova.compute.manager [req-460f4a22-c011-45a8-9924-2da52ddce64f req-ccaef871-66ed-4a8d-99a9-e80719548c9d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Received unexpected event network-vif-plugged-ce4eb1a6-2221-4519-98fa-44a39da77b71 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:13:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:13:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:13:22 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2681891022' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:13:22 np0005486808 nova_compute[259627]: 2025-10-14 09:13:22.895 2 DEBUG oslo_concurrency.processutils [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:13:22 np0005486808 nova_compute[259627]: 2025-10-14 09:13:22.901 2 DEBUG nova.compute.provider_tree [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:13:22 np0005486808 nova_compute[259627]: 2025-10-14 09:13:22.937 2 DEBUG nova.scheduler.client.report [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:13:23 np0005486808 nova_compute[259627]: 2025-10-14 09:13:23.008 2 DEBUG oslo_concurrency.lockutils [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.972s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:13:23 np0005486808 nova_compute[259627]: 2025-10-14 09:13:23.050 2 INFO nova.scheduler.client.report [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Deleted allocations for instance 9e354e27-d674-43c3-890b-caf8731cb827#033[00m
Oct 14 05:13:23 np0005486808 nova_compute[259627]: 2025-10-14 09:13:23.124 2 DEBUG oslo_concurrency.lockutils [None req-68c50af1-997a-42e4-96ab-de5c2377d76c e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "9e354e27-d674-43c3-890b-caf8731cb827" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:13:23 np0005486808 nova_compute[259627]: 2025-10-14 09:13:23.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:23 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1768: 305 pgs: 305 active+clean; 163 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 531 KiB/s rd, 30 KiB/s wr, 47 op/s
Oct 14 05:13:25 np0005486808 nova_compute[259627]: 2025-10-14 09:13:25.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:25 np0005486808 nova_compute[259627]: 2025-10-14 09:13:25.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:25 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1769: 305 pgs: 305 active+clean; 121 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 549 KiB/s rd, 32 KiB/s wr, 73 op/s
Oct 14 05:13:26 np0005486808 nova_compute[259627]: 2025-10-14 09:13:26.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:26 np0005486808 ovn_controller[152662]: 2025-10-14T09:13:26Z|00977|binding|INFO|Releasing lport 61fe5571-a8eb-446a-8c4c-1f6f6758b146 from this chassis (sb_readonly=0)
Oct 14 05:13:26 np0005486808 nova_compute[259627]: 2025-10-14 09:13:26.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:26 np0005486808 ovn_controller[152662]: 2025-10-14T09:13:26Z|00978|binding|INFO|Releasing lport 61fe5571-a8eb-446a-8c4c-1f6f6758b146 from this chassis (sb_readonly=0)
Oct 14 05:13:26 np0005486808 nova_compute[259627]: 2025-10-14 09:13:26.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:27 np0005486808 nova_compute[259627]: 2025-10-14 09:13:27.013 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:13:27 np0005486808 nova_compute[259627]: 2025-10-14 09:13:27.014 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:13:27 np0005486808 nova_compute[259627]: 2025-10-14 09:13:27.015 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:13:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:13:27 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1770: 305 pgs: 305 active+clean; 121 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 20 KiB/s wr, 30 op/s
Oct 14 05:13:28 np0005486808 nova_compute[259627]: 2025-10-14 09:13:28.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:29 np0005486808 nova_compute[259627]: 2025-10-14 09:13:29.109 2 DEBUG oslo_concurrency.lockutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Acquiring lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:13:29 np0005486808 nova_compute[259627]: 2025-10-14 09:13:29.110 2 DEBUG oslo_concurrency.lockutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:13:29 np0005486808 nova_compute[259627]: 2025-10-14 09:13:29.127 2 DEBUG nova.compute.manager [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:13:29 np0005486808 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 05:13:29 np0005486808 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 29K writes, 116K keys, 29K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.04 MB/s#012Cumulative WAL: 29K writes, 10K syncs, 2.84 writes per sync, written: 0.11 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 11K writes, 44K keys, 11K commit groups, 1.0 writes per commit group, ingest: 47.24 MB, 0.08 MB/s#012Interval WAL: 11K writes, 4460 syncs, 2.54 writes per sync, written: 0.05 GB, 0.08 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 05:13:29 np0005486808 nova_compute[259627]: 2025-10-14 09:13:29.213 2 DEBUG oslo_concurrency.lockutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:13:29 np0005486808 nova_compute[259627]: 2025-10-14 09:13:29.214 2 DEBUG oslo_concurrency.lockutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:13:29 np0005486808 nova_compute[259627]: 2025-10-14 09:13:29.223 2 DEBUG nova.virt.hardware [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:13:29 np0005486808 nova_compute[259627]: 2025-10-14 09:13:29.224 2 INFO nova.compute.claims [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:13:29 np0005486808 nova_compute[259627]: 2025-10-14 09:13:29.406 2 DEBUG oslo_concurrency.processutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:13:29 np0005486808 podman[352282]: 2025-10-14 09:13:29.652916711 +0000 UTC m=+0.062319495 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:13:29 np0005486808 podman[352268]: 2025-10-14 09:13:29.71012995 +0000 UTC m=+0.116511490 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:13:29 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:13:29 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1557985574' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:13:29 np0005486808 nova_compute[259627]: 2025-10-14 09:13:29.892 2 DEBUG oslo_concurrency.processutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:13:29 np0005486808 nova_compute[259627]: 2025-10-14 09:13:29.900 2 DEBUG nova.compute.provider_tree [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:13:29 np0005486808 nova_compute[259627]: 2025-10-14 09:13:29.917 2 DEBUG nova.scheduler.client.report [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:13:29 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1771: 305 pgs: 305 active+clean; 121 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 20 KiB/s wr, 30 op/s
Oct 14 05:13:29 np0005486808 nova_compute[259627]: 2025-10-14 09:13:29.946 2 DEBUG oslo_concurrency.lockutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:13:29 np0005486808 nova_compute[259627]: 2025-10-14 09:13:29.947 2 DEBUG nova.compute.manager [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:13:29 np0005486808 nova_compute[259627]: 2025-10-14 09:13:29.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:13:29 np0005486808 nova_compute[259627]: 2025-10-14 09:13:29.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:13:30 np0005486808 nova_compute[259627]: 2025-10-14 09:13:30.015 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:13:30 np0005486808 nova_compute[259627]: 2025-10-14 09:13:30.016 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:13:30 np0005486808 nova_compute[259627]: 2025-10-14 09:13:30.017 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:13:30 np0005486808 nova_compute[259627]: 2025-10-14 09:13:30.017 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:13:30 np0005486808 nova_compute[259627]: 2025-10-14 09:13:30.018 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:13:30 np0005486808 nova_compute[259627]: 2025-10-14 09:13:30.067 2 DEBUG nova.compute.manager [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:13:30 np0005486808 nova_compute[259627]: 2025-10-14 09:13:30.067 2 DEBUG nova.network.neutron [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:13:30 np0005486808 nova_compute[259627]: 2025-10-14 09:13:30.087 2 INFO nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:13:30 np0005486808 nova_compute[259627]: 2025-10-14 09:13:30.105 2 DEBUG nova.compute.manager [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:13:30 np0005486808 nova_compute[259627]: 2025-10-14 09:13:30.185 2 DEBUG nova.compute.manager [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:13:30 np0005486808 nova_compute[259627]: 2025-10-14 09:13:30.186 2 DEBUG nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:13:30 np0005486808 nova_compute[259627]: 2025-10-14 09:13:30.186 2 INFO nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Creating image(s)#033[00m
Oct 14 05:13:30 np0005486808 nova_compute[259627]: 2025-10-14 09:13:30.214 2 DEBUG nova.storage.rbd_utils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] rbd image fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:13:30 np0005486808 nova_compute[259627]: 2025-10-14 09:13:30.244 2 DEBUG nova.storage.rbd_utils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] rbd image fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:13:30 np0005486808 nova_compute[259627]: 2025-10-14 09:13:30.276 2 DEBUG nova.storage.rbd_utils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] rbd image fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:13:30 np0005486808 nova_compute[259627]: 2025-10-14 09:13:30.282 2 DEBUG oslo_concurrency.processutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:13:30 np0005486808 nova_compute[259627]: 2025-10-14 09:13:30.333 2 DEBUG nova.policy [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9c3ef1ada21b467b9c1717b790fabb93', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dffdc75b179b426c85be76e05489a77a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:13:30 np0005486808 nova_compute[259627]: 2025-10-14 09:13:30.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:30 np0005486808 nova_compute[259627]: 2025-10-14 09:13:30.388 2 DEBUG oslo_concurrency.processutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:13:30 np0005486808 nova_compute[259627]: 2025-10-14 09:13:30.389 2 DEBUG oslo_concurrency.lockutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:13:30 np0005486808 nova_compute[259627]: 2025-10-14 09:13:30.389 2 DEBUG oslo_concurrency.lockutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:13:30 np0005486808 nova_compute[259627]: 2025-10-14 09:13:30.390 2 DEBUG oslo_concurrency.lockutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:13:30 np0005486808 nova_compute[259627]: 2025-10-14 09:13:30.419 2 DEBUG nova.storage.rbd_utils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] rbd image fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:13:30 np0005486808 nova_compute[259627]: 2025-10-14 09:13:30.423 2 DEBUG oslo_concurrency.processutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:13:30 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:13:30 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2626864788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:13:30 np0005486808 nova_compute[259627]: 2025-10-14 09:13:30.478 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:13:30 np0005486808 nova_compute[259627]: 2025-10-14 09:13:30.597 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:13:30 np0005486808 nova_compute[259627]: 2025-10-14 09:13:30.598 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:13:30 np0005486808 nova_compute[259627]: 2025-10-14 09:13:30.772 2 DEBUG oslo_concurrency.processutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.349s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:13:30 np0005486808 nova_compute[259627]: 2025-10-14 09:13:30.849 2 DEBUG nova.storage.rbd_utils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] resizing rbd image fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:13:30 np0005486808 nova_compute[259627]: 2025-10-14 09:13:30.956 2 DEBUG nova.objects.instance [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Lazy-loading 'migration_context' on Instance uuid fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:13:30 np0005486808 nova_compute[259627]: 2025-10-14 09:13:30.974 2 DEBUG oslo_concurrency.lockutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:13:30 np0005486808 nova_compute[259627]: 2025-10-14 09:13:30.974 2 DEBUG oslo_concurrency.lockutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:13:30 np0005486808 nova_compute[259627]: 2025-10-14 09:13:30.975 2 DEBUG nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:13:30 np0005486808 nova_compute[259627]: 2025-10-14 09:13:30.976 2 DEBUG nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Ensure instance console log exists: /var/lib/nova/instances/fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:13:30 np0005486808 nova_compute[259627]: 2025-10-14 09:13:30.976 2 DEBUG oslo_concurrency.lockutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:13:30 np0005486808 nova_compute[259627]: 2025-10-14 09:13:30.976 2 DEBUG oslo_concurrency.lockutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:13:30 np0005486808 nova_compute[259627]: 2025-10-14 09:13:30.977 2 DEBUG oslo_concurrency.lockutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:13:30 np0005486808 nova_compute[259627]: 2025-10-14 09:13:30.993 2 DEBUG nova.compute.manager [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:13:31 np0005486808 nova_compute[259627]: 2025-10-14 09:13:31.006 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:13:31 np0005486808 nova_compute[259627]: 2025-10-14 09:13:31.007 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3674MB free_disk=59.942779541015625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:13:31 np0005486808 nova_compute[259627]: 2025-10-14 09:13:31.007 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:13:31 np0005486808 nova_compute[259627]: 2025-10-14 09:13:31.007 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:13:31 np0005486808 nova_compute[259627]: 2025-10-14 09:13:31.081 2 DEBUG oslo_concurrency.lockutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:13:31 np0005486808 nova_compute[259627]: 2025-10-14 09:13:31.123 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 2534f8b9-e832-4b78-ada4-e551429bdc75 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:13:31 np0005486808 nova_compute[259627]: 2025-10-14 09:13:31.123 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:13:31 np0005486808 nova_compute[259627]: 2025-10-14 09:13:31.152 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance d46b6953-9413-4e6a-94f7-7b5ac9634c16 has been scheduled to this compute host, the scheduler has made an allocation against this compute node but the instance has yet to start. Skipping heal of allocation: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1692#033[00m
Oct 14 05:13:31 np0005486808 nova_compute[259627]: 2025-10-14 09:13:31.153 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:13:31 np0005486808 nova_compute[259627]: 2025-10-14 09:13:31.153 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:13:31 np0005486808 nova_compute[259627]: 2025-10-14 09:13:31.185 2 DEBUG nova.network.neutron [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Successfully created port: 33bf675d-c42f-486f-b483-87fa5091b0ef _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:13:31 np0005486808 nova_compute[259627]: 2025-10-14 09:13:31.224 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:13:31 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:13:31 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/126330429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:13:31 np0005486808 nova_compute[259627]: 2025-10-14 09:13:31.680 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:13:31 np0005486808 nova_compute[259627]: 2025-10-14 09:13:31.689 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:13:31 np0005486808 nova_compute[259627]: 2025-10-14 09:13:31.724 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:13:31 np0005486808 nova_compute[259627]: 2025-10-14 09:13:31.786 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:13:31 np0005486808 nova_compute[259627]: 2025-10-14 09:13:31.786 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:13:31 np0005486808 nova_compute[259627]: 2025-10-14 09:13:31.787 2 DEBUG oslo_concurrency.lockutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:13:31 np0005486808 nova_compute[259627]: 2025-10-14 09:13:31.788 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:13:31 np0005486808 nova_compute[259627]: 2025-10-14 09:13:31.798 2 DEBUG nova.virt.hardware [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:13:31 np0005486808 nova_compute[259627]: 2025-10-14 09:13:31.799 2 INFO nova.compute.claims [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:13:31 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1772: 305 pgs: 305 active+clean; 125 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 33 KiB/s wr, 35 op/s
Oct 14 05:13:32 np0005486808 nova_compute[259627]: 2025-10-14 09:13:32.056 2 DEBUG oslo_concurrency.processutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:13:32 np0005486808 nova_compute[259627]: 2025-10-14 09:13:32.171 2 DEBUG nova.network.neutron [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Successfully updated port: 33bf675d-c42f-486f-b483-87fa5091b0ef _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:13:32 np0005486808 nova_compute[259627]: 2025-10-14 09:13:32.186 2 DEBUG oslo_concurrency.lockutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Acquiring lock "refresh_cache-fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:13:32 np0005486808 nova_compute[259627]: 2025-10-14 09:13:32.186 2 DEBUG oslo_concurrency.lockutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Acquired lock "refresh_cache-fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:13:32 np0005486808 nova_compute[259627]: 2025-10-14 09:13:32.187 2 DEBUG nova.network.neutron [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:13:32 np0005486808 nova_compute[259627]: 2025-10-14 09:13:32.386 2 DEBUG nova.compute.manager [req-507c0465-7c1a-46ee-89f9-50f97b9f8edd req-2db62818-b172-472c-b7bf-eba8bb734b2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Received event network-changed-33bf675d-c42f-486f-b483-87fa5091b0ef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:13:32 np0005486808 nova_compute[259627]: 2025-10-14 09:13:32.387 2 DEBUG nova.compute.manager [req-507c0465-7c1a-46ee-89f9-50f97b9f8edd req-2db62818-b172-472c-b7bf-eba8bb734b2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Refreshing instance network info cache due to event network-changed-33bf675d-c42f-486f-b483-87fa5091b0ef. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:13:32 np0005486808 nova_compute[259627]: 2025-10-14 09:13:32.387 2 DEBUG oslo_concurrency.lockutils [req-507c0465-7c1a-46ee-89f9-50f97b9f8edd req-2db62818-b172-472c-b7bf-eba8bb734b2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:13:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:13:32 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1673281914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:13:32 np0005486808 nova_compute[259627]: 2025-10-14 09:13:32.498 2 DEBUG oslo_concurrency.processutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:13:32 np0005486808 nova_compute[259627]: 2025-10-14 09:13:32.505 2 DEBUG nova.compute.provider_tree [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:13:32 np0005486808 nova_compute[259627]: 2025-10-14 09:13:32.532 2 DEBUG nova.scheduler.client.report [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:13:32 np0005486808 nova_compute[259627]: 2025-10-14 09:13:32.558 2 DEBUG oslo_concurrency.lockutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:13:32 np0005486808 nova_compute[259627]: 2025-10-14 09:13:32.559 2 DEBUG nova.compute.manager [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:13:32 np0005486808 nova_compute[259627]: 2025-10-14 09:13:32.621 2 DEBUG nova.compute.manager [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:13:32 np0005486808 nova_compute[259627]: 2025-10-14 09:13:32.622 2 DEBUG nova.network.neutron [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:13:32 np0005486808 nova_compute[259627]: 2025-10-14 09:13:32.643 2 INFO nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:13:32 np0005486808 nova_compute[259627]: 2025-10-14 09:13:32.663 2 DEBUG nova.compute.manager [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:13:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:13:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:13:32 np0005486808 nova_compute[259627]: 2025-10-14 09:13:32.757 2 DEBUG nova.compute.manager [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:13:32 np0005486808 nova_compute[259627]: 2025-10-14 09:13:32.758 2 DEBUG nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:13:32 np0005486808 nova_compute[259627]: 2025-10-14 09:13:32.759 2 INFO nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Creating image(s)#033[00m
Oct 14 05:13:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:13:32
Oct 14 05:13:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:13:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:13:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'images', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'backups', 'vms', 'volumes']
Oct 14 05:13:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:13:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:13:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:13:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:13:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:13:32 np0005486808 nova_compute[259627]: 2025-10-14 09:13:32.801 2 DEBUG nova.storage.rbd_utils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image d46b6953-9413-4e6a-94f7-7b5ac9634c16_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:13:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:13:32 np0005486808 nova_compute[259627]: 2025-10-14 09:13:32.840 2 DEBUG nova.storage.rbd_utils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image d46b6953-9413-4e6a-94f7-7b5ac9634c16_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:13:32 np0005486808 nova_compute[259627]: 2025-10-14 09:13:32.884 2 DEBUG nova.storage.rbd_utils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image d46b6953-9413-4e6a-94f7-7b5ac9634c16_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:13:32 np0005486808 nova_compute[259627]: 2025-10-14 09:13:32.891 2 DEBUG oslo_concurrency.processutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:13:33 np0005486808 nova_compute[259627]: 2025-10-14 09:13:33.000 2 DEBUG oslo_concurrency.processutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:13:33 np0005486808 nova_compute[259627]: 2025-10-14 09:13:33.001 2 DEBUG oslo_concurrency.lockutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:13:33 np0005486808 nova_compute[259627]: 2025-10-14 09:13:33.002 2 DEBUG oslo_concurrency.lockutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:13:33 np0005486808 nova_compute[259627]: 2025-10-14 09:13:33.002 2 DEBUG oslo_concurrency.lockutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:13:33 np0005486808 nova_compute[259627]: 2025-10-14 09:13:33.028 2 DEBUG nova.storage.rbd_utils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image d46b6953-9413-4e6a-94f7-7b5ac9634c16_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:13:33 np0005486808 nova_compute[259627]: 2025-10-14 09:13:33.032 2 DEBUG oslo_concurrency.processutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 d46b6953-9413-4e6a-94f7-7b5ac9634c16_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:13:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:13:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:13:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:13:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:13:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:13:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:13:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:13:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:13:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:13:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:13:33 np0005486808 nova_compute[259627]: 2025-10-14 09:13:33.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:33 np0005486808 nova_compute[259627]: 2025-10-14 09:13:33.346 2 DEBUG nova.network.neutron [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:13:33 np0005486808 nova_compute[259627]: 2025-10-14 09:13:33.382 2 DEBUG nova.policy [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a287ef08fc5c4f218bf06cd2c7ed021e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0a080fae2f3c4e39a6cca225203f5ec6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:13:33 np0005486808 nova_compute[259627]: 2025-10-14 09:13:33.389 2 DEBUG oslo_concurrency.processutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 d46b6953-9413-4e6a-94f7-7b5ac9634c16_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.357s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:13:33 np0005486808 nova_compute[259627]: 2025-10-14 09:13:33.474 2 DEBUG nova.storage.rbd_utils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] resizing rbd image d46b6953-9413-4e6a-94f7-7b5ac9634c16_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:13:33 np0005486808 nova_compute[259627]: 2025-10-14 09:13:33.597 2 DEBUG nova.objects.instance [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'migration_context' on Instance uuid d46b6953-9413-4e6a-94f7-7b5ac9634c16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:13:33 np0005486808 nova_compute[259627]: 2025-10-14 09:13:33.614 2 DEBUG nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:13:33 np0005486808 nova_compute[259627]: 2025-10-14 09:13:33.615 2 DEBUG nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Ensure instance console log exists: /var/lib/nova/instances/d46b6953-9413-4e6a-94f7-7b5ac9634c16/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:13:33 np0005486808 nova_compute[259627]: 2025-10-14 09:13:33.615 2 DEBUG oslo_concurrency.lockutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:13:33 np0005486808 nova_compute[259627]: 2025-10-14 09:13:33.616 2 DEBUG oslo_concurrency.lockutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:13:33 np0005486808 nova_compute[259627]: 2025-10-14 09:13:33.616 2 DEBUG oslo_concurrency.lockutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:13:33 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1773: 305 pgs: 305 active+clean; 125 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 14 KiB/s wr, 31 op/s
Oct 14 05:13:34 np0005486808 nova_compute[259627]: 2025-10-14 09:13:34.243 2 DEBUG nova.network.neutron [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Updating instance_info_cache with network_info: [{"id": "33bf675d-c42f-486f-b483-87fa5091b0ef", "address": "fa:16:3e:9b:b9:3e", "network": {"id": "d74d8641-f56f-4d53-bd5c-d5364a316407", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-41681639-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dffdc75b179b426c85be76e05489a77a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33bf675d-c4", "ovs_interfaceid": "33bf675d-c42f-486f-b483-87fa5091b0ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:13:34 np0005486808 nova_compute[259627]: 2025-10-14 09:13:34.263 2 DEBUG oslo_concurrency.lockutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Releasing lock "refresh_cache-fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:13:34 np0005486808 nova_compute[259627]: 2025-10-14 09:13:34.264 2 DEBUG nova.compute.manager [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Instance network_info: |[{"id": "33bf675d-c42f-486f-b483-87fa5091b0ef", "address": "fa:16:3e:9b:b9:3e", "network": {"id": "d74d8641-f56f-4d53-bd5c-d5364a316407", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-41681639-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dffdc75b179b426c85be76e05489a77a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33bf675d-c4", "ovs_interfaceid": "33bf675d-c42f-486f-b483-87fa5091b0ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:13:34 np0005486808 nova_compute[259627]: 2025-10-14 09:13:34.264 2 DEBUG oslo_concurrency.lockutils [req-507c0465-7c1a-46ee-89f9-50f97b9f8edd req-2db62818-b172-472c-b7bf-eba8bb734b2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:13:34 np0005486808 nova_compute[259627]: 2025-10-14 09:13:34.264 2 DEBUG nova.network.neutron [req-507c0465-7c1a-46ee-89f9-50f97b9f8edd req-2db62818-b172-472c-b7bf-eba8bb734b2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Refreshing network info cache for port 33bf675d-c42f-486f-b483-87fa5091b0ef _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:13:34 np0005486808 nova_compute[259627]: 2025-10-14 09:13:34.267 2 DEBUG nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Start _get_guest_xml network_info=[{"id": "33bf675d-c42f-486f-b483-87fa5091b0ef", "address": "fa:16:3e:9b:b9:3e", "network": {"id": "d74d8641-f56f-4d53-bd5c-d5364a316407", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-41681639-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dffdc75b179b426c85be76e05489a77a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33bf675d-c4", "ovs_interfaceid": "33bf675d-c42f-486f-b483-87fa5091b0ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:13:34 np0005486808 nova_compute[259627]: 2025-10-14 09:13:34.271 2 WARNING nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:13:34 np0005486808 nova_compute[259627]: 2025-10-14 09:13:34.275 2 DEBUG nova.virt.libvirt.host [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:13:34 np0005486808 nova_compute[259627]: 2025-10-14 09:13:34.276 2 DEBUG nova.virt.libvirt.host [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:13:34 np0005486808 nova_compute[259627]: 2025-10-14 09:13:34.278 2 DEBUG nova.virt.libvirt.host [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:13:34 np0005486808 nova_compute[259627]: 2025-10-14 09:13:34.279 2 DEBUG nova.virt.libvirt.host [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:13:34 np0005486808 nova_compute[259627]: 2025-10-14 09:13:34.279 2 DEBUG nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:13:34 np0005486808 nova_compute[259627]: 2025-10-14 09:13:34.279 2 DEBUG nova.virt.hardware [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:13:34 np0005486808 nova_compute[259627]: 2025-10-14 09:13:34.280 2 DEBUG nova.virt.hardware [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:13:34 np0005486808 nova_compute[259627]: 2025-10-14 09:13:34.280 2 DEBUG nova.virt.hardware [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:13:34 np0005486808 nova_compute[259627]: 2025-10-14 09:13:34.280 2 DEBUG nova.virt.hardware [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:13:34 np0005486808 nova_compute[259627]: 2025-10-14 09:13:34.280 2 DEBUG nova.virt.hardware [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:13:34 np0005486808 nova_compute[259627]: 2025-10-14 09:13:34.280 2 DEBUG nova.virt.hardware [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:13:34 np0005486808 nova_compute[259627]: 2025-10-14 09:13:34.281 2 DEBUG nova.virt.hardware [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:13:34 np0005486808 nova_compute[259627]: 2025-10-14 09:13:34.281 2 DEBUG nova.virt.hardware [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:13:34 np0005486808 nova_compute[259627]: 2025-10-14 09:13:34.281 2 DEBUG nova.virt.hardware [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:13:34 np0005486808 nova_compute[259627]: 2025-10-14 09:13:34.281 2 DEBUG nova.virt.hardware [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:13:34 np0005486808 nova_compute[259627]: 2025-10-14 09:13:34.281 2 DEBUG nova.virt.hardware [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:13:34 np0005486808 nova_compute[259627]: 2025-10-14 09:13:34.284 2 DEBUG oslo_concurrency.processutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:13:34 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 05:13:34 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 30K writes, 115K keys, 30K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.04 MB/s#012Cumulative WAL: 30K writes, 10K syncs, 2.82 writes per sync, written: 0.10 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 10K writes, 38K keys, 10K commit groups, 1.0 writes per commit group, ingest: 36.96 MB, 0.06 MB/s#012Interval WAL: 10K writes, 4455 syncs, 2.41 writes per sync, written: 0.04 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 05:13:34 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:13:34 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/562269786' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:13:34 np0005486808 nova_compute[259627]: 2025-10-14 09:13:34.752 2 DEBUG oslo_concurrency.processutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:13:34 np0005486808 nova_compute[259627]: 2025-10-14 09:13:34.787 2 DEBUG nova.storage.rbd_utils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] rbd image fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:13:34 np0005486808 nova_compute[259627]: 2025-10-14 09:13:34.794 2 DEBUG oslo_concurrency.processutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:13:34 np0005486808 nova_compute[259627]: 2025-10-14 09:13:34.828 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:13:34 np0005486808 nova_compute[259627]: 2025-10-14 09:13:34.829 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:13:34 np0005486808 nova_compute[259627]: 2025-10-14 09:13:34.829 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 05:13:34 np0005486808 nova_compute[259627]: 2025-10-14 09:13:34.869 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct 14 05:13:34 np0005486808 nova_compute[259627]: 2025-10-14 09:13:34.870 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct 14 05:13:35 np0005486808 nova_compute[259627]: 2025-10-14 09:13:35.070 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:13:35 np0005486808 nova_compute[259627]: 2025-10-14 09:13:35.070 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:13:35 np0005486808 nova_compute[259627]: 2025-10-14 09:13:35.071 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 14 05:13:35 np0005486808 nova_compute[259627]: 2025-10-14 09:13:35.071 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2534f8b9-e832-4b78-ada4-e551429bdc75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:13:35 np0005486808 nova_compute[259627]: 2025-10-14 09:13:35.170 2 DEBUG nova.network.neutron [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Successfully created port: 05e92470-2658-4ea2-9c44-e91cd5226905 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:13:35 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:13:35 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3960128247' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:13:35 np0005486808 nova_compute[259627]: 2025-10-14 09:13:35.233 2 DEBUG oslo_concurrency.processutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:13:35 np0005486808 nova_compute[259627]: 2025-10-14 09:13:35.236 2 DEBUG nova.virt.libvirt.vif [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:13:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1606551920',display_name='tempest-ServerAddressesTestJSON-server-1606551920',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1606551920',id=94,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dffdc75b179b426c85be76e05489a77a',ramdisk_id='',reservation_id='r-io2ehiyc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1889343396',owner_user_name='tempest-ServerAddressesTestJSON-1889343396-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:13:30Z,user_data=None,user_id='9c3ef1ada21b467b9c1717b790fabb93',uuid=fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "33bf675d-c42f-486f-b483-87fa5091b0ef", "address": "fa:16:3e:9b:b9:3e", "network": {"id": "d74d8641-f56f-4d53-bd5c-d5364a316407", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-41681639-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dffdc75b179b426c85be76e05489a77a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33bf675d-c4", "ovs_interfaceid": "33bf675d-c42f-486f-b483-87fa5091b0ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:13:35 np0005486808 nova_compute[259627]: 2025-10-14 09:13:35.237 2 DEBUG nova.network.os_vif_util [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Converting VIF {"id": "33bf675d-c42f-486f-b483-87fa5091b0ef", "address": "fa:16:3e:9b:b9:3e", "network": {"id": "d74d8641-f56f-4d53-bd5c-d5364a316407", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-41681639-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dffdc75b179b426c85be76e05489a77a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33bf675d-c4", "ovs_interfaceid": "33bf675d-c42f-486f-b483-87fa5091b0ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:13:35 np0005486808 nova_compute[259627]: 2025-10-14 09:13:35.238 2 DEBUG nova.network.os_vif_util [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:b9:3e,bridge_name='br-int',has_traffic_filtering=True,id=33bf675d-c42f-486f-b483-87fa5091b0ef,network=Network(d74d8641-f56f-4d53-bd5c-d5364a316407),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33bf675d-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:13:35 np0005486808 nova_compute[259627]: 2025-10-14 09:13:35.241 2 DEBUG nova.objects.instance [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Lazy-loading 'pci_devices' on Instance uuid fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:13:35 np0005486808 nova_compute[259627]: 2025-10-14 09:13:35.261 2 DEBUG nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:13:35 np0005486808 nova_compute[259627]:  <uuid>fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf</uuid>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:  <name>instance-0000005e</name>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:13:35 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:      <nova:name>tempest-ServerAddressesTestJSON-server-1606551920</nova:name>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:13:34</nova:creationTime>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:13:35 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:        <nova:user uuid="9c3ef1ada21b467b9c1717b790fabb93">tempest-ServerAddressesTestJSON-1889343396-project-member</nova:user>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:        <nova:project uuid="dffdc75b179b426c85be76e05489a77a">tempest-ServerAddressesTestJSON-1889343396</nova:project>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:        <nova:port uuid="33bf675d-c42f-486f-b483-87fa5091b0ef">
Oct 14 05:13:35 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:      <entry name="serial">fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf</entry>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:      <entry name="uuid">fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf</entry>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:13:35 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf_disk">
Oct 14 05:13:35 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:13:35 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:13:35 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf_disk.config">
Oct 14 05:13:35 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:13:35 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:13:35 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:9b:b9:3e"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:      <target dev="tap33bf675d-c4"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:13:35 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf/console.log" append="off"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:13:35 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:13:35 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:13:35 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:13:35 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:13:35 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:13:35 np0005486808 nova_compute[259627]: 2025-10-14 09:13:35.263 2 DEBUG nova.compute.manager [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Preparing to wait for external event network-vif-plugged-33bf675d-c42f-486f-b483-87fa5091b0ef prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:13:35 np0005486808 nova_compute[259627]: 2025-10-14 09:13:35.264 2 DEBUG oslo_concurrency.lockutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Acquiring lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:13:35 np0005486808 nova_compute[259627]: 2025-10-14 09:13:35.264 2 DEBUG oslo_concurrency.lockutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:13:35 np0005486808 nova_compute[259627]: 2025-10-14 09:13:35.264 2 DEBUG oslo_concurrency.lockutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:13:35 np0005486808 nova_compute[259627]: 2025-10-14 09:13:35.265 2 DEBUG nova.virt.libvirt.vif [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:13:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1606551920',display_name='tempest-ServerAddressesTestJSON-server-1606551920',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1606551920',id=94,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dffdc75b179b426c85be76e05489a77a',ramdisk_id='',reservation_id='r-io2ehiyc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1889343396',owner_user_name='tempest-ServerAddressesTestJSON-1889343396-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:13:30Z,user_data=None,user_id='9c3ef1ada21b467b9c1717b790fabb93',uuid=fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "33bf675d-c42f-486f-b483-87fa5091b0ef", "address": "fa:16:3e:9b:b9:3e", "network": {"id": "d74d8641-f56f-4d53-bd5c-d5364a316407", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-41681639-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dffdc75b179b426c85be76e05489a77a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33bf675d-c4", "ovs_interfaceid": "33bf675d-c42f-486f-b483-87fa5091b0ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:13:35 np0005486808 nova_compute[259627]: 2025-10-14 09:13:35.265 2 DEBUG nova.network.os_vif_util [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Converting VIF {"id": "33bf675d-c42f-486f-b483-87fa5091b0ef", "address": "fa:16:3e:9b:b9:3e", "network": {"id": "d74d8641-f56f-4d53-bd5c-d5364a316407", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-41681639-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dffdc75b179b426c85be76e05489a77a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33bf675d-c4", "ovs_interfaceid": "33bf675d-c42f-486f-b483-87fa5091b0ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:13:35 np0005486808 nova_compute[259627]: 2025-10-14 09:13:35.266 2 DEBUG nova.network.os_vif_util [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:b9:3e,bridge_name='br-int',has_traffic_filtering=True,id=33bf675d-c42f-486f-b483-87fa5091b0ef,network=Network(d74d8641-f56f-4d53-bd5c-d5364a316407),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33bf675d-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:13:35 np0005486808 nova_compute[259627]: 2025-10-14 09:13:35.266 2 DEBUG os_vif [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:b9:3e,bridge_name='br-int',has_traffic_filtering=True,id=33bf675d-c42f-486f-b483-87fa5091b0ef,network=Network(d74d8641-f56f-4d53-bd5c-d5364a316407),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33bf675d-c4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:13:35 np0005486808 nova_compute[259627]: 2025-10-14 09:13:35.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:35 np0005486808 nova_compute[259627]: 2025-10-14 09:13:35.267 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:13:35 np0005486808 nova_compute[259627]: 2025-10-14 09:13:35.267 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:13:35 np0005486808 nova_compute[259627]: 2025-10-14 09:13:35.268 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433200.2614918, 9e354e27-d674-43c3-890b-caf8731cb827 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:13:35 np0005486808 nova_compute[259627]: 2025-10-14 09:13:35.268 2 INFO nova.compute.manager [-] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:13:35 np0005486808 nova_compute[259627]: 2025-10-14 09:13:35.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:35 np0005486808 nova_compute[259627]: 2025-10-14 09:13:35.273 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap33bf675d-c4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:13:35 np0005486808 nova_compute[259627]: 2025-10-14 09:13:35.274 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap33bf675d-c4, col_values=(('external_ids', {'iface-id': '33bf675d-c42f-486f-b483-87fa5091b0ef', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9b:b9:3e', 'vm-uuid': 'fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:13:35 np0005486808 nova_compute[259627]: 2025-10-14 09:13:35.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:35 np0005486808 NetworkManager[44885]: <info>  [1760433215.2764] manager: (tap33bf675d-c4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/397)
Oct 14 05:13:35 np0005486808 nova_compute[259627]: 2025-10-14 09:13:35.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:13:35 np0005486808 nova_compute[259627]: 2025-10-14 09:13:35.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:35 np0005486808 nova_compute[259627]: 2025-10-14 09:13:35.284 2 INFO os_vif [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:b9:3e,bridge_name='br-int',has_traffic_filtering=True,id=33bf675d-c42f-486f-b483-87fa5091b0ef,network=Network(d74d8641-f56f-4d53-bd5c-d5364a316407),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33bf675d-c4')#033[00m
Oct 14 05:13:35 np0005486808 nova_compute[259627]: 2025-10-14 09:13:35.288 2 DEBUG nova.compute.manager [None req-002c8720-3d66-48d1-bb52-1d6331fdd9eb - - - - - -] [instance: 9e354e27-d674-43c3-890b-caf8731cb827] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:13:35 np0005486808 nova_compute[259627]: 2025-10-14 09:13:35.336 2 DEBUG nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:13:35 np0005486808 nova_compute[259627]: 2025-10-14 09:13:35.336 2 DEBUG nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:13:35 np0005486808 nova_compute[259627]: 2025-10-14 09:13:35.336 2 DEBUG nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] No VIF found with MAC fa:16:3e:9b:b9:3e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:13:35 np0005486808 nova_compute[259627]: 2025-10-14 09:13:35.337 2 INFO nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Using config drive#033[00m
Oct 14 05:13:35 np0005486808 nova_compute[259627]: 2025-10-14 09:13:35.357 2 DEBUG nova.storage.rbd_utils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] rbd image fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:13:35 np0005486808 nova_compute[259627]: 2025-10-14 09:13:35.802 2 INFO nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Creating config drive at /var/lib/nova/instances/fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf/disk.config#033[00m
Oct 14 05:13:35 np0005486808 nova_compute[259627]: 2025-10-14 09:13:35.811 2 DEBUG oslo_concurrency.processutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5xzszvre execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:13:35 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1774: 305 pgs: 305 active+clean; 213 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 3.5 MiB/s wr, 80 op/s
Oct 14 05:13:35 np0005486808 nova_compute[259627]: 2025-10-14 09:13:35.987 2 DEBUG oslo_concurrency.processutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5xzszvre" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:13:36 np0005486808 nova_compute[259627]: 2025-10-14 09:13:36.025 2 DEBUG nova.storage.rbd_utils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] rbd image fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:13:36 np0005486808 nova_compute[259627]: 2025-10-14 09:13:36.029 2 DEBUG oslo_concurrency.processutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf/disk.config fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:13:36 np0005486808 nova_compute[259627]: 2025-10-14 09:13:36.224 2 DEBUG oslo_concurrency.processutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf/disk.config fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:13:36 np0005486808 nova_compute[259627]: 2025-10-14 09:13:36.225 2 INFO nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Deleting local config drive /var/lib/nova/instances/fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf/disk.config because it was imported into RBD.#033[00m
Oct 14 05:13:36 np0005486808 NetworkManager[44885]: <info>  [1760433216.2857] manager: (tap33bf675d-c4): new Tun device (/org/freedesktop/NetworkManager/Devices/398)
Oct 14 05:13:36 np0005486808 kernel: tap33bf675d-c4: entered promiscuous mode
Oct 14 05:13:36 np0005486808 nova_compute[259627]: 2025-10-14 09:13:36.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:36 np0005486808 ovn_controller[152662]: 2025-10-14T09:13:36Z|00979|binding|INFO|Claiming lport 33bf675d-c42f-486f-b483-87fa5091b0ef for this chassis.
Oct 14 05:13:36 np0005486808 ovn_controller[152662]: 2025-10-14T09:13:36Z|00980|binding|INFO|33bf675d-c42f-486f-b483-87fa5091b0ef: Claiming fa:16:3e:9b:b9:3e 10.100.0.9
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.308 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:b9:3e 10.100.0.9'], port_security=['fa:16:3e:9b:b9:3e 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d74d8641-f56f-4d53-bd5c-d5364a316407', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dffdc75b179b426c85be76e05489a77a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '403c796c-1a50-44ae-b551-913e7b6b57c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c445f64b-1565-4e25-83d1-f207b66a7e54, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=33bf675d-c42f-486f-b483-87fa5091b0ef) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.310 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 33bf675d-c42f-486f-b483-87fa5091b0ef in datapath d74d8641-f56f-4d53-bd5c-d5364a316407 bound to our chassis#033[00m
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.312 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d74d8641-f56f-4d53-bd5c-d5364a316407#033[00m
Oct 14 05:13:36 np0005486808 systemd-machined[214636]: New machine qemu-116-instance-0000005e.
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.330 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d50e3095-a998-44f8-9d4a-5a4c958ccb0c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.331 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd74d8641-f1 in ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.333 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd74d8641-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.333 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9c9d6025-9738-4e54-89c9-ff660618084d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.334 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9e8b0030-868a-460c-bbc2-fbc60a4d8c3f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.346 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[03e735b1-67b4-440b-b73f-ece1cc61ac01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:36 np0005486808 systemd[1]: Started Virtual Machine qemu-116-instance-0000005e.
Oct 14 05:13:36 np0005486808 systemd-udevd[352866]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.373 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3b7afa1b-bf6b-46d5-bf34-1551e17604c7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:36 np0005486808 nova_compute[259627]: 2025-10-14 09:13:36.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:36 np0005486808 ovn_controller[152662]: 2025-10-14T09:13:36Z|00981|binding|INFO|Setting lport 33bf675d-c42f-486f-b483-87fa5091b0ef ovn-installed in OVS
Oct 14 05:13:36 np0005486808 ovn_controller[152662]: 2025-10-14T09:13:36Z|00982|binding|INFO|Setting lport 33bf675d-c42f-486f-b483-87fa5091b0ef up in Southbound
Oct 14 05:13:36 np0005486808 nova_compute[259627]: 2025-10-14 09:13:36.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:36 np0005486808 NetworkManager[44885]: <info>  [1760433216.4002] device (tap33bf675d-c4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:13:36 np0005486808 NetworkManager[44885]: <info>  [1760433216.4020] device (tap33bf675d-c4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.417 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[6d254dba-2fd1-4829-9870-5837d2d8ead5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.425 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5756bc58-b93b-41f4-a8f6-435a989b5d40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:36 np0005486808 systemd-udevd[352873]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:13:36 np0005486808 NetworkManager[44885]: <info>  [1760433216.4272] manager: (tapd74d8641-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/399)
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.461 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[89af97ee-154d-45a3-9810-d8db59302e53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.465 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[78e193df-a5d6-4d7a-85e8-a24d8c9acf44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:36 np0005486808 NetworkManager[44885]: <info>  [1760433216.4866] device (tapd74d8641-f0): carrier: link connected
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.493 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[6b7b1276-e48c-43ca-ab7c-e767867f03d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.513 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c31bb3a8-880e-4d1a-bffc-6d31ef3ba80d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd74d8641-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:4d:6a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 284], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699524, 'reachable_time': 16135, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352896, 'error': None, 'target': 'ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:36 np0005486808 nova_compute[259627]: 2025-10-14 09:13:36.524 2 DEBUG nova.network.neutron [req-507c0465-7c1a-46ee-89f9-50f97b9f8edd req-2db62818-b172-472c-b7bf-eba8bb734b2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Updated VIF entry in instance network info cache for port 33bf675d-c42f-486f-b483-87fa5091b0ef. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:13:36 np0005486808 nova_compute[259627]: 2025-10-14 09:13:36.527 2 DEBUG nova.network.neutron [req-507c0465-7c1a-46ee-89f9-50f97b9f8edd req-2db62818-b172-472c-b7bf-eba8bb734b2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Updating instance_info_cache with network_info: [{"id": "33bf675d-c42f-486f-b483-87fa5091b0ef", "address": "fa:16:3e:9b:b9:3e", "network": {"id": "d74d8641-f56f-4d53-bd5c-d5364a316407", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-41681639-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dffdc75b179b426c85be76e05489a77a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33bf675d-c4", "ovs_interfaceid": "33bf675d-c42f-486f-b483-87fa5091b0ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.545 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d5d48a83-12e9-4188-9ddc-1ea5a3aba4ee]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe21:4d6a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 699524, 'tstamp': 699524}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352897, 'error': None, 'target': 'ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:36 np0005486808 nova_compute[259627]: 2025-10-14 09:13:36.548 2 DEBUG oslo_concurrency.lockutils [req-507c0465-7c1a-46ee-89f9-50f97b9f8edd req-2db62818-b172-472c-b7bf-eba8bb734b2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.564 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9590d7cb-9f13-46d9-9454-5b8f31d5c94d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd74d8641-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:4d:6a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 284], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699524, 'reachable_time': 16135, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 352898, 'error': None, 'target': 'ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.601 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[510680f7-e1bb-4ee2-b562-3b9a4fb95922]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.656 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[faf87ce0-b884-4435-9627-2350ed64d2a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.657 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd74d8641-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.658 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.658 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd74d8641-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:13:36 np0005486808 NetworkManager[44885]: <info>  [1760433216.6905] manager: (tapd74d8641-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/400)
Oct 14 05:13:36 np0005486808 kernel: tapd74d8641-f0: entered promiscuous mode
Oct 14 05:13:36 np0005486808 nova_compute[259627]: 2025-10-14 09:13:36.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.693 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd74d8641-f0, col_values=(('external_ids', {'iface-id': '4af3feec-e627-47f7-a581-09cf28b78f23'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:13:36 np0005486808 ovn_controller[152662]: 2025-10-14T09:13:36Z|00983|binding|INFO|Releasing lport 4af3feec-e627-47f7-a581-09cf28b78f23 from this chassis (sb_readonly=0)
Oct 14 05:13:36 np0005486808 nova_compute[259627]: 2025-10-14 09:13:36.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:36 np0005486808 nova_compute[259627]: 2025-10-14 09:13:36.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:36 np0005486808 nova_compute[259627]: 2025-10-14 09:13:36.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.723 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d74d8641-f56f-4d53-bd5c-d5364a316407.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d74d8641-f56f-4d53-bd5c-d5364a316407.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.724 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ac3f3eab-fec5-4640-b822-d4b5ad927cd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.725 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-d74d8641-f56f-4d53-bd5c-d5364a316407
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/d74d8641-f56f-4d53-bd5c-d5364a316407.pid.haproxy
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID d74d8641-f56f-4d53-bd5c-d5364a316407
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:13:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:36.726 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407', 'env', 'PROCESS_TAG=haproxy-d74d8641-f56f-4d53-bd5c-d5364a316407', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d74d8641-f56f-4d53-bd5c-d5364a316407.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:13:36 np0005486808 nova_compute[259627]: 2025-10-14 09:13:36.790 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Updating instance_info_cache with network_info: [{"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:13:36 np0005486808 nova_compute[259627]: 2025-10-14 09:13:36.808 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:13:36 np0005486808 nova_compute[259627]: 2025-10-14 09:13:36.809 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 14 05:13:36 np0005486808 nova_compute[259627]: 2025-10-14 09:13:36.809 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:13:36 np0005486808 nova_compute[259627]: 2025-10-14 09:13:36.810 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:13:36 np0005486808 nova_compute[259627]: 2025-10-14 09:13:36.810 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:13:36 np0005486808 nova_compute[259627]: 2025-10-14 09:13:36.989 2 DEBUG oslo_concurrency.lockutils [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "2534f8b9-e832-4b78-ada4-e551429bdc75" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:13:36 np0005486808 nova_compute[259627]: 2025-10-14 09:13:36.990 2 DEBUG oslo_concurrency.lockutils [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:13:36 np0005486808 nova_compute[259627]: 2025-10-14 09:13:36.990 2 INFO nova.compute.manager [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Shelving#033[00m
Oct 14 05:13:37 np0005486808 nova_compute[259627]: 2025-10-14 09:13:37.012 2 DEBUG nova.virt.libvirt.driver [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct 14 05:13:37 np0005486808 nova_compute[259627]: 2025-10-14 09:13:37.072 2 DEBUG nova.compute.manager [req-bbef9a92-c891-4740-9000-fb0337ae0f02 req-2a738638-0e40-4584-a357-4c6a10ebfe5e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Received event network-vif-plugged-33bf675d-c42f-486f-b483-87fa5091b0ef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:13:37 np0005486808 nova_compute[259627]: 2025-10-14 09:13:37.073 2 DEBUG oslo_concurrency.lockutils [req-bbef9a92-c891-4740-9000-fb0337ae0f02 req-2a738638-0e40-4584-a357-4c6a10ebfe5e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:13:37 np0005486808 nova_compute[259627]: 2025-10-14 09:13:37.073 2 DEBUG oslo_concurrency.lockutils [req-bbef9a92-c891-4740-9000-fb0337ae0f02 req-2a738638-0e40-4584-a357-4c6a10ebfe5e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:13:37 np0005486808 nova_compute[259627]: 2025-10-14 09:13:37.073 2 DEBUG oslo_concurrency.lockutils [req-bbef9a92-c891-4740-9000-fb0337ae0f02 req-2a738638-0e40-4584-a357-4c6a10ebfe5e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:13:37 np0005486808 nova_compute[259627]: 2025-10-14 09:13:37.073 2 DEBUG nova.compute.manager [req-bbef9a92-c891-4740-9000-fb0337ae0f02 req-2a738638-0e40-4584-a357-4c6a10ebfe5e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Processing event network-vif-plugged-33bf675d-c42f-486f-b483-87fa5091b0ef _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:13:37 np0005486808 podman[352930]: 2025-10-14 09:13:37.123358019 +0000 UTC m=+0.068843616 container create f05cf73534208456390ebd0387a1a7059b49edf38e29d501bde84b2252daae46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:13:37 np0005486808 systemd[1]: Started libpod-conmon-f05cf73534208456390ebd0387a1a7059b49edf38e29d501bde84b2252daae46.scope.
Oct 14 05:13:37 np0005486808 nova_compute[259627]: 2025-10-14 09:13:37.177 2 DEBUG nova.network.neutron [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Successfully updated port: 05e92470-2658-4ea2-9c44-e91cd5226905 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:13:37 np0005486808 podman[352930]: 2025-10-14 09:13:37.090799207 +0000 UTC m=+0.036284824 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:13:37 np0005486808 nova_compute[259627]: 2025-10-14 09:13:37.190 2 DEBUG oslo_concurrency.lockutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "refresh_cache-d46b6953-9413-4e6a-94f7-7b5ac9634c16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:13:37 np0005486808 nova_compute[259627]: 2025-10-14 09:13:37.190 2 DEBUG oslo_concurrency.lockutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquired lock "refresh_cache-d46b6953-9413-4e6a-94f7-7b5ac9634c16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:13:37 np0005486808 nova_compute[259627]: 2025-10-14 09:13:37.191 2 DEBUG nova.network.neutron [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:13:37 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:13:37 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4d550779a9bd77bd7e2ed61b22d656826dfaf6fc1b4e1ee3150f2fd155f6368/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:13:37 np0005486808 podman[352930]: 2025-10-14 09:13:37.227361311 +0000 UTC m=+0.172846928 container init f05cf73534208456390ebd0387a1a7059b49edf38e29d501bde84b2252daae46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 14 05:13:37 np0005486808 podman[352930]: 2025-10-14 09:13:37.234560758 +0000 UTC m=+0.180046355 container start f05cf73534208456390ebd0387a1a7059b49edf38e29d501bde84b2252daae46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 14 05:13:37 np0005486808 neutron-haproxy-ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407[352946]: [NOTICE]   (352950) : New worker (352970) forked
Oct 14 05:13:37 np0005486808 neutron-haproxy-ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407[352946]: [NOTICE]   (352950) : Loading success.
Oct 14 05:13:37 np0005486808 nova_compute[259627]: 2025-10-14 09:13:37.386 2 DEBUG nova.network.neutron [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:13:37 np0005486808 nova_compute[259627]: 2025-10-14 09:13:37.729 2 DEBUG nova.compute.manager [req-b4b026c3-d700-4231-9ad0-51a114e64e02 req-d92ef929-3ec3-4cc8-b7cd-37d568e14be0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Received event network-changed-05e92470-2658-4ea2-9c44-e91cd5226905 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:13:37 np0005486808 nova_compute[259627]: 2025-10-14 09:13:37.729 2 DEBUG nova.compute.manager [req-b4b026c3-d700-4231-9ad0-51a114e64e02 req-d92ef929-3ec3-4cc8-b7cd-37d568e14be0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Refreshing instance network info cache due to event network-changed-05e92470-2658-4ea2-9c44-e91cd5226905. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:13:37 np0005486808 nova_compute[259627]: 2025-10-14 09:13:37.730 2 DEBUG oslo_concurrency.lockutils [req-b4b026c3-d700-4231-9ad0-51a114e64e02 req-d92ef929-3ec3-4cc8-b7cd-37d568e14be0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-d46b6953-9413-4e6a-94f7-7b5ac9634c16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:13:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:13:37 np0005486808 nova_compute[259627]: 2025-10-14 09:13:37.862 2 DEBUG nova.compute.manager [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:13:37 np0005486808 nova_compute[259627]: 2025-10-14 09:13:37.865 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433217.8635738, fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:13:37 np0005486808 nova_compute[259627]: 2025-10-14 09:13:37.865 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] VM Started (Lifecycle Event)#033[00m
Oct 14 05:13:37 np0005486808 nova_compute[259627]: 2025-10-14 09:13:37.869 2 DEBUG nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:13:37 np0005486808 nova_compute[259627]: 2025-10-14 09:13:37.873 2 INFO nova.virt.libvirt.driver [-] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Instance spawned successfully.#033[00m
Oct 14 05:13:37 np0005486808 nova_compute[259627]: 2025-10-14 09:13:37.874 2 DEBUG nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:13:37 np0005486808 nova_compute[259627]: 2025-10-14 09:13:37.897 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:13:37 np0005486808 nova_compute[259627]: 2025-10-14 09:13:37.906 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:13:37 np0005486808 nova_compute[259627]: 2025-10-14 09:13:37.914 2 DEBUG nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:13:37 np0005486808 nova_compute[259627]: 2025-10-14 09:13:37.915 2 DEBUG nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:13:37 np0005486808 nova_compute[259627]: 2025-10-14 09:13:37.916 2 DEBUG nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:13:37 np0005486808 nova_compute[259627]: 2025-10-14 09:13:37.917 2 DEBUG nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:13:37 np0005486808 nova_compute[259627]: 2025-10-14 09:13:37.918 2 DEBUG nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:13:37 np0005486808 nova_compute[259627]: 2025-10-14 09:13:37.919 2 DEBUG nova.virt.libvirt.driver [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:13:37 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1775: 305 pgs: 305 active+clean; 213 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 14 05:13:37 np0005486808 nova_compute[259627]: 2025-10-14 09:13:37.960 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:13:37 np0005486808 nova_compute[259627]: 2025-10-14 09:13:37.961 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433217.8637056, fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:13:37 np0005486808 nova_compute[259627]: 2025-10-14 09:13:37.961 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:13:37 np0005486808 nova_compute[259627]: 2025-10-14 09:13:37.996 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:13:38 np0005486808 nova_compute[259627]: 2025-10-14 09:13:38.003 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433217.8683548, fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:13:38 np0005486808 nova_compute[259627]: 2025-10-14 09:13:38.003 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:13:38 np0005486808 nova_compute[259627]: 2025-10-14 09:13:38.008 2 INFO nova.compute.manager [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Took 7.82 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:13:38 np0005486808 nova_compute[259627]: 2025-10-14 09:13:38.009 2 DEBUG nova.compute.manager [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:13:38 np0005486808 nova_compute[259627]: 2025-10-14 09:13:38.035 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:13:38 np0005486808 nova_compute[259627]: 2025-10-14 09:13:38.039 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:13:38 np0005486808 nova_compute[259627]: 2025-10-14 09:13:38.068 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:13:38 np0005486808 nova_compute[259627]: 2025-10-14 09:13:38.085 2 INFO nova.compute.manager [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Took 8.90 seconds to build instance.#033[00m
Oct 14 05:13:38 np0005486808 nova_compute[259627]: 2025-10-14 09:13:38.127 2 DEBUG oslo_concurrency.lockutils [None req-e93993a2-8277-496e-8027-d46007e8114e 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.017s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:13:38 np0005486808 nova_compute[259627]: 2025-10-14 09:13:38.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:39 np0005486808 kernel: tap4f827284-f3 (unregistering): left promiscuous mode
Oct 14 05:13:39 np0005486808 NetworkManager[44885]: <info>  [1760433219.2313] device (tap4f827284-f3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:13:39 np0005486808 ovn_controller[152662]: 2025-10-14T09:13:39Z|00984|binding|INFO|Releasing lport 4f827284-f357-43c5-bdde-c69731b52914 from this chassis (sb_readonly=0)
Oct 14 05:13:39 np0005486808 nova_compute[259627]: 2025-10-14 09:13:39.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:39 np0005486808 ovn_controller[152662]: 2025-10-14T09:13:39Z|00985|binding|INFO|Setting lport 4f827284-f357-43c5-bdde-c69731b52914 down in Southbound
Oct 14 05:13:39 np0005486808 ovn_controller[152662]: 2025-10-14T09:13:39Z|00986|binding|INFO|Removing iface tap4f827284-f3 ovn-installed in OVS
Oct 14 05:13:39 np0005486808 nova_compute[259627]: 2025-10-14 09:13:39.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:39.292 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:d7:f7 10.100.0.7'], port_security=['fa:16:3e:8b:d7:f7 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2534f8b9-e832-4b78-ada4-e551429bdc75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '517aafb84156407c8672042097e3ef4f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '572acc55-453a-444a-ab8d-a15e14283f7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=927296e1-b389-4596-b9be-8cf735b93ca2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=4f827284-f357-43c5-bdde-c69731b52914) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:13:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:39.294 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 4f827284-f357-43c5-bdde-c69731b52914 in datapath a49b41b4-2559-4a22-a274-a6c7bbe75f2c unbound from our chassis#033[00m
Oct 14 05:13:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:39.297 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a49b41b4-2559-4a22-a274-a6c7bbe75f2c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:13:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:39.298 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[14ab801f-280f-410b-975f-6f7b57340d4e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:39.300 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c namespace which is not needed anymore#033[00m
Oct 14 05:13:39 np0005486808 nova_compute[259627]: 2025-10-14 09:13:39.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:39 np0005486808 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d00000056.scope: Deactivated successfully.
Oct 14 05:13:39 np0005486808 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d00000056.scope: Consumed 15.601s CPU time.
Oct 14 05:13:39 np0005486808 systemd-machined[214636]: Machine qemu-107-instance-00000056 terminated.
Oct 14 05:13:39 np0005486808 nova_compute[259627]: 2025-10-14 09:13:39.390 2 DEBUG nova.network.neutron [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Updating instance_info_cache with network_info: [{"id": "05e92470-2658-4ea2-9c44-e91cd5226905", "address": "fa:16:3e:7f:fb:45", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05e92470-26", "ovs_interfaceid": "05e92470-2658-4ea2-9c44-e91cd5226905", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:13:39 np0005486808 nova_compute[259627]: 2025-10-14 09:13:39.436 2 DEBUG oslo_concurrency.lockutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Releasing lock "refresh_cache-d46b6953-9413-4e6a-94f7-7b5ac9634c16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:13:39 np0005486808 nova_compute[259627]: 2025-10-14 09:13:39.437 2 DEBUG nova.compute.manager [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Instance network_info: |[{"id": "05e92470-2658-4ea2-9c44-e91cd5226905", "address": "fa:16:3e:7f:fb:45", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05e92470-26", "ovs_interfaceid": "05e92470-2658-4ea2-9c44-e91cd5226905", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:13:39 np0005486808 nova_compute[259627]: 2025-10-14 09:13:39.438 2 DEBUG oslo_concurrency.lockutils [req-b4b026c3-d700-4231-9ad0-51a114e64e02 req-d92ef929-3ec3-4cc8-b7cd-37d568e14be0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-d46b6953-9413-4e6a-94f7-7b5ac9634c16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:13:39 np0005486808 nova_compute[259627]: 2025-10-14 09:13:39.438 2 DEBUG nova.network.neutron [req-b4b026c3-d700-4231-9ad0-51a114e64e02 req-d92ef929-3ec3-4cc8-b7cd-37d568e14be0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Refreshing network info cache for port 05e92470-2658-4ea2-9c44-e91cd5226905 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:13:39 np0005486808 nova_compute[259627]: 2025-10-14 09:13:39.442 2 DEBUG nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Start _get_guest_xml network_info=[{"id": "05e92470-2658-4ea2-9c44-e91cd5226905", "address": "fa:16:3e:7f:fb:45", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05e92470-26", "ovs_interfaceid": "05e92470-2658-4ea2-9c44-e91cd5226905", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:13:39 np0005486808 nova_compute[259627]: 2025-10-14 09:13:39.446 2 WARNING nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:13:39 np0005486808 nova_compute[259627]: 2025-10-14 09:13:39.452 2 DEBUG nova.virt.libvirt.host [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:13:39 np0005486808 nova_compute[259627]: 2025-10-14 09:13:39.453 2 DEBUG nova.virt.libvirt.host [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:13:39 np0005486808 nova_compute[259627]: 2025-10-14 09:13:39.456 2 DEBUG nova.virt.libvirt.host [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:13:39 np0005486808 nova_compute[259627]: 2025-10-14 09:13:39.457 2 DEBUG nova.virt.libvirt.host [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:13:39 np0005486808 nova_compute[259627]: 2025-10-14 09:13:39.457 2 DEBUG nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:13:39 np0005486808 nova_compute[259627]: 2025-10-14 09:13:39.458 2 DEBUG nova.virt.hardware [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:13:39 np0005486808 nova_compute[259627]: 2025-10-14 09:13:39.458 2 DEBUG nova.virt.hardware [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:13:39 np0005486808 nova_compute[259627]: 2025-10-14 09:13:39.459 2 DEBUG nova.virt.hardware [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:13:39 np0005486808 nova_compute[259627]: 2025-10-14 09:13:39.459 2 DEBUG nova.virt.hardware [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:13:39 np0005486808 nova_compute[259627]: 2025-10-14 09:13:39.460 2 DEBUG nova.virt.hardware [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:13:39 np0005486808 nova_compute[259627]: 2025-10-14 09:13:39.460 2 DEBUG nova.virt.hardware [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:13:39 np0005486808 nova_compute[259627]: 2025-10-14 09:13:39.460 2 DEBUG nova.virt.hardware [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:13:39 np0005486808 nova_compute[259627]: 2025-10-14 09:13:39.461 2 DEBUG nova.virt.hardware [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:13:39 np0005486808 nova_compute[259627]: 2025-10-14 09:13:39.461 2 DEBUG nova.virt.hardware [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:13:39 np0005486808 nova_compute[259627]: 2025-10-14 09:13:39.461 2 DEBUG nova.virt.hardware [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:13:39 np0005486808 nova_compute[259627]: 2025-10-14 09:13:39.462 2 DEBUG nova.virt.hardware [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:13:39 np0005486808 nova_compute[259627]: 2025-10-14 09:13:39.465 2 DEBUG oslo_concurrency.processutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:13:39 np0005486808 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[347030]: [NOTICE]   (347034) : haproxy version is 2.8.14-c23fe91
Oct 14 05:13:39 np0005486808 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[347030]: [NOTICE]   (347034) : path to executable is /usr/sbin/haproxy
Oct 14 05:13:39 np0005486808 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[347030]: [WARNING]  (347034) : Exiting Master process...
Oct 14 05:13:39 np0005486808 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[347030]: [WARNING]  (347034) : Exiting Master process...
Oct 14 05:13:39 np0005486808 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[347030]: [ALERT]    (347034) : Current worker (347036) exited with code 143 (Terminated)
Oct 14 05:13:39 np0005486808 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[347030]: [WARNING]  (347034) : All workers exited. Exiting... (0)
Oct 14 05:13:39 np0005486808 systemd[1]: libpod-993e2ded677d0879cc8ca199b149c7650aea4311c2053b17479e0731c08d9864.scope: Deactivated successfully.
Oct 14 05:13:39 np0005486808 podman[353025]: 2025-10-14 09:13:39.522886544 +0000 UTC m=+0.066785546 container died 993e2ded677d0879cc8ca199b149c7650aea4311c2053b17479e0731c08d9864 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 14 05:13:39 np0005486808 systemd[1]: var-lib-containers-storage-overlay-0a5b768a60b37b396b810a626d7725822e051f41211fb6861ae019660974357e-merged.mount: Deactivated successfully.
Oct 14 05:13:39 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-993e2ded677d0879cc8ca199b149c7650aea4311c2053b17479e0731c08d9864-userdata-shm.mount: Deactivated successfully.
Oct 14 05:13:39 np0005486808 podman[353025]: 2025-10-14 09:13:39.59664178 +0000 UTC m=+0.140540782 container cleanup 993e2ded677d0879cc8ca199b149c7650aea4311c2053b17479e0731c08d9864 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:13:39 np0005486808 systemd[1]: libpod-conmon-993e2ded677d0879cc8ca199b149c7650aea4311c2053b17479e0731c08d9864.scope: Deactivated successfully.
Oct 14 05:13:39 np0005486808 podman[353062]: 2025-10-14 09:13:39.671848282 +0000 UTC m=+0.049137911 container remove 993e2ded677d0879cc8ca199b149c7650aea4311c2053b17479e0731c08d9864 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 05:13:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:39.677 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[06ba8c69-b12a-49e1-ad05-f745af81a250]: (4, ('Tue Oct 14 09:13:39 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c (993e2ded677d0879cc8ca199b149c7650aea4311c2053b17479e0731c08d9864)\n993e2ded677d0879cc8ca199b149c7650aea4311c2053b17479e0731c08d9864\nTue Oct 14 09:13:39 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c (993e2ded677d0879cc8ca199b149c7650aea4311c2053b17479e0731c08d9864)\n993e2ded677d0879cc8ca199b149c7650aea4311c2053b17479e0731c08d9864\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:39.679 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8013aff2-05de-4d62-bd12-0655d0a99608]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:39.681 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa49b41b4-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:13:39 np0005486808 nova_compute[259627]: 2025-10-14 09:13:39.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:39 np0005486808 nova_compute[259627]: 2025-10-14 09:13:39.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:39 np0005486808 kernel: tapa49b41b4-20: left promiscuous mode
Oct 14 05:13:39 np0005486808 nova_compute[259627]: 2025-10-14 09:13:39.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:39.721 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[209d4c43-76a4-45f6-8018-e3ab3d3fa6dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:39.748 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ca146342-17e2-4e8f-a71c-5e3a762277ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:39.749 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[83cfeb60-1cb6-4d4e-a80a-0068ea039b76]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:39 np0005486808 nova_compute[259627]: 2025-10-14 09:13:39.752 2 DEBUG nova.compute.manager [req-5d4affe1-37a4-4f2d-8af6-ded5b7f69cf5 req-fbef0be6-4dd9-4a50-9ff4-33dc3e53e2d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Received event network-vif-plugged-33bf675d-c42f-486f-b483-87fa5091b0ef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:13:39 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 05:13:39 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 25K writes, 98K keys, 25K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.03 MB/s#012Cumulative WAL: 25K writes, 8861 syncs, 2.85 writes per sync, written: 0.09 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 8289 writes, 32K keys, 8289 commit groups, 1.0 writes per commit group, ingest: 31.87 MB, 0.05 MB/s#012Interval WAL: 8289 writes, 3376 syncs, 2.46 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 05:13:39 np0005486808 nova_compute[259627]: 2025-10-14 09:13:39.760 2 DEBUG oslo_concurrency.lockutils [req-5d4affe1-37a4-4f2d-8af6-ded5b7f69cf5 req-fbef0be6-4dd9-4a50-9ff4-33dc3e53e2d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:13:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:39.766 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b2b152a8-adfd-477d-8868-ae0e654c9179]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691206, 'reachable_time': 22013, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353100, 'error': None, 'target': 'ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:39.768 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:13:39 np0005486808 nova_compute[259627]: 2025-10-14 09:13:39.768 2 DEBUG oslo_concurrency.lockutils [req-5d4affe1-37a4-4f2d-8af6-ded5b7f69cf5 req-fbef0be6-4dd9-4a50-9ff4-33dc3e53e2d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:13:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:39.768 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[c5f5306a-af5d-41d0-bb34-c40aa0ca1c14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:39 np0005486808 systemd[1]: run-netns-ovnmeta\x2da49b41b4\x2d2559\x2d4a22\x2da274\x2da6c7bbe75f2c.mount: Deactivated successfully.
Oct 14 05:13:39 np0005486808 nova_compute[259627]: 2025-10-14 09:13:39.775 2 DEBUG oslo_concurrency.lockutils [req-5d4affe1-37a4-4f2d-8af6-ded5b7f69cf5 req-fbef0be6-4dd9-4a50-9ff4-33dc3e53e2d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:13:39 np0005486808 nova_compute[259627]: 2025-10-14 09:13:39.776 2 DEBUG nova.compute.manager [req-5d4affe1-37a4-4f2d-8af6-ded5b7f69cf5 req-fbef0be6-4dd9-4a50-9ff4-33dc3e53e2d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] No waiting events found dispatching network-vif-plugged-33bf675d-c42f-486f-b483-87fa5091b0ef pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:13:39 np0005486808 nova_compute[259627]: 2025-10-14 09:13:39.776 2 WARNING nova.compute.manager [req-5d4affe1-37a4-4f2d-8af6-ded5b7f69cf5 req-fbef0be6-4dd9-4a50-9ff4-33dc3e53e2d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Received unexpected event network-vif-plugged-33bf675d-c42f-486f-b483-87fa5091b0ef for instance with vm_state active and task_state None.#033[00m
Oct 14 05:13:39 np0005486808 nova_compute[259627]: 2025-10-14 09:13:39.904 2 DEBUG nova.compute.manager [req-69c4434a-df56-48df-822d-a12ea5448ca7 req-a8c3b9f0-046a-4616-8aeb-e8f760919c1e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received event network-vif-unplugged-4f827284-f357-43c5-bdde-c69731b52914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:13:39 np0005486808 nova_compute[259627]: 2025-10-14 09:13:39.905 2 DEBUG oslo_concurrency.lockutils [req-69c4434a-df56-48df-822d-a12ea5448ca7 req-a8c3b9f0-046a-4616-8aeb-e8f760919c1e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:13:39 np0005486808 nova_compute[259627]: 2025-10-14 09:13:39.906 2 DEBUG oslo_concurrency.lockutils [req-69c4434a-df56-48df-822d-a12ea5448ca7 req-a8c3b9f0-046a-4616-8aeb-e8f760919c1e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:13:39 np0005486808 nova_compute[259627]: 2025-10-14 09:13:39.906 2 DEBUG oslo_concurrency.lockutils [req-69c4434a-df56-48df-822d-a12ea5448ca7 req-a8c3b9f0-046a-4616-8aeb-e8f760919c1e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:13:39 np0005486808 nova_compute[259627]: 2025-10-14 09:13:39.906 2 DEBUG nova.compute.manager [req-69c4434a-df56-48df-822d-a12ea5448ca7 req-a8c3b9f0-046a-4616-8aeb-e8f760919c1e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] No waiting events found dispatching network-vif-unplugged-4f827284-f357-43c5-bdde-c69731b52914 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:13:39 np0005486808 nova_compute[259627]: 2025-10-14 09:13:39.907 2 WARNING nova.compute.manager [req-69c4434a-df56-48df-822d-a12ea5448ca7 req-a8c3b9f0-046a-4616-8aeb-e8f760919c1e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received unexpected event network-vif-unplugged-4f827284-f357-43c5-bdde-c69731b52914 for instance with vm_state active and task_state shelving.#033[00m
Oct 14 05:13:39 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1776: 305 pgs: 305 active+clean; 213 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 14 05:13:39 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:13:39 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1715862422' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:13:39 np0005486808 nova_compute[259627]: 2025-10-14 09:13:39.969 2 DEBUG oslo_concurrency.processutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:13:39 np0005486808 nova_compute[259627]: 2025-10-14 09:13:39.990 2 DEBUG nova.storage.rbd_utils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image d46b6953-9413-4e6a-94f7-7b5ac9634c16_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:13:39 np0005486808 nova_compute[259627]: 2025-10-14 09:13:39.993 2 DEBUG oslo_concurrency.processutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.036 2 INFO nova.virt.libvirt.driver [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Instance shutdown successfully after 3 seconds.#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.047 2 INFO nova.virt.libvirt.driver [-] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Instance destroyed successfully.#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.048 2 DEBUG nova.objects.instance [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lazy-loading 'numa_topology' on Instance uuid 2534f8b9-e832-4b78-ada4-e551429bdc75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.306 2 DEBUG oslo_concurrency.lockutils [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Acquiring lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.307 2 DEBUG oslo_concurrency.lockutils [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.308 2 DEBUG oslo_concurrency.lockutils [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Acquiring lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.309 2 DEBUG oslo_concurrency.lockutils [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.309 2 DEBUG oslo_concurrency.lockutils [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.311 2 INFO nova.compute.manager [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Terminating instance#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.313 2 DEBUG nova.compute.manager [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:13:40 np0005486808 kernel: tap33bf675d-c4 (unregistering): left promiscuous mode
Oct 14 05:13:40 np0005486808 NetworkManager[44885]: <info>  [1760433220.3565] device (tap33bf675d-c4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:40 np0005486808 ovn_controller[152662]: 2025-10-14T09:13:40Z|00987|binding|INFO|Releasing lport 33bf675d-c42f-486f-b483-87fa5091b0ef from this chassis (sb_readonly=0)
Oct 14 05:13:40 np0005486808 ovn_controller[152662]: 2025-10-14T09:13:40Z|00988|binding|INFO|Setting lport 33bf675d-c42f-486f-b483-87fa5091b0ef down in Southbound
Oct 14 05:13:40 np0005486808 ovn_controller[152662]: 2025-10-14T09:13:40Z|00989|binding|INFO|Removing iface tap33bf675d-c4 ovn-installed in OVS
Oct 14 05:13:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:40.425 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:b9:3e 10.100.0.9'], port_security=['fa:16:3e:9b:b9:3e 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d74d8641-f56f-4d53-bd5c-d5364a316407', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dffdc75b179b426c85be76e05489a77a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '403c796c-1a50-44ae-b551-913e7b6b57c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c445f64b-1565-4e25-83d1-f207b66a7e54, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=33bf675d-c42f-486f-b483-87fa5091b0ef) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:13:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:40.427 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 33bf675d-c42f-486f-b483-87fa5091b0ef in datapath d74d8641-f56f-4d53-bd5c-d5364a316407 unbound from our chassis#033[00m
Oct 14 05:13:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:40.430 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d74d8641-f56f-4d53-bd5c-d5364a316407, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:13:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:40.431 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5b8cd2e3-abd1-49dd-86b5-d398ad9ece54]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:40.432 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407 namespace which is not needed anymore#033[00m
Oct 14 05:13:40 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:13:40 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3699146280' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.460 2 DEBUG oslo_concurrency.processutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.463 2 DEBUG nova.virt.libvirt.vif [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:13:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-1224416488',display_name='tempest-₡-1224416488',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--1224416488',id=95,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0a080fae2f3c4e39a6cca225203f5ec6',ramdisk_id='',reservation_id='r-c5eeedgr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-2060951674',owner_user_name='tempest-ServersTestJSON-2060951674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=
None,updated_at=2025-10-14T09:13:32Z,user_data=None,user_id='a287ef08fc5c4f218bf06cd2c7ed021e',uuid=d46b6953-9413-4e6a-94f7-7b5ac9634c16,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05e92470-2658-4ea2-9c44-e91cd5226905", "address": "fa:16:3e:7f:fb:45", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05e92470-26", "ovs_interfaceid": "05e92470-2658-4ea2-9c44-e91cd5226905", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.463 2 DEBUG nova.network.os_vif_util [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converting VIF {"id": "05e92470-2658-4ea2-9c44-e91cd5226905", "address": "fa:16:3e:7f:fb:45", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05e92470-26", "ovs_interfaceid": "05e92470-2658-4ea2-9c44-e91cd5226905", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.465 2 DEBUG nova.network.os_vif_util [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:fb:45,bridge_name='br-int',has_traffic_filtering=True,id=05e92470-2658-4ea2-9c44-e91cd5226905,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05e92470-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:13:40 np0005486808 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d0000005e.scope: Deactivated successfully.
Oct 14 05:13:40 np0005486808 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d0000005e.scope: Consumed 3.948s CPU time.
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.467 2 DEBUG nova.objects.instance [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'pci_devices' on Instance uuid d46b6953-9413-4e6a-94f7-7b5ac9634c16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.470 2 INFO nova.virt.libvirt.driver [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Beginning cold snapshot process#033[00m
Oct 14 05:13:40 np0005486808 systemd-machined[214636]: Machine qemu-116-instance-0000005e terminated.
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.506 2 DEBUG nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:13:40 np0005486808 nova_compute[259627]:  <uuid>d46b6953-9413-4e6a-94f7-7b5ac9634c16</uuid>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:  <name>instance-0000005f</name>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:13:40 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:      <nova:name>tempest-₡-1224416488</nova:name>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:13:39</nova:creationTime>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:13:40 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:        <nova:user uuid="a287ef08fc5c4f218bf06cd2c7ed021e">tempest-ServersTestJSON-2060951674-project-member</nova:user>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:        <nova:project uuid="0a080fae2f3c4e39a6cca225203f5ec6">tempest-ServersTestJSON-2060951674</nova:project>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:        <nova:port uuid="05e92470-2658-4ea2-9c44-e91cd5226905">
Oct 14 05:13:40 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:      <entry name="serial">d46b6953-9413-4e6a-94f7-7b5ac9634c16</entry>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:      <entry name="uuid">d46b6953-9413-4e6a-94f7-7b5ac9634c16</entry>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:13:40 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/d46b6953-9413-4e6a-94f7-7b5ac9634c16_disk">
Oct 14 05:13:40 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:13:40 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:13:40 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/d46b6953-9413-4e6a-94f7-7b5ac9634c16_disk.config">
Oct 14 05:13:40 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:13:40 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:13:40 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:7f:fb:45"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:      <target dev="tap05e92470-26"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:13:40 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/d46b6953-9413-4e6a-94f7-7b5ac9634c16/console.log" append="off"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:13:40 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:13:40 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:13:40 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:13:40 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:13:40 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.507 2 DEBUG nova.compute.manager [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Preparing to wait for external event network-vif-plugged-05e92470-2658-4ea2-9c44-e91cd5226905 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.507 2 DEBUG oslo_concurrency.lockutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.507 2 DEBUG oslo_concurrency.lockutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.507 2 DEBUG oslo_concurrency.lockutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.508 2 DEBUG nova.virt.libvirt.vif [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:13:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-1224416488',display_name='tempest-₡-1224416488',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--1224416488',id=95,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0a080fae2f3c4e39a6cca225203f5ec6',ramdisk_id='',reservation_id='r-c5eeedgr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-2060951674',owner_user_name='tempest-ServersTestJSON-2060951674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trus
ted_certs=None,updated_at=2025-10-14T09:13:32Z,user_data=None,user_id='a287ef08fc5c4f218bf06cd2c7ed021e',uuid=d46b6953-9413-4e6a-94f7-7b5ac9634c16,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05e92470-2658-4ea2-9c44-e91cd5226905", "address": "fa:16:3e:7f:fb:45", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05e92470-26", "ovs_interfaceid": "05e92470-2658-4ea2-9c44-e91cd5226905", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.508 2 DEBUG nova.network.os_vif_util [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converting VIF {"id": "05e92470-2658-4ea2-9c44-e91cd5226905", "address": "fa:16:3e:7f:fb:45", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05e92470-26", "ovs_interfaceid": "05e92470-2658-4ea2-9c44-e91cd5226905", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.509 2 DEBUG nova.network.os_vif_util [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:fb:45,bridge_name='br-int',has_traffic_filtering=True,id=05e92470-2658-4ea2-9c44-e91cd5226905,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05e92470-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.509 2 DEBUG os_vif [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:fb:45,bridge_name='br-int',has_traffic_filtering=True,id=05e92470-2658-4ea2-9c44-e91cd5226905,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05e92470-26') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.510 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.511 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.513 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap05e92470-26, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.514 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap05e92470-26, col_values=(('external_ids', {'iface-id': '05e92470-2658-4ea2-9c44-e91cd5226905', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7f:fb:45', 'vm-uuid': 'd46b6953-9413-4e6a-94f7-7b5ac9634c16'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:40 np0005486808 NetworkManager[44885]: <info>  [1760433220.5167] manager: (tap05e92470-26): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/401)
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.525 2 INFO os_vif [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:fb:45,bridge_name='br-int',has_traffic_filtering=True,id=05e92470-2658-4ea2-9c44-e91cd5226905,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05e92470-26')#033[00m
Oct 14 05:13:40 np0005486808 NetworkManager[44885]: <info>  [1760433220.5338] manager: (tap33bf675d-c4): new Tun device (/org/freedesktop/NetworkManager/Devices/402)
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.545 2 INFO nova.virt.libvirt.driver [-] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Instance destroyed successfully.#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.546 2 DEBUG nova.objects.instance [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Lazy-loading 'resources' on Instance uuid fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.574 2 DEBUG nova.virt.libvirt.vif [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:13:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1606551920',display_name='tempest-ServerAddressesTestJSON-server-1606551920',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1606551920',id=94,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:13:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dffdc75b179b426c85be76e05489a77a',ramdisk_id='',reservation_id='r-io2ehiyc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-1889343396',owner_user_name='tempest-ServerAddressesTestJSON-1889343396-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:13:38Z,user_data=None,user_id='9c3ef1ada21b467b9c1717b790fabb93',uuid=fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "33bf675d-c42f-486f-b483-87fa5091b0ef", "address": "fa:16:3e:9b:b9:3e", "network": {"id": "d74d8641-f56f-4d53-bd5c-d5364a316407", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-41681639-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dffdc75b179b426c85be76e05489a77a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33bf675d-c4", "ovs_interfaceid": "33bf675d-c42f-486f-b483-87fa5091b0ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.574 2 DEBUG nova.network.os_vif_util [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Converting VIF {"id": "33bf675d-c42f-486f-b483-87fa5091b0ef", "address": "fa:16:3e:9b:b9:3e", "network": {"id": "d74d8641-f56f-4d53-bd5c-d5364a316407", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-41681639-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dffdc75b179b426c85be76e05489a77a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33bf675d-c4", "ovs_interfaceid": "33bf675d-c42f-486f-b483-87fa5091b0ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.575 2 DEBUG nova.network.os_vif_util [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:b9:3e,bridge_name='br-int',has_traffic_filtering=True,id=33bf675d-c42f-486f-b483-87fa5091b0ef,network=Network(d74d8641-f56f-4d53-bd5c-d5364a316407),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33bf675d-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.575 2 DEBUG os_vif [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:b9:3e,bridge_name='br-int',has_traffic_filtering=True,id=33bf675d-c42f-486f-b483-87fa5091b0ef,network=Network(d74d8641-f56f-4d53-bd5c-d5364a316407),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33bf675d-c4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.579 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap33bf675d-c4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:13:40 np0005486808 neutron-haproxy-ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407[352946]: [NOTICE]   (352950) : haproxy version is 2.8.14-c23fe91
Oct 14 05:13:40 np0005486808 neutron-haproxy-ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407[352946]: [NOTICE]   (352950) : path to executable is /usr/sbin/haproxy
Oct 14 05:13:40 np0005486808 neutron-haproxy-ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407[352946]: [WARNING]  (352950) : Exiting Master process...
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:40 np0005486808 neutron-haproxy-ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407[352946]: [ALERT]    (352950) : Current worker (352970) exited with code 143 (Terminated)
Oct 14 05:13:40 np0005486808 neutron-haproxy-ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407[352946]: [WARNING]  (352950) : All workers exited. Exiting... (0)
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:13:40 np0005486808 systemd[1]: libpod-f05cf73534208456390ebd0387a1a7059b49edf38e29d501bde84b2252daae46.scope: Deactivated successfully.
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.590 2 INFO os_vif [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:b9:3e,bridge_name='br-int',has_traffic_filtering=True,id=33bf675d-c42f-486f-b483-87fa5091b0ef,network=Network(d74d8641-f56f-4d53-bd5c-d5364a316407),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33bf675d-c4')#033[00m
Oct 14 05:13:40 np0005486808 podman[353171]: 2025-10-14 09:13:40.593145841 +0000 UTC m=+0.048041334 container died f05cf73534208456390ebd0387a1a7059b49edf38e29d501bde84b2252daae46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 14 05:13:40 np0005486808 ceph-mgr[74543]: [devicehealth INFO root] Check health
Oct 14 05:13:40 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f05cf73534208456390ebd0387a1a7059b49edf38e29d501bde84b2252daae46-userdata-shm.mount: Deactivated successfully.
Oct 14 05:13:40 np0005486808 systemd[1]: var-lib-containers-storage-overlay-d4d550779a9bd77bd7e2ed61b22d656826dfaf6fc1b4e1ee3150f2fd155f6368-merged.mount: Deactivated successfully.
Oct 14 05:13:40 np0005486808 podman[353171]: 2025-10-14 09:13:40.652936723 +0000 UTC m=+0.107832236 container cleanup f05cf73534208456390ebd0387a1a7059b49edf38e29d501bde84b2252daae46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:13:40 np0005486808 systemd[1]: libpod-conmon-f05cf73534208456390ebd0387a1a7059b49edf38e29d501bde84b2252daae46.scope: Deactivated successfully.
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.663 2 DEBUG nova.virt.libvirt.imagebackend [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] No parent info for a4789543-f429-47d7-9f79-80a9d90a59f9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.667 2 DEBUG nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.668 2 DEBUG nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.668 2 DEBUG nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] No VIF found with MAC fa:16:3e:7f:fb:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.668 2 INFO nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Using config drive#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.689 2 DEBUG nova.storage.rbd_utils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image d46b6953-9413-4e6a-94f7-7b5ac9634c16_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:13:40 np0005486808 podman[353259]: 2025-10-14 09:13:40.723980883 +0000 UTC m=+0.050431293 container remove f05cf73534208456390ebd0387a1a7059b49edf38e29d501bde84b2252daae46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 14 05:13:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:40.729 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8ffcf5d0-c313-4dad-98f8-9d515a816fa3]: (4, ('Tue Oct 14 09:13:40 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407 (f05cf73534208456390ebd0387a1a7059b49edf38e29d501bde84b2252daae46)\nf05cf73534208456390ebd0387a1a7059b49edf38e29d501bde84b2252daae46\nTue Oct 14 09:13:40 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407 (f05cf73534208456390ebd0387a1a7059b49edf38e29d501bde84b2252daae46)\nf05cf73534208456390ebd0387a1a7059b49edf38e29d501bde84b2252daae46\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:40.731 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9e55f1b2-7c1b-49a0-ae6f-f8f4ac879062]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:40.732 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd74d8641-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:40 np0005486808 kernel: tapd74d8641-f0: left promiscuous mode
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:40.759 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a3cf1ef2-6241-4357-9a35-7db204346758]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:40.786 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[247bacb9-a35e-4495-b334-c79bafd7506f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:40.788 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f39e7d7b-5598-45d9-88c0-5c020f292e24]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:40.803 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2646f598-1957-44fb-914b-9ffb25f39190]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699516, 'reachable_time': 16683, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353300, 'error': None, 'target': 'ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:40 np0005486808 systemd[1]: run-netns-ovnmeta\x2dd74d8641\x2df56f\x2d4d53\x2dbd5c\x2dd5364a316407.mount: Deactivated successfully.
Oct 14 05:13:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:40.808 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d74d8641-f56f-4d53-bd5c-d5364a316407 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:13:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:40.808 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[37b53176-9c5e-4772-b806-53c9981d51a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.943 2 DEBUG nova.storage.rbd_utils [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] creating snapshot(b246f6613b8749a58e5b42598e1fdcf4) on rbd image(2534f8b9-e832-4b78-ada4-e551429bdc75_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 14 05:13:40 np0005486808 nova_compute[259627]: 2025-10-14 09:13:40.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:13:41 np0005486808 nova_compute[259627]: 2025-10-14 09:13:41.025 2 INFO nova.virt.libvirt.driver [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Deleting instance files /var/lib/nova/instances/fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf_del#033[00m
Oct 14 05:13:41 np0005486808 nova_compute[259627]: 2025-10-14 09:13:41.026 2 INFO nova.virt.libvirt.driver [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Deletion of /var/lib/nova/instances/fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf_del complete#033[00m
Oct 14 05:13:41 np0005486808 nova_compute[259627]: 2025-10-14 09:13:41.093 2 INFO nova.compute.manager [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:13:41 np0005486808 nova_compute[259627]: 2025-10-14 09:13:41.093 2 DEBUG oslo.service.loopingcall [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:13:41 np0005486808 nova_compute[259627]: 2025-10-14 09:13:41.094 2 DEBUG nova.compute.manager [-] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:13:41 np0005486808 nova_compute[259627]: 2025-10-14 09:13:41.094 2 DEBUG nova.network.neutron [-] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:13:41 np0005486808 nova_compute[259627]: 2025-10-14 09:13:41.276 2 DEBUG nova.network.neutron [req-b4b026c3-d700-4231-9ad0-51a114e64e02 req-d92ef929-3ec3-4cc8-b7cd-37d568e14be0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Updated VIF entry in instance network info cache for port 05e92470-2658-4ea2-9c44-e91cd5226905. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:13:41 np0005486808 nova_compute[259627]: 2025-10-14 09:13:41.277 2 DEBUG nova.network.neutron [req-b4b026c3-d700-4231-9ad0-51a114e64e02 req-d92ef929-3ec3-4cc8-b7cd-37d568e14be0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Updating instance_info_cache with network_info: [{"id": "05e92470-2658-4ea2-9c44-e91cd5226905", "address": "fa:16:3e:7f:fb:45", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05e92470-26", "ovs_interfaceid": "05e92470-2658-4ea2-9c44-e91cd5226905", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:13:41 np0005486808 nova_compute[259627]: 2025-10-14 09:13:41.291 2 INFO nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Creating config drive at /var/lib/nova/instances/d46b6953-9413-4e6a-94f7-7b5ac9634c16/disk.config#033[00m
Oct 14 05:13:41 np0005486808 nova_compute[259627]: 2025-10-14 09:13:41.301 2 DEBUG oslo_concurrency.processutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d46b6953-9413-4e6a-94f7-7b5ac9634c16/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpywyd55ii execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:13:41 np0005486808 nova_compute[259627]: 2025-10-14 09:13:41.367 2 DEBUG oslo_concurrency.lockutils [req-b4b026c3-d700-4231-9ad0-51a114e64e02 req-d92ef929-3ec3-4cc8-b7cd-37d568e14be0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-d46b6953-9413-4e6a-94f7-7b5ac9634c16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:13:41 np0005486808 nova_compute[259627]: 2025-10-14 09:13:41.467 2 DEBUG oslo_concurrency.processutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d46b6953-9413-4e6a-94f7-7b5ac9634c16/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpywyd55ii" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:13:41 np0005486808 nova_compute[259627]: 2025-10-14 09:13:41.489 2 DEBUG nova.storage.rbd_utils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image d46b6953-9413-4e6a-94f7-7b5ac9634c16_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:13:41 np0005486808 nova_compute[259627]: 2025-10-14 09:13:41.493 2 DEBUG oslo_concurrency.processutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d46b6953-9413-4e6a-94f7-7b5ac9634c16/disk.config d46b6953-9413-4e6a-94f7-7b5ac9634c16_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:13:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e252 do_prune osdmap full prune enabled
Oct 14 05:13:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e253 e253: 3 total, 3 up, 3 in
Oct 14 05:13:41 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e253: 3 total, 3 up, 3 in
Oct 14 05:13:41 np0005486808 nova_compute[259627]: 2025-10-14 09:13:41.661 2 DEBUG nova.storage.rbd_utils [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] cloning vms/2534f8b9-e832-4b78-ada4-e551429bdc75_disk@b246f6613b8749a58e5b42598e1fdcf4 to images/7b536765-adaa-4682-86b5-b3ff0be769bf clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct 14 05:13:41 np0005486808 nova_compute[259627]: 2025-10-14 09:13:41.720 2 DEBUG oslo_concurrency.processutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d46b6953-9413-4e6a-94f7-7b5ac9634c16/disk.config d46b6953-9413-4e6a-94f7-7b5ac9634c16_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.226s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:13:41 np0005486808 nova_compute[259627]: 2025-10-14 09:13:41.721 2 INFO nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Deleting local config drive /var/lib/nova/instances/d46b6953-9413-4e6a-94f7-7b5ac9634c16/disk.config because it was imported into RBD.#033[00m
Oct 14 05:13:41 np0005486808 systemd-udevd[353056]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:13:41 np0005486808 kernel: tap05e92470-26: entered promiscuous mode
Oct 14 05:13:41 np0005486808 NetworkManager[44885]: <info>  [1760433221.7894] manager: (tap05e92470-26): new Tun device (/org/freedesktop/NetworkManager/Devices/403)
Oct 14 05:13:41 np0005486808 ovn_controller[152662]: 2025-10-14T09:13:41Z|00990|binding|INFO|Claiming lport 05e92470-2658-4ea2-9c44-e91cd5226905 for this chassis.
Oct 14 05:13:41 np0005486808 ovn_controller[152662]: 2025-10-14T09:13:41Z|00991|binding|INFO|05e92470-2658-4ea2-9c44-e91cd5226905: Claiming fa:16:3e:7f:fb:45 10.100.0.13
Oct 14 05:13:41 np0005486808 nova_compute[259627]: 2025-10-14 09:13:41.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:41 np0005486808 NetworkManager[44885]: <info>  [1760433221.8140] device (tap05e92470-26): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:13:41 np0005486808 NetworkManager[44885]: <info>  [1760433221.8155] device (tap05e92470-26): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:13:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:41.819 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:fb:45 10.100.0.13'], port_security=['fa:16:3e:7f:fb:45 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd46b6953-9413-4e6a-94f7-7b5ac9634c16', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbecee11-4892-4e36-88d8-98879af7bb1e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a080fae2f3c4e39a6cca225203f5ec6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '03d7ceab-aac8-44ec-88a3-f0ef79d050f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d77764d-de2c-4408-8882-77d03cb7288a, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=05e92470-2658-4ea2-9c44-e91cd5226905) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:13:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:41.821 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 05e92470-2658-4ea2-9c44-e91cd5226905 in datapath fbecee11-4892-4e36-88d8-98879af7bb1e bound to our chassis#033[00m
Oct 14 05:13:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:41.823 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbecee11-4892-4e36-88d8-98879af7bb1e#033[00m
Oct 14 05:13:41 np0005486808 systemd-machined[214636]: New machine qemu-117-instance-0000005f.
Oct 14 05:13:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:41.845 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a32a488e-a37e-4d0e-92ea-1af611545dba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:41.848 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfbecee11-41 in ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:13:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:41.850 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfbecee11-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:13:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:41.851 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[742e9c03-a074-474b-96c7-b7ba656403c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:41 np0005486808 systemd[1]: Started Virtual Machine qemu-117-instance-0000005f.
Oct 14 05:13:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:41.852 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8265d6b3-1529-4028-b274-6828c8b316c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:41 np0005486808 nova_compute[259627]: 2025-10-14 09:13:41.859 2 DEBUG nova.storage.rbd_utils [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] flattening images/7b536765-adaa-4682-86b5-b3ff0be769bf flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct 14 05:13:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:41.871 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[6fb1c6ce-34e9-4010-b467-3972898c2aef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:41.892 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[49836e53-c2b3-4db8-9382-7abbf5513d79]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:41 np0005486808 ovn_controller[152662]: 2025-10-14T09:13:41Z|00992|binding|INFO|Setting lport 05e92470-2658-4ea2-9c44-e91cd5226905 ovn-installed in OVS
Oct 14 05:13:41 np0005486808 ovn_controller[152662]: 2025-10-14T09:13:41Z|00993|binding|INFO|Setting lport 05e92470-2658-4ea2-9c44-e91cd5226905 up in Southbound
Oct 14 05:13:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:41.931 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[6b77a2d2-1524-49ce-a67f-1de9b43992df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:41 np0005486808 NetworkManager[44885]: <info>  [1760433221.9424] manager: (tapfbecee11-40): new Veth device (/org/freedesktop/NetworkManager/Devices/404)
Oct 14 05:13:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:41.943 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[618bb619-f995-402c-8c2c-4681afe1fe82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:41 np0005486808 nova_compute[259627]: 2025-10-14 09:13:41.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:41 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1778: 305 pgs: 305 active+clean; 213 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.3 MiB/s wr, 152 op/s
Oct 14 05:13:41 np0005486808 nova_compute[259627]: 2025-10-14 09:13:41.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:13:41 np0005486808 nova_compute[259627]: 2025-10-14 09:13:41.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 14 05:13:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:41.997 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4b041b7a-3104-4726-bc93-a674608cf3f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:42 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:42.002 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[6fa00bf5-85b3-4a17-98bf-bb1800a6a69a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:42 np0005486808 nova_compute[259627]: 2025-10-14 09:13:42.009 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 14 05:13:42 np0005486808 NetworkManager[44885]: <info>  [1760433222.0328] device (tapfbecee11-40): carrier: link connected
Oct 14 05:13:42 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:42.040 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1952914e-6739-4b60-a846-3dc874b20857]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:42 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:42.062 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ab8e0a76-f81b-4be4-889f-17acb66819cb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbecee11-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:b3:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 700079, 'reachable_time': 38382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353465, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:42 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:42.087 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7a18ae3b-1356-4ad0-bbc6-dddd1fb97bb5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe15:b3fd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700079, 'tstamp': 700079}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353466, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:42 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:42.109 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[03406c87-82a7-445b-bed7-542518e07f5e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbecee11-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:b3:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 700079, 'reachable_time': 38382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 353467, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:42 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:42.163 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8a5cfd85-ebcc-4a03-8599-b55019caae6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:42 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:42.243 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9f8307a0-c379-4ef9-8e72-b0206745cc23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:42 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:42.244 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbecee11-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:13:42 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:42.245 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:13:42 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:42.246 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbecee11-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:13:42 np0005486808 NetworkManager[44885]: <info>  [1760433222.2482] manager: (tapfbecee11-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/405)
Oct 14 05:13:42 np0005486808 kernel: tapfbecee11-40: entered promiscuous mode
Oct 14 05:13:42 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:42.253 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbecee11-40, col_values=(('external_ids', {'iface-id': 'b1a95e67-ca39-4a56-9a91-3165df166ea9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:13:42 np0005486808 ovn_controller[152662]: 2025-10-14T09:13:42Z|00994|binding|INFO|Releasing lport b1a95e67-ca39-4a56-9a91-3165df166ea9 from this chassis (sb_readonly=0)
Oct 14 05:13:42 np0005486808 nova_compute[259627]: 2025-10-14 09:13:42.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:42 np0005486808 nova_compute[259627]: 2025-10-14 09:13:42.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:42 np0005486808 nova_compute[259627]: 2025-10-14 09:13:42.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:42 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:42.290 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fbecee11-4892-4e36-88d8-98879af7bb1e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fbecee11-4892-4e36-88d8-98879af7bb1e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:13:42 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:42.291 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[31049402-5145-4342-9ff2-ad02b262d79a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:42 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:42.291 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:13:42 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:13:42 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:13:42 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-fbecee11-4892-4e36-88d8-98879af7bb1e
Oct 14 05:13:42 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:13:42 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:13:42 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:13:42 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/fbecee11-4892-4e36-88d8-98879af7bb1e.pid.haproxy
Oct 14 05:13:42 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:13:42 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:13:42 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:13:42 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:13:42 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:13:42 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:13:42 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:13:42 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:13:42 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:13:42 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:13:42 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:13:42 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:13:42 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:13:42 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:13:42 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:13:42 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:13:42 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:13:42 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:13:42 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:13:42 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:13:42 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID fbecee11-4892-4e36-88d8-98879af7bb1e
Oct 14 05:13:42 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:13:42 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:42.292 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'env', 'PROCESS_TAG=haproxy-fbecee11-4892-4e36-88d8-98879af7bb1e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fbecee11-4892-4e36-88d8-98879af7bb1e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:13:42 np0005486808 nova_compute[259627]: 2025-10-14 09:13:42.298 2 DEBUG nova.storage.rbd_utils [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] removing snapshot(b246f6613b8749a58e5b42598e1fdcf4) on rbd image(2534f8b9-e832-4b78-ada4-e551429bdc75_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct 14 05:13:42 np0005486808 nova_compute[259627]: 2025-10-14 09:13:42.512 2 DEBUG nova.compute.manager [req-087c5674-1de1-45ab-8cf6-a193e3c6026d req-8d83c897-4a91-4e36-a786-302257097de0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Received event network-vif-unplugged-33bf675d-c42f-486f-b483-87fa5091b0ef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:13:42 np0005486808 nova_compute[259627]: 2025-10-14 09:13:42.513 2 DEBUG oslo_concurrency.lockutils [req-087c5674-1de1-45ab-8cf6-a193e3c6026d req-8d83c897-4a91-4e36-a786-302257097de0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:13:42 np0005486808 nova_compute[259627]: 2025-10-14 09:13:42.513 2 DEBUG oslo_concurrency.lockutils [req-087c5674-1de1-45ab-8cf6-a193e3c6026d req-8d83c897-4a91-4e36-a786-302257097de0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:13:42 np0005486808 nova_compute[259627]: 2025-10-14 09:13:42.513 2 DEBUG oslo_concurrency.lockutils [req-087c5674-1de1-45ab-8cf6-a193e3c6026d req-8d83c897-4a91-4e36-a786-302257097de0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:13:42 np0005486808 nova_compute[259627]: 2025-10-14 09:13:42.514 2 DEBUG nova.compute.manager [req-087c5674-1de1-45ab-8cf6-a193e3c6026d req-8d83c897-4a91-4e36-a786-302257097de0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] No waiting events found dispatching network-vif-unplugged-33bf675d-c42f-486f-b483-87fa5091b0ef pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:13:42 np0005486808 nova_compute[259627]: 2025-10-14 09:13:42.514 2 DEBUG nova.compute.manager [req-087c5674-1de1-45ab-8cf6-a193e3c6026d req-8d83c897-4a91-4e36-a786-302257097de0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Received event network-vif-unplugged-33bf675d-c42f-486f-b483-87fa5091b0ef for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:13:42 np0005486808 nova_compute[259627]: 2025-10-14 09:13:42.514 2 DEBUG nova.compute.manager [req-087c5674-1de1-45ab-8cf6-a193e3c6026d req-8d83c897-4a91-4e36-a786-302257097de0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Received event network-vif-plugged-33bf675d-c42f-486f-b483-87fa5091b0ef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:13:42 np0005486808 nova_compute[259627]: 2025-10-14 09:13:42.514 2 DEBUG oslo_concurrency.lockutils [req-087c5674-1de1-45ab-8cf6-a193e3c6026d req-8d83c897-4a91-4e36-a786-302257097de0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:13:42 np0005486808 nova_compute[259627]: 2025-10-14 09:13:42.514 2 DEBUG oslo_concurrency.lockutils [req-087c5674-1de1-45ab-8cf6-a193e3c6026d req-8d83c897-4a91-4e36-a786-302257097de0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:13:42 np0005486808 nova_compute[259627]: 2025-10-14 09:13:42.515 2 DEBUG oslo_concurrency.lockutils [req-087c5674-1de1-45ab-8cf6-a193e3c6026d req-8d83c897-4a91-4e36-a786-302257097de0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:13:42 np0005486808 nova_compute[259627]: 2025-10-14 09:13:42.515 2 DEBUG nova.compute.manager [req-087c5674-1de1-45ab-8cf6-a193e3c6026d req-8d83c897-4a91-4e36-a786-302257097de0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] No waiting events found dispatching network-vif-plugged-33bf675d-c42f-486f-b483-87fa5091b0ef pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:13:42 np0005486808 nova_compute[259627]: 2025-10-14 09:13:42.515 2 WARNING nova.compute.manager [req-087c5674-1de1-45ab-8cf6-a193e3c6026d req-8d83c897-4a91-4e36-a786-302257097de0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Received unexpected event network-vif-plugged-33bf675d-c42f-486f-b483-87fa5091b0ef for instance with vm_state active and task_state deleting.#033[00m
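The lockutils sequence above (acquire `"<uuid>-events"`, pop, release, then "No waiting events found" and the WARNING about an unexpected event) is nova's external-event dispatch pattern: a waiter is popped under a per-instance lock, and an event with no registered waiter is logged rather than dropped silently. A simplified stand-in for that pattern, using a plain `threading.Lock` in place of oslo's lockutils (this is a sketch, not nova's implementation):

```python
# Simplified stand-in for nova's InstanceEvents pop pattern: a waiter for
# "network-vif-*" events is popped under a lock; a missing waiter yields
# None, which the caller reports as "No waiting events found".
import threading

class InstanceEvents:
    def __init__(self):
        self._lock = threading.Lock()   # stands in for oslo lockutils' named lock
        self._events = {}               # {instance_uuid: {event_name: waiter}}

    def prepare(self, instance, name, waiter):
        with self._lock:
            self._events.setdefault(instance, {})[name] = waiter

    def pop_instance_event(self, instance, name):
        # Acquire "<uuid>-events", pop the waiter (or None), release.
        with self._lock:
            return self._events.get(instance, {}).pop(name, None)

ev = InstanceEvents()
uuid = "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf"
# No waiter registered -> the "No waiting events found dispatching ..." path.
print(ev.pop_instance_event(uuid, "network-vif-unplugged-33bf675d"))
```

In the real code the lock name is `"<uuid>-events"` and the `waited`/`held` durations are what lockutils logs at acquire and release.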
Oct 14 05:13:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e253 do_prune osdmap full prune enabled
Oct 14 05:13:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e254 e254: 3 total, 3 up, 3 in
Oct 14 05:13:42 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e254: 3 total, 3 up, 3 in
Oct 14 05:13:42 np0005486808 nova_compute[259627]: 2025-10-14 09:13:42.649 2 DEBUG nova.storage.rbd_utils [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] creating snapshot(snap) on rbd image(7b536765-adaa-4682-86b5-b3ff0be769bf) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 14 05:13:42 np0005486808 podman[353565]: 2025-10-14 09:13:42.73865754 +0000 UTC m=+0.072981079 container create 5e1919be510758b0a67ed4c5de831fdefc63c3939eb1c546188a8b35cac7308d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 14 05:13:42 np0005486808 podman[353565]: 2025-10-14 09:13:42.705743819 +0000 UTC m=+0.040067368 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:13:42 np0005486808 systemd[1]: Started libpod-conmon-5e1919be510758b0a67ed4c5de831fdefc63c3939eb1c546188a8b35cac7308d.scope.
Oct 14 05:13:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:13:42 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:13:42 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b011f8a1370d78d7a628d9aaf1bebb9c89844fdc71c852fd6c3b4b8bc5c51cd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:13:42 np0005486808 nova_compute[259627]: 2025-10-14 09:13:42.854 2 DEBUG nova.network.neutron [-] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:13:42 np0005486808 podman[353565]: 2025-10-14 09:13:42.869922662 +0000 UTC m=+0.204246221 container init 5e1919be510758b0a67ed4c5de831fdefc63c3939eb1c546188a8b35cac7308d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:13:42 np0005486808 nova_compute[259627]: 2025-10-14 09:13:42.872 2 INFO nova.compute.manager [-] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Took 1.78 seconds to deallocate network for instance.#033[00m
Oct 14 05:13:42 np0005486808 podman[353565]: 2025-10-14 09:13:42.882610125 +0000 UTC m=+0.216933654 container start 5e1919be510758b0a67ed4c5de831fdefc63c3939eb1c546188a8b35cac7308d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:13:42 np0005486808 nova_compute[259627]: 2025-10-14 09:13:42.916 2 DEBUG nova.compute.manager [req-c6bda9e7-ba5b-450c-9961-26b073f07315 req-179cd4e0-7741-4baa-a15d-cb23be0be5b3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received event network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:13:42 np0005486808 nova_compute[259627]: 2025-10-14 09:13:42.917 2 DEBUG oslo_concurrency.lockutils [req-c6bda9e7-ba5b-450c-9961-26b073f07315 req-179cd4e0-7741-4baa-a15d-cb23be0be5b3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:13:42 np0005486808 nova_compute[259627]: 2025-10-14 09:13:42.917 2 DEBUG oslo_concurrency.lockutils [req-c6bda9e7-ba5b-450c-9961-26b073f07315 req-179cd4e0-7741-4baa-a15d-cb23be0be5b3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:13:42 np0005486808 nova_compute[259627]: 2025-10-14 09:13:42.918 2 DEBUG oslo_concurrency.lockutils [req-c6bda9e7-ba5b-450c-9961-26b073f07315 req-179cd4e0-7741-4baa-a15d-cb23be0be5b3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:13:42 np0005486808 nova_compute[259627]: 2025-10-14 09:13:42.918 2 DEBUG nova.compute.manager [req-c6bda9e7-ba5b-450c-9961-26b073f07315 req-179cd4e0-7741-4baa-a15d-cb23be0be5b3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] No waiting events found dispatching network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:13:42 np0005486808 nova_compute[259627]: 2025-10-14 09:13:42.918 2 WARNING nova.compute.manager [req-c6bda9e7-ba5b-450c-9961-26b073f07315 req-179cd4e0-7741-4baa-a15d-cb23be0be5b3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received unexpected event network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Oct 14 05:13:42 np0005486808 neutron-haproxy-ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e[353599]: [NOTICE]   (353603) : New worker (353605) forked
Oct 14 05:13:42 np0005486808 neutron-haproxy-ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e[353599]: [NOTICE]   (353603) : Loading success.
Oct 14 05:13:42 np0005486808 nova_compute[259627]: 2025-10-14 09:13:42.924 2 DEBUG oslo_concurrency.lockutils [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:13:42 np0005486808 nova_compute[259627]: 2025-10-14 09:13:42.925 2 DEBUG oslo_concurrency.lockutils [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:13:43 np0005486808 nova_compute[259627]: 2025-10-14 09:13:43.042 2 DEBUG oslo_concurrency.processutils [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:13:43 np0005486808 nova_compute[259627]: 2025-10-14 09:13:43.110 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433223.0681987, d46b6953-9413-4e6a-94f7-7b5ac9634c16 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:13:43 np0005486808 nova_compute[259627]: 2025-10-14 09:13:43.111 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] VM Started (Lifecycle Event)#033[00m
Oct 14 05:13:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:13:43 np0005486808 nova_compute[259627]: 2025-10-14 09:13:43.142 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:13:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:13:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:13:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:13:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0014536835807774956 of space, bias 1.0, pg target 0.4361050742332487 quantized to 32 (current 32)
Oct 14 05:13:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:13:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:13:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:13:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:13:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:13:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 14 05:13:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:13:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:13:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:13:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:13:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:13:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:13:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:13:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:13:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:13:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:13:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:13:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
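The pg_autoscaler lines above are internally consistent: each raw "pg target" equals the pool's space ratio times its bias times 300, which matches the default `mon_target_pg_per_osd` of 100 multiplied by the 3 OSDs this cluster reports, before the result is quantized to a power of two and clamped to a floor. A hedged reconstruction of that arithmetic (the real mgr module has more inputs; the floor handling here is simplified):

```python
# Hedged reconstruction of the pg_autoscaler arithmetic in the log above.
# Assumes the default mon_target_pg_per_osd=100 and the 3 OSDs reported
# by ceph-mon; the actual Ceph mgr module considers more factors.
import math

def pg_target(usage_ratio: float, bias: float,
              num_osds: int = 3, target_pg_per_osd: int = 100) -> float:
    """Raw (unquantized) PG target for one pool."""
    return usage_ratio * bias * num_osds * target_pg_per_osd

def quantize(target: float, floor: int = 1) -> int:
    """Round up to a power of two, never below `floor` (simplified)."""
    if target <= floor:
        return floor
    return 2 ** math.ceil(math.log2(target))

# Ratios copied from the log lines above:
print(pg_target(0.0014536835807774956, 1.0))  # pool 'vms'
print(pg_target(5.087256625643029e-07, 4.0))  # pool 'cephfs.cephfs.meta' (bias 4)
```

Plugging in the logged ratios reproduces the logged targets (0.4361... for `vms`, 0.000610... for `cephfs.cephfs.meta`); the "quantized to 32 (current 32)" values reflect the pools' existing pg_num acting as the floor.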
Oct 14 05:13:43 np0005486808 nova_compute[259627]: 2025-10-14 09:13:43.160 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433223.0686975, d46b6953-9413-4e6a-94f7-7b5ac9634c16 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:13:43 np0005486808 nova_compute[259627]: 2025-10-14 09:13:43.161 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:13:43 np0005486808 nova_compute[259627]: 2025-10-14 09:13:43.187 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:13:43 np0005486808 nova_compute[259627]: 2025-10-14 09:13:43.192 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:13:43 np0005486808 nova_compute[259627]: 2025-10-14 09:13:43.217 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:13:43 np0005486808 nova_compute[259627]: 2025-10-14 09:13:43.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:13:43 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/519279891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:13:43 np0005486808 nova_compute[259627]: 2025-10-14 09:13:43.592 2 DEBUG oslo_concurrency.processutils [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
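The pair of processutils lines (Running cmd / CMD ... returned: 0 in 0.550s) shows nova shelling out to `ceph df --format=json` and timing it; the ceph-mon audit lines confirm the `df` command arriving as `client.openstack`. A runnable sketch of that run-and-parse step, with a stand-in echo command so it works without a Ceph cluster (the real call is the `ceph df` invocation from the log):

```python
# Illustrative sketch of the DEBUG lines above: run a JSON-producing
# command, time it, and decode its stdout. The real invocation is
#   ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf
# replaced here by a stand-in so the sketch runs anywhere.
import json
import subprocess
import time

def run_json_cmd(cmd: list) -> dict:
    """Run a command, log its duration processutils-style, parse JSON stdout."""
    start = time.monotonic()
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    elapsed = time.monotonic() - start
    print(f'CMD "{" ".join(cmd)}" returned: 0 in {elapsed:.3f}s')
    return json.loads(out.stdout)

# Stand-in command emitting a ceph-df-shaped payload (total from the log):
fake = ["python3", "-c",
        "print('{\"stats\": {\"total_bytes\": 64411926528}}')"]
stats = run_json_cmd(fake)
print(stats["stats"]["total_bytes"])
```

`check=True` raises on a nonzero exit status, mirroring processutils' error path when the command fails.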
Oct 14 05:13:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e254 do_prune osdmap full prune enabled
Oct 14 05:13:43 np0005486808 nova_compute[259627]: 2025-10-14 09:13:43.602 2 DEBUG nova.compute.provider_tree [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:13:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e255 e255: 3 total, 3 up, 3 in
Oct 14 05:13:43 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e255: 3 total, 3 up, 3 in
Oct 14 05:13:43 np0005486808 nova_compute[259627]: 2025-10-14 09:13:43.625 2 DEBUG nova.scheduler.client.report [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
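The inventory dict logged above carries the numbers placement uses to size the provider: effective capacity per resource class is `(total - reserved) * allocation_ratio`. A small sketch applying that formula to the exact inventory from the log (the formula is placement's standard capacity calculation; the helper itself is illustrative):

```python
# Hedged sketch: effective capacity from the inventory dict logged above,
# using placement's (total - reserved) * allocation_ratio formula.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}

def effective_capacity(inv: dict) -> dict:
    """Schedulable capacity per resource class."""
    return {rc: (v["total"] - v["reserved"]) * v["allocation_ratio"]
            for rc, v in inv.items()}

print(effective_capacity(inventory))
```

So this node can overcommit to 32 VCPUs (8 × 4.0) but only offers 7168 MB of RAM and about 52 GB of disk after reservations and the 0.9 disk ratio.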
Oct 14 05:13:43 np0005486808 nova_compute[259627]: 2025-10-14 09:13:43.651 2 DEBUG oslo_concurrency.lockutils [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:13:43 np0005486808 nova_compute[259627]: 2025-10-14 09:13:43.679 2 INFO nova.scheduler.client.report [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Deleted allocations for instance fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf#033[00m
Oct 14 05:13:43 np0005486808 nova_compute[259627]: 2025-10-14 09:13:43.759 2 DEBUG oslo_concurrency.lockutils [None req-e101297e-ba4a-4ea9-9697-a7f1c080a8c1 9c3ef1ada21b467b9c1717b790fabb93 dffdc75b179b426c85be76e05489a77a - - default default] Lock "fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.451s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:13:43 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1781: 305 pgs: 305 active+clean; 213 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 42 KiB/s wr, 156 op/s
Oct 14 05:13:44 np0005486808 nova_compute[259627]: 2025-10-14 09:13:44.760 2 DEBUG oslo_concurrency.lockutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "b595141f-123e-4250-bfec-888d866fd0c6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:13:44 np0005486808 nova_compute[259627]: 2025-10-14 09:13:44.761 2 DEBUG oslo_concurrency.lockutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:13:44 np0005486808 nova_compute[259627]: 2025-10-14 09:13:44.788 2 DEBUG nova.compute.manager [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:13:44 np0005486808 nova_compute[259627]: 2025-10-14 09:13:44.853 2 DEBUG oslo_concurrency.lockutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:13:44 np0005486808 nova_compute[259627]: 2025-10-14 09:13:44.854 2 DEBUG oslo_concurrency.lockutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:13:44 np0005486808 nova_compute[259627]: 2025-10-14 09:13:44.861 2 DEBUG nova.virt.hardware [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:13:44 np0005486808 nova_compute[259627]: 2025-10-14 09:13:44.862 2 INFO nova.compute.claims [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:13:45 np0005486808 nova_compute[259627]: 2025-10-14 09:13:45.016 2 DEBUG oslo_concurrency.processutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:13:45 np0005486808 nova_compute[259627]: 2025-10-14 09:13:45.081 2 INFO nova.virt.libvirt.driver [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Snapshot image upload complete#033[00m
Oct 14 05:13:45 np0005486808 nova_compute[259627]: 2025-10-14 09:13:45.083 2 DEBUG nova.compute.manager [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:13:45 np0005486808 nova_compute[259627]: 2025-10-14 09:13:45.086 2 DEBUG nova.compute.manager [req-2e8d67c7-cb07-4746-966b-2b315b9b3eaa req-8ab4f584-4588-4861-9d9d-0270af90c3da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Received event network-vif-deleted-33bf675d-c42f-486f-b483-87fa5091b0ef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:13:45 np0005486808 nova_compute[259627]: 2025-10-14 09:13:45.187 2 INFO nova.compute.manager [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Shelve offloading#033[00m
Oct 14 05:13:45 np0005486808 nova_compute[259627]: 2025-10-14 09:13:45.200 2 INFO nova.virt.libvirt.driver [-] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Instance destroyed successfully.#033[00m
Oct 14 05:13:45 np0005486808 nova_compute[259627]: 2025-10-14 09:13:45.201 2 DEBUG nova.compute.manager [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:13:45 np0005486808 nova_compute[259627]: 2025-10-14 09:13:45.205 2 DEBUG oslo_concurrency.lockutils [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:13:45 np0005486808 nova_compute[259627]: 2025-10-14 09:13:45.205 2 DEBUG oslo_concurrency.lockutils [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquired lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:13:45 np0005486808 nova_compute[259627]: 2025-10-14 09:13:45.206 2 DEBUG nova.network.neutron [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:13:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:13:45 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2528808103' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:13:45 np0005486808 nova_compute[259627]: 2025-10-14 09:13:45.554 2 DEBUG oslo_concurrency.processutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:13:45 np0005486808 nova_compute[259627]: 2025-10-14 09:13:45.562 2 DEBUG nova.compute.provider_tree [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 05:13:45 np0005486808 nova_compute[259627]: 2025-10-14 09:13:45.576 2 DEBUG nova.scheduler.client.report [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 05:13:45 np0005486808 nova_compute[259627]: 2025-10-14 09:13:45.602 2 DEBUG oslo_concurrency.lockutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:13:45 np0005486808 nova_compute[259627]: 2025-10-14 09:13:45.602 2 DEBUG nova.compute.manager [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 05:13:45 np0005486808 nova_compute[259627]: 2025-10-14 09:13:45.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:13:45 np0005486808 nova_compute[259627]: 2025-10-14 09:13:45.641 2 DEBUG nova.compute.manager [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 05:13:45 np0005486808 nova_compute[259627]: 2025-10-14 09:13:45.641 2 DEBUG nova.network.neutron [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 05:13:45 np0005486808 nova_compute[259627]: 2025-10-14 09:13:45.662 2 INFO nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 05:13:45 np0005486808 nova_compute[259627]: 2025-10-14 09:13:45.678 2 DEBUG nova.compute.manager [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 05:13:45 np0005486808 nova_compute[259627]: 2025-10-14 09:13:45.773 2 DEBUG nova.compute.manager [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 05:13:45 np0005486808 nova_compute[259627]: 2025-10-14 09:13:45.774 2 DEBUG nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 05:13:45 np0005486808 nova_compute[259627]: 2025-10-14 09:13:45.775 2 INFO nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Creating image(s)
Oct 14 05:13:45 np0005486808 nova_compute[259627]: 2025-10-14 09:13:45.795 2 DEBUG nova.storage.rbd_utils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image b595141f-123e-4250-bfec-888d866fd0c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:13:45 np0005486808 nova_compute[259627]: 2025-10-14 09:13:45.817 2 DEBUG nova.storage.rbd_utils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image b595141f-123e-4250-bfec-888d866fd0c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:13:45 np0005486808 nova_compute[259627]: 2025-10-14 09:13:45.839 2 DEBUG nova.storage.rbd_utils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image b595141f-123e-4250-bfec-888d866fd0c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:13:45 np0005486808 nova_compute[259627]: 2025-10-14 09:13:45.843 2 DEBUG oslo_concurrency.processutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:13:45 np0005486808 nova_compute[259627]: 2025-10-14 09:13:45.938 2 DEBUG oslo_concurrency.processutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:13:45 np0005486808 nova_compute[259627]: 2025-10-14 09:13:45.938 2 DEBUG oslo_concurrency.lockutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:13:45 np0005486808 nova_compute[259627]: 2025-10-14 09:13:45.939 2 DEBUG oslo_concurrency.lockutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:13:45 np0005486808 nova_compute[259627]: 2025-10-14 09:13:45.939 2 DEBUG oslo_concurrency.lockutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:13:45 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1782: 305 pgs: 305 active+clean; 246 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 7.8 MiB/s wr, 396 op/s
Oct 14 05:13:45 np0005486808 nova_compute[259627]: 2025-10-14 09:13:45.960 2 DEBUG nova.storage.rbd_utils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image b595141f-123e-4250-bfec-888d866fd0c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:13:45 np0005486808 nova_compute[259627]: 2025-10-14 09:13:45.962 2 DEBUG oslo_concurrency.processutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 b595141f-123e-4250-bfec-888d866fd0c6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:13:46 np0005486808 nova_compute[259627]: 2025-10-14 09:13:46.167 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:13:46 np0005486808 nova_compute[259627]: 2025-10-14 09:13:46.200 2 WARNING nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] While synchronizing instance power states, found 3 instances in the database and 2 instances on the hypervisor.
Oct 14 05:13:46 np0005486808 nova_compute[259627]: 2025-10-14 09:13:46.201 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Triggering sync for uuid 2534f8b9-e832-4b78-ada4-e551429bdc75 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 14 05:13:46 np0005486808 nova_compute[259627]: 2025-10-14 09:13:46.201 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Triggering sync for uuid d46b6953-9413-4e6a-94f7-7b5ac9634c16 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 14 05:13:46 np0005486808 nova_compute[259627]: 2025-10-14 09:13:46.201 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Triggering sync for uuid b595141f-123e-4250-bfec-888d866fd0c6 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 14 05:13:46 np0005486808 nova_compute[259627]: 2025-10-14 09:13:46.202 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "2534f8b9-e832-4b78-ada4-e551429bdc75" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:13:46 np0005486808 nova_compute[259627]: 2025-10-14 09:13:46.202 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:13:46 np0005486808 nova_compute[259627]: 2025-10-14 09:13:46.202 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "b595141f-123e-4250-bfec-888d866fd0c6" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:13:46 np0005486808 nova_compute[259627]: 2025-10-14 09:13:46.213 2 DEBUG oslo_concurrency.processutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 b595141f-123e-4250-bfec-888d866fd0c6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.251s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:13:46 np0005486808 nova_compute[259627]: 2025-10-14 09:13:46.272 2 DEBUG nova.storage.rbd_utils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] resizing rbd image b595141f-123e-4250-bfec-888d866fd0c6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 05:13:46 np0005486808 nova_compute[259627]: 2025-10-14 09:13:46.365 2 DEBUG nova.objects.instance [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'migration_context' on Instance uuid b595141f-123e-4250-bfec-888d866fd0c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:13:46 np0005486808 nova_compute[259627]: 2025-10-14 09:13:46.382 2 DEBUG nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 05:13:46 np0005486808 nova_compute[259627]: 2025-10-14 09:13:46.382 2 DEBUG nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Ensure instance console log exists: /var/lib/nova/instances/b595141f-123e-4250-bfec-888d866fd0c6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 05:13:46 np0005486808 nova_compute[259627]: 2025-10-14 09:13:46.383 2 DEBUG oslo_concurrency.lockutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:13:46 np0005486808 nova_compute[259627]: 2025-10-14 09:13:46.383 2 DEBUG oslo_concurrency.lockutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:13:46 np0005486808 nova_compute[259627]: 2025-10-14 09:13:46.383 2 DEBUG oslo_concurrency.lockutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:13:46 np0005486808 nova_compute[259627]: 2025-10-14 09:13:46.389 2 DEBUG nova.policy [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e992bcb79c4946a8985e3df25eb216ca', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2d24993a343a425dbddac7e32be0c86b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 05:13:47 np0005486808 nova_compute[259627]: 2025-10-14 09:13:47.346 2 DEBUG nova.network.neutron [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Updating instance_info_cache with network_info: [{"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 05:13:47 np0005486808 nova_compute[259627]: 2025-10-14 09:13:47.365 2 DEBUG oslo_concurrency.lockutils [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Releasing lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 05:13:47 np0005486808 podman[353824]: 2025-10-14 09:13:47.690755243 +0000 UTC m=+0.087547418 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=multipathd)
Oct 14 05:13:47 np0005486808 podman[353825]: 2025-10-14 09:13:47.691356538 +0000 UTC m=+0.087536878 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:13:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:13:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e255 do_prune osdmap full prune enabled
Oct 14 05:13:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e256 e256: 3 total, 3 up, 3 in
Oct 14 05:13:47 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e256: 3 total, 3 up, 3 in
Oct 14 05:13:47 np0005486808 nova_compute[259627]: 2025-10-14 09:13:47.891 2 DEBUG nova.compute.manager [req-4ce80f17-4f84-4466-8744-876de7db9d7d req-8ad290a1-aa46-4db7-8eea-0881f823ea33 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Received event network-vif-plugged-05e92470-2658-4ea2-9c44-e91cd5226905 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:13:47 np0005486808 nova_compute[259627]: 2025-10-14 09:13:47.891 2 DEBUG oslo_concurrency.lockutils [req-4ce80f17-4f84-4466-8744-876de7db9d7d req-8ad290a1-aa46-4db7-8eea-0881f823ea33 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:13:47 np0005486808 nova_compute[259627]: 2025-10-14 09:13:47.891 2 DEBUG oslo_concurrency.lockutils [req-4ce80f17-4f84-4466-8744-876de7db9d7d req-8ad290a1-aa46-4db7-8eea-0881f823ea33 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:13:47 np0005486808 nova_compute[259627]: 2025-10-14 09:13:47.892 2 DEBUG oslo_concurrency.lockutils [req-4ce80f17-4f84-4466-8744-876de7db9d7d req-8ad290a1-aa46-4db7-8eea-0881f823ea33 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:13:47 np0005486808 nova_compute[259627]: 2025-10-14 09:13:47.892 2 DEBUG nova.compute.manager [req-4ce80f17-4f84-4466-8744-876de7db9d7d req-8ad290a1-aa46-4db7-8eea-0881f823ea33 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Processing event network-vif-plugged-05e92470-2658-4ea2-9c44-e91cd5226905 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 05:13:47 np0005486808 nova_compute[259627]: 2025-10-14 09:13:47.893 2 DEBUG nova.compute.manager [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 05:13:47 np0005486808 nova_compute[259627]: 2025-10-14 09:13:47.907 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433227.9036868, d46b6953-9413-4e6a-94f7-7b5ac9634c16 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:13:47 np0005486808 nova_compute[259627]: 2025-10-14 09:13:47.907 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] VM Resumed (Lifecycle Event)
Oct 14 05:13:47 np0005486808 nova_compute[259627]: 2025-10-14 09:13:47.909 2 DEBUG nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 05:13:47 np0005486808 nova_compute[259627]: 2025-10-14 09:13:47.915 2 INFO nova.virt.libvirt.driver [-] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Instance spawned successfully.
Oct 14 05:13:47 np0005486808 nova_compute[259627]: 2025-10-14 09:13:47.916 2 DEBUG nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 05:13:47 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1784: 305 pgs: 305 active+clean; 246 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 239 op/s
Oct 14 05:13:47 np0005486808 nova_compute[259627]: 2025-10-14 09:13:47.959 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:13:47 np0005486808 nova_compute[259627]: 2025-10-14 09:13:47.968 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 05:13:47 np0005486808 nova_compute[259627]: 2025-10-14 09:13:47.973 2 DEBUG nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:13:47 np0005486808 nova_compute[259627]: 2025-10-14 09:13:47.974 2 DEBUG nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:13:47 np0005486808 nova_compute[259627]: 2025-10-14 09:13:47.975 2 DEBUG nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:13:47 np0005486808 nova_compute[259627]: 2025-10-14 09:13:47.975 2 DEBUG nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:13:47 np0005486808 nova_compute[259627]: 2025-10-14 09:13:47.976 2 DEBUG nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:13:47 np0005486808 nova_compute[259627]: 2025-10-14 09:13:47.977 2 DEBUG nova.virt.libvirt.driver [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:13:48 np0005486808 nova_compute[259627]: 2025-10-14 09:13:48.029 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 05:13:48 np0005486808 nova_compute[259627]: 2025-10-14 09:13:48.086 2 INFO nova.compute.manager [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Took 15.33 seconds to spawn the instance on the hypervisor.
Oct 14 05:13:48 np0005486808 nova_compute[259627]: 2025-10-14 09:13:48.087 2 DEBUG nova.compute.manager [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:13:48 np0005486808 nova_compute[259627]: 2025-10-14 09:13:48.172 2 INFO nova.compute.manager [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Took 17.11 seconds to build instance.
Oct 14 05:13:48 np0005486808 nova_compute[259627]: 2025-10-14 09:13:48.193 2 DEBUG oslo_concurrency.lockutils [None req-f562affe-2a5a-471c-8c32-8b9beba915e9 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.219s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:13:48 np0005486808 nova_compute[259627]: 2025-10-14 09:13:48.194 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 1.992s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:13:48 np0005486808 nova_compute[259627]: 2025-10-14 09:13:48.194 2 INFO nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 05:13:48 np0005486808 nova_compute[259627]: 2025-10-14 09:13:48.195 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:13:48 np0005486808 nova_compute[259627]: 2025-10-14 09:13:48.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:48 np0005486808 nova_compute[259627]: 2025-10-14 09:13:48.391 2 DEBUG nova.network.neutron [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Successfully created port: 7103ce4a-69e8-454b-aed3-251ecb109232 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:13:48 np0005486808 ovn_controller[152662]: 2025-10-14T09:13:48Z|00995|binding|INFO|Releasing lport b1a95e67-ca39-4a56-9a91-3165df166ea9 from this chassis (sb_readonly=0)
Oct 14 05:13:48 np0005486808 nova_compute[259627]: 2025-10-14 09:13:48.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:48 np0005486808 nova_compute[259627]: 2025-10-14 09:13:48.794 2 INFO nova.virt.libvirt.driver [-] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Instance destroyed successfully.#033[00m
Oct 14 05:13:48 np0005486808 nova_compute[259627]: 2025-10-14 09:13:48.794 2 DEBUG nova.objects.instance [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lazy-loading 'resources' on Instance uuid 2534f8b9-e832-4b78-ada4-e551429bdc75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:13:48 np0005486808 nova_compute[259627]: 2025-10-14 09:13:48.805 2 DEBUG nova.virt.libvirt.vif [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:12:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-17250352',display_name='tempest-ServersNegativeTestJSON-server-17250352',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-17250352',id=86,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:12:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='517aafb84156407c8672042097e3ef4f',ramdisk_id='',reservation_id='r-rj00rja0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1475695514',owner_user_name='tempest-ServersNegativeTestJSON-1475695514-project-member',shelved_at='2025-10-14T09:13:45.082732',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='7b536765-adaa-4682-86b5-b3ff0be769bf'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:13:40Z,user_data=None,user_id='92e59e145f6942b78d0ffbebc4d89e76',uuid=2534f8b9-e832-4b78-ada4-e551429bdc75,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:13:48 np0005486808 nova_compute[259627]: 2025-10-14 09:13:48.805 2 DEBUG nova.network.os_vif_util [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Converting VIF {"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:13:48 np0005486808 nova_compute[259627]: 2025-10-14 09:13:48.805 2 DEBUG nova.network.os_vif_util [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:d7:f7,bridge_name='br-int',has_traffic_filtering=True,id=4f827284-f357-43c5-bdde-c69731b52914,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f827284-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:13:48 np0005486808 nova_compute[259627]: 2025-10-14 09:13:48.806 2 DEBUG os_vif [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:d7:f7,bridge_name='br-int',has_traffic_filtering=True,id=4f827284-f357-43c5-bdde-c69731b52914,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f827284-f3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:13:48 np0005486808 nova_compute[259627]: 2025-10-14 09:13:48.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:48 np0005486808 nova_compute[259627]: 2025-10-14 09:13:48.808 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f827284-f3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:13:48 np0005486808 nova_compute[259627]: 2025-10-14 09:13:48.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:48 np0005486808 nova_compute[259627]: 2025-10-14 09:13:48.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:13:48 np0005486808 nova_compute[259627]: 2025-10-14 09:13:48.812 2 INFO os_vif [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:d7:f7,bridge_name='br-int',has_traffic_filtering=True,id=4f827284-f357-43c5-bdde-c69731b52914,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f827284-f3')#033[00m
Oct 14 05:13:49 np0005486808 nova_compute[259627]: 2025-10-14 09:13:49.187 2 INFO nova.virt.libvirt.driver [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Deleting instance files /var/lib/nova/instances/2534f8b9-e832-4b78-ada4-e551429bdc75_del#033[00m
Oct 14 05:13:49 np0005486808 nova_compute[259627]: 2025-10-14 09:13:49.188 2 INFO nova.virt.libvirt.driver [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Deletion of /var/lib/nova/instances/2534f8b9-e832-4b78-ada4-e551429bdc75_del complete#033[00m
Oct 14 05:13:49 np0005486808 nova_compute[259627]: 2025-10-14 09:13:49.285 2 INFO nova.scheduler.client.report [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Deleted allocations for instance 2534f8b9-e832-4b78-ada4-e551429bdc75#033[00m
Oct 14 05:13:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:13:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:13:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:13:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:13:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:13:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:13:49 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 94cdd114-f753-4819-94b2-3f2c875c0562 does not exist
Oct 14 05:13:49 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 831f4773-c2d3-4854-9d36-6c13bd915468 does not exist
Oct 14 05:13:49 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 5d2e485d-adab-4e41-ae34-780976b25a6f does not exist
Oct 14 05:13:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:13:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:13:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:13:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:13:49 np0005486808 nova_compute[259627]: 2025-10-14 09:13:49.322 2 DEBUG oslo_concurrency.lockutils [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:13:49 np0005486808 nova_compute[259627]: 2025-10-14 09:13:49.323 2 DEBUG oslo_concurrency.lockutils [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:13:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:13:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:13:49 np0005486808 nova_compute[259627]: 2025-10-14 09:13:49.410 2 DEBUG oslo_concurrency.processutils [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:13:49 np0005486808 nova_compute[259627]: 2025-10-14 09:13:49.531 2 DEBUG nova.network.neutron [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Successfully updated port: 7103ce4a-69e8-454b-aed3-251ecb109232 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:13:49 np0005486808 nova_compute[259627]: 2025-10-14 09:13:49.639 2 DEBUG oslo_concurrency.lockutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "refresh_cache-b595141f-123e-4250-bfec-888d866fd0c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:13:49 np0005486808 nova_compute[259627]: 2025-10-14 09:13:49.639 2 DEBUG oslo_concurrency.lockutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquired lock "refresh_cache-b595141f-123e-4250-bfec-888d866fd0c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:13:49 np0005486808 nova_compute[259627]: 2025-10-14 09:13:49.640 2 DEBUG nova.network.neutron [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:13:49 np0005486808 nova_compute[259627]: 2025-10-14 09:13:49.689 2 DEBUG nova.compute.manager [req-4a37d3ee-e0b7-4919-9f26-73d96d8a0c99 req-122a4080-8814-4438-b9aa-730643c129e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received event network-changed-7103ce4a-69e8-454b-aed3-251ecb109232 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:13:49 np0005486808 nova_compute[259627]: 2025-10-14 09:13:49.690 2 DEBUG nova.compute.manager [req-4a37d3ee-e0b7-4919-9f26-73d96d8a0c99 req-122a4080-8814-4438-b9aa-730643c129e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Refreshing instance network info cache due to event network-changed-7103ce4a-69e8-454b-aed3-251ecb109232. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:13:49 np0005486808 nova_compute[259627]: 2025-10-14 09:13:49.690 2 DEBUG oslo_concurrency.lockutils [req-4a37d3ee-e0b7-4919-9f26-73d96d8a0c99 req-122a4080-8814-4438-b9aa-730643c129e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-b595141f-123e-4250-bfec-888d866fd0c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:13:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:13:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/804750047' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:13:49 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:13:49 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:13:49 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:13:49 np0005486808 nova_compute[259627]: 2025-10-14 09:13:49.864 2 DEBUG oslo_concurrency.processutils [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:13:49 np0005486808 nova_compute[259627]: 2025-10-14 09:13:49.871 2 DEBUG nova.compute.provider_tree [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:13:49 np0005486808 nova_compute[259627]: 2025-10-14 09:13:49.901 2 DEBUG nova.scheduler.client.report [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:13:49 np0005486808 nova_compute[259627]: 2025-10-14 09:13:49.938 2 DEBUG oslo_concurrency.lockutils [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:13:49 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1785: 305 pgs: 305 active+clean; 246 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 6.4 MiB/s wr, 195 op/s
Oct 14 05:13:49 np0005486808 nova_compute[259627]: 2025-10-14 09:13:49.997 2 DEBUG oslo_concurrency.lockutils [None req-a2d1ebb9-bd49-4f52-baf3-0c06f75d83ab 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 13.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:13:49 np0005486808 nova_compute[259627]: 2025-10-14 09:13:49.998 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 3.796s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:13:49 np0005486808 nova_compute[259627]: 2025-10-14 09:13:49.998 2 INFO nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] During sync_power_state the instance has a pending task (shelving_offloading). Skip.#033[00m
Oct 14 05:13:49 np0005486808 nova_compute[259627]: 2025-10-14 09:13:49.998 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:13:50 np0005486808 podman[354174]: 2025-10-14 09:13:50.000132812 +0000 UTC m=+0.048449746 container create 6c42b2a695037a4ae3c3953fe0d856a72a63d5a45195309cdfdf7a64bb3981dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:13:50 np0005486808 systemd[1]: Started libpod-conmon-6c42b2a695037a4ae3c3953fe0d856a72a63d5a45195309cdfdf7a64bb3981dd.scope.
Oct 14 05:13:50 np0005486808 podman[354174]: 2025-10-14 09:13:49.985643384 +0000 UTC m=+0.033960358 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:13:50 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:13:50 np0005486808 podman[354174]: 2025-10-14 09:13:50.100641229 +0000 UTC m=+0.148958233 container init 6c42b2a695037a4ae3c3953fe0d856a72a63d5a45195309cdfdf7a64bb3981dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_rubin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:13:50 np0005486808 podman[354174]: 2025-10-14 09:13:50.112788478 +0000 UTC m=+0.161105442 container start 6c42b2a695037a4ae3c3953fe0d856a72a63d5a45195309cdfdf7a64bb3981dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_rubin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:13:50 np0005486808 podman[354174]: 2025-10-14 09:13:50.117274099 +0000 UTC m=+0.165591073 container attach 6c42b2a695037a4ae3c3953fe0d856a72a63d5a45195309cdfdf7a64bb3981dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_rubin, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:13:50 np0005486808 busy_rubin[354190]: 167 167
Oct 14 05:13:50 np0005486808 systemd[1]: libpod-6c42b2a695037a4ae3c3953fe0d856a72a63d5a45195309cdfdf7a64bb3981dd.scope: Deactivated successfully.
Oct 14 05:13:50 np0005486808 podman[354174]: 2025-10-14 09:13:50.123972364 +0000 UTC m=+0.172289308 container died 6c42b2a695037a4ae3c3953fe0d856a72a63d5a45195309cdfdf7a64bb3981dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_rubin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:13:50 np0005486808 systemd[1]: var-lib-containers-storage-overlay-d614982d3b557190254fb4b853975d4d53d7c8a1024f32e54b4466346bf4c8f2-merged.mount: Deactivated successfully.
Oct 14 05:13:50 np0005486808 podman[354174]: 2025-10-14 09:13:50.169214869 +0000 UTC m=+0.217531783 container remove 6c42b2a695037a4ae3c3953fe0d856a72a63d5a45195309cdfdf7a64bb3981dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_rubin, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:13:50 np0005486808 systemd[1]: libpod-conmon-6c42b2a695037a4ae3c3953fe0d856a72a63d5a45195309cdfdf7a64bb3981dd.scope: Deactivated successfully.
Oct 14 05:13:50 np0005486808 nova_compute[259627]: 2025-10-14 09:13:50.295 2 DEBUG nova.network.neutron [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:13:50 np0005486808 podman[354213]: 2025-10-14 09:13:50.390317768 +0000 UTC m=+0.076645520 container create 8c55511047a28057840e327736d30929ba3e35526ea8c2d75af9cff41db68e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_heyrovsky, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:13:50 np0005486808 systemd[1]: Started libpod-conmon-8c55511047a28057840e327736d30929ba3e35526ea8c2d75af9cff41db68e46.scope.
Oct 14 05:13:50 np0005486808 podman[354213]: 2025-10-14 09:13:50.360820981 +0000 UTC m=+0.047148773 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:13:50 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:13:50 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5713c95db712a65124b0a5ba48bbe31803c715d7b1e4ed83f535b712b502bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:13:50 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5713c95db712a65124b0a5ba48bbe31803c715d7b1e4ed83f535b712b502bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:13:50 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5713c95db712a65124b0a5ba48bbe31803c715d7b1e4ed83f535b712b502bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:13:50 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5713c95db712a65124b0a5ba48bbe31803c715d7b1e4ed83f535b712b502bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:13:50 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5713c95db712a65124b0a5ba48bbe31803c715d7b1e4ed83f535b712b502bc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:13:50 np0005486808 podman[354213]: 2025-10-14 09:13:50.494755662 +0000 UTC m=+0.181083404 container init 8c55511047a28057840e327736d30929ba3e35526ea8c2d75af9cff41db68e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_heyrovsky, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:13:50 np0005486808 podman[354213]: 2025-10-14 09:13:50.505866946 +0000 UTC m=+0.192194668 container start 8c55511047a28057840e327736d30929ba3e35526ea8c2d75af9cff41db68e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:13:50 np0005486808 podman[354213]: 2025-10-14 09:13:50.50924776 +0000 UTC m=+0.195575482 container attach 8c55511047a28057840e327736d30929ba3e35526ea8c2d75af9cff41db68e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 14 05:13:50 np0005486808 nova_compute[259627]: 2025-10-14 09:13:50.705 2 DEBUG nova.compute.manager [req-b5783159-2eba-4c7a-95d9-ece8acdb52aa req-642f12b4-3496-478e-8763-2b2aefb55585 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Received event network-vif-plugged-05e92470-2658-4ea2-9c44-e91cd5226905 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:13:50 np0005486808 nova_compute[259627]: 2025-10-14 09:13:50.705 2 DEBUG oslo_concurrency.lockutils [req-b5783159-2eba-4c7a-95d9-ece8acdb52aa req-642f12b4-3496-478e-8763-2b2aefb55585 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:13:50 np0005486808 nova_compute[259627]: 2025-10-14 09:13:50.706 2 DEBUG oslo_concurrency.lockutils [req-b5783159-2eba-4c7a-95d9-ece8acdb52aa req-642f12b4-3496-478e-8763-2b2aefb55585 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:13:50 np0005486808 nova_compute[259627]: 2025-10-14 09:13:50.706 2 DEBUG oslo_concurrency.lockutils [req-b5783159-2eba-4c7a-95d9-ece8acdb52aa req-642f12b4-3496-478e-8763-2b2aefb55585 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:13:50 np0005486808 nova_compute[259627]: 2025-10-14 09:13:50.706 2 DEBUG nova.compute.manager [req-b5783159-2eba-4c7a-95d9-ece8acdb52aa req-642f12b4-3496-478e-8763-2b2aefb55585 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] No waiting events found dispatching network-vif-plugged-05e92470-2658-4ea2-9c44-e91cd5226905 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:13:50 np0005486808 nova_compute[259627]: 2025-10-14 09:13:50.706 2 WARNING nova.compute.manager [req-b5783159-2eba-4c7a-95d9-ece8acdb52aa req-642f12b4-3496-478e-8763-2b2aefb55585 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Received unexpected event network-vif-plugged-05e92470-2658-4ea2-9c44-e91cd5226905 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:13:51 np0005486808 nova_compute[259627]: 2025-10-14 09:13:51.514 2 DEBUG nova.network.neutron [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Updating instance_info_cache with network_info: [{"id": "7103ce4a-69e8-454b-aed3-251ecb109232", "address": "fa:16:3e:9d:3c:de", "network": {"id": "563aa000-400f-4c19-ba83-9377cc50d29f", "bridge": "br-int", "label": "tempest-network-smoke--1642362496", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7103ce4a-69", "ovs_interfaceid": "7103ce4a-69e8-454b-aed3-251ecb109232", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:13:51 np0005486808 nova_compute[259627]: 2025-10-14 09:13:51.533 2 DEBUG oslo_concurrency.lockutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Releasing lock "refresh_cache-b595141f-123e-4250-bfec-888d866fd0c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:13:51 np0005486808 nova_compute[259627]: 2025-10-14 09:13:51.533 2 DEBUG nova.compute.manager [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Instance network_info: |[{"id": "7103ce4a-69e8-454b-aed3-251ecb109232", "address": "fa:16:3e:9d:3c:de", "network": {"id": "563aa000-400f-4c19-ba83-9377cc50d29f", "bridge": "br-int", "label": "tempest-network-smoke--1642362496", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7103ce4a-69", "ovs_interfaceid": "7103ce4a-69e8-454b-aed3-251ecb109232", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:13:51 np0005486808 nova_compute[259627]: 2025-10-14 09:13:51.534 2 DEBUG oslo_concurrency.lockutils [req-4a37d3ee-e0b7-4919-9f26-73d96d8a0c99 req-122a4080-8814-4438-b9aa-730643c129e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-b595141f-123e-4250-bfec-888d866fd0c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:13:51 np0005486808 nova_compute[259627]: 2025-10-14 09:13:51.534 2 DEBUG nova.network.neutron [req-4a37d3ee-e0b7-4919-9f26-73d96d8a0c99 req-122a4080-8814-4438-b9aa-730643c129e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Refreshing network info cache for port 7103ce4a-69e8-454b-aed3-251ecb109232 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:13:51 np0005486808 nova_compute[259627]: 2025-10-14 09:13:51.537 2 DEBUG nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Start _get_guest_xml network_info=[{"id": "7103ce4a-69e8-454b-aed3-251ecb109232", "address": "fa:16:3e:9d:3c:de", "network": {"id": "563aa000-400f-4c19-ba83-9377cc50d29f", "bridge": "br-int", "label": "tempest-network-smoke--1642362496", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7103ce4a-69", "ovs_interfaceid": "7103ce4a-69e8-454b-aed3-251ecb109232", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:13:51 np0005486808 nova_compute[259627]: 2025-10-14 09:13:51.543 2 WARNING nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:13:51 np0005486808 nova_compute[259627]: 2025-10-14 09:13:51.548 2 DEBUG nova.virt.libvirt.host [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:13:51 np0005486808 nova_compute[259627]: 2025-10-14 09:13:51.549 2 DEBUG nova.virt.libvirt.host [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:13:51 np0005486808 nova_compute[259627]: 2025-10-14 09:13:51.560 2 DEBUG nova.virt.libvirt.host [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:13:51 np0005486808 nova_compute[259627]: 2025-10-14 09:13:51.561 2 DEBUG nova.virt.libvirt.host [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:13:51 np0005486808 nova_compute[259627]: 2025-10-14 09:13:51.561 2 DEBUG nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:13:51 np0005486808 nova_compute[259627]: 2025-10-14 09:13:51.561 2 DEBUG nova.virt.hardware [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:13:51 np0005486808 nova_compute[259627]: 2025-10-14 09:13:51.562 2 DEBUG nova.virt.hardware [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:13:51 np0005486808 nova_compute[259627]: 2025-10-14 09:13:51.562 2 DEBUG nova.virt.hardware [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:13:51 np0005486808 nova_compute[259627]: 2025-10-14 09:13:51.562 2 DEBUG nova.virt.hardware [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:13:51 np0005486808 nova_compute[259627]: 2025-10-14 09:13:51.563 2 DEBUG nova.virt.hardware [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:13:51 np0005486808 nova_compute[259627]: 2025-10-14 09:13:51.563 2 DEBUG nova.virt.hardware [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:13:51 np0005486808 nova_compute[259627]: 2025-10-14 09:13:51.563 2 DEBUG nova.virt.hardware [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:13:51 np0005486808 nova_compute[259627]: 2025-10-14 09:13:51.563 2 DEBUG nova.virt.hardware [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:13:51 np0005486808 nova_compute[259627]: 2025-10-14 09:13:51.563 2 DEBUG nova.virt.hardware [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:13:51 np0005486808 nova_compute[259627]: 2025-10-14 09:13:51.564 2 DEBUG nova.virt.hardware [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:13:51 np0005486808 nova_compute[259627]: 2025-10-14 09:13:51.564 2 DEBUG nova.virt.hardware [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:13:51 np0005486808 nova_compute[259627]: 2025-10-14 09:13:51.568 2 DEBUG oslo_concurrency.processutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:13:51 np0005486808 nice_heyrovsky[354229]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:13:51 np0005486808 nice_heyrovsky[354229]: --> relative data size: 1.0
Oct 14 05:13:51 np0005486808 nice_heyrovsky[354229]: --> All data devices are unavailable
Oct 14 05:13:51 np0005486808 systemd[1]: libpod-8c55511047a28057840e327736d30929ba3e35526ea8c2d75af9cff41db68e46.scope: Deactivated successfully.
Oct 14 05:13:51 np0005486808 podman[354213]: 2025-10-14 09:13:51.698680384 +0000 UTC m=+1.385008116 container died 8c55511047a28057840e327736d30929ba3e35526ea8c2d75af9cff41db68e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_heyrovsky, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Oct 14 05:13:51 np0005486808 systemd[1]: libpod-8c55511047a28057840e327736d30929ba3e35526ea8c2d75af9cff41db68e46.scope: Consumed 1.114s CPU time.
Oct 14 05:13:51 np0005486808 systemd[1]: var-lib-containers-storage-overlay-1d5713c95db712a65124b0a5ba48bbe31803c715d7b1e4ed83f535b712b502bc-merged.mount: Deactivated successfully.
Oct 14 05:13:51 np0005486808 podman[354213]: 2025-10-14 09:13:51.766727021 +0000 UTC m=+1.453054733 container remove 8c55511047a28057840e327736d30929ba3e35526ea8c2d75af9cff41db68e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 05:13:51 np0005486808 systemd[1]: libpod-conmon-8c55511047a28057840e327736d30929ba3e35526ea8c2d75af9cff41db68e46.scope: Deactivated successfully.
Oct 14 05:13:51 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1786: 305 pgs: 305 active+clean; 213 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 8.5 MiB/s rd, 8.2 MiB/s wr, 342 op/s
Oct 14 05:13:52 np0005486808 nova_compute[259627]: 2025-10-14 09:13:52.008 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:13:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:13:52 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4272355481' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:13:52 np0005486808 nova_compute[259627]: 2025-10-14 09:13:52.070 2 DEBUG oslo_concurrency.processutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:13:52 np0005486808 nova_compute[259627]: 2025-10-14 09:13:52.110 2 DEBUG nova.storage.rbd_utils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image b595141f-123e-4250-bfec-888d866fd0c6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:13:52 np0005486808 nova_compute[259627]: 2025-10-14 09:13:52.113 2 DEBUG oslo_concurrency.processutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:13:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:13:52 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3152691683' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:13:52 np0005486808 podman[354466]: 2025-10-14 09:13:52.571926417 +0000 UTC m=+0.078314381 container create edd676102a5ffe3b476e20489ab47e3c2e2bebb64fa0661d5048b9c98272b7f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_boyd, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:13:52 np0005486808 nova_compute[259627]: 2025-10-14 09:13:52.583 2 DEBUG oslo_concurrency.processutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:13:52 np0005486808 nova_compute[259627]: 2025-10-14 09:13:52.585 2 DEBUG nova.virt.libvirt.vif [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:13:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1399788817',display_name='tempest-TestNetworkAdvancedServerOps-server-1399788817',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1399788817',id=96,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKu+EdNTbLcJJySiqS/IEqa/tEPsTFHNnnibtN2r3Vh53iyUeSuOyPo0wb3WDr0n5pX4AT4Pz90bVLkNIFNLc4XvdDhQRiV0WmHkT7tU8LMbcG0FpGnJS7D9bMgie0zW1g==',key_name='tempest-TestNetworkAdvancedServerOps-1640438953',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d24993a343a425dbddac7e32be0c86b',ramdisk_id='',reservation_id='r-qh9caax0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-94788416',owner_user_name='tempest-TestNetworkAdvancedServerOps-94788416-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:13:45Z,user_data=None,user_id='e992bcb79c4946a8985e3df25eb216ca',uuid=b595141f-123e-4250-bfec-888d866fd0c6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7103ce4a-69e8-454b-aed3-251ecb109232", "address": "fa:16:3e:9d:3c:de", "network": {"id": "563aa000-400f-4c19-ba83-9377cc50d29f", "bridge": "br-int", "label": "tempest-network-smoke--1642362496", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7103ce4a-69", "ovs_interfaceid": "7103ce4a-69e8-454b-aed3-251ecb109232", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:13:52 np0005486808 nova_compute[259627]: 2025-10-14 09:13:52.585 2 DEBUG nova.network.os_vif_util [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converting VIF {"id": "7103ce4a-69e8-454b-aed3-251ecb109232", "address": "fa:16:3e:9d:3c:de", "network": {"id": "563aa000-400f-4c19-ba83-9377cc50d29f", "bridge": "br-int", "label": "tempest-network-smoke--1642362496", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7103ce4a-69", "ovs_interfaceid": "7103ce4a-69e8-454b-aed3-251ecb109232", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:13:52 np0005486808 nova_compute[259627]: 2025-10-14 09:13:52.586 2 DEBUG nova.network.os_vif_util [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:3c:de,bridge_name='br-int',has_traffic_filtering=True,id=7103ce4a-69e8-454b-aed3-251ecb109232,network=Network(563aa000-400f-4c19-ba83-9377cc50d29f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7103ce4a-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:13:52 np0005486808 nova_compute[259627]: 2025-10-14 09:13:52.588 2 DEBUG nova.objects.instance [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'pci_devices' on Instance uuid b595141f-123e-4250-bfec-888d866fd0c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:13:52 np0005486808 nova_compute[259627]: 2025-10-14 09:13:52.604 2 DEBUG nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:13:52 np0005486808 nova_compute[259627]:  <uuid>b595141f-123e-4250-bfec-888d866fd0c6</uuid>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:  <name>instance-00000060</name>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:13:52 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1399788817</nova:name>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:13:51</nova:creationTime>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:13:52 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:        <nova:user uuid="e992bcb79c4946a8985e3df25eb216ca">tempest-TestNetworkAdvancedServerOps-94788416-project-member</nova:user>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:        <nova:project uuid="2d24993a343a425dbddac7e32be0c86b">tempest-TestNetworkAdvancedServerOps-94788416</nova:project>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:        <nova:port uuid="7103ce4a-69e8-454b-aed3-251ecb109232">
Oct 14 05:13:52 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:      <entry name="serial">b595141f-123e-4250-bfec-888d866fd0c6</entry>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:      <entry name="uuid">b595141f-123e-4250-bfec-888d866fd0c6</entry>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:13:52 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/b595141f-123e-4250-bfec-888d866fd0c6_disk">
Oct 14 05:13:52 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:13:52 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:13:52 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/b595141f-123e-4250-bfec-888d866fd0c6_disk.config">
Oct 14 05:13:52 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:13:52 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:13:52 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:9d:3c:de"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:      <target dev="tap7103ce4a-69"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:13:52 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/b595141f-123e-4250-bfec-888d866fd0c6/console.log" append="off"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:13:52 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:13:52 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:13:52 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:13:52 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:13:52 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:13:52 np0005486808 nova_compute[259627]: 2025-10-14 09:13:52.604 2 DEBUG nova.compute.manager [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Preparing to wait for external event network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:13:52 np0005486808 nova_compute[259627]: 2025-10-14 09:13:52.604 2 DEBUG oslo_concurrency.lockutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "b595141f-123e-4250-bfec-888d866fd0c6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:13:52 np0005486808 nova_compute[259627]: 2025-10-14 09:13:52.605 2 DEBUG oslo_concurrency.lockutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:13:52 np0005486808 nova_compute[259627]: 2025-10-14 09:13:52.605 2 DEBUG oslo_concurrency.lockutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:13:52 np0005486808 nova_compute[259627]: 2025-10-14 09:13:52.606 2 DEBUG nova.virt.libvirt.vif [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:13:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1399788817',display_name='tempest-TestNetworkAdvancedServerOps-server-1399788817',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1399788817',id=96,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKu+EdNTbLcJJySiqS/IEqa/tEPsTFHNnnibtN2r3Vh53iyUeSuOyPo0wb3WDr0n5pX4AT4Pz90bVLkNIFNLc4XvdDhQRiV0WmHkT7tU8LMbcG0FpGnJS7D9bMgie0zW1g==',key_name='tempest-TestNetworkAdvancedServerOps-1640438953',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d24993a343a425dbddac7e32be0c86b',ramdisk_id='',reservation_id='r-qh9caax0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-94788416',owner_user_name='tempest-TestNetworkAdvancedServerOps-94788416-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:13:45Z,user_data=None,user_id='e992bcb79c4946a8985e3df25eb216ca',uuid=b595141f-123e-4250-bfec-888d866fd0c6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7103ce4a-69e8-454b-aed3-251ecb109232", "address": "fa:16:3e:9d:3c:de", "network": {"id": "563aa000-400f-4c19-ba83-9377cc50d29f", "bridge": "br-int", "label": "tempest-network-smoke--1642362496", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7103ce4a-69", "ovs_interfaceid": "7103ce4a-69e8-454b-aed3-251ecb109232", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:13:52 np0005486808 nova_compute[259627]: 2025-10-14 09:13:52.606 2 DEBUG nova.network.os_vif_util [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converting VIF {"id": "7103ce4a-69e8-454b-aed3-251ecb109232", "address": "fa:16:3e:9d:3c:de", "network": {"id": "563aa000-400f-4c19-ba83-9377cc50d29f", "bridge": "br-int", "label": "tempest-network-smoke--1642362496", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7103ce4a-69", "ovs_interfaceid": "7103ce4a-69e8-454b-aed3-251ecb109232", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:13:52 np0005486808 nova_compute[259627]: 2025-10-14 09:13:52.607 2 DEBUG nova.network.os_vif_util [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:3c:de,bridge_name='br-int',has_traffic_filtering=True,id=7103ce4a-69e8-454b-aed3-251ecb109232,network=Network(563aa000-400f-4c19-ba83-9377cc50d29f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7103ce4a-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:13:52 np0005486808 nova_compute[259627]: 2025-10-14 09:13:52.608 2 DEBUG os_vif [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:3c:de,bridge_name='br-int',has_traffic_filtering=True,id=7103ce4a-69e8-454b-aed3-251ecb109232,network=Network(563aa000-400f-4c19-ba83-9377cc50d29f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7103ce4a-69') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:13:52 np0005486808 nova_compute[259627]: 2025-10-14 09:13:52.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:52 np0005486808 nova_compute[259627]: 2025-10-14 09:13:52.609 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:13:52 np0005486808 nova_compute[259627]: 2025-10-14 09:13:52.609 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:13:52 np0005486808 nova_compute[259627]: 2025-10-14 09:13:52.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:52 np0005486808 nova_compute[259627]: 2025-10-14 09:13:52.614 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7103ce4a-69, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:13:52 np0005486808 nova_compute[259627]: 2025-10-14 09:13:52.614 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7103ce4a-69, col_values=(('external_ids', {'iface-id': '7103ce4a-69e8-454b-aed3-251ecb109232', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:3c:de', 'vm-uuid': 'b595141f-123e-4250-bfec-888d866fd0c6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:13:52 np0005486808 nova_compute[259627]: 2025-10-14 09:13:52.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:52 np0005486808 NetworkManager[44885]: <info>  [1760433232.6177] manager: (tap7103ce4a-69): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/406)
Oct 14 05:13:52 np0005486808 nova_compute[259627]: 2025-10-14 09:13:52.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:13:52 np0005486808 systemd[1]: Started libpod-conmon-edd676102a5ffe3b476e20489ab47e3c2e2bebb64fa0661d5048b9c98272b7f5.scope.
Oct 14 05:13:52 np0005486808 podman[354466]: 2025-10-14 09:13:52.53349821 +0000 UTC m=+0.039886234 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:13:52 np0005486808 nova_compute[259627]: 2025-10-14 09:13:52.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:52 np0005486808 nova_compute[259627]: 2025-10-14 09:13:52.626 2 INFO os_vif [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:3c:de,bridge_name='br-int',has_traffic_filtering=True,id=7103ce4a-69e8-454b-aed3-251ecb109232,network=Network(563aa000-400f-4c19-ba83-9377cc50d29f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7103ce4a-69')#033[00m
Oct 14 05:13:52 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:13:52 np0005486808 podman[354466]: 2025-10-14 09:13:52.677391626 +0000 UTC m=+0.183779560 container init edd676102a5ffe3b476e20489ab47e3c2e2bebb64fa0661d5048b9c98272b7f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_boyd, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Oct 14 05:13:52 np0005486808 podman[354466]: 2025-10-14 09:13:52.692503669 +0000 UTC m=+0.198891633 container start edd676102a5ffe3b476e20489ab47e3c2e2bebb64fa0661d5048b9c98272b7f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_boyd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:13:52 np0005486808 systemd[1]: libpod-edd676102a5ffe3b476e20489ab47e3c2e2bebb64fa0661d5048b9c98272b7f5.scope: Deactivated successfully.
Oct 14 05:13:52 np0005486808 podman[354466]: 2025-10-14 09:13:52.698737572 +0000 UTC m=+0.205125546 container attach edd676102a5ffe3b476e20489ab47e3c2e2bebb64fa0661d5048b9c98272b7f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_boyd, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 14 05:13:52 np0005486808 frosty_boyd[354485]: 167 167
Oct 14 05:13:52 np0005486808 conmon[354485]: conmon edd676102a5ffe3b476e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-edd676102a5ffe3b476e20489ab47e3c2e2bebb64fa0661d5048b9c98272b7f5.scope/container/memory.events
Oct 14 05:13:52 np0005486808 podman[354466]: 2025-10-14 09:13:52.700582268 +0000 UTC m=+0.206970202 container died edd676102a5ffe3b476e20489ab47e3c2e2bebb64fa0661d5048b9c98272b7f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_boyd, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:13:52 np0005486808 nova_compute[259627]: 2025-10-14 09:13:52.706 2 DEBUG nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:13:52 np0005486808 nova_compute[259627]: 2025-10-14 09:13:52.706 2 DEBUG nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:13:52 np0005486808 nova_compute[259627]: 2025-10-14 09:13:52.707 2 DEBUG nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] No VIF found with MAC fa:16:3e:9d:3c:de, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:13:52 np0005486808 nova_compute[259627]: 2025-10-14 09:13:52.707 2 INFO nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Using config drive#033[00m
Oct 14 05:13:52 np0005486808 systemd[1]: var-lib-containers-storage-overlay-5caa65ebfce6e665e0a96b8b39de8a1b68eb4608b624e1923b704c81e7171543-merged.mount: Deactivated successfully.
Oct 14 05:13:52 np0005486808 nova_compute[259627]: 2025-10-14 09:13:52.742 2 DEBUG nova.storage.rbd_utils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image b595141f-123e-4250-bfec-888d866fd0c6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:13:52 np0005486808 podman[354466]: 2025-10-14 09:13:52.749260567 +0000 UTC m=+0.255648501 container remove edd676102a5ffe3b476e20489ab47e3c2e2bebb64fa0661d5048b9c98272b7f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_boyd, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:13:52 np0005486808 systemd[1]: libpod-conmon-edd676102a5ffe3b476e20489ab47e3c2e2bebb64fa0661d5048b9c98272b7f5.scope: Deactivated successfully.
Oct 14 05:13:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:13:52 np0005486808 podman[354529]: 2025-10-14 09:13:52.97819793 +0000 UTC m=+0.073515943 container create f0a8c278e1136d2c330614cd9f4d8b4c46748861140a79a585cae55b9b5d209f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_taussig, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 14 05:13:53 np0005486808 systemd[1]: Started libpod-conmon-f0a8c278e1136d2c330614cd9f4d8b4c46748861140a79a585cae55b9b5d209f.scope.
Oct 14 05:13:53 np0005486808 podman[354529]: 2025-10-14 09:13:52.950740043 +0000 UTC m=+0.046058146 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:13:53 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:13:53 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7205f3e902fa343198bc88b6c822db548b58a50558c1505cf164a3f879360917/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:13:53 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7205f3e902fa343198bc88b6c822db548b58a50558c1505cf164a3f879360917/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:13:53 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7205f3e902fa343198bc88b6c822db548b58a50558c1505cf164a3f879360917/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:13:53 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7205f3e902fa343198bc88b6c822db548b58a50558c1505cf164a3f879360917/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:13:53 np0005486808 podman[354529]: 2025-10-14 09:13:53.091347509 +0000 UTC m=+0.186665592 container init f0a8c278e1136d2c330614cd9f4d8b4c46748861140a79a585cae55b9b5d209f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_taussig, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:13:53 np0005486808 podman[354529]: 2025-10-14 09:13:53.102497344 +0000 UTC m=+0.197815367 container start f0a8c278e1136d2c330614cd9f4d8b4c46748861140a79a585cae55b9b5d209f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:13:53 np0005486808 podman[354529]: 2025-10-14 09:13:53.105902007 +0000 UTC m=+0.201220140 container attach f0a8c278e1136d2c330614cd9f4d8b4c46748861140a79a585cae55b9b5d209f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_taussig, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 14 05:13:53 np0005486808 nova_compute[259627]: 2025-10-14 09:13:53.293 2 DEBUG nova.network.neutron [req-4a37d3ee-e0b7-4919-9f26-73d96d8a0c99 req-122a4080-8814-4438-b9aa-730643c129e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Updated VIF entry in instance network info cache for port 7103ce4a-69e8-454b-aed3-251ecb109232. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:13:53 np0005486808 nova_compute[259627]: 2025-10-14 09:13:53.293 2 DEBUG nova.network.neutron [req-4a37d3ee-e0b7-4919-9f26-73d96d8a0c99 req-122a4080-8814-4438-b9aa-730643c129e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Updating instance_info_cache with network_info: [{"id": "7103ce4a-69e8-454b-aed3-251ecb109232", "address": "fa:16:3e:9d:3c:de", "network": {"id": "563aa000-400f-4c19-ba83-9377cc50d29f", "bridge": "br-int", "label": "tempest-network-smoke--1642362496", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7103ce4a-69", "ovs_interfaceid": "7103ce4a-69e8-454b-aed3-251ecb109232", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:13:53 np0005486808 nova_compute[259627]: 2025-10-14 09:13:53.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:53 np0005486808 nova_compute[259627]: 2025-10-14 09:13:53.371 2 INFO nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Creating config drive at /var/lib/nova/instances/b595141f-123e-4250-bfec-888d866fd0c6/disk.config#033[00m
Oct 14 05:13:53 np0005486808 nova_compute[259627]: 2025-10-14 09:13:53.380 2 DEBUG oslo_concurrency.processutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b595141f-123e-4250-bfec-888d866fd0c6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdv0bdg22 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:13:53 np0005486808 nova_compute[259627]: 2025-10-14 09:13:53.438 2 DEBUG oslo_concurrency.lockutils [req-4a37d3ee-e0b7-4919-9f26-73d96d8a0c99 req-122a4080-8814-4438-b9aa-730643c129e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-b595141f-123e-4250-bfec-888d866fd0c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:13:53 np0005486808 nova_compute[259627]: 2025-10-14 09:13:53.549 2 DEBUG oslo_concurrency.processutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b595141f-123e-4250-bfec-888d866fd0c6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdv0bdg22" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:13:53 np0005486808 nova_compute[259627]: 2025-10-14 09:13:53.586 2 DEBUG nova.storage.rbd_utils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image b595141f-123e-4250-bfec-888d866fd0c6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:13:53 np0005486808 nova_compute[259627]: 2025-10-14 09:13:53.594 2 DEBUG oslo_concurrency.processutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b595141f-123e-4250-bfec-888d866fd0c6/disk.config b595141f-123e-4250-bfec-888d866fd0c6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:13:53 np0005486808 nova_compute[259627]: 2025-10-14 09:13:53.818 2 DEBUG oslo_concurrency.processutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b595141f-123e-4250-bfec-888d866fd0c6/disk.config b595141f-123e-4250-bfec-888d866fd0c6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.224s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:13:53 np0005486808 nova_compute[259627]: 2025-10-14 09:13:53.820 2 INFO nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Deleting local config drive /var/lib/nova/instances/b595141f-123e-4250-bfec-888d866fd0c6/disk.config because it was imported into RBD.#033[00m
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]: {
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:    "0": [
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:        {
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:            "devices": [
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:                "/dev/loop3"
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:            ],
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:            "lv_name": "ceph_lv0",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:            "lv_size": "21470642176",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:            "name": "ceph_lv0",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:            "tags": {
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:                "ceph.cluster_name": "ceph",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:                "ceph.crush_device_class": "",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:                "ceph.encrypted": "0",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:                "ceph.osd_id": "0",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:                "ceph.type": "block",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:                "ceph.vdo": "0"
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:            },
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:            "type": "block",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:            "vg_name": "ceph_vg0"
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:        }
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:    ],
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:    "1": [
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:        {
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:            "devices": [
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:                "/dev/loop4"
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:            ],
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:            "lv_name": "ceph_lv1",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:            "lv_size": "21470642176",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:            "name": "ceph_lv1",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:            "tags": {
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:                "ceph.cluster_name": "ceph",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:                "ceph.crush_device_class": "",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:                "ceph.encrypted": "0",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:                "ceph.osd_id": "1",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:                "ceph.type": "block",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:                "ceph.vdo": "0"
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:            },
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:            "type": "block",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:            "vg_name": "ceph_vg1"
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:        }
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:    ],
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:    "2": [
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:        {
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:            "devices": [
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:                "/dev/loop5"
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:            ],
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:            "lv_name": "ceph_lv2",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:            "lv_size": "21470642176",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:            "name": "ceph_lv2",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:            "tags": {
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:                "ceph.cluster_name": "ceph",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:                "ceph.crush_device_class": "",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:                "ceph.encrypted": "0",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:                "ceph.osd_id": "2",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:                "ceph.type": "block",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:                "ceph.vdo": "0"
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:            },
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:            "type": "block",
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:            "vg_name": "ceph_vg2"
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:        }
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]:    ]
Oct 14 05:13:53 np0005486808 interesting_taussig[354546]: }
Oct 14 05:13:53 np0005486808 systemd[1]: libpod-f0a8c278e1136d2c330614cd9f4d8b4c46748861140a79a585cae55b9b5d209f.scope: Deactivated successfully.
Oct 14 05:13:53 np0005486808 podman[354529]: 2025-10-14 09:13:53.890178787 +0000 UTC m=+0.985496830 container died f0a8c278e1136d2c330614cd9f4d8b4c46748861140a79a585cae55b9b5d209f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_taussig, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:13:53 np0005486808 kernel: tap7103ce4a-69: entered promiscuous mode
Oct 14 05:13:53 np0005486808 NetworkManager[44885]: <info>  [1760433233.9329] manager: (tap7103ce4a-69): new Tun device (/org/freedesktop/NetworkManager/Devices/407)
Oct 14 05:13:53 np0005486808 systemd[1]: var-lib-containers-storage-overlay-7205f3e902fa343198bc88b6c822db548b58a50558c1505cf164a3f879360917-merged.mount: Deactivated successfully.
Oct 14 05:13:53 np0005486808 ovn_controller[152662]: 2025-10-14T09:13:53Z|00996|binding|INFO|Claiming lport 7103ce4a-69e8-454b-aed3-251ecb109232 for this chassis.
Oct 14 05:13:53 np0005486808 nova_compute[259627]: 2025-10-14 09:13:53.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:53 np0005486808 ovn_controller[152662]: 2025-10-14T09:13:53Z|00997|binding|INFO|7103ce4a-69e8-454b-aed3-251ecb109232: Claiming fa:16:3e:9d:3c:de 10.100.0.5
Oct 14 05:13:53 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1787: 305 pgs: 305 active+clean; 213 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 7.1 MiB/s rd, 6.8 MiB/s wr, 285 op/s
Oct 14 05:13:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:53.954 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:3c:de 10.100.0.5'], port_security=['fa:16:3e:9d:3c:de 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b595141f-123e-4250-bfec-888d866fd0c6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-563aa000-400f-4c19-ba83-9377cc50d29f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d24993a343a425dbddac7e32be0c86b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '97c0b0e2-9440-40e7-a61b-2eb79520e4e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1e572d4-df42-4a93-ac49-d93e0906a5ee, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=7103ce4a-69e8-454b-aed3-251ecb109232) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:13:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:53.956 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 7103ce4a-69e8-454b-aed3-251ecb109232 in datapath 563aa000-400f-4c19-ba83-9377cc50d29f bound to our chassis#033[00m
Oct 14 05:13:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:53.957 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 563aa000-400f-4c19-ba83-9377cc50d29f#033[00m
Oct 14 05:13:53 np0005486808 podman[354529]: 2025-10-14 09:13:53.97915488 +0000 UTC m=+1.074472893 container remove f0a8c278e1136d2c330614cd9f4d8b4c46748861140a79a585cae55b9b5d209f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 14 05:13:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:53.984 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dba3551c-26a2-4f66-ae23-2198a5f9bb81]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:53.991 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap563aa000-41 in ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:13:53 np0005486808 systemd-machined[214636]: New machine qemu-118-instance-00000060.
Oct 14 05:13:53 np0005486808 systemd-udevd[354621]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:53.997 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap563aa000-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:53.998 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ac82974d-b1ba-4064-966e-cda024ead5a2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:53.999 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[371c769b-ff61-4d76-b5a7-1eb7d08392ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:54 np0005486808 systemd[1]: Started Virtual Machine qemu-118-instance-00000060.
Oct 14 05:13:54 np0005486808 NetworkManager[44885]: <info>  [1760433234.0175] device (tap7103ce4a-69): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:13:54 np0005486808 systemd[1]: libpod-conmon-f0a8c278e1136d2c330614cd9f4d8b4c46748861140a79a585cae55b9b5d209f.scope: Deactivated successfully.
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:54.017 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[146b37c7-26b6-4000-a97a-682b3a1724d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:54 np0005486808 NetworkManager[44885]: <info>  [1760433234.0198] device (tap7103ce4a-69): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:54.043 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[296a9068-c081-44f5-adf1-d8a98216a3e5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:54 np0005486808 nova_compute[259627]: 2025-10-14 09:13:54.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:54 np0005486808 ovn_controller[152662]: 2025-10-14T09:13:54Z|00998|binding|INFO|Setting lport 7103ce4a-69e8-454b-aed3-251ecb109232 ovn-installed in OVS
Oct 14 05:13:54 np0005486808 ovn_controller[152662]: 2025-10-14T09:13:54Z|00999|binding|INFO|Setting lport 7103ce4a-69e8-454b-aed3-251ecb109232 up in Southbound
Oct 14 05:13:54 np0005486808 nova_compute[259627]: 2025-10-14 09:13:54.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:54.091 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[76093755-4701-4280-b618-f7de245f64c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:54 np0005486808 NetworkManager[44885]: <info>  [1760433234.1021] manager: (tap563aa000-40): new Veth device (/org/freedesktop/NetworkManager/Devices/408)
Oct 14 05:13:54 np0005486808 systemd-udevd[354624]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:54.103 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b3ddf93c-75ab-4ae5-8404-a06568b9eed4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:54.147 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[70a75dd5-9df6-46d9-aaa6-544e2e5bc08a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:54.152 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[adfd28fb-b278-4e40-a941-ae417178e2fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:54 np0005486808 NetworkManager[44885]: <info>  [1760433234.1792] device (tap563aa000-40): carrier: link connected
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:54.186 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[cdb11300-945f-44b7-8717-00193f41463f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:54.221 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3e86f2a3-8385-4999-b419-7e1c5518b78d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap563aa000-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:cd:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 290], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701294, 'reachable_time': 23044, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354701, 'error': None, 'target': 'ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:54.246 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e6afae12-1730-4225-a854-d629787687ea]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5f:cd84'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701294, 'tstamp': 701294}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 354716, 'error': None, 'target': 'ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:54.268 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7b5a22bb-5f3e-4d15-b781-d0ac816367b2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap563aa000-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:cd:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 290], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701294, 'reachable_time': 23044, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 354728, 'error': None, 'target': 'ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:54.301 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2fa20d8b-06ec-4096-b24d-1028106351b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:54.354 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b98516dc-8b0e-4f14-9e9a-21e3eb56e8b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:54.356 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap563aa000-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:54.356 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:54.357 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap563aa000-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:13:54 np0005486808 nova_compute[259627]: 2025-10-14 09:13:54.395 2 DEBUG nova.compute.manager [req-d703d15d-b0f2-4621-b755-2d5e74d663e0 req-9156f79c-d3b2-48df-9ccf-90245602f9bd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received event network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:13:54 np0005486808 nova_compute[259627]: 2025-10-14 09:13:54.396 2 DEBUG oslo_concurrency.lockutils [req-d703d15d-b0f2-4621-b755-2d5e74d663e0 req-9156f79c-d3b2-48df-9ccf-90245602f9bd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b595141f-123e-4250-bfec-888d866fd0c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:13:54 np0005486808 nova_compute[259627]: 2025-10-14 09:13:54.396 2 DEBUG oslo_concurrency.lockutils [req-d703d15d-b0f2-4621-b755-2d5e74d663e0 req-9156f79c-d3b2-48df-9ccf-90245602f9bd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:13:54 np0005486808 nova_compute[259627]: 2025-10-14 09:13:54.396 2 DEBUG oslo_concurrency.lockutils [req-d703d15d-b0f2-4621-b755-2d5e74d663e0 req-9156f79c-d3b2-48df-9ccf-90245602f9bd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:13:54 np0005486808 nova_compute[259627]: 2025-10-14 09:13:54.396 2 DEBUG nova.compute.manager [req-d703d15d-b0f2-4621-b755-2d5e74d663e0 req-9156f79c-d3b2-48df-9ccf-90245602f9bd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Processing event network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:13:54 np0005486808 nova_compute[259627]: 2025-10-14 09:13:54.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:54 np0005486808 kernel: tap563aa000-40: entered promiscuous mode
Oct 14 05:13:54 np0005486808 nova_compute[259627]: 2025-10-14 09:13:54.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:54 np0005486808 NetworkManager[44885]: <info>  [1760433234.4139] manager: (tap563aa000-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/409)
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:54.414 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap563aa000-40, col_values=(('external_ids', {'iface-id': '4b7b52fe-6c74-46c3-ab83-c118ed2fe8eb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:13:54 np0005486808 nova_compute[259627]: 2025-10-14 09:13:54.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:54 np0005486808 ovn_controller[152662]: 2025-10-14T09:13:54Z|01000|binding|INFO|Releasing lport 4b7b52fe-6c74-46c3-ab83-c118ed2fe8eb from this chassis (sb_readonly=0)
Oct 14 05:13:54 np0005486808 nova_compute[259627]: 2025-10-14 09:13:54.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:54.418 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/563aa000-400f-4c19-ba83-9377cc50d29f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/563aa000-400f-4c19-ba83-9377cc50d29f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:54.419 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[59e3fa9c-3eff-413c-a994-f9577c94d631]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:54.419 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-563aa000-400f-4c19-ba83-9377cc50d29f
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/563aa000-400f-4c19-ba83-9377cc50d29f.pid.haproxy
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 563aa000-400f-4c19-ba83-9377cc50d29f
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:13:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:54.421 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f', 'env', 'PROCESS_TAG=haproxy-563aa000-400f-4c19-ba83-9377cc50d29f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/563aa000-400f-4c19-ba83-9377cc50d29f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:13:54 np0005486808 nova_compute[259627]: 2025-10-14 09:13:54.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:13:54 np0005486808 nova_compute[259627]: 2025-10-14 09:13:54.520 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433219.5156043, 2534f8b9-e832-4b78-ada4-e551429bdc75 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:13:54 np0005486808 nova_compute[259627]: 2025-10-14 09:13:54.521 2 INFO nova.compute.manager [-] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:13:54 np0005486808 nova_compute[259627]: 2025-10-14 09:13:54.551 2 DEBUG nova.compute.manager [None req-722941fe-e62d-49d7-aa37-4d0050afa25d - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:13:54 np0005486808 podman[354853]: 2025-10-14 09:13:54.77969531 +0000 UTC m=+0.056464203 container create 023a090a3ec291cf7ec466bd16e5f7c9cb18756e5f2ec2c33ad15ec3104501c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bohr, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 14 05:13:54 np0005486808 systemd[1]: Started libpod-conmon-023a090a3ec291cf7ec466bd16e5f7c9cb18756e5f2ec2c33ad15ec3104501c8.scope.
Oct 14 05:13:54 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:13:54 np0005486808 podman[354853]: 2025-10-14 09:13:54.756998761 +0000 UTC m=+0.033767674 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:13:54 np0005486808 podman[354853]: 2025-10-14 09:13:54.857353644 +0000 UTC m=+0.134122547 container init 023a090a3ec291cf7ec466bd16e5f7c9cb18756e5f2ec2c33ad15ec3104501c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bohr, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:13:54 np0005486808 podman[354876]: 2025-10-14 09:13:54.861839445 +0000 UTC m=+0.070066688 container create a15e176523bc2c68be726efae2157510856ad583037ec2ed2302f535cb80b454 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 14 05:13:54 np0005486808 podman[354853]: 2025-10-14 09:13:54.868874658 +0000 UTC m=+0.145643551 container start 023a090a3ec291cf7ec466bd16e5f7c9cb18756e5f2ec2c33ad15ec3104501c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bohr, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:13:54 np0005486808 heuristic_bohr[354890]: 167 167
Oct 14 05:13:54 np0005486808 systemd[1]: libpod-023a090a3ec291cf7ec466bd16e5f7c9cb18756e5f2ec2c33ad15ec3104501c8.scope: Deactivated successfully.
Oct 14 05:13:54 np0005486808 conmon[354890]: conmon 023a090a3ec291cf7ec4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-023a090a3ec291cf7ec466bd16e5f7c9cb18756e5f2ec2c33ad15ec3104501c8.scope/container/memory.events
Oct 14 05:13:54 np0005486808 podman[354853]: 2025-10-14 09:13:54.875004469 +0000 UTC m=+0.151773362 container attach 023a090a3ec291cf7ec466bd16e5f7c9cb18756e5f2ec2c33ad15ec3104501c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bohr, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 14 05:13:54 np0005486808 podman[354853]: 2025-10-14 09:13:54.875651465 +0000 UTC m=+0.152420358 container died 023a090a3ec291cf7ec466bd16e5f7c9cb18756e5f2ec2c33ad15ec3104501c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Oct 14 05:13:54 np0005486808 systemd[1]: Started libpod-conmon-a15e176523bc2c68be726efae2157510856ad583037ec2ed2302f535cb80b454.scope.
Oct 14 05:13:54 np0005486808 systemd[1]: var-lib-containers-storage-overlay-9c6f5f2a0976fd8dfc863cc7c970840de7260395b7504c2b5d7239e961e1a322-merged.mount: Deactivated successfully.
Oct 14 05:13:54 np0005486808 podman[354853]: 2025-10-14 09:13:54.912430471 +0000 UTC m=+0.189199364 container remove 023a090a3ec291cf7ec466bd16e5f7c9cb18756e5f2ec2c33ad15ec3104501c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 14 05:13:54 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:13:54 np0005486808 systemd[1]: libpod-conmon-023a090a3ec291cf7ec466bd16e5f7c9cb18756e5f2ec2c33ad15ec3104501c8.scope: Deactivated successfully.
Oct 14 05:13:54 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e223c53edf6cce1508ede20bb72625545cc9c2f087f47ad069c77aa13b5410e4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:13:54 np0005486808 podman[354876]: 2025-10-14 09:13:54.832567053 +0000 UTC m=+0.040794336 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:13:54 np0005486808 podman[354876]: 2025-10-14 09:13:54.94319627 +0000 UTC m=+0.151423513 container init a15e176523bc2c68be726efae2157510856ad583037ec2ed2302f535cb80b454 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 14 05:13:54 np0005486808 podman[354876]: 2025-10-14 09:13:54.951678109 +0000 UTC m=+0.159905352 container start a15e176523bc2c68be726efae2157510856ad583037ec2ed2302f535cb80b454 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:13:54 np0005486808 neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f[354908]: [NOTICE]   (354918) : New worker (354920) forked
Oct 14 05:13:54 np0005486808 neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f[354908]: [NOTICE]   (354918) : Loading success.
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.093 2 DEBUG oslo_concurrency.lockutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "d5de3978-2377-4d8e-aeaf-c952912130a2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.093 2 DEBUG oslo_concurrency.lockutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "d5de3978-2377-4d8e-aeaf-c952912130a2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.111 2 DEBUG nova.compute.manager [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:13:55 np0005486808 podman[354934]: 2025-10-14 09:13:55.141076487 +0000 UTC m=+0.058866172 container create a45d75ce9bb95d16eddd8b6dc4fbde8da369d30ac387f9ee7197cf70c810a14f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_maxwell, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 05:13:55 np0005486808 systemd[1]: Started libpod-conmon-a45d75ce9bb95d16eddd8b6dc4fbde8da369d30ac387f9ee7197cf70c810a14f.scope.
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.199 2 DEBUG oslo_concurrency.lockutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.201 2 DEBUG oslo_concurrency.lockutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:13:55 np0005486808 podman[354934]: 2025-10-14 09:13:55.119996387 +0000 UTC m=+0.037786072 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.214 2 DEBUG nova.virt.hardware [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.214 2 INFO nova.compute.claims [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:13:55 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:13:55 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/800df221c41dd539573dc71d916bca4de2725dde4344d4ac0cc8b9e53c3d697e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:13:55 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/800df221c41dd539573dc71d916bca4de2725dde4344d4ac0cc8b9e53c3d697e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:13:55 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/800df221c41dd539573dc71d916bca4de2725dde4344d4ac0cc8b9e53c3d697e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:13:55 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/800df221c41dd539573dc71d916bca4de2725dde4344d4ac0cc8b9e53c3d697e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:13:55 np0005486808 podman[354934]: 2025-10-14 09:13:55.260785507 +0000 UTC m=+0.178575262 container init a45d75ce9bb95d16eddd8b6dc4fbde8da369d30ac387f9ee7197cf70c810a14f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_maxwell, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 14 05:13:55 np0005486808 podman[354934]: 2025-10-14 09:13:55.275792877 +0000 UTC m=+0.193582552 container start a45d75ce9bb95d16eddd8b6dc4fbde8da369d30ac387f9ee7197cf70c810a14f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_maxwell, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:13:55 np0005486808 podman[354934]: 2025-10-14 09:13:55.280160845 +0000 UTC m=+0.197950550 container attach a45d75ce9bb95d16eddd8b6dc4fbde8da369d30ac387f9ee7197cf70c810a14f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.325 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433235.323485, b595141f-123e-4250-bfec-888d866fd0c6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.326 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b595141f-123e-4250-bfec-888d866fd0c6] VM Started (Lifecycle Event)#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.328 2 DEBUG nova.compute.manager [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.332 2 DEBUG nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.337 2 INFO nova.virt.libvirt.driver [-] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Instance spawned successfully.#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.337 2 DEBUG nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.347 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.353 2 DEBUG oslo_concurrency.processutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.394 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.406 2 DEBUG nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.407 2 DEBUG nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.408 2 DEBUG nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.409 2 DEBUG nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.409 2 DEBUG nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.410 2 DEBUG nova.virt.libvirt.driver [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.437 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b595141f-123e-4250-bfec-888d866fd0c6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.438 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433235.3235881, b595141f-123e-4250-bfec-888d866fd0c6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.438 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b595141f-123e-4250-bfec-888d866fd0c6] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.460 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.467 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433235.3312664, b595141f-123e-4250-bfec-888d866fd0c6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.467 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b595141f-123e-4250-bfec-888d866fd0c6] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.471 2 INFO nova.compute.manager [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Took 9.70 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.471 2 DEBUG nova.compute.manager [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.498 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.505 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.534 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b595141f-123e-4250-bfec-888d866fd0c6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.549 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433220.5435143, fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.553 2 INFO nova.compute.manager [-] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.572 2 INFO nova.compute.manager [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Took 10.73 seconds to build instance.#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.575 2 DEBUG nova.compute.manager [None req-f71cc039-f54e-4167-9ad0-cc47d1dc5ee3 - - - - - -] [instance: fd930ad2-bc8c-42a0-a5f4-ed32e9a577bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.575 2 DEBUG oslo_concurrency.lockutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "2534f8b9-e832-4b78-ada4-e551429bdc75" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.576 2 DEBUG oslo_concurrency.lockutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.576 2 INFO nova.compute.manager [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Unshelving#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.593 2 DEBUG oslo_concurrency.lockutils [None req-68cf028d-ff11-4be5-a09a-f55c391de303 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.831s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.593 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "b595141f-123e-4250-bfec-888d866fd0c6" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 9.391s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.594 2 INFO nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: b595141f-123e-4250-bfec-888d866fd0c6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.594 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "b595141f-123e-4250-bfec-888d866fd0c6" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.647 2 DEBUG oslo_concurrency.lockutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:13:55 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:13:55 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1315042023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.810 2 DEBUG oslo_concurrency.processutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.816 2 DEBUG nova.compute.provider_tree [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.829 2 DEBUG nova.scheduler.client.report [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.854 2 DEBUG oslo_concurrency.lockutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.855 2 DEBUG nova.compute.manager [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.859 2 DEBUG oslo_concurrency.lockutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.212s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.864 2 DEBUG nova.objects.instance [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lazy-loading 'pci_requests' on Instance uuid 2534f8b9-e832-4b78-ada4-e551429bdc75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.887 2 DEBUG nova.objects.instance [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lazy-loading 'numa_topology' on Instance uuid 2534f8b9-e832-4b78-ada4-e551429bdc75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.901 2 DEBUG nova.virt.hardware [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.902 2 INFO nova.compute.claims [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.908 2 DEBUG nova.compute.manager [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.909 2 DEBUG nova.network.neutron [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:13:55 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1788: 305 pgs: 305 active+clean; 213 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 153 op/s
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.960 2 INFO nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:13:55 np0005486808 nova_compute[259627]: 2025-10-14 09:13:55.982 2 DEBUG nova.compute.manager [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 05:13:56 np0005486808 nova_compute[259627]: 2025-10-14 09:13:56.066 2 DEBUG nova.compute.manager [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 05:13:56 np0005486808 nova_compute[259627]: 2025-10-14 09:13:56.068 2 DEBUG nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 05:13:56 np0005486808 nova_compute[259627]: 2025-10-14 09:13:56.069 2 INFO nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Creating image(s)
Oct 14 05:13:56 np0005486808 nova_compute[259627]: 2025-10-14 09:13:56.100 2 DEBUG nova.storage.rbd_utils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image d5de3978-2377-4d8e-aeaf-c952912130a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:13:56 np0005486808 nova_compute[259627]: 2025-10-14 09:13:56.134 2 DEBUG nova.storage.rbd_utils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image d5de3978-2377-4d8e-aeaf-c952912130a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:13:56 np0005486808 nova_compute[259627]: 2025-10-14 09:13:56.173 2 DEBUG nova.storage.rbd_utils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image d5de3978-2377-4d8e-aeaf-c952912130a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:13:56 np0005486808 nova_compute[259627]: 2025-10-14 09:13:56.181 2 DEBUG oslo_concurrency.processutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:13:56 np0005486808 intelligent_maxwell[354950]: {
Oct 14 05:13:56 np0005486808 intelligent_maxwell[354950]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:13:56 np0005486808 intelligent_maxwell[354950]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:13:56 np0005486808 intelligent_maxwell[354950]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:13:56 np0005486808 intelligent_maxwell[354950]:        "osd_id": 2,
Oct 14 05:13:56 np0005486808 intelligent_maxwell[354950]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:13:56 np0005486808 intelligent_maxwell[354950]:        "type": "bluestore"
Oct 14 05:13:56 np0005486808 intelligent_maxwell[354950]:    },
Oct 14 05:13:56 np0005486808 intelligent_maxwell[354950]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:13:56 np0005486808 intelligent_maxwell[354950]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:13:56 np0005486808 intelligent_maxwell[354950]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:13:56 np0005486808 intelligent_maxwell[354950]:        "osd_id": 1,
Oct 14 05:13:56 np0005486808 intelligent_maxwell[354950]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:13:56 np0005486808 intelligent_maxwell[354950]:        "type": "bluestore"
Oct 14 05:13:56 np0005486808 intelligent_maxwell[354950]:    },
Oct 14 05:13:56 np0005486808 intelligent_maxwell[354950]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:13:56 np0005486808 intelligent_maxwell[354950]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:13:56 np0005486808 intelligent_maxwell[354950]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:13:56 np0005486808 intelligent_maxwell[354950]:        "osd_id": 0,
Oct 14 05:13:56 np0005486808 intelligent_maxwell[354950]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:13:56 np0005486808 intelligent_maxwell[354950]:        "type": "bluestore"
Oct 14 05:13:56 np0005486808 intelligent_maxwell[354950]:    }
Oct 14 05:13:56 np0005486808 intelligent_maxwell[354950]: }
Oct 14 05:13:56 np0005486808 nova_compute[259627]: 2025-10-14 09:13:56.226 2 DEBUG nova.policy [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a287ef08fc5c4f218bf06cd2c7ed021e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0a080fae2f3c4e39a6cca225203f5ec6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 05:13:56 np0005486808 nova_compute[259627]: 2025-10-14 09:13:56.257 2 DEBUG oslo_concurrency.processutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:13:56 np0005486808 systemd[1]: libpod-a45d75ce9bb95d16eddd8b6dc4fbde8da369d30ac387f9ee7197cf70c810a14f.scope: Deactivated successfully.
Oct 14 05:13:56 np0005486808 nova_compute[259627]: 2025-10-14 09:13:56.306 2 DEBUG oslo_concurrency.processutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:13:56 np0005486808 nova_compute[259627]: 2025-10-14 09:13:56.307 2 DEBUG oslo_concurrency.lockutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:13:56 np0005486808 nova_compute[259627]: 2025-10-14 09:13:56.307 2 DEBUG oslo_concurrency.lockutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:13:56 np0005486808 nova_compute[259627]: 2025-10-14 09:13:56.308 2 DEBUG oslo_concurrency.lockutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:13:56 np0005486808 podman[355063]: 2025-10-14 09:13:56.330783459 +0000 UTC m=+0.040315084 container died a45d75ce9bb95d16eddd8b6dc4fbde8da369d30ac387f9ee7197cf70c810a14f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:13:56 np0005486808 nova_compute[259627]: 2025-10-14 09:13:56.343 2 DEBUG nova.storage.rbd_utils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image d5de3978-2377-4d8e-aeaf-c952912130a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:13:56 np0005486808 nova_compute[259627]: 2025-10-14 09:13:56.349 2 DEBUG oslo_concurrency.processutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 d5de3978-2377-4d8e-aeaf-c952912130a2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:13:56 np0005486808 systemd[1]: var-lib-containers-storage-overlay-800df221c41dd539573dc71d916bca4de2725dde4344d4ac0cc8b9e53c3d697e-merged.mount: Deactivated successfully.
Oct 14 05:13:56 np0005486808 podman[355063]: 2025-10-14 09:13:56.396102759 +0000 UTC m=+0.105634354 container remove a45d75ce9bb95d16eddd8b6dc4fbde8da369d30ac387f9ee7197cf70c810a14f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_maxwell, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:13:56 np0005486808 systemd[1]: libpod-conmon-a45d75ce9bb95d16eddd8b6dc4fbde8da369d30ac387f9ee7197cf70c810a14f.scope: Deactivated successfully.
Oct 14 05:13:56 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:13:56 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:13:56 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:13:56 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:13:56 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev e3f08df5-6227-4e70-8efb-37b5717ec684 does not exist
Oct 14 05:13:56 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 7e663705-c38a-4b7c-a65f-b1b0cf17a4de does not exist
Oct 14 05:13:56 np0005486808 nova_compute[259627]: 2025-10-14 09:13:56.483 2 DEBUG nova.compute.manager [req-832538aa-07f1-4f98-a752-7009493af03a req-24bab0cc-c3ce-4330-83d2-26e74638a199 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received event network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:13:56 np0005486808 nova_compute[259627]: 2025-10-14 09:13:56.484 2 DEBUG oslo_concurrency.lockutils [req-832538aa-07f1-4f98-a752-7009493af03a req-24bab0cc-c3ce-4330-83d2-26e74638a199 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b595141f-123e-4250-bfec-888d866fd0c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:13:56 np0005486808 nova_compute[259627]: 2025-10-14 09:13:56.484 2 DEBUG oslo_concurrency.lockutils [req-832538aa-07f1-4f98-a752-7009493af03a req-24bab0cc-c3ce-4330-83d2-26e74638a199 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:13:56 np0005486808 nova_compute[259627]: 2025-10-14 09:13:56.484 2 DEBUG oslo_concurrency.lockutils [req-832538aa-07f1-4f98-a752-7009493af03a req-24bab0cc-c3ce-4330-83d2-26e74638a199 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:13:56 np0005486808 nova_compute[259627]: 2025-10-14 09:13:56.484 2 DEBUG nova.compute.manager [req-832538aa-07f1-4f98-a752-7009493af03a req-24bab0cc-c3ce-4330-83d2-26e74638a199 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] No waiting events found dispatching network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 05:13:56 np0005486808 nova_compute[259627]: 2025-10-14 09:13:56.485 2 WARNING nova.compute.manager [req-832538aa-07f1-4f98-a752-7009493af03a req-24bab0cc-c3ce-4330-83d2-26e74638a199 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received unexpected event network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 for instance with vm_state active and task_state None.
Oct 14 05:13:56 np0005486808 nova_compute[259627]: 2025-10-14 09:13:56.701 2 DEBUG oslo_concurrency.processutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 d5de3978-2377-4d8e-aeaf-c952912130a2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.353s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:13:56 np0005486808 nova_compute[259627]: 2025-10-14 09:13:56.762 2 DEBUG nova.storage.rbd_utils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] resizing rbd image d5de3978-2377-4d8e-aeaf-c952912130a2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 05:13:56 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:13:56 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/449513898' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:13:56 np0005486808 nova_compute[259627]: 2025-10-14 09:13:56.794 2 DEBUG oslo_concurrency.processutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:13:56 np0005486808 nova_compute[259627]: 2025-10-14 09:13:56.799 2 DEBUG nova.compute.provider_tree [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 05:13:56 np0005486808 nova_compute[259627]: 2025-10-14 09:13:56.821 2 DEBUG nova.scheduler.client.report [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 05:13:56 np0005486808 nova_compute[259627]: 2025-10-14 09:13:56.855 2 DEBUG oslo_concurrency.lockutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.995s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:13:56 np0005486808 nova_compute[259627]: 2025-10-14 09:13:56.861 2 DEBUG nova.objects.instance [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'migration_context' on Instance uuid d5de3978-2377-4d8e-aeaf-c952912130a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:13:56 np0005486808 nova_compute[259627]: 2025-10-14 09:13:56.875 2 DEBUG nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 05:13:56 np0005486808 nova_compute[259627]: 2025-10-14 09:13:56.875 2 DEBUG nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Ensure instance console log exists: /var/lib/nova/instances/d5de3978-2377-4d8e-aeaf-c952912130a2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 05:13:56 np0005486808 nova_compute[259627]: 2025-10-14 09:13:56.876 2 DEBUG oslo_concurrency.lockutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:13:56 np0005486808 nova_compute[259627]: 2025-10-14 09:13:56.876 2 DEBUG oslo_concurrency.lockutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:13:56 np0005486808 nova_compute[259627]: 2025-10-14 09:13:56.876 2 DEBUG oslo_concurrency.lockutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:13:57 np0005486808 nova_compute[259627]: 2025-10-14 09:13:57.070 2 INFO nova.network.neutron [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Updating port 4f827284-f357-43c5-bdde-c69731b52914 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Oct 14 05:13:57 np0005486808 nova_compute[259627]: 2025-10-14 09:13:57.118 2 DEBUG nova.network.neutron [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Successfully created port: dc177f85-b331-40ec-b30f-1f667878bcb8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 05:13:57 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:13:57 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:13:57 np0005486808 nova_compute[259627]: 2025-10-14 09:13:57.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:13:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:13:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:57.899 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 05:13:57 np0005486808 nova_compute[259627]: 2025-10-14 09:13:57.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:13:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:13:57.900 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 05:13:57 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1789: 305 pgs: 305 active+clean; 213 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 152 op/s
Oct 14 05:13:58 np0005486808 nova_compute[259627]: 2025-10-14 09:13:58.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:13:58 np0005486808 NetworkManager[44885]: <info>  [1760433238.1702] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/410)
Oct 14 05:13:58 np0005486808 NetworkManager[44885]: <info>  [1760433238.1734] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/411)
Oct 14 05:13:58 np0005486808 nova_compute[259627]: 2025-10-14 09:13:58.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:13:58 np0005486808 nova_compute[259627]: 2025-10-14 09:13:58.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:13:58 np0005486808 ovn_controller[152662]: 2025-10-14T09:13:58Z|01001|binding|INFO|Releasing lport 4b7b52fe-6c74-46c3-ab83-c118ed2fe8eb from this chassis (sb_readonly=0)
Oct 14 05:13:58 np0005486808 ovn_controller[152662]: 2025-10-14T09:13:58Z|01002|binding|INFO|Releasing lport b1a95e67-ca39-4a56-9a91-3165df166ea9 from this chassis (sb_readonly=0)
Oct 14 05:13:58 np0005486808 nova_compute[259627]: 2025-10-14 09:13:58.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:13:58 np0005486808 nova_compute[259627]: 2025-10-14 09:13:58.610 2 DEBUG oslo_concurrency.lockutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 05:13:58 np0005486808 nova_compute[259627]: 2025-10-14 09:13:58.610 2 DEBUG oslo_concurrency.lockutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquired lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 05:13:58 np0005486808 nova_compute[259627]: 2025-10-14 09:13:58.611 2 DEBUG nova.network.neutron [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 05:13:58 np0005486808 nova_compute[259627]: 2025-10-14 09:13:58.832 2 DEBUG nova.network.neutron [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Successfully updated port: dc177f85-b331-40ec-b30f-1f667878bcb8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 05:13:58 np0005486808 nova_compute[259627]: 2025-10-14 09:13:58.845 2 DEBUG oslo_concurrency.lockutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "refresh_cache-d5de3978-2377-4d8e-aeaf-c952912130a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 05:13:58 np0005486808 nova_compute[259627]: 2025-10-14 09:13:58.846 2 DEBUG oslo_concurrency.lockutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquired lock "refresh_cache-d5de3978-2377-4d8e-aeaf-c952912130a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 05:13:58 np0005486808 nova_compute[259627]: 2025-10-14 09:13:58.846 2 DEBUG nova.network.neutron [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 05:13:59 np0005486808 nova_compute[259627]: 2025-10-14 09:13:59.050 2 DEBUG nova.compute.manager [req-2351d9a7-de4f-4dab-bc22-618f09ffec92 req-b375a674-bde9-4423-8cc3-01624187b476 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received event network-changed-7103ce4a-69e8-454b-aed3-251ecb109232 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:13:59 np0005486808 nova_compute[259627]: 2025-10-14 09:13:59.050 2 DEBUG nova.compute.manager [req-2351d9a7-de4f-4dab-bc22-618f09ffec92 req-b375a674-bde9-4423-8cc3-01624187b476 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Refreshing instance network info cache due to event network-changed-7103ce4a-69e8-454b-aed3-251ecb109232. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 05:13:59 np0005486808 nova_compute[259627]: 2025-10-14 09:13:59.050 2 DEBUG oslo_concurrency.lockutils [req-2351d9a7-de4f-4dab-bc22-618f09ffec92 req-b375a674-bde9-4423-8cc3-01624187b476 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-b595141f-123e-4250-bfec-888d866fd0c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 05:13:59 np0005486808 nova_compute[259627]: 2025-10-14 09:13:59.051 2 DEBUG oslo_concurrency.lockutils [req-2351d9a7-de4f-4dab-bc22-618f09ffec92 req-b375a674-bde9-4423-8cc3-01624187b476 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-b595141f-123e-4250-bfec-888d866fd0c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 05:13:59 np0005486808 nova_compute[259627]: 2025-10-14 09:13:59.051 2 DEBUG nova.network.neutron [req-2351d9a7-de4f-4dab-bc22-618f09ffec92 req-b375a674-bde9-4423-8cc3-01624187b476 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Refreshing network info cache for port 7103ce4a-69e8-454b-aed3-251ecb109232 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 05:13:59 np0005486808 nova_compute[259627]: 2025-10-14 09:13:59.052 2 DEBUG nova.network.neutron [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 05:13:59 np0005486808 nova_compute[259627]: 2025-10-14 09:13:59.169 2 DEBUG nova.compute.manager [req-e4b5ed06-e3ee-4608-b7cc-4af3574bc9ff req-875a23c0-4188-445e-a206-eb240226ef44 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received event network-changed-4f827284-f357-43c5-bdde-c69731b52914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:13:59 np0005486808 nova_compute[259627]: 2025-10-14 09:13:59.169 2 DEBUG nova.compute.manager [req-e4b5ed06-e3ee-4608-b7cc-4af3574bc9ff req-875a23c0-4188-445e-a206-eb240226ef44 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Refreshing instance network info cache due to event network-changed-4f827284-f357-43c5-bdde-c69731b52914. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 05:13:59 np0005486808 nova_compute[259627]: 2025-10-14 09:13:59.170 2 DEBUG oslo_concurrency.lockutils [req-e4b5ed06-e3ee-4608-b7cc-4af3574bc9ff req-875a23c0-4188-445e-a206-eb240226ef44 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 05:13:59 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Oct 14 05:13:59 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:59.482257) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 05:13:59 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Oct 14 05:13:59 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433239482364, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 795, "num_deletes": 252, "total_data_size": 934370, "memory_usage": 949000, "flush_reason": "Manual Compaction"}
Oct 14 05:13:59 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Oct 14 05:13:59 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433239491937, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 924395, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36976, "largest_seqno": 37770, "table_properties": {"data_size": 920344, "index_size": 1767, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9502, "raw_average_key_size": 19, "raw_value_size": 912024, "raw_average_value_size": 1912, "num_data_blocks": 78, "num_entries": 477, "num_filter_entries": 477, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760433183, "oldest_key_time": 1760433183, "file_creation_time": 1760433239, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:13:59 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 9888 microseconds, and 5676 cpu microseconds.
Oct 14 05:13:59 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:13:59 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:59.492151) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 924395 bytes OK
Oct 14 05:13:59 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:59.492252) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Oct 14 05:13:59 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:59.494108) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Oct 14 05:13:59 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:59.494134) EVENT_LOG_v1 {"time_micros": 1760433239494126, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 05:13:59 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:59.494155) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 05:13:59 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 930329, prev total WAL file size 930329, number of live WAL files 2.
Oct 14 05:13:59 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:13:59 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:59.495392) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Oct 14 05:13:59 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 05:13:59 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(902KB)], [80(8617KB)]
Oct 14 05:13:59 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433239495444, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 9748958, "oldest_snapshot_seqno": -1}
Oct 14 05:13:59 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6241 keys, 8089915 bytes, temperature: kUnknown
Oct 14 05:13:59 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433239545266, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 8089915, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8048949, "index_size": 24280, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15621, "raw_key_size": 157876, "raw_average_key_size": 25, "raw_value_size": 7937754, "raw_average_value_size": 1271, "num_data_blocks": 978, "num_entries": 6241, "num_filter_entries": 6241, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760433239, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:13:59 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:13:59 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:59.545572) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 8089915 bytes
Oct 14 05:13:59 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:59.547526) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 195.3 rd, 162.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 8.4 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(19.3) write-amplify(8.8) OK, records in: 6760, records dropped: 519 output_compression: NoCompression
Oct 14 05:13:59 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:59.547559) EVENT_LOG_v1 {"time_micros": 1760433239547545, "job": 46, "event": "compaction_finished", "compaction_time_micros": 49912, "compaction_time_cpu_micros": 31793, "output_level": 6, "num_output_files": 1, "total_output_size": 8089915, "num_input_records": 6760, "num_output_records": 6241, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 05:13:59 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:13:59 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433239548067, "job": 46, "event": "table_file_deletion", "file_number": 82}
Oct 14 05:13:59 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:13:59 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433239551282, "job": 46, "event": "table_file_deletion", "file_number": 80}
Oct 14 05:13:59 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:59.495334) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:13:59 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:59.551333) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:13:59 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:59.551340) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:13:59 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:59.551343) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:13:59 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:59.551346) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:13:59 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:13:59.551349) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:13:59 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1790: 305 pgs: 305 active+clean; 213 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 14 05:14:00 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:00Z|00098|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7f:fb:45 10.100.0.13
Oct 14 05:14:00 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:00Z|00099|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7f:fb:45 10.100.0.13
Oct 14 05:14:00 np0005486808 nova_compute[259627]: 2025-10-14 09:14:00.262 2 DEBUG nova.network.neutron [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Updating instance_info_cache with network_info: [{"id": "dc177f85-b331-40ec-b30f-1f667878bcb8", "address": "fa:16:3e:6a:28:0f", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc177f85-b3", "ovs_interfaceid": "dc177f85-b331-40ec-b30f-1f667878bcb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:14:00 np0005486808 nova_compute[259627]: 2025-10-14 09:14:00.280 2 DEBUG oslo_concurrency.lockutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Releasing lock "refresh_cache-d5de3978-2377-4d8e-aeaf-c952912130a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:14:00 np0005486808 nova_compute[259627]: 2025-10-14 09:14:00.280 2 DEBUG nova.compute.manager [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Instance network_info: |[{"id": "dc177f85-b331-40ec-b30f-1f667878bcb8", "address": "fa:16:3e:6a:28:0f", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc177f85-b3", "ovs_interfaceid": "dc177f85-b331-40ec-b30f-1f667878bcb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:14:00 np0005486808 nova_compute[259627]: 2025-10-14 09:14:00.283 2 DEBUG nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Start _get_guest_xml network_info=[{"id": "dc177f85-b331-40ec-b30f-1f667878bcb8", "address": "fa:16:3e:6a:28:0f", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc177f85-b3", "ovs_interfaceid": "dc177f85-b331-40ec-b30f-1f667878bcb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:14:00 np0005486808 nova_compute[259627]: 2025-10-14 09:14:00.288 2 WARNING nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:14:00 np0005486808 nova_compute[259627]: 2025-10-14 09:14:00.294 2 DEBUG nova.virt.libvirt.host [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:14:00 np0005486808 nova_compute[259627]: 2025-10-14 09:14:00.295 2 DEBUG nova.virt.libvirt.host [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:14:00 np0005486808 nova_compute[259627]: 2025-10-14 09:14:00.298 2 DEBUG nova.virt.libvirt.host [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:14:00 np0005486808 nova_compute[259627]: 2025-10-14 09:14:00.299 2 DEBUG nova.virt.libvirt.host [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:14:00 np0005486808 nova_compute[259627]: 2025-10-14 09:14:00.299 2 DEBUG nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:14:00 np0005486808 nova_compute[259627]: 2025-10-14 09:14:00.299 2 DEBUG nova.virt.hardware [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:14:00 np0005486808 nova_compute[259627]: 2025-10-14 09:14:00.300 2 DEBUG nova.virt.hardware [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:14:00 np0005486808 nova_compute[259627]: 2025-10-14 09:14:00.300 2 DEBUG nova.virt.hardware [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:14:00 np0005486808 nova_compute[259627]: 2025-10-14 09:14:00.300 2 DEBUG nova.virt.hardware [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:14:00 np0005486808 nova_compute[259627]: 2025-10-14 09:14:00.301 2 DEBUG nova.virt.hardware [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:14:00 np0005486808 nova_compute[259627]: 2025-10-14 09:14:00.301 2 DEBUG nova.virt.hardware [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:14:00 np0005486808 nova_compute[259627]: 2025-10-14 09:14:00.301 2 DEBUG nova.virt.hardware [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:14:00 np0005486808 nova_compute[259627]: 2025-10-14 09:14:00.301 2 DEBUG nova.virt.hardware [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:14:00 np0005486808 nova_compute[259627]: 2025-10-14 09:14:00.302 2 DEBUG nova.virt.hardware [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:14:00 np0005486808 nova_compute[259627]: 2025-10-14 09:14:00.302 2 DEBUG nova.virt.hardware [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:14:00 np0005486808 nova_compute[259627]: 2025-10-14 09:14:00.302 2 DEBUG nova.virt.hardware [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:14:00 np0005486808 nova_compute[259627]: 2025-10-14 09:14:00.305 2 DEBUG oslo_concurrency.processutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:00 np0005486808 nova_compute[259627]: 2025-10-14 09:14:00.515 2 DEBUG nova.network.neutron [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Updating instance_info_cache with network_info: [{"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:14:00 np0005486808 nova_compute[259627]: 2025-10-14 09:14:00.544 2 DEBUG oslo_concurrency.lockutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Releasing lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:14:00 np0005486808 nova_compute[259627]: 2025-10-14 09:14:00.547 2 DEBUG nova.virt.libvirt.driver [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:14:00 np0005486808 nova_compute[259627]: 2025-10-14 09:14:00.548 2 INFO nova.virt.libvirt.driver [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Creating image(s)#033[00m
Oct 14 05:14:00 np0005486808 nova_compute[259627]: 2025-10-14 09:14:00.594 2 DEBUG nova.storage.rbd_utils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] rbd image 2534f8b9-e832-4b78-ada4-e551429bdc75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:00 np0005486808 nova_compute[259627]: 2025-10-14 09:14:00.612 2 DEBUG nova.objects.instance [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lazy-loading 'trusted_certs' on Instance uuid 2534f8b9-e832-4b78-ada4-e551429bdc75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:14:00 np0005486808 nova_compute[259627]: 2025-10-14 09:14:00.613 2 DEBUG oslo_concurrency.lockutils [req-e4b5ed06-e3ee-4608-b7cc-4af3574bc9ff req-875a23c0-4188-445e-a206-eb240226ef44 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:14:00 np0005486808 nova_compute[259627]: 2025-10-14 09:14:00.614 2 DEBUG nova.network.neutron [req-e4b5ed06-e3ee-4608-b7cc-4af3574bc9ff req-875a23c0-4188-445e-a206-eb240226ef44 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Refreshing network info cache for port 4f827284-f357-43c5-bdde-c69731b52914 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:14:00 np0005486808 nova_compute[259627]: 2025-10-14 09:14:00.674 2 DEBUG nova.storage.rbd_utils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] rbd image 2534f8b9-e832-4b78-ada4-e551429bdc75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:00 np0005486808 podman[355296]: 2025-10-14 09:14:00.695815342 +0000 UTC m=+0.093289000 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 14 05:14:00 np0005486808 nova_compute[259627]: 2025-10-14 09:14:00.699 2 DEBUG nova.storage.rbd_utils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] rbd image 2534f8b9-e832-4b78-ada4-e551429bdc75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:00 np0005486808 podman[355280]: 2025-10-14 09:14:00.708169756 +0000 UTC m=+0.105646485 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 14 05:14:00 np0005486808 nova_compute[259627]: 2025-10-14 09:14:00.710 2 DEBUG oslo_concurrency.lockutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "ad4745c71f608ccd993c23b78f9a7e19f70d6f59" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:00 np0005486808 nova_compute[259627]: 2025-10-14 09:14:00.711 2 DEBUG oslo_concurrency.lockutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "ad4745c71f608ccd993c23b78f9a7e19f70d6f59" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:14:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2523065631' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:14:00 np0005486808 nova_compute[259627]: 2025-10-14 09:14:00.822 2 DEBUG oslo_concurrency.processutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:00 np0005486808 nova_compute[259627]: 2025-10-14 09:14:00.837 2 DEBUG nova.storage.rbd_utils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image d5de3978-2377-4d8e-aeaf-c952912130a2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:00 np0005486808 nova_compute[259627]: 2025-10-14 09:14:00.840 2 DEBUG oslo_concurrency.processutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:00 np0005486808 nova_compute[259627]: 2025-10-14 09:14:00.874 2 DEBUG nova.network.neutron [req-2351d9a7-de4f-4dab-bc22-618f09ffec92 req-b375a674-bde9-4423-8cc3-01624187b476 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Updated VIF entry in instance network info cache for port 7103ce4a-69e8-454b-aed3-251ecb109232. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:14:00 np0005486808 nova_compute[259627]: 2025-10-14 09:14:00.875 2 DEBUG nova.network.neutron [req-2351d9a7-de4f-4dab-bc22-618f09ffec92 req-b375a674-bde9-4423-8cc3-01624187b476 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Updating instance_info_cache with network_info: [{"id": "7103ce4a-69e8-454b-aed3-251ecb109232", "address": "fa:16:3e:9d:3c:de", "network": {"id": "563aa000-400f-4c19-ba83-9377cc50d29f", "bridge": "br-int", "label": "tempest-network-smoke--1642362496", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7103ce4a-69", "ovs_interfaceid": "7103ce4a-69e8-454b-aed3-251ecb109232", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:14:00 np0005486808 nova_compute[259627]: 2025-10-14 09:14:00.895 2 DEBUG oslo_concurrency.lockutils [req-2351d9a7-de4f-4dab-bc22-618f09ffec92 req-b375a674-bde9-4423-8cc3-01624187b476 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-b595141f-123e-4250-bfec-888d866fd0c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.005 2 DEBUG nova.virt.libvirt.imagebackend [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Image locations are: [{'url': 'rbd://c49aadb6-9b04-5cb1-8f5f-4c91676c568e/images/7b536765-adaa-4682-86b5-b3ff0be769bf/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://c49aadb6-9b04-5cb1-8f5f-4c91676c568e/images/7b536765-adaa-4682-86b5-b3ff0be769bf/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.044 2 DEBUG nova.virt.libvirt.imagebackend [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Selected location: {'url': 'rbd://c49aadb6-9b04-5cb1-8f5f-4c91676c568e/images/7b536765-adaa-4682-86b5-b3ff0be769bf/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.045 2 DEBUG nova.storage.rbd_utils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] cloning images/7b536765-adaa-4682-86b5-b3ff0be769bf@snap to None/2534f8b9-e832-4b78-ada4-e551429bdc75_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.141 2 DEBUG oslo_concurrency.lockutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "ad4745c71f608ccd993c23b78f9a7e19f70d6f59" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.431s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.235 2 DEBUG nova.objects.instance [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lazy-loading 'migration_context' on Instance uuid 2534f8b9-e832-4b78-ada4-e551429bdc75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:14:01 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:14:01 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1065249320' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.281 2 DEBUG nova.compute.manager [req-5a0e9638-d60b-421b-993d-9ddd156cf1be req-bc9b45f3-2d2d-447a-8cde-4deaa7d88c2e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Received event network-changed-dc177f85-b331-40ec-b30f-1f667878bcb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.282 2 DEBUG nova.compute.manager [req-5a0e9638-d60b-421b-993d-9ddd156cf1be req-bc9b45f3-2d2d-447a-8cde-4deaa7d88c2e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Refreshing instance network info cache due to event network-changed-dc177f85-b331-40ec-b30f-1f667878bcb8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.282 2 DEBUG oslo_concurrency.lockutils [req-5a0e9638-d60b-421b-993d-9ddd156cf1be req-bc9b45f3-2d2d-447a-8cde-4deaa7d88c2e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-d5de3978-2377-4d8e-aeaf-c952912130a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.282 2 DEBUG oslo_concurrency.lockutils [req-5a0e9638-d60b-421b-993d-9ddd156cf1be req-bc9b45f3-2d2d-447a-8cde-4deaa7d88c2e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-d5de3978-2377-4d8e-aeaf-c952912130a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.282 2 DEBUG nova.network.neutron [req-5a0e9638-d60b-421b-993d-9ddd156cf1be req-bc9b45f3-2d2d-447a-8cde-4deaa7d88c2e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Refreshing network info cache for port dc177f85-b331-40ec-b30f-1f667878bcb8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.284 2 DEBUG oslo_concurrency.processutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.285 2 DEBUG nova.virt.libvirt.vif [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:13:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1227005018',display_name='tempest-ServersTestJSON-server-1227005018',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1227005018',id=97,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0a080fae2f3c4e39a6cca225203f5ec6',ramdisk_id='',reservation_id='r-0z0rtrem',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-2060951674',owner_user_name='tempest-ServersTestJSON-2060951674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:13:56Z,user_data=None,user_id='a287ef08fc5c4f218bf06cd2c7ed021e',uuid=d5de3978-2377-4d8e-aeaf-c952912130a2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dc177f85-b331-40ec-b30f-1f667878bcb8", "address": "fa:16:3e:6a:28:0f", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc177f85-b3", "ovs_interfaceid": "dc177f85-b331-40ec-b30f-1f667878bcb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.285 2 DEBUG nova.network.os_vif_util [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converting VIF {"id": "dc177f85-b331-40ec-b30f-1f667878bcb8", "address": "fa:16:3e:6a:28:0f", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc177f85-b3", "ovs_interfaceid": "dc177f85-b331-40ec-b30f-1f667878bcb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.286 2 DEBUG nova.network.os_vif_util [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:28:0f,bridge_name='br-int',has_traffic_filtering=True,id=dc177f85-b331-40ec-b30f-1f667878bcb8,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc177f85-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.288 2 DEBUG nova.objects.instance [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'pci_devices' on Instance uuid d5de3978-2377-4d8e-aeaf-c952912130a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.292 2 DEBUG nova.storage.rbd_utils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] flattening vms/2534f8b9-e832-4b78-ada4-e551429bdc75_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.323 2 DEBUG nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:14:01 np0005486808 nova_compute[259627]:  <uuid>d5de3978-2377-4d8e-aeaf-c952912130a2</uuid>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:  <name>instance-00000061</name>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:14:01 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:      <nova:name>tempest-ServersTestJSON-server-1227005018</nova:name>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:14:00</nova:creationTime>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:14:01 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:        <nova:user uuid="a287ef08fc5c4f218bf06cd2c7ed021e">tempest-ServersTestJSON-2060951674-project-member</nova:user>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:        <nova:project uuid="0a080fae2f3c4e39a6cca225203f5ec6">tempest-ServersTestJSON-2060951674</nova:project>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:        <nova:port uuid="dc177f85-b331-40ec-b30f-1f667878bcb8">
Oct 14 05:14:01 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:      <entry name="serial">d5de3978-2377-4d8e-aeaf-c952912130a2</entry>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:      <entry name="uuid">d5de3978-2377-4d8e-aeaf-c952912130a2</entry>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:14:01 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/d5de3978-2377-4d8e-aeaf-c952912130a2_disk">
Oct 14 05:14:01 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:14:01 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:14:01 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/d5de3978-2377-4d8e-aeaf-c952912130a2_disk.config">
Oct 14 05:14:01 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:14:01 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:14:01 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:6a:28:0f"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:      <target dev="tapdc177f85-b3"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:14:01 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/d5de3978-2377-4d8e-aeaf-c952912130a2/console.log" append="off"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:14:01 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:14:01 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:14:01 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:14:01 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:14:01 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.329 2 DEBUG nova.compute.manager [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Preparing to wait for external event network-vif-plugged-dc177f85-b331-40ec-b30f-1f667878bcb8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.330 2 DEBUG oslo_concurrency.lockutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "d5de3978-2377-4d8e-aeaf-c952912130a2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.330 2 DEBUG oslo_concurrency.lockutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "d5de3978-2377-4d8e-aeaf-c952912130a2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.330 2 DEBUG oslo_concurrency.lockutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "d5de3978-2377-4d8e-aeaf-c952912130a2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.331 2 DEBUG nova.virt.libvirt.vif [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:13:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1227005018',display_name='tempest-ServersTestJSON-server-1227005018',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1227005018',id=97,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0a080fae2f3c4e39a6cca225203f5ec6',ramdisk_id='',reservation_id='r-0z0rtrem',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-2060951674',owner_user_name='tempest-ServersTestJSON-2060951674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:13:56Z,user_data=None,user_id='a287ef08fc5c4f218bf06cd2c7ed021e',uuid=d5de3978-2377-4d8e-aeaf-c952912130a2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dc177f85-b331-40ec-b30f-1f667878bcb8", "address": "fa:16:3e:6a:28:0f", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc177f85-b3", "ovs_interfaceid": "dc177f85-b331-40ec-b30f-1f667878bcb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.333 2 DEBUG nova.network.os_vif_util [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converting VIF {"id": "dc177f85-b331-40ec-b30f-1f667878bcb8", "address": "fa:16:3e:6a:28:0f", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc177f85-b3", "ovs_interfaceid": "dc177f85-b331-40ec-b30f-1f667878bcb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.333 2 DEBUG nova.network.os_vif_util [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:28:0f,bridge_name='br-int',has_traffic_filtering=True,id=dc177f85-b331-40ec-b30f-1f667878bcb8,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc177f85-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.334 2 DEBUG os_vif [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:28:0f,bridge_name='br-int',has_traffic_filtering=True,id=dc177f85-b331-40ec-b30f-1f667878bcb8,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc177f85-b3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.339 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.340 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.346 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdc177f85-b3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.347 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdc177f85-b3, col_values=(('external_ids', {'iface-id': 'dc177f85-b331-40ec-b30f-1f667878bcb8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6a:28:0f', 'vm-uuid': 'd5de3978-2377-4d8e-aeaf-c952912130a2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:01 np0005486808 NetworkManager[44885]: <info>  [1760433241.3500] manager: (tapdc177f85-b3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/412)
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.366 2 INFO os_vif [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:28:0f,bridge_name='br-int',has_traffic_filtering=True,id=dc177f85-b331-40ec-b30f-1f667878bcb8,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc177f85-b3')#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.452 2 DEBUG nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.455 2 DEBUG nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.455 2 DEBUG nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] No VIF found with MAC fa:16:3e:6a:28:0f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.455 2 INFO nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Using config drive#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.476 2 DEBUG nova.storage.rbd_utils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image d5de3978-2377-4d8e-aeaf-c952912130a2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.604 2 DEBUG nova.virt.libvirt.driver [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Image rbd:vms/2534f8b9-e832-4b78-ada4-e551429bdc75_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.605 2 DEBUG nova.virt.libvirt.driver [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.605 2 DEBUG nova.virt.libvirt.driver [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Ensure instance console log exists: /var/lib/nova/instances/2534f8b9-e832-4b78-ada4-e551429bdc75/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.606 2 DEBUG oslo_concurrency.lockutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.606 2 DEBUG oslo_concurrency.lockutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.607 2 DEBUG oslo_concurrency.lockutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.610 2 DEBUG nova.virt.libvirt.driver [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Start _get_guest_xml network_info=[{"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-14T09:13:36Z,direct_url=<?>,disk_format='raw',id=7b536765-adaa-4682-86b5-b3ff0be769bf,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-17250352-shelved',owner='517aafb84156407c8672042097e3ef4f',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-14T09:13:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.614 2 WARNING nova.virt.libvirt.driver [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.619 2 DEBUG nova.virt.libvirt.host [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.619 2 DEBUG nova.virt.libvirt.host [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.622 2 DEBUG nova.virt.libvirt.host [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.622 2 DEBUG nova.virt.libvirt.host [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.622 2 DEBUG nova.virt.libvirt.driver [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.623 2 DEBUG nova.virt.hardware [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-14T09:13:36Z,direct_url=<?>,disk_format='raw',id=7b536765-adaa-4682-86b5-b3ff0be769bf,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-17250352-shelved',owner='517aafb84156407c8672042097e3ef4f',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-14T09:13:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.623 2 DEBUG nova.virt.hardware [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.623 2 DEBUG nova.virt.hardware [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.624 2 DEBUG nova.virt.hardware [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.624 2 DEBUG nova.virt.hardware [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.624 2 DEBUG nova.virt.hardware [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.624 2 DEBUG nova.virt.hardware [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.624 2 DEBUG nova.virt.hardware [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.625 2 DEBUG nova.virt.hardware [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.625 2 DEBUG nova.virt.hardware [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.625 2 DEBUG nova.virt.hardware [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.625 2 DEBUG nova.objects.instance [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lazy-loading 'vcpu_model' on Instance uuid 2534f8b9-e832-4b78-ada4-e551429bdc75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.642 2 DEBUG oslo_concurrency.processutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.866 2 INFO nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Creating config drive at /var/lib/nova/instances/d5de3978-2377-4d8e-aeaf-c952912130a2/disk.config#033[00m
Oct 14 05:14:01 np0005486808 nova_compute[259627]: 2025-10-14 09:14:01.882 2 DEBUG oslo_concurrency.processutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d5de3978-2377-4d8e-aeaf-c952912130a2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyoe9xwa6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:01 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1791: 305 pgs: 305 active+clean; 289 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 276 op/s
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.049 2 DEBUG oslo_concurrency.processutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d5de3978-2377-4d8e-aeaf-c952912130a2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyoe9xwa6" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:14:02 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3510585488' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.090 2 DEBUG nova.storage.rbd_utils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image d5de3978-2377-4d8e-aeaf-c952912130a2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.096 2 DEBUG oslo_concurrency.processutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d5de3978-2377-4d8e-aeaf-c952912130a2/disk.config d5de3978-2377-4d8e-aeaf-c952912130a2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.164 2 DEBUG oslo_concurrency.processutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.199 2 DEBUG nova.storage.rbd_utils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] rbd image 2534f8b9-e832-4b78-ada4-e551429bdc75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.203 2 DEBUG oslo_concurrency.processutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.250 2 DEBUG nova.network.neutron [req-e4b5ed06-e3ee-4608-b7cc-4af3574bc9ff req-875a23c0-4188-445e-a206-eb240226ef44 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Updated VIF entry in instance network info cache for port 4f827284-f357-43c5-bdde-c69731b52914. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.251 2 DEBUG nova.network.neutron [req-e4b5ed06-e3ee-4608-b7cc-4af3574bc9ff req-875a23c0-4188-445e-a206-eb240226ef44 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Updating instance_info_cache with network_info: [{"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.271 2 DEBUG oslo_concurrency.lockutils [req-e4b5ed06-e3ee-4608-b7cc-4af3574bc9ff req-875a23c0-4188-445e-a206-eb240226ef44 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.311 2 DEBUG oslo_concurrency.processutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d5de3978-2377-4d8e-aeaf-c952912130a2/disk.config d5de3978-2377-4d8e-aeaf-c952912130a2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.215s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.312 2 INFO nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Deleting local config drive /var/lib/nova/instances/d5de3978-2377-4d8e-aeaf-c952912130a2/disk.config because it was imported into RBD.#033[00m
Oct 14 05:14:02 np0005486808 kernel: tapdc177f85-b3: entered promiscuous mode
Oct 14 05:14:02 np0005486808 NetworkManager[44885]: <info>  [1760433242.3699] manager: (tapdc177f85-b3): new Tun device (/org/freedesktop/NetworkManager/Devices/413)
Oct 14 05:14:02 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:02Z|01003|binding|INFO|Claiming lport dc177f85-b331-40ec-b30f-1f667878bcb8 for this chassis.
Oct 14 05:14:02 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:02Z|01004|binding|INFO|dc177f85-b331-40ec-b30f-1f667878bcb8: Claiming fa:16:3e:6a:28:0f 10.100.0.14
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:02.378 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:28:0f 10.100.0.14'], port_security=['fa:16:3e:6a:28:0f 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'd5de3978-2377-4d8e-aeaf-c952912130a2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbecee11-4892-4e36-88d8-98879af7bb1e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a080fae2f3c4e39a6cca225203f5ec6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '03d7ceab-aac8-44ec-88a3-f0ef79d050f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d77764d-de2c-4408-8882-77d03cb7288a, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=dc177f85-b331-40ec-b30f-1f667878bcb8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:14:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:02.379 162547 INFO neutron.agent.ovn.metadata.agent [-] Port dc177f85-b331-40ec-b30f-1f667878bcb8 in datapath fbecee11-4892-4e36-88d8-98879af7bb1e bound to our chassis#033[00m
Oct 14 05:14:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:02.381 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbecee11-4892-4e36-88d8-98879af7bb1e#033[00m
Oct 14 05:14:02 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:02Z|01005|binding|INFO|Setting lport dc177f85-b331-40ec-b30f-1f667878bcb8 ovn-installed in OVS
Oct 14 05:14:02 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:02Z|01006|binding|INFO|Setting lport dc177f85-b331-40ec-b30f-1f667878bcb8 up in Southbound
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:02.406 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1d72c456-6ef7-43b6-9fcf-36b1e3e8bce2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:02 np0005486808 systemd-machined[214636]: New machine qemu-119-instance-00000061.
Oct 14 05:14:02 np0005486808 systemd[1]: Started Virtual Machine qemu-119-instance-00000061.
Oct 14 05:14:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:02.441 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[0ba54c46-f2d7-4d59-831b-2820c259b9e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:02.445 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[7e3b9e6c-535e-4344-9f54-21bb588c10e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:02 np0005486808 systemd-udevd[355712]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:14:02 np0005486808 NetworkManager[44885]: <info>  [1760433242.4608] device (tapdc177f85-b3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:14:02 np0005486808 NetworkManager[44885]: <info>  [1760433242.4619] device (tapdc177f85-b3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:14:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:02.486 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d6f9f389-7aec-4209-a75d-6e33244aa7af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:02.515 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[588ec1f7-2e4e-4b2a-9126-76147a6dfbd6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbecee11-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:b3:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 700079, 'reachable_time': 38382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355722, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:02.538 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[31e5f951-670c-42b6-b566-9aba45c09d19]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700095, 'tstamp': 700095}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355724, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700099, 'tstamp': 700099}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355724, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:02.541 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbecee11-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:02.545 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbecee11-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:02.545 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:14:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:02.546 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbecee11-40, col_values=(('external_ids', {'iface-id': 'b1a95e67-ca39-4a56-9a91-3165df166ea9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:02.547 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.658 2 DEBUG nova.network.neutron [req-5a0e9638-d60b-421b-993d-9ddd156cf1be req-bc9b45f3-2d2d-447a-8cde-4deaa7d88c2e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Updated VIF entry in instance network info cache for port dc177f85-b331-40ec-b30f-1f667878bcb8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.659 2 DEBUG nova.network.neutron [req-5a0e9638-d60b-421b-993d-9ddd156cf1be req-bc9b45f3-2d2d-447a-8cde-4deaa7d88c2e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Updating instance_info_cache with network_info: [{"id": "dc177f85-b331-40ec-b30f-1f667878bcb8", "address": "fa:16:3e:6a:28:0f", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc177f85-b3", "ovs_interfaceid": "dc177f85-b331-40ec-b30f-1f667878bcb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:14:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:14:02 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3023730173' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.677 2 DEBUG oslo_concurrency.lockutils [req-5a0e9638-d60b-421b-993d-9ddd156cf1be req-bc9b45f3-2d2d-447a-8cde-4deaa7d88c2e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-d5de3978-2377-4d8e-aeaf-c952912130a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.696 2 DEBUG oslo_concurrency.processutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.697 2 DEBUG nova.virt.libvirt.vif [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-14T09:12:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-17250352',display_name='tempest-ServersNegativeTestJSON-server-17250352',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-17250352',id=86,image_ref='7b536765-adaa-4682-86b5-b3ff0be769bf',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:12:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='517aafb84156407c8672042097e3ef4f',ramdisk_id='',reservation_id='r-rj00rja0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',imag
e_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1475695514',owner_user_name='tempest-ServersNegativeTestJSON-1475695514-project-member',shelved_at='2025-10-14T09:13:45.082732',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='7b536765-adaa-4682-86b5-b3ff0be769bf'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:13:55Z,user_data=None,user_id='92e59e145f6942b78d0ffbebc4d89e76',uuid=2534f8b9-e832-4b78-ada4-e551429bdc75,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.697 2 DEBUG nova.network.os_vif_util [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Converting VIF {"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.698 2 DEBUG nova.network.os_vif_util [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:d7:f7,bridge_name='br-int',has_traffic_filtering=True,id=4f827284-f357-43c5-bdde-c69731b52914,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f827284-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.699 2 DEBUG nova.objects.instance [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lazy-loading 'pci_devices' on Instance uuid 2534f8b9-e832-4b78-ada4-e551429bdc75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.728 2 DEBUG nova.virt.libvirt.driver [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:14:02 np0005486808 nova_compute[259627]:  <uuid>2534f8b9-e832-4b78-ada4-e551429bdc75</uuid>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:  <name>instance-00000056</name>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:14:02 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:      <nova:name>tempest-ServersNegativeTestJSON-server-17250352</nova:name>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:14:01</nova:creationTime>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:14:02 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:        <nova:user uuid="92e59e145f6942b78d0ffbebc4d89e76">tempest-ServersNegativeTestJSON-1475695514-project-member</nova:user>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:        <nova:project uuid="517aafb84156407c8672042097e3ef4f">tempest-ServersNegativeTestJSON-1475695514</nova:project>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="7b536765-adaa-4682-86b5-b3ff0be769bf"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:        <nova:port uuid="4f827284-f357-43c5-bdde-c69731b52914">
Oct 14 05:14:02 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:      <entry name="serial">2534f8b9-e832-4b78-ada4-e551429bdc75</entry>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:      <entry name="uuid">2534f8b9-e832-4b78-ada4-e551429bdc75</entry>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:14:02 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/2534f8b9-e832-4b78-ada4-e551429bdc75_disk">
Oct 14 05:14:02 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:14:02 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:14:02 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/2534f8b9-e832-4b78-ada4-e551429bdc75_disk.config">
Oct 14 05:14:02 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:14:02 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:14:02 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:8b:d7:f7"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:      <target dev="tap4f827284-f3"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:14:02 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/2534f8b9-e832-4b78-ada4-e551429bdc75/console.log" append="off"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <input type="keyboard" bus="usb"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:14:02 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:14:02 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:14:02 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:14:02 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:14:02 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.733 2 DEBUG nova.compute.manager [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Preparing to wait for external event network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.733 2 DEBUG oslo_concurrency.lockutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.734 2 DEBUG oslo_concurrency.lockutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.734 2 DEBUG oslo_concurrency.lockutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:14:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.734 2 DEBUG nova.virt.libvirt.vif [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-14T09:12:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-17250352',display_name='tempest-ServersNegativeTestJSON-server-17250352',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-17250352',id=86,image_ref='7b536765-adaa-4682-86b5-b3ff0be769bf',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:12:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='517aafb84156407c8672042097e3ef4f',ramdisk_id='',reservation_id='r-rj00rja0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='vi
rtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1475695514',owner_user_name='tempest-ServersNegativeTestJSON-1475695514-project-member',shelved_at='2025-10-14T09:13:45.082732',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='7b536765-adaa-4682-86b5-b3ff0be769bf'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:13:55Z,user_data=None,user_id='92e59e145f6942b78d0ffbebc4d89e76',uuid=2534f8b9-e832-4b78-ada4-e551429bdc75,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.735 2 DEBUG nova.network.os_vif_util [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Converting VIF {"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.735 2 DEBUG nova.network.os_vif_util [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:d7:f7,bridge_name='br-int',has_traffic_filtering=True,id=4f827284-f357-43c5-bdde-c69731b52914,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f827284-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.735 2 DEBUG os_vif [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:d7:f7,bridge_name='br-int',has_traffic_filtering=True,id=4f827284-f357-43c5-bdde-c69731b52914,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f827284-f3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.736 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.736 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.738 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f827284-f3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.739 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4f827284-f3, col_values=(('external_ids', {'iface-id': '4f827284-f357-43c5-bdde-c69731b52914', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8b:d7:f7', 'vm-uuid': '2534f8b9-e832-4b78-ada4-e551429bdc75'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:02 np0005486808 NetworkManager[44885]: <info>  [1760433242.7410] manager: (tap4f827284-f3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/414)
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.746 2 INFO os_vif [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:d7:f7,bridge_name='br-int',has_traffic_filtering=True,id=4f827284-f357-43c5-bdde-c69731b52914,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f827284-f3')#033[00m
Oct 14 05:14:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:14:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:14:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:14:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.813 2 DEBUG nova.virt.libvirt.driver [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.813 2 DEBUG nova.virt.libvirt.driver [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.813 2 DEBUG nova.virt.libvirt.driver [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] No VIF found with MAC fa:16:3e:8b:d7:f7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.814 2 INFO nova.virt.libvirt.driver [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Using config drive#033[00m
Oct 14 05:14:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.843 2 DEBUG nova.storage.rbd_utils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] rbd image 2534f8b9-e832-4b78-ada4-e551429bdc75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.866 2 DEBUG nova.objects.instance [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lazy-loading 'ec2_ids' on Instance uuid 2534f8b9-e832-4b78-ada4-e551429bdc75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:14:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:02.902 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:02 np0005486808 nova_compute[259627]: 2025-10-14 09:14:02.929 2 DEBUG nova.objects.instance [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lazy-loading 'keypairs' on Instance uuid 2534f8b9-e832-4b78-ada4-e551429bdc75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:14:03 np0005486808 nova_compute[259627]: 2025-10-14 09:14:03.305 2 INFO nova.virt.libvirt.driver [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Creating config drive at /var/lib/nova/instances/2534f8b9-e832-4b78-ada4-e551429bdc75/disk.config#033[00m
Oct 14 05:14:03 np0005486808 nova_compute[259627]: 2025-10-14 09:14:03.314 2 DEBUG oslo_concurrency.processutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2534f8b9-e832-4b78-ada4-e551429bdc75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptddp6k2q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:03 np0005486808 nova_compute[259627]: 2025-10-14 09:14:03.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:03 np0005486808 nova_compute[259627]: 2025-10-14 09:14:03.490 2 DEBUG oslo_concurrency.processutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2534f8b9-e832-4b78-ada4-e551429bdc75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptddp6k2q" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:03 np0005486808 nova_compute[259627]: 2025-10-14 09:14:03.532 2 DEBUG nova.storage.rbd_utils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] rbd image 2534f8b9-e832-4b78-ada4-e551429bdc75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:03 np0005486808 nova_compute[259627]: 2025-10-14 09:14:03.535 2 DEBUG oslo_concurrency.processutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2534f8b9-e832-4b78-ada4-e551429bdc75/disk.config 2534f8b9-e832-4b78-ada4-e551429bdc75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:03 np0005486808 nova_compute[259627]: 2025-10-14 09:14:03.692 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433243.6916466, d5de3978-2377-4d8e-aeaf-c952912130a2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:14:03 np0005486808 nova_compute[259627]: 2025-10-14 09:14:03.693 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] VM Started (Lifecycle Event)#033[00m
Oct 14 05:14:03 np0005486808 nova_compute[259627]: 2025-10-14 09:14:03.706 2 DEBUG oslo_concurrency.processutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2534f8b9-e832-4b78-ada4-e551429bdc75/disk.config 2534f8b9-e832-4b78-ada4-e551429bdc75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:03 np0005486808 nova_compute[259627]: 2025-10-14 09:14:03.706 2 INFO nova.virt.libvirt.driver [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Deleting local config drive /var/lib/nova/instances/2534f8b9-e832-4b78-ada4-e551429bdc75/disk.config because it was imported into RBD.#033[00m
Oct 14 05:14:03 np0005486808 nova_compute[259627]: 2025-10-14 09:14:03.730 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:14:03 np0005486808 nova_compute[259627]: 2025-10-14 09:14:03.735 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433243.6934407, d5de3978-2377-4d8e-aeaf-c952912130a2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:14:03 np0005486808 nova_compute[259627]: 2025-10-14 09:14:03.735 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:14:03 np0005486808 kernel: tap4f827284-f3: entered promiscuous mode
Oct 14 05:14:03 np0005486808 NetworkManager[44885]: <info>  [1760433243.7474] manager: (tap4f827284-f3): new Tun device (/org/freedesktop/NetworkManager/Devices/415)
Oct 14 05:14:03 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:03Z|01007|binding|INFO|Claiming lport 4f827284-f357-43c5-bdde-c69731b52914 for this chassis.
Oct 14 05:14:03 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:03Z|01008|binding|INFO|4f827284-f357-43c5-bdde-c69731b52914: Claiming fa:16:3e:8b:d7:f7 10.100.0.7
Oct 14 05:14:03 np0005486808 nova_compute[259627]: 2025-10-14 09:14:03.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:03 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:03.757 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:d7:f7 10.100.0.7'], port_security=['fa:16:3e:8b:d7:f7 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2534f8b9-e832-4b78-ada4-e551429bdc75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '517aafb84156407c8672042097e3ef4f', 'neutron:revision_number': '7', 'neutron:security_group_ids': '572acc55-453a-444a-ab8d-a15e14283f7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=927296e1-b389-4596-b9be-8cf735b93ca2, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=4f827284-f357-43c5-bdde-c69731b52914) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:14:03 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:03.758 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 4f827284-f357-43c5-bdde-c69731b52914 in datapath a49b41b4-2559-4a22-a274-a6c7bbe75f2c bound to our chassis#033[00m
Oct 14 05:14:03 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:03.760 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a49b41b4-2559-4a22-a274-a6c7bbe75f2c#033[00m
Oct 14 05:14:03 np0005486808 nova_compute[259627]: 2025-10-14 09:14:03.763 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:14:03 np0005486808 NetworkManager[44885]: <info>  [1760433243.7687] device (tap4f827284-f3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:14:03 np0005486808 NetworkManager[44885]: <info>  [1760433243.7706] device (tap4f827284-f3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:14:03 np0005486808 nova_compute[259627]: 2025-10-14 09:14:03.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:03 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:03.775 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e7879cc1-35b1-4209-a0d8-d7605d766547]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:03 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:03.776 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa49b41b4-21 in ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:14:03 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:03.779 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa49b41b4-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:14:03 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:03.779 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[48a1a8cc-ce1a-4e4e-8287-562b0ee459da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:03 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:03.780 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[40ad56f6-f4dd-4be3-807d-043112a29b54]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:03 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:03Z|01009|binding|INFO|Setting lport 4f827284-f357-43c5-bdde-c69731b52914 ovn-installed in OVS
Oct 14 05:14:03 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:03Z|01010|binding|INFO|Setting lport 4f827284-f357-43c5-bdde-c69731b52914 up in Southbound
Oct 14 05:14:03 np0005486808 nova_compute[259627]: 2025-10-14 09:14:03.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:03 np0005486808 nova_compute[259627]: 2025-10-14 09:14:03.783 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:14:03 np0005486808 nova_compute[259627]: 2025-10-14 09:14:03.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:03 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:03.791 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[de114988-f56f-45b2-9835-4f8016d3366c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:03 np0005486808 systemd-machined[214636]: New machine qemu-120-instance-00000056.
Oct 14 05:14:03 np0005486808 nova_compute[259627]: 2025-10-14 09:14:03.802 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:14:03 np0005486808 systemd[1]: Started Virtual Machine qemu-120-instance-00000056.
Oct 14 05:14:03 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:03.817 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[51c03d05-96ba-4a98-8f2a-1b1b499714f3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:03 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:03.861 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[38747671-4168-4359-8036-4997daee741c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:03 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:03.866 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4c824d5e-5abe-4c68-b0c8-714e6578af82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:03 np0005486808 NetworkManager[44885]: <info>  [1760433243.8686] manager: (tapa49b41b4-20): new Veth device (/org/freedesktop/NetworkManager/Devices/416)
Oct 14 05:14:03 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:03.922 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c8c0743b-a168-4f33-981c-24b6188df4ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:03 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:03.926 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[bf41a509-fc5d-40d1-abe5-94417e39e6db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:03 np0005486808 NetworkManager[44885]: <info>  [1760433243.9561] device (tapa49b41b4-20): carrier: link connected
Oct 14 05:14:03 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1792: 305 pgs: 305 active+clean; 289 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 158 op/s
Oct 14 05:14:03 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:03.959 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d3c7b86e-e25b-42fc-b868-4c6165ad56f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:03 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:03.990 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9c1b17bc-b3b9-492e-8c45-3f6aeef33d0d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa49b41b4-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:5b:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 293], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 702271, 'reachable_time': 37208, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355874, 'error': None, 'target': 'ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:04.009 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[42abb07e-b393-4367-bfd6-b4dcab049c7b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1d:5b6b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 702271, 'tstamp': 702271}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355875, 'error': None, 'target': 'ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:04.030 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9594c57a-de51-4301-a37b-006cbd5bf938]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa49b41b4-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:5b:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 293], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 702271, 'reachable_time': 37208, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 355876, 'error': None, 'target': 'ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:04.072 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d72c3885-15a5-4429-b9d8-ccbcf064f0db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:04.162 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[edfe9334-3e35-437d-aeb6-62e4f98e8aad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:04.164 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa49b41b4-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:04.165 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:14:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:04.167 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa49b41b4-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:04 np0005486808 nova_compute[259627]: 2025-10-14 09:14:04.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:04 np0005486808 kernel: tapa49b41b4-20: entered promiscuous mode
Oct 14 05:14:04 np0005486808 NetworkManager[44885]: <info>  [1760433244.1702] manager: (tapa49b41b4-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/417)
Oct 14 05:14:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:04.176 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa49b41b4-20, col_values=(('external_ids', {'iface-id': '61fe5571-a8eb-446a-8c4c-1f6f6758b146'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:04 np0005486808 nova_compute[259627]: 2025-10-14 09:14:04.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:04 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:04Z|01011|binding|INFO|Releasing lport 61fe5571-a8eb-446a-8c4c-1f6f6758b146 from this chassis (sb_readonly=0)
Oct 14 05:14:04 np0005486808 nova_compute[259627]: 2025-10-14 09:14:04.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:04.181 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a49b41b4-2559-4a22-a274-a6c7bbe75f2c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a49b41b4-2559-4a22-a274-a6c7bbe75f2c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:14:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:04.182 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[73b16dd4-b5f9-485c-b43e-26cef6ac7526]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:04.183 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:14:04 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:14:04 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:14:04 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-a49b41b4-2559-4a22-a274-a6c7bbe75f2c
Oct 14 05:14:04 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:14:04 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:14:04 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:14:04 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/a49b41b4-2559-4a22-a274-a6c7bbe75f2c.pid.haproxy
Oct 14 05:14:04 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:14:04 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:14:04 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:14:04 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:14:04 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:14:04 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:14:04 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:14:04 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:14:04 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:14:04 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:14:04 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:14:04 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:14:04 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:14:04 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:14:04 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:14:04 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:14:04 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:14:04 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:14:04 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:14:04 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:14:04 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID a49b41b4-2559-4a22-a274-a6c7bbe75f2c
Oct 14 05:14:04 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:14:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:04.189 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'env', 'PROCESS_TAG=haproxy-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a49b41b4-2559-4a22-a274-a6c7bbe75f2c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:14:04 np0005486808 nova_compute[259627]: 2025-10-14 09:14:04.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:04 np0005486808 nova_compute[259627]: 2025-10-14 09:14:04.666 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433244.665723, 2534f8b9-e832-4b78-ada4-e551429bdc75 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:14:04 np0005486808 nova_compute[259627]: 2025-10-14 09:14:04.666 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] VM Started (Lifecycle Event)#033[00m
Oct 14 05:14:04 np0005486808 podman[355950]: 2025-10-14 09:14:04.677543427 +0000 UTC m=+0.064090310 container create ed73332abdf5a657180e7768a69336cecfef6ef0eb1db51e4313923b3563ff62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 05:14:04 np0005486808 nova_compute[259627]: 2025-10-14 09:14:04.685 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:14:04 np0005486808 nova_compute[259627]: 2025-10-14 09:14:04.689 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433244.66791, 2534f8b9-e832-4b78-ada4-e551429bdc75 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:14:04 np0005486808 nova_compute[259627]: 2025-10-14 09:14:04.689 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:14:04 np0005486808 nova_compute[259627]: 2025-10-14 09:14:04.705 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:14:04 np0005486808 nova_compute[259627]: 2025-10-14 09:14:04.708 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:14:04 np0005486808 systemd[1]: Started libpod-conmon-ed73332abdf5a657180e7768a69336cecfef6ef0eb1db51e4313923b3563ff62.scope.
Oct 14 05:14:04 np0005486808 nova_compute[259627]: 2025-10-14 09:14:04.726 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:14:04 np0005486808 podman[355950]: 2025-10-14 09:14:04.651481005 +0000 UTC m=+0.038027908 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:14:04 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:14:04 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b6024105aa89e38c50917ad3e414821ec8f7b4c448c18cb2e7568bb5ccc24ae/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:14:04 np0005486808 podman[355950]: 2025-10-14 09:14:04.775072801 +0000 UTC m=+0.161619684 container init ed73332abdf5a657180e7768a69336cecfef6ef0eb1db51e4313923b3563ff62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:14:04 np0005486808 podman[355950]: 2025-10-14 09:14:04.780373642 +0000 UTC m=+0.166920525 container start ed73332abdf5a657180e7768a69336cecfef6ef0eb1db51e4313923b3563ff62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009)
Oct 14 05:14:04 np0005486808 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[355965]: [NOTICE]   (355969) : New worker (355971) forked
Oct 14 05:14:04 np0005486808 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[355965]: [NOTICE]   (355969) : Loading success.
Oct 14 05:14:05 np0005486808 nova_compute[259627]: 2025-10-14 09:14:05.206 2 DEBUG nova.compute.manager [req-ecb14e5c-fea7-4746-96dc-2a82a4487cfb req-82ad344e-1680-464d-9159-a1e5c8c8b540 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received event network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:05 np0005486808 nova_compute[259627]: 2025-10-14 09:14:05.206 2 DEBUG oslo_concurrency.lockutils [req-ecb14e5c-fea7-4746-96dc-2a82a4487cfb req-82ad344e-1680-464d-9159-a1e5c8c8b540 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:05 np0005486808 nova_compute[259627]: 2025-10-14 09:14:05.206 2 DEBUG oslo_concurrency.lockutils [req-ecb14e5c-fea7-4746-96dc-2a82a4487cfb req-82ad344e-1680-464d-9159-a1e5c8c8b540 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:05 np0005486808 nova_compute[259627]: 2025-10-14 09:14:05.207 2 DEBUG oslo_concurrency.lockutils [req-ecb14e5c-fea7-4746-96dc-2a82a4487cfb req-82ad344e-1680-464d-9159-a1e5c8c8b540 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:05 np0005486808 nova_compute[259627]: 2025-10-14 09:14:05.207 2 DEBUG nova.compute.manager [req-ecb14e5c-fea7-4746-96dc-2a82a4487cfb req-82ad344e-1680-464d-9159-a1e5c8c8b540 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Processing event network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:14:05 np0005486808 nova_compute[259627]: 2025-10-14 09:14:05.207 2 DEBUG nova.compute.manager [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:14:05 np0005486808 nova_compute[259627]: 2025-10-14 09:14:05.212 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433245.2121453, 2534f8b9-e832-4b78-ada4-e551429bdc75 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:14:05 np0005486808 nova_compute[259627]: 2025-10-14 09:14:05.212 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:14:05 np0005486808 nova_compute[259627]: 2025-10-14 09:14:05.213 2 DEBUG nova.virt.libvirt.driver [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:14:05 np0005486808 nova_compute[259627]: 2025-10-14 09:14:05.217 2 INFO nova.virt.libvirt.driver [-] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Instance spawned successfully.#033[00m
Oct 14 05:14:05 np0005486808 nova_compute[259627]: 2025-10-14 09:14:05.232 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:14:05 np0005486808 nova_compute[259627]: 2025-10-14 09:14:05.236 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:14:05 np0005486808 nova_compute[259627]: 2025-10-14 09:14:05.255 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:14:05 np0005486808 nova_compute[259627]: 2025-10-14 09:14:05.410 2 DEBUG oslo_concurrency.lockutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Acquiring lock "84080e43-a9f4-4b6a-889f-d76167ff715a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:05 np0005486808 nova_compute[259627]: 2025-10-14 09:14:05.411 2 DEBUG oslo_concurrency.lockutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Lock "84080e43-a9f4-4b6a-889f-d76167ff715a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:05 np0005486808 nova_compute[259627]: 2025-10-14 09:14:05.425 2 DEBUG nova.compute.manager [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:14:05 np0005486808 nova_compute[259627]: 2025-10-14 09:14:05.492 2 DEBUG oslo_concurrency.lockutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:05 np0005486808 nova_compute[259627]: 2025-10-14 09:14:05.492 2 DEBUG oslo_concurrency.lockutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:05 np0005486808 nova_compute[259627]: 2025-10-14 09:14:05.499 2 DEBUG nova.virt.hardware [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:14:05 np0005486808 nova_compute[259627]: 2025-10-14 09:14:05.499 2 INFO nova.compute.claims [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:14:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e256 do_prune osdmap full prune enabled
Oct 14 05:14:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e257 e257: 3 total, 3 up, 3 in
Oct 14 05:14:05 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e257: 3 total, 3 up, 3 in
Oct 14 05:14:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:14:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3296468083' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:14:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:14:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3296468083' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:14:05 np0005486808 nova_compute[259627]: 2025-10-14 09:14:05.669 2 DEBUG oslo_concurrency.processutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:05 np0005486808 nova_compute[259627]: 2025-10-14 09:14:05.900 2 DEBUG nova.compute.manager [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:14:05 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1794: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 372 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 9.4 MiB/s wr, 314 op/s
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.004 2 DEBUG oslo_concurrency.lockutils [None req-0f18a07b-b30d-4239-8310-0850db34067d 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 10.429s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:14:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/763446376' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.220 2 DEBUG oslo_concurrency.processutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.228 2 DEBUG nova.compute.provider_tree [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.250 2 DEBUG nova.scheduler.client.report [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.277 2 DEBUG oslo_concurrency.lockutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.785s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.278 2 DEBUG nova.compute.manager [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.322 2 DEBUG nova.compute.manager [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.323 2 DEBUG nova.network.neutron [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.354 2 INFO nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.376 2 DEBUG nova.compute.manager [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.427 2 DEBUG nova.compute.manager [req-cb458d2b-757e-42f2-919b-df62bc45631d req-1f8493cf-3da6-4493-865c-2c00c4e67ec7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Received event network-vif-plugged-dc177f85-b331-40ec-b30f-1f667878bcb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.428 2 DEBUG oslo_concurrency.lockutils [req-cb458d2b-757e-42f2-919b-df62bc45631d req-1f8493cf-3da6-4493-865c-2c00c4e67ec7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "d5de3978-2377-4d8e-aeaf-c952912130a2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.428 2 DEBUG oslo_concurrency.lockutils [req-cb458d2b-757e-42f2-919b-df62bc45631d req-1f8493cf-3da6-4493-865c-2c00c4e67ec7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d5de3978-2377-4d8e-aeaf-c952912130a2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.429 2 DEBUG oslo_concurrency.lockutils [req-cb458d2b-757e-42f2-919b-df62bc45631d req-1f8493cf-3da6-4493-865c-2c00c4e67ec7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d5de3978-2377-4d8e-aeaf-c952912130a2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.429 2 DEBUG nova.compute.manager [req-cb458d2b-757e-42f2-919b-df62bc45631d req-1f8493cf-3da6-4493-865c-2c00c4e67ec7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Processing event network-vif-plugged-dc177f85-b331-40ec-b30f-1f667878bcb8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.430 2 DEBUG nova.compute.manager [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.443 2 DEBUG nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.444 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433246.4423149, d5de3978-2377-4d8e-aeaf-c952912130a2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.444 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.450 2 INFO nova.virt.libvirt.driver [-] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Instance spawned successfully.#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.451 2 DEBUG nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.482 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.491 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.496 2 DEBUG nova.compute.manager [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.497 2 DEBUG nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.497 2 INFO nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Creating image(s)#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.524 2 DEBUG nova.storage.rbd_utils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] rbd image 84080e43-a9f4-4b6a-889f-d76167ff715a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.563 2 DEBUG nova.storage.rbd_utils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] rbd image 84080e43-a9f4-4b6a-889f-d76167ff715a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.591 2 DEBUG nova.storage.rbd_utils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] rbd image 84080e43-a9f4-4b6a-889f-d76167ff715a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.596 2 DEBUG oslo_concurrency.processutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.647 2 DEBUG nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.648 2 DEBUG nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.649 2 DEBUG nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.649 2 DEBUG nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.650 2 DEBUG nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.650 2 DEBUG nova.virt.libvirt.driver [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.656 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.678 2 DEBUG oslo_concurrency.processutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.679 2 DEBUG oslo_concurrency.lockutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.680 2 DEBUG oslo_concurrency.lockutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.681 2 DEBUG oslo_concurrency.lockutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.706 2 DEBUG nova.storage.rbd_utils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] rbd image 84080e43-a9f4-4b6a-889f-d76167ff715a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.711 2 DEBUG oslo_concurrency.processutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 84080e43-a9f4-4b6a-889f-d76167ff715a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.755 2 INFO nova.compute.manager [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Took 10.69 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.757 2 DEBUG nova.compute.manager [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.857 2 INFO nova.compute.manager [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Took 11.69 seconds to build instance.#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.880 2 DEBUG oslo_concurrency.lockutils [None req-f36399a9-b83d-44da-9f43-db9ccf8ce61a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "d5de3978-2377-4d8e-aeaf-c952912130a2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:06 np0005486808 nova_compute[259627]: 2025-10-14 09:14:06.997 2 DEBUG oslo_concurrency.processutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 84080e43-a9f4-4b6a-889f-d76167ff715a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.286s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:07.030 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:07.031 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:07.031 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:07 np0005486808 nova_compute[259627]: 2025-10-14 09:14:07.070 2 DEBUG nova.storage.rbd_utils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] resizing rbd image 84080e43-a9f4-4b6a-889f-d76167ff715a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:14:07 np0005486808 nova_compute[259627]: 2025-10-14 09:14:07.190 2 DEBUG nova.objects.instance [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Lazy-loading 'migration_context' on Instance uuid 84080e43-a9f4-4b6a-889f-d76167ff715a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:14:07 np0005486808 nova_compute[259627]: 2025-10-14 09:14:07.209 2 DEBUG nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:14:07 np0005486808 nova_compute[259627]: 2025-10-14 09:14:07.209 2 DEBUG nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Ensure instance console log exists: /var/lib/nova/instances/84080e43-a9f4-4b6a-889f-d76167ff715a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:14:07 np0005486808 nova_compute[259627]: 2025-10-14 09:14:07.210 2 DEBUG oslo_concurrency.lockutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:07 np0005486808 nova_compute[259627]: 2025-10-14 09:14:07.210 2 DEBUG oslo_concurrency.lockutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:07 np0005486808 nova_compute[259627]: 2025-10-14 09:14:07.210 2 DEBUG oslo_concurrency.lockutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:07 np0005486808 nova_compute[259627]: 2025-10-14 09:14:07.249 2 DEBUG nova.policy [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '559a8ea4f81141efa5e11da9b174482d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2de05cc126e24608be28a7d5dea18bf3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:14:07 np0005486808 nova_compute[259627]: 2025-10-14 09:14:07.382 2 DEBUG nova.compute.manager [req-d1a78b19-e037-4df1-9cb9-04c00c9f4d7f req-fe87dffc-5dc5-4674-8b85-684fccaeecd3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received event network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:07 np0005486808 nova_compute[259627]: 2025-10-14 09:14:07.382 2 DEBUG oslo_concurrency.lockutils [req-d1a78b19-e037-4df1-9cb9-04c00c9f4d7f req-fe87dffc-5dc5-4674-8b85-684fccaeecd3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:07 np0005486808 nova_compute[259627]: 2025-10-14 09:14:07.383 2 DEBUG oslo_concurrency.lockutils [req-d1a78b19-e037-4df1-9cb9-04c00c9f4d7f req-fe87dffc-5dc5-4674-8b85-684fccaeecd3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:07 np0005486808 nova_compute[259627]: 2025-10-14 09:14:07.383 2 DEBUG oslo_concurrency.lockutils [req-d1a78b19-e037-4df1-9cb9-04c00c9f4d7f req-fe87dffc-5dc5-4674-8b85-684fccaeecd3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:07 np0005486808 nova_compute[259627]: 2025-10-14 09:14:07.383 2 DEBUG nova.compute.manager [req-d1a78b19-e037-4df1-9cb9-04c00c9f4d7f req-fe87dffc-5dc5-4674-8b85-684fccaeecd3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] No waiting events found dispatching network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:14:07 np0005486808 nova_compute[259627]: 2025-10-14 09:14:07.383 2 WARNING nova.compute.manager [req-d1a78b19-e037-4df1-9cb9-04c00c9f4d7f req-fe87dffc-5dc5-4674-8b85-684fccaeecd3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received unexpected event network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:14:07 np0005486808 nova_compute[259627]: 2025-10-14 09:14:07.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:14:07 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1795: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 372 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 9.4 MiB/s wr, 314 op/s
Oct 14 05:14:07 np0005486808 nova_compute[259627]: 2025-10-14 09:14:07.965 2 DEBUG nova.network.neutron [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Successfully created port: 1e3d49fe-52bd-40cb-ae1a-86eb664df473 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:14:08 np0005486808 nova_compute[259627]: 2025-10-14 09:14:08.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:08 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:08Z|00100|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9d:3c:de 10.100.0.5
Oct 14 05:14:08 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:08Z|00101|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9d:3c:de 10.100.0.5
Oct 14 05:14:09 np0005486808 nova_compute[259627]: 2025-10-14 09:14:09.769 2 DEBUG nova.network.neutron [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Successfully updated port: 1e3d49fe-52bd-40cb-ae1a-86eb664df473 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:14:09 np0005486808 nova_compute[259627]: 2025-10-14 09:14:09.776 2 DEBUG nova.compute.manager [req-53b7f0c4-88a2-4d4b-b459-4856291cd060 req-8620bd54-900a-417a-98a0-1274568fa9ec 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Received event network-vif-plugged-dc177f85-b331-40ec-b30f-1f667878bcb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:09 np0005486808 nova_compute[259627]: 2025-10-14 09:14:09.776 2 DEBUG oslo_concurrency.lockutils [req-53b7f0c4-88a2-4d4b-b459-4856291cd060 req-8620bd54-900a-417a-98a0-1274568fa9ec 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "d5de3978-2377-4d8e-aeaf-c952912130a2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:09 np0005486808 nova_compute[259627]: 2025-10-14 09:14:09.777 2 DEBUG oslo_concurrency.lockutils [req-53b7f0c4-88a2-4d4b-b459-4856291cd060 req-8620bd54-900a-417a-98a0-1274568fa9ec 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d5de3978-2377-4d8e-aeaf-c952912130a2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:09 np0005486808 nova_compute[259627]: 2025-10-14 09:14:09.777 2 DEBUG oslo_concurrency.lockutils [req-53b7f0c4-88a2-4d4b-b459-4856291cd060 req-8620bd54-900a-417a-98a0-1274568fa9ec 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d5de3978-2377-4d8e-aeaf-c952912130a2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:09 np0005486808 nova_compute[259627]: 2025-10-14 09:14:09.778 2 DEBUG nova.compute.manager [req-53b7f0c4-88a2-4d4b-b459-4856291cd060 req-8620bd54-900a-417a-98a0-1274568fa9ec 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] No waiting events found dispatching network-vif-plugged-dc177f85-b331-40ec-b30f-1f667878bcb8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:14:09 np0005486808 nova_compute[259627]: 2025-10-14 09:14:09.778 2 WARNING nova.compute.manager [req-53b7f0c4-88a2-4d4b-b459-4856291cd060 req-8620bd54-900a-417a-98a0-1274568fa9ec 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Received unexpected event network-vif-plugged-dc177f85-b331-40ec-b30f-1f667878bcb8 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:14:09 np0005486808 nova_compute[259627]: 2025-10-14 09:14:09.787 2 DEBUG oslo_concurrency.lockutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Acquiring lock "refresh_cache-84080e43-a9f4-4b6a-889f-d76167ff715a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:14:09 np0005486808 nova_compute[259627]: 2025-10-14 09:14:09.787 2 DEBUG oslo_concurrency.lockutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Acquired lock "refresh_cache-84080e43-a9f4-4b6a-889f-d76167ff715a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:14:09 np0005486808 nova_compute[259627]: 2025-10-14 09:14:09.788 2 DEBUG nova.network.neutron [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:14:09 np0005486808 nova_compute[259627]: 2025-10-14 09:14:09.919 2 DEBUG nova.network.neutron [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:14:09 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1796: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 372 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 9.4 MiB/s wr, 314 op/s
Oct 14 05:14:10 np0005486808 nova_compute[259627]: 2025-10-14 09:14:10.174 2 DEBUG nova.compute.manager [req-a00b9d15-59a8-43b8-8b81-2311516d2669 req-85d7d570-3f8f-4031-8c8d-85cc0ce6dc0c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Received event network-changed-1e3d49fe-52bd-40cb-ae1a-86eb664df473 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:10 np0005486808 nova_compute[259627]: 2025-10-14 09:14:10.174 2 DEBUG nova.compute.manager [req-a00b9d15-59a8-43b8-8b81-2311516d2669 req-85d7d570-3f8f-4031-8c8d-85cc0ce6dc0c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Refreshing instance network info cache due to event network-changed-1e3d49fe-52bd-40cb-ae1a-86eb664df473. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:14:10 np0005486808 nova_compute[259627]: 2025-10-14 09:14:10.175 2 DEBUG oslo_concurrency.lockutils [req-a00b9d15-59a8-43b8-8b81-2311516d2669 req-85d7d570-3f8f-4031-8c8d-85cc0ce6dc0c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-84080e43-a9f4-4b6a-889f-d76167ff715a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:14:10 np0005486808 nova_compute[259627]: 2025-10-14 09:14:10.701 2 DEBUG nova.network.neutron [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Updating instance_info_cache with network_info: [{"id": "1e3d49fe-52bd-40cb-ae1a-86eb664df473", "address": "fa:16:3e:2d:6b:bc", "network": {"id": "8110a1ba-8e30-49ef-ba1c-c72228086a20", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-885916834-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de05cc126e24608be28a7d5dea18bf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e3d49fe-52", "ovs_interfaceid": "1e3d49fe-52bd-40cb-ae1a-86eb664df473", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:14:10 np0005486808 nova_compute[259627]: 2025-10-14 09:14:10.728 2 DEBUG oslo_concurrency.lockutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Releasing lock "refresh_cache-84080e43-a9f4-4b6a-889f-d76167ff715a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:14:10 np0005486808 nova_compute[259627]: 2025-10-14 09:14:10.729 2 DEBUG nova.compute.manager [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Instance network_info: |[{"id": "1e3d49fe-52bd-40cb-ae1a-86eb664df473", "address": "fa:16:3e:2d:6b:bc", "network": {"id": "8110a1ba-8e30-49ef-ba1c-c72228086a20", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-885916834-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de05cc126e24608be28a7d5dea18bf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e3d49fe-52", "ovs_interfaceid": "1e3d49fe-52bd-40cb-ae1a-86eb664df473", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:14:10 np0005486808 nova_compute[259627]: 2025-10-14 09:14:10.729 2 DEBUG oslo_concurrency.lockutils [req-a00b9d15-59a8-43b8-8b81-2311516d2669 req-85d7d570-3f8f-4031-8c8d-85cc0ce6dc0c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-84080e43-a9f4-4b6a-889f-d76167ff715a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:14:10 np0005486808 nova_compute[259627]: 2025-10-14 09:14:10.730 2 DEBUG nova.network.neutron [req-a00b9d15-59a8-43b8-8b81-2311516d2669 req-85d7d570-3f8f-4031-8c8d-85cc0ce6dc0c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Refreshing network info cache for port 1e3d49fe-52bd-40cb-ae1a-86eb664df473 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:14:10 np0005486808 nova_compute[259627]: 2025-10-14 09:14:10.735 2 DEBUG nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Start _get_guest_xml network_info=[{"id": "1e3d49fe-52bd-40cb-ae1a-86eb664df473", "address": "fa:16:3e:2d:6b:bc", "network": {"id": "8110a1ba-8e30-49ef-ba1c-c72228086a20", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-885916834-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de05cc126e24608be28a7d5dea18bf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e3d49fe-52", "ovs_interfaceid": "1e3d49fe-52bd-40cb-ae1a-86eb664df473", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:14:10 np0005486808 nova_compute[259627]: 2025-10-14 09:14:10.742 2 WARNING nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:14:10 np0005486808 nova_compute[259627]: 2025-10-14 09:14:10.749 2 DEBUG nova.virt.libvirt.host [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:14:10 np0005486808 nova_compute[259627]: 2025-10-14 09:14:10.750 2 DEBUG nova.virt.libvirt.host [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:14:10 np0005486808 nova_compute[259627]: 2025-10-14 09:14:10.754 2 DEBUG nova.virt.libvirt.host [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:14:10 np0005486808 nova_compute[259627]: 2025-10-14 09:14:10.755 2 DEBUG nova.virt.libvirt.host [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:14:10 np0005486808 nova_compute[259627]: 2025-10-14 09:14:10.755 2 DEBUG nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:14:10 np0005486808 nova_compute[259627]: 2025-10-14 09:14:10.756 2 DEBUG nova.virt.hardware [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:14:10 np0005486808 nova_compute[259627]: 2025-10-14 09:14:10.756 2 DEBUG nova.virt.hardware [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:14:10 np0005486808 nova_compute[259627]: 2025-10-14 09:14:10.757 2 DEBUG nova.virt.hardware [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:14:10 np0005486808 nova_compute[259627]: 2025-10-14 09:14:10.758 2 DEBUG nova.virt.hardware [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:14:10 np0005486808 nova_compute[259627]: 2025-10-14 09:14:10.758 2 DEBUG nova.virt.hardware [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:14:10 np0005486808 nova_compute[259627]: 2025-10-14 09:14:10.759 2 DEBUG nova.virt.hardware [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:14:10 np0005486808 nova_compute[259627]: 2025-10-14 09:14:10.759 2 DEBUG nova.virt.hardware [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:14:10 np0005486808 nova_compute[259627]: 2025-10-14 09:14:10.760 2 DEBUG nova.virt.hardware [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:14:10 np0005486808 nova_compute[259627]: 2025-10-14 09:14:10.760 2 DEBUG nova.virt.hardware [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:14:10 np0005486808 nova_compute[259627]: 2025-10-14 09:14:10.760 2 DEBUG nova.virt.hardware [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:14:10 np0005486808 nova_compute[259627]: 2025-10-14 09:14:10.761 2 DEBUG nova.virt.hardware [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:14:10 np0005486808 nova_compute[259627]: 2025-10-14 09:14:10.766 2 DEBUG oslo_concurrency.processutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:10 np0005486808 nova_compute[259627]: 2025-10-14 09:14:10.899 2 DEBUG oslo_concurrency.lockutils [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "d5de3978-2377-4d8e-aeaf-c952912130a2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:10 np0005486808 nova_compute[259627]: 2025-10-14 09:14:10.900 2 DEBUG oslo_concurrency.lockutils [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "d5de3978-2377-4d8e-aeaf-c952912130a2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:10 np0005486808 nova_compute[259627]: 2025-10-14 09:14:10.900 2 DEBUG oslo_concurrency.lockutils [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "d5de3978-2377-4d8e-aeaf-c952912130a2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:10 np0005486808 nova_compute[259627]: 2025-10-14 09:14:10.900 2 DEBUG oslo_concurrency.lockutils [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "d5de3978-2377-4d8e-aeaf-c952912130a2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:10 np0005486808 nova_compute[259627]: 2025-10-14 09:14:10.901 2 DEBUG oslo_concurrency.lockutils [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "d5de3978-2377-4d8e-aeaf-c952912130a2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:10 np0005486808 nova_compute[259627]: 2025-10-14 09:14:10.902 2 INFO nova.compute.manager [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Terminating instance#033[00m
Oct 14 05:14:10 np0005486808 nova_compute[259627]: 2025-10-14 09:14:10.903 2 DEBUG nova.compute.manager [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:14:10 np0005486808 kernel: tapdc177f85-b3 (unregistering): left promiscuous mode
Oct 14 05:14:10 np0005486808 NetworkManager[44885]: <info>  [1760433250.9542] device (tapdc177f85-b3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:14:10 np0005486808 nova_compute[259627]: 2025-10-14 09:14:10.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:10 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:10Z|01012|binding|INFO|Releasing lport dc177f85-b331-40ec-b30f-1f667878bcb8 from this chassis (sb_readonly=0)
Oct 14 05:14:10 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:10Z|01013|binding|INFO|Setting lport dc177f85-b331-40ec-b30f-1f667878bcb8 down in Southbound
Oct 14 05:14:10 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:10Z|01014|binding|INFO|Removing iface tapdc177f85-b3 ovn-installed in OVS
Oct 14 05:14:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:10.978 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:28:0f 10.100.0.14'], port_security=['fa:16:3e:6a:28:0f 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'd5de3978-2377-4d8e-aeaf-c952912130a2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbecee11-4892-4e36-88d8-98879af7bb1e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a080fae2f3c4e39a6cca225203f5ec6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '03d7ceab-aac8-44ec-88a3-f0ef79d050f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d77764d-de2c-4408-8882-77d03cb7288a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=dc177f85-b331-40ec-b30f-1f667878bcb8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:14:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:10.981 162547 INFO neutron.agent.ovn.metadata.agent [-] Port dc177f85-b331-40ec-b30f-1f667878bcb8 in datapath fbecee11-4892-4e36-88d8-98879af7bb1e unbound from our chassis#033[00m
Oct 14 05:14:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:10.982 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbecee11-4892-4e36-88d8-98879af7bb1e#033[00m
Oct 14 05:14:10 np0005486808 nova_compute[259627]: 2025-10-14 09:14:10.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:11.003 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[40ec4e19-2a3a-44bd-afdf-2fea9fa1806e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:11 np0005486808 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d00000061.scope: Deactivated successfully.
Oct 14 05:14:11 np0005486808 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d00000061.scope: Consumed 5.588s CPU time.
Oct 14 05:14:11 np0005486808 systemd-machined[214636]: Machine qemu-119-instance-00000061 terminated.
Oct 14 05:14:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:11.037 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ec475d74-8542-47ae-bce3-06e36ca28c91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:11.040 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[13714929-19cc-4c04-8203-3495f668aeb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:11.063 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[cb55f6c0-c97b-401f-86c4-ba33ff21cdf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:11.088 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[51db7615-6552-4916-864c-9d677bf9d60b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbecee11-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:b3:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 700079, 'reachable_time': 38382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356199, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:11.110 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e593c83c-389b-4287-ab43-511d4b4533f2]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700095, 'tstamp': 700095}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356200, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700099, 'tstamp': 700099}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356200, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:11.115 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbecee11-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:11.123 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbecee11-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:11.124 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:14:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:11.124 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbecee11-40, col_values=(('external_ids', {'iface-id': 'b1a95e67-ca39-4a56-9a91-3165df166ea9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:11.125 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.149 2 INFO nova.virt.libvirt.driver [-] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Instance destroyed successfully.#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.150 2 DEBUG nova.objects.instance [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'resources' on Instance uuid d5de3978-2377-4d8e-aeaf-c952912130a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.163 2 DEBUG nova.virt.libvirt.vif [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:13:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1227005018',display_name='tempest-ServersTestJSON-server-1227005018',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1227005018',id=97,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:14:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0a080fae2f3c4e39a6cca225203f5ec6',ramdisk_id='',reservation_id='r-0z0rtrem',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',im
age_min_ram='0',owner_project_name='tempest-ServersTestJSON-2060951674',owner_user_name='tempest-ServersTestJSON-2060951674-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:14:06Z,user_data=None,user_id='a287ef08fc5c4f218bf06cd2c7ed021e',uuid=d5de3978-2377-4d8e-aeaf-c952912130a2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dc177f85-b331-40ec-b30f-1f667878bcb8", "address": "fa:16:3e:6a:28:0f", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc177f85-b3", "ovs_interfaceid": "dc177f85-b331-40ec-b30f-1f667878bcb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.163 2 DEBUG nova.network.os_vif_util [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converting VIF {"id": "dc177f85-b331-40ec-b30f-1f667878bcb8", "address": "fa:16:3e:6a:28:0f", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc177f85-b3", "ovs_interfaceid": "dc177f85-b331-40ec-b30f-1f667878bcb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.164 2 DEBUG nova.network.os_vif_util [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:28:0f,bridge_name='br-int',has_traffic_filtering=True,id=dc177f85-b331-40ec-b30f-1f667878bcb8,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc177f85-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.165 2 DEBUG os_vif [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:28:0f,bridge_name='br-int',has_traffic_filtering=True,id=dc177f85-b331-40ec-b30f-1f667878bcb8,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc177f85-b3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.166 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdc177f85-b3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.173 2 INFO os_vif [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:28:0f,bridge_name='br-int',has_traffic_filtering=True,id=dc177f85-b331-40ec-b30f-1f667878bcb8,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc177f85-b3')#033[00m
Oct 14 05:14:11 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:14:11 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2239772539' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.252 2 DEBUG oslo_concurrency.processutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.272 2 DEBUG nova.storage.rbd_utils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] rbd image 84080e43-a9f4-4b6a-889f-d76167ff715a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.277 2 DEBUG oslo_concurrency.processutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.453 2 DEBUG nova.objects.instance [None req-f9282da8-119f-461e-93f7-f0c7d3345cc6 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lazy-loading 'pci_devices' on Instance uuid 2534f8b9-e832-4b78-ada4-e551429bdc75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.477 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433251.4770362, 2534f8b9-e832-4b78-ada4-e551429bdc75 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.478 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.503 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.515 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.542 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.579 2 INFO nova.virt.libvirt.driver [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Deleting instance files /var/lib/nova/instances/d5de3978-2377-4d8e-aeaf-c952912130a2_del#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.580 2 INFO nova.virt.libvirt.driver [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Deletion of /var/lib/nova/instances/d5de3978-2377-4d8e-aeaf-c952912130a2_del complete#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.625 2 INFO nova.compute.manager [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.626 2 DEBUG oslo.service.loopingcall [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.626 2 DEBUG nova.compute.manager [-] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.626 2 DEBUG nova.network.neutron [-] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:14:11 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:14:11 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3088650112' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.726 2 DEBUG oslo_concurrency.processutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.727 2 DEBUG nova.virt.libvirt.vif [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:14:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-250281827',display_name='tempest-ServerMetadataTestJSON-server-250281827',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-250281827',id=98,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2de05cc126e24608be28a7d5dea18bf3',ramdisk_id='',reservation_id='r-aqxl060k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-10451963',owner_user_name='tempest-ServerMetadataTestJSON-10
451963-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:14:06Z,user_data=None,user_id='559a8ea4f81141efa5e11da9b174482d',uuid=84080e43-a9f4-4b6a-889f-d76167ff715a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1e3d49fe-52bd-40cb-ae1a-86eb664df473", "address": "fa:16:3e:2d:6b:bc", "network": {"id": "8110a1ba-8e30-49ef-ba1c-c72228086a20", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-885916834-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de05cc126e24608be28a7d5dea18bf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e3d49fe-52", "ovs_interfaceid": "1e3d49fe-52bd-40cb-ae1a-86eb664df473", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.728 2 DEBUG nova.network.os_vif_util [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Converting VIF {"id": "1e3d49fe-52bd-40cb-ae1a-86eb664df473", "address": "fa:16:3e:2d:6b:bc", "network": {"id": "8110a1ba-8e30-49ef-ba1c-c72228086a20", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-885916834-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de05cc126e24608be28a7d5dea18bf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e3d49fe-52", "ovs_interfaceid": "1e3d49fe-52bd-40cb-ae1a-86eb664df473", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.729 2 DEBUG nova.network.os_vif_util [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:6b:bc,bridge_name='br-int',has_traffic_filtering=True,id=1e3d49fe-52bd-40cb-ae1a-86eb664df473,network=Network(8110a1ba-8e30-49ef-ba1c-c72228086a20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e3d49fe-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.730 2 DEBUG nova.objects.instance [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 84080e43-a9f4-4b6a-889f-d76167ff715a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.753 2 DEBUG nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:14:11 np0005486808 nova_compute[259627]:  <uuid>84080e43-a9f4-4b6a-889f-d76167ff715a</uuid>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:  <name>instance-00000062</name>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:14:11 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:      <nova:name>tempest-ServerMetadataTestJSON-server-250281827</nova:name>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:14:10</nova:creationTime>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:14:11 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:        <nova:user uuid="559a8ea4f81141efa5e11da9b174482d">tempest-ServerMetadataTestJSON-10451963-project-member</nova:user>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:        <nova:project uuid="2de05cc126e24608be28a7d5dea18bf3">tempest-ServerMetadataTestJSON-10451963</nova:project>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:        <nova:port uuid="1e3d49fe-52bd-40cb-ae1a-86eb664df473">
Oct 14 05:14:11 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:      <entry name="serial">84080e43-a9f4-4b6a-889f-d76167ff715a</entry>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:      <entry name="uuid">84080e43-a9f4-4b6a-889f-d76167ff715a</entry>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:14:11 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/84080e43-a9f4-4b6a-889f-d76167ff715a_disk">
Oct 14 05:14:11 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:14:11 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:14:11 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/84080e43-a9f4-4b6a-889f-d76167ff715a_disk.config">
Oct 14 05:14:11 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:14:11 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:14:11 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:2d:6b:bc"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:      <target dev="tap1e3d49fe-52"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:14:11 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/84080e43-a9f4-4b6a-889f-d76167ff715a/console.log" append="off"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:14:11 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:14:11 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:14:11 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:14:11 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:14:11 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.759 2 DEBUG nova.compute.manager [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Preparing to wait for external event network-vif-plugged-1e3d49fe-52bd-40cb-ae1a-86eb664df473 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.759 2 DEBUG oslo_concurrency.lockutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Acquiring lock "84080e43-a9f4-4b6a-889f-d76167ff715a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.759 2 DEBUG oslo_concurrency.lockutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Lock "84080e43-a9f4-4b6a-889f-d76167ff715a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.760 2 DEBUG oslo_concurrency.lockutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Lock "84080e43-a9f4-4b6a-889f-d76167ff715a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.760 2 DEBUG nova.virt.libvirt.vif [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:14:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-250281827',display_name='tempest-ServerMetadataTestJSON-server-250281827',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-250281827',id=98,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2de05cc126e24608be28a7d5dea18bf3',ramdisk_id='',reservation_id='r-aqxl060k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-10451963',owner_user_name='tempest-ServerMetadataT
estJSON-10451963-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:14:06Z,user_data=None,user_id='559a8ea4f81141efa5e11da9b174482d',uuid=84080e43-a9f4-4b6a-889f-d76167ff715a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1e3d49fe-52bd-40cb-ae1a-86eb664df473", "address": "fa:16:3e:2d:6b:bc", "network": {"id": "8110a1ba-8e30-49ef-ba1c-c72228086a20", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-885916834-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de05cc126e24608be28a7d5dea18bf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e3d49fe-52", "ovs_interfaceid": "1e3d49fe-52bd-40cb-ae1a-86eb664df473", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.760 2 DEBUG nova.network.os_vif_util [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Converting VIF {"id": "1e3d49fe-52bd-40cb-ae1a-86eb664df473", "address": "fa:16:3e:2d:6b:bc", "network": {"id": "8110a1ba-8e30-49ef-ba1c-c72228086a20", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-885916834-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de05cc126e24608be28a7d5dea18bf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e3d49fe-52", "ovs_interfaceid": "1e3d49fe-52bd-40cb-ae1a-86eb664df473", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.761 2 DEBUG nova.network.os_vif_util [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:6b:bc,bridge_name='br-int',has_traffic_filtering=True,id=1e3d49fe-52bd-40cb-ae1a-86eb664df473,network=Network(8110a1ba-8e30-49ef-ba1c-c72228086a20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e3d49fe-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.761 2 DEBUG os_vif [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:6b:bc,bridge_name='br-int',has_traffic_filtering=True,id=1e3d49fe-52bd-40cb-ae1a-86eb664df473,network=Network(8110a1ba-8e30-49ef-ba1c-c72228086a20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e3d49fe-52') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.762 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.762 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.765 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e3d49fe-52, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.765 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1e3d49fe-52, col_values=(('external_ids', {'iface-id': '1e3d49fe-52bd-40cb-ae1a-86eb664df473', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2d:6b:bc', 'vm-uuid': '84080e43-a9f4-4b6a-889f-d76167ff715a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:11 np0005486808 NetworkManager[44885]: <info>  [1760433251.7679] manager: (tap1e3d49fe-52): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/418)
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:11 np0005486808 kernel: tap4f827284-f3 (unregistering): left promiscuous mode
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.773 2 INFO os_vif [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:6b:bc,bridge_name='br-int',has_traffic_filtering=True,id=1e3d49fe-52bd-40cb-ae1a-86eb664df473,network=Network(8110a1ba-8e30-49ef-ba1c-c72228086a20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e3d49fe-52')#033[00m
Oct 14 05:14:11 np0005486808 NetworkManager[44885]: <info>  [1760433251.7792] device (tap4f827284-f3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:11 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:11Z|01015|binding|INFO|Releasing lport 4f827284-f357-43c5-bdde-c69731b52914 from this chassis (sb_readonly=0)
Oct 14 05:14:11 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:11Z|01016|binding|INFO|Setting lport 4f827284-f357-43c5-bdde-c69731b52914 down in Southbound
Oct 14 05:14:11 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:11Z|01017|binding|INFO|Removing iface tap4f827284-f3 ovn-installed in OVS
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:11.802 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:d7:f7 10.100.0.7'], port_security=['fa:16:3e:8b:d7:f7 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2534f8b9-e832-4b78-ada4-e551429bdc75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '517aafb84156407c8672042097e3ef4f', 'neutron:revision_number': '9', 'neutron:security_group_ids': '572acc55-453a-444a-ab8d-a15e14283f7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=927296e1-b389-4596-b9be-8cf735b93ca2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=4f827284-f357-43c5-bdde-c69731b52914) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:14:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:11.803 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 4f827284-f357-43c5-bdde-c69731b52914 in datapath a49b41b4-2559-4a22-a274-a6c7bbe75f2c unbound from our chassis#033[00m
Oct 14 05:14:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:11.804 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a49b41b4-2559-4a22-a274-a6c7bbe75f2c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:14:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:11.806 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[35373295-bae5-4891-9f74-ca31da94dcfc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:11.806 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c namespace which is not needed anymore#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.849 2 DEBUG nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:14:11 np0005486808 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d00000056.scope: Deactivated successfully.
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.850 2 DEBUG nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.851 2 DEBUG nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] No VIF found with MAC fa:16:3e:2d:6b:bc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.851 2 INFO nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Using config drive#033[00m
Oct 14 05:14:11 np0005486808 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d00000056.scope: Consumed 7.169s CPU time.
Oct 14 05:14:11 np0005486808 systemd-machined[214636]: Machine qemu-120-instance-00000056 terminated.
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.873 2 DEBUG nova.storage.rbd_utils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] rbd image 84080e43-a9f4-4b6a-889f-d76167ff715a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:11 np0005486808 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[355965]: [NOTICE]   (355969) : haproxy version is 2.8.14-c23fe91
Oct 14 05:14:11 np0005486808 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[355965]: [NOTICE]   (355969) : path to executable is /usr/sbin/haproxy
Oct 14 05:14:11 np0005486808 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[355965]: [WARNING]  (355969) : Exiting Master process...
Oct 14 05:14:11 np0005486808 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[355965]: [ALERT]    (355969) : Current worker (355971) exited with code 143 (Terminated)
Oct 14 05:14:11 np0005486808 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[355965]: [WARNING]  (355969) : All workers exited. Exiting... (0)
Oct 14 05:14:11 np0005486808 systemd[1]: libpod-ed73332abdf5a657180e7768a69336cecfef6ef0eb1db51e4313923b3563ff62.scope: Deactivated successfully.
Oct 14 05:14:11 np0005486808 nova_compute[259627]: 2025-10-14 09:14:11.961 2 DEBUG nova.compute.manager [None req-f9282da8-119f-461e-93f7-f0c7d3345cc6 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:14:11 np0005486808 podman[356317]: 2025-10-14 09:14:11.961956652 +0000 UTC m=+0.052037703 container died ed73332abdf5a657180e7768a69336cecfef6ef0eb1db51e4313923b3563ff62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 05:14:11 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1797: 305 pgs: 305 active+clean; 372 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 9.8 MiB/s rd, 9.4 MiB/s wr, 410 op/s
Oct 14 05:14:11 np0005486808 systemd[1]: var-lib-containers-storage-overlay-6b6024105aa89e38c50917ad3e414821ec8f7b4c448c18cb2e7568bb5ccc24ae-merged.mount: Deactivated successfully.
Oct 14 05:14:11 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ed73332abdf5a657180e7768a69336cecfef6ef0eb1db51e4313923b3563ff62-userdata-shm.mount: Deactivated successfully.
Oct 14 05:14:11 np0005486808 podman[356317]: 2025-10-14 09:14:11.998581385 +0000 UTC m=+0.088662446 container cleanup ed73332abdf5a657180e7768a69336cecfef6ef0eb1db51e4313923b3563ff62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:14:12 np0005486808 systemd[1]: libpod-conmon-ed73332abdf5a657180e7768a69336cecfef6ef0eb1db51e4313923b3563ff62.scope: Deactivated successfully.
Oct 14 05:14:12 np0005486808 podman[356360]: 2025-10-14 09:14:12.064348416 +0000 UTC m=+0.044366695 container remove ed73332abdf5a657180e7768a69336cecfef6ef0eb1db51e4313923b3563ff62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 05:14:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:12.070 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fe6e94de-9e94-4fc2-8039-c93fa3932979]: (4, ('Tue Oct 14 09:14:11 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c (ed73332abdf5a657180e7768a69336cecfef6ef0eb1db51e4313923b3563ff62)\ned73332abdf5a657180e7768a69336cecfef6ef0eb1db51e4313923b3563ff62\nTue Oct 14 09:14:12 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c (ed73332abdf5a657180e7768a69336cecfef6ef0eb1db51e4313923b3563ff62)\ned73332abdf5a657180e7768a69336cecfef6ef0eb1db51e4313923b3563ff62\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:12.071 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1824aae2-3898-458b-8c36-e879f42d837f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:12.072 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa49b41b4-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:12 np0005486808 kernel: tapa49b41b4-20: left promiscuous mode
Oct 14 05:14:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:12.101 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[df75f7af-e333-4cbb-ba68-644760774af3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:12.126 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f6482cc3-7574-446c-9fc4-8d753f9a02d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:12.127 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[270c65e4-4298-4afe-8dae-f83dc726d10d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:12.143 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[eb37128e-9a22-4cbc-a6ef-fd6a232ae01d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 702261, 'reachable_time': 39374, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356381, 'error': None, 'target': 'ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:12 np0005486808 systemd[1]: run-netns-ovnmeta\x2da49b41b4\x2d2559\x2d4a22\x2da274\x2da6c7bbe75f2c.mount: Deactivated successfully.
Oct 14 05:14:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:12.148 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:14:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:12.148 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[32d2bf8f-f6b3-4e85-8054-8bd3fb859805]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:12 np0005486808 nova_compute[259627]: 2025-10-14 09:14:12.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:14:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e257 do_prune osdmap full prune enabled
Oct 14 05:14:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e258 e258: 3 total, 3 up, 3 in
Oct 14 05:14:12 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e258: 3 total, 3 up, 3 in
Oct 14 05:14:13 np0005486808 nova_compute[259627]: 2025-10-14 09:14:13.372 2 INFO nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Creating config drive at /var/lib/nova/instances/84080e43-a9f4-4b6a-889f-d76167ff715a/disk.config#033[00m
Oct 14 05:14:13 np0005486808 nova_compute[259627]: 2025-10-14 09:14:13.381 2 DEBUG oslo_concurrency.processutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/84080e43-a9f4-4b6a-889f-d76167ff715a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_0uviuo5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:13 np0005486808 nova_compute[259627]: 2025-10-14 09:14:13.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:13 np0005486808 nova_compute[259627]: 2025-10-14 09:14:13.468 2 DEBUG nova.compute.manager [req-6a9b0169-7cf6-4f32-9f59-39ebf5748945 req-3ee4a0e9-6475-4edd-877a-ac5f2f88d06c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Received event network-vif-unplugged-dc177f85-b331-40ec-b30f-1f667878bcb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:13 np0005486808 nova_compute[259627]: 2025-10-14 09:14:13.468 2 DEBUG oslo_concurrency.lockutils [req-6a9b0169-7cf6-4f32-9f59-39ebf5748945 req-3ee4a0e9-6475-4edd-877a-ac5f2f88d06c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "d5de3978-2377-4d8e-aeaf-c952912130a2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:13 np0005486808 nova_compute[259627]: 2025-10-14 09:14:13.469 2 DEBUG oslo_concurrency.lockutils [req-6a9b0169-7cf6-4f32-9f59-39ebf5748945 req-3ee4a0e9-6475-4edd-877a-ac5f2f88d06c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d5de3978-2377-4d8e-aeaf-c952912130a2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:13 np0005486808 nova_compute[259627]: 2025-10-14 09:14:13.469 2 DEBUG oslo_concurrency.lockutils [req-6a9b0169-7cf6-4f32-9f59-39ebf5748945 req-3ee4a0e9-6475-4edd-877a-ac5f2f88d06c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d5de3978-2377-4d8e-aeaf-c952912130a2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:13 np0005486808 nova_compute[259627]: 2025-10-14 09:14:13.470 2 DEBUG nova.compute.manager [req-6a9b0169-7cf6-4f32-9f59-39ebf5748945 req-3ee4a0e9-6475-4edd-877a-ac5f2f88d06c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] No waiting events found dispatching network-vif-unplugged-dc177f85-b331-40ec-b30f-1f667878bcb8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:14:13 np0005486808 nova_compute[259627]: 2025-10-14 09:14:13.470 2 DEBUG nova.compute.manager [req-6a9b0169-7cf6-4f32-9f59-39ebf5748945 req-3ee4a0e9-6475-4edd-877a-ac5f2f88d06c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Received event network-vif-unplugged-dc177f85-b331-40ec-b30f-1f667878bcb8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:14:13 np0005486808 nova_compute[259627]: 2025-10-14 09:14:13.471 2 DEBUG nova.compute.manager [req-6a9b0169-7cf6-4f32-9f59-39ebf5748945 req-3ee4a0e9-6475-4edd-877a-ac5f2f88d06c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Received event network-vif-plugged-dc177f85-b331-40ec-b30f-1f667878bcb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:13 np0005486808 nova_compute[259627]: 2025-10-14 09:14:13.471 2 DEBUG oslo_concurrency.lockutils [req-6a9b0169-7cf6-4f32-9f59-39ebf5748945 req-3ee4a0e9-6475-4edd-877a-ac5f2f88d06c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "d5de3978-2377-4d8e-aeaf-c952912130a2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:13 np0005486808 nova_compute[259627]: 2025-10-14 09:14:13.471 2 DEBUG oslo_concurrency.lockutils [req-6a9b0169-7cf6-4f32-9f59-39ebf5748945 req-3ee4a0e9-6475-4edd-877a-ac5f2f88d06c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d5de3978-2377-4d8e-aeaf-c952912130a2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:13 np0005486808 nova_compute[259627]: 2025-10-14 09:14:13.472 2 DEBUG oslo_concurrency.lockutils [req-6a9b0169-7cf6-4f32-9f59-39ebf5748945 req-3ee4a0e9-6475-4edd-877a-ac5f2f88d06c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d5de3978-2377-4d8e-aeaf-c952912130a2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:13 np0005486808 nova_compute[259627]: 2025-10-14 09:14:13.472 2 DEBUG nova.compute.manager [req-6a9b0169-7cf6-4f32-9f59-39ebf5748945 req-3ee4a0e9-6475-4edd-877a-ac5f2f88d06c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] No waiting events found dispatching network-vif-plugged-dc177f85-b331-40ec-b30f-1f667878bcb8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:14:13 np0005486808 nova_compute[259627]: 2025-10-14 09:14:13.472 2 WARNING nova.compute.manager [req-6a9b0169-7cf6-4f32-9f59-39ebf5748945 req-3ee4a0e9-6475-4edd-877a-ac5f2f88d06c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Received unexpected event network-vif-plugged-dc177f85-b331-40ec-b30f-1f667878bcb8 for instance with vm_state active and task_state deleting.#033[00m
Oct 14 05:14:13 np0005486808 nova_compute[259627]: 2025-10-14 09:14:13.551 2 DEBUG oslo_concurrency.processutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/84080e43-a9f4-4b6a-889f-d76167ff715a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_0uviuo5" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:13 np0005486808 nova_compute[259627]: 2025-10-14 09:14:13.615 2 DEBUG nova.storage.rbd_utils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] rbd image 84080e43-a9f4-4b6a-889f-d76167ff715a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:13 np0005486808 nova_compute[259627]: 2025-10-14 09:14:13.624 2 DEBUG oslo_concurrency.processutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/84080e43-a9f4-4b6a-889f-d76167ff715a/disk.config 84080e43-a9f4-4b6a-889f-d76167ff715a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:13 np0005486808 nova_compute[259627]: 2025-10-14 09:14:13.698 2 DEBUG nova.compute.manager [req-1f931542-07a8-48c9-8331-16c9d290b164 req-33e59c50-3bd4-4998-a92f-d848baea6e2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received event network-vif-unplugged-4f827284-f357-43c5-bdde-c69731b52914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:13 np0005486808 nova_compute[259627]: 2025-10-14 09:14:13.700 2 DEBUG oslo_concurrency.lockutils [req-1f931542-07a8-48c9-8331-16c9d290b164 req-33e59c50-3bd4-4998-a92f-d848baea6e2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:13 np0005486808 nova_compute[259627]: 2025-10-14 09:14:13.700 2 DEBUG oslo_concurrency.lockutils [req-1f931542-07a8-48c9-8331-16c9d290b164 req-33e59c50-3bd4-4998-a92f-d848baea6e2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:13 np0005486808 nova_compute[259627]: 2025-10-14 09:14:13.701 2 DEBUG oslo_concurrency.lockutils [req-1f931542-07a8-48c9-8331-16c9d290b164 req-33e59c50-3bd4-4998-a92f-d848baea6e2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:13 np0005486808 nova_compute[259627]: 2025-10-14 09:14:13.701 2 DEBUG nova.compute.manager [req-1f931542-07a8-48c9-8331-16c9d290b164 req-33e59c50-3bd4-4998-a92f-d848baea6e2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] No waiting events found dispatching network-vif-unplugged-4f827284-f357-43c5-bdde-c69731b52914 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:14:13 np0005486808 nova_compute[259627]: 2025-10-14 09:14:13.701 2 WARNING nova.compute.manager [req-1f931542-07a8-48c9-8331-16c9d290b164 req-33e59c50-3bd4-4998-a92f-d848baea6e2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received unexpected event network-vif-unplugged-4f827284-f357-43c5-bdde-c69731b52914 for instance with vm_state suspended and task_state None.#033[00m
Oct 14 05:14:13 np0005486808 nova_compute[259627]: 2025-10-14 09:14:13.709 2 DEBUG nova.network.neutron [-] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:14:13 np0005486808 nova_compute[259627]: 2025-10-14 09:14:13.737 2 INFO nova.compute.manager [-] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Took 2.11 seconds to deallocate network for instance.#033[00m
Oct 14 05:14:13 np0005486808 nova_compute[259627]: 2025-10-14 09:14:13.796 2 DEBUG oslo_concurrency.lockutils [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:13 np0005486808 nova_compute[259627]: 2025-10-14 09:14:13.797 2 DEBUG oslo_concurrency.lockutils [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:13 np0005486808 nova_compute[259627]: 2025-10-14 09:14:13.854 2 DEBUG oslo_concurrency.processutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/84080e43-a9f4-4b6a-889f-d76167ff715a/disk.config 84080e43-a9f4-4b6a-889f-d76167ff715a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.230s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:13 np0005486808 nova_compute[259627]: 2025-10-14 09:14:13.855 2 INFO nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Deleting local config drive /var/lib/nova/instances/84080e43-a9f4-4b6a-889f-d76167ff715a/disk.config because it was imported into RBD.#033[00m
Oct 14 05:14:13 np0005486808 nova_compute[259627]: 2025-10-14 09:14:13.915 2 DEBUG nova.network.neutron [req-a00b9d15-59a8-43b8-8b81-2311516d2669 req-85d7d570-3f8f-4031-8c8d-85cc0ce6dc0c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Updated VIF entry in instance network info cache for port 1e3d49fe-52bd-40cb-ae1a-86eb664df473. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:14:13 np0005486808 nova_compute[259627]: 2025-10-14 09:14:13.917 2 DEBUG nova.network.neutron [req-a00b9d15-59a8-43b8-8b81-2311516d2669 req-85d7d570-3f8f-4031-8c8d-85cc0ce6dc0c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Updating instance_info_cache with network_info: [{"id": "1e3d49fe-52bd-40cb-ae1a-86eb664df473", "address": "fa:16:3e:2d:6b:bc", "network": {"id": "8110a1ba-8e30-49ef-ba1c-c72228086a20", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-885916834-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de05cc126e24608be28a7d5dea18bf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e3d49fe-52", "ovs_interfaceid": "1e3d49fe-52bd-40cb-ae1a-86eb664df473", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:14:13 np0005486808 nova_compute[259627]: 2025-10-14 09:14:13.932 2 DEBUG oslo_concurrency.lockutils [req-a00b9d15-59a8-43b8-8b81-2311516d2669 req-85d7d570-3f8f-4031-8c8d-85cc0ce6dc0c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-84080e43-a9f4-4b6a-889f-d76167ff715a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:14:13 np0005486808 kernel: tap1e3d49fe-52: entered promiscuous mode
Oct 14 05:14:13 np0005486808 systemd-udevd[356190]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:14:13 np0005486808 NetworkManager[44885]: <info>  [1760433253.9449] manager: (tap1e3d49fe-52): new Tun device (/org/freedesktop/NetworkManager/Devices/419)
Oct 14 05:14:13 np0005486808 nova_compute[259627]: 2025-10-14 09:14:13.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:13 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:13Z|01018|binding|INFO|Claiming lport 1e3d49fe-52bd-40cb-ae1a-86eb664df473 for this chassis.
Oct 14 05:14:13 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:13Z|01019|binding|INFO|1e3d49fe-52bd-40cb-ae1a-86eb664df473: Claiming fa:16:3e:2d:6b:bc 10.100.0.4
Oct 14 05:14:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:13.960 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:6b:bc 10.100.0.4'], port_security=['fa:16:3e:2d:6b:bc 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '84080e43-a9f4-4b6a-889f-d76167ff715a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8110a1ba-8e30-49ef-ba1c-c72228086a20', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2de05cc126e24608be28a7d5dea18bf3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3f10ca33-82fc-4f36-9ded-7e23e5949e23', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3db415ba-b4a8-48a5-b3a3-5e4c9b11b067, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=1e3d49fe-52bd-40cb-ae1a-86eb664df473) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:14:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:13.962 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 1e3d49fe-52bd-40cb-ae1a-86eb664df473 in datapath 8110a1ba-8e30-49ef-ba1c-c72228086a20 bound to our chassis#033[00m
Oct 14 05:14:13 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1799: 305 pgs: 305 active+clean; 372 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 8.8 MiB/s rd, 6.8 MiB/s wr, 436 op/s
Oct 14 05:14:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:13.964 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8110a1ba-8e30-49ef-ba1c-c72228086a20#033[00m
Oct 14 05:14:13 np0005486808 NetworkManager[44885]: <info>  [1760433253.9681] device (tap1e3d49fe-52): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:14:13 np0005486808 NetworkManager[44885]: <info>  [1760433253.9693] device (tap1e3d49fe-52): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:14:13 np0005486808 nova_compute[259627]: 2025-10-14 09:14:13.977 2 DEBUG oslo_concurrency.processutils [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:13.985 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9633c3b4-044b-4303-8b78-cb7661e192be]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:13.986 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8110a1ba-81 in ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:14:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:13.988 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8110a1ba-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:14:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:13.988 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d6128f9e-77ef-4c54-a00f-bf8c7afcbec6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:13.989 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6d2871d7-9749-4cfe-b133-d1d1dd876183]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:13 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:13Z|01020|binding|INFO|Setting lport 1e3d49fe-52bd-40cb-ae1a-86eb664df473 ovn-installed in OVS
Oct 14 05:14:13 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:13Z|01021|binding|INFO|Setting lport 1e3d49fe-52bd-40cb-ae1a-86eb664df473 up in Southbound
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:14.008 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[7ed2ad5f-b3ab-402a-8d27-3fb863503f27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:14 np0005486808 systemd-machined[214636]: New machine qemu-121-instance-00000062.
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:14.024 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[aa32c0e1-b4f4-4949-b06e-924ccab85ff3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:14 np0005486808 nova_compute[259627]: 2025-10-14 09:14:14.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:14 np0005486808 systemd[1]: Started Virtual Machine qemu-121-instance-00000062.
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:14.069 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[884e66a6-c8ff-4747-9d8d-3e2c273f51ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:14.076 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c0dba3c4-69b5-45ab-aa42-3151229d910c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:14 np0005486808 NetworkManager[44885]: <info>  [1760433254.0780] manager: (tap8110a1ba-80): new Veth device (/org/freedesktop/NetworkManager/Devices/420)
Oct 14 05:14:14 np0005486808 systemd-udevd[356447]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:14.125 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e8803951-6800-4017-951c-15ce51efaf7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:14.129 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[bf49af67-9307-436a-841c-d71e6d74a1f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:14 np0005486808 NetworkManager[44885]: <info>  [1760433254.1571] device (tap8110a1ba-80): carrier: link connected
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:14.165 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4157a261-6281-4e83-a306-363b7061e72a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:14.190 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[218cfbe6-d78e-4919-8c13-8d4bcf636d47]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8110a1ba-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:85:5b:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 297], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703291, 'reachable_time': 38556, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356495, 'error': None, 'target': 'ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:14.218 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[589eb77d-41c9-4b54-a050-c6b1d59f9184]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe85:5b8b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703291, 'tstamp': 703291}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356496, 'error': None, 'target': 'ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:14 np0005486808 nova_compute[259627]: 2025-10-14 09:14:14.243 2 INFO nova.compute.manager [None req-3bcce2a9-c5d0-43ef-becb-35cfb9686988 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Get console output#033[00m
Oct 14 05:14:14 np0005486808 nova_compute[259627]: 2025-10-14 09:14:14.251 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:14.258 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c61e666f-55ad-447c-9cb3-323cc80b10ba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8110a1ba-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:85:5b:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 297], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703291, 'reachable_time': 38556, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 356497, 'error': None, 'target': 'ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:14.304 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[de2f408d-530c-4764-bd9e-a9de44dc79c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:14.372 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c89bb7d8-1947-4007-9389-b1c2572e99d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:14.374 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8110a1ba-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:14.375 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:14.375 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8110a1ba-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:14 np0005486808 nova_compute[259627]: 2025-10-14 09:14:14.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:14 np0005486808 NetworkManager[44885]: <info>  [1760433254.3792] manager: (tap8110a1ba-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/421)
Oct 14 05:14:14 np0005486808 kernel: tap8110a1ba-80: entered promiscuous mode
Oct 14 05:14:14 np0005486808 nova_compute[259627]: 2025-10-14 09:14:14.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:14.388 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8110a1ba-80, col_values=(('external_ids', {'iface-id': '08780918-7f13-4df3-8cab-973d8a442035'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:14 np0005486808 nova_compute[259627]: 2025-10-14 09:14:14.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:14 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:14Z|01022|binding|INFO|Releasing lport 08780918-7f13-4df3-8cab-973d8a442035 from this chassis (sb_readonly=0)
Oct 14 05:14:14 np0005486808 nova_compute[259627]: 2025-10-14 09:14:14.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:14.461 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8110a1ba-8e30-49ef-ba1c-c72228086a20.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8110a1ba-8e30-49ef-ba1c-c72228086a20.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:14.462 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7c3c5867-51e9-4aa6-8988-e99a0f9ada70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:14.463 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-8110a1ba-8e30-49ef-ba1c-c72228086a20
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/8110a1ba-8e30-49ef-ba1c-c72228086a20.pid.haproxy
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 8110a1ba-8e30-49ef-ba1c-c72228086a20
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:14:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:14.464 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20', 'env', 'PROCESS_TAG=haproxy-8110a1ba-8e30-49ef-ba1c-c72228086a20', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8110a1ba-8e30-49ef-ba1c-c72228086a20.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:14:14 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:14:14 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4258488040' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:14:14 np0005486808 nova_compute[259627]: 2025-10-14 09:14:14.496 2 DEBUG oslo_concurrency.processutils [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:14 np0005486808 nova_compute[259627]: 2025-10-14 09:14:14.502 2 DEBUG nova.compute.provider_tree [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:14:14 np0005486808 nova_compute[259627]: 2025-10-14 09:14:14.522 2 DEBUG nova.scheduler.client.report [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:14:14 np0005486808 nova_compute[259627]: 2025-10-14 09:14:14.554 2 DEBUG oslo_concurrency.lockutils [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:14 np0005486808 nova_compute[259627]: 2025-10-14 09:14:14.591 2 INFO nova.scheduler.client.report [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Deleted allocations for instance d5de3978-2377-4d8e-aeaf-c952912130a2#033[00m
Oct 14 05:14:14 np0005486808 nova_compute[259627]: 2025-10-14 09:14:14.678 2 DEBUG oslo_concurrency.lockutils [None req-63bbe61d-c8ca-4419-8f0f-d4fc6bce4e38 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "d5de3978-2377-4d8e-aeaf-c952912130a2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:14 np0005486808 podman[356577]: 2025-10-14 09:14:14.845286095 +0000 UTC m=+0.057931599 container create 9af3eac4977b5e91ac231d5bb80b1d9835e59533cfc524b04defb270e2219fb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:14:14 np0005486808 systemd[1]: Started libpod-conmon-9af3eac4977b5e91ac231d5bb80b1d9835e59533cfc524b04defb270e2219fb5.scope.
Oct 14 05:14:14 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:14:14 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8c703b8ad82eb6af106c099e66a521ffd9ab04da878e5ff9418cd428162b5e4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:14:14 np0005486808 podman[356577]: 2025-10-14 09:14:14.815155413 +0000 UTC m=+0.027800957 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:14:14 np0005486808 podman[356577]: 2025-10-14 09:14:14.916690915 +0000 UTC m=+0.129336439 container init 9af3eac4977b5e91ac231d5bb80b1d9835e59533cfc524b04defb270e2219fb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:14:14 np0005486808 podman[356577]: 2025-10-14 09:14:14.927139333 +0000 UTC m=+0.139784837 container start 9af3eac4977b5e91ac231d5bb80b1d9835e59533cfc524b04defb270e2219fb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 14 05:14:14 np0005486808 neutron-haproxy-ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20[356594]: [NOTICE]   (356598) : New worker (356600) forked
Oct 14 05:14:14 np0005486808 neutron-haproxy-ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20[356594]: [NOTICE]   (356598) : Loading success.
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.315 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433255.3144155, 84080e43-a9f4-4b6a-889f-d76167ff715a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.315 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] VM Started (Lifecycle Event)#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.337 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.342 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433255.3146741, 84080e43-a9f4-4b6a-889f-d76167ff715a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.343 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.360 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.364 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.382 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.632 2 INFO nova.compute.manager [None req-4cfae5ec-06a0-4d13-a7b9-5d2d58c250d4 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Resuming#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.633 2 DEBUG nova.objects.instance [None req-4cfae5ec-06a0-4d13-a7b9-5d2d58c250d4 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lazy-loading 'flavor' on Instance uuid 2534f8b9-e832-4b78-ada4-e551429bdc75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.670 2 DEBUG oslo_concurrency.lockutils [None req-4cfae5ec-06a0-4d13-a7b9-5d2d58c250d4 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.670 2 DEBUG oslo_concurrency.lockutils [None req-4cfae5ec-06a0-4d13-a7b9-5d2d58c250d4 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquired lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.671 2 DEBUG nova.network.neutron [None req-4cfae5ec-06a0-4d13-a7b9-5d2d58c250d4 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.689 2 DEBUG nova.compute.manager [req-a1d4709b-41d9-4c64-8aa9-161171154316 req-14f6e636-9391-48cd-8936-a1ed0a50690b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Received event network-vif-plugged-1e3d49fe-52bd-40cb-ae1a-86eb664df473 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.690 2 DEBUG oslo_concurrency.lockutils [req-a1d4709b-41d9-4c64-8aa9-161171154316 req-14f6e636-9391-48cd-8936-a1ed0a50690b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "84080e43-a9f4-4b6a-889f-d76167ff715a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.691 2 DEBUG oslo_concurrency.lockutils [req-a1d4709b-41d9-4c64-8aa9-161171154316 req-14f6e636-9391-48cd-8936-a1ed0a50690b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "84080e43-a9f4-4b6a-889f-d76167ff715a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.691 2 DEBUG oslo_concurrency.lockutils [req-a1d4709b-41d9-4c64-8aa9-161171154316 req-14f6e636-9391-48cd-8936-a1ed0a50690b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "84080e43-a9f4-4b6a-889f-d76167ff715a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.691 2 DEBUG nova.compute.manager [req-a1d4709b-41d9-4c64-8aa9-161171154316 req-14f6e636-9391-48cd-8936-a1ed0a50690b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Processing event network-vif-plugged-1e3d49fe-52bd-40cb-ae1a-86eb664df473 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.692 2 DEBUG nova.compute.manager [req-a1d4709b-41d9-4c64-8aa9-161171154316 req-14f6e636-9391-48cd-8936-a1ed0a50690b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Received event network-vif-plugged-1e3d49fe-52bd-40cb-ae1a-86eb664df473 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.692 2 DEBUG oslo_concurrency.lockutils [req-a1d4709b-41d9-4c64-8aa9-161171154316 req-14f6e636-9391-48cd-8936-a1ed0a50690b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "84080e43-a9f4-4b6a-889f-d76167ff715a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.693 2 DEBUG oslo_concurrency.lockutils [req-a1d4709b-41d9-4c64-8aa9-161171154316 req-14f6e636-9391-48cd-8936-a1ed0a50690b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "84080e43-a9f4-4b6a-889f-d76167ff715a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.693 2 DEBUG oslo_concurrency.lockutils [req-a1d4709b-41d9-4c64-8aa9-161171154316 req-14f6e636-9391-48cd-8936-a1ed0a50690b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "84080e43-a9f4-4b6a-889f-d76167ff715a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.694 2 DEBUG nova.compute.manager [req-a1d4709b-41d9-4c64-8aa9-161171154316 req-14f6e636-9391-48cd-8936-a1ed0a50690b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] No waiting events found dispatching network-vif-plugged-1e3d49fe-52bd-40cb-ae1a-86eb664df473 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.694 2 WARNING nova.compute.manager [req-a1d4709b-41d9-4c64-8aa9-161171154316 req-14f6e636-9391-48cd-8936-a1ed0a50690b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Received unexpected event network-vif-plugged-1e3d49fe-52bd-40cb-ae1a-86eb664df473 for instance with vm_state building and task_state spawning.#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.696 2 DEBUG nova.compute.manager [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.704 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433255.7037997, 84080e43-a9f4-4b6a-889f-d76167ff715a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.704 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.708 2 DEBUG nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.713 2 INFO nova.virt.libvirt.driver [-] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Instance spawned successfully.#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.713 2 DEBUG nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.728 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.737 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.743 2 DEBUG nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.743 2 DEBUG nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.744 2 DEBUG nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.745 2 DEBUG nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.746 2 DEBUG nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.747 2 DEBUG nova.virt.libvirt.driver [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.775 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.810 2 INFO nova.compute.manager [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Took 9.31 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.811 2 DEBUG nova.compute.manager [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.830 2 DEBUG nova.compute.manager [req-952ef1f5-4b96-4f40-b7e6-6c72667533e7 req-469077bd-bfbd-4f04-b64a-4a21967930b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Received event network-vif-deleted-dc177f85-b331-40ec-b30f-1f667878bcb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.831 2 DEBUG nova.compute.manager [req-952ef1f5-4b96-4f40-b7e6-6c72667533e7 req-469077bd-bfbd-4f04-b64a-4a21967930b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received event network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.831 2 DEBUG oslo_concurrency.lockutils [req-952ef1f5-4b96-4f40-b7e6-6c72667533e7 req-469077bd-bfbd-4f04-b64a-4a21967930b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.832 2 DEBUG oslo_concurrency.lockutils [req-952ef1f5-4b96-4f40-b7e6-6c72667533e7 req-469077bd-bfbd-4f04-b64a-4a21967930b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.832 2 DEBUG oslo_concurrency.lockutils [req-952ef1f5-4b96-4f40-b7e6-6c72667533e7 req-469077bd-bfbd-4f04-b64a-4a21967930b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.833 2 DEBUG nova.compute.manager [req-952ef1f5-4b96-4f40-b7e6-6c72667533e7 req-469077bd-bfbd-4f04-b64a-4a21967930b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] No waiting events found dispatching network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.833 2 WARNING nova.compute.manager [req-952ef1f5-4b96-4f40-b7e6-6c72667533e7 req-469077bd-bfbd-4f04-b64a-4a21967930b0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received unexpected event network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 for instance with vm_state suspended and task_state resuming.#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.884 2 INFO nova.compute.manager [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Took 10.42 seconds to build instance.#033[00m
Oct 14 05:14:15 np0005486808 nova_compute[259627]: 2025-10-14 09:14:15.902 2 DEBUG oslo_concurrency.lockutils [None req-b8306081-e695-47cb-89c1-58c342c88578 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Lock "84080e43-a9f4-4b6a-889f-d76167ff715a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.491s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:15 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1800: 305 pgs: 305 active+clean; 326 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 4.7 MiB/s wr, 318 op/s
Oct 14 05:14:16 np0005486808 nova_compute[259627]: 2025-10-14 09:14:16.168 2 INFO nova.compute.manager [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Rebuilding instance#033[00m
Oct 14 05:14:16 np0005486808 nova_compute[259627]: 2025-10-14 09:14:16.442 2 DEBUG nova.objects.instance [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'trusted_certs' on Instance uuid b595141f-123e-4250-bfec-888d866fd0c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:14:16 np0005486808 nova_compute[259627]: 2025-10-14 09:14:16.461 2 DEBUG nova.compute.manager [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:14:16 np0005486808 nova_compute[259627]: 2025-10-14 09:14:16.526 2 DEBUG nova.objects.instance [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'pci_requests' on Instance uuid b595141f-123e-4250-bfec-888d866fd0c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:14:16 np0005486808 nova_compute[259627]: 2025-10-14 09:14:16.537 2 DEBUG nova.objects.instance [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'pci_devices' on Instance uuid b595141f-123e-4250-bfec-888d866fd0c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:14:16 np0005486808 nova_compute[259627]: 2025-10-14 09:14:16.547 2 DEBUG nova.objects.instance [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'resources' on Instance uuid b595141f-123e-4250-bfec-888d866fd0c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:14:16 np0005486808 nova_compute[259627]: 2025-10-14 09:14:16.557 2 DEBUG nova.objects.instance [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'migration_context' on Instance uuid b595141f-123e-4250-bfec-888d866fd0c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:14:16 np0005486808 nova_compute[259627]: 2025-10-14 09:14:16.566 2 DEBUG nova.objects.instance [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct 14 05:14:16 np0005486808 nova_compute[259627]: 2025-10-14 09:14:16.570 2 DEBUG nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct 14 05:14:16 np0005486808 nova_compute[259627]: 2025-10-14 09:14:16.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:17 np0005486808 nova_compute[259627]: 2025-10-14 09:14:17.479 2 DEBUG nova.network.neutron [None req-4cfae5ec-06a0-4d13-a7b9-5d2d58c250d4 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Updating instance_info_cache with network_info: [{"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:14:17 np0005486808 nova_compute[259627]: 2025-10-14 09:14:17.495 2 DEBUG oslo_concurrency.lockutils [None req-4cfae5ec-06a0-4d13-a7b9-5d2d58c250d4 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Releasing lock "refresh_cache-2534f8b9-e832-4b78-ada4-e551429bdc75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:14:17 np0005486808 nova_compute[259627]: 2025-10-14 09:14:17.502 2 DEBUG nova.virt.libvirt.vif [None req-4cfae5ec-06a0-4d13-a7b9-5d2d58c250d4 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-14T09:12:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-17250352',display_name='tempest-ServersNegativeTestJSON-server-17250352',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-17250352',id=86,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:14:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='517aafb84156407c8672042097e3ef4f',ramdisk_id='',reservation_id='r-rj00rja0',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServersNegativeTestJSON-1475695514',owner_user_name='tempest-ServersNegativeTestJSON-1475695514-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:14:12Z,user_data=None,user_id='92e59e145f6942b78d0ffbebc4d89e76',uuid=2534f8b9-e832-4b78-ada4-e551429bdc75,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:14:17 np0005486808 nova_compute[259627]: 2025-10-14 09:14:17.503 2 DEBUG nova.network.os_vif_util [None req-4cfae5ec-06a0-4d13-a7b9-5d2d58c250d4 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Converting VIF {"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:14:17 np0005486808 nova_compute[259627]: 2025-10-14 09:14:17.505 2 DEBUG nova.network.os_vif_util [None req-4cfae5ec-06a0-4d13-a7b9-5d2d58c250d4 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:d7:f7,bridge_name='br-int',has_traffic_filtering=True,id=4f827284-f357-43c5-bdde-c69731b52914,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f827284-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:14:17 np0005486808 nova_compute[259627]: 2025-10-14 09:14:17.506 2 DEBUG os_vif [None req-4cfae5ec-06a0-4d13-a7b9-5d2d58c250d4 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:d7:f7,bridge_name='br-int',has_traffic_filtering=True,id=4f827284-f357-43c5-bdde-c69731b52914,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f827284-f3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:14:17 np0005486808 nova_compute[259627]: 2025-10-14 09:14:17.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:17 np0005486808 nova_compute[259627]: 2025-10-14 09:14:17.508 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:17 np0005486808 nova_compute[259627]: 2025-10-14 09:14:17.509 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:14:17 np0005486808 nova_compute[259627]: 2025-10-14 09:14:17.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:17 np0005486808 nova_compute[259627]: 2025-10-14 09:14:17.513 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f827284-f3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:17 np0005486808 nova_compute[259627]: 2025-10-14 09:14:17.514 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4f827284-f3, col_values=(('external_ids', {'iface-id': '4f827284-f357-43c5-bdde-c69731b52914', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8b:d7:f7', 'vm-uuid': '2534f8b9-e832-4b78-ada4-e551429bdc75'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:17 np0005486808 nova_compute[259627]: 2025-10-14 09:14:17.515 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:14:17 np0005486808 nova_compute[259627]: 2025-10-14 09:14:17.516 2 INFO os_vif [None req-4cfae5ec-06a0-4d13-a7b9-5d2d58c250d4 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:d7:f7,bridge_name='br-int',has_traffic_filtering=True,id=4f827284-f357-43c5-bdde-c69731b52914,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f827284-f3')#033[00m
Oct 14 05:14:17 np0005486808 nova_compute[259627]: 2025-10-14 09:14:17.547 2 DEBUG nova.objects.instance [None req-4cfae5ec-06a0-4d13-a7b9-5d2d58c250d4 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lazy-loading 'numa_topology' on Instance uuid 2534f8b9-e832-4b78-ada4-e551429bdc75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:14:17 np0005486808 kernel: tap4f827284-f3: entered promiscuous mode
Oct 14 05:14:17 np0005486808 NetworkManager[44885]: <info>  [1760433257.6211] manager: (tap4f827284-f3): new Tun device (/org/freedesktop/NetworkManager/Devices/422)
Oct 14 05:14:17 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:17Z|01023|binding|INFO|Claiming lport 4f827284-f357-43c5-bdde-c69731b52914 for this chassis.
Oct 14 05:14:17 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:17Z|01024|binding|INFO|4f827284-f357-43c5-bdde-c69731b52914: Claiming fa:16:3e:8b:d7:f7 10.100.0.7
Oct 14 05:14:17 np0005486808 nova_compute[259627]: 2025-10-14 09:14:17.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.635 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:d7:f7 10.100.0.7'], port_security=['fa:16:3e:8b:d7:f7 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2534f8b9-e832-4b78-ada4-e551429bdc75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '517aafb84156407c8672042097e3ef4f', 'neutron:revision_number': '10', 'neutron:security_group_ids': '572acc55-453a-444a-ab8d-a15e14283f7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=927296e1-b389-4596-b9be-8cf735b93ca2, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=4f827284-f357-43c5-bdde-c69731b52914) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:14:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.637 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 4f827284-f357-43c5-bdde-c69731b52914 in datapath a49b41b4-2559-4a22-a274-a6c7bbe75f2c bound to our chassis#033[00m
Oct 14 05:14:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.639 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a49b41b4-2559-4a22-a274-a6c7bbe75f2c#033[00m
Oct 14 05:14:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.652 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4671a0e6-c214-4da5-9d3d-9faa9d7574d3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.652 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa49b41b4-21 in ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:14:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.655 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa49b41b4-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:14:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.655 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[81e39661-5507-4259-b282-3ab34431c9d6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.655 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2422212f-a6e7-4475-9bec-d5aeca0db21a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:17 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:17Z|01025|binding|INFO|Setting lport 4f827284-f357-43c5-bdde-c69731b52914 ovn-installed in OVS
Oct 14 05:14:17 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:17Z|01026|binding|INFO|Setting lport 4f827284-f357-43c5-bdde-c69731b52914 up in Southbound
Oct 14 05:14:17 np0005486808 nova_compute[259627]: 2025-10-14 09:14:17.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.672 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[4c5aee29-70ee-4606-a3ee-ebd285705877]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:17 np0005486808 systemd-machined[214636]: New machine qemu-122-instance-00000056.
Oct 14 05:14:17 np0005486808 systemd[1]: Started Virtual Machine qemu-122-instance-00000056.
Oct 14 05:14:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.688 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d4d781e5-9036-455a-91b3-827f593954d8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:17 np0005486808 systemd-udevd[356626]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:14:17 np0005486808 NetworkManager[44885]: <info>  [1760433257.7192] device (tap4f827284-f3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:14:17 np0005486808 NetworkManager[44885]: <info>  [1760433257.7208] device (tap4f827284-f3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:14:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.726 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a99029db-b9f0-4ef9-a332-2207243e03f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:17 np0005486808 NetworkManager[44885]: <info>  [1760433257.7364] manager: (tapa49b41b4-20): new Veth device (/org/freedesktop/NetworkManager/Devices/423)
Oct 14 05:14:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.734 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d395c3da-30ea-476c-b933-b891ce5909e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.775 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[b7dfb07c-c2a8-4b9d-adc2-3a0335e79a46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.778 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[53608889-74f4-4209-b597-7481a9e0ef10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:17 np0005486808 NetworkManager[44885]: <info>  [1760433257.8029] device (tapa49b41b4-20): carrier: link connected
Oct 14 05:14:17 np0005486808 podman[356631]: 2025-10-14 09:14:17.805954405 +0000 UTC m=+0.075525263 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:14:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.813 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[eb25837f-35d9-4c7f-a44b-fdbd41c96f9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:17 np0005486808 podman[356629]: 2025-10-14 09:14:17.821481848 +0000 UTC m=+0.096865889 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 14 05:14:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.839 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[74452377-bfc9-44f0-92c4-cd68c9180e7d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa49b41b4-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:5b:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 299], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703656, 'reachable_time': 25331, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356693, 'error': None, 'target': 'ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:14:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.856 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5895f24e-ed37-441f-b960-b4dad4e4ecdf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1d:5b6b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703656, 'tstamp': 703656}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356694, 'error': None, 'target': 'ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.873 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[409a6b31-329c-4ca2-8524-ea0cbf985d39]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa49b41b4-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:5b:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 299], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703656, 'reachable_time': 25331, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 356695, 'error': None, 'target': 'ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.912 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6fff46ed-a0ea-4040-af17-32732a2ef22b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:17 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1801: 305 pgs: 305 active+clean; 326 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 4.7 MiB/s wr, 318 op/s
Oct 14 05:14:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.984 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[32ff6e2a-a0b6-4bf9-b3cf-39d0c723942d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.986 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa49b41b4-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.986 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:14:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.987 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa49b41b4-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:17 np0005486808 NetworkManager[44885]: <info>  [1760433257.9900] manager: (tapa49b41b4-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/424)
Oct 14 05:14:17 np0005486808 nova_compute[259627]: 2025-10-14 09:14:17.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:17 np0005486808 kernel: tapa49b41b4-20: entered promiscuous mode
Oct 14 05:14:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:17.997 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa49b41b4-20, col_values=(('external_ids', {'iface-id': '61fe5571-a8eb-446a-8c4c-1f6f6758b146'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:17 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:17Z|01027|binding|INFO|Releasing lport 61fe5571-a8eb-446a-8c4c-1f6f6758b146 from this chassis (sb_readonly=0)
Oct 14 05:14:17 np0005486808 nova_compute[259627]: 2025-10-14 09:14:17.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:18 np0005486808 nova_compute[259627]: 2025-10-14 09:14:18.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:18 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:18.033 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a49b41b4-2559-4a22-a274-a6c7bbe75f2c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a49b41b4-2559-4a22-a274-a6c7bbe75f2c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:14:18 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:18.034 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fb1a5533-4860-4e93-b8cd-2238e09c9d29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:18 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:18.035 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:14:18 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:14:18 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:14:18 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-a49b41b4-2559-4a22-a274-a6c7bbe75f2c
Oct 14 05:14:18 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:14:18 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:14:18 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:14:18 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/a49b41b4-2559-4a22-a274-a6c7bbe75f2c.pid.haproxy
Oct 14 05:14:18 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:14:18 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:14:18 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:14:18 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:14:18 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:14:18 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:14:18 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:14:18 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:14:18 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:14:18 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:14:18 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:14:18 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:14:18 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:14:18 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:14:18 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:14:18 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:14:18 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:14:18 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:14:18 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:14:18 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:14:18 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID a49b41b4-2559-4a22-a274-a6c7bbe75f2c
Oct 14 05:14:18 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:14:18 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:18.037 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'env', 'PROCESS_TAG=haproxy-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a49b41b4-2559-4a22-a274-a6c7bbe75f2c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:14:18 np0005486808 nova_compute[259627]: 2025-10-14 09:14:18.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:18 np0005486808 podman[356769]: 2025-10-14 09:14:18.403839461 +0000 UTC m=+0.048878466 container create 003a69fa8facb8e884632adb21806e70fa1165d743d1e807512ac7c4836dc3cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:14:18 np0005486808 systemd[1]: Started libpod-conmon-003a69fa8facb8e884632adb21806e70fa1165d743d1e807512ac7c4836dc3cd.scope.
Oct 14 05:14:18 np0005486808 podman[356769]: 2025-10-14 09:14:18.381645744 +0000 UTC m=+0.026684759 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:14:18 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:14:18 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6781cf622e873ac119e8ad8166fc07516eec1ccc73d027486934f81c24ca8b1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:14:18 np0005486808 podman[356769]: 2025-10-14 09:14:18.507500576 +0000 UTC m=+0.152539591 container init 003a69fa8facb8e884632adb21806e70fa1165d743d1e807512ac7c4836dc3cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009)
Oct 14 05:14:18 np0005486808 podman[356769]: 2025-10-14 09:14:18.514360975 +0000 UTC m=+0.159399970 container start 003a69fa8facb8e884632adb21806e70fa1165d743d1e807512ac7c4836dc3cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:14:18 np0005486808 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[356784]: [NOTICE]   (356788) : New worker (356790) forked
Oct 14 05:14:18 np0005486808 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[356784]: [NOTICE]   (356788) : Loading success.
Oct 14 05:14:18 np0005486808 nova_compute[259627]: 2025-10-14 09:14:18.555 2 DEBUG nova.compute.manager [req-9843a03e-883d-4b79-93bf-c015d6df61fb req-658780d6-0544-4a76-b023-80133ff9643f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received event network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:18 np0005486808 nova_compute[259627]: 2025-10-14 09:14:18.556 2 DEBUG oslo_concurrency.lockutils [req-9843a03e-883d-4b79-93bf-c015d6df61fb req-658780d6-0544-4a76-b023-80133ff9643f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:18 np0005486808 nova_compute[259627]: 2025-10-14 09:14:18.557 2 DEBUG oslo_concurrency.lockutils [req-9843a03e-883d-4b79-93bf-c015d6df61fb req-658780d6-0544-4a76-b023-80133ff9643f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:18 np0005486808 nova_compute[259627]: 2025-10-14 09:14:18.557 2 DEBUG oslo_concurrency.lockutils [req-9843a03e-883d-4b79-93bf-c015d6df61fb req-658780d6-0544-4a76-b023-80133ff9643f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:18 np0005486808 nova_compute[259627]: 2025-10-14 09:14:18.558 2 DEBUG nova.compute.manager [req-9843a03e-883d-4b79-93bf-c015d6df61fb req-658780d6-0544-4a76-b023-80133ff9643f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] No waiting events found dispatching network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:14:18 np0005486808 nova_compute[259627]: 2025-10-14 09:14:18.558 2 WARNING nova.compute.manager [req-9843a03e-883d-4b79-93bf-c015d6df61fb req-658780d6-0544-4a76-b023-80133ff9643f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received unexpected event network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 for instance with vm_state suspended and task_state resuming.#033[00m
Oct 14 05:14:18 np0005486808 nova_compute[259627]: 2025-10-14 09:14:18.642 2 DEBUG oslo_concurrency.lockutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "17db7f38-479a-4d56-9424-7f5ab695ccea" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:18 np0005486808 nova_compute[259627]: 2025-10-14 09:14:18.643 2 DEBUG oslo_concurrency.lockutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "17db7f38-479a-4d56-9424-7f5ab695ccea" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:18 np0005486808 nova_compute[259627]: 2025-10-14 09:14:18.648 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for 2534f8b9-e832-4b78-ada4-e551429bdc75 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct 14 05:14:18 np0005486808 nova_compute[259627]: 2025-10-14 09:14:18.649 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433258.6485362, 2534f8b9-e832-4b78-ada4-e551429bdc75 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:14:18 np0005486808 nova_compute[259627]: 2025-10-14 09:14:18.649 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] VM Started (Lifecycle Event)#033[00m
Oct 14 05:14:18 np0005486808 nova_compute[259627]: 2025-10-14 09:14:18.665 2 DEBUG nova.compute.manager [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:14:18 np0005486808 nova_compute[259627]: 2025-10-14 09:14:18.669 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:14:18 np0005486808 nova_compute[259627]: 2025-10-14 09:14:18.678 2 DEBUG nova.compute.manager [None req-4cfae5ec-06a0-4d13-a7b9-5d2d58c250d4 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:14:18 np0005486808 nova_compute[259627]: 2025-10-14 09:14:18.679 2 DEBUG nova.objects.instance [None req-4cfae5ec-06a0-4d13-a7b9-5d2d58c250d4 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lazy-loading 'pci_devices' on Instance uuid 2534f8b9-e832-4b78-ada4-e551429bdc75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:14:18 np0005486808 nova_compute[259627]: 2025-10-14 09:14:18.682 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:14:18 np0005486808 nova_compute[259627]: 2025-10-14 09:14:18.701 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Oct 14 05:14:18 np0005486808 nova_compute[259627]: 2025-10-14 09:14:18.702 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433258.664729, 2534f8b9-e832-4b78-ada4-e551429bdc75 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:14:18 np0005486808 nova_compute[259627]: 2025-10-14 09:14:18.702 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:14:18 np0005486808 nova_compute[259627]: 2025-10-14 09:14:18.705 2 INFO nova.virt.libvirt.driver [-] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Instance running successfully.#033[00m
Oct 14 05:14:18 np0005486808 virtqemud[259351]: argument unsupported: QEMU guest agent is not configured
Oct 14 05:14:18 np0005486808 nova_compute[259627]: 2025-10-14 09:14:18.708 2 DEBUG nova.virt.libvirt.guest [None req-4cfae5ec-06a0-4d13-a7b9-5d2d58c250d4 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 14 05:14:18 np0005486808 nova_compute[259627]: 2025-10-14 09:14:18.709 2 DEBUG nova.compute.manager [None req-4cfae5ec-06a0-4d13-a7b9-5d2d58c250d4 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:14:18 np0005486808 nova_compute[259627]: 2025-10-14 09:14:18.733 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:14:18 np0005486808 nova_compute[259627]: 2025-10-14 09:14:18.737 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 05:14:18 np0005486808 nova_compute[259627]: 2025-10-14 09:14:18.748 2 DEBUG oslo_concurrency.lockutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:14:18 np0005486808 nova_compute[259627]: 2025-10-14 09:14:18.748 2 DEBUG oslo_concurrency.lockutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:14:18 np0005486808 nova_compute[259627]: 2025-10-14 09:14:18.755 2 DEBUG nova.virt.hardware [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 05:14:18 np0005486808 nova_compute[259627]: 2025-10-14 09:14:18.756 2 INFO nova.compute.claims [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Claim successful on node compute-0.ctlplane.example.com
Oct 14 05:14:18 np0005486808 nova_compute[259627]: 2025-10-14 09:14:18.759 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] During sync_power_state the instance has a pending task (resuming). Skip.
Oct 14 05:14:18 np0005486808 kernel: tap7103ce4a-69 (unregistering): left promiscuous mode
Oct 14 05:14:18 np0005486808 NetworkManager[44885]: <info>  [1760433258.8311] device (tap7103ce4a-69): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:14:18 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:18Z|01028|binding|INFO|Releasing lport 7103ce4a-69e8-454b-aed3-251ecb109232 from this chassis (sb_readonly=0)
Oct 14 05:14:18 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:18Z|01029|binding|INFO|Setting lport 7103ce4a-69e8-454b-aed3-251ecb109232 down in Southbound
Oct 14 05:14:18 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:18Z|01030|binding|INFO|Removing iface tap7103ce4a-69 ovn-installed in OVS
Oct 14 05:14:18 np0005486808 nova_compute[259627]: 2025-10-14 09:14:18.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:14:18 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:18.849 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:3c:de 10.100.0.5'], port_security=['fa:16:3e:9d:3c:de 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b595141f-123e-4250-bfec-888d866fd0c6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-563aa000-400f-4c19-ba83-9377cc50d29f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d24993a343a425dbddac7e32be0c86b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '97c0b0e2-9440-40e7-a61b-2eb79520e4e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.180'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1e572d4-df42-4a93-ac49-d93e0906a5ee, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=7103ce4a-69e8-454b-aed3-251ecb109232) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 05:14:18 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:18.850 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 7103ce4a-69e8-454b-aed3-251ecb109232 in datapath 563aa000-400f-4c19-ba83-9377cc50d29f unbound from our chassis
Oct 14 05:14:18 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:18.852 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 563aa000-400f-4c19-ba83-9377cc50d29f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 05:14:18 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:18.853 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0521bca1-82f4-424a-83b2-0e43cf7c8308]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:14:18 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:18.854 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f namespace which is not needed anymore
Oct 14 05:14:18 np0005486808 nova_compute[259627]: 2025-10-14 09:14:18.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:14:18 np0005486808 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d00000060.scope: Deactivated successfully.
Oct 14 05:14:18 np0005486808 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d00000060.scope: Consumed 13.415s CPU time.
Oct 14 05:14:18 np0005486808 systemd-machined[214636]: Machine qemu-118-instance-00000060 terminated.
Oct 14 05:14:18 np0005486808 nova_compute[259627]: 2025-10-14 09:14:18.943 2 DEBUG oslo_concurrency.processutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:14:19 np0005486808 neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f[354908]: [NOTICE]   (354918) : haproxy version is 2.8.14-c23fe91
Oct 14 05:14:19 np0005486808 neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f[354908]: [NOTICE]   (354918) : path to executable is /usr/sbin/haproxy
Oct 14 05:14:19 np0005486808 neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f[354908]: [WARNING]  (354918) : Exiting Master process...
Oct 14 05:14:19 np0005486808 neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f[354908]: [WARNING]  (354918) : Exiting Master process...
Oct 14 05:14:19 np0005486808 neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f[354908]: [ALERT]    (354918) : Current worker (354920) exited with code 143 (Terminated)
Oct 14 05:14:19 np0005486808 neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f[354908]: [WARNING]  (354918) : All workers exited. Exiting... (0)
Oct 14 05:14:19 np0005486808 systemd[1]: libpod-a15e176523bc2c68be726efae2157510856ad583037ec2ed2302f535cb80b454.scope: Deactivated successfully.
Oct 14 05:14:19 np0005486808 conmon[354908]: conmon a15e176523bc2c68be72 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a15e176523bc2c68be726efae2157510856ad583037ec2ed2302f535cb80b454.scope/container/memory.events
Oct 14 05:14:19 np0005486808 podman[356819]: 2025-10-14 09:14:19.030445164 +0000 UTC m=+0.054850072 container died a15e176523bc2c68be726efae2157510856ad583037ec2ed2302f535cb80b454 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 05:14:19 np0005486808 systemd[1]: var-lib-containers-storage-overlay-e223c53edf6cce1508ede20bb72625545cc9c2f087f47ad069c77aa13b5410e4-merged.mount: Deactivated successfully.
Oct 14 05:14:19 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a15e176523bc2c68be726efae2157510856ad583037ec2ed2302f535cb80b454-userdata-shm.mount: Deactivated successfully.
Oct 14 05:14:19 np0005486808 podman[356819]: 2025-10-14 09:14:19.083931033 +0000 UTC m=+0.108335951 container cleanup a15e176523bc2c68be726efae2157510856ad583037ec2ed2302f535cb80b454 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 14 05:14:19 np0005486808 systemd[1]: libpod-conmon-a15e176523bc2c68be726efae2157510856ad583037ec2ed2302f535cb80b454.scope: Deactivated successfully.
Oct 14 05:14:19 np0005486808 podman[356876]: 2025-10-14 09:14:19.170117407 +0000 UTC m=+0.051345917 container remove a15e176523bc2c68be726efae2157510856ad583037ec2ed2302f535cb80b454 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 05:14:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:19.176 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d191437f-9713-476c-90c1-a17cb8775e75]: (4, ('Tue Oct 14 09:14:18 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f (a15e176523bc2c68be726efae2157510856ad583037ec2ed2302f535cb80b454)\na15e176523bc2c68be726efae2157510856ad583037ec2ed2302f535cb80b454\nTue Oct 14 09:14:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f (a15e176523bc2c68be726efae2157510856ad583037ec2ed2302f535cb80b454)\na15e176523bc2c68be726efae2157510856ad583037ec2ed2302f535cb80b454\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:14:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:19.178 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[57925bc1-d113-484c-90ea-9ab220d2c12c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:14:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:19.180 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap563aa000-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:14:19 np0005486808 kernel: tap563aa000-40: left promiscuous mode
Oct 14 05:14:19 np0005486808 nova_compute[259627]: 2025-10-14 09:14:19.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:14:19 np0005486808 nova_compute[259627]: 2025-10-14 09:14:19.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:14:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:19.214 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ddb2be5b-dff6-4ffb-b794-c395f57d5b2f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:14:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:19.233 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[80fedbda-a921-4bb1-abd7-d41fb4c61a88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:14:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:19.234 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[00c8f34e-38c4-44c3-8af5-079e58b50230]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:14:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:19.252 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1a79bbbc-29c5-43ef-b20b-0fd54a5ab4e4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701284, 'reachable_time': 32111, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356895, 'error': None, 'target': 'ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:14:19 np0005486808 systemd[1]: run-netns-ovnmeta\x2d563aa000\x2d400f\x2d4c19\x2dba83\x2d9377cc50d29f.mount: Deactivated successfully.
Oct 14 05:14:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:19.258 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 05:14:19 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:19.259 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[064e7ffd-1507-4fa7-9e31-e978c946bdb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:14:19 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:14:19 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1530003336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:14:19 np0005486808 nova_compute[259627]: 2025-10-14 09:14:19.370 2 DEBUG oslo_concurrency.processutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:14:19 np0005486808 nova_compute[259627]: 2025-10-14 09:14:19.376 2 DEBUG nova.compute.provider_tree [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 05:14:19 np0005486808 nova_compute[259627]: 2025-10-14 09:14:19.398 2 DEBUG nova.scheduler.client.report [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 05:14:19 np0005486808 nova_compute[259627]: 2025-10-14 09:14:19.421 2 DEBUG oslo_concurrency.lockutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:14:19 np0005486808 nova_compute[259627]: 2025-10-14 09:14:19.422 2 DEBUG nova.compute.manager [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 05:14:19 np0005486808 nova_compute[259627]: 2025-10-14 09:14:19.474 2 DEBUG nova.compute.manager [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 05:14:19 np0005486808 nova_compute[259627]: 2025-10-14 09:14:19.475 2 DEBUG nova.network.neutron [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 05:14:19 np0005486808 nova_compute[259627]: 2025-10-14 09:14:19.496 2 INFO nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 05:14:19 np0005486808 nova_compute[259627]: 2025-10-14 09:14:19.519 2 DEBUG nova.compute.manager [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 05:14:19 np0005486808 nova_compute[259627]: 2025-10-14 09:14:19.600 2 INFO nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Instance shutdown successfully after 3 seconds.
Oct 14 05:14:19 np0005486808 nova_compute[259627]: 2025-10-14 09:14:19.606 2 INFO nova.virt.libvirt.driver [-] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Instance destroyed successfully.
Oct 14 05:14:19 np0005486808 nova_compute[259627]: 2025-10-14 09:14:19.610 2 INFO nova.virt.libvirt.driver [-] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Instance destroyed successfully.
Oct 14 05:14:19 np0005486808 nova_compute[259627]: 2025-10-14 09:14:19.611 2 DEBUG nova.virt.libvirt.vif [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:13:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1399788817',display_name='tempest-TestNetworkAdvancedServerOps-server-1399788817',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1399788817',id=96,image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKu+EdNTbLcJJySiqS/IEqa/tEPsTFHNnnibtN2r3Vh53iyUeSuOyPo0wb3WDr0n5pX4AT4Pz90bVLkNIFNLc4XvdDhQRiV0WmHkT7tU8LMbcG0FpGnJS7D9bMgie0zW1g==',key_name='tempest-TestNetworkAdvancedServerOps-1640438953',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:13:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2d24993a343a425dbddac7e32be0c86b',ramdisk_id='',reservation_id='r-qh9caax0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-94788416',owner_user_name='tempest-TestNetworkAdvancedServerOps-94788416-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:14:15Z,user_data=None,user_id='e992bcb79c4946a8985e3df25eb216ca',uuid=b595141f-123e-4250-bfec-888d866fd0c6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7103ce4a-69e8-454b-aed3-251ecb109232", "address": "fa:16:3e:9d:3c:de", "network": {"id": "563aa000-400f-4c19-ba83-9377cc50d29f", "bridge": "br-int", "label": "tempest-network-smoke--1642362496", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7103ce4a-69", "ovs_interfaceid": "7103ce4a-69e8-454b-aed3-251ecb109232", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 05:14:19 np0005486808 nova_compute[259627]: 2025-10-14 09:14:19.612 2 DEBUG nova.network.os_vif_util [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converting VIF {"id": "7103ce4a-69e8-454b-aed3-251ecb109232", "address": "fa:16:3e:9d:3c:de", "network": {"id": "563aa000-400f-4c19-ba83-9377cc50d29f", "bridge": "br-int", "label": "tempest-network-smoke--1642362496", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7103ce4a-69", "ovs_interfaceid": "7103ce4a-69e8-454b-aed3-251ecb109232", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 05:14:19 np0005486808 nova_compute[259627]: 2025-10-14 09:14:19.613 2 DEBUG nova.network.os_vif_util [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9d:3c:de,bridge_name='br-int',has_traffic_filtering=True,id=7103ce4a-69e8-454b-aed3-251ecb109232,network=Network(563aa000-400f-4c19-ba83-9377cc50d29f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7103ce4a-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 05:14:19 np0005486808 nova_compute[259627]: 2025-10-14 09:14:19.613 2 DEBUG os_vif [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:3c:de,bridge_name='br-int',has_traffic_filtering=True,id=7103ce4a-69e8-454b-aed3-251ecb109232,network=Network(563aa000-400f-4c19-ba83-9377cc50d29f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7103ce4a-69') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 05:14:19 np0005486808 nova_compute[259627]: 2025-10-14 09:14:19.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:19 np0005486808 nova_compute[259627]: 2025-10-14 09:14:19.616 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7103ce4a-69, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:19 np0005486808 nova_compute[259627]: 2025-10-14 09:14:19.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:19 np0005486808 nova_compute[259627]: 2025-10-14 09:14:19.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:14:19 np0005486808 nova_compute[259627]: 2025-10-14 09:14:19.625 2 INFO os_vif [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:3c:de,bridge_name='br-int',has_traffic_filtering=True,id=7103ce4a-69e8-454b-aed3-251ecb109232,network=Network(563aa000-400f-4c19-ba83-9377cc50d29f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7103ce4a-69')#033[00m
Oct 14 05:14:19 np0005486808 nova_compute[259627]: 2025-10-14 09:14:19.644 2 DEBUG nova.compute.manager [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:14:19 np0005486808 nova_compute[259627]: 2025-10-14 09:14:19.646 2 DEBUG nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:14:19 np0005486808 nova_compute[259627]: 2025-10-14 09:14:19.647 2 INFO nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Creating image(s)#033[00m
Oct 14 05:14:19 np0005486808 nova_compute[259627]: 2025-10-14 09:14:19.667 2 DEBUG nova.storage.rbd_utils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image 17db7f38-479a-4d56-9424-7f5ab695ccea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:19 np0005486808 nova_compute[259627]: 2025-10-14 09:14:19.695 2 DEBUG nova.storage.rbd_utils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image 17db7f38-479a-4d56-9424-7f5ab695ccea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:19 np0005486808 nova_compute[259627]: 2025-10-14 09:14:19.721 2 DEBUG nova.storage.rbd_utils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image 17db7f38-479a-4d56-9424-7f5ab695ccea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:19 np0005486808 nova_compute[259627]: 2025-10-14 09:14:19.725 2 DEBUG oslo_concurrency.processutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:19 np0005486808 nova_compute[259627]: 2025-10-14 09:14:19.768 2 DEBUG nova.policy [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a287ef08fc5c4f218bf06cd2c7ed021e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0a080fae2f3c4e39a6cca225203f5ec6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:14:19 np0005486808 nova_compute[259627]: 2025-10-14 09:14:19.810 2 DEBUG oslo_concurrency.processutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:19 np0005486808 nova_compute[259627]: 2025-10-14 09:14:19.811 2 DEBUG oslo_concurrency.lockutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:19 np0005486808 nova_compute[259627]: 2025-10-14 09:14:19.812 2 DEBUG oslo_concurrency.lockutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:19 np0005486808 nova_compute[259627]: 2025-10-14 09:14:19.812 2 DEBUG oslo_concurrency.lockutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:19 np0005486808 nova_compute[259627]: 2025-10-14 09:14:19.839 2 DEBUG nova.storage.rbd_utils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image 17db7f38-479a-4d56-9424-7f5ab695ccea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:19 np0005486808 nova_compute[259627]: 2025-10-14 09:14:19.843 2 DEBUG oslo_concurrency.processutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 17db7f38-479a-4d56-9424-7f5ab695ccea_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:19 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1802: 305 pgs: 305 active+clean; 326 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 4.7 MiB/s wr, 318 op/s
Oct 14 05:14:20 np0005486808 nova_compute[259627]: 2025-10-14 09:14:20.139 2 DEBUG oslo_concurrency.processutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 17db7f38-479a-4d56-9424-7f5ab695ccea_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.296s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:20 np0005486808 nova_compute[259627]: 2025-10-14 09:14:20.214 2 DEBUG nova.storage.rbd_utils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] resizing rbd image 17db7f38-479a-4d56-9424-7f5ab695ccea_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:14:20 np0005486808 nova_compute[259627]: 2025-10-14 09:14:20.253 2 INFO nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Deleting instance files /var/lib/nova/instances/b595141f-123e-4250-bfec-888d866fd0c6_del#033[00m
Oct 14 05:14:20 np0005486808 nova_compute[259627]: 2025-10-14 09:14:20.254 2 INFO nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Deletion of /var/lib/nova/instances/b595141f-123e-4250-bfec-888d866fd0c6_del complete#033[00m
Oct 14 05:14:20 np0005486808 nova_compute[259627]: 2025-10-14 09:14:20.317 2 DEBUG nova.objects.instance [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'migration_context' on Instance uuid 17db7f38-479a-4d56-9424-7f5ab695ccea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:14:20 np0005486808 nova_compute[259627]: 2025-10-14 09:14:20.330 2 DEBUG nova.network.neutron [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Successfully created port: b162ed75-30c8-4d39-97d3-7baa4c970ef6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:14:20 np0005486808 nova_compute[259627]: 2025-10-14 09:14:20.333 2 DEBUG nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:14:20 np0005486808 nova_compute[259627]: 2025-10-14 09:14:20.333 2 DEBUG nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Ensure instance console log exists: /var/lib/nova/instances/17db7f38-479a-4d56-9424-7f5ab695ccea/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:14:20 np0005486808 nova_compute[259627]: 2025-10-14 09:14:20.334 2 DEBUG oslo_concurrency.lockutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:20 np0005486808 nova_compute[259627]: 2025-10-14 09:14:20.334 2 DEBUG oslo_concurrency.lockutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:20 np0005486808 nova_compute[259627]: 2025-10-14 09:14:20.335 2 DEBUG oslo_concurrency.lockutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:20 np0005486808 nova_compute[259627]: 2025-10-14 09:14:20.409 2 DEBUG nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:14:20 np0005486808 nova_compute[259627]: 2025-10-14 09:14:20.410 2 INFO nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Creating image(s)#033[00m
Oct 14 05:14:20 np0005486808 nova_compute[259627]: 2025-10-14 09:14:20.430 2 DEBUG nova.storage.rbd_utils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image b595141f-123e-4250-bfec-888d866fd0c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:20 np0005486808 nova_compute[259627]: 2025-10-14 09:14:20.453 2 DEBUG nova.storage.rbd_utils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image b595141f-123e-4250-bfec-888d866fd0c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:20 np0005486808 nova_compute[259627]: 2025-10-14 09:14:20.473 2 DEBUG nova.storage.rbd_utils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image b595141f-123e-4250-bfec-888d866fd0c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:20 np0005486808 nova_compute[259627]: 2025-10-14 09:14:20.475 2 DEBUG oslo_concurrency.processutils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:20 np0005486808 nova_compute[259627]: 2025-10-14 09:14:20.579 2 DEBUG oslo_concurrency.processutils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:20 np0005486808 nova_compute[259627]: 2025-10-14 09:14:20.581 2 DEBUG oslo_concurrency.lockutils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:20 np0005486808 nova_compute[259627]: 2025-10-14 09:14:20.582 2 DEBUG oslo_concurrency.lockutils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:20 np0005486808 nova_compute[259627]: 2025-10-14 09:14:20.582 2 DEBUG oslo_concurrency.lockutils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:20 np0005486808 nova_compute[259627]: 2025-10-14 09:14:20.611 2 DEBUG nova.storage.rbd_utils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image b595141f-123e-4250-bfec-888d866fd0c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:20 np0005486808 nova_compute[259627]: 2025-10-14 09:14:20.616 2 DEBUG oslo_concurrency.processutils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf b595141f-123e-4250-bfec-888d866fd0c6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:20 np0005486808 nova_compute[259627]: 2025-10-14 09:14:20.669 2 DEBUG nova.compute.manager [req-50e292c3-a091-4466-9673-1db3c4888649 req-8b1b8649-4cbf-4a96-8f08-1e1daec84efb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received event network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:20 np0005486808 nova_compute[259627]: 2025-10-14 09:14:20.670 2 DEBUG oslo_concurrency.lockutils [req-50e292c3-a091-4466-9673-1db3c4888649 req-8b1b8649-4cbf-4a96-8f08-1e1daec84efb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:20 np0005486808 nova_compute[259627]: 2025-10-14 09:14:20.671 2 DEBUG oslo_concurrency.lockutils [req-50e292c3-a091-4466-9673-1db3c4888649 req-8b1b8649-4cbf-4a96-8f08-1e1daec84efb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:20 np0005486808 nova_compute[259627]: 2025-10-14 09:14:20.671 2 DEBUG oslo_concurrency.lockutils [req-50e292c3-a091-4466-9673-1db3c4888649 req-8b1b8649-4cbf-4a96-8f08-1e1daec84efb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:20 np0005486808 nova_compute[259627]: 2025-10-14 09:14:20.672 2 DEBUG nova.compute.manager [req-50e292c3-a091-4466-9673-1db3c4888649 req-8b1b8649-4cbf-4a96-8f08-1e1daec84efb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] No waiting events found dispatching network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:14:20 np0005486808 nova_compute[259627]: 2025-10-14 09:14:20.672 2 WARNING nova.compute.manager [req-50e292c3-a091-4466-9673-1db3c4888649 req-8b1b8649-4cbf-4a96-8f08-1e1daec84efb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received unexpected event network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:14:20 np0005486808 nova_compute[259627]: 2025-10-14 09:14:20.673 2 DEBUG nova.compute.manager [req-50e292c3-a091-4466-9673-1db3c4888649 req-8b1b8649-4cbf-4a96-8f08-1e1daec84efb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received event network-vif-unplugged-7103ce4a-69e8-454b-aed3-251ecb109232 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:20 np0005486808 nova_compute[259627]: 2025-10-14 09:14:20.673 2 DEBUG oslo_concurrency.lockutils [req-50e292c3-a091-4466-9673-1db3c4888649 req-8b1b8649-4cbf-4a96-8f08-1e1daec84efb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b595141f-123e-4250-bfec-888d866fd0c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:20 np0005486808 nova_compute[259627]: 2025-10-14 09:14:20.674 2 DEBUG oslo_concurrency.lockutils [req-50e292c3-a091-4466-9673-1db3c4888649 req-8b1b8649-4cbf-4a96-8f08-1e1daec84efb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:20 np0005486808 nova_compute[259627]: 2025-10-14 09:14:20.674 2 DEBUG oslo_concurrency.lockutils [req-50e292c3-a091-4466-9673-1db3c4888649 req-8b1b8649-4cbf-4a96-8f08-1e1daec84efb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:20 np0005486808 nova_compute[259627]: 2025-10-14 09:14:20.675 2 DEBUG nova.compute.manager [req-50e292c3-a091-4466-9673-1db3c4888649 req-8b1b8649-4cbf-4a96-8f08-1e1daec84efb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] No waiting events found dispatching network-vif-unplugged-7103ce4a-69e8-454b-aed3-251ecb109232 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:14:20 np0005486808 nova_compute[259627]: 2025-10-14 09:14:20.675 2 WARNING nova.compute.manager [req-50e292c3-a091-4466-9673-1db3c4888649 req-8b1b8649-4cbf-4a96-8f08-1e1daec84efb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received unexpected event network-vif-unplugged-7103ce4a-69e8-454b-aed3-251ecb109232 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Oct 14 05:14:20 np0005486808 nova_compute[259627]: 2025-10-14 09:14:20.675 2 DEBUG nova.compute.manager [req-50e292c3-a091-4466-9673-1db3c4888649 req-8b1b8649-4cbf-4a96-8f08-1e1daec84efb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received event network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:20 np0005486808 nova_compute[259627]: 2025-10-14 09:14:20.676 2 DEBUG oslo_concurrency.lockutils [req-50e292c3-a091-4466-9673-1db3c4888649 req-8b1b8649-4cbf-4a96-8f08-1e1daec84efb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b595141f-123e-4250-bfec-888d866fd0c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:20 np0005486808 nova_compute[259627]: 2025-10-14 09:14:20.676 2 DEBUG oslo_concurrency.lockutils [req-50e292c3-a091-4466-9673-1db3c4888649 req-8b1b8649-4cbf-4a96-8f08-1e1daec84efb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:20 np0005486808 nova_compute[259627]: 2025-10-14 09:14:20.677 2 DEBUG oslo_concurrency.lockutils [req-50e292c3-a091-4466-9673-1db3c4888649 req-8b1b8649-4cbf-4a96-8f08-1e1daec84efb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:20 np0005486808 nova_compute[259627]: 2025-10-14 09:14:20.677 2 DEBUG nova.compute.manager [req-50e292c3-a091-4466-9673-1db3c4888649 req-8b1b8649-4cbf-4a96-8f08-1e1daec84efb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] No waiting events found dispatching network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:14:20 np0005486808 nova_compute[259627]: 2025-10-14 09:14:20.678 2 WARNING nova.compute.manager [req-50e292c3-a091-4466-9673-1db3c4888649 req-8b1b8649-4cbf-4a96-8f08-1e1daec84efb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received unexpected event network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Oct 14 05:14:20 np0005486808 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Oct 14 05:14:20 np0005486808 nova_compute[259627]: 2025-10-14 09:14:20.897 2 DEBUG oslo_concurrency.processutils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf b595141f-123e-4250-bfec-888d866fd0c6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.281s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.000 2 DEBUG oslo_concurrency.lockutils [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Acquiring lock "84080e43-a9f4-4b6a-889f-d76167ff715a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.001 2 DEBUG oslo_concurrency.lockutils [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Lock "84080e43-a9f4-4b6a-889f-d76167ff715a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.001 2 DEBUG oslo_concurrency.lockutils [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Acquiring lock "84080e43-a9f4-4b6a-889f-d76167ff715a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.002 2 DEBUG oslo_concurrency.lockutils [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Lock "84080e43-a9f4-4b6a-889f-d76167ff715a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.002 2 DEBUG oslo_concurrency.lockutils [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Lock "84080e43-a9f4-4b6a-889f-d76167ff715a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.005 2 INFO nova.compute.manager [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Terminating instance#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.007 2 DEBUG nova.compute.manager [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.013 2 DEBUG nova.storage.rbd_utils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] resizing rbd image b595141f-123e-4250-bfec-888d866fd0c6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:14:21 np0005486808 kernel: tap1e3d49fe-52 (unregistering): left promiscuous mode
Oct 14 05:14:21 np0005486808 NetworkManager[44885]: <info>  [1760433261.1079] device (tap1e3d49fe-52): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:14:21 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:21Z|01031|binding|INFO|Releasing lport 1e3d49fe-52bd-40cb-ae1a-86eb664df473 from this chassis (sb_readonly=0)
Oct 14 05:14:21 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:21Z|01032|binding|INFO|Setting lport 1e3d49fe-52bd-40cb-ae1a-86eb664df473 down in Southbound
Oct 14 05:14:21 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:21Z|01033|binding|INFO|Removing iface tap1e3d49fe-52 ovn-installed in OVS
Oct 14 05:14:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:21.130 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:6b:bc 10.100.0.4'], port_security=['fa:16:3e:2d:6b:bc 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '84080e43-a9f4-4b6a-889f-d76167ff715a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8110a1ba-8e30-49ef-ba1c-c72228086a20', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2de05cc126e24608be28a7d5dea18bf3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3f10ca33-82fc-4f36-9ded-7e23e5949e23', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3db415ba-b4a8-48a5-b3a3-5e4c9b11b067, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=1e3d49fe-52bd-40cb-ae1a-86eb664df473) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:14:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:21.131 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 1e3d49fe-52bd-40cb-ae1a-86eb664df473 in datapath 8110a1ba-8e30-49ef-ba1c-c72228086a20 unbound from our chassis#033[00m
Oct 14 05:14:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:21.133 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8110a1ba-8e30-49ef-ba1c-c72228086a20, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:14:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:21.134 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[685fd784-f956-486b-a50f-b493ef3bb11c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:21.135 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20 namespace which is not needed anymore#033[00m
Oct 14 05:14:21 np0005486808 systemd[1]: machine-qemu\x2d121\x2dinstance\x2d00000062.scope: Deactivated successfully.
Oct 14 05:14:21 np0005486808 systemd[1]: machine-qemu\x2d121\x2dinstance\x2d00000062.scope: Consumed 6.499s CPU time.
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:21 np0005486808 systemd-machined[214636]: Machine qemu-121-instance-00000062 terminated.
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.186 2 DEBUG nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.187 2 DEBUG nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Ensure instance console log exists: /var/lib/nova/instances/b595141f-123e-4250-bfec-888d866fd0c6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.187 2 DEBUG oslo_concurrency.lockutils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.188 2 DEBUG oslo_concurrency.lockutils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.188 2 DEBUG oslo_concurrency.lockutils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.191 2 DEBUG nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Start _get_guest_xml network_info=[{"id": "7103ce4a-69e8-454b-aed3-251ecb109232", "address": "fa:16:3e:9d:3c:de", "network": {"id": "563aa000-400f-4c19-ba83-9377cc50d29f", "bridge": "br-int", "label": "tempest-network-smoke--1642362496", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7103ce4a-69", "ovs_interfaceid": "7103ce4a-69e8-454b-aed3-251ecb109232", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:08Z,direct_url=<?>,disk_format='qcow2',id=e2368e3e-f504-40e6-a9d3-67df18c845bb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:10Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.196 2 WARNING nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.205 2 DEBUG nova.virt.libvirt.host [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.207 2 DEBUG nova.virt.libvirt.host [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.210 2 DEBUG nova.virt.libvirt.host [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.210 2 DEBUG nova.virt.libvirt.host [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.211 2 DEBUG nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.211 2 DEBUG nova.virt.hardware [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:08Z,direct_url=<?>,disk_format='qcow2',id=e2368e3e-f504-40e6-a9d3-67df18c845bb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:10Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.212 2 DEBUG nova.virt.hardware [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.212 2 DEBUG nova.virt.hardware [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.212 2 DEBUG nova.virt.hardware [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.212 2 DEBUG nova.virt.hardware [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.213 2 DEBUG nova.virt.hardware [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.213 2 DEBUG nova.virt.hardware [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.213 2 DEBUG nova.virt.hardware [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.214 2 DEBUG nova.virt.hardware [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.214 2 DEBUG nova.virt.hardware [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.214 2 DEBUG nova.virt.hardware [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.215 2 DEBUG nova.objects.instance [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'vcpu_model' on Instance uuid b595141f-123e-4250-bfec-888d866fd0c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.236 2 DEBUG oslo_concurrency.processutils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:21 np0005486808 neutron-haproxy-ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20[356594]: [NOTICE]   (356598) : haproxy version is 2.8.14-c23fe91
Oct 14 05:14:21 np0005486808 neutron-haproxy-ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20[356594]: [NOTICE]   (356598) : path to executable is /usr/sbin/haproxy
Oct 14 05:14:21 np0005486808 neutron-haproxy-ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20[356594]: [WARNING]  (356598) : Exiting Master process...
Oct 14 05:14:21 np0005486808 neutron-haproxy-ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20[356594]: [WARNING]  (356598) : Exiting Master process...
Oct 14 05:14:21 np0005486808 neutron-haproxy-ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20[356594]: [ALERT]    (356598) : Current worker (356600) exited with code 143 (Terminated)
Oct 14 05:14:21 np0005486808 neutron-haproxy-ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20[356594]: [WARNING]  (356598) : All workers exited. Exiting... (0)
Oct 14 05:14:21 np0005486808 systemd[1]: libpod-9af3eac4977b5e91ac231d5bb80b1d9835e59533cfc524b04defb270e2219fb5.scope: Deactivated successfully.
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:21 np0005486808 podman[357271]: 2025-10-14 09:14:21.295445608 +0000 UTC m=+0.052348851 container died 9af3eac4977b5e91ac231d5bb80b1d9835e59533cfc524b04defb270e2219fb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.308 2 INFO nova.virt.libvirt.driver [-] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Instance destroyed successfully.#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.311 2 DEBUG nova.objects.instance [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Lazy-loading 'resources' on Instance uuid 84080e43-a9f4-4b6a-889f-d76167ff715a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.331 2 DEBUG nova.virt.libvirt.vif [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:14:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-250281827',display_name='tempest-ServerMetadataTestJSON-server-250281827',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-250281827',id=98,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:14:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={key1='alt1',key2='value2',key3='value3'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2de05cc126e24608be28a7d5dea18bf3',ramdisk_id='',reservation_id='r-aqxl060k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',
image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataTestJSON-10451963',owner_user_name='tempest-ServerMetadataTestJSON-10451963-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:14:20Z,user_data=None,user_id='559a8ea4f81141efa5e11da9b174482d',uuid=84080e43-a9f4-4b6a-889f-d76167ff715a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1e3d49fe-52bd-40cb-ae1a-86eb664df473", "address": "fa:16:3e:2d:6b:bc", "network": {"id": "8110a1ba-8e30-49ef-ba1c-c72228086a20", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-885916834-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de05cc126e24608be28a7d5dea18bf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e3d49fe-52", "ovs_interfaceid": "1e3d49fe-52bd-40cb-ae1a-86eb664df473", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.332 2 DEBUG nova.network.os_vif_util [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Converting VIF {"id": "1e3d49fe-52bd-40cb-ae1a-86eb664df473", "address": "fa:16:3e:2d:6b:bc", "network": {"id": "8110a1ba-8e30-49ef-ba1c-c72228086a20", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-885916834-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de05cc126e24608be28a7d5dea18bf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e3d49fe-52", "ovs_interfaceid": "1e3d49fe-52bd-40cb-ae1a-86eb664df473", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.334 2 DEBUG nova.network.os_vif_util [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:6b:bc,bridge_name='br-int',has_traffic_filtering=True,id=1e3d49fe-52bd-40cb-ae1a-86eb664df473,network=Network(8110a1ba-8e30-49ef-ba1c-c72228086a20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e3d49fe-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.335 2 DEBUG os_vif [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:6b:bc,bridge_name='br-int',has_traffic_filtering=True,id=1e3d49fe-52bd-40cb-ae1a-86eb664df473,network=Network(8110a1ba-8e30-49ef-ba1c-c72228086a20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e3d49fe-52') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:14:21 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9af3eac4977b5e91ac231d5bb80b1d9835e59533cfc524b04defb270e2219fb5-userdata-shm.mount: Deactivated successfully.
Oct 14 05:14:21 np0005486808 systemd[1]: var-lib-containers-storage-overlay-e8c703b8ad82eb6af106c099e66a521ffd9ab04da878e5ff9418cd428162b5e4-merged.mount: Deactivated successfully.
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.345 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e3d49fe-52, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.352 2 DEBUG nova.network.neutron [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Successfully updated port: b162ed75-30c8-4d39-97d3-7baa4c970ef6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:14:21 np0005486808 podman[357271]: 2025-10-14 09:14:21.354308579 +0000 UTC m=+0.111211842 container cleanup 9af3eac4977b5e91ac231d5bb80b1d9835e59533cfc524b04defb270e2219fb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.355 2 INFO os_vif [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:6b:bc,bridge_name='br-int',has_traffic_filtering=True,id=1e3d49fe-52bd-40cb-ae1a-86eb664df473,network=Network(8110a1ba-8e30-49ef-ba1c-c72228086a20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e3d49fe-52')#033[00m
Oct 14 05:14:21 np0005486808 systemd[1]: libpod-conmon-9af3eac4977b5e91ac231d5bb80b1d9835e59533cfc524b04defb270e2219fb5.scope: Deactivated successfully.
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.378 2 DEBUG oslo_concurrency.lockutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "refresh_cache-17db7f38-479a-4d56-9424-7f5ab695ccea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.379 2 DEBUG oslo_concurrency.lockutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquired lock "refresh_cache-17db7f38-479a-4d56-9424-7f5ab695ccea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.379 2 DEBUG nova.network.neutron [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:14:21 np0005486808 podman[357314]: 2025-10-14 09:14:21.429112283 +0000 UTC m=+0.050689720 container remove 9af3eac4977b5e91ac231d5bb80b1d9835e59533cfc524b04defb270e2219fb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 05:14:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:21.441 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[50ddf6de-8228-4d9f-b85c-ffffba104511]: (4, ('Tue Oct 14 09:14:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20 (9af3eac4977b5e91ac231d5bb80b1d9835e59533cfc524b04defb270e2219fb5)\n9af3eac4977b5e91ac231d5bb80b1d9835e59533cfc524b04defb270e2219fb5\nTue Oct 14 09:14:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20 (9af3eac4977b5e91ac231d5bb80b1d9835e59533cfc524b04defb270e2219fb5)\n9af3eac4977b5e91ac231d5bb80b1d9835e59533cfc524b04defb270e2219fb5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:21.444 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f41144ac-d43d-487a-8c1f-cf2fe7fd4886]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:21.445 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8110a1ba-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:21 np0005486808 kernel: tap8110a1ba-80: left promiscuous mode
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:21.453 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c984381f-290c-48a3-a38c-e25072c68fcb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:21.476 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[eb0777f3-c2be-4f17-8fed-ebfb789c2375]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:21.477 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f3e75c9e-4b71-4e27-8db6-02cf87fc0d28]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:21.493 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[40dd8064-b3c8-4b42-952f-81812cc14da1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703282, 'reachable_time': 20477, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357360, 'error': None, 'target': 'ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:21 np0005486808 systemd[1]: run-netns-ovnmeta\x2d8110a1ba\x2d8e30\x2d49ef\x2dba1c\x2dc72228086a20.mount: Deactivated successfully.
Oct 14 05:14:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:21.499 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8110a1ba-8e30-49ef-ba1c-c72228086a20 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:14:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:21.499 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[c70f7e83-0740-4b5c-86c2-843b40282071]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.596 2 DEBUG nova.network.neutron [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.663 2 DEBUG nova.compute.manager [req-8b81e394-89ab-4e24-81fd-f4041779c873 req-638b9957-8996-4cb0-a0b9-6359d0f79315 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Received event network-changed-b162ed75-30c8-4d39-97d3-7baa4c970ef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.663 2 DEBUG nova.compute.manager [req-8b81e394-89ab-4e24-81fd-f4041779c873 req-638b9957-8996-4cb0-a0b9-6359d0f79315 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Refreshing instance network info cache due to event network-changed-b162ed75-30c8-4d39-97d3-7baa4c970ef6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.664 2 DEBUG oslo_concurrency.lockutils [req-8b81e394-89ab-4e24-81fd-f4041779c873 req-638b9957-8996-4cb0-a0b9-6359d0f79315 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-17db7f38-479a-4d56-9424-7f5ab695ccea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:14:21 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:14:21 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1012694589' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.704 2 DEBUG oslo_concurrency.processutils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.737 2 DEBUG nova.storage.rbd_utils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image b595141f-123e-4250-bfec-888d866fd0c6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.743 2 DEBUG oslo_concurrency.processutils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.810 2 INFO nova.virt.libvirt.driver [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Deleting instance files /var/lib/nova/instances/84080e43-a9f4-4b6a-889f-d76167ff715a_del#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.812 2 INFO nova.virt.libvirt.driver [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Deletion of /var/lib/nova/instances/84080e43-a9f4-4b6a-889f-d76167ff715a_del complete#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.874 2 INFO nova.compute.manager [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Took 0.87 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.875 2 DEBUG oslo.service.loopingcall [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.875 2 DEBUG nova.compute.manager [-] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:14:21 np0005486808 nova_compute[259627]: 2025-10-14 09:14:21.875 2 DEBUG nova.network.neutron [-] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:14:21 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1803: 305 pgs: 305 active+clean; 280 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.6 MiB/s wr, 170 op/s
Oct 14 05:14:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:14:22 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/623412631' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.186 2 DEBUG oslo_concurrency.processutils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.187 2 DEBUG nova.virt.libvirt.vif [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-14T09:13:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1399788817',display_name='tempest-TestNetworkAdvancedServerOps-server-1399788817',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1399788817',id=96,image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKu+EdNTbLcJJySiqS/IEqa/tEPsTFHNnnibtN2r3Vh53iyUeSuOyPo0wb3WDr0n5pX4AT4Pz90bVLkNIFNLc4XvdDhQRiV0WmHkT7tU8LMbcG0FpGnJS7D9bMgie0zW1g==',key_name='tempest-TestNetworkAdvancedServerOps-1640438953',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:13:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2d24993a343a425dbddac7e32be0c86b',ramdisk_id='',reservation_id='r-qh9caax0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-94788416',owner_user_name='tempest-TestNetworkAdvancedServerOps-94788416-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:14:20Z,user_data=None,user_id='e992bcb79c4946a8985e3df25eb216ca',uuid=b595141f-123e-4250-bfec-888d866fd0c6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7103ce4a-69e8-454b-aed3-251ecb109232", "address": "fa:16:3e:9d:3c:de", "network": {"id": "563aa000-400f-4c19-ba83-9377cc50d29f", "bridge": "br-int", "label": "tempest-network-smoke--1642362496", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": 
"floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7103ce4a-69", "ovs_interfaceid": "7103ce4a-69e8-454b-aed3-251ecb109232", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.188 2 DEBUG nova.network.os_vif_util [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converting VIF {"id": "7103ce4a-69e8-454b-aed3-251ecb109232", "address": "fa:16:3e:9d:3c:de", "network": {"id": "563aa000-400f-4c19-ba83-9377cc50d29f", "bridge": "br-int", "label": "tempest-network-smoke--1642362496", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7103ce4a-69", "ovs_interfaceid": "7103ce4a-69e8-454b-aed3-251ecb109232", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.189 2 DEBUG nova.network.os_vif_util [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9d:3c:de,bridge_name='br-int',has_traffic_filtering=True,id=7103ce4a-69e8-454b-aed3-251ecb109232,network=Network(563aa000-400f-4c19-ba83-9377cc50d29f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7103ce4a-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.191 2 DEBUG nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:14:22 np0005486808 nova_compute[259627]:  <uuid>b595141f-123e-4250-bfec-888d866fd0c6</uuid>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:  <name>instance-00000060</name>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:14:22 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1399788817</nova:name>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:14:21</nova:creationTime>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:14:22 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:        <nova:user uuid="e992bcb79c4946a8985e3df25eb216ca">tempest-TestNetworkAdvancedServerOps-94788416-project-member</nova:user>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:        <nova:project uuid="2d24993a343a425dbddac7e32be0c86b">tempest-TestNetworkAdvancedServerOps-94788416</nova:project>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="e2368e3e-f504-40e6-a9d3-67df18c845bb"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:        <nova:port uuid="7103ce4a-69e8-454b-aed3-251ecb109232">
Oct 14 05:14:22 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:      <entry name="serial">b595141f-123e-4250-bfec-888d866fd0c6</entry>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:      <entry name="uuid">b595141f-123e-4250-bfec-888d866fd0c6</entry>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:14:22 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/b595141f-123e-4250-bfec-888d866fd0c6_disk">
Oct 14 05:14:22 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:14:22 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:14:22 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/b595141f-123e-4250-bfec-888d866fd0c6_disk.config">
Oct 14 05:14:22 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:14:22 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:14:22 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:9d:3c:de"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:      <target dev="tap7103ce4a-69"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:14:22 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/b595141f-123e-4250-bfec-888d866fd0c6/console.log" append="off"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:14:22 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:14:22 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:14:22 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:14:22 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:14:22 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.193 2 DEBUG nova.virt.libvirt.vif [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-14T09:13:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1399788817',display_name='tempest-TestNetworkAdvancedServerOps-server-1399788817',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1399788817',id=96,image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKu+EdNTbLcJJySiqS/IEqa/tEPsTFHNnnibtN2r3Vh53iyUeSuOyPo0wb3WDr0n5pX4AT4Pz90bVLkNIFNLc4XvdDhQRiV0WmHkT7tU8LMbcG0FpGnJS7D9bMgie0zW1g==',key_name='tempest-TestNetworkAdvancedServerOps-1640438953',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:13:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2d24993a343a425dbddac7e32be0c86b',ramdisk_id='',reservation_id='r-qh9caax0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-94788416',owner_user_name='tempest-TestNetworkAdvancedServerOps-94788416-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:14:20Z,user_data=None,user_id='e992bcb79c4946a8985e3df25eb216ca',uuid=b595141f-123e-4250-bfec-888d866fd0c6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7103ce4a-69e8-454b-aed3-251ecb109232", "address": "fa:16:3e:9d:3c:de", "network": {"id": "563aa000-400f-4c19-ba83-9377cc50d29f", "bridge": "br-int", "label": "tempest-network-smoke--1642362496", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": 
"floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7103ce4a-69", "ovs_interfaceid": "7103ce4a-69e8-454b-aed3-251ecb109232", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.193 2 DEBUG nova.network.os_vif_util [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converting VIF {"id": "7103ce4a-69e8-454b-aed3-251ecb109232", "address": "fa:16:3e:9d:3c:de", "network": {"id": "563aa000-400f-4c19-ba83-9377cc50d29f", "bridge": "br-int", "label": "tempest-network-smoke--1642362496", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7103ce4a-69", "ovs_interfaceid": "7103ce4a-69e8-454b-aed3-251ecb109232", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.194 2 DEBUG nova.network.os_vif_util [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9d:3c:de,bridge_name='br-int',has_traffic_filtering=True,id=7103ce4a-69e8-454b-aed3-251ecb109232,network=Network(563aa000-400f-4c19-ba83-9377cc50d29f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7103ce4a-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.194 2 DEBUG os_vif [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:3c:de,bridge_name='br-int',has_traffic_filtering=True,id=7103ce4a-69e8-454b-aed3-251ecb109232,network=Network(563aa000-400f-4c19-ba83-9377cc50d29f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7103ce4a-69') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.196 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.196 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.198 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7103ce4a-69, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.199 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7103ce4a-69, col_values=(('external_ids', {'iface-id': '7103ce4a-69e8-454b-aed3-251ecb109232', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:3c:de', 'vm-uuid': 'b595141f-123e-4250-bfec-888d866fd0c6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:22 np0005486808 NetworkManager[44885]: <info>  [1760433262.2015] manager: (tap7103ce4a-69): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/425)
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.205 2 INFO os_vif [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:3c:de,bridge_name='br-int',has_traffic_filtering=True,id=7103ce4a-69e8-454b-aed3-251ecb109232,network=Network(563aa000-400f-4c19-ba83-9377cc50d29f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7103ce4a-69')#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.265 2 DEBUG nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.265 2 DEBUG nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.265 2 DEBUG nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] No VIF found with MAC fa:16:3e:9d:3c:de, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.266 2 INFO nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Using config drive#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.289 2 DEBUG nova.storage.rbd_utils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image b595141f-123e-4250-bfec-888d866fd0c6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.313 2 DEBUG nova.objects.instance [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'ec2_ids' on Instance uuid b595141f-123e-4250-bfec-888d866fd0c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.347 2 DEBUG nova.objects.instance [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'keypairs' on Instance uuid b595141f-123e-4250-bfec-888d866fd0c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.539 2 DEBUG nova.network.neutron [-] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.575 2 INFO nova.compute.manager [-] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Took 0.70 seconds to deallocate network for instance.#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.621 2 DEBUG nova.network.neutron [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Updating instance_info_cache with network_info: [{"id": "b162ed75-30c8-4d39-97d3-7baa4c970ef6", "address": "fa:16:3e:b3:f9:07", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb162ed75-30", "ovs_interfaceid": "b162ed75-30c8-4d39-97d3-7baa4c970ef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.628 2 DEBUG oslo_concurrency.lockutils [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.628 2 DEBUG oslo_concurrency.lockutils [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.641 2 DEBUG oslo_concurrency.lockutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Releasing lock "refresh_cache-17db7f38-479a-4d56-9424-7f5ab695ccea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.642 2 DEBUG nova.compute.manager [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Instance network_info: |[{"id": "b162ed75-30c8-4d39-97d3-7baa4c970ef6", "address": "fa:16:3e:b3:f9:07", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb162ed75-30", "ovs_interfaceid": "b162ed75-30c8-4d39-97d3-7baa4c970ef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.642 2 DEBUG oslo_concurrency.lockutils [req-8b81e394-89ab-4e24-81fd-f4041779c873 req-638b9957-8996-4cb0-a0b9-6359d0f79315 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-17db7f38-479a-4d56-9424-7f5ab695ccea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.642 2 DEBUG nova.network.neutron [req-8b81e394-89ab-4e24-81fd-f4041779c873 req-638b9957-8996-4cb0-a0b9-6359d0f79315 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Refreshing network info cache for port b162ed75-30c8-4d39-97d3-7baa4c970ef6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.646 2 DEBUG nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Start _get_guest_xml network_info=[{"id": "b162ed75-30c8-4d39-97d3-7baa4c970ef6", "address": "fa:16:3e:b3:f9:07", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb162ed75-30", "ovs_interfaceid": "b162ed75-30c8-4d39-97d3-7baa4c970ef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.652 2 WARNING nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.657 2 DEBUG nova.virt.libvirt.host [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.658 2 DEBUG nova.virt.libvirt.host [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.665 2 DEBUG nova.virt.libvirt.host [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.666 2 DEBUG nova.virt.libvirt.host [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.666 2 DEBUG nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.667 2 DEBUG nova.virt.hardware [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.667 2 DEBUG nova.virt.hardware [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.668 2 DEBUG nova.virt.hardware [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.668 2 DEBUG nova.virt.hardware [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.668 2 DEBUG nova.virt.hardware [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.669 2 DEBUG nova.virt.hardware [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.669 2 DEBUG nova.virt.hardware [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.669 2 DEBUG nova.virt.hardware [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.669 2 DEBUG nova.virt.hardware [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.670 2 DEBUG nova.virt.hardware [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.670 2 DEBUG nova.virt.hardware [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.673 2 DEBUG oslo_concurrency.processutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.755 2 INFO nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Creating config drive at /var/lib/nova/instances/b595141f-123e-4250-bfec-888d866fd0c6/disk.config#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.761 2 DEBUG oslo_concurrency.processutils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b595141f-123e-4250-bfec-888d866fd0c6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpls5qk_u6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.883 2 DEBUG oslo_concurrency.processutils [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.933 2 DEBUG oslo_concurrency.processutils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b595141f-123e-4250-bfec-888d866fd0c6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpls5qk_u6" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.974 2 DEBUG nova.storage.rbd_utils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image b595141f-123e-4250-bfec-888d866fd0c6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:22 np0005486808 nova_compute[259627]: 2025-10-14 09:14:22.979 2 DEBUG oslo_concurrency.processutils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b595141f-123e-4250-bfec-888d866fd0c6/disk.config b595141f-123e-4250-bfec-888d866fd0c6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.028 2 DEBUG nova.compute.manager [req-a62a6389-6bfa-419b-ba27-2b53d77493eb req-5328019b-8357-4548-bccc-59b6666f6bb1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Received event network-vif-deleted-1e3d49fe-52bd-40cb-ae1a-86eb664df473 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.169 2 DEBUG oslo_concurrency.processutils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b595141f-123e-4250-bfec-888d866fd0c6/disk.config b595141f-123e-4250-bfec-888d866fd0c6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.170 2 INFO nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Deleting local config drive /var/lib/nova/instances/b595141f-123e-4250-bfec-888d866fd0c6/disk.config because it was imported into RBD.#033[00m
Oct 14 05:14:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:14:23 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3814762178' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:14:23 np0005486808 kernel: tap7103ce4a-69: entered promiscuous mode
Oct 14 05:14:23 np0005486808 systemd-udevd[357250]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:14:23 np0005486808 NetworkManager[44885]: <info>  [1760433263.2206] manager: (tap7103ce4a-69): new Tun device (/org/freedesktop/NetworkManager/Devices/426)
Oct 14 05:14:23 np0005486808 NetworkManager[44885]: <info>  [1760433263.2361] device (tap7103ce4a-69): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:14:23 np0005486808 NetworkManager[44885]: <info>  [1760433263.2368] device (tap7103ce4a-69): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.258 2 DEBUG oslo_concurrency.processutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:23 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:23Z|01034|binding|INFO|Claiming lport 7103ce4a-69e8-454b-aed3-251ecb109232 for this chassis.
Oct 14 05:14:23 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:23Z|01035|binding|INFO|7103ce4a-69e8-454b-aed3-251ecb109232: Claiming fa:16:3e:9d:3c:de 10.100.0.5
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.273 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:3c:de 10.100.0.5'], port_security=['fa:16:3e:9d:3c:de 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b595141f-123e-4250-bfec-888d866fd0c6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-563aa000-400f-4c19-ba83-9377cc50d29f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d24993a343a425dbddac7e32be0c86b', 'neutron:revision_number': '5', 'neutron:security_group_ids': '97c0b0e2-9440-40e7-a61b-2eb79520e4e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.180'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1e572d4-df42-4a93-ac49-d93e0906a5ee, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=7103ce4a-69e8-454b-aed3-251ecb109232) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.274 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 7103ce4a-69e8-454b-aed3-251ecb109232 in datapath 563aa000-400f-4c19-ba83-9377cc50d29f bound to our chassis#033[00m
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.275 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 563aa000-400f-4c19-ba83-9377cc50d29f#033[00m
Oct 14 05:14:23 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:23Z|01036|binding|INFO|Setting lport 7103ce4a-69e8-454b-aed3-251ecb109232 ovn-installed in OVS
Oct 14 05:14:23 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:23Z|01037|binding|INFO|Setting lport 7103ce4a-69e8-454b-aed3-251ecb109232 up in Southbound
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.295 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ebc92582-e081-4622-8db0-5c53ebad6cb6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.295 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap563aa000-41 in ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:14:23 np0005486808 systemd-machined[214636]: New machine qemu-123-instance-00000060.
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.298 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap563aa000-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.298 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[377ed8f3-27a1-4e4d-b21d-9cff3960915e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.301 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[12dc08e6-75f8-4d11-acff-58fdb002fe1f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:23 np0005486808 systemd[1]: Started Virtual Machine qemu-123-instance-00000060.
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.318 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[a6f1aa3a-22a8-4b21-9b94-dee5ed53f534]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.330 2 DEBUG nova.storage.rbd_utils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image 17db7f38-479a-4d56-9424-7f5ab695ccea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.336 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[13d0ca99-ac58-4b47-9dba-f151aa7306d1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.338 2 DEBUG oslo_concurrency.processutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:14:23 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4082960643' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.379 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[aff7b29b-9a02-46c1-9e3d-f8a5c797bc95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.385 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f73c79a4-e9ac-4e87-988c-31f08e4f2b02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:23 np0005486808 NetworkManager[44885]: <info>  [1760433263.3873] manager: (tap563aa000-40): new Veth device (/org/freedesktop/NetworkManager/Devices/427)
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.407 2 DEBUG oslo_concurrency.processutils [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.424 2 DEBUG nova.compute.provider_tree [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.435 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e1549f98-7f96-41c1-8568-b592d5029280]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.439 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[72673bb3-9f0f-4887-ac5a-5d20867ec9b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.450 2 DEBUG nova.scheduler.client.report [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:14:23 np0005486808 NetworkManager[44885]: <info>  [1760433263.4611] device (tap563aa000-40): carrier: link connected
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.469 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a2b7a01a-52ff-4b21-b69a-6263120c55eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.485 2 DEBUG oslo_concurrency.lockutils [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.856s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.487 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[be24fe55-844c-43fc-997c-e679d980ee0b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap563aa000-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:cd:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 303], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704222, 'reachable_time': 29378, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357570, 'error': None, 'target': 'ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.503 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5c32e5f8-ddf1-4f0e-b527-ccec24a63ef0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5f:cd84'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 704222, 'tstamp': 704222}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357572, 'error': None, 'target': 'ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.511 2 INFO nova.scheduler.client.report [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Deleted allocations for instance 84080e43-a9f4-4b6a-889f-d76167ff715a#033[00m
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.520 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a02f475e-143f-4dab-ad17-361f99a0e082]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap563aa000-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:cd:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 303], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704222, 'reachable_time': 29378, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 357582, 'error': None, 'target': 'ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.558 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6051ddaf-0896-4f8d-8f80-7730122ba119]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.580 2 DEBUG oslo_concurrency.lockutils [None req-2ead96b8-8538-4805-976c-ff02a42c6652 559a8ea4f81141efa5e11da9b174482d 2de05cc126e24608be28a7d5dea18bf3 - - default default] Lock "84080e43-a9f4-4b6a-889f-d76167ff715a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.579s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.624 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e6cd3c04-a23c-4b9c-9c27-389a31ee1d26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.625 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap563aa000-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.625 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.626 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap563aa000-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:23 np0005486808 NetworkManager[44885]: <info>  [1760433263.6283] manager: (tap563aa000-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/428)
Oct 14 05:14:23 np0005486808 kernel: tap563aa000-40: entered promiscuous mode
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.633 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap563aa000-40, col_values=(('external_ids', {'iface-id': '4b7b52fe-6c74-46c3-ab83-c118ed2fe8eb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:23 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:23Z|01038|binding|INFO|Releasing lport 4b7b52fe-6c74-46c3-ab83-c118ed2fe8eb from this chassis (sb_readonly=0)
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.668 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/563aa000-400f-4c19-ba83-9377cc50d29f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/563aa000-400f-4c19-ba83-9377cc50d29f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.670 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4a7e9aec-54b4-4dd1-a526-987738275bad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.671 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-563aa000-400f-4c19-ba83-9377cc50d29f
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/563aa000-400f-4c19-ba83-9377cc50d29f.pid.haproxy
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 563aa000-400f-4c19-ba83-9377cc50d29f
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:14:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:23.673 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f', 'env', 'PROCESS_TAG=haproxy-563aa000-400f-4c19-ba83-9377cc50d29f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/563aa000-400f-4c19-ba83-9377cc50d29f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:14:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:14:23 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/82152448' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.832 2 DEBUG oslo_concurrency.processutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.833 2 DEBUG nova.virt.libvirt.vif [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:14:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-982043822',display_name='tempest-ServersTestJSON-server-982043822',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-982043822',id=99,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPk/Vx8N8VoO1zmSJqBglSY43YqSujWcA6ZiDD1toYBIpurFKRZBjCOKZgtOBXyQC4T0DxGqSPJNxdy1Ur96CBm27LV9yA38NPDP8uy5/wFRSRGqBndEo3EUYE7tpxK96w==',key_name='tempest-key-1288000849',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0a080fae2f3c4e39a6cca225203f5ec6',ramdisk_id='',reservation_id='r-a12i0l08',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-2060951674',owner_user_name='tempest-ServersTestJSON-2060951674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:14:19Z,user_data=None,user_id='a287ef08fc5c4f218bf06cd2c7ed021e',uuid=17db7f38-479a-4d56-9424-7f5ab695ccea,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b162ed75-30c8-4d39-97d3-7baa4c970ef6", "address": "fa:16:3e:b3:f9:07", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb162ed75-30", "ovs_interfaceid": "b162ed75-30c8-4d39-97d3-7baa4c970ef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.834 2 DEBUG nova.network.os_vif_util [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converting VIF {"id": "b162ed75-30c8-4d39-97d3-7baa4c970ef6", "address": "fa:16:3e:b3:f9:07", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb162ed75-30", "ovs_interfaceid": "b162ed75-30c8-4d39-97d3-7baa4c970ef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.835 2 DEBUG nova.network.os_vif_util [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:f9:07,bridge_name='br-int',has_traffic_filtering=True,id=b162ed75-30c8-4d39-97d3-7baa4c970ef6,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb162ed75-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.836 2 DEBUG nova.objects.instance [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 17db7f38-479a-4d56-9424-7f5ab695ccea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.850 2 DEBUG nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:14:23 np0005486808 nova_compute[259627]:  <uuid>17db7f38-479a-4d56-9424-7f5ab695ccea</uuid>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:  <name>instance-00000063</name>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:14:23 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:      <nova:name>tempest-ServersTestJSON-server-982043822</nova:name>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:14:22</nova:creationTime>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:14:23 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:        <nova:user uuid="a287ef08fc5c4f218bf06cd2c7ed021e">tempest-ServersTestJSON-2060951674-project-member</nova:user>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:        <nova:project uuid="0a080fae2f3c4e39a6cca225203f5ec6">tempest-ServersTestJSON-2060951674</nova:project>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:        <nova:port uuid="b162ed75-30c8-4d39-97d3-7baa4c970ef6">
Oct 14 05:14:23 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:      <entry name="serial">17db7f38-479a-4d56-9424-7f5ab695ccea</entry>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:      <entry name="uuid">17db7f38-479a-4d56-9424-7f5ab695ccea</entry>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:14:23 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/17db7f38-479a-4d56-9424-7f5ab695ccea_disk">
Oct 14 05:14:23 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:14:23 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:14:23 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/17db7f38-479a-4d56-9424-7f5ab695ccea_disk.config">
Oct 14 05:14:23 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:14:23 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:14:23 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:b3:f9:07"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:      <target dev="tapb162ed75-30"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:14:23 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/17db7f38-479a-4d56-9424-7f5ab695ccea/console.log" append="off"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:14:23 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:14:23 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:14:23 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:14:23 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:14:23 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.850 2 DEBUG nova.compute.manager [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Preparing to wait for external event network-vif-plugged-b162ed75-30c8-4d39-97d3-7baa4c970ef6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.851 2 DEBUG oslo_concurrency.lockutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "17db7f38-479a-4d56-9424-7f5ab695ccea-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.851 2 DEBUG oslo_concurrency.lockutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "17db7f38-479a-4d56-9424-7f5ab695ccea-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.851 2 DEBUG oslo_concurrency.lockutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "17db7f38-479a-4d56-9424-7f5ab695ccea-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.852 2 DEBUG nova.virt.libvirt.vif [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:14:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-982043822',display_name='tempest-ServersTestJSON-server-982043822',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-982043822',id=99,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPk/Vx8N8VoO1zmSJqBglSY43YqSujWcA6ZiDD1toYBIpurFKRZBjCOKZgtOBXyQC4T0DxGqSPJNxdy1Ur96CBm27LV9yA38NPDP8uy5/wFRSRGqBndEo3EUYE7tpxK96w==',key_name='tempest-key-1288000849',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0a080fae2f3c4e39a6cca225203f5ec6',ramdisk_id='',reservation_id='r-a12i0l08',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-2060951674',owner_user_name='tempest-ServersTestJSON-2060951674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:14:19Z,user_data=None,user_id='a287ef08fc5c4f218bf06cd2c7ed021e',uuid=17db7f38-479a-4d56-9424-7f5ab695ccea,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b162ed75-30c8-4d39-97d3-7baa4c970ef6", "address": "fa:16:3e:b3:f9:07", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb162ed75-30", "ovs_interfaceid": "b162ed75-30c8-4d39-97d3-7baa4c970ef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.852 2 DEBUG nova.network.os_vif_util [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converting VIF {"id": "b162ed75-30c8-4d39-97d3-7baa4c970ef6", "address": "fa:16:3e:b3:f9:07", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb162ed75-30", "ovs_interfaceid": "b162ed75-30c8-4d39-97d3-7baa4c970ef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.853 2 DEBUG nova.network.os_vif_util [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:f9:07,bridge_name='br-int',has_traffic_filtering=True,id=b162ed75-30c8-4d39-97d3-7baa4c970ef6,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb162ed75-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.854 2 DEBUG os_vif [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:f9:07,bridge_name='br-int',has_traffic_filtering=True,id=b162ed75-30c8-4d39-97d3-7baa4c970ef6,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb162ed75-30') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.855 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.855 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.857 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb162ed75-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.858 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb162ed75-30, col_values=(('external_ids', {'iface-id': 'b162ed75-30c8-4d39-97d3-7baa4c970ef6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b3:f9:07', 'vm-uuid': '17db7f38-479a-4d56-9424-7f5ab695ccea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:23 np0005486808 NetworkManager[44885]: <info>  [1760433263.8604] manager: (tapb162ed75-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/429)
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.869 2 INFO os_vif [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:f9:07,bridge_name='br-int',has_traffic_filtering=True,id=b162ed75-30c8-4d39-97d3-7baa4c970ef6,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb162ed75-30')#033[00m
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.923 2 DEBUG nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.923 2 DEBUG nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.924 2 DEBUG nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] No VIF found with MAC fa:16:3e:b3:f9:07, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.924 2 INFO nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Using config drive#033[00m
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.946 2 DEBUG nova.storage.rbd_utils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image 17db7f38-479a-4d56-9424-7f5ab695ccea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.952 2 DEBUG nova.network.neutron [req-8b81e394-89ab-4e24-81fd-f4041779c873 req-638b9957-8996-4cb0-a0b9-6359d0f79315 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Updated VIF entry in instance network info cache for port b162ed75-30c8-4d39-97d3-7baa4c970ef6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.952 2 DEBUG nova.network.neutron [req-8b81e394-89ab-4e24-81fd-f4041779c873 req-638b9957-8996-4cb0-a0b9-6359d0f79315 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Updating instance_info_cache with network_info: [{"id": "b162ed75-30c8-4d39-97d3-7baa4c970ef6", "address": "fa:16:3e:b3:f9:07", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb162ed75-30", "ovs_interfaceid": "b162ed75-30c8-4d39-97d3-7baa4c970ef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:14:23 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1804: 305 pgs: 305 active+clean; 280 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 153 op/s
Oct 14 05:14:23 np0005486808 nova_compute[259627]: 2025-10-14 09:14:23.978 2 DEBUG oslo_concurrency.lockutils [req-8b81e394-89ab-4e24-81fd-f4041779c873 req-638b9957-8996-4cb0-a0b9-6359d0f79315 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-17db7f38-479a-4d56-9424-7f5ab695ccea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:14:24 np0005486808 nova_compute[259627]: 2025-10-14 09:14:24.027 2 DEBUG nova.compute.manager [req-2ce2ca97-71cd-406e-9185-f332cfe4fb27 req-7f9f38e4-d613-4541-92c7-868b844a822a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Received event network-vif-unplugged-1e3d49fe-52bd-40cb-ae1a-86eb664df473 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:24 np0005486808 nova_compute[259627]: 2025-10-14 09:14:24.028 2 DEBUG oslo_concurrency.lockutils [req-2ce2ca97-71cd-406e-9185-f332cfe4fb27 req-7f9f38e4-d613-4541-92c7-868b844a822a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "84080e43-a9f4-4b6a-889f-d76167ff715a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:24 np0005486808 nova_compute[259627]: 2025-10-14 09:14:24.028 2 DEBUG oslo_concurrency.lockutils [req-2ce2ca97-71cd-406e-9185-f332cfe4fb27 req-7f9f38e4-d613-4541-92c7-868b844a822a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "84080e43-a9f4-4b6a-889f-d76167ff715a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:24 np0005486808 nova_compute[259627]: 2025-10-14 09:14:24.028 2 DEBUG oslo_concurrency.lockutils [req-2ce2ca97-71cd-406e-9185-f332cfe4fb27 req-7f9f38e4-d613-4541-92c7-868b844a822a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "84080e43-a9f4-4b6a-889f-d76167ff715a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:24 np0005486808 nova_compute[259627]: 2025-10-14 09:14:24.029 2 DEBUG nova.compute.manager [req-2ce2ca97-71cd-406e-9185-f332cfe4fb27 req-7f9f38e4-d613-4541-92c7-868b844a822a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] No waiting events found dispatching network-vif-unplugged-1e3d49fe-52bd-40cb-ae1a-86eb664df473 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:14:24 np0005486808 nova_compute[259627]: 2025-10-14 09:14:24.029 2 WARNING nova.compute.manager [req-2ce2ca97-71cd-406e-9185-f332cfe4fb27 req-7f9f38e4-d613-4541-92c7-868b844a822a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Received unexpected event network-vif-unplugged-1e3d49fe-52bd-40cb-ae1a-86eb664df473 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:14:24 np0005486808 nova_compute[259627]: 2025-10-14 09:14:24.029 2 DEBUG nova.compute.manager [req-2ce2ca97-71cd-406e-9185-f332cfe4fb27 req-7f9f38e4-d613-4541-92c7-868b844a822a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Received event network-vif-plugged-1e3d49fe-52bd-40cb-ae1a-86eb664df473 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:24 np0005486808 nova_compute[259627]: 2025-10-14 09:14:24.029 2 DEBUG oslo_concurrency.lockutils [req-2ce2ca97-71cd-406e-9185-f332cfe4fb27 req-7f9f38e4-d613-4541-92c7-868b844a822a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "84080e43-a9f4-4b6a-889f-d76167ff715a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:24 np0005486808 nova_compute[259627]: 2025-10-14 09:14:24.029 2 DEBUG oslo_concurrency.lockutils [req-2ce2ca97-71cd-406e-9185-f332cfe4fb27 req-7f9f38e4-d613-4541-92c7-868b844a822a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "84080e43-a9f4-4b6a-889f-d76167ff715a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:24 np0005486808 nova_compute[259627]: 2025-10-14 09:14:24.030 2 DEBUG oslo_concurrency.lockutils [req-2ce2ca97-71cd-406e-9185-f332cfe4fb27 req-7f9f38e4-d613-4541-92c7-868b844a822a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "84080e43-a9f4-4b6a-889f-d76167ff715a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:24 np0005486808 nova_compute[259627]: 2025-10-14 09:14:24.030 2 DEBUG nova.compute.manager [req-2ce2ca97-71cd-406e-9185-f332cfe4fb27 req-7f9f38e4-d613-4541-92c7-868b844a822a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] No waiting events found dispatching network-vif-plugged-1e3d49fe-52bd-40cb-ae1a-86eb664df473 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:14:24 np0005486808 nova_compute[259627]: 2025-10-14 09:14:24.030 2 WARNING nova.compute.manager [req-2ce2ca97-71cd-406e-9185-f332cfe4fb27 req-7f9f38e4-d613-4541-92c7-868b844a822a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Received unexpected event network-vif-plugged-1e3d49fe-52bd-40cb-ae1a-86eb664df473 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:14:24 np0005486808 podman[357686]: 2025-10-14 09:14:24.114752153 +0000 UTC m=+0.055985461 container create 002f8ea4edd0d5f9a5b9c27d614712b9808a22cc899bd4e7acc5c3f7137894e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 05:14:24 np0005486808 systemd[1]: Started libpod-conmon-002f8ea4edd0d5f9a5b9c27d614712b9808a22cc899bd4e7acc5c3f7137894e2.scope.
Oct 14 05:14:24 np0005486808 podman[357686]: 2025-10-14 09:14:24.08178266 +0000 UTC m=+0.023015988 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:14:24 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:14:24 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/458b2af1b6a32b5e1572566cdbc152a3e460f3972a3a382274f2a47523b42e89/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:14:24 np0005486808 podman[357686]: 2025-10-14 09:14:24.198454926 +0000 UTC m=+0.139688264 container init 002f8ea4edd0d5f9a5b9c27d614712b9808a22cc899bd4e7acc5c3f7137894e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:14:24 np0005486808 podman[357686]: 2025-10-14 09:14:24.206527935 +0000 UTC m=+0.147761243 container start 002f8ea4edd0d5f9a5b9c27d614712b9808a22cc899bd4e7acc5c3f7137894e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 14 05:14:24 np0005486808 neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f[357701]: [NOTICE]   (357705) : New worker (357707) forked
Oct 14 05:14:24 np0005486808 neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f[357701]: [NOTICE]   (357705) : Loading success.
Oct 14 05:14:24 np0005486808 nova_compute[259627]: 2025-10-14 09:14:24.383 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for b595141f-123e-4250-bfec-888d866fd0c6 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct 14 05:14:24 np0005486808 nova_compute[259627]: 2025-10-14 09:14:24.385 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433264.3826427, b595141f-123e-4250-bfec-888d866fd0c6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:14:24 np0005486808 nova_compute[259627]: 2025-10-14 09:14:24.385 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b595141f-123e-4250-bfec-888d866fd0c6] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:14:24 np0005486808 nova_compute[259627]: 2025-10-14 09:14:24.393 2 DEBUG nova.compute.manager [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:14:24 np0005486808 nova_compute[259627]: 2025-10-14 09:14:24.393 2 DEBUG nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:14:24 np0005486808 nova_compute[259627]: 2025-10-14 09:14:24.398 2 INFO nova.virt.libvirt.driver [-] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Instance spawned successfully.#033[00m
Oct 14 05:14:24 np0005486808 nova_compute[259627]: 2025-10-14 09:14:24.398 2 DEBUG nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:14:24 np0005486808 nova_compute[259627]: 2025-10-14 09:14:24.410 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:14:24 np0005486808 nova_compute[259627]: 2025-10-14 09:14:24.415 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:14:24 np0005486808 nova_compute[259627]: 2025-10-14 09:14:24.426 2 DEBUG nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:14:24 np0005486808 nova_compute[259627]: 2025-10-14 09:14:24.427 2 DEBUG nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:14:24 np0005486808 nova_compute[259627]: 2025-10-14 09:14:24.427 2 DEBUG nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:14:24 np0005486808 nova_compute[259627]: 2025-10-14 09:14:24.428 2 DEBUG nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:14:24 np0005486808 nova_compute[259627]: 2025-10-14 09:14:24.428 2 DEBUG nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:14:24 np0005486808 nova_compute[259627]: 2025-10-14 09:14:24.428 2 DEBUG nova.virt.libvirt.driver [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:14:24 np0005486808 nova_compute[259627]: 2025-10-14 09:14:24.463 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b595141f-123e-4250-bfec-888d866fd0c6] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct 14 05:14:24 np0005486808 nova_compute[259627]: 2025-10-14 09:14:24.464 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433264.3840098, b595141f-123e-4250-bfec-888d866fd0c6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:14:24 np0005486808 nova_compute[259627]: 2025-10-14 09:14:24.464 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b595141f-123e-4250-bfec-888d866fd0c6] VM Started (Lifecycle Event)#033[00m
Oct 14 05:14:24 np0005486808 nova_compute[259627]: 2025-10-14 09:14:24.496 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:14:24 np0005486808 nova_compute[259627]: 2025-10-14 09:14:24.500 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:14:24 np0005486808 nova_compute[259627]: 2025-10-14 09:14:24.507 2 DEBUG nova.compute.manager [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:14:24 np0005486808 nova_compute[259627]: 2025-10-14 09:14:24.521 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b595141f-123e-4250-bfec-888d866fd0c6] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct 14 05:14:24 np0005486808 nova_compute[259627]: 2025-10-14 09:14:24.568 2 DEBUG oslo_concurrency.lockutils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:24 np0005486808 nova_compute[259627]: 2025-10-14 09:14:24.569 2 DEBUG oslo_concurrency.lockutils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:24 np0005486808 nova_compute[259627]: 2025-10-14 09:14:24.569 2 DEBUG nova.objects.instance [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct 14 05:14:24 np0005486808 nova_compute[259627]: 2025-10-14 09:14:24.647 2 DEBUG oslo_concurrency.lockutils [None req-4dcfc8f0-7afe-4b55-843b-911bf1c054f0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:25 np0005486808 nova_compute[259627]: 2025-10-14 09:14:25.056 2 INFO nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Creating config drive at /var/lib/nova/instances/17db7f38-479a-4d56-9424-7f5ab695ccea/disk.config#033[00m
Oct 14 05:14:25 np0005486808 nova_compute[259627]: 2025-10-14 09:14:25.072 2 DEBUG oslo_concurrency.processutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/17db7f38-479a-4d56-9424-7f5ab695ccea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpww928r51 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:25 np0005486808 nova_compute[259627]: 2025-10-14 09:14:25.191 2 DEBUG nova.compute.manager [req-e26c5fbc-fc48-4c57-86c0-f6daadde762a req-82a08399-9c07-43b1-9c43-de99f43b8ed6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received event network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:25 np0005486808 nova_compute[259627]: 2025-10-14 09:14:25.191 2 DEBUG oslo_concurrency.lockutils [req-e26c5fbc-fc48-4c57-86c0-f6daadde762a req-82a08399-9c07-43b1-9c43-de99f43b8ed6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b595141f-123e-4250-bfec-888d866fd0c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:25 np0005486808 nova_compute[259627]: 2025-10-14 09:14:25.192 2 DEBUG oslo_concurrency.lockutils [req-e26c5fbc-fc48-4c57-86c0-f6daadde762a req-82a08399-9c07-43b1-9c43-de99f43b8ed6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:25 np0005486808 nova_compute[259627]: 2025-10-14 09:14:25.192 2 DEBUG oslo_concurrency.lockutils [req-e26c5fbc-fc48-4c57-86c0-f6daadde762a req-82a08399-9c07-43b1-9c43-de99f43b8ed6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:25 np0005486808 nova_compute[259627]: 2025-10-14 09:14:25.192 2 DEBUG nova.compute.manager [req-e26c5fbc-fc48-4c57-86c0-f6daadde762a req-82a08399-9c07-43b1-9c43-de99f43b8ed6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] No waiting events found dispatching network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:14:25 np0005486808 nova_compute[259627]: 2025-10-14 09:14:25.192 2 WARNING nova.compute.manager [req-e26c5fbc-fc48-4c57-86c0-f6daadde762a req-82a08399-9c07-43b1-9c43-de99f43b8ed6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received unexpected event network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:14:25 np0005486808 nova_compute[259627]: 2025-10-14 09:14:25.192 2 DEBUG nova.compute.manager [req-e26c5fbc-fc48-4c57-86c0-f6daadde762a req-82a08399-9c07-43b1-9c43-de99f43b8ed6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received event network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:25 np0005486808 nova_compute[259627]: 2025-10-14 09:14:25.192 2 DEBUG oslo_concurrency.lockutils [req-e26c5fbc-fc48-4c57-86c0-f6daadde762a req-82a08399-9c07-43b1-9c43-de99f43b8ed6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b595141f-123e-4250-bfec-888d866fd0c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:25 np0005486808 nova_compute[259627]: 2025-10-14 09:14:25.193 2 DEBUG oslo_concurrency.lockutils [req-e26c5fbc-fc48-4c57-86c0-f6daadde762a req-82a08399-9c07-43b1-9c43-de99f43b8ed6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:25 np0005486808 nova_compute[259627]: 2025-10-14 09:14:25.193 2 DEBUG oslo_concurrency.lockutils [req-e26c5fbc-fc48-4c57-86c0-f6daadde762a req-82a08399-9c07-43b1-9c43-de99f43b8ed6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:25 np0005486808 nova_compute[259627]: 2025-10-14 09:14:25.194 2 DEBUG nova.compute.manager [req-e26c5fbc-fc48-4c57-86c0-f6daadde762a req-82a08399-9c07-43b1-9c43-de99f43b8ed6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] No waiting events found dispatching network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:14:25 np0005486808 nova_compute[259627]: 2025-10-14 09:14:25.194 2 WARNING nova.compute.manager [req-e26c5fbc-fc48-4c57-86c0-f6daadde762a req-82a08399-9c07-43b1-9c43-de99f43b8ed6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received unexpected event network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:14:25 np0005486808 nova_compute[259627]: 2025-10-14 09:14:25.242 2 DEBUG oslo_concurrency.processutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/17db7f38-479a-4d56-9424-7f5ab695ccea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpww928r51" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:25 np0005486808 nova_compute[259627]: 2025-10-14 09:14:25.276 2 DEBUG nova.storage.rbd_utils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image 17db7f38-479a-4d56-9424-7f5ab695ccea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:25 np0005486808 nova_compute[259627]: 2025-10-14 09:14:25.283 2 DEBUG oslo_concurrency.processutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/17db7f38-479a-4d56-9424-7f5ab695ccea/disk.config 17db7f38-479a-4d56-9424-7f5ab695ccea_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:25 np0005486808 nova_compute[259627]: 2025-10-14 09:14:25.455 2 DEBUG oslo_concurrency.processutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/17db7f38-479a-4d56-9424-7f5ab695ccea/disk.config 17db7f38-479a-4d56-9424-7f5ab695ccea_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:25 np0005486808 nova_compute[259627]: 2025-10-14 09:14:25.460 2 INFO nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Deleting local config drive /var/lib/nova/instances/17db7f38-479a-4d56-9424-7f5ab695ccea/disk.config because it was imported into RBD.#033[00m
Oct 14 05:14:25 np0005486808 kernel: tapb162ed75-30: entered promiscuous mode
Oct 14 05:14:25 np0005486808 NetworkManager[44885]: <info>  [1760433265.5120] manager: (tapb162ed75-30): new Tun device (/org/freedesktop/NetworkManager/Devices/430)
Oct 14 05:14:25 np0005486808 nova_compute[259627]: 2025-10-14 09:14:25.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:25 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:25Z|01039|binding|INFO|Claiming lport b162ed75-30c8-4d39-97d3-7baa4c970ef6 for this chassis.
Oct 14 05:14:25 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:25Z|01040|binding|INFO|b162ed75-30c8-4d39-97d3-7baa4c970ef6: Claiming fa:16:3e:b3:f9:07 10.100.0.7
Oct 14 05:14:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:25.558 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:f9:07 10.100.0.7'], port_security=['fa:16:3e:b3:f9:07 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '17db7f38-479a-4d56-9424-7f5ab695ccea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbecee11-4892-4e36-88d8-98879af7bb1e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a080fae2f3c4e39a6cca225203f5ec6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '03d7ceab-aac8-44ec-88a3-f0ef79d050f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d77764d-de2c-4408-8882-77d03cb7288a, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=b162ed75-30c8-4d39-97d3-7baa4c970ef6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:14:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:25.559 162547 INFO neutron.agent.ovn.metadata.agent [-] Port b162ed75-30c8-4d39-97d3-7baa4c970ef6 in datapath fbecee11-4892-4e36-88d8-98879af7bb1e bound to our chassis#033[00m
Oct 14 05:14:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:25.561 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbecee11-4892-4e36-88d8-98879af7bb1e#033[00m
Oct 14 05:14:25 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:25Z|01041|binding|INFO|Setting lport b162ed75-30c8-4d39-97d3-7baa4c970ef6 up in Southbound
Oct 14 05:14:25 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:25Z|01042|binding|INFO|Setting lport b162ed75-30c8-4d39-97d3-7baa4c970ef6 ovn-installed in OVS
Oct 14 05:14:25 np0005486808 nova_compute[259627]: 2025-10-14 09:14:25.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:25.582 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e58c9d98-c6f3-4280-9b4e-06be0ec1dd12]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:25 np0005486808 systemd-machined[214636]: New machine qemu-124-instance-00000063.
Oct 14 05:14:25 np0005486808 systemd-udevd[357773]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:14:25 np0005486808 systemd[1]: Started Virtual Machine qemu-124-instance-00000063.
Oct 14 05:14:25 np0005486808 NetworkManager[44885]: <info>  [1760433265.6159] device (tapb162ed75-30): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:14:25 np0005486808 NetworkManager[44885]: <info>  [1760433265.6171] device (tapb162ed75-30): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:14:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:25.630 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[157ac806-49aa-43bc-b50d-a4b4ca02e9d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:25.634 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3a4d05c2-94ec-448c-9440-26cdb73f033a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:25.665 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[b1eec7bc-c525-401e-8640-e47f16ef344c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:25.683 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[56fecd10-f9bc-45d0-b092-04de0bf5a0c2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbecee11-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:b3:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 700079, 'reachable_time': 38382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357781, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:25.711 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cfc67e87-bd54-4f9d-b8d9-69795eafe236]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700095, 'tstamp': 700095}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357785, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700099, 'tstamp': 700099}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357785, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:25.713 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbecee11-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:25 np0005486808 nova_compute[259627]: 2025-10-14 09:14:25.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:25.718 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbecee11-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:25.718 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:14:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:25.719 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbecee11-40, col_values=(('external_ids', {'iface-id': 'b1a95e67-ca39-4a56-9a91-3165df166ea9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:25.719 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:14:25 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1805: 305 pgs: 305 active+clean; 293 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.6 MiB/s wr, 258 op/s
Oct 14 05:14:25 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:25Z|00102|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8b:d7:f7 10.100.0.7
Oct 14 05:14:26 np0005486808 nova_compute[259627]: 2025-10-14 09:14:26.142 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433251.1402357, d5de3978-2377-4d8e-aeaf-c952912130a2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:14:26 np0005486808 nova_compute[259627]: 2025-10-14 09:14:26.143 2 INFO nova.compute.manager [-] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:14:26 np0005486808 nova_compute[259627]: 2025-10-14 09:14:26.165 2 DEBUG nova.compute.manager [None req-3e8bb4fc-abeb-4a77-84e5-2d2c18f0ac90 - - - - - -] [instance: d5de3978-2377-4d8e-aeaf-c952912130a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:14:26 np0005486808 nova_compute[259627]: 2025-10-14 09:14:26.518 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433266.5172956, 17db7f38-479a-4d56-9424-7f5ab695ccea => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:14:26 np0005486808 nova_compute[259627]: 2025-10-14 09:14:26.518 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] VM Started (Lifecycle Event)#033[00m
Oct 14 05:14:26 np0005486808 nova_compute[259627]: 2025-10-14 09:14:26.544 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:14:26 np0005486808 nova_compute[259627]: 2025-10-14 09:14:26.550 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433266.5174892, 17db7f38-479a-4d56-9424-7f5ab695ccea => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:14:26 np0005486808 nova_compute[259627]: 2025-10-14 09:14:26.550 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:14:26 np0005486808 nova_compute[259627]: 2025-10-14 09:14:26.572 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:14:26 np0005486808 nova_compute[259627]: 2025-10-14 09:14:26.576 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:14:26 np0005486808 nova_compute[259627]: 2025-10-14 09:14:26.599 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:14:27 np0005486808 nova_compute[259627]: 2025-10-14 09:14:27.723 2 DEBUG nova.compute.manager [req-737aed22-3588-4b91-93ac-529ae81beb4d req-96ac62c8-6e5a-4270-964b-285dab92eb2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Received event network-vif-plugged-b162ed75-30c8-4d39-97d3-7baa4c970ef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:27 np0005486808 nova_compute[259627]: 2025-10-14 09:14:27.723 2 DEBUG oslo_concurrency.lockutils [req-737aed22-3588-4b91-93ac-529ae81beb4d req-96ac62c8-6e5a-4270-964b-285dab92eb2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "17db7f38-479a-4d56-9424-7f5ab695ccea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:27 np0005486808 nova_compute[259627]: 2025-10-14 09:14:27.724 2 DEBUG oslo_concurrency.lockutils [req-737aed22-3588-4b91-93ac-529ae81beb4d req-96ac62c8-6e5a-4270-964b-285dab92eb2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "17db7f38-479a-4d56-9424-7f5ab695ccea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:27 np0005486808 nova_compute[259627]: 2025-10-14 09:14:27.724 2 DEBUG oslo_concurrency.lockutils [req-737aed22-3588-4b91-93ac-529ae81beb4d req-96ac62c8-6e5a-4270-964b-285dab92eb2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "17db7f38-479a-4d56-9424-7f5ab695ccea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:27 np0005486808 nova_compute[259627]: 2025-10-14 09:14:27.725 2 DEBUG nova.compute.manager [req-737aed22-3588-4b91-93ac-529ae81beb4d req-96ac62c8-6e5a-4270-964b-285dab92eb2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Processing event network-vif-plugged-b162ed75-30c8-4d39-97d3-7baa4c970ef6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:14:27 np0005486808 nova_compute[259627]: 2025-10-14 09:14:27.725 2 DEBUG nova.compute.manager [req-737aed22-3588-4b91-93ac-529ae81beb4d req-96ac62c8-6e5a-4270-964b-285dab92eb2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Received event network-vif-plugged-b162ed75-30c8-4d39-97d3-7baa4c970ef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:27 np0005486808 nova_compute[259627]: 2025-10-14 09:14:27.726 2 DEBUG oslo_concurrency.lockutils [req-737aed22-3588-4b91-93ac-529ae81beb4d req-96ac62c8-6e5a-4270-964b-285dab92eb2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "17db7f38-479a-4d56-9424-7f5ab695ccea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:27 np0005486808 nova_compute[259627]: 2025-10-14 09:14:27.726 2 DEBUG oslo_concurrency.lockutils [req-737aed22-3588-4b91-93ac-529ae81beb4d req-96ac62c8-6e5a-4270-964b-285dab92eb2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "17db7f38-479a-4d56-9424-7f5ab695ccea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:27 np0005486808 nova_compute[259627]: 2025-10-14 09:14:27.727 2 DEBUG oslo_concurrency.lockutils [req-737aed22-3588-4b91-93ac-529ae81beb4d req-96ac62c8-6e5a-4270-964b-285dab92eb2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "17db7f38-479a-4d56-9424-7f5ab695ccea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:27 np0005486808 nova_compute[259627]: 2025-10-14 09:14:27.727 2 DEBUG nova.compute.manager [req-737aed22-3588-4b91-93ac-529ae81beb4d req-96ac62c8-6e5a-4270-964b-285dab92eb2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] No waiting events found dispatching network-vif-plugged-b162ed75-30c8-4d39-97d3-7baa4c970ef6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:14:27 np0005486808 nova_compute[259627]: 2025-10-14 09:14:27.727 2 WARNING nova.compute.manager [req-737aed22-3588-4b91-93ac-529ae81beb4d req-96ac62c8-6e5a-4270-964b-285dab92eb2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Received unexpected event network-vif-plugged-b162ed75-30c8-4d39-97d3-7baa4c970ef6 for instance with vm_state building and task_state spawning.#033[00m
Oct 14 05:14:27 np0005486808 nova_compute[259627]: 2025-10-14 09:14:27.728 2 DEBUG nova.compute.manager [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:14:27 np0005486808 nova_compute[259627]: 2025-10-14 09:14:27.734 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433267.7342854, 17db7f38-479a-4d56-9424-7f5ab695ccea => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:14:27 np0005486808 nova_compute[259627]: 2025-10-14 09:14:27.735 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:14:27 np0005486808 nova_compute[259627]: 2025-10-14 09:14:27.739 2 DEBUG nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:14:27 np0005486808 nova_compute[259627]: 2025-10-14 09:14:27.744 2 INFO nova.virt.libvirt.driver [-] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Instance spawned successfully.#033[00m
Oct 14 05:14:27 np0005486808 nova_compute[259627]: 2025-10-14 09:14:27.744 2 DEBUG nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:14:27 np0005486808 nova_compute[259627]: 2025-10-14 09:14:27.760 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:14:27 np0005486808 nova_compute[259627]: 2025-10-14 09:14:27.772 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:14:27 np0005486808 nova_compute[259627]: 2025-10-14 09:14:27.780 2 DEBUG nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:14:27 np0005486808 nova_compute[259627]: 2025-10-14 09:14:27.781 2 DEBUG nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:14:27 np0005486808 nova_compute[259627]: 2025-10-14 09:14:27.781 2 DEBUG nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:14:27 np0005486808 nova_compute[259627]: 2025-10-14 09:14:27.782 2 DEBUG nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:14:27 np0005486808 nova_compute[259627]: 2025-10-14 09:14:27.783 2 DEBUG nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:14:27 np0005486808 nova_compute[259627]: 2025-10-14 09:14:27.783 2 DEBUG nova.virt.libvirt.driver [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:14:27 np0005486808 nova_compute[259627]: 2025-10-14 09:14:27.796 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:14:27 np0005486808 nova_compute[259627]: 2025-10-14 09:14:27.841 2 INFO nova.compute.manager [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Took 8.20 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:14:27 np0005486808 nova_compute[259627]: 2025-10-14 09:14:27.842 2 DEBUG nova.compute.manager [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:14:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:14:27 np0005486808 nova_compute[259627]: 2025-10-14 09:14:27.906 2 INFO nova.compute.manager [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Took 9.18 seconds to build instance.#033[00m
Oct 14 05:14:27 np0005486808 nova_compute[259627]: 2025-10-14 09:14:27.921 2 DEBUG oslo_concurrency.lockutils [None req-7167c316-d284-42eb-8138-9dcb9560030d a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "17db7f38-479a-4d56-9424-7f5ab695ccea" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.278s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:27 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1806: 305 pgs: 305 active+clean; 293 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.6 MiB/s wr, 221 op/s
Oct 14 05:14:27 np0005486808 nova_compute[259627]: 2025-10-14 09:14:27.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:14:27 np0005486808 nova_compute[259627]: 2025-10-14 09:14:27.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:14:28 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:28Z|01043|binding|INFO|Releasing lport 61fe5571-a8eb-446a-8c4c-1f6f6758b146 from this chassis (sb_readonly=0)
Oct 14 05:14:28 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:28Z|01044|binding|INFO|Releasing lport 4b7b52fe-6c74-46c3-ab83-c118ed2fe8eb from this chassis (sb_readonly=0)
Oct 14 05:14:28 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:28Z|01045|binding|INFO|Releasing lport b1a95e67-ca39-4a56-9a91-3165df166ea9 from this chassis (sb_readonly=0)
Oct 14 05:14:28 np0005486808 nova_compute[259627]: 2025-10-14 09:14:28.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:28 np0005486808 nova_compute[259627]: 2025-10-14 09:14:28.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:28 np0005486808 nova_compute[259627]: 2025-10-14 09:14:28.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:28 np0005486808 nova_compute[259627]: 2025-10-14 09:14:28.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:14:29 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1807: 305 pgs: 305 active+clean; 293 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.6 MiB/s wr, 221 op/s
Oct 14 05:14:30 np0005486808 nova_compute[259627]: 2025-10-14 09:14:30.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:14:30 np0005486808 nova_compute[259627]: 2025-10-14 09:14:30.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.013 2 DEBUG oslo_concurrency.lockutils [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "17db7f38-479a-4d56-9424-7f5ab695ccea" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.013 2 DEBUG oslo_concurrency.lockutils [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "17db7f38-479a-4d56-9424-7f5ab695ccea" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.013 2 DEBUG oslo_concurrency.lockutils [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "17db7f38-479a-4d56-9424-7f5ab695ccea-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.014 2 DEBUG oslo_concurrency.lockutils [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "17db7f38-479a-4d56-9424-7f5ab695ccea-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.014 2 DEBUG oslo_concurrency.lockutils [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "17db7f38-479a-4d56-9424-7f5ab695ccea-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.016 2 INFO nova.compute.manager [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Terminating instance#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.017 2 DEBUG nova.compute.manager [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.019 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.019 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.019 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.020 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.021 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:31 np0005486808 kernel: tapb162ed75-30 (unregistering): left promiscuous mode
Oct 14 05:14:31 np0005486808 NetworkManager[44885]: <info>  [1760433271.1221] device (tapb162ed75-30): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:14:31 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:31Z|01046|binding|INFO|Releasing lport b162ed75-30c8-4d39-97d3-7baa4c970ef6 from this chassis (sb_readonly=0)
Oct 14 05:14:31 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:31Z|01047|binding|INFO|Setting lport b162ed75-30c8-4d39-97d3-7baa4c970ef6 down in Southbound
Oct 14 05:14:31 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:31Z|01048|binding|INFO|Removing iface tapb162ed75-30 ovn-installed in OVS
Oct 14 05:14:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.182 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:f9:07 10.100.0.7'], port_security=['fa:16:3e:b3:f9:07 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '17db7f38-479a-4d56-9424-7f5ab695ccea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbecee11-4892-4e36-88d8-98879af7bb1e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a080fae2f3c4e39a6cca225203f5ec6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '03d7ceab-aac8-44ec-88a3-f0ef79d050f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d77764d-de2c-4408-8882-77d03cb7288a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=b162ed75-30c8-4d39-97d3-7baa4c970ef6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:14:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.183 162547 INFO neutron.agent.ovn.metadata.agent [-] Port b162ed75-30c8-4d39-97d3-7baa4c970ef6 in datapath fbecee11-4892-4e36-88d8-98879af7bb1e unbound from our chassis#033[00m
Oct 14 05:14:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.184 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbecee11-4892-4e36-88d8-98879af7bb1e#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.205 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8016cc61-6958-4d00-884c-3210abf4b280]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.218 2 DEBUG oslo_concurrency.lockutils [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "2534f8b9-e832-4b78-ada4-e551429bdc75" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.219 2 DEBUG oslo_concurrency.lockutils [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.219 2 DEBUG oslo_concurrency.lockutils [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.219 2 DEBUG oslo_concurrency.lockutils [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.219 2 DEBUG oslo_concurrency.lockutils [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:31 np0005486808 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d00000063.scope: Deactivated successfully.
Oct 14 05:14:31 np0005486808 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d00000063.scope: Consumed 4.225s CPU time.
Oct 14 05:14:31 np0005486808 systemd-machined[214636]: Machine qemu-124-instance-00000063 terminated.
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.225 2 INFO nova.compute.manager [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Terminating instance#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.227 2 DEBUG nova.compute.manager [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:14:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.246 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[10a2f240-9f1f-493f-83ac-95cfb3c90f29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.251 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[85cad6d1-f09c-4ca1-923c-5eb42b067ef6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:31 np0005486808 podman[357833]: 2025-10-14 09:14:31.286167883 +0000 UTC m=+0.107301536 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 14 05:14:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.285 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8cd4ea17-1fb7-4a5e-8257-f8636351a651]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:31 np0005486808 kernel: tap4f827284-f3 (unregistering): left promiscuous mode
Oct 14 05:14:31 np0005486808 NetworkManager[44885]: <info>  [1760433271.3026] device (tap4f827284-f3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:14:31 np0005486808 podman[357830]: 2025-10-14 09:14:31.314340117 +0000 UTC m=+0.172428201 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:31 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:31Z|01049|binding|INFO|Releasing lport 4f827284-f357-43c5-bdde-c69731b52914 from this chassis (sb_readonly=0)
Oct 14 05:14:31 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:31Z|01050|binding|INFO|Setting lport 4f827284-f357-43c5-bdde-c69731b52914 down in Southbound
Oct 14 05:14:31 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:31Z|01051|binding|INFO|Removing iface tap4f827284-f3 ovn-installed in OVS
Oct 14 05:14:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.324 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:d7:f7 10.100.0.7'], port_security=['fa:16:3e:8b:d7:f7 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2534f8b9-e832-4b78-ada4-e551429bdc75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '517aafb84156407c8672042097e3ef4f', 'neutron:revision_number': '11', 'neutron:security_group_ids': '572acc55-453a-444a-ab8d-a15e14283f7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=927296e1-b389-4596-b9be-8cf735b93ca2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=4f827284-f357-43c5-bdde-c69731b52914) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.332 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0a77fd7c-ef65-4f97-96e6-bd6b56ec0a9a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbecee11-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:b3:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 700079, 'reachable_time': 38382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357910, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.342 2 INFO nova.virt.libvirt.driver [-] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Instance destroyed successfully.#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.343 2 DEBUG nova.objects.instance [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'resources' on Instance uuid 17db7f38-479a-4d56-9424-7f5ab695ccea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:14:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.349 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c470026c-bfff-4324-8fe2-9228909a779a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700095, 'tstamp': 700095}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357922, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700099, 'tstamp': 700099}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357922, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.352 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbecee11-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.353 2 DEBUG nova.virt.libvirt.vif [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:14:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-982043822',display_name='tempest-ServersTestJSON-server-982043822',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-982043822',id=99,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPk/Vx8N8VoO1zmSJqBglSY43YqSujWcA6ZiDD1toYBIpurFKRZBjCOKZgtOBXyQC4T0DxGqSPJNxdy1Ur96CBm27LV9yA38NPDP8uy5/wFRSRGqBndEo3EUYE7tpxK96w==',key_name='tempest-key-1288000849',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:14:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0a080fae2f3c4e39a6cca225203f5ec6',ramdisk_id='',reservation_id='r-a12i0l08',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-2060951674',owner_user_name='tempest-ServersTestJSON-2060951674-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:14:27Z,user_data=None,user_id='a287ef08fc5c4f218bf06cd2c7ed021e',uuid=17db7f38-479a-4d56-9424-7f5ab695ccea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b162ed75-30c8-4d39-97d3-7baa4c970ef6", "address": "fa:16:3e:b3:f9:07", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb162ed75-30", "ovs_interfaceid": "b162ed75-30c8-4d39-97d3-7baa4c970ef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.354 2 DEBUG nova.network.os_vif_util [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converting VIF {"id": "b162ed75-30c8-4d39-97d3-7baa4c970ef6", "address": "fa:16:3e:b3:f9:07", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb162ed75-30", "ovs_interfaceid": "b162ed75-30c8-4d39-97d3-7baa4c970ef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.354 2 DEBUG nova.network.os_vif_util [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:f9:07,bridge_name='br-int',has_traffic_filtering=True,id=b162ed75-30c8-4d39-97d3-7baa4c970ef6,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb162ed75-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.355 2 DEBUG os_vif [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:f9:07,bridge_name='br-int',has_traffic_filtering=True,id=b162ed75-30c8-4d39-97d3-7baa4c970ef6,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb162ed75-30') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.357 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb162ed75-30, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:14:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.360 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbecee11-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.360 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:14:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.360 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbecee11-40, col_values=(('external_ids', {'iface-id': 'b1a95e67-ca39-4a56-9a91-3165df166ea9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.361 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:14:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.361 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 4f827284-f357-43c5-bdde-c69731b52914 in datapath a49b41b4-2559-4a22-a274-a6c7bbe75f2c unbound from our chassis#033[00m
Oct 14 05:14:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.362 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a49b41b4-2559-4a22-a274-a6c7bbe75f2c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.363 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[eeb189b1-3682-4430-8923-9d851eeb4661]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.364 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c namespace which is not needed anymore#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.365 2 INFO os_vif [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:f9:07,bridge_name='br-int',has_traffic_filtering=True,id=b162ed75-30c8-4d39-97d3-7baa4c970ef6,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb162ed75-30')#033[00m
Oct 14 05:14:31 np0005486808 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d00000056.scope: Deactivated successfully.
Oct 14 05:14:31 np0005486808 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d00000056.scope: Consumed 8.196s CPU time.
Oct 14 05:14:31 np0005486808 systemd-machined[214636]: Machine qemu-122-instance-00000056 terminated.
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.462 2 INFO nova.virt.libvirt.driver [-] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Instance destroyed successfully.#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.462 2 DEBUG nova.objects.instance [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lazy-loading 'resources' on Instance uuid 2534f8b9-e832-4b78-ada4-e551429bdc75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.489 2 DEBUG nova.virt.libvirt.vif [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-14T09:12:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-17250352',display_name='tempest-ServersNegativeTestJSON-server-17250352',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-17250352',id=86,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:14:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='517aafb84156407c8672042097e3ef4f',ramdisk_id='',reservation_id='r-rj00rja0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1475695514',owner_user_name='tempest-ServersNegativeTestJSON-1475695514-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:14:18Z,user_data=None,user_id='92e59e145f6942b78d0ffbebc4d89e76',uuid=2534f8b9-e832-4b78-ada4-e551429bdc75,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.490 2 DEBUG nova.network.os_vif_util [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Converting VIF {"id": "4f827284-f357-43c5-bdde-c69731b52914", "address": "fa:16:3e:8b:d7:f7", "network": {"id": "a49b41b4-2559-4a22-a274-a6c7bbe75f2c", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-218671909-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "517aafb84156407c8672042097e3ef4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f827284-f3", "ovs_interfaceid": "4f827284-f357-43c5-bdde-c69731b52914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.490 2 DEBUG nova.network.os_vif_util [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:d7:f7,bridge_name='br-int',has_traffic_filtering=True,id=4f827284-f357-43c5-bdde-c69731b52914,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f827284-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.490 2 DEBUG os_vif [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:d7:f7,bridge_name='br-int',has_traffic_filtering=True,id=4f827284-f357-43c5-bdde-c69731b52914,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f827284-f3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.492 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f827284-f3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.498 2 INFO os_vif [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:d7:f7,bridge_name='br-int',has_traffic_filtering=True,id=4f827284-f357-43c5-bdde-c69731b52914,network=Network(a49b41b4-2559-4a22-a274-a6c7bbe75f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f827284-f3')#033[00m
Oct 14 05:14:31 np0005486808 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[356784]: [NOTICE]   (356788) : haproxy version is 2.8.14-c23fe91
Oct 14 05:14:31 np0005486808 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[356784]: [NOTICE]   (356788) : path to executable is /usr/sbin/haproxy
Oct 14 05:14:31 np0005486808 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[356784]: [WARNING]  (356788) : Exiting Master process...
Oct 14 05:14:31 np0005486808 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[356784]: [WARNING]  (356788) : Exiting Master process...
Oct 14 05:14:31 np0005486808 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[356784]: [ALERT]    (356788) : Current worker (356790) exited with code 143 (Terminated)
Oct 14 05:14:31 np0005486808 neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c[356784]: [WARNING]  (356788) : All workers exited. Exiting... (0)
Oct 14 05:14:31 np0005486808 systemd[1]: libpod-003a69fa8facb8e884632adb21806e70fa1165d743d1e807512ac7c4836dc3cd.scope: Deactivated successfully.
Oct 14 05:14:31 np0005486808 podman[357965]: 2025-10-14 09:14:31.528906285 +0000 UTC m=+0.062021949 container died 003a69fa8facb8e884632adb21806e70fa1165d743d1e807512ac7c4836dc3cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:14:31 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:14:31 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3039727621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:14:31 np0005486808 systemd[1]: var-lib-containers-storage-overlay-f6781cf622e873ac119e8ad8166fc07516eec1ccc73d027486934f81c24ca8b1-merged.mount: Deactivated successfully.
Oct 14 05:14:31 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-003a69fa8facb8e884632adb21806e70fa1165d743d1e807512ac7c4836dc3cd-userdata-shm.mount: Deactivated successfully.
Oct 14 05:14:31 np0005486808 podman[357965]: 2025-10-14 09:14:31.579853321 +0000 UTC m=+0.112968985 container cleanup 003a69fa8facb8e884632adb21806e70fa1165d743d1e807512ac7c4836dc3cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.592 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:31 np0005486808 systemd[1]: libpod-conmon-003a69fa8facb8e884632adb21806e70fa1165d743d1e807512ac7c4836dc3cd.scope: Deactivated successfully.
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.640 2 DEBUG nova.compute.manager [req-8222f3f9-fe3c-4381-bbf0-1afe0e126b6b req-bd3ac972-4ecd-4376-9e78-540652bcab6c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Received event network-vif-unplugged-b162ed75-30c8-4d39-97d3-7baa4c970ef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.641 2 DEBUG oslo_concurrency.lockutils [req-8222f3f9-fe3c-4381-bbf0-1afe0e126b6b req-bd3ac972-4ecd-4376-9e78-540652bcab6c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "17db7f38-479a-4d56-9424-7f5ab695ccea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.641 2 DEBUG oslo_concurrency.lockutils [req-8222f3f9-fe3c-4381-bbf0-1afe0e126b6b req-bd3ac972-4ecd-4376-9e78-540652bcab6c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "17db7f38-479a-4d56-9424-7f5ab695ccea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.641 2 DEBUG oslo_concurrency.lockutils [req-8222f3f9-fe3c-4381-bbf0-1afe0e126b6b req-bd3ac972-4ecd-4376-9e78-540652bcab6c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "17db7f38-479a-4d56-9424-7f5ab695ccea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.642 2 DEBUG nova.compute.manager [req-8222f3f9-fe3c-4381-bbf0-1afe0e126b6b req-bd3ac972-4ecd-4376-9e78-540652bcab6c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] No waiting events found dispatching network-vif-unplugged-b162ed75-30c8-4d39-97d3-7baa4c970ef6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.642 2 DEBUG nova.compute.manager [req-8222f3f9-fe3c-4381-bbf0-1afe0e126b6b req-bd3ac972-4ecd-4376-9e78-540652bcab6c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Received event network-vif-unplugged-b162ed75-30c8-4d39-97d3-7baa4c970ef6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:14:31 np0005486808 podman[358022]: 2025-10-14 09:14:31.671340266 +0000 UTC m=+0.056704599 container remove 003a69fa8facb8e884632adb21806e70fa1165d743d1e807512ac7c4836dc3cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:14:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.676 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2b543e3f-5428-4180-85b9-33042213186d]: (4, ('Tue Oct 14 09:14:31 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c (003a69fa8facb8e884632adb21806e70fa1165d743d1e807512ac7c4836dc3cd)\n003a69fa8facb8e884632adb21806e70fa1165d743d1e807512ac7c4836dc3cd\nTue Oct 14 09:14:31 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c (003a69fa8facb8e884632adb21806e70fa1165d743d1e807512ac7c4836dc3cd)\n003a69fa8facb8e884632adb21806e70fa1165d743d1e807512ac7c4836dc3cd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.678 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4e6ee6a0-a966-4a50-a5d4-01279fa92898]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.679 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa49b41b4-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.679 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.679 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:14:31 np0005486808 kernel: tapa49b41b4-20: left promiscuous mode
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.685 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.686 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.689 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.689 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.701 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.701 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.709 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4f8e7817-2413-4f8d-85ff-f51285b21f53]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.733 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[51f4329e-c185-4a77-915f-7b1b4d20735a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.734 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0cc4b86e-ab5c-4641-92ea-41ec1010c565]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.756 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[93fe3eb5-667b-463f-8467-13b3eadab3a3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703647, 'reachable_time': 44657, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358038, 'error': None, 'target': 'ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:31 np0005486808 systemd[1]: run-netns-ovnmeta\x2da49b41b4\x2d2559\x2d4a22\x2da274\x2da6c7bbe75f2c.mount: Deactivated successfully.
Oct 14 05:14:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.761 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a49b41b4-2559-4a22-a274-a6c7bbe75f2c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:14:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:31.762 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[5f2fb4a1-7005-4cbb-ab8c-c184bdefa464]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.825 2 INFO nova.virt.libvirt.driver [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Deleting instance files /var/lib/nova/instances/17db7f38-479a-4d56-9424-7f5ab695ccea_del#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.826 2 INFO nova.virt.libvirt.driver [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Deletion of /var/lib/nova/instances/17db7f38-479a-4d56-9424-7f5ab695ccea_del complete#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.899 2 INFO nova.compute.manager [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Took 0.88 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.900 2 DEBUG oslo.service.loopingcall [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.900 2 DEBUG nova.compute.manager [-] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.900 2 DEBUG nova.network.neutron [-] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.922 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.923 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3362MB free_disk=59.85558319091797GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.923 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.923 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.962 2 INFO nova.virt.libvirt.driver [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Deleting instance files /var/lib/nova/instances/2534f8b9-e832-4b78-ada4-e551429bdc75_del#033[00m
Oct 14 05:14:31 np0005486808 nova_compute[259627]: 2025-10-14 09:14:31.963 2 INFO nova.virt.libvirt.driver [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Deletion of /var/lib/nova/instances/2534f8b9-e832-4b78-ada4-e551429bdc75_del complete#033[00m
Oct 14 05:14:31 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1808: 305 pgs: 305 active+clean; 293 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 3.6 MiB/s wr, 371 op/s
Oct 14 05:14:32 np0005486808 nova_compute[259627]: 2025-10-14 09:14:32.018 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance d46b6953-9413-4e6a-94f7-7b5ac9634c16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:14:32 np0005486808 nova_compute[259627]: 2025-10-14 09:14:32.019 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance b595141f-123e-4250-bfec-888d866fd0c6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:14:32 np0005486808 nova_compute[259627]: 2025-10-14 09:14:32.019 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 2534f8b9-e832-4b78-ada4-e551429bdc75 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:14:32 np0005486808 nova_compute[259627]: 2025-10-14 09:14:32.019 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 17db7f38-479a-4d56-9424-7f5ab695ccea actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:14:32 np0005486808 nova_compute[259627]: 2025-10-14 09:14:32.019 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:14:32 np0005486808 nova_compute[259627]: 2025-10-14 09:14:32.019 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:14:32 np0005486808 nova_compute[259627]: 2025-10-14 09:14:32.025 2 INFO nova.compute.manager [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Took 0.80 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:14:32 np0005486808 nova_compute[259627]: 2025-10-14 09:14:32.026 2 DEBUG oslo.service.loopingcall [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:14:32 np0005486808 nova_compute[259627]: 2025-10-14 09:14:32.026 2 DEBUG nova.compute.manager [-] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:14:32 np0005486808 nova_compute[259627]: 2025-10-14 09:14:32.026 2 DEBUG nova.network.neutron [-] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:14:32 np0005486808 nova_compute[259627]: 2025-10-14 09:14:32.036 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing inventories for resource provider 92105e1d-1743-46e3-a494-858b4331398a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 14 05:14:32 np0005486808 nova_compute[259627]: 2025-10-14 09:14:32.050 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Updating ProviderTree inventory for provider 92105e1d-1743-46e3-a494-858b4331398a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 14 05:14:32 np0005486808 nova_compute[259627]: 2025-10-14 09:14:32.050 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Updating inventory in ProviderTree for provider 92105e1d-1743-46e3-a494-858b4331398a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 14 05:14:32 np0005486808 nova_compute[259627]: 2025-10-14 09:14:32.063 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing aggregate associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 14 05:14:32 np0005486808 nova_compute[259627]: 2025-10-14 09:14:32.097 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing trait associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, traits: COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_ACCELERATORS,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 14 05:14:32 np0005486808 nova_compute[259627]: 2025-10-14 09:14:32.173 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:14:32 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/708180741' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:14:32 np0005486808 nova_compute[259627]: 2025-10-14 09:14:32.646 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:32 np0005486808 nova_compute[259627]: 2025-10-14 09:14:32.655 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:14:32 np0005486808 nova_compute[259627]: 2025-10-14 09:14:32.697 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:14:32 np0005486808 nova_compute[259627]: 2025-10-14 09:14:32.714 2 DEBUG nova.network.neutron [-] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:14:32 np0005486808 nova_compute[259627]: 2025-10-14 09:14:32.729 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:14:32 np0005486808 nova_compute[259627]: 2025-10-14 09:14:32.729 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:14:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:14:32 np0005486808 nova_compute[259627]: 2025-10-14 09:14:32.744 2 INFO nova.compute.manager [-] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Took 0.84 seconds to deallocate network for instance.#033[00m
Oct 14 05:14:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:14:32
Oct 14 05:14:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:14:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:14:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', '.mgr', 'backups', 'default.rgw.log', 'volumes', 'images', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta']
Oct 14 05:14:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:14:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:14:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:14:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:14:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:14:32 np0005486808 nova_compute[259627]: 2025-10-14 09:14:32.789 2 DEBUG oslo_concurrency.lockutils [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:32 np0005486808 nova_compute[259627]: 2025-10-14 09:14:32.789 2 DEBUG oslo_concurrency.lockutils [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:32 np0005486808 nova_compute[259627]: 2025-10-14 09:14:32.843 2 DEBUG nova.compute.manager [req-49791a0e-0167-4be8-bcff-a211fc6b984d req-6872d837-3ac5-4641-91cb-24e59714ef6c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Received event network-vif-deleted-b162ed75-30c8-4d39-97d3-7baa4c970ef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:14:32 np0005486808 nova_compute[259627]: 2025-10-14 09:14:32.886 2 DEBUG oslo_concurrency.processutils [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:14:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:14:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:14:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:14:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:14:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:14:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:14:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:14:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:14:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:14:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:14:33 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1437114887' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:14:33 np0005486808 nova_compute[259627]: 2025-10-14 09:14:33.313 2 DEBUG oslo_concurrency.processutils [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:33 np0005486808 nova_compute[259627]: 2025-10-14 09:14:33.321 2 DEBUG nova.compute.provider_tree [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:14:33 np0005486808 nova_compute[259627]: 2025-10-14 09:14:33.345 2 DEBUG nova.scheduler.client.report [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:14:33 np0005486808 nova_compute[259627]: 2025-10-14 09:14:33.375 2 DEBUG oslo_concurrency.lockutils [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:33 np0005486808 nova_compute[259627]: 2025-10-14 09:14:33.382 2 DEBUG nova.network.neutron [-] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:14:33 np0005486808 nova_compute[259627]: 2025-10-14 09:14:33.406 2 INFO nova.compute.manager [-] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Took 1.38 seconds to deallocate network for instance.#033[00m
Oct 14 05:14:33 np0005486808 nova_compute[259627]: 2025-10-14 09:14:33.413 2 INFO nova.scheduler.client.report [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Deleted allocations for instance 17db7f38-479a-4d56-9424-7f5ab695ccea#033[00m
Oct 14 05:14:33 np0005486808 nova_compute[259627]: 2025-10-14 09:14:33.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:33 np0005486808 nova_compute[259627]: 2025-10-14 09:14:33.494 2 DEBUG oslo_concurrency.lockutils [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:33 np0005486808 nova_compute[259627]: 2025-10-14 09:14:33.494 2 DEBUG oslo_concurrency.lockutils [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:33 np0005486808 nova_compute[259627]: 2025-10-14 09:14:33.524 2 DEBUG oslo_concurrency.lockutils [None req-61df3857-25ec-43de-bc2f-c4abc29bbef0 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "17db7f38-479a-4d56-9424-7f5ab695ccea" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.511s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:33 np0005486808 nova_compute[259627]: 2025-10-14 09:14:33.564 2 DEBUG oslo_concurrency.processutils [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:33 np0005486808 nova_compute[259627]: 2025-10-14 09:14:33.758 2 DEBUG nova.compute.manager [req-a670d2fc-94a4-48f9-a4dd-f5112e71c4e0 req-7d80df98-a99d-47d5-baaa-9b698095d785 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Received event network-vif-plugged-b162ed75-30c8-4d39-97d3-7baa4c970ef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:33 np0005486808 nova_compute[259627]: 2025-10-14 09:14:33.759 2 DEBUG oslo_concurrency.lockutils [req-a670d2fc-94a4-48f9-a4dd-f5112e71c4e0 req-7d80df98-a99d-47d5-baaa-9b698095d785 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "17db7f38-479a-4d56-9424-7f5ab695ccea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:33 np0005486808 nova_compute[259627]: 2025-10-14 09:14:33.760 2 DEBUG oslo_concurrency.lockutils [req-a670d2fc-94a4-48f9-a4dd-f5112e71c4e0 req-7d80df98-a99d-47d5-baaa-9b698095d785 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "17db7f38-479a-4d56-9424-7f5ab695ccea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:33 np0005486808 nova_compute[259627]: 2025-10-14 09:14:33.760 2 DEBUG oslo_concurrency.lockutils [req-a670d2fc-94a4-48f9-a4dd-f5112e71c4e0 req-7d80df98-a99d-47d5-baaa-9b698095d785 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "17db7f38-479a-4d56-9424-7f5ab695ccea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:33 np0005486808 nova_compute[259627]: 2025-10-14 09:14:33.761 2 DEBUG nova.compute.manager [req-a670d2fc-94a4-48f9-a4dd-f5112e71c4e0 req-7d80df98-a99d-47d5-baaa-9b698095d785 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] No waiting events found dispatching network-vif-plugged-b162ed75-30c8-4d39-97d3-7baa4c970ef6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:14:33 np0005486808 nova_compute[259627]: 2025-10-14 09:14:33.762 2 WARNING nova.compute.manager [req-a670d2fc-94a4-48f9-a4dd-f5112e71c4e0 req-7d80df98-a99d-47d5-baaa-9b698095d785 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Received unexpected event network-vif-plugged-b162ed75-30c8-4d39-97d3-7baa4c970ef6 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:14:33 np0005486808 nova_compute[259627]: 2025-10-14 09:14:33.762 2 DEBUG nova.compute.manager [req-a670d2fc-94a4-48f9-a4dd-f5112e71c4e0 req-7d80df98-a99d-47d5-baaa-9b698095d785 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received event network-vif-unplugged-4f827284-f357-43c5-bdde-c69731b52914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:33 np0005486808 nova_compute[259627]: 2025-10-14 09:14:33.763 2 DEBUG oslo_concurrency.lockutils [req-a670d2fc-94a4-48f9-a4dd-f5112e71c4e0 req-7d80df98-a99d-47d5-baaa-9b698095d785 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:33 np0005486808 nova_compute[259627]: 2025-10-14 09:14:33.763 2 DEBUG oslo_concurrency.lockutils [req-a670d2fc-94a4-48f9-a4dd-f5112e71c4e0 req-7d80df98-a99d-47d5-baaa-9b698095d785 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:33 np0005486808 nova_compute[259627]: 2025-10-14 09:14:33.763 2 DEBUG oslo_concurrency.lockutils [req-a670d2fc-94a4-48f9-a4dd-f5112e71c4e0 req-7d80df98-a99d-47d5-baaa-9b698095d785 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:33 np0005486808 nova_compute[259627]: 2025-10-14 09:14:33.764 2 DEBUG nova.compute.manager [req-a670d2fc-94a4-48f9-a4dd-f5112e71c4e0 req-7d80df98-a99d-47d5-baaa-9b698095d785 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] No waiting events found dispatching network-vif-unplugged-4f827284-f357-43c5-bdde-c69731b52914 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:14:33 np0005486808 nova_compute[259627]: 2025-10-14 09:14:33.764 2 WARNING nova.compute.manager [req-a670d2fc-94a4-48f9-a4dd-f5112e71c4e0 req-7d80df98-a99d-47d5-baaa-9b698095d785 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received unexpected event network-vif-unplugged-4f827284-f357-43c5-bdde-c69731b52914 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:14:33 np0005486808 nova_compute[259627]: 2025-10-14 09:14:33.765 2 DEBUG nova.compute.manager [req-a670d2fc-94a4-48f9-a4dd-f5112e71c4e0 req-7d80df98-a99d-47d5-baaa-9b698095d785 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received event network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:33 np0005486808 nova_compute[259627]: 2025-10-14 09:14:33.765 2 DEBUG oslo_concurrency.lockutils [req-a670d2fc-94a4-48f9-a4dd-f5112e71c4e0 req-7d80df98-a99d-47d5-baaa-9b698095d785 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:33 np0005486808 nova_compute[259627]: 2025-10-14 09:14:33.766 2 DEBUG oslo_concurrency.lockutils [req-a670d2fc-94a4-48f9-a4dd-f5112e71c4e0 req-7d80df98-a99d-47d5-baaa-9b698095d785 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:33 np0005486808 nova_compute[259627]: 2025-10-14 09:14:33.766 2 DEBUG oslo_concurrency.lockutils [req-a670d2fc-94a4-48f9-a4dd-f5112e71c4e0 req-7d80df98-a99d-47d5-baaa-9b698095d785 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:33 np0005486808 nova_compute[259627]: 2025-10-14 09:14:33.767 2 DEBUG nova.compute.manager [req-a670d2fc-94a4-48f9-a4dd-f5112e71c4e0 req-7d80df98-a99d-47d5-baaa-9b698095d785 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] No waiting events found dispatching network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:14:33 np0005486808 nova_compute[259627]: 2025-10-14 09:14:33.767 2 WARNING nova.compute.manager [req-a670d2fc-94a4-48f9-a4dd-f5112e71c4e0 req-7d80df98-a99d-47d5-baaa-9b698095d785 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received unexpected event network-vif-plugged-4f827284-f357-43c5-bdde-c69731b52914 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:14:33 np0005486808 nova_compute[259627]: 2025-10-14 09:14:33.767 2 DEBUG nova.compute.manager [req-a670d2fc-94a4-48f9-a4dd-f5112e71c4e0 req-7d80df98-a99d-47d5-baaa-9b698095d785 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Received event network-vif-deleted-4f827284-f357-43c5-bdde-c69731b52914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:33 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1809: 305 pgs: 305 active+clean; 293 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 2.3 MiB/s wr, 266 op/s
Oct 14 05:14:34 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:14:34 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2406511685' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:14:34 np0005486808 nova_compute[259627]: 2025-10-14 09:14:34.040 2 DEBUG oslo_concurrency.processutils [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:34 np0005486808 nova_compute[259627]: 2025-10-14 09:14:34.046 2 DEBUG nova.compute.provider_tree [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:14:34 np0005486808 nova_compute[259627]: 2025-10-14 09:14:34.063 2 DEBUG nova.scheduler.client.report [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:14:34 np0005486808 nova_compute[259627]: 2025-10-14 09:14:34.084 2 DEBUG oslo_concurrency.lockutils [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:34 np0005486808 nova_compute[259627]: 2025-10-14 09:14:34.105 2 INFO nova.scheduler.client.report [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Deleted allocations for instance 2534f8b9-e832-4b78-ada4-e551429bdc75#033[00m
Oct 14 05:14:34 np0005486808 nova_compute[259627]: 2025-10-14 09:14:34.191 2 DEBUG oslo_concurrency.lockutils [None req-961cb0e6-b9c5-42e7-b02a-c52c43a9de63 92e59e145f6942b78d0ffbebc4d89e76 517aafb84156407c8672042097e3ef4f - - default default] Lock "2534f8b9-e832-4b78-ada4-e551429bdc75" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.973s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:34 np0005486808 nova_compute[259627]: 2025-10-14 09:14:34.728 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:14:34 np0005486808 nova_compute[259627]: 2025-10-14 09:14:34.729 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:14:34 np0005486808 nova_compute[259627]: 2025-10-14 09:14:34.729 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 05:14:35 np0005486808 nova_compute[259627]: 2025-10-14 09:14:35.044 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-d46b6953-9413-4e6a-94f7-7b5ac9634c16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:14:35 np0005486808 nova_compute[259627]: 2025-10-14 09:14:35.044 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-d46b6953-9413-4e6a-94f7-7b5ac9634c16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:14:35 np0005486808 nova_compute[259627]: 2025-10-14 09:14:35.045 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 14 05:14:35 np0005486808 nova_compute[259627]: 2025-10-14 09:14:35.045 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d46b6953-9413-4e6a-94f7-7b5ac9634c16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:14:35 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1810: 305 pgs: 305 active+clean; 169 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 2.5 MiB/s wr, 331 op/s
Oct 14 05:14:36 np0005486808 nova_compute[259627]: 2025-10-14 09:14:36.292 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433261.280935, 84080e43-a9f4-4b6a-889f-d76167ff715a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:14:36 np0005486808 nova_compute[259627]: 2025-10-14 09:14:36.292 2 INFO nova.compute.manager [-] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:14:36 np0005486808 nova_compute[259627]: 2025-10-14 09:14:36.314 2 DEBUG nova.compute.manager [None req-04676838-9d5b-492d-a9d0-b802aed56d65 - - - - - -] [instance: 84080e43-a9f4-4b6a-889f-d76167ff715a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:14:36 np0005486808 nova_compute[259627]: 2025-10-14 09:14:36.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:36 np0005486808 nova_compute[259627]: 2025-10-14 09:14:36.639 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Updating instance_info_cache with network_info: [{"id": "05e92470-2658-4ea2-9c44-e91cd5226905", "address": "fa:16:3e:7f:fb:45", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05e92470-26", "ovs_interfaceid": "05e92470-2658-4ea2-9c44-e91cd5226905", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:14:36 np0005486808 nova_compute[259627]: 2025-10-14 09:14:36.660 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-d46b6953-9413-4e6a-94f7-7b5ac9634c16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:14:36 np0005486808 nova_compute[259627]: 2025-10-14 09:14:36.660 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 14 05:14:36 np0005486808 nova_compute[259627]: 2025-10-14 09:14:36.660 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:14:36 np0005486808 nova_compute[259627]: 2025-10-14 09:14:36.660 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:14:36 np0005486808 nova_compute[259627]: 2025-10-14 09:14:36.660 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:14:36 np0005486808 nova_compute[259627]: 2025-10-14 09:14:36.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:37 np0005486808 nova_compute[259627]: 2025-10-14 09:14:37.339 2 DEBUG oslo_concurrency.lockutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "da40c115-048e-4844-812e-7e65e25bfb3f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:37 np0005486808 nova_compute[259627]: 2025-10-14 09:14:37.340 2 DEBUG oslo_concurrency.lockutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "da40c115-048e-4844-812e-7e65e25bfb3f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:37 np0005486808 nova_compute[259627]: 2025-10-14 09:14:37.360 2 DEBUG nova.compute.manager [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:14:37 np0005486808 nova_compute[259627]: 2025-10-14 09:14:37.451 2 DEBUG oslo_concurrency.lockutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:37 np0005486808 nova_compute[259627]: 2025-10-14 09:14:37.452 2 DEBUG oslo_concurrency.lockutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:37 np0005486808 nova_compute[259627]: 2025-10-14 09:14:37.458 2 DEBUG nova.virt.hardware [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:14:37 np0005486808 nova_compute[259627]: 2025-10-14 09:14:37.458 2 INFO nova.compute.claims [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:14:37 np0005486808 nova_compute[259627]: 2025-10-14 09:14:37.606 2 DEBUG oslo_concurrency.processutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:14:37 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Oct 14 05:14:37 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:14:37.854065) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 05:14:37 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Oct 14 05:14:37 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433277854093, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 622, "num_deletes": 256, "total_data_size": 602740, "memory_usage": 615912, "flush_reason": "Manual Compaction"}
Oct 14 05:14:37 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Oct 14 05:14:37 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433277859849, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 596293, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37771, "largest_seqno": 38392, "table_properties": {"data_size": 593031, "index_size": 1106, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7780, "raw_average_key_size": 18, "raw_value_size": 586342, "raw_average_value_size": 1426, "num_data_blocks": 50, "num_entries": 411, "num_filter_entries": 411, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760433240, "oldest_key_time": 1760433240, "file_creation_time": 1760433277, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:14:37 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 5807 microseconds, and 2191 cpu microseconds.
Oct 14 05:14:37 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:14:37 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:14:37.859873) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 596293 bytes OK
Oct 14 05:14:37 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:14:37.859886) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Oct 14 05:14:37 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:14:37.861845) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Oct 14 05:14:37 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:14:37.861857) EVENT_LOG_v1 {"time_micros": 1760433277861853, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 05:14:37 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:14:37.861870) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 05:14:37 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 599330, prev total WAL file size 599330, number of live WAL files 2.
Oct 14 05:14:37 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:14:37 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:14:37.862253) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323537' seq:72057594037927935, type:22 .. '6C6F676D0031353038' seq:0, type:0; will stop at (end)
Oct 14 05:14:37 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 05:14:37 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(582KB)], [83(7900KB)]
Oct 14 05:14:37 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433277862293, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 8686208, "oldest_snapshot_seqno": -1}
Oct 14 05:14:37 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 6128 keys, 8556518 bytes, temperature: kUnknown
Oct 14 05:14:37 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433277897881, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 8556518, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8515267, "index_size": 24821, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15365, "raw_key_size": 156528, "raw_average_key_size": 25, "raw_value_size": 8405065, "raw_average_value_size": 1371, "num_data_blocks": 997, "num_entries": 6128, "num_filter_entries": 6128, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760433277, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:14:37 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:14:37 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:14:37.898148) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 8556518 bytes
Oct 14 05:14:37 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:14:37.899868) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 243.5 rd, 239.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 7.7 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(28.9) write-amplify(14.3) OK, records in: 6652, records dropped: 524 output_compression: NoCompression
Oct 14 05:14:37 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:14:37.899883) EVENT_LOG_v1 {"time_micros": 1760433277899876, "job": 48, "event": "compaction_finished", "compaction_time_micros": 35668, "compaction_time_cpu_micros": 18436, "output_level": 6, "num_output_files": 1, "total_output_size": 8556518, "num_input_records": 6652, "num_output_records": 6128, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 05:14:37 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:14:37 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433277900101, "job": 48, "event": "table_file_deletion", "file_number": 85}
Oct 14 05:14:37 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:14:37 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433277901370, "job": 48, "event": "table_file_deletion", "file_number": 83}
Oct 14 05:14:37 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:14:37.862187) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:14:37 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:14:37.901397) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:14:37 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:14:37.901401) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:14:37 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:14:37.901402) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:14:37 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:14:37.901404) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:14:37 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:14:37.901405) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:14:37 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1811: 305 pgs: 305 active+clean; 169 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 290 KiB/s wr, 214 op/s
Oct 14 05:14:38 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:38Z|00103|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9d:3c:de 10.100.0.5
Oct 14 05:14:38 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:38Z|00104|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9d:3c:de 10.100.0.5
Oct 14 05:14:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:14:38 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2811645815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:14:38 np0005486808 nova_compute[259627]: 2025-10-14 09:14:38.042 2 DEBUG oslo_concurrency.processutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:38 np0005486808 nova_compute[259627]: 2025-10-14 09:14:38.050 2 DEBUG nova.compute.provider_tree [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:14:38 np0005486808 nova_compute[259627]: 2025-10-14 09:14:38.077 2 DEBUG nova.scheduler.client.report [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:14:38 np0005486808 nova_compute[259627]: 2025-10-14 09:14:38.101 2 DEBUG oslo_concurrency.lockutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:38 np0005486808 nova_compute[259627]: 2025-10-14 09:14:38.102 2 DEBUG nova.compute.manager [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:14:38 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:38Z|01052|binding|INFO|Releasing lport 4b7b52fe-6c74-46c3-ab83-c118ed2fe8eb from this chassis (sb_readonly=0)
Oct 14 05:14:38 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:38Z|01053|binding|INFO|Releasing lport b1a95e67-ca39-4a56-9a91-3165df166ea9 from this chassis (sb_readonly=0)
Oct 14 05:14:38 np0005486808 nova_compute[259627]: 2025-10-14 09:14:38.169 2 DEBUG nova.compute.manager [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:14:38 np0005486808 nova_compute[259627]: 2025-10-14 09:14:38.170 2 DEBUG nova.network.neutron [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:14:38 np0005486808 nova_compute[259627]: 2025-10-14 09:14:38.201 2 INFO nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:14:38 np0005486808 nova_compute[259627]: 2025-10-14 09:14:38.269 2 DEBUG nova.compute.manager [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:14:38 np0005486808 nova_compute[259627]: 2025-10-14 09:14:38.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:38 np0005486808 nova_compute[259627]: 2025-10-14 09:14:38.363 2 DEBUG nova.policy [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a287ef08fc5c4f218bf06cd2c7ed021e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0a080fae2f3c4e39a6cca225203f5ec6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:14:38 np0005486808 nova_compute[259627]: 2025-10-14 09:14:38.375 2 DEBUG nova.compute.manager [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:14:38 np0005486808 nova_compute[259627]: 2025-10-14 09:14:38.378 2 DEBUG nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:14:38 np0005486808 nova_compute[259627]: 2025-10-14 09:14:38.379 2 INFO nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Creating image(s)#033[00m
Oct 14 05:14:38 np0005486808 nova_compute[259627]: 2025-10-14 09:14:38.413 2 DEBUG nova.storage.rbd_utils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image da40c115-048e-4844-812e-7e65e25bfb3f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:38 np0005486808 nova_compute[259627]: 2025-10-14 09:14:38.442 2 DEBUG nova.storage.rbd_utils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image da40c115-048e-4844-812e-7e65e25bfb3f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:38 np0005486808 nova_compute[259627]: 2025-10-14 09:14:38.465 2 DEBUG nova.storage.rbd_utils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image da40c115-048e-4844-812e-7e65e25bfb3f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:38 np0005486808 nova_compute[259627]: 2025-10-14 09:14:38.468 2 DEBUG oslo_concurrency.processutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:38 np0005486808 nova_compute[259627]: 2025-10-14 09:14:38.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:38 np0005486808 nova_compute[259627]: 2025-10-14 09:14:38.538 2 DEBUG oslo_concurrency.processutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:38 np0005486808 nova_compute[259627]: 2025-10-14 09:14:38.539 2 DEBUG oslo_concurrency.lockutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:38 np0005486808 nova_compute[259627]: 2025-10-14 09:14:38.540 2 DEBUG oslo_concurrency.lockutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:38 np0005486808 nova_compute[259627]: 2025-10-14 09:14:38.540 2 DEBUG oslo_concurrency.lockutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:38 np0005486808 nova_compute[259627]: 2025-10-14 09:14:38.562 2 DEBUG nova.storage.rbd_utils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image da40c115-048e-4844-812e-7e65e25bfb3f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:38 np0005486808 nova_compute[259627]: 2025-10-14 09:14:38.565 2 DEBUG oslo_concurrency.processutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 da40c115-048e-4844-812e-7e65e25bfb3f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:38 np0005486808 nova_compute[259627]: 2025-10-14 09:14:38.875 2 DEBUG oslo_concurrency.processutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 da40c115-048e-4844-812e-7e65e25bfb3f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.310s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:38 np0005486808 nova_compute[259627]: 2025-10-14 09:14:38.965 2 DEBUG nova.storage.rbd_utils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] resizing rbd image da40c115-048e-4844-812e-7e65e25bfb3f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:14:39 np0005486808 nova_compute[259627]: 2025-10-14 09:14:39.041 2 DEBUG nova.network.neutron [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Successfully created port: c2ac8abe-0e61-4769-9529-3b391568e6b9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:14:39 np0005486808 nova_compute[259627]: 2025-10-14 09:14:39.094 2 DEBUG nova.objects.instance [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'migration_context' on Instance uuid da40c115-048e-4844-812e-7e65e25bfb3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:14:39 np0005486808 nova_compute[259627]: 2025-10-14 09:14:39.115 2 DEBUG nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:14:39 np0005486808 nova_compute[259627]: 2025-10-14 09:14:39.116 2 DEBUG nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Ensure instance console log exists: /var/lib/nova/instances/da40c115-048e-4844-812e-7e65e25bfb3f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:14:39 np0005486808 nova_compute[259627]: 2025-10-14 09:14:39.117 2 DEBUG oslo_concurrency.lockutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:39 np0005486808 nova_compute[259627]: 2025-10-14 09:14:39.117 2 DEBUG oslo_concurrency.lockutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:39 np0005486808 nova_compute[259627]: 2025-10-14 09:14:39.118 2 DEBUG oslo_concurrency.lockutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:39 np0005486808 nova_compute[259627]: 2025-10-14 09:14:39.772 2 DEBUG nova.network.neutron [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Successfully updated port: c2ac8abe-0e61-4769-9529-3b391568e6b9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:14:39 np0005486808 nova_compute[259627]: 2025-10-14 09:14:39.791 2 DEBUG oslo_concurrency.lockutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "refresh_cache-da40c115-048e-4844-812e-7e65e25bfb3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:14:39 np0005486808 nova_compute[259627]: 2025-10-14 09:14:39.791 2 DEBUG oslo_concurrency.lockutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquired lock "refresh_cache-da40c115-048e-4844-812e-7e65e25bfb3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:14:39 np0005486808 nova_compute[259627]: 2025-10-14 09:14:39.791 2 DEBUG nova.network.neutron [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:14:39 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1812: 305 pgs: 305 active+clean; 169 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 290 KiB/s wr, 214 op/s
Oct 14 05:14:39 np0005486808 nova_compute[259627]: 2025-10-14 09:14:39.984 2 DEBUG nova.compute.manager [req-28071fc5-fd37-429f-9cbe-425fdc66903c req-0a055d11-28f4-4ba6-847b-c9ea1726e4d8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Received event network-changed-c2ac8abe-0e61-4769-9529-3b391568e6b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:39 np0005486808 nova_compute[259627]: 2025-10-14 09:14:39.984 2 DEBUG nova.compute.manager [req-28071fc5-fd37-429f-9cbe-425fdc66903c req-0a055d11-28f4-4ba6-847b-c9ea1726e4d8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Refreshing instance network info cache due to event network-changed-c2ac8abe-0e61-4769-9529-3b391568e6b9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:14:39 np0005486808 nova_compute[259627]: 2025-10-14 09:14:39.985 2 DEBUG oslo_concurrency.lockutils [req-28071fc5-fd37-429f-9cbe-425fdc66903c req-0a055d11-28f4-4ba6-847b-c9ea1726e4d8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-da40c115-048e-4844-812e-7e65e25bfb3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:14:40 np0005486808 nova_compute[259627]: 2025-10-14 09:14:40.312 2 DEBUG nova.network.neutron [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:14:41 np0005486808 nova_compute[259627]: 2025-10-14 09:14:41.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:41 np0005486808 nova_compute[259627]: 2025-10-14 09:14:41.383 2 DEBUG nova.network.neutron [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Updating instance_info_cache with network_info: [{"id": "c2ac8abe-0e61-4769-9529-3b391568e6b9", "address": "fa:16:3e:fa:70:22", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2ac8abe-0e", "ovs_interfaceid": "c2ac8abe-0e61-4769-9529-3b391568e6b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:14:41 np0005486808 nova_compute[259627]: 2025-10-14 09:14:41.418 2 DEBUG oslo_concurrency.lockutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Releasing lock "refresh_cache-da40c115-048e-4844-812e-7e65e25bfb3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:14:41 np0005486808 nova_compute[259627]: 2025-10-14 09:14:41.418 2 DEBUG nova.compute.manager [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Instance network_info: |[{"id": "c2ac8abe-0e61-4769-9529-3b391568e6b9", "address": "fa:16:3e:fa:70:22", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2ac8abe-0e", "ovs_interfaceid": "c2ac8abe-0e61-4769-9529-3b391568e6b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:14:41 np0005486808 nova_compute[259627]: 2025-10-14 09:14:41.418 2 DEBUG oslo_concurrency.lockutils [req-28071fc5-fd37-429f-9cbe-425fdc66903c req-0a055d11-28f4-4ba6-847b-c9ea1726e4d8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-da40c115-048e-4844-812e-7e65e25bfb3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:14:41 np0005486808 nova_compute[259627]: 2025-10-14 09:14:41.418 2 DEBUG nova.network.neutron [req-28071fc5-fd37-429f-9cbe-425fdc66903c req-0a055d11-28f4-4ba6-847b-c9ea1726e4d8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Refreshing network info cache for port c2ac8abe-0e61-4769-9529-3b391568e6b9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:14:41 np0005486808 nova_compute[259627]: 2025-10-14 09:14:41.421 2 DEBUG nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Start _get_guest_xml network_info=[{"id": "c2ac8abe-0e61-4769-9529-3b391568e6b9", "address": "fa:16:3e:fa:70:22", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2ac8abe-0e", "ovs_interfaceid": "c2ac8abe-0e61-4769-9529-3b391568e6b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:14:41 np0005486808 nova_compute[259627]: 2025-10-14 09:14:41.425 2 WARNING nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:14:41 np0005486808 nova_compute[259627]: 2025-10-14 09:14:41.435 2 DEBUG nova.virt.libvirt.host [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:14:41 np0005486808 nova_compute[259627]: 2025-10-14 09:14:41.435 2 DEBUG nova.virt.libvirt.host [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:14:41 np0005486808 nova_compute[259627]: 2025-10-14 09:14:41.442 2 DEBUG nova.virt.libvirt.host [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:14:41 np0005486808 nova_compute[259627]: 2025-10-14 09:14:41.442 2 DEBUG nova.virt.libvirt.host [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:14:41 np0005486808 nova_compute[259627]: 2025-10-14 09:14:41.443 2 DEBUG nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:14:41 np0005486808 nova_compute[259627]: 2025-10-14 09:14:41.443 2 DEBUG nova.virt.hardware [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:14:41 np0005486808 nova_compute[259627]: 2025-10-14 09:14:41.443 2 DEBUG nova.virt.hardware [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:14:41 np0005486808 nova_compute[259627]: 2025-10-14 09:14:41.443 2 DEBUG nova.virt.hardware [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:14:41 np0005486808 nova_compute[259627]: 2025-10-14 09:14:41.444 2 DEBUG nova.virt.hardware [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:14:41 np0005486808 nova_compute[259627]: 2025-10-14 09:14:41.444 2 DEBUG nova.virt.hardware [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:14:41 np0005486808 nova_compute[259627]: 2025-10-14 09:14:41.444 2 DEBUG nova.virt.hardware [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:14:41 np0005486808 nova_compute[259627]: 2025-10-14 09:14:41.444 2 DEBUG nova.virt.hardware [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:14:41 np0005486808 nova_compute[259627]: 2025-10-14 09:14:41.444 2 DEBUG nova.virt.hardware [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:14:41 np0005486808 nova_compute[259627]: 2025-10-14 09:14:41.445 2 DEBUG nova.virt.hardware [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:14:41 np0005486808 nova_compute[259627]: 2025-10-14 09:14:41.445 2 DEBUG nova.virt.hardware [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:14:41 np0005486808 nova_compute[259627]: 2025-10-14 09:14:41.445 2 DEBUG nova.virt.hardware [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:14:41 np0005486808 nova_compute[259627]: 2025-10-14 09:14:41.448 2 DEBUG oslo_concurrency.processutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:41 np0005486808 nova_compute[259627]: 2025-10-14 09:14:41.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:14:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3063771092' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:14:41 np0005486808 nova_compute[259627]: 2025-10-14 09:14:41.862 2 DEBUG oslo_concurrency.processutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:41 np0005486808 nova_compute[259627]: 2025-10-14 09:14:41.882 2 DEBUG nova.storage.rbd_utils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image da40c115-048e-4844-812e-7e65e25bfb3f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:41 np0005486808 nova_compute[259627]: 2025-10-14 09:14:41.886 2 DEBUG oslo_concurrency.processutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:41 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1813: 305 pgs: 305 active+clean; 246 MiB data, 801 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.9 MiB/s wr, 343 op/s
Oct 14 05:14:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:14:42 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1741655201' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:14:42 np0005486808 nova_compute[259627]: 2025-10-14 09:14:42.315 2 DEBUG oslo_concurrency.processutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:42 np0005486808 nova_compute[259627]: 2025-10-14 09:14:42.316 2 DEBUG nova.virt.libvirt.vif [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:14:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1896126591',display_name='tempest-ServersTestJSON-server-1896126591',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1896126591',id=100,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0a080fae2f3c4e39a6cca225203f5ec6',ramdisk_id='',reservation_id='r-95sxw262',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-2060951674',owner_user_name='tempest-ServersTestJSON-2060951674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:14:38Z,user_data=None,user_id='a287ef08fc5c4f218bf06cd2c7ed021e',uuid=da40c115-048e-4844-812e-7e65e25bfb3f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c2ac8abe-0e61-4769-9529-3b391568e6b9", "address": "fa:16:3e:fa:70:22", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2ac8abe-0e", "ovs_interfaceid": "c2ac8abe-0e61-4769-9529-3b391568e6b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:14:42 np0005486808 nova_compute[259627]: 2025-10-14 09:14:42.317 2 DEBUG nova.network.os_vif_util [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converting VIF {"id": "c2ac8abe-0e61-4769-9529-3b391568e6b9", "address": "fa:16:3e:fa:70:22", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2ac8abe-0e", "ovs_interfaceid": "c2ac8abe-0e61-4769-9529-3b391568e6b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:14:42 np0005486808 nova_compute[259627]: 2025-10-14 09:14:42.317 2 DEBUG nova.network.os_vif_util [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fa:70:22,bridge_name='br-int',has_traffic_filtering=True,id=c2ac8abe-0e61-4769-9529-3b391568e6b9,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2ac8abe-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:14:42 np0005486808 nova_compute[259627]: 2025-10-14 09:14:42.318 2 DEBUG nova.objects.instance [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'pci_devices' on Instance uuid da40c115-048e-4844-812e-7e65e25bfb3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:14:42 np0005486808 nova_compute[259627]: 2025-10-14 09:14:42.342 2 DEBUG nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:14:42 np0005486808 nova_compute[259627]:  <uuid>da40c115-048e-4844-812e-7e65e25bfb3f</uuid>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:  <name>instance-00000064</name>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:14:42 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:      <nova:name>tempest-ServersTestJSON-server-1896126591</nova:name>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:14:41</nova:creationTime>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:14:42 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:        <nova:user uuid="a287ef08fc5c4f218bf06cd2c7ed021e">tempest-ServersTestJSON-2060951674-project-member</nova:user>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:        <nova:project uuid="0a080fae2f3c4e39a6cca225203f5ec6">tempest-ServersTestJSON-2060951674</nova:project>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:        <nova:port uuid="c2ac8abe-0e61-4769-9529-3b391568e6b9">
Oct 14 05:14:42 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:      <entry name="serial">da40c115-048e-4844-812e-7e65e25bfb3f</entry>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:      <entry name="uuid">da40c115-048e-4844-812e-7e65e25bfb3f</entry>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:14:42 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/da40c115-048e-4844-812e-7e65e25bfb3f_disk">
Oct 14 05:14:42 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:14:42 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:14:42 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/da40c115-048e-4844-812e-7e65e25bfb3f_disk.config">
Oct 14 05:14:42 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:14:42 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:14:42 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:fa:70:22"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:      <target dev="tapc2ac8abe-0e"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:14:42 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/da40c115-048e-4844-812e-7e65e25bfb3f/console.log" append="off"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:14:42 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:14:42 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:14:42 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:14:42 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:14:42 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:14:42 np0005486808 nova_compute[259627]: 2025-10-14 09:14:42.342 2 DEBUG nova.compute.manager [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Preparing to wait for external event network-vif-plugged-c2ac8abe-0e61-4769-9529-3b391568e6b9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:14:42 np0005486808 nova_compute[259627]: 2025-10-14 09:14:42.342 2 DEBUG oslo_concurrency.lockutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "da40c115-048e-4844-812e-7e65e25bfb3f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:42 np0005486808 nova_compute[259627]: 2025-10-14 09:14:42.343 2 DEBUG oslo_concurrency.lockutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "da40c115-048e-4844-812e-7e65e25bfb3f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:42 np0005486808 nova_compute[259627]: 2025-10-14 09:14:42.343 2 DEBUG oslo_concurrency.lockutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "da40c115-048e-4844-812e-7e65e25bfb3f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:42 np0005486808 nova_compute[259627]: 2025-10-14 09:14:42.343 2 DEBUG nova.virt.libvirt.vif [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:14:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1896126591',display_name='tempest-ServersTestJSON-server-1896126591',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1896126591',id=100,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0a080fae2f3c4e39a6cca225203f5ec6',ramdisk_id='',reservation_id='r-95sxw262',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-2060951674',owner_user_name='tempest-ServersTestJSON-2060951674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:14:38Z,user_data=None,user_id='a287ef08fc5c4f218bf06cd2c7ed021e',uuid=da40c115-048e-4844-812e-7e65e25bfb3f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c2ac8abe-0e61-4769-9529-3b391568e6b9", "address": "fa:16:3e:fa:70:22", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2ac8abe-0e", "ovs_interfaceid": "c2ac8abe-0e61-4769-9529-3b391568e6b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:14:42 np0005486808 nova_compute[259627]: 2025-10-14 09:14:42.344 2 DEBUG nova.network.os_vif_util [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converting VIF {"id": "c2ac8abe-0e61-4769-9529-3b391568e6b9", "address": "fa:16:3e:fa:70:22", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2ac8abe-0e", "ovs_interfaceid": "c2ac8abe-0e61-4769-9529-3b391568e6b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:14:42 np0005486808 nova_compute[259627]: 2025-10-14 09:14:42.344 2 DEBUG nova.network.os_vif_util [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fa:70:22,bridge_name='br-int',has_traffic_filtering=True,id=c2ac8abe-0e61-4769-9529-3b391568e6b9,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2ac8abe-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:14:42 np0005486808 nova_compute[259627]: 2025-10-14 09:14:42.345 2 DEBUG os_vif [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:70:22,bridge_name='br-int',has_traffic_filtering=True,id=c2ac8abe-0e61-4769-9529-3b391568e6b9,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2ac8abe-0e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:14:42 np0005486808 nova_compute[259627]: 2025-10-14 09:14:42.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:42 np0005486808 nova_compute[259627]: 2025-10-14 09:14:42.345 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:42 np0005486808 nova_compute[259627]: 2025-10-14 09:14:42.346 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:14:42 np0005486808 nova_compute[259627]: 2025-10-14 09:14:42.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:42 np0005486808 nova_compute[259627]: 2025-10-14 09:14:42.348 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc2ac8abe-0e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:42 np0005486808 nova_compute[259627]: 2025-10-14 09:14:42.348 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc2ac8abe-0e, col_values=(('external_ids', {'iface-id': 'c2ac8abe-0e61-4769-9529-3b391568e6b9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fa:70:22', 'vm-uuid': 'da40c115-048e-4844-812e-7e65e25bfb3f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:42 np0005486808 nova_compute[259627]: 2025-10-14 09:14:42.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:42 np0005486808 NetworkManager[44885]: <info>  [1760433282.3507] manager: (tapc2ac8abe-0e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/431)
Oct 14 05:14:42 np0005486808 nova_compute[259627]: 2025-10-14 09:14:42.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:14:42 np0005486808 nova_compute[259627]: 2025-10-14 09:14:42.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:42 np0005486808 nova_compute[259627]: 2025-10-14 09:14:42.356 2 INFO os_vif [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:70:22,bridge_name='br-int',has_traffic_filtering=True,id=c2ac8abe-0e61-4769-9529-3b391568e6b9,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2ac8abe-0e')#033[00m
Oct 14 05:14:42 np0005486808 nova_compute[259627]: 2025-10-14 09:14:42.408 2 DEBUG nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:14:42 np0005486808 nova_compute[259627]: 2025-10-14 09:14:42.408 2 DEBUG nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:14:42 np0005486808 nova_compute[259627]: 2025-10-14 09:14:42.409 2 DEBUG nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] No VIF found with MAC fa:16:3e:fa:70:22, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:14:42 np0005486808 nova_compute[259627]: 2025-10-14 09:14:42.409 2 INFO nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Using config drive#033[00m
Oct 14 05:14:42 np0005486808 nova_compute[259627]: 2025-10-14 09:14:42.429 2 DEBUG nova.storage.rbd_utils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image da40c115-048e-4844-812e-7e65e25bfb3f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:14:42 np0005486808 nova_compute[259627]: 2025-10-14 09:14:42.888 2 INFO nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Creating config drive at /var/lib/nova/instances/da40c115-048e-4844-812e-7e65e25bfb3f/disk.config#033[00m
Oct 14 05:14:42 np0005486808 nova_compute[259627]: 2025-10-14 09:14:42.896 2 DEBUG oslo_concurrency.processutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/da40c115-048e-4844-812e-7e65e25bfb3f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb2bkbecw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:42 np0005486808 nova_compute[259627]: 2025-10-14 09:14:42.942 2 DEBUG nova.network.neutron [req-28071fc5-fd37-429f-9cbe-425fdc66903c req-0a055d11-28f4-4ba6-847b-c9ea1726e4d8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Updated VIF entry in instance network info cache for port c2ac8abe-0e61-4769-9529-3b391568e6b9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:14:42 np0005486808 nova_compute[259627]: 2025-10-14 09:14:42.943 2 DEBUG nova.network.neutron [req-28071fc5-fd37-429f-9cbe-425fdc66903c req-0a055d11-28f4-4ba6-847b-c9ea1726e4d8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Updating instance_info_cache with network_info: [{"id": "c2ac8abe-0e61-4769-9529-3b391568e6b9", "address": "fa:16:3e:fa:70:22", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2ac8abe-0e", "ovs_interfaceid": "c2ac8abe-0e61-4769-9529-3b391568e6b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:14:42 np0005486808 nova_compute[259627]: 2025-10-14 09:14:42.959 2 DEBUG oslo_concurrency.lockutils [req-28071fc5-fd37-429f-9cbe-425fdc66903c req-0a055d11-28f4-4ba6-847b-c9ea1726e4d8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-da40c115-048e-4844-812e-7e65e25bfb3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:14:42 np0005486808 nova_compute[259627]: 2025-10-14 09:14:42.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:14:43 np0005486808 nova_compute[259627]: 2025-10-14 09:14:43.051 2 DEBUG oslo_concurrency.processutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/da40c115-048e-4844-812e-7e65e25bfb3f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb2bkbecw" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:43 np0005486808 nova_compute[259627]: 2025-10-14 09:14:43.087 2 DEBUG nova.storage.rbd_utils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image da40c115-048e-4844-812e-7e65e25bfb3f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:43 np0005486808 nova_compute[259627]: 2025-10-14 09:14:43.090 2 DEBUG oslo_concurrency.processutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/da40c115-048e-4844-812e-7e65e25bfb3f/disk.config da40c115-048e-4844-812e-7e65e25bfb3f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:14:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:14:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:14:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:14:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0018610456550758612 of space, bias 1.0, pg target 0.5583136965227583 quantized to 32 (current 32)
Oct 14 05:14:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:14:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:14:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:14:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:14:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:14:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 14 05:14:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:14:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:14:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:14:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:14:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:14:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:14:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:14:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:14:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:14:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:14:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:14:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:14:43 np0005486808 nova_compute[259627]: 2025-10-14 09:14:43.298 2 DEBUG oslo_concurrency.processutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/da40c115-048e-4844-812e-7e65e25bfb3f/disk.config da40c115-048e-4844-812e-7e65e25bfb3f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.208s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:43 np0005486808 nova_compute[259627]: 2025-10-14 09:14:43.299 2 INFO nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Deleting local config drive /var/lib/nova/instances/da40c115-048e-4844-812e-7e65e25bfb3f/disk.config because it was imported into RBD.#033[00m
Oct 14 05:14:43 np0005486808 kernel: tapc2ac8abe-0e: entered promiscuous mode
Oct 14 05:14:43 np0005486808 NetworkManager[44885]: <info>  [1760433283.3424] manager: (tapc2ac8abe-0e): new Tun device (/org/freedesktop/NetworkManager/Devices/432)
Oct 14 05:14:43 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:43Z|01054|binding|INFO|Claiming lport c2ac8abe-0e61-4769-9529-3b391568e6b9 for this chassis.
Oct 14 05:14:43 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:43Z|01055|binding|INFO|c2ac8abe-0e61-4769-9529-3b391568e6b9: Claiming fa:16:3e:fa:70:22 10.100.0.3
Oct 14 05:14:43 np0005486808 nova_compute[259627]: 2025-10-14 09:14:43.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:43.353 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fa:70:22 10.100.0.3'], port_security=['fa:16:3e:fa:70:22 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'da40c115-048e-4844-812e-7e65e25bfb3f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbecee11-4892-4e36-88d8-98879af7bb1e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a080fae2f3c4e39a6cca225203f5ec6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '03d7ceab-aac8-44ec-88a3-f0ef79d050f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d77764d-de2c-4408-8882-77d03cb7288a, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=c2ac8abe-0e61-4769-9529-3b391568e6b9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:14:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:43.354 162547 INFO neutron.agent.ovn.metadata.agent [-] Port c2ac8abe-0e61-4769-9529-3b391568e6b9 in datapath fbecee11-4892-4e36-88d8-98879af7bb1e bound to our chassis#033[00m
Oct 14 05:14:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:43.355 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbecee11-4892-4e36-88d8-98879af7bb1e#033[00m
Oct 14 05:14:43 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:43Z|01056|binding|INFO|Setting lport c2ac8abe-0e61-4769-9529-3b391568e6b9 ovn-installed in OVS
Oct 14 05:14:43 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:43Z|01057|binding|INFO|Setting lport c2ac8abe-0e61-4769-9529-3b391568e6b9 up in Southbound
Oct 14 05:14:43 np0005486808 nova_compute[259627]: 2025-10-14 09:14:43.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:43.374 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e5a07d26-8721-4e57-b5cc-dfdefbae537d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:43 np0005486808 nova_compute[259627]: 2025-10-14 09:14:43.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:43 np0005486808 systemd-udevd[358429]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:14:43 np0005486808 systemd-machined[214636]: New machine qemu-125-instance-00000064.
Oct 14 05:14:43 np0005486808 NetworkManager[44885]: <info>  [1760433283.3938] device (tapc2ac8abe-0e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:14:43 np0005486808 NetworkManager[44885]: <info>  [1760433283.3951] device (tapc2ac8abe-0e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:14:43 np0005486808 systemd[1]: Started Virtual Machine qemu-125-instance-00000064.
Oct 14 05:14:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:43.420 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a2e84414-eba8-47bf-b53a-22e5fdc68745]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:43.424 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[2adbc5c8-14b1-4624-89fd-a042a249cca5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:43.450 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5b0d3674-0dcf-4847-83c8-8a4985c1802a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:43 np0005486808 nova_compute[259627]: 2025-10-14 09:14:43.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:43.469 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[67346800-02d2-4020-9c53-e327a4256ea7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbecee11-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:b3:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 916, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 916, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 700079, 'reachable_time': 38382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358442, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:43.485 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[69d58679-71df-4994-9a00-664e78874e50]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700095, 'tstamp': 700095}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358443, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700099, 'tstamp': 700099}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358443, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:43.487 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbecee11-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:43 np0005486808 nova_compute[259627]: 2025-10-14 09:14:43.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:43 np0005486808 nova_compute[259627]: 2025-10-14 09:14:43.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:43.490 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbecee11-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:43.491 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:14:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:43.491 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbecee11-40, col_values=(('external_ids', {'iface-id': 'b1a95e67-ca39-4a56-9a91-3165df166ea9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:43.492 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:14:43 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1814: 305 pgs: 305 active+clean; 246 MiB data, 801 MiB used, 59 GiB / 60 GiB avail; 409 KiB/s rd, 3.9 MiB/s wr, 193 op/s
Oct 14 05:14:44 np0005486808 nova_compute[259627]: 2025-10-14 09:14:44.290 2 INFO nova.compute.manager [None req-b239f489-a952-4693-b750-ed4aad04c25b e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Get console output#033[00m
Oct 14 05:14:44 np0005486808 nova_compute[259627]: 2025-10-14 09:14:44.296 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 14 05:14:44 np0005486808 nova_compute[259627]: 2025-10-14 09:14:44.383 2 DEBUG oslo_concurrency.lockutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Acquiring lock "e1aea504-3ecf-4273-a867-66afb39de726" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:44 np0005486808 nova_compute[259627]: 2025-10-14 09:14:44.383 2 DEBUG oslo_concurrency.lockutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:44 np0005486808 nova_compute[259627]: 2025-10-14 09:14:44.403 2 DEBUG nova.compute.manager [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:14:44 np0005486808 nova_compute[259627]: 2025-10-14 09:14:44.488 2 DEBUG oslo_concurrency.lockutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:44 np0005486808 nova_compute[259627]: 2025-10-14 09:14:44.489 2 DEBUG oslo_concurrency.lockutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:44 np0005486808 nova_compute[259627]: 2025-10-14 09:14:44.498 2 DEBUG nova.virt.hardware [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:14:44 np0005486808 nova_compute[259627]: 2025-10-14 09:14:44.499 2 INFO nova.compute.claims [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:14:44 np0005486808 nova_compute[259627]: 2025-10-14 09:14:44.704 2 DEBUG oslo_concurrency.processutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:44 np0005486808 nova_compute[259627]: 2025-10-14 09:14:44.754 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433284.7283974, da40c115-048e-4844-812e-7e65e25bfb3f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:14:44 np0005486808 nova_compute[259627]: 2025-10-14 09:14:44.755 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] VM Started (Lifecycle Event)#033[00m
Oct 14 05:14:44 np0005486808 nova_compute[259627]: 2025-10-14 09:14:44.782 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:14:44 np0005486808 nova_compute[259627]: 2025-10-14 09:14:44.788 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433284.7291455, da40c115-048e-4844-812e-7e65e25bfb3f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:14:44 np0005486808 nova_compute[259627]: 2025-10-14 09:14:44.789 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:14:44 np0005486808 nova_compute[259627]: 2025-10-14 09:14:44.812 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:14:44 np0005486808 nova_compute[259627]: 2025-10-14 09:14:44.817 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:14:44 np0005486808 nova_compute[259627]: 2025-10-14 09:14:44.837 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.088 2 DEBUG nova.compute.manager [req-f33d7db3-131d-4b99-8f25-f58536363c79 req-5cd5e039-f442-4358-a007-01031b8cd70b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received event network-changed-7103ce4a-69e8-454b-aed3-251ecb109232 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.088 2 DEBUG nova.compute.manager [req-f33d7db3-131d-4b99-8f25-f58536363c79 req-5cd5e039-f442-4358-a007-01031b8cd70b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Refreshing instance network info cache due to event network-changed-7103ce4a-69e8-454b-aed3-251ecb109232. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.089 2 DEBUG oslo_concurrency.lockutils [req-f33d7db3-131d-4b99-8f25-f58536363c79 req-5cd5e039-f442-4358-a007-01031b8cd70b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-b595141f-123e-4250-bfec-888d866fd0c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.089 2 DEBUG oslo_concurrency.lockutils [req-f33d7db3-131d-4b99-8f25-f58536363c79 req-5cd5e039-f442-4358-a007-01031b8cd70b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-b595141f-123e-4250-bfec-888d866fd0c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.089 2 DEBUG nova.network.neutron [req-f33d7db3-131d-4b99-8f25-f58536363c79 req-5cd5e039-f442-4358-a007-01031b8cd70b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Refreshing network info cache for port 7103ce4a-69e8-454b-aed3-251ecb109232 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:14:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:14:45 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3829793887' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.167 2 DEBUG oslo_concurrency.processutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.177 2 DEBUG nova.compute.provider_tree [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.203 2 DEBUG nova.scheduler.client.report [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.210 2 DEBUG oslo_concurrency.lockutils [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "b595141f-123e-4250-bfec-888d866fd0c6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.211 2 DEBUG oslo_concurrency.lockutils [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.211 2 DEBUG oslo_concurrency.lockutils [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "b595141f-123e-4250-bfec-888d866fd0c6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.211 2 DEBUG oslo_concurrency.lockutils [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.212 2 DEBUG oslo_concurrency.lockutils [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.214 2 INFO nova.compute.manager [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Terminating instance#033[00m
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.216 2 DEBUG nova.compute.manager [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.244 2 DEBUG oslo_concurrency.lockutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.245 2 DEBUG nova.compute.manager [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:14:45 np0005486808 kernel: tap7103ce4a-69 (unregistering): left promiscuous mode
Oct 14 05:14:45 np0005486808 NetworkManager[44885]: <info>  [1760433285.2867] device (tap7103ce4a-69): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.287 2 DEBUG nova.compute.manager [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.287 2 DEBUG nova.network.neutron [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:14:45 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:45Z|01058|binding|INFO|Releasing lport 7103ce4a-69e8-454b-aed3-251ecb109232 from this chassis (sb_readonly=0)
Oct 14 05:14:45 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:45Z|01059|binding|INFO|Setting lport 7103ce4a-69e8-454b-aed3-251ecb109232 down in Southbound
Oct 14 05:14:45 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:45Z|01060|binding|INFO|Removing iface tap7103ce4a-69 ovn-installed in OVS
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:45.307 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:3c:de 10.100.0.5'], port_security=['fa:16:3e:9d:3c:de 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b595141f-123e-4250-bfec-888d866fd0c6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-563aa000-400f-4c19-ba83-9377cc50d29f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d24993a343a425dbddac7e32be0c86b', 'neutron:revision_number': '6', 'neutron:security_group_ids': '97c0b0e2-9440-40e7-a61b-2eb79520e4e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1e572d4-df42-4a93-ac49-d93e0906a5ee, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=7103ce4a-69e8-454b-aed3-251ecb109232) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:14:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:45.308 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 7103ce4a-69e8-454b-aed3-251ecb109232 in datapath 563aa000-400f-4c19-ba83-9377cc50d29f unbound from our chassis#033[00m
Oct 14 05:14:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:45.310 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 563aa000-400f-4c19-ba83-9377cc50d29f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.309 2 INFO nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:14:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:45.310 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[24ac04b8-aa29-42ad-bf31-a365e02f35c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:45.311 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f namespace which is not needed anymore#033[00m
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.333 2 DEBUG nova.compute.manager [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:14:45 np0005486808 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000060.scope: Deactivated successfully.
Oct 14 05:14:45 np0005486808 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000060.scope: Consumed 13.468s CPU time.
Oct 14 05:14:45 np0005486808 systemd-machined[214636]: Machine qemu-123-instance-00000060 terminated.
Oct 14 05:14:45 np0005486808 kernel: tap7103ce4a-69: entered promiscuous mode
Oct 14 05:14:45 np0005486808 NetworkManager[44885]: <info>  [1760433285.4466] manager: (tap7103ce4a-69): new Tun device (/org/freedesktop/NetworkManager/Devices/433)
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:45 np0005486808 kernel: tap7103ce4a-69 (unregistering): left promiscuous mode
Oct 14 05:14:45 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:45Z|01061|binding|INFO|Claiming lport 7103ce4a-69e8-454b-aed3-251ecb109232 for this chassis.
Oct 14 05:14:45 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:45Z|01062|binding|INFO|7103ce4a-69e8-454b-aed3-251ecb109232: Claiming fa:16:3e:9d:3c:de 10.100.0.5
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.456 2 DEBUG nova.compute.manager [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:14:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:45.458 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:3c:de 10.100.0.5'], port_security=['fa:16:3e:9d:3c:de 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b595141f-123e-4250-bfec-888d866fd0c6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-563aa000-400f-4c19-ba83-9377cc50d29f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d24993a343a425dbddac7e32be0c86b', 'neutron:revision_number': '6', 'neutron:security_group_ids': '97c0b0e2-9440-40e7-a61b-2eb79520e4e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1e572d4-df42-4a93-ac49-d93e0906a5ee, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=7103ce4a-69e8-454b-aed3-251ecb109232) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.458 2 DEBUG nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.458 2 INFO nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Creating image(s)#033[00m
Oct 14 05:14:45 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:45Z|01063|binding|INFO|Setting lport 7103ce4a-69e8-454b-aed3-251ecb109232 ovn-installed in OVS
Oct 14 05:14:45 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:45Z|01064|binding|INFO|Setting lport 7103ce4a-69e8-454b-aed3-251ecb109232 up in Southbound
Oct 14 05:14:45 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:45Z|01065|binding|INFO|Releasing lport 7103ce4a-69e8-454b-aed3-251ecb109232 from this chassis (sb_readonly=1)
Oct 14 05:14:45 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:45Z|01066|if_status|INFO|Dropped 5 log messages in last 181 seconds (most recently, 175 seconds ago) due to excessive rate
Oct 14 05:14:45 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:45Z|01067|if_status|INFO|Not setting lport 7103ce4a-69e8-454b-aed3-251ecb109232 down as sb is readonly
Oct 14 05:14:45 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:45Z|01068|binding|INFO|Removing iface tap7103ce4a-69 ovn-installed in OVS
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.498 2 DEBUG nova.storage.rbd_utils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] rbd image e1aea504-3ecf-4273-a867-66afb39de726_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:45 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:45Z|01069|binding|INFO|Releasing lport 7103ce4a-69e8-454b-aed3-251ecb109232 from this chassis (sb_readonly=0)
Oct 14 05:14:45 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:45Z|01070|binding|INFO|Setting lport 7103ce4a-69e8-454b-aed3-251ecb109232 down in Southbound
Oct 14 05:14:45 np0005486808 neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f[357701]: [NOTICE]   (357705) : haproxy version is 2.8.14-c23fe91
Oct 14 05:14:45 np0005486808 neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f[357701]: [NOTICE]   (357705) : path to executable is /usr/sbin/haproxy
Oct 14 05:14:45 np0005486808 neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f[357701]: [WARNING]  (357705) : Exiting Master process...
Oct 14 05:14:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:45.509 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:3c:de 10.100.0.5'], port_security=['fa:16:3e:9d:3c:de 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b595141f-123e-4250-bfec-888d866fd0c6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-563aa000-400f-4c19-ba83-9377cc50d29f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d24993a343a425dbddac7e32be0c86b', 'neutron:revision_number': '6', 'neutron:security_group_ids': '97c0b0e2-9440-40e7-a61b-2eb79520e4e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1e572d4-df42-4a93-ac49-d93e0906a5ee, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=7103ce4a-69e8-454b-aed3-251ecb109232) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:14:45 np0005486808 neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f[357701]: [ALERT]    (357705) : Current worker (357707) exited with code 143 (Terminated)
Oct 14 05:14:45 np0005486808 neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f[357701]: [WARNING]  (357705) : All workers exited. Exiting... (0)
Oct 14 05:14:45 np0005486808 systemd[1]: libpod-002f8ea4edd0d5f9a5b9c27d614712b9808a22cc899bd4e7acc5c3f7137894e2.scope: Deactivated successfully.
Oct 14 05:14:45 np0005486808 podman[358529]: 2025-10-14 09:14:45.52182249 +0000 UTC m=+0.086527274 container died 002f8ea4edd0d5f9a5b9c27d614712b9808a22cc899bd4e7acc5c3f7137894e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.543 2 DEBUG nova.storage.rbd_utils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] rbd image e1aea504-3ecf-4273-a867-66afb39de726_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:45 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-002f8ea4edd0d5f9a5b9c27d614712b9808a22cc899bd4e7acc5c3f7137894e2-userdata-shm.mount: Deactivated successfully.
Oct 14 05:14:45 np0005486808 systemd[1]: var-lib-containers-storage-overlay-458b2af1b6a32b5e1572566cdbc152a3e460f3972a3a382274f2a47523b42e89-merged.mount: Deactivated successfully.
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.574 2 DEBUG nova.storage.rbd_utils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] rbd image e1aea504-3ecf-4273-a867-66afb39de726_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.579 2 DEBUG oslo_concurrency.processutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:45 np0005486808 podman[358529]: 2025-10-14 09:14:45.580195439 +0000 UTC m=+0.144900223 container cleanup 002f8ea4edd0d5f9a5b9c27d614712b9808a22cc899bd4e7acc5c3f7137894e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 05:14:45 np0005486808 systemd[1]: libpod-conmon-002f8ea4edd0d5f9a5b9c27d614712b9808a22cc899bd4e7acc5c3f7137894e2.scope: Deactivated successfully.
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.631 2 INFO nova.virt.libvirt.driver [-] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Instance destroyed successfully.#033[00m
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.632 2 DEBUG nova.objects.instance [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'resources' on Instance uuid b595141f-123e-4250-bfec-888d866fd0c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:14:45 np0005486808 podman[358616]: 2025-10-14 09:14:45.64841835 +0000 UTC m=+0.044876137 container remove 002f8ea4edd0d5f9a5b9c27d614712b9808a22cc899bd4e7acc5c3f7137894e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.649 2 DEBUG nova.virt.libvirt.vif [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-14T09:13:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1399788817',display_name='tempest-TestNetworkAdvancedServerOps-server-1399788817',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1399788817',id=96,image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKu+EdNTbLcJJySiqS/IEqa/tEPsTFHNnnibtN2r3Vh53iyUeSuOyPo0wb3WDr0n5pX4AT4Pz90bVLkNIFNLc4XvdDhQRiV0WmHkT7tU8LMbcG0FpGnJS7D9bMgie0zW1g==',key_name='tempest-TestNetworkAdvancedServerOps-1640438953',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:14:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2d24993a343a425dbddac7e32be0c86b',ramdisk_id='',reservation_id='r-qh9caax0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='e2368e3e-f504-40e6-a9d3-67df18c845bb',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-94788416',owner_user_name='tempest-TestNetworkAdvancedServerOps-94788416-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:14:24Z,user_data=None,user_id='e992bcb79c4946a8985e3df25eb216ca',uuid=b595141f-123e-4250-bfec-888d866fd0c6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7103ce4a-69e8-454b-aed3-251ecb109232", "address": "fa:16:3e:9d:3c:de", "network": {"id": "563aa000-400f-4c19-ba83-9377cc50d29f", "bridge": "br-int", "label": "tempest-network-smoke--1642362496", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7103ce4a-69", "ovs_interfaceid": "7103ce4a-69e8-454b-aed3-251ecb109232", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.650 2 DEBUG nova.network.os_vif_util [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converting VIF {"id": "7103ce4a-69e8-454b-aed3-251ecb109232", "address": "fa:16:3e:9d:3c:de", "network": {"id": "563aa000-400f-4c19-ba83-9377cc50d29f", "bridge": "br-int", "label": "tempest-network-smoke--1642362496", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7103ce4a-69", "ovs_interfaceid": "7103ce4a-69e8-454b-aed3-251ecb109232", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.651 2 DEBUG nova.network.os_vif_util [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9d:3c:de,bridge_name='br-int',has_traffic_filtering=True,id=7103ce4a-69e8-454b-aed3-251ecb109232,network=Network(563aa000-400f-4c19-ba83-9377cc50d29f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7103ce4a-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.651 2 DEBUG os_vif [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:3c:de,bridge_name='br-int',has_traffic_filtering=True,id=7103ce4a-69e8-454b-aed3-251ecb109232,network=Network(563aa000-400f-4c19-ba83-9377cc50d29f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7103ce4a-69') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.654 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7103ce4a-69, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.660 2 INFO os_vif [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:3c:de,bridge_name='br-int',has_traffic_filtering=True,id=7103ce4a-69e8-454b-aed3-251ecb109232,network=Network(563aa000-400f-4c19-ba83-9377cc50d29f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7103ce4a-69')#033[00m
Oct 14 05:14:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:45.660 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fbc636df-be5f-413c-a2a1-e43b1196e138]: (4, ('Tue Oct 14 09:14:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f (002f8ea4edd0d5f9a5b9c27d614712b9808a22cc899bd4e7acc5c3f7137894e2)\n002f8ea4edd0d5f9a5b9c27d614712b9808a22cc899bd4e7acc5c3f7137894e2\nTue Oct 14 09:14:45 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f (002f8ea4edd0d5f9a5b9c27d614712b9808a22cc899bd4e7acc5c3f7137894e2)\n002f8ea4edd0d5f9a5b9c27d614712b9808a22cc899bd4e7acc5c3f7137894e2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:45.662 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[838594cf-2e9d-474e-8138-45856123dd42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:45.663 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap563aa000-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:45 np0005486808 kernel: tap563aa000-40: left promiscuous mode
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.686 2 DEBUG nova.policy [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '648aaa75d8974d439d8ebe331c3d6568', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c4eae338e7d54d159033a20bc7460935', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:14:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:45.687 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c227651f-7d23-40ad-8047-a2efdb0d4e01]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.691 2 DEBUG oslo_concurrency.processutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.112s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.692 2 DEBUG oslo_concurrency.lockutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.693 2 DEBUG oslo_concurrency.lockutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.693 2 DEBUG oslo_concurrency.lockutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:45.716 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[462b6065-e959-449e-a8b7-311a97bd478c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:45.718 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[266d1ee0-831f-4c6b-a667-c6e69603b1f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.729 2 DEBUG nova.storage.rbd_utils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] rbd image e1aea504-3ecf-4273-a867-66afb39de726_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:45 np0005486808 nova_compute[259627]: 2025-10-14 09:14:45.733 2 DEBUG oslo_concurrency.processutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 e1aea504-3ecf-4273-a867-66afb39de726_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:45.736 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f74f320e-d0a6-4926-8707-79ce1e8223e4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704213, 'reachable_time': 25608, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358665, 'error': None, 'target': 'ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:45 np0005486808 systemd[1]: run-netns-ovnmeta\x2d563aa000\x2d400f\x2d4c19\x2dba83\x2d9377cc50d29f.mount: Deactivated successfully.
Oct 14 05:14:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:45.741 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-563aa000-400f-4c19-ba83-9377cc50d29f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:14:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:45.741 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[b1cb660d-bae7-4c1a-bf56-6d1b1f7e0f35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:45.743 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 7103ce4a-69e8-454b-aed3-251ecb109232 in datapath 563aa000-400f-4c19-ba83-9377cc50d29f unbound from our chassis#033[00m
Oct 14 05:14:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:45.745 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 563aa000-400f-4c19-ba83-9377cc50d29f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:14:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:45.746 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b4206cc4-742b-46c7-9d10-e489a75164d4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:45.747 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 7103ce4a-69e8-454b-aed3-251ecb109232 in datapath 563aa000-400f-4c19-ba83-9377cc50d29f unbound from our chassis#033[00m
Oct 14 05:14:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:45.748 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 563aa000-400f-4c19-ba83-9377cc50d29f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:14:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:45.748 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8454cf2d-6241-42f2-8bff-11e0224876f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:45 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1815: 305 pgs: 305 active+clean; 246 MiB data, 801 MiB used, 59 GiB / 60 GiB avail; 423 KiB/s rd, 3.9 MiB/s wr, 215 op/s
Oct 14 05:14:46 np0005486808 nova_compute[259627]: 2025-10-14 09:14:46.071 2 DEBUG nova.compute.manager [req-6d5cd04f-fe23-43fd-b8b9-00c4123f992a req-424d326c-d466-4acf-b744-808d9a140f05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received event network-vif-unplugged-7103ce4a-69e8-454b-aed3-251ecb109232 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:46 np0005486808 nova_compute[259627]: 2025-10-14 09:14:46.071 2 DEBUG oslo_concurrency.lockutils [req-6d5cd04f-fe23-43fd-b8b9-00c4123f992a req-424d326c-d466-4acf-b744-808d9a140f05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b595141f-123e-4250-bfec-888d866fd0c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:46 np0005486808 nova_compute[259627]: 2025-10-14 09:14:46.072 2 DEBUG oslo_concurrency.lockutils [req-6d5cd04f-fe23-43fd-b8b9-00c4123f992a req-424d326c-d466-4acf-b744-808d9a140f05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:46 np0005486808 nova_compute[259627]: 2025-10-14 09:14:46.072 2 DEBUG oslo_concurrency.lockutils [req-6d5cd04f-fe23-43fd-b8b9-00c4123f992a req-424d326c-d466-4acf-b744-808d9a140f05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:46 np0005486808 nova_compute[259627]: 2025-10-14 09:14:46.073 2 DEBUG nova.compute.manager [req-6d5cd04f-fe23-43fd-b8b9-00c4123f992a req-424d326c-d466-4acf-b744-808d9a140f05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] No waiting events found dispatching network-vif-unplugged-7103ce4a-69e8-454b-aed3-251ecb109232 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:14:46 np0005486808 nova_compute[259627]: 2025-10-14 09:14:46.073 2 DEBUG nova.compute.manager [req-6d5cd04f-fe23-43fd-b8b9-00c4123f992a req-424d326c-d466-4acf-b744-808d9a140f05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received event network-vif-unplugged-7103ce4a-69e8-454b-aed3-251ecb109232 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:14:46 np0005486808 nova_compute[259627]: 2025-10-14 09:14:46.075 2 DEBUG oslo_concurrency.processutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 e1aea504-3ecf-4273-a867-66afb39de726_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.342s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:46 np0005486808 nova_compute[259627]: 2025-10-14 09:14:46.140 2 DEBUG nova.storage.rbd_utils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] resizing rbd image e1aea504-3ecf-4273-a867-66afb39de726_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:14:46 np0005486808 nova_compute[259627]: 2025-10-14 09:14:46.220 2 DEBUG nova.objects.instance [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lazy-loading 'migration_context' on Instance uuid e1aea504-3ecf-4273-a867-66afb39de726 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:14:46 np0005486808 nova_compute[259627]: 2025-10-14 09:14:46.239 2 DEBUG nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:14:46 np0005486808 nova_compute[259627]: 2025-10-14 09:14:46.239 2 DEBUG nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Ensure instance console log exists: /var/lib/nova/instances/e1aea504-3ecf-4273-a867-66afb39de726/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:14:46 np0005486808 nova_compute[259627]: 2025-10-14 09:14:46.241 2 DEBUG oslo_concurrency.lockutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:46 np0005486808 nova_compute[259627]: 2025-10-14 09:14:46.241 2 DEBUG oslo_concurrency.lockutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:46 np0005486808 nova_compute[259627]: 2025-10-14 09:14:46.241 2 DEBUG oslo_concurrency.lockutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:46 np0005486808 nova_compute[259627]: 2025-10-14 09:14:46.316 2 INFO nova.virt.libvirt.driver [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Deleting instance files /var/lib/nova/instances/b595141f-123e-4250-bfec-888d866fd0c6_del#033[00m
Oct 14 05:14:46 np0005486808 nova_compute[259627]: 2025-10-14 09:14:46.317 2 INFO nova.virt.libvirt.driver [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Deletion of /var/lib/nova/instances/b595141f-123e-4250-bfec-888d866fd0c6_del complete#033[00m
Oct 14 05:14:46 np0005486808 nova_compute[259627]: 2025-10-14 09:14:46.320 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433271.3119168, 17db7f38-479a-4d56-9424-7f5ab695ccea => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:14:46 np0005486808 nova_compute[259627]: 2025-10-14 09:14:46.320 2 INFO nova.compute.manager [-] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:14:46 np0005486808 nova_compute[259627]: 2025-10-14 09:14:46.364 2 DEBUG nova.compute.manager [None req-b821c27e-6f1c-485b-8ca6-75bdfd225c1a - - - - - -] [instance: 17db7f38-479a-4d56-9424-7f5ab695ccea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:14:46 np0005486808 nova_compute[259627]: 2025-10-14 09:14:46.381 2 INFO nova.compute.manager [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Took 1.16 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:14:46 np0005486808 nova_compute[259627]: 2025-10-14 09:14:46.382 2 DEBUG oslo.service.loopingcall [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:14:46 np0005486808 nova_compute[259627]: 2025-10-14 09:14:46.382 2 DEBUG nova.compute.manager [-] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:14:46 np0005486808 nova_compute[259627]: 2025-10-14 09:14:46.382 2 DEBUG nova.network.neutron [-] [instance: b595141f-123e-4250-bfec-888d866fd0c6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:14:46 np0005486808 nova_compute[259627]: 2025-10-14 09:14:46.459 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433271.4584494, 2534f8b9-e832-4b78-ada4-e551429bdc75 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:14:46 np0005486808 nova_compute[259627]: 2025-10-14 09:14:46.460 2 INFO nova.compute.manager [-] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:14:46 np0005486808 nova_compute[259627]: 2025-10-14 09:14:46.484 2 DEBUG nova.compute.manager [None req-4c645e01-d5e3-4fa8-93c0-12866a61f252 - - - - - -] [instance: 2534f8b9-e832-4b78-ada4-e551429bdc75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:14:46 np0005486808 nova_compute[259627]: 2025-10-14 09:14:46.783 2 DEBUG nova.network.neutron [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Successfully created port: 175a9914-0068-4aeb-b4b6-d501212d3374 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.041 2 DEBUG nova.network.neutron [req-f33d7db3-131d-4b99-8f25-f58536363c79 req-5cd5e039-f442-4358-a007-01031b8cd70b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Updated VIF entry in instance network info cache for port 7103ce4a-69e8-454b-aed3-251ecb109232. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.042 2 DEBUG nova.network.neutron [req-f33d7db3-131d-4b99-8f25-f58536363c79 req-5cd5e039-f442-4358-a007-01031b8cd70b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Updating instance_info_cache with network_info: [{"id": "7103ce4a-69e8-454b-aed3-251ecb109232", "address": "fa:16:3e:9d:3c:de", "network": {"id": "563aa000-400f-4c19-ba83-9377cc50d29f", "bridge": "br-int", "label": "tempest-network-smoke--1642362496", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7103ce4a-69", "ovs_interfaceid": "7103ce4a-69e8-454b-aed3-251ecb109232", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.065 2 DEBUG oslo_concurrency.lockutils [req-f33d7db3-131d-4b99-8f25-f58536363c79 req-5cd5e039-f442-4358-a007-01031b8cd70b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-b595141f-123e-4250-bfec-888d866fd0c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.092 2 DEBUG nova.network.neutron [-] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.114 2 INFO nova.compute.manager [-] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Took 0.73 seconds to deallocate network for instance.#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.159 2 DEBUG oslo_concurrency.lockutils [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.159 2 DEBUG oslo_concurrency.lockutils [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.276 2 DEBUG oslo_concurrency.processutils [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.526 2 DEBUG nova.compute.manager [req-b67a9aac-31b2-4b1c-a1ea-b6a784091878 req-cf30b4ad-0e6d-4581-98ca-aecd0f202df4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Received event network-vif-plugged-c2ac8abe-0e61-4769-9529-3b391568e6b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.527 2 DEBUG oslo_concurrency.lockutils [req-b67a9aac-31b2-4b1c-a1ea-b6a784091878 req-cf30b4ad-0e6d-4581-98ca-aecd0f202df4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "da40c115-048e-4844-812e-7e65e25bfb3f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.528 2 DEBUG oslo_concurrency.lockutils [req-b67a9aac-31b2-4b1c-a1ea-b6a784091878 req-cf30b4ad-0e6d-4581-98ca-aecd0f202df4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "da40c115-048e-4844-812e-7e65e25bfb3f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.528 2 DEBUG oslo_concurrency.lockutils [req-b67a9aac-31b2-4b1c-a1ea-b6a784091878 req-cf30b4ad-0e6d-4581-98ca-aecd0f202df4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "da40c115-048e-4844-812e-7e65e25bfb3f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.528 2 DEBUG nova.compute.manager [req-b67a9aac-31b2-4b1c-a1ea-b6a784091878 req-cf30b4ad-0e6d-4581-98ca-aecd0f202df4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Processing event network-vif-plugged-c2ac8abe-0e61-4769-9529-3b391568e6b9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.529 2 DEBUG nova.compute.manager [req-b67a9aac-31b2-4b1c-a1ea-b6a784091878 req-cf30b4ad-0e6d-4581-98ca-aecd0f202df4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Received event network-vif-plugged-c2ac8abe-0e61-4769-9529-3b391568e6b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.529 2 DEBUG oslo_concurrency.lockutils [req-b67a9aac-31b2-4b1c-a1ea-b6a784091878 req-cf30b4ad-0e6d-4581-98ca-aecd0f202df4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "da40c115-048e-4844-812e-7e65e25bfb3f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.530 2 DEBUG oslo_concurrency.lockutils [req-b67a9aac-31b2-4b1c-a1ea-b6a784091878 req-cf30b4ad-0e6d-4581-98ca-aecd0f202df4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "da40c115-048e-4844-812e-7e65e25bfb3f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.530 2 DEBUG oslo_concurrency.lockutils [req-b67a9aac-31b2-4b1c-a1ea-b6a784091878 req-cf30b4ad-0e6d-4581-98ca-aecd0f202df4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "da40c115-048e-4844-812e-7e65e25bfb3f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.531 2 DEBUG nova.compute.manager [req-b67a9aac-31b2-4b1c-a1ea-b6a784091878 req-cf30b4ad-0e6d-4581-98ca-aecd0f202df4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] No waiting events found dispatching network-vif-plugged-c2ac8abe-0e61-4769-9529-3b391568e6b9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.531 2 WARNING nova.compute.manager [req-b67a9aac-31b2-4b1c-a1ea-b6a784091878 req-cf30b4ad-0e6d-4581-98ca-aecd0f202df4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Received unexpected event network-vif-plugged-c2ac8abe-0e61-4769-9529-3b391568e6b9 for instance with vm_state building and task_state spawning.#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.531 2 DEBUG nova.compute.manager [req-b67a9aac-31b2-4b1c-a1ea-b6a784091878 req-cf30b4ad-0e6d-4581-98ca-aecd0f202df4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received event network-vif-deleted-7103ce4a-69e8-454b-aed3-251ecb109232 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.533 2 DEBUG nova.compute.manager [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.542 2 DEBUG nova.network.neutron [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Successfully updated port: 175a9914-0068-4aeb-b4b6-d501212d3374 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.546 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433287.5458107, da40c115-048e-4844-812e-7e65e25bfb3f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.546 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.549 2 DEBUG nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.554 2 INFO nova.virt.libvirt.driver [-] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Instance spawned successfully.#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.554 2 DEBUG nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.560 2 DEBUG oslo_concurrency.lockutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Acquiring lock "refresh_cache-e1aea504-3ecf-4273-a867-66afb39de726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.561 2 DEBUG oslo_concurrency.lockutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Acquired lock "refresh_cache-e1aea504-3ecf-4273-a867-66afb39de726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.561 2 DEBUG nova.network.neutron [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.583 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.593 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.598 2 DEBUG nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.599 2 DEBUG nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.599 2 DEBUG nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.600 2 DEBUG nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.600 2 DEBUG nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.601 2 DEBUG nova.virt.libvirt.driver [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.636 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.673 2 INFO nova.compute.manager [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Took 9.30 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.673 2 DEBUG nova.compute.manager [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.745 2 INFO nova.compute.manager [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Took 10.31 seconds to build instance.#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.764 2 DEBUG oslo_concurrency.lockutils [None req-cddd027c-6e81-4b17-8deb-0e2741adfea1 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "da40c115-048e-4844-812e-7e65e25bfb3f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.423s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.765 2 DEBUG nova.network.neutron [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:14:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:14:47 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/71581542' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.833 2 DEBUG oslo_concurrency.processutils [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.841 2 DEBUG nova.compute.provider_tree [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:14:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.862 2 DEBUG nova.scheduler.client.report [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.888 2 DEBUG oslo_concurrency.lockutils [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:47 np0005486808 nova_compute[259627]: 2025-10-14 09:14:47.933 2 INFO nova.scheduler.client.report [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Deleted allocations for instance b595141f-123e-4250-bfec-888d866fd0c6#033[00m
Oct 14 05:14:47 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1816: 305 pgs: 305 active+clean; 246 MiB data, 801 MiB used, 59 GiB / 60 GiB avail; 371 KiB/s rd, 3.7 MiB/s wr, 150 op/s
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.004 2 DEBUG oslo_concurrency.lockutils [None req-1d113bb2-871d-4ef6-b5a4-449c770bf8eb e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.180 2 DEBUG nova.compute.manager [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received event network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.180 2 DEBUG oslo_concurrency.lockutils [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b595141f-123e-4250-bfec-888d866fd0c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.181 2 DEBUG oslo_concurrency.lockutils [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.181 2 DEBUG oslo_concurrency.lockutils [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.181 2 DEBUG nova.compute.manager [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] No waiting events found dispatching network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.181 2 WARNING nova.compute.manager [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received unexpected event network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.182 2 DEBUG nova.compute.manager [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received event network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.182 2 DEBUG oslo_concurrency.lockutils [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b595141f-123e-4250-bfec-888d866fd0c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.182 2 DEBUG oslo_concurrency.lockutils [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.182 2 DEBUG oslo_concurrency.lockutils [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.183 2 DEBUG nova.compute.manager [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] No waiting events found dispatching network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.183 2 WARNING nova.compute.manager [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received unexpected event network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.183 2 DEBUG nova.compute.manager [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received event network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.183 2 DEBUG oslo_concurrency.lockutils [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b595141f-123e-4250-bfec-888d866fd0c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.183 2 DEBUG oslo_concurrency.lockutils [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.184 2 DEBUG oslo_concurrency.lockutils [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.184 2 DEBUG nova.compute.manager [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] No waiting events found dispatching network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.184 2 WARNING nova.compute.manager [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received unexpected event network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.184 2 DEBUG nova.compute.manager [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received event network-vif-unplugged-7103ce4a-69e8-454b-aed3-251ecb109232 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.185 2 DEBUG oslo_concurrency.lockutils [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b595141f-123e-4250-bfec-888d866fd0c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.185 2 DEBUG oslo_concurrency.lockutils [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.185 2 DEBUG oslo_concurrency.lockutils [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.185 2 DEBUG nova.compute.manager [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] No waiting events found dispatching network-vif-unplugged-7103ce4a-69e8-454b-aed3-251ecb109232 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.185 2 WARNING nova.compute.manager [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received unexpected event network-vif-unplugged-7103ce4a-69e8-454b-aed3-251ecb109232 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.186 2 DEBUG nova.compute.manager [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received event network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.186 2 DEBUG oslo_concurrency.lockutils [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b595141f-123e-4250-bfec-888d866fd0c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.186 2 DEBUG oslo_concurrency.lockutils [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.186 2 DEBUG oslo_concurrency.lockutils [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b595141f-123e-4250-bfec-888d866fd0c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.186 2 DEBUG nova.compute.manager [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] No waiting events found dispatching network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.187 2 WARNING nova.compute.manager [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Received unexpected event network-vif-plugged-7103ce4a-69e8-454b-aed3-251ecb109232 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.187 2 DEBUG nova.compute.manager [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Received event network-changed-175a9914-0068-4aeb-b4b6-d501212d3374 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.187 2 DEBUG nova.compute.manager [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Refreshing instance network info cache due to event network-changed-175a9914-0068-4aeb-b4b6-d501212d3374. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.187 2 DEBUG oslo_concurrency.lockutils [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e1aea504-3ecf-4273-a867-66afb39de726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:48 np0005486808 podman[358787]: 2025-10-14 09:14:48.633910382 +0000 UTC m=+0.052511296 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=iscsid, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 14 05:14:48 np0005486808 podman[358786]: 2025-10-14 09:14:48.637024358 +0000 UTC m=+0.055901218 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.866 2 DEBUG nova.network.neutron [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Updating instance_info_cache with network_info: [{"id": "175a9914-0068-4aeb-b4b6-d501212d3374", "address": "fa:16:3e:6a:d8:7e", "network": {"id": "8a53d982-8cf9-44ff-9b16-dda957aa7729", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-827923963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c4eae338e7d54d159033a20bc7460935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap175a9914-00", "ovs_interfaceid": "175a9914-0068-4aeb-b4b6-d501212d3374", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.889 2 DEBUG oslo_concurrency.lockutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Releasing lock "refresh_cache-e1aea504-3ecf-4273-a867-66afb39de726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.890 2 DEBUG nova.compute.manager [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Instance network_info: |[{"id": "175a9914-0068-4aeb-b4b6-d501212d3374", "address": "fa:16:3e:6a:d8:7e", "network": {"id": "8a53d982-8cf9-44ff-9b16-dda957aa7729", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-827923963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c4eae338e7d54d159033a20bc7460935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap175a9914-00", "ovs_interfaceid": "175a9914-0068-4aeb-b4b6-d501212d3374", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.890 2 DEBUG oslo_concurrency.lockutils [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e1aea504-3ecf-4273-a867-66afb39de726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.891 2 DEBUG nova.network.neutron [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Refreshing network info cache for port 175a9914-0068-4aeb-b4b6-d501212d3374 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.896 2 DEBUG nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Start _get_guest_xml network_info=[{"id": "175a9914-0068-4aeb-b4b6-d501212d3374", "address": "fa:16:3e:6a:d8:7e", "network": {"id": "8a53d982-8cf9-44ff-9b16-dda957aa7729", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-827923963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c4eae338e7d54d159033a20bc7460935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap175a9914-00", "ovs_interfaceid": "175a9914-0068-4aeb-b4b6-d501212d3374", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.903 2 WARNING nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.908 2 DEBUG nova.virt.libvirt.host [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.908 2 DEBUG nova.virt.libvirt.host [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.912 2 DEBUG nova.virt.libvirt.host [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.913 2 DEBUG nova.virt.libvirt.host [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.914 2 DEBUG nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.915 2 DEBUG nova.virt.hardware [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.916 2 DEBUG nova.virt.hardware [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.916 2 DEBUG nova.virt.hardware [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.917 2 DEBUG nova.virt.hardware [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.917 2 DEBUG nova.virt.hardware [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.918 2 DEBUG nova.virt.hardware [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.918 2 DEBUG nova.virt.hardware [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.918 2 DEBUG nova.virt.hardware [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.919 2 DEBUG nova.virt.hardware [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.919 2 DEBUG nova.virt.hardware [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.920 2 DEBUG nova.virt.hardware [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:14:48 np0005486808 nova_compute[259627]: 2025-10-14 09:14:48.925 2 DEBUG oslo_concurrency.processutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:14:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2259066066' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:14:49 np0005486808 nova_compute[259627]: 2025-10-14 09:14:49.356 2 DEBUG oslo_concurrency.processutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:49 np0005486808 nova_compute[259627]: 2025-10-14 09:14:49.374 2 DEBUG nova.storage.rbd_utils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] rbd image e1aea504-3ecf-4273-a867-66afb39de726_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:49 np0005486808 nova_compute[259627]: 2025-10-14 09:14:49.378 2 DEBUG oslo_concurrency.processutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:14:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/355761698' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:14:49 np0005486808 nova_compute[259627]: 2025-10-14 09:14:49.790 2 DEBUG oslo_concurrency.processutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:49 np0005486808 nova_compute[259627]: 2025-10-14 09:14:49.794 2 DEBUG nova.virt.libvirt.vif [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:14:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-1533575249',display_name='tempest-ServerRescueTestJSONUnderV235-server-1533575249',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-1533575249',id=101,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c4eae338e7d54d159033a20bc7460935',ramdisk_id='',reservation_id='r-tm6flkse',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-2037260230',owner_user_name=
'tempest-ServerRescueTestJSONUnderV235-2037260230-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:14:45Z,user_data=None,user_id='648aaa75d8974d439d8ebe331c3d6568',uuid=e1aea504-3ecf-4273-a867-66afb39de726,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "175a9914-0068-4aeb-b4b6-d501212d3374", "address": "fa:16:3e:6a:d8:7e", "network": {"id": "8a53d982-8cf9-44ff-9b16-dda957aa7729", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-827923963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c4eae338e7d54d159033a20bc7460935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap175a9914-00", "ovs_interfaceid": "175a9914-0068-4aeb-b4b6-d501212d3374", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:14:49 np0005486808 nova_compute[259627]: 2025-10-14 09:14:49.795 2 DEBUG nova.network.os_vif_util [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Converting VIF {"id": "175a9914-0068-4aeb-b4b6-d501212d3374", "address": "fa:16:3e:6a:d8:7e", "network": {"id": "8a53d982-8cf9-44ff-9b16-dda957aa7729", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-827923963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c4eae338e7d54d159033a20bc7460935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap175a9914-00", "ovs_interfaceid": "175a9914-0068-4aeb-b4b6-d501212d3374", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:14:49 np0005486808 nova_compute[259627]: 2025-10-14 09:14:49.798 2 DEBUG nova.network.os_vif_util [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:d8:7e,bridge_name='br-int',has_traffic_filtering=True,id=175a9914-0068-4aeb-b4b6-d501212d3374,network=Network(8a53d982-8cf9-44ff-9b16-dda957aa7729),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap175a9914-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:14:49 np0005486808 nova_compute[259627]: 2025-10-14 09:14:49.801 2 DEBUG nova.objects.instance [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lazy-loading 'pci_devices' on Instance uuid e1aea504-3ecf-4273-a867-66afb39de726 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:14:49 np0005486808 nova_compute[259627]: 2025-10-14 09:14:49.822 2 DEBUG nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:14:49 np0005486808 nova_compute[259627]:  <uuid>e1aea504-3ecf-4273-a867-66afb39de726</uuid>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:  <name>instance-00000065</name>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:14:49 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:      <nova:name>tempest-ServerRescueTestJSONUnderV235-server-1533575249</nova:name>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:14:48</nova:creationTime>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:14:49 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:        <nova:user uuid="648aaa75d8974d439d8ebe331c3d6568">tempest-ServerRescueTestJSONUnderV235-2037260230-project-member</nova:user>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:        <nova:project uuid="c4eae338e7d54d159033a20bc7460935">tempest-ServerRescueTestJSONUnderV235-2037260230</nova:project>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:        <nova:port uuid="175a9914-0068-4aeb-b4b6-d501212d3374">
Oct 14 05:14:49 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:      <entry name="serial">e1aea504-3ecf-4273-a867-66afb39de726</entry>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:      <entry name="uuid">e1aea504-3ecf-4273-a867-66afb39de726</entry>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:14:49 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/e1aea504-3ecf-4273-a867-66afb39de726_disk">
Oct 14 05:14:49 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:14:49 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:14:49 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/e1aea504-3ecf-4273-a867-66afb39de726_disk.config">
Oct 14 05:14:49 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:14:49 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:14:49 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:6a:d8:7e"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:      <target dev="tap175a9914-00"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:14:49 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/e1aea504-3ecf-4273-a867-66afb39de726/console.log" append="off"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:14:49 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:14:49 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:14:49 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:14:49 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:14:49 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:14:49 np0005486808 nova_compute[259627]: 2025-10-14 09:14:49.834 2 DEBUG nova.compute.manager [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Preparing to wait for external event network-vif-plugged-175a9914-0068-4aeb-b4b6-d501212d3374 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:14:49 np0005486808 nova_compute[259627]: 2025-10-14 09:14:49.834 2 DEBUG oslo_concurrency.lockutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Acquiring lock "e1aea504-3ecf-4273-a867-66afb39de726-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:49 np0005486808 nova_compute[259627]: 2025-10-14 09:14:49.835 2 DEBUG oslo_concurrency.lockutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:49 np0005486808 nova_compute[259627]: 2025-10-14 09:14:49.835 2 DEBUG oslo_concurrency.lockutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:49 np0005486808 nova_compute[259627]: 2025-10-14 09:14:49.836 2 DEBUG nova.virt.libvirt.vif [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:14:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-1533575249',display_name='tempest-ServerRescueTestJSONUnderV235-server-1533575249',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-1533575249',id=101,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c4eae338e7d54d159033a20bc7460935',ramdisk_id='',reservation_id='r-tm6flkse',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-2037260230',owner_user_name='tempest-ServerRescueTestJSONUnderV235-2037260230-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:14:45Z,user_data=None,user_id='648aaa75d8974d439d8ebe331c3d6568',uuid=e1aea504-3ecf-4273-a867-66afb39de726,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "175a9914-0068-4aeb-b4b6-d501212d3374", "address": "fa:16:3e:6a:d8:7e", "network": {"id": "8a53d982-8cf9-44ff-9b16-dda957aa7729", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-827923963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c4eae338e7d54d159033a20bc7460935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap175a9914-00", "ovs_interfaceid": "175a9914-0068-4aeb-b4b6-d501212d3374", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:14:49 np0005486808 nova_compute[259627]: 2025-10-14 09:14:49.837 2 DEBUG nova.network.os_vif_util [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Converting VIF {"id": "175a9914-0068-4aeb-b4b6-d501212d3374", "address": "fa:16:3e:6a:d8:7e", "network": {"id": "8a53d982-8cf9-44ff-9b16-dda957aa7729", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-827923963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c4eae338e7d54d159033a20bc7460935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap175a9914-00", "ovs_interfaceid": "175a9914-0068-4aeb-b4b6-d501212d3374", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:14:49 np0005486808 nova_compute[259627]: 2025-10-14 09:14:49.838 2 DEBUG nova.network.os_vif_util [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:d8:7e,bridge_name='br-int',has_traffic_filtering=True,id=175a9914-0068-4aeb-b4b6-d501212d3374,network=Network(8a53d982-8cf9-44ff-9b16-dda957aa7729),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap175a9914-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:14:49 np0005486808 nova_compute[259627]: 2025-10-14 09:14:49.838 2 DEBUG os_vif [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:d8:7e,bridge_name='br-int',has_traffic_filtering=True,id=175a9914-0068-4aeb-b4b6-d501212d3374,network=Network(8a53d982-8cf9-44ff-9b16-dda957aa7729),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap175a9914-00') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:14:49 np0005486808 nova_compute[259627]: 2025-10-14 09:14:49.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:49 np0005486808 nova_compute[259627]: 2025-10-14 09:14:49.842 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:49 np0005486808 nova_compute[259627]: 2025-10-14 09:14:49.843 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:14:49 np0005486808 nova_compute[259627]: 2025-10-14 09:14:49.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:49 np0005486808 nova_compute[259627]: 2025-10-14 09:14:49.848 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap175a9914-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:49 np0005486808 nova_compute[259627]: 2025-10-14 09:14:49.849 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap175a9914-00, col_values=(('external_ids', {'iface-id': '175a9914-0068-4aeb-b4b6-d501212d3374', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6a:d8:7e', 'vm-uuid': 'e1aea504-3ecf-4273-a867-66afb39de726'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:49 np0005486808 nova_compute[259627]: 2025-10-14 09:14:49.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:49 np0005486808 NetworkManager[44885]: <info>  [1760433289.8525] manager: (tap175a9914-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/434)
Oct 14 05:14:49 np0005486808 nova_compute[259627]: 2025-10-14 09:14:49.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:14:49 np0005486808 nova_compute[259627]: 2025-10-14 09:14:49.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:49 np0005486808 nova_compute[259627]: 2025-10-14 09:14:49.860 2 INFO os_vif [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:d8:7e,bridge_name='br-int',has_traffic_filtering=True,id=175a9914-0068-4aeb-b4b6-d501212d3374,network=Network(8a53d982-8cf9-44ff-9b16-dda957aa7729),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap175a9914-00')#033[00m
Oct 14 05:14:49 np0005486808 nova_compute[259627]: 2025-10-14 09:14:49.957 2 DEBUG nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:14:49 np0005486808 nova_compute[259627]: 2025-10-14 09:14:49.959 2 DEBUG nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:14:49 np0005486808 nova_compute[259627]: 2025-10-14 09:14:49.960 2 DEBUG nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] No VIF found with MAC fa:16:3e:6a:d8:7e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:14:49 np0005486808 nova_compute[259627]: 2025-10-14 09:14:49.961 2 INFO nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Using config drive#033[00m
Oct 14 05:14:49 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1817: 305 pgs: 305 active+clean; 246 MiB data, 801 MiB used, 59 GiB / 60 GiB avail; 371 KiB/s rd, 3.7 MiB/s wr, 150 op/s
Oct 14 05:14:49 np0005486808 nova_compute[259627]: 2025-10-14 09:14:49.995 2 DEBUG nova.storage.rbd_utils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] rbd image e1aea504-3ecf-4273-a867-66afb39de726_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:50 np0005486808 nova_compute[259627]: 2025-10-14 09:14:50.411 2 INFO nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Creating config drive at /var/lib/nova/instances/e1aea504-3ecf-4273-a867-66afb39de726/disk.config#033[00m
Oct 14 05:14:50 np0005486808 nova_compute[259627]: 2025-10-14 09:14:50.416 2 DEBUG oslo_concurrency.processutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e1aea504-3ecf-4273-a867-66afb39de726/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphpjrsr30 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:50 np0005486808 nova_compute[259627]: 2025-10-14 09:14:50.567 2 DEBUG oslo_concurrency.processutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e1aea504-3ecf-4273-a867-66afb39de726/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphpjrsr30" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:50 np0005486808 nova_compute[259627]: 2025-10-14 09:14:50.605 2 DEBUG nova.storage.rbd_utils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] rbd image e1aea504-3ecf-4273-a867-66afb39de726_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:50 np0005486808 nova_compute[259627]: 2025-10-14 09:14:50.612 2 DEBUG oslo_concurrency.processutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e1aea504-3ecf-4273-a867-66afb39de726/disk.config e1aea504-3ecf-4273-a867-66afb39de726_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:50 np0005486808 nova_compute[259627]: 2025-10-14 09:14:50.809 2 DEBUG oslo_concurrency.processutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e1aea504-3ecf-4273-a867-66afb39de726/disk.config e1aea504-3ecf-4273-a867-66afb39de726_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:50 np0005486808 nova_compute[259627]: 2025-10-14 09:14:50.810 2 INFO nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Deleting local config drive /var/lib/nova/instances/e1aea504-3ecf-4273-a867-66afb39de726/disk.config because it was imported into RBD.#033[00m
Oct 14 05:14:50 np0005486808 nova_compute[259627]: 2025-10-14 09:14:50.848 2 DEBUG nova.network.neutron [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Updated VIF entry in instance network info cache for port 175a9914-0068-4aeb-b4b6-d501212d3374. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:14:50 np0005486808 nova_compute[259627]: 2025-10-14 09:14:50.849 2 DEBUG nova.network.neutron [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Updating instance_info_cache with network_info: [{"id": "175a9914-0068-4aeb-b4b6-d501212d3374", "address": "fa:16:3e:6a:d8:7e", "network": {"id": "8a53d982-8cf9-44ff-9b16-dda957aa7729", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-827923963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c4eae338e7d54d159033a20bc7460935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap175a9914-00", "ovs_interfaceid": "175a9914-0068-4aeb-b4b6-d501212d3374", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:14:50 np0005486808 kernel: tap175a9914-00: entered promiscuous mode
Oct 14 05:14:50 np0005486808 NetworkManager[44885]: <info>  [1760433290.8591] manager: (tap175a9914-00): new Tun device (/org/freedesktop/NetworkManager/Devices/435)
Oct 14 05:14:50 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:50Z|01071|binding|INFO|Claiming lport 175a9914-0068-4aeb-b4b6-d501212d3374 for this chassis.
Oct 14 05:14:50 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:50Z|01072|binding|INFO|175a9914-0068-4aeb-b4b6-d501212d3374: Claiming fa:16:3e:6a:d8:7e 10.100.0.7
Oct 14 05:14:50 np0005486808 nova_compute[259627]: 2025-10-14 09:14:50.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:50.868 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:d8:7e 10.100.0.7'], port_security=['fa:16:3e:6a:d8:7e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e1aea504-3ecf-4273-a867-66afb39de726', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8a53d982-8cf9-44ff-9b16-dda957aa7729', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4eae338e7d54d159033a20bc7460935', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9a0a095f-2308-407d-90d9-6acaa688f93c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab8a6447-c4e7-4ebf-b042-eb7d83ec88dc, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=175a9914-0068-4aeb-b4b6-d501212d3374) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:14:50 np0005486808 nova_compute[259627]: 2025-10-14 09:14:50.871 2 DEBUG oslo_concurrency.lockutils [req-e9425ea4-d532-4368-86f8-03fa5181aa81 req-2c702d7b-a420-4edd-947a-15aeeca2f27b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-e1aea504-3ecf-4273-a867-66afb39de726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:14:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:50.870 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 175a9914-0068-4aeb-b4b6-d501212d3374 in datapath 8a53d982-8cf9-44ff-9b16-dda957aa7729 bound to our chassis#033[00m
Oct 14 05:14:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:50.871 162547 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 8a53d982-8cf9-44ff-9b16-dda957aa7729 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct 14 05:14:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:50.872 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f90433d6-01d9-477a-8c97-8bc500d6e1de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:50 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:50Z|01073|binding|INFO|Setting lport 175a9914-0068-4aeb-b4b6-d501212d3374 ovn-installed in OVS
Oct 14 05:14:50 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:50Z|01074|binding|INFO|Setting lport 175a9914-0068-4aeb-b4b6-d501212d3374 up in Southbound
Oct 14 05:14:50 np0005486808 nova_compute[259627]: 2025-10-14 09:14:50.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:50 np0005486808 nova_compute[259627]: 2025-10-14 09:14:50.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:50 np0005486808 systemd-machined[214636]: New machine qemu-126-instance-00000065.
Oct 14 05:14:50 np0005486808 systemd[1]: Started Virtual Machine qemu-126-instance-00000065.
Oct 14 05:14:50 np0005486808 systemd-udevd[358959]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:14:50 np0005486808 NetworkManager[44885]: <info>  [1760433290.9660] device (tap175a9914-00): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:14:50 np0005486808 NetworkManager[44885]: <info>  [1760433290.9682] device (tap175a9914-00): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:14:51 np0005486808 nova_compute[259627]: 2025-10-14 09:14:51.200 2 DEBUG nova.compute.manager [req-6df7c5d4-5206-451f-92b6-63eee4bee362 req-f349a841-d713-4889-a03c-23aa8dec99ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Received event network-vif-plugged-175a9914-0068-4aeb-b4b6-d501212d3374 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:51 np0005486808 nova_compute[259627]: 2025-10-14 09:14:51.202 2 DEBUG oslo_concurrency.lockutils [req-6df7c5d4-5206-451f-92b6-63eee4bee362 req-f349a841-d713-4889-a03c-23aa8dec99ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e1aea504-3ecf-4273-a867-66afb39de726-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:51 np0005486808 nova_compute[259627]: 2025-10-14 09:14:51.202 2 DEBUG oslo_concurrency.lockutils [req-6df7c5d4-5206-451f-92b6-63eee4bee362 req-f349a841-d713-4889-a03c-23aa8dec99ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:51 np0005486808 nova_compute[259627]: 2025-10-14 09:14:51.202 2 DEBUG oslo_concurrency.lockutils [req-6df7c5d4-5206-451f-92b6-63eee4bee362 req-f349a841-d713-4889-a03c-23aa8dec99ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:51 np0005486808 nova_compute[259627]: 2025-10-14 09:14:51.203 2 DEBUG nova.compute.manager [req-6df7c5d4-5206-451f-92b6-63eee4bee362 req-f349a841-d713-4889-a03c-23aa8dec99ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Processing event network-vif-plugged-175a9914-0068-4aeb-b4b6-d501212d3374 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:14:51 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:51Z|01075|binding|INFO|Releasing lport b1a95e67-ca39-4a56-9a91-3165df166ea9 from this chassis (sb_readonly=0)
Oct 14 05:14:51 np0005486808 nova_compute[259627]: 2025-10-14 09:14:51.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:51 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:51Z|01076|binding|INFO|Releasing lport b1a95e67-ca39-4a56-9a91-3165df166ea9 from this chassis (sb_readonly=0)
Oct 14 05:14:51 np0005486808 nova_compute[259627]: 2025-10-14 09:14:51.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:51 np0005486808 nova_compute[259627]: 2025-10-14 09:14:51.711 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433291.710773, e1aea504-3ecf-4273-a867-66afb39de726 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:14:51 np0005486808 nova_compute[259627]: 2025-10-14 09:14:51.712 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e1aea504-3ecf-4273-a867-66afb39de726] VM Started (Lifecycle Event)#033[00m
Oct 14 05:14:51 np0005486808 nova_compute[259627]: 2025-10-14 09:14:51.715 2 DEBUG nova.compute.manager [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:14:51 np0005486808 nova_compute[259627]: 2025-10-14 09:14:51.719 2 DEBUG nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:14:51 np0005486808 nova_compute[259627]: 2025-10-14 09:14:51.723 2 INFO nova.virt.libvirt.driver [-] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Instance spawned successfully.#033[00m
Oct 14 05:14:51 np0005486808 nova_compute[259627]: 2025-10-14 09:14:51.724 2 DEBUG nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:14:51 np0005486808 nova_compute[259627]: 2025-10-14 09:14:51.750 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:14:51 np0005486808 nova_compute[259627]: 2025-10-14 09:14:51.762 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:14:51 np0005486808 nova_compute[259627]: 2025-10-14 09:14:51.768 2 DEBUG nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:14:51 np0005486808 nova_compute[259627]: 2025-10-14 09:14:51.769 2 DEBUG nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:14:51 np0005486808 nova_compute[259627]: 2025-10-14 09:14:51.770 2 DEBUG nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:14:51 np0005486808 nova_compute[259627]: 2025-10-14 09:14:51.770 2 DEBUG nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:14:51 np0005486808 nova_compute[259627]: 2025-10-14 09:14:51.771 2 DEBUG nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:14:51 np0005486808 nova_compute[259627]: 2025-10-14 09:14:51.772 2 DEBUG nova.virt.libvirt.driver [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:14:51 np0005486808 nova_compute[259627]: 2025-10-14 09:14:51.842 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e1aea504-3ecf-4273-a867-66afb39de726] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:14:51 np0005486808 nova_compute[259627]: 2025-10-14 09:14:51.843 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433291.7110634, e1aea504-3ecf-4273-a867-66afb39de726 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:14:51 np0005486808 nova_compute[259627]: 2025-10-14 09:14:51.843 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e1aea504-3ecf-4273-a867-66afb39de726] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:14:51 np0005486808 nova_compute[259627]: 2025-10-14 09:14:51.868 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:14:51 np0005486808 nova_compute[259627]: 2025-10-14 09:14:51.872 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433291.7183087, e1aea504-3ecf-4273-a867-66afb39de726 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:14:51 np0005486808 nova_compute[259627]: 2025-10-14 09:14:51.872 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e1aea504-3ecf-4273-a867-66afb39de726] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:14:51 np0005486808 nova_compute[259627]: 2025-10-14 09:14:51.878 2 INFO nova.compute.manager [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Took 6.42 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:14:51 np0005486808 nova_compute[259627]: 2025-10-14 09:14:51.878 2 DEBUG nova.compute.manager [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:14:51 np0005486808 nova_compute[259627]: 2025-10-14 09:14:51.888 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:14:51 np0005486808 nova_compute[259627]: 2025-10-14 09:14:51.893 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:14:51 np0005486808 nova_compute[259627]: 2025-10-14 09:14:51.918 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e1aea504-3ecf-4273-a867-66afb39de726] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:14:51 np0005486808 nova_compute[259627]: 2025-10-14 09:14:51.943 2 INFO nova.compute.manager [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Took 7.48 seconds to build instance.#033[00m
Oct 14 05:14:51 np0005486808 nova_compute[259627]: 2025-10-14 09:14:51.960 2 DEBUG oslo_concurrency.lockutils [None req-f4d01306-015d-4a06-ab8c-80064b95811e 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:51 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1818: 305 pgs: 305 active+clean; 213 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.5 MiB/s wr, 270 op/s
Oct 14 05:14:52 np0005486808 nova_compute[259627]: 2025-10-14 09:14:52.566 2 DEBUG oslo_concurrency.lockutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "84968701-f6c5-4798-888e-fa0f3311adca" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:52 np0005486808 nova_compute[259627]: 2025-10-14 09:14:52.567 2 DEBUG oslo_concurrency.lockutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "84968701-f6c5-4798-888e-fa0f3311adca" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:52 np0005486808 nova_compute[259627]: 2025-10-14 09:14:52.584 2 DEBUG nova.compute.manager [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:14:52 np0005486808 nova_compute[259627]: 2025-10-14 09:14:52.612 2 INFO nova.compute.manager [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Rescuing#033[00m
Oct 14 05:14:52 np0005486808 nova_compute[259627]: 2025-10-14 09:14:52.613 2 DEBUG oslo_concurrency.lockutils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Acquiring lock "refresh_cache-e1aea504-3ecf-4273-a867-66afb39de726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:14:52 np0005486808 nova_compute[259627]: 2025-10-14 09:14:52.613 2 DEBUG oslo_concurrency.lockutils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Acquired lock "refresh_cache-e1aea504-3ecf-4273-a867-66afb39de726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:14:52 np0005486808 nova_compute[259627]: 2025-10-14 09:14:52.613 2 DEBUG nova.network.neutron [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:14:52 np0005486808 nova_compute[259627]: 2025-10-14 09:14:52.667 2 DEBUG oslo_concurrency.lockutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:52 np0005486808 nova_compute[259627]: 2025-10-14 09:14:52.668 2 DEBUG oslo_concurrency.lockutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:52 np0005486808 nova_compute[259627]: 2025-10-14 09:14:52.678 2 DEBUG nova.virt.hardware [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:14:52 np0005486808 nova_compute[259627]: 2025-10-14 09:14:52.679 2 INFO nova.compute.claims [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:14:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:14:52 np0005486808 nova_compute[259627]: 2025-10-14 09:14:52.892 2 DEBUG oslo_concurrency.processutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:53 np0005486808 nova_compute[259627]: 2025-10-14 09:14:53.341 2 DEBUG nova.compute.manager [req-2d98331c-cd86-43dd-88c8-7338012d6463 req-09412ab0-0537-4e23-9558-15aa9e81c10f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Received event network-vif-plugged-175a9914-0068-4aeb-b4b6-d501212d3374 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:53 np0005486808 nova_compute[259627]: 2025-10-14 09:14:53.342 2 DEBUG oslo_concurrency.lockutils [req-2d98331c-cd86-43dd-88c8-7338012d6463 req-09412ab0-0537-4e23-9558-15aa9e81c10f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e1aea504-3ecf-4273-a867-66afb39de726-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:53 np0005486808 nova_compute[259627]: 2025-10-14 09:14:53.342 2 DEBUG oslo_concurrency.lockutils [req-2d98331c-cd86-43dd-88c8-7338012d6463 req-09412ab0-0537-4e23-9558-15aa9e81c10f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:53 np0005486808 nova_compute[259627]: 2025-10-14 09:14:53.343 2 DEBUG oslo_concurrency.lockutils [req-2d98331c-cd86-43dd-88c8-7338012d6463 req-09412ab0-0537-4e23-9558-15aa9e81c10f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:53 np0005486808 nova_compute[259627]: 2025-10-14 09:14:53.343 2 DEBUG nova.compute.manager [req-2d98331c-cd86-43dd-88c8-7338012d6463 req-09412ab0-0537-4e23-9558-15aa9e81c10f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] No waiting events found dispatching network-vif-plugged-175a9914-0068-4aeb-b4b6-d501212d3374 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:14:53 np0005486808 nova_compute[259627]: 2025-10-14 09:14:53.343 2 WARNING nova.compute.manager [req-2d98331c-cd86-43dd-88c8-7338012d6463 req-09412ab0-0537-4e23-9558-15aa9e81c10f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Received unexpected event network-vif-plugged-175a9914-0068-4aeb-b4b6-d501212d3374 for instance with vm_state active and task_state rescuing.#033[00m
Oct 14 05:14:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:14:53 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/937491467' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:14:53 np0005486808 nova_compute[259627]: 2025-10-14 09:14:53.375 2 DEBUG oslo_concurrency.processutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:53 np0005486808 nova_compute[259627]: 2025-10-14 09:14:53.382 2 DEBUG nova.compute.provider_tree [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:14:53 np0005486808 nova_compute[259627]: 2025-10-14 09:14:53.396 2 DEBUG nova.scheduler.client.report [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:14:53 np0005486808 nova_compute[259627]: 2025-10-14 09:14:53.428 2 DEBUG oslo_concurrency.lockutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:53 np0005486808 nova_compute[259627]: 2025-10-14 09:14:53.429 2 DEBUG nova.compute.manager [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:14:53 np0005486808 nova_compute[259627]: 2025-10-14 09:14:53.492 2 DEBUG nova.compute.manager [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:14:53 np0005486808 nova_compute[259627]: 2025-10-14 09:14:53.493 2 DEBUG nova.network.neutron [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:14:53 np0005486808 nova_compute[259627]: 2025-10-14 09:14:53.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:53 np0005486808 nova_compute[259627]: 2025-10-14 09:14:53.516 2 INFO nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:14:53 np0005486808 nova_compute[259627]: 2025-10-14 09:14:53.536 2 DEBUG nova.compute.manager [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:14:53 np0005486808 nova_compute[259627]: 2025-10-14 09:14:53.621 2 DEBUG nova.compute.manager [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:14:53 np0005486808 nova_compute[259627]: 2025-10-14 09:14:53.624 2 DEBUG nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:14:53 np0005486808 nova_compute[259627]: 2025-10-14 09:14:53.625 2 INFO nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Creating image(s)#033[00m
Oct 14 05:14:53 np0005486808 nova_compute[259627]: 2025-10-14 09:14:53.660 2 DEBUG nova.storage.rbd_utils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image 84968701-f6c5-4798-888e-fa0f3311adca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:53 np0005486808 nova_compute[259627]: 2025-10-14 09:14:53.697 2 DEBUG nova.storage.rbd_utils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image 84968701-f6c5-4798-888e-fa0f3311adca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:53 np0005486808 nova_compute[259627]: 2025-10-14 09:14:53.723 2 DEBUG nova.storage.rbd_utils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image 84968701-f6c5-4798-888e-fa0f3311adca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:53 np0005486808 nova_compute[259627]: 2025-10-14 09:14:53.736 2 DEBUG oslo_concurrency.processutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:53 np0005486808 nova_compute[259627]: 2025-10-14 09:14:53.807 2 DEBUG nova.policy [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a287ef08fc5c4f218bf06cd2c7ed021e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0a080fae2f3c4e39a6cca225203f5ec6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:14:53 np0005486808 nova_compute[259627]: 2025-10-14 09:14:53.843 2 DEBUG oslo_concurrency.processutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:53 np0005486808 nova_compute[259627]: 2025-10-14 09:14:53.843 2 DEBUG oslo_concurrency.lockutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:53 np0005486808 nova_compute[259627]: 2025-10-14 09:14:53.844 2 DEBUG oslo_concurrency.lockutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:53 np0005486808 nova_compute[259627]: 2025-10-14 09:14:53.844 2 DEBUG oslo_concurrency.lockutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:53 np0005486808 nova_compute[259627]: 2025-10-14 09:14:53.872 2 DEBUG nova.storage.rbd_utils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image 84968701-f6c5-4798-888e-fa0f3311adca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:53 np0005486808 nova_compute[259627]: 2025-10-14 09:14:53.876 2 DEBUG oslo_concurrency.processutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 84968701-f6c5-4798-888e-fa0f3311adca_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:53 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1819: 305 pgs: 305 active+clean; 213 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 141 op/s
Oct 14 05:14:54 np0005486808 nova_compute[259627]: 2025-10-14 09:14:54.155 2 DEBUG oslo_concurrency.processutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 84968701-f6c5-4798-888e-fa0f3311adca_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.279s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:54 np0005486808 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 05:14:54 np0005486808 nova_compute[259627]: 2025-10-14 09:14:54.214 2 DEBUG nova.storage.rbd_utils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] resizing rbd image 84968701-f6c5-4798-888e-fa0f3311adca_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:14:54 np0005486808 nova_compute[259627]: 2025-10-14 09:14:54.308 2 DEBUG nova.objects.instance [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'migration_context' on Instance uuid 84968701-f6c5-4798-888e-fa0f3311adca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:14:54 np0005486808 nova_compute[259627]: 2025-10-14 09:14:54.329 2 DEBUG nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:14:54 np0005486808 nova_compute[259627]: 2025-10-14 09:14:54.330 2 DEBUG nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Ensure instance console log exists: /var/lib/nova/instances/84968701-f6c5-4798-888e-fa0f3311adca/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:14:54 np0005486808 nova_compute[259627]: 2025-10-14 09:14:54.331 2 DEBUG oslo_concurrency.lockutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:54 np0005486808 nova_compute[259627]: 2025-10-14 09:14:54.331 2 DEBUG oslo_concurrency.lockutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:54 np0005486808 nova_compute[259627]: 2025-10-14 09:14:54.332 2 DEBUG oslo_concurrency.lockutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:54 np0005486808 nova_compute[259627]: 2025-10-14 09:14:54.369 2 DEBUG nova.network.neutron [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Successfully created port: a8c2f2be-f25a-4512-b8a2-17b2b695ce52 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:14:54 np0005486808 nova_compute[259627]: 2025-10-14 09:14:54.583 2 DEBUG nova.network.neutron [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Updating instance_info_cache with network_info: [{"id": "175a9914-0068-4aeb-b4b6-d501212d3374", "address": "fa:16:3e:6a:d8:7e", "network": {"id": "8a53d982-8cf9-44ff-9b16-dda957aa7729", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-827923963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c4eae338e7d54d159033a20bc7460935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap175a9914-00", "ovs_interfaceid": "175a9914-0068-4aeb-b4b6-d501212d3374", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:14:54 np0005486808 nova_compute[259627]: 2025-10-14 09:14:54.606 2 DEBUG oslo_concurrency.lockutils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Releasing lock "refresh_cache-e1aea504-3ecf-4273-a867-66afb39de726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:14:54 np0005486808 nova_compute[259627]: 2025-10-14 09:14:54.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:54 np0005486808 nova_compute[259627]: 2025-10-14 09:14:54.914 2 DEBUG nova.virt.libvirt.driver [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct 14 05:14:55 np0005486808 nova_compute[259627]: 2025-10-14 09:14:55.558 2 DEBUG nova.network.neutron [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Successfully updated port: a8c2f2be-f25a-4512-b8a2-17b2b695ce52 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:14:55 np0005486808 nova_compute[259627]: 2025-10-14 09:14:55.572 2 DEBUG oslo_concurrency.lockutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "refresh_cache-84968701-f6c5-4798-888e-fa0f3311adca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:14:55 np0005486808 nova_compute[259627]: 2025-10-14 09:14:55.572 2 DEBUG oslo_concurrency.lockutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquired lock "refresh_cache-84968701-f6c5-4798-888e-fa0f3311adca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:14:55 np0005486808 nova_compute[259627]: 2025-10-14 09:14:55.572 2 DEBUG nova.network.neutron [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:14:55 np0005486808 nova_compute[259627]: 2025-10-14 09:14:55.654 2 DEBUG nova.compute.manager [req-638cb684-56a2-433b-973c-b0a676c4d210 req-74da4ca9-e4c5-47b6-bf1a-127e93b8dd9a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Received event network-changed-a8c2f2be-f25a-4512-b8a2-17b2b695ce52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:14:55 np0005486808 nova_compute[259627]: 2025-10-14 09:14:55.654 2 DEBUG nova.compute.manager [req-638cb684-56a2-433b-973c-b0a676c4d210 req-74da4ca9-e4c5-47b6-bf1a-127e93b8dd9a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Refreshing instance network info cache due to event network-changed-a8c2f2be-f25a-4512-b8a2-17b2b695ce52. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:14:55 np0005486808 nova_compute[259627]: 2025-10-14 09:14:55.654 2 DEBUG oslo_concurrency.lockutils [req-638cb684-56a2-433b-973c-b0a676c4d210 req-74da4ca9-e4c5-47b6-bf1a-127e93b8dd9a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-84968701-f6c5-4798-888e-fa0f3311adca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:14:55 np0005486808 nova_compute[259627]: 2025-10-14 09:14:55.834 2 DEBUG nova.network.neutron [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:14:55 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1820: 305 pgs: 305 active+clean; 260 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 242 op/s
Oct 14 05:14:56 np0005486808 nova_compute[259627]: 2025-10-14 09:14:56.846 2 DEBUG nova.network.neutron [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Updating instance_info_cache with network_info: [{"id": "a8c2f2be-f25a-4512-b8a2-17b2b695ce52", "address": "fa:16:3e:c1:58:a4", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c2f2be-f2", "ovs_interfaceid": "a8c2f2be-f25a-4512-b8a2-17b2b695ce52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:14:56 np0005486808 nova_compute[259627]: 2025-10-14 09:14:56.869 2 DEBUG oslo_concurrency.lockutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Releasing lock "refresh_cache-84968701-f6c5-4798-888e-fa0f3311adca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:14:56 np0005486808 nova_compute[259627]: 2025-10-14 09:14:56.869 2 DEBUG nova.compute.manager [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Instance network_info: |[{"id": "a8c2f2be-f25a-4512-b8a2-17b2b695ce52", "address": "fa:16:3e:c1:58:a4", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c2f2be-f2", "ovs_interfaceid": "a8c2f2be-f25a-4512-b8a2-17b2b695ce52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:14:56 np0005486808 nova_compute[259627]: 2025-10-14 09:14:56.869 2 DEBUG oslo_concurrency.lockutils [req-638cb684-56a2-433b-973c-b0a676c4d210 req-74da4ca9-e4c5-47b6-bf1a-127e93b8dd9a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-84968701-f6c5-4798-888e-fa0f3311adca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:14:56 np0005486808 nova_compute[259627]: 2025-10-14 09:14:56.870 2 DEBUG nova.network.neutron [req-638cb684-56a2-433b-973c-b0a676c4d210 req-74da4ca9-e4c5-47b6-bf1a-127e93b8dd9a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Refreshing network info cache for port a8c2f2be-f25a-4512-b8a2-17b2b695ce52 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:14:56 np0005486808 nova_compute[259627]: 2025-10-14 09:14:56.873 2 DEBUG nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Start _get_guest_xml network_info=[{"id": "a8c2f2be-f25a-4512-b8a2-17b2b695ce52", "address": "fa:16:3e:c1:58:a4", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c2f2be-f2", "ovs_interfaceid": "a8c2f2be-f25a-4512-b8a2-17b2b695ce52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:14:56 np0005486808 nova_compute[259627]: 2025-10-14 09:14:56.877 2 WARNING nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:14:56 np0005486808 nova_compute[259627]: 2025-10-14 09:14:56.882 2 DEBUG nova.virt.libvirt.host [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:14:56 np0005486808 nova_compute[259627]: 2025-10-14 09:14:56.883 2 DEBUG nova.virt.libvirt.host [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:14:56 np0005486808 nova_compute[259627]: 2025-10-14 09:14:56.892 2 DEBUG nova.virt.libvirt.host [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:14:56 np0005486808 nova_compute[259627]: 2025-10-14 09:14:56.893 2 DEBUG nova.virt.libvirt.host [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:14:56 np0005486808 nova_compute[259627]: 2025-10-14 09:14:56.894 2 DEBUG nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:14:56 np0005486808 nova_compute[259627]: 2025-10-14 09:14:56.894 2 DEBUG nova.virt.hardware [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:14:56 np0005486808 nova_compute[259627]: 2025-10-14 09:14:56.895 2 DEBUG nova.virt.hardware [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:14:56 np0005486808 nova_compute[259627]: 2025-10-14 09:14:56.895 2 DEBUG nova.virt.hardware [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:14:56 np0005486808 nova_compute[259627]: 2025-10-14 09:14:56.895 2 DEBUG nova.virt.hardware [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:14:56 np0005486808 nova_compute[259627]: 2025-10-14 09:14:56.895 2 DEBUG nova.virt.hardware [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:14:56 np0005486808 nova_compute[259627]: 2025-10-14 09:14:56.896 2 DEBUG nova.virt.hardware [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:14:56 np0005486808 nova_compute[259627]: 2025-10-14 09:14:56.896 2 DEBUG nova.virt.hardware [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:14:56 np0005486808 nova_compute[259627]: 2025-10-14 09:14:56.896 2 DEBUG nova.virt.hardware [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:14:56 np0005486808 nova_compute[259627]: 2025-10-14 09:14:56.897 2 DEBUG nova.virt.hardware [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:14:56 np0005486808 nova_compute[259627]: 2025-10-14 09:14:56.897 2 DEBUG nova.virt.hardware [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:14:56 np0005486808 nova_compute[259627]: 2025-10-14 09:14:56.897 2 DEBUG nova.virt.hardware [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:14:56 np0005486808 nova_compute[259627]: 2025-10-14 09:14:56.901 2 DEBUG oslo_concurrency.processutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:14:57 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:14:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:14:57 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:14:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:14:57 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/435600781' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:14:57 np0005486808 nova_compute[259627]: 2025-10-14 09:14:57.336 2 DEBUG oslo_concurrency.processutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:57 np0005486808 nova_compute[259627]: 2025-10-14 09:14:57.371 2 DEBUG nova.storage.rbd_utils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image 84968701-f6c5-4798-888e-fa0f3311adca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:57 np0005486808 nova_compute[259627]: 2025-10-14 09:14:57.379 2 DEBUG oslo_concurrency.processutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:14:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:14:57 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1490268113' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:14:57 np0005486808 nova_compute[259627]: 2025-10-14 09:14:57.916 2 DEBUG oslo_concurrency.processutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:57 np0005486808 nova_compute[259627]: 2025-10-14 09:14:57.918 2 DEBUG nova.virt.libvirt.vif [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:14:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1896126591',display_name='tempest-ServersTestJSON-server-1896126591',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1896126591',id=102,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0a080fae2f3c4e39a6cca225203f5ec6',ramdisk_id='',reservation_id='r-g3f94nzf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-2060951674',owner_user_name='tempest-ServersTestJSON-2060951674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:14:53Z,user_data=None,user_id='a287ef08fc5c4f218bf06cd2c7ed021e',uuid=84968701-f6c5-4798-888e-fa0f3311adca,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a8c2f2be-f25a-4512-b8a2-17b2b695ce52", "address": "fa:16:3e:c1:58:a4", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c2f2be-f2", "ovs_interfaceid": "a8c2f2be-f25a-4512-b8a2-17b2b695ce52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:14:57 np0005486808 nova_compute[259627]: 2025-10-14 09:14:57.919 2 DEBUG nova.network.os_vif_util [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converting VIF {"id": "a8c2f2be-f25a-4512-b8a2-17b2b695ce52", "address": "fa:16:3e:c1:58:a4", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c2f2be-f2", "ovs_interfaceid": "a8c2f2be-f25a-4512-b8a2-17b2b695ce52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:14:57 np0005486808 nova_compute[259627]: 2025-10-14 09:14:57.922 2 DEBUG nova.network.os_vif_util [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:58:a4,bridge_name='br-int',has_traffic_filtering=True,id=a8c2f2be-f25a-4512-b8a2-17b2b695ce52,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8c2f2be-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:14:57 np0005486808 nova_compute[259627]: 2025-10-14 09:14:57.923 2 DEBUG nova.objects.instance [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 84968701-f6c5-4798-888e-fa0f3311adca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:14:57 np0005486808 nova_compute[259627]: 2025-10-14 09:14:57.953 2 DEBUG nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:14:57 np0005486808 nova_compute[259627]:  <uuid>84968701-f6c5-4798-888e-fa0f3311adca</uuid>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:  <name>instance-00000066</name>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:14:57 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:      <nova:name>tempest-ServersTestJSON-server-1896126591</nova:name>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:14:56</nova:creationTime>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:14:57 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:        <nova:user uuid="a287ef08fc5c4f218bf06cd2c7ed021e">tempest-ServersTestJSON-2060951674-project-member</nova:user>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:        <nova:project uuid="0a080fae2f3c4e39a6cca225203f5ec6">tempest-ServersTestJSON-2060951674</nova:project>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:        <nova:port uuid="a8c2f2be-f25a-4512-b8a2-17b2b695ce52">
Oct 14 05:14:57 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:      <entry name="serial">84968701-f6c5-4798-888e-fa0f3311adca</entry>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:      <entry name="uuid">84968701-f6c5-4798-888e-fa0f3311adca</entry>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:14:57 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/84968701-f6c5-4798-888e-fa0f3311adca_disk">
Oct 14 05:14:57 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:14:57 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:14:57 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/84968701-f6c5-4798-888e-fa0f3311adca_disk.config">
Oct 14 05:14:57 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:14:57 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:14:57 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:c1:58:a4"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:      <target dev="tapa8c2f2be-f2"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:14:57 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/84968701-f6c5-4798-888e-fa0f3311adca/console.log" append="off"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:14:57 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:14:57 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:14:57 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:14:57 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:14:57 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:14:57 np0005486808 nova_compute[259627]: 2025-10-14 09:14:57.955 2 DEBUG nova.compute.manager [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Preparing to wait for external event network-vif-plugged-a8c2f2be-f25a-4512-b8a2-17b2b695ce52 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:14:57 np0005486808 nova_compute[259627]: 2025-10-14 09:14:57.956 2 DEBUG oslo_concurrency.lockutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "84968701-f6c5-4798-888e-fa0f3311adca-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:14:57 np0005486808 nova_compute[259627]: 2025-10-14 09:14:57.956 2 DEBUG oslo_concurrency.lockutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "84968701-f6c5-4798-888e-fa0f3311adca-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:14:57 np0005486808 nova_compute[259627]: 2025-10-14 09:14:57.956 2 DEBUG oslo_concurrency.lockutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "84968701-f6c5-4798-888e-fa0f3311adca-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:14:57 np0005486808 nova_compute[259627]: 2025-10-14 09:14:57.957 2 DEBUG nova.virt.libvirt.vif [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:14:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1896126591',display_name='tempest-ServersTestJSON-server-1896126591',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1896126591',id=102,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0a080fae2f3c4e39a6cca225203f5ec6',ramdisk_id='',reservation_id='r-g3f94nzf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-2060951674',owner_user_name='tempest-ServersTestJSON-2060951674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:14:53Z,user_data=None,user_id='a287ef08fc5c4f218bf06cd2c7ed021e',uuid=84968701-f6c5-4798-888e-fa0f3311adca,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a8c2f2be-f25a-4512-b8a2-17b2b695ce52", "address": "fa:16:3e:c1:58:a4", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c2f2be-f2", "ovs_interfaceid": "a8c2f2be-f25a-4512-b8a2-17b2b695ce52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:14:57 np0005486808 nova_compute[259627]: 2025-10-14 09:14:57.958 2 DEBUG nova.network.os_vif_util [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converting VIF {"id": "a8c2f2be-f25a-4512-b8a2-17b2b695ce52", "address": "fa:16:3e:c1:58:a4", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c2f2be-f2", "ovs_interfaceid": "a8c2f2be-f25a-4512-b8a2-17b2b695ce52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:14:57 np0005486808 nova_compute[259627]: 2025-10-14 09:14:57.960 2 DEBUG nova.network.os_vif_util [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:58:a4,bridge_name='br-int',has_traffic_filtering=True,id=a8c2f2be-f25a-4512-b8a2-17b2b695ce52,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8c2f2be-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:14:57 np0005486808 nova_compute[259627]: 2025-10-14 09:14:57.960 2 DEBUG os_vif [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:58:a4,bridge_name='br-int',has_traffic_filtering=True,id=a8c2f2be-f25a-4512-b8a2-17b2b695ce52,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8c2f2be-f2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:14:57 np0005486808 nova_compute[259627]: 2025-10-14 09:14:57.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:57 np0005486808 nova_compute[259627]: 2025-10-14 09:14:57.962 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:57 np0005486808 nova_compute[259627]: 2025-10-14 09:14:57.964 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:14:57 np0005486808 nova_compute[259627]: 2025-10-14 09:14:57.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:57 np0005486808 nova_compute[259627]: 2025-10-14 09:14:57.969 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa8c2f2be-f2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:57 np0005486808 nova_compute[259627]: 2025-10-14 09:14:57.970 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa8c2f2be-f2, col_values=(('external_ids', {'iface-id': 'a8c2f2be-f25a-4512-b8a2-17b2b695ce52', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c1:58:a4', 'vm-uuid': '84968701-f6c5-4798-888e-fa0f3311adca'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:57 np0005486808 nova_compute[259627]: 2025-10-14 09:14:57.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:57 np0005486808 NetworkManager[44885]: <info>  [1760433297.9735] manager: (tapa8c2f2be-f2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/436)
Oct 14 05:14:57 np0005486808 nova_compute[259627]: 2025-10-14 09:14:57.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:14:57 np0005486808 nova_compute[259627]: 2025-10-14 09:14:57.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:57 np0005486808 nova_compute[259627]: 2025-10-14 09:14:57.980 2 INFO os_vif [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:58:a4,bridge_name='br-int',has_traffic_filtering=True,id=a8c2f2be-f25a-4512-b8a2-17b2b695ce52,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8c2f2be-f2')#033[00m
Oct 14 05:14:57 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1821: 305 pgs: 305 active+clean; 260 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 220 op/s
Oct 14 05:14:58 np0005486808 nova_compute[259627]: 2025-10-14 09:14:58.047 2 DEBUG nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:14:58 np0005486808 nova_compute[259627]: 2025-10-14 09:14:58.048 2 DEBUG nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:14:58 np0005486808 nova_compute[259627]: 2025-10-14 09:14:58.049 2 DEBUG nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] No VIF found with MAC fa:16:3e:c1:58:a4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:14:58 np0005486808 nova_compute[259627]: 2025-10-14 09:14:58.049 2 INFO nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Using config drive#033[00m
Oct 14 05:14:58 np0005486808 nova_compute[259627]: 2025-10-14 09:14:58.070 2 DEBUG nova.storage.rbd_utils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image 84968701-f6c5-4798-888e-fa0f3311adca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 14 05:14:58 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 14 05:14:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:14:58 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:14:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:14:58 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:14:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:14:58 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:14:58 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 960adb52-a5e4-4e97-9f70-70a815d5e2cf does not exist
Oct 14 05:14:58 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 82e9d0c5-fd39-4286-aac4-57f46ebe8b38 does not exist
Oct 14 05:14:58 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 8b89a373-26e9-4ede-be32-d0f750e68abf does not exist
Oct 14 05:14:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:14:58 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:14:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:14:58 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:14:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:14:58 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:14:58 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:14:58 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:14:58 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 14 05:14:58 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:14:58 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:14:58 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:14:58 np0005486808 nova_compute[259627]: 2025-10-14 09:14:58.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:58 np0005486808 nova_compute[259627]: 2025-10-14 09:14:58.581 2 INFO nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Creating config drive at /var/lib/nova/instances/84968701-f6c5-4798-888e-fa0f3311adca/disk.config#033[00m
Oct 14 05:14:58 np0005486808 nova_compute[259627]: 2025-10-14 09:14:58.585 2 DEBUG oslo_concurrency.processutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/84968701-f6c5-4798-888e-fa0f3311adca/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdwxhudkf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:58 np0005486808 podman[359668]: 2025-10-14 09:14:58.679256304 +0000 UTC m=+0.040309104 container create e678264e9241dc5b190f8419c2e3e3091a2cae1f7bd5e1c822008421e1ea16c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_black, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:14:58 np0005486808 systemd[1]: Started libpod-conmon-e678264e9241dc5b190f8419c2e3e3091a2cae1f7bd5e1c822008421e1ea16c4.scope.
Oct 14 05:14:58 np0005486808 nova_compute[259627]: 2025-10-14 09:14:58.734 2 DEBUG oslo_concurrency.processutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/84968701-f6c5-4798-888e-fa0f3311adca/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdwxhudkf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:58 np0005486808 podman[359668]: 2025-10-14 09:14:58.661715742 +0000 UTC m=+0.022768552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:14:58 np0005486808 nova_compute[259627]: 2025-10-14 09:14:58.759 2 DEBUG nova.storage.rbd_utils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image 84968701-f6c5-4798-888e-fa0f3311adca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:14:58 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:14:58 np0005486808 nova_compute[259627]: 2025-10-14 09:14:58.767 2 DEBUG oslo_concurrency.processutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/84968701-f6c5-4798-888e-fa0f3311adca/disk.config 84968701-f6c5-4798-888e-fa0f3311adca_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:14:58 np0005486808 podman[359668]: 2025-10-14 09:14:58.784491178 +0000 UTC m=+0.145544018 container init e678264e9241dc5b190f8419c2e3e3091a2cae1f7bd5e1c822008421e1ea16c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Oct 14 05:14:58 np0005486808 podman[359668]: 2025-10-14 09:14:58.791377678 +0000 UTC m=+0.152430478 container start e678264e9241dc5b190f8419c2e3e3091a2cae1f7bd5e1c822008421e1ea16c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_black, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:14:58 np0005486808 podman[359668]: 2025-10-14 09:14:58.795722975 +0000 UTC m=+0.156775855 container attach e678264e9241dc5b190f8419c2e3e3091a2cae1f7bd5e1c822008421e1ea16c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_black, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:14:58 np0005486808 systemd[1]: libpod-e678264e9241dc5b190f8419c2e3e3091a2cae1f7bd5e1c822008421e1ea16c4.scope: Deactivated successfully.
Oct 14 05:14:58 np0005486808 sweet_black[359686]: 167 167
Oct 14 05:14:58 np0005486808 conmon[359686]: conmon e678264e9241dc5b190f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e678264e9241dc5b190f8419c2e3e3091a2cae1f7bd5e1c822008421e1ea16c4.scope/container/memory.events
Oct 14 05:14:58 np0005486808 podman[359668]: 2025-10-14 09:14:58.800249696 +0000 UTC m=+0.161302496 container died e678264e9241dc5b190f8419c2e3e3091a2cae1f7bd5e1c822008421e1ea16c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_black, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 14 05:14:58 np0005486808 systemd[1]: var-lib-containers-storage-overlay-55caa37a6b3c4650d272911b5ba43eecb749a9ddd560f043e0c37c4bb7b8ca0f-merged.mount: Deactivated successfully.
Oct 14 05:14:58 np0005486808 podman[359668]: 2025-10-14 09:14:58.855210461 +0000 UTC m=+0.216263271 container remove e678264e9241dc5b190f8419c2e3e3091a2cae1f7bd5e1c822008421e1ea16c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 14 05:14:58 np0005486808 systemd[1]: libpod-conmon-e678264e9241dc5b190f8419c2e3e3091a2cae1f7bd5e1c822008421e1ea16c4.scope: Deactivated successfully.
Oct 14 05:14:58 np0005486808 nova_compute[259627]: 2025-10-14 09:14:58.889 2 DEBUG nova.network.neutron [req-638cb684-56a2-433b-973c-b0a676c4d210 req-74da4ca9-e4c5-47b6-bf1a-127e93b8dd9a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Updated VIF entry in instance network info cache for port a8c2f2be-f25a-4512-b8a2-17b2b695ce52. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:14:58 np0005486808 nova_compute[259627]: 2025-10-14 09:14:58.891 2 DEBUG nova.network.neutron [req-638cb684-56a2-433b-973c-b0a676c4d210 req-74da4ca9-e4c5-47b6-bf1a-127e93b8dd9a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Updating instance_info_cache with network_info: [{"id": "a8c2f2be-f25a-4512-b8a2-17b2b695ce52", "address": "fa:16:3e:c1:58:a4", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c2f2be-f2", "ovs_interfaceid": "a8c2f2be-f25a-4512-b8a2-17b2b695ce52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:14:58 np0005486808 nova_compute[259627]: 2025-10-14 09:14:58.910 2 DEBUG oslo_concurrency.lockutils [req-638cb684-56a2-433b-973c-b0a676c4d210 req-74da4ca9-e4c5-47b6-bf1a-127e93b8dd9a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-84968701-f6c5-4798-888e-fa0f3311adca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:14:58 np0005486808 nova_compute[259627]: 2025-10-14 09:14:58.955 2 DEBUG oslo_concurrency.processutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/84968701-f6c5-4798-888e-fa0f3311adca/disk.config 84968701-f6c5-4798-888e-fa0f3311adca_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:14:58 np0005486808 nova_compute[259627]: 2025-10-14 09:14:58.955 2 INFO nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Deleting local config drive /var/lib/nova/instances/84968701-f6c5-4798-888e-fa0f3311adca/disk.config because it was imported into RBD.#033[00m
Oct 14 05:14:59 np0005486808 kernel: tapa8c2f2be-f2: entered promiscuous mode
Oct 14 05:14:59 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:59Z|01077|binding|INFO|Claiming lport a8c2f2be-f25a-4512-b8a2-17b2b695ce52 for this chassis.
Oct 14 05:14:59 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:59Z|01078|binding|INFO|a8c2f2be-f25a-4512-b8a2-17b2b695ce52: Claiming fa:16:3e:c1:58:a4 10.100.0.12
Oct 14 05:14:59 np0005486808 nova_compute[259627]: 2025-10-14 09:14:59.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:59 np0005486808 NetworkManager[44885]: <info>  [1760433299.0163] manager: (tapa8c2f2be-f2): new Tun device (/org/freedesktop/NetworkManager/Devices/437)
Oct 14 05:14:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:59.020 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:58:a4 10.100.0.12'], port_security=['fa:16:3e:c1:58:a4 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '84968701-f6c5-4798-888e-fa0f3311adca', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbecee11-4892-4e36-88d8-98879af7bb1e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a080fae2f3c4e39a6cca225203f5ec6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '03d7ceab-aac8-44ec-88a3-f0ef79d050f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d77764d-de2c-4408-8882-77d03cb7288a, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=a8c2f2be-f25a-4512-b8a2-17b2b695ce52) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:14:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:59.022 162547 INFO neutron.agent.ovn.metadata.agent [-] Port a8c2f2be-f25a-4512-b8a2-17b2b695ce52 in datapath fbecee11-4892-4e36-88d8-98879af7bb1e bound to our chassis#033[00m
Oct 14 05:14:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:59.023 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbecee11-4892-4e36-88d8-98879af7bb1e#033[00m
Oct 14 05:14:59 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:59Z|01079|binding|INFO|Setting lport a8c2f2be-f25a-4512-b8a2-17b2b695ce52 ovn-installed in OVS
Oct 14 05:14:59 np0005486808 ovn_controller[152662]: 2025-10-14T09:14:59Z|01080|binding|INFO|Setting lport a8c2f2be-f25a-4512-b8a2-17b2b695ce52 up in Southbound
Oct 14 05:14:59 np0005486808 nova_compute[259627]: 2025-10-14 09:14:59.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:59.047 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[20b19d78-2fec-4c8c-950f-d3bc750799eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:59 np0005486808 systemd-udevd[359772]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:14:59 np0005486808 systemd-machined[214636]: New machine qemu-127-instance-00000066.
Oct 14 05:14:59 np0005486808 podman[359751]: 2025-10-14 09:14:59.067520373 +0000 UTC m=+0.066546870 container create fcd44b21de49d382c706d12d3f98a257207777036d225df47e67a6ccc2fa7905 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 05:14:59 np0005486808 NetworkManager[44885]: <info>  [1760433299.0719] device (tapa8c2f2be-f2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:14:59 np0005486808 systemd[1]: Started Virtual Machine qemu-127-instance-00000066.
Oct 14 05:14:59 np0005486808 NetworkManager[44885]: <info>  [1760433299.0728] device (tapa8c2f2be-f2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:14:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:59.103 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[20185546-02d9-40c6-aba4-abfb119451a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:59.105 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[998f241d-84f5-49c1-98db-4cabaeecf619]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:59 np0005486808 systemd[1]: Started libpod-conmon-fcd44b21de49d382c706d12d3f98a257207777036d225df47e67a6ccc2fa7905.scope.
Oct 14 05:14:59 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:14:59 np0005486808 podman[359751]: 2025-10-14 09:14:59.045718866 +0000 UTC m=+0.044745373 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:14:59 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa52c94f82841b87b8696a1b505c48ca7fdb071e77d319e0305a992492df1fc6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:14:59 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa52c94f82841b87b8696a1b505c48ca7fdb071e77d319e0305a992492df1fc6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:14:59 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa52c94f82841b87b8696a1b505c48ca7fdb071e77d319e0305a992492df1fc6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:14:59 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa52c94f82841b87b8696a1b505c48ca7fdb071e77d319e0305a992492df1fc6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:14:59 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa52c94f82841b87b8696a1b505c48ca7fdb071e77d319e0305a992492df1fc6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:14:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:59.140 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[6d3334ba-f0ba-4cf8-a9b7-a53f8e48909f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:59.148 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:14:59 np0005486808 nova_compute[259627]: 2025-10-14 09:14:59.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:59.163 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7fd4cdc5-148b-4075-ad41-eaddca02e666]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbecee11-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:b3:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 916, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 916, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 700079, 'reachable_time': 38382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 359792, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:59 np0005486808 podman[359751]: 2025-10-14 09:14:59.169416364 +0000 UTC m=+0.168442851 container init fcd44b21de49d382c706d12d3f98a257207777036d225df47e67a6ccc2fa7905 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:14:59 np0005486808 podman[359751]: 2025-10-14 09:14:59.177819391 +0000 UTC m=+0.176845878 container start fcd44b21de49d382c706d12d3f98a257207777036d225df47e67a6ccc2fa7905 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_swartz, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:14:59 np0005486808 podman[359751]: 2025-10-14 09:14:59.181578154 +0000 UTC m=+0.180604641 container attach fcd44b21de49d382c706d12d3f98a257207777036d225df47e67a6ccc2fa7905 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:14:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:59.186 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e80f2394-e9df-4a38-8fb4-8f4daac92883]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700095, 'tstamp': 700095}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 359795, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700099, 'tstamp': 700099}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 359795, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:14:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:59.188 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbecee11-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:59 np0005486808 nova_compute[259627]: 2025-10-14 09:14:59.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:59 np0005486808 nova_compute[259627]: 2025-10-14 09:14:59.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:14:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:59.191 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbecee11-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:59.192 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:14:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:59.192 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbecee11-40, col_values=(('external_ids', {'iface-id': 'b1a95e67-ca39-4a56-9a91-3165df166ea9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:14:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:59.193 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:14:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:14:59.194 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 14 05:14:59 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1822: 305 pgs: 305 active+clean; 260 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 220 op/s
Oct 14 05:14:59 np0005486808 nova_compute[259627]: 2025-10-14 09:14:59.983 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433299.982139, 84968701-f6c5-4798-888e-fa0f3311adca => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:14:59 np0005486808 nova_compute[259627]: 2025-10-14 09:14:59.984 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] VM Started (Lifecycle Event)#033[00m
Oct 14 05:15:00 np0005486808 nova_compute[259627]: 2025-10-14 09:15:00.004 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:15:00 np0005486808 nova_compute[259627]: 2025-10-14 09:15:00.009 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433299.9823294, 84968701-f6c5-4798-888e-fa0f3311adca => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:15:00 np0005486808 nova_compute[259627]: 2025-10-14 09:15:00.010 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:15:00 np0005486808 nova_compute[259627]: 2025-10-14 09:15:00.030 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:15:00 np0005486808 nova_compute[259627]: 2025-10-14 09:15:00.033 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:15:00 np0005486808 nova_compute[259627]: 2025-10-14 09:15:00.052 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:15:00 np0005486808 strange_swartz[359782]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:15:00 np0005486808 strange_swartz[359782]: --> relative data size: 1.0
Oct 14 05:15:00 np0005486808 strange_swartz[359782]: --> All data devices are unavailable
Oct 14 05:15:00 np0005486808 systemd[1]: libpod-fcd44b21de49d382c706d12d3f98a257207777036d225df47e67a6ccc2fa7905.scope: Deactivated successfully.
Oct 14 05:15:00 np0005486808 conmon[359782]: conmon fcd44b21de49d382c706 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fcd44b21de49d382c706d12d3f98a257207777036d225df47e67a6ccc2fa7905.scope/container/memory.events
Oct 14 05:15:00 np0005486808 podman[359751]: 2025-10-14 09:15:00.199075262 +0000 UTC m=+1.198101769 container died fcd44b21de49d382c706d12d3f98a257207777036d225df47e67a6ccc2fa7905 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_swartz, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 14 05:15:00 np0005486808 systemd[1]: var-lib-containers-storage-overlay-fa52c94f82841b87b8696a1b505c48ca7fdb071e77d319e0305a992492df1fc6-merged.mount: Deactivated successfully.
Oct 14 05:15:00 np0005486808 podman[359751]: 2025-10-14 09:15:00.261866639 +0000 UTC m=+1.260893126 container remove fcd44b21de49d382c706d12d3f98a257207777036d225df47e67a6ccc2fa7905 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_swartz, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 14 05:15:00 np0005486808 systemd[1]: libpod-conmon-fcd44b21de49d382c706d12d3f98a257207777036d225df47e67a6ccc2fa7905.scope: Deactivated successfully.
Oct 14 05:15:00 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:00Z|00105|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fa:70:22 10.100.0.3
Oct 14 05:15:00 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:00Z|00106|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fa:70:22 10.100.0.3
Oct 14 05:15:00 np0005486808 nova_compute[259627]: 2025-10-14 09:15:00.626 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433285.467822, b595141f-123e-4250-bfec-888d866fd0c6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:15:00 np0005486808 nova_compute[259627]: 2025-10-14 09:15:00.626 2 INFO nova.compute.manager [-] [instance: b595141f-123e-4250-bfec-888d866fd0c6] VM Stopped (Lifecycle Event)
Oct 14 05:15:00 np0005486808 nova_compute[259627]: 2025-10-14 09:15:00.659 2 DEBUG nova.compute.manager [None req-242c9f69-e5ed-48f4-a4ca-deb7de9d70e4 - - - - - -] [instance: b595141f-123e-4250-bfec-888d866fd0c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:15:00 np0005486808 podman[360014]: 2025-10-14 09:15:00.919092668 +0000 UTC m=+0.042629932 container create 9a490f30482dbb84a59d90c856f1eefe567b07cf1bd80c6e434c0520d837ba91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_lederberg, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 14 05:15:00 np0005486808 systemd[1]: Started libpod-conmon-9a490f30482dbb84a59d90c856f1eefe567b07cf1bd80c6e434c0520d837ba91.scope.
Oct 14 05:15:00 np0005486808 podman[360014]: 2025-10-14 09:15:00.900997242 +0000 UTC m=+0.024534496 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:15:01 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:15:01 np0005486808 podman[360014]: 2025-10-14 09:15:01.039970807 +0000 UTC m=+0.163508071 container init 9a490f30482dbb84a59d90c856f1eefe567b07cf1bd80c6e434c0520d837ba91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 05:15:01 np0005486808 podman[360014]: 2025-10-14 09:15:01.052513046 +0000 UTC m=+0.176050280 container start 9a490f30482dbb84a59d90c856f1eefe567b07cf1bd80c6e434c0520d837ba91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 14 05:15:01 np0005486808 podman[360014]: 2025-10-14 09:15:01.057166141 +0000 UTC m=+0.180703365 container attach 9a490f30482dbb84a59d90c856f1eefe567b07cf1bd80c6e434c0520d837ba91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:15:01 np0005486808 upbeat_lederberg[360031]: 167 167
Oct 14 05:15:01 np0005486808 systemd[1]: libpod-9a490f30482dbb84a59d90c856f1eefe567b07cf1bd80c6e434c0520d837ba91.scope: Deactivated successfully.
Oct 14 05:15:01 np0005486808 podman[360014]: 2025-10-14 09:15:01.062780749 +0000 UTC m=+0.186317983 container died 9a490f30482dbb84a59d90c856f1eefe567b07cf1bd80c6e434c0520d837ba91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_lederberg, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 14 05:15:01 np0005486808 systemd[1]: var-lib-containers-storage-overlay-4a6c36d2cf3685c0e671aeec3bf72bc51c5865720bdf11777bd893d5f3dc7b52-merged.mount: Deactivated successfully.
Oct 14 05:15:01 np0005486808 podman[360014]: 2025-10-14 09:15:01.103888232 +0000 UTC m=+0.227425466 container remove 9a490f30482dbb84a59d90c856f1eefe567b07cf1bd80c6e434c0520d837ba91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 14 05:15:01 np0005486808 systemd[1]: libpod-conmon-9a490f30482dbb84a59d90c856f1eefe567b07cf1bd80c6e434c0520d837ba91.scope: Deactivated successfully.
Oct 14 05:15:01 np0005486808 podman[360055]: 2025-10-14 09:15:01.340913954 +0000 UTC m=+0.061454235 container create 54c768a82e6a760a878ef9c04331057be3f3975f283ee6929b9c3f6f3fc30835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:15:01 np0005486808 podman[360055]: 2025-10-14 09:15:01.311364686 +0000 UTC m=+0.031905057 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:15:01 np0005486808 systemd[1]: Started libpod-conmon-54c768a82e6a760a878ef9c04331057be3f3975f283ee6929b9c3f6f3fc30835.scope.
Oct 14 05:15:01 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:15:01 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12e2abfa681240a38a2098d90eebce6b45d75b2711dee286de7848c743afe9a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:15:01 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12e2abfa681240a38a2098d90eebce6b45d75b2711dee286de7848c743afe9a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:15:01 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12e2abfa681240a38a2098d90eebce6b45d75b2711dee286de7848c743afe9a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:15:01 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12e2abfa681240a38a2098d90eebce6b45d75b2711dee286de7848c743afe9a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:15:01 np0005486808 podman[360055]: 2025-10-14 09:15:01.463119416 +0000 UTC m=+0.183659727 container init 54c768a82e6a760a878ef9c04331057be3f3975f283ee6929b9c3f6f3fc30835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 14 05:15:01 np0005486808 podman[360055]: 2025-10-14 09:15:01.46976169 +0000 UTC m=+0.190301981 container start 54c768a82e6a760a878ef9c04331057be3f3975f283ee6929b9c3f6f3fc30835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_morse, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:15:01 np0005486808 podman[360055]: 2025-10-14 09:15:01.475127112 +0000 UTC m=+0.195667423 container attach 54c768a82e6a760a878ef9c04331057be3f3975f283ee6929b9c3f6f3fc30835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_morse, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 14 05:15:01 np0005486808 podman[360071]: 2025-10-14 09:15:01.491988558 +0000 UTC m=+0.100461807 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct 14 05:15:01 np0005486808 podman[360069]: 2025-10-14 09:15:01.511374116 +0000 UTC m=+0.118764259 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:15:01 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1823: 305 pgs: 305 active+clean; 293 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 293 op/s
Oct 14 05:15:02 np0005486808 happy_morse[360091]: {
Oct 14 05:15:02 np0005486808 happy_morse[360091]:    "0": [
Oct 14 05:15:02 np0005486808 happy_morse[360091]:        {
Oct 14 05:15:02 np0005486808 happy_morse[360091]:            "devices": [
Oct 14 05:15:02 np0005486808 happy_morse[360091]:                "/dev/loop3"
Oct 14 05:15:02 np0005486808 happy_morse[360091]:            ],
Oct 14 05:15:02 np0005486808 happy_morse[360091]:            "lv_name": "ceph_lv0",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:            "lv_size": "21470642176",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:            "name": "ceph_lv0",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:            "tags": {
Oct 14 05:15:02 np0005486808 happy_morse[360091]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:                "ceph.cluster_name": "ceph",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:                "ceph.crush_device_class": "",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:                "ceph.encrypted": "0",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:                "ceph.osd_id": "0",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:                "ceph.type": "block",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:                "ceph.vdo": "0"
Oct 14 05:15:02 np0005486808 happy_morse[360091]:            },
Oct 14 05:15:02 np0005486808 happy_morse[360091]:            "type": "block",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:            "vg_name": "ceph_vg0"
Oct 14 05:15:02 np0005486808 happy_morse[360091]:        }
Oct 14 05:15:02 np0005486808 happy_morse[360091]:    ],
Oct 14 05:15:02 np0005486808 happy_morse[360091]:    "1": [
Oct 14 05:15:02 np0005486808 happy_morse[360091]:        {
Oct 14 05:15:02 np0005486808 happy_morse[360091]:            "devices": [
Oct 14 05:15:02 np0005486808 happy_morse[360091]:                "/dev/loop4"
Oct 14 05:15:02 np0005486808 happy_morse[360091]:            ],
Oct 14 05:15:02 np0005486808 happy_morse[360091]:            "lv_name": "ceph_lv1",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:            "lv_size": "21470642176",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:            "name": "ceph_lv1",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:            "tags": {
Oct 14 05:15:02 np0005486808 happy_morse[360091]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:                "ceph.cluster_name": "ceph",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:                "ceph.crush_device_class": "",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:                "ceph.encrypted": "0",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:                "ceph.osd_id": "1",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:                "ceph.type": "block",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:                "ceph.vdo": "0"
Oct 14 05:15:02 np0005486808 happy_morse[360091]:            },
Oct 14 05:15:02 np0005486808 happy_morse[360091]:            "type": "block",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:            "vg_name": "ceph_vg1"
Oct 14 05:15:02 np0005486808 happy_morse[360091]:        }
Oct 14 05:15:02 np0005486808 happy_morse[360091]:    ],
Oct 14 05:15:02 np0005486808 happy_morse[360091]:    "2": [
Oct 14 05:15:02 np0005486808 happy_morse[360091]:        {
Oct 14 05:15:02 np0005486808 happy_morse[360091]:            "devices": [
Oct 14 05:15:02 np0005486808 happy_morse[360091]:                "/dev/loop5"
Oct 14 05:15:02 np0005486808 happy_morse[360091]:            ],
Oct 14 05:15:02 np0005486808 happy_morse[360091]:            "lv_name": "ceph_lv2",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:            "lv_size": "21470642176",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:            "name": "ceph_lv2",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:            "tags": {
Oct 14 05:15:02 np0005486808 happy_morse[360091]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:                "ceph.cluster_name": "ceph",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:                "ceph.crush_device_class": "",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:                "ceph.encrypted": "0",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:                "ceph.osd_id": "2",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:                "ceph.type": "block",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:                "ceph.vdo": "0"
Oct 14 05:15:02 np0005486808 happy_morse[360091]:            },
Oct 14 05:15:02 np0005486808 happy_morse[360091]:            "type": "block",
Oct 14 05:15:02 np0005486808 happy_morse[360091]:            "vg_name": "ceph_vg2"
Oct 14 05:15:02 np0005486808 happy_morse[360091]:        }
Oct 14 05:15:02 np0005486808 happy_morse[360091]:    ]
Oct 14 05:15:02 np0005486808 happy_morse[360091]: }
Oct 14 05:15:02 np0005486808 systemd[1]: libpod-54c768a82e6a760a878ef9c04331057be3f3975f283ee6929b9c3f6f3fc30835.scope: Deactivated successfully.
Oct 14 05:15:02 np0005486808 conmon[360091]: conmon 54c768a82e6a760a878e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-54c768a82e6a760a878ef9c04331057be3f3975f283ee6929b9c3f6f3fc30835.scope/container/memory.events
Oct 14 05:15:02 np0005486808 podman[360055]: 2025-10-14 09:15:02.293658866 +0000 UTC m=+1.014199197 container died 54c768a82e6a760a878ef9c04331057be3f3975f283ee6929b9c3f6f3fc30835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_morse, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 14 05:15:02 np0005486808 systemd[1]: var-lib-containers-storage-overlay-12e2abfa681240a38a2098d90eebce6b45d75b2711dee286de7848c743afe9a2-merged.mount: Deactivated successfully.
Oct 14 05:15:02 np0005486808 podman[360055]: 2025-10-14 09:15:02.363457917 +0000 UTC m=+1.083998218 container remove 54c768a82e6a760a878ef9c04331057be3f3975f283ee6929b9c3f6f3fc30835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 14 05:15:02 np0005486808 systemd[1]: libpod-conmon-54c768a82e6a760a878ef9c04331057be3f3975f283ee6929b9c3f6f3fc30835.scope: Deactivated successfully.
Oct 14 05:15:02 np0005486808 nova_compute[259627]: 2025-10-14 09:15:02.412 2 DEBUG nova.compute.manager [req-c91730d8-f0bc-4582-8e60-f0a61225a939 req-867ee8b9-e5ab-478b-8be8-c7e0cf82106c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Received event network-vif-plugged-a8c2f2be-f25a-4512-b8a2-17b2b695ce52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:15:02 np0005486808 nova_compute[259627]: 2025-10-14 09:15:02.412 2 DEBUG oslo_concurrency.lockutils [req-c91730d8-f0bc-4582-8e60-f0a61225a939 req-867ee8b9-e5ab-478b-8be8-c7e0cf82106c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "84968701-f6c5-4798-888e-fa0f3311adca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:15:02 np0005486808 nova_compute[259627]: 2025-10-14 09:15:02.413 2 DEBUG oslo_concurrency.lockutils [req-c91730d8-f0bc-4582-8e60-f0a61225a939 req-867ee8b9-e5ab-478b-8be8-c7e0cf82106c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "84968701-f6c5-4798-888e-fa0f3311adca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:15:02 np0005486808 nova_compute[259627]: 2025-10-14 09:15:02.413 2 DEBUG oslo_concurrency.lockutils [req-c91730d8-f0bc-4582-8e60-f0a61225a939 req-867ee8b9-e5ab-478b-8be8-c7e0cf82106c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "84968701-f6c5-4798-888e-fa0f3311adca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:15:02 np0005486808 nova_compute[259627]: 2025-10-14 09:15:02.413 2 DEBUG nova.compute.manager [req-c91730d8-f0bc-4582-8e60-f0a61225a939 req-867ee8b9-e5ab-478b-8be8-c7e0cf82106c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Processing event network-vif-plugged-a8c2f2be-f25a-4512-b8a2-17b2b695ce52 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 05:15:02 np0005486808 nova_compute[259627]: 2025-10-14 09:15:02.415 2 DEBUG nova.compute.manager [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 05:15:02 np0005486808 nova_compute[259627]: 2025-10-14 09:15:02.419 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433302.4194567, 84968701-f6c5-4798-888e-fa0f3311adca => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:15:02 np0005486808 nova_compute[259627]: 2025-10-14 09:15:02.420 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] VM Resumed (Lifecycle Event)
Oct 14 05:15:02 np0005486808 nova_compute[259627]: 2025-10-14 09:15:02.422 2 DEBUG nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 05:15:02 np0005486808 nova_compute[259627]: 2025-10-14 09:15:02.426 2 INFO nova.virt.libvirt.driver [-] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Instance spawned successfully.
Oct 14 05:15:02 np0005486808 nova_compute[259627]: 2025-10-14 09:15:02.426 2 DEBUG nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 05:15:02 np0005486808 nova_compute[259627]: 2025-10-14 09:15:02.445 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:15:02 np0005486808 nova_compute[259627]: 2025-10-14 09:15:02.451 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:15:02 np0005486808 nova_compute[259627]: 2025-10-14 09:15:02.456 2 DEBUG nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:15:02 np0005486808 nova_compute[259627]: 2025-10-14 09:15:02.457 2 DEBUG nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:15:02 np0005486808 nova_compute[259627]: 2025-10-14 09:15:02.457 2 DEBUG nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:15:02 np0005486808 nova_compute[259627]: 2025-10-14 09:15:02.458 2 DEBUG nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:15:02 np0005486808 nova_compute[259627]: 2025-10-14 09:15:02.459 2 DEBUG nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:15:02 np0005486808 nova_compute[259627]: 2025-10-14 09:15:02.459 2 DEBUG nova.virt.libvirt.driver [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:15:02 np0005486808 nova_compute[259627]: 2025-10-14 09:15:02.488 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:15:02 np0005486808 nova_compute[259627]: 2025-10-14 09:15:02.537 2 INFO nova.compute.manager [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Took 8.92 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:15:02 np0005486808 nova_compute[259627]: 2025-10-14 09:15:02.549 2 DEBUG nova.compute.manager [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:15:02 np0005486808 nova_compute[259627]: 2025-10-14 09:15:02.618 2 INFO nova.compute.manager [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Took 9.98 seconds to build instance.#033[00m
Oct 14 05:15:02 np0005486808 nova_compute[259627]: 2025-10-14 09:15:02.634 2 DEBUG oslo_concurrency.lockutils [None req-bde45a0b-5d4f-4965-9769-959b9230de5a a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "84968701-f6c5-4798-888e-fa0f3311adca" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:15:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:15:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:15:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:15:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:15:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:15:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:15:02 np0005486808 nova_compute[259627]: 2025-10-14 09:15:02.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:15:03 np0005486808 podman[360269]: 2025-10-14 09:15:03.077204427 +0000 UTC m=+0.046305953 container create f98592d093d6cbaad184b5fd1cdadc3d42e35df99c0fc8fce23a5b73e5500a75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_heisenberg, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Oct 14 05:15:03 np0005486808 systemd[1]: Started libpod-conmon-f98592d093d6cbaad184b5fd1cdadc3d42e35df99c0fc8fce23a5b73e5500a75.scope.
Oct 14 05:15:03 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:15:03 np0005486808 podman[360269]: 2025-10-14 09:15:03.05664187 +0000 UTC m=+0.025743416 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:15:03 np0005486808 podman[360269]: 2025-10-14 09:15:03.159131946 +0000 UTC m=+0.128233492 container init f98592d093d6cbaad184b5fd1cdadc3d42e35df99c0fc8fce23a5b73e5500a75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_heisenberg, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 14 05:15:03 np0005486808 podman[360269]: 2025-10-14 09:15:03.169643475 +0000 UTC m=+0.138745001 container start f98592d093d6cbaad184b5fd1cdadc3d42e35df99c0fc8fce23a5b73e5500a75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:15:03 np0005486808 busy_heisenberg[360285]: 167 167
Oct 14 05:15:03 np0005486808 podman[360269]: 2025-10-14 09:15:03.173523981 +0000 UTC m=+0.142625507 container attach f98592d093d6cbaad184b5fd1cdadc3d42e35df99c0fc8fce23a5b73e5500a75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_heisenberg, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:15:03 np0005486808 systemd[1]: libpod-f98592d093d6cbaad184b5fd1cdadc3d42e35df99c0fc8fce23a5b73e5500a75.scope: Deactivated successfully.
Oct 14 05:15:03 np0005486808 podman[360290]: 2025-10-14 09:15:03.214155032 +0000 UTC m=+0.026477083 container died f98592d093d6cbaad184b5fd1cdadc3d42e35df99c0fc8fce23a5b73e5500a75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:15:03 np0005486808 systemd[1]: var-lib-containers-storage-overlay-ed6e651098525854a061599b8e46a21e4a0af53050637580d55340540be25849-merged.mount: Deactivated successfully.
Oct 14 05:15:03 np0005486808 podman[360290]: 2025-10-14 09:15:03.247476884 +0000 UTC m=+0.059798895 container remove f98592d093d6cbaad184b5fd1cdadc3d42e35df99c0fc8fce23a5b73e5500a75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 14 05:15:03 np0005486808 systemd[1]: libpod-conmon-f98592d093d6cbaad184b5fd1cdadc3d42e35df99c0fc8fce23a5b73e5500a75.scope: Deactivated successfully.
Oct 14 05:15:03 np0005486808 podman[360312]: 2025-10-14 09:15:03.455412698 +0000 UTC m=+0.049252734 container create 1149fa86db1d824247a7ef8b9ff23d1a42936d193691bbada9f1fd51e96452d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_morse, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:15:03 np0005486808 systemd[1]: Started libpod-conmon-1149fa86db1d824247a7ef8b9ff23d1a42936d193691bbada9f1fd51e96452d5.scope.
Oct 14 05:15:03 np0005486808 nova_compute[259627]: 2025-10-14 09:15:03.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:15:03 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:15:03 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f0e87475b1e026c77c5b3d7edbd152d79d0bcd9930ffd1ad37bebe5d51710c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:15:03 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f0e87475b1e026c77c5b3d7edbd152d79d0bcd9930ffd1ad37bebe5d51710c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:15:03 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f0e87475b1e026c77c5b3d7edbd152d79d0bcd9930ffd1ad37bebe5d51710c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:15:03 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f0e87475b1e026c77c5b3d7edbd152d79d0bcd9930ffd1ad37bebe5d51710c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:15:03 np0005486808 podman[360312]: 2025-10-14 09:15:03.532084628 +0000 UTC m=+0.125924664 container init 1149fa86db1d824247a7ef8b9ff23d1a42936d193691bbada9f1fd51e96452d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_morse, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 14 05:15:03 np0005486808 podman[360312]: 2025-10-14 09:15:03.431892389 +0000 UTC m=+0.025732445 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:15:03 np0005486808 podman[360312]: 2025-10-14 09:15:03.539276445 +0000 UTC m=+0.133116481 container start 1149fa86db1d824247a7ef8b9ff23d1a42936d193691bbada9f1fd51e96452d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 14 05:15:03 np0005486808 podman[360312]: 2025-10-14 09:15:03.548728448 +0000 UTC m=+0.142568504 container attach 1149fa86db1d824247a7ef8b9ff23d1a42936d193691bbada9f1fd51e96452d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_morse, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 14 05:15:03 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1824: 305 pgs: 305 active+clean; 293 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 173 op/s
Oct 14 05:15:04 np0005486808 practical_morse[360329]: {
Oct 14 05:15:04 np0005486808 practical_morse[360329]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:15:04 np0005486808 practical_morse[360329]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:15:04 np0005486808 practical_morse[360329]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:15:04 np0005486808 practical_morse[360329]:        "osd_id": 2,
Oct 14 05:15:04 np0005486808 practical_morse[360329]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:15:04 np0005486808 practical_morse[360329]:        "type": "bluestore"
Oct 14 05:15:04 np0005486808 practical_morse[360329]:    },
Oct 14 05:15:04 np0005486808 practical_morse[360329]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:15:04 np0005486808 practical_morse[360329]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:15:04 np0005486808 practical_morse[360329]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:15:04 np0005486808 practical_morse[360329]:        "osd_id": 1,
Oct 14 05:15:04 np0005486808 practical_morse[360329]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:15:04 np0005486808 practical_morse[360329]:        "type": "bluestore"
Oct 14 05:15:04 np0005486808 practical_morse[360329]:    },
Oct 14 05:15:04 np0005486808 practical_morse[360329]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:15:04 np0005486808 practical_morse[360329]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:15:04 np0005486808 practical_morse[360329]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:15:04 np0005486808 practical_morse[360329]:        "osd_id": 0,
Oct 14 05:15:04 np0005486808 practical_morse[360329]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:15:04 np0005486808 practical_morse[360329]:        "type": "bluestore"
Oct 14 05:15:04 np0005486808 practical_morse[360329]:    }
Oct 14 05:15:04 np0005486808 practical_morse[360329]: }
Oct 14 05:15:04 np0005486808 systemd[1]: libpod-1149fa86db1d824247a7ef8b9ff23d1a42936d193691bbada9f1fd51e96452d5.scope: Deactivated successfully.
Oct 14 05:15:04 np0005486808 podman[360312]: 2025-10-14 09:15:04.637207086 +0000 UTC m=+1.231047132 container died 1149fa86db1d824247a7ef8b9ff23d1a42936d193691bbada9f1fd51e96452d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 05:15:04 np0005486808 systemd[1]: libpod-1149fa86db1d824247a7ef8b9ff23d1a42936d193691bbada9f1fd51e96452d5.scope: Consumed 1.034s CPU time.
Oct 14 05:15:04 np0005486808 systemd[1]: var-lib-containers-storage-overlay-8f0e87475b1e026c77c5b3d7edbd152d79d0bcd9930ffd1ad37bebe5d51710c0-merged.mount: Deactivated successfully.
Oct 14 05:15:04 np0005486808 podman[360312]: 2025-10-14 09:15:04.719249448 +0000 UTC m=+1.313089494 container remove 1149fa86db1d824247a7ef8b9ff23d1a42936d193691bbada9f1fd51e96452d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_morse, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 14 05:15:04 np0005486808 systemd[1]: libpod-conmon-1149fa86db1d824247a7ef8b9ff23d1a42936d193691bbada9f1fd51e96452d5.scope: Deactivated successfully.
Oct 14 05:15:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:15:04 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:15:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:15:04 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:15:04 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev ca4426c9-0da5-42f9-99bc-db50c1b0b77e does not exist
Oct 14 05:15:04 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 8adcd819-fe73-4c76-8ef8-147c3e945179 does not exist
Oct 14 05:15:05 np0005486808 nova_compute[259627]: 2025-10-14 09:15:05.016 2 DEBUG nova.virt.libvirt.driver [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 14 05:15:05 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:15:05 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:15:05 np0005486808 nova_compute[259627]: 2025-10-14 09:15:05.542 2 DEBUG nova.compute.manager [req-b3c6db00-f37c-432f-bb48-01bb278e471d req-48534a61-f5ab-481e-86a6-620e0d4674ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Received event network-vif-plugged-a8c2f2be-f25a-4512-b8a2-17b2b695ce52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:15:05 np0005486808 nova_compute[259627]: 2025-10-14 09:15:05.543 2 DEBUG oslo_concurrency.lockutils [req-b3c6db00-f37c-432f-bb48-01bb278e471d req-48534a61-f5ab-481e-86a6-620e0d4674ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "84968701-f6c5-4798-888e-fa0f3311adca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:15:05 np0005486808 nova_compute[259627]: 2025-10-14 09:15:05.543 2 DEBUG oslo_concurrency.lockutils [req-b3c6db00-f37c-432f-bb48-01bb278e471d req-48534a61-f5ab-481e-86a6-620e0d4674ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "84968701-f6c5-4798-888e-fa0f3311adca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:15:05 np0005486808 nova_compute[259627]: 2025-10-14 09:15:05.543 2 DEBUG oslo_concurrency.lockutils [req-b3c6db00-f37c-432f-bb48-01bb278e471d req-48534a61-f5ab-481e-86a6-620e0d4674ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "84968701-f6c5-4798-888e-fa0f3311adca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:15:05 np0005486808 nova_compute[259627]: 2025-10-14 09:15:05.544 2 DEBUG nova.compute.manager [req-b3c6db00-f37c-432f-bb48-01bb278e471d req-48534a61-f5ab-481e-86a6-620e0d4674ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] No waiting events found dispatching network-vif-plugged-a8c2f2be-f25a-4512-b8a2-17b2b695ce52 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 05:15:05 np0005486808 nova_compute[259627]: 2025-10-14 09:15:05.544 2 WARNING nova.compute.manager [req-b3c6db00-f37c-432f-bb48-01bb278e471d req-48534a61-f5ab-481e-86a6-620e0d4674ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Received unexpected event network-vif-plugged-a8c2f2be-f25a-4512-b8a2-17b2b695ce52 for instance with vm_state active and task_state None.
Oct 14 05:15:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:15:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2024625731' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:15:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:15:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2024625731' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:15:05 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1825: 305 pgs: 305 active+clean; 326 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 6.1 MiB/s wr, 303 op/s
Oct 14 05:15:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:06.197 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:15:06 np0005486808 nova_compute[259627]: 2025-10-14 09:15:06.573 2 DEBUG oslo_concurrency.lockutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "33969555-fe06-4613-b244-d03c9b4180ba" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:15:06 np0005486808 nova_compute[259627]: 2025-10-14 09:15:06.573 2 DEBUG oslo_concurrency.lockutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:15:06 np0005486808 nova_compute[259627]: 2025-10-14 09:15:06.577 2 DEBUG oslo_concurrency.lockutils [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "84968701-f6c5-4798-888e-fa0f3311adca" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:15:06 np0005486808 nova_compute[259627]: 2025-10-14 09:15:06.578 2 DEBUG oslo_concurrency.lockutils [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "84968701-f6c5-4798-888e-fa0f3311adca" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:15:06 np0005486808 nova_compute[259627]: 2025-10-14 09:15:06.579 2 DEBUG oslo_concurrency.lockutils [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "84968701-f6c5-4798-888e-fa0f3311adca-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:15:06 np0005486808 nova_compute[259627]: 2025-10-14 09:15:06.579 2 DEBUG oslo_concurrency.lockutils [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "84968701-f6c5-4798-888e-fa0f3311adca-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:15:06 np0005486808 nova_compute[259627]: 2025-10-14 09:15:06.579 2 DEBUG oslo_concurrency.lockutils [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "84968701-f6c5-4798-888e-fa0f3311adca-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:15:06 np0005486808 nova_compute[259627]: 2025-10-14 09:15:06.580 2 INFO nova.compute.manager [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Terminating instance#033[00m
Oct 14 05:15:06 np0005486808 nova_compute[259627]: 2025-10-14 09:15:06.581 2 DEBUG nova.compute.manager [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:15:06 np0005486808 nova_compute[259627]: 2025-10-14 09:15:06.587 2 DEBUG nova.compute.manager [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:15:06 np0005486808 kernel: tapa8c2f2be-f2 (unregistering): left promiscuous mode
Oct 14 05:15:06 np0005486808 NetworkManager[44885]: <info>  [1760433306.6406] device (tapa8c2f2be-f2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:15:06 np0005486808 nova_compute[259627]: 2025-10-14 09:15:06.650 2 DEBUG oslo_concurrency.lockutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:06 np0005486808 nova_compute[259627]: 2025-10-14 09:15:06.651 2 DEBUG oslo_concurrency.lockutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:06 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:06Z|01081|binding|INFO|Releasing lport a8c2f2be-f25a-4512-b8a2-17b2b695ce52 from this chassis (sb_readonly=0)
Oct 14 05:15:06 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:06Z|01082|binding|INFO|Setting lport a8c2f2be-f25a-4512-b8a2-17b2b695ce52 down in Southbound
Oct 14 05:15:06 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:06Z|01083|binding|INFO|Removing iface tapa8c2f2be-f2 ovn-installed in OVS
Oct 14 05:15:06 np0005486808 nova_compute[259627]: 2025-10-14 09:15:06.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:06 np0005486808 nova_compute[259627]: 2025-10-14 09:15:06.664 2 DEBUG nova.virt.hardware [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:15:06 np0005486808 nova_compute[259627]: 2025-10-14 09:15:06.664 2 INFO nova.compute.claims [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:15:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:06.662 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:58:a4 10.100.0.12'], port_security=['fa:16:3e:c1:58:a4 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '84968701-f6c5-4798-888e-fa0f3311adca', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbecee11-4892-4e36-88d8-98879af7bb1e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a080fae2f3c4e39a6cca225203f5ec6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '03d7ceab-aac8-44ec-88a3-f0ef79d050f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d77764d-de2c-4408-8882-77d03cb7288a, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=a8c2f2be-f25a-4512-b8a2-17b2b695ce52) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:15:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:06.663 162547 INFO neutron.agent.ovn.metadata.agent [-] Port a8c2f2be-f25a-4512-b8a2-17b2b695ce52 in datapath fbecee11-4892-4e36-88d8-98879af7bb1e unbound from our chassis#033[00m
Oct 14 05:15:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:06.665 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbecee11-4892-4e36-88d8-98879af7bb1e#033[00m
Oct 14 05:15:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:06.696 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[653f80f6-21ff-456c-b079-58ae70498335]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:06 np0005486808 nova_compute[259627]: 2025-10-14 09:15:06.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:06 np0005486808 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d00000066.scope: Deactivated successfully.
Oct 14 05:15:06 np0005486808 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d00000066.scope: Consumed 4.973s CPU time.
Oct 14 05:15:06 np0005486808 systemd-machined[214636]: Machine qemu-127-instance-00000066 terminated.
Oct 14 05:15:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:06.737 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c3d424f0-181f-4ab4-b514-c71862bb85c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:06.740 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1d807eaa-f7c7-4b88-b4bc-f4cccd57c4c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:06.772 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[240d9aee-76bc-493f-8c96-eba795ff227f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:06.795 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d74380b8-3da3-4b36-b92c-bafe066a94ee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbecee11-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:b3:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 17, 'rx_bytes': 958, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 17, 'rx_bytes': 958, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 700079, 'reachable_time': 38382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360436, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:06.817 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[140cc4e9-9740-4cc9-ad9c-b5209791ec95]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700095, 'tstamp': 700095}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360438, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700099, 'tstamp': 700099}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360438, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:06.819 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbecee11-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:15:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:06.826 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbecee11-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:15:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:06.827 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:15:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:06.827 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbecee11-40, col_values=(('external_ids', {'iface-id': 'b1a95e67-ca39-4a56-9a91-3165df166ea9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:15:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:06.827 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:15:06 np0005486808 nova_compute[259627]: 2025-10-14 09:15:06.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:06 np0005486808 nova_compute[259627]: 2025-10-14 09:15:06.832 2 INFO nova.virt.libvirt.driver [-] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Instance destroyed successfully.#033[00m
Oct 14 05:15:06 np0005486808 nova_compute[259627]: 2025-10-14 09:15:06.833 2 DEBUG nova.objects.instance [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'resources' on Instance uuid 84968701-f6c5-4798-888e-fa0f3311adca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:15:06 np0005486808 nova_compute[259627]: 2025-10-14 09:15:06.847 2 DEBUG nova.virt.libvirt.vif [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:14:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1896126591',display_name='tempest-ServersTestJSON-server-1896126591',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1896126591',id=102,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:15:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0a080fae2f3c4e39a6cca225203f5ec6',ramdisk_id='',reservation_id='r-g3f94nzf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min
_ram='0',owner_project_name='tempest-ServersTestJSON-2060951674',owner_user_name='tempest-ServersTestJSON-2060951674-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:15:02Z,user_data=None,user_id='a287ef08fc5c4f218bf06cd2c7ed021e',uuid=84968701-f6c5-4798-888e-fa0f3311adca,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a8c2f2be-f25a-4512-b8a2-17b2b695ce52", "address": "fa:16:3e:c1:58:a4", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c2f2be-f2", "ovs_interfaceid": "a8c2f2be-f25a-4512-b8a2-17b2b695ce52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:15:06 np0005486808 nova_compute[259627]: 2025-10-14 09:15:06.848 2 DEBUG nova.network.os_vif_util [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converting VIF {"id": "a8c2f2be-f25a-4512-b8a2-17b2b695ce52", "address": "fa:16:3e:c1:58:a4", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c2f2be-f2", "ovs_interfaceid": "a8c2f2be-f25a-4512-b8a2-17b2b695ce52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:15:06 np0005486808 nova_compute[259627]: 2025-10-14 09:15:06.849 2 DEBUG nova.network.os_vif_util [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:58:a4,bridge_name='br-int',has_traffic_filtering=True,id=a8c2f2be-f25a-4512-b8a2-17b2b695ce52,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8c2f2be-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:15:06 np0005486808 nova_compute[259627]: 2025-10-14 09:15:06.850 2 DEBUG os_vif [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:58:a4,bridge_name='br-int',has_traffic_filtering=True,id=a8c2f2be-f25a-4512-b8a2-17b2b695ce52,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8c2f2be-f2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:15:06 np0005486808 nova_compute[259627]: 2025-10-14 09:15:06.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:06 np0005486808 nova_compute[259627]: 2025-10-14 09:15:06.856 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa8c2f2be-f2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:15:06 np0005486808 nova_compute[259627]: 2025-10-14 09:15:06.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:06 np0005486808 nova_compute[259627]: 2025-10-14 09:15:06.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:06 np0005486808 nova_compute[259627]: 2025-10-14 09:15:06.863 2 INFO os_vif [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:58:a4,bridge_name='br-int',has_traffic_filtering=True,id=a8c2f2be-f25a-4512-b8a2-17b2b695ce52,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8c2f2be-f2')#033[00m
Oct 14 05:15:06 np0005486808 nova_compute[259627]: 2025-10-14 09:15:06.887 2 DEBUG oslo_concurrency.processutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:15:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:07.031 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:07.031 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:07.032 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:07 np0005486808 kernel: tap175a9914-00 (unregistering): left promiscuous mode
Oct 14 05:15:07 np0005486808 NetworkManager[44885]: <info>  [1760433307.3252] device (tap175a9914-00): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:15:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:15:07 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/71834738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:15:07 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:07Z|01084|binding|INFO|Releasing lport 175a9914-0068-4aeb-b4b6-d501212d3374 from this chassis (sb_readonly=0)
Oct 14 05:15:07 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:07Z|01085|binding|INFO|Setting lport 175a9914-0068-4aeb-b4b6-d501212d3374 down in Southbound
Oct 14 05:15:07 np0005486808 nova_compute[259627]: 2025-10-14 09:15:07.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:07 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:07Z|01086|binding|INFO|Removing iface tap175a9914-00 ovn-installed in OVS
Oct 14 05:15:07 np0005486808 nova_compute[259627]: 2025-10-14 09:15:07.389 2 INFO nova.virt.libvirt.driver [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Deleting instance files /var/lib/nova/instances/84968701-f6c5-4798-888e-fa0f3311adca_del#033[00m
Oct 14 05:15:07 np0005486808 nova_compute[259627]: 2025-10-14 09:15:07.390 2 INFO nova.virt.libvirt.driver [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Deletion of /var/lib/nova/instances/84968701-f6c5-4798-888e-fa0f3311adca_del complete#033[00m
Oct 14 05:15:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:07.386 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:d8:7e 10.100.0.7'], port_security=['fa:16:3e:6a:d8:7e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e1aea504-3ecf-4273-a867-66afb39de726', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8a53d982-8cf9-44ff-9b16-dda957aa7729', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4eae338e7d54d159033a20bc7460935', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9a0a095f-2308-407d-90d9-6acaa688f93c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab8a6447-c4e7-4ebf-b042-eb7d83ec88dc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=175a9914-0068-4aeb-b4b6-d501212d3374) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:15:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:07.390 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 175a9914-0068-4aeb-b4b6-d501212d3374 in datapath 8a53d982-8cf9-44ff-9b16-dda957aa7729 unbound from our chassis#033[00m
Oct 14 05:15:07 np0005486808 nova_compute[259627]: 2025-10-14 09:15:07.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:07.393 162547 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 8a53d982-8cf9-44ff-9b16-dda957aa7729 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct 14 05:15:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:07.394 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6986f78b-0c96-4ab8-a4b4-5506d934f910]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:07 np0005486808 nova_compute[259627]: 2025-10-14 09:15:07.398 2 DEBUG oslo_concurrency.processutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:15:07 np0005486808 nova_compute[259627]: 2025-10-14 09:15:07.402 2 DEBUG nova.compute.provider_tree [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:15:07 np0005486808 nova_compute[259627]: 2025-10-14 09:15:07.425 2 DEBUG nova.scheduler.client.report [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:15:07 np0005486808 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000065.scope: Deactivated successfully.
Oct 14 05:15:07 np0005486808 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000065.scope: Consumed 12.682s CPU time.
Oct 14 05:15:07 np0005486808 systemd-machined[214636]: Machine qemu-126-instance-00000065 terminated.
Oct 14 05:15:07 np0005486808 nova_compute[259627]: 2025-10-14 09:15:07.467 2 DEBUG oslo_concurrency.lockutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.816s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:07 np0005486808 nova_compute[259627]: 2025-10-14 09:15:07.467 2 DEBUG nova.compute.manager [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:15:07 np0005486808 nova_compute[259627]: 2025-10-14 09:15:07.475 2 INFO nova.compute.manager [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Took 0.89 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:15:07 np0005486808 nova_compute[259627]: 2025-10-14 09:15:07.475 2 DEBUG oslo.service.loopingcall [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:15:07 np0005486808 nova_compute[259627]: 2025-10-14 09:15:07.476 2 DEBUG nova.compute.manager [-] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:15:07 np0005486808 nova_compute[259627]: 2025-10-14 09:15:07.476 2 DEBUG nova.network.neutron [-] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:15:07 np0005486808 nova_compute[259627]: 2025-10-14 09:15:07.519 2 DEBUG nova.compute.manager [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:15:07 np0005486808 nova_compute[259627]: 2025-10-14 09:15:07.520 2 DEBUG nova.network.neutron [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:15:07 np0005486808 nova_compute[259627]: 2025-10-14 09:15:07.541 2 INFO nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:15:07 np0005486808 nova_compute[259627]: 2025-10-14 09:15:07.567 2 DEBUG nova.compute.manager [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:15:07 np0005486808 nova_compute[259627]: 2025-10-14 09:15:07.654 2 DEBUG nova.compute.manager [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:15:07 np0005486808 nova_compute[259627]: 2025-10-14 09:15:07.656 2 DEBUG nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:15:07 np0005486808 nova_compute[259627]: 2025-10-14 09:15:07.657 2 INFO nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Creating image(s)#033[00m
Oct 14 05:15:07 np0005486808 nova_compute[259627]: 2025-10-14 09:15:07.690 2 DEBUG nova.storage.rbd_utils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 33969555-fe06-4613-b244-d03c9b4180ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:15:07 np0005486808 nova_compute[259627]: 2025-10-14 09:15:07.719 2 DEBUG nova.storage.rbd_utils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 33969555-fe06-4613-b244-d03c9b4180ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:15:07 np0005486808 nova_compute[259627]: 2025-10-14 09:15:07.745 2 DEBUG nova.storage.rbd_utils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 33969555-fe06-4613-b244-d03c9b4180ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:15:07 np0005486808 nova_compute[259627]: 2025-10-14 09:15:07.749 2 DEBUG oslo_concurrency.processutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:15:07 np0005486808 nova_compute[259627]: 2025-10-14 09:15:07.802 2 DEBUG nova.compute.manager [req-08e0c450-60f5-4218-a742-8c24a8f4de39 req-74eb0634-365a-4513-bdc9-986a6c79eee2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Received event network-vif-unplugged-a8c2f2be-f25a-4512-b8a2-17b2b695ce52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:15:07 np0005486808 nova_compute[259627]: 2025-10-14 09:15:07.802 2 DEBUG oslo_concurrency.lockutils [req-08e0c450-60f5-4218-a742-8c24a8f4de39 req-74eb0634-365a-4513-bdc9-986a6c79eee2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "84968701-f6c5-4798-888e-fa0f3311adca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:07 np0005486808 nova_compute[259627]: 2025-10-14 09:15:07.802 2 DEBUG oslo_concurrency.lockutils [req-08e0c450-60f5-4218-a742-8c24a8f4de39 req-74eb0634-365a-4513-bdc9-986a6c79eee2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "84968701-f6c5-4798-888e-fa0f3311adca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:07 np0005486808 nova_compute[259627]: 2025-10-14 09:15:07.803 2 DEBUG oslo_concurrency.lockutils [req-08e0c450-60f5-4218-a742-8c24a8f4de39 req-74eb0634-365a-4513-bdc9-986a6c79eee2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "84968701-f6c5-4798-888e-fa0f3311adca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:07 np0005486808 nova_compute[259627]: 2025-10-14 09:15:07.803 2 DEBUG nova.compute.manager [req-08e0c450-60f5-4218-a742-8c24a8f4de39 req-74eb0634-365a-4513-bdc9-986a6c79eee2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] No waiting events found dispatching network-vif-unplugged-a8c2f2be-f25a-4512-b8a2-17b2b695ce52 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:15:07 np0005486808 nova_compute[259627]: 2025-10-14 09:15:07.803 2 DEBUG nova.compute.manager [req-08e0c450-60f5-4218-a742-8c24a8f4de39 req-74eb0634-365a-4513-bdc9-986a6c79eee2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Received event network-vif-unplugged-a8c2f2be-f25a-4512-b8a2-17b2b695ce52 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:15:07 np0005486808 nova_compute[259627]: 2025-10-14 09:15:07.803 2 DEBUG nova.compute.manager [req-08e0c450-60f5-4218-a742-8c24a8f4de39 req-74eb0634-365a-4513-bdc9-986a6c79eee2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Received event network-vif-plugged-a8c2f2be-f25a-4512-b8a2-17b2b695ce52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:15:07 np0005486808 nova_compute[259627]: 2025-10-14 09:15:07.803 2 DEBUG oslo_concurrency.lockutils [req-08e0c450-60f5-4218-a742-8c24a8f4de39 req-74eb0634-365a-4513-bdc9-986a6c79eee2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "84968701-f6c5-4798-888e-fa0f3311adca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:07 np0005486808 nova_compute[259627]: 2025-10-14 09:15:07.804 2 DEBUG oslo_concurrency.lockutils [req-08e0c450-60f5-4218-a742-8c24a8f4de39 req-74eb0634-365a-4513-bdc9-986a6c79eee2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "84968701-f6c5-4798-888e-fa0f3311adca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:07 np0005486808 nova_compute[259627]: 2025-10-14 09:15:07.804 2 DEBUG oslo_concurrency.lockutils [req-08e0c450-60f5-4218-a742-8c24a8f4de39 req-74eb0634-365a-4513-bdc9-986a6c79eee2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "84968701-f6c5-4798-888e-fa0f3311adca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:07 np0005486808 nova_compute[259627]: 2025-10-14 09:15:07.804 2 DEBUG nova.compute.manager [req-08e0c450-60f5-4218-a742-8c24a8f4de39 req-74eb0634-365a-4513-bdc9-986a6c79eee2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] No waiting events found dispatching network-vif-plugged-a8c2f2be-f25a-4512-b8a2-17b2b695ce52 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:15:07 np0005486808 nova_compute[259627]: 2025-10-14 09:15:07.804 2 WARNING nova.compute.manager [req-08e0c450-60f5-4218-a742-8c24a8f4de39 req-74eb0634-365a-4513-bdc9-986a6c79eee2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Received unexpected event network-vif-plugged-a8c2f2be-f25a-4512-b8a2-17b2b695ce52 for instance with vm_state active and task_state deleting.#033[00m
Oct 14 05:15:07 np0005486808 nova_compute[259627]: 2025-10-14 09:15:07.842 2 DEBUG oslo_concurrency.processutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:15:07 np0005486808 nova_compute[259627]: 2025-10-14 09:15:07.843 2 DEBUG oslo_concurrency.lockutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:07 np0005486808 nova_compute[259627]: 2025-10-14 09:15:07.843 2 DEBUG oslo_concurrency.lockutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:07 np0005486808 nova_compute[259627]: 2025-10-14 09:15:07.843 2 DEBUG oslo_concurrency.lockutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:15:07 np0005486808 nova_compute[259627]: 2025-10-14 09:15:07.866 2 DEBUG nova.storage.rbd_utils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 33969555-fe06-4613-b244-d03c9b4180ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:15:07 np0005486808 nova_compute[259627]: 2025-10-14 09:15:07.869 2 DEBUG oslo_concurrency.processutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 33969555-fe06-4613-b244-d03c9b4180ba_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:15:07 np0005486808 nova_compute[259627]: 2025-10-14 09:15:07.914 2 DEBUG nova.policy [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e992bcb79c4946a8985e3df25eb216ca', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2d24993a343a425dbddac7e32be0c86b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:15:07 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1826: 305 pgs: 305 active+clean; 326 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 202 op/s
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.036 2 INFO nova.virt.libvirt.driver [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Instance shutdown successfully after 13 seconds.#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.047 2 INFO nova.virt.libvirt.driver [-] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Instance destroyed successfully.#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.048 2 DEBUG nova.objects.instance [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lazy-loading 'numa_topology' on Instance uuid e1aea504-3ecf-4273-a867-66afb39de726 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.070 2 INFO nova.virt.libvirt.driver [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Attempting rescue#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.073 2 DEBUG nova.virt.libvirt.driver [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.085 2 DEBUG nova.virt.libvirt.driver [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.086 2 INFO nova.virt.libvirt.driver [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Creating image(s)#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.129 2 DEBUG nova.storage.rbd_utils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] rbd image e1aea504-3ecf-4273-a867-66afb39de726_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.134 2 DEBUG nova.objects.instance [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lazy-loading 'trusted_certs' on Instance uuid e1aea504-3ecf-4273-a867-66afb39de726 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.172 2 DEBUG nova.storage.rbd_utils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] rbd image e1aea504-3ecf-4273-a867-66afb39de726_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.203 2 DEBUG nova.storage.rbd_utils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] rbd image e1aea504-3ecf-4273-a867-66afb39de726_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.209 2 DEBUG oslo_concurrency.processutils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.253 2 DEBUG oslo_concurrency.processutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 33969555-fe06-4613-b244-d03c9b4180ba_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.384s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.312 2 DEBUG oslo_concurrency.processutils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.313 2 DEBUG oslo_concurrency.lockutils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.314 2 DEBUG oslo_concurrency.lockutils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.315 2 DEBUG oslo_concurrency.lockutils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.334 2 DEBUG nova.storage.rbd_utils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] rbd image e1aea504-3ecf-4273-a867-66afb39de726_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.339 2 DEBUG oslo_concurrency.processutils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 e1aea504-3ecf-4273-a867-66afb39de726_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.382 2 DEBUG nova.storage.rbd_utils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] resizing rbd image 33969555-fe06-4613-b244-d03c9b4180ba_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.532 2 DEBUG nova.network.neutron [-] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.560 2 DEBUG nova.objects.instance [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'migration_context' on Instance uuid 33969555-fe06-4613-b244-d03c9b4180ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.566 2 INFO nova.compute.manager [-] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Took 1.09 seconds to deallocate network for instance.#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.573 2 DEBUG nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.573 2 DEBUG nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Ensure instance console log exists: /var/lib/nova/instances/33969555-fe06-4613-b244-d03c9b4180ba/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.574 2 DEBUG oslo_concurrency.lockutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.574 2 DEBUG oslo_concurrency.lockutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.574 2 DEBUG oslo_concurrency.lockutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.604 2 DEBUG oslo_concurrency.lockutils [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.605 2 DEBUG oslo_concurrency.lockutils [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.653 2 DEBUG oslo_concurrency.processutils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 e1aea504-3ecf-4273-a867-66afb39de726_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.314s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.654 2 DEBUG nova.objects.instance [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lazy-loading 'migration_context' on Instance uuid e1aea504-3ecf-4273-a867-66afb39de726 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.667 2 DEBUG nova.virt.libvirt.driver [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.668 2 DEBUG nova.virt.libvirt.driver [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Start _get_guest_xml network_info=[{"id": "175a9914-0068-4aeb-b4b6-d501212d3374", "address": "fa:16:3e:6a:d8:7e", "network": {"id": "8a53d982-8cf9-44ff-9b16-dda957aa7729", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-827923963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-827923963-network", "vif_mac": "fa:16:3e:6a:d8:7e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c4eae338e7d54d159033a20bc7460935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap175a9914-00", "ovs_interfaceid": "175a9914-0068-4aeb-b4b6-d501212d3374", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.668 2 DEBUG nova.objects.instance [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lazy-loading 'resources' on Instance uuid e1aea504-3ecf-4273-a867-66afb39de726 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.697 2 WARNING nova.virt.libvirt.driver [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.704 2 DEBUG nova.virt.libvirt.host [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.705 2 DEBUG nova.virt.libvirt.host [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.708 2 DEBUG nova.virt.libvirt.host [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.708 2 DEBUG nova.virt.libvirt.host [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.709 2 DEBUG nova.virt.libvirt.driver [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.709 2 DEBUG nova.virt.hardware [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.710 2 DEBUG nova.virt.hardware [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.710 2 DEBUG nova.virt.hardware [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.710 2 DEBUG nova.virt.hardware [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.711 2 DEBUG nova.virt.hardware [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.711 2 DEBUG nova.virt.hardware [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.711 2 DEBUG nova.virt.hardware [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.712 2 DEBUG nova.virt.hardware [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.712 2 DEBUG nova.virt.hardware [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.712 2 DEBUG nova.virt.hardware [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.713 2 DEBUG nova.virt.hardware [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.713 2 DEBUG nova.objects.instance [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lazy-loading 'vcpu_model' on Instance uuid e1aea504-3ecf-4273-a867-66afb39de726 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.747 2 DEBUG oslo_concurrency.processutils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:15:08 np0005486808 nova_compute[259627]: 2025-10-14 09:15:08.800 2 DEBUG oslo_concurrency.processutils [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:15:09 np0005486808 nova_compute[259627]: 2025-10-14 09:15:09.106 2 DEBUG nova.network.neutron [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Successfully created port: 7e42bf44-c1f8-49df-bd5f-a26abe43a832 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:15:09 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:15:09 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1196220235' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:15:09 np0005486808 nova_compute[259627]: 2025-10-14 09:15:09.232 2 DEBUG oslo_concurrency.processutils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:15:09 np0005486808 nova_compute[259627]: 2025-10-14 09:15:09.234 2 DEBUG oslo_concurrency.processutils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:15:09 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:15:09 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3096615696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:15:09 np0005486808 nova_compute[259627]: 2025-10-14 09:15:09.326 2 DEBUG oslo_concurrency.processutils [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:15:09 np0005486808 nova_compute[259627]: 2025-10-14 09:15:09.334 2 DEBUG nova.compute.provider_tree [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:15:09 np0005486808 nova_compute[259627]: 2025-10-14 09:15:09.356 2 DEBUG nova.scheduler.client.report [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:15:09 np0005486808 nova_compute[259627]: 2025-10-14 09:15:09.378 2 DEBUG oslo_concurrency.lockutils [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.773s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:09 np0005486808 nova_compute[259627]: 2025-10-14 09:15:09.412 2 INFO nova.scheduler.client.report [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Deleted allocations for instance 84968701-f6c5-4798-888e-fa0f3311adca#033[00m
Oct 14 05:15:09 np0005486808 nova_compute[259627]: 2025-10-14 09:15:09.494 2 DEBUG oslo_concurrency.lockutils [None req-9c6db414-b762-435d-be46-52dfd71abe8b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "84968701-f6c5-4798-888e-fa0f3311adca" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.916s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:09 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:15:09 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2882457015' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:15:09 np0005486808 nova_compute[259627]: 2025-10-14 09:15:09.750 2 DEBUG oslo_concurrency.processutils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:15:09 np0005486808 nova_compute[259627]: 2025-10-14 09:15:09.751 2 DEBUG oslo_concurrency.processutils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:15:09 np0005486808 nova_compute[259627]: 2025-10-14 09:15:09.899 2 DEBUG nova.network.neutron [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Successfully updated port: 7e42bf44-c1f8-49df-bd5f-a26abe43a832 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:15:09 np0005486808 nova_compute[259627]: 2025-10-14 09:15:09.915 2 DEBUG oslo_concurrency.lockutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "refresh_cache-33969555-fe06-4613-b244-d03c9b4180ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:15:09 np0005486808 nova_compute[259627]: 2025-10-14 09:15:09.916 2 DEBUG oslo_concurrency.lockutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquired lock "refresh_cache-33969555-fe06-4613-b244-d03c9b4180ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:15:09 np0005486808 nova_compute[259627]: 2025-10-14 09:15:09.916 2 DEBUG nova.network.neutron [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:15:09 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1827: 305 pgs: 305 active+clean; 326 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 202 op/s
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.017 2 DEBUG nova.compute.manager [req-a63f6f15-84a3-4484-870a-975053a95875 req-489e0f03-94e6-48b9-9198-d7bd0fd84e5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Received event network-vif-unplugged-175a9914-0068-4aeb-b4b6-d501212d3374 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.018 2 DEBUG oslo_concurrency.lockutils [req-a63f6f15-84a3-4484-870a-975053a95875 req-489e0f03-94e6-48b9-9198-d7bd0fd84e5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e1aea504-3ecf-4273-a867-66afb39de726-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.019 2 DEBUG oslo_concurrency.lockutils [req-a63f6f15-84a3-4484-870a-975053a95875 req-489e0f03-94e6-48b9-9198-d7bd0fd84e5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.019 2 DEBUG oslo_concurrency.lockutils [req-a63f6f15-84a3-4484-870a-975053a95875 req-489e0f03-94e6-48b9-9198-d7bd0fd84e5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.019 2 DEBUG nova.compute.manager [req-a63f6f15-84a3-4484-870a-975053a95875 req-489e0f03-94e6-48b9-9198-d7bd0fd84e5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] No waiting events found dispatching network-vif-unplugged-175a9914-0068-4aeb-b4b6-d501212d3374 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.020 2 WARNING nova.compute.manager [req-a63f6f15-84a3-4484-870a-975053a95875 req-489e0f03-94e6-48b9-9198-d7bd0fd84e5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Received unexpected event network-vif-unplugged-175a9914-0068-4aeb-b4b6-d501212d3374 for instance with vm_state active and task_state rescuing.#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.020 2 DEBUG nova.compute.manager [req-a63f6f15-84a3-4484-870a-975053a95875 req-489e0f03-94e6-48b9-9198-d7bd0fd84e5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Received event network-vif-plugged-175a9914-0068-4aeb-b4b6-d501212d3374 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.021 2 DEBUG oslo_concurrency.lockutils [req-a63f6f15-84a3-4484-870a-975053a95875 req-489e0f03-94e6-48b9-9198-d7bd0fd84e5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e1aea504-3ecf-4273-a867-66afb39de726-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.021 2 DEBUG oslo_concurrency.lockutils [req-a63f6f15-84a3-4484-870a-975053a95875 req-489e0f03-94e6-48b9-9198-d7bd0fd84e5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.022 2 DEBUG oslo_concurrency.lockutils [req-a63f6f15-84a3-4484-870a-975053a95875 req-489e0f03-94e6-48b9-9198-d7bd0fd84e5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.022 2 DEBUG nova.compute.manager [req-a63f6f15-84a3-4484-870a-975053a95875 req-489e0f03-94e6-48b9-9198-d7bd0fd84e5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] No waiting events found dispatching network-vif-plugged-175a9914-0068-4aeb-b4b6-d501212d3374 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.023 2 WARNING nova.compute.manager [req-a63f6f15-84a3-4484-870a-975053a95875 req-489e0f03-94e6-48b9-9198-d7bd0fd84e5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Received unexpected event network-vif-plugged-175a9914-0068-4aeb-b4b6-d501212d3374 for instance with vm_state active and task_state rescuing.#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.023 2 DEBUG nova.compute.manager [req-a63f6f15-84a3-4484-870a-975053a95875 req-489e0f03-94e6-48b9-9198-d7bd0fd84e5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Received event network-vif-deleted-a8c2f2be-f25a-4512-b8a2-17b2b695ce52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.118 2 DEBUG nova.compute.manager [req-ac877969-e14f-410f-914b-e17918705262 req-cad6fe36-cc15-4060-ad85-367a9c6d90ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Received event network-changed-7e42bf44-c1f8-49df-bd5f-a26abe43a832 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.118 2 DEBUG nova.compute.manager [req-ac877969-e14f-410f-914b-e17918705262 req-cad6fe36-cc15-4060-ad85-367a9c6d90ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Refreshing instance network info cache due to event network-changed-7e42bf44-c1f8-49df-bd5f-a26abe43a832. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.118 2 DEBUG oslo_concurrency.lockutils [req-ac877969-e14f-410f-914b-e17918705262 req-cad6fe36-cc15-4060-ad85-367a9c6d90ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-33969555-fe06-4613-b244-d03c9b4180ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.175 2 DEBUG nova.network.neutron [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:15:10 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:15:10 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/502171530' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.194 2 DEBUG oslo_concurrency.processutils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.195 2 DEBUG nova.virt.libvirt.vif [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:14:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-1533575249',display_name='tempest-ServerRescueTestJSONUnderV235-server-1533575249',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-1533575249',id=101,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:14:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c4eae338e7d54d159033a20bc7460935',ramdisk_id='',reservation_id='r-tm6flkse',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-2037260230',owner_user_name='tempest-ServerRescueTestJSONUnderV235-2037260230-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:14:51Z,user_data=None,user_id='648aaa75d8974d439d8ebe331c3d6568',uuid=e1aea504-3ecf-4273-a867-66afb39de726,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "175a9914-0068-4aeb-b4b6-d501212d3374", "address": "fa:16:3e:6a:d8:7e", "network": {"id": "8a53d982-8cf9-44ff-9b16-dda957aa7729", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-827923963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-827923963-network", "vif_mac": "fa:16:3e:6a:d8:7e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c4eae338e7d54d159033a20bc7460935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap175a9914-00", "ovs_interfaceid": "175a9914-0068-4aeb-b4b6-d501212d3374", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.196 2 DEBUG nova.network.os_vif_util [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Converting VIF {"id": "175a9914-0068-4aeb-b4b6-d501212d3374", "address": "fa:16:3e:6a:d8:7e", "network": {"id": "8a53d982-8cf9-44ff-9b16-dda957aa7729", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-827923963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-827923963-network", "vif_mac": "fa:16:3e:6a:d8:7e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c4eae338e7d54d159033a20bc7460935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap175a9914-00", "ovs_interfaceid": "175a9914-0068-4aeb-b4b6-d501212d3374", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.196 2 DEBUG nova.network.os_vif_util [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6a:d8:7e,bridge_name='br-int',has_traffic_filtering=True,id=175a9914-0068-4aeb-b4b6-d501212d3374,network=Network(8a53d982-8cf9-44ff-9b16-dda957aa7729),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap175a9914-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.198 2 DEBUG nova.objects.instance [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lazy-loading 'pci_devices' on Instance uuid e1aea504-3ecf-4273-a867-66afb39de726 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.215 2 DEBUG nova.virt.libvirt.driver [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:15:10 np0005486808 nova_compute[259627]:  <uuid>e1aea504-3ecf-4273-a867-66afb39de726</uuid>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:  <name>instance-00000065</name>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:15:10 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:      <nova:name>tempest-ServerRescueTestJSONUnderV235-server-1533575249</nova:name>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:15:08</nova:creationTime>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:15:10 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:        <nova:user uuid="648aaa75d8974d439d8ebe331c3d6568">tempest-ServerRescueTestJSONUnderV235-2037260230-project-member</nova:user>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:        <nova:project uuid="c4eae338e7d54d159033a20bc7460935">tempest-ServerRescueTestJSONUnderV235-2037260230</nova:project>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:        <nova:port uuid="175a9914-0068-4aeb-b4b6-d501212d3374">
Oct 14 05:15:10 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:      <entry name="serial">e1aea504-3ecf-4273-a867-66afb39de726</entry>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:      <entry name="uuid">e1aea504-3ecf-4273-a867-66afb39de726</entry>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:15:10 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/e1aea504-3ecf-4273-a867-66afb39de726_disk.rescue">
Oct 14 05:15:10 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:15:10 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:15:10 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/e1aea504-3ecf-4273-a867-66afb39de726_disk">
Oct 14 05:15:10 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:15:10 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:      <target dev="vdb" bus="virtio"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:15:10 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/e1aea504-3ecf-4273-a867-66afb39de726_disk.config.rescue">
Oct 14 05:15:10 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:15:10 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:15:10 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:6a:d8:7e"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:      <target dev="tap175a9914-00"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:15:10 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/e1aea504-3ecf-4273-a867-66afb39de726/console.log" append="off"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:15:10 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:15:10 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:15:10 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:15:10 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:15:10 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.225 2 INFO nova.virt.libvirt.driver [-] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Instance destroyed successfully.#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.281 2 DEBUG nova.virt.libvirt.driver [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.282 2 DEBUG nova.virt.libvirt.driver [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.282 2 DEBUG nova.virt.libvirt.driver [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.283 2 DEBUG nova.virt.libvirt.driver [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] No VIF found with MAC fa:16:3e:6a:d8:7e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.284 2 INFO nova.virt.libvirt.driver [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Using config drive#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.315 2 DEBUG nova.storage.rbd_utils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] rbd image e1aea504-3ecf-4273-a867-66afb39de726_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.331 2 DEBUG nova.objects.instance [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lazy-loading 'ec2_ids' on Instance uuid e1aea504-3ecf-4273-a867-66afb39de726 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.359 2 DEBUG nova.objects.instance [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lazy-loading 'keypairs' on Instance uuid e1aea504-3ecf-4273-a867-66afb39de726 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.440 2 DEBUG oslo_concurrency.lockutils [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "da40c115-048e-4844-812e-7e65e25bfb3f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.441 2 DEBUG oslo_concurrency.lockutils [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "da40c115-048e-4844-812e-7e65e25bfb3f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.441 2 DEBUG oslo_concurrency.lockutils [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "da40c115-048e-4844-812e-7e65e25bfb3f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.442 2 DEBUG oslo_concurrency.lockutils [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "da40c115-048e-4844-812e-7e65e25bfb3f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.442 2 DEBUG oslo_concurrency.lockutils [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "da40c115-048e-4844-812e-7e65e25bfb3f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.445 2 INFO nova.compute.manager [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Terminating instance#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.446 2 DEBUG nova.compute.manager [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:15:10 np0005486808 kernel: tapc2ac8abe-0e (unregistering): left promiscuous mode
Oct 14 05:15:10 np0005486808 NetworkManager[44885]: <info>  [1760433310.5134] device (tapc2ac8abe-0e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:10 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:10Z|01087|binding|INFO|Releasing lport c2ac8abe-0e61-4769-9529-3b391568e6b9 from this chassis (sb_readonly=0)
Oct 14 05:15:10 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:10Z|01088|binding|INFO|Setting lport c2ac8abe-0e61-4769-9529-3b391568e6b9 down in Southbound
Oct 14 05:15:10 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:10Z|01089|binding|INFO|Removing iface tapc2ac8abe-0e ovn-installed in OVS
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:10.533 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fa:70:22 10.100.0.3'], port_security=['fa:16:3e:fa:70:22 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'da40c115-048e-4844-812e-7e65e25bfb3f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbecee11-4892-4e36-88d8-98879af7bb1e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a080fae2f3c4e39a6cca225203f5ec6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '03d7ceab-aac8-44ec-88a3-f0ef79d050f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d77764d-de2c-4408-8882-77d03cb7288a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=c2ac8abe-0e61-4769-9529-3b391568e6b9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:15:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:10.535 162547 INFO neutron.agent.ovn.metadata.agent [-] Port c2ac8abe-0e61-4769-9529-3b391568e6b9 in datapath fbecee11-4892-4e36-88d8-98879af7bb1e unbound from our chassis#033[00m
Oct 14 05:15:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:10.537 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbecee11-4892-4e36-88d8-98879af7bb1e#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:10.558 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4d9b77d0-a7dc-438a-91f8-6c0d9e638eb0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:10 np0005486808 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d00000064.scope: Deactivated successfully.
Oct 14 05:15:10 np0005486808 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d00000064.scope: Consumed 12.963s CPU time.
Oct 14 05:15:10 np0005486808 systemd-machined[214636]: Machine qemu-125-instance-00000064 terminated.
Oct 14 05:15:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:10.592 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[6bf6199f-ab03-435d-ab16-30479f3dc9f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:10.598 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[6cdc0afa-d111-4904-a150-ad6cd5f2e8b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:10.624 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[80183ec1-f7f1-4cdd-a5ed-611efc31905b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:10.644 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8bada914-ed13-4746-af92-f2e528ca3b07]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbecee11-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:b3:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 19, 'rx_bytes': 1000, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 19, 'rx_bytes': 1000, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 700079, 'reachable_time': 38382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360885, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:10.660 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2222d4f9-b14d-43a2-ac28-885f91de3ebf]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700095, 'tstamp': 700095}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360886, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700099, 'tstamp': 700099}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360886, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:10.662 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbecee11-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.720 2 INFO nova.virt.libvirt.driver [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Creating config drive at /var/lib/nova/instances/e1aea504-3ecf-4273-a867-66afb39de726/disk.config.rescue#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.725 2 DEBUG oslo_concurrency.processutils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e1aea504-3ecf-4273-a867-66afb39de726/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgldtc1i2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:15:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:10.727 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbecee11-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:15:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:10.728 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:15:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:10.728 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbecee11-40, col_values=(('external_ids', {'iface-id': 'b1a95e67-ca39-4a56-9a91-3165df166ea9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:15:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:10.728 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.774 2 INFO nova.virt.libvirt.driver [-] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Instance destroyed successfully.#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.775 2 DEBUG nova.objects.instance [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'resources' on Instance uuid da40c115-048e-4844-812e-7e65e25bfb3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.788 2 DEBUG nova.virt.libvirt.vif [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:14:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1896126591',display_name='tempest-ServersTestJSON-server-1896126591',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1896126591',id=100,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:14:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0a080fae2f3c4e39a6cca225203f5ec6',ramdisk_id='',reservation_id='r-95sxw262',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min
_ram='0',owner_project_name='tempest-ServersTestJSON-2060951674',owner_user_name='tempest-ServersTestJSON-2060951674-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:14:47Z,user_data=None,user_id='a287ef08fc5c4f218bf06cd2c7ed021e',uuid=da40c115-048e-4844-812e-7e65e25bfb3f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c2ac8abe-0e61-4769-9529-3b391568e6b9", "address": "fa:16:3e:fa:70:22", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2ac8abe-0e", "ovs_interfaceid": "c2ac8abe-0e61-4769-9529-3b391568e6b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.788 2 DEBUG nova.network.os_vif_util [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converting VIF {"id": "c2ac8abe-0e61-4769-9529-3b391568e6b9", "address": "fa:16:3e:fa:70:22", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2ac8abe-0e", "ovs_interfaceid": "c2ac8abe-0e61-4769-9529-3b391568e6b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.789 2 DEBUG nova.network.os_vif_util [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fa:70:22,bridge_name='br-int',has_traffic_filtering=True,id=c2ac8abe-0e61-4769-9529-3b391568e6b9,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2ac8abe-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.789 2 DEBUG os_vif [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:70:22,bridge_name='br-int',has_traffic_filtering=True,id=c2ac8abe-0e61-4769-9529-3b391568e6b9,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2ac8abe-0e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.791 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc2ac8abe-0e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.798 2 INFO os_vif [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:70:22,bridge_name='br-int',has_traffic_filtering=True,id=c2ac8abe-0e61-4769-9529-3b391568e6b9,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2ac8abe-0e')#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.878 2 DEBUG oslo_concurrency.processutils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e1aea504-3ecf-4273-a867-66afb39de726/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgldtc1i2" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.900 2 DEBUG nova.storage.rbd_utils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] rbd image e1aea504-3ecf-4273-a867-66afb39de726_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:15:10 np0005486808 nova_compute[259627]: 2025-10-14 09:15:10.905 2 DEBUG oslo_concurrency.processutils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e1aea504-3ecf-4273-a867-66afb39de726/disk.config.rescue e1aea504-3ecf-4273-a867-66afb39de726_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:15:11 np0005486808 nova_compute[259627]: 2025-10-14 09:15:11.094 2 DEBUG oslo_concurrency.processutils [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e1aea504-3ecf-4273-a867-66afb39de726/disk.config.rescue e1aea504-3ecf-4273-a867-66afb39de726_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:15:11 np0005486808 nova_compute[259627]: 2025-10-14 09:15:11.095 2 INFO nova.virt.libvirt.driver [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Deleting local config drive /var/lib/nova/instances/e1aea504-3ecf-4273-a867-66afb39de726/disk.config.rescue because it was imported into RBD.#033[00m
Oct 14 05:15:11 np0005486808 kernel: tap175a9914-00: entered promiscuous mode
Oct 14 05:15:11 np0005486808 NetworkManager[44885]: <info>  [1760433311.1460] manager: (tap175a9914-00): new Tun device (/org/freedesktop/NetworkManager/Devices/438)
Oct 14 05:15:11 np0005486808 systemd-udevd[360876]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:15:11 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:11Z|01090|binding|INFO|Claiming lport 175a9914-0068-4aeb-b4b6-d501212d3374 for this chassis.
Oct 14 05:15:11 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:11Z|01091|binding|INFO|175a9914-0068-4aeb-b4b6-d501212d3374: Claiming fa:16:3e:6a:d8:7e 10.100.0.7
Oct 14 05:15:11 np0005486808 nova_compute[259627]: 2025-10-14 09:15:11.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:11.154 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:d8:7e 10.100.0.7'], port_security=['fa:16:3e:6a:d8:7e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e1aea504-3ecf-4273-a867-66afb39de726', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8a53d982-8cf9-44ff-9b16-dda957aa7729', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4eae338e7d54d159033a20bc7460935', 'neutron:revision_number': '5', 'neutron:security_group_ids': '9a0a095f-2308-407d-90d9-6acaa688f93c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab8a6447-c4e7-4ebf-b042-eb7d83ec88dc, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=175a9914-0068-4aeb-b4b6-d501212d3374) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:15:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:11.155 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 175a9914-0068-4aeb-b4b6-d501212d3374 in datapath 8a53d982-8cf9-44ff-9b16-dda957aa7729 bound to our chassis#033[00m
Oct 14 05:15:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:11.156 162547 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 8a53d982-8cf9-44ff-9b16-dda957aa7729 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct 14 05:15:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:11.156 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[511e92de-060d-4e2b-8e29-223aa224c47e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:11 np0005486808 NetworkManager[44885]: <info>  [1760433311.1619] device (tap175a9914-00): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:15:11 np0005486808 NetworkManager[44885]: <info>  [1760433311.1627] device (tap175a9914-00): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:15:11 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:11Z|01092|binding|INFO|Setting lport 175a9914-0068-4aeb-b4b6-d501212d3374 up in Southbound
Oct 14 05:15:11 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:11Z|01093|binding|INFO|Setting lport 175a9914-0068-4aeb-b4b6-d501212d3374 ovn-installed in OVS
Oct 14 05:15:11 np0005486808 nova_compute[259627]: 2025-10-14 09:15:11.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:11 np0005486808 nova_compute[259627]: 2025-10-14 09:15:11.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:11 np0005486808 nova_compute[259627]: 2025-10-14 09:15:11.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:11 np0005486808 systemd-machined[214636]: New machine qemu-128-instance-00000065.
Oct 14 05:15:11 np0005486808 nova_compute[259627]: 2025-10-14 09:15:11.184 2 INFO nova.virt.libvirt.driver [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Deleting instance files /var/lib/nova/instances/da40c115-048e-4844-812e-7e65e25bfb3f_del#033[00m
Oct 14 05:15:11 np0005486808 nova_compute[259627]: 2025-10-14 09:15:11.185 2 INFO nova.virt.libvirt.driver [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Deletion of /var/lib/nova/instances/da40c115-048e-4844-812e-7e65e25bfb3f_del complete#033[00m
Oct 14 05:15:11 np0005486808 systemd[1]: Started Virtual Machine qemu-128-instance-00000065.
Oct 14 05:15:11 np0005486808 nova_compute[259627]: 2025-10-14 09:15:11.230 2 INFO nova.compute.manager [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:15:11 np0005486808 nova_compute[259627]: 2025-10-14 09:15:11.231 2 DEBUG oslo.service.loopingcall [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:15:11 np0005486808 nova_compute[259627]: 2025-10-14 09:15:11.232 2 DEBUG nova.compute.manager [-] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:15:11 np0005486808 nova_compute[259627]: 2025-10-14 09:15:11.232 2 DEBUG nova.network.neutron [-] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:15:11 np0005486808 nova_compute[259627]: 2025-10-14 09:15:11.690 2 DEBUG nova.network.neutron [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Updating instance_info_cache with network_info: [{"id": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "address": "fa:16:3e:20:8d:8f", "network": {"id": "7f4225de-9f3f-48e2-bad7-a89cf4884a2e", "bridge": "br-int", "label": "tempest-network-smoke--1728644856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e42bf44-c1", "ovs_interfaceid": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:15:11 np0005486808 nova_compute[259627]: 2025-10-14 09:15:11.723 2 DEBUG oslo_concurrency.lockutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Releasing lock "refresh_cache-33969555-fe06-4613-b244-d03c9b4180ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:15:11 np0005486808 nova_compute[259627]: 2025-10-14 09:15:11.724 2 DEBUG nova.compute.manager [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Instance network_info: |[{"id": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "address": "fa:16:3e:20:8d:8f", "network": {"id": "7f4225de-9f3f-48e2-bad7-a89cf4884a2e", "bridge": "br-int", "label": "tempest-network-smoke--1728644856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e42bf44-c1", "ovs_interfaceid": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:15:11 np0005486808 nova_compute[259627]: 2025-10-14 09:15:11.725 2 DEBUG oslo_concurrency.lockutils [req-ac877969-e14f-410f-914b-e17918705262 req-cad6fe36-cc15-4060-ad85-367a9c6d90ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-33969555-fe06-4613-b244-d03c9b4180ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:15:11 np0005486808 nova_compute[259627]: 2025-10-14 09:15:11.726 2 DEBUG nova.network.neutron [req-ac877969-e14f-410f-914b-e17918705262 req-cad6fe36-cc15-4060-ad85-367a9c6d90ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Refreshing network info cache for port 7e42bf44-c1f8-49df-bd5f-a26abe43a832 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:15:11 np0005486808 nova_compute[259627]: 2025-10-14 09:15:11.732 2 DEBUG nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Start _get_guest_xml network_info=[{"id": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "address": "fa:16:3e:20:8d:8f", "network": {"id": "7f4225de-9f3f-48e2-bad7-a89cf4884a2e", "bridge": "br-int", "label": "tempest-network-smoke--1728644856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e42bf44-c1", "ovs_interfaceid": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:15:11 np0005486808 nova_compute[259627]: 2025-10-14 09:15:11.739 2 WARNING nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:15:11 np0005486808 nova_compute[259627]: 2025-10-14 09:15:11.746 2 DEBUG nova.virt.libvirt.host [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:15:11 np0005486808 nova_compute[259627]: 2025-10-14 09:15:11.747 2 DEBUG nova.virt.libvirt.host [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:15:11 np0005486808 nova_compute[259627]: 2025-10-14 09:15:11.755 2 DEBUG nova.virt.libvirt.host [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:15:11 np0005486808 nova_compute[259627]: 2025-10-14 09:15:11.756 2 DEBUG nova.virt.libvirt.host [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:15:11 np0005486808 nova_compute[259627]: 2025-10-14 09:15:11.757 2 DEBUG nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:15:11 np0005486808 nova_compute[259627]: 2025-10-14 09:15:11.758 2 DEBUG nova.virt.hardware [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:15:11 np0005486808 nova_compute[259627]: 2025-10-14 09:15:11.759 2 DEBUG nova.virt.hardware [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:15:11 np0005486808 nova_compute[259627]: 2025-10-14 09:15:11.759 2 DEBUG nova.virt.hardware [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:15:11 np0005486808 nova_compute[259627]: 2025-10-14 09:15:11.760 2 DEBUG nova.virt.hardware [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:15:11 np0005486808 nova_compute[259627]: 2025-10-14 09:15:11.761 2 DEBUG nova.virt.hardware [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:15:11 np0005486808 nova_compute[259627]: 2025-10-14 09:15:11.761 2 DEBUG nova.virt.hardware [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:15:11 np0005486808 nova_compute[259627]: 2025-10-14 09:15:11.762 2 DEBUG nova.virt.hardware [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:15:11 np0005486808 nova_compute[259627]: 2025-10-14 09:15:11.763 2 DEBUG nova.virt.hardware [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:15:11 np0005486808 nova_compute[259627]: 2025-10-14 09:15:11.763 2 DEBUG nova.virt.hardware [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:15:11 np0005486808 nova_compute[259627]: 2025-10-14 09:15:11.764 2 DEBUG nova.virt.hardware [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:15:11 np0005486808 nova_compute[259627]: 2025-10-14 09:15:11.764 2 DEBUG nova.virt.hardware [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:15:11 np0005486808 nova_compute[259627]: 2025-10-14 09:15:11.770 2 DEBUG oslo_concurrency.processutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:15:11 np0005486808 nova_compute[259627]: 2025-10-14 09:15:11.922 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for e1aea504-3ecf-4273-a867-66afb39de726 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct 14 05:15:11 np0005486808 nova_compute[259627]: 2025-10-14 09:15:11.923 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433311.9219284, e1aea504-3ecf-4273-a867-66afb39de726 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:15:11 np0005486808 nova_compute[259627]: 2025-10-14 09:15:11.924 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e1aea504-3ecf-4273-a867-66afb39de726] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:15:11 np0005486808 nova_compute[259627]: 2025-10-14 09:15:11.933 2 DEBUG nova.compute.manager [None req-2554d05e-3d1b-46dd-8268-597ef65871ad 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:15:11 np0005486808 nova_compute[259627]: 2025-10-14 09:15:11.978 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:15:11 np0005486808 nova_compute[259627]: 2025-10-14 09:15:11.982 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:15:11 np0005486808 nova_compute[259627]: 2025-10-14 09:15:11.984 2 DEBUG nova.network.neutron [-] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:15:11 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1828: 305 pgs: 305 active+clean; 372 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 7.8 MiB/s wr, 273 op/s
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.033 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e1aea504-3ecf-4273-a867-66afb39de726] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.034 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433311.9241364, e1aea504-3ecf-4273-a867-66afb39de726 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.034 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e1aea504-3ecf-4273-a867-66afb39de726] VM Started (Lifecycle Event)#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.035 2 INFO nova.compute.manager [-] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Took 0.80 seconds to deallocate network for instance.#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.074 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.077 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.116 2 DEBUG oslo_concurrency.lockutils [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.117 2 DEBUG oslo_concurrency.lockutils [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.150 2 DEBUG nova.compute.manager [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Received event network-vif-unplugged-c2ac8abe-0e61-4769-9529-3b391568e6b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.150 2 DEBUG oslo_concurrency.lockutils [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "da40c115-048e-4844-812e-7e65e25bfb3f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.151 2 DEBUG oslo_concurrency.lockutils [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "da40c115-048e-4844-812e-7e65e25bfb3f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.151 2 DEBUG oslo_concurrency.lockutils [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "da40c115-048e-4844-812e-7e65e25bfb3f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.151 2 DEBUG nova.compute.manager [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] No waiting events found dispatching network-vif-unplugged-c2ac8abe-0e61-4769-9529-3b391568e6b9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.151 2 WARNING nova.compute.manager [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Received unexpected event network-vif-unplugged-c2ac8abe-0e61-4769-9529-3b391568e6b9 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.151 2 DEBUG nova.compute.manager [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Received event network-vif-plugged-c2ac8abe-0e61-4769-9529-3b391568e6b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.152 2 DEBUG oslo_concurrency.lockutils [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "da40c115-048e-4844-812e-7e65e25bfb3f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.152 2 DEBUG oslo_concurrency.lockutils [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "da40c115-048e-4844-812e-7e65e25bfb3f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.152 2 DEBUG oslo_concurrency.lockutils [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "da40c115-048e-4844-812e-7e65e25bfb3f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.152 2 DEBUG nova.compute.manager [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] No waiting events found dispatching network-vif-plugged-c2ac8abe-0e61-4769-9529-3b391568e6b9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.153 2 WARNING nova.compute.manager [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Received unexpected event network-vif-plugged-c2ac8abe-0e61-4769-9529-3b391568e6b9 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.153 2 DEBUG nova.compute.manager [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Received event network-vif-plugged-175a9914-0068-4aeb-b4b6-d501212d3374 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.153 2 DEBUG oslo_concurrency.lockutils [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e1aea504-3ecf-4273-a867-66afb39de726-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.153 2 DEBUG oslo_concurrency.lockutils [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.153 2 DEBUG oslo_concurrency.lockutils [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.154 2 DEBUG nova.compute.manager [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] No waiting events found dispatching network-vif-plugged-175a9914-0068-4aeb-b4b6-d501212d3374 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.154 2 WARNING nova.compute.manager [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Received unexpected event network-vif-plugged-175a9914-0068-4aeb-b4b6-d501212d3374 for instance with vm_state rescued and task_state None.#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.154 2 DEBUG nova.compute.manager [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Received event network-vif-plugged-175a9914-0068-4aeb-b4b6-d501212d3374 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.154 2 DEBUG oslo_concurrency.lockutils [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e1aea504-3ecf-4273-a867-66afb39de726-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.154 2 DEBUG oslo_concurrency.lockutils [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.154 2 DEBUG oslo_concurrency.lockutils [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.155 2 DEBUG nova.compute.manager [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] No waiting events found dispatching network-vif-plugged-175a9914-0068-4aeb-b4b6-d501212d3374 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.155 2 WARNING nova.compute.manager [req-3d675a49-b46d-4091-abca-3a8ad6fac401 req-5a56eb76-3941-49fe-ae85-0a1deac9e47c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Received unexpected event network-vif-plugged-175a9914-0068-4aeb-b4b6-d501212d3374 for instance with vm_state rescued and task_state None.#033[00m
Oct 14 05:15:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:15:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2806722944' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.222 2 DEBUG nova.compute.manager [req-9e638318-3838-4815-aef5-4ccea4db8040 req-b9b494d6-304c-4d9c-9167-198669497f57 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Received event network-vif-deleted-c2ac8abe-0e61-4769-9529-3b391568e6b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.226 2 DEBUG oslo_concurrency.processutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.258 2 DEBUG nova.storage.rbd_utils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 33969555-fe06-4613-b244-d03c9b4180ba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.265 2 DEBUG oslo_concurrency.processutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.318 2 DEBUG oslo_concurrency.processutils [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:15:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:15:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/478511998' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.805 2 DEBUG oslo_concurrency.processutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:15:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:15:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3817994809' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.808 2 DEBUG nova.virt.libvirt.vif [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:15:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-383304321',display_name='tempest-TestNetworkAdvancedServerOps-server-383304321',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-383304321',id=103,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNrPP24QakCw1fsyp4n1/agXWsNdRHbY8VHQeER4RKvAc+m7J2Hp3JOcegzdlhA2wYneNiK6O1lPBfYB/HncbHtqlMdS+KcigG2/AdZVDhSvn9p2YfNQ60fauvNg9v/ylQ==',key_name='tempest-TestNetworkAdvancedServerOps-1100962189',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d24993a343a425dbddac7e32be0c86b',ramdisk_id='',reservation_id='r-xg2zg09k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-94788416',owner_user_name='tempest-TestNetworkAdvancedServerOps-94788416-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:15:07Z,user_data=None,user_id='e992bcb79c4946a8985e3df25eb216ca',uuid=33969555-fe06-4613-b244-d03c9b4180ba,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "address": "fa:16:3e:20:8d:8f", "network": {"id": "7f4225de-9f3f-48e2-bad7-a89cf4884a2e", "bridge": "br-int", "label": "tempest-network-smoke--1728644856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e42bf44-c1", "ovs_interfaceid": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.809 2 DEBUG nova.network.os_vif_util [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converting VIF {"id": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "address": "fa:16:3e:20:8d:8f", "network": {"id": "7f4225de-9f3f-48e2-bad7-a89cf4884a2e", "bridge": "br-int", "label": "tempest-network-smoke--1728644856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e42bf44-c1", "ovs_interfaceid": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.811 2 DEBUG nova.network.os_vif_util [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:8d:8f,bridge_name='br-int',has_traffic_filtering=True,id=7e42bf44-c1f8-49df-bd5f-a26abe43a832,network=Network(7f4225de-9f3f-48e2-bad7-a89cf4884a2e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e42bf44-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.814 2 DEBUG nova.objects.instance [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'pci_devices' on Instance uuid 33969555-fe06-4613-b244-d03c9b4180ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.826 2 DEBUG oslo_concurrency.processutils [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.833 2 DEBUG nova.compute.provider_tree [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.840 2 DEBUG nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:15:12 np0005486808 nova_compute[259627]:  <uuid>33969555-fe06-4613-b244-d03c9b4180ba</uuid>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:  <name>instance-00000067</name>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:15:12 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-383304321</nova:name>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:15:11</nova:creationTime>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:15:12 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:        <nova:user uuid="e992bcb79c4946a8985e3df25eb216ca">tempest-TestNetworkAdvancedServerOps-94788416-project-member</nova:user>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:        <nova:project uuid="2d24993a343a425dbddac7e32be0c86b">tempest-TestNetworkAdvancedServerOps-94788416</nova:project>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:        <nova:port uuid="7e42bf44-c1f8-49df-bd5f-a26abe43a832">
Oct 14 05:15:12 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:      <entry name="serial">33969555-fe06-4613-b244-d03c9b4180ba</entry>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:      <entry name="uuid">33969555-fe06-4613-b244-d03c9b4180ba</entry>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:15:12 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/33969555-fe06-4613-b244-d03c9b4180ba_disk">
Oct 14 05:15:12 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:15:12 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:15:12 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/33969555-fe06-4613-b244-d03c9b4180ba_disk.config">
Oct 14 05:15:12 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:15:12 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:15:12 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:20:8d:8f"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:      <target dev="tap7e42bf44-c1"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:15:12 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/33969555-fe06-4613-b244-d03c9b4180ba/console.log" append="off"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:15:12 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:15:12 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:15:12 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:15:12 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:15:12 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.843 2 DEBUG nova.compute.manager [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Preparing to wait for external event network-vif-plugged-7e42bf44-c1f8-49df-bd5f-a26abe43a832 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.843 2 DEBUG oslo_concurrency.lockutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "33969555-fe06-4613-b244-d03c9b4180ba-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.844 2 DEBUG oslo_concurrency.lockutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.844 2 DEBUG oslo_concurrency.lockutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.846 2 DEBUG nova.virt.libvirt.vif [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:15:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-383304321',display_name='tempest-TestNetworkAdvancedServerOps-server-383304321',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-383304321',id=103,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNrPP24QakCw1fsyp4n1/agXWsNdRHbY8VHQeER4RKvAc+m7J2Hp3JOcegzdlhA2wYneNiK6O1lPBfYB/HncbHtqlMdS+KcigG2/AdZVDhSvn9p2YfNQ60fauvNg9v/ylQ==',key_name='tempest-TestNetworkAdvancedServerOps-1100962189',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d24993a343a425dbddac7e32be0c86b',ramdisk_id='',reservation_id='r-xg2zg09k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-94788416',owner_user_name='tempest-TestNetworkAdvancedServerOps-94788416-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:15:07Z,user_data=None,user_id='e992bcb79c4946a8985e3df25eb216ca',uuid=33969555-fe06-4613-b244-d03c9b4180ba,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "address": "fa:16:3e:20:8d:8f", "network": {"id": "7f4225de-9f3f-48e2-bad7-a89cf4884a2e", "bridge": "br-int", "label": "tempest-network-smoke--1728644856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e42bf44-c1", "ovs_interfaceid": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.846 2 DEBUG nova.network.os_vif_util [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converting VIF {"id": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "address": "fa:16:3e:20:8d:8f", "network": {"id": "7f4225de-9f3f-48e2-bad7-a89cf4884a2e", "bridge": "br-int", "label": "tempest-network-smoke--1728644856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e42bf44-c1", "ovs_interfaceid": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.848 2 DEBUG nova.network.os_vif_util [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:8d:8f,bridge_name='br-int',has_traffic_filtering=True,id=7e42bf44-c1f8-49df-bd5f-a26abe43a832,network=Network(7f4225de-9f3f-48e2-bad7-a89cf4884a2e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e42bf44-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.848 2 DEBUG os_vif [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:8d:8f,bridge_name='br-int',has_traffic_filtering=True,id=7e42bf44-c1f8-49df-bd5f-a26abe43a832,network=Network(7f4225de-9f3f-48e2-bad7-a89cf4884a2e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e42bf44-c1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.850 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.851 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.854 2 DEBUG nova.scheduler.client.report [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:15:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.864 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7e42bf44-c1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.865 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7e42bf44-c1, col_values=(('external_ids', {'iface-id': '7e42bf44-c1f8-49df-bd5f-a26abe43a832', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:20:8d:8f', 'vm-uuid': '33969555-fe06-4613-b244-d03c9b4180ba'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.887 2 DEBUG oslo_concurrency.lockutils [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:12 np0005486808 NetworkManager[44885]: <info>  [1760433312.8997] manager: (tap7e42bf44-c1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/439)
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.904 2 INFO os_vif [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:8d:8f,bridge_name='br-int',has_traffic_filtering=True,id=7e42bf44-c1f8-49df-bd5f-a26abe43a832,network=Network(7f4225de-9f3f-48e2-bad7-a89cf4884a2e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e42bf44-c1')#033[00m
Oct 14 05:15:12 np0005486808 nova_compute[259627]: 2025-10-14 09:15:12.907 2 INFO nova.scheduler.client.report [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Deleted allocations for instance da40c115-048e-4844-812e-7e65e25bfb3f#033[00m
Oct 14 05:15:13 np0005486808 nova_compute[259627]: 2025-10-14 09:15:13.006 2 DEBUG oslo_concurrency.lockutils [None req-902c4c74-8bbc-4e71-802e-7f5c9537428b a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "da40c115-048e-4844-812e-7e65e25bfb3f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:13 np0005486808 nova_compute[259627]: 2025-10-14 09:15:13.018 2 DEBUG nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:15:13 np0005486808 nova_compute[259627]: 2025-10-14 09:15:13.019 2 DEBUG nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:15:13 np0005486808 nova_compute[259627]: 2025-10-14 09:15:13.019 2 DEBUG nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] No VIF found with MAC fa:16:3e:20:8d:8f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:15:13 np0005486808 nova_compute[259627]: 2025-10-14 09:15:13.021 2 INFO nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Using config drive#033[00m
Oct 14 05:15:13 np0005486808 nova_compute[259627]: 2025-10-14 09:15:13.053 2 DEBUG nova.storage.rbd_utils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 33969555-fe06-4613-b244-d03c9b4180ba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:15:13 np0005486808 nova_compute[259627]: 2025-10-14 09:15:13.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:13 np0005486808 nova_compute[259627]: 2025-10-14 09:15:13.621 2 INFO nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Creating config drive at /var/lib/nova/instances/33969555-fe06-4613-b244-d03c9b4180ba/disk.config#033[00m
Oct 14 05:15:13 np0005486808 nova_compute[259627]: 2025-10-14 09:15:13.631 2 DEBUG oslo_concurrency.processutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/33969555-fe06-4613-b244-d03c9b4180ba/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9u32y4o3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:15:13 np0005486808 nova_compute[259627]: 2025-10-14 09:15:13.693 2 DEBUG nova.network.neutron [req-ac877969-e14f-410f-914b-e17918705262 req-cad6fe36-cc15-4060-ad85-367a9c6d90ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Updated VIF entry in instance network info cache for port 7e42bf44-c1f8-49df-bd5f-a26abe43a832. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:15:13 np0005486808 nova_compute[259627]: 2025-10-14 09:15:13.694 2 DEBUG nova.network.neutron [req-ac877969-e14f-410f-914b-e17918705262 req-cad6fe36-cc15-4060-ad85-367a9c6d90ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Updating instance_info_cache with network_info: [{"id": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "address": "fa:16:3e:20:8d:8f", "network": {"id": "7f4225de-9f3f-48e2-bad7-a89cf4884a2e", "bridge": "br-int", "label": "tempest-network-smoke--1728644856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e42bf44-c1", "ovs_interfaceid": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:15:13 np0005486808 nova_compute[259627]: 2025-10-14 09:15:13.715 2 DEBUG oslo_concurrency.lockutils [req-ac877969-e14f-410f-914b-e17918705262 req-cad6fe36-cc15-4060-ad85-367a9c6d90ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-33969555-fe06-4613-b244-d03c9b4180ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 05:15:13 np0005486808 nova_compute[259627]: 2025-10-14 09:15:13.802 2 DEBUG oslo_concurrency.processutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/33969555-fe06-4613-b244-d03c9b4180ba/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9u32y4o3" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:15:13 np0005486808 nova_compute[259627]: 2025-10-14 09:15:13.831 2 DEBUG nova.storage.rbd_utils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 33969555-fe06-4613-b244-d03c9b4180ba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:15:13 np0005486808 nova_compute[259627]: 2025-10-14 09:15:13.836 2 DEBUG oslo_concurrency.processutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/33969555-fe06-4613-b244-d03c9b4180ba/disk.config 33969555-fe06-4613-b244-d03c9b4180ba_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:15:13 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1829: 305 pgs: 305 active+clean; 372 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 200 op/s
Oct 14 05:15:14 np0005486808 nova_compute[259627]: 2025-10-14 09:15:14.177 2 DEBUG oslo_concurrency.processutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/33969555-fe06-4613-b244-d03c9b4180ba/disk.config 33969555-fe06-4613-b244-d03c9b4180ba_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.342s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:15:14 np0005486808 nova_compute[259627]: 2025-10-14 09:15:14.179 2 INFO nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Deleting local config drive /var/lib/nova/instances/33969555-fe06-4613-b244-d03c9b4180ba/disk.config because it was imported into RBD.
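The four records above show the config-drive flow end to end: build an ISO9660 image with mkisofs, find no existing `*_disk.config` RBD image, `rbd import` the ISO into the `vms` pool, then delete the local copy. A minimal sketch of that command construction, with pool, client id, and flag layout taken from the logged invocations (the helper names and the omission of the `-publisher` string are ours):

```python
# Hypothetical helpers mirroring the config-drive-to-RBD sequence logged
# above; these only build argv lists, they do not execute anything.

def mkisofs_cmd(instance_dir: str, tmp_dir: str) -> list:
    """Build the ISO9660 config-drive command (flags as seen in the log)."""
    return [
        "/usr/bin/mkisofs", "-o", f"{instance_dir}/disk.config",
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-quiet", "-J", "-r", "-V", "config-2", tmp_dir,
    ]

def rbd_import_cmd(instance_uuid: str, instance_dir: str,
                   pool: str = "vms", client: str = "openstack") -> list:
    """Build the 'rbd import' that moves the local ISO into the Ceph pool."""
    return [
        "rbd", "import", "--pool", pool,
        f"{instance_dir}/disk.config", f"{instance_uuid}_disk.config",
        "--image-format=2", "--id", client,
        "--conf", "/etc/ceph/ceph.conf",
    ]
```

Once the import returns 0, the local `disk.config` is redundant, which is why the driver deletes it immediately afterwards.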
Oct 14 05:15:14 np0005486808 kernel: tap7e42bf44-c1: entered promiscuous mode
Oct 14 05:15:14 np0005486808 NetworkManager[44885]: <info>  [1760433314.2266] manager: (tap7e42bf44-c1): new Tun device (/org/freedesktop/NetworkManager/Devices/440)
Oct 14 05:15:14 np0005486808 nova_compute[259627]: 2025-10-14 09:15:14.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:15:14 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:14Z|01094|binding|INFO|Claiming lport 7e42bf44-c1f8-49df-bd5f-a26abe43a832 for this chassis.
Oct 14 05:15:14 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:14Z|01095|binding|INFO|7e42bf44-c1f8-49df-bd5f-a26abe43a832: Claiming fa:16:3e:20:8d:8f 10.100.0.4
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.241 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:8d:8f 10.100.0.4'], port_security=['fa:16:3e:20:8d:8f 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '33969555-fe06-4613-b244-d03c9b4180ba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7f4225de-9f3f-48e2-bad7-a89cf4884a2e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d24993a343a425dbddac7e32be0c86b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7fb2d92f-9be1-4133-9fba-da943dad4162', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=10a0b33e-96f5-46d7-a240-9f59c55a6b07, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=7e42bf44-c1f8-49df-bd5f-a26abe43a832) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.243 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 7e42bf44-c1f8-49df-bd5f-a26abe43a832 in datapath 7f4225de-9f3f-48e2-bad7-a89cf4884a2e bound to our chassis
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.244 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7f4225de-9f3f-48e2-bad7-a89cf4884a2e
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.263 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7bfd2c3a-75b2-4467-91ce-4ed25fff6fdd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.264 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7f4225de-91 in ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.266 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7f4225de-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.266 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[771315dc-5c57-4ddb-a9b2-b96d3af83b47]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.267 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f6a0eb3e-f4bc-4df4-8ba2-3131c155a284]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:15:14 np0005486808 systemd-udevd[361193]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.278 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[01c6f339-9e4f-4e98-831a-81517f08ea90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:15:14 np0005486808 systemd-machined[214636]: New machine qemu-129-instance-00000067.
Oct 14 05:15:14 np0005486808 NetworkManager[44885]: <info>  [1760433314.2885] device (tap7e42bf44-c1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:15:14 np0005486808 NetworkManager[44885]: <info>  [1760433314.2902] device (tap7e42bf44-c1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:15:14 np0005486808 systemd[1]: Started Virtual Machine qemu-129-instance-00000067.
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.305 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4988116d-6c70-46f9-b526-5edfa07919e8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.340 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4b40831a-9c2d-4eb3-8dc3-cc86cf85310d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:15:14 np0005486808 systemd-udevd[361198]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:15:14 np0005486808 NetworkManager[44885]: <info>  [1760433314.3502] manager: (tap7f4225de-90): new Veth device (/org/freedesktop/NetworkManager/Devices/441)
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.349 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ed05bfd7-1b30-4459-8957-b20f375ac03c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:15:14 np0005486808 nova_compute[259627]: 2025-10-14 09:15:14.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:15:14 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:14Z|01096|binding|INFO|Setting lport 7e42bf44-c1f8-49df-bd5f-a26abe43a832 ovn-installed in OVS
Oct 14 05:15:14 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:14Z|01097|binding|INFO|Setting lport 7e42bf44-c1f8-49df-bd5f-a26abe43a832 up in Southbound
Oct 14 05:15:14 np0005486808 nova_compute[259627]: 2025-10-14 09:15:14.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.398 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4879801e-4488-42d0-a8b5-20611444e8a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.403 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[12ca2d3c-2d95-42d4-a728-2750ab2379b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:15:14 np0005486808 NetworkManager[44885]: <info>  [1760433314.4261] device (tap7f4225de-90): carrier: link connected
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.433 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[421e0e27-f348-466f-97df-13d733a9d0f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.455 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d04b94d1-9386-4da7-80a2-f543e2d10a2e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7f4225de-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:e2:34'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 316], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 709318, 'reachable_time': 28288, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361226, 'error': None, 'target': 'ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.481 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a3d87856-f55c-4a7d-856f-a91294273e6d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee0:e234'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 709318, 'tstamp': 709318}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361227, 'error': None, 'target': 'ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.501 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2bcef1ee-cb35-45e5-ac43-403409676e25]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7f4225de-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:e2:34'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 316], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 709318, 'reachable_time': 28288, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 361228, 'error': None, 'target': 'ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
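The two large RTM_NEWLINK/RTM_NEWADDR replies above are pyroute2-style netlink messages: each is a dict whose `attrs` key holds `[name, value]` pairs (IFLA_IFNAME, IFLA_ADDRESS, IFLA_MTU, and so on). A small accessor, with a trimmed-down sample in the same shape as the dump (the `get_attr` helper here is our own, not pyroute2's API), shows how individual fields are pulled out:

```python
# Minimal sketch of reading a pyroute2-style 'attrs' list like the
# RTM_NEWLINK dump above; get_attr is a hypothetical helper.

def get_attr(msg: dict, name: str, default=None):
    """Return the first value stored under `name` in msg['attrs']."""
    for key, value in msg.get("attrs", []):
        if key == name:
            return value
    return default

# Trimmed-down message in the same shape as the logged dump:
link_msg = {
    "index": 2,
    "state": "up",
    "event": "RTM_NEWLINK",
    "attrs": [
        ["IFLA_IFNAME", "tap7f4225de-91"],
        ["IFLA_OPERSTATE", "UP"],
        ["IFLA_MTU", 1500],
        ["IFLA_ADDRESS", "fa:16:3e:e0:e2:34"],
        ["IFLA_LINKINFO", {"attrs": [["IFLA_INFO_KIND", "veth"]]}],
    ],
}
```

Nested attributes (such as `IFLA_LINKINFO` above) are themselves dicts with their own `attrs` list, so the same accessor can be applied recursively.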
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.547 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e5e46651-2c09-4ece-af19-a3567682369c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.612 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9f63ca8e-0216-495c-af13-ed39753696f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.613 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7f4225de-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.613 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.613 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7f4225de-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:15:14 np0005486808 kernel: tap7f4225de-90: entered promiscuous mode
Oct 14 05:15:14 np0005486808 NetworkManager[44885]: <info>  [1760433314.6165] manager: (tap7f4225de-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/442)
Oct 14 05:15:14 np0005486808 nova_compute[259627]: 2025-10-14 09:15:14.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.618 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7f4225de-90, col_values=(('external_ids', {'iface-id': 'f5c700c1-27f2-4a9c-8db0-f69417ca2318'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:15:14 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:14Z|01098|binding|INFO|Releasing lport f5c700c1-27f2-4a9c-8db0-f69417ca2318 from this chassis (sb_readonly=0)
Oct 14 05:15:14 np0005486808 nova_compute[259627]: 2025-10-14 09:15:14.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.654 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7f4225de-9f3f-48e2-bad7-a89cf4884a2e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7f4225de-9f3f-48e2-bad7-a89cf4884a2e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.655 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2a6f61ce-dac3-4b30-8929-0db73f748039]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.656 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-7f4225de-9f3f-48e2-bad7-a89cf4884a2e
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/7f4225de-9f3f-48e2-bad7-a89cf4884a2e.pid.haproxy
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 7f4225de-9f3f-48e2-bad7-a89cf4884a2e
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 05:15:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:14.658 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e', 'env', 'PROCESS_TAG=haproxy-7f4225de-9f3f-48e2-bad7-a89cf4884a2e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7f4225de-9f3f-48e2-bad7-a89cf4884a2e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
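The `haproxy_cfg` dump above is a per-network template render: the network UUID is substituted into the log-tag, pidfile path, and the `X-OVN-Network-ID` header, and the result is handed to haproxy inside the `ovnmeta-*` namespace. A hedged sketch of that substitution (the template string and function below are ours, modeled on the logged config, not neutron's actual template):

```python
# Illustrative per-network metadata-proxy config render, trimmed to the
# global and listen sections seen in the log; template text is hypothetical.
HAPROXY_TMPL = """global
    log         /dev/log local0 debug
    log-tag     haproxy-metadata-proxy-{network_id}
    user        {user}
    group       {group}
    maxconn     1024
    pidfile     {pid_dir}/{network_id}.pid.haproxy
    daemon

listen listener
    bind 169.254.169.254:80
    server metadata {socket_path}
    http-request add-header X-OVN-Network-ID {network_id}
"""

def render_metadata_proxy_cfg(network_id: str,
                              pid_dir: str = "/var/lib/neutron/external/pids",
                              socket_path: str = "/var/lib/neutron/metadata_proxy",
                              user: str = "root", group: str = "root") -> str:
    """Fill the template with one network's identifiers."""
    return HAPROXY_TMPL.format(network_id=network_id, pid_dir=pid_dir,
                               socket_path=socket_path, user=user, group=group)
```

Binding on 169.254.169.254:80 inside the network namespace is what lets each tenant network get its own metadata endpoint without address conflicts.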
Oct 14 05:15:15 np0005486808 podman[361302]: 2025-10-14 09:15:15.065438963 +0000 UTC m=+0.082204297 container create 4c63af5c6a6f786080410e90a8ec39ac0e90c54e5a94f9cfab106c88cb87e5a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_managed=true)
Oct 14 05:15:15 np0005486808 podman[361302]: 2025-10-14 09:15:15.014741354 +0000 UTC m=+0.031506768 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:15:15 np0005486808 systemd[1]: Started libpod-conmon-4c63af5c6a6f786080410e90a8ec39ac0e90c54e5a94f9cfab106c88cb87e5a1.scope.
Oct 14 05:15:15 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:15:15 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5839d52bde3e6e8745834d54c6e47047a57e1518d17430d80d50930db816dbf4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:15:15 np0005486808 podman[361302]: 2025-10-14 09:15:15.170748879 +0000 UTC m=+0.187514243 container init 4c63af5c6a6f786080410e90a8ec39ac0e90c54e5a94f9cfab106c88cb87e5a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009)
Oct 14 05:15:15 np0005486808 podman[361302]: 2025-10-14 09:15:15.176669235 +0000 UTC m=+0.193434559 container start 4c63af5c6a6f786080410e90a8ec39ac0e90c54e5a94f9cfab106c88cb87e5a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_managed=true)
Oct 14 05:15:15 np0005486808 neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e[361317]: [NOTICE]   (361321) : New worker (361323) forked
Oct 14 05:15:15 np0005486808 neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e[361317]: [NOTICE]   (361321) : Loading success.
Oct 14 05:15:15 np0005486808 nova_compute[259627]: 2025-10-14 09:15:15.389 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433315.3885822, 33969555-fe06-4613-b244-d03c9b4180ba => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:15:15 np0005486808 nova_compute[259627]: 2025-10-14 09:15:15.389 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] VM Started (Lifecycle Event)
Oct 14 05:15:15 np0005486808 nova_compute[259627]: 2025-10-14 09:15:15.409 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:15:15 np0005486808 nova_compute[259627]: 2025-10-14 09:15:15.413 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433315.3892004, 33969555-fe06-4613-b244-d03c9b4180ba => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:15:15 np0005486808 nova_compute[259627]: 2025-10-14 09:15:15.413 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] VM Paused (Lifecycle Event)
Oct 14 05:15:15 np0005486808 nova_compute[259627]: 2025-10-14 09:15:15.434 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:15:15 np0005486808 nova_compute[259627]: 2025-10-14 09:15:15.438 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 05:15:15 np0005486808 nova_compute[259627]: 2025-10-14 09:15:15.464 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:15:15 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1830: 305 pgs: 305 active+clean; 293 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 314 op/s
Oct 14 05:15:16 np0005486808 nova_compute[259627]: 2025-10-14 09:15:16.270 2 DEBUG nova.compute.manager [req-e19839e2-c82f-4227-ba76-d6cb5a97e049 req-fe2fe181-d575-4637-9c76-00b53a99876a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Received event network-vif-plugged-7e42bf44-c1f8-49df-bd5f-a26abe43a832 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:15:16 np0005486808 nova_compute[259627]: 2025-10-14 09:15:16.270 2 DEBUG oslo_concurrency.lockutils [req-e19839e2-c82f-4227-ba76-d6cb5a97e049 req-fe2fe181-d575-4637-9c76-00b53a99876a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "33969555-fe06-4613-b244-d03c9b4180ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:15:16 np0005486808 nova_compute[259627]: 2025-10-14 09:15:16.271 2 DEBUG oslo_concurrency.lockutils [req-e19839e2-c82f-4227-ba76-d6cb5a97e049 req-fe2fe181-d575-4637-9c76-00b53a99876a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:15:16 np0005486808 nova_compute[259627]: 2025-10-14 09:15:16.271 2 DEBUG oslo_concurrency.lockutils [req-e19839e2-c82f-4227-ba76-d6cb5a97e049 req-fe2fe181-d575-4637-9c76-00b53a99876a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:15:16 np0005486808 nova_compute[259627]: 2025-10-14 09:15:16.271 2 DEBUG nova.compute.manager [req-e19839e2-c82f-4227-ba76-d6cb5a97e049 req-fe2fe181-d575-4637-9c76-00b53a99876a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Processing event network-vif-plugged-7e42bf44-c1f8-49df-bd5f-a26abe43a832 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 05:15:16 np0005486808 nova_compute[259627]: 2025-10-14 09:15:16.273 2 DEBUG nova.compute.manager [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 05:15:16 np0005486808 nova_compute[259627]: 2025-10-14 09:15:16.289 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433316.2861264, 33969555-fe06-4613-b244-d03c9b4180ba => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:15:16 np0005486808 nova_compute[259627]: 2025-10-14 09:15:16.289 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] VM Resumed (Lifecycle Event)
Oct 14 05:15:16 np0005486808 nova_compute[259627]: 2025-10-14 09:15:16.293 2 DEBUG nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 05:15:16 np0005486808 nova_compute[259627]: 2025-10-14 09:15:16.298 2 INFO nova.virt.libvirt.driver [-] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Instance spawned successfully.
Oct 14 05:15:16 np0005486808 nova_compute[259627]: 2025-10-14 09:15:16.299 2 DEBUG nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 05:15:16 np0005486808 nova_compute[259627]: 2025-10-14 09:15:16.315 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:15:16 np0005486808 nova_compute[259627]: 2025-10-14 09:15:16.323 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 05:15:16 np0005486808 nova_compute[259627]: 2025-10-14 09:15:16.329 2 DEBUG nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:15:16 np0005486808 nova_compute[259627]: 2025-10-14 09:15:16.329 2 DEBUG nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:15:16 np0005486808 nova_compute[259627]: 2025-10-14 09:15:16.330 2 DEBUG nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:15:16 np0005486808 nova_compute[259627]: 2025-10-14 09:15:16.330 2 DEBUG nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:15:16 np0005486808 nova_compute[259627]: 2025-10-14 09:15:16.331 2 DEBUG nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:15:16 np0005486808 nova_compute[259627]: 2025-10-14 09:15:16.332 2 DEBUG nova.virt.libvirt.driver [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:15:16 np0005486808 nova_compute[259627]: 2025-10-14 09:15:16.354 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 05:15:16 np0005486808 nova_compute[259627]: 2025-10-14 09:15:16.563 2 INFO nova.compute.manager [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Took 8.91 seconds to spawn the instance on the hypervisor.
Oct 14 05:15:16 np0005486808 nova_compute[259627]: 2025-10-14 09:15:16.564 2 DEBUG nova.compute.manager [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:15:16 np0005486808 nova_compute[259627]: 2025-10-14 09:15:16.644 2 INFO nova.compute.manager [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Took 10.02 seconds to build instance.
Oct 14 05:15:16 np0005486808 nova_compute[259627]: 2025-10-14 09:15:16.661 2 DEBUG oslo_concurrency.lockutils [None req-2805e7f9-930c-4732-abf0-a93d1754b7a8 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.088s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:15:17 np0005486808 nova_compute[259627]: 2025-10-14 09:15:17.067 2 DEBUG oslo_concurrency.lockutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:15:17 np0005486808 nova_compute[259627]: 2025-10-14 09:15:17.067 2 DEBUG oslo_concurrency.lockutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:15:17 np0005486808 nova_compute[259627]: 2025-10-14 09:15:17.083 2 DEBUG nova.compute.manager [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 05:15:17 np0005486808 nova_compute[259627]: 2025-10-14 09:15:17.152 2 DEBUG oslo_concurrency.lockutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:15:17 np0005486808 nova_compute[259627]: 2025-10-14 09:15:17.153 2 DEBUG oslo_concurrency.lockutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:15:17 np0005486808 nova_compute[259627]: 2025-10-14 09:15:17.163 2 DEBUG nova.virt.hardware [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 05:15:17 np0005486808 nova_compute[259627]: 2025-10-14 09:15:17.163 2 INFO nova.compute.claims [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Claim successful on node compute-0.ctlplane.example.com
Oct 14 05:15:17 np0005486808 nova_compute[259627]: 2025-10-14 09:15:17.366 2 DEBUG oslo_concurrency.processutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:15:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:15:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:15:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1503332990' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:15:17 np0005486808 nova_compute[259627]: 2025-10-14 09:15:17.881 2 DEBUG oslo_concurrency.processutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:15:17 np0005486808 nova_compute[259627]: 2025-10-14 09:15:17.887 2 DEBUG nova.compute.provider_tree [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 05:15:17 np0005486808 nova_compute[259627]: 2025-10-14 09:15:17.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:15:17 np0005486808 nova_compute[259627]: 2025-10-14 09:15:17.906 2 DEBUG nova.scheduler.client.report [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 05:15:17 np0005486808 nova_compute[259627]: 2025-10-14 09:15:17.915 2 DEBUG nova.compute.manager [req-9804b606-5771-4fcc-94b0-77f2eb35b7de req-bb24bbe4-bbd0-4bdb-a264-680c094dbd0b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Received event network-changed-175a9914-0068-4aeb-b4b6-d501212d3374 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:15:17 np0005486808 nova_compute[259627]: 2025-10-14 09:15:17.915 2 DEBUG nova.compute.manager [req-9804b606-5771-4fcc-94b0-77f2eb35b7de req-bb24bbe4-bbd0-4bdb-a264-680c094dbd0b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Refreshing instance network info cache due to event network-changed-175a9914-0068-4aeb-b4b6-d501212d3374. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 05:15:17 np0005486808 nova_compute[259627]: 2025-10-14 09:15:17.915 2 DEBUG oslo_concurrency.lockutils [req-9804b606-5771-4fcc-94b0-77f2eb35b7de req-bb24bbe4-bbd0-4bdb-a264-680c094dbd0b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e1aea504-3ecf-4273-a867-66afb39de726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 05:15:17 np0005486808 nova_compute[259627]: 2025-10-14 09:15:17.916 2 DEBUG oslo_concurrency.lockutils [req-9804b606-5771-4fcc-94b0-77f2eb35b7de req-bb24bbe4-bbd0-4bdb-a264-680c094dbd0b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e1aea504-3ecf-4273-a867-66afb39de726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 05:15:17 np0005486808 nova_compute[259627]: 2025-10-14 09:15:17.916 2 DEBUG nova.network.neutron [req-9804b606-5771-4fcc-94b0-77f2eb35b7de req-bb24bbe4-bbd0-4bdb-a264-680c094dbd0b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Refreshing network info cache for port 175a9914-0068-4aeb-b4b6-d501212d3374 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 05:15:17 np0005486808 nova_compute[259627]: 2025-10-14 09:15:17.930 2 DEBUG oslo_concurrency.lockutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.777s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:15:17 np0005486808 nova_compute[259627]: 2025-10-14 09:15:17.930 2 DEBUG nova.compute.manager [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 05:15:17 np0005486808 nova_compute[259627]: 2025-10-14 09:15:17.977 2 DEBUG nova.compute.manager [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 05:15:17 np0005486808 nova_compute[259627]: 2025-10-14 09:15:17.978 2 DEBUG nova.network.neutron [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 05:15:17 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1831: 305 pgs: 305 active+clean; 293 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 184 op/s
Oct 14 05:15:17 np0005486808 nova_compute[259627]: 2025-10-14 09:15:17.996 2 INFO nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 05:15:18 np0005486808 nova_compute[259627]: 2025-10-14 09:15:18.021 2 DEBUG nova.compute.manager [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 05:15:18 np0005486808 nova_compute[259627]: 2025-10-14 09:15:18.110 2 DEBUG nova.compute.manager [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 05:15:18 np0005486808 nova_compute[259627]: 2025-10-14 09:15:18.113 2 DEBUG nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 05:15:18 np0005486808 nova_compute[259627]: 2025-10-14 09:15:18.114 2 INFO nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Creating image(s)
Oct 14 05:15:18 np0005486808 nova_compute[259627]: 2025-10-14 09:15:18.159 2 DEBUG nova.storage.rbd_utils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image 6b47438c-b2b5-48e7-a15c-ee7c5936da65_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:15:18 np0005486808 nova_compute[259627]: 2025-10-14 09:15:18.201 2 DEBUG nova.storage.rbd_utils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image 6b47438c-b2b5-48e7-a15c-ee7c5936da65_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:15:18 np0005486808 nova_compute[259627]: 2025-10-14 09:15:18.241 2 DEBUG nova.storage.rbd_utils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image 6b47438c-b2b5-48e7-a15c-ee7c5936da65_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:15:18 np0005486808 nova_compute[259627]: 2025-10-14 09:15:18.246 2 DEBUG oslo_concurrency.processutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:15:18 np0005486808 nova_compute[259627]: 2025-10-14 09:15:18.353 2 DEBUG oslo_concurrency.processutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:15:18 np0005486808 nova_compute[259627]: 2025-10-14 09:15:18.354 2 DEBUG oslo_concurrency.lockutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:15:18 np0005486808 nova_compute[259627]: 2025-10-14 09:15:18.354 2 DEBUG oslo_concurrency.lockutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:15:18 np0005486808 nova_compute[259627]: 2025-10-14 09:15:18.355 2 DEBUG oslo_concurrency.lockutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:15:18 np0005486808 nova_compute[259627]: 2025-10-14 09:15:18.380 2 DEBUG nova.storage.rbd_utils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image 6b47438c-b2b5-48e7-a15c-ee7c5936da65_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:15:18 np0005486808 nova_compute[259627]: 2025-10-14 09:15:18.387 2 DEBUG oslo_concurrency.processutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 6b47438c-b2b5-48e7-a15c-ee7c5936da65_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:15:18 np0005486808 nova_compute[259627]: 2025-10-14 09:15:18.434 2 DEBUG nova.policy [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a287ef08fc5c4f218bf06cd2c7ed021e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0a080fae2f3c4e39a6cca225203f5ec6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 05:15:18 np0005486808 nova_compute[259627]: 2025-10-14 09:15:18.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:15:18 np0005486808 nova_compute[259627]: 2025-10-14 09:15:18.573 2 DEBUG nova.compute.manager [req-b42a9f9d-3cbe-4733-a06f-dfc97f36abad req-c2e54034-136a-4e2b-a643-7c5eee52b889 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Received event network-vif-plugged-7e42bf44-c1f8-49df-bd5f-a26abe43a832 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:15:18 np0005486808 nova_compute[259627]: 2025-10-14 09:15:18.574 2 DEBUG oslo_concurrency.lockutils [req-b42a9f9d-3cbe-4733-a06f-dfc97f36abad req-c2e54034-136a-4e2b-a643-7c5eee52b889 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "33969555-fe06-4613-b244-d03c9b4180ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:15:18 np0005486808 nova_compute[259627]: 2025-10-14 09:15:18.575 2 DEBUG oslo_concurrency.lockutils [req-b42a9f9d-3cbe-4733-a06f-dfc97f36abad req-c2e54034-136a-4e2b-a643-7c5eee52b889 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:15:18 np0005486808 nova_compute[259627]: 2025-10-14 09:15:18.575 2 DEBUG oslo_concurrency.lockutils [req-b42a9f9d-3cbe-4733-a06f-dfc97f36abad req-c2e54034-136a-4e2b-a643-7c5eee52b889 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:15:18 np0005486808 nova_compute[259627]: 2025-10-14 09:15:18.575 2 DEBUG nova.compute.manager [req-b42a9f9d-3cbe-4733-a06f-dfc97f36abad req-c2e54034-136a-4e2b-a643-7c5eee52b889 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] No waiting events found dispatching network-vif-plugged-7e42bf44-c1f8-49df-bd5f-a26abe43a832 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 05:15:18 np0005486808 nova_compute[259627]: 2025-10-14 09:15:18.576 2 WARNING nova.compute.manager [req-b42a9f9d-3cbe-4733-a06f-dfc97f36abad req-c2e54034-136a-4e2b-a643-7c5eee52b889 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Received unexpected event network-vif-plugged-7e42bf44-c1f8-49df-bd5f-a26abe43a832 for instance with vm_state active and task_state None.
Oct 14 05:15:18 np0005486808 nova_compute[259627]: 2025-10-14 09:15:18.576 2 DEBUG nova.compute.manager [req-b42a9f9d-3cbe-4733-a06f-dfc97f36abad req-c2e54034-136a-4e2b-a643-7c5eee52b889 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Received event network-changed-175a9914-0068-4aeb-b4b6-d501212d3374 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:15:18 np0005486808 nova_compute[259627]: 2025-10-14 09:15:18.576 2 DEBUG nova.compute.manager [req-b42a9f9d-3cbe-4733-a06f-dfc97f36abad req-c2e54034-136a-4e2b-a643-7c5eee52b889 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Refreshing instance network info cache due to event network-changed-175a9914-0068-4aeb-b4b6-d501212d3374. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 05:15:18 np0005486808 nova_compute[259627]: 2025-10-14 09:15:18.577 2 DEBUG oslo_concurrency.lockutils [req-b42a9f9d-3cbe-4733-a06f-dfc97f36abad req-c2e54034-136a-4e2b-a643-7c5eee52b889 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e1aea504-3ecf-4273-a867-66afb39de726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 05:15:18 np0005486808 nova_compute[259627]: 2025-10-14 09:15:18.694 2 DEBUG oslo_concurrency.processutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 6b47438c-b2b5-48e7-a15c-ee7c5936da65_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.308s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:15:18 np0005486808 nova_compute[259627]: 2025-10-14 09:15:18.768 2 DEBUG nova.storage.rbd_utils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] resizing rbd image 6b47438c-b2b5-48e7-a15c-ee7c5936da65_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:15:18 np0005486808 nova_compute[259627]: 2025-10-14 09:15:18.861 2 DEBUG nova.objects.instance [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'migration_context' on Instance uuid 6b47438c-b2b5-48e7-a15c-ee7c5936da65 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:15:18 np0005486808 nova_compute[259627]: 2025-10-14 09:15:18.873 2 DEBUG nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:15:18 np0005486808 nova_compute[259627]: 2025-10-14 09:15:18.874 2 DEBUG nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Ensure instance console log exists: /var/lib/nova/instances/6b47438c-b2b5-48e7-a15c-ee7c5936da65/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:15:18 np0005486808 nova_compute[259627]: 2025-10-14 09:15:18.874 2 DEBUG oslo_concurrency.lockutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:18 np0005486808 nova_compute[259627]: 2025-10-14 09:15:18.875 2 DEBUG oslo_concurrency.lockutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:18 np0005486808 nova_compute[259627]: 2025-10-14 09:15:18.875 2 DEBUG oslo_concurrency.lockutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:19 np0005486808 nova_compute[259627]: 2025-10-14 09:15:19.207 2 DEBUG nova.network.neutron [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Successfully created port: 96f6d888-2b4a-4ca3-a48c-4628d4143f60 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:15:19 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:19Z|01099|binding|INFO|Releasing lport f5c700c1-27f2-4a9c-8db0-f69417ca2318 from this chassis (sb_readonly=0)
Oct 14 05:15:19 np0005486808 NetworkManager[44885]: <info>  [1760433319.4659] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/443)
Oct 14 05:15:19 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:19Z|01100|binding|INFO|Releasing lport b1a95e67-ca39-4a56-9a91-3165df166ea9 from this chassis (sb_readonly=0)
Oct 14 05:15:19 np0005486808 NetworkManager[44885]: <info>  [1760433319.4668] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/444)
Oct 14 05:15:19 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:19Z|01101|binding|INFO|Releasing lport f5c700c1-27f2-4a9c-8db0-f69417ca2318 from this chassis (sb_readonly=0)
Oct 14 05:15:19 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:19Z|01102|binding|INFO|Releasing lport b1a95e67-ca39-4a56-9a91-3165df166ea9 from this chassis (sb_readonly=0)
Oct 14 05:15:19 np0005486808 nova_compute[259627]: 2025-10-14 09:15:19.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:15:19 np0005486808 nova_compute[259627]: 2025-10-14 09:15:19.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:15:19 np0005486808 podman[361521]: 2025-10-14 09:15:19.671917507 +0000 UTC m=+0.090544303 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct 14 05:15:19 np0005486808 podman[361522]: 2025-10-14 09:15:19.672562283 +0000 UTC m=+0.080913835 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 05:15:19 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1832: 305 pgs: 305 active+clean; 293 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 184 op/s
Oct 14 05:15:20 np0005486808 nova_compute[259627]: 2025-10-14 09:15:20.157 2 DEBUG nova.compute.manager [req-8d023863-33b9-49d3-8bfe-cadc0b8d30b8 req-8288568c-ab09-4adf-9322-23164805cbbf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Received event network-changed-7e42bf44-c1f8-49df-bd5f-a26abe43a832 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:15:20 np0005486808 nova_compute[259627]: 2025-10-14 09:15:20.158 2 DEBUG nova.compute.manager [req-8d023863-33b9-49d3-8bfe-cadc0b8d30b8 req-8288568c-ab09-4adf-9322-23164805cbbf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Refreshing instance network info cache due to event network-changed-7e42bf44-c1f8-49df-bd5f-a26abe43a832. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 05:15:20 np0005486808 nova_compute[259627]: 2025-10-14 09:15:20.158 2 DEBUG oslo_concurrency.lockutils [req-8d023863-33b9-49d3-8bfe-cadc0b8d30b8 req-8288568c-ab09-4adf-9322-23164805cbbf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-33969555-fe06-4613-b244-d03c9b4180ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 05:15:20 np0005486808 nova_compute[259627]: 2025-10-14 09:15:20.158 2 DEBUG oslo_concurrency.lockutils [req-8d023863-33b9-49d3-8bfe-cadc0b8d30b8 req-8288568c-ab09-4adf-9322-23164805cbbf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-33969555-fe06-4613-b244-d03c9b4180ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 05:15:20 np0005486808 nova_compute[259627]: 2025-10-14 09:15:20.158 2 DEBUG nova.network.neutron [req-8d023863-33b9-49d3-8bfe-cadc0b8d30b8 req-8288568c-ab09-4adf-9322-23164805cbbf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Refreshing network info cache for port 7e42bf44-c1f8-49df-bd5f-a26abe43a832 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 05:15:20 np0005486808 nova_compute[259627]: 2025-10-14 09:15:20.194 2 DEBUG nova.network.neutron [req-9804b606-5771-4fcc-94b0-77f2eb35b7de req-bb24bbe4-bbd0-4bdb-a264-680c094dbd0b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Updated VIF entry in instance network info cache for port 175a9914-0068-4aeb-b4b6-d501212d3374. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 05:15:20 np0005486808 nova_compute[259627]: 2025-10-14 09:15:20.195 2 DEBUG nova.network.neutron [req-9804b606-5771-4fcc-94b0-77f2eb35b7de req-bb24bbe4-bbd0-4bdb-a264-680c094dbd0b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Updating instance_info_cache with network_info: [{"id": "175a9914-0068-4aeb-b4b6-d501212d3374", "address": "fa:16:3e:6a:d8:7e", "network": {"id": "8a53d982-8cf9-44ff-9b16-dda957aa7729", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-827923963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c4eae338e7d54d159033a20bc7460935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap175a9914-00", "ovs_interfaceid": "175a9914-0068-4aeb-b4b6-d501212d3374", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 05:15:20 np0005486808 nova_compute[259627]: 2025-10-14 09:15:20.221 2 DEBUG oslo_concurrency.lockutils [req-9804b606-5771-4fcc-94b0-77f2eb35b7de req-bb24bbe4-bbd0-4bdb-a264-680c094dbd0b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-e1aea504-3ecf-4273-a867-66afb39de726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 05:15:20 np0005486808 nova_compute[259627]: 2025-10-14 09:15:20.222 2 DEBUG oslo_concurrency.lockutils [req-b42a9f9d-3cbe-4733-a06f-dfc97f36abad req-c2e54034-136a-4e2b-a643-7c5eee52b889 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e1aea504-3ecf-4273-a867-66afb39de726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 05:15:20 np0005486808 nova_compute[259627]: 2025-10-14 09:15:20.222 2 DEBUG nova.network.neutron [req-b42a9f9d-3cbe-4733-a06f-dfc97f36abad req-c2e54034-136a-4e2b-a643-7c5eee52b889 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Refreshing network info cache for port 175a9914-0068-4aeb-b4b6-d501212d3374 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 05:15:20 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e258 do_prune osdmap full prune enabled
Oct 14 05:15:20 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e259 e259: 3 total, 3 up, 3 in
Oct 14 05:15:20 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e259: 3 total, 3 up, 3 in
Oct 14 05:15:20 np0005486808 nova_compute[259627]: 2025-10-14 09:15:20.582 2 DEBUG nova.network.neutron [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Successfully updated port: 96f6d888-2b4a-4ca3-a48c-4628d4143f60 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 05:15:20 np0005486808 nova_compute[259627]: 2025-10-14 09:15:20.596 2 DEBUG oslo_concurrency.lockutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "refresh_cache-6b47438c-b2b5-48e7-a15c-ee7c5936da65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 05:15:20 np0005486808 nova_compute[259627]: 2025-10-14 09:15:20.597 2 DEBUG oslo_concurrency.lockutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquired lock "refresh_cache-6b47438c-b2b5-48e7-a15c-ee7c5936da65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 05:15:20 np0005486808 nova_compute[259627]: 2025-10-14 09:15:20.597 2 DEBUG nova.network.neutron [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 05:15:20 np0005486808 nova_compute[259627]: 2025-10-14 09:15:20.869 2 DEBUG nova.network.neutron [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 05:15:21 np0005486808 nova_compute[259627]: 2025-10-14 09:15:21.562 2 DEBUG nova.network.neutron [req-b42a9f9d-3cbe-4733-a06f-dfc97f36abad req-c2e54034-136a-4e2b-a643-7c5eee52b889 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Updated VIF entry in instance network info cache for port 175a9914-0068-4aeb-b4b6-d501212d3374. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 05:15:21 np0005486808 nova_compute[259627]: 2025-10-14 09:15:21.563 2 DEBUG nova.network.neutron [req-b42a9f9d-3cbe-4733-a06f-dfc97f36abad req-c2e54034-136a-4e2b-a643-7c5eee52b889 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Updating instance_info_cache with network_info: [{"id": "175a9914-0068-4aeb-b4b6-d501212d3374", "address": "fa:16:3e:6a:d8:7e", "network": {"id": "8a53d982-8cf9-44ff-9b16-dda957aa7729", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-827923963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c4eae338e7d54d159033a20bc7460935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap175a9914-00", "ovs_interfaceid": "175a9914-0068-4aeb-b4b6-d501212d3374", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 05:15:21 np0005486808 nova_compute[259627]: 2025-10-14 09:15:21.583 2 DEBUG oslo_concurrency.lockutils [req-b42a9f9d-3cbe-4733-a06f-dfc97f36abad req-c2e54034-136a-4e2b-a643-7c5eee52b889 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-e1aea504-3ecf-4273-a867-66afb39de726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 05:15:21 np0005486808 nova_compute[259627]: 2025-10-14 09:15:21.830 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433306.8268523, 84968701-f6c5-4798-888e-fa0f3311adca => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:15:21 np0005486808 nova_compute[259627]: 2025-10-14 09:15:21.830 2 INFO nova.compute.manager [-] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] VM Stopped (Lifecycle Event)
Oct 14 05:15:21 np0005486808 nova_compute[259627]: 2025-10-14 09:15:21.865 2 DEBUG nova.compute.manager [None req-22412626-5dff-40ff-82df-a0ab087ba917 - - - - - -] [instance: 84968701-f6c5-4798-888e-fa0f3311adca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:15:21 np0005486808 nova_compute[259627]: 2025-10-14 09:15:21.949 2 DEBUG nova.network.neutron [req-8d023863-33b9-49d3-8bfe-cadc0b8d30b8 req-8288568c-ab09-4adf-9322-23164805cbbf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Updated VIF entry in instance network info cache for port 7e42bf44-c1f8-49df-bd5f-a26abe43a832. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 05:15:21 np0005486808 nova_compute[259627]: 2025-10-14 09:15:21.949 2 DEBUG nova.network.neutron [req-8d023863-33b9-49d3-8bfe-cadc0b8d30b8 req-8288568c-ab09-4adf-9322-23164805cbbf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Updating instance_info_cache with network_info: [{"id": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "address": "fa:16:3e:20:8d:8f", "network": {"id": "7f4225de-9f3f-48e2-bad7-a89cf4884a2e", "bridge": "br-int", "label": "tempest-network-smoke--1728644856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e42bf44-c1", "ovs_interfaceid": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 05:15:21 np0005486808 nova_compute[259627]: 2025-10-14 09:15:21.972 2 DEBUG oslo_concurrency.lockutils [req-8d023863-33b9-49d3-8bfe-cadc0b8d30b8 req-8288568c-ab09-4adf-9322-23164805cbbf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-33969555-fe06-4613-b244-d03c9b4180ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 05:15:21 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1834: 305 pgs: 305 active+clean; 339 MiB data, 861 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 2.2 MiB/s wr, 246 op/s
Oct 14 05:15:22 np0005486808 nova_compute[259627]: 2025-10-14 09:15:22.122 2 DEBUG nova.network.neutron [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Updating instance_info_cache with network_info: [{"id": "96f6d888-2b4a-4ca3-a48c-4628d4143f60", "address": "fa:16:3e:59:b2:50", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96f6d888-2b", "ovs_interfaceid": "96f6d888-2b4a-4ca3-a48c-4628d4143f60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 05:15:22 np0005486808 nova_compute[259627]: 2025-10-14 09:15:22.150 2 DEBUG oslo_concurrency.lockutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Releasing lock "refresh_cache-6b47438c-b2b5-48e7-a15c-ee7c5936da65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 05:15:22 np0005486808 nova_compute[259627]: 2025-10-14 09:15:22.150 2 DEBUG nova.compute.manager [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Instance network_info: |[{"id": "96f6d888-2b4a-4ca3-a48c-4628d4143f60", "address": "fa:16:3e:59:b2:50", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96f6d888-2b", "ovs_interfaceid": "96f6d888-2b4a-4ca3-a48c-4628d4143f60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 05:15:22 np0005486808 nova_compute[259627]: 2025-10-14 09:15:22.152 2 DEBUG nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Start _get_guest_xml network_info=[{"id": "96f6d888-2b4a-4ca3-a48c-4628d4143f60", "address": "fa:16:3e:59:b2:50", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96f6d888-2b", "ovs_interfaceid": "96f6d888-2b4a-4ca3-a48c-4628d4143f60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 05:15:22 np0005486808 nova_compute[259627]: 2025-10-14 09:15:22.157 2 WARNING nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 05:15:22 np0005486808 nova_compute[259627]: 2025-10-14 09:15:22.164 2 DEBUG nova.virt.libvirt.host [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 05:15:22 np0005486808 nova_compute[259627]: 2025-10-14 09:15:22.166 2 DEBUG nova.virt.libvirt.host [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 05:15:22 np0005486808 nova_compute[259627]: 2025-10-14 09:15:22.168 2 DEBUG nova.virt.libvirt.host [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 05:15:22 np0005486808 nova_compute[259627]: 2025-10-14 09:15:22.169 2 DEBUG nova.virt.libvirt.host [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 05:15:22 np0005486808 nova_compute[259627]: 2025-10-14 09:15:22.169 2 DEBUG nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 05:15:22 np0005486808 nova_compute[259627]: 2025-10-14 09:15:22.169 2 DEBUG nova.virt.hardware [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:15:22 np0005486808 nova_compute[259627]: 2025-10-14 09:15:22.170 2 DEBUG nova.virt.hardware [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:15:22 np0005486808 nova_compute[259627]: 2025-10-14 09:15:22.170 2 DEBUG nova.virt.hardware [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:15:22 np0005486808 nova_compute[259627]: 2025-10-14 09:15:22.170 2 DEBUG nova.virt.hardware [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:15:22 np0005486808 nova_compute[259627]: 2025-10-14 09:15:22.170 2 DEBUG nova.virt.hardware [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:15:22 np0005486808 nova_compute[259627]: 2025-10-14 09:15:22.171 2 DEBUG nova.virt.hardware [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:15:22 np0005486808 nova_compute[259627]: 2025-10-14 09:15:22.171 2 DEBUG nova.virt.hardware [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:15:22 np0005486808 nova_compute[259627]: 2025-10-14 09:15:22.171 2 DEBUG nova.virt.hardware [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:15:22 np0005486808 nova_compute[259627]: 2025-10-14 09:15:22.171 2 DEBUG nova.virt.hardware [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:15:22 np0005486808 nova_compute[259627]: 2025-10-14 09:15:22.171 2 DEBUG nova.virt.hardware [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:15:22 np0005486808 nova_compute[259627]: 2025-10-14 09:15:22.172 2 DEBUG nova.virt.hardware [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:15:22 np0005486808 nova_compute[259627]: 2025-10-14 09:15:22.174 2 DEBUG oslo_concurrency.processutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:15:22 np0005486808 nova_compute[259627]: 2025-10-14 09:15:22.280 2 DEBUG nova.compute.manager [req-e791a415-8239-4c81-abc3-b6f6f3fc53b4 req-f655c75f-82b3-4f51-90c4-ad9517e3f70c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Received event network-changed-96f6d888-2b4a-4ca3-a48c-4628d4143f60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:15:22 np0005486808 nova_compute[259627]: 2025-10-14 09:15:22.281 2 DEBUG nova.compute.manager [req-e791a415-8239-4c81-abc3-b6f6f3fc53b4 req-f655c75f-82b3-4f51-90c4-ad9517e3f70c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Refreshing instance network info cache due to event network-changed-96f6d888-2b4a-4ca3-a48c-4628d4143f60. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:15:22 np0005486808 nova_compute[259627]: 2025-10-14 09:15:22.281 2 DEBUG oslo_concurrency.lockutils [req-e791a415-8239-4c81-abc3-b6f6f3fc53b4 req-f655c75f-82b3-4f51-90c4-ad9517e3f70c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-6b47438c-b2b5-48e7-a15c-ee7c5936da65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:15:22 np0005486808 nova_compute[259627]: 2025-10-14 09:15:22.281 2 DEBUG oslo_concurrency.lockutils [req-e791a415-8239-4c81-abc3-b6f6f3fc53b4 req-f655c75f-82b3-4f51-90c4-ad9517e3f70c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-6b47438c-b2b5-48e7-a15c-ee7c5936da65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:15:22 np0005486808 nova_compute[259627]: 2025-10-14 09:15:22.281 2 DEBUG nova.network.neutron [req-e791a415-8239-4c81-abc3-b6f6f3fc53b4 req-f655c75f-82b3-4f51-90c4-ad9517e3f70c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Refreshing network info cache for port 96f6d888-2b4a-4ca3-a48c-4628d4143f60 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:15:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:15:22 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2417846393' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:15:22 np0005486808 nova_compute[259627]: 2025-10-14 09:15:22.674 2 DEBUG oslo_concurrency.processutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:15:22 np0005486808 nova_compute[259627]: 2025-10-14 09:15:22.705 2 DEBUG nova.storage.rbd_utils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image 6b47438c-b2b5-48e7-a15c-ee7c5936da65_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:15:22 np0005486808 nova_compute[259627]: 2025-10-14 09:15:22.711 2 DEBUG oslo_concurrency.processutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:15:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:15:22 np0005486808 nova_compute[259627]: 2025-10-14 09:15:22.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:15:23 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1205577679' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:15:23 np0005486808 nova_compute[259627]: 2025-10-14 09:15:23.163 2 DEBUG oslo_concurrency.processutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:15:23 np0005486808 nova_compute[259627]: 2025-10-14 09:15:23.165 2 DEBUG nova.virt.libvirt.vif [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:15:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-265021143',display_name='tempest-ServersTestJSON-server-265021143',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-265021143',id=104,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0a080fae2f3c4e39a6cca225203f5ec6',ramdisk_id='',reservation_id='r-6w8v0jr5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-2060951674',owner_user_name='tempest-ServersTestJSON-2060951674-project-member'},tags=T
agList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:15:18Z,user_data=None,user_id='a287ef08fc5c4f218bf06cd2c7ed021e',uuid=6b47438c-b2b5-48e7-a15c-ee7c5936da65,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "96f6d888-2b4a-4ca3-a48c-4628d4143f60", "address": "fa:16:3e:59:b2:50", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96f6d888-2b", "ovs_interfaceid": "96f6d888-2b4a-4ca3-a48c-4628d4143f60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:15:23 np0005486808 nova_compute[259627]: 2025-10-14 09:15:23.166 2 DEBUG nova.network.os_vif_util [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converting VIF {"id": "96f6d888-2b4a-4ca3-a48c-4628d4143f60", "address": "fa:16:3e:59:b2:50", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96f6d888-2b", "ovs_interfaceid": "96f6d888-2b4a-4ca3-a48c-4628d4143f60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:15:23 np0005486808 nova_compute[259627]: 2025-10-14 09:15:23.167 2 DEBUG nova.network.os_vif_util [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:59:b2:50,bridge_name='br-int',has_traffic_filtering=True,id=96f6d888-2b4a-4ca3-a48c-4628d4143f60,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96f6d888-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:15:23 np0005486808 nova_compute[259627]: 2025-10-14 09:15:23.168 2 DEBUG nova.objects.instance [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6b47438c-b2b5-48e7-a15c-ee7c5936da65 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:15:23 np0005486808 nova_compute[259627]: 2025-10-14 09:15:23.200 2 DEBUG nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:15:23 np0005486808 nova_compute[259627]:  <uuid>6b47438c-b2b5-48e7-a15c-ee7c5936da65</uuid>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:  <name>instance-00000068</name>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:15:23 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:      <nova:name>tempest-ServersTestJSON-server-265021143</nova:name>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:15:22</nova:creationTime>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:15:23 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:        <nova:user uuid="a287ef08fc5c4f218bf06cd2c7ed021e">tempest-ServersTestJSON-2060951674-project-member</nova:user>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:        <nova:project uuid="0a080fae2f3c4e39a6cca225203f5ec6">tempest-ServersTestJSON-2060951674</nova:project>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:        <nova:port uuid="96f6d888-2b4a-4ca3-a48c-4628d4143f60">
Oct 14 05:15:23 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:      <entry name="serial">6b47438c-b2b5-48e7-a15c-ee7c5936da65</entry>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:      <entry name="uuid">6b47438c-b2b5-48e7-a15c-ee7c5936da65</entry>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:15:23 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/6b47438c-b2b5-48e7-a15c-ee7c5936da65_disk">
Oct 14 05:15:23 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:15:23 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:15:23 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/6b47438c-b2b5-48e7-a15c-ee7c5936da65_disk.config">
Oct 14 05:15:23 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:15:23 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:15:23 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:59:b2:50"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:      <target dev="tap96f6d888-2b"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:15:23 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/6b47438c-b2b5-48e7-a15c-ee7c5936da65/console.log" append="off"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:15:23 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:15:23 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:15:23 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:15:23 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:15:23 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:15:23 np0005486808 nova_compute[259627]: 2025-10-14 09:15:23.202 2 DEBUG nova.compute.manager [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Preparing to wait for external event network-vif-plugged-96f6d888-2b4a-4ca3-a48c-4628d4143f60 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:15:23 np0005486808 nova_compute[259627]: 2025-10-14 09:15:23.203 2 DEBUG oslo_concurrency.lockutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:23 np0005486808 nova_compute[259627]: 2025-10-14 09:15:23.203 2 DEBUG oslo_concurrency.lockutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:23 np0005486808 nova_compute[259627]: 2025-10-14 09:15:23.204 2 DEBUG oslo_concurrency.lockutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:23 np0005486808 nova_compute[259627]: 2025-10-14 09:15:23.205 2 DEBUG nova.virt.libvirt.vif [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:15:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-265021143',display_name='tempest-ServersTestJSON-server-265021143',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-265021143',id=104,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0a080fae2f3c4e39a6cca225203f5ec6',ramdisk_id='',reservation_id='r-6w8v0jr5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-2060951674',owner_user_name='tempest-ServersTestJSON-2060951674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:15:18Z,user_data=None,user_id='a287ef08fc5c4f218bf06cd2c7ed021e',uuid=6b47438c-b2b5-48e7-a15c-ee7c5936da65,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "96f6d888-2b4a-4ca3-a48c-4628d4143f60", "address": "fa:16:3e:59:b2:50", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96f6d888-2b", "ovs_interfaceid": "96f6d888-2b4a-4ca3-a48c-4628d4143f60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:15:23 np0005486808 nova_compute[259627]: 2025-10-14 09:15:23.205 2 DEBUG nova.network.os_vif_util [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converting VIF {"id": "96f6d888-2b4a-4ca3-a48c-4628d4143f60", "address": "fa:16:3e:59:b2:50", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96f6d888-2b", "ovs_interfaceid": "96f6d888-2b4a-4ca3-a48c-4628d4143f60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:15:23 np0005486808 nova_compute[259627]: 2025-10-14 09:15:23.206 2 DEBUG nova.network.os_vif_util [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:59:b2:50,bridge_name='br-int',has_traffic_filtering=True,id=96f6d888-2b4a-4ca3-a48c-4628d4143f60,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96f6d888-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:15:23 np0005486808 nova_compute[259627]: 2025-10-14 09:15:23.206 2 DEBUG os_vif [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:59:b2:50,bridge_name='br-int',has_traffic_filtering=True,id=96f6d888-2b4a-4ca3-a48c-4628d4143f60,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96f6d888-2b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:15:23 np0005486808 nova_compute[259627]: 2025-10-14 09:15:23.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:23 np0005486808 nova_compute[259627]: 2025-10-14 09:15:23.208 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:15:23 np0005486808 nova_compute[259627]: 2025-10-14 09:15:23.209 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:15:23 np0005486808 nova_compute[259627]: 2025-10-14 09:15:23.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:23 np0005486808 nova_compute[259627]: 2025-10-14 09:15:23.214 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap96f6d888-2b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:15:23 np0005486808 nova_compute[259627]: 2025-10-14 09:15:23.215 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap96f6d888-2b, col_values=(('external_ids', {'iface-id': '96f6d888-2b4a-4ca3-a48c-4628d4143f60', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:59:b2:50', 'vm-uuid': '6b47438c-b2b5-48e7-a15c-ee7c5936da65'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:15:23 np0005486808 NetworkManager[44885]: <info>  [1760433323.2647] manager: (tap96f6d888-2b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/445)
Oct 14 05:15:23 np0005486808 nova_compute[259627]: 2025-10-14 09:15:23.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:23 np0005486808 nova_compute[259627]: 2025-10-14 09:15:23.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:15:23 np0005486808 nova_compute[259627]: 2025-10-14 09:15:23.274 2 INFO os_vif [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:59:b2:50,bridge_name='br-int',has_traffic_filtering=True,id=96f6d888-2b4a-4ca3-a48c-4628d4143f60,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96f6d888-2b')#033[00m
Oct 14 05:15:23 np0005486808 nova_compute[259627]: 2025-10-14 09:15:23.379 2 DEBUG nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:15:23 np0005486808 nova_compute[259627]: 2025-10-14 09:15:23.381 2 DEBUG nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:15:23 np0005486808 nova_compute[259627]: 2025-10-14 09:15:23.381 2 DEBUG nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] No VIF found with MAC fa:16:3e:59:b2:50, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:15:23 np0005486808 nova_compute[259627]: 2025-10-14 09:15:23.382 2 INFO nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Using config drive#033[00m
Oct 14 05:15:23 np0005486808 nova_compute[259627]: 2025-10-14 09:15:23.411 2 DEBUG nova.storage.rbd_utils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image 6b47438c-b2b5-48e7-a15c-ee7c5936da65_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:15:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e259 do_prune osdmap full prune enabled
Oct 14 05:15:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e260 e260: 3 total, 3 up, 3 in
Oct 14 05:15:23 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e260: 3 total, 3 up, 3 in
Oct 14 05:15:23 np0005486808 nova_compute[259627]: 2025-10-14 09:15:23.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:23 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1836: 305 pgs: 305 active+clean; 339 MiB data, 861 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 137 op/s
Oct 14 05:15:24 np0005486808 nova_compute[259627]: 2025-10-14 09:15:24.357 2 INFO nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Creating config drive at /var/lib/nova/instances/6b47438c-b2b5-48e7-a15c-ee7c5936da65/disk.config#033[00m
Oct 14 05:15:24 np0005486808 nova_compute[259627]: 2025-10-14 09:15:24.367 2 DEBUG oslo_concurrency.processutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6b47438c-b2b5-48e7-a15c-ee7c5936da65/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvs15fzfm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:15:24 np0005486808 nova_compute[259627]: 2025-10-14 09:15:24.540 2 DEBUG oslo_concurrency.processutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6b47438c-b2b5-48e7-a15c-ee7c5936da65/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvs15fzfm" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:15:24 np0005486808 nova_compute[259627]: 2025-10-14 09:15:24.571 2 DEBUG nova.storage.rbd_utils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image 6b47438c-b2b5-48e7-a15c-ee7c5936da65_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:15:24 np0005486808 nova_compute[259627]: 2025-10-14 09:15:24.575 2 DEBUG oslo_concurrency.processutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6b47438c-b2b5-48e7-a15c-ee7c5936da65/disk.config 6b47438c-b2b5-48e7-a15c-ee7c5936da65_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:15:24 np0005486808 nova_compute[259627]: 2025-10-14 09:15:24.775 2 DEBUG oslo_concurrency.processutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6b47438c-b2b5-48e7-a15c-ee7c5936da65/disk.config 6b47438c-b2b5-48e7-a15c-ee7c5936da65_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:15:24 np0005486808 nova_compute[259627]: 2025-10-14 09:15:24.777 2 INFO nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Deleting local config drive /var/lib/nova/instances/6b47438c-b2b5-48e7-a15c-ee7c5936da65/disk.config because it was imported into RBD.#033[00m
Oct 14 05:15:24 np0005486808 NetworkManager[44885]: <info>  [1760433324.8369] manager: (tap96f6d888-2b): new Tun device (/org/freedesktop/NetworkManager/Devices/446)
Oct 14 05:15:24 np0005486808 kernel: tap96f6d888-2b: entered promiscuous mode
Oct 14 05:15:24 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:24Z|01103|binding|INFO|Claiming lport 96f6d888-2b4a-4ca3-a48c-4628d4143f60 for this chassis.
Oct 14 05:15:24 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:24Z|01104|binding|INFO|96f6d888-2b4a-4ca3-a48c-4628d4143f60: Claiming fa:16:3e:59:b2:50 10.100.0.4
Oct 14 05:15:24 np0005486808 nova_compute[259627]: 2025-10-14 09:15:24.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:24.852 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:59:b2:50 10.100.0.4'], port_security=['fa:16:3e:59:b2:50 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '6b47438c-b2b5-48e7-a15c-ee7c5936da65', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbecee11-4892-4e36-88d8-98879af7bb1e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a080fae2f3c4e39a6cca225203f5ec6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '03d7ceab-aac8-44ec-88a3-f0ef79d050f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d77764d-de2c-4408-8882-77d03cb7288a, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=96f6d888-2b4a-4ca3-a48c-4628d4143f60) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:15:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:24.854 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 96f6d888-2b4a-4ca3-a48c-4628d4143f60 in datapath fbecee11-4892-4e36-88d8-98879af7bb1e bound to our chassis#033[00m
Oct 14 05:15:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:24.871 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbecee11-4892-4e36-88d8-98879af7bb1e#033[00m
Oct 14 05:15:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:24.891 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1baf7dd6-7424-4371-afb2-8e5ef6b2a80b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:24 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:24Z|01105|binding|INFO|Setting lport 96f6d888-2b4a-4ca3-a48c-4628d4143f60 ovn-installed in OVS
Oct 14 05:15:24 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:24Z|01106|binding|INFO|Setting lport 96f6d888-2b4a-4ca3-a48c-4628d4143f60 up in Southbound
Oct 14 05:15:24 np0005486808 nova_compute[259627]: 2025-10-14 09:15:24.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:24 np0005486808 nova_compute[259627]: 2025-10-14 09:15:24.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:24 np0005486808 systemd-udevd[361696]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:15:24 np0005486808 systemd-machined[214636]: New machine qemu-130-instance-00000068.
Oct 14 05:15:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:24.929 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e1ea95af-c20b-440b-96d9-aaad2ad06c5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:24.931 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d99d36ac-a443-44c9-8125-cd7d1a82787c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:24 np0005486808 NetworkManager[44885]: <info>  [1760433324.9363] device (tap96f6d888-2b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:15:24 np0005486808 NetworkManager[44885]: <info>  [1760433324.9374] device (tap96f6d888-2b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:15:24 np0005486808 systemd[1]: Started Virtual Machine qemu-130-instance-00000068.
Oct 14 05:15:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:24.974 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e7ab8135-878c-4803-9c99-2073b4ecaa26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:25.008 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5e39d2cb-71c1-47ef-a230-8d88f378711a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbecee11-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:b3:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 21, 'rx_bytes': 1000, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 21, 'rx_bytes': 1000, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 700079, 'reachable_time': 38382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361703, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:25.032 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[be274130-0d24-4251-aca4-b681dbcdd601]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700095, 'tstamp': 700095}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361707, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700099, 'tstamp': 700099}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361707, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:25.038 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbecee11-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:15:25 np0005486808 nova_compute[259627]: 2025-10-14 09:15:25.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:25 np0005486808 nova_compute[259627]: 2025-10-14 09:15:25.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:25.042 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbecee11-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:15:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:25.042 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:15:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:25.042 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbecee11-40, col_values=(('external_ids', {'iface-id': 'b1a95e67-ca39-4a56-9a91-3165df166ea9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:15:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:25.043 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:15:25 np0005486808 nova_compute[259627]: 2025-10-14 09:15:25.143 2 DEBUG nova.network.neutron [req-e791a415-8239-4c81-abc3-b6f6f3fc53b4 req-f655c75f-82b3-4f51-90c4-ad9517e3f70c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Updated VIF entry in instance network info cache for port 96f6d888-2b4a-4ca3-a48c-4628d4143f60. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:15:25 np0005486808 nova_compute[259627]: 2025-10-14 09:15:25.144 2 DEBUG nova.network.neutron [req-e791a415-8239-4c81-abc3-b6f6f3fc53b4 req-f655c75f-82b3-4f51-90c4-ad9517e3f70c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Updating instance_info_cache with network_info: [{"id": "96f6d888-2b4a-4ca3-a48c-4628d4143f60", "address": "fa:16:3e:59:b2:50", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96f6d888-2b", "ovs_interfaceid": "96f6d888-2b4a-4ca3-a48c-4628d4143f60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:15:25 np0005486808 nova_compute[259627]: 2025-10-14 09:15:25.162 2 DEBUG oslo_concurrency.lockutils [req-e791a415-8239-4c81-abc3-b6f6f3fc53b4 req-f655c75f-82b3-4f51-90c4-ad9517e3f70c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-6b47438c-b2b5-48e7-a15c-ee7c5936da65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:15:25 np0005486808 nova_compute[259627]: 2025-10-14 09:15:25.162 2 DEBUG nova.compute.manager [req-e791a415-8239-4c81-abc3-b6f6f3fc53b4 req-f655c75f-82b3-4f51-90c4-ad9517e3f70c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Received event network-changed-175a9914-0068-4aeb-b4b6-d501212d3374 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:15:25 np0005486808 nova_compute[259627]: 2025-10-14 09:15:25.162 2 DEBUG nova.compute.manager [req-e791a415-8239-4c81-abc3-b6f6f3fc53b4 req-f655c75f-82b3-4f51-90c4-ad9517e3f70c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Refreshing instance network info cache due to event network-changed-175a9914-0068-4aeb-b4b6-d501212d3374. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:15:25 np0005486808 nova_compute[259627]: 2025-10-14 09:15:25.163 2 DEBUG oslo_concurrency.lockutils [req-e791a415-8239-4c81-abc3-b6f6f3fc53b4 req-f655c75f-82b3-4f51-90c4-ad9517e3f70c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e1aea504-3ecf-4273-a867-66afb39de726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:15:25 np0005486808 nova_compute[259627]: 2025-10-14 09:15:25.163 2 DEBUG oslo_concurrency.lockutils [req-e791a415-8239-4c81-abc3-b6f6f3fc53b4 req-f655c75f-82b3-4f51-90c4-ad9517e3f70c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e1aea504-3ecf-4273-a867-66afb39de726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:15:25 np0005486808 nova_compute[259627]: 2025-10-14 09:15:25.163 2 DEBUG nova.network.neutron [req-e791a415-8239-4c81-abc3-b6f6f3fc53b4 req-f655c75f-82b3-4f51-90c4-ad9517e3f70c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Refreshing network info cache for port 175a9914-0068-4aeb-b4b6-d501212d3374 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:15:25 np0005486808 nova_compute[259627]: 2025-10-14 09:15:25.305 2 DEBUG nova.compute.manager [req-7069f613-a5b7-43ed-b367-ab61a227ba45 req-ee559b97-5320-4372-b2af-3a5832e88512 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Received event network-vif-plugged-96f6d888-2b4a-4ca3-a48c-4628d4143f60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:15:25 np0005486808 nova_compute[259627]: 2025-10-14 09:15:25.306 2 DEBUG oslo_concurrency.lockutils [req-7069f613-a5b7-43ed-b367-ab61a227ba45 req-ee559b97-5320-4372-b2af-3a5832e88512 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:25 np0005486808 nova_compute[259627]: 2025-10-14 09:15:25.306 2 DEBUG oslo_concurrency.lockutils [req-7069f613-a5b7-43ed-b367-ab61a227ba45 req-ee559b97-5320-4372-b2af-3a5832e88512 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:25 np0005486808 nova_compute[259627]: 2025-10-14 09:15:25.307 2 DEBUG oslo_concurrency.lockutils [req-7069f613-a5b7-43ed-b367-ab61a227ba45 req-ee559b97-5320-4372-b2af-3a5832e88512 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:25 np0005486808 nova_compute[259627]: 2025-10-14 09:15:25.307 2 DEBUG nova.compute.manager [req-7069f613-a5b7-43ed-b367-ab61a227ba45 req-ee559b97-5320-4372-b2af-3a5832e88512 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Processing event network-vif-plugged-96f6d888-2b4a-4ca3-a48c-4628d4143f60 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:15:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e260 do_prune osdmap full prune enabled
Oct 14 05:15:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e261 e261: 3 total, 3 up, 3 in
Oct 14 05:15:25 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e261: 3 total, 3 up, 3 in
Oct 14 05:15:25 np0005486808 nova_compute[259627]: 2025-10-14 09:15:25.772 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433310.744599, da40c115-048e-4844-812e-7e65e25bfb3f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:15:25 np0005486808 nova_compute[259627]: 2025-10-14 09:15:25.773 2 INFO nova.compute.manager [-] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:15:25 np0005486808 nova_compute[259627]: 2025-10-14 09:15:25.795 2 DEBUG nova.compute.manager [None req-9b9d7534-54e7-465c-83b8-a30a18c4667e - - - - - -] [instance: da40c115-048e-4844-812e-7e65e25bfb3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:15:25 np0005486808 nova_compute[259627]: 2025-10-14 09:15:25.969 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433325.9689043, 6b47438c-b2b5-48e7-a15c-ee7c5936da65 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:15:25 np0005486808 nova_compute[259627]: 2025-10-14 09:15:25.969 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] VM Started (Lifecycle Event)#033[00m
Oct 14 05:15:25 np0005486808 nova_compute[259627]: 2025-10-14 09:15:25.971 2 DEBUG nova.compute.manager [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:15:25 np0005486808 nova_compute[259627]: 2025-10-14 09:15:25.975 2 DEBUG nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:15:25 np0005486808 nova_compute[259627]: 2025-10-14 09:15:25.978 2 INFO nova.virt.libvirt.driver [-] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Instance spawned successfully.#033[00m
Oct 14 05:15:25 np0005486808 nova_compute[259627]: 2025-10-14 09:15:25.979 2 DEBUG nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:15:25 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1838: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 339 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 3.6 MiB/s wr, 406 op/s
Oct 14 05:15:25 np0005486808 nova_compute[259627]: 2025-10-14 09:15:25.998 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:15:26 np0005486808 nova_compute[259627]: 2025-10-14 09:15:26.006 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:15:26 np0005486808 nova_compute[259627]: 2025-10-14 09:15:26.011 2 DEBUG nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:15:26 np0005486808 nova_compute[259627]: 2025-10-14 09:15:26.012 2 DEBUG nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:15:26 np0005486808 nova_compute[259627]: 2025-10-14 09:15:26.013 2 DEBUG nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:15:26 np0005486808 nova_compute[259627]: 2025-10-14 09:15:26.013 2 DEBUG nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:15:26 np0005486808 nova_compute[259627]: 2025-10-14 09:15:26.014 2 DEBUG nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:15:26 np0005486808 nova_compute[259627]: 2025-10-14 09:15:26.015 2 DEBUG nova.virt.libvirt.driver [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:15:26 np0005486808 nova_compute[259627]: 2025-10-14 09:15:26.044 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:15:26 np0005486808 nova_compute[259627]: 2025-10-14 09:15:26.045 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433325.971192, 6b47438c-b2b5-48e7-a15c-ee7c5936da65 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:15:26 np0005486808 nova_compute[259627]: 2025-10-14 09:15:26.045 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:15:26 np0005486808 nova_compute[259627]: 2025-10-14 09:15:26.072 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:15:26 np0005486808 nova_compute[259627]: 2025-10-14 09:15:26.076 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433325.976358, 6b47438c-b2b5-48e7-a15c-ee7c5936da65 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:15:26 np0005486808 nova_compute[259627]: 2025-10-14 09:15:26.076 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:15:26 np0005486808 nova_compute[259627]: 2025-10-14 09:15:26.083 2 INFO nova.compute.manager [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Took 7.97 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:15:26 np0005486808 nova_compute[259627]: 2025-10-14 09:15:26.084 2 DEBUG nova.compute.manager [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:15:26 np0005486808 nova_compute[259627]: 2025-10-14 09:15:26.097 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:15:26 np0005486808 nova_compute[259627]: 2025-10-14 09:15:26.101 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:15:26 np0005486808 nova_compute[259627]: 2025-10-14 09:15:26.134 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:15:26 np0005486808 nova_compute[259627]: 2025-10-14 09:15:26.166 2 INFO nova.compute.manager [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Took 9.03 seconds to build instance.#033[00m
Oct 14 05:15:26 np0005486808 nova_compute[259627]: 2025-10-14 09:15:26.181 2 DEBUG oslo_concurrency.lockutils [None req-a828fbab-2a99-4ff7-a8fb-2dd5deb4f706 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:26 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e261 do_prune osdmap full prune enabled
Oct 14 05:15:26 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e262 e262: 3 total, 3 up, 3 in
Oct 14 05:15:26 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e262: 3 total, 3 up, 3 in
Oct 14 05:15:27 np0005486808 nova_compute[259627]: 2025-10-14 09:15:27.044 2 DEBUG nova.network.neutron [req-e791a415-8239-4c81-abc3-b6f6f3fc53b4 req-f655c75f-82b3-4f51-90c4-ad9517e3f70c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Updated VIF entry in instance network info cache for port 175a9914-0068-4aeb-b4b6-d501212d3374. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:15:27 np0005486808 nova_compute[259627]: 2025-10-14 09:15:27.045 2 DEBUG nova.network.neutron [req-e791a415-8239-4c81-abc3-b6f6f3fc53b4 req-f655c75f-82b3-4f51-90c4-ad9517e3f70c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Updating instance_info_cache with network_info: [{"id": "175a9914-0068-4aeb-b4b6-d501212d3374", "address": "fa:16:3e:6a:d8:7e", "network": {"id": "8a53d982-8cf9-44ff-9b16-dda957aa7729", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-827923963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c4eae338e7d54d159033a20bc7460935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap175a9914-00", "ovs_interfaceid": "175a9914-0068-4aeb-b4b6-d501212d3374", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:15:27 np0005486808 nova_compute[259627]: 2025-10-14 09:15:27.072 2 DEBUG oslo_concurrency.lockutils [req-e791a415-8239-4c81-abc3-b6f6f3fc53b4 req-f655c75f-82b3-4f51-90c4-ad9517e3f70c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-e1aea504-3ecf-4273-a867-66afb39de726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:15:27 np0005486808 nova_compute[259627]: 2025-10-14 09:15:27.397 2 DEBUG nova.compute.manager [req-d3ae4740-9668-45cf-891d-d918643785eb req-1375ef83-58ae-4307-9933-ce436f3904a4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Received event network-vif-plugged-96f6d888-2b4a-4ca3-a48c-4628d4143f60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:15:27 np0005486808 nova_compute[259627]: 2025-10-14 09:15:27.397 2 DEBUG oslo_concurrency.lockutils [req-d3ae4740-9668-45cf-891d-d918643785eb req-1375ef83-58ae-4307-9933-ce436f3904a4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:27 np0005486808 nova_compute[259627]: 2025-10-14 09:15:27.398 2 DEBUG oslo_concurrency.lockutils [req-d3ae4740-9668-45cf-891d-d918643785eb req-1375ef83-58ae-4307-9933-ce436f3904a4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:27 np0005486808 nova_compute[259627]: 2025-10-14 09:15:27.398 2 DEBUG oslo_concurrency.lockutils [req-d3ae4740-9668-45cf-891d-d918643785eb req-1375ef83-58ae-4307-9933-ce436f3904a4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:27 np0005486808 nova_compute[259627]: 2025-10-14 09:15:27.399 2 DEBUG nova.compute.manager [req-d3ae4740-9668-45cf-891d-d918643785eb req-1375ef83-58ae-4307-9933-ce436f3904a4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] No waiting events found dispatching network-vif-plugged-96f6d888-2b4a-4ca3-a48c-4628d4143f60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:15:27 np0005486808 nova_compute[259627]: 2025-10-14 09:15:27.399 2 WARNING nova.compute.manager [req-d3ae4740-9668-45cf-891d-d918643785eb req-1375ef83-58ae-4307-9933-ce436f3904a4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Received unexpected event network-vif-plugged-96f6d888-2b4a-4ca3-a48c-4628d4143f60 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:15:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:15:27 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1840: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 339 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 53 KiB/s wr, 222 op/s
Oct 14 05:15:28 np0005486808 nova_compute[259627]: 2025-10-14 09:15:28.006 2 DEBUG nova.compute.manager [req-7d7f483f-49c4-409b-805e-bff2237e4674 req-0b1a4ce3-1a6d-4a51-a20c-df232919ea1b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Received event network-changed-175a9914-0068-4aeb-b4b6-d501212d3374 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:15:28 np0005486808 nova_compute[259627]: 2025-10-14 09:15:28.006 2 DEBUG nova.compute.manager [req-7d7f483f-49c4-409b-805e-bff2237e4674 req-0b1a4ce3-1a6d-4a51-a20c-df232919ea1b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Refreshing instance network info cache due to event network-changed-175a9914-0068-4aeb-b4b6-d501212d3374. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:15:28 np0005486808 nova_compute[259627]: 2025-10-14 09:15:28.007 2 DEBUG oslo_concurrency.lockutils [req-7d7f483f-49c4-409b-805e-bff2237e4674 req-0b1a4ce3-1a6d-4a51-a20c-df232919ea1b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e1aea504-3ecf-4273-a867-66afb39de726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:15:28 np0005486808 nova_compute[259627]: 2025-10-14 09:15:28.008 2 DEBUG oslo_concurrency.lockutils [req-7d7f483f-49c4-409b-805e-bff2237e4674 req-0b1a4ce3-1a6d-4a51-a20c-df232919ea1b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e1aea504-3ecf-4273-a867-66afb39de726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:15:28 np0005486808 nova_compute[259627]: 2025-10-14 09:15:28.008 2 DEBUG nova.network.neutron [req-7d7f483f-49c4-409b-805e-bff2237e4674 req-0b1a4ce3-1a6d-4a51-a20c-df232919ea1b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Refreshing network info cache for port 175a9914-0068-4aeb-b4b6-d501212d3374 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:15:28 np0005486808 nova_compute[259627]: 2025-10-14 09:15:28.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:28 np0005486808 nova_compute[259627]: 2025-10-14 09:15:28.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:29 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:29Z|00107|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:20:8d:8f 10.100.0.4
Oct 14 05:15:29 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:29Z|00108|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:20:8d:8f 10.100.0.4
Oct 14 05:15:29 np0005486808 nova_compute[259627]: 2025-10-14 09:15:29.961 2 DEBUG oslo_concurrency.lockutils [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:29 np0005486808 nova_compute[259627]: 2025-10-14 09:15:29.961 2 DEBUG oslo_concurrency.lockutils [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:29 np0005486808 nova_compute[259627]: 2025-10-14 09:15:29.963 2 DEBUG oslo_concurrency.lockutils [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:29 np0005486808 nova_compute[259627]: 2025-10-14 09:15:29.964 2 DEBUG oslo_concurrency.lockutils [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:29 np0005486808 nova_compute[259627]: 2025-10-14 09:15:29.965 2 DEBUG oslo_concurrency.lockutils [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:29 np0005486808 nova_compute[259627]: 2025-10-14 09:15:29.966 2 INFO nova.compute.manager [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Terminating instance#033[00m
Oct 14 05:15:29 np0005486808 nova_compute[259627]: 2025-10-14 09:15:29.968 2 DEBUG nova.compute.manager [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:15:29 np0005486808 nova_compute[259627]: 2025-10-14 09:15:29.972 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:15:29 np0005486808 nova_compute[259627]: 2025-10-14 09:15:29.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:15:29 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1841: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 339 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 48 KiB/s wr, 204 op/s
Oct 14 05:15:30 np0005486808 kernel: tap96f6d888-2b (unregistering): left promiscuous mode
Oct 14 05:15:30 np0005486808 NetworkManager[44885]: <info>  [1760433330.0152] device (tap96f6d888-2b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:15:30 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:30Z|01107|binding|INFO|Releasing lport 96f6d888-2b4a-4ca3-a48c-4628d4143f60 from this chassis (sb_readonly=0)
Oct 14 05:15:30 np0005486808 nova_compute[259627]: 2025-10-14 09:15:30.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:30 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:30Z|01108|binding|INFO|Setting lport 96f6d888-2b4a-4ca3-a48c-4628d4143f60 down in Southbound
Oct 14 05:15:30 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:30Z|01109|binding|INFO|Removing iface tap96f6d888-2b ovn-installed in OVS
Oct 14 05:15:30 np0005486808 nova_compute[259627]: 2025-10-14 09:15:30.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:30 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:30.042 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:59:b2:50 10.100.0.4'], port_security=['fa:16:3e:59:b2:50 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '6b47438c-b2b5-48e7-a15c-ee7c5936da65', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbecee11-4892-4e36-88d8-98879af7bb1e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a080fae2f3c4e39a6cca225203f5ec6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '03d7ceab-aac8-44ec-88a3-f0ef79d050f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d77764d-de2c-4408-8882-77d03cb7288a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=96f6d888-2b4a-4ca3-a48c-4628d4143f60) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:15:30 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:30.043 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 96f6d888-2b4a-4ca3-a48c-4628d4143f60 in datapath fbecee11-4892-4e36-88d8-98879af7bb1e unbound from our chassis#033[00m
Oct 14 05:15:30 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:30.045 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbecee11-4892-4e36-88d8-98879af7bb1e#033[00m
Oct 14 05:15:30 np0005486808 nova_compute[259627]: 2025-10-14 09:15:30.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:30 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:30.063 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[61812b6c-310d-43d4-b7b2-550fe4601382]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:30 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:30.092 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[15f252c8-9b40-4049-bc8e-b20c7af865ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:30 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:30.095 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[2d7c0eb4-c530-4521-978d-834c1ca40f2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:30 np0005486808 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d00000068.scope: Deactivated successfully.
Oct 14 05:15:30 np0005486808 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d00000068.scope: Consumed 4.993s CPU time.
Oct 14 05:15:30 np0005486808 systemd-machined[214636]: Machine qemu-130-instance-00000068 terminated.
Oct 14 05:15:30 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:30.125 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[7f1d1ca0-ec30-4ff7-ba9d-18fc3d92ad4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:30 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:30.143 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0f691a26-071f-492f-96f9-9ac477a2efcd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbecee11-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:b3:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 23, 'rx_bytes': 1000, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 23, 'rx_bytes': 1000, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 700079, 'reachable_time': 38382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361762, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:30 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:30.159 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a9c5ec72-20a7-4f55-bd93-54b4b7f18c52]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700095, 'tstamp': 700095}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361763, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700099, 'tstamp': 700099}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361763, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:30 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:30.160 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbecee11-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:15:30 np0005486808 nova_compute[259627]: 2025-10-14 09:15:30.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:30 np0005486808 nova_compute[259627]: 2025-10-14 09:15:30.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:30 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:30.169 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbecee11-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:15:30 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:30.169 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:15:30 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:30.169 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbecee11-40, col_values=(('external_ids', {'iface-id': 'b1a95e67-ca39-4a56-9a91-3165df166ea9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:15:30 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:30.170 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:15:30 np0005486808 nova_compute[259627]: 2025-10-14 09:15:30.201 2 INFO nova.virt.libvirt.driver [-] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Instance destroyed successfully.#033[00m
Oct 14 05:15:30 np0005486808 nova_compute[259627]: 2025-10-14 09:15:30.201 2 DEBUG nova.objects.instance [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'resources' on Instance uuid 6b47438c-b2b5-48e7-a15c-ee7c5936da65 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:15:30 np0005486808 nova_compute[259627]: 2025-10-14 09:15:30.215 2 DEBUG nova.virt.libvirt.vif [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:202:202,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:15:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-265021143',display_name='tempest-ServersTestJSON-server-265021143',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-265021143',id=104,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:15:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0a080fae2f3c4e39a6cca225203f5ec6',ramdisk_id='',reservation_id='r-6w8v0jr5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1'
,image_min_ram='0',owner_project_name='tempest-ServersTestJSON-2060951674',owner_user_name='tempest-ServersTestJSON-2060951674-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:15:28Z,user_data=None,user_id='a287ef08fc5c4f218bf06cd2c7ed021e',uuid=6b47438c-b2b5-48e7-a15c-ee7c5936da65,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "96f6d888-2b4a-4ca3-a48c-4628d4143f60", "address": "fa:16:3e:59:b2:50", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96f6d888-2b", "ovs_interfaceid": "96f6d888-2b4a-4ca3-a48c-4628d4143f60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:15:30 np0005486808 nova_compute[259627]: 2025-10-14 09:15:30.215 2 DEBUG nova.network.os_vif_util [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converting VIF {"id": "96f6d888-2b4a-4ca3-a48c-4628d4143f60", "address": "fa:16:3e:59:b2:50", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96f6d888-2b", "ovs_interfaceid": "96f6d888-2b4a-4ca3-a48c-4628d4143f60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:15:30 np0005486808 nova_compute[259627]: 2025-10-14 09:15:30.216 2 DEBUG nova.network.os_vif_util [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:59:b2:50,bridge_name='br-int',has_traffic_filtering=True,id=96f6d888-2b4a-4ca3-a48c-4628d4143f60,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96f6d888-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:15:30 np0005486808 nova_compute[259627]: 2025-10-14 09:15:30.216 2 DEBUG os_vif [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:59:b2:50,bridge_name='br-int',has_traffic_filtering=True,id=96f6d888-2b4a-4ca3-a48c-4628d4143f60,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96f6d888-2b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:15:30 np0005486808 nova_compute[259627]: 2025-10-14 09:15:30.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:30 np0005486808 nova_compute[259627]: 2025-10-14 09:15:30.218 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap96f6d888-2b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:15:30 np0005486808 nova_compute[259627]: 2025-10-14 09:15:30.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:30 np0005486808 nova_compute[259627]: 2025-10-14 09:15:30.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:30 np0005486808 nova_compute[259627]: 2025-10-14 09:15:30.222 2 INFO os_vif [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:59:b2:50,bridge_name='br-int',has_traffic_filtering=True,id=96f6d888-2b4a-4ca3-a48c-4628d4143f60,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96f6d888-2b')#033[00m
Oct 14 05:15:30 np0005486808 nova_compute[259627]: 2025-10-14 09:15:30.538 2 DEBUG nova.network.neutron [req-7d7f483f-49c4-409b-805e-bff2237e4674 req-0b1a4ce3-1a6d-4a51-a20c-df232919ea1b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Updated VIF entry in instance network info cache for port 175a9914-0068-4aeb-b4b6-d501212d3374. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:15:30 np0005486808 nova_compute[259627]: 2025-10-14 09:15:30.539 2 DEBUG nova.network.neutron [req-7d7f483f-49c4-409b-805e-bff2237e4674 req-0b1a4ce3-1a6d-4a51-a20c-df232919ea1b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Updating instance_info_cache with network_info: [{"id": "175a9914-0068-4aeb-b4b6-d501212d3374", "address": "fa:16:3e:6a:d8:7e", "network": {"id": "8a53d982-8cf9-44ff-9b16-dda957aa7729", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-827923963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c4eae338e7d54d159033a20bc7460935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap175a9914-00", "ovs_interfaceid": "175a9914-0068-4aeb-b4b6-d501212d3374", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:15:30 np0005486808 nova_compute[259627]: 2025-10-14 09:15:30.565 2 DEBUG oslo_concurrency.lockutils [req-7d7f483f-49c4-409b-805e-bff2237e4674 req-0b1a4ce3-1a6d-4a51-a20c-df232919ea1b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-e1aea504-3ecf-4273-a867-66afb39de726" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:15:30 np0005486808 nova_compute[259627]: 2025-10-14 09:15:30.603 2 INFO nova.virt.libvirt.driver [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Deleting instance files /var/lib/nova/instances/6b47438c-b2b5-48e7-a15c-ee7c5936da65_del#033[00m
Oct 14 05:15:30 np0005486808 nova_compute[259627]: 2025-10-14 09:15:30.604 2 INFO nova.virt.libvirt.driver [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Deletion of /var/lib/nova/instances/6b47438c-b2b5-48e7-a15c-ee7c5936da65_del complete#033[00m
Oct 14 05:15:30 np0005486808 nova_compute[259627]: 2025-10-14 09:15:30.668 2 DEBUG nova.compute.manager [req-200252c5-bf43-4aa4-954b-914fb98a9d5c req-8b63af9e-55d3-4d22-885b-22ba411348ec 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Received event network-vif-unplugged-96f6d888-2b4a-4ca3-a48c-4628d4143f60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:15:30 np0005486808 nova_compute[259627]: 2025-10-14 09:15:30.668 2 DEBUG oslo_concurrency.lockutils [req-200252c5-bf43-4aa4-954b-914fb98a9d5c req-8b63af9e-55d3-4d22-885b-22ba411348ec 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:30 np0005486808 nova_compute[259627]: 2025-10-14 09:15:30.670 2 DEBUG oslo_concurrency.lockutils [req-200252c5-bf43-4aa4-954b-914fb98a9d5c req-8b63af9e-55d3-4d22-885b-22ba411348ec 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:30 np0005486808 nova_compute[259627]: 2025-10-14 09:15:30.670 2 DEBUG oslo_concurrency.lockutils [req-200252c5-bf43-4aa4-954b-914fb98a9d5c req-8b63af9e-55d3-4d22-885b-22ba411348ec 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:30 np0005486808 nova_compute[259627]: 2025-10-14 09:15:30.670 2 DEBUG nova.compute.manager [req-200252c5-bf43-4aa4-954b-914fb98a9d5c req-8b63af9e-55d3-4d22-885b-22ba411348ec 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] No waiting events found dispatching network-vif-unplugged-96f6d888-2b4a-4ca3-a48c-4628d4143f60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:15:30 np0005486808 nova_compute[259627]: 2025-10-14 09:15:30.671 2 DEBUG nova.compute.manager [req-200252c5-bf43-4aa4-954b-914fb98a9d5c req-8b63af9e-55d3-4d22-885b-22ba411348ec 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Received event network-vif-unplugged-96f6d888-2b4a-4ca3-a48c-4628d4143f60 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:15:30 np0005486808 nova_compute[259627]: 2025-10-14 09:15:30.676 2 INFO nova.compute.manager [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Took 0.71 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:15:30 np0005486808 nova_compute[259627]: 2025-10-14 09:15:30.677 2 DEBUG oslo.service.loopingcall [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:15:30 np0005486808 nova_compute[259627]: 2025-10-14 09:15:30.677 2 DEBUG nova.compute.manager [-] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:15:30 np0005486808 nova_compute[259627]: 2025-10-14 09:15:30.677 2 DEBUG nova.network.neutron [-] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:15:30 np0005486808 nova_compute[259627]: 2025-10-14 09:15:30.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:15:30 np0005486808 nova_compute[259627]: 2025-10-14 09:15:30.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:15:31 np0005486808 nova_compute[259627]: 2025-10-14 09:15:31.480 2 DEBUG nova.network.neutron [-] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:15:31 np0005486808 nova_compute[259627]: 2025-10-14 09:15:31.504 2 INFO nova.compute.manager [-] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Took 0.83 seconds to deallocate network for instance.#033[00m
Oct 14 05:15:31 np0005486808 nova_compute[259627]: 2025-10-14 09:15:31.553 2 DEBUG nova.compute.manager [req-942d255b-f509-4360-b3da-1d696c72bc84 req-63d01bd1-2bd7-41e7-8324-702a079d0eac 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Received event network-vif-deleted-96f6d888-2b4a-4ca3-a48c-4628d4143f60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:15:31 np0005486808 nova_compute[259627]: 2025-10-14 09:15:31.560 2 DEBUG oslo_concurrency.lockutils [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:31 np0005486808 nova_compute[259627]: 2025-10-14 09:15:31.560 2 DEBUG oslo_concurrency.lockutils [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:31 np0005486808 nova_compute[259627]: 2025-10-14 09:15:31.673 2 DEBUG oslo_concurrency.processutils [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:15:31 np0005486808 podman[361796]: 2025-10-14 09:15:31.716509282 +0000 UTC m=+0.112504414 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:15:31 np0005486808 podman[361795]: 2025-10-14 09:15:31.741444707 +0000 UTC m=+0.145606580 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 05:15:31 np0005486808 nova_compute[259627]: 2025-10-14 09:15:31.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:15:31 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1842: 305 pgs: 305 active+clean; 359 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 3.2 MiB/s wr, 429 op/s
Oct 14 05:15:32 np0005486808 nova_compute[259627]: 2025-10-14 09:15:32.006 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:15:32 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/859700460' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:15:32 np0005486808 nova_compute[259627]: 2025-10-14 09:15:32.174 2 DEBUG oslo_concurrency.processutils [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:15:32 np0005486808 nova_compute[259627]: 2025-10-14 09:15:32.182 2 DEBUG nova.compute.provider_tree [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:15:32 np0005486808 nova_compute[259627]: 2025-10-14 09:15:32.201 2 DEBUG nova.scheduler.client.report [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:15:32 np0005486808 nova_compute[259627]: 2025-10-14 09:15:32.235 2 DEBUG oslo_concurrency.lockutils [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:32 np0005486808 nova_compute[259627]: 2025-10-14 09:15:32.240 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.234s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:32 np0005486808 nova_compute[259627]: 2025-10-14 09:15:32.240 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:32 np0005486808 nova_compute[259627]: 2025-10-14 09:15:32.241 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:15:32 np0005486808 nova_compute[259627]: 2025-10-14 09:15:32.241 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:15:32 np0005486808 nova_compute[259627]: 2025-10-14 09:15:32.336 2 INFO nova.scheduler.client.report [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Deleted allocations for instance 6b47438c-b2b5-48e7-a15c-ee7c5936da65#033[00m
Oct 14 05:15:32 np0005486808 nova_compute[259627]: 2025-10-14 09:15:32.432 2 DEBUG oslo_concurrency.lockutils [None req-4608aa1f-fed0-458f-957e-2d44acf6ff49 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.470s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e262 do_prune osdmap full prune enabled
Oct 14 05:15:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e263 e263: 3 total, 3 up, 3 in
Oct 14 05:15:32 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e263: 3 total, 3 up, 3 in
Oct 14 05:15:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:15:32 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2159755200' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:15:32 np0005486808 nova_compute[259627]: 2025-10-14 09:15:32.712 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:15:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:15:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:15:32 np0005486808 nova_compute[259627]: 2025-10-14 09:15:32.787 2 DEBUG nova.compute.manager [req-fd3dd266-83a1-4251-be77-f46b625d35b6 req-bffec553-61b2-41a3-8de0-a5c7bc20f092 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Received event network-vif-plugged-96f6d888-2b4a-4ca3-a48c-4628d4143f60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:15:32 np0005486808 nova_compute[259627]: 2025-10-14 09:15:32.787 2 DEBUG oslo_concurrency.lockutils [req-fd3dd266-83a1-4251-be77-f46b625d35b6 req-bffec553-61b2-41a3-8de0-a5c7bc20f092 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:32 np0005486808 nova_compute[259627]: 2025-10-14 09:15:32.787 2 DEBUG oslo_concurrency.lockutils [req-fd3dd266-83a1-4251-be77-f46b625d35b6 req-bffec553-61b2-41a3-8de0-a5c7bc20f092 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:32 np0005486808 nova_compute[259627]: 2025-10-14 09:15:32.787 2 DEBUG oslo_concurrency.lockutils [req-fd3dd266-83a1-4251-be77-f46b625d35b6 req-bffec553-61b2-41a3-8de0-a5c7bc20f092 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6b47438c-b2b5-48e7-a15c-ee7c5936da65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:32 np0005486808 nova_compute[259627]: 2025-10-14 09:15:32.788 2 DEBUG nova.compute.manager [req-fd3dd266-83a1-4251-be77-f46b625d35b6 req-bffec553-61b2-41a3-8de0-a5c7bc20f092 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] No waiting events found dispatching network-vif-plugged-96f6d888-2b4a-4ca3-a48c-4628d4143f60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:15:32 np0005486808 nova_compute[259627]: 2025-10-14 09:15:32.788 2 WARNING nova.compute.manager [req-fd3dd266-83a1-4251-be77-f46b625d35b6 req-bffec553-61b2-41a3-8de0-a5c7bc20f092 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Received unexpected event network-vif-plugged-96f6d888-2b4a-4ca3-a48c-4628d4143f60 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:15:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:15:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:15:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:15:32
Oct 14 05:15:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:15:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:15:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['.rgw.root', 'vms', 'images', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'volumes', '.mgr']
Oct 14 05:15:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:15:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:15:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:15:32 np0005486808 nova_compute[259627]: 2025-10-14 09:15:32.804 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:15:32 np0005486808 nova_compute[259627]: 2025-10-14 09:15:32.805 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:15:32 np0005486808 nova_compute[259627]: 2025-10-14 09:15:32.811 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:15:32 np0005486808 nova_compute[259627]: 2025-10-14 09:15:32.811 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:15:32 np0005486808 nova_compute[259627]: 2025-10-14 09:15:32.817 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000065 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:15:32 np0005486808 nova_compute[259627]: 2025-10-14 09:15:32.817 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000065 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:15:32 np0005486808 nova_compute[259627]: 2025-10-14 09:15:32.818 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000065 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:15:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:15:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e263 do_prune osdmap full prune enabled
Oct 14 05:15:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e264 e264: 3 total, 3 up, 3 in
Oct 14 05:15:32 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e264: 3 total, 3 up, 3 in
Oct 14 05:15:33 np0005486808 nova_compute[259627]: 2025-10-14 09:15:33.056 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:15:33 np0005486808 nova_compute[259627]: 2025-10-14 09:15:33.057 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3149MB free_disk=59.81802749633789GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:15:33 np0005486808 nova_compute[259627]: 2025-10-14 09:15:33.057 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:33 np0005486808 nova_compute[259627]: 2025-10-14 09:15:33.057 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:33 np0005486808 nova_compute[259627]: 2025-10-14 09:15:33.146 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance d46b6953-9413-4e6a-94f7-7b5ac9634c16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:15:33 np0005486808 nova_compute[259627]: 2025-10-14 09:15:33.146 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance e1aea504-3ecf-4273-a867-66afb39de726 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:15:33 np0005486808 nova_compute[259627]: 2025-10-14 09:15:33.146 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 33969555-fe06-4613-b244-d03c9b4180ba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:15:33 np0005486808 nova_compute[259627]: 2025-10-14 09:15:33.146 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:15:33 np0005486808 nova_compute[259627]: 2025-10-14 09:15:33.147 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:15:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:15:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:15:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:15:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:15:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:15:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:15:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:15:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:15:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:15:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:15:33 np0005486808 nova_compute[259627]: 2025-10-14 09:15:33.228 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:15:33 np0005486808 nova_compute[259627]: 2025-10-14 09:15:33.503 2 DEBUG oslo_concurrency.lockutils [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Acquiring lock "e1aea504-3ecf-4273-a867-66afb39de726" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:33 np0005486808 nova_compute[259627]: 2025-10-14 09:15:33.504 2 DEBUG oslo_concurrency.lockutils [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:33 np0005486808 nova_compute[259627]: 2025-10-14 09:15:33.504 2 DEBUG oslo_concurrency.lockutils [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Acquiring lock "e1aea504-3ecf-4273-a867-66afb39de726-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:33 np0005486808 nova_compute[259627]: 2025-10-14 09:15:33.504 2 DEBUG oslo_concurrency.lockutils [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:33 np0005486808 nova_compute[259627]: 2025-10-14 09:15:33.505 2 DEBUG oslo_concurrency.lockutils [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:33 np0005486808 nova_compute[259627]: 2025-10-14 09:15:33.506 2 INFO nova.compute.manager [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Terminating instance#033[00m
Oct 14 05:15:33 np0005486808 nova_compute[259627]: 2025-10-14 09:15:33.507 2 DEBUG nova.compute.manager [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:15:33 np0005486808 nova_compute[259627]: 2025-10-14 09:15:33.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:33 np0005486808 kernel: tap175a9914-00 (unregistering): left promiscuous mode
Oct 14 05:15:33 np0005486808 NetworkManager[44885]: <info>  [1760433333.5762] device (tap175a9914-00): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:15:33 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:33Z|01110|binding|INFO|Releasing lport 175a9914-0068-4aeb-b4b6-d501212d3374 from this chassis (sb_readonly=0)
Oct 14 05:15:33 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:33Z|01111|binding|INFO|Setting lport 175a9914-0068-4aeb-b4b6-d501212d3374 down in Southbound
Oct 14 05:15:33 np0005486808 nova_compute[259627]: 2025-10-14 09:15:33.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:33 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:33Z|01112|binding|INFO|Removing iface tap175a9914-00 ovn-installed in OVS
Oct 14 05:15:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:33.589 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:d8:7e 10.100.0.7'], port_security=['fa:16:3e:6a:d8:7e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e1aea504-3ecf-4273-a867-66afb39de726', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8a53d982-8cf9-44ff-9b16-dda957aa7729', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4eae338e7d54d159033a20bc7460935', 'neutron:revision_number': '8', 'neutron:security_group_ids': '9a0a095f-2308-407d-90d9-6acaa688f93c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab8a6447-c4e7-4ebf-b042-eb7d83ec88dc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=175a9914-0068-4aeb-b4b6-d501212d3374) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:15:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:33.590 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 175a9914-0068-4aeb-b4b6-d501212d3374 in datapath 8a53d982-8cf9-44ff-9b16-dda957aa7729 unbound from our chassis#033[00m
Oct 14 05:15:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:33.591 162547 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 8a53d982-8cf9-44ff-9b16-dda957aa7729 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct 14 05:15:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:33.592 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[369d570b-928a-4470-b3f1-d2e0e6f9bd30]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:33 np0005486808 nova_compute[259627]: 2025-10-14 09:15:33.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:33 np0005486808 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d00000065.scope: Deactivated successfully.
Oct 14 05:15:33 np0005486808 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d00000065.scope: Consumed 12.722s CPU time.
Oct 14 05:15:33 np0005486808 systemd-machined[214636]: Machine qemu-128-instance-00000065 terminated.
Oct 14 05:15:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:15:33 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1712277105' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:15:33 np0005486808 nova_compute[259627]: 2025-10-14 09:15:33.713 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:15:33 np0005486808 nova_compute[259627]: 2025-10-14 09:15:33.724 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:15:33 np0005486808 nova_compute[259627]: 2025-10-14 09:15:33.742 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:15:33 np0005486808 nova_compute[259627]: 2025-10-14 09:15:33.748 2 INFO nova.virt.libvirt.driver [-] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Instance destroyed successfully.#033[00m
Oct 14 05:15:33 np0005486808 nova_compute[259627]: 2025-10-14 09:15:33.748 2 DEBUG nova.objects.instance [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lazy-loading 'resources' on Instance uuid e1aea504-3ecf-4273-a867-66afb39de726 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:15:33 np0005486808 nova_compute[259627]: 2025-10-14 09:15:33.764 2 DEBUG nova.virt.libvirt.vif [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:14:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-1533575249',display_name='tempest-ServerRescueTestJSONUnderV235-server-1533575249',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-1533575249',id=101,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:15:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c4eae338e7d54d159033a20bc7460935',ramdisk_id='',reservation_id='r-tm6flkse',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-2037260230',owner_user_name='tempest-ServerRescueTestJSONUnderV235-2037260230-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:15:12Z,user_data=None,user_id='648aaa75d8974d439d8ebe331c3d6568',uuid=e1aea504-3ecf-4273-a867-66afb39de726,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "175a9914-0068-4aeb-b4b6-d501212d3374", "address": "fa:16:3e:6a:d8:7e", "network": {"id": "8a53d982-8cf9-44ff-9b16-dda957aa7729", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-827923963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c4eae338e7d54d159033a20bc7460935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap175a9914-00", "ovs_interfaceid": "175a9914-0068-4aeb-b4b6-d501212d3374", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:15:33 np0005486808 nova_compute[259627]: 2025-10-14 09:15:33.764 2 DEBUG nova.network.os_vif_util [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Converting VIF {"id": "175a9914-0068-4aeb-b4b6-d501212d3374", "address": "fa:16:3e:6a:d8:7e", "network": {"id": "8a53d982-8cf9-44ff-9b16-dda957aa7729", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-827923963-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c4eae338e7d54d159033a20bc7460935", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap175a9914-00", "ovs_interfaceid": "175a9914-0068-4aeb-b4b6-d501212d3374", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:15:33 np0005486808 nova_compute[259627]: 2025-10-14 09:15:33.765 2 DEBUG nova.network.os_vif_util [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6a:d8:7e,bridge_name='br-int',has_traffic_filtering=True,id=175a9914-0068-4aeb-b4b6-d501212d3374,network=Network(8a53d982-8cf9-44ff-9b16-dda957aa7729),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap175a9914-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:15:33 np0005486808 nova_compute[259627]: 2025-10-14 09:15:33.765 2 DEBUG os_vif [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6a:d8:7e,bridge_name='br-int',has_traffic_filtering=True,id=175a9914-0068-4aeb-b4b6-d501212d3374,network=Network(8a53d982-8cf9-44ff-9b16-dda957aa7729),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap175a9914-00') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:15:33 np0005486808 nova_compute[259627]: 2025-10-14 09:15:33.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:33 np0005486808 nova_compute[259627]: 2025-10-14 09:15:33.767 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap175a9914-00, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:15:33 np0005486808 nova_compute[259627]: 2025-10-14 09:15:33.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:33 np0005486808 nova_compute[259627]: 2025-10-14 09:15:33.770 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:15:33 np0005486808 nova_compute[259627]: 2025-10-14 09:15:33.770 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:33 np0005486808 nova_compute[259627]: 2025-10-14 09:15:33.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:33 np0005486808 nova_compute[259627]: 2025-10-14 09:15:33.772 2 INFO os_vif [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6a:d8:7e,bridge_name='br-int',has_traffic_filtering=True,id=175a9914-0068-4aeb-b4b6-d501212d3374,network=Network(8a53d982-8cf9-44ff-9b16-dda957aa7729),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap175a9914-00')#033[00m
Oct 14 05:15:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e264 do_prune osdmap full prune enabled
Oct 14 05:15:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e265 e265: 3 total, 3 up, 3 in
Oct 14 05:15:33 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e265: 3 total, 3 up, 3 in
Oct 14 05:15:33 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1846: 305 pgs: 305 active+clean; 359 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 4.3 MiB/s wr, 349 op/s
Oct 14 05:15:34 np0005486808 nova_compute[259627]: 2025-10-14 09:15:34.657 2 INFO nova.virt.libvirt.driver [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Deleting instance files /var/lib/nova/instances/e1aea504-3ecf-4273-a867-66afb39de726_del#033[00m
Oct 14 05:15:34 np0005486808 nova_compute[259627]: 2025-10-14 09:15:34.658 2 INFO nova.virt.libvirt.driver [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Deletion of /var/lib/nova/instances/e1aea504-3ecf-4273-a867-66afb39de726_del complete#033[00m
Oct 14 05:15:34 np0005486808 nova_compute[259627]: 2025-10-14 09:15:34.727 2 INFO nova.compute.manager [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Took 1.22 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:15:34 np0005486808 nova_compute[259627]: 2025-10-14 09:15:34.728 2 DEBUG oslo.service.loopingcall [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:15:34 np0005486808 nova_compute[259627]: 2025-10-14 09:15:34.729 2 DEBUG nova.compute.manager [-] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:15:34 np0005486808 nova_compute[259627]: 2025-10-14 09:15:34.730 2 DEBUG nova.network.neutron [-] [instance: e1aea504-3ecf-4273-a867-66afb39de726] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:15:34 np0005486808 nova_compute[259627]: 2025-10-14 09:15:34.970 2 DEBUG nova.compute.manager [req-3d0d1b92-2da9-4fa0-87cf-5793119adb5b req-6f4f3875-a5a1-4a55-862d-b564ffe1c80c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Received event network-vif-unplugged-175a9914-0068-4aeb-b4b6-d501212d3374 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:15:34 np0005486808 nova_compute[259627]: 2025-10-14 09:15:34.971 2 DEBUG oslo_concurrency.lockutils [req-3d0d1b92-2da9-4fa0-87cf-5793119adb5b req-6f4f3875-a5a1-4a55-862d-b564ffe1c80c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e1aea504-3ecf-4273-a867-66afb39de726-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:34 np0005486808 nova_compute[259627]: 2025-10-14 09:15:34.971 2 DEBUG oslo_concurrency.lockutils [req-3d0d1b92-2da9-4fa0-87cf-5793119adb5b req-6f4f3875-a5a1-4a55-862d-b564ffe1c80c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:34 np0005486808 nova_compute[259627]: 2025-10-14 09:15:34.972 2 DEBUG oslo_concurrency.lockutils [req-3d0d1b92-2da9-4fa0-87cf-5793119adb5b req-6f4f3875-a5a1-4a55-862d-b564ffe1c80c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:34 np0005486808 nova_compute[259627]: 2025-10-14 09:15:34.973 2 DEBUG nova.compute.manager [req-3d0d1b92-2da9-4fa0-87cf-5793119adb5b req-6f4f3875-a5a1-4a55-862d-b564ffe1c80c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] No waiting events found dispatching network-vif-unplugged-175a9914-0068-4aeb-b4b6-d501212d3374 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:15:34 np0005486808 nova_compute[259627]: 2025-10-14 09:15:34.973 2 DEBUG nova.compute.manager [req-3d0d1b92-2da9-4fa0-87cf-5793119adb5b req-6f4f3875-a5a1-4a55-862d-b564ffe1c80c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Received event network-vif-unplugged-175a9914-0068-4aeb-b4b6-d501212d3374 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:15:34 np0005486808 nova_compute[259627]: 2025-10-14 09:15:34.974 2 DEBUG nova.compute.manager [req-3d0d1b92-2da9-4fa0-87cf-5793119adb5b req-6f4f3875-a5a1-4a55-862d-b564ffe1c80c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Received event network-vif-plugged-175a9914-0068-4aeb-b4b6-d501212d3374 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:15:34 np0005486808 nova_compute[259627]: 2025-10-14 09:15:34.975 2 DEBUG oslo_concurrency.lockutils [req-3d0d1b92-2da9-4fa0-87cf-5793119adb5b req-6f4f3875-a5a1-4a55-862d-b564ffe1c80c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e1aea504-3ecf-4273-a867-66afb39de726-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:34 np0005486808 nova_compute[259627]: 2025-10-14 09:15:34.975 2 DEBUG oslo_concurrency.lockutils [req-3d0d1b92-2da9-4fa0-87cf-5793119adb5b req-6f4f3875-a5a1-4a55-862d-b564ffe1c80c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:34 np0005486808 nova_compute[259627]: 2025-10-14 09:15:34.976 2 DEBUG oslo_concurrency.lockutils [req-3d0d1b92-2da9-4fa0-87cf-5793119adb5b req-6f4f3875-a5a1-4a55-862d-b564ffe1c80c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:34 np0005486808 nova_compute[259627]: 2025-10-14 09:15:34.977 2 DEBUG nova.compute.manager [req-3d0d1b92-2da9-4fa0-87cf-5793119adb5b req-6f4f3875-a5a1-4a55-862d-b564ffe1c80c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] No waiting events found dispatching network-vif-plugged-175a9914-0068-4aeb-b4b6-d501212d3374 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:15:34 np0005486808 nova_compute[259627]: 2025-10-14 09:15:34.977 2 WARNING nova.compute.manager [req-3d0d1b92-2da9-4fa0-87cf-5793119adb5b req-6f4f3875-a5a1-4a55-862d-b564ffe1c80c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Received unexpected event network-vif-plugged-175a9914-0068-4aeb-b4b6-d501212d3374 for instance with vm_state rescued and task_state deleting.#033[00m
Oct 14 05:15:35 np0005486808 nova_compute[259627]: 2025-10-14 09:15:35.696 2 DEBUG nova.network.neutron [-] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:15:35 np0005486808 nova_compute[259627]: 2025-10-14 09:15:35.719 2 INFO nova.compute.manager [-] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Took 0.99 seconds to deallocate network for instance.#033[00m
Oct 14 05:15:35 np0005486808 nova_compute[259627]: 2025-10-14 09:15:35.771 2 DEBUG oslo_concurrency.lockutils [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:35 np0005486808 nova_compute[259627]: 2025-10-14 09:15:35.772 2 DEBUG oslo_concurrency.lockutils [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:35 np0005486808 nova_compute[259627]: 2025-10-14 09:15:35.811 2 DEBUG nova.compute.manager [req-85fd60c1-7642-4c36-844c-aa52f4c42d34 req-bae79897-53ea-43a2-9541-1286b9dc437a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Received event network-vif-deleted-175a9914-0068-4aeb-b4b6-d501212d3374 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:15:35 np0005486808 nova_compute[259627]: 2025-10-14 09:15:35.845 2 DEBUG oslo_concurrency.lockutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "c28cafe5-40e7-47f9-8793-6193487fccc3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:35 np0005486808 nova_compute[259627]: 2025-10-14 09:15:35.846 2 DEBUG oslo_concurrency.lockutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "c28cafe5-40e7-47f9-8793-6193487fccc3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:35 np0005486808 nova_compute[259627]: 2025-10-14 09:15:35.867 2 INFO nova.compute.manager [None req-0b0f90e8-c2b8-43ce-8854-d98ac3f27c1b e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Get console output#033[00m
Oct 14 05:15:35 np0005486808 nova_compute[259627]: 2025-10-14 09:15:35.877 2 DEBUG nova.compute.manager [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:15:35 np0005486808 nova_compute[259627]: 2025-10-14 09:15:35.888 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 14 05:15:35 np0005486808 nova_compute[259627]: 2025-10-14 09:15:35.913 2 DEBUG oslo_concurrency.processutils [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:15:35 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1847: 305 pgs: 305 active+clean; 200 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 4.3 MiB/s wr, 559 op/s
Oct 14 05:15:36 np0005486808 nova_compute[259627]: 2025-10-14 09:15:36.008 2 DEBUG oslo_concurrency.lockutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:36 np0005486808 nova_compute[259627]: 2025-10-14 09:15:36.224 2 DEBUG oslo_concurrency.lockutils [None req-3fddcbbd-a746-4874-8801-725556650240 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "33969555-fe06-4613-b244-d03c9b4180ba" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:36 np0005486808 nova_compute[259627]: 2025-10-14 09:15:36.225 2 DEBUG oslo_concurrency.lockutils [None req-3fddcbbd-a746-4874-8801-725556650240 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:36 np0005486808 nova_compute[259627]: 2025-10-14 09:15:36.225 2 DEBUG nova.compute.manager [None req-3fddcbbd-a746-4874-8801-725556650240 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:15:36 np0005486808 nova_compute[259627]: 2025-10-14 09:15:36.231 2 DEBUG nova.compute.manager [None req-3fddcbbd-a746-4874-8801-725556650240 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Oct 14 05:15:36 np0005486808 nova_compute[259627]: 2025-10-14 09:15:36.233 2 DEBUG nova.objects.instance [None req-3fddcbbd-a746-4874-8801-725556650240 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'flavor' on Instance uuid 33969555-fe06-4613-b244-d03c9b4180ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:15:36 np0005486808 nova_compute[259627]: 2025-10-14 09:15:36.270 2 DEBUG nova.virt.libvirt.driver [None req-3fddcbbd-a746-4874-8801-725556650240 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct 14 05:15:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:15:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3098411361' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:15:36 np0005486808 nova_compute[259627]: 2025-10-14 09:15:36.369 2 DEBUG oslo_concurrency.processutils [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:15:36 np0005486808 nova_compute[259627]: 2025-10-14 09:15:36.380 2 DEBUG nova.compute.provider_tree [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:15:36 np0005486808 nova_compute[259627]: 2025-10-14 09:15:36.395 2 DEBUG nova.scheduler.client.report [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:15:36 np0005486808 nova_compute[259627]: 2025-10-14 09:15:36.416 2 DEBUG oslo_concurrency.lockutils [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:36 np0005486808 nova_compute[259627]: 2025-10-14 09:15:36.421 2 DEBUG oslo_concurrency.lockutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.413s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:36 np0005486808 nova_compute[259627]: 2025-10-14 09:15:36.431 2 DEBUG nova.virt.hardware [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:15:36 np0005486808 nova_compute[259627]: 2025-10-14 09:15:36.432 2 INFO nova.compute.claims [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:15:36 np0005486808 nova_compute[259627]: 2025-10-14 09:15:36.458 2 INFO nova.scheduler.client.report [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Deleted allocations for instance e1aea504-3ecf-4273-a867-66afb39de726#033[00m
Oct 14 05:15:36 np0005486808 nova_compute[259627]: 2025-10-14 09:15:36.565 2 DEBUG oslo_concurrency.lockutils [None req-89030293-9439-412a-940e-e23ce27963c8 648aaa75d8974d439d8ebe331c3d6568 c4eae338e7d54d159033a20bc7460935 - - default default] Lock "e1aea504-3ecf-4273-a867-66afb39de726" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:36 np0005486808 nova_compute[259627]: 2025-10-14 09:15:36.627 2 DEBUG oslo_concurrency.processutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:15:36 np0005486808 nova_compute[259627]: 2025-10-14 09:15:36.771 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:15:36 np0005486808 nova_compute[259627]: 2025-10-14 09:15:36.772 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:15:36 np0005486808 nova_compute[259627]: 2025-10-14 09:15:36.800 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 14 05:15:36 np0005486808 nova_compute[259627]: 2025-10-14 09:15:36.800 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:15:36 np0005486808 nova_compute[259627]: 2025-10-14 09:15:36.800 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:15:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:15:37 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2424274207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:15:37 np0005486808 nova_compute[259627]: 2025-10-14 09:15:37.071 2 DEBUG oslo_concurrency.processutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:15:37 np0005486808 nova_compute[259627]: 2025-10-14 09:15:37.080 2 DEBUG nova.compute.provider_tree [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:15:37 np0005486808 nova_compute[259627]: 2025-10-14 09:15:37.097 2 DEBUG nova.scheduler.client.report [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:15:37 np0005486808 nova_compute[259627]: 2025-10-14 09:15:37.122 2 DEBUG oslo_concurrency.lockutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:37 np0005486808 nova_compute[259627]: 2025-10-14 09:15:37.124 2 DEBUG nova.compute.manager [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:15:37 np0005486808 nova_compute[259627]: 2025-10-14 09:15:37.194 2 DEBUG nova.compute.manager [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:15:37 np0005486808 nova_compute[259627]: 2025-10-14 09:15:37.195 2 DEBUG nova.network.neutron [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:15:37 np0005486808 nova_compute[259627]: 2025-10-14 09:15:37.221 2 INFO nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:15:37 np0005486808 nova_compute[259627]: 2025-10-14 09:15:37.240 2 DEBUG nova.compute.manager [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:15:37 np0005486808 nova_compute[259627]: 2025-10-14 09:15:37.354 2 DEBUG nova.compute.manager [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:15:37 np0005486808 nova_compute[259627]: 2025-10-14 09:15:37.356 2 DEBUG nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:15:37 np0005486808 nova_compute[259627]: 2025-10-14 09:15:37.356 2 INFO nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Creating image(s)#033[00m
Oct 14 05:15:37 np0005486808 nova_compute[259627]: 2025-10-14 09:15:37.384 2 DEBUG nova.storage.rbd_utils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image c28cafe5-40e7-47f9-8793-6193487fccc3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:15:37 np0005486808 nova_compute[259627]: 2025-10-14 09:15:37.409 2 DEBUG nova.storage.rbd_utils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image c28cafe5-40e7-47f9-8793-6193487fccc3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:15:37 np0005486808 nova_compute[259627]: 2025-10-14 09:15:37.433 2 DEBUG nova.storage.rbd_utils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image c28cafe5-40e7-47f9-8793-6193487fccc3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:15:37 np0005486808 nova_compute[259627]: 2025-10-14 09:15:37.436 2 DEBUG oslo_concurrency.processutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:15:37 np0005486808 nova_compute[259627]: 2025-10-14 09:15:37.526 2 DEBUG oslo_concurrency.processutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:15:37 np0005486808 nova_compute[259627]: 2025-10-14 09:15:37.527 2 DEBUG oslo_concurrency.lockutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:37 np0005486808 nova_compute[259627]: 2025-10-14 09:15:37.528 2 DEBUG oslo_concurrency.lockutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:37 np0005486808 nova_compute[259627]: 2025-10-14 09:15:37.528 2 DEBUG oslo_concurrency.lockutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:37 np0005486808 nova_compute[259627]: 2025-10-14 09:15:37.553 2 DEBUG nova.storage.rbd_utils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image c28cafe5-40e7-47f9-8793-6193487fccc3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:15:37 np0005486808 nova_compute[259627]: 2025-10-14 09:15:37.557 2 DEBUG oslo_concurrency.processutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 c28cafe5-40e7-47f9-8793-6193487fccc3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:15:37 np0005486808 nova_compute[259627]: 2025-10-14 09:15:37.699 2 DEBUG nova.policy [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a287ef08fc5c4f218bf06cd2c7ed021e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0a080fae2f3c4e39a6cca225203f5ec6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:15:37 np0005486808 nova_compute[259627]: 2025-10-14 09:15:37.856 2 DEBUG oslo_concurrency.processutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 c28cafe5-40e7-47f9-8793-6193487fccc3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:15:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:15:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e265 do_prune osdmap full prune enabled
Oct 14 05:15:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 e266: 3 total, 3 up, 3 in
Oct 14 05:15:37 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e266: 3 total, 3 up, 3 in
Oct 14 05:15:37 np0005486808 nova_compute[259627]: 2025-10-14 09:15:37.949 2 DEBUG nova.storage.rbd_utils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] resizing rbd image c28cafe5-40e7-47f9-8793-6193487fccc3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:15:37 np0005486808 nova_compute[259627]: 2025-10-14 09:15:37.988 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:15:37 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1849: 305 pgs: 305 active+clean; 200 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 160 KiB/s rd, 38 KiB/s wr, 231 op/s
Oct 14 05:15:38 np0005486808 nova_compute[259627]: 2025-10-14 09:15:38.071 2 DEBUG nova.objects.instance [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'migration_context' on Instance uuid c28cafe5-40e7-47f9-8793-6193487fccc3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:15:38 np0005486808 nova_compute[259627]: 2025-10-14 09:15:38.084 2 DEBUG nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:15:38 np0005486808 nova_compute[259627]: 2025-10-14 09:15:38.085 2 DEBUG nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Ensure instance console log exists: /var/lib/nova/instances/c28cafe5-40e7-47f9-8793-6193487fccc3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:15:38 np0005486808 nova_compute[259627]: 2025-10-14 09:15:38.085 2 DEBUG oslo_concurrency.lockutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:38 np0005486808 nova_compute[259627]: 2025-10-14 09:15:38.085 2 DEBUG oslo_concurrency.lockutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:38 np0005486808 nova_compute[259627]: 2025-10-14 09:15:38.086 2 DEBUG oslo_concurrency.lockutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:38 np0005486808 nova_compute[259627]: 2025-10-14 09:15:38.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:38 np0005486808 kernel: tap7e42bf44-c1 (unregistering): left promiscuous mode
Oct 14 05:15:38 np0005486808 NetworkManager[44885]: <info>  [1760433338.5898] device (tap7e42bf44-c1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:15:38 np0005486808 nova_compute[259627]: 2025-10-14 09:15:38.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:38 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:38Z|01113|binding|INFO|Releasing lport 7e42bf44-c1f8-49df-bd5f-a26abe43a832 from this chassis (sb_readonly=0)
Oct 14 05:15:38 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:38Z|01114|binding|INFO|Setting lport 7e42bf44-c1f8-49df-bd5f-a26abe43a832 down in Southbound
Oct 14 05:15:38 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:38Z|01115|binding|INFO|Removing iface tap7e42bf44-c1 ovn-installed in OVS
Oct 14 05:15:38 np0005486808 nova_compute[259627]: 2025-10-14 09:15:38.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:38.610 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:8d:8f 10.100.0.4'], port_security=['fa:16:3e:20:8d:8f 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '33969555-fe06-4613-b244-d03c9b4180ba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7f4225de-9f3f-48e2-bad7-a89cf4884a2e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d24993a343a425dbddac7e32be0c86b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7fb2d92f-9be1-4133-9fba-da943dad4162', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.198'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=10a0b33e-96f5-46d7-a240-9f59c55a6b07, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=7e42bf44-c1f8-49df-bd5f-a26abe43a832) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:15:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:38.611 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 7e42bf44-c1f8-49df-bd5f-a26abe43a832 in datapath 7f4225de-9f3f-48e2-bad7-a89cf4884a2e unbound from our chassis#033[00m
Oct 14 05:15:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:38.613 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7f4225de-9f3f-48e2-bad7-a89cf4884a2e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:15:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:38.613 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4dea3aa5-c2a8-48af-b481-c41fd6dbf14f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:38.614 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e namespace which is not needed anymore#033[00m
Oct 14 05:15:38 np0005486808 nova_compute[259627]: 2025-10-14 09:15:38.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:38 np0005486808 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d00000067.scope: Deactivated successfully.
Oct 14 05:15:38 np0005486808 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d00000067.scope: Consumed 13.667s CPU time.
Oct 14 05:15:38 np0005486808 systemd-machined[214636]: Machine qemu-129-instance-00000067 terminated.
Oct 14 05:15:38 np0005486808 neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e[361317]: [NOTICE]   (361321) : haproxy version is 2.8.14-c23fe91
Oct 14 05:15:38 np0005486808 neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e[361317]: [NOTICE]   (361321) : path to executable is /usr/sbin/haproxy
Oct 14 05:15:38 np0005486808 neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e[361317]: [WARNING]  (361321) : Exiting Master process...
Oct 14 05:15:38 np0005486808 neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e[361317]: [ALERT]    (361321) : Current worker (361323) exited with code 143 (Terminated)
Oct 14 05:15:38 np0005486808 neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e[361317]: [WARNING]  (361321) : All workers exited. Exiting... (0)
Oct 14 05:15:38 np0005486808 systemd[1]: libpod-4c63af5c6a6f786080410e90a8ec39ac0e90c54e5a94f9cfab106c88cb87e5a1.scope: Deactivated successfully.
Oct 14 05:15:38 np0005486808 conmon[361317]: conmon 4c63af5c6a6f78608041 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4c63af5c6a6f786080410e90a8ec39ac0e90c54e5a94f9cfab106c88cb87e5a1.scope/container/memory.events
Oct 14 05:15:38 np0005486808 podman[362179]: 2025-10-14 09:15:38.765420443 +0000 UTC m=+0.051085680 container died 4c63af5c6a6f786080410e90a8ec39ac0e90c54e5a94f9cfab106c88cb87e5a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:15:38 np0005486808 nova_compute[259627]: 2025-10-14 09:15:38.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:38 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4c63af5c6a6f786080410e90a8ec39ac0e90c54e5a94f9cfab106c88cb87e5a1-userdata-shm.mount: Deactivated successfully.
Oct 14 05:15:38 np0005486808 systemd[1]: var-lib-containers-storage-overlay-5839d52bde3e6e8745834d54c6e47047a57e1518d17430d80d50930db816dbf4-merged.mount: Deactivated successfully.
Oct 14 05:15:38 np0005486808 podman[362179]: 2025-10-14 09:15:38.818221955 +0000 UTC m=+0.103887222 container cleanup 4c63af5c6a6f786080410e90a8ec39ac0e90c54e5a94f9cfab106c88cb87e5a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 05:15:38 np0005486808 systemd[1]: libpod-conmon-4c63af5c6a6f786080410e90a8ec39ac0e90c54e5a94f9cfab106c88cb87e5a1.scope: Deactivated successfully.
Oct 14 05:15:38 np0005486808 podman[362211]: 2025-10-14 09:15:38.89836354 +0000 UTC m=+0.050445825 container remove 4c63af5c6a6f786080410e90a8ec39ac0e90c54e5a94f9cfab106c88cb87e5a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:15:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:38.909 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d40d17e1-f028-4109-94f8-a231969d3036]: (4, ('Tue Oct 14 09:15:38 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e (4c63af5c6a6f786080410e90a8ec39ac0e90c54e5a94f9cfab106c88cb87e5a1)\n4c63af5c6a6f786080410e90a8ec39ac0e90c54e5a94f9cfab106c88cb87e5a1\nTue Oct 14 09:15:38 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e (4c63af5c6a6f786080410e90a8ec39ac0e90c54e5a94f9cfab106c88cb87e5a1)\n4c63af5c6a6f786080410e90a8ec39ac0e90c54e5a94f9cfab106c88cb87e5a1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:38.911 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[37708e8a-39ee-4b6b-b1c2-f87ad9f8f0d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:38.912 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7f4225de-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:15:38 np0005486808 nova_compute[259627]: 2025-10-14 09:15:38.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:38 np0005486808 kernel: tap7f4225de-90: left promiscuous mode
Oct 14 05:15:38 np0005486808 nova_compute[259627]: 2025-10-14 09:15:38.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:38.949 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e252a4e6-84a1-4d07-8e01-6010ed358121]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:38.983 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[28f6deb7-0a3e-442c-81c5-f40ca0f52814]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:38.984 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f5c55df0-b962-45ce-9c41-752fc2516cb0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:39.002 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[161d6710-551b-4b99-b0fa-d9ea3598d892]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 709309, 'reachable_time': 21217, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362241, 'error': None, 'target': 'ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:39 np0005486808 systemd[1]: run-netns-ovnmeta\x2d7f4225de\x2d9f3f\x2d48e2\x2dbad7\x2da89cf4884a2e.mount: Deactivated successfully.
Oct 14 05:15:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:39.008 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:15:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:39.008 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[14bee3e0-53c1-430c-ab46-72e0a9ea5928]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:39 np0005486808 nova_compute[259627]: 2025-10-14 09:15:39.294 2 INFO nova.virt.libvirt.driver [None req-3fddcbbd-a746-4874-8801-725556650240 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Instance shutdown successfully after 3 seconds.#033[00m
Oct 14 05:15:39 np0005486808 nova_compute[259627]: 2025-10-14 09:15:39.302 2 INFO nova.virt.libvirt.driver [-] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Instance destroyed successfully.#033[00m
Oct 14 05:15:39 np0005486808 nova_compute[259627]: 2025-10-14 09:15:39.303 2 DEBUG nova.objects.instance [None req-3fddcbbd-a746-4874-8801-725556650240 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'numa_topology' on Instance uuid 33969555-fe06-4613-b244-d03c9b4180ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:15:39 np0005486808 nova_compute[259627]: 2025-10-14 09:15:39.325 2 DEBUG nova.compute.manager [None req-3fddcbbd-a746-4874-8801-725556650240 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:15:39 np0005486808 nova_compute[259627]: 2025-10-14 09:15:39.394 2 DEBUG oslo_concurrency.lockutils [None req-3fddcbbd-a746-4874-8801-725556650240 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.169s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:39 np0005486808 nova_compute[259627]: 2025-10-14 09:15:39.736 2 DEBUG nova.network.neutron [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Successfully created port: 114d4e63-ee15-4133-b8bc-9cd2b1861072 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:15:39 np0005486808 nova_compute[259627]: 2025-10-14 09:15:39.851 2 DEBUG nova.compute.manager [req-0ec4ae54-15c0-4021-8221-e1afbdcc9e87 req-ad9ca2b7-a4bb-4bd3-a5c1-9dd5680bcb39 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Received event network-vif-unplugged-7e42bf44-c1f8-49df-bd5f-a26abe43a832 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:15:39 np0005486808 nova_compute[259627]: 2025-10-14 09:15:39.851 2 DEBUG oslo_concurrency.lockutils [req-0ec4ae54-15c0-4021-8221-e1afbdcc9e87 req-ad9ca2b7-a4bb-4bd3-a5c1-9dd5680bcb39 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "33969555-fe06-4613-b244-d03c9b4180ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:39 np0005486808 nova_compute[259627]: 2025-10-14 09:15:39.851 2 DEBUG oslo_concurrency.lockutils [req-0ec4ae54-15c0-4021-8221-e1afbdcc9e87 req-ad9ca2b7-a4bb-4bd3-a5c1-9dd5680bcb39 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:39 np0005486808 nova_compute[259627]: 2025-10-14 09:15:39.851 2 DEBUG oslo_concurrency.lockutils [req-0ec4ae54-15c0-4021-8221-e1afbdcc9e87 req-ad9ca2b7-a4bb-4bd3-a5c1-9dd5680bcb39 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:39 np0005486808 nova_compute[259627]: 2025-10-14 09:15:39.851 2 DEBUG nova.compute.manager [req-0ec4ae54-15c0-4021-8221-e1afbdcc9e87 req-ad9ca2b7-a4bb-4bd3-a5c1-9dd5680bcb39 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] No waiting events found dispatching network-vif-unplugged-7e42bf44-c1f8-49df-bd5f-a26abe43a832 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:15:39 np0005486808 nova_compute[259627]: 2025-10-14 09:15:39.852 2 WARNING nova.compute.manager [req-0ec4ae54-15c0-4021-8221-e1afbdcc9e87 req-ad9ca2b7-a4bb-4bd3-a5c1-9dd5680bcb39 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Received unexpected event network-vif-unplugged-7e42bf44-c1f8-49df-bd5f-a26abe43a832 for instance with vm_state stopped and task_state None.#033[00m
Oct 14 05:15:39 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1850: 305 pgs: 305 active+clean; 200 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 122 KiB/s rd, 29 KiB/s wr, 177 op/s
Oct 14 05:15:40 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:40Z|01116|binding|INFO|Releasing lport b1a95e67-ca39-4a56-9a91-3165df166ea9 from this chassis (sb_readonly=0)
Oct 14 05:15:40 np0005486808 nova_compute[259627]: 2025-10-14 09:15:40.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:40 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:40Z|01117|binding|INFO|Releasing lport b1a95e67-ca39-4a56-9a91-3165df166ea9 from this chassis (sb_readonly=0)
Oct 14 05:15:40 np0005486808 nova_compute[259627]: 2025-10-14 09:15:40.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:40 np0005486808 nova_compute[259627]: 2025-10-14 09:15:40.995 2 DEBUG nova.network.neutron [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Successfully updated port: 114d4e63-ee15-4133-b8bc-9cd2b1861072 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:15:41 np0005486808 nova_compute[259627]: 2025-10-14 09:15:41.007 2 DEBUG oslo_concurrency.lockutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "refresh_cache-c28cafe5-40e7-47f9-8793-6193487fccc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:15:41 np0005486808 nova_compute[259627]: 2025-10-14 09:15:41.007 2 DEBUG oslo_concurrency.lockutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquired lock "refresh_cache-c28cafe5-40e7-47f9-8793-6193487fccc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:15:41 np0005486808 nova_compute[259627]: 2025-10-14 09:15:41.007 2 DEBUG nova.network.neutron [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:15:41 np0005486808 nova_compute[259627]: 2025-10-14 09:15:41.291 2 DEBUG nova.compute.manager [req-49b03081-616a-4b4b-a84c-07bb0c0c2691 req-0a7447e6-917f-43d9-97fe-664a18747f00 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Received event network-changed-114d4e63-ee15-4133-b8bc-9cd2b1861072 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:15:41 np0005486808 nova_compute[259627]: 2025-10-14 09:15:41.291 2 DEBUG nova.compute.manager [req-49b03081-616a-4b4b-a84c-07bb0c0c2691 req-0a7447e6-917f-43d9-97fe-664a18747f00 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Refreshing instance network info cache due to event network-changed-114d4e63-ee15-4133-b8bc-9cd2b1861072. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:15:41 np0005486808 nova_compute[259627]: 2025-10-14 09:15:41.292 2 DEBUG oslo_concurrency.lockutils [req-49b03081-616a-4b4b-a84c-07bb0c0c2691 req-0a7447e6-917f-43d9-97fe-664a18747f00 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-c28cafe5-40e7-47f9-8793-6193487fccc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:15:41 np0005486808 nova_compute[259627]: 2025-10-14 09:15:41.371 2 DEBUG nova.network.neutron [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:15:41 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1851: 305 pgs: 305 active+clean; 246 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 136 KiB/s rd, 2.7 MiB/s wr, 200 op/s
Oct 14 05:15:42 np0005486808 nova_compute[259627]: 2025-10-14 09:15:42.013 2 DEBUG nova.compute.manager [req-b88cea60-58ff-4f37-bc42-f4981c326ef9 req-724d15a9-f25c-4400-9542-92aac972864f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Received event network-vif-plugged-7e42bf44-c1f8-49df-bd5f-a26abe43a832 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:15:42 np0005486808 nova_compute[259627]: 2025-10-14 09:15:42.014 2 DEBUG oslo_concurrency.lockutils [req-b88cea60-58ff-4f37-bc42-f4981c326ef9 req-724d15a9-f25c-4400-9542-92aac972864f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "33969555-fe06-4613-b244-d03c9b4180ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:42 np0005486808 nova_compute[259627]: 2025-10-14 09:15:42.015 2 DEBUG oslo_concurrency.lockutils [req-b88cea60-58ff-4f37-bc42-f4981c326ef9 req-724d15a9-f25c-4400-9542-92aac972864f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:42 np0005486808 nova_compute[259627]: 2025-10-14 09:15:42.015 2 DEBUG oslo_concurrency.lockutils [req-b88cea60-58ff-4f37-bc42-f4981c326ef9 req-724d15a9-f25c-4400-9542-92aac972864f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:42 np0005486808 nova_compute[259627]: 2025-10-14 09:15:42.016 2 DEBUG nova.compute.manager [req-b88cea60-58ff-4f37-bc42-f4981c326ef9 req-724d15a9-f25c-4400-9542-92aac972864f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] No waiting events found dispatching network-vif-plugged-7e42bf44-c1f8-49df-bd5f-a26abe43a832 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:15:42 np0005486808 nova_compute[259627]: 2025-10-14 09:15:42.016 2 WARNING nova.compute.manager [req-b88cea60-58ff-4f37-bc42-f4981c326ef9 req-724d15a9-f25c-4400-9542-92aac972864f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Received unexpected event network-vif-plugged-7e42bf44-c1f8-49df-bd5f-a26abe43a832 for instance with vm_state stopped and task_state None.#033[00m
Oct 14 05:15:42 np0005486808 nova_compute[259627]: 2025-10-14 09:15:42.519 2 INFO nova.compute.manager [None req-fe1aa59b-3a98-4bfb-a7a9-9a14018fd2d0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Get console output#033[00m
Oct 14 05:15:42 np0005486808 nova_compute[259627]: 2025-10-14 09:15:42.793 2 DEBUG nova.objects.instance [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'flavor' on Instance uuid 33969555-fe06-4613-b244-d03c9b4180ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:15:42 np0005486808 nova_compute[259627]: 2025-10-14 09:15:42.816 2 DEBUG oslo_concurrency.lockutils [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "refresh_cache-33969555-fe06-4613-b244-d03c9b4180ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:15:42 np0005486808 nova_compute[259627]: 2025-10-14 09:15:42.816 2 DEBUG oslo_concurrency.lockutils [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquired lock "refresh_cache-33969555-fe06-4613-b244-d03c9b4180ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:15:42 np0005486808 nova_compute[259627]: 2025-10-14 09:15:42.816 2 DEBUG nova.network.neutron [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:15:42 np0005486808 nova_compute[259627]: 2025-10-14 09:15:42.817 2 DEBUG nova.objects.instance [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'info_cache' on Instance uuid 33969555-fe06-4613-b244-d03c9b4180ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:15:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:15:43 np0005486808 nova_compute[259627]: 2025-10-14 09:15:43.110 2 DEBUG nova.network.neutron [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Updating instance_info_cache with network_info: [{"id": "114d4e63-ee15-4133-b8bc-9cd2b1861072", "address": "fa:16:3e:d1:b1:69", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap114d4e63-ee", "ovs_interfaceid": "114d4e63-ee15-4133-b8bc-9cd2b1861072", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:15:43 np0005486808 nova_compute[259627]: 2025-10-14 09:15:43.145 2 DEBUG oslo_concurrency.lockutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Releasing lock "refresh_cache-c28cafe5-40e7-47f9-8793-6193487fccc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:15:43 np0005486808 nova_compute[259627]: 2025-10-14 09:15:43.146 2 DEBUG nova.compute.manager [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Instance network_info: |[{"id": "114d4e63-ee15-4133-b8bc-9cd2b1861072", "address": "fa:16:3e:d1:b1:69", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap114d4e63-ee", "ovs_interfaceid": "114d4e63-ee15-4133-b8bc-9cd2b1861072", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:15:43 np0005486808 nova_compute[259627]: 2025-10-14 09:15:43.146 2 DEBUG oslo_concurrency.lockutils [req-49b03081-616a-4b4b-a84c-07bb0c0c2691 req-0a7447e6-917f-43d9-97fe-664a18747f00 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-c28cafe5-40e7-47f9-8793-6193487fccc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:15:43 np0005486808 nova_compute[259627]: 2025-10-14 09:15:43.147 2 DEBUG nova.network.neutron [req-49b03081-616a-4b4b-a84c-07bb0c0c2691 req-0a7447e6-917f-43d9-97fe-664a18747f00 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Refreshing network info cache for port 114d4e63-ee15-4133-b8bc-9cd2b1861072 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:15:43 np0005486808 nova_compute[259627]: 2025-10-14 09:15:43.152 2 DEBUG nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Start _get_guest_xml network_info=[{"id": "114d4e63-ee15-4133-b8bc-9cd2b1861072", "address": "fa:16:3e:d1:b1:69", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap114d4e63-ee", "ovs_interfaceid": "114d4e63-ee15-4133-b8bc-9cd2b1861072", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:15:43 np0005486808 nova_compute[259627]: 2025-10-14 09:15:43.159 2 WARNING nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:15:43 np0005486808 nova_compute[259627]: 2025-10-14 09:15:43.170 2 DEBUG nova.virt.libvirt.host [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:15:43 np0005486808 nova_compute[259627]: 2025-10-14 09:15:43.171 2 DEBUG nova.virt.libvirt.host [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:15:43 np0005486808 nova_compute[259627]: 2025-10-14 09:15:43.176 2 DEBUG nova.virt.libvirt.host [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:15:43 np0005486808 nova_compute[259627]: 2025-10-14 09:15:43.176 2 DEBUG nova.virt.libvirt.host [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:15:43 np0005486808 nova_compute[259627]: 2025-10-14 09:15:43.177 2 DEBUG nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:15:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:15:43 np0005486808 nova_compute[259627]: 2025-10-14 09:15:43.177 2 DEBUG nova.virt.hardware [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:15:43 np0005486808 nova_compute[259627]: 2025-10-14 09:15:43.178 2 DEBUG nova.virt.hardware [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:15:43 np0005486808 nova_compute[259627]: 2025-10-14 09:15:43.179 2 DEBUG nova.virt.hardware [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:15:43 np0005486808 nova_compute[259627]: 2025-10-14 09:15:43.179 2 DEBUG nova.virt.hardware [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:15:43 np0005486808 nova_compute[259627]: 2025-10-14 09:15:43.180 2 DEBUG nova.virt.hardware [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:15:43 np0005486808 nova_compute[259627]: 2025-10-14 09:15:43.181 2 DEBUG nova.virt.hardware [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:15:43 np0005486808 nova_compute[259627]: 2025-10-14 09:15:43.182 2 DEBUG nova.virt.hardware [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:15:43 np0005486808 nova_compute[259627]: 2025-10-14 09:15:43.182 2 DEBUG nova.virt.hardware [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:15:43 np0005486808 nova_compute[259627]: 2025-10-14 09:15:43.183 2 DEBUG nova.virt.hardware [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:15:43 np0005486808 nova_compute[259627]: 2025-10-14 09:15:43.183 2 DEBUG nova.virt.hardware [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:15:43 np0005486808 nova_compute[259627]: 2025-10-14 09:15:43.184 2 DEBUG nova.virt.hardware [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:15:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:15:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:15:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:15:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0018646067347138113 of space, bias 1.0, pg target 0.5593820204141434 quantized to 32 (current 32)
Oct 14 05:15:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:15:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:15:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:15:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:15:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:15:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 14 05:15:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:15:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:15:43 np0005486808 nova_compute[259627]: 2025-10-14 09:15:43.189 2 DEBUG oslo_concurrency.processutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:15:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:15:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:15:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:15:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:15:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:15:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:15:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:15:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:15:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:15:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:15:43 np0005486808 nova_compute[259627]: 2025-10-14 09:15:43.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:15:43 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2752679014' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:15:43 np0005486808 nova_compute[259627]: 2025-10-14 09:15:43.694 2 DEBUG oslo_concurrency.processutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:15:43 np0005486808 nova_compute[259627]: 2025-10-14 09:15:43.719 2 DEBUG nova.storage.rbd_utils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image c28cafe5-40e7-47f9-8793-6193487fccc3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:15:43 np0005486808 nova_compute[259627]: 2025-10-14 09:15:43.724 2 DEBUG oslo_concurrency.processutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:15:43 np0005486808 nova_compute[259627]: 2025-10-14 09:15:43.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1852: 305 pgs: 305 active+clean; 246 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 110 KiB/s rd, 2.2 MiB/s wr, 162 op/s
Oct 14 05:15:44 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:15:44 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2816728327' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:15:44 np0005486808 nova_compute[259627]: 2025-10-14 09:15:44.212 2 DEBUG oslo_concurrency.processutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:15:44 np0005486808 nova_compute[259627]: 2025-10-14 09:15:44.215 2 DEBUG nova.virt.libvirt.vif [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:15:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-271457736',display_name='tempest-ServersTestJSON-server-271457736',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-271457736',id=105,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0a080fae2f3c4e39a6cca225203f5ec6',ramdisk_id='',reservation_id='r-tuwn13dt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-2060951674',owner_user_name='tempest-ServersTestJSON-2060951674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:15:37Z,user_data=None,user_id='a287ef08fc5c4f218bf06cd2c7ed021e',uuid=c28cafe5-40e7-47f9-8793-6193487fccc3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "114d4e63-ee15-4133-b8bc-9cd2b1861072", "address": "fa:16:3e:d1:b1:69", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap114d4e63-ee", "ovs_interfaceid": "114d4e63-ee15-4133-b8bc-9cd2b1861072", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:15:44 np0005486808 nova_compute[259627]: 2025-10-14 09:15:44.215 2 DEBUG nova.network.os_vif_util [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converting VIF {"id": "114d4e63-ee15-4133-b8bc-9cd2b1861072", "address": "fa:16:3e:d1:b1:69", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap114d4e63-ee", "ovs_interfaceid": "114d4e63-ee15-4133-b8bc-9cd2b1861072", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:15:44 np0005486808 nova_compute[259627]: 2025-10-14 09:15:44.217 2 DEBUG nova.network.os_vif_util [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d1:b1:69,bridge_name='br-int',has_traffic_filtering=True,id=114d4e63-ee15-4133-b8bc-9cd2b1861072,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap114d4e63-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:15:44 np0005486808 nova_compute[259627]: 2025-10-14 09:15:44.219 2 DEBUG nova.objects.instance [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'pci_devices' on Instance uuid c28cafe5-40e7-47f9-8793-6193487fccc3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:15:44 np0005486808 nova_compute[259627]: 2025-10-14 09:15:44.238 2 DEBUG nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:15:44 np0005486808 nova_compute[259627]:  <uuid>c28cafe5-40e7-47f9-8793-6193487fccc3</uuid>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:  <name>instance-00000069</name>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:15:44 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:      <nova:name>tempest-ServersTestJSON-server-271457736</nova:name>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:15:43</nova:creationTime>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:15:44 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:        <nova:user uuid="a287ef08fc5c4f218bf06cd2c7ed021e">tempest-ServersTestJSON-2060951674-project-member</nova:user>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:        <nova:project uuid="0a080fae2f3c4e39a6cca225203f5ec6">tempest-ServersTestJSON-2060951674</nova:project>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:        <nova:port uuid="114d4e63-ee15-4133-b8bc-9cd2b1861072">
Oct 14 05:15:44 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:      <entry name="serial">c28cafe5-40e7-47f9-8793-6193487fccc3</entry>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:      <entry name="uuid">c28cafe5-40e7-47f9-8793-6193487fccc3</entry>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:15:44 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/c28cafe5-40e7-47f9-8793-6193487fccc3_disk">
Oct 14 05:15:44 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:15:44 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:15:44 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/c28cafe5-40e7-47f9-8793-6193487fccc3_disk.config">
Oct 14 05:15:44 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:15:44 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:15:44 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:d1:b1:69"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:      <target dev="tap114d4e63-ee"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:15:44 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/c28cafe5-40e7-47f9-8793-6193487fccc3/console.log" append="off"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:15:44 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:15:44 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:15:44 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:15:44 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:15:44 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:15:44 np0005486808 nova_compute[259627]: 2025-10-14 09:15:44.240 2 DEBUG nova.compute.manager [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Preparing to wait for external event network-vif-plugged-114d4e63-ee15-4133-b8bc-9cd2b1861072 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:15:44 np0005486808 nova_compute[259627]: 2025-10-14 09:15:44.241 2 DEBUG oslo_concurrency.lockutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "c28cafe5-40e7-47f9-8793-6193487fccc3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:44 np0005486808 nova_compute[259627]: 2025-10-14 09:15:44.241 2 DEBUG oslo_concurrency.lockutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "c28cafe5-40e7-47f9-8793-6193487fccc3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:44 np0005486808 nova_compute[259627]: 2025-10-14 09:15:44.242 2 DEBUG oslo_concurrency.lockutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "c28cafe5-40e7-47f9-8793-6193487fccc3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:44 np0005486808 nova_compute[259627]: 2025-10-14 09:15:44.243 2 DEBUG nova.virt.libvirt.vif [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:15:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-271457736',display_name='tempest-ServersTestJSON-server-271457736',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-271457736',id=105,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0a080fae2f3c4e39a6cca225203f5ec6',ramdisk_id='',reservation_id='r-tuwn13dt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-2060951674',owner_user_name='tempest-ServersTestJSON-2060951674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:15:37Z,user_data=None,user_id='a287ef08fc5c4f218bf06cd2c7ed021e',uuid=c28cafe5-40e7-47f9-8793-6193487fccc3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "114d4e63-ee15-4133-b8bc-9cd2b1861072", "address": "fa:16:3e:d1:b1:69", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap114d4e63-ee", "ovs_interfaceid": "114d4e63-ee15-4133-b8bc-9cd2b1861072", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:15:44 np0005486808 nova_compute[259627]: 2025-10-14 09:15:44.243 2 DEBUG nova.network.os_vif_util [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converting VIF {"id": "114d4e63-ee15-4133-b8bc-9cd2b1861072", "address": "fa:16:3e:d1:b1:69", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap114d4e63-ee", "ovs_interfaceid": "114d4e63-ee15-4133-b8bc-9cd2b1861072", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:15:44 np0005486808 nova_compute[259627]: 2025-10-14 09:15:44.244 2 DEBUG nova.network.os_vif_util [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d1:b1:69,bridge_name='br-int',has_traffic_filtering=True,id=114d4e63-ee15-4133-b8bc-9cd2b1861072,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap114d4e63-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:15:44 np0005486808 nova_compute[259627]: 2025-10-14 09:15:44.245 2 DEBUG os_vif [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:b1:69,bridge_name='br-int',has_traffic_filtering=True,id=114d4e63-ee15-4133-b8bc-9cd2b1861072,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap114d4e63-ee') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:15:44 np0005486808 nova_compute[259627]: 2025-10-14 09:15:44.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:44 np0005486808 nova_compute[259627]: 2025-10-14 09:15:44.247 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:15:44 np0005486808 nova_compute[259627]: 2025-10-14 09:15:44.247 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:15:44 np0005486808 nova_compute[259627]: 2025-10-14 09:15:44.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:44 np0005486808 nova_compute[259627]: 2025-10-14 09:15:44.251 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap114d4e63-ee, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:15:44 np0005486808 nova_compute[259627]: 2025-10-14 09:15:44.252 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap114d4e63-ee, col_values=(('external_ids', {'iface-id': '114d4e63-ee15-4133-b8bc-9cd2b1861072', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d1:b1:69', 'vm-uuid': 'c28cafe5-40e7-47f9-8793-6193487fccc3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:15:44 np0005486808 nova_compute[259627]: 2025-10-14 09:15:44.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:44 np0005486808 NetworkManager[44885]: <info>  [1760433344.2555] manager: (tap114d4e63-ee): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/447)
Oct 14 05:15:44 np0005486808 nova_compute[259627]: 2025-10-14 09:15:44.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:15:44 np0005486808 nova_compute[259627]: 2025-10-14 09:15:44.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:44 np0005486808 nova_compute[259627]: 2025-10-14 09:15:44.266 2 INFO os_vif [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:b1:69,bridge_name='br-int',has_traffic_filtering=True,id=114d4e63-ee15-4133-b8bc-9cd2b1861072,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap114d4e63-ee')#033[00m
Oct 14 05:15:44 np0005486808 nova_compute[259627]: 2025-10-14 09:15:44.324 2 DEBUG nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:15:44 np0005486808 nova_compute[259627]: 2025-10-14 09:15:44.324 2 DEBUG nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:15:44 np0005486808 nova_compute[259627]: 2025-10-14 09:15:44.324 2 DEBUG nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] No VIF found with MAC fa:16:3e:d1:b1:69, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:15:44 np0005486808 nova_compute[259627]: 2025-10-14 09:15:44.325 2 INFO nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Using config drive#033[00m
Oct 14 05:15:44 np0005486808 nova_compute[259627]: 2025-10-14 09:15:44.350 2 DEBUG nova.storage.rbd_utils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image c28cafe5-40e7-47f9-8793-6193487fccc3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:15:44 np0005486808 nova_compute[259627]: 2025-10-14 09:15:44.869 2 INFO nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Creating config drive at /var/lib/nova/instances/c28cafe5-40e7-47f9-8793-6193487fccc3/disk.config#033[00m
Oct 14 05:15:44 np0005486808 nova_compute[259627]: 2025-10-14 09:15:44.878 2 DEBUG oslo_concurrency.processutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c28cafe5-40e7-47f9-8793-6193487fccc3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfwe3g1i6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:15:44 np0005486808 nova_compute[259627]: 2025-10-14 09:15:44.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.049 2 DEBUG oslo_concurrency.processutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c28cafe5-40e7-47f9-8793-6193487fccc3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfwe3g1i6" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.092 2 DEBUG nova.storage.rbd_utils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] rbd image c28cafe5-40e7-47f9-8793-6193487fccc3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.097 2 DEBUG oslo_concurrency.processutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c28cafe5-40e7-47f9-8793-6193487fccc3/disk.config c28cafe5-40e7-47f9-8793-6193487fccc3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.153 2 DEBUG nova.network.neutron [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Updating instance_info_cache with network_info: [{"id": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "address": "fa:16:3e:20:8d:8f", "network": {"id": "7f4225de-9f3f-48e2-bad7-a89cf4884a2e", "bridge": "br-int", "label": "tempest-network-smoke--1728644856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e42bf44-c1", "ovs_interfaceid": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.180 2 DEBUG oslo_concurrency.lockutils [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Releasing lock "refresh_cache-33969555-fe06-4613-b244-d03c9b4180ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.200 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433330.198205, 6b47438c-b2b5-48e7-a15c-ee7c5936da65 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.200 2 INFO nova.compute.manager [-] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.223 2 INFO nova.virt.libvirt.driver [-] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Instance destroyed successfully.#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.224 2 DEBUG nova.objects.instance [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'numa_topology' on Instance uuid 33969555-fe06-4613-b244-d03c9b4180ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.229 2 DEBUG nova.compute.manager [None req-f535e7d7-882f-4c27-900b-d8a429d33636 - - - - - -] [instance: 6b47438c-b2b5-48e7-a15c-ee7c5936da65] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.246 2 DEBUG nova.objects.instance [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'resources' on Instance uuid 33969555-fe06-4613-b244-d03c9b4180ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.273 2 DEBUG nova.virt.libvirt.vif [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:15:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-383304321',display_name='tempest-TestNetworkAdvancedServerOps-server-383304321',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-383304321',id=103,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNrPP24QakCw1fsyp4n1/agXWsNdRHbY8VHQeER4RKvAc+m7J2Hp3JOcegzdlhA2wYneNiK6O1lPBfYB/HncbHtqlMdS+KcigG2/AdZVDhSvn9p2YfNQ60fauvNg9v/ylQ==',key_name='tempest-TestNetworkAdvancedServerOps-1100962189',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:15:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='2d24993a343a425dbddac7e32be0c86b',ramdisk_id='',reservation_id='r-xg2zg09k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-94788416',owner_user_name='tempest-TestNetworkAdvancedServerOps-94788416-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:15:39Z,user_data=None,user_id='e992bcb79c4946a8985e3df25eb216ca',uuid=33969555-fe06-4613-b244-d03c9b4180ba,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "address": "fa:16:3e:20:8d:8f", "network": {"id": "7f4225de-9f3f-48e2-bad7-a89cf4884a2e", "bridge": "br-int", "label": "tempest-network-smoke--1728644856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e42bf44-c1", "ovs_interfaceid": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.274 2 DEBUG nova.network.os_vif_util [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converting VIF {"id": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "address": "fa:16:3e:20:8d:8f", "network": {"id": "7f4225de-9f3f-48e2-bad7-a89cf4884a2e", "bridge": "br-int", "label": "tempest-network-smoke--1728644856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e42bf44-c1", "ovs_interfaceid": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.275 2 DEBUG nova.network.os_vif_util [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:8d:8f,bridge_name='br-int',has_traffic_filtering=True,id=7e42bf44-c1f8-49df-bd5f-a26abe43a832,network=Network(7f4225de-9f3f-48e2-bad7-a89cf4884a2e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e42bf44-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.275 2 DEBUG os_vif [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:8d:8f,bridge_name='br-int',has_traffic_filtering=True,id=7e42bf44-c1f8-49df-bd5f-a26abe43a832,network=Network(7f4225de-9f3f-48e2-bad7-a89cf4884a2e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e42bf44-c1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.278 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7e42bf44-c1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.284 2 DEBUG oslo_concurrency.processutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c28cafe5-40e7-47f9-8793-6193487fccc3/disk.config c28cafe5-40e7-47f9-8793-6193487fccc3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.284 2 INFO nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Deleting local config drive /var/lib/nova/instances/c28cafe5-40e7-47f9-8793-6193487fccc3/disk.config because it was imported into RBD.#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.288 2 INFO os_vif [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:8d:8f,bridge_name='br-int',has_traffic_filtering=True,id=7e42bf44-c1f8-49df-bd5f-a26abe43a832,network=Network(7f4225de-9f3f-48e2-bad7-a89cf4884a2e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e42bf44-c1')#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.297 2 DEBUG nova.virt.libvirt.driver [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Start _get_guest_xml network_info=[{"id": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "address": "fa:16:3e:20:8d:8f", "network": {"id": "7f4225de-9f3f-48e2-bad7-a89cf4884a2e", "bridge": "br-int", "label": "tempest-network-smoke--1728644856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e42bf44-c1", "ovs_interfaceid": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.306 2 WARNING nova.virt.libvirt.driver [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.315 2 DEBUG nova.virt.libvirt.host [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.316 2 DEBUG nova.virt.libvirt.host [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.323 2 DEBUG nova.virt.libvirt.host [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.324 2 DEBUG nova.virt.libvirt.host [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.324 2 DEBUG nova.virt.libvirt.driver [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.324 2 DEBUG nova.virt.hardware [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.325 2 DEBUG nova.virt.hardware [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.325 2 DEBUG nova.virt.hardware [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.326 2 DEBUG nova.virt.hardware [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.326 2 DEBUG nova.virt.hardware [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.326 2 DEBUG nova.virt.hardware [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.327 2 DEBUG nova.virt.hardware [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.327 2 DEBUG nova.virt.hardware [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.327 2 DEBUG nova.virt.hardware [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.327 2 DEBUG nova.virt.hardware [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.328 2 DEBUG nova.virt.hardware [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.328 2 DEBUG nova.objects.instance [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'vcpu_model' on Instance uuid 33969555-fe06-4613-b244-d03c9b4180ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:15:45 np0005486808 kernel: tap114d4e63-ee: entered promiscuous mode
Oct 14 05:15:45 np0005486808 NetworkManager[44885]: <info>  [1760433345.3335] manager: (tap114d4e63-ee): new Tun device (/org/freedesktop/NetworkManager/Devices/448)
Oct 14 05:15:45 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:45Z|01118|binding|INFO|Claiming lport 114d4e63-ee15-4133-b8bc-9cd2b1861072 for this chassis.
Oct 14 05:15:45 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:45Z|01119|binding|INFO|114d4e63-ee15-4133-b8bc-9cd2b1861072: Claiming fa:16:3e:d1:b1:69 10.100.0.4
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:45.345 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d1:b1:69 10.100.0.4'], port_security=['fa:16:3e:d1:b1:69 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'c28cafe5-40e7-47f9-8793-6193487fccc3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbecee11-4892-4e36-88d8-98879af7bb1e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a080fae2f3c4e39a6cca225203f5ec6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '03d7ceab-aac8-44ec-88a3-f0ef79d050f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d77764d-de2c-4408-8882-77d03cb7288a, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=114d4e63-ee15-4133-b8bc-9cd2b1861072) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:15:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:45.346 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 114d4e63-ee15-4133-b8bc-9cd2b1861072 in datapath fbecee11-4892-4e36-88d8-98879af7bb1e bound to our chassis#033[00m
Oct 14 05:15:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:45.347 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbecee11-4892-4e36-88d8-98879af7bb1e#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.357 2 DEBUG oslo_concurrency.processutils [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:15:45 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:45Z|01120|binding|INFO|Setting lport 114d4e63-ee15-4133-b8bc-9cd2b1861072 ovn-installed in OVS
Oct 14 05:15:45 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:45Z|01121|binding|INFO|Setting lport 114d4e63-ee15-4133-b8bc-9cd2b1861072 up in Southbound
Oct 14 05:15:45 np0005486808 systemd-udevd[362382]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:15:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:45.363 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[54acce03-57d8-41f7-859e-d94daed58017]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:45 np0005486808 NetworkManager[44885]: <info>  [1760433345.3750] device (tap114d4e63-ee): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:15:45 np0005486808 NetworkManager[44885]: <info>  [1760433345.3756] device (tap114d4e63-ee): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:15:45 np0005486808 systemd-machined[214636]: New machine qemu-131-instance-00000069.
Oct 14 05:15:45 np0005486808 systemd[1]: Started Virtual Machine qemu-131-instance-00000069.
Oct 14 05:15:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:45.398 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[0ed328fb-f4ff-4c30-91cb-51b2fa380ec7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.399 2 DEBUG nova.network.neutron [req-49b03081-616a-4b4b-a84c-07bb0c0c2691 req-0a7447e6-917f-43d9-97fe-664a18747f00 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Updated VIF entry in instance network info cache for port 114d4e63-ee15-4133-b8bc-9cd2b1861072. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.401 2 DEBUG nova.network.neutron [req-49b03081-616a-4b4b-a84c-07bb0c0c2691 req-0a7447e6-917f-43d9-97fe-664a18747f00 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Updating instance_info_cache with network_info: [{"id": "114d4e63-ee15-4133-b8bc-9cd2b1861072", "address": "fa:16:3e:d1:b1:69", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap114d4e63-ee", "ovs_interfaceid": "114d4e63-ee15-4133-b8bc-9cd2b1861072", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:45.404 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[091c83e7-5d53-4b17-9162-568614237a8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.429 2 DEBUG oslo_concurrency.lockutils [req-49b03081-616a-4b4b-a84c-07bb0c0c2691 req-0a7447e6-917f-43d9-97fe-664a18747f00 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-c28cafe5-40e7-47f9-8793-6193487fccc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:15:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:45.433 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a3b1f666-13f9-4eba-bf02-8255da95c3de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:45.451 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4b3ad77b-ff69-4768-80cf-5d1e72e2c88e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbecee11-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:b3:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 25, 'rx_bytes': 1000, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 25, 'rx_bytes': 1000, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 700079, 'reachable_time': 38382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362396, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:45.471 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[425ab8da-010e-444f-b077-a77b8dc463fc]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700095, 'tstamp': 700095}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362398, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700099, 'tstamp': 700099}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362398, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:45.473 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbecee11-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:15:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:45.475 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbecee11-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:15:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:45.476 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:45.476 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbecee11-40, col_values=(('external_ids', {'iface-id': 'b1a95e67-ca39-4a56-9a91-3165df166ea9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:15:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:45.477 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.590 2 DEBUG nova.compute.manager [req-09496c98-325a-42c3-85cb-f8eb1f22c204 req-25722ad6-b4f2-427a-9576-61f926c15ef6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Received event network-vif-plugged-114d4e63-ee15-4133-b8bc-9cd2b1861072 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.590 2 DEBUG oslo_concurrency.lockutils [req-09496c98-325a-42c3-85cb-f8eb1f22c204 req-25722ad6-b4f2-427a-9576-61f926c15ef6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c28cafe5-40e7-47f9-8793-6193487fccc3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.591 2 DEBUG oslo_concurrency.lockutils [req-09496c98-325a-42c3-85cb-f8eb1f22c204 req-25722ad6-b4f2-427a-9576-61f926c15ef6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c28cafe5-40e7-47f9-8793-6193487fccc3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.591 2 DEBUG oslo_concurrency.lockutils [req-09496c98-325a-42c3-85cb-f8eb1f22c204 req-25722ad6-b4f2-427a-9576-61f926c15ef6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c28cafe5-40e7-47f9-8793-6193487fccc3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.591 2 DEBUG nova.compute.manager [req-09496c98-325a-42c3-85cb-f8eb1f22c204 req-25722ad6-b4f2-427a-9576-61f926c15ef6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Processing event network-vif-plugged-114d4e63-ee15-4133-b8bc-9cd2b1861072 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:15:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:15:45 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2525250669' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.785 2 DEBUG oslo_concurrency.processutils [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:15:45 np0005486808 nova_compute[259627]: 2025-10-14 09:15:45.829 2 DEBUG oslo_concurrency.processutils [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:15:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1853: 305 pgs: 305 active+clean; 246 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.2 MiB/s wr, 40 op/s
Oct 14 05:15:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:15:46 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/57831561' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:15:46 np0005486808 nova_compute[259627]: 2025-10-14 09:15:46.315 2 DEBUG oslo_concurrency.processutils [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:15:46 np0005486808 nova_compute[259627]: 2025-10-14 09:15:46.317 2 DEBUG nova.virt.libvirt.vif [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:15:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-383304321',display_name='tempest-TestNetworkAdvancedServerOps-server-383304321',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-383304321',id=103,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNrPP24QakCw1fsyp4n1/agXWsNdRHbY8VHQeER4RKvAc+m7J2Hp3JOcegzdlhA2wYneNiK6O1lPBfYB/HncbHtqlMdS+KcigG2/AdZVDhSvn9p2YfNQ60fauvNg9v/ylQ==',key_name='tempest-TestNetworkAdvancedServerOps-1100962189',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:15:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='2d24993a343a425dbddac7e32be0c86b',ramdisk_id='',reservation_id='r-xg2zg09k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-94788416',owner_user_name='tempest-TestNetworkAdvancedServerOps-94788416-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:15:39Z,user_data=None,user_id='e992bcb79c4946a8985e3df25eb216ca',uuid=33969555-fe06-4613-b244-d03c9b4180ba,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "address": "fa:16:3e:20:8d:8f", "network": {"id": "7f4225de-9f3f-48e2-bad7-a89cf4884a2e", "bridge": "br-int", "label": "tempest-network-smoke--1728644856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e42bf44-c1", "ovs_interfaceid": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:15:46 np0005486808 nova_compute[259627]: 2025-10-14 09:15:46.317 2 DEBUG nova.network.os_vif_util [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converting VIF {"id": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "address": "fa:16:3e:20:8d:8f", "network": {"id": "7f4225de-9f3f-48e2-bad7-a89cf4884a2e", "bridge": "br-int", "label": "tempest-network-smoke--1728644856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e42bf44-c1", "ovs_interfaceid": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:15:46 np0005486808 nova_compute[259627]: 2025-10-14 09:15:46.318 2 DEBUG nova.network.os_vif_util [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:8d:8f,bridge_name='br-int',has_traffic_filtering=True,id=7e42bf44-c1f8-49df-bd5f-a26abe43a832,network=Network(7f4225de-9f3f-48e2-bad7-a89cf4884a2e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e42bf44-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:15:46 np0005486808 nova_compute[259627]: 2025-10-14 09:15:46.319 2 DEBUG nova.objects.instance [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'pci_devices' on Instance uuid 33969555-fe06-4613-b244-d03c9b4180ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:15:46 np0005486808 nova_compute[259627]: 2025-10-14 09:15:46.335 2 DEBUG nova.virt.libvirt.driver [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:15:46 np0005486808 nova_compute[259627]:  <uuid>33969555-fe06-4613-b244-d03c9b4180ba</uuid>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:  <name>instance-00000067</name>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:15:46 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-383304321</nova:name>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:15:45</nova:creationTime>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:15:46 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:        <nova:user uuid="e992bcb79c4946a8985e3df25eb216ca">tempest-TestNetworkAdvancedServerOps-94788416-project-member</nova:user>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:        <nova:project uuid="2d24993a343a425dbddac7e32be0c86b">tempest-TestNetworkAdvancedServerOps-94788416</nova:project>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:        <nova:port uuid="7e42bf44-c1f8-49df-bd5f-a26abe43a832">
Oct 14 05:15:46 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:      <entry name="serial">33969555-fe06-4613-b244-d03c9b4180ba</entry>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:      <entry name="uuid">33969555-fe06-4613-b244-d03c9b4180ba</entry>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:15:46 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/33969555-fe06-4613-b244-d03c9b4180ba_disk">
Oct 14 05:15:46 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:15:46 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:15:46 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/33969555-fe06-4613-b244-d03c9b4180ba_disk.config">
Oct 14 05:15:46 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:15:46 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:15:46 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:20:8d:8f"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:      <target dev="tap7e42bf44-c1"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:15:46 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/33969555-fe06-4613-b244-d03c9b4180ba/console.log" append="off"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <input type="keyboard" bus="usb"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:15:46 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:15:46 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:15:46 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:15:46 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:15:46 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:15:46 np0005486808 nova_compute[259627]: 2025-10-14 09:15:46.336 2 DEBUG nova.virt.libvirt.driver [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:15:46 np0005486808 nova_compute[259627]: 2025-10-14 09:15:46.337 2 DEBUG nova.virt.libvirt.driver [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:15:46 np0005486808 nova_compute[259627]: 2025-10-14 09:15:46.337 2 DEBUG nova.virt.libvirt.vif [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:15:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-383304321',display_name='tempest-TestNetworkAdvancedServerOps-server-383304321',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-383304321',id=103,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNrPP24QakCw1fsyp4n1/agXWsNdRHbY8VHQeER4RKvAc+m7J2Hp3JOcegzdlhA2wYneNiK6O1lPBfYB/HncbHtqlMdS+KcigG2/AdZVDhSvn9p2YfNQ60fauvNg9v/ylQ==',key_name='tempest-TestNetworkAdvancedServerOps-1100962189',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:15:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='2d24993a343a425dbddac7e32be0c86b',ramdisk_id='',reservation_id='r-xg2zg09k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-94788416',owner_user_name='tempest-TestNetworkAdvancedServerOps-94788416-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:15:39Z,user_data=None,user_id='e992bcb79c4946a8985e3df25eb216ca',uuid=33969555-fe06-4613-b244-d03c9b4180ba,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "address": "fa:16:3e:20:8d:8f", "network": {"id": "7f4225de-9f3f-48e2-bad7-a89cf4884a2e", "bridge": "br-int", "label": "tempest-network-smoke--1728644856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e42bf44-c1", "ovs_interfaceid": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:15:46 np0005486808 nova_compute[259627]: 2025-10-14 09:15:46.338 2 DEBUG nova.network.os_vif_util [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converting VIF {"id": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "address": "fa:16:3e:20:8d:8f", "network": {"id": "7f4225de-9f3f-48e2-bad7-a89cf4884a2e", "bridge": "br-int", "label": "tempest-network-smoke--1728644856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e42bf44-c1", "ovs_interfaceid": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:15:46 np0005486808 nova_compute[259627]: 2025-10-14 09:15:46.338 2 DEBUG nova.network.os_vif_util [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:8d:8f,bridge_name='br-int',has_traffic_filtering=True,id=7e42bf44-c1f8-49df-bd5f-a26abe43a832,network=Network(7f4225de-9f3f-48e2-bad7-a89cf4884a2e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e42bf44-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:15:46 np0005486808 nova_compute[259627]: 2025-10-14 09:15:46.338 2 DEBUG os_vif [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:8d:8f,bridge_name='br-int',has_traffic_filtering=True,id=7e42bf44-c1f8-49df-bd5f-a26abe43a832,network=Network(7f4225de-9f3f-48e2-bad7-a89cf4884a2e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e42bf44-c1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:15:46 np0005486808 nova_compute[259627]: 2025-10-14 09:15:46.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:46 np0005486808 nova_compute[259627]: 2025-10-14 09:15:46.339 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:15:46 np0005486808 nova_compute[259627]: 2025-10-14 09:15:46.340 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:15:46 np0005486808 nova_compute[259627]: 2025-10-14 09:15:46.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:46 np0005486808 nova_compute[259627]: 2025-10-14 09:15:46.342 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7e42bf44-c1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:15:46 np0005486808 nova_compute[259627]: 2025-10-14 09:15:46.342 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7e42bf44-c1, col_values=(('external_ids', {'iface-id': '7e42bf44-c1f8-49df-bd5f-a26abe43a832', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:20:8d:8f', 'vm-uuid': '33969555-fe06-4613-b244-d03c9b4180ba'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:15:46 np0005486808 NetworkManager[44885]: <info>  [1760433346.3445] manager: (tap7e42bf44-c1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/449)
Oct 14 05:15:46 np0005486808 nova_compute[259627]: 2025-10-14 09:15:46.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:15:46 np0005486808 nova_compute[259627]: 2025-10-14 09:15:46.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:46 np0005486808 nova_compute[259627]: 2025-10-14 09:15:46.349 2 INFO os_vif [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:8d:8f,bridge_name='br-int',has_traffic_filtering=True,id=7e42bf44-c1f8-49df-bd5f-a26abe43a832,network=Network(7f4225de-9f3f-48e2-bad7-a89cf4884a2e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e42bf44-c1')#033[00m
Oct 14 05:15:46 np0005486808 kernel: tap7e42bf44-c1: entered promiscuous mode
Oct 14 05:15:46 np0005486808 NetworkManager[44885]: <info>  [1760433346.4125] manager: (tap7e42bf44-c1): new Tun device (/org/freedesktop/NetworkManager/Devices/450)
Oct 14 05:15:46 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:46Z|01122|binding|INFO|Claiming lport 7e42bf44-c1f8-49df-bd5f-a26abe43a832 for this chassis.
Oct 14 05:15:46 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:46Z|01123|binding|INFO|7e42bf44-c1f8-49df-bd5f-a26abe43a832: Claiming fa:16:3e:20:8d:8f 10.100.0.4
Oct 14 05:15:46 np0005486808 nova_compute[259627]: 2025-10-14 09:15:46.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:46 np0005486808 systemd-udevd[362386]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:15:46 np0005486808 nova_compute[259627]: 2025-10-14 09:15:46.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:46 np0005486808 NetworkManager[44885]: <info>  [1760433346.4229] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/451)
Oct 14 05:15:46 np0005486808 NetworkManager[44885]: <info>  [1760433346.4238] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/452)
Oct 14 05:15:46 np0005486808 NetworkManager[44885]: <info>  [1760433346.4269] device (tap7e42bf44-c1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:15:46 np0005486808 NetworkManager[44885]: <info>  [1760433346.4284] device (tap7e42bf44-c1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.429 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:8d:8f 10.100.0.4'], port_security=['fa:16:3e:20:8d:8f 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '33969555-fe06-4613-b244-d03c9b4180ba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7f4225de-9f3f-48e2-bad7-a89cf4884a2e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d24993a343a425dbddac7e32be0c86b', 'neutron:revision_number': '5', 'neutron:security_group_ids': '7fb2d92f-9be1-4133-9fba-da943dad4162', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.198'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=10a0b33e-96f5-46d7-a240-9f59c55a6b07, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=7e42bf44-c1f8-49df-bd5f-a26abe43a832) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.430 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 7e42bf44-c1f8-49df-bd5f-a26abe43a832 in datapath 7f4225de-9f3f-48e2-bad7-a89cf4884a2e bound to our chassis#033[00m
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.431 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7f4225de-9f3f-48e2-bad7-a89cf4884a2e#033[00m
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.442 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[98a5fff2-ea28-4508-ab31-8dfecc02fc00]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.443 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7f4225de-91 in ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.445 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7f4225de-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.445 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[06527965-6d28-4c82-9e36-663422feeb28]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.446 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[68b191cd-e54d-4da5-835e-01c1c8b7ed80]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:46 np0005486808 systemd-machined[214636]: New machine qemu-132-instance-00000067.
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.457 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[a52980a1-a868-463d-8b03-4b3f9fd40735]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:46 np0005486808 systemd[1]: Started Virtual Machine qemu-132-instance-00000067.
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.481 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[97238166-650c-4435-8ed5-5d5355a84e0a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.512 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[655e02e9-5b40-45ba-91a9-e7904bd64eb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.519 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[14608336-d68a-423c-9938-c6fde0793a3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:46 np0005486808 NetworkManager[44885]: <info>  [1760433346.5203] manager: (tap7f4225de-90): new Veth device (/org/freedesktop/NetworkManager/Devices/453)
Oct 14 05:15:46 np0005486808 nova_compute[259627]: 2025-10-14 09:15:46.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:46 np0005486808 nova_compute[259627]: 2025-10-14 09:15:46.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:46 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:46Z|01124|binding|INFO|Releasing lport b1a95e67-ca39-4a56-9a91-3165df166ea9 from this chassis (sb_readonly=0)
Oct 14 05:15:46 np0005486808 nova_compute[259627]: 2025-10-14 09:15:46.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:46 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:46Z|01125|binding|INFO|Setting lport 7e42bf44-c1f8-49df-bd5f-a26abe43a832 ovn-installed in OVS
Oct 14 05:15:46 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:46Z|01126|binding|INFO|Setting lport 7e42bf44-c1f8-49df-bd5f-a26abe43a832 up in Southbound
Oct 14 05:15:46 np0005486808 nova_compute[259627]: 2025-10-14 09:15:46.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.563 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[6f00ac51-6b3a-4812-8006-fa009d32e702]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.566 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f5257e62-928b-43ec-9ec0-b41427863b6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:46 np0005486808 NetworkManager[44885]: <info>  [1760433346.5885] device (tap7f4225de-90): carrier: link connected
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.594 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e782124b-a700-49ca-bcf3-def7a7ec96c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.616 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[20252c68-adf2-4fc6-b8cb-a5428b2e3e7b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7f4225de-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:e2:34'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 323], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712534, 'reachable_time': 28069, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362505, 'error': None, 'target': 'ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.632 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f9e1f864-8f73-43e9-96f3-f9839b8a176d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee0:e234'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 712534, 'tstamp': 712534}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362524, 'error': None, 'target': 'ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.652 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5ec330b4-c41d-4648-bb6a-b194796e77cc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7f4225de-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:e2:34'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 323], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712534, 'reachable_time': 28069, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 362531, 'error': None, 'target': 'ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.681 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[60e5b20c-ac3a-46e4-9ab1-b423f88d07c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.743 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3959f9c7-a4d5-4a9a-8468-d5babf302c76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.745 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7f4225de-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.745 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.746 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7f4225de-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:15:46 np0005486808 nova_compute[259627]: 2025-10-14 09:15:46.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:46 np0005486808 NetworkManager[44885]: <info>  [1760433346.7485] manager: (tap7f4225de-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/454)
Oct 14 05:15:46 np0005486808 kernel: tap7f4225de-90: entered promiscuous mode
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.751 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7f4225de-90, col_values=(('external_ids', {'iface-id': 'f5c700c1-27f2-4a9c-8db0-f69417ca2318'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:15:46 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:46Z|01127|binding|INFO|Releasing lport f5c700c1-27f2-4a9c-8db0-f69417ca2318 from this chassis (sb_readonly=0)
Oct 14 05:15:46 np0005486808 nova_compute[259627]: 2025-10-14 09:15:46.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:46 np0005486808 nova_compute[259627]: 2025-10-14 09:15:46.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.770 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7f4225de-9f3f-48e2-bad7-a89cf4884a2e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7f4225de-9f3f-48e2-bad7-a89cf4884a2e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.771 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7fb70e6d-b7a4-42ef-8622-772fe12156dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.771 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-7f4225de-9f3f-48e2-bad7-a89cf4884a2e
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/7f4225de-9f3f-48e2-bad7-a89cf4884a2e.pid.haproxy
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 7f4225de-9f3f-48e2-bad7-a89cf4884a2e
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:15:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:15:46.772 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e', 'env', 'PROCESS_TAG=haproxy-7f4225de-9f3f-48e2-bad7-a89cf4884a2e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7f4225de-9f3f-48e2-bad7-a89cf4884a2e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:15:47 np0005486808 podman[362581]: 2025-10-14 09:15:47.182050262 +0000 UTC m=+0.059324693 container create a5f58647710bc761dec1126060eccfd4434c280bd62a8f6369a66e7a779f7ae6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 14 05:15:47 np0005486808 nova_compute[259627]: 2025-10-14 09:15:47.210 2 DEBUG nova.compute.manager [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:15:47 np0005486808 nova_compute[259627]: 2025-10-14 09:15:47.212 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433347.209737, c28cafe5-40e7-47f9-8793-6193487fccc3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:15:47 np0005486808 nova_compute[259627]: 2025-10-14 09:15:47.213 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] VM Started (Lifecycle Event)#033[00m
Oct 14 05:15:47 np0005486808 nova_compute[259627]: 2025-10-14 09:15:47.220 2 DEBUG nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:15:47 np0005486808 nova_compute[259627]: 2025-10-14 09:15:47.226 2 INFO nova.virt.libvirt.driver [-] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Instance spawned successfully.#033[00m
Oct 14 05:15:47 np0005486808 nova_compute[259627]: 2025-10-14 09:15:47.226 2 DEBUG nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:15:47 np0005486808 nova_compute[259627]: 2025-10-14 09:15:47.242 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:15:47 np0005486808 podman[362581]: 2025-10-14 09:15:47.152634427 +0000 UTC m=+0.029908858 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:15:47 np0005486808 nova_compute[259627]: 2025-10-14 09:15:47.248 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:15:47 np0005486808 systemd[1]: Started libpod-conmon-a5f58647710bc761dec1126060eccfd4434c280bd62a8f6369a66e7a779f7ae6.scope.
Oct 14 05:15:47 np0005486808 nova_compute[259627]: 2025-10-14 09:15:47.262 2 DEBUG nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:15:47 np0005486808 nova_compute[259627]: 2025-10-14 09:15:47.263 2 DEBUG nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:15:47 np0005486808 nova_compute[259627]: 2025-10-14 09:15:47.263 2 DEBUG nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:15:47 np0005486808 nova_compute[259627]: 2025-10-14 09:15:47.264 2 DEBUG nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:15:47 np0005486808 nova_compute[259627]: 2025-10-14 09:15:47.264 2 DEBUG nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:15:47 np0005486808 nova_compute[259627]: 2025-10-14 09:15:47.265 2 DEBUG nova.virt.libvirt.driver [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:15:47 np0005486808 nova_compute[259627]: 2025-10-14 09:15:47.271 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:15:47 np0005486808 nova_compute[259627]: 2025-10-14 09:15:47.271 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433347.2101824, c28cafe5-40e7-47f9-8793-6193487fccc3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:15:47 np0005486808 nova_compute[259627]: 2025-10-14 09:15:47.272 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:15:47 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:15:47 np0005486808 nova_compute[259627]: 2025-10-14 09:15:47.300 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:15:47 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88b20c8a5752e1d1bec1c2d117ad2bfbfb3601e7200bb4fc5f4f5f26fbe83704/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:15:47 np0005486808 nova_compute[259627]: 2025-10-14 09:15:47.314 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433347.2193303, c28cafe5-40e7-47f9-8793-6193487fccc3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:15:47 np0005486808 nova_compute[259627]: 2025-10-14 09:15:47.314 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:15:47 np0005486808 podman[362581]: 2025-10-14 09:15:47.328389279 +0000 UTC m=+0.205663730 container init a5f58647710bc761dec1126060eccfd4434c280bd62a8f6369a66e7a779f7ae6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 05:15:47 np0005486808 nova_compute[259627]: 2025-10-14 09:15:47.330 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:15:47 np0005486808 nova_compute[259627]: 2025-10-14 09:15:47.335 2 INFO nova.compute.manager [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Took 9.98 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:15:47 np0005486808 nova_compute[259627]: 2025-10-14 09:15:47.336 2 DEBUG nova.compute.manager [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:15:47 np0005486808 nova_compute[259627]: 2025-10-14 09:15:47.336 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:15:47 np0005486808 podman[362581]: 2025-10-14 09:15:47.339889893 +0000 UTC m=+0.217164304 container start a5f58647710bc761dec1126060eccfd4434c280bd62a8f6369a66e7a779f7ae6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:15:47 np0005486808 neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e[362597]: [NOTICE]   (362601) : New worker (362603) forked
Oct 14 05:15:47 np0005486808 neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e[362597]: [NOTICE]   (362601) : Loading success.
Oct 14 05:15:47 np0005486808 nova_compute[259627]: 2025-10-14 09:15:47.367 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:15:47 np0005486808 nova_compute[259627]: 2025-10-14 09:15:47.396 2 INFO nova.compute.manager [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Took 11.42 seconds to build instance.#033[00m
Oct 14 05:15:47 np0005486808 nova_compute[259627]: 2025-10-14 09:15:47.416 2 DEBUG oslo_concurrency.lockutils [None req-262ff7b5-c2f7-41d6-9b92-5aa5cde38572 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "c28cafe5-40e7-47f9-8793-6193487fccc3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.570s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:47 np0005486808 nova_compute[259627]: 2025-10-14 09:15:47.700 2 DEBUG nova.compute.manager [req-baafef8a-528c-4039-ad3f-56488dc60091 req-03143fc7-2812-4ad9-a9b5-c33e285d0f42 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Received event network-vif-plugged-114d4e63-ee15-4133-b8bc-9cd2b1861072 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:15:47 np0005486808 nova_compute[259627]: 2025-10-14 09:15:47.702 2 DEBUG oslo_concurrency.lockutils [req-baafef8a-528c-4039-ad3f-56488dc60091 req-03143fc7-2812-4ad9-a9b5-c33e285d0f42 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c28cafe5-40e7-47f9-8793-6193487fccc3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:47 np0005486808 nova_compute[259627]: 2025-10-14 09:15:47.702 2 DEBUG oslo_concurrency.lockutils [req-baafef8a-528c-4039-ad3f-56488dc60091 req-03143fc7-2812-4ad9-a9b5-c33e285d0f42 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c28cafe5-40e7-47f9-8793-6193487fccc3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:47 np0005486808 nova_compute[259627]: 2025-10-14 09:15:47.703 2 DEBUG oslo_concurrency.lockutils [req-baafef8a-528c-4039-ad3f-56488dc60091 req-03143fc7-2812-4ad9-a9b5-c33e285d0f42 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c28cafe5-40e7-47f9-8793-6193487fccc3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:47 np0005486808 nova_compute[259627]: 2025-10-14 09:15:47.703 2 DEBUG nova.compute.manager [req-baafef8a-528c-4039-ad3f-56488dc60091 req-03143fc7-2812-4ad9-a9b5-c33e285d0f42 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] No waiting events found dispatching network-vif-plugged-114d4e63-ee15-4133-b8bc-9cd2b1861072 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:15:47 np0005486808 nova_compute[259627]: 2025-10-14 09:15:47.704 2 WARNING nova.compute.manager [req-baafef8a-528c-4039-ad3f-56488dc60091 req-03143fc7-2812-4ad9-a9b5-c33e285d0f42 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Received unexpected event network-vif-plugged-114d4e63-ee15-4133-b8bc-9cd2b1861072 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:15:47 np0005486808 nova_compute[259627]: 2025-10-14 09:15:47.705 2 DEBUG nova.compute.manager [req-baafef8a-528c-4039-ad3f-56488dc60091 req-03143fc7-2812-4ad9-a9b5-c33e285d0f42 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Received event network-vif-plugged-7e42bf44-c1f8-49df-bd5f-a26abe43a832 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:15:47 np0005486808 nova_compute[259627]: 2025-10-14 09:15:47.705 2 DEBUG oslo_concurrency.lockutils [req-baafef8a-528c-4039-ad3f-56488dc60091 req-03143fc7-2812-4ad9-a9b5-c33e285d0f42 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "33969555-fe06-4613-b244-d03c9b4180ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:47 np0005486808 nova_compute[259627]: 2025-10-14 09:15:47.706 2 DEBUG oslo_concurrency.lockutils [req-baafef8a-528c-4039-ad3f-56488dc60091 req-03143fc7-2812-4ad9-a9b5-c33e285d0f42 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:47 np0005486808 nova_compute[259627]: 2025-10-14 09:15:47.706 2 DEBUG oslo_concurrency.lockutils [req-baafef8a-528c-4039-ad3f-56488dc60091 req-03143fc7-2812-4ad9-a9b5-c33e285d0f42 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:47 np0005486808 nova_compute[259627]: 2025-10-14 09:15:47.707 2 DEBUG nova.compute.manager [req-baafef8a-528c-4039-ad3f-56488dc60091 req-03143fc7-2812-4ad9-a9b5-c33e285d0f42 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] No waiting events found dispatching network-vif-plugged-7e42bf44-c1f8-49df-bd5f-a26abe43a832 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:15:47 np0005486808 nova_compute[259627]: 2025-10-14 09:15:47.707 2 WARNING nova.compute.manager [req-baafef8a-528c-4039-ad3f-56488dc60091 req-03143fc7-2812-4ad9-a9b5-c33e285d0f42 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Received unexpected event network-vif-plugged-7e42bf44-c1f8-49df-bd5f-a26abe43a832 for instance with vm_state stopped and task_state powering-on.#033[00m
Oct 14 05:15:47 np0005486808 nova_compute[259627]: 2025-10-14 09:15:47.708 2 DEBUG nova.compute.manager [req-baafef8a-528c-4039-ad3f-56488dc60091 req-03143fc7-2812-4ad9-a9b5-c33e285d0f42 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Received event network-vif-plugged-7e42bf44-c1f8-49df-bd5f-a26abe43a832 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:15:47 np0005486808 nova_compute[259627]: 2025-10-14 09:15:47.708 2 DEBUG oslo_concurrency.lockutils [req-baafef8a-528c-4039-ad3f-56488dc60091 req-03143fc7-2812-4ad9-a9b5-c33e285d0f42 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "33969555-fe06-4613-b244-d03c9b4180ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:15:47 np0005486808 nova_compute[259627]: 2025-10-14 09:15:47.709 2 DEBUG oslo_concurrency.lockutils [req-baafef8a-528c-4039-ad3f-56488dc60091 req-03143fc7-2812-4ad9-a9b5-c33e285d0f42 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:15:47 np0005486808 nova_compute[259627]: 2025-10-14 09:15:47.709 2 DEBUG oslo_concurrency.lockutils [req-baafef8a-528c-4039-ad3f-56488dc60091 req-03143fc7-2812-4ad9-a9b5-c33e285d0f42 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:47 np0005486808 nova_compute[259627]: 2025-10-14 09:15:47.710 2 DEBUG nova.compute.manager [req-baafef8a-528c-4039-ad3f-56488dc60091 req-03143fc7-2812-4ad9-a9b5-c33e285d0f42 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] No waiting events found dispatching network-vif-plugged-7e42bf44-c1f8-49df-bd5f-a26abe43a832 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:15:47 np0005486808 nova_compute[259627]: 2025-10-14 09:15:47.710 2 WARNING nova.compute.manager [req-baafef8a-528c-4039-ad3f-56488dc60091 req-03143fc7-2812-4ad9-a9b5-c33e285d0f42 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Received unexpected event network-vif-plugged-7e42bf44-c1f8-49df-bd5f-a26abe43a832 for instance with vm_state stopped and task_state powering-on.#033[00m
Oct 14 05:15:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:15:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1854: 305 pgs: 305 active+clean; 246 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.1 MiB/s wr, 40 op/s
Oct 14 05:15:48 np0005486808 nova_compute[259627]: 2025-10-14 09:15:48.022 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for 33969555-fe06-4613-b244-d03c9b4180ba due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct 14 05:15:48 np0005486808 nova_compute[259627]: 2025-10-14 09:15:48.023 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433348.0222712, 33969555-fe06-4613-b244-d03c9b4180ba => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:15:48 np0005486808 nova_compute[259627]: 2025-10-14 09:15:48.023 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] VM Resumed (Lifecycle Event)
Oct 14 05:15:48 np0005486808 nova_compute[259627]: 2025-10-14 09:15:48.026 2 DEBUG nova.compute.manager [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 05:15:48 np0005486808 nova_compute[259627]: 2025-10-14 09:15:48.030 2 INFO nova.virt.libvirt.driver [-] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Instance rebooted successfully.
Oct 14 05:15:48 np0005486808 nova_compute[259627]: 2025-10-14 09:15:48.030 2 DEBUG nova.compute.manager [None req-c188878a-fd09-46fb-b650-7d005a852c49 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:15:48 np0005486808 nova_compute[259627]: 2025-10-14 09:15:48.058 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:15:48 np0005486808 nova_compute[259627]: 2025-10-14 09:15:48.061 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 05:15:48 np0005486808 nova_compute[259627]: 2025-10-14 09:15:48.106 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] During sync_power_state the instance has a pending task (powering-on). Skip.
Oct 14 05:15:48 np0005486808 nova_compute[259627]: 2025-10-14 09:15:48.107 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433348.0231667, 33969555-fe06-4613-b244-d03c9b4180ba => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:15:48 np0005486808 nova_compute[259627]: 2025-10-14 09:15:48.107 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] VM Started (Lifecycle Event)
Oct 14 05:15:48 np0005486808 nova_compute[259627]: 2025-10-14 09:15:48.150 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:15:48 np0005486808 nova_compute[259627]: 2025-10-14 09:15:48.153 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 05:15:48 np0005486808 nova_compute[259627]: 2025-10-14 09:15:48.476 2 DEBUG oslo_concurrency.lockutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Acquiring lock "0c7c28d8-ba3d-471a-bf37-8ff1870d27c8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:15:48 np0005486808 nova_compute[259627]: 2025-10-14 09:15:48.478 2 DEBUG oslo_concurrency.lockutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "0c7c28d8-ba3d-471a-bf37-8ff1870d27c8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:15:48 np0005486808 nova_compute[259627]: 2025-10-14 09:15:48.494 2 DEBUG nova.compute.manager [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 05:15:48 np0005486808 nova_compute[259627]: 2025-10-14 09:15:48.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:15:48 np0005486808 nova_compute[259627]: 2025-10-14 09:15:48.579 2 DEBUG oslo_concurrency.lockutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:15:48 np0005486808 nova_compute[259627]: 2025-10-14 09:15:48.580 2 DEBUG oslo_concurrency.lockutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:15:48 np0005486808 nova_compute[259627]: 2025-10-14 09:15:48.587 2 DEBUG nova.virt.hardware [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 05:15:48 np0005486808 nova_compute[259627]: 2025-10-14 09:15:48.588 2 INFO nova.compute.claims [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Claim successful on node compute-0.ctlplane.example.com
Oct 14 05:15:48 np0005486808 nova_compute[259627]: 2025-10-14 09:15:48.741 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433333.7404816, e1aea504-3ecf-4273-a867-66afb39de726 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:15:48 np0005486808 nova_compute[259627]: 2025-10-14 09:15:48.741 2 INFO nova.compute.manager [-] [instance: e1aea504-3ecf-4273-a867-66afb39de726] VM Stopped (Lifecycle Event)
Oct 14 05:15:48 np0005486808 nova_compute[259627]: 2025-10-14 09:15:48.757 2 DEBUG oslo_concurrency.processutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:15:48 np0005486808 nova_compute[259627]: 2025-10-14 09:15:48.805 2 DEBUG nova.compute.manager [None req-2548896b-b81c-4e44-9374-e34c866ec5e4 - - - - - -] [instance: e1aea504-3ecf-4273-a867-66afb39de726] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:15:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:15:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2991585889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:15:49 np0005486808 nova_compute[259627]: 2025-10-14 09:15:49.213 2 DEBUG oslo_concurrency.processutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:15:49 np0005486808 nova_compute[259627]: 2025-10-14 09:15:49.218 2 DEBUG nova.compute.provider_tree [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 05:15:49 np0005486808 nova_compute[259627]: 2025-10-14 09:15:49.237 2 DEBUG nova.scheduler.client.report [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 05:15:49 np0005486808 nova_compute[259627]: 2025-10-14 09:15:49.264 2 DEBUG oslo_concurrency.lockutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:15:49 np0005486808 nova_compute[259627]: 2025-10-14 09:15:49.264 2 DEBUG nova.compute.manager [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 05:15:49 np0005486808 nova_compute[259627]: 2025-10-14 09:15:49.331 2 DEBUG nova.compute.manager [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct 14 05:15:49 np0005486808 nova_compute[259627]: 2025-10-14 09:15:49.357 2 INFO nova.virt.libvirt.driver [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 05:15:49 np0005486808 nova_compute[259627]: 2025-10-14 09:15:49.378 2 DEBUG nova.compute.manager [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 05:15:49 np0005486808 nova_compute[259627]: 2025-10-14 09:15:49.484 2 DEBUG nova.compute.manager [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 05:15:49 np0005486808 nova_compute[259627]: 2025-10-14 09:15:49.486 2 DEBUG nova.virt.libvirt.driver [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 05:15:49 np0005486808 nova_compute[259627]: 2025-10-14 09:15:49.487 2 INFO nova.virt.libvirt.driver [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Creating image(s)
Oct 14 05:15:49 np0005486808 nova_compute[259627]: 2025-10-14 09:15:49.524 2 DEBUG nova.storage.rbd_utils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] rbd image 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:15:49 np0005486808 nova_compute[259627]: 2025-10-14 09:15:49.564 2 DEBUG nova.storage.rbd_utils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] rbd image 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:15:49 np0005486808 nova_compute[259627]: 2025-10-14 09:15:49.603 2 DEBUG nova.storage.rbd_utils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] rbd image 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:15:49 np0005486808 nova_compute[259627]: 2025-10-14 09:15:49.609 2 DEBUG oslo_concurrency.processutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:15:49 np0005486808 nova_compute[259627]: 2025-10-14 09:15:49.714 2 DEBUG oslo_concurrency.processutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:15:49 np0005486808 nova_compute[259627]: 2025-10-14 09:15:49.715 2 DEBUG oslo_concurrency.lockutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:15:49 np0005486808 nova_compute[259627]: 2025-10-14 09:15:49.716 2 DEBUG oslo_concurrency.lockutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:15:49 np0005486808 nova_compute[259627]: 2025-10-14 09:15:49.717 2 DEBUG oslo_concurrency.lockutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:15:49 np0005486808 nova_compute[259627]: 2025-10-14 09:15:49.741 2 DEBUG nova.storage.rbd_utils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] rbd image 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:15:49 np0005486808 nova_compute[259627]: 2025-10-14 09:15:49.745 2 DEBUG oslo_concurrency.processutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:15:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1855: 305 pgs: 305 active+clean; 246 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct 14 05:15:50 np0005486808 nova_compute[259627]: 2025-10-14 09:15:50.062 2 DEBUG oslo_concurrency.processutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.317s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:15:50 np0005486808 nova_compute[259627]: 2025-10-14 09:15:50.064 2 DEBUG oslo_concurrency.lockutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Acquiring lock "548b81f4-df26-4f76-910f-5a14445c93c5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:15:50 np0005486808 nova_compute[259627]: 2025-10-14 09:15:50.065 2 DEBUG oslo_concurrency.lockutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "548b81f4-df26-4f76-910f-5a14445c93c5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:15:50 np0005486808 nova_compute[259627]: 2025-10-14 09:15:50.151 2 DEBUG nova.compute.manager [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 05:15:50 np0005486808 nova_compute[259627]: 2025-10-14 09:15:50.160 2 DEBUG nova.storage.rbd_utils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] resizing rbd image 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 05:15:50 np0005486808 nova_compute[259627]: 2025-10-14 09:15:50.221 2 DEBUG oslo_concurrency.lockutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:15:50 np0005486808 nova_compute[259627]: 2025-10-14 09:15:50.222 2 DEBUG oslo_concurrency.lockutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:15:50 np0005486808 nova_compute[259627]: 2025-10-14 09:15:50.261 2 DEBUG nova.virt.hardware [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 05:15:50 np0005486808 nova_compute[259627]: 2025-10-14 09:15:50.262 2 INFO nova.compute.claims [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Claim successful on node compute-0.ctlplane.example.com
Oct 14 05:15:50 np0005486808 nova_compute[259627]: 2025-10-14 09:15:50.269 2 DEBUG nova.objects.instance [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lazy-loading 'migration_context' on Instance uuid 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:15:50 np0005486808 nova_compute[259627]: 2025-10-14 09:15:50.292 2 DEBUG nova.virt.libvirt.driver [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 05:15:50 np0005486808 nova_compute[259627]: 2025-10-14 09:15:50.292 2 DEBUG nova.virt.libvirt.driver [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Ensure instance console log exists: /var/lib/nova/instances/0c7c28d8-ba3d-471a-bf37-8ff1870d27c8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 05:15:50 np0005486808 nova_compute[259627]: 2025-10-14 09:15:50.293 2 DEBUG oslo_concurrency.lockutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:15:50 np0005486808 nova_compute[259627]: 2025-10-14 09:15:50.293 2 DEBUG oslo_concurrency.lockutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:15:50 np0005486808 nova_compute[259627]: 2025-10-14 09:15:50.293 2 DEBUG oslo_concurrency.lockutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:15:50 np0005486808 nova_compute[259627]: 2025-10-14 09:15:50.295 2 DEBUG nova.virt.libvirt.driver [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 05:15:50 np0005486808 nova_compute[259627]: 2025-10-14 09:15:50.300 2 WARNING nova.virt.libvirt.driver [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 05:15:50 np0005486808 nova_compute[259627]: 2025-10-14 09:15:50.304 2 DEBUG nova.virt.libvirt.host [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 05:15:50 np0005486808 nova_compute[259627]: 2025-10-14 09:15:50.305 2 DEBUG nova.virt.libvirt.host [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 05:15:50 np0005486808 nova_compute[259627]: 2025-10-14 09:15:50.308 2 DEBUG nova.virt.libvirt.host [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 05:15:50 np0005486808 nova_compute[259627]: 2025-10-14 09:15:50.308 2 DEBUG nova.virt.libvirt.host [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 05:15:50 np0005486808 nova_compute[259627]: 2025-10-14 09:15:50.309 2 DEBUG nova.virt.libvirt.driver [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 05:15:50 np0005486808 nova_compute[259627]: 2025-10-14 09:15:50.309 2 DEBUG nova.virt.hardware [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 05:15:50 np0005486808 nova_compute[259627]: 2025-10-14 09:15:50.310 2 DEBUG nova.virt.hardware [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 05:15:50 np0005486808 nova_compute[259627]: 2025-10-14 09:15:50.310 2 DEBUG nova.virt.hardware [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 05:15:50 np0005486808 nova_compute[259627]: 2025-10-14 09:15:50.310 2 DEBUG nova.virt.hardware [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 05:15:50 np0005486808 nova_compute[259627]: 2025-10-14 09:15:50.310 2 DEBUG nova.virt.hardware [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 05:15:50 np0005486808 nova_compute[259627]: 2025-10-14 09:15:50.311 2 DEBUG nova.virt.hardware [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 05:15:50 np0005486808 nova_compute[259627]: 2025-10-14 09:15:50.311 2 DEBUG nova.virt.hardware [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 05:15:50 np0005486808 nova_compute[259627]: 2025-10-14 09:15:50.311 2 DEBUG nova.virt.hardware [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 05:15:50 np0005486808 nova_compute[259627]: 2025-10-14 09:15:50.312 2 DEBUG nova.virt.hardware [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 05:15:50 np0005486808 nova_compute[259627]: 2025-10-14 09:15:50.312 2 DEBUG nova.virt.hardware [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 05:15:50 np0005486808 nova_compute[259627]: 2025-10-14 09:15:50.312 2 DEBUG nova.virt.hardware [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 05:15:50 np0005486808 nova_compute[259627]: 2025-10-14 09:15:50.315 2 DEBUG oslo_concurrency.processutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:15:50 np0005486808 nova_compute[259627]: 2025-10-14 09:15:50.386 2 DEBUG oslo_concurrency.lockutils [None req-c5b9475e-aa9b-4636-9bfe-e48defe1f262 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "c28cafe5-40e7-47f9-8793-6193487fccc3" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:15:50 np0005486808 nova_compute[259627]: 2025-10-14 09:15:50.387 2 DEBUG oslo_concurrency.lockutils [None req-c5b9475e-aa9b-4636-9bfe-e48defe1f262 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "c28cafe5-40e7-47f9-8793-6193487fccc3" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:15:50 np0005486808 nova_compute[259627]: 2025-10-14 09:15:50.387 2 DEBUG nova.compute.manager [None req-c5b9475e-aa9b-4636-9bfe-e48defe1f262 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:15:50 np0005486808 nova_compute[259627]: 2025-10-14 09:15:50.391 2 DEBUG nova.compute.manager [None req-c5b9475e-aa9b-4636-9bfe-e48defe1f262 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 14 05:15:50 np0005486808 nova_compute[259627]: 2025-10-14 09:15:50.391 2 DEBUG nova.objects.instance [None req-c5b9475e-aa9b-4636-9bfe-e48defe1f262 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'flavor' on Instance uuid c28cafe5-40e7-47f9-8793-6193487fccc3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:15:50 np0005486808 nova_compute[259627]: 2025-10-14 09:15:50.421 2 DEBUG nova.virt.libvirt.driver [None req-c5b9475e-aa9b-4636-9bfe-e48defe1f262 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct 14 05:15:50 np0005486808 nova_compute[259627]: 2025-10-14 09:15:50.539 2 DEBUG oslo_concurrency.processutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:15:50 np0005486808 podman[362864]: 2025-10-14 09:15:50.667472156 +0000 UTC m=+0.082575317 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:15:50 np0005486808 podman[362863]: 2025-10-14 09:15:50.68997825 +0000 UTC m=+0.098536869 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 14 05:15:50 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:15:50 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1383577701' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:15:50 np0005486808 nova_compute[259627]: 2025-10-14 09:15:50.766 2 DEBUG oslo_concurrency.processutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:15:50 np0005486808 nova_compute[259627]: 2025-10-14 09:15:50.807 2 DEBUG nova.storage.rbd_utils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] rbd image 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:15:50 np0005486808 nova_compute[259627]: 2025-10-14 09:15:50.816 2 DEBUG oslo_concurrency.processutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:15:50 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:15:50 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3091754112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:15:51 np0005486808 nova_compute[259627]: 2025-10-14 09:15:51.002 2 DEBUG oslo_concurrency.processutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:15:51 np0005486808 nova_compute[259627]: 2025-10-14 09:15:51.024 2 DEBUG nova.compute.provider_tree [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:15:51 np0005486808 nova_compute[259627]: 2025-10-14 09:15:51.048 2 DEBUG nova.scheduler.client.report [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:15:51 np0005486808 nova_compute[259627]: 2025-10-14 09:15:51.082 2 DEBUG oslo_concurrency.lockutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:51 np0005486808 nova_compute[259627]: 2025-10-14 09:15:51.083 2 DEBUG nova.compute.manager [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:15:51 np0005486808 nova_compute[259627]: 2025-10-14 09:15:51.161 2 DEBUG nova.compute.manager [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Oct 14 05:15:51 np0005486808 nova_compute[259627]: 2025-10-14 09:15:51.219 2 INFO nova.virt.libvirt.driver [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:15:51 np0005486808 nova_compute[259627]: 2025-10-14 09:15:51.259 2 DEBUG nova.compute.manager [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:15:51 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:15:51 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3824327778' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:15:51 np0005486808 nova_compute[259627]: 2025-10-14 09:15:51.279 2 DEBUG oslo_concurrency.processutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:15:51 np0005486808 nova_compute[259627]: 2025-10-14 09:15:51.281 2 DEBUG nova.objects.instance [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lazy-loading 'pci_devices' on Instance uuid 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:15:51 np0005486808 nova_compute[259627]: 2025-10-14 09:15:51.306 2 DEBUG nova.virt.libvirt.driver [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:15:51 np0005486808 nova_compute[259627]:  <uuid>0c7c28d8-ba3d-471a-bf37-8ff1870d27c8</uuid>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:  <name>instance-0000006a</name>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:15:51 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:      <nova:name>tempest-ServerShowV247Test-server-1513228304</nova:name>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:15:50</nova:creationTime>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:15:51 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:        <nova:user uuid="c5d4a1c172e947e0a129b9f397f961cf">tempest-ServerShowV247Test-1595240674-project-member</nova:user>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:        <nova:project uuid="222bb7bdbd34453db62947152ca9b44a">tempest-ServerShowV247Test-1595240674</nova:project>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:      <nova:ports/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:      <entry name="serial">0c7c28d8-ba3d-471a-bf37-8ff1870d27c8</entry>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:      <entry name="uuid">0c7c28d8-ba3d-471a-bf37-8ff1870d27c8</entry>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:15:51 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/0c7c28d8-ba3d-471a-bf37-8ff1870d27c8_disk">
Oct 14 05:15:51 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:15:51 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:15:51 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/0c7c28d8-ba3d-471a-bf37-8ff1870d27c8_disk.config">
Oct 14 05:15:51 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:15:51 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:15:51 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/0c7c28d8-ba3d-471a-bf37-8ff1870d27c8/console.log" append="off"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:15:51 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:15:51 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:15:51 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:15:51 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:15:51 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:15:51 np0005486808 nova_compute[259627]: 2025-10-14 09:15:51.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:51 np0005486808 nova_compute[259627]: 2025-10-14 09:15:51.360 2 DEBUG nova.virt.libvirt.driver [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:15:51 np0005486808 nova_compute[259627]: 2025-10-14 09:15:51.361 2 DEBUG nova.virt.libvirt.driver [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:15:51 np0005486808 nova_compute[259627]: 2025-10-14 09:15:51.361 2 INFO nova.virt.libvirt.driver [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Using config drive#033[00m
Oct 14 05:15:51 np0005486808 nova_compute[259627]: 2025-10-14 09:15:51.408 2 DEBUG nova.storage.rbd_utils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] rbd image 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:15:51 np0005486808 nova_compute[259627]: 2025-10-14 09:15:51.419 2 DEBUG nova.compute.manager [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:15:51 np0005486808 nova_compute[259627]: 2025-10-14 09:15:51.421 2 DEBUG nova.virt.libvirt.driver [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:15:51 np0005486808 nova_compute[259627]: 2025-10-14 09:15:51.421 2 INFO nova.virt.libvirt.driver [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Creating image(s)#033[00m
Oct 14 05:15:51 np0005486808 nova_compute[259627]: 2025-10-14 09:15:51.450 2 DEBUG nova.storage.rbd_utils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] rbd image 548b81f4-df26-4f76-910f-5a14445c93c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:15:51 np0005486808 nova_compute[259627]: 2025-10-14 09:15:51.483 2 DEBUG nova.storage.rbd_utils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] rbd image 548b81f4-df26-4f76-910f-5a14445c93c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:15:51 np0005486808 nova_compute[259627]: 2025-10-14 09:15:51.515 2 DEBUG nova.storage.rbd_utils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] rbd image 548b81f4-df26-4f76-910f-5a14445c93c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:15:51 np0005486808 nova_compute[259627]: 2025-10-14 09:15:51.519 2 DEBUG oslo_concurrency.processutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:15:51 np0005486808 nova_compute[259627]: 2025-10-14 09:15:51.640 2 DEBUG oslo_concurrency.processutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:15:51 np0005486808 nova_compute[259627]: 2025-10-14 09:15:51.643 2 DEBUG oslo_concurrency.lockutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:15:51 np0005486808 nova_compute[259627]: 2025-10-14 09:15:51.644 2 DEBUG oslo_concurrency.lockutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:15:51 np0005486808 nova_compute[259627]: 2025-10-14 09:15:51.645 2 DEBUG oslo_concurrency.lockutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:15:51 np0005486808 nova_compute[259627]: 2025-10-14 09:15:51.671 2 DEBUG nova.storage.rbd_utils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] rbd image 548b81f4-df26-4f76-910f-5a14445c93c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:15:51 np0005486808 nova_compute[259627]: 2025-10-14 09:15:51.677 2 DEBUG oslo_concurrency.processutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 548b81f4-df26-4f76-910f-5a14445c93c5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:15:51 np0005486808 nova_compute[259627]: 2025-10-14 09:15:51.825 2 INFO nova.virt.libvirt.driver [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Creating config drive at /var/lib/nova/instances/0c7c28d8-ba3d-471a-bf37-8ff1870d27c8/disk.config
Oct 14 05:15:51 np0005486808 nova_compute[259627]: 2025-10-14 09:15:51.839 2 DEBUG oslo_concurrency.processutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0c7c28d8-ba3d-471a-bf37-8ff1870d27c8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp371n0rop execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:15:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1856: 305 pgs: 305 active+clean; 293 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 193 op/s
Oct 14 05:15:52 np0005486808 nova_compute[259627]: 2025-10-14 09:15:52.024 2 DEBUG oslo_concurrency.processutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0c7c28d8-ba3d-471a-bf37-8ff1870d27c8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp371n0rop" returned: 0 in 0.185s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:15:52 np0005486808 nova_compute[259627]: 2025-10-14 09:15:52.049 2 DEBUG nova.storage.rbd_utils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] rbd image 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:15:52 np0005486808 nova_compute[259627]: 2025-10-14 09:15:52.053 2 DEBUG oslo_concurrency.processutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0c7c28d8-ba3d-471a-bf37-8ff1870d27c8/disk.config 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:15:52 np0005486808 nova_compute[259627]: 2025-10-14 09:15:52.090 2 DEBUG oslo_concurrency.processutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 548b81f4-df26-4f76-910f-5a14445c93c5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:15:52 np0005486808 nova_compute[259627]: 2025-10-14 09:15:52.179 2 DEBUG nova.storage.rbd_utils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] resizing rbd image 548b81f4-df26-4f76-910f-5a14445c93c5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 05:15:52 np0005486808 nova_compute[259627]: 2025-10-14 09:15:52.220 2 DEBUG oslo_concurrency.processutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0c7c28d8-ba3d-471a-bf37-8ff1870d27c8/disk.config 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:15:52 np0005486808 nova_compute[259627]: 2025-10-14 09:15:52.220 2 INFO nova.virt.libvirt.driver [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Deleting local config drive /var/lib/nova/instances/0c7c28d8-ba3d-471a-bf37-8ff1870d27c8/disk.config because it was imported into RBD.
Oct 14 05:15:52 np0005486808 systemd-machined[214636]: New machine qemu-133-instance-0000006a.
Oct 14 05:15:52 np0005486808 systemd[1]: Started Virtual Machine qemu-133-instance-0000006a.
Oct 14 05:15:52 np0005486808 nova_compute[259627]: 2025-10-14 09:15:52.341 2 DEBUG nova.objects.instance [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lazy-loading 'migration_context' on Instance uuid 548b81f4-df26-4f76-910f-5a14445c93c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:15:52 np0005486808 nova_compute[259627]: 2025-10-14 09:15:52.372 2 DEBUG nova.virt.libvirt.driver [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 05:15:52 np0005486808 nova_compute[259627]: 2025-10-14 09:15:52.372 2 DEBUG nova.virt.libvirt.driver [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Ensure instance console log exists: /var/lib/nova/instances/548b81f4-df26-4f76-910f-5a14445c93c5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 05:15:52 np0005486808 nova_compute[259627]: 2025-10-14 09:15:52.373 2 DEBUG oslo_concurrency.lockutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:15:52 np0005486808 nova_compute[259627]: 2025-10-14 09:15:52.374 2 DEBUG oslo_concurrency.lockutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:15:52 np0005486808 nova_compute[259627]: 2025-10-14 09:15:52.374 2 DEBUG oslo_concurrency.lockutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:15:52 np0005486808 nova_compute[259627]: 2025-10-14 09:15:52.376 2 DEBUG nova.virt.libvirt.driver [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 05:15:52 np0005486808 nova_compute[259627]: 2025-10-14 09:15:52.381 2 WARNING nova.virt.libvirt.driver [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 05:15:52 np0005486808 nova_compute[259627]: 2025-10-14 09:15:52.386 2 DEBUG nova.virt.libvirt.host [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 05:15:52 np0005486808 nova_compute[259627]: 2025-10-14 09:15:52.387 2 DEBUG nova.virt.libvirt.host [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 05:15:52 np0005486808 nova_compute[259627]: 2025-10-14 09:15:52.390 2 DEBUG nova.virt.libvirt.host [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 05:15:52 np0005486808 nova_compute[259627]: 2025-10-14 09:15:52.390 2 DEBUG nova.virt.libvirt.host [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 05:15:52 np0005486808 nova_compute[259627]: 2025-10-14 09:15:52.390 2 DEBUG nova.virt.libvirt.driver [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 05:15:52 np0005486808 nova_compute[259627]: 2025-10-14 09:15:52.390 2 DEBUG nova.virt.hardware [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 05:15:52 np0005486808 nova_compute[259627]: 2025-10-14 09:15:52.391 2 DEBUG nova.virt.hardware [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 05:15:52 np0005486808 nova_compute[259627]: 2025-10-14 09:15:52.391 2 DEBUG nova.virt.hardware [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 05:15:52 np0005486808 nova_compute[259627]: 2025-10-14 09:15:52.391 2 DEBUG nova.virt.hardware [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 05:15:52 np0005486808 nova_compute[259627]: 2025-10-14 09:15:52.391 2 DEBUG nova.virt.hardware [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 05:15:52 np0005486808 nova_compute[259627]: 2025-10-14 09:15:52.391 2 DEBUG nova.virt.hardware [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 05:15:52 np0005486808 nova_compute[259627]: 2025-10-14 09:15:52.391 2 DEBUG nova.virt.hardware [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 05:15:52 np0005486808 nova_compute[259627]: 2025-10-14 09:15:52.392 2 DEBUG nova.virt.hardware [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 05:15:52 np0005486808 nova_compute[259627]: 2025-10-14 09:15:52.392 2 DEBUG nova.virt.hardware [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 05:15:52 np0005486808 nova_compute[259627]: 2025-10-14 09:15:52.392 2 DEBUG nova.virt.hardware [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 05:15:52 np0005486808 nova_compute[259627]: 2025-10-14 09:15:52.392 2 DEBUG nova.virt.hardware [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 05:15:52 np0005486808 nova_compute[259627]: 2025-10-14 09:15:52.394 2 DEBUG oslo_concurrency.processutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:15:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:15:52 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/897067073' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:15:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:15:52 np0005486808 nova_compute[259627]: 2025-10-14 09:15:52.876 2 DEBUG oslo_concurrency.processutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:15:52 np0005486808 nova_compute[259627]: 2025-10-14 09:15:52.913 2 DEBUG nova.storage.rbd_utils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] rbd image 548b81f4-df26-4f76-910f-5a14445c93c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:15:52 np0005486808 nova_compute[259627]: 2025-10-14 09:15:52.917 2 DEBUG oslo_concurrency.processutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:15:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:15:53 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2391217156' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:15:53 np0005486808 nova_compute[259627]: 2025-10-14 09:15:53.364 2 DEBUG oslo_concurrency.processutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:15:53 np0005486808 nova_compute[259627]: 2025-10-14 09:15:53.366 2 DEBUG nova.objects.instance [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lazy-loading 'pci_devices' on Instance uuid 548b81f4-df26-4f76-910f-5a14445c93c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:15:53 np0005486808 nova_compute[259627]: 2025-10-14 09:15:53.383 2 DEBUG nova.virt.libvirt.driver [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:15:53 np0005486808 nova_compute[259627]:  <uuid>548b81f4-df26-4f76-910f-5a14445c93c5</uuid>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:  <name>instance-0000006b</name>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:15:53 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:      <nova:name>tempest-ServerShowV247Test-server-1778827495</nova:name>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:15:52</nova:creationTime>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:15:53 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:        <nova:user uuid="c5d4a1c172e947e0a129b9f397f961cf">tempest-ServerShowV247Test-1595240674-project-member</nova:user>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:        <nova:project uuid="222bb7bdbd34453db62947152ca9b44a">tempest-ServerShowV247Test-1595240674</nova:project>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:      <nova:ports/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:      <entry name="serial">548b81f4-df26-4f76-910f-5a14445c93c5</entry>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:      <entry name="uuid">548b81f4-df26-4f76-910f-5a14445c93c5</entry>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:15:53 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/548b81f4-df26-4f76-910f-5a14445c93c5_disk">
Oct 14 05:15:53 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:15:53 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:15:53 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/548b81f4-df26-4f76-910f-5a14445c93c5_disk.config">
Oct 14 05:15:53 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:15:53 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:15:53 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/548b81f4-df26-4f76-910f-5a14445c93c5/console.log" append="off"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:15:53 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:15:53 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:15:53 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:15:53 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:15:53 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 05:15:53 np0005486808 nova_compute[259627]: 2025-10-14 09:15:53.430 2 DEBUG nova.virt.libvirt.driver [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 05:15:53 np0005486808 nova_compute[259627]: 2025-10-14 09:15:53.430 2 DEBUG nova.virt.libvirt.driver [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 05:15:53 np0005486808 nova_compute[259627]: 2025-10-14 09:15:53.430 2 INFO nova.virt.libvirt.driver [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Using config drive
Oct 14 05:15:53 np0005486808 nova_compute[259627]: 2025-10-14 09:15:53.451 2 DEBUG nova.storage.rbd_utils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] rbd image 548b81f4-df26-4f76-910f-5a14445c93c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:15:53 np0005486808 nova_compute[259627]: 2025-10-14 09:15:53.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:15:53 np0005486808 nova_compute[259627]: 2025-10-14 09:15:53.603 2 INFO nova.virt.libvirt.driver [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Creating config drive at /var/lib/nova/instances/548b81f4-df26-4f76-910f-5a14445c93c5/disk.config
Oct 14 05:15:53 np0005486808 nova_compute[259627]: 2025-10-14 09:15:53.608 2 DEBUG oslo_concurrency.processutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/548b81f4-df26-4f76-910f-5a14445c93c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpud5hobtm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:15:53 np0005486808 nova_compute[259627]: 2025-10-14 09:15:53.746 2 DEBUG oslo_concurrency.processutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/548b81f4-df26-4f76-910f-5a14445c93c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpud5hobtm" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:15:53 np0005486808 nova_compute[259627]: 2025-10-14 09:15:53.782 2 DEBUG nova.storage.rbd_utils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] rbd image 548b81f4-df26-4f76-910f-5a14445c93c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:15:53 np0005486808 nova_compute[259627]: 2025-10-14 09:15:53.785 2 DEBUG oslo_concurrency.processutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/548b81f4-df26-4f76-910f-5a14445c93c5/disk.config 548b81f4-df26-4f76-910f-5a14445c93c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:15:53 np0005486808 nova_compute[259627]: 2025-10-14 09:15:53.974 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:15:53 np0005486808 nova_compute[259627]: 2025-10-14 09:15:53.986 2 DEBUG oslo_concurrency.processutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/548b81f4-df26-4f76-910f-5a14445c93c5/disk.config 548b81f4-df26-4f76-910f-5a14445c93c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:15:53 np0005486808 nova_compute[259627]: 2025-10-14 09:15:53.987 2 INFO nova.virt.libvirt.driver [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Deleting local config drive /var/lib/nova/instances/548b81f4-df26-4f76-910f-5a14445c93c5/disk.config because it was imported into RBD.#033[00m
Oct 14 05:15:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1857: 305 pgs: 305 active+clean; 293 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 163 op/s
Oct 14 05:15:54 np0005486808 systemd-machined[214636]: New machine qemu-134-instance-0000006b.
Oct 14 05:15:54 np0005486808 systemd[1]: Started Virtual Machine qemu-134-instance-0000006b.
Oct 14 05:15:54 np0005486808 nova_compute[259627]: 2025-10-14 09:15:54.633 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433354.6325357, 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:15:54 np0005486808 nova_compute[259627]: 2025-10-14 09:15:54.634 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:15:54 np0005486808 nova_compute[259627]: 2025-10-14 09:15:54.636 2 DEBUG nova.compute.manager [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:15:54 np0005486808 nova_compute[259627]: 2025-10-14 09:15:54.636 2 DEBUG nova.virt.libvirt.driver [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:15:54 np0005486808 nova_compute[259627]: 2025-10-14 09:15:54.639 2 INFO nova.virt.libvirt.driver [-] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Instance spawned successfully.#033[00m
Oct 14 05:15:54 np0005486808 nova_compute[259627]: 2025-10-14 09:15:54.640 2 DEBUG nova.virt.libvirt.driver [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:15:54 np0005486808 nova_compute[259627]: 2025-10-14 09:15:54.655 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:15:54 np0005486808 nova_compute[259627]: 2025-10-14 09:15:54.659 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:15:54 np0005486808 nova_compute[259627]: 2025-10-14 09:15:54.662 2 DEBUG nova.virt.libvirt.driver [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:15:54 np0005486808 nova_compute[259627]: 2025-10-14 09:15:54.663 2 DEBUG nova.virt.libvirt.driver [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:15:54 np0005486808 nova_compute[259627]: 2025-10-14 09:15:54.663 2 DEBUG nova.virt.libvirt.driver [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:15:54 np0005486808 nova_compute[259627]: 2025-10-14 09:15:54.663 2 DEBUG nova.virt.libvirt.driver [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:15:54 np0005486808 nova_compute[259627]: 2025-10-14 09:15:54.663 2 DEBUG nova.virt.libvirt.driver [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:15:54 np0005486808 nova_compute[259627]: 2025-10-14 09:15:54.664 2 DEBUG nova.virt.libvirt.driver [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:15:54 np0005486808 nova_compute[259627]: 2025-10-14 09:15:54.697 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:15:54 np0005486808 nova_compute[259627]: 2025-10-14 09:15:54.698 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433354.6356375, 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:15:54 np0005486808 nova_compute[259627]: 2025-10-14 09:15:54.698 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] VM Started (Lifecycle Event)#033[00m
Oct 14 05:15:54 np0005486808 nova_compute[259627]: 2025-10-14 09:15:54.727 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:15:54 np0005486808 nova_compute[259627]: 2025-10-14 09:15:54.729 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:15:54 np0005486808 nova_compute[259627]: 2025-10-14 09:15:54.739 2 INFO nova.compute.manager [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Took 5.25 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:15:54 np0005486808 nova_compute[259627]: 2025-10-14 09:15:54.739 2 DEBUG nova.compute.manager [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:15:54 np0005486808 nova_compute[259627]: 2025-10-14 09:15:54.750 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:15:54 np0005486808 nova_compute[259627]: 2025-10-14 09:15:54.787 2 INFO nova.compute.manager [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Took 6.25 seconds to build instance.#033[00m
Oct 14 05:15:54 np0005486808 nova_compute[259627]: 2025-10-14 09:15:54.803 2 DEBUG oslo_concurrency.lockutils [None req-67eb3ecc-96ed-462c-a047-2e58db2bfc1c c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "0c7c28d8-ba3d-471a-bf37-8ff1870d27c8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.325s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:55 np0005486808 nova_compute[259627]: 2025-10-14 09:15:55.174 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433355.1738203, 548b81f4-df26-4f76-910f-5a14445c93c5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:15:55 np0005486808 nova_compute[259627]: 2025-10-14 09:15:55.174 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:15:55 np0005486808 nova_compute[259627]: 2025-10-14 09:15:55.175 2 DEBUG nova.compute.manager [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:15:55 np0005486808 nova_compute[259627]: 2025-10-14 09:15:55.176 2 DEBUG nova.virt.libvirt.driver [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:15:55 np0005486808 nova_compute[259627]: 2025-10-14 09:15:55.178 2 INFO nova.virt.libvirt.driver [-] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Instance spawned successfully.#033[00m
Oct 14 05:15:55 np0005486808 nova_compute[259627]: 2025-10-14 09:15:55.179 2 DEBUG nova.virt.libvirt.driver [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:15:55 np0005486808 nova_compute[259627]: 2025-10-14 09:15:55.215 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:15:55 np0005486808 nova_compute[259627]: 2025-10-14 09:15:55.218 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:15:55 np0005486808 nova_compute[259627]: 2025-10-14 09:15:55.227 2 DEBUG nova.virt.libvirt.driver [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:15:55 np0005486808 nova_compute[259627]: 2025-10-14 09:15:55.228 2 DEBUG nova.virt.libvirt.driver [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:15:55 np0005486808 nova_compute[259627]: 2025-10-14 09:15:55.228 2 DEBUG nova.virt.libvirt.driver [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:15:55 np0005486808 nova_compute[259627]: 2025-10-14 09:15:55.229 2 DEBUG nova.virt.libvirt.driver [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:15:55 np0005486808 nova_compute[259627]: 2025-10-14 09:15:55.230 2 DEBUG nova.virt.libvirt.driver [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:15:55 np0005486808 nova_compute[259627]: 2025-10-14 09:15:55.230 2 DEBUG nova.virt.libvirt.driver [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:15:55 np0005486808 nova_compute[259627]: 2025-10-14 09:15:55.272 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:15:55 np0005486808 nova_compute[259627]: 2025-10-14 09:15:55.272 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433355.173933, 548b81f4-df26-4f76-910f-5a14445c93c5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:15:55 np0005486808 nova_compute[259627]: 2025-10-14 09:15:55.272 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] VM Started (Lifecycle Event)#033[00m
Oct 14 05:15:55 np0005486808 nova_compute[259627]: 2025-10-14 09:15:55.298 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:15:55 np0005486808 nova_compute[259627]: 2025-10-14 09:15:55.301 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:15:55 np0005486808 nova_compute[259627]: 2025-10-14 09:15:55.310 2 INFO nova.compute.manager [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Took 3.89 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:15:55 np0005486808 nova_compute[259627]: 2025-10-14 09:15:55.311 2 DEBUG nova.compute.manager [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:15:55 np0005486808 nova_compute[259627]: 2025-10-14 09:15:55.321 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:15:55 np0005486808 nova_compute[259627]: 2025-10-14 09:15:55.371 2 INFO nova.compute.manager [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Took 5.17 seconds to build instance.#033[00m
Oct 14 05:15:55 np0005486808 nova_compute[259627]: 2025-10-14 09:15:55.409 2 DEBUG oslo_concurrency.lockutils [None req-7d404a4f-c3cc-41ff-9633-f6057e5538cf c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "548b81f4-df26-4f76-910f-5a14445c93c5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.344s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:15:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1858: 305 pgs: 305 active+clean; 339 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 3.6 MiB/s wr, 237 op/s
Oct 14 05:15:56 np0005486808 nova_compute[259627]: 2025-10-14 09:15:56.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:57 np0005486808 nova_compute[259627]: 2025-10-14 09:15:57.205 2 INFO nova.compute.manager [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Rebuilding instance#033[00m
Oct 14 05:15:57 np0005486808 nova_compute[259627]: 2025-10-14 09:15:57.471 2 DEBUG nova.objects.instance [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lazy-loading 'trusted_certs' on Instance uuid 548b81f4-df26-4f76-910f-5a14445c93c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:15:57 np0005486808 nova_compute[259627]: 2025-10-14 09:15:57.487 2 DEBUG nova.compute.manager [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:15:57 np0005486808 nova_compute[259627]: 2025-10-14 09:15:57.539 2 DEBUG nova.objects.instance [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lazy-loading 'pci_requests' on Instance uuid 548b81f4-df26-4f76-910f-5a14445c93c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:15:57 np0005486808 nova_compute[259627]: 2025-10-14 09:15:57.552 2 DEBUG nova.objects.instance [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lazy-loading 'pci_devices' on Instance uuid 548b81f4-df26-4f76-910f-5a14445c93c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:15:57 np0005486808 nova_compute[259627]: 2025-10-14 09:15:57.562 2 DEBUG nova.objects.instance [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lazy-loading 'resources' on Instance uuid 548b81f4-df26-4f76-910f-5a14445c93c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:15:57 np0005486808 nova_compute[259627]: 2025-10-14 09:15:57.575 2 DEBUG nova.objects.instance [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lazy-loading 'migration_context' on Instance uuid 548b81f4-df26-4f76-910f-5a14445c93c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:15:57 np0005486808 nova_compute[259627]: 2025-10-14 09:15:57.592 2 DEBUG nova.objects.instance [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct 14 05:15:57 np0005486808 nova_compute[259627]: 2025-10-14 09:15:57.598 2 DEBUG nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct 14 05:15:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:15:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1859: 305 pgs: 305 active+clean; 339 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 3.6 MiB/s wr, 233 op/s
Oct 14 05:15:58 np0005486808 nova_compute[259627]: 2025-10-14 09:15:58.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:15:59 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:59Z|00109|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d1:b1:69 10.100.0.4
Oct 14 05:15:59 np0005486808 ovn_controller[152662]: 2025-10-14T09:15:59Z|00110|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d1:b1:69 10.100.0.4
Oct 14 05:16:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1860: 305 pgs: 305 active+clean; 339 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 3.6 MiB/s wr, 233 op/s
Oct 14 05:16:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:00.416 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:16:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:00.417 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 14 05:16:00 np0005486808 nova_compute[259627]: 2025-10-14 09:16:00.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:00 np0005486808 ovn_controller[152662]: 2025-10-14T09:16:00Z|00111|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:20:8d:8f 10.100.0.4
Oct 14 05:16:00 np0005486808 nova_compute[259627]: 2025-10-14 09:16:00.673 2 DEBUG nova.virt.libvirt.driver [None req-c5b9475e-aa9b-4636-9bfe-e48defe1f262 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct 14 05:16:01 np0005486808 nova_compute[259627]: 2025-10-14 09:16:01.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1861: 305 pgs: 305 active+clean; 372 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 8.5 MiB/s rd, 5.7 MiB/s wr, 449 op/s
Oct 14 05:16:02 np0005486808 podman[363423]: 2025-10-14 09:16:02.684752639 +0000 UTC m=+0.082473954 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:16:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:16:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:16:02 np0005486808 podman[363422]: 2025-10-14 09:16:02.783660027 +0000 UTC m=+0.173380165 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251009, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2)
Oct 14 05:16:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:16:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:16:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:16:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:16:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:16:02 np0005486808 kernel: tap114d4e63-ee (unregistering): left promiscuous mode
Oct 14 05:16:02 np0005486808 NetworkManager[44885]: <info>  [1760433362.9541] device (tap114d4e63-ee): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:16:02 np0005486808 nova_compute[259627]: 2025-10-14 09:16:02.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:02 np0005486808 ovn_controller[152662]: 2025-10-14T09:16:02Z|01128|binding|INFO|Releasing lport 114d4e63-ee15-4133-b8bc-9cd2b1861072 from this chassis (sb_readonly=0)
Oct 14 05:16:02 np0005486808 ovn_controller[152662]: 2025-10-14T09:16:02Z|01129|binding|INFO|Setting lport 114d4e63-ee15-4133-b8bc-9cd2b1861072 down in Southbound
Oct 14 05:16:02 np0005486808 ovn_controller[152662]: 2025-10-14T09:16:02Z|01130|binding|INFO|Removing iface tap114d4e63-ee ovn-installed in OVS
Oct 14 05:16:02 np0005486808 nova_compute[259627]: 2025-10-14 09:16:02.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:02.980 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d1:b1:69 10.100.0.4'], port_security=['fa:16:3e:d1:b1:69 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'c28cafe5-40e7-47f9-8793-6193487fccc3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbecee11-4892-4e36-88d8-98879af7bb1e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a080fae2f3c4e39a6cca225203f5ec6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '03d7ceab-aac8-44ec-88a3-f0ef79d050f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d77764d-de2c-4408-8882-77d03cb7288a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=114d4e63-ee15-4133-b8bc-9cd2b1861072) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:16:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:02.982 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 114d4e63-ee15-4133-b8bc-9cd2b1861072 in datapath fbecee11-4892-4e36-88d8-98879af7bb1e unbound from our chassis#033[00m
Oct 14 05:16:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:02.984 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbecee11-4892-4e36-88d8-98879af7bb1e#033[00m
Oct 14 05:16:03 np0005486808 nova_compute[259627]: 2025-10-14 09:16:03.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:03 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:03.004 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2364bea3-9f6b-4b9d-8c58-cae1a17de8ac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:16:03 np0005486808 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d00000069.scope: Deactivated successfully.
Oct 14 05:16:03 np0005486808 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d00000069.scope: Consumed 13.216s CPU time.
Oct 14 05:16:03 np0005486808 systemd-machined[214636]: Machine qemu-131-instance-00000069 terminated.
Oct 14 05:16:03 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:03.037 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[fdaa386b-045b-46f5-b7c5-aa77661ba5ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:16:03 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:03.040 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e49e9118-5f4a-4182-8953-02830729bff2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:16:03 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:03.067 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4dc8eba9-653b-46a2-8897-893199a0f2b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:16:03 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:03.086 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[15b21982-7360-4376-bd70-ea8c7a7fb30d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbecee11-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:b3:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 27, 'rx_bytes': 1000, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 27, 'rx_bytes': 1000, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 700079, 'reachable_time': 38382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363475, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:16:03 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:03.104 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a5dde9ff-2028-462a-942e-fdeb8130f290]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700095, 'tstamp': 700095}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363476, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfbecee11-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 700099, 'tstamp': 700099}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363476, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:16:03 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:03.106 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbecee11-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:16:03 np0005486808 nova_compute[259627]: 2025-10-14 09:16:03.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:03 np0005486808 nova_compute[259627]: 2025-10-14 09:16:03.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:03 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:03.111 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbecee11-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:16:03 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:03.112 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:16:03 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:03.112 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbecee11-40, col_values=(('external_ids', {'iface-id': 'b1a95e67-ca39-4a56-9a91-3165df166ea9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:16:03 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:03.112 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:16:03 np0005486808 nova_compute[259627]: 2025-10-14 09:16:03.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:03 np0005486808 nova_compute[259627]: 2025-10-14 09:16:03.694 2 INFO nova.virt.libvirt.driver [None req-c5b9475e-aa9b-4636-9bfe-e48defe1f262 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Instance shutdown successfully after 13 seconds.#033[00m
Oct 14 05:16:03 np0005486808 nova_compute[259627]: 2025-10-14 09:16:03.702 2 INFO nova.virt.libvirt.driver [-] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Instance destroyed successfully.#033[00m
Oct 14 05:16:03 np0005486808 nova_compute[259627]: 2025-10-14 09:16:03.704 2 DEBUG nova.objects.instance [None req-c5b9475e-aa9b-4636-9bfe-e48defe1f262 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'numa_topology' on Instance uuid c28cafe5-40e7-47f9-8793-6193487fccc3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:16:03 np0005486808 nova_compute[259627]: 2025-10-14 09:16:03.724 2 DEBUG nova.compute.manager [None req-c5b9475e-aa9b-4636-9bfe-e48defe1f262 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:16:03 np0005486808 nova_compute[259627]: 2025-10-14 09:16:03.775 2 DEBUG oslo_concurrency.lockutils [None req-c5b9475e-aa9b-4636-9bfe-e48defe1f262 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "c28cafe5-40e7-47f9-8793-6193487fccc3" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.388s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:16:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1862: 305 pgs: 305 active+clean; 372 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 3.9 MiB/s wr, 290 op/s
Oct 14 05:16:05 np0005486808 nova_compute[259627]: 2025-10-14 09:16:05.059 2 DEBUG nova.compute.manager [req-27b00d34-c061-4420-bf2c-4eb38d74dc28 req-ce824c91-b037-457d-9aaa-c7f1732db9c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Received event network-vif-unplugged-114d4e63-ee15-4133-b8bc-9cd2b1861072 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:16:05 np0005486808 nova_compute[259627]: 2025-10-14 09:16:05.061 2 DEBUG oslo_concurrency.lockutils [req-27b00d34-c061-4420-bf2c-4eb38d74dc28 req-ce824c91-b037-457d-9aaa-c7f1732db9c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c28cafe5-40e7-47f9-8793-6193487fccc3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:16:05 np0005486808 nova_compute[259627]: 2025-10-14 09:16:05.062 2 DEBUG oslo_concurrency.lockutils [req-27b00d34-c061-4420-bf2c-4eb38d74dc28 req-ce824c91-b037-457d-9aaa-c7f1732db9c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c28cafe5-40e7-47f9-8793-6193487fccc3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:16:05 np0005486808 nova_compute[259627]: 2025-10-14 09:16:05.063 2 DEBUG oslo_concurrency.lockutils [req-27b00d34-c061-4420-bf2c-4eb38d74dc28 req-ce824c91-b037-457d-9aaa-c7f1732db9c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c28cafe5-40e7-47f9-8793-6193487fccc3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:16:05 np0005486808 nova_compute[259627]: 2025-10-14 09:16:05.063 2 DEBUG nova.compute.manager [req-27b00d34-c061-4420-bf2c-4eb38d74dc28 req-ce824c91-b037-457d-9aaa-c7f1732db9c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] No waiting events found dispatching network-vif-unplugged-114d4e63-ee15-4133-b8bc-9cd2b1861072 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:16:05 np0005486808 nova_compute[259627]: 2025-10-14 09:16:05.064 2 WARNING nova.compute.manager [req-27b00d34-c061-4420-bf2c-4eb38d74dc28 req-ce824c91-b037-457d-9aaa-c7f1732db9c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Received unexpected event network-vif-unplugged-114d4e63-ee15-4133-b8bc-9cd2b1861072 for instance with vm_state stopped and task_state None.#033[00m
Oct 14 05:16:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:16:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2509509054' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:16:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:16:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2509509054' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:16:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:16:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:16:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:16:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:16:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:16:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:16:06 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 35464b58-09f3-4df3-bb0e-00cd93f45fda does not exist
Oct 14 05:16:06 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev ab35fca6-05df-460b-b2db-c9d0589b620c does not exist
Oct 14 05:16:06 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 0784b6c0-3fae-45cd-b51f-062462d6c15a does not exist
Oct 14 05:16:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1863: 305 pgs: 305 active+clean; 374 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 4.0 MiB/s wr, 294 op/s
Oct 14 05:16:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:16:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:16:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:16:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:16:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:16:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:16:06 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Oct 14 05:16:06 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:16:06 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:16:06 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:16:06 np0005486808 nova_compute[259627]: 2025-10-14 09:16:06.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:06.454 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:16:06 np0005486808 nova_compute[259627]: 2025-10-14 09:16:06.741 2 DEBUG oslo_concurrency.lockutils [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "c28cafe5-40e7-47f9-8793-6193487fccc3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:16:06 np0005486808 nova_compute[259627]: 2025-10-14 09:16:06.741 2 DEBUG oslo_concurrency.lockutils [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "c28cafe5-40e7-47f9-8793-6193487fccc3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:16:06 np0005486808 nova_compute[259627]: 2025-10-14 09:16:06.741 2 DEBUG oslo_concurrency.lockutils [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "c28cafe5-40e7-47f9-8793-6193487fccc3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:16:06 np0005486808 nova_compute[259627]: 2025-10-14 09:16:06.741 2 DEBUG oslo_concurrency.lockutils [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "c28cafe5-40e7-47f9-8793-6193487fccc3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:16:06 np0005486808 nova_compute[259627]: 2025-10-14 09:16:06.742 2 DEBUG oslo_concurrency.lockutils [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "c28cafe5-40e7-47f9-8793-6193487fccc3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:16:06 np0005486808 nova_compute[259627]: 2025-10-14 09:16:06.742 2 INFO nova.compute.manager [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Terminating instance#033[00m
Oct 14 05:16:06 np0005486808 nova_compute[259627]: 2025-10-14 09:16:06.743 2 DEBUG nova.compute.manager [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:16:06 np0005486808 nova_compute[259627]: 2025-10-14 09:16:06.750 2 INFO nova.virt.libvirt.driver [-] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Instance destroyed successfully.#033[00m
Oct 14 05:16:06 np0005486808 nova_compute[259627]: 2025-10-14 09:16:06.751 2 DEBUG nova.objects.instance [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'resources' on Instance uuid c28cafe5-40e7-47f9-8793-6193487fccc3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:16:06 np0005486808 nova_compute[259627]: 2025-10-14 09:16:06.772 2 DEBUG nova.virt.libvirt.vif [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:15:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-271457736',display_name='tempest-Íñstáñcé-775666780',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-271457736',id=105,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:15:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='0a080fae2f3c4e39a6cca225203f5ec6',ramdisk_id='',reservation_id='r-tuwn13dt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_pr
oject_name='tempest-ServersTestJSON-2060951674',owner_user_name='tempest-ServersTestJSON-2060951674-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:16:05Z,user_data=None,user_id='a287ef08fc5c4f218bf06cd2c7ed021e',uuid=c28cafe5-40e7-47f9-8793-6193487fccc3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "114d4e63-ee15-4133-b8bc-9cd2b1861072", "address": "fa:16:3e:d1:b1:69", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap114d4e63-ee", "ovs_interfaceid": "114d4e63-ee15-4133-b8bc-9cd2b1861072", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:16:06 np0005486808 nova_compute[259627]: 2025-10-14 09:16:06.773 2 DEBUG nova.network.os_vif_util [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converting VIF {"id": "114d4e63-ee15-4133-b8bc-9cd2b1861072", "address": "fa:16:3e:d1:b1:69", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap114d4e63-ee", "ovs_interfaceid": "114d4e63-ee15-4133-b8bc-9cd2b1861072", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:16:06 np0005486808 nova_compute[259627]: 2025-10-14 09:16:06.774 2 DEBUG nova.network.os_vif_util [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d1:b1:69,bridge_name='br-int',has_traffic_filtering=True,id=114d4e63-ee15-4133-b8bc-9cd2b1861072,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap114d4e63-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:16:06 np0005486808 nova_compute[259627]: 2025-10-14 09:16:06.774 2 DEBUG os_vif [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:b1:69,bridge_name='br-int',has_traffic_filtering=True,id=114d4e63-ee15-4133-b8bc-9cd2b1861072,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap114d4e63-ee') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:16:06 np0005486808 nova_compute[259627]: 2025-10-14 09:16:06.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:06 np0005486808 nova_compute[259627]: 2025-10-14 09:16:06.777 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap114d4e63-ee, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:16:06 np0005486808 nova_compute[259627]: 2025-10-14 09:16:06.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:06 np0005486808 nova_compute[259627]: 2025-10-14 09:16:06.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:16:06 np0005486808 nova_compute[259627]: 2025-10-14 09:16:06.785 2 INFO os_vif [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:b1:69,bridge_name='br-int',has_traffic_filtering=True,id=114d4e63-ee15-4133-b8bc-9cd2b1861072,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap114d4e63-ee')#033[00m
Oct 14 05:16:06 np0005486808 podman[363760]: 2025-10-14 09:16:06.849210308 +0000 UTC m=+0.057261262 container create 7e785e7910e4eafabc39a0fbac82558734712a9c56985c2dc86370d8a3d749b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_almeida, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Oct 14 05:16:06 np0005486808 systemd[1]: Started libpod-conmon-7e785e7910e4eafabc39a0fbac82558734712a9c56985c2dc86370d8a3d749b2.scope.
Oct 14 05:16:06 np0005486808 podman[363760]: 2025-10-14 09:16:06.817595139 +0000 UTC m=+0.025646123 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:16:06 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:16:06 np0005486808 podman[363760]: 2025-10-14 09:16:06.998552129 +0000 UTC m=+0.206603163 container init 7e785e7910e4eafabc39a0fbac82558734712a9c56985c2dc86370d8a3d749b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_almeida, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:16:07 np0005486808 podman[363760]: 2025-10-14 09:16:07.009284003 +0000 UTC m=+0.217334967 container start 7e785e7910e4eafabc39a0fbac82558734712a9c56985c2dc86370d8a3d749b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_almeida, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 14 05:16:07 np0005486808 podman[363760]: 2025-10-14 09:16:07.02011345 +0000 UTC m=+0.228164484 container attach 7e785e7910e4eafabc39a0fbac82558734712a9c56985c2dc86370d8a3d749b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 14 05:16:07 np0005486808 suspicious_almeida[363795]: 167 167
Oct 14 05:16:07 np0005486808 systemd[1]: libpod-7e785e7910e4eafabc39a0fbac82558734712a9c56985c2dc86370d8a3d749b2.scope: Deactivated successfully.
Oct 14 05:16:07 np0005486808 podman[363760]: 2025-10-14 09:16:07.023090953 +0000 UTC m=+0.231141897 container died 7e785e7910e4eafabc39a0fbac82558734712a9c56985c2dc86370d8a3d749b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_almeida, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:16:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:07.031 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:16:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:07.032 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:16:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:07.034 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:16:07 np0005486808 systemd[1]: var-lib-containers-storage-overlay-0e7aec122f5704b5bcccfcfd46ff2938f7ca56867b2e1a74ea2b70b8edd07709-merged.mount: Deactivated successfully.
Oct 14 05:16:07 np0005486808 podman[363760]: 2025-10-14 09:16:07.078652802 +0000 UTC m=+0.286703746 container remove 7e785e7910e4eafabc39a0fbac82558734712a9c56985c2dc86370d8a3d749b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_almeida, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 14 05:16:07 np0005486808 systemd[1]: libpod-conmon-7e785e7910e4eafabc39a0fbac82558734712a9c56985c2dc86370d8a3d749b2.scope: Deactivated successfully.
Oct 14 05:16:07 np0005486808 nova_compute[259627]: 2025-10-14 09:16:07.185 2 DEBUG nova.compute.manager [req-26c6909a-f40f-43a2-9e80-4a04c47f1519 req-21830682-fc80-44a9-bbd4-d776f3bafbb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Received event network-vif-plugged-114d4e63-ee15-4133-b8bc-9cd2b1861072 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:16:07 np0005486808 nova_compute[259627]: 2025-10-14 09:16:07.186 2 DEBUG oslo_concurrency.lockutils [req-26c6909a-f40f-43a2-9e80-4a04c47f1519 req-21830682-fc80-44a9-bbd4-d776f3bafbb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c28cafe5-40e7-47f9-8793-6193487fccc3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:16:07 np0005486808 nova_compute[259627]: 2025-10-14 09:16:07.186 2 DEBUG oslo_concurrency.lockutils [req-26c6909a-f40f-43a2-9e80-4a04c47f1519 req-21830682-fc80-44a9-bbd4-d776f3bafbb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c28cafe5-40e7-47f9-8793-6193487fccc3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:16:07 np0005486808 nova_compute[259627]: 2025-10-14 09:16:07.187 2 DEBUG oslo_concurrency.lockutils [req-26c6909a-f40f-43a2-9e80-4a04c47f1519 req-21830682-fc80-44a9-bbd4-d776f3bafbb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c28cafe5-40e7-47f9-8793-6193487fccc3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:16:07 np0005486808 nova_compute[259627]: 2025-10-14 09:16:07.187 2 DEBUG nova.compute.manager [req-26c6909a-f40f-43a2-9e80-4a04c47f1519 req-21830682-fc80-44a9-bbd4-d776f3bafbb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] No waiting events found dispatching network-vif-plugged-114d4e63-ee15-4133-b8bc-9cd2b1861072 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:16:07 np0005486808 nova_compute[259627]: 2025-10-14 09:16:07.188 2 WARNING nova.compute.manager [req-26c6909a-f40f-43a2-9e80-4a04c47f1519 req-21830682-fc80-44a9-bbd4-d776f3bafbb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Received unexpected event network-vif-plugged-114d4e63-ee15-4133-b8bc-9cd2b1861072 for instance with vm_state stopped and task_state deleting.#033[00m
Oct 14 05:16:07 np0005486808 podman[363819]: 2025-10-14 09:16:07.28753696 +0000 UTC m=+0.058052322 container create 09f061e3f235550d08b1da0c3595454c8901ae60b7b53a088c9fb1026e0eed81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_goldstine, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:16:07 np0005486808 systemd[1]: Started libpod-conmon-09f061e3f235550d08b1da0c3595454c8901ae60b7b53a088c9fb1026e0eed81.scope.
Oct 14 05:16:07 np0005486808 podman[363819]: 2025-10-14 09:16:07.265009335 +0000 UTC m=+0.035524717 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:16:07 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:16:07 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18e4485de7e2fc8f27de4ca2d9315b1b8f6b55307925d110a4d7d753f4729a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:16:07 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18e4485de7e2fc8f27de4ca2d9315b1b8f6b55307925d110a4d7d753f4729a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:16:07 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18e4485de7e2fc8f27de4ca2d9315b1b8f6b55307925d110a4d7d753f4729a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:16:07 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18e4485de7e2fc8f27de4ca2d9315b1b8f6b55307925d110a4d7d753f4729a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:16:07 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18e4485de7e2fc8f27de4ca2d9315b1b8f6b55307925d110a4d7d753f4729a0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:16:07 np0005486808 podman[363819]: 2025-10-14 09:16:07.387658848 +0000 UTC m=+0.158174210 container init 09f061e3f235550d08b1da0c3595454c8901ae60b7b53a088c9fb1026e0eed81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_goldstine, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:16:07 np0005486808 podman[363819]: 2025-10-14 09:16:07.396491945 +0000 UTC m=+0.167007317 container start 09f061e3f235550d08b1da0c3595454c8901ae60b7b53a088c9fb1026e0eed81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:16:07 np0005486808 podman[363819]: 2025-10-14 09:16:07.400572726 +0000 UTC m=+0.171088088 container attach 09f061e3f235550d08b1da0c3595454c8901ae60b7b53a088c9fb1026e0eed81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 14 05:16:07 np0005486808 nova_compute[259627]: 2025-10-14 09:16:07.440 2 INFO nova.virt.libvirt.driver [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Deleting instance files /var/lib/nova/instances/c28cafe5-40e7-47f9-8793-6193487fccc3_del#033[00m
Oct 14 05:16:07 np0005486808 nova_compute[259627]: 2025-10-14 09:16:07.441 2 INFO nova.virt.libvirt.driver [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Deletion of /var/lib/nova/instances/c28cafe5-40e7-47f9-8793-6193487fccc3_del complete#033[00m
Oct 14 05:16:07 np0005486808 nova_compute[259627]: 2025-10-14 09:16:07.499 2 INFO nova.compute.manager [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Took 0.76 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:16:07 np0005486808 nova_compute[259627]: 2025-10-14 09:16:07.499 2 DEBUG oslo.service.loopingcall [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:16:07 np0005486808 nova_compute[259627]: 2025-10-14 09:16:07.500 2 DEBUG nova.compute.manager [-] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:16:07 np0005486808 nova_compute[259627]: 2025-10-14 09:16:07.500 2 DEBUG nova.network.neutron [-] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:16:07 np0005486808 nova_compute[259627]: 2025-10-14 09:16:07.649 2 DEBUG nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct 14 05:16:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:16:07 np0005486808 nova_compute[259627]: 2025-10-14 09:16:07.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1864: 305 pgs: 305 active+clean; 374 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 220 op/s
Oct 14 05:16:08 np0005486808 nova_compute[259627]: 2025-10-14 09:16:08.266 2 INFO nova.compute.manager [None req-03568d0e-f610-49cd-8eb8-f36caa06d433 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Get console output#033[00m
Oct 14 05:16:08 np0005486808 nova_compute[259627]: 2025-10-14 09:16:08.278 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 14 05:16:08 np0005486808 nova_compute[259627]: 2025-10-14 09:16:08.413 2 DEBUG nova.network.neutron [-] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:16:08 np0005486808 nova_compute[259627]: 2025-10-14 09:16:08.440 2 INFO nova.compute.manager [-] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Took 0.94 seconds to deallocate network for instance.#033[00m
Oct 14 05:16:08 np0005486808 zealous_goldstine[363835]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:16:08 np0005486808 zealous_goldstine[363835]: --> relative data size: 1.0
Oct 14 05:16:08 np0005486808 zealous_goldstine[363835]: --> All data devices are unavailable
Oct 14 05:16:08 np0005486808 nova_compute[259627]: 2025-10-14 09:16:08.501 2 DEBUG oslo_concurrency.lockutils [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:16:08 np0005486808 nova_compute[259627]: 2025-10-14 09:16:08.502 2 DEBUG oslo_concurrency.lockutils [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:16:08 np0005486808 systemd[1]: libpod-09f061e3f235550d08b1da0c3595454c8901ae60b7b53a088c9fb1026e0eed81.scope: Deactivated successfully.
Oct 14 05:16:08 np0005486808 podman[363819]: 2025-10-14 09:16:08.522747544 +0000 UTC m=+1.293262916 container died 09f061e3f235550d08b1da0c3595454c8901ae60b7b53a088c9fb1026e0eed81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_goldstine, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:16:08 np0005486808 systemd[1]: var-lib-containers-storage-overlay-d18e4485de7e2fc8f27de4ca2d9315b1b8f6b55307925d110a4d7d753f4729a0-merged.mount: Deactivated successfully.
Oct 14 05:16:08 np0005486808 nova_compute[259627]: 2025-10-14 09:16:08.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:08 np0005486808 podman[363819]: 2025-10-14 09:16:08.597433485 +0000 UTC m=+1.367948857 container remove 09f061e3f235550d08b1da0c3595454c8901ae60b7b53a088c9fb1026e0eed81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_goldstine, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:16:08 np0005486808 systemd[1]: libpod-conmon-09f061e3f235550d08b1da0c3595454c8901ae60b7b53a088c9fb1026e0eed81.scope: Deactivated successfully.
Oct 14 05:16:08 np0005486808 nova_compute[259627]: 2025-10-14 09:16:08.619 2 DEBUG oslo_concurrency.processutils [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:16:09 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:16:09 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/640539548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:16:09 np0005486808 nova_compute[259627]: 2025-10-14 09:16:09.139 2 DEBUG oslo_concurrency.processutils [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:16:09 np0005486808 nova_compute[259627]: 2025-10-14 09:16:09.148 2 DEBUG nova.compute.provider_tree [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:16:09 np0005486808 nova_compute[259627]: 2025-10-14 09:16:09.186 2 DEBUG nova.scheduler.client.report [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:16:09 np0005486808 nova_compute[259627]: 2025-10-14 09:16:09.190 2 DEBUG oslo_concurrency.lockutils [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "33969555-fe06-4613-b244-d03c9b4180ba" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:16:09 np0005486808 nova_compute[259627]: 2025-10-14 09:16:09.190 2 DEBUG oslo_concurrency.lockutils [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:16:09 np0005486808 nova_compute[259627]: 2025-10-14 09:16:09.192 2 DEBUG oslo_concurrency.lockutils [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "33969555-fe06-4613-b244-d03c9b4180ba-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:16:09 np0005486808 nova_compute[259627]: 2025-10-14 09:16:09.193 2 DEBUG oslo_concurrency.lockutils [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:16:09 np0005486808 nova_compute[259627]: 2025-10-14 09:16:09.193 2 DEBUG oslo_concurrency.lockutils [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:16:09 np0005486808 nova_compute[259627]: 2025-10-14 09:16:09.194 2 INFO nova.compute.manager [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Terminating instance#033[00m
Oct 14 05:16:09 np0005486808 nova_compute[259627]: 2025-10-14 09:16:09.195 2 DEBUG nova.compute.manager [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:16:09 np0005486808 nova_compute[259627]: 2025-10-14 09:16:09.212 2 DEBUG oslo_concurrency.lockutils [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:16:09 np0005486808 kernel: tap7e42bf44-c1 (unregistering): left promiscuous mode
Oct 14 05:16:09 np0005486808 nova_compute[259627]: 2025-10-14 09:16:09.245 2 INFO nova.scheduler.client.report [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Deleted allocations for instance c28cafe5-40e7-47f9-8793-6193487fccc3#033[00m
Oct 14 05:16:09 np0005486808 NetworkManager[44885]: <info>  [1760433369.2561] device (tap7e42bf44-c1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:16:09 np0005486808 nova_compute[259627]: 2025-10-14 09:16:09.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:09 np0005486808 ovn_controller[152662]: 2025-10-14T09:16:09Z|01131|binding|INFO|Releasing lport 7e42bf44-c1f8-49df-bd5f-a26abe43a832 from this chassis (sb_readonly=0)
Oct 14 05:16:09 np0005486808 ovn_controller[152662]: 2025-10-14T09:16:09Z|01132|binding|INFO|Setting lport 7e42bf44-c1f8-49df-bd5f-a26abe43a832 down in Southbound
Oct 14 05:16:09 np0005486808 ovn_controller[152662]: 2025-10-14T09:16:09Z|01133|binding|INFO|Removing iface tap7e42bf44-c1 ovn-installed in OVS
Oct 14 05:16:09 np0005486808 nova_compute[259627]: 2025-10-14 09:16:09.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:09.276 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:8d:8f 10.100.0.4'], port_security=['fa:16:3e:20:8d:8f 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '33969555-fe06-4613-b244-d03c9b4180ba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7f4225de-9f3f-48e2-bad7-a89cf4884a2e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d24993a343a425dbddac7e32be0c86b', 'neutron:revision_number': '6', 'neutron:security_group_ids': '7fb2d92f-9be1-4133-9fba-da943dad4162', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=10a0b33e-96f5-46d7-a240-9f59c55a6b07, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=7e42bf44-c1f8-49df-bd5f-a26abe43a832) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:16:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:09.277 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 7e42bf44-c1f8-49df-bd5f-a26abe43a832 in datapath 7f4225de-9f3f-48e2-bad7-a89cf4884a2e unbound from our chassis#033[00m
Oct 14 05:16:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:09.278 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7f4225de-9f3f-48e2-bad7-a89cf4884a2e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:16:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:09.280 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a1a14f8b-be06-465a-956d-3e61cd4bc267]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:16:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:09.281 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e namespace which is not needed anymore#033[00m
Oct 14 05:16:09 np0005486808 nova_compute[259627]: 2025-10-14 09:16:09.285 2 DEBUG nova.compute.manager [req-be2719a2-8ae5-43ba-ae05-1ca8cd89a7bf req-4d46cc0e-9890-478e-bc0f-79f14eeaeb83 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Received event network-changed-7e42bf44-c1f8-49df-bd5f-a26abe43a832 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:16:09 np0005486808 nova_compute[259627]: 2025-10-14 09:16:09.285 2 DEBUG nova.compute.manager [req-be2719a2-8ae5-43ba-ae05-1ca8cd89a7bf req-4d46cc0e-9890-478e-bc0f-79f14eeaeb83 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Refreshing instance network info cache due to event network-changed-7e42bf44-c1f8-49df-bd5f-a26abe43a832. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:16:09 np0005486808 nova_compute[259627]: 2025-10-14 09:16:09.285 2 DEBUG oslo_concurrency.lockutils [req-be2719a2-8ae5-43ba-ae05-1ca8cd89a7bf req-4d46cc0e-9890-478e-bc0f-79f14eeaeb83 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-33969555-fe06-4613-b244-d03c9b4180ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:16:09 np0005486808 nova_compute[259627]: 2025-10-14 09:16:09.285 2 DEBUG oslo_concurrency.lockutils [req-be2719a2-8ae5-43ba-ae05-1ca8cd89a7bf req-4d46cc0e-9890-478e-bc0f-79f14eeaeb83 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-33969555-fe06-4613-b244-d03c9b4180ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:16:09 np0005486808 nova_compute[259627]: 2025-10-14 09:16:09.285 2 DEBUG nova.network.neutron [req-be2719a2-8ae5-43ba-ae05-1ca8cd89a7bf req-4d46cc0e-9890-478e-bc0f-79f14eeaeb83 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Refreshing network info cache for port 7e42bf44-c1f8-49df-bd5f-a26abe43a832 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:16:09 np0005486808 nova_compute[259627]: 2025-10-14 09:16:09.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:09 np0005486808 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d00000067.scope: Deactivated successfully.
Oct 14 05:16:09 np0005486808 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d00000067.scope: Consumed 13.392s CPU time.
Oct 14 05:16:09 np0005486808 systemd-machined[214636]: Machine qemu-132-instance-00000067 terminated.
Oct 14 05:16:09 np0005486808 nova_compute[259627]: 2025-10-14 09:16:09.348 2 DEBUG oslo_concurrency.lockutils [None req-8c7f6ae4-acf7-45b4-930f-0fe4a819db1f a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "c28cafe5-40e7-47f9-8793-6193487fccc3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:16:09 np0005486808 podman[364044]: 2025-10-14 09:16:09.375866641 +0000 UTC m=+0.052370172 container create 99fa9f4b4b5b22257633cb0bcda20ea78c621e81caff8d6b567099727d907064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Oct 14 05:16:09 np0005486808 systemd[1]: Started libpod-conmon-99fa9f4b4b5b22257633cb0bcda20ea78c621e81caff8d6b567099727d907064.scope.
Oct 14 05:16:09 np0005486808 nova_compute[259627]: 2025-10-14 09:16:09.436 2 INFO nova.virt.libvirt.driver [-] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Instance destroyed successfully.#033[00m
Oct 14 05:16:09 np0005486808 nova_compute[259627]: 2025-10-14 09:16:09.437 2 DEBUG nova.objects.instance [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'resources' on Instance uuid 33969555-fe06-4613-b244-d03c9b4180ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:16:09 np0005486808 podman[364044]: 2025-10-14 09:16:09.35595426 +0000 UTC m=+0.032457811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:16:09 np0005486808 nova_compute[259627]: 2025-10-14 09:16:09.451 2 DEBUG nova.compute.manager [req-ada7b3b5-4683-45ed-a649-573f3ff49630 req-40a78d58-a54e-4d36-9fdb-77d0f9ea164b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Received event network-vif-deleted-114d4e63-ee15-4133-b8bc-9cd2b1861072 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:16:09 np0005486808 nova_compute[259627]: 2025-10-14 09:16:09.453 2 DEBUG nova.virt.libvirt.vif [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:15:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-383304321',display_name='tempest-TestNetworkAdvancedServerOps-server-383304321',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-383304321',id=103,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNrPP24QakCw1fsyp4n1/agXWsNdRHbY8VHQeER4RKvAc+m7J2Hp3JOcegzdlhA2wYneNiK6O1lPBfYB/HncbHtqlMdS+KcigG2/AdZVDhSvn9p2YfNQ60fauvNg9v/ylQ==',key_name='tempest-TestNetworkAdvancedServerOps-1100962189',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:15:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2d24993a343a425dbddac7e32be0c86b',ramdisk_id='',reservation_id='r-xg2zg09k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-94788416',owner_user_name='tempest-TestNetworkAdvancedServerOps-94788416-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:15:48Z,user_data=None,user_id='e992bcb79c4946a8985e3df25eb216ca',uuid=33969555-fe06-4613-b244-d03c9b4180ba,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "address": "fa:16:3e:20:8d:8f", "network": {"id": "7f4225de-9f3f-48e2-bad7-a89cf4884a2e", "bridge": "br-int", "label": "tempest-network-smoke--1728644856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e42bf44-c1", "ovs_interfaceid": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:16:09 np0005486808 nova_compute[259627]: 2025-10-14 09:16:09.453 2 DEBUG nova.network.os_vif_util [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converting VIF {"id": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "address": "fa:16:3e:20:8d:8f", "network": {"id": "7f4225de-9f3f-48e2-bad7-a89cf4884a2e", "bridge": "br-int", "label": "tempest-network-smoke--1728644856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e42bf44-c1", "ovs_interfaceid": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:16:09 np0005486808 nova_compute[259627]: 2025-10-14 09:16:09.454 2 DEBUG nova.network.os_vif_util [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:8d:8f,bridge_name='br-int',has_traffic_filtering=True,id=7e42bf44-c1f8-49df-bd5f-a26abe43a832,network=Network(7f4225de-9f3f-48e2-bad7-a89cf4884a2e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e42bf44-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:16:09 np0005486808 nova_compute[259627]: 2025-10-14 09:16:09.454 2 DEBUG os_vif [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:8d:8f,bridge_name='br-int',has_traffic_filtering=True,id=7e42bf44-c1f8-49df-bd5f-a26abe43a832,network=Network(7f4225de-9f3f-48e2-bad7-a89cf4884a2e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e42bf44-c1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:16:09 np0005486808 nova_compute[259627]: 2025-10-14 09:16:09.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:09 np0005486808 nova_compute[259627]: 2025-10-14 09:16:09.456 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7e42bf44-c1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:16:09 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:16:09 np0005486808 neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e[362597]: [NOTICE]   (362601) : haproxy version is 2.8.14-c23fe91
Oct 14 05:16:09 np0005486808 neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e[362597]: [NOTICE]   (362601) : path to executable is /usr/sbin/haproxy
Oct 14 05:16:09 np0005486808 neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e[362597]: [WARNING]  (362601) : Exiting Master process...
Oct 14 05:16:09 np0005486808 neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e[362597]: [WARNING]  (362601) : Exiting Master process...
Oct 14 05:16:09 np0005486808 nova_compute[259627]: 2025-10-14 09:16:09.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:16:09 np0005486808 neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e[362597]: [ALERT]    (362601) : Current worker (362603) exited with code 143 (Terminated)
Oct 14 05:16:09 np0005486808 neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e[362597]: [WARNING]  (362601) : All workers exited. Exiting... (0)
Oct 14 05:16:09 np0005486808 nova_compute[259627]: 2025-10-14 09:16:09.469 2 INFO os_vif [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:8d:8f,bridge_name='br-int',has_traffic_filtering=True,id=7e42bf44-c1f8-49df-bd5f-a26abe43a832,network=Network(7f4225de-9f3f-48e2-bad7-a89cf4884a2e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e42bf44-c1')#033[00m
Oct 14 05:16:09 np0005486808 systemd[1]: libpod-a5f58647710bc761dec1126060eccfd4434c280bd62a8f6369a66e7a779f7ae6.scope: Deactivated successfully.
Oct 14 05:16:09 np0005486808 podman[364077]: 2025-10-14 09:16:09.478998993 +0000 UTC m=+0.055482989 container died a5f58647710bc761dec1126060eccfd4434c280bd62a8f6369a66e7a779f7ae6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 14 05:16:09 np0005486808 podman[364044]: 2025-10-14 09:16:09.492629568 +0000 UTC m=+0.169133109 container init 99fa9f4b4b5b22257633cb0bcda20ea78c621e81caff8d6b567099727d907064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lumiere, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 14 05:16:09 np0005486808 podman[364044]: 2025-10-14 09:16:09.503712882 +0000 UTC m=+0.180216403 container start 99fa9f4b4b5b22257633cb0bcda20ea78c621e81caff8d6b567099727d907064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lumiere, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Oct 14 05:16:09 np0005486808 podman[364044]: 2025-10-14 09:16:09.507650409 +0000 UTC m=+0.184153960 container attach 99fa9f4b4b5b22257633cb0bcda20ea78c621e81caff8d6b567099727d907064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lumiere, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 14 05:16:09 np0005486808 agitated_lumiere[364089]: 167 167
Oct 14 05:16:09 np0005486808 systemd[1]: libpod-99fa9f4b4b5b22257633cb0bcda20ea78c621e81caff8d6b567099727d907064.scope: Deactivated successfully.
Oct 14 05:16:09 np0005486808 podman[364044]: 2025-10-14 09:16:09.509502154 +0000 UTC m=+0.186005695 container died 99fa9f4b4b5b22257633cb0bcda20ea78c621e81caff8d6b567099727d907064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 14 05:16:09 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a5f58647710bc761dec1126060eccfd4434c280bd62a8f6369a66e7a779f7ae6-userdata-shm.mount: Deactivated successfully.
Oct 14 05:16:09 np0005486808 systemd[1]: var-lib-containers-storage-overlay-88b20c8a5752e1d1bec1c2d117ad2bfbfb3601e7200bb4fc5f4f5f26fbe83704-merged.mount: Deactivated successfully.
Oct 14 05:16:09 np0005486808 podman[364077]: 2025-10-14 09:16:09.531127377 +0000 UTC m=+0.107611363 container cleanup a5f58647710bc761dec1126060eccfd4434c280bd62a8f6369a66e7a779f7ae6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 05:16:09 np0005486808 systemd[1]: libpod-conmon-a5f58647710bc761dec1126060eccfd4434c280bd62a8f6369a66e7a779f7ae6.scope: Deactivated successfully.
Oct 14 05:16:09 np0005486808 systemd[1]: var-lib-containers-storage-overlay-f90829a5cc4deb50936b244f5ae36ea81f15dd759d405ef2104a58eca9f3cbfc-merged.mount: Deactivated successfully.
Oct 14 05:16:09 np0005486808 podman[364044]: 2025-10-14 09:16:09.563804063 +0000 UTC m=+0.240307594 container remove 99fa9f4b4b5b22257633cb0bcda20ea78c621e81caff8d6b567099727d907064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 14 05:16:09 np0005486808 systemd[1]: libpod-conmon-99fa9f4b4b5b22257633cb0bcda20ea78c621e81caff8d6b567099727d907064.scope: Deactivated successfully.
Oct 14 05:16:09 np0005486808 podman[364142]: 2025-10-14 09:16:09.61561604 +0000 UTC m=+0.054152996 container remove a5f58647710bc761dec1126060eccfd4434c280bd62a8f6369a66e7a779f7ae6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:16:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:09.621 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[82f4ef9f-0a11-4510-8964-67d7c5f0c0f4]: (4, ('Tue Oct 14 09:16:09 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e (a5f58647710bc761dec1126060eccfd4434c280bd62a8f6369a66e7a779f7ae6)\na5f58647710bc761dec1126060eccfd4434c280bd62a8f6369a66e7a779f7ae6\nTue Oct 14 09:16:09 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e (a5f58647710bc761dec1126060eccfd4434c280bd62a8f6369a66e7a779f7ae6)\na5f58647710bc761dec1126060eccfd4434c280bd62a8f6369a66e7a779f7ae6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:16:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:09.626 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[657218a5-7b60-4394-96db-c9a78445c33b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:16:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:09.627 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7f4225de-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:16:09 np0005486808 nova_compute[259627]: 2025-10-14 09:16:09.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:09 np0005486808 kernel: tap7f4225de-90: left promiscuous mode
Oct 14 05:16:09 np0005486808 nova_compute[259627]: 2025-10-14 09:16:09.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:09.642 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0d24820b-8527-4ffe-a752-e48a21283262]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:16:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:09.658 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7d3d7349-ad40-4e8e-9600-58256dbcde72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:16:09 np0005486808 nova_compute[259627]: 2025-10-14 09:16:09.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:09.661 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7b016a66-17ff-49ed-9652-052cc6440339]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:16:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:09.682 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6f39f819-f8d6-42ef-a1ca-1f8c1b13514e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712526, 'reachable_time': 18536, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364162, 'error': None, 'target': 'ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:16:09 np0005486808 systemd[1]: run-netns-ovnmeta\x2d7f4225de\x2d9f3f\x2d48e2\x2dbad7\x2da89cf4884a2e.mount: Deactivated successfully.
Oct 14 05:16:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:09.686 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7f4225de-9f3f-48e2-bad7-a89cf4884a2e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:16:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:09.686 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[c225fc9a-14a3-45a9-8159-9b59be96a3df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:16:09 np0005486808 nova_compute[259627]: 2025-10-14 09:16:09.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:09 np0005486808 podman[364168]: 2025-10-14 09:16:09.753234982 +0000 UTC m=+0.040442038 container create 8b5869d2df9b6eec17d5aa1cd7dc7bee5877a5a6223220364ed31fd46d234f97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_elgamal, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:16:09 np0005486808 systemd[1]: Started libpod-conmon-8b5869d2df9b6eec17d5aa1cd7dc7bee5877a5a6223220364ed31fd46d234f97.scope.
Oct 14 05:16:09 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:16:09 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30efc8d03c4ef01360b89bb30ccbe8b8c0e8b96d0814f1b8bfe870e0b8a513ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:16:09 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30efc8d03c4ef01360b89bb30ccbe8b8c0e8b96d0814f1b8bfe870e0b8a513ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:16:09 np0005486808 podman[364168]: 2025-10-14 09:16:09.738381215 +0000 UTC m=+0.025588291 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:16:09 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30efc8d03c4ef01360b89bb30ccbe8b8c0e8b96d0814f1b8bfe870e0b8a513ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:16:09 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30efc8d03c4ef01360b89bb30ccbe8b8c0e8b96d0814f1b8bfe870e0b8a513ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:16:09 np0005486808 podman[364168]: 2025-10-14 09:16:09.864313579 +0000 UTC m=+0.151520655 container init 8b5869d2df9b6eec17d5aa1cd7dc7bee5877a5a6223220364ed31fd46d234f97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_elgamal, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:16:09 np0005486808 podman[364168]: 2025-10-14 09:16:09.874541881 +0000 UTC m=+0.161748937 container start 8b5869d2df9b6eec17d5aa1cd7dc7bee5877a5a6223220364ed31fd46d234f97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_elgamal, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 14 05:16:09 np0005486808 podman[364168]: 2025-10-14 09:16:09.878593321 +0000 UTC m=+0.165800407 container attach 8b5869d2df9b6eec17d5aa1cd7dc7bee5877a5a6223220364ed31fd46d234f97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 14 05:16:09 np0005486808 nova_compute[259627]: 2025-10-14 09:16:09.923 2 INFO nova.virt.libvirt.driver [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Deleting instance files /var/lib/nova/instances/33969555-fe06-4613-b244-d03c9b4180ba_del#033[00m
Oct 14 05:16:09 np0005486808 nova_compute[259627]: 2025-10-14 09:16:09.925 2 INFO nova.virt.libvirt.driver [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Deletion of /var/lib/nova/instances/33969555-fe06-4613-b244-d03c9b4180ba_del complete#033[00m
Oct 14 05:16:09 np0005486808 nova_compute[259627]: 2025-10-14 09:16:09.982 2 INFO nova.compute.manager [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:16:09 np0005486808 nova_compute[259627]: 2025-10-14 09:16:09.983 2 DEBUG oslo.service.loopingcall [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:16:09 np0005486808 nova_compute[259627]: 2025-10-14 09:16:09.983 2 DEBUG nova.compute.manager [-] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:16:09 np0005486808 nova_compute[259627]: 2025-10-14 09:16:09.983 2 DEBUG nova.network.neutron [-] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:16:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1865: 305 pgs: 305 active+clean; 374 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 220 op/s
Oct 14 05:16:10 np0005486808 nova_compute[259627]: 2025-10-14 09:16:10.593 2 DEBUG nova.network.neutron [req-be2719a2-8ae5-43ba-ae05-1ca8cd89a7bf req-4d46cc0e-9890-478e-bc0f-79f14eeaeb83 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Updated VIF entry in instance network info cache for port 7e42bf44-c1f8-49df-bd5f-a26abe43a832. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:16:10 np0005486808 nova_compute[259627]: 2025-10-14 09:16:10.594 2 DEBUG nova.network.neutron [req-be2719a2-8ae5-43ba-ae05-1ca8cd89a7bf req-4d46cc0e-9890-478e-bc0f-79f14eeaeb83 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Updating instance_info_cache with network_info: [{"id": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "address": "fa:16:3e:20:8d:8f", "network": {"id": "7f4225de-9f3f-48e2-bad7-a89cf4884a2e", "bridge": "br-int", "label": "tempest-network-smoke--1728644856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e42bf44-c1", "ovs_interfaceid": "7e42bf44-c1f8-49df-bd5f-a26abe43a832", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]: {
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:    "0": [
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:        {
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:            "devices": [
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:                "/dev/loop3"
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:            ],
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:            "lv_name": "ceph_lv0",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:            "lv_size": "21470642176",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:            "name": "ceph_lv0",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:            "tags": {
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:                "ceph.cluster_name": "ceph",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:                "ceph.crush_device_class": "",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:                "ceph.encrypted": "0",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:                "ceph.osd_id": "0",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:                "ceph.type": "block",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:                "ceph.vdo": "0"
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:            },
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:            "type": "block",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:            "vg_name": "ceph_vg0"
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:        }
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:    ],
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:    "1": [
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:        {
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:            "devices": [
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:                "/dev/loop4"
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:            ],
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:            "lv_name": "ceph_lv1",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:            "lv_size": "21470642176",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:            "name": "ceph_lv1",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:            "tags": {
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:                "ceph.cluster_name": "ceph",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:                "ceph.crush_device_class": "",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:                "ceph.encrypted": "0",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:                "ceph.osd_id": "1",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:                "ceph.type": "block",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:                "ceph.vdo": "0"
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:            },
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:            "type": "block",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:            "vg_name": "ceph_vg1"
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:        }
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:    ],
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:    "2": [
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:        {
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:            "devices": [
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:                "/dev/loop5"
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:            ],
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:            "lv_name": "ceph_lv2",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:            "lv_size": "21470642176",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:            "name": "ceph_lv2",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:            "tags": {
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:                "ceph.cluster_name": "ceph",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:                "ceph.crush_device_class": "",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:                "ceph.encrypted": "0",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:                "ceph.osd_id": "2",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:                "ceph.type": "block",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:                "ceph.vdo": "0"
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:            },
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:            "type": "block",
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:            "vg_name": "ceph_vg2"
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:        }
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]:    ]
Oct 14 05:16:10 np0005486808 lucid_elgamal[364185]: }
Oct 14 05:16:10 np0005486808 nova_compute[259627]: 2025-10-14 09:16:10.614 2 DEBUG oslo_concurrency.lockutils [req-be2719a2-8ae5-43ba-ae05-1ca8cd89a7bf req-4d46cc0e-9890-478e-bc0f-79f14eeaeb83 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-33969555-fe06-4613-b244-d03c9b4180ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:16:10 np0005486808 systemd[1]: libpod-8b5869d2df9b6eec17d5aa1cd7dc7bee5877a5a6223220364ed31fd46d234f97.scope: Deactivated successfully.
Oct 14 05:16:10 np0005486808 podman[364168]: 2025-10-14 09:16:10.630844552 +0000 UTC m=+0.918051618 container died 8b5869d2df9b6eec17d5aa1cd7dc7bee5877a5a6223220364ed31fd46d234f97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 14 05:16:10 np0005486808 systemd[1]: var-lib-containers-storage-overlay-30efc8d03c4ef01360b89bb30ccbe8b8c0e8b96d0814f1b8bfe870e0b8a513ab-merged.mount: Deactivated successfully.
Oct 14 05:16:10 np0005486808 podman[364168]: 2025-10-14 09:16:10.701210845 +0000 UTC m=+0.988417891 container remove 8b5869d2df9b6eec17d5aa1cd7dc7bee5877a5a6223220364ed31fd46d234f97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_elgamal, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:16:10 np0005486808 systemd[1]: libpod-conmon-8b5869d2df9b6eec17d5aa1cd7dc7bee5877a5a6223220364ed31fd46d234f97.scope: Deactivated successfully.
Oct 14 05:16:10 np0005486808 nova_compute[259627]: 2025-10-14 09:16:10.924 2 DEBUG nova.network.neutron [-] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:16:10 np0005486808 nova_compute[259627]: 2025-10-14 09:16:10.945 2 INFO nova.compute.manager [-] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Took 0.96 seconds to deallocate network for instance.#033[00m
Oct 14 05:16:10 np0005486808 nova_compute[259627]: 2025-10-14 09:16:10.988 2 DEBUG oslo_concurrency.lockutils [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:16:10 np0005486808 nova_compute[259627]: 2025-10-14 09:16:10.989 2 DEBUG oslo_concurrency.lockutils [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.119 2 DEBUG oslo_concurrency.processutils [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.155 2 DEBUG oslo_concurrency.lockutils [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.156 2 DEBUG oslo_concurrency.lockutils [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.156 2 DEBUG oslo_concurrency.lockutils [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.157 2 DEBUG oslo_concurrency.lockutils [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.157 2 DEBUG oslo_concurrency.lockutils [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.158 2 INFO nova.compute.manager [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Terminating instance#033[00m
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.159 2 DEBUG nova.compute.manager [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:16:11 np0005486808 ovn_controller[152662]: 2025-10-14T09:16:11Z|01134|binding|INFO|Releasing lport b1a95e67-ca39-4a56-9a91-3165df166ea9 from this chassis (sb_readonly=0)
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:11 np0005486808 systemd[1]: machine-qemu\x2d134\x2dinstance\x2d0000006b.scope: Deactivated successfully.
Oct 14 05:16:11 np0005486808 systemd[1]: machine-qemu\x2d134\x2dinstance\x2d0000006b.scope: Consumed 13.642s CPU time.
Oct 14 05:16:11 np0005486808 kernel: tap05e92470-26 (unregistering): left promiscuous mode
Oct 14 05:16:11 np0005486808 NetworkManager[44885]: <info>  [1760433371.2255] device (tap05e92470-26): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:16:11 np0005486808 systemd-machined[214636]: Machine qemu-134-instance-0000006b terminated.
Oct 14 05:16:11 np0005486808 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d0000005f.scope: Deactivated successfully.
Oct 14 05:16:11 np0005486808 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d0000005f.scope: Consumed 18.777s CPU time.
Oct 14 05:16:11 np0005486808 systemd-machined[214636]: Machine qemu-117-instance-0000005f terminated.
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:11 np0005486808 ovn_controller[152662]: 2025-10-14T09:16:11Z|01135|binding|INFO|Releasing lport 05e92470-2658-4ea2-9c44-e91cd5226905 from this chassis (sb_readonly=0)
Oct 14 05:16:11 np0005486808 ovn_controller[152662]: 2025-10-14T09:16:11Z|01136|binding|INFO|Releasing lport b1a95e67-ca39-4a56-9a91-3165df166ea9 from this chassis (sb_readonly=0)
Oct 14 05:16:11 np0005486808 ovn_controller[152662]: 2025-10-14T09:16:11Z|01137|binding|INFO|Removing iface tap05e92470-26 ovn-installed in OVS
Oct 14 05:16:11 np0005486808 ovn_controller[152662]: 2025-10-14T09:16:11Z|01138|binding|INFO|Setting lport 05e92470-2658-4ea2-9c44-e91cd5226905 down in Southbound
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:11.357 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:fb:45 10.100.0.13'], port_security=['fa:16:3e:7f:fb:45 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd46b6953-9413-4e6a-94f7-7b5ac9634c16', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbecee11-4892-4e36-88d8-98879af7bb1e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a080fae2f3c4e39a6cca225203f5ec6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '03d7ceab-aac8-44ec-88a3-f0ef79d050f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d77764d-de2c-4408-8882-77d03cb7288a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=05e92470-2658-4ea2-9c44-e91cd5226905) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:16:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:11.358 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 05e92470-2658-4ea2-9c44-e91cd5226905 in datapath fbecee11-4892-4e36-88d8-98879af7bb1e unbound from our chassis#033[00m
Oct 14 05:16:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:11.359 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fbecee11-4892-4e36-88d8-98879af7bb1e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:16:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:11.360 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ececd49a-be6f-4914-8d10-1d9e99e49146]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:16:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:11.360 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e namespace which is not needed anymore#033[00m
Oct 14 05:16:11 np0005486808 NetworkManager[44885]: <info>  [1760433371.3889] manager: (tap05e92470-26): new Tun device (/org/freedesktop/NetworkManager/Devices/455)
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.420 2 DEBUG nova.compute.manager [req-cc6fd093-4331-4d6f-b1b3-f26bf3df2216 req-c5eaa625-7606-427b-963f-eab976921eb5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Received event network-vif-deleted-7e42bf44-c1f8-49df-bd5f-a26abe43a832 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:16:11 np0005486808 podman[364367]: 2025-10-14 09:16:11.421133989 +0000 UTC m=+0.052720051 container create 4c9436b5597d0b4ea3231b0748bd73f72bc509b54e57826334ca8aa9d81222b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_babbage, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.428 2 INFO nova.virt.libvirt.driver [-] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Instance destroyed successfully.#033[00m
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.428 2 DEBUG nova.objects.instance [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lazy-loading 'resources' on Instance uuid d46b6953-9413-4e6a-94f7-7b5ac9634c16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.439 2 DEBUG nova.virt.libvirt.vif [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:13:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-₡-1224416488',display_name='tempest-₡-1224416488',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--1224416488',id=95,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:13:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0a080fae2f3c4e39a6cca225203f5ec6',ramdisk_id='',reservation_id='r-c5eeedgr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-2060951674',
owner_user_name='tempest-ServersTestJSON-2060951674-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:13:48Z,user_data=None,user_id='a287ef08fc5c4f218bf06cd2c7ed021e',uuid=d46b6953-9413-4e6a-94f7-7b5ac9634c16,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "05e92470-2658-4ea2-9c44-e91cd5226905", "address": "fa:16:3e:7f:fb:45", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05e92470-26", "ovs_interfaceid": "05e92470-2658-4ea2-9c44-e91cd5226905", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.440 2 DEBUG nova.network.os_vif_util [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converting VIF {"id": "05e92470-2658-4ea2-9c44-e91cd5226905", "address": "fa:16:3e:7f:fb:45", "network": {"id": "fbecee11-4892-4e36-88d8-98879af7bb1e", "bridge": "br-int", "label": "tempest-ServersTestJSON-138235340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a080fae2f3c4e39a6cca225203f5ec6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05e92470-26", "ovs_interfaceid": "05e92470-2658-4ea2-9c44-e91cd5226905", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.441 2 DEBUG nova.network.os_vif_util [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7f:fb:45,bridge_name='br-int',has_traffic_filtering=True,id=05e92470-2658-4ea2-9c44-e91cd5226905,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05e92470-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.441 2 DEBUG os_vif [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7f:fb:45,bridge_name='br-int',has_traffic_filtering=True,id=05e92470-2658-4ea2-9c44-e91cd5226905,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05e92470-26') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.445 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05e92470-26, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.450 2 INFO os_vif [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7f:fb:45,bridge_name='br-int',has_traffic_filtering=True,id=05e92470-2658-4ea2-9c44-e91cd5226905,network=Network(fbecee11-4892-4e36-88d8-98879af7bb1e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05e92470-26')#033[00m
Oct 14 05:16:11 np0005486808 systemd[1]: Started libpod-conmon-4c9436b5597d0b4ea3231b0748bd73f72bc509b54e57826334ca8aa9d81222b5.scope.
Oct 14 05:16:11 np0005486808 podman[364367]: 2025-10-14 09:16:11.397711081 +0000 UTC m=+0.029297153 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:16:11 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:16:11 np0005486808 neutron-haproxy-ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e[353599]: [NOTICE]   (353603) : haproxy version is 2.8.14-c23fe91
Oct 14 05:16:11 np0005486808 neutron-haproxy-ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e[353599]: [NOTICE]   (353603) : path to executable is /usr/sbin/haproxy
Oct 14 05:16:11 np0005486808 neutron-haproxy-ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e[353599]: [WARNING]  (353603) : Exiting Master process...
Oct 14 05:16:11 np0005486808 neutron-haproxy-ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e[353599]: [WARNING]  (353603) : Exiting Master process...
Oct 14 05:16:11 np0005486808 podman[364367]: 2025-10-14 09:16:11.516762116 +0000 UTC m=+0.148348178 container init 4c9436b5597d0b4ea3231b0748bd73f72bc509b54e57826334ca8aa9d81222b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_babbage, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:16:11 np0005486808 neutron-haproxy-ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e[353599]: [ALERT]    (353603) : Current worker (353605) exited with code 143 (Terminated)
Oct 14 05:16:11 np0005486808 neutron-haproxy-ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e[353599]: [WARNING]  (353603) : All workers exited. Exiting... (0)
Oct 14 05:16:11 np0005486808 systemd[1]: libpod-5e1919be510758b0a67ed4c5de831fdefc63c3939eb1c546188a8b35cac7308d.scope: Deactivated successfully.
Oct 14 05:16:11 np0005486808 podman[364367]: 2025-10-14 09:16:11.524314442 +0000 UTC m=+0.155900484 container start 4c9436b5597d0b4ea3231b0748bd73f72bc509b54e57826334ca8aa9d81222b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_babbage, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Oct 14 05:16:11 np0005486808 podman[364367]: 2025-10-14 09:16:11.527771757 +0000 UTC m=+0.159357799 container attach 4c9436b5597d0b4ea3231b0748bd73f72bc509b54e57826334ca8aa9d81222b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_babbage, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:16:11 np0005486808 podman[364421]: 2025-10-14 09:16:11.527978232 +0000 UTC m=+0.048637580 container died 5e1919be510758b0a67ed4c5de831fdefc63c3939eb1c546188a8b35cac7308d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:16:11 np0005486808 hardcore_babbage[364422]: 167 167
Oct 14 05:16:11 np0005486808 podman[364367]: 2025-10-14 09:16:11.537756863 +0000 UTC m=+0.169342895 container died 4c9436b5597d0b4ea3231b0748bd73f72bc509b54e57826334ca8aa9d81222b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:16:11 np0005486808 systemd[1]: libpod-4c9436b5597d0b4ea3231b0748bd73f72bc509b54e57826334ca8aa9d81222b5.scope: Deactivated successfully.
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.541 2 DEBUG nova.compute.manager [req-a63d84e4-2fc7-4b7e-848e-048c6a543d52 req-6e76a0d2-57b4-40e0-af0b-23aceb4a8071 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Received event network-vif-plugged-7e42bf44-c1f8-49df-bd5f-a26abe43a832 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.542 2 DEBUG oslo_concurrency.lockutils [req-a63d84e4-2fc7-4b7e-848e-048c6a543d52 req-6e76a0d2-57b4-40e0-af0b-23aceb4a8071 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "33969555-fe06-4613-b244-d03c9b4180ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.542 2 DEBUG oslo_concurrency.lockutils [req-a63d84e4-2fc7-4b7e-848e-048c6a543d52 req-6e76a0d2-57b4-40e0-af0b-23aceb4a8071 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.542 2 DEBUG oslo_concurrency.lockutils [req-a63d84e4-2fc7-4b7e-848e-048c6a543d52 req-6e76a0d2-57b4-40e0-af0b-23aceb4a8071 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.542 2 DEBUG nova.compute.manager [req-a63d84e4-2fc7-4b7e-848e-048c6a543d52 req-6e76a0d2-57b4-40e0-af0b-23aceb4a8071 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] No waiting events found dispatching network-vif-plugged-7e42bf44-c1f8-49df-bd5f-a26abe43a832 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.543 2 WARNING nova.compute.manager [req-a63d84e4-2fc7-4b7e-848e-048c6a543d52 req-6e76a0d2-57b4-40e0-af0b-23aceb4a8071 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Received unexpected event network-vif-plugged-7e42bf44-c1f8-49df-bd5f-a26abe43a832 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:16:11 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5e1919be510758b0a67ed4c5de831fdefc63c3939eb1c546188a8b35cac7308d-userdata-shm.mount: Deactivated successfully.
Oct 14 05:16:11 np0005486808 systemd[1]: var-lib-containers-storage-overlay-5b011f8a1370d78d7a628d9aaf1bebb9c89844fdc71c852fd6c3b4b8bc5c51cd-merged.mount: Deactivated successfully.
Oct 14 05:16:11 np0005486808 podman[364421]: 2025-10-14 09:16:11.565112617 +0000 UTC m=+0.085771955 container cleanup 5e1919be510758b0a67ed4c5de831fdefc63c3939eb1c546188a8b35cac7308d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:16:11 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:16:11 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1089509023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:16:11 np0005486808 systemd[1]: var-lib-containers-storage-overlay-4c513db856503f7dc9d7ba243220ddbc6737731744ecb4a1e7efb0c2d236e0de-merged.mount: Deactivated successfully.
Oct 14 05:16:11 np0005486808 podman[364367]: 2025-10-14 09:16:11.591869227 +0000 UTC m=+0.223455279 container remove 4c9436b5597d0b4ea3231b0748bd73f72bc509b54e57826334ca8aa9d81222b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_babbage, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.594 2 DEBUG oslo_concurrency.processutils [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.603 2 DEBUG nova.compute.provider_tree [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:16:11 np0005486808 systemd[1]: libpod-conmon-4c9436b5597d0b4ea3231b0748bd73f72bc509b54e57826334ca8aa9d81222b5.scope: Deactivated successfully.
Oct 14 05:16:11 np0005486808 systemd[1]: libpod-conmon-5e1919be510758b0a67ed4c5de831fdefc63c3939eb1c546188a8b35cac7308d.scope: Deactivated successfully.
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.619 2 DEBUG nova.scheduler.client.report [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.643 2 DEBUG oslo_concurrency.lockutils [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:16:11 np0005486808 podman[364473]: 2025-10-14 09:16:11.654413968 +0000 UTC m=+0.064111101 container remove 5e1919be510758b0a67ed4c5de831fdefc63c3939eb1c546188a8b35cac7308d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 05:16:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:11.660 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b47cd716-e3be-42c8-b05e-066b630fa218]: (4, ('Tue Oct 14 09:16:11 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e (5e1919be510758b0a67ed4c5de831fdefc63c3939eb1c546188a8b35cac7308d)\n5e1919be510758b0a67ed4c5de831fdefc63c3939eb1c546188a8b35cac7308d\nTue Oct 14 09:16:11 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e (5e1919be510758b0a67ed4c5de831fdefc63c3939eb1c546188a8b35cac7308d)\n5e1919be510758b0a67ed4c5de831fdefc63c3939eb1c546188a8b35cac7308d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:16:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:11.661 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a85ec095-3832-4293-abce-89058f78404b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:16:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:11.662 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbecee11-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.696 2 INFO nova.scheduler.client.report [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Deleted allocations for instance 33969555-fe06-4613-b244-d03c9b4180ba#033[00m
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.705 2 INFO nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Instance shutdown successfully after 14 seconds.#033[00m
Oct 14 05:16:11 np0005486808 kernel: tapfbecee11-40: left promiscuous mode
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.715 2 INFO nova.virt.libvirt.driver [-] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Instance destroyed successfully.#033[00m
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:11.725 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1bcf835d-7291-43b7-bd11-d884ea80aba1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.728 2 INFO nova.virt.libvirt.driver [-] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Instance destroyed successfully.#033[00m
Oct 14 05:16:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:11.743 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[697ac0f8-3d74-4cd6-b9cd-3277873391b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:16:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:11.744 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5d4d8a4d-c5e1-4b76-a7f5-238a870551f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:16:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:11.763 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[37c2959a-ffa7-4462-ae02-1c331a5bbc0f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 700068, 'reachable_time': 41015, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364523, 'error': None, 'target': 'ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:16:11 np0005486808 systemd[1]: run-netns-ovnmeta\x2dfbecee11\x2d4892\x2d4e36\x2d88d8\x2d98879af7bb1e.mount: Deactivated successfully.
Oct 14 05:16:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:11.766 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fbecee11-4892-4e36-88d8-98879af7bb1e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:16:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:11.766 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[58cf4675-4d31-4638-966e-490e02b8c127]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:16:11 np0005486808 podman[364497]: 2025-10-14 09:16:11.775328058 +0000 UTC m=+0.048598288 container create da1cc91af64e081590c63b4d278cb012722bc60083ee1607004a476ed2639fdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.790 2 DEBUG oslo_concurrency.lockutils [None req-6089d5df-22cb-46e1-8825-3f043c9caa4f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "33969555-fe06-4613-b244-d03c9b4180ba" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:16:11 np0005486808 systemd[1]: Started libpod-conmon-da1cc91af64e081590c63b4d278cb012722bc60083ee1607004a476ed2639fdd.scope.
Oct 14 05:16:11 np0005486808 podman[364497]: 2025-10-14 09:16:11.754894075 +0000 UTC m=+0.028164335 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:16:11 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:16:11 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9aaa93075a585c8e53b24de7a5b023e669efcca5cb5edae4c7a8a51c61e00b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:16:11 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9aaa93075a585c8e53b24de7a5b023e669efcca5cb5edae4c7a8a51c61e00b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:16:11 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9aaa93075a585c8e53b24de7a5b023e669efcca5cb5edae4c7a8a51c61e00b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:16:11 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9aaa93075a585c8e53b24de7a5b023e669efcca5cb5edae4c7a8a51c61e00b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:16:11 np0005486808 podman[364497]: 2025-10-14 09:16:11.874645736 +0000 UTC m=+0.147915996 container init da1cc91af64e081590c63b4d278cb012722bc60083ee1607004a476ed2639fdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:16:11 np0005486808 podman[364497]: 2025-10-14 09:16:11.881984287 +0000 UTC m=+0.155254517 container start da1cc91af64e081590c63b4d278cb012722bc60083ee1607004a476ed2639fdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:16:11 np0005486808 podman[364497]: 2025-10-14 09:16:11.887897573 +0000 UTC m=+0.161167833 container attach da1cc91af64e081590c63b4d278cb012722bc60083ee1607004a476ed2639fdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.897 2 INFO nova.virt.libvirt.driver [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Deleting instance files /var/lib/nova/instances/d46b6953-9413-4e6a-94f7-7b5ac9634c16_del#033[00m
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.898 2 INFO nova.virt.libvirt.driver [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Deletion of /var/lib/nova/instances/d46b6953-9413-4e6a-94f7-7b5ac9634c16_del complete#033[00m
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.977 2 INFO nova.compute.manager [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Took 0.82 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.980 2 DEBUG oslo.service.loopingcall [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.981 2 DEBUG nova.compute.manager [-] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:16:11 np0005486808 nova_compute[259627]: 2025-10-14 09:16:11.981 2 DEBUG nova.network.neutron [-] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:16:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1866: 305 pgs: 305 active+clean; 279 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 6.5 MiB/s wr, 403 op/s
Oct 14 05:16:12 np0005486808 nova_compute[259627]: 2025-10-14 09:16:12.153 2 INFO nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Deleting instance files /var/lib/nova/instances/548b81f4-df26-4f76-910f-5a14445c93c5_del#033[00m
Oct 14 05:16:12 np0005486808 nova_compute[259627]: 2025-10-14 09:16:12.155 2 INFO nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Deletion of /var/lib/nova/instances/548b81f4-df26-4f76-910f-5a14445c93c5_del complete#033[00m
Oct 14 05:16:12 np0005486808 nova_compute[259627]: 2025-10-14 09:16:12.366 2 DEBUG nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:16:12 np0005486808 nova_compute[259627]: 2025-10-14 09:16:12.367 2 INFO nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Creating image(s)#033[00m
Oct 14 05:16:12 np0005486808 nova_compute[259627]: 2025-10-14 09:16:12.401 2 DEBUG nova.storage.rbd_utils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] rbd image 548b81f4-df26-4f76-910f-5a14445c93c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:16:12 np0005486808 nova_compute[259627]: 2025-10-14 09:16:12.448 2 DEBUG nova.storage.rbd_utils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] rbd image 548b81f4-df26-4f76-910f-5a14445c93c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:16:12 np0005486808 nova_compute[259627]: 2025-10-14 09:16:12.478 2 DEBUG nova.storage.rbd_utils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] rbd image 548b81f4-df26-4f76-910f-5a14445c93c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:16:12 np0005486808 nova_compute[259627]: 2025-10-14 09:16:12.483 2 DEBUG oslo_concurrency.processutils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:16:12 np0005486808 nova_compute[259627]: 2025-10-14 09:16:12.569 2 DEBUG oslo_concurrency.processutils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:16:12 np0005486808 nova_compute[259627]: 2025-10-14 09:16:12.571 2 DEBUG oslo_concurrency.lockutils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Acquiring lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:16:12 np0005486808 nova_compute[259627]: 2025-10-14 09:16:12.572 2 DEBUG oslo_concurrency.lockutils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:16:12 np0005486808 nova_compute[259627]: 2025-10-14 09:16:12.572 2 DEBUG oslo_concurrency.lockutils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:16:12 np0005486808 nova_compute[259627]: 2025-10-14 09:16:12.601 2 DEBUG nova.storage.rbd_utils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] rbd image 548b81f4-df26-4f76-910f-5a14445c93c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:16:12 np0005486808 nova_compute[259627]: 2025-10-14 09:16:12.607 2 DEBUG oslo_concurrency.processutils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf 548b81f4-df26-4f76-910f-5a14445c93c5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:16:12 np0005486808 nova_compute[259627]: 2025-10-14 09:16:12.689 2 DEBUG nova.network.neutron [-] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:16:12 np0005486808 nova_compute[259627]: 2025-10-14 09:16:12.710 2 INFO nova.compute.manager [-] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Took 0.73 seconds to deallocate network for instance.#033[00m
Oct 14 05:16:12 np0005486808 nova_compute[259627]: 2025-10-14 09:16:12.778 2 DEBUG oslo_concurrency.lockutils [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:16:12 np0005486808 nova_compute[259627]: 2025-10-14 09:16:12.779 2 DEBUG oslo_concurrency.lockutils [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:16:12 np0005486808 nifty_booth[364533]: {
Oct 14 05:16:12 np0005486808 nifty_booth[364533]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:16:12 np0005486808 nifty_booth[364533]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:16:12 np0005486808 nifty_booth[364533]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:16:12 np0005486808 nifty_booth[364533]:        "osd_id": 2,
Oct 14 05:16:12 np0005486808 nifty_booth[364533]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:16:12 np0005486808 nifty_booth[364533]:        "type": "bluestore"
Oct 14 05:16:12 np0005486808 nifty_booth[364533]:    },
Oct 14 05:16:12 np0005486808 nifty_booth[364533]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:16:12 np0005486808 nifty_booth[364533]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:16:12 np0005486808 nifty_booth[364533]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:16:12 np0005486808 nifty_booth[364533]:        "osd_id": 1,
Oct 14 05:16:12 np0005486808 nifty_booth[364533]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:16:12 np0005486808 nifty_booth[364533]:        "type": "bluestore"
Oct 14 05:16:12 np0005486808 nifty_booth[364533]:    },
Oct 14 05:16:12 np0005486808 nifty_booth[364533]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:16:12 np0005486808 nifty_booth[364533]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:16:12 np0005486808 nifty_booth[364533]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:16:12 np0005486808 nifty_booth[364533]:        "osd_id": 0,
Oct 14 05:16:12 np0005486808 nifty_booth[364533]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:16:12 np0005486808 nifty_booth[364533]:        "type": "bluestore"
Oct 14 05:16:12 np0005486808 nifty_booth[364533]:    }
Oct 14 05:16:12 np0005486808 nifty_booth[364533]: }
Oct 14 05:16:12 np0005486808 nova_compute[259627]: 2025-10-14 09:16:12.858 2 DEBUG oslo_concurrency.processutils [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:16:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:16:12 np0005486808 systemd[1]: libpod-da1cc91af64e081590c63b4d278cb012722bc60083ee1607004a476ed2639fdd.scope: Deactivated successfully.
Oct 14 05:16:12 np0005486808 podman[364497]: 2025-10-14 09:16:12.873089235 +0000 UTC m=+1.146359495 container died da1cc91af64e081590c63b4d278cb012722bc60083ee1607004a476ed2639fdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:16:12 np0005486808 systemd[1]: var-lib-containers-storage-overlay-c9aaa93075a585c8e53b24de7a5b023e669efcca5cb5edae4c7a8a51c61e00b6-merged.mount: Deactivated successfully.
Oct 14 05:16:12 np0005486808 nova_compute[259627]: 2025-10-14 09:16:12.935 2 DEBUG oslo_concurrency.processutils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf 548b81f4-df26-4f76-910f-5a14445c93c5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.327s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:16:12 np0005486808 podman[364497]: 2025-10-14 09:16:12.944780422 +0000 UTC m=+1.218050672 container remove da1cc91af64e081590c63b4d278cb012722bc60083ee1607004a476ed2639fdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Oct 14 05:16:12 np0005486808 systemd[1]: libpod-conmon-da1cc91af64e081590c63b4d278cb012722bc60083ee1607004a476ed2639fdd.scope: Deactivated successfully.
Oct 14 05:16:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:16:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:16:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:16:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:16:12 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev c54b0415-85e0-49bb-bc66-d16bdd43b601 does not exist
Oct 14 05:16:12 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 48e33322-1993-4c5e-8ecd-b8c0cf0bd9f4 does not exist
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.033 2 DEBUG nova.storage.rbd_utils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] resizing rbd image 548b81f4-df26-4f76-910f-5a14445c93c5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.136 2 DEBUG nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.136 2 DEBUG nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Ensure instance console log exists: /var/lib/nova/instances/548b81f4-df26-4f76-910f-5a14445c93c5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.137 2 DEBUG oslo_concurrency.lockutils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.137 2 DEBUG oslo_concurrency.lockutils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.137 2 DEBUG oslo_concurrency.lockutils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.138 2 DEBUG nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:08Z,direct_url=<?>,disk_format='qcow2',id=e2368e3e-f504-40e6-a9d3-67df18c845bb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:10Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.142 2 WARNING nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.146 2 DEBUG nova.virt.libvirt.host [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.147 2 DEBUG nova.virt.libvirt.host [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.149 2 DEBUG nova.virt.libvirt.host [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.149 2 DEBUG nova.virt.libvirt.host [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.150 2 DEBUG nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.150 2 DEBUG nova.virt.hardware [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:08Z,direct_url=<?>,disk_format='qcow2',id=e2368e3e-f504-40e6-a9d3-67df18c845bb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:10Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.150 2 DEBUG nova.virt.hardware [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.150 2 DEBUG nova.virt.hardware [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.151 2 DEBUG nova.virt.hardware [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.151 2 DEBUG nova.virt.hardware [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.151 2 DEBUG nova.virt.hardware [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.151 2 DEBUG nova.virt.hardware [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.151 2 DEBUG nova.virt.hardware [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.151 2 DEBUG nova.virt.hardware [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.151 2 DEBUG nova.virt.hardware [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.152 2 DEBUG nova.virt.hardware [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.152 2 DEBUG nova.objects.instance [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lazy-loading 'vcpu_model' on Instance uuid 548b81f4-df26-4f76-910f-5a14445c93c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.179 2 DEBUG oslo_concurrency.processutils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:16:13 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:16:13 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:16:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:16:13 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1412028711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.315 2 DEBUG oslo_concurrency.processutils [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.322 2 DEBUG nova.compute.provider_tree [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.338 2 DEBUG nova.scheduler.client.report [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.357 2 DEBUG oslo_concurrency.lockutils [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.382 2 INFO nova.scheduler.client.report [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Deleted allocations for instance d46b6953-9413-4e6a-94f7-7b5ac9634c16
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.447 2 DEBUG oslo_concurrency.lockutils [None req-013559cf-b1d2-43e0-8c2b-abdc5ad01484 a287ef08fc5c4f218bf06cd2c7ed021e 0a080fae2f3c4e39a6cca225203f5ec6 - - default default] Lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.291s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.521 2 DEBUG nova.compute.manager [req-71eb1e43-7154-4177-a0b6-7962e868f5bc req-6a818de3-618e-4f88-9b4c-174dde76b5c0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Received event network-vif-unplugged-05e92470-2658-4ea2-9c44-e91cd5226905 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.521 2 DEBUG oslo_concurrency.lockutils [req-71eb1e43-7154-4177-a0b6-7962e868f5bc req-6a818de3-618e-4f88-9b4c-174dde76b5c0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.521 2 DEBUG oslo_concurrency.lockutils [req-71eb1e43-7154-4177-a0b6-7962e868f5bc req-6a818de3-618e-4f88-9b4c-174dde76b5c0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.521 2 DEBUG oslo_concurrency.lockutils [req-71eb1e43-7154-4177-a0b6-7962e868f5bc req-6a818de3-618e-4f88-9b4c-174dde76b5c0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.521 2 DEBUG nova.compute.manager [req-71eb1e43-7154-4177-a0b6-7962e868f5bc req-6a818de3-618e-4f88-9b4c-174dde76b5c0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] No waiting events found dispatching network-vif-unplugged-05e92470-2658-4ea2-9c44-e91cd5226905 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.521 2 WARNING nova.compute.manager [req-71eb1e43-7154-4177-a0b6-7962e868f5bc req-6a818de3-618e-4f88-9b4c-174dde76b5c0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Received unexpected event network-vif-unplugged-05e92470-2658-4ea2-9c44-e91cd5226905 for instance with vm_state deleted and task_state None.
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.522 2 DEBUG nova.compute.manager [req-71eb1e43-7154-4177-a0b6-7962e868f5bc req-6a818de3-618e-4f88-9b4c-174dde76b5c0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Received event network-vif-plugged-05e92470-2658-4ea2-9c44-e91cd5226905 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.522 2 DEBUG oslo_concurrency.lockutils [req-71eb1e43-7154-4177-a0b6-7962e868f5bc req-6a818de3-618e-4f88-9b4c-174dde76b5c0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.522 2 DEBUG oslo_concurrency.lockutils [req-71eb1e43-7154-4177-a0b6-7962e868f5bc req-6a818de3-618e-4f88-9b4c-174dde76b5c0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.522 2 DEBUG oslo_concurrency.lockutils [req-71eb1e43-7154-4177-a0b6-7962e868f5bc req-6a818de3-618e-4f88-9b4c-174dde76b5c0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d46b6953-9413-4e6a-94f7-7b5ac9634c16-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.522 2 DEBUG nova.compute.manager [req-71eb1e43-7154-4177-a0b6-7962e868f5bc req-6a818de3-618e-4f88-9b4c-174dde76b5c0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] No waiting events found dispatching network-vif-plugged-05e92470-2658-4ea2-9c44-e91cd5226905 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.522 2 WARNING nova.compute.manager [req-71eb1e43-7154-4177-a0b6-7962e868f5bc req-6a818de3-618e-4f88-9b4c-174dde76b5c0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Received unexpected event network-vif-plugged-05e92470-2658-4ea2-9c44-e91cd5226905 for instance with vm_state deleted and task_state None.
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.523 2 DEBUG nova.compute.manager [req-71eb1e43-7154-4177-a0b6-7962e868f5bc req-6a818de3-618e-4f88-9b4c-174dde76b5c0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Received event network-vif-deleted-05e92470-2658-4ea2-9c44-e91cd5226905 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:16:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:16:13 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3626267375' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.602 2 DEBUG oslo_concurrency.processutils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.639 2 DEBUG nova.storage.rbd_utils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] rbd image 548b81f4-df26-4f76-910f-5a14445c93c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:16:13 np0005486808 nova_compute[259627]: 2025-10-14 09:16:13.645 2 DEBUG oslo_concurrency.processutils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:16:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1867: 305 pgs: 305 active+clean; 279 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 685 KiB/s rd, 4.3 MiB/s wr, 187 op/s
Oct 14 05:16:14 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:16:14 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1398352582' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:16:14 np0005486808 nova_compute[259627]: 2025-10-14 09:16:14.077 2 DEBUG oslo_concurrency.processutils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:16:14 np0005486808 nova_compute[259627]: 2025-10-14 09:16:14.080 2 DEBUG nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:16:14 np0005486808 nova_compute[259627]:  <uuid>548b81f4-df26-4f76-910f-5a14445c93c5</uuid>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:  <name>instance-0000006b</name>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:16:14 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:      <nova:name>tempest-ServerShowV247Test-server-1778827495</nova:name>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:16:13</nova:creationTime>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:16:14 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:        <nova:user uuid="c5d4a1c172e947e0a129b9f397f961cf">tempest-ServerShowV247Test-1595240674-project-member</nova:user>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:        <nova:project uuid="222bb7bdbd34453db62947152ca9b44a">tempest-ServerShowV247Test-1595240674</nova:project>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="e2368e3e-f504-40e6-a9d3-67df18c845bb"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:      <nova:ports/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:      <entry name="serial">548b81f4-df26-4f76-910f-5a14445c93c5</entry>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:      <entry name="uuid">548b81f4-df26-4f76-910f-5a14445c93c5</entry>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:16:14 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/548b81f4-df26-4f76-910f-5a14445c93c5_disk">
Oct 14 05:16:14 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:16:14 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:16:14 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/548b81f4-df26-4f76-910f-5a14445c93c5_disk.config">
Oct 14 05:16:14 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:16:14 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:16:14 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/548b81f4-df26-4f76-910f-5a14445c93c5/console.log" append="off"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:16:14 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:16:14 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:16:14 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:16:14 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:16:14 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:16:14 np0005486808 nova_compute[259627]: 2025-10-14 09:16:14.155 2 DEBUG nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:16:14 np0005486808 nova_compute[259627]: 2025-10-14 09:16:14.156 2 DEBUG nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:16:14 np0005486808 nova_compute[259627]: 2025-10-14 09:16:14.156 2 INFO nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Using config drive#033[00m
Oct 14 05:16:14 np0005486808 nova_compute[259627]: 2025-10-14 09:16:14.180 2 DEBUG nova.storage.rbd_utils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] rbd image 548b81f4-df26-4f76-910f-5a14445c93c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:16:14 np0005486808 nova_compute[259627]: 2025-10-14 09:16:14.200 2 DEBUG nova.objects.instance [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lazy-loading 'ec2_ids' on Instance uuid 548b81f4-df26-4f76-910f-5a14445c93c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:16:14 np0005486808 nova_compute[259627]: 2025-10-14 09:16:14.256 2 DEBUG nova.objects.instance [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lazy-loading 'keypairs' on Instance uuid 548b81f4-df26-4f76-910f-5a14445c93c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:16:15 np0005486808 nova_compute[259627]: 2025-10-14 09:16:15.131 2 INFO nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Creating config drive at /var/lib/nova/instances/548b81f4-df26-4f76-910f-5a14445c93c5/disk.config#033[00m
Oct 14 05:16:15 np0005486808 nova_compute[259627]: 2025-10-14 09:16:15.139 2 DEBUG oslo_concurrency.processutils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/548b81f4-df26-4f76-910f-5a14445c93c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp45ysjlbj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:16:15 np0005486808 nova_compute[259627]: 2025-10-14 09:16:15.312 2 DEBUG oslo_concurrency.processutils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/548b81f4-df26-4f76-910f-5a14445c93c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp45ysjlbj" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:16:15 np0005486808 nova_compute[259627]: 2025-10-14 09:16:15.362 2 DEBUG nova.storage.rbd_utils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] rbd image 548b81f4-df26-4f76-910f-5a14445c93c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:16:15 np0005486808 nova_compute[259627]: 2025-10-14 09:16:15.367 2 DEBUG oslo_concurrency.processutils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/548b81f4-df26-4f76-910f-5a14445c93c5/disk.config 548b81f4-df26-4f76-910f-5a14445c93c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:16:15 np0005486808 nova_compute[259627]: 2025-10-14 09:16:15.548 2 DEBUG oslo_concurrency.processutils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/548b81f4-df26-4f76-910f-5a14445c93c5/disk.config 548b81f4-df26-4f76-910f-5a14445c93c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:16:15 np0005486808 nova_compute[259627]: 2025-10-14 09:16:15.550 2 INFO nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Deleting local config drive /var/lib/nova/instances/548b81f4-df26-4f76-910f-5a14445c93c5/disk.config because it was imported into RBD.#033[00m
Oct 14 05:16:15 np0005486808 systemd-machined[214636]: New machine qemu-135-instance-0000006b.
Oct 14 05:16:15 np0005486808 systemd[1]: Started Virtual Machine qemu-135-instance-0000006b.
Oct 14 05:16:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1868: 305 pgs: 305 active+clean; 167 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 741 KiB/s rd, 6.1 MiB/s wr, 272 op/s
Oct 14 05:16:16 np0005486808 nova_compute[259627]: 2025-10-14 09:16:16.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:17 np0005486808 nova_compute[259627]: 2025-10-14 09:16:17.097 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for 548b81f4-df26-4f76-910f-5a14445c93c5 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct 14 05:16:17 np0005486808 nova_compute[259627]: 2025-10-14 09:16:17.098 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433377.0968168, 548b81f4-df26-4f76-910f-5a14445c93c5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:16:17 np0005486808 nova_compute[259627]: 2025-10-14 09:16:17.098 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:16:17 np0005486808 nova_compute[259627]: 2025-10-14 09:16:17.103 2 DEBUG nova.compute.manager [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:16:17 np0005486808 nova_compute[259627]: 2025-10-14 09:16:17.103 2 DEBUG nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:16:17 np0005486808 nova_compute[259627]: 2025-10-14 09:16:17.108 2 INFO nova.virt.libvirt.driver [-] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Instance spawned successfully.#033[00m
Oct 14 05:16:17 np0005486808 nova_compute[259627]: 2025-10-14 09:16:17.109 2 DEBUG nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:16:17 np0005486808 nova_compute[259627]: 2025-10-14 09:16:17.137 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:16:17 np0005486808 nova_compute[259627]: 2025-10-14 09:16:17.141 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:16:17 np0005486808 nova_compute[259627]: 2025-10-14 09:16:17.153 2 DEBUG nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:16:17 np0005486808 nova_compute[259627]: 2025-10-14 09:16:17.154 2 DEBUG nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:16:17 np0005486808 nova_compute[259627]: 2025-10-14 09:16:17.154 2 DEBUG nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:16:17 np0005486808 nova_compute[259627]: 2025-10-14 09:16:17.154 2 DEBUG nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:16:17 np0005486808 nova_compute[259627]: 2025-10-14 09:16:17.155 2 DEBUG nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:16:17 np0005486808 nova_compute[259627]: 2025-10-14 09:16:17.155 2 DEBUG nova.virt.libvirt.driver [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:16:17 np0005486808 nova_compute[259627]: 2025-10-14 09:16:17.197 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct 14 05:16:17 np0005486808 nova_compute[259627]: 2025-10-14 09:16:17.198 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433377.1015692, 548b81f4-df26-4f76-910f-5a14445c93c5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:16:17 np0005486808 nova_compute[259627]: 2025-10-14 09:16:17.198 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] VM Started (Lifecycle Event)#033[00m
Oct 14 05:16:17 np0005486808 nova_compute[259627]: 2025-10-14 09:16:17.255 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:16:17 np0005486808 nova_compute[259627]: 2025-10-14 09:16:17.259 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:16:17 np0005486808 nova_compute[259627]: 2025-10-14 09:16:17.264 2 DEBUG nova.compute.manager [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:16:17 np0005486808 nova_compute[259627]: 2025-10-14 09:16:17.293 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct 14 05:16:17 np0005486808 nova_compute[259627]: 2025-10-14 09:16:17.326 2 DEBUG oslo_concurrency.lockutils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:16:17 np0005486808 nova_compute[259627]: 2025-10-14 09:16:17.327 2 DEBUG oslo_concurrency.lockutils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:16:17 np0005486808 nova_compute[259627]: 2025-10-14 09:16:17.327 2 DEBUG nova.objects.instance [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct 14 05:16:17 np0005486808 nova_compute[259627]: 2025-10-14 09:16:17.384 2 DEBUG oslo_concurrency.lockutils [None req-9e4ead6f-4c2c-4fd8-a547-96e5ed80e687 c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:16:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:16:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1869: 305 pgs: 305 active+clean; 167 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 738 KiB/s rd, 6.1 MiB/s wr, 268 op/s
Oct 14 05:16:18 np0005486808 nova_compute[259627]: 2025-10-14 09:16:18.193 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433363.1928039, c28cafe5-40e7-47f9-8793-6193487fccc3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:16:18 np0005486808 nova_compute[259627]: 2025-10-14 09:16:18.194 2 INFO nova.compute.manager [-] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:16:18 np0005486808 nova_compute[259627]: 2025-10-14 09:16:18.221 2 DEBUG nova.compute.manager [None req-768767c4-efe7-4764-bccc-73bcb29b1be9 - - - - - -] [instance: c28cafe5-40e7-47f9-8793-6193487fccc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:16:18 np0005486808 nova_compute[259627]: 2025-10-14 09:16:18.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:19 np0005486808 nova_compute[259627]: 2025-10-14 09:16:19.123 2 DEBUG oslo_concurrency.lockutils [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Acquiring lock "548b81f4-df26-4f76-910f-5a14445c93c5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:16:19 np0005486808 nova_compute[259627]: 2025-10-14 09:16:19.125 2 DEBUG oslo_concurrency.lockutils [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "548b81f4-df26-4f76-910f-5a14445c93c5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:16:19 np0005486808 nova_compute[259627]: 2025-10-14 09:16:19.125 2 DEBUG oslo_concurrency.lockutils [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Acquiring lock "548b81f4-df26-4f76-910f-5a14445c93c5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:16:19 np0005486808 nova_compute[259627]: 2025-10-14 09:16:19.126 2 DEBUG oslo_concurrency.lockutils [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "548b81f4-df26-4f76-910f-5a14445c93c5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:16:19 np0005486808 nova_compute[259627]: 2025-10-14 09:16:19.127 2 DEBUG oslo_concurrency.lockutils [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "548b81f4-df26-4f76-910f-5a14445c93c5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:16:19 np0005486808 nova_compute[259627]: 2025-10-14 09:16:19.128 2 INFO nova.compute.manager [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Terminating instance#033[00m
Oct 14 05:16:19 np0005486808 nova_compute[259627]: 2025-10-14 09:16:19.130 2 DEBUG oslo_concurrency.lockutils [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Acquiring lock "refresh_cache-548b81f4-df26-4f76-910f-5a14445c93c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:16:19 np0005486808 nova_compute[259627]: 2025-10-14 09:16:19.131 2 DEBUG oslo_concurrency.lockutils [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Acquired lock "refresh_cache-548b81f4-df26-4f76-910f-5a14445c93c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:16:19 np0005486808 nova_compute[259627]: 2025-10-14 09:16:19.132 2 DEBUG nova.network.neutron [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:16:19 np0005486808 nova_compute[259627]: 2025-10-14 09:16:19.623 2 DEBUG nova.network.neutron [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:16:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1870: 305 pgs: 305 active+clean; 167 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 738 KiB/s rd, 6.1 MiB/s wr, 268 op/s
Oct 14 05:16:20 np0005486808 nova_compute[259627]: 2025-10-14 09:16:20.230 2 DEBUG nova.network.neutron [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:16:20 np0005486808 nova_compute[259627]: 2025-10-14 09:16:20.263 2 DEBUG oslo_concurrency.lockutils [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Releasing lock "refresh_cache-548b81f4-df26-4f76-910f-5a14445c93c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:16:20 np0005486808 nova_compute[259627]: 2025-10-14 09:16:20.265 2 DEBUG nova.compute.manager [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:16:20 np0005486808 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d0000006b.scope: Deactivated successfully.
Oct 14 05:16:20 np0005486808 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d0000006b.scope: Consumed 4.653s CPU time.
Oct 14 05:16:20 np0005486808 systemd-machined[214636]: Machine qemu-135-instance-0000006b terminated.
Oct 14 05:16:20 np0005486808 nova_compute[259627]: 2025-10-14 09:16:20.489 2 INFO nova.virt.libvirt.driver [-] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Instance destroyed successfully.#033[00m
Oct 14 05:16:20 np0005486808 nova_compute[259627]: 2025-10-14 09:16:20.490 2 DEBUG nova.objects.instance [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lazy-loading 'resources' on Instance uuid 548b81f4-df26-4f76-910f-5a14445c93c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:16:20 np0005486808 nova_compute[259627]: 2025-10-14 09:16:20.951 2 INFO nova.virt.libvirt.driver [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Deleting instance files /var/lib/nova/instances/548b81f4-df26-4f76-910f-5a14445c93c5_del#033[00m
Oct 14 05:16:20 np0005486808 nova_compute[259627]: 2025-10-14 09:16:20.953 2 INFO nova.virt.libvirt.driver [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Deletion of /var/lib/nova/instances/548b81f4-df26-4f76-910f-5a14445c93c5_del complete#033[00m
Oct 14 05:16:21 np0005486808 nova_compute[259627]: 2025-10-14 09:16:21.067 2 INFO nova.compute.manager [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Took 0.80 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:16:21 np0005486808 nova_compute[259627]: 2025-10-14 09:16:21.067 2 DEBUG oslo.service.loopingcall [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:16:21 np0005486808 nova_compute[259627]: 2025-10-14 09:16:21.068 2 DEBUG nova.compute.manager [-] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:16:21 np0005486808 nova_compute[259627]: 2025-10-14 09:16:21.068 2 DEBUG nova.network.neutron [-] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:16:21 np0005486808 nova_compute[259627]: 2025-10-14 09:16:21.319 2 DEBUG nova.network.neutron [-] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:16:21 np0005486808 nova_compute[259627]: 2025-10-14 09:16:21.333 2 DEBUG nova.network.neutron [-] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:16:21 np0005486808 nova_compute[259627]: 2025-10-14 09:16:21.349 2 INFO nova.compute.manager [-] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Took 0.28 seconds to deallocate network for instance.#033[00m
Oct 14 05:16:21 np0005486808 nova_compute[259627]: 2025-10-14 09:16:21.397 2 DEBUG oslo_concurrency.lockutils [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:16:21 np0005486808 nova_compute[259627]: 2025-10-14 09:16:21.397 2 DEBUG oslo_concurrency.lockutils [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:16:21 np0005486808 nova_compute[259627]: 2025-10-14 09:16:21.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:21 np0005486808 nova_compute[259627]: 2025-10-14 09:16:21.504 2 DEBUG oslo_concurrency.processutils [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:16:21 np0005486808 podman[365018]: 2025-10-14 09:16:21.711681915 +0000 UTC m=+0.109437138 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:16:21 np0005486808 podman[365019]: 2025-10-14 09:16:21.720475602 +0000 UTC m=+0.116317048 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 14 05:16:21 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:16:21 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1760005497' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:16:21 np0005486808 nova_compute[259627]: 2025-10-14 09:16:21.964 2 DEBUG oslo_concurrency.processutils [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:16:21 np0005486808 nova_compute[259627]: 2025-10-14 09:16:21.971 2 DEBUG nova.compute.provider_tree [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 05:16:21 np0005486808 nova_compute[259627]: 2025-10-14 09:16:21.992 2 DEBUG nova.scheduler.client.report [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 05:16:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1871: 305 pgs: 305 active+clean; 153 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.1 MiB/s wr, 343 op/s
Oct 14 05:16:22 np0005486808 nova_compute[259627]: 2025-10-14 09:16:22.019 2 DEBUG oslo_concurrency.lockutils [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:16:22 np0005486808 nova_compute[259627]: 2025-10-14 09:16:22.052 2 INFO nova.scheduler.client.report [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Deleted allocations for instance 548b81f4-df26-4f76-910f-5a14445c93c5
Oct 14 05:16:22 np0005486808 nova_compute[259627]: 2025-10-14 09:16:22.197 2 DEBUG oslo_concurrency.lockutils [None req-6a2e05d4-5861-4fb7-9991-79591cfe71ab c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "548b81f4-df26-4f76-910f-5a14445c93c5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.073s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:16:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:16:22 np0005486808 nova_compute[259627]: 2025-10-14 09:16:22.915 2 DEBUG oslo_concurrency.lockutils [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Acquiring lock "0c7c28d8-ba3d-471a-bf37-8ff1870d27c8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:16:22 np0005486808 nova_compute[259627]: 2025-10-14 09:16:22.916 2 DEBUG oslo_concurrency.lockutils [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "0c7c28d8-ba3d-471a-bf37-8ff1870d27c8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:16:22 np0005486808 nova_compute[259627]: 2025-10-14 09:16:22.916 2 DEBUG oslo_concurrency.lockutils [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Acquiring lock "0c7c28d8-ba3d-471a-bf37-8ff1870d27c8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:16:22 np0005486808 nova_compute[259627]: 2025-10-14 09:16:22.917 2 DEBUG oslo_concurrency.lockutils [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "0c7c28d8-ba3d-471a-bf37-8ff1870d27c8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:16:22 np0005486808 nova_compute[259627]: 2025-10-14 09:16:22.917 2 DEBUG oslo_concurrency.lockutils [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "0c7c28d8-ba3d-471a-bf37-8ff1870d27c8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:16:22 np0005486808 nova_compute[259627]: 2025-10-14 09:16:22.919 2 INFO nova.compute.manager [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Terminating instance
Oct 14 05:16:22 np0005486808 nova_compute[259627]: 2025-10-14 09:16:22.922 2 DEBUG oslo_concurrency.lockutils [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Acquiring lock "refresh_cache-0c7c28d8-ba3d-471a-bf37-8ff1870d27c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 05:16:22 np0005486808 nova_compute[259627]: 2025-10-14 09:16:22.922 2 DEBUG oslo_concurrency.lockutils [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Acquired lock "refresh_cache-0c7c28d8-ba3d-471a-bf37-8ff1870d27c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 05:16:22 np0005486808 nova_compute[259627]: 2025-10-14 09:16:22.923 2 DEBUG nova.network.neutron [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 05:16:23 np0005486808 nova_compute[259627]: 2025-10-14 09:16:23.176 2 DEBUG nova.network.neutron [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 05:16:23 np0005486808 nova_compute[259627]: 2025-10-14 09:16:23.553 2 DEBUG nova.network.neutron [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 05:16:23 np0005486808 nova_compute[259627]: 2025-10-14 09:16:23.575 2 DEBUG oslo_concurrency.lockutils [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Releasing lock "refresh_cache-0c7c28d8-ba3d-471a-bf37-8ff1870d27c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 05:16:23 np0005486808 nova_compute[259627]: 2025-10-14 09:16:23.577 2 DEBUG nova.compute.manager [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 05:16:23 np0005486808 nova_compute[259627]: 2025-10-14 09:16:23.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:16:23 np0005486808 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d0000006a.scope: Deactivated successfully.
Oct 14 05:16:23 np0005486808 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d0000006a.scope: Consumed 15.199s CPU time.
Oct 14 05:16:23 np0005486808 systemd-machined[214636]: Machine qemu-133-instance-0000006a terminated.
Oct 14 05:16:23 np0005486808 nova_compute[259627]: 2025-10-14 09:16:23.837 2 INFO nova.virt.libvirt.driver [-] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Instance destroyed successfully.
Oct 14 05:16:23 np0005486808 nova_compute[259627]: 2025-10-14 09:16:23.838 2 DEBUG nova.objects.instance [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lazy-loading 'resources' on Instance uuid 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:16:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1872: 305 pgs: 305 active+clean; 153 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 160 op/s
Oct 14 05:16:24 np0005486808 nova_compute[259627]: 2025-10-14 09:16:24.343 2 INFO nova.virt.libvirt.driver [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Deleting instance files /var/lib/nova/instances/0c7c28d8-ba3d-471a-bf37-8ff1870d27c8_del
Oct 14 05:16:24 np0005486808 nova_compute[259627]: 2025-10-14 09:16:24.344 2 INFO nova.virt.libvirt.driver [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Deletion of /var/lib/nova/instances/0c7c28d8-ba3d-471a-bf37-8ff1870d27c8_del complete
Oct 14 05:16:24 np0005486808 nova_compute[259627]: 2025-10-14 09:16:24.405 2 INFO nova.compute.manager [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Took 0.83 seconds to destroy the instance on the hypervisor.
Oct 14 05:16:24 np0005486808 nova_compute[259627]: 2025-10-14 09:16:24.405 2 DEBUG oslo.service.loopingcall [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 05:16:24 np0005486808 nova_compute[259627]: 2025-10-14 09:16:24.406 2 DEBUG nova.compute.manager [-] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 05:16:24 np0005486808 nova_compute[259627]: 2025-10-14 09:16:24.406 2 DEBUG nova.network.neutron [-] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 05:16:24 np0005486808 nova_compute[259627]: 2025-10-14 09:16:24.435 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433369.4336374, 33969555-fe06-4613-b244-d03c9b4180ba => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:16:24 np0005486808 nova_compute[259627]: 2025-10-14 09:16:24.436 2 INFO nova.compute.manager [-] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] VM Stopped (Lifecycle Event)
Oct 14 05:16:24 np0005486808 nova_compute[259627]: 2025-10-14 09:16:24.461 2 DEBUG nova.compute.manager [None req-7be534b2-c728-4c1c-b22a-cbb44a483298 - - - - - -] [instance: 33969555-fe06-4613-b244-d03c9b4180ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:16:24 np0005486808 nova_compute[259627]: 2025-10-14 09:16:24.552 2 DEBUG oslo_concurrency.lockutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Acquiring lock "7eb647bc-75e4-4d38-aaa4-67570c4713f9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:16:24 np0005486808 nova_compute[259627]: 2025-10-14 09:16:24.553 2 DEBUG oslo_concurrency.lockutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lock "7eb647bc-75e4-4d38-aaa4-67570c4713f9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:16:24 np0005486808 nova_compute[259627]: 2025-10-14 09:16:24.584 2 DEBUG nova.compute.manager [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 05:16:24 np0005486808 nova_compute[259627]: 2025-10-14 09:16:24.645 2 DEBUG oslo_concurrency.lockutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:16:24 np0005486808 nova_compute[259627]: 2025-10-14 09:16:24.645 2 DEBUG oslo_concurrency.lockutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:16:24 np0005486808 nova_compute[259627]: 2025-10-14 09:16:24.656 2 DEBUG nova.virt.hardware [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 05:16:24 np0005486808 nova_compute[259627]: 2025-10-14 09:16:24.656 2 INFO nova.compute.claims [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Claim successful on node compute-0.ctlplane.example.com
Oct 14 05:16:24 np0005486808 nova_compute[259627]: 2025-10-14 09:16:24.814 2 DEBUG oslo_concurrency.processutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:16:24 np0005486808 nova_compute[259627]: 2025-10-14 09:16:24.868 2 DEBUG nova.network.neutron [-] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 05:16:24 np0005486808 nova_compute[259627]: 2025-10-14 09:16:24.886 2 DEBUG nova.network.neutron [-] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 05:16:24 np0005486808 nova_compute[259627]: 2025-10-14 09:16:24.905 2 INFO nova.compute.manager [-] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Took 0.50 seconds to deallocate network for instance.
Oct 14 05:16:24 np0005486808 nova_compute[259627]: 2025-10-14 09:16:24.986 2 DEBUG oslo_concurrency.lockutils [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:16:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:16:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1462707404' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:16:25 np0005486808 nova_compute[259627]: 2025-10-14 09:16:25.353 2 DEBUG oslo_concurrency.processutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:16:25 np0005486808 nova_compute[259627]: 2025-10-14 09:16:25.359 2 DEBUG nova.compute.provider_tree [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 05:16:25 np0005486808 nova_compute[259627]: 2025-10-14 09:16:25.377 2 DEBUG nova.scheduler.client.report [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 05:16:25 np0005486808 nova_compute[259627]: 2025-10-14 09:16:25.405 2 DEBUG oslo_concurrency.lockutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:16:25 np0005486808 nova_compute[259627]: 2025-10-14 09:16:25.406 2 DEBUG nova.compute.manager [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 05:16:25 np0005486808 nova_compute[259627]: 2025-10-14 09:16:25.410 2 DEBUG oslo_concurrency.lockutils [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.424s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:16:25 np0005486808 nova_compute[259627]: 2025-10-14 09:16:25.464 2 DEBUG nova.compute.manager [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct 14 05:16:25 np0005486808 nova_compute[259627]: 2025-10-14 09:16:25.490 2 INFO nova.virt.libvirt.driver [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 05:16:25 np0005486808 nova_compute[259627]: 2025-10-14 09:16:25.496 2 DEBUG oslo_concurrency.processutils [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:16:25 np0005486808 nova_compute[259627]: 2025-10-14 09:16:25.578 2 DEBUG nova.compute.manager [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 05:16:25 np0005486808 nova_compute[259627]: 2025-10-14 09:16:25.696 2 DEBUG nova.compute.manager [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 05:16:25 np0005486808 nova_compute[259627]: 2025-10-14 09:16:25.698 2 DEBUG nova.virt.libvirt.driver [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 05:16:25 np0005486808 nova_compute[259627]: 2025-10-14 09:16:25.698 2 INFO nova.virt.libvirt.driver [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Creating image(s)
Oct 14 05:16:25 np0005486808 nova_compute[259627]: 2025-10-14 09:16:25.735 2 DEBUG nova.storage.rbd_utils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] rbd image 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:16:25 np0005486808 nova_compute[259627]: 2025-10-14 09:16:25.775 2 DEBUG nova.storage.rbd_utils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] rbd image 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:16:25 np0005486808 nova_compute[259627]: 2025-10-14 09:16:25.809 2 DEBUG nova.storage.rbd_utils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] rbd image 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:16:25 np0005486808 nova_compute[259627]: 2025-10-14 09:16:25.814 2 DEBUG oslo_concurrency.processutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:16:25 np0005486808 nova_compute[259627]: 2025-10-14 09:16:25.924 2 DEBUG oslo_concurrency.processutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:16:25 np0005486808 nova_compute[259627]: 2025-10-14 09:16:25.926 2 DEBUG oslo_concurrency.lockutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:16:25 np0005486808 nova_compute[259627]: 2025-10-14 09:16:25.926 2 DEBUG oslo_concurrency.lockutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:16:25 np0005486808 nova_compute[259627]: 2025-10-14 09:16:25.927 2 DEBUG oslo_concurrency.lockutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:16:25 np0005486808 nova_compute[259627]: 2025-10-14 09:16:25.959 2 DEBUG nova.storage.rbd_utils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] rbd image 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:16:25 np0005486808 nova_compute[259627]: 2025-10-14 09:16:25.965 2 DEBUG oslo_concurrency.processutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:16:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:16:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2187996767' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:16:26 np0005486808 nova_compute[259627]: 2025-10-14 09:16:26.006 2 DEBUG oslo_concurrency.processutils [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:16:26 np0005486808 nova_compute[259627]: 2025-10-14 09:16:26.014 2 DEBUG nova.compute.provider_tree [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 05:16:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1873: 305 pgs: 305 active+clean; 41 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 210 op/s
Oct 14 05:16:26 np0005486808 nova_compute[259627]: 2025-10-14 09:16:26.039 2 DEBUG nova.scheduler.client.report [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:16:26 np0005486808 nova_compute[259627]: 2025-10-14 09:16:26.064 2 DEBUG oslo_concurrency.lockutils [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:16:26 np0005486808 nova_compute[259627]: 2025-10-14 09:16:26.104 2 INFO nova.scheduler.client.report [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Deleted allocations for instance 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8#033[00m
Oct 14 05:16:26 np0005486808 nova_compute[259627]: 2025-10-14 09:16:26.187 2 DEBUG oslo_concurrency.lockutils [None req-d18d2bca-735b-4209-a0f8-cf2f9c4cc5aa c5d4a1c172e947e0a129b9f397f961cf 222bb7bdbd34453db62947152ca9b44a - - default default] Lock "0c7c28d8-ba3d-471a-bf37-8ff1870d27c8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.271s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:16:26 np0005486808 nova_compute[259627]: 2025-10-14 09:16:26.258 2 DEBUG oslo_concurrency.processutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.293s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:16:26 np0005486808 nova_compute[259627]: 2025-10-14 09:16:26.337 2 DEBUG nova.storage.rbd_utils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] resizing rbd image 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:16:26 np0005486808 nova_compute[259627]: 2025-10-14 09:16:26.450 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433371.425544, d46b6953-9413-4e6a-94f7-7b5ac9634c16 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:16:26 np0005486808 nova_compute[259627]: 2025-10-14 09:16:26.450 2 INFO nova.compute.manager [-] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:16:26 np0005486808 nova_compute[259627]: 2025-10-14 09:16:26.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:26 np0005486808 nova_compute[259627]: 2025-10-14 09:16:26.460 2 DEBUG nova.objects.instance [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lazy-loading 'migration_context' on Instance uuid 7eb647bc-75e4-4d38-aaa4-67570c4713f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:16:26 np0005486808 nova_compute[259627]: 2025-10-14 09:16:26.481 2 DEBUG nova.compute.manager [None req-1952e744-7506-4ce4-a09c-e3f313583521 - - - - - -] [instance: d46b6953-9413-4e6a-94f7-7b5ac9634c16] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:16:26 np0005486808 nova_compute[259627]: 2025-10-14 09:16:26.483 2 DEBUG nova.virt.libvirt.driver [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:16:26 np0005486808 nova_compute[259627]: 2025-10-14 09:16:26.484 2 DEBUG nova.virt.libvirt.driver [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Ensure instance console log exists: /var/lib/nova/instances/7eb647bc-75e4-4d38-aaa4-67570c4713f9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:16:26 np0005486808 nova_compute[259627]: 2025-10-14 09:16:26.484 2 DEBUG oslo_concurrency.lockutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:16:26 np0005486808 nova_compute[259627]: 2025-10-14 09:16:26.485 2 DEBUG oslo_concurrency.lockutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:16:26 np0005486808 nova_compute[259627]: 2025-10-14 09:16:26.485 2 DEBUG oslo_concurrency.lockutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:16:26 np0005486808 nova_compute[259627]: 2025-10-14 09:16:26.488 2 DEBUG nova.virt.libvirt.driver [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:16:26 np0005486808 nova_compute[259627]: 2025-10-14 09:16:26.495 2 WARNING nova.virt.libvirt.driver [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:16:26 np0005486808 nova_compute[259627]: 2025-10-14 09:16:26.501 2 DEBUG nova.virt.libvirt.host [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:16:26 np0005486808 nova_compute[259627]: 2025-10-14 09:16:26.502 2 DEBUG nova.virt.libvirt.host [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:16:26 np0005486808 nova_compute[259627]: 2025-10-14 09:16:26.505 2 DEBUG nova.virt.libvirt.host [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:16:26 np0005486808 nova_compute[259627]: 2025-10-14 09:16:26.506 2 DEBUG nova.virt.libvirt.host [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:16:26 np0005486808 nova_compute[259627]: 2025-10-14 09:16:26.506 2 DEBUG nova.virt.libvirt.driver [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:16:26 np0005486808 nova_compute[259627]: 2025-10-14 09:16:26.507 2 DEBUG nova.virt.hardware [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:16:26 np0005486808 nova_compute[259627]: 2025-10-14 09:16:26.508 2 DEBUG nova.virt.hardware [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:16:26 np0005486808 nova_compute[259627]: 2025-10-14 09:16:26.508 2 DEBUG nova.virt.hardware [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:16:26 np0005486808 nova_compute[259627]: 2025-10-14 09:16:26.508 2 DEBUG nova.virt.hardware [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:16:26 np0005486808 nova_compute[259627]: 2025-10-14 09:16:26.509 2 DEBUG nova.virt.hardware [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:16:26 np0005486808 nova_compute[259627]: 2025-10-14 09:16:26.509 2 DEBUG nova.virt.hardware [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:16:26 np0005486808 nova_compute[259627]: 2025-10-14 09:16:26.509 2 DEBUG nova.virt.hardware [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:16:26 np0005486808 nova_compute[259627]: 2025-10-14 09:16:26.510 2 DEBUG nova.virt.hardware [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:16:26 np0005486808 nova_compute[259627]: 2025-10-14 09:16:26.510 2 DEBUG nova.virt.hardware [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:16:26 np0005486808 nova_compute[259627]: 2025-10-14 09:16:26.510 2 DEBUG nova.virt.hardware [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:16:26 np0005486808 nova_compute[259627]: 2025-10-14 09:16:26.511 2 DEBUG nova.virt.hardware [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:16:26 np0005486808 nova_compute[259627]: 2025-10-14 09:16:26.515 2 DEBUG oslo_concurrency.processutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:16:26 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:16:26 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1096043527' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:16:26 np0005486808 nova_compute[259627]: 2025-10-14 09:16:26.952 2 DEBUG oslo_concurrency.processutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:16:26 np0005486808 nova_compute[259627]: 2025-10-14 09:16:26.983 2 DEBUG nova.storage.rbd_utils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] rbd image 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:16:26 np0005486808 nova_compute[259627]: 2025-10-14 09:16:26.986 2 DEBUG oslo_concurrency.processutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:16:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:16:27 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4007108185' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:16:27 np0005486808 nova_compute[259627]: 2025-10-14 09:16:27.408 2 DEBUG oslo_concurrency.processutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:16:27 np0005486808 nova_compute[259627]: 2025-10-14 09:16:27.412 2 DEBUG nova.objects.instance [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lazy-loading 'pci_devices' on Instance uuid 7eb647bc-75e4-4d38-aaa4-67570c4713f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:16:27 np0005486808 nova_compute[259627]: 2025-10-14 09:16:27.430 2 DEBUG nova.virt.libvirt.driver [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:16:27 np0005486808 nova_compute[259627]:  <uuid>7eb647bc-75e4-4d38-aaa4-67570c4713f9</uuid>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:  <name>instance-0000006c</name>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:16:27 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:      <nova:name>tempest-ServerShowV257Test-server-1454600294</nova:name>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:16:26</nova:creationTime>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:16:27 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:        <nova:user uuid="c832ba0d968c4ca4a20d386152e5b5bb">tempest-ServerShowV257Test-1068269928-project-member</nova:user>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:        <nova:project uuid="4ba227fc561b4e9cb9d86e1727015a7d">tempest-ServerShowV257Test-1068269928</nova:project>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:      <nova:ports/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:      <entry name="serial">7eb647bc-75e4-4d38-aaa4-67570c4713f9</entry>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:      <entry name="uuid">7eb647bc-75e4-4d38-aaa4-67570c4713f9</entry>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:16:27 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk">
Oct 14 05:16:27 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:16:27 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:16:27 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk.config">
Oct 14 05:16:27 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:16:27 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:16:27 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/7eb647bc-75e4-4d38-aaa4-67570c4713f9/console.log" append="off"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:16:27 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:16:27 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:16:27 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:16:27 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:16:27 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:16:27 np0005486808 nova_compute[259627]: 2025-10-14 09:16:27.495 2 DEBUG nova.virt.libvirt.driver [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:16:27 np0005486808 nova_compute[259627]: 2025-10-14 09:16:27.496 2 DEBUG nova.virt.libvirt.driver [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:16:27 np0005486808 nova_compute[259627]: 2025-10-14 09:16:27.497 2 INFO nova.virt.libvirt.driver [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Using config drive#033[00m
Oct 14 05:16:27 np0005486808 nova_compute[259627]: 2025-10-14 09:16:27.534 2 DEBUG nova.storage.rbd_utils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] rbd image 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:16:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:16:28 np0005486808 nova_compute[259627]: 2025-10-14 09:16:28.009 2 INFO nova.virt.libvirt.driver [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Creating config drive at /var/lib/nova/instances/7eb647bc-75e4-4d38-aaa4-67570c4713f9/disk.config#033[00m
Oct 14 05:16:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1874: 305 pgs: 305 active+clean; 41 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 126 op/s
Oct 14 05:16:28 np0005486808 nova_compute[259627]: 2025-10-14 09:16:28.018 2 DEBUG oslo_concurrency.processutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7eb647bc-75e4-4d38-aaa4-67570c4713f9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoqm8tngm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:16:28 np0005486808 nova_compute[259627]: 2025-10-14 09:16:28.190 2 DEBUG oslo_concurrency.processutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7eb647bc-75e4-4d38-aaa4-67570c4713f9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoqm8tngm" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:16:28 np0005486808 nova_compute[259627]: 2025-10-14 09:16:28.222 2 DEBUG nova.storage.rbd_utils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] rbd image 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:16:28 np0005486808 nova_compute[259627]: 2025-10-14 09:16:28.227 2 DEBUG oslo_concurrency.processutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7eb647bc-75e4-4d38-aaa4-67570c4713f9/disk.config 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:16:28 np0005486808 nova_compute[259627]: 2025-10-14 09:16:28.407 2 DEBUG oslo_concurrency.processutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7eb647bc-75e4-4d38-aaa4-67570c4713f9/disk.config 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:16:28 np0005486808 nova_compute[259627]: 2025-10-14 09:16:28.408 2 INFO nova.virt.libvirt.driver [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Deleting local config drive /var/lib/nova/instances/7eb647bc-75e4-4d38-aaa4-67570c4713f9/disk.config because it was imported into RBD.
Oct 14 05:16:28 np0005486808 systemd-machined[214636]: New machine qemu-136-instance-0000006c.
Oct 14 05:16:28 np0005486808 systemd[1]: Started Virtual Machine qemu-136-instance-0000006c.
Oct 14 05:16:28 np0005486808 nova_compute[259627]: 2025-10-14 09:16:28.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:16:29 np0005486808 nova_compute[259627]: 2025-10-14 09:16:29.254 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433389.2535648, 7eb647bc-75e4-4d38-aaa4-67570c4713f9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:16:29 np0005486808 nova_compute[259627]: 2025-10-14 09:16:29.254 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] VM Resumed (Lifecycle Event)
Oct 14 05:16:29 np0005486808 nova_compute[259627]: 2025-10-14 09:16:29.257 2 DEBUG nova.compute.manager [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 05:16:29 np0005486808 nova_compute[259627]: 2025-10-14 09:16:29.257 2 DEBUG nova.virt.libvirt.driver [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 05:16:29 np0005486808 nova_compute[259627]: 2025-10-14 09:16:29.261 2 INFO nova.virt.libvirt.driver [-] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Instance spawned successfully.
Oct 14 05:16:29 np0005486808 nova_compute[259627]: 2025-10-14 09:16:29.261 2 DEBUG nova.virt.libvirt.driver [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 05:16:29 np0005486808 nova_compute[259627]: 2025-10-14 09:16:29.286 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:16:29 np0005486808 nova_compute[259627]: 2025-10-14 09:16:29.294 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 05:16:29 np0005486808 nova_compute[259627]: 2025-10-14 09:16:29.299 2 DEBUG nova.virt.libvirt.driver [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:16:29 np0005486808 nova_compute[259627]: 2025-10-14 09:16:29.300 2 DEBUG nova.virt.libvirt.driver [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:16:29 np0005486808 nova_compute[259627]: 2025-10-14 09:16:29.300 2 DEBUG nova.virt.libvirt.driver [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:16:29 np0005486808 nova_compute[259627]: 2025-10-14 09:16:29.301 2 DEBUG nova.virt.libvirt.driver [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:16:29 np0005486808 nova_compute[259627]: 2025-10-14 09:16:29.301 2 DEBUG nova.virt.libvirt.driver [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:16:29 np0005486808 nova_compute[259627]: 2025-10-14 09:16:29.302 2 DEBUG nova.virt.libvirt.driver [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:16:29 np0005486808 nova_compute[259627]: 2025-10-14 09:16:29.328 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 05:16:29 np0005486808 nova_compute[259627]: 2025-10-14 09:16:29.329 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433389.2546666, 7eb647bc-75e4-4d38-aaa4-67570c4713f9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:16:29 np0005486808 nova_compute[259627]: 2025-10-14 09:16:29.329 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] VM Started (Lifecycle Event)
Oct 14 05:16:29 np0005486808 nova_compute[259627]: 2025-10-14 09:16:29.358 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:16:29 np0005486808 nova_compute[259627]: 2025-10-14 09:16:29.362 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 05:16:29 np0005486808 nova_compute[259627]: 2025-10-14 09:16:29.391 2 INFO nova.compute.manager [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Took 3.69 seconds to spawn the instance on the hypervisor.
Oct 14 05:16:29 np0005486808 nova_compute[259627]: 2025-10-14 09:16:29.392 2 DEBUG nova.compute.manager [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:16:29 np0005486808 nova_compute[259627]: 2025-10-14 09:16:29.393 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 05:16:29 np0005486808 nova_compute[259627]: 2025-10-14 09:16:29.458 2 INFO nova.compute.manager [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Took 4.83 seconds to build instance.
Oct 14 05:16:29 np0005486808 nova_compute[259627]: 2025-10-14 09:16:29.475 2 DEBUG oslo_concurrency.lockutils [None req-0268edc0-36be-4903-a4c4-b270f11da364 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lock "7eb647bc-75e4-4d38-aaa4-67570c4713f9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.923s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:16:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1875: 305 pgs: 305 active+clean; 41 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 126 op/s
Oct 14 05:16:30 np0005486808 nova_compute[259627]: 2025-10-14 09:16:30.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:16:31 np0005486808 nova_compute[259627]: 2025-10-14 09:16:31.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:16:31 np0005486808 nova_compute[259627]: 2025-10-14 09:16:31.527 2 DEBUG oslo_concurrency.lockutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "8e0fe601-33b8-44f0-8452-d821825b9176" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:16:31 np0005486808 nova_compute[259627]: 2025-10-14 09:16:31.528 2 DEBUG oslo_concurrency.lockutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "8e0fe601-33b8-44f0-8452-d821825b9176" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:16:31 np0005486808 nova_compute[259627]: 2025-10-14 09:16:31.548 2 DEBUG nova.compute.manager [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 05:16:31 np0005486808 nova_compute[259627]: 2025-10-14 09:16:31.631 2 DEBUG oslo_concurrency.lockutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:16:31 np0005486808 nova_compute[259627]: 2025-10-14 09:16:31.632 2 DEBUG oslo_concurrency.lockutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:16:31 np0005486808 nova_compute[259627]: 2025-10-14 09:16:31.640 2 DEBUG nova.virt.hardware [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 05:16:31 np0005486808 nova_compute[259627]: 2025-10-14 09:16:31.641 2 INFO nova.compute.claims [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Claim successful on node compute-0.ctlplane.example.com
Oct 14 05:16:31 np0005486808 nova_compute[259627]: 2025-10-14 09:16:31.795 2 DEBUG oslo_concurrency.processutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:16:31 np0005486808 nova_compute[259627]: 2025-10-14 09:16:31.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:16:31 np0005486808 nova_compute[259627]: 2025-10-14 09:16:31.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:16:31 np0005486808 nova_compute[259627]: 2025-10-14 09:16:31.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:16:31 np0005486808 nova_compute[259627]: 2025-10-14 09:16:31.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:16:32 np0005486808 nova_compute[259627]: 2025-10-14 09:16:32.005 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:16:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1876: 305 pgs: 305 active+clean; 88 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.8 MiB/s wr, 201 op/s
Oct 14 05:16:32 np0005486808 nova_compute[259627]: 2025-10-14 09:16:32.051 2 INFO nova.compute.manager [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Rebuilding instance
Oct 14 05:16:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:16:32 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4242382949' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:16:32 np0005486808 nova_compute[259627]: 2025-10-14 09:16:32.359 2 DEBUG oslo_concurrency.processutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:16:32 np0005486808 nova_compute[259627]: 2025-10-14 09:16:32.360 2 DEBUG nova.objects.instance [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lazy-loading 'trusted_certs' on Instance uuid 7eb647bc-75e4-4d38-aaa4-67570c4713f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:16:32 np0005486808 nova_compute[259627]: 2025-10-14 09:16:32.367 2 DEBUG nova.compute.provider_tree [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 05:16:32 np0005486808 nova_compute[259627]: 2025-10-14 09:16:32.383 2 DEBUG nova.compute.manager [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:16:32 np0005486808 nova_compute[259627]: 2025-10-14 09:16:32.384 2 DEBUG nova.scheduler.client.report [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 05:16:32 np0005486808 nova_compute[259627]: 2025-10-14 09:16:32.426 2 DEBUG oslo_concurrency.lockutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:16:32 np0005486808 nova_compute[259627]: 2025-10-14 09:16:32.428 2 DEBUG nova.compute.manager [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 05:16:32 np0005486808 nova_compute[259627]: 2025-10-14 09:16:32.432 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.426s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:16:32 np0005486808 nova_compute[259627]: 2025-10-14 09:16:32.432 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:16:32 np0005486808 nova_compute[259627]: 2025-10-14 09:16:32.433 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 05:16:32 np0005486808 nova_compute[259627]: 2025-10-14 09:16:32.433 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:16:32 np0005486808 nova_compute[259627]: 2025-10-14 09:16:32.493 2 DEBUG nova.objects.instance [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lazy-loading 'pci_requests' on Instance uuid 7eb647bc-75e4-4d38-aaa4-67570c4713f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:16:32 np0005486808 nova_compute[259627]: 2025-10-14 09:16:32.512 2 DEBUG nova.objects.instance [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lazy-loading 'pci_devices' on Instance uuid 7eb647bc-75e4-4d38-aaa4-67570c4713f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:16:32 np0005486808 nova_compute[259627]: 2025-10-14 09:16:32.537 2 DEBUG nova.compute.manager [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 05:16:32 np0005486808 nova_compute[259627]: 2025-10-14 09:16:32.538 2 DEBUG nova.network.neutron [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 05:16:32 np0005486808 nova_compute[259627]: 2025-10-14 09:16:32.544 2 DEBUG nova.objects.instance [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lazy-loading 'resources' on Instance uuid 7eb647bc-75e4-4d38-aaa4-67570c4713f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:16:32 np0005486808 nova_compute[259627]: 2025-10-14 09:16:32.561 2 INFO nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 05:16:32 np0005486808 nova_compute[259627]: 2025-10-14 09:16:32.565 2 DEBUG nova.objects.instance [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lazy-loading 'migration_context' on Instance uuid 7eb647bc-75e4-4d38-aaa4-67570c4713f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:16:32 np0005486808 nova_compute[259627]: 2025-10-14 09:16:32.603 2 DEBUG nova.compute.manager [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 05:16:32 np0005486808 nova_compute[259627]: 2025-10-14 09:16:32.609 2 DEBUG nova.objects.instance [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 14 05:16:32 np0005486808 nova_compute[259627]: 2025-10-14 09:16:32.616 2 DEBUG nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 14 05:16:32 np0005486808 nova_compute[259627]: 2025-10-14 09:16:32.702 2 DEBUG nova.compute.manager [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 05:16:32 np0005486808 nova_compute[259627]: 2025-10-14 09:16:32.713 2 DEBUG nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 05:16:32 np0005486808 nova_compute[259627]: 2025-10-14 09:16:32.714 2 INFO nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Creating image(s)
Oct 14 05:16:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:16:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:16:32 np0005486808 nova_compute[259627]: 2025-10-14 09:16:32.742 2 DEBUG nova.storage.rbd_utils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 8e0fe601-33b8-44f0-8452-d821825b9176_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:16:32 np0005486808 nova_compute[259627]: 2025-10-14 09:16:32.770 2 DEBUG nova.storage.rbd_utils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 8e0fe601-33b8-44f0-8452-d821825b9176_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:16:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:16:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:16:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:16:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:16:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:16:32
Oct 14 05:16:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:16:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:16:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['.mgr', 'vms', 'cephfs.cephfs.data', 'images', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'backups', 'default.rgw.control', 'default.rgw.meta']
Oct 14 05:16:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:16:32 np0005486808 nova_compute[259627]: 2025-10-14 09:16:32.799 2 DEBUG nova.storage.rbd_utils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 8e0fe601-33b8-44f0-8452-d821825b9176_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:16:32 np0005486808 nova_compute[259627]: 2025-10-14 09:16:32.803 2 DEBUG oslo_concurrency.processutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:16:32 np0005486808 nova_compute[259627]: 2025-10-14 09:16:32.849 2 DEBUG nova.policy [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e992bcb79c4946a8985e3df25eb216ca', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2d24993a343a425dbddac7e32be0c86b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 05:16:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:16:32 np0005486808 nova_compute[259627]: 2025-10-14 09:16:32.890 2 DEBUG oslo_concurrency.processutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:16:32 np0005486808 nova_compute[259627]: 2025-10-14 09:16:32.891 2 DEBUG oslo_concurrency.lockutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:16:32 np0005486808 nova_compute[259627]: 2025-10-14 09:16:32.891 2 DEBUG oslo_concurrency.lockutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:16:32 np0005486808 nova_compute[259627]: 2025-10-14 09:16:32.892 2 DEBUG oslo_concurrency.lockutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:16:32 np0005486808 nova_compute[259627]: 2025-10-14 09:16:32.923 2 DEBUG nova.storage.rbd_utils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 8e0fe601-33b8-44f0-8452-d821825b9176_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:16:32 np0005486808 nova_compute[259627]: 2025-10-14 09:16:32.927 2 DEBUG oslo_concurrency.processutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 8e0fe601-33b8-44f0-8452-d821825b9176_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:16:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:16:32 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2442803607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:16:32 np0005486808 nova_compute[259627]: 2025-10-14 09:16:32.980 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:16:33 np0005486808 nova_compute[259627]: 2025-10-14 09:16:33.078 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:16:33 np0005486808 nova_compute[259627]: 2025-10-14 09:16:33.078 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:16:33 np0005486808 podman[365609]: 2025-10-14 09:16:33.081210892 +0000 UTC m=+0.057057848 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 14 05:16:33 np0005486808 podman[365607]: 2025-10-14 09:16:33.159316917 +0000 UTC m=+0.125909525 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 14 05:16:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:16:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:16:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:16:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:16:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:16:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:16:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:16:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:16:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:16:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:16:33 np0005486808 nova_compute[259627]: 2025-10-14 09:16:33.252 2 DEBUG oslo_concurrency.processutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 8e0fe601-33b8-44f0-8452-d821825b9176_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.325s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:16:33 np0005486808 nova_compute[259627]: 2025-10-14 09:16:33.303 2 DEBUG nova.storage.rbd_utils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] resizing rbd image 8e0fe601-33b8-44f0-8452-d821825b9176_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:16:33 np0005486808 nova_compute[259627]: 2025-10-14 09:16:33.358 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:16:33 np0005486808 nova_compute[259627]: 2025-10-14 09:16:33.360 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3581MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:16:33 np0005486808 nova_compute[259627]: 2025-10-14 09:16:33.360 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:16:33 np0005486808 nova_compute[259627]: 2025-10-14 09:16:33.360 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:16:33 np0005486808 nova_compute[259627]: 2025-10-14 09:16:33.392 2 DEBUG nova.objects.instance [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'migration_context' on Instance uuid 8e0fe601-33b8-44f0-8452-d821825b9176 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:16:33 np0005486808 nova_compute[259627]: 2025-10-14 09:16:33.418 2 DEBUG nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:16:33 np0005486808 nova_compute[259627]: 2025-10-14 09:16:33.419 2 DEBUG nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Ensure instance console log exists: /var/lib/nova/instances/8e0fe601-33b8-44f0-8452-d821825b9176/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:16:33 np0005486808 nova_compute[259627]: 2025-10-14 09:16:33.419 2 DEBUG oslo_concurrency.lockutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:16:33 np0005486808 nova_compute[259627]: 2025-10-14 09:16:33.420 2 DEBUG oslo_concurrency.lockutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:16:33 np0005486808 nova_compute[259627]: 2025-10-14 09:16:33.420 2 DEBUG oslo_concurrency.lockutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:16:33 np0005486808 nova_compute[259627]: 2025-10-14 09:16:33.459 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 7eb647bc-75e4-4d38-aaa4-67570c4713f9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:16:33 np0005486808 nova_compute[259627]: 2025-10-14 09:16:33.459 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 8e0fe601-33b8-44f0-8452-d821825b9176 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:16:33 np0005486808 nova_compute[259627]: 2025-10-14 09:16:33.459 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:16:33 np0005486808 nova_compute[259627]: 2025-10-14 09:16:33.460 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:16:33 np0005486808 nova_compute[259627]: 2025-10-14 09:16:33.520 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:16:33 np0005486808 nova_compute[259627]: 2025-10-14 09:16:33.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:16:33 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3571790934' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:16:34 np0005486808 nova_compute[259627]: 2025-10-14 09:16:34.009 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:16:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1877: 305 pgs: 305 active+clean; 88 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Oct 14 05:16:34 np0005486808 nova_compute[259627]: 2025-10-14 09:16:34.018 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:16:34 np0005486808 nova_compute[259627]: 2025-10-14 09:16:34.038 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:16:34 np0005486808 nova_compute[259627]: 2025-10-14 09:16:34.075 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:16:34 np0005486808 nova_compute[259627]: 2025-10-14 09:16:34.076 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:16:34 np0005486808 nova_compute[259627]: 2025-10-14 09:16:34.195 2 DEBUG nova.network.neutron [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Successfully created port: ae1bda9b-5957-43de-bcf7-97164f008565 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:16:35 np0005486808 nova_compute[259627]: 2025-10-14 09:16:35.141 2 DEBUG nova.network.neutron [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Successfully updated port: ae1bda9b-5957-43de-bcf7-97164f008565 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:16:35 np0005486808 nova_compute[259627]: 2025-10-14 09:16:35.163 2 DEBUG oslo_concurrency.lockutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "refresh_cache-8e0fe601-33b8-44f0-8452-d821825b9176" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:16:35 np0005486808 nova_compute[259627]: 2025-10-14 09:16:35.164 2 DEBUG oslo_concurrency.lockutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquired lock "refresh_cache-8e0fe601-33b8-44f0-8452-d821825b9176" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:16:35 np0005486808 nova_compute[259627]: 2025-10-14 09:16:35.164 2 DEBUG nova.network.neutron [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:16:35 np0005486808 nova_compute[259627]: 2025-10-14 09:16:35.380 2 DEBUG nova.compute.manager [req-eaf05351-b5f3-4cd8-aa88-a61a55bfdc4f req-56fe0240-fcd1-43e2-818f-ba672dbd712e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Received event network-changed-ae1bda9b-5957-43de-bcf7-97164f008565 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:16:35 np0005486808 nova_compute[259627]: 2025-10-14 09:16:35.381 2 DEBUG nova.compute.manager [req-eaf05351-b5f3-4cd8-aa88-a61a55bfdc4f req-56fe0240-fcd1-43e2-818f-ba672dbd712e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Refreshing instance network info cache due to event network-changed-ae1bda9b-5957-43de-bcf7-97164f008565. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:16:35 np0005486808 nova_compute[259627]: 2025-10-14 09:16:35.382 2 DEBUG oslo_concurrency.lockutils [req-eaf05351-b5f3-4cd8-aa88-a61a55bfdc4f req-56fe0240-fcd1-43e2-818f-ba672dbd712e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-8e0fe601-33b8-44f0-8452-d821825b9176" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:16:35 np0005486808 nova_compute[259627]: 2025-10-14 09:16:35.478 2 DEBUG nova.network.neutron [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:16:35 np0005486808 nova_compute[259627]: 2025-10-14 09:16:35.487 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433380.486112, 548b81f4-df26-4f76-910f-5a14445c93c5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:16:35 np0005486808 nova_compute[259627]: 2025-10-14 09:16:35.487 2 INFO nova.compute.manager [-] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:16:35 np0005486808 nova_compute[259627]: 2025-10-14 09:16:35.516 2 DEBUG nova.compute.manager [None req-eff9a11e-da52-4c24-9266-3537089ebd37 - - - - - -] [instance: 548b81f4-df26-4f76-910f-5a14445c93c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:16:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1878: 305 pgs: 305 active+clean; 134 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 178 op/s
Oct 14 05:16:36 np0005486808 nova_compute[259627]: 2025-10-14 09:16:36.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:36 np0005486808 nova_compute[259627]: 2025-10-14 09:16:36.597 2 DEBUG nova.network.neutron [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Updating instance_info_cache with network_info: [{"id": "ae1bda9b-5957-43de-bcf7-97164f008565", "address": "fa:16:3e:b7:a7:bb", "network": {"id": "bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d", "bridge": "br-int", "label": "tempest-network-smoke--1209319139", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1bda9b-59", "ovs_interfaceid": "ae1bda9b-5957-43de-bcf7-97164f008565", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:16:36 np0005486808 nova_compute[259627]: 2025-10-14 09:16:36.617 2 DEBUG oslo_concurrency.lockutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Releasing lock "refresh_cache-8e0fe601-33b8-44f0-8452-d821825b9176" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:16:36 np0005486808 nova_compute[259627]: 2025-10-14 09:16:36.618 2 DEBUG nova.compute.manager [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Instance network_info: |[{"id": "ae1bda9b-5957-43de-bcf7-97164f008565", "address": "fa:16:3e:b7:a7:bb", "network": {"id": "bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d", "bridge": "br-int", "label": "tempest-network-smoke--1209319139", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1bda9b-59", "ovs_interfaceid": "ae1bda9b-5957-43de-bcf7-97164f008565", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:16:36 np0005486808 nova_compute[259627]: 2025-10-14 09:16:36.619 2 DEBUG oslo_concurrency.lockutils [req-eaf05351-b5f3-4cd8-aa88-a61a55bfdc4f req-56fe0240-fcd1-43e2-818f-ba672dbd712e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-8e0fe601-33b8-44f0-8452-d821825b9176" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:16:36 np0005486808 nova_compute[259627]: 2025-10-14 09:16:36.620 2 DEBUG nova.network.neutron [req-eaf05351-b5f3-4cd8-aa88-a61a55bfdc4f req-56fe0240-fcd1-43e2-818f-ba672dbd712e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Refreshing network info cache for port ae1bda9b-5957-43de-bcf7-97164f008565 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:16:36 np0005486808 nova_compute[259627]: 2025-10-14 09:16:36.625 2 DEBUG nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Start _get_guest_xml network_info=[{"id": "ae1bda9b-5957-43de-bcf7-97164f008565", "address": "fa:16:3e:b7:a7:bb", "network": {"id": "bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d", "bridge": "br-int", "label": "tempest-network-smoke--1209319139", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1bda9b-59", "ovs_interfaceid": "ae1bda9b-5957-43de-bcf7-97164f008565", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:16:36 np0005486808 nova_compute[259627]: 2025-10-14 09:16:36.633 2 WARNING nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:16:36 np0005486808 nova_compute[259627]: 2025-10-14 09:16:36.647 2 DEBUG nova.virt.libvirt.host [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:16:36 np0005486808 nova_compute[259627]: 2025-10-14 09:16:36.648 2 DEBUG nova.virt.libvirt.host [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:16:36 np0005486808 nova_compute[259627]: 2025-10-14 09:16:36.655 2 DEBUG nova.virt.libvirt.host [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:16:36 np0005486808 nova_compute[259627]: 2025-10-14 09:16:36.656 2 DEBUG nova.virt.libvirt.host [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:16:36 np0005486808 nova_compute[259627]: 2025-10-14 09:16:36.657 2 DEBUG nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:16:36 np0005486808 nova_compute[259627]: 2025-10-14 09:16:36.658 2 DEBUG nova.virt.hardware [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:16:36 np0005486808 nova_compute[259627]: 2025-10-14 09:16:36.659 2 DEBUG nova.virt.hardware [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:16:36 np0005486808 nova_compute[259627]: 2025-10-14 09:16:36.660 2 DEBUG nova.virt.hardware [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:16:36 np0005486808 nova_compute[259627]: 2025-10-14 09:16:36.661 2 DEBUG nova.virt.hardware [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:16:36 np0005486808 nova_compute[259627]: 2025-10-14 09:16:36.661 2 DEBUG nova.virt.hardware [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:16:36 np0005486808 nova_compute[259627]: 2025-10-14 09:16:36.662 2 DEBUG nova.virt.hardware [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:16:36 np0005486808 nova_compute[259627]: 2025-10-14 09:16:36.663 2 DEBUG nova.virt.hardware [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:16:36 np0005486808 nova_compute[259627]: 2025-10-14 09:16:36.664 2 DEBUG nova.virt.hardware [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:16:36 np0005486808 nova_compute[259627]: 2025-10-14 09:16:36.664 2 DEBUG nova.virt.hardware [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:16:36 np0005486808 nova_compute[259627]: 2025-10-14 09:16:36.665 2 DEBUG nova.virt.hardware [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:16:36 np0005486808 nova_compute[259627]: 2025-10-14 09:16:36.666 2 DEBUG nova.virt.hardware [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:16:36 np0005486808 nova_compute[259627]: 2025-10-14 09:16:36.672 2 DEBUG oslo_concurrency.processutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:16:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:16:37 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2718026794' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:16:37 np0005486808 nova_compute[259627]: 2025-10-14 09:16:37.153 2 DEBUG oslo_concurrency.processutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:16:37 np0005486808 nova_compute[259627]: 2025-10-14 09:16:37.187 2 DEBUG nova.storage.rbd_utils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 8e0fe601-33b8-44f0-8452-d821825b9176_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:16:37 np0005486808 nova_compute[259627]: 2025-10-14 09:16:37.193 2 DEBUG oslo_concurrency.processutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:16:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:16:37 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3714991685' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:16:37 np0005486808 nova_compute[259627]: 2025-10-14 09:16:37.638 2 DEBUG oslo_concurrency.processutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:16:37 np0005486808 nova_compute[259627]: 2025-10-14 09:16:37.642 2 DEBUG nova.virt.libvirt.vif [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:16:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1974746659',display_name='tempest-TestNetworkAdvancedServerOps-server-1974746659',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1974746659',id=109,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOLo//MvhWo3fWQGHjr+urxX4+VcySarwttwotGnxIUKeYHMPJD2ibaE4RsIINhRhtCN4FfH4X/uMkC3+sHENw8r8re1VC10CtKYPlARtw6F00oKxhFKvSmM/1BEvmDnTA==',key_name='tempest-TestNetworkAdvancedServerOps-261373730',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d24993a343a425dbddac7e32be0c86b',ramdisk_id='',reservation_id='r-ttn68b3m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-94788416',owner_user_name='tempest-TestNetworkAdvancedServerOps-94788416-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:16:32Z,user_data=None,user_id='e992bcb79c4946a8985e3df25eb216ca',uuid=8e0fe601-33b8-44f0-8452-d821825b9176,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ae1bda9b-5957-43de-bcf7-97164f008565", "address": "fa:16:3e:b7:a7:bb", "network": {"id": "bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d", "bridge": "br-int", "label": "tempest-network-smoke--1209319139", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1bda9b-59", "ovs_interfaceid": "ae1bda9b-5957-43de-bcf7-97164f008565", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:16:37 np0005486808 nova_compute[259627]: 2025-10-14 09:16:37.643 2 DEBUG nova.network.os_vif_util [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converting VIF {"id": "ae1bda9b-5957-43de-bcf7-97164f008565", "address": "fa:16:3e:b7:a7:bb", "network": {"id": "bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d", "bridge": "br-int", "label": "tempest-network-smoke--1209319139", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1bda9b-59", "ovs_interfaceid": "ae1bda9b-5957-43de-bcf7-97164f008565", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:16:37 np0005486808 nova_compute[259627]: 2025-10-14 09:16:37.645 2 DEBUG nova.network.os_vif_util [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:a7:bb,bridge_name='br-int',has_traffic_filtering=True,id=ae1bda9b-5957-43de-bcf7-97164f008565,network=Network(bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae1bda9b-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:16:37 np0005486808 nova_compute[259627]: 2025-10-14 09:16:37.647 2 DEBUG nova.objects.instance [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'pci_devices' on Instance uuid 8e0fe601-33b8-44f0-8452-d821825b9176 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:16:37 np0005486808 nova_compute[259627]: 2025-10-14 09:16:37.678 2 DEBUG nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:16:37 np0005486808 nova_compute[259627]:  <uuid>8e0fe601-33b8-44f0-8452-d821825b9176</uuid>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:  <name>instance-0000006d</name>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:16:37 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1974746659</nova:name>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:16:36</nova:creationTime>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:16:37 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:        <nova:user uuid="e992bcb79c4946a8985e3df25eb216ca">tempest-TestNetworkAdvancedServerOps-94788416-project-member</nova:user>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:        <nova:project uuid="2d24993a343a425dbddac7e32be0c86b">tempest-TestNetworkAdvancedServerOps-94788416</nova:project>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:        <nova:port uuid="ae1bda9b-5957-43de-bcf7-97164f008565">
Oct 14 05:16:37 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:      <entry name="serial">8e0fe601-33b8-44f0-8452-d821825b9176</entry>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:      <entry name="uuid">8e0fe601-33b8-44f0-8452-d821825b9176</entry>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:16:37 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/8e0fe601-33b8-44f0-8452-d821825b9176_disk">
Oct 14 05:16:37 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:16:37 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:16:37 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/8e0fe601-33b8-44f0-8452-d821825b9176_disk.config">
Oct 14 05:16:37 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:16:37 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:16:37 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:b7:a7:bb"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:      <target dev="tapae1bda9b-59"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:16:37 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/8e0fe601-33b8-44f0-8452-d821825b9176/console.log" append="off"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:16:37 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:16:37 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:16:37 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:16:37 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:16:37 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:16:37 np0005486808 nova_compute[259627]: 2025-10-14 09:16:37.693 2 DEBUG nova.compute.manager [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Preparing to wait for external event network-vif-plugged-ae1bda9b-5957-43de-bcf7-97164f008565 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:16:37 np0005486808 nova_compute[259627]: 2025-10-14 09:16:37.694 2 DEBUG oslo_concurrency.lockutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:16:37 np0005486808 nova_compute[259627]: 2025-10-14 09:16:37.694 2 DEBUG oslo_concurrency.lockutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:16:37 np0005486808 nova_compute[259627]: 2025-10-14 09:16:37.695 2 DEBUG oslo_concurrency.lockutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:16:37 np0005486808 nova_compute[259627]: 2025-10-14 09:16:37.696 2 DEBUG nova.virt.libvirt.vif [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:16:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1974746659',display_name='tempest-TestNetworkAdvancedServerOps-server-1974746659',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1974746659',id=109,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOLo//MvhWo3fWQGHjr+urxX4+VcySarwttwotGnxIUKeYHMPJD2ibaE4RsIINhRhtCN4FfH4X/uMkC3+sHENw8r8re1VC10CtKYPlARtw6F00oKxhFKvSmM/1BEvmDnTA==',key_name='tempest-TestNetworkAdvancedServerOps-261373730',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d24993a343a425dbddac7e32be0c86b',ramdisk_id='',reservation_id='r-ttn68b3m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-94788416',owner_user_name='tempest-TestNetworkAdvancedServerOps-94788416-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:16:32Z,user_data=None,user_id='e992bcb79c4946a8985e3df25eb216ca',uuid=8e0fe601-33b8-44f0-8452-d821825b9176,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ae1bda9b-5957-43de-bcf7-97164f008565", "address": "fa:16:3e:b7:a7:bb", "network": {"id": "bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d", "bridge": "br-int", "label": "tempest-network-smoke--1209319139", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1bda9b-59", "ovs_interfaceid": "ae1bda9b-5957-43de-bcf7-97164f008565", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:16:37 np0005486808 nova_compute[259627]: 2025-10-14 09:16:37.696 2 DEBUG nova.network.os_vif_util [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converting VIF {"id": "ae1bda9b-5957-43de-bcf7-97164f008565", "address": "fa:16:3e:b7:a7:bb", "network": {"id": "bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d", "bridge": "br-int", "label": "tempest-network-smoke--1209319139", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1bda9b-59", "ovs_interfaceid": "ae1bda9b-5957-43de-bcf7-97164f008565", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:16:37 np0005486808 nova_compute[259627]: 2025-10-14 09:16:37.697 2 DEBUG nova.network.os_vif_util [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:a7:bb,bridge_name='br-int',has_traffic_filtering=True,id=ae1bda9b-5957-43de-bcf7-97164f008565,network=Network(bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae1bda9b-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:16:37 np0005486808 nova_compute[259627]: 2025-10-14 09:16:37.704 2 DEBUG os_vif [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:a7:bb,bridge_name='br-int',has_traffic_filtering=True,id=ae1bda9b-5957-43de-bcf7-97164f008565,network=Network(bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae1bda9b-59') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:16:37 np0005486808 nova_compute[259627]: 2025-10-14 09:16:37.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:37 np0005486808 nova_compute[259627]: 2025-10-14 09:16:37.705 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:16:37 np0005486808 nova_compute[259627]: 2025-10-14 09:16:37.706 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:16:37 np0005486808 nova_compute[259627]: 2025-10-14 09:16:37.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:37 np0005486808 nova_compute[259627]: 2025-10-14 09:16:37.711 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapae1bda9b-59, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:16:37 np0005486808 nova_compute[259627]: 2025-10-14 09:16:37.712 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapae1bda9b-59, col_values=(('external_ids', {'iface-id': 'ae1bda9b-5957-43de-bcf7-97164f008565', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b7:a7:bb', 'vm-uuid': '8e0fe601-33b8-44f0-8452-d821825b9176'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:16:37 np0005486808 NetworkManager[44885]: <info>  [1760433397.7155] manager: (tapae1bda9b-59): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/456)
Oct 14 05:16:37 np0005486808 nova_compute[259627]: 2025-10-14 09:16:37.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:37 np0005486808 nova_compute[259627]: 2025-10-14 09:16:37.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:16:37 np0005486808 nova_compute[259627]: 2025-10-14 09:16:37.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:37 np0005486808 nova_compute[259627]: 2025-10-14 09:16:37.722 2 INFO os_vif [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:a7:bb,bridge_name='br-int',has_traffic_filtering=True,id=ae1bda9b-5957-43de-bcf7-97164f008565,network=Network(bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae1bda9b-59')#033[00m
Oct 14 05:16:37 np0005486808 nova_compute[259627]: 2025-10-14 09:16:37.782 2 DEBUG nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:16:37 np0005486808 nova_compute[259627]: 2025-10-14 09:16:37.782 2 DEBUG nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:16:37 np0005486808 nova_compute[259627]: 2025-10-14 09:16:37.782 2 DEBUG nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] No VIF found with MAC fa:16:3e:b7:a7:bb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:16:37 np0005486808 nova_compute[259627]: 2025-10-14 09:16:37.783 2 INFO nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Using config drive#033[00m
Oct 14 05:16:37 np0005486808 nova_compute[259627]: 2025-10-14 09:16:37.804 2 DEBUG nova.storage.rbd_utils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 8e0fe601-33b8-44f0-8452-d821825b9176_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:16:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:16:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1879: 305 pgs: 305 active+clean; 134 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Oct 14 05:16:38 np0005486808 nova_compute[259627]: 2025-10-14 09:16:38.367 2 INFO nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Creating config drive at /var/lib/nova/instances/8e0fe601-33b8-44f0-8452-d821825b9176/disk.config#033[00m
Oct 14 05:16:38 np0005486808 nova_compute[259627]: 2025-10-14 09:16:38.376 2 DEBUG oslo_concurrency.processutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8e0fe601-33b8-44f0-8452-d821825b9176/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp89c02boe execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:16:38 np0005486808 nova_compute[259627]: 2025-10-14 09:16:38.441 2 DEBUG nova.network.neutron [req-eaf05351-b5f3-4cd8-aa88-a61a55bfdc4f req-56fe0240-fcd1-43e2-818f-ba672dbd712e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Updated VIF entry in instance network info cache for port ae1bda9b-5957-43de-bcf7-97164f008565. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:16:38 np0005486808 nova_compute[259627]: 2025-10-14 09:16:38.442 2 DEBUG nova.network.neutron [req-eaf05351-b5f3-4cd8-aa88-a61a55bfdc4f req-56fe0240-fcd1-43e2-818f-ba672dbd712e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Updating instance_info_cache with network_info: [{"id": "ae1bda9b-5957-43de-bcf7-97164f008565", "address": "fa:16:3e:b7:a7:bb", "network": {"id": "bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d", "bridge": "br-int", "label": "tempest-network-smoke--1209319139", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1bda9b-59", "ovs_interfaceid": "ae1bda9b-5957-43de-bcf7-97164f008565", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:16:38 np0005486808 nova_compute[259627]: 2025-10-14 09:16:38.461 2 DEBUG oslo_concurrency.lockutils [req-eaf05351-b5f3-4cd8-aa88-a61a55bfdc4f req-56fe0240-fcd1-43e2-818f-ba672dbd712e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-8e0fe601-33b8-44f0-8452-d821825b9176" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:16:38 np0005486808 nova_compute[259627]: 2025-10-14 09:16:38.546 2 DEBUG oslo_concurrency.processutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8e0fe601-33b8-44f0-8452-d821825b9176/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp89c02boe" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:16:38 np0005486808 nova_compute[259627]: 2025-10-14 09:16:38.586 2 DEBUG nova.storage.rbd_utils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] rbd image 8e0fe601-33b8-44f0-8452-d821825b9176_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:16:38 np0005486808 nova_compute[259627]: 2025-10-14 09:16:38.590 2 DEBUG oslo_concurrency.processutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8e0fe601-33b8-44f0-8452-d821825b9176/disk.config 8e0fe601-33b8-44f0-8452-d821825b9176_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:16:38 np0005486808 nova_compute[259627]: 2025-10-14 09:16:38.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:38 np0005486808 nova_compute[259627]: 2025-10-14 09:16:38.787 2 DEBUG oslo_concurrency.processutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8e0fe601-33b8-44f0-8452-d821825b9176/disk.config 8e0fe601-33b8-44f0-8452-d821825b9176_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:16:38 np0005486808 nova_compute[259627]: 2025-10-14 09:16:38.788 2 INFO nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Deleting local config drive /var/lib/nova/instances/8e0fe601-33b8-44f0-8452-d821825b9176/disk.config because it was imported into RBD.#033[00m
Oct 14 05:16:38 np0005486808 nova_compute[259627]: 2025-10-14 09:16:38.835 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433383.8332632, 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:16:38 np0005486808 nova_compute[259627]: 2025-10-14 09:16:38.835 2 INFO nova.compute.manager [-] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:16:38 np0005486808 kernel: tapae1bda9b-59: entered promiscuous mode
Oct 14 05:16:38 np0005486808 NetworkManager[44885]: <info>  [1760433398.8372] manager: (tapae1bda9b-59): new Tun device (/org/freedesktop/NetworkManager/Devices/457)
Oct 14 05:16:38 np0005486808 nova_compute[259627]: 2025-10-14 09:16:38.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:38 np0005486808 ovn_controller[152662]: 2025-10-14T09:16:38Z|01139|binding|INFO|Claiming lport ae1bda9b-5957-43de-bcf7-97164f008565 for this chassis.
Oct 14 05:16:38 np0005486808 ovn_controller[152662]: 2025-10-14T09:16:38Z|01140|binding|INFO|ae1bda9b-5957-43de-bcf7-97164f008565: Claiming fa:16:3e:b7:a7:bb 10.100.0.9
Oct 14 05:16:38 np0005486808 nova_compute[259627]: 2025-10-14 09:16:38.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:38 np0005486808 nova_compute[259627]: 2025-10-14 09:16:38.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:38.857 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:a7:bb 10.100.0.9'], port_security=['fa:16:3e:b7:a7:bb 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '8e0fe601-33b8-44f0-8452-d821825b9176', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d24993a343a425dbddac7e32be0c86b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6f08b4d4-f298-4d7f-881f-65fc5ca17fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7171c907-a00d-4336-9a3c-18b14ff390bf, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=ae1bda9b-5957-43de-bcf7-97164f008565) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:16:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:38.858 162547 INFO neutron.agent.ovn.metadata.agent [-] Port ae1bda9b-5957-43de-bcf7-97164f008565 in datapath bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d bound to our chassis#033[00m
Oct 14 05:16:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:38.859 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d#033[00m
Oct 14 05:16:38 np0005486808 nova_compute[259627]: 2025-10-14 09:16:38.870 2 DEBUG nova.compute.manager [None req-148bb98d-af63-4be7-aacb-a8f5cb02b15a - - - - - -] [instance: 0c7c28d8-ba3d-471a-bf37-8ff1870d27c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:16:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:38.873 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5fb1ad6b-9c45-40ae-aefe-b66e67821a7a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:16:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:38.873 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbd7e2e81-91 in ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:16:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:38.876 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbd7e2e81-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:16:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:38.877 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4301f6df-dba5-4c81-ad2c-c6155c45036a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:16:38 np0005486808 systemd-udevd[365902]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:16:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:38.878 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8d12afcb-34ba-48f5-b198-ef31033c940a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:16:38 np0005486808 systemd-machined[214636]: New machine qemu-137-instance-0000006d.
Oct 14 05:16:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:38.895 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[b8990696-d985-4a54-bb74-6e35ce04beb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:16:38 np0005486808 NetworkManager[44885]: <info>  [1760433398.9012] device (tapae1bda9b-59): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:16:38 np0005486808 NetworkManager[44885]: <info>  [1760433398.9023] device (tapae1bda9b-59): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:16:38 np0005486808 systemd[1]: Started Virtual Machine qemu-137-instance-0000006d.
Oct 14 05:16:38 np0005486808 nova_compute[259627]: 2025-10-14 09:16:38.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:38.925 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4d70c3be-bc93-4203-ad50-1d39c946cc44]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:16:38 np0005486808 ovn_controller[152662]: 2025-10-14T09:16:38Z|01141|binding|INFO|Setting lport ae1bda9b-5957-43de-bcf7-97164f008565 ovn-installed in OVS
Oct 14 05:16:38 np0005486808 ovn_controller[152662]: 2025-10-14T09:16:38Z|01142|binding|INFO|Setting lport ae1bda9b-5957-43de-bcf7-97164f008565 up in Southbound
Oct 14 05:16:38 np0005486808 nova_compute[259627]: 2025-10-14 09:16:38.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:38.956 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d6d8f6a8-fb14-407b-9b87-490b9407fcc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:16:38 np0005486808 NetworkManager[44885]: <info>  [1760433398.9705] manager: (tapbd7e2e81-90): new Veth device (/org/freedesktop/NetworkManager/Devices/458)
Oct 14 05:16:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:38.969 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e593613c-fe91-4d68-9590-0f56351477e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:16:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:39.026 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4f02ae8c-b319-452d-9728-714fce872174]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:16:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:39.030 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1c4ab5ec-0aaf-423c-a935-7c611cbf742e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:16:39 np0005486808 NetworkManager[44885]: <info>  [1760433399.0575] device (tapbd7e2e81-90): carrier: link connected
Oct 14 05:16:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:39.061 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d8071ba8-84e0-4fa1-ab67-c6d0039d0e60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.077 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.077 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.077 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 05:16:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:39.093 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[313e8bdd-b1ff-40ed-aa77-3b25736eb082]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbd7e2e81-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:8e:de'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 328], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 717781, 'reachable_time': 42678, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 365934, 'error': None, 'target': 'ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.096 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.097 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-7eb647bc-75e4-4d38-aaa4-67570c4713f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.097 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-7eb647bc-75e4-4d38-aaa4-67570c4713f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.097 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.097 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7eb647bc-75e4-4d38-aaa4-67570c4713f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:16:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:39.113 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5f3feb05-0a6b-446b-ac80-e825d168784d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feac:8ede'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 717781, 'tstamp': 717781}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 365935, 'error': None, 'target': 'ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:16:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:39.143 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d44b2170-29eb-45aa-8c89-a9c8f3695ba4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbd7e2e81-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:8e:de'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 328], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 717781, 'reachable_time': 42678, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 365936, 'error': None, 'target': 'ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:16:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:39.182 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[480afbf6-beb8-406c-9892-3831d04f7c16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:16:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:39.279 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7b2d1faa-f256-4fbb-8cfe-e6686087b6ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:16:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:39.282 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd7e2e81-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:16:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:39.283 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:16:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:39.283 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbd7e2e81-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:39 np0005486808 kernel: tapbd7e2e81-90: entered promiscuous mode
Oct 14 05:16:39 np0005486808 NetworkManager[44885]: <info>  [1760433399.3301] manager: (tapbd7e2e81-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/459)
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:39.333 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbd7e2e81-90, col_values=(('external_ids', {'iface-id': '9945b67f-e925-4ba1-a5f4-5c7846f9de7a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:39 np0005486808 ovn_controller[152662]: 2025-10-14T09:16:39Z|01143|binding|INFO|Releasing lport 9945b67f-e925-4ba1-a5f4-5c7846f9de7a from this chassis (sb_readonly=0)
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:39.364 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:16:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:39.365 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[97ff1e06-34bc-452d-81f5-dd293f208d16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:16:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:39.366 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:16:39 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:16:39 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:16:39 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d
Oct 14 05:16:39 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:16:39 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:16:39 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:16:39 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d.pid.haproxy
Oct 14 05:16:39 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:16:39 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:16:39 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:16:39 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:16:39 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:16:39 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:16:39 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:16:39 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:16:39 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:16:39 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:16:39 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:16:39 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:16:39 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:16:39 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:16:39 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:16:39 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:16:39 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:16:39 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:16:39 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:16:39 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:16:39 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d
Oct 14 05:16:39 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:16:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:39.368 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d', 'env', 'PROCESS_TAG=haproxy-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.522 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.578 2 DEBUG nova.compute.manager [req-de22e413-59a7-4da7-9e21-3c890327baaf req-a45d6ac6-0347-4e3b-a4fd-8e36b985ac9d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Received event network-vif-plugged-ae1bda9b-5957-43de-bcf7-97164f008565 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.579 2 DEBUG oslo_concurrency.lockutils [req-de22e413-59a7-4da7-9e21-3c890327baaf req-a45d6ac6-0347-4e3b-a4fd-8e36b985ac9d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.579 2 DEBUG oslo_concurrency.lockutils [req-de22e413-59a7-4da7-9e21-3c890327baaf req-a45d6ac6-0347-4e3b-a4fd-8e36b985ac9d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.579 2 DEBUG oslo_concurrency.lockutils [req-de22e413-59a7-4da7-9e21-3c890327baaf req-a45d6ac6-0347-4e3b-a4fd-8e36b985ac9d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.579 2 DEBUG nova.compute.manager [req-de22e413-59a7-4da7-9e21-3c890327baaf req-a45d6ac6-0347-4e3b-a4fd-8e36b985ac9d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Processing event network-vif-plugged-ae1bda9b-5957-43de-bcf7-97164f008565 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.834 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433399.8342152, 8e0fe601-33b8-44f0-8452-d821825b9176 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.834 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] VM Started (Lifecycle Event)#033[00m
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.836 2 DEBUG nova.compute.manager [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.839 2 DEBUG nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.842 2 INFO nova.virt.libvirt.driver [-] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Instance spawned successfully.#033[00m
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.842 2 DEBUG nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.858 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.860 2 DEBUG nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.861 2 DEBUG nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.861 2 DEBUG nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.861 2 DEBUG nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.862 2 DEBUG nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.862 2 DEBUG nova.virt.libvirt.driver [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.866 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:16:39 np0005486808 podman[366010]: 2025-10-14 09:16:39.891336247 +0000 UTC m=+0.053020248 container create b270055ce7693e9d9613129d58bad4977135ab0f0f3886a78430eb1d13c2bf3a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.897 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.897 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433399.8343365, 8e0fe601-33b8-44f0-8452-d821825b9176 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.897 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.909 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.921 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.924 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-7eb647bc-75e4-4d38-aaa4-67570c4713f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.925 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.925 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.925 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.926 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433399.8390906, 8e0fe601-33b8-44f0-8452-d821825b9176 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.926 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:16:39 np0005486808 systemd[1]: Started libpod-conmon-b270055ce7693e9d9613129d58bad4977135ab0f0f3886a78430eb1d13c2bf3a.scope.
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.939 2 INFO nova.compute.manager [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Took 7.24 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.940 2 DEBUG nova.compute.manager [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.951 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.954 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:16:39 np0005486808 podman[366010]: 2025-10-14 09:16:39.864625498 +0000 UTC m=+0.026309529 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:16:39 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:16:39 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e4b02960d8aa9c8b5d05fa99183f05f82538622c06a41271790d657e5edb93b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:16:39 np0005486808 nova_compute[259627]: 2025-10-14 09:16:39.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:16:39 np0005486808 podman[366010]: 2025-10-14 09:16:39.9835528 +0000 UTC m=+0.145236821 container init b270055ce7693e9d9613129d58bad4977135ab0f0f3886a78430eb1d13c2bf3a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 05:16:40 np0005486808 nova_compute[259627]: 2025-10-14 09:16:40.000 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:16:40 np0005486808 podman[366010]: 2025-10-14 09:16:40.004884435 +0000 UTC m=+0.166568436 container start b270055ce7693e9d9613129d58bad4977135ab0f0f3886a78430eb1d13c2bf3a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 14 05:16:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1880: 305 pgs: 305 active+clean; 134 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Oct 14 05:16:40 np0005486808 neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d[366025]: [NOTICE]   (366029) : New worker (366031) forked
Oct 14 05:16:40 np0005486808 neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d[366025]: [NOTICE]   (366029) : Loading success.
Oct 14 05:16:40 np0005486808 nova_compute[259627]: 2025-10-14 09:16:40.036 2 INFO nova.compute.manager [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Took 8.44 seconds to build instance.#033[00m
Oct 14 05:16:40 np0005486808 nova_compute[259627]: 2025-10-14 09:16:40.051 2 DEBUG oslo_concurrency.lockutils [None req-5d71d48c-b6ef-4f81-b136-6ad3fe981333 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "8e0fe601-33b8-44f0-8452-d821825b9176" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.523s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:16:40 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Oct 14 05:16:41 np0005486808 nova_compute[259627]: 2025-10-14 09:16:41.819 2 DEBUG nova.compute.manager [req-14a78fe3-b7d3-49ba-bef3-2b2636b4e662 req-d77b2ac2-e6d0-472b-b1f3-3b05e6c1edd4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Received event network-vif-plugged-ae1bda9b-5957-43de-bcf7-97164f008565 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:16:41 np0005486808 nova_compute[259627]: 2025-10-14 09:16:41.819 2 DEBUG oslo_concurrency.lockutils [req-14a78fe3-b7d3-49ba-bef3-2b2636b4e662 req-d77b2ac2-e6d0-472b-b1f3-3b05e6c1edd4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:16:41 np0005486808 nova_compute[259627]: 2025-10-14 09:16:41.820 2 DEBUG oslo_concurrency.lockutils [req-14a78fe3-b7d3-49ba-bef3-2b2636b4e662 req-d77b2ac2-e6d0-472b-b1f3-3b05e6c1edd4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:16:41 np0005486808 nova_compute[259627]: 2025-10-14 09:16:41.820 2 DEBUG oslo_concurrency.lockutils [req-14a78fe3-b7d3-49ba-bef3-2b2636b4e662 req-d77b2ac2-e6d0-472b-b1f3-3b05e6c1edd4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:16:41 np0005486808 nova_compute[259627]: 2025-10-14 09:16:41.820 2 DEBUG nova.compute.manager [req-14a78fe3-b7d3-49ba-bef3-2b2636b4e662 req-d77b2ac2-e6d0-472b-b1f3-3b05e6c1edd4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] No waiting events found dispatching network-vif-plugged-ae1bda9b-5957-43de-bcf7-97164f008565 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:16:41 np0005486808 nova_compute[259627]: 2025-10-14 09:16:41.820 2 WARNING nova.compute.manager [req-14a78fe3-b7d3-49ba-bef3-2b2636b4e662 req-d77b2ac2-e6d0-472b-b1f3-3b05e6c1edd4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Received unexpected event network-vif-plugged-ae1bda9b-5957-43de-bcf7-97164f008565 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:16:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1881: 305 pgs: 305 active+clean; 155 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.4 MiB/s wr, 174 op/s
Oct 14 05:16:42 np0005486808 nova_compute[259627]: 2025-10-14 09:16:42.681 2 DEBUG nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct 14 05:16:42 np0005486808 nova_compute[259627]: 2025-10-14 09:16:42.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:16:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:16:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:16:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:16:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:16:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001064762811747086 of space, bias 1.0, pg target 0.31942884352412576 quantized to 32 (current 32)
Oct 14 05:16:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:16:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:16:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:16:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:16:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:16:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 14 05:16:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:16:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:16:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:16:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:16:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:16:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:16:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:16:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:16:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:16:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:16:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:16:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:16:43 np0005486808 nova_compute[259627]: 2025-10-14 09:16:43.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1882: 305 pgs: 305 active+clean; 155 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 3.6 MiB/s wr, 98 op/s
Oct 14 05:16:44 np0005486808 ovn_controller[152662]: 2025-10-14T09:16:44Z|01144|binding|INFO|Releasing lport 9945b67f-e925-4ba1-a5f4-5c7846f9de7a from this chassis (sb_readonly=0)
Oct 14 05:16:44 np0005486808 nova_compute[259627]: 2025-10-14 09:16:44.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:44 np0005486808 NetworkManager[44885]: <info>  [1760433404.1107] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/460)
Oct 14 05:16:44 np0005486808 NetworkManager[44885]: <info>  [1760433404.1119] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/461)
Oct 14 05:16:44 np0005486808 ovn_controller[152662]: 2025-10-14T09:16:44Z|01145|binding|INFO|Releasing lport 9945b67f-e925-4ba1-a5f4-5c7846f9de7a from this chassis (sb_readonly=0)
Oct 14 05:16:44 np0005486808 nova_compute[259627]: 2025-10-14 09:16:44.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:44 np0005486808 nova_compute[259627]: 2025-10-14 09:16:44.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:16:44 np0005486808 nova_compute[259627]: 2025-10-14 09:16:44.460 2 DEBUG nova.compute.manager [req-cc019e47-f578-4b8b-82a3-59fde7bd2033 req-1f1ceb90-64e7-44a5-989a-0cf14a977cff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Received event network-changed-ae1bda9b-5957-43de-bcf7-97164f008565 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:16:44 np0005486808 nova_compute[259627]: 2025-10-14 09:16:44.461 2 DEBUG nova.compute.manager [req-cc019e47-f578-4b8b-82a3-59fde7bd2033 req-1f1ceb90-64e7-44a5-989a-0cf14a977cff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Refreshing instance network info cache due to event network-changed-ae1bda9b-5957-43de-bcf7-97164f008565. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:16:44 np0005486808 nova_compute[259627]: 2025-10-14 09:16:44.462 2 DEBUG oslo_concurrency.lockutils [req-cc019e47-f578-4b8b-82a3-59fde7bd2033 req-1f1ceb90-64e7-44a5-989a-0cf14a977cff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-8e0fe601-33b8-44f0-8452-d821825b9176" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:16:44 np0005486808 nova_compute[259627]: 2025-10-14 09:16:44.462 2 DEBUG oslo_concurrency.lockutils [req-cc019e47-f578-4b8b-82a3-59fde7bd2033 req-1f1ceb90-64e7-44a5-989a-0cf14a977cff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-8e0fe601-33b8-44f0-8452-d821825b9176" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:16:44 np0005486808 nova_compute[259627]: 2025-10-14 09:16:44.463 2 DEBUG nova.network.neutron [req-cc019e47-f578-4b8b-82a3-59fde7bd2033 req-1f1ceb90-64e7-44a5-989a-0cf14a977cff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Refreshing network info cache for port ae1bda9b-5957-43de-bcf7-97164f008565 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:16:44 np0005486808 nova_compute[259627]: 2025-10-14 09:16:44.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:16:45 np0005486808 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d0000006c.scope: Deactivated successfully.
Oct 14 05:16:45 np0005486808 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d0000006c.scope: Consumed 13.203s CPU time.
Oct 14 05:16:45 np0005486808 systemd-machined[214636]: Machine qemu-136-instance-0000006c terminated.
Oct 14 05:16:45 np0005486808 nova_compute[259627]: 2025-10-14 09:16:45.696 2 INFO nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Instance shutdown successfully after 13 seconds.#033[00m
Oct 14 05:16:45 np0005486808 nova_compute[259627]: 2025-10-14 09:16:45.703 2 INFO nova.virt.libvirt.driver [-] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Instance destroyed successfully.#033[00m
Oct 14 05:16:45 np0005486808 nova_compute[259627]: 2025-10-14 09:16:45.709 2 INFO nova.virt.libvirt.driver [-] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Instance destroyed successfully.#033[00m
Oct 14 05:16:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1883: 305 pgs: 305 active+clean; 167 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.9 MiB/s wr, 192 op/s
Oct 14 05:16:46 np0005486808 nova_compute[259627]: 2025-10-14 09:16:46.144 2 INFO nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Deleting instance files /var/lib/nova/instances/7eb647bc-75e4-4d38-aaa4-67570c4713f9_del#033[00m
Oct 14 05:16:46 np0005486808 nova_compute[259627]: 2025-10-14 09:16:46.145 2 INFO nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Deletion of /var/lib/nova/instances/7eb647bc-75e4-4d38-aaa4-67570c4713f9_del complete#033[00m
Oct 14 05:16:46 np0005486808 nova_compute[259627]: 2025-10-14 09:16:46.310 2 DEBUG nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:16:46 np0005486808 nova_compute[259627]: 2025-10-14 09:16:46.311 2 INFO nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Creating image(s)#033[00m
Oct 14 05:16:46 np0005486808 nova_compute[259627]: 2025-10-14 09:16:46.341 2 DEBUG nova.storage.rbd_utils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] rbd image 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:16:46 np0005486808 nova_compute[259627]: 2025-10-14 09:16:46.365 2 DEBUG nova.storage.rbd_utils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] rbd image 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:16:46 np0005486808 nova_compute[259627]: 2025-10-14 09:16:46.390 2 DEBUG nova.storage.rbd_utils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] rbd image 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:16:46 np0005486808 nova_compute[259627]: 2025-10-14 09:16:46.393 2 DEBUG oslo_concurrency.processutils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:16:46 np0005486808 nova_compute[259627]: 2025-10-14 09:16:46.477 2 DEBUG oslo_concurrency.processutils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:16:46 np0005486808 nova_compute[259627]: 2025-10-14 09:16:46.478 2 DEBUG oslo_concurrency.lockutils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Acquiring lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:16:46 np0005486808 nova_compute[259627]: 2025-10-14 09:16:46.479 2 DEBUG oslo_concurrency.lockutils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:16:46 np0005486808 nova_compute[259627]: 2025-10-14 09:16:46.479 2 DEBUG oslo_concurrency.lockutils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lock "31b09e724e39ad6100a7d39b565399944ae3b6cf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:16:46 np0005486808 nova_compute[259627]: 2025-10-14 09:16:46.502 2 DEBUG nova.storage.rbd_utils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] rbd image 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:16:46 np0005486808 nova_compute[259627]: 2025-10-14 09:16:46.506 2 DEBUG oslo_concurrency.processutils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:16:46 np0005486808 nova_compute[259627]: 2025-10-14 09:16:46.766 2 DEBUG oslo_concurrency.processutils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.260s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:16:46 np0005486808 nova_compute[259627]: 2025-10-14 09:16:46.839 2 DEBUG nova.storage.rbd_utils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] resizing rbd image 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:16:46 np0005486808 nova_compute[259627]: 2025-10-14 09:16:46.942 2 DEBUG nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:16:46 np0005486808 nova_compute[259627]: 2025-10-14 09:16:46.943 2 DEBUG nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Ensure instance console log exists: /var/lib/nova/instances/7eb647bc-75e4-4d38-aaa4-67570c4713f9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:16:46 np0005486808 nova_compute[259627]: 2025-10-14 09:16:46.944 2 DEBUG oslo_concurrency.lockutils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:16:46 np0005486808 nova_compute[259627]: 2025-10-14 09:16:46.944 2 DEBUG oslo_concurrency.lockutils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:16:46 np0005486808 nova_compute[259627]: 2025-10-14 09:16:46.944 2 DEBUG oslo_concurrency.lockutils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:16:46 np0005486808 nova_compute[259627]: 2025-10-14 09:16:46.946 2 DEBUG nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:08Z,direct_url=<?>,disk_format='qcow2',id=e2368e3e-f504-40e6-a9d3-67df18c845bb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:10Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:16:46 np0005486808 nova_compute[259627]: 2025-10-14 09:16:46.951 2 WARNING nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Oct 14 05:16:46 np0005486808 nova_compute[259627]: 2025-10-14 09:16:46.958 2 DEBUG nova.virt.libvirt.host [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:16:46 np0005486808 nova_compute[259627]: 2025-10-14 09:16:46.958 2 DEBUG nova.virt.libvirt.host [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:16:46 np0005486808 nova_compute[259627]: 2025-10-14 09:16:46.962 2 DEBUG nova.virt.libvirt.host [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:16:46 np0005486808 nova_compute[259627]: 2025-10-14 09:16:46.963 2 DEBUG nova.virt.libvirt.host [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:16:46 np0005486808 nova_compute[259627]: 2025-10-14 09:16:46.963 2 DEBUG nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:16:46 np0005486808 nova_compute[259627]: 2025-10-14 09:16:46.964 2 DEBUG nova.virt.hardware [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:08Z,direct_url=<?>,disk_format='qcow2',id=e2368e3e-f504-40e6-a9d3-67df18c845bb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:10Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:16:46 np0005486808 nova_compute[259627]: 2025-10-14 09:16:46.964 2 DEBUG nova.virt.hardware [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:16:46 np0005486808 nova_compute[259627]: 2025-10-14 09:16:46.964 2 DEBUG nova.virt.hardware [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:16:46 np0005486808 nova_compute[259627]: 2025-10-14 09:16:46.965 2 DEBUG nova.virt.hardware [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:16:46 np0005486808 nova_compute[259627]: 2025-10-14 09:16:46.965 2 DEBUG nova.virt.hardware [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:16:46 np0005486808 nova_compute[259627]: 2025-10-14 09:16:46.965 2 DEBUG nova.virt.hardware [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 05:16:46 np0005486808 nova_compute[259627]: 2025-10-14 09:16:46.966 2 DEBUG nova.virt.hardware [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 05:16:46 np0005486808 nova_compute[259627]: 2025-10-14 09:16:46.966 2 DEBUG nova.virt.hardware [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 05:16:46 np0005486808 nova_compute[259627]: 2025-10-14 09:16:46.966 2 DEBUG nova.virt.hardware [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 05:16:46 np0005486808 nova_compute[259627]: 2025-10-14 09:16:46.966 2 DEBUG nova.virt.hardware [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 05:16:46 np0005486808 nova_compute[259627]: 2025-10-14 09:16:46.967 2 DEBUG nova.virt.hardware [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 05:16:46 np0005486808 nova_compute[259627]: 2025-10-14 09:16:46.967 2 DEBUG nova.objects.instance [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lazy-loading 'vcpu_model' on Instance uuid 7eb647bc-75e4-4d38-aaa4-67570c4713f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:16:46 np0005486808 nova_compute[259627]: 2025-10-14 09:16:46.969 2 DEBUG nova.network.neutron [req-cc019e47-f578-4b8b-82a3-59fde7bd2033 req-1f1ceb90-64e7-44a5-989a-0cf14a977cff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Updated VIF entry in instance network info cache for port ae1bda9b-5957-43de-bcf7-97164f008565. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 05:16:46 np0005486808 nova_compute[259627]: 2025-10-14 09:16:46.970 2 DEBUG nova.network.neutron [req-cc019e47-f578-4b8b-82a3-59fde7bd2033 req-1f1ceb90-64e7-44a5-989a-0cf14a977cff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Updating instance_info_cache with network_info: [{"id": "ae1bda9b-5957-43de-bcf7-97164f008565", "address": "fa:16:3e:b7:a7:bb", "network": {"id": "bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d", "bridge": "br-int", "label": "tempest-network-smoke--1209319139", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1bda9b-59", "ovs_interfaceid": "ae1bda9b-5957-43de-bcf7-97164f008565", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 05:16:47 np0005486808 nova_compute[259627]: 2025-10-14 09:16:47.004 2 DEBUG oslo_concurrency.processutils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:16:47 np0005486808 nova_compute[259627]: 2025-10-14 09:16:47.046 2 DEBUG oslo_concurrency.lockutils [req-cc019e47-f578-4b8b-82a3-59fde7bd2033 req-1f1ceb90-64e7-44a5-989a-0cf14a977cff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-8e0fe601-33b8-44f0-8452-d821825b9176" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 05:16:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:16:47 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3184964300' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:16:47 np0005486808 nova_compute[259627]: 2025-10-14 09:16:47.476 2 DEBUG oslo_concurrency.processutils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:16:47 np0005486808 nova_compute[259627]: 2025-10-14 09:16:47.500 2 DEBUG nova.storage.rbd_utils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] rbd image 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:16:47 np0005486808 nova_compute[259627]: 2025-10-14 09:16:47.505 2 DEBUG oslo_concurrency.processutils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:16:47 np0005486808 nova_compute[259627]: 2025-10-14 09:16:47.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:16:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:16:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:16:47 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3475672375' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:16:48 np0005486808 nova_compute[259627]: 2025-10-14 09:16:48.004 2 DEBUG oslo_concurrency.processutils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:16:48 np0005486808 nova_compute[259627]: 2025-10-14 09:16:48.008 2 DEBUG nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:16:48 np0005486808 nova_compute[259627]:  <uuid>7eb647bc-75e4-4d38-aaa4-67570c4713f9</uuid>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:  <name>instance-0000006c</name>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:16:48 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:      <nova:name>tempest-ServerShowV257Test-server-1454600294</nova:name>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:16:46</nova:creationTime>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:16:48 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:        <nova:user uuid="c832ba0d968c4ca4a20d386152e5b5bb">tempest-ServerShowV257Test-1068269928-project-member</nova:user>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:        <nova:project uuid="4ba227fc561b4e9cb9d86e1727015a7d">tempest-ServerShowV257Test-1068269928</nova:project>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="e2368e3e-f504-40e6-a9d3-67df18c845bb"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:      <nova:ports/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:      <entry name="serial">7eb647bc-75e4-4d38-aaa4-67570c4713f9</entry>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:      <entry name="uuid">7eb647bc-75e4-4d38-aaa4-67570c4713f9</entry>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:16:48 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk">
Oct 14 05:16:48 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:16:48 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:16:48 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk.config">
Oct 14 05:16:48 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:16:48 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:16:48 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/7eb647bc-75e4-4d38-aaa4-67570c4713f9/console.log" append="off"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:16:48 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:16:48 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:16:48 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:16:48 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:16:48 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 05:16:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1884: 305 pgs: 305 active+clean; 167 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 140 op/s
Oct 14 05:16:48 np0005486808 nova_compute[259627]: 2025-10-14 09:16:48.077 2 DEBUG nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 05:16:48 np0005486808 nova_compute[259627]: 2025-10-14 09:16:48.079 2 DEBUG nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 05:16:48 np0005486808 nova_compute[259627]: 2025-10-14 09:16:48.080 2 INFO nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Using config drive
Oct 14 05:16:48 np0005486808 nova_compute[259627]: 2025-10-14 09:16:48.117 2 DEBUG nova.storage.rbd_utils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] rbd image 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:16:48 np0005486808 nova_compute[259627]: 2025-10-14 09:16:48.157 2 DEBUG nova.objects.instance [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lazy-loading 'ec2_ids' on Instance uuid 7eb647bc-75e4-4d38-aaa4-67570c4713f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:16:48 np0005486808 nova_compute[259627]: 2025-10-14 09:16:48.202 2 DEBUG nova.objects.instance [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lazy-loading 'keypairs' on Instance uuid 7eb647bc-75e4-4d38-aaa4-67570c4713f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:16:48 np0005486808 nova_compute[259627]: 2025-10-14 09:16:48.520 2 INFO nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Creating config drive at /var/lib/nova/instances/7eb647bc-75e4-4d38-aaa4-67570c4713f9/disk.config
Oct 14 05:16:48 np0005486808 nova_compute[259627]: 2025-10-14 09:16:48.534 2 DEBUG oslo_concurrency.processutils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7eb647bc-75e4-4d38-aaa4-67570c4713f9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj44_5_s1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:16:48 np0005486808 nova_compute[259627]: 2025-10-14 09:16:48.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:16:48 np0005486808 nova_compute[259627]: 2025-10-14 09:16:48.708 2 DEBUG oslo_concurrency.processutils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7eb647bc-75e4-4d38-aaa4-67570c4713f9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj44_5_s1" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:16:48 np0005486808 nova_compute[259627]: 2025-10-14 09:16:48.743 2 DEBUG nova.storage.rbd_utils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] rbd image 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:16:48 np0005486808 nova_compute[259627]: 2025-10-14 09:16:48.750 2 DEBUG oslo_concurrency.processutils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7eb647bc-75e4-4d38-aaa4-67570c4713f9/disk.config 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:16:48 np0005486808 nova_compute[259627]: 2025-10-14 09:16:48.934 2 DEBUG oslo_concurrency.processutils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7eb647bc-75e4-4d38-aaa4-67570c4713f9/disk.config 7eb647bc-75e4-4d38-aaa4-67570c4713f9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.185s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:16:48 np0005486808 nova_compute[259627]: 2025-10-14 09:16:48.936 2 INFO nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Deleting local config drive /var/lib/nova/instances/7eb647bc-75e4-4d38-aaa4-67570c4713f9/disk.config because it was imported into RBD.
Oct 14 05:16:49 np0005486808 systemd-machined[214636]: New machine qemu-138-instance-0000006c.
Oct 14 05:16:49 np0005486808 systemd[1]: Started Virtual Machine qemu-138-instance-0000006c.
Oct 14 05:16:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1885: 305 pgs: 305 active+clean; 167 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 140 op/s
Oct 14 05:16:50 np0005486808 nova_compute[259627]: 2025-10-14 09:16:50.291 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for 7eb647bc-75e4-4d38-aaa4-67570c4713f9 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 14 05:16:50 np0005486808 nova_compute[259627]: 2025-10-14 09:16:50.293 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433410.2914808, 7eb647bc-75e4-4d38-aaa4-67570c4713f9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:16:50 np0005486808 nova_compute[259627]: 2025-10-14 09:16:50.293 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] VM Resumed (Lifecycle Event)
Oct 14 05:16:50 np0005486808 nova_compute[259627]: 2025-10-14 09:16:50.297 2 DEBUG nova.compute.manager [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 05:16:50 np0005486808 nova_compute[259627]: 2025-10-14 09:16:50.297 2 DEBUG nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 05:16:50 np0005486808 nova_compute[259627]: 2025-10-14 09:16:50.302 2 INFO nova.virt.libvirt.driver [-] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Instance spawned successfully.
Oct 14 05:16:50 np0005486808 nova_compute[259627]: 2025-10-14 09:16:50.303 2 DEBUG nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 05:16:50 np0005486808 nova_compute[259627]: 2025-10-14 09:16:50.327 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:16:50 np0005486808 nova_compute[259627]: 2025-10-14 09:16:50.334 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 05:16:50 np0005486808 nova_compute[259627]: 2025-10-14 09:16:50.339 2 DEBUG nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:16:50 np0005486808 nova_compute[259627]: 2025-10-14 09:16:50.339 2 DEBUG nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:16:50 np0005486808 nova_compute[259627]: 2025-10-14 09:16:50.340 2 DEBUG nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:16:50 np0005486808 nova_compute[259627]: 2025-10-14 09:16:50.340 2 DEBUG nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:16:50 np0005486808 nova_compute[259627]: 2025-10-14 09:16:50.341 2 DEBUG nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:16:50 np0005486808 nova_compute[259627]: 2025-10-14 09:16:50.341 2 DEBUG nova.virt.libvirt.driver [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:16:50 np0005486808 nova_compute[259627]: 2025-10-14 09:16:50.392 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 14 05:16:50 np0005486808 nova_compute[259627]: 2025-10-14 09:16:50.393 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433410.2931767, 7eb647bc-75e4-4d38-aaa4-67570c4713f9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:16:50 np0005486808 nova_compute[259627]: 2025-10-14 09:16:50.394 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] VM Started (Lifecycle Event)
Oct 14 05:16:50 np0005486808 nova_compute[259627]: 2025-10-14 09:16:50.425 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:16:50 np0005486808 nova_compute[259627]: 2025-10-14 09:16:50.430 2 DEBUG nova.compute.manager [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:16:50 np0005486808 nova_compute[259627]: 2025-10-14 09:16:50.431 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 05:16:50 np0005486808 nova_compute[259627]: 2025-10-14 09:16:50.470 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 14 05:16:50 np0005486808 nova_compute[259627]: 2025-10-14 09:16:50.517 2 DEBUG oslo_concurrency.lockutils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:16:50 np0005486808 nova_compute[259627]: 2025-10-14 09:16:50.518 2 DEBUG oslo_concurrency.lockutils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:16:50 np0005486808 nova_compute[259627]: 2025-10-14 09:16:50.519 2 DEBUG nova.objects.instance [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 14 05:16:50 np0005486808 nova_compute[259627]: 2025-10-14 09:16:50.602 2 DEBUG oslo_concurrency.lockutils [None req-868ff88d-0ddf-42f9-af27-b5972da5a4dd c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:16:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1886: 305 pgs: 305 active+clean; 138 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.3 MiB/s wr, 210 op/s
Oct 14 05:16:52 np0005486808 nova_compute[259627]: 2025-10-14 09:16:52.073 2 DEBUG oslo_concurrency.lockutils [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Acquiring lock "7eb647bc-75e4-4d38-aaa4-67570c4713f9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:16:52 np0005486808 nova_compute[259627]: 2025-10-14 09:16:52.074 2 DEBUG oslo_concurrency.lockutils [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lock "7eb647bc-75e4-4d38-aaa4-67570c4713f9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:16:52 np0005486808 nova_compute[259627]: 2025-10-14 09:16:52.075 2 DEBUG oslo_concurrency.lockutils [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Acquiring lock "7eb647bc-75e4-4d38-aaa4-67570c4713f9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:16:52 np0005486808 nova_compute[259627]: 2025-10-14 09:16:52.075 2 DEBUG oslo_concurrency.lockutils [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lock "7eb647bc-75e4-4d38-aaa4-67570c4713f9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:16:52 np0005486808 nova_compute[259627]: 2025-10-14 09:16:52.076 2 DEBUG oslo_concurrency.lockutils [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lock "7eb647bc-75e4-4d38-aaa4-67570c4713f9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:16:52 np0005486808 nova_compute[259627]: 2025-10-14 09:16:52.078 2 INFO nova.compute.manager [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Terminating instance
Oct 14 05:16:52 np0005486808 nova_compute[259627]: 2025-10-14 09:16:52.080 2 DEBUG oslo_concurrency.lockutils [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Acquiring lock "refresh_cache-7eb647bc-75e4-4d38-aaa4-67570c4713f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 05:16:52 np0005486808 nova_compute[259627]: 2025-10-14 09:16:52.080 2 DEBUG oslo_concurrency.lockutils [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Acquired lock "refresh_cache-7eb647bc-75e4-4d38-aaa4-67570c4713f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 05:16:52 np0005486808 nova_compute[259627]: 2025-10-14 09:16:52.081 2 DEBUG nova.network.neutron [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 05:16:52 np0005486808 nova_compute[259627]: 2025-10-14 09:16:52.366 2 DEBUG nova.network.neutron [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 05:16:52 np0005486808 ovn_controller[152662]: 2025-10-14T09:16:52Z|00112|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b7:a7:bb 10.100.0.9
Oct 14 05:16:52 np0005486808 ovn_controller[152662]: 2025-10-14T09:16:52Z|00113|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b7:a7:bb 10.100.0.9
Oct 14 05:16:52 np0005486808 podman[366407]: 2025-10-14 09:16:52.680409352 +0000 UTC m=+0.084507094 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 14 05:16:52 np0005486808 podman[366408]: 2025-10-14 09:16:52.684474352 +0000 UTC m=+0.088537453 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 05:16:52 np0005486808 nova_compute[259627]: 2025-10-14 09:16:52.767 2 DEBUG nova.network.neutron [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 05:16:52 np0005486808 nova_compute[259627]: 2025-10-14 09:16:52.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:16:52 np0005486808 nova_compute[259627]: 2025-10-14 09:16:52.780 2 DEBUG oslo_concurrency.lockutils [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Releasing lock "refresh_cache-7eb647bc-75e4-4d38-aaa4-67570c4713f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 05:16:52 np0005486808 nova_compute[259627]: 2025-10-14 09:16:52.780 2 DEBUG nova.compute.manager [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 05:16:52 np0005486808 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d0000006c.scope: Deactivated successfully.
Oct 14 05:16:52 np0005486808 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d0000006c.scope: Consumed 3.689s CPU time.
Oct 14 05:16:52 np0005486808 systemd-machined[214636]: Machine qemu-138-instance-0000006c terminated.
Oct 14 05:16:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:16:53 np0005486808 nova_compute[259627]: 2025-10-14 09:16:53.014 2 INFO nova.virt.libvirt.driver [-] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Instance destroyed successfully.
Oct 14 05:16:53 np0005486808 nova_compute[259627]: 2025-10-14 09:16:53.015 2 DEBUG nova.objects.instance [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lazy-loading 'resources' on Instance uuid 7eb647bc-75e4-4d38-aaa4-67570c4713f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:16:53 np0005486808 nova_compute[259627]: 2025-10-14 09:16:53.491 2 INFO nova.virt.libvirt.driver [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Deleting instance files /var/lib/nova/instances/7eb647bc-75e4-4d38-aaa4-67570c4713f9_del
Oct 14 05:16:53 np0005486808 nova_compute[259627]: 2025-10-14 09:16:53.492 2 INFO nova.virt.libvirt.driver [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Deletion of /var/lib/nova/instances/7eb647bc-75e4-4d38-aaa4-67570c4713f9_del complete
Oct 14 05:16:53 np0005486808 nova_compute[259627]: 2025-10-14 09:16:53.542 2 INFO nova.compute.manager [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Took 0.76 seconds to destroy the instance on the hypervisor.
Oct 14 05:16:53 np0005486808 nova_compute[259627]: 2025-10-14 09:16:53.543 2 DEBUG oslo.service.loopingcall [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 05:16:53 np0005486808 nova_compute[259627]: 2025-10-14 09:16:53.543 2 DEBUG nova.compute.manager [-] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 05:16:53 np0005486808 nova_compute[259627]: 2025-10-14 09:16:53.543 2 DEBUG nova.network.neutron [-] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 05:16:53 np0005486808 nova_compute[259627]: 2025-10-14 09:16:53.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:16:53 np0005486808 nova_compute[259627]: 2025-10-14 09:16:53.965 2 DEBUG nova.network.neutron [-] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 05:16:53 np0005486808 nova_compute[259627]: 2025-10-14 09:16:53.980 2 DEBUG nova.network.neutron [-] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 05:16:53 np0005486808 nova_compute[259627]: 2025-10-14 09:16:53.996 2 INFO nova.compute.manager [-] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Took 0.45 seconds to deallocate network for instance.
Oct 14 05:16:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1887: 305 pgs: 305 active+clean; 138 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.5 MiB/s wr, 164 op/s
Oct 14 05:16:54 np0005486808 nova_compute[259627]: 2025-10-14 09:16:54.050 2 DEBUG oslo_concurrency.lockutils [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:16:54 np0005486808 nova_compute[259627]: 2025-10-14 09:16:54.051 2 DEBUG oslo_concurrency.lockutils [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:16:54 np0005486808 nova_compute[259627]: 2025-10-14 09:16:54.126 2 DEBUG oslo_concurrency.processutils [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:16:54 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:16:54 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/495703736' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:16:54 np0005486808 nova_compute[259627]: 2025-10-14 09:16:54.615 2 DEBUG oslo_concurrency.processutils [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:16:54 np0005486808 nova_compute[259627]: 2025-10-14 09:16:54.621 2 DEBUG nova.compute.provider_tree [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 05:16:54 np0005486808 nova_compute[259627]: 2025-10-14 09:16:54.647 2 DEBUG nova.scheduler.client.report [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 05:16:54 np0005486808 nova_compute[259627]: 2025-10-14 09:16:54.679 2 DEBUG oslo_concurrency.lockutils [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:16:54 np0005486808 nova_compute[259627]: 2025-10-14 09:16:54.719 2 INFO nova.scheduler.client.report [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Deleted allocations for instance 7eb647bc-75e4-4d38-aaa4-67570c4713f9
Oct 14 05:16:54 np0005486808 nova_compute[259627]: 2025-10-14 09:16:54.812 2 DEBUG oslo_concurrency.lockutils [None req-742821fc-e3d9-408f-b83a-0ddf387afcc3 c832ba0d968c4ca4a20d386152e5b5bb 4ba227fc561b4e9cb9d86e1727015a7d - - default default] Lock "7eb647bc-75e4-4d38-aaa4-67570c4713f9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:16:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1888: 305 pgs: 305 active+clean; 121 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 4.3 MiB/s wr, 310 op/s
Oct 14 05:16:56 np0005486808 nova_compute[259627]: 2025-10-14 09:16:56.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:16:57 np0005486808 nova_compute[259627]: 2025-10-14 09:16:57.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:16:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:16:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1889: 305 pgs: 305 active+clean; 121 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 216 op/s
Oct 14 05:16:58 np0005486808 nova_compute[259627]: 2025-10-14 09:16:58.519 2 INFO nova.compute.manager [None req-d2f3b051-6622-4e7b-878e-04209fdc4ea0 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Get console output
Oct 14 05:16:58 np0005486808 nova_compute[259627]: 2025-10-14 09:16:58.533 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 14 05:16:58 np0005486808 nova_compute[259627]: 2025-10-14 09:16:58.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:16:58 np0005486808 nova_compute[259627]: 2025-10-14 09:16:58.937 2 DEBUG nova.objects.instance [None req-1e0814ea-774c-492f-8c98-e7642f5bc78f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'pci_devices' on Instance uuid 8e0fe601-33b8-44f0-8452-d821825b9176 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:16:58 np0005486808 nova_compute[259627]: 2025-10-14 09:16:58.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:16:58 np0005486808 nova_compute[259627]: 2025-10-14 09:16:58.966 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433418.966416, 8e0fe601-33b8-44f0-8452-d821825b9176 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:16:58 np0005486808 nova_compute[259627]: 2025-10-14 09:16:58.967 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] VM Paused (Lifecycle Event)
Oct 14 05:16:58 np0005486808 nova_compute[259627]: 2025-10-14 09:16:58.992 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:16:58 np0005486808 nova_compute[259627]: 2025-10-14 09:16:58.998 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 05:16:59 np0005486808 nova_compute[259627]: 2025-10-14 09:16:59.020 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] During sync_power_state the instance has a pending task (suspending). Skip.
Oct 14 05:16:59 np0005486808 kernel: tapae1bda9b-59 (unregistering): left promiscuous mode
Oct 14 05:16:59 np0005486808 NetworkManager[44885]: <info>  [1760433419.6534] device (tapae1bda9b-59): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:16:59 np0005486808 nova_compute[259627]: 2025-10-14 09:16:59.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:16:59 np0005486808 ovn_controller[152662]: 2025-10-14T09:16:59Z|01146|binding|INFO|Releasing lport ae1bda9b-5957-43de-bcf7-97164f008565 from this chassis (sb_readonly=0)
Oct 14 05:16:59 np0005486808 ovn_controller[152662]: 2025-10-14T09:16:59Z|01147|binding|INFO|Setting lport ae1bda9b-5957-43de-bcf7-97164f008565 down in Southbound
Oct 14 05:16:59 np0005486808 ovn_controller[152662]: 2025-10-14T09:16:59Z|01148|binding|INFO|Removing iface tapae1bda9b-59 ovn-installed in OVS
Oct 14 05:16:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:59.676 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:a7:bb 10.100.0.9'], port_security=['fa:16:3e:b7:a7:bb 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '8e0fe601-33b8-44f0-8452-d821825b9176', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d24993a343a425dbddac7e32be0c86b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6f08b4d4-f298-4d7f-881f-65fc5ca17fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.201'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7171c907-a00d-4336-9a3c-18b14ff390bf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=ae1bda9b-5957-43de-bcf7-97164f008565) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 05:16:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:59.681 162547 INFO neutron.agent.ovn.metadata.agent [-] Port ae1bda9b-5957-43de-bcf7-97164f008565 in datapath bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d unbound from our chassis
Oct 14 05:16:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:59.684 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 05:16:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:59.686 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[876435db-c1f3-4526-a89c-05c8cd12d1ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:16:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:16:59.688 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d namespace which is not needed anymore
Oct 14 05:16:59 np0005486808 nova_compute[259627]: 2025-10-14 09:16:59.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:16:59 np0005486808 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Oct 14 05:16:59 np0005486808 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d0000006d.scope: Consumed 13.158s CPU time.
Oct 14 05:16:59 np0005486808 systemd-machined[214636]: Machine qemu-137-instance-0000006d terminated.
Oct 14 05:16:59 np0005486808 nova_compute[259627]: 2025-10-14 09:16:59.819 2 DEBUG nova.compute.manager [None req-1e0814ea-774c-492f-8c98-e7642f5bc78f e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:16:59 np0005486808 neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d[366025]: [NOTICE]   (366029) : haproxy version is 2.8.14-c23fe91
Oct 14 05:16:59 np0005486808 neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d[366025]: [NOTICE]   (366029) : path to executable is /usr/sbin/haproxy
Oct 14 05:16:59 np0005486808 neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d[366025]: [WARNING]  (366029) : Exiting Master process...
Oct 14 05:16:59 np0005486808 neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d[366025]: [WARNING]  (366029) : Exiting Master process...
Oct 14 05:16:59 np0005486808 neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d[366025]: [ALERT]    (366029) : Current worker (366031) exited with code 143 (Terminated)
Oct 14 05:16:59 np0005486808 neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d[366025]: [WARNING]  (366029) : All workers exited. Exiting... (0)
Oct 14 05:16:59 np0005486808 systemd[1]: libpod-b270055ce7693e9d9613129d58bad4977135ab0f0f3886a78430eb1d13c2bf3a.scope: Deactivated successfully.
Oct 14 05:16:59 np0005486808 podman[366521]: 2025-10-14 09:16:59.887966192 +0000 UTC m=+0.059836125 container died b270055ce7693e9d9613129d58bad4977135ab0f0f3886a78430eb1d13c2bf3a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 14 05:16:59 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b270055ce7693e9d9613129d58bad4977135ab0f0f3886a78430eb1d13c2bf3a-userdata-shm.mount: Deactivated successfully.
Oct 14 05:16:59 np0005486808 systemd[1]: var-lib-containers-storage-overlay-4e4b02960d8aa9c8b5d05fa99183f05f82538622c06a41271790d657e5edb93b-merged.mount: Deactivated successfully.
Oct 14 05:16:59 np0005486808 podman[366521]: 2025-10-14 09:16:59.946736121 +0000 UTC m=+0.118606084 container cleanup b270055ce7693e9d9613129d58bad4977135ab0f0f3886a78430eb1d13c2bf3a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct 14 05:16:59 np0005486808 systemd[1]: libpod-conmon-b270055ce7693e9d9613129d58bad4977135ab0f0f3886a78430eb1d13c2bf3a.scope: Deactivated successfully.
Oct 14 05:17:00 np0005486808 podman[366559]: 2025-10-14 09:17:00.020916119 +0000 UTC m=+0.047063851 container remove b270055ce7693e9d9613129d58bad4977135ab0f0f3886a78430eb1d13c2bf3a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:17:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1890: 305 pgs: 305 active+clean; 121 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 216 op/s
Oct 14 05:17:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:00.028 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[10357666-6cb3-43e2-8bcc-408d3ded7743]: (4, ('Tue Oct 14 09:16:59 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d (b270055ce7693e9d9613129d58bad4977135ab0f0f3886a78430eb1d13c2bf3a)\nb270055ce7693e9d9613129d58bad4977135ab0f0f3886a78430eb1d13c2bf3a\nTue Oct 14 09:16:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d (b270055ce7693e9d9613129d58bad4977135ab0f0f3886a78430eb1d13c2bf3a)\nb270055ce7693e9d9613129d58bad4977135ab0f0f3886a78430eb1d13c2bf3a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:00.030 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2be7e4b0-b45a-4bf6-98e8-a208354b3d24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:00.032 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd7e2e81-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:17:00 np0005486808 nova_compute[259627]: 2025-10-14 09:17:00.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:00 np0005486808 kernel: tapbd7e2e81-90: left promiscuous mode
Oct 14 05:17:00 np0005486808 nova_compute[259627]: 2025-10-14 09:17:00.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:00.078 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dd449731-3a89-48be-80bc-44ba66f1d8de]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:00.103 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[03eaec65-863c-4ffb-b179-d38d6e037390]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:00.105 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[599a99ce-edb4-4673-a015-46196fac4f47]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:00 np0005486808 nova_compute[259627]: 2025-10-14 09:17:00.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:00.124 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7ac9afd2-0f20-46b6-ac6f-4e9cf0bc73e2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 717771, 'reachable_time': 23324, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366577, 'error': None, 'target': 'ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:00.127 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:17:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:00.127 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[15f670c6-53c5-4c93-a7e4-270a76895636]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:00 np0005486808 systemd[1]: run-netns-ovnmeta\x2dbd7e2e81\x2d9355\x2d4e48\x2dbcd1\x2d3cdb592b9c9d.mount: Deactivated successfully.
Oct 14 05:17:00 np0005486808 nova_compute[259627]: 2025-10-14 09:17:00.255 2 DEBUG nova.compute.manager [req-5ed6f09e-5016-49bc-a9c9-518fca2a7b92 req-62d54962-a57c-4b35-a914-ec31d7ba5aff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Received event network-vif-unplugged-ae1bda9b-5957-43de-bcf7-97164f008565 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:17:00 np0005486808 nova_compute[259627]: 2025-10-14 09:17:00.255 2 DEBUG oslo_concurrency.lockutils [req-5ed6f09e-5016-49bc-a9c9-518fca2a7b92 req-62d54962-a57c-4b35-a914-ec31d7ba5aff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:17:00 np0005486808 nova_compute[259627]: 2025-10-14 09:17:00.256 2 DEBUG oslo_concurrency.lockutils [req-5ed6f09e-5016-49bc-a9c9-518fca2a7b92 req-62d54962-a57c-4b35-a914-ec31d7ba5aff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:17:00 np0005486808 nova_compute[259627]: 2025-10-14 09:17:00.256 2 DEBUG oslo_concurrency.lockutils [req-5ed6f09e-5016-49bc-a9c9-518fca2a7b92 req-62d54962-a57c-4b35-a914-ec31d7ba5aff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:17:00 np0005486808 nova_compute[259627]: 2025-10-14 09:17:00.256 2 DEBUG nova.compute.manager [req-5ed6f09e-5016-49bc-a9c9-518fca2a7b92 req-62d54962-a57c-4b35-a914-ec31d7ba5aff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] No waiting events found dispatching network-vif-unplugged-ae1bda9b-5957-43de-bcf7-97164f008565 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:17:00 np0005486808 nova_compute[259627]: 2025-10-14 09:17:00.257 2 WARNING nova.compute.manager [req-5ed6f09e-5016-49bc-a9c9-518fca2a7b92 req-62d54962-a57c-4b35-a914-ec31d7ba5aff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Received unexpected event network-vif-unplugged-ae1bda9b-5957-43de-bcf7-97164f008565 for instance with vm_state suspended and task_state None.#033[00m
Oct 14 05:17:00 np0005486808 nova_compute[259627]: 2025-10-14 09:17:00.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:01.176 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:17:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:01.177 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 14 05:17:01 np0005486808 nova_compute[259627]: 2025-10-14 09:17:01.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1891: 305 pgs: 305 active+clean; 121 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 216 op/s
Oct 14 05:17:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:17:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:17:02 np0005486808 nova_compute[259627]: 2025-10-14 09:17:02.748 2 DEBUG nova.compute.manager [req-49deeaec-472b-4349-b7db-ecb45ea4735a req-17d7345a-cc2e-48bc-b083-d1cd7f82ac63 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Received event network-vif-plugged-ae1bda9b-5957-43de-bcf7-97164f008565 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:17:02 np0005486808 nova_compute[259627]: 2025-10-14 09:17:02.749 2 DEBUG oslo_concurrency.lockutils [req-49deeaec-472b-4349-b7db-ecb45ea4735a req-17d7345a-cc2e-48bc-b083-d1cd7f82ac63 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:17:02 np0005486808 nova_compute[259627]: 2025-10-14 09:17:02.749 2 DEBUG oslo_concurrency.lockutils [req-49deeaec-472b-4349-b7db-ecb45ea4735a req-17d7345a-cc2e-48bc-b083-d1cd7f82ac63 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:17:02 np0005486808 nova_compute[259627]: 2025-10-14 09:17:02.750 2 DEBUG oslo_concurrency.lockutils [req-49deeaec-472b-4349-b7db-ecb45ea4735a req-17d7345a-cc2e-48bc-b083-d1cd7f82ac63 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:17:02 np0005486808 nova_compute[259627]: 2025-10-14 09:17:02.750 2 DEBUG nova.compute.manager [req-49deeaec-472b-4349-b7db-ecb45ea4735a req-17d7345a-cc2e-48bc-b083-d1cd7f82ac63 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] No waiting events found dispatching network-vif-plugged-ae1bda9b-5957-43de-bcf7-97164f008565 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:17:02 np0005486808 nova_compute[259627]: 2025-10-14 09:17:02.750 2 WARNING nova.compute.manager [req-49deeaec-472b-4349-b7db-ecb45ea4735a req-17d7345a-cc2e-48bc-b083-d1cd7f82ac63 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Received unexpected event network-vif-plugged-ae1bda9b-5957-43de-bcf7-97164f008565 for instance with vm_state suspended and task_state None.#033[00m
Oct 14 05:17:02 np0005486808 nova_compute[259627]: 2025-10-14 09:17:02.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:17:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:17:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:17:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:17:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:17:03 np0005486808 nova_compute[259627]: 2025-10-14 09:17:03.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:03 np0005486808 podman[366580]: 2025-10-14 09:17:03.707359187 +0000 UTC m=+0.103935683 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, 
config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:17:03 np0005486808 podman[366579]: 2025-10-14 09:17:03.756151809 +0000 UTC m=+0.156437206 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 05:17:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1892: 305 pgs: 305 active+clean; 121 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 146 op/s
Oct 14 05:17:04 np0005486808 nova_compute[259627]: 2025-10-14 09:17:04.115 2 INFO nova.compute.manager [None req-f24b0e6c-49d2-4efa-ad31-87963eebc7bd e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Get console output#033[00m
Oct 14 05:17:04 np0005486808 nova_compute[259627]: 2025-10-14 09:17:04.424 2 INFO nova.compute.manager [None req-017a161c-1ea1-42a7-a204-3f545ae0afed e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Resuming#033[00m
Oct 14 05:17:04 np0005486808 nova_compute[259627]: 2025-10-14 09:17:04.425 2 DEBUG nova.objects.instance [None req-017a161c-1ea1-42a7-a204-3f545ae0afed e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'flavor' on Instance uuid 8e0fe601-33b8-44f0-8452-d821825b9176 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:17:04 np0005486808 nova_compute[259627]: 2025-10-14 09:17:04.470 2 DEBUG oslo_concurrency.lockutils [None req-017a161c-1ea1-42a7-a204-3f545ae0afed e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "refresh_cache-8e0fe601-33b8-44f0-8452-d821825b9176" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:17:04 np0005486808 nova_compute[259627]: 2025-10-14 09:17:04.471 2 DEBUG oslo_concurrency.lockutils [None req-017a161c-1ea1-42a7-a204-3f545ae0afed e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquired lock "refresh_cache-8e0fe601-33b8-44f0-8452-d821825b9176" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:17:04 np0005486808 nova_compute[259627]: 2025-10-14 09:17:04.472 2 DEBUG nova.network.neutron [None req-017a161c-1ea1-42a7-a204-3f545ae0afed e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:17:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:05.179 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:17:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1893: 305 pgs: 305 active+clean; 121 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 146 op/s
Oct 14 05:17:06 np0005486808 nova_compute[259627]: 2025-10-14 09:17:06.567 2 DEBUG nova.network.neutron [None req-017a161c-1ea1-42a7-a204-3f545ae0afed e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Updating instance_info_cache with network_info: [{"id": "ae1bda9b-5957-43de-bcf7-97164f008565", "address": "fa:16:3e:b7:a7:bb", "network": {"id": "bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d", "bridge": "br-int", "label": "tempest-network-smoke--1209319139", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1bda9b-59", "ovs_interfaceid": "ae1bda9b-5957-43de-bcf7-97164f008565", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:17:06 np0005486808 nova_compute[259627]: 2025-10-14 09:17:06.595 2 DEBUG oslo_concurrency.lockutils [None req-017a161c-1ea1-42a7-a204-3f545ae0afed e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Releasing lock "refresh_cache-8e0fe601-33b8-44f0-8452-d821825b9176" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:17:06 np0005486808 nova_compute[259627]: 2025-10-14 09:17:06.603 2 DEBUG nova.virt.libvirt.vif [None req-017a161c-1ea1-42a7-a204-3f545ae0afed e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:16:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1974746659',display_name='tempest-TestNetworkAdvancedServerOps-server-1974746659',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1974746659',id=109,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOLo//MvhWo3fWQGHjr+urxX4+VcySarwttwotGnxIUKeYHMPJD2ibaE4RsIINhRhtCN4FfH4X/uMkC3+sHENw8r8re1VC10CtKYPlARtw6F00oKxhFKvSmM/1BEvmDnTA==',key_name='tempest-TestNetworkAdvancedServerOps-261373730',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:16:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='2d24993a343a425dbddac7e32be0c86b',ramdisk_id='',reservation_id='r-ttn68b3m',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-94788416',owner_user_name='tempest-TestNetworkAdvancedServerOps-94788416-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:16:59Z,user_data=None,user_id='e992bcb79c4946a8985e3df25eb216ca',uuid=8e0fe601-33b8-44f0-8452-d821825b9176,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "ae1bda9b-5957-43de-bcf7-97164f008565", "address": "fa:16:3e:b7:a7:bb", "network": {"id": "bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d", "bridge": "br-int", "label": "tempest-network-smoke--1209319139", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1bda9b-59", "ovs_interfaceid": "ae1bda9b-5957-43de-bcf7-97164f008565", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:17:06 np0005486808 nova_compute[259627]: 2025-10-14 09:17:06.604 2 DEBUG nova.network.os_vif_util [None req-017a161c-1ea1-42a7-a204-3f545ae0afed e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converting VIF {"id": "ae1bda9b-5957-43de-bcf7-97164f008565", "address": "fa:16:3e:b7:a7:bb", "network": {"id": "bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d", "bridge": "br-int", "label": "tempest-network-smoke--1209319139", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1bda9b-59", "ovs_interfaceid": "ae1bda9b-5957-43de-bcf7-97164f008565", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:17:06 np0005486808 nova_compute[259627]: 2025-10-14 09:17:06.605 2 DEBUG nova.network.os_vif_util [None req-017a161c-1ea1-42a7-a204-3f545ae0afed e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:a7:bb,bridge_name='br-int',has_traffic_filtering=True,id=ae1bda9b-5957-43de-bcf7-97164f008565,network=Network(bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae1bda9b-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:17:06 np0005486808 nova_compute[259627]: 2025-10-14 09:17:06.606 2 DEBUG os_vif [None req-017a161c-1ea1-42a7-a204-3f545ae0afed e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:a7:bb,bridge_name='br-int',has_traffic_filtering=True,id=ae1bda9b-5957-43de-bcf7-97164f008565,network=Network(bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae1bda9b-59') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:17:06 np0005486808 nova_compute[259627]: 2025-10-14 09:17:06.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:06 np0005486808 nova_compute[259627]: 2025-10-14 09:17:06.608 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:17:06 np0005486808 nova_compute[259627]: 2025-10-14 09:17:06.608 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:17:06 np0005486808 nova_compute[259627]: 2025-10-14 09:17:06.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:06 np0005486808 nova_compute[259627]: 2025-10-14 09:17:06.614 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapae1bda9b-59, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:17:06 np0005486808 nova_compute[259627]: 2025-10-14 09:17:06.614 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapae1bda9b-59, col_values=(('external_ids', {'iface-id': 'ae1bda9b-5957-43de-bcf7-97164f008565', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b7:a7:bb', 'vm-uuid': '8e0fe601-33b8-44f0-8452-d821825b9176'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:17:06 np0005486808 nova_compute[259627]: 2025-10-14 09:17:06.615 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:17:06 np0005486808 nova_compute[259627]: 2025-10-14 09:17:06.616 2 INFO os_vif [None req-017a161c-1ea1-42a7-a204-3f545ae0afed e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:a7:bb,bridge_name='br-int',has_traffic_filtering=True,id=ae1bda9b-5957-43de-bcf7-97164f008565,network=Network(bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae1bda9b-59')#033[00m
Oct 14 05:17:06 np0005486808 nova_compute[259627]: 2025-10-14 09:17:06.652 2 DEBUG nova.objects.instance [None req-017a161c-1ea1-42a7-a204-3f545ae0afed e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'numa_topology' on Instance uuid 8e0fe601-33b8-44f0-8452-d821825b9176 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:17:06 np0005486808 kernel: tapae1bda9b-59: entered promiscuous mode
Oct 14 05:17:06 np0005486808 NetworkManager[44885]: <info>  [1760433426.7463] manager: (tapae1bda9b-59): new Tun device (/org/freedesktop/NetworkManager/Devices/462)
Oct 14 05:17:06 np0005486808 ovn_controller[152662]: 2025-10-14T09:17:06Z|01149|binding|INFO|Claiming lport ae1bda9b-5957-43de-bcf7-97164f008565 for this chassis.
Oct 14 05:17:06 np0005486808 ovn_controller[152662]: 2025-10-14T09:17:06Z|01150|binding|INFO|ae1bda9b-5957-43de-bcf7-97164f008565: Claiming fa:16:3e:b7:a7:bb 10.100.0.9
Oct 14 05:17:06 np0005486808 nova_compute[259627]: 2025-10-14 09:17:06.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:06 np0005486808 nova_compute[259627]: 2025-10-14 09:17:06.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:06 np0005486808 nova_compute[259627]: 2025-10-14 09:17:06.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:06 np0005486808 NetworkManager[44885]: <info>  [1760433426.7655] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/463)
Oct 14 05:17:06 np0005486808 nova_compute[259627]: 2025-10-14 09:17:06.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:06 np0005486808 NetworkManager[44885]: <info>  [1760433426.7669] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/464)
Oct 14 05:17:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:06.769 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:a7:bb 10.100.0.9'], port_security=['fa:16:3e:b7:a7:bb 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '8e0fe601-33b8-44f0-8452-d821825b9176', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d24993a343a425dbddac7e32be0c86b', 'neutron:revision_number': '5', 'neutron:security_group_ids': '6f08b4d4-f298-4d7f-881f-65fc5ca17fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.201'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7171c907-a00d-4336-9a3c-18b14ff390bf, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=ae1bda9b-5957-43de-bcf7-97164f008565) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:17:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:06.771 162547 INFO neutron.agent.ovn.metadata.agent [-] Port ae1bda9b-5957-43de-bcf7-97164f008565 in datapath bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d bound to our chassis#033[00m
Oct 14 05:17:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:06.772 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d#033[00m
Oct 14 05:17:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:06.789 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[605cc3d1-52ab-453d-a42c-ec2a92cd4dbb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:06.790 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbd7e2e81-91 in ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:17:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:06.795 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbd7e2e81-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:17:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:06.795 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dc197c12-312a-491b-8011-bf734a3cd7d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:06.797 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1e86f69c-2c4e-46cf-93cb-e4f31e63475f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:06 np0005486808 systemd-udevd[366635]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:17:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:06.811 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[ec4fd08a-c6e9-4756-910b-85454107f66d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:06 np0005486808 systemd-machined[214636]: New machine qemu-139-instance-0000006d.
Oct 14 05:17:06 np0005486808 NetworkManager[44885]: <info>  [1760433426.8242] device (tapae1bda9b-59): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:17:06 np0005486808 NetworkManager[44885]: <info>  [1760433426.8272] device (tapae1bda9b-59): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:17:06 np0005486808 systemd[1]: Started Virtual Machine qemu-139-instance-0000006d.
Oct 14 05:17:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:06.844 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[91331368-b36d-4e4d-a2b4-9090202ab31e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:06 np0005486808 nova_compute[259627]: 2025-10-14 09:17:06.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:06 np0005486808 nova_compute[259627]: 2025-10-14 09:17:06.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:06 np0005486808 nova_compute[259627]: 2025-10-14 09:17:06.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:06.892 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a3435688-acdc-44ec-adf2-99978d600b24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:06 np0005486808 systemd-udevd[366638]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:17:06 np0005486808 ovn_controller[152662]: 2025-10-14T09:17:06Z|01151|binding|INFO|Setting lport ae1bda9b-5957-43de-bcf7-97164f008565 ovn-installed in OVS
Oct 14 05:17:06 np0005486808 ovn_controller[152662]: 2025-10-14T09:17:06Z|01152|binding|INFO|Setting lport ae1bda9b-5957-43de-bcf7-97164f008565 up in Southbound
Oct 14 05:17:06 np0005486808 nova_compute[259627]: 2025-10-14 09:17:06.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:06 np0005486808 NetworkManager[44885]: <info>  [1760433426.9062] manager: (tapbd7e2e81-90): new Veth device (/org/freedesktop/NetworkManager/Devices/465)
Oct 14 05:17:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:06.902 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ebbe17ea-f208-4f06-ac84-0b14287330ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:06.932 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3a0a563b-7a30-46c9-88d3-93ff09149c8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:06.936 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[14101090-af6b-457e-bf29-e313a83dc026]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:06 np0005486808 NetworkManager[44885]: <info>  [1760433426.9551] device (tapbd7e2e81-90): carrier: link connected
Oct 14 05:17:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:06.960 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[955d662f-7691-4d97-a69f-0b4b43bed870]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:06.975 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6b9e4671-dbb0-42bf-9ca4-2775e15b185a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbd7e2e81-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:8e:de'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 331], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 720571, 'reachable_time': 37118, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366666, 'error': None, 'target': 'ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:06.989 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a57b6e47-a55d-470c-adf2-a8b07185d783]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feac:8ede'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 720571, 'tstamp': 720571}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366667, 'error': None, 'target': 'ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:07.004 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[194336f1-4fa6-4281-8032-a419dacd18e9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbd7e2e81-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:8e:de'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 331], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 720571, 'reachable_time': 37118, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 366668, 'error': None, 'target': 'ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:07.032 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:17:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:07.032 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:17:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:07.032 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:17:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:07.033 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a278e057-834d-47df-a997-b87041b13bec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:07.114 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ee0c144d-941c-4807-a33b-bb722b5c845b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:07.117 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd7e2e81-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:17:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:07.117 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:17:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:07.117 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbd7e2e81-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:17:07 np0005486808 NetworkManager[44885]: <info>  [1760433427.1387] manager: (tapbd7e2e81-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/466)
Oct 14 05:17:07 np0005486808 nova_compute[259627]: 2025-10-14 09:17:07.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:07 np0005486808 kernel: tapbd7e2e81-90: entered promiscuous mode
Oct 14 05:17:07 np0005486808 nova_compute[259627]: 2025-10-14 09:17:07.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:07.143 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbd7e2e81-90, col_values=(('external_ids', {'iface-id': '9945b67f-e925-4ba1-a5f4-5c7846f9de7a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:17:07 np0005486808 ovn_controller[152662]: 2025-10-14T09:17:07Z|01153|binding|INFO|Releasing lport 9945b67f-e925-4ba1-a5f4-5c7846f9de7a from this chassis (sb_readonly=0)
Oct 14 05:17:07 np0005486808 nova_compute[259627]: 2025-10-14 09:17:07.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:07 np0005486808 nova_compute[259627]: 2025-10-14 09:17:07.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:07.180 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:17:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:07.182 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e57bf702-6ea4-4afe-bef4-265ce807130f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:07.183 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:17:07 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:17:07 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:17:07 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d
Oct 14 05:17:07 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:17:07 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:17:07 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:17:07 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d.pid.haproxy
Oct 14 05:17:07 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:17:07 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:17:07 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:17:07 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:17:07 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:17:07 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:17:07 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:17:07 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:17:07 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:17:07 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:17:07 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:17:07 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:17:07 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:17:07 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:17:07 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:17:07 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:17:07 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:17:07 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:17:07 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:17:07 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:17:07 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d
Oct 14 05:17:07 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:17:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:07.186 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d', 'env', 'PROCESS_TAG=haproxy-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:17:07 np0005486808 podman[366742]: 2025-10-14 09:17:07.613913378 +0000 UTC m=+0.062660595 container create 5c785f668b8df0aeabe5b1a8cb5d072545c7a4ecaf35d07d25bec14c699b76a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009)
Oct 14 05:17:07 np0005486808 systemd[1]: Started libpod-conmon-5c785f668b8df0aeabe5b1a8cb5d072545c7a4ecaf35d07d25bec14c699b76a5.scope.
Oct 14 05:17:07 np0005486808 podman[366742]: 2025-10-14 09:17:07.579697805 +0000 UTC m=+0.028445112 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:17:07 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:17:07 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae44bdbb224e73e78d9bd19aa707268d3d53d52647578ee723bd9e3305ab7185/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:17:07 np0005486808 podman[366742]: 2025-10-14 09:17:07.70610198 +0000 UTC m=+0.154849217 container init 5c785f668b8df0aeabe5b1a8cb5d072545c7a4ecaf35d07d25bec14c699b76a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Oct 14 05:17:07 np0005486808 podman[366742]: 2025-10-14 09:17:07.711410211 +0000 UTC m=+0.160157428 container start 5c785f668b8df0aeabe5b1a8cb5d072545c7a4ecaf35d07d25bec14c699b76a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009)
Oct 14 05:17:07 np0005486808 neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d[366757]: [NOTICE]   (366761) : New worker (366763) forked
Oct 14 05:17:07 np0005486808 neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d[366757]: [NOTICE]   (366761) : Loading success.
Oct 14 05:17:07 np0005486808 nova_compute[259627]: 2025-10-14 09:17:07.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:17:07 np0005486808 nova_compute[259627]: 2025-10-14 09:17:07.893 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for 8e0fe601-33b8-44f0-8452-d821825b9176 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct 14 05:17:07 np0005486808 nova_compute[259627]: 2025-10-14 09:17:07.893 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433427.8927, 8e0fe601-33b8-44f0-8452-d821825b9176 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:17:07 np0005486808 nova_compute[259627]: 2025-10-14 09:17:07.894 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] VM Started (Lifecycle Event)#033[00m
Oct 14 05:17:07 np0005486808 nova_compute[259627]: 2025-10-14 09:17:07.922 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:17:07 np0005486808 nova_compute[259627]: 2025-10-14 09:17:07.949 2 DEBUG nova.compute.manager [None req-017a161c-1ea1-42a7-a204-3f545ae0afed e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:17:07 np0005486808 nova_compute[259627]: 2025-10-14 09:17:07.950 2 DEBUG nova.objects.instance [None req-017a161c-1ea1-42a7-a204-3f545ae0afed e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'pci_devices' on Instance uuid 8e0fe601-33b8-44f0-8452-d821825b9176 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:17:07 np0005486808 nova_compute[259627]: 2025-10-14 09:17:07.954 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:17:07 np0005486808 nova_compute[259627]: 2025-10-14 09:17:07.970 2 INFO nova.virt.libvirt.driver [-] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Instance running successfully.#033[00m
Oct 14 05:17:07 np0005486808 virtqemud[259351]: argument unsupported: QEMU guest agent is not configured
Oct 14 05:17:07 np0005486808 nova_compute[259627]: 2025-10-14 09:17:07.971 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Oct 14 05:17:07 np0005486808 nova_compute[259627]: 2025-10-14 09:17:07.972 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433427.8967888, 8e0fe601-33b8-44f0-8452-d821825b9176 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:17:07 np0005486808 nova_compute[259627]: 2025-10-14 09:17:07.972 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:17:07 np0005486808 nova_compute[259627]: 2025-10-14 09:17:07.974 2 DEBUG nova.virt.libvirt.guest [None req-017a161c-1ea1-42a7-a204-3f545ae0afed e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct 14 05:17:07 np0005486808 nova_compute[259627]: 2025-10-14 09:17:07.974 2 DEBUG nova.compute.manager [None req-017a161c-1ea1-42a7-a204-3f545ae0afed e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:17:07 np0005486808 nova_compute[259627]: 2025-10-14 09:17:07.993 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:17:07 np0005486808 nova_compute[259627]: 2025-10-14 09:17:07.996 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:17:08 np0005486808 nova_compute[259627]: 2025-10-14 09:17:08.012 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433413.0108325, 7eb647bc-75e4-4d38-aaa4-67570c4713f9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:17:08 np0005486808 nova_compute[259627]: 2025-10-14 09:17:08.012 2 INFO nova.compute.manager [-] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:17:08 np0005486808 nova_compute[259627]: 2025-10-14 09:17:08.020 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Oct 14 05:17:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1894: 305 pgs: 305 active+clean; 121 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 12 KiB/s wr, 0 op/s
Oct 14 05:17:08 np0005486808 nova_compute[259627]: 2025-10-14 09:17:08.038 2 DEBUG nova.compute.manager [None req-658878cf-62d7-450b-bba7-c80930442b0d - - - - - -] [instance: 7eb647bc-75e4-4d38-aaa4-67570c4713f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:17:08 np0005486808 nova_compute[259627]: 2025-10-14 09:17:08.471 2 DEBUG nova.compute.manager [req-1a5d9c37-edf9-4d67-9d10-128bf44a204e req-3cafc744-a9a2-407f-835b-077de53a6f32 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Received event network-vif-plugged-ae1bda9b-5957-43de-bcf7-97164f008565 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:17:08 np0005486808 nova_compute[259627]: 2025-10-14 09:17:08.471 2 DEBUG oslo_concurrency.lockutils [req-1a5d9c37-edf9-4d67-9d10-128bf44a204e req-3cafc744-a9a2-407f-835b-077de53a6f32 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:17:08 np0005486808 nova_compute[259627]: 2025-10-14 09:17:08.472 2 DEBUG oslo_concurrency.lockutils [req-1a5d9c37-edf9-4d67-9d10-128bf44a204e req-3cafc744-a9a2-407f-835b-077de53a6f32 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:17:08 np0005486808 nova_compute[259627]: 2025-10-14 09:17:08.472 2 DEBUG oslo_concurrency.lockutils [req-1a5d9c37-edf9-4d67-9d10-128bf44a204e req-3cafc744-a9a2-407f-835b-077de53a6f32 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:17:08 np0005486808 nova_compute[259627]: 2025-10-14 09:17:08.472 2 DEBUG nova.compute.manager [req-1a5d9c37-edf9-4d67-9d10-128bf44a204e req-3cafc744-a9a2-407f-835b-077de53a6f32 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] No waiting events found dispatching network-vif-plugged-ae1bda9b-5957-43de-bcf7-97164f008565 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:17:08 np0005486808 nova_compute[259627]: 2025-10-14 09:17:08.473 2 WARNING nova.compute.manager [req-1a5d9c37-edf9-4d67-9d10-128bf44a204e req-3cafc744-a9a2-407f-835b-077de53a6f32 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Received unexpected event network-vif-plugged-ae1bda9b-5957-43de-bcf7-97164f008565 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:17:08 np0005486808 nova_compute[259627]: 2025-10-14 09:17:08.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:09 np0005486808 nova_compute[259627]: 2025-10-14 09:17:09.352 2 INFO nova.compute.manager [None req-563710b9-02bf-4070-a2a2-92e427f1d468 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Get console output#033[00m
Oct 14 05:17:09 np0005486808 nova_compute[259627]: 2025-10-14 09:17:09.360 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 14 05:17:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1895: 305 pgs: 305 active+clean; 121 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 12 KiB/s wr, 0 op/s
Oct 14 05:17:10 np0005486808 nova_compute[259627]: 2025-10-14 09:17:10.806 2 DEBUG nova.compute.manager [req-185e1918-1345-4b02-95f4-2c2a85d698c4 req-d05c3629-d3d1-4b85-ae4e-3fe83778632f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Received event network-vif-plugged-ae1bda9b-5957-43de-bcf7-97164f008565 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:17:10 np0005486808 nova_compute[259627]: 2025-10-14 09:17:10.807 2 DEBUG oslo_concurrency.lockutils [req-185e1918-1345-4b02-95f4-2c2a85d698c4 req-d05c3629-d3d1-4b85-ae4e-3fe83778632f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:17:10 np0005486808 nova_compute[259627]: 2025-10-14 09:17:10.808 2 DEBUG oslo_concurrency.lockutils [req-185e1918-1345-4b02-95f4-2c2a85d698c4 req-d05c3629-d3d1-4b85-ae4e-3fe83778632f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:17:10 np0005486808 nova_compute[259627]: 2025-10-14 09:17:10.808 2 DEBUG oslo_concurrency.lockutils [req-185e1918-1345-4b02-95f4-2c2a85d698c4 req-d05c3629-d3d1-4b85-ae4e-3fe83778632f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:17:10 np0005486808 nova_compute[259627]: 2025-10-14 09:17:10.809 2 DEBUG nova.compute.manager [req-185e1918-1345-4b02-95f4-2c2a85d698c4 req-d05c3629-d3d1-4b85-ae4e-3fe83778632f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] No waiting events found dispatching network-vif-plugged-ae1bda9b-5957-43de-bcf7-97164f008565 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:17:10 np0005486808 nova_compute[259627]: 2025-10-14 09:17:10.809 2 WARNING nova.compute.manager [req-185e1918-1345-4b02-95f4-2c2a85d698c4 req-d05c3629-d3d1-4b85-ae4e-3fe83778632f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Received unexpected event network-vif-plugged-ae1bda9b-5957-43de-bcf7-97164f008565 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:17:10 np0005486808 nova_compute[259627]: 2025-10-14 09:17:10.867 2 DEBUG oslo_concurrency.lockutils [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "8e0fe601-33b8-44f0-8452-d821825b9176" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:17:10 np0005486808 nova_compute[259627]: 2025-10-14 09:17:10.867 2 DEBUG oslo_concurrency.lockutils [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "8e0fe601-33b8-44f0-8452-d821825b9176" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:17:10 np0005486808 nova_compute[259627]: 2025-10-14 09:17:10.868 2 DEBUG oslo_concurrency.lockutils [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:17:10 np0005486808 nova_compute[259627]: 2025-10-14 09:17:10.868 2 DEBUG oslo_concurrency.lockutils [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:17:10 np0005486808 nova_compute[259627]: 2025-10-14 09:17:10.869 2 DEBUG oslo_concurrency.lockutils [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "8e0fe601-33b8-44f0-8452-d821825b9176-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:17:10 np0005486808 nova_compute[259627]: 2025-10-14 09:17:10.871 2 INFO nova.compute.manager [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Terminating instance#033[00m
Oct 14 05:17:10 np0005486808 nova_compute[259627]: 2025-10-14 09:17:10.873 2 DEBUG nova.compute.manager [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:17:10 np0005486808 kernel: tapae1bda9b-59 (unregistering): left promiscuous mode
Oct 14 05:17:10 np0005486808 NetworkManager[44885]: <info>  [1760433430.9229] device (tapae1bda9b-59): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:17:10 np0005486808 nova_compute[259627]: 2025-10-14 09:17:10.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:10 np0005486808 ovn_controller[152662]: 2025-10-14T09:17:10Z|01154|binding|INFO|Releasing lport ae1bda9b-5957-43de-bcf7-97164f008565 from this chassis (sb_readonly=0)
Oct 14 05:17:10 np0005486808 ovn_controller[152662]: 2025-10-14T09:17:10Z|01155|binding|INFO|Setting lport ae1bda9b-5957-43de-bcf7-97164f008565 down in Southbound
Oct 14 05:17:10 np0005486808 ovn_controller[152662]: 2025-10-14T09:17:10Z|01156|binding|INFO|Removing iface tapae1bda9b-59 ovn-installed in OVS
Oct 14 05:17:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:10.940 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:a7:bb 10.100.0.9'], port_security=['fa:16:3e:b7:a7:bb 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '8e0fe601-33b8-44f0-8452-d821825b9176', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d24993a343a425dbddac7e32be0c86b', 'neutron:revision_number': '6', 'neutron:security_group_ids': '6f08b4d4-f298-4d7f-881f-65fc5ca17fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7171c907-a00d-4336-9a3c-18b14ff390bf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=ae1bda9b-5957-43de-bcf7-97164f008565) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:17:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:10.941 162547 INFO neutron.agent.ovn.metadata.agent [-] Port ae1bda9b-5957-43de-bcf7-97164f008565 in datapath bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d unbound from our chassis#033[00m
Oct 14 05:17:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:10.942 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:17:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:10.943 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[069d8ff0-52bf-4066-bbe5-e9b2bf12c207]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:10.943 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d namespace which is not needed anymore#033[00m
Oct 14 05:17:10 np0005486808 nova_compute[259627]: 2025-10-14 09:17:10.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:11 np0005486808 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Oct 14 05:17:11 np0005486808 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d0000006d.scope: Consumed 1.131s CPU time.
Oct 14 05:17:11 np0005486808 systemd-machined[214636]: Machine qemu-139-instance-0000006d terminated.
Oct 14 05:17:11 np0005486808 neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d[366757]: [NOTICE]   (366761) : haproxy version is 2.8.14-c23fe91
Oct 14 05:17:11 np0005486808 neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d[366757]: [NOTICE]   (366761) : path to executable is /usr/sbin/haproxy
Oct 14 05:17:11 np0005486808 neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d[366757]: [WARNING]  (366761) : Exiting Master process...
Oct 14 05:17:11 np0005486808 neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d[366757]: [ALERT]    (366761) : Current worker (366763) exited with code 143 (Terminated)
Oct 14 05:17:11 np0005486808 neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d[366757]: [WARNING]  (366761) : All workers exited. Exiting... (0)
Oct 14 05:17:11 np0005486808 systemd[1]: libpod-5c785f668b8df0aeabe5b1a8cb5d072545c7a4ecaf35d07d25bec14c699b76a5.scope: Deactivated successfully.
Oct 14 05:17:11 np0005486808 podman[366796]: 2025-10-14 09:17:11.092391159 +0000 UTC m=+0.048652520 container died 5c785f668b8df0aeabe5b1a8cb5d072545c7a4ecaf35d07d25bec14c699b76a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Oct 14 05:17:11 np0005486808 kernel: tapae1bda9b-59: entered promiscuous mode
Oct 14 05:17:11 np0005486808 kernel: tapae1bda9b-59 (unregistering): left promiscuous mode
Oct 14 05:17:11 np0005486808 NetworkManager[44885]: <info>  [1760433431.1027] manager: (tapae1bda9b-59): new Tun device (/org/freedesktop/NetworkManager/Devices/467)
Oct 14 05:17:11 np0005486808 ovn_controller[152662]: 2025-10-14T09:17:11Z|01157|binding|INFO|Claiming lport ae1bda9b-5957-43de-bcf7-97164f008565 for this chassis.
Oct 14 05:17:11 np0005486808 ovn_controller[152662]: 2025-10-14T09:17:11Z|01158|binding|INFO|ae1bda9b-5957-43de-bcf7-97164f008565: Claiming fa:16:3e:b7:a7:bb 10.100.0.9
Oct 14 05:17:11 np0005486808 nova_compute[259627]: 2025-10-14 09:17:11.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:11.116 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:a7:bb 10.100.0.9'], port_security=['fa:16:3e:b7:a7:bb 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '8e0fe601-33b8-44f0-8452-d821825b9176', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d24993a343a425dbddac7e32be0c86b', 'neutron:revision_number': '6', 'neutron:security_group_ids': '6f08b4d4-f298-4d7f-881f-65fc5ca17fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7171c907-a00d-4336-9a3c-18b14ff390bf, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=ae1bda9b-5957-43de-bcf7-97164f008565) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:17:11 np0005486808 nova_compute[259627]: 2025-10-14 09:17:11.127 2 INFO nova.virt.libvirt.driver [-] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Instance destroyed successfully.#033[00m
Oct 14 05:17:11 np0005486808 nova_compute[259627]: 2025-10-14 09:17:11.128 2 DEBUG nova.objects.instance [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lazy-loading 'resources' on Instance uuid 8e0fe601-33b8-44f0-8452-d821825b9176 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:17:11 np0005486808 ovn_controller[152662]: 2025-10-14T09:17:11Z|01159|binding|INFO|Setting lport ae1bda9b-5957-43de-bcf7-97164f008565 ovn-installed in OVS
Oct 14 05:17:11 np0005486808 ovn_controller[152662]: 2025-10-14T09:17:11Z|01160|binding|INFO|Setting lport ae1bda9b-5957-43de-bcf7-97164f008565 up in Southbound
Oct 14 05:17:11 np0005486808 nova_compute[259627]: 2025-10-14 09:17:11.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:11 np0005486808 ovn_controller[152662]: 2025-10-14T09:17:11Z|01161|binding|INFO|Releasing lport ae1bda9b-5957-43de-bcf7-97164f008565 from this chassis (sb_readonly=1)
Oct 14 05:17:11 np0005486808 ovn_controller[152662]: 2025-10-14T09:17:11Z|01162|if_status|INFO|Dropped 1 log messages in last 145 seconds (most recently, 145 seconds ago) due to excessive rate
Oct 14 05:17:11 np0005486808 ovn_controller[152662]: 2025-10-14T09:17:11Z|01163|if_status|INFO|Not setting lport ae1bda9b-5957-43de-bcf7-97164f008565 down as sb is readonly
Oct 14 05:17:11 np0005486808 ovn_controller[152662]: 2025-10-14T09:17:11Z|01164|binding|INFO|Removing iface tapae1bda9b-59 ovn-installed in OVS
Oct 14 05:17:11 np0005486808 nova_compute[259627]: 2025-10-14 09:17:11.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:11 np0005486808 ovn_controller[152662]: 2025-10-14T09:17:11Z|01165|binding|INFO|Releasing lport ae1bda9b-5957-43de-bcf7-97164f008565 from this chassis (sb_readonly=0)
Oct 14 05:17:11 np0005486808 ovn_controller[152662]: 2025-10-14T09:17:11Z|01166|binding|INFO|Setting lport ae1bda9b-5957-43de-bcf7-97164f008565 down in Southbound
Oct 14 05:17:11 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5c785f668b8df0aeabe5b1a8cb5d072545c7a4ecaf35d07d25bec14c699b76a5-userdata-shm.mount: Deactivated successfully.
Oct 14 05:17:11 np0005486808 systemd[1]: var-lib-containers-storage-overlay-ae44bdbb224e73e78d9bd19aa707268d3d53d52647578ee723bd9e3305ab7185-merged.mount: Deactivated successfully.
Oct 14 05:17:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:11.142 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:a7:bb 10.100.0.9'], port_security=['fa:16:3e:b7:a7:bb 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '8e0fe601-33b8-44f0-8452-d821825b9176', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d24993a343a425dbddac7e32be0c86b', 'neutron:revision_number': '6', 'neutron:security_group_ids': '6f08b4d4-f298-4d7f-881f-65fc5ca17fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7171c907-a00d-4336-9a3c-18b14ff390bf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=ae1bda9b-5957-43de-bcf7-97164f008565) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:17:11 np0005486808 nova_compute[259627]: 2025-10-14 09:17:11.144 2 DEBUG nova.virt.libvirt.vif [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:16:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1974746659',display_name='tempest-TestNetworkAdvancedServerOps-server-1974746659',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1974746659',id=109,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOLo//MvhWo3fWQGHjr+urxX4+VcySarwttwotGnxIUKeYHMPJD2ibaE4RsIINhRhtCN4FfH4X/uMkC3+sHENw8r8re1VC10CtKYPlARtw6F00oKxhFKvSmM/1BEvmDnTA==',key_name='tempest-TestNetworkAdvancedServerOps-261373730',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:16:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2d24993a343a425dbddac7e32be0c86b',ramdisk_id='',reservation_id='r-ttn68b3m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-94788416',owner_user_name='tempest-TestNetworkAdvancedServerOps-94788416-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:17:08Z,user_data=None,user_id='e992bcb79c4946a8985e3df25eb216ca',uuid=8e0fe601-33b8-44f0-8452-d821825b9176,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ae1bda9b-5957-43de-bcf7-97164f008565", "address": "fa:16:3e:b7:a7:bb", "network": {"id": "bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d", "bridge": "br-int", "label": "tempest-network-smoke--1209319139", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1bda9b-59", "ovs_interfaceid": "ae1bda9b-5957-43de-bcf7-97164f008565", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:17:11 np0005486808 nova_compute[259627]: 2025-10-14 09:17:11.144 2 DEBUG nova.network.os_vif_util [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converting VIF {"id": "ae1bda9b-5957-43de-bcf7-97164f008565", "address": "fa:16:3e:b7:a7:bb", "network": {"id": "bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d", "bridge": "br-int", "label": "tempest-network-smoke--1209319139", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d24993a343a425dbddac7e32be0c86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1bda9b-59", "ovs_interfaceid": "ae1bda9b-5957-43de-bcf7-97164f008565", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:17:11 np0005486808 nova_compute[259627]: 2025-10-14 09:17:11.145 2 DEBUG nova.network.os_vif_util [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:a7:bb,bridge_name='br-int',has_traffic_filtering=True,id=ae1bda9b-5957-43de-bcf7-97164f008565,network=Network(bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae1bda9b-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:17:11 np0005486808 nova_compute[259627]: 2025-10-14 09:17:11.145 2 DEBUG os_vif [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:a7:bb,bridge_name='br-int',has_traffic_filtering=True,id=ae1bda9b-5957-43de-bcf7-97164f008565,network=Network(bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae1bda9b-59') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:17:11 np0005486808 nova_compute[259627]: 2025-10-14 09:17:11.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:11 np0005486808 nova_compute[259627]: 2025-10-14 09:17:11.147 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapae1bda9b-59, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:17:11 np0005486808 nova_compute[259627]: 2025-10-14 09:17:11.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:11 np0005486808 nova_compute[259627]: 2025-10-14 09:17:11.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:11 np0005486808 podman[366796]: 2025-10-14 09:17:11.150550942 +0000 UTC m=+0.106812283 container cleanup 5c785f668b8df0aeabe5b1a8cb5d072545c7a4ecaf35d07d25bec14c699b76a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:17:11 np0005486808 nova_compute[259627]: 2025-10-14 09:17:11.151 2 INFO os_vif [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:a7:bb,bridge_name='br-int',has_traffic_filtering=True,id=ae1bda9b-5957-43de-bcf7-97164f008565,network=Network(bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae1bda9b-59')#033[00m
Oct 14 05:17:11 np0005486808 systemd[1]: libpod-conmon-5c785f668b8df0aeabe5b1a8cb5d072545c7a4ecaf35d07d25bec14c699b76a5.scope: Deactivated successfully.
Oct 14 05:17:11 np0005486808 podman[366834]: 2025-10-14 09:17:11.218131418 +0000 UTC m=+0.045210855 container remove 5c785f668b8df0aeabe5b1a8cb5d072545c7a4ecaf35d07d25bec14c699b76a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009)
Oct 14 05:17:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:11.225 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8649b1d9-ca4f-4289-a64d-9405daf48fc2]: (4, ('Tue Oct 14 09:17:11 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d (5c785f668b8df0aeabe5b1a8cb5d072545c7a4ecaf35d07d25bec14c699b76a5)\n5c785f668b8df0aeabe5b1a8cb5d072545c7a4ecaf35d07d25bec14c699b76a5\nTue Oct 14 09:17:11 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d (5c785f668b8df0aeabe5b1a8cb5d072545c7a4ecaf35d07d25bec14c699b76a5)\n5c785f668b8df0aeabe5b1a8cb5d072545c7a4ecaf35d07d25bec14c699b76a5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:11.227 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a0d48ef5-127e-48a0-879c-09e987e71c4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:11.228 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd7e2e81-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:17:11 np0005486808 nova_compute[259627]: 2025-10-14 09:17:11.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:11 np0005486808 kernel: tapbd7e2e81-90: left promiscuous mode
Oct 14 05:17:11 np0005486808 nova_compute[259627]: 2025-10-14 09:17:11.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:11.233 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ae884c7c-c95d-4bbb-a297-b66ea46a54c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:11 np0005486808 nova_compute[259627]: 2025-10-14 09:17:11.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:11.260 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[01a6a100-a9ba-4f73-8254-a8e372979a06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:11.261 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[34b04b29-bc60-4961-9a34-1ee82f8f07ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:11.281 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2f1d5e9a-3667-4161-93c5-81f4cb239915]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 720564, 'reachable_time': 40133, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366860, 'error': None, 'target': 'ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:11.284 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:17:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:11.284 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[ac501521-29ee-4a6b-8f7c-fa560b99d2ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:11.285 162547 INFO neutron.agent.ovn.metadata.agent [-] Port ae1bda9b-5957-43de-bcf7-97164f008565 in datapath bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d unbound from our chassis#033[00m
Oct 14 05:17:11 np0005486808 systemd[1]: run-netns-ovnmeta\x2dbd7e2e81\x2d9355\x2d4e48\x2dbcd1\x2d3cdb592b9c9d.mount: Deactivated successfully.
Oct 14 05:17:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:11.286 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:17:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:11.287 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ce9ca503-2b02-4253-86a7-5178bbf8f8cb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:11.287 162547 INFO neutron.agent.ovn.metadata.agent [-] Port ae1bda9b-5957-43de-bcf7-97164f008565 in datapath bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d unbound from our chassis#033[00m
Oct 14 05:17:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:11.288 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bd7e2e81-9355-4e48-bcd1-3cdb592b9c9d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:17:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:11.288 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[24d36271-3218-4d24-9d8e-146ff397e000]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:11 np0005486808 nova_compute[259627]: 2025-10-14 09:17:11.526 2 INFO nova.virt.libvirt.driver [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Deleting instance files /var/lib/nova/instances/8e0fe601-33b8-44f0-8452-d821825b9176_del#033[00m
Oct 14 05:17:11 np0005486808 nova_compute[259627]: 2025-10-14 09:17:11.527 2 INFO nova.virt.libvirt.driver [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Deletion of /var/lib/nova/instances/8e0fe601-33b8-44f0-8452-d821825b9176_del complete#033[00m
Oct 14 05:17:11 np0005486808 nova_compute[259627]: 2025-10-14 09:17:11.587 2 INFO nova.compute.manager [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Took 0.71 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:17:11 np0005486808 nova_compute[259627]: 2025-10-14 09:17:11.588 2 DEBUG oslo.service.loopingcall [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:17:11 np0005486808 nova_compute[259627]: 2025-10-14 09:17:11.589 2 DEBUG nova.compute.manager [-] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:17:11 np0005486808 nova_compute[259627]: 2025-10-14 09:17:11.589 2 DEBUG nova.network.neutron [-] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:17:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1896: 305 pgs: 305 active+clean; 121 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 4.4 KiB/s rd, 12 KiB/s wr, 5 op/s
Oct 14 05:17:12 np0005486808 nova_compute[259627]: 2025-10-14 09:17:12.412 2 DEBUG nova.network.neutron [-] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:17:12 np0005486808 nova_compute[259627]: 2025-10-14 09:17:12.447 2 INFO nova.compute.manager [-] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Took 0.86 seconds to deallocate network for instance.#033[00m
Oct 14 05:17:12 np0005486808 nova_compute[259627]: 2025-10-14 09:17:12.514 2 DEBUG oslo_concurrency.lockutils [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:17:12 np0005486808 nova_compute[259627]: 2025-10-14 09:17:12.515 2 DEBUG oslo_concurrency.lockutils [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:17:12 np0005486808 nova_compute[259627]: 2025-10-14 09:17:12.545 2 DEBUG nova.compute.manager [req-b5358482-892b-4666-9e46-b3833f96372f req-a278d2ab-e129-4f59-96f7-72f9537f1fd9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Received event network-vif-deleted-ae1bda9b-5957-43de-bcf7-97164f008565 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:17:12 np0005486808 nova_compute[259627]: 2025-10-14 09:17:12.574 2 DEBUG oslo_concurrency.processutils [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:17:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:17:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:17:13 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3590132240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:17:13 np0005486808 nova_compute[259627]: 2025-10-14 09:17:13.011 2 DEBUG nova.compute.manager [req-cbe2551f-dc8f-4794-b959-86fda7920044 req-8a3afdb4-508e-490f-b45c-a92560c2d6a2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Received event network-changed-ae1bda9b-5957-43de-bcf7-97164f008565 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:17:13 np0005486808 nova_compute[259627]: 2025-10-14 09:17:13.013 2 DEBUG nova.compute.manager [req-cbe2551f-dc8f-4794-b959-86fda7920044 req-8a3afdb4-508e-490f-b45c-a92560c2d6a2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Refreshing instance network info cache due to event network-changed-ae1bda9b-5957-43de-bcf7-97164f008565. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:17:13 np0005486808 nova_compute[259627]: 2025-10-14 09:17:13.014 2 DEBUG oslo_concurrency.lockutils [req-cbe2551f-dc8f-4794-b959-86fda7920044 req-8a3afdb4-508e-490f-b45c-a92560c2d6a2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-8e0fe601-33b8-44f0-8452-d821825b9176" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:17:13 np0005486808 nova_compute[259627]: 2025-10-14 09:17:13.014 2 DEBUG oslo_concurrency.lockutils [req-cbe2551f-dc8f-4794-b959-86fda7920044 req-8a3afdb4-508e-490f-b45c-a92560c2d6a2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-8e0fe601-33b8-44f0-8452-d821825b9176" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:17:13 np0005486808 nova_compute[259627]: 2025-10-14 09:17:13.015 2 DEBUG nova.network.neutron [req-cbe2551f-dc8f-4794-b959-86fda7920044 req-8a3afdb4-508e-490f-b45c-a92560c2d6a2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Refreshing network info cache for port ae1bda9b-5957-43de-bcf7-97164f008565 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 05:17:13 np0005486808 nova_compute[259627]: 2025-10-14 09:17:13.020 2 DEBUG oslo_concurrency.processutils [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:17:13 np0005486808 nova_compute[259627]: 2025-10-14 09:17:13.028 2 DEBUG nova.compute.provider_tree [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 05:17:13 np0005486808 nova_compute[259627]: 2025-10-14 09:17:13.044 2 DEBUG nova.scheduler.client.report [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 05:17:13 np0005486808 nova_compute[259627]: 2025-10-14 09:17:13.068 2 DEBUG oslo_concurrency.lockutils [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.553s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:17:13 np0005486808 nova_compute[259627]: 2025-10-14 09:17:13.097 2 INFO nova.scheduler.client.report [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Deleted allocations for instance 8e0fe601-33b8-44f0-8452-d821825b9176
Oct 14 05:17:13 np0005486808 nova_compute[259627]: 2025-10-14 09:17:13.179 2 DEBUG oslo_concurrency.lockutils [None req-22e724ca-b69f-4d43-842f-b3f849d8d482 e992bcb79c4946a8985e3df25eb216ca 2d24993a343a425dbddac7e32be0c86b - - default default] Lock "8e0fe601-33b8-44f0-8452-d821825b9176" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.312s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:17:13 np0005486808 nova_compute[259627]: 2025-10-14 09:17:13.381 2 DEBUG nova.network.neutron [req-cbe2551f-dc8f-4794-b959-86fda7920044 req-8a3afdb4-508e-490f-b45c-a92560c2d6a2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 05:17:13 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Oct 14 05:17:13 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:17:13.660610) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 05:17:13 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Oct 14 05:17:13 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433433660658, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 1762, "num_deletes": 254, "total_data_size": 2548741, "memory_usage": 2599216, "flush_reason": "Manual Compaction"}
Oct 14 05:17:13 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Oct 14 05:17:13 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433433675697, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 2498041, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38393, "largest_seqno": 40154, "table_properties": {"data_size": 2490139, "index_size": 4652, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17414, "raw_average_key_size": 20, "raw_value_size": 2473911, "raw_average_value_size": 2903, "num_data_blocks": 206, "num_entries": 852, "num_filter_entries": 852, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760433278, "oldest_key_time": 1760433278, "file_creation_time": 1760433433, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:17:13 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 15137 microseconds, and 9422 cpu microseconds.
Oct 14 05:17:13 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:17:13 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:17:13.675755) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 2498041 bytes OK
Oct 14 05:17:13 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:17:13.675779) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Oct 14 05:17:13 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:17:13.677979) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Oct 14 05:17:13 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:17:13.678008) EVENT_LOG_v1 {"time_micros": 1760433433677998, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 05:17:13 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:17:13.678062) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 05:17:13 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 2541097, prev total WAL file size 2541097, number of live WAL files 2.
Oct 14 05:17:13 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:17:13 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:17:13.679521) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Oct 14 05:17:13 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 05:17:13 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(2439KB)], [86(8355KB)]
Oct 14 05:17:13 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433433679580, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 11054559, "oldest_snapshot_seqno": -1}
Oct 14 05:17:13 np0005486808 nova_compute[259627]: 2025-10-14 09:17:13.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:17:13 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 6458 keys, 9388871 bytes, temperature: kUnknown
Oct 14 05:17:13 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433433734938, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 9388871, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9344897, "index_size": 26712, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16197, "raw_key_size": 164219, "raw_average_key_size": 25, "raw_value_size": 9228441, "raw_average_value_size": 1428, "num_data_blocks": 1072, "num_entries": 6458, "num_filter_entries": 6458, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760433433, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:17:13 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:17:13 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:17:13.736072) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 9388871 bytes
Oct 14 05:17:13 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:17:13.737703) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 196.5 rd, 166.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 8.2 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(8.2) write-amplify(3.8) OK, records in: 6980, records dropped: 522 output_compression: NoCompression
Oct 14 05:17:13 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:17:13.737723) EVENT_LOG_v1 {"time_micros": 1760433433737714, "job": 50, "event": "compaction_finished", "compaction_time_micros": 56249, "compaction_time_cpu_micros": 37063, "output_level": 6, "num_output_files": 1, "total_output_size": 9388871, "num_input_records": 6980, "num_output_records": 6458, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 05:17:13 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:17:13 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433433738335, "job": 50, "event": "table_file_deletion", "file_number": 88}
Oct 14 05:17:13 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:17:13 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433433739643, "job": 50, "event": "table_file_deletion", "file_number": 86}
Oct 14 05:17:13 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:17:13.679420) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:17:13 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:17:13.739729) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:17:13 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:17:13.739736) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:17:13 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:17:13.739738) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:17:13 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:17:13.739740) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:17:13 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:17:13.739742) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:17:13 np0005486808 nova_compute[259627]: 2025-10-14 09:17:13.802 2 DEBUG nova.network.neutron [req-cbe2551f-dc8f-4794-b959-86fda7920044 req-8a3afdb4-508e-490f-b45c-a92560c2d6a2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 05:17:13 np0005486808 nova_compute[259627]: 2025-10-14 09:17:13.823 2 DEBUG oslo_concurrency.lockutils [req-cbe2551f-dc8f-4794-b959-86fda7920044 req-8a3afdb4-508e-490f-b45c-a92560c2d6a2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-8e0fe601-33b8-44f0-8452-d821825b9176" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 05:17:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1897: 305 pgs: 305 active+clean; 121 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4 op/s
Oct 14 05:17:14 np0005486808 podman[367057]: 2025-10-14 09:17:14.150521471 +0000 UTC m=+0.088094642 container exec c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:17:14 np0005486808 podman[367057]: 2025-10-14 09:17:14.284655027 +0000 UTC m=+0.222228178 container exec_died c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 14 05:17:14 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:17:14 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:17:14 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:17:14 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:17:15 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:17:15 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:17:15 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:17:15 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:17:15 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:17:15 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:17:15 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:17:15 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:17:15 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 592e3910-49a1-4c5b-8e0c-8343b4277b7c does not exist
Oct 14 05:17:15 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev d1f7e463-9e65-4140-a16c-6bff4819af3c does not exist
Oct 14 05:17:15 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 6158858e-dfeb-4c25-86b8-540db55ae2f4 does not exist
Oct 14 05:17:15 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:17:15 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:17:15 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:17:15 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:17:15 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:17:15 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:17:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1898: 305 pgs: 305 active+clean; 41 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Oct 14 05:17:16 np0005486808 nova_compute[259627]: 2025-10-14 09:17:16.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:17:16 np0005486808 nova_compute[259627]: 2025-10-14 09:17:16.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:17:16 np0005486808 podman[367485]: 2025-10-14 09:17:16.50094726 +0000 UTC m=+0.055480618 container create 4dda0388112638f58ecb743b9cdde8167efc093413cd3f2ffadaaa4b4fc33c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_clarke, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 14 05:17:16 np0005486808 nova_compute[259627]: 2025-10-14 09:17:16.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:16 np0005486808 systemd[1]: Started libpod-conmon-4dda0388112638f58ecb743b9cdde8167efc093413cd3f2ffadaaa4b4fc33c0e.scope.
Oct 14 05:17:16 np0005486808 podman[367485]: 2025-10-14 09:17:16.481131462 +0000 UTC m=+0.035664850 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:17:16 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:17:16 np0005486808 podman[367485]: 2025-10-14 09:17:16.646317223 +0000 UTC m=+0.200850591 container init 4dda0388112638f58ecb743b9cdde8167efc093413cd3f2ffadaaa4b4fc33c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_clarke, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 14 05:17:16 np0005486808 podman[367485]: 2025-10-14 09:17:16.658074593 +0000 UTC m=+0.212607941 container start 4dda0388112638f58ecb743b9cdde8167efc093413cd3f2ffadaaa4b4fc33c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_clarke, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 14 05:17:16 np0005486808 podman[367485]: 2025-10-14 09:17:16.661486067 +0000 UTC m=+0.216019425 container attach 4dda0388112638f58ecb743b9cdde8167efc093413cd3f2ffadaaa4b4fc33c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_clarke, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:17:16 np0005486808 crazy_clarke[367502]: 167 167
Oct 14 05:17:16 np0005486808 systemd[1]: libpod-4dda0388112638f58ecb743b9cdde8167efc093413cd3f2ffadaaa4b4fc33c0e.scope: Deactivated successfully.
Oct 14 05:17:16 np0005486808 podman[367485]: 2025-10-14 09:17:16.666638564 +0000 UTC m=+0.221171912 container died 4dda0388112638f58ecb743b9cdde8167efc093413cd3f2ffadaaa4b4fc33c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_clarke, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:17:16 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:17:16 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:17:16 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:17:16 np0005486808 systemd[1]: var-lib-containers-storage-overlay-0ca724b0dea3e14b57a3dfe829864c83ead37f4c46acfdf72b0b517b7e3d8a93-merged.mount: Deactivated successfully.
Oct 14 05:17:16 np0005486808 podman[367485]: 2025-10-14 09:17:16.706065106 +0000 UTC m=+0.260598454 container remove 4dda0388112638f58ecb743b9cdde8167efc093413cd3f2ffadaaa4b4fc33c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 14 05:17:16 np0005486808 systemd[1]: libpod-conmon-4dda0388112638f58ecb743b9cdde8167efc093413cd3f2ffadaaa4b4fc33c0e.scope: Deactivated successfully.
Oct 14 05:17:16 np0005486808 podman[367525]: 2025-10-14 09:17:16.86774337 +0000 UTC m=+0.042690813 container create 07de34f1e65862c9f6314d3af5ea54f52743dfb796958724dc978984514d212c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:17:16 np0005486808 systemd[1]: Started libpod-conmon-07de34f1e65862c9f6314d3af5ea54f52743dfb796958724dc978984514d212c.scope.
Oct 14 05:17:16 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:17:16 np0005486808 podman[367525]: 2025-10-14 09:17:16.84907624 +0000 UTC m=+0.024023703 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:17:16 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad0634224dc10418d6a2879bc0186cbf74d329c4cf79ce250e2a3539125c5327/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:17:16 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad0634224dc10418d6a2879bc0186cbf74d329c4cf79ce250e2a3539125c5327/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:17:16 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad0634224dc10418d6a2879bc0186cbf74d329c4cf79ce250e2a3539125c5327/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:17:16 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad0634224dc10418d6a2879bc0186cbf74d329c4cf79ce250e2a3539125c5327/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:17:16 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad0634224dc10418d6a2879bc0186cbf74d329c4cf79ce250e2a3539125c5327/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:17:16 np0005486808 podman[367525]: 2025-10-14 09:17:16.960393044 +0000 UTC m=+0.135340507 container init 07de34f1e65862c9f6314d3af5ea54f52743dfb796958724dc978984514d212c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_pascal, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:17:16 np0005486808 podman[367525]: 2025-10-14 09:17:16.974339888 +0000 UTC m=+0.149287331 container start 07de34f1e65862c9f6314d3af5ea54f52743dfb796958724dc978984514d212c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_pascal, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:17:16 np0005486808 podman[367525]: 2025-10-14 09:17:16.977703051 +0000 UTC m=+0.152650514 container attach 07de34f1e65862c9f6314d3af5ea54f52743dfb796958724dc978984514d212c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_pascal, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 14 05:17:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:17:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1899: 305 pgs: 305 active+clean; 41 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Oct 14 05:17:18 np0005486808 great_pascal[367542]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:17:18 np0005486808 great_pascal[367542]: --> relative data size: 1.0
Oct 14 05:17:18 np0005486808 great_pascal[367542]: --> All data devices are unavailable
Oct 14 05:17:18 np0005486808 systemd[1]: libpod-07de34f1e65862c9f6314d3af5ea54f52743dfb796958724dc978984514d212c.scope: Deactivated successfully.
Oct 14 05:17:18 np0005486808 systemd[1]: libpod-07de34f1e65862c9f6314d3af5ea54f52743dfb796958724dc978984514d212c.scope: Consumed 1.112s CPU time.
Oct 14 05:17:18 np0005486808 podman[367525]: 2025-10-14 09:17:18.121743967 +0000 UTC m=+1.296691460 container died 07de34f1e65862c9f6314d3af5ea54f52743dfb796958724dc978984514d212c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_pascal, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:17:18 np0005486808 systemd[1]: var-lib-containers-storage-overlay-ad0634224dc10418d6a2879bc0186cbf74d329c4cf79ce250e2a3539125c5327-merged.mount: Deactivated successfully.
Oct 14 05:17:18 np0005486808 podman[367525]: 2025-10-14 09:17:18.205306877 +0000 UTC m=+1.380254360 container remove 07de34f1e65862c9f6314d3af5ea54f52743dfb796958724dc978984514d212c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Oct 14 05:17:18 np0005486808 systemd[1]: libpod-conmon-07de34f1e65862c9f6314d3af5ea54f52743dfb796958724dc978984514d212c.scope: Deactivated successfully.
Oct 14 05:17:18 np0005486808 nova_compute[259627]: 2025-10-14 09:17:18.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:18 np0005486808 podman[367725]: 2025-10-14 09:17:18.954968133 +0000 UTC m=+0.067599817 container create d2f3cc1b4b728bda042b60151a6baece370056119ec16c89f8b981152bfb6069 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:17:19 np0005486808 systemd[1]: Started libpod-conmon-d2f3cc1b4b728bda042b60151a6baece370056119ec16c89f8b981152bfb6069.scope.
Oct 14 05:17:19 np0005486808 podman[367725]: 2025-10-14 09:17:18.925458315 +0000 UTC m=+0.038090009 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:17:19 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:17:19 np0005486808 podman[367725]: 2025-10-14 09:17:19.055160512 +0000 UTC m=+0.167792296 container init d2f3cc1b4b728bda042b60151a6baece370056119ec16c89f8b981152bfb6069 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 14 05:17:19 np0005486808 podman[367725]: 2025-10-14 09:17:19.067326462 +0000 UTC m=+0.179958136 container start d2f3cc1b4b728bda042b60151a6baece370056119ec16c89f8b981152bfb6069 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 14 05:17:19 np0005486808 podman[367725]: 2025-10-14 09:17:19.07172836 +0000 UTC m=+0.184360095 container attach d2f3cc1b4b728bda042b60151a6baece370056119ec16c89f8b981152bfb6069 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_swartz, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 14 05:17:19 np0005486808 loving_swartz[367742]: 167 167
Oct 14 05:17:19 np0005486808 podman[367725]: 2025-10-14 09:17:19.075159805 +0000 UTC m=+0.187791489 container died d2f3cc1b4b728bda042b60151a6baece370056119ec16c89f8b981152bfb6069 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_swartz, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:17:19 np0005486808 systemd[1]: libpod-d2f3cc1b4b728bda042b60151a6baece370056119ec16c89f8b981152bfb6069.scope: Deactivated successfully.
Oct 14 05:17:19 np0005486808 systemd[1]: var-lib-containers-storage-overlay-3989aa692df6b47e94af2cf37197c9f7ef72acabbf9b0c4d3bd3eb8e77cc162d-merged.mount: Deactivated successfully.
Oct 14 05:17:19 np0005486808 podman[367725]: 2025-10-14 09:17:19.128584912 +0000 UTC m=+0.241216596 container remove d2f3cc1b4b728bda042b60151a6baece370056119ec16c89f8b981152bfb6069 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 05:17:19 np0005486808 systemd[1]: libpod-conmon-d2f3cc1b4b728bda042b60151a6baece370056119ec16c89f8b981152bfb6069.scope: Deactivated successfully.
Oct 14 05:17:19 np0005486808 podman[367767]: 2025-10-14 09:17:19.388348814 +0000 UTC m=+0.071858482 container create d0d61a219293ab5c5839b91c0bdd3c033386d119943fd1ea6e3ed5dc037c2b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cori, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 14 05:17:19 np0005486808 systemd[1]: Started libpod-conmon-d0d61a219293ab5c5839b91c0bdd3c033386d119943fd1ea6e3ed5dc037c2b2f.scope.
Oct 14 05:17:19 np0005486808 podman[367767]: 2025-10-14 09:17:19.361570944 +0000 UTC m=+0.045080652 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:17:19 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:17:19 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43fe3aa26ce9607aa57b864467d9ffdb35481fe32f3064a0cf45b15929cd66a5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:17:19 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43fe3aa26ce9607aa57b864467d9ffdb35481fe32f3064a0cf45b15929cd66a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:17:19 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43fe3aa26ce9607aa57b864467d9ffdb35481fe32f3064a0cf45b15929cd66a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:17:19 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43fe3aa26ce9607aa57b864467d9ffdb35481fe32f3064a0cf45b15929cd66a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:17:19 np0005486808 podman[367767]: 2025-10-14 09:17:19.494527221 +0000 UTC m=+0.178036939 container init d0d61a219293ab5c5839b91c0bdd3c033386d119943fd1ea6e3ed5dc037c2b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cori, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:17:19 np0005486808 podman[367767]: 2025-10-14 09:17:19.515942149 +0000 UTC m=+0.199451817 container start d0d61a219293ab5c5839b91c0bdd3c033386d119943fd1ea6e3ed5dc037c2b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:17:19 np0005486808 podman[367767]: 2025-10-14 09:17:19.520350898 +0000 UTC m=+0.203860606 container attach d0d61a219293ab5c5839b91c0bdd3c033386d119943fd1ea6e3ed5dc037c2b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:17:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1900: 305 pgs: 305 active+clean; 41 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Oct 14 05:17:20 np0005486808 objective_cori[367784]: {
Oct 14 05:17:20 np0005486808 objective_cori[367784]:    "0": [
Oct 14 05:17:20 np0005486808 objective_cori[367784]:        {
Oct 14 05:17:20 np0005486808 objective_cori[367784]:            "devices": [
Oct 14 05:17:20 np0005486808 objective_cori[367784]:                "/dev/loop3"
Oct 14 05:17:20 np0005486808 objective_cori[367784]:            ],
Oct 14 05:17:20 np0005486808 objective_cori[367784]:            "lv_name": "ceph_lv0",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:            "lv_size": "21470642176",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:            "name": "ceph_lv0",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:            "tags": {
Oct 14 05:17:20 np0005486808 objective_cori[367784]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:                "ceph.cluster_name": "ceph",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:                "ceph.crush_device_class": "",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:                "ceph.encrypted": "0",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:                "ceph.osd_id": "0",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:                "ceph.type": "block",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:                "ceph.vdo": "0"
Oct 14 05:17:20 np0005486808 objective_cori[367784]:            },
Oct 14 05:17:20 np0005486808 objective_cori[367784]:            "type": "block",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:            "vg_name": "ceph_vg0"
Oct 14 05:17:20 np0005486808 objective_cori[367784]:        }
Oct 14 05:17:20 np0005486808 objective_cori[367784]:    ],
Oct 14 05:17:20 np0005486808 objective_cori[367784]:    "1": [
Oct 14 05:17:20 np0005486808 objective_cori[367784]:        {
Oct 14 05:17:20 np0005486808 objective_cori[367784]:            "devices": [
Oct 14 05:17:20 np0005486808 objective_cori[367784]:                "/dev/loop4"
Oct 14 05:17:20 np0005486808 objective_cori[367784]:            ],
Oct 14 05:17:20 np0005486808 objective_cori[367784]:            "lv_name": "ceph_lv1",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:            "lv_size": "21470642176",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:            "name": "ceph_lv1",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:            "tags": {
Oct 14 05:17:20 np0005486808 objective_cori[367784]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:                "ceph.cluster_name": "ceph",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:                "ceph.crush_device_class": "",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:                "ceph.encrypted": "0",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:                "ceph.osd_id": "1",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:                "ceph.type": "block",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:                "ceph.vdo": "0"
Oct 14 05:17:20 np0005486808 objective_cori[367784]:            },
Oct 14 05:17:20 np0005486808 objective_cori[367784]:            "type": "block",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:            "vg_name": "ceph_vg1"
Oct 14 05:17:20 np0005486808 objective_cori[367784]:        }
Oct 14 05:17:20 np0005486808 objective_cori[367784]:    ],
Oct 14 05:17:20 np0005486808 objective_cori[367784]:    "2": [
Oct 14 05:17:20 np0005486808 objective_cori[367784]:        {
Oct 14 05:17:20 np0005486808 objective_cori[367784]:            "devices": [
Oct 14 05:17:20 np0005486808 objective_cori[367784]:                "/dev/loop5"
Oct 14 05:17:20 np0005486808 objective_cori[367784]:            ],
Oct 14 05:17:20 np0005486808 objective_cori[367784]:            "lv_name": "ceph_lv2",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:            "lv_size": "21470642176",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:            "name": "ceph_lv2",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:            "tags": {
Oct 14 05:17:20 np0005486808 objective_cori[367784]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:                "ceph.cluster_name": "ceph",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:                "ceph.crush_device_class": "",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:                "ceph.encrypted": "0",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:                "ceph.osd_id": "2",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:                "ceph.type": "block",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:                "ceph.vdo": "0"
Oct 14 05:17:20 np0005486808 objective_cori[367784]:            },
Oct 14 05:17:20 np0005486808 objective_cori[367784]:            "type": "block",
Oct 14 05:17:20 np0005486808 objective_cori[367784]:            "vg_name": "ceph_vg2"
Oct 14 05:17:20 np0005486808 objective_cori[367784]:        }
Oct 14 05:17:20 np0005486808 objective_cori[367784]:    ]
Oct 14 05:17:20 np0005486808 objective_cori[367784]: }
Oct 14 05:17:20 np0005486808 podman[367767]: 2025-10-14 09:17:20.322142959 +0000 UTC m=+1.005652627 container died d0d61a219293ab5c5839b91c0bdd3c033386d119943fd1ea6e3ed5dc037c2b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 14 05:17:20 np0005486808 systemd[1]: libpod-d0d61a219293ab5c5839b91c0bdd3c033386d119943fd1ea6e3ed5dc037c2b2f.scope: Deactivated successfully.
Oct 14 05:17:20 np0005486808 systemd[1]: var-lib-containers-storage-overlay-43fe3aa26ce9607aa57b864467d9ffdb35481fe32f3064a0cf45b15929cd66a5-merged.mount: Deactivated successfully.
Oct 14 05:17:20 np0005486808 podman[367767]: 2025-10-14 09:17:20.379333399 +0000 UTC m=+1.062843037 container remove d0d61a219293ab5c5839b91c0bdd3c033386d119943fd1ea6e3ed5dc037c2b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 14 05:17:20 np0005486808 systemd[1]: libpod-conmon-d0d61a219293ab5c5839b91c0bdd3c033386d119943fd1ea6e3ed5dc037c2b2f.scope: Deactivated successfully.
Oct 14 05:17:21 np0005486808 nova_compute[259627]: 2025-10-14 09:17:21.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:21 np0005486808 podman[367946]: 2025-10-14 09:17:21.257658436 +0000 UTC m=+0.068601661 container create 52aa76c18b626d1340a14a6ce910a217adf340695c8379a55bd0e2388334bb66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_germain, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Oct 14 05:17:21 np0005486808 systemd[1]: Started libpod-conmon-52aa76c18b626d1340a14a6ce910a217adf340695c8379a55bd0e2388334bb66.scope.
Oct 14 05:17:21 np0005486808 podman[367946]: 2025-10-14 09:17:21.229553054 +0000 UTC m=+0.040496349 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:17:21 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:17:21 np0005486808 podman[367946]: 2025-10-14 09:17:21.369568395 +0000 UTC m=+0.180511680 container init 52aa76c18b626d1340a14a6ce910a217adf340695c8379a55bd0e2388334bb66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:17:21 np0005486808 podman[367946]: 2025-10-14 09:17:21.378346111 +0000 UTC m=+0.189289336 container start 52aa76c18b626d1340a14a6ce910a217adf340695c8379a55bd0e2388334bb66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:17:21 np0005486808 podman[367946]: 2025-10-14 09:17:21.382563735 +0000 UTC m=+0.193507010 container attach 52aa76c18b626d1340a14a6ce910a217adf340695c8379a55bd0e2388334bb66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_germain, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:17:21 np0005486808 exciting_germain[367963]: 167 167
Oct 14 05:17:21 np0005486808 systemd[1]: libpod-52aa76c18b626d1340a14a6ce910a217adf340695c8379a55bd0e2388334bb66.scope: Deactivated successfully.
Oct 14 05:17:21 np0005486808 podman[367946]: 2025-10-14 09:17:21.385143809 +0000 UTC m=+0.196087064 container died 52aa76c18b626d1340a14a6ce910a217adf340695c8379a55bd0e2388334bb66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_germain, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:17:21 np0005486808 systemd[1]: var-lib-containers-storage-overlay-b430b634a78da92063c905473125bde27515c177d46e6dd9061fe44f25d8452a-merged.mount: Deactivated successfully.
Oct 14 05:17:21 np0005486808 podman[367946]: 2025-10-14 09:17:21.441760014 +0000 UTC m=+0.252703249 container remove 52aa76c18b626d1340a14a6ce910a217adf340695c8379a55bd0e2388334bb66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Oct 14 05:17:21 np0005486808 systemd[1]: libpod-conmon-52aa76c18b626d1340a14a6ce910a217adf340695c8379a55bd0e2388334bb66.scope: Deactivated successfully.
Oct 14 05:17:21 np0005486808 podman[367989]: 2025-10-14 09:17:21.671522607 +0000 UTC m=+0.068594942 container create 876c578c09f44ca6c32101e179c5f0122edbb3121e0f3b61633f90624e350f82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 14 05:17:21 np0005486808 systemd[1]: Started libpod-conmon-876c578c09f44ca6c32101e179c5f0122edbb3121e0f3b61633f90624e350f82.scope.
Oct 14 05:17:21 np0005486808 podman[367989]: 2025-10-14 09:17:21.644425409 +0000 UTC m=+0.041497794 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:17:21 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:17:21 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5429cd68315d1600534f673076140a445e2e900d910022ef4f865f1aab88e1f3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:17:21 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5429cd68315d1600534f673076140a445e2e900d910022ef4f865f1aab88e1f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:17:21 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5429cd68315d1600534f673076140a445e2e900d910022ef4f865f1aab88e1f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:17:21 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5429cd68315d1600534f673076140a445e2e900d910022ef4f865f1aab88e1f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:17:21 np0005486808 podman[367989]: 2025-10-14 09:17:21.785141117 +0000 UTC m=+0.182213512 container init 876c578c09f44ca6c32101e179c5f0122edbb3121e0f3b61633f90624e350f82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 14 05:17:21 np0005486808 podman[367989]: 2025-10-14 09:17:21.796809225 +0000 UTC m=+0.193881550 container start 876c578c09f44ca6c32101e179c5f0122edbb3121e0f3b61633f90624e350f82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:17:21 np0005486808 podman[367989]: 2025-10-14 09:17:21.800977177 +0000 UTC m=+0.198049572 container attach 876c578c09f44ca6c32101e179c5f0122edbb3121e0f3b61633f90624e350f82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_nash, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:17:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1901: 305 pgs: 305 active+clean; 41 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Oct 14 05:17:22 np0005486808 wonderful_nash[368006]: {
Oct 14 05:17:22 np0005486808 wonderful_nash[368006]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:17:22 np0005486808 wonderful_nash[368006]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:17:22 np0005486808 wonderful_nash[368006]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:17:22 np0005486808 wonderful_nash[368006]:        "osd_id": 2,
Oct 14 05:17:22 np0005486808 wonderful_nash[368006]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:17:22 np0005486808 wonderful_nash[368006]:        "type": "bluestore"
Oct 14 05:17:22 np0005486808 wonderful_nash[368006]:    },
Oct 14 05:17:22 np0005486808 wonderful_nash[368006]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:17:22 np0005486808 wonderful_nash[368006]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:17:22 np0005486808 wonderful_nash[368006]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:17:22 np0005486808 wonderful_nash[368006]:        "osd_id": 1,
Oct 14 05:17:22 np0005486808 wonderful_nash[368006]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:17:22 np0005486808 wonderful_nash[368006]:        "type": "bluestore"
Oct 14 05:17:22 np0005486808 wonderful_nash[368006]:    },
Oct 14 05:17:22 np0005486808 wonderful_nash[368006]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:17:22 np0005486808 wonderful_nash[368006]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:17:22 np0005486808 wonderful_nash[368006]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:17:22 np0005486808 wonderful_nash[368006]:        "osd_id": 0,
Oct 14 05:17:22 np0005486808 wonderful_nash[368006]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:17:22 np0005486808 wonderful_nash[368006]:        "type": "bluestore"
Oct 14 05:17:22 np0005486808 wonderful_nash[368006]:    }
Oct 14 05:17:22 np0005486808 wonderful_nash[368006]: }
Oct 14 05:17:22 np0005486808 systemd[1]: libpod-876c578c09f44ca6c32101e179c5f0122edbb3121e0f3b61633f90624e350f82.scope: Deactivated successfully.
Oct 14 05:17:22 np0005486808 systemd[1]: libpod-876c578c09f44ca6c32101e179c5f0122edbb3121e0f3b61633f90624e350f82.scope: Consumed 1.052s CPU time.
Oct 14 05:17:22 np0005486808 podman[367989]: 2025-10-14 09:17:22.841608565 +0000 UTC m=+1.238680900 container died 876c578c09f44ca6c32101e179c5f0122edbb3121e0f3b61633f90624e350f82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_nash, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:17:22 np0005486808 systemd[1]: var-lib-containers-storage-overlay-5429cd68315d1600534f673076140a445e2e900d910022ef4f865f1aab88e1f3-merged.mount: Deactivated successfully.
Oct 14 05:17:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:17:22 np0005486808 podman[367989]: 2025-10-14 09:17:22.899880011 +0000 UTC m=+1.296952316 container remove 876c578c09f44ca6c32101e179c5f0122edbb3121e0f3b61633f90624e350f82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_nash, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 14 05:17:22 np0005486808 systemd[1]: libpod-conmon-876c578c09f44ca6c32101e179c5f0122edbb3121e0f3b61633f90624e350f82.scope: Deactivated successfully.
Oct 14 05:17:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:17:22 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:17:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:17:22 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:17:22 np0005486808 podman[368039]: 2025-10-14 09:17:22.988154876 +0000 UTC m=+0.112547944 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct 14 05:17:22 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 17e6efa1-b253-42e8-ab80-a0a87e197e43 does not exist
Oct 14 05:17:22 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev e96bf0d9-6511-4f62-b6ae-022e5a7ee6f9 does not exist
Oct 14 05:17:23 np0005486808 podman[368042]: 2025-10-14 09:17:23.011729607 +0000 UTC m=+0.136355901 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 14 05:17:23 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:17:23 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:17:23 np0005486808 nova_compute[259627]: 2025-10-14 09:17:23.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1902: 305 pgs: 305 active+clean; 41 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 05:17:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1903: 305 pgs: 305 active+clean; 41 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 05:17:26 np0005486808 nova_compute[259627]: 2025-10-14 09:17:26.127 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433431.1264865, 8e0fe601-33b8-44f0-8452-d821825b9176 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:17:26 np0005486808 nova_compute[259627]: 2025-10-14 09:17:26.128 2 INFO nova.compute.manager [-] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:17:26 np0005486808 nova_compute[259627]: 2025-10-14 09:17:26.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:26 np0005486808 nova_compute[259627]: 2025-10-14 09:17:26.215 2 DEBUG nova.compute.manager [None req-a6ef2daf-8567-4726-9d53-bc3a20788986 - - - - - -] [instance: 8e0fe601-33b8-44f0-8452-d821825b9176] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:17:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:17:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1904: 305 pgs: 305 active+clean; 41 MiB data, 742 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:17:28 np0005486808 nova_compute[259627]: 2025-10-14 09:17:28.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1905: 305 pgs: 305 active+clean; 41 MiB data, 742 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:17:30 np0005486808 nova_compute[259627]: 2025-10-14 09:17:30.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:17:31 np0005486808 nova_compute[259627]: 2025-10-14 09:17:31.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:31 np0005486808 nova_compute[259627]: 2025-10-14 09:17:31.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:17:32 np0005486808 nova_compute[259627]: 2025-10-14 09:17:32.007 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:17:32 np0005486808 nova_compute[259627]: 2025-10-14 09:17:32.008 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:17:32 np0005486808 nova_compute[259627]: 2025-10-14 09:17:32.008 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:17:32 np0005486808 nova_compute[259627]: 2025-10-14 09:17:32.008 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:17:32 np0005486808 nova_compute[259627]: 2025-10-14 09:17:32.009 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:17:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1906: 305 pgs: 305 active+clean; 41 MiB data, 742 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:17:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:17:32 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3264772301' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:17:32 np0005486808 nova_compute[259627]: 2025-10-14 09:17:32.446 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:17:32 np0005486808 nova_compute[259627]: 2025-10-14 09:17:32.663 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:17:32 np0005486808 nova_compute[259627]: 2025-10-14 09:17:32.664 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3715MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:17:32 np0005486808 nova_compute[259627]: 2025-10-14 09:17:32.664 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:17:32 np0005486808 nova_compute[259627]: 2025-10-14 09:17:32.665 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:17:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:17:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:17:32 np0005486808 nova_compute[259627]: 2025-10-14 09:17:32.767 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:17:32 np0005486808 nova_compute[259627]: 2025-10-14 09:17:32.767 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:17:32 np0005486808 nova_compute[259627]: 2025-10-14 09:17:32.787 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:17:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:17:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:17:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:17:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:17:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:17:32
Oct 14 05:17:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:17:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:17:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', '.mgr', 'images', '.rgw.root']
Oct 14 05:17:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:17:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:17:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:17:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:17:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:17:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:17:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:17:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:17:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:17:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:17:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:17:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:17:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:17:33 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/578243817' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:17:33 np0005486808 nova_compute[259627]: 2025-10-14 09:17:33.221 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:17:33 np0005486808 nova_compute[259627]: 2025-10-14 09:17:33.230 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:17:33 np0005486808 nova_compute[259627]: 2025-10-14 09:17:33.245 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:17:33 np0005486808 nova_compute[259627]: 2025-10-14 09:17:33.281 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:17:33 np0005486808 nova_compute[259627]: 2025-10-14 09:17:33.282 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:17:33 np0005486808 nova_compute[259627]: 2025-10-14 09:17:33.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1907: 305 pgs: 305 active+clean; 41 MiB data, 742 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:17:34 np0005486808 nova_compute[259627]: 2025-10-14 09:17:34.279 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:17:34 np0005486808 nova_compute[259627]: 2025-10-14 09:17:34.280 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:17:34 np0005486808 nova_compute[259627]: 2025-10-14 09:17:34.280 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:17:34 np0005486808 podman[368184]: 2025-10-14 09:17:34.689244486 +0000 UTC m=+0.088385360 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 14 05:17:34 np0005486808 podman[368183]: 2025-10-14 09:17:34.736969892 +0000 UTC m=+0.141864758 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 14 05:17:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1908: 305 pgs: 305 active+clean; 41 MiB data, 742 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:17:36 np0005486808 nova_compute[259627]: 2025-10-14 09:17:36.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:17:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1909: 305 pgs: 305 active+clean; 41 MiB data, 742 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:17:38 np0005486808 nova_compute[259627]: 2025-10-14 09:17:38.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:38 np0005486808 nova_compute[259627]: 2025-10-14 09:17:38.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:17:38 np0005486808 nova_compute[259627]: 2025-10-14 09:17:38.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:17:38 np0005486808 nova_compute[259627]: 2025-10-14 09:17:38.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 05:17:39 np0005486808 nova_compute[259627]: 2025-10-14 09:17:38.999 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 14 05:17:39 np0005486808 nova_compute[259627]: 2025-10-14 09:17:38.999 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:17:39 np0005486808 nova_compute[259627]: 2025-10-14 09:17:39.000 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:17:39 np0005486808 nova_compute[259627]: 2025-10-14 09:17:39.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:17:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1910: 305 pgs: 305 active+clean; 41 MiB data, 742 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:17:41 np0005486808 nova_compute[259627]: 2025-10-14 09:17:41.051 2 DEBUG oslo_concurrency.lockutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "22c6f034-c238-499b-8b07-1c0f5879297e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:17:41 np0005486808 nova_compute[259627]: 2025-10-14 09:17:41.051 2 DEBUG oslo_concurrency.lockutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "22c6f034-c238-499b-8b07-1c0f5879297e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:17:41 np0005486808 nova_compute[259627]: 2025-10-14 09:17:41.067 2 DEBUG nova.compute.manager [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:17:41 np0005486808 nova_compute[259627]: 2025-10-14 09:17:41.153 2 DEBUG oslo_concurrency.lockutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:17:41 np0005486808 nova_compute[259627]: 2025-10-14 09:17:41.154 2 DEBUG oslo_concurrency.lockutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:17:41 np0005486808 nova_compute[259627]: 2025-10-14 09:17:41.164 2 DEBUG nova.virt.hardware [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:17:41 np0005486808 nova_compute[259627]: 2025-10-14 09:17:41.164 2 INFO nova.compute.claims [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:17:41 np0005486808 nova_compute[259627]: 2025-10-14 09:17:41.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:41 np0005486808 nova_compute[259627]: 2025-10-14 09:17:41.276 2 DEBUG oslo_concurrency.processutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:17:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:17:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1628149342' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:17:41 np0005486808 nova_compute[259627]: 2025-10-14 09:17:41.795 2 DEBUG oslo_concurrency.processutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:17:41 np0005486808 nova_compute[259627]: 2025-10-14 09:17:41.805 2 DEBUG nova.compute.provider_tree [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:17:41 np0005486808 nova_compute[259627]: 2025-10-14 09:17:41.829 2 DEBUG nova.scheduler.client.report [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:17:41 np0005486808 nova_compute[259627]: 2025-10-14 09:17:41.874 2 DEBUG oslo_concurrency.lockutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:17:41 np0005486808 nova_compute[259627]: 2025-10-14 09:17:41.876 2 DEBUG nova.compute.manager [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:17:41 np0005486808 nova_compute[259627]: 2025-10-14 09:17:41.955 2 DEBUG nova.compute.manager [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:17:41 np0005486808 nova_compute[259627]: 2025-10-14 09:17:41.956 2 DEBUG nova.network.neutron [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:17:41 np0005486808 nova_compute[259627]: 2025-10-14 09:17:41.984 2 INFO nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:17:42 np0005486808 nova_compute[259627]: 2025-10-14 09:17:42.004 2 DEBUG nova.compute.manager [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:17:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1911: 305 pgs: 305 active+clean; 41 MiB data, 742 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:17:42 np0005486808 nova_compute[259627]: 2025-10-14 09:17:42.118 2 DEBUG nova.compute.manager [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:17:42 np0005486808 nova_compute[259627]: 2025-10-14 09:17:42.120 2 DEBUG nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:17:42 np0005486808 nova_compute[259627]: 2025-10-14 09:17:42.121 2 INFO nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Creating image(s)#033[00m
Oct 14 05:17:42 np0005486808 nova_compute[259627]: 2025-10-14 09:17:42.156 2 DEBUG nova.storage.rbd_utils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 22c6f034-c238-499b-8b07-1c0f5879297e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:17:42 np0005486808 nova_compute[259627]: 2025-10-14 09:17:42.196 2 DEBUG nova.storage.rbd_utils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 22c6f034-c238-499b-8b07-1c0f5879297e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:17:42 np0005486808 nova_compute[259627]: 2025-10-14 09:17:42.218 2 DEBUG nova.storage.rbd_utils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 22c6f034-c238-499b-8b07-1c0f5879297e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:17:42 np0005486808 nova_compute[259627]: 2025-10-14 09:17:42.223 2 DEBUG oslo_concurrency.processutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:17:42 np0005486808 nova_compute[259627]: 2025-10-14 09:17:42.274 2 DEBUG nova.policy [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '10926c27278e45e7b995f2e53b9d16f9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4562699bdb1548f1bb36819107535620', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:17:42 np0005486808 nova_compute[259627]: 2025-10-14 09:17:42.326 2 DEBUG oslo_concurrency.processutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:17:42 np0005486808 nova_compute[259627]: 2025-10-14 09:17:42.327 2 DEBUG oslo_concurrency.lockutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:17:42 np0005486808 nova_compute[259627]: 2025-10-14 09:17:42.327 2 DEBUG oslo_concurrency.lockutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:17:42 np0005486808 nova_compute[259627]: 2025-10-14 09:17:42.328 2 DEBUG oslo_concurrency.lockutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:17:42 np0005486808 nova_compute[259627]: 2025-10-14 09:17:42.349 2 DEBUG nova.storage.rbd_utils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 22c6f034-c238-499b-8b07-1c0f5879297e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:17:42 np0005486808 nova_compute[259627]: 2025-10-14 09:17:42.353 2 DEBUG oslo_concurrency.processutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 22c6f034-c238-499b-8b07-1c0f5879297e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:17:42 np0005486808 nova_compute[259627]: 2025-10-14 09:17:42.628 2 DEBUG oslo_concurrency.processutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 22c6f034-c238-499b-8b07-1c0f5879297e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.275s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:17:42 np0005486808 nova_compute[259627]: 2025-10-14 09:17:42.723 2 DEBUG nova.storage.rbd_utils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] resizing rbd image 22c6f034-c238-499b-8b07-1c0f5879297e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:17:42 np0005486808 nova_compute[259627]: 2025-10-14 09:17:42.836 2 DEBUG nova.objects.instance [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'migration_context' on Instance uuid 22c6f034-c238-499b-8b07-1c0f5879297e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:17:42 np0005486808 nova_compute[259627]: 2025-10-14 09:17:42.851 2 DEBUG nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:17:42 np0005486808 nova_compute[259627]: 2025-10-14 09:17:42.852 2 DEBUG nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Ensure instance console log exists: /var/lib/nova/instances/22c6f034-c238-499b-8b07-1c0f5879297e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:17:42 np0005486808 nova_compute[259627]: 2025-10-14 09:17:42.852 2 DEBUG oslo_concurrency.lockutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:17:42 np0005486808 nova_compute[259627]: 2025-10-14 09:17:42.853 2 DEBUG oslo_concurrency.lockutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:17:42 np0005486808 nova_compute[259627]: 2025-10-14 09:17:42.853 2 DEBUG oslo_concurrency.lockutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:17:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:17:43 np0005486808 nova_compute[259627]: 2025-10-14 09:17:43.167 2 DEBUG nova.network.neutron [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Successfully created port: 9053aa9d-2747-4b65-9480-c8d5d0c126fc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:17:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:17:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:17:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:17:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:17:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 05:17:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:17:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:17:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:17:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:17:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:17:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 14 05:17:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:17:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:17:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:17:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:17:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:17:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:17:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:17:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:17:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:17:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:17:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:17:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:17:43 np0005486808 nova_compute[259627]: 2025-10-14 09:17:43.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1912: 305 pgs: 305 active+clean; 41 MiB data, 742 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:17:44 np0005486808 nova_compute[259627]: 2025-10-14 09:17:44.672 2 DEBUG nova.network.neutron [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Successfully updated port: 9053aa9d-2747-4b65-9480-c8d5d0c126fc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:17:44 np0005486808 nova_compute[259627]: 2025-10-14 09:17:44.686 2 DEBUG oslo_concurrency.lockutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "refresh_cache-22c6f034-c238-499b-8b07-1c0f5879297e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:17:44 np0005486808 nova_compute[259627]: 2025-10-14 09:17:44.687 2 DEBUG oslo_concurrency.lockutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquired lock "refresh_cache-22c6f034-c238-499b-8b07-1c0f5879297e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:17:44 np0005486808 nova_compute[259627]: 2025-10-14 09:17:44.687 2 DEBUG nova.network.neutron [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:17:44 np0005486808 nova_compute[259627]: 2025-10-14 09:17:44.873 2 DEBUG nova.compute.manager [req-f6d6992b-63f2-4833-8dc6-57c9dfd3ef5d req-6e170eed-6d21-4fa8-a8af-1e7744fe29bd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Received event network-changed-9053aa9d-2747-4b65-9480-c8d5d0c126fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:17:44 np0005486808 nova_compute[259627]: 2025-10-14 09:17:44.874 2 DEBUG nova.compute.manager [req-f6d6992b-63f2-4833-8dc6-57c9dfd3ef5d req-6e170eed-6d21-4fa8-a8af-1e7744fe29bd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Refreshing instance network info cache due to event network-changed-9053aa9d-2747-4b65-9480-c8d5d0c126fc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:17:44 np0005486808 nova_compute[259627]: 2025-10-14 09:17:44.875 2 DEBUG oslo_concurrency.lockutils [req-f6d6992b-63f2-4833-8dc6-57c9dfd3ef5d req-6e170eed-6d21-4fa8-a8af-1e7744fe29bd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-22c6f034-c238-499b-8b07-1c0f5879297e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:17:45 np0005486808 nova_compute[259627]: 2025-10-14 09:17:45.114 2 DEBUG nova.network.neutron [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:17:45 np0005486808 nova_compute[259627]: 2025-10-14 09:17:45.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:17:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1913: 305 pgs: 305 active+clean; 88 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:17:46 np0005486808 nova_compute[259627]: 2025-10-14 09:17:46.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:46 np0005486808 nova_compute[259627]: 2025-10-14 09:17:46.712 2 DEBUG nova.network.neutron [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Updating instance_info_cache with network_info: [{"id": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "address": "fa:16:3e:e4:68:6d", "network": {"id": "6ac21096-7818-4990-8868-baedcdcf8f83", "bridge": "br-int", "label": "tempest-network-smoke--748804487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9053aa9d-27", "ovs_interfaceid": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:17:46 np0005486808 nova_compute[259627]: 2025-10-14 09:17:46.730 2 DEBUG oslo_concurrency.lockutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Releasing lock "refresh_cache-22c6f034-c238-499b-8b07-1c0f5879297e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:17:46 np0005486808 nova_compute[259627]: 2025-10-14 09:17:46.731 2 DEBUG nova.compute.manager [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Instance network_info: |[{"id": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "address": "fa:16:3e:e4:68:6d", "network": {"id": "6ac21096-7818-4990-8868-baedcdcf8f83", "bridge": "br-int", "label": "tempest-network-smoke--748804487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9053aa9d-27", "ovs_interfaceid": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:17:46 np0005486808 nova_compute[259627]: 2025-10-14 09:17:46.732 2 DEBUG oslo_concurrency.lockutils [req-f6d6992b-63f2-4833-8dc6-57c9dfd3ef5d req-6e170eed-6d21-4fa8-a8af-1e7744fe29bd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-22c6f034-c238-499b-8b07-1c0f5879297e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:17:46 np0005486808 nova_compute[259627]: 2025-10-14 09:17:46.732 2 DEBUG nova.network.neutron [req-f6d6992b-63f2-4833-8dc6-57c9dfd3ef5d req-6e170eed-6d21-4fa8-a8af-1e7744fe29bd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Refreshing network info cache for port 9053aa9d-2747-4b65-9480-c8d5d0c126fc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:17:46 np0005486808 nova_compute[259627]: 2025-10-14 09:17:46.736 2 DEBUG nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Start _get_guest_xml network_info=[{"id": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "address": "fa:16:3e:e4:68:6d", "network": {"id": "6ac21096-7818-4990-8868-baedcdcf8f83", "bridge": "br-int", "label": "tempest-network-smoke--748804487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9053aa9d-27", "ovs_interfaceid": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:17:46 np0005486808 nova_compute[259627]: 2025-10-14 09:17:46.742 2 WARNING nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:17:46 np0005486808 nova_compute[259627]: 2025-10-14 09:17:46.747 2 DEBUG nova.virt.libvirt.host [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:17:46 np0005486808 nova_compute[259627]: 2025-10-14 09:17:46.748 2 DEBUG nova.virt.libvirt.host [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:17:46 np0005486808 nova_compute[259627]: 2025-10-14 09:17:46.759 2 DEBUG nova.virt.libvirt.host [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:17:46 np0005486808 nova_compute[259627]: 2025-10-14 09:17:46.759 2 DEBUG nova.virt.libvirt.host [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:17:46 np0005486808 nova_compute[259627]: 2025-10-14 09:17:46.760 2 DEBUG nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:17:46 np0005486808 nova_compute[259627]: 2025-10-14 09:17:46.761 2 DEBUG nova.virt.hardware [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:17:46 np0005486808 nova_compute[259627]: 2025-10-14 09:17:46.761 2 DEBUG nova.virt.hardware [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:17:46 np0005486808 nova_compute[259627]: 2025-10-14 09:17:46.762 2 DEBUG nova.virt.hardware [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:17:46 np0005486808 nova_compute[259627]: 2025-10-14 09:17:46.762 2 DEBUG nova.virt.hardware [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:17:46 np0005486808 nova_compute[259627]: 2025-10-14 09:17:46.763 2 DEBUG nova.virt.hardware [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:17:46 np0005486808 nova_compute[259627]: 2025-10-14 09:17:46.763 2 DEBUG nova.virt.hardware [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:17:46 np0005486808 nova_compute[259627]: 2025-10-14 09:17:46.764 2 DEBUG nova.virt.hardware [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:17:46 np0005486808 nova_compute[259627]: 2025-10-14 09:17:46.764 2 DEBUG nova.virt.hardware [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:17:46 np0005486808 nova_compute[259627]: 2025-10-14 09:17:46.765 2 DEBUG nova.virt.hardware [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:17:46 np0005486808 nova_compute[259627]: 2025-10-14 09:17:46.765 2 DEBUG nova.virt.hardware [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:17:46 np0005486808 nova_compute[259627]: 2025-10-14 09:17:46.765 2 DEBUG nova.virt.hardware [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:17:46 np0005486808 nova_compute[259627]: 2025-10-14 09:17:46.770 2 DEBUG oslo_concurrency.processutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:17:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:17:47 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4021480470' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:17:47 np0005486808 nova_compute[259627]: 2025-10-14 09:17:47.232 2 DEBUG oslo_concurrency.processutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:17:47 np0005486808 nova_compute[259627]: 2025-10-14 09:17:47.260 2 DEBUG nova.storage.rbd_utils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 22c6f034-c238-499b-8b07-1c0f5879297e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:17:47 np0005486808 nova_compute[259627]: 2025-10-14 09:17:47.264 2 DEBUG oslo_concurrency.processutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:17:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:17:47 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/783090329' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:17:47 np0005486808 nova_compute[259627]: 2025-10-14 09:17:47.717 2 DEBUG oslo_concurrency.processutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:17:47 np0005486808 nova_compute[259627]: 2025-10-14 09:17:47.718 2 DEBUG nova.virt.libvirt.vif [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:17:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-620217933',display_name='tempest-TestNetworkBasicOps-server-620217933',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-620217933',id=110,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFmMenfL1sUYkbBrkES3junvmmfxbYlPSkqdfU7GziKOgK8Al6Uo4AuziFU4SUs4TqWyCWrEHu5DJJsxFuoF8JeKv+GpLFEK8bJLhTgJlxj4kUg8oBgVMIWKSTqwaqC6zA==',key_name='tempest-TestNetworkBasicOps-2091857695',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-ol26b3n0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:17:42Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=22c6f034-c238-499b-8b07-1c0f5879297e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "address": "fa:16:3e:e4:68:6d", "network": {"id": "6ac21096-7818-4990-8868-baedcdcf8f83", "bridge": "br-int", "label": "tempest-network-smoke--748804487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9053aa9d-27", "ovs_interfaceid": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:17:47 np0005486808 nova_compute[259627]: 2025-10-14 09:17:47.719 2 DEBUG nova.network.os_vif_util [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "address": "fa:16:3e:e4:68:6d", "network": {"id": "6ac21096-7818-4990-8868-baedcdcf8f83", "bridge": "br-int", "label": "tempest-network-smoke--748804487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9053aa9d-27", "ovs_interfaceid": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:17:47 np0005486808 nova_compute[259627]: 2025-10-14 09:17:47.719 2 DEBUG nova.network.os_vif_util [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:68:6d,bridge_name='br-int',has_traffic_filtering=True,id=9053aa9d-2747-4b65-9480-c8d5d0c126fc,network=Network(6ac21096-7818-4990-8868-baedcdcf8f83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9053aa9d-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:17:47 np0005486808 nova_compute[259627]: 2025-10-14 09:17:47.720 2 DEBUG nova.objects.instance [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'pci_devices' on Instance uuid 22c6f034-c238-499b-8b07-1c0f5879297e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:17:47 np0005486808 nova_compute[259627]: 2025-10-14 09:17:47.737 2 DEBUG nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:17:47 np0005486808 nova_compute[259627]:  <uuid>22c6f034-c238-499b-8b07-1c0f5879297e</uuid>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:  <name>instance-0000006e</name>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:17:47 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:      <nova:name>tempest-TestNetworkBasicOps-server-620217933</nova:name>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:17:46</nova:creationTime>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:17:47 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:        <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:        <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:        <nova:port uuid="9053aa9d-2747-4b65-9480-c8d5d0c126fc">
Oct 14 05:17:47 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:      <entry name="serial">22c6f034-c238-499b-8b07-1c0f5879297e</entry>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:      <entry name="uuid">22c6f034-c238-499b-8b07-1c0f5879297e</entry>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:17:47 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/22c6f034-c238-499b-8b07-1c0f5879297e_disk">
Oct 14 05:17:47 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:17:47 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:17:47 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/22c6f034-c238-499b-8b07-1c0f5879297e_disk.config">
Oct 14 05:17:47 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:17:47 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:17:47 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:e4:68:6d"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:      <target dev="tap9053aa9d-27"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:17:47 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/22c6f034-c238-499b-8b07-1c0f5879297e/console.log" append="off"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:17:47 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:17:47 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:17:47 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:17:47 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:17:47 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:17:47 np0005486808 nova_compute[259627]: 2025-10-14 09:17:47.738 2 DEBUG nova.compute.manager [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Preparing to wait for external event network-vif-plugged-9053aa9d-2747-4b65-9480-c8d5d0c126fc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:17:47 np0005486808 nova_compute[259627]: 2025-10-14 09:17:47.738 2 DEBUG oslo_concurrency.lockutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "22c6f034-c238-499b-8b07-1c0f5879297e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:17:47 np0005486808 nova_compute[259627]: 2025-10-14 09:17:47.738 2 DEBUG oslo_concurrency.lockutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "22c6f034-c238-499b-8b07-1c0f5879297e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:17:47 np0005486808 nova_compute[259627]: 2025-10-14 09:17:47.739 2 DEBUG oslo_concurrency.lockutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "22c6f034-c238-499b-8b07-1c0f5879297e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:17:47 np0005486808 nova_compute[259627]: 2025-10-14 09:17:47.739 2 DEBUG nova.virt.libvirt.vif [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:17:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-620217933',display_name='tempest-TestNetworkBasicOps-server-620217933',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-620217933',id=110,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFmMenfL1sUYkbBrkES3junvmmfxbYlPSkqdfU7GziKOgK8Al6Uo4AuziFU4SUs4TqWyCWrEHu5DJJsxFuoF8JeKv+GpLFEK8bJLhTgJlxj4kUg8oBgVMIWKSTqwaqC6zA==',key_name='tempest-TestNetworkBasicOps-2091857695',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-ol26b3n0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:17:42Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=22c6f034-c238-499b-8b07-1c0f5879297e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "address": "fa:16:3e:e4:68:6d", "network": {"id": "6ac21096-7818-4990-8868-baedcdcf8f83", "bridge": "br-int", "label": "tempest-network-smoke--748804487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9053aa9d-27", "ovs_interfaceid": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:17:47 np0005486808 nova_compute[259627]: 2025-10-14 09:17:47.740 2 DEBUG nova.network.os_vif_util [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "address": "fa:16:3e:e4:68:6d", "network": {"id": "6ac21096-7818-4990-8868-baedcdcf8f83", "bridge": "br-int", "label": "tempest-network-smoke--748804487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9053aa9d-27", "ovs_interfaceid": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:17:47 np0005486808 nova_compute[259627]: 2025-10-14 09:17:47.740 2 DEBUG nova.network.os_vif_util [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:68:6d,bridge_name='br-int',has_traffic_filtering=True,id=9053aa9d-2747-4b65-9480-c8d5d0c126fc,network=Network(6ac21096-7818-4990-8868-baedcdcf8f83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9053aa9d-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:17:47 np0005486808 nova_compute[259627]: 2025-10-14 09:17:47.741 2 DEBUG os_vif [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:68:6d,bridge_name='br-int',has_traffic_filtering=True,id=9053aa9d-2747-4b65-9480-c8d5d0c126fc,network=Network(6ac21096-7818-4990-8868-baedcdcf8f83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9053aa9d-27') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:17:47 np0005486808 nova_compute[259627]: 2025-10-14 09:17:47.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:47 np0005486808 nova_compute[259627]: 2025-10-14 09:17:47.742 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:17:47 np0005486808 nova_compute[259627]: 2025-10-14 09:17:47.742 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:17:47 np0005486808 nova_compute[259627]: 2025-10-14 09:17:47.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:47 np0005486808 nova_compute[259627]: 2025-10-14 09:17:47.744 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9053aa9d-27, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:17:47 np0005486808 nova_compute[259627]: 2025-10-14 09:17:47.745 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9053aa9d-27, col_values=(('external_ids', {'iface-id': '9053aa9d-2747-4b65-9480-c8d5d0c126fc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e4:68:6d', 'vm-uuid': '22c6f034-c238-499b-8b07-1c0f5879297e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:17:47 np0005486808 nova_compute[259627]: 2025-10-14 09:17:47.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:47 np0005486808 NetworkManager[44885]: <info>  [1760433467.7483] manager: (tap9053aa9d-27): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/468)
Oct 14 05:17:47 np0005486808 nova_compute[259627]: 2025-10-14 09:17:47.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:17:47 np0005486808 nova_compute[259627]: 2025-10-14 09:17:47.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:47 np0005486808 nova_compute[259627]: 2025-10-14 09:17:47.756 2 INFO os_vif [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:68:6d,bridge_name='br-int',has_traffic_filtering=True,id=9053aa9d-2747-4b65-9480-c8d5d0c126fc,network=Network(6ac21096-7818-4990-8868-baedcdcf8f83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9053aa9d-27')#033[00m
Oct 14 05:17:47 np0005486808 nova_compute[259627]: 2025-10-14 09:17:47.804 2 DEBUG nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:17:47 np0005486808 nova_compute[259627]: 2025-10-14 09:17:47.805 2 DEBUG nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:17:47 np0005486808 nova_compute[259627]: 2025-10-14 09:17:47.805 2 DEBUG nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No VIF found with MAC fa:16:3e:e4:68:6d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:17:47 np0005486808 nova_compute[259627]: 2025-10-14 09:17:47.805 2 INFO nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Using config drive#033[00m
Oct 14 05:17:47 np0005486808 nova_compute[259627]: 2025-10-14 09:17:47.827 2 DEBUG nova.storage.rbd_utils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 22c6f034-c238-499b-8b07-1c0f5879297e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:17:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:17:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1914: 305 pgs: 305 active+clean; 88 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:17:48 np0005486808 nova_compute[259627]: 2025-10-14 09:17:48.445 2 INFO nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Creating config drive at /var/lib/nova/instances/22c6f034-c238-499b-8b07-1c0f5879297e/disk.config#033[00m
Oct 14 05:17:48 np0005486808 nova_compute[259627]: 2025-10-14 09:17:48.453 2 DEBUG oslo_concurrency.processutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/22c6f034-c238-499b-8b07-1c0f5879297e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx4e71hs_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:17:48 np0005486808 nova_compute[259627]: 2025-10-14 09:17:48.518 2 DEBUG nova.network.neutron [req-f6d6992b-63f2-4833-8dc6-57c9dfd3ef5d req-6e170eed-6d21-4fa8-a8af-1e7744fe29bd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Updated VIF entry in instance network info cache for port 9053aa9d-2747-4b65-9480-c8d5d0c126fc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:17:48 np0005486808 nova_compute[259627]: 2025-10-14 09:17:48.520 2 DEBUG nova.network.neutron [req-f6d6992b-63f2-4833-8dc6-57c9dfd3ef5d req-6e170eed-6d21-4fa8-a8af-1e7744fe29bd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Updating instance_info_cache with network_info: [{"id": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "address": "fa:16:3e:e4:68:6d", "network": {"id": "6ac21096-7818-4990-8868-baedcdcf8f83", "bridge": "br-int", "label": "tempest-network-smoke--748804487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9053aa9d-27", "ovs_interfaceid": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:17:48 np0005486808 nova_compute[259627]: 2025-10-14 09:17:48.539 2 DEBUG oslo_concurrency.lockutils [req-f6d6992b-63f2-4833-8dc6-57c9dfd3ef5d req-6e170eed-6d21-4fa8-a8af-1e7744fe29bd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-22c6f034-c238-499b-8b07-1c0f5879297e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:17:48 np0005486808 nova_compute[259627]: 2025-10-14 09:17:48.630 2 DEBUG oslo_concurrency.processutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/22c6f034-c238-499b-8b07-1c0f5879297e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx4e71hs_" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:17:48 np0005486808 nova_compute[259627]: 2025-10-14 09:17:48.664 2 DEBUG nova.storage.rbd_utils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 22c6f034-c238-499b-8b07-1c0f5879297e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:17:48 np0005486808 nova_compute[259627]: 2025-10-14 09:17:48.668 2 DEBUG oslo_concurrency.processutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/22c6f034-c238-499b-8b07-1c0f5879297e/disk.config 22c6f034-c238-499b-8b07-1c0f5879297e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:17:48 np0005486808 nova_compute[259627]: 2025-10-14 09:17:48.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:48 np0005486808 nova_compute[259627]: 2025-10-14 09:17:48.883 2 DEBUG oslo_concurrency.processutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/22c6f034-c238-499b-8b07-1c0f5879297e/disk.config 22c6f034-c238-499b-8b07-1c0f5879297e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.215s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:17:48 np0005486808 nova_compute[259627]: 2025-10-14 09:17:48.884 2 INFO nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Deleting local config drive /var/lib/nova/instances/22c6f034-c238-499b-8b07-1c0f5879297e/disk.config because it was imported into RBD.#033[00m
Oct 14 05:17:48 np0005486808 kernel: tap9053aa9d-27: entered promiscuous mode
Oct 14 05:17:48 np0005486808 ovn_controller[152662]: 2025-10-14T09:17:48Z|01167|binding|INFO|Claiming lport 9053aa9d-2747-4b65-9480-c8d5d0c126fc for this chassis.
Oct 14 05:17:48 np0005486808 ovn_controller[152662]: 2025-10-14T09:17:48Z|01168|binding|INFO|9053aa9d-2747-4b65-9480-c8d5d0c126fc: Claiming fa:16:3e:e4:68:6d 10.100.0.12
Oct 14 05:17:48 np0005486808 nova_compute[259627]: 2025-10-14 09:17:48.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:48 np0005486808 NetworkManager[44885]: <info>  [1760433468.9997] manager: (tap9053aa9d-27): new Tun device (/org/freedesktop/NetworkManager/Devices/469)
Oct 14 05:17:49 np0005486808 nova_compute[259627]: 2025-10-14 09:17:49.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.023 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:68:6d 10.100.0.12'], port_security=['fa:16:3e:e4:68:6d 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '22c6f034-c238-499b-8b07-1c0f5879297e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ac21096-7818-4990-8868-baedcdcf8f83', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f54993ea-ba9b-4930-8e05-f68e5e27d0b7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8cda2976-3075-437c-b2bd-7362e360206b, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=9053aa9d-2747-4b65-9480-c8d5d0c126fc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.026 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 9053aa9d-2747-4b65-9480-c8d5d0c126fc in datapath 6ac21096-7818-4990-8868-baedcdcf8f83 bound to our chassis#033[00m
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.028 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6ac21096-7818-4990-8868-baedcdcf8f83#033[00m
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.047 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5c54d5f1-3de7-4aab-b4d9-111755d5b0cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.048 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6ac21096-71 in ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.052 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6ac21096-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.052 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d0a90082-02cc-46ec-887e-42daa6a69f6d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.054 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[29b97d4f-9ab2-4d0f-8a44-84d6d3336f98]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:49 np0005486808 systemd-machined[214636]: New machine qemu-140-instance-0000006e.
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.074 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[f45636f4-6a8b-45ac-854b-d5c69b71f505]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:49 np0005486808 systemd[1]: Started Virtual Machine qemu-140-instance-0000006e.
Oct 14 05:17:49 np0005486808 nova_compute[259627]: 2025-10-14 09:17:49.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:49 np0005486808 ovn_controller[152662]: 2025-10-14T09:17:49Z|01169|binding|INFO|Setting lport 9053aa9d-2747-4b65-9480-c8d5d0c126fc ovn-installed in OVS
Oct 14 05:17:49 np0005486808 ovn_controller[152662]: 2025-10-14T09:17:49Z|01170|binding|INFO|Setting lport 9053aa9d-2747-4b65-9480-c8d5d0c126fc up in Southbound
Oct 14 05:17:49 np0005486808 nova_compute[259627]: 2025-10-14 09:17:49.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:49 np0005486808 systemd-udevd[368554]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.104 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a7c13ef9-2935-4ce7-88fb-f3a935cff89a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:49 np0005486808 NetworkManager[44885]: <info>  [1760433469.1351] device (tap9053aa9d-27): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:17:49 np0005486808 NetworkManager[44885]: <info>  [1760433469.1372] device (tap9053aa9d-27): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.156 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[36230e10-b75f-4dde-b549-c609c301a896]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.165 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7813270a-5662-46e0-8c6e-9a8f567b81e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:49 np0005486808 NetworkManager[44885]: <info>  [1760433469.1667] manager: (tap6ac21096-70): new Veth device (/org/freedesktop/NetworkManager/Devices/470)
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.219 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3dee8b89-d856-4cd3-a2f5-0ec1f1cd6c13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.225 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d6e9b639-16f8-44b9-8590-453887d25b8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:49 np0005486808 NetworkManager[44885]: <info>  [1760433469.2660] device (tap6ac21096-70): carrier: link connected
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.273 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a9592ac3-43bf-43c3-9281-2fde5c853419]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.291 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a4408c64-63c7-48fa-b07d-0070d85af943]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6ac21096-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:c8:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 334], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 724802, 'reachable_time': 41515, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 368584, 'error': None, 'target': 'ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.312 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2b3db162-3624-47bd-aed6-82637939117c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe79:c8fb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 724802, 'tstamp': 724802}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 368585, 'error': None, 'target': 'ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.331 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3f68c23b-c92d-4f45-a99b-0ecfd2832d01]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6ac21096-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:c8:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 334], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 724802, 'reachable_time': 41515, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 368586, 'error': None, 'target': 'ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.379 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2846fc17-ebcc-4e32-8748-7d862192ad5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.457 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[04c6c183-6a5f-474b-b517-ad213094e569]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.459 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ac21096-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.459 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.460 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6ac21096-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:17:49 np0005486808 nova_compute[259627]: 2025-10-14 09:17:49.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:49 np0005486808 NetworkManager[44885]: <info>  [1760433469.5097] manager: (tap6ac21096-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/471)
Oct 14 05:17:49 np0005486808 kernel: tap6ac21096-70: entered promiscuous mode
Oct 14 05:17:49 np0005486808 nova_compute[259627]: 2025-10-14 09:17:49.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.515 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6ac21096-70, col_values=(('external_ids', {'iface-id': '437783e2-5dd6-4d07-bfc8-e4c0a4808267'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:17:49 np0005486808 nova_compute[259627]: 2025-10-14 09:17:49.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:49 np0005486808 ovn_controller[152662]: 2025-10-14T09:17:49Z|01171|binding|INFO|Releasing lport 437783e2-5dd6-4d07-bfc8-e4c0a4808267 from this chassis (sb_readonly=0)
Oct 14 05:17:49 np0005486808 nova_compute[259627]: 2025-10-14 09:17:49.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.519 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6ac21096-7818-4990-8868-baedcdcf8f83.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6ac21096-7818-4990-8868-baedcdcf8f83.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.520 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[23b3f1f8-d43b-4718-8e16-20cc6892e760]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.522 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-6ac21096-7818-4990-8868-baedcdcf8f83
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/6ac21096-7818-4990-8868-baedcdcf8f83.pid.haproxy
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 6ac21096-7818-4990-8868-baedcdcf8f83
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:17:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:17:49.523 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83', 'env', 'PROCESS_TAG=haproxy-6ac21096-7818-4990-8868-baedcdcf8f83', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6ac21096-7818-4990-8868-baedcdcf8f83.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:17:49 np0005486808 nova_compute[259627]: 2025-10-14 09:17:49.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:49 np0005486808 nova_compute[259627]: 2025-10-14 09:17:49.714 2 DEBUG nova.compute.manager [req-5a18e091-b5d8-4800-89af-b7e5f28c44b8 req-a364d7be-52c8-462d-ada2-2eaf7b8b2979 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Received event network-vif-plugged-9053aa9d-2747-4b65-9480-c8d5d0c126fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:17:49 np0005486808 nova_compute[259627]: 2025-10-14 09:17:49.722 2 DEBUG oslo_concurrency.lockutils [req-5a18e091-b5d8-4800-89af-b7e5f28c44b8 req-a364d7be-52c8-462d-ada2-2eaf7b8b2979 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "22c6f034-c238-499b-8b07-1c0f5879297e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:17:49 np0005486808 nova_compute[259627]: 2025-10-14 09:17:49.723 2 DEBUG oslo_concurrency.lockutils [req-5a18e091-b5d8-4800-89af-b7e5f28c44b8 req-a364d7be-52c8-462d-ada2-2eaf7b8b2979 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "22c6f034-c238-499b-8b07-1c0f5879297e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:17:49 np0005486808 nova_compute[259627]: 2025-10-14 09:17:49.723 2 DEBUG oslo_concurrency.lockutils [req-5a18e091-b5d8-4800-89af-b7e5f28c44b8 req-a364d7be-52c8-462d-ada2-2eaf7b8b2979 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "22c6f034-c238-499b-8b07-1c0f5879297e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:17:49 np0005486808 nova_compute[259627]: 2025-10-14 09:17:49.723 2 DEBUG nova.compute.manager [req-5a18e091-b5d8-4800-89af-b7e5f28c44b8 req-a364d7be-52c8-462d-ada2-2eaf7b8b2979 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Processing event network-vif-plugged-9053aa9d-2747-4b65-9480-c8d5d0c126fc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:17:49 np0005486808 podman[368661]: 2025-10-14 09:17:49.976708716 +0000 UTC m=+0.088721257 container create 6ffd7eddbb766ef2daa7451e6ebb8fd90eea70407a8a4801d01c5517402425cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:17:50 np0005486808 podman[368661]: 2025-10-14 09:17:49.933391259 +0000 UTC m=+0.045403800 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:17:50 np0005486808 systemd[1]: Started libpod-conmon-6ffd7eddbb766ef2daa7451e6ebb8fd90eea70407a8a4801d01c5517402425cd.scope.
Oct 14 05:17:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1915: 305 pgs: 305 active+clean; 88 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 14 05:17:50 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:17:50 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1f2cda8d9a484751f468c0a5f14364c4cc3b93a764e75f80052eecd6130f987/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:17:50 np0005486808 podman[368661]: 2025-10-14 09:17:50.079239483 +0000 UTC m=+0.191252034 container init 6ffd7eddbb766ef2daa7451e6ebb8fd90eea70407a8a4801d01c5517402425cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 05:17:50 np0005486808 podman[368661]: 2025-10-14 09:17:50.091478905 +0000 UTC m=+0.203491446 container start 6ffd7eddbb766ef2daa7451e6ebb8fd90eea70407a8a4801d01c5517402425cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 14 05:17:50 np0005486808 neutron-haproxy-ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83[368676]: [NOTICE]   (368680) : New worker (368682) forked
Oct 14 05:17:50 np0005486808 neutron-haproxy-ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83[368676]: [NOTICE]   (368680) : Loading success.
Oct 14 05:17:50 np0005486808 nova_compute[259627]: 2025-10-14 09:17:50.240 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433470.2389555, 22c6f034-c238-499b-8b07-1c0f5879297e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:17:50 np0005486808 nova_compute[259627]: 2025-10-14 09:17:50.241 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] VM Started (Lifecycle Event)#033[00m
Oct 14 05:17:50 np0005486808 nova_compute[259627]: 2025-10-14 09:17:50.244 2 DEBUG nova.compute.manager [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:17:50 np0005486808 nova_compute[259627]: 2025-10-14 09:17:50.247 2 DEBUG nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:17:50 np0005486808 nova_compute[259627]: 2025-10-14 09:17:50.251 2 INFO nova.virt.libvirt.driver [-] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Instance spawned successfully.#033[00m
Oct 14 05:17:50 np0005486808 nova_compute[259627]: 2025-10-14 09:17:50.251 2 DEBUG nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:17:50 np0005486808 nova_compute[259627]: 2025-10-14 09:17:50.271 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:17:50 np0005486808 nova_compute[259627]: 2025-10-14 09:17:50.277 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:17:50 np0005486808 nova_compute[259627]: 2025-10-14 09:17:50.281 2 DEBUG nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:17:50 np0005486808 nova_compute[259627]: 2025-10-14 09:17:50.282 2 DEBUG nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:17:50 np0005486808 nova_compute[259627]: 2025-10-14 09:17:50.283 2 DEBUG nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:17:50 np0005486808 nova_compute[259627]: 2025-10-14 09:17:50.283 2 DEBUG nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:17:50 np0005486808 nova_compute[259627]: 2025-10-14 09:17:50.284 2 DEBUG nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:17:50 np0005486808 nova_compute[259627]: 2025-10-14 09:17:50.284 2 DEBUG nova.virt.libvirt.driver [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:17:50 np0005486808 nova_compute[259627]: 2025-10-14 09:17:50.311 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:17:50 np0005486808 nova_compute[259627]: 2025-10-14 09:17:50.311 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433470.2391984, 22c6f034-c238-499b-8b07-1c0f5879297e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:17:50 np0005486808 nova_compute[259627]: 2025-10-14 09:17:50.312 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:17:50 np0005486808 nova_compute[259627]: 2025-10-14 09:17:50.347 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:17:50 np0005486808 nova_compute[259627]: 2025-10-14 09:17:50.350 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433470.2468646, 22c6f034-c238-499b-8b07-1c0f5879297e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:17:50 np0005486808 nova_compute[259627]: 2025-10-14 09:17:50.351 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:17:50 np0005486808 nova_compute[259627]: 2025-10-14 09:17:50.362 2 INFO nova.compute.manager [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Took 8.24 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:17:50 np0005486808 nova_compute[259627]: 2025-10-14 09:17:50.362 2 DEBUG nova.compute.manager [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:17:50 np0005486808 nova_compute[259627]: 2025-10-14 09:17:50.369 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:17:50 np0005486808 nova_compute[259627]: 2025-10-14 09:17:50.372 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:17:50 np0005486808 nova_compute[259627]: 2025-10-14 09:17:50.401 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:17:50 np0005486808 nova_compute[259627]: 2025-10-14 09:17:50.428 2 INFO nova.compute.manager [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Took 9.31 seconds to build instance.#033[00m
Oct 14 05:17:50 np0005486808 nova_compute[259627]: 2025-10-14 09:17:50.444 2 DEBUG oslo_concurrency.lockutils [None req-fec8a0cf-8df9-45d0-b72a-dcb7b0172666 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "22c6f034-c238-499b-8b07-1c0f5879297e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.392s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:17:51 np0005486808 nova_compute[259627]: 2025-10-14 09:17:51.914 2 DEBUG nova.compute.manager [req-a0eb4b14-6558-4e5a-936b-31c3b26c17c8 req-b12c4326-dc1b-4ac7-8d0f-82f17d4abd38 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Received event network-vif-plugged-9053aa9d-2747-4b65-9480-c8d5d0c126fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:17:51 np0005486808 nova_compute[259627]: 2025-10-14 09:17:51.915 2 DEBUG oslo_concurrency.lockutils [req-a0eb4b14-6558-4e5a-936b-31c3b26c17c8 req-b12c4326-dc1b-4ac7-8d0f-82f17d4abd38 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "22c6f034-c238-499b-8b07-1c0f5879297e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:17:51 np0005486808 nova_compute[259627]: 2025-10-14 09:17:51.915 2 DEBUG oslo_concurrency.lockutils [req-a0eb4b14-6558-4e5a-936b-31c3b26c17c8 req-b12c4326-dc1b-4ac7-8d0f-82f17d4abd38 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "22c6f034-c238-499b-8b07-1c0f5879297e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:17:51 np0005486808 nova_compute[259627]: 2025-10-14 09:17:51.916 2 DEBUG oslo_concurrency.lockutils [req-a0eb4b14-6558-4e5a-936b-31c3b26c17c8 req-b12c4326-dc1b-4ac7-8d0f-82f17d4abd38 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "22c6f034-c238-499b-8b07-1c0f5879297e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:17:51 np0005486808 nova_compute[259627]: 2025-10-14 09:17:51.916 2 DEBUG nova.compute.manager [req-a0eb4b14-6558-4e5a-936b-31c3b26c17c8 req-b12c4326-dc1b-4ac7-8d0f-82f17d4abd38 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] No waiting events found dispatching network-vif-plugged-9053aa9d-2747-4b65-9480-c8d5d0c126fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:17:51 np0005486808 nova_compute[259627]: 2025-10-14 09:17:51.916 2 WARNING nova.compute.manager [req-a0eb4b14-6558-4e5a-936b-31c3b26c17c8 req-b12c4326-dc1b-4ac7-8d0f-82f17d4abd38 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Received unexpected event network-vif-plugged-9053aa9d-2747-4b65-9480-c8d5d0c126fc for instance with vm_state active and task_state None.#033[00m
Oct 14 05:17:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1916: 305 pgs: 305 active+clean; 88 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 14 05:17:52 np0005486808 nova_compute[259627]: 2025-10-14 09:17:52.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:17:53 np0005486808 podman[368691]: 2025-10-14 09:17:53.658688164 +0000 UTC m=+0.074724163 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:17:53 np0005486808 podman[368692]: 2025-10-14 09:17:53.721728037 +0000 UTC m=+0.122891829 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 14 05:17:53 np0005486808 nova_compute[259627]: 2025-10-14 09:17:53.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1917: 305 pgs: 305 active+clean; 88 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 14 05:17:54 np0005486808 nova_compute[259627]: 2025-10-14 09:17:54.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:54 np0005486808 NetworkManager[44885]: <info>  [1760433474.1756] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/472)
Oct 14 05:17:54 np0005486808 NetworkManager[44885]: <info>  [1760433474.1763] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/473)
Oct 14 05:17:54 np0005486808 ovn_controller[152662]: 2025-10-14T09:17:54Z|01172|binding|INFO|Releasing lport 437783e2-5dd6-4d07-bfc8-e4c0a4808267 from this chassis (sb_readonly=0)
Oct 14 05:17:54 np0005486808 ovn_controller[152662]: 2025-10-14T09:17:54Z|01173|binding|INFO|Releasing lport 437783e2-5dd6-4d07-bfc8-e4c0a4808267 from this chassis (sb_readonly=0)
Oct 14 05:17:54 np0005486808 nova_compute[259627]: 2025-10-14 09:17:54.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:54 np0005486808 nova_compute[259627]: 2025-10-14 09:17:54.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:54 np0005486808 nova_compute[259627]: 2025-10-14 09:17:54.680 2 DEBUG nova.compute.manager [req-fff71192-acbe-443d-8baf-6d1d9f0b47f2 req-a21e2d6f-4328-4915-9ced-0b3baff890ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Received event network-changed-9053aa9d-2747-4b65-9480-c8d5d0c126fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:17:54 np0005486808 nova_compute[259627]: 2025-10-14 09:17:54.681 2 DEBUG nova.compute.manager [req-fff71192-acbe-443d-8baf-6d1d9f0b47f2 req-a21e2d6f-4328-4915-9ced-0b3baff890ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Refreshing instance network info cache due to event network-changed-9053aa9d-2747-4b65-9480-c8d5d0c126fc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:17:54 np0005486808 nova_compute[259627]: 2025-10-14 09:17:54.682 2 DEBUG oslo_concurrency.lockutils [req-fff71192-acbe-443d-8baf-6d1d9f0b47f2 req-a21e2d6f-4328-4915-9ced-0b3baff890ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-22c6f034-c238-499b-8b07-1c0f5879297e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:17:54 np0005486808 nova_compute[259627]: 2025-10-14 09:17:54.682 2 DEBUG oslo_concurrency.lockutils [req-fff71192-acbe-443d-8baf-6d1d9f0b47f2 req-a21e2d6f-4328-4915-9ced-0b3baff890ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-22c6f034-c238-499b-8b07-1c0f5879297e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:17:54 np0005486808 nova_compute[259627]: 2025-10-14 09:17:54.683 2 DEBUG nova.network.neutron [req-fff71192-acbe-443d-8baf-6d1d9f0b47f2 req-a21e2d6f-4328-4915-9ced-0b3baff890ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Refreshing network info cache for port 9053aa9d-2747-4b65-9480-c8d5d0c126fc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:17:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1918: 305 pgs: 305 active+clean; 88 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 14 05:17:56 np0005486808 nova_compute[259627]: 2025-10-14 09:17:56.726 2 DEBUG nova.network.neutron [req-fff71192-acbe-443d-8baf-6d1d9f0b47f2 req-a21e2d6f-4328-4915-9ced-0b3baff890ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Updated VIF entry in instance network info cache for port 9053aa9d-2747-4b65-9480-c8d5d0c126fc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:17:56 np0005486808 nova_compute[259627]: 2025-10-14 09:17:56.727 2 DEBUG nova.network.neutron [req-fff71192-acbe-443d-8baf-6d1d9f0b47f2 req-a21e2d6f-4328-4915-9ced-0b3baff890ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Updating instance_info_cache with network_info: [{"id": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "address": "fa:16:3e:e4:68:6d", "network": {"id": "6ac21096-7818-4990-8868-baedcdcf8f83", "bridge": "br-int", "label": "tempest-network-smoke--748804487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9053aa9d-27", "ovs_interfaceid": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:17:56 np0005486808 nova_compute[259627]: 2025-10-14 09:17:56.759 2 DEBUG oslo_concurrency.lockutils [req-fff71192-acbe-443d-8baf-6d1d9f0b47f2 req-a21e2d6f-4328-4915-9ced-0b3baff890ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-22c6f034-c238-499b-8b07-1c0f5879297e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:17:57 np0005486808 nova_compute[259627]: 2025-10-14 09:17:57.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:17:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1919: 305 pgs: 305 active+clean; 88 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 05:17:58 np0005486808 nova_compute[259627]: 2025-10-14 09:17:58.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:17:58 np0005486808 nova_compute[259627]: 2025-10-14 09:17:58.972 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:18:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1920: 305 pgs: 305 active+clean; 88 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 05:18:01 np0005486808 ovn_controller[152662]: 2025-10-14T09:18:01Z|00114|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e4:68:6d 10.100.0.12
Oct 14 05:18:01 np0005486808 ovn_controller[152662]: 2025-10-14T09:18:01Z|00115|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e4:68:6d 10.100.0.12
Oct 14 05:18:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1921: 305 pgs: 305 active+clean; 88 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 170 B/s wr, 72 op/s
Oct 14 05:18:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:02.037 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:18:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:02.038 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 14 05:18:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:02.041 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:18:02 np0005486808 nova_compute[259627]: 2025-10-14 09:18:02.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:18:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:18:02 np0005486808 nova_compute[259627]: 2025-10-14 09:18:02.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:18:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:18:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:18:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:18:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:18:03 np0005486808 nova_compute[259627]: 2025-10-14 09:18:03.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1922: 305 pgs: 305 active+clean; 88 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Oct 14 05:18:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:18:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1859659764' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:18:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:18:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1859659764' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:18:05 np0005486808 podman[368731]: 2025-10-14 09:18:05.68475457 +0000 UTC m=+0.082234737 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct 14 05:18:05 np0005486808 podman[368730]: 2025-10-14 09:18:05.705617495 +0000 UTC m=+0.114451042 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 14 05:18:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1923: 305 pgs: 305 active+clean; 121 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Oct 14 05:18:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:07.033 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:18:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:07.033 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:18:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:07.034 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:18:07 np0005486808 nova_compute[259627]: 2025-10-14 09:18:07.594 2 INFO nova.compute.manager [None req-4fd78700-5112-4910-8122-da67fa1bbd10 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Get console output#033[00m
Oct 14 05:18:07 np0005486808 nova_compute[259627]: 2025-10-14 09:18:07.600 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 14 05:18:07 np0005486808 nova_compute[259627]: 2025-10-14 09:18:07.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:18:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1924: 305 pgs: 305 active+clean; 121 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 385 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 05:18:08 np0005486808 nova_compute[259627]: 2025-10-14 09:18:08.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:18:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1925: 305 pgs: 305 active+clean; 121 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 386 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 05:18:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1926: 305 pgs: 305 active+clean; 121 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 14 05:18:12 np0005486808 nova_compute[259627]: 2025-10-14 09:18:12.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:18:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:18:13 np0005486808 nova_compute[259627]: 2025-10-14 09:18:13.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:18:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1927: 305 pgs: 305 active+clean; 121 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 388 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 05:18:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1928: 305 pgs: 305 active+clean; 121 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 388 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 14 05:18:17 np0005486808 nova_compute[259627]: 2025-10-14 09:18:17.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:18:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:18:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1929: 305 pgs: 305 active+clean; 121 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 6.0 KiB/s rd, 16 KiB/s wr, 1 op/s
Oct 14 05:18:18 np0005486808 nova_compute[259627]: 2025-10-14 09:18:18.750 2 DEBUG oslo_concurrency.lockutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "78ff2018-e6dc-4337-8a74-90e5a3963a12" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:18:18 np0005486808 nova_compute[259627]: 2025-10-14 09:18:18.750 2 DEBUG oslo_concurrency.lockutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "78ff2018-e6dc-4337-8a74-90e5a3963a12" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:18:18 np0005486808 nova_compute[259627]: 2025-10-14 09:18:18.773 2 DEBUG nova.compute.manager [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 05:18:18 np0005486808 nova_compute[259627]: 2025-10-14 09:18:18.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:18:18 np0005486808 nova_compute[259627]: 2025-10-14 09:18:18.877 2 DEBUG oslo_concurrency.lockutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:18:18 np0005486808 nova_compute[259627]: 2025-10-14 09:18:18.878 2 DEBUG oslo_concurrency.lockutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:18:18 np0005486808 nova_compute[259627]: 2025-10-14 09:18:18.887 2 DEBUG nova.virt.hardware [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 05:18:18 np0005486808 nova_compute[259627]: 2025-10-14 09:18:18.888 2 INFO nova.compute.claims [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Claim successful on node compute-0.ctlplane.example.com
Oct 14 05:18:19 np0005486808 nova_compute[259627]: 2025-10-14 09:18:19.016 2 DEBUG oslo_concurrency.processutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:18:19 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:18:19 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2554790416' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:18:19 np0005486808 nova_compute[259627]: 2025-10-14 09:18:19.513 2 DEBUG oslo_concurrency.processutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:18:19 np0005486808 nova_compute[259627]: 2025-10-14 09:18:19.522 2 DEBUG nova.compute.provider_tree [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 05:18:19 np0005486808 nova_compute[259627]: 2025-10-14 09:18:19.544 2 DEBUG nova.scheduler.client.report [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 05:18:19 np0005486808 nova_compute[259627]: 2025-10-14 09:18:19.571 2 DEBUG oslo_concurrency.lockutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:18:19 np0005486808 nova_compute[259627]: 2025-10-14 09:18:19.572 2 DEBUG nova.compute.manager [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 05:18:19 np0005486808 nova_compute[259627]: 2025-10-14 09:18:19.626 2 DEBUG nova.compute.manager [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 05:18:19 np0005486808 nova_compute[259627]: 2025-10-14 09:18:19.626 2 DEBUG nova.network.neutron [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 05:18:19 np0005486808 nova_compute[259627]: 2025-10-14 09:18:19.649 2 INFO nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 05:18:19 np0005486808 nova_compute[259627]: 2025-10-14 09:18:19.667 2 DEBUG nova.compute.manager [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 05:18:19 np0005486808 nova_compute[259627]: 2025-10-14 09:18:19.756 2 DEBUG nova.compute.manager [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 05:18:19 np0005486808 nova_compute[259627]: 2025-10-14 09:18:19.758 2 DEBUG nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 05:18:19 np0005486808 nova_compute[259627]: 2025-10-14 09:18:19.758 2 INFO nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Creating image(s)
Oct 14 05:18:19 np0005486808 nova_compute[259627]: 2025-10-14 09:18:19.788 2 DEBUG nova.storage.rbd_utils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 78ff2018-e6dc-4337-8a74-90e5a3963a12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:18:19 np0005486808 nova_compute[259627]: 2025-10-14 09:18:19.825 2 DEBUG nova.storage.rbd_utils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 78ff2018-e6dc-4337-8a74-90e5a3963a12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:18:19 np0005486808 nova_compute[259627]: 2025-10-14 09:18:19.860 2 DEBUG nova.storage.rbd_utils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 78ff2018-e6dc-4337-8a74-90e5a3963a12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:18:19 np0005486808 nova_compute[259627]: 2025-10-14 09:18:19.865 2 DEBUG oslo_concurrency.processutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:18:19 np0005486808 nova_compute[259627]: 2025-10-14 09:18:19.971 2 DEBUG oslo_concurrency.processutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:18:19 np0005486808 nova_compute[259627]: 2025-10-14 09:18:19.971 2 DEBUG oslo_concurrency.lockutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:18:19 np0005486808 nova_compute[259627]: 2025-10-14 09:18:19.972 2 DEBUG oslo_concurrency.lockutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:18:19 np0005486808 nova_compute[259627]: 2025-10-14 09:18:19.972 2 DEBUG oslo_concurrency.lockutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:18:20 np0005486808 nova_compute[259627]: 2025-10-14 09:18:20.000 2 DEBUG nova.storage.rbd_utils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 78ff2018-e6dc-4337-8a74-90e5a3963a12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:18:20 np0005486808 nova_compute[259627]: 2025-10-14 09:18:20.004 2 DEBUG oslo_concurrency.processutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 78ff2018-e6dc-4337-8a74-90e5a3963a12_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:18:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1930: 305 pgs: 305 active+clean; 121 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 6.0 KiB/s rd, 16 KiB/s wr, 1 op/s
Oct 14 05:18:20 np0005486808 nova_compute[259627]: 2025-10-14 09:18:20.110 2 DEBUG nova.policy [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '10926c27278e45e7b995f2e53b9d16f9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4562699bdb1548f1bb36819107535620', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 05:18:20 np0005486808 nova_compute[259627]: 2025-10-14 09:18:20.338 2 DEBUG oslo_concurrency.processutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 78ff2018-e6dc-4337-8a74-90e5a3963a12_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:18:20 np0005486808 nova_compute[259627]: 2025-10-14 09:18:20.426 2 DEBUG nova.storage.rbd_utils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] resizing rbd image 78ff2018-e6dc-4337-8a74-90e5a3963a12_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 05:18:20 np0005486808 nova_compute[259627]: 2025-10-14 09:18:20.525 2 DEBUG nova.objects.instance [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'migration_context' on Instance uuid 78ff2018-e6dc-4337-8a74-90e5a3963a12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:18:20 np0005486808 nova_compute[259627]: 2025-10-14 09:18:20.541 2 DEBUG nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 05:18:20 np0005486808 nova_compute[259627]: 2025-10-14 09:18:20.541 2 DEBUG nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Ensure instance console log exists: /var/lib/nova/instances/78ff2018-e6dc-4337-8a74-90e5a3963a12/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 05:18:20 np0005486808 nova_compute[259627]: 2025-10-14 09:18:20.542 2 DEBUG oslo_concurrency.lockutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:18:20 np0005486808 nova_compute[259627]: 2025-10-14 09:18:20.542 2 DEBUG oslo_concurrency.lockutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:18:20 np0005486808 nova_compute[259627]: 2025-10-14 09:18:20.542 2 DEBUG oslo_concurrency.lockutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:18:20 np0005486808 nova_compute[259627]: 2025-10-14 09:18:20.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:18:20 np0005486808 nova_compute[259627]: 2025-10-14 09:18:20.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 14 05:18:21 np0005486808 nova_compute[259627]: 2025-10-14 09:18:21.483 2 DEBUG nova.network.neutron [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Successfully created port: da41936a-3d81-49ed-9021-22d56f07b75b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 05:18:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1931: 305 pgs: 305 active+clean; 142 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 6.2 KiB/s rd, 747 KiB/s wr, 3 op/s
Oct 14 05:18:22 np0005486808 nova_compute[259627]: 2025-10-14 09:18:22.424 2 DEBUG nova.network.neutron [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Successfully updated port: da41936a-3d81-49ed-9021-22d56f07b75b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 05:18:22 np0005486808 nova_compute[259627]: 2025-10-14 09:18:22.444 2 DEBUG oslo_concurrency.lockutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "refresh_cache-78ff2018-e6dc-4337-8a74-90e5a3963a12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 05:18:22 np0005486808 nova_compute[259627]: 2025-10-14 09:18:22.444 2 DEBUG oslo_concurrency.lockutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquired lock "refresh_cache-78ff2018-e6dc-4337-8a74-90e5a3963a12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 05:18:22 np0005486808 nova_compute[259627]: 2025-10-14 09:18:22.445 2 DEBUG nova.network.neutron [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 05:18:22 np0005486808 nova_compute[259627]: 2025-10-14 09:18:22.650 2 DEBUG nova.compute.manager [req-61e0a4f4-7b58-46e8-b5d6-b8273450092e req-47a48a26-1648-4480-8dd3-5f192494cd94 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Received event network-changed-da41936a-3d81-49ed-9021-22d56f07b75b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:18:22 np0005486808 nova_compute[259627]: 2025-10-14 09:18:22.650 2 DEBUG nova.compute.manager [req-61e0a4f4-7b58-46e8-b5d6-b8273450092e req-47a48a26-1648-4480-8dd3-5f192494cd94 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Refreshing instance network info cache due to event network-changed-da41936a-3d81-49ed-9021-22d56f07b75b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 05:18:22 np0005486808 nova_compute[259627]: 2025-10-14 09:18:22.651 2 DEBUG oslo_concurrency.lockutils [req-61e0a4f4-7b58-46e8-b5d6-b8273450092e req-47a48a26-1648-4480-8dd3-5f192494cd94 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-78ff2018-e6dc-4337-8a74-90e5a3963a12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 05:18:22 np0005486808 nova_compute[259627]: 2025-10-14 09:18:22.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:18:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:18:22 np0005486808 nova_compute[259627]: 2025-10-14 09:18:22.901 2 DEBUG nova.network.neutron [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 05:18:23 np0005486808 nova_compute[259627]: 2025-10-14 09:18:23.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:18:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1932: 305 pgs: 305 active+clean; 142 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 596 B/s rd, 735 KiB/s wr, 2 op/s
Oct 14 05:18:24 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:18:24 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:18:24 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:18:24 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:18:24 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:18:24 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:18:24 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 3b8bc598-dff9-4cab-9ae9-a5a146209f30 does not exist
Oct 14 05:18:24 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev cdbdb531-5ee3-409e-bb73-d08609da5c5b does not exist
Oct 14 05:18:24 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev b9dd1b44-bda7-4576-b952-e78b34446bc3 does not exist
Oct 14 05:18:24 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:18:24 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:18:24 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:18:24 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:18:24 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:18:24 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:18:24 np0005486808 podman[369121]: 2025-10-14 09:18:24.316211468 +0000 UTC m=+0.087901508 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:18:24 np0005486808 podman[369122]: 2025-10-14 09:18:24.33943574 +0000 UTC m=+0.111510089 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid)
Oct 14 05:18:24 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:18:24 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:18:24 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:18:24 np0005486808 nova_compute[259627]: 2025-10-14 09:18:24.542 2 DEBUG nova.network.neutron [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Updating instance_info_cache with network_info: [{"id": "da41936a-3d81-49ed-9021-22d56f07b75b", "address": "fa:16:3e:44:0a:a6", "network": {"id": "f71257d1-6873-4759-898a-ce32e06e2fe5", "bridge": "br-int", "label": "tempest-network-smoke--1895773891", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda41936a-3d", "ovs_interfaceid": "da41936a-3d81-49ed-9021-22d56f07b75b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:18:24 np0005486808 nova_compute[259627]: 2025-10-14 09:18:24.561 2 DEBUG oslo_concurrency.lockutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Releasing lock "refresh_cache-78ff2018-e6dc-4337-8a74-90e5a3963a12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:18:24 np0005486808 nova_compute[259627]: 2025-10-14 09:18:24.562 2 DEBUG nova.compute.manager [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Instance network_info: |[{"id": "da41936a-3d81-49ed-9021-22d56f07b75b", "address": "fa:16:3e:44:0a:a6", "network": {"id": "f71257d1-6873-4759-898a-ce32e06e2fe5", "bridge": "br-int", "label": "tempest-network-smoke--1895773891", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda41936a-3d", "ovs_interfaceid": "da41936a-3d81-49ed-9021-22d56f07b75b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:18:24 np0005486808 nova_compute[259627]: 2025-10-14 09:18:24.563 2 DEBUG oslo_concurrency.lockutils [req-61e0a4f4-7b58-46e8-b5d6-b8273450092e req-47a48a26-1648-4480-8dd3-5f192494cd94 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-78ff2018-e6dc-4337-8a74-90e5a3963a12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:18:24 np0005486808 nova_compute[259627]: 2025-10-14 09:18:24.563 2 DEBUG nova.network.neutron [req-61e0a4f4-7b58-46e8-b5d6-b8273450092e req-47a48a26-1648-4480-8dd3-5f192494cd94 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Refreshing network info cache for port da41936a-3d81-49ed-9021-22d56f07b75b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:18:24 np0005486808 nova_compute[259627]: 2025-10-14 09:18:24.569 2 DEBUG nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Start _get_guest_xml network_info=[{"id": "da41936a-3d81-49ed-9021-22d56f07b75b", "address": "fa:16:3e:44:0a:a6", "network": {"id": "f71257d1-6873-4759-898a-ce32e06e2fe5", "bridge": "br-int", "label": "tempest-network-smoke--1895773891", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda41936a-3d", "ovs_interfaceid": "da41936a-3d81-49ed-9021-22d56f07b75b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:18:24 np0005486808 nova_compute[259627]: 2025-10-14 09:18:24.578 2 WARNING nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:18:24 np0005486808 nova_compute[259627]: 2025-10-14 09:18:24.586 2 DEBUG nova.virt.libvirt.host [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:18:24 np0005486808 nova_compute[259627]: 2025-10-14 09:18:24.586 2 DEBUG nova.virt.libvirt.host [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:18:24 np0005486808 nova_compute[259627]: 2025-10-14 09:18:24.598 2 DEBUG nova.virt.libvirt.host [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:18:24 np0005486808 nova_compute[259627]: 2025-10-14 09:18:24.598 2 DEBUG nova.virt.libvirt.host [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:18:24 np0005486808 nova_compute[259627]: 2025-10-14 09:18:24.599 2 DEBUG nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:18:24 np0005486808 nova_compute[259627]: 2025-10-14 09:18:24.599 2 DEBUG nova.virt.hardware [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:18:24 np0005486808 nova_compute[259627]: 2025-10-14 09:18:24.600 2 DEBUG nova.virt.hardware [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:18:24 np0005486808 nova_compute[259627]: 2025-10-14 09:18:24.600 2 DEBUG nova.virt.hardware [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:18:24 np0005486808 nova_compute[259627]: 2025-10-14 09:18:24.601 2 DEBUG nova.virt.hardware [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:18:24 np0005486808 nova_compute[259627]: 2025-10-14 09:18:24.601 2 DEBUG nova.virt.hardware [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:18:24 np0005486808 nova_compute[259627]: 2025-10-14 09:18:24.602 2 DEBUG nova.virt.hardware [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:18:24 np0005486808 nova_compute[259627]: 2025-10-14 09:18:24.602 2 DEBUG nova.virt.hardware [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:18:24 np0005486808 nova_compute[259627]: 2025-10-14 09:18:24.602 2 DEBUG nova.virt.hardware [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:18:24 np0005486808 nova_compute[259627]: 2025-10-14 09:18:24.603 2 DEBUG nova.virt.hardware [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:18:24 np0005486808 nova_compute[259627]: 2025-10-14 09:18:24.603 2 DEBUG nova.virt.hardware [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:18:24 np0005486808 nova_compute[259627]: 2025-10-14 09:18:24.604 2 DEBUG nova.virt.hardware [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:18:24 np0005486808 nova_compute[259627]: 2025-10-14 09:18:24.607 2 DEBUG oslo_concurrency.processutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:18:24 np0005486808 podman[369279]: 2025-10-14 09:18:24.786450188 +0000 UTC m=+0.045998165 container create f20a561100924fa3c167a9a276183d8a200564a8f00fbfa85c85ea76b8fb8de8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_mendeleev, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Oct 14 05:18:24 np0005486808 systemd[1]: Started libpod-conmon-f20a561100924fa3c167a9a276183d8a200564a8f00fbfa85c85ea76b8fb8de8.scope.
Oct 14 05:18:24 np0005486808 podman[369279]: 2025-10-14 09:18:24.763339028 +0000 UTC m=+0.022887015 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:18:24 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:18:24 np0005486808 podman[369279]: 2025-10-14 09:18:24.893887526 +0000 UTC m=+0.153435593 container init f20a561100924fa3c167a9a276183d8a200564a8f00fbfa85c85ea76b8fb8de8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_mendeleev, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Oct 14 05:18:24 np0005486808 podman[369279]: 2025-10-14 09:18:24.902826476 +0000 UTC m=+0.162374493 container start f20a561100924fa3c167a9a276183d8a200564a8f00fbfa85c85ea76b8fb8de8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 05:18:24 np0005486808 podman[369279]: 2025-10-14 09:18:24.906191309 +0000 UTC m=+0.165739296 container attach f20a561100924fa3c167a9a276183d8a200564a8f00fbfa85c85ea76b8fb8de8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 14 05:18:24 np0005486808 jolly_mendeleev[369312]: 167 167
Oct 14 05:18:24 np0005486808 systemd[1]: libpod-f20a561100924fa3c167a9a276183d8a200564a8f00fbfa85c85ea76b8fb8de8.scope: Deactivated successfully.
Oct 14 05:18:24 np0005486808 podman[369279]: 2025-10-14 09:18:24.912655378 +0000 UTC m=+0.172203395 container died f20a561100924fa3c167a9a276183d8a200564a8f00fbfa85c85ea76b8fb8de8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_mendeleev, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:18:24 np0005486808 systemd[1]: var-lib-containers-storage-overlay-8866373d7df29d742d77df53aaafccc026991a07ad6c1af16b39fa2a36510118-merged.mount: Deactivated successfully.
Oct 14 05:18:24 np0005486808 podman[369279]: 2025-10-14 09:18:24.957917734 +0000 UTC m=+0.217465711 container remove f20a561100924fa3c167a9a276183d8a200564a8f00fbfa85c85ea76b8fb8de8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_mendeleev, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 14 05:18:24 np0005486808 systemd[1]: libpod-conmon-f20a561100924fa3c167a9a276183d8a200564a8f00fbfa85c85ea76b8fb8de8.scope: Deactivated successfully.
Oct 14 05:18:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:18:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1580672828' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:18:25 np0005486808 nova_compute[259627]: 2025-10-14 09:18:25.099 2 DEBUG oslo_concurrency.processutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:18:25 np0005486808 nova_compute[259627]: 2025-10-14 09:18:25.135 2 DEBUG nova.storage.rbd_utils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 78ff2018-e6dc-4337-8a74-90e5a3963a12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:18:25 np0005486808 podman[369337]: 2025-10-14 09:18:25.140696279 +0000 UTC m=+0.053179552 container create e2d4c4260a5effbb95cc4fe1d1ba8e46742c0bc7309dc3905f40fc85d5cb76e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_keldysh, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:18:25 np0005486808 nova_compute[259627]: 2025-10-14 09:18:25.141 2 DEBUG oslo_concurrency.processutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:18:25 np0005486808 systemd[1]: Started libpod-conmon-e2d4c4260a5effbb95cc4fe1d1ba8e46742c0bc7309dc3905f40fc85d5cb76e0.scope.
Oct 14 05:18:25 np0005486808 podman[369337]: 2025-10-14 09:18:25.11683027 +0000 UTC m=+0.029313583 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:18:25 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:18:25 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31c49fdc4fd82329d03cb20eb80ffbd77df7bb65f804d3a8a4eb83d57c0c5ed2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:18:25 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31c49fdc4fd82329d03cb20eb80ffbd77df7bb65f804d3a8a4eb83d57c0c5ed2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:18:25 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31c49fdc4fd82329d03cb20eb80ffbd77df7bb65f804d3a8a4eb83d57c0c5ed2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:18:25 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31c49fdc4fd82329d03cb20eb80ffbd77df7bb65f804d3a8a4eb83d57c0c5ed2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:18:25 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31c49fdc4fd82329d03cb20eb80ffbd77df7bb65f804d3a8a4eb83d57c0c5ed2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:18:25 np0005486808 podman[369337]: 2025-10-14 09:18:25.252393832 +0000 UTC m=+0.164877215 container init e2d4c4260a5effbb95cc4fe1d1ba8e46742c0bc7309dc3905f40fc85d5cb76e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_keldysh, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 14 05:18:25 np0005486808 podman[369337]: 2025-10-14 09:18:25.273407999 +0000 UTC m=+0.185891292 container start e2d4c4260a5effbb95cc4fe1d1ba8e46742c0bc7309dc3905f40fc85d5cb76e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_keldysh, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 14 05:18:25 np0005486808 podman[369337]: 2025-10-14 09:18:25.278535086 +0000 UTC m=+0.191018449 container attach e2d4c4260a5effbb95cc4fe1d1ba8e46742c0bc7309dc3905f40fc85d5cb76e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_keldysh, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 05:18:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:18:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2381779128' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:18:25 np0005486808 nova_compute[259627]: 2025-10-14 09:18:25.605 2 DEBUG oslo_concurrency.processutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:18:25 np0005486808 nova_compute[259627]: 2025-10-14 09:18:25.607 2 DEBUG nova.virt.libvirt.vif [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:18:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-656534791',display_name='tempest-TestNetworkBasicOps-server-656534791',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-656534791',id=111,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqWOfZtqt05D/ybUIudjZC2rvrYHVQund30LctGyNDPNNhzNoX0uP/AuNuya2hh573jsTHFAB6yaR0srnVECa+MfsVPVgRLFSopv6Xf9JyA1XlxHXKy8htlY2LEMZWHEA==',key_name='tempest-TestNetworkBasicOps-500375878',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-qnjtb5gx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:18:19Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=78ff2018-e6dc-4337-8a74-90e5a3963a12,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "da41936a-3d81-49ed-9021-22d56f07b75b", "address": "fa:16:3e:44:0a:a6", "network": {"id": "f71257d1-6873-4759-898a-ce32e06e2fe5", "bridge": "br-int", "label": "tempest-network-smoke--1895773891", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda41936a-3d", "ovs_interfaceid": "da41936a-3d81-49ed-9021-22d56f07b75b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:18:25 np0005486808 nova_compute[259627]: 2025-10-14 09:18:25.607 2 DEBUG nova.network.os_vif_util [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "da41936a-3d81-49ed-9021-22d56f07b75b", "address": "fa:16:3e:44:0a:a6", "network": {"id": "f71257d1-6873-4759-898a-ce32e06e2fe5", "bridge": "br-int", "label": "tempest-network-smoke--1895773891", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda41936a-3d", "ovs_interfaceid": "da41936a-3d81-49ed-9021-22d56f07b75b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:18:25 np0005486808 nova_compute[259627]: 2025-10-14 09:18:25.608 2 DEBUG nova.network.os_vif_util [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:0a:a6,bridge_name='br-int',has_traffic_filtering=True,id=da41936a-3d81-49ed-9021-22d56f07b75b,network=Network(f71257d1-6873-4759-898a-ce32e06e2fe5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda41936a-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:18:25 np0005486808 nova_compute[259627]: 2025-10-14 09:18:25.609 2 DEBUG nova.objects.instance [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'pci_devices' on Instance uuid 78ff2018-e6dc-4337-8a74-90e5a3963a12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:18:25 np0005486808 nova_compute[259627]: 2025-10-14 09:18:25.634 2 DEBUG nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:18:25 np0005486808 nova_compute[259627]:  <uuid>78ff2018-e6dc-4337-8a74-90e5a3963a12</uuid>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:  <name>instance-0000006f</name>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:18:25 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:      <nova:name>tempest-TestNetworkBasicOps-server-656534791</nova:name>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:18:24</nova:creationTime>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:18:25 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:        <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:        <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:        <nova:port uuid="da41936a-3d81-49ed-9021-22d56f07b75b">
Oct 14 05:18:25 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.23" ipVersion="4"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:      <entry name="serial">78ff2018-e6dc-4337-8a74-90e5a3963a12</entry>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:      <entry name="uuid">78ff2018-e6dc-4337-8a74-90e5a3963a12</entry>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:18:25 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/78ff2018-e6dc-4337-8a74-90e5a3963a12_disk">
Oct 14 05:18:25 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:18:25 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:18:25 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/78ff2018-e6dc-4337-8a74-90e5a3963a12_disk.config">
Oct 14 05:18:25 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:18:25 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:18:25 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:44:0a:a6"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:      <target dev="tapda41936a-3d"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:18:25 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/78ff2018-e6dc-4337-8a74-90e5a3963a12/console.log" append="off"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:18:25 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:18:25 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:18:25 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:18:25 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:18:25 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:18:25 np0005486808 nova_compute[259627]: 2025-10-14 09:18:25.635 2 DEBUG nova.compute.manager [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Preparing to wait for external event network-vif-plugged-da41936a-3d81-49ed-9021-22d56f07b75b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:18:25 np0005486808 nova_compute[259627]: 2025-10-14 09:18:25.635 2 DEBUG oslo_concurrency.lockutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "78ff2018-e6dc-4337-8a74-90e5a3963a12-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:18:25 np0005486808 nova_compute[259627]: 2025-10-14 09:18:25.635 2 DEBUG oslo_concurrency.lockutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "78ff2018-e6dc-4337-8a74-90e5a3963a12-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:18:25 np0005486808 nova_compute[259627]: 2025-10-14 09:18:25.636 2 DEBUG oslo_concurrency.lockutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "78ff2018-e6dc-4337-8a74-90e5a3963a12-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:18:25 np0005486808 nova_compute[259627]: 2025-10-14 09:18:25.636 2 DEBUG nova.virt.libvirt.vif [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:18:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-656534791',display_name='tempest-TestNetworkBasicOps-server-656534791',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-656534791',id=111,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqWOfZtqt05D/ybUIudjZC2rvrYHVQund30LctGyNDPNNhzNoX0uP/AuNuya2hh573jsTHFAB6yaR0srnVECa+MfsVPVgRLFSopv6Xf9JyA1XlxHXKy8htlY2LEMZWHEA==',key_name='tempest-TestNetworkBasicOps-500375878',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-qnjtb5gx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:18:19Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=78ff2018-e6dc-4337-8a74-90e5a3963a12,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "da41936a-3d81-49ed-9021-22d56f07b75b", "address": "fa:16:3e:44:0a:a6", "network": {"id": "f71257d1-6873-4759-898a-ce32e06e2fe5", "bridge": "br-int", "label": "tempest-network-smoke--1895773891", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda41936a-3d", "ovs_interfaceid": "da41936a-3d81-49ed-9021-22d56f07b75b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:18:25 np0005486808 nova_compute[259627]: 2025-10-14 09:18:25.637 2 DEBUG nova.network.os_vif_util [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "da41936a-3d81-49ed-9021-22d56f07b75b", "address": "fa:16:3e:44:0a:a6", "network": {"id": "f71257d1-6873-4759-898a-ce32e06e2fe5", "bridge": "br-int", "label": "tempest-network-smoke--1895773891", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda41936a-3d", "ovs_interfaceid": "da41936a-3d81-49ed-9021-22d56f07b75b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:18:25 np0005486808 nova_compute[259627]: 2025-10-14 09:18:25.637 2 DEBUG nova.network.os_vif_util [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:0a:a6,bridge_name='br-int',has_traffic_filtering=True,id=da41936a-3d81-49ed-9021-22d56f07b75b,network=Network(f71257d1-6873-4759-898a-ce32e06e2fe5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda41936a-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:18:25 np0005486808 nova_compute[259627]: 2025-10-14 09:18:25.638 2 DEBUG os_vif [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:0a:a6,bridge_name='br-int',has_traffic_filtering=True,id=da41936a-3d81-49ed-9021-22d56f07b75b,network=Network(f71257d1-6873-4759-898a-ce32e06e2fe5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda41936a-3d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:18:25 np0005486808 nova_compute[259627]: 2025-10-14 09:18:25.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:25 np0005486808 nova_compute[259627]: 2025-10-14 09:18:25.642 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:18:25 np0005486808 nova_compute[259627]: 2025-10-14 09:18:25.643 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:18:25 np0005486808 nova_compute[259627]: 2025-10-14 09:18:25.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:25 np0005486808 nova_compute[259627]: 2025-10-14 09:18:25.648 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapda41936a-3d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:18:25 np0005486808 nova_compute[259627]: 2025-10-14 09:18:25.649 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapda41936a-3d, col_values=(('external_ids', {'iface-id': 'da41936a-3d81-49ed-9021-22d56f07b75b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:44:0a:a6', 'vm-uuid': '78ff2018-e6dc-4337-8a74-90e5a3963a12'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:18:25 np0005486808 NetworkManager[44885]: <info>  [1760433505.6528] manager: (tapda41936a-3d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/474)
Oct 14 05:18:25 np0005486808 nova_compute[259627]: 2025-10-14 09:18:25.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:25 np0005486808 nova_compute[259627]: 2025-10-14 09:18:25.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:18:25 np0005486808 nova_compute[259627]: 2025-10-14 09:18:25.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:25 np0005486808 nova_compute[259627]: 2025-10-14 09:18:25.663 2 INFO os_vif [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:0a:a6,bridge_name='br-int',has_traffic_filtering=True,id=da41936a-3d81-49ed-9021-22d56f07b75b,network=Network(f71257d1-6873-4759-898a-ce32e06e2fe5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda41936a-3d')#033[00m
Oct 14 05:18:25 np0005486808 nova_compute[259627]: 2025-10-14 09:18:25.725 2 DEBUG nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:18:25 np0005486808 nova_compute[259627]: 2025-10-14 09:18:25.725 2 DEBUG nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:18:25 np0005486808 nova_compute[259627]: 2025-10-14 09:18:25.725 2 DEBUG nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No VIF found with MAC fa:16:3e:44:0a:a6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:18:25 np0005486808 nova_compute[259627]: 2025-10-14 09:18:25.726 2 INFO nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Using config drive#033[00m
Oct 14 05:18:25 np0005486808 nova_compute[259627]: 2025-10-14 09:18:25.746 2 DEBUG nova.storage.rbd_utils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 78ff2018-e6dc-4337-8a74-90e5a3963a12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:18:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1933: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:18:26 np0005486808 nova_compute[259627]: 2025-10-14 09:18:26.095 2 DEBUG nova.network.neutron [req-61e0a4f4-7b58-46e8-b5d6-b8273450092e req-47a48a26-1648-4480-8dd3-5f192494cd94 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Updated VIF entry in instance network info cache for port da41936a-3d81-49ed-9021-22d56f07b75b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:18:26 np0005486808 nova_compute[259627]: 2025-10-14 09:18:26.096 2 DEBUG nova.network.neutron [req-61e0a4f4-7b58-46e8-b5d6-b8273450092e req-47a48a26-1648-4480-8dd3-5f192494cd94 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Updating instance_info_cache with network_info: [{"id": "da41936a-3d81-49ed-9021-22d56f07b75b", "address": "fa:16:3e:44:0a:a6", "network": {"id": "f71257d1-6873-4759-898a-ce32e06e2fe5", "bridge": "br-int", "label": "tempest-network-smoke--1895773891", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda41936a-3d", "ovs_interfaceid": "da41936a-3d81-49ed-9021-22d56f07b75b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:18:26 np0005486808 nova_compute[259627]: 2025-10-14 09:18:26.120 2 DEBUG oslo_concurrency.lockutils [req-61e0a4f4-7b58-46e8-b5d6-b8273450092e req-47a48a26-1648-4480-8dd3-5f192494cd94 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-78ff2018-e6dc-4337-8a74-90e5a3963a12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:18:26 np0005486808 nova_compute[259627]: 2025-10-14 09:18:26.236 2 INFO nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Creating config drive at /var/lib/nova/instances/78ff2018-e6dc-4337-8a74-90e5a3963a12/disk.config#033[00m
Oct 14 05:18:26 np0005486808 nova_compute[259627]: 2025-10-14 09:18:26.245 2 DEBUG oslo_concurrency.processutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/78ff2018-e6dc-4337-8a74-90e5a3963a12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp412iiwh8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:18:26 np0005486808 nova_compute[259627]: 2025-10-14 09:18:26.411 2 DEBUG oslo_concurrency.processutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/78ff2018-e6dc-4337-8a74-90e5a3963a12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp412iiwh8" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:18:26 np0005486808 jolly_keldysh[369375]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:18:26 np0005486808 jolly_keldysh[369375]: --> relative data size: 1.0
Oct 14 05:18:26 np0005486808 jolly_keldysh[369375]: --> All data devices are unavailable
Oct 14 05:18:26 np0005486808 nova_compute[259627]: 2025-10-14 09:18:26.444 2 DEBUG nova.storage.rbd_utils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 78ff2018-e6dc-4337-8a74-90e5a3963a12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:18:26 np0005486808 nova_compute[259627]: 2025-10-14 09:18:26.450 2 DEBUG oslo_concurrency.processutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/78ff2018-e6dc-4337-8a74-90e5a3963a12/disk.config 78ff2018-e6dc-4337-8a74-90e5a3963a12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:18:26 np0005486808 systemd[1]: libpod-e2d4c4260a5effbb95cc4fe1d1ba8e46742c0bc7309dc3905f40fc85d5cb76e0.scope: Deactivated successfully.
Oct 14 05:18:26 np0005486808 systemd[1]: libpod-e2d4c4260a5effbb95cc4fe1d1ba8e46742c0bc7309dc3905f40fc85d5cb76e0.scope: Consumed 1.112s CPU time.
Oct 14 05:18:26 np0005486808 podman[369337]: 2025-10-14 09:18:26.455897463 +0000 UTC m=+1.368380756 container died e2d4c4260a5effbb95cc4fe1d1ba8e46742c0bc7309dc3905f40fc85d5cb76e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:18:26 np0005486808 systemd[1]: var-lib-containers-storage-overlay-31c49fdc4fd82329d03cb20eb80ffbd77df7bb65f804d3a8a4eb83d57c0c5ed2-merged.mount: Deactivated successfully.
Oct 14 05:18:26 np0005486808 podman[369337]: 2025-10-14 09:18:26.517214414 +0000 UTC m=+1.429697697 container remove e2d4c4260a5effbb95cc4fe1d1ba8e46742c0bc7309dc3905f40fc85d5cb76e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_keldysh, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 14 05:18:26 np0005486808 systemd[1]: libpod-conmon-e2d4c4260a5effbb95cc4fe1d1ba8e46742c0bc7309dc3905f40fc85d5cb76e0.scope: Deactivated successfully.
Oct 14 05:18:26 np0005486808 nova_compute[259627]: 2025-10-14 09:18:26.636 2 DEBUG oslo_concurrency.processutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/78ff2018-e6dc-4337-8a74-90e5a3963a12/disk.config 78ff2018-e6dc-4337-8a74-90e5a3963a12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:18:26 np0005486808 nova_compute[259627]: 2025-10-14 09:18:26.637 2 INFO nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Deleting local config drive /var/lib/nova/instances/78ff2018-e6dc-4337-8a74-90e5a3963a12/disk.config because it was imported into RBD.#033[00m
Oct 14 05:18:26 np0005486808 NetworkManager[44885]: <info>  [1760433506.6843] manager: (tapda41936a-3d): new Tun device (/org/freedesktop/NetworkManager/Devices/475)
Oct 14 05:18:26 np0005486808 kernel: tapda41936a-3d: entered promiscuous mode
Oct 14 05:18:26 np0005486808 nova_compute[259627]: 2025-10-14 09:18:26.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:26 np0005486808 ovn_controller[152662]: 2025-10-14T09:18:26Z|01174|binding|INFO|Claiming lport da41936a-3d81-49ed-9021-22d56f07b75b for this chassis.
Oct 14 05:18:26 np0005486808 ovn_controller[152662]: 2025-10-14T09:18:26Z|01175|binding|INFO|da41936a-3d81-49ed-9021-22d56f07b75b: Claiming fa:16:3e:44:0a:a6 10.100.0.23
Oct 14 05:18:26 np0005486808 nova_compute[259627]: 2025-10-14 09:18:26.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.703 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:0a:a6 10.100.0.23'], port_security=['fa:16:3e:44:0a:a6 10.100.0.23'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.23/28', 'neutron:device_id': '78ff2018-e6dc-4337-8a74-90e5a3963a12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f71257d1-6873-4759-898a-ce32e06e2fe5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e9d71fdc-7fb6-4c43-8934-1eec5337be2d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0cdc378b-1288-436b-9821-34981ec9f7fe, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=da41936a-3d81-49ed-9021-22d56f07b75b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:18:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.704 162547 INFO neutron.agent.ovn.metadata.agent [-] Port da41936a-3d81-49ed-9021-22d56f07b75b in datapath f71257d1-6873-4759-898a-ce32e06e2fe5 bound to our chassis#033[00m
Oct 14 05:18:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.705 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f71257d1-6873-4759-898a-ce32e06e2fe5#033[00m
Oct 14 05:18:26 np0005486808 systemd-udevd[369561]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:18:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.717 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[15c95710-9154-44dd-953e-ec19ff0968ea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:18:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.719 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf71257d1-61 in ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:18:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.721 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf71257d1-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:18:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.722 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[57ec8a8d-a322-4834-b68b-7f2cbb903eec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:18:26 np0005486808 systemd-machined[214636]: New machine qemu-141-instance-0000006f.
Oct 14 05:18:26 np0005486808 NetworkManager[44885]: <info>  [1760433506.7302] device (tapda41936a-3d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:18:26 np0005486808 NetworkManager[44885]: <info>  [1760433506.7316] device (tapda41936a-3d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:18:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.726 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[78ef373a-bde3-4923-a43a-5fceb09ca2b5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:18:26 np0005486808 nova_compute[259627]: 2025-10-14 09:18:26.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:26 np0005486808 systemd[1]: Started Virtual Machine qemu-141-instance-0000006f.
Oct 14 05:18:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.751 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[7999880c-9387-4786-b702-03d71e80fb68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:18:26 np0005486808 ovn_controller[152662]: 2025-10-14T09:18:26Z|01176|binding|INFO|Setting lport da41936a-3d81-49ed-9021-22d56f07b75b ovn-installed in OVS
Oct 14 05:18:26 np0005486808 ovn_controller[152662]: 2025-10-14T09:18:26Z|01177|binding|INFO|Setting lport da41936a-3d81-49ed-9021-22d56f07b75b up in Southbound
Oct 14 05:18:26 np0005486808 nova_compute[259627]: 2025-10-14 09:18:26.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.767 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e21b4a3b-6956-4170-80f9-de59d9136135]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:18:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.797 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[0083f93e-4d9d-4c0c-932e-9e271925b5b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:18:26 np0005486808 NetworkManager[44885]: <info>  [1760433506.8043] manager: (tapf71257d1-60): new Veth device (/org/freedesktop/NetworkManager/Devices/476)
Oct 14 05:18:26 np0005486808 systemd-udevd[369572]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:18:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.803 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[af80acdf-b9c9-44e3-b6f0-dee65997d7a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:18:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.839 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8844502f-943a-4145-bc3b-f5d9393e45bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:18:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.842 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[b5a53df3-0a8e-4a8b-972e-9e8a73e1a4e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:18:26 np0005486808 NetworkManager[44885]: <info>  [1760433506.8619] device (tapf71257d1-60): carrier: link connected
Oct 14 05:18:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.865 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8f6a6352-23ca-45ab-95e3-64fc59ea18ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:18:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.880 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[accdf4ad-dbb7-4ba9-9075-17ca0d548636]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf71257d1-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fb:52:53'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 336], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 728562, 'reachable_time': 37622, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369645, 'error': None, 'target': 'ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:18:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.893 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[32ea3e58-178f-4c18-afb1-af6c77489831]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefb:5253'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 728562, 'tstamp': 728562}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369646, 'error': None, 'target': 'ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:18:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.907 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[03069423-17f3-4a4b-bc96-1857b1cc6977]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf71257d1-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fb:52:53'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 336], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 728562, 'reachable_time': 37622, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 369647, 'error': None, 'target': 'ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:18:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.934 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[19003c18-31a3-42ba-bbb4-aceb25d0366f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:18:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.984 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ff74600b-cf52-4d75-9d38-ff42ff0e93d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:18:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.985 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf71257d1-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:18:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.986 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:18:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.986 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf71257d1-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:18:26 np0005486808 nova_compute[259627]: 2025-10-14 09:18:26.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:26 np0005486808 kernel: tapf71257d1-60: entered promiscuous mode
Oct 14 05:18:26 np0005486808 NetworkManager[44885]: <info>  [1760433506.9902] manager: (tapf71257d1-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/477)
Oct 14 05:18:26 np0005486808 nova_compute[259627]: 2025-10-14 09:18:26.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:26.999 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf71257d1-60, col_values=(('external_ids', {'iface-id': '1080cece-d27b-4391-82d1-6b7871bf79ba'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:18:27 np0005486808 nova_compute[259627]: 2025-10-14 09:18:27.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:27 np0005486808 ovn_controller[152662]: 2025-10-14T09:18:27Z|01178|binding|INFO|Releasing lport 1080cece-d27b-4391-82d1-6b7871bf79ba from this chassis (sb_readonly=0)
Oct 14 05:18:27 np0005486808 nova_compute[259627]: 2025-10-14 09:18:27.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:27 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:27.015 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f71257d1-6873-4759-898a-ce32e06e2fe5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f71257d1-6873-4759-898a-ce32e06e2fe5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:18:27 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:27.015 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[012980ac-d769-430d-ac00-7d3be2352960]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:18:27 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:27.016 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:18:27 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:18:27 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:18:27 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-f71257d1-6873-4759-898a-ce32e06e2fe5
Oct 14 05:18:27 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:18:27 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:18:27 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:18:27 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/f71257d1-6873-4759-898a-ce32e06e2fe5.pid.haproxy
Oct 14 05:18:27 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:18:27 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:18:27 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:18:27 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:18:27 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:18:27 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:18:27 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:18:27 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:18:27 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:18:27 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:18:27 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:18:27 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:18:27 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:18:27 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:18:27 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:18:27 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:18:27 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:18:27 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:18:27 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:18:27 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:18:27 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID f71257d1-6873-4759-898a-ce32e06e2fe5
Oct 14 05:18:27 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:18:27 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:27.017 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5', 'env', 'PROCESS_TAG=haproxy-f71257d1-6873-4759-898a-ce32e06e2fe5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f71257d1-6873-4759-898a-ce32e06e2fe5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:18:27 np0005486808 nova_compute[259627]: 2025-10-14 09:18:27.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:27 np0005486808 podman[369696]: 2025-10-14 09:18:27.194556387 +0000 UTC m=+0.045958534 container create e9c66f79778c69d4ac5902bfcfb7d36ddca4966cdda8d7543665ba6c5cd064dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Oct 14 05:18:27 np0005486808 systemd[1]: Started libpod-conmon-e9c66f79778c69d4ac5902bfcfb7d36ddca4966cdda8d7543665ba6c5cd064dc.scope.
Oct 14 05:18:27 np0005486808 podman[369696]: 2025-10-14 09:18:27.170439353 +0000 UTC m=+0.021841480 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:18:27 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:18:27 np0005486808 podman[369696]: 2025-10-14 09:18:27.325249108 +0000 UTC m=+0.176651235 container init e9c66f79778c69d4ac5902bfcfb7d36ddca4966cdda8d7543665ba6c5cd064dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bell, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 14 05:18:27 np0005486808 podman[369696]: 2025-10-14 09:18:27.33464295 +0000 UTC m=+0.186045067 container start e9c66f79778c69d4ac5902bfcfb7d36ddca4966cdda8d7543665ba6c5cd064dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:18:27 np0005486808 podman[369696]: 2025-10-14 09:18:27.339147751 +0000 UTC m=+0.190549858 container attach e9c66f79778c69d4ac5902bfcfb7d36ddca4966cdda8d7543665ba6c5cd064dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 14 05:18:27 np0005486808 sad_bell[369717]: 167 167
Oct 14 05:18:27 np0005486808 systemd[1]: libpod-e9c66f79778c69d4ac5902bfcfb7d36ddca4966cdda8d7543665ba6c5cd064dc.scope: Deactivated successfully.
Oct 14 05:18:27 np0005486808 podman[369696]: 2025-10-14 09:18:27.342682138 +0000 UTC m=+0.194084245 container died e9c66f79778c69d4ac5902bfcfb7d36ddca4966cdda8d7543665ba6c5cd064dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 14 05:18:27 np0005486808 systemd[1]: var-lib-containers-storage-overlay-624d0a7e7e71146fbb07872a14b4aa53f659cadd4ebe59eb3c5afa7ada743ff5-merged.mount: Deactivated successfully.
Oct 14 05:18:27 np0005486808 podman[369696]: 2025-10-14 09:18:27.382492739 +0000 UTC m=+0.233894856 container remove e9c66f79778c69d4ac5902bfcfb7d36ddca4966cdda8d7543665ba6c5cd064dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Oct 14 05:18:27 np0005486808 systemd[1]: libpod-conmon-e9c66f79778c69d4ac5902bfcfb7d36ddca4966cdda8d7543665ba6c5cd064dc.scope: Deactivated successfully.
Oct 14 05:18:27 np0005486808 podman[369738]: 2025-10-14 09:18:27.430474642 +0000 UTC m=+0.082854513 container create c4b9568bccd483fddcfbf998357d85c1b39fa3eb9a4792c4d8a7f3b13a4e0cc3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 14 05:18:27 np0005486808 systemd[1]: Started libpod-conmon-c4b9568bccd483fddcfbf998357d85c1b39fa3eb9a4792c4d8a7f3b13a4e0cc3.scope.
Oct 14 05:18:27 np0005486808 podman[369738]: 2025-10-14 09:18:27.3975464 +0000 UTC m=+0.049926241 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:18:27 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:18:27 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06e7bed502d13946ec34b297bcd718011311b8b7aa0eaf61e9799a54475ca22e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:18:27 np0005486808 podman[369738]: 2025-10-14 09:18:27.539188961 +0000 UTC m=+0.191568802 container init c4b9568bccd483fddcfbf998357d85c1b39fa3eb9a4792c4d8a7f3b13a4e0cc3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 14 05:18:27 np0005486808 podman[369738]: 2025-10-14 09:18:27.547125137 +0000 UTC m=+0.199504988 container start c4b9568bccd483fddcfbf998357d85c1b39fa3eb9a4792c4d8a7f3b13a4e0cc3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:18:27 np0005486808 neutron-haproxy-ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5[369805]: [NOTICE]   (369829) : New worker (369831) forked
Oct 14 05:18:27 np0005486808 neutron-haproxy-ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5[369805]: [NOTICE]   (369829) : Loading success.
Oct 14 05:18:27 np0005486808 podman[369815]: 2025-10-14 09:18:27.621593452 +0000 UTC m=+0.089536847 container create a578c4e78890beba36b988abe4a63bfbd07b7b82775d7607de3dcc852b5ab391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 14 05:18:27 np0005486808 systemd[1]: Started libpod-conmon-a578c4e78890beba36b988abe4a63bfbd07b7b82775d7607de3dcc852b5ab391.scope.
Oct 14 05:18:27 np0005486808 podman[369815]: 2025-10-14 09:18:27.590742712 +0000 UTC m=+0.058686137 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:18:27 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:18:27 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4366e99a2a7abed1302e598ebedf521655b838462b43aab8e2024311e0b8de8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:18:27 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4366e99a2a7abed1302e598ebedf521655b838462b43aab8e2024311e0b8de8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:18:27 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4366e99a2a7abed1302e598ebedf521655b838462b43aab8e2024311e0b8de8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:18:27 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4366e99a2a7abed1302e598ebedf521655b838462b43aab8e2024311e0b8de8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:18:27 np0005486808 podman[369815]: 2025-10-14 09:18:27.713854986 +0000 UTC m=+0.181798401 container init a578c4e78890beba36b988abe4a63bfbd07b7b82775d7607de3dcc852b5ab391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lovelace, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:18:27 np0005486808 podman[369815]: 2025-10-14 09:18:27.727110893 +0000 UTC m=+0.195054288 container start a578c4e78890beba36b988abe4a63bfbd07b7b82775d7607de3dcc852b5ab391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:18:27 np0005486808 podman[369815]: 2025-10-14 09:18:27.731496271 +0000 UTC m=+0.199439696 container attach a578c4e78890beba36b988abe4a63bfbd07b7b82775d7607de3dcc852b5ab391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lovelace, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 14 05:18:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:18:28 np0005486808 nova_compute[259627]: 2025-10-14 09:18:28.020 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433508.018999, 78ff2018-e6dc-4337-8a74-90e5a3963a12 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:18:28 np0005486808 nova_compute[259627]: 2025-10-14 09:18:28.021 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] VM Started (Lifecycle Event)#033[00m
Oct 14 05:18:28 np0005486808 nova_compute[259627]: 2025-10-14 09:18:28.041 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:18:28 np0005486808 nova_compute[259627]: 2025-10-14 09:18:28.047 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433508.023958, 78ff2018-e6dc-4337-8a74-90e5a3963a12 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:18:28 np0005486808 nova_compute[259627]: 2025-10-14 09:18:28.048 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:18:28 np0005486808 nova_compute[259627]: 2025-10-14 09:18:28.065 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:18:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1934: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:18:28 np0005486808 nova_compute[259627]: 2025-10-14 09:18:28.069 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:18:28 np0005486808 nova_compute[259627]: 2025-10-14 09:18:28.086 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]: {
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:    "0": [
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:        {
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:            "devices": [
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:                "/dev/loop3"
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:            ],
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:            "lv_name": "ceph_lv0",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:            "lv_size": "21470642176",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:            "name": "ceph_lv0",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:            "tags": {
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:                "ceph.cluster_name": "ceph",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:                "ceph.crush_device_class": "",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:                "ceph.encrypted": "0",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:                "ceph.osd_id": "0",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:                "ceph.type": "block",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:                "ceph.vdo": "0"
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:            },
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:            "type": "block",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:            "vg_name": "ceph_vg0"
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:        }
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:    ],
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:    "1": [
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:        {
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:            "devices": [
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:                "/dev/loop4"
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:            ],
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:            "lv_name": "ceph_lv1",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:            "lv_size": "21470642176",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:            "name": "ceph_lv1",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:            "tags": {
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:                "ceph.cluster_name": "ceph",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:                "ceph.crush_device_class": "",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:                "ceph.encrypted": "0",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:                "ceph.osd_id": "1",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:                "ceph.type": "block",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:                "ceph.vdo": "0"
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:            },
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:            "type": "block",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:            "vg_name": "ceph_vg1"
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:        }
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:    ],
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:    "2": [
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:        {
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:            "devices": [
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:                "/dev/loop5"
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:            ],
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:            "lv_name": "ceph_lv2",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:            "lv_size": "21470642176",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:            "name": "ceph_lv2",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:            "tags": {
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:                "ceph.cluster_name": "ceph",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:                "ceph.crush_device_class": "",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:                "ceph.encrypted": "0",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:                "ceph.osd_id": "2",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:                "ceph.type": "block",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:                "ceph.vdo": "0"
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:            },
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:            "type": "block",
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:            "vg_name": "ceph_vg2"
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:        }
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]:    ]
Oct 14 05:18:28 np0005486808 optimistic_lovelace[369844]: }
Oct 14 05:18:28 np0005486808 systemd[1]: libpod-a578c4e78890beba36b988abe4a63bfbd07b7b82775d7607de3dcc852b5ab391.scope: Deactivated successfully.
Oct 14 05:18:28 np0005486808 podman[369815]: 2025-10-14 09:18:28.51775096 +0000 UTC m=+0.985694355 container died a578c4e78890beba36b988abe4a63bfbd07b7b82775d7607de3dcc852b5ab391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lovelace, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Oct 14 05:18:28 np0005486808 systemd[1]: var-lib-containers-storage-overlay-d4366e99a2a7abed1302e598ebedf521655b838462b43aab8e2024311e0b8de8-merged.mount: Deactivated successfully.
Oct 14 05:18:28 np0005486808 podman[369815]: 2025-10-14 09:18:28.592803809 +0000 UTC m=+1.060747204 container remove a578c4e78890beba36b988abe4a63bfbd07b7b82775d7607de3dcc852b5ab391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lovelace, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:18:28 np0005486808 systemd[1]: libpod-conmon-a578c4e78890beba36b988abe4a63bfbd07b7b82775d7607de3dcc852b5ab391.scope: Deactivated successfully.
Oct 14 05:18:28 np0005486808 nova_compute[259627]: 2025-10-14 09:18:28.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:29 np0005486808 podman[370006]: 2025-10-14 09:18:29.427079001 +0000 UTC m=+0.055985311 container create 430bfc6d90e135bd62e0517639422a6033b74016975f676296a76c68962a9adc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 14 05:18:29 np0005486808 systemd[1]: Started libpod-conmon-430bfc6d90e135bd62e0517639422a6033b74016975f676296a76c68962a9adc.scope.
Oct 14 05:18:29 np0005486808 podman[370006]: 2025-10-14 09:18:29.39945288 +0000 UTC m=+0.028359240 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:18:29 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:18:29 np0005486808 podman[370006]: 2025-10-14 09:18:29.53292509 +0000 UTC m=+0.161831450 container init 430bfc6d90e135bd62e0517639422a6033b74016975f676296a76c68962a9adc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:18:29 np0005486808 podman[370006]: 2025-10-14 09:18:29.543037919 +0000 UTC m=+0.171944229 container start 430bfc6d90e135bd62e0517639422a6033b74016975f676296a76c68962a9adc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Oct 14 05:18:29 np0005486808 adoring_volhard[370022]: 167 167
Oct 14 05:18:29 np0005486808 podman[370006]: 2025-10-14 09:18:29.550328489 +0000 UTC m=+0.179234789 container attach 430bfc6d90e135bd62e0517639422a6033b74016975f676296a76c68962a9adc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:18:29 np0005486808 systemd[1]: libpod-430bfc6d90e135bd62e0517639422a6033b74016975f676296a76c68962a9adc.scope: Deactivated successfully.
Oct 14 05:18:29 np0005486808 podman[370006]: 2025-10-14 09:18:29.552254417 +0000 UTC m=+0.181160727 container died 430bfc6d90e135bd62e0517639422a6033b74016975f676296a76c68962a9adc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:18:29 np0005486808 systemd[1]: var-lib-containers-storage-overlay-c909d7239c03b4d9236547c9b46b017413575e0951edf84cbf8e29ffc1ea5184-merged.mount: Deactivated successfully.
Oct 14 05:18:29 np0005486808 podman[370006]: 2025-10-14 09:18:29.606455762 +0000 UTC m=+0.235362072 container remove 430bfc6d90e135bd62e0517639422a6033b74016975f676296a76c68962a9adc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:18:29 np0005486808 systemd[1]: libpod-conmon-430bfc6d90e135bd62e0517639422a6033b74016975f676296a76c68962a9adc.scope: Deactivated successfully.
Oct 14 05:18:29 np0005486808 podman[370045]: 2025-10-14 09:18:29.882472225 +0000 UTC m=+0.076683581 container create f77a0a2a318c4fed8108f5c307c8bcb8fc84548235c1f310ba8c4a1b173d9525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chatterjee, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:18:29 np0005486808 systemd[1]: Started libpod-conmon-f77a0a2a318c4fed8108f5c307c8bcb8fc84548235c1f310ba8c4a1b173d9525.scope.
Oct 14 05:18:29 np0005486808 podman[370045]: 2025-10-14 09:18:29.853415759 +0000 UTC m=+0.047627165 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:18:29 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:18:29 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a08b6a27e200de621892590710258d70ddd4c224cb584f9883d565503245e5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:18:29 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a08b6a27e200de621892590710258d70ddd4c224cb584f9883d565503245e5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:18:29 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a08b6a27e200de621892590710258d70ddd4c224cb584f9883d565503245e5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:18:29 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a08b6a27e200de621892590710258d70ddd4c224cb584f9883d565503245e5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:18:29 np0005486808 podman[370045]: 2025-10-14 09:18:29.993347468 +0000 UTC m=+0.187558864 container init f77a0a2a318c4fed8108f5c307c8bcb8fc84548235c1f310ba8c4a1b173d9525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chatterjee, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 14 05:18:30 np0005486808 podman[370045]: 2025-10-14 09:18:30.010739157 +0000 UTC m=+0.204950503 container start f77a0a2a318c4fed8108f5c307c8bcb8fc84548235c1f310ba8c4a1b173d9525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 14 05:18:30 np0005486808 podman[370045]: 2025-10-14 09:18:30.015044633 +0000 UTC m=+0.209255979 container attach f77a0a2a318c4fed8108f5c307c8bcb8fc84548235c1f310ba8c4a1b173d9525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chatterjee, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 14 05:18:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1935: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 14 05:18:30 np0005486808 nova_compute[259627]: 2025-10-14 09:18:30.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:31 np0005486808 dazzling_chatterjee[370061]: {
Oct 14 05:18:31 np0005486808 dazzling_chatterjee[370061]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:18:31 np0005486808 dazzling_chatterjee[370061]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:18:31 np0005486808 dazzling_chatterjee[370061]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:18:31 np0005486808 dazzling_chatterjee[370061]:        "osd_id": 2,
Oct 14 05:18:31 np0005486808 dazzling_chatterjee[370061]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:18:31 np0005486808 dazzling_chatterjee[370061]:        "type": "bluestore"
Oct 14 05:18:31 np0005486808 dazzling_chatterjee[370061]:    },
Oct 14 05:18:31 np0005486808 dazzling_chatterjee[370061]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:18:31 np0005486808 dazzling_chatterjee[370061]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:18:31 np0005486808 dazzling_chatterjee[370061]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:18:31 np0005486808 dazzling_chatterjee[370061]:        "osd_id": 1,
Oct 14 05:18:31 np0005486808 dazzling_chatterjee[370061]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:18:31 np0005486808 dazzling_chatterjee[370061]:        "type": "bluestore"
Oct 14 05:18:31 np0005486808 dazzling_chatterjee[370061]:    },
Oct 14 05:18:31 np0005486808 dazzling_chatterjee[370061]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:18:31 np0005486808 dazzling_chatterjee[370061]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:18:31 np0005486808 dazzling_chatterjee[370061]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:18:31 np0005486808 dazzling_chatterjee[370061]:        "osd_id": 0,
Oct 14 05:18:31 np0005486808 dazzling_chatterjee[370061]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:18:31 np0005486808 dazzling_chatterjee[370061]:        "type": "bluestore"
Oct 14 05:18:31 np0005486808 dazzling_chatterjee[370061]:    }
Oct 14 05:18:31 np0005486808 dazzling_chatterjee[370061]: }
Oct 14 05:18:31 np0005486808 systemd[1]: libpod-f77a0a2a318c4fed8108f5c307c8bcb8fc84548235c1f310ba8c4a1b173d9525.scope: Deactivated successfully.
Oct 14 05:18:31 np0005486808 systemd[1]: libpod-f77a0a2a318c4fed8108f5c307c8bcb8fc84548235c1f310ba8c4a1b173d9525.scope: Consumed 1.209s CPU time.
Oct 14 05:18:31 np0005486808 conmon[370061]: conmon f77a0a2a318c4fed8108 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f77a0a2a318c4fed8108f5c307c8bcb8fc84548235c1f310ba8c4a1b173d9525.scope/container/memory.events
Oct 14 05:18:31 np0005486808 podman[370045]: 2025-10-14 09:18:31.215317985 +0000 UTC m=+1.409529301 container died f77a0a2a318c4fed8108f5c307c8bcb8fc84548235c1f310ba8c4a1b173d9525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 14 05:18:31 np0005486808 systemd[1]: var-lib-containers-storage-overlay-4a08b6a27e200de621892590710258d70ddd4c224cb584f9883d565503245e5e-merged.mount: Deactivated successfully.
Oct 14 05:18:31 np0005486808 podman[370045]: 2025-10-14 09:18:31.273983511 +0000 UTC m=+1.468194837 container remove f77a0a2a318c4fed8108f5c307c8bcb8fc84548235c1f310ba8c4a1b173d9525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:18:31 np0005486808 systemd[1]: libpod-conmon-f77a0a2a318c4fed8108f5c307c8bcb8fc84548235c1f310ba8c4a1b173d9525.scope: Deactivated successfully.
Oct 14 05:18:31 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:18:31 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:18:31 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:18:31 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:18:31 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev efe05eb0-081c-472e-910a-a7b0de0f21f5 does not exist
Oct 14 05:18:31 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev ab9b5431-b01c-49fb-a6fb-337082cf67b4 does not exist
Oct 14 05:18:31 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:18:31 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:18:31 np0005486808 nova_compute[259627]: 2025-10-14 09:18:31.801 2 DEBUG nova.compute.manager [req-d5dc2f57-8c3d-4693-bdd9-30912ff7f939 req-763206bb-cc4d-4af9-8d31-fe92d1dc5f5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Received event network-vif-plugged-da41936a-3d81-49ed-9021-22d56f07b75b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:18:31 np0005486808 nova_compute[259627]: 2025-10-14 09:18:31.804 2 DEBUG oslo_concurrency.lockutils [req-d5dc2f57-8c3d-4693-bdd9-30912ff7f939 req-763206bb-cc4d-4af9-8d31-fe92d1dc5f5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "78ff2018-e6dc-4337-8a74-90e5a3963a12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:18:31 np0005486808 nova_compute[259627]: 2025-10-14 09:18:31.804 2 DEBUG oslo_concurrency.lockutils [req-d5dc2f57-8c3d-4693-bdd9-30912ff7f939 req-763206bb-cc4d-4af9-8d31-fe92d1dc5f5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "78ff2018-e6dc-4337-8a74-90e5a3963a12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:18:31 np0005486808 nova_compute[259627]: 2025-10-14 09:18:31.805 2 DEBUG oslo_concurrency.lockutils [req-d5dc2f57-8c3d-4693-bdd9-30912ff7f939 req-763206bb-cc4d-4af9-8d31-fe92d1dc5f5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "78ff2018-e6dc-4337-8a74-90e5a3963a12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:18:31 np0005486808 nova_compute[259627]: 2025-10-14 09:18:31.805 2 DEBUG nova.compute.manager [req-d5dc2f57-8c3d-4693-bdd9-30912ff7f939 req-763206bb-cc4d-4af9-8d31-fe92d1dc5f5b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Processing event network-vif-plugged-da41936a-3d81-49ed-9021-22d56f07b75b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:18:31 np0005486808 nova_compute[259627]: 2025-10-14 09:18:31.806 2 DEBUG nova.compute.manager [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:18:31 np0005486808 nova_compute[259627]: 2025-10-14 09:18:31.813 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433511.8133683, 78ff2018-e6dc-4337-8a74-90e5a3963a12 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:18:31 np0005486808 nova_compute[259627]: 2025-10-14 09:18:31.814 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:18:31 np0005486808 nova_compute[259627]: 2025-10-14 09:18:31.818 2 DEBUG nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:18:31 np0005486808 nova_compute[259627]: 2025-10-14 09:18:31.823 2 INFO nova.virt.libvirt.driver [-] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Instance spawned successfully.#033[00m
Oct 14 05:18:31 np0005486808 nova_compute[259627]: 2025-10-14 09:18:31.824 2 DEBUG nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:18:31 np0005486808 nova_compute[259627]: 2025-10-14 09:18:31.846 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:18:31 np0005486808 nova_compute[259627]: 2025-10-14 09:18:31.855 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:18:31 np0005486808 nova_compute[259627]: 2025-10-14 09:18:31.862 2 DEBUG nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:18:31 np0005486808 nova_compute[259627]: 2025-10-14 09:18:31.863 2 DEBUG nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:18:31 np0005486808 nova_compute[259627]: 2025-10-14 09:18:31.863 2 DEBUG nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:18:31 np0005486808 nova_compute[259627]: 2025-10-14 09:18:31.864 2 DEBUG nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:18:31 np0005486808 nova_compute[259627]: 2025-10-14 09:18:31.865 2 DEBUG nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:18:31 np0005486808 nova_compute[259627]: 2025-10-14 09:18:31.865 2 DEBUG nova.virt.libvirt.driver [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:18:31 np0005486808 nova_compute[259627]: 2025-10-14 09:18:31.877 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:18:31 np0005486808 nova_compute[259627]: 2025-10-14 09:18:31.929 2 INFO nova.compute.manager [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Took 12.17 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:18:31 np0005486808 nova_compute[259627]: 2025-10-14 09:18:31.930 2 DEBUG nova.compute.manager [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:18:32 np0005486808 nova_compute[259627]: 2025-10-14 09:18:32.040 2 INFO nova.compute.manager [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Took 13.20 seconds to build instance.#033[00m
Oct 14 05:18:32 np0005486808 nova_compute[259627]: 2025-10-14 09:18:32.065 2 DEBUG oslo_concurrency.lockutils [None req-fe577a3d-c0cc-4dcf-91f1-389b4f5944c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "78ff2018-e6dc-4337-8a74-90e5a3963a12" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.315s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:18:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1936: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 14 05:18:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e266 do_prune osdmap full prune enabled
Oct 14 05:18:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e267 e267: 3 total, 3 up, 3 in
Oct 14 05:18:32 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e267: 3 total, 3 up, 3 in
Oct 14 05:18:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:18:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:18:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:18:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:18:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:18:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:18:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:18:32
Oct 14 05:18:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:18:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:18:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['images', 'volumes', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', '.mgr']
Oct 14 05:18:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:18:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:18:32 np0005486808 nova_compute[259627]: 2025-10-14 09:18:32.997 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:18:32 np0005486808 nova_compute[259627]: 2025-10-14 09:18:32.998 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:18:33 np0005486808 nova_compute[259627]: 2025-10-14 09:18:33.018 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:18:33 np0005486808 nova_compute[259627]: 2025-10-14 09:18:33.019 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:18:33 np0005486808 nova_compute[259627]: 2025-10-14 09:18:33.019 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:18:33 np0005486808 nova_compute[259627]: 2025-10-14 09:18:33.019 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:18:33 np0005486808 nova_compute[259627]: 2025-10-14 09:18:33.020 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:18:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:18:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:18:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:18:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:18:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:18:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:18:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:18:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:18:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:18:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:18:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:18:33 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/664284408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:18:33 np0005486808 nova_compute[259627]: 2025-10-14 09:18:33.493 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:18:33 np0005486808 nova_compute[259627]: 2025-10-14 09:18:33.594 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:18:33 np0005486808 nova_compute[259627]: 2025-10-14 09:18:33.595 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:18:33 np0005486808 nova_compute[259627]: 2025-10-14 09:18:33.602 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:18:33 np0005486808 nova_compute[259627]: 2025-10-14 09:18:33.602 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:18:33 np0005486808 nova_compute[259627]: 2025-10-14 09:18:33.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:33 np0005486808 nova_compute[259627]: 2025-10-14 09:18:33.866 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:18:33 np0005486808 nova_compute[259627]: 2025-10-14 09:18:33.867 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3443MB free_disk=59.92180633544922GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:18:33 np0005486808 nova_compute[259627]: 2025-10-14 09:18:33.868 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:18:33 np0005486808 nova_compute[259627]: 2025-10-14 09:18:33.868 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:18:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1938: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.3 MiB/s wr, 41 op/s
Oct 14 05:18:34 np0005486808 nova_compute[259627]: 2025-10-14 09:18:34.092 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 22c6f034-c238-499b-8b07-1c0f5879297e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:18:34 np0005486808 nova_compute[259627]: 2025-10-14 09:18:34.093 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 78ff2018-e6dc-4337-8a74-90e5a3963a12 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:18:34 np0005486808 nova_compute[259627]: 2025-10-14 09:18:34.093 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:18:34 np0005486808 nova_compute[259627]: 2025-10-14 09:18:34.094 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:18:34 np0005486808 nova_compute[259627]: 2025-10-14 09:18:34.276 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:18:34 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:18:34 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1059653271' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:18:34 np0005486808 nova_compute[259627]: 2025-10-14 09:18:34.752 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:18:34 np0005486808 nova_compute[259627]: 2025-10-14 09:18:34.758 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:18:34 np0005486808 nova_compute[259627]: 2025-10-14 09:18:34.904 2 DEBUG nova.compute.manager [req-1bc84417-5a25-45a1-9032-08438a874adc req-c9edadb3-e21a-4346-8014-3f39ac2176e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Received event network-vif-plugged-da41936a-3d81-49ed-9021-22d56f07b75b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:18:34 np0005486808 nova_compute[259627]: 2025-10-14 09:18:34.905 2 DEBUG oslo_concurrency.lockutils [req-1bc84417-5a25-45a1-9032-08438a874adc req-c9edadb3-e21a-4346-8014-3f39ac2176e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "78ff2018-e6dc-4337-8a74-90e5a3963a12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:18:34 np0005486808 nova_compute[259627]: 2025-10-14 09:18:34.906 2 DEBUG oslo_concurrency.lockutils [req-1bc84417-5a25-45a1-9032-08438a874adc req-c9edadb3-e21a-4346-8014-3f39ac2176e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "78ff2018-e6dc-4337-8a74-90e5a3963a12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:18:34 np0005486808 nova_compute[259627]: 2025-10-14 09:18:34.907 2 DEBUG oslo_concurrency.lockutils [req-1bc84417-5a25-45a1-9032-08438a874adc req-c9edadb3-e21a-4346-8014-3f39ac2176e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "78ff2018-e6dc-4337-8a74-90e5a3963a12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:18:34 np0005486808 nova_compute[259627]: 2025-10-14 09:18:34.907 2 DEBUG nova.compute.manager [req-1bc84417-5a25-45a1-9032-08438a874adc req-c9edadb3-e21a-4346-8014-3f39ac2176e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] No waiting events found dispatching network-vif-plugged-da41936a-3d81-49ed-9021-22d56f07b75b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:18:34 np0005486808 nova_compute[259627]: 2025-10-14 09:18:34.908 2 WARNING nova.compute.manager [req-1bc84417-5a25-45a1-9032-08438a874adc req-c9edadb3-e21a-4346-8014-3f39ac2176e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Received unexpected event network-vif-plugged-da41936a-3d81-49ed-9021-22d56f07b75b for instance with vm_state active and task_state None.#033[00m
Oct 14 05:18:35 np0005486808 nova_compute[259627]: 2025-10-14 09:18:35.021 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:18:35 np0005486808 nova_compute[259627]: 2025-10-14 09:18:35.048 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:18:35 np0005486808 nova_compute[259627]: 2025-10-14 09:18:35.049 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.181s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:18:35 np0005486808 nova_compute[259627]: 2025-10-14 09:18:35.617 2 DEBUG oslo_concurrency.lockutils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Acquiring lock "333c933a-d8e8-42b0-ab77-72546d8ab982" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:18:35 np0005486808 nova_compute[259627]: 2025-10-14 09:18:35.618 2 DEBUG oslo_concurrency.lockutils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Lock "333c933a-d8e8-42b0-ab77-72546d8ab982" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:18:35 np0005486808 nova_compute[259627]: 2025-10-14 09:18:35.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:35 np0005486808 nova_compute[259627]: 2025-10-14 09:18:35.665 2 DEBUG nova.compute.manager [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:18:35 np0005486808 nova_compute[259627]: 2025-10-14 09:18:35.766 2 DEBUG oslo_concurrency.lockutils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:18:35 np0005486808 nova_compute[259627]: 2025-10-14 09:18:35.767 2 DEBUG oslo_concurrency.lockutils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:18:35 np0005486808 nova_compute[259627]: 2025-10-14 09:18:35.775 2 DEBUG nova.virt.hardware [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:18:35 np0005486808 nova_compute[259627]: 2025-10-14 09:18:35.776 2 INFO nova.compute.claims [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:18:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1939: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 20 KiB/s wr, 104 op/s
Oct 14 05:18:36 np0005486808 podman[370207]: 2025-10-14 09:18:36.683923697 +0000 UTC m=+0.092820549 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 14 05:18:36 np0005486808 podman[370206]: 2025-10-14 09:18:36.699158432 +0000 UTC m=+0.111085739 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 05:18:36 np0005486808 nova_compute[259627]: 2025-10-14 09:18:36.765 2 DEBUG oslo_concurrency.processutils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:18:37 np0005486808 nova_compute[259627]: 2025-10-14 09:18:37.025 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:18:37 np0005486808 nova_compute[259627]: 2025-10-14 09:18:37.027 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:18:37 np0005486808 nova_compute[259627]: 2025-10-14 09:18:37.028 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:18:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:37.177 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:13:a8 10.100.0.2 2001:db8::f816:3eff:fe43:13a8'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe43:13a8/64', 'neutron:device_id': 'ovnmeta-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d0f6a07b344997b67dd4221ec5a35a', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9122ea86-ed84-4854-b541-8d32cf1d8ec5, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=09fe180c-b874-4a50-95c9-e16dfe99d5f0) old=Port_Binding(mac=['fa:16:3e:43:13:a8 2001:db8::f816:3eff:fe43:13a8'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe43:13a8/64', 'neutron:device_id': 'ovnmeta-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d0f6a07b344997b67dd4221ec5a35a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:18:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:37.179 162547 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 09fe180c-b874-4a50-95c9-e16dfe99d5f0 in datapath 1ef50a42-0521-4feb-8502-83f9d20484ce updated#033[00m
Oct 14 05:18:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:37.181 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1ef50a42-0521-4feb-8502-83f9d20484ce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:18:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:37.182 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[84ab5b71-0af2-4f7d-bb98-74afc1a24e2e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:18:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:18:37 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/587226045' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:18:37 np0005486808 nova_compute[259627]: 2025-10-14 09:18:37.204 2 DEBUG oslo_concurrency.processutils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:18:37 np0005486808 nova_compute[259627]: 2025-10-14 09:18:37.211 2 DEBUG nova.compute.provider_tree [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:18:37 np0005486808 nova_compute[259627]: 2025-10-14 09:18:37.229 2 DEBUG nova.scheduler.client.report [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:18:37 np0005486808 nova_compute[259627]: 2025-10-14 09:18:37.256 2 DEBUG oslo_concurrency.lockutils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.489s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:18:37 np0005486808 nova_compute[259627]: 2025-10-14 09:18:37.257 2 DEBUG nova.compute.manager [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:18:37 np0005486808 nova_compute[259627]: 2025-10-14 09:18:37.324 2 DEBUG nova.compute.manager [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:18:37 np0005486808 nova_compute[259627]: 2025-10-14 09:18:37.325 2 DEBUG nova.network.neutron [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:18:37 np0005486808 nova_compute[259627]: 2025-10-14 09:18:37.484 2 INFO nova.virt.libvirt.driver [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:18:37 np0005486808 nova_compute[259627]: 2025-10-14 09:18:37.647 2 DEBUG nova.compute.manager [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:18:37 np0005486808 nova_compute[259627]: 2025-10-14 09:18:37.787 2 DEBUG nova.compute.manager [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:18:37 np0005486808 nova_compute[259627]: 2025-10-14 09:18:37.791 2 DEBUG nova.virt.libvirt.driver [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:18:37 np0005486808 nova_compute[259627]: 2025-10-14 09:18:37.792 2 INFO nova.virt.libvirt.driver [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Creating image(s)#033[00m
Oct 14 05:18:37 np0005486808 nova_compute[259627]: 2025-10-14 09:18:37.829 2 DEBUG nova.storage.rbd_utils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] rbd image 333c933a-d8e8-42b0-ab77-72546d8ab982_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:18:37 np0005486808 nova_compute[259627]: 2025-10-14 09:18:37.869 2 DEBUG nova.storage.rbd_utils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] rbd image 333c933a-d8e8-42b0-ab77-72546d8ab982_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:18:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:18:37 np0005486808 nova_compute[259627]: 2025-10-14 09:18:37.894 2 DEBUG nova.storage.rbd_utils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] rbd image 333c933a-d8e8-42b0-ab77-72546d8ab982_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:18:37 np0005486808 nova_compute[259627]: 2025-10-14 09:18:37.899 2 DEBUG oslo_concurrency.lockutils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Acquiring lock "499f01f883833265e17c7f0a92fa640265d12fd1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:18:37 np0005486808 nova_compute[259627]: 2025-10-14 09:18:37.900 2 DEBUG oslo_concurrency.lockutils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Lock "499f01f883833265e17c7f0a92fa640265d12fd1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:18:37 np0005486808 nova_compute[259627]: 2025-10-14 09:18:37.903 2 DEBUG nova.network.neutron [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct 14 05:18:37 np0005486808 nova_compute[259627]: 2025-10-14 09:18:37.903 2 DEBUG nova.compute.manager [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:18:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1940: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 20 KiB/s wr, 104 op/s
Oct 14 05:18:38 np0005486808 nova_compute[259627]: 2025-10-14 09:18:38.297 2 DEBUG nova.virt.libvirt.imagebackend [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Image locations are: [{'url': 'rbd://c49aadb6-9b04-5cb1-8f5f-4c91676c568e/images/727bbed1-74a5-4c59-8492-67ee2fd49862/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://c49aadb6-9b04-5cb1-8f5f-4c91676c568e/images/727bbed1-74a5-4c59-8492-67ee2fd49862/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct 14 05:18:38 np0005486808 nova_compute[259627]: 2025-10-14 09:18:38.367 2 DEBUG nova.virt.libvirt.imagebackend [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Selected location: {'url': 'rbd://c49aadb6-9b04-5cb1-8f5f-4c91676c568e/images/727bbed1-74a5-4c59-8492-67ee2fd49862/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Oct 14 05:18:38 np0005486808 nova_compute[259627]: 2025-10-14 09:18:38.369 2 DEBUG nova.storage.rbd_utils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] cloning images/727bbed1-74a5-4c59-8492-67ee2fd49862@snap to None/333c933a-d8e8-42b0-ab77-72546d8ab982_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct 14 05:18:38 np0005486808 nova_compute[259627]: 2025-10-14 09:18:38.513 2 DEBUG oslo_concurrency.lockutils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Lock "499f01f883833265e17c7f0a92fa640265d12fd1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:18:38 np0005486808 nova_compute[259627]: 2025-10-14 09:18:38.700 2 DEBUG nova.storage.rbd_utils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] resizing rbd image 333c933a-d8e8-42b0-ab77-72546d8ab982_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:18:38 np0005486808 nova_compute[259627]: 2025-10-14 09:18:38.809 2 DEBUG nova.objects.instance [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Lazy-loading 'migration_context' on Instance uuid 333c933a-d8e8-42b0-ab77-72546d8ab982 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:18:38 np0005486808 nova_compute[259627]: 2025-10-14 09:18:38.832 2 DEBUG nova.virt.libvirt.driver [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:18:38 np0005486808 nova_compute[259627]: 2025-10-14 09:18:38.833 2 DEBUG nova.virt.libvirt.driver [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Ensure instance console log exists: /var/lib/nova/instances/333c933a-d8e8-42b0-ab77-72546d8ab982/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:18:38 np0005486808 nova_compute[259627]: 2025-10-14 09:18:38.834 2 DEBUG oslo_concurrency.lockutils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:18:38 np0005486808 nova_compute[259627]: 2025-10-14 09:18:38.834 2 DEBUG oslo_concurrency.lockutils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:18:38 np0005486808 nova_compute[259627]: 2025-10-14 09:18:38.835 2 DEBUG oslo_concurrency.lockutils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:18:38 np0005486808 nova_compute[259627]: 2025-10-14 09:18:38.838 2 DEBUG nova.virt.libvirt.driver [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='ef3c7bff170b5c0c8fa1a2079ceceb0f',container_format='bare',created_at=2025-10-14T09:18:32Z,direct_url=<?>,disk_format='raw',id=727bbed1-74a5-4c59-8492-67ee2fd49862,min_disk=0,min_ram=0,name='tempest-image-dependency-test-705407316',owner='8af92d0b8536433fbf169b1e300f923f',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-10-14T09:18:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '727bbed1-74a5-4c59-8492-67ee2fd49862'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:18:38 np0005486808 nova_compute[259627]: 2025-10-14 09:18:38.846 2 WARNING nova.virt.libvirt.driver [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:18:38 np0005486808 nova_compute[259627]: 2025-10-14 09:18:38.855 2 DEBUG nova.virt.libvirt.host [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:18:38 np0005486808 nova_compute[259627]: 2025-10-14 09:18:38.856 2 DEBUG nova.virt.libvirt.host [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:18:38 np0005486808 nova_compute[259627]: 2025-10-14 09:18:38.863 2 DEBUG nova.virt.libvirt.host [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:18:38 np0005486808 nova_compute[259627]: 2025-10-14 09:18:38.864 2 DEBUG nova.virt.libvirt.host [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:18:38 np0005486808 nova_compute[259627]: 2025-10-14 09:18:38.866 2 DEBUG nova.virt.libvirt.driver [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:18:38 np0005486808 nova_compute[259627]: 2025-10-14 09:18:38.867 2 DEBUG nova.virt.hardware [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='ef3c7bff170b5c0c8fa1a2079ceceb0f',container_format='bare',created_at=2025-10-14T09:18:32Z,direct_url=<?>,disk_format='raw',id=727bbed1-74a5-4c59-8492-67ee2fd49862,min_disk=0,min_ram=0,name='tempest-image-dependency-test-705407316',owner='8af92d0b8536433fbf169b1e300f923f',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-10-14T09:18:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:18:38 np0005486808 nova_compute[259627]: 2025-10-14 09:18:38.868 2 DEBUG nova.virt.hardware [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:18:38 np0005486808 nova_compute[259627]: 2025-10-14 09:18:38.869 2 DEBUG nova.virt.hardware [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:18:38 np0005486808 nova_compute[259627]: 2025-10-14 09:18:38.869 2 DEBUG nova.virt.hardware [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:18:38 np0005486808 nova_compute[259627]: 2025-10-14 09:18:38.870 2 DEBUG nova.virt.hardware [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:18:38 np0005486808 nova_compute[259627]: 2025-10-14 09:18:38.871 2 DEBUG nova.virt.hardware [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:18:38 np0005486808 nova_compute[259627]: 2025-10-14 09:18:38.872 2 DEBUG nova.virt.hardware [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:18:38 np0005486808 nova_compute[259627]: 2025-10-14 09:18:38.873 2 DEBUG nova.virt.hardware [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:18:38 np0005486808 nova_compute[259627]: 2025-10-14 09:18:38.874 2 DEBUG nova.virt.hardware [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:18:38 np0005486808 nova_compute[259627]: 2025-10-14 09:18:38.874 2 DEBUG nova.virt.hardware [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:18:38 np0005486808 nova_compute[259627]: 2025-10-14 09:18:38.875 2 DEBUG nova.virt.hardware [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:18:38 np0005486808 nova_compute[259627]: 2025-10-14 09:18:38.882 2 DEBUG oslo_concurrency.processutils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:18:38 np0005486808 nova_compute[259627]: 2025-10-14 09:18:38.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:38 np0005486808 nova_compute[259627]: 2025-10-14 09:18:38.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:18:38 np0005486808 nova_compute[259627]: 2025-10-14 09:18:38.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:18:38 np0005486808 nova_compute[259627]: 2025-10-14 09:18:38.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 05:18:39 np0005486808 nova_compute[259627]: 2025-10-14 09:18:39.009 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct 14 05:18:39 np0005486808 nova_compute[259627]: 2025-10-14 09:18:39.259 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-22c6f034-c238-499b-8b07-1c0f5879297e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:18:39 np0005486808 nova_compute[259627]: 2025-10-14 09:18:39.259 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-22c6f034-c238-499b-8b07-1c0f5879297e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:18:39 np0005486808 nova_compute[259627]: 2025-10-14 09:18:39.260 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 14 05:18:39 np0005486808 nova_compute[259627]: 2025-10-14 09:18:39.260 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 22c6f034-c238-499b-8b07-1c0f5879297e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:18:39 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:18:39 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2320813269' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:18:39 np0005486808 nova_compute[259627]: 2025-10-14 09:18:39.349 2 DEBUG oslo_concurrency.processutils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:18:39 np0005486808 nova_compute[259627]: 2025-10-14 09:18:39.379 2 DEBUG nova.storage.rbd_utils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] rbd image 333c933a-d8e8-42b0-ab77-72546d8ab982_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:18:39 np0005486808 nova_compute[259627]: 2025-10-14 09:18:39.385 2 DEBUG oslo_concurrency.processutils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:18:39 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:18:39 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/193110767' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:18:39 np0005486808 nova_compute[259627]: 2025-10-14 09:18:39.819 2 DEBUG oslo_concurrency.processutils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:18:39 np0005486808 nova_compute[259627]: 2025-10-14 09:18:39.822 2 DEBUG nova.objects.instance [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Lazy-loading 'pci_devices' on Instance uuid 333c933a-d8e8-42b0-ab77-72546d8ab982 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:18:39 np0005486808 nova_compute[259627]: 2025-10-14 09:18:39.839 2 DEBUG nova.virt.libvirt.driver [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:18:39 np0005486808 nova_compute[259627]:  <uuid>333c933a-d8e8-42b0-ab77-72546d8ab982</uuid>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:  <name>instance-00000070</name>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:18:39 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:      <nova:name>instance-depend-image</nova:name>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:18:38</nova:creationTime>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:18:39 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:        <nova:user uuid="bf09463654864456a851828eb4e88fb2">tempest-ImageDependencyTests-1811756685-project-member</nova:user>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:        <nova:project uuid="8af92d0b8536433fbf169b1e300f923f">tempest-ImageDependencyTests-1811756685</nova:project>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="727bbed1-74a5-4c59-8492-67ee2fd49862"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:      <nova:ports/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:      <entry name="serial">333c933a-d8e8-42b0-ab77-72546d8ab982</entry>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:      <entry name="uuid">333c933a-d8e8-42b0-ab77-72546d8ab982</entry>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:18:39 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/333c933a-d8e8-42b0-ab77-72546d8ab982_disk">
Oct 14 05:18:39 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:18:39 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:18:39 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/333c933a-d8e8-42b0-ab77-72546d8ab982_disk.config">
Oct 14 05:18:39 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:18:39 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:18:39 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/333c933a-d8e8-42b0-ab77-72546d8ab982/console.log" append="off"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:18:39 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:18:39 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:18:39 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:18:39 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:18:39 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:18:39 np0005486808 nova_compute[259627]: 2025-10-14 09:18:39.901 2 DEBUG nova.virt.libvirt.driver [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:18:39 np0005486808 nova_compute[259627]: 2025-10-14 09:18:39.902 2 DEBUG nova.virt.libvirt.driver [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:18:39 np0005486808 nova_compute[259627]: 2025-10-14 09:18:39.903 2 INFO nova.virt.libvirt.driver [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Using config drive#033[00m
Oct 14 05:18:39 np0005486808 nova_compute[259627]: 2025-10-14 09:18:39.937 2 DEBUG nova.storage.rbd_utils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] rbd image 333c933a-d8e8-42b0-ab77-72546d8ab982_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:18:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1941: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 19 KiB/s wr, 93 op/s
Oct 14 05:18:40 np0005486808 nova_compute[259627]: 2025-10-14 09:18:40.238 2 INFO nova.virt.libvirt.driver [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Creating config drive at /var/lib/nova/instances/333c933a-d8e8-42b0-ab77-72546d8ab982/disk.config#033[00m
Oct 14 05:18:40 np0005486808 nova_compute[259627]: 2025-10-14 09:18:40.246 2 DEBUG oslo_concurrency.processutils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/333c933a-d8e8-42b0-ab77-72546d8ab982/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_0amxw25 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:18:40 np0005486808 nova_compute[259627]: 2025-10-14 09:18:40.408 2 DEBUG oslo_concurrency.processutils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/333c933a-d8e8-42b0-ab77-72546d8ab982/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_0amxw25" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:18:40 np0005486808 nova_compute[259627]: 2025-10-14 09:18:40.451 2 DEBUG nova.storage.rbd_utils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] rbd image 333c933a-d8e8-42b0-ab77-72546d8ab982_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:18:40 np0005486808 nova_compute[259627]: 2025-10-14 09:18:40.464 2 DEBUG oslo_concurrency.processutils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/333c933a-d8e8-42b0-ab77-72546d8ab982/disk.config 333c933a-d8e8-42b0-ab77-72546d8ab982_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:18:40 np0005486808 nova_compute[259627]: 2025-10-14 09:18:40.617 2 DEBUG oslo_concurrency.processutils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/333c933a-d8e8-42b0-ab77-72546d8ab982/disk.config 333c933a-d8e8-42b0-ab77-72546d8ab982_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:18:40 np0005486808 nova_compute[259627]: 2025-10-14 09:18:40.618 2 INFO nova.virt.libvirt.driver [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Deleting local config drive /var/lib/nova/instances/333c933a-d8e8-42b0-ab77-72546d8ab982/disk.config because it was imported into RBD.#033[00m
Oct 14 05:18:40 np0005486808 nova_compute[259627]: 2025-10-14 09:18:40.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:40 np0005486808 systemd-machined[214636]: New machine qemu-142-instance-00000070.
Oct 14 05:18:40 np0005486808 systemd[1]: Started Virtual Machine qemu-142-instance-00000070.
Oct 14 05:18:41 np0005486808 nova_compute[259627]: 2025-10-14 09:18:40.999 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Updating instance_info_cache with network_info: [{"id": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "address": "fa:16:3e:e4:68:6d", "network": {"id": "6ac21096-7818-4990-8868-baedcdcf8f83", "bridge": "br-int", "label": "tempest-network-smoke--748804487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9053aa9d-27", "ovs_interfaceid": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:18:41 np0005486808 nova_compute[259627]: 2025-10-14 09:18:41.015 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-22c6f034-c238-499b-8b07-1c0f5879297e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:18:41 np0005486808 nova_compute[259627]: 2025-10-14 09:18:41.015 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 14 05:18:41 np0005486808 nova_compute[259627]: 2025-10-14 09:18:41.016 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:18:41 np0005486808 nova_compute[259627]: 2025-10-14 09:18:41.016 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:18:41 np0005486808 nova_compute[259627]: 2025-10-14 09:18:41.017 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:18:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:41.123 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:13:a8 2001:db8::f816:3eff:fe43:13a8'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe43:13a8/64', 'neutron:device_id': 'ovnmeta-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d0f6a07b344997b67dd4221ec5a35a', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9122ea86-ed84-4854-b541-8d32cf1d8ec5, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=09fe180c-b874-4a50-95c9-e16dfe99d5f0) old=Port_Binding(mac=['fa:16:3e:43:13:a8 10.100.0.2 2001:db8::f816:3eff:fe43:13a8'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe43:13a8/64', 'neutron:device_id': 'ovnmeta-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d0f6a07b344997b67dd4221ec5a35a', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:18:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:41.124 162547 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 09fe180c-b874-4a50-95c9-e16dfe99d5f0 in datapath 1ef50a42-0521-4feb-8502-83f9d20484ce updated#033[00m
Oct 14 05:18:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:41.126 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1ef50a42-0521-4feb-8502-83f9d20484ce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:18:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:41.127 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1773c96f-97f2-419c-bbb8-308d6df8514d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:18:41 np0005486808 nova_compute[259627]: 2025-10-14 09:18:41.625 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433521.6247602, 333c933a-d8e8-42b0-ab77-72546d8ab982 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:18:41 np0005486808 nova_compute[259627]: 2025-10-14 09:18:41.626 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:18:41 np0005486808 nova_compute[259627]: 2025-10-14 09:18:41.629 2 DEBUG nova.compute.manager [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:18:41 np0005486808 nova_compute[259627]: 2025-10-14 09:18:41.629 2 DEBUG nova.virt.libvirt.driver [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:18:41 np0005486808 nova_compute[259627]: 2025-10-14 09:18:41.633 2 INFO nova.virt.libvirt.driver [-] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Instance spawned successfully.#033[00m
Oct 14 05:18:41 np0005486808 nova_compute[259627]: 2025-10-14 09:18:41.633 2 DEBUG nova.virt.libvirt.driver [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:18:41 np0005486808 nova_compute[259627]: 2025-10-14 09:18:41.648 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:18:41 np0005486808 nova_compute[259627]: 2025-10-14 09:18:41.653 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:18:41 np0005486808 nova_compute[259627]: 2025-10-14 09:18:41.659 2 DEBUG nova.virt.libvirt.driver [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:18:41 np0005486808 nova_compute[259627]: 2025-10-14 09:18:41.660 2 DEBUG nova.virt.libvirt.driver [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:18:41 np0005486808 nova_compute[259627]: 2025-10-14 09:18:41.660 2 DEBUG nova.virt.libvirt.driver [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:18:41 np0005486808 nova_compute[259627]: 2025-10-14 09:18:41.661 2 DEBUG nova.virt.libvirt.driver [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:18:41 np0005486808 nova_compute[259627]: 2025-10-14 09:18:41.661 2 DEBUG nova.virt.libvirt.driver [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:18:41 np0005486808 nova_compute[259627]: 2025-10-14 09:18:41.662 2 DEBUG nova.virt.libvirt.driver [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:18:41 np0005486808 nova_compute[259627]: 2025-10-14 09:18:41.702 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:18:41 np0005486808 nova_compute[259627]: 2025-10-14 09:18:41.703 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433521.6251338, 333c933a-d8e8-42b0-ab77-72546d8ab982 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:18:41 np0005486808 nova_compute[259627]: 2025-10-14 09:18:41.703 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] VM Started (Lifecycle Event)#033[00m
Oct 14 05:18:41 np0005486808 nova_compute[259627]: 2025-10-14 09:18:41.736 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:18:41 np0005486808 nova_compute[259627]: 2025-10-14 09:18:41.740 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:18:41 np0005486808 nova_compute[259627]: 2025-10-14 09:18:41.752 2 INFO nova.compute.manager [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Took 3.96 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:18:41 np0005486808 nova_compute[259627]: 2025-10-14 09:18:41.752 2 DEBUG nova.compute.manager [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:18:41 np0005486808 nova_compute[259627]: 2025-10-14 09:18:41.765 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:18:41 np0005486808 nova_compute[259627]: 2025-10-14 09:18:41.819 2 INFO nova.compute.manager [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Took 6.08 seconds to build instance.#033[00m
Oct 14 05:18:41 np0005486808 nova_compute[259627]: 2025-10-14 09:18:41.838 2 DEBUG oslo_concurrency.lockutils [None req-17fbb16f-d792-410b-be5c-21d5741a2bf1 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Lock "333c933a-d8e8-42b0-ab77-72546d8ab982" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.220s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:18:41 np0005486808 nova_compute[259627]: 2025-10-14 09:18:41.993 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:18:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1942: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 7.3 KiB/s wr, 133 op/s
Oct 14 05:18:42 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:42.143 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:13:a8 10.100.0.2 2001:db8::f816:3eff:fe43:13a8'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe43:13a8/64', 'neutron:device_id': 'ovnmeta-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d0f6a07b344997b67dd4221ec5a35a', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9122ea86-ed84-4854-b541-8d32cf1d8ec5, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=09fe180c-b874-4a50-95c9-e16dfe99d5f0) old=Port_Binding(mac=['fa:16:3e:43:13:a8 2001:db8::f816:3eff:fe43:13a8'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe43:13a8/64', 'neutron:device_id': 'ovnmeta-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d0f6a07b344997b67dd4221ec5a35a', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:18:42 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:42.146 162547 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 09fe180c-b874-4a50-95c9-e16dfe99d5f0 in datapath 1ef50a42-0521-4feb-8502-83f9d20484ce updated#033[00m
Oct 14 05:18:42 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:42.148 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1ef50a42-0521-4feb-8502-83f9d20484ce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:18:42 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:42.149 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[09149002-dd75-45f4-9a0c-afee462a20de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:18:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:18:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:18:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:18:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:18:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:18:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001108513218727616 of space, bias 1.0, pg target 0.3325539656182848 quantized to 32 (current 32)
Oct 14 05:18:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:18:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:18:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:18:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:18:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:18:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006663670272514163 of space, bias 1.0, pg target 0.19991010817542487 quantized to 32 (current 32)
Oct 14 05:18:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:18:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:18:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:18:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:18:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:18:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:18:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:18:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:18:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:18:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:18:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:18:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:18:43 np0005486808 nova_compute[259627]: 2025-10-14 09:18:43.273 2 DEBUG nova.compute.manager [None req-b54196ed-9395-4f97-970f-53d2667eba31 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:18:43 np0005486808 nova_compute[259627]: 2025-10-14 09:18:43.323 2 INFO nova.compute.manager [None req-b54196ed-9395-4f97-970f-53d2667eba31 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] instance snapshotting#033[00m
Oct 14 05:18:43 np0005486808 ovn_controller[152662]: 2025-10-14T09:18:43Z|00116|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:44:0a:a6 10.100.0.23
Oct 14 05:18:43 np0005486808 ovn_controller[152662]: 2025-10-14T09:18:43Z|00117|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:44:0a:a6 10.100.0.23
Oct 14 05:18:43 np0005486808 nova_compute[259627]: 2025-10-14 09:18:43.843 2 INFO nova.virt.libvirt.driver [None req-b54196ed-9395-4f97-970f-53d2667eba31 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Beginning live snapshot process#033[00m
Oct 14 05:18:43 np0005486808 nova_compute[259627]: 2025-10-14 09:18:43.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:43 np0005486808 nova_compute[259627]: 2025-10-14 09:18:43.995 2 DEBUG nova.storage.rbd_utils [None req-b54196ed-9395-4f97-970f-53d2667eba31 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] creating snapshot(b16323738b854d8aa86180d83eca67ff) on rbd image(333c933a-d8e8-42b0-ab77-72546d8ab982_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 14 05:18:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1943: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 6.4 KiB/s wr, 116 op/s
Oct 14 05:18:44 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e267 do_prune osdmap full prune enabled
Oct 14 05:18:44 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e268 e268: 3 total, 3 up, 3 in
Oct 14 05:18:44 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e268: 3 total, 3 up, 3 in
Oct 14 05:18:44 np0005486808 nova_compute[259627]: 2025-10-14 09:18:44.782 2 DEBUG nova.storage.rbd_utils [None req-b54196ed-9395-4f97-970f-53d2667eba31 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] cloning vms/333c933a-d8e8-42b0-ab77-72546d8ab982_disk@b16323738b854d8aa86180d83eca67ff to images/4ed7be1c-eb92-4aed-821e-c93c8b3e0961 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct 14 05:18:44 np0005486808 nova_compute[259627]: 2025-10-14 09:18:44.922 2 DEBUG nova.storage.rbd_utils [None req-b54196ed-9395-4f97-970f-53d2667eba31 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] flattening images/4ed7be1c-eb92-4aed-821e-c93c8b3e0961 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct 14 05:18:45 np0005486808 nova_compute[259627]: 2025-10-14 09:18:45.088 2 DEBUG nova.storage.rbd_utils [None req-b54196ed-9395-4f97-970f-53d2667eba31 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] removing snapshot(b16323738b854d8aa86180d83eca67ff) on rbd image(333c933a-d8e8-42b0-ab77-72546d8ab982_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct 14 05:18:45 np0005486808 nova_compute[259627]: 2025-10-14 09:18:45.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e268 do_prune osdmap full prune enabled
Oct 14 05:18:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e269 e269: 3 total, 3 up, 3 in
Oct 14 05:18:45 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e269: 3 total, 3 up, 3 in
Oct 14 05:18:45 np0005486808 nova_compute[259627]: 2025-10-14 09:18:45.753 2 DEBUG nova.storage.rbd_utils [None req-b54196ed-9395-4f97-970f-53d2667eba31 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] creating snapshot(snap) on rbd image(4ed7be1c-eb92-4aed-821e-c93c8b3e0961) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 14 05:18:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1946: 305 pgs: 305 active+clean; 200 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 582 KiB/s rd, 3.2 MiB/s wr, 234 op/s
Oct 14 05:18:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:46.539 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:13:a8 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d0f6a07b344997b67dd4221ec5a35a', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9122ea86-ed84-4854-b541-8d32cf1d8ec5, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=09fe180c-b874-4a50-95c9-e16dfe99d5f0) old=Port_Binding(mac=['fa:16:3e:43:13:a8 10.100.0.2 2001:db8::f816:3eff:fe43:13a8'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe43:13a8/64', 'neutron:device_id': 'ovnmeta-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d0f6a07b344997b67dd4221ec5a35a', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:18:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:46.542 162547 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 09fe180c-b874-4a50-95c9-e16dfe99d5f0 in datapath 1ef50a42-0521-4feb-8502-83f9d20484ce updated#033[00m
Oct 14 05:18:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:46.544 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1ef50a42-0521-4feb-8502-83f9d20484ce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:18:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:46.545 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c37f9c5d-250f-4975-8fe1-a48ceb4a58ab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:18:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e269 do_prune osdmap full prune enabled
Oct 14 05:18:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e270 e270: 3 total, 3 up, 3 in
Oct 14 05:18:46 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e270: 3 total, 3 up, 3 in
Oct 14 05:18:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:47.403 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:13:a8 10.100.0.2 2001:db8::f816:3eff:fe43:13a8'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe43:13a8/64', 'neutron:device_id': 'ovnmeta-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d0f6a07b344997b67dd4221ec5a35a', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9122ea86-ed84-4854-b541-8d32cf1d8ec5, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=09fe180c-b874-4a50-95c9-e16dfe99d5f0) old=Port_Binding(mac=['fa:16:3e:43:13:a8 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d0f6a07b344997b67dd4221ec5a35a', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:18:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:47.405 162547 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 09fe180c-b874-4a50-95c9-e16dfe99d5f0 in datapath 1ef50a42-0521-4feb-8502-83f9d20484ce updated#033[00m
Oct 14 05:18:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:47.406 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1ef50a42-0521-4feb-8502-83f9d20484ce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:18:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:47.407 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[648a4e4c-a346-4f59-b723-d62b95fa87e6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:18:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:18:47 np0005486808 nova_compute[259627]: 2025-10-14 09:18:47.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:18:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1948: 305 pgs: 305 active+clean; 200 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 721 KiB/s rd, 4.3 MiB/s wr, 243 op/s
Oct 14 05:18:48 np0005486808 nova_compute[259627]: 2025-10-14 09:18:48.560 2 INFO nova.virt.libvirt.driver [None req-b54196ed-9395-4f97-970f-53d2667eba31 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Snapshot image upload complete#033[00m
Oct 14 05:18:48 np0005486808 nova_compute[259627]: 2025-10-14 09:18:48.560 2 INFO nova.compute.manager [None req-b54196ed-9395-4f97-970f-53d2667eba31 bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Took 5.23 seconds to snapshot the instance on the hypervisor.#033[00m
Oct 14 05:18:48 np0005486808 nova_compute[259627]: 2025-10-14 09:18:48.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1949: 305 pgs: 305 active+clean; 200 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 723 KiB/s rd, 4.3 MiB/s wr, 249 op/s
Oct 14 05:18:50 np0005486808 nova_compute[259627]: 2025-10-14 09:18:50.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:50 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e270 do_prune osdmap full prune enabled
Oct 14 05:18:50 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e271 e271: 3 total, 3 up, 3 in
Oct 14 05:18:50 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e271: 3 total, 3 up, 3 in
Oct 14 05:18:51 np0005486808 nova_compute[259627]: 2025-10-14 09:18:51.425 2 DEBUG oslo_concurrency.lockutils [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "78ff2018-e6dc-4337-8a74-90e5a3963a12" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:18:51 np0005486808 nova_compute[259627]: 2025-10-14 09:18:51.426 2 DEBUG oslo_concurrency.lockutils [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "78ff2018-e6dc-4337-8a74-90e5a3963a12" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:18:51 np0005486808 nova_compute[259627]: 2025-10-14 09:18:51.426 2 DEBUG oslo_concurrency.lockutils [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "78ff2018-e6dc-4337-8a74-90e5a3963a12-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:18:51 np0005486808 nova_compute[259627]: 2025-10-14 09:18:51.426 2 DEBUG oslo_concurrency.lockutils [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "78ff2018-e6dc-4337-8a74-90e5a3963a12-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:18:51 np0005486808 nova_compute[259627]: 2025-10-14 09:18:51.427 2 DEBUG oslo_concurrency.lockutils [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "78ff2018-e6dc-4337-8a74-90e5a3963a12-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:18:51 np0005486808 nova_compute[259627]: 2025-10-14 09:18:51.428 2 INFO nova.compute.manager [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Terminating instance#033[00m
Oct 14 05:18:51 np0005486808 nova_compute[259627]: 2025-10-14 09:18:51.429 2 DEBUG nova.compute.manager [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:18:51 np0005486808 kernel: tapda41936a-3d (unregistering): left promiscuous mode
Oct 14 05:18:51 np0005486808 NetworkManager[44885]: <info>  [1760433531.4932] device (tapda41936a-3d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:18:51 np0005486808 nova_compute[259627]: 2025-10-14 09:18:51.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:51 np0005486808 ovn_controller[152662]: 2025-10-14T09:18:51Z|01179|binding|INFO|Releasing lport da41936a-3d81-49ed-9021-22d56f07b75b from this chassis (sb_readonly=0)
Oct 14 05:18:51 np0005486808 ovn_controller[152662]: 2025-10-14T09:18:51Z|01180|binding|INFO|Setting lport da41936a-3d81-49ed-9021-22d56f07b75b down in Southbound
Oct 14 05:18:51 np0005486808 ovn_controller[152662]: 2025-10-14T09:18:51Z|01181|binding|INFO|Removing iface tapda41936a-3d ovn-installed in OVS
Oct 14 05:18:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:51.523 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:0a:a6 10.100.0.23'], port_security=['fa:16:3e:44:0a:a6 10.100.0.23'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.23/28', 'neutron:device_id': '78ff2018-e6dc-4337-8a74-90e5a3963a12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f71257d1-6873-4759-898a-ce32e06e2fe5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e9d71fdc-7fb6-4c43-8934-1eec5337be2d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0cdc378b-1288-436b-9821-34981ec9f7fe, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=da41936a-3d81-49ed-9021-22d56f07b75b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:18:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:51.525 162547 INFO neutron.agent.ovn.metadata.agent [-] Port da41936a-3d81-49ed-9021-22d56f07b75b in datapath f71257d1-6873-4759-898a-ce32e06e2fe5 unbound from our chassis#033[00m
Oct 14 05:18:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:51.527 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f71257d1-6873-4759-898a-ce32e06e2fe5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:18:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:51.528 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fbb9632e-92ce-4dc6-8c1d-19e4c73ce1d9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:18:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:51.532 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5 namespace which is not needed anymore#033[00m
Oct 14 05:18:51 np0005486808 nova_compute[259627]: 2025-10-14 09:18:51.537 2 DEBUG oslo_concurrency.lockutils [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Acquiring lock "333c933a-d8e8-42b0-ab77-72546d8ab982" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:18:51 np0005486808 nova_compute[259627]: 2025-10-14 09:18:51.537 2 DEBUG oslo_concurrency.lockutils [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Lock "333c933a-d8e8-42b0-ab77-72546d8ab982" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:18:51 np0005486808 nova_compute[259627]: 2025-10-14 09:18:51.538 2 DEBUG oslo_concurrency.lockutils [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Acquiring lock "333c933a-d8e8-42b0-ab77-72546d8ab982-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:18:51 np0005486808 nova_compute[259627]: 2025-10-14 09:18:51.539 2 DEBUG oslo_concurrency.lockutils [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Lock "333c933a-d8e8-42b0-ab77-72546d8ab982-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:18:51 np0005486808 nova_compute[259627]: 2025-10-14 09:18:51.540 2 DEBUG oslo_concurrency.lockutils [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Lock "333c933a-d8e8-42b0-ab77-72546d8ab982-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:18:51 np0005486808 nova_compute[259627]: 2025-10-14 09:18:51.541 2 INFO nova.compute.manager [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Terminating instance#033[00m
Oct 14 05:18:51 np0005486808 nova_compute[259627]: 2025-10-14 09:18:51.542 2 DEBUG oslo_concurrency.lockutils [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Acquiring lock "refresh_cache-333c933a-d8e8-42b0-ab77-72546d8ab982" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:18:51 np0005486808 nova_compute[259627]: 2025-10-14 09:18:51.543 2 DEBUG oslo_concurrency.lockutils [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Acquired lock "refresh_cache-333c933a-d8e8-42b0-ab77-72546d8ab982" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:18:51 np0005486808 nova_compute[259627]: 2025-10-14 09:18:51.543 2 DEBUG nova.network.neutron [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:18:51 np0005486808 nova_compute[259627]: 2025-10-14 09:18:51.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:51 np0005486808 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d0000006f.scope: Deactivated successfully.
Oct 14 05:18:51 np0005486808 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d0000006f.scope: Consumed 13.058s CPU time.
Oct 14 05:18:51 np0005486808 systemd-machined[214636]: Machine qemu-141-instance-0000006f terminated.
Oct 14 05:18:51 np0005486808 nova_compute[259627]: 2025-10-14 09:18:51.666 2 INFO nova.virt.libvirt.driver [-] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Instance destroyed successfully.#033[00m
Oct 14 05:18:51 np0005486808 nova_compute[259627]: 2025-10-14 09:18:51.667 2 DEBUG nova.objects.instance [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'resources' on Instance uuid 78ff2018-e6dc-4337-8a74-90e5a3963a12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:18:51 np0005486808 nova_compute[259627]: 2025-10-14 09:18:51.687 2 DEBUG nova.virt.libvirt.vif [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:18:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-656534791',display_name='tempest-TestNetworkBasicOps-server-656534791',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-656534791',id=111,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqWOfZtqt05D/ybUIudjZC2rvrYHVQund30LctGyNDPNNhzNoX0uP/AuNuya2hh573jsTHFAB6yaR0srnVECa+MfsVPVgRLFSopv6Xf9JyA1XlxHXKy8htlY2LEMZWHEA==',key_name='tempest-TestNetworkBasicOps-500375878',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:18:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-qnjtb5gx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:18:31Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=78ff2018-e6dc-4337-8a74-90e5a3963a12,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "da41936a-3d81-49ed-9021-22d56f07b75b", "address": "fa:16:3e:44:0a:a6", "network": {"id": "f71257d1-6873-4759-898a-ce32e06e2fe5", "bridge": "br-int", "label": "tempest-network-smoke--1895773891", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda41936a-3d", "ovs_interfaceid": "da41936a-3d81-49ed-9021-22d56f07b75b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:18:51 np0005486808 nova_compute[259627]: 2025-10-14 09:18:51.687 2 DEBUG nova.network.os_vif_util [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "da41936a-3d81-49ed-9021-22d56f07b75b", "address": "fa:16:3e:44:0a:a6", "network": {"id": "f71257d1-6873-4759-898a-ce32e06e2fe5", "bridge": "br-int", "label": "tempest-network-smoke--1895773891", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda41936a-3d", "ovs_interfaceid": "da41936a-3d81-49ed-9021-22d56f07b75b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:18:51 np0005486808 nova_compute[259627]: 2025-10-14 09:18:51.688 2 DEBUG nova.network.os_vif_util [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:0a:a6,bridge_name='br-int',has_traffic_filtering=True,id=da41936a-3d81-49ed-9021-22d56f07b75b,network=Network(f71257d1-6873-4759-898a-ce32e06e2fe5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda41936a-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:18:51 np0005486808 nova_compute[259627]: 2025-10-14 09:18:51.688 2 DEBUG os_vif [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:0a:a6,bridge_name='br-int',has_traffic_filtering=True,id=da41936a-3d81-49ed-9021-22d56f07b75b,network=Network(f71257d1-6873-4759-898a-ce32e06e2fe5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda41936a-3d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:18:51 np0005486808 nova_compute[259627]: 2025-10-14 09:18:51.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:51 np0005486808 nova_compute[259627]: 2025-10-14 09:18:51.690 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda41936a-3d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:18:51 np0005486808 nova_compute[259627]: 2025-10-14 09:18:51.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:51 np0005486808 nova_compute[259627]: 2025-10-14 09:18:51.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:51 np0005486808 nova_compute[259627]: 2025-10-14 09:18:51.695 2 INFO os_vif [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:0a:a6,bridge_name='br-int',has_traffic_filtering=True,id=da41936a-3d81-49ed-9021-22d56f07b75b,network=Network(f71257d1-6873-4759-898a-ce32e06e2fe5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda41936a-3d')#033[00m
Oct 14 05:18:51 np0005486808 neutron-haproxy-ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5[369805]: [NOTICE]   (369829) : haproxy version is 2.8.14-c23fe91
Oct 14 05:18:51 np0005486808 neutron-haproxy-ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5[369805]: [NOTICE]   (369829) : path to executable is /usr/sbin/haproxy
Oct 14 05:18:51 np0005486808 neutron-haproxy-ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5[369805]: [WARNING]  (369829) : Exiting Master process...
Oct 14 05:18:51 np0005486808 neutron-haproxy-ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5[369805]: [WARNING]  (369829) : Exiting Master process...
Oct 14 05:18:51 np0005486808 neutron-haproxy-ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5[369805]: [ALERT]    (369829) : Current worker (369831) exited with code 143 (Terminated)
Oct 14 05:18:51 np0005486808 neutron-haproxy-ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5[369805]: [WARNING]  (369829) : All workers exited. Exiting... (0)
Oct 14 05:18:51 np0005486808 systemd[1]: libpod-c4b9568bccd483fddcfbf998357d85c1b39fa3eb9a4792c4d8a7f3b13a4e0cc3.scope: Deactivated successfully.
Oct 14 05:18:51 np0005486808 podman[370816]: 2025-10-14 09:18:51.712735881 +0000 UTC m=+0.051771866 container died c4b9568bccd483fddcfbf998357d85c1b39fa3eb9a4792c4d8a7f3b13a4e0cc3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 05:18:51 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c4b9568bccd483fddcfbf998357d85c1b39fa3eb9a4792c4d8a7f3b13a4e0cc3-userdata-shm.mount: Deactivated successfully.
Oct 14 05:18:51 np0005486808 systemd[1]: var-lib-containers-storage-overlay-06e7bed502d13946ec34b297bcd718011311b8b7aa0eaf61e9799a54475ca22e-merged.mount: Deactivated successfully.
Oct 14 05:18:51 np0005486808 podman[370816]: 2025-10-14 09:18:51.760278933 +0000 UTC m=+0.099314918 container cleanup c4b9568bccd483fddcfbf998357d85c1b39fa3eb9a4792c4d8a7f3b13a4e0cc3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 05:18:51 np0005486808 nova_compute[259627]: 2025-10-14 09:18:51.765 2 DEBUG nova.network.neutron [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:18:51 np0005486808 systemd[1]: libpod-conmon-c4b9568bccd483fddcfbf998357d85c1b39fa3eb9a4792c4d8a7f3b13a4e0cc3.scope: Deactivated successfully.
Oct 14 05:18:51 np0005486808 podman[370874]: 2025-10-14 09:18:51.825207503 +0000 UTC m=+0.041627647 container remove c4b9568bccd483fddcfbf998357d85c1b39fa3eb9a4792c4d8a7f3b13a4e0cc3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:18:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:51.831 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2da8c025-5fda-4f60-9de9-ee2f1976d91a]: (4, ('Tue Oct 14 09:18:51 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5 (c4b9568bccd483fddcfbf998357d85c1b39fa3eb9a4792c4d8a7f3b13a4e0cc3)\nc4b9568bccd483fddcfbf998357d85c1b39fa3eb9a4792c4d8a7f3b13a4e0cc3\nTue Oct 14 09:18:51 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5 (c4b9568bccd483fddcfbf998357d85c1b39fa3eb9a4792c4d8a7f3b13a4e0cc3)\nc4b9568bccd483fddcfbf998357d85c1b39fa3eb9a4792c4d8a7f3b13a4e0cc3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:18:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:51.833 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9e94e544-68fa-407b-a469-4f3c20d98fd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:18:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:51.833 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf71257d1-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:18:51 np0005486808 kernel: tapf71257d1-60: left promiscuous mode
Oct 14 05:18:51 np0005486808 nova_compute[259627]: 2025-10-14 09:18:51.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:51 np0005486808 nova_compute[259627]: 2025-10-14 09:18:51.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:51.853 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4ca52414-b9f3-405d-8e85-0031327217dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:18:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:51.872 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[88ec172a-d62b-49cc-b368-42fcd28a8a25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:18:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:51.873 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[80c891c5-9ddc-4c52-8c31-be4874821a3b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:18:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:51.886 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:13:a8 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d0f6a07b344997b67dd4221ec5a35a', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9122ea86-ed84-4854-b541-8d32cf1d8ec5, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=09fe180c-b874-4a50-95c9-e16dfe99d5f0) old=Port_Binding(mac=['fa:16:3e:43:13:a8 10.100.0.2 2001:db8::f816:3eff:fe43:13a8'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe43:13a8/64', 'neutron:device_id': 'ovnmeta-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d0f6a07b344997b67dd4221ec5a35a', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:18:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:51.889 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bbc52b1c-3db9-4dd6-97a4-6d768c8af77c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 728555, 'reachable_time': 35951, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 370889, 'error': None, 'target': 'ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:18:51 np0005486808 systemd[1]: run-netns-ovnmeta\x2df71257d1\x2d6873\x2d4759\x2d898a\x2dce32e06e2fe5.mount: Deactivated successfully.
Oct 14 05:18:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:51.893 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f71257d1-6873-4759-898a-ce32e06e2fe5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:18:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:51.893 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[3c120476-dd8b-4c87-8f1d-5ee01a38a52a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:18:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:51.894 162547 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 09fe180c-b874-4a50-95c9-e16dfe99d5f0 in datapath 1ef50a42-0521-4feb-8502-83f9d20484ce updated#033[00m
Oct 14 05:18:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:51.895 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1ef50a42-0521-4feb-8502-83f9d20484ce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:18:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:51.896 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[eda305c5-09b3-496b-b2e6-4fb2a69b2336]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:18:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1951: 305 pgs: 305 active+clean; 200 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 193 KiB/s rd, 639 KiB/s wr, 140 op/s
Oct 14 05:18:52 np0005486808 nova_compute[259627]: 2025-10-14 09:18:52.080 2 INFO nova.virt.libvirt.driver [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Deleting instance files /var/lib/nova/instances/78ff2018-e6dc-4337-8a74-90e5a3963a12_del#033[00m
Oct 14 05:18:52 np0005486808 nova_compute[259627]: 2025-10-14 09:18:52.081 2 INFO nova.virt.libvirt.driver [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Deletion of /var/lib/nova/instances/78ff2018-e6dc-4337-8a74-90e5a3963a12_del complete#033[00m
Oct 14 05:18:52 np0005486808 nova_compute[259627]: 2025-10-14 09:18:52.085 2 DEBUG nova.compute.manager [req-577832d0-ec4b-4b68-af42-1c6c7f430f82 req-7f219086-d4e7-436c-9bba-1efe0f71563a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Received event network-vif-unplugged-da41936a-3d81-49ed-9021-22d56f07b75b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:18:52 np0005486808 nova_compute[259627]: 2025-10-14 09:18:52.085 2 DEBUG oslo_concurrency.lockutils [req-577832d0-ec4b-4b68-af42-1c6c7f430f82 req-7f219086-d4e7-436c-9bba-1efe0f71563a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "78ff2018-e6dc-4337-8a74-90e5a3963a12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:18:52 np0005486808 nova_compute[259627]: 2025-10-14 09:18:52.086 2 DEBUG oslo_concurrency.lockutils [req-577832d0-ec4b-4b68-af42-1c6c7f430f82 req-7f219086-d4e7-436c-9bba-1efe0f71563a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "78ff2018-e6dc-4337-8a74-90e5a3963a12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:18:52 np0005486808 nova_compute[259627]: 2025-10-14 09:18:52.086 2 DEBUG oslo_concurrency.lockutils [req-577832d0-ec4b-4b68-af42-1c6c7f430f82 req-7f219086-d4e7-436c-9bba-1efe0f71563a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "78ff2018-e6dc-4337-8a74-90e5a3963a12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:18:52 np0005486808 nova_compute[259627]: 2025-10-14 09:18:52.086 2 DEBUG nova.compute.manager [req-577832d0-ec4b-4b68-af42-1c6c7f430f82 req-7f219086-d4e7-436c-9bba-1efe0f71563a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] No waiting events found dispatching network-vif-unplugged-da41936a-3d81-49ed-9021-22d56f07b75b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:18:52 np0005486808 nova_compute[259627]: 2025-10-14 09:18:52.086 2 DEBUG nova.compute.manager [req-577832d0-ec4b-4b68-af42-1c6c7f430f82 req-7f219086-d4e7-436c-9bba-1efe0f71563a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Received event network-vif-unplugged-da41936a-3d81-49ed-9021-22d56f07b75b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:18:52 np0005486808 nova_compute[259627]: 2025-10-14 09:18:52.147 2 INFO nova.compute.manager [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:18:52 np0005486808 nova_compute[259627]: 2025-10-14 09:18:52.148 2 DEBUG oslo.service.loopingcall [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:18:52 np0005486808 nova_compute[259627]: 2025-10-14 09:18:52.148 2 DEBUG nova.compute.manager [-] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:18:52 np0005486808 nova_compute[259627]: 2025-10-14 09:18:52.149 2 DEBUG nova.network.neutron [-] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:18:52 np0005486808 nova_compute[259627]: 2025-10-14 09:18:52.441 2 DEBUG nova.network.neutron [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:18:52 np0005486808 nova_compute[259627]: 2025-10-14 09:18:52.468 2 DEBUG oslo_concurrency.lockutils [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Releasing lock "refresh_cache-333c933a-d8e8-42b0-ab77-72546d8ab982" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:18:52 np0005486808 nova_compute[259627]: 2025-10-14 09:18:52.469 2 DEBUG nova.compute.manager [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:18:52 np0005486808 systemd[1]: machine-qemu\x2d142\x2dinstance\x2d00000070.scope: Deactivated successfully.
Oct 14 05:18:52 np0005486808 systemd[1]: machine-qemu\x2d142\x2dinstance\x2d00000070.scope: Consumed 1.365s CPU time.
Oct 14 05:18:52 np0005486808 systemd-machined[214636]: Machine qemu-142-instance-00000070 terminated.
Oct 14 05:18:52 np0005486808 nova_compute[259627]: 2025-10-14 09:18:52.699 2 INFO nova.virt.libvirt.driver [-] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Instance destroyed successfully.#033[00m
Oct 14 05:18:52 np0005486808 nova_compute[259627]: 2025-10-14 09:18:52.703 2 DEBUG nova.objects.instance [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Lazy-loading 'resources' on Instance uuid 333c933a-d8e8-42b0-ab77-72546d8ab982 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:18:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:18:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e271 do_prune osdmap full prune enabled
Oct 14 05:18:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e272 e272: 3 total, 3 up, 3 in
Oct 14 05:18:52 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e272: 3 total, 3 up, 3 in
Oct 14 05:18:53 np0005486808 nova_compute[259627]: 2025-10-14 09:18:53.138 2 DEBUG nova.network.neutron [-] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:18:53 np0005486808 nova_compute[259627]: 2025-10-14 09:18:53.170 2 INFO nova.compute.manager [-] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Took 1.02 seconds to deallocate network for instance.#033[00m
Oct 14 05:18:53 np0005486808 nova_compute[259627]: 2025-10-14 09:18:53.225 2 DEBUG oslo_concurrency.lockutils [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:18:53 np0005486808 nova_compute[259627]: 2025-10-14 09:18:53.226 2 DEBUG oslo_concurrency.lockutils [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:18:53 np0005486808 nova_compute[259627]: 2025-10-14 09:18:53.266 2 DEBUG nova.compute.manager [req-2268a940-ac40-4480-8d92-9558e99b299c req-1076ee9e-5820-40d8-8bbe-adbfe590a762 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Received event network-vif-deleted-da41936a-3d81-49ed-9021-22d56f07b75b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:18:53 np0005486808 nova_compute[259627]: 2025-10-14 09:18:53.349 2 DEBUG oslo_concurrency.processutils [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:18:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:53.418 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:13:a8 10.100.0.2 2001:db8::f816:3eff:fe43:13a8'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe43:13a8/64', 'neutron:device_id': 'ovnmeta-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d0f6a07b344997b67dd4221ec5a35a', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9122ea86-ed84-4854-b541-8d32cf1d8ec5, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=09fe180c-b874-4a50-95c9-e16dfe99d5f0) old=Port_Binding(mac=['fa:16:3e:43:13:a8 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d0f6a07b344997b67dd4221ec5a35a', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:18:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:53.420 162547 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 09fe180c-b874-4a50-95c9-e16dfe99d5f0 in datapath 1ef50a42-0521-4feb-8502-83f9d20484ce updated#033[00m
Oct 14 05:18:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:53.422 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1ef50a42-0521-4feb-8502-83f9d20484ce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:18:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:53.422 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[923c9886-64e8-4299-949e-3d378d2f4011]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:18:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:18:53 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/735234365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:18:53 np0005486808 nova_compute[259627]: 2025-10-14 09:18:53.808 2 DEBUG oslo_concurrency.processutils [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:18:53 np0005486808 nova_compute[259627]: 2025-10-14 09:18:53.817 2 DEBUG nova.compute.provider_tree [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:18:53 np0005486808 nova_compute[259627]: 2025-10-14 09:18:53.834 2 DEBUG nova.scheduler.client.report [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:18:53 np0005486808 nova_compute[259627]: 2025-10-14 09:18:53.855 2 DEBUG oslo_concurrency.lockutils [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:18:53 np0005486808 nova_compute[259627]: 2025-10-14 09:18:53.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:53 np0005486808 nova_compute[259627]: 2025-10-14 09:18:53.884 2 INFO nova.scheduler.client.report [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Deleted allocations for instance 78ff2018-e6dc-4337-8a74-90e5a3963a12#033[00m
Oct 14 05:18:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e272 do_prune osdmap full prune enabled
Oct 14 05:18:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e273 e273: 3 total, 3 up, 3 in
Oct 14 05:18:53 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Oct 14 05:18:53 np0005486808 nova_compute[259627]: 2025-10-14 09:18:53.957 2 DEBUG oslo_concurrency.lockutils [None req-52ba0cf8-2078-4773-b94e-0d7114917e4e 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "78ff2018-e6dc-4337-8a74-90e5a3963a12" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.531s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:18:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1954: 305 pgs: 305 active+clean; 200 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 30 KiB/s wr, 89 op/s
Oct 14 05:18:54 np0005486808 nova_compute[259627]: 2025-10-14 09:18:54.168 2 DEBUG nova.compute.manager [req-c562ec38-a6c3-42be-8860-5a6c4fb79d74 req-1f552cd4-9475-4afb-b0ff-2f3de10e2384 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Received event network-vif-plugged-da41936a-3d81-49ed-9021-22d56f07b75b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:18:54 np0005486808 nova_compute[259627]: 2025-10-14 09:18:54.169 2 DEBUG oslo_concurrency.lockutils [req-c562ec38-a6c3-42be-8860-5a6c4fb79d74 req-1f552cd4-9475-4afb-b0ff-2f3de10e2384 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "78ff2018-e6dc-4337-8a74-90e5a3963a12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:18:54 np0005486808 nova_compute[259627]: 2025-10-14 09:18:54.170 2 DEBUG oslo_concurrency.lockutils [req-c562ec38-a6c3-42be-8860-5a6c4fb79d74 req-1f552cd4-9475-4afb-b0ff-2f3de10e2384 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "78ff2018-e6dc-4337-8a74-90e5a3963a12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:18:54 np0005486808 nova_compute[259627]: 2025-10-14 09:18:54.170 2 DEBUG oslo_concurrency.lockutils [req-c562ec38-a6c3-42be-8860-5a6c4fb79d74 req-1f552cd4-9475-4afb-b0ff-2f3de10e2384 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "78ff2018-e6dc-4337-8a74-90e5a3963a12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:18:54 np0005486808 nova_compute[259627]: 2025-10-14 09:18:54.170 2 DEBUG nova.compute.manager [req-c562ec38-a6c3-42be-8860-5a6c4fb79d74 req-1f552cd4-9475-4afb-b0ff-2f3de10e2384 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] No waiting events found dispatching network-vif-plugged-da41936a-3d81-49ed-9021-22d56f07b75b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:18:54 np0005486808 nova_compute[259627]: 2025-10-14 09:18:54.171 2 WARNING nova.compute.manager [req-c562ec38-a6c3-42be-8860-5a6c4fb79d74 req-1f552cd4-9475-4afb-b0ff-2f3de10e2384 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Received unexpected event network-vif-plugged-da41936a-3d81-49ed-9021-22d56f07b75b for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:18:54 np0005486808 nova_compute[259627]: 2025-10-14 09:18:54.312 2 INFO nova.virt.libvirt.driver [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Deleting instance files /var/lib/nova/instances/333c933a-d8e8-42b0-ab77-72546d8ab982_del#033[00m
Oct 14 05:18:54 np0005486808 nova_compute[259627]: 2025-10-14 09:18:54.313 2 INFO nova.virt.libvirt.driver [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Deletion of /var/lib/nova/instances/333c933a-d8e8-42b0-ab77-72546d8ab982_del complete#033[00m
Oct 14 05:18:54 np0005486808 nova_compute[259627]: 2025-10-14 09:18:54.382 2 INFO nova.compute.manager [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Took 1.91 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:18:54 np0005486808 nova_compute[259627]: 2025-10-14 09:18:54.383 2 DEBUG oslo.service.loopingcall [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:18:54 np0005486808 nova_compute[259627]: 2025-10-14 09:18:54.384 2 DEBUG nova.compute.manager [-] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:18:54 np0005486808 nova_compute[259627]: 2025-10-14 09:18:54.384 2 DEBUG nova.network.neutron [-] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:18:54 np0005486808 nova_compute[259627]: 2025-10-14 09:18:54.544 2 DEBUG nova.network.neutron [-] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:18:54 np0005486808 nova_compute[259627]: 2025-10-14 09:18:54.558 2 DEBUG nova.network.neutron [-] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:18:54 np0005486808 nova_compute[259627]: 2025-10-14 09:18:54.593 2 INFO nova.compute.manager [-] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Took 0.21 seconds to deallocate network for instance.#033[00m
Oct 14 05:18:54 np0005486808 nova_compute[259627]: 2025-10-14 09:18:54.640 2 DEBUG oslo_concurrency.lockutils [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:18:54 np0005486808 nova_compute[259627]: 2025-10-14 09:18:54.641 2 DEBUG oslo_concurrency.lockutils [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:18:54 np0005486808 podman[370936]: 2025-10-14 09:18:54.690556435 +0000 UTC m=+0.090739428 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 05:18:54 np0005486808 podman[370935]: 2025-10-14 09:18:54.70337579 +0000 UTC m=+0.108070914 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible)
Oct 14 05:18:54 np0005486808 nova_compute[259627]: 2025-10-14 09:18:54.737 2 DEBUG oslo_concurrency.processutils [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:18:55 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:18:55 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3536847294' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:18:55 np0005486808 nova_compute[259627]: 2025-10-14 09:18:55.235 2 DEBUG oslo_concurrency.processutils [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:18:55 np0005486808 nova_compute[259627]: 2025-10-14 09:18:55.241 2 DEBUG nova.compute.provider_tree [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:18:55 np0005486808 nova_compute[259627]: 2025-10-14 09:18:55.255 2 DEBUG nova.scheduler.client.report [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:18:55 np0005486808 nova_compute[259627]: 2025-10-14 09:18:55.307 2 DEBUG oslo_concurrency.lockutils [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:18:55 np0005486808 nova_compute[259627]: 2025-10-14 09:18:55.335 2 INFO nova.scheduler.client.report [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Deleted allocations for instance 333c933a-d8e8-42b0-ab77-72546d8ab982#033[00m
Oct 14 05:18:55 np0005486808 nova_compute[259627]: 2025-10-14 09:18:55.404 2 DEBUG oslo_concurrency.lockutils [None req-2a7457c5-ed9d-4daa-828d-d18db46f8c8a bf09463654864456a851828eb4e88fb2 8af92d0b8536433fbf169b1e300f923f - - default default] Lock "333c933a-d8e8-42b0-ab77-72546d8ab982" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.867s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:18:55 np0005486808 nova_compute[259627]: 2025-10-14 09:18:55.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:18:55 np0005486808 nova_compute[259627]: 2025-10-14 09:18:55.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 14 05:18:56 np0005486808 nova_compute[259627]: 2025-10-14 09:18:55.999 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 14 05:18:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1955: 305 pgs: 305 active+clean; 121 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 167 KiB/s rd, 35 KiB/s wr, 226 op/s
Oct 14 05:18:56 np0005486808 ovn_controller[152662]: 2025-10-14T09:18:56Z|01182|binding|INFO|Releasing lport 437783e2-5dd6-4d07-bfc8-e4c0a4808267 from this chassis (sb_readonly=0)
Oct 14 05:18:56 np0005486808 nova_compute[259627]: 2025-10-14 09:18:56.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:56 np0005486808 nova_compute[259627]: 2025-10-14 09:18:56.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:57 np0005486808 nova_compute[259627]: 2025-10-14 09:18:57.276 2 DEBUG nova.compute.manager [req-36ec7e97-fa16-4582-80ba-a5c71b88a359 req-8aba9046-3106-4e0e-bbbf-bc91af6a679d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Received event network-changed-9053aa9d-2747-4b65-9480-c8d5d0c126fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:18:57 np0005486808 nova_compute[259627]: 2025-10-14 09:18:57.276 2 DEBUG nova.compute.manager [req-36ec7e97-fa16-4582-80ba-a5c71b88a359 req-8aba9046-3106-4e0e-bbbf-bc91af6a679d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Refreshing instance network info cache due to event network-changed-9053aa9d-2747-4b65-9480-c8d5d0c126fc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:18:57 np0005486808 nova_compute[259627]: 2025-10-14 09:18:57.277 2 DEBUG oslo_concurrency.lockutils [req-36ec7e97-fa16-4582-80ba-a5c71b88a359 req-8aba9046-3106-4e0e-bbbf-bc91af6a679d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-22c6f034-c238-499b-8b07-1c0f5879297e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:18:57 np0005486808 nova_compute[259627]: 2025-10-14 09:18:57.277 2 DEBUG oslo_concurrency.lockutils [req-36ec7e97-fa16-4582-80ba-a5c71b88a359 req-8aba9046-3106-4e0e-bbbf-bc91af6a679d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-22c6f034-c238-499b-8b07-1c0f5879297e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:18:57 np0005486808 nova_compute[259627]: 2025-10-14 09:18:57.277 2 DEBUG nova.network.neutron [req-36ec7e97-fa16-4582-80ba-a5c71b88a359 req-8aba9046-3106-4e0e-bbbf-bc91af6a679d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Refreshing network info cache for port 9053aa9d-2747-4b65-9480-c8d5d0c126fc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:18:57 np0005486808 nova_compute[259627]: 2025-10-14 09:18:57.386 2 DEBUG oslo_concurrency.lockutils [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "22c6f034-c238-499b-8b07-1c0f5879297e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:18:57 np0005486808 nova_compute[259627]: 2025-10-14 09:18:57.388 2 DEBUG oslo_concurrency.lockutils [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "22c6f034-c238-499b-8b07-1c0f5879297e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:18:57 np0005486808 nova_compute[259627]: 2025-10-14 09:18:57.388 2 DEBUG oslo_concurrency.lockutils [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "22c6f034-c238-499b-8b07-1c0f5879297e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:18:57 np0005486808 nova_compute[259627]: 2025-10-14 09:18:57.389 2 DEBUG oslo_concurrency.lockutils [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "22c6f034-c238-499b-8b07-1c0f5879297e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:18:57 np0005486808 nova_compute[259627]: 2025-10-14 09:18:57.389 2 DEBUG oslo_concurrency.lockutils [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "22c6f034-c238-499b-8b07-1c0f5879297e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:18:57 np0005486808 nova_compute[259627]: 2025-10-14 09:18:57.391 2 INFO nova.compute.manager [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Terminating instance#033[00m
Oct 14 05:18:57 np0005486808 nova_compute[259627]: 2025-10-14 09:18:57.394 2 DEBUG nova.compute.manager [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:18:57 np0005486808 kernel: tap9053aa9d-27 (unregistering): left promiscuous mode
Oct 14 05:18:57 np0005486808 NetworkManager[44885]: <info>  [1760433537.4581] device (tap9053aa9d-27): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:18:57 np0005486808 ovn_controller[152662]: 2025-10-14T09:18:57Z|01183|binding|INFO|Releasing lport 9053aa9d-2747-4b65-9480-c8d5d0c126fc from this chassis (sb_readonly=0)
Oct 14 05:18:57 np0005486808 ovn_controller[152662]: 2025-10-14T09:18:57Z|01184|binding|INFO|Setting lport 9053aa9d-2747-4b65-9480-c8d5d0c126fc down in Southbound
Oct 14 05:18:57 np0005486808 ovn_controller[152662]: 2025-10-14T09:18:57Z|01185|binding|INFO|Removing iface tap9053aa9d-27 ovn-installed in OVS
Oct 14 05:18:57 np0005486808 nova_compute[259627]: 2025-10-14 09:18:57.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:57 np0005486808 nova_compute[259627]: 2025-10-14 09:18:57.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:57.482 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:68:6d 10.100.0.12'], port_security=['fa:16:3e:e4:68:6d 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '22c6f034-c238-499b-8b07-1c0f5879297e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ac21096-7818-4990-8868-baedcdcf8f83', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f54993ea-ba9b-4930-8e05-f68e5e27d0b7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8cda2976-3075-437c-b2bd-7362e360206b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=9053aa9d-2747-4b65-9480-c8d5d0c126fc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:18:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:57.483 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 9053aa9d-2747-4b65-9480-c8d5d0c126fc in datapath 6ac21096-7818-4990-8868-baedcdcf8f83 unbound from our chassis#033[00m
Oct 14 05:18:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:57.484 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6ac21096-7818-4990-8868-baedcdcf8f83, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:18:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:57.487 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2063a0e1-0346-423d-9275-252fb46cc342]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:18:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:57.488 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83 namespace which is not needed anymore#033[00m
Oct 14 05:18:57 np0005486808 nova_compute[259627]: 2025-10-14 09:18:57.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:57 np0005486808 systemd[1]: machine-qemu\x2d140\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Oct 14 05:18:57 np0005486808 systemd[1]: machine-qemu\x2d140\x2dinstance\x2d0000006e.scope: Consumed 16.315s CPU time.
Oct 14 05:18:57 np0005486808 systemd-machined[214636]: Machine qemu-140-instance-0000006e terminated.
Oct 14 05:18:57 np0005486808 nova_compute[259627]: 2025-10-14 09:18:57.644 2 INFO nova.virt.libvirt.driver [-] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Instance destroyed successfully.#033[00m
Oct 14 05:18:57 np0005486808 nova_compute[259627]: 2025-10-14 09:18:57.645 2 DEBUG nova.objects.instance [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'resources' on Instance uuid 22c6f034-c238-499b-8b07-1c0f5879297e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:18:57 np0005486808 nova_compute[259627]: 2025-10-14 09:18:57.663 2 DEBUG nova.virt.libvirt.vif [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:17:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-620217933',display_name='tempest-TestNetworkBasicOps-server-620217933',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-620217933',id=110,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFmMenfL1sUYkbBrkES3junvmmfxbYlPSkqdfU7GziKOgK8Al6Uo4AuziFU4SUs4TqWyCWrEHu5DJJsxFuoF8JeKv+GpLFEK8bJLhTgJlxj4kUg8oBgVMIWKSTqwaqC6zA==',key_name='tempest-TestNetworkBasicOps-2091857695',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:17:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-ol26b3n0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:17:50Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=22c6f034-c238-499b-8b07-1c0f5879297e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "address": "fa:16:3e:e4:68:6d", "network": {"id": "6ac21096-7818-4990-8868-baedcdcf8f83", "bridge": "br-int", "label": "tempest-network-smoke--748804487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9053aa9d-27", "ovs_interfaceid": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:18:57 np0005486808 nova_compute[259627]: 2025-10-14 09:18:57.664 2 DEBUG nova.network.os_vif_util [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "address": "fa:16:3e:e4:68:6d", "network": {"id": "6ac21096-7818-4990-8868-baedcdcf8f83", "bridge": "br-int", "label": "tempest-network-smoke--748804487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9053aa9d-27", "ovs_interfaceid": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:18:57 np0005486808 nova_compute[259627]: 2025-10-14 09:18:57.665 2 DEBUG nova.network.os_vif_util [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e4:68:6d,bridge_name='br-int',has_traffic_filtering=True,id=9053aa9d-2747-4b65-9480-c8d5d0c126fc,network=Network(6ac21096-7818-4990-8868-baedcdcf8f83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9053aa9d-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:18:57 np0005486808 nova_compute[259627]: 2025-10-14 09:18:57.665 2 DEBUG os_vif [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e4:68:6d,bridge_name='br-int',has_traffic_filtering=True,id=9053aa9d-2747-4b65-9480-c8d5d0c126fc,network=Network(6ac21096-7818-4990-8868-baedcdcf8f83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9053aa9d-27') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:18:57 np0005486808 nova_compute[259627]: 2025-10-14 09:18:57.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:57 np0005486808 nova_compute[259627]: 2025-10-14 09:18:57.669 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9053aa9d-27, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:18:57 np0005486808 nova_compute[259627]: 2025-10-14 09:18:57.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:57 np0005486808 nova_compute[259627]: 2025-10-14 09:18:57.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:57 np0005486808 nova_compute[259627]: 2025-10-14 09:18:57.676 2 INFO os_vif [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e4:68:6d,bridge_name='br-int',has_traffic_filtering=True,id=9053aa9d-2747-4b65-9480-c8d5d0c126fc,network=Network(6ac21096-7818-4990-8868-baedcdcf8f83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9053aa9d-27')#033[00m
Oct 14 05:18:57 np0005486808 neutron-haproxy-ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83[368676]: [NOTICE]   (368680) : haproxy version is 2.8.14-c23fe91
Oct 14 05:18:57 np0005486808 neutron-haproxy-ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83[368676]: [NOTICE]   (368680) : path to executable is /usr/sbin/haproxy
Oct 14 05:18:57 np0005486808 neutron-haproxy-ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83[368676]: [WARNING]  (368680) : Exiting Master process...
Oct 14 05:18:57 np0005486808 neutron-haproxy-ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83[368676]: [WARNING]  (368680) : Exiting Master process...
Oct 14 05:18:57 np0005486808 neutron-haproxy-ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83[368676]: [ALERT]    (368680) : Current worker (368682) exited with code 143 (Terminated)
Oct 14 05:18:57 np0005486808 neutron-haproxy-ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83[368676]: [WARNING]  (368680) : All workers exited. Exiting... (0)
Oct 14 05:18:57 np0005486808 systemd[1]: libpod-6ffd7eddbb766ef2daa7451e6ebb8fd90eea70407a8a4801d01c5517402425cd.scope: Deactivated successfully.
Oct 14 05:18:57 np0005486808 podman[371028]: 2025-10-14 09:18:57.722139522 +0000 UTC m=+0.069265858 container died 6ffd7eddbb766ef2daa7451e6ebb8fd90eea70407a8a4801d01c5517402425cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:18:57 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6ffd7eddbb766ef2daa7451e6ebb8fd90eea70407a8a4801d01c5517402425cd-userdata-shm.mount: Deactivated successfully.
Oct 14 05:18:57 np0005486808 systemd[1]: var-lib-containers-storage-overlay-d1f2cda8d9a484751f468c0a5f14364c4cc3b93a764e75f80052eecd6130f987-merged.mount: Deactivated successfully.
Oct 14 05:18:57 np0005486808 podman[371028]: 2025-10-14 09:18:57.76831728 +0000 UTC m=+0.115443616 container cleanup 6ffd7eddbb766ef2daa7451e6ebb8fd90eea70407a8a4801d01c5517402425cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3)
Oct 14 05:18:57 np0005486808 nova_compute[259627]: 2025-10-14 09:18:57.785 2 DEBUG nova.compute.manager [req-e290127c-4a2b-4718-8859-93ff7eeb1f3e req-3d71ee5d-dcd4-47f9-aebc-c35eb3d66913 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Received event network-vif-unplugged-9053aa9d-2747-4b65-9480-c8d5d0c126fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:18:57 np0005486808 nova_compute[259627]: 2025-10-14 09:18:57.785 2 DEBUG oslo_concurrency.lockutils [req-e290127c-4a2b-4718-8859-93ff7eeb1f3e req-3d71ee5d-dcd4-47f9-aebc-c35eb3d66913 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "22c6f034-c238-499b-8b07-1c0f5879297e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:18:57 np0005486808 nova_compute[259627]: 2025-10-14 09:18:57.786 2 DEBUG oslo_concurrency.lockutils [req-e290127c-4a2b-4718-8859-93ff7eeb1f3e req-3d71ee5d-dcd4-47f9-aebc-c35eb3d66913 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "22c6f034-c238-499b-8b07-1c0f5879297e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:18:57 np0005486808 nova_compute[259627]: 2025-10-14 09:18:57.786 2 DEBUG oslo_concurrency.lockutils [req-e290127c-4a2b-4718-8859-93ff7eeb1f3e req-3d71ee5d-dcd4-47f9-aebc-c35eb3d66913 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "22c6f034-c238-499b-8b07-1c0f5879297e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:18:57 np0005486808 nova_compute[259627]: 2025-10-14 09:18:57.786 2 DEBUG nova.compute.manager [req-e290127c-4a2b-4718-8859-93ff7eeb1f3e req-3d71ee5d-dcd4-47f9-aebc-c35eb3d66913 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] No waiting events found dispatching network-vif-unplugged-9053aa9d-2747-4b65-9480-c8d5d0c126fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:18:57 np0005486808 nova_compute[259627]: 2025-10-14 09:18:57.787 2 DEBUG nova.compute.manager [req-e290127c-4a2b-4718-8859-93ff7eeb1f3e req-3d71ee5d-dcd4-47f9-aebc-c35eb3d66913 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Received event network-vif-unplugged-9053aa9d-2747-4b65-9480-c8d5d0c126fc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:18:57 np0005486808 systemd[1]: libpod-conmon-6ffd7eddbb766ef2daa7451e6ebb8fd90eea70407a8a4801d01c5517402425cd.scope: Deactivated successfully.
Oct 14 05:18:57 np0005486808 podman[371079]: 2025-10-14 09:18:57.861847216 +0000 UTC m=+0.056051823 container remove 6ffd7eddbb766ef2daa7451e6ebb8fd90eea70407a8a4801d01c5517402425cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:18:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:57.873 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[33c033b2-fa29-4b51-bc6c-3025872f26a5]: (4, ('Tue Oct 14 09:18:57 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83 (6ffd7eddbb766ef2daa7451e6ebb8fd90eea70407a8a4801d01c5517402425cd)\n6ffd7eddbb766ef2daa7451e6ebb8fd90eea70407a8a4801d01c5517402425cd\nTue Oct 14 09:18:57 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83 (6ffd7eddbb766ef2daa7451e6ebb8fd90eea70407a8a4801d01c5517402425cd)\n6ffd7eddbb766ef2daa7451e6ebb8fd90eea70407a8a4801d01c5517402425cd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:18:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:57.876 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[68d4483d-4762-4194-9343-5308b3c1fdde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:18:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:57.877 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ac21096-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:18:57 np0005486808 nova_compute[259627]: 2025-10-14 09:18:57.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:57 np0005486808 kernel: tap6ac21096-70: left promiscuous mode
Oct 14 05:18:57 np0005486808 nova_compute[259627]: 2025-10-14 09:18:57.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:57.890 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[da855d8f-e66b-4da8-bbd7-441fbe8147ae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:18:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:18:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e273 do_prune osdmap full prune enabled
Oct 14 05:18:57 np0005486808 nova_compute[259627]: 2025-10-14 09:18:57.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e274 e274: 3 total, 3 up, 3 in
Oct 14 05:18:57 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e274: 3 total, 3 up, 3 in
Oct 14 05:18:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:57.917 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[12a56123-e1b7-47c4-90e0-007ea72cb551]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:18:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:57.918 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9e74769b-03f5-4ac3-9d7d-f6673de6bd9f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:18:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:57.938 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[98047527-5270-445c-a482-1b67eeba8a18]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 724791, 'reachable_time': 36676, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371096, 'error': None, 'target': 'ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:18:57 np0005486808 systemd[1]: run-netns-ovnmeta\x2d6ac21096\x2d7818\x2d4990\x2d8868\x2dbaedcdcf8f83.mount: Deactivated successfully.
Oct 14 05:18:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:57.940 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6ac21096-7818-4990-8868-baedcdcf8f83 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:18:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:57.940 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[1d4b4855-aa52-48a1-9032-e933b7f3e8b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:18:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1957: 305 pgs: 305 active+clean; 121 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 102 KiB/s rd, 8.0 KiB/s wr, 142 op/s
Oct 14 05:18:58 np0005486808 nova_compute[259627]: 2025-10-14 09:18:58.175 2 INFO nova.virt.libvirt.driver [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Deleting instance files /var/lib/nova/instances/22c6f034-c238-499b-8b07-1c0f5879297e_del#033[00m
Oct 14 05:18:58 np0005486808 nova_compute[259627]: 2025-10-14 09:18:58.176 2 INFO nova.virt.libvirt.driver [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Deletion of /var/lib/nova/instances/22c6f034-c238-499b-8b07-1c0f5879297e_del complete#033[00m
Oct 14 05:18:58 np0005486808 nova_compute[259627]: 2025-10-14 09:18:58.226 2 INFO nova.compute.manager [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Took 0.83 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:18:58 np0005486808 nova_compute[259627]: 2025-10-14 09:18:58.227 2 DEBUG oslo.service.loopingcall [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:18:58 np0005486808 nova_compute[259627]: 2025-10-14 09:18:58.228 2 DEBUG nova.compute.manager [-] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:18:58 np0005486808 nova_compute[259627]: 2025-10-14 09:18:58.228 2 DEBUG nova.network.neutron [-] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:18:58 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:58.636 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:13:a8 2001:db8::f816:3eff:fe43:13a8'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe43:13a8/64', 'neutron:device_id': 'ovnmeta-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d0f6a07b344997b67dd4221ec5a35a', 'neutron:revision_number': '18', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9122ea86-ed84-4854-b541-8d32cf1d8ec5, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=09fe180c-b874-4a50-95c9-e16dfe99d5f0) old=Port_Binding(mac=['fa:16:3e:43:13:a8 10.100.0.2 2001:db8::f816:3eff:fe43:13a8'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe43:13a8/64', 'neutron:device_id': 'ovnmeta-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d0f6a07b344997b67dd4221ec5a35a', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 
'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:18:58 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:58.637 162547 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 09fe180c-b874-4a50-95c9-e16dfe99d5f0 in datapath 1ef50a42-0521-4feb-8502-83f9d20484ce updated#033[00m
Oct 14 05:18:58 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:58.639 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1ef50a42-0521-4feb-8502-83f9d20484ce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:18:58 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:18:58.640 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f1e68169-40e3-48bf-bd17-b00e52371e99]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:18:58 np0005486808 nova_compute[259627]: 2025-10-14 09:18:58.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:18:59 np0005486808 nova_compute[259627]: 2025-10-14 09:18:59.383 2 DEBUG nova.network.neutron [req-36ec7e97-fa16-4582-80ba-a5c71b88a359 req-8aba9046-3106-4e0e-bbbf-bc91af6a679d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Updated VIF entry in instance network info cache for port 9053aa9d-2747-4b65-9480-c8d5d0c126fc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:18:59 np0005486808 nova_compute[259627]: 2025-10-14 09:18:59.384 2 DEBUG nova.network.neutron [req-36ec7e97-fa16-4582-80ba-a5c71b88a359 req-8aba9046-3106-4e0e-bbbf-bc91af6a679d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Updating instance_info_cache with network_info: [{"id": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "address": "fa:16:3e:e4:68:6d", "network": {"id": "6ac21096-7818-4990-8868-baedcdcf8f83", "bridge": "br-int", "label": "tempest-network-smoke--748804487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9053aa9d-27", "ovs_interfaceid": "9053aa9d-2747-4b65-9480-c8d5d0c126fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:18:59 np0005486808 nova_compute[259627]: 2025-10-14 09:18:59.405 2 DEBUG nova.network.neutron [-] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:18:59 np0005486808 nova_compute[259627]: 2025-10-14 09:18:59.408 2 DEBUG oslo_concurrency.lockutils [req-36ec7e97-fa16-4582-80ba-a5c71b88a359 req-8aba9046-3106-4e0e-bbbf-bc91af6a679d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-22c6f034-c238-499b-8b07-1c0f5879297e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:18:59 np0005486808 nova_compute[259627]: 2025-10-14 09:18:59.423 2 INFO nova.compute.manager [-] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Took 1.20 seconds to deallocate network for instance.#033[00m
Oct 14 05:18:59 np0005486808 nova_compute[259627]: 2025-10-14 09:18:59.484 2 DEBUG oslo_concurrency.lockutils [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:18:59 np0005486808 nova_compute[259627]: 2025-10-14 09:18:59.484 2 DEBUG oslo_concurrency.lockutils [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:18:59 np0005486808 nova_compute[259627]: 2025-10-14 09:18:59.540 2 DEBUG oslo_concurrency.processutils [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:19:00 np0005486808 nova_compute[259627]: 2025-10-14 09:19:00.010 2 DEBUG nova.compute.manager [req-583e6b8f-617e-4e3a-8b1d-d2d9d9263cc0 req-d1207df6-d717-4783-9b2d-106b7f31f402 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Received event network-vif-plugged-9053aa9d-2747-4b65-9480-c8d5d0c126fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:19:00 np0005486808 nova_compute[259627]: 2025-10-14 09:19:00.011 2 DEBUG oslo_concurrency.lockutils [req-583e6b8f-617e-4e3a-8b1d-d2d9d9263cc0 req-d1207df6-d717-4783-9b2d-106b7f31f402 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "22c6f034-c238-499b-8b07-1c0f5879297e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:19:00 np0005486808 nova_compute[259627]: 2025-10-14 09:19:00.012 2 DEBUG oslo_concurrency.lockutils [req-583e6b8f-617e-4e3a-8b1d-d2d9d9263cc0 req-d1207df6-d717-4783-9b2d-106b7f31f402 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "22c6f034-c238-499b-8b07-1c0f5879297e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:19:00 np0005486808 nova_compute[259627]: 2025-10-14 09:19:00.012 2 DEBUG oslo_concurrency.lockutils [req-583e6b8f-617e-4e3a-8b1d-d2d9d9263cc0 req-d1207df6-d717-4783-9b2d-106b7f31f402 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "22c6f034-c238-499b-8b07-1c0f5879297e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:19:00 np0005486808 nova_compute[259627]: 2025-10-14 09:19:00.013 2 DEBUG nova.compute.manager [req-583e6b8f-617e-4e3a-8b1d-d2d9d9263cc0 req-d1207df6-d717-4783-9b2d-106b7f31f402 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] No waiting events found dispatching network-vif-plugged-9053aa9d-2747-4b65-9480-c8d5d0c126fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:19:00 np0005486808 nova_compute[259627]: 2025-10-14 09:19:00.013 2 WARNING nova.compute.manager [req-583e6b8f-617e-4e3a-8b1d-d2d9d9263cc0 req-d1207df6-d717-4783-9b2d-106b7f31f402 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Received unexpected event network-vif-plugged-9053aa9d-2747-4b65-9480-c8d5d0c126fc for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:19:00 np0005486808 nova_compute[259627]: 2025-10-14 09:19:00.014 2 DEBUG nova.compute.manager [req-583e6b8f-617e-4e3a-8b1d-d2d9d9263cc0 req-d1207df6-d717-4783-9b2d-106b7f31f402 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Received event network-vif-deleted-9053aa9d-2747-4b65-9480-c8d5d0c126fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:19:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:19:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/204320955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:19:00 np0005486808 nova_compute[259627]: 2025-10-14 09:19:00.079 2 DEBUG oslo_concurrency.processutils [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:19:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1958: 305 pgs: 305 active+clean; 96 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 89 KiB/s rd, 7.2 KiB/s wr, 127 op/s
Oct 14 05:19:00 np0005486808 nova_compute[259627]: 2025-10-14 09:19:00.089 2 DEBUG nova.compute.provider_tree [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:19:00 np0005486808 nova_compute[259627]: 2025-10-14 09:19:00.104 2 DEBUG nova.scheduler.client.report [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:19:00 np0005486808 nova_compute[259627]: 2025-10-14 09:19:00.125 2 DEBUG oslo_concurrency.lockutils [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:19:00 np0005486808 nova_compute[259627]: 2025-10-14 09:19:00.161 2 INFO nova.scheduler.client.report [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Deleted allocations for instance 22c6f034-c238-499b-8b07-1c0f5879297e#033[00m
Oct 14 05:19:00 np0005486808 nova_compute[259627]: 2025-10-14 09:19:00.225 2 DEBUG oslo_concurrency.lockutils [None req-3067f7da-1468-4379-bd79-39a367cb9ff8 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "22c6f034-c238-499b-8b07-1c0f5879297e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:19:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1959: 305 pgs: 305 active+clean; 41 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 102 KiB/s rd, 7.6 KiB/s wr, 145 op/s
Oct 14 05:19:02 np0005486808 nova_compute[259627]: 2025-10-14 09:19:02.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:19:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:19:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:19:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:19:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:19:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:19:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:19:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:19:03 np0005486808 nova_compute[259627]: 2025-10-14 09:19:03.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:19:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1960: 305 pgs: 305 active+clean; 41 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 84 KiB/s rd, 6.2 KiB/s wr, 118 op/s
Oct 14 05:19:04 np0005486808 nova_compute[259627]: 2025-10-14 09:19:04.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:19:04 np0005486808 nova_compute[259627]: 2025-10-14 09:19:04.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:19:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:19:04.929 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 05:19:04 np0005486808 nova_compute[259627]: 2025-10-14 09:19:04.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:19:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:19:04.930 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 05:19:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:19:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2359575977' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:19:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:19:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2359575977' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:19:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1961: 305 pgs: 305 active+clean; 41 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Oct 14 05:19:06 np0005486808 nova_compute[259627]: 2025-10-14 09:19:06.664 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433531.6633725, 78ff2018-e6dc-4337-8a74-90e5a3963a12 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:19:06 np0005486808 nova_compute[259627]: 2025-10-14 09:19:06.666 2 INFO nova.compute.manager [-] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] VM Stopped (Lifecycle Event)
Oct 14 05:19:06 np0005486808 nova_compute[259627]: 2025-10-14 09:19:06.719 2 DEBUG nova.compute.manager [None req-0987e0f5-984f-4fc5-a00b-5dbb0e211e4b - - - - - -] [instance: 78ff2018-e6dc-4337-8a74-90e5a3963a12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:19:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:19:07.034 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:19:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:19:07.034 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:19:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:19:07.034 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:19:07 np0005486808 nova_compute[259627]: 2025-10-14 09:19:07.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:19:07 np0005486808 podman[371125]: 2025-10-14 09:19:07.686764335 +0000 UTC m=+0.085807645 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 14 05:19:07 np0005486808 nova_compute[259627]: 2025-10-14 09:19:07.696 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433532.6944122, 333c933a-d8e8-42b0-ab77-72546d8ab982 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:19:07 np0005486808 nova_compute[259627]: 2025-10-14 09:19:07.696 2 INFO nova.compute.manager [-] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] VM Stopped (Lifecycle Event)
Oct 14 05:19:07 np0005486808 podman[371124]: 2025-10-14 09:19:07.721706646 +0000 UTC m=+0.126111009 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 05:19:07 np0005486808 nova_compute[259627]: 2025-10-14 09:19:07.734 2 DEBUG nova.compute.manager [None req-4d1458ff-3f35-4130-9dd7-69f6239af731 - - - - - -] [instance: 333c933a-d8e8-42b0-ab77-72546d8ab982] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:19:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:19:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1962: 305 pgs: 305 active+clean; 41 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 32 op/s
Oct 14 05:19:08 np0005486808 nova_compute[259627]: 2025-10-14 09:19:08.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:19:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:19:08.933 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:19:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1963: 305 pgs: 305 active+clean; 41 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 05:19:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1964: 305 pgs: 305 active+clean; 41 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 852 B/s wr, 22 op/s
Oct 14 05:19:12 np0005486808 nova_compute[259627]: 2025-10-14 09:19:12.642 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433537.641699, 22c6f034-c238-499b-8b07-1c0f5879297e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:19:12 np0005486808 nova_compute[259627]: 2025-10-14 09:19:12.643 2 INFO nova.compute.manager [-] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] VM Stopped (Lifecycle Event)
Oct 14 05:19:12 np0005486808 nova_compute[259627]: 2025-10-14 09:19:12.671 2 DEBUG nova.compute.manager [None req-d01b572a-a589-4a1f-9397-6d4943a9b6b1 - - - - - -] [instance: 22c6f034-c238-499b-8b07-1c0f5879297e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:19:12 np0005486808 nova_compute[259627]: 2025-10-14 09:19:12.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:19:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:19:13 np0005486808 nova_compute[259627]: 2025-10-14 09:19:13.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:19:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1965: 305 pgs: 305 active+clean; 41 MiB data, 737 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:19:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1966: 305 pgs: 305 active+clean; 41 MiB data, 737 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:19:17 np0005486808 nova_compute[259627]: 2025-10-14 09:19:17.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:19:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:19:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1967: 305 pgs: 305 active+clean; 41 MiB data, 737 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:19:18 np0005486808 nova_compute[259627]: 2025-10-14 09:19:18.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:19:19 np0005486808 nova_compute[259627]: 2025-10-14 09:19:19.918 2 DEBUG oslo_concurrency.lockutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:19:19 np0005486808 nova_compute[259627]: 2025-10-14 09:19:19.919 2 DEBUG oslo_concurrency.lockutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:19:19 np0005486808 nova_compute[259627]: 2025-10-14 09:19:19.943 2 DEBUG nova.compute.manager [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 05:19:20 np0005486808 nova_compute[259627]: 2025-10-14 09:19:20.050 2 DEBUG oslo_concurrency.lockutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:19:20 np0005486808 nova_compute[259627]: 2025-10-14 09:19:20.051 2 DEBUG oslo_concurrency.lockutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:19:20 np0005486808 nova_compute[259627]: 2025-10-14 09:19:20.062 2 DEBUG nova.virt.hardware [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 05:19:20 np0005486808 nova_compute[259627]: 2025-10-14 09:19:20.062 2 INFO nova.compute.claims [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Claim successful on node compute-0.ctlplane.example.com
Oct 14 05:19:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1968: 305 pgs: 305 active+clean; 41 MiB data, 737 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:19:20 np0005486808 nova_compute[259627]: 2025-10-14 09:19:20.207 2 DEBUG oslo_concurrency.processutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:19:20 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:19:20 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2266580968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:19:20 np0005486808 nova_compute[259627]: 2025-10-14 09:19:20.688 2 DEBUG oslo_concurrency.processutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:19:20 np0005486808 nova_compute[259627]: 2025-10-14 09:19:20.700 2 DEBUG nova.compute.provider_tree [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 05:19:20 np0005486808 nova_compute[259627]: 2025-10-14 09:19:20.717 2 DEBUG nova.scheduler.client.report [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 05:19:20 np0005486808 nova_compute[259627]: 2025-10-14 09:19:20.748 2 DEBUG oslo_concurrency.lockutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:19:20 np0005486808 nova_compute[259627]: 2025-10-14 09:19:20.749 2 DEBUG nova.compute.manager [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 05:19:20 np0005486808 nova_compute[259627]: 2025-10-14 09:19:20.802 2 DEBUG nova.compute.manager [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 05:19:20 np0005486808 nova_compute[259627]: 2025-10-14 09:19:20.803 2 DEBUG nova.network.neutron [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 05:19:20 np0005486808 nova_compute[259627]: 2025-10-14 09:19:20.829 2 INFO nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 05:19:20 np0005486808 nova_compute[259627]: 2025-10-14 09:19:20.861 2 DEBUG nova.compute.manager [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 05:19:20 np0005486808 nova_compute[259627]: 2025-10-14 09:19:20.960 2 DEBUG nova.compute.manager [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 05:19:20 np0005486808 nova_compute[259627]: 2025-10-14 09:19:20.961 2 DEBUG nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 05:19:20 np0005486808 nova_compute[259627]: 2025-10-14 09:19:20.962 2 INFO nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Creating image(s)
Oct 14 05:19:20 np0005486808 nova_compute[259627]: 2025-10-14 09:19:20.992 2 DEBUG nova.storage.rbd_utils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image e83e1125-7b9a-4ca1-9040-40dbc3e0237b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:19:21 np0005486808 nova_compute[259627]: 2025-10-14 09:19:21.030 2 DEBUG nova.storage.rbd_utils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image e83e1125-7b9a-4ca1-9040-40dbc3e0237b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:19:21 np0005486808 nova_compute[259627]: 2025-10-14 09:19:21.067 2 DEBUG nova.storage.rbd_utils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image e83e1125-7b9a-4ca1-9040-40dbc3e0237b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:19:21 np0005486808 nova_compute[259627]: 2025-10-14 09:19:21.073 2 DEBUG oslo_concurrency.processutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:19:21 np0005486808 nova_compute[259627]: 2025-10-14 09:19:21.183 2 DEBUG oslo_concurrency.processutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:19:21 np0005486808 nova_compute[259627]: 2025-10-14 09:19:21.185 2 DEBUG oslo_concurrency.lockutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:19:21 np0005486808 nova_compute[259627]: 2025-10-14 09:19:21.186 2 DEBUG oslo_concurrency.lockutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:19:21 np0005486808 nova_compute[259627]: 2025-10-14 09:19:21.186 2 DEBUG oslo_concurrency.lockutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:19:21 np0005486808 nova_compute[259627]: 2025-10-14 09:19:21.222 2 DEBUG nova.storage.rbd_utils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image e83e1125-7b9a-4ca1-9040-40dbc3e0237b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:19:21 np0005486808 nova_compute[259627]: 2025-10-14 09:19:21.227 2 DEBUG oslo_concurrency.processutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 e83e1125-7b9a-4ca1-9040-40dbc3e0237b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:19:21 np0005486808 nova_compute[259627]: 2025-10-14 09:19:21.365 2 DEBUG nova.policy [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '10926c27278e45e7b995f2e53b9d16f9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4562699bdb1548f1bb36819107535620', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 05:19:21 np0005486808 nova_compute[259627]: 2025-10-14 09:19:21.527 2 DEBUG oslo_concurrency.processutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 e83e1125-7b9a-4ca1-9040-40dbc3e0237b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.300s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:19:21 np0005486808 nova_compute[259627]: 2025-10-14 09:19:21.576 2 DEBUG nova.storage.rbd_utils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] resizing rbd image e83e1125-7b9a-4ca1-9040-40dbc3e0237b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 05:19:21 np0005486808 nova_compute[259627]: 2025-10-14 09:19:21.665 2 DEBUG nova.objects.instance [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'migration_context' on Instance uuid e83e1125-7b9a-4ca1-9040-40dbc3e0237b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:19:21 np0005486808 nova_compute[259627]: 2025-10-14 09:19:21.679 2 DEBUG nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 05:19:21 np0005486808 nova_compute[259627]: 2025-10-14 09:19:21.679 2 DEBUG nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Ensure instance console log exists: /var/lib/nova/instances/e83e1125-7b9a-4ca1-9040-40dbc3e0237b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 05:19:21 np0005486808 nova_compute[259627]: 2025-10-14 09:19:21.679 2 DEBUG oslo_concurrency.lockutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:19:21 np0005486808 nova_compute[259627]: 2025-10-14 09:19:21.680 2 DEBUG oslo_concurrency.lockutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:19:21 np0005486808 nova_compute[259627]: 2025-10-14 09:19:21.680 2 DEBUG oslo_concurrency.lockutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:19:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1969: 305 pgs: 305 active+clean; 41 MiB data, 737 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:19:22 np0005486808 nova_compute[259627]: 2025-10-14 09:19:22.552 2 DEBUG nova.network.neutron [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Successfully created port: 654f4bc5-29db-40e4-bc4e-bdd325e98e7a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:19:22 np0005486808 nova_compute[259627]: 2025-10-14 09:19:22.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:19:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:19:23 np0005486808 nova_compute[259627]: 2025-10-14 09:19:23.728 2 DEBUG nova.network.neutron [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Successfully updated port: 654f4bc5-29db-40e4-bc4e-bdd325e98e7a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:19:23 np0005486808 nova_compute[259627]: 2025-10-14 09:19:23.745 2 DEBUG oslo_concurrency.lockutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "refresh_cache-e83e1125-7b9a-4ca1-9040-40dbc3e0237b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:19:23 np0005486808 nova_compute[259627]: 2025-10-14 09:19:23.745 2 DEBUG oslo_concurrency.lockutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquired lock "refresh_cache-e83e1125-7b9a-4ca1-9040-40dbc3e0237b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:19:23 np0005486808 nova_compute[259627]: 2025-10-14 09:19:23.746 2 DEBUG nova.network.neutron [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:19:23 np0005486808 nova_compute[259627]: 2025-10-14 09:19:23.869 2 DEBUG nova.compute.manager [req-e5571a34-7a7c-4e37-8fbd-0bbd3a8ab1b4 req-3c616914-8cb0-4814-948b-c05cf85bdce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Received event network-changed-654f4bc5-29db-40e4-bc4e-bdd325e98e7a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:19:23 np0005486808 nova_compute[259627]: 2025-10-14 09:19:23.870 2 DEBUG nova.compute.manager [req-e5571a34-7a7c-4e37-8fbd-0bbd3a8ab1b4 req-3c616914-8cb0-4814-948b-c05cf85bdce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Refreshing instance network info cache due to event network-changed-654f4bc5-29db-40e4-bc4e-bdd325e98e7a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:19:23 np0005486808 nova_compute[259627]: 2025-10-14 09:19:23.870 2 DEBUG oslo_concurrency.lockutils [req-e5571a34-7a7c-4e37-8fbd-0bbd3a8ab1b4 req-3c616914-8cb0-4814-948b-c05cf85bdce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e83e1125-7b9a-4ca1-9040-40dbc3e0237b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:19:23 np0005486808 nova_compute[259627]: 2025-10-14 09:19:23.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:19:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1970: 305 pgs: 305 active+clean; 41 MiB data, 737 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:19:24 np0005486808 nova_compute[259627]: 2025-10-14 09:19:24.508 2 DEBUG nova.network.neutron [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:19:25 np0005486808 podman[371362]: 2025-10-14 09:19:25.680930596 +0000 UTC m=+0.082587317 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 14 05:19:25 np0005486808 podman[371363]: 2025-10-14 09:19:25.692813888 +0000 UTC m=+0.091221449 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 14 05:19:25 np0005486808 nova_compute[259627]: 2025-10-14 09:19:25.707 2 DEBUG nova.network.neutron [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Updating instance_info_cache with network_info: [{"id": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "address": "fa:16:3e:cf:57:80", "network": {"id": "414bade6-3739-49f9-bce9-e93105157bbe", "bridge": "br-int", "label": "tempest-network-smoke--754120486", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654f4bc5-29", "ovs_interfaceid": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:19:25 np0005486808 nova_compute[259627]: 2025-10-14 09:19:25.731 2 DEBUG oslo_concurrency.lockutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Releasing lock "refresh_cache-e83e1125-7b9a-4ca1-9040-40dbc3e0237b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:19:25 np0005486808 nova_compute[259627]: 2025-10-14 09:19:25.732 2 DEBUG nova.compute.manager [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Instance network_info: |[{"id": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "address": "fa:16:3e:cf:57:80", "network": {"id": "414bade6-3739-49f9-bce9-e93105157bbe", "bridge": "br-int", "label": "tempest-network-smoke--754120486", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654f4bc5-29", "ovs_interfaceid": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:19:25 np0005486808 nova_compute[259627]: 2025-10-14 09:19:25.732 2 DEBUG oslo_concurrency.lockutils [req-e5571a34-7a7c-4e37-8fbd-0bbd3a8ab1b4 req-3c616914-8cb0-4814-948b-c05cf85bdce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e83e1125-7b9a-4ca1-9040-40dbc3e0237b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:19:25 np0005486808 nova_compute[259627]: 2025-10-14 09:19:25.732 2 DEBUG nova.network.neutron [req-e5571a34-7a7c-4e37-8fbd-0bbd3a8ab1b4 req-3c616914-8cb0-4814-948b-c05cf85bdce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Refreshing network info cache for port 654f4bc5-29db-40e4-bc4e-bdd325e98e7a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:19:25 np0005486808 nova_compute[259627]: 2025-10-14 09:19:25.734 2 DEBUG nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Start _get_guest_xml network_info=[{"id": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "address": "fa:16:3e:cf:57:80", "network": {"id": "414bade6-3739-49f9-bce9-e93105157bbe", "bridge": "br-int", "label": "tempest-network-smoke--754120486", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654f4bc5-29", "ovs_interfaceid": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:19:25 np0005486808 nova_compute[259627]: 2025-10-14 09:19:25.739 2 WARNING nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:19:25 np0005486808 nova_compute[259627]: 2025-10-14 09:19:25.743 2 DEBUG nova.virt.libvirt.host [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:19:25 np0005486808 nova_compute[259627]: 2025-10-14 09:19:25.744 2 DEBUG nova.virt.libvirt.host [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:19:25 np0005486808 nova_compute[259627]: 2025-10-14 09:19:25.746 2 DEBUG nova.virt.libvirt.host [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:19:25 np0005486808 nova_compute[259627]: 2025-10-14 09:19:25.747 2 DEBUG nova.virt.libvirt.host [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:19:25 np0005486808 nova_compute[259627]: 2025-10-14 09:19:25.747 2 DEBUG nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:19:25 np0005486808 nova_compute[259627]: 2025-10-14 09:19:25.747 2 DEBUG nova.virt.hardware [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:19:25 np0005486808 nova_compute[259627]: 2025-10-14 09:19:25.748 2 DEBUG nova.virt.hardware [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:19:25 np0005486808 nova_compute[259627]: 2025-10-14 09:19:25.748 2 DEBUG nova.virt.hardware [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:19:25 np0005486808 nova_compute[259627]: 2025-10-14 09:19:25.748 2 DEBUG nova.virt.hardware [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:19:25 np0005486808 nova_compute[259627]: 2025-10-14 09:19:25.748 2 DEBUG nova.virt.hardware [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:19:25 np0005486808 nova_compute[259627]: 2025-10-14 09:19:25.748 2 DEBUG nova.virt.hardware [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:19:25 np0005486808 nova_compute[259627]: 2025-10-14 09:19:25.749 2 DEBUG nova.virt.hardware [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:19:25 np0005486808 nova_compute[259627]: 2025-10-14 09:19:25.749 2 DEBUG nova.virt.hardware [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:19:25 np0005486808 nova_compute[259627]: 2025-10-14 09:19:25.749 2 DEBUG nova.virt.hardware [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:19:25 np0005486808 nova_compute[259627]: 2025-10-14 09:19:25.749 2 DEBUG nova.virt.hardware [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:19:25 np0005486808 nova_compute[259627]: 2025-10-14 09:19:25.749 2 DEBUG nova.virt.hardware [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:19:25 np0005486808 nova_compute[259627]: 2025-10-14 09:19:25.752 2 DEBUG oslo_concurrency.processutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:19:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1971: 305 pgs: 305 active+clean; 88 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:19:26 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:19:26 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2240021058' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:19:26 np0005486808 nova_compute[259627]: 2025-10-14 09:19:26.268 2 DEBUG oslo_concurrency.processutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:19:26 np0005486808 nova_compute[259627]: 2025-10-14 09:19:26.303 2 DEBUG nova.storage.rbd_utils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image e83e1125-7b9a-4ca1-9040-40dbc3e0237b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:19:26 np0005486808 nova_compute[259627]: 2025-10-14 09:19:26.308 2 DEBUG oslo_concurrency.processutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:19:26 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:19:26 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3183136731' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:19:26 np0005486808 nova_compute[259627]: 2025-10-14 09:19:26.835 2 DEBUG oslo_concurrency.processutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:19:26 np0005486808 nova_compute[259627]: 2025-10-14 09:19:26.836 2 DEBUG nova.virt.libvirt.vif [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:19:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-424606386',display_name='tempest-TestNetworkBasicOps-server-424606386',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-424606386',id=113,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ09dxd7Bak+6//QP8udxIh6uhpPchcDdl5+GxU18yI74s0WuknXeiKCI2lJf73VzQCZgNgqmgyCCsXfAOE13LYP90WL3qgd3ZF20sa84Uh+OXGCZ9Hmict7wvQuSqAzQg==',key_name='tempest-TestNetworkBasicOps-918756716',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-watt440w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:19:20Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=e83e1125-7b9a-4ca1-9040-40dbc3e0237b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "address": "fa:16:3e:cf:57:80", "network": {"id": "414bade6-3739-49f9-bce9-e93105157bbe", "bridge": "br-int", "label": "tempest-network-smoke--754120486", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654f4bc5-29", "ovs_interfaceid": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:19:26 np0005486808 nova_compute[259627]: 2025-10-14 09:19:26.837 2 DEBUG nova.network.os_vif_util [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "address": "fa:16:3e:cf:57:80", "network": {"id": "414bade6-3739-49f9-bce9-e93105157bbe", "bridge": "br-int", "label": "tempest-network-smoke--754120486", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654f4bc5-29", "ovs_interfaceid": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:19:26 np0005486808 nova_compute[259627]: 2025-10-14 09:19:26.838 2 DEBUG nova.network.os_vif_util [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:57:80,bridge_name='br-int',has_traffic_filtering=True,id=654f4bc5-29db-40e4-bc4e-bdd325e98e7a,network=Network(414bade6-3739-49f9-bce9-e93105157bbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap654f4bc5-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:19:26 np0005486808 nova_compute[259627]: 2025-10-14 09:19:26.839 2 DEBUG nova.objects.instance [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'pci_devices' on Instance uuid e83e1125-7b9a-4ca1-9040-40dbc3e0237b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:19:26 np0005486808 nova_compute[259627]: 2025-10-14 09:19:26.859 2 DEBUG nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:19:26 np0005486808 nova_compute[259627]:  <uuid>e83e1125-7b9a-4ca1-9040-40dbc3e0237b</uuid>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:  <name>instance-00000071</name>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:19:26 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:      <nova:name>tempest-TestNetworkBasicOps-server-424606386</nova:name>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:19:25</nova:creationTime>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:19:26 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:        <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:        <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:        <nova:port uuid="654f4bc5-29db-40e4-bc4e-bdd325e98e7a">
Oct 14 05:19:26 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:      <entry name="serial">e83e1125-7b9a-4ca1-9040-40dbc3e0237b</entry>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:      <entry name="uuid">e83e1125-7b9a-4ca1-9040-40dbc3e0237b</entry>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:19:26 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/e83e1125-7b9a-4ca1-9040-40dbc3e0237b_disk">
Oct 14 05:19:26 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:19:26 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:19:26 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/e83e1125-7b9a-4ca1-9040-40dbc3e0237b_disk.config">
Oct 14 05:19:26 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:19:26 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:19:26 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:cf:57:80"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:      <target dev="tap654f4bc5-29"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:19:26 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/e83e1125-7b9a-4ca1-9040-40dbc3e0237b/console.log" append="off"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:19:26 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:19:26 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:19:26 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:19:26 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:19:26 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:19:26 np0005486808 nova_compute[259627]: 2025-10-14 09:19:26.861 2 DEBUG nova.compute.manager [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Preparing to wait for external event network-vif-plugged-654f4bc5-29db-40e4-bc4e-bdd325e98e7a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:19:26 np0005486808 nova_compute[259627]: 2025-10-14 09:19:26.861 2 DEBUG oslo_concurrency.lockutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:19:26 np0005486808 nova_compute[259627]: 2025-10-14 09:19:26.862 2 DEBUG oslo_concurrency.lockutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:19:26 np0005486808 nova_compute[259627]: 2025-10-14 09:19:26.863 2 DEBUG oslo_concurrency.lockutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:19:26 np0005486808 nova_compute[259627]: 2025-10-14 09:19:26.864 2 DEBUG nova.virt.libvirt.vif [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:19:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-424606386',display_name='tempest-TestNetworkBasicOps-server-424606386',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-424606386',id=113,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ09dxd7Bak+6//QP8udxIh6uhpPchcDdl5+GxU18yI74s0WuknXeiKCI2lJf73VzQCZgNgqmgyCCsXfAOE13LYP90WL3qgd3ZF20sa84Uh+OXGCZ9Hmict7wvQuSqAzQg==',key_name='tempest-TestNetworkBasicOps-918756716',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-watt440w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:19:20Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=e83e1125-7b9a-4ca1-9040-40dbc3e0237b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "address": "fa:16:3e:cf:57:80", "network": {"id": "414bade6-3739-49f9-bce9-e93105157bbe", "bridge": "br-int", "label": "tempest-network-smoke--754120486", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654f4bc5-29", "ovs_interfaceid": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:19:26 np0005486808 nova_compute[259627]: 2025-10-14 09:19:26.865 2 DEBUG nova.network.os_vif_util [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "address": "fa:16:3e:cf:57:80", "network": {"id": "414bade6-3739-49f9-bce9-e93105157bbe", "bridge": "br-int", "label": "tempest-network-smoke--754120486", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654f4bc5-29", "ovs_interfaceid": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:19:26 np0005486808 nova_compute[259627]: 2025-10-14 09:19:26.866 2 DEBUG nova.network.os_vif_util [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:57:80,bridge_name='br-int',has_traffic_filtering=True,id=654f4bc5-29db-40e4-bc4e-bdd325e98e7a,network=Network(414bade6-3739-49f9-bce9-e93105157bbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap654f4bc5-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:19:26 np0005486808 nova_compute[259627]: 2025-10-14 09:19:26.866 2 DEBUG os_vif [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:57:80,bridge_name='br-int',has_traffic_filtering=True,id=654f4bc5-29db-40e4-bc4e-bdd325e98e7a,network=Network(414bade6-3739-49f9-bce9-e93105157bbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap654f4bc5-29') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:19:26 np0005486808 nova_compute[259627]: 2025-10-14 09:19:26.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:19:26 np0005486808 nova_compute[259627]: 2025-10-14 09:19:26.869 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:19:26 np0005486808 nova_compute[259627]: 2025-10-14 09:19:26.869 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:19:26 np0005486808 nova_compute[259627]: 2025-10-14 09:19:26.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:19:26 np0005486808 nova_compute[259627]: 2025-10-14 09:19:26.875 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap654f4bc5-29, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:19:26 np0005486808 nova_compute[259627]: 2025-10-14 09:19:26.876 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap654f4bc5-29, col_values=(('external_ids', {'iface-id': '654f4bc5-29db-40e4-bc4e-bdd325e98e7a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cf:57:80', 'vm-uuid': 'e83e1125-7b9a-4ca1-9040-40dbc3e0237b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:19:26 np0005486808 nova_compute[259627]: 2025-10-14 09:19:26.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:19:26 np0005486808 NetworkManager[44885]: <info>  [1760433566.8802] manager: (tap654f4bc5-29): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/478)
Oct 14 05:19:26 np0005486808 nova_compute[259627]: 2025-10-14 09:19:26.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:19:26 np0005486808 nova_compute[259627]: 2025-10-14 09:19:26.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:19:26 np0005486808 nova_compute[259627]: 2025-10-14 09:19:26.891 2 INFO os_vif [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:57:80,bridge_name='br-int',has_traffic_filtering=True,id=654f4bc5-29db-40e4-bc4e-bdd325e98e7a,network=Network(414bade6-3739-49f9-bce9-e93105157bbe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap654f4bc5-29')#033[00m
Oct 14 05:19:26 np0005486808 nova_compute[259627]: 2025-10-14 09:19:26.948 2 DEBUG nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:19:26 np0005486808 nova_compute[259627]: 2025-10-14 09:19:26.949 2 DEBUG nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:19:26 np0005486808 nova_compute[259627]: 2025-10-14 09:19:26.949 2 DEBUG nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No VIF found with MAC fa:16:3e:cf:57:80, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:19:26 np0005486808 nova_compute[259627]: 2025-10-14 09:19:26.950 2 INFO nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Using config drive#033[00m
Oct 14 05:19:26 np0005486808 nova_compute[259627]: 2025-10-14 09:19:26.974 2 DEBUG nova.storage.rbd_utils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image e83e1125-7b9a-4ca1-9040-40dbc3e0237b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:19:27 np0005486808 nova_compute[259627]: 2025-10-14 09:19:27.505 2 DEBUG nova.network.neutron [req-e5571a34-7a7c-4e37-8fbd-0bbd3a8ab1b4 req-3c616914-8cb0-4814-948b-c05cf85bdce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Updated VIF entry in instance network info cache for port 654f4bc5-29db-40e4-bc4e-bdd325e98e7a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:19:27 np0005486808 nova_compute[259627]: 2025-10-14 09:19:27.506 2 DEBUG nova.network.neutron [req-e5571a34-7a7c-4e37-8fbd-0bbd3a8ab1b4 req-3c616914-8cb0-4814-948b-c05cf85bdce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Updating instance_info_cache with network_info: [{"id": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "address": "fa:16:3e:cf:57:80", "network": {"id": "414bade6-3739-49f9-bce9-e93105157bbe", "bridge": "br-int", "label": "tempest-network-smoke--754120486", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654f4bc5-29", "ovs_interfaceid": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:19:27 np0005486808 nova_compute[259627]: 2025-10-14 09:19:27.528 2 DEBUG oslo_concurrency.lockutils [req-e5571a34-7a7c-4e37-8fbd-0bbd3a8ab1b4 req-3c616914-8cb0-4814-948b-c05cf85bdce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-e83e1125-7b9a-4ca1-9040-40dbc3e0237b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:19:27 np0005486808 nova_compute[259627]: 2025-10-14 09:19:27.617 2 INFO nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Creating config drive at /var/lib/nova/instances/e83e1125-7b9a-4ca1-9040-40dbc3e0237b/disk.config#033[00m
Oct 14 05:19:27 np0005486808 nova_compute[259627]: 2025-10-14 09:19:27.623 2 DEBUG oslo_concurrency.processutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e83e1125-7b9a-4ca1-9040-40dbc3e0237b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprdgl34lc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:19:27 np0005486808 nova_compute[259627]: 2025-10-14 09:19:27.771 2 DEBUG oslo_concurrency.processutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e83e1125-7b9a-4ca1-9040-40dbc3e0237b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprdgl34lc" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:19:27 np0005486808 nova_compute[259627]: 2025-10-14 09:19:27.812 2 DEBUG nova.storage.rbd_utils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image e83e1125-7b9a-4ca1-9040-40dbc3e0237b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:19:27 np0005486808 nova_compute[259627]: 2025-10-14 09:19:27.816 2 DEBUG oslo_concurrency.processutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e83e1125-7b9a-4ca1-9040-40dbc3e0237b/disk.config e83e1125-7b9a-4ca1-9040-40dbc3e0237b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:19:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:19:28 np0005486808 nova_compute[259627]: 2025-10-14 09:19:28.025 2 DEBUG oslo_concurrency.processutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e83e1125-7b9a-4ca1-9040-40dbc3e0237b/disk.config e83e1125-7b9a-4ca1-9040-40dbc3e0237b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.208s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:19:28 np0005486808 nova_compute[259627]: 2025-10-14 09:19:28.026 2 INFO nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Deleting local config drive /var/lib/nova/instances/e83e1125-7b9a-4ca1-9040-40dbc3e0237b/disk.config because it was imported into RBD.#033[00m
Oct 14 05:19:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1972: 305 pgs: 305 active+clean; 88 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:19:28 np0005486808 kernel: tap654f4bc5-29: entered promiscuous mode
Oct 14 05:19:28 np0005486808 NetworkManager[44885]: <info>  [1760433568.1064] manager: (tap654f4bc5-29): new Tun device (/org/freedesktop/NetworkManager/Devices/479)
Oct 14 05:19:28 np0005486808 nova_compute[259627]: 2025-10-14 09:19:28.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:19:28 np0005486808 ovn_controller[152662]: 2025-10-14T09:19:28Z|01186|binding|INFO|Claiming lport 654f4bc5-29db-40e4-bc4e-bdd325e98e7a for this chassis.
Oct 14 05:19:28 np0005486808 ovn_controller[152662]: 2025-10-14T09:19:28Z|01187|binding|INFO|654f4bc5-29db-40e4-bc4e-bdd325e98e7a: Claiming fa:16:3e:cf:57:80 10.100.0.11
Oct 14 05:19:28 np0005486808 nova_compute[259627]: 2025-10-14 09:19:28.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:19:28 np0005486808 nova_compute[259627]: 2025-10-14 09:19:28.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.132 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:57:80 10.100.0.11'], port_security=['fa:16:3e:cf:57:80 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e83e1125-7b9a-4ca1-9040-40dbc3e0237b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-414bade6-3739-49f9-bce9-e93105157bbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '2', 'neutron:security_group_ids': '885d3329-1a6a-4e62-8633-0f83200cd25a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=553b14d5-fce8-4530-b374-6c59ead23d8e, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=654f4bc5-29db-40e4-bc4e-bdd325e98e7a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.133 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 654f4bc5-29db-40e4-bc4e-bdd325e98e7a in datapath 414bade6-3739-49f9-bce9-e93105157bbe bound to our chassis
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.134 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 414bade6-3739-49f9-bce9-e93105157bbe
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.147 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a1a07615-1a1a-4927-ad36-0b6e381c1257]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.147 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap414bade6-31 in ovnmeta-414bade6-3739-49f9-bce9-e93105157bbe namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.149 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap414bade6-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.149 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e17fd673-efaa-43a3-8c97-cd47b2e4fe0a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.150 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c0acc3fc-64e2-4329-a8aa-47d3dbd6edc8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:19:28 np0005486808 systemd-udevd[371538]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.161 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[fe008e6a-62e1-435a-a1c6-05ad1d97c816]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:19:28 np0005486808 systemd-machined[214636]: New machine qemu-143-instance-00000071.
Oct 14 05:19:28 np0005486808 NetworkManager[44885]: <info>  [1760433568.1788] device (tap654f4bc5-29): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:19:28 np0005486808 NetworkManager[44885]: <info>  [1760433568.1802] device (tap654f4bc5-29): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:19:28 np0005486808 systemd[1]: Started Virtual Machine qemu-143-instance-00000071.
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.186 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0a53117e-f4fc-4f77-bb2e-74bcdb77da33]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:19:28 np0005486808 nova_compute[259627]: 2025-10-14 09:19:28.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:19:28 np0005486808 ovn_controller[152662]: 2025-10-14T09:19:28Z|01188|binding|INFO|Setting lport 654f4bc5-29db-40e4-bc4e-bdd325e98e7a ovn-installed in OVS
Oct 14 05:19:28 np0005486808 ovn_controller[152662]: 2025-10-14T09:19:28Z|01189|binding|INFO|Setting lport 654f4bc5-29db-40e4-bc4e-bdd325e98e7a up in Southbound
Oct 14 05:19:28 np0005486808 nova_compute[259627]: 2025-10-14 09:19:28.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.225 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[986fbab3-0363-43fb-8520-e61fe8bb2e9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.230 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[205217cd-0a10-4acf-9fc4-e5d636f404d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:19:28 np0005486808 NetworkManager[44885]: <info>  [1760433568.2317] manager: (tap414bade6-30): new Veth device (/org/freedesktop/NetworkManager/Devices/480)
Oct 14 05:19:28 np0005486808 systemd-udevd[371541]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.276 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e955f50d-2c3a-40ca-8d31-fa824f223a7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.282 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[2488c25b-7e41-4b67-ae4a-21314f8b97a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:19:28 np0005486808 NetworkManager[44885]: <info>  [1760433568.3118] device (tap414bade6-30): carrier: link connected
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.320 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ef729f4d-c3d5-4101-95c1-11d864003f7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.341 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[01e9963c-4438-45c4-96ae-be39de7dac37]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap414bade6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:59:05:98'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 734707, 'reachable_time': 22252, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371569, 'error': None, 'target': 'ovnmeta-414bade6-3739-49f9-bce9-e93105157bbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.360 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1f27a192-92b8-4cdb-a846-4455a59d05dd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe59:598'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 734707, 'tstamp': 734707}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 371570, 'error': None, 'target': 'ovnmeta-414bade6-3739-49f9-bce9-e93105157bbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.387 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[aa7e28c3-fe47-4280-ac33-cf11935ecf57]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap414bade6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:59:05:98'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 734707, 'reachable_time': 22252, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 371571, 'error': None, 'target': 'ovnmeta-414bade6-3739-49f9-bce9-e93105157bbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.436 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5e751b3d-2ef8-4894-8b2b-c2f50c81b2d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.488 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:13:a8 2001:db8:0:1:f816:3eff:fe43:13a8'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe43:13a8/64', 'neutron:device_id': 'ovnmeta-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d0f6a07b344997b67dd4221ec5a35a', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9122ea86-ed84-4854-b541-8d32cf1d8ec5, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=09fe180c-b874-4a50-95c9-e16dfe99d5f0) old=Port_Binding(mac=['fa:16:3e:43:13:a8 2001:db8::f816:3eff:fe43:13a8'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe43:13a8/64', 'neutron:device_id': 'ovnmeta-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ef50a42-0521-4feb-8502-83f9d20484ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d0f6a07b344997b67dd4221ec5a35a', 'neutron:revision_number': '28', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.536 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0f9c9592-3069-476a-9c59-447e24af1b46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.539 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap414bade6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.539 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.540 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap414bade6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:19:28 np0005486808 nova_compute[259627]: 2025-10-14 09:19:28.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:19:28 np0005486808 NetworkManager[44885]: <info>  [1760433568.5437] manager: (tap414bade6-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/481)
Oct 14 05:19:28 np0005486808 kernel: tap414bade6-30: entered promiscuous mode
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.548 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap414bade6-30, col_values=(('external_ids', {'iface-id': '19347ace-27ce-4391-bacc-c18e3400875e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:19:28 np0005486808 nova_compute[259627]: 2025-10-14 09:19:28.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:19:28 np0005486808 ovn_controller[152662]: 2025-10-14T09:19:28Z|01190|binding|INFO|Releasing lport 19347ace-27ce-4391-bacc-c18e3400875e from this chassis (sb_readonly=0)
Oct 14 05:19:28 np0005486808 nova_compute[259627]: 2025-10-14 09:19:28.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.580 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/414bade6-3739-49f9-bce9-e93105157bbe.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/414bade6-3739-49f9-bce9-e93105157bbe.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.581 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[aa06418b-ab6f-42f8-87f4-8941be593a9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.582 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-414bade6-3739-49f9-bce9-e93105157bbe
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/414bade6-3739-49f9-bce9-e93105157bbe.pid.haproxy
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 414bade6-3739-49f9-bce9-e93105157bbe
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 05:19:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:19:28.583 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-414bade6-3739-49f9-bce9-e93105157bbe', 'env', 'PROCESS_TAG=haproxy-414bade6-3739-49f9-bce9-e93105157bbe', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/414bade6-3739-49f9-bce9-e93105157bbe.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 05:19:28 np0005486808 nova_compute[259627]: 2025-10-14 09:19:28.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:19:29 np0005486808 podman[371645]: 2025-10-14 09:19:29.020250037 +0000 UTC m=+0.064446269 container create 3175e13606cf725dfae6bf341102bb1826c5dcac0719b588eec0fdcf66ed7b96 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-414bade6-3739-49f9-bce9-e93105157bbe, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 05:19:29 np0005486808 systemd[1]: Started libpod-conmon-3175e13606cf725dfae6bf341102bb1826c5dcac0719b588eec0fdcf66ed7b96.scope.
Oct 14 05:19:29 np0005486808 podman[371645]: 2025-10-14 09:19:28.984483546 +0000 UTC m=+0.028679768 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:19:29 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:19:29 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ae02de95adf7e48c08e1cd0b516979c12355cab1b044706270723c27c91c4aa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:19:29 np0005486808 podman[371645]: 2025-10-14 09:19:29.103597452 +0000 UTC m=+0.147793694 container init 3175e13606cf725dfae6bf341102bb1826c5dcac0719b588eec0fdcf66ed7b96 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-414bade6-3739-49f9-bce9-e93105157bbe, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:19:29 np0005486808 podman[371645]: 2025-10-14 09:19:29.110407359 +0000 UTC m=+0.154603571 container start 3175e13606cf725dfae6bf341102bb1826c5dcac0719b588eec0fdcf66ed7b96 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-414bade6-3739-49f9-bce9-e93105157bbe, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:19:29 np0005486808 neutron-haproxy-ovnmeta-414bade6-3739-49f9-bce9-e93105157bbe[371660]: [NOTICE]   (371664) : New worker (371666) forked
Oct 14 05:19:29 np0005486808 neutron-haproxy-ovnmeta-414bade6-3739-49f9-bce9-e93105157bbe[371660]: [NOTICE]   (371664) : Loading success.
Oct 14 05:19:29 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:19:29.172 162547 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 09fe180c-b874-4a50-95c9-e16dfe99d5f0 in datapath 1ef50a42-0521-4feb-8502-83f9d20484ce updated
Oct 14 05:19:29 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:19:29.173 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1ef50a42-0521-4feb-8502-83f9d20484ce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 05:19:29 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:19:29.174 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[88b0678c-aa0f-403d-aab9-67e253460672]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:19:29 np0005486808 nova_compute[259627]: 2025-10-14 09:19:29.420 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433569.4197912, e83e1125-7b9a-4ca1-9040-40dbc3e0237b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:19:29 np0005486808 nova_compute[259627]: 2025-10-14 09:19:29.422 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] VM Started (Lifecycle Event)
Oct 14 05:19:29 np0005486808 nova_compute[259627]: 2025-10-14 09:19:29.457 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:19:29 np0005486808 nova_compute[259627]: 2025-10-14 09:19:29.463 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433569.4199967, e83e1125-7b9a-4ca1-9040-40dbc3e0237b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:19:29 np0005486808 nova_compute[259627]: 2025-10-14 09:19:29.463 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] VM Paused (Lifecycle Event)
Oct 14 05:19:29 np0005486808 nova_compute[259627]: 2025-10-14 09:19:29.498 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:19:29 np0005486808 nova_compute[259627]: 2025-10-14 09:19:29.502 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:19:29 np0005486808 nova_compute[259627]: 2025-10-14 09:19:29.533 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:19:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1973: 305 pgs: 305 active+clean; 88 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 14 05:19:31 np0005486808 nova_compute[259627]: 2025-10-14 09:19:31.028 2 DEBUG nova.compute.manager [req-cc72bfa0-bc47-4318-9ecc-f70e24ed8df8 req-a3dc718a-9450-4ae4-a228-d64096713bc2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Received event network-vif-plugged-654f4bc5-29db-40e4-bc4e-bdd325e98e7a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:19:31 np0005486808 nova_compute[259627]: 2025-10-14 09:19:31.029 2 DEBUG oslo_concurrency.lockutils [req-cc72bfa0-bc47-4318-9ecc-f70e24ed8df8 req-a3dc718a-9450-4ae4-a228-d64096713bc2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:19:31 np0005486808 nova_compute[259627]: 2025-10-14 09:19:31.029 2 DEBUG oslo_concurrency.lockutils [req-cc72bfa0-bc47-4318-9ecc-f70e24ed8df8 req-a3dc718a-9450-4ae4-a228-d64096713bc2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:19:31 np0005486808 nova_compute[259627]: 2025-10-14 09:19:31.030 2 DEBUG oslo_concurrency.lockutils [req-cc72bfa0-bc47-4318-9ecc-f70e24ed8df8 req-a3dc718a-9450-4ae4-a228-d64096713bc2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:19:31 np0005486808 nova_compute[259627]: 2025-10-14 09:19:31.030 2 DEBUG nova.compute.manager [req-cc72bfa0-bc47-4318-9ecc-f70e24ed8df8 req-a3dc718a-9450-4ae4-a228-d64096713bc2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Processing event network-vif-plugged-654f4bc5-29db-40e4-bc4e-bdd325e98e7a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:19:31 np0005486808 nova_compute[259627]: 2025-10-14 09:19:31.031 2 DEBUG nova.compute.manager [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:19:31 np0005486808 nova_compute[259627]: 2025-10-14 09:19:31.036 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433571.0363295, e83e1125-7b9a-4ca1-9040-40dbc3e0237b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:19:31 np0005486808 nova_compute[259627]: 2025-10-14 09:19:31.037 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:19:31 np0005486808 nova_compute[259627]: 2025-10-14 09:19:31.041 2 DEBUG nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:19:31 np0005486808 nova_compute[259627]: 2025-10-14 09:19:31.046 2 INFO nova.virt.libvirt.driver [-] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Instance spawned successfully.#033[00m
Oct 14 05:19:31 np0005486808 nova_compute[259627]: 2025-10-14 09:19:31.047 2 DEBUG nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:19:31 np0005486808 nova_compute[259627]: 2025-10-14 09:19:31.063 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:19:31 np0005486808 nova_compute[259627]: 2025-10-14 09:19:31.069 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:19:31 np0005486808 nova_compute[259627]: 2025-10-14 09:19:31.082 2 DEBUG nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:19:31 np0005486808 nova_compute[259627]: 2025-10-14 09:19:31.083 2 DEBUG nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:19:31 np0005486808 nova_compute[259627]: 2025-10-14 09:19:31.084 2 DEBUG nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:19:31 np0005486808 nova_compute[259627]: 2025-10-14 09:19:31.085 2 DEBUG nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:19:31 np0005486808 nova_compute[259627]: 2025-10-14 09:19:31.085 2 DEBUG nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:19:31 np0005486808 nova_compute[259627]: 2025-10-14 09:19:31.086 2 DEBUG nova.virt.libvirt.driver [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:19:31 np0005486808 nova_compute[259627]: 2025-10-14 09:19:31.124 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:19:31 np0005486808 nova_compute[259627]: 2025-10-14 09:19:31.179 2 INFO nova.compute.manager [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Took 10.22 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:19:31 np0005486808 nova_compute[259627]: 2025-10-14 09:19:31.180 2 DEBUG nova.compute.manager [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:19:31 np0005486808 nova_compute[259627]: 2025-10-14 09:19:31.262 2 INFO nova.compute.manager [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Took 11.24 seconds to build instance.#033[00m
Oct 14 05:19:31 np0005486808 nova_compute[259627]: 2025-10-14 09:19:31.281 2 DEBUG oslo_concurrency.lockutils [None req-0a4ceb58-d656-4ce6-ba69-949aaf45cf91 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.362s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:19:31 np0005486808 nova_compute[259627]: 2025-10-14 09:19:31.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:19:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1974: 305 pgs: 305 active+clean; 88 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 14 05:19:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:19:32 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:19:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:19:32 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:19:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:19:32 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:19:32 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 759961b7-f397-4428-9d7a-0c600d867455 does not exist
Oct 14 05:19:32 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 8a7faf60-85e3-4b06-bb9a-c806743a4138 does not exist
Oct 14 05:19:32 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev bf61d6d9-c2ad-4459-93a1-4639411e629e does not exist
Oct 14 05:19:32 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:19:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:19:32 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:19:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:19:32 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:19:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:19:32 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:19:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:19:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:19:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:19:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:19:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:19:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:19:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:19:32
Oct 14 05:19:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:19:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:19:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['vms', '.mgr', 'default.rgw.meta', 'volumes', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'images', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control']
Oct 14 05:19:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:19:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:19:33 np0005486808 nova_compute[259627]: 2025-10-14 09:19:33.000 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:19:33 np0005486808 nova_compute[259627]: 2025-10-14 09:19:33.001 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:19:33 np0005486808 nova_compute[259627]: 2025-10-14 09:19:33.043 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:19:33 np0005486808 nova_compute[259627]: 2025-10-14 09:19:33.043 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:19:33 np0005486808 nova_compute[259627]: 2025-10-14 09:19:33.043 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:19:33 np0005486808 nova_compute[259627]: 2025-10-14 09:19:33.043 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:19:33 np0005486808 nova_compute[259627]: 2025-10-14 09:19:33.044 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:19:33 np0005486808 podman[371946]: 2025-10-14 09:19:33.115910601 +0000 UTC m=+0.038525981 container create 38f5be4f79f3c45e634c7a6bde652581f12744026f932e559df4ed8b89caa746 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:19:33 np0005486808 nova_compute[259627]: 2025-10-14 09:19:33.128 2 DEBUG nova.compute.manager [req-ef55a6ed-f1a4-46ff-a6de-16a4ea5efa77 req-815baedc-1dbe-4a78-b428-8617c0ab08cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Received event network-vif-plugged-654f4bc5-29db-40e4-bc4e-bdd325e98e7a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:19:33 np0005486808 nova_compute[259627]: 2025-10-14 09:19:33.128 2 DEBUG oslo_concurrency.lockutils [req-ef55a6ed-f1a4-46ff-a6de-16a4ea5efa77 req-815baedc-1dbe-4a78-b428-8617c0ab08cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:19:33 np0005486808 nova_compute[259627]: 2025-10-14 09:19:33.128 2 DEBUG oslo_concurrency.lockutils [req-ef55a6ed-f1a4-46ff-a6de-16a4ea5efa77 req-815baedc-1dbe-4a78-b428-8617c0ab08cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:19:33 np0005486808 nova_compute[259627]: 2025-10-14 09:19:33.129 2 DEBUG oslo_concurrency.lockutils [req-ef55a6ed-f1a4-46ff-a6de-16a4ea5efa77 req-815baedc-1dbe-4a78-b428-8617c0ab08cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:19:33 np0005486808 nova_compute[259627]: 2025-10-14 09:19:33.129 2 DEBUG nova.compute.manager [req-ef55a6ed-f1a4-46ff-a6de-16a4ea5efa77 req-815baedc-1dbe-4a78-b428-8617c0ab08cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] No waiting events found dispatching network-vif-plugged-654f4bc5-29db-40e4-bc4e-bdd325e98e7a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:19:33 np0005486808 nova_compute[259627]: 2025-10-14 09:19:33.129 2 WARNING nova.compute.manager [req-ef55a6ed-f1a4-46ff-a6de-16a4ea5efa77 req-815baedc-1dbe-4a78-b428-8617c0ab08cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Received unexpected event network-vif-plugged-654f4bc5-29db-40e4-bc4e-bdd325e98e7a for instance with vm_state active and task_state None.#033[00m
Oct 14 05:19:33 np0005486808 systemd[1]: Started libpod-conmon-38f5be4f79f3c45e634c7a6bde652581f12744026f932e559df4ed8b89caa746.scope.
Oct 14 05:19:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:19:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:19:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:19:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:19:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:19:33 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:19:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:19:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:19:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:19:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:19:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:19:33 np0005486808 podman[371946]: 2025-10-14 09:19:33.099509367 +0000 UTC m=+0.022124797 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:19:33 np0005486808 podman[371946]: 2025-10-14 09:19:33.200689811 +0000 UTC m=+0.123305211 container init 38f5be4f79f3c45e634c7a6bde652581f12744026f932e559df4ed8b89caa746 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_northcutt, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 14 05:19:33 np0005486808 podman[371946]: 2025-10-14 09:19:33.206944395 +0000 UTC m=+0.129559775 container start 38f5be4f79f3c45e634c7a6bde652581f12744026f932e559df4ed8b89caa746 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_northcutt, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 14 05:19:33 np0005486808 podman[371946]: 2025-10-14 09:19:33.210035151 +0000 UTC m=+0.132650541 container attach 38f5be4f79f3c45e634c7a6bde652581f12744026f932e559df4ed8b89caa746 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:19:33 np0005486808 funny_northcutt[371962]: 167 167
Oct 14 05:19:33 np0005486808 systemd[1]: libpod-38f5be4f79f3c45e634c7a6bde652581f12744026f932e559df4ed8b89caa746.scope: Deactivated successfully.
Oct 14 05:19:33 np0005486808 conmon[371962]: conmon 38f5be4f79f3c45e634c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-38f5be4f79f3c45e634c7a6bde652581f12744026f932e559df4ed8b89caa746.scope/container/memory.events
Oct 14 05:19:33 np0005486808 podman[371946]: 2025-10-14 09:19:33.218052358 +0000 UTC m=+0.140667738 container died 38f5be4f79f3c45e634c7a6bde652581f12744026f932e559df4ed8b89caa746 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 14 05:19:33 np0005486808 systemd[1]: var-lib-containers-storage-overlay-518e75aee414bf0158c1147bb795b1c9ea24eef06365d5a4f842252129ead23a-merged.mount: Deactivated successfully.
Oct 14 05:19:33 np0005486808 podman[371946]: 2025-10-14 09:19:33.258354132 +0000 UTC m=+0.180969522 container remove 38f5be4f79f3c45e634c7a6bde652581f12744026f932e559df4ed8b89caa746 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_northcutt, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:19:33 np0005486808 systemd[1]: libpod-conmon-38f5be4f79f3c45e634c7a6bde652581f12744026f932e559df4ed8b89caa746.scope: Deactivated successfully.
Oct 14 05:19:33 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:19:33 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:19:33 np0005486808 podman[372005]: 2025-10-14 09:19:33.463090348 +0000 UTC m=+0.052531466 container create a8a8a80ad5e09d851fc8ab646879b47ad503de478591498dcb4f691dcc8747e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_brattain, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:19:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:19:33 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/646332625' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:19:33 np0005486808 systemd[1]: Started libpod-conmon-a8a8a80ad5e09d851fc8ab646879b47ad503de478591498dcb4f691dcc8747e3.scope.
Oct 14 05:19:33 np0005486808 nova_compute[259627]: 2025-10-14 09:19:33.509 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:19:33 np0005486808 podman[372005]: 2025-10-14 09:19:33.443905985 +0000 UTC m=+0.033347083 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:19:33 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:19:33 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca3b5b8955cea1640a15e8ecca601124af6f78b9c1fb39e802f6153dec09c63f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:19:33 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca3b5b8955cea1640a15e8ecca601124af6f78b9c1fb39e802f6153dec09c63f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:19:33 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca3b5b8955cea1640a15e8ecca601124af6f78b9c1fb39e802f6153dec09c63f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:19:33 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca3b5b8955cea1640a15e8ecca601124af6f78b9c1fb39e802f6153dec09c63f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:19:33 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca3b5b8955cea1640a15e8ecca601124af6f78b9c1fb39e802f6153dec09c63f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:19:33 np0005486808 podman[372005]: 2025-10-14 09:19:33.603342945 +0000 UTC m=+0.192784053 container init a8a8a80ad5e09d851fc8ab646879b47ad503de478591498dcb4f691dcc8747e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_brattain, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 05:19:33 np0005486808 podman[372005]: 2025-10-14 09:19:33.610759107 +0000 UTC m=+0.200200185 container start a8a8a80ad5e09d851fc8ab646879b47ad503de478591498dcb4f691dcc8747e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 14 05:19:33 np0005486808 podman[372005]: 2025-10-14 09:19:33.615056753 +0000 UTC m=+0.204497871 container attach a8a8a80ad5e09d851fc8ab646879b47ad503de478591498dcb4f691dcc8747e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 14 05:19:33 np0005486808 nova_compute[259627]: 2025-10-14 09:19:33.639 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:19:33 np0005486808 nova_compute[259627]: 2025-10-14 09:19:33.640 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:19:33 np0005486808 nova_compute[259627]: 2025-10-14 09:19:33.802 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:19:33 np0005486808 nova_compute[259627]: 2025-10-14 09:19:33.805 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3609MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:19:33 np0005486808 nova_compute[259627]: 2025-10-14 09:19:33.805 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:19:33 np0005486808 nova_compute[259627]: 2025-10-14 09:19:33.805 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:19:33 np0005486808 nova_compute[259627]: 2025-10-14 09:19:33.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:19:33 np0005486808 nova_compute[259627]: 2025-10-14 09:19:33.921 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance e83e1125-7b9a-4ca1-9040-40dbc3e0237b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:19:33 np0005486808 nova_compute[259627]: 2025-10-14 09:19:33.922 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:19:33 np0005486808 nova_compute[259627]: 2025-10-14 09:19:33.922 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:19:33 np0005486808 nova_compute[259627]: 2025-10-14 09:19:33.946 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing inventories for resource provider 92105e1d-1743-46e3-a494-858b4331398a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 14 05:19:33 np0005486808 nova_compute[259627]: 2025-10-14 09:19:33.971 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Updating ProviderTree inventory for provider 92105e1d-1743-46e3-a494-858b4331398a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 14 05:19:33 np0005486808 nova_compute[259627]: 2025-10-14 09:19:33.971 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Updating inventory in ProviderTree for provider 92105e1d-1743-46e3-a494-858b4331398a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 14 05:19:34 np0005486808 nova_compute[259627]: 2025-10-14 09:19:34.006 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing aggregate associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 14 05:19:34 np0005486808 nova_compute[259627]: 2025-10-14 09:19:34.039 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing trait associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, traits: COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_ACCELERATORS,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 14 05:19:34 np0005486808 nova_compute[259627]: 2025-10-14 09:19:34.085 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:19:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1975: 305 pgs: 305 active+clean; 88 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 14 05:19:34 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:19:34 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1817955444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:19:34 np0005486808 nova_compute[259627]: 2025-10-14 09:19:34.579 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:19:34 np0005486808 nova_compute[259627]: 2025-10-14 09:19:34.596 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:19:34 np0005486808 nova_compute[259627]: 2025-10-14 09:19:34.627 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:19:34 np0005486808 nova_compute[259627]: 2025-10-14 09:19:34.650 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:19:34 np0005486808 nova_compute[259627]: 2025-10-14 09:19:34.653 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:19:34 np0005486808 inspiring_brattain[372021]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:19:34 np0005486808 inspiring_brattain[372021]: --> relative data size: 1.0
Oct 14 05:19:34 np0005486808 inspiring_brattain[372021]: --> All data devices are unavailable
Oct 14 05:19:34 np0005486808 systemd[1]: libpod-a8a8a80ad5e09d851fc8ab646879b47ad503de478591498dcb4f691dcc8747e3.scope: Deactivated successfully.
Oct 14 05:19:34 np0005486808 podman[372005]: 2025-10-14 09:19:34.791553219 +0000 UTC m=+1.380994297 container died a8a8a80ad5e09d851fc8ab646879b47ad503de478591498dcb4f691dcc8747e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_brattain, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:19:34 np0005486808 systemd[1]: libpod-a8a8a80ad5e09d851fc8ab646879b47ad503de478591498dcb4f691dcc8747e3.scope: Consumed 1.105s CPU time.
Oct 14 05:19:34 np0005486808 systemd[1]: var-lib-containers-storage-overlay-ca3b5b8955cea1640a15e8ecca601124af6f78b9c1fb39e802f6153dec09c63f-merged.mount: Deactivated successfully.
Oct 14 05:19:34 np0005486808 podman[372005]: 2025-10-14 09:19:34.841790107 +0000 UTC m=+1.431231185 container remove a8a8a80ad5e09d851fc8ab646879b47ad503de478591498dcb4f691dcc8747e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 14 05:19:34 np0005486808 systemd[1]: libpod-conmon-a8a8a80ad5e09d851fc8ab646879b47ad503de478591498dcb4f691dcc8747e3.scope: Deactivated successfully.
Oct 14 05:19:35 np0005486808 podman[372227]: 2025-10-14 09:19:35.653985475 +0000 UTC m=+0.075521772 container create baa1aa20673f892f0bec7b16fcb63be49bbca6c715a8d26e6d878ecc20f2236f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_goldberg, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 14 05:19:35 np0005486808 systemd[1]: Started libpod-conmon-baa1aa20673f892f0bec7b16fcb63be49bbca6c715a8d26e6d878ecc20f2236f.scope.
Oct 14 05:19:35 np0005486808 podman[372227]: 2025-10-14 09:19:35.628254611 +0000 UTC m=+0.049790958 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:19:35 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:19:35 np0005486808 podman[372227]: 2025-10-14 09:19:35.742534428 +0000 UTC m=+0.164070775 container init baa1aa20673f892f0bec7b16fcb63be49bbca6c715a8d26e6d878ecc20f2236f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 14 05:19:35 np0005486808 podman[372227]: 2025-10-14 09:19:35.754872532 +0000 UTC m=+0.176408859 container start baa1aa20673f892f0bec7b16fcb63be49bbca6c715a8d26e6d878ecc20f2236f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_goldberg, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:19:35 np0005486808 podman[372227]: 2025-10-14 09:19:35.759519916 +0000 UTC m=+0.181056303 container attach baa1aa20673f892f0bec7b16fcb63be49bbca6c715a8d26e6d878ecc20f2236f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:19:35 np0005486808 bold_goldberg[372243]: 167 167
Oct 14 05:19:35 np0005486808 systemd[1]: libpod-baa1aa20673f892f0bec7b16fcb63be49bbca6c715a8d26e6d878ecc20f2236f.scope: Deactivated successfully.
Oct 14 05:19:35 np0005486808 conmon[372243]: conmon baa1aa20673f892f0bec <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-baa1aa20673f892f0bec7b16fcb63be49bbca6c715a8d26e6d878ecc20f2236f.scope/container/memory.events
Oct 14 05:19:35 np0005486808 podman[372227]: 2025-10-14 09:19:35.763986696 +0000 UTC m=+0.185523003 container died baa1aa20673f892f0bec7b16fcb63be49bbca6c715a8d26e6d878ecc20f2236f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 14 05:19:35 np0005486808 systemd[1]: var-lib-containers-storage-overlay-4ed200ab9f975706f1c58c0ea2818d60e39d7d2ff23eca0fa5a2c4dcf93517cb-merged.mount: Deactivated successfully.
Oct 14 05:19:35 np0005486808 podman[372227]: 2025-10-14 09:19:35.814369138 +0000 UTC m=+0.235905435 container remove baa1aa20673f892f0bec7b16fcb63be49bbca6c715a8d26e6d878ecc20f2236f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_goldberg, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 05:19:35 np0005486808 systemd[1]: libpod-conmon-baa1aa20673f892f0bec7b16fcb63be49bbca6c715a8d26e6d878ecc20f2236f.scope: Deactivated successfully.
Oct 14 05:19:36 np0005486808 podman[372267]: 2025-10-14 09:19:36.020628622 +0000 UTC m=+0.063762533 container create a0f3155c04729375fcbbf103e2483ac0758fd4c11066b1812c94abe2e5b8ac79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_morse, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:19:36 np0005486808 NetworkManager[44885]: <info>  [1760433576.0604] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/482)
Oct 14 05:19:36 np0005486808 NetworkManager[44885]: <info>  [1760433576.0618] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/483)
Oct 14 05:19:36 np0005486808 nova_compute[259627]: 2025-10-14 09:19:36.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:19:36 np0005486808 podman[372267]: 2025-10-14 09:19:35.996726613 +0000 UTC m=+0.039860564 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:19:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1976: 305 pgs: 305 active+clean; 88 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 14 05:19:36 np0005486808 systemd[1]: Started libpod-conmon-a0f3155c04729375fcbbf103e2483ac0758fd4c11066b1812c94abe2e5b8ac79.scope.
Oct 14 05:19:36 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:19:36 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e4973ae9aaecd9d49baae324fc43ec6a3f208dc5d98198f0f5e755eaece51c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:19:36 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e4973ae9aaecd9d49baae324fc43ec6a3f208dc5d98198f0f5e755eaece51c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:19:36 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e4973ae9aaecd9d49baae324fc43ec6a3f208dc5d98198f0f5e755eaece51c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:19:36 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e4973ae9aaecd9d49baae324fc43ec6a3f208dc5d98198f0f5e755eaece51c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:19:36 np0005486808 podman[372267]: 2025-10-14 09:19:36.141504111 +0000 UTC m=+0.184638032 container init a0f3155c04729375fcbbf103e2483ac0758fd4c11066b1812c94abe2e5b8ac79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_morse, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:19:36 np0005486808 podman[372267]: 2025-10-14 09:19:36.147883278 +0000 UTC m=+0.191017179 container start a0f3155c04729375fcbbf103e2483ac0758fd4c11066b1812c94abe2e5b8ac79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_morse, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 05:19:36 np0005486808 podman[372267]: 2025-10-14 09:19:36.151333143 +0000 UTC m=+0.194467074 container attach a0f3155c04729375fcbbf103e2483ac0758fd4c11066b1812c94abe2e5b8ac79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:19:36 np0005486808 nova_compute[259627]: 2025-10-14 09:19:36.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:19:36 np0005486808 ovn_controller[152662]: 2025-10-14T09:19:36Z|01191|binding|INFO|Releasing lport 19347ace-27ce-4391-bacc-c18e3400875e from this chassis (sb_readonly=0)
Oct 14 05:19:36 np0005486808 nova_compute[259627]: 2025-10-14 09:19:36.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:19:36 np0005486808 nova_compute[259627]: 2025-10-14 09:19:36.278 2 DEBUG nova.compute.manager [req-5873f898-da59-46bd-b1c0-62307ef7766f req-e1c2c2c4-35ba-4e7e-8afa-2d23236fe26d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Received event network-changed-654f4bc5-29db-40e4-bc4e-bdd325e98e7a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:19:36 np0005486808 nova_compute[259627]: 2025-10-14 09:19:36.279 2 DEBUG nova.compute.manager [req-5873f898-da59-46bd-b1c0-62307ef7766f req-e1c2c2c4-35ba-4e7e-8afa-2d23236fe26d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Refreshing instance network info cache due to event network-changed-654f4bc5-29db-40e4-bc4e-bdd325e98e7a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:19:36 np0005486808 nova_compute[259627]: 2025-10-14 09:19:36.279 2 DEBUG oslo_concurrency.lockutils [req-5873f898-da59-46bd-b1c0-62307ef7766f req-e1c2c2c4-35ba-4e7e-8afa-2d23236fe26d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e83e1125-7b9a-4ca1-9040-40dbc3e0237b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:19:36 np0005486808 nova_compute[259627]: 2025-10-14 09:19:36.279 2 DEBUG oslo_concurrency.lockutils [req-5873f898-da59-46bd-b1c0-62307ef7766f req-e1c2c2c4-35ba-4e7e-8afa-2d23236fe26d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e83e1125-7b9a-4ca1-9040-40dbc3e0237b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:19:36 np0005486808 nova_compute[259627]: 2025-10-14 09:19:36.279 2 DEBUG nova.network.neutron [req-5873f898-da59-46bd-b1c0-62307ef7766f req-e1c2c2c4-35ba-4e7e-8afa-2d23236fe26d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Refreshing network info cache for port 654f4bc5-29db-40e4-bc4e-bdd325e98e7a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:19:36 np0005486808 nova_compute[259627]: 2025-10-14 09:19:36.626 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:19:36 np0005486808 nova_compute[259627]: 2025-10-14 09:19:36.626 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:19:36 np0005486808 nova_compute[259627]: 2025-10-14 09:19:36.626 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:19:36 np0005486808 sad_morse[372283]: {
Oct 14 05:19:36 np0005486808 sad_morse[372283]:    "0": [
Oct 14 05:19:36 np0005486808 sad_morse[372283]:        {
Oct 14 05:19:36 np0005486808 sad_morse[372283]:            "devices": [
Oct 14 05:19:36 np0005486808 sad_morse[372283]:                "/dev/loop3"
Oct 14 05:19:36 np0005486808 sad_morse[372283]:            ],
Oct 14 05:19:36 np0005486808 sad_morse[372283]:            "lv_name": "ceph_lv0",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:            "lv_size": "21470642176",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:            "name": "ceph_lv0",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:            "tags": {
Oct 14 05:19:36 np0005486808 sad_morse[372283]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:                "ceph.cluster_name": "ceph",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:                "ceph.crush_device_class": "",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:                "ceph.encrypted": "0",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:                "ceph.osd_id": "0",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:                "ceph.type": "block",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:                "ceph.vdo": "0"
Oct 14 05:19:36 np0005486808 sad_morse[372283]:            },
Oct 14 05:19:36 np0005486808 sad_morse[372283]:            "type": "block",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:            "vg_name": "ceph_vg0"
Oct 14 05:19:36 np0005486808 sad_morse[372283]:        }
Oct 14 05:19:36 np0005486808 sad_morse[372283]:    ],
Oct 14 05:19:36 np0005486808 sad_morse[372283]:    "1": [
Oct 14 05:19:36 np0005486808 sad_morse[372283]:        {
Oct 14 05:19:36 np0005486808 sad_morse[372283]:            "devices": [
Oct 14 05:19:36 np0005486808 sad_morse[372283]:                "/dev/loop4"
Oct 14 05:19:36 np0005486808 sad_morse[372283]:            ],
Oct 14 05:19:36 np0005486808 sad_morse[372283]:            "lv_name": "ceph_lv1",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:            "lv_size": "21470642176",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:            "name": "ceph_lv1",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:            "tags": {
Oct 14 05:19:36 np0005486808 sad_morse[372283]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:                "ceph.cluster_name": "ceph",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:                "ceph.crush_device_class": "",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:                "ceph.encrypted": "0",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:                "ceph.osd_id": "1",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:                "ceph.type": "block",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:                "ceph.vdo": "0"
Oct 14 05:19:36 np0005486808 sad_morse[372283]:            },
Oct 14 05:19:36 np0005486808 sad_morse[372283]:            "type": "block",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:            "vg_name": "ceph_vg1"
Oct 14 05:19:36 np0005486808 sad_morse[372283]:        }
Oct 14 05:19:36 np0005486808 sad_morse[372283]:    ],
Oct 14 05:19:36 np0005486808 sad_morse[372283]:    "2": [
Oct 14 05:19:36 np0005486808 sad_morse[372283]:        {
Oct 14 05:19:36 np0005486808 sad_morse[372283]:            "devices": [
Oct 14 05:19:36 np0005486808 sad_morse[372283]:                "/dev/loop5"
Oct 14 05:19:36 np0005486808 sad_morse[372283]:            ],
Oct 14 05:19:36 np0005486808 sad_morse[372283]:            "lv_name": "ceph_lv2",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:            "lv_size": "21470642176",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:            "name": "ceph_lv2",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:            "tags": {
Oct 14 05:19:36 np0005486808 sad_morse[372283]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:                "ceph.cluster_name": "ceph",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:                "ceph.crush_device_class": "",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:                "ceph.encrypted": "0",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:                "ceph.osd_id": "2",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:                "ceph.type": "block",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:                "ceph.vdo": "0"
Oct 14 05:19:36 np0005486808 sad_morse[372283]:            },
Oct 14 05:19:36 np0005486808 sad_morse[372283]:            "type": "block",
Oct 14 05:19:36 np0005486808 sad_morse[372283]:            "vg_name": "ceph_vg2"
Oct 14 05:19:36 np0005486808 sad_morse[372283]:        }
Oct 14 05:19:36 np0005486808 sad_morse[372283]:    ]
Oct 14 05:19:36 np0005486808 sad_morse[372283]: }
Oct 14 05:19:36 np0005486808 systemd[1]: libpod-a0f3155c04729375fcbbf103e2483ac0758fd4c11066b1812c94abe2e5b8ac79.scope: Deactivated successfully.
Oct 14 05:19:36 np0005486808 podman[372267]: 2025-10-14 09:19:36.875067881 +0000 UTC m=+0.918201772 container died a0f3155c04729375fcbbf103e2483ac0758fd4c11066b1812c94abe2e5b8ac79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 14 05:19:36 np0005486808 nova_compute[259627]: 2025-10-14 09:19:36.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:19:36 np0005486808 systemd[1]: var-lib-containers-storage-overlay-9e4973ae9aaecd9d49baae324fc43ec6a3f208dc5d98198f0f5e755eaece51c6-merged.mount: Deactivated successfully.
Oct 14 05:19:36 np0005486808 podman[372267]: 2025-10-14 09:19:36.953909494 +0000 UTC m=+0.997043415 container remove a0f3155c04729375fcbbf103e2483ac0758fd4c11066b1812c94abe2e5b8ac79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 14 05:19:36 np0005486808 systemd[1]: libpod-conmon-a0f3155c04729375fcbbf103e2483ac0758fd4c11066b1812c94abe2e5b8ac79.scope: Deactivated successfully.
Oct 14 05:19:37 np0005486808 podman[372448]: 2025-10-14 09:19:37.567022805 +0000 UTC m=+0.035269490 container create 30ce28566957a28376647b34adfe2d01c1c3b8452b0a08a19027411ac374ccfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_swirles, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:19:37 np0005486808 systemd[1]: Started libpod-conmon-30ce28566957a28376647b34adfe2d01c1c3b8452b0a08a19027411ac374ccfc.scope.
Oct 14 05:19:37 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:19:37 np0005486808 podman[372448]: 2025-10-14 09:19:37.552498277 +0000 UTC m=+0.020744982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:19:37 np0005486808 podman[372448]: 2025-10-14 09:19:37.658000708 +0000 UTC m=+0.126247403 container init 30ce28566957a28376647b34adfe2d01c1c3b8452b0a08a19027411ac374ccfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_swirles, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 14 05:19:37 np0005486808 podman[372448]: 2025-10-14 09:19:37.669503121 +0000 UTC m=+0.137749806 container start 30ce28566957a28376647b34adfe2d01c1c3b8452b0a08a19027411ac374ccfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_swirles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 14 05:19:37 np0005486808 podman[372448]: 2025-10-14 09:19:37.67352468 +0000 UTC m=+0.141771475 container attach 30ce28566957a28376647b34adfe2d01c1c3b8452b0a08a19027411ac374ccfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_swirles, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:19:37 np0005486808 optimistic_swirles[372465]: 167 167
Oct 14 05:19:37 np0005486808 systemd[1]: libpod-30ce28566957a28376647b34adfe2d01c1c3b8452b0a08a19027411ac374ccfc.scope: Deactivated successfully.
Oct 14 05:19:37 np0005486808 podman[372448]: 2025-10-14 09:19:37.679108828 +0000 UTC m=+0.147355533 container died 30ce28566957a28376647b34adfe2d01c1c3b8452b0a08a19027411ac374ccfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_swirles, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 14 05:19:37 np0005486808 systemd[1]: var-lib-containers-storage-overlay-b5ba36a051ec9516d939916880faa296e929e048fce59ae73d097e76e7a3b76e-merged.mount: Deactivated successfully.
Oct 14 05:19:37 np0005486808 podman[372448]: 2025-10-14 09:19:37.726555227 +0000 UTC m=+0.194801912 container remove 30ce28566957a28376647b34adfe2d01c1c3b8452b0a08a19027411ac374ccfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_swirles, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:19:37 np0005486808 systemd[1]: libpod-conmon-30ce28566957a28376647b34adfe2d01c1c3b8452b0a08a19027411ac374ccfc.scope: Deactivated successfully.
Oct 14 05:19:37 np0005486808 podman[372478]: 2025-10-14 09:19:37.815876029 +0000 UTC m=+0.074436226 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009)
Oct 14 05:19:37 np0005486808 podman[372491]: 2025-10-14 09:19:37.893512792 +0000 UTC m=+0.112527404 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 05:19:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:19:37 np0005486808 podman[372529]: 2025-10-14 09:19:37.925224764 +0000 UTC m=+0.051812568 container create e82c688b6f791f2781add349d07754e550f0d8f3800fff6732c20a73ab2e83fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_maxwell, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 14 05:19:37 np0005486808 systemd[1]: Started libpod-conmon-e82c688b6f791f2781add349d07754e550f0d8f3800fff6732c20a73ab2e83fa.scope.
Oct 14 05:19:38 np0005486808 podman[372529]: 2025-10-14 09:19:37.907134698 +0000 UTC m=+0.033722542 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:19:38 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:19:38 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cf07f99567c387ecdd8dfb96404e2924de973a2f9866d58585a6d66e16b172f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:19:38 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cf07f99567c387ecdd8dfb96404e2924de973a2f9866d58585a6d66e16b172f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:19:38 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cf07f99567c387ecdd8dfb96404e2924de973a2f9866d58585a6d66e16b172f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:19:38 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cf07f99567c387ecdd8dfb96404e2924de973a2f9866d58585a6d66e16b172f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:19:38 np0005486808 podman[372529]: 2025-10-14 09:19:38.024510541 +0000 UTC m=+0.151098345 container init e82c688b6f791f2781add349d07754e550f0d8f3800fff6732c20a73ab2e83fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:19:38 np0005486808 podman[372529]: 2025-10-14 09:19:38.037413679 +0000 UTC m=+0.164001513 container start e82c688b6f791f2781add349d07754e550f0d8f3800fff6732c20a73ab2e83fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_maxwell, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 14 05:19:38 np0005486808 podman[372529]: 2025-10-14 09:19:38.041206482 +0000 UTC m=+0.167794306 container attach e82c688b6f791f2781add349d07754e550f0d8f3800fff6732c20a73ab2e83fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_maxwell, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:19:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1977: 305 pgs: 305 active+clean; 88 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 05:19:38 np0005486808 nova_compute[259627]: 2025-10-14 09:19:38.395 2 DEBUG nova.network.neutron [req-5873f898-da59-46bd-b1c0-62307ef7766f req-e1c2c2c4-35ba-4e7e-8afa-2d23236fe26d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Updated VIF entry in instance network info cache for port 654f4bc5-29db-40e4-bc4e-bdd325e98e7a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 05:19:38 np0005486808 nova_compute[259627]: 2025-10-14 09:19:38.395 2 DEBUG nova.network.neutron [req-5873f898-da59-46bd-b1c0-62307ef7766f req-e1c2c2c4-35ba-4e7e-8afa-2d23236fe26d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Updating instance_info_cache with network_info: [{"id": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "address": "fa:16:3e:cf:57:80", "network": {"id": "414bade6-3739-49f9-bce9-e93105157bbe", "bridge": "br-int", "label": "tempest-network-smoke--754120486", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654f4bc5-29", "ovs_interfaceid": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 05:19:38 np0005486808 nova_compute[259627]: 2025-10-14 09:19:38.578 2 DEBUG oslo_concurrency.lockutils [req-5873f898-da59-46bd-b1c0-62307ef7766f req-e1c2c2c4-35ba-4e7e-8afa-2d23236fe26d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-e83e1125-7b9a-4ca1-9040-40dbc3e0237b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 05:19:38 np0005486808 nova_compute[259627]: 2025-10-14 09:19:38.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:19:38 np0005486808 nova_compute[259627]: 2025-10-14 09:19:38.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:19:38 np0005486808 nova_compute[259627]: 2025-10-14 09:19:38.977 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 05:19:39 np0005486808 strange_maxwell[372547]: {
Oct 14 05:19:39 np0005486808 strange_maxwell[372547]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:19:39 np0005486808 strange_maxwell[372547]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:19:39 np0005486808 strange_maxwell[372547]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:19:39 np0005486808 strange_maxwell[372547]:        "osd_id": 2,
Oct 14 05:19:39 np0005486808 strange_maxwell[372547]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:19:39 np0005486808 strange_maxwell[372547]:        "type": "bluestore"
Oct 14 05:19:39 np0005486808 strange_maxwell[372547]:    },
Oct 14 05:19:39 np0005486808 strange_maxwell[372547]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:19:39 np0005486808 strange_maxwell[372547]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:19:39 np0005486808 strange_maxwell[372547]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:19:39 np0005486808 strange_maxwell[372547]:        "osd_id": 1,
Oct 14 05:19:39 np0005486808 strange_maxwell[372547]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:19:39 np0005486808 strange_maxwell[372547]:        "type": "bluestore"
Oct 14 05:19:39 np0005486808 strange_maxwell[372547]:    },
Oct 14 05:19:39 np0005486808 strange_maxwell[372547]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:19:39 np0005486808 strange_maxwell[372547]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:19:39 np0005486808 strange_maxwell[372547]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:19:39 np0005486808 strange_maxwell[372547]:        "osd_id": 0,
Oct 14 05:19:39 np0005486808 strange_maxwell[372547]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:19:39 np0005486808 strange_maxwell[372547]:        "type": "bluestore"
Oct 14 05:19:39 np0005486808 strange_maxwell[372547]:    }
Oct 14 05:19:39 np0005486808 strange_maxwell[372547]: }
Oct 14 05:19:39 np0005486808 systemd[1]: libpod-e82c688b6f791f2781add349d07754e550f0d8f3800fff6732c20a73ab2e83fa.scope: Deactivated successfully.
Oct 14 05:19:39 np0005486808 systemd[1]: libpod-e82c688b6f791f2781add349d07754e550f0d8f3800fff6732c20a73ab2e83fa.scope: Consumed 1.009s CPU time.
Oct 14 05:19:39 np0005486808 podman[372529]: 2025-10-14 09:19:39.048394235 +0000 UTC m=+1.174982069 container died e82c688b6f791f2781add349d07754e550f0d8f3800fff6732c20a73ab2e83fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_maxwell, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 05:19:39 np0005486808 systemd[1]: var-lib-containers-storage-overlay-1cf07f99567c387ecdd8dfb96404e2924de973a2f9866d58585a6d66e16b172f-merged.mount: Deactivated successfully.
Oct 14 05:19:39 np0005486808 podman[372529]: 2025-10-14 09:19:39.106063096 +0000 UTC m=+1.232650880 container remove e82c688b6f791f2781add349d07754e550f0d8f3800fff6732c20a73ab2e83fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:19:39 np0005486808 systemd[1]: libpod-conmon-e82c688b6f791f2781add349d07754e550f0d8f3800fff6732c20a73ab2e83fa.scope: Deactivated successfully.
Oct 14 05:19:39 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:19:39 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:19:39 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:19:39 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:19:39 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 7a0bd132-1bae-4651-b7c8-058693ae50a2 does not exist
Oct 14 05:19:39 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 41dacb89-5a06-4c01-a644-87bf83ec71fe does not exist
Oct 14 05:19:39 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:19:39 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:19:39 np0005486808 nova_compute[259627]: 2025-10-14 09:19:39.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:19:39 np0005486808 nova_compute[259627]: 2025-10-14 09:19:39.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 05:19:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1978: 305 pgs: 305 active+clean; 88 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 05:19:40 np0005486808 nova_compute[259627]: 2025-10-14 09:19:40.179 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 05:19:40 np0005486808 ovn_controller[152662]: 2025-10-14T09:19:40Z|01192|binding|INFO|Releasing lport 19347ace-27ce-4391-bacc-c18e3400875e from this chassis (sb_readonly=0)
Oct 14 05:19:40 np0005486808 nova_compute[259627]: 2025-10-14 09:19:40.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:19:41 np0005486808 nova_compute[259627]: 2025-10-14 09:19:41.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:19:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1979: 305 pgs: 305 active+clean; 88 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 64 op/s
Oct 14 05:19:42 np0005486808 ovn_controller[152662]: 2025-10-14T09:19:42Z|00118|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cf:57:80 10.100.0.11
Oct 14 05:19:42 np0005486808 ovn_controller[152662]: 2025-10-14T09:19:42Z|00119|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cf:57:80 10.100.0.11
Oct 14 05:19:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:19:42 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Oct 14 05:19:42 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:19:42.918950) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 05:19:42 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Oct 14 05:19:42 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433582919049, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 1616, "num_deletes": 253, "total_data_size": 2467508, "memory_usage": 2514056, "flush_reason": "Manual Compaction"}
Oct 14 05:19:42 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Oct 14 05:19:42 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433582939660, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 2409933, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40155, "largest_seqno": 41770, "table_properties": {"data_size": 2402445, "index_size": 4431, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 14804, "raw_average_key_size": 18, "raw_value_size": 2387343, "raw_average_value_size": 3048, "num_data_blocks": 197, "num_entries": 783, "num_filter_entries": 783, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760433434, "oldest_key_time": 1760433434, "file_creation_time": 1760433582, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:19:42 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 20789 microseconds, and 11264 cpu microseconds.
Oct 14 05:19:42 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:19:42 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:19:42.939735) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 2409933 bytes OK
Oct 14 05:19:42 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:19:42.939764) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Oct 14 05:19:42 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:19:42.944044) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Oct 14 05:19:42 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:19:42.944103) EVENT_LOG_v1 {"time_micros": 1760433582944089, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 05:19:42 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:19:42.944135) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 05:19:42 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 2460431, prev total WAL file size 2460431, number of live WAL files 2.
Oct 14 05:19:42 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:19:42 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:19:42.945397) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Oct 14 05:19:42 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 05:19:42 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(2353KB)], [89(9168KB)]
Oct 14 05:19:42 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433582945553, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 11798804, "oldest_snapshot_seqno": -1}
Oct 14 05:19:42 np0005486808 nova_compute[259627]: 2025-10-14 09:19:42.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:19:43 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 6720 keys, 11077497 bytes, temperature: kUnknown
Oct 14 05:19:43 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433583023314, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 11077497, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11029602, "index_size": 29991, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16837, "raw_key_size": 171508, "raw_average_key_size": 25, "raw_value_size": 10906266, "raw_average_value_size": 1622, "num_data_blocks": 1194, "num_entries": 6720, "num_filter_entries": 6720, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760433582, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:19:43 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:19:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:19:43.023529) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 11077497 bytes
Oct 14 05:19:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:19:43.025005) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 151.6 rd, 142.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 9.0 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(9.5) write-amplify(4.6) OK, records in: 7241, records dropped: 521 output_compression: NoCompression
Oct 14 05:19:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:19:43.025040) EVENT_LOG_v1 {"time_micros": 1760433583025031, "job": 52, "event": "compaction_finished", "compaction_time_micros": 77808, "compaction_time_cpu_micros": 50538, "output_level": 6, "num_output_files": 1, "total_output_size": 11077497, "num_input_records": 7241, "num_output_records": 6720, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 05:19:43 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:19:43 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433583025569, "job": 52, "event": "table_file_deletion", "file_number": 91}
Oct 14 05:19:43 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:19:43 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433583028985, "job": 52, "event": "table_file_deletion", "file_number": 89}
Oct 14 05:19:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:19:42.945221) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:19:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:19:43.029125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:19:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:19:43.029134) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:19:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:19:43.029137) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:19:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:19:43.029140) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:19:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:19:43.029143) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:19:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:19:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:19:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:19:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:19:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Oct 14 05:19:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:19:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:19:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:19:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:19:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:19:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 05:19:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:19:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:19:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:19:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:19:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:19:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:19:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:19:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:19:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:19:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:19:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:19:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:19:43 np0005486808 nova_compute[259627]: 2025-10-14 09:19:43.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:19:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1980: 305 pgs: 305 active+clean; 88 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Oct 14 05:19:44 np0005486808 ovn_controller[152662]: 2025-10-14T09:19:44Z|01193|binding|INFO|Releasing lport 19347ace-27ce-4391-bacc-c18e3400875e from this chassis (sb_readonly=0)
Oct 14 05:19:44 np0005486808 nova_compute[259627]: 2025-10-14 09:19:44.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:19:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1981: 305 pgs: 305 active+clean; 121 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Oct 14 05:19:46 np0005486808 nova_compute[259627]: 2025-10-14 09:19:46.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:19:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:19:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1982: 305 pgs: 305 active+clean; 121 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 05:19:48 np0005486808 nova_compute[259627]: 2025-10-14 09:19:48.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:19:48 np0005486808 nova_compute[259627]: 2025-10-14 09:19:48.992 2 INFO nova.compute.manager [None req-95373ae1-a25b-4741-8a59-d6b25cd93b68 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Get console output
Oct 14 05:19:49 np0005486808 nova_compute[259627]: 2025-10-14 09:19:49.000 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 14 05:19:49 np0005486808 nova_compute[259627]: 2025-10-14 09:19:49.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:19:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1983: 305 pgs: 305 active+clean; 121 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 05:19:51 np0005486808 nova_compute[259627]: 2025-10-14 09:19:51.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:19:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1984: 305 pgs: 305 active+clean; 121 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 05:19:52 np0005486808 nova_compute[259627]: 2025-10-14 09:19:52.707 2 DEBUG oslo_concurrency.lockutils [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "interface-e83e1125-7b9a-4ca1-9040-40dbc3e0237b-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:19:52 np0005486808 nova_compute[259627]: 2025-10-14 09:19:52.708 2 DEBUG oslo_concurrency.lockutils [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "interface-e83e1125-7b9a-4ca1-9040-40dbc3e0237b-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:19:52 np0005486808 nova_compute[259627]: 2025-10-14 09:19:52.708 2 DEBUG nova.objects.instance [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'flavor' on Instance uuid e83e1125-7b9a-4ca1-9040-40dbc3e0237b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:19:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:19:53 np0005486808 nova_compute[259627]: 2025-10-14 09:19:53.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:19:53 np0005486808 nova_compute[259627]: 2025-10-14 09:19:53.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:19:54 np0005486808 nova_compute[259627]: 2025-10-14 09:19:54.058 2 DEBUG nova.objects.instance [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'pci_requests' on Instance uuid e83e1125-7b9a-4ca1-9040-40dbc3e0237b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:19:54 np0005486808 nova_compute[259627]: 2025-10-14 09:19:54.078 2 DEBUG nova.network.neutron [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 05:19:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1985: 305 pgs: 305 active+clean; 121 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 05:19:54 np0005486808 nova_compute[259627]: 2025-10-14 09:19:54.540 2 DEBUG nova.policy [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '10926c27278e45e7b995f2e53b9d16f9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4562699bdb1548f1bb36819107535620', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:19:55 np0005486808 nova_compute[259627]: 2025-10-14 09:19:55.486 2 DEBUG nova.network.neutron [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Successfully created port: 8b04a9d3-64de-44c8-bcff-b2443ae3e9ef _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:19:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1986: 305 pgs: 305 active+clean; 121 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 05:19:56 np0005486808 nova_compute[259627]: 2025-10-14 09:19:56.489 2 DEBUG nova.network.neutron [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Successfully updated port: 8b04a9d3-64de-44c8-bcff-b2443ae3e9ef _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:19:56 np0005486808 nova_compute[259627]: 2025-10-14 09:19:56.509 2 DEBUG oslo_concurrency.lockutils [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "refresh_cache-e83e1125-7b9a-4ca1-9040-40dbc3e0237b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:19:56 np0005486808 nova_compute[259627]: 2025-10-14 09:19:56.510 2 DEBUG oslo_concurrency.lockutils [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquired lock "refresh_cache-e83e1125-7b9a-4ca1-9040-40dbc3e0237b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:19:56 np0005486808 nova_compute[259627]: 2025-10-14 09:19:56.510 2 DEBUG nova.network.neutron [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:19:56 np0005486808 nova_compute[259627]: 2025-10-14 09:19:56.674 2 DEBUG nova.compute.manager [req-eb0d4925-266c-4b08-8b84-04278396b500 req-3b6255c2-1905-40dd-b55c-806a2fe5021c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Received event network-changed-8b04a9d3-64de-44c8-bcff-b2443ae3e9ef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:19:56 np0005486808 nova_compute[259627]: 2025-10-14 09:19:56.674 2 DEBUG nova.compute.manager [req-eb0d4925-266c-4b08-8b84-04278396b500 req-3b6255c2-1905-40dd-b55c-806a2fe5021c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Refreshing instance network info cache due to event network-changed-8b04a9d3-64de-44c8-bcff-b2443ae3e9ef. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:19:56 np0005486808 nova_compute[259627]: 2025-10-14 09:19:56.675 2 DEBUG oslo_concurrency.lockutils [req-eb0d4925-266c-4b08-8b84-04278396b500 req-3b6255c2-1905-40dd-b55c-806a2fe5021c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e83e1125-7b9a-4ca1-9040-40dbc3e0237b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:19:56 np0005486808 podman[372650]: 2025-10-14 09:19:56.707148691 +0000 UTC m=+0.115907088 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 05:19:56 np0005486808 podman[372651]: 2025-10-14 09:19:56.711559939 +0000 UTC m=+0.111872478 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 05:19:56 np0005486808 nova_compute[259627]: 2025-10-14 09:19:56.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:19:57 np0005486808 nova_compute[259627]: 2025-10-14 09:19:57.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:19:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:19:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1987: 305 pgs: 305 active+clean; 121 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Oct 14 05:19:58 np0005486808 nova_compute[259627]: 2025-10-14 09:19:58.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:19:59 np0005486808 nova_compute[259627]: 2025-10-14 09:19:59.906 2 DEBUG nova.network.neutron [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Updating instance_info_cache with network_info: [{"id": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "address": "fa:16:3e:cf:57:80", "network": {"id": "414bade6-3739-49f9-bce9-e93105157bbe", "bridge": "br-int", "label": "tempest-network-smoke--754120486", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654f4bc5-29", "ovs_interfaceid": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8b04a9d3-64de-44c8-bcff-b2443ae3e9ef", "address": "fa:16:3e:eb:28:d8", "network": {"id": "9df519b2-744d-4554-ab73-bff1872a6efb", "bridge": "br-int", "label": "tempest-network-smoke--418730794", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b04a9d3-64", "ovs_interfaceid": "8b04a9d3-64de-44c8-bcff-b2443ae3e9ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:19:59 np0005486808 nova_compute[259627]: 2025-10-14 09:19:59.929 2 DEBUG oslo_concurrency.lockutils [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Releasing lock "refresh_cache-e83e1125-7b9a-4ca1-9040-40dbc3e0237b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:19:59 np0005486808 nova_compute[259627]: 2025-10-14 09:19:59.931 2 DEBUG oslo_concurrency.lockutils [req-eb0d4925-266c-4b08-8b84-04278396b500 req-3b6255c2-1905-40dd-b55c-806a2fe5021c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e83e1125-7b9a-4ca1-9040-40dbc3e0237b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:19:59 np0005486808 nova_compute[259627]: 2025-10-14 09:19:59.931 2 DEBUG nova.network.neutron [req-eb0d4925-266c-4b08-8b84-04278396b500 req-3b6255c2-1905-40dd-b55c-806a2fe5021c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Refreshing network info cache for port 8b04a9d3-64de-44c8-bcff-b2443ae3e9ef _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:19:59 np0005486808 nova_compute[259627]: 2025-10-14 09:19:59.936 2 DEBUG nova.virt.libvirt.vif [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:19:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-424606386',display_name='tempest-TestNetworkBasicOps-server-424606386',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-424606386',id=113,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ09dxd7Bak+6//QP8udxIh6uhpPchcDdl5+GxU18yI74s0WuknXeiKCI2lJf73VzQCZgNgqmgyCCsXfAOE13LYP90WL3qgd3ZF20sa84Uh+OXGCZ9Hmict7wvQuSqAzQg==',key_name='tempest-TestNetworkBasicOps-918756716',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:19:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-watt440w',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:19:31Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=e83e1125-7b9a-4ca1-9040-40dbc3e0237b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8b04a9d3-64de-44c8-bcff-b2443ae3e9ef", "address": "fa:16:3e:eb:28:d8", "network": {"id": "9df519b2-744d-4554-ab73-bff1872a6efb", "bridge": "br-int", "label": "tempest-network-smoke--418730794", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b04a9d3-64", "ovs_interfaceid": "8b04a9d3-64de-44c8-bcff-b2443ae3e9ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:19:59 np0005486808 nova_compute[259627]: 2025-10-14 09:19:59.937 2 DEBUG nova.network.os_vif_util [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "8b04a9d3-64de-44c8-bcff-b2443ae3e9ef", "address": "fa:16:3e:eb:28:d8", "network": {"id": "9df519b2-744d-4554-ab73-bff1872a6efb", "bridge": "br-int", "label": "tempest-network-smoke--418730794", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b04a9d3-64", "ovs_interfaceid": "8b04a9d3-64de-44c8-bcff-b2443ae3e9ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:19:59 np0005486808 nova_compute[259627]: 2025-10-14 09:19:59.938 2 DEBUG nova.network.os_vif_util [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:28:d8,bridge_name='br-int',has_traffic_filtering=True,id=8b04a9d3-64de-44c8-bcff-b2443ae3e9ef,network=Network(9df519b2-744d-4554-ab73-bff1872a6efb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b04a9d3-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:19:59 np0005486808 nova_compute[259627]: 2025-10-14 09:19:59.939 2 DEBUG os_vif [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:28:d8,bridge_name='br-int',has_traffic_filtering=True,id=8b04a9d3-64de-44c8-bcff-b2443ae3e9ef,network=Network(9df519b2-744d-4554-ab73-bff1872a6efb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b04a9d3-64') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:19:59 np0005486808 nova_compute[259627]: 2025-10-14 09:19:59.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:19:59 np0005486808 nova_compute[259627]: 2025-10-14 09:19:59.941 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:19:59 np0005486808 nova_compute[259627]: 2025-10-14 09:19:59.942 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:19:59 np0005486808 nova_compute[259627]: 2025-10-14 09:19:59.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:19:59 np0005486808 nova_compute[259627]: 2025-10-14 09:19:59.947 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8b04a9d3-64, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:19:59 np0005486808 nova_compute[259627]: 2025-10-14 09:19:59.948 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8b04a9d3-64, col_values=(('external_ids', {'iface-id': '8b04a9d3-64de-44c8-bcff-b2443ae3e9ef', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:eb:28:d8', 'vm-uuid': 'e83e1125-7b9a-4ca1-9040-40dbc3e0237b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:19:59 np0005486808 NetworkManager[44885]: <info>  [1760433599.9524] manager: (tap8b04a9d3-64): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/484)
Oct 14 05:19:59 np0005486808 nova_compute[259627]: 2025-10-14 09:19:59.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:19:59 np0005486808 nova_compute[259627]: 2025-10-14 09:19:59.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:19:59 np0005486808 nova_compute[259627]: 2025-10-14 09:19:59.964 2 INFO os_vif [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:28:d8,bridge_name='br-int',has_traffic_filtering=True,id=8b04a9d3-64de-44c8-bcff-b2443ae3e9ef,network=Network(9df519b2-744d-4554-ab73-bff1872a6efb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b04a9d3-64')#033[00m
Oct 14 05:19:59 np0005486808 nova_compute[259627]: 2025-10-14 09:19:59.965 2 DEBUG nova.virt.libvirt.vif [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:19:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-424606386',display_name='tempest-TestNetworkBasicOps-server-424606386',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-424606386',id=113,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ09dxd7Bak+6//QP8udxIh6uhpPchcDdl5+GxU18yI74s0WuknXeiKCI2lJf73VzQCZgNgqmgyCCsXfAOE13LYP90WL3qgd3ZF20sa84Uh+OXGCZ9Hmict7wvQuSqAzQg==',key_name='tempest-TestNetworkBasicOps-918756716',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:19:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-watt440w',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:19:31Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=e83e1125-7b9a-4ca1-9040-40dbc3e0237b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8b04a9d3-64de-44c8-bcff-b2443ae3e9ef", "address": "fa:16:3e:eb:28:d8", "network": {"id": "9df519b2-744d-4554-ab73-bff1872a6efb", "bridge": "br-int", "label": "tempest-network-smoke--418730794", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b04a9d3-64", "ovs_interfaceid": "8b04a9d3-64de-44c8-bcff-b2443ae3e9ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:19:59 np0005486808 nova_compute[259627]: 2025-10-14 09:19:59.966 2 DEBUG nova.network.os_vif_util [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "8b04a9d3-64de-44c8-bcff-b2443ae3e9ef", "address": "fa:16:3e:eb:28:d8", "network": {"id": "9df519b2-744d-4554-ab73-bff1872a6efb", "bridge": "br-int", "label": "tempest-network-smoke--418730794", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b04a9d3-64", "ovs_interfaceid": "8b04a9d3-64de-44c8-bcff-b2443ae3e9ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:19:59 np0005486808 nova_compute[259627]: 2025-10-14 09:19:59.968 2 DEBUG nova.network.os_vif_util [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:28:d8,bridge_name='br-int',has_traffic_filtering=True,id=8b04a9d3-64de-44c8-bcff-b2443ae3e9ef,network=Network(9df519b2-744d-4554-ab73-bff1872a6efb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b04a9d3-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:19:59 np0005486808 nova_compute[259627]: 2025-10-14 09:19:59.972 2 DEBUG nova.virt.libvirt.guest [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] attach device xml: <interface type="ethernet">
Oct 14 05:19:59 np0005486808 nova_compute[259627]:  <mac address="fa:16:3e:eb:28:d8"/>
Oct 14 05:19:59 np0005486808 nova_compute[259627]:  <model type="virtio"/>
Oct 14 05:19:59 np0005486808 nova_compute[259627]:  <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:19:59 np0005486808 nova_compute[259627]:  <mtu size="1442"/>
Oct 14 05:19:59 np0005486808 nova_compute[259627]:  <target dev="tap8b04a9d3-64"/>
Oct 14 05:19:59 np0005486808 nova_compute[259627]: </interface>
Oct 14 05:19:59 np0005486808 nova_compute[259627]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct 14 05:19:59 np0005486808 NetworkManager[44885]: <info>  [1760433599.9919] manager: (tap8b04a9d3-64): new Tun device (/org/freedesktop/NetworkManager/Devices/485)
Oct 14 05:19:59 np0005486808 kernel: tap8b04a9d3-64: entered promiscuous mode
Oct 14 05:20:00 np0005486808 ovn_controller[152662]: 2025-10-14T09:20:00Z|01194|binding|INFO|Claiming lport 8b04a9d3-64de-44c8-bcff-b2443ae3e9ef for this chassis.
Oct 14 05:20:00 np0005486808 ovn_controller[152662]: 2025-10-14T09:20:00Z|01195|binding|INFO|8b04a9d3-64de-44c8-bcff-b2443ae3e9ef: Claiming fa:16:3e:eb:28:d8 10.100.0.28
Oct 14 05:20:00 np0005486808 nova_compute[259627]: 2025-10-14 09:20:00.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.013 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:28:d8 10.100.0.28'], port_security=['fa:16:3e:eb:28:d8 10.100.0.28'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.28/28', 'neutron:device_id': 'e83e1125-7b9a-4ca1-9040-40dbc3e0237b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9df519b2-744d-4554-ab73-bff1872a6efb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '2', 'neutron:security_group_ids': '72094e2b-bb7f-4f40-9ee7-497daf8e97ff', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a98a09c3-f853-4e04-9105-38927284cdec, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=8b04a9d3-64de-44c8-bcff-b2443ae3e9ef) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.015 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 8b04a9d3-64de-44c8-bcff-b2443ae3e9ef in datapath 9df519b2-744d-4554-ab73-bff1872a6efb bound to our chassis#033[00m
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.017 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9df519b2-744d-4554-ab73-bff1872a6efb#033[00m
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.037 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ef631c68-8674-4613-97c0-43678071ce38]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.038 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9df519b2-71 in ovnmeta-9df519b2-744d-4554-ab73-bff1872a6efb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.041 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9df519b2-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.041 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[641c23a0-bf9d-4263-8526-5c3b13ceef98]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.044 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[90b34977-3d3b-4f9f-b484-ebeb18343a8c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:20:00 np0005486808 systemd-udevd[372699]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.061 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[a11e5fdf-5875-4d72-88d6-78ab3fd0dcc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:20:00 np0005486808 NetworkManager[44885]: <info>  [1760433600.0753] device (tap8b04a9d3-64): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:20:00 np0005486808 NetworkManager[44885]: <info>  [1760433600.0770] device (tap8b04a9d3-64): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.086 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2beff4fe-029e-47b9-a896-e873a2cee0bf]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:20:00 np0005486808 nova_compute[259627]: 2025-10-14 09:20:00.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:20:00 np0005486808 nova_compute[259627]: 2025-10-14 09:20:00.101 2 DEBUG nova.virt.libvirt.driver [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:20:00 np0005486808 nova_compute[259627]: 2025-10-14 09:20:00.102 2 DEBUG nova.virt.libvirt.driver [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:20:00 np0005486808 nova_compute[259627]: 2025-10-14 09:20:00.102 2 DEBUG nova.virt.libvirt.driver [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No VIF found with MAC fa:16:3e:cf:57:80, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:20:00 np0005486808 nova_compute[259627]: 2025-10-14 09:20:00.103 2 DEBUG nova.virt.libvirt.driver [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No VIF found with MAC fa:16:3e:eb:28:d8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:20:00 np0005486808 ovn_controller[152662]: 2025-10-14T09:20:00Z|01196|binding|INFO|Setting lport 8b04a9d3-64de-44c8-bcff-b2443ae3e9ef ovn-installed in OVS
Oct 14 05:20:00 np0005486808 ovn_controller[152662]: 2025-10-14T09:20:00Z|01197|binding|INFO|Setting lport 8b04a9d3-64de-44c8-bcff-b2443ae3e9ef up in Southbound
Oct 14 05:20:00 np0005486808 nova_compute[259627]: 2025-10-14 09:20:00.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:20:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1988: 305 pgs: 305 active+clean; 121 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.137 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[9a8762e1-d717-45f1-ac25-b6149a759b60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:20:00 np0005486808 nova_compute[259627]: 2025-10-14 09:20:00.142 2 DEBUG nova.virt.libvirt.guest [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:20:00 np0005486808 nova_compute[259627]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:20:00 np0005486808 nova_compute[259627]:  <nova:name>tempest-TestNetworkBasicOps-server-424606386</nova:name>
Oct 14 05:20:00 np0005486808 nova_compute[259627]:  <nova:creationTime>2025-10-14 09:20:00</nova:creationTime>
Oct 14 05:20:00 np0005486808 nova_compute[259627]:  <nova:flavor name="m1.nano">
Oct 14 05:20:00 np0005486808 nova_compute[259627]:    <nova:memory>128</nova:memory>
Oct 14 05:20:00 np0005486808 nova_compute[259627]:    <nova:disk>1</nova:disk>
Oct 14 05:20:00 np0005486808 nova_compute[259627]:    <nova:swap>0</nova:swap>
Oct 14 05:20:00 np0005486808 nova_compute[259627]:    <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:20:00 np0005486808 nova_compute[259627]:    <nova:vcpus>1</nova:vcpus>
Oct 14 05:20:00 np0005486808 nova_compute[259627]:  </nova:flavor>
Oct 14 05:20:00 np0005486808 nova_compute[259627]:  <nova:owner>
Oct 14 05:20:00 np0005486808 nova_compute[259627]:    <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 05:20:00 np0005486808 nova_compute[259627]:    <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 05:20:00 np0005486808 nova_compute[259627]:  </nova:owner>
Oct 14 05:20:00 np0005486808 nova_compute[259627]:  <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:20:00 np0005486808 nova_compute[259627]:  <nova:ports>
Oct 14 05:20:00 np0005486808 nova_compute[259627]:    <nova:port uuid="654f4bc5-29db-40e4-bc4e-bdd325e98e7a">
Oct 14 05:20:00 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 14 05:20:00 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:20:00 np0005486808 nova_compute[259627]:    <nova:port uuid="8b04a9d3-64de-44c8-bcff-b2443ae3e9ef">
Oct 14 05:20:00 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.28" ipVersion="4"/>
Oct 14 05:20:00 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:20:00 np0005486808 nova_compute[259627]:  </nova:ports>
Oct 14 05:20:00 np0005486808 nova_compute[259627]: </nova:instance>
Oct 14 05:20:00 np0005486808 nova_compute[259627]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct 14 05:20:00 np0005486808 systemd-udevd[372702]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:20:00 np0005486808 NetworkManager[44885]: <info>  [1760433600.1453] manager: (tap9df519b2-70): new Veth device (/org/freedesktop/NetworkManager/Devices/486)
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.146 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[53c8d628-5353-49db-a4ee-8a58d3913050]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:20:00 np0005486808 nova_compute[259627]: 2025-10-14 09:20:00.182 2 DEBUG oslo_concurrency.lockutils [None req-1f27cc81-e258-4570-ad3b-264168ff2a1f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "interface-e83e1125-7b9a-4ca1-9040-40dbc3e0237b-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.474s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.197 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[aff9d1f0-2a8a-43f3-899f-3123ae3d6d30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.201 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c60688aa-2962-40bb-ba27-8e138be71c5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:20:00 np0005486808 NetworkManager[44885]: <info>  [1760433600.2286] device (tap9df519b2-70): carrier: link connected
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.236 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3d990b59-3dc7-472e-8658-2e512ac185e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.258 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[244371a6-dc6b-4b94-b97e-d497bf3069ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9df519b2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ba:5f:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 342], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737898, 'reachable_time': 38821, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372724, 'error': None, 'target': 'ovnmeta-9df519b2-744d-4554-ab73-bff1872a6efb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.278 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3d62a062-c678-4099-bf79-41e5774f8915]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feba:5fd0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 737898, 'tstamp': 737898}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372725, 'error': None, 'target': 'ovnmeta-9df519b2-744d-4554-ab73-bff1872a6efb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.299 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b38e581a-a1e2-4e4c-9801-4f1cf908e1ce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9df519b2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ba:5f:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 342], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737898, 'reachable_time': 38821, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 372726, 'error': None, 'target': 'ovnmeta-9df519b2-744d-4554-ab73-bff1872a6efb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.341 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[365a7af7-c78c-49a5-a62a-a71d9610277b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.426 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f4093359-dc90-4692-aad5-da857db59d8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.428 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9df519b2-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.429 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.429 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9df519b2-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:20:00 np0005486808 nova_compute[259627]: 2025-10-14 09:20:00.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:20:00 np0005486808 kernel: tap9df519b2-70: entered promiscuous mode
Oct 14 05:20:00 np0005486808 NetworkManager[44885]: <info>  [1760433600.4335] manager: (tap9df519b2-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/487)
Oct 14 05:20:00 np0005486808 nova_compute[259627]: 2025-10-14 09:20:00.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.438 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9df519b2-70, col_values=(('external_ids', {'iface-id': 'bfea9a1c-62f2-4502-94c0-26cc3839339f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:20:00 np0005486808 nova_compute[259627]: 2025-10-14 09:20:00.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:20:00 np0005486808 ovn_controller[152662]: 2025-10-14T09:20:00Z|01198|binding|INFO|Releasing lport bfea9a1c-62f2-4502-94c0-26cc3839339f from this chassis (sb_readonly=0)
Oct 14 05:20:00 np0005486808 nova_compute[259627]: 2025-10-14 09:20:00.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.444 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9df519b2-744d-4554-ab73-bff1872a6efb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9df519b2-744d-4554-ab73-bff1872a6efb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.445 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[70140fe3-c3e2-4395-912b-51847b59c8a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.446 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-9df519b2-744d-4554-ab73-bff1872a6efb
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/9df519b2-744d-4554-ab73-bff1872a6efb.pid.haproxy
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 9df519b2-744d-4554-ab73-bff1872a6efb
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:20:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:20:00.447 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9df519b2-744d-4554-ab73-bff1872a6efb', 'env', 'PROCESS_TAG=haproxy-9df519b2-744d-4554-ab73-bff1872a6efb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9df519b2-744d-4554-ab73-bff1872a6efb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:20:00 np0005486808 nova_compute[259627]: 2025-10-14 09:20:00.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:20:00 np0005486808 podman[372756]: 2025-10-14 09:20:00.863427959 +0000 UTC m=+0.066772217 container create 1a460edd2e21d655dc8f118796281cfe6eb8622fca34e89abbc4afd51f1b6808 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9df519b2-744d-4554-ab73-bff1872a6efb, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 05:20:00 np0005486808 podman[372756]: 2025-10-14 09:20:00.823764351 +0000 UTC m=+0.027108669 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:20:00 np0005486808 systemd[1]: Started libpod-conmon-1a460edd2e21d655dc8f118796281cfe6eb8622fca34e89abbc4afd51f1b6808.scope.
Oct 14 05:20:00 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:20:00 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ace653ee5c7b9417ef3ee5867719a97fbc60cb7a1e7d9da2e335f4a61a3aa9c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:20:00 np0005486808 podman[372756]: 2025-10-14 09:20:00.981107019 +0000 UTC m=+0.184451317 container init 1a460edd2e21d655dc8f118796281cfe6eb8622fca34e89abbc4afd51f1b6808 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9df519b2-744d-4554-ab73-bff1872a6efb, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:20:00 np0005486808 podman[372756]: 2025-10-14 09:20:00.991793472 +0000 UTC m=+0.195137720 container start 1a460edd2e21d655dc8f118796281cfe6eb8622fca34e89abbc4afd51f1b6808 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9df519b2-744d-4554-ab73-bff1872a6efb, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 14 05:20:01 np0005486808 neutron-haproxy-ovnmeta-9df519b2-744d-4554-ab73-bff1872a6efb[372771]: [NOTICE]   (372775) : New worker (372777) forked
Oct 14 05:20:01 np0005486808 neutron-haproxy-ovnmeta-9df519b2-744d-4554-ab73-bff1872a6efb[372771]: [NOTICE]   (372775) : Loading success.
Oct 14 05:20:01 np0005486808 nova_compute[259627]: 2025-10-14 09:20:01.794 2 DEBUG nova.compute.manager [req-40251ef3-6a4b-4348-b27d-59faafea0de6 req-e9d2edd6-65d0-412e-8a4e-cc08c33013ef 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Received event network-vif-plugged-8b04a9d3-64de-44c8-bcff-b2443ae3e9ef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:20:01 np0005486808 nova_compute[259627]: 2025-10-14 09:20:01.794 2 DEBUG oslo_concurrency.lockutils [req-40251ef3-6a4b-4348-b27d-59faafea0de6 req-e9d2edd6-65d0-412e-8a4e-cc08c33013ef 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:20:01 np0005486808 nova_compute[259627]: 2025-10-14 09:20:01.795 2 DEBUG oslo_concurrency.lockutils [req-40251ef3-6a4b-4348-b27d-59faafea0de6 req-e9d2edd6-65d0-412e-8a4e-cc08c33013ef 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:20:01 np0005486808 nova_compute[259627]: 2025-10-14 09:20:01.795 2 DEBUG oslo_concurrency.lockutils [req-40251ef3-6a4b-4348-b27d-59faafea0de6 req-e9d2edd6-65d0-412e-8a4e-cc08c33013ef 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e83e1125-7b9a-4ca1-9040-40dbc3e0237b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:20:01 np0005486808 nova_compute[259627]: 2025-10-14 09:20:01.796 2 DEBUG nova.compute.manager [req-40251ef3-6a4b-4348-b27d-59faafea0de6 req-e9d2edd6-65d0-412e-8a4e-cc08c33013ef 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] No waiting events found dispatching network-vif-plugged-8b04a9d3-64de-44c8-bcff-b2443ae3e9ef pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:20:01 np0005486808 nova_compute[259627]: 2025-10-14 09:20:01.796 2 WARNING nova.compute.manager [req-40251ef3-6a4b-4348-b27d-59faafea0de6 req-e9d2edd6-65d0-412e-8a4e-cc08c33013ef 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Received unexpected event network-vif-plugged-8b04a9d3-64de-44c8-bcff-b2443ae3e9ef for instance with vm_state active and task_state None.#033[00m
Oct 14 05:20:01 np0005486808 ovn_controller[152662]: 2025-10-14T09:20:01Z|00120|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:eb:28:d8 10.100.0.28
Oct 14 05:20:01 np0005486808 ovn_controller[152662]: 2025-10-14T09:20:01Z|00121|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:eb:28:d8 10.100.0.28
Oct 14 05:20:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v1989: 305 pgs: 305 active+clean; 121 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s wr, 0 op/s
Oct 14 05:20:02 np0005486808 nova_compute[259627]: 2025-10-14 09:20:02.154 2 DEBUG oslo_concurrency.lockutils [None req-7649cfc5-b022-489c-ba31-c3690c0ab86f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "interface-e83e1125-7b9a-4ca1-9040-40dbc3e0237b-8b04a9d3-64de-44c8-bcff-b2443ae3e9ef" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:20:02 np0005486808 nova_compute[259627]: 2025-10-14 09:20:02.154 2 DEBUG oslo_concurrency.lockutils [None req-7649cfc5-b022-489c-ba31-c3690c0ab86f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "interface-e83e1125-7b9a-4ca1-9040-40dbc3e0237b-8b04a9d3-64de-44c8-bcff-b2443ae3e9ef" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:20:02 np0005486808 nova_compute[259627]: 2025-10-14 09:20:02.173 2 DEBUG nova.objects.instance [None req-7649cfc5-b022-489c-ba31-c3690c0ab86f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'flavor' on Instance uuid e83e1125-7b9a-4ca1-9040-40dbc3e0237b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:20:02 np0005486808 nova_compute[259627]: 2025-10-14 09:20:02.201 2 DEBUG nova.virt.libvirt.vif [None req-7649cfc5-b022-489c-ba31-c3690c0ab86f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:19:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-424606386',display_name='tempest-TestNetworkBasicOps-server-424606386',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-424606386',id=113,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ09dxd7Bak+6//QP8udxIh6uhpPchcDdl5+GxU18yI74s0WuknXeiKCI2lJf73VzQCZgNgqmgyCCsXfAOE13LYP90WL3qgd3ZF20sa84Uh+OXGCZ9Hmict7wvQuSqAzQg==',key_name='tempest-TestNetworkBasicOps-918756716',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:19:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-watt440w',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:19:31Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=e83e1125-7b9a-4ca1-9040-40dbc3e0237b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8b04a9d3-64de-44c8-bcff-b2443ae3e9ef", "address": "fa:16:3e:eb:28:d8", "network": {"id": "9df519b2-744d-4554-ab73-bff1872a6efb", "bridge": "br-int", "label": "tempest-network-smoke--418730794", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b04a9d3-64", "ovs_interfaceid": "8b04a9d3-64de-44c8-bcff-b2443ae3e9ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:20:02 np0005486808 nova_compute[259627]: 2025-10-14 09:20:02.202 2 DEBUG nova.network.os_vif_util [None req-7649cfc5-b022-489c-ba31-c3690c0ab86f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "8b04a9d3-64de-44c8-bcff-b2443ae3e9ef", "address": "fa:16:3e:eb:28:d8", "network": {"id": "9df519b2-744d-4554-ab73-bff1872a6efb", "bridge": "br-int", "label": "tempest-network-smoke--418730794", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b04a9d3-64", "ovs_interfaceid": "8b04a9d3-64de-44c8-bcff-b2443ae3e9ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:20:02 np0005486808 nova_compute[259627]: 2025-10-14 09:20:02.204 2 DEBUG nova.network.os_vif_util [None req-7649cfc5-b022-489c-ba31-c3690c0ab86f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:28:d8,bridge_name='br-int',has_traffic_filtering=True,id=8b04a9d3-64de-44c8-bcff-b2443ae3e9ef,network=Network(9df519b2-744d-4554-ab73-bff1872a6efb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b04a9d3-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:20:02 np0005486808 nova_compute[259627]: 2025-10-14 09:20:02.207 2 DEBUG nova.network.neutron [req-eb0d4925-266c-4b08-8b84-04278396b500 req-3b6255c2-1905-40dd-b55c-806a2fe5021c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Updated VIF entry in instance network info cache for port 8b04a9d3-64de-44c8-bcff-b2443ae3e9ef. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:20:02 np0005486808 nova_compute[259627]: 2025-10-14 09:20:02.208 2 DEBUG nova.network.neutron [req-eb0d4925-266c-4b08-8b84-04278396b500 req-3b6255c2-1905-40dd-b55c-806a2fe5021c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e83e1125-7b9a-4ca1-9040-40dbc3e0237b] Updating instance_info_cache with network_info: [{"id": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "address": "fa:16:3e:cf:57:80", "network": {"id": "414bade6-3739-49f9-bce9-e93105157bbe", "bridge": "br-int", "label": "tempest-network-smoke--754120486", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654f4bc5-29", "ovs_interfaceid": "654f4bc5-29db-40e4-bc4e-bdd325e98e7a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8b04a9d3-64de-44c8-bcff-b2443ae3e9ef", "address": "fa:16:3e:eb:28:d8", "network": {"id": "9df519b2-744d-4554-ab73-bff1872a6efb", "bridge": "br-int", "label": "tempest-network-smoke--418730794", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b04a9d3-64", "ovs_interfaceid": "8b04a9d3-64de-44c8-bcff-b2443ae3e9ef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:20:02 np0005486808 nova_compute[259627]: 2025-10-14 09:20:02.211 2 DEBUG nova.virt.libvirt.guest [None req-7649cfc5-b022-489c-ba31-c3690c0ab86f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:eb:28:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8b04a9d3-64"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 14 05:20:02 np0005486808 nova_compute[259627]: 2025-10-14 09:20:02.214 2 DEBUG nova.virt.libvirt.guest [None req-7649cfc5-b022-489c-ba31-c3690c0ab86f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:eb:28:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8b04a9d3-64"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 14 05:20:02 np0005486808 nova_compute[259627]: 2025-10-14 09:20:02.216 2 DEBUG nova.virt.libvirt.driver [None req-7649cfc5-b022-489c-ba31-c3690c0ab86f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Attempting to detach device tap8b04a9d3-64 from instance e83e1125-7b9a-4ca1-9040-40dbc3e0237b from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct 14 05:20:02 np0005486808 nova_compute[259627]: 2025-10-14 09:20:02.216 2 DEBUG nova.virt.libvirt.guest [None req-7649cfc5-b022-489c-ba31-c3690c0ab86f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] detach device xml: <interface type="ethernet">
Oct 14 05:20:02 np0005486808 nova_compute[259627]:  <mac address="fa:16:3e:eb:28:d8"/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:  <model type="virtio"/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:  <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:  <mtu size="1442"/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:  <target dev="tap8b04a9d3-64"/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]: </interface>
Oct 14 05:20:02 np0005486808 nova_compute[259627]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 14 05:20:02 np0005486808 nova_compute[259627]: 2025-10-14 09:20:02.222 2 DEBUG nova.virt.libvirt.guest [None req-7649cfc5-b022-489c-ba31-c3690c0ab86f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:eb:28:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8b04a9d3-64"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 14 05:20:02 np0005486808 nova_compute[259627]: 2025-10-14 09:20:02.224 2 DEBUG nova.virt.libvirt.guest [None req-7649cfc5-b022-489c-ba31-c3690c0ab86f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:eb:28:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8b04a9d3-64"/></interface>not found in domain: <domain type='kvm' id='143'>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:  <name>instance-00000071</name>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:  <uuid>e83e1125-7b9a-4ca1-9040-40dbc3e0237b</uuid>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:20:02 np0005486808 nova_compute[259627]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:  <nova:name>tempest-TestNetworkBasicOps-server-424606386</nova:name>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:  <nova:creationTime>2025-10-14 09:20:00</nova:creationTime>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:  <nova:flavor name="m1.nano">
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <nova:memory>128</nova:memory>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <nova:disk>1</nova:disk>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <nova:swap>0</nova:swap>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <nova:vcpus>1</nova:vcpus>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:  </nova:flavor>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:  <nova:owner>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:  </nova:owner>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:  <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:  <nova:ports>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <nova:port uuid="654f4bc5-29db-40e4-bc4e-bdd325e98e7a">
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <nova:port uuid="8b04a9d3-64de-44c8-bcff-b2443ae3e9ef">
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.28" ipVersion="4"/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:  </nova:ports>
Oct 14 05:20:02 np0005486808 nova_compute[259627]: </nova:instance>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:  <memory unit='KiB'>131072</memory>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:  <vcpu placement='static'>1</vcpu>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:  <resource>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <partition>/machine</partition>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:  </resource>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:  <sysinfo type='smbios'>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      <entry name='manufacturer'>RDO</entry>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      <entry name='product'>OpenStack Compute</entry>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      <entry name='serial'>e83e1125-7b9a-4ca1-9040-40dbc3e0237b</entry>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      <entry name='uuid'>e83e1125-7b9a-4ca1-9040-40dbc3e0237b</entry>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      <entry name='family'>Virtual Machine</entry>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <boot dev='hd'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <smbios mode='sysinfo'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <vmcoreinfo state='on'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:  <cpu mode='custom' match='exact' check='full'>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <model fallback='forbid'>EPYC-Rome</model>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <vendor>AMD</vendor>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <feature policy='require' name='x2apic'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <feature policy='require' name='tsc-deadline'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <feature policy='require' name='hypervisor'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <feature policy='require' name='tsc_adjust'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <feature policy='require' name='spec-ctrl'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <feature policy='require' name='stibp'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <feature policy='require' name='arch-capabilities'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <feature policy='require' name='ssbd'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <feature policy='require' name='cmp_legacy'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <feature policy='require' name='overflow-recov'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <feature policy='require' name='succor'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <feature policy='require' name='ibrs'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <feature policy='require' name='amd-ssbd'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <feature policy='require' name='virt-ssbd'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <feature policy='disable' name='lbrv'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <feature policy='disable' name='tsc-scale'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <feature policy='disable' name='vmcb-clean'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <feature policy='disable' name='flushbyasid'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <feature policy='disable' name='pause-filter'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <feature policy='disable' name='pfthreshold'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <feature policy='disable' name='svme-addr-chk'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <feature policy='require' name='lfence-always-serializing'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <feature policy='require' name='rdctl-no'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <feature policy='require' name='mds-no'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <feature policy='require' name='pschange-mc-no'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <feature policy='require' name='gds-no'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <feature policy='require' name='rfds-no'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <feature policy='disable' name='xsaves'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <feature policy='disable' name='svm'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <feature policy='require' name='topoext'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <feature policy='disable' name='npt'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <feature policy='disable' name='nrip-save'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:  <clock offset='utc'>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <timer name='pit' tickpolicy='delay'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <timer name='rtc' tickpolicy='catchup'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <timer name='hpet' present='no'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:  <on_poweroff>destroy</on_poweroff>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:  <on_reboot>restart</on_reboot>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:  <on_crash>destroy</on_crash>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <disk type='network' device='disk'>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      <driver name='qemu' type='raw' cache='none'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      <auth username='openstack'>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:        <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      <source protocol='rbd' name='vms/e83e1125-7b9a-4ca1-9040-40dbc3e0237b_disk' index='2'>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:        <host name='192.168.122.100' port='6789'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      <target dev='vda' bus='virtio'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      <alias name='virtio-disk0'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <disk type='network' device='cdrom'>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      <driver name='qemu' type='raw' cache='none'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      <auth username='openstack'>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:        <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      <source protocol='rbd' name='vms/e83e1125-7b9a-4ca1-9040-40dbc3e0237b_disk.config' index='1'>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:        <host name='192.168.122.100' port='6789'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      <target dev='sda' bus='sata'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      <readonly/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      <alias name='sata0-0-0'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <controller type='pci' index='0' model='pcie-root'>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      <alias name='pcie.0'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      <target chassis='1' port='0x10'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      <alias name='pci.1'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      <target chassis='2' port='0x11'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      <alias name='pci.2'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      <target chassis='3' port='0x12'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      <alias name='pci.3'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      <target chassis='4' port='0x13'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      <alias name='pci.4'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      <target chassis='5' port='0x14'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      <alias name='pci.5'/>
Oct 14 05:20:02 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 14 05:21:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2047: 305 pgs: 305 active+clean; 41 MiB data, 783 MiB used, 59 GiB / 60 GiB avail; 250 KiB/s rd, 13 KiB/s wr, 61 op/s
Oct 14 05:21:46 np0005486808 rsyslogd[1002]: imjournal: 4274 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct 14 05:21:46 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 05:21:46 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.0 total, 600.0 interval#012Cumulative writes: 9457 writes, 43K keys, 9457 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s#012Cumulative WAL: 9457 writes, 9457 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1520 writes, 7571 keys, 1520 commit groups, 1.0 writes per commit group, ingest: 9.20 MB, 0.02 MB/s#012Interval WAL: 1520 writes, 1520 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     97.0      0.53              0.19        29    0.018       0      0       0.0       0.0#012  L6      1/0    7.30 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.3    188.2    157.0      1.41              0.71        28    0.050    156K    15K       0.0       0.0#012 Sum      1/0    7.30 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.3    136.9    140.6      1.94              0.90        57    0.034    156K    15K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   9.0    168.9    164.5      0.49              0.27        16    0.030     55K   4087       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0    188.2    157.0      1.41              0.71        28    0.050    156K    15K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     98.3      0.52              0.19        28    0.019       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      7.3      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3600.0 total, 600.0 interval#012Flush(GB): cumulative 0.050, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.27 GB write, 0.08 MB/s write, 0.26 GB read, 0.07 MB/s read, 1.9 seconds#012Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.08 GB read, 0.14 MB/s read, 0.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5646f3b2b1f0#2 capacity: 304.00 MB usage: 29.45 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000245 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(1931,28.27 MB,9.29864%) FilterBlock(58,438.55 KB,0.140878%) IndexBlock(58,773.72 KB,0.248548%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct 14 05:21:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:21:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2048: 305 pgs: 305 active+clean; 41 MiB data, 783 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.1 KiB/s wr, 19 op/s
Oct 14 05:21:48 np0005486808 nova_compute[259627]: 2025-10-14 09:21:48.446 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433693.444861, fd5669ba-0261-423e-8586-66c91ff570a4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:21:48 np0005486808 nova_compute[259627]: 2025-10-14 09:21:48.447 2 INFO nova.compute.manager [-] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:21:48 np0005486808 nova_compute[259627]: 2025-10-14 09:21:48.478 2 DEBUG nova.compute.manager [None req-d219257a-8994-4aa3-b68f-e0a7021014ed - - - - - -] [instance: fd5669ba-0261-423e-8586-66c91ff570a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:21:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:21:48 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:21:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:21:48 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:21:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:21:48 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:21:48 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 8a3ad5f0-0e44-46a9-8158-3a5ad46a99df does not exist
Oct 14 05:21:48 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev f9091d2b-7962-47ee-962d-19d7b38c6ca9 does not exist
Oct 14 05:21:48 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 105f1c60-3cbb-40e9-ad92-ae7cb059b7e3 does not exist
Oct 14 05:21:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:21:48 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:21:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:21:48 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:21:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:21:48 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:21:49 np0005486808 nova_compute[259627]: 2025-10-14 09:21:49.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:21:49 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:21:49 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:21:49 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:21:49 np0005486808 podman[376922]: 2025-10-14 09:21:49.643858533 +0000 UTC m=+0.070345725 container create be5e8312f51451e4836625a0a0b96c2ee0ffc4f957b5fa6511fd3203fdf0c82f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mclean, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 14 05:21:49 np0005486808 systemd[1]: Started libpod-conmon-be5e8312f51451e4836625a0a0b96c2ee0ffc4f957b5fa6511fd3203fdf0c82f.scope.
Oct 14 05:21:49 np0005486808 podman[376922]: 2025-10-14 09:21:49.613438143 +0000 UTC m=+0.039925345 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:21:49 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:21:49 np0005486808 podman[376922]: 2025-10-14 09:21:49.752349477 +0000 UTC m=+0.178836669 container init be5e8312f51451e4836625a0a0b96c2ee0ffc4f957b5fa6511fd3203fdf0c82f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mclean, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 14 05:21:49 np0005486808 podman[376922]: 2025-10-14 09:21:49.765929472 +0000 UTC m=+0.192416654 container start be5e8312f51451e4836625a0a0b96c2ee0ffc4f957b5fa6511fd3203fdf0c82f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 14 05:21:49 np0005486808 podman[376922]: 2025-10-14 09:21:49.770384441 +0000 UTC m=+0.196871683 container attach be5e8312f51451e4836625a0a0b96c2ee0ffc4f957b5fa6511fd3203fdf0c82f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 14 05:21:49 np0005486808 epic_mclean[376938]: 167 167
Oct 14 05:21:49 np0005486808 systemd[1]: libpod-be5e8312f51451e4836625a0a0b96c2ee0ffc4f957b5fa6511fd3203fdf0c82f.scope: Deactivated successfully.
Oct 14 05:21:49 np0005486808 podman[376922]: 2025-10-14 09:21:49.776371999 +0000 UTC m=+0.202859191 container died be5e8312f51451e4836625a0a0b96c2ee0ffc4f957b5fa6511fd3203fdf0c82f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mclean, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:21:49 np0005486808 systemd[1]: var-lib-containers-storage-overlay-63313f046fa4cc23b8d405fcab9e77d718b771e2cf86835fe2f9b5486289ee5d-merged.mount: Deactivated successfully.
Oct 14 05:21:49 np0005486808 podman[376922]: 2025-10-14 09:21:49.835734252 +0000 UTC m=+0.262221434 container remove be5e8312f51451e4836625a0a0b96c2ee0ffc4f957b5fa6511fd3203fdf0c82f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mclean, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 14 05:21:49 np0005486808 systemd[1]: libpod-conmon-be5e8312f51451e4836625a0a0b96c2ee0ffc4f957b5fa6511fd3203fdf0c82f.scope: Deactivated successfully.
Oct 14 05:21:50 np0005486808 podman[376963]: 2025-10-14 09:21:50.066752676 +0000 UTC m=+0.056633027 container create e0aa8769144460c15e82699bdd6270277bfe9db7edf9af1a9e2c24e28da4ea9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:21:50 np0005486808 systemd[1]: Started libpod-conmon-e0aa8769144460c15e82699bdd6270277bfe9db7edf9af1a9e2c24e28da4ea9d.scope.
Oct 14 05:21:50 np0005486808 podman[376963]: 2025-10-14 09:21:50.04787629 +0000 UTC m=+0.037756681 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:21:50 np0005486808 nova_compute[259627]: 2025-10-14 09:21:50.140 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433695.139951, 6a505551-bc3f-4254-966f-ca344358f8ac => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:21:50 np0005486808 nova_compute[259627]: 2025-10-14 09:21:50.142 2 INFO nova.compute.manager [-] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:21:50 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:21:50 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8350a76783983bf4565acef5cf4ffa7c0d8733c63cf551a2ebbbb92dd63aa36/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:21:50 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8350a76783983bf4565acef5cf4ffa7c0d8733c63cf551a2ebbbb92dd63aa36/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:21:50 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8350a76783983bf4565acef5cf4ffa7c0d8733c63cf551a2ebbbb92dd63aa36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:21:50 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8350a76783983bf4565acef5cf4ffa7c0d8733c63cf551a2ebbbb92dd63aa36/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:21:50 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8350a76783983bf4565acef5cf4ffa7c0d8733c63cf551a2ebbbb92dd63aa36/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:21:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2049: 305 pgs: 305 active+clean; 41 MiB data, 783 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.1 KiB/s wr, 19 op/s
Oct 14 05:21:50 np0005486808 podman[376963]: 2025-10-14 09:21:50.174884511 +0000 UTC m=+0.164764862 container init e0aa8769144460c15e82699bdd6270277bfe9db7edf9af1a9e2c24e28da4ea9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_murdock, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 14 05:21:50 np0005486808 nova_compute[259627]: 2025-10-14 09:21:50.226 2 DEBUG nova.compute.manager [None req-45f589d6-5b61-4495-bac6-3619818765c8 - - - - - -] [instance: 6a505551-bc3f-4254-966f-ca344358f8ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:21:50 np0005486808 nova_compute[259627]: 2025-10-14 09:21:50.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:21:50 np0005486808 podman[376963]: 2025-10-14 09:21:50.227981869 +0000 UTC m=+0.217862230 container start e0aa8769144460c15e82699bdd6270277bfe9db7edf9af1a9e2c24e28da4ea9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_murdock, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:21:50 np0005486808 podman[376963]: 2025-10-14 09:21:50.231742482 +0000 UTC m=+0.221622853 container attach e0aa8769144460c15e82699bdd6270277bfe9db7edf9af1a9e2c24e28da4ea9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 14 05:21:50 np0005486808 nova_compute[259627]: 2025-10-14 09:21:50.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:21:51 np0005486808 mystifying_murdock[376980]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:21:51 np0005486808 mystifying_murdock[376980]: --> relative data size: 1.0
Oct 14 05:21:51 np0005486808 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 05:21:51 np0005486808 mystifying_murdock[376980]: --> All data devices are unavailable
Oct 14 05:21:51 np0005486808 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 05:21:51 np0005486808 systemd[1]: libpod-e0aa8769144460c15e82699bdd6270277bfe9db7edf9af1a9e2c24e28da4ea9d.scope: Deactivated successfully.
Oct 14 05:21:51 np0005486808 podman[376963]: 2025-10-14 09:21:51.331061056 +0000 UTC m=+1.320941467 container died e0aa8769144460c15e82699bdd6270277bfe9db7edf9af1a9e2c24e28da4ea9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:21:51 np0005486808 systemd[1]: libpod-e0aa8769144460c15e82699bdd6270277bfe9db7edf9af1a9e2c24e28da4ea9d.scope: Consumed 1.069s CPU time.
Oct 14 05:21:51 np0005486808 systemd[1]: var-lib-containers-storage-overlay-d8350a76783983bf4565acef5cf4ffa7c0d8733c63cf551a2ebbbb92dd63aa36-merged.mount: Deactivated successfully.
Oct 14 05:21:51 np0005486808 podman[376963]: 2025-10-14 09:21:51.422989861 +0000 UTC m=+1.412870242 container remove e0aa8769144460c15e82699bdd6270277bfe9db7edf9af1a9e2c24e28da4ea9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_murdock, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:21:51 np0005486808 systemd[1]: libpod-conmon-e0aa8769144460c15e82699bdd6270277bfe9db7edf9af1a9e2c24e28da4ea9d.scope: Deactivated successfully.
Oct 14 05:21:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2050: 305 pgs: 305 active+clean; 41 MiB data, 783 MiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 767 B/s wr, 7 op/s
Oct 14 05:21:52 np0005486808 podman[377162]: 2025-10-14 09:21:52.300176081 +0000 UTC m=+0.069569686 container create 71e234e318450113edb4f7dd36acb729b98f5e6dce84b8432603d71d2d2dea26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_dhawan, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:21:52 np0005486808 systemd[1]: Started libpod-conmon-71e234e318450113edb4f7dd36acb729b98f5e6dce84b8432603d71d2d2dea26.scope.
Oct 14 05:21:52 np0005486808 podman[377162]: 2025-10-14 09:21:52.271831932 +0000 UTC m=+0.041225547 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:21:52 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:21:52 np0005486808 podman[377162]: 2025-10-14 09:21:52.401542749 +0000 UTC m=+0.170936374 container init 71e234e318450113edb4f7dd36acb729b98f5e6dce84b8432603d71d2d2dea26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 14 05:21:52 np0005486808 podman[377162]: 2025-10-14 09:21:52.415596806 +0000 UTC m=+0.184990421 container start 71e234e318450113edb4f7dd36acb729b98f5e6dce84b8432603d71d2d2dea26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:21:52 np0005486808 podman[377162]: 2025-10-14 09:21:52.419739498 +0000 UTC m=+0.189133113 container attach 71e234e318450113edb4f7dd36acb729b98f5e6dce84b8432603d71d2d2dea26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_dhawan, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:21:52 np0005486808 romantic_dhawan[377179]: 167 167
Oct 14 05:21:52 np0005486808 systemd[1]: libpod-71e234e318450113edb4f7dd36acb729b98f5e6dce84b8432603d71d2d2dea26.scope: Deactivated successfully.
Oct 14 05:21:52 np0005486808 podman[377162]: 2025-10-14 09:21:52.424355102 +0000 UTC m=+0.193748717 container died 71e234e318450113edb4f7dd36acb729b98f5e6dce84b8432603d71d2d2dea26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_dhawan, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 14 05:21:52 np0005486808 systemd[1]: var-lib-containers-storage-overlay-3241a7f67c2513bd7baf8b28bfb5bdef625c4e76c838cb9d33f4cb140690547c-merged.mount: Deactivated successfully.
Oct 14 05:21:52 np0005486808 podman[377162]: 2025-10-14 09:21:52.477601444 +0000 UTC m=+0.246995029 container remove 71e234e318450113edb4f7dd36acb729b98f5e6dce84b8432603d71d2d2dea26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_dhawan, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:21:52 np0005486808 systemd[1]: libpod-conmon-71e234e318450113edb4f7dd36acb729b98f5e6dce84b8432603d71d2d2dea26.scope: Deactivated successfully.
Oct 14 05:21:52 np0005486808 podman[377203]: 2025-10-14 09:21:52.653333095 +0000 UTC m=+0.057426916 container create b6b34fcb120756b6ee6b03664a6a0991e2648536ef50c349da491a304b9cb8e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lederberg, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:21:52 np0005486808 systemd[1]: Started libpod-conmon-b6b34fcb120756b6ee6b03664a6a0991e2648536ef50c349da491a304b9cb8e3.scope.
Oct 14 05:21:52 np0005486808 podman[377203]: 2025-10-14 09:21:52.627330814 +0000 UTC m=+0.031424705 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:21:52 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:21:52 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17298dd6484fc724c322f93fea0f9114704d79e8d698b3f1ae6e103e789b53b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:21:52 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17298dd6484fc724c322f93fea0f9114704d79e8d698b3f1ae6e103e789b53b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:21:52 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17298dd6484fc724c322f93fea0f9114704d79e8d698b3f1ae6e103e789b53b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:21:52 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17298dd6484fc724c322f93fea0f9114704d79e8d698b3f1ae6e103e789b53b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:21:52 np0005486808 podman[377203]: 2025-10-14 09:21:52.740331979 +0000 UTC m=+0.144425820 container init b6b34fcb120756b6ee6b03664a6a0991e2648536ef50c349da491a304b9cb8e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lederberg, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:21:52 np0005486808 podman[377203]: 2025-10-14 09:21:52.748946182 +0000 UTC m=+0.153040003 container start b6b34fcb120756b6ee6b03664a6a0991e2648536ef50c349da491a304b9cb8e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:21:52 np0005486808 podman[377203]: 2025-10-14 09:21:52.752783366 +0000 UTC m=+0.156877177 container attach b6b34fcb120756b6ee6b03664a6a0991e2648536ef50c349da491a304b9cb8e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lederberg, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:21:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]: {
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:    "0": [
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:        {
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:            "devices": [
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:                "/dev/loop3"
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:            ],
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:            "lv_name": "ceph_lv0",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:            "lv_size": "21470642176",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:            "name": "ceph_lv0",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:            "tags": {
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:                "ceph.cluster_name": "ceph",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:                "ceph.crush_device_class": "",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:                "ceph.encrypted": "0",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:                "ceph.osd_id": "0",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:                "ceph.type": "block",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:                "ceph.vdo": "0"
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:            },
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:            "type": "block",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:            "vg_name": "ceph_vg0"
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:        }
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:    ],
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:    "1": [
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:        {
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:            "devices": [
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:                "/dev/loop4"
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:            ],
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:            "lv_name": "ceph_lv1",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:            "lv_size": "21470642176",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:            "name": "ceph_lv1",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:            "tags": {
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:                "ceph.cluster_name": "ceph",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:                "ceph.crush_device_class": "",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:                "ceph.encrypted": "0",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:                "ceph.osd_id": "1",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:                "ceph.type": "block",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:                "ceph.vdo": "0"
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:            },
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:            "type": "block",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:            "vg_name": "ceph_vg1"
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:        }
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:    ],
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:    "2": [
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:        {
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:            "devices": [
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:                "/dev/loop5"
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:            ],
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:            "lv_name": "ceph_lv2",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:            "lv_size": "21470642176",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:            "name": "ceph_lv2",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:            "tags": {
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:                "ceph.cluster_name": "ceph",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:                "ceph.crush_device_class": "",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:                "ceph.encrypted": "0",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:                "ceph.osd_id": "2",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:                "ceph.type": "block",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:                "ceph.vdo": "0"
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:            },
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:            "type": "block",
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:            "vg_name": "ceph_vg2"
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:        }
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]:    ]
Oct 14 05:21:53 np0005486808 tender_lederberg[377219]: }
Oct 14 05:21:53 np0005486808 systemd[1]: libpod-b6b34fcb120756b6ee6b03664a6a0991e2648536ef50c349da491a304b9cb8e3.scope: Deactivated successfully.
Oct 14 05:21:53 np0005486808 podman[377203]: 2025-10-14 09:21:53.494317923 +0000 UTC m=+0.898411754 container died b6b34fcb120756b6ee6b03664a6a0991e2648536ef50c349da491a304b9cb8e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:21:53 np0005486808 systemd[1]: var-lib-containers-storage-overlay-17298dd6484fc724c322f93fea0f9114704d79e8d698b3f1ae6e103e789b53b1-merged.mount: Deactivated successfully.
Oct 14 05:21:53 np0005486808 podman[377203]: 2025-10-14 09:21:53.566857 +0000 UTC m=+0.970950851 container remove b6b34fcb120756b6ee6b03664a6a0991e2648536ef50c349da491a304b9cb8e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 14 05:21:53 np0005486808 systemd[1]: libpod-conmon-b6b34fcb120756b6ee6b03664a6a0991e2648536ef50c349da491a304b9cb8e3.scope: Deactivated successfully.
Oct 14 05:21:54 np0005486808 nova_compute[259627]: 2025-10-14 09:21:54.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:21:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2051: 305 pgs: 305 active+clean; 41 MiB data, 783 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:21:54 np0005486808 podman[377381]: 2025-10-14 09:21:54.368091837 +0000 UTC m=+0.053180481 container create 38d540e198a710cce11b0507f14a76adf24108e90b7d6ff5023653fea3b72ecc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 14 05:21:54 np0005486808 systemd[1]: Started libpod-conmon-38d540e198a710cce11b0507f14a76adf24108e90b7d6ff5023653fea3b72ecc.scope.
Oct 14 05:21:54 np0005486808 podman[377381]: 2025-10-14 09:21:54.346659749 +0000 UTC m=+0.031748423 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:21:54 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:21:54 np0005486808 podman[377381]: 2025-10-14 09:21:54.456829924 +0000 UTC m=+0.141918628 container init 38d540e198a710cce11b0507f14a76adf24108e90b7d6ff5023653fea3b72ecc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gauss, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:21:54 np0005486808 podman[377381]: 2025-10-14 09:21:54.464867692 +0000 UTC m=+0.149956366 container start 38d540e198a710cce11b0507f14a76adf24108e90b7d6ff5023653fea3b72ecc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 14 05:21:54 np0005486808 keen_gauss[377397]: 167 167
Oct 14 05:21:54 np0005486808 podman[377381]: 2025-10-14 09:21:54.469372553 +0000 UTC m=+0.154461217 container attach 38d540e198a710cce11b0507f14a76adf24108e90b7d6ff5023653fea3b72ecc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 05:21:54 np0005486808 systemd[1]: libpod-38d540e198a710cce11b0507f14a76adf24108e90b7d6ff5023653fea3b72ecc.scope: Deactivated successfully.
Oct 14 05:21:54 np0005486808 conmon[377397]: conmon 38d540e198a710cce11b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-38d540e198a710cce11b0507f14a76adf24108e90b7d6ff5023653fea3b72ecc.scope/container/memory.events
Oct 14 05:21:54 np0005486808 podman[377381]: 2025-10-14 09:21:54.471427534 +0000 UTC m=+0.156516178 container died 38d540e198a710cce11b0507f14a76adf24108e90b7d6ff5023653fea3b72ecc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:21:54 np0005486808 systemd[1]: var-lib-containers-storage-overlay-c636ac04bc16ef9dccbdb091087a7f37b227139acea4409a6b827c14cd237ad6-merged.mount: Deactivated successfully.
Oct 14 05:21:54 np0005486808 podman[377381]: 2025-10-14 09:21:54.523696112 +0000 UTC m=+0.208784776 container remove 38d540e198a710cce11b0507f14a76adf24108e90b7d6ff5023653fea3b72ecc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:21:54 np0005486808 systemd[1]: libpod-conmon-38d540e198a710cce11b0507f14a76adf24108e90b7d6ff5023653fea3b72ecc.scope: Deactivated successfully.
Oct 14 05:21:54 np0005486808 podman[377422]: 2025-10-14 09:21:54.754729457 +0000 UTC m=+0.057225922 container create efee419ac373ad0694558800a6cbedafd3223d604fe281adc5fb341eb298909d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_zhukovsky, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:21:54 np0005486808 systemd[1]: Started libpod-conmon-efee419ac373ad0694558800a6cbedafd3223d604fe281adc5fb341eb298909d.scope.
Oct 14 05:21:54 np0005486808 podman[377422]: 2025-10-14 09:21:54.727568057 +0000 UTC m=+0.030064572 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:21:54 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:21:54 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b85e9246d6b284098734ab4a89dc23e8b46e912246b7feef55993df40d5624cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:21:54 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b85e9246d6b284098734ab4a89dc23e8b46e912246b7feef55993df40d5624cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:21:54 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b85e9246d6b284098734ab4a89dc23e8b46e912246b7feef55993df40d5624cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:21:54 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b85e9246d6b284098734ab4a89dc23e8b46e912246b7feef55993df40d5624cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:21:54 np0005486808 podman[377422]: 2025-10-14 09:21:54.869476805 +0000 UTC m=+0.171973270 container init efee419ac373ad0694558800a6cbedafd3223d604fe281adc5fb341eb298909d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:21:54 np0005486808 podman[377422]: 2025-10-14 09:21:54.877105573 +0000 UTC m=+0.179602008 container start efee419ac373ad0694558800a6cbedafd3223d604fe281adc5fb341eb298909d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_zhukovsky, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 14 05:21:54 np0005486808 podman[377422]: 2025-10-14 09:21:54.881144972 +0000 UTC m=+0.183641407 container attach efee419ac373ad0694558800a6cbedafd3223d604fe281adc5fb341eb298909d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Oct 14 05:21:55 np0005486808 nova_compute[259627]: 2025-10-14 09:21:55.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:21:55 np0005486808 festive_zhukovsky[377439]: {
Oct 14 05:21:55 np0005486808 festive_zhukovsky[377439]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:21:55 np0005486808 festive_zhukovsky[377439]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:21:55 np0005486808 festive_zhukovsky[377439]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:21:55 np0005486808 festive_zhukovsky[377439]:        "osd_id": 2,
Oct 14 05:21:55 np0005486808 festive_zhukovsky[377439]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:21:55 np0005486808 festive_zhukovsky[377439]:        "type": "bluestore"
Oct 14 05:21:55 np0005486808 festive_zhukovsky[377439]:    },
Oct 14 05:21:55 np0005486808 festive_zhukovsky[377439]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:21:55 np0005486808 festive_zhukovsky[377439]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:21:55 np0005486808 festive_zhukovsky[377439]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:21:55 np0005486808 festive_zhukovsky[377439]:        "osd_id": 1,
Oct 14 05:21:55 np0005486808 festive_zhukovsky[377439]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:21:55 np0005486808 festive_zhukovsky[377439]:        "type": "bluestore"
Oct 14 05:21:55 np0005486808 festive_zhukovsky[377439]:    },
Oct 14 05:21:55 np0005486808 festive_zhukovsky[377439]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:21:55 np0005486808 festive_zhukovsky[377439]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:21:55 np0005486808 festive_zhukovsky[377439]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:21:55 np0005486808 festive_zhukovsky[377439]:        "osd_id": 0,
Oct 14 05:21:55 np0005486808 festive_zhukovsky[377439]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:21:55 np0005486808 festive_zhukovsky[377439]:        "type": "bluestore"
Oct 14 05:21:55 np0005486808 festive_zhukovsky[377439]:    }
Oct 14 05:21:55 np0005486808 festive_zhukovsky[377439]: }
Oct 14 05:21:55 np0005486808 systemd[1]: libpod-efee419ac373ad0694558800a6cbedafd3223d604fe281adc5fb341eb298909d.scope: Deactivated successfully.
Oct 14 05:21:55 np0005486808 systemd[1]: libpod-efee419ac373ad0694558800a6cbedafd3223d604fe281adc5fb341eb298909d.scope: Consumed 1.003s CPU time.
Oct 14 05:21:55 np0005486808 podman[377422]: 2025-10-14 09:21:55.875342286 +0000 UTC m=+1.177838721 container died efee419ac373ad0694558800a6cbedafd3223d604fe281adc5fb341eb298909d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:21:55 np0005486808 systemd[1]: var-lib-containers-storage-overlay-b85e9246d6b284098734ab4a89dc23e8b46e912246b7feef55993df40d5624cb-merged.mount: Deactivated successfully.
Oct 14 05:21:55 np0005486808 podman[377422]: 2025-10-14 09:21:55.928399474 +0000 UTC m=+1.230895899 container remove efee419ac373ad0694558800a6cbedafd3223d604fe281adc5fb341eb298909d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_zhukovsky, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:21:55 np0005486808 systemd[1]: libpod-conmon-efee419ac373ad0694558800a6cbedafd3223d604fe281adc5fb341eb298909d.scope: Deactivated successfully.
Oct 14 05:21:55 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:21:55 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:21:55 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:21:55 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:21:55 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev d3823705-c579-45fa-8cb9-2b2409dbed64 does not exist
Oct 14 05:21:55 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev ea13efe0-036c-42ba-98b4-3f93ecf1e63b does not exist
Oct 14 05:21:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2052: 305 pgs: 305 active+clean; 41 MiB data, 783 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:21:56 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:21:56 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:21:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:21:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2053: 305 pgs: 305 active+clean; 41 MiB data, 783 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:21:58 np0005486808 nova_compute[259627]: 2025-10-14 09:21:58.364 2 DEBUG oslo_concurrency.lockutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "50c83173-31e3-4f7a-8836-26e52affd0f2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:21:58 np0005486808 nova_compute[259627]: 2025-10-14 09:21:58.364 2 DEBUG oslo_concurrency.lockutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:21:58 np0005486808 nova_compute[259627]: 2025-10-14 09:21:58.378 2 DEBUG nova.compute.manager [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:21:58 np0005486808 nova_compute[259627]: 2025-10-14 09:21:58.526 2 DEBUG oslo_concurrency.lockutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:21:58 np0005486808 nova_compute[259627]: 2025-10-14 09:21:58.527 2 DEBUG oslo_concurrency.lockutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:21:58 np0005486808 nova_compute[259627]: 2025-10-14 09:21:58.535 2 DEBUG nova.virt.hardware [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:21:58 np0005486808 nova_compute[259627]: 2025-10-14 09:21:58.536 2 INFO nova.compute.claims [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:21:58 np0005486808 nova_compute[259627]: 2025-10-14 09:21:58.651 2 DEBUG oslo_concurrency.processutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:21:59 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:21:59 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1854884346' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:21:59 np0005486808 nova_compute[259627]: 2025-10-14 09:21:59.121 2 DEBUG oslo_concurrency.processutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:21:59 np0005486808 nova_compute[259627]: 2025-10-14 09:21:59.131 2 DEBUG nova.compute.provider_tree [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:21:59 np0005486808 nova_compute[259627]: 2025-10-14 09:21:59.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:21:59 np0005486808 nova_compute[259627]: 2025-10-14 09:21:59.155 2 DEBUG nova.scheduler.client.report [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:21:59 np0005486808 nova_compute[259627]: 2025-10-14 09:21:59.180 2 DEBUG oslo_concurrency.lockutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:21:59 np0005486808 nova_compute[259627]: 2025-10-14 09:21:59.181 2 DEBUG nova.compute.manager [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:21:59 np0005486808 nova_compute[259627]: 2025-10-14 09:21:59.238 2 DEBUG nova.compute.manager [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:21:59 np0005486808 nova_compute[259627]: 2025-10-14 09:21:59.238 2 DEBUG nova.network.neutron [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:21:59 np0005486808 nova_compute[259627]: 2025-10-14 09:21:59.264 2 INFO nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:21:59 np0005486808 nova_compute[259627]: 2025-10-14 09:21:59.289 2 DEBUG nova.compute.manager [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:21:59 np0005486808 nova_compute[259627]: 2025-10-14 09:21:59.397 2 DEBUG nova.compute.manager [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:21:59 np0005486808 nova_compute[259627]: 2025-10-14 09:21:59.399 2 DEBUG nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:21:59 np0005486808 nova_compute[259627]: 2025-10-14 09:21:59.400 2 INFO nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Creating image(s)#033[00m
Oct 14 05:21:59 np0005486808 nova_compute[259627]: 2025-10-14 09:21:59.437 2 DEBUG nova.storage.rbd_utils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 50c83173-31e3-4f7a-8836-26e52affd0f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:21:59 np0005486808 nova_compute[259627]: 2025-10-14 09:21:59.475 2 DEBUG nova.storage.rbd_utils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 50c83173-31e3-4f7a-8836-26e52affd0f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:21:59 np0005486808 nova_compute[259627]: 2025-10-14 09:21:59.512 2 DEBUG nova.storage.rbd_utils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 50c83173-31e3-4f7a-8836-26e52affd0f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:21:59 np0005486808 nova_compute[259627]: 2025-10-14 09:21:59.517 2 DEBUG oslo_concurrency.processutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:21:59 np0005486808 nova_compute[259627]: 2025-10-14 09:21:59.570 2 DEBUG nova.policy [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '10926c27278e45e7b995f2e53b9d16f9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4562699bdb1548f1bb36819107535620', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:21:59 np0005486808 nova_compute[259627]: 2025-10-14 09:21:59.631 2 DEBUG oslo_concurrency.processutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.114s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:21:59 np0005486808 nova_compute[259627]: 2025-10-14 09:21:59.632 2 DEBUG oslo_concurrency.lockutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:21:59 np0005486808 nova_compute[259627]: 2025-10-14 09:21:59.633 2 DEBUG oslo_concurrency.lockutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:21:59 np0005486808 nova_compute[259627]: 2025-10-14 09:21:59.634 2 DEBUG oslo_concurrency.lockutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:21:59 np0005486808 nova_compute[259627]: 2025-10-14 09:21:59.675 2 DEBUG nova.storage.rbd_utils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 50c83173-31e3-4f7a-8836-26e52affd0f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:21:59 np0005486808 podman[377614]: 2025-10-14 09:21:59.67806973 +0000 UTC m=+0.081512920 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:21:59 np0005486808 nova_compute[259627]: 2025-10-14 09:21:59.685 2 DEBUG oslo_concurrency.processutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 50c83173-31e3-4f7a-8836-26e52affd0f2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:21:59 np0005486808 podman[377615]: 2025-10-14 09:21:59.691032209 +0000 UTC m=+0.089629010 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009)
Oct 14 05:21:59 np0005486808 nova_compute[259627]: 2025-10-14 09:21:59.983 2 DEBUG oslo_concurrency.processutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 50c83173-31e3-4f7a-8836-26e52affd0f2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:22:00 np0005486808 nova_compute[259627]: 2025-10-14 09:22:00.062 2 DEBUG nova.storage.rbd_utils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] resizing rbd image 50c83173-31e3-4f7a-8836-26e52affd0f2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:22:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2054: 305 pgs: 305 active+clean; 69 MiB data, 783 MiB used, 59 GiB / 60 GiB avail; 6.5 KiB/s rd, 1.1 MiB/s wr, 12 op/s
Oct 14 05:22:00 np0005486808 nova_compute[259627]: 2025-10-14 09:22:00.188 2 DEBUG nova.objects.instance [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'migration_context' on Instance uuid 50c83173-31e3-4f7a-8836-26e52affd0f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:22:00 np0005486808 nova_compute[259627]: 2025-10-14 09:22:00.206 2 DEBUG nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:22:00 np0005486808 nova_compute[259627]: 2025-10-14 09:22:00.207 2 DEBUG nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Ensure instance console log exists: /var/lib/nova/instances/50c83173-31e3-4f7a-8836-26e52affd0f2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:22:00 np0005486808 nova_compute[259627]: 2025-10-14 09:22:00.207 2 DEBUG oslo_concurrency.lockutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:22:00 np0005486808 nova_compute[259627]: 2025-10-14 09:22:00.208 2 DEBUG oslo_concurrency.lockutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:22:00 np0005486808 nova_compute[259627]: 2025-10-14 09:22:00.208 2 DEBUG oslo_concurrency.lockutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:22:00 np0005486808 nova_compute[259627]: 2025-10-14 09:22:00.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:00 np0005486808 nova_compute[259627]: 2025-10-14 09:22:00.394 2 DEBUG nova.network.neutron [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Successfully created port: 81977d79-f754-42ba-8b3c-c4eb2f9651d2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:22:01 np0005486808 nova_compute[259627]: 2025-10-14 09:22:01.950 2 DEBUG nova.network.neutron [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Successfully updated port: 81977d79-f754-42ba-8b3c-c4eb2f9651d2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:22:01 np0005486808 nova_compute[259627]: 2025-10-14 09:22:01.976 2 DEBUG oslo_concurrency.lockutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:22:01 np0005486808 nova_compute[259627]: 2025-10-14 09:22:01.976 2 DEBUG oslo_concurrency.lockutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquired lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:22:01 np0005486808 nova_compute[259627]: 2025-10-14 09:22:01.976 2 DEBUG nova.network.neutron [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:22:02 np0005486808 nova_compute[259627]: 2025-10-14 09:22:02.156 2 DEBUG nova.compute.manager [req-867bbe83-6936-459c-91de-816f103eaf58 req-9abd66d3-e251-474e-9ba0-283a3eed48cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Received event network-changed-81977d79-f754-42ba-8b3c-c4eb2f9651d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:22:02 np0005486808 nova_compute[259627]: 2025-10-14 09:22:02.157 2 DEBUG nova.compute.manager [req-867bbe83-6936-459c-91de-816f103eaf58 req-9abd66d3-e251-474e-9ba0-283a3eed48cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Refreshing instance network info cache due to event network-changed-81977d79-f754-42ba-8b3c-c4eb2f9651d2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:22:02 np0005486808 nova_compute[259627]: 2025-10-14 09:22:02.158 2 DEBUG oslo_concurrency.lockutils [req-867bbe83-6936-459c-91de-816f103eaf58 req-9abd66d3-e251-474e-9ba0-283a3eed48cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:22:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2055: 305 pgs: 305 active+clean; 88 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 25 op/s
Oct 14 05:22:02 np0005486808 nova_compute[259627]: 2025-10-14 09:22:02.494 2 DEBUG nova.network.neutron [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:22:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:22:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:22:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:22:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:22:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:22:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:22:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:22:03 np0005486808 nova_compute[259627]: 2025-10-14 09:22:03.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:22:04 np0005486808 nova_compute[259627]: 2025-10-14 09:22:04.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2056: 305 pgs: 305 active+clean; 88 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 25 op/s
Oct 14 05:22:05 np0005486808 nova_compute[259627]: 2025-10-14 09:22:05.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:05 np0005486808 nova_compute[259627]: 2025-10-14 09:22:05.332 2 DEBUG nova.network.neutron [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Updating instance_info_cache with network_info: [{"id": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "address": "fa:16:3e:32:f2:3f", "network": {"id": "99e78054-f9f4-417c-a942-d4f9dd534ef7", "bridge": "br-int", "label": "tempest-network-smoke--1093348429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81977d79-f7", "ovs_interfaceid": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:22:05 np0005486808 nova_compute[259627]: 2025-10-14 09:22:05.366 2 DEBUG oslo_concurrency.lockutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Releasing lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:22:05 np0005486808 nova_compute[259627]: 2025-10-14 09:22:05.367 2 DEBUG nova.compute.manager [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Instance network_info: |[{"id": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "address": "fa:16:3e:32:f2:3f", "network": {"id": "99e78054-f9f4-417c-a942-d4f9dd534ef7", "bridge": "br-int", "label": "tempest-network-smoke--1093348429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81977d79-f7", "ovs_interfaceid": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:22:05 np0005486808 nova_compute[259627]: 2025-10-14 09:22:05.368 2 DEBUG oslo_concurrency.lockutils [req-867bbe83-6936-459c-91de-816f103eaf58 req-9abd66d3-e251-474e-9ba0-283a3eed48cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:22:05 np0005486808 nova_compute[259627]: 2025-10-14 09:22:05.369 2 DEBUG nova.network.neutron [req-867bbe83-6936-459c-91de-816f103eaf58 req-9abd66d3-e251-474e-9ba0-283a3eed48cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Refreshing network info cache for port 81977d79-f754-42ba-8b3c-c4eb2f9651d2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:22:05 np0005486808 nova_compute[259627]: 2025-10-14 09:22:05.373 2 DEBUG nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Start _get_guest_xml network_info=[{"id": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "address": "fa:16:3e:32:f2:3f", "network": {"id": "99e78054-f9f4-417c-a942-d4f9dd534ef7", "bridge": "br-int", "label": "tempest-network-smoke--1093348429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81977d79-f7", "ovs_interfaceid": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:22:05 np0005486808 nova_compute[259627]: 2025-10-14 09:22:05.380 2 WARNING nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:22:05 np0005486808 nova_compute[259627]: 2025-10-14 09:22:05.387 2 DEBUG nova.virt.libvirt.host [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:22:05 np0005486808 nova_compute[259627]: 2025-10-14 09:22:05.388 2 DEBUG nova.virt.libvirt.host [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:22:05 np0005486808 nova_compute[259627]: 2025-10-14 09:22:05.400 2 DEBUG nova.virt.libvirt.host [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:22:05 np0005486808 nova_compute[259627]: 2025-10-14 09:22:05.401 2 DEBUG nova.virt.libvirt.host [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:22:05 np0005486808 nova_compute[259627]: 2025-10-14 09:22:05.401 2 DEBUG nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:22:05 np0005486808 nova_compute[259627]: 2025-10-14 09:22:05.402 2 DEBUG nova.virt.hardware [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:22:05 np0005486808 nova_compute[259627]: 2025-10-14 09:22:05.403 2 DEBUG nova.virt.hardware [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:22:05 np0005486808 nova_compute[259627]: 2025-10-14 09:22:05.403 2 DEBUG nova.virt.hardware [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:22:05 np0005486808 nova_compute[259627]: 2025-10-14 09:22:05.404 2 DEBUG nova.virt.hardware [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:22:05 np0005486808 nova_compute[259627]: 2025-10-14 09:22:05.404 2 DEBUG nova.virt.hardware [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:22:05 np0005486808 nova_compute[259627]: 2025-10-14 09:22:05.404 2 DEBUG nova.virt.hardware [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:22:05 np0005486808 nova_compute[259627]: 2025-10-14 09:22:05.405 2 DEBUG nova.virt.hardware [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:22:05 np0005486808 nova_compute[259627]: 2025-10-14 09:22:05.405 2 DEBUG nova.virt.hardware [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:22:05 np0005486808 nova_compute[259627]: 2025-10-14 09:22:05.406 2 DEBUG nova.virt.hardware [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:22:05 np0005486808 nova_compute[259627]: 2025-10-14 09:22:05.406 2 DEBUG nova.virt.hardware [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:22:05 np0005486808 nova_compute[259627]: 2025-10-14 09:22:05.407 2 DEBUG nova.virt.hardware [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:22:05 np0005486808 nova_compute[259627]: 2025-10-14 09:22:05.411 2 DEBUG oslo_concurrency.processutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:22:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:22:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4206756417' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:22:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:22:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4206756417' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:22:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:05.843 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:22:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:05.844 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 14 05:22:05 np0005486808 nova_compute[259627]: 2025-10-14 09:22:05.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:22:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/235126163' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:22:05 np0005486808 nova_compute[259627]: 2025-10-14 09:22:05.872 2 DEBUG oslo_concurrency.processutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:22:05 np0005486808 nova_compute[259627]: 2025-10-14 09:22:05.895 2 DEBUG nova.storage.rbd_utils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 50c83173-31e3-4f7a-8836-26e52affd0f2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:22:05 np0005486808 nova_compute[259627]: 2025-10-14 09:22:05.899 2 DEBUG oslo_concurrency.processutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:22:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2057: 305 pgs: 305 active+clean; 88 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:22:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:22:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1606223629' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:22:06 np0005486808 nova_compute[259627]: 2025-10-14 09:22:06.335 2 DEBUG oslo_concurrency.processutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:22:06 np0005486808 nova_compute[259627]: 2025-10-14 09:22:06.337 2 DEBUG nova.virt.libvirt.vif [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:21:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-805281293',display_name='tempest-TestNetworkBasicOps-server-805281293',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-805281293',id=117,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLemNDLm8oqFAwx6pW1SyO2V0WIcpPak25N5nC9IiUcuBV+L+dio1UtnHPJvvKiOY7HNqaGgptTx3l7dHlvUaMKtJM7efD9rBJOYA3Gu+LK7uF+l/NOZF2PoS0o9uk9l4A==',key_name='tempest-TestNetworkBasicOps-1122823922',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-ntom2zcq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:21:59Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=50c83173-31e3-4f7a-8836-26e52affd0f2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "address": "fa:16:3e:32:f2:3f", "network": {"id": "99e78054-f9f4-417c-a942-d4f9dd534ef7", "bridge": "br-int", "label": "tempest-network-smoke--1093348429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81977d79-f7", "ovs_interfaceid": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:22:06 np0005486808 nova_compute[259627]: 2025-10-14 09:22:06.338 2 DEBUG nova.network.os_vif_util [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "address": "fa:16:3e:32:f2:3f", "network": {"id": "99e78054-f9f4-417c-a942-d4f9dd534ef7", "bridge": "br-int", "label": "tempest-network-smoke--1093348429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81977d79-f7", "ovs_interfaceid": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:22:06 np0005486808 nova_compute[259627]: 2025-10-14 09:22:06.338 2 DEBUG nova.network.os_vif_util [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:f2:3f,bridge_name='br-int',has_traffic_filtering=True,id=81977d79-f754-42ba-8b3c-c4eb2f9651d2,network=Network(99e78054-f9f4-417c-a942-d4f9dd534ef7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81977d79-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:22:06 np0005486808 nova_compute[259627]: 2025-10-14 09:22:06.340 2 DEBUG nova.objects.instance [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'pci_devices' on Instance uuid 50c83173-31e3-4f7a-8836-26e52affd0f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:22:06 np0005486808 nova_compute[259627]: 2025-10-14 09:22:06.356 2 DEBUG nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:22:06 np0005486808 nova_compute[259627]:  <uuid>50c83173-31e3-4f7a-8836-26e52affd0f2</uuid>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:  <name>instance-00000075</name>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:22:06 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:      <nova:name>tempest-TestNetworkBasicOps-server-805281293</nova:name>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:22:05</nova:creationTime>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:22:06 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:        <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:        <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:        <nova:port uuid="81977d79-f754-42ba-8b3c-c4eb2f9651d2">
Oct 14 05:22:06 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:      <entry name="serial">50c83173-31e3-4f7a-8836-26e52affd0f2</entry>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:      <entry name="uuid">50c83173-31e3-4f7a-8836-26e52affd0f2</entry>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:22:06 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/50c83173-31e3-4f7a-8836-26e52affd0f2_disk">
Oct 14 05:22:06 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:22:06 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:22:06 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/50c83173-31e3-4f7a-8836-26e52affd0f2_disk.config">
Oct 14 05:22:06 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:22:06 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:22:06 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:32:f2:3f"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:      <target dev="tap81977d79-f7"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:22:06 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/50c83173-31e3-4f7a-8836-26e52affd0f2/console.log" append="off"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:22:06 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:22:06 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:22:06 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:22:06 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:22:06 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:22:06 np0005486808 nova_compute[259627]: 2025-10-14 09:22:06.358 2 DEBUG nova.compute.manager [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Preparing to wait for external event network-vif-plugged-81977d79-f754-42ba-8b3c-c4eb2f9651d2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:22:06 np0005486808 nova_compute[259627]: 2025-10-14 09:22:06.359 2 DEBUG oslo_concurrency.lockutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:22:06 np0005486808 nova_compute[259627]: 2025-10-14 09:22:06.359 2 DEBUG oslo_concurrency.lockutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:22:06 np0005486808 nova_compute[259627]: 2025-10-14 09:22:06.360 2 DEBUG oslo_concurrency.lockutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:22:06 np0005486808 nova_compute[259627]: 2025-10-14 09:22:06.361 2 DEBUG nova.virt.libvirt.vif [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:21:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-805281293',display_name='tempest-TestNetworkBasicOps-server-805281293',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-805281293',id=117,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLemNDLm8oqFAwx6pW1SyO2V0WIcpPak25N5nC9IiUcuBV+L+dio1UtnHPJvvKiOY7HNqaGgptTx3l7dHlvUaMKtJM7efD9rBJOYA3Gu+LK7uF+l/NOZF2PoS0o9uk9l4A==',key_name='tempest-TestNetworkBasicOps-1122823922',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-ntom2zcq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:21:59Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=50c83173-31e3-4f7a-8836-26e52affd0f2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "address": "fa:16:3e:32:f2:3f", "network": {"id": "99e78054-f9f4-417c-a942-d4f9dd534ef7", "bridge": "br-int", "label": "tempest-network-smoke--1093348429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81977d79-f7", "ovs_interfaceid": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:22:06 np0005486808 nova_compute[259627]: 2025-10-14 09:22:06.361 2 DEBUG nova.network.os_vif_util [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "address": "fa:16:3e:32:f2:3f", "network": {"id": "99e78054-f9f4-417c-a942-d4f9dd534ef7", "bridge": "br-int", "label": "tempest-network-smoke--1093348429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81977d79-f7", "ovs_interfaceid": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:22:06 np0005486808 nova_compute[259627]: 2025-10-14 09:22:06.362 2 DEBUG nova.network.os_vif_util [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:f2:3f,bridge_name='br-int',has_traffic_filtering=True,id=81977d79-f754-42ba-8b3c-c4eb2f9651d2,network=Network(99e78054-f9f4-417c-a942-d4f9dd534ef7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81977d79-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:22:06 np0005486808 nova_compute[259627]: 2025-10-14 09:22:06.362 2 DEBUG os_vif [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:f2:3f,bridge_name='br-int',has_traffic_filtering=True,id=81977d79-f754-42ba-8b3c-c4eb2f9651d2,network=Network(99e78054-f9f4-417c-a942-d4f9dd534ef7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81977d79-f7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:22:06 np0005486808 nova_compute[259627]: 2025-10-14 09:22:06.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:06 np0005486808 nova_compute[259627]: 2025-10-14 09:22:06.364 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:22:06 np0005486808 nova_compute[259627]: 2025-10-14 09:22:06.364 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:22:06 np0005486808 nova_compute[259627]: 2025-10-14 09:22:06.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:06 np0005486808 nova_compute[259627]: 2025-10-14 09:22:06.368 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap81977d79-f7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:22:06 np0005486808 nova_compute[259627]: 2025-10-14 09:22:06.368 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap81977d79-f7, col_values=(('external_ids', {'iface-id': '81977d79-f754-42ba-8b3c-c4eb2f9651d2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:32:f2:3f', 'vm-uuid': '50c83173-31e3-4f7a-8836-26e52affd0f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:22:06 np0005486808 NetworkManager[44885]: <info>  [1760433726.4094] manager: (tap81977d79-f7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/505)
Oct 14 05:22:06 np0005486808 nova_compute[259627]: 2025-10-14 09:22:06.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:06 np0005486808 nova_compute[259627]: 2025-10-14 09:22:06.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:22:06 np0005486808 nova_compute[259627]: 2025-10-14 09:22:06.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:06 np0005486808 nova_compute[259627]: 2025-10-14 09:22:06.417 2 INFO os_vif [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:f2:3f,bridge_name='br-int',has_traffic_filtering=True,id=81977d79-f754-42ba-8b3c-c4eb2f9651d2,network=Network(99e78054-f9f4-417c-a942-d4f9dd534ef7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81977d79-f7')#033[00m
Oct 14 05:22:06 np0005486808 nova_compute[259627]: 2025-10-14 09:22:06.470 2 DEBUG nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:22:06 np0005486808 nova_compute[259627]: 2025-10-14 09:22:06.470 2 DEBUG nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:22:06 np0005486808 nova_compute[259627]: 2025-10-14 09:22:06.470 2 DEBUG nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No VIF found with MAC fa:16:3e:32:f2:3f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:22:06 np0005486808 nova_compute[259627]: 2025-10-14 09:22:06.471 2 INFO nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Using config drive#033[00m
Oct 14 05:22:06 np0005486808 nova_compute[259627]: 2025-10-14 09:22:06.502 2 DEBUG nova.storage.rbd_utils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 50c83173-31e3-4f7a-8836-26e52affd0f2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:22:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:07.037 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:22:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:07.038 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:22:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:07.038 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:22:07 np0005486808 nova_compute[259627]: 2025-10-14 09:22:07.296 2 DEBUG nova.network.neutron [req-867bbe83-6936-459c-91de-816f103eaf58 req-9abd66d3-e251-474e-9ba0-283a3eed48cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Updated VIF entry in instance network info cache for port 81977d79-f754-42ba-8b3c-c4eb2f9651d2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:22:07 np0005486808 nova_compute[259627]: 2025-10-14 09:22:07.297 2 DEBUG nova.network.neutron [req-867bbe83-6936-459c-91de-816f103eaf58 req-9abd66d3-e251-474e-9ba0-283a3eed48cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Updating instance_info_cache with network_info: [{"id": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "address": "fa:16:3e:32:f2:3f", "network": {"id": "99e78054-f9f4-417c-a942-d4f9dd534ef7", "bridge": "br-int", "label": "tempest-network-smoke--1093348429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81977d79-f7", "ovs_interfaceid": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:22:07 np0005486808 nova_compute[259627]: 2025-10-14 09:22:07.307 2 INFO nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Creating config drive at /var/lib/nova/instances/50c83173-31e3-4f7a-8836-26e52affd0f2/disk.config#033[00m
Oct 14 05:22:07 np0005486808 nova_compute[259627]: 2025-10-14 09:22:07.315 2 DEBUG oslo_concurrency.processutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/50c83173-31e3-4f7a-8836-26e52affd0f2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg0_cipb2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:22:07 np0005486808 nova_compute[259627]: 2025-10-14 09:22:07.361 2 DEBUG oslo_concurrency.lockutils [req-867bbe83-6936-459c-91de-816f103eaf58 req-9abd66d3-e251-474e-9ba0-283a3eed48cc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:22:07 np0005486808 nova_compute[259627]: 2025-10-14 09:22:07.471 2 DEBUG oslo_concurrency.processutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/50c83173-31e3-4f7a-8836-26e52affd0f2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg0_cipb2" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:22:07 np0005486808 nova_compute[259627]: 2025-10-14 09:22:07.511 2 DEBUG nova.storage.rbd_utils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 50c83173-31e3-4f7a-8836-26e52affd0f2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:22:07 np0005486808 nova_compute[259627]: 2025-10-14 09:22:07.516 2 DEBUG oslo_concurrency.processutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/50c83173-31e3-4f7a-8836-26e52affd0f2/disk.config 50c83173-31e3-4f7a-8836-26e52affd0f2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:22:07 np0005486808 nova_compute[259627]: 2025-10-14 09:22:07.697 2 DEBUG oslo_concurrency.processutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/50c83173-31e3-4f7a-8836-26e52affd0f2/disk.config 50c83173-31e3-4f7a-8836-26e52affd0f2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:22:07 np0005486808 nova_compute[259627]: 2025-10-14 09:22:07.700 2 INFO nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Deleting local config drive /var/lib/nova/instances/50c83173-31e3-4f7a-8836-26e52affd0f2/disk.config because it was imported into RBD.#033[00m
Oct 14 05:22:07 np0005486808 kernel: tap81977d79-f7: entered promiscuous mode
Oct 14 05:22:07 np0005486808 NetworkManager[44885]: <info>  [1760433727.7694] manager: (tap81977d79-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/506)
Oct 14 05:22:07 np0005486808 systemd-udevd[377897]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:22:07 np0005486808 nova_compute[259627]: 2025-10-14 09:22:07.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:07 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:07Z|01238|binding|INFO|Claiming lport 81977d79-f754-42ba-8b3c-c4eb2f9651d2 for this chassis.
Oct 14 05:22:07 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:07Z|01239|binding|INFO|81977d79-f754-42ba-8b3c-c4eb2f9651d2: Claiming fa:16:3e:32:f2:3f 10.100.0.10
Oct 14 05:22:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:07.820 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:f2:3f 10.100.0.10'], port_security=['fa:16:3e:32:f2:3f 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '50c83173-31e3-4f7a-8836-26e52affd0f2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99e78054-f9f4-417c-a942-d4f9dd534ef7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1981aa60-63c9-49df-94e5-0874b5ab31e8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=547c8605-a609-4b00-82f5-2d938c7ab8e7, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=81977d79-f754-42ba-8b3c-c4eb2f9651d2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:22:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:07.822 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 81977d79-f754-42ba-8b3c-c4eb2f9651d2 in datapath 99e78054-f9f4-417c-a942-d4f9dd534ef7 bound to our chassis#033[00m
Oct 14 05:22:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:07.823 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 99e78054-f9f4-417c-a942-d4f9dd534ef7#033[00m
Oct 14 05:22:07 np0005486808 NetworkManager[44885]: <info>  [1760433727.8258] device (tap81977d79-f7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:22:07 np0005486808 NetworkManager[44885]: <info>  [1760433727.8282] device (tap81977d79-f7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:22:07 np0005486808 systemd-machined[214636]: New machine qemu-148-instance-00000075.
Oct 14 05:22:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:07.840 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5690eba1-49fe-4ff4-89d9-96a34560e7a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:22:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:07.841 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap99e78054-f1 in ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:22:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:07.844 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap99e78054-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:22:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:07.844 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a7e8d7a2-bace-46b1-91a6-3fd8fe653277]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:22:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:07.845 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ea1db3e2-7226-42ed-b024-ccfadfea424c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:22:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:07.863 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[f202e370-eda2-4f0a-8eb4-ea7b5ca9bd34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:22:07 np0005486808 systemd[1]: Started Virtual Machine qemu-148-instance-00000075.
Oct 14 05:22:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:07.892 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a58da886-63ba-4b1e-be16-3a54d29086bb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:22:07 np0005486808 nova_compute[259627]: 2025-10-14 09:22:07.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:07 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:07Z|01240|binding|INFO|Setting lport 81977d79-f754-42ba-8b3c-c4eb2f9651d2 ovn-installed in OVS
Oct 14 05:22:07 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:07Z|01241|binding|INFO|Setting lport 81977d79-f754-42ba-8b3c-c4eb2f9651d2 up in Southbound
Oct 14 05:22:07 np0005486808 nova_compute[259627]: 2025-10-14 09:22:07.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:07.936 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[18a0e0ca-cda3-4cb7-a469-8631b38d5259]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:22:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:07.944 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ff6d147e-185f-42a2-abcb-974d0a7829d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:22:07 np0005486808 NetworkManager[44885]: <info>  [1760433727.9461] manager: (tap99e78054-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/507)
Oct 14 05:22:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:22:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:07.992 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[40613ea7-4454-44c4-98a9-04816932f5f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:22:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:07.997 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[85fd5427-28bd-4881-a829-e03903df0572]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:22:08 np0005486808 NetworkManager[44885]: <info>  [1760433728.0253] device (tap99e78054-f0): carrier: link connected
Oct 14 05:22:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:08.033 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ff119bcc-e186-4fb1-a263-7330cc385a32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:22:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:08.058 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6195f786-24cf-431b-9c9e-893627b557e2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap99e78054-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:e4:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 356], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 750678, 'reachable_time': 42953, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377933, 'error': None, 'target': 'ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:22:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:08.079 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[866acf7e-3bdd-43b9-869e-dcfdc7fc07c0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe35:e4be'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 750678, 'tstamp': 750678}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 377934, 'error': None, 'target': 'ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:22:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:08.100 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dba65036-f907-432e-be5b-2501ae34605e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap99e78054-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:e4:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 356], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 750678, 'reachable_time': 42953, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 377935, 'error': None, 'target': 'ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:22:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:08.145 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4a9be413-3e4d-41da-b9cc-00eeb471eb2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:22:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2058: 305 pgs: 305 active+clean; 88 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:22:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:08.227 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[54ee4751-a059-408b-b370-f2a36ac57e63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:22:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:08.228 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99e78054-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:22:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:08.229 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:22:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:08.229 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap99e78054-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:22:08 np0005486808 nova_compute[259627]: 2025-10-14 09:22:08.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:08 np0005486808 NetworkManager[44885]: <info>  [1760433728.2319] manager: (tap99e78054-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/508)
Oct 14 05:22:08 np0005486808 kernel: tap99e78054-f0: entered promiscuous mode
Oct 14 05:22:08 np0005486808 nova_compute[259627]: 2025-10-14 09:22:08.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:08.238 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap99e78054-f0, col_values=(('external_ids', {'iface-id': '16a7cbd0-b25e-4461-8725-c92979b01f53'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:22:08 np0005486808 nova_compute[259627]: 2025-10-14 09:22:08.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:08 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:08Z|01242|binding|INFO|Releasing lport 16a7cbd0-b25e-4461-8725-c92979b01f53 from this chassis (sb_readonly=0)
Oct 14 05:22:08 np0005486808 nova_compute[259627]: 2025-10-14 09:22:08.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:08.264 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/99e78054-f9f4-417c-a942-d4f9dd534ef7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/99e78054-f9f4-417c-a942-d4f9dd534ef7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:22:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:08.265 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e5f36040-488e-47e2-b6c7-5e4b8574e796]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:22:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:08.266 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:22:08 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:22:08 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:22:08 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-99e78054-f9f4-417c-a942-d4f9dd534ef7
Oct 14 05:22:08 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:22:08 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:22:08 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:22:08 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/99e78054-f9f4-417c-a942-d4f9dd534ef7.pid.haproxy
Oct 14 05:22:08 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:22:08 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:22:08 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:22:08 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:22:08 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:22:08 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:22:08 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:22:08 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:22:08 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:22:08 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:22:08 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:22:08 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:22:08 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:22:08 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:22:08 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:22:08 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:22:08 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:22:08 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:22:08 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:22:08 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:22:08 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 99e78054-f9f4-417c-a942-d4f9dd534ef7
Oct 14 05:22:08 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:22:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:08.268 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7', 'env', 'PROCESS_TAG=haproxy-99e78054-f9f4-417c-a942-d4f9dd534ef7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/99e78054-f9f4-417c-a942-d4f9dd534ef7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:22:08 np0005486808 podman[378008]: 2025-10-14 09:22:08.688325408 +0000 UTC m=+0.040782946 container create 7fd39b421e00871df7487a1f16c9e47fcb8af61338e133bc4cd28ae2110e9faa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 14 05:22:08 np0005486808 systemd[1]: Started libpod-conmon-7fd39b421e00871df7487a1f16c9e47fcb8af61338e133bc4cd28ae2110e9faa.scope.
Oct 14 05:22:08 np0005486808 podman[378008]: 2025-10-14 09:22:08.666131671 +0000 UTC m=+0.018589239 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:22:08 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:22:08 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46afafd803e5df435ff3608e3076ec93ae7e5d3420b992909b0759f480792e61/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:22:08 np0005486808 podman[378008]: 2025-10-14 09:22:08.79471081 +0000 UTC m=+0.147168388 container init 7fd39b421e00871df7487a1f16c9e47fcb8af61338e133bc4cd28ae2110e9faa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009)
Oct 14 05:22:08 np0005486808 podman[378008]: 2025-10-14 09:22:08.801812235 +0000 UTC m=+0.154269794 container start 7fd39b421e00871df7487a1f16c9e47fcb8af61338e133bc4cd28ae2110e9faa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 14 05:22:08 np0005486808 neutron-haproxy-ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7[378024]: [NOTICE]   (378028) : New worker (378030) forked
Oct 14 05:22:08 np0005486808 neutron-haproxy-ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7[378024]: [NOTICE]   (378028) : Loading success.
Oct 14 05:22:08 np0005486808 nova_compute[259627]: 2025-10-14 09:22:08.893 2 DEBUG nova.compute.manager [req-d63805d2-11d2-487e-9252-a831d3f18566 req-56a3d114-24ef-4589-bb42-9a5aab2aaf68 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Received event network-vif-plugged-81977d79-f754-42ba-8b3c-c4eb2f9651d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:22:08 np0005486808 nova_compute[259627]: 2025-10-14 09:22:08.893 2 DEBUG oslo_concurrency.lockutils [req-d63805d2-11d2-487e-9252-a831d3f18566 req-56a3d114-24ef-4589-bb42-9a5aab2aaf68 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:22:08 np0005486808 nova_compute[259627]: 2025-10-14 09:22:08.894 2 DEBUG oslo_concurrency.lockutils [req-d63805d2-11d2-487e-9252-a831d3f18566 req-56a3d114-24ef-4589-bb42-9a5aab2aaf68 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:22:08 np0005486808 nova_compute[259627]: 2025-10-14 09:22:08.894 2 DEBUG oslo_concurrency.lockutils [req-d63805d2-11d2-487e-9252-a831d3f18566 req-56a3d114-24ef-4589-bb42-9a5aab2aaf68 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:22:08 np0005486808 nova_compute[259627]: 2025-10-14 09:22:08.895 2 DEBUG nova.compute.manager [req-d63805d2-11d2-487e-9252-a831d3f18566 req-56a3d114-24ef-4589-bb42-9a5aab2aaf68 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Processing event network-vif-plugged-81977d79-f754-42ba-8b3c-c4eb2f9651d2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:22:08 np0005486808 nova_compute[259627]: 2025-10-14 09:22:08.963 2 DEBUG nova.compute.manager [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:22:08 np0005486808 nova_compute[259627]: 2025-10-14 09:22:08.964 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433728.9628048, 50c83173-31e3-4f7a-8836-26e52affd0f2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:22:08 np0005486808 nova_compute[259627]: 2025-10-14 09:22:08.964 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] VM Started (Lifecycle Event)#033[00m
Oct 14 05:22:08 np0005486808 nova_compute[259627]: 2025-10-14 09:22:08.969 2 DEBUG nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:22:08 np0005486808 nova_compute[259627]: 2025-10-14 09:22:08.972 2 INFO nova.virt.libvirt.driver [-] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Instance spawned successfully.#033[00m
Oct 14 05:22:08 np0005486808 nova_compute[259627]: 2025-10-14 09:22:08.973 2 DEBUG nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:22:09 np0005486808 nova_compute[259627]: 2025-10-14 09:22:09.009 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:22:09 np0005486808 nova_compute[259627]: 2025-10-14 09:22:09.014 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:22:09 np0005486808 nova_compute[259627]: 2025-10-14 09:22:09.066 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:22:09 np0005486808 nova_compute[259627]: 2025-10-14 09:22:09.067 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433728.9638407, 50c83173-31e3-4f7a-8836-26e52affd0f2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:22:09 np0005486808 nova_compute[259627]: 2025-10-14 09:22:09.067 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:22:09 np0005486808 nova_compute[259627]: 2025-10-14 09:22:09.074 2 DEBUG nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:22:09 np0005486808 nova_compute[259627]: 2025-10-14 09:22:09.074 2 DEBUG nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:22:09 np0005486808 nova_compute[259627]: 2025-10-14 09:22:09.075 2 DEBUG nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:22:09 np0005486808 nova_compute[259627]: 2025-10-14 09:22:09.076 2 DEBUG nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:22:09 np0005486808 nova_compute[259627]: 2025-10-14 09:22:09.076 2 DEBUG nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:22:09 np0005486808 nova_compute[259627]: 2025-10-14 09:22:09.077 2 DEBUG nova.virt.libvirt.driver [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:22:09 np0005486808 nova_compute[259627]: 2025-10-14 09:22:09.083 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:22:09 np0005486808 nova_compute[259627]: 2025-10-14 09:22:09.087 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433728.9693036, 50c83173-31e3-4f7a-8836-26e52affd0f2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:22:09 np0005486808 nova_compute[259627]: 2025-10-14 09:22:09.087 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] VM Resumed (Lifecycle Event)
Oct 14 05:22:09 np0005486808 nova_compute[259627]: 2025-10-14 09:22:09.103 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:22:09 np0005486808 nova_compute[259627]: 2025-10-14 09:22:09.107 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 05:22:09 np0005486808 nova_compute[259627]: 2025-10-14 09:22:09.127 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 05:22:09 np0005486808 nova_compute[259627]: 2025-10-14 09:22:09.137 2 INFO nova.compute.manager [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Took 9.74 seconds to spawn the instance on the hypervisor.
Oct 14 05:22:09 np0005486808 nova_compute[259627]: 2025-10-14 09:22:09.137 2 DEBUG nova.compute.manager [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:22:09 np0005486808 nova_compute[259627]: 2025-10-14 09:22:09.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:22:09 np0005486808 nova_compute[259627]: 2025-10-14 09:22:09.198 2 INFO nova.compute.manager [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Took 10.70 seconds to build instance.
Oct 14 05:22:09 np0005486808 nova_compute[259627]: 2025-10-14 09:22:09.222 2 DEBUG oslo_concurrency.lockutils [None req-f028b124-3c9d-4fa0-b1d0-658fb12c5ae4 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.858s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:22:09 np0005486808 nova_compute[259627]: 2025-10-14 09:22:09.540 2 DEBUG oslo_concurrency.lockutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Acquiring lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:22:09 np0005486808 nova_compute[259627]: 2025-10-14 09:22:09.541 2 DEBUG oslo_concurrency.lockutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:22:09 np0005486808 nova_compute[259627]: 2025-10-14 09:22:09.559 2 DEBUG nova.compute.manager [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 05:22:09 np0005486808 nova_compute[259627]: 2025-10-14 09:22:09.646 2 DEBUG oslo_concurrency.lockutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:22:09 np0005486808 nova_compute[259627]: 2025-10-14 09:22:09.647 2 DEBUG oslo_concurrency.lockutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:22:09 np0005486808 nova_compute[259627]: 2025-10-14 09:22:09.657 2 DEBUG nova.virt.hardware [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 05:22:09 np0005486808 nova_compute[259627]: 2025-10-14 09:22:09.658 2 INFO nova.compute.claims [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Claim successful on node compute-0.ctlplane.example.com
Oct 14 05:22:09 np0005486808 nova_compute[259627]: 2025-10-14 09:22:09.823 2 DEBUG oslo_concurrency.processutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:22:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:09.846 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:22:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2059: 305 pgs: 305 active+clean; 88 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 978 KiB/s rd, 1.8 MiB/s wr, 66 op/s
Oct 14 05:22:10 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:22:10 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1529970925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:22:10 np0005486808 nova_compute[259627]: 2025-10-14 09:22:10.315 2 DEBUG oslo_concurrency.processutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:22:10 np0005486808 nova_compute[259627]: 2025-10-14 09:22:10.323 2 DEBUG nova.compute.provider_tree [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 05:22:10 np0005486808 nova_compute[259627]: 2025-10-14 09:22:10.351 2 DEBUG nova.scheduler.client.report [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 05:22:10 np0005486808 nova_compute[259627]: 2025-10-14 09:22:10.378 2 DEBUG oslo_concurrency.lockutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:22:10 np0005486808 nova_compute[259627]: 2025-10-14 09:22:10.379 2 DEBUG nova.compute.manager [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 05:22:10 np0005486808 nova_compute[259627]: 2025-10-14 09:22:10.431 2 DEBUG nova.compute.manager [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 05:22:10 np0005486808 nova_compute[259627]: 2025-10-14 09:22:10.431 2 DEBUG nova.network.neutron [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 05:22:10 np0005486808 nova_compute[259627]: 2025-10-14 09:22:10.456 2 INFO nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 05:22:10 np0005486808 nova_compute[259627]: 2025-10-14 09:22:10.477 2 DEBUG nova.compute.manager [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 05:22:10 np0005486808 nova_compute[259627]: 2025-10-14 09:22:10.566 2 DEBUG nova.compute.manager [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 05:22:10 np0005486808 nova_compute[259627]: 2025-10-14 09:22:10.567 2 DEBUG nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 05:22:10 np0005486808 nova_compute[259627]: 2025-10-14 09:22:10.567 2 INFO nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Creating image(s)
Oct 14 05:22:10 np0005486808 nova_compute[259627]: 2025-10-14 09:22:10.589 2 DEBUG nova.storage.rbd_utils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] rbd image 725ed629-f7d5-4a69-be5e-4cae3eef2e2e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:22:10 np0005486808 nova_compute[259627]: 2025-10-14 09:22:10.610 2 DEBUG nova.storage.rbd_utils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] rbd image 725ed629-f7d5-4a69-be5e-4cae3eef2e2e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:22:10 np0005486808 nova_compute[259627]: 2025-10-14 09:22:10.630 2 DEBUG nova.storage.rbd_utils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] rbd image 725ed629-f7d5-4a69-be5e-4cae3eef2e2e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:22:10 np0005486808 nova_compute[259627]: 2025-10-14 09:22:10.633 2 DEBUG oslo_concurrency.processutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:22:10 np0005486808 nova_compute[259627]: 2025-10-14 09:22:10.688 2 DEBUG nova.policy [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7629c3d96333470aa7d7ed5cabfc7e2c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9a71d13aebeb4969b1877a33505f3dc4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 05:22:10 np0005486808 nova_compute[259627]: 2025-10-14 09:22:10.700 2 DEBUG oslo_concurrency.processutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:22:10 np0005486808 nova_compute[259627]: 2025-10-14 09:22:10.701 2 DEBUG oslo_concurrency.lockutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:22:10 np0005486808 nova_compute[259627]: 2025-10-14 09:22:10.702 2 DEBUG oslo_concurrency.lockutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:22:10 np0005486808 nova_compute[259627]: 2025-10-14 09:22:10.702 2 DEBUG oslo_concurrency.lockutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:22:10 np0005486808 nova_compute[259627]: 2025-10-14 09:22:10.732 2 DEBUG nova.storage.rbd_utils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] rbd image 725ed629-f7d5-4a69-be5e-4cae3eef2e2e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:22:10 np0005486808 nova_compute[259627]: 2025-10-14 09:22:10.737 2 DEBUG oslo_concurrency.processutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 725ed629-f7d5-4a69-be5e-4cae3eef2e2e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:22:11 np0005486808 nova_compute[259627]: 2025-10-14 09:22:11.008 2 DEBUG nova.compute.manager [req-c12e7a1c-4e9e-4b4e-9eef-76380de5e5ef req-b0eff78d-28bb-45ed-a270-a20ddaaf0a25 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Received event network-vif-plugged-81977d79-f754-42ba-8b3c-c4eb2f9651d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:22:11 np0005486808 nova_compute[259627]: 2025-10-14 09:22:11.009 2 DEBUG oslo_concurrency.lockutils [req-c12e7a1c-4e9e-4b4e-9eef-76380de5e5ef req-b0eff78d-28bb-45ed-a270-a20ddaaf0a25 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:22:11 np0005486808 nova_compute[259627]: 2025-10-14 09:22:11.009 2 DEBUG oslo_concurrency.lockutils [req-c12e7a1c-4e9e-4b4e-9eef-76380de5e5ef req-b0eff78d-28bb-45ed-a270-a20ddaaf0a25 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:22:11 np0005486808 nova_compute[259627]: 2025-10-14 09:22:11.009 2 DEBUG oslo_concurrency.lockutils [req-c12e7a1c-4e9e-4b4e-9eef-76380de5e5ef req-b0eff78d-28bb-45ed-a270-a20ddaaf0a25 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:22:11 np0005486808 nova_compute[259627]: 2025-10-14 09:22:11.009 2 DEBUG nova.compute.manager [req-c12e7a1c-4e9e-4b4e-9eef-76380de5e5ef req-b0eff78d-28bb-45ed-a270-a20ddaaf0a25 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] No waiting events found dispatching network-vif-plugged-81977d79-f754-42ba-8b3c-c4eb2f9651d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 05:22:11 np0005486808 nova_compute[259627]: 2025-10-14 09:22:11.009 2 WARNING nova.compute.manager [req-c12e7a1c-4e9e-4b4e-9eef-76380de5e5ef req-b0eff78d-28bb-45ed-a270-a20ddaaf0a25 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Received unexpected event network-vif-plugged-81977d79-f754-42ba-8b3c-c4eb2f9651d2 for instance with vm_state active and task_state None.
Oct 14 05:22:11 np0005486808 nova_compute[259627]: 2025-10-14 09:22:11.043 2 DEBUG oslo_concurrency.processutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 725ed629-f7d5-4a69-be5e-4cae3eef2e2e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.306s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:22:11 np0005486808 nova_compute[259627]: 2025-10-14 09:22:11.099 2 DEBUG nova.storage.rbd_utils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] resizing rbd image 725ed629-f7d5-4a69-be5e-4cae3eef2e2e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 05:22:11 np0005486808 nova_compute[259627]: 2025-10-14 09:22:11.197 2 DEBUG nova.objects.instance [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lazy-loading 'migration_context' on Instance uuid 725ed629-f7d5-4a69-be5e-4cae3eef2e2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:22:11 np0005486808 nova_compute[259627]: 2025-10-14 09:22:11.215 2 DEBUG nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 05:22:11 np0005486808 nova_compute[259627]: 2025-10-14 09:22:11.215 2 DEBUG nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Ensure instance console log exists: /var/lib/nova/instances/725ed629-f7d5-4a69-be5e-4cae3eef2e2e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 05:22:11 np0005486808 nova_compute[259627]: 2025-10-14 09:22:11.216 2 DEBUG oslo_concurrency.lockutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:22:11 np0005486808 nova_compute[259627]: 2025-10-14 09:22:11.216 2 DEBUG oslo_concurrency.lockutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:22:11 np0005486808 nova_compute[259627]: 2025-10-14 09:22:11.217 2 DEBUG oslo_concurrency.lockutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:22:11 np0005486808 nova_compute[259627]: 2025-10-14 09:22:11.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:22:11 np0005486808 nova_compute[259627]: 2025-10-14 09:22:11.968 2 DEBUG nova.network.neutron [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Successfully created port: c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 05:22:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2060: 305 pgs: 305 active+clean; 88 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 738 KiB/s wr, 80 op/s
Oct 14 05:22:12 np0005486808 podman[378228]: 2025-10-14 09:22:12.694337002 +0000 UTC m=+0.099021891 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 05:22:12 np0005486808 NetworkManager[44885]: <info>  [1760433732.7308] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/509)
Oct 14 05:22:12 np0005486808 NetworkManager[44885]: <info>  [1760433732.7323] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/510)
Oct 14 05:22:12 np0005486808 nova_compute[259627]: 2025-10-14 09:22:12.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:22:12 np0005486808 podman[378227]: 2025-10-14 09:22:12.750666581 +0000 UTC m=+0.148992363 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009)
Oct 14 05:22:12 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:12Z|01243|binding|INFO|Releasing lport 16a7cbd0-b25e-4461-8725-c92979b01f53 from this chassis (sb_readonly=0)
Oct 14 05:22:12 np0005486808 nova_compute[259627]: 2025-10-14 09:22:12.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:22:12 np0005486808 nova_compute[259627]: 2025-10-14 09:22:12.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:22:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:22:13 np0005486808 nova_compute[259627]: 2025-10-14 09:22:13.109 2 DEBUG nova.compute.manager [req-ca58edd4-7c23-49a8-8d87-133a5dfe5ed8 req-c1db2ff9-bb50-4e87-938e-227a25f7a34b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Received event network-changed-81977d79-f754-42ba-8b3c-c4eb2f9651d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:22:13 np0005486808 nova_compute[259627]: 2025-10-14 09:22:13.109 2 DEBUG nova.compute.manager [req-ca58edd4-7c23-49a8-8d87-133a5dfe5ed8 req-c1db2ff9-bb50-4e87-938e-227a25f7a34b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Refreshing instance network info cache due to event network-changed-81977d79-f754-42ba-8b3c-c4eb2f9651d2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:22:13 np0005486808 nova_compute[259627]: 2025-10-14 09:22:13.110 2 DEBUG oslo_concurrency.lockutils [req-ca58edd4-7c23-49a8-8d87-133a5dfe5ed8 req-c1db2ff9-bb50-4e87-938e-227a25f7a34b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:22:13 np0005486808 nova_compute[259627]: 2025-10-14 09:22:13.110 2 DEBUG oslo_concurrency.lockutils [req-ca58edd4-7c23-49a8-8d87-133a5dfe5ed8 req-c1db2ff9-bb50-4e87-938e-227a25f7a34b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:22:13 np0005486808 nova_compute[259627]: 2025-10-14 09:22:13.111 2 DEBUG nova.network.neutron [req-ca58edd4-7c23-49a8-8d87-133a5dfe5ed8 req-c1db2ff9-bb50-4e87-938e-227a25f7a34b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Refreshing network info cache for port 81977d79-f754-42ba-8b3c-c4eb2f9651d2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:22:13 np0005486808 nova_compute[259627]: 2025-10-14 09:22:13.702 2 DEBUG nova.network.neutron [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Successfully updated port: c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:22:13 np0005486808 nova_compute[259627]: 2025-10-14 09:22:13.743 2 DEBUG oslo_concurrency.lockutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Acquiring lock "refresh_cache-725ed629-f7d5-4a69-be5e-4cae3eef2e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:22:13 np0005486808 nova_compute[259627]: 2025-10-14 09:22:13.744 2 DEBUG oslo_concurrency.lockutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Acquired lock "refresh_cache-725ed629-f7d5-4a69-be5e-4cae3eef2e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:22:13 np0005486808 nova_compute[259627]: 2025-10-14 09:22:13.744 2 DEBUG nova.network.neutron [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:22:13 np0005486808 nova_compute[259627]: 2025-10-14 09:22:13.849 2 DEBUG nova.compute.manager [req-34f24064-90b8-41a8-b1b0-d074335622a5 req-b0dfe08d-b5c0-4a2a-be26-a8bee21a7363 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received event network-changed-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:22:13 np0005486808 nova_compute[259627]: 2025-10-14 09:22:13.850 2 DEBUG nova.compute.manager [req-34f24064-90b8-41a8-b1b0-d074335622a5 req-b0dfe08d-b5c0-4a2a-be26-a8bee21a7363 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Refreshing instance network info cache due to event network-changed-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:22:13 np0005486808 nova_compute[259627]: 2025-10-14 09:22:13.850 2 DEBUG oslo_concurrency.lockutils [req-34f24064-90b8-41a8-b1b0-d074335622a5 req-b0dfe08d-b5c0-4a2a-be26-a8bee21a7363 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-725ed629-f7d5-4a69-be5e-4cae3eef2e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:22:14 np0005486808 nova_compute[259627]: 2025-10-14 09:22:14.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2061: 305 pgs: 305 active+clean; 88 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 13 KiB/s wr, 67 op/s
Oct 14 05:22:14 np0005486808 nova_compute[259627]: 2025-10-14 09:22:14.603 2 DEBUG nova.network.neutron [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:22:14 np0005486808 nova_compute[259627]: 2025-10-14 09:22:14.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:15 np0005486808 nova_compute[259627]: 2025-10-14 09:22:15.819 2 DEBUG nova.network.neutron [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Updating instance_info_cache with network_info: [{"id": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "address": "fa:16:3e:39:7a:fe", "network": {"id": "15c99679-b8ca-4f31-bf52-be40d7b4f023", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-427975149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "9a71d13aebeb4969b1877a33505f3dc4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc47025b4-90", "ovs_interfaceid": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:22:15 np0005486808 nova_compute[259627]: 2025-10-14 09:22:15.823 2 DEBUG nova.network.neutron [req-ca58edd4-7c23-49a8-8d87-133a5dfe5ed8 req-c1db2ff9-bb50-4e87-938e-227a25f7a34b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Updated VIF entry in instance network info cache for port 81977d79-f754-42ba-8b3c-c4eb2f9651d2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:22:15 np0005486808 nova_compute[259627]: 2025-10-14 09:22:15.824 2 DEBUG nova.network.neutron [req-ca58edd4-7c23-49a8-8d87-133a5dfe5ed8 req-c1db2ff9-bb50-4e87-938e-227a25f7a34b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Updating instance_info_cache with network_info: [{"id": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "address": "fa:16:3e:32:f2:3f", "network": {"id": "99e78054-f9f4-417c-a942-d4f9dd534ef7", "bridge": "br-int", "label": "tempest-network-smoke--1093348429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81977d79-f7", "ovs_interfaceid": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:22:15 np0005486808 nova_compute[259627]: 2025-10-14 09:22:15.850 2 DEBUG oslo_concurrency.lockutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Releasing lock "refresh_cache-725ed629-f7d5-4a69-be5e-4cae3eef2e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:22:15 np0005486808 nova_compute[259627]: 2025-10-14 09:22:15.850 2 DEBUG nova.compute.manager [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Instance network_info: |[{"id": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "address": "fa:16:3e:39:7a:fe", "network": {"id": "15c99679-b8ca-4f31-bf52-be40d7b4f023", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-427975149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "9a71d13aebeb4969b1877a33505f3dc4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc47025b4-90", "ovs_interfaceid": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:22:15 np0005486808 nova_compute[259627]: 2025-10-14 09:22:15.851 2 DEBUG oslo_concurrency.lockutils [req-34f24064-90b8-41a8-b1b0-d074335622a5 req-b0dfe08d-b5c0-4a2a-be26-a8bee21a7363 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-725ed629-f7d5-4a69-be5e-4cae3eef2e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:22:15 np0005486808 nova_compute[259627]: 2025-10-14 09:22:15.851 2 DEBUG nova.network.neutron [req-34f24064-90b8-41a8-b1b0-d074335622a5 req-b0dfe08d-b5c0-4a2a-be26-a8bee21a7363 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Refreshing network info cache for port c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:22:15 np0005486808 nova_compute[259627]: 2025-10-14 09:22:15.857 2 DEBUG nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Start _get_guest_xml network_info=[{"id": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "address": "fa:16:3e:39:7a:fe", "network": {"id": "15c99679-b8ca-4f31-bf52-be40d7b4f023", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-427975149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "9a71d13aebeb4969b1877a33505f3dc4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc47025b4-90", "ovs_interfaceid": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:22:15 np0005486808 nova_compute[259627]: 2025-10-14 09:22:15.859 2 DEBUG oslo_concurrency.lockutils [req-ca58edd4-7c23-49a8-8d87-133a5dfe5ed8 req-c1db2ff9-bb50-4e87-938e-227a25f7a34b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:22:15 np0005486808 nova_compute[259627]: 2025-10-14 09:22:15.864 2 WARNING nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:22:15 np0005486808 nova_compute[259627]: 2025-10-14 09:22:15.877 2 DEBUG nova.virt.libvirt.host [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:22:15 np0005486808 nova_compute[259627]: 2025-10-14 09:22:15.878 2 DEBUG nova.virt.libvirt.host [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:22:15 np0005486808 nova_compute[259627]: 2025-10-14 09:22:15.883 2 DEBUG nova.virt.libvirt.host [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:22:15 np0005486808 nova_compute[259627]: 2025-10-14 09:22:15.884 2 DEBUG nova.virt.libvirt.host [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:22:15 np0005486808 nova_compute[259627]: 2025-10-14 09:22:15.885 2 DEBUG nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:22:15 np0005486808 nova_compute[259627]: 2025-10-14 09:22:15.885 2 DEBUG nova.virt.hardware [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:22:15 np0005486808 nova_compute[259627]: 2025-10-14 09:22:15.886 2 DEBUG nova.virt.hardware [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:22:15 np0005486808 nova_compute[259627]: 2025-10-14 09:22:15.886 2 DEBUG nova.virt.hardware [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:22:15 np0005486808 nova_compute[259627]: 2025-10-14 09:22:15.887 2 DEBUG nova.virt.hardware [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:22:15 np0005486808 nova_compute[259627]: 2025-10-14 09:22:15.887 2 DEBUG nova.virt.hardware [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:22:15 np0005486808 nova_compute[259627]: 2025-10-14 09:22:15.888 2 DEBUG nova.virt.hardware [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:22:15 np0005486808 nova_compute[259627]: 2025-10-14 09:22:15.888 2 DEBUG nova.virt.hardware [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:22:15 np0005486808 nova_compute[259627]: 2025-10-14 09:22:15.889 2 DEBUG nova.virt.hardware [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:22:15 np0005486808 nova_compute[259627]: 2025-10-14 09:22:15.889 2 DEBUG nova.virt.hardware [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:22:15 np0005486808 nova_compute[259627]: 2025-10-14 09:22:15.890 2 DEBUG nova.virt.hardware [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:22:15 np0005486808 nova_compute[259627]: 2025-10-14 09:22:15.891 2 DEBUG nova.virt.hardware [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:22:15 np0005486808 nova_compute[259627]: 2025-10-14 09:22:15.896 2 DEBUG oslo_concurrency.processutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:22:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2062: 305 pgs: 305 active+clean; 134 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 14 05:22:16 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:22:16 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2665617067' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:22:16 np0005486808 nova_compute[259627]: 2025-10-14 09:22:16.402 2 DEBUG oslo_concurrency.processutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:22:16 np0005486808 nova_compute[259627]: 2025-10-14 09:22:16.426 2 DEBUG nova.storage.rbd_utils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] rbd image 725ed629-f7d5-4a69-be5e-4cae3eef2e2e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:22:16 np0005486808 nova_compute[259627]: 2025-10-14 09:22:16.432 2 DEBUG oslo_concurrency.processutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:22:16 np0005486808 nova_compute[259627]: 2025-10-14 09:22:16.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:16 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:22:16 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2120005278' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:22:16 np0005486808 nova_compute[259627]: 2025-10-14 09:22:16.865 2 DEBUG oslo_concurrency.processutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:22:16 np0005486808 nova_compute[259627]: 2025-10-14 09:22:16.867 2 DEBUG nova.virt.libvirt.vif [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:22:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1481398530',display_name='tempest-TestServerAdvancedOps-server-1481398530',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1481398530',id=118,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9a71d13aebeb4969b1877a33505f3dc4',ramdisk_id='',reservation_id='r-okkb6j7x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-20904479',owner_user_name='tempest-TestServerAdvancedOps-209
04479-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:22:10Z,user_data=None,user_id='7629c3d96333470aa7d7ed5cabfc7e2c',uuid=725ed629-f7d5-4a69-be5e-4cae3eef2e2e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "address": "fa:16:3e:39:7a:fe", "network": {"id": "15c99679-b8ca-4f31-bf52-be40d7b4f023", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-427975149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "9a71d13aebeb4969b1877a33505f3dc4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc47025b4-90", "ovs_interfaceid": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:22:16 np0005486808 nova_compute[259627]: 2025-10-14 09:22:16.867 2 DEBUG nova.network.os_vif_util [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Converting VIF {"id": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "address": "fa:16:3e:39:7a:fe", "network": {"id": "15c99679-b8ca-4f31-bf52-be40d7b4f023", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-427975149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "9a71d13aebeb4969b1877a33505f3dc4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc47025b4-90", "ovs_interfaceid": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:22:16 np0005486808 nova_compute[259627]: 2025-10-14 09:22:16.868 2 DEBUG nova.network.os_vif_util [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:7a:fe,bridge_name='br-int',has_traffic_filtering=True,id=c47025b4-9051-4cc4-9fb7-70cd59d6c5c5,network=Network(15c99679-b8ca-4f31-bf52-be40d7b4f023),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc47025b4-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:22:16 np0005486808 nova_compute[259627]: 2025-10-14 09:22:16.870 2 DEBUG nova.objects.instance [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 725ed629-f7d5-4a69-be5e-4cae3eef2e2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:22:16 np0005486808 nova_compute[259627]: 2025-10-14 09:22:16.885 2 DEBUG nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:22:16 np0005486808 nova_compute[259627]:  <uuid>725ed629-f7d5-4a69-be5e-4cae3eef2e2e</uuid>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:  <name>instance-00000076</name>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:22:16 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:      <nova:name>tempest-TestServerAdvancedOps-server-1481398530</nova:name>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:22:15</nova:creationTime>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:22:16 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:        <nova:user uuid="7629c3d96333470aa7d7ed5cabfc7e2c">tempest-TestServerAdvancedOps-20904479-project-member</nova:user>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:        <nova:project uuid="9a71d13aebeb4969b1877a33505f3dc4">tempest-TestServerAdvancedOps-20904479</nova:project>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:        <nova:port uuid="c47025b4-9051-4cc4-9fb7-70cd59d6c5c5">
Oct 14 05:22:16 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:      <entry name="serial">725ed629-f7d5-4a69-be5e-4cae3eef2e2e</entry>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:      <entry name="uuid">725ed629-f7d5-4a69-be5e-4cae3eef2e2e</entry>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:22:16 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/725ed629-f7d5-4a69-be5e-4cae3eef2e2e_disk">
Oct 14 05:22:16 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:22:16 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:22:16 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/725ed629-f7d5-4a69-be5e-4cae3eef2e2e_disk.config">
Oct 14 05:22:16 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:22:16 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:22:16 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:39:7a:fe"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:      <target dev="tapc47025b4-90"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:22:16 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/725ed629-f7d5-4a69-be5e-4cae3eef2e2e/console.log" append="off"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:22:16 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:22:16 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:22:16 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:22:16 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:22:16 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:22:16 np0005486808 nova_compute[259627]: 2025-10-14 09:22:16.887 2 DEBUG nova.compute.manager [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Preparing to wait for external event network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:22:16 np0005486808 nova_compute[259627]: 2025-10-14 09:22:16.887 2 DEBUG oslo_concurrency.lockutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Acquiring lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:22:16 np0005486808 nova_compute[259627]: 2025-10-14 09:22:16.888 2 DEBUG oslo_concurrency.lockutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:22:16 np0005486808 nova_compute[259627]: 2025-10-14 09:22:16.888 2 DEBUG oslo_concurrency.lockutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:22:16 np0005486808 nova_compute[259627]: 2025-10-14 09:22:16.889 2 DEBUG nova.virt.libvirt.vif [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:22:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1481398530',display_name='tempest-TestServerAdvancedOps-server-1481398530',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1481398530',id=118,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9a71d13aebeb4969b1877a33505f3dc4',ramdisk_id='',reservation_id='r-okkb6j7x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-20904479',owner_user_name='tempest-TestServerAdvancedOps-20904479-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:22:10Z,user_data=None,user_id='7629c3d96333470aa7d7ed5cabfc7e2c',uuid=725ed629-f7d5-4a69-be5e-4cae3eef2e2e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "address": "fa:16:3e:39:7a:fe", "network": {"id": "15c99679-b8ca-4f31-bf52-be40d7b4f023", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-427975149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "9a71d13aebeb4969b1877a33505f3dc4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc47025b4-90", "ovs_interfaceid": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:22:16 np0005486808 nova_compute[259627]: 2025-10-14 09:22:16.889 2 DEBUG nova.network.os_vif_util [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Converting VIF {"id": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "address": "fa:16:3e:39:7a:fe", "network": {"id": "15c99679-b8ca-4f31-bf52-be40d7b4f023", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-427975149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "9a71d13aebeb4969b1877a33505f3dc4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc47025b4-90", "ovs_interfaceid": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:22:16 np0005486808 nova_compute[259627]: 2025-10-14 09:22:16.890 2 DEBUG nova.network.os_vif_util [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:7a:fe,bridge_name='br-int',has_traffic_filtering=True,id=c47025b4-9051-4cc4-9fb7-70cd59d6c5c5,network=Network(15c99679-b8ca-4f31-bf52-be40d7b4f023),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc47025b4-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:22:16 np0005486808 nova_compute[259627]: 2025-10-14 09:22:16.890 2 DEBUG os_vif [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:7a:fe,bridge_name='br-int',has_traffic_filtering=True,id=c47025b4-9051-4cc4-9fb7-70cd59d6c5c5,network=Network(15c99679-b8ca-4f31-bf52-be40d7b4f023),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc47025b4-90') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:22:16 np0005486808 nova_compute[259627]: 2025-10-14 09:22:16.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:16 np0005486808 nova_compute[259627]: 2025-10-14 09:22:16.891 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:22:16 np0005486808 nova_compute[259627]: 2025-10-14 09:22:16.892 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:22:16 np0005486808 nova_compute[259627]: 2025-10-14 09:22:16.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:16 np0005486808 nova_compute[259627]: 2025-10-14 09:22:16.895 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc47025b4-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:22:16 np0005486808 nova_compute[259627]: 2025-10-14 09:22:16.896 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc47025b4-90, col_values=(('external_ids', {'iface-id': 'c47025b4-9051-4cc4-9fb7-70cd59d6c5c5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:39:7a:fe', 'vm-uuid': '725ed629-f7d5-4a69-be5e-4cae3eef2e2e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:22:16 np0005486808 nova_compute[259627]: 2025-10-14 09:22:16.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:16 np0005486808 NetworkManager[44885]: <info>  [1760433736.8985] manager: (tapc47025b4-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/511)
Oct 14 05:22:16 np0005486808 nova_compute[259627]: 2025-10-14 09:22:16.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:22:16 np0005486808 nova_compute[259627]: 2025-10-14 09:22:16.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:16 np0005486808 nova_compute[259627]: 2025-10-14 09:22:16.905 2 INFO os_vif [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:7a:fe,bridge_name='br-int',has_traffic_filtering=True,id=c47025b4-9051-4cc4-9fb7-70cd59d6c5c5,network=Network(15c99679-b8ca-4f31-bf52-be40d7b4f023),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc47025b4-90')#033[00m
Oct 14 05:22:16 np0005486808 nova_compute[259627]: 2025-10-14 09:22:16.959 2 DEBUG nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:22:16 np0005486808 nova_compute[259627]: 2025-10-14 09:22:16.960 2 DEBUG nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:22:16 np0005486808 nova_compute[259627]: 2025-10-14 09:22:16.960 2 DEBUG nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] No VIF found with MAC fa:16:3e:39:7a:fe, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:22:16 np0005486808 nova_compute[259627]: 2025-10-14 09:22:16.960 2 INFO nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Using config drive#033[00m
Oct 14 05:22:16 np0005486808 nova_compute[259627]: 2025-10-14 09:22:16.979 2 DEBUG nova.storage.rbd_utils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] rbd image 725ed629-f7d5-4a69-be5e-4cae3eef2e2e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:22:17 np0005486808 nova_compute[259627]: 2025-10-14 09:22:17.408 2 DEBUG nova.network.neutron [req-34f24064-90b8-41a8-b1b0-d074335622a5 req-b0dfe08d-b5c0-4a2a-be26-a8bee21a7363 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Updated VIF entry in instance network info cache for port c47025b4-9051-4cc4-9fb7-70cd59d6c5c5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:22:17 np0005486808 nova_compute[259627]: 2025-10-14 09:22:17.409 2 DEBUG nova.network.neutron [req-34f24064-90b8-41a8-b1b0-d074335622a5 req-b0dfe08d-b5c0-4a2a-be26-a8bee21a7363 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Updating instance_info_cache with network_info: [{"id": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "address": "fa:16:3e:39:7a:fe", "network": {"id": "15c99679-b8ca-4f31-bf52-be40d7b4f023", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-427975149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "9a71d13aebeb4969b1877a33505f3dc4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc47025b4-90", "ovs_interfaceid": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:22:17 np0005486808 nova_compute[259627]: 2025-10-14 09:22:17.427 2 DEBUG oslo_concurrency.lockutils [req-34f24064-90b8-41a8-b1b0-d074335622a5 req-b0dfe08d-b5c0-4a2a-be26-a8bee21a7363 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-725ed629-f7d5-4a69-be5e-4cae3eef2e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:22:17 np0005486808 nova_compute[259627]: 2025-10-14 09:22:17.471 2 INFO nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Creating config drive at /var/lib/nova/instances/725ed629-f7d5-4a69-be5e-4cae3eef2e2e/disk.config#033[00m
Oct 14 05:22:17 np0005486808 nova_compute[259627]: 2025-10-14 09:22:17.476 2 DEBUG oslo_concurrency.processutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/725ed629-f7d5-4a69-be5e-4cae3eef2e2e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_kwluz70 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:22:17 np0005486808 nova_compute[259627]: 2025-10-14 09:22:17.641 2 DEBUG oslo_concurrency.processutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/725ed629-f7d5-4a69-be5e-4cae3eef2e2e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_kwluz70" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:22:17 np0005486808 nova_compute[259627]: 2025-10-14 09:22:17.683 2 DEBUG nova.storage.rbd_utils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] rbd image 725ed629-f7d5-4a69-be5e-4cae3eef2e2e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:22:17 np0005486808 nova_compute[259627]: 2025-10-14 09:22:17.687 2 DEBUG oslo_concurrency.processutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/725ed629-f7d5-4a69-be5e-4cae3eef2e2e/disk.config 725ed629-f7d5-4a69-be5e-4cae3eef2e2e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:22:17 np0005486808 nova_compute[259627]: 2025-10-14 09:22:17.897 2 DEBUG oslo_concurrency.processutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/725ed629-f7d5-4a69-be5e-4cae3eef2e2e/disk.config 725ed629-f7d5-4a69-be5e-4cae3eef2e2e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.209s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:22:17 np0005486808 nova_compute[259627]: 2025-10-14 09:22:17.899 2 INFO nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Deleting local config drive /var/lib/nova/instances/725ed629-f7d5-4a69-be5e-4cae3eef2e2e/disk.config because it was imported into RBD.#033[00m
Oct 14 05:22:17 np0005486808 kernel: tapc47025b4-90: entered promiscuous mode
Oct 14 05:22:17 np0005486808 NetworkManager[44885]: <info>  [1760433737.9502] manager: (tapc47025b4-90): new Tun device (/org/freedesktop/NetworkManager/Devices/512)
Oct 14 05:22:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:22:17 np0005486808 nova_compute[259627]: 2025-10-14 09:22:17.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:17 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:17Z|01244|binding|INFO|Claiming lport c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 for this chassis.
Oct 14 05:22:17 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:17Z|01245|binding|INFO|c47025b4-9051-4cc4-9fb7-70cd59d6c5c5: Claiming fa:16:3e:39:7a:fe 10.100.0.13
Oct 14 05:22:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:17.970 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:7a:fe 10.100.0.13'], port_security=['fa:16:3e:39:7a:fe 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '725ed629-f7d5-4a69-be5e-4cae3eef2e2e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15c99679-b8ca-4f31-bf52-be40d7b4f023', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9a71d13aebeb4969b1877a33505f3dc4', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'eaea1bee-fc2e-4983-8a2c-80f8f52b9e8d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=27060d93-efe1-4dd7-9738-9f62e5f1629d, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=c47025b4-9051-4cc4-9fb7-70cd59d6c5c5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:22:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:17.971 162547 INFO neutron.agent.ovn.metadata.agent [-] Port c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 in datapath 15c99679-b8ca-4f31-bf52-be40d7b4f023 bound to our chassis#033[00m
Oct 14 05:22:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:17.972 162547 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 15c99679-b8ca-4f31-bf52-be40d7b4f023 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct 14 05:22:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:17.973 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[002bf8fb-8b0a-48bf-99f4-d51bdaeb31ce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:22:17 np0005486808 systemd-udevd[378407]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:22:18 np0005486808 NetworkManager[44885]: <info>  [1760433738.0006] device (tapc47025b4-90): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:22:18 np0005486808 NetworkManager[44885]: <info>  [1760433738.0016] device (tapc47025b4-90): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:22:18 np0005486808 systemd-machined[214636]: New machine qemu-149-instance-00000076.
Oct 14 05:22:18 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:18Z|01246|binding|INFO|Setting lport c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 ovn-installed in OVS
Oct 14 05:22:18 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:18Z|01247|binding|INFO|Setting lport c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 up in Southbound
Oct 14 05:22:18 np0005486808 nova_compute[259627]: 2025-10-14 09:22:17.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:18 np0005486808 nova_compute[259627]: 2025-10-14 09:22:18.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:18 np0005486808 systemd[1]: Started Virtual Machine qemu-149-instance-00000076.
Oct 14 05:22:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2063: 305 pgs: 305 active+clean; 134 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 14 05:22:18 np0005486808 nova_compute[259627]: 2025-10-14 09:22:18.842 2 DEBUG nova.compute.manager [req-5f1980b0-9ad1-404f-b16a-f825530ff255 req-f709dd0b-73b6-48a7-abb6-7d4aa5162aba 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received event network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:22:18 np0005486808 nova_compute[259627]: 2025-10-14 09:22:18.842 2 DEBUG oslo_concurrency.lockutils [req-5f1980b0-9ad1-404f-b16a-f825530ff255 req-f709dd0b-73b6-48a7-abb6-7d4aa5162aba 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:22:18 np0005486808 nova_compute[259627]: 2025-10-14 09:22:18.842 2 DEBUG oslo_concurrency.lockutils [req-5f1980b0-9ad1-404f-b16a-f825530ff255 req-f709dd0b-73b6-48a7-abb6-7d4aa5162aba 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:22:18 np0005486808 nova_compute[259627]: 2025-10-14 09:22:18.842 2 DEBUG oslo_concurrency.lockutils [req-5f1980b0-9ad1-404f-b16a-f825530ff255 req-f709dd0b-73b6-48a7-abb6-7d4aa5162aba 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:22:18 np0005486808 nova_compute[259627]: 2025-10-14 09:22:18.843 2 DEBUG nova.compute.manager [req-5f1980b0-9ad1-404f-b16a-f825530ff255 req-f709dd0b-73b6-48a7-abb6-7d4aa5162aba 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Processing event network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:22:19 np0005486808 nova_compute[259627]: 2025-10-14 09:22:19.053 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433739.0525393, 725ed629-f7d5-4a69-be5e-4cae3eef2e2e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:22:19 np0005486808 nova_compute[259627]: 2025-10-14 09:22:19.053 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] VM Started (Lifecycle Event)#033[00m
Oct 14 05:22:19 np0005486808 nova_compute[259627]: 2025-10-14 09:22:19.056 2 DEBUG nova.compute.manager [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:22:19 np0005486808 nova_compute[259627]: 2025-10-14 09:22:19.060 2 DEBUG nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:22:19 np0005486808 nova_compute[259627]: 2025-10-14 09:22:19.064 2 INFO nova.virt.libvirt.driver [-] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Instance spawned successfully.#033[00m
Oct 14 05:22:19 np0005486808 nova_compute[259627]: 2025-10-14 09:22:19.064 2 DEBUG nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:22:19 np0005486808 nova_compute[259627]: 2025-10-14 09:22:19.087 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:22:19 np0005486808 nova_compute[259627]: 2025-10-14 09:22:19.092 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:22:19 np0005486808 nova_compute[259627]: 2025-10-14 09:22:19.096 2 DEBUG nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:22:19 np0005486808 nova_compute[259627]: 2025-10-14 09:22:19.097 2 DEBUG nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:22:19 np0005486808 nova_compute[259627]: 2025-10-14 09:22:19.097 2 DEBUG nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:22:19 np0005486808 nova_compute[259627]: 2025-10-14 09:22:19.097 2 DEBUG nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:22:19 np0005486808 nova_compute[259627]: 2025-10-14 09:22:19.098 2 DEBUG nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:22:19 np0005486808 nova_compute[259627]: 2025-10-14 09:22:19.098 2 DEBUG nova.virt.libvirt.driver [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:22:19 np0005486808 nova_compute[259627]: 2025-10-14 09:22:19.133 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:22:19 np0005486808 nova_compute[259627]: 2025-10-14 09:22:19.134 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433739.0560546, 725ed629-f7d5-4a69-be5e-4cae3eef2e2e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:22:19 np0005486808 nova_compute[259627]: 2025-10-14 09:22:19.134 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:22:19 np0005486808 nova_compute[259627]: 2025-10-14 09:22:19.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:19 np0005486808 nova_compute[259627]: 2025-10-14 09:22:19.168 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:22:19 np0005486808 nova_compute[259627]: 2025-10-14 09:22:19.172 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433739.0601325, 725ed629-f7d5-4a69-be5e-4cae3eef2e2e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:22:19 np0005486808 nova_compute[259627]: 2025-10-14 09:22:19.172 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:22:19 np0005486808 nova_compute[259627]: 2025-10-14 09:22:19.178 2 INFO nova.compute.manager [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Took 8.61 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:22:19 np0005486808 nova_compute[259627]: 2025-10-14 09:22:19.179 2 DEBUG nova.compute.manager [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:22:19 np0005486808 nova_compute[259627]: 2025-10-14 09:22:19.189 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:22:19 np0005486808 nova_compute[259627]: 2025-10-14 09:22:19.192 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:22:19 np0005486808 nova_compute[259627]: 2025-10-14 09:22:19.215 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:22:19 np0005486808 nova_compute[259627]: 2025-10-14 09:22:19.230 2 INFO nova.compute.manager [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Took 9.62 seconds to build instance.#033[00m
Oct 14 05:22:19 np0005486808 nova_compute[259627]: 2025-10-14 09:22:19.247 2 DEBUG oslo_concurrency.lockutils [None req-43575c03-686e-45b6-9b5c-72bb0a85db9f 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:22:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2064: 305 pgs: 305 active+clean; 146 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.0 MiB/s wr, 140 op/s
Oct 14 05:22:20 np0005486808 nova_compute[259627]: 2025-10-14 09:22:20.959 2 DEBUG nova.compute.manager [req-0374ea22-21f9-4813-b823-e221ded226e0 req-2101ff57-4b4a-4332-aea5-1226ff6eeef4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received event network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:22:20 np0005486808 nova_compute[259627]: 2025-10-14 09:22:20.960 2 DEBUG oslo_concurrency.lockutils [req-0374ea22-21f9-4813-b823-e221ded226e0 req-2101ff57-4b4a-4332-aea5-1226ff6eeef4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:22:20 np0005486808 nova_compute[259627]: 2025-10-14 09:22:20.961 2 DEBUG oslo_concurrency.lockutils [req-0374ea22-21f9-4813-b823-e221ded226e0 req-2101ff57-4b4a-4332-aea5-1226ff6eeef4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:22:20 np0005486808 nova_compute[259627]: 2025-10-14 09:22:20.961 2 DEBUG oslo_concurrency.lockutils [req-0374ea22-21f9-4813-b823-e221ded226e0 req-2101ff57-4b4a-4332-aea5-1226ff6eeef4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:22:20 np0005486808 nova_compute[259627]: 2025-10-14 09:22:20.961 2 DEBUG nova.compute.manager [req-0374ea22-21f9-4813-b823-e221ded226e0 req-2101ff57-4b4a-4332-aea5-1226ff6eeef4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] No waiting events found dispatching network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:22:20 np0005486808 nova_compute[259627]: 2025-10-14 09:22:20.962 2 WARNING nova.compute.manager [req-0374ea22-21f9-4813-b823-e221ded226e0 req-2101ff57-4b4a-4332-aea5-1226ff6eeef4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received unexpected event network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:22:21 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:21Z|00129|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:32:f2:3f 10.100.0.10
Oct 14 05:22:21 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:21Z|00130|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:32:f2:3f 10.100.0.10
Oct 14 05:22:21 np0005486808 nova_compute[259627]: 2025-10-14 09:22:21.869 2 DEBUG oslo_concurrency.lockutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Acquiring lock "6810b29b-088f-441b-8a6a-02eaafada0c5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:22:21 np0005486808 nova_compute[259627]: 2025-10-14 09:22:21.870 2 DEBUG oslo_concurrency.lockutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "6810b29b-088f-441b-8a6a-02eaafada0c5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:22:21 np0005486808 nova_compute[259627]: 2025-10-14 09:22:21.890 2 DEBUG nova.compute.manager [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:22:21 np0005486808 nova_compute[259627]: 2025-10-14 09:22:21.897 2 DEBUG nova.objects.instance [None req-664a07a7-66ad-437a-a9fd-81f3cdee4e83 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 725ed629-f7d5-4a69-be5e-4cae3eef2e2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:22:21 np0005486808 nova_compute[259627]: 2025-10-14 09:22:21.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:21 np0005486808 nova_compute[259627]: 2025-10-14 09:22:21.924 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433741.923434, 725ed629-f7d5-4a69-be5e-4cae3eef2e2e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:22:21 np0005486808 nova_compute[259627]: 2025-10-14 09:22:21.924 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:22:21 np0005486808 nova_compute[259627]: 2025-10-14 09:22:21.954 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:22:21 np0005486808 nova_compute[259627]: 2025-10-14 09:22:21.965 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:22:21 np0005486808 nova_compute[259627]: 2025-10-14 09:22:21.983 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Oct 14 05:22:21 np0005486808 nova_compute[259627]: 2025-10-14 09:22:21.987 2 DEBUG oslo_concurrency.lockutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:22:21 np0005486808 nova_compute[259627]: 2025-10-14 09:22:21.988 2 DEBUG oslo_concurrency.lockutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:22:21 np0005486808 nova_compute[259627]: 2025-10-14 09:22:21.999 2 DEBUG nova.virt.hardware [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:22:22 np0005486808 nova_compute[259627]: 2025-10-14 09:22:22.000 2 INFO nova.compute.claims [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:22:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2065: 305 pgs: 305 active+clean; 156 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.8 MiB/s wr, 149 op/s
Oct 14 05:22:22 np0005486808 nova_compute[259627]: 2025-10-14 09:22:22.181 2 DEBUG oslo_concurrency.processutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:22:22 np0005486808 kernel: tapc47025b4-90 (unregistering): left promiscuous mode
Oct 14 05:22:22 np0005486808 NetworkManager[44885]: <info>  [1760433742.2978] device (tapc47025b4-90): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:22:22 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:22Z|01248|binding|INFO|Releasing lport c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 from this chassis (sb_readonly=0)
Oct 14 05:22:22 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:22Z|01249|binding|INFO|Setting lport c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 down in Southbound
Oct 14 05:22:22 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:22Z|01250|binding|INFO|Removing iface tapc47025b4-90 ovn-installed in OVS
Oct 14 05:22:22 np0005486808 nova_compute[259627]: 2025-10-14 09:22:22.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:22 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:22.354 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:7a:fe 10.100.0.13'], port_security=['fa:16:3e:39:7a:fe 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '725ed629-f7d5-4a69-be5e-4cae3eef2e2e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15c99679-b8ca-4f31-bf52-be40d7b4f023', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9a71d13aebeb4969b1877a33505f3dc4', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'eaea1bee-fc2e-4983-8a2c-80f8f52b9e8d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=27060d93-efe1-4dd7-9738-9f62e5f1629d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=c47025b4-9051-4cc4-9fb7-70cd59d6c5c5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:22:22 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:22.355 162547 INFO neutron.agent.ovn.metadata.agent [-] Port c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 in datapath 15c99679-b8ca-4f31-bf52-be40d7b4f023 unbound from our chassis#033[00m
Oct 14 05:22:22 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:22.357 162547 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 15c99679-b8ca-4f31-bf52-be40d7b4f023 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct 14 05:22:22 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:22.357 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[870e9281-a935-4a86-8ef8-3399b886042a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:22:22 np0005486808 nova_compute[259627]: 2025-10-14 09:22:22.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:22 np0005486808 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d00000076.scope: Deactivated successfully.
Oct 14 05:22:22 np0005486808 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d00000076.scope: Consumed 4.041s CPU time.
Oct 14 05:22:22 np0005486808 systemd-machined[214636]: Machine qemu-149-instance-00000076 terminated.
Oct 14 05:22:22 np0005486808 nova_compute[259627]: 2025-10-14 09:22:22.478 2 DEBUG nova.compute.manager [None req-664a07a7-66ad-437a-a9fd-81f3cdee4e83 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:22:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:22:22 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1830324951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:22:22 np0005486808 nova_compute[259627]: 2025-10-14 09:22:22.683 2 DEBUG oslo_concurrency.processutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:22:22 np0005486808 nova_compute[259627]: 2025-10-14 09:22:22.691 2 DEBUG nova.compute.provider_tree [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:22:22 np0005486808 nova_compute[259627]: 2025-10-14 09:22:22.711 2 DEBUG nova.scheduler.client.report [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:22:22 np0005486808 nova_compute[259627]: 2025-10-14 09:22:22.733 2 DEBUG oslo_concurrency.lockutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:22:22 np0005486808 nova_compute[259627]: 2025-10-14 09:22:22.734 2 DEBUG nova.compute.manager [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:22:22 np0005486808 nova_compute[259627]: 2025-10-14 09:22:22.779 2 DEBUG nova.compute.manager [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:22:22 np0005486808 nova_compute[259627]: 2025-10-14 09:22:22.780 2 DEBUG nova.network.neutron [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:22:22 np0005486808 nova_compute[259627]: 2025-10-14 09:22:22.797 2 INFO nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:22:22 np0005486808 nova_compute[259627]: 2025-10-14 09:22:22.813 2 DEBUG nova.compute.manager [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:22:22 np0005486808 nova_compute[259627]: 2025-10-14 09:22:22.913 2 DEBUG nova.compute.manager [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:22:22 np0005486808 nova_compute[259627]: 2025-10-14 09:22:22.915 2 DEBUG nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:22:22 np0005486808 nova_compute[259627]: 2025-10-14 09:22:22.915 2 INFO nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Creating image(s)#033[00m
Oct 14 05:22:22 np0005486808 nova_compute[259627]: 2025-10-14 09:22:22.941 2 DEBUG nova.storage.rbd_utils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] rbd image 6810b29b-088f-441b-8a6a-02eaafada0c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:22:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:22:22 np0005486808 nova_compute[259627]: 2025-10-14 09:22:22.970 2 DEBUG nova.storage.rbd_utils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] rbd image 6810b29b-088f-441b-8a6a-02eaafada0c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:22:23 np0005486808 nova_compute[259627]: 2025-10-14 09:22:23.000 2 DEBUG nova.storage.rbd_utils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] rbd image 6810b29b-088f-441b-8a6a-02eaafada0c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:22:23 np0005486808 nova_compute[259627]: 2025-10-14 09:22:23.006 2 DEBUG oslo_concurrency.processutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:22:23 np0005486808 nova_compute[259627]: 2025-10-14 09:22:23.060 2 DEBUG nova.policy [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f232ab535af04111bf570569aa293116', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4112adc84657452aa0e117ac5999054a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:22:23 np0005486808 nova_compute[259627]: 2025-10-14 09:22:23.064 2 DEBUG nova.compute.manager [req-f1c69fe4-f1ac-4d6b-a4d3-b46d22a7ab2c req-e0ad0b5c-88c6-413c-a669-350b11a86f3e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received event network-vif-unplugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:22:23 np0005486808 nova_compute[259627]: 2025-10-14 09:22:23.064 2 DEBUG oslo_concurrency.lockutils [req-f1c69fe4-f1ac-4d6b-a4d3-b46d22a7ab2c req-e0ad0b5c-88c6-413c-a669-350b11a86f3e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:22:23 np0005486808 nova_compute[259627]: 2025-10-14 09:22:23.065 2 DEBUG oslo_concurrency.lockutils [req-f1c69fe4-f1ac-4d6b-a4d3-b46d22a7ab2c req-e0ad0b5c-88c6-413c-a669-350b11a86f3e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:22:23 np0005486808 nova_compute[259627]: 2025-10-14 09:22:23.067 2 DEBUG oslo_concurrency.lockutils [req-f1c69fe4-f1ac-4d6b-a4d3-b46d22a7ab2c req-e0ad0b5c-88c6-413c-a669-350b11a86f3e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:22:23 np0005486808 nova_compute[259627]: 2025-10-14 09:22:23.067 2 DEBUG nova.compute.manager [req-f1c69fe4-f1ac-4d6b-a4d3-b46d22a7ab2c req-e0ad0b5c-88c6-413c-a669-350b11a86f3e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] No waiting events found dispatching network-vif-unplugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:22:23 np0005486808 nova_compute[259627]: 2025-10-14 09:22:23.067 2 WARNING nova.compute.manager [req-f1c69fe4-f1ac-4d6b-a4d3-b46d22a7ab2c req-e0ad0b5c-88c6-413c-a669-350b11a86f3e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received unexpected event network-vif-unplugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 for instance with vm_state suspended and task_state None.#033[00m
Oct 14 05:22:23 np0005486808 nova_compute[259627]: 2025-10-14 09:22:23.067 2 DEBUG nova.compute.manager [req-f1c69fe4-f1ac-4d6b-a4d3-b46d22a7ab2c req-e0ad0b5c-88c6-413c-a669-350b11a86f3e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received event network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:22:23 np0005486808 nova_compute[259627]: 2025-10-14 09:22:23.068 2 DEBUG oslo_concurrency.lockutils [req-f1c69fe4-f1ac-4d6b-a4d3-b46d22a7ab2c req-e0ad0b5c-88c6-413c-a669-350b11a86f3e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:22:23 np0005486808 nova_compute[259627]: 2025-10-14 09:22:23.068 2 DEBUG oslo_concurrency.lockutils [req-f1c69fe4-f1ac-4d6b-a4d3-b46d22a7ab2c req-e0ad0b5c-88c6-413c-a669-350b11a86f3e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:22:23 np0005486808 nova_compute[259627]: 2025-10-14 09:22:23.068 2 DEBUG oslo_concurrency.lockutils [req-f1c69fe4-f1ac-4d6b-a4d3-b46d22a7ab2c req-e0ad0b5c-88c6-413c-a669-350b11a86f3e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:22:23 np0005486808 nova_compute[259627]: 2025-10-14 09:22:23.069 2 DEBUG nova.compute.manager [req-f1c69fe4-f1ac-4d6b-a4d3-b46d22a7ab2c req-e0ad0b5c-88c6-413c-a669-350b11a86f3e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] No waiting events found dispatching network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:22:23 np0005486808 nova_compute[259627]: 2025-10-14 09:22:23.069 2 WARNING nova.compute.manager [req-f1c69fe4-f1ac-4d6b-a4d3-b46d22a7ab2c req-e0ad0b5c-88c6-413c-a669-350b11a86f3e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received unexpected event network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 for instance with vm_state suspended and task_state None.#033[00m
Oct 14 05:22:23 np0005486808 nova_compute[259627]: 2025-10-14 09:22:23.109 2 DEBUG oslo_concurrency.processutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:22:23 np0005486808 nova_compute[259627]: 2025-10-14 09:22:23.110 2 DEBUG oslo_concurrency.lockutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:22:23 np0005486808 nova_compute[259627]: 2025-10-14 09:22:23.110 2 DEBUG oslo_concurrency.lockutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:22:23 np0005486808 nova_compute[259627]: 2025-10-14 09:22:23.110 2 DEBUG oslo_concurrency.lockutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:22:23 np0005486808 nova_compute[259627]: 2025-10-14 09:22:23.135 2 DEBUG nova.storage.rbd_utils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] rbd image 6810b29b-088f-441b-8a6a-02eaafada0c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:22:23 np0005486808 nova_compute[259627]: 2025-10-14 09:22:23.139 2 DEBUG oslo_concurrency.processutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 6810b29b-088f-441b-8a6a-02eaafada0c5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:22:23 np0005486808 nova_compute[259627]: 2025-10-14 09:22:23.465 2 DEBUG oslo_concurrency.processutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 6810b29b-088f-441b-8a6a-02eaafada0c5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.326s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:22:23 np0005486808 nova_compute[259627]: 2025-10-14 09:22:23.547 2 DEBUG nova.storage.rbd_utils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] resizing rbd image 6810b29b-088f-441b-8a6a-02eaafada0c5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:22:23 np0005486808 nova_compute[259627]: 2025-10-14 09:22:23.655 2 DEBUG nova.objects.instance [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lazy-loading 'migration_context' on Instance uuid 6810b29b-088f-441b-8a6a-02eaafada0c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:22:23 np0005486808 nova_compute[259627]: 2025-10-14 09:22:23.674 2 DEBUG nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:22:23 np0005486808 nova_compute[259627]: 2025-10-14 09:22:23.674 2 DEBUG nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Ensure instance console log exists: /var/lib/nova/instances/6810b29b-088f-441b-8a6a-02eaafada0c5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:22:23 np0005486808 nova_compute[259627]: 2025-10-14 09:22:23.675 2 DEBUG oslo_concurrency.lockutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:22:23 np0005486808 nova_compute[259627]: 2025-10-14 09:22:23.676 2 DEBUG oslo_concurrency.lockutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:22:23 np0005486808 nova_compute[259627]: 2025-10-14 09:22:23.676 2 DEBUG oslo_concurrency.lockutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:22:24 np0005486808 nova_compute[259627]: 2025-10-14 09:22:24.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2066: 305 pgs: 305 active+clean; 156 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.8 MiB/s wr, 122 op/s
Oct 14 05:22:24 np0005486808 nova_compute[259627]: 2025-10-14 09:22:24.200 2 DEBUG nova.network.neutron [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Successfully created port: 6d5e10b7-5c07-4389-8916-e7c277cb2c88 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:22:24 np0005486808 nova_compute[259627]: 2025-10-14 09:22:24.651 2 INFO nova.compute.manager [None req-af8d440b-5fad-4877-9853-108bd3e1e878 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Resuming#033[00m
Oct 14 05:22:24 np0005486808 nova_compute[259627]: 2025-10-14 09:22:24.652 2 DEBUG nova.objects.instance [None req-af8d440b-5fad-4877-9853-108bd3e1e878 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lazy-loading 'flavor' on Instance uuid 725ed629-f7d5-4a69-be5e-4cae3eef2e2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:22:24 np0005486808 nova_compute[259627]: 2025-10-14 09:22:24.684 2 DEBUG oslo_concurrency.lockutils [None req-af8d440b-5fad-4877-9853-108bd3e1e878 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Acquiring lock "refresh_cache-725ed629-f7d5-4a69-be5e-4cae3eef2e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:22:24 np0005486808 nova_compute[259627]: 2025-10-14 09:22:24.684 2 DEBUG oslo_concurrency.lockutils [None req-af8d440b-5fad-4877-9853-108bd3e1e878 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Acquired lock "refresh_cache-725ed629-f7d5-4a69-be5e-4cae3eef2e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:22:24 np0005486808 nova_compute[259627]: 2025-10-14 09:22:24.685 2 DEBUG nova.network.neutron [None req-af8d440b-5fad-4877-9853-108bd3e1e878 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:22:24 np0005486808 nova_compute[259627]: 2025-10-14 09:22:24.802 2 DEBUG nova.network.neutron [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Successfully updated port: 6d5e10b7-5c07-4389-8916-e7c277cb2c88 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:22:24 np0005486808 nova_compute[259627]: 2025-10-14 09:22:24.823 2 DEBUG oslo_concurrency.lockutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Acquiring lock "refresh_cache-6810b29b-088f-441b-8a6a-02eaafada0c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:22:24 np0005486808 nova_compute[259627]: 2025-10-14 09:22:24.824 2 DEBUG oslo_concurrency.lockutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Acquired lock "refresh_cache-6810b29b-088f-441b-8a6a-02eaafada0c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:22:24 np0005486808 nova_compute[259627]: 2025-10-14 09:22:24.824 2 DEBUG nova.network.neutron [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:22:24 np0005486808 nova_compute[259627]: 2025-10-14 09:22:24.913 2 DEBUG nova.compute.manager [req-d74d26cf-e749-488c-a1d8-d4fb5806a39a req-7a612672-f0e0-478e-a83c-d2268b8875a1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Received event network-changed-6d5e10b7-5c07-4389-8916-e7c277cb2c88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:22:24 np0005486808 nova_compute[259627]: 2025-10-14 09:22:24.913 2 DEBUG nova.compute.manager [req-d74d26cf-e749-488c-a1d8-d4fb5806a39a req-7a612672-f0e0-478e-a83c-d2268b8875a1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Refreshing instance network info cache due to event network-changed-6d5e10b7-5c07-4389-8916-e7c277cb2c88. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:22:24 np0005486808 nova_compute[259627]: 2025-10-14 09:22:24.914 2 DEBUG oslo_concurrency.lockutils [req-d74d26cf-e749-488c-a1d8-d4fb5806a39a req-7a612672-f0e0-478e-a83c-d2268b8875a1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-6810b29b-088f-441b-8a6a-02eaafada0c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:22:25 np0005486808 nova_compute[259627]: 2025-10-14 09:22:25.006 2 DEBUG nova.network.neutron [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.104 2 DEBUG nova.network.neutron [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Updating instance_info_cache with network_info: [{"id": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "address": "fa:16:3e:7c:22:e5", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d5e10b7-5c", "ovs_interfaceid": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.130 2 DEBUG oslo_concurrency.lockutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Releasing lock "refresh_cache-6810b29b-088f-441b-8a6a-02eaafada0c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.130 2 DEBUG nova.compute.manager [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Instance network_info: |[{"id": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "address": "fa:16:3e:7c:22:e5", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d5e10b7-5c", "ovs_interfaceid": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.131 2 DEBUG oslo_concurrency.lockutils [req-d74d26cf-e749-488c-a1d8-d4fb5806a39a req-7a612672-f0e0-478e-a83c-d2268b8875a1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-6810b29b-088f-441b-8a6a-02eaafada0c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.132 2 DEBUG nova.network.neutron [req-d74d26cf-e749-488c-a1d8-d4fb5806a39a req-7a612672-f0e0-478e-a83c-d2268b8875a1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Refreshing network info cache for port 6d5e10b7-5c07-4389-8916-e7c277cb2c88 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.137 2 DEBUG nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Start _get_guest_xml network_info=[{"id": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "address": "fa:16:3e:7c:22:e5", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d5e10b7-5c", "ovs_interfaceid": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.144 2 WARNING nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.149 2 DEBUG nova.virt.libvirt.host [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.150 2 DEBUG nova.virt.libvirt.host [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.157 2 DEBUG nova.virt.libvirt.host [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.158 2 DEBUG nova.virt.libvirt.host [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.158 2 DEBUG nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.158 2 DEBUG nova.virt.hardware [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.159 2 DEBUG nova.virt.hardware [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.159 2 DEBUG nova.virt.hardware [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.160 2 DEBUG nova.virt.hardware [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.160 2 DEBUG nova.virt.hardware [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.160 2 DEBUG nova.virt.hardware [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.160 2 DEBUG nova.virt.hardware [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.161 2 DEBUG nova.virt.hardware [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.161 2 DEBUG nova.virt.hardware [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.161 2 DEBUG nova.virt.hardware [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.161 2 DEBUG nova.virt.hardware [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.165 2 DEBUG oslo_concurrency.processutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:22:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2067: 305 pgs: 305 active+clean; 213 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.7 MiB/s wr, 196 op/s
Oct 14 05:22:26 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:22:26 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3711984747' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.625 2 DEBUG oslo_concurrency.processutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.661 2 DEBUG nova.storage.rbd_utils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] rbd image 6810b29b-088f-441b-8a6a-02eaafada0c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.665 2 DEBUG oslo_concurrency.processutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.711 2 DEBUG nova.network.neutron [None req-af8d440b-5fad-4877-9853-108bd3e1e878 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Updating instance_info_cache with network_info: [{"id": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "address": "fa:16:3e:39:7a:fe", "network": {"id": "15c99679-b8ca-4f31-bf52-be40d7b4f023", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-427975149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "9a71d13aebeb4969b1877a33505f3dc4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc47025b4-90", "ovs_interfaceid": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.730 2 DEBUG oslo_concurrency.lockutils [None req-af8d440b-5fad-4877-9853-108bd3e1e878 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Releasing lock "refresh_cache-725ed629-f7d5-4a69-be5e-4cae3eef2e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.739 2 DEBUG nova.virt.libvirt.vif [None req-af8d440b-5fad-4877-9853-108bd3e1e878 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:22:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1481398530',display_name='tempest-TestServerAdvancedOps-server-1481398530',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1481398530',id=118,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:22:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='9a71d13aebeb4969b1877a33505f3dc4',ramdisk_id='',reservation_id='r-okkb6j7x',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_di
sk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-20904479',owner_user_name='tempest-TestServerAdvancedOps-20904479-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:22:22Z,user_data=None,user_id='7629c3d96333470aa7d7ed5cabfc7e2c',uuid=725ed629-f7d5-4a69-be5e-4cae3eef2e2e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "address": "fa:16:3e:39:7a:fe", "network": {"id": "15c99679-b8ca-4f31-bf52-be40d7b4f023", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-427975149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "9a71d13aebeb4969b1877a33505f3dc4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc47025b4-90", "ovs_interfaceid": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.739 2 DEBUG nova.network.os_vif_util [None req-af8d440b-5fad-4877-9853-108bd3e1e878 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Converting VIF {"id": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "address": "fa:16:3e:39:7a:fe", "network": {"id": "15c99679-b8ca-4f31-bf52-be40d7b4f023", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-427975149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "9a71d13aebeb4969b1877a33505f3dc4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc47025b4-90", "ovs_interfaceid": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.741 2 DEBUG nova.network.os_vif_util [None req-af8d440b-5fad-4877-9853-108bd3e1e878 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:7a:fe,bridge_name='br-int',has_traffic_filtering=True,id=c47025b4-9051-4cc4-9fb7-70cd59d6c5c5,network=Network(15c99679-b8ca-4f31-bf52-be40d7b4f023),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc47025b4-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.741 2 DEBUG os_vif [None req-af8d440b-5fad-4877-9853-108bd3e1e878 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:7a:fe,bridge_name='br-int',has_traffic_filtering=True,id=c47025b4-9051-4cc4-9fb7-70cd59d6c5c5,network=Network(15c99679-b8ca-4f31-bf52-be40d7b4f023),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc47025b4-90') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.743 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.743 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.747 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc47025b4-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.747 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc47025b4-90, col_values=(('external_ids', {'iface-id': 'c47025b4-9051-4cc4-9fb7-70cd59d6c5c5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:39:7a:fe', 'vm-uuid': '725ed629-f7d5-4a69-be5e-4cae3eef2e2e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.748 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.749 2 INFO os_vif [None req-af8d440b-5fad-4877-9853-108bd3e1e878 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:7a:fe,bridge_name='br-int',has_traffic_filtering=True,id=c47025b4-9051-4cc4-9fb7-70cd59d6c5c5,network=Network(15c99679-b8ca-4f31-bf52-be40d7b4f023),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc47025b4-90')#033[00m
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.780 2 DEBUG nova.objects.instance [None req-af8d440b-5fad-4877-9853-108bd3e1e878 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lazy-loading 'numa_topology' on Instance uuid 725ed629-f7d5-4a69-be5e-4cae3eef2e2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:22:26 np0005486808 kernel: tapc47025b4-90: entered promiscuous mode
Oct 14 05:22:26 np0005486808 NetworkManager[44885]: <info>  [1760433746.8696] manager: (tapc47025b4-90): new Tun device (/org/freedesktop/NetworkManager/Devices/513)
Oct 14 05:22:26 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:26Z|01251|binding|INFO|Claiming lport c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 for this chassis.
Oct 14 05:22:26 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:26Z|01252|binding|INFO|c47025b4-9051-4cc4-9fb7-70cd59d6c5c5: Claiming fa:16:3e:39:7a:fe 10.100.0.13
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:26.882 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:7a:fe 10.100.0.13'], port_security=['fa:16:3e:39:7a:fe 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '725ed629-f7d5-4a69-be5e-4cae3eef2e2e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15c99679-b8ca-4f31-bf52-be40d7b4f023', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9a71d13aebeb4969b1877a33505f3dc4', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'eaea1bee-fc2e-4983-8a2c-80f8f52b9e8d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=27060d93-efe1-4dd7-9738-9f62e5f1629d, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=c47025b4-9051-4cc4-9fb7-70cd59d6c5c5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:22:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:26.884 162547 INFO neutron.agent.ovn.metadata.agent [-] Port c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 in datapath 15c99679-b8ca-4f31-bf52-be40d7b4f023 bound to our chassis#033[00m
Oct 14 05:22:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:26.885 162547 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 15c99679-b8ca-4f31-bf52-be40d7b4f023 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct 14 05:22:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:26.888 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bb71dd9f-5dad-4094-aae5-b39881f80186]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:26 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:26Z|01253|binding|INFO|Setting lport c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 ovn-installed in OVS
Oct 14 05:22:26 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:26Z|01254|binding|INFO|Setting lport c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 up in Southbound
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:26 np0005486808 nova_compute[259627]: 2025-10-14 09:22:26.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:26 np0005486808 systemd-machined[214636]: New machine qemu-150-instance-00000076.
Oct 14 05:22:26 np0005486808 systemd[1]: Started Virtual Machine qemu-150-instance-00000076.
Oct 14 05:22:26 np0005486808 systemd-udevd[378743]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:22:26 np0005486808 NetworkManager[44885]: <info>  [1760433746.9647] device (tapc47025b4-90): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:22:26 np0005486808 NetworkManager[44885]: <info>  [1760433746.9667] device (tapc47025b4-90): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:22:27 np0005486808 nova_compute[259627]: 2025-10-14 09:22:27.072 2 DEBUG nova.compute.manager [req-6808029d-df3f-4d0f-925b-8eef989dd50c req-fd71bba4-8ecf-4349-aa31-7cb4eb67a0aa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received event network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:22:27 np0005486808 nova_compute[259627]: 2025-10-14 09:22:27.073 2 DEBUG oslo_concurrency.lockutils [req-6808029d-df3f-4d0f-925b-8eef989dd50c req-fd71bba4-8ecf-4349-aa31-7cb4eb67a0aa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:22:27 np0005486808 nova_compute[259627]: 2025-10-14 09:22:27.073 2 DEBUG oslo_concurrency.lockutils [req-6808029d-df3f-4d0f-925b-8eef989dd50c req-fd71bba4-8ecf-4349-aa31-7cb4eb67a0aa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:22:27 np0005486808 nova_compute[259627]: 2025-10-14 09:22:27.074 2 DEBUG oslo_concurrency.lockutils [req-6808029d-df3f-4d0f-925b-8eef989dd50c req-fd71bba4-8ecf-4349-aa31-7cb4eb67a0aa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:22:27 np0005486808 nova_compute[259627]: 2025-10-14 09:22:27.074 2 DEBUG nova.compute.manager [req-6808029d-df3f-4d0f-925b-8eef989dd50c req-fd71bba4-8ecf-4349-aa31-7cb4eb67a0aa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] No waiting events found dispatching network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:22:27 np0005486808 nova_compute[259627]: 2025-10-14 09:22:27.075 2 WARNING nova.compute.manager [req-6808029d-df3f-4d0f-925b-8eef989dd50c req-fd71bba4-8ecf-4349-aa31-7cb4eb67a0aa 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received unexpected event network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 for instance with vm_state suspended and task_state resuming.#033[00m
Oct 14 05:22:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:22:27 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2028329480' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:22:27 np0005486808 nova_compute[259627]: 2025-10-14 09:22:27.146 2 DEBUG oslo_concurrency.processutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:22:27 np0005486808 nova_compute[259627]: 2025-10-14 09:22:27.147 2 DEBUG nova.virt.libvirt.vif [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:22:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1951271072',display_name='tempest-TestSnapshotPattern-server-1951271072',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1951271072',id=119,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG/LFMmap4OB0txXt7JPmRqMoCntTC6tPJ8UMtOaZfJpNHOk397YhzNpC3bp7HY+1PlJdNNwm/PwDw4vAnX0LcroehUvKxClr2ruduakP/QJ599/XDk5NHuriBpOA/llJg==',key_name='tempest-TestSnapshotPattern-1883373955',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4112adc84657452aa0e117ac5999054a',ramdisk_id='',reservation_id='r-51g4schc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-70687399',owner_user_name='tempest-TestSnapshotPattern-70687399-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:22:22Z,user_data=None,user_id='f232ab535af04111bf570569aa293116',uuid=6810b29b-088f-441b-8a6a-02eaafada0c5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "address": "fa:16:3e:7c:22:e5", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d5e10b7-5c", "ovs_interfaceid": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:22:27 np0005486808 nova_compute[259627]: 2025-10-14 09:22:27.148 2 DEBUG nova.network.os_vif_util [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Converting VIF {"id": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "address": "fa:16:3e:7c:22:e5", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d5e10b7-5c", "ovs_interfaceid": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:22:27 np0005486808 nova_compute[259627]: 2025-10-14 09:22:27.148 2 DEBUG nova.network.os_vif_util [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:22:e5,bridge_name='br-int',has_traffic_filtering=True,id=6d5e10b7-5c07-4389-8916-e7c277cb2c88,network=Network(4fc37d66-193b-4ab7-80e3-58e26dc76e47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d5e10b7-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:22:27 np0005486808 nova_compute[259627]: 2025-10-14 09:22:27.149 2 DEBUG nova.objects.instance [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lazy-loading 'pci_devices' on Instance uuid 6810b29b-088f-441b-8a6a-02eaafada0c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:22:27 np0005486808 nova_compute[259627]: 2025-10-14 09:22:27.164 2 DEBUG nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:22:27 np0005486808 nova_compute[259627]:  <uuid>6810b29b-088f-441b-8a6a-02eaafada0c5</uuid>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:  <name>instance-00000077</name>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:22:27 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:      <nova:name>tempest-TestSnapshotPattern-server-1951271072</nova:name>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:22:26</nova:creationTime>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:22:27 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:        <nova:user uuid="f232ab535af04111bf570569aa293116">tempest-TestSnapshotPattern-70687399-project-member</nova:user>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:        <nova:project uuid="4112adc84657452aa0e117ac5999054a">tempest-TestSnapshotPattern-70687399</nova:project>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:        <nova:port uuid="6d5e10b7-5c07-4389-8916-e7c277cb2c88">
Oct 14 05:22:27 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:      <entry name="serial">6810b29b-088f-441b-8a6a-02eaafada0c5</entry>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:      <entry name="uuid">6810b29b-088f-441b-8a6a-02eaafada0c5</entry>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:22:27 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/6810b29b-088f-441b-8a6a-02eaafada0c5_disk">
Oct 14 05:22:27 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:22:27 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:22:27 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/6810b29b-088f-441b-8a6a-02eaafada0c5_disk.config">
Oct 14 05:22:27 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:22:27 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:22:27 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:7c:22:e5"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:      <target dev="tap6d5e10b7-5c"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:22:27 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/6810b29b-088f-441b-8a6a-02eaafada0c5/console.log" append="off"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:22:27 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:22:27 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:22:27 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:22:27 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:22:27 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:22:27 np0005486808 nova_compute[259627]: 2025-10-14 09:22:27.170 2 DEBUG nova.compute.manager [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Preparing to wait for external event network-vif-plugged-6d5e10b7-5c07-4389-8916-e7c277cb2c88 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:22:27 np0005486808 nova_compute[259627]: 2025-10-14 09:22:27.170 2 DEBUG oslo_concurrency.lockutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Acquiring lock "6810b29b-088f-441b-8a6a-02eaafada0c5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:22:27 np0005486808 nova_compute[259627]: 2025-10-14 09:22:27.170 2 DEBUG oslo_concurrency.lockutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "6810b29b-088f-441b-8a6a-02eaafada0c5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:22:27 np0005486808 nova_compute[259627]: 2025-10-14 09:22:27.171 2 DEBUG oslo_concurrency.lockutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "6810b29b-088f-441b-8a6a-02eaafada0c5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:22:27 np0005486808 nova_compute[259627]: 2025-10-14 09:22:27.171 2 DEBUG nova.virt.libvirt.vif [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:22:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1951271072',display_name='tempest-TestSnapshotPattern-server-1951271072',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1951271072',id=119,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG/LFMmap4OB0txXt7JPmRqMoCntTC6tPJ8UMtOaZfJpNHOk397YhzNpC3bp7HY+1PlJdNNwm/PwDw4vAnX0LcroehUvKxClr2ruduakP/QJ599/XDk5NHuriBpOA/llJg==',key_name='tempest-TestSnapshotPattern-1883373955',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4112adc84657452aa0e117ac5999054a',ramdisk_id='',reservation_id='r-51g4schc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-70687399',owner_user_name='tempest-TestSnapshotPattern-70687399-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:22:22Z,user_data=None,user_id='f232ab535af04111bf570569aa293116',uuid=6810b29b-088f-441b-8a6a-02eaafada0c5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "address": "fa:16:3e:7c:22:e5", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d5e10b7-5c", "ovs_interfaceid": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:22:27 np0005486808 nova_compute[259627]: 2025-10-14 09:22:27.172 2 DEBUG nova.network.os_vif_util [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Converting VIF {"id": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "address": "fa:16:3e:7c:22:e5", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d5e10b7-5c", "ovs_interfaceid": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:22:27 np0005486808 nova_compute[259627]: 2025-10-14 09:22:27.172 2 DEBUG nova.network.os_vif_util [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:22:e5,bridge_name='br-int',has_traffic_filtering=True,id=6d5e10b7-5c07-4389-8916-e7c277cb2c88,network=Network(4fc37d66-193b-4ab7-80e3-58e26dc76e47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d5e10b7-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:22:27 np0005486808 nova_compute[259627]: 2025-10-14 09:22:27.173 2 DEBUG os_vif [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:22:e5,bridge_name='br-int',has_traffic_filtering=True,id=6d5e10b7-5c07-4389-8916-e7c277cb2c88,network=Network(4fc37d66-193b-4ab7-80e3-58e26dc76e47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d5e10b7-5c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:22:27 np0005486808 nova_compute[259627]: 2025-10-14 09:22:27.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:27 np0005486808 nova_compute[259627]: 2025-10-14 09:22:27.173 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:22:27 np0005486808 nova_compute[259627]: 2025-10-14 09:22:27.174 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:22:27 np0005486808 nova_compute[259627]: 2025-10-14 09:22:27.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:27 np0005486808 nova_compute[259627]: 2025-10-14 09:22:27.176 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d5e10b7-5c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:22:27 np0005486808 nova_compute[259627]: 2025-10-14 09:22:27.177 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6d5e10b7-5c, col_values=(('external_ids', {'iface-id': '6d5e10b7-5c07-4389-8916-e7c277cb2c88', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7c:22:e5', 'vm-uuid': '6810b29b-088f-441b-8a6a-02eaafada0c5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:22:27 np0005486808 nova_compute[259627]: 2025-10-14 09:22:27.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:27 np0005486808 NetworkManager[44885]: <info>  [1760433747.1794] manager: (tap6d5e10b7-5c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/514)
Oct 14 05:22:27 np0005486808 nova_compute[259627]: 2025-10-14 09:22:27.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:22:27 np0005486808 nova_compute[259627]: 2025-10-14 09:22:27.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:27 np0005486808 nova_compute[259627]: 2025-10-14 09:22:27.185 2 INFO os_vif [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:22:e5,bridge_name='br-int',has_traffic_filtering=True,id=6d5e10b7-5c07-4389-8916-e7c277cb2c88,network=Network(4fc37d66-193b-4ab7-80e3-58e26dc76e47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d5e10b7-5c')#033[00m
Oct 14 05:22:27 np0005486808 nova_compute[259627]: 2025-10-14 09:22:27.226 2 DEBUG nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:22:27 np0005486808 nova_compute[259627]: 2025-10-14 09:22:27.227 2 DEBUG nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:22:27 np0005486808 nova_compute[259627]: 2025-10-14 09:22:27.227 2 DEBUG nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] No VIF found with MAC fa:16:3e:7c:22:e5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:22:27 np0005486808 nova_compute[259627]: 2025-10-14 09:22:27.227 2 INFO nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Using config drive#033[00m
Oct 14 05:22:27 np0005486808 nova_compute[259627]: 2025-10-14 09:22:27.247 2 DEBUG nova.storage.rbd_utils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] rbd image 6810b29b-088f-441b-8a6a-02eaafada0c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:22:27 np0005486808 nova_compute[259627]: 2025-10-14 09:22:27.618 2 DEBUG nova.network.neutron [req-d74d26cf-e749-488c-a1d8-d4fb5806a39a req-7a612672-f0e0-478e-a83c-d2268b8875a1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Updated VIF entry in instance network info cache for port 6d5e10b7-5c07-4389-8916-e7c277cb2c88. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:22:27 np0005486808 nova_compute[259627]: 2025-10-14 09:22:27.619 2 DEBUG nova.network.neutron [req-d74d26cf-e749-488c-a1d8-d4fb5806a39a req-7a612672-f0e0-478e-a83c-d2268b8875a1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Updating instance_info_cache with network_info: [{"id": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "address": "fa:16:3e:7c:22:e5", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d5e10b7-5c", "ovs_interfaceid": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:22:27 np0005486808 nova_compute[259627]: 2025-10-14 09:22:27.642 2 DEBUG oslo_concurrency.lockutils [req-d74d26cf-e749-488c-a1d8-d4fb5806a39a req-7a612672-f0e0-478e-a83c-d2268b8875a1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-6810b29b-088f-441b-8a6a-02eaafada0c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 05:22:27 np0005486808 nova_compute[259627]: 2025-10-14 09:22:27.758 2 INFO nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Creating config drive at /var/lib/nova/instances/6810b29b-088f-441b-8a6a-02eaafada0c5/disk.config
Oct 14 05:22:27 np0005486808 nova_compute[259627]: 2025-10-14 09:22:27.766 2 DEBUG oslo_concurrency.processutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6810b29b-088f-441b-8a6a-02eaafada0c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk82t929i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:22:27 np0005486808 nova_compute[259627]: 2025-10-14 09:22:27.828 2 INFO nova.compute.manager [None req-5fdcc3e5-4265-4647-a757-02b7bc666d62 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Get console output
Oct 14 05:22:27 np0005486808 nova_compute[259627]: 2025-10-14 09:22:27.837 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 14 05:22:27 np0005486808 nova_compute[259627]: 2025-10-14 09:22:27.938 2 DEBUG oslo_concurrency.processutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6810b29b-088f-441b-8a6a-02eaafada0c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk82t929i" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:22:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:22:27 np0005486808 nova_compute[259627]: 2025-10-14 09:22:27.975 2 DEBUG nova.storage.rbd_utils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] rbd image 6810b29b-088f-441b-8a6a-02eaafada0c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:22:27 np0005486808 nova_compute[259627]: 2025-10-14 09:22:27.980 2 DEBUG oslo_concurrency.processutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6810b29b-088f-441b-8a6a-02eaafada0c5/disk.config 6810b29b-088f-441b-8a6a-02eaafada0c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:22:28 np0005486808 nova_compute[259627]: 2025-10-14 09:22:28.145 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for 725ed629-f7d5-4a69-be5e-4cae3eef2e2e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 14 05:22:28 np0005486808 nova_compute[259627]: 2025-10-14 09:22:28.147 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433748.1448653, 725ed629-f7d5-4a69-be5e-4cae3eef2e2e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:22:28 np0005486808 nova_compute[259627]: 2025-10-14 09:22:28.147 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] VM Started (Lifecycle Event)
Oct 14 05:22:28 np0005486808 nova_compute[259627]: 2025-10-14 09:22:28.164 2 DEBUG nova.compute.manager [None req-af8d440b-5fad-4877-9853-108bd3e1e878 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 05:22:28 np0005486808 nova_compute[259627]: 2025-10-14 09:22:28.165 2 DEBUG nova.objects.instance [None req-af8d440b-5fad-4877-9853-108bd3e1e878 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 725ed629-f7d5-4a69-be5e-4cae3eef2e2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:22:28 np0005486808 nova_compute[259627]: 2025-10-14 09:22:28.168 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:22:28 np0005486808 nova_compute[259627]: 2025-10-14 09:22:28.173 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 05:22:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2068: 305 pgs: 305 active+clean; 213 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 162 op/s
Oct 14 05:22:28 np0005486808 nova_compute[259627]: 2025-10-14 09:22:28.179 2 INFO nova.virt.libvirt.driver [-] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Instance running successfully.
Oct 14 05:22:28 np0005486808 virtqemud[259351]: argument unsupported: QEMU guest agent is not configured
Oct 14 05:22:28 np0005486808 nova_compute[259627]: 2025-10-14 09:22:28.184 2 DEBUG nova.virt.libvirt.guest [None req-af8d440b-5fad-4877-9853-108bd3e1e878 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 14 05:22:28 np0005486808 nova_compute[259627]: 2025-10-14 09:22:28.185 2 DEBUG nova.compute.manager [None req-af8d440b-5fad-4877-9853-108bd3e1e878 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:22:28 np0005486808 nova_compute[259627]: 2025-10-14 09:22:28.190 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] During sync_power_state the instance has a pending task (resuming). Skip.
Oct 14 05:22:28 np0005486808 nova_compute[259627]: 2025-10-14 09:22:28.191 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433748.1503003, 725ed629-f7d5-4a69-be5e-4cae3eef2e2e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:22:28 np0005486808 nova_compute[259627]: 2025-10-14 09:22:28.191 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] VM Resumed (Lifecycle Event)
Oct 14 05:22:28 np0005486808 nova_compute[259627]: 2025-10-14 09:22:28.192 2 DEBUG oslo_concurrency.processutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6810b29b-088f-441b-8a6a-02eaafada0c5/disk.config 6810b29b-088f-441b-8a6a-02eaafada0c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.212s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:22:28 np0005486808 nova_compute[259627]: 2025-10-14 09:22:28.193 2 INFO nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Deleting local config drive /var/lib/nova/instances/6810b29b-088f-441b-8a6a-02eaafada0c5/disk.config because it was imported into RBD.
Oct 14 05:22:28 np0005486808 nova_compute[259627]: 2025-10-14 09:22:28.221 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:22:28 np0005486808 nova_compute[259627]: 2025-10-14 09:22:28.224 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 05:22:28 np0005486808 kernel: tap6d5e10b7-5c: entered promiscuous mode
Oct 14 05:22:28 np0005486808 systemd-udevd[378745]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:22:28 np0005486808 NetworkManager[44885]: <info>  [1760433748.2507] manager: (tap6d5e10b7-5c): new Tun device (/org/freedesktop/NetworkManager/Devices/515)
Oct 14 05:22:28 np0005486808 nova_compute[259627]: 2025-10-14 09:22:28.259 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] During sync_power_state the instance has a pending task (resuming). Skip.
Oct 14 05:22:28 np0005486808 nova_compute[259627]: 2025-10-14 09:22:28.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:22:28 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:28Z|01255|binding|INFO|Claiming lport 6d5e10b7-5c07-4389-8916-e7c277cb2c88 for this chassis.
Oct 14 05:22:28 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:28Z|01256|binding|INFO|6d5e10b7-5c07-4389-8916-e7c277cb2c88: Claiming fa:16:3e:7c:22:e5 10.100.0.12
Oct 14 05:22:28 np0005486808 NetworkManager[44885]: <info>  [1760433748.2702] device (tap6d5e10b7-5c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:22:28 np0005486808 NetworkManager[44885]: <info>  [1760433748.2716] device (tap6d5e10b7-5c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.273 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:22:e5 10.100.0.12'], port_security=['fa:16:3e:7c:22:e5 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '6810b29b-088f-441b-8a6a-02eaafada0c5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4fc37d66-193b-4ab7-80e3-58e26dc76e47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4112adc84657452aa0e117ac5999054a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b7a53172-9b5e-49ee-bb03-aeca0d4a8fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=add5cdec-6440-4df9-aea8-21659d7bab06, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=6d5e10b7-5c07-4389-8916-e7c277cb2c88) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.274 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 6d5e10b7-5c07-4389-8916-e7c277cb2c88 in datapath 4fc37d66-193b-4ab7-80e3-58e26dc76e47 bound to our chassis
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.276 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4fc37d66-193b-4ab7-80e3-58e26dc76e47
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.287 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1d502e11-6a82-4c04-80eb-686fa8035106]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.288 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4fc37d66-11 in ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.289 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4fc37d66-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.289 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[936ee26b-10e0-4d62-a6d2-3543b29fa1b8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.291 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f7b06f47-9c9b-4dd0-b4a7-68790d2b4024]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:22:28 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:28Z|01257|binding|INFO|Setting lport 6d5e10b7-5c07-4389-8916-e7c277cb2c88 ovn-installed in OVS
Oct 14 05:22:28 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:28Z|01258|binding|INFO|Setting lport 6d5e10b7-5c07-4389-8916-e7c277cb2c88 up in Southbound
Oct 14 05:22:28 np0005486808 nova_compute[259627]: 2025-10-14 09:22:28.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:22:28 np0005486808 systemd-machined[214636]: New machine qemu-151-instance-00000077.
Oct 14 05:22:28 np0005486808 nova_compute[259627]: 2025-10-14 09:22:28.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:22:28 np0005486808 systemd[1]: Started Virtual Machine qemu-151-instance-00000077.
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.309 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[da1dc5d3-0b62-4e67-a3e1-5b9fe22ee88a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.324 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[98b1d6a9-a549-4f28-97d8-579f9d9de3de]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.364 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[b370313a-1013-42fd-95ce-a4c6a1bc1897]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:22:28 np0005486808 NetworkManager[44885]: <info>  [1760433748.3703] manager: (tap4fc37d66-10): new Veth device (/org/freedesktop/NetworkManager/Devices/516)
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.371 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7cf30a98-4c70-480b-95fc-185673066865]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.414 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d4a74dc1-6086-4327-8006-e044ef2facb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.417 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5aa9e2a2-4330-4e36-a30a-79d743f67021]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:22:28 np0005486808 NetworkManager[44885]: <info>  [1760433748.4423] device (tap4fc37d66-10): carrier: link connected
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.454 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c58cfa19-b05b-4163-9780-7e3c21aaeea0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.471 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[acf42113-100e-481b-9e6c-05da349a22b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4fc37d66-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:00:1e:16'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 361], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 752720, 'reachable_time': 16651, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378901, 'error': None, 'target': 'ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.489 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3aca4b12-e207-49f8-95e9-b3671046ea9b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe00:1e16'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 752720, 'tstamp': 752720}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 378902, 'error': None, 'target': 'ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.509 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[674d4976-020c-4ebe-8b4e-d983b944b958]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4fc37d66-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:00:1e:16'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 361], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 752720, 'reachable_time': 16651, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 378903, 'error': None, 'target': 'ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.544 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4b7ac8c3-ff2b-4310-b18b-162dcc27bca8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.618 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6be66286-f4d5-404e-a769-ed9267a34036]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.620 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fc37d66-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.620 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.621 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4fc37d66-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:22:28 np0005486808 kernel: tap4fc37d66-10: entered promiscuous mode
Oct 14 05:22:28 np0005486808 NetworkManager[44885]: <info>  [1760433748.6236] manager: (tap4fc37d66-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/517)
Oct 14 05:22:28 np0005486808 nova_compute[259627]: 2025-10-14 09:22:28.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:22:28 np0005486808 nova_compute[259627]: 2025-10-14 09:22:28.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.628 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4fc37d66-10, col_values=(('external_ids', {'iface-id': '04719e6c-d55b-4ad7-a45c-52e6e59101ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:22:28 np0005486808 nova_compute[259627]: 2025-10-14 09:22:28.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:22:28 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:28Z|01259|binding|INFO|Releasing lport 04719e6c-d55b-4ad7-a45c-52e6e59101ab from this chassis (sb_readonly=0)
Oct 14 05:22:28 np0005486808 nova_compute[259627]: 2025-10-14 09:22:28.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.634 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4fc37d66-193b-4ab7-80e3-58e26dc76e47.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4fc37d66-193b-4ab7-80e3-58e26dc76e47.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.636 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[85e35188-f373-45e5-9fa6-c02fc5beaadd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.637 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-4fc37d66-193b-4ab7-80e3-58e26dc76e47
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/4fc37d66-193b-4ab7-80e3-58e26dc76e47.pid.haproxy
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 4fc37d66-193b-4ab7-80e3-58e26dc76e47
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 05:22:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:28.640 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47', 'env', 'PROCESS_TAG=haproxy-4fc37d66-193b-4ab7-80e3-58e26dc76e47', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4fc37d66-193b-4ab7-80e3-58e26dc76e47.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 05:22:28 np0005486808 nova_compute[259627]: 2025-10-14 09:22:28.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:22:29 np0005486808 podman[378978]: 2025-10-14 09:22:29.02334929 +0000 UTC m=+0.050979851 container create a80be19c082f215f5d033f85196a94ab27665b6b2c853494b96bead108109a8e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:22:29 np0005486808 systemd[1]: Started libpod-conmon-a80be19c082f215f5d033f85196a94ab27665b6b2c853494b96bead108109a8e.scope.
Oct 14 05:22:29 np0005486808 podman[378978]: 2025-10-14 09:22:28.99544381 +0000 UTC m=+0.023074411 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:22:29 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:22:29 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de954708f8f3fdfb0e31660e7186517bbce587741a2e68b98ff8eba35c2ae10b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:22:29 np0005486808 podman[378978]: 2025-10-14 09:22:29.124896139 +0000 UTC m=+0.152526800 container init a80be19c082f215f5d033f85196a94ab27665b6b2c853494b96bead108109a8e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:22:29 np0005486808 podman[378978]: 2025-10-14 09:22:29.141578291 +0000 UTC m=+0.169208862 container start a80be19c082f215f5d033f85196a94ab27665b6b2c853494b96bead108109a8e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:22:29 np0005486808 nova_compute[259627]: 2025-10-14 09:22:29.164 2 DEBUG nova.compute.manager [req-a6dda00a-f8bc-4376-8713-c6506f0105e9 req-5871274a-2b33-4993-9fb9-9dd440183739 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received event network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:22:29 np0005486808 nova_compute[259627]: 2025-10-14 09:22:29.164 2 DEBUG oslo_concurrency.lockutils [req-a6dda00a-f8bc-4376-8713-c6506f0105e9 req-5871274a-2b33-4993-9fb9-9dd440183739 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:22:29 np0005486808 nova_compute[259627]: 2025-10-14 09:22:29.164 2 DEBUG oslo_concurrency.lockutils [req-a6dda00a-f8bc-4376-8713-c6506f0105e9 req-5871274a-2b33-4993-9fb9-9dd440183739 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:22:29 np0005486808 nova_compute[259627]: 2025-10-14 09:22:29.165 2 DEBUG oslo_concurrency.lockutils [req-a6dda00a-f8bc-4376-8713-c6506f0105e9 req-5871274a-2b33-4993-9fb9-9dd440183739 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:22:29 np0005486808 nova_compute[259627]: 2025-10-14 09:22:29.165 2 DEBUG nova.compute.manager [req-a6dda00a-f8bc-4376-8713-c6506f0105e9 req-5871274a-2b33-4993-9fb9-9dd440183739 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] No waiting events found dispatching network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 05:22:29 np0005486808 nova_compute[259627]: 2025-10-14 09:22:29.165 2 WARNING nova.compute.manager [req-a6dda00a-f8bc-4376-8713-c6506f0105e9 req-5871274a-2b33-4993-9fb9-9dd440183739 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received unexpected event network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 for instance with vm_state active and task_state None.
Oct 14 05:22:29 np0005486808 nova_compute[259627]: 2025-10-14 09:22:29.165 2 DEBUG nova.compute.manager [req-a6dda00a-f8bc-4376-8713-c6506f0105e9 req-5871274a-2b33-4993-9fb9-9dd440183739 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Received event network-vif-plugged-6d5e10b7-5c07-4389-8916-e7c277cb2c88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:22:29 np0005486808 nova_compute[259627]: 2025-10-14 09:22:29.165 2 DEBUG oslo_concurrency.lockutils [req-a6dda00a-f8bc-4376-8713-c6506f0105e9 req-5871274a-2b33-4993-9fb9-9dd440183739 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "6810b29b-088f-441b-8a6a-02eaafada0c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:22:29 np0005486808 nova_compute[259627]: 2025-10-14 09:22:29.165 2 DEBUG oslo_concurrency.lockutils [req-a6dda00a-f8bc-4376-8713-c6506f0105e9 req-5871274a-2b33-4993-9fb9-9dd440183739 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6810b29b-088f-441b-8a6a-02eaafada0c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:22:29 np0005486808 nova_compute[259627]: 2025-10-14 09:22:29.166 2 DEBUG oslo_concurrency.lockutils [req-a6dda00a-f8bc-4376-8713-c6506f0105e9 req-5871274a-2b33-4993-9fb9-9dd440183739 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6810b29b-088f-441b-8a6a-02eaafada0c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:22:29 np0005486808 nova_compute[259627]: 2025-10-14 09:22:29.166 2 DEBUG nova.compute.manager [req-a6dda00a-f8bc-4376-8713-c6506f0105e9 req-5871274a-2b33-4993-9fb9-9dd440183739 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Processing event network-vif-plugged-6d5e10b7-5c07-4389-8916-e7c277cb2c88 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 05:22:29 np0005486808 nova_compute[259627]: 2025-10-14 09:22:29.166 2 DEBUG nova.compute.manager [req-a6dda00a-f8bc-4376-8713-c6506f0105e9 req-5871274a-2b33-4993-9fb9-9dd440183739 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Received event network-vif-plugged-6d5e10b7-5c07-4389-8916-e7c277cb2c88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:22:29 np0005486808 nova_compute[259627]: 2025-10-14 09:22:29.166 2 DEBUG oslo_concurrency.lockutils [req-a6dda00a-f8bc-4376-8713-c6506f0105e9 req-5871274a-2b33-4993-9fb9-9dd440183739 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "6810b29b-088f-441b-8a6a-02eaafada0c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:22:29 np0005486808 nova_compute[259627]: 2025-10-14 09:22:29.166 2 DEBUG oslo_concurrency.lockutils [req-a6dda00a-f8bc-4376-8713-c6506f0105e9 req-5871274a-2b33-4993-9fb9-9dd440183739 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6810b29b-088f-441b-8a6a-02eaafada0c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:22:29 np0005486808 nova_compute[259627]: 2025-10-14 09:22:29.166 2 DEBUG oslo_concurrency.lockutils [req-a6dda00a-f8bc-4376-8713-c6506f0105e9 req-5871274a-2b33-4993-9fb9-9dd440183739 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6810b29b-088f-441b-8a6a-02eaafada0c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:22:29 np0005486808 nova_compute[259627]: 2025-10-14 09:22:29.166 2 DEBUG nova.compute.manager [req-a6dda00a-f8bc-4376-8713-c6506f0105e9 req-5871274a-2b33-4993-9fb9-9dd440183739 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] No waiting events found dispatching network-vif-plugged-6d5e10b7-5c07-4389-8916-e7c277cb2c88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 05:22:29 np0005486808 nova_compute[259627]: 2025-10-14 09:22:29.167 2 WARNING nova.compute.manager [req-a6dda00a-f8bc-4376-8713-c6506f0105e9 req-5871274a-2b33-4993-9fb9-9dd440183739 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Received unexpected event network-vif-plugged-6d5e10b7-5c07-4389-8916-e7c277cb2c88 for instance with vm_state building and task_state spawning.
Oct 14 05:22:29 np0005486808 nova_compute[259627]: 2025-10-14 09:22:29.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:22:29 np0005486808 neutron-haproxy-ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47[378994]: [NOTICE]   (378998) : New worker (379000) forked
Oct 14 05:22:29 np0005486808 neutron-haproxy-ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47[378994]: [NOTICE]   (378998) : Loading success.
Oct 14 05:22:29 np0005486808 nova_compute[259627]: 2025-10-14 09:22:29.319 2 DEBUG nova.compute.manager [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 05:22:29 np0005486808 nova_compute[259627]: 2025-10-14 09:22:29.319 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433749.3186138, 6810b29b-088f-441b-8a6a-02eaafada0c5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:22:29 np0005486808 nova_compute[259627]: 2025-10-14 09:22:29.320 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] VM Started (Lifecycle Event)
Oct 14 05:22:29 np0005486808 nova_compute[259627]: 2025-10-14 09:22:29.324 2 DEBUG nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 05:22:29 np0005486808 nova_compute[259627]: 2025-10-14 09:22:29.328 2 INFO nova.virt.libvirt.driver [-] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Instance spawned successfully.
Oct 14 05:22:29 np0005486808 nova_compute[259627]: 2025-10-14 09:22:29.328 2 DEBUG nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 05:22:29 np0005486808 nova_compute[259627]: 2025-10-14 09:22:29.341 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:22:29 np0005486808 nova_compute[259627]: 2025-10-14 09:22:29.350 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 05:22:29 np0005486808 nova_compute[259627]: 2025-10-14 09:22:29.353 2 DEBUG nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:22:29 np0005486808 nova_compute[259627]: 2025-10-14 09:22:29.353 2 DEBUG nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:22:29 np0005486808 nova_compute[259627]: 2025-10-14 09:22:29.354 2 DEBUG nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:22:29 np0005486808 nova_compute[259627]: 2025-10-14 09:22:29.354 2 DEBUG nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:22:29 np0005486808 nova_compute[259627]: 2025-10-14 09:22:29.354 2 DEBUG nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:22:29 np0005486808 nova_compute[259627]: 2025-10-14 09:22:29.355 2 DEBUG nova.virt.libvirt.driver [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:22:29 np0005486808 nova_compute[259627]: 2025-10-14 09:22:29.383 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 05:22:29 np0005486808 nova_compute[259627]: 2025-10-14 09:22:29.383 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433749.319528, 6810b29b-088f-441b-8a6a-02eaafada0c5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:22:29 np0005486808 nova_compute[259627]: 2025-10-14 09:22:29.384 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] VM Paused (Lifecycle Event)
Oct 14 05:22:29 np0005486808 nova_compute[259627]: 2025-10-14 09:22:29.422 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:22:29 np0005486808 nova_compute[259627]: 2025-10-14 09:22:29.427 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433749.3241954, 6810b29b-088f-441b-8a6a-02eaafada0c5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:22:29 np0005486808 nova_compute[259627]: 2025-10-14 09:22:29.427 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] VM Resumed (Lifecycle Event)
Oct 14 05:22:29 np0005486808 nova_compute[259627]: 2025-10-14 09:22:29.448 2 INFO nova.compute.manager [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Took 6.53 seconds to spawn the instance on the hypervisor.
Oct 14 05:22:29 np0005486808 nova_compute[259627]: 2025-10-14 09:22:29.448 2 DEBUG nova.compute.manager [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:22:29 np0005486808 nova_compute[259627]: 2025-10-14 09:22:29.455 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:22:29 np0005486808 nova_compute[259627]: 2025-10-14 09:22:29.458 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 05:22:29 np0005486808 nova_compute[259627]: 2025-10-14 09:22:29.499 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 05:22:29 np0005486808 nova_compute[259627]: 2025-10-14 09:22:29.524 2 INFO nova.compute.manager [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Took 7.58 seconds to build instance.
Oct 14 05:22:29 np0005486808 nova_compute[259627]: 2025-10-14 09:22:29.543 2 DEBUG oslo_concurrency.lockutils [None req-7112ea7f-e7ac-4904-8f9f-4724ec55897d f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "6810b29b-088f-441b-8a6a-02eaafada0c5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:22:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2069: 305 pgs: 305 active+clean; 213 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 171 op/s
Oct 14 05:22:30 np0005486808 nova_compute[259627]: 2025-10-14 09:22:30.649 2 DEBUG nova.objects.instance [None req-0c1638d0-0dff-4b85-b950-232854edefaa 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 725ed629-f7d5-4a69-be5e-4cae3eef2e2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:22:30 np0005486808 podman[379010]: 2025-10-14 09:22:30.677041541 +0000 UTC m=+0.077984508 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:22:30 np0005486808 nova_compute[259627]: 2025-10-14 09:22:30.679 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433750.6793401, 725ed629-f7d5-4a69-be5e-4cae3eef2e2e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:22:30 np0005486808 nova_compute[259627]: 2025-10-14 09:22:30.679 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] VM Paused (Lifecycle Event)
Oct 14 05:22:30 np0005486808 nova_compute[259627]: 2025-10-14 09:22:30.707 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:22:30 np0005486808 nova_compute[259627]: 2025-10-14 09:22:30.711 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 05:22:30 np0005486808 podman[379009]: 2025-10-14 09:22:30.728598095 +0000 UTC m=+0.126157408 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:22:30 np0005486808 nova_compute[259627]: 2025-10-14 09:22:30.749 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] During sync_power_state the instance has a pending task (suspending). Skip.
Oct 14 05:22:31 np0005486808 kernel: tapc47025b4-90 (unregistering): left promiscuous mode
Oct 14 05:22:31 np0005486808 NetworkManager[44885]: <info>  [1760433751.0477] device (tapc47025b4-90): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:22:31 np0005486808 nova_compute[259627]: 2025-10-14 09:22:31.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:22:31 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:31Z|01260|binding|INFO|Releasing lport c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 from this chassis (sb_readonly=0)
Oct 14 05:22:31 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:31Z|01261|binding|INFO|Setting lport c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 down in Southbound
Oct 14 05:22:31 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:31Z|01262|binding|INFO|Removing iface tapc47025b4-90 ovn-installed in OVS
Oct 14 05:22:31 np0005486808 nova_compute[259627]: 2025-10-14 09:22:31.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:22:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:31.074 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:7a:fe 10.100.0.13'], port_security=['fa:16:3e:39:7a:fe 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '725ed629-f7d5-4a69-be5e-4cae3eef2e2e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15c99679-b8ca-4f31-bf52-be40d7b4f023', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9a71d13aebeb4969b1877a33505f3dc4', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'eaea1bee-fc2e-4983-8a2c-80f8f52b9e8d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=27060d93-efe1-4dd7-9738-9f62e5f1629d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=c47025b4-9051-4cc4-9fb7-70cd59d6c5c5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 05:22:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:31.077 162547 INFO neutron.agent.ovn.metadata.agent [-] Port c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 in datapath 15c99679-b8ca-4f31-bf52-be40d7b4f023 unbound from our chassis
Oct 14 05:22:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:31.078 162547 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 15c99679-b8ca-4f31-bf52-be40d7b4f023 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 14 05:22:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:31.079 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e579a2e0-4c68-4494-9f74-1569b113eb16]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:22:31 np0005486808 nova_compute[259627]: 2025-10-14 09:22:31.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:31 np0005486808 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d00000076.scope: Deactivated successfully.
Oct 14 05:22:31 np0005486808 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d00000076.scope: Consumed 3.606s CPU time.
Oct 14 05:22:31 np0005486808 systemd-machined[214636]: Machine qemu-150-instance-00000076 terminated.
Oct 14 05:22:31 np0005486808 nova_compute[259627]: 2025-10-14 09:22:31.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:31 np0005486808 nova_compute[259627]: 2025-10-14 09:22:31.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:31 np0005486808 nova_compute[259627]: 2025-10-14 09:22:31.225 2 DEBUG nova.compute.manager [None req-0c1638d0-0dff-4b85-b950-232854edefaa 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:22:31 np0005486808 nova_compute[259627]: 2025-10-14 09:22:31.355 2 DEBUG nova.compute.manager [req-87080679-9e03-4cb5-963f-fb2ea7b25edd req-7da96a39-7710-4de5-b914-0286c00caa88 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received event network-vif-unplugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:22:31 np0005486808 nova_compute[259627]: 2025-10-14 09:22:31.357 2 DEBUG oslo_concurrency.lockutils [req-87080679-9e03-4cb5-963f-fb2ea7b25edd req-7da96a39-7710-4de5-b914-0286c00caa88 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:22:31 np0005486808 nova_compute[259627]: 2025-10-14 09:22:31.357 2 DEBUG oslo_concurrency.lockutils [req-87080679-9e03-4cb5-963f-fb2ea7b25edd req-7da96a39-7710-4de5-b914-0286c00caa88 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:22:31 np0005486808 nova_compute[259627]: 2025-10-14 09:22:31.358 2 DEBUG oslo_concurrency.lockutils [req-87080679-9e03-4cb5-963f-fb2ea7b25edd req-7da96a39-7710-4de5-b914-0286c00caa88 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:22:31 np0005486808 nova_compute[259627]: 2025-10-14 09:22:31.358 2 DEBUG nova.compute.manager [req-87080679-9e03-4cb5-963f-fb2ea7b25edd req-7da96a39-7710-4de5-b914-0286c00caa88 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] No waiting events found dispatching network-vif-unplugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:22:31 np0005486808 nova_compute[259627]: 2025-10-14 09:22:31.358 2 WARNING nova.compute.manager [req-87080679-9e03-4cb5-963f-fb2ea7b25edd req-7da96a39-7710-4de5-b914-0286c00caa88 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received unexpected event network-vif-unplugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 for instance with vm_state suspended and task_state None.#033[00m
Oct 14 05:22:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2070: 305 pgs: 305 active+clean; 214 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 173 op/s
Oct 14 05:22:32 np0005486808 nova_compute[259627]: 2025-10-14 09:22:32.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:32 np0005486808 nova_compute[259627]: 2025-10-14 09:22:32.391 2 DEBUG oslo_concurrency.lockutils [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "interface-50c83173-31e3-4f7a-8836-26e52affd0f2-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:22:32 np0005486808 nova_compute[259627]: 2025-10-14 09:22:32.392 2 DEBUG oslo_concurrency.lockutils [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "interface-50c83173-31e3-4f7a-8836-26e52affd0f2-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:22:32 np0005486808 nova_compute[259627]: 2025-10-14 09:22:32.392 2 DEBUG nova.objects.instance [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'flavor' on Instance uuid 50c83173-31e3-4f7a-8836-26e52affd0f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:22:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:22:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:22:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:22:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:22:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:22:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:22:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:22:32
Oct 14 05:22:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:22:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:22:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', 'volumes', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', '.rgw.root']
Oct 14 05:22:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:22:32 np0005486808 nova_compute[259627]: 2025-10-14 09:22:32.944 2 DEBUG nova.objects.instance [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'pci_requests' on Instance uuid 50c83173-31e3-4f7a-8836-26e52affd0f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:22:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:22:32 np0005486808 nova_compute[259627]: 2025-10-14 09:22:32.959 2 DEBUG nova.network.neutron [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:22:33 np0005486808 nova_compute[259627]: 2025-10-14 09:22:33.085 2 INFO nova.compute.manager [None req-fd16ab36-f690-44ef-ab63-7c7d9212b839 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Resuming#033[00m
Oct 14 05:22:33 np0005486808 nova_compute[259627]: 2025-10-14 09:22:33.086 2 DEBUG nova.objects.instance [None req-fd16ab36-f690-44ef-ab63-7c7d9212b839 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lazy-loading 'flavor' on Instance uuid 725ed629-f7d5-4a69-be5e-4cae3eef2e2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:22:33 np0005486808 nova_compute[259627]: 2025-10-14 09:22:33.117 2 DEBUG oslo_concurrency.lockutils [None req-fd16ab36-f690-44ef-ab63-7c7d9212b839 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Acquiring lock "refresh_cache-725ed629-f7d5-4a69-be5e-4cae3eef2e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:22:33 np0005486808 nova_compute[259627]: 2025-10-14 09:22:33.117 2 DEBUG oslo_concurrency.lockutils [None req-fd16ab36-f690-44ef-ab63-7c7d9212b839 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Acquired lock "refresh_cache-725ed629-f7d5-4a69-be5e-4cae3eef2e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:22:33 np0005486808 nova_compute[259627]: 2025-10-14 09:22:33.117 2 DEBUG nova.network.neutron [None req-fd16ab36-f690-44ef-ab63-7c7d9212b839 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:22:33 np0005486808 nova_compute[259627]: 2025-10-14 09:22:33.166 2 DEBUG nova.policy [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '10926c27278e45e7b995f2e53b9d16f9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4562699bdb1548f1bb36819107535620', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:22:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:22:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:22:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:22:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:22:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:22:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:22:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:22:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:22:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:22:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:22:33 np0005486808 ceph-mgr[74543]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3625056923
Oct 14 05:22:34 np0005486808 nova_compute[259627]: 2025-10-14 09:22:34.108 2 DEBUG nova.compute.manager [req-b2051e75-52ef-4d3c-a257-ef2af26e0c60 req-0eceeda7-48d8-4676-bead-07a5051a5740 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received event network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:22:34 np0005486808 nova_compute[259627]: 2025-10-14 09:22:34.108 2 DEBUG oslo_concurrency.lockutils [req-b2051e75-52ef-4d3c-a257-ef2af26e0c60 req-0eceeda7-48d8-4676-bead-07a5051a5740 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:22:34 np0005486808 nova_compute[259627]: 2025-10-14 09:22:34.109 2 DEBUG oslo_concurrency.lockutils [req-b2051e75-52ef-4d3c-a257-ef2af26e0c60 req-0eceeda7-48d8-4676-bead-07a5051a5740 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:22:34 np0005486808 nova_compute[259627]: 2025-10-14 09:22:34.109 2 DEBUG oslo_concurrency.lockutils [req-b2051e75-52ef-4d3c-a257-ef2af26e0c60 req-0eceeda7-48d8-4676-bead-07a5051a5740 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:22:34 np0005486808 nova_compute[259627]: 2025-10-14 09:22:34.109 2 DEBUG nova.compute.manager [req-b2051e75-52ef-4d3c-a257-ef2af26e0c60 req-0eceeda7-48d8-4676-bead-07a5051a5740 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] No waiting events found dispatching network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:22:34 np0005486808 nova_compute[259627]: 2025-10-14 09:22:34.110 2 WARNING nova.compute.manager [req-b2051e75-52ef-4d3c-a257-ef2af26e0c60 req-0eceeda7-48d8-4676-bead-07a5051a5740 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received unexpected event network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 for instance with vm_state suspended and task_state resuming.#033[00m
Oct 14 05:22:34 np0005486808 nova_compute[259627]: 2025-10-14 09:22:34.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2071: 305 pgs: 305 active+clean; 214 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.9 MiB/s wr, 125 op/s
Oct 14 05:22:34 np0005486808 nova_compute[259627]: 2025-10-14 09:22:34.276 2 DEBUG nova.network.neutron [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Successfully created port: ff2b9b74-a6fc-4774-89d2-9c010f121d65 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:22:35 np0005486808 nova_compute[259627]: 2025-10-14 09:22:35.028 2 DEBUG nova.network.neutron [None req-fd16ab36-f690-44ef-ab63-7c7d9212b839 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Updating instance_info_cache with network_info: [{"id": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "address": "fa:16:3e:39:7a:fe", "network": {"id": "15c99679-b8ca-4f31-bf52-be40d7b4f023", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-427975149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "9a71d13aebeb4969b1877a33505f3dc4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc47025b4-90", "ovs_interfaceid": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:22:35 np0005486808 nova_compute[259627]: 2025-10-14 09:22:35.044 2 DEBUG oslo_concurrency.lockutils [None req-fd16ab36-f690-44ef-ab63-7c7d9212b839 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Releasing lock "refresh_cache-725ed629-f7d5-4a69-be5e-4cae3eef2e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:22:35 np0005486808 nova_compute[259627]: 2025-10-14 09:22:35.052 2 DEBUG nova.virt.libvirt.vif [None req-fd16ab36-f690-44ef-ab63-7c7d9212b839 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:22:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1481398530',display_name='tempest-TestServerAdvancedOps-server-1481398530',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1481398530',id=118,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:22:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='9a71d13aebeb4969b1877a33505f3dc4',ramdisk_id='',reservation_id='r-okkb6j7x',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-20904479',owner_user_name='tempest-TestServerAdvancedOps-20904479-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:22:31Z,user_data=None,user_id='7629c3d96333470aa7d7ed5cabfc7e2c',uuid=725ed629-f7d5-4a69-be5e-4cae3eef2e2e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "address": "fa:16:3e:39:7a:fe", "network": {"id": "15c99679-b8ca-4f31-bf52-be40d7b4f023", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-427975149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "9a71d13aebeb4969b1877a33505f3dc4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc47025b4-90", "ovs_interfaceid": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:22:35 np0005486808 nova_compute[259627]: 2025-10-14 09:22:35.053 2 DEBUG nova.network.os_vif_util [None req-fd16ab36-f690-44ef-ab63-7c7d9212b839 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Converting VIF {"id": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "address": "fa:16:3e:39:7a:fe", "network": {"id": "15c99679-b8ca-4f31-bf52-be40d7b4f023", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-427975149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "9a71d13aebeb4969b1877a33505f3dc4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc47025b4-90", "ovs_interfaceid": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:22:35 np0005486808 nova_compute[259627]: 2025-10-14 09:22:35.054 2 DEBUG nova.network.os_vif_util [None req-fd16ab36-f690-44ef-ab63-7c7d9212b839 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:7a:fe,bridge_name='br-int',has_traffic_filtering=True,id=c47025b4-9051-4cc4-9fb7-70cd59d6c5c5,network=Network(15c99679-b8ca-4f31-bf52-be40d7b4f023),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc47025b4-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:22:35 np0005486808 nova_compute[259627]: 2025-10-14 09:22:35.055 2 DEBUG os_vif [None req-fd16ab36-f690-44ef-ab63-7c7d9212b839 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:7a:fe,bridge_name='br-int',has_traffic_filtering=True,id=c47025b4-9051-4cc4-9fb7-70cd59d6c5c5,network=Network(15c99679-b8ca-4f31-bf52-be40d7b4f023),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc47025b4-90') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:22:35 np0005486808 nova_compute[259627]: 2025-10-14 09:22:35.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:35 np0005486808 nova_compute[259627]: 2025-10-14 09:22:35.057 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:22:35 np0005486808 nova_compute[259627]: 2025-10-14 09:22:35.058 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:22:35 np0005486808 nova_compute[259627]: 2025-10-14 09:22:35.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:35 np0005486808 nova_compute[259627]: 2025-10-14 09:22:35.064 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc47025b4-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:22:35 np0005486808 nova_compute[259627]: 2025-10-14 09:22:35.065 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc47025b4-90, col_values=(('external_ids', {'iface-id': 'c47025b4-9051-4cc4-9fb7-70cd59d6c5c5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:39:7a:fe', 'vm-uuid': '725ed629-f7d5-4a69-be5e-4cae3eef2e2e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:22:35 np0005486808 nova_compute[259627]: 2025-10-14 09:22:35.066 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:22:35 np0005486808 nova_compute[259627]: 2025-10-14 09:22:35.067 2 INFO os_vif [None req-fd16ab36-f690-44ef-ab63-7c7d9212b839 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:7a:fe,bridge_name='br-int',has_traffic_filtering=True,id=c47025b4-9051-4cc4-9fb7-70cd59d6c5c5,network=Network(15c99679-b8ca-4f31-bf52-be40d7b4f023),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc47025b4-90')#033[00m
Oct 14 05:22:35 np0005486808 nova_compute[259627]: 2025-10-14 09:22:35.087 2 DEBUG nova.objects.instance [None req-fd16ab36-f690-44ef-ab63-7c7d9212b839 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lazy-loading 'numa_topology' on Instance uuid 725ed629-f7d5-4a69-be5e-4cae3eef2e2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:22:35 np0005486808 kernel: tapc47025b4-90: entered promiscuous mode
Oct 14 05:22:35 np0005486808 nova_compute[259627]: 2025-10-14 09:22:35.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:35 np0005486808 NetworkManager[44885]: <info>  [1760433755.1717] manager: (tapc47025b4-90): new Tun device (/org/freedesktop/NetworkManager/Devices/518)
Oct 14 05:22:35 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:35Z|01263|binding|INFO|Claiming lport c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 for this chassis.
Oct 14 05:22:35 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:35Z|01264|binding|INFO|c47025b4-9051-4cc4-9fb7-70cd59d6c5c5: Claiming fa:16:3e:39:7a:fe 10.100.0.13
Oct 14 05:22:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:35.179 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:7a:fe 10.100.0.13'], port_security=['fa:16:3e:39:7a:fe 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '725ed629-f7d5-4a69-be5e-4cae3eef2e2e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15c99679-b8ca-4f31-bf52-be40d7b4f023', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9a71d13aebeb4969b1877a33505f3dc4', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'eaea1bee-fc2e-4983-8a2c-80f8f52b9e8d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=27060d93-efe1-4dd7-9738-9f62e5f1629d, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=c47025b4-9051-4cc4-9fb7-70cd59d6c5c5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:22:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:35.181 162547 INFO neutron.agent.ovn.metadata.agent [-] Port c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 in datapath 15c99679-b8ca-4f31-bf52-be40d7b4f023 bound to our chassis#033[00m
Oct 14 05:22:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:35.182 162547 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 15c99679-b8ca-4f31-bf52-be40d7b4f023 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct 14 05:22:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:35.182 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[57685871-7e3f-42cb-8662-a66f58956c3f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:22:35 np0005486808 systemd-udevd[379080]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:22:35 np0005486808 nova_compute[259627]: 2025-10-14 09:22:35.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:35 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:35Z|01265|binding|INFO|Setting lport c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 ovn-installed in OVS
Oct 14 05:22:35 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:35Z|01266|binding|INFO|Setting lport c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 up in Southbound
Oct 14 05:22:35 np0005486808 nova_compute[259627]: 2025-10-14 09:22:35.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:35 np0005486808 NetworkManager[44885]: <info>  [1760433755.2072] device (tapc47025b4-90): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:22:35 np0005486808 NetworkManager[44885]: <info>  [1760433755.2080] device (tapc47025b4-90): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:22:35 np0005486808 systemd-machined[214636]: New machine qemu-152-instance-00000076.
Oct 14 05:22:35 np0005486808 systemd[1]: Started Virtual Machine qemu-152-instance-00000076.
Oct 14 05:22:35 np0005486808 nova_compute[259627]: 2025-10-14 09:22:35.860 2 DEBUG nova.network.neutron [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Successfully updated port: ff2b9b74-a6fc-4774-89d2-9c010f121d65 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:22:35 np0005486808 nova_compute[259627]: 2025-10-14 09:22:35.901 2 DEBUG oslo_concurrency.lockutils [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:22:35 np0005486808 nova_compute[259627]: 2025-10-14 09:22:35.902 2 DEBUG oslo_concurrency.lockutils [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquired lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:22:35 np0005486808 nova_compute[259627]: 2025-10-14 09:22:35.902 2 DEBUG nova.network.neutron [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:22:35 np0005486808 nova_compute[259627]: 2025-10-14 09:22:35.962 2 DEBUG nova.compute.manager [req-4208c145-34ab-4200-a4bd-58cc3c10af95 req-692c48f1-b7a3-4ff7-a30e-7c6ba01d4333 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Received event network-changed-ff2b9b74-a6fc-4774-89d2-9c010f121d65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:22:35 np0005486808 nova_compute[259627]: 2025-10-14 09:22:35.963 2 DEBUG nova.compute.manager [req-4208c145-34ab-4200-a4bd-58cc3c10af95 req-692c48f1-b7a3-4ff7-a30e-7c6ba01d4333 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Refreshing instance network info cache due to event network-changed-ff2b9b74-a6fc-4774-89d2-9c010f121d65. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:22:35 np0005486808 nova_compute[259627]: 2025-10-14 09:22:35.964 2 DEBUG oslo_concurrency.lockutils [req-4208c145-34ab-4200-a4bd-58cc3c10af95 req-692c48f1-b7a3-4ff7-a30e-7c6ba01d4333 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:22:36 np0005486808 nova_compute[259627]: 2025-10-14 09:22:36.196 2 DEBUG nova.virt.libvirt.host [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Removed pending event for 725ed629-f7d5-4a69-be5e-4cae3eef2e2e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct 14 05:22:36 np0005486808 nova_compute[259627]: 2025-10-14 09:22:36.197 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433756.1961772, 725ed629-f7d5-4a69-be5e-4cae3eef2e2e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:22:36 np0005486808 nova_compute[259627]: 2025-10-14 09:22:36.198 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] VM Started (Lifecycle Event)#033[00m
Oct 14 05:22:36 np0005486808 nova_compute[259627]: 2025-10-14 09:22:36.205 2 DEBUG nova.compute.manager [req-ef5720d7-6aed-4be3-8c0d-1d8ab912d685 req-59f1dd6b-1cca-41b0-8c5f-51a874324f02 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Received event network-changed-6d5e10b7-5c07-4389-8916-e7c277cb2c88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:22:36 np0005486808 nova_compute[259627]: 2025-10-14 09:22:36.205 2 DEBUG nova.compute.manager [req-ef5720d7-6aed-4be3-8c0d-1d8ab912d685 req-59f1dd6b-1cca-41b0-8c5f-51a874324f02 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Refreshing instance network info cache due to event network-changed-6d5e10b7-5c07-4389-8916-e7c277cb2c88. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:22:36 np0005486808 nova_compute[259627]: 2025-10-14 09:22:36.206 2 DEBUG oslo_concurrency.lockutils [req-ef5720d7-6aed-4be3-8c0d-1d8ab912d685 req-59f1dd6b-1cca-41b0-8c5f-51a874324f02 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-6810b29b-088f-441b-8a6a-02eaafada0c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:22:36 np0005486808 nova_compute[259627]: 2025-10-14 09:22:36.206 2 DEBUG oslo_concurrency.lockutils [req-ef5720d7-6aed-4be3-8c0d-1d8ab912d685 req-59f1dd6b-1cca-41b0-8c5f-51a874324f02 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-6810b29b-088f-441b-8a6a-02eaafada0c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:22:36 np0005486808 nova_compute[259627]: 2025-10-14 09:22:36.206 2 DEBUG nova.network.neutron [req-ef5720d7-6aed-4be3-8c0d-1d8ab912d685 req-59f1dd6b-1cca-41b0-8c5f-51a874324f02 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Refreshing network info cache for port 6d5e10b7-5c07-4389-8916-e7c277cb2c88 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:22:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2072: 305 pgs: 305 active+clean; 214 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.9 MiB/s wr, 155 op/s
Oct 14 05:22:36 np0005486808 nova_compute[259627]: 2025-10-14 09:22:36.220 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:22:36 np0005486808 nova_compute[259627]: 2025-10-14 09:22:36.229 2 DEBUG nova.compute.manager [None req-fd16ab36-f690-44ef-ab63-7c7d9212b839 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:22:36 np0005486808 nova_compute[259627]: 2025-10-14 09:22:36.229 2 DEBUG nova.objects.instance [None req-fd16ab36-f690-44ef-ab63-7c7d9212b839 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 725ed629-f7d5-4a69-be5e-4cae3eef2e2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:22:36 np0005486808 nova_compute[259627]: 2025-10-14 09:22:36.232 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:22:36 np0005486808 nova_compute[259627]: 2025-10-14 09:22:36.254 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Oct 14 05:22:36 np0005486808 nova_compute[259627]: 2025-10-14 09:22:36.255 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433756.205775, 725ed629-f7d5-4a69-be5e-4cae3eef2e2e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:22:36 np0005486808 nova_compute[259627]: 2025-10-14 09:22:36.255 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:22:36 np0005486808 nova_compute[259627]: 2025-10-14 09:22:36.257 2 INFO nova.virt.libvirt.driver [-] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Instance running successfully.#033[00m
Oct 14 05:22:36 np0005486808 virtqemud[259351]: argument unsupported: QEMU guest agent is not configured
Oct 14 05:22:36 np0005486808 nova_compute[259627]: 2025-10-14 09:22:36.260 2 DEBUG nova.virt.libvirt.guest [None req-fd16ab36-f690-44ef-ab63-7c7d9212b839 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct 14 05:22:36 np0005486808 nova_compute[259627]: 2025-10-14 09:22:36.261 2 DEBUG nova.compute.manager [None req-fd16ab36-f690-44ef-ab63-7c7d9212b839 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:22:36 np0005486808 nova_compute[259627]: 2025-10-14 09:22:36.272 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:22:36 np0005486808 nova_compute[259627]: 2025-10-14 09:22:36.276 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:22:36 np0005486808 nova_compute[259627]: 2025-10-14 09:22:36.298 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Oct 14 05:22:36 np0005486808 nova_compute[259627]: 2025-10-14 09:22:36.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:22:36 np0005486808 nova_compute[259627]: 2025-10-14 09:22:36.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:22:36 np0005486808 nova_compute[259627]: 2025-10-14 09:22:36.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:36.999 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.000 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.000 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.000 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.001 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:22:37 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1271480247' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.473 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.524 2 DEBUG oslo_concurrency.lockutils [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Acquiring lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.525 2 DEBUG oslo_concurrency.lockutils [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.525 2 DEBUG oslo_concurrency.lockutils [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Acquiring lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.526 2 DEBUG oslo_concurrency.lockutils [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.526 2 DEBUG oslo_concurrency.lockutils [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.528 2 INFO nova.compute.manager [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Terminating instance#033[00m
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.529 2 DEBUG nova.compute.manager [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:22:37 np0005486808 kernel: tapc47025b4-90 (unregistering): left promiscuous mode
Oct 14 05:22:37 np0005486808 NetworkManager[44885]: <info>  [1760433757.5616] device (tapc47025b4-90): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.571 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.571 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:37 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:37Z|01267|binding|INFO|Releasing lport c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 from this chassis (sb_readonly=0)
Oct 14 05:22:37 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:37Z|01268|binding|INFO|Setting lport c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 down in Southbound
Oct 14 05:22:37 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:37Z|01269|binding|INFO|Removing iface tapc47025b4-90 ovn-installed in OVS
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:37.588 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:7a:fe 10.100.0.13'], port_security=['fa:16:3e:39:7a:fe 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '725ed629-f7d5-4a69-be5e-4cae3eef2e2e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15c99679-b8ca-4f31-bf52-be40d7b4f023', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9a71d13aebeb4969b1877a33505f3dc4', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'eaea1bee-fc2e-4983-8a2c-80f8f52b9e8d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=27060d93-efe1-4dd7-9738-9f62e5f1629d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=c47025b4-9051-4cc4-9fb7-70cd59d6c5c5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:22:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:37.589 162547 INFO neutron.agent.ovn.metadata.agent [-] Port c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 in datapath 15c99679-b8ca-4f31-bf52-be40d7b4f023 unbound from our chassis#033[00m
Oct 14 05:22:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:37.590 162547 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 15c99679-b8ca-4f31-bf52-be40d7b4f023 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct 14 05:22:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:37.591 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[826f5606-1a0c-4e59-bcf3-344e0b0a0b67]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:37 np0005486808 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d00000076.scope: Deactivated successfully.
Oct 14 05:22:37 np0005486808 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d00000076.scope: Consumed 2.140s CPU time.
Oct 14 05:22:37 np0005486808 systemd-machined[214636]: Machine qemu-152-instance-00000076 terminated.
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.737 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.738 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.742 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.742 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.775 2 INFO nova.virt.libvirt.driver [-] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Instance destroyed successfully.#033[00m
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.776 2 DEBUG nova.objects.instance [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lazy-loading 'resources' on Instance uuid 725ed629-f7d5-4a69-be5e-4cae3eef2e2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.786 2 DEBUG nova.network.neutron [req-ef5720d7-6aed-4be3-8c0d-1d8ab912d685 req-59f1dd6b-1cca-41b0-8c5f-51a874324f02 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Updated VIF entry in instance network info cache for port 6d5e10b7-5c07-4389-8916-e7c277cb2c88. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.787 2 DEBUG nova.network.neutron [req-ef5720d7-6aed-4be3-8c0d-1d8ab912d685 req-59f1dd6b-1cca-41b0-8c5f-51a874324f02 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Updating instance_info_cache with network_info: [{"id": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "address": "fa:16:3e:7c:22:e5", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d5e10b7-5c", "ovs_interfaceid": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.790 2 DEBUG nova.virt.libvirt.vif [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:22:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1481398530',display_name='tempest-TestServerAdvancedOps-server-1481398530',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1481398530',id=118,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:22:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9a71d13aebeb4969b1877a33505f3dc4',ramdisk_id='',reservation_id='r-okkb6j7x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_
disk='1',image_min_ram='0',owner_project_name='tempest-TestServerAdvancedOps-20904479',owner_user_name='tempest-TestServerAdvancedOps-20904479-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:22:36Z,user_data=None,user_id='7629c3d96333470aa7d7ed5cabfc7e2c',uuid=725ed629-f7d5-4a69-be5e-4cae3eef2e2e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "address": "fa:16:3e:39:7a:fe", "network": {"id": "15c99679-b8ca-4f31-bf52-be40d7b4f023", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-427975149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "9a71d13aebeb4969b1877a33505f3dc4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc47025b4-90", "ovs_interfaceid": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.790 2 DEBUG nova.network.os_vif_util [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Converting VIF {"id": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "address": "fa:16:3e:39:7a:fe", "network": {"id": "15c99679-b8ca-4f31-bf52-be40d7b4f023", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-427975149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "9a71d13aebeb4969b1877a33505f3dc4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc47025b4-90", "ovs_interfaceid": "c47025b4-9051-4cc4-9fb7-70cd59d6c5c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.791 2 DEBUG nova.network.os_vif_util [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:7a:fe,bridge_name='br-int',has_traffic_filtering=True,id=c47025b4-9051-4cc4-9fb7-70cd59d6c5c5,network=Network(15c99679-b8ca-4f31-bf52-be40d7b4f023),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc47025b4-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.791 2 DEBUG os_vif [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:7a:fe,bridge_name='br-int',has_traffic_filtering=True,id=c47025b4-9051-4cc4-9fb7-70cd59d6c5c5,network=Network(15c99679-b8ca-4f31-bf52-be40d7b4f023),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc47025b4-90') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.795 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc47025b4-90, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.802 2 INFO os_vif [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:7a:fe,bridge_name='br-int',has_traffic_filtering=True,id=c47025b4-9051-4cc4-9fb7-70cd59d6c5c5,network=Network(15c99679-b8ca-4f31-bf52-be40d7b4f023),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc47025b4-90')#033[00m
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.823 2 DEBUG oslo_concurrency.lockutils [req-ef5720d7-6aed-4be3-8c0d-1d8ab912d685 req-59f1dd6b-1cca-41b0-8c5f-51a874324f02 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-6810b29b-088f-441b-8a6a-02eaafada0c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.824 2 DEBUG nova.compute.manager [req-ef5720d7-6aed-4be3-8c0d-1d8ab912d685 req-59f1dd6b-1cca-41b0-8c5f-51a874324f02 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received event network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.824 2 DEBUG oslo_concurrency.lockutils [req-ef5720d7-6aed-4be3-8c0d-1d8ab912d685 req-59f1dd6b-1cca-41b0-8c5f-51a874324f02 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.825 2 DEBUG oslo_concurrency.lockutils [req-ef5720d7-6aed-4be3-8c0d-1d8ab912d685 req-59f1dd6b-1cca-41b0-8c5f-51a874324f02 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.825 2 DEBUG oslo_concurrency.lockutils [req-ef5720d7-6aed-4be3-8c0d-1d8ab912d685 req-59f1dd6b-1cca-41b0-8c5f-51a874324f02 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.826 2 DEBUG nova.compute.manager [req-ef5720d7-6aed-4be3-8c0d-1d8ab912d685 req-59f1dd6b-1cca-41b0-8c5f-51a874324f02 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] No waiting events found dispatching network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.826 2 WARNING nova.compute.manager [req-ef5720d7-6aed-4be3-8c0d-1d8ab912d685 req-59f1dd6b-1cca-41b0-8c5f-51a874324f02 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received unexpected event network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 for instance with vm_state suspended and task_state resuming.#033[00m
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.826 2 DEBUG nova.compute.manager [req-ef5720d7-6aed-4be3-8c0d-1d8ab912d685 req-59f1dd6b-1cca-41b0-8c5f-51a874324f02 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received event network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.827 2 DEBUG oslo_concurrency.lockutils [req-ef5720d7-6aed-4be3-8c0d-1d8ab912d685 req-59f1dd6b-1cca-41b0-8c5f-51a874324f02 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.827 2 DEBUG oslo_concurrency.lockutils [req-ef5720d7-6aed-4be3-8c0d-1d8ab912d685 req-59f1dd6b-1cca-41b0-8c5f-51a874324f02 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.827 2 DEBUG oslo_concurrency.lockutils [req-ef5720d7-6aed-4be3-8c0d-1d8ab912d685 req-59f1dd6b-1cca-41b0-8c5f-51a874324f02 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.828 2 DEBUG nova.compute.manager [req-ef5720d7-6aed-4be3-8c0d-1d8ab912d685 req-59f1dd6b-1cca-41b0-8c5f-51a874324f02 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] No waiting events found dispatching network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:22:37 np0005486808 nova_compute[259627]: 2025-10-14 09:22:37.828 2 WARNING nova.compute.manager [req-ef5720d7-6aed-4be3-8c0d-1d8ab912d685 req-59f1dd6b-1cca-41b0-8c5f-51a874324f02 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received unexpected event network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 for instance with vm_state suspended and task_state resuming.#033[00m
Oct 14 05:22:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.049 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.050 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3279MB free_disk=59.90092849731445GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.050 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.050 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.151 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 50c83173-31e3-4f7a-8836-26e52affd0f2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.152 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 725ed629-f7d5-4a69-be5e-4cae3eef2e2e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.152 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 6810b29b-088f-441b-8a6a-02eaafada0c5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.152 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.152 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.200 2 INFO nova.virt.libvirt.driver [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Deleting instance files /var/lib/nova/instances/725ed629-f7d5-4a69-be5e-4cae3eef2e2e_del#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.201 2 INFO nova.virt.libvirt.driver [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Deletion of /var/lib/nova/instances/725ed629-f7d5-4a69-be5e-4cae3eef2e2e_del complete#033[00m
Oct 14 05:22:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2073: 305 pgs: 305 active+clean; 214 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 81 op/s
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.225 2 DEBUG nova.network.neutron [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Updating instance_info_cache with network_info: [{"id": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "address": "fa:16:3e:32:f2:3f", "network": {"id": "99e78054-f9f4-417c-a942-d4f9dd534ef7", "bridge": "br-int", "label": "tempest-network-smoke--1093348429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81977d79-f7", "ovs_interfaceid": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "address": "fa:16:3e:17:98:a5", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff2b9b74-a6", "ovs_interfaceid": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.229 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.267 2 DEBUG oslo_concurrency.lockutils [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Releasing lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.269 2 DEBUG oslo_concurrency.lockutils [req-4208c145-34ab-4200-a4bd-58cc3c10af95 req-692c48f1-b7a3-4ff7-a30e-7c6ba01d4333 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.270 2 DEBUG nova.network.neutron [req-4208c145-34ab-4200-a4bd-58cc3c10af95 req-692c48f1-b7a3-4ff7-a30e-7c6ba01d4333 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Refreshing network info cache for port ff2b9b74-a6fc-4774-89d2-9c010f121d65 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.273 2 DEBUG nova.virt.libvirt.vif [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:21:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-805281293',display_name='tempest-TestNetworkBasicOps-server-805281293',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-805281293',id=117,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLemNDLm8oqFAwx6pW1SyO2V0WIcpPak25N5nC9IiUcuBV+L+dio1UtnHPJvvKiOY7HNqaGgptTx3l7dHlvUaMKtJM7efD9rBJOYA3Gu+LK7uF+l/NOZF2PoS0o9uk9l4A==',key_name='tempest-TestNetworkBasicOps-1122823922',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:22:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-ntom2zcq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:22:09Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=50c83173-31e3-4f7a-8836-26e52affd0f2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "address": "fa:16:3e:17:98:a5", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff2b9b74-a6", "ovs_interfaceid": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.273 2 DEBUG nova.network.os_vif_util [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "address": "fa:16:3e:17:98:a5", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff2b9b74-a6", "ovs_interfaceid": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.274 2 DEBUG nova.network.os_vif_util [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:98:a5,bridge_name='br-int',has_traffic_filtering=True,id=ff2b9b74-a6fc-4774-89d2-9c010f121d65,network=Network(39c21153-4a3d-40fd-91df-ae7d5dae4d8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff2b9b74-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.274 2 DEBUG os_vif [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:98:a5,bridge_name='br-int',has_traffic_filtering=True,id=ff2b9b74-a6fc-4774-89d2-9c010f121d65,network=Network(39c21153-4a3d-40fd-91df-ae7d5dae4d8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff2b9b74-a6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.277 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.278 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.281 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapff2b9b74-a6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.282 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapff2b9b74-a6, col_values=(('external_ids', {'iface-id': 'ff2b9b74-a6fc-4774-89d2-9c010f121d65', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:17:98:a5', 'vm-uuid': '50c83173-31e3-4f7a-8836-26e52affd0f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.283 2 INFO nova.compute.manager [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.283 2 DEBUG oslo.service.loopingcall [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:38 np0005486808 NetworkManager[44885]: <info>  [1760433758.2845] manager: (tapff2b9b74-a6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/519)
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.284 2 DEBUG nova.compute.manager [-] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.285 2 DEBUG nova.network.neutron [-] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.289 2 INFO os_vif [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:98:a5,bridge_name='br-int',has_traffic_filtering=True,id=ff2b9b74-a6fc-4774-89d2-9c010f121d65,network=Network(39c21153-4a3d-40fd-91df-ae7d5dae4d8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff2b9b74-a6')#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.290 2 DEBUG nova.virt.libvirt.vif [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:21:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-805281293',display_name='tempest-TestNetworkBasicOps-server-805281293',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-805281293',id=117,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLemNDLm8oqFAwx6pW1SyO2V0WIcpPak25N5nC9IiUcuBV+L+dio1UtnHPJvvKiOY7HNqaGgptTx3l7dHlvUaMKtJM7efD9rBJOYA3Gu+LK7uF+l/NOZF2PoS0o9uk9l4A==',key_name='tempest-TestNetworkBasicOps-1122823922',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:22:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-ntom2zcq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:22:09Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=50c83173-31e3-4f7a-8836-26e52affd0f2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "address": "fa:16:3e:17:98:a5", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff2b9b74-a6", "ovs_interfaceid": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.290 2 DEBUG nova.network.os_vif_util [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "address": "fa:16:3e:17:98:a5", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff2b9b74-a6", "ovs_interfaceid": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.291 2 DEBUG nova.network.os_vif_util [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:98:a5,bridge_name='br-int',has_traffic_filtering=True,id=ff2b9b74-a6fc-4774-89d2-9c010f121d65,network=Network(39c21153-4a3d-40fd-91df-ae7d5dae4d8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff2b9b74-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.296 2 DEBUG nova.virt.libvirt.guest [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] attach device xml: <interface type="ethernet">
Oct 14 05:22:38 np0005486808 nova_compute[259627]:  <mac address="fa:16:3e:17:98:a5"/>
Oct 14 05:22:38 np0005486808 nova_compute[259627]:  <model type="virtio"/>
Oct 14 05:22:38 np0005486808 nova_compute[259627]:  <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:22:38 np0005486808 nova_compute[259627]:  <mtu size="1442"/>
Oct 14 05:22:38 np0005486808 nova_compute[259627]:  <target dev="tapff2b9b74-a6"/>
Oct 14 05:22:38 np0005486808 nova_compute[259627]: </interface>
Oct 14 05:22:38 np0005486808 nova_compute[259627]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct 14 05:22:38 np0005486808 kernel: tapff2b9b74-a6: entered promiscuous mode
Oct 14 05:22:38 np0005486808 NetworkManager[44885]: <info>  [1760433758.3104] manager: (tapff2b9b74-a6): new Tun device (/org/freedesktop/NetworkManager/Devices/520)
Oct 14 05:22:38 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:38Z|01270|binding|INFO|Claiming lport ff2b9b74-a6fc-4774-89d2-9c010f121d65 for this chassis.
Oct 14 05:22:38 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:38Z|01271|binding|INFO|ff2b9b74-a6fc-4774-89d2-9c010f121d65: Claiming fa:16:3e:17:98:a5 10.100.0.18
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.324 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:98:a5 10.100.0.18'], port_security=['fa:16:3e:17:98:a5 10.100.0.18'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28', 'neutron:device_id': '50c83173-31e3-4f7a-8836-26e52affd0f2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39c21153-4a3d-40fd-91df-ae7d5dae4d8c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '2', 'neutron:security_group_ids': '72094e2b-bb7f-4f40-9ee7-497daf8e97ff', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab61dd9b-dbf7-46d4-89df-319a0a1fc6a6, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=ff2b9b74-a6fc-4774-89d2-9c010f121d65) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.325 162547 INFO neutron.agent.ovn.metadata.agent [-] Port ff2b9b74-a6fc-4774-89d2-9c010f121d65 in datapath 39c21153-4a3d-40fd-91df-ae7d5dae4d8c bound to our chassis#033[00m
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.327 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 39c21153-4a3d-40fd-91df-ae7d5dae4d8c#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.335 2 DEBUG nova.compute.manager [req-e887eb07-e53f-4f2c-8d18-830167cac808 req-853f2de4-72b5-4740-a490-97e3ecce561d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received event network-vif-unplugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.335 2 DEBUG oslo_concurrency.lockutils [req-e887eb07-e53f-4f2c-8d18-830167cac808 req-853f2de4-72b5-4740-a490-97e3ecce561d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.336 2 DEBUG oslo_concurrency.lockutils [req-e887eb07-e53f-4f2c-8d18-830167cac808 req-853f2de4-72b5-4740-a490-97e3ecce561d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.337 2 DEBUG oslo_concurrency.lockutils [req-e887eb07-e53f-4f2c-8d18-830167cac808 req-853f2de4-72b5-4740-a490-97e3ecce561d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.337 2 DEBUG nova.compute.manager [req-e887eb07-e53f-4f2c-8d18-830167cac808 req-853f2de4-72b5-4740-a490-97e3ecce561d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] No waiting events found dispatching network-vif-unplugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.337 2 DEBUG nova.compute.manager [req-e887eb07-e53f-4f2c-8d18-830167cac808 req-853f2de4-72b5-4740-a490-97e3ecce561d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received event network-vif-unplugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.337 2 DEBUG nova.compute.manager [req-e887eb07-e53f-4f2c-8d18-830167cac808 req-853f2de4-72b5-4740-a490-97e3ecce561d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received event network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.337 2 DEBUG oslo_concurrency.lockutils [req-e887eb07-e53f-4f2c-8d18-830167cac808 req-853f2de4-72b5-4740-a490-97e3ecce561d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.337 2 DEBUG oslo_concurrency.lockutils [req-e887eb07-e53f-4f2c-8d18-830167cac808 req-853f2de4-72b5-4740-a490-97e3ecce561d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.338 2 DEBUG oslo_concurrency.lockutils [req-e887eb07-e53f-4f2c-8d18-830167cac808 req-853f2de4-72b5-4740-a490-97e3ecce561d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.338 2 DEBUG nova.compute.manager [req-e887eb07-e53f-4f2c-8d18-830167cac808 req-853f2de4-72b5-4740-a490-97e3ecce561d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] No waiting events found dispatching network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.338 2 WARNING nova.compute.manager [req-e887eb07-e53f-4f2c-8d18-830167cac808 req-853f2de4-72b5-4740-a490-97e3ecce561d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received unexpected event network-vif-plugged-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 for instance with vm_state active and task_state deleting.#033[00m
Oct 14 05:22:38 np0005486808 systemd-udevd[379191]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.344 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bc968678-2f35-4616-bf80-3e324cd55e40]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.345 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap39c21153-41 in ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.347 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap39c21153-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.347 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[30c3fa5d-af83-497a-88d1-e104ecf67fab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.350 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[44d49f0c-b7ab-4931-9794-6efdaed5bb64]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:22:38 np0005486808 NetworkManager[44885]: <info>  [1760433758.3575] device (tapff2b9b74-a6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:22:38 np0005486808 NetworkManager[44885]: <info>  [1760433758.3613] device (tapff2b9b74-a6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.377 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[94348f75-ba8e-4033-b4f1-e3bbce8bdc0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.393 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[86053419-b36a-47cb-bb84-059609102392]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:22:38 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:38Z|01272|binding|INFO|Setting lport ff2b9b74-a6fc-4774-89d2-9c010f121d65 ovn-installed in OVS
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:38 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:38Z|01273|binding|INFO|Setting lport ff2b9b74-a6fc-4774-89d2-9c010f121d65 up in Southbound
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.404 2 DEBUG nova.virt.libvirt.driver [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.405 2 DEBUG nova.virt.libvirt.driver [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.405 2 DEBUG nova.virt.libvirt.driver [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No VIF found with MAC fa:16:3e:32:f2:3f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.405 2 DEBUG nova.virt.libvirt.driver [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No VIF found with MAC fa:16:3e:17:98:a5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.431 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1c7744c7-ea8e-4281-866a-5dc2f8143c97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.435 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f7fcba2d-3665-4a3c-94cd-24b3d19d8543]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:22:38 np0005486808 NetworkManager[44885]: <info>  [1760433758.4363] manager: (tap39c21153-40): new Veth device (/org/freedesktop/NetworkManager/Devices/521)
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.448 2 DEBUG nova.virt.libvirt.guest [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:22:38 np0005486808 nova_compute[259627]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:22:38 np0005486808 nova_compute[259627]:  <nova:name>tempest-TestNetworkBasicOps-server-805281293</nova:name>
Oct 14 05:22:38 np0005486808 nova_compute[259627]:  <nova:creationTime>2025-10-14 09:22:38</nova:creationTime>
Oct 14 05:22:38 np0005486808 nova_compute[259627]:  <nova:flavor name="m1.nano">
Oct 14 05:22:38 np0005486808 nova_compute[259627]:    <nova:memory>128</nova:memory>
Oct 14 05:22:38 np0005486808 nova_compute[259627]:    <nova:disk>1</nova:disk>
Oct 14 05:22:38 np0005486808 nova_compute[259627]:    <nova:swap>0</nova:swap>
Oct 14 05:22:38 np0005486808 nova_compute[259627]:    <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:22:38 np0005486808 nova_compute[259627]:    <nova:vcpus>1</nova:vcpus>
Oct 14 05:22:38 np0005486808 nova_compute[259627]:  </nova:flavor>
Oct 14 05:22:38 np0005486808 nova_compute[259627]:  <nova:owner>
Oct 14 05:22:38 np0005486808 nova_compute[259627]:    <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 05:22:38 np0005486808 nova_compute[259627]:    <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 05:22:38 np0005486808 nova_compute[259627]:  </nova:owner>
Oct 14 05:22:38 np0005486808 nova_compute[259627]:  <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:22:38 np0005486808 nova_compute[259627]:  <nova:ports>
Oct 14 05:22:38 np0005486808 nova_compute[259627]:    <nova:port uuid="81977d79-f754-42ba-8b3c-c4eb2f9651d2">
Oct 14 05:22:38 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 14 05:22:38 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:22:38 np0005486808 nova_compute[259627]:    <nova:port uuid="ff2b9b74-a6fc-4774-89d2-9c010f121d65">
Oct 14 05:22:38 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.18" ipVersion="4"/>
Oct 14 05:22:38 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:22:38 np0005486808 nova_compute[259627]:  </nova:ports>
Oct 14 05:22:38 np0005486808 nova_compute[259627]: </nova:instance>
Oct 14 05:22:38 np0005486808 nova_compute[259627]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.470 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[567f6530-5157-4a70-abe4-f32d6b043d21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.476 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c420d173-d172-4281-a669-db4a877c61f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.485 2 DEBUG oslo_concurrency.lockutils [None req-52761b6c-4d90-4bd8-9878-7a606d255833 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "interface-50c83173-31e3-4f7a-8836-26e52affd0f2-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.093s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:22:38 np0005486808 NetworkManager[44885]: <info>  [1760433758.4971] device (tap39c21153-40): carrier: link connected
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.502 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[eec99a91-3c0b-4662-b0d0-ea4412565cb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.533 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7f97cf2b-dbca-463d-bd55-3fa11aadc5db]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap39c21153-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:78:15'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 366], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 753725, 'reachable_time': 17078, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379236, 'error': None, 'target': 'ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.555 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[87f8b90b-8f6a-4ffa-85d7-e6d9222cbd1a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feff:7815'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 753725, 'tstamp': 753725}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 379237, 'error': None, 'target': 'ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.574 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9dea4c8a-cd17-4a01-8660-9ac25e1575e9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap39c21153-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:78:15'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 366], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 753725, 'reachable_time': 17078, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 379238, 'error': None, 'target': 'ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.605 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[29fffa17-3d6e-4dcc-867e-db6e35172d54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.680 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d559cafb-7084-4523-adce-0a985d904600]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.689 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap39c21153-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.690 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.690 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap39c21153-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:22:38 np0005486808 NetworkManager[44885]: <info>  [1760433758.6936] manager: (tap39c21153-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/522)
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:38 np0005486808 kernel: tap39c21153-40: entered promiscuous mode
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.698 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap39c21153-40, col_values=(('external_ids', {'iface-id': '7bf9894c-4dab-4178-94d9-e45a9e10602a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:38 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:38Z|01274|binding|INFO|Releasing lport 7bf9894c-4dab-4178-94d9-e45a9e10602a from this chassis (sb_readonly=0)
Oct 14 05:22:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:22:38 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/573623770' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.717 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/39c21153-4a3d-40fd-91df-ae7d5dae4d8c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/39c21153-4a3d-40fd-91df-ae7d5dae4d8c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.718 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[106762d1-d9cc-466b-bfb1-a509ee132152]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.719 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-39c21153-4a3d-40fd-91df-ae7d5dae4d8c
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/39c21153-4a3d-40fd-91df-ae7d5dae4d8c.pid.haproxy
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 39c21153-4a3d-40fd-91df-ae7d5dae4d8c
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:22:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:38.719 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c', 'env', 'PROCESS_TAG=haproxy-39c21153-4a3d-40fd-91df-ae7d5dae4d8c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/39c21153-4a3d-40fd-91df-ae7d5dae4d8c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.724 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.728 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.746 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.764 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:22:38 np0005486808 nova_compute[259627]: 2025-10-14 09:22:38.765 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:22:39 np0005486808 podman[379272]: 2025-10-14 09:22:39.086976126 +0000 UTC m=+0.055276567 container create 812842f4154758e5cf55dfb2ac091173dabeadadad9fa3ff3e6cac9acea483e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:22:39 np0005486808 systemd[1]: Started libpod-conmon-812842f4154758e5cf55dfb2ac091173dabeadadad9fa3ff3e6cac9acea483e9.scope.
Oct 14 05:22:39 np0005486808 nova_compute[259627]: 2025-10-14 09:22:39.141 2 DEBUG nova.network.neutron [-] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:22:39 np0005486808 podman[379272]: 2025-10-14 09:22:39.056080302 +0000 UTC m=+0.024380803 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:22:39 np0005486808 nova_compute[259627]: 2025-10-14 09:22:39.156 2 INFO nova.compute.manager [-] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Took 0.87 seconds to deallocate network for instance.#033[00m
Oct 14 05:22:39 np0005486808 nova_compute[259627]: 2025-10-14 09:22:39.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:39 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:22:39 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36c287ab2c96f243d6a33b107299142f79d3dafdd9e98b859faafdb8e51bef5d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:22:39 np0005486808 podman[379272]: 2025-10-14 09:22:39.196622095 +0000 UTC m=+0.164922596 container init 812842f4154758e5cf55dfb2ac091173dabeadadad9fa3ff3e6cac9acea483e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:22:39 np0005486808 nova_compute[259627]: 2025-10-14 09:22:39.198 2 DEBUG oslo_concurrency.lockutils [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:22:39 np0005486808 nova_compute[259627]: 2025-10-14 09:22:39.198 2 DEBUG oslo_concurrency.lockutils [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:22:39 np0005486808 podman[379272]: 2025-10-14 09:22:39.209506614 +0000 UTC m=+0.177807065 container start 812842f4154758e5cf55dfb2ac091173dabeadadad9fa3ff3e6cac9acea483e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 05:22:39 np0005486808 neutron-haproxy-ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c[379287]: [NOTICE]   (379291) : New worker (379293) forked
Oct 14 05:22:39 np0005486808 neutron-haproxy-ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c[379287]: [NOTICE]   (379291) : Loading success.
Oct 14 05:22:39 np0005486808 nova_compute[259627]: 2025-10-14 09:22:39.243 2 DEBUG nova.compute.manager [req-3f39a907-fe25-41b6-8391-c1195aae0f2c req-1e182db1-0b7a-438b-ad54-575c0dab4eae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Received event network-vif-deleted-c47025b4-9051-4cc4-9fb7-70cd59d6c5c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:22:39 np0005486808 nova_compute[259627]: 2025-10-14 09:22:39.281 2 DEBUG oslo_concurrency.processutils [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:22:39 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:22:39 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/437203333' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:22:39 np0005486808 nova_compute[259627]: 2025-10-14 09:22:39.763 2 DEBUG oslo_concurrency.processutils [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:22:39 np0005486808 nova_compute[259627]: 2025-10-14 09:22:39.765 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:22:39 np0005486808 nova_compute[259627]: 2025-10-14 09:22:39.767 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:22:39 np0005486808 nova_compute[259627]: 2025-10-14 09:22:39.774 2 DEBUG nova.compute.provider_tree [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:22:39 np0005486808 nova_compute[259627]: 2025-10-14 09:22:39.800 2 DEBUG nova.scheduler.client.report [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:22:39 np0005486808 nova_compute[259627]: 2025-10-14 09:22:39.834 2 DEBUG oslo_concurrency.lockutils [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:22:39 np0005486808 nova_compute[259627]: 2025-10-14 09:22:39.865 2 INFO nova.scheduler.client.report [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Deleted allocations for instance 725ed629-f7d5-4a69-be5e-4cae3eef2e2e#033[00m
Oct 14 05:22:39 np0005486808 nova_compute[259627]: 2025-10-14 09:22:39.936 2 DEBUG oslo_concurrency.lockutils [None req-34cc8f78-f5c9-4fbd-b5b7-21e3bb6dcd25 7629c3d96333470aa7d7ed5cabfc7e2c 9a71d13aebeb4969b1877a33505f3dc4 - - default default] Lock "725ed629-f7d5-4a69-be5e-4cae3eef2e2e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.411s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:22:40 np0005486808 nova_compute[259627]: 2025-10-14 09:22:40.182 2 DEBUG nova.network.neutron [req-4208c145-34ab-4200-a4bd-58cc3c10af95 req-692c48f1-b7a3-4ff7-a30e-7c6ba01d4333 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Updated VIF entry in instance network info cache for port ff2b9b74-a6fc-4774-89d2-9c010f121d65. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:22:40 np0005486808 nova_compute[259627]: 2025-10-14 09:22:40.182 2 DEBUG nova.network.neutron [req-4208c145-34ab-4200-a4bd-58cc3c10af95 req-692c48f1-b7a3-4ff7-a30e-7c6ba01d4333 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Updating instance_info_cache with network_info: [{"id": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "address": "fa:16:3e:32:f2:3f", "network": {"id": "99e78054-f9f4-417c-a942-d4f9dd534ef7", "bridge": "br-int", "label": "tempest-network-smoke--1093348429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81977d79-f7", "ovs_interfaceid": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "address": "fa:16:3e:17:98:a5", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff2b9b74-a6", "ovs_interfaceid": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:22:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2074: 305 pgs: 305 active+clean; 188 MiB data, 867 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 330 KiB/s wr, 93 op/s
Oct 14 05:22:40 np0005486808 nova_compute[259627]: 2025-10-14 09:22:40.217 2 DEBUG oslo_concurrency.lockutils [req-4208c145-34ab-4200-a4bd-58cc3c10af95 req-692c48f1-b7a3-4ff7-a30e-7c6ba01d4333 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:22:40 np0005486808 nova_compute[259627]: 2025-10-14 09:22:40.458 2 DEBUG nova.compute.manager [req-eed961d1-bb54-44b4-a684-6e7b04a0b38f req-8a4681fb-7421-49c0-b2a9-b8d09e09feb2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Received event network-vif-plugged-ff2b9b74-a6fc-4774-89d2-9c010f121d65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:22:40 np0005486808 nova_compute[259627]: 2025-10-14 09:22:40.459 2 DEBUG oslo_concurrency.lockutils [req-eed961d1-bb54-44b4-a684-6e7b04a0b38f req-8a4681fb-7421-49c0-b2a9-b8d09e09feb2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:22:40 np0005486808 nova_compute[259627]: 2025-10-14 09:22:40.459 2 DEBUG oslo_concurrency.lockutils [req-eed961d1-bb54-44b4-a684-6e7b04a0b38f req-8a4681fb-7421-49c0-b2a9-b8d09e09feb2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:22:40 np0005486808 nova_compute[259627]: 2025-10-14 09:22:40.459 2 DEBUG oslo_concurrency.lockutils [req-eed961d1-bb54-44b4-a684-6e7b04a0b38f req-8a4681fb-7421-49c0-b2a9-b8d09e09feb2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:22:40 np0005486808 nova_compute[259627]: 2025-10-14 09:22:40.459 2 DEBUG nova.compute.manager [req-eed961d1-bb54-44b4-a684-6e7b04a0b38f req-8a4681fb-7421-49c0-b2a9-b8d09e09feb2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] No waiting events found dispatching network-vif-plugged-ff2b9b74-a6fc-4774-89d2-9c010f121d65 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:22:40 np0005486808 nova_compute[259627]: 2025-10-14 09:22:40.459 2 WARNING nova.compute.manager [req-eed961d1-bb54-44b4-a684-6e7b04a0b38f req-8a4681fb-7421-49c0-b2a9-b8d09e09feb2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Received unexpected event network-vif-plugged-ff2b9b74-a6fc-4774-89d2-9c010f121d65 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:22:40 np0005486808 nova_compute[259627]: 2025-10-14 09:22:40.460 2 DEBUG nova.compute.manager [req-eed961d1-bb54-44b4-a684-6e7b04a0b38f req-8a4681fb-7421-49c0-b2a9-b8d09e09feb2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Received event network-vif-plugged-ff2b9b74-a6fc-4774-89d2-9c010f121d65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:22:40 np0005486808 nova_compute[259627]: 2025-10-14 09:22:40.460 2 DEBUG oslo_concurrency.lockutils [req-eed961d1-bb54-44b4-a684-6e7b04a0b38f req-8a4681fb-7421-49c0-b2a9-b8d09e09feb2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:22:40 np0005486808 nova_compute[259627]: 2025-10-14 09:22:40.460 2 DEBUG oslo_concurrency.lockutils [req-eed961d1-bb54-44b4-a684-6e7b04a0b38f req-8a4681fb-7421-49c0-b2a9-b8d09e09feb2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:22:40 np0005486808 nova_compute[259627]: 2025-10-14 09:22:40.461 2 DEBUG oslo_concurrency.lockutils [req-eed961d1-bb54-44b4-a684-6e7b04a0b38f req-8a4681fb-7421-49c0-b2a9-b8d09e09feb2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:22:40 np0005486808 nova_compute[259627]: 2025-10-14 09:22:40.461 2 DEBUG nova.compute.manager [req-eed961d1-bb54-44b4-a684-6e7b04a0b38f req-8a4681fb-7421-49c0-b2a9-b8d09e09feb2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] No waiting events found dispatching network-vif-plugged-ff2b9b74-a6fc-4774-89d2-9c010f121d65 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:22:40 np0005486808 nova_compute[259627]: 2025-10-14 09:22:40.461 2 WARNING nova.compute.manager [req-eed961d1-bb54-44b4-a684-6e7b04a0b38f req-8a4681fb-7421-49c0-b2a9-b8d09e09feb2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Received unexpected event network-vif-plugged-ff2b9b74-a6fc-4774-89d2-9c010f121d65 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:22:40 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:40Z|00131|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7c:22:e5 10.100.0.12
Oct 14 05:22:40 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:40Z|00132|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7c:22:e5 10.100.0.12
Oct 14 05:22:40 np0005486808 nova_compute[259627]: 2025-10-14 09:22:40.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:22:40 np0005486808 nova_compute[259627]: 2025-10-14 09:22:40.977 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:22:41 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:41Z|01275|binding|INFO|Releasing lport 16a7cbd0-b25e-4461-8725-c92979b01f53 from this chassis (sb_readonly=0)
Oct 14 05:22:41 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:41Z|01276|binding|INFO|Releasing lport 7bf9894c-4dab-4178-94d9-e45a9e10602a from this chassis (sb_readonly=0)
Oct 14 05:22:41 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:41Z|01277|binding|INFO|Releasing lport 04719e6c-d55b-4ad7-a45c-52e6e59101ab from this chassis (sb_readonly=0)
Oct 14 05:22:41 np0005486808 nova_compute[259627]: 2025-10-14 09:22:41.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:41 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:41Z|00133|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:17:98:a5 10.100.0.18
Oct 14 05:22:41 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:41Z|00134|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:17:98:a5 10.100.0.18
Oct 14 05:22:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2075: 305 pgs: 305 active+clean; 196 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 155 op/s
Oct 14 05:22:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:22:42 np0005486808 nova_compute[259627]: 2025-10-14 09:22:42.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:22:42 np0005486808 nova_compute[259627]: 2025-10-14 09:22:42.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:22:42 np0005486808 nova_compute[259627]: 2025-10-14 09:22:42.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 05:22:43 np0005486808 nova_compute[259627]: 2025-10-14 09:22:43.212 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:22:43 np0005486808 nova_compute[259627]: 2025-10-14 09:22:43.212 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:22:43 np0005486808 nova_compute[259627]: 2025-10-14 09:22:43.213 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 14 05:22:43 np0005486808 nova_compute[259627]: 2025-10-14 09:22:43.214 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 50c83173-31e3-4f7a-8836-26e52affd0f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:22:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:22:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:22:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:22:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:22:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015069725939311064 of space, bias 1.0, pg target 0.45209177817933194 quantized to 32 (current 32)
Oct 14 05:22:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:22:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:22:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:22:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:22:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:22:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 05:22:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:22:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:22:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:22:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:22:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:22:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:22:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:22:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:22:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:22:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:22:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:22:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:22:43 np0005486808 nova_compute[259627]: 2025-10-14 09:22:43.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:43 np0005486808 podman[379327]: 2025-10-14 09:22:43.670779951 +0000 UTC m=+0.076209295 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent)
Oct 14 05:22:43 np0005486808 podman[379326]: 2025-10-14 09:22:43.68774356 +0000 UTC m=+0.101692414 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 14 05:22:44 np0005486808 nova_compute[259627]: 2025-10-14 09:22:44.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2076: 305 pgs: 305 active+clean; 196 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 114 op/s
Oct 14 05:22:45 np0005486808 nova_compute[259627]: 2025-10-14 09:22:45.654 2 DEBUG oslo_concurrency.lockutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:22:45 np0005486808 nova_compute[259627]: 2025-10-14 09:22:45.655 2 DEBUG oslo_concurrency.lockutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:22:45 np0005486808 nova_compute[259627]: 2025-10-14 09:22:45.671 2 DEBUG nova.compute.manager [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:22:45 np0005486808 nova_compute[259627]: 2025-10-14 09:22:45.746 2 DEBUG oslo_concurrency.lockutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:22:45 np0005486808 nova_compute[259627]: 2025-10-14 09:22:45.747 2 DEBUG oslo_concurrency.lockutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:22:45 np0005486808 nova_compute[259627]: 2025-10-14 09:22:45.752 2 DEBUG nova.virt.hardware [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:22:45 np0005486808 nova_compute[259627]: 2025-10-14 09:22:45.753 2 INFO nova.compute.claims [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:22:45 np0005486808 nova_compute[259627]: 2025-10-14 09:22:45.906 2 DEBUG oslo_concurrency.processutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:22:46 np0005486808 nova_compute[259627]: 2025-10-14 09:22:46.069 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Updating instance_info_cache with network_info: [{"id": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "address": "fa:16:3e:32:f2:3f", "network": {"id": "99e78054-f9f4-417c-a942-d4f9dd534ef7", "bridge": "br-int", "label": "tempest-network-smoke--1093348429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81977d79-f7", "ovs_interfaceid": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "address": "fa:16:3e:17:98:a5", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": 
true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff2b9b74-a6", "ovs_interfaceid": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:22:46 np0005486808 nova_compute[259627]: 2025-10-14 09:22:46.092 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:22:46 np0005486808 nova_compute[259627]: 2025-10-14 09:22:46.093 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 14 05:22:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2077: 305 pgs: 305 active+clean; 200 MiB data, 878 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 122 op/s
Oct 14 05:22:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:22:46 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/148067217' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:22:46 np0005486808 nova_compute[259627]: 2025-10-14 09:22:46.370 2 DEBUG oslo_concurrency.processutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:22:46 np0005486808 nova_compute[259627]: 2025-10-14 09:22:46.378 2 DEBUG nova.compute.provider_tree [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:22:46 np0005486808 nova_compute[259627]: 2025-10-14 09:22:46.394 2 DEBUG nova.scheduler.client.report [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:22:46 np0005486808 nova_compute[259627]: 2025-10-14 09:22:46.425 2 DEBUG oslo_concurrency.lockutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:22:46 np0005486808 nova_compute[259627]: 2025-10-14 09:22:46.425 2 DEBUG nova.compute.manager [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:22:46 np0005486808 nova_compute[259627]: 2025-10-14 09:22:46.518 2 DEBUG nova.compute.manager [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:22:46 np0005486808 nova_compute[259627]: 2025-10-14 09:22:46.519 2 DEBUG nova.network.neutron [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:22:46 np0005486808 nova_compute[259627]: 2025-10-14 09:22:46.539 2 INFO nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:22:46 np0005486808 nova_compute[259627]: 2025-10-14 09:22:46.568 2 DEBUG nova.compute.manager [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:22:46 np0005486808 nova_compute[259627]: 2025-10-14 09:22:46.674 2 DEBUG nova.compute.manager [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:22:46 np0005486808 nova_compute[259627]: 2025-10-14 09:22:46.675 2 DEBUG nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:22:46 np0005486808 nova_compute[259627]: 2025-10-14 09:22:46.676 2 INFO nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Creating image(s)#033[00m
Oct 14 05:22:46 np0005486808 nova_compute[259627]: 2025-10-14 09:22:46.712 2 DEBUG nova.storage.rbd_utils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image ef3d76bf-9763-4405-8e48-c2c4405a2a3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:22:46 np0005486808 nova_compute[259627]: 2025-10-14 09:22:46.743 2 DEBUG nova.storage.rbd_utils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image ef3d76bf-9763-4405-8e48-c2c4405a2a3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:22:46 np0005486808 nova_compute[259627]: 2025-10-14 09:22:46.762 2 DEBUG nova.storage.rbd_utils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image ef3d76bf-9763-4405-8e48-c2c4405a2a3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:22:46 np0005486808 nova_compute[259627]: 2025-10-14 09:22:46.765 2 DEBUG oslo_concurrency.processutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:22:46 np0005486808 nova_compute[259627]: 2025-10-14 09:22:46.852 2 DEBUG oslo_concurrency.processutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:22:46 np0005486808 nova_compute[259627]: 2025-10-14 09:22:46.853 2 DEBUG oslo_concurrency.lockutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:22:46 np0005486808 nova_compute[259627]: 2025-10-14 09:22:46.854 2 DEBUG oslo_concurrency.lockutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:22:46 np0005486808 nova_compute[259627]: 2025-10-14 09:22:46.855 2 DEBUG oslo_concurrency.lockutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:22:46 np0005486808 nova_compute[259627]: 2025-10-14 09:22:46.878 2 DEBUG nova.storage.rbd_utils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image ef3d76bf-9763-4405-8e48-c2c4405a2a3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:22:46 np0005486808 nova_compute[259627]: 2025-10-14 09:22:46.881 2 DEBUG oslo_concurrency.processutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 ef3d76bf-9763-4405-8e48-c2c4405a2a3b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:22:46 np0005486808 nova_compute[259627]: 2025-10-14 09:22:46.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:22:47 np0005486808 nova_compute[259627]: 2025-10-14 09:22:47.139 2 DEBUG oslo_concurrency.processutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 ef3d76bf-9763-4405-8e48-c2c4405a2a3b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.258s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:22:47 np0005486808 nova_compute[259627]: 2025-10-14 09:22:47.222 2 DEBUG nova.storage.rbd_utils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] resizing rbd image ef3d76bf-9763-4405-8e48-c2c4405a2a3b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:22:47 np0005486808 nova_compute[259627]: 2025-10-14 09:22:47.341 2 DEBUG nova.policy [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '10926c27278e45e7b995f2e53b9d16f9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4562699bdb1548f1bb36819107535620', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:22:47 np0005486808 nova_compute[259627]: 2025-10-14 09:22:47.349 2 DEBUG nova.objects.instance [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'migration_context' on Instance uuid ef3d76bf-9763-4405-8e48-c2c4405a2a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:22:47 np0005486808 nova_compute[259627]: 2025-10-14 09:22:47.375 2 DEBUG nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:22:47 np0005486808 nova_compute[259627]: 2025-10-14 09:22:47.376 2 DEBUG nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Ensure instance console log exists: /var/lib/nova/instances/ef3d76bf-9763-4405-8e48-c2c4405a2a3b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:22:47 np0005486808 nova_compute[259627]: 2025-10-14 09:22:47.377 2 DEBUG oslo_concurrency.lockutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:22:47 np0005486808 nova_compute[259627]: 2025-10-14 09:22:47.377 2 DEBUG oslo_concurrency.lockutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:22:47 np0005486808 nova_compute[259627]: 2025-10-14 09:22:47.377 2 DEBUG oslo_concurrency.lockutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:22:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:22:48 np0005486808 nova_compute[259627]: 2025-10-14 09:22:48.104 2 DEBUG nova.network.neutron [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Successfully created port: 0c7d56b8-701e-431d-8f3f-4682c684a719 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:22:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2078: 305 pgs: 305 active+clean; 200 MiB data, 878 MiB used, 59 GiB / 60 GiB avail; 347 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 14 05:22:48 np0005486808 nova_compute[259627]: 2025-10-14 09:22:48.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:49 np0005486808 nova_compute[259627]: 2025-10-14 09:22:49.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2079: 305 pgs: 305 active+clean; 219 MiB data, 888 MiB used, 59 GiB / 60 GiB avail; 356 KiB/s rd, 2.9 MiB/s wr, 104 op/s
Oct 14 05:22:50 np0005486808 nova_compute[259627]: 2025-10-14 09:22:50.237 2 DEBUG nova.network.neutron [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Successfully updated port: 0c7d56b8-701e-431d-8f3f-4682c684a719 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:22:50 np0005486808 nova_compute[259627]: 2025-10-14 09:22:50.256 2 DEBUG oslo_concurrency.lockutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "refresh_cache-ef3d76bf-9763-4405-8e48-c2c4405a2a3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:22:50 np0005486808 nova_compute[259627]: 2025-10-14 09:22:50.257 2 DEBUG oslo_concurrency.lockutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquired lock "refresh_cache-ef3d76bf-9763-4405-8e48-c2c4405a2a3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:22:50 np0005486808 nova_compute[259627]: 2025-10-14 09:22:50.257 2 DEBUG nova.network.neutron [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:22:50 np0005486808 nova_compute[259627]: 2025-10-14 09:22:50.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:50 np0005486808 nova_compute[259627]: 2025-10-14 09:22:50.468 2 DEBUG nova.compute.manager [req-a3b38dd8-5aab-484b-a869-4b462b39f20e req-477ae2e9-7156-4363-93f2-fea177b599e3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Received event network-changed-0c7d56b8-701e-431d-8f3f-4682c684a719 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:22:50 np0005486808 nova_compute[259627]: 2025-10-14 09:22:50.469 2 DEBUG nova.compute.manager [req-a3b38dd8-5aab-484b-a869-4b462b39f20e req-477ae2e9-7156-4363-93f2-fea177b599e3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Refreshing instance network info cache due to event network-changed-0c7d56b8-701e-431d-8f3f-4682c684a719. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:22:50 np0005486808 nova_compute[259627]: 2025-10-14 09:22:50.470 2 DEBUG oslo_concurrency.lockutils [req-a3b38dd8-5aab-484b-a869-4b462b39f20e req-477ae2e9-7156-4363-93f2-fea177b599e3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-ef3d76bf-9763-4405-8e48-c2c4405a2a3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:22:50 np0005486808 nova_compute[259627]: 2025-10-14 09:22:50.671 2 DEBUG nova.network.neutron [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:22:50 np0005486808 nova_compute[259627]: 2025-10-14 09:22:50.838 2 DEBUG nova.compute.manager [None req-148de39c-eb77-4499-ac2b-f94baf357c76 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:22:50 np0005486808 nova_compute[259627]: 2025-10-14 09:22:50.896 2 INFO nova.compute.manager [None req-148de39c-eb77-4499-ac2b-f94baf357c76 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] instance snapshotting#033[00m
Oct 14 05:22:50 np0005486808 nova_compute[259627]: 2025-10-14 09:22:50.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:22:51 np0005486808 nova_compute[259627]: 2025-10-14 09:22:51.207 2 INFO nova.virt.libvirt.driver [None req-148de39c-eb77-4499-ac2b-f94baf357c76 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Beginning live snapshot process#033[00m
Oct 14 05:22:51 np0005486808 nova_compute[259627]: 2025-10-14 09:22:51.409 2 DEBUG nova.virt.libvirt.imagebackend [None req-148de39c-eb77-4499-ac2b-f94baf357c76 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] No parent info for a4789543-f429-47d7-9f79-80a9d90a59f9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct 14 05:22:51 np0005486808 nova_compute[259627]: 2025-10-14 09:22:51.627 2 DEBUG nova.storage.rbd_utils [None req-148de39c-eb77-4499-ac2b-f94baf357c76 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] creating snapshot(c62c4c34216e4b708b452a02674b9e40) on rbd image(6810b29b-088f-441b-8a6a-02eaafada0c5_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 14 05:22:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2080: 305 pgs: 305 active+clean; 246 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 339 KiB/s rd, 3.6 MiB/s wr, 109 op/s
Oct 14 05:22:52 np0005486808 nova_compute[259627]: 2025-10-14 09:22:52.224 2 DEBUG nova.network.neutron [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Updating instance_info_cache with network_info: [{"id": "0c7d56b8-701e-431d-8f3f-4682c684a719", "address": "fa:16:3e:b4:88:3d", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c7d56b8-70", "ovs_interfaceid": "0c7d56b8-701e-431d-8f3f-4682c684a719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:22:52 np0005486808 nova_compute[259627]: 2025-10-14 09:22:52.243 2 DEBUG oslo_concurrency.lockutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Releasing lock "refresh_cache-ef3d76bf-9763-4405-8e48-c2c4405a2a3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:22:52 np0005486808 nova_compute[259627]: 2025-10-14 09:22:52.243 2 DEBUG nova.compute.manager [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Instance network_info: |[{"id": "0c7d56b8-701e-431d-8f3f-4682c684a719", "address": "fa:16:3e:b4:88:3d", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c7d56b8-70", "ovs_interfaceid": "0c7d56b8-701e-431d-8f3f-4682c684a719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:22:52 np0005486808 nova_compute[259627]: 2025-10-14 09:22:52.244 2 DEBUG oslo_concurrency.lockutils [req-a3b38dd8-5aab-484b-a869-4b462b39f20e req-477ae2e9-7156-4363-93f2-fea177b599e3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-ef3d76bf-9763-4405-8e48-c2c4405a2a3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:22:52 np0005486808 nova_compute[259627]: 2025-10-14 09:22:52.244 2 DEBUG nova.network.neutron [req-a3b38dd8-5aab-484b-a869-4b462b39f20e req-477ae2e9-7156-4363-93f2-fea177b599e3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Refreshing network info cache for port 0c7d56b8-701e-431d-8f3f-4682c684a719 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:22:52 np0005486808 nova_compute[259627]: 2025-10-14 09:22:52.247 2 DEBUG nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Start _get_guest_xml network_info=[{"id": "0c7d56b8-701e-431d-8f3f-4682c684a719", "address": "fa:16:3e:b4:88:3d", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c7d56b8-70", "ovs_interfaceid": "0c7d56b8-701e-431d-8f3f-4682c684a719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:22:52 np0005486808 nova_compute[259627]: 2025-10-14 09:22:52.253 2 WARNING nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:22:52 np0005486808 nova_compute[259627]: 2025-10-14 09:22:52.260 2 DEBUG nova.virt.libvirt.host [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:22:52 np0005486808 nova_compute[259627]: 2025-10-14 09:22:52.261 2 DEBUG nova.virt.libvirt.host [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:22:52 np0005486808 nova_compute[259627]: 2025-10-14 09:22:52.273 2 DEBUG nova.virt.libvirt.host [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:22:52 np0005486808 nova_compute[259627]: 2025-10-14 09:22:52.274 2 DEBUG nova.virt.libvirt.host [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:22:52 np0005486808 nova_compute[259627]: 2025-10-14 09:22:52.274 2 DEBUG nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:22:52 np0005486808 nova_compute[259627]: 2025-10-14 09:22:52.275 2 DEBUG nova.virt.hardware [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:22:52 np0005486808 nova_compute[259627]: 2025-10-14 09:22:52.275 2 DEBUG nova.virt.hardware [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:22:52 np0005486808 nova_compute[259627]: 2025-10-14 09:22:52.275 2 DEBUG nova.virt.hardware [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:22:52 np0005486808 nova_compute[259627]: 2025-10-14 09:22:52.275 2 DEBUG nova.virt.hardware [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:22:52 np0005486808 nova_compute[259627]: 2025-10-14 09:22:52.275 2 DEBUG nova.virt.hardware [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:22:52 np0005486808 nova_compute[259627]: 2025-10-14 09:22:52.275 2 DEBUG nova.virt.hardware [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:22:52 np0005486808 nova_compute[259627]: 2025-10-14 09:22:52.276 2 DEBUG nova.virt.hardware [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:22:52 np0005486808 nova_compute[259627]: 2025-10-14 09:22:52.276 2 DEBUG nova.virt.hardware [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:22:52 np0005486808 nova_compute[259627]: 2025-10-14 09:22:52.276 2 DEBUG nova.virt.hardware [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:22:52 np0005486808 nova_compute[259627]: 2025-10-14 09:22:52.276 2 DEBUG nova.virt.hardware [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:22:52 np0005486808 nova_compute[259627]: 2025-10-14 09:22:52.276 2 DEBUG nova.virt.hardware [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:22:52 np0005486808 nova_compute[259627]: 2025-10-14 09:22:52.279 2 DEBUG oslo_concurrency.processutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:22:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e280 do_prune osdmap full prune enabled
Oct 14 05:22:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e281 e281: 3 total, 3 up, 3 in
Oct 14 05:22:52 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e281: 3 total, 3 up, 3 in
Oct 14 05:22:52 np0005486808 nova_compute[259627]: 2025-10-14 09:22:52.559 2 DEBUG nova.storage.rbd_utils [None req-148de39c-eb77-4499-ac2b-f94baf357c76 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] cloning vms/6810b29b-088f-441b-8a6a-02eaafada0c5_disk@c62c4c34216e4b708b452a02674b9e40 to images/d594f64a-1811-45da-92c9-566107aad012 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct 14 05:22:52 np0005486808 nova_compute[259627]: 2025-10-14 09:22:52.665 2 DEBUG nova.storage.rbd_utils [None req-148de39c-eb77-4499-ac2b-f94baf357c76 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] flattening images/d594f64a-1811-45da-92c9-566107aad012 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct 14 05:22:52 np0005486808 nova_compute[259627]: 2025-10-14 09:22:52.772 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433757.7711184, 725ed629-f7d5-4a69-be5e-4cae3eef2e2e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:22:52 np0005486808 nova_compute[259627]: 2025-10-14 09:22:52.772 2 INFO nova.compute.manager [-] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:22:52 np0005486808 nova_compute[259627]: 2025-10-14 09:22:52.793 2 DEBUG nova.compute.manager [None req-524fdd00-5dd1-48fd-8e7b-860c3fa36a20 - - - - - -] [instance: 725ed629-f7d5-4a69-be5e-4cae3eef2e2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:22:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:22:52 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3229893015' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:22:52 np0005486808 nova_compute[259627]: 2025-10-14 09:22:52.841 2 DEBUG oslo_concurrency.processutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:22:52 np0005486808 nova_compute[259627]: 2025-10-14 09:22:52.879 2 DEBUG nova.storage.rbd_utils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image ef3d76bf-9763-4405-8e48-c2c4405a2a3b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:22:52 np0005486808 nova_compute[259627]: 2025-10-14 09:22:52.890 2 DEBUG oslo_concurrency.processutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:22:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:22:53 np0005486808 nova_compute[259627]: 2025-10-14 09:22:53.049 2 DEBUG nova.storage.rbd_utils [None req-148de39c-eb77-4499-ac2b-f94baf357c76 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] removing snapshot(c62c4c34216e4b708b452a02674b9e40) on rbd image(6810b29b-088f-441b-8a6a-02eaafada0c5_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct 14 05:22:53 np0005486808 nova_compute[259627]: 2025-10-14 09:22:53.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:22:53 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/10162848' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:22:53 np0005486808 nova_compute[259627]: 2025-10-14 09:22:53.372 2 DEBUG oslo_concurrency.processutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:22:53 np0005486808 nova_compute[259627]: 2025-10-14 09:22:53.374 2 DEBUG nova.virt.libvirt.vif [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:22:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-74163694',display_name='tempest-TestNetworkBasicOps-server-74163694',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-74163694',id=120,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDJjpHsBb1FmstcXMm13RiW9DIcCDzUbHC1W47DgC4rLa2+YaGMfll4QodMfzMI26CQxBr8mMI8Apo+Vm4ZUA+2D0BmlkJiSjNtRVZZ4pPW+p+wcLG9yH2ONX/d7llYQVA==',key_name='tempest-TestNetworkBasicOps-266793307',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-scl22852',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:22:46Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=ef3d76bf-9763-4405-8e48-c2c4405a2a3b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0c7d56b8-701e-431d-8f3f-4682c684a719", "address": "fa:16:3e:b4:88:3d", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c7d56b8-70", "ovs_interfaceid": "0c7d56b8-701e-431d-8f3f-4682c684a719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:22:53 np0005486808 nova_compute[259627]: 2025-10-14 09:22:53.374 2 DEBUG nova.network.os_vif_util [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "0c7d56b8-701e-431d-8f3f-4682c684a719", "address": "fa:16:3e:b4:88:3d", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c7d56b8-70", "ovs_interfaceid": "0c7d56b8-701e-431d-8f3f-4682c684a719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:22:53 np0005486808 nova_compute[259627]: 2025-10-14 09:22:53.376 2 DEBUG nova.network.os_vif_util [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:88:3d,bridge_name='br-int',has_traffic_filtering=True,id=0c7d56b8-701e-431d-8f3f-4682c684a719,network=Network(39c21153-4a3d-40fd-91df-ae7d5dae4d8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c7d56b8-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:22:53 np0005486808 nova_compute[259627]: 2025-10-14 09:22:53.377 2 DEBUG nova.objects.instance [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'pci_devices' on Instance uuid ef3d76bf-9763-4405-8e48-c2c4405a2a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:22:53 np0005486808 nova_compute[259627]: 2025-10-14 09:22:53.395 2 DEBUG nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:22:53 np0005486808 nova_compute[259627]:  <uuid>ef3d76bf-9763-4405-8e48-c2c4405a2a3b</uuid>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:  <name>instance-00000078</name>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:22:53 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:      <nova:name>tempest-TestNetworkBasicOps-server-74163694</nova:name>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:22:52</nova:creationTime>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:22:53 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:        <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:        <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:        <nova:port uuid="0c7d56b8-701e-431d-8f3f-4682c684a719">
Oct 14 05:22:53 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.22" ipVersion="4"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:      <entry name="serial">ef3d76bf-9763-4405-8e48-c2c4405a2a3b</entry>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:      <entry name="uuid">ef3d76bf-9763-4405-8e48-c2c4405a2a3b</entry>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:22:53 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/ef3d76bf-9763-4405-8e48-c2c4405a2a3b_disk">
Oct 14 05:22:53 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:22:53 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:22:53 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/ef3d76bf-9763-4405-8e48-c2c4405a2a3b_disk.config">
Oct 14 05:22:53 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:22:53 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:22:53 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:b4:88:3d"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:      <target dev="tap0c7d56b8-70"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:22:53 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/ef3d76bf-9763-4405-8e48-c2c4405a2a3b/console.log" append="off"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:22:53 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:22:53 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:22:53 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:22:53 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:22:53 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:22:53 np0005486808 nova_compute[259627]: 2025-10-14 09:22:53.396 2 DEBUG nova.compute.manager [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Preparing to wait for external event network-vif-plugged-0c7d56b8-701e-431d-8f3f-4682c684a719 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:22:53 np0005486808 nova_compute[259627]: 2025-10-14 09:22:53.397 2 DEBUG oslo_concurrency.lockutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:22:53 np0005486808 nova_compute[259627]: 2025-10-14 09:22:53.397 2 DEBUG oslo_concurrency.lockutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:22:53 np0005486808 nova_compute[259627]: 2025-10-14 09:22:53.398 2 DEBUG oslo_concurrency.lockutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:22:53 np0005486808 nova_compute[259627]: 2025-10-14 09:22:53.399 2 DEBUG nova.virt.libvirt.vif [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:22:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-74163694',display_name='tempest-TestNetworkBasicOps-server-74163694',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-74163694',id=120,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDJjpHsBb1FmstcXMm13RiW9DIcCDzUbHC1W47DgC4rLa2+YaGMfll4QodMfzMI26CQxBr8mMI8Apo+Vm4ZUA+2D0BmlkJiSjNtRVZZ4pPW+p+wcLG9yH2ONX/d7llYQVA==',key_name='tempest-TestNetworkBasicOps-266793307',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-scl22852',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:22:46Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=ef3d76bf-9763-4405-8e48-c2c4405a2a3b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0c7d56b8-701e-431d-8f3f-4682c684a719", "address": "fa:16:3e:b4:88:3d", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c7d56b8-70", "ovs_interfaceid": "0c7d56b8-701e-431d-8f3f-4682c684a719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:22:53 np0005486808 nova_compute[259627]: 2025-10-14 09:22:53.400 2 DEBUG nova.network.os_vif_util [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "0c7d56b8-701e-431d-8f3f-4682c684a719", "address": "fa:16:3e:b4:88:3d", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c7d56b8-70", "ovs_interfaceid": "0c7d56b8-701e-431d-8f3f-4682c684a719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:22:53 np0005486808 nova_compute[259627]: 2025-10-14 09:22:53.402 2 DEBUG nova.network.os_vif_util [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:88:3d,bridge_name='br-int',has_traffic_filtering=True,id=0c7d56b8-701e-431d-8f3f-4682c684a719,network=Network(39c21153-4a3d-40fd-91df-ae7d5dae4d8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c7d56b8-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:22:53 np0005486808 nova_compute[259627]: 2025-10-14 09:22:53.402 2 DEBUG os_vif [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:88:3d,bridge_name='br-int',has_traffic_filtering=True,id=0c7d56b8-701e-431d-8f3f-4682c684a719,network=Network(39c21153-4a3d-40fd-91df-ae7d5dae4d8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c7d56b8-70') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:22:53 np0005486808 nova_compute[259627]: 2025-10-14 09:22:53.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:53 np0005486808 nova_compute[259627]: 2025-10-14 09:22:53.404 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:22:53 np0005486808 nova_compute[259627]: 2025-10-14 09:22:53.405 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:22:53 np0005486808 nova_compute[259627]: 2025-10-14 09:22:53.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:53 np0005486808 nova_compute[259627]: 2025-10-14 09:22:53.409 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0c7d56b8-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:22:53 np0005486808 nova_compute[259627]: 2025-10-14 09:22:53.410 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0c7d56b8-70, col_values=(('external_ids', {'iface-id': '0c7d56b8-701e-431d-8f3f-4682c684a719', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b4:88:3d', 'vm-uuid': 'ef3d76bf-9763-4405-8e48-c2c4405a2a3b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:22:53 np0005486808 NetworkManager[44885]: <info>  [1760433773.4147] manager: (tap0c7d56b8-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/523)
Oct 14 05:22:53 np0005486808 nova_compute[259627]: 2025-10-14 09:22:53.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:22:53 np0005486808 nova_compute[259627]: 2025-10-14 09:22:53.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:53 np0005486808 nova_compute[259627]: 2025-10-14 09:22:53.421 2 INFO os_vif [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:88:3d,bridge_name='br-int',has_traffic_filtering=True,id=0c7d56b8-701e-431d-8f3f-4682c684a719,network=Network(39c21153-4a3d-40fd-91df-ae7d5dae4d8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c7d56b8-70')#033[00m
Oct 14 05:22:53 np0005486808 nova_compute[259627]: 2025-10-14 09:22:53.498 2 DEBUG nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:22:53 np0005486808 nova_compute[259627]: 2025-10-14 09:22:53.498 2 DEBUG nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:22:53 np0005486808 nova_compute[259627]: 2025-10-14 09:22:53.499 2 DEBUG nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No VIF found with MAC fa:16:3e:b4:88:3d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:22:53 np0005486808 nova_compute[259627]: 2025-10-14 09:22:53.499 2 INFO nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Using config drive#033[00m
Oct 14 05:22:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e281 do_prune osdmap full prune enabled
Oct 14 05:22:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e282 e282: 3 total, 3 up, 3 in
Oct 14 05:22:53 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e282: 3 total, 3 up, 3 in
Oct 14 05:22:53 np0005486808 nova_compute[259627]: 2025-10-14 09:22:53.533 2 DEBUG nova.storage.rbd_utils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image ef3d76bf-9763-4405-8e48-c2c4405a2a3b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:22:53 np0005486808 nova_compute[259627]: 2025-10-14 09:22:53.555 2 DEBUG nova.storage.rbd_utils [None req-148de39c-eb77-4499-ac2b-f94baf357c76 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] creating snapshot(snap) on rbd image(d594f64a-1811-45da-92c9-566107aad012) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 14 05:22:53 np0005486808 nova_compute[259627]: 2025-10-14 09:22:53.913 2 DEBUG nova.network.neutron [req-a3b38dd8-5aab-484b-a869-4b462b39f20e req-477ae2e9-7156-4363-93f2-fea177b599e3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Updated VIF entry in instance network info cache for port 0c7d56b8-701e-431d-8f3f-4682c684a719. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:22:53 np0005486808 nova_compute[259627]: 2025-10-14 09:22:53.913 2 DEBUG nova.network.neutron [req-a3b38dd8-5aab-484b-a869-4b462b39f20e req-477ae2e9-7156-4363-93f2-fea177b599e3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Updating instance_info_cache with network_info: [{"id": "0c7d56b8-701e-431d-8f3f-4682c684a719", "address": "fa:16:3e:b4:88:3d", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c7d56b8-70", "ovs_interfaceid": "0c7d56b8-701e-431d-8f3f-4682c684a719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:22:53 np0005486808 nova_compute[259627]: 2025-10-14 09:22:53.930 2 DEBUG oslo_concurrency.lockutils [req-a3b38dd8-5aab-484b-a869-4b462b39f20e req-477ae2e9-7156-4363-93f2-fea177b599e3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-ef3d76bf-9763-4405-8e48-c2c4405a2a3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:22:54 np0005486808 nova_compute[259627]: 2025-10-14 09:22:54.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2083: 305 pgs: 305 active+clean; 246 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 2.7 MiB/s wr, 45 op/s
Oct 14 05:22:54 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e282 do_prune osdmap full prune enabled
Oct 14 05:22:54 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e283 e283: 3 total, 3 up, 3 in
Oct 14 05:22:54 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e283: 3 total, 3 up, 3 in
Oct 14 05:22:54 np0005486808 nova_compute[259627]: 2025-10-14 09:22:54.670 2 INFO nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Creating config drive at /var/lib/nova/instances/ef3d76bf-9763-4405-8e48-c2c4405a2a3b/disk.config#033[00m
Oct 14 05:22:54 np0005486808 nova_compute[259627]: 2025-10-14 09:22:54.679 2 DEBUG oslo_concurrency.processutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ef3d76bf-9763-4405-8e48-c2c4405a2a3b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy6m6hpj7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:22:54 np0005486808 nova_compute[259627]: 2025-10-14 09:22:54.830 2 DEBUG oslo_concurrency.processutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ef3d76bf-9763-4405-8e48-c2c4405a2a3b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy6m6hpj7" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:22:54 np0005486808 nova_compute[259627]: 2025-10-14 09:22:54.862 2 DEBUG nova.storage.rbd_utils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image ef3d76bf-9763-4405-8e48-c2c4405a2a3b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:22:54 np0005486808 nova_compute[259627]: 2025-10-14 09:22:54.866 2 DEBUG oslo_concurrency.processutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ef3d76bf-9763-4405-8e48-c2c4405a2a3b/disk.config ef3d76bf-9763-4405-8e48-c2c4405a2a3b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:22:55 np0005486808 nova_compute[259627]: 2025-10-14 09:22:55.041 2 DEBUG oslo_concurrency.processutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ef3d76bf-9763-4405-8e48-c2c4405a2a3b/disk.config ef3d76bf-9763-4405-8e48-c2c4405a2a3b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:22:55 np0005486808 nova_compute[259627]: 2025-10-14 09:22:55.042 2 INFO nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Deleting local config drive /var/lib/nova/instances/ef3d76bf-9763-4405-8e48-c2c4405a2a3b/disk.config because it was imported into RBD.#033[00m
Oct 14 05:22:55 np0005486808 NetworkManager[44885]: <info>  [1760433775.1253] manager: (tap0c7d56b8-70): new Tun device (/org/freedesktop/NetworkManager/Devices/524)
Oct 14 05:22:55 np0005486808 kernel: tap0c7d56b8-70: entered promiscuous mode
Oct 14 05:22:55 np0005486808 nova_compute[259627]: 2025-10-14 09:22:55.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:55 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:55Z|01278|binding|INFO|Claiming lport 0c7d56b8-701e-431d-8f3f-4682c684a719 for this chassis.
Oct 14 05:22:55 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:55Z|01279|binding|INFO|0c7d56b8-701e-431d-8f3f-4682c684a719: Claiming fa:16:3e:b4:88:3d 10.100.0.22
Oct 14 05:22:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:55.147 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:88:3d 10.100.0.22'], port_security=['fa:16:3e:b4:88:3d 10.100.0.22'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.22/28', 'neutron:device_id': 'ef3d76bf-9763-4405-8e48-c2c4405a2a3b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39c21153-4a3d-40fd-91df-ae7d5dae4d8c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e990e92a-384a-47c4-be5e-d58c231a3275', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab61dd9b-dbf7-46d4-89df-319a0a1fc6a6, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=0c7d56b8-701e-431d-8f3f-4682c684a719) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:22:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:55.149 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 0c7d56b8-701e-431d-8f3f-4682c684a719 in datapath 39c21153-4a3d-40fd-91df-ae7d5dae4d8c bound to our chassis#033[00m
Oct 14 05:22:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:55.152 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 39c21153-4a3d-40fd-91df-ae7d5dae4d8c#033[00m
Oct 14 05:22:55 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:55Z|01280|binding|INFO|Setting lport 0c7d56b8-701e-431d-8f3f-4682c684a719 ovn-installed in OVS
Oct 14 05:22:55 np0005486808 ovn_controller[152662]: 2025-10-14T09:22:55Z|01281|binding|INFO|Setting lport 0c7d56b8-701e-431d-8f3f-4682c684a719 up in Southbound
Oct 14 05:22:55 np0005486808 nova_compute[259627]: 2025-10-14 09:22:55.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:55 np0005486808 nova_compute[259627]: 2025-10-14 09:22:55.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:55.178 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[629e9bef-1e42-4959-ad04-5c6d402b7038]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:22:55 np0005486808 systemd-udevd[379838]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:22:55 np0005486808 systemd-machined[214636]: New machine qemu-153-instance-00000078.
Oct 14 05:22:55 np0005486808 NetworkManager[44885]: <info>  [1760433775.2090] device (tap0c7d56b8-70): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:22:55 np0005486808 NetworkManager[44885]: <info>  [1760433775.2098] device (tap0c7d56b8-70): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:22:55 np0005486808 systemd[1]: Started Virtual Machine qemu-153-instance-00000078.
Oct 14 05:22:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:55.223 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[251c6ca1-996e-4740-944d-9e71eb8cde54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:22:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:55.228 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[0893192f-5f08-4f58-8616-ac8f456af18f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:22:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:55.256 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a33e44ec-9c52-4232-9e18-58bf8c2ca7ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:22:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:55.277 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[99c0e95e-79f5-4357-802e-9af2d29b71e9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap39c21153-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:78:15'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 366], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 753725, 'reachable_time': 17078, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379846, 'error': None, 'target': 'ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:22:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:55.296 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[72bcfc4e-d7b0-480d-9cf1-65b57bd05290]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tap39c21153-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 753740, 'tstamp': 753740}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 379851, 'error': None, 'target': 'ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap39c21153-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 753743, 'tstamp': 753743}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 379851, 'error': None, 'target': 'ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:22:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:55.298 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap39c21153-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:22:55 np0005486808 nova_compute[259627]: 2025-10-14 09:22:55.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:55 np0005486808 nova_compute[259627]: 2025-10-14 09:22:55.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:55.363 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap39c21153-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:22:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:55.364 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:22:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:55.364 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap39c21153-40, col_values=(('external_ids', {'iface-id': '7bf9894c-4dab-4178-94d9-e45a9e10602a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:22:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:22:55.365 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:22:55 np0005486808 nova_compute[259627]: 2025-10-14 09:22:55.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:56 np0005486808 nova_compute[259627]: 2025-10-14 09:22:56.041 2 INFO nova.virt.libvirt.driver [None req-148de39c-eb77-4499-ac2b-f94baf357c76 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Snapshot image upload complete#033[00m
Oct 14 05:22:56 np0005486808 nova_compute[259627]: 2025-10-14 09:22:56.042 2 INFO nova.compute.manager [None req-148de39c-eb77-4499-ac2b-f94baf357c76 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Took 5.14 seconds to snapshot the instance on the hypervisor.#033[00m
Oct 14 05:22:56 np0005486808 nova_compute[259627]: 2025-10-14 09:22:56.153 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433776.152919, ef3d76bf-9763-4405-8e48-c2c4405a2a3b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:22:56 np0005486808 nova_compute[259627]: 2025-10-14 09:22:56.154 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] VM Started (Lifecycle Event)#033[00m
Oct 14 05:22:56 np0005486808 nova_compute[259627]: 2025-10-14 09:22:56.173 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:22:56 np0005486808 nova_compute[259627]: 2025-10-14 09:22:56.178 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433776.1562665, ef3d76bf-9763-4405-8e48-c2c4405a2a3b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:22:56 np0005486808 nova_compute[259627]: 2025-10-14 09:22:56.178 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:22:56 np0005486808 nova_compute[259627]: 2025-10-14 09:22:56.193 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:22:56 np0005486808 nova_compute[259627]: 2025-10-14 09:22:56.199 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:22:56 np0005486808 nova_compute[259627]: 2025-10-14 09:22:56.211 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:22:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2085: 305 pgs: 305 active+clean; 326 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 9.8 MiB/s wr, 220 op/s
Oct 14 05:22:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:22:57 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:22:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:22:57 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:22:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:22:57 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:22:57 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 82c8e937-95e1-4d0c-ade5-1b6762f6ae32 does not exist
Oct 14 05:22:57 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 9343065b-6755-4cc9-9c67-708125294f21 does not exist
Oct 14 05:22:57 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev ecc601aa-6f2f-4000-9d4d-01566355ca74 does not exist
Oct 14 05:22:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:22:57 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:22:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:22:57 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:22:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:22:57 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:22:57 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:22:57 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:22:57 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:22:57 np0005486808 podman[380169]: 2025-10-14 09:22:57.914803612 +0000 UTC m=+0.047977637 container create 15dc01e6fcb8dbeb0e7eda45f2daf95263561c505de24ccf7f34c2468d342c08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 05:22:57 np0005486808 systemd[1]: Started libpod-conmon-15dc01e6fcb8dbeb0e7eda45f2daf95263561c505de24ccf7f34c2468d342c08.scope.
Oct 14 05:22:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:22:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e283 do_prune osdmap full prune enabled
Oct 14 05:22:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e284 e284: 3 total, 3 up, 3 in
Oct 14 05:22:57 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e284: 3 total, 3 up, 3 in
Oct 14 05:22:57 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:22:57 np0005486808 podman[380169]: 2025-10-14 09:22:57.895642358 +0000 UTC m=+0.028816413 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:22:58 np0005486808 podman[380169]: 2025-10-14 09:22:58.003545095 +0000 UTC m=+0.136719210 container init 15dc01e6fcb8dbeb0e7eda45f2daf95263561c505de24ccf7f34c2468d342c08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hopper, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:22:58 np0005486808 podman[380169]: 2025-10-14 09:22:58.010689751 +0000 UTC m=+0.143863776 container start 15dc01e6fcb8dbeb0e7eda45f2daf95263561c505de24ccf7f34c2468d342c08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hopper, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 14 05:22:58 np0005486808 podman[380169]: 2025-10-14 09:22:58.013712286 +0000 UTC m=+0.146886331 container attach 15dc01e6fcb8dbeb0e7eda45f2daf95263561c505de24ccf7f34c2468d342c08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 05:22:58 np0005486808 dazzling_hopper[380185]: 167 167
Oct 14 05:22:58 np0005486808 systemd[1]: libpod-15dc01e6fcb8dbeb0e7eda45f2daf95263561c505de24ccf7f34c2468d342c08.scope: Deactivated successfully.
Oct 14 05:22:58 np0005486808 podman[380169]: 2025-10-14 09:22:58.019234892 +0000 UTC m=+0.152408927 container died 15dc01e6fcb8dbeb0e7eda45f2daf95263561c505de24ccf7f34c2468d342c08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 14 05:22:58 np0005486808 systemd[1]: var-lib-containers-storage-overlay-10087adeb801d8f20640d4d9a94690c8144be0c8f497e81c5d37abdb5900d91a-merged.mount: Deactivated successfully.
Oct 14 05:22:58 np0005486808 podman[380169]: 2025-10-14 09:22:58.074665592 +0000 UTC m=+0.207839657 container remove 15dc01e6fcb8dbeb0e7eda45f2daf95263561c505de24ccf7f34c2468d342c08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hopper, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 05:22:58 np0005486808 systemd[1]: libpod-conmon-15dc01e6fcb8dbeb0e7eda45f2daf95263561c505de24ccf7f34c2468d342c08.scope: Deactivated successfully.
Oct 14 05:22:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2087: 305 pgs: 305 active+clean; 326 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 8.2 MiB/s rd, 8.2 MiB/s wr, 194 op/s
Oct 14 05:22:58 np0005486808 podman[380207]: 2025-10-14 09:22:58.333711873 +0000 UTC m=+0.061064900 container create 28e884683834510670ccd1f2f09e1edff1e91f5468c4bc17638ea4ca5dcdf252 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 14 05:22:58 np0005486808 systemd[1]: Started libpod-conmon-28e884683834510670ccd1f2f09e1edff1e91f5468c4bc17638ea4ca5dcdf252.scope.
Oct 14 05:22:58 np0005486808 podman[380207]: 2025-10-14 09:22:58.317267687 +0000 UTC m=+0.044620724 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:22:58 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:22:58 np0005486808 nova_compute[259627]: 2025-10-14 09:22:58.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:58 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d665a4ebab82d306a5da2b013f76906758735e874cc24ad1ac79daee3fce0913/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:22:58 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d665a4ebab82d306a5da2b013f76906758735e874cc24ad1ac79daee3fce0913/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:22:58 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d665a4ebab82d306a5da2b013f76906758735e874cc24ad1ac79daee3fce0913/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:22:58 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d665a4ebab82d306a5da2b013f76906758735e874cc24ad1ac79daee3fce0913/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:22:58 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d665a4ebab82d306a5da2b013f76906758735e874cc24ad1ac79daee3fce0913/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:22:58 np0005486808 podman[380207]: 2025-10-14 09:22:58.454259532 +0000 UTC m=+0.181612589 container init 28e884683834510670ccd1f2f09e1edff1e91f5468c4bc17638ea4ca5dcdf252 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mirzakhani, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 14 05:22:58 np0005486808 podman[380207]: 2025-10-14 09:22:58.467678593 +0000 UTC m=+0.195031610 container start 28e884683834510670ccd1f2f09e1edff1e91f5468c4bc17638ea4ca5dcdf252 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mirzakhani, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 14 05:22:58 np0005486808 podman[380207]: 2025-10-14 09:22:58.471538429 +0000 UTC m=+0.198891486 container attach 28e884683834510670ccd1f2f09e1edff1e91f5468c4bc17638ea4ca5dcdf252 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:22:58 np0005486808 nova_compute[259627]: 2025-10-14 09:22:58.680 2 DEBUG nova.compute.manager [req-3acb27f1-b973-44fd-87bd-376e1a0a264a req-408baddd-19f9-473c-8549-4a0f760b9f71 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Received event network-vif-plugged-0c7d56b8-701e-431d-8f3f-4682c684a719 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:22:58 np0005486808 nova_compute[259627]: 2025-10-14 09:22:58.680 2 DEBUG oslo_concurrency.lockutils [req-3acb27f1-b973-44fd-87bd-376e1a0a264a req-408baddd-19f9-473c-8549-4a0f760b9f71 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:22:58 np0005486808 nova_compute[259627]: 2025-10-14 09:22:58.680 2 DEBUG oslo_concurrency.lockutils [req-3acb27f1-b973-44fd-87bd-376e1a0a264a req-408baddd-19f9-473c-8549-4a0f760b9f71 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:22:58 np0005486808 nova_compute[259627]: 2025-10-14 09:22:58.681 2 DEBUG oslo_concurrency.lockutils [req-3acb27f1-b973-44fd-87bd-376e1a0a264a req-408baddd-19f9-473c-8549-4a0f760b9f71 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:22:58 np0005486808 nova_compute[259627]: 2025-10-14 09:22:58.681 2 DEBUG nova.compute.manager [req-3acb27f1-b973-44fd-87bd-376e1a0a264a req-408baddd-19f9-473c-8549-4a0f760b9f71 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Processing event network-vif-plugged-0c7d56b8-701e-431d-8f3f-4682c684a719 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:22:58 np0005486808 nova_compute[259627]: 2025-10-14 09:22:58.682 2 DEBUG nova.compute.manager [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:22:58 np0005486808 nova_compute[259627]: 2025-10-14 09:22:58.688 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433778.6874824, ef3d76bf-9763-4405-8e48-c2c4405a2a3b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:22:58 np0005486808 nova_compute[259627]: 2025-10-14 09:22:58.688 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:22:58 np0005486808 nova_compute[259627]: 2025-10-14 09:22:58.690 2 DEBUG nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:22:58 np0005486808 nova_compute[259627]: 2025-10-14 09:22:58.693 2 INFO nova.virt.libvirt.driver [-] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Instance spawned successfully.#033[00m
Oct 14 05:22:58 np0005486808 nova_compute[259627]: 2025-10-14 09:22:58.694 2 DEBUG nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:22:58 np0005486808 nova_compute[259627]: 2025-10-14 09:22:58.708 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:22:58 np0005486808 nova_compute[259627]: 2025-10-14 09:22:58.716 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:22:58 np0005486808 nova_compute[259627]: 2025-10-14 09:22:58.720 2 DEBUG nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:22:58 np0005486808 nova_compute[259627]: 2025-10-14 09:22:58.721 2 DEBUG nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:22:58 np0005486808 nova_compute[259627]: 2025-10-14 09:22:58.721 2 DEBUG nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:22:58 np0005486808 nova_compute[259627]: 2025-10-14 09:22:58.722 2 DEBUG nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:22:58 np0005486808 nova_compute[259627]: 2025-10-14 09:22:58.722 2 DEBUG nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:22:58 np0005486808 nova_compute[259627]: 2025-10-14 09:22:58.723 2 DEBUG nova.virt.libvirt.driver [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:22:58 np0005486808 nova_compute[259627]: 2025-10-14 09:22:58.744 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:22:58 np0005486808 nova_compute[259627]: 2025-10-14 09:22:58.777 2 INFO nova.compute.manager [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Took 12.10 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:22:58 np0005486808 nova_compute[259627]: 2025-10-14 09:22:58.778 2 DEBUG nova.compute.manager [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:22:58 np0005486808 nova_compute[259627]: 2025-10-14 09:22:58.850 2 INFO nova.compute.manager [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Took 13.13 seconds to build instance.#033[00m
Oct 14 05:22:58 np0005486808 nova_compute[259627]: 2025-10-14 09:22:58.872 2 DEBUG oslo_concurrency.lockutils [None req-3f1668c0-c519-4a44-bfb9-dfd23c883e3a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.218s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:22:59 np0005486808 nova_compute[259627]: 2025-10-14 09:22:59.041 2 DEBUG oslo_concurrency.lockutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Acquiring lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:22:59 np0005486808 nova_compute[259627]: 2025-10-14 09:22:59.041 2 DEBUG oslo_concurrency.lockutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:22:59 np0005486808 nova_compute[259627]: 2025-10-14 09:22:59.060 2 DEBUG nova.compute.manager [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:22:59 np0005486808 nova_compute[259627]: 2025-10-14 09:22:59.163 2 DEBUG oslo_concurrency.lockutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:22:59 np0005486808 nova_compute[259627]: 2025-10-14 09:22:59.164 2 DEBUG oslo_concurrency.lockutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:22:59 np0005486808 nova_compute[259627]: 2025-10-14 09:22:59.179 2 DEBUG nova.virt.hardware [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:22:59 np0005486808 nova_compute[259627]: 2025-10-14 09:22:59.179 2 INFO nova.compute.claims [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:22:59 np0005486808 nova_compute[259627]: 2025-10-14 09:22:59.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:22:59 np0005486808 nova_compute[259627]: 2025-10-14 09:22:59.374 2 DEBUG oslo_concurrency.processutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:22:59 np0005486808 zen_mirzakhani[380224]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:22:59 np0005486808 zen_mirzakhani[380224]: --> relative data size: 1.0
Oct 14 05:22:59 np0005486808 zen_mirzakhani[380224]: --> All data devices are unavailable
Oct 14 05:22:59 np0005486808 systemd[1]: libpod-28e884683834510670ccd1f2f09e1edff1e91f5468c4bc17638ea4ca5dcdf252.scope: Deactivated successfully.
Oct 14 05:22:59 np0005486808 systemd[1]: libpod-28e884683834510670ccd1f2f09e1edff1e91f5468c4bc17638ea4ca5dcdf252.scope: Consumed 1.046s CPU time.
Oct 14 05:22:59 np0005486808 podman[380207]: 2025-10-14 09:22:59.6566003 +0000 UTC m=+1.383953337 container died 28e884683834510670ccd1f2f09e1edff1e91f5468c4bc17638ea4ca5dcdf252 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mirzakhani, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:22:59 np0005486808 systemd[1]: var-lib-containers-storage-overlay-d665a4ebab82d306a5da2b013f76906758735e874cc24ad1ac79daee3fce0913-merged.mount: Deactivated successfully.
Oct 14 05:22:59 np0005486808 podman[380207]: 2025-10-14 09:22:59.725217336 +0000 UTC m=+1.452570363 container remove 28e884683834510670ccd1f2f09e1edff1e91f5468c4bc17638ea4ca5dcdf252 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:22:59 np0005486808 systemd[1]: libpod-conmon-28e884683834510670ccd1f2f09e1edff1e91f5468c4bc17638ea4ca5dcdf252.scope: Deactivated successfully.
Oct 14 05:22:59 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:22:59 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1269823853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:22:59 np0005486808 nova_compute[259627]: 2025-10-14 09:22:59.907 2 DEBUG oslo_concurrency.processutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:22:59 np0005486808 nova_compute[259627]: 2025-10-14 09:22:59.915 2 DEBUG nova.compute.provider_tree [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:22:59 np0005486808 nova_compute[259627]: 2025-10-14 09:22:59.934 2 DEBUG nova.scheduler.client.report [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:22:59 np0005486808 nova_compute[259627]: 2025-10-14 09:22:59.963 2 DEBUG oslo_concurrency.lockutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:22:59 np0005486808 nova_compute[259627]: 2025-10-14 09:22:59.965 2 DEBUG nova.compute.manager [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:23:00 np0005486808 nova_compute[259627]: 2025-10-14 09:23:00.037 2 DEBUG nova.compute.manager [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:23:00 np0005486808 nova_compute[259627]: 2025-10-14 09:23:00.037 2 DEBUG nova.network.neutron [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:23:00 np0005486808 nova_compute[259627]: 2025-10-14 09:23:00.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:00 np0005486808 nova_compute[259627]: 2025-10-14 09:23:00.211 2 INFO nova.virt.libvirt.driver [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:23:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2088: 305 pgs: 305 active+clean; 326 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 8.4 MiB/s rd, 7.0 MiB/s wr, 220 op/s
Oct 14 05:23:00 np0005486808 nova_compute[259627]: 2025-10-14 09:23:00.228 2 DEBUG nova.compute.manager [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:23:00 np0005486808 nova_compute[259627]: 2025-10-14 09:23:00.333 2 DEBUG nova.compute.manager [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:23:00 np0005486808 nova_compute[259627]: 2025-10-14 09:23:00.334 2 DEBUG nova.virt.libvirt.driver [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:23:00 np0005486808 nova_compute[259627]: 2025-10-14 09:23:00.334 2 INFO nova.virt.libvirt.driver [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Creating image(s)#033[00m
Oct 14 05:23:00 np0005486808 nova_compute[259627]: 2025-10-14 09:23:00.358 2 DEBUG nova.storage.rbd_utils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] rbd image 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:23:00 np0005486808 podman[380425]: 2025-10-14 09:23:00.406975981 +0000 UTC m=+0.084030017 container create 231795364bab558a7d19e24b59542f0bb2e8e8e23529247f02c60b82909557d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 05:23:00 np0005486808 nova_compute[259627]: 2025-10-14 09:23:00.423 2 DEBUG nova.storage.rbd_utils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] rbd image 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:23:00 np0005486808 systemd[1]: Started libpod-conmon-231795364bab558a7d19e24b59542f0bb2e8e8e23529247f02c60b82909557d6.scope.
Oct 14 05:23:00 np0005486808 podman[380425]: 2025-10-14 09:23:00.356906924 +0000 UTC m=+0.033960980 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:23:00 np0005486808 nova_compute[259627]: 2025-10-14 09:23:00.453 2 DEBUG nova.storage.rbd_utils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] rbd image 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:23:00 np0005486808 nova_compute[259627]: 2025-10-14 09:23:00.456 2 DEBUG oslo_concurrency.lockutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Acquiring lock "00eb9ed082be65aa60123c7b340067da781aa0fa" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:23:00 np0005486808 nova_compute[259627]: 2025-10-14 09:23:00.457 2 DEBUG oslo_concurrency.lockutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "00eb9ed082be65aa60123c7b340067da781aa0fa" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:23:00 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:23:00 np0005486808 podman[380425]: 2025-10-14 09:23:00.491741186 +0000 UTC m=+0.168795222 container init 231795364bab558a7d19e24b59542f0bb2e8e8e23529247f02c60b82909557d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_pike, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:23:00 np0005486808 podman[380425]: 2025-10-14 09:23:00.499791945 +0000 UTC m=+0.176845971 container start 231795364bab558a7d19e24b59542f0bb2e8e8e23529247f02c60b82909557d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_pike, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 14 05:23:00 np0005486808 podman[380425]: 2025-10-14 09:23:00.503405054 +0000 UTC m=+0.180459070 container attach 231795364bab558a7d19e24b59542f0bb2e8e8e23529247f02c60b82909557d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_pike, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:23:00 np0005486808 vigilant_pike[380493]: 167 167
Oct 14 05:23:00 np0005486808 systemd[1]: libpod-231795364bab558a7d19e24b59542f0bb2e8e8e23529247f02c60b82909557d6.scope: Deactivated successfully.
Oct 14 05:23:00 np0005486808 podman[380425]: 2025-10-14 09:23:00.508172832 +0000 UTC m=+0.185226878 container died 231795364bab558a7d19e24b59542f0bb2e8e8e23529247f02c60b82909557d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_pike, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:23:00 np0005486808 systemd[1]: var-lib-containers-storage-overlay-66111c2a9c9d578137af4a5fd1d87e2bea74640e5cd0166ffd724415571b3e0d-merged.mount: Deactivated successfully.
Oct 14 05:23:00 np0005486808 podman[380425]: 2025-10-14 09:23:00.559302696 +0000 UTC m=+0.236356742 container remove 231795364bab558a7d19e24b59542f0bb2e8e8e23529247f02c60b82909557d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 14 05:23:00 np0005486808 systemd[1]: libpod-conmon-231795364bab558a7d19e24b59542f0bb2e8e8e23529247f02c60b82909557d6.scope: Deactivated successfully.
Oct 14 05:23:00 np0005486808 nova_compute[259627]: 2025-10-14 09:23:00.639 2 DEBUG nova.policy [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f232ab535af04111bf570569aa293116', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4112adc84657452aa0e117ac5999054a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 05:23:00 np0005486808 nova_compute[259627]: 2025-10-14 09:23:00.782 2 DEBUG nova.virt.libvirt.imagebackend [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Image locations are: [{'url': 'rbd://c49aadb6-9b04-5cb1-8f5f-4c91676c568e/images/d594f64a-1811-45da-92c9-566107aad012/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://c49aadb6-9b04-5cb1-8f5f-4c91676c568e/images/d594f64a-1811-45da-92c9-566107aad012/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 14 05:23:00 np0005486808 nova_compute[259627]: 2025-10-14 09:23:00.838 2 DEBUG nova.compute.manager [req-ed8b0e64-b789-4db0-8d72-d31a7495351b req-59a1e423-006e-435d-97e7-c42c94abda16 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Received event network-vif-plugged-0c7d56b8-701e-431d-8f3f-4682c684a719 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:23:00 np0005486808 nova_compute[259627]: 2025-10-14 09:23:00.839 2 DEBUG oslo_concurrency.lockutils [req-ed8b0e64-b789-4db0-8d72-d31a7495351b req-59a1e423-006e-435d-97e7-c42c94abda16 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:23:00 np0005486808 nova_compute[259627]: 2025-10-14 09:23:00.839 2 DEBUG oslo_concurrency.lockutils [req-ed8b0e64-b789-4db0-8d72-d31a7495351b req-59a1e423-006e-435d-97e7-c42c94abda16 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:23:00 np0005486808 nova_compute[259627]: 2025-10-14 09:23:00.839 2 DEBUG oslo_concurrency.lockutils [req-ed8b0e64-b789-4db0-8d72-d31a7495351b req-59a1e423-006e-435d-97e7-c42c94abda16 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:23:00 np0005486808 nova_compute[259627]: 2025-10-14 09:23:00.839 2 DEBUG nova.compute.manager [req-ed8b0e64-b789-4db0-8d72-d31a7495351b req-59a1e423-006e-435d-97e7-c42c94abda16 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] No waiting events found dispatching network-vif-plugged-0c7d56b8-701e-431d-8f3f-4682c684a719 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 05:23:00 np0005486808 nova_compute[259627]: 2025-10-14 09:23:00.839 2 WARNING nova.compute.manager [req-ed8b0e64-b789-4db0-8d72-d31a7495351b req-59a1e423-006e-435d-97e7-c42c94abda16 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Received unexpected event network-vif-plugged-0c7d56b8-701e-431d-8f3f-4682c684a719 for instance with vm_state active and task_state None.
Oct 14 05:23:00 np0005486808 podman[380517]: 2025-10-14 09:23:00.842099273 +0000 UTC m=+0.050570741 container create 87817f05ce2a3eb09fc787a0e6ef26a6e0613358b46752310e798dd9e8893a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:23:00 np0005486808 nova_compute[259627]: 2025-10-14 09:23:00.853 2 DEBUG nova.virt.libvirt.imagebackend [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Selected location: {'url': 'rbd://c49aadb6-9b04-5cb1-8f5f-4c91676c568e/images/d594f64a-1811-45da-92c9-566107aad012/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Oct 14 05:23:00 np0005486808 nova_compute[259627]: 2025-10-14 09:23:00.854 2 DEBUG nova.storage.rbd_utils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] cloning images/d594f64a-1811-45da-92c9-566107aad012@snap to None/8f5e63fb-23c2-4f15-acca-bc5fbeb0729b_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 14 05:23:00 np0005486808 systemd[1]: Started libpod-conmon-87817f05ce2a3eb09fc787a0e6ef26a6e0613358b46752310e798dd9e8893a7b.scope.
Oct 14 05:23:00 np0005486808 podman[380517]: 2025-10-14 09:23:00.819262139 +0000 UTC m=+0.027716406 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:23:00 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:23:00 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e2409cc6b1dd0ffbb963127b107c5f048d115ecf74f08e4106a3e5f5d79c2b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:23:00 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e2409cc6b1dd0ffbb963127b107c5f048d115ecf74f08e4106a3e5f5d79c2b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:23:00 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e2409cc6b1dd0ffbb963127b107c5f048d115ecf74f08e4106a3e5f5d79c2b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:23:00 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e2409cc6b1dd0ffbb963127b107c5f048d115ecf74f08e4106a3e5f5d79c2b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:23:00 np0005486808 podman[380517]: 2025-10-14 09:23:00.955308211 +0000 UTC m=+0.163762458 container init 87817f05ce2a3eb09fc787a0e6ef26a6e0613358b46752310e798dd9e8893a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_maxwell, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 05:23:00 np0005486808 podman[380574]: 2025-10-14 09:23:00.955822333 +0000 UTC m=+0.080750326 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:23:00 np0005486808 podman[380563]: 2025-10-14 09:23:00.959461453 +0000 UTC m=+0.083729060 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 05:23:00 np0005486808 podman[380517]: 2025-10-14 09:23:00.964431236 +0000 UTC m=+0.172885483 container start 87817f05ce2a3eb09fc787a0e6ef26a6e0613358b46752310e798dd9e8893a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_maxwell, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:23:00 np0005486808 podman[380517]: 2025-10-14 09:23:00.977232833 +0000 UTC m=+0.185687080 container attach 87817f05ce2a3eb09fc787a0e6ef26a6e0613358b46752310e798dd9e8893a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_maxwell, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Oct 14 05:23:01 np0005486808 nova_compute[259627]: 2025-10-14 09:23:01.156 2 DEBUG oslo_concurrency.lockutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "00eb9ed082be65aa60123c7b340067da781aa0fa" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:23:01 np0005486808 nova_compute[259627]: 2025-10-14 09:23:01.266 2 DEBUG nova.objects.instance [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lazy-loading 'migration_context' on Instance uuid 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:23:01 np0005486808 nova_compute[259627]: 2025-10-14 09:23:01.282 2 DEBUG nova.virt.libvirt.driver [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 05:23:01 np0005486808 nova_compute[259627]: 2025-10-14 09:23:01.283 2 DEBUG nova.virt.libvirt.driver [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Ensure instance console log exists: /var/lib/nova/instances/8f5e63fb-23c2-4f15-acca-bc5fbeb0729b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 05:23:01 np0005486808 nova_compute[259627]: 2025-10-14 09:23:01.283 2 DEBUG oslo_concurrency.lockutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:23:01 np0005486808 nova_compute[259627]: 2025-10-14 09:23:01.283 2 DEBUG oslo_concurrency.lockutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:23:01 np0005486808 nova_compute[259627]: 2025-10-14 09:23:01.283 2 DEBUG oslo_concurrency.lockutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]: {
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:    "0": [
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:        {
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:            "devices": [
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:                "/dev/loop3"
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:            ],
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:            "lv_name": "ceph_lv0",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:            "lv_size": "21470642176",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:            "name": "ceph_lv0",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:            "tags": {
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:                "ceph.cluster_name": "ceph",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:                "ceph.crush_device_class": "",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:                "ceph.encrypted": "0",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:                "ceph.osd_id": "0",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:                "ceph.type": "block",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:                "ceph.vdo": "0"
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:            },
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:            "type": "block",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:            "vg_name": "ceph_vg0"
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:        }
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:    ],
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:    "1": [
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:        {
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:            "devices": [
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:                "/dev/loop4"
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:            ],
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:            "lv_name": "ceph_lv1",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:            "lv_size": "21470642176",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:            "name": "ceph_lv1",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:            "tags": {
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:                "ceph.cluster_name": "ceph",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:                "ceph.crush_device_class": "",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:                "ceph.encrypted": "0",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:                "ceph.osd_id": "1",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:                "ceph.type": "block",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:                "ceph.vdo": "0"
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:            },
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:            "type": "block",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:            "vg_name": "ceph_vg1"
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:        }
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:    ],
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:    "2": [
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:        {
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:            "devices": [
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:                "/dev/loop5"
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:            ],
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:            "lv_name": "ceph_lv2",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:            "lv_size": "21470642176",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:            "name": "ceph_lv2",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:            "tags": {
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:                "ceph.cluster_name": "ceph",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:                "ceph.crush_device_class": "",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:                "ceph.encrypted": "0",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:                "ceph.osd_id": "2",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:                "ceph.type": "block",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:                "ceph.vdo": "0"
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:            },
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:            "type": "block",
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:            "vg_name": "ceph_vg2"
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:        }
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]:    ]
Oct 14 05:23:01 np0005486808 naughty_maxwell[380615]: }
Oct 14 05:23:01 np0005486808 systemd[1]: libpod-87817f05ce2a3eb09fc787a0e6ef26a6e0613358b46752310e798dd9e8893a7b.scope: Deactivated successfully.
Oct 14 05:23:01 np0005486808 podman[380701]: 2025-10-14 09:23:01.844918713 +0000 UTC m=+0.024749743 container died 87817f05ce2a3eb09fc787a0e6ef26a6e0613358b46752310e798dd9e8893a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 14 05:23:01 np0005486808 systemd[1]: var-lib-containers-storage-overlay-8e2409cc6b1dd0ffbb963127b107c5f048d115ecf74f08e4106a3e5f5d79c2b5-merged.mount: Deactivated successfully.
Oct 14 05:23:01 np0005486808 podman[380701]: 2025-10-14 09:23:01.981925548 +0000 UTC m=+0.161756558 container remove 87817f05ce2a3eb09fc787a0e6ef26a6e0613358b46752310e798dd9e8893a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 14 05:23:01 np0005486808 systemd[1]: libpod-conmon-87817f05ce2a3eb09fc787a0e6ef26a6e0613358b46752310e798dd9e8893a7b.scope: Deactivated successfully.
Oct 14 05:23:02 np0005486808 nova_compute[259627]: 2025-10-14 09:23:02.022 2 DEBUG nova.network.neutron [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Successfully created port: 169fcf13-d616-47ef-8558-362361f16f03 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:23:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2089: 305 pgs: 305 active+clean; 326 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 8.6 MiB/s rd, 5.9 MiB/s wr, 237 op/s
Oct 14 05:23:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:23:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:23:02 np0005486808 podman[380854]: 2025-10-14 09:23:02.751007221 +0000 UTC m=+0.051646867 container create 0d31a599a01f4b9bed7a4ad254889d9e98b4fb91200126ad60b392312a3cf7b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_liskov, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:23:02 np0005486808 systemd[1]: Started libpod-conmon-0d31a599a01f4b9bed7a4ad254889d9e98b4fb91200126ad60b392312a3cf7b1.scope.
Oct 14 05:23:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:23:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:23:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:23:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:23:02 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:23:02 np0005486808 podman[380854]: 2025-10-14 09:23:02.827208944 +0000 UTC m=+0.127848600 container init 0d31a599a01f4b9bed7a4ad254889d9e98b4fb91200126ad60b392312a3cf7b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_liskov, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 14 05:23:02 np0005486808 podman[380854]: 2025-10-14 09:23:02.733385015 +0000 UTC m=+0.034024681 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:23:02 np0005486808 podman[380854]: 2025-10-14 09:23:02.836139454 +0000 UTC m=+0.136779140 container start 0d31a599a01f4b9bed7a4ad254889d9e98b4fb91200126ad60b392312a3cf7b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_liskov, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:23:02 np0005486808 podman[380854]: 2025-10-14 09:23:02.840564984 +0000 UTC m=+0.141204650 container attach 0d31a599a01f4b9bed7a4ad254889d9e98b4fb91200126ad60b392312a3cf7b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:23:02 np0005486808 funny_liskov[380870]: 167 167
Oct 14 05:23:02 np0005486808 systemd[1]: libpod-0d31a599a01f4b9bed7a4ad254889d9e98b4fb91200126ad60b392312a3cf7b1.scope: Deactivated successfully.
Oct 14 05:23:02 np0005486808 podman[380854]: 2025-10-14 09:23:02.843710982 +0000 UTC m=+0.144350668 container died 0d31a599a01f4b9bed7a4ad254889d9e98b4fb91200126ad60b392312a3cf7b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_liskov, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:23:02 np0005486808 systemd[1]: var-lib-containers-storage-overlay-8bc75e951dd04e8fed14665f7ea05663315f5b7ef1dca6b8d5a340a2e2a3b8d5-merged.mount: Deactivated successfully.
Oct 14 05:23:02 np0005486808 podman[380854]: 2025-10-14 09:23:02.890779185 +0000 UTC m=+0.191418871 container remove 0d31a599a01f4b9bed7a4ad254889d9e98b4fb91200126ad60b392312a3cf7b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 14 05:23:02 np0005486808 systemd[1]: libpod-conmon-0d31a599a01f4b9bed7a4ad254889d9e98b4fb91200126ad60b392312a3cf7b1.scope: Deactivated successfully.
Oct 14 05:23:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:23:03 np0005486808 podman[380895]: 2025-10-14 09:23:03.138514986 +0000 UTC m=+0.061706876 container create c4328c265c8ea1a8a320b62ad001ed252f4c398c5addbb0123ecbd5377275508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:23:03 np0005486808 systemd[1]: Started libpod-conmon-c4328c265c8ea1a8a320b62ad001ed252f4c398c5addbb0123ecbd5377275508.scope.
Oct 14 05:23:03 np0005486808 podman[380895]: 2025-10-14 09:23:03.113019556 +0000 UTC m=+0.036211506 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:23:03 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:23:03 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b05da768ac4b6ef8c99dabbe9710587f286d72e05815d80aa40bbd1f9903402a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:23:03 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b05da768ac4b6ef8c99dabbe9710587f286d72e05815d80aa40bbd1f9903402a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:23:03 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b05da768ac4b6ef8c99dabbe9710587f286d72e05815d80aa40bbd1f9903402a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:23:03 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b05da768ac4b6ef8c99dabbe9710587f286d72e05815d80aa40bbd1f9903402a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:23:03 np0005486808 podman[380895]: 2025-10-14 09:23:03.244235388 +0000 UTC m=+0.167427268 container init c4328c265c8ea1a8a320b62ad001ed252f4c398c5addbb0123ecbd5377275508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:23:03 np0005486808 podman[380895]: 2025-10-14 09:23:03.25848565 +0000 UTC m=+0.181677510 container start c4328c265c8ea1a8a320b62ad001ed252f4c398c5addbb0123ecbd5377275508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:23:03 np0005486808 podman[380895]: 2025-10-14 09:23:03.261987197 +0000 UTC m=+0.185179097 container attach c4328c265c8ea1a8a320b62ad001ed252f4c398c5addbb0123ecbd5377275508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_volhard, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:23:03 np0005486808 nova_compute[259627]: 2025-10-14 09:23:03.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:03 np0005486808 nova_compute[259627]: 2025-10-14 09:23:03.647 2 DEBUG nova.network.neutron [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Successfully updated port: 169fcf13-d616-47ef-8558-362361f16f03 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:23:03 np0005486808 nova_compute[259627]: 2025-10-14 09:23:03.663 2 DEBUG oslo_concurrency.lockutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Acquiring lock "refresh_cache-8f5e63fb-23c2-4f15-acca-bc5fbeb0729b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:23:03 np0005486808 nova_compute[259627]: 2025-10-14 09:23:03.663 2 DEBUG oslo_concurrency.lockutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Acquired lock "refresh_cache-8f5e63fb-23c2-4f15-acca-bc5fbeb0729b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:23:03 np0005486808 nova_compute[259627]: 2025-10-14 09:23:03.663 2 DEBUG nova.network.neutron [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:23:03 np0005486808 nova_compute[259627]: 2025-10-14 09:23:03.757 2 DEBUG nova.compute.manager [req-767f6e60-b3e0-4208-b05f-f59e6e5d8d23 req-863328d4-01f0-40e1-97bf-c1cd52e03a2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Received event network-changed-169fcf13-d616-47ef-8558-362361f16f03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:23:03 np0005486808 nova_compute[259627]: 2025-10-14 09:23:03.758 2 DEBUG nova.compute.manager [req-767f6e60-b3e0-4208-b05f-f59e6e5d8d23 req-863328d4-01f0-40e1-97bf-c1cd52e03a2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Refreshing instance network info cache due to event network-changed-169fcf13-d616-47ef-8558-362361f16f03. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:23:03 np0005486808 nova_compute[259627]: 2025-10-14 09:23:03.758 2 DEBUG oslo_concurrency.lockutils [req-767f6e60-b3e0-4208-b05f-f59e6e5d8d23 req-863328d4-01f0-40e1-97bf-c1cd52e03a2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-8f5e63fb-23c2-4f15-acca-bc5fbeb0729b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:23:03 np0005486808 nova_compute[259627]: 2025-10-14 09:23:03.936 2 DEBUG nova.network.neutron [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:23:03 np0005486808 nova_compute[259627]: 2025-10-14 09:23:03.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:23:03 np0005486808 nova_compute[259627]: 2025-10-14 09:23:03.977 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:23:03 np0005486808 nova_compute[259627]: 2025-10-14 09:23:03.978 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:23:03 np0005486808 nova_compute[259627]: 2025-10-14 09:23:03.978 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:23:03 np0005486808 nova_compute[259627]: 2025-10-14 09:23:03.978 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:23:03 np0005486808 nova_compute[259627]: 2025-10-14 09:23:03.979 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:23:03 np0005486808 nova_compute[259627]: 2025-10-14 09:23:03.979 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:23:04 np0005486808 nova_compute[259627]: 2025-10-14 09:23:04.012 2 DEBUG nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100#033[00m
Oct 14 05:23:04 np0005486808 nova_compute[259627]: 2025-10-14 09:23:04.032 2 DEBUG nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Oct 14 05:23:04 np0005486808 nova_compute[259627]: 2025-10-14 09:23:04.032 2 DEBUG nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Image id d594f64a-1811-45da-92c9-566107aad012 yields fingerprint 00eb9ed082be65aa60123c7b340067da781aa0fa _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Oct 14 05:23:04 np0005486808 nova_compute[259627]: 2025-10-14 09:23:04.033 2 DEBUG nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Oct 14 05:23:04 np0005486808 nova_compute[259627]: 2025-10-14 09:23:04.033 2 DEBUG nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Image id a4789543-f429-47d7-9f79-80a9d90a59f9 yields fingerprint 342c3cf69558783c61e2fc446ea836becb687963 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Oct 14 05:23:04 np0005486808 nova_compute[259627]: 2025-10-14 09:23:04.034 2 INFO nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] image a4789543-f429-47d7-9f79-80a9d90a59f9 at (/var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963): checking#033[00m
Oct 14 05:23:04 np0005486808 nova_compute[259627]: 2025-10-14 09:23:04.034 2 DEBUG nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] image a4789543-f429-47d7-9f79-80a9d90a59f9 at (/var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279#033[00m
Oct 14 05:23:04 np0005486808 nova_compute[259627]: 2025-10-14 09:23:04.036 2 DEBUG nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] 50c83173-31e3-4f7a-8836-26e52affd0f2 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Oct 14 05:23:04 np0005486808 nova_compute[259627]: 2025-10-14 09:23:04.036 2 DEBUG nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] 6810b29b-088f-441b-8a6a-02eaafada0c5 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Oct 14 05:23:04 np0005486808 nova_compute[259627]: 2025-10-14 09:23:04.037 2 DEBUG nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] ef3d76bf-9763-4405-8e48-c2c4405a2a3b is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Oct 14 05:23:04 np0005486808 nova_compute[259627]: 2025-10-14 09:23:04.037 2 DEBUG nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Oct 14 05:23:04 np0005486808 nova_compute[259627]: 2025-10-14 09:23:04.037 2 WARNING nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf#033[00m
Oct 14 05:23:04 np0005486808 nova_compute[259627]: 2025-10-14 09:23:04.037 2 INFO nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Active base files: /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963#033[00m
Oct 14 05:23:04 np0005486808 nova_compute[259627]: 2025-10-14 09:23:04.037 2 INFO nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Removable base files: /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf#033[00m
Oct 14 05:23:04 np0005486808 nova_compute[259627]: 2025-10-14 09:23:04.038 2 INFO nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf#033[00m
Oct 14 05:23:04 np0005486808 nova_compute[259627]: 2025-10-14 09:23:04.038 2 DEBUG nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Oct 14 05:23:04 np0005486808 nova_compute[259627]: 2025-10-14 09:23:04.038 2 DEBUG nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Oct 14 05:23:04 np0005486808 nova_compute[259627]: 2025-10-14 09:23:04.038 2 DEBUG nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Oct 14 05:23:04 np0005486808 nova_compute[259627]: 2025-10-14 09:23:04.039 2 INFO nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66#033[00m
Oct 14 05:23:04 np0005486808 unruffled_volhard[380912]: {
Oct 14 05:23:04 np0005486808 unruffled_volhard[380912]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:23:04 np0005486808 unruffled_volhard[380912]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:23:04 np0005486808 unruffled_volhard[380912]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:23:04 np0005486808 unruffled_volhard[380912]:        "osd_id": 2,
Oct 14 05:23:04 np0005486808 unruffled_volhard[380912]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:23:04 np0005486808 unruffled_volhard[380912]:        "type": "bluestore"
Oct 14 05:23:04 np0005486808 unruffled_volhard[380912]:    },
Oct 14 05:23:04 np0005486808 unruffled_volhard[380912]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:23:04 np0005486808 unruffled_volhard[380912]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:23:04 np0005486808 unruffled_volhard[380912]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:23:04 np0005486808 unruffled_volhard[380912]:        "osd_id": 1,
Oct 14 05:23:04 np0005486808 unruffled_volhard[380912]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:23:04 np0005486808 unruffled_volhard[380912]:        "type": "bluestore"
Oct 14 05:23:04 np0005486808 unruffled_volhard[380912]:    },
Oct 14 05:23:04 np0005486808 unruffled_volhard[380912]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:23:04 np0005486808 unruffled_volhard[380912]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:23:04 np0005486808 unruffled_volhard[380912]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:23:04 np0005486808 unruffled_volhard[380912]:        "osd_id": 0,
Oct 14 05:23:04 np0005486808 unruffled_volhard[380912]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:23:04 np0005486808 unruffled_volhard[380912]:        "type": "bluestore"
Oct 14 05:23:04 np0005486808 unruffled_volhard[380912]:    }
Oct 14 05:23:04 np0005486808 unruffled_volhard[380912]: }
Oct 14 05:23:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2090: 305 pgs: 305 active+clean; 326 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 7.1 MiB/s rd, 4.8 MiB/s wr, 195 op/s
Oct 14 05:23:04 np0005486808 systemd[1]: libpod-c4328c265c8ea1a8a320b62ad001ed252f4c398c5addbb0123ecbd5377275508.scope: Deactivated successfully.
Oct 14 05:23:04 np0005486808 conmon[380912]: conmon c4328c265c8ea1a8a320 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c4328c265c8ea1a8a320b62ad001ed252f4c398c5addbb0123ecbd5377275508.scope/container/memory.events
Oct 14 05:23:04 np0005486808 podman[380895]: 2025-10-14 09:23:04.259836094 +0000 UTC m=+1.183027984 container died c4328c265c8ea1a8a320b62ad001ed252f4c398c5addbb0123ecbd5377275508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 14 05:23:04 np0005486808 nova_compute[259627]: 2025-10-14 09:23:04.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:04 np0005486808 systemd[1]: var-lib-containers-storage-overlay-b05da768ac4b6ef8c99dabbe9710587f286d72e05815d80aa40bbd1f9903402a-merged.mount: Deactivated successfully.
Oct 14 05:23:04 np0005486808 podman[380895]: 2025-10-14 09:23:04.318530004 +0000 UTC m=+1.241721904 container remove c4328c265c8ea1a8a320b62ad001ed252f4c398c5addbb0123ecbd5377275508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:23:04 np0005486808 systemd[1]: libpod-conmon-c4328c265c8ea1a8a320b62ad001ed252f4c398c5addbb0123ecbd5377275508.scope: Deactivated successfully.
Oct 14 05:23:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:23:04 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:23:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:23:04 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:23:04 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 73447796-0991-4fb4-bd4d-94225c535c27 does not exist
Oct 14 05:23:04 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev e0068f8c-a12d-49f3-8c54-b4236d25bd45 does not exist
Oct 14 05:23:05 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:23:05 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:23:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:23:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4254305706' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:23:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:23:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4254305706' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:23:05 np0005486808 nova_compute[259627]: 2025-10-14 09:23:05.698 2 DEBUG nova.network.neutron [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Updating instance_info_cache with network_info: [{"id": "169fcf13-d616-47ef-8558-362361f16f03", "address": "fa:16:3e:46:4e:45", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap169fcf13-d6", "ovs_interfaceid": "169fcf13-d616-47ef-8558-362361f16f03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:23:05 np0005486808 nova_compute[259627]: 2025-10-14 09:23:05.726 2 DEBUG oslo_concurrency.lockutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Releasing lock "refresh_cache-8f5e63fb-23c2-4f15-acca-bc5fbeb0729b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:23:05 np0005486808 nova_compute[259627]: 2025-10-14 09:23:05.726 2 DEBUG nova.compute.manager [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Instance network_info: |[{"id": "169fcf13-d616-47ef-8558-362361f16f03", "address": "fa:16:3e:46:4e:45", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap169fcf13-d6", "ovs_interfaceid": "169fcf13-d616-47ef-8558-362361f16f03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:23:05 np0005486808 nova_compute[259627]: 2025-10-14 09:23:05.726 2 DEBUG oslo_concurrency.lockutils [req-767f6e60-b3e0-4208-b05f-f59e6e5d8d23 req-863328d4-01f0-40e1-97bf-c1cd52e03a2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-8f5e63fb-23c2-4f15-acca-bc5fbeb0729b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:23:05 np0005486808 nova_compute[259627]: 2025-10-14 09:23:05.726 2 DEBUG nova.network.neutron [req-767f6e60-b3e0-4208-b05f-f59e6e5d8d23 req-863328d4-01f0-40e1-97bf-c1cd52e03a2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Refreshing network info cache for port 169fcf13-d616-47ef-8558-362361f16f03 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:23:05 np0005486808 nova_compute[259627]: 2025-10-14 09:23:05.729 2 DEBUG nova.virt.libvirt.driver [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Start _get_guest_xml network_info=[{"id": "169fcf13-d616-47ef-8558-362361f16f03", "address": "fa:16:3e:46:4e:45", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap169fcf13-d6", "ovs_interfaceid": "169fcf13-d616-47ef-8558-362361f16f03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-14T09:22:50Z,direct_url=<?>,disk_format='raw',id=d594f64a-1811-45da-92c9-566107aad012,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-397603188',owner='4112adc84657452aa0e117ac5999054a',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-14T09:22:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'd594f64a-1811-45da-92c9-566107aad012'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:23:05 np0005486808 nova_compute[259627]: 2025-10-14 09:23:05.733 2 WARNING nova.virt.libvirt.driver [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:23:05 np0005486808 nova_compute[259627]: 2025-10-14 09:23:05.740 2 DEBUG nova.virt.libvirt.host [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:23:05 np0005486808 nova_compute[259627]: 2025-10-14 09:23:05.740 2 DEBUG nova.virt.libvirt.host [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:23:05 np0005486808 nova_compute[259627]: 2025-10-14 09:23:05.744 2 DEBUG nova.virt.libvirt.host [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:23:05 np0005486808 nova_compute[259627]: 2025-10-14 09:23:05.745 2 DEBUG nova.virt.libvirt.host [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:23:05 np0005486808 nova_compute[259627]: 2025-10-14 09:23:05.745 2 DEBUG nova.virt.libvirt.driver [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:23:05 np0005486808 nova_compute[259627]: 2025-10-14 09:23:05.745 2 DEBUG nova.virt.hardware [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-14T09:22:50Z,direct_url=<?>,disk_format='raw',id=d594f64a-1811-45da-92c9-566107aad012,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-397603188',owner='4112adc84657452aa0e117ac5999054a',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-14T09:22:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:23:05 np0005486808 nova_compute[259627]: 2025-10-14 09:23:05.745 2 DEBUG nova.virt.hardware [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:23:05 np0005486808 nova_compute[259627]: 2025-10-14 09:23:05.746 2 DEBUG nova.virt.hardware [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:23:05 np0005486808 nova_compute[259627]: 2025-10-14 09:23:05.746 2 DEBUG nova.virt.hardware [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:23:05 np0005486808 nova_compute[259627]: 2025-10-14 09:23:05.746 2 DEBUG nova.virt.hardware [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:23:05 np0005486808 nova_compute[259627]: 2025-10-14 09:23:05.746 2 DEBUG nova.virt.hardware [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:23:05 np0005486808 nova_compute[259627]: 2025-10-14 09:23:05.746 2 DEBUG nova.virt.hardware [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:23:05 np0005486808 nova_compute[259627]: 2025-10-14 09:23:05.746 2 DEBUG nova.virt.hardware [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:23:05 np0005486808 nova_compute[259627]: 2025-10-14 09:23:05.747 2 DEBUG nova.virt.hardware [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:23:05 np0005486808 nova_compute[259627]: 2025-10-14 09:23:05.747 2 DEBUG nova.virt.hardware [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:23:05 np0005486808 nova_compute[259627]: 2025-10-14 09:23:05.747 2 DEBUG nova.virt.hardware [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:23:05 np0005486808 nova_compute[259627]: 2025-10-14 09:23:05.749 2 DEBUG oslo_concurrency.processutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:23:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:23:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3777986676' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:23:06 np0005486808 nova_compute[259627]: 2025-10-14 09:23:06.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2091: 305 pgs: 305 active+clean; 326 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 KiB/s wr, 116 op/s
Oct 14 05:23:06 np0005486808 nova_compute[259627]: 2025-10-14 09:23:06.235 2 DEBUG oslo_concurrency.processutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:23:06 np0005486808 nova_compute[259627]: 2025-10-14 09:23:06.276 2 DEBUG nova.storage.rbd_utils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] rbd image 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:23:06 np0005486808 nova_compute[259627]: 2025-10-14 09:23:06.283 2 DEBUG oslo_concurrency.processutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:23:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:06.587 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:23:06 np0005486808 nova_compute[259627]: 2025-10-14 09:23:06.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:06.589 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 14 05:23:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:23:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3969605750' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:23:06 np0005486808 nova_compute[259627]: 2025-10-14 09:23:06.823 2 DEBUG oslo_concurrency.processutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:23:06 np0005486808 nova_compute[259627]: 2025-10-14 09:23:06.825 2 DEBUG nova.virt.libvirt.vif [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:22:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1820866852',display_name='tempest-TestSnapshotPattern-server-1820866852',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1820866852',id=121,image_ref='d594f64a-1811-45da-92c9-566107aad012',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG/LFMmap4OB0txXt7JPmRqMoCntTC6tPJ8UMtOaZfJpNHOk397YhzNpC3bp7HY+1PlJdNNwm/PwDw4vAnX0LcroehUvKxClr2ruduakP/QJ599/XDk5NHuriBpOA/llJg==',key_name='tempest-TestSnapshotPattern-1883373955',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4112adc84657452aa0e117ac5999054a',ramdisk_id='',reservation_id='r-z8e26amr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='6810b29b-088f-441b-8a6a-02eaafada0c5',image_min_disk='1',image_min_ram='0',image_owner_id='4112adc84657452aa0e117ac5999054a',image_owner_project_name='tempest-TestSnapshotPattern-70687399',image_owner_user_name='tempest-TestSnapshotPattern-70687399-project-member',image_user_id='f232ab535af04111bf570569aa293116',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-70687399',owner_user_name='tempest-TestSnapshotPattern-70687399-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:23:00Z,user_data=None,user_id='f232ab535af04111bf570569aa293116',uuid=8f5e63fb-23c2-4f15-acca-bc5fbeb0729b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "169fcf13-d616-47ef-8558-362361f16f03", "address": "fa:16:3e:46:4e:45", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap169fcf13-d6", "ovs_interfaceid": "169fcf13-d616-47ef-8558-362361f16f03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:23:06 np0005486808 nova_compute[259627]: 2025-10-14 09:23:06.825 2 DEBUG nova.network.os_vif_util [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Converting VIF {"id": "169fcf13-d616-47ef-8558-362361f16f03", "address": "fa:16:3e:46:4e:45", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap169fcf13-d6", "ovs_interfaceid": "169fcf13-d616-47ef-8558-362361f16f03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:23:06 np0005486808 nova_compute[259627]: 2025-10-14 09:23:06.826 2 DEBUG nova.network.os_vif_util [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:4e:45,bridge_name='br-int',has_traffic_filtering=True,id=169fcf13-d616-47ef-8558-362361f16f03,network=Network(4fc37d66-193b-4ab7-80e3-58e26dc76e47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap169fcf13-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:23:06 np0005486808 nova_compute[259627]: 2025-10-14 09:23:06.828 2 DEBUG nova.objects.instance [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lazy-loading 'pci_devices' on Instance uuid 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:23:06 np0005486808 nova_compute[259627]: 2025-10-14 09:23:06.855 2 DEBUG nova.virt.libvirt.driver [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:23:06 np0005486808 nova_compute[259627]:  <uuid>8f5e63fb-23c2-4f15-acca-bc5fbeb0729b</uuid>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:  <name>instance-00000079</name>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:23:06 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:      <nova:name>tempest-TestSnapshotPattern-server-1820866852</nova:name>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:23:05</nova:creationTime>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:23:06 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:        <nova:user uuid="f232ab535af04111bf570569aa293116">tempest-TestSnapshotPattern-70687399-project-member</nova:user>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:        <nova:project uuid="4112adc84657452aa0e117ac5999054a">tempest-TestSnapshotPattern-70687399</nova:project>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="d594f64a-1811-45da-92c9-566107aad012"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:        <nova:port uuid="169fcf13-d616-47ef-8558-362361f16f03">
Oct 14 05:23:06 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:      <entry name="serial">8f5e63fb-23c2-4f15-acca-bc5fbeb0729b</entry>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:      <entry name="uuid">8f5e63fb-23c2-4f15-acca-bc5fbeb0729b</entry>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:23:06 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/8f5e63fb-23c2-4f15-acca-bc5fbeb0729b_disk">
Oct 14 05:23:06 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:23:06 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:23:06 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/8f5e63fb-23c2-4f15-acca-bc5fbeb0729b_disk.config">
Oct 14 05:23:06 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:23:06 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:23:06 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:46:4e:45"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:      <target dev="tap169fcf13-d6"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:23:06 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/8f5e63fb-23c2-4f15-acca-bc5fbeb0729b/console.log" append="off"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <input type="keyboard" bus="usb"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:23:06 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:23:06 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:23:06 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:23:06 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:23:06 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:23:06 np0005486808 nova_compute[259627]: 2025-10-14 09:23:06.857 2 DEBUG nova.compute.manager [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Preparing to wait for external event network-vif-plugged-169fcf13-d616-47ef-8558-362361f16f03 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:23:06 np0005486808 nova_compute[259627]: 2025-10-14 09:23:06.857 2 DEBUG oslo_concurrency.lockutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Acquiring lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:23:06 np0005486808 nova_compute[259627]: 2025-10-14 09:23:06.858 2 DEBUG oslo_concurrency.lockutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:23:06 np0005486808 nova_compute[259627]: 2025-10-14 09:23:06.858 2 DEBUG oslo_concurrency.lockutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:23:06 np0005486808 nova_compute[259627]: 2025-10-14 09:23:06.859 2 DEBUG nova.virt.libvirt.vif [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:22:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1820866852',display_name='tempest-TestSnapshotPattern-server-1820866852',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1820866852',id=121,image_ref='d594f64a-1811-45da-92c9-566107aad012',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG/LFMmap4OB0txXt7JPmRqMoCntTC6tPJ8UMtOaZfJpNHOk397YhzNpC3bp7HY+1PlJdNNwm/PwDw4vAnX0LcroehUvKxClr2ruduakP/QJ599/XDk5NHuriBpOA/llJg==',key_name='tempest-TestSnapshotPattern-1883373955',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4112adc84657452aa0e117ac5999054a',ramdisk_id='',reservation_id='r-z8e26amr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='6810b29b-088f-441b-8a6a-02eaafada0c5',image_min_disk='1',image_min_ram='0',image_owner_id='4112adc84657452aa0e117ac5999054a',image_owner_project_name='tempest-TestSnapshotPattern-70687399',image_owner_user_name='tempest-TestSnapshotPattern-70687399-project-member',image_user_id='f232ab535af04111bf570569aa293116',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-70687399',owner_user_name='tempest-TestSnapshotPattern-70687399-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:23:00Z,user_data=None,user_id='f232ab535af04111bf570569aa293116',uuid=8f5e6
3fb-23c2-4f15-acca-bc5fbeb0729b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "169fcf13-d616-47ef-8558-362361f16f03", "address": "fa:16:3e:46:4e:45", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap169fcf13-d6", "ovs_interfaceid": "169fcf13-d616-47ef-8558-362361f16f03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:23:06 np0005486808 nova_compute[259627]: 2025-10-14 09:23:06.859 2 DEBUG nova.network.os_vif_util [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Converting VIF {"id": "169fcf13-d616-47ef-8558-362361f16f03", "address": "fa:16:3e:46:4e:45", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap169fcf13-d6", "ovs_interfaceid": "169fcf13-d616-47ef-8558-362361f16f03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:23:06 np0005486808 nova_compute[259627]: 2025-10-14 09:23:06.859 2 DEBUG nova.network.os_vif_util [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:4e:45,bridge_name='br-int',has_traffic_filtering=True,id=169fcf13-d616-47ef-8558-362361f16f03,network=Network(4fc37d66-193b-4ab7-80e3-58e26dc76e47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap169fcf13-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:23:06 np0005486808 nova_compute[259627]: 2025-10-14 09:23:06.860 2 DEBUG os_vif [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:4e:45,bridge_name='br-int',has_traffic_filtering=True,id=169fcf13-d616-47ef-8558-362361f16f03,network=Network(4fc37d66-193b-4ab7-80e3-58e26dc76e47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap169fcf13-d6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:23:06 np0005486808 nova_compute[259627]: 2025-10-14 09:23:06.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:06 np0005486808 nova_compute[259627]: 2025-10-14 09:23:06.861 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:23:06 np0005486808 nova_compute[259627]: 2025-10-14 09:23:06.861 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:23:06 np0005486808 nova_compute[259627]: 2025-10-14 09:23:06.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:06 np0005486808 nova_compute[259627]: 2025-10-14 09:23:06.863 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap169fcf13-d6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:23:06 np0005486808 nova_compute[259627]: 2025-10-14 09:23:06.864 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap169fcf13-d6, col_values=(('external_ids', {'iface-id': '169fcf13-d616-47ef-8558-362361f16f03', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:46:4e:45', 'vm-uuid': '8f5e63fb-23c2-4f15-acca-bc5fbeb0729b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:23:06 np0005486808 NetworkManager[44885]: <info>  [1760433786.8660] manager: (tap169fcf13-d6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/525)
Oct 14 05:23:06 np0005486808 nova_compute[259627]: 2025-10-14 09:23:06.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:23:06 np0005486808 nova_compute[259627]: 2025-10-14 09:23:06.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:06 np0005486808 nova_compute[259627]: 2025-10-14 09:23:06.873 2 INFO os_vif [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:4e:45,bridge_name='br-int',has_traffic_filtering=True,id=169fcf13-d616-47ef-8558-362361f16f03,network=Network(4fc37d66-193b-4ab7-80e3-58e26dc76e47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap169fcf13-d6')#033[00m
Oct 14 05:23:06 np0005486808 nova_compute[259627]: 2025-10-14 09:23:06.944 2 DEBUG nova.virt.libvirt.driver [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:23:06 np0005486808 nova_compute[259627]: 2025-10-14 09:23:06.944 2 DEBUG nova.virt.libvirt.driver [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:23:06 np0005486808 nova_compute[259627]: 2025-10-14 09:23:06.945 2 DEBUG nova.virt.libvirt.driver [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] No VIF found with MAC fa:16:3e:46:4e:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:23:06 np0005486808 nova_compute[259627]: 2025-10-14 09:23:06.945 2 INFO nova.virt.libvirt.driver [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Using config drive#033[00m
Oct 14 05:23:06 np0005486808 nova_compute[259627]: 2025-10-14 09:23:06.964 2 DEBUG nova.storage.rbd_utils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] rbd image 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:23:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:07.038 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:23:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:07.038 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:23:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:07.039 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:23:07 np0005486808 nova_compute[259627]: 2025-10-14 09:23:07.500 2 INFO nova.virt.libvirt.driver [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Creating config drive at /var/lib/nova/instances/8f5e63fb-23c2-4f15-acca-bc5fbeb0729b/disk.config#033[00m
Oct 14 05:23:07 np0005486808 nova_compute[259627]: 2025-10-14 09:23:07.504 2 DEBUG oslo_concurrency.processutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8f5e63fb-23c2-4f15-acca-bc5fbeb0729b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2aae2s_0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:23:07 np0005486808 nova_compute[259627]: 2025-10-14 09:23:07.663 2 DEBUG oslo_concurrency.processutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8f5e63fb-23c2-4f15-acca-bc5fbeb0729b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2aae2s_0" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:23:07 np0005486808 nova_compute[259627]: 2025-10-14 09:23:07.694 2 DEBUG nova.storage.rbd_utils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] rbd image 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:23:07 np0005486808 nova_compute[259627]: 2025-10-14 09:23:07.700 2 DEBUG oslo_concurrency.processutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8f5e63fb-23c2-4f15-acca-bc5fbeb0729b/disk.config 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:23:07 np0005486808 nova_compute[259627]: 2025-10-14 09:23:07.739 2 DEBUG nova.network.neutron [req-767f6e60-b3e0-4208-b05f-f59e6e5d8d23 req-863328d4-01f0-40e1-97bf-c1cd52e03a2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Updated VIF entry in instance network info cache for port 169fcf13-d616-47ef-8558-362361f16f03. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:23:07 np0005486808 nova_compute[259627]: 2025-10-14 09:23:07.740 2 DEBUG nova.network.neutron [req-767f6e60-b3e0-4208-b05f-f59e6e5d8d23 req-863328d4-01f0-40e1-97bf-c1cd52e03a2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Updating instance_info_cache with network_info: [{"id": "169fcf13-d616-47ef-8558-362361f16f03", "address": "fa:16:3e:46:4e:45", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap169fcf13-d6", "ovs_interfaceid": "169fcf13-d616-47ef-8558-362361f16f03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:23:07 np0005486808 nova_compute[259627]: 2025-10-14 09:23:07.764 2 DEBUG oslo_concurrency.lockutils [req-767f6e60-b3e0-4208-b05f-f59e6e5d8d23 req-863328d4-01f0-40e1-97bf-c1cd52e03a2f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-8f5e63fb-23c2-4f15-acca-bc5fbeb0729b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:23:07 np0005486808 nova_compute[259627]: 2025-10-14 09:23:07.848 2 DEBUG oslo_concurrency.processutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8f5e63fb-23c2-4f15-acca-bc5fbeb0729b/disk.config 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:23:07 np0005486808 nova_compute[259627]: 2025-10-14 09:23:07.849 2 INFO nova.virt.libvirt.driver [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Deleting local config drive /var/lib/nova/instances/8f5e63fb-23c2-4f15-acca-bc5fbeb0729b/disk.config because it was imported into RBD.#033[00m
Oct 14 05:23:07 np0005486808 kernel: tap169fcf13-d6: entered promiscuous mode
Oct 14 05:23:07 np0005486808 NetworkManager[44885]: <info>  [1760433787.9041] manager: (tap169fcf13-d6): new Tun device (/org/freedesktop/NetworkManager/Devices/526)
Oct 14 05:23:07 np0005486808 ovn_controller[152662]: 2025-10-14T09:23:07Z|01282|binding|INFO|Claiming lport 169fcf13-d616-47ef-8558-362361f16f03 for this chassis.
Oct 14 05:23:07 np0005486808 ovn_controller[152662]: 2025-10-14T09:23:07Z|01283|binding|INFO|169fcf13-d616-47ef-8558-362361f16f03: Claiming fa:16:3e:46:4e:45 10.100.0.4
Oct 14 05:23:07 np0005486808 nova_compute[259627]: 2025-10-14 09:23:07.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:07.914 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:4e:45 10.100.0.4'], port_security=['fa:16:3e:46:4e:45 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '8f5e63fb-23c2-4f15-acca-bc5fbeb0729b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4fc37d66-193b-4ab7-80e3-58e26dc76e47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4112adc84657452aa0e117ac5999054a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b7a53172-9b5e-49ee-bb03-aeca0d4a8fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=add5cdec-6440-4df9-aea8-21659d7bab06, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=169fcf13-d616-47ef-8558-362361f16f03) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:23:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:07.915 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 169fcf13-d616-47ef-8558-362361f16f03 in datapath 4fc37d66-193b-4ab7-80e3-58e26dc76e47 bound to our chassis#033[00m
Oct 14 05:23:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:07.916 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4fc37d66-193b-4ab7-80e3-58e26dc76e47#033[00m
Oct 14 05:23:07 np0005486808 ovn_controller[152662]: 2025-10-14T09:23:07Z|01284|binding|INFO|Setting lport 169fcf13-d616-47ef-8558-362361f16f03 ovn-installed in OVS
Oct 14 05:23:07 np0005486808 ovn_controller[152662]: 2025-10-14T09:23:07Z|01285|binding|INFO|Setting lport 169fcf13-d616-47ef-8558-362361f16f03 up in Southbound
Oct 14 05:23:07 np0005486808 nova_compute[259627]: 2025-10-14 09:23:07.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:07.935 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[711bc90d-676c-4917-8037-2c01301baafc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:07 np0005486808 nova_compute[259627]: 2025-10-14 09:23:07.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:07 np0005486808 systemd-udevd[381143]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:23:07 np0005486808 systemd-machined[214636]: New machine qemu-154-instance-00000079.
Oct 14 05:23:07 np0005486808 NetworkManager[44885]: <info>  [1760433787.9565] device (tap169fcf13-d6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:23:07 np0005486808 NetworkManager[44885]: <info>  [1760433787.9574] device (tap169fcf13-d6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:23:07 np0005486808 systemd[1]: Started Virtual Machine qemu-154-instance-00000079.
Oct 14 05:23:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:23:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:07.967 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ed442531-b1b6-4fd5-b19e-9d9611abcac7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:07.970 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a8926cf4-fae2-463b-810f-e050a1637aed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:08.012 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[54056637-90af-4b4f-a64b-2df38d32788c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:08.032 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7f12a082-5dce-4d38-a507-240929d06595]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4fc37d66-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:00:1e:16'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 361], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 752720, 'reachable_time': 16651, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 381154, 'error': None, 'target': 'ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:08.052 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f46f6522-b263-4559-a27a-886035d74dbb]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4fc37d66-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 752733, 'tstamp': 752733}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 381157, 'error': None, 'target': 'ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4fc37d66-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 752737, 'tstamp': 752737}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 381157, 'error': None, 'target': 'ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:08.054 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fc37d66-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:23:08 np0005486808 nova_compute[259627]: 2025-10-14 09:23:08.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:08 np0005486808 nova_compute[259627]: 2025-10-14 09:23:08.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:08.057 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4fc37d66-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:23:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:08.058 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:23:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:08.058 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4fc37d66-10, col_values=(('external_ids', {'iface-id': '04719e6c-d55b-4ad7-a45c-52e6e59101ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:23:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:08.059 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:23:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2092: 305 pgs: 305 active+clean; 326 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.8 KiB/s wr, 113 op/s
Oct 14 05:23:08 np0005486808 nova_compute[259627]: 2025-10-14 09:23:08.730 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433788.7293375, 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:23:08 np0005486808 nova_compute[259627]: 2025-10-14 09:23:08.732 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] VM Started (Lifecycle Event)#033[00m
Oct 14 05:23:08 np0005486808 nova_compute[259627]: 2025-10-14 09:23:08.760 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:23:08 np0005486808 nova_compute[259627]: 2025-10-14 09:23:08.763 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433788.7307906, 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:23:08 np0005486808 nova_compute[259627]: 2025-10-14 09:23:08.763 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:23:08 np0005486808 nova_compute[259627]: 2025-10-14 09:23:08.788 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:23:08 np0005486808 nova_compute[259627]: 2025-10-14 09:23:08.791 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:23:08 np0005486808 nova_compute[259627]: 2025-10-14 09:23:08.814 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:23:08 np0005486808 nova_compute[259627]: 2025-10-14 09:23:08.930 2 DEBUG nova.compute.manager [req-7f818578-8107-4a38-8261-284b1f30e882 req-7679cecc-667d-41bf-9e90-c7bc5c90b944 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Received event network-vif-plugged-169fcf13-d616-47ef-8558-362361f16f03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:23:08 np0005486808 nova_compute[259627]: 2025-10-14 09:23:08.930 2 DEBUG oslo_concurrency.lockutils [req-7f818578-8107-4a38-8261-284b1f30e882 req-7679cecc-667d-41bf-9e90-c7bc5c90b944 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:23:08 np0005486808 nova_compute[259627]: 2025-10-14 09:23:08.931 2 DEBUG oslo_concurrency.lockutils [req-7f818578-8107-4a38-8261-284b1f30e882 req-7679cecc-667d-41bf-9e90-c7bc5c90b944 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:23:08 np0005486808 nova_compute[259627]: 2025-10-14 09:23:08.931 2 DEBUG oslo_concurrency.lockutils [req-7f818578-8107-4a38-8261-284b1f30e882 req-7679cecc-667d-41bf-9e90-c7bc5c90b944 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:23:08 np0005486808 nova_compute[259627]: 2025-10-14 09:23:08.931 2 DEBUG nova.compute.manager [req-7f818578-8107-4a38-8261-284b1f30e882 req-7679cecc-667d-41bf-9e90-c7bc5c90b944 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Processing event network-vif-plugged-169fcf13-d616-47ef-8558-362361f16f03 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:23:08 np0005486808 nova_compute[259627]: 2025-10-14 09:23:08.932 2 DEBUG nova.compute.manager [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:23:08 np0005486808 nova_compute[259627]: 2025-10-14 09:23:08.936 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433788.935517, 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:23:08 np0005486808 nova_compute[259627]: 2025-10-14 09:23:08.936 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:23:08 np0005486808 nova_compute[259627]: 2025-10-14 09:23:08.938 2 DEBUG nova.virt.libvirt.driver [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:23:08 np0005486808 nova_compute[259627]: 2025-10-14 09:23:08.943 2 INFO nova.virt.libvirt.driver [-] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Instance spawned successfully.#033[00m
Oct 14 05:23:08 np0005486808 nova_compute[259627]: 2025-10-14 09:23:08.944 2 INFO nova.compute.manager [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Took 8.61 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:23:08 np0005486808 nova_compute[259627]: 2025-10-14 09:23:08.944 2 DEBUG nova.compute.manager [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:23:08 np0005486808 nova_compute[259627]: 2025-10-14 09:23:08.984 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:23:08 np0005486808 nova_compute[259627]: 2025-10-14 09:23:08.989 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:23:09 np0005486808 nova_compute[259627]: 2025-10-14 09:23:09.027 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:23:09 np0005486808 nova_compute[259627]: 2025-10-14 09:23:09.034 2 INFO nova.compute.manager [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Took 9.91 seconds to build instance.#033[00m
Oct 14 05:23:09 np0005486808 nova_compute[259627]: 2025-10-14 09:23:09.059 2 DEBUG oslo_concurrency.lockutils [None req-fd1af243-6c42-45ac-a843-c27554fe1998 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.018s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:23:09 np0005486808 nova_compute[259627]: 2025-10-14 09:23:09.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:09.593 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:23:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2093: 305 pgs: 305 active+clean; 326 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.4 KiB/s wr, 113 op/s
Oct 14 05:23:11 np0005486808 nova_compute[259627]: 2025-10-14 09:23:11.049 2 DEBUG nova.compute.manager [req-ba65a57e-54ae-4183-98dc-c9ca4d408c42 req-2214bf05-c643-4d68-b589-8b53ae7903da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Received event network-vif-plugged-169fcf13-d616-47ef-8558-362361f16f03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:23:11 np0005486808 nova_compute[259627]: 2025-10-14 09:23:11.050 2 DEBUG oslo_concurrency.lockutils [req-ba65a57e-54ae-4183-98dc-c9ca4d408c42 req-2214bf05-c643-4d68-b589-8b53ae7903da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:23:11 np0005486808 nova_compute[259627]: 2025-10-14 09:23:11.050 2 DEBUG oslo_concurrency.lockutils [req-ba65a57e-54ae-4183-98dc-c9ca4d408c42 req-2214bf05-c643-4d68-b589-8b53ae7903da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:23:11 np0005486808 nova_compute[259627]: 2025-10-14 09:23:11.050 2 DEBUG oslo_concurrency.lockutils [req-ba65a57e-54ae-4183-98dc-c9ca4d408c42 req-2214bf05-c643-4d68-b589-8b53ae7903da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:23:11 np0005486808 nova_compute[259627]: 2025-10-14 09:23:11.051 2 DEBUG nova.compute.manager [req-ba65a57e-54ae-4183-98dc-c9ca4d408c42 req-2214bf05-c643-4d68-b589-8b53ae7903da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] No waiting events found dispatching network-vif-plugged-169fcf13-d616-47ef-8558-362361f16f03 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:23:11 np0005486808 nova_compute[259627]: 2025-10-14 09:23:11.051 2 WARNING nova.compute.manager [req-ba65a57e-54ae-4183-98dc-c9ca4d408c42 req-2214bf05-c643-4d68-b589-8b53ae7903da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Received unexpected event network-vif-plugged-169fcf13-d616-47ef-8558-362361f16f03 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:23:11 np0005486808 nova_compute[259627]: 2025-10-14 09:23:11.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:12 np0005486808 ovn_controller[152662]: 2025-10-14T09:23:12Z|00135|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b4:88:3d 10.100.0.22
Oct 14 05:23:12 np0005486808 ovn_controller[152662]: 2025-10-14T09:23:12Z|00136|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b4:88:3d 10.100.0.22
Oct 14 05:23:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2094: 305 pgs: 305 active+clean; 335 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 740 KiB/s wr, 129 op/s
Oct 14 05:23:12 np0005486808 nova_compute[259627]: 2025-10-14 09:23:12.405 2 DEBUG nova.compute.manager [req-57599e62-e81f-4e8d-8d59-a6e2b04d5692 req-193c100a-90b9-479d-bf49-09b9e454e5e3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Received event network-changed-169fcf13-d616-47ef-8558-362361f16f03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:23:12 np0005486808 nova_compute[259627]: 2025-10-14 09:23:12.406 2 DEBUG nova.compute.manager [req-57599e62-e81f-4e8d-8d59-a6e2b04d5692 req-193c100a-90b9-479d-bf49-09b9e454e5e3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Refreshing instance network info cache due to event network-changed-169fcf13-d616-47ef-8558-362361f16f03. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:23:12 np0005486808 nova_compute[259627]: 2025-10-14 09:23:12.407 2 DEBUG oslo_concurrency.lockutils [req-57599e62-e81f-4e8d-8d59-a6e2b04d5692 req-193c100a-90b9-479d-bf49-09b9e454e5e3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-8f5e63fb-23c2-4f15-acca-bc5fbeb0729b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:23:12 np0005486808 nova_compute[259627]: 2025-10-14 09:23:12.407 2 DEBUG oslo_concurrency.lockutils [req-57599e62-e81f-4e8d-8d59-a6e2b04d5692 req-193c100a-90b9-479d-bf49-09b9e454e5e3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-8f5e63fb-23c2-4f15-acca-bc5fbeb0729b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:23:12 np0005486808 nova_compute[259627]: 2025-10-14 09:23:12.408 2 DEBUG nova.network.neutron [req-57599e62-e81f-4e8d-8d59-a6e2b04d5692 req-193c100a-90b9-479d-bf49-09b9e454e5e3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Refreshing network info cache for port 169fcf13-d616-47ef-8558-362361f16f03 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:23:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:23:13 np0005486808 nova_compute[259627]: 2025-10-14 09:23:13.713 2 DEBUG oslo_concurrency.lockutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Acquiring lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:23:13 np0005486808 nova_compute[259627]: 2025-10-14 09:23:13.713 2 DEBUG oslo_concurrency.lockutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:23:13 np0005486808 nova_compute[259627]: 2025-10-14 09:23:13.737 2 DEBUG nova.compute.manager [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:23:13 np0005486808 nova_compute[259627]: 2025-10-14 09:23:13.808 2 DEBUG oslo_concurrency.lockutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:23:13 np0005486808 nova_compute[259627]: 2025-10-14 09:23:13.809 2 DEBUG oslo_concurrency.lockutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:23:13 np0005486808 nova_compute[259627]: 2025-10-14 09:23:13.817 2 DEBUG nova.virt.hardware [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:23:13 np0005486808 nova_compute[259627]: 2025-10-14 09:23:13.818 2 INFO nova.compute.claims [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:23:13 np0005486808 nova_compute[259627]: 2025-10-14 09:23:13.964 2 DEBUG nova.network.neutron [req-57599e62-e81f-4e8d-8d59-a6e2b04d5692 req-193c100a-90b9-479d-bf49-09b9e454e5e3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Updated VIF entry in instance network info cache for port 169fcf13-d616-47ef-8558-362361f16f03. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:23:13 np0005486808 nova_compute[259627]: 2025-10-14 09:23:13.965 2 DEBUG nova.network.neutron [req-57599e62-e81f-4e8d-8d59-a6e2b04d5692 req-193c100a-90b9-479d-bf49-09b9e454e5e3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Updating instance_info_cache with network_info: [{"id": "169fcf13-d616-47ef-8558-362361f16f03", "address": "fa:16:3e:46:4e:45", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap169fcf13-d6", "ovs_interfaceid": "169fcf13-d616-47ef-8558-362361f16f03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:23:13 np0005486808 nova_compute[259627]: 2025-10-14 09:23:13.988 2 DEBUG oslo_concurrency.lockutils [req-57599e62-e81f-4e8d-8d59-a6e2b04d5692 req-193c100a-90b9-479d-bf49-09b9e454e5e3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-8f5e63fb-23c2-4f15-acca-bc5fbeb0729b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:23:13 np0005486808 nova_compute[259627]: 2025-10-14 09:23:13.997 2 DEBUG oslo_concurrency.processutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:23:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2095: 305 pgs: 305 active+clean; 335 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 739 KiB/s wr, 94 op/s
Oct 14 05:23:14 np0005486808 nova_compute[259627]: 2025-10-14 09:23:14.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:14 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:23:14 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3512835621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:23:14 np0005486808 nova_compute[259627]: 2025-10-14 09:23:14.461 2 DEBUG oslo_concurrency.processutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:23:14 np0005486808 nova_compute[259627]: 2025-10-14 09:23:14.467 2 DEBUG nova.compute.provider_tree [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:23:14 np0005486808 nova_compute[259627]: 2025-10-14 09:23:14.488 2 DEBUG nova.scheduler.client.report [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:23:14 np0005486808 nova_compute[259627]: 2025-10-14 09:23:14.514 2 DEBUG oslo_concurrency.lockutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:23:14 np0005486808 nova_compute[259627]: 2025-10-14 09:23:14.515 2 DEBUG nova.compute.manager [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:23:14 np0005486808 nova_compute[259627]: 2025-10-14 09:23:14.587 2 DEBUG nova.compute.manager [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:23:14 np0005486808 nova_compute[259627]: 2025-10-14 09:23:14.589 2 DEBUG nova.network.neutron [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:23:14 np0005486808 nova_compute[259627]: 2025-10-14 09:23:14.609 2 INFO nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:23:14 np0005486808 nova_compute[259627]: 2025-10-14 09:23:14.633 2 DEBUG nova.compute.manager [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:23:14 np0005486808 podman[381223]: 2025-10-14 09:23:14.669948814 +0000 UTC m=+0.068690279 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:23:14 np0005486808 podman[381222]: 2025-10-14 09:23:14.751214903 +0000 UTC m=+0.158588940 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 14 05:23:14 np0005486808 nova_compute[259627]: 2025-10-14 09:23:14.778 2 DEBUG nova.compute.manager [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:23:14 np0005486808 nova_compute[259627]: 2025-10-14 09:23:14.780 2 DEBUG nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:23:14 np0005486808 nova_compute[259627]: 2025-10-14 09:23:14.780 2 INFO nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Creating image(s)#033[00m
Oct 14 05:23:14 np0005486808 nova_compute[259627]: 2025-10-14 09:23:14.802 2 DEBUG nova.storage.rbd_utils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] rbd image 060db45d-e2f9-4bf6-bcc0-c72e479bfae1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:23:14 np0005486808 nova_compute[259627]: 2025-10-14 09:23:14.824 2 DEBUG nova.storage.rbd_utils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] rbd image 060db45d-e2f9-4bf6-bcc0-c72e479bfae1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:23:14 np0005486808 nova_compute[259627]: 2025-10-14 09:23:14.846 2 DEBUG nova.storage.rbd_utils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] rbd image 060db45d-e2f9-4bf6-bcc0-c72e479bfae1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:23:14 np0005486808 nova_compute[259627]: 2025-10-14 09:23:14.852 2 DEBUG oslo_concurrency.processutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:23:14 np0005486808 nova_compute[259627]: 2025-10-14 09:23:14.894 2 DEBUG nova.policy [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c3638538fa6347dc95b6e30735bf0e83', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9afc0bee75634a4cb284babbfba8d601', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:23:14 np0005486808 nova_compute[259627]: 2025-10-14 09:23:14.933 2 DEBUG oslo_concurrency.processutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:23:14 np0005486808 nova_compute[259627]: 2025-10-14 09:23:14.934 2 DEBUG oslo_concurrency.lockutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:23:14 np0005486808 nova_compute[259627]: 2025-10-14 09:23:14.935 2 DEBUG oslo_concurrency.lockutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:23:14 np0005486808 nova_compute[259627]: 2025-10-14 09:23:14.935 2 DEBUG oslo_concurrency.lockutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:23:14 np0005486808 nova_compute[259627]: 2025-10-14 09:23:14.958 2 DEBUG nova.storage.rbd_utils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] rbd image 060db45d-e2f9-4bf6-bcc0-c72e479bfae1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:23:14 np0005486808 nova_compute[259627]: 2025-10-14 09:23:14.962 2 DEBUG oslo_concurrency.processutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 060db45d-e2f9-4bf6-bcc0-c72e479bfae1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:23:15 np0005486808 nova_compute[259627]: 2025-10-14 09:23:15.227 2 DEBUG oslo_concurrency.processutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 060db45d-e2f9-4bf6-bcc0-c72e479bfae1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.264s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:23:15 np0005486808 nova_compute[259627]: 2025-10-14 09:23:15.289 2 DEBUG nova.storage.rbd_utils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] resizing rbd image 060db45d-e2f9-4bf6-bcc0-c72e479bfae1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:23:15 np0005486808 nova_compute[259627]: 2025-10-14 09:23:15.401 2 DEBUG nova.objects.instance [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Lazy-loading 'migration_context' on Instance uuid 060db45d-e2f9-4bf6-bcc0-c72e479bfae1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:23:15 np0005486808 nova_compute[259627]: 2025-10-14 09:23:15.418 2 DEBUG nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:23:15 np0005486808 nova_compute[259627]: 2025-10-14 09:23:15.419 2 DEBUG nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Ensure instance console log exists: /var/lib/nova/instances/060db45d-e2f9-4bf6-bcc0-c72e479bfae1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:23:15 np0005486808 nova_compute[259627]: 2025-10-14 09:23:15.419 2 DEBUG oslo_concurrency.lockutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:23:15 np0005486808 nova_compute[259627]: 2025-10-14 09:23:15.419 2 DEBUG oslo_concurrency.lockutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:23:15 np0005486808 nova_compute[259627]: 2025-10-14 09:23:15.420 2 DEBUG oslo_concurrency.lockutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:23:16 np0005486808 nova_compute[259627]: 2025-10-14 09:23:16.156 2 DEBUG nova.network.neutron [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Successfully created port: aff3a259-5908-4491-83a2-9aa0430d46e0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:23:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2096: 305 pgs: 305 active+clean; 389 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.2 MiB/s wr, 189 op/s
Oct 14 05:23:16 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:16.709 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:ef:a5 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-9f430dff-3f6d-47bc-b9b6-a9119c33360a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f430dff-3f6d-47bc-b9b6-a9119c33360a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f46887bb96d483788cde4e37af9f799', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d988bdc4-1ce3-4ffd-8a2d-6d82ccf0df6c, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=285c7ed0-64cf-4192-b665-658fbdfac746) old=Port_Binding(mac=['fa:16:3e:6c:ef:a5 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-9f430dff-3f6d-47bc-b9b6-a9119c33360a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f430dff-3f6d-47bc-b9b6-a9119c33360a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f46887bb96d483788cde4e37af9f799', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:23:16 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:16.712 162547 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 285c7ed0-64cf-4192-b665-658fbdfac746 in datapath 9f430dff-3f6d-47bc-b9b6-a9119c33360a updated#033[00m
Oct 14 05:23:16 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:16.716 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9f430dff-3f6d-47bc-b9b6-a9119c33360a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:23:16 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:16.717 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3bc7a8f3-b316-4cab-a1dd-60bc4571c5af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:16 np0005486808 nova_compute[259627]: 2025-10-14 09:23:16.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:17 np0005486808 nova_compute[259627]: 2025-10-14 09:23:17.418 2 DEBUG nova.network.neutron [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Successfully updated port: aff3a259-5908-4491-83a2-9aa0430d46e0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:23:17 np0005486808 nova_compute[259627]: 2025-10-14 09:23:17.436 2 DEBUG oslo_concurrency.lockutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Acquiring lock "refresh_cache-060db45d-e2f9-4bf6-bcc0-c72e479bfae1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:23:17 np0005486808 nova_compute[259627]: 2025-10-14 09:23:17.437 2 DEBUG oslo_concurrency.lockutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Acquired lock "refresh_cache-060db45d-e2f9-4bf6-bcc0-c72e479bfae1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:23:17 np0005486808 nova_compute[259627]: 2025-10-14 09:23:17.437 2 DEBUG nova.network.neutron [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:23:17 np0005486808 nova_compute[259627]: 2025-10-14 09:23:17.560 2 DEBUG nova.compute.manager [req-724cc8e1-7edd-43e6-8ca2-7f2e691c8e30 req-b5a4f2b8-dafe-42ee-84c9-ec4f4c2ef9e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Received event network-changed-aff3a259-5908-4491-83a2-9aa0430d46e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:23:17 np0005486808 nova_compute[259627]: 2025-10-14 09:23:17.561 2 DEBUG nova.compute.manager [req-724cc8e1-7edd-43e6-8ca2-7f2e691c8e30 req-b5a4f2b8-dafe-42ee-84c9-ec4f4c2ef9e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Refreshing instance network info cache due to event network-changed-aff3a259-5908-4491-83a2-9aa0430d46e0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:23:17 np0005486808 nova_compute[259627]: 2025-10-14 09:23:17.562 2 DEBUG oslo_concurrency.lockutils [req-724cc8e1-7edd-43e6-8ca2-7f2e691c8e30 req-b5a4f2b8-dafe-42ee-84c9-ec4f4c2ef9e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-060db45d-e2f9-4bf6-bcc0-c72e479bfae1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:23:17 np0005486808 nova_compute[259627]: 2025-10-14 09:23:17.615 2 DEBUG nova.network.neutron [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:23:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:23:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2097: 305 pgs: 305 active+clean; 389 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.2 MiB/s wr, 158 op/s
Oct 14 05:23:18 np0005486808 nova_compute[259627]: 2025-10-14 09:23:18.758 2 DEBUG nova.network.neutron [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Updating instance_info_cache with network_info: [{"id": "aff3a259-5908-4491-83a2-9aa0430d46e0", "address": "fa:16:3e:56:31:2d", "network": {"id": "9ed30564-d15d-43a5-8374-94d3c4f31dec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1620506525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afc0bee75634a4cb284babbfba8d601", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaff3a259-59", "ovs_interfaceid": "aff3a259-5908-4491-83a2-9aa0430d46e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:23:18 np0005486808 nova_compute[259627]: 2025-10-14 09:23:18.777 2 DEBUG oslo_concurrency.lockutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Releasing lock "refresh_cache-060db45d-e2f9-4bf6-bcc0-c72e479bfae1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:23:18 np0005486808 nova_compute[259627]: 2025-10-14 09:23:18.777 2 DEBUG nova.compute.manager [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Instance network_info: |[{"id": "aff3a259-5908-4491-83a2-9aa0430d46e0", "address": "fa:16:3e:56:31:2d", "network": {"id": "9ed30564-d15d-43a5-8374-94d3c4f31dec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1620506525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afc0bee75634a4cb284babbfba8d601", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaff3a259-59", "ovs_interfaceid": "aff3a259-5908-4491-83a2-9aa0430d46e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:23:18 np0005486808 nova_compute[259627]: 2025-10-14 09:23:18.779 2 DEBUG oslo_concurrency.lockutils [req-724cc8e1-7edd-43e6-8ca2-7f2e691c8e30 req-b5a4f2b8-dafe-42ee-84c9-ec4f4c2ef9e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-060db45d-e2f9-4bf6-bcc0-c72e479bfae1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:23:18 np0005486808 nova_compute[259627]: 2025-10-14 09:23:18.779 2 DEBUG nova.network.neutron [req-724cc8e1-7edd-43e6-8ca2-7f2e691c8e30 req-b5a4f2b8-dafe-42ee-84c9-ec4f4c2ef9e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Refreshing network info cache for port aff3a259-5908-4491-83a2-9aa0430d46e0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:23:18 np0005486808 nova_compute[259627]: 2025-10-14 09:23:18.786 2 DEBUG nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Start _get_guest_xml network_info=[{"id": "aff3a259-5908-4491-83a2-9aa0430d46e0", "address": "fa:16:3e:56:31:2d", "network": {"id": "9ed30564-d15d-43a5-8374-94d3c4f31dec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1620506525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afc0bee75634a4cb284babbfba8d601", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaff3a259-59", "ovs_interfaceid": "aff3a259-5908-4491-83a2-9aa0430d46e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:23:18 np0005486808 nova_compute[259627]: 2025-10-14 09:23:18.795 2 WARNING nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:23:18 np0005486808 nova_compute[259627]: 2025-10-14 09:23:18.807 2 DEBUG nova.virt.libvirt.host [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:23:18 np0005486808 nova_compute[259627]: 2025-10-14 09:23:18.808 2 DEBUG nova.virt.libvirt.host [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:23:18 np0005486808 nova_compute[259627]: 2025-10-14 09:23:18.814 2 DEBUG nova.virt.libvirt.host [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:23:18 np0005486808 nova_compute[259627]: 2025-10-14 09:23:18.814 2 DEBUG nova.virt.libvirt.host [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:23:18 np0005486808 nova_compute[259627]: 2025-10-14 09:23:18.815 2 DEBUG nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:23:18 np0005486808 nova_compute[259627]: 2025-10-14 09:23:18.816 2 DEBUG nova.virt.hardware [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:23:18 np0005486808 nova_compute[259627]: 2025-10-14 09:23:18.817 2 DEBUG nova.virt.hardware [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:23:18 np0005486808 nova_compute[259627]: 2025-10-14 09:23:18.817 2 DEBUG nova.virt.hardware [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:23:18 np0005486808 nova_compute[259627]: 2025-10-14 09:23:18.818 2 DEBUG nova.virt.hardware [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:23:18 np0005486808 nova_compute[259627]: 2025-10-14 09:23:18.818 2 DEBUG nova.virt.hardware [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:23:18 np0005486808 nova_compute[259627]: 2025-10-14 09:23:18.819 2 DEBUG nova.virt.hardware [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:23:18 np0005486808 nova_compute[259627]: 2025-10-14 09:23:18.819 2 DEBUG nova.virt.hardware [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:23:18 np0005486808 nova_compute[259627]: 2025-10-14 09:23:18.820 2 DEBUG nova.virt.hardware [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:23:18 np0005486808 nova_compute[259627]: 2025-10-14 09:23:18.820 2 DEBUG nova.virt.hardware [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:23:18 np0005486808 nova_compute[259627]: 2025-10-14 09:23:18.821 2 DEBUG nova.virt.hardware [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:23:18 np0005486808 nova_compute[259627]: 2025-10-14 09:23:18.821 2 DEBUG nova.virt.hardware [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:23:18 np0005486808 nova_compute[259627]: 2025-10-14 09:23:18.827 2 DEBUG oslo_concurrency.processutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:23:19 np0005486808 nova_compute[259627]: 2025-10-14 09:23:19.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:19 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:23:19 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/664240466' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:23:19 np0005486808 nova_compute[259627]: 2025-10-14 09:23:19.392 2 DEBUG oslo_concurrency.processutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:23:19 np0005486808 nova_compute[259627]: 2025-10-14 09:23:19.415 2 DEBUG nova.storage.rbd_utils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] rbd image 060db45d-e2f9-4bf6-bcc0-c72e479bfae1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:23:19 np0005486808 nova_compute[259627]: 2025-10-14 09:23:19.419 2 DEBUG oslo_concurrency.processutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:23:19 np0005486808 nova_compute[259627]: 2025-10-14 09:23:19.757 2 DEBUG nova.compute.manager [req-d08dae91-cb76-4d03-a6c6-c82f137fdef7 req-aac826a4-272e-4316-843d-e919b0ac7459 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Received event network-changed-ff2b9b74-a6fc-4774-89d2-9c010f121d65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:23:19 np0005486808 nova_compute[259627]: 2025-10-14 09:23:19.757 2 DEBUG nova.compute.manager [req-d08dae91-cb76-4d03-a6c6-c82f137fdef7 req-aac826a4-272e-4316-843d-e919b0ac7459 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Refreshing instance network info cache due to event network-changed-ff2b9b74-a6fc-4774-89d2-9c010f121d65. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:23:19 np0005486808 nova_compute[259627]: 2025-10-14 09:23:19.758 2 DEBUG oslo_concurrency.lockutils [req-d08dae91-cb76-4d03-a6c6-c82f137fdef7 req-aac826a4-272e-4316-843d-e919b0ac7459 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:23:19 np0005486808 nova_compute[259627]: 2025-10-14 09:23:19.758 2 DEBUG oslo_concurrency.lockutils [req-d08dae91-cb76-4d03-a6c6-c82f137fdef7 req-aac826a4-272e-4316-843d-e919b0ac7459 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:23:19 np0005486808 nova_compute[259627]: 2025-10-14 09:23:19.759 2 DEBUG nova.network.neutron [req-d08dae91-cb76-4d03-a6c6-c82f137fdef7 req-aac826a4-272e-4316-843d-e919b0ac7459 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Refreshing network info cache for port ff2b9b74-a6fc-4774-89d2-9c010f121d65 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:23:19 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:23:19 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4128251632' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:23:19 np0005486808 nova_compute[259627]: 2025-10-14 09:23:19.884 2 DEBUG oslo_concurrency.processutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:23:19 np0005486808 nova_compute[259627]: 2025-10-14 09:23:19.885 2 DEBUG nova.virt.libvirt.vif [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:23:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-769140514',display_name='tempest-TestServerBasicOps-server-769140514',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-769140514',id=122,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEF69xZ4MwycAnhRnKcfi5zKJsp1M1x2Cnuyy56sKit+Vi9Xj59LaAw8ViNKigCRTmbkHi0zC7jNFwbXf3v+6JnEKBlDXNW4AxSHy39++Bl7O4v2nk74xukLS+FgnBMAYQ==',key_name='tempest-TestServerBasicOps-2077025303',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9afc0bee75634a4cb284babbfba8d601',ramdisk_id='',reservation_id='r-lxozf8j0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1877578104',owner_user_name='tempest-TestServerBasicOps-1877578104-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:23:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c3638538fa6347dc95b6e30735bf0e83',uuid=060db45d-e2f9-4bf6-bcc0-c72e479bfae1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aff3a259-5908-4491-83a2-9aa0430d46e0", "address": "fa:16:3e:56:31:2d", "network": {"id": "9ed30564-d15d-43a5-8374-94d3c4f31dec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1620506525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afc0bee75634a4cb284babbfba8d601", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaff3a259-59", "ovs_interfaceid": "aff3a259-5908-4491-83a2-9aa0430d46e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:23:19 np0005486808 nova_compute[259627]: 2025-10-14 09:23:19.886 2 DEBUG nova.network.os_vif_util [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Converting VIF {"id": "aff3a259-5908-4491-83a2-9aa0430d46e0", "address": "fa:16:3e:56:31:2d", "network": {"id": "9ed30564-d15d-43a5-8374-94d3c4f31dec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1620506525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afc0bee75634a4cb284babbfba8d601", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaff3a259-59", "ovs_interfaceid": "aff3a259-5908-4491-83a2-9aa0430d46e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:23:19 np0005486808 nova_compute[259627]: 2025-10-14 09:23:19.887 2 DEBUG nova.network.os_vif_util [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:31:2d,bridge_name='br-int',has_traffic_filtering=True,id=aff3a259-5908-4491-83a2-9aa0430d46e0,network=Network(9ed30564-d15d-43a5-8374-94d3c4f31dec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaff3a259-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:23:19 np0005486808 nova_compute[259627]: 2025-10-14 09:23:19.888 2 DEBUG nova.objects.instance [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Lazy-loading 'pci_devices' on Instance uuid 060db45d-e2f9-4bf6-bcc0-c72e479bfae1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:23:19 np0005486808 nova_compute[259627]: 2025-10-14 09:23:19.905 2 DEBUG nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:23:19 np0005486808 nova_compute[259627]:  <uuid>060db45d-e2f9-4bf6-bcc0-c72e479bfae1</uuid>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:  <name>instance-0000007a</name>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:23:19 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:      <nova:name>tempest-TestServerBasicOps-server-769140514</nova:name>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:23:18</nova:creationTime>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:23:19 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:        <nova:user uuid="c3638538fa6347dc95b6e30735bf0e83">tempest-TestServerBasicOps-1877578104-project-member</nova:user>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:        <nova:project uuid="9afc0bee75634a4cb284babbfba8d601">tempest-TestServerBasicOps-1877578104</nova:project>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:        <nova:port uuid="aff3a259-5908-4491-83a2-9aa0430d46e0">
Oct 14 05:23:19 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:      <entry name="serial">060db45d-e2f9-4bf6-bcc0-c72e479bfae1</entry>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:      <entry name="uuid">060db45d-e2f9-4bf6-bcc0-c72e479bfae1</entry>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:23:19 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/060db45d-e2f9-4bf6-bcc0-c72e479bfae1_disk">
Oct 14 05:23:19 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:23:19 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:23:19 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/060db45d-e2f9-4bf6-bcc0-c72e479bfae1_disk.config">
Oct 14 05:23:19 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:23:19 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:23:19 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:56:31:2d"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:      <target dev="tapaff3a259-59"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:23:19 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/060db45d-e2f9-4bf6-bcc0-c72e479bfae1/console.log" append="off"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:23:19 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:23:19 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:23:19 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:23:19 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:23:19 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:23:19 np0005486808 nova_compute[259627]: 2025-10-14 09:23:19.907 2 DEBUG nova.compute.manager [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Preparing to wait for external event network-vif-plugged-aff3a259-5908-4491-83a2-9aa0430d46e0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:23:19 np0005486808 nova_compute[259627]: 2025-10-14 09:23:19.907 2 DEBUG oslo_concurrency.lockutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Acquiring lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:23:19 np0005486808 nova_compute[259627]: 2025-10-14 09:23:19.907 2 DEBUG oslo_concurrency.lockutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:23:19 np0005486808 nova_compute[259627]: 2025-10-14 09:23:19.908 2 DEBUG oslo_concurrency.lockutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:23:19 np0005486808 nova_compute[259627]: 2025-10-14 09:23:19.908 2 DEBUG nova.virt.libvirt.vif [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:23:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-769140514',display_name='tempest-TestServerBasicOps-server-769140514',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-769140514',id=122,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEF69xZ4MwycAnhRnKcfi5zKJsp1M1x2Cnuyy56sKit+Vi9Xj59LaAw8ViNKigCRTmbkHi0zC7jNFwbXf3v+6JnEKBlDXNW4AxSHy39++Bl7O4v2nk74xukLS+FgnBMAYQ==',key_name='tempest-TestServerBasicOps-2077025303',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9afc0bee75634a4cb284babbfba8d601',ramdisk_id='',reservation_id='r-lxozf8j0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1877578104',owner_user_name='tempest-TestServerBasicOps-1877578104-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:23:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c3638538fa6347dc95b6e30735bf0e83',uuid=060db45d-e2f9-4bf6-bcc0-c72e479bfae1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aff3a259-5908-4491-83a2-9aa0430d46e0", "address": "fa:16:3e:56:31:2d", "network": {"id": "9ed30564-d15d-43a5-8374-94d3c4f31dec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1620506525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, 
"meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afc0bee75634a4cb284babbfba8d601", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaff3a259-59", "ovs_interfaceid": "aff3a259-5908-4491-83a2-9aa0430d46e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:23:19 np0005486808 nova_compute[259627]: 2025-10-14 09:23:19.909 2 DEBUG nova.network.os_vif_util [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Converting VIF {"id": "aff3a259-5908-4491-83a2-9aa0430d46e0", "address": "fa:16:3e:56:31:2d", "network": {"id": "9ed30564-d15d-43a5-8374-94d3c4f31dec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1620506525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afc0bee75634a4cb284babbfba8d601", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaff3a259-59", "ovs_interfaceid": "aff3a259-5908-4491-83a2-9aa0430d46e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:23:19 np0005486808 nova_compute[259627]: 2025-10-14 09:23:19.909 2 DEBUG nova.network.os_vif_util [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:31:2d,bridge_name='br-int',has_traffic_filtering=True,id=aff3a259-5908-4491-83a2-9aa0430d46e0,network=Network(9ed30564-d15d-43a5-8374-94d3c4f31dec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaff3a259-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:23:19 np0005486808 nova_compute[259627]: 2025-10-14 09:23:19.910 2 DEBUG os_vif [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:31:2d,bridge_name='br-int',has_traffic_filtering=True,id=aff3a259-5908-4491-83a2-9aa0430d46e0,network=Network(9ed30564-d15d-43a5-8374-94d3c4f31dec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaff3a259-59') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:23:19 np0005486808 nova_compute[259627]: 2025-10-14 09:23:19.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:19 np0005486808 nova_compute[259627]: 2025-10-14 09:23:19.911 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:23:19 np0005486808 nova_compute[259627]: 2025-10-14 09:23:19.912 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:23:19 np0005486808 nova_compute[259627]: 2025-10-14 09:23:19.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:19 np0005486808 nova_compute[259627]: 2025-10-14 09:23:19.915 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaff3a259-59, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:23:19 np0005486808 nova_compute[259627]: 2025-10-14 09:23:19.915 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapaff3a259-59, col_values=(('external_ids', {'iface-id': 'aff3a259-5908-4491-83a2-9aa0430d46e0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:56:31:2d', 'vm-uuid': '060db45d-e2f9-4bf6-bcc0-c72e479bfae1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:23:19 np0005486808 nova_compute[259627]: 2025-10-14 09:23:19.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:19 np0005486808 nova_compute[259627]: 2025-10-14 09:23:19.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:23:19 np0005486808 NetworkManager[44885]: <info>  [1760433799.9216] manager: (tapaff3a259-59): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/527)
Oct 14 05:23:19 np0005486808 nova_compute[259627]: 2025-10-14 09:23:19.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:19 np0005486808 nova_compute[259627]: 2025-10-14 09:23:19.925 2 INFO os_vif [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:31:2d,bridge_name='br-int',has_traffic_filtering=True,id=aff3a259-5908-4491-83a2-9aa0430d46e0,network=Network(9ed30564-d15d-43a5-8374-94d3c4f31dec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaff3a259-59')#033[00m
Oct 14 05:23:20 np0005486808 nova_compute[259627]: 2025-10-14 09:23:20.015 2 DEBUG nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:23:20 np0005486808 nova_compute[259627]: 2025-10-14 09:23:20.015 2 DEBUG nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:23:20 np0005486808 nova_compute[259627]: 2025-10-14 09:23:20.016 2 DEBUG nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] No VIF found with MAC fa:16:3e:56:31:2d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:23:20 np0005486808 nova_compute[259627]: 2025-10-14 09:23:20.016 2 INFO nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Using config drive#033[00m
Oct 14 05:23:20 np0005486808 nova_compute[259627]: 2025-10-14 09:23:20.040 2 DEBUG nova.storage.rbd_utils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] rbd image 060db45d-e2f9-4bf6-bcc0-c72e479bfae1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:23:20 np0005486808 nova_compute[259627]: 2025-10-14 09:23:20.075 2 DEBUG nova.network.neutron [req-724cc8e1-7edd-43e6-8ca2-7f2e691c8e30 req-b5a4f2b8-dafe-42ee-84c9-ec4f4c2ef9e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Updated VIF entry in instance network info cache for port aff3a259-5908-4491-83a2-9aa0430d46e0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:23:20 np0005486808 nova_compute[259627]: 2025-10-14 09:23:20.075 2 DEBUG nova.network.neutron [req-724cc8e1-7edd-43e6-8ca2-7f2e691c8e30 req-b5a4f2b8-dafe-42ee-84c9-ec4f4c2ef9e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Updating instance_info_cache with network_info: [{"id": "aff3a259-5908-4491-83a2-9aa0430d46e0", "address": "fa:16:3e:56:31:2d", "network": {"id": "9ed30564-d15d-43a5-8374-94d3c4f31dec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1620506525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afc0bee75634a4cb284babbfba8d601", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaff3a259-59", "ovs_interfaceid": "aff3a259-5908-4491-83a2-9aa0430d46e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:23:20 np0005486808 nova_compute[259627]: 2025-10-14 09:23:20.096 2 DEBUG oslo_concurrency.lockutils [req-724cc8e1-7edd-43e6-8ca2-7f2e691c8e30 req-b5a4f2b8-dafe-42ee-84c9-ec4f4c2ef9e2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-060db45d-e2f9-4bf6-bcc0-c72e479bfae1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:23:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2098: 305 pgs: 305 active+clean; 405 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 168 op/s
Oct 14 05:23:20 np0005486808 nova_compute[259627]: 2025-10-14 09:23:20.566 2 INFO nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Creating config drive at /var/lib/nova/instances/060db45d-e2f9-4bf6-bcc0-c72e479bfae1/disk.config#033[00m
Oct 14 05:23:20 np0005486808 nova_compute[259627]: 2025-10-14 09:23:20.572 2 DEBUG oslo_concurrency.processutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/060db45d-e2f9-4bf6-bcc0-c72e479bfae1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8jh12e4u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:23:20 np0005486808 nova_compute[259627]: 2025-10-14 09:23:20.719 2 DEBUG oslo_concurrency.processutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/060db45d-e2f9-4bf6-bcc0-c72e479bfae1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8jh12e4u" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:23:20 np0005486808 nova_compute[259627]: 2025-10-14 09:23:20.744 2 DEBUG nova.storage.rbd_utils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] rbd image 060db45d-e2f9-4bf6-bcc0-c72e479bfae1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:23:20 np0005486808 nova_compute[259627]: 2025-10-14 09:23:20.747 2 DEBUG oslo_concurrency.processutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/060db45d-e2f9-4bf6-bcc0-c72e479bfae1/disk.config 060db45d-e2f9-4bf6-bcc0-c72e479bfae1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:23:21 np0005486808 nova_compute[259627]: 2025-10-14 09:23:21.265 2 DEBUG nova.network.neutron [req-d08dae91-cb76-4d03-a6c6-c82f137fdef7 req-aac826a4-272e-4316-843d-e919b0ac7459 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Updated VIF entry in instance network info cache for port ff2b9b74-a6fc-4774-89d2-9c010f121d65. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:23:21 np0005486808 nova_compute[259627]: 2025-10-14 09:23:21.267 2 DEBUG nova.network.neutron [req-d08dae91-cb76-4d03-a6c6-c82f137fdef7 req-aac826a4-272e-4316-843d-e919b0ac7459 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Updating instance_info_cache with network_info: [{"id": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "address": "fa:16:3e:32:f2:3f", "network": {"id": "99e78054-f9f4-417c-a942-d4f9dd534ef7", "bridge": "br-int", "label": "tempest-network-smoke--1093348429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81977d79-f7", "ovs_interfaceid": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "address": "fa:16:3e:17:98:a5", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff2b9b74-a6", "ovs_interfaceid": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:23:21 np0005486808 nova_compute[259627]: 2025-10-14 09:23:21.297 2 DEBUG oslo_concurrency.lockutils [req-d08dae91-cb76-4d03-a6c6-c82f137fdef7 req-aac826a4-272e-4316-843d-e919b0ac7459 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:23:21 np0005486808 nova_compute[259627]: 2025-10-14 09:23:21.377 2 DEBUG oslo_concurrency.processutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/060db45d-e2f9-4bf6-bcc0-c72e479bfae1/disk.config 060db45d-e2f9-4bf6-bcc0-c72e479bfae1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:23:21 np0005486808 nova_compute[259627]: 2025-10-14 09:23:21.378 2 INFO nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Deleting local config drive /var/lib/nova/instances/060db45d-e2f9-4bf6-bcc0-c72e479bfae1/disk.config because it was imported into RBD.#033[00m
Oct 14 05:23:21 np0005486808 kernel: tapaff3a259-59: entered promiscuous mode
Oct 14 05:23:21 np0005486808 NetworkManager[44885]: <info>  [1760433801.4389] manager: (tapaff3a259-59): new Tun device (/org/freedesktop/NetworkManager/Devices/528)
Oct 14 05:23:21 np0005486808 nova_compute[259627]: 2025-10-14 09:23:21.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:21 np0005486808 ovn_controller[152662]: 2025-10-14T09:23:21Z|01286|binding|INFO|Claiming lport aff3a259-5908-4491-83a2-9aa0430d46e0 for this chassis.
Oct 14 05:23:21 np0005486808 ovn_controller[152662]: 2025-10-14T09:23:21Z|01287|binding|INFO|aff3a259-5908-4491-83a2-9aa0430d46e0: Claiming fa:16:3e:56:31:2d 10.100.0.14
Oct 14 05:23:21 np0005486808 ovn_controller[152662]: 2025-10-14T09:23:21Z|01288|binding|INFO|Setting lport aff3a259-5908-4491-83a2-9aa0430d46e0 ovn-installed in OVS
Oct 14 05:23:21 np0005486808 ovn_controller[152662]: 2025-10-14T09:23:21Z|01289|binding|INFO|Setting lport aff3a259-5908-4491-83a2-9aa0430d46e0 up in Southbound
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.461 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:31:2d 10.100.0.14'], port_security=['fa:16:3e:56:31:2d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '060db45d-e2f9-4bf6-bcc0-c72e479bfae1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ed30564-d15d-43a5-8374-94d3c4f31dec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9afc0bee75634a4cb284babbfba8d601', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2fc47b78-9d93-43fe-9a5c-3ace8d9a4e46 df77c32e-f13f-4a03-9ec7-b5592ef3d091', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3b027bd6-06b6-47af-868f-e9e43d4de27f, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=aff3a259-5908-4491-83a2-9aa0430d46e0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.462 162547 INFO neutron.agent.ovn.metadata.agent [-] Port aff3a259-5908-4491-83a2-9aa0430d46e0 in datapath 9ed30564-d15d-43a5-8374-94d3c4f31dec bound to our chassis#033[00m
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.464 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9ed30564-d15d-43a5-8374-94d3c4f31dec#033[00m
Oct 14 05:23:21 np0005486808 nova_compute[259627]: 2025-10-14 09:23:21.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:21 np0005486808 systemd-udevd[381572]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.536 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[194359a6-923f-4864-8b69-01f0f57315eb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.537 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9ed30564-d1 in ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:23:21 np0005486808 systemd-machined[214636]: New machine qemu-155-instance-0000007a.
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.540 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9ed30564-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.541 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8f44f8b9-9b1b-4e11-afaf-6ef43b4dfccd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.542 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5c76417d-8290-41f5-a877-e928f6f3b1ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:21 np0005486808 NetworkManager[44885]: <info>  [1760433801.5508] device (tapaff3a259-59): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:23:21 np0005486808 systemd[1]: Started Virtual Machine qemu-155-instance-0000007a.
Oct 14 05:23:21 np0005486808 NetworkManager[44885]: <info>  [1760433801.5519] device (tapaff3a259-59): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.560 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[fa6ec1eb-0649-4960-a983-627320e5ba65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.588 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[efc5d4dd-8717-4ea8-9f5f-a458a85fc5e5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.626 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[aca55e03-af00-4121-b22f-755b3472910e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:21 np0005486808 systemd-udevd[381577]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:23:21 np0005486808 NetworkManager[44885]: <info>  [1760433801.6366] manager: (tap9ed30564-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/529)
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.634 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[efdfb0bf-603b-4a89-a50c-932287c953b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.676 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[622e7227-b2ce-47d0-8d2b-eac557caf0e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.679 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ffab4722-4837-4bd2-84e9-2e0b5791fd37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:21 np0005486808 NetworkManager[44885]: <info>  [1760433801.7105] device (tap9ed30564-d0): carrier: link connected
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.717 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[95063acf-cc0b-4c2c-add9-5862f26c2d3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.756 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[661cfa59-7dbd-4e2b-9eab-6187ae579ffd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9ed30564-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:96:eb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 370], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 758047, 'reachable_time': 34673, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 381606, 'error': None, 'target': 'ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.776 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[473894f3-0e91-481c-a7c8-0d96f917c5ef]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fead:96eb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 758047, 'tstamp': 758047}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 381607, 'error': None, 'target': 'ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.802 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2e685a46-0a4b-4b0b-88ed-1b0668effc78]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9ed30564-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:96:eb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 220, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 220, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 370], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 758047, 'reachable_time': 34673, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 381608, 'error': None, 'target': 'ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.845 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[88e7ef51-1da6-4c55-839d-5297625b93a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:21 np0005486808 nova_compute[259627]: 2025-10-14 09:23:21.859 2 DEBUG nova.compute.manager [req-7d37a4c6-dbbf-4d83-93b4-ab8cdb3e93e4 req-380d2531-9162-4ef2-8be6-6a7b9e987bf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Received event network-vif-plugged-aff3a259-5908-4491-83a2-9aa0430d46e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:23:21 np0005486808 nova_compute[259627]: 2025-10-14 09:23:21.860 2 DEBUG oslo_concurrency.lockutils [req-7d37a4c6-dbbf-4d83-93b4-ab8cdb3e93e4 req-380d2531-9162-4ef2-8be6-6a7b9e987bf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:23:21 np0005486808 nova_compute[259627]: 2025-10-14 09:23:21.861 2 DEBUG oslo_concurrency.lockutils [req-7d37a4c6-dbbf-4d83-93b4-ab8cdb3e93e4 req-380d2531-9162-4ef2-8be6-6a7b9e987bf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:23:21 np0005486808 nova_compute[259627]: 2025-10-14 09:23:21.861 2 DEBUG oslo_concurrency.lockutils [req-7d37a4c6-dbbf-4d83-93b4-ab8cdb3e93e4 req-380d2531-9162-4ef2-8be6-6a7b9e987bf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:23:21 np0005486808 nova_compute[259627]: 2025-10-14 09:23:21.861 2 DEBUG nova.compute.manager [req-7d37a4c6-dbbf-4d83-93b4-ab8cdb3e93e4 req-380d2531-9162-4ef2-8be6-6a7b9e987bf5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Processing event network-vif-plugged-aff3a259-5908-4491-83a2-9aa0430d46e0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.928 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0e04dfcd-146b-465c-b392-aea58aa3629b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.930 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ed30564-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.930 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.931 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9ed30564-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:23:21 np0005486808 NetworkManager[44885]: <info>  [1760433801.9341] manager: (tap9ed30564-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/530)
Oct 14 05:23:21 np0005486808 kernel: tap9ed30564-d0: entered promiscuous mode
Oct 14 05:23:21 np0005486808 nova_compute[259627]: 2025-10-14 09:23:21.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.943 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9ed30564-d0, col_values=(('external_ids', {'iface-id': 'aa7df6c9-2599-4ac0-b318-2cea87a1558f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:23:21 np0005486808 ovn_controller[152662]: 2025-10-14T09:23:21Z|01290|binding|INFO|Releasing lport aa7df6c9-2599-4ac0-b318-2cea87a1558f from this chassis (sb_readonly=0)
Oct 14 05:23:21 np0005486808 nova_compute[259627]: 2025-10-14 09:23:21.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:21 np0005486808 nova_compute[259627]: 2025-10-14 09:23:21.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.969 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9ed30564-d15d-43a5-8374-94d3c4f31dec.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9ed30564-d15d-43a5-8374-94d3c4f31dec.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.970 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e6d249fc-fd90-4bba-adbf-f7270e7f574c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.971 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-9ed30564-d15d-43a5-8374-94d3c4f31dec
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/9ed30564-d15d-43a5-8374-94d3c4f31dec.pid.haproxy
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 9ed30564-d15d-43a5-8374-94d3c4f31dec
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:23:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:21.974 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec', 'env', 'PROCESS_TAG=haproxy-9ed30564-d15d-43a5-8374-94d3c4f31dec', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9ed30564-d15d-43a5-8374-94d3c4f31dec.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:23:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2099: 305 pgs: 305 active+clean; 405 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.9 MiB/s wr, 152 op/s
Oct 14 05:23:22 np0005486808 podman[381681]: 2025-10-14 09:23:22.408256904 +0000 UTC m=+0.050775956 container create 09a8e3d34f01e6f5939b94a17e2fe4802cbb39ba89dc5fc5b89186600b6429bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 14 05:23:22 np0005486808 systemd[1]: Started libpod-conmon-09a8e3d34f01e6f5939b94a17e2fe4802cbb39ba89dc5fc5b89186600b6429bb.scope.
Oct 14 05:23:22 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:23:22 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44d50de940caad5298fe95e1dc9e7763caa796f2db746a3b136ae2e99fd99b9d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:23:22 np0005486808 podman[381681]: 2025-10-14 09:23:22.380822216 +0000 UTC m=+0.023341288 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:23:22 np0005486808 podman[381681]: 2025-10-14 09:23:22.483934764 +0000 UTC m=+0.126453836 container init 09a8e3d34f01e6f5939b94a17e2fe4802cbb39ba89dc5fc5b89186600b6429bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 05:23:22 np0005486808 podman[381681]: 2025-10-14 09:23:22.489991593 +0000 UTC m=+0.132510645 container start 09a8e3d34f01e6f5939b94a17e2fe4802cbb39ba89dc5fc5b89186600b6429bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009)
Oct 14 05:23:22 np0005486808 neutron-haproxy-ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec[381697]: [NOTICE]   (381701) : New worker (381703) forked
Oct 14 05:23:22 np0005486808 neutron-haproxy-ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec[381697]: [NOTICE]   (381701) : Loading success.
Oct 14 05:23:22 np0005486808 nova_compute[259627]: 2025-10-14 09:23:22.689 2 DEBUG nova.compute.manager [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:23:22 np0005486808 nova_compute[259627]: 2025-10-14 09:23:22.690 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433802.6889188, 060db45d-e2f9-4bf6-bcc0-c72e479bfae1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:23:22 np0005486808 nova_compute[259627]: 2025-10-14 09:23:22.690 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] VM Started (Lifecycle Event)#033[00m
Oct 14 05:23:22 np0005486808 nova_compute[259627]: 2025-10-14 09:23:22.694 2 DEBUG nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:23:22 np0005486808 nova_compute[259627]: 2025-10-14 09:23:22.697 2 INFO nova.virt.libvirt.driver [-] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Instance spawned successfully.#033[00m
Oct 14 05:23:22 np0005486808 nova_compute[259627]: 2025-10-14 09:23:22.698 2 DEBUG nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:23:22 np0005486808 nova_compute[259627]: 2025-10-14 09:23:22.730 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:23:22 np0005486808 nova_compute[259627]: 2025-10-14 09:23:22.736 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:23:22 np0005486808 nova_compute[259627]: 2025-10-14 09:23:22.739 2 DEBUG nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:23:22 np0005486808 nova_compute[259627]: 2025-10-14 09:23:22.740 2 DEBUG nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:23:22 np0005486808 nova_compute[259627]: 2025-10-14 09:23:22.740 2 DEBUG nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:23:22 np0005486808 nova_compute[259627]: 2025-10-14 09:23:22.741 2 DEBUG nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:23:22 np0005486808 nova_compute[259627]: 2025-10-14 09:23:22.741 2 DEBUG nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:23:22 np0005486808 nova_compute[259627]: 2025-10-14 09:23:22.742 2 DEBUG nova.virt.libvirt.driver [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:23:22 np0005486808 nova_compute[259627]: 2025-10-14 09:23:22.779 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:23:22 np0005486808 nova_compute[259627]: 2025-10-14 09:23:22.779 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433802.6904864, 060db45d-e2f9-4bf6-bcc0-c72e479bfae1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:23:22 np0005486808 nova_compute[259627]: 2025-10-14 09:23:22.779 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:23:22 np0005486808 nova_compute[259627]: 2025-10-14 09:23:22.807 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:23:22 np0005486808 nova_compute[259627]: 2025-10-14 09:23:22.809 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433802.694312, 060db45d-e2f9-4bf6-bcc0-c72e479bfae1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:23:22 np0005486808 nova_compute[259627]: 2025-10-14 09:23:22.810 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:23:22 np0005486808 nova_compute[259627]: 2025-10-14 09:23:22.820 2 INFO nova.compute.manager [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Took 8.04 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:23:22 np0005486808 nova_compute[259627]: 2025-10-14 09:23:22.821 2 DEBUG nova.compute.manager [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:23:22 np0005486808 nova_compute[259627]: 2025-10-14 09:23:22.847 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:23:22 np0005486808 nova_compute[259627]: 2025-10-14 09:23:22.849 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:23:22 np0005486808 nova_compute[259627]: 2025-10-14 09:23:22.872 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:23:22 np0005486808 nova_compute[259627]: 2025-10-14 09:23:22.884 2 INFO nova.compute.manager [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Took 9.10 seconds to build instance.#033[00m
Oct 14 05:23:22 np0005486808 nova_compute[259627]: 2025-10-14 09:23:22.899 2 DEBUG oslo_concurrency.lockutils [None req-3a314694-44bd-45ac-a490-bb80771db53b c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.186s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:23:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:23:24 np0005486808 nova_compute[259627]: 2025-10-14 09:23:24.002 2 DEBUG nova.compute.manager [req-84724c37-1d99-4f15-b879-dd6feb54b4ff req-3231104e-f7c0-492a-b54a-550a6d1788be 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Received event network-vif-plugged-aff3a259-5908-4491-83a2-9aa0430d46e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:23:24 np0005486808 nova_compute[259627]: 2025-10-14 09:23:24.003 2 DEBUG oslo_concurrency.lockutils [req-84724c37-1d99-4f15-b879-dd6feb54b4ff req-3231104e-f7c0-492a-b54a-550a6d1788be 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:23:24 np0005486808 nova_compute[259627]: 2025-10-14 09:23:24.005 2 DEBUG oslo_concurrency.lockutils [req-84724c37-1d99-4f15-b879-dd6feb54b4ff req-3231104e-f7c0-492a-b54a-550a6d1788be 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:23:24 np0005486808 nova_compute[259627]: 2025-10-14 09:23:24.005 2 DEBUG oslo_concurrency.lockutils [req-84724c37-1d99-4f15-b879-dd6feb54b4ff req-3231104e-f7c0-492a-b54a-550a6d1788be 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:23:24 np0005486808 nova_compute[259627]: 2025-10-14 09:23:24.006 2 DEBUG nova.compute.manager [req-84724c37-1d99-4f15-b879-dd6feb54b4ff req-3231104e-f7c0-492a-b54a-550a6d1788be 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] No waiting events found dispatching network-vif-plugged-aff3a259-5908-4491-83a2-9aa0430d46e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:23:24 np0005486808 nova_compute[259627]: 2025-10-14 09:23:24.006 2 WARNING nova.compute.manager [req-84724c37-1d99-4f15-b879-dd6feb54b4ff req-3231104e-f7c0-492a-b54a-550a6d1788be 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Received unexpected event network-vif-plugged-aff3a259-5908-4491-83a2-9aa0430d46e0 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:23:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2100: 305 pgs: 305 active+clean; 405 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 964 KiB/s rd, 3.2 MiB/s wr, 105 op/s
Oct 14 05:23:24 np0005486808 ovn_controller[152662]: 2025-10-14T09:23:24Z|00137|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.12 does not match offer 10.100.0.4
Oct 14 05:23:24 np0005486808 ovn_controller[152662]: 2025-10-14T09:23:24Z|00138|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:46:4e:45 10.100.0.4
Oct 14 05:23:24 np0005486808 nova_compute[259627]: 2025-10-14 09:23:24.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:24 np0005486808 nova_compute[259627]: 2025-10-14 09:23:24.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:25 np0005486808 nova_compute[259627]: 2025-10-14 09:23:25.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:23:25 np0005486808 nova_compute[259627]: 2025-10-14 09:23:25.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 14 05:23:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2101: 305 pgs: 305 active+clean; 419 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.7 MiB/s wr, 231 op/s
Oct 14 05:23:26 np0005486808 nova_compute[259627]: 2025-10-14 09:23:26.383 2 DEBUG nova.compute.manager [req-7659ded8-f435-49a9-9612-a202b317f26e req-0f04b7db-582e-4f8b-bbc1-2ebbef7952c5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Received event network-changed-aff3a259-5908-4491-83a2-9aa0430d46e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:23:26 np0005486808 nova_compute[259627]: 2025-10-14 09:23:26.384 2 DEBUG nova.compute.manager [req-7659ded8-f435-49a9-9612-a202b317f26e req-0f04b7db-582e-4f8b-bbc1-2ebbef7952c5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Refreshing instance network info cache due to event network-changed-aff3a259-5908-4491-83a2-9aa0430d46e0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:23:26 np0005486808 nova_compute[259627]: 2025-10-14 09:23:26.384 2 DEBUG oslo_concurrency.lockutils [req-7659ded8-f435-49a9-9612-a202b317f26e req-0f04b7db-582e-4f8b-bbc1-2ebbef7952c5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-060db45d-e2f9-4bf6-bcc0-c72e479bfae1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:23:26 np0005486808 nova_compute[259627]: 2025-10-14 09:23:26.385 2 DEBUG oslo_concurrency.lockutils [req-7659ded8-f435-49a9-9612-a202b317f26e req-0f04b7db-582e-4f8b-bbc1-2ebbef7952c5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-060db45d-e2f9-4bf6-bcc0-c72e479bfae1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:23:26 np0005486808 nova_compute[259627]: 2025-10-14 09:23:26.385 2 DEBUG nova.network.neutron [req-7659ded8-f435-49a9-9612-a202b317f26e req-0f04b7db-582e-4f8b-bbc1-2ebbef7952c5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Refreshing network info cache for port aff3a259-5908-4491-83a2-9aa0430d46e0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:23:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:23:28 np0005486808 nova_compute[259627]: 2025-10-14 09:23:28.098 2 DEBUG nova.network.neutron [req-7659ded8-f435-49a9-9612-a202b317f26e req-0f04b7db-582e-4f8b-bbc1-2ebbef7952c5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Updated VIF entry in instance network info cache for port aff3a259-5908-4491-83a2-9aa0430d46e0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:23:28 np0005486808 nova_compute[259627]: 2025-10-14 09:23:28.098 2 DEBUG nova.network.neutron [req-7659ded8-f435-49a9-9612-a202b317f26e req-0f04b7db-582e-4f8b-bbc1-2ebbef7952c5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Updating instance_info_cache with network_info: [{"id": "aff3a259-5908-4491-83a2-9aa0430d46e0", "address": "fa:16:3e:56:31:2d", "network": {"id": "9ed30564-d15d-43a5-8374-94d3c4f31dec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1620506525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afc0bee75634a4cb284babbfba8d601", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaff3a259-59", "ovs_interfaceid": "aff3a259-5908-4491-83a2-9aa0430d46e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:23:28 np0005486808 nova_compute[259627]: 2025-10-14 09:23:28.189 2 DEBUG oslo_concurrency.lockutils [req-7659ded8-f435-49a9-9612-a202b317f26e req-0f04b7db-582e-4f8b-bbc1-2ebbef7952c5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-060db45d-e2f9-4bf6-bcc0-c72e479bfae1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:23:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2102: 305 pgs: 305 active+clean; 419 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.2 MiB/s wr, 136 op/s
Oct 14 05:23:29 np0005486808 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 05:23:29 np0005486808 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 36K writes, 145K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.04 MB/s#012Cumulative WAL: 36K writes, 13K syncs, 2.77 writes per sync, written: 0.14 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 7447 writes, 29K keys, 7447 commit groups, 1.0 writes per commit group, ingest: 33.01 MB, 0.06 MB/s#012Interval WAL: 7447 writes, 2948 syncs, 2.53 writes per sync, written: 0.03 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 05:23:29 np0005486808 ovn_controller[152662]: 2025-10-14T09:23:29Z|00139|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.12 does not match offer 10.100.0.4
Oct 14 05:23:29 np0005486808 ovn_controller[152662]: 2025-10-14T09:23:29Z|00140|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:46:4e:45 10.100.0.4
Oct 14 05:23:29 np0005486808 ovn_controller[152662]: 2025-10-14T09:23:29Z|00141|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:46:4e:45 10.100.0.4
Oct 14 05:23:29 np0005486808 ovn_controller[152662]: 2025-10-14T09:23:29Z|00142|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:46:4e:45 10.100.0.4
Oct 14 05:23:29 np0005486808 nova_compute[259627]: 2025-10-14 09:23:29.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:29 np0005486808 nova_compute[259627]: 2025-10-14 09:23:29.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2103: 305 pgs: 305 active+clean; 419 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.2 MiB/s wr, 137 op/s
Oct 14 05:23:31 np0005486808 podman[381715]: 2025-10-14 09:23:31.68396977 +0000 UTC m=+0.081652508 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 14 05:23:31 np0005486808 podman[381714]: 2025-10-14 09:23:31.709000689 +0000 UTC m=+0.101769076 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, managed_by=edpm_ansible, container_name=multipathd)
Oct 14 05:23:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2104: 305 pgs: 305 active+clean; 423 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 575 KiB/s wr, 129 op/s
Oct 14 05:23:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:23:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:23:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:23:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:23:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:23:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:23:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:23:32
Oct 14 05:23:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:23:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:23:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['default.rgw.control', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'vms', '.rgw.root', 'backups']
Oct 14 05:23:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:23:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:23:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:23:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:23:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:23:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:23:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:23:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:23:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:23:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:23:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:23:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:23:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2105: 305 pgs: 305 active+clean; 423 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 573 KiB/s wr, 129 op/s
Oct 14 05:23:34 np0005486808 nova_compute[259627]: 2025-10-14 09:23:34.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 05:23:34 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 37K writes, 145K keys, 37K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.04 MB/s#012Cumulative WAL: 37K writes, 13K syncs, 2.76 writes per sync, written: 0.14 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 7870 writes, 29K keys, 7870 commit groups, 1.0 writes per commit group, ingest: 34.14 MB, 0.06 MB/s#012Interval WAL: 7870 writes, 3099 syncs, 2.54 writes per sync, written: 0.03 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 05:23:34 np0005486808 nova_compute[259627]: 2025-10-14 09:23:34.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:35 np0005486808 ovn_controller[152662]: 2025-10-14T09:23:35Z|00143|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:56:31:2d 10.100.0.14
Oct 14 05:23:35 np0005486808 ovn_controller[152662]: 2025-10-14T09:23:35Z|00144|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:56:31:2d 10.100.0.14
Oct 14 05:23:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2106: 305 pgs: 305 active+clean; 451 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.6 MiB/s wr, 180 op/s
Oct 14 05:23:36 np0005486808 nova_compute[259627]: 2025-10-14 09:23:36.993 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:23:36 np0005486808 nova_compute[259627]: 2025-10-14 09:23:36.994 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:23:37 np0005486808 nova_compute[259627]: 2025-10-14 09:23:37.039 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:23:37 np0005486808 nova_compute[259627]: 2025-10-14 09:23:37.040 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:23:37 np0005486808 nova_compute[259627]: 2025-10-14 09:23:37.041 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:23:37 np0005486808 nova_compute[259627]: 2025-10-14 09:23:37.041 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:23:37 np0005486808 nova_compute[259627]: 2025-10-14 09:23:37.042 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:23:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:23:37 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1951794241' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:23:37 np0005486808 nova_compute[259627]: 2025-10-14 09:23:37.510 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:23:37 np0005486808 nova_compute[259627]: 2025-10-14 09:23:37.611 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:23:37 np0005486808 nova_compute[259627]: 2025-10-14 09:23:37.611 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:23:37 np0005486808 nova_compute[259627]: 2025-10-14 09:23:37.618 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:23:37 np0005486808 nova_compute[259627]: 2025-10-14 09:23:37.618 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:23:37 np0005486808 nova_compute[259627]: 2025-10-14 09:23:37.626 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:23:37 np0005486808 nova_compute[259627]: 2025-10-14 09:23:37.626 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:23:37 np0005486808 nova_compute[259627]: 2025-10-14 09:23:37.632 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:23:37 np0005486808 nova_compute[259627]: 2025-10-14 09:23:37.632 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:23:37 np0005486808 nova_compute[259627]: 2025-10-14 09:23:37.638 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:23:37 np0005486808 nova_compute[259627]: 2025-10-14 09:23:37.638 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:23:37 np0005486808 nova_compute[259627]: 2025-10-14 09:23:37.892 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:23:37 np0005486808 nova_compute[259627]: 2025-10-14 09:23:37.893 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2754MB free_disk=59.8003044128418GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:23:37 np0005486808 nova_compute[259627]: 2025-10-14 09:23:37.894 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:23:37 np0005486808 nova_compute[259627]: 2025-10-14 09:23:37.894 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:23:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:23:38 np0005486808 nova_compute[259627]: 2025-10-14 09:23:38.063 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 50c83173-31e3-4f7a-8836-26e52affd0f2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:23:38 np0005486808 nova_compute[259627]: 2025-10-14 09:23:38.064 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 6810b29b-088f-441b-8a6a-02eaafada0c5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:23:38 np0005486808 nova_compute[259627]: 2025-10-14 09:23:38.064 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance ef3d76bf-9763-4405-8e48-c2c4405a2a3b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:23:38 np0005486808 nova_compute[259627]: 2025-10-14 09:23:38.064 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:23:38 np0005486808 nova_compute[259627]: 2025-10-14 09:23:38.064 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 060db45d-e2f9-4bf6-bcc0-c72e479bfae1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:23:38 np0005486808 nova_compute[259627]: 2025-10-14 09:23:38.064 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:23:38 np0005486808 nova_compute[259627]: 2025-10-14 09:23:38.064 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:23:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2107: 305 pgs: 305 active+clean; 451 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Oct 14 05:23:38 np0005486808 nova_compute[259627]: 2025-10-14 09:23:38.314 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:23:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:23:38 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1099643206' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:23:38 np0005486808 nova_compute[259627]: 2025-10-14 09:23:38.740 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:23:38 np0005486808 nova_compute[259627]: 2025-10-14 09:23:38.749 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:23:38 np0005486808 nova_compute[259627]: 2025-10-14 09:23:38.772 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:23:38 np0005486808 nova_compute[259627]: 2025-10-14 09:23:38.813 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:23:38 np0005486808 nova_compute[259627]: 2025-10-14 09:23:38.814 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.920s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:23:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:39.213 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:e2:5d 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-17206a37-8263-4403-aaa6-3b6fe9255608', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17206a37-8263-4403-aaa6-3b6fe9255608', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f46887bb96d483788cde4e37af9f799', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf824143-865f-4777-be8c-e192803f85fe, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=36d9c2d2-c941-4fd1-a63a-9e6fee3a0d20) old=Port_Binding(mac=['fa:16:3e:63:e2:5d 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-17206a37-8263-4403-aaa6-3b6fe9255608', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17206a37-8263-4403-aaa6-3b6fe9255608', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f46887bb96d483788cde4e37af9f799', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:23:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:39.215 162547 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 36d9c2d2-c941-4fd1-a63a-9e6fee3a0d20 in datapath 17206a37-8263-4403-aaa6-3b6fe9255608 updated#033[00m
Oct 14 05:23:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:39.219 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 17206a37-8263-4403-aaa6-3b6fe9255608, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:23:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:39.220 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9d4803fb-2e6c-4203-8399-61738b686425]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:39 np0005486808 nova_compute[259627]: 2025-10-14 09:23:39.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 05:23:39 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 31K writes, 121K keys, 31K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.03 MB/s#012Cumulative WAL: 31K writes, 11K syncs, 2.77 writes per sync, written: 0.11 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5834 writes, 23K keys, 5834 commit groups, 1.0 writes per commit group, ingest: 24.66 MB, 0.04 MB/s#012Interval WAL: 5834 writes, 2337 syncs, 2.50 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 05:23:39 np0005486808 nova_compute[259627]: 2025-10-14 09:23:39.798 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:23:39 np0005486808 nova_compute[259627]: 2025-10-14 09:23:39.798 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:23:39 np0005486808 nova_compute[259627]: 2025-10-14 09:23:39.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2108: 305 pgs: 305 active+clean; 453 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 376 KiB/s rd, 2.2 MiB/s wr, 59 op/s
Oct 14 05:23:40 np0005486808 ceph-mgr[74543]: [devicehealth INFO root] Check health
Oct 14 05:23:40 np0005486808 nova_compute[259627]: 2025-10-14 09:23:40.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:23:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2109: 305 pgs: 305 active+clean; 455 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 426 KiB/s rd, 2.2 MiB/s wr, 70 op/s
Oct 14 05:23:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:23:42 np0005486808 nova_compute[259627]: 2025-10-14 09:23:42.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:23:42 np0005486808 nova_compute[259627]: 2025-10-14 09:23:42.977 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:23:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:23:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:23:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:23:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:23:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0031436066411082895 of space, bias 1.0, pg target 0.9430819923324868 quantized to 32 (current 32)
Oct 14 05:23:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:23:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:23:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:23:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:23:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:23:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.001424050310933125 of space, bias 1.0, pg target 0.4272150932799375 quantized to 32 (current 32)
Oct 14 05:23:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:23:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:23:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:23:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:23:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:23:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:23:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:23:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:23:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:23:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:23:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:23:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:23:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:44.004 162744 DEBUG eventlet.wsgi.server [-] (162744) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Oct 14 05:23:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:44.007 162744 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0#015
Oct 14 05:23:44 np0005486808 ovn_metadata_agent[162542]: Accept: */*#015
Oct 14 05:23:44 np0005486808 ovn_metadata_agent[162542]: Connection: close#015
Oct 14 05:23:44 np0005486808 ovn_metadata_agent[162542]: Content-Type: text/plain#015
Oct 14 05:23:44 np0005486808 ovn_metadata_agent[162542]: Host: 169.254.169.254#015
Oct 14 05:23:44 np0005486808 ovn_metadata_agent[162542]: User-Agent: curl/7.84.0#015
Oct 14 05:23:44 np0005486808 ovn_metadata_agent[162542]: X-Forwarded-For: 10.100.0.14#015
Oct 14 05:23:44 np0005486808 ovn_metadata_agent[162542]: X-Ovn-Network-Id: 9ed30564-d15d-43a5-8374-94d3c4f31dec __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Oct 14 05:23:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2110: 305 pgs: 305 active+clean; 455 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 393 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Oct 14 05:23:44 np0005486808 nova_compute[259627]: 2025-10-14 09:23:44.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:44 np0005486808 nova_compute[259627]: 2025-10-14 09:23:44.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:44 np0005486808 nova_compute[259627]: 2025-10-14 09:23:44.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:23:44 np0005486808 nova_compute[259627]: 2025-10-14 09:23:44.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:23:45 np0005486808 nova_compute[259627]: 2025-10-14 09:23:45.157 2 DEBUG nova.compute.manager [None req-e33a79ed-118a-443f-b510-485e1e20e1c4 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:23:45 np0005486808 nova_compute[259627]: 2025-10-14 09:23:45.183 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-6810b29b-088f-441b-8a6a-02eaafada0c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:23:45 np0005486808 nova_compute[259627]: 2025-10-14 09:23:45.183 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-6810b29b-088f-441b-8a6a-02eaafada0c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:23:45 np0005486808 nova_compute[259627]: 2025-10-14 09:23:45.183 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 14 05:23:45 np0005486808 nova_compute[259627]: 2025-10-14 09:23:45.221 2 INFO nova.compute.manager [None req-e33a79ed-118a-443f-b510-485e1e20e1c4 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] instance snapshotting#033[00m
Oct 14 05:23:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:45.274 162744 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Oct 14 05:23:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:45.275 162744 INFO eventlet.wsgi.server [-] 10.100.0.14,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 1.2680297#033[00m
Oct 14 05:23:45 np0005486808 haproxy-metadata-proxy-9ed30564-d15d-43a5-8374-94d3c4f31dec[381703]: 10.100.0.14:60340 [14/Oct/2025:09:23:44.001] listener listener/metadata 0/0/0/1273/1273 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Oct 14 05:23:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:45.411 162744 DEBUG eventlet.wsgi.server [-] (162744) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Oct 14 05:23:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:45.413 162744 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0#015
Oct 14 05:23:45 np0005486808 ovn_metadata_agent[162542]: Accept: */*#015
Oct 14 05:23:45 np0005486808 ovn_metadata_agent[162542]: Connection: close#015
Oct 14 05:23:45 np0005486808 ovn_metadata_agent[162542]: Content-Length: 100#015
Oct 14 05:23:45 np0005486808 ovn_metadata_agent[162542]: Content-Type: application/x-www-form-urlencoded#015
Oct 14 05:23:45 np0005486808 ovn_metadata_agent[162542]: Host: 169.254.169.254#015
Oct 14 05:23:45 np0005486808 ovn_metadata_agent[162542]: User-Agent: curl/7.84.0#015
Oct 14 05:23:45 np0005486808 ovn_metadata_agent[162542]: X-Forwarded-For: 10.100.0.14#015
Oct 14 05:23:45 np0005486808 ovn_metadata_agent[162542]: X-Ovn-Network-Id: 9ed30564-d15d-43a5-8374-94d3c4f31dec#015
Oct 14 05:23:45 np0005486808 ovn_metadata_agent[162542]: #015
Oct 14 05:23:45 np0005486808 ovn_metadata_agent[162542]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Oct 14 05:23:45 np0005486808 nova_compute[259627]: 2025-10-14 09:23:45.476 2 INFO nova.virt.libvirt.driver [None req-e33a79ed-118a-443f-b510-485e1e20e1c4 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Beginning live snapshot process#033[00m
Oct 14 05:23:45 np0005486808 nova_compute[259627]: 2025-10-14 09:23:45.646 2 DEBUG nova.storage.rbd_utils [None req-e33a79ed-118a-443f-b510-485e1e20e1c4 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] creating snapshot(9133787a58494c31ada8906404467086) on rbd image(8f5e63fb-23c2-4f15-acca-bc5fbeb0729b_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 14 05:23:45 np0005486808 haproxy-metadata-proxy-9ed30564-d15d-43a5-8374-94d3c4f31dec[381703]: 10.100.0.14:60356 [14/Oct/2025:09:23:45.409] listener listener/metadata 0/0/0/273/273 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Oct 14 05:23:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:45.682 162744 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Oct 14 05:23:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:45.682 162744 INFO eventlet.wsgi.server [-] 10.100.0.14,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.2693579#033[00m
Oct 14 05:23:45 np0005486808 podman[381815]: 2025-10-14 09:23:45.749709957 +0000 UTC m=+0.138968165 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:23:45 np0005486808 podman[381799]: 2025-10-14 09:23:45.766868461 +0000 UTC m=+0.165353517 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 14 05:23:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2111: 305 pgs: 305 active+clean; 456 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 604 KiB/s rd, 2.2 MiB/s wr, 74 op/s
Oct 14 05:23:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e284 do_prune osdmap full prune enabled
Oct 14 05:23:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e285 e285: 3 total, 3 up, 3 in
Oct 14 05:23:46 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e285: 3 total, 3 up, 3 in
Oct 14 05:23:46 np0005486808 nova_compute[259627]: 2025-10-14 09:23:46.717 2 DEBUG nova.storage.rbd_utils [None req-e33a79ed-118a-443f-b510-485e1e20e1c4 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] cloning vms/8f5e63fb-23c2-4f15-acca-bc5fbeb0729b_disk@9133787a58494c31ada8906404467086 to images/68eb8740-bd8e-4270-a8b8-e37a3ed71f7a clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct 14 05:23:46 np0005486808 nova_compute[259627]: 2025-10-14 09:23:46.824 2 DEBUG nova.storage.rbd_utils [None req-e33a79ed-118a-443f-b510-485e1e20e1c4 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] flattening images/68eb8740-bd8e-4270-a8b8-e37a3ed71f7a flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct 14 05:23:46 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Oct 14 05:23:47 np0005486808 nova_compute[259627]: 2025-10-14 09:23:47.365 2 DEBUG nova.storage.rbd_utils [None req-e33a79ed-118a-443f-b510-485e1e20e1c4 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] removing snapshot(9133787a58494c31ada8906404467086) on rbd image(8f5e63fb-23c2-4f15-acca-bc5fbeb0729b_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct 14 05:23:47 np0005486808 nova_compute[259627]: 2025-10-14 09:23:47.564 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Updating instance_info_cache with network_info: [{"id": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "address": "fa:16:3e:7c:22:e5", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d5e10b7-5c", "ovs_interfaceid": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:23:47 np0005486808 nova_compute[259627]: 2025-10-14 09:23:47.581 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-6810b29b-088f-441b-8a6a-02eaafada0c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:23:47 np0005486808 nova_compute[259627]: 2025-10-14 09:23:47.582 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 14 05:23:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e285 do_prune osdmap full prune enabled
Oct 14 05:23:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e286 e286: 3 total, 3 up, 3 in
Oct 14 05:23:47 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e286: 3 total, 3 up, 3 in
Oct 14 05:23:47 np0005486808 nova_compute[259627]: 2025-10-14 09:23:47.717 2 DEBUG nova.storage.rbd_utils [None req-e33a79ed-118a-443f-b510-485e1e20e1c4 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] creating snapshot(snap) on rbd image(68eb8740-bd8e-4270-a8b8-e37a3ed71f7a) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 14 05:23:47 np0005486808 nova_compute[259627]: 2025-10-14 09:23:47.802 2 DEBUG oslo_concurrency.lockutils [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Acquiring lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:23:47 np0005486808 nova_compute[259627]: 2025-10-14 09:23:47.803 2 DEBUG oslo_concurrency.lockutils [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:23:47 np0005486808 nova_compute[259627]: 2025-10-14 09:23:47.804 2 DEBUG oslo_concurrency.lockutils [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Acquiring lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:23:47 np0005486808 nova_compute[259627]: 2025-10-14 09:23:47.804 2 DEBUG oslo_concurrency.lockutils [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:23:47 np0005486808 nova_compute[259627]: 2025-10-14 09:23:47.804 2 DEBUG oslo_concurrency.lockutils [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:23:47 np0005486808 nova_compute[259627]: 2025-10-14 09:23:47.806 2 INFO nova.compute.manager [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Terminating instance#033[00m
Oct 14 05:23:47 np0005486808 nova_compute[259627]: 2025-10-14 09:23:47.808 2 DEBUG nova.compute.manager [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:23:47 np0005486808 kernel: tapaff3a259-59 (unregistering): left promiscuous mode
Oct 14 05:23:47 np0005486808 NetworkManager[44885]: <info>  [1760433827.8721] device (tapaff3a259-59): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:23:47 np0005486808 nova_compute[259627]: 2025-10-14 09:23:47.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:47 np0005486808 ovn_controller[152662]: 2025-10-14T09:23:47Z|01291|binding|INFO|Releasing lport aff3a259-5908-4491-83a2-9aa0430d46e0 from this chassis (sb_readonly=0)
Oct 14 05:23:47 np0005486808 ovn_controller[152662]: 2025-10-14T09:23:47Z|01292|binding|INFO|Setting lport aff3a259-5908-4491-83a2-9aa0430d46e0 down in Southbound
Oct 14 05:23:47 np0005486808 ovn_controller[152662]: 2025-10-14T09:23:47Z|01293|binding|INFO|Removing iface tapaff3a259-59 ovn-installed in OVS
Oct 14 05:23:47 np0005486808 nova_compute[259627]: 2025-10-14 09:23:47.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:47.938 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:31:2d 10.100.0.14'], port_security=['fa:16:3e:56:31:2d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '060db45d-e2f9-4bf6-bcc0-c72e479bfae1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ed30564-d15d-43a5-8374-94d3c4f31dec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9afc0bee75634a4cb284babbfba8d601', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2fc47b78-9d93-43fe-9a5c-3ace8d9a4e46 df77c32e-f13f-4a03-9ec7-b5592ef3d091', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.191'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3b027bd6-06b6-47af-868f-e9e43d4de27f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=aff3a259-5908-4491-83a2-9aa0430d46e0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:23:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:47.940 162547 INFO neutron.agent.ovn.metadata.agent [-] Port aff3a259-5908-4491-83a2-9aa0430d46e0 in datapath 9ed30564-d15d-43a5-8374-94d3c4f31dec unbound from our chassis#033[00m
Oct 14 05:23:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:47.943 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9ed30564-d15d-43a5-8374-94d3c4f31dec, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:23:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:47.948 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1c090c8a-928d-4a4e-b8d2-03f6a6101394]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:47.950 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec namespace which is not needed anymore#033[00m
Oct 14 05:23:47 np0005486808 nova_compute[259627]: 2025-10-14 09:23:47.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:23:47 np0005486808 nova_compute[259627]: 2025-10-14 09:23:47.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:23:47 np0005486808 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d0000007a.scope: Deactivated successfully.
Oct 14 05:23:48 np0005486808 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d0000007a.scope: Consumed 14.718s CPU time.
Oct 14 05:23:48 np0005486808 systemd-machined[214636]: Machine qemu-155-instance-0000007a terminated.
Oct 14 05:23:48 np0005486808 nova_compute[259627]: 2025-10-14 09:23:48.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:48 np0005486808 nova_compute[259627]: 2025-10-14 09:23:48.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:48 np0005486808 nova_compute[259627]: 2025-10-14 09:23:48.045 2 INFO nova.virt.libvirt.driver [-] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Instance destroyed successfully.#033[00m
Oct 14 05:23:48 np0005486808 nova_compute[259627]: 2025-10-14 09:23:48.045 2 DEBUG nova.objects.instance [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Lazy-loading 'resources' on Instance uuid 060db45d-e2f9-4bf6-bcc0-c72e479bfae1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:23:48 np0005486808 nova_compute[259627]: 2025-10-14 09:23:48.061 2 DEBUG nova.virt.libvirt.vif [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:23:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-769140514',display_name='tempest-TestServerBasicOps-server-769140514',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-769140514',id=122,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEF69xZ4MwycAnhRnKcfi5zKJsp1M1x2Cnuyy56sKit+Vi9Xj59LaAw8ViNKigCRTmbkHi0zC7jNFwbXf3v+6JnEKBlDXNW4AxSHy39++Bl7O4v2nk74xukLS+FgnBMAYQ==',key_name='tempest-TestServerBasicOps-2077025303',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:23:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9afc0bee75634a4cb284babbfba8d601',ramdisk_id='',reservation_id='r-lxozf8j0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-1877578104',owner_user_name='tempest-TestServerBasicOps-1877578104-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:23:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c3638538fa6347dc95b6e30735bf0e83',uuid=060db45d-e2f9-4bf6-bcc0-c72e479bfae1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "aff3a259-5908-4491-83a2-9aa0430d46e0", "address": 
"fa:16:3e:56:31:2d", "network": {"id": "9ed30564-d15d-43a5-8374-94d3c4f31dec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1620506525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afc0bee75634a4cb284babbfba8d601", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaff3a259-59", "ovs_interfaceid": "aff3a259-5908-4491-83a2-9aa0430d46e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:23:48 np0005486808 nova_compute[259627]: 2025-10-14 09:23:48.062 2 DEBUG nova.network.os_vif_util [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Converting VIF {"id": "aff3a259-5908-4491-83a2-9aa0430d46e0", "address": "fa:16:3e:56:31:2d", "network": {"id": "9ed30564-d15d-43a5-8374-94d3c4f31dec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1620506525-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9afc0bee75634a4cb284babbfba8d601", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaff3a259-59", "ovs_interfaceid": "aff3a259-5908-4491-83a2-9aa0430d46e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:23:48 np0005486808 nova_compute[259627]: 2025-10-14 09:23:48.063 2 DEBUG nova.network.os_vif_util [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:56:31:2d,bridge_name='br-int',has_traffic_filtering=True,id=aff3a259-5908-4491-83a2-9aa0430d46e0,network=Network(9ed30564-d15d-43a5-8374-94d3c4f31dec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaff3a259-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:23:48 np0005486808 nova_compute[259627]: 2025-10-14 09:23:48.064 2 DEBUG os_vif [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:56:31:2d,bridge_name='br-int',has_traffic_filtering=True,id=aff3a259-5908-4491-83a2-9aa0430d46e0,network=Network(9ed30564-d15d-43a5-8374-94d3c4f31dec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaff3a259-59') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:23:48 np0005486808 nova_compute[259627]: 2025-10-14 09:23:48.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:48 np0005486808 nova_compute[259627]: 2025-10-14 09:23:48.066 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaff3a259-59, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:23:48 np0005486808 nova_compute[259627]: 2025-10-14 09:23:48.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:48 np0005486808 nova_compute[259627]: 2025-10-14 09:23:48.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:48 np0005486808 nova_compute[259627]: 2025-10-14 09:23:48.074 2 INFO os_vif [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:56:31:2d,bridge_name='br-int',has_traffic_filtering=True,id=aff3a259-5908-4491-83a2-9aa0430d46e0,network=Network(9ed30564-d15d-43a5-8374-94d3c4f31dec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaff3a259-59')#033[00m
Oct 14 05:23:48 np0005486808 neutron-haproxy-ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec[381697]: [NOTICE]   (381701) : haproxy version is 2.8.14-c23fe91
Oct 14 05:23:48 np0005486808 neutron-haproxy-ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec[381697]: [NOTICE]   (381701) : path to executable is /usr/sbin/haproxy
Oct 14 05:23:48 np0005486808 neutron-haproxy-ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec[381697]: [WARNING]  (381701) : Exiting Master process...
Oct 14 05:23:48 np0005486808 neutron-haproxy-ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec[381697]: [WARNING]  (381701) : Exiting Master process...
Oct 14 05:23:48 np0005486808 neutron-haproxy-ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec[381697]: [ALERT]    (381701) : Current worker (381703) exited with code 143 (Terminated)
Oct 14 05:23:48 np0005486808 neutron-haproxy-ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec[381697]: [WARNING]  (381701) : All workers exited. Exiting... (0)
Oct 14 05:23:48 np0005486808 systemd[1]: libpod-09a8e3d34f01e6f5939b94a17e2fe4802cbb39ba89dc5fc5b89186600b6429bb.scope: Deactivated successfully.
Oct 14 05:23:48 np0005486808 podman[382017]: 2025-10-14 09:23:48.129821349 +0000 UTC m=+0.054510268 container died 09a8e3d34f01e6f5939b94a17e2fe4802cbb39ba89dc5fc5b89186600b6429bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:23:48 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-09a8e3d34f01e6f5939b94a17e2fe4802cbb39ba89dc5fc5b89186600b6429bb-userdata-shm.mount: Deactivated successfully.
Oct 14 05:23:48 np0005486808 systemd[1]: var-lib-containers-storage-overlay-44d50de940caad5298fe95e1dc9e7763caa796f2db746a3b136ae2e99fd99b9d-merged.mount: Deactivated successfully.
Oct 14 05:23:48 np0005486808 podman[382017]: 2025-10-14 09:23:48.182189633 +0000 UTC m=+0.106878522 container cleanup 09a8e3d34f01e6f5939b94a17e2fe4802cbb39ba89dc5fc5b89186600b6429bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:23:48 np0005486808 systemd[1]: libpod-conmon-09a8e3d34f01e6f5939b94a17e2fe4802cbb39ba89dc5fc5b89186600b6429bb.scope: Deactivated successfully.
Oct 14 05:23:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2114: 305 pgs: 305 active+clean; 456 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 75 KiB/s wr, 27 op/s
Oct 14 05:23:48 np0005486808 podman[382065]: 2025-10-14 09:23:48.261194055 +0000 UTC m=+0.052084898 container remove 09a8e3d34f01e6f5939b94a17e2fe4802cbb39ba89dc5fc5b89186600b6429bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 05:23:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:48.267 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7d2367df-ef21-4eb3-a1bf-6b5990de479a]: (4, ('Tue Oct 14 09:23:48 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec (09a8e3d34f01e6f5939b94a17e2fe4802cbb39ba89dc5fc5b89186600b6429bb)\n09a8e3d34f01e6f5939b94a17e2fe4802cbb39ba89dc5fc5b89186600b6429bb\nTue Oct 14 09:23:48 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec (09a8e3d34f01e6f5939b94a17e2fe4802cbb39ba89dc5fc5b89186600b6429bb)\n09a8e3d34f01e6f5939b94a17e2fe4802cbb39ba89dc5fc5b89186600b6429bb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:48.268 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9e2ebcf9-8f33-40d1-8122-d5c503d65e44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:48.269 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ed30564-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:23:48 np0005486808 kernel: tap9ed30564-d0: left promiscuous mode
Oct 14 05:23:48 np0005486808 nova_compute[259627]: 2025-10-14 09:23:48.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:48 np0005486808 nova_compute[259627]: 2025-10-14 09:23:48.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:48.301 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3bc27c73-548b-4f4c-af01-8be6667365d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:48.325 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8713bd8d-8145-4842-95b9-e59f7299802d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:48.326 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dffb64f8-447e-465e-8b21-0c3304c4d8c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:48.344 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f0ca9ee2-8243-4914-abd1-df82a9c37d53]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 758038, 'reachable_time': 25287, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382081, 'error': None, 'target': 'ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:48.349 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9ed30564-d15d-43a5-8374-94d3c4f31dec deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:23:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:48.349 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[042ceeb5-52bd-4d7d-bf5f-1183acc2c1f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:48 np0005486808 systemd[1]: run-netns-ovnmeta\x2d9ed30564\x2dd15d\x2d43a5\x2d8374\x2d94d3c4f31dec.mount: Deactivated successfully.
Oct 14 05:23:48 np0005486808 nova_compute[259627]: 2025-10-14 09:23:48.478 2 INFO nova.virt.libvirt.driver [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Deleting instance files /var/lib/nova/instances/060db45d-e2f9-4bf6-bcc0-c72e479bfae1_del#033[00m
Oct 14 05:23:48 np0005486808 nova_compute[259627]: 2025-10-14 09:23:48.480 2 INFO nova.virt.libvirt.driver [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Deletion of /var/lib/nova/instances/060db45d-e2f9-4bf6-bcc0-c72e479bfae1_del complete#033[00m
Oct 14 05:23:48 np0005486808 nova_compute[259627]: 2025-10-14 09:23:48.542 2 INFO nova.compute.manager [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:23:48 np0005486808 nova_compute[259627]: 2025-10-14 09:23:48.542 2 DEBUG oslo.service.loopingcall [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:23:48 np0005486808 nova_compute[259627]: 2025-10-14 09:23:48.543 2 DEBUG nova.compute.manager [-] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:23:48 np0005486808 nova_compute[259627]: 2025-10-14 09:23:48.543 2 DEBUG nova.network.neutron [-] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:23:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e286 do_prune osdmap full prune enabled
Oct 14 05:23:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e287 e287: 3 total, 3 up, 3 in
Oct 14 05:23:48 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e287: 3 total, 3 up, 3 in
Oct 14 05:23:48 np0005486808 nova_compute[259627]: 2025-10-14 09:23:48.835 2 DEBUG nova.compute.manager [req-3de66736-2380-4aeb-80ac-9016aaf84839 req-af2d6f4c-047b-4a0a-b8ef-4b9f0cdac2fb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Received event network-vif-unplugged-aff3a259-5908-4491-83a2-9aa0430d46e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:23:48 np0005486808 nova_compute[259627]: 2025-10-14 09:23:48.836 2 DEBUG oslo_concurrency.lockutils [req-3de66736-2380-4aeb-80ac-9016aaf84839 req-af2d6f4c-047b-4a0a-b8ef-4b9f0cdac2fb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:23:48 np0005486808 nova_compute[259627]: 2025-10-14 09:23:48.836 2 DEBUG oslo_concurrency.lockutils [req-3de66736-2380-4aeb-80ac-9016aaf84839 req-af2d6f4c-047b-4a0a-b8ef-4b9f0cdac2fb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:23:48 np0005486808 nova_compute[259627]: 2025-10-14 09:23:48.837 2 DEBUG oslo_concurrency.lockutils [req-3de66736-2380-4aeb-80ac-9016aaf84839 req-af2d6f4c-047b-4a0a-b8ef-4b9f0cdac2fb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:23:48 np0005486808 nova_compute[259627]: 2025-10-14 09:23:48.837 2 DEBUG nova.compute.manager [req-3de66736-2380-4aeb-80ac-9016aaf84839 req-af2d6f4c-047b-4a0a-b8ef-4b9f0cdac2fb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] No waiting events found dispatching network-vif-unplugged-aff3a259-5908-4491-83a2-9aa0430d46e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:23:48 np0005486808 nova_compute[259627]: 2025-10-14 09:23:48.837 2 DEBUG nova.compute.manager [req-3de66736-2380-4aeb-80ac-9016aaf84839 req-af2d6f4c-047b-4a0a-b8ef-4b9f0cdac2fb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Received event network-vif-unplugged-aff3a259-5908-4491-83a2-9aa0430d46e0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:23:49 np0005486808 nova_compute[259627]: 2025-10-14 09:23:49.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:49 np0005486808 nova_compute[259627]: 2025-10-14 09:23:49.904 2 DEBUG nova.network.neutron [-] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:23:49 np0005486808 nova_compute[259627]: 2025-10-14 09:23:49.932 2 INFO nova.compute.manager [-] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Took 1.39 seconds to deallocate network for instance.#033[00m
Oct 14 05:23:49 np0005486808 nova_compute[259627]: 2025-10-14 09:23:49.984 2 DEBUG oslo_concurrency.lockutils [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:23:49 np0005486808 nova_compute[259627]: 2025-10-14 09:23:49.985 2 DEBUG oslo_concurrency.lockutils [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:23:50 np0005486808 nova_compute[259627]: 2025-10-14 09:23:50.140 2 DEBUG oslo_concurrency.processutils [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:23:50 np0005486808 nova_compute[259627]: 2025-10-14 09:23:50.201 2 INFO nova.virt.libvirt.driver [None req-e33a79ed-118a-443f-b510-485e1e20e1c4 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Snapshot image upload complete#033[00m
Oct 14 05:23:50 np0005486808 nova_compute[259627]: 2025-10-14 09:23:50.202 2 INFO nova.compute.manager [None req-e33a79ed-118a-443f-b510-485e1e20e1c4 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Took 4.98 seconds to snapshot the instance on the hypervisor.#033[00m
Oct 14 05:23:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2116: 305 pgs: 305 active+clean; 447 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 106 op/s
Oct 14 05:23:50 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:23:50 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3846297174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:23:50 np0005486808 nova_compute[259627]: 2025-10-14 09:23:50.621 2 DEBUG oslo_concurrency.processutils [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:23:50 np0005486808 nova_compute[259627]: 2025-10-14 09:23:50.627 2 DEBUG nova.compute.provider_tree [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:23:50 np0005486808 nova_compute[259627]: 2025-10-14 09:23:50.653 2 DEBUG nova.scheduler.client.report [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:23:50 np0005486808 nova_compute[259627]: 2025-10-14 09:23:50.680 2 DEBUG oslo_concurrency.lockutils [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:23:50 np0005486808 nova_compute[259627]: 2025-10-14 09:23:50.718 2 INFO nova.scheduler.client.report [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Deleted allocations for instance 060db45d-e2f9-4bf6-bcc0-c72e479bfae1#033[00m
Oct 14 05:23:50 np0005486808 nova_compute[259627]: 2025-10-14 09:23:50.813 2 DEBUG oslo_concurrency.lockutils [None req-51be574b-75a3-4ba0-bc49-e26d65b350ed c3638538fa6347dc95b6e30735bf0e83 9afc0bee75634a4cb284babbfba8d601 - - default default] Lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.010s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:23:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:50.840 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:e2:5d 10.100.0.18 10.100.0.2 10.100.0.34'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-17206a37-8263-4403-aaa6-3b6fe9255608', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17206a37-8263-4403-aaa6-3b6fe9255608', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f46887bb96d483788cde4e37af9f799', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf824143-865f-4777-be8c-e192803f85fe, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=36d9c2d2-c941-4fd1-a63a-9e6fee3a0d20) old=Port_Binding(mac=['fa:16:3e:63:e2:5d 10.100.0.18 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-17206a37-8263-4403-aaa6-3b6fe9255608', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17206a37-8263-4403-aaa6-3b6fe9255608', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f46887bb96d483788cde4e37af9f799', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:23:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:50.841 162547 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 36d9c2d2-c941-4fd1-a63a-9e6fee3a0d20 in datapath 17206a37-8263-4403-aaa6-3b6fe9255608 updated#033[00m
Oct 14 05:23:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:50.843 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 17206a37-8263-4403-aaa6-3b6fe9255608, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:23:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:50.844 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8806db02-2098-4536-8570-2ee907bca734]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:50 np0005486808 nova_compute[259627]: 2025-10-14 09:23:50.949 2 DEBUG nova.compute.manager [req-73c2de8d-3453-4c28-800c-14f350f93bc1 req-058dca5c-bb8a-4dad-9e19-70649b89f709 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Received event network-vif-plugged-aff3a259-5908-4491-83a2-9aa0430d46e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:23:50 np0005486808 nova_compute[259627]: 2025-10-14 09:23:50.950 2 DEBUG oslo_concurrency.lockutils [req-73c2de8d-3453-4c28-800c-14f350f93bc1 req-058dca5c-bb8a-4dad-9e19-70649b89f709 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:23:50 np0005486808 nova_compute[259627]: 2025-10-14 09:23:50.951 2 DEBUG oslo_concurrency.lockutils [req-73c2de8d-3453-4c28-800c-14f350f93bc1 req-058dca5c-bb8a-4dad-9e19-70649b89f709 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:23:50 np0005486808 nova_compute[259627]: 2025-10-14 09:23:50.951 2 DEBUG oslo_concurrency.lockutils [req-73c2de8d-3453-4c28-800c-14f350f93bc1 req-058dca5c-bb8a-4dad-9e19-70649b89f709 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "060db45d-e2f9-4bf6-bcc0-c72e479bfae1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:23:50 np0005486808 nova_compute[259627]: 2025-10-14 09:23:50.951 2 DEBUG nova.compute.manager [req-73c2de8d-3453-4c28-800c-14f350f93bc1 req-058dca5c-bb8a-4dad-9e19-70649b89f709 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] No waiting events found dispatching network-vif-plugged-aff3a259-5908-4491-83a2-9aa0430d46e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:23:50 np0005486808 nova_compute[259627]: 2025-10-14 09:23:50.952 2 WARNING nova.compute.manager [req-73c2de8d-3453-4c28-800c-14f350f93bc1 req-058dca5c-bb8a-4dad-9e19-70649b89f709 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Received unexpected event network-vif-plugged-aff3a259-5908-4491-83a2-9aa0430d46e0 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:23:50 np0005486808 nova_compute[259627]: 2025-10-14 09:23:50.952 2 DEBUG nova.compute.manager [req-73c2de8d-3453-4c28-800c-14f350f93bc1 req-058dca5c-bb8a-4dad-9e19-70649b89f709 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Received event network-vif-deleted-aff3a259-5908-4491-83a2-9aa0430d46e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:23:51 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e287 do_prune osdmap full prune enabled
Oct 14 05:23:51 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e288 e288: 3 total, 3 up, 3 in
Oct 14 05:23:51 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e288: 3 total, 3 up, 3 in
Oct 14 05:23:51 np0005486808 nova_compute[259627]: 2025-10-14 09:23:51.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:23:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2118: 305 pgs: 305 active+clean; 478 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.5 MiB/s rd, 16 MiB/s wr, 271 op/s
Oct 14 05:23:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:23:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e288 do_prune osdmap full prune enabled
Oct 14 05:23:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e289 e289: 3 total, 3 up, 3 in
Oct 14 05:23:52 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e289: 3 total, 3 up, 3 in
Oct 14 05:23:53 np0005486808 nova_compute[259627]: 2025-10-14 09:23:53.053 2 DEBUG nova.compute.manager [req-2564feb9-e8cc-4819-b47e-250be2be7cc9 req-2e8592df-85ac-444b-896c-a65405ecd002 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Received event network-changed-169fcf13-d616-47ef-8558-362361f16f03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:23:53 np0005486808 nova_compute[259627]: 2025-10-14 09:23:53.053 2 DEBUG nova.compute.manager [req-2564feb9-e8cc-4819-b47e-250be2be7cc9 req-2e8592df-85ac-444b-896c-a65405ecd002 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Refreshing instance network info cache due to event network-changed-169fcf13-d616-47ef-8558-362361f16f03. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:23:53 np0005486808 nova_compute[259627]: 2025-10-14 09:23:53.054 2 DEBUG oslo_concurrency.lockutils [req-2564feb9-e8cc-4819-b47e-250be2be7cc9 req-2e8592df-85ac-444b-896c-a65405ecd002 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-8f5e63fb-23c2-4f15-acca-bc5fbeb0729b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:23:53 np0005486808 nova_compute[259627]: 2025-10-14 09:23:53.054 2 DEBUG oslo_concurrency.lockutils [req-2564feb9-e8cc-4819-b47e-250be2be7cc9 req-2e8592df-85ac-444b-896c-a65405ecd002 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-8f5e63fb-23c2-4f15-acca-bc5fbeb0729b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:23:53 np0005486808 nova_compute[259627]: 2025-10-14 09:23:53.055 2 DEBUG nova.network.neutron [req-2564feb9-e8cc-4819-b47e-250be2be7cc9 req-2e8592df-85ac-444b-896c-a65405ecd002 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Refreshing network info cache for port 169fcf13-d616-47ef-8558-362361f16f03 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:23:53 np0005486808 nova_compute[259627]: 2025-10-14 09:23:53.117 2 DEBUG oslo_concurrency.lockutils [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Acquiring lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:23:53 np0005486808 nova_compute[259627]: 2025-10-14 09:23:53.118 2 DEBUG oslo_concurrency.lockutils [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:23:53 np0005486808 nova_compute[259627]: 2025-10-14 09:23:53.118 2 DEBUG oslo_concurrency.lockutils [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Acquiring lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:23:53 np0005486808 nova_compute[259627]: 2025-10-14 09:23:53.119 2 DEBUG oslo_concurrency.lockutils [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:23:53 np0005486808 nova_compute[259627]: 2025-10-14 09:23:53.119 2 DEBUG oslo_concurrency.lockutils [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:23:53 np0005486808 nova_compute[259627]: 2025-10-14 09:23:53.121 2 INFO nova.compute.manager [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Terminating instance#033[00m
Oct 14 05:23:53 np0005486808 nova_compute[259627]: 2025-10-14 09:23:53.122 2 DEBUG nova.compute.manager [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:23:53 np0005486808 nova_compute[259627]: 2025-10-14 09:23:53.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:53 np0005486808 kernel: tap169fcf13-d6 (unregistering): left promiscuous mode
Oct 14 05:23:53 np0005486808 NetworkManager[44885]: <info>  [1760433833.1780] device (tap169fcf13-d6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:23:53 np0005486808 nova_compute[259627]: 2025-10-14 09:23:53.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:53 np0005486808 ovn_controller[152662]: 2025-10-14T09:23:53Z|01294|binding|INFO|Releasing lport 169fcf13-d616-47ef-8558-362361f16f03 from this chassis (sb_readonly=0)
Oct 14 05:23:53 np0005486808 ovn_controller[152662]: 2025-10-14T09:23:53Z|01295|binding|INFO|Setting lport 169fcf13-d616-47ef-8558-362361f16f03 down in Southbound
Oct 14 05:23:53 np0005486808 ovn_controller[152662]: 2025-10-14T09:23:53Z|01296|binding|INFO|Removing iface tap169fcf13-d6 ovn-installed in OVS
Oct 14 05:23:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:53.205 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:4e:45 10.100.0.4'], port_security=['fa:16:3e:46:4e:45 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '8f5e63fb-23c2-4f15-acca-bc5fbeb0729b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4fc37d66-193b-4ab7-80e3-58e26dc76e47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4112adc84657452aa0e117ac5999054a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b7a53172-9b5e-49ee-bb03-aeca0d4a8fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=add5cdec-6440-4df9-aea8-21659d7bab06, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=169fcf13-d616-47ef-8558-362361f16f03) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:23:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:53.207 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 169fcf13-d616-47ef-8558-362361f16f03 in datapath 4fc37d66-193b-4ab7-80e3-58e26dc76e47 unbound from our chassis#033[00m
Oct 14 05:23:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:53.211 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4fc37d66-193b-4ab7-80e3-58e26dc76e47#033[00m
Oct 14 05:23:53 np0005486808 nova_compute[259627]: 2025-10-14 09:23:53.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:53.236 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3fb40a17-42bd-4e58-ae0b-abfb07c2f6c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:53 np0005486808 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d00000079.scope: Deactivated successfully.
Oct 14 05:23:53 np0005486808 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d00000079.scope: Consumed 16.763s CPU time.
Oct 14 05:23:53 np0005486808 systemd-machined[214636]: Machine qemu-154-instance-00000079 terminated.
Oct 14 05:23:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:53.274 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[7361a15d-bd8a-4bb0-b01e-7b43d7536dba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:53.277 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1ada0a64-6dfd-465a-9c35-5ad6027b7ec2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:53.303 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d7fe32a5-b2e3-4d48-a450-58480da21e08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:53.321 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bebd7151-d253-44cb-a513-62a1d13fbc9b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4fc37d66-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:00:1e:16'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 361], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 752720, 'reachable_time': 16651, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382115, 'error': None, 'target': 'ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:53.338 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[377c66f8-775f-49a5-831e-d57d59f7d82f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4fc37d66-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 752733, 'tstamp': 752733}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 382116, 'error': None, 'target': 'ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4fc37d66-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 752737, 'tstamp': 752737}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 382116, 'error': None, 'target': 'ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:53.340 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fc37d66-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:23:53 np0005486808 nova_compute[259627]: 2025-10-14 09:23:53.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:53 np0005486808 nova_compute[259627]: 2025-10-14 09:23:53.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:53.350 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4fc37d66-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:23:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:53.351 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:23:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:53.351 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4fc37d66-10, col_values=(('external_ids', {'iface-id': '04719e6c-d55b-4ad7-a45c-52e6e59101ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:23:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:53.351 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:23:53 np0005486808 nova_compute[259627]: 2025-10-14 09:23:53.353 2 INFO nova.virt.libvirt.driver [-] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Instance destroyed successfully.#033[00m
Oct 14 05:23:53 np0005486808 nova_compute[259627]: 2025-10-14 09:23:53.353 2 DEBUG nova.objects.instance [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lazy-loading 'resources' on Instance uuid 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:23:53 np0005486808 nova_compute[259627]: 2025-10-14 09:23:53.378 2 DEBUG nova.virt.libvirt.vif [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:22:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1820866852',display_name='tempest-TestSnapshotPattern-server-1820866852',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1820866852',id=121,image_ref='d594f64a-1811-45da-92c9-566107aad012',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG/LFMmap4OB0txXt7JPmRqMoCntTC6tPJ8UMtOaZfJpNHOk397YhzNpC3bp7HY+1PlJdNNwm/PwDw4vAnX0LcroehUvKxClr2ruduakP/QJ599/XDk5NHuriBpOA/llJg==',key_name='tempest-TestSnapshotPattern-1883373955',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:23:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4112adc84657452aa0e117ac5999054a',ramdisk_id='',reservation_id='r-z8e26amr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='6810b29b-088f-441b-8a6a-02eaafada0c5',image_min_disk='1',image_min_ram='0',image_owner_id='4112adc84657452aa0e117ac5999054a',image_owner_project_name='tempest-TestSnapshotPattern-70687399',image_owner_user_name='tempest-TestSnapshotPattern-70687399-project-member',image_user_id='f232ab535af04111bf570569aa293116',image_version='8.0',owner_project_name='tempest-TestSnapshotPattern-70687399',owner_user_name='tempest-TestSnapshotPattern-70687399-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:23:50Z,user_data=None,user_id='f232ab535af04111bf570569aa293116',uuid=8f5e63fb-23c2-4f15-acca-bc5fbeb0729b,vcpu_model=<?>,vc
pus=1,vm_mode=None,vm_state='active') vif={"id": "169fcf13-d616-47ef-8558-362361f16f03", "address": "fa:16:3e:46:4e:45", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap169fcf13-d6", "ovs_interfaceid": "169fcf13-d616-47ef-8558-362361f16f03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:23:53 np0005486808 nova_compute[259627]: 2025-10-14 09:23:53.378 2 DEBUG nova.network.os_vif_util [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Converting VIF {"id": "169fcf13-d616-47ef-8558-362361f16f03", "address": "fa:16:3e:46:4e:45", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap169fcf13-d6", "ovs_interfaceid": "169fcf13-d616-47ef-8558-362361f16f03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:23:53 np0005486808 nova_compute[259627]: 2025-10-14 09:23:53.379 2 DEBUG nova.network.os_vif_util [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:46:4e:45,bridge_name='br-int',has_traffic_filtering=True,id=169fcf13-d616-47ef-8558-362361f16f03,network=Network(4fc37d66-193b-4ab7-80e3-58e26dc76e47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap169fcf13-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:23:53 np0005486808 nova_compute[259627]: 2025-10-14 09:23:53.379 2 DEBUG os_vif [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:46:4e:45,bridge_name='br-int',has_traffic_filtering=True,id=169fcf13-d616-47ef-8558-362361f16f03,network=Network(4fc37d66-193b-4ab7-80e3-58e26dc76e47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap169fcf13-d6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:23:53 np0005486808 nova_compute[259627]: 2025-10-14 09:23:53.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:53 np0005486808 nova_compute[259627]: 2025-10-14 09:23:53.381 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap169fcf13-d6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:23:53 np0005486808 nova_compute[259627]: 2025-10-14 09:23:53.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:53 np0005486808 nova_compute[259627]: 2025-10-14 09:23:53.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:53 np0005486808 nova_compute[259627]: 2025-10-14 09:23:53.385 2 INFO os_vif [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:46:4e:45,bridge_name='br-int',has_traffic_filtering=True,id=169fcf13-d616-47ef-8558-362361f16f03,network=Network(4fc37d66-193b-4ab7-80e3-58e26dc76e47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap169fcf13-d6')#033[00m
Oct 14 05:23:53 np0005486808 nova_compute[259627]: 2025-10-14 09:23:53.549 2 DEBUG nova.compute.manager [req-74080d83-54ad-440b-8009-21964404e068 req-9cb61210-4e7d-42b0-aeb9-7e65d015911d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Received event network-vif-unplugged-169fcf13-d616-47ef-8558-362361f16f03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:23:53 np0005486808 nova_compute[259627]: 2025-10-14 09:23:53.550 2 DEBUG oslo_concurrency.lockutils [req-74080d83-54ad-440b-8009-21964404e068 req-9cb61210-4e7d-42b0-aeb9-7e65d015911d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:23:53 np0005486808 nova_compute[259627]: 2025-10-14 09:23:53.550 2 DEBUG oslo_concurrency.lockutils [req-74080d83-54ad-440b-8009-21964404e068 req-9cb61210-4e7d-42b0-aeb9-7e65d015911d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:23:53 np0005486808 nova_compute[259627]: 2025-10-14 09:23:53.551 2 DEBUG oslo_concurrency.lockutils [req-74080d83-54ad-440b-8009-21964404e068 req-9cb61210-4e7d-42b0-aeb9-7e65d015911d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:23:53 np0005486808 nova_compute[259627]: 2025-10-14 09:23:53.551 2 DEBUG nova.compute.manager [req-74080d83-54ad-440b-8009-21964404e068 req-9cb61210-4e7d-42b0-aeb9-7e65d015911d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] No waiting events found dispatching network-vif-unplugged-169fcf13-d616-47ef-8558-362361f16f03 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:23:53 np0005486808 nova_compute[259627]: 2025-10-14 09:23:53.552 2 DEBUG nova.compute.manager [req-74080d83-54ad-440b-8009-21964404e068 req-9cb61210-4e7d-42b0-aeb9-7e65d015911d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Received event network-vif-unplugged-169fcf13-d616-47ef-8558-362361f16f03 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:23:53 np0005486808 nova_compute[259627]: 2025-10-14 09:23:53.765 2 INFO nova.virt.libvirt.driver [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Deleting instance files /var/lib/nova/instances/8f5e63fb-23c2-4f15-acca-bc5fbeb0729b_del#033[00m
Oct 14 05:23:53 np0005486808 nova_compute[259627]: 2025-10-14 09:23:53.767 2 INFO nova.virt.libvirt.driver [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Deletion of /var/lib/nova/instances/8f5e63fb-23c2-4f15-acca-bc5fbeb0729b_del complete#033[00m
Oct 14 05:23:53 np0005486808 nova_compute[259627]: 2025-10-14 09:23:53.821 2 INFO nova.compute.manager [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Took 0.70 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:23:53 np0005486808 nova_compute[259627]: 2025-10-14 09:23:53.822 2 DEBUG oslo.service.loopingcall [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:23:53 np0005486808 nova_compute[259627]: 2025-10-14 09:23:53.823 2 DEBUG nova.compute.manager [-] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:23:53 np0005486808 nova_compute[259627]: 2025-10-14 09:23:53.823 2 DEBUG nova.network.neutron [-] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:23:53 np0005486808 nova_compute[259627]: 2025-10-14 09:23:53.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:23:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2120: 305 pgs: 305 active+clean; 478 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 15 MiB/s wr, 252 op/s
Oct 14 05:23:54 np0005486808 nova_compute[259627]: 2025-10-14 09:23:54.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:55 np0005486808 nova_compute[259627]: 2025-10-14 09:23:55.480 2 DEBUG nova.network.neutron [req-2564feb9-e8cc-4819-b47e-250be2be7cc9 req-2e8592df-85ac-444b-896c-a65405ecd002 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Updated VIF entry in instance network info cache for port 169fcf13-d616-47ef-8558-362361f16f03. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:23:55 np0005486808 nova_compute[259627]: 2025-10-14 09:23:55.481 2 DEBUG nova.network.neutron [req-2564feb9-e8cc-4819-b47e-250be2be7cc9 req-2e8592df-85ac-444b-896c-a65405ecd002 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Updating instance_info_cache with network_info: [{"id": "169fcf13-d616-47ef-8558-362361f16f03", "address": "fa:16:3e:46:4e:45", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap169fcf13-d6", "ovs_interfaceid": "169fcf13-d616-47ef-8558-362361f16f03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:23:55 np0005486808 nova_compute[259627]: 2025-10-14 09:23:55.506 2 DEBUG oslo_concurrency.lockutils [req-2564feb9-e8cc-4819-b47e-250be2be7cc9 req-2e8592df-85ac-444b-896c-a65405ecd002 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-8f5e63fb-23c2-4f15-acca-bc5fbeb0729b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:23:55 np0005486808 nova_compute[259627]: 2025-10-14 09:23:55.516 2 DEBUG nova.network.neutron [-] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:23:55 np0005486808 nova_compute[259627]: 2025-10-14 09:23:55.536 2 INFO nova.compute.manager [-] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Took 1.71 seconds to deallocate network for instance.#033[00m
Oct 14 05:23:55 np0005486808 nova_compute[259627]: 2025-10-14 09:23:55.595 2 DEBUG oslo_concurrency.lockutils [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:23:55 np0005486808 nova_compute[259627]: 2025-10-14 09:23:55.595 2 DEBUG oslo_concurrency.lockutils [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:23:55 np0005486808 nova_compute[259627]: 2025-10-14 09:23:55.619 2 DEBUG nova.compute.manager [req-c7e402a0-c8e6-439f-887a-0c6eb5939b89 req-380a5c73-1e38-4a29-920f-2e2c44419681 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Received event network-vif-deleted-169fcf13-d616-47ef-8558-362361f16f03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:23:55 np0005486808 nova_compute[259627]: 2025-10-14 09:23:55.652 2 DEBUG nova.compute.manager [req-fde2c328-7df5-4409-9929-106f80044f52 req-4512236a-ea4c-447e-94f9-fc8fcbf94034 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Received event network-vif-plugged-169fcf13-d616-47ef-8558-362361f16f03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:23:55 np0005486808 nova_compute[259627]: 2025-10-14 09:23:55.653 2 DEBUG oslo_concurrency.lockutils [req-fde2c328-7df5-4409-9929-106f80044f52 req-4512236a-ea4c-447e-94f9-fc8fcbf94034 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:23:55 np0005486808 nova_compute[259627]: 2025-10-14 09:23:55.653 2 DEBUG oslo_concurrency.lockutils [req-fde2c328-7df5-4409-9929-106f80044f52 req-4512236a-ea4c-447e-94f9-fc8fcbf94034 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:23:55 np0005486808 nova_compute[259627]: 2025-10-14 09:23:55.653 2 DEBUG oslo_concurrency.lockutils [req-fde2c328-7df5-4409-9929-106f80044f52 req-4512236a-ea4c-447e-94f9-fc8fcbf94034 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:23:55 np0005486808 nova_compute[259627]: 2025-10-14 09:23:55.653 2 DEBUG nova.compute.manager [req-fde2c328-7df5-4409-9929-106f80044f52 req-4512236a-ea4c-447e-94f9-fc8fcbf94034 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] No waiting events found dispatching network-vif-plugged-169fcf13-d616-47ef-8558-362361f16f03 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:23:55 np0005486808 nova_compute[259627]: 2025-10-14 09:23:55.654 2 WARNING nova.compute.manager [req-fde2c328-7df5-4409-9929-106f80044f52 req-4512236a-ea4c-447e-94f9-fc8fcbf94034 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Received unexpected event network-vif-plugged-169fcf13-d616-47ef-8558-362361f16f03 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:23:55 np0005486808 nova_compute[259627]: 2025-10-14 09:23:55.731 2 DEBUG oslo_concurrency.processutils [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:23:56 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:23:56 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/943956470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:23:56 np0005486808 nova_compute[259627]: 2025-10-14 09:23:56.210 2 DEBUG oslo_concurrency.processutils [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:23:56 np0005486808 nova_compute[259627]: 2025-10-14 09:23:56.219 2 DEBUG nova.compute.provider_tree [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:23:56 np0005486808 nova_compute[259627]: 2025-10-14 09:23:56.235 2 DEBUG nova.scheduler.client.report [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:23:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2121: 305 pgs: 305 active+clean; 358 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 12 MiB/s wr, 284 op/s
Oct 14 05:23:56 np0005486808 nova_compute[259627]: 2025-10-14 09:23:56.259 2 DEBUG oslo_concurrency.lockutils [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:23:56 np0005486808 nova_compute[259627]: 2025-10-14 09:23:56.288 2 INFO nova.scheduler.client.report [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Deleted allocations for instance 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b#033[00m
Oct 14 05:23:56 np0005486808 nova_compute[259627]: 2025-10-14 09:23:56.386 2 DEBUG oslo_concurrency.lockutils [None req-1d42d78c-7259-447e-bd71-bbd1f4a1e84a f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "8f5e63fb-23c2-4f15-acca-bc5fbeb0729b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.268s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:23:57 np0005486808 ovn_controller[152662]: 2025-10-14T09:23:57Z|01297|binding|INFO|Releasing lport 16a7cbd0-b25e-4461-8725-c92979b01f53 from this chassis (sb_readonly=0)
Oct 14 05:23:57 np0005486808 ovn_controller[152662]: 2025-10-14T09:23:57Z|01298|binding|INFO|Releasing lport 7bf9894c-4dab-4178-94d9-e45a9e10602a from this chassis (sb_readonly=0)
Oct 14 05:23:57 np0005486808 ovn_controller[152662]: 2025-10-14T09:23:57Z|01299|binding|INFO|Releasing lport 04719e6c-d55b-4ad7-a45c-52e6e59101ab from this chassis (sb_readonly=0)
Oct 14 05:23:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e289 do_prune osdmap full prune enabled
Oct 14 05:23:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e290 e290: 3 total, 3 up, 3 in
Oct 14 05:23:57 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e290: 3 total, 3 up, 3 in
Oct 14 05:23:57 np0005486808 nova_compute[259627]: 2025-10-14 09:23:57.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:23:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e290 do_prune osdmap full prune enabled
Oct 14 05:23:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e291 e291: 3 total, 3 up, 3 in
Oct 14 05:23:57 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e291: 3 total, 3 up, 3 in
Oct 14 05:23:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2124: 305 pgs: 305 active+clean; 358 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 440 KiB/s rd, 384 KiB/s wr, 105 op/s
Oct 14 05:23:58 np0005486808 nova_compute[259627]: 2025-10-14 09:23:58.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:58 np0005486808 nova_compute[259627]: 2025-10-14 09:23:58.990 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:23:58 np0005486808 nova_compute[259627]: 2025-10-14 09:23:58.990 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 14 05:23:59 np0005486808 nova_compute[259627]: 2025-10-14 09:23:59.018 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 14 05:23:59 np0005486808 nova_compute[259627]: 2025-10-14 09:23:59.280 2 DEBUG nova.compute.manager [req-7ba800f0-03c3-4a00-92d2-e0c5adee5462 req-5c3869ba-c9b4-4018-9488-2ad8950f66da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Received event network-changed-6d5e10b7-5c07-4389-8916-e7c277cb2c88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:23:59 np0005486808 nova_compute[259627]: 2025-10-14 09:23:59.281 2 DEBUG nova.compute.manager [req-7ba800f0-03c3-4a00-92d2-e0c5adee5462 req-5c3869ba-c9b4-4018-9488-2ad8950f66da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Refreshing instance network info cache due to event network-changed-6d5e10b7-5c07-4389-8916-e7c277cb2c88. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:23:59 np0005486808 nova_compute[259627]: 2025-10-14 09:23:59.281 2 DEBUG oslo_concurrency.lockutils [req-7ba800f0-03c3-4a00-92d2-e0c5adee5462 req-5c3869ba-c9b4-4018-9488-2ad8950f66da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-6810b29b-088f-441b-8a6a-02eaafada0c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:23:59 np0005486808 nova_compute[259627]: 2025-10-14 09:23:59.281 2 DEBUG oslo_concurrency.lockutils [req-7ba800f0-03c3-4a00-92d2-e0c5adee5462 req-5c3869ba-c9b4-4018-9488-2ad8950f66da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-6810b29b-088f-441b-8a6a-02eaafada0c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:23:59 np0005486808 nova_compute[259627]: 2025-10-14 09:23:59.281 2 DEBUG nova.network.neutron [req-7ba800f0-03c3-4a00-92d2-e0c5adee5462 req-5c3869ba-c9b4-4018-9488-2ad8950f66da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Refreshing network info cache for port 6d5e10b7-5c07-4389-8916-e7c277cb2c88 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:23:59 np0005486808 nova_compute[259627]: 2025-10-14 09:23:59.356 2 DEBUG oslo_concurrency.lockutils [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Acquiring lock "6810b29b-088f-441b-8a6a-02eaafada0c5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:23:59 np0005486808 nova_compute[259627]: 2025-10-14 09:23:59.356 2 DEBUG oslo_concurrency.lockutils [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "6810b29b-088f-441b-8a6a-02eaafada0c5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:23:59 np0005486808 nova_compute[259627]: 2025-10-14 09:23:59.357 2 DEBUG oslo_concurrency.lockutils [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Acquiring lock "6810b29b-088f-441b-8a6a-02eaafada0c5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:23:59 np0005486808 nova_compute[259627]: 2025-10-14 09:23:59.358 2 DEBUG oslo_concurrency.lockutils [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "6810b29b-088f-441b-8a6a-02eaafada0c5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:23:59 np0005486808 nova_compute[259627]: 2025-10-14 09:23:59.358 2 DEBUG oslo_concurrency.lockutils [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "6810b29b-088f-441b-8a6a-02eaafada0c5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:23:59 np0005486808 nova_compute[259627]: 2025-10-14 09:23:59.360 2 INFO nova.compute.manager [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Terminating instance#033[00m
Oct 14 05:23:59 np0005486808 nova_compute[259627]: 2025-10-14 09:23:59.362 2 DEBUG nova.compute.manager [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:23:59 np0005486808 kernel: tap6d5e10b7-5c (unregistering): left promiscuous mode
Oct 14 05:23:59 np0005486808 NetworkManager[44885]: <info>  [1760433839.4171] device (tap6d5e10b7-5c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:23:59 np0005486808 nova_compute[259627]: 2025-10-14 09:23:59.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:59 np0005486808 ovn_controller[152662]: 2025-10-14T09:23:59Z|01300|binding|INFO|Releasing lport 6d5e10b7-5c07-4389-8916-e7c277cb2c88 from this chassis (sb_readonly=0)
Oct 14 05:23:59 np0005486808 ovn_controller[152662]: 2025-10-14T09:23:59Z|01301|binding|INFO|Setting lport 6d5e10b7-5c07-4389-8916-e7c277cb2c88 down in Southbound
Oct 14 05:23:59 np0005486808 ovn_controller[152662]: 2025-10-14T09:23:59Z|01302|binding|INFO|Removing iface tap6d5e10b7-5c ovn-installed in OVS
Oct 14 05:23:59 np0005486808 nova_compute[259627]: 2025-10-14 09:23:59.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:59.442 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:22:e5 10.100.0.12'], port_security=['fa:16:3e:7c:22:e5 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '6810b29b-088f-441b-8a6a-02eaafada0c5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4fc37d66-193b-4ab7-80e3-58e26dc76e47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4112adc84657452aa0e117ac5999054a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b7a53172-9b5e-49ee-bb03-aeca0d4a8fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=add5cdec-6440-4df9-aea8-21659d7bab06, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=6d5e10b7-5c07-4389-8916-e7c277cb2c88) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:23:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:59.443 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 6d5e10b7-5c07-4389-8916-e7c277cb2c88 in datapath 4fc37d66-193b-4ab7-80e3-58e26dc76e47 unbound from our chassis#033[00m
Oct 14 05:23:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:59.447 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4fc37d66-193b-4ab7-80e3-58e26dc76e47, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:23:59 np0005486808 nova_compute[259627]: 2025-10-14 09:23:59.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:59.448 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a87801fa-0314-4e00-acd6-d32c9d85475a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:59.451 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47 namespace which is not needed anymore#033[00m
Oct 14 05:23:59 np0005486808 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d00000077.scope: Deactivated successfully.
Oct 14 05:23:59 np0005486808 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d00000077.scope: Consumed 15.782s CPU time.
Oct 14 05:23:59 np0005486808 systemd-machined[214636]: Machine qemu-151-instance-00000077 terminated.
Oct 14 05:23:59 np0005486808 nova_compute[259627]: 2025-10-14 09:23:59.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:59 np0005486808 neutron-haproxy-ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47[378994]: [NOTICE]   (378998) : haproxy version is 2.8.14-c23fe91
Oct 14 05:23:59 np0005486808 neutron-haproxy-ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47[378994]: [NOTICE]   (378998) : path to executable is /usr/sbin/haproxy
Oct 14 05:23:59 np0005486808 neutron-haproxy-ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47[378994]: [WARNING]  (378998) : Exiting Master process...
Oct 14 05:23:59 np0005486808 neutron-haproxy-ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47[378994]: [WARNING]  (378998) : Exiting Master process...
Oct 14 05:23:59 np0005486808 neutron-haproxy-ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47[378994]: [ALERT]    (378998) : Current worker (379000) exited with code 143 (Terminated)
Oct 14 05:23:59 np0005486808 neutron-haproxy-ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47[378994]: [WARNING]  (378998) : All workers exited. Exiting... (0)
Oct 14 05:23:59 np0005486808 systemd[1]: libpod-a80be19c082f215f5d033f85196a94ab27665b6b2c853494b96bead108109a8e.scope: Deactivated successfully.
Oct 14 05:23:59 np0005486808 podman[382193]: 2025-10-14 09:23:59.606201608 +0000 UTC m=+0.053694708 container died a80be19c082f215f5d033f85196a94ab27665b6b2c853494b96bead108109a8e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 14 05:23:59 np0005486808 nova_compute[259627]: 2025-10-14 09:23:59.605 2 INFO nova.virt.libvirt.driver [-] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Instance destroyed successfully.#033[00m
Oct 14 05:23:59 np0005486808 nova_compute[259627]: 2025-10-14 09:23:59.606 2 DEBUG nova.objects.instance [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lazy-loading 'resources' on Instance uuid 6810b29b-088f-441b-8a6a-02eaafada0c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:23:59 np0005486808 nova_compute[259627]: 2025-10-14 09:23:59.637 2 DEBUG nova.virt.libvirt.vif [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:22:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1951271072',display_name='tempest-TestSnapshotPattern-server-1951271072',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1951271072',id=119,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG/LFMmap4OB0txXt7JPmRqMoCntTC6tPJ8UMtOaZfJpNHOk397YhzNpC3bp7HY+1PlJdNNwm/PwDw4vAnX0LcroehUvKxClr2ruduakP/QJ599/XDk5NHuriBpOA/llJg==',key_name='tempest-TestSnapshotPattern-1883373955',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:22:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4112adc84657452aa0e117ac5999054a',ramdisk_id='',reservation_id='r-51g4schc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSnapshotPattern-70687399',owner_user_name='tempest-TestSnapshotPattern-70687399-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:22:56Z,user_data=None,user_id='f232ab535af04111bf570569aa293116',uuid=6810b29b-088f-441b-8a6a-02eaafada0c5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "address": "fa:16:3e:7c:22:e5", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d5e10b7-5c", "ovs_interfaceid": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:23:59 np0005486808 nova_compute[259627]: 2025-10-14 09:23:59.637 2 DEBUG nova.network.os_vif_util [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Converting VIF {"id": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "address": "fa:16:3e:7c:22:e5", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d5e10b7-5c", "ovs_interfaceid": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:23:59 np0005486808 nova_compute[259627]: 2025-10-14 09:23:59.639 2 DEBUG nova.network.os_vif_util [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7c:22:e5,bridge_name='br-int',has_traffic_filtering=True,id=6d5e10b7-5c07-4389-8916-e7c277cb2c88,network=Network(4fc37d66-193b-4ab7-80e3-58e26dc76e47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d5e10b7-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:23:59 np0005486808 nova_compute[259627]: 2025-10-14 09:23:59.639 2 DEBUG os_vif [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7c:22:e5,bridge_name='br-int',has_traffic_filtering=True,id=6d5e10b7-5c07-4389-8916-e7c277cb2c88,network=Network(4fc37d66-193b-4ab7-80e3-58e26dc76e47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d5e10b7-5c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:23:59 np0005486808 nova_compute[259627]: 2025-10-14 09:23:59.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:59 np0005486808 nova_compute[259627]: 2025-10-14 09:23:59.645 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d5e10b7-5c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:23:59 np0005486808 nova_compute[259627]: 2025-10-14 09:23:59.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:59 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a80be19c082f215f5d033f85196a94ab27665b6b2c853494b96bead108109a8e-userdata-shm.mount: Deactivated successfully.
Oct 14 05:23:59 np0005486808 nova_compute[259627]: 2025-10-14 09:23:59.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:59 np0005486808 systemd[1]: var-lib-containers-storage-overlay-de954708f8f3fdfb0e31660e7186517bbce587741a2e68b98ff8eba35c2ae10b-merged.mount: Deactivated successfully.
Oct 14 05:23:59 np0005486808 nova_compute[259627]: 2025-10-14 09:23:59.658 2 INFO os_vif [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7c:22:e5,bridge_name='br-int',has_traffic_filtering=True,id=6d5e10b7-5c07-4389-8916-e7c277cb2c88,network=Network(4fc37d66-193b-4ab7-80e3-58e26dc76e47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d5e10b7-5c')#033[00m
Oct 14 05:23:59 np0005486808 podman[382193]: 2025-10-14 09:23:59.670789674 +0000 UTC m=+0.118282784 container cleanup a80be19c082f215f5d033f85196a94ab27665b6b2c853494b96bead108109a8e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2)
Oct 14 05:23:59 np0005486808 systemd[1]: libpod-conmon-a80be19c082f215f5d033f85196a94ab27665b6b2c853494b96bead108109a8e.scope: Deactivated successfully.
Oct 14 05:23:59 np0005486808 podman[382248]: 2025-10-14 09:23:59.755872486 +0000 UTC m=+0.052947589 container remove a80be19c082f215f5d033f85196a94ab27665b6b2c853494b96bead108109a8e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:23:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:59.765 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6fe2199c-87ec-4aef-82f4-215c60fe4c1a]: (4, ('Tue Oct 14 09:23:59 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47 (a80be19c082f215f5d033f85196a94ab27665b6b2c853494b96bead108109a8e)\na80be19c082f215f5d033f85196a94ab27665b6b2c853494b96bead108109a8e\nTue Oct 14 09:23:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47 (a80be19c082f215f5d033f85196a94ab27665b6b2c853494b96bead108109a8e)\na80be19c082f215f5d033f85196a94ab27665b6b2c853494b96bead108109a8e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:59.767 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c19e78cc-194b-43f2-9064-a8cbc5efa4f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:59.768 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fc37d66-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:23:59 np0005486808 nova_compute[259627]: 2025-10-14 09:23:59.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:59 np0005486808 kernel: tap4fc37d66-10: left promiscuous mode
Oct 14 05:23:59 np0005486808 nova_compute[259627]: 2025-10-14 09:23:59.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:23:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:59.798 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3d184de9-801a-4e06-b3cb-c80b548e2668]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:59.830 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[799d3fd6-be19-4fd5-9c6b-a89ed0f1588b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:59.832 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f16545dd-361f-4c45-846b-92dc5914436d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:59.848 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fbd4ca85-1ea9-4fa7-9ccd-bb27bbeaa603]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 752711, 'reachable_time': 43077, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382266, 'error': None, 'target': 'ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:23:59 np0005486808 systemd[1]: run-netns-ovnmeta\x2d4fc37d66\x2d193b\x2d4ab7\x2d80e3\x2d58e26dc76e47.mount: Deactivated successfully.
Oct 14 05:23:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:59.852 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4fc37d66-193b-4ab7-80e3-58e26dc76e47 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:23:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:23:59.852 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[8da79455-58c9-45c4-bfa0-9df95a99914d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:00 np0005486808 nova_compute[259627]: 2025-10-14 09:24:00.047 2 INFO nova.virt.libvirt.driver [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Deleting instance files /var/lib/nova/instances/6810b29b-088f-441b-8a6a-02eaafada0c5_del#033[00m
Oct 14 05:24:00 np0005486808 nova_compute[259627]: 2025-10-14 09:24:00.047 2 INFO nova.virt.libvirt.driver [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Deletion of /var/lib/nova/instances/6810b29b-088f-441b-8a6a-02eaafada0c5_del complete#033[00m
Oct 14 05:24:00 np0005486808 nova_compute[259627]: 2025-10-14 09:24:00.123 2 INFO nova.compute.manager [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Took 0.76 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:24:00 np0005486808 nova_compute[259627]: 2025-10-14 09:24:00.124 2 DEBUG oslo.service.loopingcall [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:24:00 np0005486808 nova_compute[259627]: 2025-10-14 09:24:00.124 2 DEBUG nova.compute.manager [-] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:24:00 np0005486808 nova_compute[259627]: 2025-10-14 09:24:00.124 2 DEBUG nova.network.neutron [-] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:24:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2125: 305 pgs: 305 active+clean; 328 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 369 KiB/s rd, 317 KiB/s wr, 96 op/s
Oct 14 05:24:01 np0005486808 nova_compute[259627]: 2025-10-14 09:24:01.124 2 DEBUG nova.network.neutron [req-7ba800f0-03c3-4a00-92d2-e0c5adee5462 req-5c3869ba-c9b4-4018-9488-2ad8950f66da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Updated VIF entry in instance network info cache for port 6d5e10b7-5c07-4389-8916-e7c277cb2c88. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:24:01 np0005486808 nova_compute[259627]: 2025-10-14 09:24:01.125 2 DEBUG nova.network.neutron [req-7ba800f0-03c3-4a00-92d2-e0c5adee5462 req-5c3869ba-c9b4-4018-9488-2ad8950f66da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Updating instance_info_cache with network_info: [{"id": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "address": "fa:16:3e:7c:22:e5", "network": {"id": "4fc37d66-193b-4ab7-80e3-58e26dc76e47", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-511692965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4112adc84657452aa0e117ac5999054a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d5e10b7-5c", "ovs_interfaceid": "6d5e10b7-5c07-4389-8916-e7c277cb2c88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:24:01 np0005486808 nova_compute[259627]: 2025-10-14 09:24:01.152 2 DEBUG oslo_concurrency.lockutils [req-7ba800f0-03c3-4a00-92d2-e0c5adee5462 req-5c3869ba-c9b4-4018-9488-2ad8950f66da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-6810b29b-088f-441b-8a6a-02eaafada0c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:24:01 np0005486808 nova_compute[259627]: 2025-10-14 09:24:01.399 2 DEBUG nova.compute.manager [req-8976f90e-66c2-424e-933c-e910cc9e0307 req-d8047737-592d-464a-8b17-98d5bb045c70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Received event network-vif-unplugged-6d5e10b7-5c07-4389-8916-e7c277cb2c88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:24:01 np0005486808 nova_compute[259627]: 2025-10-14 09:24:01.400 2 DEBUG oslo_concurrency.lockutils [req-8976f90e-66c2-424e-933c-e910cc9e0307 req-d8047737-592d-464a-8b17-98d5bb045c70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "6810b29b-088f-441b-8a6a-02eaafada0c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:24:01 np0005486808 nova_compute[259627]: 2025-10-14 09:24:01.400 2 DEBUG oslo_concurrency.lockutils [req-8976f90e-66c2-424e-933c-e910cc9e0307 req-d8047737-592d-464a-8b17-98d5bb045c70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6810b29b-088f-441b-8a6a-02eaafada0c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:24:01 np0005486808 nova_compute[259627]: 2025-10-14 09:24:01.400 2 DEBUG oslo_concurrency.lockutils [req-8976f90e-66c2-424e-933c-e910cc9e0307 req-d8047737-592d-464a-8b17-98d5bb045c70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6810b29b-088f-441b-8a6a-02eaafada0c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:24:01 np0005486808 nova_compute[259627]: 2025-10-14 09:24:01.401 2 DEBUG nova.compute.manager [req-8976f90e-66c2-424e-933c-e910cc9e0307 req-d8047737-592d-464a-8b17-98d5bb045c70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] No waiting events found dispatching network-vif-unplugged-6d5e10b7-5c07-4389-8916-e7c277cb2c88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:24:01 np0005486808 nova_compute[259627]: 2025-10-14 09:24:01.401 2 DEBUG nova.compute.manager [req-8976f90e-66c2-424e-933c-e910cc9e0307 req-d8047737-592d-464a-8b17-98d5bb045c70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Received event network-vif-unplugged-6d5e10b7-5c07-4389-8916-e7c277cb2c88 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:24:01 np0005486808 nova_compute[259627]: 2025-10-14 09:24:01.401 2 DEBUG nova.compute.manager [req-8976f90e-66c2-424e-933c-e910cc9e0307 req-d8047737-592d-464a-8b17-98d5bb045c70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Received event network-vif-plugged-6d5e10b7-5c07-4389-8916-e7c277cb2c88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:24:01 np0005486808 nova_compute[259627]: 2025-10-14 09:24:01.402 2 DEBUG oslo_concurrency.lockutils [req-8976f90e-66c2-424e-933c-e910cc9e0307 req-d8047737-592d-464a-8b17-98d5bb045c70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "6810b29b-088f-441b-8a6a-02eaafada0c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:24:01 np0005486808 nova_compute[259627]: 2025-10-14 09:24:01.402 2 DEBUG oslo_concurrency.lockutils [req-8976f90e-66c2-424e-933c-e910cc9e0307 req-d8047737-592d-464a-8b17-98d5bb045c70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6810b29b-088f-441b-8a6a-02eaafada0c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:24:01 np0005486808 nova_compute[259627]: 2025-10-14 09:24:01.403 2 DEBUG oslo_concurrency.lockutils [req-8976f90e-66c2-424e-933c-e910cc9e0307 req-d8047737-592d-464a-8b17-98d5bb045c70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "6810b29b-088f-441b-8a6a-02eaafada0c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:24:01 np0005486808 nova_compute[259627]: 2025-10-14 09:24:01.403 2 DEBUG nova.compute.manager [req-8976f90e-66c2-424e-933c-e910cc9e0307 req-d8047737-592d-464a-8b17-98d5bb045c70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] No waiting events found dispatching network-vif-plugged-6d5e10b7-5c07-4389-8916-e7c277cb2c88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:24:01 np0005486808 nova_compute[259627]: 2025-10-14 09:24:01.403 2 WARNING nova.compute.manager [req-8976f90e-66c2-424e-933c-e910cc9e0307 req-d8047737-592d-464a-8b17-98d5bb045c70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Received unexpected event network-vif-plugged-6d5e10b7-5c07-4389-8916-e7c277cb2c88 for instance with vm_state active and task_state deleting.#033[00m
Oct 14 05:24:02 np0005486808 nova_compute[259627]: 2025-10-14 09:24:02.020 2 DEBUG nova.network.neutron [-] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:24:02 np0005486808 nova_compute[259627]: 2025-10-14 09:24:02.043 2 INFO nova.compute.manager [-] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Took 1.92 seconds to deallocate network for instance.#033[00m
Oct 14 05:24:02 np0005486808 nova_compute[259627]: 2025-10-14 09:24:02.093 2 DEBUG oslo_concurrency.lockutils [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:24:02 np0005486808 nova_compute[259627]: 2025-10-14 09:24:02.093 2 DEBUG oslo_concurrency.lockutils [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:24:02 np0005486808 nova_compute[259627]: 2025-10-14 09:24:02.201 2 DEBUG oslo_concurrency.processutils [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:24:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2126: 305 pgs: 305 active+clean; 200 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 374 KiB/s rd, 293 KiB/s wr, 147 op/s
Oct 14 05:24:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:24:02 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/680459775' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:24:02 np0005486808 nova_compute[259627]: 2025-10-14 09:24:02.638 2 DEBUG oslo_concurrency.processutils [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:24:02 np0005486808 nova_compute[259627]: 2025-10-14 09:24:02.647 2 DEBUG nova.compute.provider_tree [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:24:02 np0005486808 nova_compute[259627]: 2025-10-14 09:24:02.668 2 DEBUG nova.scheduler.client.report [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:24:02 np0005486808 podman[382289]: 2025-10-14 09:24:02.685199449 +0000 UTC m=+0.081503025 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 05:24:02 np0005486808 podman[382288]: 2025-10-14 09:24:02.690928381 +0000 UTC m=+0.088982980 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd)
Oct 14 05:24:02 np0005486808 nova_compute[259627]: 2025-10-14 09:24:02.698 2 DEBUG oslo_concurrency.lockutils [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:24:02 np0005486808 nova_compute[259627]: 2025-10-14 09:24:02.724 2 INFO nova.scheduler.client.report [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Deleted allocations for instance 6810b29b-088f-441b-8a6a-02eaafada0c5#033[00m
Oct 14 05:24:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:24:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:24:02 np0005486808 nova_compute[259627]: 2025-10-14 09:24:02.790 2 DEBUG oslo_concurrency.lockutils [None req-a683aa59-0878-4840-a200-a738ad967690 f232ab535af04111bf570569aa293116 4112adc84657452aa0e117ac5999054a - - default default] Lock "6810b29b-088f-441b-8a6a-02eaafada0c5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.434s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:24:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:24:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:24:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:24:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:24:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:24:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e291 do_prune osdmap full prune enabled
Oct 14 05:24:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 e292: 3 total, 3 up, 3 in
Oct 14 05:24:02 np0005486808 ovn_controller[152662]: 2025-10-14T09:24:02Z|01303|binding|INFO|Releasing lport 16a7cbd0-b25e-4461-8725-c92979b01f53 from this chassis (sb_readonly=0)
Oct 14 05:24:02 np0005486808 ovn_controller[152662]: 2025-10-14T09:24:02Z|01304|binding|INFO|Releasing lport 7bf9894c-4dab-4178-94d9-e45a9e10602a from this chassis (sb_readonly=0)
Oct 14 05:24:02 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e292: 3 total, 3 up, 3 in
Oct 14 05:24:03 np0005486808 nova_compute[259627]: 2025-10-14 09:24:03.044 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433828.0427566, 060db45d-e2f9-4bf6-bcc0-c72e479bfae1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:24:03 np0005486808 nova_compute[259627]: 2025-10-14 09:24:03.045 2 INFO nova.compute.manager [-] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:24:03 np0005486808 nova_compute[259627]: 2025-10-14 09:24:03.076 2 DEBUG nova.compute.manager [None req-334d64ca-7666-45f7-9947-cf8f4d8eea1a - - - - - -] [instance: 060db45d-e2f9-4bf6-bcc0-c72e479bfae1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:24:03 np0005486808 nova_compute[259627]: 2025-10-14 09:24:03.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:03 np0005486808 nova_compute[259627]: 2025-10-14 09:24:03.524 2 DEBUG nova.compute.manager [req-738525c8-f843-4cac-82f0-91e43ef875b4 req-467af9bc-95d9-48a7-bf8e-80fcec652b4e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Received event network-vif-deleted-6d5e10b7-5c07-4389-8916-e7c277cb2c88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:24:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2128: 305 pgs: 305 active+clean; 200 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 6.2 KiB/s wr, 83 op/s
Oct 14 05:24:04 np0005486808 nova_compute[259627]: 2025-10-14 09:24:04.398 2 DEBUG oslo_concurrency.lockutils [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:24:04 np0005486808 nova_compute[259627]: 2025-10-14 09:24:04.399 2 DEBUG oslo_concurrency.lockutils [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:24:04 np0005486808 nova_compute[259627]: 2025-10-14 09:24:04.400 2 DEBUG oslo_concurrency.lockutils [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:24:04 np0005486808 nova_compute[259627]: 2025-10-14 09:24:04.401 2 DEBUG oslo_concurrency.lockutils [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:24:04 np0005486808 nova_compute[259627]: 2025-10-14 09:24:04.401 2 DEBUG oslo_concurrency.lockutils [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:24:04 np0005486808 nova_compute[259627]: 2025-10-14 09:24:04.403 2 INFO nova.compute.manager [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Terminating instance#033[00m
Oct 14 05:24:04 np0005486808 nova_compute[259627]: 2025-10-14 09:24:04.405 2 DEBUG nova.compute.manager [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:24:04 np0005486808 kernel: tap0c7d56b8-70 (unregistering): left promiscuous mode
Oct 14 05:24:04 np0005486808 NetworkManager[44885]: <info>  [1760433844.4718] device (tap0c7d56b8-70): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:24:04 np0005486808 nova_compute[259627]: 2025-10-14 09:24:04.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:04 np0005486808 ovn_controller[152662]: 2025-10-14T09:24:04Z|01305|binding|INFO|Releasing lport 0c7d56b8-701e-431d-8f3f-4682c684a719 from this chassis (sb_readonly=0)
Oct 14 05:24:04 np0005486808 ovn_controller[152662]: 2025-10-14T09:24:04Z|01306|binding|INFO|Setting lport 0c7d56b8-701e-431d-8f3f-4682c684a719 down in Southbound
Oct 14 05:24:04 np0005486808 ovn_controller[152662]: 2025-10-14T09:24:04Z|01307|binding|INFO|Removing iface tap0c7d56b8-70 ovn-installed in OVS
Oct 14 05:24:04 np0005486808 nova_compute[259627]: 2025-10-14 09:24:04.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:04.544 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:88:3d 10.100.0.22'], port_security=['fa:16:3e:b4:88:3d 10.100.0.22'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.22/28', 'neutron:device_id': 'ef3d76bf-9763-4405-8e48-c2c4405a2a3b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39c21153-4a3d-40fd-91df-ae7d5dae4d8c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e990e92a-384a-47c4-be5e-d58c231a3275', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab61dd9b-dbf7-46d4-89df-319a0a1fc6a6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=0c7d56b8-701e-431d-8f3f-4682c684a719) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:24:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:04.546 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 0c7d56b8-701e-431d-8f3f-4682c684a719 in datapath 39c21153-4a3d-40fd-91df-ae7d5dae4d8c unbound from our chassis#033[00m
Oct 14 05:24:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:04.547 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 39c21153-4a3d-40fd-91df-ae7d5dae4d8c#033[00m
Oct 14 05:24:04 np0005486808 nova_compute[259627]: 2025-10-14 09:24:04.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:04 np0005486808 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d00000078.scope: Deactivated successfully.
Oct 14 05:24:04 np0005486808 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d00000078.scope: Consumed 16.147s CPU time.
Oct 14 05:24:04 np0005486808 systemd-machined[214636]: Machine qemu-153-instance-00000078 terminated.
Oct 14 05:24:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:04.570 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cc66324d-e5e1-4ff9-98cc-d730c9911131]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:04.614 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[78dfd2f1-ace1-48aa-aa0e-63cf72c5d242]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:04.623 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[0213e232-c9f3-48a7-9b56-400e06bc918a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:04 np0005486808 nova_compute[259627]: 2025-10-14 09:24:04.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:04 np0005486808 nova_compute[259627]: 2025-10-14 09:24:04.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:04 np0005486808 nova_compute[259627]: 2025-10-14 09:24:04.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:04 np0005486808 nova_compute[259627]: 2025-10-14 09:24:04.657 2 INFO nova.virt.libvirt.driver [-] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Instance destroyed successfully.#033[00m
Oct 14 05:24:04 np0005486808 nova_compute[259627]: 2025-10-14 09:24:04.658 2 DEBUG nova.objects.instance [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'resources' on Instance uuid ef3d76bf-9763-4405-8e48-c2c4405a2a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:24:04 np0005486808 nova_compute[259627]: 2025-10-14 09:24:04.672 2 DEBUG nova.virt.libvirt.vif [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:22:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-74163694',display_name='tempest-TestNetworkBasicOps-server-74163694',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-74163694',id=120,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDJjpHsBb1FmstcXMm13RiW9DIcCDzUbHC1W47DgC4rLa2+YaGMfll4QodMfzMI26CQxBr8mMI8Apo+Vm4ZUA+2D0BmlkJiSjNtRVZZ4pPW+p+wcLG9yH2ONX/d7llYQVA==',key_name='tempest-TestNetworkBasicOps-266793307',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:22:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-scl22852',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:22:58Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=ef3d76bf-9763-4405-8e48-c2c4405a2a3b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0c7d56b8-701e-431d-8f3f-4682c684a719", "address": "fa:16:3e:b4:88:3d", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c7d56b8-70", "ovs_interfaceid": "0c7d56b8-701e-431d-8f3f-4682c684a719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:24:04 np0005486808 nova_compute[259627]: 2025-10-14 09:24:04.672 2 DEBUG nova.network.os_vif_util [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "0c7d56b8-701e-431d-8f3f-4682c684a719", "address": "fa:16:3e:b4:88:3d", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c7d56b8-70", "ovs_interfaceid": "0c7d56b8-701e-431d-8f3f-4682c684a719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:24:04 np0005486808 nova_compute[259627]: 2025-10-14 09:24:04.674 2 DEBUG nova.network.os_vif_util [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:88:3d,bridge_name='br-int',has_traffic_filtering=True,id=0c7d56b8-701e-431d-8f3f-4682c684a719,network=Network(39c21153-4a3d-40fd-91df-ae7d5dae4d8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c7d56b8-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:24:04 np0005486808 nova_compute[259627]: 2025-10-14 09:24:04.676 2 DEBUG os_vif [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:88:3d,bridge_name='br-int',has_traffic_filtering=True,id=0c7d56b8-701e-431d-8f3f-4682c684a719,network=Network(39c21153-4a3d-40fd-91df-ae7d5dae4d8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c7d56b8-70') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:24:04 np0005486808 nova_compute[259627]: 2025-10-14 09:24:04.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:04 np0005486808 nova_compute[259627]: 2025-10-14 09:24:04.679 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0c7d56b8-70, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:24:04 np0005486808 nova_compute[259627]: 2025-10-14 09:24:04.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:04 np0005486808 nova_compute[259627]: 2025-10-14 09:24:04.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:24:04 np0005486808 nova_compute[259627]: 2025-10-14 09:24:04.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:04.678 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8d159fad-e669-42d1-9330-017dbd44ebd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:04 np0005486808 nova_compute[259627]: 2025-10-14 09:24:04.687 2 INFO os_vif [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:88:3d,bridge_name='br-int',has_traffic_filtering=True,id=0c7d56b8-701e-431d-8f3f-4682c684a719,network=Network(39c21153-4a3d-40fd-91df-ae7d5dae4d8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c7d56b8-70')#033[00m
Oct 14 05:24:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:04.716 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[30efaecb-f5e8-4fc1-a921-7ee0651e43a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap39c21153-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:78:15'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 7, 'rx_bytes': 1132, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 7, 'rx_bytes': 1132, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 366], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 753725, 'reachable_time': 17078, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 9, 'inoctets': 796, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 9, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 796, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 9, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382398, 'error': None, 'target': 'ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:04.739 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4b0f0a00-23b9-4c01-a1a4-43986e7d6ebf]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tap39c21153-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 753740, 'tstamp': 753740}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 382417, 'error': None, 'target': 'ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap39c21153-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 753743, 'tstamp': 753743}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 382417, 'error': None, 'target': 'ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:04.741 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap39c21153-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:24:04 np0005486808 nova_compute[259627]: 2025-10-14 09:24:04.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:04 np0005486808 nova_compute[259627]: 2025-10-14 09:24:04.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:04.744 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap39c21153-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:24:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:04.745 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:24:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:04.745 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap39c21153-40, col_values=(('external_ids', {'iface-id': '7bf9894c-4dab-4178-94d9-e45a9e10602a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:24:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:04.746 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:24:05 np0005486808 nova_compute[259627]: 2025-10-14 09:24:05.121 2 INFO nova.virt.libvirt.driver [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Deleting instance files /var/lib/nova/instances/ef3d76bf-9763-4405-8e48-c2c4405a2a3b_del#033[00m
Oct 14 05:24:05 np0005486808 nova_compute[259627]: 2025-10-14 09:24:05.121 2 INFO nova.virt.libvirt.driver [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Deletion of /var/lib/nova/instances/ef3d76bf-9763-4405-8e48-c2c4405a2a3b_del complete#033[00m
Oct 14 05:24:05 np0005486808 nova_compute[259627]: 2025-10-14 09:24:05.178 2 INFO nova.compute.manager [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Took 0.77 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:24:05 np0005486808 nova_compute[259627]: 2025-10-14 09:24:05.179 2 DEBUG oslo.service.loopingcall [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:24:05 np0005486808 nova_compute[259627]: 2025-10-14 09:24:05.179 2 DEBUG nova.compute.manager [-] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:24:05 np0005486808 nova_compute[259627]: 2025-10-14 09:24:05.179 2 DEBUG nova.network.neutron [-] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:24:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:24:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1075546762' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:24:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:24:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1075546762' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:24:05 np0005486808 nova_compute[259627]: 2025-10-14 09:24:05.637 2 DEBUG nova.compute.manager [req-eef2216a-e437-4223-aef3-ad0a9b15d507 req-ea66d4d2-d248-4942-aba7-93a4742005fd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Received event network-vif-unplugged-0c7d56b8-701e-431d-8f3f-4682c684a719 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:24:05 np0005486808 nova_compute[259627]: 2025-10-14 09:24:05.637 2 DEBUG oslo_concurrency.lockutils [req-eef2216a-e437-4223-aef3-ad0a9b15d507 req-ea66d4d2-d248-4942-aba7-93a4742005fd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:24:05 np0005486808 nova_compute[259627]: 2025-10-14 09:24:05.638 2 DEBUG oslo_concurrency.lockutils [req-eef2216a-e437-4223-aef3-ad0a9b15d507 req-ea66d4d2-d248-4942-aba7-93a4742005fd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:24:05 np0005486808 nova_compute[259627]: 2025-10-14 09:24:05.638 2 DEBUG oslo_concurrency.lockutils [req-eef2216a-e437-4223-aef3-ad0a9b15d507 req-ea66d4d2-d248-4942-aba7-93a4742005fd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:24:05 np0005486808 nova_compute[259627]: 2025-10-14 09:24:05.639 2 DEBUG nova.compute.manager [req-eef2216a-e437-4223-aef3-ad0a9b15d507 req-ea66d4d2-d248-4942-aba7-93a4742005fd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] No waiting events found dispatching network-vif-unplugged-0c7d56b8-701e-431d-8f3f-4682c684a719 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:24:05 np0005486808 nova_compute[259627]: 2025-10-14 09:24:05.639 2 DEBUG nova.compute.manager [req-eef2216a-e437-4223-aef3-ad0a9b15d507 req-ea66d4d2-d248-4942-aba7-93a4742005fd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Received event network-vif-unplugged-0c7d56b8-701e-431d-8f3f-4682c684a719 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:24:05 np0005486808 nova_compute[259627]: 2025-10-14 09:24:05.640 2 DEBUG nova.compute.manager [req-eef2216a-e437-4223-aef3-ad0a9b15d507 req-ea66d4d2-d248-4942-aba7-93a4742005fd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Received event network-vif-plugged-0c7d56b8-701e-431d-8f3f-4682c684a719 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:24:05 np0005486808 nova_compute[259627]: 2025-10-14 09:24:05.640 2 DEBUG oslo_concurrency.lockutils [req-eef2216a-e437-4223-aef3-ad0a9b15d507 req-ea66d4d2-d248-4942-aba7-93a4742005fd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:24:05 np0005486808 nova_compute[259627]: 2025-10-14 09:24:05.640 2 DEBUG oslo_concurrency.lockutils [req-eef2216a-e437-4223-aef3-ad0a9b15d507 req-ea66d4d2-d248-4942-aba7-93a4742005fd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:24:05 np0005486808 nova_compute[259627]: 2025-10-14 09:24:05.641 2 DEBUG oslo_concurrency.lockutils [req-eef2216a-e437-4223-aef3-ad0a9b15d507 req-ea66d4d2-d248-4942-aba7-93a4742005fd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:24:05 np0005486808 nova_compute[259627]: 2025-10-14 09:24:05.641 2 DEBUG nova.compute.manager [req-eef2216a-e437-4223-aef3-ad0a9b15d507 req-ea66d4d2-d248-4942-aba7-93a4742005fd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] No waiting events found dispatching network-vif-plugged-0c7d56b8-701e-431d-8f3f-4682c684a719 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:24:05 np0005486808 nova_compute[259627]: 2025-10-14 09:24:05.641 2 WARNING nova.compute.manager [req-eef2216a-e437-4223-aef3-ad0a9b15d507 req-ea66d4d2-d248-4942-aba7-93a4742005fd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Received unexpected event network-vif-plugged-0c7d56b8-701e-431d-8f3f-4682c684a719 for instance with vm_state active and task_state deleting.#033[00m
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:24:06 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 27356cb3-385b-4584-bbc6-fa49f3337939 does not exist
Oct 14 05:24:06 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 06a1773c-59c6-4de7-8972-2a93c2fa4693 does not exist
Oct 14 05:24:06 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 09c46760-8c72-4775-ac13-49b86cdd97e0 does not exist
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:24:06.146626) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433846146700, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 1760, "num_deletes": 255, "total_data_size": 2664952, "memory_usage": 2711392, "flush_reason": "Manual Compaction"}
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433846164214, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 2614024, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43411, "largest_seqno": 45170, "table_properties": {"data_size": 2605905, "index_size": 4933, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17202, "raw_average_key_size": 20, "raw_value_size": 2589497, "raw_average_value_size": 3082, "num_data_blocks": 218, "num_entries": 840, "num_filter_entries": 840, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760433683, "oldest_key_time": 1760433683, "file_creation_time": 1760433846, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 17622 microseconds, and 10741 cpu microseconds.
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:24:06.164258) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 2614024 bytes OK
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:24:06.164276) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:24:06.165563) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:24:06.165575) EVENT_LOG_v1 {"time_micros": 1760433846165571, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:24:06.165590) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 2657321, prev total WAL file size 2657321, number of live WAL files 2.
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:24:06.166371) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(2552KB)], [101(7470KB)]
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433846166456, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 10263413, "oldest_snapshot_seqno": -1}
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 6607 keys, 8666808 bytes, temperature: kUnknown
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433846224880, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 8666808, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8622667, "index_size": 26493, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16581, "raw_key_size": 171863, "raw_average_key_size": 26, "raw_value_size": 8504268, "raw_average_value_size": 1287, "num_data_blocks": 1035, "num_entries": 6607, "num_filter_entries": 6607, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760433846, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:24:06.225099) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 8666808 bytes
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:24:06.226453) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 176.2 rd, 148.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 7.3 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(7.2) write-amplify(3.3) OK, records in: 7131, records dropped: 524 output_compression: NoCompression
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:24:06.226468) EVENT_LOG_v1 {"time_micros": 1760433846226460, "job": 60, "event": "compaction_finished", "compaction_time_micros": 58255, "compaction_time_cpu_micros": 39344, "output_level": 6, "num_output_files": 1, "total_output_size": 8666808, "num_input_records": 7131, "num_output_records": 6607, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433846226961, "job": 60, "event": "table_file_deletion", "file_number": 103}
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760433846228310, "job": 60, "event": "table_file_deletion", "file_number": 101}
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:24:06.166284) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:24:06.228366) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:24:06.228371) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:24:06.228373) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:24:06.228374) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:24:06 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:24:06.228375) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:24:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2129: 305 pgs: 305 active+clean; 121 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 9.4 KiB/s wr, 105 op/s
Oct 14 05:24:06 np0005486808 nova_compute[259627]: 2025-10-14 09:24:06.653 2 DEBUG nova.network.neutron [-] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:24:06 np0005486808 nova_compute[259627]: 2025-10-14 09:24:06.677 2 INFO nova.compute.manager [-] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Took 1.50 seconds to deallocate network for instance.#033[00m
Oct 14 05:24:06 np0005486808 nova_compute[259627]: 2025-10-14 09:24:06.727 2 DEBUG oslo_concurrency.lockutils [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:24:06 np0005486808 nova_compute[259627]: 2025-10-14 09:24:06.728 2 DEBUG oslo_concurrency.lockutils [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:24:06 np0005486808 podman[382760]: 2025-10-14 09:24:06.743669581 +0000 UTC m=+0.065335846 container create 707154b64902b6df87c023586c019769610540dd305cb2c638284a2e722256d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 14 05:24:06 np0005486808 systemd[1]: Started libpod-conmon-707154b64902b6df87c023586c019769610540dd305cb2c638284a2e722256d3.scope.
Oct 14 05:24:06 np0005486808 nova_compute[259627]: 2025-10-14 09:24:06.797 2 DEBUG oslo_concurrency.processutils [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:24:06 np0005486808 ovn_controller[152662]: 2025-10-14T09:24:06Z|01308|binding|INFO|Releasing lport 16a7cbd0-b25e-4461-8725-c92979b01f53 from this chassis (sb_readonly=0)
Oct 14 05:24:06 np0005486808 ovn_controller[152662]: 2025-10-14T09:24:06Z|01309|binding|INFO|Releasing lport 7bf9894c-4dab-4178-94d9-e45a9e10602a from this chassis (sb_readonly=0)
Oct 14 05:24:06 np0005486808 podman[382760]: 2025-10-14 09:24:06.719120444 +0000 UTC m=+0.040786809 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:24:06 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:24:06 np0005486808 podman[382760]: 2025-10-14 09:24:06.846393879 +0000 UTC m=+0.168060194 container init 707154b64902b6df87c023586c019769610540dd305cb2c638284a2e722256d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_nobel, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:24:06 np0005486808 podman[382760]: 2025-10-14 09:24:06.857029072 +0000 UTC m=+0.178695337 container start 707154b64902b6df87c023586c019769610540dd305cb2c638284a2e722256d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 14 05:24:06 np0005486808 podman[382760]: 2025-10-14 09:24:06.860883997 +0000 UTC m=+0.182550272 container attach 707154b64902b6df87c023586c019769610540dd305cb2c638284a2e722256d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_nobel, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 14 05:24:06 np0005486808 cool_nobel[382776]: 167 167
Oct 14 05:24:06 np0005486808 systemd[1]: libpod-707154b64902b6df87c023586c019769610540dd305cb2c638284a2e722256d3.scope: Deactivated successfully.
Oct 14 05:24:06 np0005486808 conmon[382776]: conmon 707154b64902b6df87c0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-707154b64902b6df87c023586c019769610540dd305cb2c638284a2e722256d3.scope/container/memory.events
Oct 14 05:24:06 np0005486808 podman[382760]: 2025-10-14 09:24:06.868087625 +0000 UTC m=+0.189753950 container died 707154b64902b6df87c023586c019769610540dd305cb2c638284a2e722256d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_nobel, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:24:06 np0005486808 systemd[1]: var-lib-containers-storage-overlay-a498481b9cf6fe03809d4ee27f54b847e7c66237884886ed97a33cd75205bfc4-merged.mount: Deactivated successfully.
Oct 14 05:24:06 np0005486808 nova_compute[259627]: 2025-10-14 09:24:06.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:06 np0005486808 podman[382760]: 2025-10-14 09:24:06.950488282 +0000 UTC m=+0.272154547 container remove 707154b64902b6df87c023586c019769610540dd305cb2c638284a2e722256d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:24:06 np0005486808 systemd[1]: libpod-conmon-707154b64902b6df87c023586c019769610540dd305cb2c638284a2e722256d3.scope: Deactivated successfully.
Oct 14 05:24:07 np0005486808 nova_compute[259627]: 2025-10-14 09:24:07.000 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:24:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:07.039 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:24:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:07.039 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:24:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:07.040 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:24:07 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:24:07 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:24:07 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:24:07 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:24:07 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:24:07 np0005486808 podman[382821]: 2025-10-14 09:24:07.155900187 +0000 UTC m=+0.043314671 container create 9f813c9c449763cbff96bb30f765da29df4572e9852cbecb22a8beeb90423678 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_joliot, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:24:07 np0005486808 systemd[1]: Started libpod-conmon-9f813c9c449763cbff96bb30f765da29df4572e9852cbecb22a8beeb90423678.scope.
Oct 14 05:24:07 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:24:07 np0005486808 podman[382821]: 2025-10-14 09:24:07.13618981 +0000 UTC m=+0.023604314 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:24:07 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53e2e3dffdcf2e4e10be91a85b2df6f91495ee8b1a6dc5bafdb32318db141676/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:24:07 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53e2e3dffdcf2e4e10be91a85b2df6f91495ee8b1a6dc5bafdb32318db141676/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:24:07 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53e2e3dffdcf2e4e10be91a85b2df6f91495ee8b1a6dc5bafdb32318db141676/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:24:07 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53e2e3dffdcf2e4e10be91a85b2df6f91495ee8b1a6dc5bafdb32318db141676/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:24:07 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53e2e3dffdcf2e4e10be91a85b2df6f91495ee8b1a6dc5bafdb32318db141676/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:24:07 np0005486808 podman[382821]: 2025-10-14 09:24:07.259978499 +0000 UTC m=+0.147393043 container init 9f813c9c449763cbff96bb30f765da29df4572e9852cbecb22a8beeb90423678 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_joliot, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 14 05:24:07 np0005486808 podman[382821]: 2025-10-14 09:24:07.267421333 +0000 UTC m=+0.154835817 container start 9f813c9c449763cbff96bb30f765da29df4572e9852cbecb22a8beeb90423678 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 14 05:24:07 np0005486808 podman[382821]: 2025-10-14 09:24:07.272243842 +0000 UTC m=+0.159658316 container attach 9f813c9c449763cbff96bb30f765da29df4572e9852cbecb22a8beeb90423678 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_joliot, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 14 05:24:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:24:07 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4181953327' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:24:07 np0005486808 nova_compute[259627]: 2025-10-14 09:24:07.304 2 DEBUG oslo_concurrency.processutils [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:24:07 np0005486808 nova_compute[259627]: 2025-10-14 09:24:07.311 2 DEBUG nova.compute.provider_tree [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:24:07 np0005486808 nova_compute[259627]: 2025-10-14 09:24:07.330 2 DEBUG nova.scheduler.client.report [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:24:07 np0005486808 nova_compute[259627]: 2025-10-14 09:24:07.365 2 DEBUG oslo_concurrency.lockutils [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:24:07 np0005486808 nova_compute[259627]: 2025-10-14 09:24:07.396 2 INFO nova.scheduler.client.report [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Deleted allocations for instance ef3d76bf-9763-4405-8e48-c2c4405a2a3b#033[00m
Oct 14 05:24:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:07.457 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:24:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:07.459 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 14 05:24:07 np0005486808 nova_compute[259627]: 2025-10-14 09:24:07.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:07 np0005486808 nova_compute[259627]: 2025-10-14 09:24:07.561 2 DEBUG oslo_concurrency.lockutils [None req-853a48f5-5544-460b-8b45-300cbc32859a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "ef3d76bf-9763-4405-8e48-c2c4405a2a3b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.162s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:24:07 np0005486808 nova_compute[259627]: 2025-10-14 09:24:07.762 2 DEBUG nova.compute.manager [req-46cb717e-d82c-4046-b5d7-de50572b77e4 req-8729897c-bfb1-4961-8753-b658bc434990 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Received event network-vif-deleted-0c7d56b8-701e-431d-8f3f-4682c684a719 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:24:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:24:08 np0005486808 nova_compute[259627]: 2025-10-14 09:24:08.148 2 DEBUG oslo_concurrency.lockutils [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "interface-50c83173-31e3-4f7a-8836-26e52affd0f2-ff2b9b74-a6fc-4774-89d2-9c010f121d65" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:24:08 np0005486808 nova_compute[259627]: 2025-10-14 09:24:08.150 2 DEBUG oslo_concurrency.lockutils [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "interface-50c83173-31e3-4f7a-8836-26e52affd0f2-ff2b9b74-a6fc-4774-89d2-9c010f121d65" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:24:08 np0005486808 nova_compute[259627]: 2025-10-14 09:24:08.171 2 DEBUG nova.objects.instance [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'flavor' on Instance uuid 50c83173-31e3-4f7a-8836-26e52affd0f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:24:08 np0005486808 nova_compute[259627]: 2025-10-14 09:24:08.193 2 DEBUG nova.virt.libvirt.vif [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:21:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-805281293',display_name='tempest-TestNetworkBasicOps-server-805281293',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-805281293',id=117,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLemNDLm8oqFAwx6pW1SyO2V0WIcpPak25N5nC9IiUcuBV+L+dio1UtnHPJvvKiOY7HNqaGgptTx3l7dHlvUaMKtJM7efD9rBJOYA3Gu+LK7uF+l/NOZF2PoS0o9uk9l4A==',key_name='tempest-TestNetworkBasicOps-1122823922',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:22:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-ntom2zcq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:22:09Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=50c83173-31e3-4f7a-8836-26e52affd0f2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "address": "fa:16:3e:17:98:a5", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff2b9b74-a6", "ovs_interfaceid": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:24:08 np0005486808 nova_compute[259627]: 2025-10-14 09:24:08.193 2 DEBUG nova.network.os_vif_util [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "address": "fa:16:3e:17:98:a5", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff2b9b74-a6", "ovs_interfaceid": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:24:08 np0005486808 nova_compute[259627]: 2025-10-14 09:24:08.194 2 DEBUG nova.network.os_vif_util [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:17:98:a5,bridge_name='br-int',has_traffic_filtering=True,id=ff2b9b74-a6fc-4774-89d2-9c010f121d65,network=Network(39c21153-4a3d-40fd-91df-ae7d5dae4d8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff2b9b74-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:24:08 np0005486808 nova_compute[259627]: 2025-10-14 09:24:08.198 2 DEBUG nova.virt.libvirt.guest [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:17:98:a5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapff2b9b74-a6"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 14 05:24:08 np0005486808 nova_compute[259627]: 2025-10-14 09:24:08.201 2 DEBUG nova.virt.libvirt.guest [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:17:98:a5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapff2b9b74-a6"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 14 05:24:08 np0005486808 nova_compute[259627]: 2025-10-14 09:24:08.202 2 DEBUG nova.virt.libvirt.driver [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Attempting to detach device tapff2b9b74-a6 from instance 50c83173-31e3-4f7a-8836-26e52affd0f2 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct 14 05:24:08 np0005486808 nova_compute[259627]: 2025-10-14 09:24:08.203 2 DEBUG nova.virt.libvirt.guest [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] detach device xml: <interface type="ethernet">
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <mac address="fa:16:3e:17:98:a5"/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <model type="virtio"/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <mtu size="1442"/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <target dev="tapff2b9b74-a6"/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]: </interface>
Oct 14 05:24:08 np0005486808 nova_compute[259627]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 14 05:24:08 np0005486808 nova_compute[259627]: 2025-10-14 09:24:08.208 2 DEBUG nova.virt.libvirt.guest [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:17:98:a5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapff2b9b74-a6"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 14 05:24:08 np0005486808 nova_compute[259627]: 2025-10-14 09:24:08.212 2 DEBUG nova.virt.libvirt.guest [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:17:98:a5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapff2b9b74-a6"/></interface>not found in domain: <domain type='kvm' id='148'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <name>instance-00000075</name>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <uuid>50c83173-31e3-4f7a-8836-26e52affd0f2</uuid>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <nova:name>tempest-TestNetworkBasicOps-server-805281293</nova:name>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <nova:creationTime>2025-10-14 09:22:38</nova:creationTime>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <nova:flavor name="m1.nano">
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <nova:memory>128</nova:memory>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <nova:disk>1</nova:disk>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <nova:swap>0</nova:swap>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <nova:vcpus>1</nova:vcpus>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  </nova:flavor>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <nova:owner>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  </nova:owner>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <nova:ports>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <nova:port uuid="81977d79-f754-42ba-8b3c-c4eb2f9651d2">
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <nova:port uuid="ff2b9b74-a6fc-4774-89d2-9c010f121d65">
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.18" ipVersion="4"/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  </nova:ports>
Oct 14 05:24:08 np0005486808 nova_compute[259627]: </nova:instance>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <memory unit='KiB'>131072</memory>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <vcpu placement='static'>1</vcpu>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <resource>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <partition>/machine</partition>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  </resource>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <sysinfo type='smbios'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <entry name='manufacturer'>RDO</entry>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <entry name='product'>OpenStack Compute</entry>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <entry name='serial'>50c83173-31e3-4f7a-8836-26e52affd0f2</entry>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <entry name='uuid'>50c83173-31e3-4f7a-8836-26e52affd0f2</entry>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <entry name='family'>Virtual Machine</entry>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <boot dev='hd'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <smbios mode='sysinfo'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <vmcoreinfo state='on'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <cpu mode='custom' match='exact' check='full'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <model fallback='forbid'>EPYC-Rome</model>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <vendor>AMD</vendor>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='require' name='x2apic'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='require' name='tsc-deadline'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='require' name='hypervisor'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='require' name='tsc_adjust'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='require' name='spec-ctrl'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='require' name='stibp'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='require' name='arch-capabilities'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='require' name='ssbd'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='require' name='cmp_legacy'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='require' name='overflow-recov'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='require' name='succor'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='require' name='ibrs'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='require' name='amd-ssbd'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='require' name='virt-ssbd'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='disable' name='lbrv'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='disable' name='tsc-scale'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='disable' name='vmcb-clean'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='disable' name='flushbyasid'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='disable' name='pause-filter'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='disable' name='pfthreshold'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='disable' name='svme-addr-chk'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='require' name='lfence-always-serializing'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='require' name='rdctl-no'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='require' name='mds-no'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='require' name='pschange-mc-no'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='require' name='gds-no'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='require' name='rfds-no'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='disable' name='xsaves'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='disable' name='svm'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='require' name='topoext'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='disable' name='npt'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='disable' name='nrip-save'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <clock offset='utc'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <timer name='pit' tickpolicy='delay'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <timer name='rtc' tickpolicy='catchup'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <timer name='hpet' present='no'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <on_poweroff>destroy</on_poweroff>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <on_reboot>restart</on_reboot>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <on_crash>destroy</on_crash>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <disk type='network' device='disk'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <driver name='qemu' type='raw' cache='none'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <auth username='openstack'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:        <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <source protocol='rbd' name='vms/50c83173-31e3-4f7a-8836-26e52affd0f2_disk' index='2'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:        <host name='192.168.122.100' port='6789'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target dev='vda' bus='virtio'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='virtio-disk0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <disk type='network' device='cdrom'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <driver name='qemu' type='raw' cache='none'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <auth username='openstack'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:        <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <source protocol='rbd' name='vms/50c83173-31e3-4f7a-8836-26e52affd0f2_disk.config' index='1'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:        <host name='192.168.122.100' port='6789'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target dev='sda' bus='sata'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <readonly/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='sata0-0-0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='0' model='pcie-root'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pcie.0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='1' port='0x10'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.1'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='2' port='0x11'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.2'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='3' port='0x12'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.3'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='4' port='0x13'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.4'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='5' port='0x14'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.5'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='6' port='0x15'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.6'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='7' port='0x16'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.7'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='8' port='0x17'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.8'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='9' port='0x18'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.9'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='10' port='0x19'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.10'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='11' port='0x1a'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.11'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='12' port='0x1b'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.12'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='13' port='0x1c'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.13'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='14' port='0x1d'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.14'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='15' port='0x1e'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.15'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='16' port='0x1f'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.16'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='17' port='0x20'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.17'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='18' port='0x21'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.18'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='19' port='0x22'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.19'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='20' port='0x23'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.20'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='21' port='0x24'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.21'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='22' port='0x25'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.22'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='23' port='0x26'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.23'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='24' port='0x27'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.24'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='25' port='0x28'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.25'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-pci-bridge'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.26'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='usb'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='sata' index='0'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='ide'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <interface type='ethernet'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <mac address='fa:16:3e:32:f2:3f'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target dev='tap81977d79-f7'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model type='virtio'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <driver name='vhost' rx_queue_size='512'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <mtu size='1442'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='net0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <interface type='ethernet'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <mac address='fa:16:3e:17:98:a5'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target dev='tapff2b9b74-a6'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model type='virtio'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <driver name='vhost' rx_queue_size='512'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <mtu size='1442'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='net1'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <serial type='pty'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <source path='/dev/pts/0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <log file='/var/lib/nova/instances/50c83173-31e3-4f7a-8836-26e52affd0f2/console.log' append='off'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target type='isa-serial' port='0'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:        <model name='isa-serial'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      </target>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='serial0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <console type='pty' tty='/dev/pts/0'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <source path='/dev/pts/0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <log file='/var/lib/nova/instances/50c83173-31e3-4f7a-8836-26e52affd0f2/console.log' append='off'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target type='serial' port='0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='serial0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </console>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <input type='tablet' bus='usb'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='input0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='usb' bus='0' port='1'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <input type='mouse' bus='ps2'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='input1'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <input type='keyboard' bus='ps2'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='input2'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <listen type='address' address='::0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </graphics>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <audio id='1' type='none'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model type='virtio' heads='1' primary='yes'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='video0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <watchdog model='itco' action='reset'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='watchdog0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </watchdog>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <memballoon model='virtio'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <stats period='10'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='balloon0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <rng model='virtio'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <backend model='random'>/dev/urandom</backend>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='rng0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <label>system_u:system_r:svirt_t:s0:c470,c678</label>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c470,c678</imagelabel>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  </seclabel>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <label>+107:+107</label>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <imagelabel>+107:+107</imagelabel>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  </seclabel>
Oct 14 05:24:08 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:24:08 np0005486808 nova_compute[259627]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 14 05:24:08 np0005486808 nova_compute[259627]: 2025-10-14 09:24:08.213 2 INFO nova.virt.libvirt.driver [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully detached device tapff2b9b74-a6 from instance 50c83173-31e3-4f7a-8836-26e52affd0f2 from the persistent domain config.
Oct 14 05:24:08 np0005486808 nova_compute[259627]: 2025-10-14 09:24:08.213 2 DEBUG nova.virt.libvirt.driver [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] (1/8): Attempting to detach device tapff2b9b74-a6 with device alias net1 from instance 50c83173-31e3-4f7a-8836-26e52affd0f2 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 14 05:24:08 np0005486808 nova_compute[259627]: 2025-10-14 09:24:08.214 2 DEBUG nova.virt.libvirt.guest [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] detach device xml: <interface type="ethernet">
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <mac address="fa:16:3e:17:98:a5"/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <model type="virtio"/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <mtu size="1442"/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <target dev="tapff2b9b74-a6"/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]: </interface>
Oct 14 05:24:08 np0005486808 nova_compute[259627]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 14 05:24:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2130: 305 pgs: 305 active+clean; 121 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 7.8 KiB/s wr, 86 op/s
Oct 14 05:24:08 np0005486808 jolly_joliot[382838]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:24:08 np0005486808 jolly_joliot[382838]: --> relative data size: 1.0
Oct 14 05:24:08 np0005486808 jolly_joliot[382838]: --> All data devices are unavailable
Oct 14 05:24:08 np0005486808 kernel: tapff2b9b74-a6 (unregistering): left promiscuous mode
Oct 14 05:24:08 np0005486808 NetworkManager[44885]: <info>  [1760433848.3224] device (tapff2b9b74-a6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:24:08 np0005486808 systemd[1]: libpod-9f813c9c449763cbff96bb30f765da29df4572e9852cbecb22a8beeb90423678.scope: Deactivated successfully.
Oct 14 05:24:08 np0005486808 podman[382821]: 2025-10-14 09:24:08.327686901 +0000 UTC m=+1.215101375 container died 9f813c9c449763cbff96bb30f765da29df4572e9852cbecb22a8beeb90423678 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_joliot, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:24:08 np0005486808 nova_compute[259627]: 2025-10-14 09:24:08.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:24:08 np0005486808 nova_compute[259627]: 2025-10-14 09:24:08.340 2 DEBUG nova.virt.libvirt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Received event <DeviceRemovedEvent: 1760433848.340114, 50c83173-31e3-4f7a-8836-26e52affd0f2 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 14 05:24:08 np0005486808 ovn_controller[152662]: 2025-10-14T09:24:08Z|01310|binding|INFO|Releasing lport ff2b9b74-a6fc-4774-89d2-9c010f121d65 from this chassis (sb_readonly=0)
Oct 14 05:24:08 np0005486808 ovn_controller[152662]: 2025-10-14T09:24:08Z|01311|binding|INFO|Setting lport ff2b9b74-a6fc-4774-89d2-9c010f121d65 down in Southbound
Oct 14 05:24:08 np0005486808 ovn_controller[152662]: 2025-10-14T09:24:08Z|01312|binding|INFO|Removing iface tapff2b9b74-a6 ovn-installed in OVS
Oct 14 05:24:08 np0005486808 nova_compute[259627]: 2025-10-14 09:24:08.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:08 np0005486808 nova_compute[259627]: 2025-10-14 09:24:08.343 2 DEBUG nova.virt.libvirt.driver [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Start waiting for the detach event from libvirt for device tapff2b9b74-a6 with device alias net1 for instance 50c83173-31e3-4f7a-8836-26e52affd0f2 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 14 05:24:08 np0005486808 nova_compute[259627]: 2025-10-14 09:24:08.344 2 DEBUG nova.virt.libvirt.guest [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:17:98:a5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapff2b9b74-a6"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 14 05:24:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:08.348 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:98:a5 10.100.0.18', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28', 'neutron:device_id': '50c83173-31e3-4f7a-8836-26e52affd0f2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39c21153-4a3d-40fd-91df-ae7d5dae4d8c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab61dd9b-dbf7-46d4-89df-319a0a1fc6a6, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=ff2b9b74-a6fc-4774-89d2-9c010f121d65) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 05:24:08 np0005486808 nova_compute[259627]: 2025-10-14 09:24:08.349 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433833.3482473, 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:24:08 np0005486808 nova_compute[259627]: 2025-10-14 09:24:08.349 2 INFO nova.compute.manager [-] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] VM Stopped (Lifecycle Event)
Oct 14 05:24:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:08.350 162547 INFO neutron.agent.ovn.metadata.agent [-] Port ff2b9b74-a6fc-4774-89d2-9c010f121d65 in datapath 39c21153-4a3d-40fd-91df-ae7d5dae4d8c unbound from our chassis
Oct 14 05:24:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:08.352 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 39c21153-4a3d-40fd-91df-ae7d5dae4d8c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 05:24:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:08.354 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3468914f-bbfe-4ae0-b02b-50c1823dc14c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:24:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:08.355 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c namespace which is not needed anymore
Oct 14 05:24:08 np0005486808 nova_compute[259627]: 2025-10-14 09:24:08.356 2 DEBUG nova.virt.libvirt.guest [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:17:98:a5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapff2b9b74-a6"/></interface>not found in domain: <domain type='kvm' id='148'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <name>instance-00000075</name>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <uuid>50c83173-31e3-4f7a-8836-26e52affd0f2</uuid>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <nova:name>tempest-TestNetworkBasicOps-server-805281293</nova:name>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <nova:creationTime>2025-10-14 09:22:38</nova:creationTime>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <nova:flavor name="m1.nano">
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <nova:memory>128</nova:memory>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <nova:disk>1</nova:disk>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <nova:swap>0</nova:swap>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <nova:vcpus>1</nova:vcpus>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  </nova:flavor>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <nova:owner>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  </nova:owner>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <nova:ports>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <nova:port uuid="81977d79-f754-42ba-8b3c-c4eb2f9651d2">
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <nova:port uuid="ff2b9b74-a6fc-4774-89d2-9c010f121d65">
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.18" ipVersion="4"/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  </nova:ports>
Oct 14 05:24:08 np0005486808 nova_compute[259627]: </nova:instance>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <memory unit='KiB'>131072</memory>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <vcpu placement='static'>1</vcpu>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <resource>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <partition>/machine</partition>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  </resource>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <sysinfo type='smbios'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <entry name='manufacturer'>RDO</entry>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <entry name='product'>OpenStack Compute</entry>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <entry name='serial'>50c83173-31e3-4f7a-8836-26e52affd0f2</entry>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <entry name='uuid'>50c83173-31e3-4f7a-8836-26e52affd0f2</entry>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <entry name='family'>Virtual Machine</entry>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <boot dev='hd'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <smbios mode='sysinfo'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <vmcoreinfo state='on'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <cpu mode='custom' match='exact' check='full'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <model fallback='forbid'>EPYC-Rome</model>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <vendor>AMD</vendor>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='require' name='x2apic'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='require' name='tsc-deadline'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='require' name='hypervisor'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='require' name='tsc_adjust'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='require' name='spec-ctrl'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='require' name='stibp'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='require' name='arch-capabilities'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='require' name='ssbd'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='require' name='cmp_legacy'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='require' name='overflow-recov'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='require' name='succor'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='require' name='ibrs'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='require' name='amd-ssbd'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='require' name='virt-ssbd'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='disable' name='lbrv'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='disable' name='tsc-scale'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='disable' name='vmcb-clean'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='disable' name='flushbyasid'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='disable' name='pause-filter'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='disable' name='pfthreshold'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='disable' name='svme-addr-chk'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='require' name='lfence-always-serializing'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='require' name='rdctl-no'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='require' name='mds-no'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='require' name='pschange-mc-no'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='require' name='gds-no'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='require' name='rfds-no'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='disable' name='xsaves'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='disable' name='svm'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='require' name='topoext'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='disable' name='npt'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <feature policy='disable' name='nrip-save'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <clock offset='utc'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <timer name='pit' tickpolicy='delay'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <timer name='rtc' tickpolicy='catchup'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <timer name='hpet' present='no'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <on_poweroff>destroy</on_poweroff>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <on_reboot>restart</on_reboot>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <on_crash>destroy</on_crash>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <disk type='network' device='disk'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <driver name='qemu' type='raw' cache='none'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <auth username='openstack'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:        <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <source protocol='rbd' name='vms/50c83173-31e3-4f7a-8836-26e52affd0f2_disk' index='2'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:        <host name='192.168.122.100' port='6789'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target dev='vda' bus='virtio'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='virtio-disk0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <disk type='network' device='cdrom'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <driver name='qemu' type='raw' cache='none'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <auth username='openstack'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:        <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <source protocol='rbd' name='vms/50c83173-31e3-4f7a-8836-26e52affd0f2_disk.config' index='1'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:        <host name='192.168.122.100' port='6789'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target dev='sda' bus='sata'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <readonly/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='sata0-0-0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='0' model='pcie-root'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pcie.0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='1' port='0x10'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.1'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='2' port='0x11'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.2'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='3' port='0x12'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.3'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='4' port='0x13'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.4'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='5' port='0x14'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.5'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='6' port='0x15'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.6'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='7' port='0x16'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.7'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='8' port='0x17'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.8'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='9' port='0x18'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.9'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='10' port='0x19'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.10'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='11' port='0x1a'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.11'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='12' port='0x1b'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.12'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='13' port='0x1c'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.13'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='14' port='0x1d'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.14'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='15' port='0x1e'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.15'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='16' port='0x1f'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.16'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='17' port='0x20'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.17'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='18' port='0x21'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.18'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='19' port='0x22'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.19'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='20' port='0x23'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.20'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='21' port='0x24'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.21'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='22' port='0x25'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.22'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='23' port='0x26'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.23'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='24' port='0x27'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.24'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target chassis='25' port='0x28'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.25'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model name='pcie-pci-bridge'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='pci.26'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='usb'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <controller type='sata' index='0'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='ide'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <interface type='ethernet'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <mac address='fa:16:3e:32:f2:3f'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target dev='tap81977d79-f7'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model type='virtio'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <driver name='vhost' rx_queue_size='512'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <mtu size='1442'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='net0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <serial type='pty'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <source path='/dev/pts/0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <log file='/var/lib/nova/instances/50c83173-31e3-4f7a-8836-26e52affd0f2/console.log' append='off'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target type='isa-serial' port='0'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:        <model name='isa-serial'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      </target>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='serial0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <console type='pty' tty='/dev/pts/0'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <source path='/dev/pts/0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <log file='/var/lib/nova/instances/50c83173-31e3-4f7a-8836-26e52affd0f2/console.log' append='off'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <target type='serial' port='0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='serial0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </console>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <input type='tablet' bus='usb'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='input0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='usb' bus='0' port='1'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <input type='mouse' bus='ps2'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='input1'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <input type='keyboard' bus='ps2'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='input2'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <listen type='address' address='::0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </graphics>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <audio id='1' type='none'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <model type='virtio' heads='1' primary='yes'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='video0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <watchdog model='itco' action='reset'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='watchdog0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </watchdog>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <memballoon model='virtio'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <stats period='10'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='balloon0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <rng model='virtio'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <backend model='random'>/dev/urandom</backend>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <alias name='rng0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <label>system_u:system_r:svirt_t:s0:c470,c678</label>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c470,c678</imagelabel>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  </seclabel>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <label>+107:+107</label>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <imagelabel>+107:+107</imagelabel>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  </seclabel>
Oct 14 05:24:08 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:24:08 np0005486808 nova_compute[259627]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 14 05:24:08 np0005486808 nova_compute[259627]: 2025-10-14 09:24:08.356 2 INFO nova.virt.libvirt.driver [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully detached device tapff2b9b74-a6 from instance 50c83173-31e3-4f7a-8836-26e52affd0f2 from the live domain config.
Oct 14 05:24:08 np0005486808 systemd[1]: var-lib-containers-storage-overlay-53e2e3dffdcf2e4e10be91a85b2df6f91495ee8b1a6dc5bafdb32318db141676-merged.mount: Deactivated successfully.
Oct 14 05:24:08 np0005486808 nova_compute[259627]: 2025-10-14 09:24:08.361 2 DEBUG nova.virt.libvirt.vif [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:21:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-805281293',display_name='tempest-TestNetworkBasicOps-server-805281293',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-805281293',id=117,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLemNDLm8oqFAwx6pW1SyO2V0WIcpPak25N5nC9IiUcuBV+L+dio1UtnHPJvvKiOY7HNqaGgptTx3l7dHlvUaMKtJM7efD9rBJOYA3Gu+LK7uF+l/NOZF2PoS0o9uk9l4A==',key_name='tempest-TestNetworkBasicOps-1122823922',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:22:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-ntom2zcq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:22:09Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=50c83173-31e3-4f7a-8836-26e52affd0f2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "address": "fa:16:3e:17:98:a5", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff2b9b74-a6", "ovs_interfaceid": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:24:08 np0005486808 nova_compute[259627]: 2025-10-14 09:24:08.362 2 DEBUG nova.network.os_vif_util [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "address": "fa:16:3e:17:98:a5", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff2b9b74-a6", "ovs_interfaceid": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:24:08 np0005486808 nova_compute[259627]: 2025-10-14 09:24:08.363 2 DEBUG nova.network.os_vif_util [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:17:98:a5,bridge_name='br-int',has_traffic_filtering=True,id=ff2b9b74-a6fc-4774-89d2-9c010f121d65,network=Network(39c21153-4a3d-40fd-91df-ae7d5dae4d8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff2b9b74-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 05:24:08 np0005486808 nova_compute[259627]: 2025-10-14 09:24:08.368 2 DEBUG os_vif [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:17:98:a5,bridge_name='br-int',has_traffic_filtering=True,id=ff2b9b74-a6fc-4774-89d2-9c010f121d65,network=Network(39c21153-4a3d-40fd-91df-ae7d5dae4d8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff2b9b74-a6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 05:24:08 np0005486808 nova_compute[259627]: 2025-10-14 09:24:08.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:24:08 np0005486808 nova_compute[259627]: 2025-10-14 09:24:08.371 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapff2b9b74-a6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:24:08 np0005486808 nova_compute[259627]: 2025-10-14 09:24:08.374 2 DEBUG nova.compute.manager [None req-3cbc76b5-7b46-4181-83fd-de04683ca66c - - - - - -] [instance: 8f5e63fb-23c2-4f15-acca-bc5fbeb0729b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:24:08 np0005486808 nova_compute[259627]: 2025-10-14 09:24:08.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:24:08 np0005486808 nova_compute[259627]: 2025-10-14 09:24:08.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 05:24:08 np0005486808 nova_compute[259627]: 2025-10-14 09:24:08.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:24:08 np0005486808 nova_compute[259627]: 2025-10-14 09:24:08.385 2 INFO os_vif [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:17:98:a5,bridge_name='br-int',has_traffic_filtering=True,id=ff2b9b74-a6fc-4774-89d2-9c010f121d65,network=Network(39c21153-4a3d-40fd-91df-ae7d5dae4d8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff2b9b74-a6')
Oct 14 05:24:08 np0005486808 nova_compute[259627]: 2025-10-14 09:24:08.386 2 DEBUG nova.virt.libvirt.guest [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <nova:name>tempest-TestNetworkBasicOps-server-805281293</nova:name>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <nova:creationTime>2025-10-14 09:24:08</nova:creationTime>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <nova:flavor name="m1.nano">
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <nova:memory>128</nova:memory>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <nova:disk>1</nova:disk>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <nova:swap>0</nova:swap>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <nova:vcpus>1</nova:vcpus>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  </nova:flavor>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <nova:owner>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  </nova:owner>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  <nova:ports>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    <nova:port uuid="81977d79-f754-42ba-8b3c-c4eb2f9651d2">
Oct 14 05:24:08 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:24:08 np0005486808 nova_compute[259627]:  </nova:ports>
Oct 14 05:24:08 np0005486808 nova_compute[259627]: </nova:instance>
Oct 14 05:24:08 np0005486808 nova_compute[259627]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 14 05:24:08 np0005486808 podman[382821]: 2025-10-14 09:24:08.398266285 +0000 UTC m=+1.285680759 container remove 9f813c9c449763cbff96bb30f765da29df4572e9852cbecb22a8beeb90423678 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 14 05:24:08 np0005486808 systemd[1]: libpod-conmon-9f813c9c449763cbff96bb30f765da29df4572e9852cbecb22a8beeb90423678.scope: Deactivated successfully.
Oct 14 05:24:08 np0005486808 neutron-haproxy-ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c[379287]: [NOTICE]   (379291) : haproxy version is 2.8.14-c23fe91
Oct 14 05:24:08 np0005486808 neutron-haproxy-ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c[379287]: [NOTICE]   (379291) : path to executable is /usr/sbin/haproxy
Oct 14 05:24:08 np0005486808 neutron-haproxy-ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c[379287]: [WARNING]  (379291) : Exiting Master process...
Oct 14 05:24:08 np0005486808 neutron-haproxy-ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c[379287]: [WARNING]  (379291) : Exiting Master process...
Oct 14 05:24:08 np0005486808 neutron-haproxy-ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c[379287]: [ALERT]    (379291) : Current worker (379293) exited with code 143 (Terminated)
Oct 14 05:24:08 np0005486808 neutron-haproxy-ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c[379287]: [WARNING]  (379291) : All workers exited. Exiting... (0)
Oct 14 05:24:08 np0005486808 systemd[1]: libpod-812842f4154758e5cf55dfb2ac091173dabeadadad9fa3ff3e6cac9acea483e9.scope: Deactivated successfully.
Oct 14 05:24:08 np0005486808 podman[382925]: 2025-10-14 09:24:08.555638204 +0000 UTC m=+0.049897334 container died 812842f4154758e5cf55dfb2ac091173dabeadadad9fa3ff3e6cac9acea483e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 05:24:08 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-812842f4154758e5cf55dfb2ac091173dabeadadad9fa3ff3e6cac9acea483e9-userdata-shm.mount: Deactivated successfully.
Oct 14 05:24:08 np0005486808 systemd[1]: var-lib-containers-storage-overlay-36c287ab2c96f243d6a33b107299142f79d3dafdd9e98b859faafdb8e51bef5d-merged.mount: Deactivated successfully.
Oct 14 05:24:08 np0005486808 podman[382925]: 2025-10-14 09:24:08.601131318 +0000 UTC m=+0.095390438 container cleanup 812842f4154758e5cf55dfb2ac091173dabeadadad9fa3ff3e6cac9acea483e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:24:08 np0005486808 systemd[1]: libpod-conmon-812842f4154758e5cf55dfb2ac091173dabeadadad9fa3ff3e6cac9acea483e9.scope: Deactivated successfully.
Oct 14 05:24:08 np0005486808 podman[382978]: 2025-10-14 09:24:08.664840932 +0000 UTC m=+0.041737712 container remove 812842f4154758e5cf55dfb2ac091173dabeadadad9fa3ff3e6cac9acea483e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 14 05:24:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:08.671 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c20974d6-c205-45ff-9f71-d9eea27f8e99]: (4, ('Tue Oct 14 09:24:08 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c (812842f4154758e5cf55dfb2ac091173dabeadadad9fa3ff3e6cac9acea483e9)\n812842f4154758e5cf55dfb2ac091173dabeadadad9fa3ff3e6cac9acea483e9\nTue Oct 14 09:24:08 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c (812842f4154758e5cf55dfb2ac091173dabeadadad9fa3ff3e6cac9acea483e9)\n812842f4154758e5cf55dfb2ac091173dabeadadad9fa3ff3e6cac9acea483e9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:08.672 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[933bf4aa-16cb-4c5b-bd3f-b654834140c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:08.673 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap39c21153-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:24:08 np0005486808 kernel: tap39c21153-40: left promiscuous mode
Oct 14 05:24:08 np0005486808 nova_compute[259627]: 2025-10-14 09:24:08.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:08 np0005486808 nova_compute[259627]: 2025-10-14 09:24:08.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:08.702 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d39bf5a4-b4fe-49d0-8888-3b1d3c7bec24]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:08.726 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4539ac4b-edc0-47dc-8a5b-8df63650206e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:08.727 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b2f2d433-ab77-4628-9b99-da0778d7431b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:08.743 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5eccdc32-e475-4a80-977b-86457cbc693b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 753718, 'reachable_time': 33156, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383040, 'error': None, 'target': 'ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:08.746 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-39c21153-4a3d-40fd-91df-ae7d5dae4d8c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:24:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:08.746 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[484fa5fb-3603-4b36-ae34-9942eb770173]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:08 np0005486808 systemd[1]: run-netns-ovnmeta\x2d39c21153\x2d4a3d\x2d40fd\x2d91df\x2dae7d5dae4d8c.mount: Deactivated successfully.
Oct 14 05:24:09 np0005486808 nova_compute[259627]: 2025-10-14 09:24:09.134 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:24:09 np0005486808 nova_compute[259627]: 2025-10-14 09:24:09.162 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Triggering sync for uuid 50c83173-31e3-4f7a-8836-26e52affd0f2 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct 14 05:24:09 np0005486808 nova_compute[259627]: 2025-10-14 09:24:09.162 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "50c83173-31e3-4f7a-8836-26e52affd0f2" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:24:09 np0005486808 nova_compute[259627]: 2025-10-14 09:24:09.163 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:24:09 np0005486808 nova_compute[259627]: 2025-10-14 09:24:09.196 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:24:09 np0005486808 podman[383084]: 2025-10-14 09:24:09.199138545 +0000 UTC m=+0.079929416 container create 46b60fb3ee968f5265252c25309e7a104df2908bda08e5ec88cc163d18aa6e37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_goldberg, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:24:09 np0005486808 systemd[1]: Started libpod-conmon-46b60fb3ee968f5265252c25309e7a104df2908bda08e5ec88cc163d18aa6e37.scope.
Oct 14 05:24:09 np0005486808 podman[383084]: 2025-10-14 09:24:09.16820215 +0000 UTC m=+0.048993121 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:24:09 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:24:09 np0005486808 podman[383084]: 2025-10-14 09:24:09.306805945 +0000 UTC m=+0.187596836 container init 46b60fb3ee968f5265252c25309e7a104df2908bda08e5ec88cc163d18aa6e37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_goldberg, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:24:09 np0005486808 podman[383084]: 2025-10-14 09:24:09.315161222 +0000 UTC m=+0.195952103 container start 46b60fb3ee968f5265252c25309e7a104df2908bda08e5ec88cc163d18aa6e37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_goldberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Oct 14 05:24:09 np0005486808 podman[383084]: 2025-10-14 09:24:09.318995906 +0000 UTC m=+0.199786787 container attach 46b60fb3ee968f5265252c25309e7a104df2908bda08e5ec88cc163d18aa6e37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:24:09 np0005486808 flamboyant_goldberg[383100]: 167 167
Oct 14 05:24:09 np0005486808 systemd[1]: libpod-46b60fb3ee968f5265252c25309e7a104df2908bda08e5ec88cc163d18aa6e37.scope: Deactivated successfully.
Oct 14 05:24:09 np0005486808 podman[383084]: 2025-10-14 09:24:09.323847736 +0000 UTC m=+0.204638647 container died 46b60fb3ee968f5265252c25309e7a104df2908bda08e5ec88cc163d18aa6e37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_goldberg, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 14 05:24:09 np0005486808 podman[383084]: 2025-10-14 09:24:09.365636909 +0000 UTC m=+0.246427820 container remove 46b60fb3ee968f5265252c25309e7a104df2908bda08e5ec88cc163d18aa6e37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_goldberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 14 05:24:09 np0005486808 systemd[1]: var-lib-containers-storage-overlay-b6379b5f4e870b5f94699ca12df069e8ab3c4394ab1de72acfe8f66a7cc2e7ab-merged.mount: Deactivated successfully.
Oct 14 05:24:09 np0005486808 systemd[1]: libpod-conmon-46b60fb3ee968f5265252c25309e7a104df2908bda08e5ec88cc163d18aa6e37.scope: Deactivated successfully.
Oct 14 05:24:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:09.460 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:24:09 np0005486808 nova_compute[259627]: 2025-10-14 09:24:09.462 2 DEBUG oslo_concurrency.lockutils [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:24:09 np0005486808 nova_compute[259627]: 2025-10-14 09:24:09.463 2 DEBUG oslo_concurrency.lockutils [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquired lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:24:09 np0005486808 nova_compute[259627]: 2025-10-14 09:24:09.463 2 DEBUG nova.network.neutron [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:24:09 np0005486808 nova_compute[259627]: 2025-10-14 09:24:09.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:09 np0005486808 podman[383124]: 2025-10-14 09:24:09.631655972 +0000 UTC m=+0.061195453 container create a7c6e6fc15fddf842bc256502aa85868c820e76a148f3ce1d62bbe4261314d5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_wu, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:24:09 np0005486808 systemd[1]: Started libpod-conmon-a7c6e6fc15fddf842bc256502aa85868c820e76a148f3ce1d62bbe4261314d5c.scope.
Oct 14 05:24:09 np0005486808 podman[383124]: 2025-10-14 09:24:09.613574555 +0000 UTC m=+0.043114036 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:24:09 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:24:09 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcf1e3424c38ad456ad941553216df932737b6dbcf59456e24c41aeefe55aec6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:24:09 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcf1e3424c38ad456ad941553216df932737b6dbcf59456e24c41aeefe55aec6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:24:09 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcf1e3424c38ad456ad941553216df932737b6dbcf59456e24c41aeefe55aec6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:24:09 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcf1e3424c38ad456ad941553216df932737b6dbcf59456e24c41aeefe55aec6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:24:09 np0005486808 podman[383124]: 2025-10-14 09:24:09.737865296 +0000 UTC m=+0.167404877 container init a7c6e6fc15fddf842bc256502aa85868c820e76a148f3ce1d62bbe4261314d5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 14 05:24:09 np0005486808 podman[383124]: 2025-10-14 09:24:09.746203883 +0000 UTC m=+0.175743364 container start a7c6e6fc15fddf842bc256502aa85868c820e76a148f3ce1d62bbe4261314d5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_wu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:24:09 np0005486808 podman[383124]: 2025-10-14 09:24:09.750681863 +0000 UTC m=+0.180221384 container attach a7c6e6fc15fddf842bc256502aa85868c820e76a148f3ce1d62bbe4261314d5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:24:09 np0005486808 nova_compute[259627]: 2025-10-14 09:24:09.873 2 DEBUG nova.compute.manager [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Received event network-vif-unplugged-ff2b9b74-a6fc-4774-89d2-9c010f121d65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:24:09 np0005486808 nova_compute[259627]: 2025-10-14 09:24:09.874 2 DEBUG oslo_concurrency.lockutils [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:24:09 np0005486808 nova_compute[259627]: 2025-10-14 09:24:09.875 2 DEBUG oslo_concurrency.lockutils [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:24:09 np0005486808 nova_compute[259627]: 2025-10-14 09:24:09.875 2 DEBUG oslo_concurrency.lockutils [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:24:09 np0005486808 nova_compute[259627]: 2025-10-14 09:24:09.875 2 DEBUG nova.compute.manager [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] No waiting events found dispatching network-vif-unplugged-ff2b9b74-a6fc-4774-89d2-9c010f121d65 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:24:09 np0005486808 nova_compute[259627]: 2025-10-14 09:24:09.875 2 WARNING nova.compute.manager [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Received unexpected event network-vif-unplugged-ff2b9b74-a6fc-4774-89d2-9c010f121d65 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:24:09 np0005486808 nova_compute[259627]: 2025-10-14 09:24:09.876 2 DEBUG nova.compute.manager [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Received event network-vif-plugged-ff2b9b74-a6fc-4774-89d2-9c010f121d65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:24:09 np0005486808 nova_compute[259627]: 2025-10-14 09:24:09.876 2 DEBUG oslo_concurrency.lockutils [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:24:09 np0005486808 nova_compute[259627]: 2025-10-14 09:24:09.876 2 DEBUG oslo_concurrency.lockutils [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:24:09 np0005486808 nova_compute[259627]: 2025-10-14 09:24:09.877 2 DEBUG oslo_concurrency.lockutils [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:24:09 np0005486808 nova_compute[259627]: 2025-10-14 09:24:09.877 2 DEBUG nova.compute.manager [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] No waiting events found dispatching network-vif-plugged-ff2b9b74-a6fc-4774-89d2-9c010f121d65 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:24:09 np0005486808 nova_compute[259627]: 2025-10-14 09:24:09.877 2 WARNING nova.compute.manager [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Received unexpected event network-vif-plugged-ff2b9b74-a6fc-4774-89d2-9c010f121d65 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:24:09 np0005486808 nova_compute[259627]: 2025-10-14 09:24:09.877 2 DEBUG nova.compute.manager [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Received event network-vif-deleted-ff2b9b74-a6fc-4774-89d2-9c010f121d65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:24:09 np0005486808 nova_compute[259627]: 2025-10-14 09:24:09.878 2 INFO nova.compute.manager [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Neutron deleted interface ff2b9b74-a6fc-4774-89d2-9c010f121d65; detaching it from the instance and deleting it from the info cache#033[00m
Oct 14 05:24:09 np0005486808 nova_compute[259627]: 2025-10-14 09:24:09.878 2 DEBUG nova.network.neutron [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Updating instance_info_cache with network_info: [{"id": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "address": "fa:16:3e:32:f2:3f", "network": {"id": "99e78054-f9f4-417c-a942-d4f9dd534ef7", "bridge": "br-int", "label": "tempest-network-smoke--1093348429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81977d79-f7", "ovs_interfaceid": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:24:09 np0005486808 nova_compute[259627]: 2025-10-14 09:24:09.902 2 DEBUG nova.objects.instance [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lazy-loading 'system_metadata' on Instance uuid 50c83173-31e3-4f7a-8836-26e52affd0f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:24:09 np0005486808 nova_compute[259627]: 2025-10-14 09:24:09.926 2 DEBUG nova.objects.instance [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lazy-loading 'flavor' on Instance uuid 50c83173-31e3-4f7a-8836-26e52affd0f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:24:09 np0005486808 nova_compute[259627]: 2025-10-14 09:24:09.942 2 DEBUG nova.virt.libvirt.vif [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:21:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-805281293',display_name='tempest-TestNetworkBasicOps-server-805281293',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-805281293',id=117,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLemNDLm8oqFAwx6pW1SyO2V0WIcpPak25N5nC9IiUcuBV+L+dio1UtnHPJvvKiOY7HNqaGgptTx3l7dHlvUaMKtJM7efD9rBJOYA3Gu+LK7uF+l/NOZF2PoS0o9uk9l4A==',key_name='tempest-TestNetworkBasicOps-1122823922',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:22:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-ntom2zcq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:22:09Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=50c83173-31e3-4f7a-8836-26e52affd0f2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "address": "fa:16:3e:17:98:a5", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff2b9b74-a6", "ovs_interfaceid": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:24:09 np0005486808 nova_compute[259627]: 2025-10-14 09:24:09.943 2 DEBUG nova.network.os_vif_util [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Converting VIF {"id": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "address": "fa:16:3e:17:98:a5", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff2b9b74-a6", "ovs_interfaceid": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:24:09 np0005486808 nova_compute[259627]: 2025-10-14 09:24:09.943 2 DEBUG nova.network.os_vif_util [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:17:98:a5,bridge_name='br-int',has_traffic_filtering=True,id=ff2b9b74-a6fc-4774-89d2-9c010f121d65,network=Network(39c21153-4a3d-40fd-91df-ae7d5dae4d8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff2b9b74-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:24:09 np0005486808 nova_compute[259627]: 2025-10-14 09:24:09.952 2 DEBUG nova.virt.libvirt.guest [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:17:98:a5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapff2b9b74-a6"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 14 05:24:09 np0005486808 nova_compute[259627]: 2025-10-14 09:24:09.956 2 DEBUG nova.virt.libvirt.guest [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:17:98:a5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapff2b9b74-a6"/></interface>not found in domain: <domain type='kvm' id='148'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <name>instance-00000075</name>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <uuid>50c83173-31e3-4f7a-8836-26e52affd0f2</uuid>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <nova:name>tempest-TestNetworkBasicOps-server-805281293</nova:name>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <nova:creationTime>2025-10-14 09:24:08</nova:creationTime>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <nova:flavor name="m1.nano">
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <nova:memory>128</nova:memory>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <nova:disk>1</nova:disk>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <nova:swap>0</nova:swap>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <nova:vcpus>1</nova:vcpus>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  </nova:flavor>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <nova:owner>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  </nova:owner>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <nova:ports>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <nova:port uuid="81977d79-f754-42ba-8b3c-c4eb2f9651d2">
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  </nova:ports>
Oct 14 05:24:09 np0005486808 nova_compute[259627]: </nova:instance>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <memory unit='KiB'>131072</memory>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <vcpu placement='static'>1</vcpu>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <resource>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <partition>/machine</partition>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  </resource>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <sysinfo type='smbios'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <entry name='manufacturer'>RDO</entry>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <entry name='product'>OpenStack Compute</entry>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <entry name='serial'>50c83173-31e3-4f7a-8836-26e52affd0f2</entry>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <entry name='uuid'>50c83173-31e3-4f7a-8836-26e52affd0f2</entry>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <entry name='family'>Virtual Machine</entry>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <boot dev='hd'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <smbios mode='sysinfo'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <vmcoreinfo state='on'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <cpu mode='custom' match='exact' check='full'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <model fallback='forbid'>EPYC-Rome</model>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <vendor>AMD</vendor>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='require' name='x2apic'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='require' name='tsc-deadline'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='require' name='hypervisor'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='require' name='tsc_adjust'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='require' name='spec-ctrl'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='require' name='stibp'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='require' name='arch-capabilities'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='require' name='ssbd'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='require' name='cmp_legacy'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='require' name='overflow-recov'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='require' name='succor'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='require' name='ibrs'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='require' name='amd-ssbd'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='require' name='virt-ssbd'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='disable' name='lbrv'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='disable' name='tsc-scale'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='disable' name='vmcb-clean'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='disable' name='flushbyasid'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='disable' name='pause-filter'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='disable' name='pfthreshold'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='disable' name='svme-addr-chk'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='require' name='lfence-always-serializing'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='require' name='rdctl-no'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='require' name='mds-no'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='require' name='pschange-mc-no'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='require' name='gds-no'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='require' name='rfds-no'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='disable' name='xsaves'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='disable' name='svm'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='require' name='topoext'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='disable' name='npt'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='disable' name='nrip-save'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <clock offset='utc'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <timer name='pit' tickpolicy='delay'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <timer name='rtc' tickpolicy='catchup'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <timer name='hpet' present='no'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <on_poweroff>destroy</on_poweroff>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <on_reboot>restart</on_reboot>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <on_crash>destroy</on_crash>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <disk type='network' device='disk'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <driver name='qemu' type='raw' cache='none'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <auth username='openstack'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:        <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <source protocol='rbd' name='vms/50c83173-31e3-4f7a-8836-26e52affd0f2_disk' index='2'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:        <host name='192.168.122.100' port='6789'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target dev='vda' bus='virtio'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='virtio-disk0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <disk type='network' device='cdrom'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <driver name='qemu' type='raw' cache='none'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <auth username='openstack'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:        <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <source protocol='rbd' name='vms/50c83173-31e3-4f7a-8836-26e52affd0f2_disk.config' index='1'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:        <host name='192.168.122.100' port='6789'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target dev='sda' bus='sata'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <readonly/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='sata0-0-0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='0' model='pcie-root'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pcie.0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='1' port='0x10'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.1'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='2' port='0x11'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.2'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='3' port='0x12'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.3'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='4' port='0x13'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.4'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='5' port='0x14'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.5'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='6' port='0x15'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.6'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='7' port='0x16'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.7'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='8' port='0x17'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.8'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='9' port='0x18'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.9'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='10' port='0x19'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.10'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='11' port='0x1a'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.11'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='12' port='0x1b'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.12'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='13' port='0x1c'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.13'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='14' port='0x1d'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.14'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='15' port='0x1e'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.15'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='16' port='0x1f'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.16'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='17' port='0x20'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.17'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='18' port='0x21'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.18'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='19' port='0x22'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.19'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='20' port='0x23'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.20'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='21' port='0x24'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.21'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='22' port='0x25'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.22'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='23' port='0x26'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.23'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='24' port='0x27'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.24'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='25' port='0x28'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.25'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-pci-bridge'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.26'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='usb'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='sata' index='0'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='ide'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <interface type='ethernet'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <mac address='fa:16:3e:32:f2:3f'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target dev='tap81977d79-f7'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model type='virtio'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <driver name='vhost' rx_queue_size='512'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <mtu size='1442'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='net0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <serial type='pty'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <source path='/dev/pts/0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <log file='/var/lib/nova/instances/50c83173-31e3-4f7a-8836-26e52affd0f2/console.log' append='off'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target type='isa-serial' port='0'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:        <model name='isa-serial'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      </target>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='serial0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <console type='pty' tty='/dev/pts/0'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <source path='/dev/pts/0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <log file='/var/lib/nova/instances/50c83173-31e3-4f7a-8836-26e52affd0f2/console.log' append='off'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target type='serial' port='0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='serial0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </console>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <input type='tablet' bus='usb'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='input0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='usb' bus='0' port='1'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <input type='mouse' bus='ps2'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='input1'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <input type='keyboard' bus='ps2'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='input2'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <listen type='address' address='::0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </graphics>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <audio id='1' type='none'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model type='virtio' heads='1' primary='yes'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='video0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <watchdog model='itco' action='reset'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='watchdog0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </watchdog>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <memballoon model='virtio'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <stats period='10'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='balloon0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <rng model='virtio'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <backend model='random'>/dev/urandom</backend>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='rng0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <label>system_u:system_r:svirt_t:s0:c470,c678</label>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c470,c678</imagelabel>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  </seclabel>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <label>+107:+107</label>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <imagelabel>+107:+107</imagelabel>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  </seclabel>
Oct 14 05:24:09 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:24:09 np0005486808 nova_compute[259627]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 14 05:24:09 np0005486808 nova_compute[259627]: 2025-10-14 09:24:09.957 2 DEBUG nova.virt.libvirt.guest [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:17:98:a5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapff2b9b74-a6"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 14 05:24:09 np0005486808 nova_compute[259627]: 2025-10-14 09:24:09.961 2 DEBUG nova.virt.libvirt.guest [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:17:98:a5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapff2b9b74-a6"/></interface>not found in domain: <domain type='kvm' id='148'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <name>instance-00000075</name>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <uuid>50c83173-31e3-4f7a-8836-26e52affd0f2</uuid>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <nova:name>tempest-TestNetworkBasicOps-server-805281293</nova:name>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <nova:creationTime>2025-10-14 09:24:08</nova:creationTime>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <nova:flavor name="m1.nano">
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <nova:memory>128</nova:memory>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <nova:disk>1</nova:disk>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <nova:swap>0</nova:swap>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <nova:vcpus>1</nova:vcpus>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  </nova:flavor>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <nova:owner>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  </nova:owner>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <nova:ports>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <nova:port uuid="81977d79-f754-42ba-8b3c-c4eb2f9651d2">
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  </nova:ports>
Oct 14 05:24:09 np0005486808 nova_compute[259627]: </nova:instance>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <memory unit='KiB'>131072</memory>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <vcpu placement='static'>1</vcpu>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <resource>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <partition>/machine</partition>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  </resource>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <sysinfo type='smbios'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <entry name='manufacturer'>RDO</entry>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <entry name='product'>OpenStack Compute</entry>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <entry name='serial'>50c83173-31e3-4f7a-8836-26e52affd0f2</entry>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <entry name='uuid'>50c83173-31e3-4f7a-8836-26e52affd0f2</entry>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <entry name='family'>Virtual Machine</entry>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <boot dev='hd'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <smbios mode='sysinfo'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <vmcoreinfo state='on'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <cpu mode='custom' match='exact' check='full'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <model fallback='forbid'>EPYC-Rome</model>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <vendor>AMD</vendor>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='require' name='x2apic'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='require' name='tsc-deadline'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='require' name='hypervisor'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='require' name='tsc_adjust'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='require' name='spec-ctrl'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='require' name='stibp'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='require' name='arch-capabilities'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='require' name='ssbd'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='require' name='cmp_legacy'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='require' name='overflow-recov'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='require' name='succor'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='require' name='ibrs'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='require' name='amd-ssbd'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='require' name='virt-ssbd'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='disable' name='lbrv'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='disable' name='tsc-scale'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='disable' name='vmcb-clean'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='disable' name='flushbyasid'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='disable' name='pause-filter'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='disable' name='pfthreshold'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='disable' name='svme-addr-chk'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='require' name='lfence-always-serializing'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='require' name='rdctl-no'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='require' name='mds-no'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='require' name='pschange-mc-no'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='require' name='gds-no'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='require' name='rfds-no'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='disable' name='xsaves'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='disable' name='svm'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='require' name='topoext'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='disable' name='npt'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <feature policy='disable' name='nrip-save'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <clock offset='utc'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <timer name='pit' tickpolicy='delay'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <timer name='rtc' tickpolicy='catchup'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <timer name='hpet' present='no'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <on_poweroff>destroy</on_poweroff>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <on_reboot>restart</on_reboot>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <on_crash>destroy</on_crash>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <disk type='network' device='disk'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <driver name='qemu' type='raw' cache='none'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <auth username='openstack'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:        <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <source protocol='rbd' name='vms/50c83173-31e3-4f7a-8836-26e52affd0f2_disk' index='2'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:        <host name='192.168.122.100' port='6789'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target dev='vda' bus='virtio'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='virtio-disk0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <disk type='network' device='cdrom'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <driver name='qemu' type='raw' cache='none'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <auth username='openstack'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:        <secret type='ceph' uuid='c49aadb6-9b04-5cb1-8f5f-4c91676c568e'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <source protocol='rbd' name='vms/50c83173-31e3-4f7a-8836-26e52affd0f2_disk.config' index='1'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:        <host name='192.168.122.100' port='6789'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target dev='sda' bus='sata'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <readonly/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='sata0-0-0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='0' model='pcie-root'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pcie.0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='1' port='0x10'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.1'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='2' port='0x11'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.2'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='3' port='0x12'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.3'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='4' port='0x13'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.4'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='5' port='0x14'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.5'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='6' port='0x15'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.6'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='7' port='0x16'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.7'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='8' port='0x17'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.8'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='9' port='0x18'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.9'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='10' port='0x19'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.10'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='11' port='0x1a'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.11'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='12' port='0x1b'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.12'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='13' port='0x1c'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.13'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='14' port='0x1d'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.14'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='15' port='0x1e'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.15'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='16' port='0x1f'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.16'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='17' port='0x20'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.17'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='18' port='0x21'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.18'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='19' port='0x22'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.19'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='20' port='0x23'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.20'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='21' port='0x24'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.21'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='22' port='0x25'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.22'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='23' port='0x26'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.23'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='24' port='0x27'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.24'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-root-port'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target chassis='25' port='0x28'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.25'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model name='pcie-pci-bridge'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='pci.26'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='usb'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <controller type='sata' index='0'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='ide'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </controller>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <interface type='ethernet'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <mac address='fa:16:3e:32:f2:3f'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target dev='tap81977d79-f7'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model type='virtio'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <driver name='vhost' rx_queue_size='512'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <mtu size='1442'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='net0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <serial type='pty'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <source path='/dev/pts/0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <log file='/var/lib/nova/instances/50c83173-31e3-4f7a-8836-26e52affd0f2/console.log' append='off'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target type='isa-serial' port='0'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:        <model name='isa-serial'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      </target>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='serial0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <console type='pty' tty='/dev/pts/0'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <source path='/dev/pts/0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <log file='/var/lib/nova/instances/50c83173-31e3-4f7a-8836-26e52affd0f2/console.log' append='off'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <target type='serial' port='0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='serial0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </console>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <input type='tablet' bus='usb'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='input0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='usb' bus='0' port='1'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <input type='mouse' bus='ps2'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='input1'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <input type='keyboard' bus='ps2'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='input2'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </input>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <listen type='address' address='::0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </graphics>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <audio id='1' type='none'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <model type='virtio' heads='1' primary='yes'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='video0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <watchdog model='itco' action='reset'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='watchdog0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </watchdog>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <memballoon model='virtio'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <stats period='10'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='balloon0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <rng model='virtio'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <backend model='random'>/dev/urandom</backend>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <alias name='rng0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <label>system_u:system_r:svirt_t:s0:c470,c678</label>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c470,c678</imagelabel>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  </seclabel>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <label>+107:+107</label>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <imagelabel>+107:+107</imagelabel>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  </seclabel>
Oct 14 05:24:09 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:24:09 np0005486808 nova_compute[259627]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 14 05:24:09 np0005486808 nova_compute[259627]: 2025-10-14 09:24:09.961 2 WARNING nova.virt.libvirt.driver [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Detaching interface fa:16:3e:17:98:a5 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tapff2b9b74-a6' not found.
Oct 14 05:24:09 np0005486808 nova_compute[259627]: 2025-10-14 09:24:09.962 2 DEBUG nova.virt.libvirt.vif [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:21:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-805281293',display_name='tempest-TestNetworkBasicOps-server-805281293',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-805281293',id=117,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLemNDLm8oqFAwx6pW1SyO2V0WIcpPak25N5nC9IiUcuBV+L+dio1UtnHPJvvKiOY7HNqaGgptTx3l7dHlvUaMKtJM7efD9rBJOYA3Gu+LK7uF+l/NOZF2PoS0o9uk9l4A==',key_name='tempest-TestNetworkBasicOps-1122823922',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:22:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-ntom2zcq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:22:09Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=50c83173-31e3-4f7a-8836-26e52affd0f2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "address": "fa:16:3e:17:98:a5", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff2b9b74-a6", "ovs_interfaceid": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:24:09 np0005486808 nova_compute[259627]: 2025-10-14 09:24:09.962 2 DEBUG nova.network.os_vif_util [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Converting VIF {"id": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "address": "fa:16:3e:17:98:a5", "network": {"id": "39c21153-4a3d-40fd-91df-ae7d5dae4d8c", "bridge": "br-int", "label": "tempest-network-smoke--1731672308", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff2b9b74-a6", "ovs_interfaceid": "ff2b9b74-a6fc-4774-89d2-9c010f121d65", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:24:09 np0005486808 nova_compute[259627]: 2025-10-14 09:24:09.962 2 DEBUG nova.network.os_vif_util [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:17:98:a5,bridge_name='br-int',has_traffic_filtering=True,id=ff2b9b74-a6fc-4774-89d2-9c010f121d65,network=Network(39c21153-4a3d-40fd-91df-ae7d5dae4d8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff2b9b74-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 05:24:09 np0005486808 nova_compute[259627]: 2025-10-14 09:24:09.962 2 DEBUG os_vif [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:17:98:a5,bridge_name='br-int',has_traffic_filtering=True,id=ff2b9b74-a6fc-4774-89d2-9c010f121d65,network=Network(39c21153-4a3d-40fd-91df-ae7d5dae4d8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff2b9b74-a6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 05:24:09 np0005486808 nova_compute[259627]: 2025-10-14 09:24:09.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:24:09 np0005486808 nova_compute[259627]: 2025-10-14 09:24:09.965 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapff2b9b74-a6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:24:09 np0005486808 nova_compute[259627]: 2025-10-14 09:24:09.965 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 05:24:09 np0005486808 nova_compute[259627]: 2025-10-14 09:24:09.966 2 INFO os_vif [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:17:98:a5,bridge_name='br-int',has_traffic_filtering=True,id=ff2b9b74-a6fc-4774-89d2-9c010f121d65,network=Network(39c21153-4a3d-40fd-91df-ae7d5dae4d8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff2b9b74-a6')
Oct 14 05:24:09 np0005486808 nova_compute[259627]: 2025-10-14 09:24:09.967 2 DEBUG nova.virt.libvirt.guest [req-8b226498-d4f6-48f0-8b2b-58701ddc2fc4 req-be3d70cb-c0f8-4b06-bbb0-aa27498d66b8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <nova:name>tempest-TestNetworkBasicOps-server-805281293</nova:name>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <nova:creationTime>2025-10-14 09:24:09</nova:creationTime>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <nova:flavor name="m1.nano">
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <nova:memory>128</nova:memory>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <nova:disk>1</nova:disk>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <nova:swap>0</nova:swap>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <nova:vcpus>1</nova:vcpus>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  </nova:flavor>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <nova:owner>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  </nova:owner>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  <nova:ports>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    <nova:port uuid="81977d79-f754-42ba-8b3c-c4eb2f9651d2">
Oct 14 05:24:09 np0005486808 nova_compute[259627]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:    </nova:port>
Oct 14 05:24:09 np0005486808 nova_compute[259627]:  </nova:ports>
Oct 14 05:24:09 np0005486808 nova_compute[259627]: </nova:instance>
Oct 14 05:24:09 np0005486808 nova_compute[259627]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 14 05:24:10 np0005486808 ovn_controller[152662]: 2025-10-14T09:24:10Z|01313|binding|INFO|Releasing lport 16a7cbd0-b25e-4461-8725-c92979b01f53 from this chassis (sb_readonly=0)
Oct 14 05:24:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2131: 305 pgs: 305 active+clean; 121 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 8.7 KiB/s wr, 88 op/s
Oct 14 05:24:10 np0005486808 nova_compute[259627]: 2025-10-14 09:24:10.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:24:10 np0005486808 keen_wu[383140]: {
Oct 14 05:24:10 np0005486808 keen_wu[383140]:    "0": [
Oct 14 05:24:10 np0005486808 keen_wu[383140]:        {
Oct 14 05:24:10 np0005486808 keen_wu[383140]:            "devices": [
Oct 14 05:24:10 np0005486808 keen_wu[383140]:                "/dev/loop3"
Oct 14 05:24:10 np0005486808 keen_wu[383140]:            ],
Oct 14 05:24:10 np0005486808 keen_wu[383140]:            "lv_name": "ceph_lv0",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:            "lv_size": "21470642176",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:            "name": "ceph_lv0",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:            "tags": {
Oct 14 05:24:10 np0005486808 keen_wu[383140]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:                "ceph.cluster_name": "ceph",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:                "ceph.crush_device_class": "",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:                "ceph.encrypted": "0",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:                "ceph.osd_id": "0",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:                "ceph.type": "block",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:                "ceph.vdo": "0"
Oct 14 05:24:10 np0005486808 keen_wu[383140]:            },
Oct 14 05:24:10 np0005486808 keen_wu[383140]:            "type": "block",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:            "vg_name": "ceph_vg0"
Oct 14 05:24:10 np0005486808 keen_wu[383140]:        }
Oct 14 05:24:10 np0005486808 keen_wu[383140]:    ],
Oct 14 05:24:10 np0005486808 keen_wu[383140]:    "1": [
Oct 14 05:24:10 np0005486808 keen_wu[383140]:        {
Oct 14 05:24:10 np0005486808 keen_wu[383140]:            "devices": [
Oct 14 05:24:10 np0005486808 keen_wu[383140]:                "/dev/loop4"
Oct 14 05:24:10 np0005486808 keen_wu[383140]:            ],
Oct 14 05:24:10 np0005486808 keen_wu[383140]:            "lv_name": "ceph_lv1",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:            "lv_size": "21470642176",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:            "name": "ceph_lv1",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:            "tags": {
Oct 14 05:24:10 np0005486808 keen_wu[383140]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:                "ceph.cluster_name": "ceph",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:                "ceph.crush_device_class": "",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:                "ceph.encrypted": "0",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:                "ceph.osd_id": "1",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:                "ceph.type": "block",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:                "ceph.vdo": "0"
Oct 14 05:24:10 np0005486808 keen_wu[383140]:            },
Oct 14 05:24:10 np0005486808 keen_wu[383140]:            "type": "block",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:            "vg_name": "ceph_vg1"
Oct 14 05:24:10 np0005486808 keen_wu[383140]:        }
Oct 14 05:24:10 np0005486808 keen_wu[383140]:    ],
Oct 14 05:24:10 np0005486808 keen_wu[383140]:    "2": [
Oct 14 05:24:10 np0005486808 keen_wu[383140]:        {
Oct 14 05:24:10 np0005486808 keen_wu[383140]:            "devices": [
Oct 14 05:24:10 np0005486808 keen_wu[383140]:                "/dev/loop5"
Oct 14 05:24:10 np0005486808 keen_wu[383140]:            ],
Oct 14 05:24:10 np0005486808 keen_wu[383140]:            "lv_name": "ceph_lv2",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:            "lv_size": "21470642176",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:            "name": "ceph_lv2",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:            "tags": {
Oct 14 05:24:10 np0005486808 keen_wu[383140]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:                "ceph.cluster_name": "ceph",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:                "ceph.crush_device_class": "",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:                "ceph.encrypted": "0",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:                "ceph.osd_id": "2",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:                "ceph.type": "block",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:                "ceph.vdo": "0"
Oct 14 05:24:10 np0005486808 keen_wu[383140]:            },
Oct 14 05:24:10 np0005486808 keen_wu[383140]:            "type": "block",
Oct 14 05:24:10 np0005486808 keen_wu[383140]:            "vg_name": "ceph_vg2"
Oct 14 05:24:10 np0005486808 keen_wu[383140]:        }
Oct 14 05:24:10 np0005486808 keen_wu[383140]:    ]
Oct 14 05:24:10 np0005486808 keen_wu[383140]: }
Oct 14 05:24:10 np0005486808 ovn_controller[152662]: 2025-10-14T09:24:10Z|01314|binding|INFO|Releasing lport 16a7cbd0-b25e-4461-8725-c92979b01f53 from this chassis (sb_readonly=0)
Oct 14 05:24:10 np0005486808 systemd[1]: libpod-a7c6e6fc15fddf842bc256502aa85868c820e76a148f3ce1d62bbe4261314d5c.scope: Deactivated successfully.
Oct 14 05:24:10 np0005486808 podman[383124]: 2025-10-14 09:24:10.5239949 +0000 UTC m=+0.953534381 container died a7c6e6fc15fddf842bc256502aa85868c820e76a148f3ce1d62bbe4261314d5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_wu, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:24:10 np0005486808 nova_compute[259627]: 2025-10-14 09:24:10.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:10 np0005486808 systemd[1]: var-lib-containers-storage-overlay-bcf1e3424c38ad456ad941553216df932737b6dbcf59456e24c41aeefe55aec6-merged.mount: Deactivated successfully.
Oct 14 05:24:10 np0005486808 podman[383124]: 2025-10-14 09:24:10.578862446 +0000 UTC m=+1.008401927 container remove a7c6e6fc15fddf842bc256502aa85868c820e76a148f3ce1d62bbe4261314d5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 14 05:24:10 np0005486808 systemd[1]: libpod-conmon-a7c6e6fc15fddf842bc256502aa85868c820e76a148f3ce1d62bbe4261314d5c.scope: Deactivated successfully.
Oct 14 05:24:10 np0005486808 nova_compute[259627]: 2025-10-14 09:24:10.770 2 DEBUG nova.compute.manager [req-60772ba5-7603-424b-acce-225fa9720628 req-ab24f010-b429-4306-ab5d-90f75a03b945 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Received event network-changed-81977d79-f754-42ba-8b3c-c4eb2f9651d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:24:10 np0005486808 nova_compute[259627]: 2025-10-14 09:24:10.771 2 DEBUG nova.compute.manager [req-60772ba5-7603-424b-acce-225fa9720628 req-ab24f010-b429-4306-ab5d-90f75a03b945 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Refreshing instance network info cache due to event network-changed-81977d79-f754-42ba-8b3c-c4eb2f9651d2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:24:10 np0005486808 nova_compute[259627]: 2025-10-14 09:24:10.771 2 DEBUG oslo_concurrency.lockutils [req-60772ba5-7603-424b-acce-225fa9720628 req-ab24f010-b429-4306-ab5d-90f75a03b945 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:24:10 np0005486808 nova_compute[259627]: 2025-10-14 09:24:10.825 2 DEBUG oslo_concurrency.lockutils [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "50c83173-31e3-4f7a-8836-26e52affd0f2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:24:10 np0005486808 nova_compute[259627]: 2025-10-14 09:24:10.825 2 DEBUG oslo_concurrency.lockutils [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:24:10 np0005486808 nova_compute[259627]: 2025-10-14 09:24:10.825 2 DEBUG oslo_concurrency.lockutils [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:24:10 np0005486808 nova_compute[259627]: 2025-10-14 09:24:10.826 2 DEBUG oslo_concurrency.lockutils [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:24:10 np0005486808 nova_compute[259627]: 2025-10-14 09:24:10.826 2 DEBUG oslo_concurrency.lockutils [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:24:10 np0005486808 nova_compute[259627]: 2025-10-14 09:24:10.827 2 INFO nova.compute.manager [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Terminating instance#033[00m
Oct 14 05:24:10 np0005486808 nova_compute[259627]: 2025-10-14 09:24:10.827 2 DEBUG nova.compute.manager [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:24:10 np0005486808 kernel: tap81977d79-f7 (unregistering): left promiscuous mode
Oct 14 05:24:10 np0005486808 NetworkManager[44885]: <info>  [1760433850.8777] device (tap81977d79-f7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:24:10 np0005486808 nova_compute[259627]: 2025-10-14 09:24:10.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:10 np0005486808 ovn_controller[152662]: 2025-10-14T09:24:10Z|01315|binding|INFO|Releasing lport 81977d79-f754-42ba-8b3c-c4eb2f9651d2 from this chassis (sb_readonly=0)
Oct 14 05:24:10 np0005486808 ovn_controller[152662]: 2025-10-14T09:24:10Z|01316|binding|INFO|Setting lport 81977d79-f754-42ba-8b3c-c4eb2f9651d2 down in Southbound
Oct 14 05:24:10 np0005486808 ovn_controller[152662]: 2025-10-14T09:24:10Z|01317|binding|INFO|Removing iface tap81977d79-f7 ovn-installed in OVS
Oct 14 05:24:10 np0005486808 nova_compute[259627]: 2025-10-14 09:24:10.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:10.903 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:f2:3f 10.100.0.10'], port_security=['fa:16:3e:32:f2:3f 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '50c83173-31e3-4f7a-8836-26e52affd0f2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99e78054-f9f4-417c-a942-d4f9dd534ef7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1981aa60-63c9-49df-94e5-0874b5ab31e8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=547c8605-a609-4b00-82f5-2d938c7ab8e7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=81977d79-f754-42ba-8b3c-c4eb2f9651d2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:24:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:10.905 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 81977d79-f754-42ba-8b3c-c4eb2f9651d2 in datapath 99e78054-f9f4-417c-a942-d4f9dd534ef7 unbound from our chassis#033[00m
Oct 14 05:24:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:10.908 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 99e78054-f9f4-417c-a942-d4f9dd534ef7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:24:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:10.909 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a49dd713-1531-4879-93ff-52471731bd3c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:10.910 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7 namespace which is not needed anymore#033[00m
Oct 14 05:24:10 np0005486808 nova_compute[259627]: 2025-10-14 09:24:10.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:10 np0005486808 systemd[1]: machine-qemu\x2d148\x2dinstance\x2d00000075.scope: Deactivated successfully.
Oct 14 05:24:10 np0005486808 systemd[1]: machine-qemu\x2d148\x2dinstance\x2d00000075.scope: Consumed 19.395s CPU time.
Oct 14 05:24:10 np0005486808 systemd-machined[214636]: Machine qemu-148-instance-00000075 terminated.
Oct 14 05:24:11 np0005486808 NetworkManager[44885]: <info>  [1760433851.0529] manager: (tap81977d79-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/531)
Oct 14 05:24:11 np0005486808 nova_compute[259627]: 2025-10-14 09:24:11.069 2 INFO nova.virt.libvirt.driver [-] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Instance destroyed successfully.#033[00m
Oct 14 05:24:11 np0005486808 nova_compute[259627]: 2025-10-14 09:24:11.070 2 DEBUG nova.objects.instance [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'resources' on Instance uuid 50c83173-31e3-4f7a-8836-26e52affd0f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:24:11 np0005486808 nova_compute[259627]: 2025-10-14 09:24:11.090 2 DEBUG nova.virt.libvirt.vif [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:21:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-805281293',display_name='tempest-TestNetworkBasicOps-server-805281293',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-805281293',id=117,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLemNDLm8oqFAwx6pW1SyO2V0WIcpPak25N5nC9IiUcuBV+L+dio1UtnHPJvvKiOY7HNqaGgptTx3l7dHlvUaMKtJM7efD9rBJOYA3Gu+LK7uF+l/NOZF2PoS0o9uk9l4A==',key_name='tempest-TestNetworkBasicOps-1122823922',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:22:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-ntom2zcq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:22:09Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=50c83173-31e3-4f7a-8836-26e52affd0f2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "address": "fa:16:3e:32:f2:3f", "network": {"id": "99e78054-f9f4-417c-a942-d4f9dd534ef7", "bridge": "br-int", "label": "tempest-network-smoke--1093348429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81977d79-f7", "ovs_interfaceid": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:24:11 np0005486808 nova_compute[259627]: 2025-10-14 09:24:11.091 2 DEBUG nova.network.os_vif_util [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "address": "fa:16:3e:32:f2:3f", "network": {"id": "99e78054-f9f4-417c-a942-d4f9dd534ef7", "bridge": "br-int", "label": "tempest-network-smoke--1093348429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81977d79-f7", "ovs_interfaceid": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:24:11 np0005486808 nova_compute[259627]: 2025-10-14 09:24:11.091 2 DEBUG nova.network.os_vif_util [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:32:f2:3f,bridge_name='br-int',has_traffic_filtering=True,id=81977d79-f754-42ba-8b3c-c4eb2f9651d2,network=Network(99e78054-f9f4-417c-a942-d4f9dd534ef7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81977d79-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:24:11 np0005486808 nova_compute[259627]: 2025-10-14 09:24:11.092 2 DEBUG os_vif [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:32:f2:3f,bridge_name='br-int',has_traffic_filtering=True,id=81977d79-f754-42ba-8b3c-c4eb2f9651d2,network=Network(99e78054-f9f4-417c-a942-d4f9dd534ef7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81977d79-f7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:24:11 np0005486808 nova_compute[259627]: 2025-10-14 09:24:11.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:11 np0005486808 nova_compute[259627]: 2025-10-14 09:24:11.094 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81977d79-f7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:24:11 np0005486808 nova_compute[259627]: 2025-10-14 09:24:11.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:11 np0005486808 nova_compute[259627]: 2025-10-14 09:24:11.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:11 np0005486808 neutron-haproxy-ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7[378024]: [NOTICE]   (378028) : haproxy version is 2.8.14-c23fe91
Oct 14 05:24:11 np0005486808 neutron-haproxy-ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7[378024]: [NOTICE]   (378028) : path to executable is /usr/sbin/haproxy
Oct 14 05:24:11 np0005486808 neutron-haproxy-ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7[378024]: [WARNING]  (378028) : Exiting Master process...
Oct 14 05:24:11 np0005486808 neutron-haproxy-ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7[378024]: [WARNING]  (378028) : Exiting Master process...
Oct 14 05:24:11 np0005486808 neutron-haproxy-ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7[378024]: [ALERT]    (378028) : Current worker (378030) exited with code 143 (Terminated)
Oct 14 05:24:11 np0005486808 neutron-haproxy-ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7[378024]: [WARNING]  (378028) : All workers exited. Exiting... (0)
Oct 14 05:24:11 np0005486808 nova_compute[259627]: 2025-10-14 09:24:11.101 2 INFO os_vif [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:32:f2:3f,bridge_name='br-int',has_traffic_filtering=True,id=81977d79-f754-42ba-8b3c-c4eb2f9651d2,network=Network(99e78054-f9f4-417c-a942-d4f9dd534ef7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81977d79-f7')#033[00m
Oct 14 05:24:11 np0005486808 systemd[1]: libpod-7fd39b421e00871df7487a1f16c9e47fcb8af61338e133bc4cd28ae2110e9faa.scope: Deactivated successfully.
Oct 14 05:24:11 np0005486808 podman[383285]: 2025-10-14 09:24:11.110126914 +0000 UTC m=+0.055751809 container died 7fd39b421e00871df7487a1f16c9e47fcb8af61338e133bc4cd28ae2110e9faa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 14 05:24:11 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7fd39b421e00871df7487a1f16c9e47fcb8af61338e133bc4cd28ae2110e9faa-userdata-shm.mount: Deactivated successfully.
Oct 14 05:24:11 np0005486808 systemd[1]: var-lib-containers-storage-overlay-46afafd803e5df435ff3608e3076ec93ae7e5d3420b992909b0759f480792e61-merged.mount: Deactivated successfully.
Oct 14 05:24:11 np0005486808 podman[383285]: 2025-10-14 09:24:11.151539737 +0000 UTC m=+0.097164632 container cleanup 7fd39b421e00871df7487a1f16c9e47fcb8af61338e133bc4cd28ae2110e9faa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:24:11 np0005486808 systemd[1]: libpod-conmon-7fd39b421e00871df7487a1f16c9e47fcb8af61338e133bc4cd28ae2110e9faa.scope: Deactivated successfully.
Oct 14 05:24:11 np0005486808 podman[383344]: 2025-10-14 09:24:11.226173421 +0000 UTC m=+0.049961346 container remove 7fd39b421e00871df7487a1f16c9e47fcb8af61338e133bc4cd28ae2110e9faa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 14 05:24:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:11.231 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1e76e48b-0aab-439a-a79a-e933c5357538]: (4, ('Tue Oct 14 09:24:11 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7 (7fd39b421e00871df7487a1f16c9e47fcb8af61338e133bc4cd28ae2110e9faa)\n7fd39b421e00871df7487a1f16c9e47fcb8af61338e133bc4cd28ae2110e9faa\nTue Oct 14 09:24:11 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7 (7fd39b421e00871df7487a1f16c9e47fcb8af61338e133bc4cd28ae2110e9faa)\n7fd39b421e00871df7487a1f16c9e47fcb8af61338e133bc4cd28ae2110e9faa\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:11.233 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[98007bca-172c-4197-87c3-9e41e5e64450]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:11.234 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99e78054-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:24:11 np0005486808 nova_compute[259627]: 2025-10-14 09:24:11.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:11 np0005486808 kernel: tap99e78054-f0: left promiscuous mode
Oct 14 05:24:11 np0005486808 nova_compute[259627]: 2025-10-14 09:24:11.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:11.239 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[94f9d075-b20f-4ae6-be69-1099061e47b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:11 np0005486808 nova_compute[259627]: 2025-10-14 09:24:11.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:11.279 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e5190f94-c574-4ed0-8aa1-5961af7cccc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:11.280 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7fcc48c4-d226-4fde-9744-3adba414ca08]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:11.298 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a8118069-5f3b-474e-9054-b2112126d8ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 750668, 'reachable_time': 38249, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383382, 'error': None, 'target': 'ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:11.301 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-99e78054-f9f4-417c-a942-d4f9dd534ef7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:24:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:11.301 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[0e36258e-6143-4e6f-bd97-0d9c014eb3f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:11 np0005486808 systemd[1]: run-netns-ovnmeta\x2d99e78054\x2df9f4\x2d417c\x2da942\x2dd4f9dd534ef7.mount: Deactivated successfully.
Oct 14 05:24:11 np0005486808 podman[383395]: 2025-10-14 09:24:11.427349842 +0000 UTC m=+0.056891677 container create d23f487d8ebc3f4c75c4bd554b744a4240896d435511777ad5a8ea6e0708ae51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_poincare, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 14 05:24:11 np0005486808 systemd[1]: Started libpod-conmon-d23f487d8ebc3f4c75c4bd554b744a4240896d435511777ad5a8ea6e0708ae51.scope.
Oct 14 05:24:11 np0005486808 podman[383395]: 2025-10-14 09:24:11.39730305 +0000 UTC m=+0.026844955 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:24:11 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:24:11 np0005486808 nova_compute[259627]: 2025-10-14 09:24:11.517 2 INFO nova.virt.libvirt.driver [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Deleting instance files /var/lib/nova/instances/50c83173-31e3-4f7a-8836-26e52affd0f2_del#033[00m
Oct 14 05:24:11 np0005486808 podman[383395]: 2025-10-14 09:24:11.520659688 +0000 UTC m=+0.150201543 container init d23f487d8ebc3f4c75c4bd554b744a4240896d435511777ad5a8ea6e0708ae51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Oct 14 05:24:11 np0005486808 nova_compute[259627]: 2025-10-14 09:24:11.519 2 INFO nova.virt.libvirt.driver [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Deletion of /var/lib/nova/instances/50c83173-31e3-4f7a-8836-26e52affd0f2_del complete#033[00m
Oct 14 05:24:11 np0005486808 podman[383395]: 2025-10-14 09:24:11.527951998 +0000 UTC m=+0.157493833 container start d23f487d8ebc3f4c75c4bd554b744a4240896d435511777ad5a8ea6e0708ae51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_poincare, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 14 05:24:11 np0005486808 podman[383395]: 2025-10-14 09:24:11.531042684 +0000 UTC m=+0.160584569 container attach d23f487d8ebc3f4c75c4bd554b744a4240896d435511777ad5a8ea6e0708ae51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_poincare, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 14 05:24:11 np0005486808 quirky_poincare[383412]: 167 167
Oct 14 05:24:11 np0005486808 systemd[1]: libpod-d23f487d8ebc3f4c75c4bd554b744a4240896d435511777ad5a8ea6e0708ae51.scope: Deactivated successfully.
Oct 14 05:24:11 np0005486808 podman[383395]: 2025-10-14 09:24:11.533966316 +0000 UTC m=+0.163508171 container died d23f487d8ebc3f4c75c4bd554b744a4240896d435511777ad5a8ea6e0708ae51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Oct 14 05:24:11 np0005486808 systemd[1]: var-lib-containers-storage-overlay-6b54ef0006ee395095bd4c340785ee3a3c517ef2decb631b94aa65e07b050448-merged.mount: Deactivated successfully.
Oct 14 05:24:11 np0005486808 podman[383395]: 2025-10-14 09:24:11.573467113 +0000 UTC m=+0.203008988 container remove d23f487d8ebc3f4c75c4bd554b744a4240896d435511777ad5a8ea6e0708ae51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_poincare, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:24:11 np0005486808 nova_compute[259627]: 2025-10-14 09:24:11.574 2 INFO nova.compute.manager [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:24:11 np0005486808 nova_compute[259627]: 2025-10-14 09:24:11.575 2 DEBUG oslo.service.loopingcall [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:24:11 np0005486808 nova_compute[259627]: 2025-10-14 09:24:11.576 2 DEBUG nova.compute.manager [-] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:24:11 np0005486808 nova_compute[259627]: 2025-10-14 09:24:11.576 2 DEBUG nova.network.neutron [-] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:24:11 np0005486808 systemd[1]: libpod-conmon-d23f487d8ebc3f4c75c4bd554b744a4240896d435511777ad5a8ea6e0708ae51.scope: Deactivated successfully.
Oct 14 05:24:11 np0005486808 nova_compute[259627]: 2025-10-14 09:24:11.592 2 INFO nova.network.neutron [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Port ff2b9b74-a6fc-4774-89d2-9c010f121d65 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Oct 14 05:24:11 np0005486808 nova_compute[259627]: 2025-10-14 09:24:11.593 2 DEBUG nova.network.neutron [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Updating instance_info_cache with network_info: [{"id": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "address": "fa:16:3e:32:f2:3f", "network": {"id": "99e78054-f9f4-417c-a942-d4f9dd534ef7", "bridge": "br-int", "label": "tempest-network-smoke--1093348429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81977d79-f7", "ovs_interfaceid": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:24:11 np0005486808 nova_compute[259627]: 2025-10-14 09:24:11.608 2 DEBUG oslo_concurrency.lockutils [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Releasing lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:24:11 np0005486808 nova_compute[259627]: 2025-10-14 09:24:11.610 2 DEBUG oslo_concurrency.lockutils [req-60772ba5-7603-424b-acce-225fa9720628 req-ab24f010-b429-4306-ab5d-90f75a03b945 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:24:11 np0005486808 nova_compute[259627]: 2025-10-14 09:24:11.610 2 DEBUG nova.network.neutron [req-60772ba5-7603-424b-acce-225fa9720628 req-ab24f010-b429-4306-ab5d-90f75a03b945 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Refreshing network info cache for port 81977d79-f754-42ba-8b3c-c4eb2f9651d2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:24:11 np0005486808 nova_compute[259627]: 2025-10-14 09:24:11.632 2 DEBUG oslo_concurrency.lockutils [None req-2396c8b6-1363-476b-9796-d21a13b781c2 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "interface-50c83173-31e3-4f7a-8836-26e52affd0f2-ff2b9b74-a6fc-4774-89d2-9c010f121d65" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.482s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:24:11 np0005486808 podman[383436]: 2025-10-14 09:24:11.767679811 +0000 UTC m=+0.054932478 container create b0d772973fb82a6f86689b22c2e00796cbe29c8baee9b9d583ae03518c5a6bbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_gates, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 14 05:24:11 np0005486808 systemd[1]: Started libpod-conmon-b0d772973fb82a6f86689b22c2e00796cbe29c8baee9b9d583ae03518c5a6bbf.scope.
Oct 14 05:24:11 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:24:11 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/211a0c18b29b6cdf53ecbe486d8048d500e825ed02502c99a9acccf8e8390e1c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:24:11 np0005486808 podman[383436]: 2025-10-14 09:24:11.741948436 +0000 UTC m=+0.029201183 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:24:11 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/211a0c18b29b6cdf53ecbe486d8048d500e825ed02502c99a9acccf8e8390e1c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:24:11 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/211a0c18b29b6cdf53ecbe486d8048d500e825ed02502c99a9acccf8e8390e1c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:24:11 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/211a0c18b29b6cdf53ecbe486d8048d500e825ed02502c99a9acccf8e8390e1c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:24:11 np0005486808 podman[383436]: 2025-10-14 09:24:11.854420005 +0000 UTC m=+0.141672712 container init b0d772973fb82a6f86689b22c2e00796cbe29c8baee9b9d583ae03518c5a6bbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 14 05:24:11 np0005486808 podman[383436]: 2025-10-14 09:24:11.865112159 +0000 UTC m=+0.152364856 container start b0d772973fb82a6f86689b22c2e00796cbe29c8baee9b9d583ae03518c5a6bbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 14 05:24:11 np0005486808 podman[383436]: 2025-10-14 09:24:11.869223621 +0000 UTC m=+0.156476318 container attach b0d772973fb82a6f86689b22c2e00796cbe29c8baee9b9d583ae03518c5a6bbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_gates, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 05:24:11 np0005486808 nova_compute[259627]: 2025-10-14 09:24:11.982 2 DEBUG nova.compute.manager [req-cfed1345-164b-4777-8117-2e239ad5abbc req-1cb83ef4-cb45-4ccd-b0db-32a106d84b74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Received event network-vif-unplugged-81977d79-f754-42ba-8b3c-c4eb2f9651d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:24:11 np0005486808 nova_compute[259627]: 2025-10-14 09:24:11.982 2 DEBUG oslo_concurrency.lockutils [req-cfed1345-164b-4777-8117-2e239ad5abbc req-1cb83ef4-cb45-4ccd-b0db-32a106d84b74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:24:11 np0005486808 nova_compute[259627]: 2025-10-14 09:24:11.983 2 DEBUG oslo_concurrency.lockutils [req-cfed1345-164b-4777-8117-2e239ad5abbc req-1cb83ef4-cb45-4ccd-b0db-32a106d84b74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:24:11 np0005486808 nova_compute[259627]: 2025-10-14 09:24:11.983 2 DEBUG oslo_concurrency.lockutils [req-cfed1345-164b-4777-8117-2e239ad5abbc req-1cb83ef4-cb45-4ccd-b0db-32a106d84b74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:24:11 np0005486808 nova_compute[259627]: 2025-10-14 09:24:11.983 2 DEBUG nova.compute.manager [req-cfed1345-164b-4777-8117-2e239ad5abbc req-1cb83ef4-cb45-4ccd-b0db-32a106d84b74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] No waiting events found dispatching network-vif-unplugged-81977d79-f754-42ba-8b3c-c4eb2f9651d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:24:11 np0005486808 nova_compute[259627]: 2025-10-14 09:24:11.984 2 DEBUG nova.compute.manager [req-cfed1345-164b-4777-8117-2e239ad5abbc req-1cb83ef4-cb45-4ccd-b0db-32a106d84b74 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Received event network-vif-unplugged-81977d79-f754-42ba-8b3c-c4eb2f9651d2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:24:12 np0005486808 nova_compute[259627]: 2025-10-14 09:24:12.045 2 DEBUG nova.network.neutron [-] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:24:12 np0005486808 nova_compute[259627]: 2025-10-14 09:24:12.060 2 INFO nova.compute.manager [-] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Took 0.48 seconds to deallocate network for instance.#033[00m
Oct 14 05:24:12 np0005486808 nova_compute[259627]: 2025-10-14 09:24:12.107 2 DEBUG oslo_concurrency.lockutils [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:24:12 np0005486808 nova_compute[259627]: 2025-10-14 09:24:12.112 2 DEBUG oslo_concurrency.lockutils [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:24:12 np0005486808 nova_compute[259627]: 2025-10-14 09:24:12.147 2 DEBUG oslo_concurrency.processutils [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:24:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2132: 305 pgs: 305 active+clean; 121 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 5.1 KiB/s wr, 40 op/s
Oct 14 05:24:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:24:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3355233882' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:24:12 np0005486808 nova_compute[259627]: 2025-10-14 09:24:12.607 2 DEBUG oslo_concurrency.processutils [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:24:12 np0005486808 nova_compute[259627]: 2025-10-14 09:24:12.617 2 DEBUG nova.compute.provider_tree [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:24:12 np0005486808 nova_compute[259627]: 2025-10-14 09:24:12.635 2 DEBUG nova.scheduler.client.report [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:24:12 np0005486808 nova_compute[259627]: 2025-10-14 09:24:12.659 2 DEBUG oslo_concurrency.lockutils [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.547s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:24:12 np0005486808 nova_compute[259627]: 2025-10-14 09:24:12.688 2 INFO nova.scheduler.client.report [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Deleted allocations for instance 50c83173-31e3-4f7a-8836-26e52affd0f2#033[00m
Oct 14 05:24:12 np0005486808 nova_compute[259627]: 2025-10-14 09:24:12.764 2 DEBUG oslo_concurrency.lockutils [None req-416d9d17-5b90-44d8-86d5-da10538a081a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.939s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:24:12 np0005486808 focused_gates[383452]: {
Oct 14 05:24:12 np0005486808 focused_gates[383452]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:24:12 np0005486808 focused_gates[383452]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:24:12 np0005486808 focused_gates[383452]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:24:12 np0005486808 focused_gates[383452]:        "osd_id": 2,
Oct 14 05:24:12 np0005486808 focused_gates[383452]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:24:12 np0005486808 focused_gates[383452]:        "type": "bluestore"
Oct 14 05:24:12 np0005486808 focused_gates[383452]:    },
Oct 14 05:24:12 np0005486808 focused_gates[383452]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:24:12 np0005486808 focused_gates[383452]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:24:12 np0005486808 focused_gates[383452]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:24:12 np0005486808 focused_gates[383452]:        "osd_id": 1,
Oct 14 05:24:12 np0005486808 focused_gates[383452]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:24:12 np0005486808 focused_gates[383452]:        "type": "bluestore"
Oct 14 05:24:12 np0005486808 focused_gates[383452]:    },
Oct 14 05:24:12 np0005486808 focused_gates[383452]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:24:12 np0005486808 focused_gates[383452]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:24:12 np0005486808 focused_gates[383452]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:24:12 np0005486808 focused_gates[383452]:        "osd_id": 0,
Oct 14 05:24:12 np0005486808 focused_gates[383452]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:24:12 np0005486808 focused_gates[383452]:        "type": "bluestore"
Oct 14 05:24:12 np0005486808 focused_gates[383452]:    }
Oct 14 05:24:12 np0005486808 focused_gates[383452]: }
Oct 14 05:24:12 np0005486808 systemd[1]: libpod-b0d772973fb82a6f86689b22c2e00796cbe29c8baee9b9d583ae03518c5a6bbf.scope: Deactivated successfully.
Oct 14 05:24:12 np0005486808 podman[383436]: 2025-10-14 09:24:12.813562995 +0000 UTC m=+1.100815692 container died b0d772973fb82a6f86689b22c2e00796cbe29c8baee9b9d583ae03518c5a6bbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_gates, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:24:12 np0005486808 systemd[1]: var-lib-containers-storage-overlay-211a0c18b29b6cdf53ecbe486d8048d500e825ed02502c99a9acccf8e8390e1c-merged.mount: Deactivated successfully.
Oct 14 05:24:12 np0005486808 podman[383436]: 2025-10-14 09:24:12.876028538 +0000 UTC m=+1.163281185 container remove b0d772973fb82a6f86689b22c2e00796cbe29c8baee9b9d583ae03518c5a6bbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_gates, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:24:12 np0005486808 systemd[1]: libpod-conmon-b0d772973fb82a6f86689b22c2e00796cbe29c8baee9b9d583ae03518c5a6bbf.scope: Deactivated successfully.
Oct 14 05:24:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:24:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:24:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:24:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:24:12 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev fb79dafb-f443-4ecc-b7c8-98f61a431fa3 does not exist
Oct 14 05:24:12 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev f430ab29-df09-46a4-a048-be2274958b87 does not exist
Oct 14 05:24:12 np0005486808 nova_compute[259627]: 2025-10-14 09:24:12.929 2 DEBUG nova.compute.manager [req-d3481104-175e-4aaf-a13f-739de2fb79f1 req-05852587-aaa7-4d04-9bc8-e73ce9052564 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Received event network-vif-deleted-81977d79-f754-42ba-8b3c-c4eb2f9651d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:24:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:24:13 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:24:13 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:24:13 np0005486808 nova_compute[259627]: 2025-10-14 09:24:13.588 2 DEBUG nova.network.neutron [req-60772ba5-7603-424b-acce-225fa9720628 req-ab24f010-b429-4306-ab5d-90f75a03b945 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Updated VIF entry in instance network info cache for port 81977d79-f754-42ba-8b3c-c4eb2f9651d2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:24:13 np0005486808 nova_compute[259627]: 2025-10-14 09:24:13.588 2 DEBUG nova.network.neutron [req-60772ba5-7603-424b-acce-225fa9720628 req-ab24f010-b429-4306-ab5d-90f75a03b945 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Updating instance_info_cache with network_info: [{"id": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "address": "fa:16:3e:32:f2:3f", "network": {"id": "99e78054-f9f4-417c-a942-d4f9dd534ef7", "bridge": "br-int", "label": "tempest-network-smoke--1093348429", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81977d79-f7", "ovs_interfaceid": "81977d79-f754-42ba-8b3c-c4eb2f9651d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:24:13 np0005486808 nova_compute[259627]: 2025-10-14 09:24:13.610 2 DEBUG oslo_concurrency.lockutils [req-60772ba5-7603-424b-acce-225fa9720628 req-ab24f010-b429-4306-ab5d-90f75a03b945 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-50c83173-31e3-4f7a-8836-26e52affd0f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:24:14 np0005486808 nova_compute[259627]: 2025-10-14 09:24:14.193 2 DEBUG nova.compute.manager [req-2ec5e0a6-afdb-4b98-b656-d92f062d8046 req-26af9426-6ff0-49ec-a8b4-fbb9c3670288 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Received event network-vif-plugged-81977d79-f754-42ba-8b3c-c4eb2f9651d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:24:14 np0005486808 nova_compute[259627]: 2025-10-14 09:24:14.194 2 DEBUG oslo_concurrency.lockutils [req-2ec5e0a6-afdb-4b98-b656-d92f062d8046 req-26af9426-6ff0-49ec-a8b4-fbb9c3670288 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:24:14 np0005486808 nova_compute[259627]: 2025-10-14 09:24:14.194 2 DEBUG oslo_concurrency.lockutils [req-2ec5e0a6-afdb-4b98-b656-d92f062d8046 req-26af9426-6ff0-49ec-a8b4-fbb9c3670288 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:24:14 np0005486808 nova_compute[259627]: 2025-10-14 09:24:14.195 2 DEBUG oslo_concurrency.lockutils [req-2ec5e0a6-afdb-4b98-b656-d92f062d8046 req-26af9426-6ff0-49ec-a8b4-fbb9c3670288 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "50c83173-31e3-4f7a-8836-26e52affd0f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:24:14 np0005486808 nova_compute[259627]: 2025-10-14 09:24:14.195 2 DEBUG nova.compute.manager [req-2ec5e0a6-afdb-4b98-b656-d92f062d8046 req-26af9426-6ff0-49ec-a8b4-fbb9c3670288 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] No waiting events found dispatching network-vif-plugged-81977d79-f754-42ba-8b3c-c4eb2f9651d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:24:14 np0005486808 nova_compute[259627]: 2025-10-14 09:24:14.195 2 WARNING nova.compute.manager [req-2ec5e0a6-afdb-4b98-b656-d92f062d8046 req-26af9426-6ff0-49ec-a8b4-fbb9c3670288 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Received unexpected event network-vif-plugged-81977d79-f754-42ba-8b3c-c4eb2f9651d2 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:24:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2133: 305 pgs: 305 active+clean; 121 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 4.5 KiB/s wr, 36 op/s
Oct 14 05:24:14 np0005486808 nova_compute[259627]: 2025-10-14 09:24:14.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:14 np0005486808 nova_compute[259627]: 2025-10-14 09:24:14.603 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433839.6015317, 6810b29b-088f-441b-8a6a-02eaafada0c5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:24:14 np0005486808 nova_compute[259627]: 2025-10-14 09:24:14.603 2 INFO nova.compute.manager [-] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:24:14 np0005486808 nova_compute[259627]: 2025-10-14 09:24:14.635 2 DEBUG nova.compute.manager [None req-1aab3de4-f9f9-4e2c-a3a4-d7d94c20f12a - - - - - -] [instance: 6810b29b-088f-441b-8a6a-02eaafada0c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:24:16 np0005486808 nova_compute[259627]: 2025-10-14 09:24:16.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2134: 305 pgs: 305 active+clean; 41 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 5.4 KiB/s wr, 61 op/s
Oct 14 05:24:16 np0005486808 podman[383572]: 2025-10-14 09:24:16.671144583 +0000 UTC m=+0.078015158 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 14 05:24:16 np0005486808 podman[383571]: 2025-10-14 09:24:16.709927452 +0000 UTC m=+0.112251935 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 14 05:24:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:24:18 np0005486808 nova_compute[259627]: 2025-10-14 09:24:18.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:18 np0005486808 nova_compute[259627]: 2025-10-14 09:24:18.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2135: 305 pgs: 305 active+clean; 41 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.2 KiB/s wr, 34 op/s
Oct 14 05:24:19 np0005486808 nova_compute[259627]: 2025-10-14 09:24:19.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:19 np0005486808 nova_compute[259627]: 2025-10-14 09:24:19.650 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433844.649698, ef3d76bf-9763-4405-8e48-c2c4405a2a3b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:24:19 np0005486808 nova_compute[259627]: 2025-10-14 09:24:19.651 2 INFO nova.compute.manager [-] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:24:19 np0005486808 nova_compute[259627]: 2025-10-14 09:24:19.681 2 DEBUG nova.compute.manager [None req-d65167ca-1b97-4fa1-8939-c957c864a01e - - - - - -] [instance: ef3d76bf-9763-4405-8e48-c2c4405a2a3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:24:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2136: 305 pgs: 305 active+clean; 41 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.2 KiB/s wr, 34 op/s
Oct 14 05:24:21 np0005486808 nova_compute[259627]: 2025-10-14 09:24:21.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2137: 305 pgs: 305 active+clean; 41 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 05:24:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:24:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2138: 305 pgs: 305 active+clean; 41 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 05:24:24 np0005486808 nova_compute[259627]: 2025-10-14 09:24:24.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:26 np0005486808 nova_compute[259627]: 2025-10-14 09:24:26.068 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433851.0668418, 50c83173-31e3-4f7a-8836-26e52affd0f2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:24:26 np0005486808 nova_compute[259627]: 2025-10-14 09:24:26.068 2 INFO nova.compute.manager [-] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:24:26 np0005486808 nova_compute[259627]: 2025-10-14 09:24:26.100 2 DEBUG nova.compute.manager [None req-c0b6e475-3ded-4579-b472-051e29d78cae - - - - - -] [instance: 50c83173-31e3-4f7a-8836-26e52affd0f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:24:26 np0005486808 nova_compute[259627]: 2025-10-14 09:24:26.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2139: 305 pgs: 305 active+clean; 41 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 05:24:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:24:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2140: 305 pgs: 305 active+clean; 41 MiB data, 831 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:24:29 np0005486808 nova_compute[259627]: 2025-10-14 09:24:29.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2141: 305 pgs: 305 active+clean; 41 MiB data, 831 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:24:31 np0005486808 nova_compute[259627]: 2025-10-14 09:24:31.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2142: 305 pgs: 305 active+clean; 41 MiB data, 831 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:24:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:24:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:24:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:24:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:24:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:24:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:24:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:24:32
Oct 14 05:24:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:24:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:24:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', '.mgr', 'vms', 'images', 'default.rgw.meta']
Oct 14 05:24:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:24:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:24:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:24:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:24:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:24:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:24:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:24:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:24:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:24:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:24:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:24:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:24:33 np0005486808 podman[383615]: 2025-10-14 09:24:33.670692221 +0000 UTC m=+0.086999971 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:24:33 np0005486808 podman[383616]: 2025-10-14 09:24:33.692885099 +0000 UTC m=+0.098773651 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=iscsid, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 05:24:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2143: 305 pgs: 305 active+clean; 41 MiB data, 831 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:24:34 np0005486808 nova_compute[259627]: 2025-10-14 09:24:34.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:36 np0005486808 nova_compute[259627]: 2025-10-14 09:24:36.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2144: 305 pgs: 305 active+clean; 41 MiB data, 831 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:24:36 np0005486808 nova_compute[259627]: 2025-10-14 09:24:36.778 2 DEBUG oslo_concurrency.lockutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "c4b3476e-7b32-4a60-ad45-41cb6716adaf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:24:36 np0005486808 nova_compute[259627]: 2025-10-14 09:24:36.779 2 DEBUG oslo_concurrency.lockutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "c4b3476e-7b32-4a60-ad45-41cb6716adaf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:24:36 np0005486808 nova_compute[259627]: 2025-10-14 09:24:36.795 2 DEBUG nova.compute.manager [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:24:36 np0005486808 nova_compute[259627]: 2025-10-14 09:24:36.881 2 DEBUG oslo_concurrency.lockutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:24:36 np0005486808 nova_compute[259627]: 2025-10-14 09:24:36.882 2 DEBUG oslo_concurrency.lockutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:24:36 np0005486808 nova_compute[259627]: 2025-10-14 09:24:36.890 2 DEBUG nova.virt.hardware [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:24:36 np0005486808 nova_compute[259627]: 2025-10-14 09:24:36.890 2 INFO nova.compute.claims [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:24:36 np0005486808 nova_compute[259627]: 2025-10-14 09:24:36.999 2 DEBUG nova.scheduler.client.report [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Refreshing inventories for resource provider 92105e1d-1743-46e3-a494-858b4331398a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 14 05:24:37 np0005486808 nova_compute[259627]: 2025-10-14 09:24:37.005 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:24:37 np0005486808 nova_compute[259627]: 2025-10-14 09:24:37.027 2 DEBUG nova.scheduler.client.report [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Updating ProviderTree inventory for provider 92105e1d-1743-46e3-a494-858b4331398a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 14 05:24:37 np0005486808 nova_compute[259627]: 2025-10-14 09:24:37.028 2 DEBUG nova.compute.provider_tree [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Updating inventory in ProviderTree for provider 92105e1d-1743-46e3-a494-858b4331398a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 14 05:24:37 np0005486808 nova_compute[259627]: 2025-10-14 09:24:37.050 2 DEBUG nova.scheduler.client.report [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Refreshing aggregate associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 14 05:24:37 np0005486808 nova_compute[259627]: 2025-10-14 09:24:37.069 2 DEBUG nova.scheduler.client.report [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Refreshing trait associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, traits: COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_ACCELERATORS,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 14 05:24:37 np0005486808 nova_compute[259627]: 2025-10-14 09:24:37.120 2 DEBUG oslo_concurrency.processutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:24:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:24:37 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4256022138' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:24:37 np0005486808 nova_compute[259627]: 2025-10-14 09:24:37.586 2 DEBUG oslo_concurrency.processutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:24:37 np0005486808 nova_compute[259627]: 2025-10-14 09:24:37.595 2 DEBUG nova.compute.provider_tree [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:24:37 np0005486808 nova_compute[259627]: 2025-10-14 09:24:37.612 2 DEBUG nova.scheduler.client.report [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:24:37 np0005486808 nova_compute[259627]: 2025-10-14 09:24:37.634 2 DEBUG oslo_concurrency.lockutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:24:37 np0005486808 nova_compute[259627]: 2025-10-14 09:24:37.636 2 DEBUG nova.compute.manager [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:24:37 np0005486808 nova_compute[259627]: 2025-10-14 09:24:37.691 2 DEBUG nova.compute.manager [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:24:37 np0005486808 nova_compute[259627]: 2025-10-14 09:24:37.691 2 DEBUG nova.network.neutron [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:24:37 np0005486808 nova_compute[259627]: 2025-10-14 09:24:37.709 2 INFO nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:24:37 np0005486808 nova_compute[259627]: 2025-10-14 09:24:37.726 2 DEBUG nova.compute.manager [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:24:37 np0005486808 nova_compute[259627]: 2025-10-14 09:24:37.834 2 DEBUG nova.compute.manager [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:24:37 np0005486808 nova_compute[259627]: 2025-10-14 09:24:37.837 2 DEBUG nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:24:37 np0005486808 nova_compute[259627]: 2025-10-14 09:24:37.837 2 INFO nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Creating image(s)#033[00m
Oct 14 05:24:37 np0005486808 nova_compute[259627]: 2025-10-14 09:24:37.871 2 DEBUG nova.storage.rbd_utils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image c4b3476e-7b32-4a60-ad45-41cb6716adaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:24:37 np0005486808 nova_compute[259627]: 2025-10-14 09:24:37.902 2 DEBUG nova.storage.rbd_utils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image c4b3476e-7b32-4a60-ad45-41cb6716adaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:24:37 np0005486808 nova_compute[259627]: 2025-10-14 09:24:37.933 2 DEBUG nova.storage.rbd_utils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image c4b3476e-7b32-4a60-ad45-41cb6716adaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:24:37 np0005486808 nova_compute[259627]: 2025-10-14 09:24:37.938 2 DEBUG oslo_concurrency.processutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:24:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:24:37 np0005486808 nova_compute[259627]: 2025-10-14 09:24:37.991 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:24:38 np0005486808 nova_compute[259627]: 2025-10-14 09:24:38.012 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:24:38 np0005486808 nova_compute[259627]: 2025-10-14 09:24:38.012 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:24:38 np0005486808 nova_compute[259627]: 2025-10-14 09:24:38.013 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:24:38 np0005486808 nova_compute[259627]: 2025-10-14 09:24:38.013 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:24:38 np0005486808 nova_compute[259627]: 2025-10-14 09:24:38.013 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:24:38 np0005486808 nova_compute[259627]: 2025-10-14 09:24:38.068 2 DEBUG oslo_concurrency.processutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:24:38 np0005486808 nova_compute[259627]: 2025-10-14 09:24:38.071 2 DEBUG oslo_concurrency.lockutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:24:38 np0005486808 nova_compute[259627]: 2025-10-14 09:24:38.072 2 DEBUG oslo_concurrency.lockutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:24:38 np0005486808 nova_compute[259627]: 2025-10-14 09:24:38.073 2 DEBUG oslo_concurrency.lockutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:24:38 np0005486808 nova_compute[259627]: 2025-10-14 09:24:38.110 2 DEBUG nova.storage.rbd_utils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image c4b3476e-7b32-4a60-ad45-41cb6716adaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:24:38 np0005486808 nova_compute[259627]: 2025-10-14 09:24:38.115 2 DEBUG oslo_concurrency.processutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 c4b3476e-7b32-4a60-ad45-41cb6716adaf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:24:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2145: 305 pgs: 305 active+clean; 41 MiB data, 831 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:24:38 np0005486808 nova_compute[259627]: 2025-10-14 09:24:38.469 2 DEBUG oslo_concurrency.processutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 c4b3476e-7b32-4a60-ad45-41cb6716adaf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.354s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:24:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:24:38 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2541032966' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:24:38 np0005486808 nova_compute[259627]: 2025-10-14 09:24:38.567 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:24:38 np0005486808 nova_compute[259627]: 2025-10-14 09:24:38.574 2 DEBUG nova.storage.rbd_utils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] resizing rbd image c4b3476e-7b32-4a60-ad45-41cb6716adaf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:24:38 np0005486808 nova_compute[259627]: 2025-10-14 09:24:38.694 2 DEBUG nova.objects.instance [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'migration_context' on Instance uuid c4b3476e-7b32-4a60-ad45-41cb6716adaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:24:38 np0005486808 nova_compute[259627]: 2025-10-14 09:24:38.709 2 DEBUG nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:24:38 np0005486808 nova_compute[259627]: 2025-10-14 09:24:38.710 2 DEBUG nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Ensure instance console log exists: /var/lib/nova/instances/c4b3476e-7b32-4a60-ad45-41cb6716adaf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:24:38 np0005486808 nova_compute[259627]: 2025-10-14 09:24:38.710 2 DEBUG oslo_concurrency.lockutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:24:38 np0005486808 nova_compute[259627]: 2025-10-14 09:24:38.711 2 DEBUG oslo_concurrency.lockutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:24:38 np0005486808 nova_compute[259627]: 2025-10-14 09:24:38.711 2 DEBUG oslo_concurrency.lockutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:24:38 np0005486808 nova_compute[259627]: 2025-10-14 09:24:38.878 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:24:38 np0005486808 nova_compute[259627]: 2025-10-14 09:24:38.881 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3657MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:24:38 np0005486808 nova_compute[259627]: 2025-10-14 09:24:38.881 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:24:38 np0005486808 nova_compute[259627]: 2025-10-14 09:24:38.881 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:24:38 np0005486808 nova_compute[259627]: 2025-10-14 09:24:38.970 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance c4b3476e-7b32-4a60-ad45-41cb6716adaf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:24:38 np0005486808 nova_compute[259627]: 2025-10-14 09:24:38.971 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:24:38 np0005486808 nova_compute[259627]: 2025-10-14 09:24:38.971 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:24:39 np0005486808 nova_compute[259627]: 2025-10-14 09:24:39.004 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:24:39 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:24:39 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/790958384' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:24:39 np0005486808 nova_compute[259627]: 2025-10-14 09:24:39.529 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:24:39 np0005486808 nova_compute[259627]: 2025-10-14 09:24:39.536 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:24:39 np0005486808 nova_compute[259627]: 2025-10-14 09:24:39.556 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:24:39 np0005486808 nova_compute[259627]: 2025-10-14 09:24:39.579 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:24:39 np0005486808 nova_compute[259627]: 2025-10-14 09:24:39.580 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:24:39 np0005486808 nova_compute[259627]: 2025-10-14 09:24:39.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:39 np0005486808 nova_compute[259627]: 2025-10-14 09:24:39.721 2 DEBUG nova.policy [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '10926c27278e45e7b995f2e53b9d16f9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4562699bdb1548f1bb36819107535620', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:24:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2146: 305 pgs: 305 active+clean; 51 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 7.8 KiB/s rd, 333 KiB/s wr, 14 op/s
Oct 14 05:24:40 np0005486808 nova_compute[259627]: 2025-10-14 09:24:40.567 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:24:40 np0005486808 nova_compute[259627]: 2025-10-14 09:24:40.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:24:41 np0005486808 nova_compute[259627]: 2025-10-14 09:24:41.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:41 np0005486808 nova_compute[259627]: 2025-10-14 09:24:41.972 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:24:42 np0005486808 nova_compute[259627]: 2025-10-14 09:24:42.127 2 DEBUG nova.network.neutron [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Successfully updated port: 8cdca031-de5b-4956-a27b-c6c6320c9764 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:24:42 np0005486808 nova_compute[259627]: 2025-10-14 09:24:42.150 2 DEBUG oslo_concurrency.lockutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "refresh_cache-c4b3476e-7b32-4a60-ad45-41cb6716adaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:24:42 np0005486808 nova_compute[259627]: 2025-10-14 09:24:42.150 2 DEBUG oslo_concurrency.lockutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquired lock "refresh_cache-c4b3476e-7b32-4a60-ad45-41cb6716adaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:24:42 np0005486808 nova_compute[259627]: 2025-10-14 09:24:42.150 2 DEBUG nova.network.neutron [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:24:42 np0005486808 nova_compute[259627]: 2025-10-14 09:24:42.235 2 DEBUG nova.compute.manager [req-55631e5e-e17e-4af2-9752-60cc9442ac1d req-088eb718-5df1-4df3-88d0-2196689095fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Received event network-changed-8cdca031-de5b-4956-a27b-c6c6320c9764 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:24:42 np0005486808 nova_compute[259627]: 2025-10-14 09:24:42.236 2 DEBUG nova.compute.manager [req-55631e5e-e17e-4af2-9752-60cc9442ac1d req-088eb718-5df1-4df3-88d0-2196689095fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Refreshing instance network info cache due to event network-changed-8cdca031-de5b-4956-a27b-c6c6320c9764. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:24:42 np0005486808 nova_compute[259627]: 2025-10-14 09:24:42.236 2 DEBUG oslo_concurrency.lockutils [req-55631e5e-e17e-4af2-9752-60cc9442ac1d req-088eb718-5df1-4df3-88d0-2196689095fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-c4b3476e-7b32-4a60-ad45-41cb6716adaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:24:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2147: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 1.8 MiB/s wr, 71 op/s
Oct 14 05:24:42 np0005486808 nova_compute[259627]: 2025-10-14 09:24:42.356 2 DEBUG nova.network.neutron [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:24:42 np0005486808 nova_compute[259627]: 2025-10-14 09:24:42.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:24:42 np0005486808 nova_compute[259627]: 2025-10-14 09:24:42.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:24:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:24:43 np0005486808 nova_compute[259627]: 2025-10-14 09:24:43.072 2 DEBUG nova.network.neutron [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Updating instance_info_cache with network_info: [{"id": "8cdca031-de5b-4956-a27b-c6c6320c9764", "address": "fa:16:3e:ed:fc:28", "network": {"id": "ee3efe32-94e6-45cb-ae71-b379f4a2309a", "bridge": "br-int", "label": "tempest-network-smoke--539325759", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cdca031-de", "ovs_interfaceid": "8cdca031-de5b-4956-a27b-c6c6320c9764", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:24:43 np0005486808 nova_compute[259627]: 2025-10-14 09:24:43.093 2 DEBUG oslo_concurrency.lockutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Releasing lock "refresh_cache-c4b3476e-7b32-4a60-ad45-41cb6716adaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:24:43 np0005486808 nova_compute[259627]: 2025-10-14 09:24:43.093 2 DEBUG nova.compute.manager [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Instance network_info: |[{"id": "8cdca031-de5b-4956-a27b-c6c6320c9764", "address": "fa:16:3e:ed:fc:28", "network": {"id": "ee3efe32-94e6-45cb-ae71-b379f4a2309a", "bridge": "br-int", "label": "tempest-network-smoke--539325759", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cdca031-de", "ovs_interfaceid": "8cdca031-de5b-4956-a27b-c6c6320c9764", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:24:43 np0005486808 nova_compute[259627]: 2025-10-14 09:24:43.094 2 DEBUG oslo_concurrency.lockutils [req-55631e5e-e17e-4af2-9752-60cc9442ac1d req-088eb718-5df1-4df3-88d0-2196689095fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-c4b3476e-7b32-4a60-ad45-41cb6716adaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:24:43 np0005486808 nova_compute[259627]: 2025-10-14 09:24:43.094 2 DEBUG nova.network.neutron [req-55631e5e-e17e-4af2-9752-60cc9442ac1d req-088eb718-5df1-4df3-88d0-2196689095fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Refreshing network info cache for port 8cdca031-de5b-4956-a27b-c6c6320c9764 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:24:43 np0005486808 nova_compute[259627]: 2025-10-14 09:24:43.098 2 DEBUG nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Start _get_guest_xml network_info=[{"id": "8cdca031-de5b-4956-a27b-c6c6320c9764", "address": "fa:16:3e:ed:fc:28", "network": {"id": "ee3efe32-94e6-45cb-ae71-b379f4a2309a", "bridge": "br-int", "label": "tempest-network-smoke--539325759", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cdca031-de", "ovs_interfaceid": "8cdca031-de5b-4956-a27b-c6c6320c9764", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:24:43 np0005486808 nova_compute[259627]: 2025-10-14 09:24:43.102 2 WARNING nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:24:43 np0005486808 nova_compute[259627]: 2025-10-14 09:24:43.107 2 DEBUG nova.virt.libvirt.host [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:24:43 np0005486808 nova_compute[259627]: 2025-10-14 09:24:43.107 2 DEBUG nova.virt.libvirt.host [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:24:43 np0005486808 nova_compute[259627]: 2025-10-14 09:24:43.116 2 DEBUG nova.virt.libvirt.host [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:24:43 np0005486808 nova_compute[259627]: 2025-10-14 09:24:43.117 2 DEBUG nova.virt.libvirt.host [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:24:43 np0005486808 nova_compute[259627]: 2025-10-14 09:24:43.117 2 DEBUG nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:24:43 np0005486808 nova_compute[259627]: 2025-10-14 09:24:43.118 2 DEBUG nova.virt.hardware [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:24:43 np0005486808 nova_compute[259627]: 2025-10-14 09:24:43.118 2 DEBUG nova.virt.hardware [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:24:43 np0005486808 nova_compute[259627]: 2025-10-14 09:24:43.119 2 DEBUG nova.virt.hardware [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:24:43 np0005486808 nova_compute[259627]: 2025-10-14 09:24:43.119 2 DEBUG nova.virt.hardware [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:24:43 np0005486808 nova_compute[259627]: 2025-10-14 09:24:43.119 2 DEBUG nova.virt.hardware [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:24:43 np0005486808 nova_compute[259627]: 2025-10-14 09:24:43.119 2 DEBUG nova.virt.hardware [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:24:43 np0005486808 nova_compute[259627]: 2025-10-14 09:24:43.120 2 DEBUG nova.virt.hardware [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:24:43 np0005486808 nova_compute[259627]: 2025-10-14 09:24:43.120 2 DEBUG nova.virt.hardware [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:24:43 np0005486808 nova_compute[259627]: 2025-10-14 09:24:43.120 2 DEBUG nova.virt.hardware [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:24:43 np0005486808 nova_compute[259627]: 2025-10-14 09:24:43.121 2 DEBUG nova.virt.hardware [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:24:43 np0005486808 nova_compute[259627]: 2025-10-14 09:24:43.121 2 DEBUG nova.virt.hardware [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:24:43 np0005486808 nova_compute[259627]: 2025-10-14 09:24:43.125 2 DEBUG oslo_concurrency.processutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:24:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:24:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:24:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:24:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:24:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Oct 14 05:24:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:24:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:24:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:24:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:24:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:24:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 05:24:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:24:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:24:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:24:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:24:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:24:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:24:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:24:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:24:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:24:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:24:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:24:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:24:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:24:43 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/251679546' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:24:43 np0005486808 nova_compute[259627]: 2025-10-14 09:24:43.608 2 DEBUG oslo_concurrency.processutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:24:43 np0005486808 nova_compute[259627]: 2025-10-14 09:24:43.629 2 DEBUG nova.storage.rbd_utils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image c4b3476e-7b32-4a60-ad45-41cb6716adaf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:24:43 np0005486808 nova_compute[259627]: 2025-10-14 09:24:43.633 2 DEBUG oslo_concurrency.processutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:24:44 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:24:44 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2896303892' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:24:44 np0005486808 nova_compute[259627]: 2025-10-14 09:24:44.072 2 DEBUG oslo_concurrency.processutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:24:44 np0005486808 nova_compute[259627]: 2025-10-14 09:24:44.075 2 DEBUG nova.virt.libvirt.vif [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:24:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-922814987',display_name='tempest-TestNetworkBasicOps-server-922814987',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-922814987',id=123,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIDfkSnFvDSr+GSt1dM38bvIyYGUhpm7xroCMStk06IR9Vyf1uV/kgX14ev9LDBZUF7QO+LVF5DG5NihTr1U28RQ9HNU8vPNSalnywo2YhCd69n5Jhi3ssJBlCIEOLbD3Q==',key_name='tempest-TestNetworkBasicOps-1537576407',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-s3vt9dhf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:24:37Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=c4b3476e-7b32-4a60-ad45-41cb6716adaf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8cdca031-de5b-4956-a27b-c6c6320c9764", "address": "fa:16:3e:ed:fc:28", "network": {"id": "ee3efe32-94e6-45cb-ae71-b379f4a2309a", "bridge": "br-int", "label": "tempest-network-smoke--539325759", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cdca031-de", "ovs_interfaceid": "8cdca031-de5b-4956-a27b-c6c6320c9764", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:24:44 np0005486808 nova_compute[259627]: 2025-10-14 09:24:44.075 2 DEBUG nova.network.os_vif_util [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "8cdca031-de5b-4956-a27b-c6c6320c9764", "address": "fa:16:3e:ed:fc:28", "network": {"id": "ee3efe32-94e6-45cb-ae71-b379f4a2309a", "bridge": "br-int", "label": "tempest-network-smoke--539325759", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cdca031-de", "ovs_interfaceid": "8cdca031-de5b-4956-a27b-c6c6320c9764", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:24:44 np0005486808 nova_compute[259627]: 2025-10-14 09:24:44.077 2 DEBUG nova.network.os_vif_util [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:fc:28,bridge_name='br-int',has_traffic_filtering=True,id=8cdca031-de5b-4956-a27b-c6c6320c9764,network=Network(ee3efe32-94e6-45cb-ae71-b379f4a2309a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8cdca031-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:24:44 np0005486808 nova_compute[259627]: 2025-10-14 09:24:44.079 2 DEBUG nova.objects.instance [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'pci_devices' on Instance uuid c4b3476e-7b32-4a60-ad45-41cb6716adaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:24:44 np0005486808 nova_compute[259627]: 2025-10-14 09:24:44.104 2 DEBUG nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:24:44 np0005486808 nova_compute[259627]:  <uuid>c4b3476e-7b32-4a60-ad45-41cb6716adaf</uuid>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:  <name>instance-0000007b</name>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:24:44 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:      <nova:name>tempest-TestNetworkBasicOps-server-922814987</nova:name>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:24:43</nova:creationTime>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:24:44 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:        <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:        <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:        <nova:port uuid="8cdca031-de5b-4956-a27b-c6c6320c9764">
Oct 14 05:24:44 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:      <entry name="serial">c4b3476e-7b32-4a60-ad45-41cb6716adaf</entry>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:      <entry name="uuid">c4b3476e-7b32-4a60-ad45-41cb6716adaf</entry>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:24:44 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/c4b3476e-7b32-4a60-ad45-41cb6716adaf_disk">
Oct 14 05:24:44 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:24:44 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:24:44 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/c4b3476e-7b32-4a60-ad45-41cb6716adaf_disk.config">
Oct 14 05:24:44 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:24:44 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:24:44 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:ed:fc:28"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:      <target dev="tap8cdca031-de"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:24:44 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/c4b3476e-7b32-4a60-ad45-41cb6716adaf/console.log" append="off"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:24:44 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:24:44 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:24:44 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:24:44 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:24:44 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:24:44 np0005486808 nova_compute[259627]: 2025-10-14 09:24:44.106 2 DEBUG nova.compute.manager [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Preparing to wait for external event network-vif-plugged-8cdca031-de5b-4956-a27b-c6c6320c9764 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:24:44 np0005486808 nova_compute[259627]: 2025-10-14 09:24:44.106 2 DEBUG oslo_concurrency.lockutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "c4b3476e-7b32-4a60-ad45-41cb6716adaf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:24:44 np0005486808 nova_compute[259627]: 2025-10-14 09:24:44.107 2 DEBUG oslo_concurrency.lockutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "c4b3476e-7b32-4a60-ad45-41cb6716adaf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:24:44 np0005486808 nova_compute[259627]: 2025-10-14 09:24:44.107 2 DEBUG oslo_concurrency.lockutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "c4b3476e-7b32-4a60-ad45-41cb6716adaf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:24:44 np0005486808 nova_compute[259627]: 2025-10-14 09:24:44.108 2 DEBUG nova.virt.libvirt.vif [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:24:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-922814987',display_name='tempest-TestNetworkBasicOps-server-922814987',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-922814987',id=123,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIDfkSnFvDSr+GSt1dM38bvIyYGUhpm7xroCMStk06IR9Vyf1uV/kgX14ev9LDBZUF7QO+LVF5DG5NihTr1U28RQ9HNU8vPNSalnywo2YhCd69n5Jhi3ssJBlCIEOLbD3Q==',key_name='tempest-TestNetworkBasicOps-1537576407',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-s3vt9dhf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:24:37Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=c4b3476e-7b32-4a60-ad45-41cb6716adaf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8cdca031-de5b-4956-a27b-c6c6320c9764", "address": "fa:16:3e:ed:fc:28", "network": {"id": "ee3efe32-94e6-45cb-ae71-b379f4a2309a", "bridge": "br-int", "label": "tempest-network-smoke--539325759", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cdca031-de", "ovs_interfaceid": "8cdca031-de5b-4956-a27b-c6c6320c9764", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:24:44 np0005486808 nova_compute[259627]: 2025-10-14 09:24:44.109 2 DEBUG nova.network.os_vif_util [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "8cdca031-de5b-4956-a27b-c6c6320c9764", "address": "fa:16:3e:ed:fc:28", "network": {"id": "ee3efe32-94e6-45cb-ae71-b379f4a2309a", "bridge": "br-int", "label": "tempest-network-smoke--539325759", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cdca031-de", "ovs_interfaceid": "8cdca031-de5b-4956-a27b-c6c6320c9764", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:24:44 np0005486808 nova_compute[259627]: 2025-10-14 09:24:44.110 2 DEBUG nova.network.os_vif_util [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:fc:28,bridge_name='br-int',has_traffic_filtering=True,id=8cdca031-de5b-4956-a27b-c6c6320c9764,network=Network(ee3efe32-94e6-45cb-ae71-b379f4a2309a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8cdca031-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:24:44 np0005486808 nova_compute[259627]: 2025-10-14 09:24:44.111 2 DEBUG os_vif [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:fc:28,bridge_name='br-int',has_traffic_filtering=True,id=8cdca031-de5b-4956-a27b-c6c6320c9764,network=Network(ee3efe32-94e6-45cb-ae71-b379f4a2309a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8cdca031-de') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:24:44 np0005486808 nova_compute[259627]: 2025-10-14 09:24:44.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:44 np0005486808 nova_compute[259627]: 2025-10-14 09:24:44.112 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:24:44 np0005486808 nova_compute[259627]: 2025-10-14 09:24:44.113 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:24:44 np0005486808 nova_compute[259627]: 2025-10-14 09:24:44.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:44 np0005486808 nova_compute[259627]: 2025-10-14 09:24:44.117 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8cdca031-de, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:24:44 np0005486808 nova_compute[259627]: 2025-10-14 09:24:44.117 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8cdca031-de, col_values=(('external_ids', {'iface-id': '8cdca031-de5b-4956-a27b-c6c6320c9764', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ed:fc:28', 'vm-uuid': 'c4b3476e-7b32-4a60-ad45-41cb6716adaf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:24:44 np0005486808 nova_compute[259627]: 2025-10-14 09:24:44.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:44 np0005486808 NetworkManager[44885]: <info>  [1760433884.1226] manager: (tap8cdca031-de): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/532)
Oct 14 05:24:44 np0005486808 nova_compute[259627]: 2025-10-14 09:24:44.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:24:44 np0005486808 nova_compute[259627]: 2025-10-14 09:24:44.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:44 np0005486808 nova_compute[259627]: 2025-10-14 09:24:44.130 2 INFO os_vif [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:fc:28,bridge_name='br-int',has_traffic_filtering=True,id=8cdca031-de5b-4956-a27b-c6c6320c9764,network=Network(ee3efe32-94e6-45cb-ae71-b379f4a2309a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8cdca031-de')#033[00m
Oct 14 05:24:44 np0005486808 nova_compute[259627]: 2025-10-14 09:24:44.193 2 DEBUG nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:24:44 np0005486808 nova_compute[259627]: 2025-10-14 09:24:44.193 2 DEBUG nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:24:44 np0005486808 nova_compute[259627]: 2025-10-14 09:24:44.194 2 DEBUG nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No VIF found with MAC fa:16:3e:ed:fc:28, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:24:44 np0005486808 nova_compute[259627]: 2025-10-14 09:24:44.194 2 INFO nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Using config drive#033[00m
Oct 14 05:24:44 np0005486808 nova_compute[259627]: 2025-10-14 09:24:44.231 2 DEBUG nova.storage.rbd_utils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image c4b3476e-7b32-4a60-ad45-41cb6716adaf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:24:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2148: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 1.8 MiB/s wr, 71 op/s
Oct 14 05:24:44 np0005486808 nova_compute[259627]: 2025-10-14 09:24:44.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:44 np0005486808 nova_compute[259627]: 2025-10-14 09:24:44.838 2 INFO nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Creating config drive at /var/lib/nova/instances/c4b3476e-7b32-4a60-ad45-41cb6716adaf/disk.config#033[00m
Oct 14 05:24:44 np0005486808 nova_compute[259627]: 2025-10-14 09:24:44.848 2 DEBUG oslo_concurrency.processutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c4b3476e-7b32-4a60-ad45-41cb6716adaf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3_49qynh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:24:44 np0005486808 nova_compute[259627]: 2025-10-14 09:24:44.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:24:44 np0005486808 nova_compute[259627]: 2025-10-14 09:24:44.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:24:44 np0005486808 nova_compute[259627]: 2025-10-14 09:24:44.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 05:24:44 np0005486808 nova_compute[259627]: 2025-10-14 09:24:44.997 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct 14 05:24:44 np0005486808 nova_compute[259627]: 2025-10-14 09:24:44.998 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 14 05:24:45 np0005486808 nova_compute[259627]: 2025-10-14 09:24:45.023 2 DEBUG oslo_concurrency.processutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c4b3476e-7b32-4a60-ad45-41cb6716adaf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3_49qynh" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:24:45 np0005486808 nova_compute[259627]: 2025-10-14 09:24:45.055 2 DEBUG nova.storage.rbd_utils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image c4b3476e-7b32-4a60-ad45-41cb6716adaf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:24:45 np0005486808 nova_compute[259627]: 2025-10-14 09:24:45.059 2 DEBUG oslo_concurrency.processutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c4b3476e-7b32-4a60-ad45-41cb6716adaf/disk.config c4b3476e-7b32-4a60-ad45-41cb6716adaf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:24:45 np0005486808 nova_compute[259627]: 2025-10-14 09:24:45.282 2 DEBUG oslo_concurrency.processutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c4b3476e-7b32-4a60-ad45-41cb6716adaf/disk.config c4b3476e-7b32-4a60-ad45-41cb6716adaf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.223s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:24:45 np0005486808 nova_compute[259627]: 2025-10-14 09:24:45.283 2 INFO nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Deleting local config drive /var/lib/nova/instances/c4b3476e-7b32-4a60-ad45-41cb6716adaf/disk.config because it was imported into RBD.#033[00m
Oct 14 05:24:45 np0005486808 kernel: tap8cdca031-de: entered promiscuous mode
Oct 14 05:24:45 np0005486808 ovn_controller[152662]: 2025-10-14T09:24:45Z|01318|binding|INFO|Claiming lport 8cdca031-de5b-4956-a27b-c6c6320c9764 for this chassis.
Oct 14 05:24:45 np0005486808 ovn_controller[152662]: 2025-10-14T09:24:45Z|01319|binding|INFO|8cdca031-de5b-4956-a27b-c6c6320c9764: Claiming fa:16:3e:ed:fc:28 10.100.0.14
Oct 14 05:24:45 np0005486808 nova_compute[259627]: 2025-10-14 09:24:45.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:45 np0005486808 NetworkManager[44885]: <info>  [1760433885.3706] manager: (tap8cdca031-de): new Tun device (/org/freedesktop/NetworkManager/Devices/533)
Oct 14 05:24:45 np0005486808 nova_compute[259627]: 2025-10-14 09:24:45.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.384 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:fc:28 10.100.0.14'], port_security=['fa:16:3e:ed:fc:28 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1769258053', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'c4b3476e-7b32-4a60-ad45-41cb6716adaf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee3efe32-94e6-45cb-ae71-b379f4a2309a', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1769258053', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '2', 'neutron:security_group_ids': '72094e2b-bb7f-4f40-9ee7-497daf8e97ff', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=596a534e-c0c1-49ed-bcdd-00855a90d08e, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=8cdca031-de5b-4956-a27b-c6c6320c9764) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.385 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 8cdca031-de5b-4956-a27b-c6c6320c9764 in datapath ee3efe32-94e6-45cb-ae71-b379f4a2309a bound to our chassis#033[00m
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.386 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee3efe32-94e6-45cb-ae71-b379f4a2309a#033[00m
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.400 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3b0ff928-354b-4ca7-ae56-2d03b8717cd0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.401 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapee3efe32-91 in ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.403 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapee3efe32-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.403 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[98524d8f-2a24-4e52-9e70-7455177c831f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.405 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d8798c73-af39-42fb-a96f-c663ac8280ab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:45 np0005486808 systemd-machined[214636]: New machine qemu-156-instance-0000007b.
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.422 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[bb8b677b-4869-42ca-b939-aaa1563b65a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:45 np0005486808 systemd[1]: Started Virtual Machine qemu-156-instance-0000007b.
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.454 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[859a14a1-cb52-4f55-b7cc-c5071840d881]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:45 np0005486808 systemd-udevd[384029]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:24:45 np0005486808 ovn_controller[152662]: 2025-10-14T09:24:45Z|01320|binding|INFO|Setting lport 8cdca031-de5b-4956-a27b-c6c6320c9764 ovn-installed in OVS
Oct 14 05:24:45 np0005486808 ovn_controller[152662]: 2025-10-14T09:24:45Z|01321|binding|INFO|Setting lport 8cdca031-de5b-4956-a27b-c6c6320c9764 up in Southbound
Oct 14 05:24:45 np0005486808 nova_compute[259627]: 2025-10-14 09:24:45.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:45 np0005486808 NetworkManager[44885]: <info>  [1760433885.4878] device (tap8cdca031-de): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:24:45 np0005486808 NetworkManager[44885]: <info>  [1760433885.4894] device (tap8cdca031-de): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.495 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1c8d23ed-ea68-4ff4-b7ca-a03fc9eb876e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.502 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1fa58725-fbf1-4ad9-8678-a26cdb09070b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:45 np0005486808 NetworkManager[44885]: <info>  [1760433885.5042] manager: (tapee3efe32-90): new Veth device (/org/freedesktop/NetworkManager/Devices/534)
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.549 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[6a180720-36b9-4ced-841a-fa29b5cdb48d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:45 np0005486808 nova_compute[259627]: 2025-10-14 09:24:45.554 2 DEBUG nova.network.neutron [req-55631e5e-e17e-4af2-9752-60cc9442ac1d req-088eb718-5df1-4df3-88d0-2196689095fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Updated VIF entry in instance network info cache for port 8cdca031-de5b-4956-a27b-c6c6320c9764. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:24:45 np0005486808 nova_compute[259627]: 2025-10-14 09:24:45.555 2 DEBUG nova.network.neutron [req-55631e5e-e17e-4af2-9752-60cc9442ac1d req-088eb718-5df1-4df3-88d0-2196689095fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Updating instance_info_cache with network_info: [{"id": "8cdca031-de5b-4956-a27b-c6c6320c9764", "address": "fa:16:3e:ed:fc:28", "network": {"id": "ee3efe32-94e6-45cb-ae71-b379f4a2309a", "bridge": "br-int", "label": "tempest-network-smoke--539325759", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cdca031-de", "ovs_interfaceid": "8cdca031-de5b-4956-a27b-c6c6320c9764", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.560 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f249cbd3-b754-4837-9b9c-7df4bccbb7a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:45 np0005486808 nova_compute[259627]: 2025-10-14 09:24:45.578 2 DEBUG oslo_concurrency.lockutils [req-55631e5e-e17e-4af2-9752-60cc9442ac1d req-088eb718-5df1-4df3-88d0-2196689095fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-c4b3476e-7b32-4a60-ad45-41cb6716adaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:24:45 np0005486808 NetworkManager[44885]: <info>  [1760433885.5916] device (tapee3efe32-90): carrier: link connected
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.600 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e3b5ffa0-20b0-4449-b44d-57d7e79cef90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.617 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c32d276d-3e36-4dfd-8c99-15aaee4bbec5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee3efe32-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:fa:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 377], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 766435, 'reachable_time': 28860, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 384058, 'error': None, 'target': 'ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.639 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d9716ba0-c8d3-4a73-b584-3b3ce0e1e7a0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecf:fab9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 766435, 'tstamp': 766435}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 384059, 'error': None, 'target': 'ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.655 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9a56e693-7d85-4004-8446-04ae45cacb7b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee3efe32-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:fa:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 377], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 766435, 'reachable_time': 28860, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 384060, 'error': None, 'target': 'ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.694 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[658d052a-f2e2-4c2a-bdae-3ac5f68fc8d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.763 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[21595f0a-9397-40ab-ae8c-738b4d8c2f25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.765 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee3efe32-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.765 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.766 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee3efe32-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:24:45 np0005486808 nova_compute[259627]: 2025-10-14 09:24:45.767 2 DEBUG nova.compute.manager [req-e3d374a8-ad59-4a9f-9dfd-4ee009862e02 req-25bb980b-a15e-4ad6-9e26-937d2bca9b04 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Received event network-vif-plugged-8cdca031-de5b-4956-a27b-c6c6320c9764 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:24:45 np0005486808 nova_compute[259627]: 2025-10-14 09:24:45.768 2 DEBUG oslo_concurrency.lockutils [req-e3d374a8-ad59-4a9f-9dfd-4ee009862e02 req-25bb980b-a15e-4ad6-9e26-937d2bca9b04 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c4b3476e-7b32-4a60-ad45-41cb6716adaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:24:45 np0005486808 nova_compute[259627]: 2025-10-14 09:24:45.769 2 DEBUG oslo_concurrency.lockutils [req-e3d374a8-ad59-4a9f-9dfd-4ee009862e02 req-25bb980b-a15e-4ad6-9e26-937d2bca9b04 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c4b3476e-7b32-4a60-ad45-41cb6716adaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:24:45 np0005486808 nova_compute[259627]: 2025-10-14 09:24:45.769 2 DEBUG oslo_concurrency.lockutils [req-e3d374a8-ad59-4a9f-9dfd-4ee009862e02 req-25bb980b-a15e-4ad6-9e26-937d2bca9b04 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c4b3476e-7b32-4a60-ad45-41cb6716adaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:24:45 np0005486808 nova_compute[259627]: 2025-10-14 09:24:45.770 2 DEBUG nova.compute.manager [req-e3d374a8-ad59-4a9f-9dfd-4ee009862e02 req-25bb980b-a15e-4ad6-9e26-937d2bca9b04 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Processing event network-vif-plugged-8cdca031-de5b-4956-a27b-c6c6320c9764 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:24:45 np0005486808 nova_compute[259627]: 2025-10-14 09:24:45.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:45 np0005486808 kernel: tapee3efe32-90: entered promiscuous mode
Oct 14 05:24:45 np0005486808 NetworkManager[44885]: <info>  [1760433885.8027] manager: (tapee3efe32-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/535)
Oct 14 05:24:45 np0005486808 nova_compute[259627]: 2025-10-14 09:24:45.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.808 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee3efe32-90, col_values=(('external_ids', {'iface-id': '77c455ce-a111-4a2d-9630-1d923bb22b5c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:24:45 np0005486808 nova_compute[259627]: 2025-10-14 09:24:45.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:45 np0005486808 ovn_controller[152662]: 2025-10-14T09:24:45Z|01322|binding|INFO|Releasing lport 77c455ce-a111-4a2d-9630-1d923bb22b5c from this chassis (sb_readonly=0)
Oct 14 05:24:45 np0005486808 nova_compute[259627]: 2025-10-14 09:24:45.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.840 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ee3efe32-94e6-45cb-ae71-b379f4a2309a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ee3efe32-94e6-45cb-ae71-b379f4a2309a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.841 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bfa9f65f-6d3f-4ae3-a6f3-097b5be63f42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.841 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-ee3efe32-94e6-45cb-ae71-b379f4a2309a
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/ee3efe32-94e6-45cb-ae71-b379f4a2309a.pid.haproxy
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID ee3efe32-94e6-45cb-ae71-b379f4a2309a
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:24:45 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:45.843 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a', 'env', 'PROCESS_TAG=haproxy-ee3efe32-94e6-45cb-ae71-b379f4a2309a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ee3efe32-94e6-45cb-ae71-b379f4a2309a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:24:46 np0005486808 podman[384092]: 2025-10-14 09:24:46.262461559 +0000 UTC m=+0.083973686 container create 8a59dc4e0398e28c8c32e91d87812ee0ef9c0a36cd08b2da7035af7cdc38f706 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 14 05:24:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2149: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 14 05:24:46 np0005486808 systemd[1]: Started libpod-conmon-8a59dc4e0398e28c8c32e91d87812ee0ef9c0a36cd08b2da7035af7cdc38f706.scope.
Oct 14 05:24:46 np0005486808 podman[384092]: 2025-10-14 09:24:46.22241693 +0000 UTC m=+0.043929067 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:24:46 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:24:46 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43da196363ab20472740d45986e51af165d5a9498f073d634db3931ca381187d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:24:46 np0005486808 podman[384092]: 2025-10-14 09:24:46.344406344 +0000 UTC m=+0.165918491 container init 8a59dc4e0398e28c8c32e91d87812ee0ef9c0a36cd08b2da7035af7cdc38f706 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 05:24:46 np0005486808 podman[384092]: 2025-10-14 09:24:46.355257542 +0000 UTC m=+0.176769659 container start 8a59dc4e0398e28c8c32e91d87812ee0ef9c0a36cd08b2da7035af7cdc38f706 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct 14 05:24:46 np0005486808 neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a[384107]: [NOTICE]   (384111) : New worker (384113) forked
Oct 14 05:24:46 np0005486808 neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a[384107]: [NOTICE]   (384111) : Loading success.
Oct 14 05:24:47 np0005486808 nova_compute[259627]: 2025-10-14 09:24:47.365 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433887.364822, c4b3476e-7b32-4a60-ad45-41cb6716adaf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:24:47 np0005486808 nova_compute[259627]: 2025-10-14 09:24:47.366 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] VM Started (Lifecycle Event)#033[00m
Oct 14 05:24:47 np0005486808 nova_compute[259627]: 2025-10-14 09:24:47.368 2 DEBUG nova.compute.manager [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:24:47 np0005486808 nova_compute[259627]: 2025-10-14 09:24:47.372 2 DEBUG nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:24:47 np0005486808 nova_compute[259627]: 2025-10-14 09:24:47.377 2 INFO nova.virt.libvirt.driver [-] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Instance spawned successfully.#033[00m
Oct 14 05:24:47 np0005486808 nova_compute[259627]: 2025-10-14 09:24:47.378 2 DEBUG nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:24:47 np0005486808 nova_compute[259627]: 2025-10-14 09:24:47.397 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:24:47 np0005486808 nova_compute[259627]: 2025-10-14 09:24:47.407 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:24:47 np0005486808 nova_compute[259627]: 2025-10-14 09:24:47.414 2 DEBUG nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:24:47 np0005486808 nova_compute[259627]: 2025-10-14 09:24:47.415 2 DEBUG nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:24:47 np0005486808 nova_compute[259627]: 2025-10-14 09:24:47.417 2 DEBUG nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:24:47 np0005486808 nova_compute[259627]: 2025-10-14 09:24:47.417 2 DEBUG nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:24:47 np0005486808 nova_compute[259627]: 2025-10-14 09:24:47.418 2 DEBUG nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:24:47 np0005486808 nova_compute[259627]: 2025-10-14 09:24:47.419 2 DEBUG nova.virt.libvirt.driver [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:24:47 np0005486808 nova_compute[259627]: 2025-10-14 09:24:47.435 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:24:47 np0005486808 nova_compute[259627]: 2025-10-14 09:24:47.436 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433887.3653703, c4b3476e-7b32-4a60-ad45-41cb6716adaf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:24:47 np0005486808 nova_compute[259627]: 2025-10-14 09:24:47.436 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:24:47 np0005486808 nova_compute[259627]: 2025-10-14 09:24:47.467 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:24:47 np0005486808 nova_compute[259627]: 2025-10-14 09:24:47.473 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433887.3714688, c4b3476e-7b32-4a60-ad45-41cb6716adaf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:24:47 np0005486808 nova_compute[259627]: 2025-10-14 09:24:47.473 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:24:47 np0005486808 nova_compute[259627]: 2025-10-14 09:24:47.499 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:24:47 np0005486808 nova_compute[259627]: 2025-10-14 09:24:47.504 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:24:47 np0005486808 nova_compute[259627]: 2025-10-14 09:24:47.510 2 INFO nova.compute.manager [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Took 9.67 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:24:47 np0005486808 nova_compute[259627]: 2025-10-14 09:24:47.511 2 DEBUG nova.compute.manager [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:24:47 np0005486808 nova_compute[259627]: 2025-10-14 09:24:47.523 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:24:47 np0005486808 nova_compute[259627]: 2025-10-14 09:24:47.580 2 INFO nova.compute.manager [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Took 10.74 seconds to build instance.#033[00m
Oct 14 05:24:47 np0005486808 nova_compute[259627]: 2025-10-14 09:24:47.599 2 DEBUG oslo_concurrency.lockutils [None req-8d982bd6-cccb-49db-a926-1fcce2dc193d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "c4b3476e-7b32-4a60-ad45-41cb6716adaf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:24:47 np0005486808 podman[384165]: 2025-10-14 09:24:47.669991769 +0000 UTC m=+0.071087948 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible)
Oct 14 05:24:47 np0005486808 podman[384164]: 2025-10-14 09:24:47.700132263 +0000 UTC m=+0.101977020 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 14 05:24:47 np0005486808 nova_compute[259627]: 2025-10-14 09:24:47.881 2 DEBUG nova.compute.manager [req-ee294a03-ceaf-4f17-bc39-1173d5856c5d req-71f643cf-dece-46c7-ae0e-f3662fa29a34 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Received event network-vif-plugged-8cdca031-de5b-4956-a27b-c6c6320c9764 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:24:47 np0005486808 nova_compute[259627]: 2025-10-14 09:24:47.882 2 DEBUG oslo_concurrency.lockutils [req-ee294a03-ceaf-4f17-bc39-1173d5856c5d req-71f643cf-dece-46c7-ae0e-f3662fa29a34 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c4b3476e-7b32-4a60-ad45-41cb6716adaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:24:47 np0005486808 nova_compute[259627]: 2025-10-14 09:24:47.883 2 DEBUG oslo_concurrency.lockutils [req-ee294a03-ceaf-4f17-bc39-1173d5856c5d req-71f643cf-dece-46c7-ae0e-f3662fa29a34 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c4b3476e-7b32-4a60-ad45-41cb6716adaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:24:47 np0005486808 nova_compute[259627]: 2025-10-14 09:24:47.883 2 DEBUG oslo_concurrency.lockutils [req-ee294a03-ceaf-4f17-bc39-1173d5856c5d req-71f643cf-dece-46c7-ae0e-f3662fa29a34 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c4b3476e-7b32-4a60-ad45-41cb6716adaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:24:47 np0005486808 nova_compute[259627]: 2025-10-14 09:24:47.884 2 DEBUG nova.compute.manager [req-ee294a03-ceaf-4f17-bc39-1173d5856c5d req-71f643cf-dece-46c7-ae0e-f3662fa29a34 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] No waiting events found dispatching network-vif-plugged-8cdca031-de5b-4956-a27b-c6c6320c9764 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:24:47 np0005486808 nova_compute[259627]: 2025-10-14 09:24:47.885 2 WARNING nova.compute.manager [req-ee294a03-ceaf-4f17-bc39-1173d5856c5d req-71f643cf-dece-46c7-ae0e-f3662fa29a34 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Received unexpected event network-vif-plugged-8cdca031-de5b-4956-a27b-c6c6320c9764 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:24:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:24:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2150: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 14 05:24:49 np0005486808 nova_compute[259627]: 2025-10-14 09:24:49.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:49 np0005486808 nova_compute[259627]: 2025-10-14 09:24:49.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:49 np0005486808 nova_compute[259627]: 2025-10-14 09:24:49.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:24:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2151: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 543 KiB/s rd, 1.8 MiB/s wr, 122 op/s
Oct 14 05:24:51 np0005486808 ovn_controller[152662]: 2025-10-14T09:24:51Z|01323|binding|INFO|Releasing lport 77c455ce-a111-4a2d-9630-1d923bb22b5c from this chassis (sb_readonly=0)
Oct 14 05:24:51 np0005486808 NetworkManager[44885]: <info>  [1760433891.1147] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/536)
Oct 14 05:24:51 np0005486808 nova_compute[259627]: 2025-10-14 09:24:51.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:51 np0005486808 NetworkManager[44885]: <info>  [1760433891.1171] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/537)
Oct 14 05:24:51 np0005486808 ovn_controller[152662]: 2025-10-14T09:24:51Z|01324|binding|INFO|Releasing lport 77c455ce-a111-4a2d-9630-1d923bb22b5c from this chassis (sb_readonly=0)
Oct 14 05:24:51 np0005486808 nova_compute[259627]: 2025-10-14 09:24:51.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:51 np0005486808 nova_compute[259627]: 2025-10-14 09:24:51.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:51 np0005486808 nova_compute[259627]: 2025-10-14 09:24:51.835 2 DEBUG nova.compute.manager [req-f3423a93-ffcd-457d-9e10-de4abe1c20c5 req-c08e6aeb-8d59-4c30-bf78-7457dd0eb2ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Received event network-changed-8cdca031-de5b-4956-a27b-c6c6320c9764 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:24:51 np0005486808 nova_compute[259627]: 2025-10-14 09:24:51.836 2 DEBUG nova.compute.manager [req-f3423a93-ffcd-457d-9e10-de4abe1c20c5 req-c08e6aeb-8d59-4c30-bf78-7457dd0eb2ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Refreshing instance network info cache due to event network-changed-8cdca031-de5b-4956-a27b-c6c6320c9764. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:24:51 np0005486808 nova_compute[259627]: 2025-10-14 09:24:51.837 2 DEBUG oslo_concurrency.lockutils [req-f3423a93-ffcd-457d-9e10-de4abe1c20c5 req-c08e6aeb-8d59-4c30-bf78-7457dd0eb2ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-c4b3476e-7b32-4a60-ad45-41cb6716adaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:24:51 np0005486808 nova_compute[259627]: 2025-10-14 09:24:51.837 2 DEBUG oslo_concurrency.lockutils [req-f3423a93-ffcd-457d-9e10-de4abe1c20c5 req-c08e6aeb-8d59-4c30-bf78-7457dd0eb2ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-c4b3476e-7b32-4a60-ad45-41cb6716adaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:24:51 np0005486808 nova_compute[259627]: 2025-10-14 09:24:51.838 2 DEBUG nova.network.neutron [req-f3423a93-ffcd-457d-9e10-de4abe1c20c5 req-c08e6aeb-8d59-4c30-bf78-7457dd0eb2ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Refreshing network info cache for port 8cdca031-de5b-4956-a27b-c6c6320c9764 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:24:51 np0005486808 nova_compute[259627]: 2025-10-14 09:24:51.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:24:52 np0005486808 nova_compute[259627]: 2025-10-14 09:24:52.032 2 DEBUG oslo_concurrency.lockutils [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "c4b3476e-7b32-4a60-ad45-41cb6716adaf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:24:52 np0005486808 nova_compute[259627]: 2025-10-14 09:24:52.033 2 DEBUG oslo_concurrency.lockutils [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "c4b3476e-7b32-4a60-ad45-41cb6716adaf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:24:52 np0005486808 nova_compute[259627]: 2025-10-14 09:24:52.034 2 DEBUG oslo_concurrency.lockutils [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "c4b3476e-7b32-4a60-ad45-41cb6716adaf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:24:52 np0005486808 nova_compute[259627]: 2025-10-14 09:24:52.035 2 DEBUG oslo_concurrency.lockutils [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "c4b3476e-7b32-4a60-ad45-41cb6716adaf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:24:52 np0005486808 nova_compute[259627]: 2025-10-14 09:24:52.035 2 DEBUG oslo_concurrency.lockutils [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "c4b3476e-7b32-4a60-ad45-41cb6716adaf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:24:52 np0005486808 nova_compute[259627]: 2025-10-14 09:24:52.037 2 INFO nova.compute.manager [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Terminating instance#033[00m
Oct 14 05:24:52 np0005486808 nova_compute[259627]: 2025-10-14 09:24:52.039 2 DEBUG nova.compute.manager [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:24:52 np0005486808 kernel: tap8cdca031-de (unregistering): left promiscuous mode
Oct 14 05:24:52 np0005486808 NetworkManager[44885]: <info>  [1760433892.0892] device (tap8cdca031-de): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:24:52 np0005486808 ovn_controller[152662]: 2025-10-14T09:24:52Z|01325|binding|INFO|Releasing lport 8cdca031-de5b-4956-a27b-c6c6320c9764 from this chassis (sb_readonly=0)
Oct 14 05:24:52 np0005486808 ovn_controller[152662]: 2025-10-14T09:24:52Z|01326|binding|INFO|Setting lport 8cdca031-de5b-4956-a27b-c6c6320c9764 down in Southbound
Oct 14 05:24:52 np0005486808 nova_compute[259627]: 2025-10-14 09:24:52.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:52 np0005486808 ovn_controller[152662]: 2025-10-14T09:24:52Z|01327|binding|INFO|Removing iface tap8cdca031-de ovn-installed in OVS
Oct 14 05:24:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:52.115 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:fc:28 10.100.0.14'], port_security=['fa:16:3e:ed:fc:28 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1769258053', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'c4b3476e-7b32-4a60-ad45-41cb6716adaf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee3efe32-94e6-45cb-ae71-b379f4a2309a', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1769258053', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '4', 'neutron:security_group_ids': '72094e2b-bb7f-4f40-9ee7-497daf8e97ff', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.240'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=596a534e-c0c1-49ed-bcdd-00855a90d08e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=8cdca031-de5b-4956-a27b-c6c6320c9764) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:24:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:52.117 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 8cdca031-de5b-4956-a27b-c6c6320c9764 in datapath ee3efe32-94e6-45cb-ae71-b379f4a2309a unbound from our chassis#033[00m
Oct 14 05:24:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:52.119 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ee3efe32-94e6-45cb-ae71-b379f4a2309a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:24:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:52.121 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cc424803-2bb6-441e-8a41-4e0295526b09]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:52.122 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a namespace which is not needed anymore#033[00m
Oct 14 05:24:52 np0005486808 nova_compute[259627]: 2025-10-14 09:24:52.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:52 np0005486808 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d0000007b.scope: Deactivated successfully.
Oct 14 05:24:52 np0005486808 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d0000007b.scope: Consumed 6.626s CPU time.
Oct 14 05:24:52 np0005486808 systemd-machined[214636]: Machine qemu-156-instance-0000007b terminated.
Oct 14 05:24:52 np0005486808 nova_compute[259627]: 2025-10-14 09:24:52.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:52 np0005486808 nova_compute[259627]: 2025-10-14 09:24:52.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2152: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.5 MiB/s wr, 159 op/s
Oct 14 05:24:52 np0005486808 nova_compute[259627]: 2025-10-14 09:24:52.277 2 INFO nova.virt.libvirt.driver [-] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Instance destroyed successfully.#033[00m
Oct 14 05:24:52 np0005486808 nova_compute[259627]: 2025-10-14 09:24:52.278 2 DEBUG nova.objects.instance [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'resources' on Instance uuid c4b3476e-7b32-4a60-ad45-41cb6716adaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:24:52 np0005486808 neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a[384107]: [NOTICE]   (384111) : haproxy version is 2.8.14-c23fe91
Oct 14 05:24:52 np0005486808 neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a[384107]: [NOTICE]   (384111) : path to executable is /usr/sbin/haproxy
Oct 14 05:24:52 np0005486808 neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a[384107]: [WARNING]  (384111) : Exiting Master process...
Oct 14 05:24:52 np0005486808 neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a[384107]: [WARNING]  (384111) : Exiting Master process...
Oct 14 05:24:52 np0005486808 neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a[384107]: [ALERT]    (384111) : Current worker (384113) exited with code 143 (Terminated)
Oct 14 05:24:52 np0005486808 neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a[384107]: [WARNING]  (384111) : All workers exited. Exiting... (0)
Oct 14 05:24:52 np0005486808 systemd[1]: libpod-8a59dc4e0398e28c8c32e91d87812ee0ef9c0a36cd08b2da7035af7cdc38f706.scope: Deactivated successfully.
Oct 14 05:24:52 np0005486808 nova_compute[259627]: 2025-10-14 09:24:52.291 2 DEBUG nova.virt.libvirt.vif [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:24:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-922814987',display_name='tempest-TestNetworkBasicOps-server-922814987',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-922814987',id=123,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIDfkSnFvDSr+GSt1dM38bvIyYGUhpm7xroCMStk06IR9Vyf1uV/kgX14ev9LDBZUF7QO+LVF5DG5NihTr1U28RQ9HNU8vPNSalnywo2YhCd69n5Jhi3ssJBlCIEOLbD3Q==',key_name='tempest-TestNetworkBasicOps-1537576407',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:24:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-s3vt9dhf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:24:47Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=c4b3476e-7b32-4a60-ad45-41cb6716adaf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8cdca031-de5b-4956-a27b-c6c6320c9764", "address": "fa:16:3e:ed:fc:28", "network": {"id": "ee3efe32-94e6-45cb-ae71-b379f4a2309a", "bridge": "br-int", "label": "tempest-network-smoke--539325759", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cdca031-de", "ovs_interfaceid": "8cdca031-de5b-4956-a27b-c6c6320c9764", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:24:52 np0005486808 nova_compute[259627]: 2025-10-14 09:24:52.291 2 DEBUG nova.network.os_vif_util [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "8cdca031-de5b-4956-a27b-c6c6320c9764", "address": "fa:16:3e:ed:fc:28", "network": {"id": "ee3efe32-94e6-45cb-ae71-b379f4a2309a", "bridge": "br-int", "label": "tempest-network-smoke--539325759", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cdca031-de", "ovs_interfaceid": "8cdca031-de5b-4956-a27b-c6c6320c9764", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:24:52 np0005486808 nova_compute[259627]: 2025-10-14 09:24:52.292 2 DEBUG nova.network.os_vif_util [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:fc:28,bridge_name='br-int',has_traffic_filtering=True,id=8cdca031-de5b-4956-a27b-c6c6320c9764,network=Network(ee3efe32-94e6-45cb-ae71-b379f4a2309a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8cdca031-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:24:52 np0005486808 nova_compute[259627]: 2025-10-14 09:24:52.292 2 DEBUG os_vif [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:fc:28,bridge_name='br-int',has_traffic_filtering=True,id=8cdca031-de5b-4956-a27b-c6c6320c9764,network=Network(ee3efe32-94e6-45cb-ae71-b379f4a2309a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8cdca031-de') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:24:52 np0005486808 podman[384231]: 2025-10-14 09:24:52.294916398 +0000 UTC m=+0.057865431 container died 8a59dc4e0398e28c8c32e91d87812ee0ef9c0a36cd08b2da7035af7cdc38f706 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 14 05:24:52 np0005486808 nova_compute[259627]: 2025-10-14 09:24:52.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:52 np0005486808 nova_compute[259627]: 2025-10-14 09:24:52.294 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8cdca031-de, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:24:52 np0005486808 nova_compute[259627]: 2025-10-14 09:24:52.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:52 np0005486808 nova_compute[259627]: 2025-10-14 09:24:52.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:52 np0005486808 nova_compute[259627]: 2025-10-14 09:24:52.305 2 INFO os_vif [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:fc:28,bridge_name='br-int',has_traffic_filtering=True,id=8cdca031-de5b-4956-a27b-c6c6320c9764,network=Network(ee3efe32-94e6-45cb-ae71-b379f4a2309a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8cdca031-de')#033[00m
Oct 14 05:24:52 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8a59dc4e0398e28c8c32e91d87812ee0ef9c0a36cd08b2da7035af7cdc38f706-userdata-shm.mount: Deactivated successfully.
Oct 14 05:24:52 np0005486808 systemd[1]: var-lib-containers-storage-overlay-43da196363ab20472740d45986e51af165d5a9498f073d634db3931ca381187d-merged.mount: Deactivated successfully.
Oct 14 05:24:52 np0005486808 podman[384231]: 2025-10-14 09:24:52.34760319 +0000 UTC m=+0.110552203 container cleanup 8a59dc4e0398e28c8c32e91d87812ee0ef9c0a36cd08b2da7035af7cdc38f706 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2)
Oct 14 05:24:52 np0005486808 systemd[1]: libpod-conmon-8a59dc4e0398e28c8c32e91d87812ee0ef9c0a36cd08b2da7035af7cdc38f706.scope: Deactivated successfully.
Oct 14 05:24:52 np0005486808 podman[384288]: 2025-10-14 09:24:52.432845076 +0000 UTC m=+0.051209576 container remove 8a59dc4e0398e28c8c32e91d87812ee0ef9c0a36cd08b2da7035af7cdc38f706 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 14 05:24:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:52.444 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[68dc95f8-94a9-4f54-b539-28b829a187df]: (4, ('Tue Oct 14 09:24:52 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a (8a59dc4e0398e28c8c32e91d87812ee0ef9c0a36cd08b2da7035af7cdc38f706)\n8a59dc4e0398e28c8c32e91d87812ee0ef9c0a36cd08b2da7035af7cdc38f706\nTue Oct 14 09:24:52 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a (8a59dc4e0398e28c8c32e91d87812ee0ef9c0a36cd08b2da7035af7cdc38f706)\n8a59dc4e0398e28c8c32e91d87812ee0ef9c0a36cd08b2da7035af7cdc38f706\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:52.447 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[368e0674-2a00-4318-81c5-3b7fada5ce11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:52.448 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee3efe32-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:24:52 np0005486808 kernel: tapee3efe32-90: left promiscuous mode
Oct 14 05:24:52 np0005486808 nova_compute[259627]: 2025-10-14 09:24:52.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:52.456 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b1bf54b7-f0c6-46a6-9b02-9aa72e6f87e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:52 np0005486808 nova_compute[259627]: 2025-10-14 09:24:52.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:52.491 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6656baeb-75ce-4cd8-83de-3de930449c46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:52.492 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bdebaf2f-f3af-4b81-970d-0c797b9f2107]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:52.510 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bca30114-49e6-4abe-a0b1-64b4d0b5d86a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 766425, 'reachable_time': 26523, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 384303, 'error': None, 'target': 'ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:52.512 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:24:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:24:52.512 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[37f5632e-d95b-4129-87fa-a92f71da605c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:24:52 np0005486808 systemd[1]: run-netns-ovnmeta\x2dee3efe32\x2d94e6\x2d45cb\x2dae71\x2db379f4a2309a.mount: Deactivated successfully.
Oct 14 05:24:52 np0005486808 nova_compute[259627]: 2025-10-14 09:24:52.714 2 INFO nova.virt.libvirt.driver [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Deleting instance files /var/lib/nova/instances/c4b3476e-7b32-4a60-ad45-41cb6716adaf_del#033[00m
Oct 14 05:24:52 np0005486808 nova_compute[259627]: 2025-10-14 09:24:52.717 2 INFO nova.virt.libvirt.driver [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Deletion of /var/lib/nova/instances/c4b3476e-7b32-4a60-ad45-41cb6716adaf_del complete#033[00m
Oct 14 05:24:52 np0005486808 nova_compute[259627]: 2025-10-14 09:24:52.761 2 INFO nova.compute.manager [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:24:52 np0005486808 nova_compute[259627]: 2025-10-14 09:24:52.761 2 DEBUG oslo.service.loopingcall [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:24:52 np0005486808 nova_compute[259627]: 2025-10-14 09:24:52.762 2 DEBUG nova.compute.manager [-] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:24:52 np0005486808 nova_compute[259627]: 2025-10-14 09:24:52.762 2 DEBUG nova.network.neutron [-] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:24:52 np0005486808 nova_compute[259627]: 2025-10-14 09:24:52.828 2 DEBUG nova.network.neutron [req-f3423a93-ffcd-457d-9e10-de4abe1c20c5 req-c08e6aeb-8d59-4c30-bf78-7457dd0eb2ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Updated VIF entry in instance network info cache for port 8cdca031-de5b-4956-a27b-c6c6320c9764. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:24:52 np0005486808 nova_compute[259627]: 2025-10-14 09:24:52.828 2 DEBUG nova.network.neutron [req-f3423a93-ffcd-457d-9e10-de4abe1c20c5 req-c08e6aeb-8d59-4c30-bf78-7457dd0eb2ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Updating instance_info_cache with network_info: [{"id": "8cdca031-de5b-4956-a27b-c6c6320c9764", "address": "fa:16:3e:ed:fc:28", "network": {"id": "ee3efe32-94e6-45cb-ae71-b379f4a2309a", "bridge": "br-int", "label": "tempest-network-smoke--539325759", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cdca031-de", "ovs_interfaceid": "8cdca031-de5b-4956-a27b-c6c6320c9764", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:24:52 np0005486808 nova_compute[259627]: 2025-10-14 09:24:52.845 2 DEBUG oslo_concurrency.lockutils [req-f3423a93-ffcd-457d-9e10-de4abe1c20c5 req-c08e6aeb-8d59-4c30-bf78-7457dd0eb2ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-c4b3476e-7b32-4a60-ad45-41cb6716adaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:24:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:24:54 np0005486808 nova_compute[259627]: 2025-10-14 09:24:54.196 2 DEBUG nova.network.neutron [-] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:24:54 np0005486808 nova_compute[259627]: 2025-10-14 09:24:54.217 2 INFO nova.compute.manager [-] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Took 1.45 seconds to deallocate network for instance.#033[00m
Oct 14 05:24:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2153: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 101 op/s
Oct 14 05:24:54 np0005486808 nova_compute[259627]: 2025-10-14 09:24:54.273 2 DEBUG oslo_concurrency.lockutils [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:24:54 np0005486808 nova_compute[259627]: 2025-10-14 09:24:54.274 2 DEBUG oslo_concurrency.lockutils [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:24:54 np0005486808 nova_compute[259627]: 2025-10-14 09:24:54.330 2 DEBUG oslo_concurrency.processutils [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:24:54 np0005486808 nova_compute[259627]: 2025-10-14 09:24:54.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:54 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:24:54 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/114859750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:24:54 np0005486808 nova_compute[259627]: 2025-10-14 09:24:54.837 2 DEBUG oslo_concurrency.processutils [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:24:54 np0005486808 nova_compute[259627]: 2025-10-14 09:24:54.847 2 DEBUG nova.compute.provider_tree [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:24:54 np0005486808 nova_compute[259627]: 2025-10-14 09:24:54.873 2 DEBUG nova.scheduler.client.report [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:24:54 np0005486808 nova_compute[259627]: 2025-10-14 09:24:54.908 2 DEBUG oslo_concurrency.lockutils [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:24:54 np0005486808 nova_compute[259627]: 2025-10-14 09:24:54.960 2 INFO nova.scheduler.client.report [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Deleted allocations for instance c4b3476e-7b32-4a60-ad45-41cb6716adaf#033[00m
Oct 14 05:24:55 np0005486808 nova_compute[259627]: 2025-10-14 09:24:55.039 2 DEBUG oslo_concurrency.lockutils [None req-3db36424-38fb-4ccb-84fd-a4b85712eb41 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "c4b3476e-7b32-4a60-ad45-41cb6716adaf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:24:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2154: 305 pgs: 305 active+clean; 41 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 128 op/s
Oct 14 05:24:57 np0005486808 nova_compute[259627]: 2025-10-14 09:24:57.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:24:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:24:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2155: 305 pgs: 305 active+clean; 41 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 KiB/s wr, 97 op/s
Oct 14 05:24:59 np0005486808 nova_compute[259627]: 2025-10-14 09:24:59.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2156: 305 pgs: 305 active+clean; 41 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 KiB/s wr, 97 op/s
Oct 14 05:25:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2157: 305 pgs: 305 active+clean; 41 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.2 KiB/s wr, 77 op/s
Oct 14 05:25:02 np0005486808 nova_compute[259627]: 2025-10-14 09:25:02.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:25:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:25:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:25:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:25:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:25:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:25:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:25:03 np0005486808 nova_compute[259627]: 2025-10-14 09:25:03.182 2 DEBUG oslo_concurrency.lockutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "c0981e31-738f-44e8-be4c-b64961716660" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:25:03 np0005486808 nova_compute[259627]: 2025-10-14 09:25:03.183 2 DEBUG oslo_concurrency.lockutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "c0981e31-738f-44e8-be4c-b64961716660" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:25:03 np0005486808 nova_compute[259627]: 2025-10-14 09:25:03.197 2 DEBUG nova.compute.manager [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:25:03 np0005486808 nova_compute[259627]: 2025-10-14 09:25:03.288 2 DEBUG oslo_concurrency.lockutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:25:03 np0005486808 nova_compute[259627]: 2025-10-14 09:25:03.288 2 DEBUG oslo_concurrency.lockutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:25:03 np0005486808 nova_compute[259627]: 2025-10-14 09:25:03.300 2 DEBUG nova.virt.hardware [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:25:03 np0005486808 nova_compute[259627]: 2025-10-14 09:25:03.301 2 INFO nova.compute.claims [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:25:03 np0005486808 nova_compute[259627]: 2025-10-14 09:25:03.432 2 DEBUG oslo_concurrency.processutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:25:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:25:03 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1027551998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:25:03 np0005486808 nova_compute[259627]: 2025-10-14 09:25:03.928 2 DEBUG oslo_concurrency.processutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:25:03 np0005486808 nova_compute[259627]: 2025-10-14 09:25:03.936 2 DEBUG nova.compute.provider_tree [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:25:03 np0005486808 nova_compute[259627]: 2025-10-14 09:25:03.954 2 DEBUG nova.scheduler.client.report [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:25:03 np0005486808 nova_compute[259627]: 2025-10-14 09:25:03.987 2 DEBUG oslo_concurrency.lockutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:25:03 np0005486808 nova_compute[259627]: 2025-10-14 09:25:03.988 2 DEBUG nova.compute.manager [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:25:04 np0005486808 nova_compute[259627]: 2025-10-14 09:25:04.098 2 DEBUG nova.compute.manager [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:25:04 np0005486808 nova_compute[259627]: 2025-10-14 09:25:04.098 2 DEBUG nova.network.neutron [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:25:04 np0005486808 nova_compute[259627]: 2025-10-14 09:25:04.123 2 INFO nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:25:04 np0005486808 nova_compute[259627]: 2025-10-14 09:25:04.146 2 DEBUG nova.compute.manager [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:25:04 np0005486808 nova_compute[259627]: 2025-10-14 09:25:04.229 2 DEBUG nova.compute.manager [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:25:04 np0005486808 nova_compute[259627]: 2025-10-14 09:25:04.230 2 DEBUG nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:25:04 np0005486808 nova_compute[259627]: 2025-10-14 09:25:04.231 2 INFO nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Creating image(s)#033[00m
Oct 14 05:25:04 np0005486808 nova_compute[259627]: 2025-10-14 09:25:04.260 2 DEBUG nova.storage.rbd_utils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image c0981e31-738f-44e8-be4c-b64961716660_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:25:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2158: 305 pgs: 305 active+clean; 41 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct 14 05:25:04 np0005486808 nova_compute[259627]: 2025-10-14 09:25:04.287 2 DEBUG nova.storage.rbd_utils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image c0981e31-738f-44e8-be4c-b64961716660_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:25:04 np0005486808 nova_compute[259627]: 2025-10-14 09:25:04.320 2 DEBUG nova.storage.rbd_utils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image c0981e31-738f-44e8-be4c-b64961716660_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:25:04 np0005486808 nova_compute[259627]: 2025-10-14 09:25:04.325 2 DEBUG oslo_concurrency.processutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:25:04 np0005486808 nova_compute[259627]: 2025-10-14 09:25:04.432 2 DEBUG oslo_concurrency.processutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:25:04 np0005486808 nova_compute[259627]: 2025-10-14 09:25:04.433 2 DEBUG oslo_concurrency.lockutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:25:04 np0005486808 nova_compute[259627]: 2025-10-14 09:25:04.434 2 DEBUG oslo_concurrency.lockutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:25:04 np0005486808 nova_compute[259627]: 2025-10-14 09:25:04.435 2 DEBUG oslo_concurrency.lockutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:25:04 np0005486808 nova_compute[259627]: 2025-10-14 09:25:04.471 2 DEBUG nova.storage.rbd_utils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image c0981e31-738f-44e8-be4c-b64961716660_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:25:04 np0005486808 nova_compute[259627]: 2025-10-14 09:25:04.477 2 DEBUG oslo_concurrency.processutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 c0981e31-738f-44e8-be4c-b64961716660_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:25:04 np0005486808 podman[384427]: 2025-10-14 09:25:04.702967993 +0000 UTC m=+0.105355814 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Oct 14 05:25:04 np0005486808 podman[384434]: 2025-10-14 09:25:04.705835904 +0000 UTC m=+0.110731177 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 14 05:25:04 np0005486808 nova_compute[259627]: 2025-10-14 09:25:04.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:04 np0005486808 nova_compute[259627]: 2025-10-14 09:25:04.819 2 DEBUG oslo_concurrency.processutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 c0981e31-738f-44e8-be4c-b64961716660_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.342s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:25:04 np0005486808 nova_compute[259627]: 2025-10-14 09:25:04.881 2 DEBUG nova.storage.rbd_utils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] resizing rbd image c0981e31-738f-44e8-be4c-b64961716660_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:25:04 np0005486808 nova_compute[259627]: 2025-10-14 09:25:04.975 2 DEBUG nova.policy [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '10926c27278e45e7b995f2e53b9d16f9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4562699bdb1548f1bb36819107535620', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:25:04 np0005486808 nova_compute[259627]: 2025-10-14 09:25:04.982 2 DEBUG nova.objects.instance [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'migration_context' on Instance uuid c0981e31-738f-44e8-be4c-b64961716660 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:25:04 np0005486808 nova_compute[259627]: 2025-10-14 09:25:04.997 2 DEBUG nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:25:04 np0005486808 nova_compute[259627]: 2025-10-14 09:25:04.997 2 DEBUG nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Ensure instance console log exists: /var/lib/nova/instances/c0981e31-738f-44e8-be4c-b64961716660/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:25:04 np0005486808 nova_compute[259627]: 2025-10-14 09:25:04.998 2 DEBUG oslo_concurrency.lockutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:25:04 np0005486808 nova_compute[259627]: 2025-10-14 09:25:04.998 2 DEBUG oslo_concurrency.lockutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:25:04 np0005486808 nova_compute[259627]: 2025-10-14 09:25:04.999 2 DEBUG oslo_concurrency.lockutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:25:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:25:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1794537339' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:25:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:25:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1794537339' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:25:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2159: 305 pgs: 305 active+clean; 88 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Oct 14 05:25:06 np0005486808 nova_compute[259627]: 2025-10-14 09:25:06.766 2 DEBUG nova.network.neutron [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Successfully updated port: 8cdca031-de5b-4956-a27b-c6c6320c9764 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:25:06 np0005486808 nova_compute[259627]: 2025-10-14 09:25:06.788 2 DEBUG oslo_concurrency.lockutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "refresh_cache-c0981e31-738f-44e8-be4c-b64961716660" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:25:06 np0005486808 nova_compute[259627]: 2025-10-14 09:25:06.789 2 DEBUG oslo_concurrency.lockutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquired lock "refresh_cache-c0981e31-738f-44e8-be4c-b64961716660" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:25:06 np0005486808 nova_compute[259627]: 2025-10-14 09:25:06.789 2 DEBUG nova.network.neutron [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:25:06 np0005486808 nova_compute[259627]: 2025-10-14 09:25:06.885 2 DEBUG nova.compute.manager [req-9e2b2dcd-220f-4720-b799-2b3690c2cd08 req-234f6586-ae5a-422c-9a3f-f24050690e09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Received event network-changed-8cdca031-de5b-4956-a27b-c6c6320c9764 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:25:06 np0005486808 nova_compute[259627]: 2025-10-14 09:25:06.885 2 DEBUG nova.compute.manager [req-9e2b2dcd-220f-4720-b799-2b3690c2cd08 req-234f6586-ae5a-422c-9a3f-f24050690e09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Refreshing instance network info cache due to event network-changed-8cdca031-de5b-4956-a27b-c6c6320c9764. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:25:06 np0005486808 nova_compute[259627]: 2025-10-14 09:25:06.885 2 DEBUG oslo_concurrency.lockutils [req-9e2b2dcd-220f-4720-b799-2b3690c2cd08 req-234f6586-ae5a-422c-9a3f-f24050690e09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-c0981e31-738f-44e8-be4c-b64961716660" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:25:06 np0005486808 nova_compute[259627]: 2025-10-14 09:25:06.981 2 DEBUG nova.network.neutron [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:25:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:07.040 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:25:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:07.041 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:25:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:07.041 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:25:07 np0005486808 nova_compute[259627]: 2025-10-14 09:25:07.277 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433892.2757423, c4b3476e-7b32-4a60-ad45-41cb6716adaf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:25:07 np0005486808 nova_compute[259627]: 2025-10-14 09:25:07.277 2 INFO nova.compute.manager [-] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:25:07 np0005486808 nova_compute[259627]: 2025-10-14 09:25:07.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:07 np0005486808 nova_compute[259627]: 2025-10-14 09:25:07.308 2 DEBUG nova.compute.manager [None req-34b6b0a4-892f-453d-a301-94ff33d99efb - - - - - -] [instance: c4b3476e-7b32-4a60-ad45-41cb6716adaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:25:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:25:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2160: 305 pgs: 305 active+clean; 88 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:25:08 np0005486808 nova_compute[259627]: 2025-10-14 09:25:08.761 2 DEBUG nova.network.neutron [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Updating instance_info_cache with network_info: [{"id": "8cdca031-de5b-4956-a27b-c6c6320c9764", "address": "fa:16:3e:ed:fc:28", "network": {"id": "ee3efe32-94e6-45cb-ae71-b379f4a2309a", "bridge": "br-int", "label": "tempest-network-smoke--539325759", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cdca031-de", "ovs_interfaceid": "8cdca031-de5b-4956-a27b-c6c6320c9764", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:25:08 np0005486808 nova_compute[259627]: 2025-10-14 09:25:08.784 2 DEBUG oslo_concurrency.lockutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Releasing lock "refresh_cache-c0981e31-738f-44e8-be4c-b64961716660" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:25:08 np0005486808 nova_compute[259627]: 2025-10-14 09:25:08.785 2 DEBUG nova.compute.manager [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Instance network_info: |[{"id": "8cdca031-de5b-4956-a27b-c6c6320c9764", "address": "fa:16:3e:ed:fc:28", "network": {"id": "ee3efe32-94e6-45cb-ae71-b379f4a2309a", "bridge": "br-int", "label": "tempest-network-smoke--539325759", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cdca031-de", "ovs_interfaceid": "8cdca031-de5b-4956-a27b-c6c6320c9764", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:25:08 np0005486808 nova_compute[259627]: 2025-10-14 09:25:08.786 2 DEBUG oslo_concurrency.lockutils [req-9e2b2dcd-220f-4720-b799-2b3690c2cd08 req-234f6586-ae5a-422c-9a3f-f24050690e09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-c0981e31-738f-44e8-be4c-b64961716660" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:25:08 np0005486808 nova_compute[259627]: 2025-10-14 09:25:08.787 2 DEBUG nova.network.neutron [req-9e2b2dcd-220f-4720-b799-2b3690c2cd08 req-234f6586-ae5a-422c-9a3f-f24050690e09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Refreshing network info cache for port 8cdca031-de5b-4956-a27b-c6c6320c9764 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:25:08 np0005486808 nova_compute[259627]: 2025-10-14 09:25:08.795 2 DEBUG nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Start _get_guest_xml network_info=[{"id": "8cdca031-de5b-4956-a27b-c6c6320c9764", "address": "fa:16:3e:ed:fc:28", "network": {"id": "ee3efe32-94e6-45cb-ae71-b379f4a2309a", "bridge": "br-int", "label": "tempest-network-smoke--539325759", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cdca031-de", "ovs_interfaceid": "8cdca031-de5b-4956-a27b-c6c6320c9764", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:25:08 np0005486808 nova_compute[259627]: 2025-10-14 09:25:08.803 2 WARNING nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:25:08 np0005486808 nova_compute[259627]: 2025-10-14 09:25:08.808 2 DEBUG nova.virt.libvirt.host [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:25:08 np0005486808 nova_compute[259627]: 2025-10-14 09:25:08.809 2 DEBUG nova.virt.libvirt.host [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:25:08 np0005486808 nova_compute[259627]: 2025-10-14 09:25:08.814 2 DEBUG nova.virt.libvirt.host [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:25:08 np0005486808 nova_compute[259627]: 2025-10-14 09:25:08.815 2 DEBUG nova.virt.libvirt.host [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:25:08 np0005486808 nova_compute[259627]: 2025-10-14 09:25:08.816 2 DEBUG nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:25:08 np0005486808 nova_compute[259627]: 2025-10-14 09:25:08.816 2 DEBUG nova.virt.hardware [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:25:08 np0005486808 nova_compute[259627]: 2025-10-14 09:25:08.817 2 DEBUG nova.virt.hardware [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:25:08 np0005486808 nova_compute[259627]: 2025-10-14 09:25:08.817 2 DEBUG nova.virt.hardware [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:25:08 np0005486808 nova_compute[259627]: 2025-10-14 09:25:08.818 2 DEBUG nova.virt.hardware [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:25:08 np0005486808 nova_compute[259627]: 2025-10-14 09:25:08.818 2 DEBUG nova.virt.hardware [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:25:08 np0005486808 nova_compute[259627]: 2025-10-14 09:25:08.819 2 DEBUG nova.virt.hardware [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:25:08 np0005486808 nova_compute[259627]: 2025-10-14 09:25:08.819 2 DEBUG nova.virt.hardware [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:25:08 np0005486808 nova_compute[259627]: 2025-10-14 09:25:08.820 2 DEBUG nova.virt.hardware [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:25:08 np0005486808 nova_compute[259627]: 2025-10-14 09:25:08.820 2 DEBUG nova.virt.hardware [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:25:08 np0005486808 nova_compute[259627]: 2025-10-14 09:25:08.820 2 DEBUG nova.virt.hardware [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:25:08 np0005486808 nova_compute[259627]: 2025-10-14 09:25:08.821 2 DEBUG nova.virt.hardware [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:25:08 np0005486808 nova_compute[259627]: 2025-10-14 09:25:08.826 2 DEBUG oslo_concurrency.processutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:25:09 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:25:09 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1171874576' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:25:09 np0005486808 nova_compute[259627]: 2025-10-14 09:25:09.272 2 DEBUG oslo_concurrency.processutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:25:09 np0005486808 nova_compute[259627]: 2025-10-14 09:25:09.310 2 DEBUG nova.storage.rbd_utils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image c0981e31-738f-44e8-be4c-b64961716660_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:25:09 np0005486808 nova_compute[259627]: 2025-10-14 09:25:09.315 2 DEBUG oslo_concurrency.processutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:25:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:09.526 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:25:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:09.528 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 14 05:25:09 np0005486808 nova_compute[259627]: 2025-10-14 09:25:09.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:09 np0005486808 nova_compute[259627]: 2025-10-14 09:25:09.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:09 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:25:09 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2731177902' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:25:09 np0005486808 nova_compute[259627]: 2025-10-14 09:25:09.819 2 DEBUG oslo_concurrency.processutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:25:09 np0005486808 nova_compute[259627]: 2025-10-14 09:25:09.821 2 DEBUG nova.virt.libvirt.vif [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:25:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-546978415',display_name='tempest-TestNetworkBasicOps-server-546978415',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-546978415',id=124,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNKPHsxRqxnHLpMWsVQOQ2YVhzbM1QIaVRvazbKfcBx066Wk2bLss4UyHnFnwGk2N+hL3dCcdm0s3ho7BXaEBpPBlInClKepgsjMFj/5tj/fAwTM9jsdqXQDPYNKI8XGpQ==',key_name='tempest-TestNetworkBasicOps-1675596793',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-ycr49veq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:25:04Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=c0981e31-738f-44e8-be4c-b64961716660,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8cdca031-de5b-4956-a27b-c6c6320c9764", "address": "fa:16:3e:ed:fc:28", "network": {"id": "ee3efe32-94e6-45cb-ae71-b379f4a2309a", "bridge": "br-int", "label": "tempest-network-smoke--539325759", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cdca031-de", "ovs_interfaceid": "8cdca031-de5b-4956-a27b-c6c6320c9764", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:25:09 np0005486808 nova_compute[259627]: 2025-10-14 09:25:09.822 2 DEBUG nova.network.os_vif_util [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "8cdca031-de5b-4956-a27b-c6c6320c9764", "address": "fa:16:3e:ed:fc:28", "network": {"id": "ee3efe32-94e6-45cb-ae71-b379f4a2309a", "bridge": "br-int", "label": "tempest-network-smoke--539325759", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cdca031-de", "ovs_interfaceid": "8cdca031-de5b-4956-a27b-c6c6320c9764", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:25:09 np0005486808 nova_compute[259627]: 2025-10-14 09:25:09.824 2 DEBUG nova.network.os_vif_util [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:fc:28,bridge_name='br-int',has_traffic_filtering=True,id=8cdca031-de5b-4956-a27b-c6c6320c9764,network=Network(ee3efe32-94e6-45cb-ae71-b379f4a2309a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8cdca031-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:25:09 np0005486808 nova_compute[259627]: 2025-10-14 09:25:09.826 2 DEBUG nova.objects.instance [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'pci_devices' on Instance uuid c0981e31-738f-44e8-be4c-b64961716660 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:25:09 np0005486808 nova_compute[259627]: 2025-10-14 09:25:09.844 2 DEBUG nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:25:09 np0005486808 nova_compute[259627]:  <uuid>c0981e31-738f-44e8-be4c-b64961716660</uuid>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:  <name>instance-0000007c</name>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:25:09 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:      <nova:name>tempest-TestNetworkBasicOps-server-546978415</nova:name>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:25:08</nova:creationTime>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:25:09 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:        <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:        <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:        <nova:port uuid="8cdca031-de5b-4956-a27b-c6c6320c9764">
Oct 14 05:25:09 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:      <entry name="serial">c0981e31-738f-44e8-be4c-b64961716660</entry>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:      <entry name="uuid">c0981e31-738f-44e8-be4c-b64961716660</entry>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:25:09 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/c0981e31-738f-44e8-be4c-b64961716660_disk">
Oct 14 05:25:09 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:25:09 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:25:09 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/c0981e31-738f-44e8-be4c-b64961716660_disk.config">
Oct 14 05:25:09 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:25:09 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:25:09 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:ed:fc:28"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:      <target dev="tap8cdca031-de"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:25:09 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/c0981e31-738f-44e8-be4c-b64961716660/console.log" append="off"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:25:09 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:25:09 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:25:09 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:25:09 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:25:09 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:25:09 np0005486808 nova_compute[259627]: 2025-10-14 09:25:09.846 2 DEBUG nova.compute.manager [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Preparing to wait for external event network-vif-plugged-8cdca031-de5b-4956-a27b-c6c6320c9764 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:25:09 np0005486808 nova_compute[259627]: 2025-10-14 09:25:09.847 2 DEBUG oslo_concurrency.lockutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "c0981e31-738f-44e8-be4c-b64961716660-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:25:09 np0005486808 nova_compute[259627]: 2025-10-14 09:25:09.847 2 DEBUG oslo_concurrency.lockutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "c0981e31-738f-44e8-be4c-b64961716660-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:25:09 np0005486808 nova_compute[259627]: 2025-10-14 09:25:09.848 2 DEBUG oslo_concurrency.lockutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "c0981e31-738f-44e8-be4c-b64961716660-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:25:09 np0005486808 nova_compute[259627]: 2025-10-14 09:25:09.849 2 DEBUG nova.virt.libvirt.vif [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:25:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-546978415',display_name='tempest-TestNetworkBasicOps-server-546978415',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-546978415',id=124,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNKPHsxRqxnHLpMWsVQOQ2YVhzbM1QIaVRvazbKfcBx066Wk2bLss4UyHnFnwGk2N+hL3dCcdm0s3ho7BXaEBpPBlInClKepgsjMFj/5tj/fAwTM9jsdqXQDPYNKI8XGpQ==',key_name='tempest-TestNetworkBasicOps-1675596793',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-ycr49veq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:25:04Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=c0981e31-738f-44e8-be4c-b64961716660,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8cdca031-de5b-4956-a27b-c6c6320c9764", "address": "fa:16:3e:ed:fc:28", "network": {"id": "ee3efe32-94e6-45cb-ae71-b379f4a2309a", "bridge": "br-int", "label": "tempest-network-smoke--539325759", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": 
{}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cdca031-de", "ovs_interfaceid": "8cdca031-de5b-4956-a27b-c6c6320c9764", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:25:09 np0005486808 nova_compute[259627]: 2025-10-14 09:25:09.850 2 DEBUG nova.network.os_vif_util [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "8cdca031-de5b-4956-a27b-c6c6320c9764", "address": "fa:16:3e:ed:fc:28", "network": {"id": "ee3efe32-94e6-45cb-ae71-b379f4a2309a", "bridge": "br-int", "label": "tempest-network-smoke--539325759", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cdca031-de", "ovs_interfaceid": "8cdca031-de5b-4956-a27b-c6c6320c9764", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:25:09 np0005486808 nova_compute[259627]: 2025-10-14 09:25:09.851 2 DEBUG nova.network.os_vif_util [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:fc:28,bridge_name='br-int',has_traffic_filtering=True,id=8cdca031-de5b-4956-a27b-c6c6320c9764,network=Network(ee3efe32-94e6-45cb-ae71-b379f4a2309a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8cdca031-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:25:09 np0005486808 nova_compute[259627]: 2025-10-14 09:25:09.851 2 DEBUG os_vif [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:fc:28,bridge_name='br-int',has_traffic_filtering=True,id=8cdca031-de5b-4956-a27b-c6c6320c9764,network=Network(ee3efe32-94e6-45cb-ae71-b379f4a2309a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8cdca031-de') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:25:09 np0005486808 nova_compute[259627]: 2025-10-14 09:25:09.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:09 np0005486808 nova_compute[259627]: 2025-10-14 09:25:09.853 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:25:09 np0005486808 nova_compute[259627]: 2025-10-14 09:25:09.854 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:25:09 np0005486808 nova_compute[259627]: 2025-10-14 09:25:09.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:09 np0005486808 nova_compute[259627]: 2025-10-14 09:25:09.859 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8cdca031-de, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:25:09 np0005486808 nova_compute[259627]: 2025-10-14 09:25:09.860 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8cdca031-de, col_values=(('external_ids', {'iface-id': '8cdca031-de5b-4956-a27b-c6c6320c9764', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ed:fc:28', 'vm-uuid': 'c0981e31-738f-44e8-be4c-b64961716660'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:25:09 np0005486808 nova_compute[259627]: 2025-10-14 09:25:09.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:09 np0005486808 NetworkManager[44885]: <info>  [1760433909.8628] manager: (tap8cdca031-de): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/538)
Oct 14 05:25:09 np0005486808 nova_compute[259627]: 2025-10-14 09:25:09.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:25:09 np0005486808 nova_compute[259627]: 2025-10-14 09:25:09.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:09 np0005486808 nova_compute[259627]: 2025-10-14 09:25:09.871 2 INFO os_vif [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:fc:28,bridge_name='br-int',has_traffic_filtering=True,id=8cdca031-de5b-4956-a27b-c6c6320c9764,network=Network(ee3efe32-94e6-45cb-ae71-b379f4a2309a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8cdca031-de')#033[00m
Oct 14 05:25:09 np0005486808 nova_compute[259627]: 2025-10-14 09:25:09.943 2 DEBUG nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:25:09 np0005486808 nova_compute[259627]: 2025-10-14 09:25:09.944 2 DEBUG nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:25:09 np0005486808 nova_compute[259627]: 2025-10-14 09:25:09.944 2 DEBUG nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No VIF found with MAC fa:16:3e:ed:fc:28, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:25:09 np0005486808 nova_compute[259627]: 2025-10-14 09:25:09.945 2 INFO nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Using config drive#033[00m
Oct 14 05:25:09 np0005486808 nova_compute[259627]: 2025-10-14 09:25:09.979 2 DEBUG nova.storage.rbd_utils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image c0981e31-738f-44e8-be4c-b64961716660_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:25:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2161: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:25:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:10.530 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:25:10 np0005486808 nova_compute[259627]: 2025-10-14 09:25:10.786 2 DEBUG nova.network.neutron [req-9e2b2dcd-220f-4720-b799-2b3690c2cd08 req-234f6586-ae5a-422c-9a3f-f24050690e09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Updated VIF entry in instance network info cache for port 8cdca031-de5b-4956-a27b-c6c6320c9764. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:25:10 np0005486808 nova_compute[259627]: 2025-10-14 09:25:10.787 2 DEBUG nova.network.neutron [req-9e2b2dcd-220f-4720-b799-2b3690c2cd08 req-234f6586-ae5a-422c-9a3f-f24050690e09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Updating instance_info_cache with network_info: [{"id": "8cdca031-de5b-4956-a27b-c6c6320c9764", "address": "fa:16:3e:ed:fc:28", "network": {"id": "ee3efe32-94e6-45cb-ae71-b379f4a2309a", "bridge": "br-int", "label": "tempest-network-smoke--539325759", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cdca031-de", "ovs_interfaceid": "8cdca031-de5b-4956-a27b-c6c6320c9764", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:25:10 np0005486808 nova_compute[259627]: 2025-10-14 09:25:10.803 2 DEBUG oslo_concurrency.lockutils [req-9e2b2dcd-220f-4720-b799-2b3690c2cd08 req-234f6586-ae5a-422c-9a3f-f24050690e09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-c0981e31-738f-44e8-be4c-b64961716660" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:25:10 np0005486808 nova_compute[259627]: 2025-10-14 09:25:10.972 2 INFO nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Creating config drive at /var/lib/nova/instances/c0981e31-738f-44e8-be4c-b64961716660/disk.config#033[00m
Oct 14 05:25:10 np0005486808 nova_compute[259627]: 2025-10-14 09:25:10.980 2 DEBUG oslo_concurrency.processutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c0981e31-738f-44e8-be4c-b64961716660/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpraoob577 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:25:11 np0005486808 nova_compute[259627]: 2025-10-14 09:25:11.126 2 DEBUG oslo_concurrency.processutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c0981e31-738f-44e8-be4c-b64961716660/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpraoob577" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:25:11 np0005486808 nova_compute[259627]: 2025-10-14 09:25:11.176 2 DEBUG nova.storage.rbd_utils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image c0981e31-738f-44e8-be4c-b64961716660_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:25:11 np0005486808 nova_compute[259627]: 2025-10-14 09:25:11.181 2 DEBUG oslo_concurrency.processutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c0981e31-738f-44e8-be4c-b64961716660/disk.config c0981e31-738f-44e8-be4c-b64961716660_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:25:11 np0005486808 nova_compute[259627]: 2025-10-14 09:25:11.393 2 DEBUG oslo_concurrency.processutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c0981e31-738f-44e8-be4c-b64961716660/disk.config c0981e31-738f-44e8-be4c-b64961716660_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.212s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:25:11 np0005486808 nova_compute[259627]: 2025-10-14 09:25:11.396 2 INFO nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Deleting local config drive /var/lib/nova/instances/c0981e31-738f-44e8-be4c-b64961716660/disk.config because it was imported into RBD.#033[00m
Oct 14 05:25:11 np0005486808 kernel: tap8cdca031-de: entered promiscuous mode
Oct 14 05:25:11 np0005486808 NetworkManager[44885]: <info>  [1760433911.4765] manager: (tap8cdca031-de): new Tun device (/org/freedesktop/NetworkManager/Devices/539)
Oct 14 05:25:11 np0005486808 ovn_controller[152662]: 2025-10-14T09:25:11Z|01328|binding|INFO|Claiming lport 8cdca031-de5b-4956-a27b-c6c6320c9764 for this chassis.
Oct 14 05:25:11 np0005486808 ovn_controller[152662]: 2025-10-14T09:25:11Z|01329|binding|INFO|8cdca031-de5b-4956-a27b-c6c6320c9764: Claiming fa:16:3e:ed:fc:28 10.100.0.14
Oct 14 05:25:11 np0005486808 nova_compute[259627]: 2025-10-14 09:25:11.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.491 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:fc:28 10.100.0.14'], port_security=['fa:16:3e:ed:fc:28 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1769258053', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'c0981e31-738f-44e8-be4c-b64961716660', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee3efe32-94e6-45cb-ae71-b379f4a2309a', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1769258053', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '6', 'neutron:security_group_ids': '72094e2b-bb7f-4f40-9ee7-497daf8e97ff', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.240'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=596a534e-c0c1-49ed-bcdd-00855a90d08e, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=8cdca031-de5b-4956-a27b-c6c6320c9764) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.494 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 8cdca031-de5b-4956-a27b-c6c6320c9764 in datapath ee3efe32-94e6-45cb-ae71-b379f4a2309a bound to our chassis#033[00m
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.496 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee3efe32-94e6-45cb-ae71-b379f4a2309a#033[00m
Oct 14 05:25:11 np0005486808 ovn_controller[152662]: 2025-10-14T09:25:11Z|01330|binding|INFO|Setting lport 8cdca031-de5b-4956-a27b-c6c6320c9764 ovn-installed in OVS
Oct 14 05:25:11 np0005486808 ovn_controller[152662]: 2025-10-14T09:25:11Z|01331|binding|INFO|Setting lport 8cdca031-de5b-4956-a27b-c6c6320c9764 up in Southbound
Oct 14 05:25:11 np0005486808 nova_compute[259627]: 2025-10-14 09:25:11.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:11 np0005486808 nova_compute[259627]: 2025-10-14 09:25:11.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.512 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[11837e53-63e2-4c77-a4c6-95eb1c60e317]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.513 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapee3efe32-91 in ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.515 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapee3efe32-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.515 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fb983b8b-6bb9-458b-b189-d2657f58532e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.516 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d7f3aa7e-17c2-4b6c-9a27-f461c43c50eb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:25:11 np0005486808 systemd-machined[214636]: New machine qemu-157-instance-0000007c.
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.530 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[22e87b2c-6ee6-4a7d-9d74-e5c4dada7c6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:25:11 np0005486808 systemd[1]: Started Virtual Machine qemu-157-instance-0000007c.
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.546 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ee7c6c11-4252-468a-9e3c-4c3886b5aa56]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:25:11 np0005486808 systemd-udevd[384692]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:25:11 np0005486808 NetworkManager[44885]: <info>  [1760433911.5829] device (tap8cdca031-de): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.584 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d7e12a9d-1b55-47e3-9748-85e46bf18cb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:25:11 np0005486808 NetworkManager[44885]: <info>  [1760433911.5878] device (tap8cdca031-de): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:25:11 np0005486808 systemd-udevd[384696]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.592 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[648dff7a-e0b6-4b58-a502-3b5e70114538]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:25:11 np0005486808 NetworkManager[44885]: <info>  [1760433911.5946] manager: (tapee3efe32-90): new Veth device (/org/freedesktop/NetworkManager/Devices/540)
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.644 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[0b1bd880-500c-4762-bc6d-ee52ca560467]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.649 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[b4fc44f8-2c09-481b-b48e-076e892a9457]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:25:11 np0005486808 NetworkManager[44885]: <info>  [1760433911.6754] device (tapee3efe32-90): carrier: link connected
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.681 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3bbbacad-cb69-440b-af39-212dbfc28bc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.702 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5c800d9a-a8e2-44ce-9261-d489865aa268]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee3efe32-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:fa:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 380], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 769043, 'reachable_time': 43826, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 384721, 'error': None, 'target': 'ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.722 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[54371e6d-4cd5-459a-84a6-660f0739287f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecf:fab9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 769043, 'tstamp': 769043}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 384722, 'error': None, 'target': 'ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.742 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9582821c-def1-4e8c-b581-f735675ac67b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee3efe32-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:fa:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 380], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 769043, 'reachable_time': 43826, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 384723, 'error': None, 'target': 'ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.784 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1a0ac306-32b7-4f11-905e-0b1142736989]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:25:11 np0005486808 nova_compute[259627]: 2025-10-14 09:25:11.854 2 DEBUG nova.compute.manager [req-a09dee75-919c-4e98-abd6-04b027f2338f req-0b8786f9-e519-4afa-963e-d1fbe6c9a2c2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Received event network-vif-plugged-8cdca031-de5b-4956-a27b-c6c6320c9764 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:25:11 np0005486808 nova_compute[259627]: 2025-10-14 09:25:11.855 2 DEBUG oslo_concurrency.lockutils [req-a09dee75-919c-4e98-abd6-04b027f2338f req-0b8786f9-e519-4afa-963e-d1fbe6c9a2c2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c0981e31-738f-44e8-be4c-b64961716660-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:25:11 np0005486808 nova_compute[259627]: 2025-10-14 09:25:11.856 2 DEBUG oslo_concurrency.lockutils [req-a09dee75-919c-4e98-abd6-04b027f2338f req-0b8786f9-e519-4afa-963e-d1fbe6c9a2c2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c0981e31-738f-44e8-be4c-b64961716660-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:25:11 np0005486808 nova_compute[259627]: 2025-10-14 09:25:11.856 2 DEBUG oslo_concurrency.lockutils [req-a09dee75-919c-4e98-abd6-04b027f2338f req-0b8786f9-e519-4afa-963e-d1fbe6c9a2c2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c0981e31-738f-44e8-be4c-b64961716660-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:25:11 np0005486808 nova_compute[259627]: 2025-10-14 09:25:11.856 2 DEBUG nova.compute.manager [req-a09dee75-919c-4e98-abd6-04b027f2338f req-0b8786f9-e519-4afa-963e-d1fbe6c9a2c2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Processing event network-vif-plugged-8cdca031-de5b-4956-a27b-c6c6320c9764 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.870 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[015901de-7228-4f41-b88e-a3e348b28acb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.872 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee3efe32-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.872 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.873 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee3efe32-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:25:11 np0005486808 nova_compute[259627]: 2025-10-14 09:25:11.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:11 np0005486808 NetworkManager[44885]: <info>  [1760433911.8763] manager: (tapee3efe32-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/541)
Oct 14 05:25:11 np0005486808 kernel: tapee3efe32-90: entered promiscuous mode
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.880 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee3efe32-90, col_values=(('external_ids', {'iface-id': '77c455ce-a111-4a2d-9630-1d923bb22b5c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:25:11 np0005486808 ovn_controller[152662]: 2025-10-14T09:25:11Z|01332|binding|INFO|Releasing lport 77c455ce-a111-4a2d-9630-1d923bb22b5c from this chassis (sb_readonly=0)
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.884 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ee3efe32-94e6-45cb-ae71-b379f4a2309a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ee3efe32-94e6-45cb-ae71-b379f4a2309a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.885 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[30e94be4-061c-417a-9ae4-aeb75a7d955d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.886 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-ee3efe32-94e6-45cb-ae71-b379f4a2309a
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/ee3efe32-94e6-45cb-ae71-b379f4a2309a.pid.haproxy
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID ee3efe32-94e6-45cb-ae71-b379f4a2309a
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:25:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:11.888 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a', 'env', 'PROCESS_TAG=haproxy-ee3efe32-94e6-45cb-ae71-b379f4a2309a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ee3efe32-94e6-45cb-ae71-b379f4a2309a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:25:11 np0005486808 nova_compute[259627]: 2025-10-14 09:25:11.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2162: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:25:12 np0005486808 podman[384797]: 2025-10-14 09:25:12.344604323 +0000 UTC m=+0.058808624 container create 9927ec3ccf3768341212bcbb6ac6d5f9b11bad415f946f99049a1cfa88670cc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS)
Oct 14 05:25:12 np0005486808 systemd[1]: Started libpod-conmon-9927ec3ccf3768341212bcbb6ac6d5f9b11bad415f946f99049a1cfa88670cc6.scope.
Oct 14 05:25:12 np0005486808 podman[384797]: 2025-10-14 09:25:12.314576781 +0000 UTC m=+0.028781072 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:25:12 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:25:12 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3cb1e7089941c132d39655646185e2c3a1080aba046532a423fb17ff44148c4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:25:12 np0005486808 podman[384797]: 2025-10-14 09:25:12.445821604 +0000 UTC m=+0.160025965 container init 9927ec3ccf3768341212bcbb6ac6d5f9b11bad415f946f99049a1cfa88670cc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 05:25:12 np0005486808 podman[384797]: 2025-10-14 09:25:12.452983721 +0000 UTC m=+0.167188042 container start 9927ec3ccf3768341212bcbb6ac6d5f9b11bad415f946f99049a1cfa88670cc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:25:12 np0005486808 neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a[384812]: [NOTICE]   (384816) : New worker (384818) forked
Oct 14 05:25:12 np0005486808 neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a[384812]: [NOTICE]   (384816) : Loading success.
Oct 14 05:25:12 np0005486808 nova_compute[259627]: 2025-10-14 09:25:12.841 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433912.8405082, c0981e31-738f-44e8-be4c-b64961716660 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:25:12 np0005486808 nova_compute[259627]: 2025-10-14 09:25:12.842 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c0981e31-738f-44e8-be4c-b64961716660] VM Started (Lifecycle Event)#033[00m
Oct 14 05:25:12 np0005486808 nova_compute[259627]: 2025-10-14 09:25:12.844 2 DEBUG nova.compute.manager [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:25:12 np0005486808 nova_compute[259627]: 2025-10-14 09:25:12.849 2 DEBUG nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:25:12 np0005486808 nova_compute[259627]: 2025-10-14 09:25:12.853 2 INFO nova.virt.libvirt.driver [-] [instance: c0981e31-738f-44e8-be4c-b64961716660] Instance spawned successfully.#033[00m
Oct 14 05:25:12 np0005486808 nova_compute[259627]: 2025-10-14 09:25:12.854 2 DEBUG nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:25:12 np0005486808 nova_compute[259627]: 2025-10-14 09:25:12.883 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c0981e31-738f-44e8-be4c-b64961716660] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:25:12 np0005486808 nova_compute[259627]: 2025-10-14 09:25:12.890 2 DEBUG nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:25:12 np0005486808 nova_compute[259627]: 2025-10-14 09:25:12.891 2 DEBUG nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:25:12 np0005486808 nova_compute[259627]: 2025-10-14 09:25:12.892 2 DEBUG nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:25:12 np0005486808 nova_compute[259627]: 2025-10-14 09:25:12.893 2 DEBUG nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:25:12 np0005486808 nova_compute[259627]: 2025-10-14 09:25:12.893 2 DEBUG nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:25:12 np0005486808 nova_compute[259627]: 2025-10-14 09:25:12.894 2 DEBUG nova.virt.libvirt.driver [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:25:12 np0005486808 nova_compute[259627]: 2025-10-14 09:25:12.901 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c0981e31-738f-44e8-be4c-b64961716660] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:25:12 np0005486808 nova_compute[259627]: 2025-10-14 09:25:12.948 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c0981e31-738f-44e8-be4c-b64961716660] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:25:12 np0005486808 nova_compute[259627]: 2025-10-14 09:25:12.948 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433912.840722, c0981e31-738f-44e8-be4c-b64961716660 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:25:12 np0005486808 nova_compute[259627]: 2025-10-14 09:25:12.949 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c0981e31-738f-44e8-be4c-b64961716660] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:25:12 np0005486808 nova_compute[259627]: 2025-10-14 09:25:12.982 2 INFO nova.compute.manager [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Took 8.75 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:25:12 np0005486808 nova_compute[259627]: 2025-10-14 09:25:12.983 2 DEBUG nova.compute.manager [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:25:12 np0005486808 nova_compute[259627]: 2025-10-14 09:25:12.985 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c0981e31-738f-44e8-be4c-b64961716660] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:25:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:25:12 np0005486808 nova_compute[259627]: 2025-10-14 09:25:12.996 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433912.8476634, c0981e31-738f-44e8-be4c-b64961716660 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:25:12 np0005486808 nova_compute[259627]: 2025-10-14 09:25:12.996 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c0981e31-738f-44e8-be4c-b64961716660] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:25:13 np0005486808 nova_compute[259627]: 2025-10-14 09:25:13.023 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c0981e31-738f-44e8-be4c-b64961716660] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:25:13 np0005486808 nova_compute[259627]: 2025-10-14 09:25:13.027 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c0981e31-738f-44e8-be4c-b64961716660] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:25:13 np0005486808 nova_compute[259627]: 2025-10-14 09:25:13.064 2 INFO nova.compute.manager [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Took 9.82 seconds to build instance.#033[00m
Oct 14 05:25:13 np0005486808 nova_compute[259627]: 2025-10-14 09:25:13.080 2 DEBUG oslo_concurrency.lockutils [None req-6278599b-6ddb-4aa6-a877-f5488a6cca44 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "c0981e31-738f-44e8-be4c-b64961716660" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.898s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:25:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:25:13 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:25:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:25:13 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:25:13 np0005486808 nova_compute[259627]: 2025-10-14 09:25:13.926 2 DEBUG nova.compute.manager [req-b13a34a8-2026-49c1-b6de-4393853763c3 req-35750815-4706-45c9-b580-62e4eb0ddafd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Received event network-vif-plugged-8cdca031-de5b-4956-a27b-c6c6320c9764 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:25:13 np0005486808 nova_compute[259627]: 2025-10-14 09:25:13.926 2 DEBUG oslo_concurrency.lockutils [req-b13a34a8-2026-49c1-b6de-4393853763c3 req-35750815-4706-45c9-b580-62e4eb0ddafd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c0981e31-738f-44e8-be4c-b64961716660-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:25:13 np0005486808 nova_compute[259627]: 2025-10-14 09:25:13.926 2 DEBUG oslo_concurrency.lockutils [req-b13a34a8-2026-49c1-b6de-4393853763c3 req-35750815-4706-45c9-b580-62e4eb0ddafd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c0981e31-738f-44e8-be4c-b64961716660-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:25:13 np0005486808 nova_compute[259627]: 2025-10-14 09:25:13.926 2 DEBUG oslo_concurrency.lockutils [req-b13a34a8-2026-49c1-b6de-4393853763c3 req-35750815-4706-45c9-b580-62e4eb0ddafd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c0981e31-738f-44e8-be4c-b64961716660-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:25:13 np0005486808 nova_compute[259627]: 2025-10-14 09:25:13.927 2 DEBUG nova.compute.manager [req-b13a34a8-2026-49c1-b6de-4393853763c3 req-35750815-4706-45c9-b580-62e4eb0ddafd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] No waiting events found dispatching network-vif-plugged-8cdca031-de5b-4956-a27b-c6c6320c9764 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:25:13 np0005486808 nova_compute[259627]: 2025-10-14 09:25:13.927 2 WARNING nova.compute.manager [req-b13a34a8-2026-49c1-b6de-4393853763c3 req-35750815-4706-45c9-b580-62e4eb0ddafd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Received unexpected event network-vif-plugged-8cdca031-de5b-4956-a27b-c6c6320c9764 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:25:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2163: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:25:14 np0005486808 nova_compute[259627]: 2025-10-14 09:25:14.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:14 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:25:14 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:25:14 np0005486808 nova_compute[259627]: 2025-10-14 09:25:14.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:15 np0005486808 podman[385216]: 2025-10-14 09:25:15.253787177 +0000 UTC m=+0.061844819 container create 7cdf12f0918426b56a1c091bc373661fe059fe65709007f6badb0291bb1202c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:25:15 np0005486808 nova_compute[259627]: 2025-10-14 09:25:15.260 2 DEBUG oslo_concurrency.lockutils [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "c0981e31-738f-44e8-be4c-b64961716660" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:25:15 np0005486808 nova_compute[259627]: 2025-10-14 09:25:15.260 2 DEBUG oslo_concurrency.lockutils [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "c0981e31-738f-44e8-be4c-b64961716660" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:25:15 np0005486808 nova_compute[259627]: 2025-10-14 09:25:15.261 2 DEBUG oslo_concurrency.lockutils [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "c0981e31-738f-44e8-be4c-b64961716660-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:25:15 np0005486808 nova_compute[259627]: 2025-10-14 09:25:15.261 2 DEBUG oslo_concurrency.lockutils [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "c0981e31-738f-44e8-be4c-b64961716660-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:25:15 np0005486808 nova_compute[259627]: 2025-10-14 09:25:15.261 2 DEBUG oslo_concurrency.lockutils [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "c0981e31-738f-44e8-be4c-b64961716660-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:25:15 np0005486808 nova_compute[259627]: 2025-10-14 09:25:15.262 2 INFO nova.compute.manager [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Terminating instance#033[00m
Oct 14 05:25:15 np0005486808 nova_compute[259627]: 2025-10-14 09:25:15.263 2 DEBUG nova.compute.manager [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:25:15 np0005486808 podman[385216]: 2025-10-14 09:25:15.226956354 +0000 UTC m=+0.035013976 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:25:15 np0005486808 systemd[1]: Started libpod-conmon-7cdf12f0918426b56a1c091bc373661fe059fe65709007f6badb0291bb1202c6.scope.
Oct 14 05:25:15 np0005486808 kernel: tap8cdca031-de (unregistering): left promiscuous mode
Oct 14 05:25:15 np0005486808 NetworkManager[44885]: <info>  [1760433915.3410] device (tap8cdca031-de): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:25:15 np0005486808 nova_compute[259627]: 2025-10-14 09:25:15.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:15 np0005486808 ovn_controller[152662]: 2025-10-14T09:25:15Z|01333|binding|INFO|Releasing lport 8cdca031-de5b-4956-a27b-c6c6320c9764 from this chassis (sb_readonly=0)
Oct 14 05:25:15 np0005486808 ovn_controller[152662]: 2025-10-14T09:25:15Z|01334|binding|INFO|Setting lport 8cdca031-de5b-4956-a27b-c6c6320c9764 down in Southbound
Oct 14 05:25:15 np0005486808 ovn_controller[152662]: 2025-10-14T09:25:15Z|01335|binding|INFO|Removing iface tap8cdca031-de ovn-installed in OVS
Oct 14 05:25:15 np0005486808 nova_compute[259627]: 2025-10-14 09:25:15.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:15.391 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:fc:28 10.100.0.14'], port_security=['fa:16:3e:ed:fc:28 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1769258053', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'c0981e31-738f-44e8-be4c-b64961716660', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee3efe32-94e6-45cb-ae71-b379f4a2309a', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1769258053', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '8', 'neutron:security_group_ids': '72094e2b-bb7f-4f40-9ee7-497daf8e97ff', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.240', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=596a534e-c0c1-49ed-bcdd-00855a90d08e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=8cdca031-de5b-4956-a27b-c6c6320c9764) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:25:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:15.392 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 8cdca031-de5b-4956-a27b-c6c6320c9764 in datapath ee3efe32-94e6-45cb-ae71-b379f4a2309a unbound from our chassis#033[00m
Oct 14 05:25:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:15.393 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ee3efe32-94e6-45cb-ae71-b379f4a2309a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:25:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:15.394 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f5375285-91c1-487e-b794-f81a6c9987c8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:25:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:15.395 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a namespace which is not needed anymore#033[00m
Oct 14 05:25:15 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:25:15 np0005486808 nova_compute[259627]: 2025-10-14 09:25:15.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:15 np0005486808 podman[385216]: 2025-10-14 09:25:15.42183244 +0000 UTC m=+0.229890112 container init 7cdf12f0918426b56a1c091bc373661fe059fe65709007f6badb0291bb1202c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_hodgkin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:25:15 np0005486808 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d0000007c.scope: Deactivated successfully.
Oct 14 05:25:15 np0005486808 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d0000007c.scope: Consumed 3.752s CPU time.
Oct 14 05:25:15 np0005486808 podman[385216]: 2025-10-14 09:25:15.433605691 +0000 UTC m=+0.241663333 container start 7cdf12f0918426b56a1c091bc373661fe059fe65709007f6badb0291bb1202c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:25:15 np0005486808 systemd-machined[214636]: Machine qemu-157-instance-0000007c terminated.
Oct 14 05:25:15 np0005486808 podman[385216]: 2025-10-14 09:25:15.437407335 +0000 UTC m=+0.245465017 container attach 7cdf12f0918426b56a1c091bc373661fe059fe65709007f6badb0291bb1202c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:25:15 np0005486808 systemd[1]: libpod-7cdf12f0918426b56a1c091bc373661fe059fe65709007f6badb0291bb1202c6.scope: Deactivated successfully.
Oct 14 05:25:15 np0005486808 affectionate_hodgkin[385232]: 167 167
Oct 14 05:25:15 np0005486808 conmon[385232]: conmon 7cdf12f0918426b56a1c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7cdf12f0918426b56a1c091bc373661fe059fe65709007f6badb0291bb1202c6.scope/container/memory.events
Oct 14 05:25:15 np0005486808 podman[385216]: 2025-10-14 09:25:15.442742156 +0000 UTC m=+0.250799798 container died 7cdf12f0918426b56a1c091bc373661fe059fe65709007f6badb0291bb1202c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_hodgkin, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 14 05:25:15 np0005486808 systemd[1]: var-lib-containers-storage-overlay-ac10fa9e5f153fef45ed4d9a5811ca423f863c7412ea62c8082d590aeb18e6cf-merged.mount: Deactivated successfully.
Oct 14 05:25:15 np0005486808 podman[385216]: 2025-10-14 09:25:15.509344502 +0000 UTC m=+0.317402104 container remove 7cdf12f0918426b56a1c091bc373661fe059fe65709007f6badb0291bb1202c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_hodgkin, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:25:15 np0005486808 nova_compute[259627]: 2025-10-14 09:25:15.526 2 INFO nova.virt.libvirt.driver [-] [instance: c0981e31-738f-44e8-be4c-b64961716660] Instance destroyed successfully.#033[00m
Oct 14 05:25:15 np0005486808 nova_compute[259627]: 2025-10-14 09:25:15.529 2 DEBUG nova.objects.instance [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'resources' on Instance uuid c0981e31-738f-44e8-be4c-b64961716660 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:25:15 np0005486808 systemd[1]: libpod-conmon-7cdf12f0918426b56a1c091bc373661fe059fe65709007f6badb0291bb1202c6.scope: Deactivated successfully.
Oct 14 05:25:15 np0005486808 nova_compute[259627]: 2025-10-14 09:25:15.544 2 DEBUG nova.virt.libvirt.vif [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:25:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-546978415',display_name='tempest-TestNetworkBasicOps-server-546978415',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-546978415',id=124,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNKPHsxRqxnHLpMWsVQOQ2YVhzbM1QIaVRvazbKfcBx066Wk2bLss4UyHnFnwGk2N+hL3dCcdm0s3ho7BXaEBpPBlInClKepgsjMFj/5tj/fAwTM9jsdqXQDPYNKI8XGpQ==',key_name='tempest-TestNetworkBasicOps-1675596793',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:25:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-ycr49veq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:25:13Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=c0981e31-738f-44e8-be4c-b64961716660,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8cdca031-de5b-4956-a27b-c6c6320c9764", "address": "fa:16:3e:ed:fc:28", "network": {"id": "ee3efe32-94e6-45cb-ae71-b379f4a2309a", "bridge": "br-int", "label": "tempest-network-smoke--539325759", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cdca031-de", "ovs_interfaceid": "8cdca031-de5b-4956-a27b-c6c6320c9764", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:25:15 np0005486808 nova_compute[259627]: 2025-10-14 09:25:15.545 2 DEBUG nova.network.os_vif_util [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "8cdca031-de5b-4956-a27b-c6c6320c9764", "address": "fa:16:3e:ed:fc:28", "network": {"id": "ee3efe32-94e6-45cb-ae71-b379f4a2309a", "bridge": "br-int", "label": "tempest-network-smoke--539325759", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cdca031-de", "ovs_interfaceid": "8cdca031-de5b-4956-a27b-c6c6320c9764", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:25:15 np0005486808 nova_compute[259627]: 2025-10-14 09:25:15.546 2 DEBUG nova.network.os_vif_util [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:fc:28,bridge_name='br-int',has_traffic_filtering=True,id=8cdca031-de5b-4956-a27b-c6c6320c9764,network=Network(ee3efe32-94e6-45cb-ae71-b379f4a2309a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8cdca031-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:25:15 np0005486808 nova_compute[259627]: 2025-10-14 09:25:15.546 2 DEBUG os_vif [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:fc:28,bridge_name='br-int',has_traffic_filtering=True,id=8cdca031-de5b-4956-a27b-c6c6320c9764,network=Network(ee3efe32-94e6-45cb-ae71-b379f4a2309a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8cdca031-de') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:25:15 np0005486808 nova_compute[259627]: 2025-10-14 09:25:15.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:15 np0005486808 nova_compute[259627]: 2025-10-14 09:25:15.549 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8cdca031-de, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:25:15 np0005486808 nova_compute[259627]: 2025-10-14 09:25:15.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:25:15 np0005486808 nova_compute[259627]: 2025-10-14 09:25:15.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:15 np0005486808 nova_compute[259627]: 2025-10-14 09:25:15.556 2 INFO os_vif [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:fc:28,bridge_name='br-int',has_traffic_filtering=True,id=8cdca031-de5b-4956-a27b-c6c6320c9764,network=Network(ee3efe32-94e6-45cb-ae71-b379f4a2309a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8cdca031-de')#033[00m
Oct 14 05:25:15 np0005486808 neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a[384812]: [NOTICE]   (384816) : haproxy version is 2.8.14-c23fe91
Oct 14 05:25:15 np0005486808 neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a[384812]: [NOTICE]   (384816) : path to executable is /usr/sbin/haproxy
Oct 14 05:25:15 np0005486808 neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a[384812]: [WARNING]  (384816) : Exiting Master process...
Oct 14 05:25:15 np0005486808 neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a[384812]: [WARNING]  (384816) : Exiting Master process...
Oct 14 05:25:15 np0005486808 neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a[384812]: [ALERT]    (384816) : Current worker (384818) exited with code 143 (Terminated)
Oct 14 05:25:15 np0005486808 neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a[384812]: [WARNING]  (384816) : All workers exited. Exiting... (0)
Oct 14 05:25:15 np0005486808 systemd[1]: libpod-9927ec3ccf3768341212bcbb6ac6d5f9b11bad415f946f99049a1cfa88670cc6.scope: Deactivated successfully.
Oct 14 05:25:15 np0005486808 conmon[384812]: conmon 9927ec3ccf3768341212 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9927ec3ccf3768341212bcbb6ac6d5f9b11bad415f946f99049a1cfa88670cc6.scope/container/memory.events
Oct 14 05:25:15 np0005486808 podman[385282]: 2025-10-14 09:25:15.62215082 +0000 UTC m=+0.059666396 container died 9927ec3ccf3768341212bcbb6ac6d5f9b11bad415f946f99049a1cfa88670cc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:25:15 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9927ec3ccf3768341212bcbb6ac6d5f9b11bad415f946f99049a1cfa88670cc6-userdata-shm.mount: Deactivated successfully.
Oct 14 05:25:15 np0005486808 systemd[1]: var-lib-containers-storage-overlay-a3cb1e7089941c132d39655646185e2c3a1080aba046532a423fb17ff44148c4-merged.mount: Deactivated successfully.
Oct 14 05:25:15 np0005486808 podman[385282]: 2025-10-14 09:25:15.661332158 +0000 UTC m=+0.098847724 container cleanup 9927ec3ccf3768341212bcbb6ac6d5f9b11bad415f946f99049a1cfa88670cc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009)
Oct 14 05:25:15 np0005486808 systemd[1]: libpod-conmon-9927ec3ccf3768341212bcbb6ac6d5f9b11bad415f946f99049a1cfa88670cc6.scope: Deactivated successfully.
Oct 14 05:25:15 np0005486808 podman[385321]: 2025-10-14 09:25:15.68568552 +0000 UTC m=+0.043055995 container create 8eaf32f819418d287b180f55e23d506d244184a1c968640a50daf12fb3a7f377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_golick, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:25:15 np0005486808 systemd[1]: Started libpod-conmon-8eaf32f819418d287b180f55e23d506d244184a1c968640a50daf12fb3a7f377.scope.
Oct 14 05:25:15 np0005486808 podman[385348]: 2025-10-14 09:25:15.735701265 +0000 UTC m=+0.051787040 container remove 9927ec3ccf3768341212bcbb6ac6d5f9b11bad415f946f99049a1cfa88670cc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 05:25:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:15.740 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d202c7bb-2f45-4755-bd4c-c8ddee81809a]: (4, ('Tue Oct 14 09:25:15 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a (9927ec3ccf3768341212bcbb6ac6d5f9b11bad415f946f99049a1cfa88670cc6)\n9927ec3ccf3768341212bcbb6ac6d5f9b11bad415f946f99049a1cfa88670cc6\nTue Oct 14 09:25:15 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a (9927ec3ccf3768341212bcbb6ac6d5f9b11bad415f946f99049a1cfa88670cc6)\n9927ec3ccf3768341212bcbb6ac6d5f9b11bad415f946f99049a1cfa88670cc6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:25:15 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:25:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:15.743 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0617239c-fe7e-455d-b1e7-b634a3346243]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:25:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:15.744 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee3efe32-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:25:15 np0005486808 kernel: tapee3efe32-90: left promiscuous mode
Oct 14 05:25:15 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/040a1eafa18ad43db61baedc35ab34d3bce36379a237d07a6c1be1830f6a558f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:25:15 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/040a1eafa18ad43db61baedc35ab34d3bce36379a237d07a6c1be1830f6a558f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:25:15 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/040a1eafa18ad43db61baedc35ab34d3bce36379a237d07a6c1be1830f6a558f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:25:15 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/040a1eafa18ad43db61baedc35ab34d3bce36379a237d07a6c1be1830f6a558f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:25:15 np0005486808 nova_compute[259627]: 2025-10-14 09:25:15.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:15 np0005486808 nova_compute[259627]: 2025-10-14 09:25:15.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:15.763 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2c627340-d9be-44fe-aec4-64d10b7eee39]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:25:15 np0005486808 podman[385321]: 2025-10-14 09:25:15.669650803 +0000 UTC m=+0.027021288 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:25:15 np0005486808 podman[385321]: 2025-10-14 09:25:15.772858214 +0000 UTC m=+0.130228699 container init 8eaf32f819418d287b180f55e23d506d244184a1c968640a50daf12fb3a7f377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:25:15 np0005486808 podman[385321]: 2025-10-14 09:25:15.782612785 +0000 UTC m=+0.139983250 container start 8eaf32f819418d287b180f55e23d506d244184a1c968640a50daf12fb3a7f377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 14 05:25:15 np0005486808 podman[385321]: 2025-10-14 09:25:15.786211894 +0000 UTC m=+0.143582359 container attach 8eaf32f819418d287b180f55e23d506d244184a1c968640a50daf12fb3a7f377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_golick, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:25:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:15.794 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[af5ba01d-4ec3-4497-aa04-3c3230b7ef7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:25:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:15.796 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9965318b-3a97-408d-b2f7-d37a82d75382]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:25:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:15.814 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[733461b9-0f04-4220-a8ba-dbfbddc3c727]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 769033, 'reachable_time': 37028, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 385369, 'error': None, 'target': 'ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:25:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:15.816 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ee3efe32-94e6-45cb-ae71-b379f4a2309a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:25:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:15.817 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[29401b3e-4831-4cb2-b87b-7deeabdcb686]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:25:15 np0005486808 nova_compute[259627]: 2025-10-14 09:25:15.943 2 INFO nova.virt.libvirt.driver [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Deleting instance files /var/lib/nova/instances/c0981e31-738f-44e8-be4c-b64961716660_del#033[00m
Oct 14 05:25:15 np0005486808 nova_compute[259627]: 2025-10-14 09:25:15.944 2 INFO nova.virt.libvirt.driver [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Deletion of /var/lib/nova/instances/c0981e31-738f-44e8-be4c-b64961716660_del complete#033[00m
Oct 14 05:25:15 np0005486808 nova_compute[259627]: 2025-10-14 09:25:15.998 2 INFO nova.compute.manager [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:25:15 np0005486808 nova_compute[259627]: 2025-10-14 09:25:15.998 2 DEBUG oslo.service.loopingcall [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:25:15 np0005486808 nova_compute[259627]: 2025-10-14 09:25:15.998 2 DEBUG nova.compute.manager [-] [instance: c0981e31-738f-44e8-be4c-b64961716660] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:25:15 np0005486808 nova_compute[259627]: 2025-10-14 09:25:15.998 2 DEBUG nova.network.neutron [-] [instance: c0981e31-738f-44e8-be4c-b64961716660] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:25:16 np0005486808 nova_compute[259627]: 2025-10-14 09:25:16.060 2 DEBUG nova.compute.manager [req-a2b22f13-aec4-4451-a393-2c0b6ab55e58 req-dcc44fd7-a850-40d8-b3b8-4a975d92864d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Received event network-vif-unplugged-8cdca031-de5b-4956-a27b-c6c6320c9764 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:25:16 np0005486808 nova_compute[259627]: 2025-10-14 09:25:16.061 2 DEBUG oslo_concurrency.lockutils [req-a2b22f13-aec4-4451-a393-2c0b6ab55e58 req-dcc44fd7-a850-40d8-b3b8-4a975d92864d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c0981e31-738f-44e8-be4c-b64961716660-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:25:16 np0005486808 nova_compute[259627]: 2025-10-14 09:25:16.061 2 DEBUG oslo_concurrency.lockutils [req-a2b22f13-aec4-4451-a393-2c0b6ab55e58 req-dcc44fd7-a850-40d8-b3b8-4a975d92864d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c0981e31-738f-44e8-be4c-b64961716660-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:25:16 np0005486808 nova_compute[259627]: 2025-10-14 09:25:16.061 2 DEBUG oslo_concurrency.lockutils [req-a2b22f13-aec4-4451-a393-2c0b6ab55e58 req-dcc44fd7-a850-40d8-b3b8-4a975d92864d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c0981e31-738f-44e8-be4c-b64961716660-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:25:16 np0005486808 nova_compute[259627]: 2025-10-14 09:25:16.061 2 DEBUG nova.compute.manager [req-a2b22f13-aec4-4451-a393-2c0b6ab55e58 req-dcc44fd7-a850-40d8-b3b8-4a975d92864d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] No waiting events found dispatching network-vif-unplugged-8cdca031-de5b-4956-a27b-c6c6320c9764 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:25:16 np0005486808 nova_compute[259627]: 2025-10-14 09:25:16.061 2 DEBUG nova.compute.manager [req-a2b22f13-aec4-4451-a393-2c0b6ab55e58 req-dcc44fd7-a850-40d8-b3b8-4a975d92864d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Received event network-vif-unplugged-8cdca031-de5b-4956-a27b-c6c6320c9764 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:25:16 np0005486808 systemd[1]: run-netns-ovnmeta\x2dee3efe32\x2d94e6\x2d45cb\x2dae71\x2db379f4a2309a.mount: Deactivated successfully.
Oct 14 05:25:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2164: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 14 05:25:17 np0005486808 reverent_golick[385361]: [
Oct 14 05:25:17 np0005486808 reverent_golick[385361]:    {
Oct 14 05:25:17 np0005486808 reverent_golick[385361]:        "available": false,
Oct 14 05:25:17 np0005486808 reverent_golick[385361]:        "ceph_device": false,
Oct 14 05:25:17 np0005486808 reverent_golick[385361]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct 14 05:25:17 np0005486808 reverent_golick[385361]:        "lsm_data": {},
Oct 14 05:25:17 np0005486808 reverent_golick[385361]:        "lvs": [],
Oct 14 05:25:17 np0005486808 reverent_golick[385361]:        "path": "/dev/sr0",
Oct 14 05:25:17 np0005486808 reverent_golick[385361]:        "rejected_reasons": [
Oct 14 05:25:17 np0005486808 reverent_golick[385361]:            "Has a FileSystem",
Oct 14 05:25:17 np0005486808 reverent_golick[385361]:            "Insufficient space (<5GB)"
Oct 14 05:25:17 np0005486808 reverent_golick[385361]:        ],
Oct 14 05:25:17 np0005486808 reverent_golick[385361]:        "sys_api": {
Oct 14 05:25:17 np0005486808 reverent_golick[385361]:            "actuators": null,
Oct 14 05:25:17 np0005486808 reverent_golick[385361]:            "device_nodes": "sr0",
Oct 14 05:25:17 np0005486808 reverent_golick[385361]:            "devname": "sr0",
Oct 14 05:25:17 np0005486808 reverent_golick[385361]:            "human_readable_size": "482.00 KB",
Oct 14 05:25:17 np0005486808 reverent_golick[385361]:            "id_bus": "ata",
Oct 14 05:25:17 np0005486808 reverent_golick[385361]:            "model": "QEMU DVD-ROM",
Oct 14 05:25:17 np0005486808 reverent_golick[385361]:            "nr_requests": "2",
Oct 14 05:25:17 np0005486808 reverent_golick[385361]:            "parent": "/dev/sr0",
Oct 14 05:25:17 np0005486808 reverent_golick[385361]:            "partitions": {},
Oct 14 05:25:17 np0005486808 reverent_golick[385361]:            "path": "/dev/sr0",
Oct 14 05:25:17 np0005486808 reverent_golick[385361]:            "removable": "1",
Oct 14 05:25:17 np0005486808 reverent_golick[385361]:            "rev": "2.5+",
Oct 14 05:25:17 np0005486808 reverent_golick[385361]:            "ro": "0",
Oct 14 05:25:17 np0005486808 reverent_golick[385361]:            "rotational": "0",
Oct 14 05:25:17 np0005486808 reverent_golick[385361]:            "sas_address": "",
Oct 14 05:25:17 np0005486808 reverent_golick[385361]:            "sas_device_handle": "",
Oct 14 05:25:17 np0005486808 reverent_golick[385361]:            "scheduler_mode": "mq-deadline",
Oct 14 05:25:17 np0005486808 reverent_golick[385361]:            "sectors": 0,
Oct 14 05:25:17 np0005486808 reverent_golick[385361]:            "sectorsize": "2048",
Oct 14 05:25:17 np0005486808 reverent_golick[385361]:            "size": 493568.0,
Oct 14 05:25:17 np0005486808 reverent_golick[385361]:            "support_discard": "2048",
Oct 14 05:25:17 np0005486808 reverent_golick[385361]:            "type": "disk",
Oct 14 05:25:17 np0005486808 reverent_golick[385361]:            "vendor": "QEMU"
Oct 14 05:25:17 np0005486808 reverent_golick[385361]:        }
Oct 14 05:25:17 np0005486808 reverent_golick[385361]:    }
Oct 14 05:25:17 np0005486808 reverent_golick[385361]: ]
Oct 14 05:25:17 np0005486808 systemd[1]: libpod-8eaf32f819418d287b180f55e23d506d244184a1c968640a50daf12fb3a7f377.scope: Deactivated successfully.
Oct 14 05:25:17 np0005486808 systemd[1]: libpod-8eaf32f819418d287b180f55e23d506d244184a1c968640a50daf12fb3a7f377.scope: Consumed 1.491s CPU time.
Oct 14 05:25:17 np0005486808 podman[385321]: 2025-10-14 09:25:17.261922328 +0000 UTC m=+1.619292833 container died 8eaf32f819418d287b180f55e23d506d244184a1c968640a50daf12fb3a7f377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 14 05:25:17 np0005486808 systemd[1]: var-lib-containers-storage-overlay-040a1eafa18ad43db61baedc35ab34d3bce36379a237d07a6c1be1830f6a558f-merged.mount: Deactivated successfully.
Oct 14 05:25:17 np0005486808 podman[385321]: 2025-10-14 09:25:17.342341385 +0000 UTC m=+1.699711850 container remove 8eaf32f819418d287b180f55e23d506d244184a1c968640a50daf12fb3a7f377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:25:17 np0005486808 systemd[1]: libpod-conmon-8eaf32f819418d287b180f55e23d506d244184a1c968640a50daf12fb3a7f377.scope: Deactivated successfully.
Oct 14 05:25:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:25:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:25:17 np0005486808 nova_compute[259627]: 2025-10-14 09:25:17.386 2 DEBUG nova.network.neutron [-] [instance: c0981e31-738f-44e8-be4c-b64961716660] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:25:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:25:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:25:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 14 05:25:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 14 05:25:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:25:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:25:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:25:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:25:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:25:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:25:17 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev e0155b0a-dfe8-4997-a18d-d5f21717cdb7 does not exist
Oct 14 05:25:17 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 7bd4f0ed-ac98-4f80-8cfb-62521ee88fd9 does not exist
Oct 14 05:25:17 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev a4418754-4f0a-4ce1-b407-b244220031f3 does not exist
Oct 14 05:25:17 np0005486808 nova_compute[259627]: 2025-10-14 09:25:17.408 2 INFO nova.compute.manager [-] [instance: c0981e31-738f-44e8-be4c-b64961716660] Took 1.41 seconds to deallocate network for instance.#033[00m
Oct 14 05:25:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:25:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:25:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:25:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:25:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:25:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:25:17 np0005486808 nova_compute[259627]: 2025-10-14 09:25:17.452 2 DEBUG oslo_concurrency.lockutils [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:25:17 np0005486808 nova_compute[259627]: 2025-10-14 09:25:17.453 2 DEBUG oslo_concurrency.lockutils [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:25:17 np0005486808 nova_compute[259627]: 2025-10-14 09:25:17.507 2 DEBUG oslo_concurrency.processutils [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:25:17 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:25:17 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:25:17 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 14 05:25:17 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:25:17 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:25:17 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:25:17 np0005486808 podman[387367]: 2025-10-14 09:25:17.837943351 +0000 UTC m=+0.083280858 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:25:17 np0005486808 podman[387366]: 2025-10-14 09:25:17.852820269 +0000 UTC m=+0.109597729 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 05:25:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:25:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2493938824' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:25:17 np0005486808 nova_compute[259627]: 2025-10-14 09:25:17.959 2 DEBUG oslo_concurrency.processutils [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:25:17 np0005486808 nova_compute[259627]: 2025-10-14 09:25:17.965 2 DEBUG nova.compute.provider_tree [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:25:17 np0005486808 nova_compute[259627]: 2025-10-14 09:25:17.985 2 DEBUG nova.scheduler.client.report [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:25:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:25:18 np0005486808 nova_compute[259627]: 2025-10-14 09:25:18.022 2 DEBUG oslo_concurrency.lockutils [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:25:18 np0005486808 nova_compute[259627]: 2025-10-14 09:25:18.061 2 INFO nova.scheduler.client.report [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Deleted allocations for instance c0981e31-738f-44e8-be4c-b64961716660#033[00m
Oct 14 05:25:18 np0005486808 nova_compute[259627]: 2025-10-14 09:25:18.131 2 DEBUG oslo_concurrency.lockutils [None req-a7a6f75f-ef4c-4de3-9648-acd565036319 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "c0981e31-738f-44e8-be4c-b64961716660" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.871s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:25:18 np0005486808 nova_compute[259627]: 2025-10-14 09:25:18.163 2 DEBUG nova.compute.manager [req-0de48470-5b42-41ee-b255-4e2552946b33 req-2109c979-002c-42d5-bc52-5e429dee4d28 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Received event network-vif-plugged-8cdca031-de5b-4956-a27b-c6c6320c9764 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:25:18 np0005486808 nova_compute[259627]: 2025-10-14 09:25:18.163 2 DEBUG oslo_concurrency.lockutils [req-0de48470-5b42-41ee-b255-4e2552946b33 req-2109c979-002c-42d5-bc52-5e429dee4d28 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c0981e31-738f-44e8-be4c-b64961716660-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:25:18 np0005486808 nova_compute[259627]: 2025-10-14 09:25:18.163 2 DEBUG oslo_concurrency.lockutils [req-0de48470-5b42-41ee-b255-4e2552946b33 req-2109c979-002c-42d5-bc52-5e429dee4d28 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c0981e31-738f-44e8-be4c-b64961716660-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:25:18 np0005486808 nova_compute[259627]: 2025-10-14 09:25:18.164 2 DEBUG oslo_concurrency.lockutils [req-0de48470-5b42-41ee-b255-4e2552946b33 req-2109c979-002c-42d5-bc52-5e429dee4d28 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c0981e31-738f-44e8-be4c-b64961716660-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:25:18 np0005486808 nova_compute[259627]: 2025-10-14 09:25:18.164 2 DEBUG nova.compute.manager [req-0de48470-5b42-41ee-b255-4e2552946b33 req-2109c979-002c-42d5-bc52-5e429dee4d28 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] No waiting events found dispatching network-vif-plugged-8cdca031-de5b-4956-a27b-c6c6320c9764 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:25:18 np0005486808 nova_compute[259627]: 2025-10-14 09:25:18.164 2 WARNING nova.compute.manager [req-0de48470-5b42-41ee-b255-4e2552946b33 req-2109c979-002c-42d5-bc52-5e429dee4d28 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c0981e31-738f-44e8-be4c-b64961716660] Received unexpected event network-vif-plugged-8cdca031-de5b-4956-a27b-c6c6320c9764 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:25:18 np0005486808 podman[387453]: 2025-10-14 09:25:18.188898033 +0000 UTC m=+0.067288773 container create 7a46ce94696fd771965f572fce07f8717480cc37833609464d048b6f68eb2029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:25:18 np0005486808 systemd[1]: Started libpod-conmon-7a46ce94696fd771965f572fce07f8717480cc37833609464d048b6f68eb2029.scope.
Oct 14 05:25:18 np0005486808 podman[387453]: 2025-10-14 09:25:18.166069489 +0000 UTC m=+0.044460209 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:25:18 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:25:18 np0005486808 podman[387453]: 2025-10-14 09:25:18.283322775 +0000 UTC m=+0.161713465 container init 7a46ce94696fd771965f572fce07f8717480cc37833609464d048b6f68eb2029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gagarin, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Oct 14 05:25:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2165: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 05:25:18 np0005486808 podman[387453]: 2025-10-14 09:25:18.296592583 +0000 UTC m=+0.174983273 container start 7a46ce94696fd771965f572fce07f8717480cc37833609464d048b6f68eb2029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gagarin, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 14 05:25:18 np0005486808 podman[387453]: 2025-10-14 09:25:18.301797822 +0000 UTC m=+0.180188512 container attach 7a46ce94696fd771965f572fce07f8717480cc37833609464d048b6f68eb2029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:25:18 np0005486808 serene_gagarin[387469]: 167 167
Oct 14 05:25:18 np0005486808 systemd[1]: libpod-7a46ce94696fd771965f572fce07f8717480cc37833609464d048b6f68eb2029.scope: Deactivated successfully.
Oct 14 05:25:18 np0005486808 podman[387453]: 2025-10-14 09:25:18.308924388 +0000 UTC m=+0.187315118 container died 7a46ce94696fd771965f572fce07f8717480cc37833609464d048b6f68eb2029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gagarin, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 05:25:18 np0005486808 systemd[1]: var-lib-containers-storage-overlay-4cb9ec100d2c38d205368e11003954aaaa7805906556738adb79a7236ca28b06-merged.mount: Deactivated successfully.
Oct 14 05:25:18 np0005486808 podman[387453]: 2025-10-14 09:25:18.367661399 +0000 UTC m=+0.246052089 container remove 7a46ce94696fd771965f572fce07f8717480cc37833609464d048b6f68eb2029 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gagarin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:25:18 np0005486808 systemd[1]: libpod-conmon-7a46ce94696fd771965f572fce07f8717480cc37833609464d048b6f68eb2029.scope: Deactivated successfully.
Oct 14 05:25:18 np0005486808 podman[387491]: 2025-10-14 09:25:18.619896152 +0000 UTC m=+0.079252119 container create 6d7a3963377f7b6c7917851939555b7b61a1de56b821f93e81d38563c4fb5099 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 05:25:18 np0005486808 systemd[1]: Started libpod-conmon-6d7a3963377f7b6c7917851939555b7b61a1de56b821f93e81d38563c4fb5099.scope.
Oct 14 05:25:18 np0005486808 podman[387491]: 2025-10-14 09:25:18.588291001 +0000 UTC m=+0.047647048 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:25:18 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:25:18 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a51aa430a33976c283090717c3dd5e4127eecc58c3d376b780631e7fa277ee9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:25:18 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a51aa430a33976c283090717c3dd5e4127eecc58c3d376b780631e7fa277ee9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:25:18 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a51aa430a33976c283090717c3dd5e4127eecc58c3d376b780631e7fa277ee9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:25:18 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a51aa430a33976c283090717c3dd5e4127eecc58c3d376b780631e7fa277ee9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:25:18 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a51aa430a33976c283090717c3dd5e4127eecc58c3d376b780631e7fa277ee9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:25:18 np0005486808 podman[387491]: 2025-10-14 09:25:18.736191446 +0000 UTC m=+0.195547453 container init 6d7a3963377f7b6c7917851939555b7b61a1de56b821f93e81d38563c4fb5099 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 14 05:25:18 np0005486808 podman[387491]: 2025-10-14 09:25:18.749057334 +0000 UTC m=+0.208413321 container start 6d7a3963377f7b6c7917851939555b7b61a1de56b821f93e81d38563c4fb5099 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elion, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:25:18 np0005486808 podman[387491]: 2025-10-14 09:25:18.753206136 +0000 UTC m=+0.212562213 container attach 6d7a3963377f7b6c7917851939555b7b61a1de56b821f93e81d38563c4fb5099 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elion, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:25:19 np0005486808 nova_compute[259627]: 2025-10-14 09:25:19.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:19 np0005486808 distracted_elion[387508]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:25:19 np0005486808 distracted_elion[387508]: --> relative data size: 1.0
Oct 14 05:25:19 np0005486808 distracted_elion[387508]: --> All data devices are unavailable
Oct 14 05:25:20 np0005486808 systemd[1]: libpod-6d7a3963377f7b6c7917851939555b7b61a1de56b821f93e81d38563c4fb5099.scope: Deactivated successfully.
Oct 14 05:25:20 np0005486808 systemd[1]: libpod-6d7a3963377f7b6c7917851939555b7b61a1de56b821f93e81d38563c4fb5099.scope: Consumed 1.205s CPU time.
Oct 14 05:25:20 np0005486808 podman[387491]: 2025-10-14 09:25:20.010441572 +0000 UTC m=+1.469797589 container died 6d7a3963377f7b6c7917851939555b7b61a1de56b821f93e81d38563c4fb5099 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elion, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:25:20 np0005486808 systemd[1]: var-lib-containers-storage-overlay-3a51aa430a33976c283090717c3dd5e4127eecc58c3d376b780631e7fa277ee9-merged.mount: Deactivated successfully.
Oct 14 05:25:20 np0005486808 podman[387491]: 2025-10-14 09:25:20.093570026 +0000 UTC m=+1.552926023 container remove 6d7a3963377f7b6c7917851939555b7b61a1de56b821f93e81d38563c4fb5099 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:25:20 np0005486808 systemd[1]: libpod-conmon-6d7a3963377f7b6c7917851939555b7b61a1de56b821f93e81d38563c4fb5099.scope: Deactivated successfully.
Oct 14 05:25:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2166: 305 pgs: 305 active+clean; 75 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 85 op/s
Oct 14 05:25:20 np0005486808 nova_compute[259627]: 2025-10-14 09:25:20.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:20 np0005486808 podman[387692]: 2025-10-14 09:25:20.936448723 +0000 UTC m=+0.061318676 container create a258e2bce1d49ce0e1b2c16a9d4546ab1376c3a8ef7372a2026314338e1e9308 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 14 05:25:20 np0005486808 systemd[1]: Started libpod-conmon-a258e2bce1d49ce0e1b2c16a9d4546ab1376c3a8ef7372a2026314338e1e9308.scope.
Oct 14 05:25:21 np0005486808 podman[387692]: 2025-10-14 09:25:20.912238655 +0000 UTC m=+0.037108658 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:25:21 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:25:21 np0005486808 podman[387692]: 2025-10-14 09:25:21.050187454 +0000 UTC m=+0.175057467 container init a258e2bce1d49ce0e1b2c16a9d4546ab1376c3a8ef7372a2026314338e1e9308 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:25:21 np0005486808 podman[387692]: 2025-10-14 09:25:21.064386055 +0000 UTC m=+0.189255988 container start a258e2bce1d49ce0e1b2c16a9d4546ab1376c3a8ef7372a2026314338e1e9308 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 14 05:25:21 np0005486808 podman[387692]: 2025-10-14 09:25:21.069349577 +0000 UTC m=+0.194219550 container attach a258e2bce1d49ce0e1b2c16a9d4546ab1376c3a8ef7372a2026314338e1e9308 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_grothendieck, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:25:21 np0005486808 inspiring_grothendieck[387708]: 167 167
Oct 14 05:25:21 np0005486808 systemd[1]: libpod-a258e2bce1d49ce0e1b2c16a9d4546ab1376c3a8ef7372a2026314338e1e9308.scope: Deactivated successfully.
Oct 14 05:25:21 np0005486808 podman[387692]: 2025-10-14 09:25:21.07349159 +0000 UTC m=+0.198361553 container died a258e2bce1d49ce0e1b2c16a9d4546ab1376c3a8ef7372a2026314338e1e9308 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_grothendieck, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 14 05:25:21 np0005486808 systemd[1]: var-lib-containers-storage-overlay-32591531d28ccaca99d6a47b17a49a47a6517a0dfa7c270f4d8bbba3d675ba21-merged.mount: Deactivated successfully.
Oct 14 05:25:21 np0005486808 podman[387692]: 2025-10-14 09:25:21.122938311 +0000 UTC m=+0.247808244 container remove a258e2bce1d49ce0e1b2c16a9d4546ab1376c3a8ef7372a2026314338e1e9308 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_grothendieck, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 14 05:25:21 np0005486808 systemd[1]: libpod-conmon-a258e2bce1d49ce0e1b2c16a9d4546ab1376c3a8ef7372a2026314338e1e9308.scope: Deactivated successfully.
Oct 14 05:25:21 np0005486808 podman[387731]: 2025-10-14 09:25:21.327871675 +0000 UTC m=+0.069330954 container create 4f28499885b87a6bd1e5dea82112a77c2b4683ac01eed0689f9b4527083bcaab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_babbage, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 14 05:25:21 np0005486808 systemd[1]: Started libpod-conmon-4f28499885b87a6bd1e5dea82112a77c2b4683ac01eed0689f9b4527083bcaab.scope.
Oct 14 05:25:21 np0005486808 podman[387731]: 2025-10-14 09:25:21.29810843 +0000 UTC m=+0.039567769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:25:21 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:25:21 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f71c7394a99c17c5ae3c08bcb91b57534cf0db1dd0866b7a9415c5d41316534/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:25:21 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f71c7394a99c17c5ae3c08bcb91b57534cf0db1dd0866b7a9415c5d41316534/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:25:21 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f71c7394a99c17c5ae3c08bcb91b57534cf0db1dd0866b7a9415c5d41316534/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:25:21 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f71c7394a99c17c5ae3c08bcb91b57534cf0db1dd0866b7a9415c5d41316534/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:25:21 np0005486808 podman[387731]: 2025-10-14 09:25:21.427890557 +0000 UTC m=+0.169349896 container init 4f28499885b87a6bd1e5dea82112a77c2b4683ac01eed0689f9b4527083bcaab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_babbage, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:25:21 np0005486808 podman[387731]: 2025-10-14 09:25:21.443563864 +0000 UTC m=+0.185023123 container start 4f28499885b87a6bd1e5dea82112a77c2b4683ac01eed0689f9b4527083bcaab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_babbage, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:25:21 np0005486808 podman[387731]: 2025-10-14 09:25:21.447369648 +0000 UTC m=+0.188828977 container attach 4f28499885b87a6bd1e5dea82112a77c2b4683ac01eed0689f9b4527083bcaab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_babbage, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]: {
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:    "0": [
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:        {
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:            "devices": [
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:                "/dev/loop3"
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:            ],
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:            "lv_name": "ceph_lv0",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:            "lv_size": "21470642176",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:            "name": "ceph_lv0",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:            "tags": {
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:                "ceph.cluster_name": "ceph",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:                "ceph.crush_device_class": "",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:                "ceph.encrypted": "0",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:                "ceph.osd_id": "0",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:                "ceph.type": "block",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:                "ceph.vdo": "0"
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:            },
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:            "type": "block",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:            "vg_name": "ceph_vg0"
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:        }
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:    ],
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:    "1": [
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:        {
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:            "devices": [
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:                "/dev/loop4"
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:            ],
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:            "lv_name": "ceph_lv1",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:            "lv_size": "21470642176",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:            "name": "ceph_lv1",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:            "tags": {
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:                "ceph.cluster_name": "ceph",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:                "ceph.crush_device_class": "",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:                "ceph.encrypted": "0",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:                "ceph.osd_id": "1",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:                "ceph.type": "block",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:                "ceph.vdo": "0"
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:            },
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:            "type": "block",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:            "vg_name": "ceph_vg1"
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:        }
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:    ],
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:    "2": [
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:        {
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:            "devices": [
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:                "/dev/loop5"
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:            ],
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:            "lv_name": "ceph_lv2",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:            "lv_size": "21470642176",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:            "name": "ceph_lv2",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:            "tags": {
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:                "ceph.cluster_name": "ceph",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:                "ceph.crush_device_class": "",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:                "ceph.encrypted": "0",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:                "ceph.osd_id": "2",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:                "ceph.type": "block",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:                "ceph.vdo": "0"
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:            },
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:            "type": "block",
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:            "vg_name": "ceph_vg2"
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:        }
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]:    ]
Oct 14 05:25:22 np0005486808 agitated_babbage[387747]: }
Oct 14 05:25:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2167: 305 pgs: 305 active+clean; 41 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Oct 14 05:25:22 np0005486808 systemd[1]: libpod-4f28499885b87a6bd1e5dea82112a77c2b4683ac01eed0689f9b4527083bcaab.scope: Deactivated successfully.
Oct 14 05:25:22 np0005486808 podman[387731]: 2025-10-14 09:25:22.297858192 +0000 UTC m=+1.039317501 container died 4f28499885b87a6bd1e5dea82112a77c2b4683ac01eed0689f9b4527083bcaab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_babbage, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:25:22 np0005486808 systemd[1]: var-lib-containers-storage-overlay-6f71c7394a99c17c5ae3c08bcb91b57534cf0db1dd0866b7a9415c5d41316534-merged.mount: Deactivated successfully.
Oct 14 05:25:22 np0005486808 podman[387731]: 2025-10-14 09:25:22.384131074 +0000 UTC m=+1.125590353 container remove 4f28499885b87a6bd1e5dea82112a77c2b4683ac01eed0689f9b4527083bcaab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_babbage, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:25:22 np0005486808 systemd[1]: libpod-conmon-4f28499885b87a6bd1e5dea82112a77c2b4683ac01eed0689f9b4527083bcaab.scope: Deactivated successfully.
Oct 14 05:25:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:25:23 np0005486808 podman[387910]: 2025-10-14 09:25:23.118533241 +0000 UTC m=+0.045627078 container create b611bd2c76642c4a66782fecb9eec5780999f15b08f628e388c151688d6c2b73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:25:23 np0005486808 systemd[1]: Started libpod-conmon-b611bd2c76642c4a66782fecb9eec5780999f15b08f628e388c151688d6c2b73.scope.
Oct 14 05:25:23 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:25:23 np0005486808 podman[387910]: 2025-10-14 09:25:23.094982589 +0000 UTC m=+0.022076516 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:25:23 np0005486808 podman[387910]: 2025-10-14 09:25:23.207158801 +0000 UTC m=+0.134252628 container init b611bd2c76642c4a66782fecb9eec5780999f15b08f628e388c151688d6c2b73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_burnell, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:25:23 np0005486808 podman[387910]: 2025-10-14 09:25:23.217724982 +0000 UTC m=+0.144818809 container start b611bd2c76642c4a66782fecb9eec5780999f15b08f628e388c151688d6c2b73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 14 05:25:23 np0005486808 podman[387910]: 2025-10-14 09:25:23.220720666 +0000 UTC m=+0.147814493 container attach b611bd2c76642c4a66782fecb9eec5780999f15b08f628e388c151688d6c2b73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:25:23 np0005486808 admiring_burnell[387926]: 167 167
Oct 14 05:25:23 np0005486808 systemd[1]: libpod-b611bd2c76642c4a66782fecb9eec5780999f15b08f628e388c151688d6c2b73.scope: Deactivated successfully.
Oct 14 05:25:23 np0005486808 podman[387910]: 2025-10-14 09:25:23.226739425 +0000 UTC m=+0.153833272 container died b611bd2c76642c4a66782fecb9eec5780999f15b08f628e388c151688d6c2b73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_burnell, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:25:23 np0005486808 systemd[1]: var-lib-containers-storage-overlay-b603d1ce05ef40b91803e3789e86fafe6ca64455daf6e7490a3175e749c49bd0-merged.mount: Deactivated successfully.
Oct 14 05:25:23 np0005486808 podman[387910]: 2025-10-14 09:25:23.276245138 +0000 UTC m=+0.203338985 container remove b611bd2c76642c4a66782fecb9eec5780999f15b08f628e388c151688d6c2b73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_burnell, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:25:23 np0005486808 systemd[1]: libpod-conmon-b611bd2c76642c4a66782fecb9eec5780999f15b08f628e388c151688d6c2b73.scope: Deactivated successfully.
Oct 14 05:25:23 np0005486808 podman[387951]: 2025-10-14 09:25:23.47987071 +0000 UTC m=+0.058096537 container create 3e900b48c665756bd5550929e00fed5f5e04eb980193dec070d2a1952650888c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_fermi, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 14 05:25:23 np0005486808 systemd[1]: Started libpod-conmon-3e900b48c665756bd5550929e00fed5f5e04eb980193dec070d2a1952650888c.scope.
Oct 14 05:25:23 np0005486808 podman[387951]: 2025-10-14 09:25:23.459757563 +0000 UTC m=+0.037983430 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:25:23 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:25:23 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29677713dcd91e68fadc3b2c12c19892f4244aec1453afaba4c5aadd20e20986/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:25:23 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29677713dcd91e68fadc3b2c12c19892f4244aec1453afaba4c5aadd20e20986/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:25:23 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29677713dcd91e68fadc3b2c12c19892f4244aec1453afaba4c5aadd20e20986/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:25:23 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29677713dcd91e68fadc3b2c12c19892f4244aec1453afaba4c5aadd20e20986/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:25:23 np0005486808 podman[387951]: 2025-10-14 09:25:23.577586974 +0000 UTC m=+0.155812831 container init 3e900b48c665756bd5550929e00fed5f5e04eb980193dec070d2a1952650888c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_fermi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:25:23 np0005486808 podman[387951]: 2025-10-14 09:25:23.589937019 +0000 UTC m=+0.168162876 container start 3e900b48c665756bd5550929e00fed5f5e04eb980193dec070d2a1952650888c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Oct 14 05:25:23 np0005486808 podman[387951]: 2025-10-14 09:25:23.593934148 +0000 UTC m=+0.172160015 container attach 3e900b48c665756bd5550929e00fed5f5e04eb980193dec070d2a1952650888c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_fermi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 14 05:25:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2168: 305 pgs: 305 active+clean; 41 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Oct 14 05:25:24 np0005486808 compassionate_fermi[387967]: {
Oct 14 05:25:24 np0005486808 compassionate_fermi[387967]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:25:24 np0005486808 compassionate_fermi[387967]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:25:24 np0005486808 compassionate_fermi[387967]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:25:24 np0005486808 compassionate_fermi[387967]:        "osd_id": 2,
Oct 14 05:25:24 np0005486808 compassionate_fermi[387967]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:25:24 np0005486808 compassionate_fermi[387967]:        "type": "bluestore"
Oct 14 05:25:24 np0005486808 compassionate_fermi[387967]:    },
Oct 14 05:25:24 np0005486808 compassionate_fermi[387967]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:25:24 np0005486808 compassionate_fermi[387967]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:25:24 np0005486808 compassionate_fermi[387967]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:25:24 np0005486808 compassionate_fermi[387967]:        "osd_id": 1,
Oct 14 05:25:24 np0005486808 compassionate_fermi[387967]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:25:24 np0005486808 compassionate_fermi[387967]:        "type": "bluestore"
Oct 14 05:25:24 np0005486808 compassionate_fermi[387967]:    },
Oct 14 05:25:24 np0005486808 compassionate_fermi[387967]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:25:24 np0005486808 compassionate_fermi[387967]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:25:24 np0005486808 compassionate_fermi[387967]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:25:24 np0005486808 compassionate_fermi[387967]:        "osd_id": 0,
Oct 14 05:25:24 np0005486808 compassionate_fermi[387967]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:25:24 np0005486808 compassionate_fermi[387967]:        "type": "bluestore"
Oct 14 05:25:24 np0005486808 compassionate_fermi[387967]:    }
Oct 14 05:25:24 np0005486808 compassionate_fermi[387967]: }
Oct 14 05:25:24 np0005486808 systemd[1]: libpod-3e900b48c665756bd5550929e00fed5f5e04eb980193dec070d2a1952650888c.scope: Deactivated successfully.
Oct 14 05:25:24 np0005486808 systemd[1]: libpod-3e900b48c665756bd5550929e00fed5f5e04eb980193dec070d2a1952650888c.scope: Consumed 1.051s CPU time.
Oct 14 05:25:24 np0005486808 conmon[387967]: conmon 3e900b48c665756bd555 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3e900b48c665756bd5550929e00fed5f5e04eb980193dec070d2a1952650888c.scope/container/memory.events
Oct 14 05:25:24 np0005486808 podman[387951]: 2025-10-14 09:25:24.637371501 +0000 UTC m=+1.215597388 container died 3e900b48c665756bd5550929e00fed5f5e04eb980193dec070d2a1952650888c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_fermi, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:25:24 np0005486808 systemd[1]: var-lib-containers-storage-overlay-29677713dcd91e68fadc3b2c12c19892f4244aec1453afaba4c5aadd20e20986-merged.mount: Deactivated successfully.
Oct 14 05:25:24 np0005486808 podman[387951]: 2025-10-14 09:25:24.704352376 +0000 UTC m=+1.282578203 container remove 3e900b48c665756bd5550929e00fed5f5e04eb980193dec070d2a1952650888c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_fermi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:25:24 np0005486808 systemd[1]: libpod-conmon-3e900b48c665756bd5550929e00fed5f5e04eb980193dec070d2a1952650888c.scope: Deactivated successfully.
Oct 14 05:25:24 np0005486808 nova_compute[259627]: 2025-10-14 09:25:24.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:24 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:25:24 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:25:24 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:25:24 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:25:24 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 8cd24a57-e538-4a40-94f9-4d8503074b58 does not exist
Oct 14 05:25:24 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev a22ccd02-a15a-49e9-a0fd-639c5f7b51c6 does not exist
Oct 14 05:25:25 np0005486808 nova_compute[259627]: 2025-10-14 09:25:25.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:25 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:25:25 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:25:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2169: 305 pgs: 305 active+clean; 41 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Oct 14 05:25:27 np0005486808 nova_compute[259627]: 2025-10-14 09:25:27.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:27 np0005486808 nova_compute[259627]: 2025-10-14 09:25:27.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:25:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2170: 305 pgs: 305 active+clean; 41 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct 14 05:25:29 np0005486808 nova_compute[259627]: 2025-10-14 09:25:29.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2171: 305 pgs: 305 active+clean; 41 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct 14 05:25:30 np0005486808 nova_compute[259627]: 2025-10-14 09:25:30.521 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433915.5197697, c0981e31-738f-44e8-be4c-b64961716660 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:25:30 np0005486808 nova_compute[259627]: 2025-10-14 09:25:30.521 2 INFO nova.compute.manager [-] [instance: c0981e31-738f-44e8-be4c-b64961716660] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:25:30 np0005486808 nova_compute[259627]: 2025-10-14 09:25:30.546 2 DEBUG nova.compute.manager [None req-701c9003-6589-43a4-902f-1feb2a5d1606 - - - - - -] [instance: c0981e31-738f-44e8-be4c-b64961716660] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:25:30 np0005486808 nova_compute[259627]: 2025-10-14 09:25:30.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2172: 305 pgs: 305 active+clean; 41 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 852 B/s wr, 14 op/s
Oct 14 05:25:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:25:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:25:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:25:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:25:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:25:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:25:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:25:32
Oct 14 05:25:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:25:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:25:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', 'images', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'volumes', 'backups', 'vms']
Oct 14 05:25:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:25:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:25:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:25:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:25:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:25:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:25:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:25:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:25:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:25:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:25:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:25:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:25:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2173: 305 pgs: 305 active+clean; 41 MiB data, 825 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:25:34 np0005486808 nova_compute[259627]: 2025-10-14 09:25:34.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:35 np0005486808 nova_compute[259627]: 2025-10-14 09:25:35.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:35 np0005486808 podman[388064]: 2025-10-14 09:25:35.671858357 +0000 UTC m=+0.073528428 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 14 05:25:35 np0005486808 podman[388063]: 2025-10-14 09:25:35.672839321 +0000 UTC m=+0.073882686 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 14 05:25:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2174: 305 pgs: 305 active+clean; 41 MiB data, 825 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:25:37 np0005486808 nova_compute[259627]: 2025-10-14 09:25:37.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:25:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:25:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2175: 305 pgs: 305 active+clean; 41 MiB data, 825 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:25:39 np0005486808 nova_compute[259627]: 2025-10-14 09:25:39.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:39 np0005486808 nova_compute[259627]: 2025-10-14 09:25:39.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:25:39 np0005486808 nova_compute[259627]: 2025-10-14 09:25:39.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:25:40 np0005486808 nova_compute[259627]: 2025-10-14 09:25:40.016 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:25:40 np0005486808 nova_compute[259627]: 2025-10-14 09:25:40.016 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:25:40 np0005486808 nova_compute[259627]: 2025-10-14 09:25:40.017 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:25:40 np0005486808 nova_compute[259627]: 2025-10-14 09:25:40.017 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:25:40 np0005486808 nova_compute[259627]: 2025-10-14 09:25:40.018 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:25:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2176: 305 pgs: 305 active+clean; 41 MiB data, 825 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:25:40 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:25:40 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1712989019' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:25:40 np0005486808 nova_compute[259627]: 2025-10-14 09:25:40.472 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:25:40 np0005486808 nova_compute[259627]: 2025-10-14 09:25:40.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:40 np0005486808 nova_compute[259627]: 2025-10-14 09:25:40.654 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:25:40 np0005486808 nova_compute[259627]: 2025-10-14 09:25:40.656 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3671MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:25:40 np0005486808 nova_compute[259627]: 2025-10-14 09:25:40.656 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:25:40 np0005486808 nova_compute[259627]: 2025-10-14 09:25:40.656 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:25:40 np0005486808 nova_compute[259627]: 2025-10-14 09:25:40.740 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:25:40 np0005486808 nova_compute[259627]: 2025-10-14 09:25:40.741 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:25:40 np0005486808 nova_compute[259627]: 2025-10-14 09:25:40.766 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:25:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:25:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3998892646' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:25:41 np0005486808 nova_compute[259627]: 2025-10-14 09:25:41.172 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:25:41 np0005486808 nova_compute[259627]: 2025-10-14 09:25:41.178 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:25:41 np0005486808 nova_compute[259627]: 2025-10-14 09:25:41.194 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:25:41 np0005486808 nova_compute[259627]: 2025-10-14 09:25:41.213 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:25:41 np0005486808 nova_compute[259627]: 2025-10-14 09:25:41.214 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:25:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2177: 305 pgs: 305 active+clean; 41 MiB data, 825 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:25:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:25:43 np0005486808 nova_compute[259627]: 2025-10-14 09:25:43.209 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:25:43 np0005486808 nova_compute[259627]: 2025-10-14 09:25:43.209 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:25:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:25:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:25:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:25:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:25:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 05:25:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:25:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:25:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:25:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:25:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:25:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 05:25:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:25:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:25:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:25:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:25:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:25:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:25:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:25:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:25:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:25:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:25:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:25:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:25:43 np0005486808 nova_compute[259627]: 2025-10-14 09:25:43.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:25:43 np0005486808 nova_compute[259627]: 2025-10-14 09:25:43.977 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:25:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2178: 305 pgs: 305 active+clean; 41 MiB data, 825 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:25:44 np0005486808 nova_compute[259627]: 2025-10-14 09:25:44.621 2 DEBUG oslo_concurrency.lockutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "f41def60-a7be-4154-86bc-ef63a639ee94" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:25:44 np0005486808 nova_compute[259627]: 2025-10-14 09:25:44.621 2 DEBUG oslo_concurrency.lockutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "f41def60-a7be-4154-86bc-ef63a639ee94" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:25:44 np0005486808 nova_compute[259627]: 2025-10-14 09:25:44.646 2 DEBUG nova.compute.manager [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:25:44 np0005486808 nova_compute[259627]: 2025-10-14 09:25:44.744 2 DEBUG oslo_concurrency.lockutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:25:44 np0005486808 nova_compute[259627]: 2025-10-14 09:25:44.745 2 DEBUG oslo_concurrency.lockutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:25:44 np0005486808 nova_compute[259627]: 2025-10-14 09:25:44.753 2 DEBUG nova.virt.hardware [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:25:44 np0005486808 nova_compute[259627]: 2025-10-14 09:25:44.754 2 INFO nova.compute.claims [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:25:44 np0005486808 nova_compute[259627]: 2025-10-14 09:25:44.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:44 np0005486808 nova_compute[259627]: 2025-10-14 09:25:44.905 2 DEBUG oslo_concurrency.processutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:25:44 np0005486808 nova_compute[259627]: 2025-10-14 09:25:44.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:25:44 np0005486808 nova_compute[259627]: 2025-10-14 09:25:44.980 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:25:44 np0005486808 nova_compute[259627]: 2025-10-14 09:25:44.980 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 05:25:45 np0005486808 nova_compute[259627]: 2025-10-14 09:25:45.000 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct 14 05:25:45 np0005486808 nova_compute[259627]: 2025-10-14 09:25:45.000 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 14 05:25:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:25:45 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3325488986' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:25:45 np0005486808 nova_compute[259627]: 2025-10-14 09:25:45.331 2 DEBUG oslo_concurrency.processutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:25:45 np0005486808 nova_compute[259627]: 2025-10-14 09:25:45.337 2 DEBUG nova.compute.provider_tree [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:25:45 np0005486808 nova_compute[259627]: 2025-10-14 09:25:45.357 2 DEBUG nova.scheduler.client.report [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:25:45 np0005486808 nova_compute[259627]: 2025-10-14 09:25:45.381 2 DEBUG oslo_concurrency.lockutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:25:45 np0005486808 nova_compute[259627]: 2025-10-14 09:25:45.382 2 DEBUG nova.compute.manager [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:25:45 np0005486808 nova_compute[259627]: 2025-10-14 09:25:45.431 2 DEBUG nova.compute.manager [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:25:45 np0005486808 nova_compute[259627]: 2025-10-14 09:25:45.431 2 DEBUG nova.network.neutron [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:25:45 np0005486808 nova_compute[259627]: 2025-10-14 09:25:45.447 2 INFO nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:25:45 np0005486808 nova_compute[259627]: 2025-10-14 09:25:45.474 2 DEBUG nova.compute.manager [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:25:45 np0005486808 nova_compute[259627]: 2025-10-14 09:25:45.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:45 np0005486808 nova_compute[259627]: 2025-10-14 09:25:45.587 2 DEBUG nova.compute.manager [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:25:45 np0005486808 nova_compute[259627]: 2025-10-14 09:25:45.589 2 DEBUG nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:25:45 np0005486808 nova_compute[259627]: 2025-10-14 09:25:45.589 2 INFO nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Creating image(s)#033[00m
Oct 14 05:25:45 np0005486808 nova_compute[259627]: 2025-10-14 09:25:45.613 2 DEBUG nova.storage.rbd_utils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image f41def60-a7be-4154-86bc-ef63a639ee94_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:25:45 np0005486808 nova_compute[259627]: 2025-10-14 09:25:45.637 2 DEBUG nova.storage.rbd_utils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image f41def60-a7be-4154-86bc-ef63a639ee94_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:25:45 np0005486808 nova_compute[259627]: 2025-10-14 09:25:45.661 2 DEBUG nova.storage.rbd_utils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image f41def60-a7be-4154-86bc-ef63a639ee94_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:25:45 np0005486808 nova_compute[259627]: 2025-10-14 09:25:45.665 2 DEBUG oslo_concurrency.processutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:25:45 np0005486808 nova_compute[259627]: 2025-10-14 09:25:45.718 2 DEBUG nova.policy [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '10926c27278e45e7b995f2e53b9d16f9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4562699bdb1548f1bb36819107535620', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:25:45 np0005486808 nova_compute[259627]: 2025-10-14 09:25:45.769 2 DEBUG oslo_concurrency.processutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:25:45 np0005486808 nova_compute[259627]: 2025-10-14 09:25:45.771 2 DEBUG oslo_concurrency.lockutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:25:45 np0005486808 nova_compute[259627]: 2025-10-14 09:25:45.771 2 DEBUG oslo_concurrency.lockutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:25:45 np0005486808 nova_compute[259627]: 2025-10-14 09:25:45.772 2 DEBUG oslo_concurrency.lockutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:25:45 np0005486808 nova_compute[259627]: 2025-10-14 09:25:45.796 2 DEBUG nova.storage.rbd_utils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image f41def60-a7be-4154-86bc-ef63a639ee94_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:25:45 np0005486808 nova_compute[259627]: 2025-10-14 09:25:45.801 2 DEBUG oslo_concurrency.processutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 f41def60-a7be-4154-86bc-ef63a639ee94_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:25:46 np0005486808 nova_compute[259627]: 2025-10-14 09:25:46.133 2 DEBUG oslo_concurrency.processutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 f41def60-a7be-4154-86bc-ef63a639ee94_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.332s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:25:46 np0005486808 nova_compute[259627]: 2025-10-14 09:25:46.179 2 DEBUG nova.storage.rbd_utils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] resizing rbd image f41def60-a7be-4154-86bc-ef63a639ee94_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:25:46 np0005486808 nova_compute[259627]: 2025-10-14 09:25:46.254 2 DEBUG nova.objects.instance [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'migration_context' on Instance uuid f41def60-a7be-4154-86bc-ef63a639ee94 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:25:46 np0005486808 nova_compute[259627]: 2025-10-14 09:25:46.275 2 DEBUG nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:25:46 np0005486808 nova_compute[259627]: 2025-10-14 09:25:46.275 2 DEBUG nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Ensure instance console log exists: /var/lib/nova/instances/f41def60-a7be-4154-86bc-ef63a639ee94/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:25:46 np0005486808 nova_compute[259627]: 2025-10-14 09:25:46.276 2 DEBUG oslo_concurrency.lockutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:25:46 np0005486808 nova_compute[259627]: 2025-10-14 09:25:46.276 2 DEBUG oslo_concurrency.lockutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:25:46 np0005486808 nova_compute[259627]: 2025-10-14 09:25:46.276 2 DEBUG oslo_concurrency.lockutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:25:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2179: 305 pgs: 305 active+clean; 41 MiB data, 825 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:25:46 np0005486808 nova_compute[259627]: 2025-10-14 09:25:46.415 2 DEBUG nova.network.neutron [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Successfully created port: 52d803a0-5139-4197-a575-2530583dda13 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:25:47 np0005486808 nova_compute[259627]: 2025-10-14 09:25:47.384 2 DEBUG nova.network.neutron [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Successfully updated port: 52d803a0-5139-4197-a575-2530583dda13 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:25:47 np0005486808 nova_compute[259627]: 2025-10-14 09:25:47.398 2 DEBUG oslo_concurrency.lockutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "refresh_cache-f41def60-a7be-4154-86bc-ef63a639ee94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:25:47 np0005486808 nova_compute[259627]: 2025-10-14 09:25:47.398 2 DEBUG oslo_concurrency.lockutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquired lock "refresh_cache-f41def60-a7be-4154-86bc-ef63a639ee94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:25:47 np0005486808 nova_compute[259627]: 2025-10-14 09:25:47.398 2 DEBUG nova.network.neutron [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:25:47 np0005486808 nova_compute[259627]: 2025-10-14 09:25:47.491 2 DEBUG nova.compute.manager [req-be37a6ba-61ac-49ce-ba4f-6b7717a6015a req-20d321b7-0560-4318-91f3-2f0653c84559 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Received event network-changed-52d803a0-5139-4197-a575-2530583dda13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:25:47 np0005486808 nova_compute[259627]: 2025-10-14 09:25:47.492 2 DEBUG nova.compute.manager [req-be37a6ba-61ac-49ce-ba4f-6b7717a6015a req-20d321b7-0560-4318-91f3-2f0653c84559 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Refreshing instance network info cache due to event network-changed-52d803a0-5139-4197-a575-2530583dda13. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:25:47 np0005486808 nova_compute[259627]: 2025-10-14 09:25:47.492 2 DEBUG oslo_concurrency.lockutils [req-be37a6ba-61ac-49ce-ba4f-6b7717a6015a req-20d321b7-0560-4318-91f3-2f0653c84559 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-f41def60-a7be-4154-86bc-ef63a639ee94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:25:47 np0005486808 nova_compute[259627]: 2025-10-14 09:25:47.620 2 DEBUG nova.network.neutron [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:25:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:25:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2180: 305 pgs: 305 active+clean; 41 MiB data, 825 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:25:48 np0005486808 nova_compute[259627]: 2025-10-14 09:25:48.408 2 DEBUG nova.network.neutron [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Updating instance_info_cache with network_info: [{"id": "52d803a0-5139-4197-a575-2530583dda13", "address": "fa:16:3e:6a:56:7c", "network": {"id": "0e4b4c3f-9218-4fba-8f93-74ac472b0db0", "bridge": "br-int", "label": "tempest-network-smoke--2018979810", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52d803a0-51", "ovs_interfaceid": "52d803a0-5139-4197-a575-2530583dda13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:25:48 np0005486808 nova_compute[259627]: 2025-10-14 09:25:48.442 2 DEBUG oslo_concurrency.lockutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Releasing lock "refresh_cache-f41def60-a7be-4154-86bc-ef63a639ee94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:25:48 np0005486808 nova_compute[259627]: 2025-10-14 09:25:48.443 2 DEBUG nova.compute.manager [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Instance network_info: |[{"id": "52d803a0-5139-4197-a575-2530583dda13", "address": "fa:16:3e:6a:56:7c", "network": {"id": "0e4b4c3f-9218-4fba-8f93-74ac472b0db0", "bridge": "br-int", "label": "tempest-network-smoke--2018979810", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52d803a0-51", "ovs_interfaceid": "52d803a0-5139-4197-a575-2530583dda13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:25:48 np0005486808 nova_compute[259627]: 2025-10-14 09:25:48.443 2 DEBUG oslo_concurrency.lockutils [req-be37a6ba-61ac-49ce-ba4f-6b7717a6015a req-20d321b7-0560-4318-91f3-2f0653c84559 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-f41def60-a7be-4154-86bc-ef63a639ee94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:25:48 np0005486808 nova_compute[259627]: 2025-10-14 09:25:48.443 2 DEBUG nova.network.neutron [req-be37a6ba-61ac-49ce-ba4f-6b7717a6015a req-20d321b7-0560-4318-91f3-2f0653c84559 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Refreshing network info cache for port 52d803a0-5139-4197-a575-2530583dda13 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:25:48 np0005486808 nova_compute[259627]: 2025-10-14 09:25:48.449 2 DEBUG nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Start _get_guest_xml network_info=[{"id": "52d803a0-5139-4197-a575-2530583dda13", "address": "fa:16:3e:6a:56:7c", "network": {"id": "0e4b4c3f-9218-4fba-8f93-74ac472b0db0", "bridge": "br-int", "label": "tempest-network-smoke--2018979810", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52d803a0-51", "ovs_interfaceid": "52d803a0-5139-4197-a575-2530583dda13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:25:48 np0005486808 nova_compute[259627]: 2025-10-14 09:25:48.454 2 WARNING nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:25:48 np0005486808 nova_compute[259627]: 2025-10-14 09:25:48.461 2 DEBUG nova.virt.libvirt.host [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:25:48 np0005486808 nova_compute[259627]: 2025-10-14 09:25:48.461 2 DEBUG nova.virt.libvirt.host [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:25:48 np0005486808 nova_compute[259627]: 2025-10-14 09:25:48.471 2 DEBUG nova.virt.libvirt.host [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:25:48 np0005486808 nova_compute[259627]: 2025-10-14 09:25:48.472 2 DEBUG nova.virt.libvirt.host [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:25:48 np0005486808 nova_compute[259627]: 2025-10-14 09:25:48.472 2 DEBUG nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:25:48 np0005486808 nova_compute[259627]: 2025-10-14 09:25:48.472 2 DEBUG nova.virt.hardware [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:25:48 np0005486808 nova_compute[259627]: 2025-10-14 09:25:48.473 2 DEBUG nova.virt.hardware [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:25:48 np0005486808 nova_compute[259627]: 2025-10-14 09:25:48.473 2 DEBUG nova.virt.hardware [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:25:48 np0005486808 nova_compute[259627]: 2025-10-14 09:25:48.473 2 DEBUG nova.virt.hardware [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:25:48 np0005486808 nova_compute[259627]: 2025-10-14 09:25:48.474 2 DEBUG nova.virt.hardware [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:25:48 np0005486808 nova_compute[259627]: 2025-10-14 09:25:48.474 2 DEBUG nova.virt.hardware [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:25:48 np0005486808 nova_compute[259627]: 2025-10-14 09:25:48.474 2 DEBUG nova.virt.hardware [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:25:48 np0005486808 nova_compute[259627]: 2025-10-14 09:25:48.474 2 DEBUG nova.virt.hardware [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:25:48 np0005486808 nova_compute[259627]: 2025-10-14 09:25:48.475 2 DEBUG nova.virt.hardware [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:25:48 np0005486808 nova_compute[259627]: 2025-10-14 09:25:48.475 2 DEBUG nova.virt.hardware [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:25:48 np0005486808 nova_compute[259627]: 2025-10-14 09:25:48.475 2 DEBUG nova.virt.hardware [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:25:48 np0005486808 nova_compute[259627]: 2025-10-14 09:25:48.479 2 DEBUG oslo_concurrency.processutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:25:48 np0005486808 podman[388338]: 2025-10-14 09:25:48.686580023 +0000 UTC m=+0.092682511 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 14 05:25:48 np0005486808 podman[388337]: 2025-10-14 09:25:48.731585505 +0000 UTC m=+0.146248454 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 05:25:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:25:48 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4213029771' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:25:48 np0005486808 nova_compute[259627]: 2025-10-14 09:25:48.945 2 DEBUG oslo_concurrency.processutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:25:48 np0005486808 nova_compute[259627]: 2025-10-14 09:25:48.972 2 DEBUG nova.storage.rbd_utils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image f41def60-a7be-4154-86bc-ef63a639ee94_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:25:48 np0005486808 nova_compute[259627]: 2025-10-14 09:25:48.977 2 DEBUG oslo_concurrency.processutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:25:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:25:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/498982399' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:25:49 np0005486808 nova_compute[259627]: 2025-10-14 09:25:49.444 2 DEBUG oslo_concurrency.processutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:25:49 np0005486808 nova_compute[259627]: 2025-10-14 09:25:49.446 2 DEBUG nova.virt.libvirt.vif [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:25:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2135752872',display_name='tempest-TestNetworkBasicOps-server-2135752872',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2135752872',id=125,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPnKWI1fcgm2J2USW/ocYjEbdNv3UFcXFKMO5si6IlBqcZWb9CUblu//WPv2zFMmUex5tiH7jg81h5bD0kUs5doUIUp4qv9iNUQKKi7Q5u8sjjVGsY4n55Yf/sl7IQnjgw==',key_name='tempest-TestNetworkBasicOps-1922317585',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-h0pwqz7e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:25:45Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=f41def60-a7be-4154-86bc-ef63a639ee94,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52d803a0-5139-4197-a575-2530583dda13", "address": "fa:16:3e:6a:56:7c", "network": {"id": "0e4b4c3f-9218-4fba-8f93-74ac472b0db0", "bridge": "br-int", "label": "tempest-network-smoke--2018979810", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], 
"version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52d803a0-51", "ovs_interfaceid": "52d803a0-5139-4197-a575-2530583dda13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 05:25:49 np0005486808 nova_compute[259627]: 2025-10-14 09:25:49.447 2 DEBUG nova.network.os_vif_util [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "52d803a0-5139-4197-a575-2530583dda13", "address": "fa:16:3e:6a:56:7c", "network": {"id": "0e4b4c3f-9218-4fba-8f93-74ac472b0db0", "bridge": "br-int", "label": "tempest-network-smoke--2018979810", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52d803a0-51", "ovs_interfaceid": "52d803a0-5139-4197-a575-2530583dda13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 05:25:49 np0005486808 nova_compute[259627]: 2025-10-14 09:25:49.448 2 DEBUG nova.network.os_vif_util [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:56:7c,bridge_name='br-int',has_traffic_filtering=True,id=52d803a0-5139-4197-a575-2530583dda13,network=Network(0e4b4c3f-9218-4fba-8f93-74ac472b0db0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52d803a0-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 05:25:49 np0005486808 nova_compute[259627]: 2025-10-14 09:25:49.450 2 DEBUG nova.objects.instance [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'pci_devices' on Instance uuid f41def60-a7be-4154-86bc-ef63a639ee94 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:25:49 np0005486808 nova_compute[259627]: 2025-10-14 09:25:49.473 2 DEBUG nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:25:49 np0005486808 nova_compute[259627]:  <uuid>f41def60-a7be-4154-86bc-ef63a639ee94</uuid>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:  <name>instance-0000007d</name>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:25:49 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:      <nova:name>tempest-TestNetworkBasicOps-server-2135752872</nova:name>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:25:48</nova:creationTime>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:25:49 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:        <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:        <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:        <nova:port uuid="52d803a0-5139-4197-a575-2530583dda13">
Oct 14 05:25:49 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:      <entry name="serial">f41def60-a7be-4154-86bc-ef63a639ee94</entry>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:      <entry name="uuid">f41def60-a7be-4154-86bc-ef63a639ee94</entry>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:25:49 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/f41def60-a7be-4154-86bc-ef63a639ee94_disk">
Oct 14 05:25:49 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:25:49 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:25:49 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/f41def60-a7be-4154-86bc-ef63a639ee94_disk.config">
Oct 14 05:25:49 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:25:49 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:25:49 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:6a:56:7c"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:      <target dev="tap52d803a0-51"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:25:49 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/f41def60-a7be-4154-86bc-ef63a639ee94/console.log" append="off"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:25:49 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:25:49 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:25:49 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:25:49 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:25:49 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 05:25:49 np0005486808 nova_compute[259627]: 2025-10-14 09:25:49.475 2 DEBUG nova.compute.manager [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Preparing to wait for external event network-vif-plugged-52d803a0-5139-4197-a575-2530583dda13 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 05:25:49 np0005486808 nova_compute[259627]: 2025-10-14 09:25:49.477 2 DEBUG oslo_concurrency.lockutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "f41def60-a7be-4154-86bc-ef63a639ee94-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:25:49 np0005486808 nova_compute[259627]: 2025-10-14 09:25:49.478 2 DEBUG oslo_concurrency.lockutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "f41def60-a7be-4154-86bc-ef63a639ee94-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:25:49 np0005486808 nova_compute[259627]: 2025-10-14 09:25:49.478 2 DEBUG oslo_concurrency.lockutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "f41def60-a7be-4154-86bc-ef63a639ee94-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:25:49 np0005486808 nova_compute[259627]: 2025-10-14 09:25:49.479 2 DEBUG nova.virt.libvirt.vif [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:25:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2135752872',display_name='tempest-TestNetworkBasicOps-server-2135752872',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2135752872',id=125,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPnKWI1fcgm2J2USW/ocYjEbdNv3UFcXFKMO5si6IlBqcZWb9CUblu//WPv2zFMmUex5tiH7jg81h5bD0kUs5doUIUp4qv9iNUQKKi7Q5u8sjjVGsY4n55Yf/sl7IQnjgw==',key_name='tempest-TestNetworkBasicOps-1922317585',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-h0pwqz7e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:25:45Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=f41def60-a7be-4154-86bc-ef63a639ee94,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52d803a0-5139-4197-a575-2530583dda13", "address": "fa:16:3e:6a:56:7c", "network": {"id": "0e4b4c3f-9218-4fba-8f93-74ac472b0db0", "bridge": "br-int", "label": "tempest-network-smoke--2018979810", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52d803a0-51", "ovs_interfaceid": "52d803a0-5139-4197-a575-2530583dda13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 05:25:49 np0005486808 nova_compute[259627]: 2025-10-14 09:25:49.480 2 DEBUG nova.network.os_vif_util [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "52d803a0-5139-4197-a575-2530583dda13", "address": "fa:16:3e:6a:56:7c", "network": {"id": "0e4b4c3f-9218-4fba-8f93-74ac472b0db0", "bridge": "br-int", "label": "tempest-network-smoke--2018979810", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52d803a0-51", "ovs_interfaceid": "52d803a0-5139-4197-a575-2530583dda13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 05:25:49 np0005486808 nova_compute[259627]: 2025-10-14 09:25:49.481 2 DEBUG nova.network.os_vif_util [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:56:7c,bridge_name='br-int',has_traffic_filtering=True,id=52d803a0-5139-4197-a575-2530583dda13,network=Network(0e4b4c3f-9218-4fba-8f93-74ac472b0db0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52d803a0-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 05:25:49 np0005486808 nova_compute[259627]: 2025-10-14 09:25:49.482 2 DEBUG os_vif [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:56:7c,bridge_name='br-int',has_traffic_filtering=True,id=52d803a0-5139-4197-a575-2530583dda13,network=Network(0e4b4c3f-9218-4fba-8f93-74ac472b0db0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52d803a0-51') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 05:25:49 np0005486808 nova_compute[259627]: 2025-10-14 09:25:49.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:25:49 np0005486808 nova_compute[259627]: 2025-10-14 09:25:49.484 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:25:49 np0005486808 nova_compute[259627]: 2025-10-14 09:25:49.484 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 05:25:49 np0005486808 nova_compute[259627]: 2025-10-14 09:25:49.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:25:49 np0005486808 nova_compute[259627]: 2025-10-14 09:25:49.490 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52d803a0-51, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:25:49 np0005486808 nova_compute[259627]: 2025-10-14 09:25:49.491 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap52d803a0-51, col_values=(('external_ids', {'iface-id': '52d803a0-5139-4197-a575-2530583dda13', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6a:56:7c', 'vm-uuid': 'f41def60-a7be-4154-86bc-ef63a639ee94'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:25:49 np0005486808 nova_compute[259627]: 2025-10-14 09:25:49.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:25:49 np0005486808 NetworkManager[44885]: <info>  [1760433949.4955] manager: (tap52d803a0-51): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/542)
Oct 14 05:25:49 np0005486808 nova_compute[259627]: 2025-10-14 09:25:49.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 05:25:49 np0005486808 nova_compute[259627]: 2025-10-14 09:25:49.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:25:49 np0005486808 nova_compute[259627]: 2025-10-14 09:25:49.505 2 INFO os_vif [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:56:7c,bridge_name='br-int',has_traffic_filtering=True,id=52d803a0-5139-4197-a575-2530583dda13,network=Network(0e4b4c3f-9218-4fba-8f93-74ac472b0db0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52d803a0-51')
Oct 14 05:25:49 np0005486808 nova_compute[259627]: 2025-10-14 09:25:49.561 2 DEBUG nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 05:25:49 np0005486808 nova_compute[259627]: 2025-10-14 09:25:49.562 2 DEBUG nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 05:25:49 np0005486808 nova_compute[259627]: 2025-10-14 09:25:49.562 2 DEBUG nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No VIF found with MAC fa:16:3e:6a:56:7c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 05:25:49 np0005486808 nova_compute[259627]: 2025-10-14 09:25:49.562 2 INFO nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Using config drive
Oct 14 05:25:49 np0005486808 nova_compute[259627]: 2025-10-14 09:25:49.587 2 DEBUG nova.storage.rbd_utils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image f41def60-a7be-4154-86bc-ef63a639ee94_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:25:49 np0005486808 nova_compute[259627]: 2025-10-14 09:25:49.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:25:49 np0005486808 nova_compute[259627]: 2025-10-14 09:25:49.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:25:50 np0005486808 nova_compute[259627]: 2025-10-14 09:25:50.081 2 INFO nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Creating config drive at /var/lib/nova/instances/f41def60-a7be-4154-86bc-ef63a639ee94/disk.config
Oct 14 05:25:50 np0005486808 nova_compute[259627]: 2025-10-14 09:25:50.087 2 DEBUG oslo_concurrency.processutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f41def60-a7be-4154-86bc-ef63a639ee94/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmv4vdjgv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:25:50 np0005486808 nova_compute[259627]: 2025-10-14 09:25:50.260 2 DEBUG oslo_concurrency.processutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f41def60-a7be-4154-86bc-ef63a639ee94/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmv4vdjgv" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:25:50 np0005486808 nova_compute[259627]: 2025-10-14 09:25:50.297 2 DEBUG nova.storage.rbd_utils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image f41def60-a7be-4154-86bc-ef63a639ee94_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:25:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2181: 305 pgs: 305 active+clean; 52 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 6.6 KiB/s rd, 360 KiB/s wr, 11 op/s
Oct 14 05:25:50 np0005486808 nova_compute[259627]: 2025-10-14 09:25:50.301 2 DEBUG oslo_concurrency.processutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f41def60-a7be-4154-86bc-ef63a639ee94/disk.config f41def60-a7be-4154-86bc-ef63a639ee94_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:25:50 np0005486808 nova_compute[259627]: 2025-10-14 09:25:50.356 2 DEBUG nova.network.neutron [req-be37a6ba-61ac-49ce-ba4f-6b7717a6015a req-20d321b7-0560-4318-91f3-2f0653c84559 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Updated VIF entry in instance network info cache for port 52d803a0-5139-4197-a575-2530583dda13. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 05:25:50 np0005486808 nova_compute[259627]: 2025-10-14 09:25:50.358 2 DEBUG nova.network.neutron [req-be37a6ba-61ac-49ce-ba4f-6b7717a6015a req-20d321b7-0560-4318-91f3-2f0653c84559 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Updating instance_info_cache with network_info: [{"id": "52d803a0-5139-4197-a575-2530583dda13", "address": "fa:16:3e:6a:56:7c", "network": {"id": "0e4b4c3f-9218-4fba-8f93-74ac472b0db0", "bridge": "br-int", "label": "tempest-network-smoke--2018979810", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52d803a0-51", "ovs_interfaceid": "52d803a0-5139-4197-a575-2530583dda13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 05:25:50 np0005486808 nova_compute[259627]: 2025-10-14 09:25:50.378 2 DEBUG oslo_concurrency.lockutils [req-be37a6ba-61ac-49ce-ba4f-6b7717a6015a req-20d321b7-0560-4318-91f3-2f0653c84559 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-f41def60-a7be-4154-86bc-ef63a639ee94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 05:25:50 np0005486808 nova_compute[259627]: 2025-10-14 09:25:50.508 2 DEBUG oslo_concurrency.processutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f41def60-a7be-4154-86bc-ef63a639ee94/disk.config f41def60-a7be-4154-86bc-ef63a639ee94_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.207s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:25:50 np0005486808 nova_compute[259627]: 2025-10-14 09:25:50.510 2 INFO nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Deleting local config drive /var/lib/nova/instances/f41def60-a7be-4154-86bc-ef63a639ee94/disk.config because it was imported into RBD.
Oct 14 05:25:50 np0005486808 kernel: tap52d803a0-51: entered promiscuous mode
Oct 14 05:25:50 np0005486808 nova_compute[259627]: 2025-10-14 09:25:50.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:25:50 np0005486808 NetworkManager[44885]: <info>  [1760433950.5813] manager: (tap52d803a0-51): new Tun device (/org/freedesktop/NetworkManager/Devices/543)
Oct 14 05:25:50 np0005486808 ovn_controller[152662]: 2025-10-14T09:25:50Z|01336|binding|INFO|Claiming lport 52d803a0-5139-4197-a575-2530583dda13 for this chassis.
Oct 14 05:25:50 np0005486808 ovn_controller[152662]: 2025-10-14T09:25:50Z|01337|binding|INFO|52d803a0-5139-4197-a575-2530583dda13: Claiming fa:16:3e:6a:56:7c 10.100.0.9
Oct 14 05:25:50 np0005486808 nova_compute[259627]: 2025-10-14 09:25:50.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:25:50 np0005486808 nova_compute[259627]: 2025-10-14 09:25:50.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:25:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:50.597 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:56:7c 10.100.0.9'], port_security=['fa:16:3e:6a:56:7c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'f41def60-a7be-4154-86bc-ef63a639ee94', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e4b4c3f-9218-4fba-8f93-74ac472b0db0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '2', 'neutron:security_group_ids': '68240394-e812-45d0-9e91-6623d4ac03bc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c79ac29-f3b4-494c-ada5-2b93955c4fe1, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=52d803a0-5139-4197-a575-2530583dda13) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 05:25:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:50.599 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 52d803a0-5139-4197-a575-2530583dda13 in datapath 0e4b4c3f-9218-4fba-8f93-74ac472b0db0 bound to our chassis
Oct 14 05:25:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:50.600 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0e4b4c3f-9218-4fba-8f93-74ac472b0db0
Oct 14 05:25:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:50.617 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2b689237-fcba-402b-aa9f-cecdb73134fc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:25:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:50.618 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0e4b4c3f-91 in ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 14 05:25:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:50.620 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0e4b4c3f-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 14 05:25:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:50.620 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e6c0ee83-141f-4938-b054-11a68305d56d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:25:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:50.622 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[790dfb20-a574-4f67-811a-d6b7e14e7301]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:25:50 np0005486808 systemd-machined[214636]: New machine qemu-158-instance-0000007d.
Oct 14 05:25:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:50.638 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[da29f8af-76b5-49a1-9f3a-fcfc383ce0fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:25:50 np0005486808 systemd[1]: Started Virtual Machine qemu-158-instance-0000007d.
Oct 14 05:25:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:50.672 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1e09cbd3-0e28-494e-aab9-6e18fd78cb9a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:25:50 np0005486808 systemd-udevd[388518]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:25:50 np0005486808 nova_compute[259627]: 2025-10-14 09:25:50.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:25:50 np0005486808 ovn_controller[152662]: 2025-10-14T09:25:50Z|01338|binding|INFO|Setting lport 52d803a0-5139-4197-a575-2530583dda13 ovn-installed in OVS
Oct 14 05:25:50 np0005486808 ovn_controller[152662]: 2025-10-14T09:25:50Z|01339|binding|INFO|Setting lport 52d803a0-5139-4197-a575-2530583dda13 up in Southbound
Oct 14 05:25:50 np0005486808 nova_compute[259627]: 2025-10-14 09:25:50.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:25:50 np0005486808 NetworkManager[44885]: <info>  [1760433950.6993] device (tap52d803a0-51): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:25:50 np0005486808 NetworkManager[44885]: <info>  [1760433950.7017] device (tap52d803a0-51): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:25:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:50.720 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[39ac6064-7053-41d4-8a63-c663f9c14e2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:25:50 np0005486808 systemd-udevd[388521]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:25:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:50.729 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dc832ce7-7d3d-4dbf-95b0-21c0ccab1813]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:25:50 np0005486808 NetworkManager[44885]: <info>  [1760433950.7307] manager: (tap0e4b4c3f-90): new Veth device (/org/freedesktop/NetworkManager/Devices/544)
Oct 14 05:25:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:50.772 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[15bfab49-5f37-4c1a-858e-a52d467b9336]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:25:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:50.776 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[76b4a48d-116a-44be-ba37-7cb2e9ef5b00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:25:50 np0005486808 NetworkManager[44885]: <info>  [1760433950.8066] device (tap0e4b4c3f-90): carrier: link connected
Oct 14 05:25:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:50.811 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[45e7ea4d-c472-4ff8-b277-f97d2266e0f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:25:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:50.829 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8e5ba426-de1c-47bf-962c-fb1cdab11bcb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0e4b4c3f-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:36:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 383], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 772956, 'reachable_time': 39747, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 388548, 'error': None, 'target': 'ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:25:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:50.846 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7851ea3c-9c94-4644-8955-a030d84cc46a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1a:36ba'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 772956, 'tstamp': 772956}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 388549, 'error': None, 'target': 'ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:25:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:50.863 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7f059c03-bc38-4e21-9b4e-a74d2eb54f8e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0e4b4c3f-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:36:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 383], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 772956, 'reachable_time': 39747, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 388550, 'error': None, 'target': 'ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:25:50 np0005486808 nova_compute[259627]: 2025-10-14 09:25:50.887 2 DEBUG nova.compute.manager [req-dd282dc2-52f4-4354-9c23-62ea8352ed98 req-0ef8f7e7-c0fc-4783-b8d3-7bd18dec8ae8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Received event network-vif-plugged-52d803a0-5139-4197-a575-2530583dda13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:25:50 np0005486808 nova_compute[259627]: 2025-10-14 09:25:50.888 2 DEBUG oslo_concurrency.lockutils [req-dd282dc2-52f4-4354-9c23-62ea8352ed98 req-0ef8f7e7-c0fc-4783-b8d3-7bd18dec8ae8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f41def60-a7be-4154-86bc-ef63a639ee94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:25:50 np0005486808 nova_compute[259627]: 2025-10-14 09:25:50.888 2 DEBUG oslo_concurrency.lockutils [req-dd282dc2-52f4-4354-9c23-62ea8352ed98 req-0ef8f7e7-c0fc-4783-b8d3-7bd18dec8ae8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f41def60-a7be-4154-86bc-ef63a639ee94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:25:50 np0005486808 nova_compute[259627]: 2025-10-14 09:25:50.888 2 DEBUG oslo_concurrency.lockutils [req-dd282dc2-52f4-4354-9c23-62ea8352ed98 req-0ef8f7e7-c0fc-4783-b8d3-7bd18dec8ae8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f41def60-a7be-4154-86bc-ef63a639ee94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:25:50 np0005486808 nova_compute[259627]: 2025-10-14 09:25:50.888 2 DEBUG nova.compute.manager [req-dd282dc2-52f4-4354-9c23-62ea8352ed98 req-0ef8f7e7-c0fc-4783-b8d3-7bd18dec8ae8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Processing event network-vif-plugged-52d803a0-5139-4197-a575-2530583dda13 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 05:25:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:50.909 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a675b0a5-c5af-4e98-bb28-442104543c68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:25:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:50.995 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b5e500a0-8270-45b4-8e0f-28ef06ae6268]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:25:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:50.999 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0e4b4c3f-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:25:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:50.999 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 05:25:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:51.001 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0e4b4c3f-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:25:51 np0005486808 nova_compute[259627]: 2025-10-14 09:25:51.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:25:51 np0005486808 kernel: tap0e4b4c3f-90: entered promiscuous mode
Oct 14 05:25:51 np0005486808 NetworkManager[44885]: <info>  [1760433951.0049] manager: (tap0e4b4c3f-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/545)
Oct 14 05:25:51 np0005486808 nova_compute[259627]: 2025-10-14 09:25:51.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:51.013 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0e4b4c3f-90, col_values=(('external_ids', {'iface-id': 'fbc0ee84-b007-472f-ae27-854b1ba7e94b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:25:51 np0005486808 nova_compute[259627]: 2025-10-14 09:25:51.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:51 np0005486808 ovn_controller[152662]: 2025-10-14T09:25:51Z|01340|binding|INFO|Releasing lport fbc0ee84-b007-472f-ae27-854b1ba7e94b from this chassis (sb_readonly=0)
Oct 14 05:25:51 np0005486808 nova_compute[259627]: 2025-10-14 09:25:51.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:51.049 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0e4b4c3f-9218-4fba-8f93-74ac472b0db0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0e4b4c3f-9218-4fba-8f93-74ac472b0db0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:25:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:51.050 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cb49469d-44d8-4679-b3c9-8272456dc07c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:25:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:51.051 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:25:51 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:25:51 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:25:51 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-0e4b4c3f-9218-4fba-8f93-74ac472b0db0
Oct 14 05:25:51 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:25:51 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:25:51 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:25:51 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/0e4b4c3f-9218-4fba-8f93-74ac472b0db0.pid.haproxy
Oct 14 05:25:51 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:25:51 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:25:51 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:25:51 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:25:51 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:25:51 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:25:51 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:25:51 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:25:51 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:25:51 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:25:51 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:25:51 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:25:51 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:25:51 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:25:51 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:25:51 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:25:51 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:25:51 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:25:51 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:25:51 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:25:51 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 0e4b4c3f-9218-4fba-8f93-74ac472b0db0
Oct 14 05:25:51 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:25:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:25:51.053 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0', 'env', 'PROCESS_TAG=haproxy-0e4b4c3f-9218-4fba-8f93-74ac472b0db0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0e4b4c3f-9218-4fba-8f93-74ac472b0db0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:25:51 np0005486808 podman[388625]: 2025-10-14 09:25:51.551736879 +0000 UTC m=+0.070291498 container create 0e3f49f3540434f79328dcdc2603eefa7e399f0c51f578824585822f5d241dfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 14 05:25:51 np0005486808 podman[388625]: 2025-10-14 09:25:51.516390336 +0000 UTC m=+0.034945025 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:25:51 np0005486808 systemd[1]: Started libpod-conmon-0e3f49f3540434f79328dcdc2603eefa7e399f0c51f578824585822f5d241dfd.scope.
Oct 14 05:25:51 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:25:51 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e4e9f0f032f127958bf9c519d14d596a1ab0d965830f5dc4cf0b7403580501a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:25:51 np0005486808 podman[388625]: 2025-10-14 09:25:51.679318472 +0000 UTC m=+0.197873151 container init 0e3f49f3540434f79328dcdc2603eefa7e399f0c51f578824585822f5d241dfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:25:51 np0005486808 podman[388625]: 2025-10-14 09:25:51.689453782 +0000 UTC m=+0.208008371 container start 0e3f49f3540434f79328dcdc2603eefa7e399f0c51f578824585822f5d241dfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 14 05:25:51 np0005486808 neutron-haproxy-ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0[388640]: [NOTICE]   (388644) : New worker (388646) forked
Oct 14 05:25:51 np0005486808 neutron-haproxy-ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0[388640]: [NOTICE]   (388644) : Loading success.
Oct 14 05:25:51 np0005486808 nova_compute[259627]: 2025-10-14 09:25:51.717 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433951.7172818, f41def60-a7be-4154-86bc-ef63a639ee94 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:25:51 np0005486808 nova_compute[259627]: 2025-10-14 09:25:51.718 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] VM Started (Lifecycle Event)#033[00m
Oct 14 05:25:51 np0005486808 nova_compute[259627]: 2025-10-14 09:25:51.721 2 DEBUG nova.compute.manager [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:25:51 np0005486808 nova_compute[259627]: 2025-10-14 09:25:51.725 2 DEBUG nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:25:51 np0005486808 nova_compute[259627]: 2025-10-14 09:25:51.729 2 INFO nova.virt.libvirt.driver [-] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Instance spawned successfully.#033[00m
Oct 14 05:25:51 np0005486808 nova_compute[259627]: 2025-10-14 09:25:51.730 2 DEBUG nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:25:51 np0005486808 nova_compute[259627]: 2025-10-14 09:25:51.749 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:25:51 np0005486808 nova_compute[259627]: 2025-10-14 09:25:51.757 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:25:51 np0005486808 nova_compute[259627]: 2025-10-14 09:25:51.763 2 DEBUG nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:25:51 np0005486808 nova_compute[259627]: 2025-10-14 09:25:51.764 2 DEBUG nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:25:51 np0005486808 nova_compute[259627]: 2025-10-14 09:25:51.764 2 DEBUG nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:25:51 np0005486808 nova_compute[259627]: 2025-10-14 09:25:51.765 2 DEBUG nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:25:51 np0005486808 nova_compute[259627]: 2025-10-14 09:25:51.766 2 DEBUG nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:25:51 np0005486808 nova_compute[259627]: 2025-10-14 09:25:51.767 2 DEBUG nova.virt.libvirt.driver [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:25:51 np0005486808 nova_compute[259627]: 2025-10-14 09:25:51.777 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:25:51 np0005486808 nova_compute[259627]: 2025-10-14 09:25:51.778 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433951.717514, f41def60-a7be-4154-86bc-ef63a639ee94 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:25:51 np0005486808 nova_compute[259627]: 2025-10-14 09:25:51.778 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:25:51 np0005486808 nova_compute[259627]: 2025-10-14 09:25:51.804 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:25:51 np0005486808 nova_compute[259627]: 2025-10-14 09:25:51.809 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433951.7244825, f41def60-a7be-4154-86bc-ef63a639ee94 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:25:51 np0005486808 nova_compute[259627]: 2025-10-14 09:25:51.809 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:25:51 np0005486808 nova_compute[259627]: 2025-10-14 09:25:51.963 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:25:51 np0005486808 nova_compute[259627]: 2025-10-14 09:25:51.969 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:25:51 np0005486808 nova_compute[259627]: 2025-10-14 09:25:51.976 2 INFO nova.compute.manager [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Took 6.39 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:25:51 np0005486808 nova_compute[259627]: 2025-10-14 09:25:51.976 2 DEBUG nova.compute.manager [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:25:51 np0005486808 nova_compute[259627]: 2025-10-14 09:25:51.991 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:25:52 np0005486808 nova_compute[259627]: 2025-10-14 09:25:52.046 2 INFO nova.compute.manager [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Took 7.33 seconds to build instance.#033[00m
Oct 14 05:25:52 np0005486808 nova_compute[259627]: 2025-10-14 09:25:52.061 2 DEBUG oslo_concurrency.lockutils [None req-8abe4703-5874-4f53-a35e-b89afbf7bfb0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "f41def60-a7be-4154-86bc-ef63a639ee94" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.440s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:25:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2182: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:25:52 np0005486808 nova_compute[259627]: 2025-10-14 09:25:52.974 2 DEBUG nova.compute.manager [req-b28c4caa-d4c3-4b69-a567-0b8da0f21a46 req-97b1389f-b473-4865-9876-ff08b1304a0a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Received event network-vif-plugged-52d803a0-5139-4197-a575-2530583dda13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:25:52 np0005486808 nova_compute[259627]: 2025-10-14 09:25:52.974 2 DEBUG oslo_concurrency.lockutils [req-b28c4caa-d4c3-4b69-a567-0b8da0f21a46 req-97b1389f-b473-4865-9876-ff08b1304a0a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f41def60-a7be-4154-86bc-ef63a639ee94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:25:52 np0005486808 nova_compute[259627]: 2025-10-14 09:25:52.975 2 DEBUG oslo_concurrency.lockutils [req-b28c4caa-d4c3-4b69-a567-0b8da0f21a46 req-97b1389f-b473-4865-9876-ff08b1304a0a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f41def60-a7be-4154-86bc-ef63a639ee94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:25:52 np0005486808 nova_compute[259627]: 2025-10-14 09:25:52.975 2 DEBUG oslo_concurrency.lockutils [req-b28c4caa-d4c3-4b69-a567-0b8da0f21a46 req-97b1389f-b473-4865-9876-ff08b1304a0a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f41def60-a7be-4154-86bc-ef63a639ee94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:25:52 np0005486808 nova_compute[259627]: 2025-10-14 09:25:52.975 2 DEBUG nova.compute.manager [req-b28c4caa-d4c3-4b69-a567-0b8da0f21a46 req-97b1389f-b473-4865-9876-ff08b1304a0a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] No waiting events found dispatching network-vif-plugged-52d803a0-5139-4197-a575-2530583dda13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:25:52 np0005486808 nova_compute[259627]: 2025-10-14 09:25:52.975 2 WARNING nova.compute.manager [req-b28c4caa-d4c3-4b69-a567-0b8da0f21a46 req-97b1389f-b473-4865-9876-ff08b1304a0a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Received unexpected event network-vif-plugged-52d803a0-5139-4197-a575-2530583dda13 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:25:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:25:53 np0005486808 nova_compute[259627]: 2025-10-14 09:25:53.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:25:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2183: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:25:54 np0005486808 nova_compute[259627]: 2025-10-14 09:25:54.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:54 np0005486808 nova_compute[259627]: 2025-10-14 09:25:54.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:55 np0005486808 NetworkManager[44885]: <info>  [1760433955.0954] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/546)
Oct 14 05:25:55 np0005486808 NetworkManager[44885]: <info>  [1760433955.0965] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/547)
Oct 14 05:25:55 np0005486808 nova_compute[259627]: 2025-10-14 09:25:55.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:55 np0005486808 ovn_controller[152662]: 2025-10-14T09:25:55Z|01341|binding|INFO|Releasing lport fbc0ee84-b007-472f-ae27-854b1ba7e94b from this chassis (sb_readonly=0)
Oct 14 05:25:55 np0005486808 nova_compute[259627]: 2025-10-14 09:25:55.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:55 np0005486808 nova_compute[259627]: 2025-10-14 09:25:55.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:55 np0005486808 nova_compute[259627]: 2025-10-14 09:25:55.551 2 DEBUG nova.compute.manager [req-5a1cfab9-ce22-4392-b072-bf42b1a2bc30 req-9461c073-5f7d-4e2e-8ddf-b489c913ed46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Received event network-changed-52d803a0-5139-4197-a575-2530583dda13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:25:55 np0005486808 nova_compute[259627]: 2025-10-14 09:25:55.551 2 DEBUG nova.compute.manager [req-5a1cfab9-ce22-4392-b072-bf42b1a2bc30 req-9461c073-5f7d-4e2e-8ddf-b489c913ed46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Refreshing instance network info cache due to event network-changed-52d803a0-5139-4197-a575-2530583dda13. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:25:55 np0005486808 nova_compute[259627]: 2025-10-14 09:25:55.552 2 DEBUG oslo_concurrency.lockutils [req-5a1cfab9-ce22-4392-b072-bf42b1a2bc30 req-9461c073-5f7d-4e2e-8ddf-b489c913ed46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-f41def60-a7be-4154-86bc-ef63a639ee94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:25:55 np0005486808 nova_compute[259627]: 2025-10-14 09:25:55.552 2 DEBUG oslo_concurrency.lockutils [req-5a1cfab9-ce22-4392-b072-bf42b1a2bc30 req-9461c073-5f7d-4e2e-8ddf-b489c913ed46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-f41def60-a7be-4154-86bc-ef63a639ee94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:25:55 np0005486808 nova_compute[259627]: 2025-10-14 09:25:55.552 2 DEBUG nova.network.neutron [req-5a1cfab9-ce22-4392-b072-bf42b1a2bc30 req-9461c073-5f7d-4e2e-8ddf-b489c913ed46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Refreshing network info cache for port 52d803a0-5139-4197-a575-2530583dda13 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:25:56 np0005486808 nova_compute[259627]: 2025-10-14 09:25:56.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2184: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 14 05:25:56 np0005486808 nova_compute[259627]: 2025-10-14 09:25:56.957 2 DEBUG nova.network.neutron [req-5a1cfab9-ce22-4392-b072-bf42b1a2bc30 req-9461c073-5f7d-4e2e-8ddf-b489c913ed46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Updated VIF entry in instance network info cache for port 52d803a0-5139-4197-a575-2530583dda13. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:25:56 np0005486808 nova_compute[259627]: 2025-10-14 09:25:56.958 2 DEBUG nova.network.neutron [req-5a1cfab9-ce22-4392-b072-bf42b1a2bc30 req-9461c073-5f7d-4e2e-8ddf-b489c913ed46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Updating instance_info_cache with network_info: [{"id": "52d803a0-5139-4197-a575-2530583dda13", "address": "fa:16:3e:6a:56:7c", "network": {"id": "0e4b4c3f-9218-4fba-8f93-74ac472b0db0", "bridge": "br-int", "label": "tempest-network-smoke--2018979810", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52d803a0-51", "ovs_interfaceid": "52d803a0-5139-4197-a575-2530583dda13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:25:56 np0005486808 nova_compute[259627]: 2025-10-14 09:25:56.987 2 DEBUG oslo_concurrency.lockutils [req-5a1cfab9-ce22-4392-b072-bf42b1a2bc30 req-9461c073-5f7d-4e2e-8ddf-b489c913ed46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-f41def60-a7be-4154-86bc-ef63a639ee94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:25:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:25:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2185: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 14 05:25:59 np0005486808 nova_compute[259627]: 2025-10-14 09:25:59.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:25:59 np0005486808 nova_compute[259627]: 2025-10-14 09:25:59.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2186: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 14 05:26:01 np0005486808 nova_compute[259627]: 2025-10-14 09:26:01.995 2 DEBUG oslo_concurrency.lockutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "4310595f-2280-438c-97ca-f2de57527501" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:26:01 np0005486808 nova_compute[259627]: 2025-10-14 09:26:01.996 2 DEBUG oslo_concurrency.lockutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4310595f-2280-438c-97ca-f2de57527501" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:26:02 np0005486808 nova_compute[259627]: 2025-10-14 09:26:02.019 2 DEBUG nova.compute.manager [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:26:02 np0005486808 nova_compute[259627]: 2025-10-14 09:26:02.122 2 DEBUG oslo_concurrency.lockutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:26:02 np0005486808 nova_compute[259627]: 2025-10-14 09:26:02.123 2 DEBUG oslo_concurrency.lockutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:26:02 np0005486808 nova_compute[259627]: 2025-10-14 09:26:02.134 2 DEBUG nova.virt.hardware [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:26:02 np0005486808 nova_compute[259627]: 2025-10-14 09:26:02.134 2 INFO nova.compute.claims [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:26:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2187: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 89 op/s
Oct 14 05:26:02 np0005486808 nova_compute[259627]: 2025-10-14 09:26:02.322 2 DEBUG oslo_concurrency.processutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:26:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:26:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:26:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:26:02 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2416403570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:26:02 np0005486808 nova_compute[259627]: 2025-10-14 09:26:02.791 2 DEBUG oslo_concurrency.processutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:26:02 np0005486808 nova_compute[259627]: 2025-10-14 09:26:02.795 2 DEBUG nova.compute.provider_tree [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:26:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:26:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:26:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:26:02 np0005486808 nova_compute[259627]: 2025-10-14 09:26:02.815 2 DEBUG nova.scheduler.client.report [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:26:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:26:02 np0005486808 nova_compute[259627]: 2025-10-14 09:26:02.838 2 DEBUG oslo_concurrency.lockutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:26:02 np0005486808 nova_compute[259627]: 2025-10-14 09:26:02.839 2 DEBUG nova.compute.manager [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:26:02 np0005486808 nova_compute[259627]: 2025-10-14 09:26:02.892 2 DEBUG nova.compute.manager [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:26:02 np0005486808 nova_compute[259627]: 2025-10-14 09:26:02.892 2 DEBUG nova.network.neutron [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:26:02 np0005486808 nova_compute[259627]: 2025-10-14 09:26:02.917 2 INFO nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:26:02 np0005486808 nova_compute[259627]: 2025-10-14 09:26:02.952 2 DEBUG nova.compute.manager [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:26:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:26:03 np0005486808 nova_compute[259627]: 2025-10-14 09:26:03.053 2 DEBUG nova.compute.manager [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:26:03 np0005486808 nova_compute[259627]: 2025-10-14 09:26:03.055 2 DEBUG nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:26:03 np0005486808 nova_compute[259627]: 2025-10-14 09:26:03.055 2 INFO nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Creating image(s)#033[00m
Oct 14 05:26:03 np0005486808 nova_compute[259627]: 2025-10-14 09:26:03.084 2 DEBUG nova.storage.rbd_utils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 4310595f-2280-438c-97ca-f2de57527501_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:26:03 np0005486808 nova_compute[259627]: 2025-10-14 09:26:03.114 2 DEBUG nova.storage.rbd_utils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 4310595f-2280-438c-97ca-f2de57527501_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:26:03 np0005486808 nova_compute[259627]: 2025-10-14 09:26:03.139 2 DEBUG nova.storage.rbd_utils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 4310595f-2280-438c-97ca-f2de57527501_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:26:03 np0005486808 nova_compute[259627]: 2025-10-14 09:26:03.142 2 DEBUG oslo_concurrency.processutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:26:03 np0005486808 nova_compute[259627]: 2025-10-14 09:26:03.234 2 DEBUG oslo_concurrency.processutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:26:03 np0005486808 nova_compute[259627]: 2025-10-14 09:26:03.235 2 DEBUG oslo_concurrency.lockutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:26:03 np0005486808 nova_compute[259627]: 2025-10-14 09:26:03.236 2 DEBUG oslo_concurrency.lockutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:26:03 np0005486808 nova_compute[259627]: 2025-10-14 09:26:03.236 2 DEBUG oslo_concurrency.lockutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:26:03 np0005486808 nova_compute[259627]: 2025-10-14 09:26:03.260 2 DEBUG nova.storage.rbd_utils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 4310595f-2280-438c-97ca-f2de57527501_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:26:03 np0005486808 nova_compute[259627]: 2025-10-14 09:26:03.264 2 DEBUG oslo_concurrency.processutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 4310595f-2280-438c-97ca-f2de57527501_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:26:03 np0005486808 ovn_controller[152662]: 2025-10-14T09:26:03Z|00145|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6a:56:7c 10.100.0.9
Oct 14 05:26:03 np0005486808 ovn_controller[152662]: 2025-10-14T09:26:03Z|00146|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6a:56:7c 10.100.0.9
Oct 14 05:26:03 np0005486808 nova_compute[259627]: 2025-10-14 09:26:03.517 2 DEBUG nova.policy [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '20f3546ab30e42b5b641f67780316750', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f754bf649a2404fa8dee732f5aab36e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:26:03 np0005486808 nova_compute[259627]: 2025-10-14 09:26:03.535 2 DEBUG oslo_concurrency.processutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 4310595f-2280-438c-97ca-f2de57527501_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.271s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:26:03 np0005486808 nova_compute[259627]: 2025-10-14 09:26:03.608 2 DEBUG nova.storage.rbd_utils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] resizing rbd image 4310595f-2280-438c-97ca-f2de57527501_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:26:03 np0005486808 nova_compute[259627]: 2025-10-14 09:26:03.719 2 DEBUG nova.objects.instance [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'migration_context' on Instance uuid 4310595f-2280-438c-97ca-f2de57527501 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:26:03 np0005486808 nova_compute[259627]: 2025-10-14 09:26:03.733 2 DEBUG nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:26:03 np0005486808 nova_compute[259627]: 2025-10-14 09:26:03.734 2 DEBUG nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Ensure instance console log exists: /var/lib/nova/instances/4310595f-2280-438c-97ca-f2de57527501/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:26:03 np0005486808 nova_compute[259627]: 2025-10-14 09:26:03.735 2 DEBUG oslo_concurrency.lockutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:26:03 np0005486808 nova_compute[259627]: 2025-10-14 09:26:03.735 2 DEBUG oslo_concurrency.lockutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:26:03 np0005486808 nova_compute[259627]: 2025-10-14 09:26:03.736 2 DEBUG oslo_concurrency.lockutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:26:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2188: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 05:26:04 np0005486808 nova_compute[259627]: 2025-10-14 09:26:04.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:04 np0005486808 nova_compute[259627]: 2025-10-14 09:26:04.827 2 DEBUG nova.network.neutron [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Successfully created port: ea25832f-13d3-41ec-874c-e622d24c912e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:26:04 np0005486808 nova_compute[259627]: 2025-10-14 09:26:04.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:26:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2415980028' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:26:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:26:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2415980028' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:26:05 np0005486808 nova_compute[259627]: 2025-10-14 09:26:05.920 2 DEBUG nova.network.neutron [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Successfully updated port: ea25832f-13d3-41ec-874c-e622d24c912e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:26:05 np0005486808 nova_compute[259627]: 2025-10-14 09:26:05.942 2 DEBUG oslo_concurrency.lockutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "refresh_cache-4310595f-2280-438c-97ca-f2de57527501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:26:05 np0005486808 nova_compute[259627]: 2025-10-14 09:26:05.942 2 DEBUG oslo_concurrency.lockutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquired lock "refresh_cache-4310595f-2280-438c-97ca-f2de57527501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:26:05 np0005486808 nova_compute[259627]: 2025-10-14 09:26:05.943 2 DEBUG nova.network.neutron [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:26:06 np0005486808 nova_compute[259627]: 2025-10-14 09:26:06.042 2 DEBUG nova.compute.manager [req-c652b128-6d11-4e7f-b086-c93c83489e23 req-6ccdb29b-6271-4a5a-b81e-5f389f8486ae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Received event network-changed-ea25832f-13d3-41ec-874c-e622d24c912e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:26:06 np0005486808 nova_compute[259627]: 2025-10-14 09:26:06.042 2 DEBUG nova.compute.manager [req-c652b128-6d11-4e7f-b086-c93c83489e23 req-6ccdb29b-6271-4a5a-b81e-5f389f8486ae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Refreshing instance network info cache due to event network-changed-ea25832f-13d3-41ec-874c-e622d24c912e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:26:06 np0005486808 nova_compute[259627]: 2025-10-14 09:26:06.043 2 DEBUG oslo_concurrency.lockutils [req-c652b128-6d11-4e7f-b086-c93c83489e23 req-6ccdb29b-6271-4a5a-b81e-5f389f8486ae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-4310595f-2280-438c-97ca-f2de57527501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:26:06 np0005486808 nova_compute[259627]: 2025-10-14 09:26:06.113 2 DEBUG nova.network.neutron [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:26:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2189: 305 pgs: 305 active+clean; 167 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Oct 14 05:26:06 np0005486808 podman[388846]: 2025-10-14 09:26:06.673823777 +0000 UTC m=+0.073966618 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 05:26:06 np0005486808 podman[388845]: 2025-10-14 09:26:06.697955064 +0000 UTC m=+0.099883959 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 14 05:26:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:07.040 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:26:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:07.041 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:26:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:07.042 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:26:07 np0005486808 nova_compute[259627]: 2025-10-14 09:26:07.660 2 DEBUG nova.network.neutron [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Updating instance_info_cache with network_info: [{"id": "ea25832f-13d3-41ec-874c-e622d24c912e", "address": "fa:16:3e:fe:45:3f", "network": {"id": "69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c", "bridge": "br-int", "label": "tempest-network-smoke--1422018789", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea25832f-13", "ovs_interfaceid": "ea25832f-13d3-41ec-874c-e622d24c912e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:26:07 np0005486808 nova_compute[259627]: 2025-10-14 09:26:07.752 2 DEBUG oslo_concurrency.lockutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Releasing lock "refresh_cache-4310595f-2280-438c-97ca-f2de57527501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:26:07 np0005486808 nova_compute[259627]: 2025-10-14 09:26:07.753 2 DEBUG nova.compute.manager [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Instance network_info: |[{"id": "ea25832f-13d3-41ec-874c-e622d24c912e", "address": "fa:16:3e:fe:45:3f", "network": {"id": "69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c", "bridge": "br-int", "label": "tempest-network-smoke--1422018789", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea25832f-13", "ovs_interfaceid": "ea25832f-13d3-41ec-874c-e622d24c912e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:26:07 np0005486808 nova_compute[259627]: 2025-10-14 09:26:07.754 2 DEBUG oslo_concurrency.lockutils [req-c652b128-6d11-4e7f-b086-c93c83489e23 req-6ccdb29b-6271-4a5a-b81e-5f389f8486ae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-4310595f-2280-438c-97ca-f2de57527501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:26:07 np0005486808 nova_compute[259627]: 2025-10-14 09:26:07.755 2 DEBUG nova.network.neutron [req-c652b128-6d11-4e7f-b086-c93c83489e23 req-6ccdb29b-6271-4a5a-b81e-5f389f8486ae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Refreshing network info cache for port ea25832f-13d3-41ec-874c-e622d24c912e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:26:07 np0005486808 nova_compute[259627]: 2025-10-14 09:26:07.759 2 DEBUG nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Start _get_guest_xml network_info=[{"id": "ea25832f-13d3-41ec-874c-e622d24c912e", "address": "fa:16:3e:fe:45:3f", "network": {"id": "69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c", "bridge": "br-int", "label": "tempest-network-smoke--1422018789", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea25832f-13", "ovs_interfaceid": "ea25832f-13d3-41ec-874c-e622d24c912e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:26:07 np0005486808 nova_compute[259627]: 2025-10-14 09:26:07.766 2 WARNING nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:26:07 np0005486808 nova_compute[259627]: 2025-10-14 09:26:07.774 2 DEBUG nova.virt.libvirt.host [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:26:07 np0005486808 nova_compute[259627]: 2025-10-14 09:26:07.775 2 DEBUG nova.virt.libvirt.host [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:26:07 np0005486808 nova_compute[259627]: 2025-10-14 09:26:07.779 2 DEBUG nova.virt.libvirt.host [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:26:07 np0005486808 nova_compute[259627]: 2025-10-14 09:26:07.780 2 DEBUG nova.virt.libvirt.host [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:26:07 np0005486808 nova_compute[259627]: 2025-10-14 09:26:07.781 2 DEBUG nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:26:07 np0005486808 nova_compute[259627]: 2025-10-14 09:26:07.781 2 DEBUG nova.virt.hardware [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:26:07 np0005486808 nova_compute[259627]: 2025-10-14 09:26:07.782 2 DEBUG nova.virt.hardware [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:26:07 np0005486808 nova_compute[259627]: 2025-10-14 09:26:07.783 2 DEBUG nova.virt.hardware [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:26:07 np0005486808 nova_compute[259627]: 2025-10-14 09:26:07.783 2 DEBUG nova.virt.hardware [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:26:07 np0005486808 nova_compute[259627]: 2025-10-14 09:26:07.784 2 DEBUG nova.virt.hardware [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:26:07 np0005486808 nova_compute[259627]: 2025-10-14 09:26:07.784 2 DEBUG nova.virt.hardware [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:26:07 np0005486808 nova_compute[259627]: 2025-10-14 09:26:07.785 2 DEBUG nova.virt.hardware [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:26:07 np0005486808 nova_compute[259627]: 2025-10-14 09:26:07.786 2 DEBUG nova.virt.hardware [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:26:07 np0005486808 nova_compute[259627]: 2025-10-14 09:26:07.786 2 DEBUG nova.virt.hardware [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:26:07 np0005486808 nova_compute[259627]: 2025-10-14 09:26:07.787 2 DEBUG nova.virt.hardware [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:26:07 np0005486808 nova_compute[259627]: 2025-10-14 09:26:07.787 2 DEBUG nova.virt.hardware [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:26:07 np0005486808 nova_compute[259627]: 2025-10-14 09:26:07.793 2 DEBUG oslo_concurrency.processutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:26:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:26:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:26:08 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/857147547' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:26:08 np0005486808 nova_compute[259627]: 2025-10-14 09:26:08.249 2 DEBUG oslo_concurrency.processutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:26:08 np0005486808 nova_compute[259627]: 2025-10-14 09:26:08.274 2 DEBUG nova.storage.rbd_utils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 4310595f-2280-438c-97ca-f2de57527501_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:26:08 np0005486808 nova_compute[259627]: 2025-10-14 09:26:08.278 2 DEBUG oslo_concurrency.processutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:26:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2190: 305 pgs: 305 active+clean; 167 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Oct 14 05:26:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:26:08 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1382903529' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:26:08 np0005486808 nova_compute[259627]: 2025-10-14 09:26:08.751 2 DEBUG oslo_concurrency.processutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:26:08 np0005486808 nova_compute[259627]: 2025-10-14 09:26:08.754 2 DEBUG nova.virt.libvirt.vif [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-136745986',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-136745986',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ac',id=126,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHc9phIuU5FXYInHFmneK7ofu0Hronr3GOHgS3ZKrK8UZEcxqPRrwvV2ktBWbk2vf9CswqByMiWPlH6Y1ffYCmRhb+LdZFzcPKCiYu31yXKGqBJ2r6m/arw2a5HgrQ1Icw==',key_name='tempest-TestSecurityGroupsBasicOps-69160241',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-pph0582m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:26:02Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=4310595f-2280-438c-97ca-f2de57527501,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ea25832f-13d3-41ec-874c-e622d24c912e", "address": "fa:16:3e:fe:45:3f", "network": {"id": "69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c", "bridge": "br-int", "label": "tempest-network-smoke--1422018789", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea25832f-13", "ovs_interfaceid": "ea25832f-13d3-41ec-874c-e622d24c912e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:26:08 np0005486808 nova_compute[259627]: 2025-10-14 09:26:08.755 2 DEBUG nova.network.os_vif_util [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "ea25832f-13d3-41ec-874c-e622d24c912e", "address": "fa:16:3e:fe:45:3f", "network": {"id": "69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c", "bridge": "br-int", "label": "tempest-network-smoke--1422018789", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea25832f-13", "ovs_interfaceid": "ea25832f-13d3-41ec-874c-e622d24c912e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:26:08 np0005486808 nova_compute[259627]: 2025-10-14 09:26:08.757 2 DEBUG nova.network.os_vif_util [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:45:3f,bridge_name='br-int',has_traffic_filtering=True,id=ea25832f-13d3-41ec-874c-e622d24c912e,network=Network(69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea25832f-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:26:08 np0005486808 nova_compute[259627]: 2025-10-14 09:26:08.759 2 DEBUG nova.objects.instance [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'pci_devices' on Instance uuid 4310595f-2280-438c-97ca-f2de57527501 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:26:08 np0005486808 nova_compute[259627]: 2025-10-14 09:26:08.778 2 DEBUG nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:26:08 np0005486808 nova_compute[259627]:  <uuid>4310595f-2280-438c-97ca-f2de57527501</uuid>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:  <name>instance-0000007e</name>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:26:08 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-136745986</nova:name>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:26:07</nova:creationTime>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:26:08 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:        <nova:user uuid="20f3546ab30e42b5b641f67780316750">tempest-TestSecurityGroupsBasicOps-1327646173-project-member</nova:user>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:        <nova:project uuid="5f754bf649a2404fa8dee732f5aab36e">tempest-TestSecurityGroupsBasicOps-1327646173</nova:project>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:        <nova:port uuid="ea25832f-13d3-41ec-874c-e622d24c912e">
Oct 14 05:26:08 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:      <entry name="serial">4310595f-2280-438c-97ca-f2de57527501</entry>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:      <entry name="uuid">4310595f-2280-438c-97ca-f2de57527501</entry>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:26:08 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/4310595f-2280-438c-97ca-f2de57527501_disk">
Oct 14 05:26:08 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:26:08 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:26:08 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/4310595f-2280-438c-97ca-f2de57527501_disk.config">
Oct 14 05:26:08 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:26:08 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:26:08 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:fe:45:3f"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:      <target dev="tapea25832f-13"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:26:08 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/4310595f-2280-438c-97ca-f2de57527501/console.log" append="off"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:26:08 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:26:08 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:26:08 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:26:08 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:26:08 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:26:08 np0005486808 nova_compute[259627]: 2025-10-14 09:26:08.779 2 DEBUG nova.compute.manager [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Preparing to wait for external event network-vif-plugged-ea25832f-13d3-41ec-874c-e622d24c912e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:26:08 np0005486808 nova_compute[259627]: 2025-10-14 09:26:08.780 2 DEBUG oslo_concurrency.lockutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "4310595f-2280-438c-97ca-f2de57527501-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:26:08 np0005486808 nova_compute[259627]: 2025-10-14 09:26:08.780 2 DEBUG oslo_concurrency.lockutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4310595f-2280-438c-97ca-f2de57527501-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:26:08 np0005486808 nova_compute[259627]: 2025-10-14 09:26:08.780 2 DEBUG oslo_concurrency.lockutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4310595f-2280-438c-97ca-f2de57527501-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:26:08 np0005486808 nova_compute[259627]: 2025-10-14 09:26:08.782 2 DEBUG nova.virt.libvirt.vif [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-136745986',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-136745986',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ac',id=126,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHc9phIuU5FXYInHFmneK7ofu0Hronr3GOHgS3ZKrK8UZEcxqPRrwvV2ktBWbk2vf9CswqByMiWPlH6Y1ffYCmRhb+LdZFzcPKCiYu31yXKGqBJ2r6m/arw2a5HgrQ1Icw==',key_name='tempest-TestSecurityGroupsBasicOps-69160241',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-pph0582m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:26:02Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=4310595f-2280-438c-97ca-f2de57527501,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ea25832f-13d3-41ec-874c-e622d24c912e", "address": "fa:16:3e:fe:45:3f", "network": {"id": "69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c", "bridge": "br-int", "label": "tempest-network-smoke--1422018789", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea25832f-13", "ovs_interfaceid": "ea25832f-13d3-41ec-874c-e622d24c912e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:26:08 np0005486808 nova_compute[259627]: 2025-10-14 09:26:08.782 2 DEBUG nova.network.os_vif_util [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "ea25832f-13d3-41ec-874c-e622d24c912e", "address": "fa:16:3e:fe:45:3f", "network": {"id": "69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c", "bridge": "br-int", "label": "tempest-network-smoke--1422018789", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea25832f-13", "ovs_interfaceid": "ea25832f-13d3-41ec-874c-e622d24c912e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:26:08 np0005486808 nova_compute[259627]: 2025-10-14 09:26:08.783 2 DEBUG nova.network.os_vif_util [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:45:3f,bridge_name='br-int',has_traffic_filtering=True,id=ea25832f-13d3-41ec-874c-e622d24c912e,network=Network(69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea25832f-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:26:08 np0005486808 nova_compute[259627]: 2025-10-14 09:26:08.784 2 DEBUG os_vif [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:45:3f,bridge_name='br-int',has_traffic_filtering=True,id=ea25832f-13d3-41ec-874c-e622d24c912e,network=Network(69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea25832f-13') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:26:08 np0005486808 nova_compute[259627]: 2025-10-14 09:26:08.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:08 np0005486808 nova_compute[259627]: 2025-10-14 09:26:08.786 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:26:08 np0005486808 nova_compute[259627]: 2025-10-14 09:26:08.787 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:26:08 np0005486808 nova_compute[259627]: 2025-10-14 09:26:08.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:08 np0005486808 nova_compute[259627]: 2025-10-14 09:26:08.793 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapea25832f-13, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:26:08 np0005486808 nova_compute[259627]: 2025-10-14 09:26:08.794 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapea25832f-13, col_values=(('external_ids', {'iface-id': 'ea25832f-13d3-41ec-874c-e622d24c912e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fe:45:3f', 'vm-uuid': '4310595f-2280-438c-97ca-f2de57527501'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:26:08 np0005486808 nova_compute[259627]: 2025-10-14 09:26:08.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:08 np0005486808 NetworkManager[44885]: <info>  [1760433968.7978] manager: (tapea25832f-13): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/548)
Oct 14 05:26:08 np0005486808 nova_compute[259627]: 2025-10-14 09:26:08.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:26:08 np0005486808 nova_compute[259627]: 2025-10-14 09:26:08.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:08 np0005486808 nova_compute[259627]: 2025-10-14 09:26:08.806 2 INFO os_vif [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:45:3f,bridge_name='br-int',has_traffic_filtering=True,id=ea25832f-13d3-41ec-874c-e622d24c912e,network=Network(69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea25832f-13')#033[00m
Oct 14 05:26:08 np0005486808 nova_compute[259627]: 2025-10-14 09:26:08.875 2 DEBUG nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:26:08 np0005486808 nova_compute[259627]: 2025-10-14 09:26:08.876 2 DEBUG nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:26:08 np0005486808 nova_compute[259627]: 2025-10-14 09:26:08.876 2 DEBUG nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No VIF found with MAC fa:16:3e:fe:45:3f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:26:08 np0005486808 nova_compute[259627]: 2025-10-14 09:26:08.877 2 INFO nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Using config drive#033[00m
Oct 14 05:26:08 np0005486808 nova_compute[259627]: 2025-10-14 09:26:08.907 2 DEBUG nova.storage.rbd_utils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 4310595f-2280-438c-97ca-f2de57527501_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:26:09 np0005486808 nova_compute[259627]: 2025-10-14 09:26:09.224 2 INFO nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Creating config drive at /var/lib/nova/instances/4310595f-2280-438c-97ca-f2de57527501/disk.config#033[00m
Oct 14 05:26:09 np0005486808 nova_compute[259627]: 2025-10-14 09:26:09.234 2 DEBUG oslo_concurrency.processutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4310595f-2280-438c-97ca-f2de57527501/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7xoix8t2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:26:09 np0005486808 nova_compute[259627]: 2025-10-14 09:26:09.299 2 DEBUG nova.network.neutron [req-c652b128-6d11-4e7f-b086-c93c83489e23 req-6ccdb29b-6271-4a5a-b81e-5f389f8486ae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Updated VIF entry in instance network info cache for port ea25832f-13d3-41ec-874c-e622d24c912e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:26:09 np0005486808 nova_compute[259627]: 2025-10-14 09:26:09.300 2 DEBUG nova.network.neutron [req-c652b128-6d11-4e7f-b086-c93c83489e23 req-6ccdb29b-6271-4a5a-b81e-5f389f8486ae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Updating instance_info_cache with network_info: [{"id": "ea25832f-13d3-41ec-874c-e622d24c912e", "address": "fa:16:3e:fe:45:3f", "network": {"id": "69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c", "bridge": "br-int", "label": "tempest-network-smoke--1422018789", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea25832f-13", "ovs_interfaceid": "ea25832f-13d3-41ec-874c-e622d24c912e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:26:09 np0005486808 nova_compute[259627]: 2025-10-14 09:26:09.314 2 DEBUG oslo_concurrency.lockutils [req-c652b128-6d11-4e7f-b086-c93c83489e23 req-6ccdb29b-6271-4a5a-b81e-5f389f8486ae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-4310595f-2280-438c-97ca-f2de57527501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:26:09 np0005486808 nova_compute[259627]: 2025-10-14 09:26:09.389 2 DEBUG oslo_concurrency.processutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4310595f-2280-438c-97ca-f2de57527501/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7xoix8t2" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:26:09 np0005486808 nova_compute[259627]: 2025-10-14 09:26:09.429 2 DEBUG nova.storage.rbd_utils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 4310595f-2280-438c-97ca-f2de57527501_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:26:09 np0005486808 nova_compute[259627]: 2025-10-14 09:26:09.434 2 DEBUG oslo_concurrency.processutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4310595f-2280-438c-97ca-f2de57527501/disk.config 4310595f-2280-438c-97ca-f2de57527501_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:26:09 np0005486808 nova_compute[259627]: 2025-10-14 09:26:09.639 2 DEBUG oslo_concurrency.processutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4310595f-2280-438c-97ca-f2de57527501/disk.config 4310595f-2280-438c-97ca-f2de57527501_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.205s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:26:09 np0005486808 nova_compute[259627]: 2025-10-14 09:26:09.640 2 INFO nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Deleting local config drive /var/lib/nova/instances/4310595f-2280-438c-97ca-f2de57527501/disk.config because it was imported into RBD.#033[00m
Oct 14 05:26:09 np0005486808 kernel: tapea25832f-13: entered promiscuous mode
Oct 14 05:26:09 np0005486808 NetworkManager[44885]: <info>  [1760433969.7073] manager: (tapea25832f-13): new Tun device (/org/freedesktop/NetworkManager/Devices/549)
Oct 14 05:26:09 np0005486808 nova_compute[259627]: 2025-10-14 09:26:09.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:09 np0005486808 ovn_controller[152662]: 2025-10-14T09:26:09Z|01342|binding|INFO|Claiming lport ea25832f-13d3-41ec-874c-e622d24c912e for this chassis.
Oct 14 05:26:09 np0005486808 ovn_controller[152662]: 2025-10-14T09:26:09Z|01343|binding|INFO|ea25832f-13d3-41ec-874c-e622d24c912e: Claiming fa:16:3e:fe:45:3f 10.100.0.12
Oct 14 05:26:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:09.720 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:45:3f 10.100.0.12'], port_security=['fa:16:3e:fe:45:3f 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '4310595f-2280-438c-97ca-f2de57527501', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f754bf649a2404fa8dee732f5aab36e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2ea5a077-a2c7-41d4-9c82-971893cbca2e 5ded162a-2a98-4fc1-94d1-b742c1816f61', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=57f23724-2b34-445a-b3d0-46a0f0ee87c3, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=ea25832f-13d3-41ec-874c-e622d24c912e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:26:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:09.721 162547 INFO neutron.agent.ovn.metadata.agent [-] Port ea25832f-13d3-41ec-874c-e622d24c912e in datapath 69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c bound to our chassis#033[00m
Oct 14 05:26:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:09.723 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c#033[00m
Oct 14 05:26:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:09.734 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[794f794c-1d79-4f59-95de-6dafd46f877d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:09.735 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap69d5e5f4-01 in ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:26:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:09.736 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap69d5e5f4-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:26:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:09.736 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7d36a989-5da6-4817-ac9e-c8fbdef463bd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:09.738 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e9922194-5371-43c7-bc75-d793df664450]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:09 np0005486808 systemd-udevd[389021]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:26:09 np0005486808 ovn_controller[152662]: 2025-10-14T09:26:09Z|01344|binding|INFO|Setting lport ea25832f-13d3-41ec-874c-e622d24c912e ovn-installed in OVS
Oct 14 05:26:09 np0005486808 ovn_controller[152662]: 2025-10-14T09:26:09Z|01345|binding|INFO|Setting lport ea25832f-13d3-41ec-874c-e622d24c912e up in Southbound
Oct 14 05:26:09 np0005486808 systemd-machined[214636]: New machine qemu-159-instance-0000007e.
Oct 14 05:26:09 np0005486808 NetworkManager[44885]: <info>  [1760433969.7535] device (tapea25832f-13): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:26:09 np0005486808 NetworkManager[44885]: <info>  [1760433969.7548] device (tapea25832f-13): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:26:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:09.771 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[7fb6b723-d2c1-4fec-8f45-c267aff10820]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:09 np0005486808 nova_compute[259627]: 2025-10-14 09:26:09.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:09 np0005486808 systemd[1]: Started Virtual Machine qemu-159-instance-0000007e.
Oct 14 05:26:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:09.817 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4e2274fa-20e8-411e-bae8-554cb694a199]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:09.846 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[76289ff7-2bea-45d7-8b72-7f3770494d89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:09.850 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[97efc030-1ce9-4336-bb54-fce1f86f7025]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:09 np0005486808 NetworkManager[44885]: <info>  [1760433969.8527] manager: (tap69d5e5f4-00): new Veth device (/org/freedesktop/NetworkManager/Devices/550)
Oct 14 05:26:09 np0005486808 systemd-udevd[389026]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:26:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:09.889 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d7d6374c-9b13-4605-b38c-4735bccb3f3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:09.892 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a5ba1908-8791-4adf-991e-b64e675184be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:09 np0005486808 nova_compute[259627]: 2025-10-14 09:26:09.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:09 np0005486808 NetworkManager[44885]: <info>  [1760433969.9148] device (tap69d5e5f4-00): carrier: link connected
Oct 14 05:26:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:09.921 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3dba3ad3-9d2a-4556-8821-36c94e52e14a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:09.943 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ebe00eed-f7cf-449f-bc4a-ba8cc479ee66]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap69d5e5f4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:7d:a1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 385], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 774867, 'reachable_time': 33876, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 389055, 'error': None, 'target': 'ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:09.960 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ad9f5172-810b-4fc9-af0f-6b16bb7e8053]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe95:7da1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 774867, 'tstamp': 774867}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 389056, 'error': None, 'target': 'ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:09.979 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[beff8ccf-b361-4a4f-8919-d53d19de4842]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap69d5e5f4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:7d:a1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 385], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 774867, 'reachable_time': 33876, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 389057, 'error': None, 'target': 'ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:10.009 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[06129994-bfa0-440f-b047-75cd6fd3308c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:10.083 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[002fa53c-eb68-4d9c-a486-2865451ae061]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:10.085 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap69d5e5f4-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:26:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:10.086 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:26:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:10.087 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap69d5e5f4-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:26:10 np0005486808 NetworkManager[44885]: <info>  [1760433970.0902] manager: (tap69d5e5f4-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/551)
Oct 14 05:26:10 np0005486808 nova_compute[259627]: 2025-10-14 09:26:10.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:10 np0005486808 kernel: tap69d5e5f4-00: entered promiscuous mode
Oct 14 05:26:10 np0005486808 nova_compute[259627]: 2025-10-14 09:26:10.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:10.094 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap69d5e5f4-00, col_values=(('external_ids', {'iface-id': 'a300b10f-f6fd-47ab-bc03-160d747e5ac0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:26:10 np0005486808 ovn_controller[152662]: 2025-10-14T09:26:10Z|01346|binding|INFO|Releasing lport a300b10f-f6fd-47ab-bc03-160d747e5ac0 from this chassis (sb_readonly=0)
Oct 14 05:26:10 np0005486808 nova_compute[259627]: 2025-10-14 09:26:10.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:10 np0005486808 nova_compute[259627]: 2025-10-14 09:26:10.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:10.124 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:26:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:10.125 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[659a4138-4ab1-4e81-aa29-22cb7863ecae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:10.126 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:26:10 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:26:10 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:26:10 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c
Oct 14 05:26:10 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:26:10 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:26:10 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:26:10 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c.pid.haproxy
Oct 14 05:26:10 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:26:10 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:26:10 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:26:10 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:26:10 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:26:10 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:26:10 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:26:10 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:26:10 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:26:10 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:26:10 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:26:10 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:26:10 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:26:10 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:26:10 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:26:10 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:26:10 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:26:10 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:26:10 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:26:10 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:26:10 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c
Oct 14 05:26:10 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:26:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:10.127 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c', 'env', 'PROCESS_TAG=haproxy-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:26:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2191: 305 pgs: 305 active+clean; 167 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 347 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Oct 14 05:26:10 np0005486808 nova_compute[259627]: 2025-10-14 09:26:10.319 2 INFO nova.compute.manager [None req-1b472b76-f7e5-4d11-9bbc-9cfd0232a5e3 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Get console output#033[00m
Oct 14 05:26:10 np0005486808 nova_compute[259627]: 2025-10-14 09:26:10.328 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 14 05:26:10 np0005486808 podman[389131]: 2025-10-14 09:26:10.520137217 +0000 UTC m=+0.064978916 container create a44a511e51cc783fd3902e2425448585c0bfa4144e3c0ed2308938b41cd778d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 05:26:10 np0005486808 systemd[1]: Started libpod-conmon-a44a511e51cc783fd3902e2425448585c0bfa4144e3c0ed2308938b41cd778d1.scope.
Oct 14 05:26:10 np0005486808 podman[389131]: 2025-10-14 09:26:10.481112543 +0000 UTC m=+0.025954262 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:26:10 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:26:10 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1a75670ad0d206a928ee627a44dd9949ee1ddfbb2472618a373916a2273c134/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:26:10 np0005486808 podman[389131]: 2025-10-14 09:26:10.610675055 +0000 UTC m=+0.155516784 container init a44a511e51cc783fd3902e2425448585c0bfa4144e3c0ed2308938b41cd778d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 05:26:10 np0005486808 podman[389131]: 2025-10-14 09:26:10.615809201 +0000 UTC m=+0.160650900 container start a44a511e51cc783fd3902e2425448585c0bfa4144e3c0ed2308938b41cd778d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:26:10 np0005486808 neutron-haproxy-ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c[389146]: [NOTICE]   (389150) : New worker (389152) forked
Oct 14 05:26:10 np0005486808 neutron-haproxy-ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c[389146]: [NOTICE]   (389150) : Loading success.
Oct 14 05:26:10 np0005486808 nova_compute[259627]: 2025-10-14 09:26:10.680 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433970.6797283, 4310595f-2280-438c-97ca-f2de57527501 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:26:10 np0005486808 nova_compute[259627]: 2025-10-14 09:26:10.680 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4310595f-2280-438c-97ca-f2de57527501] VM Started (Lifecycle Event)#033[00m
Oct 14 05:26:10 np0005486808 nova_compute[259627]: 2025-10-14 09:26:10.715 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4310595f-2280-438c-97ca-f2de57527501] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:26:10 np0005486808 nova_compute[259627]: 2025-10-14 09:26:10.718 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433970.6799247, 4310595f-2280-438c-97ca-f2de57527501 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:26:10 np0005486808 nova_compute[259627]: 2025-10-14 09:26:10.718 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4310595f-2280-438c-97ca-f2de57527501] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:26:10 np0005486808 nova_compute[259627]: 2025-10-14 09:26:10.740 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4310595f-2280-438c-97ca-f2de57527501] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:26:10 np0005486808 nova_compute[259627]: 2025-10-14 09:26:10.742 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4310595f-2280-438c-97ca-f2de57527501] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:26:10 np0005486808 nova_compute[259627]: 2025-10-14 09:26:10.771 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4310595f-2280-438c-97ca-f2de57527501] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:26:10 np0005486808 nova_compute[259627]: 2025-10-14 09:26:10.927 2 DEBUG nova.compute.manager [req-de4601fb-7992-4f4b-917b-603479fe6c5e req-3ab91eb3-8f4c-4d80-acb4-e3d7aeecf27e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Received event network-vif-plugged-ea25832f-13d3-41ec-874c-e622d24c912e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:26:10 np0005486808 nova_compute[259627]: 2025-10-14 09:26:10.928 2 DEBUG oslo_concurrency.lockutils [req-de4601fb-7992-4f4b-917b-603479fe6c5e req-3ab91eb3-8f4c-4d80-acb4-e3d7aeecf27e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4310595f-2280-438c-97ca-f2de57527501-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:26:10 np0005486808 nova_compute[259627]: 2025-10-14 09:26:10.928 2 DEBUG oslo_concurrency.lockutils [req-de4601fb-7992-4f4b-917b-603479fe6c5e req-3ab91eb3-8f4c-4d80-acb4-e3d7aeecf27e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4310595f-2280-438c-97ca-f2de57527501-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:26:10 np0005486808 nova_compute[259627]: 2025-10-14 09:26:10.928 2 DEBUG oslo_concurrency.lockutils [req-de4601fb-7992-4f4b-917b-603479fe6c5e req-3ab91eb3-8f4c-4d80-acb4-e3d7aeecf27e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4310595f-2280-438c-97ca-f2de57527501-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:26:10 np0005486808 nova_compute[259627]: 2025-10-14 09:26:10.928 2 DEBUG nova.compute.manager [req-de4601fb-7992-4f4b-917b-603479fe6c5e req-3ab91eb3-8f4c-4d80-acb4-e3d7aeecf27e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Processing event network-vif-plugged-ea25832f-13d3-41ec-874c-e622d24c912e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:26:10 np0005486808 nova_compute[259627]: 2025-10-14 09:26:10.929 2 DEBUG nova.compute.manager [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:26:10 np0005486808 nova_compute[259627]: 2025-10-14 09:26:10.933 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760433970.933312, 4310595f-2280-438c-97ca-f2de57527501 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:26:10 np0005486808 nova_compute[259627]: 2025-10-14 09:26:10.934 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4310595f-2280-438c-97ca-f2de57527501] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:26:10 np0005486808 nova_compute[259627]: 2025-10-14 09:26:10.935 2 DEBUG nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:26:10 np0005486808 nova_compute[259627]: 2025-10-14 09:26:10.938 2 INFO nova.virt.libvirt.driver [-] [instance: 4310595f-2280-438c-97ca-f2de57527501] Instance spawned successfully.#033[00m
Oct 14 05:26:10 np0005486808 nova_compute[259627]: 2025-10-14 09:26:10.938 2 DEBUG nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:26:10 np0005486808 nova_compute[259627]: 2025-10-14 09:26:10.967 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4310595f-2280-438c-97ca-f2de57527501] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:26:10 np0005486808 nova_compute[259627]: 2025-10-14 09:26:10.972 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4310595f-2280-438c-97ca-f2de57527501] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:26:10 np0005486808 nova_compute[259627]: 2025-10-14 09:26:10.975 2 DEBUG nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:26:10 np0005486808 nova_compute[259627]: 2025-10-14 09:26:10.975 2 DEBUG nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:26:10 np0005486808 nova_compute[259627]: 2025-10-14 09:26:10.976 2 DEBUG nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:26:10 np0005486808 nova_compute[259627]: 2025-10-14 09:26:10.976 2 DEBUG nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:26:10 np0005486808 nova_compute[259627]: 2025-10-14 09:26:10.976 2 DEBUG nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:26:10 np0005486808 nova_compute[259627]: 2025-10-14 09:26:10.976 2 DEBUG nova.virt.libvirt.driver [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:26:11 np0005486808 nova_compute[259627]: 2025-10-14 09:26:11.008 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4310595f-2280-438c-97ca-f2de57527501] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:26:11 np0005486808 nova_compute[259627]: 2025-10-14 09:26:11.066 2 INFO nova.compute.manager [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Took 8.01 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:26:11 np0005486808 nova_compute[259627]: 2025-10-14 09:26:11.067 2 DEBUG nova.compute.manager [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:26:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:11.162 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:26:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:11.163 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 14 05:26:11 np0005486808 nova_compute[259627]: 2025-10-14 09:26:11.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:11 np0005486808 nova_compute[259627]: 2025-10-14 09:26:11.176 2 INFO nova.compute.manager [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Took 9.09 seconds to build instance.#033[00m
Oct 14 05:26:11 np0005486808 ovn_controller[152662]: 2025-10-14T09:26:11Z|00147|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6a:56:7c 10.100.0.9
Oct 14 05:26:11 np0005486808 nova_compute[259627]: 2025-10-14 09:26:11.245 2 DEBUG oslo_concurrency.lockutils [None req-99819bd9-d47f-439b-a37a-bc4b74a6c512 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4310595f-2280-438c-97ca-f2de57527501" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.249s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:26:11 np0005486808 nova_compute[259627]: 2025-10-14 09:26:11.974 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:26:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2192: 305 pgs: 305 active+clean; 167 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 348 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Oct 14 05:26:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:26:13 np0005486808 ovn_controller[152662]: 2025-10-14T09:26:13Z|00148|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6a:56:7c 10.100.0.9
Oct 14 05:26:13 np0005486808 nova_compute[259627]: 2025-10-14 09:26:13.199 2 DEBUG nova.compute.manager [req-75585999-cd82-4e2c-91c3-00bc217a318a req-8f9172f4-5ef5-46f1-9b87-893cb2d9b08f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Received event network-vif-plugged-ea25832f-13d3-41ec-874c-e622d24c912e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:26:13 np0005486808 nova_compute[259627]: 2025-10-14 09:26:13.199 2 DEBUG oslo_concurrency.lockutils [req-75585999-cd82-4e2c-91c3-00bc217a318a req-8f9172f4-5ef5-46f1-9b87-893cb2d9b08f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4310595f-2280-438c-97ca-f2de57527501-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:26:13 np0005486808 nova_compute[259627]: 2025-10-14 09:26:13.199 2 DEBUG oslo_concurrency.lockutils [req-75585999-cd82-4e2c-91c3-00bc217a318a req-8f9172f4-5ef5-46f1-9b87-893cb2d9b08f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4310595f-2280-438c-97ca-f2de57527501-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:26:13 np0005486808 nova_compute[259627]: 2025-10-14 09:26:13.199 2 DEBUG oslo_concurrency.lockutils [req-75585999-cd82-4e2c-91c3-00bc217a318a req-8f9172f4-5ef5-46f1-9b87-893cb2d9b08f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4310595f-2280-438c-97ca-f2de57527501-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:26:13 np0005486808 nova_compute[259627]: 2025-10-14 09:26:13.200 2 DEBUG nova.compute.manager [req-75585999-cd82-4e2c-91c3-00bc217a318a req-8f9172f4-5ef5-46f1-9b87-893cb2d9b08f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] No waiting events found dispatching network-vif-plugged-ea25832f-13d3-41ec-874c-e622d24c912e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:26:13 np0005486808 nova_compute[259627]: 2025-10-14 09:26:13.200 2 WARNING nova.compute.manager [req-75585999-cd82-4e2c-91c3-00bc217a318a req-8f9172f4-5ef5-46f1-9b87-893cb2d9b08f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Received unexpected event network-vif-plugged-ea25832f-13d3-41ec-874c-e622d24c912e for instance with vm_state active and task_state None.#033[00m
Oct 14 05:26:13 np0005486808 nova_compute[259627]: 2025-10-14 09:26:13.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2193: 305 pgs: 305 active+clean; 167 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 348 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Oct 14 05:26:14 np0005486808 nova_compute[259627]: 2025-10-14 09:26:14.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:14 np0005486808 ovn_controller[152662]: 2025-10-14T09:26:14Z|00149|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6a:56:7c 10.100.0.9
Oct 14 05:26:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:15.165 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:26:15 np0005486808 nova_compute[259627]: 2025-10-14 09:26:15.912 2 DEBUG nova.compute.manager [req-38c1443d-b2d1-4b3b-824c-59f92c1b9290 req-b5b8faed-4612-4b79-9b83-687c8c9d825b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Received event network-changed-52d803a0-5139-4197-a575-2530583dda13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:26:15 np0005486808 nova_compute[259627]: 2025-10-14 09:26:15.913 2 DEBUG nova.compute.manager [req-38c1443d-b2d1-4b3b-824c-59f92c1b9290 req-b5b8faed-4612-4b79-9b83-687c8c9d825b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Refreshing instance network info cache due to event network-changed-52d803a0-5139-4197-a575-2530583dda13. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:26:15 np0005486808 nova_compute[259627]: 2025-10-14 09:26:15.913 2 DEBUG oslo_concurrency.lockutils [req-38c1443d-b2d1-4b3b-824c-59f92c1b9290 req-b5b8faed-4612-4b79-9b83-687c8c9d825b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-f41def60-a7be-4154-86bc-ef63a639ee94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:26:15 np0005486808 nova_compute[259627]: 2025-10-14 09:26:15.914 2 DEBUG oslo_concurrency.lockutils [req-38c1443d-b2d1-4b3b-824c-59f92c1b9290 req-b5b8faed-4612-4b79-9b83-687c8c9d825b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-f41def60-a7be-4154-86bc-ef63a639ee94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:26:15 np0005486808 nova_compute[259627]: 2025-10-14 09:26:15.914 2 DEBUG nova.network.neutron [req-38c1443d-b2d1-4b3b-824c-59f92c1b9290 req-b5b8faed-4612-4b79-9b83-687c8c9d825b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Refreshing network info cache for port 52d803a0-5139-4197-a575-2530583dda13 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:26:16 np0005486808 nova_compute[259627]: 2025-10-14 09:26:16.022 2 DEBUG oslo_concurrency.lockutils [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "f41def60-a7be-4154-86bc-ef63a639ee94" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:26:16 np0005486808 nova_compute[259627]: 2025-10-14 09:26:16.023 2 DEBUG oslo_concurrency.lockutils [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "f41def60-a7be-4154-86bc-ef63a639ee94" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:26:16 np0005486808 nova_compute[259627]: 2025-10-14 09:26:16.024 2 DEBUG oslo_concurrency.lockutils [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "f41def60-a7be-4154-86bc-ef63a639ee94-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:26:16 np0005486808 nova_compute[259627]: 2025-10-14 09:26:16.024 2 DEBUG oslo_concurrency.lockutils [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "f41def60-a7be-4154-86bc-ef63a639ee94-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:26:16 np0005486808 nova_compute[259627]: 2025-10-14 09:26:16.024 2 DEBUG oslo_concurrency.lockutils [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "f41def60-a7be-4154-86bc-ef63a639ee94-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:26:16 np0005486808 nova_compute[259627]: 2025-10-14 09:26:16.026 2 INFO nova.compute.manager [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Terminating instance#033[00m
Oct 14 05:26:16 np0005486808 nova_compute[259627]: 2025-10-14 09:26:16.027 2 DEBUG nova.compute.manager [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:26:16 np0005486808 kernel: tap52d803a0-51 (unregistering): left promiscuous mode
Oct 14 05:26:16 np0005486808 NetworkManager[44885]: <info>  [1760433976.0880] device (tap52d803a0-51): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:26:16 np0005486808 ovn_controller[152662]: 2025-10-14T09:26:16Z|01347|binding|INFO|Releasing lport 52d803a0-5139-4197-a575-2530583dda13 from this chassis (sb_readonly=0)
Oct 14 05:26:16 np0005486808 ovn_controller[152662]: 2025-10-14T09:26:16Z|01348|binding|INFO|Setting lport 52d803a0-5139-4197-a575-2530583dda13 down in Southbound
Oct 14 05:26:16 np0005486808 ovn_controller[152662]: 2025-10-14T09:26:16Z|01349|binding|INFO|Removing iface tap52d803a0-51 ovn-installed in OVS
Oct 14 05:26:16 np0005486808 nova_compute[259627]: 2025-10-14 09:26:16.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:16 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:16.169 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:56:7c 10.100.0.9'], port_security=['fa:16:3e:6a:56:7c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'f41def60-a7be-4154-86bc-ef63a639ee94', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e4b4c3f-9218-4fba-8f93-74ac472b0db0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '4', 'neutron:security_group_ids': '68240394-e812-45d0-9e91-6623d4ac03bc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c79ac29-f3b4-494c-ada5-2b93955c4fe1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=52d803a0-5139-4197-a575-2530583dda13) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:26:16 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:16.171 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 52d803a0-5139-4197-a575-2530583dda13 in datapath 0e4b4c3f-9218-4fba-8f93-74ac472b0db0 unbound from our chassis#033[00m
Oct 14 05:26:16 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:16.174 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0e4b4c3f-9218-4fba-8f93-74ac472b0db0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:26:16 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:16.175 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[79d64cbf-f8af-4860-b362-f7250bb82306]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:16 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:16.177 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0 namespace which is not needed anymore#033[00m
Oct 14 05:26:16 np0005486808 systemd[1]: machine-qemu\x2d158\x2dinstance\x2d0000007d.scope: Deactivated successfully.
Oct 14 05:26:16 np0005486808 systemd[1]: machine-qemu\x2d158\x2dinstance\x2d0000007d.scope: Consumed 13.063s CPU time.
Oct 14 05:26:16 np0005486808 nova_compute[259627]: 2025-10-14 09:26:16.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:16 np0005486808 systemd-machined[214636]: Machine qemu-158-instance-0000007d terminated.
Oct 14 05:26:16 np0005486808 nova_compute[259627]: 2025-10-14 09:26:16.282 2 INFO nova.virt.libvirt.driver [-] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Instance destroyed successfully.#033[00m
Oct 14 05:26:16 np0005486808 nova_compute[259627]: 2025-10-14 09:26:16.284 2 DEBUG nova.objects.instance [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'resources' on Instance uuid f41def60-a7be-4154-86bc-ef63a639ee94 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:26:16 np0005486808 nova_compute[259627]: 2025-10-14 09:26:16.307 2 DEBUG nova.virt.libvirt.vif [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:25:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2135752872',display_name='tempest-TestNetworkBasicOps-server-2135752872',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2135752872',id=125,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPnKWI1fcgm2J2USW/ocYjEbdNv3UFcXFKMO5si6IlBqcZWb9CUblu//WPv2zFMmUex5tiH7jg81h5bD0kUs5doUIUp4qv9iNUQKKi7Q5u8sjjVGsY4n55Yf/sl7IQnjgw==',key_name='tempest-TestNetworkBasicOps-1922317585',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:25:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-h0pwqz7e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:25:52Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=f41def60-a7be-4154-86bc-ef63a639ee94,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "52d803a0-5139-4197-a575-2530583dda13", "address": "fa:16:3e:6a:56:7c", "network": {"id": "0e4b4c3f-9218-4fba-8f93-74ac472b0db0", "bridge": "br-int", "label": "tempest-network-smoke--2018979810", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52d803a0-51", "ovs_interfaceid": "52d803a0-5139-4197-a575-2530583dda13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:26:16 np0005486808 nova_compute[259627]: 2025-10-14 09:26:16.308 2 DEBUG nova.network.os_vif_util [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "52d803a0-5139-4197-a575-2530583dda13", "address": "fa:16:3e:6a:56:7c", "network": {"id": "0e4b4c3f-9218-4fba-8f93-74ac472b0db0", "bridge": "br-int", "label": "tempest-network-smoke--2018979810", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52d803a0-51", "ovs_interfaceid": "52d803a0-5139-4197-a575-2530583dda13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:26:16 np0005486808 nova_compute[259627]: 2025-10-14 09:26:16.309 2 DEBUG nova.network.os_vif_util [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6a:56:7c,bridge_name='br-int',has_traffic_filtering=True,id=52d803a0-5139-4197-a575-2530583dda13,network=Network(0e4b4c3f-9218-4fba-8f93-74ac472b0db0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52d803a0-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:26:16 np0005486808 nova_compute[259627]: 2025-10-14 09:26:16.310 2 DEBUG os_vif [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6a:56:7c,bridge_name='br-int',has_traffic_filtering=True,id=52d803a0-5139-4197-a575-2530583dda13,network=Network(0e4b4c3f-9218-4fba-8f93-74ac472b0db0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52d803a0-51') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:26:16 np0005486808 nova_compute[259627]: 2025-10-14 09:26:16.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:16 np0005486808 nova_compute[259627]: 2025-10-14 09:26:16.314 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52d803a0-51, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:26:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2194: 305 pgs: 305 active+clean; 167 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Oct 14 05:26:16 np0005486808 nova_compute[259627]: 2025-10-14 09:26:16.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:16 np0005486808 nova_compute[259627]: 2025-10-14 09:26:16.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:26:16 np0005486808 nova_compute[259627]: 2025-10-14 09:26:16.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:16 np0005486808 nova_compute[259627]: 2025-10-14 09:26:16.320 2 INFO os_vif [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6a:56:7c,bridge_name='br-int',has_traffic_filtering=True,id=52d803a0-5139-4197-a575-2530583dda13,network=Network(0e4b4c3f-9218-4fba-8f93-74ac472b0db0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52d803a0-51')#033[00m
Oct 14 05:26:16 np0005486808 nova_compute[259627]: 2025-10-14 09:26:16.339 2 DEBUG nova.compute.manager [req-fc0ecba7-49f4-47c9-87e9-b2c9fbd4c1bc req-3af47033-b1d2-4402-a308-143e5e30c018 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Received event network-vif-unplugged-52d803a0-5139-4197-a575-2530583dda13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:26:16 np0005486808 nova_compute[259627]: 2025-10-14 09:26:16.341 2 DEBUG oslo_concurrency.lockutils [req-fc0ecba7-49f4-47c9-87e9-b2c9fbd4c1bc req-3af47033-b1d2-4402-a308-143e5e30c018 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f41def60-a7be-4154-86bc-ef63a639ee94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:26:16 np0005486808 nova_compute[259627]: 2025-10-14 09:26:16.341 2 DEBUG oslo_concurrency.lockutils [req-fc0ecba7-49f4-47c9-87e9-b2c9fbd4c1bc req-3af47033-b1d2-4402-a308-143e5e30c018 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f41def60-a7be-4154-86bc-ef63a639ee94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:26:16 np0005486808 nova_compute[259627]: 2025-10-14 09:26:16.341 2 DEBUG oslo_concurrency.lockutils [req-fc0ecba7-49f4-47c9-87e9-b2c9fbd4c1bc req-3af47033-b1d2-4402-a308-143e5e30c018 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f41def60-a7be-4154-86bc-ef63a639ee94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:26:16 np0005486808 nova_compute[259627]: 2025-10-14 09:26:16.342 2 DEBUG nova.compute.manager [req-fc0ecba7-49f4-47c9-87e9-b2c9fbd4c1bc req-3af47033-b1d2-4402-a308-143e5e30c018 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] No waiting events found dispatching network-vif-unplugged-52d803a0-5139-4197-a575-2530583dda13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:26:16 np0005486808 nova_compute[259627]: 2025-10-14 09:26:16.342 2 DEBUG nova.compute.manager [req-fc0ecba7-49f4-47c9-87e9-b2c9fbd4c1bc req-3af47033-b1d2-4402-a308-143e5e30c018 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Received event network-vif-unplugged-52d803a0-5139-4197-a575-2530583dda13 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:26:16 np0005486808 neutron-haproxy-ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0[388640]: [NOTICE]   (388644) : haproxy version is 2.8.14-c23fe91
Oct 14 05:26:16 np0005486808 neutron-haproxy-ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0[388640]: [NOTICE]   (388644) : path to executable is /usr/sbin/haproxy
Oct 14 05:26:16 np0005486808 neutron-haproxy-ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0[388640]: [WARNING]  (388644) : Exiting Master process...
Oct 14 05:26:16 np0005486808 neutron-haproxy-ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0[388640]: [ALERT]    (388644) : Current worker (388646) exited with code 143 (Terminated)
Oct 14 05:26:16 np0005486808 neutron-haproxy-ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0[388640]: [WARNING]  (388644) : All workers exited. Exiting... (0)
Oct 14 05:26:16 np0005486808 systemd[1]: libpod-0e3f49f3540434f79328dcdc2603eefa7e399f0c51f578824585822f5d241dfd.scope: Deactivated successfully.
Oct 14 05:26:16 np0005486808 podman[389195]: 2025-10-14 09:26:16.407442359 +0000 UTC m=+0.076169203 container died 0e3f49f3540434f79328dcdc2603eefa7e399f0c51f578824585822f5d241dfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 05:26:16 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0e3f49f3540434f79328dcdc2603eefa7e399f0c51f578824585822f5d241dfd-userdata-shm.mount: Deactivated successfully.
Oct 14 05:26:16 np0005486808 systemd[1]: var-lib-containers-storage-overlay-8e4e9f0f032f127958bf9c519d14d596a1ab0d965830f5dc4cf0b7403580501a-merged.mount: Deactivated successfully.
Oct 14 05:26:16 np0005486808 podman[389195]: 2025-10-14 09:26:16.475630024 +0000 UTC m=+0.144356798 container cleanup 0e3f49f3540434f79328dcdc2603eefa7e399f0c51f578824585822f5d241dfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2)
Oct 14 05:26:16 np0005486808 systemd[1]: libpod-conmon-0e3f49f3540434f79328dcdc2603eefa7e399f0c51f578824585822f5d241dfd.scope: Deactivated successfully.
Oct 14 05:26:16 np0005486808 podman[389245]: 2025-10-14 09:26:16.574151678 +0000 UTC m=+0.065061178 container remove 0e3f49f3540434f79328dcdc2603eefa7e399f0c51f578824585822f5d241dfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true)
Oct 14 05:26:16 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:16.587 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[82b4c77d-780e-40db-8ac8-68796206c5cd]: (4, ('Tue Oct 14 09:26:16 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0 (0e3f49f3540434f79328dcdc2603eefa7e399f0c51f578824585822f5d241dfd)\n0e3f49f3540434f79328dcdc2603eefa7e399f0c51f578824585822f5d241dfd\nTue Oct 14 09:26:16 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0 (0e3f49f3540434f79328dcdc2603eefa7e399f0c51f578824585822f5d241dfd)\n0e3f49f3540434f79328dcdc2603eefa7e399f0c51f578824585822f5d241dfd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:16 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:16.589 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f685f836-ca49-42ac-a33a-9df792fff3fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:16 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:16.591 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0e4b4c3f-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:26:16 np0005486808 nova_compute[259627]: 2025-10-14 09:26:16.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:16 np0005486808 kernel: tap0e4b4c3f-90: left promiscuous mode
Oct 14 05:26:16 np0005486808 nova_compute[259627]: 2025-10-14 09:26:16.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:16 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:16.602 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[31b85c8f-5e27-4a53-8be4-1c1cc8403e9e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:16 np0005486808 nova_compute[259627]: 2025-10-14 09:26:16.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:16 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:16.629 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c8fe0cfc-e359-4457-bab5-f73c34433f3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:16 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:16.632 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[18fad62d-c41b-4be5-bf3f-9a219c64a8ac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:16 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:16.651 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[700e5e20-16ba-41d5-8656-eb039b93ef3a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 772947, 'reachable_time': 35338, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 389261, 'error': None, 'target': 'ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:16 np0005486808 systemd[1]: run-netns-ovnmeta\x2d0e4b4c3f\x2d9218\x2d4fba\x2d8f93\x2d74ac472b0db0.mount: Deactivated successfully.
Oct 14 05:26:16 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:16.658 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0e4b4c3f-9218-4fba-8f93-74ac472b0db0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:26:16 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:16.658 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[54ad42e8-51e7-4c06-8541-a444976952c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:16 np0005486808 nova_compute[259627]: 2025-10-14 09:26:16.972 2 INFO nova.virt.libvirt.driver [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Deleting instance files /var/lib/nova/instances/f41def60-a7be-4154-86bc-ef63a639ee94_del#033[00m
Oct 14 05:26:16 np0005486808 nova_compute[259627]: 2025-10-14 09:26:16.973 2 INFO nova.virt.libvirt.driver [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Deletion of /var/lib/nova/instances/f41def60-a7be-4154-86bc-ef63a639ee94_del complete#033[00m
Oct 14 05:26:17 np0005486808 nova_compute[259627]: 2025-10-14 09:26:17.035 2 INFO nova.compute.manager [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Took 1.01 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:26:17 np0005486808 nova_compute[259627]: 2025-10-14 09:26:17.035 2 DEBUG oslo.service.loopingcall [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:26:17 np0005486808 nova_compute[259627]: 2025-10-14 09:26:17.036 2 DEBUG nova.compute.manager [-] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:26:17 np0005486808 nova_compute[259627]: 2025-10-14 09:26:17.036 2 DEBUG nova.network.neutron [-] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:26:17 np0005486808 nova_compute[259627]: 2025-10-14 09:26:17.520 2 DEBUG nova.network.neutron [-] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:26:17 np0005486808 nova_compute[259627]: 2025-10-14 09:26:17.537 2 INFO nova.compute.manager [-] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Took 0.50 seconds to deallocate network for instance.#033[00m
Oct 14 05:26:17 np0005486808 nova_compute[259627]: 2025-10-14 09:26:17.593 2 DEBUG oslo_concurrency.lockutils [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:26:17 np0005486808 nova_compute[259627]: 2025-10-14 09:26:17.594 2 DEBUG oslo_concurrency.lockutils [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:26:17 np0005486808 nova_compute[259627]: 2025-10-14 09:26:17.656 2 DEBUG nova.network.neutron [req-38c1443d-b2d1-4b3b-824c-59f92c1b9290 req-b5b8faed-4612-4b79-9b83-687c8c9d825b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Updated VIF entry in instance network info cache for port 52d803a0-5139-4197-a575-2530583dda13. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:26:17 np0005486808 nova_compute[259627]: 2025-10-14 09:26:17.657 2 DEBUG nova.network.neutron [req-38c1443d-b2d1-4b3b-824c-59f92c1b9290 req-b5b8faed-4612-4b79-9b83-687c8c9d825b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Updating instance_info_cache with network_info: [{"id": "52d803a0-5139-4197-a575-2530583dda13", "address": "fa:16:3e:6a:56:7c", "network": {"id": "0e4b4c3f-9218-4fba-8f93-74ac472b0db0", "bridge": "br-int", "label": "tempest-network-smoke--2018979810", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52d803a0-51", "ovs_interfaceid": "52d803a0-5139-4197-a575-2530583dda13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:26:17 np0005486808 nova_compute[259627]: 2025-10-14 09:26:17.668 2 DEBUG oslo_concurrency.processutils [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:26:17 np0005486808 nova_compute[259627]: 2025-10-14 09:26:17.713 2 DEBUG oslo_concurrency.lockutils [req-38c1443d-b2d1-4b3b-824c-59f92c1b9290 req-b5b8faed-4612-4b79-9b83-687c8c9d825b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-f41def60-a7be-4154-86bc-ef63a639ee94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:26:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:26:18 np0005486808 nova_compute[259627]: 2025-10-14 09:26:18.045 2 DEBUG nova.compute.manager [req-8df171dd-9d99-4592-83a4-5768f160ac58 req-d35698be-5b0f-4cbb-a22a-bfb2b8a7bc95 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Received event network-changed-ea25832f-13d3-41ec-874c-e622d24c912e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:26:18 np0005486808 nova_compute[259627]: 2025-10-14 09:26:18.046 2 DEBUG nova.compute.manager [req-8df171dd-9d99-4592-83a4-5768f160ac58 req-d35698be-5b0f-4cbb-a22a-bfb2b8a7bc95 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Refreshing instance network info cache due to event network-changed-ea25832f-13d3-41ec-874c-e622d24c912e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:26:18 np0005486808 nova_compute[259627]: 2025-10-14 09:26:18.046 2 DEBUG oslo_concurrency.lockutils [req-8df171dd-9d99-4592-83a4-5768f160ac58 req-d35698be-5b0f-4cbb-a22a-bfb2b8a7bc95 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-4310595f-2280-438c-97ca-f2de57527501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:26:18 np0005486808 nova_compute[259627]: 2025-10-14 09:26:18.050 2 DEBUG oslo_concurrency.lockutils [req-8df171dd-9d99-4592-83a4-5768f160ac58 req-d35698be-5b0f-4cbb-a22a-bfb2b8a7bc95 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-4310595f-2280-438c-97ca-f2de57527501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:26:18 np0005486808 nova_compute[259627]: 2025-10-14 09:26:18.051 2 DEBUG nova.network.neutron [req-8df171dd-9d99-4592-83a4-5768f160ac58 req-d35698be-5b0f-4cbb-a22a-bfb2b8a7bc95 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Refreshing network info cache for port ea25832f-13d3-41ec-874c-e622d24c912e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:26:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:26:18 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2662381266' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:26:18 np0005486808 nova_compute[259627]: 2025-10-14 09:26:18.103 2 DEBUG oslo_concurrency.processutils [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:26:18 np0005486808 nova_compute[259627]: 2025-10-14 09:26:18.114 2 DEBUG nova.compute.provider_tree [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:26:18 np0005486808 nova_compute[259627]: 2025-10-14 09:26:18.138 2 DEBUG nova.scheduler.client.report [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:26:18 np0005486808 nova_compute[259627]: 2025-10-14 09:26:18.167 2 DEBUG oslo_concurrency.lockutils [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:26:18 np0005486808 nova_compute[259627]: 2025-10-14 09:26:18.193 2 INFO nova.scheduler.client.report [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Deleted allocations for instance f41def60-a7be-4154-86bc-ef63a639ee94#033[00m
Oct 14 05:26:18 np0005486808 nova_compute[259627]: 2025-10-14 09:26:18.256 2 DEBUG oslo_concurrency.lockutils [None req-8594b6c3-4b44-497d-8799-786396e36b9a 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "f41def60-a7be-4154-86bc-ef63a639ee94" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.232s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:26:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2195: 305 pgs: 305 active+clean; 167 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 74 op/s
Oct 14 05:26:18 np0005486808 nova_compute[259627]: 2025-10-14 09:26:18.420 2 DEBUG nova.compute.manager [req-72a7b175-dc4a-4d3e-b3dc-1928e017cf23 req-a3885386-23ac-4595-8ebc-35c40a070342 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Received event network-vif-plugged-52d803a0-5139-4197-a575-2530583dda13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:26:18 np0005486808 nova_compute[259627]: 2025-10-14 09:26:18.421 2 DEBUG oslo_concurrency.lockutils [req-72a7b175-dc4a-4d3e-b3dc-1928e017cf23 req-a3885386-23ac-4595-8ebc-35c40a070342 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f41def60-a7be-4154-86bc-ef63a639ee94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:26:18 np0005486808 nova_compute[259627]: 2025-10-14 09:26:18.421 2 DEBUG oslo_concurrency.lockutils [req-72a7b175-dc4a-4d3e-b3dc-1928e017cf23 req-a3885386-23ac-4595-8ebc-35c40a070342 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f41def60-a7be-4154-86bc-ef63a639ee94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:26:18 np0005486808 nova_compute[259627]: 2025-10-14 09:26:18.421 2 DEBUG oslo_concurrency.lockutils [req-72a7b175-dc4a-4d3e-b3dc-1928e017cf23 req-a3885386-23ac-4595-8ebc-35c40a070342 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f41def60-a7be-4154-86bc-ef63a639ee94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:26:18 np0005486808 nova_compute[259627]: 2025-10-14 09:26:18.421 2 DEBUG nova.compute.manager [req-72a7b175-dc4a-4d3e-b3dc-1928e017cf23 req-a3885386-23ac-4595-8ebc-35c40a070342 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] No waiting events found dispatching network-vif-plugged-52d803a0-5139-4197-a575-2530583dda13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:26:18 np0005486808 nova_compute[259627]: 2025-10-14 09:26:18.421 2 WARNING nova.compute.manager [req-72a7b175-dc4a-4d3e-b3dc-1928e017cf23 req-a3885386-23ac-4595-8ebc-35c40a070342 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Received unexpected event network-vif-plugged-52d803a0-5139-4197-a575-2530583dda13 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:26:19 np0005486808 nova_compute[259627]: 2025-10-14 09:26:19.224 2 DEBUG nova.network.neutron [req-8df171dd-9d99-4592-83a4-5768f160ac58 req-d35698be-5b0f-4cbb-a22a-bfb2b8a7bc95 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Updated VIF entry in instance network info cache for port ea25832f-13d3-41ec-874c-e622d24c912e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:26:19 np0005486808 nova_compute[259627]: 2025-10-14 09:26:19.224 2 DEBUG nova.network.neutron [req-8df171dd-9d99-4592-83a4-5768f160ac58 req-d35698be-5b0f-4cbb-a22a-bfb2b8a7bc95 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Updating instance_info_cache with network_info: [{"id": "ea25832f-13d3-41ec-874c-e622d24c912e", "address": "fa:16:3e:fe:45:3f", "network": {"id": "69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c", "bridge": "br-int", "label": "tempest-network-smoke--1422018789", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea25832f-13", "ovs_interfaceid": "ea25832f-13d3-41ec-874c-e622d24c912e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:26:19 np0005486808 nova_compute[259627]: 2025-10-14 09:26:19.249 2 DEBUG oslo_concurrency.lockutils [req-8df171dd-9d99-4592-83a4-5768f160ac58 req-d35698be-5b0f-4cbb-a22a-bfb2b8a7bc95 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-4310595f-2280-438c-97ca-f2de57527501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:26:19 np0005486808 nova_compute[259627]: 2025-10-14 09:26:19.249 2 DEBUG nova.compute.manager [req-8df171dd-9d99-4592-83a4-5768f160ac58 req-d35698be-5b0f-4cbb-a22a-bfb2b8a7bc95 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Received event network-vif-deleted-52d803a0-5139-4197-a575-2530583dda13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:26:19 np0005486808 nova_compute[259627]: 2025-10-14 09:26:19.250 2 INFO nova.compute.manager [req-8df171dd-9d99-4592-83a4-5768f160ac58 req-d35698be-5b0f-4cbb-a22a-bfb2b8a7bc95 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Neutron deleted interface 52d803a0-5139-4197-a575-2530583dda13; detaching it from the instance and deleting it from the info cache#033[00m
Oct 14 05:26:19 np0005486808 nova_compute[259627]: 2025-10-14 09:26:19.250 2 DEBUG nova.network.neutron [req-8df171dd-9d99-4592-83a4-5768f160ac58 req-d35698be-5b0f-4cbb-a22a-bfb2b8a7bc95 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:26:19 np0005486808 nova_compute[259627]: 2025-10-14 09:26:19.274 2 DEBUG nova.compute.manager [req-8df171dd-9d99-4592-83a4-5768f160ac58 req-d35698be-5b0f-4cbb-a22a-bfb2b8a7bc95 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Detach interface failed, port_id=52d803a0-5139-4197-a575-2530583dda13, reason: Instance f41def60-a7be-4154-86bc-ef63a639ee94 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct 14 05:26:19 np0005486808 podman[389285]: 2025-10-14 09:26:19.668465217 +0000 UTC m=+0.070912884 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 14 05:26:19 np0005486808 podman[389284]: 2025-10-14 09:26:19.735640867 +0000 UTC m=+0.134476484 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:26:19 np0005486808 nova_compute[259627]: 2025-10-14 09:26:19.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2196: 305 pgs: 305 active+clean; 145 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 80 op/s
Oct 14 05:26:21 np0005486808 nova_compute[259627]: 2025-10-14 09:26:21.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:22 np0005486808 ovn_controller[152662]: 2025-10-14T09:26:22Z|01350|binding|INFO|Releasing lport a300b10f-f6fd-47ab-bc03-160d747e5ac0 from this chassis (sb_readonly=0)
Oct 14 05:26:22 np0005486808 nova_compute[259627]: 2025-10-14 09:26:22.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2197: 305 pgs: 305 active+clean; 88 MiB data, 870 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 7.6 KiB/s wr, 98 op/s
Oct 14 05:26:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:26:23 np0005486808 ovn_controller[152662]: 2025-10-14T09:26:23Z|00150|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fe:45:3f 10.100.0.12
Oct 14 05:26:23 np0005486808 ovn_controller[152662]: 2025-10-14T09:26:23Z|00151|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fe:45:3f 10.100.0.12
Oct 14 05:26:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2198: 305 pgs: 305 active+clean; 88 MiB data, 870 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 6.5 KiB/s wr, 96 op/s
Oct 14 05:26:24 np0005486808 nova_compute[259627]: 2025-10-14 09:26:24.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:26:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:26:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:26:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:26:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:26:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:26:25 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 5706d6d7-ffaf-4478-b036-7da8e9e245e7 does not exist
Oct 14 05:26:25 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev c4b8f133-1fc2-4fdd-bbd9-c2a37bc5fd7b does not exist
Oct 14 05:26:25 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev f7b1d075-aa45-4329-ae5b-85e17246b8a1 does not exist
Oct 14 05:26:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:26:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:26:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:26:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:26:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:26:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:26:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2199: 305 pgs: 305 active+clean; 121 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 161 op/s
Oct 14 05:26:26 np0005486808 nova_compute[259627]: 2025-10-14 09:26:26.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:26 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:26:26 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:26:26 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:26:26 np0005486808 podman[389600]: 2025-10-14 09:26:26.730877805 +0000 UTC m=+0.059458870 container create 96f1daee704f99a9b0c76bd952d5796e8ea3ef88cbfd945686d635c04d7323e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 14 05:26:26 np0005486808 systemd[1]: Started libpod-conmon-96f1daee704f99a9b0c76bd952d5796e8ea3ef88cbfd945686d635c04d7323e3.scope.
Oct 14 05:26:26 np0005486808 podman[389600]: 2025-10-14 09:26:26.697420018 +0000 UTC m=+0.026001143 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:26:26 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:26:26 np0005486808 podman[389600]: 2025-10-14 09:26:26.842735119 +0000 UTC m=+0.171316244 container init 96f1daee704f99a9b0c76bd952d5796e8ea3ef88cbfd945686d635c04d7323e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_black, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 14 05:26:26 np0005486808 podman[389600]: 2025-10-14 09:26:26.852932281 +0000 UTC m=+0.181513346 container start 96f1daee704f99a9b0c76bd952d5796e8ea3ef88cbfd945686d635c04d7323e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_black, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:26:26 np0005486808 podman[389600]: 2025-10-14 09:26:26.857087254 +0000 UTC m=+0.185668319 container attach 96f1daee704f99a9b0c76bd952d5796e8ea3ef88cbfd945686d635c04d7323e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_black, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 05:26:26 np0005486808 determined_black[389617]: 167 167
Oct 14 05:26:26 np0005486808 systemd[1]: libpod-96f1daee704f99a9b0c76bd952d5796e8ea3ef88cbfd945686d635c04d7323e3.scope: Deactivated successfully.
Oct 14 05:26:26 np0005486808 conmon[389617]: conmon 96f1daee704f99a9b0c7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-96f1daee704f99a9b0c76bd952d5796e8ea3ef88cbfd945686d635c04d7323e3.scope/container/memory.events
Oct 14 05:26:26 np0005486808 podman[389600]: 2025-10-14 09:26:26.866229609 +0000 UTC m=+0.194810664 container died 96f1daee704f99a9b0c76bd952d5796e8ea3ef88cbfd945686d635c04d7323e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:26:26 np0005486808 systemd[1]: var-lib-containers-storage-overlay-21e75694929f4930f869bb1d59ac7d938764fe23e3a226f496bc4fb5e2b709d6-merged.mount: Deactivated successfully.
Oct 14 05:26:26 np0005486808 podman[389600]: 2025-10-14 09:26:26.916091092 +0000 UTC m=+0.244672137 container remove 96f1daee704f99a9b0c76bd952d5796e8ea3ef88cbfd945686d635c04d7323e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_black, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:26:26 np0005486808 systemd[1]: libpod-conmon-96f1daee704f99a9b0c76bd952d5796e8ea3ef88cbfd945686d635c04d7323e3.scope: Deactivated successfully.
Oct 14 05:26:27 np0005486808 podman[389640]: 2025-10-14 09:26:27.125136317 +0000 UTC m=+0.060778493 container create d3f27b87d4c66602259fa9c578b8d1bacd091cb94a75e73c5c79213d46aaf65a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_maxwell, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 14 05:26:27 np0005486808 systemd[1]: Started libpod-conmon-d3f27b87d4c66602259fa9c578b8d1bacd091cb94a75e73c5c79213d46aaf65a.scope.
Oct 14 05:26:27 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:26:27 np0005486808 podman[389640]: 2025-10-14 09:26:27.10340758 +0000 UTC m=+0.039049786 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:26:27 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e80123e806aab5f3c2caf6f59d150a0e3288de95ea98e000aa58ea7a91090b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:26:27 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e80123e806aab5f3c2caf6f59d150a0e3288de95ea98e000aa58ea7a91090b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:26:27 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e80123e806aab5f3c2caf6f59d150a0e3288de95ea98e000aa58ea7a91090b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:26:27 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e80123e806aab5f3c2caf6f59d150a0e3288de95ea98e000aa58ea7a91090b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:26:27 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e80123e806aab5f3c2caf6f59d150a0e3288de95ea98e000aa58ea7a91090b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:26:27 np0005486808 podman[389640]: 2025-10-14 09:26:27.216258499 +0000 UTC m=+0.151900715 container init d3f27b87d4c66602259fa9c578b8d1bacd091cb94a75e73c5c79213d46aaf65a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_maxwell, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:26:27 np0005486808 podman[389640]: 2025-10-14 09:26:27.225029515 +0000 UTC m=+0.160671691 container start d3f27b87d4c66602259fa9c578b8d1bacd091cb94a75e73c5c79213d46aaf65a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 14 05:26:27 np0005486808 podman[389640]: 2025-10-14 09:26:27.228511481 +0000 UTC m=+0.164153677 container attach d3f27b87d4c66602259fa9c578b8d1bacd091cb94a75e73c5c79213d46aaf65a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_maxwell, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 14 05:26:27 np0005486808 nova_compute[259627]: 2025-10-14 09:26:27.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:26:28 np0005486808 cool_maxwell[389657]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:26:28 np0005486808 cool_maxwell[389657]: --> relative data size: 1.0
Oct 14 05:26:28 np0005486808 cool_maxwell[389657]: --> All data devices are unavailable
Oct 14 05:26:28 np0005486808 systemd[1]: libpod-d3f27b87d4c66602259fa9c578b8d1bacd091cb94a75e73c5c79213d46aaf65a.scope: Deactivated successfully.
Oct 14 05:26:28 np0005486808 podman[389640]: 2025-10-14 09:26:28.299597927 +0000 UTC m=+1.235240143 container died d3f27b87d4c66602259fa9c578b8d1bacd091cb94a75e73c5c79213d46aaf65a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 14 05:26:28 np0005486808 systemd[1]: libpod-d3f27b87d4c66602259fa9c578b8d1bacd091cb94a75e73c5c79213d46aaf65a.scope: Consumed 1.015s CPU time.
Oct 14 05:26:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2200: 305 pgs: 305 active+clean; 121 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 358 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 14 05:26:28 np0005486808 systemd[1]: var-lib-containers-storage-overlay-63e80123e806aab5f3c2caf6f59d150a0e3288de95ea98e000aa58ea7a91090b-merged.mount: Deactivated successfully.
Oct 14 05:26:28 np0005486808 podman[389640]: 2025-10-14 09:26:28.400498441 +0000 UTC m=+1.336140607 container remove d3f27b87d4c66602259fa9c578b8d1bacd091cb94a75e73c5c79213d46aaf65a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 14 05:26:28 np0005486808 systemd[1]: libpod-conmon-d3f27b87d4c66602259fa9c578b8d1bacd091cb94a75e73c5c79213d46aaf65a.scope: Deactivated successfully.
Oct 14 05:26:29 np0005486808 podman[389842]: 2025-10-14 09:26:29.238885057 +0000 UTC m=+0.057959623 container create 03f9776c30438b3112b3bbaaebfd53e6de83b6c9b7c51abec12bea7097b53448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:26:29 np0005486808 systemd[1]: Started libpod-conmon-03f9776c30438b3112b3bbaaebfd53e6de83b6c9b7c51abec12bea7097b53448.scope.
Oct 14 05:26:29 np0005486808 podman[389842]: 2025-10-14 09:26:29.205146253 +0000 UTC m=+0.024220859 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:26:29 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:26:29 np0005486808 podman[389842]: 2025-10-14 09:26:29.316898405 +0000 UTC m=+0.135972981 container init 03f9776c30438b3112b3bbaaebfd53e6de83b6c9b7c51abec12bea7097b53448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_black, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 14 05:26:29 np0005486808 podman[389842]: 2025-10-14 09:26:29.324174004 +0000 UTC m=+0.143248530 container start 03f9776c30438b3112b3bbaaebfd53e6de83b6c9b7c51abec12bea7097b53448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_black, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 14 05:26:29 np0005486808 nostalgic_black[389858]: 167 167
Oct 14 05:26:29 np0005486808 systemd[1]: libpod-03f9776c30438b3112b3bbaaebfd53e6de83b6c9b7c51abec12bea7097b53448.scope: Deactivated successfully.
Oct 14 05:26:29 np0005486808 podman[389842]: 2025-10-14 09:26:29.32964751 +0000 UTC m=+0.148722076 container attach 03f9776c30438b3112b3bbaaebfd53e6de83b6c9b7c51abec12bea7097b53448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Oct 14 05:26:29 np0005486808 podman[389842]: 2025-10-14 09:26:29.330414369 +0000 UTC m=+0.149488935 container died 03f9776c30438b3112b3bbaaebfd53e6de83b6c9b7c51abec12bea7097b53448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 14 05:26:29 np0005486808 systemd[1]: var-lib-containers-storage-overlay-f320ef68bc6fe12d8308661a06df34a12a238288c7f9c4ae7e7239db58ad4b80-merged.mount: Deactivated successfully.
Oct 14 05:26:29 np0005486808 podman[389842]: 2025-10-14 09:26:29.377692167 +0000 UTC m=+0.196766723 container remove 03f9776c30438b3112b3bbaaebfd53e6de83b6c9b7c51abec12bea7097b53448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_black, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 14 05:26:29 np0005486808 systemd[1]: libpod-conmon-03f9776c30438b3112b3bbaaebfd53e6de83b6c9b7c51abec12bea7097b53448.scope: Deactivated successfully.
Oct 14 05:26:29 np0005486808 podman[389881]: 2025-10-14 09:26:29.593455768 +0000 UTC m=+0.067007967 container create e5cf4ba068e1179d6204d5e7fc506ec5ddf399f1517d7dc90d06b347da07a2a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_easley, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 14 05:26:29 np0005486808 systemd[1]: Started libpod-conmon-e5cf4ba068e1179d6204d5e7fc506ec5ddf399f1517d7dc90d06b347da07a2a4.scope.
Oct 14 05:26:29 np0005486808 podman[389881]: 2025-10-14 09:26:29.567722992 +0000 UTC m=+0.041275251 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:26:29 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:26:29 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a7b28ebbc24ec730957ec7af34211775df17a5012de754cebd2e153d28895f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:26:29 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a7b28ebbc24ec730957ec7af34211775df17a5012de754cebd2e153d28895f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:26:29 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a7b28ebbc24ec730957ec7af34211775df17a5012de754cebd2e153d28895f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:26:29 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a7b28ebbc24ec730957ec7af34211775df17a5012de754cebd2e153d28895f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:26:29 np0005486808 podman[389881]: 2025-10-14 09:26:29.699113769 +0000 UTC m=+0.172665998 container init e5cf4ba068e1179d6204d5e7fc506ec5ddf399f1517d7dc90d06b347da07a2a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 14 05:26:29 np0005486808 podman[389881]: 2025-10-14 09:26:29.708101961 +0000 UTC m=+0.181654130 container start e5cf4ba068e1179d6204d5e7fc506ec5ddf399f1517d7dc90d06b347da07a2a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_easley, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:26:29 np0005486808 podman[389881]: 2025-10-14 09:26:29.712303675 +0000 UTC m=+0.185855924 container attach e5cf4ba068e1179d6204d5e7fc506ec5ddf399f1517d7dc90d06b347da07a2a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_easley, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 14 05:26:29 np0005486808 nova_compute[259627]: 2025-10-14 09:26:29.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2201: 305 pgs: 305 active+clean; 121 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 358 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 14 05:26:30 np0005486808 keen_easley[389897]: {
Oct 14 05:26:30 np0005486808 keen_easley[389897]:    "0": [
Oct 14 05:26:30 np0005486808 keen_easley[389897]:        {
Oct 14 05:26:30 np0005486808 keen_easley[389897]:            "devices": [
Oct 14 05:26:30 np0005486808 keen_easley[389897]:                "/dev/loop3"
Oct 14 05:26:30 np0005486808 keen_easley[389897]:            ],
Oct 14 05:26:30 np0005486808 keen_easley[389897]:            "lv_name": "ceph_lv0",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:            "lv_size": "21470642176",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:            "name": "ceph_lv0",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:            "tags": {
Oct 14 05:26:30 np0005486808 keen_easley[389897]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:                "ceph.cluster_name": "ceph",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:                "ceph.crush_device_class": "",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:                "ceph.encrypted": "0",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:                "ceph.osd_id": "0",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:                "ceph.type": "block",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:                "ceph.vdo": "0"
Oct 14 05:26:30 np0005486808 keen_easley[389897]:            },
Oct 14 05:26:30 np0005486808 keen_easley[389897]:            "type": "block",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:            "vg_name": "ceph_vg0"
Oct 14 05:26:30 np0005486808 keen_easley[389897]:        }
Oct 14 05:26:30 np0005486808 keen_easley[389897]:    ],
Oct 14 05:26:30 np0005486808 keen_easley[389897]:    "1": [
Oct 14 05:26:30 np0005486808 keen_easley[389897]:        {
Oct 14 05:26:30 np0005486808 keen_easley[389897]:            "devices": [
Oct 14 05:26:30 np0005486808 keen_easley[389897]:                "/dev/loop4"
Oct 14 05:26:30 np0005486808 keen_easley[389897]:            ],
Oct 14 05:26:30 np0005486808 keen_easley[389897]:            "lv_name": "ceph_lv1",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:            "lv_size": "21470642176",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:            "name": "ceph_lv1",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:            "tags": {
Oct 14 05:26:30 np0005486808 keen_easley[389897]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:                "ceph.cluster_name": "ceph",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:                "ceph.crush_device_class": "",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:                "ceph.encrypted": "0",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:                "ceph.osd_id": "1",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:                "ceph.type": "block",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:                "ceph.vdo": "0"
Oct 14 05:26:30 np0005486808 keen_easley[389897]:            },
Oct 14 05:26:30 np0005486808 keen_easley[389897]:            "type": "block",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:            "vg_name": "ceph_vg1"
Oct 14 05:26:30 np0005486808 keen_easley[389897]:        }
Oct 14 05:26:30 np0005486808 keen_easley[389897]:    ],
Oct 14 05:26:30 np0005486808 keen_easley[389897]:    "2": [
Oct 14 05:26:30 np0005486808 keen_easley[389897]:        {
Oct 14 05:26:30 np0005486808 keen_easley[389897]:            "devices": [
Oct 14 05:26:30 np0005486808 keen_easley[389897]:                "/dev/loop5"
Oct 14 05:26:30 np0005486808 keen_easley[389897]:            ],
Oct 14 05:26:30 np0005486808 keen_easley[389897]:            "lv_name": "ceph_lv2",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:            "lv_size": "21470642176",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:            "name": "ceph_lv2",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:            "tags": {
Oct 14 05:26:30 np0005486808 keen_easley[389897]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:                "ceph.cluster_name": "ceph",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:                "ceph.crush_device_class": "",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:                "ceph.encrypted": "0",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:                "ceph.osd_id": "2",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:                "ceph.type": "block",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:                "ceph.vdo": "0"
Oct 14 05:26:30 np0005486808 keen_easley[389897]:            },
Oct 14 05:26:30 np0005486808 keen_easley[389897]:            "type": "block",
Oct 14 05:26:30 np0005486808 keen_easley[389897]:            "vg_name": "ceph_vg2"
Oct 14 05:26:30 np0005486808 keen_easley[389897]:        }
Oct 14 05:26:30 np0005486808 keen_easley[389897]:    ]
Oct 14 05:26:30 np0005486808 keen_easley[389897]: }
Oct 14 05:26:30 np0005486808 systemd[1]: libpod-e5cf4ba068e1179d6204d5e7fc506ec5ddf399f1517d7dc90d06b347da07a2a4.scope: Deactivated successfully.
Oct 14 05:26:30 np0005486808 podman[389881]: 2025-10-14 09:26:30.440717513 +0000 UTC m=+0.914269712 container died e5cf4ba068e1179d6204d5e7fc506ec5ddf399f1517d7dc90d06b347da07a2a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_easley, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 14 05:26:30 np0005486808 systemd[1]: var-lib-containers-storage-overlay-4a7b28ebbc24ec730957ec7af34211775df17a5012de754cebd2e153d28895f6-merged.mount: Deactivated successfully.
Oct 14 05:26:30 np0005486808 podman[389881]: 2025-10-14 09:26:30.505082903 +0000 UTC m=+0.978635062 container remove e5cf4ba068e1179d6204d5e7fc506ec5ddf399f1517d7dc90d06b347da07a2a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_easley, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:26:30 np0005486808 systemd[1]: libpod-conmon-e5cf4ba068e1179d6204d5e7fc506ec5ddf399f1517d7dc90d06b347da07a2a4.scope: Deactivated successfully.
Oct 14 05:26:31 np0005486808 podman[390057]: 2025-10-14 09:26:31.224138061 +0000 UTC m=+0.045496205 container create 903e901ee0180871cb39fc6c8acb8ac14781aeead9364fbf538bbd713668ec54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 14 05:26:31 np0005486808 systemd[1]: Started libpod-conmon-903e901ee0180871cb39fc6c8acb8ac14781aeead9364fbf538bbd713668ec54.scope.
Oct 14 05:26:31 np0005486808 nova_compute[259627]: 2025-10-14 09:26:31.278 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760433976.2759538, f41def60-a7be-4154-86bc-ef63a639ee94 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:26:31 np0005486808 nova_compute[259627]: 2025-10-14 09:26:31.280 2 INFO nova.compute.manager [-] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] VM Stopped (Lifecycle Event)
Oct 14 05:26:31 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:26:31 np0005486808 podman[390057]: 2025-10-14 09:26:31.298777245 +0000 UTC m=+0.120135429 container init 903e901ee0180871cb39fc6c8acb8ac14781aeead9364fbf538bbd713668ec54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_antonelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:26:31 np0005486808 podman[390057]: 2025-10-14 09:26:31.2046777 +0000 UTC m=+0.026035894 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:26:31 np0005486808 nova_compute[259627]: 2025-10-14 09:26:31.300 2 DEBUG nova.compute.manager [None req-43748baa-03a5-427a-8826-3f0607419475 - - - - - -] [instance: f41def60-a7be-4154-86bc-ef63a639ee94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:26:31 np0005486808 podman[390057]: 2025-10-14 09:26:31.310425253 +0000 UTC m=+0.131783407 container start 903e901ee0180871cb39fc6c8acb8ac14781aeead9364fbf538bbd713668ec54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 05:26:31 np0005486808 podman[390057]: 2025-10-14 09:26:31.314223197 +0000 UTC m=+0.135581361 container attach 903e901ee0180871cb39fc6c8acb8ac14781aeead9364fbf538bbd713668ec54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:26:31 np0005486808 eager_antonelli[390073]: 167 167
Oct 14 05:26:31 np0005486808 systemd[1]: libpod-903e901ee0180871cb39fc6c8acb8ac14781aeead9364fbf538bbd713668ec54.scope: Deactivated successfully.
Oct 14 05:26:31 np0005486808 podman[390057]: 2025-10-14 09:26:31.315941619 +0000 UTC m=+0.137299763 container died 903e901ee0180871cb39fc6c8acb8ac14781aeead9364fbf538bbd713668ec54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_antonelli, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 14 05:26:31 np0005486808 nova_compute[259627]: 2025-10-14 09:26:31.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:26:31 np0005486808 systemd[1]: var-lib-containers-storage-overlay-dac19be301b6f6428f48fa64978b2e75bb65447bd7a9b5aa6b1f0a740c81d543-merged.mount: Deactivated successfully.
Oct 14 05:26:31 np0005486808 podman[390057]: 2025-10-14 09:26:31.358635034 +0000 UTC m=+0.179993188 container remove 903e901ee0180871cb39fc6c8acb8ac14781aeead9364fbf538bbd713668ec54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_antonelli, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 14 05:26:31 np0005486808 systemd[1]: libpod-conmon-903e901ee0180871cb39fc6c8acb8ac14781aeead9364fbf538bbd713668ec54.scope: Deactivated successfully.
Oct 14 05:26:31 np0005486808 podman[390096]: 2025-10-14 09:26:31.601707901 +0000 UTC m=+0.075836875 container create ac3d3340070fb8322e6b9e7bf709245efd372dd04b9e89b916b1084c3741c2f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mclaren, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 14 05:26:31 np0005486808 systemd[1]: Started libpod-conmon-ac3d3340070fb8322e6b9e7bf709245efd372dd04b9e89b916b1084c3741c2f8.scope.
Oct 14 05:26:31 np0005486808 podman[390096]: 2025-10-14 09:26:31.572446378 +0000 UTC m=+0.046575402 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:26:31 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:26:31 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/785d3e5cb7f499f15bf5e8ca373705f709d72123c74eed4d60af5b36eaab2487/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:26:31 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/785d3e5cb7f499f15bf5e8ca373705f709d72123c74eed4d60af5b36eaab2487/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:26:31 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/785d3e5cb7f499f15bf5e8ca373705f709d72123c74eed4d60af5b36eaab2487/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:26:31 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/785d3e5cb7f499f15bf5e8ca373705f709d72123c74eed4d60af5b36eaab2487/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:26:31 np0005486808 podman[390096]: 2025-10-14 09:26:31.707983197 +0000 UTC m=+0.182112151 container init ac3d3340070fb8322e6b9e7bf709245efd372dd04b9e89b916b1084c3741c2f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:26:31 np0005486808 podman[390096]: 2025-10-14 09:26:31.720528227 +0000 UTC m=+0.194657211 container start ac3d3340070fb8322e6b9e7bf709245efd372dd04b9e89b916b1084c3741c2f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mclaren, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 14 05:26:31 np0005486808 podman[390096]: 2025-10-14 09:26:31.72470505 +0000 UTC m=+0.198833994 container attach ac3d3340070fb8322e6b9e7bf709245efd372dd04b9e89b916b1084c3741c2f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mclaren, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 14 05:26:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2202: 305 pgs: 305 active+clean; 121 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 356 KiB/s rd, 2.1 MiB/s wr, 87 op/s
Oct 14 05:26:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:26:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:26:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:26:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:26:32 np0005486808 modest_mclaren[390112]: {
Oct 14 05:26:32 np0005486808 modest_mclaren[390112]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:26:32 np0005486808 modest_mclaren[390112]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:26:32 np0005486808 modest_mclaren[390112]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:26:32 np0005486808 modest_mclaren[390112]:        "osd_id": 2,
Oct 14 05:26:32 np0005486808 modest_mclaren[390112]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:26:32 np0005486808 modest_mclaren[390112]:        "type": "bluestore"
Oct 14 05:26:32 np0005486808 modest_mclaren[390112]:    },
Oct 14 05:26:32 np0005486808 modest_mclaren[390112]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:26:32 np0005486808 modest_mclaren[390112]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:26:32 np0005486808 modest_mclaren[390112]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:26:32 np0005486808 modest_mclaren[390112]:        "osd_id": 1,
Oct 14 05:26:32 np0005486808 modest_mclaren[390112]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:26:32 np0005486808 modest_mclaren[390112]:        "type": "bluestore"
Oct 14 05:26:32 np0005486808 modest_mclaren[390112]:    },
Oct 14 05:26:32 np0005486808 modest_mclaren[390112]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:26:32 np0005486808 modest_mclaren[390112]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:26:32 np0005486808 modest_mclaren[390112]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:26:32 np0005486808 modest_mclaren[390112]:        "osd_id": 0,
Oct 14 05:26:32 np0005486808 modest_mclaren[390112]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:26:32 np0005486808 modest_mclaren[390112]:        "type": "bluestore"
Oct 14 05:26:32 np0005486808 modest_mclaren[390112]:    }
Oct 14 05:26:32 np0005486808 modest_mclaren[390112]: }
Oct 14 05:26:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:26:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:26:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:26:32
Oct 14 05:26:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:26:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:26:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'images', 'default.rgw.meta', 'backups', '.rgw.root', 'vms', 'default.rgw.control', 'cephfs.cephfs.data']
Oct 14 05:26:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:26:32 np0005486808 systemd[1]: libpod-ac3d3340070fb8322e6b9e7bf709245efd372dd04b9e89b916b1084c3741c2f8.scope: Deactivated successfully.
Oct 14 05:26:32 np0005486808 systemd[1]: libpod-ac3d3340070fb8322e6b9e7bf709245efd372dd04b9e89b916b1084c3741c2f8.scope: Consumed 1.120s CPU time.
Oct 14 05:26:32 np0005486808 conmon[390112]: conmon ac3d3340070fb8322e6b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ac3d3340070fb8322e6b9e7bf709245efd372dd04b9e89b916b1084c3741c2f8.scope/container/memory.events
Oct 14 05:26:32 np0005486808 podman[390096]: 2025-10-14 09:26:32.839271301 +0000 UTC m=+1.313400235 container died ac3d3340070fb8322e6b9e7bf709245efd372dd04b9e89b916b1084c3741c2f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mclaren, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 14 05:26:32 np0005486808 systemd[1]: var-lib-containers-storage-overlay-785d3e5cb7f499f15bf5e8ca373705f709d72123c74eed4d60af5b36eaab2487-merged.mount: Deactivated successfully.
Oct 14 05:26:32 np0005486808 podman[390096]: 2025-10-14 09:26:32.918438668 +0000 UTC m=+1.392567622 container remove ac3d3340070fb8322e6b9e7bf709245efd372dd04b9e89b916b1084c3741c2f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mclaren, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:26:32 np0005486808 systemd[1]: libpod-conmon-ac3d3340070fb8322e6b9e7bf709245efd372dd04b9e89b916b1084c3741c2f8.scope: Deactivated successfully.
Oct 14 05:26:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:26:33 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:26:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:26:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:26:33 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:26:33 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 96edb838-5106-49b9-a085-3621c6e7fe11 does not exist
Oct 14 05:26:33 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 4d7357fd-a2ad-4e04-9a62-cb5a9dcf9ec9 does not exist
Oct 14 05:26:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:26:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:26:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:26:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:26:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:26:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:26:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:26:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:26:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:26:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:26:33 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:26:33 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:26:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2203: 305 pgs: 305 active+clean; 121 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 14 05:26:34 np0005486808 nova_compute[259627]: 2025-10-14 09:26:34.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:34 np0005486808 nova_compute[259627]: 2025-10-14 09:26:34.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:36 np0005486808 nova_compute[259627]: 2025-10-14 09:26:36.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2204: 305 pgs: 305 active+clean; 121 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 14 05:26:37 np0005486808 nova_compute[259627]: 2025-10-14 09:26:37.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:37 np0005486808 podman[390206]: 2025-10-14 09:26:37.714429693 +0000 UTC m=+0.107949258 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 14 05:26:37 np0005486808 podman[390207]: 2025-10-14 09:26:37.735538515 +0000 UTC m=+0.126322792 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 14 05:26:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:26:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2205: 305 pgs: 305 active+clean; 121 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Oct 14 05:26:38 np0005486808 nova_compute[259627]: 2025-10-14 09:26:38.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:26:39 np0005486808 nova_compute[259627]: 2025-10-14 09:26:39.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:39 np0005486808 nova_compute[259627]: 2025-10-14 09:26:39.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:26:39 np0005486808 nova_compute[259627]: 2025-10-14 09:26:39.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:26:40 np0005486808 nova_compute[259627]: 2025-10-14 09:26:40.004 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:26:40 np0005486808 nova_compute[259627]: 2025-10-14 09:26:40.004 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:26:40 np0005486808 nova_compute[259627]: 2025-10-14 09:26:40.005 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:26:40 np0005486808 nova_compute[259627]: 2025-10-14 09:26:40.005 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:26:40 np0005486808 nova_compute[259627]: 2025-10-14 09:26:40.005 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:26:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2206: 305 pgs: 305 active+clean; 121 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Oct 14 05:26:40 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:26:40 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4044762309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:26:40 np0005486808 nova_compute[259627]: 2025-10-14 09:26:40.470 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:26:40 np0005486808 nova_compute[259627]: 2025-10-14 09:26:40.555 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:26:40 np0005486808 nova_compute[259627]: 2025-10-14 09:26:40.555 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:26:40 np0005486808 nova_compute[259627]: 2025-10-14 09:26:40.714 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:26:40 np0005486808 nova_compute[259627]: 2025-10-14 09:26:40.716 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3464MB free_disk=59.942752838134766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:26:40 np0005486808 nova_compute[259627]: 2025-10-14 09:26:40.716 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:26:40 np0005486808 nova_compute[259627]: 2025-10-14 09:26:40.716 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:26:40 np0005486808 nova_compute[259627]: 2025-10-14 09:26:40.837 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 4310595f-2280-438c-97ca-f2de57527501 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:26:40 np0005486808 nova_compute[259627]: 2025-10-14 09:26:40.838 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:26:40 np0005486808 nova_compute[259627]: 2025-10-14 09:26:40.838 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:26:40 np0005486808 nova_compute[259627]: 2025-10-14 09:26:40.887 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:26:41 np0005486808 nova_compute[259627]: 2025-10-14 09:26:41.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:26:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1840802512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:26:41 np0005486808 nova_compute[259627]: 2025-10-14 09:26:41.359 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:26:41 np0005486808 nova_compute[259627]: 2025-10-14 09:26:41.366 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:26:41 np0005486808 nova_compute[259627]: 2025-10-14 09:26:41.419 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:26:41 np0005486808 nova_compute[259627]: 2025-10-14 09:26:41.445 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:26:41 np0005486808 nova_compute[259627]: 2025-10-14 09:26:41.446 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:26:42 np0005486808 nova_compute[259627]: 2025-10-14 09:26:42.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2207: 305 pgs: 305 active+clean; 121 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 3.3 KiB/s wr, 0 op/s
Oct 14 05:26:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:26:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:26:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:26:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:26:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:26:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007593366420850427 of space, bias 1.0, pg target 0.22780099262551282 quantized to 32 (current 32)
Oct 14 05:26:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:26:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:26:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:26:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:26:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:26:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 05:26:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:26:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:26:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:26:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:26:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:26:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:26:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:26:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:26:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:26:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:26:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:26:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:26:43 np0005486808 nova_compute[259627]: 2025-10-14 09:26:43.446 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:26:43 np0005486808 nova_compute[259627]: 2025-10-14 09:26:43.447 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:26:44 np0005486808 nova_compute[259627]: 2025-10-14 09:26:44.048 2 DEBUG oslo_concurrency.lockutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "2595dec0-9170-4e8f-a6bc-9179d30519a9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:26:44 np0005486808 nova_compute[259627]: 2025-10-14 09:26:44.049 2 DEBUG oslo_concurrency.lockutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:26:44 np0005486808 nova_compute[259627]: 2025-10-14 09:26:44.069 2 DEBUG nova.compute.manager [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:26:44 np0005486808 nova_compute[259627]: 2025-10-14 09:26:44.141 2 DEBUG oslo_concurrency.lockutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:26:44 np0005486808 nova_compute[259627]: 2025-10-14 09:26:44.142 2 DEBUG oslo_concurrency.lockutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:26:44 np0005486808 nova_compute[259627]: 2025-10-14 09:26:44.149 2 DEBUG nova.virt.hardware [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:26:44 np0005486808 nova_compute[259627]: 2025-10-14 09:26:44.150 2 INFO nova.compute.claims [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:26:44 np0005486808 nova_compute[259627]: 2025-10-14 09:26:44.285 2 DEBUG oslo_concurrency.processutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:26:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2208: 305 pgs: 305 active+clean; 121 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 2.0 KiB/s wr, 0 op/s
Oct 14 05:26:44 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:26:44 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1661505640' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:26:44 np0005486808 nova_compute[259627]: 2025-10-14 09:26:44.864 2 DEBUG oslo_concurrency.processutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:26:44 np0005486808 nova_compute[259627]: 2025-10-14 09:26:44.872 2 DEBUG nova.compute.provider_tree [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:26:44 np0005486808 nova_compute[259627]: 2025-10-14 09:26:44.894 2 DEBUG nova.scheduler.client.report [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:26:44 np0005486808 nova_compute[259627]: 2025-10-14 09:26:44.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:44 np0005486808 nova_compute[259627]: 2025-10-14 09:26:44.923 2 DEBUG oslo_concurrency.lockutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:26:44 np0005486808 nova_compute[259627]: 2025-10-14 09:26:44.924 2 DEBUG nova.compute.manager [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:26:44 np0005486808 nova_compute[259627]: 2025-10-14 09:26:44.977 2 DEBUG nova.compute.manager [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:26:44 np0005486808 nova_compute[259627]: 2025-10-14 09:26:44.978 2 DEBUG nova.network.neutron [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:26:44 np0005486808 nova_compute[259627]: 2025-10-14 09:26:44.982 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:26:44 np0005486808 nova_compute[259627]: 2025-10-14 09:26:44.983 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:26:44 np0005486808 nova_compute[259627]: 2025-10-14 09:26:44.999 2 INFO nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:26:45 np0005486808 nova_compute[259627]: 2025-10-14 09:26:45.017 2 DEBUG nova.compute.manager [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:26:45 np0005486808 nova_compute[259627]: 2025-10-14 09:26:45.115 2 DEBUG nova.compute.manager [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:26:45 np0005486808 nova_compute[259627]: 2025-10-14 09:26:45.118 2 DEBUG nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:26:45 np0005486808 nova_compute[259627]: 2025-10-14 09:26:45.119 2 INFO nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Creating image(s)#033[00m
Oct 14 05:26:45 np0005486808 nova_compute[259627]: 2025-10-14 09:26:45.156 2 DEBUG nova.storage.rbd_utils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 2595dec0-9170-4e8f-a6bc-9179d30519a9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:26:45 np0005486808 nova_compute[259627]: 2025-10-14 09:26:45.197 2 DEBUG nova.storage.rbd_utils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 2595dec0-9170-4e8f-a6bc-9179d30519a9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:26:45 np0005486808 nova_compute[259627]: 2025-10-14 09:26:45.235 2 DEBUG nova.storage.rbd_utils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 2595dec0-9170-4e8f-a6bc-9179d30519a9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:26:45 np0005486808 nova_compute[259627]: 2025-10-14 09:26:45.239 2 DEBUG oslo_concurrency.processutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:26:45 np0005486808 nova_compute[259627]: 2025-10-14 09:26:45.308 2 DEBUG nova.policy [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '10926c27278e45e7b995f2e53b9d16f9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4562699bdb1548f1bb36819107535620', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:26:45 np0005486808 nova_compute[259627]: 2025-10-14 09:26:45.358 2 DEBUG oslo_concurrency.processutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:26:45 np0005486808 nova_compute[259627]: 2025-10-14 09:26:45.359 2 DEBUG oslo_concurrency.lockutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:26:45 np0005486808 nova_compute[259627]: 2025-10-14 09:26:45.359 2 DEBUG oslo_concurrency.lockutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:26:45 np0005486808 nova_compute[259627]: 2025-10-14 09:26:45.360 2 DEBUG oslo_concurrency.lockutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:26:45 np0005486808 nova_compute[259627]: 2025-10-14 09:26:45.393 2 DEBUG nova.storage.rbd_utils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 2595dec0-9170-4e8f-a6bc-9179d30519a9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:26:45 np0005486808 nova_compute[259627]: 2025-10-14 09:26:45.399 2 DEBUG oslo_concurrency.processutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 2595dec0-9170-4e8f-a6bc-9179d30519a9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:26:45 np0005486808 nova_compute[259627]: 2025-10-14 09:26:45.693 2 DEBUG oslo_concurrency.processutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 2595dec0-9170-4e8f-a6bc-9179d30519a9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.295s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:26:45 np0005486808 nova_compute[259627]: 2025-10-14 09:26:45.742 2 DEBUG nova.storage.rbd_utils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] resizing rbd image 2595dec0-9170-4e8f-a6bc-9179d30519a9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:26:45 np0005486808 nova_compute[259627]: 2025-10-14 09:26:45.835 2 DEBUG nova.objects.instance [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'migration_context' on Instance uuid 2595dec0-9170-4e8f-a6bc-9179d30519a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:26:45 np0005486808 nova_compute[259627]: 2025-10-14 09:26:45.861 2 DEBUG nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:26:45 np0005486808 nova_compute[259627]: 2025-10-14 09:26:45.861 2 DEBUG nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Ensure instance console log exists: /var/lib/nova/instances/2595dec0-9170-4e8f-a6bc-9179d30519a9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:26:45 np0005486808 nova_compute[259627]: 2025-10-14 09:26:45.862 2 DEBUG oslo_concurrency.lockutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:26:45 np0005486808 nova_compute[259627]: 2025-10-14 09:26:45.862 2 DEBUG oslo_concurrency.lockutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:26:45 np0005486808 nova_compute[259627]: 2025-10-14 09:26:45.863 2 DEBUG oslo_concurrency.lockutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:26:45 np0005486808 nova_compute[259627]: 2025-10-14 09:26:45.981 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:26:45 np0005486808 nova_compute[259627]: 2025-10-14 09:26:45.982 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:26:45 np0005486808 nova_compute[259627]: 2025-10-14 09:26:45.982 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 05:26:46 np0005486808 nova_compute[259627]: 2025-10-14 09:26:46.012 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct 14 05:26:46 np0005486808 nova_compute[259627]: 2025-10-14 09:26:46.213 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-4310595f-2280-438c-97ca-f2de57527501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:26:46 np0005486808 nova_compute[259627]: 2025-10-14 09:26:46.213 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-4310595f-2280-438c-97ca-f2de57527501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:26:46 np0005486808 nova_compute[259627]: 2025-10-14 09:26:46.214 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 4310595f-2280-438c-97ca-f2de57527501] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 14 05:26:46 np0005486808 nova_compute[259627]: 2025-10-14 09:26:46.214 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4310595f-2280-438c-97ca-f2de57527501 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:26:46 np0005486808 nova_compute[259627]: 2025-10-14 09:26:46.268 2 DEBUG nova.network.neutron [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Successfully created port: 9ecc8f01-430d-4714-8d7e-e60d7edaa73c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:26:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2209: 305 pgs: 305 active+clean; 121 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 2.0 KiB/s wr, 0 op/s
Oct 14 05:26:46 np0005486808 nova_compute[259627]: 2025-10-14 09:26:46.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:46 np0005486808 nova_compute[259627]: 2025-10-14 09:26:46.754 2 DEBUG oslo_concurrency.lockutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Acquiring lock "e16af982-3cd8-4600-99c4-aeec45986dda" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:26:46 np0005486808 nova_compute[259627]: 2025-10-14 09:26:46.755 2 DEBUG oslo_concurrency.lockutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Lock "e16af982-3cd8-4600-99c4-aeec45986dda" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:26:46 np0005486808 nova_compute[259627]: 2025-10-14 09:26:46.776 2 DEBUG nova.compute.manager [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:26:46 np0005486808 nova_compute[259627]: 2025-10-14 09:26:46.928 2 DEBUG oslo_concurrency.lockutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:26:46 np0005486808 nova_compute[259627]: 2025-10-14 09:26:46.928 2 DEBUG oslo_concurrency.lockutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:26:46 np0005486808 nova_compute[259627]: 2025-10-14 09:26:46.933 2 DEBUG nova.virt.hardware [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:26:46 np0005486808 nova_compute[259627]: 2025-10-14 09:26:46.934 2 INFO nova.compute.claims [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:26:47 np0005486808 nova_compute[259627]: 2025-10-14 09:26:47.099 2 DEBUG oslo_concurrency.processutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:26:47 np0005486808 nova_compute[259627]: 2025-10-14 09:26:47.393 2 DEBUG nova.network.neutron [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Successfully updated port: 9ecc8f01-430d-4714-8d7e-e60d7edaa73c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:26:47 np0005486808 nova_compute[259627]: 2025-10-14 09:26:47.411 2 DEBUG oslo_concurrency.lockutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "refresh_cache-2595dec0-9170-4e8f-a6bc-9179d30519a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:26:47 np0005486808 nova_compute[259627]: 2025-10-14 09:26:47.411 2 DEBUG oslo_concurrency.lockutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquired lock "refresh_cache-2595dec0-9170-4e8f-a6bc-9179d30519a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:26:47 np0005486808 nova_compute[259627]: 2025-10-14 09:26:47.412 2 DEBUG nova.network.neutron [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:26:47 np0005486808 nova_compute[259627]: 2025-10-14 09:26:47.499 2 DEBUG nova.compute.manager [req-d34cd1ae-0d29-4443-b8e6-1f8bcff80c79 req-cd1476ea-f3e5-4d49-a6a1-8afdcbae599c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received event network-changed-9ecc8f01-430d-4714-8d7e-e60d7edaa73c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:26:47 np0005486808 nova_compute[259627]: 2025-10-14 09:26:47.500 2 DEBUG nova.compute.manager [req-d34cd1ae-0d29-4443-b8e6-1f8bcff80c79 req-cd1476ea-f3e5-4d49-a6a1-8afdcbae599c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Refreshing instance network info cache due to event network-changed-9ecc8f01-430d-4714-8d7e-e60d7edaa73c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:26:47 np0005486808 nova_compute[259627]: 2025-10-14 09:26:47.500 2 DEBUG oslo_concurrency.lockutils [req-d34cd1ae-0d29-4443-b8e6-1f8bcff80c79 req-cd1476ea-f3e5-4d49-a6a1-8afdcbae599c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-2595dec0-9170-4e8f-a6bc-9179d30519a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:26:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:26:47 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1764751085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:26:47 np0005486808 nova_compute[259627]: 2025-10-14 09:26:47.585 2 DEBUG oslo_concurrency.processutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:26:47 np0005486808 nova_compute[259627]: 2025-10-14 09:26:47.594 2 DEBUG nova.compute.provider_tree [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:26:47 np0005486808 nova_compute[259627]: 2025-10-14 09:26:47.664 2 DEBUG nova.network.neutron [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:26:47 np0005486808 nova_compute[259627]: 2025-10-14 09:26:47.752 2 DEBUG nova.scheduler.client.report [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:26:47 np0005486808 nova_compute[259627]: 2025-10-14 09:26:47.786 2 DEBUG oslo_concurrency.lockutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.858s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:26:47 np0005486808 nova_compute[259627]: 2025-10-14 09:26:47.788 2 DEBUG nova.compute.manager [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:26:47 np0005486808 nova_compute[259627]: 2025-10-14 09:26:47.838 2 DEBUG nova.compute.manager [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:26:47 np0005486808 nova_compute[259627]: 2025-10-14 09:26:47.839 2 DEBUG nova.network.neutron [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:26:47 np0005486808 nova_compute[259627]: 2025-10-14 09:26:47.856 2 INFO nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:26:47 np0005486808 nova_compute[259627]: 2025-10-14 09:26:47.873 2 DEBUG nova.compute.manager [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:26:47 np0005486808 nova_compute[259627]: 2025-10-14 09:26:47.969 2 DEBUG nova.compute.manager [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:26:47 np0005486808 nova_compute[259627]: 2025-10-14 09:26:47.972 2 DEBUG nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:26:47 np0005486808 nova_compute[259627]: 2025-10-14 09:26:47.973 2 INFO nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Creating image(s)#033[00m
Oct 14 05:26:48 np0005486808 nova_compute[259627]: 2025-10-14 09:26:48.008 2 DEBUG nova.storage.rbd_utils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] rbd image e16af982-3cd8-4600-99c4-aeec45986dda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:26:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:26:48 np0005486808 nova_compute[259627]: 2025-10-14 09:26:48.033 2 DEBUG nova.storage.rbd_utils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] rbd image e16af982-3cd8-4600-99c4-aeec45986dda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:26:48 np0005486808 nova_compute[259627]: 2025-10-14 09:26:48.056 2 DEBUG nova.storage.rbd_utils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] rbd image e16af982-3cd8-4600-99c4-aeec45986dda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:26:48 np0005486808 nova_compute[259627]: 2025-10-14 09:26:48.060 2 DEBUG oslo_concurrency.processutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:26:48 np0005486808 nova_compute[259627]: 2025-10-14 09:26:48.163 2 DEBUG oslo_concurrency.processutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:26:48 np0005486808 nova_compute[259627]: 2025-10-14 09:26:48.164 2 DEBUG oslo_concurrency.lockutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:26:48 np0005486808 nova_compute[259627]: 2025-10-14 09:26:48.165 2 DEBUG oslo_concurrency.lockutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:26:48 np0005486808 nova_compute[259627]: 2025-10-14 09:26:48.165 2 DEBUG oslo_concurrency.lockutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:26:48 np0005486808 nova_compute[259627]: 2025-10-14 09:26:48.192 2 DEBUG nova.storage.rbd_utils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] rbd image e16af982-3cd8-4600-99c4-aeec45986dda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:26:48 np0005486808 nova_compute[259627]: 2025-10-14 09:26:48.197 2 DEBUG oslo_concurrency.processutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 e16af982-3cd8-4600-99c4-aeec45986dda_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:26:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2210: 305 pgs: 305 active+clean; 121 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Oct 14 05:26:48 np0005486808 nova_compute[259627]: 2025-10-14 09:26:48.506 2 DEBUG oslo_concurrency.processutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 e16af982-3cd8-4600-99c4-aeec45986dda_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.309s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:26:48 np0005486808 nova_compute[259627]: 2025-10-14 09:26:48.592 2 DEBUG nova.storage.rbd_utils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] resizing rbd image e16af982-3cd8-4600-99c4-aeec45986dda_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:26:48 np0005486808 nova_compute[259627]: 2025-10-14 09:26:48.686 2 DEBUG nova.objects.instance [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Lazy-loading 'migration_context' on Instance uuid e16af982-3cd8-4600-99c4-aeec45986dda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:26:48 np0005486808 nova_compute[259627]: 2025-10-14 09:26:48.708 2 DEBUG nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:26:48 np0005486808 nova_compute[259627]: 2025-10-14 09:26:48.709 2 DEBUG nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Ensure instance console log exists: /var/lib/nova/instances/e16af982-3cd8-4600-99c4-aeec45986dda/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:26:48 np0005486808 nova_compute[259627]: 2025-10-14 09:26:48.710 2 DEBUG oslo_concurrency.lockutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:26:48 np0005486808 nova_compute[259627]: 2025-10-14 09:26:48.710 2 DEBUG oslo_concurrency.lockutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:26:48 np0005486808 nova_compute[259627]: 2025-10-14 09:26:48.710 2 DEBUG oslo_concurrency.lockutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:26:48 np0005486808 nova_compute[259627]: 2025-10-14 09:26:48.823 2 DEBUG nova.policy [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a250f9c11f864fb49faf97cbb4399ece', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '261a0f5c61f04b77863377f034e70f01', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:26:48 np0005486808 nova_compute[259627]: 2025-10-14 09:26:48.840 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 4310595f-2280-438c-97ca-f2de57527501] Updating instance_info_cache with network_info: [{"id": "ea25832f-13d3-41ec-874c-e622d24c912e", "address": "fa:16:3e:fe:45:3f", "network": {"id": "69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c", "bridge": "br-int", "label": "tempest-network-smoke--1422018789", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea25832f-13", "ovs_interfaceid": "ea25832f-13d3-41ec-874c-e622d24c912e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:26:48 np0005486808 nova_compute[259627]: 2025-10-14 09:26:48.857 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-4310595f-2280-438c-97ca-f2de57527501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:26:48 np0005486808 nova_compute[259627]: 2025-10-14 09:26:48.857 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 4310595f-2280-438c-97ca-f2de57527501] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 14 05:26:49 np0005486808 nova_compute[259627]: 2025-10-14 09:26:49.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2211: 305 pgs: 305 active+clean; 177 MiB data, 921 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 2.3 MiB/s wr, 26 op/s
Oct 14 05:26:50 np0005486808 podman[390667]: 2025-10-14 09:26:50.684829394 +0000 UTC m=+0.078955912 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 14 05:26:50 np0005486808 podman[390666]: 2025-10-14 09:26:50.747182594 +0000 UTC m=+0.144607964 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 14 05:26:50 np0005486808 nova_compute[259627]: 2025-10-14 09:26:50.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:26:51 np0005486808 nova_compute[259627]: 2025-10-14 09:26:51.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:52 np0005486808 nova_compute[259627]: 2025-10-14 09:26:52.204 2 DEBUG nova.network.neutron [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Updating instance_info_cache with network_info: [{"id": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "address": "fa:16:3e:50:8f:64", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ecc8f01-43", "ovs_interfaceid": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:26:52 np0005486808 nova_compute[259627]: 2025-10-14 09:26:52.231 2 DEBUG oslo_concurrency.lockutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Releasing lock "refresh_cache-2595dec0-9170-4e8f-a6bc-9179d30519a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:26:52 np0005486808 nova_compute[259627]: 2025-10-14 09:26:52.231 2 DEBUG nova.compute.manager [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Instance network_info: |[{"id": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "address": "fa:16:3e:50:8f:64", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ecc8f01-43", "ovs_interfaceid": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:26:52 np0005486808 nova_compute[259627]: 2025-10-14 09:26:52.232 2 DEBUG oslo_concurrency.lockutils [req-d34cd1ae-0d29-4443-b8e6-1f8bcff80c79 req-cd1476ea-f3e5-4d49-a6a1-8afdcbae599c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-2595dec0-9170-4e8f-a6bc-9179d30519a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:26:52 np0005486808 nova_compute[259627]: 2025-10-14 09:26:52.233 2 DEBUG nova.network.neutron [req-d34cd1ae-0d29-4443-b8e6-1f8bcff80c79 req-cd1476ea-f3e5-4d49-a6a1-8afdcbae599c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Refreshing network info cache for port 9ecc8f01-430d-4714-8d7e-e60d7edaa73c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:26:52 np0005486808 nova_compute[259627]: 2025-10-14 09:26:52.239 2 DEBUG nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Start _get_guest_xml network_info=[{"id": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "address": "fa:16:3e:50:8f:64", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ecc8f01-43", "ovs_interfaceid": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:26:52 np0005486808 nova_compute[259627]: 2025-10-14 09:26:52.246 2 WARNING nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:26:52 np0005486808 nova_compute[259627]: 2025-10-14 09:26:52.260 2 DEBUG nova.virt.libvirt.host [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:26:52 np0005486808 nova_compute[259627]: 2025-10-14 09:26:52.261 2 DEBUG nova.virt.libvirt.host [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:26:52 np0005486808 nova_compute[259627]: 2025-10-14 09:26:52.269 2 DEBUG nova.virt.libvirt.host [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:26:52 np0005486808 nova_compute[259627]: 2025-10-14 09:26:52.270 2 DEBUG nova.virt.libvirt.host [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:26:52 np0005486808 nova_compute[259627]: 2025-10-14 09:26:52.271 2 DEBUG nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:26:52 np0005486808 nova_compute[259627]: 2025-10-14 09:26:52.271 2 DEBUG nova.virt.hardware [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:26:52 np0005486808 nova_compute[259627]: 2025-10-14 09:26:52.272 2 DEBUG nova.virt.hardware [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:26:52 np0005486808 nova_compute[259627]: 2025-10-14 09:26:52.273 2 DEBUG nova.virt.hardware [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:26:52 np0005486808 nova_compute[259627]: 2025-10-14 09:26:52.273 2 DEBUG nova.virt.hardware [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:26:52 np0005486808 nova_compute[259627]: 2025-10-14 09:26:52.274 2 DEBUG nova.virt.hardware [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:26:52 np0005486808 nova_compute[259627]: 2025-10-14 09:26:52.274 2 DEBUG nova.virt.hardware [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:26:52 np0005486808 nova_compute[259627]: 2025-10-14 09:26:52.275 2 DEBUG nova.virt.hardware [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:26:52 np0005486808 nova_compute[259627]: 2025-10-14 09:26:52.276 2 DEBUG nova.virt.hardware [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:26:52 np0005486808 nova_compute[259627]: 2025-10-14 09:26:52.276 2 DEBUG nova.virt.hardware [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:26:52 np0005486808 nova_compute[259627]: 2025-10-14 09:26:52.277 2 DEBUG nova.virt.hardware [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:26:52 np0005486808 nova_compute[259627]: 2025-10-14 09:26:52.277 2 DEBUG nova.virt.hardware [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:26:52 np0005486808 nova_compute[259627]: 2025-10-14 09:26:52.283 2 DEBUG oslo_concurrency.processutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:26:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2212: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 14 05:26:52 np0005486808 nova_compute[259627]: 2025-10-14 09:26:52.418 2 DEBUG nova.network.neutron [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Successfully created port: 563493a8-f727-4a25-97de-548a04398264 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:26:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:26:52 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2694355942' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:26:52 np0005486808 nova_compute[259627]: 2025-10-14 09:26:52.844 2 DEBUG oslo_concurrency.processutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:26:52 np0005486808 nova_compute[259627]: 2025-10-14 09:26:52.869 2 DEBUG nova.storage.rbd_utils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 2595dec0-9170-4e8f-a6bc-9179d30519a9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:26:52 np0005486808 nova_compute[259627]: 2025-10-14 09:26:52.874 2 DEBUG oslo_concurrency.processutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:26:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:26:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:26:53 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3348820056' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:26:53 np0005486808 nova_compute[259627]: 2025-10-14 09:26:53.362 2 DEBUG oslo_concurrency.processutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:26:53 np0005486808 nova_compute[259627]: 2025-10-14 09:26:53.364 2 DEBUG nova.virt.libvirt.vif [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:26:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-358564178',display_name='tempest-TestNetworkBasicOps-server-358564178',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-358564178',id=127,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHcVlyHKVFHxb0hriNyI1hppvpwNJ/aTRlLE7dBDeajB0uM5sP+bnasOk+ko2DL77CLK3QbWVr/+3RKN6o4h1D1BJ0FS9znP9UgUkNA33oyzkv3sPnYQc7bgh/xOganUMg==',key_name='tempest-TestNetworkBasicOps-1632567993',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-nbahr6ql',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:26:45Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=2595dec0-9170-4e8f-a6bc-9179d30519a9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "address": "fa:16:3e:50:8f:64", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ecc8f01-43", "ovs_interfaceid": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:26:53 np0005486808 nova_compute[259627]: 2025-10-14 09:26:53.364 2 DEBUG nova.network.os_vif_util [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "address": "fa:16:3e:50:8f:64", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ecc8f01-43", "ovs_interfaceid": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:26:53 np0005486808 nova_compute[259627]: 2025-10-14 09:26:53.365 2 DEBUG nova.network.os_vif_util [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:50:8f:64,bridge_name='br-int',has_traffic_filtering=True,id=9ecc8f01-430d-4714-8d7e-e60d7edaa73c,network=Network(44344b65-f325-470a-bd36-6f52ed03d317),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ecc8f01-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:26:53 np0005486808 nova_compute[259627]: 2025-10-14 09:26:53.366 2 DEBUG nova.objects.instance [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2595dec0-9170-4e8f-a6bc-9179d30519a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:26:53 np0005486808 nova_compute[259627]: 2025-10-14 09:26:53.384 2 DEBUG nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:26:53 np0005486808 nova_compute[259627]:  <uuid>2595dec0-9170-4e8f-a6bc-9179d30519a9</uuid>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:  <name>instance-0000007f</name>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:26:53 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:      <nova:name>tempest-TestNetworkBasicOps-server-358564178</nova:name>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:26:52</nova:creationTime>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:26:53 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:        <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:        <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:        <nova:port uuid="9ecc8f01-430d-4714-8d7e-e60d7edaa73c">
Oct 14 05:26:53 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:      <entry name="serial">2595dec0-9170-4e8f-a6bc-9179d30519a9</entry>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:      <entry name="uuid">2595dec0-9170-4e8f-a6bc-9179d30519a9</entry>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:26:53 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/2595dec0-9170-4e8f-a6bc-9179d30519a9_disk">
Oct 14 05:26:53 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:26:53 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:26:53 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/2595dec0-9170-4e8f-a6bc-9179d30519a9_disk.config">
Oct 14 05:26:53 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:26:53 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:26:53 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:50:8f:64"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:      <target dev="tap9ecc8f01-43"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:26:53 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/2595dec0-9170-4e8f-a6bc-9179d30519a9/console.log" append="off"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:26:53 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:26:53 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:26:53 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:26:53 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:26:53 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:26:53 np0005486808 nova_compute[259627]: 2025-10-14 09:26:53.385 2 DEBUG nova.compute.manager [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Preparing to wait for external event network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:26:53 np0005486808 nova_compute[259627]: 2025-10-14 09:26:53.385 2 DEBUG oslo_concurrency.lockutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:26:53 np0005486808 nova_compute[259627]: 2025-10-14 09:26:53.386 2 DEBUG oslo_concurrency.lockutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:26:53 np0005486808 nova_compute[259627]: 2025-10-14 09:26:53.386 2 DEBUG oslo_concurrency.lockutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:26:53 np0005486808 nova_compute[259627]: 2025-10-14 09:26:53.386 2 DEBUG nova.virt.libvirt.vif [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:26:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-358564178',display_name='tempest-TestNetworkBasicOps-server-358564178',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-358564178',id=127,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHcVlyHKVFHxb0hriNyI1hppvpwNJ/aTRlLE7dBDeajB0uM5sP+bnasOk+ko2DL77CLK3QbWVr/+3RKN6o4h1D1BJ0FS9znP9UgUkNA33oyzkv3sPnYQc7bgh/xOganUMg==',key_name='tempest-TestNetworkBasicOps-1632567993',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-nbahr6ql',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:26:45Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=2595dec0-9170-4e8f-a6bc-9179d30519a9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "address": "fa:16:3e:50:8f:64", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ecc8f01-43", "ovs_interfaceid": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:26:53 np0005486808 nova_compute[259627]: 2025-10-14 09:26:53.387 2 DEBUG nova.network.os_vif_util [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "address": "fa:16:3e:50:8f:64", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ecc8f01-43", "ovs_interfaceid": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:26:53 np0005486808 nova_compute[259627]: 2025-10-14 09:26:53.387 2 DEBUG nova.network.os_vif_util [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:50:8f:64,bridge_name='br-int',has_traffic_filtering=True,id=9ecc8f01-430d-4714-8d7e-e60d7edaa73c,network=Network(44344b65-f325-470a-bd36-6f52ed03d317),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ecc8f01-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:26:53 np0005486808 nova_compute[259627]: 2025-10-14 09:26:53.388 2 DEBUG os_vif [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:8f:64,bridge_name='br-int',has_traffic_filtering=True,id=9ecc8f01-430d-4714-8d7e-e60d7edaa73c,network=Network(44344b65-f325-470a-bd36-6f52ed03d317),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ecc8f01-43') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:26:53 np0005486808 nova_compute[259627]: 2025-10-14 09:26:53.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:53 np0005486808 nova_compute[259627]: 2025-10-14 09:26:53.389 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:26:53 np0005486808 nova_compute[259627]: 2025-10-14 09:26:53.389 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:26:53 np0005486808 nova_compute[259627]: 2025-10-14 09:26:53.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:53 np0005486808 nova_compute[259627]: 2025-10-14 09:26:53.392 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9ecc8f01-43, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:26:53 np0005486808 nova_compute[259627]: 2025-10-14 09:26:53.393 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9ecc8f01-43, col_values=(('external_ids', {'iface-id': '9ecc8f01-430d-4714-8d7e-e60d7edaa73c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:50:8f:64', 'vm-uuid': '2595dec0-9170-4e8f-a6bc-9179d30519a9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:26:53 np0005486808 nova_compute[259627]: 2025-10-14 09:26:53.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:53 np0005486808 NetworkManager[44885]: <info>  [1760434013.4432] manager: (tap9ecc8f01-43): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/552)
Oct 14 05:26:53 np0005486808 nova_compute[259627]: 2025-10-14 09:26:53.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:26:53 np0005486808 nova_compute[259627]: 2025-10-14 09:26:53.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:53 np0005486808 nova_compute[259627]: 2025-10-14 09:26:53.453 2 INFO os_vif [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:8f:64,bridge_name='br-int',has_traffic_filtering=True,id=9ecc8f01-430d-4714-8d7e-e60d7edaa73c,network=Network(44344b65-f325-470a-bd36-6f52ed03d317),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ecc8f01-43')#033[00m
Oct 14 05:26:53 np0005486808 nova_compute[259627]: 2025-10-14 09:26:53.505 2 DEBUG nova.network.neutron [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Successfully updated port: 563493a8-f727-4a25-97de-548a04398264 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:26:53 np0005486808 nova_compute[259627]: 2025-10-14 09:26:53.508 2 DEBUG nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:26:53 np0005486808 nova_compute[259627]: 2025-10-14 09:26:53.509 2 DEBUG nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:26:53 np0005486808 nova_compute[259627]: 2025-10-14 09:26:53.509 2 DEBUG nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No VIF found with MAC fa:16:3e:50:8f:64, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:26:53 np0005486808 nova_compute[259627]: 2025-10-14 09:26:53.509 2 INFO nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Using config drive#033[00m
Oct 14 05:26:53 np0005486808 nova_compute[259627]: 2025-10-14 09:26:53.531 2 DEBUG nova.storage.rbd_utils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 2595dec0-9170-4e8f-a6bc-9179d30519a9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:26:53 np0005486808 nova_compute[259627]: 2025-10-14 09:26:53.541 2 DEBUG oslo_concurrency.lockutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Acquiring lock "refresh_cache-e16af982-3cd8-4600-99c4-aeec45986dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:26:53 np0005486808 nova_compute[259627]: 2025-10-14 09:26:53.541 2 DEBUG oslo_concurrency.lockutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Acquired lock "refresh_cache-e16af982-3cd8-4600-99c4-aeec45986dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:26:53 np0005486808 nova_compute[259627]: 2025-10-14 09:26:53.542 2 DEBUG nova.network.neutron [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:26:53 np0005486808 nova_compute[259627]: 2025-10-14 09:26:53.744 2 DEBUG nova.network.neutron [req-d34cd1ae-0d29-4443-b8e6-1f8bcff80c79 req-cd1476ea-f3e5-4d49-a6a1-8afdcbae599c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Updated VIF entry in instance network info cache for port 9ecc8f01-430d-4714-8d7e-e60d7edaa73c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:26:53 np0005486808 nova_compute[259627]: 2025-10-14 09:26:53.745 2 DEBUG nova.network.neutron [req-d34cd1ae-0d29-4443-b8e6-1f8bcff80c79 req-cd1476ea-f3e5-4d49-a6a1-8afdcbae599c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Updating instance_info_cache with network_info: [{"id": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "address": "fa:16:3e:50:8f:64", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ecc8f01-43", "ovs_interfaceid": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:26:53 np0005486808 nova_compute[259627]: 2025-10-14 09:26:53.758 2 DEBUG nova.network.neutron [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:26:53 np0005486808 nova_compute[259627]: 2025-10-14 09:26:53.762 2 DEBUG oslo_concurrency.lockutils [req-d34cd1ae-0d29-4443-b8e6-1f8bcff80c79 req-cd1476ea-f3e5-4d49-a6a1-8afdcbae599c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-2595dec0-9170-4e8f-a6bc-9179d30519a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:26:53 np0005486808 nova_compute[259627]: 2025-10-14 09:26:53.860 2 INFO nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Creating config drive at /var/lib/nova/instances/2595dec0-9170-4e8f-a6bc-9179d30519a9/disk.config#033[00m
Oct 14 05:26:53 np0005486808 nova_compute[259627]: 2025-10-14 09:26:53.866 2 DEBUG oslo_concurrency.processutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2595dec0-9170-4e8f-a6bc-9179d30519a9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplkwk9554 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:26:53 np0005486808 nova_compute[259627]: 2025-10-14 09:26:53.934 2 DEBUG nova.compute.manager [req-d689539d-b74f-4bf9-bd5a-c63b00bfb149 req-8267b5b1-b282-45ae-bd52-c9a1f10587ba 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Received event network-changed-563493a8-f727-4a25-97de-548a04398264 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:26:53 np0005486808 nova_compute[259627]: 2025-10-14 09:26:53.935 2 DEBUG nova.compute.manager [req-d689539d-b74f-4bf9-bd5a-c63b00bfb149 req-8267b5b1-b282-45ae-bd52-c9a1f10587ba 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Refreshing instance network info cache due to event network-changed-563493a8-f727-4a25-97de-548a04398264. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:26:53 np0005486808 nova_compute[259627]: 2025-10-14 09:26:53.936 2 DEBUG oslo_concurrency.lockutils [req-d689539d-b74f-4bf9-bd5a-c63b00bfb149 req-8267b5b1-b282-45ae-bd52-c9a1f10587ba 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e16af982-3cd8-4600-99c4-aeec45986dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:26:53 np0005486808 nova_compute[259627]: 2025-10-14 09:26:53.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:26:54 np0005486808 nova_compute[259627]: 2025-10-14 09:26:54.034 2 DEBUG oslo_concurrency.processutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2595dec0-9170-4e8f-a6bc-9179d30519a9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplkwk9554" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:26:54 np0005486808 nova_compute[259627]: 2025-10-14 09:26:54.075 2 DEBUG nova.storage.rbd_utils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 2595dec0-9170-4e8f-a6bc-9179d30519a9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:26:54 np0005486808 nova_compute[259627]: 2025-10-14 09:26:54.081 2 DEBUG oslo_concurrency.processutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2595dec0-9170-4e8f-a6bc-9179d30519a9/disk.config 2595dec0-9170-4e8f-a6bc-9179d30519a9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:26:54 np0005486808 nova_compute[259627]: 2025-10-14 09:26:54.285 2 DEBUG oslo_concurrency.processutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2595dec0-9170-4e8f-a6bc-9179d30519a9/disk.config 2595dec0-9170-4e8f-a6bc-9179d30519a9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:26:54 np0005486808 nova_compute[259627]: 2025-10-14 09:26:54.287 2 INFO nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Deleting local config drive /var/lib/nova/instances/2595dec0-9170-4e8f-a6bc-9179d30519a9/disk.config because it was imported into RBD.#033[00m
Oct 14 05:26:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2213: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 14 05:26:54 np0005486808 kernel: tap9ecc8f01-43: entered promiscuous mode
Oct 14 05:26:54 np0005486808 NetworkManager[44885]: <info>  [1760434014.3656] manager: (tap9ecc8f01-43): new Tun device (/org/freedesktop/NetworkManager/Devices/553)
Oct 14 05:26:54 np0005486808 ovn_controller[152662]: 2025-10-14T09:26:54Z|01351|binding|INFO|Claiming lport 9ecc8f01-430d-4714-8d7e-e60d7edaa73c for this chassis.
Oct 14 05:26:54 np0005486808 ovn_controller[152662]: 2025-10-14T09:26:54Z|01352|binding|INFO|9ecc8f01-430d-4714-8d7e-e60d7edaa73c: Claiming fa:16:3e:50:8f:64 10.100.0.3
Oct 14 05:26:54 np0005486808 nova_compute[259627]: 2025-10-14 09:26:54.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.378 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:8f:64 10.100.0.3'], port_security=['fa:16:3e:50:8f:64 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '2595dec0-9170-4e8f-a6bc-9179d30519a9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-44344b65-f325-470a-bd36-6f52ed03d317', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8e192907-665a-4f92-bc1f-6ecbfbe8292b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5dceacdb-1a50-419e-9ee9-f149e7094b34, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=9ecc8f01-430d-4714-8d7e-e60d7edaa73c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.380 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 9ecc8f01-430d-4714-8d7e-e60d7edaa73c in datapath 44344b65-f325-470a-bd36-6f52ed03d317 bound to our chassis#033[00m
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.382 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 44344b65-f325-470a-bd36-6f52ed03d317#033[00m
Oct 14 05:26:54 np0005486808 ovn_controller[152662]: 2025-10-14T09:26:54Z|01353|binding|INFO|Setting lport 9ecc8f01-430d-4714-8d7e-e60d7edaa73c ovn-installed in OVS
Oct 14 05:26:54 np0005486808 ovn_controller[152662]: 2025-10-14T09:26:54Z|01354|binding|INFO|Setting lport 9ecc8f01-430d-4714-8d7e-e60d7edaa73c up in Southbound
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.399 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cf925185-b958-40aa-a237-a69cdd70526d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.400 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap44344b65-f1 in ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.402 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap44344b65-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.402 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2b2233d6-af71-4ca3-9aff-2819c2d36314]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:54 np0005486808 nova_compute[259627]: 2025-10-14 09:26:54.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.406 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[633b5405-ce9a-4de5-a209-dd99b3b630bc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:54 np0005486808 nova_compute[259627]: 2025-10-14 09:26:54.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:54 np0005486808 systemd-machined[214636]: New machine qemu-160-instance-0000007f.
Oct 14 05:26:54 np0005486808 systemd-udevd[390845]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.429 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[71069dc2-eb44-4bdd-a992-968ebd23a7c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:54 np0005486808 systemd[1]: Started Virtual Machine qemu-160-instance-0000007f.
Oct 14 05:26:54 np0005486808 NetworkManager[44885]: <info>  [1760434014.4427] device (tap9ecc8f01-43): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:26:54 np0005486808 NetworkManager[44885]: <info>  [1760434014.4445] device (tap9ecc8f01-43): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.460 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8eca1783-e28b-46e6-8745-79d00c14446c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.499 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[2b9e9b71-bb36-4006-bfc7-1342b4a81239]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.507 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[88074195-1718-4bae-ab33-bec8998498bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:54 np0005486808 NetworkManager[44885]: <info>  [1760434014.5090] manager: (tap44344b65-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/554)
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.557 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d4c58a12-7702-46a2-ab64-859590dd4fb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.560 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[00a1b721-995e-4b6d-9fa0-fb61e233b571]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:54 np0005486808 NetworkManager[44885]: <info>  [1760434014.5971] device (tap44344b65-f0): carrier: link connected
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.605 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[854935db-6a96-43c0-a8ad-d5c48fd6f949]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.630 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e9a5ad0b-c412-4f8a-9466-7db0af579d8b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap44344b65-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:40:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 388], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 779335, 'reachable_time': 36403, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 390876, 'error': None, 'target': 'ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.651 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6e67e97a-a53e-422e-8fd4-85e470f7a6a6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb1:4026'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 779335, 'tstamp': 779335}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 390877, 'error': None, 'target': 'ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.676 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5b2ce37f-5970-4eee-b1aa-534943ba823a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap44344b65-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:40:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 388], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 779335, 'reachable_time': 36403, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 390878, 'error': None, 'target': 'ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.714 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0bacf69e-48e5-4a4b-a601-4f44e64fc80e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.806 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b5a46c82-4f60-4a45-9620-77cde25f87c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.808 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap44344b65-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.809 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.810 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap44344b65-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:26:54 np0005486808 kernel: tap44344b65-f0: entered promiscuous mode
Oct 14 05:26:54 np0005486808 nova_compute[259627]: 2025-10-14 09:26:54.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:54 np0005486808 NetworkManager[44885]: <info>  [1760434014.8135] manager: (tap44344b65-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/555)
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.817 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap44344b65-f0, col_values=(('external_ids', {'iface-id': 'dabdfa0b-267a-4754-a026-601ab2593a32'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:26:54 np0005486808 nova_compute[259627]: 2025-10-14 09:26:54.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:54 np0005486808 ovn_controller[152662]: 2025-10-14T09:26:54Z|01355|binding|INFO|Releasing lport dabdfa0b-267a-4754-a026-601ab2593a32 from this chassis (sb_readonly=0)
Oct 14 05:26:54 np0005486808 nova_compute[259627]: 2025-10-14 09:26:54.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.851 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/44344b65-f325-470a-bd36-6f52ed03d317.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/44344b65-f325-470a-bd36-6f52ed03d317.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.852 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[efc51777-122d-43ca-a09e-2c3d8df2cc6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.853 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-44344b65-f325-470a-bd36-6f52ed03d317
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/44344b65-f325-470a-bd36-6f52ed03d317.pid.haproxy
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 44344b65-f325-470a-bd36-6f52ed03d317
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:26:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:54.854 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317', 'env', 'PROCESS_TAG=haproxy-44344b65-f325-470a-bd36-6f52ed03d317', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/44344b65-f325-470a-bd36-6f52ed03d317.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:26:54 np0005486808 nova_compute[259627]: 2025-10-14 09:26:54.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:55 np0005486808 nova_compute[259627]: 2025-10-14 09:26:55.302 2 DEBUG nova.network.neutron [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Updating instance_info_cache with network_info: [{"id": "563493a8-f727-4a25-97de-548a04398264", "address": "fa:16:3e:8d:ee:f5", "network": {"id": "d00461c7-a787-45ae-8db1-11ba8f94e301", "bridge": "br-int", "label": "tempest-network-smoke--529638516", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "261a0f5c61f04b77863377f034e70f01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap563493a8-f7", "ovs_interfaceid": "563493a8-f727-4a25-97de-548a04398264", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:26:55 np0005486808 podman[390952]: 2025-10-14 09:26:55.325666665 +0000 UTC m=+0.081213948 container create 5fdd23863c0b7f91542b5912dc88c90cdcb240dffa368e5f06ac1f336316bc03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 14 05:26:55 np0005486808 nova_compute[259627]: 2025-10-14 09:26:55.335 2 DEBUG oslo_concurrency.lockutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Releasing lock "refresh_cache-e16af982-3cd8-4600-99c4-aeec45986dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:26:55 np0005486808 nova_compute[259627]: 2025-10-14 09:26:55.336 2 DEBUG nova.compute.manager [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Instance network_info: |[{"id": "563493a8-f727-4a25-97de-548a04398264", "address": "fa:16:3e:8d:ee:f5", "network": {"id": "d00461c7-a787-45ae-8db1-11ba8f94e301", "bridge": "br-int", "label": "tempest-network-smoke--529638516", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "261a0f5c61f04b77863377f034e70f01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap563493a8-f7", "ovs_interfaceid": "563493a8-f727-4a25-97de-548a04398264", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:26:55 np0005486808 nova_compute[259627]: 2025-10-14 09:26:55.337 2 DEBUG oslo_concurrency.lockutils [req-d689539d-b74f-4bf9-bd5a-c63b00bfb149 req-8267b5b1-b282-45ae-bd52-c9a1f10587ba 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e16af982-3cd8-4600-99c4-aeec45986dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:26:55 np0005486808 nova_compute[259627]: 2025-10-14 09:26:55.338 2 DEBUG nova.network.neutron [req-d689539d-b74f-4bf9-bd5a-c63b00bfb149 req-8267b5b1-b282-45ae-bd52-c9a1f10587ba 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Refreshing network info cache for port 563493a8-f727-4a25-97de-548a04398264 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:26:55 np0005486808 nova_compute[259627]: 2025-10-14 09:26:55.341 2 DEBUG nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Start _get_guest_xml network_info=[{"id": "563493a8-f727-4a25-97de-548a04398264", "address": "fa:16:3e:8d:ee:f5", "network": {"id": "d00461c7-a787-45ae-8db1-11ba8f94e301", "bridge": "br-int", "label": "tempest-network-smoke--529638516", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "261a0f5c61f04b77863377f034e70f01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap563493a8-f7", "ovs_interfaceid": "563493a8-f727-4a25-97de-548a04398264", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:26:55 np0005486808 nova_compute[259627]: 2025-10-14 09:26:55.346 2 WARNING nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:26:55 np0005486808 nova_compute[259627]: 2025-10-14 09:26:55.351 2 DEBUG nova.virt.libvirt.host [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:26:55 np0005486808 nova_compute[259627]: 2025-10-14 09:26:55.352 2 DEBUG nova.virt.libvirt.host [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:26:55 np0005486808 nova_compute[259627]: 2025-10-14 09:26:55.361 2 DEBUG nova.virt.libvirt.host [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:26:55 np0005486808 nova_compute[259627]: 2025-10-14 09:26:55.362 2 DEBUG nova.virt.libvirt.host [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:26:55 np0005486808 nova_compute[259627]: 2025-10-14 09:26:55.363 2 DEBUG nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:26:55 np0005486808 nova_compute[259627]: 2025-10-14 09:26:55.363 2 DEBUG nova.virt.hardware [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:26:55 np0005486808 nova_compute[259627]: 2025-10-14 09:26:55.364 2 DEBUG nova.virt.hardware [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:26:55 np0005486808 nova_compute[259627]: 2025-10-14 09:26:55.364 2 DEBUG nova.virt.hardware [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:26:55 np0005486808 nova_compute[259627]: 2025-10-14 09:26:55.365 2 DEBUG nova.virt.hardware [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:26:55 np0005486808 nova_compute[259627]: 2025-10-14 09:26:55.365 2 DEBUG nova.virt.hardware [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:26:55 np0005486808 nova_compute[259627]: 2025-10-14 09:26:55.365 2 DEBUG nova.virt.hardware [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:26:55 np0005486808 nova_compute[259627]: 2025-10-14 09:26:55.366 2 DEBUG nova.virt.hardware [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:26:55 np0005486808 nova_compute[259627]: 2025-10-14 09:26:55.366 2 DEBUG nova.virt.hardware [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:26:55 np0005486808 nova_compute[259627]: 2025-10-14 09:26:55.367 2 DEBUG nova.virt.hardware [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:26:55 np0005486808 nova_compute[259627]: 2025-10-14 09:26:55.367 2 DEBUG nova.virt.hardware [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:26:55 np0005486808 nova_compute[259627]: 2025-10-14 09:26:55.367 2 DEBUG nova.virt.hardware [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:26:55 np0005486808 nova_compute[259627]: 2025-10-14 09:26:55.371 2 DEBUG oslo_concurrency.processutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:26:55 np0005486808 podman[390952]: 2025-10-14 09:26:55.278859198 +0000 UTC m=+0.034406571 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:26:55 np0005486808 systemd[1]: Started libpod-conmon-5fdd23863c0b7f91542b5912dc88c90cdcb240dffa368e5f06ac1f336316bc03.scope.
Oct 14 05:26:55 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:26:55 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d2b1df47e436b2c8cb55d0e32cd763db11840ac3a9a7beaf43710acbfe9e885/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:26:55 np0005486808 podman[390952]: 2025-10-14 09:26:55.428482975 +0000 UTC m=+0.184030288 container init 5fdd23863c0b7f91542b5912dc88c90cdcb240dffa368e5f06ac1f336316bc03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 14 05:26:55 np0005486808 podman[390952]: 2025-10-14 09:26:55.441728412 +0000 UTC m=+0.197275695 container start 5fdd23863c0b7f91542b5912dc88c90cdcb240dffa368e5f06ac1f336316bc03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:26:55 np0005486808 neutron-haproxy-ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317[390967]: [NOTICE]   (390972) : New worker (390974) forked
Oct 14 05:26:55 np0005486808 neutron-haproxy-ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317[390967]: [NOTICE]   (390972) : Loading success.
Oct 14 05:26:55 np0005486808 nova_compute[259627]: 2025-10-14 09:26:55.555 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434015.555475, 2595dec0-9170-4e8f-a6bc-9179d30519a9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:26:55 np0005486808 nova_compute[259627]: 2025-10-14 09:26:55.557 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] VM Started (Lifecycle Event)#033[00m
Oct 14 05:26:55 np0005486808 nova_compute[259627]: 2025-10-14 09:26:55.581 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:26:55 np0005486808 nova_compute[259627]: 2025-10-14 09:26:55.585 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434015.5557122, 2595dec0-9170-4e8f-a6bc-9179d30519a9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:26:55 np0005486808 nova_compute[259627]: 2025-10-14 09:26:55.585 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:26:55 np0005486808 nova_compute[259627]: 2025-10-14 09:26:55.600 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:26:55 np0005486808 nova_compute[259627]: 2025-10-14 09:26:55.603 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:26:55 np0005486808 nova_compute[259627]: 2025-10-14 09:26:55.621 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:26:55 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:26:55 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1543092115' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:26:55 np0005486808 nova_compute[259627]: 2025-10-14 09:26:55.822 2 DEBUG oslo_concurrency.processutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:26:55 np0005486808 nova_compute[259627]: 2025-10-14 09:26:55.848 2 DEBUG nova.storage.rbd_utils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] rbd image e16af982-3cd8-4600-99c4-aeec45986dda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:26:55 np0005486808 nova_compute[259627]: 2025-10-14 09:26:55.854 2 DEBUG oslo_concurrency.processutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.060 2 DEBUG nova.compute.manager [req-2decabdd-29db-419b-a45b-446fd37f80ea req-1bafa3ca-fbfd-4f7b-8a4e-70ea792f6ba9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received event network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.062 2 DEBUG oslo_concurrency.lockutils [req-2decabdd-29db-419b-a45b-446fd37f80ea req-1bafa3ca-fbfd-4f7b-8a4e-70ea792f6ba9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.064 2 DEBUG oslo_concurrency.lockutils [req-2decabdd-29db-419b-a45b-446fd37f80ea req-1bafa3ca-fbfd-4f7b-8a4e-70ea792f6ba9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.064 2 DEBUG oslo_concurrency.lockutils [req-2decabdd-29db-419b-a45b-446fd37f80ea req-1bafa3ca-fbfd-4f7b-8a4e-70ea792f6ba9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.065 2 DEBUG nova.compute.manager [req-2decabdd-29db-419b-a45b-446fd37f80ea req-1bafa3ca-fbfd-4f7b-8a4e-70ea792f6ba9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Processing event network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.065 2 DEBUG nova.compute.manager [req-2decabdd-29db-419b-a45b-446fd37f80ea req-1bafa3ca-fbfd-4f7b-8a4e-70ea792f6ba9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received event network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.065 2 DEBUG oslo_concurrency.lockutils [req-2decabdd-29db-419b-a45b-446fd37f80ea req-1bafa3ca-fbfd-4f7b-8a4e-70ea792f6ba9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.066 2 DEBUG oslo_concurrency.lockutils [req-2decabdd-29db-419b-a45b-446fd37f80ea req-1bafa3ca-fbfd-4f7b-8a4e-70ea792f6ba9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.066 2 DEBUG oslo_concurrency.lockutils [req-2decabdd-29db-419b-a45b-446fd37f80ea req-1bafa3ca-fbfd-4f7b-8a4e-70ea792f6ba9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.067 2 DEBUG nova.compute.manager [req-2decabdd-29db-419b-a45b-446fd37f80ea req-1bafa3ca-fbfd-4f7b-8a4e-70ea792f6ba9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] No waiting events found dispatching network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.067 2 WARNING nova.compute.manager [req-2decabdd-29db-419b-a45b-446fd37f80ea req-1bafa3ca-fbfd-4f7b-8a4e-70ea792f6ba9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received unexpected event network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c for instance with vm_state building and task_state spawning.#033[00m
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.068 2 DEBUG nova.compute.manager [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.073 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434016.0731921, 2595dec0-9170-4e8f-a6bc-9179d30519a9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.074 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.076 2 DEBUG nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.081 2 INFO nova.virt.libvirt.driver [-] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Instance spawned successfully.#033[00m
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.082 2 DEBUG nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.112 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.122 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.129 2 DEBUG nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.130 2 DEBUG nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.131 2 DEBUG nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.131 2 DEBUG nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.132 2 DEBUG nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.133 2 DEBUG nova.virt.libvirt.driver [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.190 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.254 2 INFO nova.compute.manager [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Took 11.14 seconds to spawn the instance on the hypervisor.
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.255 2 DEBUG nova.compute.manager [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:26:56 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:26:56 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2707962533' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.278 2 DEBUG oslo_concurrency.processutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.280 2 DEBUG nova.virt.libvirt.vif [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:26:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-211623791-access_point-467670451',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-211623791-access_point-467670451',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-211623791-acc',id=128,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB7KjYB8TOdxJDBX1/D4wnlTYCVpAUwpAx9R+OpzTEM4NvI2eIFjA8TxpNcbv510O1pJN+wHpAyy5P1FA9H3PUquH/ijntIYtfRD9NCYpHRGTApBy/zfs0NfRBu1//Wkcw==',key_name='tempest-TestSecurityGroupsBasicOps-887135051',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='261a0f5c61f04b77863377f034e70f01',ramdisk_id='',reservation_id='r-95hbfz6l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-211623791',owner_user_name='tempest-TestSecurityGroupsBasicOps-211623791-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:26:47Z,user_data=None,user_id='a250f9c11f864fb49faf97cbb4399ece',uuid=e16af982-3cd8-4600-99c4-aeec45986dda,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "563493a8-f727-4a25-97de-548a04398264", "address": "fa:16:3e:8d:ee:f5", "network": {"id": "d00461c7-a787-45ae-8db1-11ba8f94e301", "bridge": "br-int", "label": "tempest-network-smoke--529638516", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "261a0f5c61f04b77863377f034e70f01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap563493a8-f7", "ovs_interfaceid": "563493a8-f727-4a25-97de-548a04398264", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.280 2 DEBUG nova.network.os_vif_util [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Converting VIF {"id": "563493a8-f727-4a25-97de-548a04398264", "address": "fa:16:3e:8d:ee:f5", "network": {"id": "d00461c7-a787-45ae-8db1-11ba8f94e301", "bridge": "br-int", "label": "tempest-network-smoke--529638516", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "261a0f5c61f04b77863377f034e70f01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap563493a8-f7", "ovs_interfaceid": "563493a8-f727-4a25-97de-548a04398264", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.282 2 DEBUG nova.network.os_vif_util [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:ee:f5,bridge_name='br-int',has_traffic_filtering=True,id=563493a8-f727-4a25-97de-548a04398264,network=Network(d00461c7-a787-45ae-8db1-11ba8f94e301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap563493a8-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.284 2 DEBUG nova.objects.instance [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Lazy-loading 'pci_devices' on Instance uuid e16af982-3cd8-4600-99c4-aeec45986dda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.310 2 DEBUG nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:26:56 np0005486808 nova_compute[259627]:  <uuid>e16af982-3cd8-4600-99c4-aeec45986dda</uuid>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:  <name>instance-00000080</name>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:26:56 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-211623791-access_point-467670451</nova:name>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:26:55</nova:creationTime>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:26:56 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:        <nova:user uuid="a250f9c11f864fb49faf97cbb4399ece">tempest-TestSecurityGroupsBasicOps-211623791-project-member</nova:user>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:        <nova:project uuid="261a0f5c61f04b77863377f034e70f01">tempest-TestSecurityGroupsBasicOps-211623791</nova:project>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:        <nova:port uuid="563493a8-f727-4a25-97de-548a04398264">
Oct 14 05:26:56 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:      <entry name="serial">e16af982-3cd8-4600-99c4-aeec45986dda</entry>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:      <entry name="uuid">e16af982-3cd8-4600-99c4-aeec45986dda</entry>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:26:56 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/e16af982-3cd8-4600-99c4-aeec45986dda_disk">
Oct 14 05:26:56 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:26:56 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:26:56 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/e16af982-3cd8-4600-99c4-aeec45986dda_disk.config">
Oct 14 05:26:56 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:26:56 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:26:56 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:8d:ee:f5"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:      <target dev="tap563493a8-f7"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:26:56 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/e16af982-3cd8-4600-99c4-aeec45986dda/console.log" append="off"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:26:56 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:26:56 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:26:56 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:26:56 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:26:56 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.313 2 DEBUG nova.compute.manager [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Preparing to wait for external event network-vif-plugged-563493a8-f727-4a25-97de-548a04398264 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.313 2 DEBUG oslo_concurrency.lockutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Acquiring lock "e16af982-3cd8-4600-99c4-aeec45986dda-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.314 2 DEBUG oslo_concurrency.lockutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Lock "e16af982-3cd8-4600-99c4-aeec45986dda-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.314 2 DEBUG oslo_concurrency.lockutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Lock "e16af982-3cd8-4600-99c4-aeec45986dda-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.316 2 DEBUG nova.virt.libvirt.vif [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:26:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-211623791-access_point-467670451',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-211623791-access_point-467670451',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-211623791-acc',id=128,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB7KjYB8TOdxJDBX1/D4wnlTYCVpAUwpAx9R+OpzTEM4NvI2eIFjA8TxpNcbv510O1pJN+wHpAyy5P1FA9H3PUquH/ijntIYtfRD9NCYpHRGTApBy/zfs0NfRBu1//Wkcw==',key_name='tempest-TestSecurityGroupsBasicOps-887135051',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='261a0f5c61f04b77863377f034e70f01',ramdisk_id='',reservation_id='r-95hbfz6l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-211623791',owner_user_name='tempest-TestSecurityGroupsBasicOps-211623791-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:26:47Z,user_data=None,user_id='a250f9c11f864fb49faf97cbb4399ece',uuid=e16af982-3cd8-4600-99c4-aeec45986dda,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "563493a8-f727-4a25-97de-548a04398264", "address": "fa:16:3e:8d:ee:f5", "network": {"id": "d00461c7-a787-45ae-8db1-11ba8f94e301", "bridge": "br-int", "label": "tempest-network-smoke--529638516", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "261a0f5c61f04b77863377f034e70f01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap563493a8-f7", "ovs_interfaceid": "563493a8-f727-4a25-97de-548a04398264", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.317 2 DEBUG nova.network.os_vif_util [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Converting VIF {"id": "563493a8-f727-4a25-97de-548a04398264", "address": "fa:16:3e:8d:ee:f5", "network": {"id": "d00461c7-a787-45ae-8db1-11ba8f94e301", "bridge": "br-int", "label": "tempest-network-smoke--529638516", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "261a0f5c61f04b77863377f034e70f01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap563493a8-f7", "ovs_interfaceid": "563493a8-f727-4a25-97de-548a04398264", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.318 2 DEBUG nova.network.os_vif_util [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:ee:f5,bridge_name='br-int',has_traffic_filtering=True,id=563493a8-f727-4a25-97de-548a04398264,network=Network(d00461c7-a787-45ae-8db1-11ba8f94e301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap563493a8-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.319 2 DEBUG os_vif [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:ee:f5,bridge_name='br-int',has_traffic_filtering=True,id=563493a8-f727-4a25-97de-548a04398264,network=Network(d00461c7-a787-45ae-8db1-11ba8f94e301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap563493a8-f7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.321 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.322 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 05:26:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2214: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 3.6 MiB/s wr, 64 op/s
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.335 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap563493a8-f7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.336 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap563493a8-f7, col_values=(('external_ids', {'iface-id': '563493a8-f727-4a25-97de-548a04398264', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8d:ee:f5', 'vm-uuid': 'e16af982-3cd8-4600-99c4-aeec45986dda'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:26:56 np0005486808 NetworkManager[44885]: <info>  [1760434016.3397] manager: (tap563493a8-f7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/556)
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.340 2 INFO nova.compute.manager [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Took 12.22 seconds to build instance.
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.346 2 INFO os_vif [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:ee:f5,bridge_name='br-int',has_traffic_filtering=True,id=563493a8-f727-4a25-97de-548a04398264,network=Network(d00461c7-a787-45ae-8db1-11ba8f94e301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap563493a8-f7')
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.356 2 DEBUG oslo_concurrency.lockutils [None req-eedd2c0b-6fc5-4521-b31b-4b894017f3a7 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.307s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.409 2 DEBUG nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.410 2 DEBUG nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.410 2 DEBUG nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] No VIF found with MAC fa:16:3e:8d:ee:f5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.411 2 INFO nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Using config drive
Oct 14 05:26:56 np0005486808 nova_compute[259627]: 2025-10-14 09:26:56.436 2 DEBUG nova.storage.rbd_utils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] rbd image e16af982-3cd8-4600-99c4-aeec45986dda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:26:57 np0005486808 nova_compute[259627]: 2025-10-14 09:26:57.028 2 INFO nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Creating config drive at /var/lib/nova/instances/e16af982-3cd8-4600-99c4-aeec45986dda/disk.config
Oct 14 05:26:57 np0005486808 nova_compute[259627]: 2025-10-14 09:26:57.037 2 DEBUG oslo_concurrency.processutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e16af982-3cd8-4600-99c4-aeec45986dda/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyfpu45sa execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:26:57 np0005486808 nova_compute[259627]: 2025-10-14 09:26:57.201 2 DEBUG oslo_concurrency.processutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e16af982-3cd8-4600-99c4-aeec45986dda/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyfpu45sa" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:26:57 np0005486808 nova_compute[259627]: 2025-10-14 09:26:57.248 2 DEBUG nova.storage.rbd_utils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] rbd image e16af982-3cd8-4600-99c4-aeec45986dda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:26:57 np0005486808 nova_compute[259627]: 2025-10-14 09:26:57.255 2 DEBUG oslo_concurrency.processutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e16af982-3cd8-4600-99c4-aeec45986dda/disk.config e16af982-3cd8-4600-99c4-aeec45986dda_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:26:57 np0005486808 nova_compute[259627]: 2025-10-14 09:26:57.340 2 DEBUG nova.network.neutron [req-d689539d-b74f-4bf9-bd5a-c63b00bfb149 req-8267b5b1-b282-45ae-bd52-c9a1f10587ba 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Updated VIF entry in instance network info cache for port 563493a8-f727-4a25-97de-548a04398264. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:26:57 np0005486808 nova_compute[259627]: 2025-10-14 09:26:57.341 2 DEBUG nova.network.neutron [req-d689539d-b74f-4bf9-bd5a-c63b00bfb149 req-8267b5b1-b282-45ae-bd52-c9a1f10587ba 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Updating instance_info_cache with network_info: [{"id": "563493a8-f727-4a25-97de-548a04398264", "address": "fa:16:3e:8d:ee:f5", "network": {"id": "d00461c7-a787-45ae-8db1-11ba8f94e301", "bridge": "br-int", "label": "tempest-network-smoke--529638516", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "261a0f5c61f04b77863377f034e70f01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap563493a8-f7", "ovs_interfaceid": "563493a8-f727-4a25-97de-548a04398264", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:26:57 np0005486808 nova_compute[259627]: 2025-10-14 09:26:57.358 2 DEBUG oslo_concurrency.lockutils [req-d689539d-b74f-4bf9-bd5a-c63b00bfb149 req-8267b5b1-b282-45ae-bd52-c9a1f10587ba 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-e16af982-3cd8-4600-99c4-aeec45986dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:26:57 np0005486808 nova_compute[259627]: 2025-10-14 09:26:57.450 2 DEBUG oslo_concurrency.processutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e16af982-3cd8-4600-99c4-aeec45986dda/disk.config e16af982-3cd8-4600-99c4-aeec45986dda_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:26:57 np0005486808 nova_compute[259627]: 2025-10-14 09:26:57.450 2 INFO nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Deleting local config drive /var/lib/nova/instances/e16af982-3cd8-4600-99c4-aeec45986dda/disk.config because it was imported into RBD.#033[00m
Oct 14 05:26:57 np0005486808 kernel: tap563493a8-f7: entered promiscuous mode
Oct 14 05:26:57 np0005486808 NetworkManager[44885]: <info>  [1760434017.4971] manager: (tap563493a8-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/557)
Oct 14 05:26:57 np0005486808 nova_compute[259627]: 2025-10-14 09:26:57.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:57 np0005486808 ovn_controller[152662]: 2025-10-14T09:26:57Z|01356|binding|INFO|Claiming lport 563493a8-f727-4a25-97de-548a04398264 for this chassis.
Oct 14 05:26:57 np0005486808 ovn_controller[152662]: 2025-10-14T09:26:57Z|01357|binding|INFO|563493a8-f727-4a25-97de-548a04398264: Claiming fa:16:3e:8d:ee:f5 10.100.0.8
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.508 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:ee:f5 10.100.0.8'], port_security=['fa:16:3e:8d:ee:f5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e16af982-3cd8-4600-99c4-aeec45986dda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d00461c7-a787-45ae-8db1-11ba8f94e301', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '261a0f5c61f04b77863377f034e70f01', 'neutron:revision_number': '2', 'neutron:security_group_ids': '662e95f2-6ec1-4f95-993e-421550aa8e7c 6cefb658-a903-42cb-a2fb-84294f9eff2b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=789ab81e-5f79-4555-8f72-bf440f2a44f6, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=563493a8-f727-4a25-97de-548a04398264) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.510 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 563493a8-f727-4a25-97de-548a04398264 in datapath d00461c7-a787-45ae-8db1-11ba8f94e301 bound to our chassis#033[00m
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.511 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d00461c7-a787-45ae-8db1-11ba8f94e301#033[00m
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.523 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[85b0aef0-2bc7-4130-a2ff-92bceb7df645]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.523 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd00461c7-a1 in ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.527 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd00461c7-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.527 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[46b578cf-5a43-4be4-ba2c-08526ca8072a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.528 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2085ce47-fc73-42b9-9f20-e6b6f52db8d2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:57 np0005486808 systemd-machined[214636]: New machine qemu-161-instance-00000080.
Oct 14 05:26:57 np0005486808 ovn_controller[152662]: 2025-10-14T09:26:57Z|01358|binding|INFO|Setting lport 563493a8-f727-4a25-97de-548a04398264 ovn-installed in OVS
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.541 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[5585c2c1-5a00-46a6-9d8b-4ab8c040f006]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:57 np0005486808 ovn_controller[152662]: 2025-10-14T09:26:57Z|01359|binding|INFO|Setting lport 563493a8-f727-4a25-97de-548a04398264 up in Southbound
Oct 14 05:26:57 np0005486808 nova_compute[259627]: 2025-10-14 09:26:57.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:57 np0005486808 nova_compute[259627]: 2025-10-14 09:26:57.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:57 np0005486808 systemd[1]: Started Virtual Machine qemu-161-instance-00000080.
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.554 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b96b433d-117b-4cd0-9d6f-f4bd77b35d59]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:57 np0005486808 systemd-udevd[391122]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:26:57 np0005486808 NetworkManager[44885]: <info>  [1760434017.5828] device (tap563493a8-f7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:26:57 np0005486808 NetworkManager[44885]: <info>  [1760434017.5837] device (tap563493a8-f7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.596 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5ec78492-9916-4ce3-bbc4-54c1175dc5b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.601 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[187bf499-5796-46be-8b06-c1954185f231]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:57 np0005486808 NetworkManager[44885]: <info>  [1760434017.6029] manager: (tapd00461c7-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/558)
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.636 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[79ac8d7f-cb93-4cdb-a281-02211f5b11b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.640 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[53e22cf1-6b22-4c25-b59a-a4e15fc0dd7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:57 np0005486808 NetworkManager[44885]: <info>  [1760434017.6585] device (tapd00461c7-a0): carrier: link connected
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.662 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[aa637a32-d32e-4970-a7bb-d6777f3eedd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.677 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1b050dad-9e69-4a3b-adbb-f67c728f5d9e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd00461c7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a2:a4:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 390], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 779642, 'reachable_time': 22653, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391151, 'error': None, 'target': 'ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.696 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[011e1a87-3a7b-4eda-be96-f9ed47f3d064]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea2:a47e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 779642, 'tstamp': 779642}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391152, 'error': None, 'target': 'ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.711 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6742087a-badd-4eef-b7c4-046a4ab36f53]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd00461c7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a2:a4:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 390], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 779642, 'reachable_time': 22653, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 391153, 'error': None, 'target': 'ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.750 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[89e1e5f5-4a71-4ba6-8f55-3a870add7178]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.795 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c1d21996-ed8a-4e55-8b13-e0a21405bda0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.796 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd00461c7-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.797 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.797 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd00461c7-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:26:57 np0005486808 kernel: tapd00461c7-a0: entered promiscuous mode
Oct 14 05:26:57 np0005486808 nova_compute[259627]: 2025-10-14 09:26:57.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:57 np0005486808 NetworkManager[44885]: <info>  [1760434017.7999] manager: (tapd00461c7-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/559)
Oct 14 05:26:57 np0005486808 nova_compute[259627]: 2025-10-14 09:26:57.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.803 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd00461c7-a0, col_values=(('external_ids', {'iface-id': '0b564910-dd70-44c6-926f-33b940d515bb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:26:57 np0005486808 nova_compute[259627]: 2025-10-14 09:26:57.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:57 np0005486808 ovn_controller[152662]: 2025-10-14T09:26:57Z|01360|binding|INFO|Releasing lport 0b564910-dd70-44c6-926f-33b940d515bb from this chassis (sb_readonly=0)
Oct 14 05:26:57 np0005486808 nova_compute[259627]: 2025-10-14 09:26:57.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:57 np0005486808 nova_compute[259627]: 2025-10-14 09:26:57.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.822 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d00461c7-a787-45ae-8db1-11ba8f94e301.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d00461c7-a787-45ae-8db1-11ba8f94e301.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.823 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[104f84ec-6e43-438b-b386-f408fc190c63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.824 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-d00461c7-a787-45ae-8db1-11ba8f94e301
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/d00461c7-a787-45ae-8db1-11ba8f94e301.pid.haproxy
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID d00461c7-a787-45ae-8db1-11ba8f94e301
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:26:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:26:57.826 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301', 'env', 'PROCESS_TAG=haproxy-d00461c7-a787-45ae-8db1-11ba8f94e301', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d00461c7-a787-45ae-8db1-11ba8f94e301.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:26:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:26:58 np0005486808 nova_compute[259627]: 2025-10-14 09:26:58.149 2 DEBUG nova.compute.manager [req-a321153f-bb66-4d08-b3a1-542d7789fad7 req-1e7996b5-11e9-43d6-9a6f-016154a3c11a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Received event network-vif-plugged-563493a8-f727-4a25-97de-548a04398264 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:26:58 np0005486808 nova_compute[259627]: 2025-10-14 09:26:58.149 2 DEBUG oslo_concurrency.lockutils [req-a321153f-bb66-4d08-b3a1-542d7789fad7 req-1e7996b5-11e9-43d6-9a6f-016154a3c11a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e16af982-3cd8-4600-99c4-aeec45986dda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:26:58 np0005486808 nova_compute[259627]: 2025-10-14 09:26:58.150 2 DEBUG oslo_concurrency.lockutils [req-a321153f-bb66-4d08-b3a1-542d7789fad7 req-1e7996b5-11e9-43d6-9a6f-016154a3c11a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e16af982-3cd8-4600-99c4-aeec45986dda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:26:58 np0005486808 nova_compute[259627]: 2025-10-14 09:26:58.150 2 DEBUG oslo_concurrency.lockutils [req-a321153f-bb66-4d08-b3a1-542d7789fad7 req-1e7996b5-11e9-43d6-9a6f-016154a3c11a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e16af982-3cd8-4600-99c4-aeec45986dda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:26:58 np0005486808 nova_compute[259627]: 2025-10-14 09:26:58.150 2 DEBUG nova.compute.manager [req-a321153f-bb66-4d08-b3a1-542d7789fad7 req-1e7996b5-11e9-43d6-9a6f-016154a3c11a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Processing event network-vif-plugged-563493a8-f727-4a25-97de-548a04398264 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:26:58 np0005486808 nova_compute[259627]: 2025-10-14 09:26:58.150 2 DEBUG nova.compute.manager [req-a321153f-bb66-4d08-b3a1-542d7789fad7 req-1e7996b5-11e9-43d6-9a6f-016154a3c11a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Received event network-vif-plugged-563493a8-f727-4a25-97de-548a04398264 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:26:58 np0005486808 nova_compute[259627]: 2025-10-14 09:26:58.150 2 DEBUG oslo_concurrency.lockutils [req-a321153f-bb66-4d08-b3a1-542d7789fad7 req-1e7996b5-11e9-43d6-9a6f-016154a3c11a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e16af982-3cd8-4600-99c4-aeec45986dda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:26:58 np0005486808 nova_compute[259627]: 2025-10-14 09:26:58.151 2 DEBUG oslo_concurrency.lockutils [req-a321153f-bb66-4d08-b3a1-542d7789fad7 req-1e7996b5-11e9-43d6-9a6f-016154a3c11a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e16af982-3cd8-4600-99c4-aeec45986dda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:26:58 np0005486808 nova_compute[259627]: 2025-10-14 09:26:58.151 2 DEBUG oslo_concurrency.lockutils [req-a321153f-bb66-4d08-b3a1-542d7789fad7 req-1e7996b5-11e9-43d6-9a6f-016154a3c11a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e16af982-3cd8-4600-99c4-aeec45986dda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:26:58 np0005486808 nova_compute[259627]: 2025-10-14 09:26:58.151 2 DEBUG nova.compute.manager [req-a321153f-bb66-4d08-b3a1-542d7789fad7 req-1e7996b5-11e9-43d6-9a6f-016154a3c11a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] No waiting events found dispatching network-vif-plugged-563493a8-f727-4a25-97de-548a04398264 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:26:58 np0005486808 nova_compute[259627]: 2025-10-14 09:26:58.151 2 WARNING nova.compute.manager [req-a321153f-bb66-4d08-b3a1-542d7789fad7 req-1e7996b5-11e9-43d6-9a6f-016154a3c11a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Received unexpected event network-vif-plugged-563493a8-f727-4a25-97de-548a04398264 for instance with vm_state building and task_state spawning.#033[00m
Oct 14 05:26:58 np0005486808 podman[391192]: 2025-10-14 09:26:58.242527869 +0000 UTC m=+0.053978615 container create 05c9b3a9bf2a157d842400147b82acd7ff81b1cf192b51decd7d65eeaad74d18 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:26:58 np0005486808 systemd[1]: Started libpod-conmon-05c9b3a9bf2a157d842400147b82acd7ff81b1cf192b51decd7d65eeaad74d18.scope.
Oct 14 05:26:58 np0005486808 podman[391192]: 2025-10-14 09:26:58.219548391 +0000 UTC m=+0.030999147 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:26:58 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:26:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2215: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 3.6 MiB/s wr, 64 op/s
Oct 14 05:26:58 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1209d6b30b19e606c66aff50aa191b9774b9d3e66e60de7f9a75b20180f215f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:26:58 np0005486808 podman[391192]: 2025-10-14 09:26:58.347442122 +0000 UTC m=+0.158892888 container init 05c9b3a9bf2a157d842400147b82acd7ff81b1cf192b51decd7d65eeaad74d18 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 14 05:26:58 np0005486808 podman[391192]: 2025-10-14 09:26:58.355195873 +0000 UTC m=+0.166646609 container start 05c9b3a9bf2a157d842400147b82acd7ff81b1cf192b51decd7d65eeaad74d18 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:26:58 np0005486808 neutron-haproxy-ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301[391242]: [NOTICE]   (391246) : New worker (391248) forked
Oct 14 05:26:58 np0005486808 neutron-haproxy-ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301[391242]: [NOTICE]   (391246) : Loading success.
Oct 14 05:26:58 np0005486808 nova_compute[259627]: 2025-10-14 09:26:58.755 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434018.7549312, e16af982-3cd8-4600-99c4-aeec45986dda => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:26:58 np0005486808 nova_compute[259627]: 2025-10-14 09:26:58.756 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] VM Started (Lifecycle Event)#033[00m
Oct 14 05:26:58 np0005486808 nova_compute[259627]: 2025-10-14 09:26:58.762 2 DEBUG nova.compute.manager [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:26:58 np0005486808 nova_compute[259627]: 2025-10-14 09:26:58.766 2 DEBUG nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:26:58 np0005486808 nova_compute[259627]: 2025-10-14 09:26:58.773 2 INFO nova.virt.libvirt.driver [-] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Instance spawned successfully.#033[00m
Oct 14 05:26:58 np0005486808 nova_compute[259627]: 2025-10-14 09:26:58.774 2 DEBUG nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:26:58 np0005486808 nova_compute[259627]: 2025-10-14 09:26:58.812 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:26:58 np0005486808 nova_compute[259627]: 2025-10-14 09:26:58.819 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:26:58 np0005486808 nova_compute[259627]: 2025-10-14 09:26:58.828 2 DEBUG nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:26:58 np0005486808 nova_compute[259627]: 2025-10-14 09:26:58.829 2 DEBUG nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:26:58 np0005486808 nova_compute[259627]: 2025-10-14 09:26:58.829 2 DEBUG nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:26:58 np0005486808 nova_compute[259627]: 2025-10-14 09:26:58.830 2 DEBUG nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:26:58 np0005486808 nova_compute[259627]: 2025-10-14 09:26:58.831 2 DEBUG nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:26:58 np0005486808 nova_compute[259627]: 2025-10-14 09:26:58.832 2 DEBUG nova.virt.libvirt.driver [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:26:58 np0005486808 nova_compute[259627]: 2025-10-14 09:26:58.840 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:26:58 np0005486808 nova_compute[259627]: 2025-10-14 09:26:58.841 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434018.7550426, e16af982-3cd8-4600-99c4-aeec45986dda => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:26:58 np0005486808 nova_compute[259627]: 2025-10-14 09:26:58.841 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:26:58 np0005486808 nova_compute[259627]: 2025-10-14 09:26:58.873 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:26:58 np0005486808 nova_compute[259627]: 2025-10-14 09:26:58.877 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434018.766276, e16af982-3cd8-4600-99c4-aeec45986dda => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:26:58 np0005486808 nova_compute[259627]: 2025-10-14 09:26:58.877 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:26:58 np0005486808 nova_compute[259627]: 2025-10-14 09:26:58.902 2 DEBUG nova.compute.manager [req-5918d541-2227-48d9-8924-cf03e1616a43 req-a6ac49db-08f6-40cf-9d31-5001040108ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received event network-changed-9ecc8f01-430d-4714-8d7e-e60d7edaa73c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:26:58 np0005486808 nova_compute[259627]: 2025-10-14 09:26:58.902 2 DEBUG nova.compute.manager [req-5918d541-2227-48d9-8924-cf03e1616a43 req-a6ac49db-08f6-40cf-9d31-5001040108ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Refreshing instance network info cache due to event network-changed-9ecc8f01-430d-4714-8d7e-e60d7edaa73c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:26:58 np0005486808 nova_compute[259627]: 2025-10-14 09:26:58.903 2 DEBUG oslo_concurrency.lockutils [req-5918d541-2227-48d9-8924-cf03e1616a43 req-a6ac49db-08f6-40cf-9d31-5001040108ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-2595dec0-9170-4e8f-a6bc-9179d30519a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:26:58 np0005486808 nova_compute[259627]: 2025-10-14 09:26:58.903 2 DEBUG oslo_concurrency.lockutils [req-5918d541-2227-48d9-8924-cf03e1616a43 req-a6ac49db-08f6-40cf-9d31-5001040108ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-2595dec0-9170-4e8f-a6bc-9179d30519a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:26:58 np0005486808 nova_compute[259627]: 2025-10-14 09:26:58.904 2 DEBUG nova.network.neutron [req-5918d541-2227-48d9-8924-cf03e1616a43 req-a6ac49db-08f6-40cf-9d31-5001040108ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Refreshing network info cache for port 9ecc8f01-430d-4714-8d7e-e60d7edaa73c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:26:58 np0005486808 nova_compute[259627]: 2025-10-14 09:26:58.908 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:26:58 np0005486808 nova_compute[259627]: 2025-10-14 09:26:58.912 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:26:58 np0005486808 nova_compute[259627]: 2025-10-14 09:26:58.918 2 INFO nova.compute.manager [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Took 10.95 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:26:58 np0005486808 nova_compute[259627]: 2025-10-14 09:26:58.919 2 DEBUG nova.compute.manager [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:26:58 np0005486808 nova_compute[259627]: 2025-10-14 09:26:58.935 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:26:58 np0005486808 nova_compute[259627]: 2025-10-14 09:26:58.987 2 INFO nova.compute.manager [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Took 12.16 seconds to build instance.#033[00m
Oct 14 05:26:59 np0005486808 nova_compute[259627]: 2025-10-14 09:26:59.021 2 DEBUG oslo_concurrency.lockutils [None req-a945d3cf-0013-46c4-9b50-42baaaf09482 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Lock "e16af982-3cd8-4600-99c4-aeec45986dda" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.266s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:26:59 np0005486808 nova_compute[259627]: 2025-10-14 09:26:59.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:00 np0005486808 nova_compute[259627]: 2025-10-14 09:27:00.223 2 DEBUG nova.network.neutron [req-5918d541-2227-48d9-8924-cf03e1616a43 req-a6ac49db-08f6-40cf-9d31-5001040108ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Updated VIF entry in instance network info cache for port 9ecc8f01-430d-4714-8d7e-e60d7edaa73c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:27:00 np0005486808 nova_compute[259627]: 2025-10-14 09:27:00.224 2 DEBUG nova.network.neutron [req-5918d541-2227-48d9-8924-cf03e1616a43 req-a6ac49db-08f6-40cf-9d31-5001040108ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Updating instance_info_cache with network_info: [{"id": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "address": "fa:16:3e:50:8f:64", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ecc8f01-43", "ovs_interfaceid": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:27:00 np0005486808 nova_compute[259627]: 2025-10-14 09:27:00.249 2 DEBUG oslo_concurrency.lockutils [req-5918d541-2227-48d9-8924-cf03e1616a43 req-a6ac49db-08f6-40cf-9d31-5001040108ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-2595dec0-9170-4e8f-a6bc-9179d30519a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:27:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2216: 305 pgs: 305 active+clean; 214 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.6 MiB/s wr, 152 op/s
Oct 14 05:27:01 np0005486808 nova_compute[259627]: 2025-10-14 09:27:01.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2217: 305 pgs: 305 active+clean; 214 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.2 MiB/s wr, 175 op/s
Oct 14 05:27:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:27:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:27:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:27:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:27:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:27:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:27:02 np0005486808 nova_compute[259627]: 2025-10-14 09:27:02.953 2 DEBUG nova.compute.manager [req-21d90bc4-1dc0-40d0-ae12-340daf09bdd0 req-e6b01d42-efb3-4169-a372-68f2b8799389 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Received event network-changed-563493a8-f727-4a25-97de-548a04398264 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:27:02 np0005486808 nova_compute[259627]: 2025-10-14 09:27:02.955 2 DEBUG nova.compute.manager [req-21d90bc4-1dc0-40d0-ae12-340daf09bdd0 req-e6b01d42-efb3-4169-a372-68f2b8799389 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Refreshing instance network info cache due to event network-changed-563493a8-f727-4a25-97de-548a04398264. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:27:02 np0005486808 nova_compute[259627]: 2025-10-14 09:27:02.955 2 DEBUG oslo_concurrency.lockutils [req-21d90bc4-1dc0-40d0-ae12-340daf09bdd0 req-e6b01d42-efb3-4169-a372-68f2b8799389 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e16af982-3cd8-4600-99c4-aeec45986dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:27:02 np0005486808 nova_compute[259627]: 2025-10-14 09:27:02.955 2 DEBUG oslo_concurrency.lockutils [req-21d90bc4-1dc0-40d0-ae12-340daf09bdd0 req-e6b01d42-efb3-4169-a372-68f2b8799389 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e16af982-3cd8-4600-99c4-aeec45986dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:27:02 np0005486808 nova_compute[259627]: 2025-10-14 09:27:02.955 2 DEBUG nova.network.neutron [req-21d90bc4-1dc0-40d0-ae12-340daf09bdd0 req-e6b01d42-efb3-4169-a372-68f2b8799389 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Refreshing network info cache for port 563493a8-f727-4a25-97de-548a04398264 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:27:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:27:04 np0005486808 nova_compute[259627]: 2025-10-14 09:27:04.072 2 DEBUG nova.network.neutron [req-21d90bc4-1dc0-40d0-ae12-340daf09bdd0 req-e6b01d42-efb3-4169-a372-68f2b8799389 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Updated VIF entry in instance network info cache for port 563493a8-f727-4a25-97de-548a04398264. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:27:04 np0005486808 nova_compute[259627]: 2025-10-14 09:27:04.073 2 DEBUG nova.network.neutron [req-21d90bc4-1dc0-40d0-ae12-340daf09bdd0 req-e6b01d42-efb3-4169-a372-68f2b8799389 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Updating instance_info_cache with network_info: [{"id": "563493a8-f727-4a25-97de-548a04398264", "address": "fa:16:3e:8d:ee:f5", "network": {"id": "d00461c7-a787-45ae-8db1-11ba8f94e301", "bridge": "br-int", "label": "tempest-network-smoke--529638516", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "261a0f5c61f04b77863377f034e70f01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap563493a8-f7", "ovs_interfaceid": "563493a8-f727-4a25-97de-548a04398264", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:27:04 np0005486808 nova_compute[259627]: 2025-10-14 09:27:04.097 2 DEBUG oslo_concurrency.lockutils [req-21d90bc4-1dc0-40d0-ae12-340daf09bdd0 req-e6b01d42-efb3-4169-a372-68f2b8799389 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-e16af982-3cd8-4600-99c4-aeec45986dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:27:04 np0005486808 nova_compute[259627]: 2025-10-14 09:27:04.330 2 DEBUG oslo_concurrency.lockutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "1b7f03a3-6b73-478f-bf13-cf062714faef" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:27:04 np0005486808 nova_compute[259627]: 2025-10-14 09:27:04.331 2 DEBUG oslo_concurrency.lockutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "1b7f03a3-6b73-478f-bf13-cf062714faef" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:27:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2218: 305 pgs: 305 active+clean; 214 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 147 op/s
Oct 14 05:27:04 np0005486808 nova_compute[259627]: 2025-10-14 09:27:04.349 2 DEBUG nova.compute.manager [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:27:04 np0005486808 nova_compute[259627]: 2025-10-14 09:27:04.426 2 DEBUG oslo_concurrency.lockutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:27:04 np0005486808 nova_compute[259627]: 2025-10-14 09:27:04.427 2 DEBUG oslo_concurrency.lockutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:27:04 np0005486808 nova_compute[259627]: 2025-10-14 09:27:04.436 2 DEBUG nova.virt.hardware [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:27:04 np0005486808 nova_compute[259627]: 2025-10-14 09:27:04.437 2 INFO nova.compute.claims [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:27:04 np0005486808 nova_compute[259627]: 2025-10-14 09:27:04.631 2 DEBUG oslo_concurrency.processutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:27:04 np0005486808 nova_compute[259627]: 2025-10-14 09:27:04.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:27:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2399357510' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:27:05 np0005486808 nova_compute[259627]: 2025-10-14 09:27:05.138 2 DEBUG oslo_concurrency.processutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:27:05 np0005486808 nova_compute[259627]: 2025-10-14 09:27:05.148 2 DEBUG nova.compute.provider_tree [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:27:05 np0005486808 nova_compute[259627]: 2025-10-14 09:27:05.170 2 DEBUG nova.scheduler.client.report [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:27:05 np0005486808 nova_compute[259627]: 2025-10-14 09:27:05.201 2 DEBUG oslo_concurrency.lockutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:27:05 np0005486808 nova_compute[259627]: 2025-10-14 09:27:05.202 2 DEBUG nova.compute.manager [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:27:05 np0005486808 nova_compute[259627]: 2025-10-14 09:27:05.257 2 DEBUG nova.compute.manager [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:27:05 np0005486808 nova_compute[259627]: 2025-10-14 09:27:05.258 2 DEBUG nova.network.neutron [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:27:05 np0005486808 nova_compute[259627]: 2025-10-14 09:27:05.276 2 INFO nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:27:05 np0005486808 nova_compute[259627]: 2025-10-14 09:27:05.316 2 DEBUG nova.compute.manager [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:27:05 np0005486808 nova_compute[259627]: 2025-10-14 09:27:05.417 2 DEBUG nova.compute.manager [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:27:05 np0005486808 nova_compute[259627]: 2025-10-14 09:27:05.420 2 DEBUG nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:27:05 np0005486808 nova_compute[259627]: 2025-10-14 09:27:05.421 2 INFO nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Creating image(s)#033[00m
Oct 14 05:27:05 np0005486808 nova_compute[259627]: 2025-10-14 09:27:05.463 2 DEBUG nova.storage.rbd_utils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 1b7f03a3-6b73-478f-bf13-cf062714faef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:27:05 np0005486808 nova_compute[259627]: 2025-10-14 09:27:05.501 2 DEBUG nova.storage.rbd_utils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 1b7f03a3-6b73-478f-bf13-cf062714faef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:27:05 np0005486808 nova_compute[259627]: 2025-10-14 09:27:05.529 2 DEBUG nova.storage.rbd_utils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 1b7f03a3-6b73-478f-bf13-cf062714faef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:27:05 np0005486808 nova_compute[259627]: 2025-10-14 09:27:05.534 2 DEBUG oslo_concurrency.processutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:27:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:27:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3213667799' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:27:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:27:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3213667799' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:27:05 np0005486808 nova_compute[259627]: 2025-10-14 09:27:05.641 2 DEBUG oslo_concurrency.processutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:27:05 np0005486808 nova_compute[259627]: 2025-10-14 09:27:05.642 2 DEBUG oslo_concurrency.lockutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:27:05 np0005486808 nova_compute[259627]: 2025-10-14 09:27:05.643 2 DEBUG oslo_concurrency.lockutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:27:05 np0005486808 nova_compute[259627]: 2025-10-14 09:27:05.643 2 DEBUG oslo_concurrency.lockutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:27:05 np0005486808 nova_compute[259627]: 2025-10-14 09:27:05.670 2 DEBUG nova.storage.rbd_utils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 1b7f03a3-6b73-478f-bf13-cf062714faef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:27:05 np0005486808 nova_compute[259627]: 2025-10-14 09:27:05.673 2 DEBUG oslo_concurrency.processutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 1b7f03a3-6b73-478f-bf13-cf062714faef_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:27:05 np0005486808 nova_compute[259627]: 2025-10-14 09:27:05.848 2 DEBUG nova.policy [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '10926c27278e45e7b995f2e53b9d16f9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4562699bdb1548f1bb36819107535620', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:27:05 np0005486808 nova_compute[259627]: 2025-10-14 09:27:05.962 2 DEBUG oslo_concurrency.processutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 1b7f03a3-6b73-478f-bf13-cf062714faef_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.289s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:27:06 np0005486808 nova_compute[259627]: 2025-10-14 09:27:06.033 2 DEBUG nova.storage.rbd_utils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] resizing rbd image 1b7f03a3-6b73-478f-bf13-cf062714faef_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:27:06 np0005486808 nova_compute[259627]: 2025-10-14 09:27:06.116 2 DEBUG nova.objects.instance [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'migration_context' on Instance uuid 1b7f03a3-6b73-478f-bf13-cf062714faef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:27:06 np0005486808 nova_compute[259627]: 2025-10-14 09:27:06.156 2 DEBUG nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:27:06 np0005486808 nova_compute[259627]: 2025-10-14 09:27:06.157 2 DEBUG nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Ensure instance console log exists: /var/lib/nova/instances/1b7f03a3-6b73-478f-bf13-cf062714faef/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:27:06 np0005486808 nova_compute[259627]: 2025-10-14 09:27:06.157 2 DEBUG oslo_concurrency.lockutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:27:06 np0005486808 nova_compute[259627]: 2025-10-14 09:27:06.157 2 DEBUG oslo_concurrency.lockutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:27:06 np0005486808 nova_compute[259627]: 2025-10-14 09:27:06.158 2 DEBUG oslo_concurrency.lockutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:27:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2219: 305 pgs: 305 active+clean; 214 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 147 op/s
Oct 14 05:27:06 np0005486808 nova_compute[259627]: 2025-10-14 09:27:06.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:07.041 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:27:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:07.042 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:27:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:07.043 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:27:07 np0005486808 nova_compute[259627]: 2025-10-14 09:27:07.248 2 DEBUG nova.network.neutron [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Successfully created port: c84419ee-1585-485f-ae91-116f2123dadf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:27:07 np0005486808 nova_compute[259627]: 2025-10-14 09:27:07.937 2 DEBUG nova.network.neutron [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Successfully updated port: c84419ee-1585-485f-ae91-116f2123dadf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:27:07 np0005486808 nova_compute[259627]: 2025-10-14 09:27:07.957 2 DEBUG oslo_concurrency.lockutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "refresh_cache-1b7f03a3-6b73-478f-bf13-cf062714faef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:27:07 np0005486808 nova_compute[259627]: 2025-10-14 09:27:07.958 2 DEBUG oslo_concurrency.lockutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquired lock "refresh_cache-1b7f03a3-6b73-478f-bf13-cf062714faef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:27:07 np0005486808 nova_compute[259627]: 2025-10-14 09:27:07.958 2 DEBUG nova.network.neutron [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:27:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:27:08 np0005486808 nova_compute[259627]: 2025-10-14 09:27:08.022 2 DEBUG nova.compute.manager [req-89981c97-5eb0-46d5-981b-e3d5eaf9fe4a req-830f4c49-3017-44d3-9ed4-657d47ada30b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Received event network-changed-c84419ee-1585-485f-ae91-116f2123dadf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:27:08 np0005486808 nova_compute[259627]: 2025-10-14 09:27:08.025 2 DEBUG nova.compute.manager [req-89981c97-5eb0-46d5-981b-e3d5eaf9fe4a req-830f4c49-3017-44d3-9ed4-657d47ada30b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Refreshing instance network info cache due to event network-changed-c84419ee-1585-485f-ae91-116f2123dadf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:27:08 np0005486808 nova_compute[259627]: 2025-10-14 09:27:08.025 2 DEBUG oslo_concurrency.lockutils [req-89981c97-5eb0-46d5-981b-e3d5eaf9fe4a req-830f4c49-3017-44d3-9ed4-657d47ada30b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-1b7f03a3-6b73-478f-bf13-cf062714faef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:27:08 np0005486808 nova_compute[259627]: 2025-10-14 09:27:08.118 2 DEBUG nova.network.neutron [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:27:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2220: 305 pgs: 305 active+clean; 214 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 137 op/s
Oct 14 05:27:08 np0005486808 podman[391446]: 2025-10-14 09:27:08.720084113 +0000 UTC m=+0.105774664 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=iscsid, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 14 05:27:08 np0005486808 podman[391445]: 2025-10-14 09:27:08.72074401 +0000 UTC m=+0.112835950 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 14 05:27:09 np0005486808 ovn_controller[152662]: 2025-10-14T09:27:09Z|00152|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:50:8f:64 10.100.0.3
Oct 14 05:27:09 np0005486808 ovn_controller[152662]: 2025-10-14T09:27:09Z|00153|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:50:8f:64 10.100.0.3
Oct 14 05:27:09 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Oct 14 05:27:09 np0005486808 nova_compute[259627]: 2025-10-14 09:27:09.959 2 DEBUG nova.network.neutron [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Updating instance_info_cache with network_info: [{"id": "c84419ee-1585-485f-ae91-116f2123dadf", "address": "fa:16:3e:44:6d:b2", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc84419ee-15", "ovs_interfaceid": "c84419ee-1585-485f-ae91-116f2123dadf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:27:09 np0005486808 nova_compute[259627]: 2025-10-14 09:27:09.985 2 DEBUG oslo_concurrency.lockutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Releasing lock "refresh_cache-1b7f03a3-6b73-478f-bf13-cf062714faef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:27:09 np0005486808 nova_compute[259627]: 2025-10-14 09:27:09.986 2 DEBUG nova.compute.manager [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Instance network_info: |[{"id": "c84419ee-1585-485f-ae91-116f2123dadf", "address": "fa:16:3e:44:6d:b2", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc84419ee-15", "ovs_interfaceid": "c84419ee-1585-485f-ae91-116f2123dadf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:27:09 np0005486808 nova_compute[259627]: 2025-10-14 09:27:09.986 2 DEBUG oslo_concurrency.lockutils [req-89981c97-5eb0-46d5-981b-e3d5eaf9fe4a req-830f4c49-3017-44d3-9ed4-657d47ada30b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-1b7f03a3-6b73-478f-bf13-cf062714faef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:27:09 np0005486808 nova_compute[259627]: 2025-10-14 09:27:09.986 2 DEBUG nova.network.neutron [req-89981c97-5eb0-46d5-981b-e3d5eaf9fe4a req-830f4c49-3017-44d3-9ed4-657d47ada30b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Refreshing network info cache for port c84419ee-1585-485f-ae91-116f2123dadf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:27:09 np0005486808 nova_compute[259627]: 2025-10-14 09:27:09.992 2 DEBUG nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Start _get_guest_xml network_info=[{"id": "c84419ee-1585-485f-ae91-116f2123dadf", "address": "fa:16:3e:44:6d:b2", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc84419ee-15", "ovs_interfaceid": "c84419ee-1585-485f-ae91-116f2123dadf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:27:09 np0005486808 nova_compute[259627]: 2025-10-14 09:27:09.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:09 np0005486808 nova_compute[259627]: 2025-10-14 09:27:09.999 2 WARNING nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:27:10 np0005486808 nova_compute[259627]: 2025-10-14 09:27:10.005 2 DEBUG nova.virt.libvirt.host [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:27:10 np0005486808 nova_compute[259627]: 2025-10-14 09:27:10.006 2 DEBUG nova.virt.libvirt.host [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:27:10 np0005486808 nova_compute[259627]: 2025-10-14 09:27:10.009 2 DEBUG nova.virt.libvirt.host [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:27:10 np0005486808 nova_compute[259627]: 2025-10-14 09:27:10.010 2 DEBUG nova.virt.libvirt.host [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:27:10 np0005486808 nova_compute[259627]: 2025-10-14 09:27:10.010 2 DEBUG nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:27:10 np0005486808 nova_compute[259627]: 2025-10-14 09:27:10.011 2 DEBUG nova.virt.hardware [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:27:10 np0005486808 nova_compute[259627]: 2025-10-14 09:27:10.011 2 DEBUG nova.virt.hardware [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:27:10 np0005486808 nova_compute[259627]: 2025-10-14 09:27:10.012 2 DEBUG nova.virt.hardware [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:27:10 np0005486808 nova_compute[259627]: 2025-10-14 09:27:10.012 2 DEBUG nova.virt.hardware [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:27:10 np0005486808 nova_compute[259627]: 2025-10-14 09:27:10.012 2 DEBUG nova.virt.hardware [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:27:10 np0005486808 nova_compute[259627]: 2025-10-14 09:27:10.013 2 DEBUG nova.virt.hardware [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:27:10 np0005486808 nova_compute[259627]: 2025-10-14 09:27:10.013 2 DEBUG nova.virt.hardware [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:27:10 np0005486808 nova_compute[259627]: 2025-10-14 09:27:10.014 2 DEBUG nova.virt.hardware [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:27:10 np0005486808 nova_compute[259627]: 2025-10-14 09:27:10.014 2 DEBUG nova.virt.hardware [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:27:10 np0005486808 nova_compute[259627]: 2025-10-14 09:27:10.014 2 DEBUG nova.virt.hardware [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:27:10 np0005486808 nova_compute[259627]: 2025-10-14 09:27:10.015 2 DEBUG nova.virt.hardware [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:27:10 np0005486808 nova_compute[259627]: 2025-10-14 09:27:10.018 2 DEBUG oslo_concurrency.processutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:27:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2221: 305 pgs: 305 active+clean; 256 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.8 MiB/s wr, 195 op/s
Oct 14 05:27:10 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:27:10 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/606654335' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:27:10 np0005486808 nova_compute[259627]: 2025-10-14 09:27:10.493 2 DEBUG oslo_concurrency.processutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:27:10 np0005486808 nova_compute[259627]: 2025-10-14 09:27:10.514 2 DEBUG nova.storage.rbd_utils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 1b7f03a3-6b73-478f-bf13-cf062714faef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:27:10 np0005486808 nova_compute[259627]: 2025-10-14 09:27:10.518 2 DEBUG oslo_concurrency.processutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:27:10 np0005486808 ovn_controller[152662]: 2025-10-14T09:27:10Z|00154|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8d:ee:f5 10.100.0.8
Oct 14 05:27:10 np0005486808 ovn_controller[152662]: 2025-10-14T09:27:10Z|00155|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8d:ee:f5 10.100.0.8
Oct 14 05:27:10 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:27:10 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3431566766' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:27:10 np0005486808 nova_compute[259627]: 2025-10-14 09:27:10.956 2 DEBUG oslo_concurrency.processutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:27:10 np0005486808 nova_compute[259627]: 2025-10-14 09:27:10.959 2 DEBUG nova.virt.libvirt.vif [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:27:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2128241833',display_name='tempest-TestNetworkBasicOps-server-2128241833',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2128241833',id=129,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLOLhDC9VoGp8WYPVj/2CKqBQNdjKW4OK8XHbingpjAJEpUTBDLqjgHyIpa9e7zDDaRmI0fbF5BkwR2QzH+89ULkia3qer+DRxqv2g2mzU6TZpkV7v9oBLNlvSUqf9rYag==',key_name='tempest-TestNetworkBasicOps-1059153159',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-3kh0vol0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:27:05Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=1b7f03a3-6b73-478f-bf13-cf062714faef,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c84419ee-1585-485f-ae91-116f2123dadf", "address": "fa:16:3e:44:6d:b2", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc84419ee-15", "ovs_interfaceid": "c84419ee-1585-485f-ae91-116f2123dadf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:27:10 np0005486808 nova_compute[259627]: 2025-10-14 09:27:10.960 2 DEBUG nova.network.os_vif_util [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "c84419ee-1585-485f-ae91-116f2123dadf", "address": "fa:16:3e:44:6d:b2", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc84419ee-15", "ovs_interfaceid": "c84419ee-1585-485f-ae91-116f2123dadf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:27:10 np0005486808 nova_compute[259627]: 2025-10-14 09:27:10.962 2 DEBUG nova.network.os_vif_util [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:6d:b2,bridge_name='br-int',has_traffic_filtering=True,id=c84419ee-1585-485f-ae91-116f2123dadf,network=Network(44344b65-f325-470a-bd36-6f52ed03d317),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc84419ee-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:27:10 np0005486808 nova_compute[259627]: 2025-10-14 09:27:10.964 2 DEBUG nova.objects.instance [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1b7f03a3-6b73-478f-bf13-cf062714faef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:27:10 np0005486808 nova_compute[259627]: 2025-10-14 09:27:10.990 2 DEBUG nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:27:10 np0005486808 nova_compute[259627]:  <uuid>1b7f03a3-6b73-478f-bf13-cf062714faef</uuid>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:  <name>instance-00000081</name>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:27:10 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:      <nova:name>tempest-TestNetworkBasicOps-server-2128241833</nova:name>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:27:10</nova:creationTime>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:27:10 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:        <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:        <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:        <nova:port uuid="c84419ee-1585-485f-ae91-116f2123dadf">
Oct 14 05:27:10 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:27:10 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:      <entry name="serial">1b7f03a3-6b73-478f-bf13-cf062714faef</entry>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:      <entry name="uuid">1b7f03a3-6b73-478f-bf13-cf062714faef</entry>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:27:10 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:27:10 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:27:10 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/1b7f03a3-6b73-478f-bf13-cf062714faef_disk">
Oct 14 05:27:10 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:27:10 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:27:10 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/1b7f03a3-6b73-478f-bf13-cf062714faef_disk.config">
Oct 14 05:27:10 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:27:10 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:27:10 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:44:6d:b2"/>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:      <target dev="tapc84419ee-15"/>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:27:10 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/1b7f03a3-6b73-478f-bf13-cf062714faef/console.log" append="off"/>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:27:10 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:10 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:11 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:11 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:27:11 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:27:11 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:27:11 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:27:11 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:27:11 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:27:11 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:27:11 np0005486808 nova_compute[259627]: 2025-10-14 09:27:11.002 2 DEBUG nova.compute.manager [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Preparing to wait for external event network-vif-plugged-c84419ee-1585-485f-ae91-116f2123dadf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:27:11 np0005486808 nova_compute[259627]: 2025-10-14 09:27:11.002 2 DEBUG oslo_concurrency.lockutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "1b7f03a3-6b73-478f-bf13-cf062714faef-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:27:11 np0005486808 nova_compute[259627]: 2025-10-14 09:27:11.003 2 DEBUG oslo_concurrency.lockutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "1b7f03a3-6b73-478f-bf13-cf062714faef-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:27:11 np0005486808 nova_compute[259627]: 2025-10-14 09:27:11.003 2 DEBUG oslo_concurrency.lockutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "1b7f03a3-6b73-478f-bf13-cf062714faef-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:27:11 np0005486808 nova_compute[259627]: 2025-10-14 09:27:11.004 2 DEBUG nova.virt.libvirt.vif [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:27:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2128241833',display_name='tempest-TestNetworkBasicOps-server-2128241833',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2128241833',id=129,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLOLhDC9VoGp8WYPVj/2CKqBQNdjKW4OK8XHbingpjAJEpUTBDLqjgHyIpa9e7zDDaRmI0fbF5BkwR2QzH+89ULkia3qer+DRxqv2g2mzU6TZpkV7v9oBLNlvSUqf9rYag==',key_name='tempest-TestNetworkBasicOps-1059153159',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-3kh0vol0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:27:05Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=1b7f03a3-6b73-478f-bf13-cf062714faef,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c84419ee-1585-485f-ae91-116f2123dadf", "address": "fa:16:3e:44:6d:b2", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc84419ee-15", "ovs_interfaceid": "c84419ee-1585-485f-ae91-116f2123dadf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:27:11 np0005486808 nova_compute[259627]: 2025-10-14 09:27:11.005 2 DEBUG nova.network.os_vif_util [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "c84419ee-1585-485f-ae91-116f2123dadf", "address": "fa:16:3e:44:6d:b2", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc84419ee-15", "ovs_interfaceid": "c84419ee-1585-485f-ae91-116f2123dadf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:27:11 np0005486808 nova_compute[259627]: 2025-10-14 09:27:11.007 2 DEBUG nova.network.os_vif_util [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:6d:b2,bridge_name='br-int',has_traffic_filtering=True,id=c84419ee-1585-485f-ae91-116f2123dadf,network=Network(44344b65-f325-470a-bd36-6f52ed03d317),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc84419ee-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:27:11 np0005486808 nova_compute[259627]: 2025-10-14 09:27:11.008 2 DEBUG os_vif [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:6d:b2,bridge_name='br-int',has_traffic_filtering=True,id=c84419ee-1585-485f-ae91-116f2123dadf,network=Network(44344b65-f325-470a-bd36-6f52ed03d317),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc84419ee-15') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:27:11 np0005486808 nova_compute[259627]: 2025-10-14 09:27:11.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:11 np0005486808 nova_compute[259627]: 2025-10-14 09:27:11.010 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:27:11 np0005486808 nova_compute[259627]: 2025-10-14 09:27:11.011 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:27:11 np0005486808 nova_compute[259627]: 2025-10-14 09:27:11.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:11 np0005486808 nova_compute[259627]: 2025-10-14 09:27:11.017 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc84419ee-15, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:27:11 np0005486808 nova_compute[259627]: 2025-10-14 09:27:11.018 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc84419ee-15, col_values=(('external_ids', {'iface-id': 'c84419ee-1585-485f-ae91-116f2123dadf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:44:6d:b2', 'vm-uuid': '1b7f03a3-6b73-478f-bf13-cf062714faef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:27:11 np0005486808 NetworkManager[44885]: <info>  [1760434031.0626] manager: (tapc84419ee-15): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/560)
Oct 14 05:27:11 np0005486808 nova_compute[259627]: 2025-10-14 09:27:11.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:11 np0005486808 nova_compute[259627]: 2025-10-14 09:27:11.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:27:11 np0005486808 nova_compute[259627]: 2025-10-14 09:27:11.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:11 np0005486808 nova_compute[259627]: 2025-10-14 09:27:11.076 2 INFO os_vif [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:6d:b2,bridge_name='br-int',has_traffic_filtering=True,id=c84419ee-1585-485f-ae91-116f2123dadf,network=Network(44344b65-f325-470a-bd36-6f52ed03d317),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc84419ee-15')#033[00m
Oct 14 05:27:11 np0005486808 nova_compute[259627]: 2025-10-14 09:27:11.155 2 DEBUG nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:27:11 np0005486808 nova_compute[259627]: 2025-10-14 09:27:11.156 2 DEBUG nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:27:11 np0005486808 nova_compute[259627]: 2025-10-14 09:27:11.156 2 DEBUG nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No VIF found with MAC fa:16:3e:44:6d:b2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:27:11 np0005486808 nova_compute[259627]: 2025-10-14 09:27:11.157 2 INFO nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Using config drive#033[00m
Oct 14 05:27:11 np0005486808 nova_compute[259627]: 2025-10-14 09:27:11.192 2 DEBUG nova.storage.rbd_utils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 1b7f03a3-6b73-478f-bf13-cf062714faef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:27:11 np0005486808 nova_compute[259627]: 2025-10-14 09:27:11.367 2 DEBUG nova.network.neutron [req-89981c97-5eb0-46d5-981b-e3d5eaf9fe4a req-830f4c49-3017-44d3-9ed4-657d47ada30b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Updated VIF entry in instance network info cache for port c84419ee-1585-485f-ae91-116f2123dadf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:27:11 np0005486808 nova_compute[259627]: 2025-10-14 09:27:11.368 2 DEBUG nova.network.neutron [req-89981c97-5eb0-46d5-981b-e3d5eaf9fe4a req-830f4c49-3017-44d3-9ed4-657d47ada30b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Updating instance_info_cache with network_info: [{"id": "c84419ee-1585-485f-ae91-116f2123dadf", "address": "fa:16:3e:44:6d:b2", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc84419ee-15", "ovs_interfaceid": "c84419ee-1585-485f-ae91-116f2123dadf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:27:11 np0005486808 nova_compute[259627]: 2025-10-14 09:27:11.383 2 DEBUG oslo_concurrency.lockutils [req-89981c97-5eb0-46d5-981b-e3d5eaf9fe4a req-830f4c49-3017-44d3-9ed4-657d47ada30b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-1b7f03a3-6b73-478f-bf13-cf062714faef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:27:11 np0005486808 nova_compute[259627]: 2025-10-14 09:27:11.804 2 INFO nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Creating config drive at /var/lib/nova/instances/1b7f03a3-6b73-478f-bf13-cf062714faef/disk.config#033[00m
Oct 14 05:27:11 np0005486808 nova_compute[259627]: 2025-10-14 09:27:11.809 2 DEBUG oslo_concurrency.processutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1b7f03a3-6b73-478f-bf13-cf062714faef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwnqi2s1s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:27:11 np0005486808 nova_compute[259627]: 2025-10-14 09:27:11.965 2 DEBUG oslo_concurrency.processutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1b7f03a3-6b73-478f-bf13-cf062714faef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwnqi2s1s" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:27:12 np0005486808 nova_compute[259627]: 2025-10-14 09:27:12.000 2 DEBUG nova.storage.rbd_utils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 1b7f03a3-6b73-478f-bf13-cf062714faef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:27:12 np0005486808 nova_compute[259627]: 2025-10-14 09:27:12.003 2 DEBUG oslo_concurrency.processutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1b7f03a3-6b73-478f-bf13-cf062714faef/disk.config 1b7f03a3-6b73-478f-bf13-cf062714faef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:27:12 np0005486808 nova_compute[259627]: 2025-10-14 09:27:12.180 2 DEBUG oslo_concurrency.processutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1b7f03a3-6b73-478f-bf13-cf062714faef/disk.config 1b7f03a3-6b73-478f-bf13-cf062714faef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:27:12 np0005486808 nova_compute[259627]: 2025-10-14 09:27:12.182 2 INFO nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Deleting local config drive /var/lib/nova/instances/1b7f03a3-6b73-478f-bf13-cf062714faef/disk.config because it was imported into RBD.#033[00m
Oct 14 05:27:12 np0005486808 kernel: tapc84419ee-15: entered promiscuous mode
Oct 14 05:27:12 np0005486808 NetworkManager[44885]: <info>  [1760434032.2421] manager: (tapc84419ee-15): new Tun device (/org/freedesktop/NetworkManager/Devices/561)
Oct 14 05:27:12 np0005486808 systemd-udevd[391619]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:27:12 np0005486808 nova_compute[259627]: 2025-10-14 09:27:12.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:12 np0005486808 ovn_controller[152662]: 2025-10-14T09:27:12Z|01361|binding|INFO|Claiming lport c84419ee-1585-485f-ae91-116f2123dadf for this chassis.
Oct 14 05:27:12 np0005486808 ovn_controller[152662]: 2025-10-14T09:27:12Z|01362|binding|INFO|c84419ee-1585-485f-ae91-116f2123dadf: Claiming fa:16:3e:44:6d:b2 10.100.0.9
Oct 14 05:27:12 np0005486808 NetworkManager[44885]: <info>  [1760434032.2986] device (tapc84419ee-15): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:27:12 np0005486808 NetworkManager[44885]: <info>  [1760434032.2996] device (tapc84419ee-15): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:27:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:12.301 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:6d:b2 10.100.0.9'], port_security=['fa:16:3e:44:6d:b2 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '1b7f03a3-6b73-478f-bf13-cf062714faef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-44344b65-f325-470a-bd36-6f52ed03d317', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '2', 'neutron:security_group_ids': '471f6dc6-ea8e-4b1e-a678-6d128cef3d97', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5dceacdb-1a50-419e-9ee9-f149e7094b34, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=c84419ee-1585-485f-ae91-116f2123dadf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:27:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:12.302 162547 INFO neutron.agent.ovn.metadata.agent [-] Port c84419ee-1585-485f-ae91-116f2123dadf in datapath 44344b65-f325-470a-bd36-6f52ed03d317 bound to our chassis#033[00m
Oct 14 05:27:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:12.304 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 44344b65-f325-470a-bd36-6f52ed03d317#033[00m
Oct 14 05:27:12 np0005486808 ovn_controller[152662]: 2025-10-14T09:27:12Z|01363|binding|INFO|Setting lport c84419ee-1585-485f-ae91-116f2123dadf ovn-installed in OVS
Oct 14 05:27:12 np0005486808 ovn_controller[152662]: 2025-10-14T09:27:12Z|01364|binding|INFO|Setting lport c84419ee-1585-485f-ae91-116f2123dadf up in Southbound
Oct 14 05:27:12 np0005486808 nova_compute[259627]: 2025-10-14 09:27:12.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:12 np0005486808 systemd-machined[214636]: New machine qemu-162-instance-00000081.
Oct 14 05:27:12 np0005486808 nova_compute[259627]: 2025-10-14 09:27:12.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:12.333 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9994f344-bbc6-43f4-8c3e-7a20df7eb95b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:27:12 np0005486808 systemd[1]: Started Virtual Machine qemu-162-instance-00000081.
Oct 14 05:27:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2222: 305 pgs: 305 active+clean; 318 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.9 MiB/s wr, 187 op/s
Oct 14 05:27:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:12.369 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[320d6180-fb81-4e56-85b6-dce962f20bec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:27:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:12.372 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8d981dc3-25e3-45b1-82da-0bb051708266]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:27:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:12.415 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[50414cf6-eb6f-4903-9e19-931c2cb252bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:27:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:12.440 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[da9206ab-0cb3-4861-8d5a-adc444d23dc5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap44344b65-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:40:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 388], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 779335, 'reachable_time': 36403, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391636, 'error': None, 'target': 'ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:27:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:12.462 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[996a6870-1b7e-42ee-9a7f-e4d53af4d83c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap44344b65-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 779351, 'tstamp': 779351}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391637, 'error': None, 'target': 'ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap44344b65-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 779355, 'tstamp': 779355}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391637, 'error': None, 'target': 'ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:27:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:12.464 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap44344b65-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:27:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:12.468 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap44344b65-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:27:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:12.468 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:27:12 np0005486808 nova_compute[259627]: 2025-10-14 09:27:12.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:12.468 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap44344b65-f0, col_values=(('external_ids', {'iface-id': 'dabdfa0b-267a-4754-a026-601ab2593a32'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:27:12 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:12.469 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:27:12 np0005486808 nova_compute[259627]: 2025-10-14 09:27:12.530 2 DEBUG nova.compute.manager [req-c70e499f-f254-40ce-b11d-fa9c03e0f632 req-14f792fd-dd93-41b7-b555-bb4033f76b26 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Received event network-vif-plugged-c84419ee-1585-485f-ae91-116f2123dadf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:27:12 np0005486808 nova_compute[259627]: 2025-10-14 09:27:12.530 2 DEBUG oslo_concurrency.lockutils [req-c70e499f-f254-40ce-b11d-fa9c03e0f632 req-14f792fd-dd93-41b7-b555-bb4033f76b26 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1b7f03a3-6b73-478f-bf13-cf062714faef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:27:12 np0005486808 nova_compute[259627]: 2025-10-14 09:27:12.531 2 DEBUG oslo_concurrency.lockutils [req-c70e499f-f254-40ce-b11d-fa9c03e0f632 req-14f792fd-dd93-41b7-b555-bb4033f76b26 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1b7f03a3-6b73-478f-bf13-cf062714faef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:27:12 np0005486808 nova_compute[259627]: 2025-10-14 09:27:12.531 2 DEBUG oslo_concurrency.lockutils [req-c70e499f-f254-40ce-b11d-fa9c03e0f632 req-14f792fd-dd93-41b7-b555-bb4033f76b26 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1b7f03a3-6b73-478f-bf13-cf062714faef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:27:12 np0005486808 nova_compute[259627]: 2025-10-14 09:27:12.531 2 DEBUG nova.compute.manager [req-c70e499f-f254-40ce-b11d-fa9c03e0f632 req-14f792fd-dd93-41b7-b555-bb4033f76b26 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Processing event network-vif-plugged-c84419ee-1585-485f-ae91-116f2123dadf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:27:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:27:13 np0005486808 nova_compute[259627]: 2025-10-14 09:27:13.522 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434033.5219579, 1b7f03a3-6b73-478f-bf13-cf062714faef => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:27:13 np0005486808 nova_compute[259627]: 2025-10-14 09:27:13.523 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] VM Started (Lifecycle Event)#033[00m
Oct 14 05:27:13 np0005486808 nova_compute[259627]: 2025-10-14 09:27:13.527 2 DEBUG nova.compute.manager [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:27:13 np0005486808 nova_compute[259627]: 2025-10-14 09:27:13.531 2 DEBUG nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:27:13 np0005486808 nova_compute[259627]: 2025-10-14 09:27:13.536 2 INFO nova.virt.libvirt.driver [-] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Instance spawned successfully.#033[00m
Oct 14 05:27:13 np0005486808 nova_compute[259627]: 2025-10-14 09:27:13.536 2 DEBUG nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:27:13 np0005486808 nova_compute[259627]: 2025-10-14 09:27:13.546 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:27:13 np0005486808 nova_compute[259627]: 2025-10-14 09:27:13.552 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:27:13 np0005486808 nova_compute[259627]: 2025-10-14 09:27:13.568 2 DEBUG nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:27:13 np0005486808 nova_compute[259627]: 2025-10-14 09:27:13.568 2 DEBUG nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:27:13 np0005486808 nova_compute[259627]: 2025-10-14 09:27:13.569 2 DEBUG nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:27:13 np0005486808 nova_compute[259627]: 2025-10-14 09:27:13.570 2 DEBUG nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:27:13 np0005486808 nova_compute[259627]: 2025-10-14 09:27:13.570 2 DEBUG nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:27:13 np0005486808 nova_compute[259627]: 2025-10-14 09:27:13.571 2 DEBUG nova.virt.libvirt.driver [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:27:13 np0005486808 nova_compute[259627]: 2025-10-14 09:27:13.585 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:27:13 np0005486808 nova_compute[259627]: 2025-10-14 09:27:13.586 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434033.5221717, 1b7f03a3-6b73-478f-bf13-cf062714faef => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:27:13 np0005486808 nova_compute[259627]: 2025-10-14 09:27:13.586 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:27:13 np0005486808 nova_compute[259627]: 2025-10-14 09:27:13.615 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:27:13 np0005486808 nova_compute[259627]: 2025-10-14 09:27:13.620 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434033.531478, 1b7f03a3-6b73-478f-bf13-cf062714faef => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:27:13 np0005486808 nova_compute[259627]: 2025-10-14 09:27:13.620 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:27:13 np0005486808 nova_compute[259627]: 2025-10-14 09:27:13.641 2 INFO nova.compute.manager [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Took 8.22 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:27:13 np0005486808 nova_compute[259627]: 2025-10-14 09:27:13.642 2 DEBUG nova.compute.manager [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:27:13 np0005486808 nova_compute[259627]: 2025-10-14 09:27:13.643 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:27:13 np0005486808 nova_compute[259627]: 2025-10-14 09:27:13.654 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:27:13 np0005486808 nova_compute[259627]: 2025-10-14 09:27:13.678 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:27:13 np0005486808 nova_compute[259627]: 2025-10-14 09:27:13.706 2 INFO nova.compute.manager [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Took 9.31 seconds to build instance.#033[00m
Oct 14 05:27:13 np0005486808 nova_compute[259627]: 2025-10-14 09:27:13.721 2 DEBUG oslo_concurrency.lockutils [None req-e4fab517-b8e9-419a-9985-8b7ff2505471 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "1b7f03a3-6b73-478f-bf13-cf062714faef" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.391s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:27:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2223: 305 pgs: 305 active+clean; 318 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 608 KiB/s rd, 5.9 MiB/s wr, 138 op/s
Oct 14 05:27:14 np0005486808 nova_compute[259627]: 2025-10-14 09:27:14.657 2 DEBUG nova.compute.manager [req-ac06b5ad-50f8-4ed2-84c3-10835dc11219 req-4cd5a284-e4c5-4d87-87d3-9cf49da8c6a1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Received event network-vif-plugged-c84419ee-1585-485f-ae91-116f2123dadf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:27:14 np0005486808 nova_compute[259627]: 2025-10-14 09:27:14.658 2 DEBUG oslo_concurrency.lockutils [req-ac06b5ad-50f8-4ed2-84c3-10835dc11219 req-4cd5a284-e4c5-4d87-87d3-9cf49da8c6a1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1b7f03a3-6b73-478f-bf13-cf062714faef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:27:14 np0005486808 nova_compute[259627]: 2025-10-14 09:27:14.658 2 DEBUG oslo_concurrency.lockutils [req-ac06b5ad-50f8-4ed2-84c3-10835dc11219 req-4cd5a284-e4c5-4d87-87d3-9cf49da8c6a1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1b7f03a3-6b73-478f-bf13-cf062714faef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:27:14 np0005486808 nova_compute[259627]: 2025-10-14 09:27:14.658 2 DEBUG oslo_concurrency.lockutils [req-ac06b5ad-50f8-4ed2-84c3-10835dc11219 req-4cd5a284-e4c5-4d87-87d3-9cf49da8c6a1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1b7f03a3-6b73-478f-bf13-cf062714faef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:27:14 np0005486808 nova_compute[259627]: 2025-10-14 09:27:14.658 2 DEBUG nova.compute.manager [req-ac06b5ad-50f8-4ed2-84c3-10835dc11219 req-4cd5a284-e4c5-4d87-87d3-9cf49da8c6a1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] No waiting events found dispatching network-vif-plugged-c84419ee-1585-485f-ae91-116f2123dadf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:27:14 np0005486808 nova_compute[259627]: 2025-10-14 09:27:14.659 2 WARNING nova.compute.manager [req-ac06b5ad-50f8-4ed2-84c3-10835dc11219 req-4cd5a284-e4c5-4d87-87d3-9cf49da8c6a1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Received unexpected event network-vif-plugged-c84419ee-1585-485f-ae91-116f2123dadf for instance with vm_state active and task_state None.#033[00m
Oct 14 05:27:14 np0005486808 nova_compute[259627]: 2025-10-14 09:27:14.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:16 np0005486808 nova_compute[259627]: 2025-10-14 09:27:16.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2224: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.1 MiB/s wr, 229 op/s
Oct 14 05:27:17 np0005486808 nova_compute[259627]: 2025-10-14 09:27:17.866 2 DEBUG nova.compute.manager [req-c895f2e9-ef5c-4cd4-82b3-1d6f1abd87ce req-55eb0779-c0f4-4bf1-ba24-84abcd1e7d63 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Received event network-changed-c84419ee-1585-485f-ae91-116f2123dadf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:27:17 np0005486808 nova_compute[259627]: 2025-10-14 09:27:17.866 2 DEBUG nova.compute.manager [req-c895f2e9-ef5c-4cd4-82b3-1d6f1abd87ce req-55eb0779-c0f4-4bf1-ba24-84abcd1e7d63 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Refreshing instance network info cache due to event network-changed-c84419ee-1585-485f-ae91-116f2123dadf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:27:17 np0005486808 nova_compute[259627]: 2025-10-14 09:27:17.867 2 DEBUG oslo_concurrency.lockutils [req-c895f2e9-ef5c-4cd4-82b3-1d6f1abd87ce req-55eb0779-c0f4-4bf1-ba24-84abcd1e7d63 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-1b7f03a3-6b73-478f-bf13-cf062714faef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:27:17 np0005486808 nova_compute[259627]: 2025-10-14 09:27:17.867 2 DEBUG oslo_concurrency.lockutils [req-c895f2e9-ef5c-4cd4-82b3-1d6f1abd87ce req-55eb0779-c0f4-4bf1-ba24-84abcd1e7d63 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-1b7f03a3-6b73-478f-bf13-cf062714faef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:27:17 np0005486808 nova_compute[259627]: 2025-10-14 09:27:17.867 2 DEBUG nova.network.neutron [req-c895f2e9-ef5c-4cd4-82b3-1d6f1abd87ce req-55eb0779-c0f4-4bf1-ba24-84abcd1e7d63 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Refreshing network info cache for port c84419ee-1585-485f-ae91-116f2123dadf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:27:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:27:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2225: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.1 MiB/s wr, 229 op/s
Oct 14 05:27:19 np0005486808 nova_compute[259627]: 2025-10-14 09:27:19.714 2 DEBUG nova.network.neutron [req-c895f2e9-ef5c-4cd4-82b3-1d6f1abd87ce req-55eb0779-c0f4-4bf1-ba24-84abcd1e7d63 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Updated VIF entry in instance network info cache for port c84419ee-1585-485f-ae91-116f2123dadf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:27:19 np0005486808 nova_compute[259627]: 2025-10-14 09:27:19.715 2 DEBUG nova.network.neutron [req-c895f2e9-ef5c-4cd4-82b3-1d6f1abd87ce req-55eb0779-c0f4-4bf1-ba24-84abcd1e7d63 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Updating instance_info_cache with network_info: [{"id": "c84419ee-1585-485f-ae91-116f2123dadf", "address": "fa:16:3e:44:6d:b2", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc84419ee-15", "ovs_interfaceid": "c84419ee-1585-485f-ae91-116f2123dadf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:27:19 np0005486808 nova_compute[259627]: 2025-10-14 09:27:19.742 2 DEBUG oslo_concurrency.lockutils [req-c895f2e9-ef5c-4cd4-82b3-1d6f1abd87ce req-55eb0779-c0f4-4bf1-ba24-84abcd1e7d63 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-1b7f03a3-6b73-478f-bf13-cf062714faef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:27:19 np0005486808 nova_compute[259627]: 2025-10-14 09:27:19.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2226: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.1 MiB/s wr, 230 op/s
Oct 14 05:27:21 np0005486808 nova_compute[259627]: 2025-10-14 09:27:21.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:21 np0005486808 podman[391681]: 2025-10-14 09:27:21.652230641 +0000 UTC m=+0.064011193 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 14 05:27:21 np0005486808 podman[391680]: 2025-10-14 09:27:21.68698908 +0000 UTC m=+0.098804113 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 14 05:27:21 np0005486808 nova_compute[259627]: 2025-10-14 09:27:21.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:21.954 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:27:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:21.956 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 14 05:27:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2227: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.3 MiB/s wr, 172 op/s
Oct 14 05:27:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:27:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2228: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 133 KiB/s wr, 92 op/s
Oct 14 05:27:24 np0005486808 nova_compute[259627]: 2025-10-14 09:27:24.399 2 DEBUG nova.compute.manager [req-b4f57d2b-8b87-4ecd-a060-cb3d91c818b4 req-38812c8c-10d5-4a8b-93c1-32b9363a1c37 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Received event network-changed-563493a8-f727-4a25-97de-548a04398264 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:27:24 np0005486808 nova_compute[259627]: 2025-10-14 09:27:24.400 2 DEBUG nova.compute.manager [req-b4f57d2b-8b87-4ecd-a060-cb3d91c818b4 req-38812c8c-10d5-4a8b-93c1-32b9363a1c37 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Refreshing instance network info cache due to event network-changed-563493a8-f727-4a25-97de-548a04398264. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:27:24 np0005486808 nova_compute[259627]: 2025-10-14 09:27:24.400 2 DEBUG oslo_concurrency.lockutils [req-b4f57d2b-8b87-4ecd-a060-cb3d91c818b4 req-38812c8c-10d5-4a8b-93c1-32b9363a1c37 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e16af982-3cd8-4600-99c4-aeec45986dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:27:24 np0005486808 nova_compute[259627]: 2025-10-14 09:27:24.400 2 DEBUG oslo_concurrency.lockutils [req-b4f57d2b-8b87-4ecd-a060-cb3d91c818b4 req-38812c8c-10d5-4a8b-93c1-32b9363a1c37 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e16af982-3cd8-4600-99c4-aeec45986dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:27:24 np0005486808 nova_compute[259627]: 2025-10-14 09:27:24.401 2 DEBUG nova.network.neutron [req-b4f57d2b-8b87-4ecd-a060-cb3d91c818b4 req-38812c8c-10d5-4a8b-93c1-32b9363a1c37 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Refreshing network info cache for port 563493a8-f727-4a25-97de-548a04398264 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:27:24 np0005486808 nova_compute[259627]: 2025-10-14 09:27:24.436 2 DEBUG oslo_concurrency.lockutils [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Acquiring lock "e16af982-3cd8-4600-99c4-aeec45986dda" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:27:24 np0005486808 nova_compute[259627]: 2025-10-14 09:27:24.437 2 DEBUG oslo_concurrency.lockutils [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Lock "e16af982-3cd8-4600-99c4-aeec45986dda" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:27:24 np0005486808 nova_compute[259627]: 2025-10-14 09:27:24.437 2 DEBUG oslo_concurrency.lockutils [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Acquiring lock "e16af982-3cd8-4600-99c4-aeec45986dda-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:27:24 np0005486808 nova_compute[259627]: 2025-10-14 09:27:24.437 2 DEBUG oslo_concurrency.lockutils [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Lock "e16af982-3cd8-4600-99c4-aeec45986dda-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:27:24 np0005486808 nova_compute[259627]: 2025-10-14 09:27:24.438 2 DEBUG oslo_concurrency.lockutils [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Lock "e16af982-3cd8-4600-99c4-aeec45986dda-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:27:24 np0005486808 nova_compute[259627]: 2025-10-14 09:27:24.440 2 INFO nova.compute.manager [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Terminating instance#033[00m
Oct 14 05:27:24 np0005486808 nova_compute[259627]: 2025-10-14 09:27:24.441 2 DEBUG nova.compute.manager [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:27:24 np0005486808 kernel: tap563493a8-f7 (unregistering): left promiscuous mode
Oct 14 05:27:24 np0005486808 NetworkManager[44885]: <info>  [1760434044.5240] device (tap563493a8-f7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:27:24 np0005486808 nova_compute[259627]: 2025-10-14 09:27:24.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:24 np0005486808 ovn_controller[152662]: 2025-10-14T09:27:24Z|01365|binding|INFO|Releasing lport 563493a8-f727-4a25-97de-548a04398264 from this chassis (sb_readonly=0)
Oct 14 05:27:24 np0005486808 ovn_controller[152662]: 2025-10-14T09:27:24Z|01366|binding|INFO|Setting lport 563493a8-f727-4a25-97de-548a04398264 down in Southbound
Oct 14 05:27:24 np0005486808 ovn_controller[152662]: 2025-10-14T09:27:24Z|01367|binding|INFO|Removing iface tap563493a8-f7 ovn-installed in OVS
Oct 14 05:27:24 np0005486808 nova_compute[259627]: 2025-10-14 09:27:24.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.553 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:ee:f5 10.100.0.8'], port_security=['fa:16:3e:8d:ee:f5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e16af982-3cd8-4600-99c4-aeec45986dda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d00461c7-a787-45ae-8db1-11ba8f94e301', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '261a0f5c61f04b77863377f034e70f01', 'neutron:revision_number': '4', 'neutron:security_group_ids': '662e95f2-6ec1-4f95-993e-421550aa8e7c 6cefb658-a903-42cb-a2fb-84294f9eff2b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=789ab81e-5f79-4555-8f72-bf440f2a44f6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=563493a8-f727-4a25-97de-548a04398264) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:27:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.554 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 563493a8-f727-4a25-97de-548a04398264 in datapath d00461c7-a787-45ae-8db1-11ba8f94e301 unbound from our chassis#033[00m
Oct 14 05:27:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.556 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d00461c7-a787-45ae-8db1-11ba8f94e301, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:27:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.558 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5315c4d1-f528-4c82-86de-d9ead3b38eae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:27:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.559 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301 namespace which is not needed anymore#033[00m
Oct 14 05:27:24 np0005486808 nova_compute[259627]: 2025-10-14 09:27:24.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:24 np0005486808 systemd[1]: machine-qemu\x2d161\x2dinstance\x2d00000080.scope: Deactivated successfully.
Oct 14 05:27:24 np0005486808 systemd[1]: machine-qemu\x2d161\x2dinstance\x2d00000080.scope: Consumed 13.395s CPU time.
Oct 14 05:27:24 np0005486808 systemd-machined[214636]: Machine qemu-161-instance-00000080 terminated.
Oct 14 05:27:24 np0005486808 kernel: tap563493a8-f7: entered promiscuous mode
Oct 14 05:27:24 np0005486808 NetworkManager[44885]: <info>  [1760434044.6584] manager: (tap563493a8-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/562)
Oct 14 05:27:24 np0005486808 kernel: tap563493a8-f7 (unregistering): left promiscuous mode
Oct 14 05:27:24 np0005486808 ovn_controller[152662]: 2025-10-14T09:27:24Z|01368|binding|INFO|Claiming lport 563493a8-f727-4a25-97de-548a04398264 for this chassis.
Oct 14 05:27:24 np0005486808 ovn_controller[152662]: 2025-10-14T09:27:24Z|01369|binding|INFO|563493a8-f727-4a25-97de-548a04398264: Claiming fa:16:3e:8d:ee:f5 10.100.0.8
Oct 14 05:27:24 np0005486808 nova_compute[259627]: 2025-10-14 09:27:24.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.665 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:ee:f5 10.100.0.8'], port_security=['fa:16:3e:8d:ee:f5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e16af982-3cd8-4600-99c4-aeec45986dda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d00461c7-a787-45ae-8db1-11ba8f94e301', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '261a0f5c61f04b77863377f034e70f01', 'neutron:revision_number': '4', 'neutron:security_group_ids': '662e95f2-6ec1-4f95-993e-421550aa8e7c 6cefb658-a903-42cb-a2fb-84294f9eff2b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=789ab81e-5f79-4555-8f72-bf440f2a44f6, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=563493a8-f727-4a25-97de-548a04398264) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:27:24 np0005486808 nova_compute[259627]: 2025-10-14 09:27:24.679 2 INFO nova.virt.libvirt.driver [-] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Instance destroyed successfully.#033[00m
Oct 14 05:27:24 np0005486808 nova_compute[259627]: 2025-10-14 09:27:24.679 2 DEBUG nova.objects.instance [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Lazy-loading 'resources' on Instance uuid e16af982-3cd8-4600-99c4-aeec45986dda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:27:24 np0005486808 nova_compute[259627]: 2025-10-14 09:27:24.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:24 np0005486808 ovn_controller[152662]: 2025-10-14T09:27:24Z|01370|binding|INFO|Releasing lport 563493a8-f727-4a25-97de-548a04398264 from this chassis (sb_readonly=0)
Oct 14 05:27:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.691 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:ee:f5 10.100.0.8'], port_security=['fa:16:3e:8d:ee:f5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e16af982-3cd8-4600-99c4-aeec45986dda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d00461c7-a787-45ae-8db1-11ba8f94e301', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '261a0f5c61f04b77863377f034e70f01', 'neutron:revision_number': '4', 'neutron:security_group_ids': '662e95f2-6ec1-4f95-993e-421550aa8e7c 6cefb658-a903-42cb-a2fb-84294f9eff2b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=789ab81e-5f79-4555-8f72-bf440f2a44f6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=563493a8-f727-4a25-97de-548a04398264) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:27:24 np0005486808 nova_compute[259627]: 2025-10-14 09:27:24.697 2 DEBUG nova.virt.libvirt.vif [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:26:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-211623791-access_point-467670451',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-211623791-access_point-467670451',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-211623791-acc',id=128,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB7KjYB8TOdxJDBX1/D4wnlTYCVpAUwpAx9R+OpzTEM4NvI2eIFjA8TxpNcbv510O1pJN+wHpAyy5P1FA9H3PUquH/ijntIYtfRD9NCYpHRGTApBy/zfs0NfRBu1//Wkcw==',key_name='tempest-TestSecurityGroupsBasicOps-887135051',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:26:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='261a0f5c61f04b77863377f034e70f01',ramdisk_id='',reservation_id='r-95hbfz6l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-211623791',owner_user_name='tempest-TestSecurityGroupsBasicOps-211623791-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:26:58Z,user_data=None,user_id='a250f9c11f864fb49faf97cbb4399ece',uuid=e16af982-3cd8-4600-99c4-aeec45986dda,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "563493a8-f727-4a25-97de-548a04398264", "address": "fa:16:3e:8d:ee:f5", "network": {"id": "d00461c7-a787-45ae-8db1-11ba8f94e301", "bridge": "br-int", "label": "tempest-network-smoke--529638516", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "261a0f5c61f04b77863377f034e70f01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap563493a8-f7", "ovs_interfaceid": "563493a8-f727-4a25-97de-548a04398264", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:27:24 np0005486808 nova_compute[259627]: 2025-10-14 09:27:24.698 2 DEBUG nova.network.os_vif_util [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Converting VIF {"id": "563493a8-f727-4a25-97de-548a04398264", "address": "fa:16:3e:8d:ee:f5", "network": {"id": "d00461c7-a787-45ae-8db1-11ba8f94e301", "bridge": "br-int", "label": "tempest-network-smoke--529638516", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "261a0f5c61f04b77863377f034e70f01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap563493a8-f7", "ovs_interfaceid": "563493a8-f727-4a25-97de-548a04398264", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:27:24 np0005486808 nova_compute[259627]: 2025-10-14 09:27:24.698 2 DEBUG nova.network.os_vif_util [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8d:ee:f5,bridge_name='br-int',has_traffic_filtering=True,id=563493a8-f727-4a25-97de-548a04398264,network=Network(d00461c7-a787-45ae-8db1-11ba8f94e301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap563493a8-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:27:24 np0005486808 nova_compute[259627]: 2025-10-14 09:27:24.699 2 DEBUG os_vif [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8d:ee:f5,bridge_name='br-int',has_traffic_filtering=True,id=563493a8-f727-4a25-97de-548a04398264,network=Network(d00461c7-a787-45ae-8db1-11ba8f94e301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap563493a8-f7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:27:24 np0005486808 nova_compute[259627]: 2025-10-14 09:27:24.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:24 np0005486808 nova_compute[259627]: 2025-10-14 09:27:24.701 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap563493a8-f7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:27:24 np0005486808 nova_compute[259627]: 2025-10-14 09:27:24.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:24 np0005486808 nova_compute[259627]: 2025-10-14 09:27:24.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:24 np0005486808 nova_compute[259627]: 2025-10-14 09:27:24.708 2 INFO os_vif [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8d:ee:f5,bridge_name='br-int',has_traffic_filtering=True,id=563493a8-f727-4a25-97de-548a04398264,network=Network(d00461c7-a787-45ae-8db1-11ba8f94e301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap563493a8-f7')#033[00m
Oct 14 05:27:24 np0005486808 neutron-haproxy-ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301[391242]: [NOTICE]   (391246) : haproxy version is 2.8.14-c23fe91
Oct 14 05:27:24 np0005486808 neutron-haproxy-ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301[391242]: [NOTICE]   (391246) : path to executable is /usr/sbin/haproxy
Oct 14 05:27:24 np0005486808 neutron-haproxy-ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301[391242]: [WARNING]  (391246) : Exiting Master process...
Oct 14 05:27:24 np0005486808 neutron-haproxy-ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301[391242]: [ALERT]    (391246) : Current worker (391248) exited with code 143 (Terminated)
Oct 14 05:27:24 np0005486808 neutron-haproxy-ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301[391242]: [WARNING]  (391246) : All workers exited. Exiting... (0)
Oct 14 05:27:24 np0005486808 systemd[1]: libpod-05c9b3a9bf2a157d842400147b82acd7ff81b1cf192b51decd7d65eeaad74d18.scope: Deactivated successfully.
Oct 14 05:27:24 np0005486808 podman[391749]: 2025-10-14 09:27:24.750160358 +0000 UTC m=+0.063214683 container died 05c9b3a9bf2a157d842400147b82acd7ff81b1cf192b51decd7d65eeaad74d18 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:27:24 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-05c9b3a9bf2a157d842400147b82acd7ff81b1cf192b51decd7d65eeaad74d18-userdata-shm.mount: Deactivated successfully.
Oct 14 05:27:24 np0005486808 systemd[1]: var-lib-containers-storage-overlay-b1209d6b30b19e606c66aff50aa191b9774b9d3e66e60de7f9a75b20180f215f-merged.mount: Deactivated successfully.
Oct 14 05:27:24 np0005486808 podman[391749]: 2025-10-14 09:27:24.802100662 +0000 UTC m=+0.115154987 container cleanup 05c9b3a9bf2a157d842400147b82acd7ff81b1cf192b51decd7d65eeaad74d18 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009)
Oct 14 05:27:24 np0005486808 systemd[1]: libpod-conmon-05c9b3a9bf2a157d842400147b82acd7ff81b1cf192b51decd7d65eeaad74d18.scope: Deactivated successfully.
Oct 14 05:27:24 np0005486808 podman[391797]: 2025-10-14 09:27:24.885769729 +0000 UTC m=+0.062848694 container remove 05c9b3a9bf2a157d842400147b82acd7ff81b1cf192b51decd7d65eeaad74d18 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:27:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.894 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1361b30a-e04c-45c3-a369-0cb5bc0273df]: (4, ('Tue Oct 14 09:27:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301 (05c9b3a9bf2a157d842400147b82acd7ff81b1cf192b51decd7d65eeaad74d18)\n05c9b3a9bf2a157d842400147b82acd7ff81b1cf192b51decd7d65eeaad74d18\nTue Oct 14 09:27:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301 (05c9b3a9bf2a157d842400147b82acd7ff81b1cf192b51decd7d65eeaad74d18)\n05c9b3a9bf2a157d842400147b82acd7ff81b1cf192b51decd7d65eeaad74d18\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:27:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.896 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[199b151d-644a-4617-9d7e-80e8d970d505]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:27:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.897 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd00461c7-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:27:24 np0005486808 kernel: tapd00461c7-a0: left promiscuous mode
Oct 14 05:27:24 np0005486808 nova_compute[259627]: 2025-10-14 09:27:24.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:24 np0005486808 nova_compute[259627]: 2025-10-14 09:27:24.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.920 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[244aa472-2e7b-43f6-baee-409c3ec686ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:27:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.944 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3641f7cb-b1e6-476a-89a3-78aea62cf2d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:27:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.946 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[92c98f13-2568-4fb0-88c7-7999fdf063a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:27:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.957 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:27:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.975 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[48267b23-20c4-40ed-bba9-19131fd70cdf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 779635, 'reachable_time': 32329, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391813, 'error': None, 'target': 'ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:27:24 np0005486808 systemd[1]: run-netns-ovnmeta\x2dd00461c7\x2da787\x2d45ae\x2d8db1\x2d11ba8f94e301.mount: Deactivated successfully.
Oct 14 05:27:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.979 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d00461c7-a787-45ae-8db1-11ba8f94e301 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:27:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.979 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[564821e3-9427-4f05-9ba3-bf7d6cd43329]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:27:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.981 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 563493a8-f727-4a25-97de-548a04398264 in datapath d00461c7-a787-45ae-8db1-11ba8f94e301 unbound from our chassis#033[00m
Oct 14 05:27:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.983 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d00461c7-a787-45ae-8db1-11ba8f94e301, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:27:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.989 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fb04dbb8-8256-4fac-a1ee-4b1e868ac7cb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:27:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.990 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 563493a8-f727-4a25-97de-548a04398264 in datapath d00461c7-a787-45ae-8db1-11ba8f94e301 unbound from our chassis#033[00m
Oct 14 05:27:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.991 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d00461c7-a787-45ae-8db1-11ba8f94e301, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:27:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:24.992 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1f42399a-a34d-45a6-806d-4912fe11ab7f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:27:25 np0005486808 nova_compute[259627]: 2025-10-14 09:27:24.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:25 np0005486808 nova_compute[259627]: 2025-10-14 09:27:25.079 2 INFO nova.virt.libvirt.driver [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Deleting instance files /var/lib/nova/instances/e16af982-3cd8-4600-99c4-aeec45986dda_del#033[00m
Oct 14 05:27:25 np0005486808 nova_compute[259627]: 2025-10-14 09:27:25.080 2 INFO nova.virt.libvirt.driver [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Deletion of /var/lib/nova/instances/e16af982-3cd8-4600-99c4-aeec45986dda_del complete#033[00m
Oct 14 05:27:25 np0005486808 nova_compute[259627]: 2025-10-14 09:27:25.136 2 INFO nova.compute.manager [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Took 0.69 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:27:25 np0005486808 nova_compute[259627]: 2025-10-14 09:27:25.136 2 DEBUG oslo.service.loopingcall [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:27:25 np0005486808 nova_compute[259627]: 2025-10-14 09:27:25.137 2 DEBUG nova.compute.manager [-] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:27:25 np0005486808 nova_compute[259627]: 2025-10-14 09:27:25.137 2 DEBUG nova.network.neutron [-] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:27:25 np0005486808 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Oct 14 05:27:25 np0005486808 ovn_controller[152662]: 2025-10-14T09:27:25Z|00156|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:44:6d:b2 10.100.0.9
Oct 14 05:27:25 np0005486808 ovn_controller[152662]: 2025-10-14T09:27:25Z|00157|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:44:6d:b2 10.100.0.9
Oct 14 05:27:26 np0005486808 nova_compute[259627]: 2025-10-14 09:27:26.004 2 DEBUG nova.network.neutron [-] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:27:26 np0005486808 nova_compute[259627]: 2025-10-14 09:27:26.022 2 INFO nova.compute.manager [-] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Took 0.89 seconds to deallocate network for instance.#033[00m
Oct 14 05:27:26 np0005486808 nova_compute[259627]: 2025-10-14 09:27:26.065 2 DEBUG oslo_concurrency.lockutils [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:27:26 np0005486808 nova_compute[259627]: 2025-10-14 09:27:26.066 2 DEBUG oslo_concurrency.lockutils [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:27:26 np0005486808 nova_compute[259627]: 2025-10-14 09:27:26.166 2 DEBUG oslo_concurrency.processutils [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:27:26 np0005486808 nova_compute[259627]: 2025-10-14 09:27:26.207 2 DEBUG nova.network.neutron [req-b4f57d2b-8b87-4ecd-a060-cb3d91c818b4 req-38812c8c-10d5-4a8b-93c1-32b9363a1c37 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Updated VIF entry in instance network info cache for port 563493a8-f727-4a25-97de-548a04398264. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:27:26 np0005486808 nova_compute[259627]: 2025-10-14 09:27:26.208 2 DEBUG nova.network.neutron [req-b4f57d2b-8b87-4ecd-a060-cb3d91c818b4 req-38812c8c-10d5-4a8b-93c1-32b9363a1c37 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Updating instance_info_cache with network_info: [{"id": "563493a8-f727-4a25-97de-548a04398264", "address": "fa:16:3e:8d:ee:f5", "network": {"id": "d00461c7-a787-45ae-8db1-11ba8f94e301", "bridge": "br-int", "label": "tempest-network-smoke--529638516", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "261a0f5c61f04b77863377f034e70f01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap563493a8-f7", "ovs_interfaceid": "563493a8-f727-4a25-97de-548a04398264", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:27:26 np0005486808 nova_compute[259627]: 2025-10-14 09:27:26.232 2 DEBUG oslo_concurrency.lockutils [req-b4f57d2b-8b87-4ecd-a060-cb3d91c818b4 req-38812c8c-10d5-4a8b-93c1-32b9363a1c37 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-e16af982-3cd8-4600-99c4-aeec45986dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:27:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2229: 305 pgs: 305 active+clean; 268 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 163 op/s
Oct 14 05:27:26 np0005486808 nova_compute[259627]: 2025-10-14 09:27:26.523 2 DEBUG nova.compute.manager [req-abf58040-d17a-4d57-ab35-8697d5a77d7f req-46f49ed2-b17c-45c0-a25d-9cf664ae5b3c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Received event network-vif-unplugged-563493a8-f727-4a25-97de-548a04398264 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:27:26 np0005486808 nova_compute[259627]: 2025-10-14 09:27:26.523 2 DEBUG oslo_concurrency.lockutils [req-abf58040-d17a-4d57-ab35-8697d5a77d7f req-46f49ed2-b17c-45c0-a25d-9cf664ae5b3c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e16af982-3cd8-4600-99c4-aeec45986dda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:27:26 np0005486808 nova_compute[259627]: 2025-10-14 09:27:26.524 2 DEBUG oslo_concurrency.lockutils [req-abf58040-d17a-4d57-ab35-8697d5a77d7f req-46f49ed2-b17c-45c0-a25d-9cf664ae5b3c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e16af982-3cd8-4600-99c4-aeec45986dda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:27:26 np0005486808 nova_compute[259627]: 2025-10-14 09:27:26.524 2 DEBUG oslo_concurrency.lockutils [req-abf58040-d17a-4d57-ab35-8697d5a77d7f req-46f49ed2-b17c-45c0-a25d-9cf664ae5b3c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e16af982-3cd8-4600-99c4-aeec45986dda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:27:26 np0005486808 nova_compute[259627]: 2025-10-14 09:27:26.524 2 DEBUG nova.compute.manager [req-abf58040-d17a-4d57-ab35-8697d5a77d7f req-46f49ed2-b17c-45c0-a25d-9cf664ae5b3c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] No waiting events found dispatching network-vif-unplugged-563493a8-f727-4a25-97de-548a04398264 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:27:26 np0005486808 nova_compute[259627]: 2025-10-14 09:27:26.524 2 WARNING nova.compute.manager [req-abf58040-d17a-4d57-ab35-8697d5a77d7f req-46f49ed2-b17c-45c0-a25d-9cf664ae5b3c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Received unexpected event network-vif-unplugged-563493a8-f727-4a25-97de-548a04398264 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:27:26 np0005486808 nova_compute[259627]: 2025-10-14 09:27:26.524 2 DEBUG nova.compute.manager [req-abf58040-d17a-4d57-ab35-8697d5a77d7f req-46f49ed2-b17c-45c0-a25d-9cf664ae5b3c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Received event network-vif-plugged-563493a8-f727-4a25-97de-548a04398264 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:27:26 np0005486808 nova_compute[259627]: 2025-10-14 09:27:26.524 2 DEBUG oslo_concurrency.lockutils [req-abf58040-d17a-4d57-ab35-8697d5a77d7f req-46f49ed2-b17c-45c0-a25d-9cf664ae5b3c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e16af982-3cd8-4600-99c4-aeec45986dda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:27:26 np0005486808 nova_compute[259627]: 2025-10-14 09:27:26.525 2 DEBUG oslo_concurrency.lockutils [req-abf58040-d17a-4d57-ab35-8697d5a77d7f req-46f49ed2-b17c-45c0-a25d-9cf664ae5b3c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e16af982-3cd8-4600-99c4-aeec45986dda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:27:26 np0005486808 nova_compute[259627]: 2025-10-14 09:27:26.525 2 DEBUG oslo_concurrency.lockutils [req-abf58040-d17a-4d57-ab35-8697d5a77d7f req-46f49ed2-b17c-45c0-a25d-9cf664ae5b3c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e16af982-3cd8-4600-99c4-aeec45986dda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:27:26 np0005486808 nova_compute[259627]: 2025-10-14 09:27:26.525 2 DEBUG nova.compute.manager [req-abf58040-d17a-4d57-ab35-8697d5a77d7f req-46f49ed2-b17c-45c0-a25d-9cf664ae5b3c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] No waiting events found dispatching network-vif-plugged-563493a8-f727-4a25-97de-548a04398264 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:27:26 np0005486808 nova_compute[259627]: 2025-10-14 09:27:26.525 2 WARNING nova.compute.manager [req-abf58040-d17a-4d57-ab35-8697d5a77d7f req-46f49ed2-b17c-45c0-a25d-9cf664ae5b3c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Received unexpected event network-vif-plugged-563493a8-f727-4a25-97de-548a04398264 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:27:26 np0005486808 nova_compute[259627]: 2025-10-14 09:27:26.525 2 DEBUG nova.compute.manager [req-abf58040-d17a-4d57-ab35-8697d5a77d7f req-46f49ed2-b17c-45c0-a25d-9cf664ae5b3c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Received event network-vif-deleted-563493a8-f727-4a25-97de-548a04398264 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:27:26 np0005486808 nova_compute[259627]: 2025-10-14 09:27:26.525 2 INFO nova.compute.manager [req-abf58040-d17a-4d57-ab35-8697d5a77d7f req-46f49ed2-b17c-45c0-a25d-9cf664ae5b3c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Neutron deleted interface 563493a8-f727-4a25-97de-548a04398264; detaching it from the instance and deleting it from the info cache#033[00m
Oct 14 05:27:26 np0005486808 nova_compute[259627]: 2025-10-14 09:27:26.525 2 DEBUG nova.network.neutron [req-abf58040-d17a-4d57-ab35-8697d5a77d7f req-46f49ed2-b17c-45c0-a25d-9cf664ae5b3c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:27:26 np0005486808 nova_compute[259627]: 2025-10-14 09:27:26.542 2 DEBUG nova.compute.manager [req-abf58040-d17a-4d57-ab35-8697d5a77d7f req-46f49ed2-b17c-45c0-a25d-9cf664ae5b3c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Detach interface failed, port_id=563493a8-f727-4a25-97de-548a04398264, reason: Instance e16af982-3cd8-4600-99c4-aeec45986dda could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct 14 05:27:26 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:27:26 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2897051182' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:27:26 np0005486808 nova_compute[259627]: 2025-10-14 09:27:26.605 2 DEBUG oslo_concurrency.processutils [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:27:26 np0005486808 nova_compute[259627]: 2025-10-14 09:27:26.613 2 DEBUG nova.compute.provider_tree [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:27:26 np0005486808 nova_compute[259627]: 2025-10-14 09:27:26.629 2 DEBUG nova.scheduler.client.report [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:27:26 np0005486808 nova_compute[259627]: 2025-10-14 09:27:26.652 2 DEBUG oslo_concurrency.lockutils [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:27:26 np0005486808 nova_compute[259627]: 2025-10-14 09:27:26.687 2 INFO nova.scheduler.client.report [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Deleted allocations for instance e16af982-3cd8-4600-99c4-aeec45986dda#033[00m
Oct 14 05:27:26 np0005486808 nova_compute[259627]: 2025-10-14 09:27:26.749 2 DEBUG oslo_concurrency.lockutils [None req-ad3376d5-9654-46cd-8762-7660d0bce533 a250f9c11f864fb49faf97cbb4399ece 261a0f5c61f04b77863377f034e70f01 - - default default] Lock "e16af982-3cd8-4600-99c4-aeec45986dda" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.312s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:27:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:27:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2230: 305 pgs: 305 active+clean; 268 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 263 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Oct 14 05:27:29 np0005486808 nova_compute[259627]: 2025-10-14 09:27:29.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:30 np0005486808 nova_compute[259627]: 2025-10-14 09:27:30.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:30 np0005486808 ovn_controller[152662]: 2025-10-14T09:27:30Z|01371|binding|INFO|Releasing lport dabdfa0b-267a-4754-a026-601ab2593a32 from this chassis (sb_readonly=0)
Oct 14 05:27:30 np0005486808 ovn_controller[152662]: 2025-10-14T09:27:30Z|01372|binding|INFO|Releasing lport a300b10f-f6fd-47ab-bc03-160d747e5ac0 from this chassis (sb_readonly=0)
Oct 14 05:27:30 np0005486808 nova_compute[259627]: 2025-10-14 09:27:30.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2231: 305 pgs: 305 active+clean; 275 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 350 KiB/s rd, 2.2 MiB/s wr, 88 op/s
Oct 14 05:27:31 np0005486808 nova_compute[259627]: 2025-10-14 09:27:31.303 2 DEBUG nova.compute.manager [req-102500e4-8ffa-47dd-90cb-63a8dd36a94b req-ac89bac5-7506-47ce-8a46-5becabf16946 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Received event network-changed-ea25832f-13d3-41ec-874c-e622d24c912e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:27:31 np0005486808 nova_compute[259627]: 2025-10-14 09:27:31.304 2 DEBUG nova.compute.manager [req-102500e4-8ffa-47dd-90cb-63a8dd36a94b req-ac89bac5-7506-47ce-8a46-5becabf16946 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Refreshing instance network info cache due to event network-changed-ea25832f-13d3-41ec-874c-e622d24c912e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:27:31 np0005486808 nova_compute[259627]: 2025-10-14 09:27:31.304 2 DEBUG oslo_concurrency.lockutils [req-102500e4-8ffa-47dd-90cb-63a8dd36a94b req-ac89bac5-7506-47ce-8a46-5becabf16946 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-4310595f-2280-438c-97ca-f2de57527501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:27:31 np0005486808 nova_compute[259627]: 2025-10-14 09:27:31.304 2 DEBUG oslo_concurrency.lockutils [req-102500e4-8ffa-47dd-90cb-63a8dd36a94b req-ac89bac5-7506-47ce-8a46-5becabf16946 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-4310595f-2280-438c-97ca-f2de57527501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:27:31 np0005486808 nova_compute[259627]: 2025-10-14 09:27:31.304 2 DEBUG nova.network.neutron [req-102500e4-8ffa-47dd-90cb-63a8dd36a94b req-ac89bac5-7506-47ce-8a46-5becabf16946 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Refreshing network info cache for port ea25832f-13d3-41ec-874c-e622d24c912e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:27:31 np0005486808 nova_compute[259627]: 2025-10-14 09:27:31.357 2 DEBUG oslo_concurrency.lockutils [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "4310595f-2280-438c-97ca-f2de57527501" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:27:31 np0005486808 nova_compute[259627]: 2025-10-14 09:27:31.357 2 DEBUG oslo_concurrency.lockutils [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4310595f-2280-438c-97ca-f2de57527501" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:27:31 np0005486808 nova_compute[259627]: 2025-10-14 09:27:31.358 2 DEBUG oslo_concurrency.lockutils [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "4310595f-2280-438c-97ca-f2de57527501-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:27:31 np0005486808 nova_compute[259627]: 2025-10-14 09:27:31.358 2 DEBUG oslo_concurrency.lockutils [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4310595f-2280-438c-97ca-f2de57527501-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:27:31 np0005486808 nova_compute[259627]: 2025-10-14 09:27:31.359 2 DEBUG oslo_concurrency.lockutils [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4310595f-2280-438c-97ca-f2de57527501-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:27:31 np0005486808 nova_compute[259627]: 2025-10-14 09:27:31.360 2 INFO nova.compute.manager [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Terminating instance#033[00m
Oct 14 05:27:31 np0005486808 nova_compute[259627]: 2025-10-14 09:27:31.362 2 DEBUG nova.compute.manager [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:27:31 np0005486808 kernel: tapea25832f-13 (unregistering): left promiscuous mode
Oct 14 05:27:31 np0005486808 NetworkManager[44885]: <info>  [1760434051.4319] device (tapea25832f-13): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:27:31 np0005486808 ovn_controller[152662]: 2025-10-14T09:27:31Z|01373|binding|INFO|Releasing lport ea25832f-13d3-41ec-874c-e622d24c912e from this chassis (sb_readonly=0)
Oct 14 05:27:31 np0005486808 ovn_controller[152662]: 2025-10-14T09:27:31Z|01374|binding|INFO|Setting lport ea25832f-13d3-41ec-874c-e622d24c912e down in Southbound
Oct 14 05:27:31 np0005486808 ovn_controller[152662]: 2025-10-14T09:27:31Z|01375|binding|INFO|Removing iface tapea25832f-13 ovn-installed in OVS
Oct 14 05:27:31 np0005486808 nova_compute[259627]: 2025-10-14 09:27:31.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:31.451 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:45:3f 10.100.0.12'], port_security=['fa:16:3e:fe:45:3f 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '4310595f-2280-438c-97ca-f2de57527501', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f754bf649a2404fa8dee732f5aab36e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2ea5a077-a2c7-41d4-9c82-971893cbca2e 5ded162a-2a98-4fc1-94d1-b742c1816f61', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=57f23724-2b34-445a-b3d0-46a0f0ee87c3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=ea25832f-13d3-41ec-874c-e622d24c912e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:27:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:31.453 162547 INFO neutron.agent.ovn.metadata.agent [-] Port ea25832f-13d3-41ec-874c-e622d24c912e in datapath 69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c unbound from our chassis#033[00m
Oct 14 05:27:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:31.455 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:27:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:31.457 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7073eb21-77e5-4aec-806f-18dfa98ad0b1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:27:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:31.458 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c namespace which is not needed anymore#033[00m
Oct 14 05:27:31 np0005486808 nova_compute[259627]: 2025-10-14 09:27:31.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:31 np0005486808 systemd[1]: machine-qemu\x2d159\x2dinstance\x2d0000007e.scope: Deactivated successfully.
Oct 14 05:27:31 np0005486808 systemd[1]: machine-qemu\x2d159\x2dinstance\x2d0000007e.scope: Consumed 15.827s CPU time.
Oct 14 05:27:31 np0005486808 systemd-machined[214636]: Machine qemu-159-instance-0000007e terminated.
Oct 14 05:27:31 np0005486808 nova_compute[259627]: 2025-10-14 09:27:31.614 2 INFO nova.virt.libvirt.driver [-] [instance: 4310595f-2280-438c-97ca-f2de57527501] Instance destroyed successfully.#033[00m
Oct 14 05:27:31 np0005486808 nova_compute[259627]: 2025-10-14 09:27:31.616 2 DEBUG nova.objects.instance [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'resources' on Instance uuid 4310595f-2280-438c-97ca-f2de57527501 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:27:31 np0005486808 nova_compute[259627]: 2025-10-14 09:27:31.629 2 DEBUG nova.virt.libvirt.vif [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-136745986',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-136745986',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ac',id=126,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHc9phIuU5FXYInHFmneK7ofu0Hronr3GOHgS3ZKrK8UZEcxqPRrwvV2ktBWbk2vf9CswqByMiWPlH6Y1ffYCmRhb+LdZFzcPKCiYu31yXKGqBJ2r6m/arw2a5HgrQ1Icw==',key_name='tempest-TestSecurityGroupsBasicOps-69160241',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:26:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-pph0582m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:26:11Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=4310595f-2280-438c-97ca-f2de57527501,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ea25832f-13d3-41ec-874c-e622d24c912e", "address": "fa:16:3e:fe:45:3f", "network": {"id": "69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c", "bridge": "br-int", "label": "tempest-network-smoke--1422018789", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea25832f-13", "ovs_interfaceid": "ea25832f-13d3-41ec-874c-e622d24c912e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:27:31 np0005486808 neutron-haproxy-ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c[389146]: [NOTICE]   (389150) : haproxy version is 2.8.14-c23fe91
Oct 14 05:27:31 np0005486808 neutron-haproxy-ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c[389146]: [NOTICE]   (389150) : path to executable is /usr/sbin/haproxy
Oct 14 05:27:31 np0005486808 neutron-haproxy-ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c[389146]: [WARNING]  (389150) : Exiting Master process...
Oct 14 05:27:31 np0005486808 neutron-haproxy-ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c[389146]: [WARNING]  (389150) : Exiting Master process...
Oct 14 05:27:31 np0005486808 nova_compute[259627]: 2025-10-14 09:27:31.630 2 DEBUG nova.network.os_vif_util [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "ea25832f-13d3-41ec-874c-e622d24c912e", "address": "fa:16:3e:fe:45:3f", "network": {"id": "69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c", "bridge": "br-int", "label": "tempest-network-smoke--1422018789", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea25832f-13", "ovs_interfaceid": "ea25832f-13d3-41ec-874c-e622d24c912e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:27:31 np0005486808 nova_compute[259627]: 2025-10-14 09:27:31.631 2 DEBUG nova.network.os_vif_util [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fe:45:3f,bridge_name='br-int',has_traffic_filtering=True,id=ea25832f-13d3-41ec-874c-e622d24c912e,network=Network(69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea25832f-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:27:31 np0005486808 nova_compute[259627]: 2025-10-14 09:27:31.632 2 DEBUG os_vif [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fe:45:3f,bridge_name='br-int',has_traffic_filtering=True,id=ea25832f-13d3-41ec-874c-e622d24c912e,network=Network(69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea25832f-13') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:27:31 np0005486808 neutron-haproxy-ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c[389146]: [ALERT]    (389150) : Current worker (389152) exited with code 143 (Terminated)
Oct 14 05:27:31 np0005486808 neutron-haproxy-ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c[389146]: [WARNING]  (389150) : All workers exited. Exiting... (0)
Oct 14 05:27:31 np0005486808 systemd[1]: libpod-a44a511e51cc783fd3902e2425448585c0bfa4144e3c0ed2308938b41cd778d1.scope: Deactivated successfully.
Oct 14 05:27:31 np0005486808 nova_compute[259627]: 2025-10-14 09:27:31.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:31 np0005486808 nova_compute[259627]: 2025-10-14 09:27:31.639 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea25832f-13, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:27:31 np0005486808 podman[391860]: 2025-10-14 09:27:31.641309825 +0000 UTC m=+0.051701499 container died a44a511e51cc783fd3902e2425448585c0bfa4144e3c0ed2308938b41cd778d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009)
Oct 14 05:27:31 np0005486808 nova_compute[259627]: 2025-10-14 09:27:31.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:31 np0005486808 nova_compute[259627]: 2025-10-14 09:27:31.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:27:31 np0005486808 nova_compute[259627]: 2025-10-14 09:27:31.650 2 INFO os_vif [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fe:45:3f,bridge_name='br-int',has_traffic_filtering=True,id=ea25832f-13d3-41ec-874c-e622d24c912e,network=Network(69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea25832f-13')#033[00m
Oct 14 05:27:31 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a44a511e51cc783fd3902e2425448585c0bfa4144e3c0ed2308938b41cd778d1-userdata-shm.mount: Deactivated successfully.
Oct 14 05:27:31 np0005486808 systemd[1]: var-lib-containers-storage-overlay-e1a75670ad0d206a928ee627a44dd9949ee1ddfbb2472618a373916a2273c134-merged.mount: Deactivated successfully.
Oct 14 05:27:31 np0005486808 podman[391860]: 2025-10-14 09:27:31.683110988 +0000 UTC m=+0.093502662 container cleanup a44a511e51cc783fd3902e2425448585c0bfa4144e3c0ed2308938b41cd778d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct 14 05:27:31 np0005486808 nova_compute[259627]: 2025-10-14 09:27:31.683 2 DEBUG nova.compute.manager [req-b12cc239-a0fc-49ee-be67-ea9898f798c5 req-d9d6779a-d869-44f3-8a8f-fc4b709dae35 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Received event network-vif-unplugged-ea25832f-13d3-41ec-874c-e622d24c912e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:27:31 np0005486808 nova_compute[259627]: 2025-10-14 09:27:31.683 2 DEBUG oslo_concurrency.lockutils [req-b12cc239-a0fc-49ee-be67-ea9898f798c5 req-d9d6779a-d869-44f3-8a8f-fc4b709dae35 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4310595f-2280-438c-97ca-f2de57527501-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:27:31 np0005486808 nova_compute[259627]: 2025-10-14 09:27:31.684 2 DEBUG oslo_concurrency.lockutils [req-b12cc239-a0fc-49ee-be67-ea9898f798c5 req-d9d6779a-d869-44f3-8a8f-fc4b709dae35 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4310595f-2280-438c-97ca-f2de57527501-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:27:31 np0005486808 nova_compute[259627]: 2025-10-14 09:27:31.684 2 DEBUG oslo_concurrency.lockutils [req-b12cc239-a0fc-49ee-be67-ea9898f798c5 req-d9d6779a-d869-44f3-8a8f-fc4b709dae35 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4310595f-2280-438c-97ca-f2de57527501-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:27:31 np0005486808 nova_compute[259627]: 2025-10-14 09:27:31.684 2 DEBUG nova.compute.manager [req-b12cc239-a0fc-49ee-be67-ea9898f798c5 req-d9d6779a-d869-44f3-8a8f-fc4b709dae35 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] No waiting events found dispatching network-vif-unplugged-ea25832f-13d3-41ec-874c-e622d24c912e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:27:31 np0005486808 nova_compute[259627]: 2025-10-14 09:27:31.685 2 DEBUG nova.compute.manager [req-b12cc239-a0fc-49ee-be67-ea9898f798c5 req-d9d6779a-d869-44f3-8a8f-fc4b709dae35 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Received event network-vif-unplugged-ea25832f-13d3-41ec-874c-e622d24c912e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:27:31 np0005486808 systemd[1]: libpod-conmon-a44a511e51cc783fd3902e2425448585c0bfa4144e3c0ed2308938b41cd778d1.scope: Deactivated successfully.
Oct 14 05:27:31 np0005486808 podman[391913]: 2025-10-14 09:27:31.759913946 +0000 UTC m=+0.049165306 container remove a44a511e51cc783fd3902e2425448585c0bfa4144e3c0ed2308938b41cd778d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:27:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:31.765 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1deb2ae4-3da8-4dbe-b8b9-4936af5a9cd4]: (4, ('Tue Oct 14 09:27:31 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c (a44a511e51cc783fd3902e2425448585c0bfa4144e3c0ed2308938b41cd778d1)\na44a511e51cc783fd3902e2425448585c0bfa4144e3c0ed2308938b41cd778d1\nTue Oct 14 09:27:31 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c (a44a511e51cc783fd3902e2425448585c0bfa4144e3c0ed2308938b41cd778d1)\na44a511e51cc783fd3902e2425448585c0bfa4144e3c0ed2308938b41cd778d1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:27:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:31.766 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[35467f2e-bf67-4604-9735-01bc6fcc7ded]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:27:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:31.767 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap69d5e5f4-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:27:31 np0005486808 nova_compute[259627]: 2025-10-14 09:27:31.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:31 np0005486808 kernel: tap69d5e5f4-00: left promiscuous mode
Oct 14 05:27:31 np0005486808 nova_compute[259627]: 2025-10-14 09:27:31.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:31.833 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fac34b18-5fbc-4d50-8925-96ee80ca5a54]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:27:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:31.859 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cdd98993-148d-4e15-9440-4530bf5206f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:27:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:31.860 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5672b267-2544-4248-bc9b-db59cdb543ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:27:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:31.877 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5ef2db38-5f57-40f3-9fa9-45254a26b559]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 774860, 'reachable_time': 38782, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391933, 'error': None, 'target': 'ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:27:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:31.879 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:27:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:31.879 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[e3f0689c-4e21-4f84-8643-a5a27a369901]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:27:31 np0005486808 systemd[1]: run-netns-ovnmeta\x2d69d5e5f4\x2d0ba9\x2d4874\x2d9f8a\x2dcd8b2484f91c.mount: Deactivated successfully.
Oct 14 05:27:32 np0005486808 nova_compute[259627]: 2025-10-14 09:27:32.127 2 INFO nova.virt.libvirt.driver [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Deleting instance files /var/lib/nova/instances/4310595f-2280-438c-97ca-f2de57527501_del#033[00m
Oct 14 05:27:32 np0005486808 nova_compute[259627]: 2025-10-14 09:27:32.129 2 INFO nova.virt.libvirt.driver [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Deletion of /var/lib/nova/instances/4310595f-2280-438c-97ca-f2de57527501_del complete#033[00m
Oct 14 05:27:32 np0005486808 nova_compute[259627]: 2025-10-14 09:27:32.212 2 INFO nova.compute.manager [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Took 0.85 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:27:32 np0005486808 nova_compute[259627]: 2025-10-14 09:27:32.214 2 DEBUG oslo.service.loopingcall [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:27:32 np0005486808 nova_compute[259627]: 2025-10-14 09:27:32.214 2 DEBUG nova.compute.manager [-] [instance: 4310595f-2280-438c-97ca-f2de57527501] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:27:32 np0005486808 nova_compute[259627]: 2025-10-14 09:27:32.215 2 DEBUG nova.network.neutron [-] [instance: 4310595f-2280-438c-97ca-f2de57527501] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:27:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2232: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Oct 14 05:27:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:27:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:27:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:27:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:27:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:27:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:27:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:27:32
Oct 14 05:27:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:27:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:27:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'backups']
Oct 14 05:27:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:27:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:27:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:27:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:27:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:27:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:27:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:27:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:27:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:27:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:27:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:27:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:27:33 np0005486808 nova_compute[259627]: 2025-10-14 09:27:33.255 2 DEBUG nova.network.neutron [-] [instance: 4310595f-2280-438c-97ca-f2de57527501] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:27:33 np0005486808 nova_compute[259627]: 2025-10-14 09:27:33.286 2 INFO nova.compute.manager [-] [instance: 4310595f-2280-438c-97ca-f2de57527501] Took 1.07 seconds to deallocate network for instance.#033[00m
Oct 14 05:27:33 np0005486808 nova_compute[259627]: 2025-10-14 09:27:33.295 2 DEBUG nova.network.neutron [req-102500e4-8ffa-47dd-90cb-63a8dd36a94b req-ac89bac5-7506-47ce-8a46-5becabf16946 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Updated VIF entry in instance network info cache for port ea25832f-13d3-41ec-874c-e622d24c912e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 05:27:33 np0005486808 nova_compute[259627]: 2025-10-14 09:27:33.295 2 DEBUG nova.network.neutron [req-102500e4-8ffa-47dd-90cb-63a8dd36a94b req-ac89bac5-7506-47ce-8a46-5becabf16946 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Updating instance_info_cache with network_info: [{"id": "ea25832f-13d3-41ec-874c-e622d24c912e", "address": "fa:16:3e:fe:45:3f", "network": {"id": "69d5e5f4-0ba9-4874-9f8a-cd8b2484f91c", "bridge": "br-int", "label": "tempest-network-smoke--1422018789", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea25832f-13", "ovs_interfaceid": "ea25832f-13d3-41ec-874c-e622d24c912e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 05:27:33 np0005486808 nova_compute[259627]: 2025-10-14 09:27:33.323 2 DEBUG oslo_concurrency.lockutils [req-102500e4-8ffa-47dd-90cb-63a8dd36a94b req-ac89bac5-7506-47ce-8a46-5becabf16946 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-4310595f-2280-438c-97ca-f2de57527501" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 05:27:33 np0005486808 nova_compute[259627]: 2025-10-14 09:27:33.334 2 DEBUG oslo_concurrency.lockutils [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:27:33 np0005486808 nova_compute[259627]: 2025-10-14 09:27:33.335 2 DEBUG oslo_concurrency.lockutils [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:27:33 np0005486808 nova_compute[259627]: 2025-10-14 09:27:33.404 2 INFO nova.compute.manager [None req-cf961240-1598-44ce-b6f5-a0243f2a7e4f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Get console output
Oct 14 05:27:33 np0005486808 nova_compute[259627]: 2025-10-14 09:27:33.410 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 14 05:27:33 np0005486808 nova_compute[259627]: 2025-10-14 09:27:33.428 2 DEBUG oslo_concurrency.processutils [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:27:33 np0005486808 nova_compute[259627]: 2025-10-14 09:27:33.762 2 DEBUG nova.compute.manager [req-c54f242a-13b1-4af2-b2e7-46c80c6c3c95 req-b5f9bc3e-fc5e-4d04-8e34-48ada0ae3699 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Received event network-vif-plugged-ea25832f-13d3-41ec-874c-e622d24c912e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:27:33 np0005486808 nova_compute[259627]: 2025-10-14 09:27:33.763 2 DEBUG oslo_concurrency.lockutils [req-c54f242a-13b1-4af2-b2e7-46c80c6c3c95 req-b5f9bc3e-fc5e-4d04-8e34-48ada0ae3699 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4310595f-2280-438c-97ca-f2de57527501-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:27:33 np0005486808 nova_compute[259627]: 2025-10-14 09:27:33.763 2 DEBUG oslo_concurrency.lockutils [req-c54f242a-13b1-4af2-b2e7-46c80c6c3c95 req-b5f9bc3e-fc5e-4d04-8e34-48ada0ae3699 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4310595f-2280-438c-97ca-f2de57527501-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:27:33 np0005486808 nova_compute[259627]: 2025-10-14 09:27:33.764 2 DEBUG oslo_concurrency.lockutils [req-c54f242a-13b1-4af2-b2e7-46c80c6c3c95 req-b5f9bc3e-fc5e-4d04-8e34-48ada0ae3699 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4310595f-2280-438c-97ca-f2de57527501-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:27:33 np0005486808 nova_compute[259627]: 2025-10-14 09:27:33.764 2 DEBUG nova.compute.manager [req-c54f242a-13b1-4af2-b2e7-46c80c6c3c95 req-b5f9bc3e-fc5e-4d04-8e34-48ada0ae3699 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] No waiting events found dispatching network-vif-plugged-ea25832f-13d3-41ec-874c-e622d24c912e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 05:27:33 np0005486808 nova_compute[259627]: 2025-10-14 09:27:33.765 2 WARNING nova.compute.manager [req-c54f242a-13b1-4af2-b2e7-46c80c6c3c95 req-b5f9bc3e-fc5e-4d04-8e34-48ada0ae3699 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Received unexpected event network-vif-plugged-ea25832f-13d3-41ec-874c-e622d24c912e for instance with vm_state deleted and task_state None.
Oct 14 05:27:33 np0005486808 nova_compute[259627]: 2025-10-14 09:27:33.765 2 DEBUG nova.compute.manager [req-c54f242a-13b1-4af2-b2e7-46c80c6c3c95 req-b5f9bc3e-fc5e-4d04-8e34-48ada0ae3699 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Received event network-vif-deleted-ea25832f-13d3-41ec-874c-e622d24c912e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:27:33 np0005486808 nova_compute[259627]: 2025-10-14 09:27:33.766 2 INFO nova.compute.manager [req-c54f242a-13b1-4af2-b2e7-46c80c6c3c95 req-b5f9bc3e-fc5e-4d04-8e34-48ada0ae3699 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Neutron deleted interface ea25832f-13d3-41ec-874c-e622d24c912e; detaching it from the instance and deleting it from the info cache
Oct 14 05:27:33 np0005486808 nova_compute[259627]: 2025-10-14 09:27:33.766 2 DEBUG nova.network.neutron [req-c54f242a-13b1-4af2-b2e7-46c80c6c3c95 req-b5f9bc3e-fc5e-4d04-8e34-48ada0ae3699 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 05:27:33 np0005486808 nova_compute[259627]: 2025-10-14 09:27:33.795 2 DEBUG nova.compute.manager [req-c54f242a-13b1-4af2-b2e7-46c80c6c3c95 req-b5f9bc3e-fc5e-4d04-8e34-48ada0ae3699 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4310595f-2280-438c-97ca-f2de57527501] Detach interface failed, port_id=ea25832f-13d3-41ec-874c-e622d24c912e, reason: Instance 4310595f-2280-438c-97ca-f2de57527501 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 14 05:27:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:27:33 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2604392729' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:27:33 np0005486808 nova_compute[259627]: 2025-10-14 09:27:33.891 2 DEBUG oslo_concurrency.processutils [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:27:33 np0005486808 nova_compute[259627]: 2025-10-14 09:27:33.904 2 DEBUG nova.compute.provider_tree [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 05:27:33 np0005486808 nova_compute[259627]: 2025-10-14 09:27:33.925 2 DEBUG nova.scheduler.client.report [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 05:27:33 np0005486808 nova_compute[259627]: 2025-10-14 09:27:33.946 2 DEBUG oslo_concurrency.lockutils [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:27:33 np0005486808 nova_compute[259627]: 2025-10-14 09:27:33.985 2 INFO nova.scheduler.client.report [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Deleted allocations for instance 4310595f-2280-438c-97ca-f2de57527501
Oct 14 05:27:34 np0005486808 nova_compute[259627]: 2025-10-14 09:27:34.101 2 DEBUG oslo_concurrency.lockutils [None req-ef3a9da1-7950-4899-8f08-2c13737e4af8 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4310595f-2280-438c-97ca-f2de57527501" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:27:34 np0005486808 podman[392129]: 2025-10-14 09:27:34.132289446 +0000 UTC m=+0.070446851 container exec c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Oct 14 05:27:34 np0005486808 podman[392129]: 2025-10-14 09:27:34.227500789 +0000 UTC m=+0.165658164 container exec_died c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 14 05:27:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2233: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Oct 14 05:27:35 np0005486808 nova_compute[259627]: 2025-10-14 09:27:35.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:27:35 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:27:35 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:27:35 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:27:35 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:27:35 np0005486808 nova_compute[259627]: 2025-10-14 09:27:35.306 2 INFO nova.compute.manager [None req-a5f7e701-0f04-41de-bdbc-e1ac188343a9 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Get console output
Oct 14 05:27:35 np0005486808 nova_compute[259627]: 2025-10-14 09:27:35.317 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 14 05:27:35 np0005486808 nova_compute[259627]: 2025-10-14 09:27:35.854 2 DEBUG nova.compute.manager [req-9591c385-4b4f-49d2-83dc-a3210f0e45d5 req-557aecc7-39e1-443a-adcc-f73b37338e46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received event network-changed-9ecc8f01-430d-4714-8d7e-e60d7edaa73c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:27:35 np0005486808 nova_compute[259627]: 2025-10-14 09:27:35.855 2 DEBUG nova.compute.manager [req-9591c385-4b4f-49d2-83dc-a3210f0e45d5 req-557aecc7-39e1-443a-adcc-f73b37338e46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Refreshing instance network info cache due to event network-changed-9ecc8f01-430d-4714-8d7e-e60d7edaa73c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 05:27:35 np0005486808 nova_compute[259627]: 2025-10-14 09:27:35.855 2 DEBUG oslo_concurrency.lockutils [req-9591c385-4b4f-49d2-83dc-a3210f0e45d5 req-557aecc7-39e1-443a-adcc-f73b37338e46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-2595dec0-9170-4e8f-a6bc-9179d30519a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 05:27:35 np0005486808 nova_compute[259627]: 2025-10-14 09:27:35.856 2 DEBUG oslo_concurrency.lockutils [req-9591c385-4b4f-49d2-83dc-a3210f0e45d5 req-557aecc7-39e1-443a-adcc-f73b37338e46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-2595dec0-9170-4e8f-a6bc-9179d30519a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 05:27:35 np0005486808 nova_compute[259627]: 2025-10-14 09:27:35.856 2 DEBUG nova.network.neutron [req-9591c385-4b4f-49d2-83dc-a3210f0e45d5 req-557aecc7-39e1-443a-adcc-f73b37338e46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Refreshing network info cache for port 9ecc8f01-430d-4714-8d7e-e60d7edaa73c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 05:27:35 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:27:35 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:27:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:27:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:27:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:27:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:27:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:27:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:27:36 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev d886b599-0801-4e24-8ed8-08fe500c5f36 does not exist
Oct 14 05:27:36 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev f5e2cd36-cde0-4824-8d39-51004207e5fd does not exist
Oct 14 05:27:36 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev f9583b4d-d82a-47a8-95b8-af3872ae75d3 does not exist
Oct 14 05:27:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:27:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:27:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:27:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:27:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:27:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:27:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2234: 305 pgs: 305 active+clean; 200 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 377 KiB/s rd, 2.2 MiB/s wr, 122 op/s
Oct 14 05:27:36 np0005486808 nova_compute[259627]: 2025-10-14 09:27:36.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:27:36 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:27:36 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:27:36 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:27:36 np0005486808 podman[392559]: 2025-10-14 09:27:36.967624956 +0000 UTC m=+0.045733011 container create 347bb7bdcd81c3d2b019d997b79f5d0b7a3758a1139f713041445d6b5e787043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bhabha, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:27:37 np0005486808 systemd[1]: Started libpod-conmon-347bb7bdcd81c3d2b019d997b79f5d0b7a3758a1139f713041445d6b5e787043.scope.
Oct 14 05:27:37 np0005486808 podman[392559]: 2025-10-14 09:27:36.946555785 +0000 UTC m=+0.024663880 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:27:37 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:27:37 np0005486808 nova_compute[259627]: 2025-10-14 09:27:37.052 2 DEBUG nova.network.neutron [req-9591c385-4b4f-49d2-83dc-a3210f0e45d5 req-557aecc7-39e1-443a-adcc-f73b37338e46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Updated VIF entry in instance network info cache for port 9ecc8f01-430d-4714-8d7e-e60d7edaa73c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 05:27:37 np0005486808 nova_compute[259627]: 2025-10-14 09:27:37.053 2 DEBUG nova.network.neutron [req-9591c385-4b4f-49d2-83dc-a3210f0e45d5 req-557aecc7-39e1-443a-adcc-f73b37338e46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Updating instance_info_cache with network_info: [{"id": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "address": "fa:16:3e:50:8f:64", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ecc8f01-43", "ovs_interfaceid": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 05:27:37 np0005486808 podman[392559]: 2025-10-14 09:27:37.068125369 +0000 UTC m=+0.146233444 container init 347bb7bdcd81c3d2b019d997b79f5d0b7a3758a1139f713041445d6b5e787043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bhabha, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:27:37 np0005486808 podman[392559]: 2025-10-14 09:27:37.075828779 +0000 UTC m=+0.153936844 container start 347bb7bdcd81c3d2b019d997b79f5d0b7a3758a1139f713041445d6b5e787043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bhabha, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 14 05:27:37 np0005486808 nova_compute[259627]: 2025-10-14 09:27:37.080 2 DEBUG oslo_concurrency.lockutils [req-9591c385-4b4f-49d2-83dc-a3210f0e45d5 req-557aecc7-39e1-443a-adcc-f73b37338e46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-2595dec0-9170-4e8f-a6bc-9179d30519a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 05:27:37 np0005486808 nova_compute[259627]: 2025-10-14 09:27:37.080 2 DEBUG nova.compute.manager [req-9591c385-4b4f-49d2-83dc-a3210f0e45d5 req-557aecc7-39e1-443a-adcc-f73b37338e46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received event network-vif-unplugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:27:37 np0005486808 nova_compute[259627]: 2025-10-14 09:27:37.081 2 DEBUG oslo_concurrency.lockutils [req-9591c385-4b4f-49d2-83dc-a3210f0e45d5 req-557aecc7-39e1-443a-adcc-f73b37338e46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:27:37 np0005486808 nova_compute[259627]: 2025-10-14 09:27:37.081 2 DEBUG oslo_concurrency.lockutils [req-9591c385-4b4f-49d2-83dc-a3210f0e45d5 req-557aecc7-39e1-443a-adcc-f73b37338e46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:27:37 np0005486808 nova_compute[259627]: 2025-10-14 09:27:37.081 2 DEBUG oslo_concurrency.lockutils [req-9591c385-4b4f-49d2-83dc-a3210f0e45d5 req-557aecc7-39e1-443a-adcc-f73b37338e46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:27:37 np0005486808 nova_compute[259627]: 2025-10-14 09:27:37.082 2 DEBUG nova.compute.manager [req-9591c385-4b4f-49d2-83dc-a3210f0e45d5 req-557aecc7-39e1-443a-adcc-f73b37338e46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] No waiting events found dispatching network-vif-unplugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 05:27:37 np0005486808 nova_compute[259627]: 2025-10-14 09:27:37.082 2 WARNING nova.compute.manager [req-9591c385-4b4f-49d2-83dc-a3210f0e45d5 req-557aecc7-39e1-443a-adcc-f73b37338e46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received unexpected event network-vif-unplugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c for instance with vm_state active and task_state None.
Oct 14 05:27:37 np0005486808 podman[392559]: 2025-10-14 09:27:37.082667248 +0000 UTC m=+0.160775303 container attach 347bb7bdcd81c3d2b019d997b79f5d0b7a3758a1139f713041445d6b5e787043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 14 05:27:37 np0005486808 nova_compute[259627]: 2025-10-14 09:27:37.082 2 DEBUG nova.compute.manager [req-9591c385-4b4f-49d2-83dc-a3210f0e45d5 req-557aecc7-39e1-443a-adcc-f73b37338e46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received event network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:27:37 np0005486808 nova_compute[259627]: 2025-10-14 09:27:37.083 2 DEBUG oslo_concurrency.lockutils [req-9591c385-4b4f-49d2-83dc-a3210f0e45d5 req-557aecc7-39e1-443a-adcc-f73b37338e46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:27:37 np0005486808 determined_bhabha[392575]: 167 167
Oct 14 05:27:37 np0005486808 nova_compute[259627]: 2025-10-14 09:27:37.083 2 DEBUG oslo_concurrency.lockutils [req-9591c385-4b4f-49d2-83dc-a3210f0e45d5 req-557aecc7-39e1-443a-adcc-f73b37338e46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:27:37 np0005486808 nova_compute[259627]: 2025-10-14 09:27:37.083 2 DEBUG oslo_concurrency.lockutils [req-9591c385-4b4f-49d2-83dc-a3210f0e45d5 req-557aecc7-39e1-443a-adcc-f73b37338e46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:27:37 np0005486808 nova_compute[259627]: 2025-10-14 09:27:37.083 2 DEBUG nova.compute.manager [req-9591c385-4b4f-49d2-83dc-a3210f0e45d5 req-557aecc7-39e1-443a-adcc-f73b37338e46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] No waiting events found dispatching network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 05:27:37 np0005486808 nova_compute[259627]: 2025-10-14 09:27:37.084 2 WARNING nova.compute.manager [req-9591c385-4b4f-49d2-83dc-a3210f0e45d5 req-557aecc7-39e1-443a-adcc-f73b37338e46 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received unexpected event network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c for instance with vm_state active and task_state None.
Oct 14 05:27:37 np0005486808 systemd[1]: libpod-347bb7bdcd81c3d2b019d997b79f5d0b7a3758a1139f713041445d6b5e787043.scope: Deactivated successfully.
Oct 14 05:27:37 np0005486808 podman[392559]: 2025-10-14 09:27:37.08639742 +0000 UTC m=+0.164505475 container died 347bb7bdcd81c3d2b019d997b79f5d0b7a3758a1139f713041445d6b5e787043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bhabha, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 14 05:27:37 np0005486808 systemd[1]: var-lib-containers-storage-overlay-0b27d190ab211c3e88b9c157e0d0b7ac2d7db4853520a7c1b20007c92825bd45-merged.mount: Deactivated successfully.
Oct 14 05:27:37 np0005486808 podman[392559]: 2025-10-14 09:27:37.126192214 +0000 UTC m=+0.204300279 container remove 347bb7bdcd81c3d2b019d997b79f5d0b7a3758a1139f713041445d6b5e787043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bhabha, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 14 05:27:37 np0005486808 systemd[1]: libpod-conmon-347bb7bdcd81c3d2b019d997b79f5d0b7a3758a1139f713041445d6b5e787043.scope: Deactivated successfully.
Oct 14 05:27:37 np0005486808 podman[392599]: 2025-10-14 09:27:37.323058498 +0000 UTC m=+0.053268537 container create 1cfdd32960f4e851e9590bbb901c033121ed9567ff32a231a2c0581db4b38776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Oct 14 05:27:37 np0005486808 nova_compute[259627]: 2025-10-14 09:27:37.336 2 INFO nova.compute.manager [None req-2dd7f0e6-65da-4b7f-9955-570aa8348b34 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Get console output#033[00m
Oct 14 05:27:37 np0005486808 nova_compute[259627]: 2025-10-14 09:27:37.345 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 14 05:27:37 np0005486808 systemd[1]: Started libpod-conmon-1cfdd32960f4e851e9590bbb901c033121ed9567ff32a231a2c0581db4b38776.scope.
Oct 14 05:27:37 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:27:37 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bb7812d25e6504f31de002e154e3fd3734725dc7b0eefc441f101f83090a489/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:27:37 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bb7812d25e6504f31de002e154e3fd3734725dc7b0eefc441f101f83090a489/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:27:37 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bb7812d25e6504f31de002e154e3fd3734725dc7b0eefc441f101f83090a489/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:27:37 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bb7812d25e6504f31de002e154e3fd3734725dc7b0eefc441f101f83090a489/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:27:37 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bb7812d25e6504f31de002e154e3fd3734725dc7b0eefc441f101f83090a489/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:27:37 np0005486808 podman[392599]: 2025-10-14 09:27:37.30574048 +0000 UTC m=+0.035950549 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:27:37 np0005486808 podman[392599]: 2025-10-14 09:27:37.402949342 +0000 UTC m=+0.133159411 container init 1cfdd32960f4e851e9590bbb901c033121ed9567ff32a231a2c0581db4b38776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 14 05:27:37 np0005486808 podman[392599]: 2025-10-14 09:27:37.41258427 +0000 UTC m=+0.142794359 container start 1cfdd32960f4e851e9590bbb901c033121ed9567ff32a231a2c0581db4b38776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rhodes, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 14 05:27:37 np0005486808 podman[392599]: 2025-10-14 09:27:37.416429805 +0000 UTC m=+0.146639864 container attach 1cfdd32960f4e851e9590bbb901c033121ed9567ff32a231a2c0581db4b38776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rhodes, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Oct 14 05:27:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:27:38 np0005486808 nova_compute[259627]: 2025-10-14 09:27:38.158 2 DEBUG nova.compute.manager [req-06f502b1-1ee0-4ac2-8490-c4009214b37c req-1ac7f6b5-603b-4f79-bf6b-4174e143a7dd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received event network-changed-9ecc8f01-430d-4714-8d7e-e60d7edaa73c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:27:38 np0005486808 nova_compute[259627]: 2025-10-14 09:27:38.159 2 DEBUG nova.compute.manager [req-06f502b1-1ee0-4ac2-8490-c4009214b37c req-1ac7f6b5-603b-4f79-bf6b-4174e143a7dd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Refreshing instance network info cache due to event network-changed-9ecc8f01-430d-4714-8d7e-e60d7edaa73c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:27:38 np0005486808 nova_compute[259627]: 2025-10-14 09:27:38.160 2 DEBUG oslo_concurrency.lockutils [req-06f502b1-1ee0-4ac2-8490-c4009214b37c req-1ac7f6b5-603b-4f79-bf6b-4174e143a7dd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-2595dec0-9170-4e8f-a6bc-9179d30519a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:27:38 np0005486808 nova_compute[259627]: 2025-10-14 09:27:38.160 2 DEBUG oslo_concurrency.lockutils [req-06f502b1-1ee0-4ac2-8490-c4009214b37c req-1ac7f6b5-603b-4f79-bf6b-4174e143a7dd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-2595dec0-9170-4e8f-a6bc-9179d30519a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:27:38 np0005486808 nova_compute[259627]: 2025-10-14 09:27:38.160 2 DEBUG nova.network.neutron [req-06f502b1-1ee0-4ac2-8490-c4009214b37c req-1ac7f6b5-603b-4f79-bf6b-4174e143a7dd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Refreshing network info cache for port 9ecc8f01-430d-4714-8d7e-e60d7edaa73c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:27:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2235: 305 pgs: 305 active+clean; 200 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 119 KiB/s rd, 111 KiB/s wr, 51 op/s
Oct 14 05:27:38 np0005486808 magical_rhodes[392616]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:27:38 np0005486808 magical_rhodes[392616]: --> relative data size: 1.0
Oct 14 05:27:38 np0005486808 magical_rhodes[392616]: --> All data devices are unavailable
Oct 14 05:27:38 np0005486808 systemd[1]: libpod-1cfdd32960f4e851e9590bbb901c033121ed9567ff32a231a2c0581db4b38776.scope: Deactivated successfully.
Oct 14 05:27:38 np0005486808 podman[392599]: 2025-10-14 09:27:38.540073949 +0000 UTC m=+1.270284018 container died 1cfdd32960f4e851e9590bbb901c033121ed9567ff32a231a2c0581db4b38776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rhodes, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:27:38 np0005486808 systemd[1]: libpod-1cfdd32960f4e851e9590bbb901c033121ed9567ff32a231a2c0581db4b38776.scope: Consumed 1.070s CPU time.
Oct 14 05:27:38 np0005486808 systemd[1]: var-lib-containers-storage-overlay-1bb7812d25e6504f31de002e154e3fd3734725dc7b0eefc441f101f83090a489-merged.mount: Deactivated successfully.
Oct 14 05:27:38 np0005486808 podman[392599]: 2025-10-14 09:27:38.602140283 +0000 UTC m=+1.332350372 container remove 1cfdd32960f4e851e9590bbb901c033121ed9567ff32a231a2c0581db4b38776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rhodes, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 14 05:27:38 np0005486808 systemd[1]: libpod-conmon-1cfdd32960f4e851e9590bbb901c033121ed9567ff32a231a2c0581db4b38776.scope: Deactivated successfully.
Oct 14 05:27:38 np0005486808 ovn_controller[152662]: 2025-10-14T09:27:38Z|01376|binding|INFO|Releasing lport dabdfa0b-267a-4754-a026-601ab2593a32 from this chassis (sb_readonly=0)
Oct 14 05:27:38 np0005486808 nova_compute[259627]: 2025-10-14 09:27:38.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:38 np0005486808 nova_compute[259627]: 2025-10-14 09:27:38.882 2 DEBUG oslo_concurrency.lockutils [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "1b7f03a3-6b73-478f-bf13-cf062714faef" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:27:38 np0005486808 nova_compute[259627]: 2025-10-14 09:27:38.883 2 DEBUG oslo_concurrency.lockutils [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "1b7f03a3-6b73-478f-bf13-cf062714faef" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:27:38 np0005486808 nova_compute[259627]: 2025-10-14 09:27:38.883 2 DEBUG oslo_concurrency.lockutils [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "1b7f03a3-6b73-478f-bf13-cf062714faef-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:27:38 np0005486808 nova_compute[259627]: 2025-10-14 09:27:38.884 2 DEBUG oslo_concurrency.lockutils [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "1b7f03a3-6b73-478f-bf13-cf062714faef-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:27:38 np0005486808 nova_compute[259627]: 2025-10-14 09:27:38.884 2 DEBUG oslo_concurrency.lockutils [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "1b7f03a3-6b73-478f-bf13-cf062714faef-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:27:38 np0005486808 nova_compute[259627]: 2025-10-14 09:27:38.886 2 INFO nova.compute.manager [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Terminating instance#033[00m
Oct 14 05:27:38 np0005486808 nova_compute[259627]: 2025-10-14 09:27:38.888 2 DEBUG nova.compute.manager [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:27:38 np0005486808 podman[392683]: 2025-10-14 09:27:38.894838346 +0000 UTC m=+0.101945201 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:27:38 np0005486808 podman[392682]: 2025-10-14 09:27:38.895325008 +0000 UTC m=+0.102737090 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 05:27:38 np0005486808 kernel: tapc84419ee-15 (unregistering): left promiscuous mode
Oct 14 05:27:38 np0005486808 NetworkManager[44885]: <info>  [1760434058.9527] device (tapc84419ee-15): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:27:38 np0005486808 ovn_controller[152662]: 2025-10-14T09:27:38Z|01377|binding|INFO|Releasing lport c84419ee-1585-485f-ae91-116f2123dadf from this chassis (sb_readonly=0)
Oct 14 05:27:38 np0005486808 ovn_controller[152662]: 2025-10-14T09:27:38Z|01378|binding|INFO|Setting lport c84419ee-1585-485f-ae91-116f2123dadf down in Southbound
Oct 14 05:27:38 np0005486808 ovn_controller[152662]: 2025-10-14T09:27:38Z|01379|binding|INFO|Removing iface tapc84419ee-15 ovn-installed in OVS
Oct 14 05:27:38 np0005486808 nova_compute[259627]: 2025-10-14 09:27:38.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:38 np0005486808 nova_compute[259627]: 2025-10-14 09:27:38.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:38.973 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:6d:b2 10.100.0.9'], port_security=['fa:16:3e:44:6d:b2 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '1b7f03a3-6b73-478f-bf13-cf062714faef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-44344b65-f325-470a-bd36-6f52ed03d317', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '4', 'neutron:security_group_ids': '471f6dc6-ea8e-4b1e-a678-6d128cef3d97', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5dceacdb-1a50-419e-9ee9-f149e7094b34, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=c84419ee-1585-485f-ae91-116f2123dadf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:27:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:38.974 162547 INFO neutron.agent.ovn.metadata.agent [-] Port c84419ee-1585-485f-ae91-116f2123dadf in datapath 44344b65-f325-470a-bd36-6f52ed03d317 unbound from our chassis#033[00m
Oct 14 05:27:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:38.975 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 44344b65-f325-470a-bd36-6f52ed03d317#033[00m
Oct 14 05:27:38 np0005486808 nova_compute[259627]: 2025-10-14 09:27:38.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:27:38 np0005486808 nova_compute[259627]: 2025-10-14 09:27:38.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:38.995 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[933b21b0-1d98-4516-8abd-b5890fb8227d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:27:39 np0005486808 systemd[1]: machine-qemu\x2d162\x2dinstance\x2d00000081.scope: Deactivated successfully.
Oct 14 05:27:39 np0005486808 systemd[1]: machine-qemu\x2d162\x2dinstance\x2d00000081.scope: Consumed 13.312s CPU time.
Oct 14 05:27:39 np0005486808 systemd-machined[214636]: Machine qemu-162-instance-00000081 terminated.
Oct 14 05:27:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:39.028 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[af7031e4-d026-4113-96db-d17119adb4fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:27:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:39.033 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[66165316-3137-449f-ae8e-ee5f6a6942bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:27:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:39.078 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4699eb97-945d-44c5-99fb-080f3536073f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:27:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:39.108 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ea63d5cf-e7b0-4573-ab6f-ff578a1c4f90]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap44344b65-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:40:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 388], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 779335, 'reachable_time': 36403, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 392808, 'error': None, 'target': 'ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:27:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:39.126 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d3b41153-630b-4d79-934e-c1b6f8bd28ac]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap44344b65-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 779351, 'tstamp': 779351}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 392811, 'error': None, 'target': 'ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap44344b65-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 779355, 'tstamp': 779355}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 392811, 'error': None, 'target': 'ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:27:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:39.128 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap44344b65-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:27:39 np0005486808 nova_compute[259627]: 2025-10-14 09:27:39.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:39.160 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap44344b65-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:27:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:39.161 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:27:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:39.161 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap44344b65-f0, col_values=(('external_ids', {'iface-id': 'dabdfa0b-267a-4754-a026-601ab2593a32'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:27:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:39.161 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:27:39 np0005486808 nova_compute[259627]: 2025-10-14 09:27:39.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:39 np0005486808 nova_compute[259627]: 2025-10-14 09:27:39.165 2 INFO nova.virt.libvirt.driver [-] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Instance destroyed successfully.#033[00m
Oct 14 05:27:39 np0005486808 nova_compute[259627]: 2025-10-14 09:27:39.166 2 DEBUG nova.objects.instance [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'resources' on Instance uuid 1b7f03a3-6b73-478f-bf13-cf062714faef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:27:39 np0005486808 nova_compute[259627]: 2025-10-14 09:27:39.183 2 DEBUG nova.virt.libvirt.vif [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:27:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2128241833',display_name='tempest-TestNetworkBasicOps-server-2128241833',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2128241833',id=129,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLOLhDC9VoGp8WYPVj/2CKqBQNdjKW4OK8XHbingpjAJEpUTBDLqjgHyIpa9e7zDDaRmI0fbF5BkwR2QzH+89ULkia3qer+DRxqv2g2mzU6TZpkV7v9oBLNlvSUqf9rYag==',key_name='tempest-TestNetworkBasicOps-1059153159',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:27:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-3kh0vol0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:27:13Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=1b7f03a3-6b73-478f-bf13-cf062714faef,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c84419ee-1585-485f-ae91-116f2123dadf", "address": "fa:16:3e:44:6d:b2", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc84419ee-15", "ovs_interfaceid": "c84419ee-1585-485f-ae91-116f2123dadf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:27:39 np0005486808 nova_compute[259627]: 2025-10-14 09:27:39.184 2 DEBUG nova.network.os_vif_util [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "c84419ee-1585-485f-ae91-116f2123dadf", "address": "fa:16:3e:44:6d:b2", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc84419ee-15", "ovs_interfaceid": "c84419ee-1585-485f-ae91-116f2123dadf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:27:39 np0005486808 nova_compute[259627]: 2025-10-14 09:27:39.185 2 DEBUG nova.network.os_vif_util [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:44:6d:b2,bridge_name='br-int',has_traffic_filtering=True,id=c84419ee-1585-485f-ae91-116f2123dadf,network=Network(44344b65-f325-470a-bd36-6f52ed03d317),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc84419ee-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:27:39 np0005486808 nova_compute[259627]: 2025-10-14 09:27:39.185 2 DEBUG os_vif [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:44:6d:b2,bridge_name='br-int',has_traffic_filtering=True,id=c84419ee-1585-485f-ae91-116f2123dadf,network=Network(44344b65-f325-470a-bd36-6f52ed03d317),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc84419ee-15') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:27:39 np0005486808 nova_compute[259627]: 2025-10-14 09:27:39.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:39 np0005486808 nova_compute[259627]: 2025-10-14 09:27:39.187 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc84419ee-15, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:27:39 np0005486808 nova_compute[259627]: 2025-10-14 09:27:39.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:39 np0005486808 nova_compute[259627]: 2025-10-14 09:27:39.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:27:39 np0005486808 nova_compute[259627]: 2025-10-14 09:27:39.196 2 INFO os_vif [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:44:6d:b2,bridge_name='br-int',has_traffic_filtering=True,id=c84419ee-1585-485f-ae91-116f2123dadf,network=Network(44344b65-f325-470a-bd36-6f52ed03d317),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc84419ee-15')#033[00m
Oct 14 05:27:39 np0005486808 nova_compute[259627]: 2025-10-14 09:27:39.268 2 DEBUG nova.compute.manager [req-a7be8b37-3707-4dc5-80df-8b804ba0639d req-533d435e-27d5-4c44-990a-2cba1c23420d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Received event network-vif-unplugged-c84419ee-1585-485f-ae91-116f2123dadf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:27:39 np0005486808 nova_compute[259627]: 2025-10-14 09:27:39.270 2 DEBUG oslo_concurrency.lockutils [req-a7be8b37-3707-4dc5-80df-8b804ba0639d req-533d435e-27d5-4c44-990a-2cba1c23420d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1b7f03a3-6b73-478f-bf13-cf062714faef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:27:39 np0005486808 nova_compute[259627]: 2025-10-14 09:27:39.270 2 DEBUG oslo_concurrency.lockutils [req-a7be8b37-3707-4dc5-80df-8b804ba0639d req-533d435e-27d5-4c44-990a-2cba1c23420d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1b7f03a3-6b73-478f-bf13-cf062714faef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:27:39 np0005486808 nova_compute[259627]: 2025-10-14 09:27:39.271 2 DEBUG oslo_concurrency.lockutils [req-a7be8b37-3707-4dc5-80df-8b804ba0639d req-533d435e-27d5-4c44-990a-2cba1c23420d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1b7f03a3-6b73-478f-bf13-cf062714faef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:27:39 np0005486808 nova_compute[259627]: 2025-10-14 09:27:39.271 2 DEBUG nova.compute.manager [req-a7be8b37-3707-4dc5-80df-8b804ba0639d req-533d435e-27d5-4c44-990a-2cba1c23420d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] No waiting events found dispatching network-vif-unplugged-c84419ee-1585-485f-ae91-116f2123dadf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:27:39 np0005486808 nova_compute[259627]: 2025-10-14 09:27:39.272 2 DEBUG nova.compute.manager [req-a7be8b37-3707-4dc5-80df-8b804ba0639d req-533d435e-27d5-4c44-990a-2cba1c23420d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Received event network-vif-unplugged-c84419ee-1585-485f-ae91-116f2123dadf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:27:39 np0005486808 podman[392879]: 2025-10-14 09:27:39.455667063 +0000 UTC m=+0.038910172 container create 73144eb9988ad90abac0df15b1dbdd219c959fe92b4361fd21e7d8362ed49fc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shtern, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:27:39 np0005486808 systemd[1]: Started libpod-conmon-73144eb9988ad90abac0df15b1dbdd219c959fe92b4361fd21e7d8362ed49fc2.scope.
Oct 14 05:27:39 np0005486808 podman[392879]: 2025-10-14 09:27:39.437125705 +0000 UTC m=+0.020368834 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:27:39 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:27:39 np0005486808 podman[392879]: 2025-10-14 09:27:39.575287049 +0000 UTC m=+0.158530248 container init 73144eb9988ad90abac0df15b1dbdd219c959fe92b4361fd21e7d8362ed49fc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shtern, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:27:39 np0005486808 nova_compute[259627]: 2025-10-14 09:27:39.575 2 INFO nova.virt.libvirt.driver [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Deleting instance files /var/lib/nova/instances/1b7f03a3-6b73-478f-bf13-cf062714faef_del#033[00m
Oct 14 05:27:39 np0005486808 nova_compute[259627]: 2025-10-14 09:27:39.576 2 INFO nova.virt.libvirt.driver [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Deletion of /var/lib/nova/instances/1b7f03a3-6b73-478f-bf13-cf062714faef_del complete#033[00m
Oct 14 05:27:39 np0005486808 podman[392879]: 2025-10-14 09:27:39.581938763 +0000 UTC m=+0.165181872 container start 73144eb9988ad90abac0df15b1dbdd219c959fe92b4361fd21e7d8362ed49fc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shtern, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 14 05:27:39 np0005486808 podman[392879]: 2025-10-14 09:27:39.585154993 +0000 UTC m=+0.168398152 container attach 73144eb9988ad90abac0df15b1dbdd219c959fe92b4361fd21e7d8362ed49fc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shtern, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 14 05:27:39 np0005486808 ecstatic_shtern[392896]: 167 167
Oct 14 05:27:39 np0005486808 systemd[1]: libpod-73144eb9988ad90abac0df15b1dbdd219c959fe92b4361fd21e7d8362ed49fc2.scope: Deactivated successfully.
Oct 14 05:27:39 np0005486808 podman[392879]: 2025-10-14 09:27:39.589789618 +0000 UTC m=+0.173032767 container died 73144eb9988ad90abac0df15b1dbdd219c959fe92b4361fd21e7d8362ed49fc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shtern, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 05:27:39 np0005486808 nova_compute[259627]: 2025-10-14 09:27:39.620 2 INFO nova.compute.manager [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:27:39 np0005486808 systemd[1]: var-lib-containers-storage-overlay-fba113757ec8e24d2a1b24f02ba1aafe218f6586fad5fa9ef9d586ca3cd32a59-merged.mount: Deactivated successfully.
Oct 14 05:27:39 np0005486808 nova_compute[259627]: 2025-10-14 09:27:39.621 2 DEBUG oslo.service.loopingcall [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:27:39 np0005486808 nova_compute[259627]: 2025-10-14 09:27:39.622 2 DEBUG nova.compute.manager [-] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:27:39 np0005486808 nova_compute[259627]: 2025-10-14 09:27:39.622 2 DEBUG nova.network.neutron [-] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:27:39 np0005486808 nova_compute[259627]: 2025-10-14 09:27:39.630 2 DEBUG nova.network.neutron [req-06f502b1-1ee0-4ac2-8490-c4009214b37c req-1ac7f6b5-603b-4f79-bf6b-4174e143a7dd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Updated VIF entry in instance network info cache for port 9ecc8f01-430d-4714-8d7e-e60d7edaa73c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:27:39 np0005486808 nova_compute[259627]: 2025-10-14 09:27:39.631 2 DEBUG nova.network.neutron [req-06f502b1-1ee0-4ac2-8490-c4009214b37c req-1ac7f6b5-603b-4f79-bf6b-4174e143a7dd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Updating instance_info_cache with network_info: [{"id": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "address": "fa:16:3e:50:8f:64", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ecc8f01-43", "ovs_interfaceid": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:27:39 np0005486808 podman[392879]: 2025-10-14 09:27:39.635975659 +0000 UTC m=+0.219218768 container remove 73144eb9988ad90abac0df15b1dbdd219c959fe92b4361fd21e7d8362ed49fc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shtern, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 14 05:27:39 np0005486808 nova_compute[259627]: 2025-10-14 09:27:39.651 2 DEBUG oslo_concurrency.lockutils [req-06f502b1-1ee0-4ac2-8490-c4009214b37c req-1ac7f6b5-603b-4f79-bf6b-4174e143a7dd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-2595dec0-9170-4e8f-a6bc-9179d30519a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:27:39 np0005486808 nova_compute[259627]: 2025-10-14 09:27:39.652 2 DEBUG nova.compute.manager [req-06f502b1-1ee0-4ac2-8490-c4009214b37c req-1ac7f6b5-603b-4f79-bf6b-4174e143a7dd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received event network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:27:39 np0005486808 nova_compute[259627]: 2025-10-14 09:27:39.652 2 DEBUG oslo_concurrency.lockutils [req-06f502b1-1ee0-4ac2-8490-c4009214b37c req-1ac7f6b5-603b-4f79-bf6b-4174e143a7dd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:27:39 np0005486808 systemd[1]: libpod-conmon-73144eb9988ad90abac0df15b1dbdd219c959fe92b4361fd21e7d8362ed49fc2.scope: Deactivated successfully.
Oct 14 05:27:39 np0005486808 nova_compute[259627]: 2025-10-14 09:27:39.653 2 DEBUG oslo_concurrency.lockutils [req-06f502b1-1ee0-4ac2-8490-c4009214b37c req-1ac7f6b5-603b-4f79-bf6b-4174e143a7dd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:27:39 np0005486808 nova_compute[259627]: 2025-10-14 09:27:39.654 2 DEBUG oslo_concurrency.lockutils [req-06f502b1-1ee0-4ac2-8490-c4009214b37c req-1ac7f6b5-603b-4f79-bf6b-4174e143a7dd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:27:39 np0005486808 nova_compute[259627]: 2025-10-14 09:27:39.654 2 DEBUG nova.compute.manager [req-06f502b1-1ee0-4ac2-8490-c4009214b37c req-1ac7f6b5-603b-4f79-bf6b-4174e143a7dd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] No waiting events found dispatching network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:27:39 np0005486808 nova_compute[259627]: 2025-10-14 09:27:39.654 2 WARNING nova.compute.manager [req-06f502b1-1ee0-4ac2-8490-c4009214b37c req-1ac7f6b5-603b-4f79-bf6b-4174e143a7dd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received unexpected event network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c for instance with vm_state active and task_state None.#033[00m
Oct 14 05:27:39 np0005486808 nova_compute[259627]: 2025-10-14 09:27:39.655 2 DEBUG nova.compute.manager [req-06f502b1-1ee0-4ac2-8490-c4009214b37c req-1ac7f6b5-603b-4f79-bf6b-4174e143a7dd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received event network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:27:39 np0005486808 nova_compute[259627]: 2025-10-14 09:27:39.655 2 DEBUG oslo_concurrency.lockutils [req-06f502b1-1ee0-4ac2-8490-c4009214b37c req-1ac7f6b5-603b-4f79-bf6b-4174e143a7dd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:27:39 np0005486808 nova_compute[259627]: 2025-10-14 09:27:39.656 2 DEBUG oslo_concurrency.lockutils [req-06f502b1-1ee0-4ac2-8490-c4009214b37c req-1ac7f6b5-603b-4f79-bf6b-4174e143a7dd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:27:39 np0005486808 nova_compute[259627]: 2025-10-14 09:27:39.656 2 DEBUG oslo_concurrency.lockutils [req-06f502b1-1ee0-4ac2-8490-c4009214b37c req-1ac7f6b5-603b-4f79-bf6b-4174e143a7dd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:27:39 np0005486808 nova_compute[259627]: 2025-10-14 09:27:39.656 2 DEBUG nova.compute.manager [req-06f502b1-1ee0-4ac2-8490-c4009214b37c req-1ac7f6b5-603b-4f79-bf6b-4174e143a7dd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] No waiting events found dispatching network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 05:27:39 np0005486808 nova_compute[259627]: 2025-10-14 09:27:39.657 2 WARNING nova.compute.manager [req-06f502b1-1ee0-4ac2-8490-c4009214b37c req-1ac7f6b5-603b-4f79-bf6b-4174e143a7dd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received unexpected event network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c for instance with vm_state active and task_state None.
Oct 14 05:27:39 np0005486808 nova_compute[259627]: 2025-10-14 09:27:39.678 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434044.6773894, e16af982-3cd8-4600-99c4-aeec45986dda => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:27:39 np0005486808 nova_compute[259627]: 2025-10-14 09:27:39.678 2 INFO nova.compute.manager [-] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] VM Stopped (Lifecycle Event)
Oct 14 05:27:39 np0005486808 nova_compute[259627]: 2025-10-14 09:27:39.698 2 DEBUG nova.compute.manager [None req-e3d240a9-4f3d-4bb0-8947-f47bfb8cabe2 - - - - - -] [instance: e16af982-3cd8-4600-99c4-aeec45986dda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:27:39 np0005486808 podman[392922]: 2025-10-14 09:27:39.894033565 +0000 UTC m=+0.070498043 container create 6ec5547495ec59683d7e8f020258d642a7a2395527d9a987c780f5f943fa969e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kowalevski, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 14 05:27:39 np0005486808 systemd[1]: Started libpod-conmon-6ec5547495ec59683d7e8f020258d642a7a2395527d9a987c780f5f943fa969e.scope.
Oct 14 05:27:39 np0005486808 podman[392922]: 2025-10-14 09:27:39.8622681 +0000 UTC m=+0.038732648 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:27:39 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:27:39 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88e652c71bd08c806139465e8a7b7964056568a8f965511161a23932c2b75049/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:27:39 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88e652c71bd08c806139465e8a7b7964056568a8f965511161a23932c2b75049/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:27:39 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88e652c71bd08c806139465e8a7b7964056568a8f965511161a23932c2b75049/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:27:39 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88e652c71bd08c806139465e8a7b7964056568a8f965511161a23932c2b75049/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:27:40 np0005486808 nova_compute[259627]: 2025-10-14 09:27:40.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:27:40 np0005486808 podman[392922]: 2025-10-14 09:27:40.010347029 +0000 UTC m=+0.186811547 container init 6ec5547495ec59683d7e8f020258d642a7a2395527d9a987c780f5f943fa969e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:27:40 np0005486808 podman[392922]: 2025-10-14 09:27:40.023525115 +0000 UTC m=+0.199989623 container start 6ec5547495ec59683d7e8f020258d642a7a2395527d9a987c780f5f943fa969e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 05:27:40 np0005486808 podman[392922]: 2025-10-14 09:27:40.031196315 +0000 UTC m=+0.207660823 container attach 6ec5547495ec59683d7e8f020258d642a7a2395527d9a987c780f5f943fa969e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:27:40 np0005486808 nova_compute[259627]: 2025-10-14 09:27:40.276 2 DEBUG nova.network.neutron [-] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 05:27:40 np0005486808 nova_compute[259627]: 2025-10-14 09:27:40.285 2 DEBUG nova.compute.manager [req-6e431e43-f67c-4f25-bebf-be316f537ca9 req-f3b2f5f0-2349-45ac-92f0-4ebe4348d668 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Received event network-changed-c84419ee-1585-485f-ae91-116f2123dadf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:27:40 np0005486808 nova_compute[259627]: 2025-10-14 09:27:40.285 2 DEBUG nova.compute.manager [req-6e431e43-f67c-4f25-bebf-be316f537ca9 req-f3b2f5f0-2349-45ac-92f0-4ebe4348d668 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Refreshing instance network info cache due to event network-changed-c84419ee-1585-485f-ae91-116f2123dadf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 05:27:40 np0005486808 nova_compute[259627]: 2025-10-14 09:27:40.286 2 DEBUG oslo_concurrency.lockutils [req-6e431e43-f67c-4f25-bebf-be316f537ca9 req-f3b2f5f0-2349-45ac-92f0-4ebe4348d668 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-1b7f03a3-6b73-478f-bf13-cf062714faef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 05:27:40 np0005486808 nova_compute[259627]: 2025-10-14 09:27:40.286 2 DEBUG oslo_concurrency.lockutils [req-6e431e43-f67c-4f25-bebf-be316f537ca9 req-f3b2f5f0-2349-45ac-92f0-4ebe4348d668 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-1b7f03a3-6b73-478f-bf13-cf062714faef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 05:27:40 np0005486808 nova_compute[259627]: 2025-10-14 09:27:40.286 2 DEBUG nova.network.neutron [req-6e431e43-f67c-4f25-bebf-be316f537ca9 req-f3b2f5f0-2349-45ac-92f0-4ebe4348d668 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Refreshing network info cache for port c84419ee-1585-485f-ae91-116f2123dadf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 05:27:40 np0005486808 nova_compute[259627]: 2025-10-14 09:27:40.302 2 INFO nova.compute.manager [-] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Took 0.68 seconds to deallocate network for instance.
Oct 14 05:27:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2236: 305 pgs: 305 active+clean; 153 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 128 KiB/s rd, 114 KiB/s wr, 65 op/s
Oct 14 05:27:40 np0005486808 nova_compute[259627]: 2025-10-14 09:27:40.356 2 DEBUG oslo_concurrency.lockutils [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:27:40 np0005486808 nova_compute[259627]: 2025-10-14 09:27:40.357 2 DEBUG oslo_concurrency.lockutils [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:27:40 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Oct 14 05:27:40 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:27:40.419916) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 05:27:40 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Oct 14 05:27:40 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434060419976, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 2069, "num_deletes": 251, "total_data_size": 3397341, "memory_usage": 3446584, "flush_reason": "Manual Compaction"}
Oct 14 05:27:40 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Oct 14 05:27:40 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434060442702, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 3308248, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45171, "largest_seqno": 47239, "table_properties": {"data_size": 3298863, "index_size": 5879, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19236, "raw_average_key_size": 20, "raw_value_size": 3280173, "raw_average_value_size": 3445, "num_data_blocks": 261, "num_entries": 952, "num_filter_entries": 952, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760433847, "oldest_key_time": 1760433847, "file_creation_time": 1760434060, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:27:40 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 22860 microseconds, and 15089 cpu microseconds.
Oct 14 05:27:40 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:27:40 np0005486808 nova_compute[259627]: 2025-10-14 09:27:40.445 2 DEBUG oslo_concurrency.processutils [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:27:40 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:27:40.442772) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 3308248 bytes OK
Oct 14 05:27:40 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:27:40.442800) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Oct 14 05:27:40 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:27:40.449425) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Oct 14 05:27:40 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:27:40.449469) EVENT_LOG_v1 {"time_micros": 1760434060449459, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 05:27:40 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:27:40.449493) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 05:27:40 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 3388631, prev total WAL file size 3388631, number of live WAL files 2.
Oct 14 05:27:40 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:27:40 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:27:40.450659) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Oct 14 05:27:40 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 05:27:40 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(3230KB)], [104(8463KB)]
Oct 14 05:27:40 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434060450690, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 11975056, "oldest_snapshot_seqno": -1}
Oct 14 05:27:40 np0005486808 nova_compute[259627]: 2025-10-14 09:27:40.489 2 DEBUG nova.network.neutron [req-6e431e43-f67c-4f25-bebf-be316f537ca9 req-f3b2f5f0-2349-45ac-92f0-4ebe4348d668 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 05:27:40 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 7045 keys, 10271158 bytes, temperature: kUnknown
Oct 14 05:27:40 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434060500040, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 10271158, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10222651, "index_size": 29771, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17669, "raw_key_size": 181672, "raw_average_key_size": 25, "raw_value_size": 10095187, "raw_average_value_size": 1432, "num_data_blocks": 1172, "num_entries": 7045, "num_filter_entries": 7045, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760434060, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:27:40 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:27:40 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:27:40.500264) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 10271158 bytes
Oct 14 05:27:40 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:27:40.501639) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 242.3 rd, 207.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.3 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(6.7) write-amplify(3.1) OK, records in: 7559, records dropped: 514 output_compression: NoCompression
Oct 14 05:27:40 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:27:40.501663) EVENT_LOG_v1 {"time_micros": 1760434060501649, "job": 62, "event": "compaction_finished", "compaction_time_micros": 49414, "compaction_time_cpu_micros": 24755, "output_level": 6, "num_output_files": 1, "total_output_size": 10271158, "num_input_records": 7559, "num_output_records": 7045, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 05:27:40 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:27:40 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434060502378, "job": 62, "event": "table_file_deletion", "file_number": 106}
Oct 14 05:27:40 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:27:40 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434060504004, "job": 62, "event": "table_file_deletion", "file_number": 104}
Oct 14 05:27:40 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:27:40.450589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:27:40 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:27:40.504110) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:27:40 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:27:40.504114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:27:40 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:27:40.504116) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:27:40 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:27:40.504118) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:27:40 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:27:40.504120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]: {
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:    "0": [
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:        {
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:            "devices": [
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:                "/dev/loop3"
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:            ],
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:            "lv_name": "ceph_lv0",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:            "lv_size": "21470642176",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:            "name": "ceph_lv0",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:            "tags": {
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:                "ceph.cluster_name": "ceph",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:                "ceph.crush_device_class": "",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:                "ceph.encrypted": "0",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:                "ceph.osd_id": "0",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:                "ceph.type": "block",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:                "ceph.vdo": "0"
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:            },
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:            "type": "block",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:            "vg_name": "ceph_vg0"
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:        }
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:    ],
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:    "1": [
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:        {
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:            "devices": [
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:                "/dev/loop4"
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:            ],
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:            "lv_name": "ceph_lv1",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:            "lv_size": "21470642176",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:            "name": "ceph_lv1",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:            "tags": {
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:                "ceph.cluster_name": "ceph",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:                "ceph.crush_device_class": "",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:                "ceph.encrypted": "0",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:                "ceph.osd_id": "1",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:                "ceph.type": "block",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:                "ceph.vdo": "0"
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:            },
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:            "type": "block",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:            "vg_name": "ceph_vg1"
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:        }
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:    ],
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:    "2": [
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:        {
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:            "devices": [
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:                "/dev/loop5"
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:            ],
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:            "lv_name": "ceph_lv2",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:            "lv_size": "21470642176",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:            "name": "ceph_lv2",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:            "tags": {
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:                "ceph.cluster_name": "ceph",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:                "ceph.crush_device_class": "",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:                "ceph.encrypted": "0",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:                "ceph.osd_id": "2",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:                "ceph.type": "block",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:                "ceph.vdo": "0"
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:            },
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:            "type": "block",
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:            "vg_name": "ceph_vg2"
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:        }
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]:    ]
Oct 14 05:27:40 np0005486808 compassionate_kowalevski[392938]: }
Oct 14 05:27:40 np0005486808 systemd[1]: libpod-6ec5547495ec59683d7e8f020258d642a7a2395527d9a987c780f5f943fa969e.scope: Deactivated successfully.
Oct 14 05:27:40 np0005486808 podman[392922]: 2025-10-14 09:27:40.811135587 +0000 UTC m=+0.987600055 container died 6ec5547495ec59683d7e8f020258d642a7a2395527d9a987c780f5f943fa969e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kowalevski, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 14 05:27:40 np0005486808 systemd[1]: var-lib-containers-storage-overlay-88e652c71bd08c806139465e8a7b7964056568a8f965511161a23932c2b75049-merged.mount: Deactivated successfully.
Oct 14 05:27:40 np0005486808 podman[392922]: 2025-10-14 09:27:40.86388023 +0000 UTC m=+1.040344688 container remove 6ec5547495ec59683d7e8f020258d642a7a2395527d9a987c780f5f943fa969e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 14 05:27:40 np0005486808 systemd[1]: libpod-conmon-6ec5547495ec59683d7e8f020258d642a7a2395527d9a987c780f5f943fa969e.scope: Deactivated successfully.
Oct 14 05:27:40 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:27:40 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1960186832' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:27:40 np0005486808 nova_compute[259627]: 2025-10-14 09:27:40.947 2 DEBUG oslo_concurrency.processutils [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:27:40 np0005486808 nova_compute[259627]: 2025-10-14 09:27:40.954 2 DEBUG nova.compute.provider_tree [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:27:40 np0005486808 nova_compute[259627]: 2025-10-14 09:27:40.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:27:40 np0005486808 nova_compute[259627]: 2025-10-14 09:27:40.978 2 DEBUG nova.scheduler.client.report [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:27:41 np0005486808 nova_compute[259627]: 2025-10-14 09:27:41.017 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:27:41 np0005486808 nova_compute[259627]: 2025-10-14 09:27:41.019 2 DEBUG oslo_concurrency.lockutils [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:27:41 np0005486808 nova_compute[259627]: 2025-10-14 09:27:41.021 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:27:41 np0005486808 nova_compute[259627]: 2025-10-14 09:27:41.021 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:27:41 np0005486808 nova_compute[259627]: 2025-10-14 09:27:41.021 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:27:41 np0005486808 nova_compute[259627]: 2025-10-14 09:27:41.021 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:27:41 np0005486808 nova_compute[259627]: 2025-10-14 09:27:41.070 2 DEBUG nova.network.neutron [req-6e431e43-f67c-4f25-bebf-be316f537ca9 req-f3b2f5f0-2349-45ac-92f0-4ebe4348d668 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:27:41 np0005486808 nova_compute[259627]: 2025-10-14 09:27:41.095 2 DEBUG oslo_concurrency.lockutils [req-6e431e43-f67c-4f25-bebf-be316f537ca9 req-f3b2f5f0-2349-45ac-92f0-4ebe4348d668 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-1b7f03a3-6b73-478f-bf13-cf062714faef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:27:41 np0005486808 nova_compute[259627]: 2025-10-14 09:27:41.109 2 INFO nova.scheduler.client.report [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Deleted allocations for instance 1b7f03a3-6b73-478f-bf13-cf062714faef#033[00m
Oct 14 05:27:41 np0005486808 nova_compute[259627]: 2025-10-14 09:27:41.181 2 DEBUG oslo_concurrency.lockutils [None req-964d5cda-2a8d-4cbf-a27a-571024a2dfc0 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "1b7f03a3-6b73-478f-bf13-cf062714faef" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.298s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:27:41 np0005486808 nova_compute[259627]: 2025-10-14 09:27:41.385 2 DEBUG nova.compute.manager [req-d15b6de8-7ce3-49c2-9d4b-c1072fd4b5c5 req-458691f6-15aa-454f-9709-d5e36d3741e4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Received event network-vif-plugged-c84419ee-1585-485f-ae91-116f2123dadf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:27:41 np0005486808 nova_compute[259627]: 2025-10-14 09:27:41.387 2 DEBUG oslo_concurrency.lockutils [req-d15b6de8-7ce3-49c2-9d4b-c1072fd4b5c5 req-458691f6-15aa-454f-9709-d5e36d3741e4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1b7f03a3-6b73-478f-bf13-cf062714faef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:27:41 np0005486808 nova_compute[259627]: 2025-10-14 09:27:41.387 2 DEBUG oslo_concurrency.lockutils [req-d15b6de8-7ce3-49c2-9d4b-c1072fd4b5c5 req-458691f6-15aa-454f-9709-d5e36d3741e4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1b7f03a3-6b73-478f-bf13-cf062714faef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:27:41 np0005486808 nova_compute[259627]: 2025-10-14 09:27:41.388 2 DEBUG oslo_concurrency.lockutils [req-d15b6de8-7ce3-49c2-9d4b-c1072fd4b5c5 req-458691f6-15aa-454f-9709-d5e36d3741e4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1b7f03a3-6b73-478f-bf13-cf062714faef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:27:41 np0005486808 nova_compute[259627]: 2025-10-14 09:27:41.388 2 DEBUG nova.compute.manager [req-d15b6de8-7ce3-49c2-9d4b-c1072fd4b5c5 req-458691f6-15aa-454f-9709-d5e36d3741e4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] No waiting events found dispatching network-vif-plugged-c84419ee-1585-485f-ae91-116f2123dadf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:27:41 np0005486808 nova_compute[259627]: 2025-10-14 09:27:41.389 2 WARNING nova.compute.manager [req-d15b6de8-7ce3-49c2-9d4b-c1072fd4b5c5 req-458691f6-15aa-454f-9709-d5e36d3741e4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Received unexpected event network-vif-plugged-c84419ee-1585-485f-ae91-116f2123dadf for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:27:41 np0005486808 nova_compute[259627]: 2025-10-14 09:27:41.390 2 DEBUG nova.compute.manager [req-d15b6de8-7ce3-49c2-9d4b-c1072fd4b5c5 req-458691f6-15aa-454f-9709-d5e36d3741e4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Received event network-vif-deleted-c84419ee-1585-485f-ae91-116f2123dadf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:27:41 np0005486808 nova_compute[259627]: 2025-10-14 09:27:41.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:27:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4248073203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:27:41 np0005486808 nova_compute[259627]: 2025-10-14 09:27:41.488 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:27:41 np0005486808 nova_compute[259627]: 2025-10-14 09:27:41.581 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:27:41 np0005486808 nova_compute[259627]: 2025-10-14 09:27:41.583 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:27:41 np0005486808 podman[393144]: 2025-10-14 09:27:41.597468576 +0000 UTC m=+0.052669313 container create aba068cbaa4aa018842f03b45b6736a9a41678749219e2ad56a3a7eb5a8f2a66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mendel, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:27:41 np0005486808 systemd[1]: Started libpod-conmon-aba068cbaa4aa018842f03b45b6736a9a41678749219e2ad56a3a7eb5a8f2a66.scope.
Oct 14 05:27:41 np0005486808 podman[393144]: 2025-10-14 09:27:41.569166416 +0000 UTC m=+0.024367233 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:27:41 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:27:41 np0005486808 podman[393144]: 2025-10-14 09:27:41.689990022 +0000 UTC m=+0.145190759 container init aba068cbaa4aa018842f03b45b6736a9a41678749219e2ad56a3a7eb5a8f2a66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mendel, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:27:41 np0005486808 podman[393144]: 2025-10-14 09:27:41.696976884 +0000 UTC m=+0.152177621 container start aba068cbaa4aa018842f03b45b6736a9a41678749219e2ad56a3a7eb5a8f2a66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mendel, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 14 05:27:41 np0005486808 podman[393144]: 2025-10-14 09:27:41.700178664 +0000 UTC m=+0.155379401 container attach aba068cbaa4aa018842f03b45b6736a9a41678749219e2ad56a3a7eb5a8f2a66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mendel, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 14 05:27:41 np0005486808 affectionate_mendel[393160]: 167 167
Oct 14 05:27:41 np0005486808 systemd[1]: libpod-aba068cbaa4aa018842f03b45b6736a9a41678749219e2ad56a3a7eb5a8f2a66.scope: Deactivated successfully.
Oct 14 05:27:41 np0005486808 podman[393144]: 2025-10-14 09:27:41.704784427 +0000 UTC m=+0.159985174 container died aba068cbaa4aa018842f03b45b6736a9a41678749219e2ad56a3a7eb5a8f2a66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 14 05:27:41 np0005486808 systemd[1]: var-lib-containers-storage-overlay-b883645ed3837434afe8c6401a9c340c11bf4b1bc8e68343dc89c12d82ef9703-merged.mount: Deactivated successfully.
Oct 14 05:27:41 np0005486808 podman[393144]: 2025-10-14 09:27:41.752140688 +0000 UTC m=+0.207341435 container remove aba068cbaa4aa018842f03b45b6736a9a41678749219e2ad56a3a7eb5a8f2a66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mendel, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 05:27:41 np0005486808 systemd[1]: libpod-conmon-aba068cbaa4aa018842f03b45b6736a9a41678749219e2ad56a3a7eb5a8f2a66.scope: Deactivated successfully.
Oct 14 05:27:41 np0005486808 nova_compute[259627]: 2025-10-14 09:27:41.801 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:27:41 np0005486808 nova_compute[259627]: 2025-10-14 09:27:41.802 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3402MB free_disk=59.92548370361328GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:27:41 np0005486808 nova_compute[259627]: 2025-10-14 09:27:41.803 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:27:41 np0005486808 nova_compute[259627]: 2025-10-14 09:27:41.803 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:27:41 np0005486808 nova_compute[259627]: 2025-10-14 09:27:41.894 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 2595dec0-9170-4e8f-a6bc-9179d30519a9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:27:41 np0005486808 nova_compute[259627]: 2025-10-14 09:27:41.895 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:27:41 np0005486808 nova_compute[259627]: 2025-10-14 09:27:41.895 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:27:41 np0005486808 nova_compute[259627]: 2025-10-14 09:27:41.958 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:27:41 np0005486808 podman[393183]: 2025-10-14 09:27:41.960563258 +0000 UTC m=+0.058945148 container create ef2bc40a6a01703bb9e38d8d8d43c0c223b5c7191fe19243b6068b70e20f4559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 14 05:27:42 np0005486808 systemd[1]: Started libpod-conmon-ef2bc40a6a01703bb9e38d8d8d43c0c223b5c7191fe19243b6068b70e20f4559.scope.
Oct 14 05:27:42 np0005486808 podman[393183]: 2025-10-14 09:27:41.93071226 +0000 UTC m=+0.029094220 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:27:42 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:27:42 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3c80e666ad34df73414557d76c856e55dda9f17cbea23ef027c3049e084e208/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:27:42 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3c80e666ad34df73414557d76c856e55dda9f17cbea23ef027c3049e084e208/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:27:42 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3c80e666ad34df73414557d76c856e55dda9f17cbea23ef027c3049e084e208/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:27:42 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3c80e666ad34df73414557d76c856e55dda9f17cbea23ef027c3049e084e208/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:27:42 np0005486808 podman[393183]: 2025-10-14 09:27:42.074228326 +0000 UTC m=+0.172610266 container init ef2bc40a6a01703bb9e38d8d8d43c0c223b5c7191fe19243b6068b70e20f4559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_zhukovsky, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:27:42 np0005486808 podman[393183]: 2025-10-14 09:27:42.090137029 +0000 UTC m=+0.188518939 container start ef2bc40a6a01703bb9e38d8d8d43c0c223b5c7191fe19243b6068b70e20f4559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:27:42 np0005486808 podman[393183]: 2025-10-14 09:27:42.094279752 +0000 UTC m=+0.192661632 container attach ef2bc40a6a01703bb9e38d8d8d43c0c223b5c7191fe19243b6068b70e20f4559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:27:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2237: 305 pgs: 305 active+clean; 121 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 29 KiB/s wr, 64 op/s
Oct 14 05:27:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:27:42 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1042198546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:27:42 np0005486808 nova_compute[259627]: 2025-10-14 09:27:42.456 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:27:42 np0005486808 nova_compute[259627]: 2025-10-14 09:27:42.467 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:27:42 np0005486808 nova_compute[259627]: 2025-10-14 09:27:42.488 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:27:42 np0005486808 nova_compute[259627]: 2025-10-14 09:27:42.511 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:27:42 np0005486808 nova_compute[259627]: 2025-10-14 09:27:42.512 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:27:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:27:43 np0005486808 hardcore_zhukovsky[393200]: {
Oct 14 05:27:43 np0005486808 hardcore_zhukovsky[393200]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:27:43 np0005486808 hardcore_zhukovsky[393200]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:27:43 np0005486808 hardcore_zhukovsky[393200]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:27:43 np0005486808 hardcore_zhukovsky[393200]:        "osd_id": 2,
Oct 14 05:27:43 np0005486808 hardcore_zhukovsky[393200]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:27:43 np0005486808 hardcore_zhukovsky[393200]:        "type": "bluestore"
Oct 14 05:27:43 np0005486808 hardcore_zhukovsky[393200]:    },
Oct 14 05:27:43 np0005486808 hardcore_zhukovsky[393200]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:27:43 np0005486808 hardcore_zhukovsky[393200]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:27:43 np0005486808 hardcore_zhukovsky[393200]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:27:43 np0005486808 hardcore_zhukovsky[393200]:        "osd_id": 1,
Oct 14 05:27:43 np0005486808 hardcore_zhukovsky[393200]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:27:43 np0005486808 hardcore_zhukovsky[393200]:        "type": "bluestore"
Oct 14 05:27:43 np0005486808 hardcore_zhukovsky[393200]:    },
Oct 14 05:27:43 np0005486808 hardcore_zhukovsky[393200]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:27:43 np0005486808 hardcore_zhukovsky[393200]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:27:43 np0005486808 hardcore_zhukovsky[393200]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:27:43 np0005486808 hardcore_zhukovsky[393200]:        "osd_id": 0,
Oct 14 05:27:43 np0005486808 hardcore_zhukovsky[393200]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:27:43 np0005486808 hardcore_zhukovsky[393200]:        "type": "bluestore"
Oct 14 05:27:43 np0005486808 hardcore_zhukovsky[393200]:    }
Oct 14 05:27:43 np0005486808 hardcore_zhukovsky[393200]: }
Oct 14 05:27:43 np0005486808 systemd[1]: libpod-ef2bc40a6a01703bb9e38d8d8d43c0c223b5c7191fe19243b6068b70e20f4559.scope: Deactivated successfully.
Oct 14 05:27:43 np0005486808 systemd[1]: libpod-ef2bc40a6a01703bb9e38d8d8d43c0c223b5c7191fe19243b6068b70e20f4559.scope: Consumed 1.076s CPU time.
Oct 14 05:27:43 np0005486808 podman[393183]: 2025-10-14 09:27:43.165089951 +0000 UTC m=+1.263471911 container died ef2bc40a6a01703bb9e38d8d8d43c0c223b5c7191fe19243b6068b70e20f4559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_zhukovsky, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:27:43 np0005486808 nova_compute[259627]: 2025-10-14 09:27:43.201 2 DEBUG oslo_concurrency.lockutils [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "2595dec0-9170-4e8f-a6bc-9179d30519a9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:27:43 np0005486808 systemd[1]: var-lib-containers-storage-overlay-e3c80e666ad34df73414557d76c856e55dda9f17cbea23ef027c3049e084e208-merged.mount: Deactivated successfully.
Oct 14 05:27:43 np0005486808 nova_compute[259627]: 2025-10-14 09:27:43.202 2 DEBUG oslo_concurrency.lockutils [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:27:43 np0005486808 nova_compute[259627]: 2025-10-14 09:27:43.203 2 DEBUG oslo_concurrency.lockutils [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:27:43 np0005486808 nova_compute[259627]: 2025-10-14 09:27:43.205 2 DEBUG oslo_concurrency.lockutils [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:27:43 np0005486808 nova_compute[259627]: 2025-10-14 09:27:43.205 2 DEBUG oslo_concurrency.lockutils [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:27:43 np0005486808 nova_compute[259627]: 2025-10-14 09:27:43.208 2 INFO nova.compute.manager [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Terminating instance#033[00m
Oct 14 05:27:43 np0005486808 nova_compute[259627]: 2025-10-14 09:27:43.210 2 DEBUG nova.compute.manager [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:27:43 np0005486808 podman[393183]: 2025-10-14 09:27:43.256168022 +0000 UTC m=+1.354549932 container remove ef2bc40a6a01703bb9e38d8d8d43c0c223b5c7191fe19243b6068b70e20f4559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 14 05:27:43 np0005486808 systemd[1]: libpod-conmon-ef2bc40a6a01703bb9e38d8d8d43c0c223b5c7191fe19243b6068b70e20f4559.scope: Deactivated successfully.
Oct 14 05:27:43 np0005486808 kernel: tap9ecc8f01-43 (unregistering): left promiscuous mode
Oct 14 05:27:43 np0005486808 NetworkManager[44885]: <info>  [1760434063.2881] device (tap9ecc8f01-43): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:27:43 np0005486808 ovn_controller[152662]: 2025-10-14T09:27:43Z|01380|binding|INFO|Releasing lport 9ecc8f01-430d-4714-8d7e-e60d7edaa73c from this chassis (sb_readonly=0)
Oct 14 05:27:43 np0005486808 ovn_controller[152662]: 2025-10-14T09:27:43Z|01381|binding|INFO|Setting lport 9ecc8f01-430d-4714-8d7e-e60d7edaa73c down in Southbound
Oct 14 05:27:43 np0005486808 ovn_controller[152662]: 2025-10-14T09:27:43Z|01382|binding|INFO|Removing iface tap9ecc8f01-43 ovn-installed in OVS
Oct 14 05:27:43 np0005486808 nova_compute[259627]: 2025-10-14 09:27:43.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:43.311 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:8f:64 10.100.0.3'], port_security=['fa:16:3e:50:8f:64 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '2595dec0-9170-4e8f-a6bc-9179d30519a9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-44344b65-f325-470a-bd36-6f52ed03d317', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '8', 'neutron:security_group_ids': '8e192907-665a-4f92-bc1f-6ecbfbe8292b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5dceacdb-1a50-419e-9ee9-f149e7094b34, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=9ecc8f01-430d-4714-8d7e-e60d7edaa73c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:27:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:43.312 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 9ecc8f01-430d-4714-8d7e-e60d7edaa73c in datapath 44344b65-f325-470a-bd36-6f52ed03d317 unbound from our chassis#033[00m
Oct 14 05:27:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:43.313 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 44344b65-f325-470a-bd36-6f52ed03d317, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:27:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:43.316 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5e1d7206-0e96-4207-920d-04893e9c9da6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:27:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:43.316 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317 namespace which is not needed anymore#033[00m
Oct 14 05:27:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:27:43 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:27:43 np0005486808 nova_compute[259627]: 2025-10-14 09:27:43.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:27:43 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:27:43 np0005486808 systemd[1]: machine-qemu\x2d160\x2dinstance\x2d0000007f.scope: Deactivated successfully.
Oct 14 05:27:43 np0005486808 systemd[1]: machine-qemu\x2d160\x2dinstance\x2d0000007f.scope: Consumed 14.215s CPU time.
Oct 14 05:27:43 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 7e22ddb7-6ab6-443f-b8c2-563acfe6bf28 does not exist
Oct 14 05:27:43 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 761063e5-1bff-4b92-880f-755ac040b640 does not exist
Oct 14 05:27:43 np0005486808 systemd-machined[214636]: Machine qemu-160-instance-0000007f terminated.
Oct 14 05:27:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:27:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:27:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:27:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:27:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007600997305788891 of space, bias 1.0, pg target 0.22802991917366675 quantized to 32 (current 32)
Oct 14 05:27:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:27:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:27:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:27:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:27:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:27:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 05:27:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:27:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:27:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:27:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:27:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:27:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:27:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:27:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:27:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:27:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:27:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:27:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:27:43 np0005486808 kernel: tap9ecc8f01-43: entered promiscuous mode
Oct 14 05:27:43 np0005486808 NetworkManager[44885]: <info>  [1760434063.4350] manager: (tap9ecc8f01-43): new Tun device (/org/freedesktop/NetworkManager/Devices/563)
Oct 14 05:27:43 np0005486808 kernel: tap9ecc8f01-43 (unregistering): left promiscuous mode
Oct 14 05:27:43 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:27:43 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:27:43 np0005486808 nova_compute[259627]: 2025-10-14 09:27:43.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:43 np0005486808 ovn_controller[152662]: 2025-10-14T09:27:43Z|01383|binding|INFO|Claiming lport 9ecc8f01-430d-4714-8d7e-e60d7edaa73c for this chassis.
Oct 14 05:27:43 np0005486808 ovn_controller[152662]: 2025-10-14T09:27:43Z|01384|binding|INFO|9ecc8f01-430d-4714-8d7e-e60d7edaa73c: Claiming fa:16:3e:50:8f:64 10.100.0.3
Oct 14 05:27:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:43.455 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:8f:64 10.100.0.3'], port_security=['fa:16:3e:50:8f:64 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '2595dec0-9170-4e8f-a6bc-9179d30519a9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-44344b65-f325-470a-bd36-6f52ed03d317', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '8', 'neutron:security_group_ids': '8e192907-665a-4f92-bc1f-6ecbfbe8292b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5dceacdb-1a50-419e-9ee9-f149e7094b34, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=9ecc8f01-430d-4714-8d7e-e60d7edaa73c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:27:43 np0005486808 nova_compute[259627]: 2025-10-14 09:27:43.475 2 INFO nova.virt.libvirt.driver [-] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Instance destroyed successfully.#033[00m
Oct 14 05:27:43 np0005486808 nova_compute[259627]: 2025-10-14 09:27:43.477 2 DEBUG nova.objects.instance [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'resources' on Instance uuid 2595dec0-9170-4e8f-a6bc-9179d30519a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:27:43 np0005486808 ovn_controller[152662]: 2025-10-14T09:27:43Z|01385|binding|INFO|Setting lport 9ecc8f01-430d-4714-8d7e-e60d7edaa73c ovn-installed in OVS
Oct 14 05:27:43 np0005486808 ovn_controller[152662]: 2025-10-14T09:27:43Z|01386|binding|INFO|Setting lport 9ecc8f01-430d-4714-8d7e-e60d7edaa73c up in Southbound
Oct 14 05:27:43 np0005486808 ovn_controller[152662]: 2025-10-14T09:27:43Z|01387|binding|INFO|Releasing lport 9ecc8f01-430d-4714-8d7e-e60d7edaa73c from this chassis (sb_readonly=1)
Oct 14 05:27:43 np0005486808 ovn_controller[152662]: 2025-10-14T09:27:43Z|01388|if_status|INFO|Dropped 2 log messages in last 633 seconds (most recently, 633 seconds ago) due to excessive rate
Oct 14 05:27:43 np0005486808 ovn_controller[152662]: 2025-10-14T09:27:43Z|01389|if_status|INFO|Not setting lport 9ecc8f01-430d-4714-8d7e-e60d7edaa73c down as sb is readonly
Oct 14 05:27:43 np0005486808 ovn_controller[152662]: 2025-10-14T09:27:43Z|01390|binding|INFO|Removing iface tap9ecc8f01-43 ovn-installed in OVS
Oct 14 05:27:43 np0005486808 nova_compute[259627]: 2025-10-14 09:27:43.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:43 np0005486808 ovn_controller[152662]: 2025-10-14T09:27:43Z|01391|binding|INFO|Releasing lport 9ecc8f01-430d-4714-8d7e-e60d7edaa73c from this chassis (sb_readonly=0)
Oct 14 05:27:43 np0005486808 ovn_controller[152662]: 2025-10-14T09:27:43Z|01392|binding|INFO|Setting lport 9ecc8f01-430d-4714-8d7e-e60d7edaa73c down in Southbound
Oct 14 05:27:43 np0005486808 nova_compute[259627]: 2025-10-14 09:27:43.508 2 DEBUG nova.virt.libvirt.vif [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:26:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-358564178',display_name='tempest-TestNetworkBasicOps-server-358564178',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-358564178',id=127,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHcVlyHKVFHxb0hriNyI1hppvpwNJ/aTRlLE7dBDeajB0uM5sP+bnasOk+ko2DL77CLK3QbWVr/+3RKN6o4h1D1BJ0FS9znP9UgUkNA33oyzkv3sPnYQc7bgh/xOganUMg==',key_name='tempest-TestNetworkBasicOps-1632567993',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:26:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-nbahr6ql',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:26:56Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=2595dec0-9170-4e8f-a6bc-9179d30519a9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "address": "fa:16:3e:50:8f:64", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ecc8f01-43", "ovs_interfaceid": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:27:43 np0005486808 nova_compute[259627]: 2025-10-14 09:27:43.508 2 DEBUG nova.network.os_vif_util [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "address": "fa:16:3e:50:8f:64", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ecc8f01-43", "ovs_interfaceid": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:27:43 np0005486808 nova_compute[259627]: 2025-10-14 09:27:43.509 2 DEBUG nova.network.os_vif_util [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:50:8f:64,bridge_name='br-int',has_traffic_filtering=True,id=9ecc8f01-430d-4714-8d7e-e60d7edaa73c,network=Network(44344b65-f325-470a-bd36-6f52ed03d317),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ecc8f01-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:27:43 np0005486808 nova_compute[259627]: 2025-10-14 09:27:43.509 2 DEBUG os_vif [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:50:8f:64,bridge_name='br-int',has_traffic_filtering=True,id=9ecc8f01-430d-4714-8d7e-e60d7edaa73c,network=Network(44344b65-f325-470a-bd36-6f52ed03d317),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ecc8f01-43') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:27:43 np0005486808 nova_compute[259627]: 2025-10-14 09:27:43.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:43 np0005486808 nova_compute[259627]: 2025-10-14 09:27:43.511 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ecc8f01-43, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:27:43 np0005486808 nova_compute[259627]: 2025-10-14 09:27:43.513 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:27:43 np0005486808 nova_compute[259627]: 2025-10-14 09:27:43.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:43 np0005486808 nova_compute[259627]: 2025-10-14 09:27:43.517 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:27:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:43.520 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:8f:64 10.100.0.3'], port_security=['fa:16:3e:50:8f:64 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '2595dec0-9170-4e8f-a6bc-9179d30519a9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-44344b65-f325-470a-bd36-6f52ed03d317', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '8', 'neutron:security_group_ids': '8e192907-665a-4f92-bc1f-6ecbfbe8292b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5dceacdb-1a50-419e-9ee9-f149e7094b34, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=9ecc8f01-430d-4714-8d7e-e60d7edaa73c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:27:43 np0005486808 nova_compute[259627]: 2025-10-14 09:27:43.520 2 DEBUG nova.compute.manager [req-247849e6-5bd4-453d-814a-89befe5ca099 req-f28b0ea9-29b9-4b87-b077-38c587d705bb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received event network-changed-9ecc8f01-430d-4714-8d7e-e60d7edaa73c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:27:43 np0005486808 nova_compute[259627]: 2025-10-14 09:27:43.521 2 DEBUG nova.compute.manager [req-247849e6-5bd4-453d-814a-89befe5ca099 req-f28b0ea9-29b9-4b87-b077-38c587d705bb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Refreshing instance network info cache due to event network-changed-9ecc8f01-430d-4714-8d7e-e60d7edaa73c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:27:43 np0005486808 nova_compute[259627]: 2025-10-14 09:27:43.521 2 DEBUG oslo_concurrency.lockutils [req-247849e6-5bd4-453d-814a-89befe5ca099 req-f28b0ea9-29b9-4b87-b077-38c587d705bb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-2595dec0-9170-4e8f-a6bc-9179d30519a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:27:43 np0005486808 nova_compute[259627]: 2025-10-14 09:27:43.521 2 DEBUG oslo_concurrency.lockutils [req-247849e6-5bd4-453d-814a-89befe5ca099 req-f28b0ea9-29b9-4b87-b077-38c587d705bb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-2595dec0-9170-4e8f-a6bc-9179d30519a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:27:43 np0005486808 nova_compute[259627]: 2025-10-14 09:27:43.521 2 DEBUG nova.network.neutron [req-247849e6-5bd4-453d-814a-89befe5ca099 req-f28b0ea9-29b9-4b87-b077-38c587d705bb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Refreshing network info cache for port 9ecc8f01-430d-4714-8d7e-e60d7edaa73c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:27:43 np0005486808 nova_compute[259627]: 2025-10-14 09:27:43.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:27:43 np0005486808 nova_compute[259627]: 2025-10-14 09:27:43.525 2 INFO os_vif [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:50:8f:64,bridge_name='br-int',has_traffic_filtering=True,id=9ecc8f01-430d-4714-8d7e-e60d7edaa73c,network=Network(44344b65-f325-470a-bd36-6f52ed03d317),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ecc8f01-43')#033[00m
Oct 14 05:27:43 np0005486808 neutron-haproxy-ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317[390967]: [NOTICE]   (390972) : haproxy version is 2.8.14-c23fe91
Oct 14 05:27:43 np0005486808 neutron-haproxy-ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317[390967]: [NOTICE]   (390972) : path to executable is /usr/sbin/haproxy
Oct 14 05:27:43 np0005486808 neutron-haproxy-ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317[390967]: [WARNING]  (390972) : Exiting Master process...
Oct 14 05:27:43 np0005486808 neutron-haproxy-ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317[390967]: [WARNING]  (390972) : Exiting Master process...
Oct 14 05:27:43 np0005486808 neutron-haproxy-ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317[390967]: [ALERT]    (390972) : Current worker (390974) exited with code 143 (Terminated)
Oct 14 05:27:43 np0005486808 neutron-haproxy-ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317[390967]: [WARNING]  (390972) : All workers exited. Exiting... (0)
Oct 14 05:27:43 np0005486808 systemd[1]: libpod-5fdd23863c0b7f91542b5912dc88c90cdcb240dffa368e5f06ac1f336316bc03.scope: Deactivated successfully.
Oct 14 05:27:43 np0005486808 podman[393315]: 2025-10-14 09:27:43.537525184 +0000 UTC m=+0.066886604 container died 5fdd23863c0b7f91542b5912dc88c90cdcb240dffa368e5f06ac1f336316bc03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:27:43 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5fdd23863c0b7f91542b5912dc88c90cdcb240dffa368e5f06ac1f336316bc03-userdata-shm.mount: Deactivated successfully.
Oct 14 05:27:43 np0005486808 systemd[1]: var-lib-containers-storage-overlay-1d2b1df47e436b2c8cb55d0e32cd763db11840ac3a9a7beaf43710acbfe9e885-merged.mount: Deactivated successfully.
Oct 14 05:27:43 np0005486808 podman[393315]: 2025-10-14 09:27:43.58836055 +0000 UTC m=+0.117721960 container cleanup 5fdd23863c0b7f91542b5912dc88c90cdcb240dffa368e5f06ac1f336316bc03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:27:43 np0005486808 systemd[1]: libpod-conmon-5fdd23863c0b7f91542b5912dc88c90cdcb240dffa368e5f06ac1f336316bc03.scope: Deactivated successfully.
Oct 14 05:27:43 np0005486808 podman[393395]: 2025-10-14 09:27:43.66566894 +0000 UTC m=+0.049989076 container remove 5fdd23863c0b7f91542b5912dc88c90cdcb240dffa368e5f06ac1f336316bc03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 14 05:27:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:43.672 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bdc5a587-4037-4cd5-8de7-eb3cfc015bf1]: (4, ('Tue Oct 14 09:27:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317 (5fdd23863c0b7f91542b5912dc88c90cdcb240dffa368e5f06ac1f336316bc03)\n5fdd23863c0b7f91542b5912dc88c90cdcb240dffa368e5f06ac1f336316bc03\nTue Oct 14 09:27:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317 (5fdd23863c0b7f91542b5912dc88c90cdcb240dffa368e5f06ac1f336316bc03)\n5fdd23863c0b7f91542b5912dc88c90cdcb240dffa368e5f06ac1f336316bc03\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:27:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:43.674 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1eeadc33-9ba6-49ae-a8f2-e50e0a7cfd39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:27:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:43.675 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap44344b65-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:27:43 np0005486808 kernel: tap44344b65-f0: left promiscuous mode
Oct 14 05:27:43 np0005486808 nova_compute[259627]: 2025-10-14 09:27:43.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:43.688 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7116b1e8-1ed2-4a1c-9cf7-5bc8e53dc851]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:27:43 np0005486808 nova_compute[259627]: 2025-10-14 09:27:43.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:43.720 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f980644e-5f70-4b49-94e5-a75e613c7f26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:27:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:43.722 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[99b46979-6a1f-4182-ba71-8d38a73edd4c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:27:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:43.743 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ec5b3478-403b-4445-a64d-00591b3b8fe2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 779325, 'reachable_time': 35808, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 393410, 'error': None, 'target': 'ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:27:43 np0005486808 systemd[1]: run-netns-ovnmeta\x2d44344b65\x2df325\x2d470a\x2dbd36\x2d6f52ed03d317.mount: Deactivated successfully.
Oct 14 05:27:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:43.746 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-44344b65-f325-470a-bd36-6f52ed03d317 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:27:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:43.746 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[964efcdb-7d53-404d-945b-148377056a92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:27:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:43.748 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 9ecc8f01-430d-4714-8d7e-e60d7edaa73c in datapath 44344b65-f325-470a-bd36-6f52ed03d317 unbound from our chassis#033[00m
Oct 14 05:27:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:43.749 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 44344b65-f325-470a-bd36-6f52ed03d317, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:27:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:43.750 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[290fef6f-8790-4199-b2ff-8f8d27cb1b26]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:27:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:43.750 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 9ecc8f01-430d-4714-8d7e-e60d7edaa73c in datapath 44344b65-f325-470a-bd36-6f52ed03d317 unbound from our chassis#033[00m
Oct 14 05:27:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:43.751 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 44344b65-f325-470a-bd36-6f52ed03d317, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:27:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:27:43.752 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1735f23e-a2bd-41df-b5a1-8f05ed7dc0a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:27:43 np0005486808 nova_compute[259627]: 2025-10-14 09:27:43.871 2 DEBUG nova.compute.manager [req-007d2205-d60a-4970-b7f6-58c17b83e786 req-d53029a7-350d-459b-aa4d-4f81fd98f051 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received event network-vif-unplugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:27:43 np0005486808 nova_compute[259627]: 2025-10-14 09:27:43.871 2 DEBUG oslo_concurrency.lockutils [req-007d2205-d60a-4970-b7f6-58c17b83e786 req-d53029a7-350d-459b-aa4d-4f81fd98f051 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:27:43 np0005486808 nova_compute[259627]: 2025-10-14 09:27:43.872 2 DEBUG oslo_concurrency.lockutils [req-007d2205-d60a-4970-b7f6-58c17b83e786 req-d53029a7-350d-459b-aa4d-4f81fd98f051 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:27:43 np0005486808 nova_compute[259627]: 2025-10-14 09:27:43.873 2 DEBUG oslo_concurrency.lockutils [req-007d2205-d60a-4970-b7f6-58c17b83e786 req-d53029a7-350d-459b-aa4d-4f81fd98f051 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:27:43 np0005486808 nova_compute[259627]: 2025-10-14 09:27:43.873 2 DEBUG nova.compute.manager [req-007d2205-d60a-4970-b7f6-58c17b83e786 req-d53029a7-350d-459b-aa4d-4f81fd98f051 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] No waiting events found dispatching network-vif-unplugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:27:43 np0005486808 nova_compute[259627]: 2025-10-14 09:27:43.874 2 DEBUG nova.compute.manager [req-007d2205-d60a-4970-b7f6-58c17b83e786 req-d53029a7-350d-459b-aa4d-4f81fd98f051 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received event network-vif-unplugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:27:43 np0005486808 nova_compute[259627]: 2025-10-14 09:27:43.976 2 INFO nova.virt.libvirt.driver [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Deleting instance files /var/lib/nova/instances/2595dec0-9170-4e8f-a6bc-9179d30519a9_del#033[00m
Oct 14 05:27:43 np0005486808 nova_compute[259627]: 2025-10-14 09:27:43.977 2 INFO nova.virt.libvirt.driver [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Deletion of /var/lib/nova/instances/2595dec0-9170-4e8f-a6bc-9179d30519a9_del complete#033[00m
Oct 14 05:27:43 np0005486808 nova_compute[259627]: 2025-10-14 09:27:43.981 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:27:44 np0005486808 nova_compute[259627]: 2025-10-14 09:27:44.057 2 INFO nova.compute.manager [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Took 0.85 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:27:44 np0005486808 nova_compute[259627]: 2025-10-14 09:27:44.058 2 DEBUG oslo.service.loopingcall [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:27:44 np0005486808 nova_compute[259627]: 2025-10-14 09:27:44.059 2 DEBUG nova.compute.manager [-] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:27:44 np0005486808 nova_compute[259627]: 2025-10-14 09:27:44.059 2 DEBUG nova.network.neutron [-] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:27:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2238: 305 pgs: 305 active+clean; 121 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 21 KiB/s wr, 57 op/s
Oct 14 05:27:44 np0005486808 nova_compute[259627]: 2025-10-14 09:27:44.934 2 DEBUG nova.network.neutron [req-247849e6-5bd4-453d-814a-89befe5ca099 req-f28b0ea9-29b9-4b87-b077-38c587d705bb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Updated VIF entry in instance network info cache for port 9ecc8f01-430d-4714-8d7e-e60d7edaa73c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:27:44 np0005486808 nova_compute[259627]: 2025-10-14 09:27:44.935 2 DEBUG nova.network.neutron [req-247849e6-5bd4-453d-814a-89befe5ca099 req-f28b0ea9-29b9-4b87-b077-38c587d705bb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Updating instance_info_cache with network_info: [{"id": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "address": "fa:16:3e:50:8f:64", "network": {"id": "44344b65-f325-470a-bd36-6f52ed03d317", "bridge": "br-int", "label": "tempest-network-smoke--137942009", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ecc8f01-43", "ovs_interfaceid": "9ecc8f01-430d-4714-8d7e-e60d7edaa73c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:27:44 np0005486808 nova_compute[259627]: 2025-10-14 09:27:44.964 2 DEBUG oslo_concurrency.lockutils [req-247849e6-5bd4-453d-814a-89befe5ca099 req-f28b0ea9-29b9-4b87-b077-38c587d705bb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-2595dec0-9170-4e8f-a6bc-9179d30519a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:27:45 np0005486808 nova_compute[259627]: 2025-10-14 09:27:45.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:45 np0005486808 nova_compute[259627]: 2025-10-14 09:27:45.259 2 DEBUG nova.network.neutron [-] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:27:45 np0005486808 nova_compute[259627]: 2025-10-14 09:27:45.278 2 INFO nova.compute.manager [-] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Took 1.22 seconds to deallocate network for instance.#033[00m
Oct 14 05:27:45 np0005486808 nova_compute[259627]: 2025-10-14 09:27:45.346 2 DEBUG oslo_concurrency.lockutils [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:27:45 np0005486808 nova_compute[259627]: 2025-10-14 09:27:45.347 2 DEBUG oslo_concurrency.lockutils [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:27:45 np0005486808 nova_compute[259627]: 2025-10-14 09:27:45.398 2 DEBUG oslo_concurrency.processutils [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:27:45 np0005486808 nova_compute[259627]: 2025-10-14 09:27:45.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:27:45 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1780509116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:27:45 np0005486808 nova_compute[259627]: 2025-10-14 09:27:45.935 2 DEBUG oslo_concurrency.processutils [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:27:45 np0005486808 nova_compute[259627]: 2025-10-14 09:27:45.944 2 DEBUG nova.compute.manager [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received event network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:27:45 np0005486808 nova_compute[259627]: 2025-10-14 09:27:45.944 2 DEBUG oslo_concurrency.lockutils [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:27:45 np0005486808 nova_compute[259627]: 2025-10-14 09:27:45.945 2 DEBUG oslo_concurrency.lockutils [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:27:45 np0005486808 nova_compute[259627]: 2025-10-14 09:27:45.945 2 DEBUG oslo_concurrency.lockutils [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:27:45 np0005486808 nova_compute[259627]: 2025-10-14 09:27:45.946 2 DEBUG nova.compute.manager [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] No waiting events found dispatching network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:27:45 np0005486808 nova_compute[259627]: 2025-10-14 09:27:45.946 2 WARNING nova.compute.manager [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received unexpected event network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:27:45 np0005486808 nova_compute[259627]: 2025-10-14 09:27:45.947 2 DEBUG nova.compute.manager [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received event network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:27:45 np0005486808 nova_compute[259627]: 2025-10-14 09:27:45.947 2 DEBUG oslo_concurrency.lockutils [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:27:45 np0005486808 nova_compute[259627]: 2025-10-14 09:27:45.948 2 DEBUG oslo_concurrency.lockutils [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:27:45 np0005486808 nova_compute[259627]: 2025-10-14 09:27:45.948 2 DEBUG oslo_concurrency.lockutils [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:27:45 np0005486808 nova_compute[259627]: 2025-10-14 09:27:45.948 2 DEBUG nova.compute.manager [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] No waiting events found dispatching network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:27:45 np0005486808 nova_compute[259627]: 2025-10-14 09:27:45.949 2 WARNING nova.compute.manager [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received unexpected event network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:27:45 np0005486808 nova_compute[259627]: 2025-10-14 09:27:45.949 2 DEBUG nova.compute.manager [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received event network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:27:45 np0005486808 nova_compute[259627]: 2025-10-14 09:27:45.950 2 DEBUG oslo_concurrency.lockutils [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:27:45 np0005486808 nova_compute[259627]: 2025-10-14 09:27:45.950 2 DEBUG oslo_concurrency.lockutils [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:27:45 np0005486808 nova_compute[259627]: 2025-10-14 09:27:45.950 2 DEBUG oslo_concurrency.lockutils [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:27:45 np0005486808 nova_compute[259627]: 2025-10-14 09:27:45.951 2 DEBUG nova.compute.manager [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] No waiting events found dispatching network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:27:45 np0005486808 nova_compute[259627]: 2025-10-14 09:27:45.951 2 WARNING nova.compute.manager [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received unexpected event network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:27:45 np0005486808 nova_compute[259627]: 2025-10-14 09:27:45.952 2 DEBUG nova.compute.manager [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received event network-vif-unplugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:27:45 np0005486808 nova_compute[259627]: 2025-10-14 09:27:45.952 2 DEBUG oslo_concurrency.lockutils [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:27:45 np0005486808 nova_compute[259627]: 2025-10-14 09:27:45.953 2 DEBUG oslo_concurrency.lockutils [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:27:45 np0005486808 nova_compute[259627]: 2025-10-14 09:27:45.953 2 DEBUG oslo_concurrency.lockutils [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:27:45 np0005486808 nova_compute[259627]: 2025-10-14 09:27:45.953 2 DEBUG nova.compute.manager [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] No waiting events found dispatching network-vif-unplugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:27:45 np0005486808 nova_compute[259627]: 2025-10-14 09:27:45.953 2 WARNING nova.compute.manager [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received unexpected event network-vif-unplugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:27:45 np0005486808 nova_compute[259627]: 2025-10-14 09:27:45.953 2 DEBUG nova.compute.manager [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received event network-vif-deleted-9ecc8f01-430d-4714-8d7e-e60d7edaa73c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:27:45 np0005486808 nova_compute[259627]: 2025-10-14 09:27:45.953 2 DEBUG nova.compute.manager [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received event network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:27:45 np0005486808 nova_compute[259627]: 2025-10-14 09:27:45.954 2 DEBUG oslo_concurrency.lockutils [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:27:45 np0005486808 nova_compute[259627]: 2025-10-14 09:27:45.954 2 DEBUG oslo_concurrency.lockutils [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:27:45 np0005486808 nova_compute[259627]: 2025-10-14 09:27:45.954 2 DEBUG oslo_concurrency.lockutils [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:27:45 np0005486808 nova_compute[259627]: 2025-10-14 09:27:45.954 2 DEBUG nova.compute.manager [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] No waiting events found dispatching network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:27:45 np0005486808 nova_compute[259627]: 2025-10-14 09:27:45.954 2 WARNING nova.compute.manager [req-8cbb0f0e-2334-4419-8046-dca14fd8829f req-46fe6de9-49ce-4a49-9a7b-32aeffbb90d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Received unexpected event network-vif-plugged-9ecc8f01-430d-4714-8d7e-e60d7edaa73c for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:27:45 np0005486808 nova_compute[259627]: 2025-10-14 09:27:45.957 2 DEBUG nova.compute.provider_tree [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:27:45 np0005486808 nova_compute[259627]: 2025-10-14 09:27:45.977 2 DEBUG nova.scheduler.client.report [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:27:46 np0005486808 nova_compute[259627]: 2025-10-14 09:27:46.002 2 DEBUG oslo_concurrency.lockutils [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:27:46 np0005486808 nova_compute[259627]: 2025-10-14 09:27:46.026 2 INFO nova.scheduler.client.report [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Deleted allocations for instance 2595dec0-9170-4e8f-a6bc-9179d30519a9#033[00m
Oct 14 05:27:46 np0005486808 nova_compute[259627]: 2025-10-14 09:27:46.102 2 DEBUG oslo_concurrency.lockutils [None req-688e6b5c-b2ea-472d-8149-8bc0fed2274d 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "2595dec0-9170-4e8f-a6bc-9179d30519a9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.900s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:27:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2239: 305 pgs: 305 active+clean; 41 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 22 KiB/s wr, 85 op/s
Oct 14 05:27:46 np0005486808 nova_compute[259627]: 2025-10-14 09:27:46.609 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434051.6084642, 4310595f-2280-438c-97ca-f2de57527501 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:27:46 np0005486808 nova_compute[259627]: 2025-10-14 09:27:46.610 2 INFO nova.compute.manager [-] [instance: 4310595f-2280-438c-97ca-f2de57527501] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:27:46 np0005486808 nova_compute[259627]: 2025-10-14 09:27:46.634 2 DEBUG nova.compute.manager [None req-fdf9de99-a5eb-4653-a433-bef533947476 - - - - - -] [instance: 4310595f-2280-438c-97ca-f2de57527501] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:27:46 np0005486808 nova_compute[259627]: 2025-10-14 09:27:46.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:27:46 np0005486808 nova_compute[259627]: 2025-10-14 09:27:46.977 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:27:46 np0005486808 nova_compute[259627]: 2025-10-14 09:27:46.977 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 05:27:46 np0005486808 nova_compute[259627]: 2025-10-14 09:27:46.999 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 14 05:27:46 np0005486808 nova_compute[259627]: 2025-10-14 09:27:46.999 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:27:47 np0005486808 nova_compute[259627]: 2025-10-14 09:27:46.999 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:27:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:27:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2240: 305 pgs: 305 active+clean; 41 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 9.0 KiB/s wr, 56 op/s
Oct 14 05:27:48 np0005486808 nova_compute[259627]: 2025-10-14 09:27:48.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:49 np0005486808 nova_compute[259627]: 2025-10-14 09:27:49.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:49 np0005486808 nova_compute[259627]: 2025-10-14 09:27:49.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:49 np0005486808 nova_compute[259627]: 2025-10-14 09:27:49.153 2 DEBUG oslo_concurrency.lockutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "7a110a3c-a2ca-4314-a190-28a4505cc26c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:27:49 np0005486808 nova_compute[259627]: 2025-10-14 09:27:49.154 2 DEBUG oslo_concurrency.lockutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "7a110a3c-a2ca-4314-a190-28a4505cc26c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:27:49 np0005486808 nova_compute[259627]: 2025-10-14 09:27:49.171 2 DEBUG nova.compute.manager [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:27:49 np0005486808 nova_compute[259627]: 2025-10-14 09:27:49.273 2 DEBUG oslo_concurrency.lockutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:27:49 np0005486808 nova_compute[259627]: 2025-10-14 09:27:49.274 2 DEBUG oslo_concurrency.lockutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:27:49 np0005486808 nova_compute[259627]: 2025-10-14 09:27:49.283 2 DEBUG nova.virt.hardware [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:27:49 np0005486808 nova_compute[259627]: 2025-10-14 09:27:49.284 2 INFO nova.compute.claims [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:27:49 np0005486808 nova_compute[259627]: 2025-10-14 09:27:49.399 2 DEBUG oslo_concurrency.processutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:27:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:27:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4046509908' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:27:49 np0005486808 nova_compute[259627]: 2025-10-14 09:27:49.919 2 DEBUG oslo_concurrency.processutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:27:49 np0005486808 nova_compute[259627]: 2025-10-14 09:27:49.929 2 DEBUG nova.compute.provider_tree [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:27:49 np0005486808 nova_compute[259627]: 2025-10-14 09:27:49.945 2 DEBUG nova.scheduler.client.report [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:27:49 np0005486808 nova_compute[259627]: 2025-10-14 09:27:49.967 2 DEBUG oslo_concurrency.lockutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:27:49 np0005486808 nova_compute[259627]: 2025-10-14 09:27:49.968 2 DEBUG nova.compute.manager [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:27:50 np0005486808 nova_compute[259627]: 2025-10-14 09:27:50.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:50 np0005486808 nova_compute[259627]: 2025-10-14 09:27:50.015 2 DEBUG nova.compute.manager [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:27:50 np0005486808 nova_compute[259627]: 2025-10-14 09:27:50.016 2 DEBUG nova.network.neutron [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:27:50 np0005486808 nova_compute[259627]: 2025-10-14 09:27:50.034 2 INFO nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:27:50 np0005486808 nova_compute[259627]: 2025-10-14 09:27:50.052 2 DEBUG nova.compute.manager [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:27:50 np0005486808 nova_compute[259627]: 2025-10-14 09:27:50.138 2 DEBUG nova.compute.manager [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:27:50 np0005486808 nova_compute[259627]: 2025-10-14 09:27:50.139 2 DEBUG nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:27:50 np0005486808 nova_compute[259627]: 2025-10-14 09:27:50.140 2 INFO nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Creating image(s)#033[00m
Oct 14 05:27:50 np0005486808 nova_compute[259627]: 2025-10-14 09:27:50.177 2 DEBUG nova.storage.rbd_utils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 7a110a3c-a2ca-4314-a190-28a4505cc26c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:27:50 np0005486808 nova_compute[259627]: 2025-10-14 09:27:50.214 2 DEBUG nova.storage.rbd_utils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 7a110a3c-a2ca-4314-a190-28a4505cc26c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:27:50 np0005486808 nova_compute[259627]: 2025-10-14 09:27:50.248 2 DEBUG nova.storage.rbd_utils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 7a110a3c-a2ca-4314-a190-28a4505cc26c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:27:50 np0005486808 nova_compute[259627]: 2025-10-14 09:27:50.253 2 DEBUG oslo_concurrency.processutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:27:50 np0005486808 nova_compute[259627]: 2025-10-14 09:27:50.338 2 DEBUG nova.policy [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '20f3546ab30e42b5b641f67780316750', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f754bf649a2404fa8dee732f5aab36e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:27:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2241: 305 pgs: 305 active+clean; 41 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 9.0 KiB/s wr, 56 op/s
Oct 14 05:27:50 np0005486808 nova_compute[259627]: 2025-10-14 09:27:50.361 2 DEBUG oslo_concurrency.processutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:27:50 np0005486808 nova_compute[259627]: 2025-10-14 09:27:50.362 2 DEBUG oslo_concurrency.lockutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:27:50 np0005486808 nova_compute[259627]: 2025-10-14 09:27:50.363 2 DEBUG oslo_concurrency.lockutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:27:50 np0005486808 nova_compute[259627]: 2025-10-14 09:27:50.363 2 DEBUG oslo_concurrency.lockutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:27:50 np0005486808 nova_compute[259627]: 2025-10-14 09:27:50.396 2 DEBUG nova.storage.rbd_utils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 7a110a3c-a2ca-4314-a190-28a4505cc26c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:27:50 np0005486808 nova_compute[259627]: 2025-10-14 09:27:50.400 2 DEBUG oslo_concurrency.processutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 7a110a3c-a2ca-4314-a190-28a4505cc26c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:27:50 np0005486808 nova_compute[259627]: 2025-10-14 09:27:50.691 2 DEBUG oslo_concurrency.processutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 7a110a3c-a2ca-4314-a190-28a4505cc26c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.291s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:27:50 np0005486808 nova_compute[259627]: 2025-10-14 09:27:50.749 2 DEBUG nova.storage.rbd_utils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] resizing rbd image 7a110a3c-a2ca-4314-a190-28a4505cc26c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:27:50 np0005486808 nova_compute[259627]: 2025-10-14 09:27:50.837 2 DEBUG nova.objects.instance [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'migration_context' on Instance uuid 7a110a3c-a2ca-4314-a190-28a4505cc26c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:27:50 np0005486808 nova_compute[259627]: 2025-10-14 09:27:50.859 2 DEBUG nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:27:50 np0005486808 nova_compute[259627]: 2025-10-14 09:27:50.860 2 DEBUG nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Ensure instance console log exists: /var/lib/nova/instances/7a110a3c-a2ca-4314-a190-28a4505cc26c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:27:50 np0005486808 nova_compute[259627]: 2025-10-14 09:27:50.861 2 DEBUG oslo_concurrency.lockutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:27:50 np0005486808 nova_compute[259627]: 2025-10-14 09:27:50.861 2 DEBUG oslo_concurrency.lockutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:27:50 np0005486808 nova_compute[259627]: 2025-10-14 09:27:50.862 2 DEBUG oslo_concurrency.lockutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:27:50 np0005486808 nova_compute[259627]: 2025-10-14 09:27:50.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:27:51 np0005486808 nova_compute[259627]: 2025-10-14 09:27:51.638 2 DEBUG nova.network.neutron [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Successfully created port: 3fc32773-5083-4341-9838-5282b7963f56 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:27:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2242: 305 pgs: 305 active+clean; 41 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 6.3 KiB/s wr, 42 op/s
Oct 14 05:27:52 np0005486808 podman[393624]: 2025-10-14 09:27:52.673493633 +0000 UTC m=+0.077182328 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 14 05:27:52 np0005486808 podman[393623]: 2025-10-14 09:27:52.707513524 +0000 UTC m=+0.109376704 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009)
Oct 14 05:27:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:27:53 np0005486808 nova_compute[259627]: 2025-10-14 09:27:53.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:54 np0005486808 nova_compute[259627]: 2025-10-14 09:27:54.162 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434059.1603062, 1b7f03a3-6b73-478f-bf13-cf062714faef => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:27:54 np0005486808 nova_compute[259627]: 2025-10-14 09:27:54.163 2 INFO nova.compute.manager [-] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:27:54 np0005486808 nova_compute[259627]: 2025-10-14 09:27:54.192 2 DEBUG nova.compute.manager [None req-da76d5bc-a370-4b1f-94bf-2799d894a075 - - - - - -] [instance: 1b7f03a3-6b73-478f-bf13-cf062714faef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:27:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2243: 305 pgs: 305 active+clean; 41 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 05:27:54 np0005486808 nova_compute[259627]: 2025-10-14 09:27:54.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:27:55 np0005486808 nova_compute[259627]: 2025-10-14 09:27:55.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:27:55 np0005486808 nova_compute[259627]: 2025-10-14 09:27:55.602 2 DEBUG nova.network.neutron [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Successfully updated port: 3fc32773-5083-4341-9838-5282b7963f56 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:27:55 np0005486808 nova_compute[259627]: 2025-10-14 09:27:55.616 2 DEBUG oslo_concurrency.lockutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "refresh_cache-7a110a3c-a2ca-4314-a190-28a4505cc26c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:27:55 np0005486808 nova_compute[259627]: 2025-10-14 09:27:55.616 2 DEBUG oslo_concurrency.lockutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquired lock "refresh_cache-7a110a3c-a2ca-4314-a190-28a4505cc26c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:27:55 np0005486808 nova_compute[259627]: 2025-10-14 09:27:55.616 2 DEBUG nova.network.neutron [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:27:55 np0005486808 nova_compute[259627]: 2025-10-14 09:27:55.709 2 DEBUG nova.compute.manager [req-703d802a-d5c3-4e59-ad6c-53a38e327ebd req-06e472e5-13c6-42ab-a82c-55cf9de752e6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Received event network-changed-3fc32773-5083-4341-9838-5282b7963f56 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:27:55 np0005486808 nova_compute[259627]: 2025-10-14 09:27:55.709 2 DEBUG nova.compute.manager [req-703d802a-d5c3-4e59-ad6c-53a38e327ebd req-06e472e5-13c6-42ab-a82c-55cf9de752e6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Refreshing instance network info cache due to event network-changed-3fc32773-5083-4341-9838-5282b7963f56. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:27:55 np0005486808 nova_compute[259627]: 2025-10-14 09:27:55.710 2 DEBUG oslo_concurrency.lockutils [req-703d802a-d5c3-4e59-ad6c-53a38e327ebd req-06e472e5-13c6-42ab-a82c-55cf9de752e6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-7a110a3c-a2ca-4314-a190-28a4505cc26c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:27:55 np0005486808 nova_compute[259627]: 2025-10-14 09:27:55.804 2 DEBUG nova.network.neutron [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:27:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2244: 305 pgs: 305 active+clean; 88 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Oct 14 05:27:56 np0005486808 nova_compute[259627]: 2025-10-14 09:27:56.851 2 DEBUG nova.network.neutron [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Updating instance_info_cache with network_info: [{"id": "3fc32773-5083-4341-9838-5282b7963f56", "address": "fa:16:3e:b3:5d:72", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fc32773-50", "ovs_interfaceid": "3fc32773-5083-4341-9838-5282b7963f56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:27:56 np0005486808 nova_compute[259627]: 2025-10-14 09:27:56.888 2 DEBUG oslo_concurrency.lockutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Releasing lock "refresh_cache-7a110a3c-a2ca-4314-a190-28a4505cc26c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:27:56 np0005486808 nova_compute[259627]: 2025-10-14 09:27:56.888 2 DEBUG nova.compute.manager [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Instance network_info: |[{"id": "3fc32773-5083-4341-9838-5282b7963f56", "address": "fa:16:3e:b3:5d:72", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fc32773-50", "ovs_interfaceid": "3fc32773-5083-4341-9838-5282b7963f56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:27:56 np0005486808 nova_compute[259627]: 2025-10-14 09:27:56.889 2 DEBUG oslo_concurrency.lockutils [req-703d802a-d5c3-4e59-ad6c-53a38e327ebd req-06e472e5-13c6-42ab-a82c-55cf9de752e6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-7a110a3c-a2ca-4314-a190-28a4505cc26c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:27:56 np0005486808 nova_compute[259627]: 2025-10-14 09:27:56.889 2 DEBUG nova.network.neutron [req-703d802a-d5c3-4e59-ad6c-53a38e327ebd req-06e472e5-13c6-42ab-a82c-55cf9de752e6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Refreshing network info cache for port 3fc32773-5083-4341-9838-5282b7963f56 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:27:56 np0005486808 nova_compute[259627]: 2025-10-14 09:27:56.894 2 DEBUG nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Start _get_guest_xml network_info=[{"id": "3fc32773-5083-4341-9838-5282b7963f56", "address": "fa:16:3e:b3:5d:72", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fc32773-50", "ovs_interfaceid": "3fc32773-5083-4341-9838-5282b7963f56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:27:56 np0005486808 nova_compute[259627]: 2025-10-14 09:27:56.900 2 WARNING nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:27:56 np0005486808 nova_compute[259627]: 2025-10-14 09:27:56.906 2 DEBUG nova.virt.libvirt.host [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:27:56 np0005486808 nova_compute[259627]: 2025-10-14 09:27:56.907 2 DEBUG nova.virt.libvirt.host [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:27:56 np0005486808 nova_compute[259627]: 2025-10-14 09:27:56.915 2 DEBUG nova.virt.libvirt.host [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:27:56 np0005486808 nova_compute[259627]: 2025-10-14 09:27:56.916 2 DEBUG nova.virt.libvirt.host [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:27:56 np0005486808 nova_compute[259627]: 2025-10-14 09:27:56.917 2 DEBUG nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:27:56 np0005486808 nova_compute[259627]: 2025-10-14 09:27:56.917 2 DEBUG nova.virt.hardware [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:27:56 np0005486808 nova_compute[259627]: 2025-10-14 09:27:56.918 2 DEBUG nova.virt.hardware [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:27:56 np0005486808 nova_compute[259627]: 2025-10-14 09:27:56.919 2 DEBUG nova.virt.hardware [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:27:56 np0005486808 nova_compute[259627]: 2025-10-14 09:27:56.919 2 DEBUG nova.virt.hardware [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:27:56 np0005486808 nova_compute[259627]: 2025-10-14 09:27:56.920 2 DEBUG nova.virt.hardware [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:27:56 np0005486808 nova_compute[259627]: 2025-10-14 09:27:56.920 2 DEBUG nova.virt.hardware [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:27:56 np0005486808 nova_compute[259627]: 2025-10-14 09:27:56.921 2 DEBUG nova.virt.hardware [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:27:56 np0005486808 nova_compute[259627]: 2025-10-14 09:27:56.921 2 DEBUG nova.virt.hardware [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:27:56 np0005486808 nova_compute[259627]: 2025-10-14 09:27:56.922 2 DEBUG nova.virt.hardware [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:27:56 np0005486808 nova_compute[259627]: 2025-10-14 09:27:56.922 2 DEBUG nova.virt.hardware [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:27:56 np0005486808 nova_compute[259627]: 2025-10-14 09:27:56.923 2 DEBUG nova.virt.hardware [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:27:56 np0005486808 nova_compute[259627]: 2025-10-14 09:27:56.928 2 DEBUG oslo_concurrency.processutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:27:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:27:57 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2803485620' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:27:57 np0005486808 nova_compute[259627]: 2025-10-14 09:27:57.432 2 DEBUG oslo_concurrency.processutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:27:57 np0005486808 nova_compute[259627]: 2025-10-14 09:27:57.460 2 DEBUG nova.storage.rbd_utils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 7a110a3c-a2ca-4314-a190-28a4505cc26c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:27:57 np0005486808 nova_compute[259627]: 2025-10-14 09:27:57.465 2 DEBUG oslo_concurrency.processutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:27:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:27:57 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3713904424' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:27:57 np0005486808 nova_compute[259627]: 2025-10-14 09:27:57.931 2 DEBUG oslo_concurrency.processutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:27:57 np0005486808 nova_compute[259627]: 2025-10-14 09:27:57.933 2 DEBUG nova.virt.libvirt.vif [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:27:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-1879326167',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-1879326167',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ac',id=130,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFHCa/sppIAnQAQkdZ/1Ef+DTNecrrS5rhEQtzPrxlFpoBPEHdQG0Rr7ARx/izPzPbjo5rPoXWm8ksRfiQ+ieFxBsihNkHMZrgGUaPkR43e+YFaxNpM2eAk15miiuIGX3Q==',key_name='tempest-TestSecurityGroupsBasicOps-1939533280',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-tkp1b0io',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:27:50Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=7a110a3c-a2ca-4314-a190-28a4505cc26c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3fc32773-5083-4341-9838-5282b7963f56", "address": "fa:16:3e:b3:5d:72", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fc32773-50", "ovs_interfaceid": "3fc32773-5083-4341-9838-5282b7963f56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 14 05:27:57 np0005486808 nova_compute[259627]: 2025-10-14 09:27:57.934 2 DEBUG nova.network.os_vif_util [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "3fc32773-5083-4341-9838-5282b7963f56", "address": "fa:16:3e:b3:5d:72", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fc32773-50", "ovs_interfaceid": "3fc32773-5083-4341-9838-5282b7963f56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 05:27:57 np0005486808 nova_compute[259627]: 2025-10-14 09:27:57.935 2 DEBUG nova.network.os_vif_util [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:5d:72,bridge_name='br-int',has_traffic_filtering=True,id=3fc32773-5083-4341-9838-5282b7963f56,network=Network(f09a704d-6063-4e40-b690-c967cd364b32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fc32773-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 05:27:57 np0005486808 nova_compute[259627]: 2025-10-14 09:27:57.936 2 DEBUG nova.objects.instance [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'pci_devices' on Instance uuid 7a110a3c-a2ca-4314-a190-28a4505cc26c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:27:57 np0005486808 nova_compute[259627]: 2025-10-14 09:27:57.955 2 DEBUG nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:27:57 np0005486808 nova_compute[259627]:  <uuid>7a110a3c-a2ca-4314-a190-28a4505cc26c</uuid>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:  <name>instance-00000082</name>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:27:57 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-1879326167</nova:name>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:27:56</nova:creationTime>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:27:57 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:        <nova:user uuid="20f3546ab30e42b5b641f67780316750">tempest-TestSecurityGroupsBasicOps-1327646173-project-member</nova:user>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:        <nova:project uuid="5f754bf649a2404fa8dee732f5aab36e">tempest-TestSecurityGroupsBasicOps-1327646173</nova:project>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:        <nova:port uuid="3fc32773-5083-4341-9838-5282b7963f56">
Oct 14 05:27:57 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:      <entry name="serial">7a110a3c-a2ca-4314-a190-28a4505cc26c</entry>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:      <entry name="uuid">7a110a3c-a2ca-4314-a190-28a4505cc26c</entry>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:27:57 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/7a110a3c-a2ca-4314-a190-28a4505cc26c_disk">
Oct 14 05:27:57 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:27:57 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:27:57 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/7a110a3c-a2ca-4314-a190-28a4505cc26c_disk.config">
Oct 14 05:27:57 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:27:57 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:27:57 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:b3:5d:72"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:      <target dev="tap3fc32773-50"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:27:57 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/7a110a3c-a2ca-4314-a190-28a4505cc26c/console.log" append="off"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:27:57 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:27:57 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:27:57 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:27:57 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:27:57 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 05:27:57 np0005486808 nova_compute[259627]: 2025-10-14 09:27:57.957 2 DEBUG nova.compute.manager [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Preparing to wait for external event network-vif-plugged-3fc32773-5083-4341-9838-5282b7963f56 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 14 05:27:57 np0005486808 nova_compute[259627]: 2025-10-14 09:27:57.958 2 DEBUG oslo_concurrency.lockutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "7a110a3c-a2ca-4314-a190-28a4505cc26c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:27:57 np0005486808 nova_compute[259627]: 2025-10-14 09:27:57.958 2 DEBUG oslo_concurrency.lockutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "7a110a3c-a2ca-4314-a190-28a4505cc26c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:27:57 np0005486808 nova_compute[259627]: 2025-10-14 09:27:57.959 2 DEBUG oslo_concurrency.lockutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "7a110a3c-a2ca-4314-a190-28a4505cc26c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:27:57 np0005486808 nova_compute[259627]: 2025-10-14 09:27:57.960 2 DEBUG nova.virt.libvirt.vif [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:27:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-1879326167',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-1879326167',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ac',id=130,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFHCa/sppIAnQAQkdZ/1Ef+DTNecrrS5rhEQtzPrxlFpoBPEHdQG0Rr7ARx/izPzPbjo5rPoXWm8ksRfiQ+ieFxBsihNkHMZrgGUaPkR43e+YFaxNpM2eAk15miiuIGX3Q==',key_name='tempest-TestSecurityGroupsBasicOps-1939533280',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-tkp1b0io',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:27:50Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=7a110a3c-a2ca-4314-a190-28a4505cc26c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3fc32773-5083-4341-9838-5282b7963f56", "address": "fa:16:3e:b3:5d:72", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fc32773-50", "ovs_interfaceid": "3fc32773-5083-4341-9838-5282b7963f56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 14 05:27:57 np0005486808 nova_compute[259627]: 2025-10-14 09:27:57.961 2 DEBUG nova.network.os_vif_util [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "3fc32773-5083-4341-9838-5282b7963f56", "address": "fa:16:3e:b3:5d:72", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fc32773-50", "ovs_interfaceid": "3fc32773-5083-4341-9838-5282b7963f56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 05:27:57 np0005486808 nova_compute[259627]: 2025-10-14 09:27:57.962 2 DEBUG nova.network.os_vif_util [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:5d:72,bridge_name='br-int',has_traffic_filtering=True,id=3fc32773-5083-4341-9838-5282b7963f56,network=Network(f09a704d-6063-4e40-b690-c967cd364b32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fc32773-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 05:27:57 np0005486808 nova_compute[259627]: 2025-10-14 09:27:57.962 2 DEBUG os_vif [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:5d:72,bridge_name='br-int',has_traffic_filtering=True,id=3fc32773-5083-4341-9838-5282b7963f56,network=Network(f09a704d-6063-4e40-b690-c967cd364b32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fc32773-50') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 14 05:27:57 np0005486808 nova_compute[259627]: 2025-10-14 09:27:57.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:27:57 np0005486808 nova_compute[259627]: 2025-10-14 09:27:57.964 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:27:57 np0005486808 nova_compute[259627]: 2025-10-14 09:27:57.965 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 05:27:57 np0005486808 nova_compute[259627]: 2025-10-14 09:27:57.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:27:57 np0005486808 nova_compute[259627]: 2025-10-14 09:27:57.970 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3fc32773-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:27:57 np0005486808 nova_compute[259627]: 2025-10-14 09:27:57.971 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3fc32773-50, col_values=(('external_ids', {'iface-id': '3fc32773-5083-4341-9838-5282b7963f56', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b3:5d:72', 'vm-uuid': '7a110a3c-a2ca-4314-a190-28a4505cc26c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:28:58 np0005486808 nova_compute[259627]: 2025-10-14 09:27:58.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:27:58 np0005486808 NetworkManager[44885]: <info>  [1760434078.0169] manager: (tap3fc32773-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/564)
Oct 14 05:27:58 np0005486808 nova_compute[259627]: 2025-10-14 09:27:58.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 05:27:58 np0005486808 nova_compute[259627]: 2025-10-14 09:27:58.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:27:58 np0005486808 nova_compute[259627]: 2025-10-14 09:27:58.025 2 INFO os_vif [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:5d:72,bridge_name='br-int',has_traffic_filtering=True,id=3fc32773-5083-4341-9838-5282b7963f56,network=Network(f09a704d-6063-4e40-b690-c967cd364b32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fc32773-50')
Oct 14 05:27:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:27:58 np0005486808 nova_compute[259627]: 2025-10-14 09:27:58.188 2 DEBUG nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 05:27:58 np0005486808 nova_compute[259627]: 2025-10-14 09:27:58.189 2 DEBUG nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 05:27:58 np0005486808 nova_compute[259627]: 2025-10-14 09:27:58.189 2 DEBUG nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No VIF found with MAC fa:16:3e:b3:5d:72, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 05:27:58 np0005486808 nova_compute[259627]: 2025-10-14 09:27:58.191 2 INFO nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Using config drive
Oct 14 05:27:58 np0005486808 nova_compute[259627]: 2025-10-14 09:27:58.222 2 DEBUG nova.storage.rbd_utils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 7a110a3c-a2ca-4314-a190-28a4505cc26c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:27:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2245: 305 pgs: 305 active+clean; 88 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:27:58 np0005486808 nova_compute[259627]: 2025-10-14 09:27:58.472 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434063.4699454, 2595dec0-9170-4e8f-a6bc-9179d30519a9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:27:58 np0005486808 nova_compute[259627]: 2025-10-14 09:27:58.472 2 INFO nova.compute.manager [-] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] VM Stopped (Lifecycle Event)
Oct 14 05:27:58 np0005486808 nova_compute[259627]: 2025-10-14 09:27:58.495 2 DEBUG nova.compute.manager [None req-a608c315-6725-4177-bc31-79de8ce5e48b - - - - - -] [instance: 2595dec0-9170-4e8f-a6bc-9179d30519a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:27:59 np0005486808 nova_compute[259627]: 2025-10-14 09:27:59.726 2 INFO nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Creating config drive at /var/lib/nova/instances/7a110a3c-a2ca-4314-a190-28a4505cc26c/disk.config
Oct 14 05:27:59 np0005486808 nova_compute[259627]: 2025-10-14 09:27:59.734 2 DEBUG oslo_concurrency.processutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7a110a3c-a2ca-4314-a190-28a4505cc26c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbwbaw7l6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:27:59 np0005486808 nova_compute[259627]: 2025-10-14 09:27:59.903 2 DEBUG oslo_concurrency.processutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7a110a3c-a2ca-4314-a190-28a4505cc26c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbwbaw7l6" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:27:59 np0005486808 nova_compute[259627]: 2025-10-14 09:27:59.945 2 DEBUG nova.storage.rbd_utils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 7a110a3c-a2ca-4314-a190-28a4505cc26c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:27:59 np0005486808 nova_compute[259627]: 2025-10-14 09:27:59.951 2 DEBUG oslo_concurrency.processutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7a110a3c-a2ca-4314-a190-28a4505cc26c/disk.config 7a110a3c-a2ca-4314-a190-28a4505cc26c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:28:00 np0005486808 nova_compute[259627]: 2025-10-14 09:28:00.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:28:00 np0005486808 nova_compute[259627]: 2025-10-14 09:28:00.177 2 DEBUG oslo_concurrency.processutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7a110a3c-a2ca-4314-a190-28a4505cc26c/disk.config 7a110a3c-a2ca-4314-a190-28a4505cc26c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.227s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:28:00 np0005486808 nova_compute[259627]: 2025-10-14 09:28:00.179 2 INFO nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Deleting local config drive /var/lib/nova/instances/7a110a3c-a2ca-4314-a190-28a4505cc26c/disk.config because it was imported into RBD.#033[00m
Oct 14 05:28:00 np0005486808 kernel: tap3fc32773-50: entered promiscuous mode
Oct 14 05:28:00 np0005486808 NetworkManager[44885]: <info>  [1760434080.2499] manager: (tap3fc32773-50): new Tun device (/org/freedesktop/NetworkManager/Devices/565)
Oct 14 05:28:00 np0005486808 ovn_controller[152662]: 2025-10-14T09:28:00Z|01393|binding|INFO|Claiming lport 3fc32773-5083-4341-9838-5282b7963f56 for this chassis.
Oct 14 05:28:00 np0005486808 nova_compute[259627]: 2025-10-14 09:28:00.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:28:00 np0005486808 ovn_controller[152662]: 2025-10-14T09:28:00Z|01394|binding|INFO|3fc32773-5083-4341-9838-5282b7963f56: Claiming fa:16:3e:b3:5d:72 10.100.0.12
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.265 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:5d:72 10.100.0.12'], port_security=['fa:16:3e:b3:5d:72 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '7a110a3c-a2ca-4314-a190-28a4505cc26c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f09a704d-6063-4e40-b690-c967cd364b32', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f754bf649a2404fa8dee732f5aab36e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '24af2eac-35ae-4c02-b261-8fe378764631 e76d2fee-d8c5-45a1-ac1f-55a35976452c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1b90ca07-80d1-49c1-a91f-225f989dd9c6, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=3fc32773-5083-4341-9838-5282b7963f56) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.266 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 3fc32773-5083-4341-9838-5282b7963f56 in datapath f09a704d-6063-4e40-b690-c967cd364b32 bound to our chassis#033[00m
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.267 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f09a704d-6063-4e40-b690-c967cd364b32#033[00m
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.285 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9b88e589-d7d3-4b76-ab1b-3247b426fff8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.286 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf09a704d-61 in ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.291 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf09a704d-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.291 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[065fa1c6-b062-4576-b323-83e9be0333a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.292 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4bb0baec-6a77-45fc-b511-7f99b13813c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.311 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[d48437d5-df4a-4b45-8dab-d1677ad65163]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:00 np0005486808 systemd-udevd[393807]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:28:00 np0005486808 systemd-machined[214636]: New machine qemu-163-instance-00000082.
Oct 14 05:28:00 np0005486808 NetworkManager[44885]: <info>  [1760434080.3379] device (tap3fc32773-50): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:28:00 np0005486808 NetworkManager[44885]: <info>  [1760434080.3389] device (tap3fc32773-50): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.344 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f5d8138b-a0b3-4411-ac63-a78a0821ff69]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:00 np0005486808 systemd[1]: Started Virtual Machine qemu-163-instance-00000082.
Oct 14 05:28:00 np0005486808 nova_compute[259627]: 2025-10-14 09:28:00.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:28:00 np0005486808 ovn_controller[152662]: 2025-10-14T09:28:00Z|01395|binding|INFO|Setting lport 3fc32773-5083-4341-9838-5282b7963f56 ovn-installed in OVS
Oct 14 05:28:00 np0005486808 ovn_controller[152662]: 2025-10-14T09:28:00Z|01396|binding|INFO|Setting lport 3fc32773-5083-4341-9838-5282b7963f56 up in Southbound
Oct 14 05:28:00 np0005486808 nova_compute[259627]: 2025-10-14 09:28:00.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:28:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2246: 305 pgs: 305 active+clean; 88 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.380 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[2ea7269d-61ba-4529-8d03-8f54f94a4ac2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:00 np0005486808 NetworkManager[44885]: <info>  [1760434080.3872] manager: (tapf09a704d-60): new Veth device (/org/freedesktop/NetworkManager/Devices/566)
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.386 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ee463a75-2fa7-4a94-b715-78958daa5af6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.427 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[466ada42-c8f4-4eb9-a145-b9734b8fe9c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.431 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[454e5323-383f-411d-9caf-5515472b1daf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:00 np0005486808 NetworkManager[44885]: <info>  [1760434080.4630] device (tapf09a704d-60): carrier: link connected
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.471 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ae41c2c4-812c-43b5-8ab8-9e381b0347fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.503 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b08aa3ba-8ed1-42ea-a9d4-804a070991f1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf09a704d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:f0:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 397], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 785922, 'reachable_time': 22812, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 393838, 'error': None, 'target': 'ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.529 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[020d32c4-fc09-4246-9248-bcbdd19ed59d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe76:f0e3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 785922, 'tstamp': 785922}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 393839, 'error': None, 'target': 'ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.561 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5f35d808-a63a-4256-93fe-e2cc8fe4490b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf09a704d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:f0:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 397], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 785922, 'reachable_time': 22812, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 393840, 'error': None, 'target': 'ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.610 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[377be5ec-b0cd-4e11-ab1a-f156c1fd316f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.695 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1052fa53-14ad-4435-8857-2dce6e4cbb46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.697 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf09a704d-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.697 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.697 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf09a704d-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:28:00 np0005486808 NetworkManager[44885]: <info>  [1760434080.7018] manager: (tapf09a704d-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/567)
Oct 14 05:28:00 np0005486808 nova_compute[259627]: 2025-10-14 09:28:00.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:28:00 np0005486808 kernel: tapf09a704d-60: entered promiscuous mode
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.707 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf09a704d-60, col_values=(('external_ids', {'iface-id': 'e4065da2-8191-4cbc-a6ed-0505ac5ea1c6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:28:00 np0005486808 ovn_controller[152662]: 2025-10-14T09:28:00Z|01397|binding|INFO|Releasing lport e4065da2-8191-4cbc-a6ed-0505ac5ea1c6 from this chassis (sb_readonly=0)
Oct 14 05:28:00 np0005486808 nova_compute[259627]: 2025-10-14 09:28:00.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:28:00 np0005486808 nova_compute[259627]: 2025-10-14 09:28:00.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.744 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f09a704d-6063-4e40-b690-c967cd364b32.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f09a704d-6063-4e40-b690-c967cd364b32.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.745 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c59de69a-0bec-428e-bfa4-8aa04d5003ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.746 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-f09a704d-6063-4e40-b690-c967cd364b32
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/f09a704d-6063-4e40-b690-c967cd364b32.pid.haproxy
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID f09a704d-6063-4e40-b690-c967cd364b32
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:28:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:00.747 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32', 'env', 'PROCESS_TAG=haproxy-f09a704d-6063-4e40-b690-c967cd364b32', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f09a704d-6063-4e40-b690-c967cd364b32.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:28:00 np0005486808 nova_compute[259627]: 2025-10-14 09:28:00.860 2 DEBUG nova.network.neutron [req-703d802a-d5c3-4e59-ad6c-53a38e327ebd req-06e472e5-13c6-42ab-a82c-55cf9de752e6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Updated VIF entry in instance network info cache for port 3fc32773-5083-4341-9838-5282b7963f56. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:28:00 np0005486808 nova_compute[259627]: 2025-10-14 09:28:00.861 2 DEBUG nova.network.neutron [req-703d802a-d5c3-4e59-ad6c-53a38e327ebd req-06e472e5-13c6-42ab-a82c-55cf9de752e6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Updating instance_info_cache with network_info: [{"id": "3fc32773-5083-4341-9838-5282b7963f56", "address": "fa:16:3e:b3:5d:72", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fc32773-50", "ovs_interfaceid": "3fc32773-5083-4341-9838-5282b7963f56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:28:00 np0005486808 nova_compute[259627]: 2025-10-14 09:28:00.880 2 DEBUG oslo_concurrency.lockutils [req-703d802a-d5c3-4e59-ad6c-53a38e327ebd req-06e472e5-13c6-42ab-a82c-55cf9de752e6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-7a110a3c-a2ca-4314-a190-28a4505cc26c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:28:01 np0005486808 podman[393914]: 2025-10-14 09:28:01.189946312 +0000 UTC m=+0.069233002 container create a90386e53be92dbf5da11f0527735f8ae4714154683cfbfa50002c49341a67b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:28:01 np0005486808 systemd[1]: Started libpod-conmon-a90386e53be92dbf5da11f0527735f8ae4714154683cfbfa50002c49341a67b8.scope.
Oct 14 05:28:01 np0005486808 podman[393914]: 2025-10-14 09:28:01.148551329 +0000 UTC m=+0.027838069 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:28:01 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:28:01 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98a05a81ea8f0e7b8861af2cf251020d339678d913c2f8ee38478cf7f1b84037/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:28:01 np0005486808 podman[393914]: 2025-10-14 09:28:01.283003781 +0000 UTC m=+0.162290521 container init a90386e53be92dbf5da11f0527735f8ae4714154683cfbfa50002c49341a67b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 05:28:01 np0005486808 podman[393914]: 2025-10-14 09:28:01.29387034 +0000 UTC m=+0.173157050 container start a90386e53be92dbf5da11f0527735f8ae4714154683cfbfa50002c49341a67b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 14 05:28:01 np0005486808 neutron-haproxy-ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32[393930]: [NOTICE]   (393934) : New worker (393936) forked
Oct 14 05:28:01 np0005486808 neutron-haproxy-ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32[393930]: [NOTICE]   (393934) : Loading success.
Oct 14 05:28:01 np0005486808 nova_compute[259627]: 2025-10-14 09:28:01.479 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434081.478794, 7a110a3c-a2ca-4314-a190-28a4505cc26c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:28:01 np0005486808 nova_compute[259627]: 2025-10-14 09:28:01.479 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] VM Started (Lifecycle Event)#033[00m
Oct 14 05:28:01 np0005486808 nova_compute[259627]: 2025-10-14 09:28:01.505 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:28:01 np0005486808 nova_compute[259627]: 2025-10-14 09:28:01.509 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434081.4789352, 7a110a3c-a2ca-4314-a190-28a4505cc26c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:28:01 np0005486808 nova_compute[259627]: 2025-10-14 09:28:01.509 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:28:01 np0005486808 nova_compute[259627]: 2025-10-14 09:28:01.533 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:28:01 np0005486808 nova_compute[259627]: 2025-10-14 09:28:01.536 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:28:01 np0005486808 nova_compute[259627]: 2025-10-14 09:28:01.555 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:28:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2247: 305 pgs: 305 active+clean; 88 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 14 05:28:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:28:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:28:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:28:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:28:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:28:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:28:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:28:03 np0005486808 nova_compute[259627]: 2025-10-14 09:28:03.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:28:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2248: 305 pgs: 305 active+clean; 88 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 14 05:28:05 np0005486808 nova_compute[259627]: 2025-10-14 09:28:05.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:28:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:28:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2444135089' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:28:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:28:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2444135089' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:28:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2249: 305 pgs: 305 active+clean; 88 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 14 05:28:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:07.043 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:28:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:07.044 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:28:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:07.045 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:28:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:28:08 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Oct 14 05:28:08 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:28:08.036931) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 05:28:08 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Oct 14 05:28:08 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434088037002, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 507, "num_deletes": 256, "total_data_size": 438035, "memory_usage": 447720, "flush_reason": "Manual Compaction"}
Oct 14 05:28:08 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Oct 14 05:28:08 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434088043280, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 433906, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47240, "largest_seqno": 47746, "table_properties": {"data_size": 431072, "index_size": 806, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6682, "raw_average_key_size": 18, "raw_value_size": 425372, "raw_average_value_size": 1175, "num_data_blocks": 35, "num_entries": 362, "num_filter_entries": 362, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760434061, "oldest_key_time": 1760434061, "file_creation_time": 1760434088, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:28:08 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 6400 microseconds, and 3141 cpu microseconds.
Oct 14 05:28:08 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:28:08 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:28:08.043340) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 433906 bytes OK
Oct 14 05:28:08 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:28:08.043365) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Oct 14 05:28:08 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:28:08.045486) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Oct 14 05:28:08 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:28:08.045511) EVENT_LOG_v1 {"time_micros": 1760434088045503, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 05:28:08 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:28:08.045534) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 05:28:08 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 435070, prev total WAL file size 435070, number of live WAL files 2.
Oct 14 05:28:08 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:28:08 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:28:08.046161) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373538' seq:72057594037927935, type:22 .. '6C6F676D0032303130' seq:0, type:0; will stop at (end)
Oct 14 05:28:08 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 05:28:08 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(423KB)], [107(10030KB)]
Oct 14 05:28:08 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434088046213, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 10705064, "oldest_snapshot_seqno": -1}
Oct 14 05:28:08 np0005486808 nova_compute[259627]: 2025-10-14 09:28:08.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:28:08 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 6884 keys, 10570467 bytes, temperature: kUnknown
Oct 14 05:28:08 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434088129228, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 10570467, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10522261, "index_size": 29918, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17221, "raw_key_size": 179261, "raw_average_key_size": 26, "raw_value_size": 10396795, "raw_average_value_size": 1510, "num_data_blocks": 1175, "num_entries": 6884, "num_filter_entries": 6884, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760434088, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:28:08 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:28:08 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:28:08.129517) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 10570467 bytes
Oct 14 05:28:08 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:28:08.130899) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 128.8 rd, 127.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.8 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(49.0) write-amplify(24.4) OK, records in: 7407, records dropped: 523 output_compression: NoCompression
Oct 14 05:28:08 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:28:08.130920) EVENT_LOG_v1 {"time_micros": 1760434088130910, "job": 64, "event": "compaction_finished", "compaction_time_micros": 83101, "compaction_time_cpu_micros": 53027, "output_level": 6, "num_output_files": 1, "total_output_size": 10570467, "num_input_records": 7407, "num_output_records": 6884, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 05:28:08 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:28:08 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434088131199, "job": 64, "event": "table_file_deletion", "file_number": 109}
Oct 14 05:28:08 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:28:08 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434088133651, "job": 64, "event": "table_file_deletion", "file_number": 107}
Oct 14 05:28:08 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:28:08.046070) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:28:08 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:28:08.133821) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:28:08 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:28:08.133832) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:28:08 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:28:08.133836) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:28:08 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:28:08.133840) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:28:08 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:28:08.133844) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:28:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2250: 305 pgs: 305 active+clean; 88 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Oct 14 05:28:09 np0005486808 podman[393945]: 2025-10-14 09:28:09.713680008 +0000 UTC m=+0.119230607 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd)
Oct 14 05:28:09 np0005486808 podman[393946]: 2025-10-14 09:28:09.716430306 +0000 UTC m=+0.118569161 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=iscsid, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 14 05:28:10 np0005486808 nova_compute[259627]: 2025-10-14 09:28:10.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:28:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2251: 305 pgs: 305 active+clean; 88 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Oct 14 05:28:10 np0005486808 nova_compute[259627]: 2025-10-14 09:28:10.994 2 DEBUG nova.compute.manager [req-a405cd60-99bb-498b-b3b9-dca989979056 req-d58e9aa1-644b-42e4-99de-d12a7234e8ee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Received event network-vif-plugged-3fc32773-5083-4341-9838-5282b7963f56 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:28:10 np0005486808 nova_compute[259627]: 2025-10-14 09:28:10.994 2 DEBUG oslo_concurrency.lockutils [req-a405cd60-99bb-498b-b3b9-dca989979056 req-d58e9aa1-644b-42e4-99de-d12a7234e8ee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "7a110a3c-a2ca-4314-a190-28a4505cc26c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:28:10 np0005486808 nova_compute[259627]: 2025-10-14 09:28:10.995 2 DEBUG oslo_concurrency.lockutils [req-a405cd60-99bb-498b-b3b9-dca989979056 req-d58e9aa1-644b-42e4-99de-d12a7234e8ee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "7a110a3c-a2ca-4314-a190-28a4505cc26c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:28:10 np0005486808 nova_compute[259627]: 2025-10-14 09:28:10.995 2 DEBUG oslo_concurrency.lockutils [req-a405cd60-99bb-498b-b3b9-dca989979056 req-d58e9aa1-644b-42e4-99de-d12a7234e8ee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "7a110a3c-a2ca-4314-a190-28a4505cc26c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:28:10 np0005486808 nova_compute[259627]: 2025-10-14 09:28:10.995 2 DEBUG nova.compute.manager [req-a405cd60-99bb-498b-b3b9-dca989979056 req-d58e9aa1-644b-42e4-99de-d12a7234e8ee 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Processing event network-vif-plugged-3fc32773-5083-4341-9838-5282b7963f56 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:28:10 np0005486808 nova_compute[259627]: 2025-10-14 09:28:10.996 2 DEBUG nova.compute.manager [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Instance event wait completed in 9 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:28:11 np0005486808 nova_compute[259627]: 2025-10-14 09:28:10.999 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434090.9992287, 7a110a3c-a2ca-4314-a190-28a4505cc26c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:28:11 np0005486808 nova_compute[259627]: 2025-10-14 09:28:10.999 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:28:11 np0005486808 nova_compute[259627]: 2025-10-14 09:28:11.001 2 DEBUG nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:28:11 np0005486808 nova_compute[259627]: 2025-10-14 09:28:11.005 2 INFO nova.virt.libvirt.driver [-] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Instance spawned successfully.#033[00m
Oct 14 05:28:11 np0005486808 nova_compute[259627]: 2025-10-14 09:28:11.006 2 DEBUG nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:28:11 np0005486808 nova_compute[259627]: 2025-10-14 09:28:11.017 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:28:11 np0005486808 nova_compute[259627]: 2025-10-14 09:28:11.025 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:28:11 np0005486808 nova_compute[259627]: 2025-10-14 09:28:11.029 2 DEBUG nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:28:11 np0005486808 nova_compute[259627]: 2025-10-14 09:28:11.030 2 DEBUG nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:28:11 np0005486808 nova_compute[259627]: 2025-10-14 09:28:11.030 2 DEBUG nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:28:11 np0005486808 nova_compute[259627]: 2025-10-14 09:28:11.031 2 DEBUG nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:28:11 np0005486808 nova_compute[259627]: 2025-10-14 09:28:11.032 2 DEBUG nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:28:11 np0005486808 nova_compute[259627]: 2025-10-14 09:28:11.032 2 DEBUG nova.virt.libvirt.driver [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:28:11 np0005486808 nova_compute[259627]: 2025-10-14 09:28:11.052 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 05:28:11 np0005486808 nova_compute[259627]: 2025-10-14 09:28:11.109 2 INFO nova.compute.manager [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Took 20.97 seconds to spawn the instance on the hypervisor.
Oct 14 05:28:11 np0005486808 nova_compute[259627]: 2025-10-14 09:28:11.109 2 DEBUG nova.compute.manager [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:28:11 np0005486808 nova_compute[259627]: 2025-10-14 09:28:11.174 2 INFO nova.compute.manager [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Took 21.93 seconds to build instance.
Oct 14 05:28:11 np0005486808 nova_compute[259627]: 2025-10-14 09:28:11.189 2 DEBUG oslo_concurrency.lockutils [None req-4a4bf26b-7d7d-4540-8ab5-4ea5323fe178 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "7a110a3c-a2ca-4314-a190-28a4505cc26c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 22.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:28:11 np0005486808 nova_compute[259627]: 2025-10-14 09:28:11.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:28:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2252: 305 pgs: 305 active+clean; 88 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Oct 14 05:28:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:28:13 np0005486808 nova_compute[259627]: 2025-10-14 09:28:13.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:28:13 np0005486808 nova_compute[259627]: 2025-10-14 09:28:13.801 2 DEBUG nova.compute.manager [req-eacc9880-7708-4e6b-a187-d146ea103231 req-25cc89ad-e5e1-4491-bad9-92d8b7dedbd2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Received event network-vif-plugged-3fc32773-5083-4341-9838-5282b7963f56 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:28:13 np0005486808 nova_compute[259627]: 2025-10-14 09:28:13.802 2 DEBUG oslo_concurrency.lockutils [req-eacc9880-7708-4e6b-a187-d146ea103231 req-25cc89ad-e5e1-4491-bad9-92d8b7dedbd2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "7a110a3c-a2ca-4314-a190-28a4505cc26c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:28:13 np0005486808 nova_compute[259627]: 2025-10-14 09:28:13.803 2 DEBUG oslo_concurrency.lockutils [req-eacc9880-7708-4e6b-a187-d146ea103231 req-25cc89ad-e5e1-4491-bad9-92d8b7dedbd2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "7a110a3c-a2ca-4314-a190-28a4505cc26c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:28:13 np0005486808 nova_compute[259627]: 2025-10-14 09:28:13.803 2 DEBUG oslo_concurrency.lockutils [req-eacc9880-7708-4e6b-a187-d146ea103231 req-25cc89ad-e5e1-4491-bad9-92d8b7dedbd2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "7a110a3c-a2ca-4314-a190-28a4505cc26c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:28:13 np0005486808 nova_compute[259627]: 2025-10-14 09:28:13.803 2 DEBUG nova.compute.manager [req-eacc9880-7708-4e6b-a187-d146ea103231 req-25cc89ad-e5e1-4491-bad9-92d8b7dedbd2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] No waiting events found dispatching network-vif-plugged-3fc32773-5083-4341-9838-5282b7963f56 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 05:28:13 np0005486808 nova_compute[259627]: 2025-10-14 09:28:13.804 2 WARNING nova.compute.manager [req-eacc9880-7708-4e6b-a187-d146ea103231 req-25cc89ad-e5e1-4491-bad9-92d8b7dedbd2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Received unexpected event network-vif-plugged-3fc32773-5083-4341-9838-5282b7963f56 for instance with vm_state active and task_state None.
Oct 14 05:28:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2253: 305 pgs: 305 active+clean; 88 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 5.0 KiB/s rd, 341 B/s wr, 6 op/s
Oct 14 05:28:15 np0005486808 nova_compute[259627]: 2025-10-14 09:28:15.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:28:15 np0005486808 nova_compute[259627]: 2025-10-14 09:28:15.247 2 DEBUG oslo_concurrency.lockutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "9314cb71-9b9f-4379-90ba-61445b09c003" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:28:15 np0005486808 nova_compute[259627]: 2025-10-14 09:28:15.249 2 DEBUG oslo_concurrency.lockutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "9314cb71-9b9f-4379-90ba-61445b09c003" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:28:15 np0005486808 nova_compute[259627]: 2025-10-14 09:28:15.268 2 DEBUG nova.compute.manager [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 05:28:15 np0005486808 nova_compute[259627]: 2025-10-14 09:28:15.389 2 DEBUG oslo_concurrency.lockutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:28:15 np0005486808 nova_compute[259627]: 2025-10-14 09:28:15.390 2 DEBUG oslo_concurrency.lockutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:28:15 np0005486808 nova_compute[259627]: 2025-10-14 09:28:15.400 2 DEBUG nova.virt.hardware [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 05:28:15 np0005486808 nova_compute[259627]: 2025-10-14 09:28:15.401 2 INFO nova.compute.claims [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Claim successful on node compute-0.ctlplane.example.com
Oct 14 05:28:15 np0005486808 nova_compute[259627]: 2025-10-14 09:28:15.513 2 DEBUG oslo_concurrency.processutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:28:16 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:28:16 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1863749456' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:28:16 np0005486808 nova_compute[259627]: 2025-10-14 09:28:16.020 2 DEBUG oslo_concurrency.processutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:28:16 np0005486808 nova_compute[259627]: 2025-10-14 09:28:16.029 2 DEBUG nova.compute.provider_tree [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 05:28:16 np0005486808 nova_compute[259627]: 2025-10-14 09:28:16.056 2 DEBUG nova.scheduler.client.report [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 05:28:16 np0005486808 nova_compute[259627]: 2025-10-14 09:28:16.084 2 DEBUG oslo_concurrency.lockutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:28:16 np0005486808 nova_compute[259627]: 2025-10-14 09:28:16.085 2 DEBUG nova.compute.manager [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 14 05:28:16 np0005486808 nova_compute[259627]: 2025-10-14 09:28:16.160 2 DEBUG nova.compute.manager [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 14 05:28:16 np0005486808 nova_compute[259627]: 2025-10-14 09:28:16.161 2 DEBUG nova.network.neutron [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 14 05:28:16 np0005486808 nova_compute[259627]: 2025-10-14 09:28:16.181 2 INFO nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 14 05:28:16 np0005486808 nova_compute[259627]: 2025-10-14 09:28:16.199 2 DEBUG nova.compute.manager [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 14 05:28:16 np0005486808 nova_compute[259627]: 2025-10-14 09:28:16.293 2 DEBUG nova.compute.manager [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 05:28:16 np0005486808 nova_compute[259627]: 2025-10-14 09:28:16.295 2 DEBUG nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 05:28:16 np0005486808 nova_compute[259627]: 2025-10-14 09:28:16.295 2 INFO nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Creating image(s)
Oct 14 05:28:16 np0005486808 nova_compute[259627]: 2025-10-14 09:28:16.325 2 DEBUG nova.storage.rbd_utils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 9314cb71-9b9f-4379-90ba-61445b09c003_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:28:16 np0005486808 nova_compute[259627]: 2025-10-14 09:28:16.356 2 DEBUG nova.storage.rbd_utils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 9314cb71-9b9f-4379-90ba-61445b09c003_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:28:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2254: 305 pgs: 305 active+clean; 88 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 341 B/s wr, 70 op/s
Oct 14 05:28:16 np0005486808 nova_compute[259627]: 2025-10-14 09:28:16.387 2 DEBUG nova.storage.rbd_utils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 9314cb71-9b9f-4379-90ba-61445b09c003_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:28:16 np0005486808 nova_compute[259627]: 2025-10-14 09:28:16.392 2 DEBUG oslo_concurrency.processutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:28:16 np0005486808 nova_compute[259627]: 2025-10-14 09:28:16.475 2 DEBUG oslo_concurrency.processutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:28:16 np0005486808 nova_compute[259627]: 2025-10-14 09:28:16.476 2 DEBUG oslo_concurrency.lockutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:28:16 np0005486808 nova_compute[259627]: 2025-10-14 09:28:16.477 2 DEBUG oslo_concurrency.lockutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:28:16 np0005486808 nova_compute[259627]: 2025-10-14 09:28:16.478 2 DEBUG oslo_concurrency.lockutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:28:16 np0005486808 nova_compute[259627]: 2025-10-14 09:28:16.508 2 DEBUG nova.storage.rbd_utils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 9314cb71-9b9f-4379-90ba-61445b09c003_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:28:16 np0005486808 nova_compute[259627]: 2025-10-14 09:28:16.512 2 DEBUG oslo_concurrency.processutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 9314cb71-9b9f-4379-90ba-61445b09c003_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:28:16 np0005486808 nova_compute[259627]: 2025-10-14 09:28:16.807 2 DEBUG oslo_concurrency.processutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 9314cb71-9b9f-4379-90ba-61445b09c003_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.295s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:28:16 np0005486808 nova_compute[259627]: 2025-10-14 09:28:16.898 2 DEBUG nova.policy [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '10926c27278e45e7b995f2e53b9d16f9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4562699bdb1548f1bb36819107535620', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 05:28:16 np0005486808 nova_compute[259627]: 2025-10-14 09:28:16.908 2 DEBUG nova.storage.rbd_utils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] resizing rbd image 9314cb71-9b9f-4379-90ba-61445b09c003_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 05:28:17 np0005486808 nova_compute[259627]: 2025-10-14 09:28:17.030 2 DEBUG nova.objects.instance [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'migration_context' on Instance uuid 9314cb71-9b9f-4379-90ba-61445b09c003 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:28:17 np0005486808 nova_compute[259627]: 2025-10-14 09:28:17.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:28:17 np0005486808 NetworkManager[44885]: <info>  [1760434097.0462] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/568)
Oct 14 05:28:17 np0005486808 NetworkManager[44885]: <info>  [1760434097.0482] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/569)
Oct 14 05:28:17 np0005486808 nova_compute[259627]: 2025-10-14 09:28:17.055 2 DEBUG nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 05:28:17 np0005486808 nova_compute[259627]: 2025-10-14 09:28:17.056 2 DEBUG nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Ensure instance console log exists: /var/lib/nova/instances/9314cb71-9b9f-4379-90ba-61445b09c003/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 05:28:17 np0005486808 nova_compute[259627]: 2025-10-14 09:28:17.056 2 DEBUG oslo_concurrency.lockutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:28:17 np0005486808 nova_compute[259627]: 2025-10-14 09:28:17.057 2 DEBUG oslo_concurrency.lockutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:28:17 np0005486808 nova_compute[259627]: 2025-10-14 09:28:17.057 2 DEBUG oslo_concurrency.lockutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:28:17 np0005486808 nova_compute[259627]: 2025-10-14 09:28:17.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:28:17 np0005486808 nova_compute[259627]: 2025-10-14 09:28:17.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:28:17 np0005486808 ovn_controller[152662]: 2025-10-14T09:28:17Z|01398|binding|INFO|Releasing lport e4065da2-8191-4cbc-a6ed-0505ac5ea1c6 from this chassis (sb_readonly=0)
Oct 14 05:28:17 np0005486808 nova_compute[259627]: 2025-10-14 09:28:17.592 2 DEBUG nova.compute.manager [req-29bf727f-f556-41a4-bf82-7f51727ed301 req-c8003fb6-2276-4fc4-b8d7-fceba314cccc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Received event network-changed-3fc32773-5083-4341-9838-5282b7963f56 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:28:17 np0005486808 nova_compute[259627]: 2025-10-14 09:28:17.593 2 DEBUG nova.compute.manager [req-29bf727f-f556-41a4-bf82-7f51727ed301 req-c8003fb6-2276-4fc4-b8d7-fceba314cccc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Refreshing instance network info cache due to event network-changed-3fc32773-5083-4341-9838-5282b7963f56. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 05:28:17 np0005486808 nova_compute[259627]: 2025-10-14 09:28:17.594 2 DEBUG oslo_concurrency.lockutils [req-29bf727f-f556-41a4-bf82-7f51727ed301 req-c8003fb6-2276-4fc4-b8d7-fceba314cccc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-7a110a3c-a2ca-4314-a190-28a4505cc26c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 05:28:17 np0005486808 nova_compute[259627]: 2025-10-14 09:28:17.594 2 DEBUG oslo_concurrency.lockutils [req-29bf727f-f556-41a4-bf82-7f51727ed301 req-c8003fb6-2276-4fc4-b8d7-fceba314cccc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-7a110a3c-a2ca-4314-a190-28a4505cc26c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 05:28:17 np0005486808 nova_compute[259627]: 2025-10-14 09:28:17.595 2 DEBUG nova.network.neutron [req-29bf727f-f556-41a4-bf82-7f51727ed301 req-c8003fb6-2276-4fc4-b8d7-fceba314cccc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Refreshing network info cache for port 3fc32773-5083-4341-9838-5282b7963f56 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 05:28:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:28:18 np0005486808 nova_compute[259627]: 2025-10-14 09:28:18.095 2 DEBUG nova.network.neutron [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Successfully created port: 5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 05:28:18 np0005486808 nova_compute[259627]: 2025-10-14 09:28:18.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:28:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2255: 305 pgs: 305 active+clean; 88 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Oct 14 05:28:19 np0005486808 nova_compute[259627]: 2025-10-14 09:28:19.745 2 DEBUG nova.network.neutron [req-29bf727f-f556-41a4-bf82-7f51727ed301 req-c8003fb6-2276-4fc4-b8d7-fceba314cccc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Updated VIF entry in instance network info cache for port 3fc32773-5083-4341-9838-5282b7963f56. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 05:28:19 np0005486808 nova_compute[259627]: 2025-10-14 09:28:19.746 2 DEBUG nova.network.neutron [req-29bf727f-f556-41a4-bf82-7f51727ed301 req-c8003fb6-2276-4fc4-b8d7-fceba314cccc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Updating instance_info_cache with network_info: [{"id": "3fc32773-5083-4341-9838-5282b7963f56", "address": "fa:16:3e:b3:5d:72", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fc32773-50", "ovs_interfaceid": "3fc32773-5083-4341-9838-5282b7963f56", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 05:28:19 np0005486808 nova_compute[259627]: 2025-10-14 09:28:19.772 2 DEBUG oslo_concurrency.lockutils [req-29bf727f-f556-41a4-bf82-7f51727ed301 req-c8003fb6-2276-4fc4-b8d7-fceba314cccc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-7a110a3c-a2ca-4314-a190-28a4505cc26c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 05:28:19 np0005486808 nova_compute[259627]: 2025-10-14 09:28:19.825 2 DEBUG nova.network.neutron [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Successfully updated port: 5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 05:28:19 np0005486808 nova_compute[259627]: 2025-10-14 09:28:19.840 2 DEBUG oslo_concurrency.lockutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "refresh_cache-9314cb71-9b9f-4379-90ba-61445b09c003" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 05:28:19 np0005486808 nova_compute[259627]: 2025-10-14 09:28:19.841 2 DEBUG oslo_concurrency.lockutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquired lock "refresh_cache-9314cb71-9b9f-4379-90ba-61445b09c003" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 05:28:19 np0005486808 nova_compute[259627]: 2025-10-14 09:28:19.841 2 DEBUG nova.network.neutron [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 05:28:19 np0005486808 nova_compute[259627]: 2025-10-14 09:28:19.925 2 DEBUG nova.compute.manager [req-689edcda-d2d1-4344-a989-b3d7fbf9b655 req-cb8ca5eb-c3a1-4bff-95fd-ab74f7916386 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Received event network-changed-5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:28:19 np0005486808 nova_compute[259627]: 2025-10-14 09:28:19.926 2 DEBUG nova.compute.manager [req-689edcda-d2d1-4344-a989-b3d7fbf9b655 req-cb8ca5eb-c3a1-4bff-95fd-ab74f7916386 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Refreshing instance network info cache due to event network-changed-5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 05:28:19 np0005486808 nova_compute[259627]: 2025-10-14 09:28:19.926 2 DEBUG oslo_concurrency.lockutils [req-689edcda-d2d1-4344-a989-b3d7fbf9b655 req-cb8ca5eb-c3a1-4bff-95fd-ab74f7916386 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-9314cb71-9b9f-4379-90ba-61445b09c003" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 05:28:19 np0005486808 nova_compute[259627]: 2025-10-14 09:28:19.986 2 DEBUG nova.network.neutron [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:28:20 np0005486808 nova_compute[259627]: 2025-10-14 09:28:20.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:28:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2256: 305 pgs: 305 active+clean; 104 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 421 KiB/s wr, 64 op/s
Oct 14 05:28:21 np0005486808 nova_compute[259627]: 2025-10-14 09:28:21.269 2 DEBUG nova.network.neutron [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Updating instance_info_cache with network_info: [{"id": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "address": "fa:16:3e:a9:0f:a6", "network": {"id": "0ad5af7c-fbad-46b1-979b-db7d2639a7c3", "bridge": "br-int", "label": "tempest-network-smoke--2010721578", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a63e80f-6b", "ovs_interfaceid": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:28:21 np0005486808 nova_compute[259627]: 2025-10-14 09:28:21.307 2 DEBUG oslo_concurrency.lockutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Releasing lock "refresh_cache-9314cb71-9b9f-4379-90ba-61445b09c003" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:28:21 np0005486808 nova_compute[259627]: 2025-10-14 09:28:21.308 2 DEBUG nova.compute.manager [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Instance network_info: |[{"id": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "address": "fa:16:3e:a9:0f:a6", "network": {"id": "0ad5af7c-fbad-46b1-979b-db7d2639a7c3", "bridge": "br-int", "label": "tempest-network-smoke--2010721578", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a63e80f-6b", "ovs_interfaceid": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:28:21 np0005486808 nova_compute[259627]: 2025-10-14 09:28:21.309 2 DEBUG oslo_concurrency.lockutils [req-689edcda-d2d1-4344-a989-b3d7fbf9b655 req-cb8ca5eb-c3a1-4bff-95fd-ab74f7916386 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-9314cb71-9b9f-4379-90ba-61445b09c003" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:28:21 np0005486808 nova_compute[259627]: 2025-10-14 09:28:21.309 2 DEBUG nova.network.neutron [req-689edcda-d2d1-4344-a989-b3d7fbf9b655 req-cb8ca5eb-c3a1-4bff-95fd-ab74f7916386 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Refreshing network info cache for port 5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:28:21 np0005486808 nova_compute[259627]: 2025-10-14 09:28:21.314 2 DEBUG nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Start _get_guest_xml network_info=[{"id": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "address": "fa:16:3e:a9:0f:a6", "network": {"id": "0ad5af7c-fbad-46b1-979b-db7d2639a7c3", "bridge": "br-int", "label": "tempest-network-smoke--2010721578", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a63e80f-6b", "ovs_interfaceid": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:28:21 np0005486808 nova_compute[259627]: 2025-10-14 09:28:21.320 2 WARNING nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:28:21 np0005486808 nova_compute[259627]: 2025-10-14 09:28:21.327 2 DEBUG nova.virt.libvirt.host [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:28:21 np0005486808 nova_compute[259627]: 2025-10-14 09:28:21.328 2 DEBUG nova.virt.libvirt.host [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:28:21 np0005486808 nova_compute[259627]: 2025-10-14 09:28:21.338 2 DEBUG nova.virt.libvirt.host [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:28:21 np0005486808 nova_compute[259627]: 2025-10-14 09:28:21.339 2 DEBUG nova.virt.libvirt.host [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:28:21 np0005486808 nova_compute[259627]: 2025-10-14 09:28:21.340 2 DEBUG nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:28:21 np0005486808 nova_compute[259627]: 2025-10-14 09:28:21.341 2 DEBUG nova.virt.hardware [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:28:21 np0005486808 nova_compute[259627]: 2025-10-14 09:28:21.342 2 DEBUG nova.virt.hardware [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:28:21 np0005486808 nova_compute[259627]: 2025-10-14 09:28:21.342 2 DEBUG nova.virt.hardware [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:28:21 np0005486808 nova_compute[259627]: 2025-10-14 09:28:21.343 2 DEBUG nova.virt.hardware [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:28:21 np0005486808 nova_compute[259627]: 2025-10-14 09:28:21.343 2 DEBUG nova.virt.hardware [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:28:21 np0005486808 nova_compute[259627]: 2025-10-14 09:28:21.344 2 DEBUG nova.virt.hardware [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:28:21 np0005486808 nova_compute[259627]: 2025-10-14 09:28:21.345 2 DEBUG nova.virt.hardware [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:28:21 np0005486808 nova_compute[259627]: 2025-10-14 09:28:21.345 2 DEBUG nova.virt.hardware [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:28:21 np0005486808 nova_compute[259627]: 2025-10-14 09:28:21.346 2 DEBUG nova.virt.hardware [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:28:21 np0005486808 nova_compute[259627]: 2025-10-14 09:28:21.346 2 DEBUG nova.virt.hardware [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:28:21 np0005486808 nova_compute[259627]: 2025-10-14 09:28:21.347 2 DEBUG nova.virt.hardware [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:28:21 np0005486808 nova_compute[259627]: 2025-10-14 09:28:21.352 2 DEBUG oslo_concurrency.processutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:28:21 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:28:21 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2609265715' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:28:21 np0005486808 nova_compute[259627]: 2025-10-14 09:28:21.897 2 DEBUG oslo_concurrency.processutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:28:21 np0005486808 nova_compute[259627]: 2025-10-14 09:28:21.924 2 DEBUG nova.storage.rbd_utils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 9314cb71-9b9f-4379-90ba-61445b09c003_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:28:21 np0005486808 nova_compute[259627]: 2025-10-14 09:28:21.929 2 DEBUG oslo_concurrency.processutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:28:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:28:22 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/969744071' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:28:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2257: 305 pgs: 305 active+clean; 134 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 90 op/s
Oct 14 05:28:22 np0005486808 nova_compute[259627]: 2025-10-14 09:28:22.388 2 DEBUG oslo_concurrency.processutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:28:22 np0005486808 nova_compute[259627]: 2025-10-14 09:28:22.390 2 DEBUG nova.virt.libvirt.vif [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:28:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-575903537',display_name='tempest-TestNetworkBasicOps-server-575903537',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-575903537',id=131,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMZNK+Wbj9xAsgpWQ+tziM8jRjajF62ZTB5kTzrK/xn8p7FnYdtDdeLVfLEHTxjGjbKXIgJlw92Cf9a6ZSykZsO+5buce9Y3MqTSkOEHFe9bsgLvMZ76ONZmB8tSQVQONA==',key_name='tempest-TestNetworkBasicOps-1689754690',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-eaasug7b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:28:16Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=9314cb71-9b9f-4379-90ba-61445b09c003,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "address": "fa:16:3e:a9:0f:a6", "network": {"id": "0ad5af7c-fbad-46b1-979b-db7d2639a7c3", "bridge": "br-int", "label": "tempest-network-smoke--2010721578", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a63e80f-6b", "ovs_interfaceid": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:28:22 np0005486808 nova_compute[259627]: 2025-10-14 09:28:22.391 2 DEBUG nova.network.os_vif_util [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "address": "fa:16:3e:a9:0f:a6", "network": {"id": "0ad5af7c-fbad-46b1-979b-db7d2639a7c3", "bridge": "br-int", "label": "tempest-network-smoke--2010721578", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a63e80f-6b", "ovs_interfaceid": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:28:22 np0005486808 nova_compute[259627]: 2025-10-14 09:28:22.392 2 DEBUG nova.network.os_vif_util [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:0f:a6,bridge_name='br-int',has_traffic_filtering=True,id=5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa,network=Network(0ad5af7c-fbad-46b1-979b-db7d2639a7c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a63e80f-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:28:22 np0005486808 nova_compute[259627]: 2025-10-14 09:28:22.393 2 DEBUG nova.objects.instance [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9314cb71-9b9f-4379-90ba-61445b09c003 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:28:22 np0005486808 nova_compute[259627]: 2025-10-14 09:28:22.415 2 DEBUG nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:28:22 np0005486808 nova_compute[259627]:  <uuid>9314cb71-9b9f-4379-90ba-61445b09c003</uuid>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:  <name>instance-00000083</name>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:28:22 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:      <nova:name>tempest-TestNetworkBasicOps-server-575903537</nova:name>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:28:21</nova:creationTime>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:28:22 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:        <nova:user uuid="10926c27278e45e7b995f2e53b9d16f9">tempest-TestNetworkBasicOps-2096526217-project-member</nova:user>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:        <nova:project uuid="4562699bdb1548f1bb36819107535620">tempest-TestNetworkBasicOps-2096526217</nova:project>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:        <nova:port uuid="5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa">
Oct 14 05:28:22 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:      <entry name="serial">9314cb71-9b9f-4379-90ba-61445b09c003</entry>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:      <entry name="uuid">9314cb71-9b9f-4379-90ba-61445b09c003</entry>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:28:22 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/9314cb71-9b9f-4379-90ba-61445b09c003_disk">
Oct 14 05:28:22 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:28:22 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:28:22 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/9314cb71-9b9f-4379-90ba-61445b09c003_disk.config">
Oct 14 05:28:22 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:28:22 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:28:22 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:a9:0f:a6"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:      <target dev="tap5a63e80f-6b"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:28:22 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/9314cb71-9b9f-4379-90ba-61445b09c003/console.log" append="off"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:28:22 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:28:22 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:28:22 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:28:22 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:28:22 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:28:22 np0005486808 nova_compute[259627]: 2025-10-14 09:28:22.417 2 DEBUG nova.compute.manager [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Preparing to wait for external event network-vif-plugged-5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:28:22 np0005486808 nova_compute[259627]: 2025-10-14 09:28:22.417 2 DEBUG oslo_concurrency.lockutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "9314cb71-9b9f-4379-90ba-61445b09c003-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:28:22 np0005486808 nova_compute[259627]: 2025-10-14 09:28:22.418 2 DEBUG oslo_concurrency.lockutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "9314cb71-9b9f-4379-90ba-61445b09c003-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:28:22 np0005486808 nova_compute[259627]: 2025-10-14 09:28:22.418 2 DEBUG oslo_concurrency.lockutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "9314cb71-9b9f-4379-90ba-61445b09c003-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:28:22 np0005486808 nova_compute[259627]: 2025-10-14 09:28:22.419 2 DEBUG nova.virt.libvirt.vif [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:28:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-575903537',display_name='tempest-TestNetworkBasicOps-server-575903537',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-575903537',id=131,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMZNK+Wbj9xAsgpWQ+tziM8jRjajF62ZTB5kTzrK/xn8p7FnYdtDdeLVfLEHTxjGjbKXIgJlw92Cf9a6ZSykZsO+5buce9Y3MqTSkOEHFe9bsgLvMZ76ONZmB8tSQVQONA==',key_name='tempest-TestNetworkBasicOps-1689754690',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-eaasug7b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:28:16Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=9314cb71-9b9f-4379-90ba-61445b09c003,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "address": "fa:16:3e:a9:0f:a6", "network": {"id": "0ad5af7c-fbad-46b1-979b-db7d2639a7c3", "bridge": "br-int", "label": "tempest-network-smoke--2010721578", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a63e80f-6b", "ovs_interfaceid": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:28:22 np0005486808 nova_compute[259627]: 2025-10-14 09:28:22.419 2 DEBUG nova.network.os_vif_util [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "address": "fa:16:3e:a9:0f:a6", "network": {"id": "0ad5af7c-fbad-46b1-979b-db7d2639a7c3", "bridge": "br-int", "label": "tempest-network-smoke--2010721578", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a63e80f-6b", "ovs_interfaceid": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:28:22 np0005486808 nova_compute[259627]: 2025-10-14 09:28:22.420 2 DEBUG nova.network.os_vif_util [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:0f:a6,bridge_name='br-int',has_traffic_filtering=True,id=5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa,network=Network(0ad5af7c-fbad-46b1-979b-db7d2639a7c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a63e80f-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:28:22 np0005486808 nova_compute[259627]: 2025-10-14 09:28:22.420 2 DEBUG os_vif [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:0f:a6,bridge_name='br-int',has_traffic_filtering=True,id=5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa,network=Network(0ad5af7c-fbad-46b1-979b-db7d2639a7c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a63e80f-6b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:28:22 np0005486808 nova_compute[259627]: 2025-10-14 09:28:22.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:28:22 np0005486808 nova_compute[259627]: 2025-10-14 09:28:22.421 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:28:22 np0005486808 nova_compute[259627]: 2025-10-14 09:28:22.422 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:28:22 np0005486808 nova_compute[259627]: 2025-10-14 09:28:22.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:28:22 np0005486808 nova_compute[259627]: 2025-10-14 09:28:22.425 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5a63e80f-6b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:28:22 np0005486808 nova_compute[259627]: 2025-10-14 09:28:22.426 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5a63e80f-6b, col_values=(('external_ids', {'iface-id': '5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a9:0f:a6', 'vm-uuid': '9314cb71-9b9f-4379-90ba-61445b09c003'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:28:22 np0005486808 nova_compute[259627]: 2025-10-14 09:28:22.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:28:22 np0005486808 NetworkManager[44885]: <info>  [1760434102.4286] manager: (tap5a63e80f-6b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/570)
Oct 14 05:28:22 np0005486808 nova_compute[259627]: 2025-10-14 09:28:22.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:28:22 np0005486808 nova_compute[259627]: 2025-10-14 09:28:22.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:28:22 np0005486808 nova_compute[259627]: 2025-10-14 09:28:22.436 2 INFO os_vif [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:0f:a6,bridge_name='br-int',has_traffic_filtering=True,id=5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa,network=Network(0ad5af7c-fbad-46b1-979b-db7d2639a7c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a63e80f-6b')#033[00m
Oct 14 05:28:22 np0005486808 nova_compute[259627]: 2025-10-14 09:28:22.504 2 DEBUG nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:28:22 np0005486808 nova_compute[259627]: 2025-10-14 09:28:22.505 2 DEBUG nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:28:22 np0005486808 nova_compute[259627]: 2025-10-14 09:28:22.505 2 DEBUG nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] No VIF found with MAC fa:16:3e:a9:0f:a6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:28:22 np0005486808 nova_compute[259627]: 2025-10-14 09:28:22.506 2 INFO nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Using config drive#033[00m
Oct 14 05:28:22 np0005486808 nova_compute[259627]: 2025-10-14 09:28:22.524 2 DEBUG nova.storage.rbd_utils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 9314cb71-9b9f-4379-90ba-61445b09c003_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:28:22 np0005486808 nova_compute[259627]: 2025-10-14 09:28:22.986 2 INFO nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Creating config drive at /var/lib/nova/instances/9314cb71-9b9f-4379-90ba-61445b09c003/disk.config#033[00m
Oct 14 05:28:22 np0005486808 nova_compute[259627]: 2025-10-14 09:28:22.993 2 DEBUG oslo_concurrency.processutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9314cb71-9b9f-4379-90ba-61445b09c003/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5tipgpue execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:28:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:28:23 np0005486808 nova_compute[259627]: 2025-10-14 09:28:23.149 2 DEBUG oslo_concurrency.processutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9314cb71-9b9f-4379-90ba-61445b09c003/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5tipgpue" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:28:23 np0005486808 nova_compute[259627]: 2025-10-14 09:28:23.180 2 DEBUG nova.storage.rbd_utils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] rbd image 9314cb71-9b9f-4379-90ba-61445b09c003_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:28:23 np0005486808 nova_compute[259627]: 2025-10-14 09:28:23.184 2 DEBUG oslo_concurrency.processutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9314cb71-9b9f-4379-90ba-61445b09c003/disk.config 9314cb71-9b9f-4379-90ba-61445b09c003_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:28:23 np0005486808 nova_compute[259627]: 2025-10-14 09:28:23.239 2 DEBUG nova.network.neutron [req-689edcda-d2d1-4344-a989-b3d7fbf9b655 req-cb8ca5eb-c3a1-4bff-95fd-ab74f7916386 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Updated VIF entry in instance network info cache for port 5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:28:23 np0005486808 nova_compute[259627]: 2025-10-14 09:28:23.240 2 DEBUG nova.network.neutron [req-689edcda-d2d1-4344-a989-b3d7fbf9b655 req-cb8ca5eb-c3a1-4bff-95fd-ab74f7916386 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Updating instance_info_cache with network_info: [{"id": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "address": "fa:16:3e:a9:0f:a6", "network": {"id": "0ad5af7c-fbad-46b1-979b-db7d2639a7c3", "bridge": "br-int", "label": "tempest-network-smoke--2010721578", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a63e80f-6b", "ovs_interfaceid": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:28:23 np0005486808 nova_compute[259627]: 2025-10-14 09:28:23.268 2 DEBUG oslo_concurrency.lockutils [req-689edcda-d2d1-4344-a989-b3d7fbf9b655 req-cb8ca5eb-c3a1-4bff-95fd-ab74f7916386 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-9314cb71-9b9f-4379-90ba-61445b09c003" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:28:23 np0005486808 nova_compute[259627]: 2025-10-14 09:28:23.371 2 DEBUG oslo_concurrency.processutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9314cb71-9b9f-4379-90ba-61445b09c003/disk.config 9314cb71-9b9f-4379-90ba-61445b09c003_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:28:23 np0005486808 nova_compute[259627]: 2025-10-14 09:28:23.372 2 INFO nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Deleting local config drive /var/lib/nova/instances/9314cb71-9b9f-4379-90ba-61445b09c003/disk.config because it was imported into RBD.#033[00m
Oct 14 05:28:23 np0005486808 kernel: tap5a63e80f-6b: entered promiscuous mode
Oct 14 05:28:23 np0005486808 NetworkManager[44885]: <info>  [1760434103.4278] manager: (tap5a63e80f-6b): new Tun device (/org/freedesktop/NetworkManager/Devices/571)
Oct 14 05:28:23 np0005486808 ovn_controller[152662]: 2025-10-14T09:28:23Z|01399|binding|INFO|Claiming lport 5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa for this chassis.
Oct 14 05:28:23 np0005486808 ovn_controller[152662]: 2025-10-14T09:28:23Z|01400|binding|INFO|5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa: Claiming fa:16:3e:a9:0f:a6 10.100.0.12
Oct 14 05:28:23 np0005486808 nova_compute[259627]: 2025-10-14 09:28:23.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.442 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:0f:a6 10.100.0.12'], port_security=['fa:16:3e:a9:0f:a6 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '9314cb71-9b9f-4379-90ba-61445b09c003', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0ad5af7c-fbad-46b1-979b-db7d2639a7c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ee202a2d-dda0-40f4-92dc-f1908630878a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42808c73-a7f9-4337-928b-894f78f53e75, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.443 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa in datapath 0ad5af7c-fbad-46b1-979b-db7d2639a7c3 bound to our chassis#033[00m
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.444 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0ad5af7c-fbad-46b1-979b-db7d2639a7c3#033[00m
Oct 14 05:28:23 np0005486808 ovn_controller[152662]: 2025-10-14T09:28:23Z|01401|binding|INFO|Setting lport 5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa ovn-installed in OVS
Oct 14 05:28:23 np0005486808 ovn_controller[152662]: 2025-10-14T09:28:23Z|01402|binding|INFO|Setting lport 5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa up in Southbound
Oct 14 05:28:23 np0005486808 nova_compute[259627]: 2025-10-14 09:28:23.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.464 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[aef3a2e5-ecf4-4112-ad58-3a7805aa7ea9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.468 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0ad5af7c-f1 in ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.470 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0ad5af7c-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.471 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a3e44e86-5ec0-4564-a21c-41a5eb712539]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.472 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2165a011-dc42-49f9-87da-dbf86580a4a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:23 np0005486808 systemd-machined[214636]: New machine qemu-164-instance-00000083.
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.487 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[6f1e6d26-3381-405d-829f-9fbcad5e3a41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:23 np0005486808 systemd[1]: Started Virtual Machine qemu-164-instance-00000083.
Oct 14 05:28:23 np0005486808 systemd-udevd[394335]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.517 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8b79a5ad-84e7-4746-91ad-bc1a6680c9a2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:23 np0005486808 NetworkManager[44885]: <info>  [1760434103.5399] device (tap5a63e80f-6b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:28:23 np0005486808 NetworkManager[44885]: <info>  [1760434103.5416] device (tap5a63e80f-6b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:28:23 np0005486808 podman[394307]: 2025-10-14 09:28:23.551178135 +0000 UTC m=+0.078439509 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.562 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e14a343f-d418-47cb-b83b-496d79576866]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.569 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[52f732fb-e621-43e5-b184-669129d0cced]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:23 np0005486808 NetworkManager[44885]: <info>  [1760434103.5711] manager: (tap0ad5af7c-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/572)
Oct 14 05:28:23 np0005486808 systemd-udevd[394354]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:28:23 np0005486808 podman[394306]: 2025-10-14 09:28:23.626089336 +0000 UTC m=+0.163229024 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.627 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[2c05430a-5314-4180-a6a3-67ee56fa9fbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.630 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a4f517f1-d4cd-4ba7-a2ce-16b6c8a8c241]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:23 np0005486808 NetworkManager[44885]: <info>  [1760434103.6584] device (tap0ad5af7c-f0): carrier: link connected
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.658 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[431703c3-1767-4da7-b193-e2ca27c8dfb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.676 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5b6c9887-9d61-4b75-823c-f51bd67f3933]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0ad5af7c-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:43:03:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 399], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 788241, 'reachable_time': 18563, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 394388, 'error': None, 'target': 'ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.688 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e6d800a6-bddb-413c-a21b-9a03ac16dcf9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe43:3be'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 788241, 'tstamp': 788241}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394389, 'error': None, 'target': 'ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:23 np0005486808 nova_compute[259627]: 2025-10-14 09:28:23.690 2 DEBUG nova.compute.manager [req-e498f7c3-5ac6-49b6-b0c2-834ea98722b1 req-1a879a05-89b6-4d55-994f-780c2489d81f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Received event network-vif-plugged-5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:28:23 np0005486808 nova_compute[259627]: 2025-10-14 09:28:23.691 2 DEBUG oslo_concurrency.lockutils [req-e498f7c3-5ac6-49b6-b0c2-834ea98722b1 req-1a879a05-89b6-4d55-994f-780c2489d81f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9314cb71-9b9f-4379-90ba-61445b09c003-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:28:23 np0005486808 nova_compute[259627]: 2025-10-14 09:28:23.691 2 DEBUG oslo_concurrency.lockutils [req-e498f7c3-5ac6-49b6-b0c2-834ea98722b1 req-1a879a05-89b6-4d55-994f-780c2489d81f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9314cb71-9b9f-4379-90ba-61445b09c003-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:28:23 np0005486808 nova_compute[259627]: 2025-10-14 09:28:23.691 2 DEBUG oslo_concurrency.lockutils [req-e498f7c3-5ac6-49b6-b0c2-834ea98722b1 req-1a879a05-89b6-4d55-994f-780c2489d81f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9314cb71-9b9f-4379-90ba-61445b09c003-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:28:23 np0005486808 nova_compute[259627]: 2025-10-14 09:28:23.691 2 DEBUG nova.compute.manager [req-e498f7c3-5ac6-49b6-b0c2-834ea98722b1 req-1a879a05-89b6-4d55-994f-780c2489d81f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Processing event network-vif-plugged-5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.706 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[58f0d4a4-ee8b-4fb1-93c3-f4295be31389]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0ad5af7c-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:43:03:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 399], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 788241, 'reachable_time': 18563, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 394390, 'error': None, 'target': 'ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.736 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6cda7639-76f7-41bf-bf03-eaf55c21a2b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.793 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4bce08cd-61d8-4248-bd1f-5bb8324f2077]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.797 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ad5af7c-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.797 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.798 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0ad5af7c-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:28:23 np0005486808 nova_compute[259627]: 2025-10-14 09:28:23.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:28:23 np0005486808 NetworkManager[44885]: <info>  [1760434103.8003] manager: (tap0ad5af7c-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/573)
Oct 14 05:28:23 np0005486808 kernel: tap0ad5af7c-f0: entered promiscuous mode
Oct 14 05:28:23 np0005486808 nova_compute[259627]: 2025-10-14 09:28:23.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.804 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0ad5af7c-f0, col_values=(('external_ids', {'iface-id': 'b64d84a0-5356-4297-8228-85522d087442'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:28:23 np0005486808 nova_compute[259627]: 2025-10-14 09:28:23.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:28:23 np0005486808 ovn_controller[152662]: 2025-10-14T09:28:23Z|01403|binding|INFO|Releasing lport b64d84a0-5356-4297-8228-85522d087442 from this chassis (sb_readonly=0)
Oct 14 05:28:23 np0005486808 nova_compute[259627]: 2025-10-14 09:28:23.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:28:23 np0005486808 nova_compute[259627]: 2025-10-14 09:28:23.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.818 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0ad5af7c-fbad-46b1-979b-db7d2639a7c3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0ad5af7c-fbad-46b1-979b-db7d2639a7c3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.819 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5c5a5ef4-9dac-410a-b9ec-e03eba29e18b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.819 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-0ad5af7c-fbad-46b1-979b-db7d2639a7c3
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/0ad5af7c-fbad-46b1-979b-db7d2639a7c3.pid.haproxy
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 0ad5af7c-fbad-46b1-979b-db7d2639a7c3
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:28:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:23.821 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3', 'env', 'PROCESS_TAG=haproxy-0ad5af7c-fbad-46b1-979b-db7d2639a7c3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0ad5af7c-fbad-46b1-979b-db7d2639a7c3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:28:24 np0005486808 ovn_controller[152662]: 2025-10-14T09:28:24Z|00158|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b3:5d:72 10.100.0.12
Oct 14 05:28:24 np0005486808 ovn_controller[152662]: 2025-10-14T09:28:24Z|00159|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b3:5d:72 10.100.0.12
Oct 14 05:28:24 np0005486808 podman[394464]: 2025-10-14 09:28:24.244846605 +0000 UTC m=+0.080971912 container create de148a123f8991f5ef3217343d7bc0a7b6e8802ccc588c7f0e22f031aec8154c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 14 05:28:24 np0005486808 systemd[1]: Started libpod-conmon-de148a123f8991f5ef3217343d7bc0a7b6e8802ccc588c7f0e22f031aec8154c.scope.
Oct 14 05:28:24 np0005486808 podman[394464]: 2025-10-14 09:28:24.203886173 +0000 UTC m=+0.040011910 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:28:24 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:28:24 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f023d5bb6e112cd231a0a14f750e65530a9849b6d39894261763a8809936d7ae/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:28:24 np0005486808 podman[394464]: 2025-10-14 09:28:24.343845351 +0000 UTC m=+0.179970678 container init de148a123f8991f5ef3217343d7bc0a7b6e8802ccc588c7f0e22f031aec8154c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:28:24 np0005486808 podman[394464]: 2025-10-14 09:28:24.350498256 +0000 UTC m=+0.186623563 container start de148a123f8991f5ef3217343d7bc0a7b6e8802ccc588c7f0e22f031aec8154c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:28:24 np0005486808 neutron-haproxy-ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3[394479]: [NOTICE]   (394483) : New worker (394485) forked
Oct 14 05:28:24 np0005486808 neutron-haproxy-ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3[394479]: [NOTICE]   (394483) : Loading success.
Oct 14 05:28:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2258: 305 pgs: 305 active+clean; 134 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 90 op/s
Oct 14 05:28:24 np0005486808 nova_compute[259627]: 2025-10-14 09:28:24.555 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434104.554593, 9314cb71-9b9f-4379-90ba-61445b09c003 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:28:24 np0005486808 nova_compute[259627]: 2025-10-14 09:28:24.556 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] VM Started (Lifecycle Event)#033[00m
Oct 14 05:28:24 np0005486808 nova_compute[259627]: 2025-10-14 09:28:24.558 2 DEBUG nova.compute.manager [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:28:24 np0005486808 nova_compute[259627]: 2025-10-14 09:28:24.563 2 DEBUG nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:28:24 np0005486808 nova_compute[259627]: 2025-10-14 09:28:24.569 2 INFO nova.virt.libvirt.driver [-] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Instance spawned successfully.#033[00m
Oct 14 05:28:24 np0005486808 nova_compute[259627]: 2025-10-14 09:28:24.569 2 DEBUG nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 05:28:24 np0005486808 nova_compute[259627]: 2025-10-14 09:28:24.595 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:28:24 np0005486808 nova_compute[259627]: 2025-10-14 09:28:24.603 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 05:28:24 np0005486808 nova_compute[259627]: 2025-10-14 09:28:24.609 2 DEBUG nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:28:24 np0005486808 nova_compute[259627]: 2025-10-14 09:28:24.610 2 DEBUG nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:28:24 np0005486808 nova_compute[259627]: 2025-10-14 09:28:24.611 2 DEBUG nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:28:24 np0005486808 nova_compute[259627]: 2025-10-14 09:28:24.612 2 DEBUG nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:28:24 np0005486808 nova_compute[259627]: 2025-10-14 09:28:24.612 2 DEBUG nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:28:24 np0005486808 nova_compute[259627]: 2025-10-14 09:28:24.613 2 DEBUG nova.virt.libvirt.driver [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:28:24 np0005486808 nova_compute[259627]: 2025-10-14 09:28:24.654 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 05:28:24 np0005486808 nova_compute[259627]: 2025-10-14 09:28:24.655 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434104.5548694, 9314cb71-9b9f-4379-90ba-61445b09c003 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:28:24 np0005486808 nova_compute[259627]: 2025-10-14 09:28:24.655 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] VM Paused (Lifecycle Event)
Oct 14 05:28:24 np0005486808 nova_compute[259627]: 2025-10-14 09:28:24.698 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:28:24 np0005486808 nova_compute[259627]: 2025-10-14 09:28:24.704 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434104.5616372, 9314cb71-9b9f-4379-90ba-61445b09c003 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:28:24 np0005486808 nova_compute[259627]: 2025-10-14 09:28:24.704 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] VM Resumed (Lifecycle Event)
Oct 14 05:28:24 np0005486808 nova_compute[259627]: 2025-10-14 09:28:24.711 2 INFO nova.compute.manager [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Took 8.42 seconds to spawn the instance on the hypervisor.
Oct 14 05:28:24 np0005486808 nova_compute[259627]: 2025-10-14 09:28:24.712 2 DEBUG nova.compute.manager [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:28:24 np0005486808 nova_compute[259627]: 2025-10-14 09:28:24.730 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:28:24 np0005486808 nova_compute[259627]: 2025-10-14 09:28:24.734 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 05:28:24 np0005486808 nova_compute[259627]: 2025-10-14 09:28:24.775 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 05:28:24 np0005486808 nova_compute[259627]: 2025-10-14 09:28:24.804 2 INFO nova.compute.manager [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Took 9.45 seconds to build instance.
Oct 14 05:28:24 np0005486808 nova_compute[259627]: 2025-10-14 09:28:24.822 2 DEBUG oslo_concurrency.lockutils [None req-d1ec9361-7028-4a9a-ace6-4d28a18b3791 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "9314cb71-9b9f-4379-90ba-61445b09c003" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:28:25 np0005486808 nova_compute[259627]: 2025-10-14 09:28:25.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:28:25 np0005486808 nova_compute[259627]: 2025-10-14 09:28:25.810 2 DEBUG nova.compute.manager [req-955de7ec-b67c-470b-9a34-e20d9d72d42c req-dbe39656-eab3-4122-b93c-d278ae1b2602 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Received event network-vif-plugged-5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:28:25 np0005486808 nova_compute[259627]: 2025-10-14 09:28:25.810 2 DEBUG oslo_concurrency.lockutils [req-955de7ec-b67c-470b-9a34-e20d9d72d42c req-dbe39656-eab3-4122-b93c-d278ae1b2602 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9314cb71-9b9f-4379-90ba-61445b09c003-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:28:25 np0005486808 nova_compute[259627]: 2025-10-14 09:28:25.811 2 DEBUG oslo_concurrency.lockutils [req-955de7ec-b67c-470b-9a34-e20d9d72d42c req-dbe39656-eab3-4122-b93c-d278ae1b2602 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9314cb71-9b9f-4379-90ba-61445b09c003-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:28:25 np0005486808 nova_compute[259627]: 2025-10-14 09:28:25.811 2 DEBUG oslo_concurrency.lockutils [req-955de7ec-b67c-470b-9a34-e20d9d72d42c req-dbe39656-eab3-4122-b93c-d278ae1b2602 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9314cb71-9b9f-4379-90ba-61445b09c003-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:28:25 np0005486808 nova_compute[259627]: 2025-10-14 09:28:25.811 2 DEBUG nova.compute.manager [req-955de7ec-b67c-470b-9a34-e20d9d72d42c req-dbe39656-eab3-4122-b93c-d278ae1b2602 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] No waiting events found dispatching network-vif-plugged-5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 05:28:25 np0005486808 nova_compute[259627]: 2025-10-14 09:28:25.811 2 WARNING nova.compute.manager [req-955de7ec-b67c-470b-9a34-e20d9d72d42c req-dbe39656-eab3-4122-b93c-d278ae1b2602 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Received unexpected event network-vif-plugged-5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa for instance with vm_state active and task_state None.
Oct 14 05:28:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2259: 305 pgs: 305 active+clean; 167 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.9 MiB/s wr, 188 op/s
Oct 14 05:28:27 np0005486808 nova_compute[259627]: 2025-10-14 09:28:27.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:28:27 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:27.373 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 05:28:27 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:27.374 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 05:28:27 np0005486808 nova_compute[259627]: 2025-10-14 09:28:27.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:28:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:28:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2260: 305 pgs: 305 active+clean; 167 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.9 MiB/s wr, 124 op/s
Oct 14 05:28:28 np0005486808 nova_compute[259627]: 2025-10-14 09:28:28.548 2 DEBUG nova.compute.manager [req-eb39e2f5-9a50-4899-95c7-c361215c8d45 req-88418871-8872-4d4f-ba86-bc553c394476 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Received event network-changed-5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:28:28 np0005486808 nova_compute[259627]: 2025-10-14 09:28:28.549 2 DEBUG nova.compute.manager [req-eb39e2f5-9a50-4899-95c7-c361215c8d45 req-88418871-8872-4d4f-ba86-bc553c394476 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Refreshing instance network info cache due to event network-changed-5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 05:28:28 np0005486808 nova_compute[259627]: 2025-10-14 09:28:28.550 2 DEBUG oslo_concurrency.lockutils [req-eb39e2f5-9a50-4899-95c7-c361215c8d45 req-88418871-8872-4d4f-ba86-bc553c394476 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-9314cb71-9b9f-4379-90ba-61445b09c003" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 05:28:28 np0005486808 nova_compute[259627]: 2025-10-14 09:28:28.550 2 DEBUG oslo_concurrency.lockutils [req-eb39e2f5-9a50-4899-95c7-c361215c8d45 req-88418871-8872-4d4f-ba86-bc553c394476 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-9314cb71-9b9f-4379-90ba-61445b09c003" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 05:28:28 np0005486808 nova_compute[259627]: 2025-10-14 09:28:28.551 2 DEBUG nova.network.neutron [req-eb39e2f5-9a50-4899-95c7-c361215c8d45 req-88418871-8872-4d4f-ba86-bc553c394476 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Refreshing network info cache for port 5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 05:28:29 np0005486808 nova_compute[259627]: 2025-10-14 09:28:29.879 2 DEBUG nova.network.neutron [req-eb39e2f5-9a50-4899-95c7-c361215c8d45 req-88418871-8872-4d4f-ba86-bc553c394476 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Updated VIF entry in instance network info cache for port 5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 05:28:29 np0005486808 nova_compute[259627]: 2025-10-14 09:28:29.881 2 DEBUG nova.network.neutron [req-eb39e2f5-9a50-4899-95c7-c361215c8d45 req-88418871-8872-4d4f-ba86-bc553c394476 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Updating instance_info_cache with network_info: [{"id": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "address": "fa:16:3e:a9:0f:a6", "network": {"id": "0ad5af7c-fbad-46b1-979b-db7d2639a7c3", "bridge": "br-int", "label": "tempest-network-smoke--2010721578", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a63e80f-6b", "ovs_interfaceid": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 05:28:29 np0005486808 nova_compute[259627]: 2025-10-14 09:28:29.905 2 DEBUG oslo_concurrency.lockutils [req-eb39e2f5-9a50-4899-95c7-c361215c8d45 req-88418871-8872-4d4f-ba86-bc553c394476 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-9314cb71-9b9f-4379-90ba-61445b09c003" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 05:28:30 np0005486808 nova_compute[259627]: 2025-10-14 09:28:30.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:28:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2261: 305 pgs: 305 active+clean; 167 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 142 op/s
Oct 14 05:28:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:32.376 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:28:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2262: 305 pgs: 305 active+clean; 167 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.5 MiB/s wr, 160 op/s
Oct 14 05:28:32 np0005486808 nova_compute[259627]: 2025-10-14 09:28:32.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:28:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:28:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:28:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:28:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:28:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:28:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:28:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:28:32
Oct 14 05:28:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:28:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:28:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['default.rgw.control', 'vms', '.mgr', 'default.rgw.meta', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'volumes', 'images']
Oct 14 05:28:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:28:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:28:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:28:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:28:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:28:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:28:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:28:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:28:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:28:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:28:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:28:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:28:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2263: 305 pgs: 305 active+clean; 167 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 134 op/s
Oct 14 05:28:35 np0005486808 nova_compute[259627]: 2025-10-14 09:28:35.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:28:36 np0005486808 ovn_controller[152662]: 2025-10-14T09:28:36Z|00160|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a9:0f:a6 10.100.0.12
Oct 14 05:28:36 np0005486808 ovn_controller[152662]: 2025-10-14T09:28:36Z|00161|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a9:0f:a6 10.100.0.12
Oct 14 05:28:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2264: 305 pgs: 305 active+clean; 178 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.5 MiB/s wr, 164 op/s
Oct 14 05:28:36 np0005486808 nova_compute[259627]: 2025-10-14 09:28:36.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:28:36 np0005486808 nova_compute[259627]: 2025-10-14 09:28:36.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 14 05:28:37 np0005486808 nova_compute[259627]: 2025-10-14 09:28:37.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:28:37 np0005486808 nova_compute[259627]: 2025-10-14 09:28:37.678 2 DEBUG oslo_concurrency.lockutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:28:37 np0005486808 nova_compute[259627]: 2025-10-14 09:28:37.679 2 DEBUG oslo_concurrency.lockutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:28:37 np0005486808 nova_compute[259627]: 2025-10-14 09:28:37.697 2 DEBUG nova.compute.manager [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 14 05:28:37 np0005486808 nova_compute[259627]: 2025-10-14 09:28:37.790 2 DEBUG oslo_concurrency.lockutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:28:37 np0005486808 nova_compute[259627]: 2025-10-14 09:28:37.791 2 DEBUG oslo_concurrency.lockutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:28:37 np0005486808 nova_compute[259627]: 2025-10-14 09:28:37.802 2 DEBUG nova.virt.hardware [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 05:28:37 np0005486808 nova_compute[259627]: 2025-10-14 09:28:37.802 2 INFO nova.compute.claims [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Claim successful on node compute-0.ctlplane.example.com
Oct 14 05:28:37 np0005486808 nova_compute[259627]: 2025-10-14 09:28:37.977 2 DEBUG oslo_concurrency.processutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:28:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:28:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2265: 305 pgs: 305 active+clean; 178 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.4 MiB/s wr, 66 op/s
Oct 14 05:28:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:28:38 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1782866377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:28:38 np0005486808 nova_compute[259627]: 2025-10-14 09:28:38.463 2 DEBUG oslo_concurrency.processutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:28:38 np0005486808 nova_compute[259627]: 2025-10-14 09:28:38.471 2 DEBUG nova.compute.provider_tree [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:28:38 np0005486808 nova_compute[259627]: 2025-10-14 09:28:38.486 2 DEBUG nova.scheduler.client.report [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:28:38 np0005486808 nova_compute[259627]: 2025-10-14 09:28:38.504 2 DEBUG oslo_concurrency.lockutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:28:38 np0005486808 nova_compute[259627]: 2025-10-14 09:28:38.504 2 DEBUG nova.compute.manager [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:28:38 np0005486808 nova_compute[259627]: 2025-10-14 09:28:38.544 2 DEBUG nova.compute.manager [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:28:38 np0005486808 nova_compute[259627]: 2025-10-14 09:28:38.545 2 DEBUG nova.network.neutron [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:28:38 np0005486808 nova_compute[259627]: 2025-10-14 09:28:38.564 2 INFO nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:28:38 np0005486808 nova_compute[259627]: 2025-10-14 09:28:38.584 2 DEBUG nova.compute.manager [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:28:38 np0005486808 nova_compute[259627]: 2025-10-14 09:28:38.679 2 DEBUG nova.compute.manager [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 14 05:28:38 np0005486808 nova_compute[259627]: 2025-10-14 09:28:38.680 2 DEBUG nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 14 05:28:38 np0005486808 nova_compute[259627]: 2025-10-14 09:28:38.680 2 INFO nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Creating image(s)
Oct 14 05:28:38 np0005486808 nova_compute[259627]: 2025-10-14 09:28:38.707 2 DEBUG nova.storage.rbd_utils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image fb3db81b-4d6f-4736-9d4b-b1900fad6488_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:28:38 np0005486808 nova_compute[259627]: 2025-10-14 09:28:38.735 2 DEBUG nova.storage.rbd_utils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image fb3db81b-4d6f-4736-9d4b-b1900fad6488_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:28:38 np0005486808 nova_compute[259627]: 2025-10-14 09:28:38.759 2 DEBUG nova.storage.rbd_utils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image fb3db81b-4d6f-4736-9d4b-b1900fad6488_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:28:38 np0005486808 nova_compute[259627]: 2025-10-14 09:28:38.763 2 DEBUG oslo_concurrency.processutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:28:38 np0005486808 nova_compute[259627]: 2025-10-14 09:28:38.856 2 DEBUG nova.policy [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '20f3546ab30e42b5b641f67780316750', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f754bf649a2404fa8dee732f5aab36e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 14 05:28:38 np0005486808 nova_compute[259627]: 2025-10-14 09:28:38.859 2 DEBUG oslo_concurrency.processutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:28:38 np0005486808 nova_compute[259627]: 2025-10-14 09:28:38.860 2 DEBUG oslo_concurrency.lockutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:28:38 np0005486808 nova_compute[259627]: 2025-10-14 09:28:38.860 2 DEBUG oslo_concurrency.lockutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:28:38 np0005486808 nova_compute[259627]: 2025-10-14 09:28:38.860 2 DEBUG oslo_concurrency.lockutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:28:38 np0005486808 nova_compute[259627]: 2025-10-14 09:28:38.881 2 DEBUG nova.storage.rbd_utils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image fb3db81b-4d6f-4736-9d4b-b1900fad6488_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:28:38 np0005486808 nova_compute[259627]: 2025-10-14 09:28:38.884 2 DEBUG oslo_concurrency.processutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 fb3db81b-4d6f-4736-9d4b-b1900fad6488_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:28:39 np0005486808 nova_compute[259627]: 2025-10-14 09:28:39.153 2 DEBUG oslo_concurrency.processutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 fb3db81b-4d6f-4736-9d4b-b1900fad6488_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.269s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:28:39 np0005486808 nova_compute[259627]: 2025-10-14 09:28:39.220 2 DEBUG nova.storage.rbd_utils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] resizing rbd image fb3db81b-4d6f-4736-9d4b-b1900fad6488_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 05:28:39 np0005486808 nova_compute[259627]: 2025-10-14 09:28:39.323 2 DEBUG nova.objects.instance [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'migration_context' on Instance uuid fb3db81b-4d6f-4736-9d4b-b1900fad6488 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:28:39 np0005486808 nova_compute[259627]: 2025-10-14 09:28:39.343 2 DEBUG nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 05:28:39 np0005486808 nova_compute[259627]: 2025-10-14 09:28:39.343 2 DEBUG nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Ensure instance console log exists: /var/lib/nova/instances/fb3db81b-4d6f-4736-9d4b-b1900fad6488/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 05:28:39 np0005486808 nova_compute[259627]: 2025-10-14 09:28:39.344 2 DEBUG oslo_concurrency.lockutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:28:39 np0005486808 nova_compute[259627]: 2025-10-14 09:28:39.344 2 DEBUG oslo_concurrency.lockutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:28:39 np0005486808 nova_compute[259627]: 2025-10-14 09:28:39.344 2 DEBUG oslo_concurrency.lockutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:28:39 np0005486808 nova_compute[259627]: 2025-10-14 09:28:39.785 2 DEBUG nova.network.neutron [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Successfully created port: 3550cf12-50e7-4809-9e33-8057ba120200 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 14 05:28:40 np0005486808 nova_compute[259627]: 2025-10-14 09:28:39.999 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:28:40 np0005486808 nova_compute[259627]: 2025-10-14 09:28:40.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:28:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2266: 305 pgs: 305 active+clean; 218 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.0 MiB/s wr, 97 op/s
Oct 14 05:28:40 np0005486808 podman[394682]: 2025-10-14 09:28:40.699157124 +0000 UTC m=+0.101556160 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:28:40 np0005486808 podman[394683]: 2025-10-14 09:28:40.72446963 +0000 UTC m=+0.123790550 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 05:28:40 np0005486808 nova_compute[259627]: 2025-10-14 09:28:40.880 2 DEBUG nova.network.neutron [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Successfully updated port: 3550cf12-50e7-4809-9e33-8057ba120200 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 14 05:28:40 np0005486808 nova_compute[259627]: 2025-10-14 09:28:40.910 2 DEBUG oslo_concurrency.lockutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "refresh_cache-fb3db81b-4d6f-4736-9d4b-b1900fad6488" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 05:28:40 np0005486808 nova_compute[259627]: 2025-10-14 09:28:40.910 2 DEBUG oslo_concurrency.lockutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquired lock "refresh_cache-fb3db81b-4d6f-4736-9d4b-b1900fad6488" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 05:28:40 np0005486808 nova_compute[259627]: 2025-10-14 09:28:40.910 2 DEBUG nova.network.neutron [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 14 05:28:41 np0005486808 nova_compute[259627]: 2025-10-14 09:28:41.032 2 DEBUG nova.compute.manager [req-ceaf4437-72c8-4ff0-8e35-61ebd51f9f22 req-15f3d7ec-1a62-4f07-8bac-874d891c436c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Received event network-changed-3550cf12-50e7-4809-9e33-8057ba120200 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:28:41 np0005486808 nova_compute[259627]: 2025-10-14 09:28:41.033 2 DEBUG nova.compute.manager [req-ceaf4437-72c8-4ff0-8e35-61ebd51f9f22 req-15f3d7ec-1a62-4f07-8bac-874d891c436c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Refreshing instance network info cache due to event network-changed-3550cf12-50e7-4809-9e33-8057ba120200. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 05:28:41 np0005486808 nova_compute[259627]: 2025-10-14 09:28:41.033 2 DEBUG oslo_concurrency.lockutils [req-ceaf4437-72c8-4ff0-8e35-61ebd51f9f22 req-15f3d7ec-1a62-4f07-8bac-874d891c436c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-fb3db81b-4d6f-4736-9d4b-b1900fad6488" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 05:28:41 np0005486808 nova_compute[259627]: 2025-10-14 09:28:41.074 2 DEBUG nova.network.neutron [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 14 05:28:41 np0005486808 nova_compute[259627]: 2025-10-14 09:28:41.233 2 INFO nova.compute.manager [None req-205ddce6-9c36-481e-a7be-7598671b08c9 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Get console output
Oct 14 05:28:41 np0005486808 nova_compute[259627]: 2025-10-14 09:28:41.241 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 14 05:28:42 np0005486808 nova_compute[259627]: 2025-10-14 09:28:42.111 2 DEBUG nova.network.neutron [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Updating instance_info_cache with network_info: [{"id": "3550cf12-50e7-4809-9e33-8057ba120200", "address": "fa:16:3e:64:be:04", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3550cf12-50", "ovs_interfaceid": "3550cf12-50e7-4809-9e33-8057ba120200", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 05:28:42 np0005486808 nova_compute[259627]: 2025-10-14 09:28:42.137 2 DEBUG oslo_concurrency.lockutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Releasing lock "refresh_cache-fb3db81b-4d6f-4736-9d4b-b1900fad6488" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 05:28:42 np0005486808 nova_compute[259627]: 2025-10-14 09:28:42.137 2 DEBUG nova.compute.manager [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Instance network_info: |[{"id": "3550cf12-50e7-4809-9e33-8057ba120200", "address": "fa:16:3e:64:be:04", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3550cf12-50", "ovs_interfaceid": "3550cf12-50e7-4809-9e33-8057ba120200", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 05:28:42 np0005486808 nova_compute[259627]: 2025-10-14 09:28:42.138 2 DEBUG oslo_concurrency.lockutils [req-ceaf4437-72c8-4ff0-8e35-61ebd51f9f22 req-15f3d7ec-1a62-4f07-8bac-874d891c436c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-fb3db81b-4d6f-4736-9d4b-b1900fad6488" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 05:28:42 np0005486808 nova_compute[259627]: 2025-10-14 09:28:42.138 2 DEBUG nova.network.neutron [req-ceaf4437-72c8-4ff0-8e35-61ebd51f9f22 req-15f3d7ec-1a62-4f07-8bac-874d891c436c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Refreshing network info cache for port 3550cf12-50e7-4809-9e33-8057ba120200 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 05:28:42 np0005486808 nova_compute[259627]: 2025-10-14 09:28:42.144 2 DEBUG nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Start _get_guest_xml network_info=[{"id": "3550cf12-50e7-4809-9e33-8057ba120200", "address": "fa:16:3e:64:be:04", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3550cf12-50", "ovs_interfaceid": "3550cf12-50e7-4809-9e33-8057ba120200", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 05:28:42 np0005486808 nova_compute[259627]: 2025-10-14 09:28:42.150 2 WARNING nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 05:28:42 np0005486808 nova_compute[259627]: 2025-10-14 09:28:42.156 2 DEBUG nova.virt.libvirt.host [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 05:28:42 np0005486808 nova_compute[259627]: 2025-10-14 09:28:42.157 2 DEBUG nova.virt.libvirt.host [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 05:28:42 np0005486808 nova_compute[259627]: 2025-10-14 09:28:42.161 2 DEBUG nova.virt.libvirt.host [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 05:28:42 np0005486808 nova_compute[259627]: 2025-10-14 09:28:42.162 2 DEBUG nova.virt.libvirt.host [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 05:28:42 np0005486808 nova_compute[259627]: 2025-10-14 09:28:42.163 2 DEBUG nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 05:28:42 np0005486808 nova_compute[259627]: 2025-10-14 09:28:42.163 2 DEBUG nova.virt.hardware [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 05:28:42 np0005486808 nova_compute[259627]: 2025-10-14 09:28:42.164 2 DEBUG nova.virt.hardware [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 05:28:42 np0005486808 nova_compute[259627]: 2025-10-14 09:28:42.165 2 DEBUG nova.virt.hardware [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 05:28:42 np0005486808 nova_compute[259627]: 2025-10-14 09:28:42.165 2 DEBUG nova.virt.hardware [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 05:28:42 np0005486808 nova_compute[259627]: 2025-10-14 09:28:42.165 2 DEBUG nova.virt.hardware [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 05:28:42 np0005486808 nova_compute[259627]: 2025-10-14 09:28:42.166 2 DEBUG nova.virt.hardware [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 05:28:42 np0005486808 nova_compute[259627]: 2025-10-14 09:28:42.166 2 DEBUG nova.virt.hardware [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 05:28:42 np0005486808 nova_compute[259627]: 2025-10-14 09:28:42.166 2 DEBUG nova.virt.hardware [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 05:28:42 np0005486808 nova_compute[259627]: 2025-10-14 09:28:42.167 2 DEBUG nova.virt.hardware [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 05:28:42 np0005486808 nova_compute[259627]: 2025-10-14 09:28:42.167 2 DEBUG nova.virt.hardware [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 05:28:42 np0005486808 nova_compute[259627]: 2025-10-14 09:28:42.167 2 DEBUG nova.virt.hardware [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 05:28:42 np0005486808 nova_compute[259627]: 2025-10-14 09:28:42.172 2 DEBUG oslo_concurrency.processutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:28:42 np0005486808 ovn_controller[152662]: 2025-10-14T09:28:42Z|01404|binding|INFO|Releasing lport b64d84a0-5356-4297-8228-85522d087442 from this chassis (sb_readonly=0)
Oct 14 05:28:42 np0005486808 ovn_controller[152662]: 2025-10-14T09:28:42Z|01405|binding|INFO|Releasing lport e4065da2-8191-4cbc-a6ed-0505ac5ea1c6 from this chassis (sb_readonly=0)
Oct 14 05:28:42 np0005486808 nova_compute[259627]: 2025-10-14 09:28:42.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:28:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2267: 305 pgs: 305 active+clean; 246 MiB data, 1018 MiB used, 59 GiB / 60 GiB avail; 937 KiB/s rd, 3.9 MiB/s wr, 109 op/s
Oct 14 05:28:42 np0005486808 nova_compute[259627]: 2025-10-14 09:28:42.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:28:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:28:42 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1855327202' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:28:42 np0005486808 nova_compute[259627]: 2025-10-14 09:28:42.688 2 DEBUG oslo_concurrency.processutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:28:42 np0005486808 nova_compute[259627]: 2025-10-14 09:28:42.725 2 DEBUG nova.storage.rbd_utils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image fb3db81b-4d6f-4736-9d4b-b1900fad6488_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:28:42 np0005486808 nova_compute[259627]: 2025-10-14 09:28:42.731 2 DEBUG oslo_concurrency.processutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:28:42 np0005486808 nova_compute[259627]: 2025-10-14 09:28:42.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:28:42 np0005486808 nova_compute[259627]: 2025-10-14 09:28:42.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.009 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.009 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.009 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.010 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.010 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:28:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:28:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:28:43 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2038311572' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.162 2 DEBUG oslo_concurrency.processutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.164 2 DEBUG nova.virt.libvirt.vif [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:28:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-gen-0-1042294944',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-gen-0-1042294944',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ge',id=132,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFHCa/sppIAnQAQkdZ/1Ef+DTNecrrS5rhEQtzPrxlFpoBPEHdQG0Rr7ARx/izPzPbjo5rPoXWm8ksRfiQ+ieFxBsihNkHMZrgGUaPkR43e+YFaxNpM2eAk15miiuIGX3Q==',key_name='tempest-TestSecurityGroupsBasicOps-1939533280',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-jr53qs99',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:28:38Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=fb3db81b-4d6f-4736-9d4b-b1900fad6488,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3550cf12-50e7-4809-9e33-8057ba120200", "address": "fa:16:3e:64:be:04", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3550cf12-50", "ovs_interfaceid": "3550cf12-50e7-4809-9e33-8057ba120200", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.164 2 DEBUG nova.network.os_vif_util [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "3550cf12-50e7-4809-9e33-8057ba120200", "address": "fa:16:3e:64:be:04", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3550cf12-50", "ovs_interfaceid": "3550cf12-50e7-4809-9e33-8057ba120200", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.165 2 DEBUG nova.network.os_vif_util [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:be:04,bridge_name='br-int',has_traffic_filtering=True,id=3550cf12-50e7-4809-9e33-8057ba120200,network=Network(f09a704d-6063-4e40-b690-c967cd364b32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3550cf12-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.166 2 DEBUG nova.objects.instance [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'pci_devices' on Instance uuid fb3db81b-4d6f-4736-9d4b-b1900fad6488 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.190 2 DEBUG nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:28:43 np0005486808 nova_compute[259627]:  <uuid>fb3db81b-4d6f-4736-9d4b-b1900fad6488</uuid>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:  <name>instance-00000084</name>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:28:43 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-gen-0-1042294944</nova:name>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:28:42</nova:creationTime>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:28:43 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:        <nova:user uuid="20f3546ab30e42b5b641f67780316750">tempest-TestSecurityGroupsBasicOps-1327646173-project-member</nova:user>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:        <nova:project uuid="5f754bf649a2404fa8dee732f5aab36e">tempest-TestSecurityGroupsBasicOps-1327646173</nova:project>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:        <nova:port uuid="3550cf12-50e7-4809-9e33-8057ba120200">
Oct 14 05:28:43 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:      <entry name="serial">fb3db81b-4d6f-4736-9d4b-b1900fad6488</entry>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:      <entry name="uuid">fb3db81b-4d6f-4736-9d4b-b1900fad6488</entry>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:28:43 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/fb3db81b-4d6f-4736-9d4b-b1900fad6488_disk">
Oct 14 05:28:43 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:28:43 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:28:43 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/fb3db81b-4d6f-4736-9d4b-b1900fad6488_disk.config">
Oct 14 05:28:43 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:28:43 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:28:43 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:64:be:04"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:      <target dev="tap3550cf12-50"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:28:43 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/fb3db81b-4d6f-4736-9d4b-b1900fad6488/console.log" append="off"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:28:43 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:28:43 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:28:43 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:28:43 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:28:43 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.192 2 DEBUG nova.compute.manager [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Preparing to wait for external event network-vif-plugged-3550cf12-50e7-4809-9e33-8057ba120200 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.192 2 DEBUG oslo_concurrency.lockutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.193 2 DEBUG oslo_concurrency.lockutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.193 2 DEBUG oslo_concurrency.lockutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.194 2 DEBUG nova.virt.libvirt.vif [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:28:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-gen-0-1042294944',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-gen-0-1042294944',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ge',id=132,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFHCa/sppIAnQAQkdZ/1Ef+DTNecrrS5rhEQtzPrxlFpoBPEHdQG0Rr7ARx/izPzPbjo5rPoXWm8ksRfiQ+ieFxBsihNkHMZrgGUaPkR43e+YFaxNpM2eAk15miiuIGX3Q==',key_name='tempest-TestSecurityGroupsBasicOps-1939533280',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-jr53qs99',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:28:38Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=fb3db81b-4d6f-4736-9d4b-b1900fad6488,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3550cf12-50e7-4809-9e33-8057ba120200", "address": "fa:16:3e:64:be:04", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3550cf12-50", "ovs_interfaceid": "3550cf12-50e7-4809-9e33-8057ba120200", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.194 2 DEBUG nova.network.os_vif_util [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "3550cf12-50e7-4809-9e33-8057ba120200", "address": "fa:16:3e:64:be:04", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3550cf12-50", "ovs_interfaceid": "3550cf12-50e7-4809-9e33-8057ba120200", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.195 2 DEBUG nova.network.os_vif_util [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:be:04,bridge_name='br-int',has_traffic_filtering=True,id=3550cf12-50e7-4809-9e33-8057ba120200,network=Network(f09a704d-6063-4e40-b690-c967cd364b32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3550cf12-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.196 2 DEBUG os_vif [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:be:04,bridge_name='br-int',has_traffic_filtering=True,id=3550cf12-50e7-4809-9e33-8057ba120200,network=Network(f09a704d-6063-4e40-b690-c967cd364b32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3550cf12-50') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.197 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.197 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.202 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3550cf12-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.203 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3550cf12-50, col_values=(('external_ids', {'iface-id': '3550cf12-50e7-4809-9e33-8057ba120200', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:64:be:04', 'vm-uuid': 'fb3db81b-4d6f-4736-9d4b-b1900fad6488'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:28:43 np0005486808 NetworkManager[44885]: <info>  [1760434123.2444] manager: (tap3550cf12-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/574)
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.253 2 INFO os_vif [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:be:04,bridge_name='br-int',has_traffic_filtering=True,id=3550cf12-50e7-4809-9e33-8057ba120200,network=Network(f09a704d-6063-4e40-b690-c967cd364b32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3550cf12-50')#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.324 2 DEBUG nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.325 2 DEBUG nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.325 2 DEBUG nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No VIF found with MAC fa:16:3e:64:be:04, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.326 2 INFO nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Using config drive#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.349 2 DEBUG nova.storage.rbd_utils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image fb3db81b-4d6f-4736-9d4b-b1900fad6488_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:28:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:28:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:28:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:28:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:28:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0018621902878166309 of space, bias 1.0, pg target 0.5586570863449892 quantized to 32 (current 32)
Oct 14 05:28:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:28:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:28:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:28:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:28:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:28:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 05:28:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:28:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:28:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:28:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:28:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:28:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:28:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:28:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:28:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:28:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:28:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:28:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:28:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:28:43 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4193183784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.483 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.503 2 INFO nova.compute.manager [None req-46a2b89b-47bc-4ae1-bdd1-d444b0552f4f 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Get console output#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.508 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.569 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.569 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.577 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.577 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.582 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.582 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.835 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.836 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3254MB free_disk=59.87657165527344GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.836 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.837 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.859 2 DEBUG nova.network.neutron [req-ceaf4437-72c8-4ff0-8e35-61ebd51f9f22 req-15f3d7ec-1a62-4f07-8bac-874d891c436c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Updated VIF entry in instance network info cache for port 3550cf12-50e7-4809-9e33-8057ba120200. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.860 2 DEBUG nova.network.neutron [req-ceaf4437-72c8-4ff0-8e35-61ebd51f9f22 req-15f3d7ec-1a62-4f07-8bac-874d891c436c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Updating instance_info_cache with network_info: [{"id": "3550cf12-50e7-4809-9e33-8057ba120200", "address": "fa:16:3e:64:be:04", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3550cf12-50", "ovs_interfaceid": "3550cf12-50e7-4809-9e33-8057ba120200", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.886 2 DEBUG oslo_concurrency.lockutils [req-ceaf4437-72c8-4ff0-8e35-61ebd51f9f22 req-15f3d7ec-1a62-4f07-8bac-874d891c436c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-fb3db81b-4d6f-4736-9d4b-b1900fad6488" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.966 2 INFO nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Creating config drive at /var/lib/nova/instances/fb3db81b-4d6f-4736-9d4b-b1900fad6488/disk.config#033[00m
Oct 14 05:28:43 np0005486808 nova_compute[259627]: 2025-10-14 09:28:43.973 2 DEBUG oslo_concurrency.processutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fb3db81b-4d6f-4736-9d4b-b1900fad6488/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzqwzpkb5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:28:44 np0005486808 nova_compute[259627]: 2025-10-14 09:28:44.034 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 7a110a3c-a2ca-4314-a190-28a4505cc26c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:28:44 np0005486808 nova_compute[259627]: 2025-10-14 09:28:44.034 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 9314cb71-9b9f-4379-90ba-61445b09c003 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:28:44 np0005486808 nova_compute[259627]: 2025-10-14 09:28:44.035 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance fb3db81b-4d6f-4736-9d4b-b1900fad6488 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:28:44 np0005486808 nova_compute[259627]: 2025-10-14 09:28:44.035 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:28:44 np0005486808 nova_compute[259627]: 2025-10-14 09:28:44.035 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:28:44 np0005486808 nova_compute[259627]: 2025-10-14 09:28:44.128 2 DEBUG oslo_concurrency.processutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fb3db81b-4d6f-4736-9d4b-b1900fad6488/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzqwzpkb5" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:28:44 np0005486808 nova_compute[259627]: 2025-10-14 09:28:44.162 2 DEBUG nova.storage.rbd_utils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image fb3db81b-4d6f-4736-9d4b-b1900fad6488_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:28:44 np0005486808 nova_compute[259627]: 2025-10-14 09:28:44.166 2 DEBUG oslo_concurrency.processutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fb3db81b-4d6f-4736-9d4b-b1900fad6488/disk.config fb3db81b-4d6f-4736-9d4b-b1900fad6488_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:28:44 np0005486808 nova_compute[259627]: 2025-10-14 09:28:44.272 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:28:44 np0005486808 nova_compute[259627]: 2025-10-14 09:28:44.381 2 DEBUG oslo_concurrency.processutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fb3db81b-4d6f-4736-9d4b-b1900fad6488/disk.config fb3db81b-4d6f-4736-9d4b-b1900fad6488_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.215s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:28:44 np0005486808 nova_compute[259627]: 2025-10-14 09:28:44.383 2 INFO nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Deleting local config drive /var/lib/nova/instances/fb3db81b-4d6f-4736-9d4b-b1900fad6488/disk.config because it was imported into RBD.#033[00m
Oct 14 05:28:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2268: 305 pgs: 305 active+clean; 246 MiB data, 1018 MiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Oct 14 05:28:44 np0005486808 nova_compute[259627]: 2025-10-14 09:28:44.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:28:44 np0005486808 NetworkManager[44885]: <info>  [1760434124.4969] manager: (tap3550cf12-50): new Tun device (/org/freedesktop/NetworkManager/Devices/575)
Oct 14 05:28:44 np0005486808 kernel: tap3550cf12-50: entered promiscuous mode
Oct 14 05:28:44 np0005486808 ovn_controller[152662]: 2025-10-14T09:28:44Z|01406|binding|INFO|Claiming lport 3550cf12-50e7-4809-9e33-8057ba120200 for this chassis.
Oct 14 05:28:44 np0005486808 ovn_controller[152662]: 2025-10-14T09:28:44Z|01407|binding|INFO|3550cf12-50e7-4809-9e33-8057ba120200: Claiming fa:16:3e:64:be:04 10.100.0.7
Oct 14 05:28:44 np0005486808 nova_compute[259627]: 2025-10-14 09:28:44.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:28:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:44.511 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:be:04 10.100.0.7'], port_security=['fa:16:3e:64:be:04 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'fb3db81b-4d6f-4736-9d4b-b1900fad6488', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f09a704d-6063-4e40-b690-c967cd364b32', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f754bf649a2404fa8dee732f5aab36e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e76d2fee-d8c5-45a1-ac1f-55a35976452c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1b90ca07-80d1-49c1-a91f-225f989dd9c6, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=3550cf12-50e7-4809-9e33-8057ba120200) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:28:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:44.512 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 3550cf12-50e7-4809-9e33-8057ba120200 in datapath f09a704d-6063-4e40-b690-c967cd364b32 bound to our chassis#033[00m
Oct 14 05:28:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:44.514 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f09a704d-6063-4e40-b690-c967cd364b32#033[00m
Oct 14 05:28:44 np0005486808 ovn_controller[152662]: 2025-10-14T09:28:44Z|01408|binding|INFO|Setting lport 3550cf12-50e7-4809-9e33-8057ba120200 ovn-installed in OVS
Oct 14 05:28:44 np0005486808 ovn_controller[152662]: 2025-10-14T09:28:44Z|01409|binding|INFO|Setting lport 3550cf12-50e7-4809-9e33-8057ba120200 up in Southbound
Oct 14 05:28:44 np0005486808 nova_compute[259627]: 2025-10-14 09:28:44.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:28:44 np0005486808 nova_compute[259627]: 2025-10-14 09:28:44.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:28:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:44.537 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1922984b-4c86-4a10-9454-5cc4faa7c374]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:44 np0005486808 systemd-udevd[395030]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:28:44 np0005486808 systemd-machined[214636]: New machine qemu-165-instance-00000084.
Oct 14 05:28:44 np0005486808 NetworkManager[44885]: <info>  [1760434124.5620] device (tap3550cf12-50): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:28:44 np0005486808 NetworkManager[44885]: <info>  [1760434124.5633] device (tap3550cf12-50): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:28:44 np0005486808 systemd[1]: Started Virtual Machine qemu-165-instance-00000084.
Oct 14 05:28:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:44.572 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[757d30c7-7da3-4645-a4d0-bb4ea97ea1f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:44.576 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[94a1461c-2c43-4453-81f2-8089ea6a3d94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:44 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:28:44 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:28:44 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:28:44 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:28:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:44.606 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[27a3c312-2699-4650-8364-adcf73d278d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:44 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:28:44 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:28:44 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 2b2e3883-e30c-4c78-9465-7a100b7c52c0 does not exist
Oct 14 05:28:44 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 9aa4f7a2-9c83-4a37-8685-205309344383 does not exist
Oct 14 05:28:44 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 8d42f651-2478-4f47-8b0f-7216def51906 does not exist
Oct 14 05:28:44 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:28:44 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:28:44 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:28:44 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:28:44 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:28:44 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:28:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:44.639 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b57ffaf9-cdcc-4944-8930-4eca8f4f9abf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf09a704d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:f0:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 397], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 785922, 'reachable_time': 22812, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395039, 'error': None, 'target': 'ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:44.662 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f12d4450-ff40-4060-ae11-099bb3c22460]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf09a704d-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 785940, 'tstamp': 785940}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395043, 'error': None, 'target': 'ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf09a704d-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 785944, 'tstamp': 785944}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395043, 'error': None, 'target': 'ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:44.665 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf09a704d-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:28:44 np0005486808 nova_compute[259627]: 2025-10-14 09:28:44.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:28:44 np0005486808 nova_compute[259627]: 2025-10-14 09:28:44.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:28:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:44.669 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf09a704d-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:28:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:44.669 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:28:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:44.670 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf09a704d-60, col_values=(('external_ids', {'iface-id': 'e4065da2-8191-4cbc-a6ed-0505ac5ea1c6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:28:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:44.671 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:28:44 np0005486808 nova_compute[259627]: 2025-10-14 09:28:44.724 2 INFO nova.compute.manager [None req-4079e86d-4772-4253-a7bb-e2158f196723 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Get console output#033[00m
Oct 14 05:28:44 np0005486808 nova_compute[259627]: 2025-10-14 09:28:44.742 20781 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 14 05:28:44 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:28:44 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1149852731' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:28:44 np0005486808 nova_compute[259627]: 2025-10-14 09:28:44.793 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:28:44 np0005486808 nova_compute[259627]: 2025-10-14 09:28:44.801 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:28:44 np0005486808 nova_compute[259627]: 2025-10-14 09:28:44.815 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:28:44 np0005486808 nova_compute[259627]: 2025-10-14 09:28:44.837 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:28:44 np0005486808 nova_compute[259627]: 2025-10-14 09:28:44.838 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:28:45 np0005486808 nova_compute[259627]: 2025-10-14 09:28:45.036 2 DEBUG nova.compute.manager [req-3368ca70-58eb-4720-b17d-9d6ff79c5f89 req-1fa1f65c-c69e-435e-8334-ab963be8e77e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Received event network-vif-plugged-3550cf12-50e7-4809-9e33-8057ba120200 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:28:45 np0005486808 nova_compute[259627]: 2025-10-14 09:28:45.036 2 DEBUG oslo_concurrency.lockutils [req-3368ca70-58eb-4720-b17d-9d6ff79c5f89 req-1fa1f65c-c69e-435e-8334-ab963be8e77e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:28:45 np0005486808 nova_compute[259627]: 2025-10-14 09:28:45.037 2 DEBUG oslo_concurrency.lockutils [req-3368ca70-58eb-4720-b17d-9d6ff79c5f89 req-1fa1f65c-c69e-435e-8334-ab963be8e77e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:28:45 np0005486808 nova_compute[259627]: 2025-10-14 09:28:45.037 2 DEBUG oslo_concurrency.lockutils [req-3368ca70-58eb-4720-b17d-9d6ff79c5f89 req-1fa1f65c-c69e-435e-8334-ab963be8e77e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:28:45 np0005486808 nova_compute[259627]: 2025-10-14 09:28:45.037 2 DEBUG nova.compute.manager [req-3368ca70-58eb-4720-b17d-9d6ff79c5f89 req-1fa1f65c-c69e-435e-8334-ab963be8e77e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Processing event network-vif-plugged-3550cf12-50e7-4809-9e33-8057ba120200 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:28:45 np0005486808 nova_compute[259627]: 2025-10-14 09:28:45.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:28:45 np0005486808 podman[395186]: 2025-10-14 09:28:45.537106378 +0000 UTC m=+0.072833591 container create ce88c42c3ce94b15e91309118178a5e68cfd70e58b000a7b071ad22f7e43acd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_visvesvaraya, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:28:45 np0005486808 systemd[1]: Started libpod-conmon-ce88c42c3ce94b15e91309118178a5e68cfd70e58b000a7b071ad22f7e43acd5.scope.
Oct 14 05:28:45 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:28:45 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:28:45 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:28:45 np0005486808 podman[395186]: 2025-10-14 09:28:45.506482221 +0000 UTC m=+0.042209454 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:28:45 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:28:45 np0005486808 podman[395186]: 2025-10-14 09:28:45.649527286 +0000 UTC m=+0.185254579 container init ce88c42c3ce94b15e91309118178a5e68cfd70e58b000a7b071ad22f7e43acd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_visvesvaraya, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 14 05:28:45 np0005486808 podman[395186]: 2025-10-14 09:28:45.662038495 +0000 UTC m=+0.197765728 container start ce88c42c3ce94b15e91309118178a5e68cfd70e58b000a7b071ad22f7e43acd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 14 05:28:45 np0005486808 podman[395186]: 2025-10-14 09:28:45.666269569 +0000 UTC m=+0.201996792 container attach ce88c42c3ce94b15e91309118178a5e68cfd70e58b000a7b071ad22f7e43acd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_visvesvaraya, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:28:45 np0005486808 systemd[1]: libpod-ce88c42c3ce94b15e91309118178a5e68cfd70e58b000a7b071ad22f7e43acd5.scope: Deactivated successfully.
Oct 14 05:28:45 np0005486808 exciting_visvesvaraya[395202]: 167 167
Oct 14 05:28:45 np0005486808 conmon[395202]: conmon ce88c42c3ce94b15e913 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ce88c42c3ce94b15e91309118178a5e68cfd70e58b000a7b071ad22f7e43acd5.scope/container/memory.events
Oct 14 05:28:45 np0005486808 podman[395186]: 2025-10-14 09:28:45.674346299 +0000 UTC m=+0.210073502 container died ce88c42c3ce94b15e91309118178a5e68cfd70e58b000a7b071ad22f7e43acd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_visvesvaraya, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Oct 14 05:28:45 np0005486808 systemd[1]: var-lib-containers-storage-overlay-386c022e0d39d0307b71d4c188a70e173ba6ea69d27dc7e0d8a815240d96c3b8-merged.mount: Deactivated successfully.
Oct 14 05:28:45 np0005486808 podman[395186]: 2025-10-14 09:28:45.725912743 +0000 UTC m=+0.261639946 container remove ce88c42c3ce94b15e91309118178a5e68cfd70e58b000a7b071ad22f7e43acd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_visvesvaraya, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:28:45 np0005486808 systemd[1]: libpod-conmon-ce88c42c3ce94b15e91309118178a5e68cfd70e58b000a7b071ad22f7e43acd5.scope: Deactivated successfully.
Oct 14 05:28:45 np0005486808 nova_compute[259627]: 2025-10-14 09:28:45.832 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:28:45 np0005486808 nova_compute[259627]: 2025-10-14 09:28:45.834 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:28:45 np0005486808 podman[395244]: 2025-10-14 09:28:45.920654434 +0000 UTC m=+0.034956285 container create ece047ab51f8f5709c18e2e6b736464c99cde7f6e7d172fa346187bbc9f484d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_lalande, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:28:45 np0005486808 systemd[1]: Started libpod-conmon-ece047ab51f8f5709c18e2e6b736464c99cde7f6e7d172fa346187bbc9f484d3.scope.
Oct 14 05:28:46 np0005486808 podman[395244]: 2025-10-14 09:28:45.907189091 +0000 UTC m=+0.021490962 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:28:46 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:28:46 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32733c7b40182ce3455f5cced0eafb6260546af5ac2a361372827fbd6b09f413/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:28:46 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32733c7b40182ce3455f5cced0eafb6260546af5ac2a361372827fbd6b09f413/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:28:46 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32733c7b40182ce3455f5cced0eafb6260546af5ac2a361372827fbd6b09f413/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:28:46 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32733c7b40182ce3455f5cced0eafb6260546af5ac2a361372827fbd6b09f413/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:28:46 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32733c7b40182ce3455f5cced0eafb6260546af5ac2a361372827fbd6b09f413/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:28:46 np0005486808 podman[395244]: 2025-10-14 09:28:46.04228091 +0000 UTC m=+0.156582781 container init ece047ab51f8f5709c18e2e6b736464c99cde7f6e7d172fa346187bbc9f484d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 14 05:28:46 np0005486808 podman[395244]: 2025-10-14 09:28:46.048629176 +0000 UTC m=+0.162931067 container start ece047ab51f8f5709c18e2e6b736464c99cde7f6e7d172fa346187bbc9f484d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Oct 14 05:28:46 np0005486808 podman[395244]: 2025-10-14 09:28:46.05240506 +0000 UTC m=+0.166706901 container attach ece047ab51f8f5709c18e2e6b736464c99cde7f6e7d172fa346187bbc9f484d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_lalande, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.087 2 DEBUG nova.compute.manager [req-5501dbab-95aa-4889-a1c2-c44edeaf3d2c req-b831670b-bff0-438f-9701-3bbbee812369 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Received event network-changed-5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.088 2 DEBUG nova.compute.manager [req-5501dbab-95aa-4889-a1c2-c44edeaf3d2c req-b831670b-bff0-438f-9701-3bbbee812369 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Refreshing instance network info cache due to event network-changed-5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.088 2 DEBUG oslo_concurrency.lockutils [req-5501dbab-95aa-4889-a1c2-c44edeaf3d2c req-b831670b-bff0-438f-9701-3bbbee812369 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-9314cb71-9b9f-4379-90ba-61445b09c003" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.088 2 DEBUG oslo_concurrency.lockutils [req-5501dbab-95aa-4889-a1c2-c44edeaf3d2c req-b831670b-bff0-438f-9701-3bbbee812369 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-9314cb71-9b9f-4379-90ba-61445b09c003" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.088 2 DEBUG nova.network.neutron [req-5501dbab-95aa-4889-a1c2-c44edeaf3d2c req-b831670b-bff0-438f-9701-3bbbee812369 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Refreshing network info cache for port 5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.183 2 DEBUG oslo_concurrency.lockutils [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "9314cb71-9b9f-4379-90ba-61445b09c003" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.184 2 DEBUG oslo_concurrency.lockutils [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "9314cb71-9b9f-4379-90ba-61445b09c003" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.184 2 DEBUG oslo_concurrency.lockutils [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "9314cb71-9b9f-4379-90ba-61445b09c003-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.185 2 DEBUG oslo_concurrency.lockutils [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "9314cb71-9b9f-4379-90ba-61445b09c003-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.185 2 DEBUG oslo_concurrency.lockutils [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "9314cb71-9b9f-4379-90ba-61445b09c003-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.186 2 INFO nova.compute.manager [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Terminating instance#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.187 2 DEBUG nova.compute.manager [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:28:46 np0005486808 kernel: tap5a63e80f-6b (unregistering): left promiscuous mode
Oct 14 05:28:46 np0005486808 NetworkManager[44885]: <info>  [1760434126.2577] device (tap5a63e80f-6b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:28:46 np0005486808 ovn_controller[152662]: 2025-10-14T09:28:46Z|01410|binding|INFO|Releasing lport 5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa from this chassis (sb_readonly=0)
Oct 14 05:28:46 np0005486808 ovn_controller[152662]: 2025-10-14T09:28:46Z|01411|binding|INFO|Setting lport 5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa down in Southbound
Oct 14 05:28:46 np0005486808 ovn_controller[152662]: 2025-10-14T09:28:46Z|01412|binding|INFO|Removing iface tap5a63e80f-6b ovn-installed in OVS
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:28:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:46.282 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:0f:a6 10.100.0.12'], port_security=['fa:16:3e:a9:0f:a6 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '9314cb71-9b9f-4379-90ba-61445b09c003', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0ad5af7c-fbad-46b1-979b-db7d2639a7c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4562699bdb1548f1bb36819107535620', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ee202a2d-dda0-40f4-92dc-f1908630878a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42808c73-a7f9-4337-928b-894f78f53e75, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:28:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:46.284 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa in datapath 0ad5af7c-fbad-46b1-979b-db7d2639a7c3 unbound from our chassis#033[00m
Oct 14 05:28:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:46.287 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0ad5af7c-fbad-46b1-979b-db7d2639a7c3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:28:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:46.289 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[078159ee-9aba-4770-891b-2c67904b2cdd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:46.290 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3 namespace which is not needed anymore#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:28:46 np0005486808 systemd[1]: machine-qemu\x2d164\x2dinstance\x2d00000083.scope: Deactivated successfully.
Oct 14 05:28:46 np0005486808 systemd[1]: machine-qemu\x2d164\x2dinstance\x2d00000083.scope: Consumed 13.228s CPU time.
Oct 14 05:28:46 np0005486808 systemd-machined[214636]: Machine qemu-164-instance-00000083 terminated.
Oct 14 05:28:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2269: 305 pgs: 305 active+clean; 246 MiB data, 1018 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 96 op/s
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.427 2 INFO nova.virt.libvirt.driver [-] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Instance destroyed successfully.#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.428 2 DEBUG nova.objects.instance [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lazy-loading 'resources' on Instance uuid 9314cb71-9b9f-4379-90ba-61445b09c003 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:28:46 np0005486808 neutron-haproxy-ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3[394479]: [NOTICE]   (394483) : haproxy version is 2.8.14-c23fe91
Oct 14 05:28:46 np0005486808 neutron-haproxy-ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3[394479]: [NOTICE]   (394483) : path to executable is /usr/sbin/haproxy
Oct 14 05:28:46 np0005486808 neutron-haproxy-ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3[394479]: [WARNING]  (394483) : Exiting Master process...
Oct 14 05:28:46 np0005486808 neutron-haproxy-ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3[394479]: [ALERT]    (394483) : Current worker (394485) exited with code 143 (Terminated)
Oct 14 05:28:46 np0005486808 neutron-haproxy-ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3[394479]: [WARNING]  (394483) : All workers exited. Exiting... (0)
Oct 14 05:28:46 np0005486808 systemd[1]: libpod-de148a123f8991f5ef3217343d7bc0a7b6e8802ccc588c7f0e22f031aec8154c.scope: Deactivated successfully.
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.475 2 DEBUG nova.virt.libvirt.vif [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:28:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-575903537',display_name='tempest-TestNetworkBasicOps-server-575903537',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-575903537',id=131,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMZNK+Wbj9xAsgpWQ+tziM8jRjajF62ZTB5kTzrK/xn8p7FnYdtDdeLVfLEHTxjGjbKXIgJlw92Cf9a6ZSykZsO+5buce9Y3MqTSkOEHFe9bsgLvMZ76ONZmB8tSQVQONA==',key_name='tempest-TestNetworkBasicOps-1689754690',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:28:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4562699bdb1548f1bb36819107535620',ramdisk_id='',reservation_id='r-eaasug7b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-2096526217',owner_user_name='tempest-TestNetworkBasicOps-2096526217-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:28:24Z,user_data=None,user_id='10926c27278e45e7b995f2e53b9d16f9',uuid=9314cb71-9b9f-4379-90ba-61445b09c003,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "address": "fa:16:3e:a9:0f:a6", "network": {"id": "0ad5af7c-fbad-46b1-979b-db7d2639a7c3", "bridge": "br-int", "label": "tempest-network-smoke--2010721578", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a63e80f-6b", "ovs_interfaceid": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.476 2 DEBUG nova.network.os_vif_util [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converting VIF {"id": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "address": "fa:16:3e:a9:0f:a6", "network": {"id": "0ad5af7c-fbad-46b1-979b-db7d2639a7c3", "bridge": "br-int", "label": "tempest-network-smoke--2010721578", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a63e80f-6b", "ovs_interfaceid": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.477 2 DEBUG nova.network.os_vif_util [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a9:0f:a6,bridge_name='br-int',has_traffic_filtering=True,id=5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa,network=Network(0ad5af7c-fbad-46b1-979b-db7d2639a7c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a63e80f-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.478 2 DEBUG os_vif [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a9:0f:a6,bridge_name='br-int',has_traffic_filtering=True,id=5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa,network=Network(0ad5af7c-fbad-46b1-979b-db7d2639a7c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a63e80f-6b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:28:46 np0005486808 podman[395310]: 2025-10-14 09:28:46.481428221 +0000 UTC m=+0.071176650 container died de148a123f8991f5ef3217343d7bc0a7b6e8802ccc588c7f0e22f031aec8154c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.482 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5a63e80f-6b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.490 2 INFO os_vif [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a9:0f:a6,bridge_name='br-int',has_traffic_filtering=True,id=5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa,network=Network(0ad5af7c-fbad-46b1-979b-db7d2639a7c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a63e80f-6b')#033[00m
Oct 14 05:28:46 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-de148a123f8991f5ef3217343d7bc0a7b6e8802ccc588c7f0e22f031aec8154c-userdata-shm.mount: Deactivated successfully.
Oct 14 05:28:46 np0005486808 systemd[1]: var-lib-containers-storage-overlay-f023d5bb6e112cd231a0a14f750e65530a9849b6d39894261763a8809936d7ae-merged.mount: Deactivated successfully.
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.526 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434126.5262241, fb3db81b-4d6f-4736-9d4b-b1900fad6488 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.527 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] VM Started (Lifecycle Event)#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.529 2 DEBUG nova.compute.manager [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:28:46 np0005486808 podman[395310]: 2025-10-14 09:28:46.541427393 +0000 UTC m=+0.131175802 container cleanup de148a123f8991f5ef3217343d7bc0a7b6e8802ccc588c7f0e22f031aec8154c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 05:28:46 np0005486808 systemd[1]: libpod-conmon-de148a123f8991f5ef3217343d7bc0a7b6e8802ccc588c7f0e22f031aec8154c.scope: Deactivated successfully.
Oct 14 05:28:46 np0005486808 podman[395366]: 2025-10-14 09:28:46.630828262 +0000 UTC m=+0.058167558 container remove de148a123f8991f5ef3217343d7bc0a7b6e8802ccc588c7f0e22f031aec8154c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 05:28:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:46.637 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9bd56a2f-e248-4950-833d-3ce7f1561ed0]: (4, ('Tue Oct 14 09:28:46 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3 (de148a123f8991f5ef3217343d7bc0a7b6e8802ccc588c7f0e22f031aec8154c)\nde148a123f8991f5ef3217343d7bc0a7b6e8802ccc588c7f0e22f031aec8154c\nTue Oct 14 09:28:46 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3 (de148a123f8991f5ef3217343d7bc0a7b6e8802ccc588c7f0e22f031aec8154c)\nde148a123f8991f5ef3217343d7bc0a7b6e8802ccc588c7f0e22f031aec8154c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:46.643 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[975de803-0474-423f-acf2-96cc759c1200]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:46.644 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ad5af7c-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:28:46 np0005486808 kernel: tap0ad5af7c-f0: left promiscuous mode
Oct 14 05:28:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:46.670 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[52b369af-d0dc-4819-8443-f381bd43be2c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.687 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.687 2 DEBUG nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:28:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:46.687 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[61b659a5-72a5-4f26-bed7-e4177177a997]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:46.688 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ae6e7a35-fce1-416e-a81e-3471f3d91acc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.691 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.694 2 INFO nova.virt.libvirt.driver [-] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Instance spawned successfully.#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.694 2 DEBUG nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:28:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:46.708 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2c9f1d11-bee8-4000-96df-d785b67ba38e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 788231, 'reachable_time': 19998, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395380, 'error': None, 'target': 'ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:46 np0005486808 systemd[1]: run-netns-ovnmeta\x2d0ad5af7c\x2dfbad\x2d46b1\x2d979b\x2ddb7d2639a7c3.mount: Deactivated successfully.
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.714 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.714 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434126.5264158, fb3db81b-4d6f-4736-9d4b-b1900fad6488 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.714 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:28:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:46.715 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0ad5af7c-fbad-46b1-979b-db7d2639a7c3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:28:46 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:28:46.716 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[8a403ea4-825e-4077-a6ea-c0211f3d8cfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.719 2 DEBUG nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.720 2 DEBUG nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.720 2 DEBUG nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.721 2 DEBUG nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.721 2 DEBUG nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.721 2 DEBUG nova.virt.libvirt.driver [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.733 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.739 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434126.672971, fb3db81b-4d6f-4736-9d4b-b1900fad6488 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.739 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.777 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.782 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.801 2 INFO nova.compute.manager [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Took 8.12 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.801 2 DEBUG nova.compute.manager [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.836 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.873 2 INFO nova.compute.manager [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Took 9.12 seconds to build instance.#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.883 2 INFO nova.virt.libvirt.driver [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Deleting instance files /var/lib/nova/instances/9314cb71-9b9f-4379-90ba-61445b09c003_del#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.883 2 INFO nova.virt.libvirt.driver [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Deletion of /var/lib/nova/instances/9314cb71-9b9f-4379-90ba-61445b09c003_del complete#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.890 2 DEBUG oslo_concurrency.lockutils [None req-218c55bc-36b4-4cad-a22d-d7fdea3537ce 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.211s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.931 2 INFO nova.compute.manager [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Took 0.74 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.932 2 DEBUG oslo.service.loopingcall [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.932 2 DEBUG nova.compute.manager [-] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.932 2 DEBUG nova.network.neutron [-] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.977 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.977 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 05:28:46 np0005486808 nova_compute[259627]: 2025-10-14 09:28:46.992 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Oct 14 05:28:47 np0005486808 nova_compute[259627]: 2025-10-14 09:28:47.151 2 DEBUG nova.compute.manager [req-2ea41469-7920-4c2a-a14e-8e0c5ccd451d req-8f8b2081-9a02-4ae2-945a-0f20ff30e27f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Received event network-vif-plugged-3550cf12-50e7-4809-9e33-8057ba120200 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:28:47 np0005486808 nova_compute[259627]: 2025-10-14 09:28:47.152 2 DEBUG oslo_concurrency.lockutils [req-2ea41469-7920-4c2a-a14e-8e0c5ccd451d req-8f8b2081-9a02-4ae2-945a-0f20ff30e27f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:28:47 np0005486808 nova_compute[259627]: 2025-10-14 09:28:47.152 2 DEBUG oslo_concurrency.lockutils [req-2ea41469-7920-4c2a-a14e-8e0c5ccd451d req-8f8b2081-9a02-4ae2-945a-0f20ff30e27f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:28:47 np0005486808 nova_compute[259627]: 2025-10-14 09:28:47.152 2 DEBUG oslo_concurrency.lockutils [req-2ea41469-7920-4c2a-a14e-8e0c5ccd451d req-8f8b2081-9a02-4ae2-945a-0f20ff30e27f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:28:47 np0005486808 nova_compute[259627]: 2025-10-14 09:28:47.153 2 DEBUG nova.compute.manager [req-2ea41469-7920-4c2a-a14e-8e0c5ccd451d req-8f8b2081-9a02-4ae2-945a-0f20ff30e27f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] No waiting events found dispatching network-vif-plugged-3550cf12-50e7-4809-9e33-8057ba120200 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:28:47 np0005486808 nova_compute[259627]: 2025-10-14 09:28:47.153 2 WARNING nova.compute.manager [req-2ea41469-7920-4c2a-a14e-8e0c5ccd451d req-8f8b2081-9a02-4ae2-945a-0f20ff30e27f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Received unexpected event network-vif-plugged-3550cf12-50e7-4809-9e33-8057ba120200 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:28:47 np0005486808 nova_compute[259627]: 2025-10-14 09:28:47.153 2 DEBUG nova.compute.manager [req-2ea41469-7920-4c2a-a14e-8e0c5ccd451d req-8f8b2081-9a02-4ae2-945a-0f20ff30e27f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Received event network-vif-unplugged-5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:28:47 np0005486808 nova_compute[259627]: 2025-10-14 09:28:47.153 2 DEBUG oslo_concurrency.lockutils [req-2ea41469-7920-4c2a-a14e-8e0c5ccd451d req-8f8b2081-9a02-4ae2-945a-0f20ff30e27f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9314cb71-9b9f-4379-90ba-61445b09c003-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:28:47 np0005486808 nova_compute[259627]: 2025-10-14 09:28:47.153 2 DEBUG oslo_concurrency.lockutils [req-2ea41469-7920-4c2a-a14e-8e0c5ccd451d req-8f8b2081-9a02-4ae2-945a-0f20ff30e27f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9314cb71-9b9f-4379-90ba-61445b09c003-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:28:47 np0005486808 nova_compute[259627]: 2025-10-14 09:28:47.154 2 DEBUG oslo_concurrency.lockutils [req-2ea41469-7920-4c2a-a14e-8e0c5ccd451d req-8f8b2081-9a02-4ae2-945a-0f20ff30e27f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9314cb71-9b9f-4379-90ba-61445b09c003-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:28:47 np0005486808 nova_compute[259627]: 2025-10-14 09:28:47.154 2 DEBUG nova.compute.manager [req-2ea41469-7920-4c2a-a14e-8e0c5ccd451d req-8f8b2081-9a02-4ae2-945a-0f20ff30e27f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] No waiting events found dispatching network-vif-unplugged-5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:28:47 np0005486808 nova_compute[259627]: 2025-10-14 09:28:47.154 2 DEBUG nova.compute.manager [req-2ea41469-7920-4c2a-a14e-8e0c5ccd451d req-8f8b2081-9a02-4ae2-945a-0f20ff30e27f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Received event network-vif-unplugged-5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:28:47 np0005486808 nova_compute[259627]: 2025-10-14 09:28:47.154 2 DEBUG nova.compute.manager [req-2ea41469-7920-4c2a-a14e-8e0c5ccd451d req-8f8b2081-9a02-4ae2-945a-0f20ff30e27f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Received event network-vif-plugged-5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:28:47 np0005486808 nova_compute[259627]: 2025-10-14 09:28:47.154 2 DEBUG oslo_concurrency.lockutils [req-2ea41469-7920-4c2a-a14e-8e0c5ccd451d req-8f8b2081-9a02-4ae2-945a-0f20ff30e27f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9314cb71-9b9f-4379-90ba-61445b09c003-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:28:47 np0005486808 nova_compute[259627]: 2025-10-14 09:28:47.155 2 DEBUG oslo_concurrency.lockutils [req-2ea41469-7920-4c2a-a14e-8e0c5ccd451d req-8f8b2081-9a02-4ae2-945a-0f20ff30e27f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9314cb71-9b9f-4379-90ba-61445b09c003-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:28:47 np0005486808 nova_compute[259627]: 2025-10-14 09:28:47.155 2 DEBUG oslo_concurrency.lockutils [req-2ea41469-7920-4c2a-a14e-8e0c5ccd451d req-8f8b2081-9a02-4ae2-945a-0f20ff30e27f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9314cb71-9b9f-4379-90ba-61445b09c003-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:28:47 np0005486808 nova_compute[259627]: 2025-10-14 09:28:47.155 2 DEBUG nova.compute.manager [req-2ea41469-7920-4c2a-a14e-8e0c5ccd451d req-8f8b2081-9a02-4ae2-945a-0f20ff30e27f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] No waiting events found dispatching network-vif-plugged-5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:28:47 np0005486808 nova_compute[259627]: 2025-10-14 09:28:47.155 2 WARNING nova.compute.manager [req-2ea41469-7920-4c2a-a14e-8e0c5ccd451d req-8f8b2081-9a02-4ae2-945a-0f20ff30e27f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Received unexpected event network-vif-plugged-5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa for instance with vm_state active and task_state deleting.#033[00m
Oct 14 05:28:47 np0005486808 nova_compute[259627]: 2025-10-14 09:28:47.157 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-7a110a3c-a2ca-4314-a190-28a4505cc26c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:28:47 np0005486808 nova_compute[259627]: 2025-10-14 09:28:47.157 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-7a110a3c-a2ca-4314-a190-28a4505cc26c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:28:47 np0005486808 nova_compute[259627]: 2025-10-14 09:28:47.157 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 14 05:28:47 np0005486808 nova_compute[259627]: 2025-10-14 09:28:47.157 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7a110a3c-a2ca-4314-a190-28a4505cc26c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:28:47 np0005486808 peaceful_lalande[395284]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:28:47 np0005486808 peaceful_lalande[395284]: --> relative data size: 1.0
Oct 14 05:28:47 np0005486808 peaceful_lalande[395284]: --> All data devices are unavailable
Oct 14 05:28:47 np0005486808 systemd[1]: libpod-ece047ab51f8f5709c18e2e6b736464c99cde7f6e7d172fa346187bbc9f484d3.scope: Deactivated successfully.
Oct 14 05:28:47 np0005486808 podman[395244]: 2025-10-14 09:28:47.209816999 +0000 UTC m=+1.324118860 container died ece047ab51f8f5709c18e2e6b736464c99cde7f6e7d172fa346187bbc9f484d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_lalande, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 14 05:28:47 np0005486808 systemd[1]: var-lib-containers-storage-overlay-32733c7b40182ce3455f5cced0eafb6260546af5ac2a361372827fbd6b09f413-merged.mount: Deactivated successfully.
Oct 14 05:28:47 np0005486808 podman[395244]: 2025-10-14 09:28:47.265834703 +0000 UTC m=+1.380136554 container remove ece047ab51f8f5709c18e2e6b736464c99cde7f6e7d172fa346187bbc9f484d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_lalande, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:28:47 np0005486808 systemd[1]: libpod-conmon-ece047ab51f8f5709c18e2e6b736464c99cde7f6e7d172fa346187bbc9f484d3.scope: Deactivated successfully.
Oct 14 05:28:47 np0005486808 nova_compute[259627]: 2025-10-14 09:28:47.393 2 DEBUG nova.network.neutron [req-5501dbab-95aa-4889-a1c2-c44edeaf3d2c req-b831670b-bff0-438f-9701-3bbbee812369 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Updated VIF entry in instance network info cache for port 5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:28:47 np0005486808 nova_compute[259627]: 2025-10-14 09:28:47.394 2 DEBUG nova.network.neutron [req-5501dbab-95aa-4889-a1c2-c44edeaf3d2c req-b831670b-bff0-438f-9701-3bbbee812369 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Updating instance_info_cache with network_info: [{"id": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "address": "fa:16:3e:a9:0f:a6", "network": {"id": "0ad5af7c-fbad-46b1-979b-db7d2639a7c3", "bridge": "br-int", "label": "tempest-network-smoke--2010721578", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4562699bdb1548f1bb36819107535620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a63e80f-6b", "ovs_interfaceid": "5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:28:47 np0005486808 nova_compute[259627]: 2025-10-14 09:28:47.419 2 DEBUG oslo_concurrency.lockutils [req-5501dbab-95aa-4889-a1c2-c44edeaf3d2c req-b831670b-bff0-438f-9701-3bbbee812369 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-9314cb71-9b9f-4379-90ba-61445b09c003" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:28:47 np0005486808 nova_compute[259627]: 2025-10-14 09:28:47.581 2 DEBUG nova.network.neutron [-] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:28:47 np0005486808 nova_compute[259627]: 2025-10-14 09:28:47.601 2 INFO nova.compute.manager [-] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Took 0.67 seconds to deallocate network for instance.#033[00m
Oct 14 05:28:47 np0005486808 nova_compute[259627]: 2025-10-14 09:28:47.671 2 DEBUG oslo_concurrency.lockutils [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:28:47 np0005486808 nova_compute[259627]: 2025-10-14 09:28:47.672 2 DEBUG oslo_concurrency.lockutils [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:28:47 np0005486808 nova_compute[259627]: 2025-10-14 09:28:47.781 2 DEBUG oslo_concurrency.processutils [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:28:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:28:48 np0005486808 podman[395573]: 2025-10-14 09:28:48.042249658 +0000 UTC m=+0.068431762 container create ffeea182c969d631a3e8b872676fa879de0d45c6c9366c19d2fed9257ee86c82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:28:48 np0005486808 systemd[1]: Started libpod-conmon-ffeea182c969d631a3e8b872676fa879de0d45c6c9366c19d2fed9257ee86c82.scope.
Oct 14 05:28:48 np0005486808 podman[395573]: 2025-10-14 09:28:48.023628299 +0000 UTC m=+0.049810423 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:28:48 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:28:48 np0005486808 podman[395573]: 2025-10-14 09:28:48.151392655 +0000 UTC m=+0.177574839 container init ffeea182c969d631a3e8b872676fa879de0d45c6c9366c19d2fed9257ee86c82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_visvesvaraya, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:28:48 np0005486808 podman[395573]: 2025-10-14 09:28:48.165484263 +0000 UTC m=+0.191666377 container start ffeea182c969d631a3e8b872676fa879de0d45c6c9366c19d2fed9257ee86c82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_visvesvaraya, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:28:48 np0005486808 podman[395573]: 2025-10-14 09:28:48.169990095 +0000 UTC m=+0.196172329 container attach ffeea182c969d631a3e8b872676fa879de0d45c6c9366c19d2fed9257ee86c82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:28:48 np0005486808 brave_visvesvaraya[395595]: 167 167
Oct 14 05:28:48 np0005486808 systemd[1]: libpod-ffeea182c969d631a3e8b872676fa879de0d45c6c9366c19d2fed9257ee86c82.scope: Deactivated successfully.
Oct 14 05:28:48 np0005486808 podman[395600]: 2025-10-14 09:28:48.242891746 +0000 UTC m=+0.046329566 container died ffeea182c969d631a3e8b872676fa879de0d45c6c9366c19d2fed9257ee86c82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:28:48 np0005486808 systemd[1]: var-lib-containers-storage-overlay-38fb6f5d08bc56e428c0361843182f654c5429059ff0dd1c1cfe164013565ab0-merged.mount: Deactivated successfully.
Oct 14 05:28:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:28:48 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4000951975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:28:48 np0005486808 podman[395600]: 2025-10-14 09:28:48.283327625 +0000 UTC m=+0.086765395 container remove ffeea182c969d631a3e8b872676fa879de0d45c6c9366c19d2fed9257ee86c82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:28:48 np0005486808 systemd[1]: libpod-conmon-ffeea182c969d631a3e8b872676fa879de0d45c6c9366c19d2fed9257ee86c82.scope: Deactivated successfully.
Oct 14 05:28:48 np0005486808 nova_compute[259627]: 2025-10-14 09:28:48.299 2 DEBUG oslo_concurrency.processutils [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:28:48 np0005486808 nova_compute[259627]: 2025-10-14 09:28:48.309 2 DEBUG nova.compute.provider_tree [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 05:28:48 np0005486808 nova_compute[259627]: 2025-10-14 09:28:48.335 2 DEBUG nova.scheduler.client.report [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 05:28:48 np0005486808 nova_compute[259627]: 2025-10-14 09:28:48.352 2 DEBUG oslo_concurrency.lockutils [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:28:48 np0005486808 nova_compute[259627]: 2025-10-14 09:28:48.377 2 INFO nova.scheduler.client.report [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Deleted allocations for instance 9314cb71-9b9f-4379-90ba-61445b09c003
Oct 14 05:28:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2270: 305 pgs: 305 active+clean; 246 MiB data, 1018 MiB used, 59 GiB / 60 GiB avail; 182 KiB/s rd, 2.5 MiB/s wr, 66 op/s
Oct 14 05:28:48 np0005486808 nova_compute[259627]: 2025-10-14 09:28:48.452 2 DEBUG oslo_concurrency.lockutils [None req-00c999b5-a3cb-475d-bc3a-5157a4057393 10926c27278e45e7b995f2e53b9d16f9 4562699bdb1548f1bb36819107535620 - - default default] Lock "9314cb71-9b9f-4379-90ba-61445b09c003" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.268s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:28:48 np0005486808 podman[395623]: 2025-10-14 09:28:48.528441422 +0000 UTC m=+0.067997761 container create 36e6f5616bd12d127c5450884da24b6e3094abc04d7ca8febb5fd31effc940f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_galileo, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 14 05:28:48 np0005486808 systemd[1]: Started libpod-conmon-36e6f5616bd12d127c5450884da24b6e3094abc04d7ca8febb5fd31effc940f4.scope.
Oct 14 05:28:48 np0005486808 podman[395623]: 2025-10-14 09:28:48.499462766 +0000 UTC m=+0.039019155 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:28:48 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:28:48 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4719913f2a9ae86b6ebec698b858525c09c34c4f11dabcfb7c4ad81ec27e7354/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:28:48 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4719913f2a9ae86b6ebec698b858525c09c34c4f11dabcfb7c4ad81ec27e7354/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:28:48 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4719913f2a9ae86b6ebec698b858525c09c34c4f11dabcfb7c4ad81ec27e7354/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:28:48 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4719913f2a9ae86b6ebec698b858525c09c34c4f11dabcfb7c4ad81ec27e7354/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:28:48 np0005486808 podman[395623]: 2025-10-14 09:28:48.64210912 +0000 UTC m=+0.181665459 container init 36e6f5616bd12d127c5450884da24b6e3094abc04d7ca8febb5fd31effc940f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_galileo, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 14 05:28:48 np0005486808 podman[395623]: 2025-10-14 09:28:48.654488606 +0000 UTC m=+0.194044935 container start 36e6f5616bd12d127c5450884da24b6e3094abc04d7ca8febb5fd31effc940f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_galileo, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:28:48 np0005486808 podman[395623]: 2025-10-14 09:28:48.659146841 +0000 UTC m=+0.198703230 container attach 36e6f5616bd12d127c5450884da24b6e3094abc04d7ca8febb5fd31effc940f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:28:49 np0005486808 nova_compute[259627]: 2025-10-14 09:28:49.265 2 DEBUG nova.compute.manager [req-35790b4c-97bc-481f-971e-3535542d9667 req-bca5c904-f152-451d-a1b4-201a452e942d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Received event network-vif-deleted-5a63e80f-6b5f-4d7a-8976-f1033d8b6ffa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]: {
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:    "0": [
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:        {
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:            "devices": [
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:                "/dev/loop3"
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:            ],
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:            "lv_name": "ceph_lv0",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:            "lv_size": "21470642176",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:            "name": "ceph_lv0",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:            "tags": {
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:                "ceph.cluster_name": "ceph",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:                "ceph.crush_device_class": "",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:                "ceph.encrypted": "0",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:                "ceph.osd_id": "0",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:                "ceph.type": "block",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:                "ceph.vdo": "0"
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:            },
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:            "type": "block",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:            "vg_name": "ceph_vg0"
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:        }
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:    ],
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:    "1": [
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:        {
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:            "devices": [
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:                "/dev/loop4"
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:            ],
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:            "lv_name": "ceph_lv1",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:            "lv_size": "21470642176",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:            "name": "ceph_lv1",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:            "tags": {
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:                "ceph.cluster_name": "ceph",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:                "ceph.crush_device_class": "",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:                "ceph.encrypted": "0",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:                "ceph.osd_id": "1",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:                "ceph.type": "block",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:                "ceph.vdo": "0"
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:            },
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:            "type": "block",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:            "vg_name": "ceph_vg1"
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:        }
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:    ],
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:    "2": [
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:        {
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:            "devices": [
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:                "/dev/loop5"
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:            ],
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:            "lv_name": "ceph_lv2",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:            "lv_size": "21470642176",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:            "name": "ceph_lv2",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:            "tags": {
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:                "ceph.cluster_name": "ceph",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:                "ceph.crush_device_class": "",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:                "ceph.encrypted": "0",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:                "ceph.osd_id": "2",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:                "ceph.type": "block",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:                "ceph.vdo": "0"
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:            },
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:            "type": "block",
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:            "vg_name": "ceph_vg2"
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:        }
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]:    ]
Oct 14 05:28:49 np0005486808 vigilant_galileo[395640]: }
Oct 14 05:28:49 np0005486808 systemd[1]: libpod-36e6f5616bd12d127c5450884da24b6e3094abc04d7ca8febb5fd31effc940f4.scope: Deactivated successfully.
Oct 14 05:28:49 np0005486808 podman[395623]: 2025-10-14 09:28:49.52357009 +0000 UTC m=+1.063126389 container died 36e6f5616bd12d127c5450884da24b6e3094abc04d7ca8febb5fd31effc940f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_galileo, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:28:49 np0005486808 systemd[1]: var-lib-containers-storage-overlay-4719913f2a9ae86b6ebec698b858525c09c34c4f11dabcfb7c4ad81ec27e7354-merged.mount: Deactivated successfully.
Oct 14 05:28:49 np0005486808 podman[395623]: 2025-10-14 09:28:49.598742208 +0000 UTC m=+1.138298517 container remove 36e6f5616bd12d127c5450884da24b6e3094abc04d7ca8febb5fd31effc940f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_galileo, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:28:49 np0005486808 systemd[1]: libpod-conmon-36e6f5616bd12d127c5450884da24b6e3094abc04d7ca8febb5fd31effc940f4.scope: Deactivated successfully.
Oct 14 05:28:49 np0005486808 nova_compute[259627]: 2025-10-14 09:28:49.942 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Updating instance_info_cache with network_info: [{"id": "3fc32773-5083-4341-9838-5282b7963f56", "address": "fa:16:3e:b3:5d:72", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fc32773-50", "ovs_interfaceid": "3fc32773-5083-4341-9838-5282b7963f56", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 05:28:49 np0005486808 nova_compute[259627]: 2025-10-14 09:28:49.960 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-7a110a3c-a2ca-4314-a190-28a4505cc26c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 05:28:49 np0005486808 nova_compute[259627]: 2025-10-14 09:28:49.961 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 14 05:28:49 np0005486808 nova_compute[259627]: 2025-10-14 09:28:49.962 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:28:49 np0005486808 nova_compute[259627]: 2025-10-14 09:28:49.962 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 05:28:50 np0005486808 nova_compute[259627]: 2025-10-14 09:28:50.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:28:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2271: 305 pgs: 305 active+clean; 209 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.5 MiB/s wr, 105 op/s
Oct 14 05:28:50 np0005486808 podman[395805]: 2025-10-14 09:28:50.434828707 +0000 UTC m=+0.049463463 container create e3858e6d897119a94bfa5fd4cd9a817a585d2cf514073185f0fab968325b8a48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_morse, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 14 05:28:50 np0005486808 systemd[1]: Started libpod-conmon-e3858e6d897119a94bfa5fd4cd9a817a585d2cf514073185f0fab968325b8a48.scope.
Oct 14 05:28:50 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:28:50 np0005486808 podman[395805]: 2025-10-14 09:28:50.41348049 +0000 UTC m=+0.028115336 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:28:50 np0005486808 podman[395805]: 2025-10-14 09:28:50.52034059 +0000 UTC m=+0.134975386 container init e3858e6d897119a94bfa5fd4cd9a817a585d2cf514073185f0fab968325b8a48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_morse, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:28:50 np0005486808 podman[395805]: 2025-10-14 09:28:50.529148938 +0000 UTC m=+0.143783694 container start e3858e6d897119a94bfa5fd4cd9a817a585d2cf514073185f0fab968325b8a48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 05:28:50 np0005486808 podman[395805]: 2025-10-14 09:28:50.53329487 +0000 UTC m=+0.147929706 container attach e3858e6d897119a94bfa5fd4cd9a817a585d2cf514073185f0fab968325b8a48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_morse, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 14 05:28:50 np0005486808 zen_morse[395821]: 167 167
Oct 14 05:28:50 np0005486808 systemd[1]: libpod-e3858e6d897119a94bfa5fd4cd9a817a585d2cf514073185f0fab968325b8a48.scope: Deactivated successfully.
Oct 14 05:28:50 np0005486808 podman[395805]: 2025-10-14 09:28:50.535551766 +0000 UTC m=+0.150186532 container died e3858e6d897119a94bfa5fd4cd9a817a585d2cf514073185f0fab968325b8a48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_morse, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:28:50 np0005486808 systemd[1]: var-lib-containers-storage-overlay-d8f7a85bdae22fac6eee88101392a013edfc1ae108344148c37642d17c44b8ce-merged.mount: Deactivated successfully.
Oct 14 05:28:50 np0005486808 podman[395805]: 2025-10-14 09:28:50.585858439 +0000 UTC m=+0.200493225 container remove e3858e6d897119a94bfa5fd4cd9a817a585d2cf514073185f0fab968325b8a48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:28:50 np0005486808 systemd[1]: libpod-conmon-e3858e6d897119a94bfa5fd4cd9a817a585d2cf514073185f0fab968325b8a48.scope: Deactivated successfully.
Oct 14 05:28:50 np0005486808 podman[395845]: 2025-10-14 09:28:50.805056275 +0000 UTC m=+0.049420992 container create cdf42c5e92ed9dff8b34c2eb74c19159d1088b00938f764644937b6ba6bf152e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_northcutt, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 14 05:28:50 np0005486808 systemd[1]: Started libpod-conmon-cdf42c5e92ed9dff8b34c2eb74c19159d1088b00938f764644937b6ba6bf152e.scope.
Oct 14 05:28:50 np0005486808 podman[395845]: 2025-10-14 09:28:50.787793899 +0000 UTC m=+0.032158626 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:28:50 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:28:50 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20d0055e1f5c218e39e6aa9b14d83197e02a002106da5b52161c9ed157b0255d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:28:50 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20d0055e1f5c218e39e6aa9b14d83197e02a002106da5b52161c9ed157b0255d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:28:50 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20d0055e1f5c218e39e6aa9b14d83197e02a002106da5b52161c9ed157b0255d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:28:50 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20d0055e1f5c218e39e6aa9b14d83197e02a002106da5b52161c9ed157b0255d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:28:50 np0005486808 podman[395845]: 2025-10-14 09:28:50.919183635 +0000 UTC m=+0.163548382 container init cdf42c5e92ed9dff8b34c2eb74c19159d1088b00938f764644937b6ba6bf152e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_northcutt, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:28:50 np0005486808 podman[395845]: 2025-10-14 09:28:50.928251829 +0000 UTC m=+0.172616566 container start cdf42c5e92ed9dff8b34c2eb74c19159d1088b00938f764644937b6ba6bf152e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 14 05:28:50 np0005486808 podman[395845]: 2025-10-14 09:28:50.934042752 +0000 UTC m=+0.178407509 container attach cdf42c5e92ed9dff8b34c2eb74c19159d1088b00938f764644937b6ba6bf152e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_northcutt, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 05:28:50 np0005486808 nova_compute[259627]: 2025-10-14 09:28:50.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:28:51 np0005486808 nova_compute[259627]: 2025-10-14 09:28:51.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:28:52 np0005486808 dreamy_northcutt[395861]: {
Oct 14 05:28:52 np0005486808 dreamy_northcutt[395861]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:28:52 np0005486808 dreamy_northcutt[395861]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:28:52 np0005486808 dreamy_northcutt[395861]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:28:52 np0005486808 dreamy_northcutt[395861]:        "osd_id": 2,
Oct 14 05:28:52 np0005486808 dreamy_northcutt[395861]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:28:52 np0005486808 dreamy_northcutt[395861]:        "type": "bluestore"
Oct 14 05:28:52 np0005486808 dreamy_northcutt[395861]:    },
Oct 14 05:28:52 np0005486808 dreamy_northcutt[395861]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:28:52 np0005486808 dreamy_northcutt[395861]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:28:52 np0005486808 dreamy_northcutt[395861]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:28:52 np0005486808 dreamy_northcutt[395861]:        "osd_id": 1,
Oct 14 05:28:52 np0005486808 dreamy_northcutt[395861]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:28:52 np0005486808 dreamy_northcutt[395861]:        "type": "bluestore"
Oct 14 05:28:52 np0005486808 dreamy_northcutt[395861]:    },
Oct 14 05:28:52 np0005486808 dreamy_northcutt[395861]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:28:52 np0005486808 dreamy_northcutt[395861]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:28:52 np0005486808 dreamy_northcutt[395861]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:28:52 np0005486808 dreamy_northcutt[395861]:        "osd_id": 0,
Oct 14 05:28:52 np0005486808 dreamy_northcutt[395861]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:28:52 np0005486808 dreamy_northcutt[395861]:        "type": "bluestore"
Oct 14 05:28:52 np0005486808 dreamy_northcutt[395861]:    }
Oct 14 05:28:52 np0005486808 dreamy_northcutt[395861]: }
Oct 14 05:28:52 np0005486808 systemd[1]: libpod-cdf42c5e92ed9dff8b34c2eb74c19159d1088b00938f764644937b6ba6bf152e.scope: Deactivated successfully.
Oct 14 05:28:52 np0005486808 systemd[1]: libpod-cdf42c5e92ed9dff8b34c2eb74c19159d1088b00938f764644937b6ba6bf152e.scope: Consumed 1.170s CPU time.
Oct 14 05:28:52 np0005486808 podman[395894]: 2025-10-14 09:28:52.150204743 +0000 UTC m=+0.032832712 container died cdf42c5e92ed9dff8b34c2eb74c19159d1088b00938f764644937b6ba6bf152e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:28:52 np0005486808 systemd[1]: var-lib-containers-storage-overlay-20d0055e1f5c218e39e6aa9b14d83197e02a002106da5b52161c9ed157b0255d-merged.mount: Deactivated successfully.
Oct 14 05:28:52 np0005486808 podman[395894]: 2025-10-14 09:28:52.223564156 +0000 UTC m=+0.106192105 container remove cdf42c5e92ed9dff8b34c2eb74c19159d1088b00938f764644937b6ba6bf152e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_northcutt, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 14 05:28:52 np0005486808 systemd[1]: libpod-conmon-cdf42c5e92ed9dff8b34c2eb74c19159d1088b00938f764644937b6ba6bf152e.scope: Deactivated successfully.
Oct 14 05:28:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:28:52 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:28:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:28:52 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:28:52 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 327edd82-5740-4f51-9ae8-1a86ddce70f6 does not exist
Oct 14 05:28:52 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 4717444d-2d99-476f-8bf4-26530edc97c1 does not exist
Oct 14 05:28:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2272: 305 pgs: 305 active+clean; 167 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 964 KiB/s wr, 132 op/s
Oct 14 05:28:52 np0005486808 ovn_controller[152662]: 2025-10-14T09:28:52Z|01413|binding|INFO|Releasing lport e4065da2-8191-4cbc-a6ed-0505ac5ea1c6 from this chassis (sb_readonly=0)
Oct 14 05:28:52 np0005486808 nova_compute[259627]: 2025-10-14 09:28:52.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:28:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:28:53 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:28:53 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:28:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2273: 305 pgs: 305 active+clean; 167 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 102 op/s
Oct 14 05:28:54 np0005486808 podman[395961]: 2025-10-14 09:28:54.700322065 +0000 UTC m=+0.107380454 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 05:28:54 np0005486808 podman[395960]: 2025-10-14 09:28:54.739420471 +0000 UTC m=+0.144365438 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:28:54 np0005486808 nova_compute[259627]: 2025-10-14 09:28:54.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:28:54 np0005486808 nova_compute[259627]: 2025-10-14 09:28:54.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:28:55 np0005486808 nova_compute[259627]: 2025-10-14 09:28:55.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:28:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2274: 305 pgs: 305 active+clean; 167 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 103 op/s
Oct 14 05:28:56 np0005486808 nova_compute[259627]: 2025-10-14 09:28:56.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:28:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:28:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2275: 305 pgs: 305 active+clean; 167 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.9 KiB/s wr, 97 op/s
Oct 14 05:28:58 np0005486808 ovn_controller[152662]: 2025-10-14T09:28:58Z|00162|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:64:be:04 10.100.0.7
Oct 14 05:28:58 np0005486808 ovn_controller[152662]: 2025-10-14T09:28:58Z|00163|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:64:be:04 10.100.0.7
Oct 14 05:29:00 np0005486808 nova_compute[259627]: 2025-10-14 09:29:00.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2276: 305 pgs: 305 active+clean; 183 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 116 op/s
Oct 14 05:29:01 np0005486808 nova_compute[259627]: 2025-10-14 09:29:01.424 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434126.4195702, 9314cb71-9b9f-4379-90ba-61445b09c003 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:29:01 np0005486808 nova_compute[259627]: 2025-10-14 09:29:01.425 2 INFO nova.compute.manager [-] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:29:01 np0005486808 nova_compute[259627]: 2025-10-14 09:29:01.452 2 DEBUG nova.compute.manager [None req-1352b517-72e4-4122-b8e3-f4ae766c41e0 - - - - - -] [instance: 9314cb71-9b9f-4379-90ba-61445b09c003] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:29:01 np0005486808 nova_compute[259627]: 2025-10-14 09:29:01.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2277: 305 pgs: 305 active+clean; 200 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 122 op/s
Oct 14 05:29:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:29:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:29:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:29:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:29:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:29:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:29:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:29:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2278: 305 pgs: 305 active+clean; 200 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 05:29:05 np0005486808 nova_compute[259627]: 2025-10-14 09:29:05.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:05 np0005486808 nova_compute[259627]: 2025-10-14 09:29:05.237 2 DEBUG oslo_concurrency.lockutils [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:29:05 np0005486808 nova_compute[259627]: 2025-10-14 09:29:05.238 2 DEBUG oslo_concurrency.lockutils [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:29:05 np0005486808 nova_compute[259627]: 2025-10-14 09:29:05.239 2 DEBUG oslo_concurrency.lockutils [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:29:05 np0005486808 nova_compute[259627]: 2025-10-14 09:29:05.239 2 DEBUG oslo_concurrency.lockutils [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:29:05 np0005486808 nova_compute[259627]: 2025-10-14 09:29:05.240 2 DEBUG oslo_concurrency.lockutils [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:29:05 np0005486808 nova_compute[259627]: 2025-10-14 09:29:05.242 2 INFO nova.compute.manager [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Terminating instance#033[00m
Oct 14 05:29:05 np0005486808 nova_compute[259627]: 2025-10-14 09:29:05.243 2 DEBUG nova.compute.manager [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:29:05 np0005486808 kernel: tap3550cf12-50 (unregistering): left promiscuous mode
Oct 14 05:29:05 np0005486808 NetworkManager[44885]: <info>  [1760434145.3068] device (tap3550cf12-50): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:29:05 np0005486808 ovn_controller[152662]: 2025-10-14T09:29:05Z|01414|binding|INFO|Releasing lport 3550cf12-50e7-4809-9e33-8057ba120200 from this chassis (sb_readonly=0)
Oct 14 05:29:05 np0005486808 ovn_controller[152662]: 2025-10-14T09:29:05Z|01415|binding|INFO|Setting lport 3550cf12-50e7-4809-9e33-8057ba120200 down in Southbound
Oct 14 05:29:05 np0005486808 ovn_controller[152662]: 2025-10-14T09:29:05Z|01416|binding|INFO|Removing iface tap3550cf12-50 ovn-installed in OVS
Oct 14 05:29:05 np0005486808 nova_compute[259627]: 2025-10-14 09:29:05.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:05.337 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:be:04 10.100.0.7'], port_security=['fa:16:3e:64:be:04 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'fb3db81b-4d6f-4736-9d4b-b1900fad6488', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f09a704d-6063-4e40-b690-c967cd364b32', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f754bf649a2404fa8dee732f5aab36e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e76d2fee-d8c5-45a1-ac1f-55a35976452c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1b90ca07-80d1-49c1-a91f-225f989dd9c6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=3550cf12-50e7-4809-9e33-8057ba120200) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:29:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:05.339 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 3550cf12-50e7-4809-9e33-8057ba120200 in datapath f09a704d-6063-4e40-b690-c967cd364b32 unbound from our chassis#033[00m
Oct 14 05:29:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:05.341 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f09a704d-6063-4e40-b690-c967cd364b32#033[00m
Oct 14 05:29:05 np0005486808 nova_compute[259627]: 2025-10-14 09:29:05.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:05.363 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c7733bdc-bb62-4ad2-9e33-6f4db693bf72]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:05 np0005486808 systemd[1]: machine-qemu\x2d165\x2dinstance\x2d00000084.scope: Deactivated successfully.
Oct 14 05:29:05 np0005486808 systemd[1]: machine-qemu\x2d165\x2dinstance\x2d00000084.scope: Consumed 13.422s CPU time.
Oct 14 05:29:05 np0005486808 systemd-machined[214636]: Machine qemu-165-instance-00000084 terminated.
Oct 14 05:29:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:05.401 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[77b84394-3235-4f58-8a81-28237fc0631a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:05.407 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1383e082-cb1e-4df1-a598-e8478a590986]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:05.451 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5a652061-b8ee-440b-b3b8-2dc52f7940eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:05 np0005486808 nova_compute[259627]: 2025-10-14 09:29:05.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:05.481 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a3ba40fb-085f-4f4b-90c6-e9305cc6a089]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf09a704d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:f0:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 397], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 785922, 'reachable_time': 22812, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396019, 'error': None, 'target': 'ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:05 np0005486808 nova_compute[259627]: 2025-10-14 09:29:05.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:05 np0005486808 nova_compute[259627]: 2025-10-14 09:29:05.494 2 INFO nova.virt.libvirt.driver [-] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Instance destroyed successfully.#033[00m
Oct 14 05:29:05 np0005486808 nova_compute[259627]: 2025-10-14 09:29:05.495 2 DEBUG nova.objects.instance [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'resources' on Instance uuid fb3db81b-4d6f-4736-9d4b-b1900fad6488 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:29:05 np0005486808 nova_compute[259627]: 2025-10-14 09:29:05.514 2 DEBUG nova.virt.libvirt.vif [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:28:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-gen-0-1042294944',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-gen-0-1042294944',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ge',id=132,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFHCa/sppIAnQAQkdZ/1Ef+DTNecrrS5rhEQtzPrxlFpoBPEHdQG0Rr7ARx/izPzPbjo5rPoXWm8ksRfiQ+ieFxBsihNkHMZrgGUaPkR43e+YFaxNpM2eAk15miiuIGX3Q==',key_name='tempest-TestSecurityGroupsBasicOps-1939533280',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:28:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-jr53qs99',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:28:46Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=fb3db81b-4d6f-4736-9d4b-b1900fad6488,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3550cf12-50e7-4809-9e33-8057ba120200", "address": "fa:16:3e:64:be:04", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3550cf12-50", "ovs_interfaceid": "3550cf12-50e7-4809-9e33-8057ba120200", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:29:05 np0005486808 nova_compute[259627]: 2025-10-14 09:29:05.515 2 DEBUG nova.network.os_vif_util [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "3550cf12-50e7-4809-9e33-8057ba120200", "address": "fa:16:3e:64:be:04", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3550cf12-50", "ovs_interfaceid": "3550cf12-50e7-4809-9e33-8057ba120200", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:29:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:05.515 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[33658422-8cdf-4148-be4f-2b69efbf511b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf09a704d-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 785940, 'tstamp': 785940}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 396025, 'error': None, 'target': 'ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf09a704d-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 785944, 'tstamp': 785944}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 396025, 'error': None, 'target': 'ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:05.517 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf09a704d-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:29:05 np0005486808 nova_compute[259627]: 2025-10-14 09:29:05.517 2 DEBUG nova.network.os_vif_util [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:be:04,bridge_name='br-int',has_traffic_filtering=True,id=3550cf12-50e7-4809-9e33-8057ba120200,network=Network(f09a704d-6063-4e40-b690-c967cd364b32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3550cf12-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:29:05 np0005486808 nova_compute[259627]: 2025-10-14 09:29:05.518 2 DEBUG os_vif [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:be:04,bridge_name='br-int',has_traffic_filtering=True,id=3550cf12-50e7-4809-9e33-8057ba120200,network=Network(f09a704d-6063-4e40-b690-c967cd364b32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3550cf12-50') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:29:05 np0005486808 nova_compute[259627]: 2025-10-14 09:29:05.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:05 np0005486808 nova_compute[259627]: 2025-10-14 09:29:05.523 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3550cf12-50, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:29:05 np0005486808 nova_compute[259627]: 2025-10-14 09:29:05.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:05 np0005486808 nova_compute[259627]: 2025-10-14 09:29:05.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:29:05 np0005486808 nova_compute[259627]: 2025-10-14 09:29:05.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:05.529 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf09a704d-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:29:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:05.529 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:29:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:05.530 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf09a704d-60, col_values=(('external_ids', {'iface-id': 'e4065da2-8191-4cbc-a6ed-0505ac5ea1c6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:29:05 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:05.530 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:29:05 np0005486808 nova_compute[259627]: 2025-10-14 09:29:05.534 2 INFO os_vif [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:be:04,bridge_name='br-int',has_traffic_filtering=True,id=3550cf12-50e7-4809-9e33-8057ba120200,network=Network(f09a704d-6063-4e40-b690-c967cd364b32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3550cf12-50')#033[00m
Oct 14 05:29:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:29:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2201009748' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:29:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:29:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2201009748' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:29:05 np0005486808 nova_compute[259627]: 2025-10-14 09:29:05.957 2 INFO nova.virt.libvirt.driver [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Deleting instance files /var/lib/nova/instances/fb3db81b-4d6f-4736-9d4b-b1900fad6488_del#033[00m
Oct 14 05:29:05 np0005486808 nova_compute[259627]: 2025-10-14 09:29:05.957 2 INFO nova.virt.libvirt.driver [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Deletion of /var/lib/nova/instances/fb3db81b-4d6f-4736-9d4b-b1900fad6488_del complete#033[00m
Oct 14 05:29:06 np0005486808 nova_compute[259627]: 2025-10-14 09:29:06.029 2 INFO nova.compute.manager [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:29:06 np0005486808 nova_compute[259627]: 2025-10-14 09:29:06.030 2 DEBUG oslo.service.loopingcall [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:29:06 np0005486808 nova_compute[259627]: 2025-10-14 09:29:06.032 2 DEBUG nova.compute.manager [-] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:29:06 np0005486808 nova_compute[259627]: 2025-10-14 09:29:06.032 2 DEBUG nova.network.neutron [-] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:29:06 np0005486808 nova_compute[259627]: 2025-10-14 09:29:06.256 2 DEBUG nova.compute.manager [req-0b585b7d-fa4d-4741-89d1-acd3d317838d req-78ccef2c-d6ed-4fb0-a4a2-29c19324fa9e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Received event network-vif-unplugged-3550cf12-50e7-4809-9e33-8057ba120200 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:29:06 np0005486808 nova_compute[259627]: 2025-10-14 09:29:06.256 2 DEBUG oslo_concurrency.lockutils [req-0b585b7d-fa4d-4741-89d1-acd3d317838d req-78ccef2c-d6ed-4fb0-a4a2-29c19324fa9e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:29:06 np0005486808 nova_compute[259627]: 2025-10-14 09:29:06.257 2 DEBUG oslo_concurrency.lockutils [req-0b585b7d-fa4d-4741-89d1-acd3d317838d req-78ccef2c-d6ed-4fb0-a4a2-29c19324fa9e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:29:06 np0005486808 nova_compute[259627]: 2025-10-14 09:29:06.257 2 DEBUG oslo_concurrency.lockutils [req-0b585b7d-fa4d-4741-89d1-acd3d317838d req-78ccef2c-d6ed-4fb0-a4a2-29c19324fa9e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:29:06 np0005486808 nova_compute[259627]: 2025-10-14 09:29:06.257 2 DEBUG nova.compute.manager [req-0b585b7d-fa4d-4741-89d1-acd3d317838d req-78ccef2c-d6ed-4fb0-a4a2-29c19324fa9e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] No waiting events found dispatching network-vif-unplugged-3550cf12-50e7-4809-9e33-8057ba120200 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:29:06 np0005486808 nova_compute[259627]: 2025-10-14 09:29:06.258 2 DEBUG nova.compute.manager [req-0b585b7d-fa4d-4741-89d1-acd3d317838d req-78ccef2c-d6ed-4fb0-a4a2-29c19324fa9e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Received event network-vif-unplugged-3550cf12-50e7-4809-9e33-8057ba120200 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:29:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2279: 305 pgs: 305 active+clean; 200 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 05:29:06 np0005486808 nova_compute[259627]: 2025-10-14 09:29:06.857 2 DEBUG nova.network.neutron [-] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:29:06 np0005486808 nova_compute[259627]: 2025-10-14 09:29:06.877 2 INFO nova.compute.manager [-] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Took 0.84 seconds to deallocate network for instance.#033[00m
Oct 14 05:29:06 np0005486808 nova_compute[259627]: 2025-10-14 09:29:06.922 2 DEBUG oslo_concurrency.lockutils [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:29:06 np0005486808 nova_compute[259627]: 2025-10-14 09:29:06.923 2 DEBUG oslo_concurrency.lockutils [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:29:06 np0005486808 nova_compute[259627]: 2025-10-14 09:29:06.941 2 DEBUG nova.compute.manager [req-00898b96-40fc-4c62-b482-8aa889b2c237 req-d3e89ba1-1be4-4a01-a66c-6a63638fd66d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Received event network-vif-deleted-3550cf12-50e7-4809-9e33-8057ba120200 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:29:07 np0005486808 nova_compute[259627]: 2025-10-14 09:29:07.002 2 DEBUG oslo_concurrency.processutils [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:29:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:07.044 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:29:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:07.045 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:29:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:07.046 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:29:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:29:07 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2530213059' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:29:07 np0005486808 nova_compute[259627]: 2025-10-14 09:29:07.524 2 DEBUG oslo_concurrency.processutils [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:29:07 np0005486808 nova_compute[259627]: 2025-10-14 09:29:07.534 2 DEBUG nova.compute.provider_tree [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:29:07 np0005486808 nova_compute[259627]: 2025-10-14 09:29:07.559 2 DEBUG nova.scheduler.client.report [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:29:07 np0005486808 nova_compute[259627]: 2025-10-14 09:29:07.590 2 DEBUG oslo_concurrency.lockutils [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:29:07 np0005486808 nova_compute[259627]: 2025-10-14 09:29:07.634 2 INFO nova.scheduler.client.report [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Deleted allocations for instance fb3db81b-4d6f-4736-9d4b-b1900fad6488#033[00m
Oct 14 05:29:07 np0005486808 nova_compute[259627]: 2025-10-14 09:29:07.739 2 DEBUG oslo_concurrency.lockutils [None req-a7ce237e-bc9c-4d7a-9e5e-3d438305324c 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.501s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:29:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:29:08 np0005486808 nova_compute[259627]: 2025-10-14 09:29:08.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:08 np0005486808 nova_compute[259627]: 2025-10-14 09:29:08.362 2 DEBUG nova.compute.manager [req-c2ea7950-917f-4b33-b2fd-4d163771c8be req-433ee06e-40cc-4291-8fd0-5a40a2b16354 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Received event network-vif-plugged-3550cf12-50e7-4809-9e33-8057ba120200 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:29:08 np0005486808 nova_compute[259627]: 2025-10-14 09:29:08.363 2 DEBUG oslo_concurrency.lockutils [req-c2ea7950-917f-4b33-b2fd-4d163771c8be req-433ee06e-40cc-4291-8fd0-5a40a2b16354 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:29:08 np0005486808 nova_compute[259627]: 2025-10-14 09:29:08.363 2 DEBUG oslo_concurrency.lockutils [req-c2ea7950-917f-4b33-b2fd-4d163771c8be req-433ee06e-40cc-4291-8fd0-5a40a2b16354 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:29:08 np0005486808 nova_compute[259627]: 2025-10-14 09:29:08.364 2 DEBUG oslo_concurrency.lockutils [req-c2ea7950-917f-4b33-b2fd-4d163771c8be req-433ee06e-40cc-4291-8fd0-5a40a2b16354 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "fb3db81b-4d6f-4736-9d4b-b1900fad6488-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:29:08 np0005486808 nova_compute[259627]: 2025-10-14 09:29:08.365 2 DEBUG nova.compute.manager [req-c2ea7950-917f-4b33-b2fd-4d163771c8be req-433ee06e-40cc-4291-8fd0-5a40a2b16354 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] No waiting events found dispatching network-vif-plugged-3550cf12-50e7-4809-9e33-8057ba120200 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:29:08 np0005486808 nova_compute[259627]: 2025-10-14 09:29:08.365 2 WARNING nova.compute.manager [req-c2ea7950-917f-4b33-b2fd-4d163771c8be req-433ee06e-40cc-4291-8fd0-5a40a2b16354 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Received unexpected event network-vif-plugged-3550cf12-50e7-4809-9e33-8057ba120200 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:29:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2280: 305 pgs: 305 active+clean; 200 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 05:29:10 np0005486808 nova_compute[259627]: 2025-10-14 09:29:10.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2281: 305 pgs: 305 active+clean; 164 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Oct 14 05:29:10 np0005486808 nova_compute[259627]: 2025-10-14 09:29:10.467 2 DEBUG nova.compute.manager [req-0bec4604-b961-490a-b372-437de8ebebe0 req-56379879-6a62-4395-b3e4-db197ebe3777 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Received event network-changed-3fc32773-5083-4341-9838-5282b7963f56 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:29:10 np0005486808 nova_compute[259627]: 2025-10-14 09:29:10.467 2 DEBUG nova.compute.manager [req-0bec4604-b961-490a-b372-437de8ebebe0 req-56379879-6a62-4395-b3e4-db197ebe3777 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Refreshing instance network info cache due to event network-changed-3fc32773-5083-4341-9838-5282b7963f56. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:29:10 np0005486808 nova_compute[259627]: 2025-10-14 09:29:10.468 2 DEBUG oslo_concurrency.lockutils [req-0bec4604-b961-490a-b372-437de8ebebe0 req-56379879-6a62-4395-b3e4-db197ebe3777 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-7a110a3c-a2ca-4314-a190-28a4505cc26c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:29:10 np0005486808 nova_compute[259627]: 2025-10-14 09:29:10.468 2 DEBUG oslo_concurrency.lockutils [req-0bec4604-b961-490a-b372-437de8ebebe0 req-56379879-6a62-4395-b3e4-db197ebe3777 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-7a110a3c-a2ca-4314-a190-28a4505cc26c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:29:10 np0005486808 nova_compute[259627]: 2025-10-14 09:29:10.468 2 DEBUG nova.network.neutron [req-0bec4604-b961-490a-b372-437de8ebebe0 req-56379879-6a62-4395-b3e4-db197ebe3777 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Refreshing network info cache for port 3fc32773-5083-4341-9838-5282b7963f56 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:29:10 np0005486808 nova_compute[259627]: 2025-10-14 09:29:10.537 2 DEBUG oslo_concurrency.lockutils [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "7a110a3c-a2ca-4314-a190-28a4505cc26c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:29:10 np0005486808 nova_compute[259627]: 2025-10-14 09:29:10.537 2 DEBUG oslo_concurrency.lockutils [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "7a110a3c-a2ca-4314-a190-28a4505cc26c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:29:10 np0005486808 nova_compute[259627]: 2025-10-14 09:29:10.538 2 DEBUG oslo_concurrency.lockutils [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "7a110a3c-a2ca-4314-a190-28a4505cc26c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:29:10 np0005486808 nova_compute[259627]: 2025-10-14 09:29:10.538 2 DEBUG oslo_concurrency.lockutils [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "7a110a3c-a2ca-4314-a190-28a4505cc26c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:29:10 np0005486808 nova_compute[259627]: 2025-10-14 09:29:10.538 2 DEBUG oslo_concurrency.lockutils [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "7a110a3c-a2ca-4314-a190-28a4505cc26c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:29:10 np0005486808 nova_compute[259627]: 2025-10-14 09:29:10.540 2 INFO nova.compute.manager [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Terminating instance#033[00m
Oct 14 05:29:10 np0005486808 nova_compute[259627]: 2025-10-14 09:29:10.541 2 DEBUG nova.compute.manager [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:29:10 np0005486808 nova_compute[259627]: 2025-10-14 09:29:10.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:10 np0005486808 kernel: tap3fc32773-50 (unregistering): left promiscuous mode
Oct 14 05:29:10 np0005486808 NetworkManager[44885]: <info>  [1760434150.6090] device (tap3fc32773-50): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:29:10 np0005486808 ovn_controller[152662]: 2025-10-14T09:29:10Z|01417|binding|INFO|Releasing lport 3fc32773-5083-4341-9838-5282b7963f56 from this chassis (sb_readonly=0)
Oct 14 05:29:10 np0005486808 ovn_controller[152662]: 2025-10-14T09:29:10Z|01418|binding|INFO|Setting lport 3fc32773-5083-4341-9838-5282b7963f56 down in Southbound
Oct 14 05:29:10 np0005486808 nova_compute[259627]: 2025-10-14 09:29:10.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:10 np0005486808 ovn_controller[152662]: 2025-10-14T09:29:10Z|01419|binding|INFO|Removing iface tap3fc32773-50 ovn-installed in OVS
Oct 14 05:29:10 np0005486808 nova_compute[259627]: 2025-10-14 09:29:10.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:10.627 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:5d:72 10.100.0.12'], port_security=['fa:16:3e:b3:5d:72 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '7a110a3c-a2ca-4314-a190-28a4505cc26c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f09a704d-6063-4e40-b690-c967cd364b32', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f754bf649a2404fa8dee732f5aab36e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '24af2eac-35ae-4c02-b261-8fe378764631 e76d2fee-d8c5-45a1-ac1f-55a35976452c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1b90ca07-80d1-49c1-a91f-225f989dd9c6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=3fc32773-5083-4341-9838-5282b7963f56) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:29:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:10.628 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 3fc32773-5083-4341-9838-5282b7963f56 in datapath f09a704d-6063-4e40-b690-c967cd364b32 unbound from our chassis#033[00m
Oct 14 05:29:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:10.630 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f09a704d-6063-4e40-b690-c967cd364b32, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:29:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:10.631 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c0063334-cf47-4f65-8404-ed43e66c3388]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:10.632 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32 namespace which is not needed anymore#033[00m
Oct 14 05:29:10 np0005486808 nova_compute[259627]: 2025-10-14 09:29:10.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:10 np0005486808 systemd[1]: machine-qemu\x2d163\x2dinstance\x2d00000082.scope: Deactivated successfully.
Oct 14 05:29:10 np0005486808 systemd[1]: machine-qemu\x2d163\x2dinstance\x2d00000082.scope: Consumed 15.198s CPU time.
Oct 14 05:29:10 np0005486808 systemd-machined[214636]: Machine qemu-163-instance-00000082 terminated.
Oct 14 05:29:10 np0005486808 nova_compute[259627]: 2025-10-14 09:29:10.799 2 INFO nova.virt.libvirt.driver [-] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Instance destroyed successfully.#033[00m
Oct 14 05:29:10 np0005486808 nova_compute[259627]: 2025-10-14 09:29:10.800 2 DEBUG nova.objects.instance [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'resources' on Instance uuid 7a110a3c-a2ca-4314-a190-28a4505cc26c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:29:10 np0005486808 neutron-haproxy-ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32[393930]: [NOTICE]   (393934) : haproxy version is 2.8.14-c23fe91
Oct 14 05:29:10 np0005486808 neutron-haproxy-ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32[393930]: [NOTICE]   (393934) : path to executable is /usr/sbin/haproxy
Oct 14 05:29:10 np0005486808 neutron-haproxy-ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32[393930]: [WARNING]  (393934) : Exiting Master process...
Oct 14 05:29:10 np0005486808 neutron-haproxy-ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32[393930]: [WARNING]  (393934) : Exiting Master process...
Oct 14 05:29:10 np0005486808 nova_compute[259627]: 2025-10-14 09:29:10.828 2 DEBUG nova.virt.libvirt.vif [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:27:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-1879326167',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-1879326167',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ac',id=130,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFHCa/sppIAnQAQkdZ/1Ef+DTNecrrS5rhEQtzPrxlFpoBPEHdQG0Rr7ARx/izPzPbjo5rPoXWm8ksRfiQ+ieFxBsihNkHMZrgGUaPkR43e+YFaxNpM2eAk15miiuIGX3Q==',key_name='tempest-TestSecurityGroupsBasicOps-1939533280',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:28:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-tkp1b0io',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:28:11Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=7a110a3c-a2ca-4314-a190-28a4505cc26c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3fc32773-5083-4341-9838-5282b7963f56", "address": "fa:16:3e:b3:5d:72", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fc32773-50", "ovs_interfaceid": "3fc32773-5083-4341-9838-5282b7963f56", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:29:10 np0005486808 nova_compute[259627]: 2025-10-14 09:29:10.829 2 DEBUG nova.network.os_vif_util [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "3fc32773-5083-4341-9838-5282b7963f56", "address": "fa:16:3e:b3:5d:72", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fc32773-50", "ovs_interfaceid": "3fc32773-5083-4341-9838-5282b7963f56", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:29:10 np0005486808 nova_compute[259627]: 2025-10-14 09:29:10.830 2 DEBUG nova.network.os_vif_util [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b3:5d:72,bridge_name='br-int',has_traffic_filtering=True,id=3fc32773-5083-4341-9838-5282b7963f56,network=Network(f09a704d-6063-4e40-b690-c967cd364b32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fc32773-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:29:10 np0005486808 nova_compute[259627]: 2025-10-14 09:29:10.830 2 DEBUG os_vif [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b3:5d:72,bridge_name='br-int',has_traffic_filtering=True,id=3fc32773-5083-4341-9838-5282b7963f56,network=Network(f09a704d-6063-4e40-b690-c967cd364b32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fc32773-50') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:29:10 np0005486808 nova_compute[259627]: 2025-10-14 09:29:10.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:10 np0005486808 nova_compute[259627]: 2025-10-14 09:29:10.833 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3fc32773-50, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:29:10 np0005486808 nova_compute[259627]: 2025-10-14 09:29:10.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:10 np0005486808 nova_compute[259627]: 2025-10-14 09:29:10.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:29:10 np0005486808 neutron-haproxy-ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32[393930]: [ALERT]    (393934) : Current worker (393936) exited with code 143 (Terminated)
Oct 14 05:29:10 np0005486808 neutron-haproxy-ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32[393930]: [WARNING]  (393934) : All workers exited. Exiting... (0)
Oct 14 05:29:10 np0005486808 nova_compute[259627]: 2025-10-14 09:29:10.839 2 INFO os_vif [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b3:5d:72,bridge_name='br-int',has_traffic_filtering=True,id=3fc32773-5083-4341-9838-5282b7963f56,network=Network(f09a704d-6063-4e40-b690-c967cd364b32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fc32773-50')#033[00m
Oct 14 05:29:10 np0005486808 systemd[1]: libpod-a90386e53be92dbf5da11f0527735f8ae4714154683cfbfa50002c49341a67b8.scope: Deactivated successfully.
Oct 14 05:29:10 np0005486808 podman[396097]: 2025-10-14 09:29:10.85146507 +0000 UTC m=+0.082157571 container died a90386e53be92dbf5da11f0527735f8ae4714154683cfbfa50002c49341a67b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:29:10 np0005486808 podman[396095]: 2025-10-14 09:29:10.872227423 +0000 UTC m=+0.104761869 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:29:10 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a90386e53be92dbf5da11f0527735f8ae4714154683cfbfa50002c49341a67b8-userdata-shm.mount: Deactivated successfully.
Oct 14 05:29:10 np0005486808 systemd[1]: var-lib-containers-storage-overlay-98a05a81ea8f0e7b8861af2cf251020d339678d913c2f8ee38478cf7f1b84037-merged.mount: Deactivated successfully.
Oct 14 05:29:10 np0005486808 podman[396092]: 2025-10-14 09:29:10.896552974 +0000 UTC m=+0.139264782 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Oct 14 05:29:10 np0005486808 podman[396097]: 2025-10-14 09:29:10.912236931 +0000 UTC m=+0.142929442 container cleanup a90386e53be92dbf5da11f0527735f8ae4714154683cfbfa50002c49341a67b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, io.buildah.version=1.41.3)
Oct 14 05:29:10 np0005486808 systemd[1]: libpod-conmon-a90386e53be92dbf5da11f0527735f8ae4714154683cfbfa50002c49341a67b8.scope: Deactivated successfully.
Oct 14 05:29:10 np0005486808 podman[396188]: 2025-10-14 09:29:10.981576984 +0000 UTC m=+0.046053919 container remove a90386e53be92dbf5da11f0527735f8ae4714154683cfbfa50002c49341a67b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 14 05:29:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:10.988 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[47e32a42-c5f5-4d9c-8c11-fac3e02a0af6]: (4, ('Tue Oct 14 09:29:10 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32 (a90386e53be92dbf5da11f0527735f8ae4714154683cfbfa50002c49341a67b8)\na90386e53be92dbf5da11f0527735f8ae4714154683cfbfa50002c49341a67b8\nTue Oct 14 09:29:10 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32 (a90386e53be92dbf5da11f0527735f8ae4714154683cfbfa50002c49341a67b8)\na90386e53be92dbf5da11f0527735f8ae4714154683cfbfa50002c49341a67b8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:10.990 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b67a4f15-8756-4f21-aec1-57238e905f1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:10.991 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf09a704d-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:29:10 np0005486808 nova_compute[259627]: 2025-10-14 09:29:10.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:11 np0005486808 kernel: tapf09a704d-60: left promiscuous mode
Oct 14 05:29:11 np0005486808 nova_compute[259627]: 2025-10-14 09:29:10.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:10.998 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[24477a2b-9a19-47ed-8288-16216d3b887e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:11 np0005486808 nova_compute[259627]: 2025-10-14 09:29:11.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:11.025 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a7bbaed3-3ff5-4732-a859-448cb6e392db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:11.026 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[069e60ff-4efb-42a6-8c70-8f8cd283c790]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:11.047 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[47c63d8a-e96f-4841-8484-de27f8b146c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 785913, 'reachable_time': 21456, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396203, 'error': None, 'target': 'ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:11.049 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f09a704d-6063-4e40-b690-c967cd364b32 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:29:11 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:11.049 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[481dbc38-c833-4935-9822-9f420fdb1b5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:11 np0005486808 systemd[1]: run-netns-ovnmeta\x2df09a704d\x2d6063\x2d4e40\x2db690\x2dc967cd364b32.mount: Deactivated successfully.
Oct 14 05:29:11 np0005486808 nova_compute[259627]: 2025-10-14 09:29:11.287 2 INFO nova.virt.libvirt.driver [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Deleting instance files /var/lib/nova/instances/7a110a3c-a2ca-4314-a190-28a4505cc26c_del#033[00m
Oct 14 05:29:11 np0005486808 nova_compute[259627]: 2025-10-14 09:29:11.289 2 INFO nova.virt.libvirt.driver [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Deletion of /var/lib/nova/instances/7a110a3c-a2ca-4314-a190-28a4505cc26c_del complete#033[00m
Oct 14 05:29:11 np0005486808 nova_compute[259627]: 2025-10-14 09:29:11.373 2 INFO nova.compute.manager [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Took 0.83 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:29:11 np0005486808 nova_compute[259627]: 2025-10-14 09:29:11.374 2 DEBUG oslo.service.loopingcall [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:29:11 np0005486808 nova_compute[259627]: 2025-10-14 09:29:11.375 2 DEBUG nova.compute.manager [-] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:29:11 np0005486808 nova_compute[259627]: 2025-10-14 09:29:11.376 2 DEBUG nova.network.neutron [-] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:29:11 np0005486808 nova_compute[259627]: 2025-10-14 09:29:11.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:11 np0005486808 nova_compute[259627]: 2025-10-14 09:29:11.991 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:29:11 np0005486808 nova_compute[259627]: 2025-10-14 09:29:11.992 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 14 05:29:12 np0005486808 nova_compute[259627]: 2025-10-14 09:29:12.006 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 14 05:29:12 np0005486808 nova_compute[259627]: 2025-10-14 09:29:12.072 2 DEBUG nova.network.neutron [-] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:29:12 np0005486808 nova_compute[259627]: 2025-10-14 09:29:12.096 2 INFO nova.compute.manager [-] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Took 0.72 seconds to deallocate network for instance.#033[00m
Oct 14 05:29:12 np0005486808 nova_compute[259627]: 2025-10-14 09:29:12.136 2 DEBUG oslo_concurrency.lockutils [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:29:12 np0005486808 nova_compute[259627]: 2025-10-14 09:29:12.137 2 DEBUG oslo_concurrency.lockutils [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:29:12 np0005486808 nova_compute[259627]: 2025-10-14 09:29:12.165 2 DEBUG nova.compute.manager [req-4f0f0582-9461-42e5-a1bd-65f80d3c4731 req-53f22dbf-579b-4ed7-9546-d15cb008246c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Received event network-vif-deleted-3fc32773-5083-4341-9838-5282b7963f56 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:29:12 np0005486808 nova_compute[259627]: 2025-10-14 09:29:12.175 2 DEBUG oslo_concurrency.processutils [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:29:12 np0005486808 nova_compute[259627]: 2025-10-14 09:29:12.321 2 DEBUG nova.network.neutron [req-0bec4604-b961-490a-b372-437de8ebebe0 req-56379879-6a62-4395-b3e4-db197ebe3777 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Updated VIF entry in instance network info cache for port 3fc32773-5083-4341-9838-5282b7963f56. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:29:12 np0005486808 nova_compute[259627]: 2025-10-14 09:29:12.323 2 DEBUG nova.network.neutron [req-0bec4604-b961-490a-b372-437de8ebebe0 req-56379879-6a62-4395-b3e4-db197ebe3777 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Updating instance_info_cache with network_info: [{"id": "3fc32773-5083-4341-9838-5282b7963f56", "address": "fa:16:3e:b3:5d:72", "network": {"id": "f09a704d-6063-4e40-b690-c967cd364b32", "bridge": "br-int", "label": "tempest-network-smoke--383142843", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fc32773-50", "ovs_interfaceid": "3fc32773-5083-4341-9838-5282b7963f56", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:29:12 np0005486808 nova_compute[259627]: 2025-10-14 09:29:12.355 2 DEBUG oslo_concurrency.lockutils [req-0bec4604-b961-490a-b372-437de8ebebe0 req-56379879-6a62-4395-b3e4-db197ebe3777 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-7a110a3c-a2ca-4314-a190-28a4505cc26c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:29:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2282: 305 pgs: 305 active+clean; 121 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 246 KiB/s rd, 1.0 MiB/s wr, 73 op/s
Oct 14 05:29:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:29:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2256220778' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:29:12 np0005486808 nova_compute[259627]: 2025-10-14 09:29:12.618 2 DEBUG oslo_concurrency.processutils [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:29:12 np0005486808 nova_compute[259627]: 2025-10-14 09:29:12.624 2 DEBUG nova.compute.provider_tree [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:29:12 np0005486808 nova_compute[259627]: 2025-10-14 09:29:12.640 2 DEBUG nova.scheduler.client.report [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:29:12 np0005486808 nova_compute[259627]: 2025-10-14 09:29:12.663 2 DEBUG oslo_concurrency.lockutils [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.526s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:29:12 np0005486808 nova_compute[259627]: 2025-10-14 09:29:12.690 2 INFO nova.scheduler.client.report [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Deleted allocations for instance 7a110a3c-a2ca-4314-a190-28a4505cc26c#033[00m
Oct 14 05:29:12 np0005486808 nova_compute[259627]: 2025-10-14 09:29:12.752 2 DEBUG oslo_concurrency.lockutils [None req-5adc3004-df57-40fb-af61-bdc347f2a809 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "7a110a3c-a2ca-4314-a190-28a4505cc26c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.215s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:29:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:13.008 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4c:43:1c 10.100.0.2 2001:db8::f816:3eff:fe4c:431c'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe4c:431c/64', 'neutron:device_id': 'ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3cbcf7e5-ac17-454b-893d-3fda266aa395', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=426ad543-f963-4015-b706-28bdf047c321, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=2a306a4f-b3d3-4c63-8490-e1049c247650) old=Port_Binding(mac=['fa:16:3e:4c:43:1c 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3cbcf7e5-ac17-454b-893d-3fda266aa395', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:29:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:13.010 162547 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 2a306a4f-b3d3-4c63-8490-e1049c247650 in datapath 3cbcf7e5-ac17-454b-893d-3fda266aa395 updated#033[00m
Oct 14 05:29:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:13.011 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3cbcf7e5-ac17-454b-893d-3fda266aa395, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:29:13 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:13.012 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3014cadb-a777-4f97-af69-24b204ded998]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:29:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2283: 305 pgs: 305 active+clean; 121 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Oct 14 05:29:15 np0005486808 nova_compute[259627]: 2025-10-14 09:29:15.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:15 np0005486808 nova_compute[259627]: 2025-10-14 09:29:15.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2284: 305 pgs: 305 active+clean; 41 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 16 KiB/s wr, 56 op/s
Oct 14 05:29:16 np0005486808 nova_compute[259627]: 2025-10-14 09:29:16.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:16 np0005486808 nova_compute[259627]: 2025-10-14 09:29:16.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:17 np0005486808 nova_compute[259627]: 2025-10-14 09:29:17.753 2 DEBUG oslo_concurrency.lockutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:29:17 np0005486808 nova_compute[259627]: 2025-10-14 09:29:17.754 2 DEBUG oslo_concurrency.lockutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:29:17 np0005486808 nova_compute[259627]: 2025-10-14 09:29:17.771 2 DEBUG nova.compute.manager [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:29:17 np0005486808 nova_compute[259627]: 2025-10-14 09:29:17.840 2 DEBUG oslo_concurrency.lockutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:29:17 np0005486808 nova_compute[259627]: 2025-10-14 09:29:17.840 2 DEBUG oslo_concurrency.lockutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:29:17 np0005486808 nova_compute[259627]: 2025-10-14 09:29:17.851 2 DEBUG nova.virt.hardware [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:29:17 np0005486808 nova_compute[259627]: 2025-10-14 09:29:17.852 2 INFO nova.compute.claims [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:29:17 np0005486808 nova_compute[259627]: 2025-10-14 09:29:17.959 2 DEBUG oslo_concurrency.processutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:29:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:29:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:29:18 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/177090088' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:29:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2285: 305 pgs: 305 active+clean; 41 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Oct 14 05:29:18 np0005486808 nova_compute[259627]: 2025-10-14 09:29:18.412 2 DEBUG oslo_concurrency.processutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:29:18 np0005486808 nova_compute[259627]: 2025-10-14 09:29:18.422 2 DEBUG nova.compute.provider_tree [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:29:18 np0005486808 nova_compute[259627]: 2025-10-14 09:29:18.447 2 DEBUG nova.scheduler.client.report [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:29:18 np0005486808 nova_compute[259627]: 2025-10-14 09:29:18.485 2 DEBUG oslo_concurrency.lockutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:29:18 np0005486808 nova_compute[259627]: 2025-10-14 09:29:18.486 2 DEBUG nova.compute.manager [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:29:18 np0005486808 nova_compute[259627]: 2025-10-14 09:29:18.546 2 DEBUG nova.compute.manager [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:29:18 np0005486808 nova_compute[259627]: 2025-10-14 09:29:18.547 2 DEBUG nova.network.neutron [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:29:18 np0005486808 nova_compute[259627]: 2025-10-14 09:29:18.569 2 INFO nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:29:18 np0005486808 nova_compute[259627]: 2025-10-14 09:29:18.591 2 DEBUG nova.compute.manager [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:29:18 np0005486808 nova_compute[259627]: 2025-10-14 09:29:18.694 2 DEBUG nova.compute.manager [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:29:18 np0005486808 nova_compute[259627]: 2025-10-14 09:29:18.696 2 DEBUG nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:29:18 np0005486808 nova_compute[259627]: 2025-10-14 09:29:18.696 2 INFO nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Creating image(s)#033[00m
Oct 14 05:29:18 np0005486808 nova_compute[259627]: 2025-10-14 09:29:18.715 2 DEBUG nova.storage.rbd_utils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:29:18 np0005486808 nova_compute[259627]: 2025-10-14 09:29:18.736 2 DEBUG nova.storage.rbd_utils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:29:18 np0005486808 nova_compute[259627]: 2025-10-14 09:29:18.755 2 DEBUG nova.storage.rbd_utils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:29:18 np0005486808 nova_compute[259627]: 2025-10-14 09:29:18.758 2 DEBUG oslo_concurrency.processutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:29:18 np0005486808 nova_compute[259627]: 2025-10-14 09:29:18.839 2 DEBUG oslo_concurrency.processutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:29:18 np0005486808 nova_compute[259627]: 2025-10-14 09:29:18.840 2 DEBUG oslo_concurrency.lockutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:29:18 np0005486808 nova_compute[259627]: 2025-10-14 09:29:18.840 2 DEBUG oslo_concurrency.lockutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:29:18 np0005486808 nova_compute[259627]: 2025-10-14 09:29:18.841 2 DEBUG oslo_concurrency.lockutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:29:18 np0005486808 nova_compute[259627]: 2025-10-14 09:29:18.860 2 DEBUG nova.storage.rbd_utils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:29:18 np0005486808 nova_compute[259627]: 2025-10-14 09:29:18.864 2 DEBUG oslo_concurrency.processutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:29:18 np0005486808 nova_compute[259627]: 2025-10-14 09:29:18.901 2 DEBUG nova.policy [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30cd4a7832e74a8ba07238937405c4ea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:29:19 np0005486808 nova_compute[259627]: 2025-10-14 09:29:19.145 2 DEBUG oslo_concurrency.processutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.281s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:29:19 np0005486808 nova_compute[259627]: 2025-10-14 09:29:19.233 2 DEBUG nova.storage.rbd_utils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] resizing rbd image 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:29:19 np0005486808 nova_compute[259627]: 2025-10-14 09:29:19.324 2 DEBUG nova.objects.instance [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'migration_context' on Instance uuid 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:29:19 np0005486808 nova_compute[259627]: 2025-10-14 09:29:19.343 2 DEBUG nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:29:19 np0005486808 nova_compute[259627]: 2025-10-14 09:29:19.343 2 DEBUG nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Ensure instance console log exists: /var/lib/nova/instances/9bab7e53-30b3-4cd0-ad07-3cc9b5c05492/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:29:19 np0005486808 nova_compute[259627]: 2025-10-14 09:29:19.344 2 DEBUG oslo_concurrency.lockutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:29:19 np0005486808 nova_compute[259627]: 2025-10-14 09:29:19.344 2 DEBUG oslo_concurrency.lockutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:29:19 np0005486808 nova_compute[259627]: 2025-10-14 09:29:19.344 2 DEBUG oslo_concurrency.lockutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:29:19 np0005486808 nova_compute[259627]: 2025-10-14 09:29:19.640 2 DEBUG nova.network.neutron [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Successfully created port: 312d35c6-7aa5-4056-b4ed-679cf0e1a12a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:29:20 np0005486808 nova_compute[259627]: 2025-10-14 09:29:20.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2286: 305 pgs: 305 active+clean; 45 MiB data, 888 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 47 KiB/s wr, 55 op/s
Oct 14 05:29:20 np0005486808 nova_compute[259627]: 2025-10-14 09:29:20.446 2 DEBUG nova.network.neutron [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Successfully updated port: 312d35c6-7aa5-4056-b4ed-679cf0e1a12a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:29:20 np0005486808 nova_compute[259627]: 2025-10-14 09:29:20.468 2 DEBUG oslo_concurrency.lockutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "refresh_cache-9bab7e53-30b3-4cd0-ad07-3cc9b5c05492" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:29:20 np0005486808 nova_compute[259627]: 2025-10-14 09:29:20.469 2 DEBUG oslo_concurrency.lockutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquired lock "refresh_cache-9bab7e53-30b3-4cd0-ad07-3cc9b5c05492" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:29:20 np0005486808 nova_compute[259627]: 2025-10-14 09:29:20.469 2 DEBUG nova.network.neutron [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:29:20 np0005486808 nova_compute[259627]: 2025-10-14 09:29:20.491 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434145.4899757, fb3db81b-4d6f-4736-9d4b-b1900fad6488 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:29:20 np0005486808 nova_compute[259627]: 2025-10-14 09:29:20.492 2 INFO nova.compute.manager [-] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:29:20 np0005486808 nova_compute[259627]: 2025-10-14 09:29:20.520 2 DEBUG nova.compute.manager [None req-e27d8aab-7e7a-4f2d-91e0-09bbf36e269a - - - - - -] [instance: fb3db81b-4d6f-4736-9d4b-b1900fad6488] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:29:20 np0005486808 nova_compute[259627]: 2025-10-14 09:29:20.576 2 DEBUG nova.compute.manager [req-a01e5cb9-9f36-4c08-b9b8-44e0ee54e982 req-4a73e4d3-02aa-4b06-96b8-1b32af5038cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Received event network-changed-312d35c6-7aa5-4056-b4ed-679cf0e1a12a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:29:20 np0005486808 nova_compute[259627]: 2025-10-14 09:29:20.577 2 DEBUG nova.compute.manager [req-a01e5cb9-9f36-4c08-b9b8-44e0ee54e982 req-4a73e4d3-02aa-4b06-96b8-1b32af5038cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Refreshing instance network info cache due to event network-changed-312d35c6-7aa5-4056-b4ed-679cf0e1a12a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:29:20 np0005486808 nova_compute[259627]: 2025-10-14 09:29:20.577 2 DEBUG oslo_concurrency.lockutils [req-a01e5cb9-9f36-4c08-b9b8-44e0ee54e982 req-4a73e4d3-02aa-4b06-96b8-1b32af5038cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-9bab7e53-30b3-4cd0-ad07-3cc9b5c05492" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:29:20 np0005486808 nova_compute[259627]: 2025-10-14 09:29:20.671 2 DEBUG nova.network.neutron [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:29:20 np0005486808 nova_compute[259627]: 2025-10-14 09:29:20.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:21 np0005486808 nova_compute[259627]: 2025-10-14 09:29:21.628 2 DEBUG nova.network.neutron [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Updating instance_info_cache with network_info: [{"id": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "address": "fa:16:3e:40:44:e9", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe40:44e9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap312d35c6-7a", "ovs_interfaceid": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:29:21 np0005486808 nova_compute[259627]: 2025-10-14 09:29:21.654 2 DEBUG oslo_concurrency.lockutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Releasing lock "refresh_cache-9bab7e53-30b3-4cd0-ad07-3cc9b5c05492" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:29:21 np0005486808 nova_compute[259627]: 2025-10-14 09:29:21.655 2 DEBUG nova.compute.manager [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Instance network_info: |[{"id": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "address": "fa:16:3e:40:44:e9", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe40:44e9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap312d35c6-7a", "ovs_interfaceid": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:29:21 np0005486808 nova_compute[259627]: 2025-10-14 09:29:21.655 2 DEBUG oslo_concurrency.lockutils [req-a01e5cb9-9f36-4c08-b9b8-44e0ee54e982 req-4a73e4d3-02aa-4b06-96b8-1b32af5038cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-9bab7e53-30b3-4cd0-ad07-3cc9b5c05492" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:29:21 np0005486808 nova_compute[259627]: 2025-10-14 09:29:21.655 2 DEBUG nova.network.neutron [req-a01e5cb9-9f36-4c08-b9b8-44e0ee54e982 req-4a73e4d3-02aa-4b06-96b8-1b32af5038cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Refreshing network info cache for port 312d35c6-7aa5-4056-b4ed-679cf0e1a12a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:29:21 np0005486808 nova_compute[259627]: 2025-10-14 09:29:21.659 2 DEBUG nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Start _get_guest_xml network_info=[{"id": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "address": "fa:16:3e:40:44:e9", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe40:44e9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap312d35c6-7a", "ovs_interfaceid": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:29:21 np0005486808 nova_compute[259627]: 2025-10-14 09:29:21.665 2 WARNING nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:29:21 np0005486808 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 05:29:21 np0005486808 nova_compute[259627]: 2025-10-14 09:29:21.676 2 DEBUG nova.virt.libvirt.host [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:29:21 np0005486808 nova_compute[259627]: 2025-10-14 09:29:21.677 2 DEBUG nova.virt.libvirt.host [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:29:21 np0005486808 nova_compute[259627]: 2025-10-14 09:29:21.682 2 DEBUG nova.virt.libvirt.host [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:29:21 np0005486808 nova_compute[259627]: 2025-10-14 09:29:21.682 2 DEBUG nova.virt.libvirt.host [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:29:21 np0005486808 nova_compute[259627]: 2025-10-14 09:29:21.683 2 DEBUG nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:29:21 np0005486808 nova_compute[259627]: 2025-10-14 09:29:21.684 2 DEBUG nova.virt.hardware [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:29:21 np0005486808 nova_compute[259627]: 2025-10-14 09:29:21.684 2 DEBUG nova.virt.hardware [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:29:21 np0005486808 nova_compute[259627]: 2025-10-14 09:29:21.685 2 DEBUG nova.virt.hardware [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:29:21 np0005486808 nova_compute[259627]: 2025-10-14 09:29:21.685 2 DEBUG nova.virt.hardware [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:29:21 np0005486808 nova_compute[259627]: 2025-10-14 09:29:21.686 2 DEBUG nova.virt.hardware [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:29:21 np0005486808 nova_compute[259627]: 2025-10-14 09:29:21.686 2 DEBUG nova.virt.hardware [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:29:21 np0005486808 nova_compute[259627]: 2025-10-14 09:29:21.687 2 DEBUG nova.virt.hardware [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:29:21 np0005486808 nova_compute[259627]: 2025-10-14 09:29:21.687 2 DEBUG nova.virt.hardware [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:29:21 np0005486808 nova_compute[259627]: 2025-10-14 09:29:21.688 2 DEBUG nova.virt.hardware [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:29:21 np0005486808 nova_compute[259627]: 2025-10-14 09:29:21.688 2 DEBUG nova.virt.hardware [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:29:21 np0005486808 nova_compute[259627]: 2025-10-14 09:29:21.689 2 DEBUG nova.virt.hardware [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:29:21 np0005486808 nova_compute[259627]: 2025-10-14 09:29:21.693 2 DEBUG oslo_concurrency.processutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:29:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:29:22 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1723144166' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:29:22 np0005486808 nova_compute[259627]: 2025-10-14 09:29:22.187 2 DEBUG oslo_concurrency.processutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:29:22 np0005486808 nova_compute[259627]: 2025-10-14 09:29:22.221 2 DEBUG nova.storage.rbd_utils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:29:22 np0005486808 nova_compute[259627]: 2025-10-14 09:29:22.226 2 DEBUG oslo_concurrency.processutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:29:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2287: 305 pgs: 305 active+clean; 88 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 1.8 MiB/s wr, 72 op/s
Oct 14 05:29:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:29:22 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/48654064' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:29:22 np0005486808 nova_compute[259627]: 2025-10-14 09:29:22.765 2 DEBUG oslo_concurrency.processutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:29:22 np0005486808 nova_compute[259627]: 2025-10-14 09:29:22.768 2 DEBUG nova.virt.libvirt.vif [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:29:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-766373181',display_name='tempest-TestGettingAddress-server-766373181',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-766373181',id=133,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP3NS95m6mTTkHzJJUJYRW56WG9z1F4s978ToWhMf7sPvH/Oc2BrnynBRS1TQVQWP6VVqaEePs7ictqhZIZyqMTpbO+UGDq3FfVttcWsPDDYdiFqglsH1qUqUZlXlWfhiQ==',key_name='tempest-TestGettingAddress-313667449',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-kdy1goa2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:29:18Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=9bab7e53-30b3-4cd0-ad07-3cc9b5c05492,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "address": "fa:16:3e:40:44:e9", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe40:44e9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap312d35c6-7a", "ovs_interfaceid": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:29:22 np0005486808 nova_compute[259627]: 2025-10-14 09:29:22.769 2 DEBUG nova.network.os_vif_util [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "address": "fa:16:3e:40:44:e9", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe40:44e9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap312d35c6-7a", "ovs_interfaceid": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:29:22 np0005486808 nova_compute[259627]: 2025-10-14 09:29:22.770 2 DEBUG nova.network.os_vif_util [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:44:e9,bridge_name='br-int',has_traffic_filtering=True,id=312d35c6-7aa5-4056-b4ed-679cf0e1a12a,network=Network(3cbcf7e5-ac17-454b-893d-3fda266aa395),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap312d35c6-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:29:22 np0005486808 nova_compute[259627]: 2025-10-14 09:29:22.772 2 DEBUG nova.objects.instance [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'pci_devices' on Instance uuid 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:29:22 np0005486808 nova_compute[259627]: 2025-10-14 09:29:22.786 2 DEBUG nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:29:22 np0005486808 nova_compute[259627]:  <uuid>9bab7e53-30b3-4cd0-ad07-3cc9b5c05492</uuid>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:  <name>instance-00000085</name>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:29:22 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:      <nova:name>tempest-TestGettingAddress-server-766373181</nova:name>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:29:21</nova:creationTime>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:29:22 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:        <nova:user uuid="30cd4a7832e74a8ba07238937405c4ea">tempest-TestGettingAddress-262346303-project-member</nova:user>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:        <nova:project uuid="878ad3f42dd94bef8cc66ab3d67f03bf">tempest-TestGettingAddress-262346303</nova:project>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:        <nova:port uuid="312d35c6-7aa5-4056-b4ed-679cf0e1a12a">
Oct 14 05:29:22 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe40:44e9" ipVersion="6"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:      <entry name="serial">9bab7e53-30b3-4cd0-ad07-3cc9b5c05492</entry>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:      <entry name="uuid">9bab7e53-30b3-4cd0-ad07-3cc9b5c05492</entry>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:29:22 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/9bab7e53-30b3-4cd0-ad07-3cc9b5c05492_disk">
Oct 14 05:29:22 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:29:22 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:29:22 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/9bab7e53-30b3-4cd0-ad07-3cc9b5c05492_disk.config">
Oct 14 05:29:22 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:29:22 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:29:22 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:40:44:e9"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:      <target dev="tap312d35c6-7a"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:29:22 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/9bab7e53-30b3-4cd0-ad07-3cc9b5c05492/console.log" append="off"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:29:22 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:29:22 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:29:22 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:29:22 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:29:22 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:29:22 np0005486808 nova_compute[259627]: 2025-10-14 09:29:22.787 2 DEBUG nova.compute.manager [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Preparing to wait for external event network-vif-plugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:29:22 np0005486808 nova_compute[259627]: 2025-10-14 09:29:22.788 2 DEBUG oslo_concurrency.lockutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:29:22 np0005486808 nova_compute[259627]: 2025-10-14 09:29:22.788 2 DEBUG oslo_concurrency.lockutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:29:22 np0005486808 nova_compute[259627]: 2025-10-14 09:29:22.788 2 DEBUG oslo_concurrency.lockutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:29:22 np0005486808 nova_compute[259627]: 2025-10-14 09:29:22.789 2 DEBUG nova.virt.libvirt.vif [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:29:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-766373181',display_name='tempest-TestGettingAddress-server-766373181',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-766373181',id=133,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP3NS95m6mTTkHzJJUJYRW56WG9z1F4s978ToWhMf7sPvH/Oc2BrnynBRS1TQVQWP6VVqaEePs7ictqhZIZyqMTpbO+UGDq3FfVttcWsPDDYdiFqglsH1qUqUZlXlWfhiQ==',key_name='tempest-TestGettingAddress-313667449',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-kdy1goa2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:29:18Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=9bab7e53-30b3-4cd0-ad07-3cc9b5c05492,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "address": "fa:16:3e:40:44:e9", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe40:44e9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap312d35c6-7a", "ovs_interfaceid": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:29:22 np0005486808 nova_compute[259627]: 2025-10-14 09:29:22.789 2 DEBUG nova.network.os_vif_util [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "address": "fa:16:3e:40:44:e9", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe40:44e9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap312d35c6-7a", "ovs_interfaceid": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:29:22 np0005486808 nova_compute[259627]: 2025-10-14 09:29:22.790 2 DEBUG nova.network.os_vif_util [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:44:e9,bridge_name='br-int',has_traffic_filtering=True,id=312d35c6-7aa5-4056-b4ed-679cf0e1a12a,network=Network(3cbcf7e5-ac17-454b-893d-3fda266aa395),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap312d35c6-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:29:22 np0005486808 nova_compute[259627]: 2025-10-14 09:29:22.790 2 DEBUG os_vif [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:44:e9,bridge_name='br-int',has_traffic_filtering=True,id=312d35c6-7aa5-4056-b4ed-679cf0e1a12a,network=Network(3cbcf7e5-ac17-454b-893d-3fda266aa395),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap312d35c6-7a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:29:22 np0005486808 nova_compute[259627]: 2025-10-14 09:29:22.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:22 np0005486808 nova_compute[259627]: 2025-10-14 09:29:22.791 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:29:22 np0005486808 nova_compute[259627]: 2025-10-14 09:29:22.791 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:29:22 np0005486808 nova_compute[259627]: 2025-10-14 09:29:22.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:22 np0005486808 nova_compute[259627]: 2025-10-14 09:29:22.795 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap312d35c6-7a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:29:22 np0005486808 nova_compute[259627]: 2025-10-14 09:29:22.795 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap312d35c6-7a, col_values=(('external_ids', {'iface-id': '312d35c6-7aa5-4056-b4ed-679cf0e1a12a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:40:44:e9', 'vm-uuid': '9bab7e53-30b3-4cd0-ad07-3cc9b5c05492'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:29:22 np0005486808 nova_compute[259627]: 2025-10-14 09:29:22.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:22 np0005486808 NetworkManager[44885]: <info>  [1760434162.8325] manager: (tap312d35c6-7a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/576)
Oct 14 05:29:22 np0005486808 nova_compute[259627]: 2025-10-14 09:29:22.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:29:22 np0005486808 nova_compute[259627]: 2025-10-14 09:29:22.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:22 np0005486808 nova_compute[259627]: 2025-10-14 09:29:22.838 2 INFO os_vif [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:44:e9,bridge_name='br-int',has_traffic_filtering=True,id=312d35c6-7aa5-4056-b4ed-679cf0e1a12a,network=Network(3cbcf7e5-ac17-454b-893d-3fda266aa395),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap312d35c6-7a')#033[00m
Oct 14 05:29:22 np0005486808 nova_compute[259627]: 2025-10-14 09:29:22.920 2 DEBUG nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:29:22 np0005486808 nova_compute[259627]: 2025-10-14 09:29:22.920 2 DEBUG nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:29:22 np0005486808 nova_compute[259627]: 2025-10-14 09:29:22.920 2 DEBUG nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:40:44:e9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:29:22 np0005486808 nova_compute[259627]: 2025-10-14 09:29:22.921 2 INFO nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Using config drive#033[00m
Oct 14 05:29:22 np0005486808 nova_compute[259627]: 2025-10-14 09:29:22.947 2 DEBUG nova.storage.rbd_utils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:29:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:29:23 np0005486808 nova_compute[259627]: 2025-10-14 09:29:23.271 2 INFO nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Creating config drive at /var/lib/nova/instances/9bab7e53-30b3-4cd0-ad07-3cc9b5c05492/disk.config#033[00m
Oct 14 05:29:23 np0005486808 nova_compute[259627]: 2025-10-14 09:29:23.280 2 DEBUG oslo_concurrency.processutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9bab7e53-30b3-4cd0-ad07-3cc9b5c05492/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfazezws5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:29:23 np0005486808 nova_compute[259627]: 2025-10-14 09:29:23.448 2 DEBUG oslo_concurrency.processutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9bab7e53-30b3-4cd0-ad07-3cc9b5c05492/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfazezws5" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:29:23 np0005486808 nova_compute[259627]: 2025-10-14 09:29:23.488 2 DEBUG nova.storage.rbd_utils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:29:23 np0005486808 nova_compute[259627]: 2025-10-14 09:29:23.493 2 DEBUG oslo_concurrency.processutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9bab7e53-30b3-4cd0-ad07-3cc9b5c05492/disk.config 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:29:23 np0005486808 nova_compute[259627]: 2025-10-14 09:29:23.572 2 DEBUG nova.network.neutron [req-a01e5cb9-9f36-4c08-b9b8-44e0ee54e982 req-4a73e4d3-02aa-4b06-96b8-1b32af5038cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Updated VIF entry in instance network info cache for port 312d35c6-7aa5-4056-b4ed-679cf0e1a12a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:29:23 np0005486808 nova_compute[259627]: 2025-10-14 09:29:23.573 2 DEBUG nova.network.neutron [req-a01e5cb9-9f36-4c08-b9b8-44e0ee54e982 req-4a73e4d3-02aa-4b06-96b8-1b32af5038cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Updating instance_info_cache with network_info: [{"id": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "address": "fa:16:3e:40:44:e9", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe40:44e9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap312d35c6-7a", "ovs_interfaceid": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:29:23 np0005486808 nova_compute[259627]: 2025-10-14 09:29:23.596 2 DEBUG oslo_concurrency.lockutils [req-a01e5cb9-9f36-4c08-b9b8-44e0ee54e982 req-4a73e4d3-02aa-4b06-96b8-1b32af5038cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-9bab7e53-30b3-4cd0-ad07-3cc9b5c05492" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:29:23 np0005486808 nova_compute[259627]: 2025-10-14 09:29:23.696 2 DEBUG oslo_concurrency.processutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9bab7e53-30b3-4cd0-ad07-3cc9b5c05492/disk.config 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.203s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:29:23 np0005486808 nova_compute[259627]: 2025-10-14 09:29:23.697 2 INFO nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Deleting local config drive /var/lib/nova/instances/9bab7e53-30b3-4cd0-ad07-3cc9b5c05492/disk.config because it was imported into RBD.#033[00m
Oct 14 05:29:23 np0005486808 kernel: tap312d35c6-7a: entered promiscuous mode
Oct 14 05:29:23 np0005486808 NetworkManager[44885]: <info>  [1760434163.7728] manager: (tap312d35c6-7a): new Tun device (/org/freedesktop/NetworkManager/Devices/577)
Oct 14 05:29:23 np0005486808 nova_compute[259627]: 2025-10-14 09:29:23.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:23 np0005486808 ovn_controller[152662]: 2025-10-14T09:29:23Z|01420|binding|INFO|Claiming lport 312d35c6-7aa5-4056-b4ed-679cf0e1a12a for this chassis.
Oct 14 05:29:23 np0005486808 ovn_controller[152662]: 2025-10-14T09:29:23Z|01421|binding|INFO|312d35c6-7aa5-4056-b4ed-679cf0e1a12a: Claiming fa:16:3e:40:44:e9 10.100.0.8 2001:db8::f816:3eff:fe40:44e9
Oct 14 05:29:23 np0005486808 nova_compute[259627]: 2025-10-14 09:29:23.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:23 np0005486808 nova_compute[259627]: 2025-10-14 09:29:23.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:23 np0005486808 nova_compute[259627]: 2025-10-14 09:29:23.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:23 np0005486808 systemd-udevd[396550]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:29:23 np0005486808 NetworkManager[44885]: <info>  [1760434163.8135] device (tap312d35c6-7a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:29:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:23.811 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:44:e9 10.100.0.8 2001:db8::f816:3eff:fe40:44e9'], port_security=['fa:16:3e:40:44:e9 10.100.0.8 2001:db8::f816:3eff:fe40:44e9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28 2001:db8::f816:3eff:fe40:44e9/64', 'neutron:device_id': '9bab7e53-30b3-4cd0-ad07-3cc9b5c05492', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3cbcf7e5-ac17-454b-893d-3fda266aa395', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '598825b2-c17a-4454-af22-d97e9570639b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=426ad543-f963-4015-b706-28bdf047c321, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=312d35c6-7aa5-4056-b4ed-679cf0e1a12a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:29:23 np0005486808 NetworkManager[44885]: <info>  [1760434163.8161] device (tap312d35c6-7a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:29:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:23.814 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 312d35c6-7aa5-4056-b4ed-679cf0e1a12a in datapath 3cbcf7e5-ac17-454b-893d-3fda266aa395 bound to our chassis#033[00m
Oct 14 05:29:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:23.817 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3cbcf7e5-ac17-454b-893d-3fda266aa395#033[00m
Oct 14 05:29:23 np0005486808 systemd-machined[214636]: New machine qemu-166-instance-00000085.
Oct 14 05:29:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:23.832 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a508083a-8f8e-40ed-87d2-dec4836b25a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:23.833 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3cbcf7e5-a1 in ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:29:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:23.835 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3cbcf7e5-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:29:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:23.835 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f814d7f8-e67c-41ea-a591-f8988d3e1d34]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:23.836 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d59ff287-a8bd-49f5-9fe3-f25f4d3a143d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:23 np0005486808 systemd[1]: Started Virtual Machine qemu-166-instance-00000085.
Oct 14 05:29:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:23.848 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[a48e10a7-4c41-4f95-a21a-911b49281161]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:23.877 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[abc5121e-09b9-4434-aac6-b76e644cf24b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:23 np0005486808 ovn_controller[152662]: 2025-10-14T09:29:23Z|01422|binding|INFO|Setting lport 312d35c6-7aa5-4056-b4ed-679cf0e1a12a ovn-installed in OVS
Oct 14 05:29:23 np0005486808 ovn_controller[152662]: 2025-10-14T09:29:23Z|01423|binding|INFO|Setting lport 312d35c6-7aa5-4056-b4ed-679cf0e1a12a up in Southbound
Oct 14 05:29:23 np0005486808 nova_compute[259627]: 2025-10-14 09:29:23.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:23.936 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[502bae7a-a3ad-481e-acc5-99a9f5edca96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:23 np0005486808 NetworkManager[44885]: <info>  [1760434163.9480] manager: (tap3cbcf7e5-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/578)
Oct 14 05:29:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:23.947 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9bccbe0f-d8a3-41ff-945d-2956d7c6d4dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:23.987 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[935ff4ec-b98d-4ade-9d40-8f372fe15841]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:23.991 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ce013b93-1e1c-4999-81ff-d44eb417f436]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:24 np0005486808 NetworkManager[44885]: <info>  [1760434164.0101] device (tap3cbcf7e5-a0): carrier: link connected
Oct 14 05:29:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:24.015 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8b104349-f716-4837-b055-eef56fe146bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:24.032 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ce99378e-c4a4-4521-ac2f-66925ef1d2ee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3cbcf7e5-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:43:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 405], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 794277, 'reachable_time': 28589, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396585, 'error': None, 'target': 'ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:24.047 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1978cf1a-3b5a-4ca5-9b2d-5e1191836c87]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4c:431c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 794277, 'tstamp': 794277}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 396586, 'error': None, 'target': 'ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:24.064 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9fc10ed3-92d3-4f8f-8437-62f506552b6c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3cbcf7e5-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:43:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 405], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 794277, 'reachable_time': 28589, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 396587, 'error': None, 'target': 'ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:24.097 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a617f439-edcc-43f2-a322-da9dc8a392a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:24.169 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bd8aad28-3810-4f3e-a470-b2ab56615180]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:24.171 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3cbcf7e5-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:29:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:24.171 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:29:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:24.172 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3cbcf7e5-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:29:24 np0005486808 NetworkManager[44885]: <info>  [1760434164.1744] manager: (tap3cbcf7e5-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/579)
Oct 14 05:29:24 np0005486808 kernel: tap3cbcf7e5-a0: entered promiscuous mode
Oct 14 05:29:24 np0005486808 nova_compute[259627]: 2025-10-14 09:29:24.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:24.178 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3cbcf7e5-a0, col_values=(('external_ids', {'iface-id': '2a306a4f-b3d3-4c63-8490-e1049c247650'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:29:24 np0005486808 nova_compute[259627]: 2025-10-14 09:29:24.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:24 np0005486808 ovn_controller[152662]: 2025-10-14T09:29:24Z|01424|binding|INFO|Releasing lport 2a306a4f-b3d3-4c63-8490-e1049c247650 from this chassis (sb_readonly=0)
Oct 14 05:29:24 np0005486808 nova_compute[259627]: 2025-10-14 09:29:24.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:24.180 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3cbcf7e5-ac17-454b-893d-3fda266aa395.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3cbcf7e5-ac17-454b-893d-3fda266aa395.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:29:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:24.181 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2435e46e-6744-48d7-96de-a6abc87e0e8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:24.182 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:29:24 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:29:24 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:29:24 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-3cbcf7e5-ac17-454b-893d-3fda266aa395
Oct 14 05:29:24 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:29:24 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:29:24 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:29:24 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/3cbcf7e5-ac17-454b-893d-3fda266aa395.pid.haproxy
Oct 14 05:29:24 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:29:24 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:29:24 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:29:24 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:29:24 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:29:24 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:29:24 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:29:24 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:29:24 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:29:24 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:29:24 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:29:24 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:29:24 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:29:24 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:29:24 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:29:24 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:29:24 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:29:24 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:29:24 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:29:24 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:29:24 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 3cbcf7e5-ac17-454b-893d-3fda266aa395
Oct 14 05:29:24 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:29:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:24.183 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395', 'env', 'PROCESS_TAG=haproxy-3cbcf7e5-ac17-454b-893d-3fda266aa395', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3cbcf7e5-ac17-454b-893d-3fda266aa395.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:29:24 np0005486808 nova_compute[259627]: 2025-10-14 09:29:24.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2288: 305 pgs: 305 active+clean; 88 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Oct 14 05:29:24 np0005486808 podman[396659]: 2025-10-14 09:29:24.617449087 +0000 UTC m=+0.086379756 container create c291dbc64f7c689362a43eb55f867d9a826b81453de3eb822c9e704da2c09b3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3)
Oct 14 05:29:24 np0005486808 podman[396659]: 2025-10-14 09:29:24.571578283 +0000 UTC m=+0.040509002 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:29:24 np0005486808 systemd[1]: Started libpod-conmon-c291dbc64f7c689362a43eb55f867d9a826b81453de3eb822c9e704da2c09b3d.scope.
Oct 14 05:29:24 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:29:24 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba749212af37e81d7c4992e139c5f78aa6b1ef87bc093a5fe8a54df954b3324e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:29:24 np0005486808 podman[396659]: 2025-10-14 09:29:24.736430347 +0000 UTC m=+0.205361026 container init c291dbc64f7c689362a43eb55f867d9a826b81453de3eb822c9e704da2c09b3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:29:24 np0005486808 podman[396659]: 2025-10-14 09:29:24.741570804 +0000 UTC m=+0.210501453 container start c291dbc64f7c689362a43eb55f867d9a826b81453de3eb822c9e704da2c09b3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:29:24 np0005486808 neutron-haproxy-ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395[396675]: [NOTICE]   (396685) : New worker (396691) forked
Oct 14 05:29:24 np0005486808 neutron-haproxy-ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395[396675]: [NOTICE]   (396685) : Loading success.
Oct 14 05:29:24 np0005486808 nova_compute[259627]: 2025-10-14 09:29:24.786 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434164.7859428, 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:29:24 np0005486808 nova_compute[259627]: 2025-10-14 09:29:24.786 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] VM Started (Lifecycle Event)#033[00m
Oct 14 05:29:24 np0005486808 podman[396678]: 2025-10-14 09:29:24.796822009 +0000 UTC m=+0.059941552 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent)
Oct 14 05:29:24 np0005486808 nova_compute[259627]: 2025-10-14 09:29:24.810 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:29:24 np0005486808 nova_compute[259627]: 2025-10-14 09:29:24.814 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434164.78601, 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:29:24 np0005486808 nova_compute[259627]: 2025-10-14 09:29:24.814 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:29:24 np0005486808 nova_compute[259627]: 2025-10-14 09:29:24.841 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:29:24 np0005486808 nova_compute[259627]: 2025-10-14 09:29:24.845 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:29:24 np0005486808 nova_compute[259627]: 2025-10-14 09:29:24.872 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:29:24 np0005486808 podman[396710]: 2025-10-14 09:29:24.899834415 +0000 UTC m=+0.072685378 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller)
Oct 14 05:29:25 np0005486808 nova_compute[259627]: 2025-10-14 09:29:25.005 2 DEBUG nova.compute.manager [req-13299bc0-8c7b-4f71-8b5b-468fae691a2c req-b1b02a8f-2780-4806-9a17-63ce1c515c5e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Received event network-vif-plugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:29:25 np0005486808 nova_compute[259627]: 2025-10-14 09:29:25.006 2 DEBUG oslo_concurrency.lockutils [req-13299bc0-8c7b-4f71-8b5b-468fae691a2c req-b1b02a8f-2780-4806-9a17-63ce1c515c5e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:29:25 np0005486808 nova_compute[259627]: 2025-10-14 09:29:25.006 2 DEBUG oslo_concurrency.lockutils [req-13299bc0-8c7b-4f71-8b5b-468fae691a2c req-b1b02a8f-2780-4806-9a17-63ce1c515c5e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:29:25 np0005486808 nova_compute[259627]: 2025-10-14 09:29:25.007 2 DEBUG oslo_concurrency.lockutils [req-13299bc0-8c7b-4f71-8b5b-468fae691a2c req-b1b02a8f-2780-4806-9a17-63ce1c515c5e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:29:25 np0005486808 nova_compute[259627]: 2025-10-14 09:29:25.007 2 DEBUG nova.compute.manager [req-13299bc0-8c7b-4f71-8b5b-468fae691a2c req-b1b02a8f-2780-4806-9a17-63ce1c515c5e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Processing event network-vif-plugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:29:25 np0005486808 nova_compute[259627]: 2025-10-14 09:29:25.007 2 DEBUG nova.compute.manager [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:29:25 np0005486808 nova_compute[259627]: 2025-10-14 09:29:25.011 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434165.011257, 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:29:25 np0005486808 nova_compute[259627]: 2025-10-14 09:29:25.011 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:29:25 np0005486808 nova_compute[259627]: 2025-10-14 09:29:25.013 2 DEBUG nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:29:25 np0005486808 nova_compute[259627]: 2025-10-14 09:29:25.018 2 INFO nova.virt.libvirt.driver [-] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Instance spawned successfully.#033[00m
Oct 14 05:29:25 np0005486808 nova_compute[259627]: 2025-10-14 09:29:25.019 2 DEBUG nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:29:25 np0005486808 nova_compute[259627]: 2025-10-14 09:29:25.037 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:29:25 np0005486808 nova_compute[259627]: 2025-10-14 09:29:25.042 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:29:25 np0005486808 nova_compute[259627]: 2025-10-14 09:29:25.046 2 DEBUG nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:29:25 np0005486808 nova_compute[259627]: 2025-10-14 09:29:25.046 2 DEBUG nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:29:25 np0005486808 nova_compute[259627]: 2025-10-14 09:29:25.047 2 DEBUG nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:29:25 np0005486808 nova_compute[259627]: 2025-10-14 09:29:25.047 2 DEBUG nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:29:25 np0005486808 nova_compute[259627]: 2025-10-14 09:29:25.048 2 DEBUG nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:29:25 np0005486808 nova_compute[259627]: 2025-10-14 09:29:25.048 2 DEBUG nova.virt.libvirt.driver [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:29:25 np0005486808 nova_compute[259627]: 2025-10-14 09:29:25.069 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:29:25 np0005486808 nova_compute[259627]: 2025-10-14 09:29:25.094 2 INFO nova.compute.manager [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Took 6.40 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:29:25 np0005486808 nova_compute[259627]: 2025-10-14 09:29:25.095 2 DEBUG nova.compute.manager [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:29:25 np0005486808 nova_compute[259627]: 2025-10-14 09:29:25.152 2 INFO nova.compute.manager [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Took 7.34 seconds to build instance.#033[00m
Oct 14 05:29:25 np0005486808 nova_compute[259627]: 2025-10-14 09:29:25.167 2 DEBUG oslo_concurrency.lockutils [None req-f4c05265-ee24-4523-b9ed-fd70a9c743ba 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.413s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:29:25 np0005486808 nova_compute[259627]: 2025-10-14 09:29:25.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:25 np0005486808 nova_compute[259627]: 2025-10-14 09:29:25.796 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434150.795242, 7a110a3c-a2ca-4314-a190-28a4505cc26c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:29:25 np0005486808 nova_compute[259627]: 2025-10-14 09:29:25.798 2 INFO nova.compute.manager [-] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:29:25 np0005486808 nova_compute[259627]: 2025-10-14 09:29:25.825 2 DEBUG nova.compute.manager [None req-cdf0494d-3ed2-46c6-b8cb-4f100e072a39 - - - - - -] [instance: 7a110a3c-a2ca-4314-a190-28a4505cc26c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:29:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2289: 305 pgs: 305 active+clean; 88 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Oct 14 05:29:27 np0005486808 nova_compute[259627]: 2025-10-14 09:29:27.545 2 DEBUG nova.compute.manager [req-f81fb56d-e79c-47ed-9a6c-4dd65e9bfef9 req-7e8f029b-6e1f-4d7b-b455-90a4daefa5b2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Received event network-vif-plugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:29:27 np0005486808 nova_compute[259627]: 2025-10-14 09:29:27.546 2 DEBUG oslo_concurrency.lockutils [req-f81fb56d-e79c-47ed-9a6c-4dd65e9bfef9 req-7e8f029b-6e1f-4d7b-b455-90a4daefa5b2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:29:27 np0005486808 nova_compute[259627]: 2025-10-14 09:29:27.547 2 DEBUG oslo_concurrency.lockutils [req-f81fb56d-e79c-47ed-9a6c-4dd65e9bfef9 req-7e8f029b-6e1f-4d7b-b455-90a4daefa5b2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:29:27 np0005486808 nova_compute[259627]: 2025-10-14 09:29:27.547 2 DEBUG oslo_concurrency.lockutils [req-f81fb56d-e79c-47ed-9a6c-4dd65e9bfef9 req-7e8f029b-6e1f-4d7b-b455-90a4daefa5b2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:29:27 np0005486808 nova_compute[259627]: 2025-10-14 09:29:27.548 2 DEBUG nova.compute.manager [req-f81fb56d-e79c-47ed-9a6c-4dd65e9bfef9 req-7e8f029b-6e1f-4d7b-b455-90a4daefa5b2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] No waiting events found dispatching network-vif-plugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:29:27 np0005486808 nova_compute[259627]: 2025-10-14 09:29:27.548 2 WARNING nova.compute.manager [req-f81fb56d-e79c-47ed-9a6c-4dd65e9bfef9 req-7e8f029b-6e1f-4d7b-b455-90a4daefa5b2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Received unexpected event network-vif-plugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a for instance with vm_state active and task_state None.#033[00m
Oct 14 05:29:27 np0005486808 nova_compute[259627]: 2025-10-14 09:29:27.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:29:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2290: 305 pgs: 305 active+clean; 88 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 14 05:29:30 np0005486808 nova_compute[259627]: 2025-10-14 09:29:30.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:30 np0005486808 nova_compute[259627]: 2025-10-14 09:29:30.386 2 DEBUG oslo_concurrency.lockutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:29:30 np0005486808 nova_compute[259627]: 2025-10-14 09:29:30.387 2 DEBUG oslo_concurrency.lockutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:29:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2291: 305 pgs: 305 active+clean; 88 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Oct 14 05:29:30 np0005486808 nova_compute[259627]: 2025-10-14 09:29:30.476 2 DEBUG nova.compute.manager [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:29:30 np0005486808 nova_compute[259627]: 2025-10-14 09:29:30.566 2 DEBUG oslo_concurrency.lockutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:29:30 np0005486808 nova_compute[259627]: 2025-10-14 09:29:30.567 2 DEBUG oslo_concurrency.lockutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:29:30 np0005486808 nova_compute[259627]: 2025-10-14 09:29:30.575 2 DEBUG nova.virt.hardware [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:29:30 np0005486808 nova_compute[259627]: 2025-10-14 09:29:30.576 2 INFO nova.compute.claims [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:29:30 np0005486808 nova_compute[259627]: 2025-10-14 09:29:30.721 2 DEBUG oslo_concurrency.processutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:29:31 np0005486808 nova_compute[259627]: 2025-10-14 09:29:31.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:31 np0005486808 NetworkManager[44885]: <info>  [1760434171.0948] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/580)
Oct 14 05:29:31 np0005486808 NetworkManager[44885]: <info>  [1760434171.0968] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/581)
Oct 14 05:29:31 np0005486808 nova_compute[259627]: 2025-10-14 09:29:31.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:31 np0005486808 ovn_controller[152662]: 2025-10-14T09:29:31Z|01425|binding|INFO|Releasing lport 2a306a4f-b3d3-4c63-8490-e1049c247650 from this chassis (sb_readonly=0)
Oct 14 05:29:31 np0005486808 nova_compute[259627]: 2025-10-14 09:29:31.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:31 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:29:31 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/409564366' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:29:31 np0005486808 nova_compute[259627]: 2025-10-14 09:29:31.270 2 DEBUG oslo_concurrency.processutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:29:31 np0005486808 nova_compute[259627]: 2025-10-14 09:29:31.278 2 DEBUG nova.compute.provider_tree [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:29:31 np0005486808 nova_compute[259627]: 2025-10-14 09:29:31.300 2 DEBUG nova.scheduler.client.report [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:29:31 np0005486808 nova_compute[259627]: 2025-10-14 09:29:31.327 2 DEBUG oslo_concurrency.lockutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:29:31 np0005486808 nova_compute[259627]: 2025-10-14 09:29:31.328 2 DEBUG nova.compute.manager [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:29:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:31.377 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:29:31 np0005486808 nova_compute[259627]: 2025-10-14 09:29:31.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:31.379 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 14 05:29:31 np0005486808 nova_compute[259627]: 2025-10-14 09:29:31.393 2 DEBUG nova.compute.manager [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:29:31 np0005486808 nova_compute[259627]: 2025-10-14 09:29:31.394 2 DEBUG nova.network.neutron [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:29:31 np0005486808 nova_compute[259627]: 2025-10-14 09:29:31.423 2 INFO nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:29:31 np0005486808 nova_compute[259627]: 2025-10-14 09:29:31.432 2 DEBUG nova.compute.manager [req-b15f8a9c-4d93-4062-a288-cf882a17ee18 req-4f4a70d1-ad23-4f20-840a-601f6fdb144d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Received event network-changed-312d35c6-7aa5-4056-b4ed-679cf0e1a12a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:29:31 np0005486808 nova_compute[259627]: 2025-10-14 09:29:31.432 2 DEBUG nova.compute.manager [req-b15f8a9c-4d93-4062-a288-cf882a17ee18 req-4f4a70d1-ad23-4f20-840a-601f6fdb144d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Refreshing instance network info cache due to event network-changed-312d35c6-7aa5-4056-b4ed-679cf0e1a12a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:29:31 np0005486808 nova_compute[259627]: 2025-10-14 09:29:31.433 2 DEBUG oslo_concurrency.lockutils [req-b15f8a9c-4d93-4062-a288-cf882a17ee18 req-4f4a70d1-ad23-4f20-840a-601f6fdb144d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-9bab7e53-30b3-4cd0-ad07-3cc9b5c05492" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:29:31 np0005486808 nova_compute[259627]: 2025-10-14 09:29:31.433 2 DEBUG oslo_concurrency.lockutils [req-b15f8a9c-4d93-4062-a288-cf882a17ee18 req-4f4a70d1-ad23-4f20-840a-601f6fdb144d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-9bab7e53-30b3-4cd0-ad07-3cc9b5c05492" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:29:31 np0005486808 nova_compute[259627]: 2025-10-14 09:29:31.434 2 DEBUG nova.network.neutron [req-b15f8a9c-4d93-4062-a288-cf882a17ee18 req-4f4a70d1-ad23-4f20-840a-601f6fdb144d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Refreshing network info cache for port 312d35c6-7aa5-4056-b4ed-679cf0e1a12a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:29:31 np0005486808 nova_compute[259627]: 2025-10-14 09:29:31.449 2 DEBUG nova.compute.manager [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:29:31 np0005486808 nova_compute[259627]: 2025-10-14 09:29:31.555 2 DEBUG nova.compute.manager [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:29:31 np0005486808 nova_compute[259627]: 2025-10-14 09:29:31.558 2 DEBUG nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:29:31 np0005486808 nova_compute[259627]: 2025-10-14 09:29:31.558 2 INFO nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Creating image(s)#033[00m
Oct 14 05:29:31 np0005486808 nova_compute[259627]: 2025-10-14 09:29:31.593 2 DEBUG nova.storage.rbd_utils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:29:31 np0005486808 nova_compute[259627]: 2025-10-14 09:29:31.632 2 DEBUG nova.storage.rbd_utils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:29:31 np0005486808 nova_compute[259627]: 2025-10-14 09:29:31.662 2 DEBUG nova.storage.rbd_utils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:29:31 np0005486808 nova_compute[259627]: 2025-10-14 09:29:31.665 2 DEBUG oslo_concurrency.processutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:29:31 np0005486808 nova_compute[259627]: 2025-10-14 09:29:31.728 2 DEBUG nova.policy [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '20f3546ab30e42b5b641f67780316750', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f754bf649a2404fa8dee732f5aab36e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:29:31 np0005486808 nova_compute[259627]: 2025-10-14 09:29:31.770 2 DEBUG oslo_concurrency.processutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:29:31 np0005486808 nova_compute[259627]: 2025-10-14 09:29:31.771 2 DEBUG oslo_concurrency.lockutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:29:31 np0005486808 nova_compute[259627]: 2025-10-14 09:29:31.772 2 DEBUG oslo_concurrency.lockutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:29:31 np0005486808 nova_compute[259627]: 2025-10-14 09:29:31.772 2 DEBUG oslo_concurrency.lockutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:29:31 np0005486808 nova_compute[259627]: 2025-10-14 09:29:31.797 2 DEBUG nova.storage.rbd_utils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:29:31 np0005486808 nova_compute[259627]: 2025-10-14 09:29:31.801 2 DEBUG oslo_concurrency.processutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:29:32 np0005486808 nova_compute[259627]: 2025-10-14 09:29:32.116 2 DEBUG oslo_concurrency.processutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.315s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:29:32 np0005486808 nova_compute[259627]: 2025-10-14 09:29:32.174 2 DEBUG nova.storage.rbd_utils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] resizing rbd image f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:29:32 np0005486808 nova_compute[259627]: 2025-10-14 09:29:32.276 2 DEBUG nova.objects.instance [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'migration_context' on Instance uuid f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:29:32 np0005486808 nova_compute[259627]: 2025-10-14 09:29:32.299 2 DEBUG nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:29:32 np0005486808 nova_compute[259627]: 2025-10-14 09:29:32.299 2 DEBUG nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Ensure instance console log exists: /var/lib/nova/instances/f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:29:32 np0005486808 nova_compute[259627]: 2025-10-14 09:29:32.300 2 DEBUG oslo_concurrency.lockutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:29:32 np0005486808 nova_compute[259627]: 2025-10-14 09:29:32.301 2 DEBUG oslo_concurrency.lockutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:29:32 np0005486808 nova_compute[259627]: 2025-10-14 09:29:32.301 2 DEBUG oslo_concurrency.lockutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:29:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2292: 305 pgs: 305 active+clean; 88 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 100 op/s
Oct 14 05:29:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:29:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:29:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:29:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:29:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:29:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:29:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:29:32
Oct 14 05:29:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:29:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:29:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'vms', '.mgr', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'images', 'volumes', '.rgw.root', 'default.rgw.log']
Oct 14 05:29:32 np0005486808 nova_compute[259627]: 2025-10-14 09:29:32.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:29:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:29:33 np0005486808 nova_compute[259627]: 2025-10-14 09:29:33.244 2 DEBUG nova.network.neutron [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Successfully created port: 3eaf2ed5-bd76-4749-b8f0-58985c91a040 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:29:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:29:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:29:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:29:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:29:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:29:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:29:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:29:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:29:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:29:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:29:33 np0005486808 nova_compute[259627]: 2025-10-14 09:29:33.802 2 DEBUG nova.network.neutron [req-b15f8a9c-4d93-4062-a288-cf882a17ee18 req-4f4a70d1-ad23-4f20-840a-601f6fdb144d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Updated VIF entry in instance network info cache for port 312d35c6-7aa5-4056-b4ed-679cf0e1a12a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:29:33 np0005486808 nova_compute[259627]: 2025-10-14 09:29:33.803 2 DEBUG nova.network.neutron [req-b15f8a9c-4d93-4062-a288-cf882a17ee18 req-4f4a70d1-ad23-4f20-840a-601f6fdb144d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Updating instance_info_cache with network_info: [{"id": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "address": "fa:16:3e:40:44:e9", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe40:44e9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap312d35c6-7a", "ovs_interfaceid": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:29:33 np0005486808 nova_compute[259627]: 2025-10-14 09:29:33.835 2 DEBUG oslo_concurrency.lockutils [req-b15f8a9c-4d93-4062-a288-cf882a17ee18 req-4f4a70d1-ad23-4f20-840a-601f6fdb144d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-9bab7e53-30b3-4cd0-ad07-3cc9b5c05492" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:29:34 np0005486808 nova_compute[259627]: 2025-10-14 09:29:34.107 2 DEBUG nova.network.neutron [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Successfully updated port: 3eaf2ed5-bd76-4749-b8f0-58985c91a040 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:29:34 np0005486808 nova_compute[259627]: 2025-10-14 09:29:34.122 2 DEBUG oslo_concurrency.lockutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "refresh_cache-f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:29:34 np0005486808 nova_compute[259627]: 2025-10-14 09:29:34.122 2 DEBUG oslo_concurrency.lockutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquired lock "refresh_cache-f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:29:34 np0005486808 nova_compute[259627]: 2025-10-14 09:29:34.123 2 DEBUG nova.network.neutron [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:29:34 np0005486808 nova_compute[259627]: 2025-10-14 09:29:34.239 2 DEBUG nova.compute.manager [req-dd7c0ff9-85f9-4a40-a121-fbb39531800e req-6a9ffcfb-ce3e-45f5-a1e6-70e4c0c56c1a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Received event network-changed-3eaf2ed5-bd76-4749-b8f0-58985c91a040 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:29:34 np0005486808 nova_compute[259627]: 2025-10-14 09:29:34.240 2 DEBUG nova.compute.manager [req-dd7c0ff9-85f9-4a40-a121-fbb39531800e req-6a9ffcfb-ce3e-45f5-a1e6-70e4c0c56c1a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Refreshing instance network info cache due to event network-changed-3eaf2ed5-bd76-4749-b8f0-58985c91a040. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:29:34 np0005486808 nova_compute[259627]: 2025-10-14 09:29:34.241 2 DEBUG oslo_concurrency.lockutils [req-dd7c0ff9-85f9-4a40-a121-fbb39531800e req-6a9ffcfb-ce3e-45f5-a1e6-70e4c0c56c1a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:29:34 np0005486808 nova_compute[259627]: 2025-10-14 09:29:34.339 2 DEBUG nova.network.neutron [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:29:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2293: 305 pgs: 305 active+clean; 88 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 05:29:35 np0005486808 nova_compute[259627]: 2025-10-14 09:29:35.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:35.382 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:29:35 np0005486808 nova_compute[259627]: 2025-10-14 09:29:35.827 2 DEBUG nova.network.neutron [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Updating instance_info_cache with network_info: [{"id": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "address": "fa:16:3e:8c:c0:28", "network": {"id": "7a804945-64f0-4b07-8e8b-2ad2beb7451e", "bridge": "br-int", "label": "tempest-network-smoke--1677445378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eaf2ed5-bd", "ovs_interfaceid": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:29:35 np0005486808 nova_compute[259627]: 2025-10-14 09:29:35.853 2 DEBUG oslo_concurrency.lockutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Releasing lock "refresh_cache-f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:29:35 np0005486808 nova_compute[259627]: 2025-10-14 09:29:35.854 2 DEBUG nova.compute.manager [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Instance network_info: |[{"id": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "address": "fa:16:3e:8c:c0:28", "network": {"id": "7a804945-64f0-4b07-8e8b-2ad2beb7451e", "bridge": "br-int", "label": "tempest-network-smoke--1677445378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eaf2ed5-bd", "ovs_interfaceid": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:29:35 np0005486808 nova_compute[259627]: 2025-10-14 09:29:35.854 2 DEBUG oslo_concurrency.lockutils [req-dd7c0ff9-85f9-4a40-a121-fbb39531800e req-6a9ffcfb-ce3e-45f5-a1e6-70e4c0c56c1a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:29:35 np0005486808 nova_compute[259627]: 2025-10-14 09:29:35.855 2 DEBUG nova.network.neutron [req-dd7c0ff9-85f9-4a40-a121-fbb39531800e req-6a9ffcfb-ce3e-45f5-a1e6-70e4c0c56c1a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Refreshing network info cache for port 3eaf2ed5-bd76-4749-b8f0-58985c91a040 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:29:35 np0005486808 nova_compute[259627]: 2025-10-14 09:29:35.859 2 DEBUG nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Start _get_guest_xml network_info=[{"id": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "address": "fa:16:3e:8c:c0:28", "network": {"id": "7a804945-64f0-4b07-8e8b-2ad2beb7451e", "bridge": "br-int", "label": "tempest-network-smoke--1677445378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eaf2ed5-bd", "ovs_interfaceid": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:29:35 np0005486808 nova_compute[259627]: 2025-10-14 09:29:35.865 2 WARNING nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:29:35 np0005486808 nova_compute[259627]: 2025-10-14 09:29:35.870 2 DEBUG nova.virt.libvirt.host [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:29:35 np0005486808 nova_compute[259627]: 2025-10-14 09:29:35.871 2 DEBUG nova.virt.libvirt.host [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:29:35 np0005486808 nova_compute[259627]: 2025-10-14 09:29:35.876 2 DEBUG nova.virt.libvirt.host [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:29:35 np0005486808 nova_compute[259627]: 2025-10-14 09:29:35.877 2 DEBUG nova.virt.libvirt.host [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:29:35 np0005486808 nova_compute[259627]: 2025-10-14 09:29:35.877 2 DEBUG nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:29:35 np0005486808 nova_compute[259627]: 2025-10-14 09:29:35.878 2 DEBUG nova.virt.hardware [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:29:35 np0005486808 nova_compute[259627]: 2025-10-14 09:29:35.879 2 DEBUG nova.virt.hardware [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:29:35 np0005486808 nova_compute[259627]: 2025-10-14 09:29:35.879 2 DEBUG nova.virt.hardware [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:29:35 np0005486808 nova_compute[259627]: 2025-10-14 09:29:35.880 2 DEBUG nova.virt.hardware [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:29:35 np0005486808 nova_compute[259627]: 2025-10-14 09:29:35.880 2 DEBUG nova.virt.hardware [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:29:35 np0005486808 nova_compute[259627]: 2025-10-14 09:29:35.881 2 DEBUG nova.virt.hardware [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:29:35 np0005486808 nova_compute[259627]: 2025-10-14 09:29:35.881 2 DEBUG nova.virt.hardware [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:29:35 np0005486808 nova_compute[259627]: 2025-10-14 09:29:35.882 2 DEBUG nova.virt.hardware [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:29:35 np0005486808 nova_compute[259627]: 2025-10-14 09:29:35.882 2 DEBUG nova.virt.hardware [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:29:35 np0005486808 nova_compute[259627]: 2025-10-14 09:29:35.882 2 DEBUG nova.virt.hardware [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:29:35 np0005486808 nova_compute[259627]: 2025-10-14 09:29:35.883 2 DEBUG nova.virt.hardware [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:29:35 np0005486808 nova_compute[259627]: 2025-10-14 09:29:35.888 2 DEBUG oslo_concurrency.processutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:29:35 np0005486808 ovn_controller[152662]: 2025-10-14T09:29:35Z|00164|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:40:44:e9 10.100.0.8
Oct 14 05:29:35 np0005486808 ovn_controller[152662]: 2025-10-14T09:29:35Z|00165|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:40:44:e9 10.100.0.8
Oct 14 05:29:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:29:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2265088364' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:29:36 np0005486808 nova_compute[259627]: 2025-10-14 09:29:36.368 2 DEBUG oslo_concurrency.processutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:29:36 np0005486808 nova_compute[259627]: 2025-10-14 09:29:36.405 2 DEBUG nova.storage.rbd_utils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:29:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2294: 305 pgs: 305 active+clean; 148 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.3 MiB/s wr, 121 op/s
Oct 14 05:29:36 np0005486808 nova_compute[259627]: 2025-10-14 09:29:36.412 2 DEBUG oslo_concurrency.processutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:29:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:29:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4130448813' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:29:36 np0005486808 nova_compute[259627]: 2025-10-14 09:29:36.905 2 DEBUG oslo_concurrency.processutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:29:36 np0005486808 nova_compute[259627]: 2025-10-14 09:29:36.907 2 DEBUG nova.virt.libvirt.vif [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:29:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-325033662',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-325033662',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ac',id=134,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHiiMn1IPbRe1fqI+0HCf//9qD9/3fgeK9VnineHi5Mo2MAqU+EDOJGm1zH6VOW3MfRMImj3kzww8OxL4WA50EnUF8UJVTfKwxadLnhug9+sBPKdPWQa79dlH3frsHdZeQ==',key_name='tempest-TestSecurityGroupsBasicOps-1209305033',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-2bof4d8p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:29:31Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "address": "fa:16:3e:8c:c0:28", "network": {"id": "7a804945-64f0-4b07-8e8b-2ad2beb7451e", "bridge": "br-int", "label": "tempest-network-smoke--1677445378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eaf2ed5-bd", "ovs_interfaceid": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:29:36 np0005486808 nova_compute[259627]: 2025-10-14 09:29:36.908 2 DEBUG nova.network.os_vif_util [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "address": "fa:16:3e:8c:c0:28", "network": {"id": "7a804945-64f0-4b07-8e8b-2ad2beb7451e", "bridge": "br-int", "label": "tempest-network-smoke--1677445378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eaf2ed5-bd", "ovs_interfaceid": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:29:36 np0005486808 nova_compute[259627]: 2025-10-14 09:29:36.909 2 DEBUG nova.network.os_vif_util [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:c0:28,bridge_name='br-int',has_traffic_filtering=True,id=3eaf2ed5-bd76-4749-b8f0-58985c91a040,network=Network(7a804945-64f0-4b07-8e8b-2ad2beb7451e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3eaf2ed5-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:29:36 np0005486808 nova_compute[259627]: 2025-10-14 09:29:36.910 2 DEBUG nova.objects.instance [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'pci_devices' on Instance uuid f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:29:36 np0005486808 nova_compute[259627]: 2025-10-14 09:29:36.927 2 DEBUG nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:29:36 np0005486808 nova_compute[259627]:  <uuid>f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e</uuid>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:  <name>instance-00000086</name>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:29:36 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-325033662</nova:name>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:29:35</nova:creationTime>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:29:36 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:        <nova:user uuid="20f3546ab30e42b5b641f67780316750">tempest-TestSecurityGroupsBasicOps-1327646173-project-member</nova:user>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:        <nova:project uuid="5f754bf649a2404fa8dee732f5aab36e">tempest-TestSecurityGroupsBasicOps-1327646173</nova:project>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:        <nova:port uuid="3eaf2ed5-bd76-4749-b8f0-58985c91a040">
Oct 14 05:29:36 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:      <entry name="serial">f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e</entry>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:      <entry name="uuid">f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e</entry>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:29:36 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e_disk">
Oct 14 05:29:36 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:29:36 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:29:36 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e_disk.config">
Oct 14 05:29:36 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:29:36 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:29:36 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:8c:c0:28"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:      <target dev="tap3eaf2ed5-bd"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:29:36 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e/console.log" append="off"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:29:36 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:29:36 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:29:36 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:29:36 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:29:36 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:29:36 np0005486808 nova_compute[259627]: 2025-10-14 09:29:36.929 2 DEBUG nova.compute.manager [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Preparing to wait for external event network-vif-plugged-3eaf2ed5-bd76-4749-b8f0-58985c91a040 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:29:36 np0005486808 nova_compute[259627]: 2025-10-14 09:29:36.930 2 DEBUG oslo_concurrency.lockutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:29:36 np0005486808 nova_compute[259627]: 2025-10-14 09:29:36.930 2 DEBUG oslo_concurrency.lockutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:29:36 np0005486808 nova_compute[259627]: 2025-10-14 09:29:36.931 2 DEBUG oslo_concurrency.lockutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:29:36 np0005486808 nova_compute[259627]: 2025-10-14 09:29:36.932 2 DEBUG nova.virt.libvirt.vif [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:29:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-325033662',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-325033662',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ac',id=134,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHiiMn1IPbRe1fqI+0HCf//9qD9/3fgeK9VnineHi5Mo2MAqU+EDOJGm1zH6VOW3MfRMImj3kzww8OxL4WA50EnUF8UJVTfKwxadLnhug9+sBPKdPWQa79dlH3frsHdZeQ==',key_name='tempest-TestSecurityGroupsBasicOps-1209305033',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-2bof4d8p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:29:31Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "address": "fa:16:3e:8c:c0:28", "network": {"id": "7a804945-64f0-4b07-8e8b-2ad2beb7451e", "bridge": "br-int", "label": "tempest-network-smoke--1677445378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eaf2ed5-bd", "ovs_interfaceid": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:29:36 np0005486808 nova_compute[259627]: 2025-10-14 09:29:36.932 2 DEBUG nova.network.os_vif_util [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "address": "fa:16:3e:8c:c0:28", "network": {"id": "7a804945-64f0-4b07-8e8b-2ad2beb7451e", "bridge": "br-int", "label": "tempest-network-smoke--1677445378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eaf2ed5-bd", "ovs_interfaceid": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:29:36 np0005486808 nova_compute[259627]: 2025-10-14 09:29:36.933 2 DEBUG nova.network.os_vif_util [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:c0:28,bridge_name='br-int',has_traffic_filtering=True,id=3eaf2ed5-bd76-4749-b8f0-58985c91a040,network=Network(7a804945-64f0-4b07-8e8b-2ad2beb7451e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3eaf2ed5-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:29:36 np0005486808 nova_compute[259627]: 2025-10-14 09:29:36.934 2 DEBUG os_vif [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:c0:28,bridge_name='br-int',has_traffic_filtering=True,id=3eaf2ed5-bd76-4749-b8f0-58985c91a040,network=Network(7a804945-64f0-4b07-8e8b-2ad2beb7451e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3eaf2ed5-bd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:29:36 np0005486808 nova_compute[259627]: 2025-10-14 09:29:36.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:36 np0005486808 nova_compute[259627]: 2025-10-14 09:29:36.935 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:29:36 np0005486808 nova_compute[259627]: 2025-10-14 09:29:36.935 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:29:36 np0005486808 nova_compute[259627]: 2025-10-14 09:29:36.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:36 np0005486808 nova_compute[259627]: 2025-10-14 09:29:36.940 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3eaf2ed5-bd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:29:36 np0005486808 nova_compute[259627]: 2025-10-14 09:29:36.941 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3eaf2ed5-bd, col_values=(('external_ids', {'iface-id': '3eaf2ed5-bd76-4749-b8f0-58985c91a040', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8c:c0:28', 'vm-uuid': 'f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:29:36 np0005486808 nova_compute[259627]: 2025-10-14 09:29:36.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:36 np0005486808 NetworkManager[44885]: <info>  [1760434176.9747] manager: (tap3eaf2ed5-bd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/582)
Oct 14 05:29:36 np0005486808 nova_compute[259627]: 2025-10-14 09:29:36.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:29:36 np0005486808 nova_compute[259627]: 2025-10-14 09:29:36.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:36 np0005486808 nova_compute[259627]: 2025-10-14 09:29:36.984 2 INFO os_vif [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:c0:28,bridge_name='br-int',has_traffic_filtering=True,id=3eaf2ed5-bd76-4749-b8f0-58985c91a040,network=Network(7a804945-64f0-4b07-8e8b-2ad2beb7451e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3eaf2ed5-bd')#033[00m
Oct 14 05:29:37 np0005486808 nova_compute[259627]: 2025-10-14 09:29:37.038 2 DEBUG nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:29:37 np0005486808 nova_compute[259627]: 2025-10-14 09:29:37.039 2 DEBUG nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:29:37 np0005486808 nova_compute[259627]: 2025-10-14 09:29:37.039 2 DEBUG nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No VIF found with MAC fa:16:3e:8c:c0:28, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:29:37 np0005486808 nova_compute[259627]: 2025-10-14 09:29:37.040 2 INFO nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Using config drive#033[00m
Oct 14 05:29:37 np0005486808 nova_compute[259627]: 2025-10-14 09:29:37.061 2 DEBUG nova.storage.rbd_utils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:29:37 np0005486808 nova_compute[259627]: 2025-10-14 09:29:37.304 2 DEBUG nova.network.neutron [req-dd7c0ff9-85f9-4a40-a121-fbb39531800e req-6a9ffcfb-ce3e-45f5-a1e6-70e4c0c56c1a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Updated VIF entry in instance network info cache for port 3eaf2ed5-bd76-4749-b8f0-58985c91a040. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:29:37 np0005486808 nova_compute[259627]: 2025-10-14 09:29:37.305 2 DEBUG nova.network.neutron [req-dd7c0ff9-85f9-4a40-a121-fbb39531800e req-6a9ffcfb-ce3e-45f5-a1e6-70e4c0c56c1a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Updating instance_info_cache with network_info: [{"id": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "address": "fa:16:3e:8c:c0:28", "network": {"id": "7a804945-64f0-4b07-8e8b-2ad2beb7451e", "bridge": "br-int", "label": "tempest-network-smoke--1677445378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eaf2ed5-bd", "ovs_interfaceid": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:29:37 np0005486808 nova_compute[259627]: 2025-10-14 09:29:37.331 2 DEBUG oslo_concurrency.lockutils [req-dd7c0ff9-85f9-4a40-a121-fbb39531800e req-6a9ffcfb-ce3e-45f5-a1e6-70e4c0c56c1a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:29:37 np0005486808 nova_compute[259627]: 2025-10-14 09:29:37.439 2 INFO nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Creating config drive at /var/lib/nova/instances/f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e/disk.config#033[00m
Oct 14 05:29:37 np0005486808 nova_compute[259627]: 2025-10-14 09:29:37.450 2 DEBUG oslo_concurrency.processutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo8hydacz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:29:37 np0005486808 nova_compute[259627]: 2025-10-14 09:29:37.601 2 DEBUG oslo_concurrency.processutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo8hydacz" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:29:37 np0005486808 nova_compute[259627]: 2025-10-14 09:29:37.647 2 DEBUG nova.storage.rbd_utils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:29:37 np0005486808 nova_compute[259627]: 2025-10-14 09:29:37.653 2 DEBUG oslo_concurrency.processutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e/disk.config f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:29:37 np0005486808 nova_compute[259627]: 2025-10-14 09:29:37.861 2 DEBUG oslo_concurrency.processutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e/disk.config f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.209s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:29:37 np0005486808 nova_compute[259627]: 2025-10-14 09:29:37.863 2 INFO nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Deleting local config drive /var/lib/nova/instances/f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e/disk.config because it was imported into RBD.#033[00m
Oct 14 05:29:37 np0005486808 kernel: tap3eaf2ed5-bd: entered promiscuous mode
Oct 14 05:29:37 np0005486808 NetworkManager[44885]: <info>  [1760434177.9171] manager: (tap3eaf2ed5-bd): new Tun device (/org/freedesktop/NetworkManager/Devices/583)
Oct 14 05:29:37 np0005486808 ovn_controller[152662]: 2025-10-14T09:29:37Z|01426|binding|INFO|Claiming lport 3eaf2ed5-bd76-4749-b8f0-58985c91a040 for this chassis.
Oct 14 05:29:37 np0005486808 ovn_controller[152662]: 2025-10-14T09:29:37Z|01427|binding|INFO|3eaf2ed5-bd76-4749-b8f0-58985c91a040: Claiming fa:16:3e:8c:c0:28 10.100.0.14
Oct 14 05:29:37 np0005486808 nova_compute[259627]: 2025-10-14 09:29:37.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:37 np0005486808 ovn_controller[152662]: 2025-10-14T09:29:37Z|01428|binding|INFO|Setting lport 3eaf2ed5-bd76-4749-b8f0-58985c91a040 ovn-installed in OVS
Oct 14 05:29:37 np0005486808 nova_compute[259627]: 2025-10-14 09:29:37.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:37 np0005486808 nova_compute[259627]: 2025-10-14 09:29:37.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:37 np0005486808 systemd-machined[214636]: New machine qemu-167-instance-00000086.
Oct 14 05:29:37 np0005486808 systemd-udevd[397060]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:29:37 np0005486808 NetworkManager[44885]: <info>  [1760434177.9835] device (tap3eaf2ed5-bd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:29:37 np0005486808 NetworkManager[44885]: <info>  [1760434177.9845] device (tap3eaf2ed5-bd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:29:37 np0005486808 systemd[1]: Started Virtual Machine qemu-167-instance-00000086.
Oct 14 05:29:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.161 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:c0:28 10.100.0.14'], port_security=['fa:16:3e:8c:c0:28 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7a804945-64f0-4b07-8e8b-2ad2beb7451e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f754bf649a2404fa8dee732f5aab36e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9fc4eb48-6270-40e6-85b5-254b29fe464f fbf03768-a7c8-4472-a870-318ef2b37cd0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd3c523a-c2e8-4daa-a5c1-e2e6a8d953b6, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=3eaf2ed5-bd76-4749-b8f0-58985c91a040) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:29:38 np0005486808 ovn_controller[152662]: 2025-10-14T09:29:38Z|01429|binding|INFO|Setting lport 3eaf2ed5-bd76-4749-b8f0-58985c91a040 up in Southbound
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.163 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 3eaf2ed5-bd76-4749-b8f0-58985c91a040 in datapath 7a804945-64f0-4b07-8e8b-2ad2beb7451e bound to our chassis#033[00m
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.164 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7a804945-64f0-4b07-8e8b-2ad2beb7451e#033[00m
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.186 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4cb0470f-2f8b-457a-b71d-ca2eb1bd58f0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.187 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7a804945-61 in ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.188 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7a804945-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.189 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a8cf5b0a-77ff-4e8b-a61e-8d87a68f62a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.189 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[65b43c3a-e33b-4c41-b032-c92128aef563]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.212 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[658c1207-bf6f-46aa-acf0-8121730e49fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.244 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6199e169-4a54-4655-9468-7ca56b250e2d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.278 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[13a85189-b9c7-47f7-add8-f32e9f8d2509]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:38 np0005486808 systemd-udevd[397062]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:29:38 np0005486808 NetworkManager[44885]: <info>  [1760434178.2877] manager: (tap7a804945-60): new Veth device (/org/freedesktop/NetworkManager/Devices/584)
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.295 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8dc014fc-62d0-4271-9f2c-1d42a6760b05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.349 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1a9fd12b-d540-4e57-8b44-64ebfc77976b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.353 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1a3d89d8-f4a0-4eee-8310-9567ee7b5949]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:38 np0005486808 NetworkManager[44885]: <info>  [1760434178.3901] device (tap7a804945-60): carrier: link connected
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.393 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[123fbbda-aa8e-4d44-a223-c7b5b8549a01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2295: 305 pgs: 305 active+clean; 148 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.3 MiB/s wr, 111 op/s
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.413 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e6d10adb-4bc5-441d-8b7c-a615999f4342]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7a804945-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:f3:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 407], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 795714, 'reachable_time': 37260, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 397093, 'error': None, 'target': 'ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.439 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5fd80551-fe81-4dc6-9a34-d0321d9a631f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea5:f304'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 795714, 'tstamp': 795714}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 397094, 'error': None, 'target': 'ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.461 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c8ef01c3-8cb4-4882-b419-603fa876a33b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7a804945-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:f3:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 407], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 795714, 'reachable_time': 37260, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 397102, 'error': None, 'target': 'ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.499 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2c6fae0f-a846-431d-9bc6-075298bd0f87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:38 np0005486808 nova_compute[259627]: 2025-10-14 09:29:38.538 2 DEBUG nova.compute.manager [req-29be966f-695d-4ff2-af48-02553d99cc5e req-9486a61f-d436-4c23-800f-e3e06053149f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Received event network-vif-plugged-3eaf2ed5-bd76-4749-b8f0-58985c91a040 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:29:38 np0005486808 nova_compute[259627]: 2025-10-14 09:29:38.539 2 DEBUG oslo_concurrency.lockutils [req-29be966f-695d-4ff2-af48-02553d99cc5e req-9486a61f-d436-4c23-800f-e3e06053149f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:29:38 np0005486808 nova_compute[259627]: 2025-10-14 09:29:38.540 2 DEBUG oslo_concurrency.lockutils [req-29be966f-695d-4ff2-af48-02553d99cc5e req-9486a61f-d436-4c23-800f-e3e06053149f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:29:38 np0005486808 nova_compute[259627]: 2025-10-14 09:29:38.540 2 DEBUG oslo_concurrency.lockutils [req-29be966f-695d-4ff2-af48-02553d99cc5e req-9486a61f-d436-4c23-800f-e3e06053149f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:29:38 np0005486808 nova_compute[259627]: 2025-10-14 09:29:38.540 2 DEBUG nova.compute.manager [req-29be966f-695d-4ff2-af48-02553d99cc5e req-9486a61f-d436-4c23-800f-e3e06053149f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Processing event network-vif-plugged-3eaf2ed5-bd76-4749-b8f0-58985c91a040 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.568 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[78f6b162-0bfc-4320-b61d-2a2e27801683]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.569 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7a804945-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.570 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.570 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7a804945-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:29:38 np0005486808 nova_compute[259627]: 2025-10-14 09:29:38.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:38 np0005486808 NetworkManager[44885]: <info>  [1760434178.5727] manager: (tap7a804945-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/585)
Oct 14 05:29:38 np0005486808 kernel: tap7a804945-60: entered promiscuous mode
Oct 14 05:29:38 np0005486808 nova_compute[259627]: 2025-10-14 09:29:38.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.576 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7a804945-60, col_values=(('external_ids', {'iface-id': '8593aa40-4696-48eb-b1b8-2c53671eee76'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:29:38 np0005486808 nova_compute[259627]: 2025-10-14 09:29:38.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:38 np0005486808 ovn_controller[152662]: 2025-10-14T09:29:38Z|01430|binding|INFO|Releasing lport 8593aa40-4696-48eb-b1b8-2c53671eee76 from this chassis (sb_readonly=0)
Oct 14 05:29:38 np0005486808 nova_compute[259627]: 2025-10-14 09:29:38.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.579 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7a804945-64f0-4b07-8e8b-2ad2beb7451e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7a804945-64f0-4b07-8e8b-2ad2beb7451e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:29:38 np0005486808 nova_compute[259627]: 2025-10-14 09:29:38.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.591 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d5c9c536-3057-44ca-aec4-f229a0517f95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.592 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-7a804945-64f0-4b07-8e8b-2ad2beb7451e
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/7a804945-64f0-4b07-8e8b-2ad2beb7451e.pid.haproxy
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 7a804945-64f0-4b07-8e8b-2ad2beb7451e
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:29:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:38.593 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e', 'env', 'PROCESS_TAG=haproxy-7a804945-64f0-4b07-8e8b-2ad2beb7451e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7a804945-64f0-4b07-8e8b-2ad2beb7451e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:29:39 np0005486808 podman[397169]: 2025-10-14 09:29:39.07971009 +0000 UTC m=+0.069290073 container create 31b0bf40db3070dd9ef783f726fd5f2ddbc791b31343f5d42d0e350d5fa35d7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:29:39 np0005486808 podman[397169]: 2025-10-14 09:29:39.051000431 +0000 UTC m=+0.040580424 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:29:39 np0005486808 systemd[1]: Started libpod-conmon-31b0bf40db3070dd9ef783f726fd5f2ddbc791b31343f5d42d0e350d5fa35d7c.scope.
Oct 14 05:29:39 np0005486808 nova_compute[259627]: 2025-10-14 09:29:39.152 2 DEBUG nova.compute.manager [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:29:39 np0005486808 nova_compute[259627]: 2025-10-14 09:29:39.154 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434179.151438, f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:29:39 np0005486808 nova_compute[259627]: 2025-10-14 09:29:39.155 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] VM Started (Lifecycle Event)#033[00m
Oct 14 05:29:39 np0005486808 nova_compute[259627]: 2025-10-14 09:29:39.158 2 DEBUG nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:29:39 np0005486808 nova_compute[259627]: 2025-10-14 09:29:39.164 2 INFO nova.virt.libvirt.driver [-] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Instance spawned successfully.#033[00m
Oct 14 05:29:39 np0005486808 nova_compute[259627]: 2025-10-14 09:29:39.165 2 DEBUG nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:29:39 np0005486808 nova_compute[259627]: 2025-10-14 09:29:39.180 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:29:39 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:29:39 np0005486808 nova_compute[259627]: 2025-10-14 09:29:39.185 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:29:39 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cdbb75e8a79d144f3413c716738114ae1bf49c3311a5a61a4a5f20e5f113958/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:29:39 np0005486808 nova_compute[259627]: 2025-10-14 09:29:39.200 2 DEBUG nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:29:39 np0005486808 nova_compute[259627]: 2025-10-14 09:29:39.200 2 DEBUG nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:29:39 np0005486808 nova_compute[259627]: 2025-10-14 09:29:39.201 2 DEBUG nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:29:39 np0005486808 nova_compute[259627]: 2025-10-14 09:29:39.201 2 DEBUG nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:29:39 np0005486808 nova_compute[259627]: 2025-10-14 09:29:39.201 2 DEBUG nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:29:39 np0005486808 nova_compute[259627]: 2025-10-14 09:29:39.202 2 DEBUG nova.virt.libvirt.driver [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:29:39 np0005486808 nova_compute[259627]: 2025-10-14 09:29:39.205 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:29:39 np0005486808 nova_compute[259627]: 2025-10-14 09:29:39.206 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434179.153142, f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:29:39 np0005486808 nova_compute[259627]: 2025-10-14 09:29:39.207 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:29:39 np0005486808 podman[397169]: 2025-10-14 09:29:39.219290439 +0000 UTC m=+0.208870482 container init 31b0bf40db3070dd9ef783f726fd5f2ddbc791b31343f5d42d0e350d5fa35d7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 14 05:29:39 np0005486808 podman[397169]: 2025-10-14 09:29:39.230316192 +0000 UTC m=+0.219896205 container start 31b0bf40db3070dd9ef783f726fd5f2ddbc791b31343f5d42d0e350d5fa35d7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 14 05:29:39 np0005486808 nova_compute[259627]: 2025-10-14 09:29:39.240 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:29:39 np0005486808 nova_compute[259627]: 2025-10-14 09:29:39.246 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434179.1619325, f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:29:39 np0005486808 nova_compute[259627]: 2025-10-14 09:29:39.246 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:29:39 np0005486808 neutron-haproxy-ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e[397184]: [NOTICE]   (397188) : New worker (397190) forked
Oct 14 05:29:39 np0005486808 neutron-haproxy-ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e[397184]: [NOTICE]   (397188) : Loading success.
Oct 14 05:29:39 np0005486808 nova_compute[259627]: 2025-10-14 09:29:39.263 2 INFO nova.compute.manager [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Took 7.71 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:29:39 np0005486808 nova_compute[259627]: 2025-10-14 09:29:39.264 2 DEBUG nova.compute.manager [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:29:39 np0005486808 nova_compute[259627]: 2025-10-14 09:29:39.278 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:29:39 np0005486808 nova_compute[259627]: 2025-10-14 09:29:39.282 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:29:39 np0005486808 nova_compute[259627]: 2025-10-14 09:29:39.327 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:29:39 np0005486808 nova_compute[259627]: 2025-10-14 09:29:39.345 2 INFO nova.compute.manager [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Took 8.81 seconds to build instance.#033[00m
Oct 14 05:29:39 np0005486808 nova_compute[259627]: 2025-10-14 09:29:39.364 2 DEBUG oslo_concurrency.lockutils [None req-789435ec-c8b0-4cb7-9f5c-c0ebf8c744dc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.977s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:29:39 np0005486808 nova_compute[259627]: 2025-10-14 09:29:39.993 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:29:40 np0005486808 nova_compute[259627]: 2025-10-14 09:29:40.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2296: 305 pgs: 305 active+clean; 160 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.7 MiB/s wr, 119 op/s
Oct 14 05:29:40 np0005486808 nova_compute[259627]: 2025-10-14 09:29:40.651 2 DEBUG nova.compute.manager [req-c4eb8f15-3911-4ca8-aab9-b79cde9a4a76 req-d6300a39-f4dd-4303-a2cf-1ecff6b4e24c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Received event network-vif-plugged-3eaf2ed5-bd76-4749-b8f0-58985c91a040 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:29:40 np0005486808 nova_compute[259627]: 2025-10-14 09:29:40.651 2 DEBUG oslo_concurrency.lockutils [req-c4eb8f15-3911-4ca8-aab9-b79cde9a4a76 req-d6300a39-f4dd-4303-a2cf-1ecff6b4e24c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:29:40 np0005486808 nova_compute[259627]: 2025-10-14 09:29:40.651 2 DEBUG oslo_concurrency.lockutils [req-c4eb8f15-3911-4ca8-aab9-b79cde9a4a76 req-d6300a39-f4dd-4303-a2cf-1ecff6b4e24c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:29:40 np0005486808 nova_compute[259627]: 2025-10-14 09:29:40.652 2 DEBUG oslo_concurrency.lockutils [req-c4eb8f15-3911-4ca8-aab9-b79cde9a4a76 req-d6300a39-f4dd-4303-a2cf-1ecff6b4e24c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:29:40 np0005486808 nova_compute[259627]: 2025-10-14 09:29:40.652 2 DEBUG nova.compute.manager [req-c4eb8f15-3911-4ca8-aab9-b79cde9a4a76 req-d6300a39-f4dd-4303-a2cf-1ecff6b4e24c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] No waiting events found dispatching network-vif-plugged-3eaf2ed5-bd76-4749-b8f0-58985c91a040 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:29:40 np0005486808 nova_compute[259627]: 2025-10-14 09:29:40.652 2 WARNING nova.compute.manager [req-c4eb8f15-3911-4ca8-aab9-b79cde9a4a76 req-d6300a39-f4dd-4303-a2cf-1ecff6b4e24c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Received unexpected event network-vif-plugged-3eaf2ed5-bd76-4749-b8f0-58985c91a040 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:29:41 np0005486808 podman[397199]: 2025-10-14 09:29:41.652665776 +0000 UTC m=+0.065934590 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:29:41 np0005486808 podman[397200]: 2025-10-14 09:29:41.673898731 +0000 UTC m=+0.087117594 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid)
Oct 14 05:29:41 np0005486808 nova_compute[259627]: 2025-10-14 09:29:41.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2297: 305 pgs: 305 active+clean; 167 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 212 op/s
Oct 14 05:29:42 np0005486808 nova_compute[259627]: 2025-10-14 09:29:42.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:29:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:29:43 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Oct 14 05:29:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:29:43.054552) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 05:29:43 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Oct 14 05:29:43 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434183054576, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 1018, "num_deletes": 250, "total_data_size": 1430857, "memory_usage": 1458072, "flush_reason": "Manual Compaction"}
Oct 14 05:29:43 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Oct 14 05:29:43 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434183061230, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 864902, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47747, "largest_seqno": 48764, "table_properties": {"data_size": 860999, "index_size": 1555, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10530, "raw_average_key_size": 20, "raw_value_size": 852537, "raw_average_value_size": 1678, "num_data_blocks": 70, "num_entries": 508, "num_filter_entries": 508, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760434089, "oldest_key_time": 1760434089, "file_creation_time": 1760434183, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:29:43 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 6741 microseconds, and 3216 cpu microseconds.
Oct 14 05:29:43 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:29:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:29:43.061282) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 864902 bytes OK
Oct 14 05:29:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:29:43.061352) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Oct 14 05:29:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:29:43.063447) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Oct 14 05:29:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:29:43.063468) EVENT_LOG_v1 {"time_micros": 1760434183063461, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 05:29:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:29:43.063487) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 05:29:43 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 1426049, prev total WAL file size 1426049, number of live WAL files 2.
Oct 14 05:29:43 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:29:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:29:43.064559) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373530' seq:72057594037927935, type:22 .. '6D6772737461740032303031' seq:0, type:0; will stop at (end)
Oct 14 05:29:43 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 05:29:43 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(844KB)], [110(10MB)]
Oct 14 05:29:43 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434183064668, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 11435369, "oldest_snapshot_seqno": -1}
Oct 14 05:29:43 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 6922 keys, 8683031 bytes, temperature: kUnknown
Oct 14 05:29:43 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434183130437, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 8683031, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8638130, "index_size": 26485, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17349, "raw_key_size": 180238, "raw_average_key_size": 26, "raw_value_size": 8515567, "raw_average_value_size": 1230, "num_data_blocks": 1035, "num_entries": 6922, "num_filter_entries": 6922, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760434183, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:29:43 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:29:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:29:43.130720) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 8683031 bytes
Oct 14 05:29:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:29:43.132475) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 173.7 rd, 131.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 10.1 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(23.3) write-amplify(10.0) OK, records in: 7392, records dropped: 470 output_compression: NoCompression
Oct 14 05:29:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:29:43.132495) EVENT_LOG_v1 {"time_micros": 1760434183132487, "job": 66, "event": "compaction_finished", "compaction_time_micros": 65842, "compaction_time_cpu_micros": 43856, "output_level": 6, "num_output_files": 1, "total_output_size": 8683031, "num_input_records": 7392, "num_output_records": 6922, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 05:29:43 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:29:43 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434183132782, "job": 66, "event": "table_file_deletion", "file_number": 112}
Oct 14 05:29:43 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:29:43 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434183134992, "job": 66, "event": "table_file_deletion", "file_number": 110}
Oct 14 05:29:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:29:43.064299) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:29:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:29:43.135058) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:29:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:29:43.135062) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:29:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:29:43.135064) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:29:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:29:43.135066) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:29:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:29:43.135067) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:29:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:29:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:29:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:29:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:29:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011050157297974865 of space, bias 1.0, pg target 0.33150471893924593 quantized to 32 (current 32)
Oct 14 05:29:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:29:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:29:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:29:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:29:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:29:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 05:29:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:29:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:29:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:29:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:29:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:29:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:29:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:29:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:29:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:29:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:29:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:29:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:29:43 np0005486808 nova_compute[259627]: 2025-10-14 09:29:43.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:29:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2298: 305 pgs: 305 active+clean; 167 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 150 op/s
Oct 14 05:29:44 np0005486808 nova_compute[259627]: 2025-10-14 09:29:44.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:29:44 np0005486808 nova_compute[259627]: 2025-10-14 09:29:44.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:29:45 np0005486808 nova_compute[259627]: 2025-10-14 09:29:45.007 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:29:45 np0005486808 nova_compute[259627]: 2025-10-14 09:29:45.008 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:29:45 np0005486808 nova_compute[259627]: 2025-10-14 09:29:45.009 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:29:45 np0005486808 nova_compute[259627]: 2025-10-14 09:29:45.009 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:29:45 np0005486808 nova_compute[259627]: 2025-10-14 09:29:45.010 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:29:45 np0005486808 nova_compute[259627]: 2025-10-14 09:29:45.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:29:45 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1708871909' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:29:45 np0005486808 nova_compute[259627]: 2025-10-14 09:29:45.547 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:29:45 np0005486808 nova_compute[259627]: 2025-10-14 09:29:45.610 2 DEBUG nova.compute.manager [req-b1f3e820-bbc0-4d56-8663-d17510f23468 req-3d8d135d-2484-4d97-8ba7-b37b56be122b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Received event network-changed-3eaf2ed5-bd76-4749-b8f0-58985c91a040 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:29:45 np0005486808 nova_compute[259627]: 2025-10-14 09:29:45.611 2 DEBUG nova.compute.manager [req-b1f3e820-bbc0-4d56-8663-d17510f23468 req-3d8d135d-2484-4d97-8ba7-b37b56be122b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Refreshing instance network info cache due to event network-changed-3eaf2ed5-bd76-4749-b8f0-58985c91a040. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:29:45 np0005486808 nova_compute[259627]: 2025-10-14 09:29:45.612 2 DEBUG oslo_concurrency.lockutils [req-b1f3e820-bbc0-4d56-8663-d17510f23468 req-3d8d135d-2484-4d97-8ba7-b37b56be122b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:29:45 np0005486808 nova_compute[259627]: 2025-10-14 09:29:45.612 2 DEBUG oslo_concurrency.lockutils [req-b1f3e820-bbc0-4d56-8663-d17510f23468 req-3d8d135d-2484-4d97-8ba7-b37b56be122b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:29:45 np0005486808 nova_compute[259627]: 2025-10-14 09:29:45.613 2 DEBUG nova.network.neutron [req-b1f3e820-bbc0-4d56-8663-d17510f23468 req-3d8d135d-2484-4d97-8ba7-b37b56be122b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Refreshing network info cache for port 3eaf2ed5-bd76-4749-b8f0-58985c91a040 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:29:45 np0005486808 nova_compute[259627]: 2025-10-14 09:29:45.652 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:29:45 np0005486808 nova_compute[259627]: 2025-10-14 09:29:45.653 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:29:45 np0005486808 nova_compute[259627]: 2025-10-14 09:29:45.661 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000086 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:29:45 np0005486808 nova_compute[259627]: 2025-10-14 09:29:45.662 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000086 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:29:45 np0005486808 nova_compute[259627]: 2025-10-14 09:29:45.909 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:29:45 np0005486808 nova_compute[259627]: 2025-10-14 09:29:45.911 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3311MB free_disk=59.921993255615234GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:29:45 np0005486808 nova_compute[259627]: 2025-10-14 09:29:45.911 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:29:45 np0005486808 nova_compute[259627]: 2025-10-14 09:29:45.912 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:29:46 np0005486808 nova_compute[259627]: 2025-10-14 09:29:45.999 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:29:46 np0005486808 nova_compute[259627]: 2025-10-14 09:29:46.000 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:29:46 np0005486808 nova_compute[259627]: 2025-10-14 09:29:46.000 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:29:46 np0005486808 nova_compute[259627]: 2025-10-14 09:29:46.000 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:29:46 np0005486808 nova_compute[259627]: 2025-10-14 09:29:46.019 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing inventories for resource provider 92105e1d-1743-46e3-a494-858b4331398a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 14 05:29:46 np0005486808 nova_compute[259627]: 2025-10-14 09:29:46.034 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Updating ProviderTree inventory for provider 92105e1d-1743-46e3-a494-858b4331398a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 14 05:29:46 np0005486808 nova_compute[259627]: 2025-10-14 09:29:46.035 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Updating inventory in ProviderTree for provider 92105e1d-1743-46e3-a494-858b4331398a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 14 05:29:46 np0005486808 nova_compute[259627]: 2025-10-14 09:29:46.047 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing aggregate associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 14 05:29:46 np0005486808 nova_compute[259627]: 2025-10-14 09:29:46.068 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing trait associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, traits: COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_ACCELERATORS,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 14 05:29:46 np0005486808 nova_compute[259627]: 2025-10-14 09:29:46.188 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:29:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2299: 305 pgs: 305 active+clean; 167 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Oct 14 05:29:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:29:46 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2023518290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:29:46 np0005486808 nova_compute[259627]: 2025-10-14 09:29:46.702 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:29:46 np0005486808 nova_compute[259627]: 2025-10-14 09:29:46.712 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:29:46 np0005486808 nova_compute[259627]: 2025-10-14 09:29:46.735 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:29:46 np0005486808 nova_compute[259627]: 2025-10-14 09:29:46.763 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:29:46 np0005486808 nova_compute[259627]: 2025-10-14 09:29:46.764 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.853s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:29:47 np0005486808 nova_compute[259627]: 2025-10-14 09:29:47.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:47 np0005486808 nova_compute[259627]: 2025-10-14 09:29:47.391 2 DEBUG nova.network.neutron [req-b1f3e820-bbc0-4d56-8663-d17510f23468 req-3d8d135d-2484-4d97-8ba7-b37b56be122b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Updated VIF entry in instance network info cache for port 3eaf2ed5-bd76-4749-b8f0-58985c91a040. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:29:47 np0005486808 nova_compute[259627]: 2025-10-14 09:29:47.392 2 DEBUG nova.network.neutron [req-b1f3e820-bbc0-4d56-8663-d17510f23468 req-3d8d135d-2484-4d97-8ba7-b37b56be122b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Updating instance_info_cache with network_info: [{"id": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "address": "fa:16:3e:8c:c0:28", "network": {"id": "7a804945-64f0-4b07-8e8b-2ad2beb7451e", "bridge": "br-int", "label": "tempest-network-smoke--1677445378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eaf2ed5-bd", "ovs_interfaceid": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:29:47 np0005486808 nova_compute[259627]: 2025-10-14 09:29:47.425 2 DEBUG oslo_concurrency.lockutils [req-b1f3e820-bbc0-4d56-8663-d17510f23468 req-3d8d135d-2484-4d97-8ba7-b37b56be122b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:29:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:29:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2300: 305 pgs: 305 active+clean; 167 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 625 KiB/s wr, 117 op/s
Oct 14 05:29:48 np0005486808 nova_compute[259627]: 2025-10-14 09:29:48.766 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:29:48 np0005486808 nova_compute[259627]: 2025-10-14 09:29:48.766 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:29:48 np0005486808 nova_compute[259627]: 2025-10-14 09:29:48.795 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 14 05:29:48 np0005486808 nova_compute[259627]: 2025-10-14 09:29:48.795 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:29:48 np0005486808 nova_compute[259627]: 2025-10-14 09:29:48.795 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:29:49 np0005486808 nova_compute[259627]: 2025-10-14 09:29:49.807 2 DEBUG oslo_concurrency.lockutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "edf4b59a-fe1b-48ae-92a3-7bec88fc7491" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:29:49 np0005486808 nova_compute[259627]: 2025-10-14 09:29:49.808 2 DEBUG oslo_concurrency.lockutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "edf4b59a-fe1b-48ae-92a3-7bec88fc7491" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:29:49 np0005486808 nova_compute[259627]: 2025-10-14 09:29:49.826 2 DEBUG nova.compute.manager [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:29:49 np0005486808 nova_compute[259627]: 2025-10-14 09:29:49.901 2 DEBUG oslo_concurrency.lockutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:29:49 np0005486808 nova_compute[259627]: 2025-10-14 09:29:49.901 2 DEBUG oslo_concurrency.lockutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:29:49 np0005486808 nova_compute[259627]: 2025-10-14 09:29:49.910 2 DEBUG nova.virt.hardware [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:29:49 np0005486808 nova_compute[259627]: 2025-10-14 09:29:49.910 2 INFO nova.compute.claims [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:29:50 np0005486808 nova_compute[259627]: 2025-10-14 09:29:50.052 2 DEBUG oslo_concurrency.processutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:29:50 np0005486808 ovn_controller[152662]: 2025-10-14T09:29:50Z|00166|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8c:c0:28 10.100.0.14
Oct 14 05:29:50 np0005486808 ovn_controller[152662]: 2025-10-14T09:29:50Z|00167|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8c:c0:28 10.100.0.14
Oct 14 05:29:50 np0005486808 nova_compute[259627]: 2025-10-14 09:29:50.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2301: 305 pgs: 305 active+clean; 176 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.3 MiB/s wr, 123 op/s
Oct 14 05:29:50 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:29:50 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/922411037' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:29:50 np0005486808 nova_compute[259627]: 2025-10-14 09:29:50.579 2 DEBUG oslo_concurrency.processutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:29:50 np0005486808 nova_compute[259627]: 2025-10-14 09:29:50.587 2 DEBUG nova.compute.provider_tree [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:29:50 np0005486808 nova_compute[259627]: 2025-10-14 09:29:50.610 2 DEBUG nova.scheduler.client.report [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:29:50 np0005486808 nova_compute[259627]: 2025-10-14 09:29:50.638 2 DEBUG oslo_concurrency.lockutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:29:50 np0005486808 nova_compute[259627]: 2025-10-14 09:29:50.639 2 DEBUG nova.compute.manager [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:29:50 np0005486808 nova_compute[259627]: 2025-10-14 09:29:50.704 2 DEBUG nova.compute.manager [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:29:50 np0005486808 nova_compute[259627]: 2025-10-14 09:29:50.705 2 DEBUG nova.network.neutron [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:29:50 np0005486808 nova_compute[259627]: 2025-10-14 09:29:50.727 2 INFO nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:29:50 np0005486808 nova_compute[259627]: 2025-10-14 09:29:50.748 2 DEBUG nova.compute.manager [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:29:50 np0005486808 nova_compute[259627]: 2025-10-14 09:29:50.825 2 DEBUG nova.compute.manager [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:29:50 np0005486808 nova_compute[259627]: 2025-10-14 09:29:50.827 2 DEBUG nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:29:50 np0005486808 nova_compute[259627]: 2025-10-14 09:29:50.827 2 INFO nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Creating image(s)#033[00m
Oct 14 05:29:50 np0005486808 nova_compute[259627]: 2025-10-14 09:29:50.856 2 DEBUG nova.storage.rbd_utils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image edf4b59a-fe1b-48ae-92a3-7bec88fc7491_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:29:50 np0005486808 nova_compute[259627]: 2025-10-14 09:29:50.886 2 DEBUG nova.storage.rbd_utils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image edf4b59a-fe1b-48ae-92a3-7bec88fc7491_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:29:50 np0005486808 nova_compute[259627]: 2025-10-14 09:29:50.914 2 DEBUG nova.storage.rbd_utils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image edf4b59a-fe1b-48ae-92a3-7bec88fc7491_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:29:50 np0005486808 nova_compute[259627]: 2025-10-14 09:29:50.919 2 DEBUG oslo_concurrency.processutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:29:50 np0005486808 nova_compute[259627]: 2025-10-14 09:29:50.976 2 DEBUG nova.policy [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30cd4a7832e74a8ba07238937405c4ea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:29:51 np0005486808 nova_compute[259627]: 2025-10-14 09:29:51.023 2 DEBUG oslo_concurrency.processutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:29:51 np0005486808 nova_compute[259627]: 2025-10-14 09:29:51.025 2 DEBUG oslo_concurrency.lockutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:29:51 np0005486808 nova_compute[259627]: 2025-10-14 09:29:51.026 2 DEBUG oslo_concurrency.lockutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:29:51 np0005486808 nova_compute[259627]: 2025-10-14 09:29:51.026 2 DEBUG oslo_concurrency.lockutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:29:51 np0005486808 nova_compute[259627]: 2025-10-14 09:29:51.058 2 DEBUG nova.storage.rbd_utils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image edf4b59a-fe1b-48ae-92a3-7bec88fc7491_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:29:51 np0005486808 nova_compute[259627]: 2025-10-14 09:29:51.063 2 DEBUG oslo_concurrency.processutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 edf4b59a-fe1b-48ae-92a3-7bec88fc7491_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:29:51 np0005486808 nova_compute[259627]: 2025-10-14 09:29:51.419 2 DEBUG oslo_concurrency.processutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 edf4b59a-fe1b-48ae-92a3-7bec88fc7491_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.356s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:29:51 np0005486808 nova_compute[259627]: 2025-10-14 09:29:51.496 2 DEBUG nova.storage.rbd_utils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] resizing rbd image edf4b59a-fe1b-48ae-92a3-7bec88fc7491_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:29:51 np0005486808 nova_compute[259627]: 2025-10-14 09:29:51.605 2 DEBUG nova.objects.instance [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'migration_context' on Instance uuid edf4b59a-fe1b-48ae-92a3-7bec88fc7491 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:29:51 np0005486808 nova_compute[259627]: 2025-10-14 09:29:51.624 2 DEBUG nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:29:51 np0005486808 nova_compute[259627]: 2025-10-14 09:29:51.624 2 DEBUG nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Ensure instance console log exists: /var/lib/nova/instances/edf4b59a-fe1b-48ae-92a3-7bec88fc7491/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:29:51 np0005486808 nova_compute[259627]: 2025-10-14 09:29:51.625 2 DEBUG oslo_concurrency.lockutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:29:51 np0005486808 nova_compute[259627]: 2025-10-14 09:29:51.625 2 DEBUG oslo_concurrency.lockutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:29:51 np0005486808 nova_compute[259627]: 2025-10-14 09:29:51.626 2 DEBUG oslo_concurrency.lockutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:29:51 np0005486808 nova_compute[259627]: 2025-10-14 09:29:51.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:29:51 np0005486808 nova_compute[259627]: 2025-10-14 09:29:51.992 2 DEBUG nova.network.neutron [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Successfully created port: f7181e0a-4f5b-4d28-8302-c28ab393a348 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:29:52 np0005486808 nova_compute[259627]: 2025-10-14 09:29:52.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2302: 305 pgs: 305 active+clean; 192 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.3 MiB/s wr, 158 op/s
Oct 14 05:29:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:29:53 np0005486808 nova_compute[259627]: 2025-10-14 09:29:53.201 2 DEBUG nova.network.neutron [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Successfully updated port: f7181e0a-4f5b-4d28-8302-c28ab393a348 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:29:53 np0005486808 nova_compute[259627]: 2025-10-14 09:29:53.223 2 DEBUG oslo_concurrency.lockutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "refresh_cache-edf4b59a-fe1b-48ae-92a3-7bec88fc7491" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:29:53 np0005486808 nova_compute[259627]: 2025-10-14 09:29:53.223 2 DEBUG oslo_concurrency.lockutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquired lock "refresh_cache-edf4b59a-fe1b-48ae-92a3-7bec88fc7491" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:29:53 np0005486808 nova_compute[259627]: 2025-10-14 09:29:53.224 2 DEBUG nova.network.neutron [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:29:53 np0005486808 nova_compute[259627]: 2025-10-14 09:29:53.412 2 DEBUG nova.compute.manager [req-97b74cde-2d65-4b7c-9883-e822c71c7d94 req-e946fa3b-01ec-4ef8-9064-ed0c0e1dad39 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Received event network-changed-f7181e0a-4f5b-4d28-8302-c28ab393a348 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:29:53 np0005486808 nova_compute[259627]: 2025-10-14 09:29:53.412 2 DEBUG nova.compute.manager [req-97b74cde-2d65-4b7c-9883-e822c71c7d94 req-e946fa3b-01ec-4ef8-9064-ed0c0e1dad39 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Refreshing instance network info cache due to event network-changed-f7181e0a-4f5b-4d28-8302-c28ab393a348. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:29:53 np0005486808 nova_compute[259627]: 2025-10-14 09:29:53.413 2 DEBUG oslo_concurrency.lockutils [req-97b74cde-2d65-4b7c-9883-e822c71c7d94 req-e946fa3b-01ec-4ef8-9064-ed0c0e1dad39 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-edf4b59a-fe1b-48ae-92a3-7bec88fc7491" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:29:53 np0005486808 nova_compute[259627]: 2025-10-14 09:29:53.505 2 DEBUG nova.network.neutron [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:29:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:29:53 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:29:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:29:53 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:29:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:29:53 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:29:53 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev a16765dc-d020-418b-bc47-d031b859894d does not exist
Oct 14 05:29:53 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 6c6d1e84-3a64-4fd6-a52f-8bacec2aec01 does not exist
Oct 14 05:29:53 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 746212b4-6ef5-485c-b715-e8ce50f2df4a does not exist
Oct 14 05:29:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:29:53 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:29:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:29:53 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:29:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:29:53 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:29:53 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:29:53 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:29:53 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:29:54 np0005486808 podman[397743]: 2025-10-14 09:29:54.307142159 +0000 UTC m=+0.070995435 container create 41aad58c88be03037c798426898156c6f00b0fd19f85dbefd498a571795bb818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 14 05:29:54 np0005486808 systemd[1]: Started libpod-conmon-41aad58c88be03037c798426898156c6f00b0fd19f85dbefd498a571795bb818.scope.
Oct 14 05:29:54 np0005486808 podman[397743]: 2025-10-14 09:29:54.271434087 +0000 UTC m=+0.035287353 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:29:54 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:29:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2303: 305 pgs: 305 active+clean; 192 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 739 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 05:29:54 np0005486808 podman[397743]: 2025-10-14 09:29:54.417842104 +0000 UTC m=+0.181695380 container init 41aad58c88be03037c798426898156c6f00b0fd19f85dbefd498a571795bb818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_black, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 14 05:29:54 np0005486808 podman[397743]: 2025-10-14 09:29:54.426812206 +0000 UTC m=+0.190665452 container start 41aad58c88be03037c798426898156c6f00b0fd19f85dbefd498a571795bb818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_black, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:29:54 np0005486808 podman[397743]: 2025-10-14 09:29:54.430413925 +0000 UTC m=+0.194267171 container attach 41aad58c88be03037c798426898156c6f00b0fd19f85dbefd498a571795bb818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_black, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Oct 14 05:29:54 np0005486808 distracted_black[397759]: 167 167
Oct 14 05:29:54 np0005486808 systemd[1]: libpod-41aad58c88be03037c798426898156c6f00b0fd19f85dbefd498a571795bb818.scope: Deactivated successfully.
Oct 14 05:29:54 np0005486808 conmon[397759]: conmon 41aad58c88be03037c79 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-41aad58c88be03037c798426898156c6f00b0fd19f85dbefd498a571795bb818.scope/container/memory.events
Oct 14 05:29:54 np0005486808 podman[397743]: 2025-10-14 09:29:54.43709323 +0000 UTC m=+0.200946476 container died 41aad58c88be03037c798426898156c6f00b0fd19f85dbefd498a571795bb818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_black, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:29:54 np0005486808 systemd[1]: var-lib-containers-storage-overlay-b7a86ed7a5ceee2fd2a67325f977d5a84cdf1f322976a53434581eefe65dfa82-merged.mount: Deactivated successfully.
Oct 14 05:29:54 np0005486808 podman[397743]: 2025-10-14 09:29:54.486101301 +0000 UTC m=+0.249954577 container remove 41aad58c88be03037c798426898156c6f00b0fd19f85dbefd498a571795bb818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:29:54 np0005486808 systemd[1]: libpod-conmon-41aad58c88be03037c798426898156c6f00b0fd19f85dbefd498a571795bb818.scope: Deactivated successfully.
Oct 14 05:29:54 np0005486808 nova_compute[259627]: 2025-10-14 09:29:54.580 2 DEBUG nova.network.neutron [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Updating instance_info_cache with network_info: [{"id": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "address": "fa:16:3e:90:3d:7f", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe90:3d7f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7181e0a-4f", "ovs_interfaceid": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:29:54 np0005486808 nova_compute[259627]: 2025-10-14 09:29:54.599 2 DEBUG oslo_concurrency.lockutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Releasing lock "refresh_cache-edf4b59a-fe1b-48ae-92a3-7bec88fc7491" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:29:54 np0005486808 nova_compute[259627]: 2025-10-14 09:29:54.600 2 DEBUG nova.compute.manager [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Instance network_info: |[{"id": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "address": "fa:16:3e:90:3d:7f", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe90:3d7f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7181e0a-4f", "ovs_interfaceid": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:29:54 np0005486808 nova_compute[259627]: 2025-10-14 09:29:54.602 2 DEBUG oslo_concurrency.lockutils [req-97b74cde-2d65-4b7c-9883-e822c71c7d94 req-e946fa3b-01ec-4ef8-9064-ed0c0e1dad39 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-edf4b59a-fe1b-48ae-92a3-7bec88fc7491" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:29:54 np0005486808 nova_compute[259627]: 2025-10-14 09:29:54.603 2 DEBUG nova.network.neutron [req-97b74cde-2d65-4b7c-9883-e822c71c7d94 req-e946fa3b-01ec-4ef8-9064-ed0c0e1dad39 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Refreshing network info cache for port f7181e0a-4f5b-4d28-8302-c28ab393a348 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:29:54 np0005486808 nova_compute[259627]: 2025-10-14 09:29:54.609 2 DEBUG nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Start _get_guest_xml network_info=[{"id": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "address": "fa:16:3e:90:3d:7f", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe90:3d7f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7181e0a-4f", "ovs_interfaceid": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:29:54 np0005486808 nova_compute[259627]: 2025-10-14 09:29:54.617 2 WARNING nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:29:54 np0005486808 nova_compute[259627]: 2025-10-14 09:29:54.631 2 DEBUG nova.virt.libvirt.host [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:29:54 np0005486808 nova_compute[259627]: 2025-10-14 09:29:54.633 2 DEBUG nova.virt.libvirt.host [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:29:54 np0005486808 nova_compute[259627]: 2025-10-14 09:29:54.638 2 DEBUG nova.virt.libvirt.host [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:29:54 np0005486808 nova_compute[259627]: 2025-10-14 09:29:54.639 2 DEBUG nova.virt.libvirt.host [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:29:54 np0005486808 nova_compute[259627]: 2025-10-14 09:29:54.640 2 DEBUG nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:29:54 np0005486808 nova_compute[259627]: 2025-10-14 09:29:54.641 2 DEBUG nova.virt.hardware [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:29:54 np0005486808 nova_compute[259627]: 2025-10-14 09:29:54.643 2 DEBUG nova.virt.hardware [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:29:54 np0005486808 nova_compute[259627]: 2025-10-14 09:29:54.643 2 DEBUG nova.virt.hardware [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:29:54 np0005486808 nova_compute[259627]: 2025-10-14 09:29:54.644 2 DEBUG nova.virt.hardware [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:29:54 np0005486808 nova_compute[259627]: 2025-10-14 09:29:54.645 2 DEBUG nova.virt.hardware [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:29:54 np0005486808 nova_compute[259627]: 2025-10-14 09:29:54.646 2 DEBUG nova.virt.hardware [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:29:54 np0005486808 nova_compute[259627]: 2025-10-14 09:29:54.646 2 DEBUG nova.virt.hardware [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:29:54 np0005486808 nova_compute[259627]: 2025-10-14 09:29:54.647 2 DEBUG nova.virt.hardware [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:29:54 np0005486808 nova_compute[259627]: 2025-10-14 09:29:54.648 2 DEBUG nova.virt.hardware [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:29:54 np0005486808 nova_compute[259627]: 2025-10-14 09:29:54.648 2 DEBUG nova.virt.hardware [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:29:54 np0005486808 nova_compute[259627]: 2025-10-14 09:29:54.649 2 DEBUG nova.virt.hardware [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:29:54 np0005486808 nova_compute[259627]: 2025-10-14 09:29:54.654 2 DEBUG oslo_concurrency.processutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:29:54 np0005486808 podman[397784]: 2025-10-14 09:29:54.73453437 +0000 UTC m=+0.064176787 container create 5caf0800a5e658bab418df7104cd9315332f6abb78b5e2270a8752d932ac6972 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_villani, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 14 05:29:54 np0005486808 systemd[1]: Started libpod-conmon-5caf0800a5e658bab418df7104cd9315332f6abb78b5e2270a8752d932ac6972.scope.
Oct 14 05:29:54 np0005486808 podman[397784]: 2025-10-14 09:29:54.709169523 +0000 UTC m=+0.038811940 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:29:54 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:29:54 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c74b007edb988d3d22a61963f685ad1a4c43e8371ac49bd380213f79eb05f8c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:29:54 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c74b007edb988d3d22a61963f685ad1a4c43e8371ac49bd380213f79eb05f8c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:29:54 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c74b007edb988d3d22a61963f685ad1a4c43e8371ac49bd380213f79eb05f8c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:29:54 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c74b007edb988d3d22a61963f685ad1a4c43e8371ac49bd380213f79eb05f8c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:29:54 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c74b007edb988d3d22a61963f685ad1a4c43e8371ac49bd380213f79eb05f8c7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:29:54 np0005486808 podman[397784]: 2025-10-14 09:29:54.846143868 +0000 UTC m=+0.175786295 container init 5caf0800a5e658bab418df7104cd9315332f6abb78b5e2270a8752d932ac6972 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 14 05:29:54 np0005486808 podman[397784]: 2025-10-14 09:29:54.862288427 +0000 UTC m=+0.191930804 container start 5caf0800a5e658bab418df7104cd9315332f6abb78b5e2270a8752d932ac6972 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_villani, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:29:54 np0005486808 podman[397784]: 2025-10-14 09:29:54.865546587 +0000 UTC m=+0.195188974 container attach 5caf0800a5e658bab418df7104cd9315332f6abb78b5e2270a8752d932ac6972 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:29:54 np0005486808 nova_compute[259627]: 2025-10-14 09:29:54.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:29:55 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:29:55 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3886267692' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:29:55 np0005486808 nova_compute[259627]: 2025-10-14 09:29:55.150 2 DEBUG oslo_concurrency.processutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:29:55 np0005486808 nova_compute[259627]: 2025-10-14 09:29:55.184 2 DEBUG nova.storage.rbd_utils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image edf4b59a-fe1b-48ae-92a3-7bec88fc7491_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:29:55 np0005486808 nova_compute[259627]: 2025-10-14 09:29:55.190 2 DEBUG oslo_concurrency.processutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:29:55 np0005486808 nova_compute[259627]: 2025-10-14 09:29:55.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:55 np0005486808 podman[397874]: 2025-10-14 09:29:55.665966386 +0000 UTC m=+0.068684179 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 14 05:29:55 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:29:55 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3365070571' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:29:55 np0005486808 nova_compute[259627]: 2025-10-14 09:29:55.705 2 DEBUG oslo_concurrency.processutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:29:55 np0005486808 nova_compute[259627]: 2025-10-14 09:29:55.707 2 DEBUG nova.virt.libvirt.vif [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:29:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1997112513',display_name='tempest-TestGettingAddress-server-1997112513',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1997112513',id=135,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP3NS95m6mTTkHzJJUJYRW56WG9z1F4s978ToWhMf7sPvH/Oc2BrnynBRS1TQVQWP6VVqaEePs7ictqhZIZyqMTpbO+UGDq3FfVttcWsPDDYdiFqglsH1qUqUZlXlWfhiQ==',key_name='tempest-TestGettingAddress-313667449',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-viso8tmx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:29:50Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=edf4b59a-fe1b-48ae-92a3-7bec88fc7491,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "address": "fa:16:3e:90:3d:7f", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe90:3d7f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7181e0a-4f", "ovs_interfaceid": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:29:55 np0005486808 nova_compute[259627]: 2025-10-14 09:29:55.708 2 DEBUG nova.network.os_vif_util [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "address": "fa:16:3e:90:3d:7f", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe90:3d7f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7181e0a-4f", "ovs_interfaceid": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:29:55 np0005486808 nova_compute[259627]: 2025-10-14 09:29:55.709 2 DEBUG nova.network.os_vif_util [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:3d:7f,bridge_name='br-int',has_traffic_filtering=True,id=f7181e0a-4f5b-4d28-8302-c28ab393a348,network=Network(3cbcf7e5-ac17-454b-893d-3fda266aa395),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7181e0a-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:29:55 np0005486808 nova_compute[259627]: 2025-10-14 09:29:55.711 2 DEBUG nova.objects.instance [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'pci_devices' on Instance uuid edf4b59a-fe1b-48ae-92a3-7bec88fc7491 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:29:55 np0005486808 nova_compute[259627]: 2025-10-14 09:29:55.728 2 DEBUG nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:29:55 np0005486808 nova_compute[259627]:  <uuid>edf4b59a-fe1b-48ae-92a3-7bec88fc7491</uuid>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:  <name>instance-00000087</name>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:29:55 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:      <nova:name>tempest-TestGettingAddress-server-1997112513</nova:name>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:29:54</nova:creationTime>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:29:55 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:        <nova:user uuid="30cd4a7832e74a8ba07238937405c4ea">tempest-TestGettingAddress-262346303-project-member</nova:user>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:        <nova:project uuid="878ad3f42dd94bef8cc66ab3d67f03bf">tempest-TestGettingAddress-262346303</nova:project>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:        <nova:port uuid="f7181e0a-4f5b-4d28-8302-c28ab393a348">
Oct 14 05:29:55 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe90:3d7f" ipVersion="6"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:      <entry name="serial">edf4b59a-fe1b-48ae-92a3-7bec88fc7491</entry>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:      <entry name="uuid">edf4b59a-fe1b-48ae-92a3-7bec88fc7491</entry>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:29:55 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/edf4b59a-fe1b-48ae-92a3-7bec88fc7491_disk">
Oct 14 05:29:55 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:29:55 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:29:55 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/edf4b59a-fe1b-48ae-92a3-7bec88fc7491_disk.config">
Oct 14 05:29:55 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:29:55 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:29:55 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:90:3d:7f"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:      <target dev="tapf7181e0a-4f"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:29:55 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/edf4b59a-fe1b-48ae-92a3-7bec88fc7491/console.log" append="off"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:29:55 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:29:55 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:29:55 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:29:55 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:29:55 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:29:55 np0005486808 nova_compute[259627]: 2025-10-14 09:29:55.729 2 DEBUG nova.compute.manager [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Preparing to wait for external event network-vif-plugged-f7181e0a-4f5b-4d28-8302-c28ab393a348 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:29:55 np0005486808 nova_compute[259627]: 2025-10-14 09:29:55.729 2 DEBUG oslo_concurrency.lockutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "edf4b59a-fe1b-48ae-92a3-7bec88fc7491-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:29:55 np0005486808 nova_compute[259627]: 2025-10-14 09:29:55.730 2 DEBUG oslo_concurrency.lockutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "edf4b59a-fe1b-48ae-92a3-7bec88fc7491-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:29:55 np0005486808 nova_compute[259627]: 2025-10-14 09:29:55.730 2 DEBUG oslo_concurrency.lockutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "edf4b59a-fe1b-48ae-92a3-7bec88fc7491-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:29:55 np0005486808 nova_compute[259627]: 2025-10-14 09:29:55.731 2 DEBUG nova.virt.libvirt.vif [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:29:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1997112513',display_name='tempest-TestGettingAddress-server-1997112513',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1997112513',id=135,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP3NS95m6mTTkHzJJUJYRW56WG9z1F4s978ToWhMf7sPvH/Oc2BrnynBRS1TQVQWP6VVqaEePs7ictqhZIZyqMTpbO+UGDq3FfVttcWsPDDYdiFqglsH1qUqUZlXlWfhiQ==',key_name='tempest-TestGettingAddress-313667449',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-viso8tmx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:29:50Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=edf4b59a-fe1b-48ae-92a3-7bec88fc7491,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "address": "fa:16:3e:90:3d:7f", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe90:3d7f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7181e0a-4f", "ovs_interfaceid": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:29:55 np0005486808 nova_compute[259627]: 2025-10-14 09:29:55.731 2 DEBUG nova.network.os_vif_util [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "address": "fa:16:3e:90:3d:7f", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe90:3d7f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7181e0a-4f", "ovs_interfaceid": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:29:55 np0005486808 nova_compute[259627]: 2025-10-14 09:29:55.732 2 DEBUG nova.network.os_vif_util [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:3d:7f,bridge_name='br-int',has_traffic_filtering=True,id=f7181e0a-4f5b-4d28-8302-c28ab393a348,network=Network(3cbcf7e5-ac17-454b-893d-3fda266aa395),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7181e0a-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:29:55 np0005486808 nova_compute[259627]: 2025-10-14 09:29:55.732 2 DEBUG os_vif [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:3d:7f,bridge_name='br-int',has_traffic_filtering=True,id=f7181e0a-4f5b-4d28-8302-c28ab393a348,network=Network(3cbcf7e5-ac17-454b-893d-3fda266aa395),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7181e0a-4f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:29:55 np0005486808 nova_compute[259627]: 2025-10-14 09:29:55.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:55 np0005486808 nova_compute[259627]: 2025-10-14 09:29:55.733 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:29:55 np0005486808 nova_compute[259627]: 2025-10-14 09:29:55.733 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:29:55 np0005486808 podman[397871]: 2025-10-14 09:29:55.733788421 +0000 UTC m=+0.139675982 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 05:29:55 np0005486808 nova_compute[259627]: 2025-10-14 09:29:55.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:55 np0005486808 nova_compute[259627]: 2025-10-14 09:29:55.736 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf7181e0a-4f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:29:55 np0005486808 nova_compute[259627]: 2025-10-14 09:29:55.736 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf7181e0a-4f, col_values=(('external_ids', {'iface-id': 'f7181e0a-4f5b-4d28-8302-c28ab393a348', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:90:3d:7f', 'vm-uuid': 'edf4b59a-fe1b-48ae-92a3-7bec88fc7491'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:29:55 np0005486808 nova_compute[259627]: 2025-10-14 09:29:55.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:55 np0005486808 NetworkManager[44885]: <info>  [1760434195.7402] manager: (tapf7181e0a-4f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/586)
Oct 14 05:29:55 np0005486808 nova_compute[259627]: 2025-10-14 09:29:55.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:29:55 np0005486808 nova_compute[259627]: 2025-10-14 09:29:55.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:55 np0005486808 nova_compute[259627]: 2025-10-14 09:29:55.749 2 INFO os_vif [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:3d:7f,bridge_name='br-int',has_traffic_filtering=True,id=f7181e0a-4f5b-4d28-8302-c28ab393a348,network=Network(3cbcf7e5-ac17-454b-893d-3fda266aa395),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7181e0a-4f')#033[00m
Oct 14 05:29:55 np0005486808 nova_compute[259627]: 2025-10-14 09:29:55.816 2 DEBUG nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:29:55 np0005486808 nova_compute[259627]: 2025-10-14 09:29:55.816 2 DEBUG nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:29:55 np0005486808 nova_compute[259627]: 2025-10-14 09:29:55.817 2 DEBUG nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:90:3d:7f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:29:55 np0005486808 nova_compute[259627]: 2025-10-14 09:29:55.817 2 INFO nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Using config drive#033[00m
Oct 14 05:29:55 np0005486808 nova_compute[259627]: 2025-10-14 09:29:55.840 2 DEBUG nova.storage.rbd_utils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image edf4b59a-fe1b-48ae-92a3-7bec88fc7491_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:29:55 np0005486808 admiring_villani[397802]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:29:55 np0005486808 admiring_villani[397802]: --> relative data size: 1.0
Oct 14 05:29:55 np0005486808 admiring_villani[397802]: --> All data devices are unavailable
Oct 14 05:29:55 np0005486808 systemd[1]: libpod-5caf0800a5e658bab418df7104cd9315332f6abb78b5e2270a8752d932ac6972.scope: Deactivated successfully.
Oct 14 05:29:55 np0005486808 systemd[1]: libpod-5caf0800a5e658bab418df7104cd9315332f6abb78b5e2270a8752d932ac6972.scope: Consumed 1.008s CPU time.
Oct 14 05:29:56 np0005486808 podman[397955]: 2025-10-14 09:29:56.011026662 +0000 UTC m=+0.046411988 container died 5caf0800a5e658bab418df7104cd9315332f6abb78b5e2270a8752d932ac6972 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 05:29:56 np0005486808 systemd[1]: var-lib-containers-storage-overlay-c74b007edb988d3d22a61963f685ad1a4c43e8371ac49bd380213f79eb05f8c7-merged.mount: Deactivated successfully.
Oct 14 05:29:56 np0005486808 podman[397955]: 2025-10-14 09:29:56.093933161 +0000 UTC m=+0.129318387 container remove 5caf0800a5e658bab418df7104cd9315332f6abb78b5e2270a8752d932ac6972 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_villani, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 05:29:56 np0005486808 systemd[1]: libpod-conmon-5caf0800a5e658bab418df7104cd9315332f6abb78b5e2270a8752d932ac6972.scope: Deactivated successfully.
Oct 14 05:29:56 np0005486808 nova_compute[259627]: 2025-10-14 09:29:56.216 2 DEBUG nova.network.neutron [req-97b74cde-2d65-4b7c-9883-e822c71c7d94 req-e946fa3b-01ec-4ef8-9064-ed0c0e1dad39 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Updated VIF entry in instance network info cache for port f7181e0a-4f5b-4d28-8302-c28ab393a348. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:29:56 np0005486808 nova_compute[259627]: 2025-10-14 09:29:56.218 2 DEBUG nova.network.neutron [req-97b74cde-2d65-4b7c-9883-e822c71c7d94 req-e946fa3b-01ec-4ef8-9064-ed0c0e1dad39 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Updating instance_info_cache with network_info: [{"id": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "address": "fa:16:3e:90:3d:7f", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe90:3d7f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7181e0a-4f", "ovs_interfaceid": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:29:56 np0005486808 nova_compute[259627]: 2025-10-14 09:29:56.241 2 DEBUG oslo_concurrency.lockutils [req-97b74cde-2d65-4b7c-9883-e822c71c7d94 req-e946fa3b-01ec-4ef8-9064-ed0c0e1dad39 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-edf4b59a-fe1b-48ae-92a3-7bec88fc7491" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:29:56 np0005486808 nova_compute[259627]: 2025-10-14 09:29:56.249 2 INFO nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Creating config drive at /var/lib/nova/instances/edf4b59a-fe1b-48ae-92a3-7bec88fc7491/disk.config#033[00m
Oct 14 05:29:56 np0005486808 nova_compute[259627]: 2025-10-14 09:29:56.259 2 DEBUG oslo_concurrency.processutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/edf4b59a-fe1b-48ae-92a3-7bec88fc7491/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo4aa38oa execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:29:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2304: 305 pgs: 305 active+clean; 246 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 802 KiB/s rd, 3.9 MiB/s wr, 105 op/s
Oct 14 05:29:56 np0005486808 nova_compute[259627]: 2025-10-14 09:29:56.429 2 DEBUG oslo_concurrency.processutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/edf4b59a-fe1b-48ae-92a3-7bec88fc7491/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo4aa38oa" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:29:56 np0005486808 nova_compute[259627]: 2025-10-14 09:29:56.474 2 DEBUG nova.storage.rbd_utils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image edf4b59a-fe1b-48ae-92a3-7bec88fc7491_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:29:56 np0005486808 nova_compute[259627]: 2025-10-14 09:29:56.484 2 DEBUG oslo_concurrency.processutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/edf4b59a-fe1b-48ae-92a3-7bec88fc7491/disk.config edf4b59a-fe1b-48ae-92a3-7bec88fc7491_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:29:56 np0005486808 nova_compute[259627]: 2025-10-14 09:29:56.691 2 DEBUG oslo_concurrency.processutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/edf4b59a-fe1b-48ae-92a3-7bec88fc7491/disk.config edf4b59a-fe1b-48ae-92a3-7bec88fc7491_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.207s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:29:56 np0005486808 nova_compute[259627]: 2025-10-14 09:29:56.692 2 INFO nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Deleting local config drive /var/lib/nova/instances/edf4b59a-fe1b-48ae-92a3-7bec88fc7491/disk.config because it was imported into RBD.#033[00m
Oct 14 05:29:56 np0005486808 kernel: tapf7181e0a-4f: entered promiscuous mode
Oct 14 05:29:56 np0005486808 NetworkManager[44885]: <info>  [1760434196.7730] manager: (tapf7181e0a-4f): new Tun device (/org/freedesktop/NetworkManager/Devices/587)
Oct 14 05:29:56 np0005486808 nova_compute[259627]: 2025-10-14 09:29:56.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:56 np0005486808 ovn_controller[152662]: 2025-10-14T09:29:56Z|01431|binding|INFO|Claiming lport f7181e0a-4f5b-4d28-8302-c28ab393a348 for this chassis.
Oct 14 05:29:56 np0005486808 ovn_controller[152662]: 2025-10-14T09:29:56Z|01432|binding|INFO|f7181e0a-4f5b-4d28-8302-c28ab393a348: Claiming fa:16:3e:90:3d:7f 10.100.0.11 2001:db8::f816:3eff:fe90:3d7f
Oct 14 05:29:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:56.786 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:3d:7f 10.100.0.11 2001:db8::f816:3eff:fe90:3d7f'], port_security=['fa:16:3e:90:3d:7f 10.100.0.11 2001:db8::f816:3eff:fe90:3d7f'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28 2001:db8::f816:3eff:fe90:3d7f/64', 'neutron:device_id': 'edf4b59a-fe1b-48ae-92a3-7bec88fc7491', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3cbcf7e5-ac17-454b-893d-3fda266aa395', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '598825b2-c17a-4454-af22-d97e9570639b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=426ad543-f963-4015-b706-28bdf047c321, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=f7181e0a-4f5b-4d28-8302-c28ab393a348) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:29:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:56.789 162547 INFO neutron.agent.ovn.metadata.agent [-] Port f7181e0a-4f5b-4d28-8302-c28ab393a348 in datapath 3cbcf7e5-ac17-454b-893d-3fda266aa395 bound to our chassis#033[00m
Oct 14 05:29:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:56.793 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3cbcf7e5-ac17-454b-893d-3fda266aa395#033[00m
Oct 14 05:29:56 np0005486808 ovn_controller[152662]: 2025-10-14T09:29:56Z|01433|binding|INFO|Setting lport f7181e0a-4f5b-4d28-8302-c28ab393a348 ovn-installed in OVS
Oct 14 05:29:56 np0005486808 ovn_controller[152662]: 2025-10-14T09:29:56Z|01434|binding|INFO|Setting lport f7181e0a-4f5b-4d28-8302-c28ab393a348 up in Southbound
Oct 14 05:29:56 np0005486808 nova_compute[259627]: 2025-10-14 09:29:56.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:56 np0005486808 systemd-udevd[398144]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:29:56 np0005486808 nova_compute[259627]: 2025-10-14 09:29:56.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:56.824 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a6d5eeba-d48c-4f99-b803-674a880a54ef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:56 np0005486808 NetworkManager[44885]: <info>  [1760434196.8446] device (tapf7181e0a-4f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:29:56 np0005486808 NetworkManager[44885]: <info>  [1760434196.8466] device (tapf7181e0a-4f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:29:56 np0005486808 systemd-machined[214636]: New machine qemu-168-instance-00000087.
Oct 14 05:29:56 np0005486808 systemd[1]: Started Virtual Machine qemu-168-instance-00000087.
Oct 14 05:29:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:56.870 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4d9491a1-2bba-4635-bb6a-58ad763d7310]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:56.876 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[903ae0c3-1e5e-45bd-ae33-9f285087ddfb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:56.915 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[60054373-d9aa-4b16-8764-4b18b1d53771]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:56.945 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e2e4fa25-7168-4796-8149-5d54aba8185e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3cbcf7e5-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:43:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 22, 'tx_packets': 5, 'rx_bytes': 1956, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 22, 'tx_packets': 5, 'rx_bytes': 1956, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 405], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 794277, 'reachable_time': 28589, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 20, 'inoctets': 1592, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 20, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1592, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 20, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 398169, 'error': None, 'target': 'ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:56.971 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8e0981af-7ff4-4b3b-beb3-38a94961973c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3cbcf7e5-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 794288, 'tstamp': 794288}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 398178, 'error': None, 'target': 'ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3cbcf7e5-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 794292, 'tstamp': 794292}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 398178, 'error': None, 'target': 'ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:29:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:56.972 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3cbcf7e5-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:29:57 np0005486808 nova_compute[259627]: 2025-10-14 09:29:57.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:57 np0005486808 nova_compute[259627]: 2025-10-14 09:29:57.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:29:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:57.013 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3cbcf7e5-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:29:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:57.013 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:29:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:57.014 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3cbcf7e5-a0, col_values=(('external_ids', {'iface-id': '2a306a4f-b3d3-4c63-8490-e1049c247650'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:29:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:29:57.014 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:29:57 np0005486808 podman[398179]: 2025-10-14 09:29:57.076547601 +0000 UTC m=+0.046077570 container create 37a8ed99351be3a276a1e60b8a3fd28cddb870253c2fe2e576e368f6c5ab341b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_chandrasekhar, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 14 05:29:57 np0005486808 systemd[1]: Started libpod-conmon-37a8ed99351be3a276a1e60b8a3fd28cddb870253c2fe2e576e368f6c5ab341b.scope.
Oct 14 05:29:57 np0005486808 podman[398179]: 2025-10-14 09:29:57.055102441 +0000 UTC m=+0.024632400 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:29:57 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:29:57 np0005486808 podman[398179]: 2025-10-14 09:29:57.184836057 +0000 UTC m=+0.154366016 container init 37a8ed99351be3a276a1e60b8a3fd28cddb870253c2fe2e576e368f6c5ab341b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 05:29:57 np0005486808 podman[398179]: 2025-10-14 09:29:57.197069239 +0000 UTC m=+0.166599188 container start 37a8ed99351be3a276a1e60b8a3fd28cddb870253c2fe2e576e368f6c5ab341b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_chandrasekhar, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:29:57 np0005486808 podman[398179]: 2025-10-14 09:29:57.20074203 +0000 UTC m=+0.170271999 container attach 37a8ed99351be3a276a1e60b8a3fd28cddb870253c2fe2e576e368f6c5ab341b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_chandrasekhar, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 14 05:29:57 np0005486808 objective_chandrasekhar[398196]: 167 167
Oct 14 05:29:57 np0005486808 systemd[1]: libpod-37a8ed99351be3a276a1e60b8a3fd28cddb870253c2fe2e576e368f6c5ab341b.scope: Deactivated successfully.
Oct 14 05:29:57 np0005486808 conmon[398196]: conmon 37a8ed99351be3a276a1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-37a8ed99351be3a276a1e60b8a3fd28cddb870253c2fe2e576e368f6c5ab341b.scope/container/memory.events
Oct 14 05:29:57 np0005486808 podman[398179]: 2025-10-14 09:29:57.207790634 +0000 UTC m=+0.177321053 container died 37a8ed99351be3a276a1e60b8a3fd28cddb870253c2fe2e576e368f6c5ab341b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:29:57 np0005486808 systemd[1]: var-lib-containers-storage-overlay-50fa703c85e1f47f2f1ef77a5018910296cc1e6b308d504d97ed3263486322c5-merged.mount: Deactivated successfully.
Oct 14 05:29:57 np0005486808 podman[398179]: 2025-10-14 09:29:57.25621574 +0000 UTC m=+0.225745689 container remove 37a8ed99351be3a276a1e60b8a3fd28cddb870253c2fe2e576e368f6c5ab341b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_chandrasekhar, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:29:57 np0005486808 systemd[1]: libpod-conmon-37a8ed99351be3a276a1e60b8a3fd28cddb870253c2fe2e576e368f6c5ab341b.scope: Deactivated successfully.
Oct 14 05:29:57 np0005486808 podman[398260]: 2025-10-14 09:29:57.45939643 +0000 UTC m=+0.043420143 container create f4a13d4c079dc5c4cefc46fb4142ea85eb27b73a95797bd7c050dcf25e8418a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 14 05:29:57 np0005486808 systemd[1]: Started libpod-conmon-f4a13d4c079dc5c4cefc46fb4142ea85eb27b73a95797bd7c050dcf25e8418a5.scope.
Oct 14 05:29:57 np0005486808 podman[398260]: 2025-10-14 09:29:57.439670664 +0000 UTC m=+0.023694397 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:29:57 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:29:57 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78d08c1b5f2a3ac5422a1623d3f26457ad423915f7220442db95bb8035700c2a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:29:57 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78d08c1b5f2a3ac5422a1623d3f26457ad423915f7220442db95bb8035700c2a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:29:57 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78d08c1b5f2a3ac5422a1623d3f26457ad423915f7220442db95bb8035700c2a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:29:57 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78d08c1b5f2a3ac5422a1623d3f26457ad423915f7220442db95bb8035700c2a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:29:57 np0005486808 podman[398260]: 2025-10-14 09:29:57.560128439 +0000 UTC m=+0.144152162 container init f4a13d4c079dc5c4cefc46fb4142ea85eb27b73a95797bd7c050dcf25e8418a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brattain, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 05:29:57 np0005486808 podman[398260]: 2025-10-14 09:29:57.566869976 +0000 UTC m=+0.150893689 container start f4a13d4c079dc5c4cefc46fb4142ea85eb27b73a95797bd7c050dcf25e8418a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brattain, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:29:57 np0005486808 podman[398260]: 2025-10-14 09:29:57.570514706 +0000 UTC m=+0.154538429 container attach f4a13d4c079dc5c4cefc46fb4142ea85eb27b73a95797bd7c050dcf25e8418a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brattain, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:29:57 np0005486808 nova_compute[259627]: 2025-10-14 09:29:57.860 2 DEBUG nova.compute.manager [req-9288f348-346a-4af8-900d-0bfbd50b3e7f req-19b03f52-c55f-42bb-8ecb-e88d9e370be3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Received event network-vif-plugged-f7181e0a-4f5b-4d28-8302-c28ab393a348 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:29:57 np0005486808 nova_compute[259627]: 2025-10-14 09:29:57.861 2 DEBUG oslo_concurrency.lockutils [req-9288f348-346a-4af8-900d-0bfbd50b3e7f req-19b03f52-c55f-42bb-8ecb-e88d9e370be3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "edf4b59a-fe1b-48ae-92a3-7bec88fc7491-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:29:57 np0005486808 nova_compute[259627]: 2025-10-14 09:29:57.862 2 DEBUG oslo_concurrency.lockutils [req-9288f348-346a-4af8-900d-0bfbd50b3e7f req-19b03f52-c55f-42bb-8ecb-e88d9e370be3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "edf4b59a-fe1b-48ae-92a3-7bec88fc7491-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:29:57 np0005486808 nova_compute[259627]: 2025-10-14 09:29:57.862 2 DEBUG oslo_concurrency.lockutils [req-9288f348-346a-4af8-900d-0bfbd50b3e7f req-19b03f52-c55f-42bb-8ecb-e88d9e370be3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "edf4b59a-fe1b-48ae-92a3-7bec88fc7491-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:29:57 np0005486808 nova_compute[259627]: 2025-10-14 09:29:57.863 2 DEBUG nova.compute.manager [req-9288f348-346a-4af8-900d-0bfbd50b3e7f req-19b03f52-c55f-42bb-8ecb-e88d9e370be3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Processing event network-vif-plugged-f7181e0a-4f5b-4d28-8302-c28ab393a348 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 05:29:57 np0005486808 nova_compute[259627]: 2025-10-14 09:29:57.893 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434197.892682, edf4b59a-fe1b-48ae-92a3-7bec88fc7491 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:29:57 np0005486808 nova_compute[259627]: 2025-10-14 09:29:57.894 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] VM Started (Lifecycle Event)
Oct 14 05:29:57 np0005486808 nova_compute[259627]: 2025-10-14 09:29:57.898 2 DEBUG nova.compute.manager [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 05:29:57 np0005486808 nova_compute[259627]: 2025-10-14 09:29:57.903 2 DEBUG nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 05:29:57 np0005486808 nova_compute[259627]: 2025-10-14 09:29:57.908 2 INFO nova.virt.libvirt.driver [-] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Instance spawned successfully.
Oct 14 05:29:57 np0005486808 nova_compute[259627]: 2025-10-14 09:29:57.908 2 DEBUG nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 05:29:57 np0005486808 nova_compute[259627]: 2025-10-14 09:29:57.916 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:29:57 np0005486808 nova_compute[259627]: 2025-10-14 09:29:57.921 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 05:29:57 np0005486808 nova_compute[259627]: 2025-10-14 09:29:57.938 2 DEBUG nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:29:57 np0005486808 nova_compute[259627]: 2025-10-14 09:29:57.938 2 DEBUG nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:29:57 np0005486808 nova_compute[259627]: 2025-10-14 09:29:57.939 2 DEBUG nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:29:57 np0005486808 nova_compute[259627]: 2025-10-14 09:29:57.940 2 DEBUG nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:29:57 np0005486808 nova_compute[259627]: 2025-10-14 09:29:57.941 2 DEBUG nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:29:57 np0005486808 nova_compute[259627]: 2025-10-14 09:29:57.942 2 DEBUG nova.virt.libvirt.driver [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:29:57 np0005486808 nova_compute[259627]: 2025-10-14 09:29:57.948 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 05:29:57 np0005486808 nova_compute[259627]: 2025-10-14 09:29:57.949 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434197.8967695, edf4b59a-fe1b-48ae-92a3-7bec88fc7491 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:29:57 np0005486808 nova_compute[259627]: 2025-10-14 09:29:57.949 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] VM Paused (Lifecycle Event)
Oct 14 05:29:57 np0005486808 nova_compute[259627]: 2025-10-14 09:29:57.983 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:29:57 np0005486808 nova_compute[259627]: 2025-10-14 09:29:57.987 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434197.90257, edf4b59a-fe1b-48ae-92a3-7bec88fc7491 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:29:57 np0005486808 nova_compute[259627]: 2025-10-14 09:29:57.988 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] VM Resumed (Lifecycle Event)
Oct 14 05:29:58 np0005486808 nova_compute[259627]: 2025-10-14 09:29:58.009 2 INFO nova.compute.manager [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Took 7.18 seconds to spawn the instance on the hypervisor.
Oct 14 05:29:58 np0005486808 nova_compute[259627]: 2025-10-14 09:29:58.009 2 DEBUG nova.compute.manager [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:29:58 np0005486808 nova_compute[259627]: 2025-10-14 09:29:58.011 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:29:58 np0005486808 nova_compute[259627]: 2025-10-14 09:29:58.018 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 05:29:58 np0005486808 nova_compute[259627]: 2025-10-14 09:29:58.048 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 05:29:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:29:58 np0005486808 nova_compute[259627]: 2025-10-14 09:29:58.073 2 INFO nova.compute.manager [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Took 8.20 seconds to build instance.
Oct 14 05:29:58 np0005486808 nova_compute[259627]: 2025-10-14 09:29:58.090 2 DEBUG oslo_concurrency.lockutils [None req-aa482710-d795-4310-ad7e-3852114c6bf6 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "edf4b59a-fe1b-48ae-92a3-7bec88fc7491" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.282s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]: {
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:    "0": [
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:        {
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:            "devices": [
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:                "/dev/loop3"
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:            ],
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:            "lv_name": "ceph_lv0",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:            "lv_size": "21470642176",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:            "name": "ceph_lv0",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:            "tags": {
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:                "ceph.cluster_name": "ceph",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:                "ceph.crush_device_class": "",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:                "ceph.encrypted": "0",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:                "ceph.osd_id": "0",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:                "ceph.type": "block",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:                "ceph.vdo": "0"
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:            },
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:            "type": "block",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:            "vg_name": "ceph_vg0"
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:        }
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:    ],
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:    "1": [
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:        {
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:            "devices": [
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:                "/dev/loop4"
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:            ],
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:            "lv_name": "ceph_lv1",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:            "lv_size": "21470642176",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:            "name": "ceph_lv1",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:            "tags": {
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:                "ceph.cluster_name": "ceph",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:                "ceph.crush_device_class": "",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:                "ceph.encrypted": "0",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:                "ceph.osd_id": "1",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:                "ceph.type": "block",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:                "ceph.vdo": "0"
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:            },
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:            "type": "block",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:            "vg_name": "ceph_vg1"
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:        }
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:    ],
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:    "2": [
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:        {
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:            "devices": [
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:                "/dev/loop5"
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:            ],
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:            "lv_name": "ceph_lv2",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:            "lv_size": "21470642176",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:            "name": "ceph_lv2",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:            "tags": {
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:                "ceph.cluster_name": "ceph",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:                "ceph.crush_device_class": "",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:                "ceph.encrypted": "0",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:                "ceph.osd_id": "2",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:                "ceph.type": "block",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:                "ceph.vdo": "0"
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:            },
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:            "type": "block",
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:            "vg_name": "ceph_vg2"
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:        }
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]:    ]
Oct 14 05:29:58 np0005486808 beautiful_brattain[398277]: }
Oct 14 05:29:58 np0005486808 systemd[1]: libpod-f4a13d4c079dc5c4cefc46fb4142ea85eb27b73a95797bd7c050dcf25e8418a5.scope: Deactivated successfully.
Oct 14 05:29:58 np0005486808 podman[398260]: 2025-10-14 09:29:58.387601346 +0000 UTC m=+0.971625079 container died f4a13d4c079dc5c4cefc46fb4142ea85eb27b73a95797bd7c050dcf25e8418a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:29:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2305: 305 pgs: 305 active+clean; 246 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Oct 14 05:29:58 np0005486808 systemd[1]: var-lib-containers-storage-overlay-78d08c1b5f2a3ac5422a1623d3f26457ad423915f7220442db95bb8035700c2a-merged.mount: Deactivated successfully.
Oct 14 05:29:58 np0005486808 podman[398260]: 2025-10-14 09:29:58.44807117 +0000 UTC m=+1.032094873 container remove f4a13d4c079dc5c4cefc46fb4142ea85eb27b73a95797bd7c050dcf25e8418a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brattain, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 14 05:29:58 np0005486808 systemd[1]: libpod-conmon-f4a13d4c079dc5c4cefc46fb4142ea85eb27b73a95797bd7c050dcf25e8418a5.scope: Deactivated successfully.
Oct 14 05:29:59 np0005486808 podman[398440]: 2025-10-14 09:29:59.064394359 +0000 UTC m=+0.040591644 container create a778afb204376a7b4ebbd48022fab5e74c7e8330f27cdc84363824fe28520202 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 14 05:29:59 np0005486808 systemd[1]: Started libpod-conmon-a778afb204376a7b4ebbd48022fab5e74c7e8330f27cdc84363824fe28520202.scope.
Oct 14 05:29:59 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:29:59 np0005486808 podman[398440]: 2025-10-14 09:29:59.139043574 +0000 UTC m=+0.115240889 container init a778afb204376a7b4ebbd48022fab5e74c7e8330f27cdc84363824fe28520202 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_bose, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 14 05:29:59 np0005486808 podman[398440]: 2025-10-14 09:29:59.045668786 +0000 UTC m=+0.021866131 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:29:59 np0005486808 podman[398440]: 2025-10-14 09:29:59.148079437 +0000 UTC m=+0.124276732 container start a778afb204376a7b4ebbd48022fab5e74c7e8330f27cdc84363824fe28520202 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 14 05:29:59 np0005486808 adoring_bose[398456]: 167 167
Oct 14 05:29:59 np0005486808 systemd[1]: libpod-a778afb204376a7b4ebbd48022fab5e74c7e8330f27cdc84363824fe28520202.scope: Deactivated successfully.
Oct 14 05:29:59 np0005486808 podman[398440]: 2025-10-14 09:29:59.195098399 +0000 UTC m=+0.171295684 container attach a778afb204376a7b4ebbd48022fab5e74c7e8330f27cdc84363824fe28520202 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:29:59 np0005486808 podman[398440]: 2025-10-14 09:29:59.195885268 +0000 UTC m=+0.172082573 container died a778afb204376a7b4ebbd48022fab5e74c7e8330f27cdc84363824fe28520202 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_bose, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Oct 14 05:29:59 np0005486808 systemd[1]: var-lib-containers-storage-overlay-1a026b168da758625ff37bf5b2f7c40799777bbabbbeee7c8fc983328b214ee5-merged.mount: Deactivated successfully.
Oct 14 05:29:59 np0005486808 podman[398440]: 2025-10-14 09:29:59.246636602 +0000 UTC m=+0.222833927 container remove a778afb204376a7b4ebbd48022fab5e74c7e8330f27cdc84363824fe28520202 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_bose, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 05:29:59 np0005486808 systemd[1]: libpod-conmon-a778afb204376a7b4ebbd48022fab5e74c7e8330f27cdc84363824fe28520202.scope: Deactivated successfully.
Oct 14 05:29:59 np0005486808 podman[398478]: 2025-10-14 09:29:59.438972815 +0000 UTC m=+0.038648986 container create fda6c88571e62e5883f7fdc24d0049b71cbe100a88d6e37b411f33b0e180ef10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 14 05:29:59 np0005486808 systemd[1]: Started libpod-conmon-fda6c88571e62e5883f7fdc24d0049b71cbe100a88d6e37b411f33b0e180ef10.scope.
Oct 14 05:29:59 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:29:59 np0005486808 podman[398478]: 2025-10-14 09:29:59.420832536 +0000 UTC m=+0.020508697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:29:59 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9aa50bf96cab8b53243fb848f1663858d8f2026b985dbcd060fa1c72bd3d3634/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:29:59 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9aa50bf96cab8b53243fb848f1663858d8f2026b985dbcd060fa1c72bd3d3634/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:29:59 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9aa50bf96cab8b53243fb848f1663858d8f2026b985dbcd060fa1c72bd3d3634/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:29:59 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9aa50bf96cab8b53243fb848f1663858d8f2026b985dbcd060fa1c72bd3d3634/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:29:59 np0005486808 podman[398478]: 2025-10-14 09:29:59.540633457 +0000 UTC m=+0.140309688 container init fda6c88571e62e5883f7fdc24d0049b71cbe100a88d6e37b411f33b0e180ef10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_goldstine, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 14 05:29:59 np0005486808 podman[398478]: 2025-10-14 09:29:59.553561866 +0000 UTC m=+0.153238037 container start fda6c88571e62e5883f7fdc24d0049b71cbe100a88d6e37b411f33b0e180ef10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_goldstine, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 05:29:59 np0005486808 podman[398478]: 2025-10-14 09:29:59.557698468 +0000 UTC m=+0.157374639 container attach fda6c88571e62e5883f7fdc24d0049b71cbe100a88d6e37b411f33b0e180ef10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 14 05:29:59 np0005486808 nova_compute[259627]: 2025-10-14 09:29:59.992 2 DEBUG nova.compute.manager [req-73f7d834-28aa-4d83-b6a0-40bbf6e75d47 req-8657a24c-cd52-4ccc-a306-349dd6f7eb30 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Received event network-vif-plugged-f7181e0a-4f5b-4d28-8302-c28ab393a348 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:29:59 np0005486808 nova_compute[259627]: 2025-10-14 09:29:59.993 2 DEBUG oslo_concurrency.lockutils [req-73f7d834-28aa-4d83-b6a0-40bbf6e75d47 req-8657a24c-cd52-4ccc-a306-349dd6f7eb30 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "edf4b59a-fe1b-48ae-92a3-7bec88fc7491-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:29:59 np0005486808 nova_compute[259627]: 2025-10-14 09:29:59.994 2 DEBUG oslo_concurrency.lockutils [req-73f7d834-28aa-4d83-b6a0-40bbf6e75d47 req-8657a24c-cd52-4ccc-a306-349dd6f7eb30 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "edf4b59a-fe1b-48ae-92a3-7bec88fc7491-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:29:59 np0005486808 nova_compute[259627]: 2025-10-14 09:29:59.994 2 DEBUG oslo_concurrency.lockutils [req-73f7d834-28aa-4d83-b6a0-40bbf6e75d47 req-8657a24c-cd52-4ccc-a306-349dd6f7eb30 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "edf4b59a-fe1b-48ae-92a3-7bec88fc7491-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:29:59 np0005486808 nova_compute[259627]: 2025-10-14 09:29:59.994 2 DEBUG nova.compute.manager [req-73f7d834-28aa-4d83-b6a0-40bbf6e75d47 req-8657a24c-cd52-4ccc-a306-349dd6f7eb30 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] No waiting events found dispatching network-vif-plugged-f7181e0a-4f5b-4d28-8302-c28ab393a348 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:29:59 np0005486808 nova_compute[259627]: 2025-10-14 09:29:59.995 2 WARNING nova.compute.manager [req-73f7d834-28aa-4d83-b6a0-40bbf6e75d47 req-8657a24c-cd52-4ccc-a306-349dd6f7eb30 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Received unexpected event network-vif-plugged-f7181e0a-4f5b-4d28-8302-c28ab393a348 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:30:00 np0005486808 nova_compute[259627]: 2025-10-14 09:30:00.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2306: 305 pgs: 305 active+clean; 246 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 789 KiB/s rd, 3.9 MiB/s wr, 105 op/s
Oct 14 05:30:00 np0005486808 amazing_goldstine[398495]: {
Oct 14 05:30:00 np0005486808 amazing_goldstine[398495]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:30:00 np0005486808 amazing_goldstine[398495]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:30:00 np0005486808 amazing_goldstine[398495]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:30:00 np0005486808 amazing_goldstine[398495]:        "osd_id": 2,
Oct 14 05:30:00 np0005486808 amazing_goldstine[398495]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:30:00 np0005486808 amazing_goldstine[398495]:        "type": "bluestore"
Oct 14 05:30:00 np0005486808 amazing_goldstine[398495]:    },
Oct 14 05:30:00 np0005486808 amazing_goldstine[398495]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:30:00 np0005486808 amazing_goldstine[398495]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:30:00 np0005486808 amazing_goldstine[398495]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:30:00 np0005486808 amazing_goldstine[398495]:        "osd_id": 1,
Oct 14 05:30:00 np0005486808 amazing_goldstine[398495]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:30:00 np0005486808 amazing_goldstine[398495]:        "type": "bluestore"
Oct 14 05:30:00 np0005486808 amazing_goldstine[398495]:    },
Oct 14 05:30:00 np0005486808 amazing_goldstine[398495]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:30:00 np0005486808 amazing_goldstine[398495]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:30:00 np0005486808 amazing_goldstine[398495]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:30:00 np0005486808 amazing_goldstine[398495]:        "osd_id": 0,
Oct 14 05:30:00 np0005486808 amazing_goldstine[398495]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:30:00 np0005486808 amazing_goldstine[398495]:        "type": "bluestore"
Oct 14 05:30:00 np0005486808 amazing_goldstine[398495]:    }
Oct 14 05:30:00 np0005486808 amazing_goldstine[398495]: }
Oct 14 05:30:00 np0005486808 systemd[1]: libpod-fda6c88571e62e5883f7fdc24d0049b71cbe100a88d6e37b411f33b0e180ef10.scope: Deactivated successfully.
Oct 14 05:30:00 np0005486808 podman[398478]: 2025-10-14 09:30:00.537483859 +0000 UTC m=+1.137160000 container died fda6c88571e62e5883f7fdc24d0049b71cbe100a88d6e37b411f33b0e180ef10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_goldstine, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Oct 14 05:30:00 np0005486808 systemd[1]: var-lib-containers-storage-overlay-9aa50bf96cab8b53243fb848f1663858d8f2026b985dbcd060fa1c72bd3d3634-merged.mount: Deactivated successfully.
Oct 14 05:30:00 np0005486808 podman[398478]: 2025-10-14 09:30:00.588720195 +0000 UTC m=+1.188396336 container remove fda6c88571e62e5883f7fdc24d0049b71cbe100a88d6e37b411f33b0e180ef10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:30:00 np0005486808 systemd[1]: libpod-conmon-fda6c88571e62e5883f7fdc24d0049b71cbe100a88d6e37b411f33b0e180ef10.scope: Deactivated successfully.
Oct 14 05:30:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:30:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:30:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:30:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:30:00 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 3c5f605b-7aec-49c8-9a61-610334c8b099 does not exist
Oct 14 05:30:00 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 5b70eb0b-8aa3-4a7f-8c1b-5c3fb3238cea does not exist
Oct 14 05:30:00 np0005486808 nova_compute[259627]: 2025-10-14 09:30:00.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:01 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:30:01 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:30:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2307: 305 pgs: 305 active+clean; 246 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.2 MiB/s wr, 159 op/s
Oct 14 05:30:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:30:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:30:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:30:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:30:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:30:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:30:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:30:04 np0005486808 nova_compute[259627]: 2025-10-14 09:30:04.101 2 DEBUG nova.compute.manager [req-a8a50836-7a36-4795-94ac-1bef3336d608 req-1b1238e4-83d8-4eac-8c8b-3663eabf1ba8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Received event network-changed-f7181e0a-4f5b-4d28-8302-c28ab393a348 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:30:04 np0005486808 nova_compute[259627]: 2025-10-14 09:30:04.102 2 DEBUG nova.compute.manager [req-a8a50836-7a36-4795-94ac-1bef3336d608 req-1b1238e4-83d8-4eac-8c8b-3663eabf1ba8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Refreshing instance network info cache due to event network-changed-f7181e0a-4f5b-4d28-8302-c28ab393a348. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:30:04 np0005486808 nova_compute[259627]: 2025-10-14 09:30:04.102 2 DEBUG oslo_concurrency.lockutils [req-a8a50836-7a36-4795-94ac-1bef3336d608 req-1b1238e4-83d8-4eac-8c8b-3663eabf1ba8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-edf4b59a-fe1b-48ae-92a3-7bec88fc7491" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:30:04 np0005486808 nova_compute[259627]: 2025-10-14 09:30:04.103 2 DEBUG oslo_concurrency.lockutils [req-a8a50836-7a36-4795-94ac-1bef3336d608 req-1b1238e4-83d8-4eac-8c8b-3663eabf1ba8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-edf4b59a-fe1b-48ae-92a3-7bec88fc7491" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:30:04 np0005486808 nova_compute[259627]: 2025-10-14 09:30:04.103 2 DEBUG nova.network.neutron [req-a8a50836-7a36-4795-94ac-1bef3336d608 req-1b1238e4-83d8-4eac-8c8b-3663eabf1ba8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Refreshing network info cache for port f7181e0a-4f5b-4d28-8302-c28ab393a348 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:30:04 np0005486808 nova_compute[259627]: 2025-10-14 09:30:04.106 2 DEBUG nova.compute.manager [req-0a9fc5ea-8372-48d2-a226-950dea4e2684 req-875c551b-7c8f-43e4-943f-76b45d2262a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Received event network-changed-3eaf2ed5-bd76-4749-b8f0-58985c91a040 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:30:04 np0005486808 nova_compute[259627]: 2025-10-14 09:30:04.107 2 DEBUG nova.compute.manager [req-0a9fc5ea-8372-48d2-a226-950dea4e2684 req-875c551b-7c8f-43e4-943f-76b45d2262a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Refreshing instance network info cache due to event network-changed-3eaf2ed5-bd76-4749-b8f0-58985c91a040. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:30:04 np0005486808 nova_compute[259627]: 2025-10-14 09:30:04.107 2 DEBUG oslo_concurrency.lockutils [req-0a9fc5ea-8372-48d2-a226-950dea4e2684 req-875c551b-7c8f-43e4-943f-76b45d2262a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:30:04 np0005486808 nova_compute[259627]: 2025-10-14 09:30:04.108 2 DEBUG oslo_concurrency.lockutils [req-0a9fc5ea-8372-48d2-a226-950dea4e2684 req-875c551b-7c8f-43e4-943f-76b45d2262a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:30:04 np0005486808 nova_compute[259627]: 2025-10-14 09:30:04.108 2 DEBUG nova.network.neutron [req-0a9fc5ea-8372-48d2-a226-950dea4e2684 req-875c551b-7c8f-43e4-943f-76b45d2262a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Refreshing network info cache for port 3eaf2ed5-bd76-4749-b8f0-58985c91a040 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:30:04 np0005486808 nova_compute[259627]: 2025-10-14 09:30:04.243 2 DEBUG oslo_concurrency.lockutils [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:30:04 np0005486808 nova_compute[259627]: 2025-10-14 09:30:04.244 2 DEBUG oslo_concurrency.lockutils [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:30:04 np0005486808 nova_compute[259627]: 2025-10-14 09:30:04.244 2 DEBUG oslo_concurrency.lockutils [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:30:04 np0005486808 nova_compute[259627]: 2025-10-14 09:30:04.244 2 DEBUG oslo_concurrency.lockutils [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:30:04 np0005486808 nova_compute[259627]: 2025-10-14 09:30:04.244 2 DEBUG oslo_concurrency.lockutils [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:30:04 np0005486808 nova_compute[259627]: 2025-10-14 09:30:04.245 2 INFO nova.compute.manager [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Terminating instance#033[00m
Oct 14 05:30:04 np0005486808 nova_compute[259627]: 2025-10-14 09:30:04.246 2 DEBUG nova.compute.manager [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:30:04 np0005486808 kernel: tap3eaf2ed5-bd (unregistering): left promiscuous mode
Oct 14 05:30:04 np0005486808 NetworkManager[44885]: <info>  [1760434204.3073] device (tap3eaf2ed5-bd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:30:04 np0005486808 ovn_controller[152662]: 2025-10-14T09:30:04Z|01435|binding|INFO|Releasing lport 3eaf2ed5-bd76-4749-b8f0-58985c91a040 from this chassis (sb_readonly=0)
Oct 14 05:30:04 np0005486808 ovn_controller[152662]: 2025-10-14T09:30:04Z|01436|binding|INFO|Setting lport 3eaf2ed5-bd76-4749-b8f0-58985c91a040 down in Southbound
Oct 14 05:30:04 np0005486808 nova_compute[259627]: 2025-10-14 09:30:04.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:04 np0005486808 ovn_controller[152662]: 2025-10-14T09:30:04Z|01437|binding|INFO|Removing iface tap3eaf2ed5-bd ovn-installed in OVS
Oct 14 05:30:04 np0005486808 nova_compute[259627]: 2025-10-14 09:30:04.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:04.333 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:c0:28 10.100.0.14'], port_security=['fa:16:3e:8c:c0:28 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7a804945-64f0-4b07-8e8b-2ad2beb7451e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f754bf649a2404fa8dee732f5aab36e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9fc4eb48-6270-40e6-85b5-254b29fe464f fbf03768-a7c8-4472-a870-318ef2b37cd0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd3c523a-c2e8-4daa-a5c1-e2e6a8d953b6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=3eaf2ed5-bd76-4749-b8f0-58985c91a040) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:30:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:04.334 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 3eaf2ed5-bd76-4749-b8f0-58985c91a040 in datapath 7a804945-64f0-4b07-8e8b-2ad2beb7451e unbound from our chassis#033[00m
Oct 14 05:30:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:04.335 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7a804945-64f0-4b07-8e8b-2ad2beb7451e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:30:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:04.336 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[62ce8280-3ee8-4280-a093-837d350e7d4e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:30:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:04.337 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e namespace which is not needed anymore#033[00m
Oct 14 05:30:04 np0005486808 nova_compute[259627]: 2025-10-14 09:30:04.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:04 np0005486808 systemd[1]: machine-qemu\x2d167\x2dinstance\x2d00000086.scope: Deactivated successfully.
Oct 14 05:30:04 np0005486808 systemd[1]: machine-qemu\x2d167\x2dinstance\x2d00000086.scope: Consumed 13.037s CPU time.
Oct 14 05:30:04 np0005486808 systemd-machined[214636]: Machine qemu-167-instance-00000086 terminated.
Oct 14 05:30:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2308: 305 pgs: 305 active+clean; 246 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 115 op/s
Oct 14 05:30:04 np0005486808 nova_compute[259627]: 2025-10-14 09:30:04.491 2 INFO nova.virt.libvirt.driver [-] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Instance destroyed successfully.#033[00m
Oct 14 05:30:04 np0005486808 nova_compute[259627]: 2025-10-14 09:30:04.492 2 DEBUG nova.objects.instance [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'resources' on Instance uuid f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:30:04 np0005486808 neutron-haproxy-ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e[397184]: [NOTICE]   (397188) : haproxy version is 2.8.14-c23fe91
Oct 14 05:30:04 np0005486808 neutron-haproxy-ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e[397184]: [NOTICE]   (397188) : path to executable is /usr/sbin/haproxy
Oct 14 05:30:04 np0005486808 neutron-haproxy-ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e[397184]: [WARNING]  (397188) : Exiting Master process...
Oct 14 05:30:04 np0005486808 neutron-haproxy-ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e[397184]: [ALERT]    (397188) : Current worker (397190) exited with code 143 (Terminated)
Oct 14 05:30:04 np0005486808 neutron-haproxy-ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e[397184]: [WARNING]  (397188) : All workers exited. Exiting... (0)
Oct 14 05:30:04 np0005486808 systemd[1]: libpod-31b0bf40db3070dd9ef783f726fd5f2ddbc791b31343f5d42d0e350d5fa35d7c.scope: Deactivated successfully.
Oct 14 05:30:04 np0005486808 conmon[397184]: conmon 31b0bf40db3070dd9ef7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-31b0bf40db3070dd9ef783f726fd5f2ddbc791b31343f5d42d0e350d5fa35d7c.scope/container/memory.events
Oct 14 05:30:04 np0005486808 nova_compute[259627]: 2025-10-14 09:30:04.510 2 DEBUG nova.virt.libvirt.vif [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:29:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-325033662',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-325033662',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ac',id=134,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHiiMn1IPbRe1fqI+0HCf//9qD9/3fgeK9VnineHi5Mo2MAqU+EDOJGm1zH6VOW3MfRMImj3kzww8OxL4WA50EnUF8UJVTfKwxadLnhug9+sBPKdPWQa79dlH3frsHdZeQ==',key_name='tempest-TestSecurityGroupsBasicOps-1209305033',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:29:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-2bof4d8p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:29:39Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "address": "fa:16:3e:8c:c0:28", "network": {"id": "7a804945-64f0-4b07-8e8b-2ad2beb7451e", "bridge": "br-int", "label": "tempest-network-smoke--1677445378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eaf2ed5-bd", "ovs_interfaceid": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:30:04 np0005486808 nova_compute[259627]: 2025-10-14 09:30:04.510 2 DEBUG nova.network.os_vif_util [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "address": "fa:16:3e:8c:c0:28", "network": {"id": "7a804945-64f0-4b07-8e8b-2ad2beb7451e", "bridge": "br-int", "label": "tempest-network-smoke--1677445378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eaf2ed5-bd", "ovs_interfaceid": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:30:04 np0005486808 podman[398613]: 2025-10-14 09:30:04.514530639 +0000 UTC m=+0.077396133 container died 31b0bf40db3070dd9ef783f726fd5f2ddbc791b31343f5d42d0e350d5fa35d7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 05:30:04 np0005486808 nova_compute[259627]: 2025-10-14 09:30:04.514 2 DEBUG nova.network.os_vif_util [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8c:c0:28,bridge_name='br-int',has_traffic_filtering=True,id=3eaf2ed5-bd76-4749-b8f0-58985c91a040,network=Network(7a804945-64f0-4b07-8e8b-2ad2beb7451e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3eaf2ed5-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:30:04 np0005486808 nova_compute[259627]: 2025-10-14 09:30:04.515 2 DEBUG os_vif [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8c:c0:28,bridge_name='br-int',has_traffic_filtering=True,id=3eaf2ed5-bd76-4749-b8f0-58985c91a040,network=Network(7a804945-64f0-4b07-8e8b-2ad2beb7451e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3eaf2ed5-bd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:30:04 np0005486808 nova_compute[259627]: 2025-10-14 09:30:04.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:04 np0005486808 nova_compute[259627]: 2025-10-14 09:30:04.519 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3eaf2ed5-bd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:30:04 np0005486808 nova_compute[259627]: 2025-10-14 09:30:04.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:04 np0005486808 nova_compute[259627]: 2025-10-14 09:30:04.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:30:04 np0005486808 nova_compute[259627]: 2025-10-14 09:30:04.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:04 np0005486808 nova_compute[259627]: 2025-10-14 09:30:04.530 2 INFO os_vif [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8c:c0:28,bridge_name='br-int',has_traffic_filtering=True,id=3eaf2ed5-bd76-4749-b8f0-58985c91a040,network=Network(7a804945-64f0-4b07-8e8b-2ad2beb7451e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3eaf2ed5-bd')#033[00m
Oct 14 05:30:04 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-31b0bf40db3070dd9ef783f726fd5f2ddbc791b31343f5d42d0e350d5fa35d7c-userdata-shm.mount: Deactivated successfully.
Oct 14 05:30:04 np0005486808 systemd[1]: var-lib-containers-storage-overlay-0cdbb75e8a79d144f3413c716738114ae1bf49c3311a5a61a4a5f20e5f113958-merged.mount: Deactivated successfully.
Oct 14 05:30:04 np0005486808 podman[398613]: 2025-10-14 09:30:04.558165087 +0000 UTC m=+0.121030601 container cleanup 31b0bf40db3070dd9ef783f726fd5f2ddbc791b31343f5d42d0e350d5fa35d7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 14 05:30:04 np0005486808 systemd[1]: libpod-conmon-31b0bf40db3070dd9ef783f726fd5f2ddbc791b31343f5d42d0e350d5fa35d7c.scope: Deactivated successfully.
Oct 14 05:30:04 np0005486808 podman[398665]: 2025-10-14 09:30:04.65300484 +0000 UTC m=+0.062766141 container remove 31b0bf40db3070dd9ef783f726fd5f2ddbc791b31343f5d42d0e350d5fa35d7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 05:30:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:04.668 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fed169c0-934c-48c4-b3e8-035a098702bb]: (4, ('Tue Oct 14 09:30:04 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e (31b0bf40db3070dd9ef783f726fd5f2ddbc791b31343f5d42d0e350d5fa35d7c)\n31b0bf40db3070dd9ef783f726fd5f2ddbc791b31343f5d42d0e350d5fa35d7c\nTue Oct 14 09:30:04 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e (31b0bf40db3070dd9ef783f726fd5f2ddbc791b31343f5d42d0e350d5fa35d7c)\n31b0bf40db3070dd9ef783f726fd5f2ddbc791b31343f5d42d0e350d5fa35d7c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:30:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:04.670 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[611778be-cd25-4341-92f9-d1bcccc19b1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:30:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:04.672 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7a804945-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:30:04 np0005486808 nova_compute[259627]: 2025-10-14 09:30:04.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:04 np0005486808 kernel: tap7a804945-60: left promiscuous mode
Oct 14 05:30:04 np0005486808 nova_compute[259627]: 2025-10-14 09:30:04.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:04.697 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f5264363-608f-418d-9e20-09d7066b064b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:30:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:04.720 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[aeab9965-fdc3-4684-97ee-006b6c448eab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:30:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:04.721 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f688e92a-3242-4c1b-ae4e-3386f364548e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:30:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:04.747 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3bf96b21-0f09-43c3-9a99-af6f70eb3d24]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 795703, 'reachable_time': 31120, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 398684, 'error': None, 'target': 'ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:30:04 np0005486808 systemd[1]: run-netns-ovnmeta\x2d7a804945\x2d64f0\x2d4b07\x2d8e8b\x2d2ad2beb7451e.mount: Deactivated successfully.
Oct 14 05:30:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:04.750 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7a804945-64f0-4b07-8e8b-2ad2beb7451e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:30:04 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:04.750 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[f7f7c71d-2903-416f-9be4-622700bfd46e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:30:05 np0005486808 nova_compute[259627]: 2025-10-14 09:30:05.034 2 INFO nova.virt.libvirt.driver [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Deleting instance files /var/lib/nova/instances/f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e_del#033[00m
Oct 14 05:30:05 np0005486808 nova_compute[259627]: 2025-10-14 09:30:05.035 2 INFO nova.virt.libvirt.driver [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Deletion of /var/lib/nova/instances/f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e_del complete#033[00m
Oct 14 05:30:05 np0005486808 nova_compute[259627]: 2025-10-14 09:30:05.116 2 INFO nova.compute.manager [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Took 0.87 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:30:05 np0005486808 nova_compute[259627]: 2025-10-14 09:30:05.117 2 DEBUG oslo.service.loopingcall [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:30:05 np0005486808 nova_compute[259627]: 2025-10-14 09:30:05.117 2 DEBUG nova.compute.manager [-] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:30:05 np0005486808 nova_compute[259627]: 2025-10-14 09:30:05.118 2 DEBUG nova.network.neutron [-] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:30:05 np0005486808 nova_compute[259627]: 2025-10-14 09:30:05.262 2 DEBUG nova.network.neutron [req-0a9fc5ea-8372-48d2-a226-950dea4e2684 req-875c551b-7c8f-43e4-943f-76b45d2262a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Updated VIF entry in instance network info cache for port 3eaf2ed5-bd76-4749-b8f0-58985c91a040. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:30:05 np0005486808 nova_compute[259627]: 2025-10-14 09:30:05.262 2 DEBUG nova.network.neutron [req-0a9fc5ea-8372-48d2-a226-950dea4e2684 req-875c551b-7c8f-43e4-943f-76b45d2262a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Updating instance_info_cache with network_info: [{"id": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "address": "fa:16:3e:8c:c0:28", "network": {"id": "7a804945-64f0-4b07-8e8b-2ad2beb7451e", "bridge": "br-int", "label": "tempest-network-smoke--1677445378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eaf2ed5-bd", "ovs_interfaceid": "3eaf2ed5-bd76-4749-b8f0-58985c91a040", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:30:05 np0005486808 nova_compute[259627]: 2025-10-14 09:30:05.290 2 DEBUG oslo_concurrency.lockutils [req-0a9fc5ea-8372-48d2-a226-950dea4e2684 req-875c551b-7c8f-43e4-943f-76b45d2262a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:30:05 np0005486808 nova_compute[259627]: 2025-10-14 09:30:05.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:30:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3948722892' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:30:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:30:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3948722892' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:30:05 np0005486808 nova_compute[259627]: 2025-10-14 09:30:05.712 2 DEBUG nova.network.neutron [-] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:30:05 np0005486808 nova_compute[259627]: 2025-10-14 09:30:05.731 2 INFO nova.compute.manager [-] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Took 0.61 seconds to deallocate network for instance.#033[00m
Oct 14 05:30:05 np0005486808 nova_compute[259627]: 2025-10-14 09:30:05.767 2 DEBUG oslo_concurrency.lockutils [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:30:05 np0005486808 nova_compute[259627]: 2025-10-14 09:30:05.768 2 DEBUG oslo_concurrency.lockutils [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:30:05 np0005486808 nova_compute[259627]: 2025-10-14 09:30:05.845 2 DEBUG oslo_concurrency.processutils [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:30:05 np0005486808 nova_compute[259627]: 2025-10-14 09:30:05.949 2 DEBUG nova.network.neutron [req-a8a50836-7a36-4795-94ac-1bef3336d608 req-1b1238e4-83d8-4eac-8c8b-3663eabf1ba8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Updated VIF entry in instance network info cache for port f7181e0a-4f5b-4d28-8302-c28ab393a348. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:30:05 np0005486808 nova_compute[259627]: 2025-10-14 09:30:05.951 2 DEBUG nova.network.neutron [req-a8a50836-7a36-4795-94ac-1bef3336d608 req-1b1238e4-83d8-4eac-8c8b-3663eabf1ba8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Updating instance_info_cache with network_info: [{"id": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "address": "fa:16:3e:90:3d:7f", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe90:3d7f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7181e0a-4f", "ovs_interfaceid": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:30:05 np0005486808 nova_compute[259627]: 2025-10-14 09:30:05.977 2 DEBUG oslo_concurrency.lockutils [req-a8a50836-7a36-4795-94ac-1bef3336d608 req-1b1238e4-83d8-4eac-8c8b-3663eabf1ba8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-edf4b59a-fe1b-48ae-92a3-7bec88fc7491" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:30:06 np0005486808 nova_compute[259627]: 2025-10-14 09:30:06.221 2 DEBUG nova.compute.manager [req-aac8266f-5f67-445d-97e2-49ce0f366a47 req-0bdb83e9-7638-4fd0-ba2b-bb287edb9e64 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Received event network-vif-unplugged-3eaf2ed5-bd76-4749-b8f0-58985c91a040 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:30:06 np0005486808 nova_compute[259627]: 2025-10-14 09:30:06.222 2 DEBUG oslo_concurrency.lockutils [req-aac8266f-5f67-445d-97e2-49ce0f366a47 req-0bdb83e9-7638-4fd0-ba2b-bb287edb9e64 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:30:06 np0005486808 nova_compute[259627]: 2025-10-14 09:30:06.222 2 DEBUG oslo_concurrency.lockutils [req-aac8266f-5f67-445d-97e2-49ce0f366a47 req-0bdb83e9-7638-4fd0-ba2b-bb287edb9e64 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:30:06 np0005486808 nova_compute[259627]: 2025-10-14 09:30:06.222 2 DEBUG oslo_concurrency.lockutils [req-aac8266f-5f67-445d-97e2-49ce0f366a47 req-0bdb83e9-7638-4fd0-ba2b-bb287edb9e64 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:30:06 np0005486808 nova_compute[259627]: 2025-10-14 09:30:06.223 2 DEBUG nova.compute.manager [req-aac8266f-5f67-445d-97e2-49ce0f366a47 req-0bdb83e9-7638-4fd0-ba2b-bb287edb9e64 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] No waiting events found dispatching network-vif-unplugged-3eaf2ed5-bd76-4749-b8f0-58985c91a040 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:30:06 np0005486808 nova_compute[259627]: 2025-10-14 09:30:06.223 2 WARNING nova.compute.manager [req-aac8266f-5f67-445d-97e2-49ce0f366a47 req-0bdb83e9-7638-4fd0-ba2b-bb287edb9e64 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Received unexpected event network-vif-unplugged-3eaf2ed5-bd76-4749-b8f0-58985c91a040 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:30:06 np0005486808 nova_compute[259627]: 2025-10-14 09:30:06.224 2 DEBUG nova.compute.manager [req-aac8266f-5f67-445d-97e2-49ce0f366a47 req-0bdb83e9-7638-4fd0-ba2b-bb287edb9e64 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Received event network-vif-plugged-3eaf2ed5-bd76-4749-b8f0-58985c91a040 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:30:06 np0005486808 nova_compute[259627]: 2025-10-14 09:30:06.224 2 DEBUG oslo_concurrency.lockutils [req-aac8266f-5f67-445d-97e2-49ce0f366a47 req-0bdb83e9-7638-4fd0-ba2b-bb287edb9e64 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:30:06 np0005486808 nova_compute[259627]: 2025-10-14 09:30:06.224 2 DEBUG oslo_concurrency.lockutils [req-aac8266f-5f67-445d-97e2-49ce0f366a47 req-0bdb83e9-7638-4fd0-ba2b-bb287edb9e64 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:30:06 np0005486808 nova_compute[259627]: 2025-10-14 09:30:06.225 2 DEBUG oslo_concurrency.lockutils [req-aac8266f-5f67-445d-97e2-49ce0f366a47 req-0bdb83e9-7638-4fd0-ba2b-bb287edb9e64 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:30:06 np0005486808 nova_compute[259627]: 2025-10-14 09:30:06.225 2 DEBUG nova.compute.manager [req-aac8266f-5f67-445d-97e2-49ce0f366a47 req-0bdb83e9-7638-4fd0-ba2b-bb287edb9e64 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] No waiting events found dispatching network-vif-plugged-3eaf2ed5-bd76-4749-b8f0-58985c91a040 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:30:06 np0005486808 nova_compute[259627]: 2025-10-14 09:30:06.225 2 WARNING nova.compute.manager [req-aac8266f-5f67-445d-97e2-49ce0f366a47 req-0bdb83e9-7638-4fd0-ba2b-bb287edb9e64 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Received unexpected event network-vif-plugged-3eaf2ed5-bd76-4749-b8f0-58985c91a040 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:30:06 np0005486808 nova_compute[259627]: 2025-10-14 09:30:06.225 2 DEBUG nova.compute.manager [req-aac8266f-5f67-445d-97e2-49ce0f366a47 req-0bdb83e9-7638-4fd0-ba2b-bb287edb9e64 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Received event network-vif-deleted-3eaf2ed5-bd76-4749-b8f0-58985c91a040 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:30:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:30:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/244300800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:30:06 np0005486808 nova_compute[259627]: 2025-10-14 09:30:06.301 2 DEBUG oslo_concurrency.processutils [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:30:06 np0005486808 nova_compute[259627]: 2025-10-14 09:30:06.309 2 DEBUG nova.compute.provider_tree [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:30:06 np0005486808 nova_compute[259627]: 2025-10-14 09:30:06.346 2 DEBUG nova.scheduler.client.report [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:30:06 np0005486808 nova_compute[259627]: 2025-10-14 09:30:06.387 2 DEBUG oslo_concurrency.lockutils [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:30:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2309: 305 pgs: 305 active+clean; 167 MiB data, 978 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 143 op/s
Oct 14 05:30:06 np0005486808 nova_compute[259627]: 2025-10-14 09:30:06.492 2 INFO nova.scheduler.client.report [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Deleted allocations for instance f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e#033[00m
Oct 14 05:30:06 np0005486808 nova_compute[259627]: 2025-10-14 09:30:06.566 2 DEBUG oslo_concurrency.lockutils [None req-7ff1070f-7eef-4da3-ac6c-74ba3f3d0134 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.322s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:30:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:07.045 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:30:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:07.046 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:30:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:07.046 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:30:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:30:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2310: 305 pgs: 305 active+clean; 167 MiB data, 978 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 102 op/s
Oct 14 05:30:09 np0005486808 nova_compute[259627]: 2025-10-14 09:30:09.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:10 np0005486808 nova_compute[259627]: 2025-10-14 09:30:10.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2311: 305 pgs: 305 active+clean; 184 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 113 op/s
Oct 14 05:30:10 np0005486808 ovn_controller[152662]: 2025-10-14T09:30:10Z|00168|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:90:3d:7f 10.100.0.11
Oct 14 05:30:10 np0005486808 ovn_controller[152662]: 2025-10-14T09:30:10Z|00169|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:90:3d:7f 10.100.0.11
Oct 14 05:30:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2312: 305 pgs: 305 active+clean; 200 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 151 op/s
Oct 14 05:30:12 np0005486808 podman[398709]: 2025-10-14 09:30:12.722679696 +0000 UTC m=+0.121816871 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 14 05:30:12 np0005486808 podman[398708]: 2025-10-14 09:30:12.728342256 +0000 UTC m=+0.130113016 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 14 05:30:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:30:13 np0005486808 ovn_controller[152662]: 2025-10-14T09:30:13Z|01438|binding|INFO|Releasing lport 2a306a4f-b3d3-4c63-8490-e1049c247650 from this chassis (sb_readonly=0)
Oct 14 05:30:13 np0005486808 nova_compute[259627]: 2025-10-14 09:30:13.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:13 np0005486808 nova_compute[259627]: 2025-10-14 09:30:13.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:30:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2313: 305 pgs: 305 active+clean; 200 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Oct 14 05:30:14 np0005486808 nova_compute[259627]: 2025-10-14 09:30:14.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:15 np0005486808 nova_compute[259627]: 2025-10-14 09:30:15.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2314: 305 pgs: 305 active+clean; 200 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Oct 14 05:30:17 np0005486808 nova_compute[259627]: 2025-10-14 09:30:17.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:30:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2315: 305 pgs: 305 active+clean; 200 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 05:30:19 np0005486808 nova_compute[259627]: 2025-10-14 09:30:19.489 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434204.4882314, f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:30:19 np0005486808 nova_compute[259627]: 2025-10-14 09:30:19.489 2 INFO nova.compute.manager [-] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:30:19 np0005486808 nova_compute[259627]: 2025-10-14 09:30:19.518 2 DEBUG nova.compute.manager [None req-372005b1-3c92-4d11-ba35-9b3a1ba2530c - - - - - -] [instance: f1d868b6-c1ad-4be4-b27e-ca17f7ed8e2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:30:19 np0005486808 nova_compute[259627]: 2025-10-14 09:30:19.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:20 np0005486808 nova_compute[259627]: 2025-10-14 09:30:20.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2316: 305 pgs: 305 active+clean; 200 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 05:30:21 np0005486808 nova_compute[259627]: 2025-10-14 09:30:21.315 2 DEBUG nova.compute.manager [req-fba9a4f9-f3eb-4025-9096-4539191e4d05 req-5c2a29e1-41f5-4239-87e5-830217054859 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Received event network-changed-f7181e0a-4f5b-4d28-8302-c28ab393a348 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:30:21 np0005486808 nova_compute[259627]: 2025-10-14 09:30:21.315 2 DEBUG nova.compute.manager [req-fba9a4f9-f3eb-4025-9096-4539191e4d05 req-5c2a29e1-41f5-4239-87e5-830217054859 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Refreshing instance network info cache due to event network-changed-f7181e0a-4f5b-4d28-8302-c28ab393a348. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:30:21 np0005486808 nova_compute[259627]: 2025-10-14 09:30:21.316 2 DEBUG oslo_concurrency.lockutils [req-fba9a4f9-f3eb-4025-9096-4539191e4d05 req-5c2a29e1-41f5-4239-87e5-830217054859 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-edf4b59a-fe1b-48ae-92a3-7bec88fc7491" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:30:21 np0005486808 nova_compute[259627]: 2025-10-14 09:30:21.316 2 DEBUG oslo_concurrency.lockutils [req-fba9a4f9-f3eb-4025-9096-4539191e4d05 req-5c2a29e1-41f5-4239-87e5-830217054859 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-edf4b59a-fe1b-48ae-92a3-7bec88fc7491" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:30:21 np0005486808 nova_compute[259627]: 2025-10-14 09:30:21.317 2 DEBUG nova.network.neutron [req-fba9a4f9-f3eb-4025-9096-4539191e4d05 req-5c2a29e1-41f5-4239-87e5-830217054859 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Refreshing network info cache for port f7181e0a-4f5b-4d28-8302-c28ab393a348 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:30:21 np0005486808 nova_compute[259627]: 2025-10-14 09:30:21.367 2 DEBUG oslo_concurrency.lockutils [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "edf4b59a-fe1b-48ae-92a3-7bec88fc7491" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:30:21 np0005486808 nova_compute[259627]: 2025-10-14 09:30:21.367 2 DEBUG oslo_concurrency.lockutils [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "edf4b59a-fe1b-48ae-92a3-7bec88fc7491" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:30:21 np0005486808 nova_compute[259627]: 2025-10-14 09:30:21.368 2 DEBUG oslo_concurrency.lockutils [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "edf4b59a-fe1b-48ae-92a3-7bec88fc7491-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:30:21 np0005486808 nova_compute[259627]: 2025-10-14 09:30:21.368 2 DEBUG oslo_concurrency.lockutils [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "edf4b59a-fe1b-48ae-92a3-7bec88fc7491-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:30:21 np0005486808 nova_compute[259627]: 2025-10-14 09:30:21.369 2 DEBUG oslo_concurrency.lockutils [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "edf4b59a-fe1b-48ae-92a3-7bec88fc7491-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:30:21 np0005486808 nova_compute[259627]: 2025-10-14 09:30:21.371 2 INFO nova.compute.manager [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Terminating instance#033[00m
Oct 14 05:30:21 np0005486808 nova_compute[259627]: 2025-10-14 09:30:21.373 2 DEBUG nova.compute.manager [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:30:21 np0005486808 nova_compute[259627]: 2025-10-14 09:30:21.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:21 np0005486808 kernel: tapf7181e0a-4f (unregistering): left promiscuous mode
Oct 14 05:30:21 np0005486808 NetworkManager[44885]: <info>  [1760434221.4592] device (tapf7181e0a-4f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:30:21 np0005486808 nova_compute[259627]: 2025-10-14 09:30:21.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:21 np0005486808 ovn_controller[152662]: 2025-10-14T09:30:21Z|01439|binding|INFO|Releasing lport f7181e0a-4f5b-4d28-8302-c28ab393a348 from this chassis (sb_readonly=0)
Oct 14 05:30:21 np0005486808 ovn_controller[152662]: 2025-10-14T09:30:21Z|01440|binding|INFO|Setting lport f7181e0a-4f5b-4d28-8302-c28ab393a348 down in Southbound
Oct 14 05:30:21 np0005486808 ovn_controller[152662]: 2025-10-14T09:30:21Z|01441|binding|INFO|Removing iface tapf7181e0a-4f ovn-installed in OVS
Oct 14 05:30:21 np0005486808 nova_compute[259627]: 2025-10-14 09:30:21.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:21.482 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:3d:7f 10.100.0.11 2001:db8::f816:3eff:fe90:3d7f'], port_security=['fa:16:3e:90:3d:7f 10.100.0.11 2001:db8::f816:3eff:fe90:3d7f'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28 2001:db8::f816:3eff:fe90:3d7f/64', 'neutron:device_id': 'edf4b59a-fe1b-48ae-92a3-7bec88fc7491', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3cbcf7e5-ac17-454b-893d-3fda266aa395', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': '598825b2-c17a-4454-af22-d97e9570639b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=426ad543-f963-4015-b706-28bdf047c321, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=f7181e0a-4f5b-4d28-8302-c28ab393a348) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:30:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:21.485 162547 INFO neutron.agent.ovn.metadata.agent [-] Port f7181e0a-4f5b-4d28-8302-c28ab393a348 in datapath 3cbcf7e5-ac17-454b-893d-3fda266aa395 unbound from our chassis#033[00m
Oct 14 05:30:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:21.487 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3cbcf7e5-ac17-454b-893d-3fda266aa395#033[00m
Oct 14 05:30:21 np0005486808 nova_compute[259627]: 2025-10-14 09:30:21.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:21.514 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d577d5ee-cb55-4b57-80f7-d86838d083c9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:30:21 np0005486808 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d00000087.scope: Deactivated successfully.
Oct 14 05:30:21 np0005486808 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d00000087.scope: Consumed 13.241s CPU time.
Oct 14 05:30:21 np0005486808 systemd-machined[214636]: Machine qemu-168-instance-00000087 terminated.
Oct 14 05:30:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:21.562 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[38b032a7-d2be-4250-b737-836ab07533be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:30:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:21.566 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f01e51c4-c188-40ab-99ff-998dfc3c1fbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:30:21 np0005486808 nova_compute[259627]: 2025-10-14 09:30:21.622 2 INFO nova.virt.libvirt.driver [-] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Instance destroyed successfully.#033[00m
Oct 14 05:30:21 np0005486808 nova_compute[259627]: 2025-10-14 09:30:21.623 2 DEBUG nova.objects.instance [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'resources' on Instance uuid edf4b59a-fe1b-48ae-92a3-7bec88fc7491 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:30:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:21.626 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c044161a-fe4c-4588-967f-d4847cef4766]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:30:21 np0005486808 nova_compute[259627]: 2025-10-14 09:30:21.642 2 DEBUG nova.virt.libvirt.vif [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:29:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1997112513',display_name='tempest-TestGettingAddress-server-1997112513',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1997112513',id=135,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP3NS95m6mTTkHzJJUJYRW56WG9z1F4s978ToWhMf7sPvH/Oc2BrnynBRS1TQVQWP6VVqaEePs7ictqhZIZyqMTpbO+UGDq3FfVttcWsPDDYdiFqglsH1qUqUZlXlWfhiQ==',key_name='tempest-TestGettingAddress-313667449',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:29:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-viso8tmx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:29:58Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=edf4b59a-fe1b-48ae-92a3-7bec88fc7491,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "address": "fa:16:3e:90:3d:7f", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe90:3d7f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7181e0a-4f", "ovs_interfaceid": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:30:21 np0005486808 nova_compute[259627]: 2025-10-14 09:30:21.643 2 DEBUG nova.network.os_vif_util [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "address": "fa:16:3e:90:3d:7f", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe90:3d7f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7181e0a-4f", "ovs_interfaceid": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:30:21 np0005486808 nova_compute[259627]: 2025-10-14 09:30:21.644 2 DEBUG nova.network.os_vif_util [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:90:3d:7f,bridge_name='br-int',has_traffic_filtering=True,id=f7181e0a-4f5b-4d28-8302-c28ab393a348,network=Network(3cbcf7e5-ac17-454b-893d-3fda266aa395),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7181e0a-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:30:21 np0005486808 nova_compute[259627]: 2025-10-14 09:30:21.645 2 DEBUG os_vif [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:90:3d:7f,bridge_name='br-int',has_traffic_filtering=True,id=f7181e0a-4f5b-4d28-8302-c28ab393a348,network=Network(3cbcf7e5-ac17-454b-893d-3fda266aa395),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7181e0a-4f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:30:21 np0005486808 nova_compute[259627]: 2025-10-14 09:30:21.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:21 np0005486808 nova_compute[259627]: 2025-10-14 09:30:21.647 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf7181e0a-4f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:30:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:21.660 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f4e72025-b0c3-4371-a0e8-09b78e79e92f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3cbcf7e5-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:43:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 36, 'tx_packets': 7, 'rx_bytes': 3080, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 36, 'tx_packets': 7, 'rx_bytes': 3080, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 405], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 794277, 'reachable_time': 28589, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 32, 'inoctets': 2464, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 32, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2464, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 32, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 398770, 'error': None, 'target': 'ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:30:21 np0005486808 nova_compute[259627]: 2025-10-14 09:30:21.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:21 np0005486808 nova_compute[259627]: 2025-10-14 09:30:21.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:21 np0005486808 nova_compute[259627]: 2025-10-14 09:30:21.714 2 INFO os_vif [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:90:3d:7f,bridge_name='br-int',has_traffic_filtering=True,id=f7181e0a-4f5b-4d28-8302-c28ab393a348,network=Network(3cbcf7e5-ac17-454b-893d-3fda266aa395),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7181e0a-4f')#033[00m
Oct 14 05:30:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:21.724 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fd01d993-e84c-441a-8942-49768b006d66]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3cbcf7e5-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 794288, 'tstamp': 794288}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 398772, 'error': None, 'target': 'ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3cbcf7e5-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 794292, 'tstamp': 794292}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 398772, 'error': None, 'target': 'ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:30:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:21.726 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3cbcf7e5-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:30:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:21.730 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3cbcf7e5-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:30:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:21.730 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:30:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:21.731 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3cbcf7e5-a0, col_values=(('external_ids', {'iface-id': '2a306a4f-b3d3-4c63-8490-e1049c247650'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:30:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:21.731 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:30:21 np0005486808 nova_compute[259627]: 2025-10-14 09:30:21.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:22 np0005486808 nova_compute[259627]: 2025-10-14 09:30:22.115 2 INFO nova.virt.libvirt.driver [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Deleting instance files /var/lib/nova/instances/edf4b59a-fe1b-48ae-92a3-7bec88fc7491_del#033[00m
Oct 14 05:30:22 np0005486808 nova_compute[259627]: 2025-10-14 09:30:22.118 2 INFO nova.virt.libvirt.driver [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Deletion of /var/lib/nova/instances/edf4b59a-fe1b-48ae-92a3-7bec88fc7491_del complete#033[00m
Oct 14 05:30:22 np0005486808 nova_compute[259627]: 2025-10-14 09:30:22.187 2 INFO nova.compute.manager [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Took 0.81 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:30:22 np0005486808 nova_compute[259627]: 2025-10-14 09:30:22.188 2 DEBUG oslo.service.loopingcall [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:30:22 np0005486808 nova_compute[259627]: 2025-10-14 09:30:22.188 2 DEBUG nova.compute.manager [-] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:30:22 np0005486808 nova_compute[259627]: 2025-10-14 09:30:22.189 2 DEBUG nova.network.neutron [-] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:30:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2317: 305 pgs: 305 active+clean; 200 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 267 KiB/s rd, 1012 KiB/s wr, 53 op/s
Oct 14 05:30:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:30:23 np0005486808 nova_compute[259627]: 2025-10-14 09:30:23.161 2 DEBUG nova.network.neutron [-] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:30:23 np0005486808 nova_compute[259627]: 2025-10-14 09:30:23.187 2 INFO nova.compute.manager [-] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Took 1.00 seconds to deallocate network for instance.#033[00m
Oct 14 05:30:23 np0005486808 nova_compute[259627]: 2025-10-14 09:30:23.240 2 DEBUG oslo_concurrency.lockutils [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:30:23 np0005486808 nova_compute[259627]: 2025-10-14 09:30:23.241 2 DEBUG oslo_concurrency.lockutils [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:30:23 np0005486808 nova_compute[259627]: 2025-10-14 09:30:23.346 2 DEBUG oslo_concurrency.processutils [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:30:23 np0005486808 nova_compute[259627]: 2025-10-14 09:30:23.518 2 DEBUG nova.compute.manager [req-7f27f3c9-3d45-4bfa-869f-376a91dcf49b req-801c7f03-99dc-439a-ae63-a0ce8c25d45d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Received event network-vif-deleted-f7181e0a-4f5b-4d28-8302-c28ab393a348 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:30:23 np0005486808 nova_compute[259627]: 2025-10-14 09:30:23.533 2 DEBUG nova.network.neutron [req-fba9a4f9-f3eb-4025-9096-4539191e4d05 req-5c2a29e1-41f5-4239-87e5-830217054859 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Updated VIF entry in instance network info cache for port f7181e0a-4f5b-4d28-8302-c28ab393a348. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:30:23 np0005486808 nova_compute[259627]: 2025-10-14 09:30:23.534 2 DEBUG nova.network.neutron [req-fba9a4f9-f3eb-4025-9096-4539191e4d05 req-5c2a29e1-41f5-4239-87e5-830217054859 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Updating instance_info_cache with network_info: [{"id": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "address": "fa:16:3e:90:3d:7f", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe90:3d7f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7181e0a-4f", "ovs_interfaceid": "f7181e0a-4f5b-4d28-8302-c28ab393a348", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:30:23 np0005486808 nova_compute[259627]: 2025-10-14 09:30:23.553 2 DEBUG oslo_concurrency.lockutils [req-fba9a4f9-f3eb-4025-9096-4539191e4d05 req-5c2a29e1-41f5-4239-87e5-830217054859 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-edf4b59a-fe1b-48ae-92a3-7bec88fc7491" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:30:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:30:23 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2827997056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:30:23 np0005486808 nova_compute[259627]: 2025-10-14 09:30:23.815 2 DEBUG oslo_concurrency.processutils [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:30:23 np0005486808 nova_compute[259627]: 2025-10-14 09:30:23.822 2 DEBUG nova.compute.provider_tree [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:30:23 np0005486808 nova_compute[259627]: 2025-10-14 09:30:23.839 2 DEBUG nova.scheduler.client.report [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:30:23 np0005486808 nova_compute[259627]: 2025-10-14 09:30:23.861 2 DEBUG oslo_concurrency.lockutils [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:30:23 np0005486808 nova_compute[259627]: 2025-10-14 09:30:23.889 2 INFO nova.scheduler.client.report [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Deleted allocations for instance edf4b59a-fe1b-48ae-92a3-7bec88fc7491#033[00m
Oct 14 05:30:23 np0005486808 nova_compute[259627]: 2025-10-14 09:30:23.965 2 DEBUG oslo_concurrency.lockutils [None req-6c32b55b-598e-4802-b84f-ab341515e0e2 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "edf4b59a-fe1b-48ae-92a3-7bec88fc7491" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:30:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2318: 305 pgs: 305 active+clean; 200 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 6.7 KiB/s rd, 12 KiB/s wr, 1 op/s
Oct 14 05:30:25 np0005486808 nova_compute[259627]: 2025-10-14 09:30:25.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:25 np0005486808 nova_compute[259627]: 2025-10-14 09:30:25.450 2 DEBUG oslo_concurrency.lockutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:30:25 np0005486808 nova_compute[259627]: 2025-10-14 09:30:25.451 2 DEBUG oslo_concurrency.lockutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:30:25 np0005486808 nova_compute[259627]: 2025-10-14 09:30:25.486 2 DEBUG nova.compute.manager [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:30:25 np0005486808 nova_compute[259627]: 2025-10-14 09:30:25.541 2 DEBUG oslo_concurrency.lockutils [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:30:25 np0005486808 nova_compute[259627]: 2025-10-14 09:30:25.542 2 DEBUG oslo_concurrency.lockutils [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:30:25 np0005486808 nova_compute[259627]: 2025-10-14 09:30:25.543 2 DEBUG oslo_concurrency.lockutils [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:30:25 np0005486808 nova_compute[259627]: 2025-10-14 09:30:25.543 2 DEBUG oslo_concurrency.lockutils [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:30:25 np0005486808 nova_compute[259627]: 2025-10-14 09:30:25.544 2 DEBUG oslo_concurrency.lockutils [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:30:25 np0005486808 nova_compute[259627]: 2025-10-14 09:30:25.546 2 INFO nova.compute.manager [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Terminating instance#033[00m
Oct 14 05:30:25 np0005486808 nova_compute[259627]: 2025-10-14 09:30:25.548 2 DEBUG nova.compute.manager [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:30:25 np0005486808 kernel: tap312d35c6-7a (unregistering): left promiscuous mode
Oct 14 05:30:25 np0005486808 NetworkManager[44885]: <info>  [1760434225.6147] device (tap312d35c6-7a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:30:25 np0005486808 nova_compute[259627]: 2025-10-14 09:30:25.623 2 DEBUG oslo_concurrency.lockutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:30:25 np0005486808 nova_compute[259627]: 2025-10-14 09:30:25.623 2 DEBUG oslo_concurrency.lockutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:30:25 np0005486808 nova_compute[259627]: 2025-10-14 09:30:25.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:30:25 np0005486808 ovn_controller[152662]: 2025-10-14T09:30:25Z|01442|binding|INFO|Releasing lport 312d35c6-7aa5-4056-b4ed-679cf0e1a12a from this chassis (sb_readonly=0)
Oct 14 05:30:25 np0005486808 ovn_controller[152662]: 2025-10-14T09:30:25Z|01443|binding|INFO|Setting lport 312d35c6-7aa5-4056-b4ed-679cf0e1a12a down in Southbound
Oct 14 05:30:25 np0005486808 ovn_controller[152662]: 2025-10-14T09:30:25Z|01444|binding|INFO|Removing iface tap312d35c6-7a ovn-installed in OVS
Oct 14 05:30:25 np0005486808 nova_compute[259627]: 2025-10-14 09:30:25.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:30:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:25.635 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:44:e9 10.100.0.8 2001:db8::f816:3eff:fe40:44e9'], port_security=['fa:16:3e:40:44:e9 10.100.0.8 2001:db8::f816:3eff:fe40:44e9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28 2001:db8::f816:3eff:fe40:44e9/64', 'neutron:device_id': '9bab7e53-30b3-4cd0-ad07-3cc9b5c05492', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3cbcf7e5-ac17-454b-893d-3fda266aa395', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': '598825b2-c17a-4454-af22-d97e9570639b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=426ad543-f963-4015-b706-28bdf047c321, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=312d35c6-7aa5-4056-b4ed-679cf0e1a12a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 05:30:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:25.636 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 312d35c6-7aa5-4056-b4ed-679cf0e1a12a in datapath 3cbcf7e5-ac17-454b-893d-3fda266aa395 unbound from our chassis
Oct 14 05:30:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:25.637 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3cbcf7e5-ac17-454b-893d-3fda266aa395, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 05:30:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:25.638 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[52110287-8618-4d68-9eb0-56698faae691]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:30:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:25.639 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395 namespace which is not needed anymore
Oct 14 05:30:25 np0005486808 nova_compute[259627]: 2025-10-14 09:30:25.652 2 DEBUG nova.virt.hardware [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 14 05:30:25 np0005486808 nova_compute[259627]: 2025-10-14 09:30:25.652 2 INFO nova.compute.claims [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Claim successful on node compute-0.ctlplane.example.com
Oct 14 05:30:25 np0005486808 nova_compute[259627]: 2025-10-14 09:30:25.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:30:25 np0005486808 nova_compute[259627]: 2025-10-14 09:30:25.685 2 DEBUG nova.compute.manager [req-e1e06327-cdf3-4a61-bf4e-510103b0e7dd req-9caab46b-e75c-43e0-a4d7-818f357a9427 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Received event network-changed-312d35c6-7aa5-4056-b4ed-679cf0e1a12a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:30:25 np0005486808 nova_compute[259627]: 2025-10-14 09:30:25.686 2 DEBUG nova.compute.manager [req-e1e06327-cdf3-4a61-bf4e-510103b0e7dd req-9caab46b-e75c-43e0-a4d7-818f357a9427 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Refreshing instance network info cache due to event network-changed-312d35c6-7aa5-4056-b4ed-679cf0e1a12a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 05:30:25 np0005486808 nova_compute[259627]: 2025-10-14 09:30:25.687 2 DEBUG oslo_concurrency.lockutils [req-e1e06327-cdf3-4a61-bf4e-510103b0e7dd req-9caab46b-e75c-43e0-a4d7-818f357a9427 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-9bab7e53-30b3-4cd0-ad07-3cc9b5c05492" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 05:30:25 np0005486808 nova_compute[259627]: 2025-10-14 09:30:25.687 2 DEBUG oslo_concurrency.lockutils [req-e1e06327-cdf3-4a61-bf4e-510103b0e7dd req-9caab46b-e75c-43e0-a4d7-818f357a9427 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-9bab7e53-30b3-4cd0-ad07-3cc9b5c05492" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 05:30:25 np0005486808 nova_compute[259627]: 2025-10-14 09:30:25.688 2 DEBUG nova.network.neutron [req-e1e06327-cdf3-4a61-bf4e-510103b0e7dd req-9caab46b-e75c-43e0-a4d7-818f357a9427 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Refreshing network info cache for port 312d35c6-7aa5-4056-b4ed-679cf0e1a12a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 05:30:25 np0005486808 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d00000085.scope: Deactivated successfully.
Oct 14 05:30:25 np0005486808 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d00000085.scope: Consumed 14.820s CPU time.
Oct 14 05:30:25 np0005486808 systemd-machined[214636]: Machine qemu-166-instance-00000085 terminated.
Oct 14 05:30:25 np0005486808 podman[398824]: 2025-10-14 09:30:25.762514395 +0000 UTC m=+0.067622362 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:30:25 np0005486808 kernel: tap312d35c6-7a: entered promiscuous mode
Oct 14 05:30:25 np0005486808 NetworkManager[44885]: <info>  [1760434225.7698] manager: (tap312d35c6-7a): new Tun device (/org/freedesktop/NetworkManager/Devices/588)
Oct 14 05:30:25 np0005486808 ovn_controller[152662]: 2025-10-14T09:30:25Z|01445|binding|INFO|Claiming lport 312d35c6-7aa5-4056-b4ed-679cf0e1a12a for this chassis.
Oct 14 05:30:25 np0005486808 ovn_controller[152662]: 2025-10-14T09:30:25Z|01446|binding|INFO|312d35c6-7aa5-4056-b4ed-679cf0e1a12a: Claiming fa:16:3e:40:44:e9 10.100.0.8 2001:db8::f816:3eff:fe40:44e9
Oct 14 05:30:25 np0005486808 systemd-udevd[398821]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:30:25 np0005486808 nova_compute[259627]: 2025-10-14 09:30:25.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:30:25 np0005486808 kernel: tap312d35c6-7a (unregistering): left promiscuous mode
Oct 14 05:30:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:25.784 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:44:e9 10.100.0.8 2001:db8::f816:3eff:fe40:44e9'], port_security=['fa:16:3e:40:44:e9 10.100.0.8 2001:db8::f816:3eff:fe40:44e9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28 2001:db8::f816:3eff:fe40:44e9/64', 'neutron:device_id': '9bab7e53-30b3-4cd0-ad07-3cc9b5c05492', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3cbcf7e5-ac17-454b-893d-3fda266aa395', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': '598825b2-c17a-4454-af22-d97e9570639b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=426ad543-f963-4015-b706-28bdf047c321, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=312d35c6-7aa5-4056-b4ed-679cf0e1a12a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 05:30:25 np0005486808 ovn_controller[152662]: 2025-10-14T09:30:25Z|01447|binding|INFO|Setting lport 312d35c6-7aa5-4056-b4ed-679cf0e1a12a ovn-installed in OVS
Oct 14 05:30:25 np0005486808 ovn_controller[152662]: 2025-10-14T09:30:25Z|01448|binding|INFO|Setting lport 312d35c6-7aa5-4056-b4ed-679cf0e1a12a up in Southbound
Oct 14 05:30:25 np0005486808 ovn_controller[152662]: 2025-10-14T09:30:25Z|01449|binding|INFO|Releasing lport 312d35c6-7aa5-4056-b4ed-679cf0e1a12a from this chassis (sb_readonly=1)
Oct 14 05:30:25 np0005486808 ovn_controller[152662]: 2025-10-14T09:30:25Z|01450|binding|INFO|Removing iface tap312d35c6-7a ovn-installed in OVS
Oct 14 05:30:25 np0005486808 ovn_controller[152662]: 2025-10-14T09:30:25Z|01451|if_status|INFO|Dropped 2 log messages in last 162 seconds (most recently, 162 seconds ago) due to excessive rate
Oct 14 05:30:25 np0005486808 ovn_controller[152662]: 2025-10-14T09:30:25Z|01452|if_status|INFO|Not setting lport 312d35c6-7aa5-4056-b4ed-679cf0e1a12a down as sb is readonly
Oct 14 05:30:25 np0005486808 nova_compute[259627]: 2025-10-14 09:30:25.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:30:25 np0005486808 ovn_controller[152662]: 2025-10-14T09:30:25Z|01453|binding|INFO|Releasing lport 312d35c6-7aa5-4056-b4ed-679cf0e1a12a from this chassis (sb_readonly=0)
Oct 14 05:30:25 np0005486808 ovn_controller[152662]: 2025-10-14T09:30:25Z|01454|binding|INFO|Setting lport 312d35c6-7aa5-4056-b4ed-679cf0e1a12a down in Southbound
Oct 14 05:30:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:25.808 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:44:e9 10.100.0.8 2001:db8::f816:3eff:fe40:44e9'], port_security=['fa:16:3e:40:44:e9 10.100.0.8 2001:db8::f816:3eff:fe40:44e9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28 2001:db8::f816:3eff:fe40:44e9/64', 'neutron:device_id': '9bab7e53-30b3-4cd0-ad07-3cc9b5c05492', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3cbcf7e5-ac17-454b-893d-3fda266aa395', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': '598825b2-c17a-4454-af22-d97e9570639b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=426ad543-f963-4015-b706-28bdf047c321, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=312d35c6-7aa5-4056-b4ed-679cf0e1a12a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 05:30:25 np0005486808 nova_compute[259627]: 2025-10-14 09:30:25.810 2 INFO nova.virt.libvirt.driver [-] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Instance destroyed successfully.
Oct 14 05:30:25 np0005486808 nova_compute[259627]: 2025-10-14 09:30:25.811 2 DEBUG nova.objects.instance [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'resources' on Instance uuid 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:30:25 np0005486808 nova_compute[259627]: 2025-10-14 09:30:25.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:30:25 np0005486808 neutron-haproxy-ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395[396675]: [NOTICE]   (396685) : haproxy version is 2.8.14-c23fe91
Oct 14 05:30:25 np0005486808 neutron-haproxy-ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395[396675]: [NOTICE]   (396685) : path to executable is /usr/sbin/haproxy
Oct 14 05:30:25 np0005486808 neutron-haproxy-ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395[396675]: [WARNING]  (396685) : Exiting Master process...
Oct 14 05:30:25 np0005486808 neutron-haproxy-ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395[396675]: [WARNING]  (396685) : Exiting Master process...
Oct 14 05:30:25 np0005486808 neutron-haproxy-ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395[396675]: [ALERT]    (396685) : Current worker (396691) exited with code 143 (Terminated)
Oct 14 05:30:25 np0005486808 neutron-haproxy-ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395[396675]: [WARNING]  (396685) : All workers exited. Exiting... (0)
Oct 14 05:30:25 np0005486808 nova_compute[259627]: 2025-10-14 09:30:25.821 2 DEBUG oslo_concurrency.processutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:30:25 np0005486808 systemd[1]: libpod-c291dbc64f7c689362a43eb55f867d9a826b81453de3eb822c9e704da2c09b3d.scope: Deactivated successfully.
Oct 14 05:30:25 np0005486808 podman[398859]: 2025-10-14 09:30:25.834220657 +0000 UTC m=+0.055490842 container died c291dbc64f7c689362a43eb55f867d9a826b81453de3eb822c9e704da2c09b3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:30:25 np0005486808 nova_compute[259627]: 2025-10-14 09:30:25.858 2 DEBUG nova.virt.libvirt.vif [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:29:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-766373181',display_name='tempest-TestGettingAddress-server-766373181',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-766373181',id=133,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP3NS95m6mTTkHzJJUJYRW56WG9z1F4s978ToWhMf7sPvH/Oc2BrnynBRS1TQVQWP6VVqaEePs7ictqhZIZyqMTpbO+UGDq3FfVttcWsPDDYdiFqglsH1qUqUZlXlWfhiQ==',key_name='tempest-TestGettingAddress-313667449',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:29:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-kdy1goa2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:29:25Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=9bab7e53-30b3-4cd0-ad07-3cc9b5c05492,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "address": "fa:16:3e:40:44:e9", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe40:44e9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap312d35c6-7a", "ovs_interfaceid": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 05:30:25 np0005486808 nova_compute[259627]: 2025-10-14 09:30:25.860 2 DEBUG nova.network.os_vif_util [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "address": "fa:16:3e:40:44:e9", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe40:44e9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap312d35c6-7a", "ovs_interfaceid": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 05:30:25 np0005486808 nova_compute[259627]: 2025-10-14 09:30:25.861 2 DEBUG nova.network.os_vif_util [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:40:44:e9,bridge_name='br-int',has_traffic_filtering=True,id=312d35c6-7aa5-4056-b4ed-679cf0e1a12a,network=Network(3cbcf7e5-ac17-454b-893d-3fda266aa395),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap312d35c6-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 05:30:25 np0005486808 nova_compute[259627]: 2025-10-14 09:30:25.861 2 DEBUG os_vif [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:40:44:e9,bridge_name='br-int',has_traffic_filtering=True,id=312d35c6-7aa5-4056-b4ed-679cf0e1a12a,network=Network(3cbcf7e5-ac17-454b-893d-3fda266aa395),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap312d35c6-7a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 05:30:25 np0005486808 nova_compute[259627]: 2025-10-14 09:30:25.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:30:25 np0005486808 nova_compute[259627]: 2025-10-14 09:30:25.864 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap312d35c6-7a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:30:25 np0005486808 nova_compute[259627]: 2025-10-14 09:30:25.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:30:25 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c291dbc64f7c689362a43eb55f867d9a826b81453de3eb822c9e704da2c09b3d-userdata-shm.mount: Deactivated successfully.
Oct 14 05:30:25 np0005486808 systemd[1]: var-lib-containers-storage-overlay-ba749212af37e81d7c4992e139c5f78aa6b1ef87bc093a5fe8a54df954b3324e-merged.mount: Deactivated successfully.
Oct 14 05:30:25 np0005486808 nova_compute[259627]: 2025-10-14 09:30:25.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:30:25 np0005486808 podman[398859]: 2025-10-14 09:30:25.880593473 +0000 UTC m=+0.101863628 container cleanup c291dbc64f7c689362a43eb55f867d9a826b81453de3eb822c9e704da2c09b3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 14 05:30:25 np0005486808 nova_compute[259627]: 2025-10-14 09:30:25.881 2 INFO os_vif [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:40:44:e9,bridge_name='br-int',has_traffic_filtering=True,id=312d35c6-7aa5-4056-b4ed-679cf0e1a12a,network=Network(3cbcf7e5-ac17-454b-893d-3fda266aa395),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap312d35c6-7a')
Oct 14 05:30:25 np0005486808 systemd[1]: libpod-conmon-c291dbc64f7c689362a43eb55f867d9a826b81453de3eb822c9e704da2c09b3d.scope: Deactivated successfully.
Oct 14 05:30:25 np0005486808 podman[398860]: 2025-10-14 09:30:25.929898791 +0000 UTC m=+0.132947756 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 14 05:30:25 np0005486808 podman[398916]: 2025-10-14 09:30:25.948601983 +0000 UTC m=+0.046730595 container remove c291dbc64f7c689362a43eb55f867d9a826b81453de3eb822c9e704da2c09b3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 05:30:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:25.956 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[61ca7753-941c-4d37-927d-a3ce332ae3e1]: (4, ('Tue Oct 14 09:30:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395 (c291dbc64f7c689362a43eb55f867d9a826b81453de3eb822c9e704da2c09b3d)\nc291dbc64f7c689362a43eb55f867d9a826b81453de3eb822c9e704da2c09b3d\nTue Oct 14 09:30:25 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395 (c291dbc64f7c689362a43eb55f867d9a826b81453de3eb822c9e704da2c09b3d)\nc291dbc64f7c689362a43eb55f867d9a826b81453de3eb822c9e704da2c09b3d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:30:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:25.961 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2a030cb9-98b6-4622-b006-2dfedd01aed0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:30:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:25.961 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3cbcf7e5-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:30:25 np0005486808 nova_compute[259627]: 2025-10-14 09:30:25.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:25 np0005486808 nova_compute[259627]: 2025-10-14 09:30:25.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:25 np0005486808 kernel: tap3cbcf7e5-a0: left promiscuous mode
Oct 14 05:30:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:25.984 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[40397486-df2e-4ee3-874f-c154f2858f2b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:30:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:26.015 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[71ade00c-1180-4df8-aeaa-21d47061d975]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:30:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:26.017 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[70c67373-26e3-466a-a34c-df055a8ec49a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:30:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:26.033 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[34f925f8-d714-46b5-873c-1ed3398c3631]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 794269, 'reachable_time': 38248, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 398969, 'error': None, 'target': 'ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:30:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:26.035 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3cbcf7e5-ac17-454b-893d-3fda266aa395 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:30:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:26.035 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[670a9662-aa4b-444c-b041-756be4c15c0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:30:26 np0005486808 systemd[1]: run-netns-ovnmeta\x2d3cbcf7e5\x2dac17\x2d454b\x2d893d\x2d3fda266aa395.mount: Deactivated successfully.
Oct 14 05:30:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:26.036 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 312d35c6-7aa5-4056-b4ed-679cf0e1a12a in datapath 3cbcf7e5-ac17-454b-893d-3fda266aa395 unbound from our chassis#033[00m
Oct 14 05:30:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:26.038 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3cbcf7e5-ac17-454b-893d-3fda266aa395, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:30:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:26.039 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b2938bd7-c185-4452-85ab-b022626487bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:30:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:26.040 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 312d35c6-7aa5-4056-b4ed-679cf0e1a12a in datapath 3cbcf7e5-ac17-454b-893d-3fda266aa395 unbound from our chassis#033[00m
Oct 14 05:30:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:26.041 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3cbcf7e5-ac17-454b-893d-3fda266aa395, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:30:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:26.042 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9bb2f3fc-dd5a-4eef-b217-2effbf381304]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:30:26 np0005486808 nova_compute[259627]: 2025-10-14 09:30:26.149 2 DEBUG nova.compute.manager [req-b2de543b-cc7a-46bd-a35e-505dd00f90d2 req-6136fb3b-ea53-4a4c-856a-2a40c44df227 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Received event network-vif-unplugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:30:26 np0005486808 nova_compute[259627]: 2025-10-14 09:30:26.151 2 DEBUG oslo_concurrency.lockutils [req-b2de543b-cc7a-46bd-a35e-505dd00f90d2 req-6136fb3b-ea53-4a4c-856a-2a40c44df227 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:30:26 np0005486808 nova_compute[259627]: 2025-10-14 09:30:26.152 2 DEBUG oslo_concurrency.lockutils [req-b2de543b-cc7a-46bd-a35e-505dd00f90d2 req-6136fb3b-ea53-4a4c-856a-2a40c44df227 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:30:26 np0005486808 nova_compute[259627]: 2025-10-14 09:30:26.153 2 DEBUG oslo_concurrency.lockutils [req-b2de543b-cc7a-46bd-a35e-505dd00f90d2 req-6136fb3b-ea53-4a4c-856a-2a40c44df227 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:30:26 np0005486808 nova_compute[259627]: 2025-10-14 09:30:26.153 2 DEBUG nova.compute.manager [req-b2de543b-cc7a-46bd-a35e-505dd00f90d2 req-6136fb3b-ea53-4a4c-856a-2a40c44df227 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] No waiting events found dispatching network-vif-unplugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:30:26 np0005486808 nova_compute[259627]: 2025-10-14 09:30:26.154 2 DEBUG nova.compute.manager [req-b2de543b-cc7a-46bd-a35e-505dd00f90d2 req-6136fb3b-ea53-4a4c-856a-2a40c44df227 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Received event network-vif-unplugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:30:26 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:30:26 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2686686302' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:30:26 np0005486808 nova_compute[259627]: 2025-10-14 09:30:26.277 2 DEBUG oslo_concurrency.processutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:30:26 np0005486808 nova_compute[259627]: 2025-10-14 09:30:26.285 2 DEBUG nova.compute.provider_tree [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:30:26 np0005486808 nova_compute[259627]: 2025-10-14 09:30:26.303 2 DEBUG nova.scheduler.client.report [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:30:26 np0005486808 nova_compute[259627]: 2025-10-14 09:30:26.316 2 INFO nova.virt.libvirt.driver [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Deleting instance files /var/lib/nova/instances/9bab7e53-30b3-4cd0-ad07-3cc9b5c05492_del#033[00m
Oct 14 05:30:26 np0005486808 nova_compute[259627]: 2025-10-14 09:30:26.318 2 INFO nova.virt.libvirt.driver [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Deletion of /var/lib/nova/instances/9bab7e53-30b3-4cd0-ad07-3cc9b5c05492_del complete#033[00m
Oct 14 05:30:26 np0005486808 nova_compute[259627]: 2025-10-14 09:30:26.345 2 DEBUG oslo_concurrency.lockutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:30:26 np0005486808 nova_compute[259627]: 2025-10-14 09:30:26.345 2 DEBUG nova.compute.manager [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:30:26 np0005486808 nova_compute[259627]: 2025-10-14 09:30:26.379 2 INFO nova.compute.manager [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Took 0.83 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:30:26 np0005486808 nova_compute[259627]: 2025-10-14 09:30:26.380 2 DEBUG oslo.service.loopingcall [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:30:26 np0005486808 nova_compute[259627]: 2025-10-14 09:30:26.380 2 DEBUG nova.compute.manager [-] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:30:26 np0005486808 nova_compute[259627]: 2025-10-14 09:30:26.380 2 DEBUG nova.network.neutron [-] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:30:26 np0005486808 nova_compute[259627]: 2025-10-14 09:30:26.398 2 DEBUG nova.compute.manager [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:30:26 np0005486808 nova_compute[259627]: 2025-10-14 09:30:26.398 2 DEBUG nova.network.neutron [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:30:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2319: 305 pgs: 305 active+clean; 121 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 24 KiB/s wr, 30 op/s
Oct 14 05:30:26 np0005486808 nova_compute[259627]: 2025-10-14 09:30:26.426 2 INFO nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:30:26 np0005486808 nova_compute[259627]: 2025-10-14 09:30:26.448 2 DEBUG nova.compute.manager [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:30:26 np0005486808 nova_compute[259627]: 2025-10-14 09:30:26.527 2 DEBUG nova.compute.manager [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:30:26 np0005486808 nova_compute[259627]: 2025-10-14 09:30:26.529 2 DEBUG nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:30:26 np0005486808 nova_compute[259627]: 2025-10-14 09:30:26.530 2 INFO nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Creating image(s)#033[00m
Oct 14 05:30:26 np0005486808 nova_compute[259627]: 2025-10-14 09:30:26.560 2 DEBUG nova.storage.rbd_utils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:30:26 np0005486808 nova_compute[259627]: 2025-10-14 09:30:26.587 2 DEBUG nova.storage.rbd_utils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:30:26 np0005486808 nova_compute[259627]: 2025-10-14 09:30:26.614 2 DEBUG nova.storage.rbd_utils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:30:26 np0005486808 nova_compute[259627]: 2025-10-14 09:30:26.618 2 DEBUG oslo_concurrency.processutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:30:26 np0005486808 nova_compute[259627]: 2025-10-14 09:30:26.678 2 DEBUG nova.policy [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '20f3546ab30e42b5b641f67780316750', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f754bf649a2404fa8dee732f5aab36e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:30:26 np0005486808 nova_compute[259627]: 2025-10-14 09:30:26.726 2 DEBUG oslo_concurrency.processutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:30:26 np0005486808 nova_compute[259627]: 2025-10-14 09:30:26.727 2 DEBUG oslo_concurrency.lockutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:30:26 np0005486808 nova_compute[259627]: 2025-10-14 09:30:26.728 2 DEBUG oslo_concurrency.lockutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:30:26 np0005486808 nova_compute[259627]: 2025-10-14 09:30:26.728 2 DEBUG oslo_concurrency.lockutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:30:26 np0005486808 nova_compute[259627]: 2025-10-14 09:30:26.762 2 DEBUG nova.storage.rbd_utils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:30:26 np0005486808 nova_compute[259627]: 2025-10-14 09:30:26.767 2 DEBUG oslo_concurrency.processutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:30:27 np0005486808 nova_compute[259627]: 2025-10-14 09:30:27.076 2 DEBUG oslo_concurrency.processutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.309s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:30:27 np0005486808 nova_compute[259627]: 2025-10-14 09:30:27.164 2 DEBUG nova.network.neutron [-] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:30:27 np0005486808 nova_compute[259627]: 2025-10-14 09:30:27.172 2 DEBUG nova.storage.rbd_utils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] resizing rbd image 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:30:27 np0005486808 nova_compute[259627]: 2025-10-14 09:30:27.216 2 INFO nova.compute.manager [-] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Took 0.84 seconds to deallocate network for instance.#033[00m
Oct 14 05:30:27 np0005486808 nova_compute[259627]: 2025-10-14 09:30:27.293 2 DEBUG oslo_concurrency.lockutils [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:30:27 np0005486808 nova_compute[259627]: 2025-10-14 09:30:27.293 2 DEBUG oslo_concurrency.lockutils [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:30:27 np0005486808 nova_compute[259627]: 2025-10-14 09:30:27.300 2 DEBUG nova.objects.instance [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'migration_context' on Instance uuid 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:30:27 np0005486808 nova_compute[259627]: 2025-10-14 09:30:27.322 2 DEBUG nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:30:27 np0005486808 nova_compute[259627]: 2025-10-14 09:30:27.323 2 DEBUG nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Ensure instance console log exists: /var/lib/nova/instances/4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:30:27 np0005486808 nova_compute[259627]: 2025-10-14 09:30:27.323 2 DEBUG oslo_concurrency.lockutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:30:27 np0005486808 nova_compute[259627]: 2025-10-14 09:30:27.324 2 DEBUG oslo_concurrency.lockutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:30:27 np0005486808 nova_compute[259627]: 2025-10-14 09:30:27.324 2 DEBUG oslo_concurrency.lockutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:30:27 np0005486808 nova_compute[259627]: 2025-10-14 09:30:27.385 2 DEBUG oslo_concurrency.processutils [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:30:27 np0005486808 nova_compute[259627]: 2025-10-14 09:30:27.492 2 DEBUG nova.network.neutron [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Successfully created port: 2f6bb222-680e-469f-83d5-517735604bb0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:30:27 np0005486808 nova_compute[259627]: 2025-10-14 09:30:27.640 2 DEBUG nova.network.neutron [req-e1e06327-cdf3-4a61-bf4e-510103b0e7dd req-9caab46b-e75c-43e0-a4d7-818f357a9427 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Updated VIF entry in instance network info cache for port 312d35c6-7aa5-4056-b4ed-679cf0e1a12a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:30:27 np0005486808 nova_compute[259627]: 2025-10-14 09:30:27.641 2 DEBUG nova.network.neutron [req-e1e06327-cdf3-4a61-bf4e-510103b0e7dd req-9caab46b-e75c-43e0-a4d7-818f357a9427 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Updating instance_info_cache with network_info: [{"id": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "address": "fa:16:3e:40:44:e9", "network": {"id": "3cbcf7e5-ac17-454b-893d-3fda266aa395", "bridge": "br-int", "label": "tempest-network-smoke--833396188", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe40:44e9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap312d35c6-7a", "ovs_interfaceid": "312d35c6-7aa5-4056-b4ed-679cf0e1a12a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:30:27 np0005486808 nova_compute[259627]: 2025-10-14 09:30:27.660 2 DEBUG oslo_concurrency.lockutils [req-e1e06327-cdf3-4a61-bf4e-510103b0e7dd req-9caab46b-e75c-43e0-a4d7-818f357a9427 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-9bab7e53-30b3-4cd0-ad07-3cc9b5c05492" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:30:27 np0005486808 nova_compute[259627]: 2025-10-14 09:30:27.817 2 DEBUG nova.compute.manager [req-3b4dd3a2-9665-478b-8e52-4b37e5d5f87d req-f610e572-1305-479d-9353-1b76cb6e7a45 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Received event network-vif-deleted-312d35c6-7aa5-4056-b4ed-679cf0e1a12a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:30:27 np0005486808 nova_compute[259627]: 2025-10-14 09:30:27.818 2 INFO nova.compute.manager [req-3b4dd3a2-9665-478b-8e52-4b37e5d5f87d req-f610e572-1305-479d-9353-1b76cb6e7a45 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Neutron deleted interface 312d35c6-7aa5-4056-b4ed-679cf0e1a12a; detaching it from the instance and deleting it from the info cache#033[00m
Oct 14 05:30:27 np0005486808 nova_compute[259627]: 2025-10-14 09:30:27.818 2 DEBUG nova.network.neutron [req-3b4dd3a2-9665-478b-8e52-4b37e5d5f87d req-f610e572-1305-479d-9353-1b76cb6e7a45 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:30:27 np0005486808 nova_compute[259627]: 2025-10-14 09:30:27.842 2 DEBUG nova.compute.manager [req-3b4dd3a2-9665-478b-8e52-4b37e5d5f87d req-f610e572-1305-479d-9353-1b76cb6e7a45 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Detach interface failed, port_id=312d35c6-7aa5-4056-b4ed-679cf0e1a12a, reason: Instance 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct 14 05:30:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:30:27 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3382039413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:30:27 np0005486808 nova_compute[259627]: 2025-10-14 09:30:27.887 2 DEBUG oslo_concurrency.processutils [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:30:27 np0005486808 nova_compute[259627]: 2025-10-14 09:30:27.894 2 DEBUG nova.compute.provider_tree [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:30:27 np0005486808 nova_compute[259627]: 2025-10-14 09:30:27.909 2 DEBUG nova.scheduler.client.report [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:30:27 np0005486808 nova_compute[259627]: 2025-10-14 09:30:27.932 2 DEBUG oslo_concurrency.lockutils [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:30:27 np0005486808 nova_compute[259627]: 2025-10-14 09:30:27.973 2 INFO nova.scheduler.client.report [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Deleted allocations for instance 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492#033[00m
Oct 14 05:30:28 np0005486808 nova_compute[259627]: 2025-10-14 09:30:28.045 2 DEBUG oslo_concurrency.lockutils [None req-0c5b1be9-f0a6-4863-853e-4504e867cfc0 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.502s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:30:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:30:28 np0005486808 nova_compute[259627]: 2025-10-14 09:30:28.299 2 DEBUG nova.network.neutron [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Successfully updated port: 2f6bb222-680e-469f-83d5-517735604bb0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:30:28 np0005486808 nova_compute[259627]: 2025-10-14 09:30:28.305 2 DEBUG nova.compute.manager [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Received event network-vif-plugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:30:28 np0005486808 nova_compute[259627]: 2025-10-14 09:30:28.306 2 DEBUG oslo_concurrency.lockutils [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:30:28 np0005486808 nova_compute[259627]: 2025-10-14 09:30:28.306 2 DEBUG oslo_concurrency.lockutils [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:30:28 np0005486808 nova_compute[259627]: 2025-10-14 09:30:28.307 2 DEBUG oslo_concurrency.lockutils [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:30:28 np0005486808 nova_compute[259627]: 2025-10-14 09:30:28.307 2 DEBUG nova.compute.manager [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] No waiting events found dispatching network-vif-plugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:30:28 np0005486808 nova_compute[259627]: 2025-10-14 09:30:28.307 2 WARNING nova.compute.manager [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Received unexpected event network-vif-plugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:30:28 np0005486808 nova_compute[259627]: 2025-10-14 09:30:28.307 2 DEBUG nova.compute.manager [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Received event network-vif-plugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:30:28 np0005486808 nova_compute[259627]: 2025-10-14 09:30:28.308 2 DEBUG oslo_concurrency.lockutils [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:30:28 np0005486808 nova_compute[259627]: 2025-10-14 09:30:28.308 2 DEBUG oslo_concurrency.lockutils [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:30:28 np0005486808 nova_compute[259627]: 2025-10-14 09:30:28.308 2 DEBUG oslo_concurrency.lockutils [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:30:28 np0005486808 nova_compute[259627]: 2025-10-14 09:30:28.308 2 DEBUG nova.compute.manager [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] No waiting events found dispatching network-vif-plugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:30:28 np0005486808 nova_compute[259627]: 2025-10-14 09:30:28.309 2 WARNING nova.compute.manager [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Received unexpected event network-vif-plugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:30:28 np0005486808 nova_compute[259627]: 2025-10-14 09:30:28.309 2 DEBUG nova.compute.manager [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Received event network-vif-plugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:30:28 np0005486808 nova_compute[259627]: 2025-10-14 09:30:28.309 2 DEBUG oslo_concurrency.lockutils [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:30:28 np0005486808 nova_compute[259627]: 2025-10-14 09:30:28.309 2 DEBUG oslo_concurrency.lockutils [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:30:28 np0005486808 nova_compute[259627]: 2025-10-14 09:30:28.310 2 DEBUG oslo_concurrency.lockutils [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:30:28 np0005486808 nova_compute[259627]: 2025-10-14 09:30:28.310 2 DEBUG nova.compute.manager [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] No waiting events found dispatching network-vif-plugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:30:28 np0005486808 nova_compute[259627]: 2025-10-14 09:30:28.310 2 WARNING nova.compute.manager [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Received unexpected event network-vif-plugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:30:28 np0005486808 nova_compute[259627]: 2025-10-14 09:30:28.310 2 DEBUG nova.compute.manager [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Received event network-vif-unplugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:30:28 np0005486808 nova_compute[259627]: 2025-10-14 09:30:28.311 2 DEBUG oslo_concurrency.lockutils [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:30:28 np0005486808 nova_compute[259627]: 2025-10-14 09:30:28.311 2 DEBUG oslo_concurrency.lockutils [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:30:28 np0005486808 nova_compute[259627]: 2025-10-14 09:30:28.311 2 DEBUG oslo_concurrency.lockutils [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:30:28 np0005486808 nova_compute[259627]: 2025-10-14 09:30:28.311 2 DEBUG nova.compute.manager [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] No waiting events found dispatching network-vif-unplugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:30:28 np0005486808 nova_compute[259627]: 2025-10-14 09:30:28.312 2 WARNING nova.compute.manager [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Received unexpected event network-vif-unplugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:30:28 np0005486808 nova_compute[259627]: 2025-10-14 09:30:28.312 2 DEBUG nova.compute.manager [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Received event network-vif-plugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:30:28 np0005486808 nova_compute[259627]: 2025-10-14 09:30:28.312 2 DEBUG oslo_concurrency.lockutils [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:30:28 np0005486808 nova_compute[259627]: 2025-10-14 09:30:28.312 2 DEBUG oslo_concurrency.lockutils [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:30:28 np0005486808 nova_compute[259627]: 2025-10-14 09:30:28.313 2 DEBUG oslo_concurrency.lockutils [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "9bab7e53-30b3-4cd0-ad07-3cc9b5c05492-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:30:28 np0005486808 nova_compute[259627]: 2025-10-14 09:30:28.313 2 DEBUG nova.compute.manager [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] No waiting events found dispatching network-vif-plugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:30:28 np0005486808 nova_compute[259627]: 2025-10-14 09:30:28.313 2 WARNING nova.compute.manager [req-3754fae2-a679-4479-a67e-edb19e4ab8f0 req-18def2e5-f20a-40ea-a34e-a79e10e95ce2 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Received unexpected event network-vif-plugged-312d35c6-7aa5-4056-b4ed-679cf0e1a12a for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:30:28 np0005486808 nova_compute[259627]: 2025-10-14 09:30:28.322 2 DEBUG oslo_concurrency.lockutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "refresh_cache-4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:30:28 np0005486808 nova_compute[259627]: 2025-10-14 09:30:28.322 2 DEBUG oslo_concurrency.lockutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquired lock "refresh_cache-4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:30:28 np0005486808 nova_compute[259627]: 2025-10-14 09:30:28.322 2 DEBUG nova.network.neutron [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:30:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2320: 305 pgs: 305 active+clean; 121 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 24 KiB/s wr, 30 op/s
Oct 14 05:30:28 np0005486808 nova_compute[259627]: 2025-10-14 09:30:28.465 2 DEBUG nova.network.neutron [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:30:29 np0005486808 nova_compute[259627]: 2025-10-14 09:30:29.459 2 DEBUG nova.network.neutron [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Updating instance_info_cache with network_info: [{"id": "2f6bb222-680e-469f-83d5-517735604bb0", "address": "fa:16:3e:57:bb:97", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6bb222-68", "ovs_interfaceid": "2f6bb222-680e-469f-83d5-517735604bb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:30:29 np0005486808 nova_compute[259627]: 2025-10-14 09:30:29.482 2 DEBUG oslo_concurrency.lockutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Releasing lock "refresh_cache-4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:30:29 np0005486808 nova_compute[259627]: 2025-10-14 09:30:29.482 2 DEBUG nova.compute.manager [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Instance network_info: |[{"id": "2f6bb222-680e-469f-83d5-517735604bb0", "address": "fa:16:3e:57:bb:97", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6bb222-68", "ovs_interfaceid": "2f6bb222-680e-469f-83d5-517735604bb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:30:29 np0005486808 nova_compute[259627]: 2025-10-14 09:30:29.488 2 DEBUG nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Start _get_guest_xml network_info=[{"id": "2f6bb222-680e-469f-83d5-517735604bb0", "address": "fa:16:3e:57:bb:97", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6bb222-68", "ovs_interfaceid": "2f6bb222-680e-469f-83d5-517735604bb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:30:29 np0005486808 nova_compute[259627]: 2025-10-14 09:30:29.494 2 WARNING nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:30:29 np0005486808 nova_compute[259627]: 2025-10-14 09:30:29.500 2 DEBUG nova.virt.libvirt.host [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:30:29 np0005486808 nova_compute[259627]: 2025-10-14 09:30:29.500 2 DEBUG nova.virt.libvirt.host [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:30:29 np0005486808 nova_compute[259627]: 2025-10-14 09:30:29.503 2 DEBUG nova.virt.libvirt.host [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:30:29 np0005486808 nova_compute[259627]: 2025-10-14 09:30:29.503 2 DEBUG nova.virt.libvirt.host [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:30:29 np0005486808 nova_compute[259627]: 2025-10-14 09:30:29.504 2 DEBUG nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:30:29 np0005486808 nova_compute[259627]: 2025-10-14 09:30:29.504 2 DEBUG nova.virt.hardware [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:30:29 np0005486808 nova_compute[259627]: 2025-10-14 09:30:29.504 2 DEBUG nova.virt.hardware [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:30:29 np0005486808 nova_compute[259627]: 2025-10-14 09:30:29.504 2 DEBUG nova.virt.hardware [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:30:29 np0005486808 nova_compute[259627]: 2025-10-14 09:30:29.505 2 DEBUG nova.virt.hardware [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:30:29 np0005486808 nova_compute[259627]: 2025-10-14 09:30:29.505 2 DEBUG nova.virt.hardware [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:30:29 np0005486808 nova_compute[259627]: 2025-10-14 09:30:29.505 2 DEBUG nova.virt.hardware [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:30:29 np0005486808 nova_compute[259627]: 2025-10-14 09:30:29.505 2 DEBUG nova.virt.hardware [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:30:29 np0005486808 nova_compute[259627]: 2025-10-14 09:30:29.505 2 DEBUG nova.virt.hardware [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:30:29 np0005486808 nova_compute[259627]: 2025-10-14 09:30:29.506 2 DEBUG nova.virt.hardware [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:30:29 np0005486808 nova_compute[259627]: 2025-10-14 09:30:29.506 2 DEBUG nova.virt.hardware [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:30:29 np0005486808 nova_compute[259627]: 2025-10-14 09:30:29.507 2 DEBUG nova.virt.hardware [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:30:29 np0005486808 nova_compute[259627]: 2025-10-14 09:30:29.509 2 DEBUG oslo_concurrency.processutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:30:29 np0005486808 nova_compute[259627]: 2025-10-14 09:30:29.959 2 DEBUG nova.compute.manager [req-094fe7fa-e637-42cc-ab5d-89243a325c52 req-28cb9092-f73b-4f23-85f8-95985ac8336c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Received event network-changed-2f6bb222-680e-469f-83d5-517735604bb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:30:29 np0005486808 nova_compute[259627]: 2025-10-14 09:30:29.959 2 DEBUG nova.compute.manager [req-094fe7fa-e637-42cc-ab5d-89243a325c52 req-28cb9092-f73b-4f23-85f8-95985ac8336c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Refreshing instance network info cache due to event network-changed-2f6bb222-680e-469f-83d5-517735604bb0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:30:29 np0005486808 nova_compute[259627]: 2025-10-14 09:30:29.960 2 DEBUG oslo_concurrency.lockutils [req-094fe7fa-e637-42cc-ab5d-89243a325c52 req-28cb9092-f73b-4f23-85f8-95985ac8336c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:30:29 np0005486808 nova_compute[259627]: 2025-10-14 09:30:29.960 2 DEBUG oslo_concurrency.lockutils [req-094fe7fa-e637-42cc-ab5d-89243a325c52 req-28cb9092-f73b-4f23-85f8-95985ac8336c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:30:29 np0005486808 nova_compute[259627]: 2025-10-14 09:30:29.961 2 DEBUG nova.network.neutron [req-094fe7fa-e637-42cc-ab5d-89243a325c52 req-28cb9092-f73b-4f23-85f8-95985ac8336c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Refreshing network info cache for port 2f6bb222-680e-469f-83d5-517735604bb0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:30:29 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:30:29 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1587058374' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:30:29 np0005486808 nova_compute[259627]: 2025-10-14 09:30:29.987 2 DEBUG oslo_concurrency.processutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:30:30 np0005486808 nova_compute[259627]: 2025-10-14 09:30:30.010 2 DEBUG nova.storage.rbd_utils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:30:30 np0005486808 nova_compute[259627]: 2025-10-14 09:30:30.015 2 DEBUG oslo_concurrency.processutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:30:30 np0005486808 nova_compute[259627]: 2025-10-14 09:30:30.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2321: 305 pgs: 305 active+clean; 124 MiB data, 941 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 746 KiB/s wr, 37 op/s
Oct 14 05:30:30 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:30:30 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1728374326' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:30:30 np0005486808 nova_compute[259627]: 2025-10-14 09:30:30.462 2 DEBUG oslo_concurrency.processutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:30:30 np0005486808 nova_compute[259627]: 2025-10-14 09:30:30.464 2 DEBUG nova.virt.libvirt.vif [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:30:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-541480337',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-541480337',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ac',id=136,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLEydbvfC6BL8JKAwPsMYw33jsGt7x8pmldi0JdcLroJxQM9Cv9y2WyT4/kdGyYCi1f6St8cX/paY9O5VNQjPEflw/+0a0KCs3SETvjwSZyH4RtpOVgJ2yOdUlm6DCl0mg==',key_name='tempest-TestSecurityGroupsBasicOps-55432425',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-jzso6cvr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:30:26Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2f6bb222-680e-469f-83d5-517735604bb0", "address": "fa:16:3e:57:bb:97", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6bb222-68", "ovs_interfaceid": "2f6bb222-680e-469f-83d5-517735604bb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:30:30 np0005486808 nova_compute[259627]: 2025-10-14 09:30:30.464 2 DEBUG nova.network.os_vif_util [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "2f6bb222-680e-469f-83d5-517735604bb0", "address": "fa:16:3e:57:bb:97", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6bb222-68", "ovs_interfaceid": "2f6bb222-680e-469f-83d5-517735604bb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:30:30 np0005486808 nova_compute[259627]: 2025-10-14 09:30:30.465 2 DEBUG nova.network.os_vif_util [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:bb:97,bridge_name='br-int',has_traffic_filtering=True,id=2f6bb222-680e-469f-83d5-517735604bb0,network=Network(6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f6bb222-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:30:30 np0005486808 nova_compute[259627]: 2025-10-14 09:30:30.466 2 DEBUG nova.objects.instance [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'pci_devices' on Instance uuid 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:30:30 np0005486808 nova_compute[259627]: 2025-10-14 09:30:30.515 2 DEBUG nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:30:30 np0005486808 nova_compute[259627]:  <uuid>4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4</uuid>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:  <name>instance-00000088</name>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:30:30 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-541480337</nova:name>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:30:29</nova:creationTime>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:30:30 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:        <nova:user uuid="20f3546ab30e42b5b641f67780316750">tempest-TestSecurityGroupsBasicOps-1327646173-project-member</nova:user>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:        <nova:project uuid="5f754bf649a2404fa8dee732f5aab36e">tempest-TestSecurityGroupsBasicOps-1327646173</nova:project>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:        <nova:port uuid="2f6bb222-680e-469f-83d5-517735604bb0">
Oct 14 05:30:30 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:      <entry name="serial">4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4</entry>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:      <entry name="uuid">4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4</entry>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:30:30 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4_disk">
Oct 14 05:30:30 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:30:30 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:30:30 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4_disk.config">
Oct 14 05:30:30 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:30:30 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:30:30 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:57:bb:97"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:      <target dev="tap2f6bb222-68"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:30:30 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4/console.log" append="off"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:30:30 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:30:30 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:30:30 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:30:30 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:30:30 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:30:30 np0005486808 nova_compute[259627]: 2025-10-14 09:30:30.517 2 DEBUG nova.compute.manager [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Preparing to wait for external event network-vif-plugged-2f6bb222-680e-469f-83d5-517735604bb0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:30:30 np0005486808 nova_compute[259627]: 2025-10-14 09:30:30.517 2 DEBUG oslo_concurrency.lockutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:30:30 np0005486808 nova_compute[259627]: 2025-10-14 09:30:30.517 2 DEBUG oslo_concurrency.lockutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:30:30 np0005486808 nova_compute[259627]: 2025-10-14 09:30:30.517 2 DEBUG oslo_concurrency.lockutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:30:30 np0005486808 nova_compute[259627]: 2025-10-14 09:30:30.518 2 DEBUG nova.virt.libvirt.vif [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:30:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-541480337',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-541480337',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ac',id=136,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLEydbvfC6BL8JKAwPsMYw33jsGt7x8pmldi0JdcLroJxQM9Cv9y2WyT4/kdGyYCi1f6St8cX/paY9O5VNQjPEflw/+0a0KCs3SETvjwSZyH4RtpOVgJ2yOdUlm6DCl0mg==',key_name='tempest-TestSecurityGroupsBasicOps-55432425',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-jzso6cvr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:30:26Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2f6bb222-680e-469f-83d5-517735604bb0", "address": "fa:16:3e:57:bb:97", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6bb222-68", "ovs_interfaceid": "2f6bb222-680e-469f-83d5-517735604bb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:30:30 np0005486808 nova_compute[259627]: 2025-10-14 09:30:30.518 2 DEBUG nova.network.os_vif_util [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "2f6bb222-680e-469f-83d5-517735604bb0", "address": "fa:16:3e:57:bb:97", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6bb222-68", "ovs_interfaceid": "2f6bb222-680e-469f-83d5-517735604bb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:30:30 np0005486808 nova_compute[259627]: 2025-10-14 09:30:30.519 2 DEBUG nova.network.os_vif_util [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:bb:97,bridge_name='br-int',has_traffic_filtering=True,id=2f6bb222-680e-469f-83d5-517735604bb0,network=Network(6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f6bb222-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:30:30 np0005486808 nova_compute[259627]: 2025-10-14 09:30:30.519 2 DEBUG os_vif [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:bb:97,bridge_name='br-int',has_traffic_filtering=True,id=2f6bb222-680e-469f-83d5-517735604bb0,network=Network(6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f6bb222-68') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:30:30 np0005486808 nova_compute[259627]: 2025-10-14 09:30:30.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:30 np0005486808 nova_compute[259627]: 2025-10-14 09:30:30.520 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:30:30 np0005486808 nova_compute[259627]: 2025-10-14 09:30:30.520 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:30:30 np0005486808 nova_compute[259627]: 2025-10-14 09:30:30.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:30 np0005486808 nova_compute[259627]: 2025-10-14 09:30:30.524 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2f6bb222-68, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:30:30 np0005486808 nova_compute[259627]: 2025-10-14 09:30:30.524 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2f6bb222-68, col_values=(('external_ids', {'iface-id': '2f6bb222-680e-469f-83d5-517735604bb0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:57:bb:97', 'vm-uuid': '4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:30:30 np0005486808 nova_compute[259627]: 2025-10-14 09:30:30.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:30 np0005486808 nova_compute[259627]: 2025-10-14 09:30:30.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:30:30 np0005486808 NetworkManager[44885]: <info>  [1760434230.5287] manager: (tap2f6bb222-68): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/589)
Oct 14 05:30:30 np0005486808 nova_compute[259627]: 2025-10-14 09:30:30.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:30 np0005486808 nova_compute[259627]: 2025-10-14 09:30:30.532 2 INFO os_vif [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:bb:97,bridge_name='br-int',has_traffic_filtering=True,id=2f6bb222-680e-469f-83d5-517735604bb0,network=Network(6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f6bb222-68')#033[00m
Oct 14 05:30:30 np0005486808 nova_compute[259627]: 2025-10-14 09:30:30.600 2 DEBUG nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:30:30 np0005486808 nova_compute[259627]: 2025-10-14 09:30:30.600 2 DEBUG nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:30:30 np0005486808 nova_compute[259627]: 2025-10-14 09:30:30.601 2 DEBUG nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No VIF found with MAC fa:16:3e:57:bb:97, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:30:30 np0005486808 nova_compute[259627]: 2025-10-14 09:30:30.602 2 INFO nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Using config drive#033[00m
Oct 14 05:30:30 np0005486808 nova_compute[259627]: 2025-10-14 09:30:30.634 2 DEBUG nova.storage.rbd_utils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:30:30 np0005486808 nova_compute[259627]: 2025-10-14 09:30:30.985 2 INFO nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Creating config drive at /var/lib/nova/instances/4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4/disk.config#033[00m
Oct 14 05:30:30 np0005486808 nova_compute[259627]: 2025-10-14 09:30:30.990 2 DEBUG oslo_concurrency.processutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu3ihbsce execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:30:31 np0005486808 nova_compute[259627]: 2025-10-14 09:30:31.160 2 DEBUG oslo_concurrency.processutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu3ihbsce" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:30:31 np0005486808 nova_compute[259627]: 2025-10-14 09:30:31.182 2 DEBUG nova.storage.rbd_utils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:30:31 np0005486808 nova_compute[259627]: 2025-10-14 09:30:31.185 2 DEBUG oslo_concurrency.processutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4/disk.config 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:30:31 np0005486808 nova_compute[259627]: 2025-10-14 09:30:31.372 2 DEBUG oslo_concurrency.processutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4/disk.config 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:30:31 np0005486808 nova_compute[259627]: 2025-10-14 09:30:31.373 2 INFO nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Deleting local config drive /var/lib/nova/instances/4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4/disk.config because it was imported into RBD.#033[00m
Oct 14 05:30:31 np0005486808 kernel: tap2f6bb222-68: entered promiscuous mode
Oct 14 05:30:31 np0005486808 NetworkManager[44885]: <info>  [1760434231.4395] manager: (tap2f6bb222-68): new Tun device (/org/freedesktop/NetworkManager/Devices/590)
Oct 14 05:30:31 np0005486808 ovn_controller[152662]: 2025-10-14T09:30:31Z|01455|binding|INFO|Claiming lport 2f6bb222-680e-469f-83d5-517735604bb0 for this chassis.
Oct 14 05:30:31 np0005486808 ovn_controller[152662]: 2025-10-14T09:30:31Z|01456|binding|INFO|2f6bb222-680e-469f-83d5-517735604bb0: Claiming fa:16:3e:57:bb:97 10.100.0.9
Oct 14 05:30:31 np0005486808 nova_compute[259627]: 2025-10-14 09:30:31.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:31 np0005486808 systemd-udevd[399293]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.468 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:bb:97 10.100.0.9'], port_security=['fa:16:3e:57:bb:97 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f754bf649a2404fa8dee732f5aab36e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2e0f5391-ee56-46d7-8aa9-9ba00efbe0e1 93b5966a-7949-42d1-a83d-7ff7c7667c63', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a89aaec7-5335-48cb-8b51-b7dcd7e1d5f5, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=2f6bb222-680e-469f-83d5-517735604bb0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.469 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 2f6bb222-680e-469f-83d5-517735604bb0 in datapath 6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a bound to our chassis#033[00m
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.470 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a#033[00m
Oct 14 05:30:31 np0005486808 ovn_controller[152662]: 2025-10-14T09:30:31Z|01457|binding|INFO|Setting lport 2f6bb222-680e-469f-83d5-517735604bb0 ovn-installed in OVS
Oct 14 05:30:31 np0005486808 ovn_controller[152662]: 2025-10-14T09:30:31Z|01458|binding|INFO|Setting lport 2f6bb222-680e-469f-83d5-517735604bb0 up in Southbound
Oct 14 05:30:31 np0005486808 nova_compute[259627]: 2025-10-14 09:30:31.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:31 np0005486808 nova_compute[259627]: 2025-10-14 09:30:31.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:31 np0005486808 NetworkManager[44885]: <info>  [1760434231.4863] device (tap2f6bb222-68): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:30:31 np0005486808 NetworkManager[44885]: <info>  [1760434231.4882] device (tap2f6bb222-68): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.488 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[be55eec0-e8df-4d9a-8fa4-178906779226]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.490 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6e05ecd2-81 in ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.492 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6e05ecd2-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.492 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[98d9ccf7-1f1c-49c0-a636-2e9b1bb42b72]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.493 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0ca031c2-ef59-4e87-800e-7476a4ba6f88]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:30:31 np0005486808 systemd-machined[214636]: New machine qemu-169-instance-00000088.
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.510 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[65c18e46-78f4-4fbd-b1c6-d88437ac355d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:30:31 np0005486808 systemd[1]: Started Virtual Machine qemu-169-instance-00000088.
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.543 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[807a4759-cbcb-4d4e-bf0c-ce0f188b6388]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.574 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[27c07f54-69cd-4bb9-ab61-0ffab471f23a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:30:31 np0005486808 NetworkManager[44885]: <info>  [1760434231.5809] manager: (tap6e05ecd2-80): new Veth device (/org/freedesktop/NetworkManager/Devices/591)
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.581 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ccd8a2ea-8d30-4c63-aba5-e90e1c2c65d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.619 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[7a83ba1a-8be2-4cb0-9b27-3fb14388621e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.622 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5ebd2fc7-7ad4-47f4-862b-444b59804e26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:30:31 np0005486808 NetworkManager[44885]: <info>  [1760434231.6502] device (tap6e05ecd2-80): carrier: link connected
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.661 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[07a196f6-9afc-484c-b273-1917a81ce3f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.680 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[76c8fdd6-d7cc-469b-84ce-9398a30d6853]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e05ecd2-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cd:c2:3a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 413], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 801041, 'reachable_time': 33084, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 399329, 'error': None, 'target': 'ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.700 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f98c2d1b-0ab0-422f-aac8-0d406e8530e8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecd:c23a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 801041, 'tstamp': 801041}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 399330, 'error': None, 'target': 'ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.717 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[906594ef-9c45-4c5f-a223-3a98294a65cb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e05ecd2-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cd:c2:3a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 413], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 801041, 'reachable_time': 33084, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 399331, 'error': None, 'target': 'ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:30:31 np0005486808 nova_compute[259627]: 2025-10-14 09:30:31.723 2 DEBUG nova.network.neutron [req-094fe7fa-e637-42cc-ab5d-89243a325c52 req-28cb9092-f73b-4f23-85f8-95985ac8336c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Updated VIF entry in instance network info cache for port 2f6bb222-680e-469f-83d5-517735604bb0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:30:31 np0005486808 nova_compute[259627]: 2025-10-14 09:30:31.724 2 DEBUG nova.network.neutron [req-094fe7fa-e637-42cc-ab5d-89243a325c52 req-28cb9092-f73b-4f23-85f8-95985ac8336c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Updating instance_info_cache with network_info: [{"id": "2f6bb222-680e-469f-83d5-517735604bb0", "address": "fa:16:3e:57:bb:97", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6bb222-68", "ovs_interfaceid": "2f6bb222-680e-469f-83d5-517735604bb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:30:31 np0005486808 nova_compute[259627]: 2025-10-14 09:30:31.737 2 DEBUG oslo_concurrency.lockutils [req-094fe7fa-e637-42cc-ab5d-89243a325c52 req-28cb9092-f73b-4f23-85f8-95985ac8336c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.752 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f0e9ae58-26dd-4389-94d2-82a7ce911c23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.810 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[13e6a980-a85b-4ab9-80c5-9ce277eb9d6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.811 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e05ecd2-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.812 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.812 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e05ecd2-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:30:31 np0005486808 nova_compute[259627]: 2025-10-14 09:30:31.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:31 np0005486808 NetworkManager[44885]: <info>  [1760434231.8142] manager: (tap6e05ecd2-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/592)
Oct 14 05:30:31 np0005486808 kernel: tap6e05ecd2-80: entered promiscuous mode
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.817 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6e05ecd2-80, col_values=(('external_ids', {'iface-id': '4697e43c-b02d-4f27-aea8-a54cad6fa2da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:30:31 np0005486808 nova_compute[259627]: 2025-10-14 09:30:31.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:31 np0005486808 ovn_controller[152662]: 2025-10-14T09:30:31Z|01459|binding|INFO|Releasing lport 4697e43c-b02d-4f27-aea8-a54cad6fa2da from this chassis (sb_readonly=0)
Oct 14 05:30:31 np0005486808 nova_compute[259627]: 2025-10-14 09:30:31.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.833 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.834 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0ca32ad8-5da9-431d-bdf2-2bda31e966d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.836 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a.pid.haproxy
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.837 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a', 'env', 'PROCESS_TAG=haproxy-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:30:31 np0005486808 nova_compute[259627]: 2025-10-14 09:30:31.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:31 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:31.857 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:30:31 np0005486808 ovn_controller[152662]: 2025-10-14T09:30:31Z|01460|binding|INFO|Releasing lport 4697e43c-b02d-4f27-aea8-a54cad6fa2da from this chassis (sb_readonly=0)
Oct 14 05:30:31 np0005486808 nova_compute[259627]: 2025-10-14 09:30:31.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:32 np0005486808 ovn_controller[152662]: 2025-10-14T09:30:32Z|01461|binding|INFO|Releasing lport 4697e43c-b02d-4f27-aea8-a54cad6fa2da from this chassis (sb_readonly=0)
Oct 14 05:30:32 np0005486808 nova_compute[259627]: 2025-10-14 09:30:32.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:32 np0005486808 nova_compute[259627]: 2025-10-14 09:30:32.081 2 DEBUG nova.compute.manager [req-1350b62b-3d8a-4171-9ab4-4a24d31c1e80 req-6f4c1a9b-24e3-4a36-a121-4da82f47c480 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Received event network-vif-plugged-2f6bb222-680e-469f-83d5-517735604bb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:30:32 np0005486808 nova_compute[259627]: 2025-10-14 09:30:32.081 2 DEBUG oslo_concurrency.lockutils [req-1350b62b-3d8a-4171-9ab4-4a24d31c1e80 req-6f4c1a9b-24e3-4a36-a121-4da82f47c480 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:30:32 np0005486808 nova_compute[259627]: 2025-10-14 09:30:32.082 2 DEBUG oslo_concurrency.lockutils [req-1350b62b-3d8a-4171-9ab4-4a24d31c1e80 req-6f4c1a9b-24e3-4a36-a121-4da82f47c480 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:30:32 np0005486808 nova_compute[259627]: 2025-10-14 09:30:32.082 2 DEBUG oslo_concurrency.lockutils [req-1350b62b-3d8a-4171-9ab4-4a24d31c1e80 req-6f4c1a9b-24e3-4a36-a121-4da82f47c480 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:30:32 np0005486808 nova_compute[259627]: 2025-10-14 09:30:32.083 2 DEBUG nova.compute.manager [req-1350b62b-3d8a-4171-9ab4-4a24d31c1e80 req-6f4c1a9b-24e3-4a36-a121-4da82f47c480 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Processing event network-vif-plugged-2f6bb222-680e-469f-83d5-517735604bb0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:30:32 np0005486808 nova_compute[259627]: 2025-10-14 09:30:32.083 2 DEBUG nova.compute.manager [req-1350b62b-3d8a-4171-9ab4-4a24d31c1e80 req-6f4c1a9b-24e3-4a36-a121-4da82f47c480 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Received event network-vif-plugged-2f6bb222-680e-469f-83d5-517735604bb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:30:32 np0005486808 nova_compute[259627]: 2025-10-14 09:30:32.083 2 DEBUG oslo_concurrency.lockutils [req-1350b62b-3d8a-4171-9ab4-4a24d31c1e80 req-6f4c1a9b-24e3-4a36-a121-4da82f47c480 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:30:32 np0005486808 nova_compute[259627]: 2025-10-14 09:30:32.084 2 DEBUG oslo_concurrency.lockutils [req-1350b62b-3d8a-4171-9ab4-4a24d31c1e80 req-6f4c1a9b-24e3-4a36-a121-4da82f47c480 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:30:32 np0005486808 nova_compute[259627]: 2025-10-14 09:30:32.084 2 DEBUG oslo_concurrency.lockutils [req-1350b62b-3d8a-4171-9ab4-4a24d31c1e80 req-6f4c1a9b-24e3-4a36-a121-4da82f47c480 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:30:32 np0005486808 nova_compute[259627]: 2025-10-14 09:30:32.084 2 DEBUG nova.compute.manager [req-1350b62b-3d8a-4171-9ab4-4a24d31c1e80 req-6f4c1a9b-24e3-4a36-a121-4da82f47c480 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] No waiting events found dispatching network-vif-plugged-2f6bb222-680e-469f-83d5-517735604bb0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:30:32 np0005486808 nova_compute[259627]: 2025-10-14 09:30:32.085 2 WARNING nova.compute.manager [req-1350b62b-3d8a-4171-9ab4-4a24d31c1e80 req-6f4c1a9b-24e3-4a36-a121-4da82f47c480 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Received unexpected event network-vif-plugged-2f6bb222-680e-469f-83d5-517735604bb0 for instance with vm_state building and task_state spawning.#033[00m
Oct 14 05:30:32 np0005486808 podman[399403]: 2025-10-14 09:30:32.340996935 +0000 UTC m=+0.089358869 container create ddef496733dc05bead0fc14ab2c0f7b4cf9e7cb1d8862dec34a66b8d04ab77a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 14 05:30:32 np0005486808 podman[399403]: 2025-10-14 09:30:32.295573853 +0000 UTC m=+0.043935837 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:30:32 np0005486808 systemd[1]: Started libpod-conmon-ddef496733dc05bead0fc14ab2c0f7b4cf9e7cb1d8862dec34a66b8d04ab77a4.scope.
Oct 14 05:30:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2322: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 1.8 MiB/s wr, 85 op/s
Oct 14 05:30:32 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:30:32 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2a4ae5bac5fb7737a47f6362c904610fbce4171335446f4e8312252dc95d3e9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:30:32 np0005486808 podman[399403]: 2025-10-14 09:30:32.461965424 +0000 UTC m=+0.210327358 container init ddef496733dc05bead0fc14ab2c0f7b4cf9e7cb1d8862dec34a66b8d04ab77a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:30:32 np0005486808 podman[399403]: 2025-10-14 09:30:32.472608637 +0000 UTC m=+0.220970541 container start ddef496733dc05bead0fc14ab2c0f7b4cf9e7cb1d8862dec34a66b8d04ab77a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 05:30:32 np0005486808 neutron-haproxy-ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a[399418]: [NOTICE]   (399422) : New worker (399424) forked
Oct 14 05:30:32 np0005486808 neutron-haproxy-ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a[399418]: [NOTICE]   (399422) : Loading success.
Oct 14 05:30:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:32.535 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 14 05:30:32 np0005486808 nova_compute[259627]: 2025-10-14 09:30:32.572 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434232.5717463, 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:30:32 np0005486808 nova_compute[259627]: 2025-10-14 09:30:32.573 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] VM Started (Lifecycle Event)#033[00m
Oct 14 05:30:32 np0005486808 nova_compute[259627]: 2025-10-14 09:30:32.576 2 DEBUG nova.compute.manager [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:30:32 np0005486808 nova_compute[259627]: 2025-10-14 09:30:32.584 2 DEBUG nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:30:32 np0005486808 nova_compute[259627]: 2025-10-14 09:30:32.588 2 INFO nova.virt.libvirt.driver [-] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Instance spawned successfully.#033[00m
Oct 14 05:30:32 np0005486808 nova_compute[259627]: 2025-10-14 09:30:32.589 2 DEBUG nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:30:32 np0005486808 nova_compute[259627]: 2025-10-14 09:30:32.655 2 DEBUG nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:30:32 np0005486808 nova_compute[259627]: 2025-10-14 09:30:32.656 2 DEBUG nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:30:32 np0005486808 nova_compute[259627]: 2025-10-14 09:30:32.657 2 DEBUG nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:30:32 np0005486808 nova_compute[259627]: 2025-10-14 09:30:32.658 2 DEBUG nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:30:32 np0005486808 nova_compute[259627]: 2025-10-14 09:30:32.659 2 DEBUG nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:30:32 np0005486808 nova_compute[259627]: 2025-10-14 09:30:32.660 2 DEBUG nova.virt.libvirt.driver [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:30:32 np0005486808 nova_compute[259627]: 2025-10-14 09:30:32.666 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:30:32 np0005486808 nova_compute[259627]: 2025-10-14 09:30:32.670 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:30:32 np0005486808 nova_compute[259627]: 2025-10-14 09:30:32.712 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:30:32 np0005486808 nova_compute[259627]: 2025-10-14 09:30:32.713 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434232.5720613, 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:30:32 np0005486808 nova_compute[259627]: 2025-10-14 09:30:32.714 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:30:32 np0005486808 nova_compute[259627]: 2025-10-14 09:30:32.743 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:30:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:30:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:30:32 np0005486808 nova_compute[259627]: 2025-10-14 09:30:32.749 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434232.5836525, 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:30:32 np0005486808 nova_compute[259627]: 2025-10-14 09:30:32.750 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:30:32 np0005486808 nova_compute[259627]: 2025-10-14 09:30:32.756 2 INFO nova.compute.manager [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Took 6.23 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:30:32 np0005486808 nova_compute[259627]: 2025-10-14 09:30:32.756 2 DEBUG nova.compute.manager [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:30:32 np0005486808 nova_compute[259627]: 2025-10-14 09:30:32.771 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:30:32 np0005486808 nova_compute[259627]: 2025-10-14 09:30:32.775 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:30:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:30:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:30:32 np0005486808 nova_compute[259627]: 2025-10-14 09:30:32.804 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:30:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:30:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:30:32 np0005486808 nova_compute[259627]: 2025-10-14 09:30:32.827 2 INFO nova.compute.manager [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Took 7.26 seconds to build instance.#033[00m
Oct 14 05:30:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:30:32
Oct 14 05:30:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:30:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:30:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log', 'volumes', 'default.rgw.control', 'images', '.rgw.root', 'vms']
Oct 14 05:30:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:30:32 np0005486808 nova_compute[259627]: 2025-10-14 09:30:32.866 2 DEBUG oslo_concurrency.lockutils [None req-7a01ef06-07b9-4695-bbd1-45bbc1a2e014 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.415s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:30:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:30:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:30:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:30:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:30:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:30:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:30:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:30:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:30:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:30:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:30:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:30:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2323: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 60 KiB/s rd, 1.8 MiB/s wr, 84 op/s
Oct 14 05:30:35 np0005486808 nova_compute[259627]: 2025-10-14 09:30:35.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:35 np0005486808 nova_compute[259627]: 2025-10-14 09:30:35.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2324: 305 pgs: 305 active+clean; 88 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 158 op/s
Oct 14 05:30:36 np0005486808 nova_compute[259627]: 2025-10-14 09:30:36.621 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434221.6195803, edf4b59a-fe1b-48ae-92a3-7bec88fc7491 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:30:36 np0005486808 nova_compute[259627]: 2025-10-14 09:30:36.621 2 INFO nova.compute.manager [-] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:30:36 np0005486808 nova_compute[259627]: 2025-10-14 09:30:36.647 2 DEBUG nova.compute.manager [None req-04138097-0440-40de-bdf7-e1ca56659660 - - - - - -] [instance: edf4b59a-fe1b-48ae-92a3-7bec88fc7491] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:30:37 np0005486808 ovn_controller[152662]: 2025-10-14T09:30:37Z|01462|binding|INFO|Releasing lport 4697e43c-b02d-4f27-aea8-a54cad6fa2da from this chassis (sb_readonly=0)
Oct 14 05:30:37 np0005486808 nova_compute[259627]: 2025-10-14 09:30:37.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:37 np0005486808 NetworkManager[44885]: <info>  [1760434237.4034] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/593)
Oct 14 05:30:37 np0005486808 NetworkManager[44885]: <info>  [1760434237.4062] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/594)
Oct 14 05:30:37 np0005486808 ovn_controller[152662]: 2025-10-14T09:30:37Z|01463|binding|INFO|Releasing lport 4697e43c-b02d-4f27-aea8-a54cad6fa2da from this chassis (sb_readonly=0)
Oct 14 05:30:37 np0005486808 nova_compute[259627]: 2025-10-14 09:30:37.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:37 np0005486808 nova_compute[259627]: 2025-10-14 09:30:37.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:37 np0005486808 nova_compute[259627]: 2025-10-14 09:30:37.992 2 DEBUG nova.compute.manager [req-c5197957-c1a5-4221-ac9d-93b4e4843a1d req-110312ce-e355-451f-b022-163acd31e2d4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Received event network-changed-2f6bb222-680e-469f-83d5-517735604bb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:30:37 np0005486808 nova_compute[259627]: 2025-10-14 09:30:37.993 2 DEBUG nova.compute.manager [req-c5197957-c1a5-4221-ac9d-93b4e4843a1d req-110312ce-e355-451f-b022-163acd31e2d4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Refreshing instance network info cache due to event network-changed-2f6bb222-680e-469f-83d5-517735604bb0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:30:37 np0005486808 nova_compute[259627]: 2025-10-14 09:30:37.994 2 DEBUG oslo_concurrency.lockutils [req-c5197957-c1a5-4221-ac9d-93b4e4843a1d req-110312ce-e355-451f-b022-163acd31e2d4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:30:37 np0005486808 nova_compute[259627]: 2025-10-14 09:30:37.994 2 DEBUG oslo_concurrency.lockutils [req-c5197957-c1a5-4221-ac9d-93b4e4843a1d req-110312ce-e355-451f-b022-163acd31e2d4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:30:37 np0005486808 nova_compute[259627]: 2025-10-14 09:30:37.994 2 DEBUG nova.network.neutron [req-c5197957-c1a5-4221-ac9d-93b4e4843a1d req-110312ce-e355-451f-b022-163acd31e2d4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Refreshing network info cache for port 2f6bb222-680e-469f-83d5-517735604bb0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:30:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:30:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2325: 305 pgs: 305 active+clean; 88 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 14 05:30:38 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:30:38.537 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:30:40 np0005486808 nova_compute[259627]: 2025-10-14 09:30:40.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2326: 305 pgs: 305 active+clean; 88 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 14 05:30:40 np0005486808 nova_compute[259627]: 2025-10-14 09:30:40.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:40 np0005486808 nova_compute[259627]: 2025-10-14 09:30:40.806 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434225.805421, 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:30:40 np0005486808 nova_compute[259627]: 2025-10-14 09:30:40.808 2 INFO nova.compute.manager [-] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:30:40 np0005486808 nova_compute[259627]: 2025-10-14 09:30:40.848 2 DEBUG nova.compute.manager [None req-85a827a1-fbd8-46f8-8624-21847f342c34 - - - - - -] [instance: 9bab7e53-30b3-4cd0-ad07-3cc9b5c05492] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:30:40 np0005486808 nova_compute[259627]: 2025-10-14 09:30:40.920 2 DEBUG nova.network.neutron [req-c5197957-c1a5-4221-ac9d-93b4e4843a1d req-110312ce-e355-451f-b022-163acd31e2d4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Updated VIF entry in instance network info cache for port 2f6bb222-680e-469f-83d5-517735604bb0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:30:40 np0005486808 nova_compute[259627]: 2025-10-14 09:30:40.921 2 DEBUG nova.network.neutron [req-c5197957-c1a5-4221-ac9d-93b4e4843a1d req-110312ce-e355-451f-b022-163acd31e2d4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Updating instance_info_cache with network_info: [{"id": "2f6bb222-680e-469f-83d5-517735604bb0", "address": "fa:16:3e:57:bb:97", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6bb222-68", "ovs_interfaceid": "2f6bb222-680e-469f-83d5-517735604bb0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:30:40 np0005486808 nova_compute[259627]: 2025-10-14 09:30:40.949 2 DEBUG oslo_concurrency.lockutils [req-c5197957-c1a5-4221-ac9d-93b4e4843a1d req-110312ce-e355-451f-b022-163acd31e2d4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:30:41 np0005486808 nova_compute[259627]: 2025-10-14 09:30:41.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:30:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2327: 305 pgs: 305 active+clean; 88 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 122 op/s
Oct 14 05:30:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:30:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:30:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:30:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:30:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:30:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Oct 14 05:30:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:30:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:30:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:30:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:30:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:30:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 05:30:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:30:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:30:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:30:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:30:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:30:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:30:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:30:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:30:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:30:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:30:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:30:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:30:43 np0005486808 podman[399434]: 2025-10-14 09:30:43.699644564 +0000 UTC m=+0.094123967 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=multipathd)
Oct 14 05:30:43 np0005486808 podman[399435]: 2025-10-14 09:30:43.725495033 +0000 UTC m=+0.118515470 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:30:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2328: 305 pgs: 305 active+clean; 88 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 05:30:44 np0005486808 ovn_controller[152662]: 2025-10-14T09:30:44Z|00170|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:57:bb:97 10.100.0.9
Oct 14 05:30:44 np0005486808 ovn_controller[152662]: 2025-10-14T09:30:44Z|00171|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:57:bb:97 10.100.0.9
Oct 14 05:30:44 np0005486808 nova_compute[259627]: 2025-10-14 09:30:44.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:30:44 np0005486808 nova_compute[259627]: 2025-10-14 09:30:44.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:30:44 np0005486808 nova_compute[259627]: 2025-10-14 09:30:44.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:30:45 np0005486808 nova_compute[259627]: 2025-10-14 09:30:45.008 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:30:45 np0005486808 nova_compute[259627]: 2025-10-14 09:30:45.008 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:30:45 np0005486808 nova_compute[259627]: 2025-10-14 09:30:45.008 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:30:45 np0005486808 nova_compute[259627]: 2025-10-14 09:30:45.009 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:30:45 np0005486808 nova_compute[259627]: 2025-10-14 09:30:45.009 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:30:45 np0005486808 nova_compute[259627]: 2025-10-14 09:30:45.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:30:45 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3710949761' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:30:45 np0005486808 nova_compute[259627]: 2025-10-14 09:30:45.521 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:30:45 np0005486808 nova_compute[259627]: 2025-10-14 09:30:45.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:45 np0005486808 nova_compute[259627]: 2025-10-14 09:30:45.654 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000088 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:30:45 np0005486808 nova_compute[259627]: 2025-10-14 09:30:45.655 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000088 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:30:45 np0005486808 nova_compute[259627]: 2025-10-14 09:30:45.842 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:30:45 np0005486808 nova_compute[259627]: 2025-10-14 09:30:45.843 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3492MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:30:45 np0005486808 nova_compute[259627]: 2025-10-14 09:30:45.843 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:30:45 np0005486808 nova_compute[259627]: 2025-10-14 09:30:45.844 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:30:45 np0005486808 nova_compute[259627]: 2025-10-14 09:30:45.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:46 np0005486808 nova_compute[259627]: 2025-10-14 09:30:46.019 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:30:46 np0005486808 nova_compute[259627]: 2025-10-14 09:30:46.020 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:30:46 np0005486808 nova_compute[259627]: 2025-10-14 09:30:46.021 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:30:46 np0005486808 nova_compute[259627]: 2025-10-14 09:30:46.075 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:30:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2329: 305 pgs: 305 active+clean; 121 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Oct 14 05:30:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:30:46 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/693183136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:30:46 np0005486808 nova_compute[259627]: 2025-10-14 09:30:46.560 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:30:46 np0005486808 nova_compute[259627]: 2025-10-14 09:30:46.569 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:30:46 np0005486808 nova_compute[259627]: 2025-10-14 09:30:46.593 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:30:46 np0005486808 nova_compute[259627]: 2025-10-14 09:30:46.625 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:30:46 np0005486808 nova_compute[259627]: 2025-10-14 09:30:46.626 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:30:47 np0005486808 nova_compute[259627]: 2025-10-14 09:30:47.628 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:30:47 np0005486808 nova_compute[259627]: 2025-10-14 09:30:47.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:30:47 np0005486808 nova_compute[259627]: 2025-10-14 09:30:47.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:30:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:30:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2330: 305 pgs: 305 active+clean; 121 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 05:30:48 np0005486808 nova_compute[259627]: 2025-10-14 09:30:48.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:30:48 np0005486808 nova_compute[259627]: 2025-10-14 09:30:48.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:30:48 np0005486808 nova_compute[259627]: 2025-10-14 09:30:48.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 05:30:49 np0005486808 nova_compute[259627]: 2025-10-14 09:30:49.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:49 np0005486808 nova_compute[259627]: 2025-10-14 09:30:49.672 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:30:49 np0005486808 nova_compute[259627]: 2025-10-14 09:30:49.673 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:30:49 np0005486808 nova_compute[259627]: 2025-10-14 09:30:49.673 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 14 05:30:49 np0005486808 nova_compute[259627]: 2025-10-14 09:30:49.674 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:30:50 np0005486808 nova_compute[259627]: 2025-10-14 09:30:50.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2331: 305 pgs: 305 active+clean; 121 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 05:30:50 np0005486808 nova_compute[259627]: 2025-10-14 09:30:50.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:52 np0005486808 nova_compute[259627]: 2025-10-14 09:30:52.163 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Updating instance_info_cache with network_info: [{"id": "2f6bb222-680e-469f-83d5-517735604bb0", "address": "fa:16:3e:57:bb:97", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6bb222-68", "ovs_interfaceid": "2f6bb222-680e-469f-83d5-517735604bb0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:30:52 np0005486808 nova_compute[259627]: 2025-10-14 09:30:52.180 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:30:52 np0005486808 nova_compute[259627]: 2025-10-14 09:30:52.181 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 14 05:30:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2332: 305 pgs: 305 active+clean; 121 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 05:30:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:30:53 np0005486808 nova_compute[259627]: 2025-10-14 09:30:53.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:53 np0005486808 nova_compute[259627]: 2025-10-14 09:30:53.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:30:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2333: 305 pgs: 305 active+clean; 121 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 05:30:54 np0005486808 nova_compute[259627]: 2025-10-14 09:30:54.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:30:55 np0005486808 nova_compute[259627]: 2025-10-14 09:30:55.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:55 np0005486808 nova_compute[259627]: 2025-10-14 09:30:55.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:30:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2334: 305 pgs: 305 active+clean; 121 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 05:30:56 np0005486808 podman[399520]: 2025-10-14 09:30:56.699965344 +0000 UTC m=+0.093399539 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 05:30:56 np0005486808 podman[399519]: 2025-10-14 09:30:56.755343282 +0000 UTC m=+0.153863443 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:30:57 np0005486808 nova_compute[259627]: 2025-10-14 09:30:57.330 2 DEBUG oslo_concurrency.lockutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "b30a994a-5fb7-4344-9944-98d3d75d3b04" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:30:57 np0005486808 nova_compute[259627]: 2025-10-14 09:30:57.330 2 DEBUG oslo_concurrency.lockutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:30:57 np0005486808 nova_compute[259627]: 2025-10-14 09:30:57.349 2 DEBUG nova.compute.manager [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:30:57 np0005486808 nova_compute[259627]: 2025-10-14 09:30:57.431 2 DEBUG oslo_concurrency.lockutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:30:57 np0005486808 nova_compute[259627]: 2025-10-14 09:30:57.431 2 DEBUG oslo_concurrency.lockutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:30:57 np0005486808 nova_compute[259627]: 2025-10-14 09:30:57.442 2 DEBUG nova.virt.hardware [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:30:57 np0005486808 nova_compute[259627]: 2025-10-14 09:30:57.442 2 INFO nova.compute.claims [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:30:57 np0005486808 nova_compute[259627]: 2025-10-14 09:30:57.509 2 DEBUG oslo_concurrency.lockutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "4ff65022-c3e1-4ee6-b866-7892555ef52f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:30:57 np0005486808 nova_compute[259627]: 2025-10-14 09:30:57.510 2 DEBUG oslo_concurrency.lockutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4ff65022-c3e1-4ee6-b866-7892555ef52f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:30:57 np0005486808 nova_compute[259627]: 2025-10-14 09:30:57.536 2 DEBUG nova.compute.manager [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:30:57 np0005486808 nova_compute[259627]: 2025-10-14 09:30:57.613 2 DEBUG oslo_concurrency.processutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:30:57 np0005486808 nova_compute[259627]: 2025-10-14 09:30:57.675 2 DEBUG oslo_concurrency.lockutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:30:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:30:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:30:58 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/385454893' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:30:58 np0005486808 nova_compute[259627]: 2025-10-14 09:30:58.085 2 DEBUG oslo_concurrency.processutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:30:58 np0005486808 nova_compute[259627]: 2025-10-14 09:30:58.093 2 DEBUG nova.compute.provider_tree [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:30:58 np0005486808 nova_compute[259627]: 2025-10-14 09:30:58.165 2 DEBUG nova.scheduler.client.report [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:30:58 np0005486808 nova_compute[259627]: 2025-10-14 09:30:58.188 2 DEBUG oslo_concurrency.lockutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:30:58 np0005486808 nova_compute[259627]: 2025-10-14 09:30:58.189 2 DEBUG nova.compute.manager [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:30:58 np0005486808 nova_compute[259627]: 2025-10-14 09:30:58.193 2 DEBUG oslo_concurrency.lockutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.519s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:30:58 np0005486808 nova_compute[259627]: 2025-10-14 09:30:58.202 2 DEBUG nova.virt.hardware [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:30:58 np0005486808 nova_compute[259627]: 2025-10-14 09:30:58.203 2 INFO nova.compute.claims [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:30:58 np0005486808 nova_compute[259627]: 2025-10-14 09:30:58.339 2 DEBUG nova.compute.manager [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:30:58 np0005486808 nova_compute[259627]: 2025-10-14 09:30:58.340 2 DEBUG nova.network.neutron [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:30:58 np0005486808 nova_compute[259627]: 2025-10-14 09:30:58.361 2 INFO nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:30:58 np0005486808 nova_compute[259627]: 2025-10-14 09:30:58.391 2 DEBUG nova.compute.manager [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:30:58 np0005486808 nova_compute[259627]: 2025-10-14 09:30:58.408 2 DEBUG oslo_concurrency.processutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:30:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2335: 305 pgs: 305 active+clean; 121 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Oct 14 05:30:58 np0005486808 nova_compute[259627]: 2025-10-14 09:30:58.605 2 DEBUG nova.compute.manager [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:30:58 np0005486808 nova_compute[259627]: 2025-10-14 09:30:58.607 2 DEBUG nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:30:58 np0005486808 nova_compute[259627]: 2025-10-14 09:30:58.608 2 INFO nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Creating image(s)#033[00m
Oct 14 05:30:58 np0005486808 nova_compute[259627]: 2025-10-14 09:30:58.634 2 DEBUG nova.storage.rbd_utils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image b30a994a-5fb7-4344-9944-98d3d75d3b04_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:30:58 np0005486808 nova_compute[259627]: 2025-10-14 09:30:58.658 2 DEBUG nova.storage.rbd_utils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image b30a994a-5fb7-4344-9944-98d3d75d3b04_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:30:58 np0005486808 nova_compute[259627]: 2025-10-14 09:30:58.678 2 DEBUG nova.storage.rbd_utils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image b30a994a-5fb7-4344-9944-98d3d75d3b04_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:30:58 np0005486808 nova_compute[259627]: 2025-10-14 09:30:58.681 2 DEBUG oslo_concurrency.processutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:30:58 np0005486808 nova_compute[259627]: 2025-10-14 09:30:58.767 2 DEBUG oslo_concurrency.processutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:30:58 np0005486808 nova_compute[259627]: 2025-10-14 09:30:58.768 2 DEBUG oslo_concurrency.lockutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:30:58 np0005486808 nova_compute[259627]: 2025-10-14 09:30:58.769 2 DEBUG oslo_concurrency.lockutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:30:58 np0005486808 nova_compute[259627]: 2025-10-14 09:30:58.769 2 DEBUG oslo_concurrency.lockutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:30:58 np0005486808 nova_compute[259627]: 2025-10-14 09:30:58.787 2 DEBUG nova.storage.rbd_utils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image b30a994a-5fb7-4344-9944-98d3d75d3b04_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:30:58 np0005486808 nova_compute[259627]: 2025-10-14 09:30:58.790 2 DEBUG oslo_concurrency.processutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 b30a994a-5fb7-4344-9944-98d3d75d3b04_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:30:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:30:58 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2771659984' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:30:58 np0005486808 nova_compute[259627]: 2025-10-14 09:30:58.870 2 DEBUG oslo_concurrency.processutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:30:58 np0005486808 nova_compute[259627]: 2025-10-14 09:30:58.876 2 DEBUG nova.compute.provider_tree [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:30:58 np0005486808 nova_compute[259627]: 2025-10-14 09:30:58.893 2 DEBUG nova.policy [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30cd4a7832e74a8ba07238937405c4ea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:30:58 np0005486808 nova_compute[259627]: 2025-10-14 09:30:58.930 2 DEBUG nova.scheduler.client.report [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:30:59 np0005486808 nova_compute[259627]: 2025-10-14 09:30:59.045 2 DEBUG oslo_concurrency.processutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 b30a994a-5fb7-4344-9944-98d3d75d3b04_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.255s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:30:59 np0005486808 nova_compute[259627]: 2025-10-14 09:30:59.090 2 DEBUG nova.storage.rbd_utils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] resizing rbd image b30a994a-5fb7-4344-9944-98d3d75d3b04_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:30:59 np0005486808 nova_compute[259627]: 2025-10-14 09:30:59.114 2 DEBUG oslo_concurrency.lockutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.921s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:30:59 np0005486808 nova_compute[259627]: 2025-10-14 09:30:59.114 2 DEBUG nova.compute.manager [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:30:59 np0005486808 nova_compute[259627]: 2025-10-14 09:30:59.166 2 DEBUG nova.objects.instance [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'migration_context' on Instance uuid b30a994a-5fb7-4344-9944-98d3d75d3b04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:30:59 np0005486808 nova_compute[259627]: 2025-10-14 09:30:59.192 2 DEBUG nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:30:59 np0005486808 nova_compute[259627]: 2025-10-14 09:30:59.192 2 DEBUG nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Ensure instance console log exists: /var/lib/nova/instances/b30a994a-5fb7-4344-9944-98d3d75d3b04/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:30:59 np0005486808 nova_compute[259627]: 2025-10-14 09:30:59.193 2 DEBUG oslo_concurrency.lockutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:30:59 np0005486808 nova_compute[259627]: 2025-10-14 09:30:59.193 2 DEBUG oslo_concurrency.lockutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:30:59 np0005486808 nova_compute[259627]: 2025-10-14 09:30:59.193 2 DEBUG oslo_concurrency.lockutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:30:59 np0005486808 nova_compute[259627]: 2025-10-14 09:30:59.203 2 DEBUG nova.compute.manager [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:30:59 np0005486808 nova_compute[259627]: 2025-10-14 09:30:59.203 2 DEBUG nova.network.neutron [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:30:59 np0005486808 nova_compute[259627]: 2025-10-14 09:30:59.243 2 INFO nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:30:59 np0005486808 nova_compute[259627]: 2025-10-14 09:30:59.270 2 DEBUG nova.compute.manager [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:30:59 np0005486808 nova_compute[259627]: 2025-10-14 09:30:59.426 2 DEBUG nova.compute.manager [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:30:59 np0005486808 nova_compute[259627]: 2025-10-14 09:30:59.427 2 DEBUG nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:30:59 np0005486808 nova_compute[259627]: 2025-10-14 09:30:59.428 2 INFO nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Creating image(s)#033[00m
Oct 14 05:30:59 np0005486808 nova_compute[259627]: 2025-10-14 09:30:59.461 2 DEBUG nova.storage.rbd_utils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 4ff65022-c3e1-4ee6-b866-7892555ef52f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:30:59 np0005486808 nova_compute[259627]: 2025-10-14 09:30:59.502 2 DEBUG nova.storage.rbd_utils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 4ff65022-c3e1-4ee6-b866-7892555ef52f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:30:59 np0005486808 nova_compute[259627]: 2025-10-14 09:30:59.539 2 DEBUG nova.storage.rbd_utils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 4ff65022-c3e1-4ee6-b866-7892555ef52f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:30:59 np0005486808 nova_compute[259627]: 2025-10-14 09:30:59.544 2 DEBUG oslo_concurrency.processutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:30:59 np0005486808 nova_compute[259627]: 2025-10-14 09:30:59.649 2 DEBUG oslo_concurrency.processutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:30:59 np0005486808 nova_compute[259627]: 2025-10-14 09:30:59.651 2 DEBUG oslo_concurrency.lockutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:30:59 np0005486808 nova_compute[259627]: 2025-10-14 09:30:59.652 2 DEBUG oslo_concurrency.lockutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:30:59 np0005486808 nova_compute[259627]: 2025-10-14 09:30:59.652 2 DEBUG oslo_concurrency.lockutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:30:59 np0005486808 nova_compute[259627]: 2025-10-14 09:30:59.688 2 DEBUG nova.storage.rbd_utils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 4ff65022-c3e1-4ee6-b866-7892555ef52f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:30:59 np0005486808 nova_compute[259627]: 2025-10-14 09:30:59.694 2 DEBUG oslo_concurrency.processutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 4ff65022-c3e1-4ee6-b866-7892555ef52f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:30:59 np0005486808 nova_compute[259627]: 2025-10-14 09:30:59.743 2 DEBUG nova.policy [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '20f3546ab30e42b5b641f67780316750', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f754bf649a2404fa8dee732f5aab36e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:30:59 np0005486808 nova_compute[259627]: 2025-10-14 09:30:59.993 2 DEBUG oslo_concurrency.processutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 4ff65022-c3e1-4ee6-b866-7892555ef52f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:31:00 np0005486808 nova_compute[259627]: 2025-10-14 09:31:00.043 2 DEBUG nova.storage.rbd_utils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] resizing rbd image 4ff65022-c3e1-4ee6-b866-7892555ef52f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:31:00 np0005486808 nova_compute[259627]: 2025-10-14 09:31:00.127 2 DEBUG nova.objects.instance [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'migration_context' on Instance uuid 4ff65022-c3e1-4ee6-b866-7892555ef52f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:31:00 np0005486808 nova_compute[259627]: 2025-10-14 09:31:00.145 2 DEBUG nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:31:00 np0005486808 nova_compute[259627]: 2025-10-14 09:31:00.145 2 DEBUG nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Ensure instance console log exists: /var/lib/nova/instances/4ff65022-c3e1-4ee6-b866-7892555ef52f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:31:00 np0005486808 nova_compute[259627]: 2025-10-14 09:31:00.146 2 DEBUG oslo_concurrency.lockutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:31:00 np0005486808 nova_compute[259627]: 2025-10-14 09:31:00.146 2 DEBUG oslo_concurrency.lockutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:31:00 np0005486808 nova_compute[259627]: 2025-10-14 09:31:00.146 2 DEBUG oslo_concurrency.lockutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:31:00 np0005486808 nova_compute[259627]: 2025-10-14 09:31:00.266 2 DEBUG nova.network.neutron [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Successfully created port: f2639397-8fb2-4541-a298-fd68219e1e47 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:31:00 np0005486808 nova_compute[259627]: 2025-10-14 09:31:00.376 2 DEBUG nova.network.neutron [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Successfully created port: 20f15fae-2789-43f5-8ca3-2a412dba5625 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:31:00 np0005486808 nova_compute[259627]: 2025-10-14 09:31:00.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2336: 305 pgs: 305 active+clean; 149 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 5.2 KiB/s rd, 893 KiB/s wr, 8 op/s
Oct 14 05:31:00 np0005486808 nova_compute[259627]: 2025-10-14 09:31:00.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:00 np0005486808 nova_compute[259627]: 2025-10-14 09:31:00.741 2 DEBUG nova.network.neutron [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Successfully created port: c22b3ec2-f5a1-4c97-8648-a463e9e12545 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:31:01 np0005486808 nova_compute[259627]: 2025-10-14 09:31:01.097 2 DEBUG nova.network.neutron [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Successfully updated port: 20f15fae-2789-43f5-8ca3-2a412dba5625 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:31:01 np0005486808 nova_compute[259627]: 2025-10-14 09:31:01.116 2 DEBUG oslo_concurrency.lockutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "refresh_cache-4ff65022-c3e1-4ee6-b866-7892555ef52f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:31:01 np0005486808 nova_compute[259627]: 2025-10-14 09:31:01.117 2 DEBUG oslo_concurrency.lockutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquired lock "refresh_cache-4ff65022-c3e1-4ee6-b866-7892555ef52f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:31:01 np0005486808 nova_compute[259627]: 2025-10-14 09:31:01.117 2 DEBUG nova.network.neutron [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:31:01 np0005486808 nova_compute[259627]: 2025-10-14 09:31:01.194 2 DEBUG nova.compute.manager [req-3bcd5722-1300-4aef-9cc5-74c01341235e req-e63a307b-839d-496f-81af-789375f19033 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Received event network-changed-20f15fae-2789-43f5-8ca3-2a412dba5625 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:31:01 np0005486808 nova_compute[259627]: 2025-10-14 09:31:01.194 2 DEBUG nova.compute.manager [req-3bcd5722-1300-4aef-9cc5-74c01341235e req-e63a307b-839d-496f-81af-789375f19033 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Refreshing instance network info cache due to event network-changed-20f15fae-2789-43f5-8ca3-2a412dba5625. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:31:01 np0005486808 nova_compute[259627]: 2025-10-14 09:31:01.195 2 DEBUG oslo_concurrency.lockutils [req-3bcd5722-1300-4aef-9cc5-74c01341235e req-e63a307b-839d-496f-81af-789375f19033 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-4ff65022-c3e1-4ee6-b866-7892555ef52f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:31:01 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:31:01 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:31:01 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:31:01 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:31:01 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:31:01 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:31:01 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev fb69312d-d2b5-4528-8d5d-8b9ff46eed9f does not exist
Oct 14 05:31:01 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 746fbf9d-33e8-41ed-947d-4c61e8f1df80 does not exist
Oct 14 05:31:01 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 6f41fa35-5568-49eb-9d51-77847a85f6a7 does not exist
Oct 14 05:31:01 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:31:01 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:31:01 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:31:01 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:31:01 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:31:01 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:31:01 np0005486808 nova_compute[259627]: 2025-10-14 09:31:01.883 2 DEBUG nova.network.neutron [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:31:01 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:31:01 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:31:01 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:31:02 np0005486808 nova_compute[259627]: 2025-10-14 09:31:02.140 2 DEBUG nova.network.neutron [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Successfully updated port: f2639397-8fb2-4541-a298-fd68219e1e47 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:31:02 np0005486808 nova_compute[259627]: 2025-10-14 09:31:02.225 2 DEBUG nova.compute.manager [req-2656b88e-360f-410a-8d12-7d14a0e60cd7 req-4ef5b7dd-e374-474f-b149-3bc936fb85cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Received event network-changed-f2639397-8fb2-4541-a298-fd68219e1e47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:31:02 np0005486808 nova_compute[259627]: 2025-10-14 09:31:02.226 2 DEBUG nova.compute.manager [req-2656b88e-360f-410a-8d12-7d14a0e60cd7 req-4ef5b7dd-e374-474f-b149-3bc936fb85cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Refreshing instance network info cache due to event network-changed-f2639397-8fb2-4541-a298-fd68219e1e47. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:31:02 np0005486808 nova_compute[259627]: 2025-10-14 09:31:02.226 2 DEBUG oslo_concurrency.lockutils [req-2656b88e-360f-410a-8d12-7d14a0e60cd7 req-4ef5b7dd-e374-474f-b149-3bc936fb85cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-b30a994a-5fb7-4344-9944-98d3d75d3b04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:31:02 np0005486808 nova_compute[259627]: 2025-10-14 09:31:02.226 2 DEBUG oslo_concurrency.lockutils [req-2656b88e-360f-410a-8d12-7d14a0e60cd7 req-4ef5b7dd-e374-474f-b149-3bc936fb85cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-b30a994a-5fb7-4344-9944-98d3d75d3b04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:31:02 np0005486808 nova_compute[259627]: 2025-10-14 09:31:02.226 2 DEBUG nova.network.neutron [req-2656b88e-360f-410a-8d12-7d14a0e60cd7 req-4ef5b7dd-e374-474f-b149-3bc936fb85cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Refreshing network info cache for port f2639397-8fb2-4541-a298-fd68219e1e47 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:31:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2337: 305 pgs: 305 active+clean; 213 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 3.6 MiB/s wr, 48 op/s
Oct 14 05:31:02 np0005486808 podman[400215]: 2025-10-14 09:31:02.515266169 +0000 UTC m=+0.070811739 container create e923e8739478cb2d50e12e146d82c0d1582fdcd57757cf16f4be7e876a4e51dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_keller, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 14 05:31:02 np0005486808 systemd[1]: Started libpod-conmon-e923e8739478cb2d50e12e146d82c0d1582fdcd57757cf16f4be7e876a4e51dd.scope.
Oct 14 05:31:02 np0005486808 podman[400215]: 2025-10-14 09:31:02.483724295 +0000 UTC m=+0.039269915 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:31:02 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:31:02 np0005486808 podman[400215]: 2025-10-14 09:31:02.634951226 +0000 UTC m=+0.190496806 container init e923e8739478cb2d50e12e146d82c0d1582fdcd57757cf16f4be7e876a4e51dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 14 05:31:02 np0005486808 podman[400215]: 2025-10-14 09:31:02.649458582 +0000 UTC m=+0.205004142 container start e923e8739478cb2d50e12e146d82c0d1582fdcd57757cf16f4be7e876a4e51dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_keller, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 14 05:31:02 np0005486808 podman[400215]: 2025-10-14 09:31:02.653567003 +0000 UTC m=+0.209112563 container attach e923e8739478cb2d50e12e146d82c0d1582fdcd57757cf16f4be7e876a4e51dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_keller, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 14 05:31:02 np0005486808 agitated_keller[400232]: 167 167
Oct 14 05:31:02 np0005486808 systemd[1]: libpod-e923e8739478cb2d50e12e146d82c0d1582fdcd57757cf16f4be7e876a4e51dd.scope: Deactivated successfully.
Oct 14 05:31:02 np0005486808 conmon[400232]: conmon e923e8739478cb2d50e1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e923e8739478cb2d50e12e146d82c0d1582fdcd57757cf16f4be7e876a4e51dd.scope/container/memory.events
Oct 14 05:31:02 np0005486808 podman[400237]: 2025-10-14 09:31:02.730938142 +0000 UTC m=+0.045378145 container died e923e8739478cb2d50e12e146d82c0d1582fdcd57757cf16f4be7e876a4e51dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_keller, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:31:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:31:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:31:02 np0005486808 systemd[1]: var-lib-containers-storage-overlay-9e73c11aa86aaea91b43b0ef76e3a5973c171310b7fc73cd4e8427d689997bb4-merged.mount: Deactivated successfully.
Oct 14 05:31:02 np0005486808 podman[400237]: 2025-10-14 09:31:02.791817975 +0000 UTC m=+0.106257938 container remove e923e8739478cb2d50e12e146d82c0d1582fdcd57757cf16f4be7e876a4e51dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_keller, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:31:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:31:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:31:02 np0005486808 systemd[1]: libpod-conmon-e923e8739478cb2d50e12e146d82c0d1582fdcd57757cf16f4be7e876a4e51dd.scope: Deactivated successfully.
Oct 14 05:31:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:31:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:31:02 np0005486808 nova_compute[259627]: 2025-10-14 09:31:02.848 2 DEBUG nova.network.neutron [req-2656b88e-360f-410a-8d12-7d14a0e60cd7 req-4ef5b7dd-e374-474f-b149-3bc936fb85cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:31:02 np0005486808 podman[400259]: 2025-10-14 09:31:02.990596564 +0000 UTC m=+0.057263467 container create a09dc736fe193a64239a4ef14aa220440189eb28bbb15b53c7d6f5e75596bf13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wing, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 05:31:03 np0005486808 systemd[1]: Started libpod-conmon-a09dc736fe193a64239a4ef14aa220440189eb28bbb15b53c7d6f5e75596bf13.scope.
Oct 14 05:31:03 np0005486808 podman[400259]: 2025-10-14 09:31:02.962281599 +0000 UTC m=+0.028948542 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:31:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:31:03 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:31:03 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/200887aa44f4e3c057fc8b3d046787d87f9c5c5cd5709084db61c452b4032090/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:31:03 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/200887aa44f4e3c057fc8b3d046787d87f9c5c5cd5709084db61c452b4032090/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:31:03 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/200887aa44f4e3c057fc8b3d046787d87f9c5c5cd5709084db61c452b4032090/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:31:03 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/200887aa44f4e3c057fc8b3d046787d87f9c5c5cd5709084db61c452b4032090/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:31:03 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/200887aa44f4e3c057fc8b3d046787d87f9c5c5cd5709084db61c452b4032090/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:31:03 np0005486808 podman[400259]: 2025-10-14 09:31:03.11434035 +0000 UTC m=+0.181007223 container init a09dc736fe193a64239a4ef14aa220440189eb28bbb15b53c7d6f5e75596bf13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:31:03 np0005486808 podman[400259]: 2025-10-14 09:31:03.128056327 +0000 UTC m=+0.194723230 container start a09dc736fe193a64239a4ef14aa220440189eb28bbb15b53c7d6f5e75596bf13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 14 05:31:03 np0005486808 podman[400259]: 2025-10-14 09:31:03.131925222 +0000 UTC m=+0.198592105 container attach a09dc736fe193a64239a4ef14aa220440189eb28bbb15b53c7d6f5e75596bf13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 05:31:03 np0005486808 nova_compute[259627]: 2025-10-14 09:31:03.495 2 DEBUG nova.network.neutron [req-2656b88e-360f-410a-8d12-7d14a0e60cd7 req-4ef5b7dd-e374-474f-b149-3bc936fb85cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:31:03 np0005486808 nova_compute[259627]: 2025-10-14 09:31:03.523 2 DEBUG oslo_concurrency.lockutils [req-2656b88e-360f-410a-8d12-7d14a0e60cd7 req-4ef5b7dd-e374-474f-b149-3bc936fb85cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-b30a994a-5fb7-4344-9944-98d3d75d3b04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:31:03 np0005486808 nova_compute[259627]: 2025-10-14 09:31:03.549 2 DEBUG nova.network.neutron [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Updating instance_info_cache with network_info: [{"id": "20f15fae-2789-43f5-8ca3-2a412dba5625", "address": "fa:16:3e:59:13:28", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20f15fae-27", "ovs_interfaceid": "20f15fae-2789-43f5-8ca3-2a412dba5625", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:31:03 np0005486808 nova_compute[259627]: 2025-10-14 09:31:03.571 2 DEBUG oslo_concurrency.lockutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Releasing lock "refresh_cache-4ff65022-c3e1-4ee6-b866-7892555ef52f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:31:03 np0005486808 nova_compute[259627]: 2025-10-14 09:31:03.571 2 DEBUG nova.compute.manager [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Instance network_info: |[{"id": "20f15fae-2789-43f5-8ca3-2a412dba5625", "address": "fa:16:3e:59:13:28", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20f15fae-27", "ovs_interfaceid": "20f15fae-2789-43f5-8ca3-2a412dba5625", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:31:03 np0005486808 nova_compute[259627]: 2025-10-14 09:31:03.572 2 DEBUG oslo_concurrency.lockutils [req-3bcd5722-1300-4aef-9cc5-74c01341235e req-e63a307b-839d-496f-81af-789375f19033 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-4ff65022-c3e1-4ee6-b866-7892555ef52f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:31:03 np0005486808 nova_compute[259627]: 2025-10-14 09:31:03.572 2 DEBUG nova.network.neutron [req-3bcd5722-1300-4aef-9cc5-74c01341235e req-e63a307b-839d-496f-81af-789375f19033 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Refreshing network info cache for port 20f15fae-2789-43f5-8ca3-2a412dba5625 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:31:03 np0005486808 nova_compute[259627]: 2025-10-14 09:31:03.577 2 DEBUG nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Start _get_guest_xml network_info=[{"id": "20f15fae-2789-43f5-8ca3-2a412dba5625", "address": "fa:16:3e:59:13:28", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20f15fae-27", "ovs_interfaceid": "20f15fae-2789-43f5-8ca3-2a412dba5625", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:31:03 np0005486808 nova_compute[259627]: 2025-10-14 09:31:03.585 2 WARNING nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:31:03 np0005486808 nova_compute[259627]: 2025-10-14 09:31:03.598 2 DEBUG nova.virt.libvirt.host [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:31:03 np0005486808 nova_compute[259627]: 2025-10-14 09:31:03.599 2 DEBUG nova.virt.libvirt.host [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:31:03 np0005486808 nova_compute[259627]: 2025-10-14 09:31:03.606 2 DEBUG nova.virt.libvirt.host [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:31:03 np0005486808 nova_compute[259627]: 2025-10-14 09:31:03.607 2 DEBUG nova.virt.libvirt.host [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:31:03 np0005486808 nova_compute[259627]: 2025-10-14 09:31:03.607 2 DEBUG nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:31:03 np0005486808 nova_compute[259627]: 2025-10-14 09:31:03.608 2 DEBUG nova.virt.hardware [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:31:03 np0005486808 nova_compute[259627]: 2025-10-14 09:31:03.609 2 DEBUG nova.virt.hardware [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:31:03 np0005486808 nova_compute[259627]: 2025-10-14 09:31:03.609 2 DEBUG nova.virt.hardware [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:31:03 np0005486808 nova_compute[259627]: 2025-10-14 09:31:03.610 2 DEBUG nova.virt.hardware [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:31:03 np0005486808 nova_compute[259627]: 2025-10-14 09:31:03.610 2 DEBUG nova.virt.hardware [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:31:03 np0005486808 nova_compute[259627]: 2025-10-14 09:31:03.611 2 DEBUG nova.virt.hardware [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:31:03 np0005486808 nova_compute[259627]: 2025-10-14 09:31:03.611 2 DEBUG nova.virt.hardware [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:31:03 np0005486808 nova_compute[259627]: 2025-10-14 09:31:03.611 2 DEBUG nova.virt.hardware [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:31:03 np0005486808 nova_compute[259627]: 2025-10-14 09:31:03.612 2 DEBUG nova.virt.hardware [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:31:03 np0005486808 nova_compute[259627]: 2025-10-14 09:31:03.612 2 DEBUG nova.virt.hardware [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:31:03 np0005486808 nova_compute[259627]: 2025-10-14 09:31:03.613 2 DEBUG nova.virt.hardware [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:31:03 np0005486808 nova_compute[259627]: 2025-10-14 09:31:03.617 2 DEBUG oslo_concurrency.processutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:31:03 np0005486808 nova_compute[259627]: 2025-10-14 09:31:03.743 2 DEBUG nova.network.neutron [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Successfully updated port: c22b3ec2-f5a1-4c97-8648-a463e9e12545 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:31:03 np0005486808 nova_compute[259627]: 2025-10-14 09:31:03.763 2 DEBUG oslo_concurrency.lockutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "refresh_cache-b30a994a-5fb7-4344-9944-98d3d75d3b04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:31:03 np0005486808 nova_compute[259627]: 2025-10-14 09:31:03.763 2 DEBUG oslo_concurrency.lockutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquired lock "refresh_cache-b30a994a-5fb7-4344-9944-98d3d75d3b04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:31:03 np0005486808 nova_compute[259627]: 2025-10-14 09:31:03.764 2 DEBUG nova.network.neutron [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:31:03 np0005486808 nova_compute[259627]: 2025-10-14 09:31:03.940 2 DEBUG nova.network.neutron [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:31:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:31:04 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3477009379' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:31:04 np0005486808 nova_compute[259627]: 2025-10-14 09:31:04.103 2 DEBUG oslo_concurrency.processutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:31:04 np0005486808 nova_compute[259627]: 2025-10-14 09:31:04.130 2 DEBUG nova.storage.rbd_utils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 4ff65022-c3e1-4ee6-b866-7892555ef52f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:31:04 np0005486808 nova_compute[259627]: 2025-10-14 09:31:04.135 2 DEBUG oslo_concurrency.processutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:31:04 np0005486808 tender_wing[400276]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:31:04 np0005486808 tender_wing[400276]: --> relative data size: 1.0
Oct 14 05:31:04 np0005486808 tender_wing[400276]: --> All data devices are unavailable
Oct 14 05:31:04 np0005486808 systemd[1]: libpod-a09dc736fe193a64239a4ef14aa220440189eb28bbb15b53c7d6f5e75596bf13.scope: Deactivated successfully.
Oct 14 05:31:04 np0005486808 systemd[1]: libpod-a09dc736fe193a64239a4ef14aa220440189eb28bbb15b53c7d6f5e75596bf13.scope: Consumed 1.086s CPU time.
Oct 14 05:31:04 np0005486808 podman[400259]: 2025-10-14 09:31:04.276002488 +0000 UTC m=+1.342669391 container died a09dc736fe193a64239a4ef14aa220440189eb28bbb15b53c7d6f5e75596bf13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wing, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:31:04 np0005486808 systemd[1]: var-lib-containers-storage-overlay-200887aa44f4e3c057fc8b3d046787d87f9c5c5cd5709084db61c452b4032090-merged.mount: Deactivated successfully.
Oct 14 05:31:04 np0005486808 nova_compute[259627]: 2025-10-14 09:31:04.331 2 DEBUG nova.compute.manager [req-c13e2a66-d859-40f3-9f89-f453f9a07763 req-c1e16193-5d7a-4df3-9a79-2579d09dca62 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Received event network-changed-c22b3ec2-f5a1-4c97-8648-a463e9e12545 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:31:04 np0005486808 nova_compute[259627]: 2025-10-14 09:31:04.333 2 DEBUG nova.compute.manager [req-c13e2a66-d859-40f3-9f89-f453f9a07763 req-c1e16193-5d7a-4df3-9a79-2579d09dca62 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Refreshing instance network info cache due to event network-changed-c22b3ec2-f5a1-4c97-8648-a463e9e12545. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:31:04 np0005486808 nova_compute[259627]: 2025-10-14 09:31:04.334 2 DEBUG oslo_concurrency.lockutils [req-c13e2a66-d859-40f3-9f89-f453f9a07763 req-c1e16193-5d7a-4df3-9a79-2579d09dca62 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-b30a994a-5fb7-4344-9944-98d3d75d3b04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:31:04 np0005486808 podman[400259]: 2025-10-14 09:31:04.354030392 +0000 UTC m=+1.420697265 container remove a09dc736fe193a64239a4ef14aa220440189eb28bbb15b53c7d6f5e75596bf13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wing, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:31:04 np0005486808 systemd[1]: libpod-conmon-a09dc736fe193a64239a4ef14aa220440189eb28bbb15b53c7d6f5e75596bf13.scope: Deactivated successfully.
Oct 14 05:31:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2338: 305 pgs: 305 active+clean; 213 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 3.6 MiB/s wr, 48 op/s
Oct 14 05:31:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:31:04 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2751276044' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:31:04 np0005486808 nova_compute[259627]: 2025-10-14 09:31:04.614 2 DEBUG oslo_concurrency.processutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:31:04 np0005486808 nova_compute[259627]: 2025-10-14 09:31:04.615 2 DEBUG nova.virt.libvirt.vif [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:30:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-gen-1-876162503',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-gen-1-876162503',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ge',id=138,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLEydbvfC6BL8JKAwPsMYw33jsGt7x8pmldi0JdcLroJxQM9Cv9y2WyT4/kdGyYCi1f6St8cX/paY9O5VNQjPEflw/+0a0KCs3SETvjwSZyH4RtpOVgJ2yOdUlm6DCl0mg==',key_name='tempest-TestSecurityGroupsBasicOps-55432425',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-69wk0x4o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:30:59Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=4ff65022-c3e1-4ee6-b866-7892555ef52f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "20f15fae-2789-43f5-8ca3-2a412dba5625", "address": "fa:16:3e:59:13:28", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20f15fae-27", "ovs_interfaceid": "20f15fae-2789-43f5-8ca3-2a412dba5625", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:31:04 np0005486808 nova_compute[259627]: 2025-10-14 09:31:04.616 2 DEBUG nova.network.os_vif_util [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "20f15fae-2789-43f5-8ca3-2a412dba5625", "address": "fa:16:3e:59:13:28", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20f15fae-27", "ovs_interfaceid": "20f15fae-2789-43f5-8ca3-2a412dba5625", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:31:04 np0005486808 nova_compute[259627]: 2025-10-14 09:31:04.617 2 DEBUG nova.network.os_vif_util [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:59:13:28,bridge_name='br-int',has_traffic_filtering=True,id=20f15fae-2789-43f5-8ca3-2a412dba5625,network=Network(6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20f15fae-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:31:04 np0005486808 nova_compute[259627]: 2025-10-14 09:31:04.618 2 DEBUG nova.objects.instance [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'pci_devices' on Instance uuid 4ff65022-c3e1-4ee6-b866-7892555ef52f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:31:04 np0005486808 nova_compute[259627]: 2025-10-14 09:31:04.637 2 DEBUG nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:31:04 np0005486808 nova_compute[259627]:  <uuid>4ff65022-c3e1-4ee6-b866-7892555ef52f</uuid>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:  <name>instance-0000008a</name>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:31:04 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-gen-1-876162503</nova:name>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:31:03</nova:creationTime>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:31:04 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:        <nova:user uuid="20f3546ab30e42b5b641f67780316750">tempest-TestSecurityGroupsBasicOps-1327646173-project-member</nova:user>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:        <nova:project uuid="5f754bf649a2404fa8dee732f5aab36e">tempest-TestSecurityGroupsBasicOps-1327646173</nova:project>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:        <nova:port uuid="20f15fae-2789-43f5-8ca3-2a412dba5625">
Oct 14 05:31:04 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:      <entry name="serial">4ff65022-c3e1-4ee6-b866-7892555ef52f</entry>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:      <entry name="uuid">4ff65022-c3e1-4ee6-b866-7892555ef52f</entry>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:31:04 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/4ff65022-c3e1-4ee6-b866-7892555ef52f_disk">
Oct 14 05:31:04 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:31:04 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:31:04 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/4ff65022-c3e1-4ee6-b866-7892555ef52f_disk.config">
Oct 14 05:31:04 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:31:04 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:31:04 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:59:13:28"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:      <target dev="tap20f15fae-27"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:31:04 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/4ff65022-c3e1-4ee6-b866-7892555ef52f/console.log" append="off"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:31:04 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:31:04 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:31:04 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:31:04 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:31:04 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:31:04 np0005486808 nova_compute[259627]: 2025-10-14 09:31:04.638 2 DEBUG nova.compute.manager [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Preparing to wait for external event network-vif-plugged-20f15fae-2789-43f5-8ca3-2a412dba5625 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:31:04 np0005486808 nova_compute[259627]: 2025-10-14 09:31:04.638 2 DEBUG oslo_concurrency.lockutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "4ff65022-c3e1-4ee6-b866-7892555ef52f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:31:04 np0005486808 nova_compute[259627]: 2025-10-14 09:31:04.638 2 DEBUG oslo_concurrency.lockutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4ff65022-c3e1-4ee6-b866-7892555ef52f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:31:04 np0005486808 nova_compute[259627]: 2025-10-14 09:31:04.639 2 DEBUG oslo_concurrency.lockutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4ff65022-c3e1-4ee6-b866-7892555ef52f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:31:04 np0005486808 nova_compute[259627]: 2025-10-14 09:31:04.639 2 DEBUG nova.virt.libvirt.vif [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:30:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-gen-1-876162503',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-gen-1-876162503',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ge',id=138,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLEydbvfC6BL8JKAwPsMYw33jsGt7x8pmldi0JdcLroJxQM9Cv9y2WyT4/kdGyYCi1f6St8cX/paY9O5VNQjPEflw/+0a0KCs3SETvjwSZyH4RtpOVgJ2yOdUlm6DCl0mg==',key_name='tempest-TestSecurityGroupsBasicOps-55432425',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-69wk0x4o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:30:59Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=4ff65022-c3e1-4ee6-b866-7892555ef52f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "20f15fae-2789-43f5-8ca3-2a412dba5625", "address": "fa:16:3e:59:13:28", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20f15fae-27", "ovs_interfaceid": "20f15fae-2789-43f5-8ca3-2a412dba5625", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:31:04 np0005486808 nova_compute[259627]: 2025-10-14 09:31:04.640 2 DEBUG nova.network.os_vif_util [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "20f15fae-2789-43f5-8ca3-2a412dba5625", "address": "fa:16:3e:59:13:28", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20f15fae-27", "ovs_interfaceid": "20f15fae-2789-43f5-8ca3-2a412dba5625", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:31:04 np0005486808 nova_compute[259627]: 2025-10-14 09:31:04.640 2 DEBUG nova.network.os_vif_util [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:59:13:28,bridge_name='br-int',has_traffic_filtering=True,id=20f15fae-2789-43f5-8ca3-2a412dba5625,network=Network(6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20f15fae-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:31:04 np0005486808 nova_compute[259627]: 2025-10-14 09:31:04.641 2 DEBUG os_vif [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:59:13:28,bridge_name='br-int',has_traffic_filtering=True,id=20f15fae-2789-43f5-8ca3-2a412dba5625,network=Network(6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20f15fae-27') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:31:04 np0005486808 nova_compute[259627]: 2025-10-14 09:31:04.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:04 np0005486808 nova_compute[259627]: 2025-10-14 09:31:04.642 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:31:04 np0005486808 nova_compute[259627]: 2025-10-14 09:31:04.642 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:31:04 np0005486808 nova_compute[259627]: 2025-10-14 09:31:04.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:04 np0005486808 nova_compute[259627]: 2025-10-14 09:31:04.646 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap20f15fae-27, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:31:04 np0005486808 nova_compute[259627]: 2025-10-14 09:31:04.647 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap20f15fae-27, col_values=(('external_ids', {'iface-id': '20f15fae-2789-43f5-8ca3-2a412dba5625', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:59:13:28', 'vm-uuid': '4ff65022-c3e1-4ee6-b866-7892555ef52f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:31:04 np0005486808 NetworkManager[44885]: <info>  [1760434264.6517] manager: (tap20f15fae-27): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/595)
Oct 14 05:31:04 np0005486808 nova_compute[259627]: 2025-10-14 09:31:04.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 05:31:04 np0005486808 nova_compute[259627]: 2025-10-14 09:31:04.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:31:04 np0005486808 nova_compute[259627]: 2025-10-14 09:31:04.657 2 INFO os_vif [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:59:13:28,bridge_name='br-int',has_traffic_filtering=True,id=20f15fae-2789-43f5-8ca3-2a412dba5625,network=Network(6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20f15fae-27')
Oct 14 05:31:04 np0005486808 nova_compute[259627]: 2025-10-14 09:31:04.723 2 DEBUG nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 05:31:04 np0005486808 nova_compute[259627]: 2025-10-14 09:31:04.724 2 DEBUG nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 05:31:04 np0005486808 nova_compute[259627]: 2025-10-14 09:31:04.724 2 DEBUG nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No VIF found with MAC fa:16:3e:59:13:28, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 14 05:31:04 np0005486808 nova_compute[259627]: 2025-10-14 09:31:04.724 2 INFO nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Using config drive
Oct 14 05:31:04 np0005486808 nova_compute[259627]: 2025-10-14 09:31:04.759 2 DEBUG nova.storage.rbd_utils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 4ff65022-c3e1-4ee6-b866-7892555ef52f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:31:05 np0005486808 podman[400540]: 2025-10-14 09:31:05.173625685 +0000 UTC m=+0.058034035 container create 0a454cd99e4ecd55682ffc7841f3f00f4340a6eb31f8d3066cd0267d95276372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:31:05 np0005486808 systemd[1]: Started libpod-conmon-0a454cd99e4ecd55682ffc7841f3f00f4340a6eb31f8d3066cd0267d95276372.scope.
Oct 14 05:31:05 np0005486808 podman[400540]: 2025-10-14 09:31:05.149571385 +0000 UTC m=+0.033979705 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:31:05 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:31:05 np0005486808 podman[400540]: 2025-10-14 09:31:05.28383083 +0000 UTC m=+0.168239240 container init 0a454cd99e4ecd55682ffc7841f3f00f4340a6eb31f8d3066cd0267d95276372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_raman, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:31:05 np0005486808 podman[400540]: 2025-10-14 09:31:05.297124016 +0000 UTC m=+0.181532366 container start 0a454cd99e4ecd55682ffc7841f3f00f4340a6eb31f8d3066cd0267d95276372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_raman, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 14 05:31:05 np0005486808 podman[400540]: 2025-10-14 09:31:05.301543114 +0000 UTC m=+0.185951464 container attach 0a454cd99e4ecd55682ffc7841f3f00f4340a6eb31f8d3066cd0267d95276372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_raman, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:31:05 np0005486808 boring_raman[400557]: 167 167
Oct 14 05:31:05 np0005486808 systemd[1]: libpod-0a454cd99e4ecd55682ffc7841f3f00f4340a6eb31f8d3066cd0267d95276372.scope: Deactivated successfully.
Oct 14 05:31:05 np0005486808 conmon[400557]: conmon 0a454cd99e4ecd55682f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0a454cd99e4ecd55682ffc7841f3f00f4340a6eb31f8d3066cd0267d95276372.scope/container/memory.events
Oct 14 05:31:05 np0005486808 podman[400540]: 2025-10-14 09:31:05.306164948 +0000 UTC m=+0.190573298 container died 0a454cd99e4ecd55682ffc7841f3f00f4340a6eb31f8d3066cd0267d95276372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_raman, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 14 05:31:05 np0005486808 systemd[1]: var-lib-containers-storage-overlay-fac05094bd029e5f2e0ace3c4be0c25d16e02217ddc01df89ab426382e3f6da2-merged.mount: Deactivated successfully.
Oct 14 05:31:05 np0005486808 podman[400540]: 2025-10-14 09:31:05.366848957 +0000 UTC m=+0.251257267 container remove 0a454cd99e4ecd55682ffc7841f3f00f4340a6eb31f8d3066cd0267d95276372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_raman, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 14 05:31:05 np0005486808 systemd[1]: libpod-conmon-0a454cd99e4ecd55682ffc7841f3f00f4340a6eb31f8d3066cd0267d95276372.scope: Deactivated successfully.
Oct 14 05:31:05 np0005486808 nova_compute[259627]: 2025-10-14 09:31:05.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:31:05 np0005486808 podman[400583]: 2025-10-14 09:31:05.601440813 +0000 UTC m=+0.053690599 container create 43e6035e760803e9d64f79cf7869c20169367529265f520481e8b5b82f18abc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 05:31:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:31:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4120634801' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:31:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:31:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4120634801' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:31:05 np0005486808 systemd[1]: Started libpod-conmon-43e6035e760803e9d64f79cf7869c20169367529265f520481e8b5b82f18abc7.scope.
Oct 14 05:31:05 np0005486808 podman[400583]: 2025-10-14 09:31:05.581936304 +0000 UTC m=+0.034186120 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:31:05 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:31:05 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31bd641945c3846127286ebac3a2cf8830bea2d74d43151e4bed411827651092/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:31:05 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31bd641945c3846127286ebac3a2cf8830bea2d74d43151e4bed411827651092/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:31:05 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31bd641945c3846127286ebac3a2cf8830bea2d74d43151e4bed411827651092/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:31:05 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31bd641945c3846127286ebac3a2cf8830bea2d74d43151e4bed411827651092/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:31:05 np0005486808 podman[400583]: 2025-10-14 09:31:05.71786882 +0000 UTC m=+0.170118636 container init 43e6035e760803e9d64f79cf7869c20169367529265f520481e8b5b82f18abc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:31:05 np0005486808 podman[400583]: 2025-10-14 09:31:05.728956832 +0000 UTC m=+0.181206638 container start 43e6035e760803e9d64f79cf7869c20169367529265f520481e8b5b82f18abc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_rhodes, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:31:05 np0005486808 podman[400583]: 2025-10-14 09:31:05.733066083 +0000 UTC m=+0.185315959 container attach 43e6035e760803e9d64f79cf7869c20169367529265f520481e8b5b82f18abc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_rhodes, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:31:05 np0005486808 nova_compute[259627]: 2025-10-14 09:31:05.882 2 INFO nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Creating config drive at /var/lib/nova/instances/4ff65022-c3e1-4ee6-b866-7892555ef52f/disk.config
Oct 14 05:31:05 np0005486808 nova_compute[259627]: 2025-10-14 09:31:05.892 2 DEBUG oslo_concurrency.processutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4ff65022-c3e1-4ee6-b866-7892555ef52f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphnklfotg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:31:05 np0005486808 nova_compute[259627]: 2025-10-14 09:31:05.960 2 DEBUG nova.network.neutron [req-3bcd5722-1300-4aef-9cc5-74c01341235e req-e63a307b-839d-496f-81af-789375f19033 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Updated VIF entry in instance network info cache for port 20f15fae-2789-43f5-8ca3-2a412dba5625. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 05:31:05 np0005486808 nova_compute[259627]: 2025-10-14 09:31:05.962 2 DEBUG nova.network.neutron [req-3bcd5722-1300-4aef-9cc5-74c01341235e req-e63a307b-839d-496f-81af-789375f19033 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Updating instance_info_cache with network_info: [{"id": "20f15fae-2789-43f5-8ca3-2a412dba5625", "address": "fa:16:3e:59:13:28", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20f15fae-27", "ovs_interfaceid": "20f15fae-2789-43f5-8ca3-2a412dba5625", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 05:31:05 np0005486808 nova_compute[259627]: 2025-10-14 09:31:05.989 2 DEBUG oslo_concurrency.lockutils [req-3bcd5722-1300-4aef-9cc5-74c01341235e req-e63a307b-839d-496f-81af-789375f19033 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-4ff65022-c3e1-4ee6-b866-7892555ef52f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 05:31:06 np0005486808 nova_compute[259627]: 2025-10-14 09:31:06.066 2 DEBUG oslo_concurrency.processutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4ff65022-c3e1-4ee6-b866-7892555ef52f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphnklfotg" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:31:06 np0005486808 nova_compute[259627]: 2025-10-14 09:31:06.111 2 DEBUG nova.storage.rbd_utils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 4ff65022-c3e1-4ee6-b866-7892555ef52f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:31:06 np0005486808 nova_compute[259627]: 2025-10-14 09:31:06.118 2 DEBUG oslo_concurrency.processutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4ff65022-c3e1-4ee6-b866-7892555ef52f/disk.config 4ff65022-c3e1-4ee6-b866-7892555ef52f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:31:06 np0005486808 nova_compute[259627]: 2025-10-14 09:31:06.330 2 DEBUG oslo_concurrency.processutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4ff65022-c3e1-4ee6-b866-7892555ef52f/disk.config 4ff65022-c3e1-4ee6-b866-7892555ef52f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.212s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:31:06 np0005486808 nova_compute[259627]: 2025-10-14 09:31:06.332 2 INFO nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Deleting local config drive /var/lib/nova/instances/4ff65022-c3e1-4ee6-b866-7892555ef52f/disk.config because it was imported into RBD.
Oct 14 05:31:06 np0005486808 kernel: tap20f15fae-27: entered promiscuous mode
Oct 14 05:31:06 np0005486808 nova_compute[259627]: 2025-10-14 09:31:06.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:31:06 np0005486808 ovn_controller[152662]: 2025-10-14T09:31:06Z|01464|binding|INFO|Claiming lport 20f15fae-2789-43f5-8ca3-2a412dba5625 for this chassis.
Oct 14 05:31:06 np0005486808 ovn_controller[152662]: 2025-10-14T09:31:06Z|01465|binding|INFO|20f15fae-2789-43f5-8ca3-2a412dba5625: Claiming fa:16:3e:59:13:28 10.100.0.10
Oct 14 05:31:06 np0005486808 NetworkManager[44885]: <info>  [1760434266.4208] manager: (tap20f15fae-27): new Tun device (/org/freedesktop/NetworkManager/Devices/596)
Oct 14 05:31:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:06.428 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:59:13:28 10.100.0.10'], port_security=['fa:16:3e:59:13:28 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4ff65022-c3e1-4ee6-b866-7892555ef52f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f754bf649a2404fa8dee732f5aab36e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2e0f5391-ee56-46d7-8aa9-9ba00efbe0e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a89aaec7-5335-48cb-8b51-b7dcd7e1d5f5, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=20f15fae-2789-43f5-8ca3-2a412dba5625) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 05:31:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:06.429 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 20f15fae-2789-43f5-8ca3-2a412dba5625 in datapath 6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a bound to our chassis
Oct 14 05:31:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:06.431 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a
Oct 14 05:31:06 np0005486808 ovn_controller[152662]: 2025-10-14T09:31:06Z|01466|binding|INFO|Setting lport 20f15fae-2789-43f5-8ca3-2a412dba5625 up in Southbound
Oct 14 05:31:06 np0005486808 ovn_controller[152662]: 2025-10-14T09:31:06Z|01467|binding|INFO|Setting lport 20f15fae-2789-43f5-8ca3-2a412dba5625 ovn-installed in OVS
Oct 14 05:31:06 np0005486808 nova_compute[259627]: 2025-10-14 09:31:06.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:31:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2339: 305 pgs: 305 active+clean; 213 MiB data, 984 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 3.6 MiB/s wr, 54 op/s
Oct 14 05:31:06 np0005486808 nova_compute[259627]: 2025-10-14 09:31:06.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:31:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:06.449 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[462a7724-a6be-4b3f-b5a2-ad0b25c76546]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:31:06 np0005486808 systemd-udevd[400660]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:31:06 np0005486808 NetworkManager[44885]: <info>  [1760434266.4656] device (tap20f15fae-27): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:31:06 np0005486808 NetworkManager[44885]: <info>  [1760434266.4663] device (tap20f15fae-27): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:31:06 np0005486808 systemd-machined[214636]: New machine qemu-170-instance-0000008a.
Oct 14 05:31:06 np0005486808 systemd[1]: Started Virtual Machine qemu-170-instance-0000008a.
Oct 14 05:31:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:06.492 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5f600944-56e9-4c23-b412-52a98dd6e2aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:31:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:06.495 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[84b0a67a-cb59-45fd-aed8-92083d90b591]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]: {
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:    "0": [
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:        {
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:            "devices": [
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:                "/dev/loop3"
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:            ],
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:            "lv_name": "ceph_lv0",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:            "lv_size": "21470642176",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:            "name": "ceph_lv0",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:            "tags": {
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:                "ceph.cluster_name": "ceph",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:                "ceph.crush_device_class": "",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:                "ceph.encrypted": "0",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:                "ceph.osd_id": "0",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:                "ceph.type": "block",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:                "ceph.vdo": "0"
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:            },
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:            "type": "block",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:            "vg_name": "ceph_vg0"
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:        }
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:    ],
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:    "1": [
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:        {
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:            "devices": [
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:                "/dev/loop4"
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:            ],
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:            "lv_name": "ceph_lv1",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:            "lv_size": "21470642176",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:            "name": "ceph_lv1",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:            "tags": {
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:                "ceph.cluster_name": "ceph",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:                "ceph.crush_device_class": "",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:                "ceph.encrypted": "0",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:                "ceph.osd_id": "1",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:                "ceph.type": "block",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:                "ceph.vdo": "0"
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:            },
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:            "type": "block",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:            "vg_name": "ceph_vg1"
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:        }
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:    ],
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:    "2": [
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:        {
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:            "devices": [
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:                "/dev/loop5"
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:            ],
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:            "lv_name": "ceph_lv2",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:            "lv_size": "21470642176",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:            "name": "ceph_lv2",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:            "tags": {
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:                "ceph.cluster_name": "ceph",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:                "ceph.crush_device_class": "",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:                "ceph.encrypted": "0",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:                "ceph.osd_id": "2",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:                "ceph.type": "block",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:                "ceph.vdo": "0"
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:            },
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:            "type": "block",
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:            "vg_name": "ceph_vg2"
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:        }
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]:    ]
Oct 14 05:31:06 np0005486808 trusting_rhodes[400600]: }
Oct 14 05:31:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:06.523 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[2e1f5623-ca9f-4b5e-aa6c-30caea7136b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:06 np0005486808 systemd[1]: libpod-43e6035e760803e9d64f79cf7869c20169367529265f520481e8b5b82f18abc7.scope: Deactivated successfully.
Oct 14 05:31:06 np0005486808 podman[400583]: 2025-10-14 09:31:06.534458039 +0000 UTC m=+0.986707835 container died 43e6035e760803e9d64f79cf7869c20169367529265f520481e8b5b82f18abc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_rhodes, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 14 05:31:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:06.542 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[647b90d6-5758-45de-8de7-a74b1acc8094]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e05ecd2-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cd:c2:3a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 413], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 801041, 'reachable_time': 33084, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 400673, 'error': None, 'target': 'ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:06 np0005486808 systemd[1]: var-lib-containers-storage-overlay-31bd641945c3846127286ebac3a2cf8830bea2d74d43151e4bed411827651092-merged.mount: Deactivated successfully.
Oct 14 05:31:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:06.565 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bcf024cf-1936-409c-b451-eba4175fe26a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6e05ecd2-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 801053, 'tstamp': 801053}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 400683, 'error': None, 'target': 'ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6e05ecd2-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 801056, 'tstamp': 801056}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 400683, 'error': None, 'target': 'ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:06.566 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e05ecd2-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:31:06 np0005486808 nova_compute[259627]: 2025-10-14 09:31:06.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:06.570 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e05ecd2-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:31:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:06.570 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:31:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:06.571 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6e05ecd2-80, col_values=(('external_ids', {'iface-id': '4697e43c-b02d-4f27-aea8-a54cad6fa2da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:31:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:06.571 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:31:06 np0005486808 podman[400583]: 2025-10-14 09:31:06.588578337 +0000 UTC m=+1.040828113 container remove 43e6035e760803e9d64f79cf7869c20169367529265f520481e8b5b82f18abc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_rhodes, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 14 05:31:06 np0005486808 systemd[1]: libpod-conmon-43e6035e760803e9d64f79cf7869c20169367529265f520481e8b5b82f18abc7.scope: Deactivated successfully.
Oct 14 05:31:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:07.047 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:31:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:07.047 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:31:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:07.048 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:31:07 np0005486808 podman[400871]: 2025-10-14 09:31:07.253378102 +0000 UTC m=+0.063850538 container create be7c03cd729dfad4bc331d6d9c93b9182a04daaac7b5a839b94629424bb61758 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 05:31:07 np0005486808 systemd[1]: Started libpod-conmon-be7c03cd729dfad4bc331d6d9c93b9182a04daaac7b5a839b94629424bb61758.scope.
Oct 14 05:31:07 np0005486808 podman[400871]: 2025-10-14 09:31:07.2264126 +0000 UTC m=+0.036885096 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:31:07 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:31:07 np0005486808 podman[400871]: 2025-10-14 09:31:07.338627824 +0000 UTC m=+0.149100330 container init be7c03cd729dfad4bc331d6d9c93b9182a04daaac7b5a839b94629424bb61758 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 14 05:31:07 np0005486808 podman[400871]: 2025-10-14 09:31:07.352569796 +0000 UTC m=+0.163042252 container start be7c03cd729dfad4bc331d6d9c93b9182a04daaac7b5a839b94629424bb61758 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_grothendieck, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:31:07 np0005486808 podman[400871]: 2025-10-14 09:31:07.35680535 +0000 UTC m=+0.167277876 container attach be7c03cd729dfad4bc331d6d9c93b9182a04daaac7b5a839b94629424bb61758 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 14 05:31:07 np0005486808 optimistic_grothendieck[400887]: 167 167
Oct 14 05:31:07 np0005486808 systemd[1]: libpod-be7c03cd729dfad4bc331d6d9c93b9182a04daaac7b5a839b94629424bb61758.scope: Deactivated successfully.
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.413 2 DEBUG nova.network.neutron [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Updating instance_info_cache with network_info: [{"id": "f2639397-8fb2-4541-a298-fd68219e1e47", "address": "fa:16:3e:a4:1e:d0", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2639397-8f", "ovs_interfaceid": "f2639397-8fb2-4541-a298-fd68219e1e47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "address": "fa:16:3e:cf:84:75", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fecf:8475", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc22b3ec2-f5", "ovs_interfaceid": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:31:07 np0005486808 podman[400892]: 2025-10-14 09:31:07.428668413 +0000 UTC m=+0.044840811 container died be7c03cd729dfad4bc331d6d9c93b9182a04daaac7b5a839b94629424bb61758 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.437 2 DEBUG oslo_concurrency.lockutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Releasing lock "refresh_cache-b30a994a-5fb7-4344-9944-98d3d75d3b04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.438 2 DEBUG nova.compute.manager [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Instance network_info: |[{"id": "f2639397-8fb2-4541-a298-fd68219e1e47", "address": "fa:16:3e:a4:1e:d0", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2639397-8f", "ovs_interfaceid": "f2639397-8fb2-4541-a298-fd68219e1e47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "address": "fa:16:3e:cf:84:75", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fecf:8475", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc22b3ec2-f5", "ovs_interfaceid": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.439 2 DEBUG oslo_concurrency.lockutils [req-c13e2a66-d859-40f3-9f89-f453f9a07763 req-c1e16193-5d7a-4df3-9a79-2579d09dca62 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-b30a994a-5fb7-4344-9944-98d3d75d3b04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.440 2 DEBUG nova.network.neutron [req-c13e2a66-d859-40f3-9f89-f453f9a07763 req-c1e16193-5d7a-4df3-9a79-2579d09dca62 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Refreshing network info cache for port c22b3ec2-f5a1-4c97-8648-a463e9e12545 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.447 2 DEBUG nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Start _get_guest_xml network_info=[{"id": "f2639397-8fb2-4541-a298-fd68219e1e47", "address": "fa:16:3e:a4:1e:d0", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2639397-8f", "ovs_interfaceid": "f2639397-8fb2-4541-a298-fd68219e1e47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "address": "fa:16:3e:cf:84:75", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fecf:8475", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc22b3ec2-f5", "ovs_interfaceid": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.454 2 WARNING nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:31:07 np0005486808 systemd[1]: var-lib-containers-storage-overlay-b2e2815a89b83457a3087176cc1d5ba8533ddeb1506f0a6ac1a70785f78eb1e4-merged.mount: Deactivated successfully.
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.466 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434267.4642997, 4ff65022-c3e1-4ee6-b866-7892555ef52f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.466 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] VM Started (Lifecycle Event)#033[00m
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.474 2 DEBUG nova.virt.libvirt.host [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:31:07 np0005486808 podman[400892]: 2025-10-14 09:31:07.47456862 +0000 UTC m=+0.090740938 container remove be7c03cd729dfad4bc331d6d9c93b9182a04daaac7b5a839b94629424bb61758 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.476 2 DEBUG nova.virt.libvirt.host [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.480 2 DEBUG nova.virt.libvirt.host [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.480 2 DEBUG nova.virt.libvirt.host [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.481 2 DEBUG nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.481 2 DEBUG nova.virt.hardware [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.481 2 DEBUG nova.virt.hardware [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.482 2 DEBUG nova.virt.hardware [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.482 2 DEBUG nova.virt.hardware [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.482 2 DEBUG nova.virt.hardware [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:31:07 np0005486808 systemd[1]: libpod-conmon-be7c03cd729dfad4bc331d6d9c93b9182a04daaac7b5a839b94629424bb61758.scope: Deactivated successfully.
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.482 2 DEBUG nova.virt.hardware [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.484 2 DEBUG nova.virt.hardware [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.484 2 DEBUG nova.virt.hardware [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.485 2 DEBUG nova.virt.hardware [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.485 2 DEBUG nova.virt.hardware [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.485 2 DEBUG nova.virt.hardware [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.489 2 DEBUG oslo_concurrency.processutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.534 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.538 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434267.466683, 4ff65022-c3e1-4ee6-b866-7892555ef52f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.538 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.617 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.624 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.652 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:31:07 np0005486808 podman[400934]: 2025-10-14 09:31:07.771790964 +0000 UTC m=+0.085695134 container create f04e1d883f6341f4be6a13cd99518fc6ff48e6ef758030c88977424717e78b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 14 05:31:07 np0005486808 podman[400934]: 2025-10-14 09:31:07.744001612 +0000 UTC m=+0.057905772 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:31:07 np0005486808 systemd[1]: Started libpod-conmon-f04e1d883f6341f4be6a13cd99518fc6ff48e6ef758030c88977424717e78b2f.scope.
Oct 14 05:31:07 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:31:07 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/734ff42ea0cf2cf98385b44a8ef91d5c7a49a895419dcee04578bcfa4fd0b106/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:31:07 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/734ff42ea0cf2cf98385b44a8ef91d5c7a49a895419dcee04578bcfa4fd0b106/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:31:07 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/734ff42ea0cf2cf98385b44a8ef91d5c7a49a895419dcee04578bcfa4fd0b106/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:31:07 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/734ff42ea0cf2cf98385b44a8ef91d5c7a49a895419dcee04578bcfa4fd0b106/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.879 2 DEBUG nova.compute.manager [req-737e3a1b-388b-4730-b890-2dc339083446 req-300a7c15-41fc-46a8-87fc-bb5e7ee068d9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Received event network-vif-plugged-20f15fae-2789-43f5-8ca3-2a412dba5625 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.880 2 DEBUG oslo_concurrency.lockutils [req-737e3a1b-388b-4730-b890-2dc339083446 req-300a7c15-41fc-46a8-87fc-bb5e7ee068d9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4ff65022-c3e1-4ee6-b866-7892555ef52f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.881 2 DEBUG oslo_concurrency.lockutils [req-737e3a1b-388b-4730-b890-2dc339083446 req-300a7c15-41fc-46a8-87fc-bb5e7ee068d9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4ff65022-c3e1-4ee6-b866-7892555ef52f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.881 2 DEBUG oslo_concurrency.lockutils [req-737e3a1b-388b-4730-b890-2dc339083446 req-300a7c15-41fc-46a8-87fc-bb5e7ee068d9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4ff65022-c3e1-4ee6-b866-7892555ef52f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.882 2 DEBUG nova.compute.manager [req-737e3a1b-388b-4730-b890-2dc339083446 req-300a7c15-41fc-46a8-87fc-bb5e7ee068d9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Processing event network-vif-plugged-20f15fae-2789-43f5-8ca3-2a412dba5625 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.883 2 DEBUG nova.compute.manager [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:31:07 np0005486808 podman[400934]: 2025-10-14 09:31:07.884020838 +0000 UTC m=+0.197925058 container init f04e1d883f6341f4be6a13cd99518fc6ff48e6ef758030c88977424717e78b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_gould, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.889 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434267.8887336, 4ff65022-c3e1-4ee6-b866-7892555ef52f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.889 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:31:07 np0005486808 podman[400934]: 2025-10-14 09:31:07.893670385 +0000 UTC m=+0.207574515 container start f04e1d883f6341f4be6a13cd99518fc6ff48e6ef758030c88977424717e78b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_gould, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.895 2 DEBUG nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:31:07 np0005486808 podman[400934]: 2025-10-14 09:31:07.897581861 +0000 UTC m=+0.211486031 container attach f04e1d883f6341f4be6a13cd99518fc6ff48e6ef758030c88977424717e78b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.900 2 INFO nova.virt.libvirt.driver [-] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Instance spawned successfully.#033[00m
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.902 2 DEBUG nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.910 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.930 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.939 2 DEBUG nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.940 2 DEBUG nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.940 2 DEBUG nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.941 2 DEBUG nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.942 2 DEBUG nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.942 2 DEBUG nova.virt.libvirt.driver [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:31:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:31:07 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2239780367' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.966 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:31:07 np0005486808 nova_compute[259627]: 2025-10-14 09:31:07.982 2 DEBUG oslo_concurrency.processutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.008 2 DEBUG nova.storage.rbd_utils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image b30a994a-5fb7-4344-9944-98d3d75d3b04_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.013 2 DEBUG oslo_concurrency.processutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:31:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.065 2 INFO nova.compute.manager [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Took 8.64 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.065 2 DEBUG nova.compute.manager [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.124 2 INFO nova.compute.manager [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Took 10.52 seconds to build instance.#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.141 2 DEBUG oslo_concurrency.lockutils [None req-55c9865b-71f4-4ed6-b07a-c38f463139fd 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4ff65022-c3e1-4ee6-b866-7892555ef52f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:31:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2340: 305 pgs: 305 active+clean; 213 MiB data, 984 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 14 05:31:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:31:08 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3826269457' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.469 2 DEBUG oslo_concurrency.processutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.470 2 DEBUG nova.virt.libvirt.vif [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:30:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1494982542',display_name='tempest-TestGettingAddress-server-1494982542',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1494982542',id=137,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLHJwBUwmrZMAxk7lK/S4BnH+a8XMSkpb7gUisosnWBPkykNEvL6phjMEnMU3xwMJcA4TQroXcapIfYrYDD73Y5duTeAut+KJCUs4Rpj2NKQqK/dR+iKVtPcUme/LdCECA==',key_name='tempest-TestGettingAddress-415132848',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-euyh2n7t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:30:58Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=b30a994a-5fb7-4344-9944-98d3d75d3b04,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f2639397-8fb2-4541-a298-fd68219e1e47", "address": "fa:16:3e:a4:1e:d0", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2639397-8f", "ovs_interfaceid": "f2639397-8fb2-4541-a298-fd68219e1e47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.471 2 DEBUG nova.network.os_vif_util [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "f2639397-8fb2-4541-a298-fd68219e1e47", "address": "fa:16:3e:a4:1e:d0", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2639397-8f", "ovs_interfaceid": "f2639397-8fb2-4541-a298-fd68219e1e47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.472 2 DEBUG nova.network.os_vif_util [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:1e:d0,bridge_name='br-int',has_traffic_filtering=True,id=f2639397-8fb2-4541-a298-fd68219e1e47,network=Network(20dd724c-9d71-4931-8e8b-4dd3fbbacc17),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2639397-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.473 2 DEBUG nova.virt.libvirt.vif [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:30:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1494982542',display_name='tempest-TestGettingAddress-server-1494982542',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1494982542',id=137,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLHJwBUwmrZMAxk7lK/S4BnH+a8XMSkpb7gUisosnWBPkykNEvL6phjMEnMU3xwMJcA4TQroXcapIfYrYDD73Y5duTeAut+KJCUs4Rpj2NKQqK/dR+iKVtPcUme/LdCECA==',key_name='tempest-TestGettingAddress-415132848',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-euyh2n7t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:30:58Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=b30a994a-5fb7-4344-9944-98d3d75d3b04,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "address": "fa:16:3e:cf:84:75", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fecf:8475", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc22b3ec2-f5", "ovs_interfaceid": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.473 2 DEBUG nova.network.os_vif_util [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "address": "fa:16:3e:cf:84:75", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fecf:8475", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc22b3ec2-f5", "ovs_interfaceid": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.474 2 DEBUG nova.network.os_vif_util [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:84:75,bridge_name='br-int',has_traffic_filtering=True,id=c22b3ec2-f5a1-4c97-8648-a463e9e12545,network=Network(1dc63515-48cf-4886-956d-024d1d9cb848),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc22b3ec2-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.475 2 DEBUG nova.objects.instance [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'pci_devices' on Instance uuid b30a994a-5fb7-4344-9944-98d3d75d3b04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.490 2 DEBUG nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:31:08 np0005486808 nova_compute[259627]:  <uuid>b30a994a-5fb7-4344-9944-98d3d75d3b04</uuid>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:  <name>instance-00000089</name>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:31:08 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:      <nova:name>tempest-TestGettingAddress-server-1494982542</nova:name>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:31:07</nova:creationTime>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:31:08 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:        <nova:user uuid="30cd4a7832e74a8ba07238937405c4ea">tempest-TestGettingAddress-262346303-project-member</nova:user>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:        <nova:project uuid="878ad3f42dd94bef8cc66ab3d67f03bf">tempest-TestGettingAddress-262346303</nova:project>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:        <nova:port uuid="f2639397-8fb2-4541-a298-fd68219e1e47">
Oct 14 05:31:08 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:        <nova:port uuid="c22b3ec2-f5a1-4c97-8648-a463e9e12545">
Oct 14 05:31:08 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fecf:8475" ipVersion="6"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:      <entry name="serial">b30a994a-5fb7-4344-9944-98d3d75d3b04</entry>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:      <entry name="uuid">b30a994a-5fb7-4344-9944-98d3d75d3b04</entry>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:31:08 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/b30a994a-5fb7-4344-9944-98d3d75d3b04_disk">
Oct 14 05:31:08 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:31:08 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:31:08 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/b30a994a-5fb7-4344-9944-98d3d75d3b04_disk.config">
Oct 14 05:31:08 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:31:08 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:31:08 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:a4:1e:d0"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:      <target dev="tapf2639397-8f"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:31:08 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:cf:84:75"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:      <target dev="tapc22b3ec2-f5"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:31:08 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/b30a994a-5fb7-4344-9944-98d3d75d3b04/console.log" append="off"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:31:08 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:31:08 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:31:08 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:31:08 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:31:08 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.492 2 DEBUG nova.compute.manager [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Preparing to wait for external event network-vif-plugged-f2639397-8fb2-4541-a298-fd68219e1e47 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.492 2 DEBUG oslo_concurrency.lockutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.493 2 DEBUG oslo_concurrency.lockutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.493 2 DEBUG oslo_concurrency.lockutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.493 2 DEBUG nova.compute.manager [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Preparing to wait for external event network-vif-plugged-c22b3ec2-f5a1-4c97-8648-a463e9e12545 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.494 2 DEBUG oslo_concurrency.lockutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.494 2 DEBUG oslo_concurrency.lockutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.494 2 DEBUG oslo_concurrency.lockutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.495 2 DEBUG nova.virt.libvirt.vif [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:30:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1494982542',display_name='tempest-TestGettingAddress-server-1494982542',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1494982542',id=137,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLHJwBUwmrZMAxk7lK/S4BnH+a8XMSkpb7gUisosnWBPkykNEvL6phjMEnMU3xwMJcA4TQroXcapIfYrYDD73Y5duTeAut+KJCUs4Rpj2NKQqK/dR+iKVtPcUme/LdCECA==',key_name='tempest-TestGettingAddress-415132848',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-euyh2n7t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:30:58Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=b30a994a-5fb7-4344-9944-98d3d75d3b04,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f2639397-8fb2-4541-a298-fd68219e1e47", "address": "fa:16:3e:a4:1e:d0", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2639397-8f", "ovs_interfaceid": "f2639397-8fb2-4541-a298-fd68219e1e47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.495 2 DEBUG nova.network.os_vif_util [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "f2639397-8fb2-4541-a298-fd68219e1e47", "address": "fa:16:3e:a4:1e:d0", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2639397-8f", "ovs_interfaceid": "f2639397-8fb2-4541-a298-fd68219e1e47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.496 2 DEBUG nova.network.os_vif_util [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:1e:d0,bridge_name='br-int',has_traffic_filtering=True,id=f2639397-8fb2-4541-a298-fd68219e1e47,network=Network(20dd724c-9d71-4931-8e8b-4dd3fbbacc17),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2639397-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.496 2 DEBUG os_vif [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:1e:d0,bridge_name='br-int',has_traffic_filtering=True,id=f2639397-8fb2-4541-a298-fd68219e1e47,network=Network(20dd724c-9d71-4931-8e8b-4dd3fbbacc17),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2639397-8f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.498 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.498 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.502 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf2639397-8f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.503 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf2639397-8f, col_values=(('external_ids', {'iface-id': 'f2639397-8fb2-4541-a298-fd68219e1e47', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a4:1e:d0', 'vm-uuid': 'b30a994a-5fb7-4344-9944-98d3d75d3b04'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:08 np0005486808 NetworkManager[44885]: <info>  [1760434268.5055] manager: (tapf2639397-8f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/597)
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.515 2 INFO os_vif [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:1e:d0,bridge_name='br-int',has_traffic_filtering=True,id=f2639397-8fb2-4541-a298-fd68219e1e47,network=Network(20dd724c-9d71-4931-8e8b-4dd3fbbacc17),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2639397-8f')#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.516 2 DEBUG nova.virt.libvirt.vif [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:30:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1494982542',display_name='tempest-TestGettingAddress-server-1494982542',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1494982542',id=137,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLHJwBUwmrZMAxk7lK/S4BnH+a8XMSkpb7gUisosnWBPkykNEvL6phjMEnMU3xwMJcA4TQroXcapIfYrYDD73Y5duTeAut+KJCUs4Rpj2NKQqK/dR+iKVtPcUme/LdCECA==',key_name='tempest-TestGettingAddress-415132848',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-euyh2n7t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:30:58Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=b30a994a-5fb7-4344-9944-98d3d75d3b04,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "address": "fa:16:3e:cf:84:75", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fecf:8475", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc22b3ec2-f5", "ovs_interfaceid": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.516 2 DEBUG nova.network.os_vif_util [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "address": "fa:16:3e:cf:84:75", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fecf:8475", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc22b3ec2-f5", "ovs_interfaceid": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.517 2 DEBUG nova.network.os_vif_util [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:84:75,bridge_name='br-int',has_traffic_filtering=True,id=c22b3ec2-f5a1-4c97-8648-a463e9e12545,network=Network(1dc63515-48cf-4886-956d-024d1d9cb848),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc22b3ec2-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.518 2 DEBUG os_vif [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:84:75,bridge_name='br-int',has_traffic_filtering=True,id=c22b3ec2-f5a1-4c97-8648-a463e9e12545,network=Network(1dc63515-48cf-4886-956d-024d1d9cb848),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc22b3ec2-f5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.518 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.519 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.522 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc22b3ec2-f5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.523 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc22b3ec2-f5, col_values=(('external_ids', {'iface-id': 'c22b3ec2-f5a1-4c97-8648-a463e9e12545', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cf:84:75', 'vm-uuid': 'b30a994a-5fb7-4344-9944-98d3d75d3b04'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:08 np0005486808 NetworkManager[44885]: <info>  [1760434268.5254] manager: (tapc22b3ec2-f5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/598)
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.533 2 INFO os_vif [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:84:75,bridge_name='br-int',has_traffic_filtering=True,id=c22b3ec2-f5a1-4c97-8648-a463e9e12545,network=Network(1dc63515-48cf-4886-956d-024d1d9cb848),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc22b3ec2-f5')#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.602 2 DEBUG nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.603 2 DEBUG nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.603 2 DEBUG nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:a4:1e:d0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.603 2 DEBUG nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:cf:84:75, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.604 2 INFO nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Using config drive#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.635 2 DEBUG nova.storage.rbd_utils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image b30a994a-5fb7-4344-9944-98d3d75d3b04_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:31:08 np0005486808 naughty_gould[400951]: {
Oct 14 05:31:08 np0005486808 naughty_gould[400951]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:31:08 np0005486808 naughty_gould[400951]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:31:08 np0005486808 naughty_gould[400951]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:31:08 np0005486808 naughty_gould[400951]:        "osd_id": 2,
Oct 14 05:31:08 np0005486808 naughty_gould[400951]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:31:08 np0005486808 naughty_gould[400951]:        "type": "bluestore"
Oct 14 05:31:08 np0005486808 naughty_gould[400951]:    },
Oct 14 05:31:08 np0005486808 naughty_gould[400951]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:31:08 np0005486808 naughty_gould[400951]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:31:08 np0005486808 naughty_gould[400951]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:31:08 np0005486808 naughty_gould[400951]:        "osd_id": 1,
Oct 14 05:31:08 np0005486808 naughty_gould[400951]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:31:08 np0005486808 naughty_gould[400951]:        "type": "bluestore"
Oct 14 05:31:08 np0005486808 naughty_gould[400951]:    },
Oct 14 05:31:08 np0005486808 naughty_gould[400951]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:31:08 np0005486808 naughty_gould[400951]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:31:08 np0005486808 naughty_gould[400951]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:31:08 np0005486808 naughty_gould[400951]:        "osd_id": 0,
Oct 14 05:31:08 np0005486808 naughty_gould[400951]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:31:08 np0005486808 naughty_gould[400951]:        "type": "bluestore"
Oct 14 05:31:08 np0005486808 naughty_gould[400951]:    }
Oct 14 05:31:08 np0005486808 naughty_gould[400951]: }
Oct 14 05:31:08 np0005486808 systemd[1]: libpod-f04e1d883f6341f4be6a13cd99518fc6ff48e6ef758030c88977424717e78b2f.scope: Deactivated successfully.
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.873 2 DEBUG nova.network.neutron [req-c13e2a66-d859-40f3-9f89-f453f9a07763 req-c1e16193-5d7a-4df3-9a79-2579d09dca62 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Updated VIF entry in instance network info cache for port c22b3ec2-f5a1-4c97-8648-a463e9e12545. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.874 2 DEBUG nova.network.neutron [req-c13e2a66-d859-40f3-9f89-f453f9a07763 req-c1e16193-5d7a-4df3-9a79-2579d09dca62 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Updating instance_info_cache with network_info: [{"id": "f2639397-8fb2-4541-a298-fd68219e1e47", "address": "fa:16:3e:a4:1e:d0", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2639397-8f", "ovs_interfaceid": "f2639397-8fb2-4541-a298-fd68219e1e47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "address": "fa:16:3e:cf:84:75", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fecf:8475", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc22b3ec2-f5", "ovs_interfaceid": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:31:08 np0005486808 podman[401049]: 2025-10-14 09:31:08.883932936 +0000 UTC m=+0.029460884 container died f04e1d883f6341f4be6a13cd99518fc6ff48e6ef758030c88977424717e78b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.892 2 DEBUG oslo_concurrency.lockutils [req-c13e2a66-d859-40f3-9f89-f453f9a07763 req-c1e16193-5d7a-4df3-9a79-2579d09dca62 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-b30a994a-5fb7-4344-9944-98d3d75d3b04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:31:08 np0005486808 systemd[1]: var-lib-containers-storage-overlay-734ff42ea0cf2cf98385b44a8ef91d5c7a49a895419dcee04578bcfa4fd0b106-merged.mount: Deactivated successfully.
Oct 14 05:31:08 np0005486808 podman[401049]: 2025-10-14 09:31:08.960753781 +0000 UTC m=+0.106281729 container remove f04e1d883f6341f4be6a13cd99518fc6ff48e6ef758030c88977424717e78b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:31:08 np0005486808 systemd[1]: libpod-conmon-f04e1d883f6341f4be6a13cd99518fc6ff48e6ef758030c88977424717e78b2f.scope: Deactivated successfully.
Oct 14 05:31:08 np0005486808 nova_compute[259627]: 2025-10-14 09:31:08.997 2 INFO nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Creating config drive at /var/lib/nova/instances/b30a994a-5fb7-4344-9944-98d3d75d3b04/disk.config#033[00m
Oct 14 05:31:09 np0005486808 nova_compute[259627]: 2025-10-14 09:31:09.004 2 DEBUG oslo_concurrency.processutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b30a994a-5fb7-4344-9944-98d3d75d3b04/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0xyf95q_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:31:09 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:31:09 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:31:09 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:31:09 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:31:09 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 661d9b53-51ab-4eae-b45d-139a965febd8 does not exist
Oct 14 05:31:09 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev a1c03be6-192b-4ca1-979d-11d710b6f3c3 does not exist
Oct 14 05:31:09 np0005486808 nova_compute[259627]: 2025-10-14 09:31:09.154 2 DEBUG oslo_concurrency.processutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b30a994a-5fb7-4344-9944-98d3d75d3b04/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0xyf95q_" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:31:09 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:31:09 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:31:09 np0005486808 nova_compute[259627]: 2025-10-14 09:31:09.189 2 DEBUG nova.storage.rbd_utils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image b30a994a-5fb7-4344-9944-98d3d75d3b04_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:31:09 np0005486808 nova_compute[259627]: 2025-10-14 09:31:09.193 2 DEBUG oslo_concurrency.processutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b30a994a-5fb7-4344-9944-98d3d75d3b04/disk.config b30a994a-5fb7-4344-9944-98d3d75d3b04_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:31:09 np0005486808 nova_compute[259627]: 2025-10-14 09:31:09.377 2 DEBUG oslo_concurrency.processutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b30a994a-5fb7-4344-9944-98d3d75d3b04/disk.config b30a994a-5fb7-4344-9944-98d3d75d3b04_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.184s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:31:09 np0005486808 nova_compute[259627]: 2025-10-14 09:31:09.378 2 INFO nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Deleting local config drive /var/lib/nova/instances/b30a994a-5fb7-4344-9944-98d3d75d3b04/disk.config because it was imported into RBD.#033[00m
Oct 14 05:31:09 np0005486808 kernel: tapf2639397-8f: entered promiscuous mode
Oct 14 05:31:09 np0005486808 NetworkManager[44885]: <info>  [1760434269.4290] manager: (tapf2639397-8f): new Tun device (/org/freedesktop/NetworkManager/Devices/599)
Oct 14 05:31:09 np0005486808 ovn_controller[152662]: 2025-10-14T09:31:09Z|01468|binding|INFO|Claiming lport f2639397-8fb2-4541-a298-fd68219e1e47 for this chassis.
Oct 14 05:31:09 np0005486808 ovn_controller[152662]: 2025-10-14T09:31:09Z|01469|binding|INFO|f2639397-8fb2-4541-a298-fd68219e1e47: Claiming fa:16:3e:a4:1e:d0 10.100.0.12
Oct 14 05:31:09 np0005486808 nova_compute[259627]: 2025-10-14 09:31:09.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:09 np0005486808 NetworkManager[44885]: <info>  [1760434269.4426] manager: (tapc22b3ec2-f5): new Tun device (/org/freedesktop/NetworkManager/Devices/600)
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.446 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:1e:d0 10.100.0.12'], port_security=['fa:16:3e:a4:1e:d0 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b30a994a-5fb7-4344-9944-98d3d75d3b04', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-20dd724c-9d71-4931-8e8b-4dd3fbbacc17', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c995d506-6f6f-4226-92d2-2bf5b9a23d7a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5f55c896-9e6e-44ae-a4b7-c1c60b86826e, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=f2639397-8fb2-4541-a298-fd68219e1e47) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.447 162547 INFO neutron.agent.ovn.metadata.agent [-] Port f2639397-8fb2-4541-a298-fd68219e1e47 in datapath 20dd724c-9d71-4931-8e8b-4dd3fbbacc17 bound to our chassis#033[00m
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.448 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 20dd724c-9d71-4931-8e8b-4dd3fbbacc17#033[00m
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.459 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f205dda6-ed14-4b07-93be-9f17829d6aee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.460 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap20dd724c-91 in ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.461 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap20dd724c-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.461 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ac939309-e8cf-4a35-ba7c-349a5f0e4008]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.462 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d93e94ac-eddb-48e9-8d54-e5a2aacfcbb8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:09 np0005486808 kernel: tapc22b3ec2-f5: entered promiscuous mode
Oct 14 05:31:09 np0005486808 nova_compute[259627]: 2025-10-14 09:31:09.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:09 np0005486808 ovn_controller[152662]: 2025-10-14T09:31:09Z|01470|binding|INFO|Setting lport f2639397-8fb2-4541-a298-fd68219e1e47 ovn-installed in OVS
Oct 14 05:31:09 np0005486808 ovn_controller[152662]: 2025-10-14T09:31:09Z|01471|binding|INFO|Setting lport f2639397-8fb2-4541-a298-fd68219e1e47 up in Southbound
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.482 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[acd45070-14b6-4907-bfc1-2ece6bc89aa8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:09 np0005486808 ovn_controller[152662]: 2025-10-14T09:31:09Z|01472|if_status|INFO|Not updating pb chassis for c22b3ec2-f5a1-4c97-8648-a463e9e12545 now as sb is readonly
Oct 14 05:31:09 np0005486808 ovn_controller[152662]: 2025-10-14T09:31:09Z|01473|binding|INFO|Claiming lport c22b3ec2-f5a1-4c97-8648-a463e9e12545 for this chassis.
Oct 14 05:31:09 np0005486808 ovn_controller[152662]: 2025-10-14T09:31:09Z|01474|binding|INFO|c22b3ec2-f5a1-4c97-8648-a463e9e12545: Claiming fa:16:3e:cf:84:75 2001:db8::f816:3eff:fecf:8475
Oct 14 05:31:09 np0005486808 systemd-machined[214636]: New machine qemu-171-instance-00000089.
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.493 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:84:75 2001:db8::f816:3eff:fecf:8475'], port_security=['fa:16:3e:cf:84:75 2001:db8::f816:3eff:fecf:8475'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fecf:8475/64', 'neutron:device_id': 'b30a994a-5fb7-4344-9944-98d3d75d3b04', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1dc63515-48cf-4886-956d-024d1d9cb848', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c995d506-6f6f-4226-92d2-2bf5b9a23d7a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=29b23756-77cb-4493-bd13-0170877e81b9, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=c22b3ec2-f5a1-4c97-8648-a463e9e12545) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:31:09 np0005486808 systemd[1]: Started Virtual Machine qemu-171-instance-00000089.
Oct 14 05:31:09 np0005486808 ovn_controller[152662]: 2025-10-14T09:31:09Z|01475|binding|INFO|Setting lport c22b3ec2-f5a1-4c97-8648-a463e9e12545 ovn-installed in OVS
Oct 14 05:31:09 np0005486808 ovn_controller[152662]: 2025-10-14T09:31:09Z|01476|binding|INFO|Setting lport c22b3ec2-f5a1-4c97-8648-a463e9e12545 up in Southbound
Oct 14 05:31:09 np0005486808 nova_compute[259627]: 2025-10-14 09:31:09.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.508 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c55d9127-9bc2-4712-a156-e26d3d31c96e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:09 np0005486808 systemd-udevd[401175]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:31:09 np0005486808 systemd-udevd[401176]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.544 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ea3ee912-0d40-4ec2-ba8b-1abf61efd42b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:09 np0005486808 NetworkManager[44885]: <info>  [1760434269.5480] device (tapf2639397-8f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:31:09 np0005486808 NetworkManager[44885]: <info>  [1760434269.5488] device (tapc22b3ec2-f5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:31:09 np0005486808 NetworkManager[44885]: <info>  [1760434269.5496] device (tapf2639397-8f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:31:09 np0005486808 NetworkManager[44885]: <info>  [1760434269.5500] device (tapc22b3ec2-f5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:31:09 np0005486808 NetworkManager[44885]: <info>  [1760434269.5508] manager: (tap20dd724c-90): new Veth device (/org/freedesktop/NetworkManager/Devices/601)
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.550 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0833f736-8a71-4721-923c-4a9da1020c85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.591 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a84ef928-a20a-4a74-a66d-9b83f07cec69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.594 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3a251a86-4e47-42d3-9a7e-6b52ffa6f2f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:09 np0005486808 NetworkManager[44885]: <info>  [1760434269.6134] device (tap20dd724c-90): carrier: link connected
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.618 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8370e306-ef4d-413b-9cb3-4e19de94b2f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.637 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[082e0782-eef9-409d-a09c-7e7f4bb90758]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap20dd724c-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:07:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 417], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 804837, 'reachable_time': 24485, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 401204, 'error': None, 'target': 'ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.660 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[125a53c7-45dd-497c-9a88-45282426247a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe53:7a8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 804837, 'tstamp': 804837}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 401205, 'error': None, 'target': 'ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.683 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8e795851-a8b8-4b18-bd25-f8f71fa093d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap20dd724c-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:07:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 417], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 804837, 'reachable_time': 24485, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 401206, 'error': None, 'target': 'ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.727 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e31a3a2f-90b5-4217-96cb-e689edfac604]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.805 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9bd31d99-fe3b-4f99-a1a7-e51831791438]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.808 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20dd724c-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.808 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.809 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap20dd724c-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:31:09 np0005486808 NetworkManager[44885]: <info>  [1760434269.8127] manager: (tap20dd724c-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/602)
Oct 14 05:31:09 np0005486808 kernel: tap20dd724c-90: entered promiscuous mode
Oct 14 05:31:09 np0005486808 nova_compute[259627]: 2025-10-14 09:31:09.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.817 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap20dd724c-90, col_values=(('external_ids', {'iface-id': '1308be16-f790-4063-acf0-2c8f6fdde665'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:31:09 np0005486808 ovn_controller[152662]: 2025-10-14T09:31:09Z|01477|binding|INFO|Releasing lport 1308be16-f790-4063-acf0-2c8f6fdde665 from this chassis (sb_readonly=0)
Oct 14 05:31:09 np0005486808 nova_compute[259627]: 2025-10-14 09:31:09.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.852 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/20dd724c-9d71-4931-8e8b-4dd3fbbacc17.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/20dd724c-9d71-4931-8e8b-4dd3fbbacc17.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.853 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2a075f4a-12c3-45c6-bae1-d4e950a5a919]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.854 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-20dd724c-9d71-4931-8e8b-4dd3fbbacc17
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/20dd724c-9d71-4931-8e8b-4dd3fbbacc17.pid.haproxy
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 20dd724c-9d71-4931-8e8b-4dd3fbbacc17
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:31:09 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:09.855 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17', 'env', 'PROCESS_TAG=haproxy-20dd724c-9d71-4931-8e8b-4dd3fbbacc17', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/20dd724c-9d71-4931-8e8b-4dd3fbbacc17.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:31:09 np0005486808 nova_compute[259627]: 2025-10-14 09:31:09.880 2 DEBUG nova.compute.manager [req-f40ce521-365a-45e3-90c9-991255452a8d req-3ac7c6aa-fde5-41da-be65-bb422dfa0f4f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Received event network-vif-plugged-f2639397-8fb2-4541-a298-fd68219e1e47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:31:09 np0005486808 nova_compute[259627]: 2025-10-14 09:31:09.881 2 DEBUG oslo_concurrency.lockutils [req-f40ce521-365a-45e3-90c9-991255452a8d req-3ac7c6aa-fde5-41da-be65-bb422dfa0f4f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:31:09 np0005486808 nova_compute[259627]: 2025-10-14 09:31:09.881 2 DEBUG oslo_concurrency.lockutils [req-f40ce521-365a-45e3-90c9-991255452a8d req-3ac7c6aa-fde5-41da-be65-bb422dfa0f4f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:31:09 np0005486808 nova_compute[259627]: 2025-10-14 09:31:09.881 2 DEBUG oslo_concurrency.lockutils [req-f40ce521-365a-45e3-90c9-991255452a8d req-3ac7c6aa-fde5-41da-be65-bb422dfa0f4f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:31:09 np0005486808 nova_compute[259627]: 2025-10-14 09:31:09.882 2 DEBUG nova.compute.manager [req-f40ce521-365a-45e3-90c9-991255452a8d req-3ac7c6aa-fde5-41da-be65-bb422dfa0f4f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Processing event network-vif-plugged-f2639397-8fb2-4541-a298-fd68219e1e47 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:31:09 np0005486808 nova_compute[259627]: 2025-10-14 09:31:09.974 2 DEBUG nova.compute.manager [req-6edc8aaa-bf25-424e-ac07-0a1e0d821454 req-87b1ad0e-f48f-4912-9cbb-a7745c8688a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Received event network-vif-plugged-20f15fae-2789-43f5-8ca3-2a412dba5625 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:31:09 np0005486808 nova_compute[259627]: 2025-10-14 09:31:09.975 2 DEBUG oslo_concurrency.lockutils [req-6edc8aaa-bf25-424e-ac07-0a1e0d821454 req-87b1ad0e-f48f-4912-9cbb-a7745c8688a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4ff65022-c3e1-4ee6-b866-7892555ef52f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:31:09 np0005486808 nova_compute[259627]: 2025-10-14 09:31:09.975 2 DEBUG oslo_concurrency.lockutils [req-6edc8aaa-bf25-424e-ac07-0a1e0d821454 req-87b1ad0e-f48f-4912-9cbb-a7745c8688a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4ff65022-c3e1-4ee6-b866-7892555ef52f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:31:09 np0005486808 nova_compute[259627]: 2025-10-14 09:31:09.976 2 DEBUG oslo_concurrency.lockutils [req-6edc8aaa-bf25-424e-ac07-0a1e0d821454 req-87b1ad0e-f48f-4912-9cbb-a7745c8688a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4ff65022-c3e1-4ee6-b866-7892555ef52f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:31:09 np0005486808 nova_compute[259627]: 2025-10-14 09:31:09.976 2 DEBUG nova.compute.manager [req-6edc8aaa-bf25-424e-ac07-0a1e0d821454 req-87b1ad0e-f48f-4912-9cbb-a7745c8688a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] No waiting events found dispatching network-vif-plugged-20f15fae-2789-43f5-8ca3-2a412dba5625 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:31:09 np0005486808 nova_compute[259627]: 2025-10-14 09:31:09.976 2 WARNING nova.compute.manager [req-6edc8aaa-bf25-424e-ac07-0a1e0d821454 req-87b1ad0e-f48f-4912-9cbb-a7745c8688a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Received unexpected event network-vif-plugged-20f15fae-2789-43f5-8ca3-2a412dba5625 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:31:09 np0005486808 nova_compute[259627]: 2025-10-14 09:31:09.977 2 DEBUG nova.compute.manager [req-6edc8aaa-bf25-424e-ac07-0a1e0d821454 req-87b1ad0e-f48f-4912-9cbb-a7745c8688a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Received event network-vif-plugged-c22b3ec2-f5a1-4c97-8648-a463e9e12545 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:31:09 np0005486808 nova_compute[259627]: 2025-10-14 09:31:09.977 2 DEBUG oslo_concurrency.lockutils [req-6edc8aaa-bf25-424e-ac07-0a1e0d821454 req-87b1ad0e-f48f-4912-9cbb-a7745c8688a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:31:09 np0005486808 nova_compute[259627]: 2025-10-14 09:31:09.978 2 DEBUG oslo_concurrency.lockutils [req-6edc8aaa-bf25-424e-ac07-0a1e0d821454 req-87b1ad0e-f48f-4912-9cbb-a7745c8688a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:31:09 np0005486808 nova_compute[259627]: 2025-10-14 09:31:09.978 2 DEBUG oslo_concurrency.lockutils [req-6edc8aaa-bf25-424e-ac07-0a1e0d821454 req-87b1ad0e-f48f-4912-9cbb-a7745c8688a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:31:09 np0005486808 nova_compute[259627]: 2025-10-14 09:31:09.978 2 DEBUG nova.compute.manager [req-6edc8aaa-bf25-424e-ac07-0a1e0d821454 req-87b1ad0e-f48f-4912-9cbb-a7745c8688a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Processing event network-vif-plugged-c22b3ec2-f5a1-4c97-8648-a463e9e12545 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:31:09 np0005486808 nova_compute[259627]: 2025-10-14 09:31:09.979 2 DEBUG nova.compute.manager [req-6edc8aaa-bf25-424e-ac07-0a1e0d821454 req-87b1ad0e-f48f-4912-9cbb-a7745c8688a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Received event network-vif-plugged-c22b3ec2-f5a1-4c97-8648-a463e9e12545 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:31:09 np0005486808 nova_compute[259627]: 2025-10-14 09:31:09.979 2 DEBUG oslo_concurrency.lockutils [req-6edc8aaa-bf25-424e-ac07-0a1e0d821454 req-87b1ad0e-f48f-4912-9cbb-a7745c8688a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:31:09 np0005486808 nova_compute[259627]: 2025-10-14 09:31:09.980 2 DEBUG oslo_concurrency.lockutils [req-6edc8aaa-bf25-424e-ac07-0a1e0d821454 req-87b1ad0e-f48f-4912-9cbb-a7745c8688a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:31:09 np0005486808 nova_compute[259627]: 2025-10-14 09:31:09.980 2 DEBUG oslo_concurrency.lockutils [req-6edc8aaa-bf25-424e-ac07-0a1e0d821454 req-87b1ad0e-f48f-4912-9cbb-a7745c8688a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:31:09 np0005486808 nova_compute[259627]: 2025-10-14 09:31:09.980 2 DEBUG nova.compute.manager [req-6edc8aaa-bf25-424e-ac07-0a1e0d821454 req-87b1ad0e-f48f-4912-9cbb-a7745c8688a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] No waiting events found dispatching network-vif-plugged-c22b3ec2-f5a1-4c97-8648-a463e9e12545 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:31:09 np0005486808 nova_compute[259627]: 2025-10-14 09:31:09.981 2 WARNING nova.compute.manager [req-6edc8aaa-bf25-424e-ac07-0a1e0d821454 req-87b1ad0e-f48f-4912-9cbb-a7745c8688a7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Received unexpected event network-vif-plugged-c22b3ec2-f5a1-4c97-8648-a463e9e12545 for instance with vm_state building and task_state spawning.#033[00m
Oct 14 05:31:10 np0005486808 nova_compute[259627]: 2025-10-14 09:31:10.322 2 DEBUG nova.compute.manager [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:31:10 np0005486808 nova_compute[259627]: 2025-10-14 09:31:10.323 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434270.3223557, b30a994a-5fb7-4344-9944-98d3d75d3b04 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:31:10 np0005486808 nova_compute[259627]: 2025-10-14 09:31:10.323 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] VM Started (Lifecycle Event)#033[00m
Oct 14 05:31:10 np0005486808 podman[401281]: 2025-10-14 09:31:10.322346744 +0000 UTC m=+0.061017719 container create 2cdaaa0b9144b6ef9375e89d028328bb9f9de7274e87f2c7a0ec43c954e1d406 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 14 05:31:10 np0005486808 nova_compute[259627]: 2025-10-14 09:31:10.336 2 DEBUG nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:31:10 np0005486808 nova_compute[259627]: 2025-10-14 09:31:10.339 2 INFO nova.virt.libvirt.driver [-] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Instance spawned successfully.#033[00m
Oct 14 05:31:10 np0005486808 nova_compute[259627]: 2025-10-14 09:31:10.339 2 DEBUG nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:31:10 np0005486808 nova_compute[259627]: 2025-10-14 09:31:10.345 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:31:10 np0005486808 nova_compute[259627]: 2025-10-14 09:31:10.348 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:31:10 np0005486808 nova_compute[259627]: 2025-10-14 09:31:10.359 2 DEBUG nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:31:10 np0005486808 nova_compute[259627]: 2025-10-14 09:31:10.359 2 DEBUG nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:31:10 np0005486808 nova_compute[259627]: 2025-10-14 09:31:10.360 2 DEBUG nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:31:10 np0005486808 nova_compute[259627]: 2025-10-14 09:31:10.360 2 DEBUG nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:31:10 np0005486808 nova_compute[259627]: 2025-10-14 09:31:10.361 2 DEBUG nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:31:10 np0005486808 nova_compute[259627]: 2025-10-14 09:31:10.361 2 DEBUG nova.virt.libvirt.driver [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:31:10 np0005486808 nova_compute[259627]: 2025-10-14 09:31:10.366 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:31:10 np0005486808 nova_compute[259627]: 2025-10-14 09:31:10.366 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434270.3256197, b30a994a-5fb7-4344-9944-98d3d75d3b04 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:31:10 np0005486808 nova_compute[259627]: 2025-10-14 09:31:10.366 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:31:10 np0005486808 podman[401281]: 2025-10-14 09:31:10.286878253 +0000 UTC m=+0.025549238 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:31:10 np0005486808 systemd[1]: Started libpod-conmon-2cdaaa0b9144b6ef9375e89d028328bb9f9de7274e87f2c7a0ec43c954e1d406.scope.
Oct 14 05:31:10 np0005486808 nova_compute[259627]: 2025-10-14 09:31:10.397 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:31:10 np0005486808 nova_compute[259627]: 2025-10-14 09:31:10.403 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434270.3356943, b30a994a-5fb7-4344-9944-98d3d75d3b04 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:31:10 np0005486808 nova_compute[259627]: 2025-10-14 09:31:10.403 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:31:10 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:31:10 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b3a221f83799bd67fc58856adc1cff01fbd3fa4b8be45bf1363f70f768eb22a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:31:10 np0005486808 nova_compute[259627]: 2025-10-14 09:31:10.427 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:31:10 np0005486808 nova_compute[259627]: 2025-10-14 09:31:10.436 2 INFO nova.compute.manager [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Took 11.83 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:31:10 np0005486808 nova_compute[259627]: 2025-10-14 09:31:10.436 2 DEBUG nova.compute.manager [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:31:10 np0005486808 nova_compute[259627]: 2025-10-14 09:31:10.438 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:31:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2341: 305 pgs: 305 active+clean; 214 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 793 KiB/s rd, 3.6 MiB/s wr, 89 op/s
Oct 14 05:31:10 np0005486808 nova_compute[259627]: 2025-10-14 09:31:10.457 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:31:10 np0005486808 nova_compute[259627]: 2025-10-14 09:31:10.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:10 np0005486808 podman[401281]: 2025-10-14 09:31:10.463743084 +0000 UTC m=+0.202414069 container init 2cdaaa0b9144b6ef9375e89d028328bb9f9de7274e87f2c7a0ec43c954e1d406 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 14 05:31:10 np0005486808 podman[401281]: 2025-10-14 09:31:10.47175686 +0000 UTC m=+0.210427835 container start 2cdaaa0b9144b6ef9375e89d028328bb9f9de7274e87f2c7a0ec43c954e1d406 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 14 05:31:10 np0005486808 neutron-haproxy-ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17[401296]: [NOTICE]   (401300) : New worker (401302) forked
Oct 14 05:31:10 np0005486808 neutron-haproxy-ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17[401296]: [NOTICE]   (401300) : Loading success.
Oct 14 05:31:10 np0005486808 nova_compute[259627]: 2025-10-14 09:31:10.535 2 INFO nova.compute.manager [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Took 13.14 seconds to build instance.#033[00m
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.548 162547 INFO neutron.agent.ovn.metadata.agent [-] Port c22b3ec2-f5a1-4c97-8648-a463e9e12545 in datapath 1dc63515-48cf-4886-956d-024d1d9cb848 unbound from our chassis#033[00m
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.550 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1dc63515-48cf-4886-956d-024d1d9cb848#033[00m
Oct 14 05:31:10 np0005486808 nova_compute[259627]: 2025-10-14 09:31:10.552 2 DEBUG oslo_concurrency.lockutils [None req-2fe5948c-e19a-4a8c-b5e9-1d2b5093f82b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.222s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.570 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[22171076-d596-4309-9bf8-8c6749a9161d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.571 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1dc63515-41 in ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.574 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1dc63515-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.574 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[58f65509-00a7-4b6c-82c5-8c801cc1d3ce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.581 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[799df613-5cdb-43f7-99cf-a11768659f09]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.594 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[c9be0c75-d292-4360-883d-e10e31d781d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.615 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[244de2da-7b44-4715-aeed-2511dc515a12]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.655 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[05be8ed3-baf1-44f3-b111-85cd57611cbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.666 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[545d3984-4005-4065-8369-9f724b50a346]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:10 np0005486808 NetworkManager[44885]: <info>  [1760434270.6676] manager: (tap1dc63515-40): new Veth device (/org/freedesktop/NetworkManager/Devices/603)
Oct 14 05:31:10 np0005486808 systemd-udevd[401187]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.703 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[95568699-e527-4d63-8831-b1cc3233db3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.706 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[da48852e-e5c1-4175-8ff6-1f0125deb873]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:10 np0005486808 NetworkManager[44885]: <info>  [1760434270.7308] device (tap1dc63515-40): carrier: link connected
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.738 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[dfb50297-016b-464f-a554-8ce4e0ca4cad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.762 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f45696d6-8f32-484b-a883-b6c1ed2a4dd6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1dc63515-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a6:e0:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 418], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 804949, 'reachable_time': 40867, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 401321, 'error': None, 'target': 'ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.787 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4e11c013-ef67-4512-bef3-32c38f2b0087]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea6:e0fb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 804949, 'tstamp': 804949}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 401322, 'error': None, 'target': 'ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.813 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e6fe2130-e9cb-4f6c-ad9b-c49cc023acd7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1dc63515-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a6:e0:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 418], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 804949, 'reachable_time': 40867, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 401323, 'error': None, 'target': 'ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.853 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[db9b7f0c-9fbc-49ae-a441-fc3618128393]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.901 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9e9cdb8f-1363-48da-a302-2528c12e5673]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.902 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1dc63515-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.902 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.903 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1dc63515-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:31:10 np0005486808 nova_compute[259627]: 2025-10-14 09:31:10.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:10 np0005486808 kernel: tap1dc63515-40: entered promiscuous mode
Oct 14 05:31:10 np0005486808 NetworkManager[44885]: <info>  [1760434270.9049] manager: (tap1dc63515-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/604)
Oct 14 05:31:10 np0005486808 nova_compute[259627]: 2025-10-14 09:31:10.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.909 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1dc63515-40, col_values=(('external_ids', {'iface-id': '9754cafe-7819-456f-943e-9907e7f07233'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:31:10 np0005486808 ovn_controller[152662]: 2025-10-14T09:31:10Z|01478|binding|INFO|Releasing lport 9754cafe-7819-456f-943e-9907e7f07233 from this chassis (sb_readonly=0)
Oct 14 05:31:10 np0005486808 nova_compute[259627]: 2025-10-14 09:31:10.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:31:10 np0005486808 nova_compute[259627]: 2025-10-14 09:31:10.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:31:10 np0005486808 nova_compute[259627]: 2025-10-14 09:31:10.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.928 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1dc63515-48cf-4886-956d-024d1d9cb848.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1dc63515-48cf-4886-956d-024d1d9cb848.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.928 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a0b1ec84-4808-4655-9de6-895323dc9423]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.929 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-1dc63515-48cf-4886-956d-024d1d9cb848
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/1dc63515-48cf-4886-956d-024d1d9cb848.pid.haproxy
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 1dc63515-48cf-4886-956d-024d1d9cb848
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 14 05:31:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:10.930 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848', 'env', 'PROCESS_TAG=haproxy-1dc63515-48cf-4886-956d-024d1d9cb848', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1dc63515-48cf-4886-956d-024d1d9cb848.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 14 05:31:11 np0005486808 podman[401354]: 2025-10-14 09:31:11.32953463 +0000 UTC m=+0.070867550 container create f639b506c8f5fae7770b83d3f5a35c10628b924f4408ad78adb8ff512399896a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 14 05:31:11 np0005486808 systemd[1]: Started libpod-conmon-f639b506c8f5fae7770b83d3f5a35c10628b924f4408ad78adb8ff512399896a.scope.
Oct 14 05:31:11 np0005486808 podman[401354]: 2025-10-14 09:31:11.295151156 +0000 UTC m=+0.036484166 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:31:11 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:31:11 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5feeed86f65fad647b0e405deecf168013385c50e6fadd5dfe122e086c585633/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:31:11 np0005486808 podman[401354]: 2025-10-14 09:31:11.425867294 +0000 UTC m=+0.167200214 container init f639b506c8f5fae7770b83d3f5a35c10628b924f4408ad78adb8ff512399896a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS)
Oct 14 05:31:11 np0005486808 podman[401354]: 2025-10-14 09:31:11.43301444 +0000 UTC m=+0.174347360 container start f639b506c8f5fae7770b83d3f5a35c10628b924f4408ad78adb8ff512399896a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 05:31:11 np0005486808 neutron-haproxy-ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848[401369]: [NOTICE]   (401373) : New worker (401375) forked
Oct 14 05:31:11 np0005486808 neutron-haproxy-ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848[401369]: [NOTICE]   (401373) : Loading success.
Oct 14 05:31:12 np0005486808 nova_compute[259627]: 2025-10-14 09:31:12.145 2 DEBUG nova.compute.manager [req-645dba83-3538-4e4f-b1c4-5cd2236815cf req-9f170ea4-9186-4cf6-b351-932729e5cbda 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Received event network-vif-plugged-f2639397-8fb2-4541-a298-fd68219e1e47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:31:12 np0005486808 nova_compute[259627]: 2025-10-14 09:31:12.145 2 DEBUG oslo_concurrency.lockutils [req-645dba83-3538-4e4f-b1c4-5cd2236815cf req-9f170ea4-9186-4cf6-b351-932729e5cbda 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:31:12 np0005486808 nova_compute[259627]: 2025-10-14 09:31:12.146 2 DEBUG oslo_concurrency.lockutils [req-645dba83-3538-4e4f-b1c4-5cd2236815cf req-9f170ea4-9186-4cf6-b351-932729e5cbda 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:31:12 np0005486808 nova_compute[259627]: 2025-10-14 09:31:12.146 2 DEBUG oslo_concurrency.lockutils [req-645dba83-3538-4e4f-b1c4-5cd2236815cf req-9f170ea4-9186-4cf6-b351-932729e5cbda 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:31:12 np0005486808 nova_compute[259627]: 2025-10-14 09:31:12.146 2 DEBUG nova.compute.manager [req-645dba83-3538-4e4f-b1c4-5cd2236815cf req-9f170ea4-9186-4cf6-b351-932729e5cbda 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] No waiting events found dispatching network-vif-plugged-f2639397-8fb2-4541-a298-fd68219e1e47 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 05:31:12 np0005486808 nova_compute[259627]: 2025-10-14 09:31:12.147 2 WARNING nova.compute.manager [req-645dba83-3538-4e4f-b1c4-5cd2236815cf req-9f170ea4-9186-4cf6-b351-932729e5cbda 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Received unexpected event network-vif-plugged-f2639397-8fb2-4541-a298-fd68219e1e47 for instance with vm_state active and task_state None.
Oct 14 05:31:12 np0005486808 nova_compute[259627]: 2025-10-14 09:31:12.147 2 DEBUG nova.compute.manager [req-645dba83-3538-4e4f-b1c4-5cd2236815cf req-9f170ea4-9186-4cf6-b351-932729e5cbda 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Received event network-changed-20f15fae-2789-43f5-8ca3-2a412dba5625 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:31:12 np0005486808 nova_compute[259627]: 2025-10-14 09:31:12.147 2 DEBUG nova.compute.manager [req-645dba83-3538-4e4f-b1c4-5cd2236815cf req-9f170ea4-9186-4cf6-b351-932729e5cbda 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Refreshing instance network info cache due to event network-changed-20f15fae-2789-43f5-8ca3-2a412dba5625. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 05:31:12 np0005486808 nova_compute[259627]: 2025-10-14 09:31:12.148 2 DEBUG oslo_concurrency.lockutils [req-645dba83-3538-4e4f-b1c4-5cd2236815cf req-9f170ea4-9186-4cf6-b351-932729e5cbda 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-4ff65022-c3e1-4ee6-b866-7892555ef52f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 05:31:12 np0005486808 nova_compute[259627]: 2025-10-14 09:31:12.148 2 DEBUG oslo_concurrency.lockutils [req-645dba83-3538-4e4f-b1c4-5cd2236815cf req-9f170ea4-9186-4cf6-b351-932729e5cbda 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-4ff65022-c3e1-4ee6-b866-7892555ef52f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 05:31:12 np0005486808 nova_compute[259627]: 2025-10-14 09:31:12.148 2 DEBUG nova.network.neutron [req-645dba83-3538-4e4f-b1c4-5cd2236815cf req-9f170ea4-9186-4cf6-b351-932729e5cbda 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Refreshing network info cache for port 20f15fae-2789-43f5-8ca3-2a412dba5625 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 05:31:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2342: 305 pgs: 305 active+clean; 214 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.7 MiB/s wr, 129 op/s
Oct 14 05:31:12 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #114. Immutable memtables: 0.
Oct 14 05:31:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:31:12.523862) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 05:31:12 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 114
Oct 14 05:31:12 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434272523910, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 1041, "num_deletes": 251, "total_data_size": 1407883, "memory_usage": 1428064, "flush_reason": "Manual Compaction"}
Oct 14 05:31:12 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #115: started
Oct 14 05:31:12 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434272534945, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 115, "file_size": 1382355, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 48765, "largest_seqno": 49805, "table_properties": {"data_size": 1377320, "index_size": 2495, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11204, "raw_average_key_size": 19, "raw_value_size": 1367173, "raw_average_value_size": 2419, "num_data_blocks": 111, "num_entries": 565, "num_filter_entries": 565, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760434183, "oldest_key_time": 1760434183, "file_creation_time": 1760434272, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 115, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:31:12 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 11419 microseconds, and 6911 cpu microseconds.
Oct 14 05:31:12 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:31:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:31:12.535278) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #115: 1382355 bytes OK
Oct 14 05:31:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:31:12.535452) [db/memtable_list.cc:519] [default] Level-0 commit table #115 started
Oct 14 05:31:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:31:12.537365) [db/memtable_list.cc:722] [default] Level-0 commit table #115: memtable #1 done
Oct 14 05:31:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:31:12.537387) EVENT_LOG_v1 {"time_micros": 1760434272537380, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 05:31:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:31:12.537407) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 05:31:12 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 1402961, prev total WAL file size 1402961, number of live WAL files 2.
Oct 14 05:31:12 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000111.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:31:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:31:12.539290) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Oct 14 05:31:12 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 05:31:12 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [115(1349KB)], [113(8479KB)]
Oct 14 05:31:12 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434272539330, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [115], "files_L6": [113], "score": -1, "input_data_size": 10065386, "oldest_snapshot_seqno": -1}
Oct 14 05:31:12 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #116: 6973 keys, 8354813 bytes, temperature: kUnknown
Oct 14 05:31:12 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434272584776, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 116, "file_size": 8354813, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8310082, "index_size": 26187, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17477, "raw_key_size": 182015, "raw_average_key_size": 26, "raw_value_size": 8187163, "raw_average_value_size": 1174, "num_data_blocks": 1016, "num_entries": 6973, "num_filter_entries": 6973, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760434272, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:31:12 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:31:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:31:12.585147) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 8354813 bytes
Oct 14 05:31:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:31:12.586487) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 220.6 rd, 183.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 8.3 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(13.3) write-amplify(6.0) OK, records in: 7487, records dropped: 514 output_compression: NoCompression
Oct 14 05:31:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:31:12.586503) EVENT_LOG_v1 {"time_micros": 1760434272586495, "job": 68, "event": "compaction_finished", "compaction_time_micros": 45633, "compaction_time_cpu_micros": 19262, "output_level": 6, "num_output_files": 1, "total_output_size": 8354813, "num_input_records": 7487, "num_output_records": 6973, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 05:31:12 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000115.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:31:12 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434272587158, "job": 68, "event": "table_file_deletion", "file_number": 115}
Oct 14 05:31:12 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:31:12 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434272588707, "job": 68, "event": "table_file_deletion", "file_number": 113}
Oct 14 05:31:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:31:12.539218) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:31:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:31:12.588855) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:31:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:31:12.588861) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:31:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:31:12.588863) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:31:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:31:12.588864) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:31:12 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:31:12.588865) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:31:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:31:13 np0005486808 nova_compute[259627]: 2025-10-14 09:31:13.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:31:14 np0005486808 nova_compute[259627]: 2025-10-14 09:31:14.063 2 DEBUG nova.network.neutron [req-645dba83-3538-4e4f-b1c4-5cd2236815cf req-9f170ea4-9186-4cf6-b351-932729e5cbda 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Updated VIF entry in instance network info cache for port 20f15fae-2789-43f5-8ca3-2a412dba5625. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 05:31:14 np0005486808 nova_compute[259627]: 2025-10-14 09:31:14.063 2 DEBUG nova.network.neutron [req-645dba83-3538-4e4f-b1c4-5cd2236815cf req-9f170ea4-9186-4cf6-b351-932729e5cbda 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Updating instance_info_cache with network_info: [{"id": "20f15fae-2789-43f5-8ca3-2a412dba5625", "address": "fa:16:3e:59:13:28", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20f15fae-27", "ovs_interfaceid": "20f15fae-2789-43f5-8ca3-2a412dba5625", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 05:31:14 np0005486808 nova_compute[259627]: 2025-10-14 09:31:14.098 2 DEBUG oslo_concurrency.lockutils [req-645dba83-3538-4e4f-b1c4-5cd2236815cf req-9f170ea4-9186-4cf6-b351-932729e5cbda 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-4ff65022-c3e1-4ee6-b866-7892555ef52f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 05:31:14 np0005486808 nova_compute[259627]: 2025-10-14 09:31:14.215 2 DEBUG nova.compute.manager [req-94759158-b423-444d-91ed-ef085fda581f req-28fda04d-5a3f-4e01-b4dd-29e5e0d043e5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Received event network-changed-20f15fae-2789-43f5-8ca3-2a412dba5625 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:31:14 np0005486808 nova_compute[259627]: 2025-10-14 09:31:14.215 2 DEBUG nova.compute.manager [req-94759158-b423-444d-91ed-ef085fda581f req-28fda04d-5a3f-4e01-b4dd-29e5e0d043e5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Refreshing instance network info cache due to event network-changed-20f15fae-2789-43f5-8ca3-2a412dba5625. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 05:31:14 np0005486808 nova_compute[259627]: 2025-10-14 09:31:14.219 2 DEBUG oslo_concurrency.lockutils [req-94759158-b423-444d-91ed-ef085fda581f req-28fda04d-5a3f-4e01-b4dd-29e5e0d043e5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-4ff65022-c3e1-4ee6-b866-7892555ef52f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 05:31:14 np0005486808 nova_compute[259627]: 2025-10-14 09:31:14.219 2 DEBUG oslo_concurrency.lockutils [req-94759158-b423-444d-91ed-ef085fda581f req-28fda04d-5a3f-4e01-b4dd-29e5e0d043e5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-4ff65022-c3e1-4ee6-b866-7892555ef52f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 05:31:14 np0005486808 nova_compute[259627]: 2025-10-14 09:31:14.219 2 DEBUG nova.network.neutron [req-94759158-b423-444d-91ed-ef085fda581f req-28fda04d-5a3f-4e01-b4dd-29e5e0d043e5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Refreshing network info cache for port 20f15fae-2789-43f5-8ca3-2a412dba5625 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 05:31:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2343: 305 pgs: 305 active+clean; 214 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 90 op/s
Oct 14 05:31:14 np0005486808 podman[401384]: 2025-10-14 09:31:14.645841212 +0000 UTC m=+0.065096119 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:31:14 np0005486808 podman[401385]: 2025-10-14 09:31:14.655241413 +0000 UTC m=+0.060291731 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:31:15 np0005486808 nova_compute[259627]: 2025-10-14 09:31:15.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:16 np0005486808 nova_compute[259627]: 2025-10-14 09:31:16.027 2 DEBUG nova.network.neutron [req-94759158-b423-444d-91ed-ef085fda581f req-28fda04d-5a3f-4e01-b4dd-29e5e0d043e5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Updated VIF entry in instance network info cache for port 20f15fae-2789-43f5-8ca3-2a412dba5625. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:31:16 np0005486808 nova_compute[259627]: 2025-10-14 09:31:16.028 2 DEBUG nova.network.neutron [req-94759158-b423-444d-91ed-ef085fda581f req-28fda04d-5a3f-4e01-b4dd-29e5e0d043e5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Updating instance_info_cache with network_info: [{"id": "20f15fae-2789-43f5-8ca3-2a412dba5625", "address": "fa:16:3e:59:13:28", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20f15fae-27", "ovs_interfaceid": "20f15fae-2789-43f5-8ca3-2a412dba5625", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:31:16 np0005486808 nova_compute[259627]: 2025-10-14 09:31:16.048 2 DEBUG oslo_concurrency.lockutils [req-94759158-b423-444d-91ed-ef085fda581f req-28fda04d-5a3f-4e01-b4dd-29e5e0d043e5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-4ff65022-c3e1-4ee6-b866-7892555ef52f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:31:16 np0005486808 nova_compute[259627]: 2025-10-14 09:31:16.267 2 DEBUG nova.compute.manager [req-3d641531-d239-44f3-a283-430f12381f77 req-cac5ab25-9b86-4b1a-b0a9-b549ff3d27da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Received event network-changed-f2639397-8fb2-4541-a298-fd68219e1e47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:31:16 np0005486808 nova_compute[259627]: 2025-10-14 09:31:16.267 2 DEBUG nova.compute.manager [req-3d641531-d239-44f3-a283-430f12381f77 req-cac5ab25-9b86-4b1a-b0a9-b549ff3d27da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Refreshing instance network info cache due to event network-changed-f2639397-8fb2-4541-a298-fd68219e1e47. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:31:16 np0005486808 nova_compute[259627]: 2025-10-14 09:31:16.267 2 DEBUG oslo_concurrency.lockutils [req-3d641531-d239-44f3-a283-430f12381f77 req-cac5ab25-9b86-4b1a-b0a9-b549ff3d27da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-b30a994a-5fb7-4344-9944-98d3d75d3b04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:31:16 np0005486808 nova_compute[259627]: 2025-10-14 09:31:16.268 2 DEBUG oslo_concurrency.lockutils [req-3d641531-d239-44f3-a283-430f12381f77 req-cac5ab25-9b86-4b1a-b0a9-b549ff3d27da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-b30a994a-5fb7-4344-9944-98d3d75d3b04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:31:16 np0005486808 nova_compute[259627]: 2025-10-14 09:31:16.268 2 DEBUG nova.network.neutron [req-3d641531-d239-44f3-a283-430f12381f77 req-cac5ab25-9b86-4b1a-b0a9-b549ff3d27da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Refreshing network info cache for port f2639397-8fb2-4541-a298-fd68219e1e47 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:31:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2344: 305 pgs: 305 active+clean; 214 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 29 KiB/s wr, 155 op/s
Oct 14 05:31:18 np0005486808 nova_compute[259627]: 2025-10-14 09:31:18.000 2 DEBUG nova.network.neutron [req-3d641531-d239-44f3-a283-430f12381f77 req-cac5ab25-9b86-4b1a-b0a9-b549ff3d27da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Updated VIF entry in instance network info cache for port f2639397-8fb2-4541-a298-fd68219e1e47. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:31:18 np0005486808 nova_compute[259627]: 2025-10-14 09:31:18.000 2 DEBUG nova.network.neutron [req-3d641531-d239-44f3-a283-430f12381f77 req-cac5ab25-9b86-4b1a-b0a9-b549ff3d27da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Updating instance_info_cache with network_info: [{"id": "f2639397-8fb2-4541-a298-fd68219e1e47", "address": "fa:16:3e:a4:1e:d0", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2639397-8f", "ovs_interfaceid": "f2639397-8fb2-4541-a298-fd68219e1e47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "address": "fa:16:3e:cf:84:75", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fecf:8475", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc22b3ec2-f5", "ovs_interfaceid": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:31:18 np0005486808 nova_compute[259627]: 2025-10-14 09:31:18.021 2 DEBUG oslo_concurrency.lockutils [req-3d641531-d239-44f3-a283-430f12381f77 req-cac5ab25-9b86-4b1a-b0a9-b549ff3d27da 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-b30a994a-5fb7-4344-9944-98d3d75d3b04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:31:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:31:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2345: 305 pgs: 305 active+clean; 214 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 148 op/s
Oct 14 05:31:18 np0005486808 nova_compute[259627]: 2025-10-14 09:31:18.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:20 np0005486808 ovn_controller[152662]: 2025-10-14T09:31:20Z|00172|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:59:13:28 10.100.0.10
Oct 14 05:31:20 np0005486808 ovn_controller[152662]: 2025-10-14T09:31:20Z|00173|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:59:13:28 10.100.0.10
Oct 14 05:31:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2346: 305 pgs: 305 active+clean; 218 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 598 KiB/s wr, 164 op/s
Oct 14 05:31:20 np0005486808 nova_compute[259627]: 2025-10-14 09:31:20.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2347: 305 pgs: 305 active+clean; 239 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 2.0 MiB/s wr, 160 op/s
Oct 14 05:31:22 np0005486808 ovn_controller[152662]: 2025-10-14T09:31:22Z|00174|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a4:1e:d0 10.100.0.12
Oct 14 05:31:22 np0005486808 ovn_controller[152662]: 2025-10-14T09:31:22Z|00175|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a4:1e:d0 10.100.0.12
Oct 14 05:31:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:31:23 np0005486808 nova_compute[259627]: 2025-10-14 09:31:23.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2348: 305 pgs: 305 active+clean; 239 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 111 op/s
Oct 14 05:31:25 np0005486808 nova_compute[259627]: 2025-10-14 09:31:25.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2349: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 190 op/s
Oct 14 05:31:26 np0005486808 nova_compute[259627]: 2025-10-14 09:31:26.640 2 DEBUG oslo_concurrency.lockutils [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "4ff65022-c3e1-4ee6-b866-7892555ef52f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:31:26 np0005486808 nova_compute[259627]: 2025-10-14 09:31:26.641 2 DEBUG oslo_concurrency.lockutils [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4ff65022-c3e1-4ee6-b866-7892555ef52f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:31:26 np0005486808 nova_compute[259627]: 2025-10-14 09:31:26.641 2 DEBUG oslo_concurrency.lockutils [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "4ff65022-c3e1-4ee6-b866-7892555ef52f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:31:26 np0005486808 nova_compute[259627]: 2025-10-14 09:31:26.642 2 DEBUG oslo_concurrency.lockutils [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4ff65022-c3e1-4ee6-b866-7892555ef52f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:31:26 np0005486808 nova_compute[259627]: 2025-10-14 09:31:26.643 2 DEBUG oslo_concurrency.lockutils [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4ff65022-c3e1-4ee6-b866-7892555ef52f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:31:26 np0005486808 nova_compute[259627]: 2025-10-14 09:31:26.645 2 INFO nova.compute.manager [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Terminating instance#033[00m
Oct 14 05:31:26 np0005486808 nova_compute[259627]: 2025-10-14 09:31:26.647 2 DEBUG nova.compute.manager [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:31:26 np0005486808 kernel: tap20f15fae-27 (unregistering): left promiscuous mode
Oct 14 05:31:26 np0005486808 NetworkManager[44885]: <info>  [1760434286.7061] device (tap20f15fae-27): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:31:26 np0005486808 nova_compute[259627]: 2025-10-14 09:31:26.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:26 np0005486808 ovn_controller[152662]: 2025-10-14T09:31:26Z|01479|binding|INFO|Releasing lport 20f15fae-2789-43f5-8ca3-2a412dba5625 from this chassis (sb_readonly=0)
Oct 14 05:31:26 np0005486808 ovn_controller[152662]: 2025-10-14T09:31:26Z|01480|binding|INFO|Setting lport 20f15fae-2789-43f5-8ca3-2a412dba5625 down in Southbound
Oct 14 05:31:26 np0005486808 ovn_controller[152662]: 2025-10-14T09:31:26Z|01481|binding|INFO|Removing iface tap20f15fae-27 ovn-installed in OVS
Oct 14 05:31:26 np0005486808 nova_compute[259627]: 2025-10-14 09:31:26.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:26.738 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:59:13:28 10.100.0.10', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4ff65022-c3e1-4ee6-b866-7892555ef52f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f754bf649a2404fa8dee732f5aab36e', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a89aaec7-5335-48cb-8b51-b7dcd7e1d5f5, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=20f15fae-2789-43f5-8ca3-2a412dba5625) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:31:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:26.741 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 20f15fae-2789-43f5-8ca3-2a412dba5625 in datapath 6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a unbound from our chassis#033[00m
Oct 14 05:31:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:26.745 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a#033[00m
Oct 14 05:31:26 np0005486808 nova_compute[259627]: 2025-10-14 09:31:26.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:26 np0005486808 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d0000008a.scope: Deactivated successfully.
Oct 14 05:31:26 np0005486808 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d0000008a.scope: Consumed 13.727s CPU time.
Oct 14 05:31:26 np0005486808 systemd-machined[214636]: Machine qemu-170-instance-0000008a terminated.
Oct 14 05:31:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:26.764 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f9c2ac24-c59d-445e-a9f5-db025f818744]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:26.801 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ef6b078a-238e-45cd-8080-7d3c384668d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:26.804 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ce6bbc5c-4064-4338-adfa-545606295bb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:26.835 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1c194fdc-f4c2-41bc-86b2-9adc288e7f60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:26 np0005486808 podman[401426]: 2025-10-14 09:31:26.847675032 +0000 UTC m=+0.091113537 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 05:31:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:26.851 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d80cd22e-5939-4cd1-9677-e44b929a7bb8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e05ecd2-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cd:c2:3a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 21, 'tx_packets': 7, 'rx_bytes': 1670, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 21, 'tx_packets': 7, 'rx_bytes': 1670, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 413], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 801041, 'reachable_time': 33084, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 15, 'inoctets': 1208, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 15, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1208, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 15, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 401469, 'error': None, 'target': 'ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:26.872 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0047c4f6-80ad-494d-b3b8-715d99ca187c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6e05ecd2-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 801053, 'tstamp': 801053}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 401473, 'error': None, 'target': 'ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6e05ecd2-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 801056, 'tstamp': 801056}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 401473, 'error': None, 'target': 'ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:26.873 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e05ecd2-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:31:26 np0005486808 nova_compute[259627]: 2025-10-14 09:31:26.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:26 np0005486808 nova_compute[259627]: 2025-10-14 09:31:26.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:26.881 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e05ecd2-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:31:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:26.881 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:31:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:26.881 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6e05ecd2-80, col_values=(('external_ids', {'iface-id': '4697e43c-b02d-4f27-aea8-a54cad6fa2da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:31:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:26.881 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:31:26 np0005486808 nova_compute[259627]: 2025-10-14 09:31:26.886 2 INFO nova.virt.libvirt.driver [-] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Instance destroyed successfully.#033[00m
Oct 14 05:31:26 np0005486808 nova_compute[259627]: 2025-10-14 09:31:26.886 2 DEBUG nova.objects.instance [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'resources' on Instance uuid 4ff65022-c3e1-4ee6-b866-7892555ef52f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:31:26 np0005486808 nova_compute[259627]: 2025-10-14 09:31:26.902 2 DEBUG nova.virt.libvirt.vif [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:30:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-gen-1-876162503',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-gen-1-876162503',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ge',id=138,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLEydbvfC6BL8JKAwPsMYw33jsGt7x8pmldi0JdcLroJxQM9Cv9y2WyT4/kdGyYCi1f6St8cX/paY9O5VNQjPEflw/+0a0KCs3SETvjwSZyH4RtpOVgJ2yOdUlm6DCl0mg==',key_name='tempest-TestSecurityGroupsBasicOps-55432425',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:31:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-69wk0x4o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:31:08Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=4ff65022-c3e1-4ee6-b866-7892555ef52f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "20f15fae-2789-43f5-8ca3-2a412dba5625", "address": "fa:16:3e:59:13:28", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20f15fae-27", "ovs_interfaceid": "20f15fae-2789-43f5-8ca3-2a412dba5625", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:31:26 np0005486808 nova_compute[259627]: 2025-10-14 09:31:26.903 2 DEBUG nova.network.os_vif_util [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "20f15fae-2789-43f5-8ca3-2a412dba5625", "address": "fa:16:3e:59:13:28", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20f15fae-27", "ovs_interfaceid": "20f15fae-2789-43f5-8ca3-2a412dba5625", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:31:26 np0005486808 nova_compute[259627]: 2025-10-14 09:31:26.903 2 DEBUG nova.network.os_vif_util [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:59:13:28,bridge_name='br-int',has_traffic_filtering=True,id=20f15fae-2789-43f5-8ca3-2a412dba5625,network=Network(6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20f15fae-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:31:26 np0005486808 nova_compute[259627]: 2025-10-14 09:31:26.904 2 DEBUG os_vif [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:59:13:28,bridge_name='br-int',has_traffic_filtering=True,id=20f15fae-2789-43f5-8ca3-2a412dba5625,network=Network(6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20f15fae-27') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:31:26 np0005486808 nova_compute[259627]: 2025-10-14 09:31:26.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:26 np0005486808 nova_compute[259627]: 2025-10-14 09:31:26.905 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20f15fae-27, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:31:26 np0005486808 nova_compute[259627]: 2025-10-14 09:31:26.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:26 np0005486808 nova_compute[259627]: 2025-10-14 09:31:26.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:26 np0005486808 nova_compute[259627]: 2025-10-14 09:31:26.910 2 INFO os_vif [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:59:13:28,bridge_name='br-int',has_traffic_filtering=True,id=20f15fae-2789-43f5-8ca3-2a412dba5625,network=Network(6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20f15fae-27')#033[00m
Oct 14 05:31:26 np0005486808 podman[401438]: 2025-10-14 09:31:26.912941214 +0000 UTC m=+0.116092770 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 05:31:27 np0005486808 nova_compute[259627]: 2025-10-14 09:31:27.022 2 DEBUG nova.compute.manager [req-3081ddd7-04a4-4cdb-b06f-0090d35650d8 req-eed429d9-f3c8-4afa-8a46-e88f0a82e948 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Received event network-vif-unplugged-20f15fae-2789-43f5-8ca3-2a412dba5625 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:31:27 np0005486808 nova_compute[259627]: 2025-10-14 09:31:27.023 2 DEBUG oslo_concurrency.lockutils [req-3081ddd7-04a4-4cdb-b06f-0090d35650d8 req-eed429d9-f3c8-4afa-8a46-e88f0a82e948 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4ff65022-c3e1-4ee6-b866-7892555ef52f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:31:27 np0005486808 nova_compute[259627]: 2025-10-14 09:31:27.023 2 DEBUG oslo_concurrency.lockutils [req-3081ddd7-04a4-4cdb-b06f-0090d35650d8 req-eed429d9-f3c8-4afa-8a46-e88f0a82e948 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4ff65022-c3e1-4ee6-b866-7892555ef52f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:31:27 np0005486808 nova_compute[259627]: 2025-10-14 09:31:27.023 2 DEBUG oslo_concurrency.lockutils [req-3081ddd7-04a4-4cdb-b06f-0090d35650d8 req-eed429d9-f3c8-4afa-8a46-e88f0a82e948 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4ff65022-c3e1-4ee6-b866-7892555ef52f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:31:27 np0005486808 nova_compute[259627]: 2025-10-14 09:31:27.024 2 DEBUG nova.compute.manager [req-3081ddd7-04a4-4cdb-b06f-0090d35650d8 req-eed429d9-f3c8-4afa-8a46-e88f0a82e948 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] No waiting events found dispatching network-vif-unplugged-20f15fae-2789-43f5-8ca3-2a412dba5625 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:31:27 np0005486808 nova_compute[259627]: 2025-10-14 09:31:27.024 2 DEBUG nova.compute.manager [req-3081ddd7-04a4-4cdb-b06f-0090d35650d8 req-eed429d9-f3c8-4afa-8a46-e88f0a82e948 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Received event network-vif-unplugged-20f15fae-2789-43f5-8ca3-2a412dba5625 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:31:27 np0005486808 nova_compute[259627]: 2025-10-14 09:31:27.273 2 INFO nova.virt.libvirt.driver [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Deleting instance files /var/lib/nova/instances/4ff65022-c3e1-4ee6-b866-7892555ef52f_del#033[00m
Oct 14 05:31:27 np0005486808 nova_compute[259627]: 2025-10-14 09:31:27.274 2 INFO nova.virt.libvirt.driver [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Deletion of /var/lib/nova/instances/4ff65022-c3e1-4ee6-b866-7892555ef52f_del complete#033[00m
Oct 14 05:31:27 np0005486808 nova_compute[259627]: 2025-10-14 09:31:27.369 2 INFO nova.compute.manager [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:31:27 np0005486808 nova_compute[259627]: 2025-10-14 09:31:27.369 2 DEBUG oslo.service.loopingcall [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:31:27 np0005486808 nova_compute[259627]: 2025-10-14 09:31:27.369 2 DEBUG nova.compute.manager [-] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:31:27 np0005486808 nova_compute[259627]: 2025-10-14 09:31:27.369 2 DEBUG nova.network.neutron [-] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:31:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:31:28 np0005486808 nova_compute[259627]: 2025-10-14 09:31:28.218 2 DEBUG nova.network.neutron [-] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:31:28 np0005486808 nova_compute[259627]: 2025-10-14 09:31:28.238 2 INFO nova.compute.manager [-] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Took 0.87 seconds to deallocate network for instance.#033[00m
Oct 14 05:31:28 np0005486808 nova_compute[259627]: 2025-10-14 09:31:28.310 2 DEBUG oslo_concurrency.lockutils [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:31:28 np0005486808 nova_compute[259627]: 2025-10-14 09:31:28.311 2 DEBUG oslo_concurrency.lockutils [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:31:28 np0005486808 nova_compute[259627]: 2025-10-14 09:31:28.400 2 DEBUG oslo_concurrency.processutils [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:31:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2350: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 650 KiB/s rd, 4.3 MiB/s wr, 126 op/s
Oct 14 05:31:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:31:28 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1431981852' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:31:28 np0005486808 nova_compute[259627]: 2025-10-14 09:31:28.862 2 DEBUG oslo_concurrency.processutils [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:31:28 np0005486808 nova_compute[259627]: 2025-10-14 09:31:28.870 2 DEBUG nova.compute.provider_tree [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:31:28 np0005486808 nova_compute[259627]: 2025-10-14 09:31:28.890 2 DEBUG nova.scheduler.client.report [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:31:28 np0005486808 nova_compute[259627]: 2025-10-14 09:31:28.925 2 DEBUG oslo_concurrency.lockutils [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:31:28 np0005486808 nova_compute[259627]: 2025-10-14 09:31:28.970 2 INFO nova.scheduler.client.report [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Deleted allocations for instance 4ff65022-c3e1-4ee6-b866-7892555ef52f#033[00m
Oct 14 05:31:29 np0005486808 nova_compute[259627]: 2025-10-14 09:31:29.078 2 DEBUG oslo_concurrency.lockutils [None req-3cbb2b91-d913-4972-90af-b3fdac994395 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4ff65022-c3e1-4ee6-b866-7892555ef52f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.437s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:31:29 np0005486808 nova_compute[259627]: 2025-10-14 09:31:29.134 2 DEBUG nova.compute.manager [req-2d33660f-1316-4def-8066-366f3584cd4f req-715af3c3-5836-4154-8990-5fa3e62df7c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Received event network-vif-plugged-20f15fae-2789-43f5-8ca3-2a412dba5625 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:31:29 np0005486808 nova_compute[259627]: 2025-10-14 09:31:29.135 2 DEBUG oslo_concurrency.lockutils [req-2d33660f-1316-4def-8066-366f3584cd4f req-715af3c3-5836-4154-8990-5fa3e62df7c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4ff65022-c3e1-4ee6-b866-7892555ef52f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:31:29 np0005486808 nova_compute[259627]: 2025-10-14 09:31:29.136 2 DEBUG oslo_concurrency.lockutils [req-2d33660f-1316-4def-8066-366f3584cd4f req-715af3c3-5836-4154-8990-5fa3e62df7c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4ff65022-c3e1-4ee6-b866-7892555ef52f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:31:29 np0005486808 nova_compute[259627]: 2025-10-14 09:31:29.137 2 DEBUG oslo_concurrency.lockutils [req-2d33660f-1316-4def-8066-366f3584cd4f req-715af3c3-5836-4154-8990-5fa3e62df7c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4ff65022-c3e1-4ee6-b866-7892555ef52f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:31:29 np0005486808 nova_compute[259627]: 2025-10-14 09:31:29.137 2 DEBUG nova.compute.manager [req-2d33660f-1316-4def-8066-366f3584cd4f req-715af3c3-5836-4154-8990-5fa3e62df7c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] No waiting events found dispatching network-vif-plugged-20f15fae-2789-43f5-8ca3-2a412dba5625 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:31:29 np0005486808 nova_compute[259627]: 2025-10-14 09:31:29.138 2 WARNING nova.compute.manager [req-2d33660f-1316-4def-8066-366f3584cd4f req-715af3c3-5836-4154-8990-5fa3e62df7c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Received unexpected event network-vif-plugged-20f15fae-2789-43f5-8ca3-2a412dba5625 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:31:29 np0005486808 nova_compute[259627]: 2025-10-14 09:31:29.139 2 DEBUG nova.compute.manager [req-2d33660f-1316-4def-8066-366f3584cd4f req-715af3c3-5836-4154-8990-5fa3e62df7c9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Received event network-vif-deleted-20f15fae-2789-43f5-8ca3-2a412dba5625 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:31:30 np0005486808 nova_compute[259627]: 2025-10-14 09:31:30.233 2 DEBUG nova.compute.manager [req-04cda5f4-151b-4be3-bc25-8b3d0fecd96f req-3be8ecc8-c6e6-4e6f-953c-cb49f810c3f7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Received event network-changed-2f6bb222-680e-469f-83d5-517735604bb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:31:30 np0005486808 nova_compute[259627]: 2025-10-14 09:31:30.233 2 DEBUG nova.compute.manager [req-04cda5f4-151b-4be3-bc25-8b3d0fecd96f req-3be8ecc8-c6e6-4e6f-953c-cb49f810c3f7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Refreshing instance network info cache due to event network-changed-2f6bb222-680e-469f-83d5-517735604bb0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:31:30 np0005486808 nova_compute[259627]: 2025-10-14 09:31:30.234 2 DEBUG oslo_concurrency.lockutils [req-04cda5f4-151b-4be3-bc25-8b3d0fecd96f req-3be8ecc8-c6e6-4e6f-953c-cb49f810c3f7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:31:30 np0005486808 nova_compute[259627]: 2025-10-14 09:31:30.234 2 DEBUG oslo_concurrency.lockutils [req-04cda5f4-151b-4be3-bc25-8b3d0fecd96f req-3be8ecc8-c6e6-4e6f-953c-cb49f810c3f7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:31:30 np0005486808 nova_compute[259627]: 2025-10-14 09:31:30.234 2 DEBUG nova.network.neutron [req-04cda5f4-151b-4be3-bc25-8b3d0fecd96f req-3be8ecc8-c6e6-4e6f-953c-cb49f810c3f7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Refreshing network info cache for port 2f6bb222-680e-469f-83d5-517735604bb0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:31:30 np0005486808 nova_compute[259627]: 2025-10-14 09:31:30.314 2 DEBUG oslo_concurrency.lockutils [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:31:30 np0005486808 nova_compute[259627]: 2025-10-14 09:31:30.314 2 DEBUG oslo_concurrency.lockutils [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:31:30 np0005486808 nova_compute[259627]: 2025-10-14 09:31:30.315 2 DEBUG oslo_concurrency.lockutils [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:31:30 np0005486808 nova_compute[259627]: 2025-10-14 09:31:30.315 2 DEBUG oslo_concurrency.lockutils [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:31:30 np0005486808 nova_compute[259627]: 2025-10-14 09:31:30.316 2 DEBUG oslo_concurrency.lockutils [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:31:30 np0005486808 nova_compute[259627]: 2025-10-14 09:31:30.317 2 INFO nova.compute.manager [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Terminating instance#033[00m
Oct 14 05:31:30 np0005486808 nova_compute[259627]: 2025-10-14 09:31:30.319 2 DEBUG nova.compute.manager [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:31:30 np0005486808 kernel: tap2f6bb222-68 (unregistering): left promiscuous mode
Oct 14 05:31:30 np0005486808 NetworkManager[44885]: <info>  [1760434290.3879] device (tap2f6bb222-68): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:31:30 np0005486808 ovn_controller[152662]: 2025-10-14T09:31:30Z|01482|binding|INFO|Releasing lport 2f6bb222-680e-469f-83d5-517735604bb0 from this chassis (sb_readonly=0)
Oct 14 05:31:30 np0005486808 ovn_controller[152662]: 2025-10-14T09:31:30Z|01483|binding|INFO|Setting lport 2f6bb222-680e-469f-83d5-517735604bb0 down in Southbound
Oct 14 05:31:30 np0005486808 ovn_controller[152662]: 2025-10-14T09:31:30Z|01484|binding|INFO|Removing iface tap2f6bb222-68 ovn-installed in OVS
Oct 14 05:31:30 np0005486808 nova_compute[259627]: 2025-10-14 09:31:30.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2351: 305 pgs: 305 active+clean; 255 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 659 KiB/s rd, 4.3 MiB/s wr, 139 op/s
Oct 14 05:31:30 np0005486808 nova_compute[259627]: 2025-10-14 09:31:30.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:30 np0005486808 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d00000088.scope: Deactivated successfully.
Oct 14 05:31:30 np0005486808 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d00000088.scope: Consumed 15.002s CPU time.
Oct 14 05:31:30 np0005486808 nova_compute[259627]: 2025-10-14 09:31:30.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:30 np0005486808 systemd-machined[214636]: Machine qemu-169-instance-00000088 terminated.
Oct 14 05:31:30 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:30.474 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:bb:97 10.100.0.9'], port_security=['fa:16:3e:57:bb:97 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f754bf649a2404fa8dee732f5aab36e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2e0f5391-ee56-46d7-8aa9-9ba00efbe0e1 93b5966a-7949-42d1-a83d-7ff7c7667c63', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a89aaec7-5335-48cb-8b51-b7dcd7e1d5f5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=2f6bb222-680e-469f-83d5-517735604bb0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:31:30 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:30.477 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 2f6bb222-680e-469f-83d5-517735604bb0 in datapath 6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a unbound from our chassis#033[00m
Oct 14 05:31:30 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:30.479 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:31:30 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:30.481 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5a8d3d56-5022-47d9-ab07-93ea9ef4a7a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:30 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:30.481 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a namespace which is not needed anymore#033[00m
Oct 14 05:31:30 np0005486808 nova_compute[259627]: 2025-10-14 09:31:30.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:30 np0005486808 nova_compute[259627]: 2025-10-14 09:31:30.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:30 np0005486808 nova_compute[259627]: 2025-10-14 09:31:30.567 2 INFO nova.virt.libvirt.driver [-] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Instance destroyed successfully.#033[00m
Oct 14 05:31:30 np0005486808 nova_compute[259627]: 2025-10-14 09:31:30.568 2 DEBUG nova.objects.instance [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'resources' on Instance uuid 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:31:30 np0005486808 nova_compute[259627]: 2025-10-14 09:31:30.588 2 DEBUG nova.virt.libvirt.vif [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:30:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-541480337',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-541480337',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ac',id=136,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLEydbvfC6BL8JKAwPsMYw33jsGt7x8pmldi0JdcLroJxQM9Cv9y2WyT4/kdGyYCi1f6St8cX/paY9O5VNQjPEflw/+0a0KCs3SETvjwSZyH4RtpOVgJ2yOdUlm6DCl0mg==',key_name='tempest-TestSecurityGroupsBasicOps-55432425',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:30:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-jzso6cvr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:30:32Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2f6bb222-680e-469f-83d5-517735604bb0", "address": "fa:16:3e:57:bb:97", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6bb222-68", "ovs_interfaceid": "2f6bb222-680e-469f-83d5-517735604bb0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:31:30 np0005486808 nova_compute[259627]: 2025-10-14 09:31:30.589 2 DEBUG nova.network.os_vif_util [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "2f6bb222-680e-469f-83d5-517735604bb0", "address": "fa:16:3e:57:bb:97", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6bb222-68", "ovs_interfaceid": "2f6bb222-680e-469f-83d5-517735604bb0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:31:30 np0005486808 nova_compute[259627]: 2025-10-14 09:31:30.590 2 DEBUG nova.network.os_vif_util [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:57:bb:97,bridge_name='br-int',has_traffic_filtering=True,id=2f6bb222-680e-469f-83d5-517735604bb0,network=Network(6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f6bb222-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:31:30 np0005486808 nova_compute[259627]: 2025-10-14 09:31:30.594 2 DEBUG os_vif [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:bb:97,bridge_name='br-int',has_traffic_filtering=True,id=2f6bb222-680e-469f-83d5-517735604bb0,network=Network(6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f6bb222-68') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:31:30 np0005486808 nova_compute[259627]: 2025-10-14 09:31:30.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:30 np0005486808 nova_compute[259627]: 2025-10-14 09:31:30.597 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f6bb222-68, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:31:30 np0005486808 nova_compute[259627]: 2025-10-14 09:31:30.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:30 np0005486808 nova_compute[259627]: 2025-10-14 09:31:30.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:30 np0005486808 nova_compute[259627]: 2025-10-14 09:31:30.607 2 INFO os_vif [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:bb:97,bridge_name='br-int',has_traffic_filtering=True,id=2f6bb222-680e-469f-83d5-517735604bb0,network=Network(6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f6bb222-68')#033[00m
Oct 14 05:31:30 np0005486808 neutron-haproxy-ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a[399418]: [NOTICE]   (399422) : haproxy version is 2.8.14-c23fe91
Oct 14 05:31:30 np0005486808 neutron-haproxy-ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a[399418]: [NOTICE]   (399422) : path to executable is /usr/sbin/haproxy
Oct 14 05:31:30 np0005486808 neutron-haproxy-ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a[399418]: [WARNING]  (399422) : Exiting Master process...
Oct 14 05:31:30 np0005486808 neutron-haproxy-ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a[399418]: [ALERT]    (399422) : Current worker (399424) exited with code 143 (Terminated)
Oct 14 05:31:30 np0005486808 neutron-haproxy-ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a[399418]: [WARNING]  (399422) : All workers exited. Exiting... (0)
Oct 14 05:31:30 np0005486808 systemd[1]: libpod-ddef496733dc05bead0fc14ab2c0f7b4cf9e7cb1d8862dec34a66b8d04ab77a4.scope: Deactivated successfully.
Oct 14 05:31:30 np0005486808 podman[401565]: 2025-10-14 09:31:30.69973129 +0000 UTC m=+0.072802928 container died ddef496733dc05bead0fc14ab2c0f7b4cf9e7cb1d8862dec34a66b8d04ab77a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 14 05:31:30 np0005486808 systemd[1]: var-lib-containers-storage-overlay-f2a4ae5bac5fb7737a47f6362c904610fbce4171335446f4e8312252dc95d3e9-merged.mount: Deactivated successfully.
Oct 14 05:31:30 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ddef496733dc05bead0fc14ab2c0f7b4cf9e7cb1d8862dec34a66b8d04ab77a4-userdata-shm.mount: Deactivated successfully.
Oct 14 05:31:30 np0005486808 podman[401565]: 2025-10-14 09:31:30.766222141 +0000 UTC m=+0.139293489 container cleanup ddef496733dc05bead0fc14ab2c0f7b4cf9e7cb1d8862dec34a66b8d04ab77a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 05:31:30 np0005486808 systemd[1]: libpod-conmon-ddef496733dc05bead0fc14ab2c0f7b4cf9e7cb1d8862dec34a66b8d04ab77a4.scope: Deactivated successfully.
Oct 14 05:31:30 np0005486808 podman[401610]: 2025-10-14 09:31:30.838471904 +0000 UTC m=+0.047807684 container remove ddef496733dc05bead0fc14ab2c0f7b4cf9e7cb1d8862dec34a66b8d04ab77a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 05:31:30 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:30.844 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fae4238d-bad7-4f3b-a6e3-d140c50493c4]: (4, ('Tue Oct 14 09:31:30 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a (ddef496733dc05bead0fc14ab2c0f7b4cf9e7cb1d8862dec34a66b8d04ab77a4)\nddef496733dc05bead0fc14ab2c0f7b4cf9e7cb1d8862dec34a66b8d04ab77a4\nTue Oct 14 09:31:30 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a (ddef496733dc05bead0fc14ab2c0f7b4cf9e7cb1d8862dec34a66b8d04ab77a4)\nddef496733dc05bead0fc14ab2c0f7b4cf9e7cb1d8862dec34a66b8d04ab77a4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:30 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:30.846 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7c88f41c-860d-4190-ac7c-687efc585e78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:30 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:30.846 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e05ecd2-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:31:30 np0005486808 nova_compute[259627]: 2025-10-14 09:31:30.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:30 np0005486808 kernel: tap6e05ecd2-80: left promiscuous mode
Oct 14 05:31:30 np0005486808 nova_compute[259627]: 2025-10-14 09:31:30.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:30 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:30.869 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2459610b-7a9d-494b-932f-98ca8a3af4b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:30 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:30.897 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3634177d-77ee-47b6-9239-7e2a65bc1267]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:30 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:30.898 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[35c9d179-fedb-458b-95ea-deb9027c715d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:30 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:30.916 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f78bb363-28ca-4a05-b895-8fc1af4aa0bb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 801032, 'reachable_time': 41238, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 401630, 'error': None, 'target': 'ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:30 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:30.918 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:31:30 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:30.918 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[ec3a0577-4259-4339-b81e-60213f8ae0bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:30 np0005486808 systemd[1]: run-netns-ovnmeta\x2d6e05ecd2\x2d8dfd\x2d4d78\x2d8fed\x2d31885a3cdf0a.mount: Deactivated successfully.
Oct 14 05:31:31 np0005486808 nova_compute[259627]: 2025-10-14 09:31:31.020 2 INFO nova.virt.libvirt.driver [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Deleting instance files /var/lib/nova/instances/4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4_del#033[00m
Oct 14 05:31:31 np0005486808 nova_compute[259627]: 2025-10-14 09:31:31.021 2 INFO nova.virt.libvirt.driver [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Deletion of /var/lib/nova/instances/4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4_del complete#033[00m
Oct 14 05:31:31 np0005486808 nova_compute[259627]: 2025-10-14 09:31:31.070 2 INFO nova.compute.manager [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:31:31 np0005486808 nova_compute[259627]: 2025-10-14 09:31:31.071 2 DEBUG oslo.service.loopingcall [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:31:31 np0005486808 nova_compute[259627]: 2025-10-14 09:31:31.071 2 DEBUG nova.compute.manager [-] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:31:31 np0005486808 nova_compute[259627]: 2025-10-14 09:31:31.071 2 DEBUG nova.network.neutron [-] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:31:31 np0005486808 nova_compute[259627]: 2025-10-14 09:31:31.242 2 DEBUG nova.compute.manager [req-8316a671-349e-499b-9b72-0e9555632477 req-596f39d6-f580-41d5-b129-babb90fb3b62 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Received event network-vif-unplugged-2f6bb222-680e-469f-83d5-517735604bb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:31:31 np0005486808 nova_compute[259627]: 2025-10-14 09:31:31.242 2 DEBUG oslo_concurrency.lockutils [req-8316a671-349e-499b-9b72-0e9555632477 req-596f39d6-f580-41d5-b129-babb90fb3b62 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:31:31 np0005486808 nova_compute[259627]: 2025-10-14 09:31:31.243 2 DEBUG oslo_concurrency.lockutils [req-8316a671-349e-499b-9b72-0e9555632477 req-596f39d6-f580-41d5-b129-babb90fb3b62 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:31:31 np0005486808 nova_compute[259627]: 2025-10-14 09:31:31.243 2 DEBUG oslo_concurrency.lockutils [req-8316a671-349e-499b-9b72-0e9555632477 req-596f39d6-f580-41d5-b129-babb90fb3b62 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:31:31 np0005486808 nova_compute[259627]: 2025-10-14 09:31:31.243 2 DEBUG nova.compute.manager [req-8316a671-349e-499b-9b72-0e9555632477 req-596f39d6-f580-41d5-b129-babb90fb3b62 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] No waiting events found dispatching network-vif-unplugged-2f6bb222-680e-469f-83d5-517735604bb0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:31:31 np0005486808 nova_compute[259627]: 2025-10-14 09:31:31.243 2 DEBUG nova.compute.manager [req-8316a671-349e-499b-9b72-0e9555632477 req-596f39d6-f580-41d5-b129-babb90fb3b62 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Received event network-vif-unplugged-2f6bb222-680e-469f-83d5-517735604bb0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:31:31 np0005486808 nova_compute[259627]: 2025-10-14 09:31:31.243 2 DEBUG nova.compute.manager [req-8316a671-349e-499b-9b72-0e9555632477 req-596f39d6-f580-41d5-b129-babb90fb3b62 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Received event network-vif-plugged-2f6bb222-680e-469f-83d5-517735604bb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:31:31 np0005486808 nova_compute[259627]: 2025-10-14 09:31:31.244 2 DEBUG oslo_concurrency.lockutils [req-8316a671-349e-499b-9b72-0e9555632477 req-596f39d6-f580-41d5-b129-babb90fb3b62 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:31:31 np0005486808 nova_compute[259627]: 2025-10-14 09:31:31.244 2 DEBUG oslo_concurrency.lockutils [req-8316a671-349e-499b-9b72-0e9555632477 req-596f39d6-f580-41d5-b129-babb90fb3b62 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:31:31 np0005486808 nova_compute[259627]: 2025-10-14 09:31:31.244 2 DEBUG oslo_concurrency.lockutils [req-8316a671-349e-499b-9b72-0e9555632477 req-596f39d6-f580-41d5-b129-babb90fb3b62 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:31:31 np0005486808 nova_compute[259627]: 2025-10-14 09:31:31.244 2 DEBUG nova.compute.manager [req-8316a671-349e-499b-9b72-0e9555632477 req-596f39d6-f580-41d5-b129-babb90fb3b62 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] No waiting events found dispatching network-vif-plugged-2f6bb222-680e-469f-83d5-517735604bb0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:31:31 np0005486808 nova_compute[259627]: 2025-10-14 09:31:31.244 2 WARNING nova.compute.manager [req-8316a671-349e-499b-9b72-0e9555632477 req-596f39d6-f580-41d5-b129-babb90fb3b62 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Received unexpected event network-vif-plugged-2f6bb222-680e-469f-83d5-517735604bb0 for instance with vm_state active and task_state deleting.#033[00m
Oct 14 05:31:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2352: 305 pgs: 305 active+clean; 200 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 610 KiB/s rd, 3.7 MiB/s wr, 138 op/s
Oct 14 05:31:32 np0005486808 nova_compute[259627]: 2025-10-14 09:31:32.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:32.508 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:31:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:32.509 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 14 05:31:32 np0005486808 nova_compute[259627]: 2025-10-14 09:31:32.525 2 DEBUG nova.network.neutron [-] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:31:32 np0005486808 nova_compute[259627]: 2025-10-14 09:31:32.558 2 INFO nova.compute.manager [-] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Took 1.49 seconds to deallocate network for instance.#033[00m
Oct 14 05:31:32 np0005486808 nova_compute[259627]: 2025-10-14 09:31:32.611 2 DEBUG oslo_concurrency.lockutils [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:31:32 np0005486808 nova_compute[259627]: 2025-10-14 09:31:32.612 2 DEBUG oslo_concurrency.lockutils [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:31:32 np0005486808 nova_compute[259627]: 2025-10-14 09:31:32.638 2 DEBUG nova.network.neutron [req-04cda5f4-151b-4be3-bc25-8b3d0fecd96f req-3be8ecc8-c6e6-4e6f-953c-cb49f810c3f7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Updated VIF entry in instance network info cache for port 2f6bb222-680e-469f-83d5-517735604bb0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:31:32 np0005486808 nova_compute[259627]: 2025-10-14 09:31:32.638 2 DEBUG nova.network.neutron [req-04cda5f4-151b-4be3-bc25-8b3d0fecd96f req-3be8ecc8-c6e6-4e6f-953c-cb49f810c3f7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Updating instance_info_cache with network_info: [{"id": "2f6bb222-680e-469f-83d5-517735604bb0", "address": "fa:16:3e:57:bb:97", "network": {"id": "6e05ecd2-8dfd-4d78-8fed-31885a3cdf0a", "bridge": "br-int", "label": "tempest-network-smoke--1016720292", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6bb222-68", "ovs_interfaceid": "2f6bb222-680e-469f-83d5-517735604bb0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:31:32 np0005486808 nova_compute[259627]: 2025-10-14 09:31:32.658 2 DEBUG oslo_concurrency.lockutils [req-04cda5f4-151b-4be3-bc25-8b3d0fecd96f req-3be8ecc8-c6e6-4e6f-953c-cb49f810c3f7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:31:32 np0005486808 nova_compute[259627]: 2025-10-14 09:31:32.698 2 DEBUG oslo_concurrency.processutils [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:31:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:31:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:31:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:31:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:31:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:31:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:31:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:31:32
Oct 14 05:31:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:31:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:31:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'backups', 'vms', 'volumes', 'images', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data']
Oct 14 05:31:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:31:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:31:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:31:33 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/799793029' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:31:33 np0005486808 nova_compute[259627]: 2025-10-14 09:31:33.147 2 DEBUG oslo_concurrency.processutils [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:31:33 np0005486808 nova_compute[259627]: 2025-10-14 09:31:33.158 2 DEBUG nova.compute.provider_tree [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:31:33 np0005486808 nova_compute[259627]: 2025-10-14 09:31:33.178 2 DEBUG nova.scheduler.client.report [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:31:33 np0005486808 nova_compute[259627]: 2025-10-14 09:31:33.209 2 DEBUG oslo_concurrency.lockutils [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:31:33 np0005486808 nova_compute[259627]: 2025-10-14 09:31:33.240 2 INFO nova.scheduler.client.report [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Deleted allocations for instance 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4#033[00m
Oct 14 05:31:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:31:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:31:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:31:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:31:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:31:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:31:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:31:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:31:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:31:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:31:33 np0005486808 nova_compute[259627]: 2025-10-14 09:31:33.370 2 DEBUG oslo_concurrency.lockutils [None req-81045620-aa90-4c05-aef7-da592077ef88 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:31:33 np0005486808 nova_compute[259627]: 2025-10-14 09:31:33.377 2 DEBUG nova.compute.manager [req-e5663e77-7867-4032-a6f7-b5d3d754eede req-713d39ad-2d1f-4f1d-852c-85bd115d6437 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Received event network-vif-deleted-2f6bb222-680e-469f-83d5-517735604bb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:31:33 np0005486808 nova_compute[259627]: 2025-10-14 09:31:33.378 2 INFO nova.compute.manager [req-e5663e77-7867-4032-a6f7-b5d3d754eede req-713d39ad-2d1f-4f1d-852c-85bd115d6437 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Neutron deleted interface 2f6bb222-680e-469f-83d5-517735604bb0; detaching it from the instance and deleting it from the info cache#033[00m
Oct 14 05:31:33 np0005486808 nova_compute[259627]: 2025-10-14 09:31:33.379 2 DEBUG nova.network.neutron [req-e5663e77-7867-4032-a6f7-b5d3d754eede req-713d39ad-2d1f-4f1d-852c-85bd115d6437 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:31:33 np0005486808 nova_compute[259627]: 2025-10-14 09:31:33.398 2 DEBUG nova.compute.manager [req-e5663e77-7867-4032-a6f7-b5d3d754eede req-713d39ad-2d1f-4f1d-852c-85bd115d6437 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Detach interface failed, port_id=2f6bb222-680e-469f-83d5-517735604bb0, reason: Instance 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct 14 05:31:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2353: 305 pgs: 305 active+clean; 200 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 392 KiB/s rd, 2.2 MiB/s wr, 107 op/s
Oct 14 05:31:35 np0005486808 nova_compute[259627]: 2025-10-14 09:31:35.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:35 np0005486808 nova_compute[259627]: 2025-10-14 09:31:35.505 2 DEBUG oslo_concurrency.lockutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "c9315047-de1c-423a-adfa-118d77df3c94" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:31:35 np0005486808 nova_compute[259627]: 2025-10-14 09:31:35.505 2 DEBUG oslo_concurrency.lockutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:31:35 np0005486808 nova_compute[259627]: 2025-10-14 09:31:35.527 2 DEBUG nova.compute.manager [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:31:35 np0005486808 nova_compute[259627]: 2025-10-14 09:31:35.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:35 np0005486808 nova_compute[259627]: 2025-10-14 09:31:35.655 2 DEBUG oslo_concurrency.lockutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:31:35 np0005486808 nova_compute[259627]: 2025-10-14 09:31:35.656 2 DEBUG oslo_concurrency.lockutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:31:35 np0005486808 nova_compute[259627]: 2025-10-14 09:31:35.675 2 DEBUG nova.virt.hardware [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:31:35 np0005486808 nova_compute[259627]: 2025-10-14 09:31:35.676 2 INFO nova.compute.claims [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:31:35 np0005486808 nova_compute[259627]: 2025-10-14 09:31:35.881 2 DEBUG oslo_concurrency.processutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:31:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:31:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1000465943' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:31:36 np0005486808 nova_compute[259627]: 2025-10-14 09:31:36.335 2 DEBUG oslo_concurrency.processutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:31:36 np0005486808 nova_compute[259627]: 2025-10-14 09:31:36.342 2 DEBUG nova.compute.provider_tree [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:31:36 np0005486808 nova_compute[259627]: 2025-10-14 09:31:36.359 2 DEBUG nova.scheduler.client.report [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:31:36 np0005486808 nova_compute[259627]: 2025-10-14 09:31:36.382 2 DEBUG oslo_concurrency.lockutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:31:36 np0005486808 nova_compute[259627]: 2025-10-14 09:31:36.383 2 DEBUG nova.compute.manager [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:31:36 np0005486808 nova_compute[259627]: 2025-10-14 09:31:36.434 2 DEBUG nova.compute.manager [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:31:36 np0005486808 nova_compute[259627]: 2025-10-14 09:31:36.435 2 DEBUG nova.network.neutron [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:31:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2354: 305 pgs: 305 active+clean; 121 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 412 KiB/s rd, 2.2 MiB/s wr, 135 op/s
Oct 14 05:31:36 np0005486808 nova_compute[259627]: 2025-10-14 09:31:36.467 2 INFO nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:31:36 np0005486808 nova_compute[259627]: 2025-10-14 09:31:36.492 2 DEBUG nova.compute.manager [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:31:36 np0005486808 nova_compute[259627]: 2025-10-14 09:31:36.597 2 DEBUG nova.compute.manager [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:31:36 np0005486808 nova_compute[259627]: 2025-10-14 09:31:36.601 2 DEBUG nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:31:36 np0005486808 nova_compute[259627]: 2025-10-14 09:31:36.602 2 INFO nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Creating image(s)#033[00m
Oct 14 05:31:36 np0005486808 nova_compute[259627]: 2025-10-14 09:31:36.628 2 DEBUG nova.storage.rbd_utils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image c9315047-de1c-423a-adfa-118d77df3c94_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:31:36 np0005486808 ovn_controller[152662]: 2025-10-14T09:31:36Z|01485|binding|INFO|Releasing lport 1308be16-f790-4063-acf0-2c8f6fdde665 from this chassis (sb_readonly=0)
Oct 14 05:31:36 np0005486808 ovn_controller[152662]: 2025-10-14T09:31:36Z|01486|binding|INFO|Releasing lport 9754cafe-7819-456f-943e-9907e7f07233 from this chassis (sb_readonly=0)
Oct 14 05:31:36 np0005486808 nova_compute[259627]: 2025-10-14 09:31:36.657 2 DEBUG nova.storage.rbd_utils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image c9315047-de1c-423a-adfa-118d77df3c94_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:31:36 np0005486808 nova_compute[259627]: 2025-10-14 09:31:36.684 2 DEBUG nova.storage.rbd_utils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image c9315047-de1c-423a-adfa-118d77df3c94_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:31:36 np0005486808 nova_compute[259627]: 2025-10-14 09:31:36.688 2 DEBUG oslo_concurrency.processutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:31:36 np0005486808 nova_compute[259627]: 2025-10-14 09:31:36.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:36 np0005486808 nova_compute[259627]: 2025-10-14 09:31:36.734 2 DEBUG nova.policy [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30cd4a7832e74a8ba07238937405c4ea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:31:36 np0005486808 nova_compute[259627]: 2025-10-14 09:31:36.766 2 DEBUG oslo_concurrency.processutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:31:36 np0005486808 nova_compute[259627]: 2025-10-14 09:31:36.767 2 DEBUG oslo_concurrency.lockutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:31:36 np0005486808 nova_compute[259627]: 2025-10-14 09:31:36.767 2 DEBUG oslo_concurrency.lockutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:31:36 np0005486808 nova_compute[259627]: 2025-10-14 09:31:36.768 2 DEBUG oslo_concurrency.lockutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:31:36 np0005486808 nova_compute[259627]: 2025-10-14 09:31:36.792 2 DEBUG nova.storage.rbd_utils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image c9315047-de1c-423a-adfa-118d77df3c94_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:31:36 np0005486808 nova_compute[259627]: 2025-10-14 09:31:36.796 2 DEBUG oslo_concurrency.processutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 c9315047-de1c-423a-adfa-118d77df3c94_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:31:37 np0005486808 nova_compute[259627]: 2025-10-14 09:31:37.095 2 DEBUG oslo_concurrency.processutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 c9315047-de1c-423a-adfa-118d77df3c94_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:31:37 np0005486808 nova_compute[259627]: 2025-10-14 09:31:37.185 2 DEBUG nova.storage.rbd_utils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] resizing rbd image c9315047-de1c-423a-adfa-118d77df3c94_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:31:37 np0005486808 nova_compute[259627]: 2025-10-14 09:31:37.290 2 DEBUG nova.objects.instance [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'migration_context' on Instance uuid c9315047-de1c-423a-adfa-118d77df3c94 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:31:37 np0005486808 nova_compute[259627]: 2025-10-14 09:31:37.317 2 DEBUG nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:31:37 np0005486808 nova_compute[259627]: 2025-10-14 09:31:37.318 2 DEBUG nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Ensure instance console log exists: /var/lib/nova/instances/c9315047-de1c-423a-adfa-118d77df3c94/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:31:37 np0005486808 nova_compute[259627]: 2025-10-14 09:31:37.318 2 DEBUG oslo_concurrency.lockutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:31:37 np0005486808 nova_compute[259627]: 2025-10-14 09:31:37.319 2 DEBUG oslo_concurrency.lockutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:31:37 np0005486808 nova_compute[259627]: 2025-10-14 09:31:37.319 2 DEBUG oslo_concurrency.lockutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:31:37 np0005486808 nova_compute[259627]: 2025-10-14 09:31:37.588 2 DEBUG nova.network.neutron [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Successfully created port: 63edf6de-b6e6-4be7-870e-062e8186ec37 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:31:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:31:38 np0005486808 nova_compute[259627]: 2025-10-14 09:31:38.355 2 DEBUG nova.network.neutron [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Successfully created port: 787335a5-4b97-43d1-ba56-12091ebdecdb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:31:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2355: 305 pgs: 305 active+clean; 121 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 16 KiB/s wr, 56 op/s
Oct 14 05:31:39 np0005486808 nova_compute[259627]: 2025-10-14 09:31:39.409 2 DEBUG nova.network.neutron [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Successfully updated port: 63edf6de-b6e6-4be7-870e-062e8186ec37 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:31:39 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:39.511 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:31:39 np0005486808 nova_compute[259627]: 2025-10-14 09:31:39.572 2 DEBUG nova.compute.manager [req-7d3412ba-6193-42af-a2cf-2a76e7af25ce req-7c12588c-68ba-436d-8189-b0d2d8ae03f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Received event network-changed-63edf6de-b6e6-4be7-870e-062e8186ec37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:31:39 np0005486808 nova_compute[259627]: 2025-10-14 09:31:39.573 2 DEBUG nova.compute.manager [req-7d3412ba-6193-42af-a2cf-2a76e7af25ce req-7c12588c-68ba-436d-8189-b0d2d8ae03f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Refreshing instance network info cache due to event network-changed-63edf6de-b6e6-4be7-870e-062e8186ec37. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:31:39 np0005486808 nova_compute[259627]: 2025-10-14 09:31:39.574 2 DEBUG oslo_concurrency.lockutils [req-7d3412ba-6193-42af-a2cf-2a76e7af25ce req-7c12588c-68ba-436d-8189-b0d2d8ae03f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-c9315047-de1c-423a-adfa-118d77df3c94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:31:39 np0005486808 nova_compute[259627]: 2025-10-14 09:31:39.575 2 DEBUG oslo_concurrency.lockutils [req-7d3412ba-6193-42af-a2cf-2a76e7af25ce req-7c12588c-68ba-436d-8189-b0d2d8ae03f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-c9315047-de1c-423a-adfa-118d77df3c94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:31:39 np0005486808 nova_compute[259627]: 2025-10-14 09:31:39.575 2 DEBUG nova.network.neutron [req-7d3412ba-6193-42af-a2cf-2a76e7af25ce req-7c12588c-68ba-436d-8189-b0d2d8ae03f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Refreshing network info cache for port 63edf6de-b6e6-4be7-870e-062e8186ec37 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:31:39 np0005486808 nova_compute[259627]: 2025-10-14 09:31:39.783 2 DEBUG nova.network.neutron [req-7d3412ba-6193-42af-a2cf-2a76e7af25ce req-7c12588c-68ba-436d-8189-b0d2d8ae03f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:31:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2356: 305 pgs: 305 active+clean; 134 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 572 KiB/s wr, 68 op/s
Oct 14 05:31:40 np0005486808 nova_compute[259627]: 2025-10-14 09:31:40.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:40 np0005486808 nova_compute[259627]: 2025-10-14 09:31:40.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:41 np0005486808 nova_compute[259627]: 2025-10-14 09:31:41.845 2 DEBUG nova.network.neutron [req-7d3412ba-6193-42af-a2cf-2a76e7af25ce req-7c12588c-68ba-436d-8189-b0d2d8ae03f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:31:41 np0005486808 nova_compute[259627]: 2025-10-14 09:31:41.875 2 DEBUG oslo_concurrency.lockutils [req-7d3412ba-6193-42af-a2cf-2a76e7af25ce req-7c12588c-68ba-436d-8189-b0d2d8ae03f5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-c9315047-de1c-423a-adfa-118d77df3c94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:31:41 np0005486808 nova_compute[259627]: 2025-10-14 09:31:41.885 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434286.8846073, 4ff65022-c3e1-4ee6-b866-7892555ef52f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:31:41 np0005486808 nova_compute[259627]: 2025-10-14 09:31:41.885 2 INFO nova.compute.manager [-] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:31:41 np0005486808 nova_compute[259627]: 2025-10-14 09:31:41.931 2 DEBUG nova.compute.manager [None req-facf29a5-1ed0-4a0e-a922-d9052f6e75a2 - - - - - -] [instance: 4ff65022-c3e1-4ee6-b866-7892555ef52f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:31:41 np0005486808 nova_compute[259627]: 2025-10-14 09:31:41.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:31:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2357: 305 pgs: 305 active+clean; 167 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 1.8 MiB/s wr, 70 op/s
Oct 14 05:31:42 np0005486808 nova_compute[259627]: 2025-10-14 09:31:42.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:42 np0005486808 nova_compute[259627]: 2025-10-14 09:31:42.585 2 DEBUG nova.network.neutron [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Successfully updated port: 787335a5-4b97-43d1-ba56-12091ebdecdb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:31:42 np0005486808 nova_compute[259627]: 2025-10-14 09:31:42.602 2 DEBUG oslo_concurrency.lockutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "refresh_cache-c9315047-de1c-423a-adfa-118d77df3c94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:31:42 np0005486808 nova_compute[259627]: 2025-10-14 09:31:42.602 2 DEBUG oslo_concurrency.lockutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquired lock "refresh_cache-c9315047-de1c-423a-adfa-118d77df3c94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:31:42 np0005486808 nova_compute[259627]: 2025-10-14 09:31:42.603 2 DEBUG nova.network.neutron [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:31:42 np0005486808 nova_compute[259627]: 2025-10-14 09:31:42.705 2 DEBUG nova.compute.manager [req-d54eab3c-374b-43d7-a604-a250974e2e61 req-f7645abd-53ce-47c6-a23a-e0b006b93202 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Received event network-changed-787335a5-4b97-43d1-ba56-12091ebdecdb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:31:42 np0005486808 nova_compute[259627]: 2025-10-14 09:31:42.705 2 DEBUG nova.compute.manager [req-d54eab3c-374b-43d7-a604-a250974e2e61 req-f7645abd-53ce-47c6-a23a-e0b006b93202 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Refreshing instance network info cache due to event network-changed-787335a5-4b97-43d1-ba56-12091ebdecdb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:31:42 np0005486808 nova_compute[259627]: 2025-10-14 09:31:42.706 2 DEBUG oslo_concurrency.lockutils [req-d54eab3c-374b-43d7-a604-a250974e2e61 req-f7645abd-53ce-47c6-a23a-e0b006b93202 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-c9315047-de1c-423a-adfa-118d77df3c94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:31:42 np0005486808 nova_compute[259627]: 2025-10-14 09:31:42.797 2 DEBUG nova.network.neutron [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:31:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:31:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:31:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:31:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:31:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:31:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011053336833365891 of space, bias 1.0, pg target 0.33160010500097675 quantized to 32 (current 32)
Oct 14 05:31:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:31:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:31:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:31:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:31:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:31:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 05:31:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:31:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:31:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:31:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:31:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:31:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:31:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:31:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:31:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:31:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:31:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:31:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:31:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2358: 305 pgs: 305 active+clean; 167 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Oct 14 05:31:44 np0005486808 nova_compute[259627]: 2025-10-14 09:31:44.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:31:44 np0005486808 nova_compute[259627]: 2025-10-14 09:31:44.997 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:31:44 np0005486808 nova_compute[259627]: 2025-10-14 09:31:44.998 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:31:44 np0005486808 nova_compute[259627]: 2025-10-14 09:31:44.998 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:31:44 np0005486808 nova_compute[259627]: 2025-10-14 09:31:44.998 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:31:44 np0005486808 nova_compute[259627]: 2025-10-14 09:31:44.999 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:31:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:31:45 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2036949519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:31:45 np0005486808 nova_compute[259627]: 2025-10-14 09:31:45.501 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:31:45 np0005486808 nova_compute[259627]: 2025-10-14 09:31:45.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:45 np0005486808 nova_compute[259627]: 2025-10-14 09:31:45.565 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434290.5645478, 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:31:45 np0005486808 nova_compute[259627]: 2025-10-14 09:31:45.566 2 INFO nova.compute.manager [-] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:31:45 np0005486808 nova_compute[259627]: 2025-10-14 09:31:45.590 2 DEBUG nova.compute.manager [None req-444bcb3f-9f01-4ea4-aeae-8da2ede8bcca - - - - - -] [instance: 4fbbe4ed-658d-4f5a-ba5b-d9f5dad073f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:31:45 np0005486808 nova_compute[259627]: 2025-10-14 09:31:45.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:45 np0005486808 nova_compute[259627]: 2025-10-14 09:31:45.667 2 DEBUG nova.network.neutron [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Updating instance_info_cache with network_info: [{"id": "63edf6de-b6e6-4be7-870e-062e8186ec37", "address": "fa:16:3e:20:cb:ea", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63edf6de-b6", "ovs_interfaceid": "63edf6de-b6e6-4be7-870e-062e8186ec37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "787335a5-4b97-43d1-ba56-12091ebdecdb", "address": "fa:16:3e:df:a9:10", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedf:a910", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap787335a5-4b", "ovs_interfaceid": "787335a5-4b97-43d1-ba56-12091ebdecdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:31:45 np0005486808 nova_compute[259627]: 2025-10-14 09:31:45.681 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:31:45 np0005486808 nova_compute[259627]: 2025-10-14 09:31:45.682 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:31:45 np0005486808 nova_compute[259627]: 2025-10-14 09:31:45.689 2 DEBUG oslo_concurrency.lockutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Releasing lock "refresh_cache-c9315047-de1c-423a-adfa-118d77df3c94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:31:45 np0005486808 nova_compute[259627]: 2025-10-14 09:31:45.690 2 DEBUG nova.compute.manager [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Instance network_info: |[{"id": "63edf6de-b6e6-4be7-870e-062e8186ec37", "address": "fa:16:3e:20:cb:ea", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63edf6de-b6", "ovs_interfaceid": "63edf6de-b6e6-4be7-870e-062e8186ec37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "787335a5-4b97-43d1-ba56-12091ebdecdb", "address": "fa:16:3e:df:a9:10", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedf:a910", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap787335a5-4b", "ovs_interfaceid": "787335a5-4b97-43d1-ba56-12091ebdecdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:31:45 np0005486808 nova_compute[259627]: 2025-10-14 09:31:45.690 2 DEBUG oslo_concurrency.lockutils [req-d54eab3c-374b-43d7-a604-a250974e2e61 req-f7645abd-53ce-47c6-a23a-e0b006b93202 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-c9315047-de1c-423a-adfa-118d77df3c94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:31:45 np0005486808 nova_compute[259627]: 2025-10-14 09:31:45.691 2 DEBUG nova.network.neutron [req-d54eab3c-374b-43d7-a604-a250974e2e61 req-f7645abd-53ce-47c6-a23a-e0b006b93202 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Refreshing network info cache for port 787335a5-4b97-43d1-ba56-12091ebdecdb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:31:45 np0005486808 podman[401865]: 2025-10-14 09:31:45.697080383 +0000 UTC m=+0.099529984 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd)
Oct 14 05:31:45 np0005486808 nova_compute[259627]: 2025-10-14 09:31:45.698 2 DEBUG nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Start _get_guest_xml network_info=[{"id": "63edf6de-b6e6-4be7-870e-062e8186ec37", "address": "fa:16:3e:20:cb:ea", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63edf6de-b6", "ovs_interfaceid": "63edf6de-b6e6-4be7-870e-062e8186ec37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "787335a5-4b97-43d1-ba56-12091ebdecdb", "address": "fa:16:3e:df:a9:10", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedf:a910", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap787335a5-4b", "ovs_interfaceid": "787335a5-4b97-43d1-ba56-12091ebdecdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:31:45 np0005486808 nova_compute[259627]: 2025-10-14 09:31:45.705 2 WARNING nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:31:45 np0005486808 nova_compute[259627]: 2025-10-14 09:31:45.716 2 DEBUG nova.virt.libvirt.host [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:31:45 np0005486808 nova_compute[259627]: 2025-10-14 09:31:45.717 2 DEBUG nova.virt.libvirt.host [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:31:45 np0005486808 podman[401866]: 2025-10-14 09:31:45.72874713 +0000 UTC m=+0.129687664 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 14 05:31:45 np0005486808 nova_compute[259627]: 2025-10-14 09:31:45.732 2 DEBUG nova.virt.libvirt.host [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:31:45 np0005486808 nova_compute[259627]: 2025-10-14 09:31:45.733 2 DEBUG nova.virt.libvirt.host [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:31:45 np0005486808 nova_compute[259627]: 2025-10-14 09:31:45.734 2 DEBUG nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:31:45 np0005486808 nova_compute[259627]: 2025-10-14 09:31:45.734 2 DEBUG nova.virt.hardware [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:31:45 np0005486808 nova_compute[259627]: 2025-10-14 09:31:45.735 2 DEBUG nova.virt.hardware [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:31:45 np0005486808 nova_compute[259627]: 2025-10-14 09:31:45.735 2 DEBUG nova.virt.hardware [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:31:45 np0005486808 nova_compute[259627]: 2025-10-14 09:31:45.735 2 DEBUG nova.virt.hardware [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:31:45 np0005486808 nova_compute[259627]: 2025-10-14 09:31:45.736 2 DEBUG nova.virt.hardware [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:31:45 np0005486808 nova_compute[259627]: 2025-10-14 09:31:45.736 2 DEBUG nova.virt.hardware [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:31:45 np0005486808 nova_compute[259627]: 2025-10-14 09:31:45.736 2 DEBUG nova.virt.hardware [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:31:45 np0005486808 nova_compute[259627]: 2025-10-14 09:31:45.736 2 DEBUG nova.virt.hardware [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:31:45 np0005486808 nova_compute[259627]: 2025-10-14 09:31:45.737 2 DEBUG nova.virt.hardware [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:31:45 np0005486808 nova_compute[259627]: 2025-10-14 09:31:45.737 2 DEBUG nova.virt.hardware [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:31:45 np0005486808 nova_compute[259627]: 2025-10-14 09:31:45.737 2 DEBUG nova.virt.hardware [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:31:45 np0005486808 nova_compute[259627]: 2025-10-14 09:31:45.741 2 DEBUG oslo_concurrency.processutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:31:45 np0005486808 nova_compute[259627]: 2025-10-14 09:31:45.939 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:31:45 np0005486808 nova_compute[259627]: 2025-10-14 09:31:45.940 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3395MB free_disk=59.921974182128906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:31:45 np0005486808 nova_compute[259627]: 2025-10-14 09:31:45.941 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:31:45 np0005486808 nova_compute[259627]: 2025-10-14 09:31:45.941 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.019 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance b30a994a-5fb7-4344-9944-98d3d75d3b04 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.019 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance c9315047-de1c-423a-adfa-118d77df3c94 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.020 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.020 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.072 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:31:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:31:46 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2121337902' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.166 2 DEBUG oslo_concurrency.processutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.203 2 DEBUG nova.storage.rbd_utils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image c9315047-de1c-423a-adfa-118d77df3c94_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.209 2 DEBUG oslo_concurrency.processutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:31:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2359: 305 pgs: 305 active+clean; 167 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Oct 14 05:31:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:31:46 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1579285411' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.591 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.596 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.608 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.631 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:31:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.631 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:31:46 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4144975596' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.648 2 DEBUG oslo_concurrency.processutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.650 2 DEBUG nova.virt.libvirt.vif [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:31:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-71039575',display_name='tempest-TestGettingAddress-server-71039575',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-71039575',id=139,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLHJwBUwmrZMAxk7lK/S4BnH+a8XMSkpb7gUisosnWBPkykNEvL6phjMEnMU3xwMJcA4TQroXcapIfYrYDD73Y5duTeAut+KJCUs4Rpj2NKQqK/dR+iKVtPcUme/LdCECA==',key_name='tempest-TestGettingAddress-415132848',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-s79hwes5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:31:36Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=c9315047-de1c-423a-adfa-118d77df3c94,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "63edf6de-b6e6-4be7-870e-062e8186ec37", "address": "fa:16:3e:20:cb:ea", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63edf6de-b6", "ovs_interfaceid": "63edf6de-b6e6-4be7-870e-062e8186ec37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.651 2 DEBUG nova.network.os_vif_util [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "63edf6de-b6e6-4be7-870e-062e8186ec37", "address": "fa:16:3e:20:cb:ea", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63edf6de-b6", "ovs_interfaceid": "63edf6de-b6e6-4be7-870e-062e8186ec37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.652 2 DEBUG nova.network.os_vif_util [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:cb:ea,bridge_name='br-int',has_traffic_filtering=True,id=63edf6de-b6e6-4be7-870e-062e8186ec37,network=Network(20dd724c-9d71-4931-8e8b-4dd3fbbacc17),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63edf6de-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.654 2 DEBUG nova.virt.libvirt.vif [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:31:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-71039575',display_name='tempest-TestGettingAddress-server-71039575',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-71039575',id=139,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLHJwBUwmrZMAxk7lK/S4BnH+a8XMSkpb7gUisosnWBPkykNEvL6phjMEnMU3xwMJcA4TQroXcapIfYrYDD73Y5duTeAut+KJCUs4Rpj2NKQqK/dR+iKVtPcUme/LdCECA==',key_name='tempest-TestGettingAddress-415132848',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-s79hwes5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:31:36Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=c9315047-de1c-423a-adfa-118d77df3c94,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "787335a5-4b97-43d1-ba56-12091ebdecdb", "address": "fa:16:3e:df:a9:10", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedf:a910", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap787335a5-4b", "ovs_interfaceid": "787335a5-4b97-43d1-ba56-12091ebdecdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.654 2 DEBUG nova.network.os_vif_util [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "787335a5-4b97-43d1-ba56-12091ebdecdb", "address": "fa:16:3e:df:a9:10", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedf:a910", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap787335a5-4b", "ovs_interfaceid": "787335a5-4b97-43d1-ba56-12091ebdecdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.655 2 DEBUG nova.network.os_vif_util [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:a9:10,bridge_name='br-int',has_traffic_filtering=True,id=787335a5-4b97-43d1-ba56-12091ebdecdb,network=Network(1dc63515-48cf-4886-956d-024d1d9cb848),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap787335a5-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.657 2 DEBUG nova.objects.instance [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'pci_devices' on Instance uuid c9315047-de1c-423a-adfa-118d77df3c94 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.674 2 DEBUG nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:31:46 np0005486808 nova_compute[259627]:  <uuid>c9315047-de1c-423a-adfa-118d77df3c94</uuid>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:  <name>instance-0000008b</name>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:31:46 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:      <nova:name>tempest-TestGettingAddress-server-71039575</nova:name>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:31:45</nova:creationTime>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:31:46 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:        <nova:user uuid="30cd4a7832e74a8ba07238937405c4ea">tempest-TestGettingAddress-262346303-project-member</nova:user>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:        <nova:project uuid="878ad3f42dd94bef8cc66ab3d67f03bf">tempest-TestGettingAddress-262346303</nova:project>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:        <nova:port uuid="63edf6de-b6e6-4be7-870e-062e8186ec37">
Oct 14 05:31:46 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:        <nova:port uuid="787335a5-4b97-43d1-ba56-12091ebdecdb">
Oct 14 05:31:46 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fedf:a910" ipVersion="6"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:      <entry name="serial">c9315047-de1c-423a-adfa-118d77df3c94</entry>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:      <entry name="uuid">c9315047-de1c-423a-adfa-118d77df3c94</entry>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:31:46 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/c9315047-de1c-423a-adfa-118d77df3c94_disk">
Oct 14 05:31:46 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:31:46 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:31:46 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/c9315047-de1c-423a-adfa-118d77df3c94_disk.config">
Oct 14 05:31:46 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:31:46 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:31:46 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:20:cb:ea"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:      <target dev="tap63edf6de-b6"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:31:46 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:df:a9:10"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:      <target dev="tap787335a5-4b"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:31:46 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/c9315047-de1c-423a-adfa-118d77df3c94/console.log" append="off"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:31:46 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:31:46 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:31:46 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:31:46 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:31:46 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.675 2 DEBUG nova.compute.manager [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Preparing to wait for external event network-vif-plugged-63edf6de-b6e6-4be7-870e-062e8186ec37 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.676 2 DEBUG oslo_concurrency.lockutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "c9315047-de1c-423a-adfa-118d77df3c94-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.676 2 DEBUG oslo_concurrency.lockutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.677 2 DEBUG oslo_concurrency.lockutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.677 2 DEBUG nova.compute.manager [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Preparing to wait for external event network-vif-plugged-787335a5-4b97-43d1-ba56-12091ebdecdb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.677 2 DEBUG oslo_concurrency.lockutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "c9315047-de1c-423a-adfa-118d77df3c94-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.678 2 DEBUG oslo_concurrency.lockutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.678 2 DEBUG oslo_concurrency.lockutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.679 2 DEBUG nova.virt.libvirt.vif [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:31:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-71039575',display_name='tempest-TestGettingAddress-server-71039575',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-71039575',id=139,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLHJwBUwmrZMAxk7lK/S4BnH+a8XMSkpb7gUisosnWBPkykNEvL6phjMEnMU3xwMJcA4TQroXcapIfYrYDD73Y5duTeAut+KJCUs4Rpj2NKQqK/dR+iKVtPcUme/LdCECA==',key_name='tempest-TestGettingAddress-415132848',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-s79hwes5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:31:36Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=c9315047-de1c-423a-adfa-118d77df3c94,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "63edf6de-b6e6-4be7-870e-062e8186ec37", "address": "fa:16:3e:20:cb:ea", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63edf6de-b6", "ovs_interfaceid": "63edf6de-b6e6-4be7-870e-062e8186ec37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.680 2 DEBUG nova.network.os_vif_util [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "63edf6de-b6e6-4be7-870e-062e8186ec37", "address": "fa:16:3e:20:cb:ea", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63edf6de-b6", "ovs_interfaceid": "63edf6de-b6e6-4be7-870e-062e8186ec37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.681 2 DEBUG nova.network.os_vif_util [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:cb:ea,bridge_name='br-int',has_traffic_filtering=True,id=63edf6de-b6e6-4be7-870e-062e8186ec37,network=Network(20dd724c-9d71-4931-8e8b-4dd3fbbacc17),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63edf6de-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.681 2 DEBUG os_vif [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:cb:ea,bridge_name='br-int',has_traffic_filtering=True,id=63edf6de-b6e6-4be7-870e-062e8186ec37,network=Network(20dd724c-9d71-4931-8e8b-4dd3fbbacc17),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63edf6de-b6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.683 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.684 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.688 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap63edf6de-b6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.689 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap63edf6de-b6, col_values=(('external_ids', {'iface-id': '63edf6de-b6e6-4be7-870e-062e8186ec37', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:20:cb:ea', 'vm-uuid': 'c9315047-de1c-423a-adfa-118d77df3c94'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:31:46 np0005486808 NetworkManager[44885]: <info>  [1760434306.6918] manager: (tap63edf6de-b6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/605)
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.700 2 INFO os_vif [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:cb:ea,bridge_name='br-int',has_traffic_filtering=True,id=63edf6de-b6e6-4be7-870e-062e8186ec37,network=Network(20dd724c-9d71-4931-8e8b-4dd3fbbacc17),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63edf6de-b6')#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.702 2 DEBUG nova.virt.libvirt.vif [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:31:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-71039575',display_name='tempest-TestGettingAddress-server-71039575',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-71039575',id=139,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLHJwBUwmrZMAxk7lK/S4BnH+a8XMSkpb7gUisosnWBPkykNEvL6phjMEnMU3xwMJcA4TQroXcapIfYrYDD73Y5duTeAut+KJCUs4Rpj2NKQqK/dR+iKVtPcUme/LdCECA==',key_name='tempest-TestGettingAddress-415132848',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-s79hwes5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:31:36Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=c9315047-de1c-423a-adfa-118d77df3c94,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "787335a5-4b97-43d1-ba56-12091ebdecdb", "address": "fa:16:3e:df:a9:10", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedf:a910", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap787335a5-4b", "ovs_interfaceid": "787335a5-4b97-43d1-ba56-12091ebdecdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.702 2 DEBUG nova.network.os_vif_util [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "787335a5-4b97-43d1-ba56-12091ebdecdb", "address": "fa:16:3e:df:a9:10", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedf:a910", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap787335a5-4b", "ovs_interfaceid": "787335a5-4b97-43d1-ba56-12091ebdecdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.703 2 DEBUG nova.network.os_vif_util [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:a9:10,bridge_name='br-int',has_traffic_filtering=True,id=787335a5-4b97-43d1-ba56-12091ebdecdb,network=Network(1dc63515-48cf-4886-956d-024d1d9cb848),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap787335a5-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.704 2 DEBUG os_vif [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:a9:10,bridge_name='br-int',has_traffic_filtering=True,id=787335a5-4b97-43d1-ba56-12091ebdecdb,network=Network(1dc63515-48cf-4886-956d-024d1d9cb848),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap787335a5-4b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.705 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.705 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.709 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap787335a5-4b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.709 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap787335a5-4b, col_values=(('external_ids', {'iface-id': '787335a5-4b97-43d1-ba56-12091ebdecdb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:df:a9:10', 'vm-uuid': 'c9315047-de1c-423a-adfa-118d77df3c94'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:46 np0005486808 NetworkManager[44885]: <info>  [1760434306.7127] manager: (tap787335a5-4b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/606)
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.721 2 INFO os_vif [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:a9:10,bridge_name='br-int',has_traffic_filtering=True,id=787335a5-4b97-43d1-ba56-12091ebdecdb,network=Network(1dc63515-48cf-4886-956d-024d1d9cb848),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap787335a5-4b')#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.765 2 DEBUG nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.766 2 DEBUG nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.766 2 DEBUG nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:20:cb:ea, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.766 2 DEBUG nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:df:a9:10, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.766 2 INFO nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Using config drive#033[00m
Oct 14 05:31:46 np0005486808 nova_compute[259627]: 2025-10-14 09:31:46.784 2 DEBUG nova.storage.rbd_utils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image c9315047-de1c-423a-adfa-118d77df3c94_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:31:46 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 05:31:46 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.0 total, 600.0 interval#012Cumulative writes: 10K writes, 49K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s#012Cumulative WAL: 10K writes, 10K syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1434 writes, 6460 keys, 1434 commit groups, 1.0 writes per commit group, ingest: 9.01 MB, 0.02 MB/s#012Interval WAL: 1434 writes, 1434 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0    100.2      0.59              0.23        34    0.017       0      0       0.0       0.0#012  L6      1/0    7.97 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.5    185.3    155.2      1.71              0.89        33    0.052    193K    18K       0.0       0.0#012 Sum      1/0    7.97 MB   0.0      0.3     0.1      0.3       0.3      0.1       0.0   5.5    137.6    141.0      2.31              1.12        67    0.034    193K    18K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.4    141.4    143.2      0.37              0.22        10    0.037     36K   2545       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0    185.3    155.2      1.71              0.89        33    0.052    193K    18K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0    101.3      0.59              0.23        33    0.018       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      7.3      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4200.0 total, 600.0 interval#012Flush(GB): cumulative 0.058, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.32 GB write, 0.08 MB/s write, 0.31 GB read, 0.08 MB/s read, 2.3 seconds#012Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5646f3b2b1f0#2 capacity: 304.00 MB usage: 36.17 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000351 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2368,34.73 MB,11.4228%) FilterBlock(68,547.48 KB,0.175873%) IndexBlock(68,932.62 KB,0.299594%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct 14 05:31:47 np0005486808 nova_compute[259627]: 2025-10-14 09:31:47.187 2 DEBUG nova.network.neutron [req-d54eab3c-374b-43d7-a604-a250974e2e61 req-f7645abd-53ce-47c6-a23a-e0b006b93202 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Updated VIF entry in instance network info cache for port 787335a5-4b97-43d1-ba56-12091ebdecdb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:31:47 np0005486808 nova_compute[259627]: 2025-10-14 09:31:47.188 2 DEBUG nova.network.neutron [req-d54eab3c-374b-43d7-a604-a250974e2e61 req-f7645abd-53ce-47c6-a23a-e0b006b93202 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Updating instance_info_cache with network_info: [{"id": "63edf6de-b6e6-4be7-870e-062e8186ec37", "address": "fa:16:3e:20:cb:ea", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63edf6de-b6", "ovs_interfaceid": "63edf6de-b6e6-4be7-870e-062e8186ec37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "787335a5-4b97-43d1-ba56-12091ebdecdb", "address": "fa:16:3e:df:a9:10", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedf:a910", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap787335a5-4b", "ovs_interfaceid": "787335a5-4b97-43d1-ba56-12091ebdecdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:31:47 np0005486808 nova_compute[259627]: 2025-10-14 09:31:47.209 2 DEBUG oslo_concurrency.lockutils [req-d54eab3c-374b-43d7-a604-a250974e2e61 req-f7645abd-53ce-47c6-a23a-e0b006b93202 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-c9315047-de1c-423a-adfa-118d77df3c94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:31:47 np0005486808 nova_compute[259627]: 2025-10-14 09:31:47.221 2 INFO nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Creating config drive at /var/lib/nova/instances/c9315047-de1c-423a-adfa-118d77df3c94/disk.config#033[00m
Oct 14 05:31:47 np0005486808 nova_compute[259627]: 2025-10-14 09:31:47.231 2 DEBUG oslo_concurrency.processutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c9315047-de1c-423a-adfa-118d77df3c94/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe8vhsv43 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:31:47 np0005486808 nova_compute[259627]: 2025-10-14 09:31:47.401 2 DEBUG oslo_concurrency.processutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c9315047-de1c-423a-adfa-118d77df3c94/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe8vhsv43" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:31:47 np0005486808 nova_compute[259627]: 2025-10-14 09:31:47.444 2 DEBUG nova.storage.rbd_utils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image c9315047-de1c-423a-adfa-118d77df3c94_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:31:47 np0005486808 nova_compute[259627]: 2025-10-14 09:31:47.451 2 DEBUG oslo_concurrency.processutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c9315047-de1c-423a-adfa-118d77df3c94/disk.config c9315047-de1c-423a-adfa-118d77df3c94_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:31:47 np0005486808 nova_compute[259627]: 2025-10-14 09:31:47.628 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:31:47 np0005486808 nova_compute[259627]: 2025-10-14 09:31:47.630 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:31:47 np0005486808 nova_compute[259627]: 2025-10-14 09:31:47.630 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:31:47 np0005486808 nova_compute[259627]: 2025-10-14 09:31:47.673 2 DEBUG oslo_concurrency.processutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c9315047-de1c-423a-adfa-118d77df3c94/disk.config c9315047-de1c-423a-adfa-118d77df3c94_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.222s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:31:47 np0005486808 nova_compute[259627]: 2025-10-14 09:31:47.673 2 INFO nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Deleting local config drive /var/lib/nova/instances/c9315047-de1c-423a-adfa-118d77df3c94/disk.config because it was imported into RBD.#033[00m
Oct 14 05:31:47 np0005486808 NetworkManager[44885]: <info>  [1760434307.7507] manager: (tap63edf6de-b6): new Tun device (/org/freedesktop/NetworkManager/Devices/607)
Oct 14 05:31:47 np0005486808 kernel: tap63edf6de-b6: entered promiscuous mode
Oct 14 05:31:47 np0005486808 ovn_controller[152662]: 2025-10-14T09:31:47Z|01487|binding|INFO|Claiming lport 63edf6de-b6e6-4be7-870e-062e8186ec37 for this chassis.
Oct 14 05:31:47 np0005486808 ovn_controller[152662]: 2025-10-14T09:31:47Z|01488|binding|INFO|63edf6de-b6e6-4be7-870e-062e8186ec37: Claiming fa:16:3e:20:cb:ea 10.100.0.7
Oct 14 05:31:47 np0005486808 nova_compute[259627]: 2025-10-14 09:31:47.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:47.807 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:cb:ea 10.100.0.7'], port_security=['fa:16:3e:20:cb:ea 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'c9315047-de1c-423a-adfa-118d77df3c94', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-20dd724c-9d71-4931-8e8b-4dd3fbbacc17', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c995d506-6f6f-4226-92d2-2bf5b9a23d7a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5f55c896-9e6e-44ae-a4b7-c1c60b86826e, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=63edf6de-b6e6-4be7-870e-062e8186ec37) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:31:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:47.808 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 63edf6de-b6e6-4be7-870e-062e8186ec37 in datapath 20dd724c-9d71-4931-8e8b-4dd3fbbacc17 bound to our chassis#033[00m
Oct 14 05:31:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:47.810 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 20dd724c-9d71-4931-8e8b-4dd3fbbacc17#033[00m
Oct 14 05:31:47 np0005486808 NetworkManager[44885]: <info>  [1760434307.8120] manager: (tap787335a5-4b): new Tun device (/org/freedesktop/NetworkManager/Devices/608)
Oct 14 05:31:47 np0005486808 kernel: tap787335a5-4b: entered promiscuous mode
Oct 14 05:31:47 np0005486808 nova_compute[259627]: 2025-10-14 09:31:47.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:47 np0005486808 ovn_controller[152662]: 2025-10-14T09:31:47Z|01489|binding|INFO|Setting lport 63edf6de-b6e6-4be7-870e-062e8186ec37 ovn-installed in OVS
Oct 14 05:31:47 np0005486808 ovn_controller[152662]: 2025-10-14T09:31:47Z|01490|binding|INFO|Setting lport 63edf6de-b6e6-4be7-870e-062e8186ec37 up in Southbound
Oct 14 05:31:47 np0005486808 ovn_controller[152662]: 2025-10-14T09:31:47Z|01491|if_status|INFO|Dropped 1 log messages in last 38 seconds (most recently, 38 seconds ago) due to excessive rate
Oct 14 05:31:47 np0005486808 ovn_controller[152662]: 2025-10-14T09:31:47Z|01492|if_status|INFO|Not updating pb chassis for 787335a5-4b97-43d1-ba56-12091ebdecdb now as sb is readonly
Oct 14 05:31:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:47.827 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[697b11c5-8734-4f41-9e0a-d7eec01155bc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:47 np0005486808 ovn_controller[152662]: 2025-10-14T09:31:47Z|01493|binding|INFO|Claiming lport 787335a5-4b97-43d1-ba56-12091ebdecdb for this chassis.
Oct 14 05:31:47 np0005486808 ovn_controller[152662]: 2025-10-14T09:31:47Z|01494|binding|INFO|787335a5-4b97-43d1-ba56-12091ebdecdb: Claiming fa:16:3e:df:a9:10 2001:db8::f816:3eff:fedf:a910
Oct 14 05:31:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:47.840 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:a9:10 2001:db8::f816:3eff:fedf:a910'], port_security=['fa:16:3e:df:a9:10 2001:db8::f816:3eff:fedf:a910'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fedf:a910/64', 'neutron:device_id': 'c9315047-de1c-423a-adfa-118d77df3c94', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1dc63515-48cf-4886-956d-024d1d9cb848', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c995d506-6f6f-4226-92d2-2bf5b9a23d7a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=29b23756-77cb-4493-bd13-0170877e81b9, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=787335a5-4b97-43d1-ba56-12091ebdecdb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:31:47 np0005486808 systemd-udevd[402068]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:31:47 np0005486808 systemd-udevd[402069]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:31:47 np0005486808 ovn_controller[152662]: 2025-10-14T09:31:47Z|01495|binding|INFO|Setting lport 787335a5-4b97-43d1-ba56-12091ebdecdb ovn-installed in OVS
Oct 14 05:31:47 np0005486808 ovn_controller[152662]: 2025-10-14T09:31:47Z|01496|binding|INFO|Setting lport 787335a5-4b97-43d1-ba56-12091ebdecdb up in Southbound
Oct 14 05:31:47 np0005486808 nova_compute[259627]: 2025-10-14 09:31:47.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:47 np0005486808 NetworkManager[44885]: <info>  [1760434307.8623] device (tap63edf6de-b6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:31:47 np0005486808 NetworkManager[44885]: <info>  [1760434307.8685] device (tap63edf6de-b6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:31:47 np0005486808 systemd-machined[214636]: New machine qemu-172-instance-0000008b.
Oct 14 05:31:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:47.871 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[81f956b5-0173-45ef-ab99-3335ef100328]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:47.875 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ec476d9d-d464-4a72-9526-b49a0c47f358]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:47 np0005486808 NetworkManager[44885]: <info>  [1760434307.8774] device (tap787335a5-4b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:31:47 np0005486808 NetworkManager[44885]: <info>  [1760434307.8785] device (tap787335a5-4b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:31:47 np0005486808 systemd[1]: Started Virtual Machine qemu-172-instance-0000008b.
Oct 14 05:31:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:47.909 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5b737054-44bd-4480-ae28-88d3e28ef5e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:47.932 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8b1125d8-9ec3-436e-8123-5585d16723a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap20dd724c-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:07:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 417], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 804837, 'reachable_time': 24485, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 402078, 'error': None, 'target': 'ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:47.952 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d44174f5-9fb8-4285-960a-25b54f3c35cc]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap20dd724c-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 804851, 'tstamp': 804851}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402082, 'error': None, 'target': 'ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap20dd724c-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 804855, 'tstamp': 804855}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402082, 'error': None, 'target': 'ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:47.954 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20dd724c-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:31:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:47.956 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap20dd724c-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:31:47 np0005486808 nova_compute[259627]: 2025-10-14 09:31:47.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:47.956 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:31:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:47.957 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap20dd724c-90, col_values=(('external_ids', {'iface-id': '1308be16-f790-4063-acf0-2c8f6fdde665'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:31:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:47.957 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:31:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:47.958 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 787335a5-4b97-43d1-ba56-12091ebdecdb in datapath 1dc63515-48cf-4886-956d-024d1d9cb848 unbound from our chassis#033[00m
Oct 14 05:31:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:47.960 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1dc63515-48cf-4886-956d-024d1d9cb848#033[00m
Oct 14 05:31:47 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:47.977 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[65570420-9355-460c-82c4-3c1f264c1252]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:48.016 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8dc6a4c7-7a39-404f-b915-0db2f60025c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:48.021 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8b23fa29-4beb-4cf4-b7ad-7aaf26f4c40a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:31:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:48.072 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[2684fcc2-53d4-4634-ba48-7802f2204918]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:48.100 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0e81b1a0-b459-4ae5-8ecf-d7b8f620a25f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1dc63515-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a6:e0:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 20, 'tx_packets': 5, 'rx_bytes': 1872, 'tx_bytes': 402, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 20, 'tx_packets': 5, 'rx_bytes': 1872, 'tx_bytes': 402, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 418], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 804949, 'reachable_time': 40867, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 20, 'inoctets': 1592, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 20, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1592, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 20, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 402091, 'error': None, 'target': 'ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:48.126 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0199fc79-1b93-4f1f-82cb-87152ec4ea8d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1dc63515-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 804965, 'tstamp': 804965}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402092, 'error': None, 'target': 'ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:31:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:48.128 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1dc63515-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:31:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:48.132 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1dc63515-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:31:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:48.133 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:31:48 np0005486808 nova_compute[259627]: 2025-10-14 09:31:48.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:48.133 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1dc63515-40, col_values=(('external_ids', {'iface-id': '9754cafe-7819-456f-943e-9907e7f07233'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:31:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:31:48.134 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:31:48 np0005486808 nova_compute[259627]: 2025-10-14 09:31:48.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2360: 305 pgs: 305 active+clean; 167 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:31:48 np0005486808 nova_compute[259627]: 2025-10-14 09:31:48.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:31:48 np0005486808 nova_compute[259627]: 2025-10-14 09:31:48.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:31:49 np0005486808 nova_compute[259627]: 2025-10-14 09:31:49.205 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434309.2044435, c9315047-de1c-423a-adfa-118d77df3c94 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:31:49 np0005486808 nova_compute[259627]: 2025-10-14 09:31:49.205 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c9315047-de1c-423a-adfa-118d77df3c94] VM Started (Lifecycle Event)#033[00m
Oct 14 05:31:49 np0005486808 nova_compute[259627]: 2025-10-14 09:31:49.234 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:31:49 np0005486808 nova_compute[259627]: 2025-10-14 09:31:49.240 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434309.204789, c9315047-de1c-423a-adfa-118d77df3c94 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:31:49 np0005486808 nova_compute[259627]: 2025-10-14 09:31:49.241 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c9315047-de1c-423a-adfa-118d77df3c94] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:31:49 np0005486808 nova_compute[259627]: 2025-10-14 09:31:49.266 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:31:49 np0005486808 nova_compute[259627]: 2025-10-14 09:31:49.270 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:31:49 np0005486808 nova_compute[259627]: 2025-10-14 09:31:49.289 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c9315047-de1c-423a-adfa-118d77df3c94] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:31:49 np0005486808 nova_compute[259627]: 2025-10-14 09:31:49.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:31:49 np0005486808 nova_compute[259627]: 2025-10-14 09:31:49.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:31:49 np0005486808 nova_compute[259627]: 2025-10-14 09:31:49.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 05:31:50 np0005486808 nova_compute[259627]: 2025-10-14 09:31:50.088 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct 14 05:31:50 np0005486808 nova_compute[259627]: 2025-10-14 09:31:50.234 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-b30a994a-5fb7-4344-9944-98d3d75d3b04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:31:50 np0005486808 nova_compute[259627]: 2025-10-14 09:31:50.235 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-b30a994a-5fb7-4344-9944-98d3d75d3b04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:31:50 np0005486808 nova_compute[259627]: 2025-10-14 09:31:50.235 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 14 05:31:50 np0005486808 nova_compute[259627]: 2025-10-14 09:31:50.235 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b30a994a-5fb7-4344-9944-98d3d75d3b04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:31:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2361: 305 pgs: 305 active+clean; 167 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 14 05:31:50 np0005486808 nova_compute[259627]: 2025-10-14 09:31:50.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:51 np0005486808 nova_compute[259627]: 2025-10-14 09:31:51.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2362: 305 pgs: 305 active+clean; 167 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1.2 MiB/s wr, 25 op/s
Oct 14 05:31:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:31:53 np0005486808 nova_compute[259627]: 2025-10-14 09:31:53.765 2 DEBUG oslo_concurrency.lockutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:31:53 np0005486808 nova_compute[259627]: 2025-10-14 09:31:53.767 2 DEBUG oslo_concurrency.lockutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:31:53 np0005486808 nova_compute[259627]: 2025-10-14 09:31:53.786 2 DEBUG nova.compute.manager [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:31:53 np0005486808 nova_compute[259627]: 2025-10-14 09:31:53.795 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Updating instance_info_cache with network_info: [{"id": "f2639397-8fb2-4541-a298-fd68219e1e47", "address": "fa:16:3e:a4:1e:d0", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2639397-8f", "ovs_interfaceid": "f2639397-8fb2-4541-a298-fd68219e1e47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "address": "fa:16:3e:cf:84:75", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fecf:8475", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc22b3ec2-f5", "ovs_interfaceid": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:31:53 np0005486808 nova_compute[259627]: 2025-10-14 09:31:53.821 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-b30a994a-5fb7-4344-9944-98d3d75d3b04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:31:53 np0005486808 nova_compute[259627]: 2025-10-14 09:31:53.822 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 14 05:31:53 np0005486808 nova_compute[259627]: 2025-10-14 09:31:53.849 2 DEBUG oslo_concurrency.lockutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:31:53 np0005486808 nova_compute[259627]: 2025-10-14 09:31:53.849 2 DEBUG oslo_concurrency.lockutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:31:53 np0005486808 nova_compute[259627]: 2025-10-14 09:31:53.855 2 DEBUG nova.virt.hardware [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:31:53 np0005486808 nova_compute[259627]: 2025-10-14 09:31:53.855 2 INFO nova.compute.claims [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:31:53 np0005486808 nova_compute[259627]: 2025-10-14 09:31:53.986 2 DEBUG oslo_concurrency.processutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:31:54 np0005486808 nova_compute[259627]: 2025-10-14 09:31:54.147 2 DEBUG nova.compute.manager [req-77bc4e8a-0300-415f-aa9c-c53dbb6754d8 req-0999d274-9fe7-4872-b16f-984057d97c59 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Received event network-vif-plugged-63edf6de-b6e6-4be7-870e-062e8186ec37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:31:54 np0005486808 nova_compute[259627]: 2025-10-14 09:31:54.147 2 DEBUG oslo_concurrency.lockutils [req-77bc4e8a-0300-415f-aa9c-c53dbb6754d8 req-0999d274-9fe7-4872-b16f-984057d97c59 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c9315047-de1c-423a-adfa-118d77df3c94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:31:54 np0005486808 nova_compute[259627]: 2025-10-14 09:31:54.148 2 DEBUG oslo_concurrency.lockutils [req-77bc4e8a-0300-415f-aa9c-c53dbb6754d8 req-0999d274-9fe7-4872-b16f-984057d97c59 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:31:54 np0005486808 nova_compute[259627]: 2025-10-14 09:31:54.148 2 DEBUG oslo_concurrency.lockutils [req-77bc4e8a-0300-415f-aa9c-c53dbb6754d8 req-0999d274-9fe7-4872-b16f-984057d97c59 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:31:54 np0005486808 nova_compute[259627]: 2025-10-14 09:31:54.148 2 DEBUG nova.compute.manager [req-77bc4e8a-0300-415f-aa9c-c53dbb6754d8 req-0999d274-9fe7-4872-b16f-984057d97c59 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Processing event network-vif-plugged-63edf6de-b6e6-4be7-870e-062e8186ec37 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:31:54 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:31:54 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3173069999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:31:54 np0005486808 nova_compute[259627]: 2025-10-14 09:31:54.417 2 DEBUG oslo_concurrency.processutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:31:54 np0005486808 nova_compute[259627]: 2025-10-14 09:31:54.426 2 DEBUG nova.compute.provider_tree [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:31:54 np0005486808 nova_compute[259627]: 2025-10-14 09:31:54.449 2 DEBUG nova.scheduler.client.report [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:31:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2363: 305 pgs: 305 active+clean; 167 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Oct 14 05:31:54 np0005486808 nova_compute[259627]: 2025-10-14 09:31:54.489 2 DEBUG oslo_concurrency.lockutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:31:54 np0005486808 nova_compute[259627]: 2025-10-14 09:31:54.491 2 DEBUG nova.compute.manager [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:31:54 np0005486808 nova_compute[259627]: 2025-10-14 09:31:54.552 2 DEBUG nova.compute.manager [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:31:54 np0005486808 nova_compute[259627]: 2025-10-14 09:31:54.553 2 DEBUG nova.network.neutron [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:31:54 np0005486808 nova_compute[259627]: 2025-10-14 09:31:54.573 2 INFO nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:31:54 np0005486808 nova_compute[259627]: 2025-10-14 09:31:54.594 2 DEBUG nova.compute.manager [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:31:54 np0005486808 nova_compute[259627]: 2025-10-14 09:31:54.688 2 DEBUG nova.compute.manager [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:31:54 np0005486808 nova_compute[259627]: 2025-10-14 09:31:54.691 2 DEBUG nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:31:54 np0005486808 nova_compute[259627]: 2025-10-14 09:31:54.692 2 INFO nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Creating image(s)#033[00m
Oct 14 05:31:54 np0005486808 nova_compute[259627]: 2025-10-14 09:31:54.734 2 DEBUG nova.storage.rbd_utils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:31:54 np0005486808 nova_compute[259627]: 2025-10-14 09:31:54.763 2 DEBUG nova.storage.rbd_utils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:31:54 np0005486808 nova_compute[259627]: 2025-10-14 09:31:54.788 2 DEBUG nova.storage.rbd_utils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:31:54 np0005486808 nova_compute[259627]: 2025-10-14 09:31:54.792 2 DEBUG oslo_concurrency.processutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:31:54 np0005486808 nova_compute[259627]: 2025-10-14 09:31:54.898 2 DEBUG oslo_concurrency.processutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:31:54 np0005486808 nova_compute[259627]: 2025-10-14 09:31:54.899 2 DEBUG oslo_concurrency.lockutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:31:54 np0005486808 nova_compute[259627]: 2025-10-14 09:31:54.899 2 DEBUG oslo_concurrency.lockutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:31:54 np0005486808 nova_compute[259627]: 2025-10-14 09:31:54.899 2 DEBUG oslo_concurrency.lockutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:31:54 np0005486808 nova_compute[259627]: 2025-10-14 09:31:54.922 2 DEBUG nova.storage.rbd_utils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:31:54 np0005486808 nova_compute[259627]: 2025-10-14 09:31:54.925 2 DEBUG oslo_concurrency.processutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:31:54 np0005486808 nova_compute[259627]: 2025-10-14 09:31:54.989 2 DEBUG nova.policy [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '20f3546ab30e42b5b641f67780316750', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f754bf649a2404fa8dee732f5aab36e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:31:55 np0005486808 nova_compute[259627]: 2025-10-14 09:31:55.211 2 DEBUG oslo_concurrency.processutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.286s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:31:55 np0005486808 nova_compute[259627]: 2025-10-14 09:31:55.276 2 DEBUG nova.storage.rbd_utils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] resizing rbd image 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:31:55 np0005486808 nova_compute[259627]: 2025-10-14 09:31:55.388 2 DEBUG nova.objects.instance [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'migration_context' on Instance uuid 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:31:55 np0005486808 nova_compute[259627]: 2025-10-14 09:31:55.449 2 DEBUG nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:31:55 np0005486808 nova_compute[259627]: 2025-10-14 09:31:55.449 2 DEBUG nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Ensure instance console log exists: /var/lib/nova/instances/8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:31:55 np0005486808 nova_compute[259627]: 2025-10-14 09:31:55.450 2 DEBUG oslo_concurrency.lockutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:31:55 np0005486808 nova_compute[259627]: 2025-10-14 09:31:55.450 2 DEBUG oslo_concurrency.lockutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:31:55 np0005486808 nova_compute[259627]: 2025-10-14 09:31:55.451 2 DEBUG oslo_concurrency.lockutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:31:55 np0005486808 nova_compute[259627]: 2025-10-14 09:31:55.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:55 np0005486808 nova_compute[259627]: 2025-10-14 09:31:55.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:31:56 np0005486808 nova_compute[259627]: 2025-10-14 09:31:56.374 2 DEBUG nova.network.neutron [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Successfully created port: 02b93e6c-e8fd-4ab1-bd57-84775ae34da2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:31:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2364: 305 pgs: 305 active+clean; 189 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.1 MiB/s wr, 24 op/s
Oct 14 05:31:56 np0005486808 nova_compute[259627]: 2025-10-14 09:31:56.584 2 DEBUG nova.compute.manager [req-6b97890b-929b-4997-a4fd-6be51b27adc2 req-ab15df22-5a86-41f5-b086-7e698aef34f8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Received event network-vif-plugged-63edf6de-b6e6-4be7-870e-062e8186ec37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:31:56 np0005486808 nova_compute[259627]: 2025-10-14 09:31:56.585 2 DEBUG oslo_concurrency.lockutils [req-6b97890b-929b-4997-a4fd-6be51b27adc2 req-ab15df22-5a86-41f5-b086-7e698aef34f8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c9315047-de1c-423a-adfa-118d77df3c94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:31:56 np0005486808 nova_compute[259627]: 2025-10-14 09:31:56.585 2 DEBUG oslo_concurrency.lockutils [req-6b97890b-929b-4997-a4fd-6be51b27adc2 req-ab15df22-5a86-41f5-b086-7e698aef34f8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:31:56 np0005486808 nova_compute[259627]: 2025-10-14 09:31:56.586 2 DEBUG oslo_concurrency.lockutils [req-6b97890b-929b-4997-a4fd-6be51b27adc2 req-ab15df22-5a86-41f5-b086-7e698aef34f8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:31:56 np0005486808 nova_compute[259627]: 2025-10-14 09:31:56.586 2 DEBUG nova.compute.manager [req-6b97890b-929b-4997-a4fd-6be51b27adc2 req-ab15df22-5a86-41f5-b086-7e698aef34f8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] No event matching network-vif-plugged-63edf6de-b6e6-4be7-870e-062e8186ec37 in dict_keys([('network-vif-plugged', '787335a5-4b97-43d1-ba56-12091ebdecdb')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Oct 14 05:31:56 np0005486808 nova_compute[259627]: 2025-10-14 09:31:56.587 2 WARNING nova.compute.manager [req-6b97890b-929b-4997-a4fd-6be51b27adc2 req-ab15df22-5a86-41f5-b086-7e698aef34f8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Received unexpected event network-vif-plugged-63edf6de-b6e6-4be7-870e-062e8186ec37 for instance with vm_state building and task_state spawning.#033[00m
Oct 14 05:31:56 np0005486808 nova_compute[259627]: 2025-10-14 09:31:56.587 2 DEBUG nova.compute.manager [req-6b97890b-929b-4997-a4fd-6be51b27adc2 req-ab15df22-5a86-41f5-b086-7e698aef34f8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Received event network-vif-plugged-787335a5-4b97-43d1-ba56-12091ebdecdb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:31:56 np0005486808 nova_compute[259627]: 2025-10-14 09:31:56.588 2 DEBUG oslo_concurrency.lockutils [req-6b97890b-929b-4997-a4fd-6be51b27adc2 req-ab15df22-5a86-41f5-b086-7e698aef34f8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c9315047-de1c-423a-adfa-118d77df3c94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:31:56 np0005486808 nova_compute[259627]: 2025-10-14 09:31:56.589 2 DEBUG oslo_concurrency.lockutils [req-6b97890b-929b-4997-a4fd-6be51b27adc2 req-ab15df22-5a86-41f5-b086-7e698aef34f8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:31:56 np0005486808 nova_compute[259627]: 2025-10-14 09:31:56.589 2 DEBUG oslo_concurrency.lockutils [req-6b97890b-929b-4997-a4fd-6be51b27adc2 req-ab15df22-5a86-41f5-b086-7e698aef34f8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:31:56 np0005486808 nova_compute[259627]: 2025-10-14 09:31:56.590 2 DEBUG nova.compute.manager [req-6b97890b-929b-4997-a4fd-6be51b27adc2 req-ab15df22-5a86-41f5-b086-7e698aef34f8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Processing event network-vif-plugged-787335a5-4b97-43d1-ba56-12091ebdecdb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:31:56 np0005486808 nova_compute[259627]: 2025-10-14 09:31:56.590 2 DEBUG nova.compute.manager [req-6b97890b-929b-4997-a4fd-6be51b27adc2 req-ab15df22-5a86-41f5-b086-7e698aef34f8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Received event network-vif-plugged-787335a5-4b97-43d1-ba56-12091ebdecdb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:31:56 np0005486808 nova_compute[259627]: 2025-10-14 09:31:56.590 2 DEBUG oslo_concurrency.lockutils [req-6b97890b-929b-4997-a4fd-6be51b27adc2 req-ab15df22-5a86-41f5-b086-7e698aef34f8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c9315047-de1c-423a-adfa-118d77df3c94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:31:56 np0005486808 nova_compute[259627]: 2025-10-14 09:31:56.591 2 DEBUG oslo_concurrency.lockutils [req-6b97890b-929b-4997-a4fd-6be51b27adc2 req-ab15df22-5a86-41f5-b086-7e698aef34f8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:31:56 np0005486808 nova_compute[259627]: 2025-10-14 09:31:56.591 2 DEBUG oslo_concurrency.lockutils [req-6b97890b-929b-4997-a4fd-6be51b27adc2 req-ab15df22-5a86-41f5-b086-7e698aef34f8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:31:56 np0005486808 nova_compute[259627]: 2025-10-14 09:31:56.592 2 DEBUG nova.compute.manager [req-6b97890b-929b-4997-a4fd-6be51b27adc2 req-ab15df22-5a86-41f5-b086-7e698aef34f8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] No waiting events found dispatching network-vif-plugged-787335a5-4b97-43d1-ba56-12091ebdecdb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:31:56 np0005486808 nova_compute[259627]: 2025-10-14 09:31:56.592 2 WARNING nova.compute.manager [req-6b97890b-929b-4997-a4fd-6be51b27adc2 req-ab15df22-5a86-41f5-b086-7e698aef34f8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Received unexpected event network-vif-plugged-787335a5-4b97-43d1-ba56-12091ebdecdb for instance with vm_state building and task_state spawning.#033[00m
Oct 14 05:31:56 np0005486808 nova_compute[259627]: 2025-10-14 09:31:56.593 2 DEBUG nova.compute.manager [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Instance event wait completed in 7 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:31:56 np0005486808 nova_compute[259627]: 2025-10-14 09:31:56.598 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434316.5974936, c9315047-de1c-423a-adfa-118d77df3c94 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:31:56 np0005486808 nova_compute[259627]: 2025-10-14 09:31:56.598 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c9315047-de1c-423a-adfa-118d77df3c94] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:31:56 np0005486808 nova_compute[259627]: 2025-10-14 09:31:56.601 2 DEBUG nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:31:56 np0005486808 nova_compute[259627]: 2025-10-14 09:31:56.607 2 INFO nova.virt.libvirt.driver [-] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Instance spawned successfully.#033[00m
Oct 14 05:31:56 np0005486808 nova_compute[259627]: 2025-10-14 09:31:56.607 2 DEBUG nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:31:56 np0005486808 nova_compute[259627]: 2025-10-14 09:31:56.635 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:31:56 np0005486808 nova_compute[259627]: 2025-10-14 09:31:56.640 2 DEBUG nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:31:56 np0005486808 nova_compute[259627]: 2025-10-14 09:31:56.641 2 DEBUG nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:31:56 np0005486808 nova_compute[259627]: 2025-10-14 09:31:56.641 2 DEBUG nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:31:56 np0005486808 nova_compute[259627]: 2025-10-14 09:31:56.642 2 DEBUG nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:31:56 np0005486808 nova_compute[259627]: 2025-10-14 09:31:56.642 2 DEBUG nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:31:56 np0005486808 nova_compute[259627]: 2025-10-14 09:31:56.643 2 DEBUG nova.virt.libvirt.driver [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:31:56 np0005486808 nova_compute[259627]: 2025-10-14 09:31:56.649 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:31:56 np0005486808 nova_compute[259627]: 2025-10-14 09:31:56.686 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c9315047-de1c-423a-adfa-118d77df3c94] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:31:56 np0005486808 nova_compute[259627]: 2025-10-14 09:31:56.717 2 INFO nova.compute.manager [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Took 20.12 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:31:56 np0005486808 nova_compute[259627]: 2025-10-14 09:31:56.717 2 DEBUG nova.compute.manager [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:31:56 np0005486808 nova_compute[259627]: 2025-10-14 09:31:56.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:56 np0005486808 nova_compute[259627]: 2025-10-14 09:31:56.803 2 INFO nova.compute.manager [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Took 21.20 seconds to build instance.#033[00m
Oct 14 05:31:56 np0005486808 nova_compute[259627]: 2025-10-14 09:31:56.820 2 DEBUG oslo_concurrency.lockutils [None req-f7070656-0859-4b10-9b37-e2b3b1bd8910 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.314s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:31:56 np0005486808 nova_compute[259627]: 2025-10-14 09:31:56.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:31:57 np0005486808 nova_compute[259627]: 2025-10-14 09:31:57.421 2 DEBUG nova.network.neutron [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Successfully updated port: 02b93e6c-e8fd-4ab1-bd57-84775ae34da2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:31:57 np0005486808 nova_compute[259627]: 2025-10-14 09:31:57.443 2 DEBUG oslo_concurrency.lockutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "refresh_cache-8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:31:57 np0005486808 nova_compute[259627]: 2025-10-14 09:31:57.443 2 DEBUG oslo_concurrency.lockutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquired lock "refresh_cache-8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:31:57 np0005486808 nova_compute[259627]: 2025-10-14 09:31:57.443 2 DEBUG nova.network.neutron [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:31:57 np0005486808 nova_compute[259627]: 2025-10-14 09:31:57.523 2 DEBUG nova.compute.manager [req-5824fcf9-f36e-423e-9539-02560540ac32 req-2a4244f1-3907-4f72-aaf3-917a25b3befb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Received event network-changed-02b93e6c-e8fd-4ab1-bd57-84775ae34da2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:31:57 np0005486808 nova_compute[259627]: 2025-10-14 09:31:57.523 2 DEBUG nova.compute.manager [req-5824fcf9-f36e-423e-9539-02560540ac32 req-2a4244f1-3907-4f72-aaf3-917a25b3befb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Refreshing instance network info cache due to event network-changed-02b93e6c-e8fd-4ab1-bd57-84775ae34da2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:31:57 np0005486808 nova_compute[259627]: 2025-10-14 09:31:57.524 2 DEBUG oslo_concurrency.lockutils [req-5824fcf9-f36e-423e-9539-02560540ac32 req-2a4244f1-3907-4f72-aaf3-917a25b3befb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:31:57 np0005486808 podman[402326]: 2025-10-14 09:31:57.642673497 +0000 UTC m=+0.053575046 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:31:57 np0005486808 podman[402325]: 2025-10-14 09:31:57.678461135 +0000 UTC m=+0.088532713 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, managed_by=edpm_ansible)
Oct 14 05:31:57 np0005486808 nova_compute[259627]: 2025-10-14 09:31:57.780 2 DEBUG nova.network.neutron [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:31:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:31:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2365: 305 pgs: 305 active+clean; 189 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.1 MiB/s wr, 24 op/s
Oct 14 05:31:58 np0005486808 nova_compute[259627]: 2025-10-14 09:31:58.563 2 DEBUG nova.network.neutron [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Updating instance_info_cache with network_info: [{"id": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "address": "fa:16:3e:66:f3:32", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02b93e6c-e8", "ovs_interfaceid": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:31:58 np0005486808 nova_compute[259627]: 2025-10-14 09:31:58.585 2 DEBUG oslo_concurrency.lockutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Releasing lock "refresh_cache-8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:31:58 np0005486808 nova_compute[259627]: 2025-10-14 09:31:58.585 2 DEBUG nova.compute.manager [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Instance network_info: |[{"id": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "address": "fa:16:3e:66:f3:32", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02b93e6c-e8", "ovs_interfaceid": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:31:58 np0005486808 nova_compute[259627]: 2025-10-14 09:31:58.586 2 DEBUG oslo_concurrency.lockutils [req-5824fcf9-f36e-423e-9539-02560540ac32 req-2a4244f1-3907-4f72-aaf3-917a25b3befb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:31:58 np0005486808 nova_compute[259627]: 2025-10-14 09:31:58.586 2 DEBUG nova.network.neutron [req-5824fcf9-f36e-423e-9539-02560540ac32 req-2a4244f1-3907-4f72-aaf3-917a25b3befb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Refreshing network info cache for port 02b93e6c-e8fd-4ab1-bd57-84775ae34da2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:31:58 np0005486808 nova_compute[259627]: 2025-10-14 09:31:58.589 2 DEBUG nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Start _get_guest_xml network_info=[{"id": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "address": "fa:16:3e:66:f3:32", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02b93e6c-e8", "ovs_interfaceid": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:31:58 np0005486808 nova_compute[259627]: 2025-10-14 09:31:58.594 2 WARNING nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:31:58 np0005486808 nova_compute[259627]: 2025-10-14 09:31:58.602 2 DEBUG nova.virt.libvirt.host [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:31:58 np0005486808 nova_compute[259627]: 2025-10-14 09:31:58.602 2 DEBUG nova.virt.libvirt.host [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:31:58 np0005486808 nova_compute[259627]: 2025-10-14 09:31:58.605 2 DEBUG nova.virt.libvirt.host [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:31:58 np0005486808 nova_compute[259627]: 2025-10-14 09:31:58.606 2 DEBUG nova.virt.libvirt.host [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:31:58 np0005486808 nova_compute[259627]: 2025-10-14 09:31:58.606 2 DEBUG nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:31:58 np0005486808 nova_compute[259627]: 2025-10-14 09:31:58.606 2 DEBUG nova.virt.hardware [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:31:58 np0005486808 nova_compute[259627]: 2025-10-14 09:31:58.607 2 DEBUG nova.virt.hardware [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:31:58 np0005486808 nova_compute[259627]: 2025-10-14 09:31:58.607 2 DEBUG nova.virt.hardware [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:31:58 np0005486808 nova_compute[259627]: 2025-10-14 09:31:58.608 2 DEBUG nova.virt.hardware [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:31:58 np0005486808 nova_compute[259627]: 2025-10-14 09:31:58.608 2 DEBUG nova.virt.hardware [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:31:58 np0005486808 nova_compute[259627]: 2025-10-14 09:31:58.608 2 DEBUG nova.virt.hardware [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:31:58 np0005486808 nova_compute[259627]: 2025-10-14 09:31:58.609 2 DEBUG nova.virt.hardware [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:31:58 np0005486808 nova_compute[259627]: 2025-10-14 09:31:58.609 2 DEBUG nova.virt.hardware [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:31:58 np0005486808 nova_compute[259627]: 2025-10-14 09:31:58.609 2 DEBUG nova.virt.hardware [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:31:58 np0005486808 nova_compute[259627]: 2025-10-14 09:31:58.610 2 DEBUG nova.virt.hardware [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:31:58 np0005486808 nova_compute[259627]: 2025-10-14 09:31:58.610 2 DEBUG nova.virt.hardware [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:31:58 np0005486808 nova_compute[259627]: 2025-10-14 09:31:58.612 2 DEBUG oslo_concurrency.processutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:31:59 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:31:59 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3302093010' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:31:59 np0005486808 nova_compute[259627]: 2025-10-14 09:31:59.060 2 DEBUG oslo_concurrency.processutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:31:59 np0005486808 nova_compute[259627]: 2025-10-14 09:31:59.093 2 DEBUG nova.storage.rbd_utils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:31:59 np0005486808 nova_compute[259627]: 2025-10-14 09:31:59.098 2 DEBUG oslo_concurrency.processutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:31:59 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:31:59 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3448617375' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:31:59 np0005486808 nova_compute[259627]: 2025-10-14 09:31:59.580 2 DEBUG oslo_concurrency.processutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:31:59 np0005486808 nova_compute[259627]: 2025-10-14 09:31:59.582 2 DEBUG nova.virt.libvirt.vif [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:31:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-1433018777',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-1433018777',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ac',id=140,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFZgUpzEuWQ2yRKeeGP5cwEpEG8ix10coU0eNExva0wfqy7K02GmuQxBgal8BWbrr6/lfxNLSNkfNKLH91c4JJNbWrJCy5K0gCADFvn2wBXfeHIu8FO18gqCQFIgsuofuA==',key_name='tempest-TestSecurityGroupsBasicOps-135749728',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-cth66w0f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:31:54Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "address": "fa:16:3e:66:f3:32", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02b93e6c-e8", "ovs_interfaceid": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:31:59 np0005486808 nova_compute[259627]: 2025-10-14 09:31:59.583 2 DEBUG nova.network.os_vif_util [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "address": "fa:16:3e:66:f3:32", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02b93e6c-e8", "ovs_interfaceid": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:31:59 np0005486808 nova_compute[259627]: 2025-10-14 09:31:59.584 2 DEBUG nova.network.os_vif_util [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:f3:32,bridge_name='br-int',has_traffic_filtering=True,id=02b93e6c-e8fd-4ab1-bd57-84775ae34da2,network=Network(026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02b93e6c-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:31:59 np0005486808 nova_compute[259627]: 2025-10-14 09:31:59.586 2 DEBUG nova.objects.instance [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'pci_devices' on Instance uuid 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:31:59 np0005486808 nova_compute[259627]: 2025-10-14 09:31:59.618 2 DEBUG nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:31:59 np0005486808 nova_compute[259627]:  <uuid>8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0</uuid>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:  <name>instance-0000008c</name>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:31:59 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-1433018777</nova:name>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:31:58</nova:creationTime>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:31:59 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:        <nova:user uuid="20f3546ab30e42b5b641f67780316750">tempest-TestSecurityGroupsBasicOps-1327646173-project-member</nova:user>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:        <nova:project uuid="5f754bf649a2404fa8dee732f5aab36e">tempest-TestSecurityGroupsBasicOps-1327646173</nova:project>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:        <nova:port uuid="02b93e6c-e8fd-4ab1-bd57-84775ae34da2">
Oct 14 05:31:59 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:      <entry name="serial">8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0</entry>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:      <entry name="uuid">8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0</entry>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:31:59 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0_disk">
Oct 14 05:31:59 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:31:59 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:31:59 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0_disk.config">
Oct 14 05:31:59 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:31:59 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:31:59 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:66:f3:32"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:      <target dev="tap02b93e6c-e8"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:31:59 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0/console.log" append="off"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:31:59 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:31:59 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:31:59 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:31:59 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:31:59 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:31:59 np0005486808 nova_compute[259627]: 2025-10-14 09:31:59.630 2 DEBUG nova.compute.manager [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Preparing to wait for external event network-vif-plugged-02b93e6c-e8fd-4ab1-bd57-84775ae34da2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:31:59 np0005486808 nova_compute[259627]: 2025-10-14 09:31:59.630 2 DEBUG oslo_concurrency.lockutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:31:59 np0005486808 nova_compute[259627]: 2025-10-14 09:31:59.631 2 DEBUG oslo_concurrency.lockutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:31:59 np0005486808 nova_compute[259627]: 2025-10-14 09:31:59.631 2 DEBUG oslo_concurrency.lockutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:31:59 np0005486808 nova_compute[259627]: 2025-10-14 09:31:59.632 2 DEBUG nova.virt.libvirt.vif [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:31:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-1433018777',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-1433018777',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ac',id=140,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFZgUpzEuWQ2yRKeeGP5cwEpEG8ix10coU0eNExva0wfqy7K02GmuQxBgal8BWbrr6/lfxNLSNkfNKLH91c4JJNbWrJCy5K0gCADFvn2wBXfeHIu8FO18gqCQFIgsuofuA==',key_name='tempest-TestSecurityGroupsBasicOps-135749728',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-cth66w0f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:31:54Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "address": "fa:16:3e:66:f3:32", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02b93e6c-e8", "ovs_interfaceid": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:31:59 np0005486808 nova_compute[259627]: 2025-10-14 09:31:59.632 2 DEBUG nova.network.os_vif_util [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "address": "fa:16:3e:66:f3:32", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02b93e6c-e8", "ovs_interfaceid": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:31:59 np0005486808 nova_compute[259627]: 2025-10-14 09:31:59.633 2 DEBUG nova.network.os_vif_util [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:f3:32,bridge_name='br-int',has_traffic_filtering=True,id=02b93e6c-e8fd-4ab1-bd57-84775ae34da2,network=Network(026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02b93e6c-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:31:59 np0005486808 nova_compute[259627]: 2025-10-14 09:31:59.633 2 DEBUG os_vif [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:f3:32,bridge_name='br-int',has_traffic_filtering=True,id=02b93e6c-e8fd-4ab1-bd57-84775ae34da2,network=Network(026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02b93e6c-e8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:31:59 np0005486808 nova_compute[259627]: 2025-10-14 09:31:59.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:59 np0005486808 nova_compute[259627]: 2025-10-14 09:31:59.634 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:31:59 np0005486808 nova_compute[259627]: 2025-10-14 09:31:59.634 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:31:59 np0005486808 nova_compute[259627]: 2025-10-14 09:31:59.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:59 np0005486808 nova_compute[259627]: 2025-10-14 09:31:59.638 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap02b93e6c-e8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:31:59 np0005486808 nova_compute[259627]: 2025-10-14 09:31:59.638 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap02b93e6c-e8, col_values=(('external_ids', {'iface-id': '02b93e6c-e8fd-4ab1-bd57-84775ae34da2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:66:f3:32', 'vm-uuid': '8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:31:59 np0005486808 nova_compute[259627]: 2025-10-14 09:31:59.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:59 np0005486808 NetworkManager[44885]: <info>  [1760434319.6410] manager: (tap02b93e6c-e8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/609)
Oct 14 05:31:59 np0005486808 nova_compute[259627]: 2025-10-14 09:31:59.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:31:59 np0005486808 nova_compute[259627]: 2025-10-14 09:31:59.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:31:59 np0005486808 nova_compute[259627]: 2025-10-14 09:31:59.650 2 INFO os_vif [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:f3:32,bridge_name='br-int',has_traffic_filtering=True,id=02b93e6c-e8fd-4ab1-bd57-84775ae34da2,network=Network(026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02b93e6c-e8')#033[00m
Oct 14 05:31:59 np0005486808 nova_compute[259627]: 2025-10-14 09:31:59.721 2 DEBUG nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:31:59 np0005486808 nova_compute[259627]: 2025-10-14 09:31:59.721 2 DEBUG nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:31:59 np0005486808 nova_compute[259627]: 2025-10-14 09:31:59.722 2 DEBUG nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No VIF found with MAC fa:16:3e:66:f3:32, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:31:59 np0005486808 nova_compute[259627]: 2025-10-14 09:31:59.722 2 INFO nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Using config drive#033[00m
Oct 14 05:31:59 np0005486808 nova_compute[259627]: 2025-10-14 09:31:59.745 2 DEBUG nova.storage.rbd_utils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:32:00 np0005486808 nova_compute[259627]: 2025-10-14 09:32:00.129 2 INFO nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Creating config drive at /var/lib/nova/instances/8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0/disk.config#033[00m
Oct 14 05:32:00 np0005486808 nova_compute[259627]: 2025-10-14 09:32:00.135 2 DEBUG oslo_concurrency.processutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq9numrr4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:32:00 np0005486808 nova_compute[259627]: 2025-10-14 09:32:00.175 2 DEBUG nova.compute.manager [req-1d30ba9a-cd98-4444-89e7-097a12e54314 req-4283c644-86fb-4c76-aa6d-a8cc2ebdc1ae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Received event network-changed-63edf6de-b6e6-4be7-870e-062e8186ec37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:32:00 np0005486808 nova_compute[259627]: 2025-10-14 09:32:00.175 2 DEBUG nova.compute.manager [req-1d30ba9a-cd98-4444-89e7-097a12e54314 req-4283c644-86fb-4c76-aa6d-a8cc2ebdc1ae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Refreshing instance network info cache due to event network-changed-63edf6de-b6e6-4be7-870e-062e8186ec37. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:32:00 np0005486808 nova_compute[259627]: 2025-10-14 09:32:00.176 2 DEBUG oslo_concurrency.lockutils [req-1d30ba9a-cd98-4444-89e7-097a12e54314 req-4283c644-86fb-4c76-aa6d-a8cc2ebdc1ae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-c9315047-de1c-423a-adfa-118d77df3c94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:32:00 np0005486808 nova_compute[259627]: 2025-10-14 09:32:00.176 2 DEBUG oslo_concurrency.lockutils [req-1d30ba9a-cd98-4444-89e7-097a12e54314 req-4283c644-86fb-4c76-aa6d-a8cc2ebdc1ae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-c9315047-de1c-423a-adfa-118d77df3c94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:32:00 np0005486808 nova_compute[259627]: 2025-10-14 09:32:00.176 2 DEBUG nova.network.neutron [req-1d30ba9a-cd98-4444-89e7-097a12e54314 req-4283c644-86fb-4c76-aa6d-a8cc2ebdc1ae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Refreshing network info cache for port 63edf6de-b6e6-4be7-870e-062e8186ec37 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:32:00 np0005486808 nova_compute[259627]: 2025-10-14 09:32:00.196 2 DEBUG nova.network.neutron [req-5824fcf9-f36e-423e-9539-02560540ac32 req-2a4244f1-3907-4f72-aaf3-917a25b3befb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Updated VIF entry in instance network info cache for port 02b93e6c-e8fd-4ab1-bd57-84775ae34da2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:32:00 np0005486808 nova_compute[259627]: 2025-10-14 09:32:00.197 2 DEBUG nova.network.neutron [req-5824fcf9-f36e-423e-9539-02560540ac32 req-2a4244f1-3907-4f72-aaf3-917a25b3befb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Updating instance_info_cache with network_info: [{"id": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "address": "fa:16:3e:66:f3:32", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02b93e6c-e8", "ovs_interfaceid": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:32:00 np0005486808 nova_compute[259627]: 2025-10-14 09:32:00.210 2 DEBUG oslo_concurrency.lockutils [req-5824fcf9-f36e-423e-9539-02560540ac32 req-2a4244f1-3907-4f72-aaf3-917a25b3befb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:32:00 np0005486808 nova_compute[259627]: 2025-10-14 09:32:00.278 2 DEBUG oslo_concurrency.processutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq9numrr4" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:32:00 np0005486808 nova_compute[259627]: 2025-10-14 09:32:00.309 2 DEBUG nova.storage.rbd_utils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:32:00 np0005486808 nova_compute[259627]: 2025-10-14 09:32:00.312 2 DEBUG oslo_concurrency.processutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0/disk.config 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:32:00 np0005486808 nova_compute[259627]: 2025-10-14 09:32:00.465 2 DEBUG oslo_concurrency.processutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0/disk.config 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:32:00 np0005486808 nova_compute[259627]: 2025-10-14 09:32:00.466 2 INFO nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Deleting local config drive /var/lib/nova/instances/8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0/disk.config because it was imported into RBD.#033[00m
Oct 14 05:32:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2366: 305 pgs: 305 active+clean; 213 MiB data, 972 MiB used, 59 GiB / 60 GiB avail; 684 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Oct 14 05:32:00 np0005486808 kernel: tap02b93e6c-e8: entered promiscuous mode
Oct 14 05:32:00 np0005486808 NetworkManager[44885]: <info>  [1760434320.5139] manager: (tap02b93e6c-e8): new Tun device (/org/freedesktop/NetworkManager/Devices/610)
Oct 14 05:32:00 np0005486808 ovn_controller[152662]: 2025-10-14T09:32:00Z|01497|binding|INFO|Claiming lport 02b93e6c-e8fd-4ab1-bd57-84775ae34da2 for this chassis.
Oct 14 05:32:00 np0005486808 ovn_controller[152662]: 2025-10-14T09:32:00Z|01498|binding|INFO|02b93e6c-e8fd-4ab1-bd57-84775ae34da2: Claiming fa:16:3e:66:f3:32 10.100.0.5
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.524 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:f3:32 10.100.0.5'], port_security=['fa:16:3e:66:f3:32 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f754bf649a2404fa8dee732f5aab36e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '62ae45e1-2406-4d0e-83db-65cdec8c0b6f 6ff9262f-239c-4772-9987-411eb120736a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02951490-3f50-4bba-9cc5-21d9c0a6e4ba, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=02b93e6c-e8fd-4ab1-bd57-84775ae34da2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.525 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 02b93e6c-e8fd-4ab1-bd57-84775ae34da2 in datapath 026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540 bound to our chassis#033[00m
Oct 14 05:32:00 np0005486808 nova_compute[259627]: 2025-10-14 09:32:00.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.526 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540#033[00m
Oct 14 05:32:00 np0005486808 nova_compute[259627]: 2025-10-14 09:32:00.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:00 np0005486808 ovn_controller[152662]: 2025-10-14T09:32:00Z|01499|binding|INFO|Setting lport 02b93e6c-e8fd-4ab1-bd57-84775ae34da2 ovn-installed in OVS
Oct 14 05:32:00 np0005486808 ovn_controller[152662]: 2025-10-14T09:32:00Z|01500|binding|INFO|Setting lport 02b93e6c-e8fd-4ab1-bd57-84775ae34da2 up in Southbound
Oct 14 05:32:00 np0005486808 nova_compute[259627]: 2025-10-14 09:32:00.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.541 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9191f8d3-0bcf-4cb5-8847-65ad3d4b6ce9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.542 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap026a2ce2-41 in ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.543 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap026a2ce2-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.543 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1c8c5f55-0211-46b3-b1b5-a1e57f1475d9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.545 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7dfcbc73-d5a1-4260-bcf2-8bffb1d78b58]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:00 np0005486808 systemd-udevd[402511]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:32:00 np0005486808 systemd-machined[214636]: New machine qemu-173-instance-0000008c.
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.560 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[492aa675-f8b3-4d65-9abf-6c0d962b631a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:00 np0005486808 systemd[1]: Started Virtual Machine qemu-173-instance-0000008c.
Oct 14 05:32:00 np0005486808 NetworkManager[44885]: <info>  [1760434320.5762] device (tap02b93e6c-e8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:32:00 np0005486808 NetworkManager[44885]: <info>  [1760434320.5772] device (tap02b93e6c-e8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.584 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[eb2e1f16-bc53-432a-b37e-ff4358591442]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:00 np0005486808 nova_compute[259627]: 2025-10-14 09:32:00.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.619 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[265f3fb5-9060-408c-8d95-e39631f78c6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.626 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[afa0ed73-48b6-4c38-9ac6-005841831090]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:00 np0005486808 NetworkManager[44885]: <info>  [1760434320.6303] manager: (tap026a2ce2-40): new Veth device (/org/freedesktop/NetworkManager/Devices/611)
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.663 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[817e8662-4a11-476a-a1fd-e30760f38338]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.666 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[7f237706-4e25-424b-905d-4b7297d18c27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:00 np0005486808 NetworkManager[44885]: <info>  [1760434320.6906] device (tap026a2ce2-40): carrier: link connected
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.698 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a3f52e35-6232-49e8-99b2-bf76dbf931f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.721 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c0163886-7cfe-443d-bd48-9ebadce67bd8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap026a2ce2-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:95:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 424], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 809945, 'reachable_time': 23632, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 402542, 'error': None, 'target': 'ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.744 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fd2a1a5c-ec40-4322-a888-769d62a09fb1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe21:9556'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 809945, 'tstamp': 809945}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402543, 'error': None, 'target': 'ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.759 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bc198fec-64b5-4969-af9b-c9df1eefecdf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap026a2ce2-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:95:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 424], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 809945, 'reachable_time': 23632, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 402544, 'error': None, 'target': 'ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.786 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b087abeb-55c9-46f7-b84b-85df26dcc0e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.848 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1e7f91ec-8e3c-4855-a629-d12f25c503ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.849 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap026a2ce2-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.850 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.850 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap026a2ce2-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:32:00 np0005486808 kernel: tap026a2ce2-40: entered promiscuous mode
Oct 14 05:32:00 np0005486808 nova_compute[259627]: 2025-10-14 09:32:00.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:00 np0005486808 NetworkManager[44885]: <info>  [1760434320.8528] manager: (tap026a2ce2-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/612)
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.856 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap026a2ce2-40, col_values=(('external_ids', {'iface-id': '7f009d61-2857-4109-a89b-a83c53d44768'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:32:00 np0005486808 nova_compute[259627]: 2025-10-14 09:32:00.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:00 np0005486808 ovn_controller[152662]: 2025-10-14T09:32:00Z|01501|binding|INFO|Releasing lport 7f009d61-2857-4109-a89b-a83c53d44768 from this chassis (sb_readonly=0)
Oct 14 05:32:00 np0005486808 nova_compute[259627]: 2025-10-14 09:32:00.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.875 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.877 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d8260bce-fa2b-4b14-a030-fcb92053fa06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.878 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540.pid.haproxy
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:32:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:00.878 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540', 'env', 'PROCESS_TAG=haproxy-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:32:00 np0005486808 nova_compute[259627]: 2025-10-14 09:32:00.966 2 DEBUG nova.compute.manager [req-d1950e8f-ca09-4643-a045-f0208a379acb req-6c26c4a6-3c83-4698-817a-42f64e509894 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Received event network-vif-plugged-02b93e6c-e8fd-4ab1-bd57-84775ae34da2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:32:00 np0005486808 nova_compute[259627]: 2025-10-14 09:32:00.966 2 DEBUG oslo_concurrency.lockutils [req-d1950e8f-ca09-4643-a045-f0208a379acb req-6c26c4a6-3c83-4698-817a-42f64e509894 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:32:00 np0005486808 nova_compute[259627]: 2025-10-14 09:32:00.967 2 DEBUG oslo_concurrency.lockutils [req-d1950e8f-ca09-4643-a045-f0208a379acb req-6c26c4a6-3c83-4698-817a-42f64e509894 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:32:00 np0005486808 nova_compute[259627]: 2025-10-14 09:32:00.967 2 DEBUG oslo_concurrency.lockutils [req-d1950e8f-ca09-4643-a045-f0208a379acb req-6c26c4a6-3c83-4698-817a-42f64e509894 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:32:00 np0005486808 nova_compute[259627]: 2025-10-14 09:32:00.967 2 DEBUG nova.compute.manager [req-d1950e8f-ca09-4643-a045-f0208a379acb req-6c26c4a6-3c83-4698-817a-42f64e509894 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Processing event network-vif-plugged-02b93e6c-e8fd-4ab1-bd57-84775ae34da2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:32:01 np0005486808 podman[402618]: 2025-10-14 09:32:01.238108768 +0000 UTC m=+0.027620058 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:32:01 np0005486808 podman[402618]: 2025-10-14 09:32:01.3551151 +0000 UTC m=+0.144626360 container create 1bff649b02a538e677b02569c6d3e618fdb5dbfadde55e1db76e9948bb728f7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 14 05:32:01 np0005486808 systemd[1]: Started libpod-conmon-1bff649b02a538e677b02569c6d3e618fdb5dbfadde55e1db76e9948bb728f7c.scope.
Oct 14 05:32:01 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:32:01 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfade0dc0f4b5a18f9357cc84a838a56c45c2e6cf4ac2c1e6656f3fc9a57ba4b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:32:01 np0005486808 nova_compute[259627]: 2025-10-14 09:32:01.467 2 DEBUG nova.network.neutron [req-1d30ba9a-cd98-4444-89e7-097a12e54314 req-4283c644-86fb-4c76-aa6d-a8cc2ebdc1ae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Updated VIF entry in instance network info cache for port 63edf6de-b6e6-4be7-870e-062e8186ec37. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:32:01 np0005486808 nova_compute[259627]: 2025-10-14 09:32:01.468 2 DEBUG nova.network.neutron [req-1d30ba9a-cd98-4444-89e7-097a12e54314 req-4283c644-86fb-4c76-aa6d-a8cc2ebdc1ae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Updating instance_info_cache with network_info: [{"id": "63edf6de-b6e6-4be7-870e-062e8186ec37", "address": "fa:16:3e:20:cb:ea", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63edf6de-b6", "ovs_interfaceid": "63edf6de-b6e6-4be7-870e-062e8186ec37", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "787335a5-4b97-43d1-ba56-12091ebdecdb", "address": "fa:16:3e:df:a9:10", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedf:a910", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap787335a5-4b", "ovs_interfaceid": "787335a5-4b97-43d1-ba56-12091ebdecdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:32:01 np0005486808 nova_compute[259627]: 2025-10-14 09:32:01.497 2 DEBUG oslo_concurrency.lockutils [req-1d30ba9a-cd98-4444-89e7-097a12e54314 req-4283c644-86fb-4c76-aa6d-a8cc2ebdc1ae 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-c9315047-de1c-423a-adfa-118d77df3c94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:32:01 np0005486808 nova_compute[259627]: 2025-10-14 09:32:01.523 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434321.5226917, 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:32:01 np0005486808 nova_compute[259627]: 2025-10-14 09:32:01.524 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] VM Started (Lifecycle Event)#033[00m
Oct 14 05:32:01 np0005486808 nova_compute[259627]: 2025-10-14 09:32:01.527 2 DEBUG nova.compute.manager [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 05:32:01 np0005486808 nova_compute[259627]: 2025-10-14 09:32:01.531 2 DEBUG nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 05:32:01 np0005486808 nova_compute[259627]: 2025-10-14 09:32:01.536 2 INFO nova.virt.libvirt.driver [-] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Instance spawned successfully.
Oct 14 05:32:01 np0005486808 nova_compute[259627]: 2025-10-14 09:32:01.536 2 DEBUG nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 05:32:01 np0005486808 nova_compute[259627]: 2025-10-14 09:32:01.545 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:32:01 np0005486808 nova_compute[259627]: 2025-10-14 09:32:01.548 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 05:32:01 np0005486808 nova_compute[259627]: 2025-10-14 09:32:01.554 2 DEBUG nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:32:01 np0005486808 nova_compute[259627]: 2025-10-14 09:32:01.555 2 DEBUG nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:32:01 np0005486808 nova_compute[259627]: 2025-10-14 09:32:01.557 2 DEBUG nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:32:01 np0005486808 podman[402618]: 2025-10-14 09:32:01.557471076 +0000 UTC m=+0.346982376 container init 1bff649b02a538e677b02569c6d3e618fdb5dbfadde55e1db76e9948bb728f7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009)
Oct 14 05:32:01 np0005486808 nova_compute[259627]: 2025-10-14 09:32:01.557 2 DEBUG nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:32:01 np0005486808 nova_compute[259627]: 2025-10-14 09:32:01.558 2 DEBUG nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:32:01 np0005486808 nova_compute[259627]: 2025-10-14 09:32:01.558 2 DEBUG nova.virt.libvirt.driver [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:32:01 np0005486808 podman[402618]: 2025-10-14 09:32:01.563502224 +0000 UTC m=+0.353013494 container start 1bff649b02a538e677b02569c6d3e618fdb5dbfadde55e1db76e9948bb728f7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 05:32:01 np0005486808 nova_compute[259627]: 2025-10-14 09:32:01.567 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 05:32:01 np0005486808 nova_compute[259627]: 2025-10-14 09:32:01.568 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434321.5228019, 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:32:01 np0005486808 nova_compute[259627]: 2025-10-14 09:32:01.568 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] VM Paused (Lifecycle Event)
Oct 14 05:32:01 np0005486808 neutron-haproxy-ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540[402631]: [NOTICE]   (402635) : New worker (402637) forked
Oct 14 05:32:01 np0005486808 neutron-haproxy-ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540[402631]: [NOTICE]   (402635) : Loading success.
Oct 14 05:32:01 np0005486808 nova_compute[259627]: 2025-10-14 09:32:01.590 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:32:01 np0005486808 nova_compute[259627]: 2025-10-14 09:32:01.594 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434321.529974, 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:32:01 np0005486808 nova_compute[259627]: 2025-10-14 09:32:01.594 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] VM Resumed (Lifecycle Event)
Oct 14 05:32:01 np0005486808 nova_compute[259627]: 2025-10-14 09:32:01.616 2 INFO nova.compute.manager [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Took 6.93 seconds to spawn the instance on the hypervisor.
Oct 14 05:32:01 np0005486808 nova_compute[259627]: 2025-10-14 09:32:01.616 2 DEBUG nova.compute.manager [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:32:01 np0005486808 nova_compute[259627]: 2025-10-14 09:32:01.617 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:32:01 np0005486808 nova_compute[259627]: 2025-10-14 09:32:01.625 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 05:32:01 np0005486808 nova_compute[259627]: 2025-10-14 09:32:01.654 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 05:32:01 np0005486808 nova_compute[259627]: 2025-10-14 09:32:01.679 2 INFO nova.compute.manager [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Took 7.85 seconds to build instance.
Oct 14 05:32:01 np0005486808 nova_compute[259627]: 2025-10-14 09:32:01.693 2 DEBUG oslo_concurrency.lockutils [None req-c74344ba-d950-4cf6-9f56-06819f08e0bc 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.927s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:32:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2367: 305 pgs: 305 active+clean; 213 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 94 op/s
Oct 14 05:32:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:32:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:32:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:32:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:32:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:32:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:32:03 np0005486808 nova_compute[259627]: 2025-10-14 09:32:03.067 2 DEBUG nova.compute.manager [req-40090379-f5d9-465e-b3c6-79fd215a6353 req-c7d7c8dd-d999-4229-a79f-378092f8ec24 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Received event network-vif-plugged-02b93e6c-e8fd-4ab1-bd57-84775ae34da2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:32:03 np0005486808 nova_compute[259627]: 2025-10-14 09:32:03.068 2 DEBUG oslo_concurrency.lockutils [req-40090379-f5d9-465e-b3c6-79fd215a6353 req-c7d7c8dd-d999-4229-a79f-378092f8ec24 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:32:03 np0005486808 nova_compute[259627]: 2025-10-14 09:32:03.068 2 DEBUG oslo_concurrency.lockutils [req-40090379-f5d9-465e-b3c6-79fd215a6353 req-c7d7c8dd-d999-4229-a79f-378092f8ec24 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:32:03 np0005486808 nova_compute[259627]: 2025-10-14 09:32:03.068 2 DEBUG oslo_concurrency.lockutils [req-40090379-f5d9-465e-b3c6-79fd215a6353 req-c7d7c8dd-d999-4229-a79f-378092f8ec24 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:32:03 np0005486808 nova_compute[259627]: 2025-10-14 09:32:03.069 2 DEBUG nova.compute.manager [req-40090379-f5d9-465e-b3c6-79fd215a6353 req-c7d7c8dd-d999-4229-a79f-378092f8ec24 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] No waiting events found dispatching network-vif-plugged-02b93e6c-e8fd-4ab1-bd57-84775ae34da2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 05:32:03 np0005486808 nova_compute[259627]: 2025-10-14 09:32:03.069 2 WARNING nova.compute.manager [req-40090379-f5d9-465e-b3c6-79fd215a6353 req-c7d7c8dd-d999-4229-a79f-378092f8ec24 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Received unexpected event network-vif-plugged-02b93e6c-e8fd-4ab1-bd57-84775ae34da2 for instance with vm_state active and task_state None.
Oct 14 05:32:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:32:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2368: 305 pgs: 305 active+clean; 213 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Oct 14 05:32:04 np0005486808 nova_compute[259627]: 2025-10-14 09:32:04.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:32:05 np0005486808 nova_compute[259627]: 2025-10-14 09:32:05.598 2 DEBUG nova.compute.manager [req-a70e7622-dac0-409f-9228-5bf5c39b8015 req-16928165-acbd-4375-96a5-acbd82ae0d09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Received event network-changed-02b93e6c-e8fd-4ab1-bd57-84775ae34da2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:32:05 np0005486808 nova_compute[259627]: 2025-10-14 09:32:05.598 2 DEBUG nova.compute.manager [req-a70e7622-dac0-409f-9228-5bf5c39b8015 req-16928165-acbd-4375-96a5-acbd82ae0d09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Refreshing instance network info cache due to event network-changed-02b93e6c-e8fd-4ab1-bd57-84775ae34da2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 05:32:05 np0005486808 nova_compute[259627]: 2025-10-14 09:32:05.599 2 DEBUG oslo_concurrency.lockutils [req-a70e7622-dac0-409f-9228-5bf5c39b8015 req-16928165-acbd-4375-96a5-acbd82ae0d09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 05:32:05 np0005486808 nova_compute[259627]: 2025-10-14 09:32:05.599 2 DEBUG oslo_concurrency.lockutils [req-a70e7622-dac0-409f-9228-5bf5c39b8015 req-16928165-acbd-4375-96a5-acbd82ae0d09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 05:32:05 np0005486808 nova_compute[259627]: 2025-10-14 09:32:05.600 2 DEBUG nova.network.neutron [req-a70e7622-dac0-409f-9228-5bf5c39b8015 req-16928165-acbd-4375-96a5-acbd82ae0d09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Refreshing network info cache for port 02b93e6c-e8fd-4ab1-bd57-84775ae34da2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 05:32:05 np0005486808 nova_compute[259627]: 2025-10-14 09:32:05.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:32:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:32:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3013312311' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:32:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:32:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3013312311' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:32:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2369: 305 pgs: 305 active+clean; 214 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 165 op/s
Oct 14 05:32:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:07.048 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:32:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:07.048 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:32:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:07.049 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:32:07 np0005486808 nova_compute[259627]: 2025-10-14 09:32:07.095 2 DEBUG nova.network.neutron [req-a70e7622-dac0-409f-9228-5bf5c39b8015 req-16928165-acbd-4375-96a5-acbd82ae0d09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Updated VIF entry in instance network info cache for port 02b93e6c-e8fd-4ab1-bd57-84775ae34da2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 05:32:07 np0005486808 nova_compute[259627]: 2025-10-14 09:32:07.096 2 DEBUG nova.network.neutron [req-a70e7622-dac0-409f-9228-5bf5c39b8015 req-16928165-acbd-4375-96a5-acbd82ae0d09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Updating instance_info_cache with network_info: [{"id": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "address": "fa:16:3e:66:f3:32", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02b93e6c-e8", "ovs_interfaceid": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 05:32:07 np0005486808 nova_compute[259627]: 2025-10-14 09:32:07.115 2 DEBUG oslo_concurrency.lockutils [req-a70e7622-dac0-409f-9228-5bf5c39b8015 req-16928165-acbd-4375-96a5-acbd82ae0d09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 05:32:07 np0005486808 ovn_controller[152662]: 2025-10-14T09:32:07Z|00176|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:20:cb:ea 10.100.0.7
Oct 14 05:32:07 np0005486808 ovn_controller[152662]: 2025-10-14T09:32:07Z|00177|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:20:cb:ea 10.100.0.7
Oct 14 05:32:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:32:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2370: 305 pgs: 305 active+clean; 214 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 762 KiB/s wr, 150 op/s
Oct 14 05:32:09 np0005486808 nova_compute[259627]: 2025-10-14 09:32:09.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:32:10 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:32:10 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:32:10 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:32:10 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:32:10 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:32:10 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:32:10 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 8d00cfb5-c2f9-4c6b-82fd-4604b588c2e6 does not exist
Oct 14 05:32:10 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 0d30dccf-aca4-47eb-a56f-f18c35d77cdf does not exist
Oct 14 05:32:10 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev adb44b83-7c9f-4cdb-849e-3e262510d73a does not exist
Oct 14 05:32:10 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:32:10 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:32:10 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:32:10 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:32:10 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:32:10 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:32:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2371: 305 pgs: 305 active+clean; 225 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.5 MiB/s wr, 172 op/s
Oct 14 05:32:10 np0005486808 nova_compute[259627]: 2025-10-14 09:32:10.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:32:10 np0005486808 podman[402917]: 2025-10-14 09:32:10.829905609 +0000 UTC m=+0.044214256 container create d727975578b25c525e7d7487f84652edd10fb507d534fb60c908d68eaa395b1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:32:10 np0005486808 systemd[1]: Started libpod-conmon-d727975578b25c525e7d7487f84652edd10fb507d534fb60c908d68eaa395b1c.scope.
Oct 14 05:32:10 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:32:10 np0005486808 podman[402917]: 2025-10-14 09:32:10.80834544 +0000 UTC m=+0.022654107 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:32:10 np0005486808 podman[402917]: 2025-10-14 09:32:10.910288652 +0000 UTC m=+0.124597329 container init d727975578b25c525e7d7487f84652edd10fb507d534fb60c908d68eaa395b1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hypatia, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 14 05:32:10 np0005486808 podman[402917]: 2025-10-14 09:32:10.916895054 +0000 UTC m=+0.131203701 container start d727975578b25c525e7d7487f84652edd10fb507d534fb60c908d68eaa395b1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hypatia, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 14 05:32:10 np0005486808 podman[402917]: 2025-10-14 09:32:10.920041621 +0000 UTC m=+0.134350298 container attach d727975578b25c525e7d7487f84652edd10fb507d534fb60c908d68eaa395b1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:32:10 np0005486808 eager_hypatia[402933]: 167 167
Oct 14 05:32:10 np0005486808 systemd[1]: libpod-d727975578b25c525e7d7487f84652edd10fb507d534fb60c908d68eaa395b1c.scope: Deactivated successfully.
Oct 14 05:32:10 np0005486808 podman[402917]: 2025-10-14 09:32:10.923329662 +0000 UTC m=+0.137638299 container died d727975578b25c525e7d7487f84652edd10fb507d534fb60c908d68eaa395b1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hypatia, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 14 05:32:10 np0005486808 systemd[1]: var-lib-containers-storage-overlay-3c9c59c85d428861b8d953ac095be1036292a79a548681e5417d1e76059b26b0-merged.mount: Deactivated successfully.
Oct 14 05:32:10 np0005486808 podman[402917]: 2025-10-14 09:32:10.965967418 +0000 UTC m=+0.180276065 container remove d727975578b25c525e7d7487f84652edd10fb507d534fb60c908d68eaa395b1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:32:10 np0005486808 systemd[1]: libpod-conmon-d727975578b25c525e7d7487f84652edd10fb507d534fb60c908d68eaa395b1c.scope: Deactivated successfully.
Oct 14 05:32:11 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:32:11 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:32:11 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:32:11 np0005486808 podman[402957]: 2025-10-14 09:32:11.162261095 +0000 UTC m=+0.041610452 container create 9c7fc5bda1cb5fe07eabce4152a2a83c43820c83a865bada542d5753d90c56e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 14 05:32:11 np0005486808 systemd[1]: Started libpod-conmon-9c7fc5bda1cb5fe07eabce4152a2a83c43820c83a865bada542d5753d90c56e3.scope.
Oct 14 05:32:11 np0005486808 podman[402957]: 2025-10-14 09:32:11.142255634 +0000 UTC m=+0.021605021 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:32:11 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:32:11 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f031ee2e7e19844c8f19b9cc57f6c89a0f91ebb7a65b5a6722b71d98a6ff2be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:32:11 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f031ee2e7e19844c8f19b9cc57f6c89a0f91ebb7a65b5a6722b71d98a6ff2be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:32:11 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f031ee2e7e19844c8f19b9cc57f6c89a0f91ebb7a65b5a6722b71d98a6ff2be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:32:11 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f031ee2e7e19844c8f19b9cc57f6c89a0f91ebb7a65b5a6722b71d98a6ff2be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:32:11 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f031ee2e7e19844c8f19b9cc57f6c89a0f91ebb7a65b5a6722b71d98a6ff2be/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:32:11 np0005486808 podman[402957]: 2025-10-14 09:32:11.272817078 +0000 UTC m=+0.152166485 container init 9c7fc5bda1cb5fe07eabce4152a2a83c43820c83a865bada542d5753d90c56e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mirzakhani, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 05:32:11 np0005486808 podman[402957]: 2025-10-14 09:32:11.284649519 +0000 UTC m=+0.163998876 container start 9c7fc5bda1cb5fe07eabce4152a2a83c43820c83a865bada542d5753d90c56e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:32:11 np0005486808 podman[402957]: 2025-10-14 09:32:11.287712804 +0000 UTC m=+0.167062161 container attach 9c7fc5bda1cb5fe07eabce4152a2a83c43820c83a865bada542d5753d90c56e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 14 05:32:12 np0005486808 eager_mirzakhani[402973]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:32:12 np0005486808 eager_mirzakhani[402973]: --> relative data size: 1.0
Oct 14 05:32:12 np0005486808 eager_mirzakhani[402973]: --> All data devices are unavailable
Oct 14 05:32:12 np0005486808 systemd[1]: libpod-9c7fc5bda1cb5fe07eabce4152a2a83c43820c83a865bada542d5753d90c56e3.scope: Deactivated successfully.
Oct 14 05:32:12 np0005486808 systemd[1]: libpod-9c7fc5bda1cb5fe07eabce4152a2a83c43820c83a865bada542d5753d90c56e3.scope: Consumed 1.046s CPU time.
Oct 14 05:32:12 np0005486808 podman[402957]: 2025-10-14 09:32:12.41630374 +0000 UTC m=+1.295653127 container died 9c7fc5bda1cb5fe07eabce4152a2a83c43820c83a865bada542d5753d90c56e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:32:12 np0005486808 systemd[1]: var-lib-containers-storage-overlay-6f031ee2e7e19844c8f19b9cc57f6c89a0f91ebb7a65b5a6722b71d98a6ff2be-merged.mount: Deactivated successfully.
Oct 14 05:32:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2372: 305 pgs: 305 active+clean; 246 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.1 MiB/s wr, 180 op/s
Oct 14 05:32:12 np0005486808 podman[402957]: 2025-10-14 09:32:12.495057192 +0000 UTC m=+1.374406579 container remove 9c7fc5bda1cb5fe07eabce4152a2a83c43820c83a865bada542d5753d90c56e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mirzakhani, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:32:12 np0005486808 systemd[1]: libpod-conmon-9c7fc5bda1cb5fe07eabce4152a2a83c43820c83a865bada542d5753d90c56e3.scope: Deactivated successfully.
Oct 14 05:32:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:32:13 np0005486808 podman[403156]: 2025-10-14 09:32:13.398755179 +0000 UTC m=+0.057369839 container create b1f35623487b56b73234400aec1e1e7cfd035a62fcf368b0d69a8f5c90977bc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_morse, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 14 05:32:13 np0005486808 systemd[1]: Started libpod-conmon-b1f35623487b56b73234400aec1e1e7cfd035a62fcf368b0d69a8f5c90977bc3.scope.
Oct 14 05:32:13 np0005486808 podman[403156]: 2025-10-14 09:32:13.380815059 +0000 UTC m=+0.039429699 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:32:13 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:32:13 np0005486808 podman[403156]: 2025-10-14 09:32:13.503372446 +0000 UTC m=+0.161987156 container init b1f35623487b56b73234400aec1e1e7cfd035a62fcf368b0d69a8f5c90977bc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_morse, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:32:13 np0005486808 podman[403156]: 2025-10-14 09:32:13.511364192 +0000 UTC m=+0.169978822 container start b1f35623487b56b73234400aec1e1e7cfd035a62fcf368b0d69a8f5c90977bc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_morse, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:32:13 np0005486808 podman[403156]: 2025-10-14 09:32:13.516364254 +0000 UTC m=+0.174978954 container attach b1f35623487b56b73234400aec1e1e7cfd035a62fcf368b0d69a8f5c90977bc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 14 05:32:13 np0005486808 stupefied_morse[403172]: 167 167
Oct 14 05:32:13 np0005486808 systemd[1]: libpod-b1f35623487b56b73234400aec1e1e7cfd035a62fcf368b0d69a8f5c90977bc3.scope: Deactivated successfully.
Oct 14 05:32:13 np0005486808 conmon[403172]: conmon b1f35623487b56b73234 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b1f35623487b56b73234400aec1e1e7cfd035a62fcf368b0d69a8f5c90977bc3.scope/container/memory.events
Oct 14 05:32:13 np0005486808 podman[403177]: 2025-10-14 09:32:13.564475095 +0000 UTC m=+0.030791947 container died b1f35623487b56b73234400aec1e1e7cfd035a62fcf368b0d69a8f5c90977bc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 14 05:32:13 np0005486808 systemd[1]: var-lib-containers-storage-overlay-2e19c70f4d3ce9a933121a0d0bdde3f224053ab1865a2c24d69fe9764613d348-merged.mount: Deactivated successfully.
Oct 14 05:32:13 np0005486808 podman[403177]: 2025-10-14 09:32:13.633424877 +0000 UTC m=+0.099741739 container remove b1f35623487b56b73234400aec1e1e7cfd035a62fcf368b0d69a8f5c90977bc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_morse, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:32:13 np0005486808 systemd[1]: libpod-conmon-b1f35623487b56b73234400aec1e1e7cfd035a62fcf368b0d69a8f5c90977bc3.scope: Deactivated successfully.
Oct 14 05:32:13 np0005486808 ovn_controller[152662]: 2025-10-14T09:32:13Z|00178|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:66:f3:32 10.100.0.5
Oct 14 05:32:13 np0005486808 ovn_controller[152662]: 2025-10-14T09:32:13Z|00179|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:66:f3:32 10.100.0.5
Oct 14 05:32:13 np0005486808 podman[403199]: 2025-10-14 09:32:13.880683285 +0000 UTC m=+0.061099061 container create 0fa62734ed952236fea311af73c9687671e0578654f34b5d955899719ec35193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 14 05:32:13 np0005486808 systemd[1]: Started libpod-conmon-0fa62734ed952236fea311af73c9687671e0578654f34b5d955899719ec35193.scope.
Oct 14 05:32:13 np0005486808 podman[403199]: 2025-10-14 09:32:13.850997156 +0000 UTC m=+0.031412992 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:32:13 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:32:13 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32fb41351e8c9322bd06d4188a235f4a94a8c56398a9cca785e6c10b608fd153/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:32:13 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32fb41351e8c9322bd06d4188a235f4a94a8c56398a9cca785e6c10b608fd153/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:32:13 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32fb41351e8c9322bd06d4188a235f4a94a8c56398a9cca785e6c10b608fd153/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:32:13 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32fb41351e8c9322bd06d4188a235f4a94a8c56398a9cca785e6c10b608fd153/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:32:13 np0005486808 podman[403199]: 2025-10-14 09:32:13.977503111 +0000 UTC m=+0.157918857 container init 0fa62734ed952236fea311af73c9687671e0578654f34b5d955899719ec35193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:32:13 np0005486808 podman[403199]: 2025-10-14 09:32:13.990637353 +0000 UTC m=+0.171053099 container start 0fa62734ed952236fea311af73c9687671e0578654f34b5d955899719ec35193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_beaver, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 14 05:32:13 np0005486808 podman[403199]: 2025-10-14 09:32:13.99500476 +0000 UTC m=+0.175420526 container attach 0fa62734ed952236fea311af73c9687671e0578654f34b5d955899719ec35193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_beaver, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 14 05:32:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2373: 305 pgs: 305 active+clean; 246 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Oct 14 05:32:14 np0005486808 nova_compute[259627]: 2025-10-14 09:32:14.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:14 np0005486808 focused_beaver[403216]: {
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:    "0": [
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:        {
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:            "devices": [
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:                "/dev/loop3"
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:            ],
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:            "lv_name": "ceph_lv0",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:            "lv_size": "21470642176",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:            "name": "ceph_lv0",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:            "tags": {
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:                "ceph.cluster_name": "ceph",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:                "ceph.crush_device_class": "",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:                "ceph.encrypted": "0",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:                "ceph.osd_id": "0",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:                "ceph.type": "block",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:                "ceph.vdo": "0"
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:            },
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:            "type": "block",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:            "vg_name": "ceph_vg0"
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:        }
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:    ],
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:    "1": [
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:        {
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:            "devices": [
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:                "/dev/loop4"
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:            ],
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:            "lv_name": "ceph_lv1",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:            "lv_size": "21470642176",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:            "name": "ceph_lv1",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:            "tags": {
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:                "ceph.cluster_name": "ceph",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:                "ceph.crush_device_class": "",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:                "ceph.encrypted": "0",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:                "ceph.osd_id": "1",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:                "ceph.type": "block",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:                "ceph.vdo": "0"
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:            },
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:            "type": "block",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:            "vg_name": "ceph_vg1"
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:        }
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:    ],
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:    "2": [
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:        {
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:            "devices": [
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:                "/dev/loop5"
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:            ],
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:            "lv_name": "ceph_lv2",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:            "lv_size": "21470642176",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:            "name": "ceph_lv2",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:            "tags": {
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:                "ceph.cluster_name": "ceph",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:                "ceph.crush_device_class": "",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:                "ceph.encrypted": "0",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:                "ceph.osd_id": "2",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:                "ceph.type": "block",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:                "ceph.vdo": "0"
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:            },
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:            "type": "block",
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:            "vg_name": "ceph_vg2"
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:        }
Oct 14 05:32:14 np0005486808 focused_beaver[403216]:    ]
Oct 14 05:32:14 np0005486808 focused_beaver[403216]: }
Oct 14 05:32:14 np0005486808 systemd[1]: libpod-0fa62734ed952236fea311af73c9687671e0578654f34b5d955899719ec35193.scope: Deactivated successfully.
Oct 14 05:32:14 np0005486808 podman[403199]: 2025-10-14 09:32:14.825808148 +0000 UTC m=+1.006223894 container died 0fa62734ed952236fea311af73c9687671e0578654f34b5d955899719ec35193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_beaver, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 14 05:32:14 np0005486808 systemd[1]: var-lib-containers-storage-overlay-32fb41351e8c9322bd06d4188a235f4a94a8c56398a9cca785e6c10b608fd153-merged.mount: Deactivated successfully.
Oct 14 05:32:14 np0005486808 podman[403199]: 2025-10-14 09:32:14.921519257 +0000 UTC m=+1.101935043 container remove 0fa62734ed952236fea311af73c9687671e0578654f34b5d955899719ec35193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:32:14 np0005486808 systemd[1]: libpod-conmon-0fa62734ed952236fea311af73c9687671e0578654f34b5d955899719ec35193.scope: Deactivated successfully.
Oct 14 05:32:15 np0005486808 podman[403380]: 2025-10-14 09:32:15.645620777 +0000 UTC m=+0.050159402 container create fc2ed0c56527772f26e43319e2aadbbd4bd4e78fb6b16917525bac00f00718ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hopper, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:32:15 np0005486808 systemd[1]: Started libpod-conmon-fc2ed0c56527772f26e43319e2aadbbd4bd4e78fb6b16917525bac00f00718ac.scope.
Oct 14 05:32:15 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:32:15 np0005486808 podman[403380]: 2025-10-14 09:32:15.625198356 +0000 UTC m=+0.029736971 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:32:15 np0005486808 nova_compute[259627]: 2025-10-14 09:32:15.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:15 np0005486808 podman[403380]: 2025-10-14 09:32:15.787795416 +0000 UTC m=+0.192334031 container init fc2ed0c56527772f26e43319e2aadbbd4bd4e78fb6b16917525bac00f00718ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:32:15 np0005486808 podman[403380]: 2025-10-14 09:32:15.797787651 +0000 UTC m=+0.202326256 container start fc2ed0c56527772f26e43319e2aadbbd4bd4e78fb6b16917525bac00f00718ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hopper, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:32:15 np0005486808 podman[403380]: 2025-10-14 09:32:15.801536543 +0000 UTC m=+0.206075168 container attach fc2ed0c56527772f26e43319e2aadbbd4bd4e78fb6b16917525bac00f00718ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hopper, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:32:15 np0005486808 epic_hopper[403396]: 167 167
Oct 14 05:32:15 np0005486808 podman[403380]: 2025-10-14 09:32:15.807436268 +0000 UTC m=+0.211974873 container died fc2ed0c56527772f26e43319e2aadbbd4bd4e78fb6b16917525bac00f00718ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hopper, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:32:15 np0005486808 systemd[1]: libpod-fc2ed0c56527772f26e43319e2aadbbd4bd4e78fb6b16917525bac00f00718ac.scope: Deactivated successfully.
Oct 14 05:32:15 np0005486808 systemd[1]: var-lib-containers-storage-overlay-910d394eea831c5a41c3f307d5712c12b56cab6e10854c75cfc0246836beebe6-merged.mount: Deactivated successfully.
Oct 14 05:32:15 np0005486808 podman[403380]: 2025-10-14 09:32:15.853249382 +0000 UTC m=+0.257787987 container remove fc2ed0c56527772f26e43319e2aadbbd4bd4e78fb6b16917525bac00f00718ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hopper, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Oct 14 05:32:15 np0005486808 systemd[1]: libpod-conmon-fc2ed0c56527772f26e43319e2aadbbd4bd4e78fb6b16917525bac00f00718ac.scope: Deactivated successfully.
Oct 14 05:32:15 np0005486808 podman[403399]: 2025-10-14 09:32:15.873621792 +0000 UTC m=+0.085418977 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 14 05:32:15 np0005486808 podman[403400]: 2025-10-14 09:32:15.884495869 +0000 UTC m=+0.090150934 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 05:32:16 np0005486808 podman[403457]: 2025-10-14 09:32:16.072939493 +0000 UTC m=+0.056436036 container create 1083307916ecf63ecfaf94d431aa3f17353271726ab266702f7fbfa2361afccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 14 05:32:16 np0005486808 systemd[1]: Started libpod-conmon-1083307916ecf63ecfaf94d431aa3f17353271726ab266702f7fbfa2361afccb.scope.
Oct 14 05:32:16 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:32:16 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ab96687edc86e5a634307c4cb6477253ee7aba6c036856e8c430573c542e33/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:32:16 np0005486808 podman[403457]: 2025-10-14 09:32:16.050700257 +0000 UTC m=+0.034196770 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:32:16 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ab96687edc86e5a634307c4cb6477253ee7aba6c036856e8c430573c542e33/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:32:16 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ab96687edc86e5a634307c4cb6477253ee7aba6c036856e8c430573c542e33/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:32:16 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ab96687edc86e5a634307c4cb6477253ee7aba6c036856e8c430573c542e33/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:32:16 np0005486808 podman[403457]: 2025-10-14 09:32:16.156517624 +0000 UTC m=+0.140014147 container init 1083307916ecf63ecfaf94d431aa3f17353271726ab266702f7fbfa2361afccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_colden, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 14 05:32:16 np0005486808 podman[403457]: 2025-10-14 09:32:16.167828632 +0000 UTC m=+0.151325165 container start 1083307916ecf63ecfaf94d431aa3f17353271726ab266702f7fbfa2361afccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_colden, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 14 05:32:16 np0005486808 podman[403457]: 2025-10-14 09:32:16.172815854 +0000 UTC m=+0.156312387 container attach 1083307916ecf63ecfaf94d431aa3f17353271726ab266702f7fbfa2361afccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_colden, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:32:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2374: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 202 op/s
Oct 14 05:32:17 np0005486808 determined_colden[403473]: {
Oct 14 05:32:17 np0005486808 determined_colden[403473]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:32:17 np0005486808 determined_colden[403473]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:32:17 np0005486808 determined_colden[403473]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:32:17 np0005486808 determined_colden[403473]:        "osd_id": 2,
Oct 14 05:32:17 np0005486808 determined_colden[403473]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:32:17 np0005486808 determined_colden[403473]:        "type": "bluestore"
Oct 14 05:32:17 np0005486808 determined_colden[403473]:    },
Oct 14 05:32:17 np0005486808 determined_colden[403473]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:32:17 np0005486808 determined_colden[403473]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:32:17 np0005486808 determined_colden[403473]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:32:17 np0005486808 determined_colden[403473]:        "osd_id": 1,
Oct 14 05:32:17 np0005486808 determined_colden[403473]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:32:17 np0005486808 determined_colden[403473]:        "type": "bluestore"
Oct 14 05:32:17 np0005486808 determined_colden[403473]:    },
Oct 14 05:32:17 np0005486808 determined_colden[403473]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:32:17 np0005486808 determined_colden[403473]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:32:17 np0005486808 determined_colden[403473]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:32:17 np0005486808 determined_colden[403473]:        "osd_id": 0,
Oct 14 05:32:17 np0005486808 determined_colden[403473]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:32:17 np0005486808 determined_colden[403473]:        "type": "bluestore"
Oct 14 05:32:17 np0005486808 determined_colden[403473]:    }
Oct 14 05:32:17 np0005486808 determined_colden[403473]: }
Oct 14 05:32:17 np0005486808 systemd[1]: libpod-1083307916ecf63ecfaf94d431aa3f17353271726ab266702f7fbfa2361afccb.scope: Deactivated successfully.
Oct 14 05:32:17 np0005486808 podman[403457]: 2025-10-14 09:32:17.207124075 +0000 UTC m=+1.190620618 container died 1083307916ecf63ecfaf94d431aa3f17353271726ab266702f7fbfa2361afccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_colden, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:32:17 np0005486808 systemd[1]: libpod-1083307916ecf63ecfaf94d431aa3f17353271726ab266702f7fbfa2361afccb.scope: Consumed 1.046s CPU time.
Oct 14 05:32:17 np0005486808 systemd[1]: var-lib-containers-storage-overlay-55ab96687edc86e5a634307c4cb6477253ee7aba6c036856e8c430573c542e33-merged.mount: Deactivated successfully.
Oct 14 05:32:17 np0005486808 podman[403457]: 2025-10-14 09:32:17.291107796 +0000 UTC m=+1.274604299 container remove 1083307916ecf63ecfaf94d431aa3f17353271726ab266702f7fbfa2361afccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 14 05:32:17 np0005486808 systemd[1]: libpod-conmon-1083307916ecf63ecfaf94d431aa3f17353271726ab266702f7fbfa2361afccb.scope: Deactivated successfully.
Oct 14 05:32:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:32:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:32:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:32:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:32:17 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 5fadefd6-041d-4d92-8e17-e10ba5c9a362 does not exist
Oct 14 05:32:17 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 9683c330-9cac-49de-bff4-07c0be46c575 does not exist
Oct 14 05:32:17 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:32:17 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:32:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:32:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2375: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 650 KiB/s rd, 4.3 MiB/s wr, 128 op/s
Oct 14 05:32:18 np0005486808 nova_compute[259627]: 2025-10-14 09:32:18.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:32:19 np0005486808 nova_compute[259627]: 2025-10-14 09:32:19.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.008 2 DEBUG nova.compute.manager [req-b26dbbaf-5d2a-42c6-adbc-e0bce35ce901 req-d34f25ae-27fe-4407-a6e3-1295b85a2c65 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Received event network-changed-63edf6de-b6e6-4be7-870e-062e8186ec37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.008 2 DEBUG nova.compute.manager [req-b26dbbaf-5d2a-42c6-adbc-e0bce35ce901 req-d34f25ae-27fe-4407-a6e3-1295b85a2c65 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Refreshing instance network info cache due to event network-changed-63edf6de-b6e6-4be7-870e-062e8186ec37. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.009 2 DEBUG oslo_concurrency.lockutils [req-b26dbbaf-5d2a-42c6-adbc-e0bce35ce901 req-d34f25ae-27fe-4407-a6e3-1295b85a2c65 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-c9315047-de1c-423a-adfa-118d77df3c94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.010 2 DEBUG oslo_concurrency.lockutils [req-b26dbbaf-5d2a-42c6-adbc-e0bce35ce901 req-d34f25ae-27fe-4407-a6e3-1295b85a2c65 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-c9315047-de1c-423a-adfa-118d77df3c94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.010 2 DEBUG nova.network.neutron [req-b26dbbaf-5d2a-42c6-adbc-e0bce35ce901 req-d34f25ae-27fe-4407-a6e3-1295b85a2c65 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Refreshing network info cache for port 63edf6de-b6e6-4be7-870e-062e8186ec37 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.013 2 DEBUG oslo_concurrency.lockutils [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "c9315047-de1c-423a-adfa-118d77df3c94" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.013 2 DEBUG oslo_concurrency.lockutils [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.014 2 DEBUG oslo_concurrency.lockutils [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "c9315047-de1c-423a-adfa-118d77df3c94-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.014 2 DEBUG oslo_concurrency.lockutils [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.014 2 DEBUG oslo_concurrency.lockutils [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.016 2 INFO nova.compute.manager [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Terminating instance#033[00m
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.017 2 DEBUG nova.compute.manager [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:32:20 np0005486808 kernel: tap63edf6de-b6 (unregistering): left promiscuous mode
Oct 14 05:32:20 np0005486808 NetworkManager[44885]: <info>  [1760434340.0795] device (tap63edf6de-b6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:20 np0005486808 ovn_controller[152662]: 2025-10-14T09:32:20Z|01502|binding|INFO|Releasing lport 63edf6de-b6e6-4be7-870e-062e8186ec37 from this chassis (sb_readonly=0)
Oct 14 05:32:20 np0005486808 ovn_controller[152662]: 2025-10-14T09:32:20Z|01503|binding|INFO|Setting lport 63edf6de-b6e6-4be7-870e-062e8186ec37 down in Southbound
Oct 14 05:32:20 np0005486808 ovn_controller[152662]: 2025-10-14T09:32:20Z|01504|binding|INFO|Removing iface tap63edf6de-b6 ovn-installed in OVS
Oct 14 05:32:20 np0005486808 kernel: tap787335a5-4b (unregistering): left promiscuous mode
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:20 np0005486808 NetworkManager[44885]: <info>  [1760434340.1091] device (tap787335a5-4b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:32:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.113 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:cb:ea 10.100.0.7'], port_security=['fa:16:3e:20:cb:ea 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'c9315047-de1c-423a-adfa-118d77df3c94', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-20dd724c-9d71-4931-8e8b-4dd3fbbacc17', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c995d506-6f6f-4226-92d2-2bf5b9a23d7a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5f55c896-9e6e-44ae-a4b7-c1c60b86826e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=63edf6de-b6e6-4be7-870e-062e8186ec37) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:32:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.115 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 63edf6de-b6e6-4be7-870e-062e8186ec37 in datapath 20dd724c-9d71-4931-8e8b-4dd3fbbacc17 unbound from our chassis#033[00m
Oct 14 05:32:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.117 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 20dd724c-9d71-4931-8e8b-4dd3fbbacc17#033[00m
Oct 14 05:32:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.141 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[42735aaf-c777-4bf8-8e2a-1b0a362029e0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:20 np0005486808 ovn_controller[152662]: 2025-10-14T09:32:20Z|01505|binding|INFO|Releasing lport 787335a5-4b97-43d1-ba56-12091ebdecdb from this chassis (sb_readonly=0)
Oct 14 05:32:20 np0005486808 ovn_controller[152662]: 2025-10-14T09:32:20Z|01506|binding|INFO|Setting lport 787335a5-4b97-43d1-ba56-12091ebdecdb down in Southbound
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:20 np0005486808 ovn_controller[152662]: 2025-10-14T09:32:20Z|01507|binding|INFO|Removing iface tap787335a5-4b ovn-installed in OVS
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.167 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:a9:10 2001:db8::f816:3eff:fedf:a910'], port_security=['fa:16:3e:df:a9:10 2001:db8::f816:3eff:fedf:a910'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fedf:a910/64', 'neutron:device_id': 'c9315047-de1c-423a-adfa-118d77df3c94', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1dc63515-48cf-4886-956d-024d1d9cb848', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c995d506-6f6f-4226-92d2-2bf5b9a23d7a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=29b23756-77cb-4493-bd13-0170877e81b9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=787335a5-4b97-43d1-ba56-12091ebdecdb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.190 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c6beb8cf-cc36-42a2-9cca-a6f57ec2b687]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:20 np0005486808 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d0000008b.scope: Deactivated successfully.
Oct 14 05:32:20 np0005486808 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d0000008b.scope: Consumed 13.799s CPU time.
Oct 14 05:32:20 np0005486808 systemd-machined[214636]: Machine qemu-172-instance-0000008b terminated.
Oct 14 05:32:20 np0005486808 NetworkManager[44885]: <info>  [1760434340.2587] manager: (tap787335a5-4b): new Tun device (/org/freedesktop/NetworkManager/Devices/613)
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.287 2 INFO nova.virt.libvirt.driver [-] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Instance destroyed successfully.#033[00m
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.288 2 DEBUG nova.objects.instance [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'resources' on Instance uuid c9315047-de1c-423a-adfa-118d77df3c94 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.310 2 DEBUG nova.virt.libvirt.vif [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:31:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-71039575',display_name='tempest-TestGettingAddress-server-71039575',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-71039575',id=139,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLHJwBUwmrZMAxk7lK/S4BnH+a8XMSkpb7gUisosnWBPkykNEvL6phjMEnMU3xwMJcA4TQroXcapIfYrYDD73Y5duTeAut+KJCUs4Rpj2NKQqK/dR+iKVtPcUme/LdCECA==',key_name='tempest-TestGettingAddress-415132848',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:31:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-s79hwes5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:31:56Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=c9315047-de1c-423a-adfa-118d77df3c94,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "63edf6de-b6e6-4be7-870e-062e8186ec37", "address": "fa:16:3e:20:cb:ea", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63edf6de-b6", "ovs_interfaceid": "63edf6de-b6e6-4be7-870e-062e8186ec37", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.311 2 DEBUG nova.network.os_vif_util [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "63edf6de-b6e6-4be7-870e-062e8186ec37", "address": "fa:16:3e:20:cb:ea", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63edf6de-b6", "ovs_interfaceid": "63edf6de-b6e6-4be7-870e-062e8186ec37", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.311 2 DEBUG nova.network.os_vif_util [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:20:cb:ea,bridge_name='br-int',has_traffic_filtering=True,id=63edf6de-b6e6-4be7-870e-062e8186ec37,network=Network(20dd724c-9d71-4931-8e8b-4dd3fbbacc17),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63edf6de-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.312 2 DEBUG os_vif [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:20:cb:ea,bridge_name='br-int',has_traffic_filtering=True,id=63edf6de-b6e6-4be7-870e-062e8186ec37,network=Network(20dd724c-9d71-4931-8e8b-4dd3fbbacc17),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63edf6de-b6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.313 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap63edf6de-b6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:32:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.317 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a400c5d3-daa1-45a1-8b97-752533829f07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.328 2 INFO os_vif [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:20:cb:ea,bridge_name='br-int',has_traffic_filtering=True,id=63edf6de-b6e6-4be7-870e-062e8186ec37,network=Network(20dd724c-9d71-4931-8e8b-4dd3fbbacc17),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63edf6de-b6')#033[00m
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.329 2 DEBUG nova.virt.libvirt.vif [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:31:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-71039575',display_name='tempest-TestGettingAddress-server-71039575',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-71039575',id=139,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLHJwBUwmrZMAxk7lK/S4BnH+a8XMSkpb7gUisosnWBPkykNEvL6phjMEnMU3xwMJcA4TQroXcapIfYrYDD73Y5duTeAut+KJCUs4Rpj2NKQqK/dR+iKVtPcUme/LdCECA==',key_name='tempest-TestGettingAddress-415132848',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:31:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-s79hwes5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:31:56Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=c9315047-de1c-423a-adfa-118d77df3c94,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "787335a5-4b97-43d1-ba56-12091ebdecdb", "address": "fa:16:3e:df:a9:10", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedf:a910", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap787335a5-4b", "ovs_interfaceid": "787335a5-4b97-43d1-ba56-12091ebdecdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.329 2 DEBUG nova.network.os_vif_util [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "787335a5-4b97-43d1-ba56-12091ebdecdb", "address": "fa:16:3e:df:a9:10", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedf:a910", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap787335a5-4b", "ovs_interfaceid": "787335a5-4b97-43d1-ba56-12091ebdecdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.330 2 DEBUG nova.network.os_vif_util [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:a9:10,bridge_name='br-int',has_traffic_filtering=True,id=787335a5-4b97-43d1-ba56-12091ebdecdb,network=Network(1dc63515-48cf-4886-956d-024d1d9cb848),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap787335a5-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.330 2 DEBUG os_vif [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:a9:10,bridge_name='br-int',has_traffic_filtering=True,id=787335a5-4b97-43d1-ba56-12091ebdecdb,network=Network(1dc63515-48cf-4886-956d-024d1d9cb848),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap787335a5-4b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.331 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap787335a5-4b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.340 2 INFO os_vif [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:a9:10,bridge_name='br-int',has_traffic_filtering=True,id=787335a5-4b97-43d1-ba56-12091ebdecdb,network=Network(1dc63515-48cf-4886-956d-024d1d9cb848),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap787335a5-4b')#033[00m
Oct 14 05:32:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.363 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[66535d53-de4a-4202-bad9-2ac7d2c5ed1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.391 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4c1af3fe-c494-4d32-bd6a-8ab436f3ae65]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap20dd724c-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:07:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 8, 'rx_bytes': 1000, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 8, 'rx_bytes': 1000, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 417], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 804837, 'reachable_time': 24448, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 403626, 'error': None, 'target': 'ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.410 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3788272b-fead-48b6-88e0-6ebdbc5341dc]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap20dd724c-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 804851, 'tstamp': 804851}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 403627, 'error': None, 'target': 'ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap20dd724c-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 804855, 'tstamp': 804855}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 403627, 'error': None, 'target': 'ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.413 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20dd724c-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.417 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap20dd724c-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:32:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.418 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:32:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.418 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap20dd724c-90, col_values=(('external_ids', {'iface-id': '1308be16-f790-4063-acf0-2c8f6fdde665'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:32:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.419 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:32:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.421 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 787335a5-4b97-43d1-ba56-12091ebdecdb in datapath 1dc63515-48cf-4886-956d-024d1d9cb848 unbound from our chassis#033[00m
Oct 14 05:32:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.423 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1dc63515-48cf-4886-956d-024d1d9cb848#033[00m
Oct 14 05:32:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.440 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6917ca09-814c-4570-a129-12de03f83e89]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.471 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4df23189-9553-4202-afa2-37a3d5f00564]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.474 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3c188d28-7ae8-4c61-9f37-6f286d88978a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2376: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 650 KiB/s rd, 4.3 MiB/s wr, 129 op/s
Oct 14 05:32:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.504 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[359568b0-a451-457c-98ac-62a1056dc014]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.519 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0b27a43f-7296-4c96-83a7-a94ad478d39f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1dc63515-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a6:e0:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 32, 'tx_packets': 6, 'rx_bytes': 2912, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 32, 'tx_packets': 6, 'rx_bytes': 2912, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 418], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 804949, 'reachable_time': 43899, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 32, 'inoctets': 2464, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 32, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2464, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 32, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 403633, 'error': None, 'target': 'ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.536 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[025e6e93-15dc-4efa-9908-1deb547da8b4]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1dc63515-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 804965, 'tstamp': 804965}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 403634, 'error': None, 'target': 'ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.538 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1dc63515-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.542 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1dc63515-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:32:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.543 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:32:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.543 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1dc63515-40, col_values=(('external_ids', {'iface-id': '9754cafe-7819-456f-943e-9907e7f07233'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:32:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:20.544 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.729 2 INFO nova.virt.libvirt.driver [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Deleting instance files /var/lib/nova/instances/c9315047-de1c-423a-adfa-118d77df3c94_del#033[00m
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.730 2 INFO nova.virt.libvirt.driver [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Deletion of /var/lib/nova/instances/c9315047-de1c-423a-adfa-118d77df3c94_del complete#033[00m
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.809 2 INFO nova.compute.manager [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.809 2 DEBUG oslo.service.loopingcall [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.810 2 DEBUG nova.compute.manager [-] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:32:20 np0005486808 nova_compute[259627]: 2025-10-14 09:32:20.810 2 DEBUG nova.network.neutron [-] [instance: c9315047-de1c-423a-adfa-118d77df3c94] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:32:22 np0005486808 nova_compute[259627]: 2025-10-14 09:32:22.099 2 DEBUG nova.compute.manager [req-bbeb17f3-2404-4314-8ea6-4e4657cb998f req-fde09c9f-ac71-4f5b-b9f0-e9069c6055ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Received event network-vif-unplugged-63edf6de-b6e6-4be7-870e-062e8186ec37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:32:22 np0005486808 nova_compute[259627]: 2025-10-14 09:32:22.099 2 DEBUG oslo_concurrency.lockutils [req-bbeb17f3-2404-4314-8ea6-4e4657cb998f req-fde09c9f-ac71-4f5b-b9f0-e9069c6055ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c9315047-de1c-423a-adfa-118d77df3c94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:32:22 np0005486808 nova_compute[259627]: 2025-10-14 09:32:22.099 2 DEBUG oslo_concurrency.lockutils [req-bbeb17f3-2404-4314-8ea6-4e4657cb998f req-fde09c9f-ac71-4f5b-b9f0-e9069c6055ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:32:22 np0005486808 nova_compute[259627]: 2025-10-14 09:32:22.100 2 DEBUG oslo_concurrency.lockutils [req-bbeb17f3-2404-4314-8ea6-4e4657cb998f req-fde09c9f-ac71-4f5b-b9f0-e9069c6055ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:32:22 np0005486808 nova_compute[259627]: 2025-10-14 09:32:22.100 2 DEBUG nova.compute.manager [req-bbeb17f3-2404-4314-8ea6-4e4657cb998f req-fde09c9f-ac71-4f5b-b9f0-e9069c6055ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] No waiting events found dispatching network-vif-unplugged-63edf6de-b6e6-4be7-870e-062e8186ec37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:32:22 np0005486808 nova_compute[259627]: 2025-10-14 09:32:22.100 2 DEBUG nova.compute.manager [req-bbeb17f3-2404-4314-8ea6-4e4657cb998f req-fde09c9f-ac71-4f5b-b9f0-e9069c6055ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Received event network-vif-unplugged-63edf6de-b6e6-4be7-870e-062e8186ec37 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:32:22 np0005486808 nova_compute[259627]: 2025-10-14 09:32:22.101 2 DEBUG nova.compute.manager [req-bbeb17f3-2404-4314-8ea6-4e4657cb998f req-fde09c9f-ac71-4f5b-b9f0-e9069c6055ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Received event network-vif-plugged-63edf6de-b6e6-4be7-870e-062e8186ec37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:32:22 np0005486808 nova_compute[259627]: 2025-10-14 09:32:22.101 2 DEBUG oslo_concurrency.lockutils [req-bbeb17f3-2404-4314-8ea6-4e4657cb998f req-fde09c9f-ac71-4f5b-b9f0-e9069c6055ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c9315047-de1c-423a-adfa-118d77df3c94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:32:22 np0005486808 nova_compute[259627]: 2025-10-14 09:32:22.102 2 DEBUG oslo_concurrency.lockutils [req-bbeb17f3-2404-4314-8ea6-4e4657cb998f req-fde09c9f-ac71-4f5b-b9f0-e9069c6055ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:32:22 np0005486808 nova_compute[259627]: 2025-10-14 09:32:22.102 2 DEBUG oslo_concurrency.lockutils [req-bbeb17f3-2404-4314-8ea6-4e4657cb998f req-fde09c9f-ac71-4f5b-b9f0-e9069c6055ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:32:22 np0005486808 nova_compute[259627]: 2025-10-14 09:32:22.103 2 DEBUG nova.compute.manager [req-bbeb17f3-2404-4314-8ea6-4e4657cb998f req-fde09c9f-ac71-4f5b-b9f0-e9069c6055ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] No waiting events found dispatching network-vif-plugged-63edf6de-b6e6-4be7-870e-062e8186ec37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:32:22 np0005486808 nova_compute[259627]: 2025-10-14 09:32:22.103 2 WARNING nova.compute.manager [req-bbeb17f3-2404-4314-8ea6-4e4657cb998f req-fde09c9f-ac71-4f5b-b9f0-e9069c6055ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Received unexpected event network-vif-plugged-63edf6de-b6e6-4be7-870e-062e8186ec37 for instance with vm_state active and task_state deleting.#033[00m
Oct 14 05:32:22 np0005486808 nova_compute[259627]: 2025-10-14 09:32:22.104 2 DEBUG nova.compute.manager [req-bbeb17f3-2404-4314-8ea6-4e4657cb998f req-fde09c9f-ac71-4f5b-b9f0-e9069c6055ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Received event network-vif-unplugged-787335a5-4b97-43d1-ba56-12091ebdecdb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:32:22 np0005486808 nova_compute[259627]: 2025-10-14 09:32:22.104 2 DEBUG oslo_concurrency.lockutils [req-bbeb17f3-2404-4314-8ea6-4e4657cb998f req-fde09c9f-ac71-4f5b-b9f0-e9069c6055ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c9315047-de1c-423a-adfa-118d77df3c94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:32:22 np0005486808 nova_compute[259627]: 2025-10-14 09:32:22.105 2 DEBUG oslo_concurrency.lockutils [req-bbeb17f3-2404-4314-8ea6-4e4657cb998f req-fde09c9f-ac71-4f5b-b9f0-e9069c6055ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:32:22 np0005486808 nova_compute[259627]: 2025-10-14 09:32:22.105 2 DEBUG oslo_concurrency.lockutils [req-bbeb17f3-2404-4314-8ea6-4e4657cb998f req-fde09c9f-ac71-4f5b-b9f0-e9069c6055ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:32:22 np0005486808 nova_compute[259627]: 2025-10-14 09:32:22.106 2 DEBUG nova.compute.manager [req-bbeb17f3-2404-4314-8ea6-4e4657cb998f req-fde09c9f-ac71-4f5b-b9f0-e9069c6055ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] No waiting events found dispatching network-vif-unplugged-787335a5-4b97-43d1-ba56-12091ebdecdb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:32:22 np0005486808 nova_compute[259627]: 2025-10-14 09:32:22.106 2 DEBUG nova.compute.manager [req-bbeb17f3-2404-4314-8ea6-4e4657cb998f req-fde09c9f-ac71-4f5b-b9f0-e9069c6055ca 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Received event network-vif-unplugged-787335a5-4b97-43d1-ba56-12091ebdecdb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:32:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2377: 305 pgs: 305 active+clean; 257 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 582 KiB/s rd, 3.5 MiB/s wr, 113 op/s
Oct 14 05:32:22 np0005486808 nova_compute[259627]: 2025-10-14 09:32:22.541 2 DEBUG nova.compute.manager [req-798e3814-1e40-4820-a8bd-d905556750bd req-56c53f70-944f-408b-af5b-aa111d1fd30e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Received event network-vif-deleted-787335a5-4b97-43d1-ba56-12091ebdecdb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:32:22 np0005486808 nova_compute[259627]: 2025-10-14 09:32:22.542 2 INFO nova.compute.manager [req-798e3814-1e40-4820-a8bd-d905556750bd req-56c53f70-944f-408b-af5b-aa111d1fd30e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Neutron deleted interface 787335a5-4b97-43d1-ba56-12091ebdecdb; detaching it from the instance and deleting it from the info cache#033[00m
Oct 14 05:32:22 np0005486808 nova_compute[259627]: 2025-10-14 09:32:22.542 2 DEBUG nova.network.neutron [req-798e3814-1e40-4820-a8bd-d905556750bd req-56c53f70-944f-408b-af5b-aa111d1fd30e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Updating instance_info_cache with network_info: [{"id": "63edf6de-b6e6-4be7-870e-062e8186ec37", "address": "fa:16:3e:20:cb:ea", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63edf6de-b6", "ovs_interfaceid": "63edf6de-b6e6-4be7-870e-062e8186ec37", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:32:22 np0005486808 nova_compute[259627]: 2025-10-14 09:32:22.566 2 DEBUG nova.compute.manager [req-798e3814-1e40-4820-a8bd-d905556750bd req-56c53f70-944f-408b-af5b-aa111d1fd30e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Detach interface failed, port_id=787335a5-4b97-43d1-ba56-12091ebdecdb, reason: Instance c9315047-de1c-423a-adfa-118d77df3c94 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct 14 05:32:22 np0005486808 nova_compute[259627]: 2025-10-14 09:32:22.821 2 DEBUG nova.network.neutron [req-b26dbbaf-5d2a-42c6-adbc-e0bce35ce901 req-d34f25ae-27fe-4407-a6e3-1295b85a2c65 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Updated VIF entry in instance network info cache for port 63edf6de-b6e6-4be7-870e-062e8186ec37. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:32:22 np0005486808 nova_compute[259627]: 2025-10-14 09:32:22.822 2 DEBUG nova.network.neutron [req-b26dbbaf-5d2a-42c6-adbc-e0bce35ce901 req-d34f25ae-27fe-4407-a6e3-1295b85a2c65 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Updating instance_info_cache with network_info: [{"id": "63edf6de-b6e6-4be7-870e-062e8186ec37", "address": "fa:16:3e:20:cb:ea", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63edf6de-b6", "ovs_interfaceid": "63edf6de-b6e6-4be7-870e-062e8186ec37", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "787335a5-4b97-43d1-ba56-12091ebdecdb", "address": "fa:16:3e:df:a9:10", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedf:a910", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap787335a5-4b", "ovs_interfaceid": "787335a5-4b97-43d1-ba56-12091ebdecdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:32:22 np0005486808 nova_compute[259627]: 2025-10-14 09:32:22.838 2 DEBUG oslo_concurrency.lockutils [req-b26dbbaf-5d2a-42c6-adbc-e0bce35ce901 req-d34f25ae-27fe-4407-a6e3-1295b85a2c65 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-c9315047-de1c-423a-adfa-118d77df3c94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:32:22 np0005486808 nova_compute[259627]: 2025-10-14 09:32:22.905 2 DEBUG nova.network.neutron [-] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:32:22 np0005486808 nova_compute[259627]: 2025-10-14 09:32:22.923 2 INFO nova.compute.manager [-] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Took 2.11 seconds to deallocate network for instance.#033[00m
Oct 14 05:32:22 np0005486808 nova_compute[259627]: 2025-10-14 09:32:22.962 2 DEBUG oslo_concurrency.lockutils [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:32:22 np0005486808 nova_compute[259627]: 2025-10-14 09:32:22.963 2 DEBUG oslo_concurrency.lockutils [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:32:23 np0005486808 nova_compute[259627]: 2025-10-14 09:32:23.052 2 DEBUG oslo_concurrency.processutils [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:32:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:32:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:32:23 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2734996394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:32:23 np0005486808 nova_compute[259627]: 2025-10-14 09:32:23.479 2 DEBUG oslo_concurrency.processutils [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:32:23 np0005486808 nova_compute[259627]: 2025-10-14 09:32:23.487 2 DEBUG nova.compute.provider_tree [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:32:23 np0005486808 nova_compute[259627]: 2025-10-14 09:32:23.505 2 DEBUG nova.scheduler.client.report [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:32:23 np0005486808 nova_compute[259627]: 2025-10-14 09:32:23.528 2 DEBUG oslo_concurrency.lockutils [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:32:23 np0005486808 nova_compute[259627]: 2025-10-14 09:32:23.549 2 INFO nova.scheduler.client.report [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Deleted allocations for instance c9315047-de1c-423a-adfa-118d77df3c94#033[00m
Oct 14 05:32:23 np0005486808 nova_compute[259627]: 2025-10-14 09:32:23.621 2 DEBUG oslo_concurrency.lockutils [None req-bc48dd16-d51e-4416-ab51-2e444b0b2f02 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:32:24 np0005486808 nova_compute[259627]: 2025-10-14 09:32:24.231 2 DEBUG nova.compute.manager [req-8943a648-7f76-4092-a197-b7423ac061d3 req-0c8a0c55-0742-47f8-99c0-0df339abe3cb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Received event network-vif-plugged-787335a5-4b97-43d1-ba56-12091ebdecdb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:32:24 np0005486808 nova_compute[259627]: 2025-10-14 09:32:24.232 2 DEBUG oslo_concurrency.lockutils [req-8943a648-7f76-4092-a197-b7423ac061d3 req-0c8a0c55-0742-47f8-99c0-0df339abe3cb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c9315047-de1c-423a-adfa-118d77df3c94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:32:24 np0005486808 nova_compute[259627]: 2025-10-14 09:32:24.232 2 DEBUG oslo_concurrency.lockutils [req-8943a648-7f76-4092-a197-b7423ac061d3 req-0c8a0c55-0742-47f8-99c0-0df339abe3cb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:32:24 np0005486808 nova_compute[259627]: 2025-10-14 09:32:24.233 2 DEBUG oslo_concurrency.lockutils [req-8943a648-7f76-4092-a197-b7423ac061d3 req-0c8a0c55-0742-47f8-99c0-0df339abe3cb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c9315047-de1c-423a-adfa-118d77df3c94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:32:24 np0005486808 nova_compute[259627]: 2025-10-14 09:32:24.233 2 DEBUG nova.compute.manager [req-8943a648-7f76-4092-a197-b7423ac061d3 req-0c8a0c55-0742-47f8-99c0-0df339abe3cb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] No waiting events found dispatching network-vif-plugged-787335a5-4b97-43d1-ba56-12091ebdecdb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:32:24 np0005486808 nova_compute[259627]: 2025-10-14 09:32:24.233 2 WARNING nova.compute.manager [req-8943a648-7f76-4092-a197-b7423ac061d3 req-0c8a0c55-0742-47f8-99c0-0df339abe3cb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Received unexpected event network-vif-plugged-787335a5-4b97-43d1-ba56-12091ebdecdb for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:32:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2378: 305 pgs: 305 active+clean; 257 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 2.2 MiB/s wr, 70 op/s
Oct 14 05:32:24 np0005486808 nova_compute[259627]: 2025-10-14 09:32:24.706 2 DEBUG nova.compute.manager [req-d0959d75-d15e-411c-82d5-ad6b1bd1a259 req-d0e868e3-b44f-43f6-b960-a2e610b3dd3f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Received event network-vif-deleted-63edf6de-b6e6-4be7-870e-062e8186ec37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:32:25 np0005486808 nova_compute[259627]: 2025-10-14 09:32:25.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:25 np0005486808 nova_compute[259627]: 2025-10-14 09:32:25.502 2 DEBUG oslo_concurrency.lockutils [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "b30a994a-5fb7-4344-9944-98d3d75d3b04" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:32:25 np0005486808 nova_compute[259627]: 2025-10-14 09:32:25.503 2 DEBUG oslo_concurrency.lockutils [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:32:25 np0005486808 nova_compute[259627]: 2025-10-14 09:32:25.503 2 DEBUG oslo_concurrency.lockutils [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:32:25 np0005486808 nova_compute[259627]: 2025-10-14 09:32:25.504 2 DEBUG oslo_concurrency.lockutils [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:32:25 np0005486808 nova_compute[259627]: 2025-10-14 09:32:25.504 2 DEBUG oslo_concurrency.lockutils [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:32:25 np0005486808 nova_compute[259627]: 2025-10-14 09:32:25.506 2 INFO nova.compute.manager [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Terminating instance#033[00m
Oct 14 05:32:25 np0005486808 nova_compute[259627]: 2025-10-14 09:32:25.507 2 DEBUG nova.compute.manager [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:32:25 np0005486808 kernel: tapf2639397-8f (unregistering): left promiscuous mode
Oct 14 05:32:25 np0005486808 NetworkManager[44885]: <info>  [1760434345.5681] device (tapf2639397-8f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:32:25 np0005486808 ovn_controller[152662]: 2025-10-14T09:32:25Z|01508|binding|INFO|Releasing lport f2639397-8fb2-4541-a298-fd68219e1e47 from this chassis (sb_readonly=0)
Oct 14 05:32:25 np0005486808 ovn_controller[152662]: 2025-10-14T09:32:25Z|01509|binding|INFO|Setting lport f2639397-8fb2-4541-a298-fd68219e1e47 down in Southbound
Oct 14 05:32:25 np0005486808 nova_compute[259627]: 2025-10-14 09:32:25.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:25 np0005486808 ovn_controller[152662]: 2025-10-14T09:32:25Z|01510|binding|INFO|Removing iface tapf2639397-8f ovn-installed in OVS
Oct 14 05:32:25 np0005486808 nova_compute[259627]: 2025-10-14 09:32:25.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:25.591 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:1e:d0 10.100.0.12'], port_security=['fa:16:3e:a4:1e:d0 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b30a994a-5fb7-4344-9944-98d3d75d3b04', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-20dd724c-9d71-4931-8e8b-4dd3fbbacc17', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c995d506-6f6f-4226-92d2-2bf5b9a23d7a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5f55c896-9e6e-44ae-a4b7-c1c60b86826e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=f2639397-8fb2-4541-a298-fd68219e1e47) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:32:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:25.593 162547 INFO neutron.agent.ovn.metadata.agent [-] Port f2639397-8fb2-4541-a298-fd68219e1e47 in datapath 20dd724c-9d71-4931-8e8b-4dd3fbbacc17 unbound from our chassis#033[00m
Oct 14 05:32:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:25.594 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 20dd724c-9d71-4931-8e8b-4dd3fbbacc17, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:32:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:25.595 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[46599594-f11e-497e-a47b-17c61cb6cff0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:25.595 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17 namespace which is not needed anymore#033[00m
Oct 14 05:32:25 np0005486808 kernel: tapc22b3ec2-f5 (unregistering): left promiscuous mode
Oct 14 05:32:25 np0005486808 NetworkManager[44885]: <info>  [1760434345.6146] device (tapc22b3ec2-f5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:32:25 np0005486808 nova_compute[259627]: 2025-10-14 09:32:25.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:25 np0005486808 nova_compute[259627]: 2025-10-14 09:32:25.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:25 np0005486808 ovn_controller[152662]: 2025-10-14T09:32:25Z|01511|binding|INFO|Releasing lport c22b3ec2-f5a1-4c97-8648-a463e9e12545 from this chassis (sb_readonly=0)
Oct 14 05:32:25 np0005486808 ovn_controller[152662]: 2025-10-14T09:32:25Z|01512|binding|INFO|Setting lport c22b3ec2-f5a1-4c97-8648-a463e9e12545 down in Southbound
Oct 14 05:32:25 np0005486808 ovn_controller[152662]: 2025-10-14T09:32:25Z|01513|binding|INFO|Removing iface tapc22b3ec2-f5 ovn-installed in OVS
Oct 14 05:32:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:25.634 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:84:75 2001:db8::f816:3eff:fecf:8475'], port_security=['fa:16:3e:cf:84:75 2001:db8::f816:3eff:fecf:8475'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fecf:8475/64', 'neutron:device_id': 'b30a994a-5fb7-4344-9944-98d3d75d3b04', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1dc63515-48cf-4886-956d-024d1d9cb848', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c995d506-6f6f-4226-92d2-2bf5b9a23d7a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=29b23756-77cb-4493-bd13-0170877e81b9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=c22b3ec2-f5a1-4c97-8648-a463e9e12545) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:32:25 np0005486808 nova_compute[259627]: 2025-10-14 09:32:25.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:25 np0005486808 nova_compute[259627]: 2025-10-14 09:32:25.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:25 np0005486808 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d00000089.scope: Deactivated successfully.
Oct 14 05:32:25 np0005486808 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d00000089.scope: Consumed 16.378s CPU time.
Oct 14 05:32:25 np0005486808 systemd-machined[214636]: Machine qemu-171-instance-00000089 terminated.
Oct 14 05:32:25 np0005486808 nova_compute[259627]: 2025-10-14 09:32:25.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:25 np0005486808 NetworkManager[44885]: <info>  [1760434345.7464] manager: (tapc22b3ec2-f5): new Tun device (/org/freedesktop/NetworkManager/Devices/614)
Oct 14 05:32:25 np0005486808 neutron-haproxy-ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17[401296]: [NOTICE]   (401300) : haproxy version is 2.8.14-c23fe91
Oct 14 05:32:25 np0005486808 neutron-haproxy-ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17[401296]: [NOTICE]   (401300) : path to executable is /usr/sbin/haproxy
Oct 14 05:32:25 np0005486808 neutron-haproxy-ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17[401296]: [WARNING]  (401300) : Exiting Master process...
Oct 14 05:32:25 np0005486808 nova_compute[259627]: 2025-10-14 09:32:25.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:25 np0005486808 neutron-haproxy-ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17[401296]: [ALERT]    (401300) : Current worker (401302) exited with code 143 (Terminated)
Oct 14 05:32:25 np0005486808 neutron-haproxy-ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17[401296]: [WARNING]  (401300) : All workers exited. Exiting... (0)
Oct 14 05:32:25 np0005486808 systemd[1]: libpod-2cdaaa0b9144b6ef9375e89d028328bb9f9de7274e87f2c7a0ec43c954e1d406.scope: Deactivated successfully.
Oct 14 05:32:25 np0005486808 nova_compute[259627]: 2025-10-14 09:32:25.759 2 INFO nova.virt.libvirt.driver [-] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Instance destroyed successfully.#033[00m
Oct 14 05:32:25 np0005486808 nova_compute[259627]: 2025-10-14 09:32:25.759 2 DEBUG nova.objects.instance [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'resources' on Instance uuid b30a994a-5fb7-4344-9944-98d3d75d3b04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:32:25 np0005486808 podman[403684]: 2025-10-14 09:32:25.761066228 +0000 UTC m=+0.050436559 container died 2cdaaa0b9144b6ef9375e89d028328bb9f9de7274e87f2c7a0ec43c954e1d406 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 14 05:32:25 np0005486808 nova_compute[259627]: 2025-10-14 09:32:25.777 2 DEBUG nova.virt.libvirt.vif [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:30:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1494982542',display_name='tempest-TestGettingAddress-server-1494982542',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1494982542',id=137,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLHJwBUwmrZMAxk7lK/S4BnH+a8XMSkpb7gUisosnWBPkykNEvL6phjMEnMU3xwMJcA4TQroXcapIfYrYDD73Y5duTeAut+KJCUs4Rpj2NKQqK/dR+iKVtPcUme/LdCECA==',key_name='tempest-TestGettingAddress-415132848',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:31:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-euyh2n7t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:31:10Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=b30a994a-5fb7-4344-9944-98d3d75d3b04,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f2639397-8fb2-4541-a298-fd68219e1e47", "address": "fa:16:3e:a4:1e:d0", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2639397-8f", "ovs_interfaceid": "f2639397-8fb2-4541-a298-fd68219e1e47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:32:25 np0005486808 nova_compute[259627]: 2025-10-14 09:32:25.777 2 DEBUG nova.network.os_vif_util [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "f2639397-8fb2-4541-a298-fd68219e1e47", "address": "fa:16:3e:a4:1e:d0", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2639397-8f", "ovs_interfaceid": "f2639397-8fb2-4541-a298-fd68219e1e47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:32:25 np0005486808 nova_compute[259627]: 2025-10-14 09:32:25.778 2 DEBUG nova.network.os_vif_util [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a4:1e:d0,bridge_name='br-int',has_traffic_filtering=True,id=f2639397-8fb2-4541-a298-fd68219e1e47,network=Network(20dd724c-9d71-4931-8e8b-4dd3fbbacc17),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2639397-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:32:25 np0005486808 nova_compute[259627]: 2025-10-14 09:32:25.778 2 DEBUG os_vif [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a4:1e:d0,bridge_name='br-int',has_traffic_filtering=True,id=f2639397-8fb2-4541-a298-fd68219e1e47,network=Network(20dd724c-9d71-4931-8e8b-4dd3fbbacc17),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2639397-8f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:32:25 np0005486808 nova_compute[259627]: 2025-10-14 09:32:25.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:25 np0005486808 nova_compute[259627]: 2025-10-14 09:32:25.781 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf2639397-8f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:32:25 np0005486808 nova_compute[259627]: 2025-10-14 09:32:25.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:25 np0005486808 nova_compute[259627]: 2025-10-14 09:32:25.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:32:25 np0005486808 nova_compute[259627]: 2025-10-14 09:32:25.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:25 np0005486808 nova_compute[259627]: 2025-10-14 09:32:25.787 2 INFO os_vif [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a4:1e:d0,bridge_name='br-int',has_traffic_filtering=True,id=f2639397-8fb2-4541-a298-fd68219e1e47,network=Network(20dd724c-9d71-4931-8e8b-4dd3fbbacc17),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2639397-8f')#033[00m
Oct 14 05:32:25 np0005486808 nova_compute[259627]: 2025-10-14 09:32:25.788 2 DEBUG nova.virt.libvirt.vif [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:30:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1494982542',display_name='tempest-TestGettingAddress-server-1494982542',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1494982542',id=137,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLHJwBUwmrZMAxk7lK/S4BnH+a8XMSkpb7gUisosnWBPkykNEvL6phjMEnMU3xwMJcA4TQroXcapIfYrYDD73Y5duTeAut+KJCUs4Rpj2NKQqK/dR+iKVtPcUme/LdCECA==',key_name='tempest-TestGettingAddress-415132848',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:31:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-euyh2n7t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:31:10Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=b30a994a-5fb7-4344-9944-98d3d75d3b04,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "address": "fa:16:3e:cf:84:75", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fecf:8475", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc22b3ec2-f5", "ovs_interfaceid": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:32:25 np0005486808 nova_compute[259627]: 2025-10-14 09:32:25.788 2 DEBUG nova.network.os_vif_util [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "address": "fa:16:3e:cf:84:75", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fecf:8475", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc22b3ec2-f5", "ovs_interfaceid": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:32:25 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2cdaaa0b9144b6ef9375e89d028328bb9f9de7274e87f2c7a0ec43c954e1d406-userdata-shm.mount: Deactivated successfully.
Oct 14 05:32:25 np0005486808 nova_compute[259627]: 2025-10-14 09:32:25.793 2 DEBUG nova.network.os_vif_util [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cf:84:75,bridge_name='br-int',has_traffic_filtering=True,id=c22b3ec2-f5a1-4c97-8648-a463e9e12545,network=Network(1dc63515-48cf-4886-956d-024d1d9cb848),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc22b3ec2-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:32:25 np0005486808 nova_compute[259627]: 2025-10-14 09:32:25.793 2 DEBUG os_vif [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:84:75,bridge_name='br-int',has_traffic_filtering=True,id=c22b3ec2-f5a1-4c97-8648-a463e9e12545,network=Network(1dc63515-48cf-4886-956d-024d1d9cb848),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc22b3ec2-f5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:32:25 np0005486808 systemd[1]: var-lib-containers-storage-overlay-9b3a221f83799bd67fc58856adc1cff01fbd3fa4b8be45bf1363f70f768eb22a-merged.mount: Deactivated successfully.
Oct 14 05:32:25 np0005486808 nova_compute[259627]: 2025-10-14 09:32:25.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:25 np0005486808 nova_compute[259627]: 2025-10-14 09:32:25.796 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc22b3ec2-f5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:32:25 np0005486808 nova_compute[259627]: 2025-10-14 09:32:25.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:25 np0005486808 nova_compute[259627]: 2025-10-14 09:32:25.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:25 np0005486808 nova_compute[259627]: 2025-10-14 09:32:25.802 2 INFO os_vif [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:84:75,bridge_name='br-int',has_traffic_filtering=True,id=c22b3ec2-f5a1-4c97-8648-a463e9e12545,network=Network(1dc63515-48cf-4886-956d-024d1d9cb848),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc22b3ec2-f5')#033[00m
Oct 14 05:32:25 np0005486808 podman[403684]: 2025-10-14 09:32:25.808649696 +0000 UTC m=+0.098020037 container cleanup 2cdaaa0b9144b6ef9375e89d028328bb9f9de7274e87f2c7a0ec43c954e1d406 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 05:32:25 np0005486808 systemd[1]: libpod-conmon-2cdaaa0b9144b6ef9375e89d028328bb9f9de7274e87f2c7a0ec43c954e1d406.scope: Deactivated successfully.
Oct 14 05:32:25 np0005486808 podman[403744]: 2025-10-14 09:32:25.878171022 +0000 UTC m=+0.045441306 container remove 2cdaaa0b9144b6ef9375e89d028328bb9f9de7274e87f2c7a0ec43c954e1d406 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 05:32:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:25.890 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9bf50c0a-c63c-4984-9866-4b72357d54db]: (4, ('Tue Oct 14 09:32:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17 (2cdaaa0b9144b6ef9375e89d028328bb9f9de7274e87f2c7a0ec43c954e1d406)\n2cdaaa0b9144b6ef9375e89d028328bb9f9de7274e87f2c7a0ec43c954e1d406\nTue Oct 14 09:32:25 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17 (2cdaaa0b9144b6ef9375e89d028328bb9f9de7274e87f2c7a0ec43c954e1d406)\n2cdaaa0b9144b6ef9375e89d028328bb9f9de7274e87f2c7a0ec43c954e1d406\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:25.892 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b7407743-d0d1-4639-8ae3-4cb1c909681e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:25.893 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20dd724c-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:32:25 np0005486808 kernel: tap20dd724c-90: left promiscuous mode
Oct 14 05:32:25 np0005486808 nova_compute[259627]: 2025-10-14 09:32:25.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:25 np0005486808 nova_compute[259627]: 2025-10-14 09:32:25.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:25.956 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fbb10bc0-812b-4af9-8781-1b4f6285e5de]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:25 np0005486808 nova_compute[259627]: 2025-10-14 09:32:25.959 2 DEBUG oslo_concurrency.lockutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "d1e24a24-daf7-4696-9dd4-ab29df7c8131" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:32:25 np0005486808 nova_compute[259627]: 2025-10-14 09:32:25.959 2 DEBUG oslo_concurrency.lockutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "d1e24a24-daf7-4696-9dd4-ab29df7c8131" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:32:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:25.983 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ecc26801-228b-44f7-a255-e56eccb4e851]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:25 np0005486808 nova_compute[259627]: 2025-10-14 09:32:25.984 2 DEBUG nova.compute.manager [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:32:25 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:25.985 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ef5fc97a-a327-40b1-be23-124b34a4b2eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:26.000 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[427ef789-4674-4e2b-8064-306f67468da7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 804830, 'reachable_time': 24547, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 403763, 'error': None, 'target': 'ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:26 np0005486808 systemd[1]: run-netns-ovnmeta\x2d20dd724c\x2d9d71\x2d4931\x2d8e8b\x2d4dd3fbbacc17.mount: Deactivated successfully.
Oct 14 05:32:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:26.005 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-20dd724c-9d71-4931-8e8b-4dd3fbbacc17 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:32:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:26.005 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[edf48eed-3c14-4327-910a-e2d542a1aa36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:26.006 162547 INFO neutron.agent.ovn.metadata.agent [-] Port c22b3ec2-f5a1-4c97-8648-a463e9e12545 in datapath 1dc63515-48cf-4886-956d-024d1d9cb848 unbound from our chassis#033[00m
Oct 14 05:32:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:26.007 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1dc63515-48cf-4886-956d-024d1d9cb848, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:32:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:26.008 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9a372af0-407c-471c-8d80-3aca5fd34122]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:26.009 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848 namespace which is not needed anymore#033[00m
Oct 14 05:32:26 np0005486808 nova_compute[259627]: 2025-10-14 09:32:26.065 2 DEBUG oslo_concurrency.lockutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:32:26 np0005486808 nova_compute[259627]: 2025-10-14 09:32:26.065 2 DEBUG oslo_concurrency.lockutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:32:26 np0005486808 nova_compute[259627]: 2025-10-14 09:32:26.075 2 DEBUG nova.virt.hardware [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:32:26 np0005486808 nova_compute[259627]: 2025-10-14 09:32:26.076 2 INFO nova.compute.claims [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:32:26 np0005486808 neutron-haproxy-ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848[401369]: [NOTICE]   (401373) : haproxy version is 2.8.14-c23fe91
Oct 14 05:32:26 np0005486808 neutron-haproxy-ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848[401369]: [NOTICE]   (401373) : path to executable is /usr/sbin/haproxy
Oct 14 05:32:26 np0005486808 neutron-haproxy-ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848[401369]: [WARNING]  (401373) : Exiting Master process...
Oct 14 05:32:26 np0005486808 neutron-haproxy-ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848[401369]: [WARNING]  (401373) : Exiting Master process...
Oct 14 05:32:26 np0005486808 neutron-haproxy-ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848[401369]: [ALERT]    (401373) : Current worker (401375) exited with code 143 (Terminated)
Oct 14 05:32:26 np0005486808 neutron-haproxy-ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848[401369]: [WARNING]  (401373) : All workers exited. Exiting... (0)
Oct 14 05:32:26 np0005486808 systemd[1]: libpod-f639b506c8f5fae7770b83d3f5a35c10628b924f4408ad78adb8ff512399896a.scope: Deactivated successfully.
Oct 14 05:32:26 np0005486808 podman[403779]: 2025-10-14 09:32:26.176156995 +0000 UTC m=+0.049741882 container died f639b506c8f5fae7770b83d3f5a35c10628b924f4408ad78adb8ff512399896a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 14 05:32:26 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f639b506c8f5fae7770b83d3f5a35c10628b924f4408ad78adb8ff512399896a-userdata-shm.mount: Deactivated successfully.
Oct 14 05:32:26 np0005486808 systemd[1]: var-lib-containers-storage-overlay-5feeed86f65fad647b0e405deecf168013385c50e6fadd5dfe122e086c585633-merged.mount: Deactivated successfully.
Oct 14 05:32:26 np0005486808 podman[403779]: 2025-10-14 09:32:26.215951871 +0000 UTC m=+0.089536758 container cleanup f639b506c8f5fae7770b83d3f5a35c10628b924f4408ad78adb8ff512399896a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 05:32:26 np0005486808 systemd[1]: libpod-conmon-f639b506c8f5fae7770b83d3f5a35c10628b924f4408ad78adb8ff512399896a.scope: Deactivated successfully.
Oct 14 05:32:26 np0005486808 nova_compute[259627]: 2025-10-14 09:32:26.237 2 INFO nova.virt.libvirt.driver [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Deleting instance files /var/lib/nova/instances/b30a994a-5fb7-4344-9944-98d3d75d3b04_del#033[00m
Oct 14 05:32:26 np0005486808 nova_compute[259627]: 2025-10-14 09:32:26.238 2 INFO nova.virt.libvirt.driver [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Deletion of /var/lib/nova/instances/b30a994a-5fb7-4344-9944-98d3d75d3b04_del complete#033[00m
Oct 14 05:32:26 np0005486808 podman[403811]: 2025-10-14 09:32:26.26929783 +0000 UTC m=+0.032640592 container remove f639b506c8f5fae7770b83d3f5a35c10628b924f4408ad78adb8ff512399896a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 14 05:32:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:26.274 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b1995c3e-8176-48f1-bafa-21eb9f54f312]: (4, ('Tue Oct 14 09:32:26 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848 (f639b506c8f5fae7770b83d3f5a35c10628b924f4408ad78adb8ff512399896a)\nf639b506c8f5fae7770b83d3f5a35c10628b924f4408ad78adb8ff512399896a\nTue Oct 14 09:32:26 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848 (f639b506c8f5fae7770b83d3f5a35c10628b924f4408ad78adb8ff512399896a)\nf639b506c8f5fae7770b83d3f5a35c10628b924f4408ad78adb8ff512399896a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:26 np0005486808 nova_compute[259627]: 2025-10-14 09:32:26.274 2 DEBUG oslo_concurrency.processutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:32:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:26.275 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9bf195b5-00e2-463e-b83e-61f479d9974e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:26.276 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1dc63515-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:32:26 np0005486808 kernel: tap1dc63515-40: left promiscuous mode
Oct 14 05:32:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:26.295 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[13867542-f6bf-4764-96d7-651b26a46bc8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:26 np0005486808 nova_compute[259627]: 2025-10-14 09:32:26.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:26 np0005486808 nova_compute[259627]: 2025-10-14 09:32:26.318 2 INFO nova.compute.manager [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Took 0.81 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:32:26 np0005486808 nova_compute[259627]: 2025-10-14 09:32:26.319 2 DEBUG oslo.service.loopingcall [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:32:26 np0005486808 nova_compute[259627]: 2025-10-14 09:32:26.319 2 DEBUG nova.compute.manager [-] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:32:26 np0005486808 nova_compute[259627]: 2025-10-14 09:32:26.319 2 DEBUG nova.network.neutron [-] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:32:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:26.331 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[26100b24-1818-4b47-b527-c7a99c3cb13c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:26.332 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[06ed826b-c1e4-489d-96cc-596579667804]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:26.350 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1c61165f-3dc9-4f17-84b3-0f5bff1ec094]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 804940, 'reachable_time': 29845, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 403827, 'error': None, 'target': 'ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:26.352 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1dc63515-48cf-4886-956d-024d1d9cb848 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:32:26 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:26.352 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[eb7ac643-a2e5-4f2c-b493-1e75fc3e50f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:26 np0005486808 nova_compute[259627]: 2025-10-14 09:32:26.386 2 DEBUG nova.compute.manager [req-3f0cca03-3b61-469c-89b3-688655716994 req-62cb9dbe-3a63-49ae-882f-066890148d70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Received event network-changed-f2639397-8fb2-4541-a298-fd68219e1e47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:32:26 np0005486808 nova_compute[259627]: 2025-10-14 09:32:26.386 2 DEBUG nova.compute.manager [req-3f0cca03-3b61-469c-89b3-688655716994 req-62cb9dbe-3a63-49ae-882f-066890148d70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Refreshing instance network info cache due to event network-changed-f2639397-8fb2-4541-a298-fd68219e1e47. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:32:26 np0005486808 nova_compute[259627]: 2025-10-14 09:32:26.386 2 DEBUG oslo_concurrency.lockutils [req-3f0cca03-3b61-469c-89b3-688655716994 req-62cb9dbe-3a63-49ae-882f-066890148d70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-b30a994a-5fb7-4344-9944-98d3d75d3b04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:32:26 np0005486808 nova_compute[259627]: 2025-10-14 09:32:26.387 2 DEBUG oslo_concurrency.lockutils [req-3f0cca03-3b61-469c-89b3-688655716994 req-62cb9dbe-3a63-49ae-882f-066890148d70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-b30a994a-5fb7-4344-9944-98d3d75d3b04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:32:26 np0005486808 nova_compute[259627]: 2025-10-14 09:32:26.387 2 DEBUG nova.network.neutron [req-3f0cca03-3b61-469c-89b3-688655716994 req-62cb9dbe-3a63-49ae-882f-066890148d70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Refreshing network info cache for port f2639397-8fb2-4541-a298-fd68219e1e47 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:32:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2379: 305 pgs: 305 active+clean; 200 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 360 KiB/s rd, 2.2 MiB/s wr, 94 op/s
Oct 14 05:32:26 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:32:26 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3119166413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:32:26 np0005486808 nova_compute[259627]: 2025-10-14 09:32:26.718 2 DEBUG oslo_concurrency.processutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:32:26 np0005486808 nova_compute[259627]: 2025-10-14 09:32:26.726 2 DEBUG nova.compute.provider_tree [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:32:26 np0005486808 nova_compute[259627]: 2025-10-14 09:32:26.754 2 DEBUG nova.scheduler.client.report [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:32:26 np0005486808 nova_compute[259627]: 2025-10-14 09:32:26.788 2 DEBUG oslo_concurrency.lockutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:32:26 np0005486808 nova_compute[259627]: 2025-10-14 09:32:26.790 2 DEBUG nova.compute.manager [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:32:26 np0005486808 systemd[1]: run-netns-ovnmeta\x2d1dc63515\x2d48cf\x2d4886\x2d956d\x2d024d1d9cb848.mount: Deactivated successfully.
Oct 14 05:32:26 np0005486808 nova_compute[259627]: 2025-10-14 09:32:26.830 2 DEBUG nova.compute.manager [req-ae53e669-6082-4200-aeb3-6870f9c49245 req-386c892a-b65b-4829-8407-5e822c07c954 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Received event network-vif-unplugged-f2639397-8fb2-4541-a298-fd68219e1e47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:32:26 np0005486808 nova_compute[259627]: 2025-10-14 09:32:26.831 2 DEBUG oslo_concurrency.lockutils [req-ae53e669-6082-4200-aeb3-6870f9c49245 req-386c892a-b65b-4829-8407-5e822c07c954 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:32:26 np0005486808 nova_compute[259627]: 2025-10-14 09:32:26.831 2 DEBUG oslo_concurrency.lockutils [req-ae53e669-6082-4200-aeb3-6870f9c49245 req-386c892a-b65b-4829-8407-5e822c07c954 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:32:26 np0005486808 nova_compute[259627]: 2025-10-14 09:32:26.831 2 DEBUG oslo_concurrency.lockutils [req-ae53e669-6082-4200-aeb3-6870f9c49245 req-386c892a-b65b-4829-8407-5e822c07c954 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:32:26 np0005486808 nova_compute[259627]: 2025-10-14 09:32:26.831 2 DEBUG nova.compute.manager [req-ae53e669-6082-4200-aeb3-6870f9c49245 req-386c892a-b65b-4829-8407-5e822c07c954 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] No waiting events found dispatching network-vif-unplugged-f2639397-8fb2-4541-a298-fd68219e1e47 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:32:26 np0005486808 nova_compute[259627]: 2025-10-14 09:32:26.832 2 DEBUG nova.compute.manager [req-ae53e669-6082-4200-aeb3-6870f9c49245 req-386c892a-b65b-4829-8407-5e822c07c954 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Received event network-vif-unplugged-f2639397-8fb2-4541-a298-fd68219e1e47 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:32:26 np0005486808 nova_compute[259627]: 2025-10-14 09:32:26.832 2 DEBUG nova.compute.manager [req-ae53e669-6082-4200-aeb3-6870f9c49245 req-386c892a-b65b-4829-8407-5e822c07c954 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Received event network-vif-plugged-f2639397-8fb2-4541-a298-fd68219e1e47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:32:26 np0005486808 nova_compute[259627]: 2025-10-14 09:32:26.832 2 DEBUG oslo_concurrency.lockutils [req-ae53e669-6082-4200-aeb3-6870f9c49245 req-386c892a-b65b-4829-8407-5e822c07c954 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:32:26 np0005486808 nova_compute[259627]: 2025-10-14 09:32:26.832 2 DEBUG oslo_concurrency.lockutils [req-ae53e669-6082-4200-aeb3-6870f9c49245 req-386c892a-b65b-4829-8407-5e822c07c954 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:32:26 np0005486808 nova_compute[259627]: 2025-10-14 09:32:26.833 2 DEBUG oslo_concurrency.lockutils [req-ae53e669-6082-4200-aeb3-6870f9c49245 req-386c892a-b65b-4829-8407-5e822c07c954 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:32:26 np0005486808 nova_compute[259627]: 2025-10-14 09:32:26.833 2 DEBUG nova.compute.manager [req-ae53e669-6082-4200-aeb3-6870f9c49245 req-386c892a-b65b-4829-8407-5e822c07c954 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] No waiting events found dispatching network-vif-plugged-f2639397-8fb2-4541-a298-fd68219e1e47 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:32:26 np0005486808 nova_compute[259627]: 2025-10-14 09:32:26.833 2 WARNING nova.compute.manager [req-ae53e669-6082-4200-aeb3-6870f9c49245 req-386c892a-b65b-4829-8407-5e822c07c954 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Received unexpected event network-vif-plugged-f2639397-8fb2-4541-a298-fd68219e1e47 for instance with vm_state active and task_state deleting.#033[00m
Oct 14 05:32:26 np0005486808 nova_compute[259627]: 2025-10-14 09:32:26.857 2 DEBUG nova.compute.manager [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:32:26 np0005486808 nova_compute[259627]: 2025-10-14 09:32:26.857 2 DEBUG nova.network.neutron [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:32:26 np0005486808 nova_compute[259627]: 2025-10-14 09:32:26.881 2 INFO nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:32:26 np0005486808 nova_compute[259627]: 2025-10-14 09:32:26.905 2 DEBUG nova.compute.manager [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:32:27 np0005486808 nova_compute[259627]: 2025-10-14 09:32:27.034 2 DEBUG nova.compute.manager [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:32:27 np0005486808 nova_compute[259627]: 2025-10-14 09:32:27.036 2 DEBUG nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:32:27 np0005486808 nova_compute[259627]: 2025-10-14 09:32:27.036 2 INFO nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Creating image(s)#033[00m
Oct 14 05:32:27 np0005486808 nova_compute[259627]: 2025-10-14 09:32:27.073 2 DEBUG nova.storage.rbd_utils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image d1e24a24-daf7-4696-9dd4-ab29df7c8131_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:32:27 np0005486808 nova_compute[259627]: 2025-10-14 09:32:27.100 2 DEBUG nova.storage.rbd_utils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image d1e24a24-daf7-4696-9dd4-ab29df7c8131_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:32:27 np0005486808 nova_compute[259627]: 2025-10-14 09:32:27.131 2 DEBUG nova.storage.rbd_utils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image d1e24a24-daf7-4696-9dd4-ab29df7c8131_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:32:27 np0005486808 nova_compute[259627]: 2025-10-14 09:32:27.137 2 DEBUG oslo_concurrency.processutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:32:27 np0005486808 nova_compute[259627]: 2025-10-14 09:32:27.202 2 DEBUG nova.policy [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '20f3546ab30e42b5b641f67780316750', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f754bf649a2404fa8dee732f5aab36e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:32:27 np0005486808 nova_compute[259627]: 2025-10-14 09:32:27.250 2 DEBUG oslo_concurrency.processutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.113s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:32:27 np0005486808 nova_compute[259627]: 2025-10-14 09:32:27.251 2 DEBUG oslo_concurrency.lockutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:32:27 np0005486808 nova_compute[259627]: 2025-10-14 09:32:27.251 2 DEBUG oslo_concurrency.lockutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:32:27 np0005486808 nova_compute[259627]: 2025-10-14 09:32:27.252 2 DEBUG oslo_concurrency.lockutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:32:27 np0005486808 nova_compute[259627]: 2025-10-14 09:32:27.280 2 DEBUG nova.storage.rbd_utils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image d1e24a24-daf7-4696-9dd4-ab29df7c8131_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:32:27 np0005486808 nova_compute[259627]: 2025-10-14 09:32:27.283 2 DEBUG oslo_concurrency.processutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 d1e24a24-daf7-4696-9dd4-ab29df7c8131_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:32:27 np0005486808 nova_compute[259627]: 2025-10-14 09:32:27.548 2 DEBUG oslo_concurrency.processutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 d1e24a24-daf7-4696-9dd4-ab29df7c8131_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.265s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:32:27 np0005486808 nova_compute[259627]: 2025-10-14 09:32:27.624 2 DEBUG nova.storage.rbd_utils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] resizing rbd image d1e24a24-daf7-4696-9dd4-ab29df7c8131_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:32:27 np0005486808 nova_compute[259627]: 2025-10-14 09:32:27.727 2 DEBUG nova.objects.instance [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'migration_context' on Instance uuid d1e24a24-daf7-4696-9dd4-ab29df7c8131 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:32:27 np0005486808 nova_compute[259627]: 2025-10-14 09:32:27.740 2 DEBUG nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:32:27 np0005486808 nova_compute[259627]: 2025-10-14 09:32:27.741 2 DEBUG nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Ensure instance console log exists: /var/lib/nova/instances/d1e24a24-daf7-4696-9dd4-ab29df7c8131/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:32:27 np0005486808 nova_compute[259627]: 2025-10-14 09:32:27.741 2 DEBUG oslo_concurrency.lockutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:32:27 np0005486808 nova_compute[259627]: 2025-10-14 09:32:27.741 2 DEBUG oslo_concurrency.lockutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:32:27 np0005486808 nova_compute[259627]: 2025-10-14 09:32:27.742 2 DEBUG oslo_concurrency.lockutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:32:28 np0005486808 nova_compute[259627]: 2025-10-14 09:32:28.004 2 DEBUG nova.network.neutron [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Successfully created port: e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:32:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:32:28 np0005486808 nova_compute[259627]: 2025-10-14 09:32:28.084 2 DEBUG nova.network.neutron [-] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:32:28 np0005486808 nova_compute[259627]: 2025-10-14 09:32:28.099 2 INFO nova.compute.manager [-] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Took 1.78 seconds to deallocate network for instance.#033[00m
Oct 14 05:32:28 np0005486808 nova_compute[259627]: 2025-10-14 09:32:28.136 2 DEBUG nova.network.neutron [req-3f0cca03-3b61-469c-89b3-688655716994 req-62cb9dbe-3a63-49ae-882f-066890148d70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Updated VIF entry in instance network info cache for port f2639397-8fb2-4541-a298-fd68219e1e47. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:32:28 np0005486808 nova_compute[259627]: 2025-10-14 09:32:28.136 2 DEBUG nova.network.neutron [req-3f0cca03-3b61-469c-89b3-688655716994 req-62cb9dbe-3a63-49ae-882f-066890148d70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Updating instance_info_cache with network_info: [{"id": "f2639397-8fb2-4541-a298-fd68219e1e47", "address": "fa:16:3e:a4:1e:d0", "network": {"id": "20dd724c-9d71-4931-8e8b-4dd3fbbacc17", "bridge": "br-int", "label": "tempest-network-smoke--1457285288", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2639397-8f", "ovs_interfaceid": "f2639397-8fb2-4541-a298-fd68219e1e47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "address": "fa:16:3e:cf:84:75", "network": {"id": "1dc63515-48cf-4886-956d-024d1d9cb848", "bridge": "br-int", "label": "tempest-network-smoke--1325071190", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fecf:8475", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc22b3ec2-f5", "ovs_interfaceid": "c22b3ec2-f5a1-4c97-8648-a463e9e12545", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:32:28 np0005486808 nova_compute[259627]: 2025-10-14 09:32:28.145 2 DEBUG oslo_concurrency.lockutils [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:32:28 np0005486808 nova_compute[259627]: 2025-10-14 09:32:28.146 2 DEBUG oslo_concurrency.lockutils [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:32:28 np0005486808 nova_compute[259627]: 2025-10-14 09:32:28.155 2 DEBUG oslo_concurrency.lockutils [req-3f0cca03-3b61-469c-89b3-688655716994 req-62cb9dbe-3a63-49ae-882f-066890148d70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-b30a994a-5fb7-4344-9944-98d3d75d3b04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:32:28 np0005486808 nova_compute[259627]: 2025-10-14 09:32:28.156 2 DEBUG nova.compute.manager [req-3f0cca03-3b61-469c-89b3-688655716994 req-62cb9dbe-3a63-49ae-882f-066890148d70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Received event network-vif-unplugged-c22b3ec2-f5a1-4c97-8648-a463e9e12545 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:32:28 np0005486808 nova_compute[259627]: 2025-10-14 09:32:28.156 2 DEBUG oslo_concurrency.lockutils [req-3f0cca03-3b61-469c-89b3-688655716994 req-62cb9dbe-3a63-49ae-882f-066890148d70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:32:28 np0005486808 nova_compute[259627]: 2025-10-14 09:32:28.156 2 DEBUG oslo_concurrency.lockutils [req-3f0cca03-3b61-469c-89b3-688655716994 req-62cb9dbe-3a63-49ae-882f-066890148d70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:32:28 np0005486808 nova_compute[259627]: 2025-10-14 09:32:28.156 2 DEBUG oslo_concurrency.lockutils [req-3f0cca03-3b61-469c-89b3-688655716994 req-62cb9dbe-3a63-49ae-882f-066890148d70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:32:28 np0005486808 nova_compute[259627]: 2025-10-14 09:32:28.157 2 DEBUG nova.compute.manager [req-3f0cca03-3b61-469c-89b3-688655716994 req-62cb9dbe-3a63-49ae-882f-066890148d70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] No waiting events found dispatching network-vif-unplugged-c22b3ec2-f5a1-4c97-8648-a463e9e12545 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:32:28 np0005486808 nova_compute[259627]: 2025-10-14 09:32:28.157 2 DEBUG nova.compute.manager [req-3f0cca03-3b61-469c-89b3-688655716994 req-62cb9dbe-3a63-49ae-882f-066890148d70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Received event network-vif-unplugged-c22b3ec2-f5a1-4c97-8648-a463e9e12545 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:32:28 np0005486808 nova_compute[259627]: 2025-10-14 09:32:28.157 2 DEBUG nova.compute.manager [req-3f0cca03-3b61-469c-89b3-688655716994 req-62cb9dbe-3a63-49ae-882f-066890148d70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Received event network-vif-plugged-c22b3ec2-f5a1-4c97-8648-a463e9e12545 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:32:28 np0005486808 nova_compute[259627]: 2025-10-14 09:32:28.157 2 DEBUG oslo_concurrency.lockutils [req-3f0cca03-3b61-469c-89b3-688655716994 req-62cb9dbe-3a63-49ae-882f-066890148d70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:32:28 np0005486808 nova_compute[259627]: 2025-10-14 09:32:28.158 2 DEBUG oslo_concurrency.lockutils [req-3f0cca03-3b61-469c-89b3-688655716994 req-62cb9dbe-3a63-49ae-882f-066890148d70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:32:28 np0005486808 nova_compute[259627]: 2025-10-14 09:32:28.158 2 DEBUG oslo_concurrency.lockutils [req-3f0cca03-3b61-469c-89b3-688655716994 req-62cb9dbe-3a63-49ae-882f-066890148d70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:32:28 np0005486808 nova_compute[259627]: 2025-10-14 09:32:28.158 2 DEBUG nova.compute.manager [req-3f0cca03-3b61-469c-89b3-688655716994 req-62cb9dbe-3a63-49ae-882f-066890148d70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] No waiting events found dispatching network-vif-plugged-c22b3ec2-f5a1-4c97-8648-a463e9e12545 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:32:28 np0005486808 nova_compute[259627]: 2025-10-14 09:32:28.158 2 WARNING nova.compute.manager [req-3f0cca03-3b61-469c-89b3-688655716994 req-62cb9dbe-3a63-49ae-882f-066890148d70 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Received unexpected event network-vif-plugged-c22b3ec2-f5a1-4c97-8648-a463e9e12545 for instance with vm_state active and task_state deleting.#033[00m
Oct 14 05:32:28 np0005486808 nova_compute[259627]: 2025-10-14 09:32:28.220 2 DEBUG oslo_concurrency.processutils [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:32:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2380: 305 pgs: 305 active+clean; 200 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 20 KiB/s wr, 30 op/s
Oct 14 05:32:28 np0005486808 podman[404036]: 2025-10-14 09:32:28.669368627 +0000 UTC m=+0.076918789 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:32:28 np0005486808 podman[404035]: 2025-10-14 09:32:28.685270347 +0000 UTC m=+0.094653974 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3)
Oct 14 05:32:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:32:28 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4228061182' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:32:28 np0005486808 nova_compute[259627]: 2025-10-14 09:32:28.708 2 DEBUG oslo_concurrency.processutils [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:32:28 np0005486808 nova_compute[259627]: 2025-10-14 09:32:28.718 2 DEBUG nova.compute.provider_tree [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:32:28 np0005486808 nova_compute[259627]: 2025-10-14 09:32:28.750 2 DEBUG nova.scheduler.client.report [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:32:28 np0005486808 nova_compute[259627]: 2025-10-14 09:32:28.776 2 DEBUG oslo_concurrency.lockutils [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:32:28 np0005486808 nova_compute[259627]: 2025-10-14 09:32:28.811 2 INFO nova.scheduler.client.report [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Deleted allocations for instance b30a994a-5fb7-4344-9944-98d3d75d3b04#033[00m
Oct 14 05:32:28 np0005486808 nova_compute[259627]: 2025-10-14 09:32:28.877 2 DEBUG oslo_concurrency.lockutils [None req-1431a75d-91b9-4557-acec-fdac6f4fc56f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "b30a994a-5fb7-4344-9944-98d3d75d3b04" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.374s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:32:28 np0005486808 nova_compute[259627]: 2025-10-14 09:32:28.931 2 DEBUG nova.compute.manager [req-d0742608-97c6-4c2c-920d-665378097f60 req-bee631df-2cd1-4682-810c-16f0eeaec90d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Received event network-vif-deleted-c22b3ec2-f5a1-4c97-8648-a463e9e12545 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:32:28 np0005486808 nova_compute[259627]: 2025-10-14 09:32:28.932 2 INFO nova.compute.manager [req-d0742608-97c6-4c2c-920d-665378097f60 req-bee631df-2cd1-4682-810c-16f0eeaec90d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Neutron deleted interface c22b3ec2-f5a1-4c97-8648-a463e9e12545; detaching it from the instance and deleting it from the info cache#033[00m
Oct 14 05:32:28 np0005486808 nova_compute[259627]: 2025-10-14 09:32:28.932 2 DEBUG nova.network.neutron [req-d0742608-97c6-4c2c-920d-665378097f60 req-bee631df-2cd1-4682-810c-16f0eeaec90d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Oct 14 05:32:28 np0005486808 nova_compute[259627]: 2025-10-14 09:32:28.936 2 DEBUG nova.compute.manager [req-d0742608-97c6-4c2c-920d-665378097f60 req-bee631df-2cd1-4682-810c-16f0eeaec90d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Detach interface failed, port_id=c22b3ec2-f5a1-4c97-8648-a463e9e12545, reason: Instance b30a994a-5fb7-4344-9944-98d3d75d3b04 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct 14 05:32:28 np0005486808 nova_compute[259627]: 2025-10-14 09:32:28.937 2 DEBUG nova.compute.manager [req-d0742608-97c6-4c2c-920d-665378097f60 req-bee631df-2cd1-4682-810c-16f0eeaec90d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Received event network-vif-deleted-f2639397-8fb2-4541-a298-fd68219e1e47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:32:28 np0005486808 nova_compute[259627]: 2025-10-14 09:32:28.937 2 INFO nova.compute.manager [req-d0742608-97c6-4c2c-920d-665378097f60 req-bee631df-2cd1-4682-810c-16f0eeaec90d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Neutron deleted interface f2639397-8fb2-4541-a298-fd68219e1e47; detaching it from the instance and deleting it from the info cache#033[00m
Oct 14 05:32:28 np0005486808 nova_compute[259627]: 2025-10-14 09:32:28.938 2 DEBUG nova.network.neutron [req-d0742608-97c6-4c2c-920d-665378097f60 req-bee631df-2cd1-4682-810c-16f0eeaec90d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Oct 14 05:32:28 np0005486808 nova_compute[259627]: 2025-10-14 09:32:28.940 2 DEBUG nova.compute.manager [req-d0742608-97c6-4c2c-920d-665378097f60 req-bee631df-2cd1-4682-810c-16f0eeaec90d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Detach interface failed, port_id=f2639397-8fb2-4541-a298-fd68219e1e47, reason: Instance b30a994a-5fb7-4344-9944-98d3d75d3b04 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct 14 05:32:28 np0005486808 nova_compute[259627]: 2025-10-14 09:32:28.965 2 DEBUG nova.network.neutron [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Successfully updated port: e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:32:28 np0005486808 nova_compute[259627]: 2025-10-14 09:32:28.984 2 DEBUG oslo_concurrency.lockutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "refresh_cache-d1e24a24-daf7-4696-9dd4-ab29df7c8131" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:32:28 np0005486808 nova_compute[259627]: 2025-10-14 09:32:28.985 2 DEBUG oslo_concurrency.lockutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquired lock "refresh_cache-d1e24a24-daf7-4696-9dd4-ab29df7c8131" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:32:28 np0005486808 nova_compute[259627]: 2025-10-14 09:32:28.985 2 DEBUG nova.network.neutron [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:32:29 np0005486808 nova_compute[259627]: 2025-10-14 09:32:29.049 2 DEBUG nova.compute.manager [req-da918d9a-771d-4192-8f3c-9d466e57f872 req-acad742c-6a93-4932-9313-2ecc770a6894 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Received event network-changed-e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:32:29 np0005486808 nova_compute[259627]: 2025-10-14 09:32:29.050 2 DEBUG nova.compute.manager [req-da918d9a-771d-4192-8f3c-9d466e57f872 req-acad742c-6a93-4932-9313-2ecc770a6894 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Refreshing instance network info cache due to event network-changed-e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:32:29 np0005486808 nova_compute[259627]: 2025-10-14 09:32:29.050 2 DEBUG oslo_concurrency.lockutils [req-da918d9a-771d-4192-8f3c-9d466e57f872 req-acad742c-6a93-4932-9313-2ecc770a6894 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-d1e24a24-daf7-4696-9dd4-ab29df7c8131" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:32:29 np0005486808 nova_compute[259627]: 2025-10-14 09:32:29.148 2 DEBUG nova.network.neutron [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:32:29 np0005486808 nova_compute[259627]: 2025-10-14 09:32:29.983 2 DEBUG nova.network.neutron [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Updating instance_info_cache with network_info: [{"id": "e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f", "address": "fa:16:3e:24:dd:42", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5c607d0-9f", "ovs_interfaceid": "e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:32:30 np0005486808 nova_compute[259627]: 2025-10-14 09:32:30.009 2 DEBUG oslo_concurrency.lockutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Releasing lock "refresh_cache-d1e24a24-daf7-4696-9dd4-ab29df7c8131" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:32:30 np0005486808 nova_compute[259627]: 2025-10-14 09:32:30.009 2 DEBUG nova.compute.manager [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Instance network_info: |[{"id": "e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f", "address": "fa:16:3e:24:dd:42", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5c607d0-9f", "ovs_interfaceid": "e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:32:30 np0005486808 nova_compute[259627]: 2025-10-14 09:32:30.010 2 DEBUG oslo_concurrency.lockutils [req-da918d9a-771d-4192-8f3c-9d466e57f872 req-acad742c-6a93-4932-9313-2ecc770a6894 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-d1e24a24-daf7-4696-9dd4-ab29df7c8131" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:32:30 np0005486808 nova_compute[259627]: 2025-10-14 09:32:30.010 2 DEBUG nova.network.neutron [req-da918d9a-771d-4192-8f3c-9d466e57f872 req-acad742c-6a93-4932-9313-2ecc770a6894 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Refreshing network info cache for port e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:32:30 np0005486808 nova_compute[259627]: 2025-10-14 09:32:30.015 2 DEBUG nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Start _get_guest_xml network_info=[{"id": "e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f", "address": "fa:16:3e:24:dd:42", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5c607d0-9f", "ovs_interfaceid": "e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:32:30 np0005486808 nova_compute[259627]: 2025-10-14 09:32:30.022 2 WARNING nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:32:30 np0005486808 nova_compute[259627]: 2025-10-14 09:32:30.034 2 DEBUG nova.virt.libvirt.host [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:32:30 np0005486808 nova_compute[259627]: 2025-10-14 09:32:30.035 2 DEBUG nova.virt.libvirt.host [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:32:30 np0005486808 nova_compute[259627]: 2025-10-14 09:32:30.049 2 DEBUG nova.virt.libvirt.host [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:32:30 np0005486808 nova_compute[259627]: 2025-10-14 09:32:30.050 2 DEBUG nova.virt.libvirt.host [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:32:30 np0005486808 nova_compute[259627]: 2025-10-14 09:32:30.050 2 DEBUG nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:32:30 np0005486808 nova_compute[259627]: 2025-10-14 09:32:30.050 2 DEBUG nova.virt.hardware [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:32:30 np0005486808 nova_compute[259627]: 2025-10-14 09:32:30.051 2 DEBUG nova.virt.hardware [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:32:30 np0005486808 nova_compute[259627]: 2025-10-14 09:32:30.051 2 DEBUG nova.virt.hardware [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:32:30 np0005486808 nova_compute[259627]: 2025-10-14 09:32:30.051 2 DEBUG nova.virt.hardware [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:32:30 np0005486808 nova_compute[259627]: 2025-10-14 09:32:30.052 2 DEBUG nova.virt.hardware [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:32:30 np0005486808 nova_compute[259627]: 2025-10-14 09:32:30.052 2 DEBUG nova.virt.hardware [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:32:30 np0005486808 nova_compute[259627]: 2025-10-14 09:32:30.052 2 DEBUG nova.virt.hardware [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:32:30 np0005486808 nova_compute[259627]: 2025-10-14 09:32:30.053 2 DEBUG nova.virt.hardware [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:32:30 np0005486808 nova_compute[259627]: 2025-10-14 09:32:30.053 2 DEBUG nova.virt.hardware [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:32:30 np0005486808 nova_compute[259627]: 2025-10-14 09:32:30.053 2 DEBUG nova.virt.hardware [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:32:30 np0005486808 nova_compute[259627]: 2025-10-14 09:32:30.054 2 DEBUG nova.virt.hardware [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:32:30 np0005486808 nova_compute[259627]: 2025-10-14 09:32:30.057 2 DEBUG oslo_concurrency.processutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:32:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2381: 305 pgs: 305 active+clean; 189 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 637 KiB/s wr, 52 op/s
Oct 14 05:32:30 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:32:30 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3528779693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:32:30 np0005486808 nova_compute[259627]: 2025-10-14 09:32:30.504 2 DEBUG oslo_concurrency.processutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:32:30 np0005486808 nova_compute[259627]: 2025-10-14 09:32:30.538 2 DEBUG nova.storage.rbd_utils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image d1e24a24-daf7-4696-9dd4-ab29df7c8131_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:32:30 np0005486808 nova_compute[259627]: 2025-10-14 09:32:30.542 2 DEBUG oslo_concurrency.processutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:32:30 np0005486808 nova_compute[259627]: 2025-10-14 09:32:30.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:30 np0005486808 nova_compute[259627]: 2025-10-14 09:32:30.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:30 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:32:30 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/734407679' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:32:31 np0005486808 nova_compute[259627]: 2025-10-14 09:32:31.010 2 DEBUG oslo_concurrency.processutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:32:31 np0005486808 nova_compute[259627]: 2025-10-14 09:32:31.012 2 DEBUG nova.virt.libvirt.vif [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:32:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-gen-1-1315509893',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-gen-1-1315509893',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ge',id=141,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFZgUpzEuWQ2yRKeeGP5cwEpEG8ix10coU0eNExva0wfqy7K02GmuQxBgal8BWbrr6/lfxNLSNkfNKLH91c4JJNbWrJCy5K0gCADFvn2wBXfeHIu8FO18gqCQFIgsuofuA==',key_name='tempest-TestSecurityGroupsBasicOps-135749728',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-dw48ywqx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:32:26Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=d1e24a24-daf7-4696-9dd4-ab29df7c8131,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f", "address": "fa:16:3e:24:dd:42", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5c607d0-9f", "ovs_interfaceid": "e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:32:31 np0005486808 nova_compute[259627]: 2025-10-14 09:32:31.013 2 DEBUG nova.network.os_vif_util [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f", "address": "fa:16:3e:24:dd:42", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5c607d0-9f", "ovs_interfaceid": "e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:32:31 np0005486808 nova_compute[259627]: 2025-10-14 09:32:31.015 2 DEBUG nova.network.os_vif_util [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:dd:42,bridge_name='br-int',has_traffic_filtering=True,id=e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f,network=Network(026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5c607d0-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:32:31 np0005486808 nova_compute[259627]: 2025-10-14 09:32:31.018 2 DEBUG nova.objects.instance [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'pci_devices' on Instance uuid d1e24a24-daf7-4696-9dd4-ab29df7c8131 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:32:31 np0005486808 nova_compute[259627]: 2025-10-14 09:32:31.035 2 DEBUG nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:32:31 np0005486808 nova_compute[259627]:  <uuid>d1e24a24-daf7-4696-9dd4-ab29df7c8131</uuid>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:  <name>instance-0000008d</name>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:32:31 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-gen-1-1315509893</nova:name>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:32:30</nova:creationTime>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:32:31 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:        <nova:user uuid="20f3546ab30e42b5b641f67780316750">tempest-TestSecurityGroupsBasicOps-1327646173-project-member</nova:user>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:        <nova:project uuid="5f754bf649a2404fa8dee732f5aab36e">tempest-TestSecurityGroupsBasicOps-1327646173</nova:project>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:        <nova:port uuid="e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f">
Oct 14 05:32:31 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:      <entry name="serial">d1e24a24-daf7-4696-9dd4-ab29df7c8131</entry>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:      <entry name="uuid">d1e24a24-daf7-4696-9dd4-ab29df7c8131</entry>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:32:31 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/d1e24a24-daf7-4696-9dd4-ab29df7c8131_disk">
Oct 14 05:32:31 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:32:31 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:32:31 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/d1e24a24-daf7-4696-9dd4-ab29df7c8131_disk.config">
Oct 14 05:32:31 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:32:31 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:32:31 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:24:dd:42"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:      <target dev="tape5c607d0-9f"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:32:31 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/d1e24a24-daf7-4696-9dd4-ab29df7c8131/console.log" append="off"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:32:31 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:32:31 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:32:31 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:32:31 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:32:31 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:32:31 np0005486808 nova_compute[259627]: 2025-10-14 09:32:31.038 2 DEBUG nova.compute.manager [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Preparing to wait for external event network-vif-plugged-e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:32:31 np0005486808 nova_compute[259627]: 2025-10-14 09:32:31.039 2 DEBUG oslo_concurrency.lockutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "d1e24a24-daf7-4696-9dd4-ab29df7c8131-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:32:31 np0005486808 nova_compute[259627]: 2025-10-14 09:32:31.040 2 DEBUG oslo_concurrency.lockutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "d1e24a24-daf7-4696-9dd4-ab29df7c8131-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:32:31 np0005486808 nova_compute[259627]: 2025-10-14 09:32:31.040 2 DEBUG oslo_concurrency.lockutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "d1e24a24-daf7-4696-9dd4-ab29df7c8131-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:32:31 np0005486808 nova_compute[259627]: 2025-10-14 09:32:31.041 2 DEBUG nova.virt.libvirt.vif [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:32:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-gen-1-1315509893',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-gen-1-1315509893',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ge',id=141,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFZgUpzEuWQ2yRKeeGP5cwEpEG8ix10coU0eNExva0wfqy7K02GmuQxBgal8BWbrr6/lfxNLSNkfNKLH91c4JJNbWrJCy5K0gCADFvn2wBXfeHIu8FO18gqCQFIgsuofuA==',key_name='tempest-TestSecurityGroupsBasicOps-135749728',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-dw48ywqx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:32:26Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=d1e24a24-daf7-4696-9dd4-ab29df7c8131,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f", "address": "fa:16:3e:24:dd:42", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5c607d0-9f", "ovs_interfaceid": "e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:32:31 np0005486808 nova_compute[259627]: 2025-10-14 09:32:31.042 2 DEBUG nova.network.os_vif_util [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f", "address": "fa:16:3e:24:dd:42", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5c607d0-9f", "ovs_interfaceid": "e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:32:31 np0005486808 nova_compute[259627]: 2025-10-14 09:32:31.043 2 DEBUG nova.network.os_vif_util [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:dd:42,bridge_name='br-int',has_traffic_filtering=True,id=e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f,network=Network(026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5c607d0-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:32:31 np0005486808 nova_compute[259627]: 2025-10-14 09:32:31.044 2 DEBUG os_vif [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:dd:42,bridge_name='br-int',has_traffic_filtering=True,id=e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f,network=Network(026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5c607d0-9f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:32:31 np0005486808 nova_compute[259627]: 2025-10-14 09:32:31.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:31 np0005486808 nova_compute[259627]: 2025-10-14 09:32:31.046 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:32:31 np0005486808 nova_compute[259627]: 2025-10-14 09:32:31.046 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:32:31 np0005486808 nova_compute[259627]: 2025-10-14 09:32:31.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:31 np0005486808 nova_compute[259627]: 2025-10-14 09:32:31.052 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape5c607d0-9f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:32:31 np0005486808 nova_compute[259627]: 2025-10-14 09:32:31.052 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape5c607d0-9f, col_values=(('external_ids', {'iface-id': 'e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:24:dd:42', 'vm-uuid': 'd1e24a24-daf7-4696-9dd4-ab29df7c8131'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:32:31 np0005486808 nova_compute[259627]: 2025-10-14 09:32:31.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:31 np0005486808 NetworkManager[44885]: <info>  [1760434351.0560] manager: (tape5c607d0-9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/615)
Oct 14 05:32:31 np0005486808 nova_compute[259627]: 2025-10-14 09:32:31.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:32:31 np0005486808 nova_compute[259627]: 2025-10-14 09:32:31.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:31 np0005486808 nova_compute[259627]: 2025-10-14 09:32:31.065 2 INFO os_vif [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:dd:42,bridge_name='br-int',has_traffic_filtering=True,id=e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f,network=Network(026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5c607d0-9f')#033[00m
Oct 14 05:32:31 np0005486808 nova_compute[259627]: 2025-10-14 09:32:31.153 2 DEBUG nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:32:31 np0005486808 nova_compute[259627]: 2025-10-14 09:32:31.155 2 DEBUG nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:32:31 np0005486808 nova_compute[259627]: 2025-10-14 09:32:31.155 2 DEBUG nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] No VIF found with MAC fa:16:3e:24:dd:42, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:32:31 np0005486808 nova_compute[259627]: 2025-10-14 09:32:31.156 2 INFO nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Using config drive#033[00m
Oct 14 05:32:31 np0005486808 nova_compute[259627]: 2025-10-14 09:32:31.192 2 DEBUG nova.storage.rbd_utils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image d1e24a24-daf7-4696-9dd4-ab29df7c8131_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:32:31 np0005486808 nova_compute[259627]: 2025-10-14 09:32:31.404 2 DEBUG nova.network.neutron [req-da918d9a-771d-4192-8f3c-9d466e57f872 req-acad742c-6a93-4932-9313-2ecc770a6894 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Updated VIF entry in instance network info cache for port e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:32:31 np0005486808 nova_compute[259627]: 2025-10-14 09:32:31.405 2 DEBUG nova.network.neutron [req-da918d9a-771d-4192-8f3c-9d466e57f872 req-acad742c-6a93-4932-9313-2ecc770a6894 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Updating instance_info_cache with network_info: [{"id": "e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f", "address": "fa:16:3e:24:dd:42", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5c607d0-9f", "ovs_interfaceid": "e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:32:31 np0005486808 nova_compute[259627]: 2025-10-14 09:32:31.422 2 DEBUG oslo_concurrency.lockutils [req-da918d9a-771d-4192-8f3c-9d466e57f872 req-acad742c-6a93-4932-9313-2ecc770a6894 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-d1e24a24-daf7-4696-9dd4-ab29df7c8131" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 05:32:31 np0005486808 nova_compute[259627]: 2025-10-14 09:32:31.614 2 INFO nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Creating config drive at /var/lib/nova/instances/d1e24a24-daf7-4696-9dd4-ab29df7c8131/disk.config
Oct 14 05:32:31 np0005486808 nova_compute[259627]: 2025-10-14 09:32:31.623 2 DEBUG oslo_concurrency.processutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d1e24a24-daf7-4696-9dd4-ab29df7c8131/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyxbkn43p execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:32:31 np0005486808 nova_compute[259627]: 2025-10-14 09:32:31.764 2 DEBUG oslo_concurrency.processutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d1e24a24-daf7-4696-9dd4-ab29df7c8131/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyxbkn43p" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:32:31 np0005486808 nova_compute[259627]: 2025-10-14 09:32:31.791 2 DEBUG nova.storage.rbd_utils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] rbd image d1e24a24-daf7-4696-9dd4-ab29df7c8131_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:32:31 np0005486808 nova_compute[259627]: 2025-10-14 09:32:31.796 2 DEBUG oslo_concurrency.processutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d1e24a24-daf7-4696-9dd4-ab29df7c8131/disk.config d1e24a24-daf7-4696-9dd4-ab29df7c8131_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:32:32 np0005486808 nova_compute[259627]: 2025-10-14 09:32:32.007 2 DEBUG oslo_concurrency.processutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d1e24a24-daf7-4696-9dd4-ab29df7c8131/disk.config d1e24a24-daf7-4696-9dd4-ab29df7c8131_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.211s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:32:32 np0005486808 nova_compute[259627]: 2025-10-14 09:32:32.009 2 INFO nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Deleting local config drive /var/lib/nova/instances/d1e24a24-daf7-4696-9dd4-ab29df7c8131/disk.config because it was imported into RBD.
Oct 14 05:32:32 np0005486808 kernel: tape5c607d0-9f: entered promiscuous mode
Oct 14 05:32:32 np0005486808 NetworkManager[44885]: <info>  [1760434352.0706] manager: (tape5c607d0-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/616)
Oct 14 05:32:32 np0005486808 ovn_controller[152662]: 2025-10-14T09:32:32Z|01514|binding|INFO|Claiming lport e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f for this chassis.
Oct 14 05:32:32 np0005486808 ovn_controller[152662]: 2025-10-14T09:32:32Z|01515|binding|INFO|e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f: Claiming fa:16:3e:24:dd:42 10.100.0.7
Oct 14 05:32:32 np0005486808 nova_compute[259627]: 2025-10-14 09:32:32.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:32:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:32.083 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:dd:42 10.100.0.7'], port_security=['fa:16:3e:24:dd:42 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd1e24a24-daf7-4696-9dd4-ab29df7c8131', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f754bf649a2404fa8dee732f5aab36e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6ff9262f-239c-4772-9987-411eb120736a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02951490-3f50-4bba-9cc5-21d9c0a6e4ba, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 05:32:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:32.085 162547 INFO neutron.agent.ovn.metadata.agent [-] Port e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f in datapath 026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540 bound to our chassis
Oct 14 05:32:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:32.088 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540
Oct 14 05:32:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:32.111 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[afd4725e-91f1-4cb1-9db2-bcce65e2b903]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:32:32 np0005486808 systemd-machined[214636]: New machine qemu-174-instance-0000008d.
Oct 14 05:32:32 np0005486808 ovn_controller[152662]: 2025-10-14T09:32:32Z|01516|binding|INFO|Setting lport e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f up in Southbound
Oct 14 05:32:32 np0005486808 ovn_controller[152662]: 2025-10-14T09:32:32Z|01517|binding|INFO|Setting lport e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f ovn-installed in OVS
Oct 14 05:32:32 np0005486808 nova_compute[259627]: 2025-10-14 09:32:32.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:32:32 np0005486808 nova_compute[259627]: 2025-10-14 09:32:32.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:32:32 np0005486808 systemd[1]: Started Virtual Machine qemu-174-instance-0000008d.
Oct 14 05:32:32 np0005486808 systemd-udevd[404221]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:32:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:32.150 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f6908f05-d672-4745-b6e7-b08428e8391e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:32:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:32.155 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e7d54ca9-bbee-4620-bdfe-1e491081de70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:32:32 np0005486808 NetworkManager[44885]: <info>  [1760434352.1664] device (tape5c607d0-9f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:32:32 np0005486808 NetworkManager[44885]: <info>  [1760434352.1678] device (tape5c607d0-9f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:32:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:32.196 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[592febd9-880c-44c2-a7ce-2ee64024c3f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:32:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:32.220 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2179d316-0ae0-4837-bef4-273aa35625fb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap026a2ce2-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:95:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 424], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 809945, 'reachable_time': 23632, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 404232, 'error': None, 'target': 'ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:32:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:32.236 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0bbde556-7679-488d-a03d-00da418326dd]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap026a2ce2-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 809957, 'tstamp': 809957}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 404233, 'error': None, 'target': 'ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap026a2ce2-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 809959, 'tstamp': 809959}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 404233, 'error': None, 'target': 'ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:32:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:32.238 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap026a2ce2-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:32:32 np0005486808 nova_compute[259627]: 2025-10-14 09:32:32.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:32:32 np0005486808 nova_compute[259627]: 2025-10-14 09:32:32.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:32:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:32.243 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap026a2ce2-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:32:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:32.243 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 05:32:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:32.244 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap026a2ce2-40, col_values=(('external_ids', {'iface-id': '7f009d61-2857-4109-a89b-a83c53d44768'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:32:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:32.245 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 05:32:32 np0005486808 nova_compute[259627]: 2025-10-14 09:32:32.327 2 DEBUG nova.compute.manager [req-407f7ca0-77df-414b-8d64-20d849a378d6 req-7241a571-915f-4780-86da-f18253fed88e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Received event network-vif-plugged-e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:32:32 np0005486808 nova_compute[259627]: 2025-10-14 09:32:32.328 2 DEBUG oslo_concurrency.lockutils [req-407f7ca0-77df-414b-8d64-20d849a378d6 req-7241a571-915f-4780-86da-f18253fed88e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "d1e24a24-daf7-4696-9dd4-ab29df7c8131-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:32:32 np0005486808 nova_compute[259627]: 2025-10-14 09:32:32.329 2 DEBUG oslo_concurrency.lockutils [req-407f7ca0-77df-414b-8d64-20d849a378d6 req-7241a571-915f-4780-86da-f18253fed88e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d1e24a24-daf7-4696-9dd4-ab29df7c8131-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:32:32 np0005486808 nova_compute[259627]: 2025-10-14 09:32:32.329 2 DEBUG oslo_concurrency.lockutils [req-407f7ca0-77df-414b-8d64-20d849a378d6 req-7241a571-915f-4780-86da-f18253fed88e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d1e24a24-daf7-4696-9dd4-ab29df7c8131-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:32:32 np0005486808 nova_compute[259627]: 2025-10-14 09:32:32.329 2 DEBUG nova.compute.manager [req-407f7ca0-77df-414b-8d64-20d849a378d6 req-7241a571-915f-4780-86da-f18253fed88e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Processing event network-vif-plugged-e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 05:32:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2382: 305 pgs: 305 active+clean; 167 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 1.8 MiB/s wr, 84 op/s
Oct 14 05:32:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:32.596 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 05:32:32 np0005486808 nova_compute[259627]: 2025-10-14 09:32:32.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:32:32 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:32.598 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 05:32:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:32:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:32:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:32:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:32:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:32:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:32:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:32:32
Oct 14 05:32:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:32:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:32:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['backups', '.rgw.root', 'default.rgw.control', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta', 'volumes', '.mgr']
Oct 14 05:32:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:32:33 np0005486808 ovn_controller[152662]: 2025-10-14T09:32:33Z|01518|binding|INFO|Releasing lport 7f009d61-2857-4109-a89b-a83c53d44768 from this chassis (sb_readonly=0)
Oct 14 05:32:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:32:33 np0005486808 nova_compute[259627]: 2025-10-14 09:32:33.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:32:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:32:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:32:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:32:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:32:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:32:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:32:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:32:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:32:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:32:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:32:34 np0005486808 nova_compute[259627]: 2025-10-14 09:32:34.040 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434354.0392473, d1e24a24-daf7-4696-9dd4-ab29df7c8131 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:32:34 np0005486808 nova_compute[259627]: 2025-10-14 09:32:34.040 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] VM Started (Lifecycle Event)
Oct 14 05:32:34 np0005486808 nova_compute[259627]: 2025-10-14 09:32:34.044 2 DEBUG nova.compute.manager [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 05:32:34 np0005486808 nova_compute[259627]: 2025-10-14 09:32:34.049 2 DEBUG nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 05:32:34 np0005486808 nova_compute[259627]: 2025-10-14 09:32:34.054 2 INFO nova.virt.libvirt.driver [-] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Instance spawned successfully.
Oct 14 05:32:34 np0005486808 nova_compute[259627]: 2025-10-14 09:32:34.055 2 DEBUG nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 05:32:34 np0005486808 nova_compute[259627]: 2025-10-14 09:32:34.066 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:32:34 np0005486808 nova_compute[259627]: 2025-10-14 09:32:34.073 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 05:32:34 np0005486808 nova_compute[259627]: 2025-10-14 09:32:34.084 2 DEBUG nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:32:34 np0005486808 nova_compute[259627]: 2025-10-14 09:32:34.085 2 DEBUG nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:32:34 np0005486808 nova_compute[259627]: 2025-10-14 09:32:34.086 2 DEBUG nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:32:34 np0005486808 nova_compute[259627]: 2025-10-14 09:32:34.087 2 DEBUG nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:32:34 np0005486808 nova_compute[259627]: 2025-10-14 09:32:34.087 2 DEBUG nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:32:34 np0005486808 nova_compute[259627]: 2025-10-14 09:32:34.088 2 DEBUG nova.virt.libvirt.driver [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:32:34 np0005486808 nova_compute[259627]: 2025-10-14 09:32:34.094 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 05:32:34 np0005486808 nova_compute[259627]: 2025-10-14 09:32:34.095 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434354.0394695, d1e24a24-daf7-4696-9dd4-ab29df7c8131 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:32:34 np0005486808 nova_compute[259627]: 2025-10-14 09:32:34.095 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] VM Paused (Lifecycle Event)
Oct 14 05:32:34 np0005486808 nova_compute[259627]: 2025-10-14 09:32:34.124 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:32:34 np0005486808 nova_compute[259627]: 2025-10-14 09:32:34.128 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434354.0487576, d1e24a24-daf7-4696-9dd4-ab29df7c8131 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:32:34 np0005486808 nova_compute[259627]: 2025-10-14 09:32:34.129 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] VM Resumed (Lifecycle Event)
Oct 14 05:32:34 np0005486808 nova_compute[259627]: 2025-10-14 09:32:34.149 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:32:34 np0005486808 nova_compute[259627]: 2025-10-14 09:32:34.157 2 INFO nova.compute.manager [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Took 7.12 seconds to spawn the instance on the hypervisor.
Oct 14 05:32:34 np0005486808 nova_compute[259627]: 2025-10-14 09:32:34.157 2 DEBUG nova.compute.manager [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:32:34 np0005486808 nova_compute[259627]: 2025-10-14 09:32:34.159 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 05:32:34 np0005486808 nova_compute[259627]: 2025-10-14 09:32:34.188 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 05:32:34 np0005486808 nova_compute[259627]: 2025-10-14 09:32:34.224 2 INFO nova.compute.manager [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Took 8.18 seconds to build instance.
Oct 14 05:32:34 np0005486808 nova_compute[259627]: 2025-10-14 09:32:34.241 2 DEBUG oslo_concurrency.lockutils [None req-e71ac8e1-d3eb-48cb-9b7a-e18c79c43c7a 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "d1e24a24-daf7-4696-9dd4-ab29df7c8131" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.282s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:32:34 np0005486808 nova_compute[259627]: 2025-10-14 09:32:34.393 2 DEBUG nova.compute.manager [req-a6d52a19-fa5d-4694-8fcb-ed63823d89cb req-4ec8544d-7a80-4176-b9a2-ee53433e9e05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Received event network-vif-plugged-e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:32:34 np0005486808 nova_compute[259627]: 2025-10-14 09:32:34.394 2 DEBUG oslo_concurrency.lockutils [req-a6d52a19-fa5d-4694-8fcb-ed63823d89cb req-4ec8544d-7a80-4176-b9a2-ee53433e9e05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "d1e24a24-daf7-4696-9dd4-ab29df7c8131-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:32:34 np0005486808 nova_compute[259627]: 2025-10-14 09:32:34.394 2 DEBUG oslo_concurrency.lockutils [req-a6d52a19-fa5d-4694-8fcb-ed63823d89cb req-4ec8544d-7a80-4176-b9a2-ee53433e9e05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d1e24a24-daf7-4696-9dd4-ab29df7c8131-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:32:34 np0005486808 nova_compute[259627]: 2025-10-14 09:32:34.395 2 DEBUG oslo_concurrency.lockutils [req-a6d52a19-fa5d-4694-8fcb-ed63823d89cb req-4ec8544d-7a80-4176-b9a2-ee53433e9e05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "d1e24a24-daf7-4696-9dd4-ab29df7c8131-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:32:34 np0005486808 nova_compute[259627]: 2025-10-14 09:32:34.395 2 DEBUG nova.compute.manager [req-a6d52a19-fa5d-4694-8fcb-ed63823d89cb req-4ec8544d-7a80-4176-b9a2-ee53433e9e05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] No waiting events found dispatching network-vif-plugged-e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:32:34 np0005486808 nova_compute[259627]: 2025-10-14 09:32:34.395 2 WARNING nova.compute.manager [req-a6d52a19-fa5d-4694-8fcb-ed63823d89cb req-4ec8544d-7a80-4176-b9a2-ee53433e9e05 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Received unexpected event network-vif-plugged-e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f for instance with vm_state active and task_state None.#033[00m
Oct 14 05:32:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2383: 305 pgs: 305 active+clean; 167 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 1.8 MiB/s wr, 79 op/s
Oct 14 05:32:35 np0005486808 nova_compute[259627]: 2025-10-14 09:32:35.285 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434340.2849784, c9315047-de1c-423a-adfa-118d77df3c94 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:32:35 np0005486808 nova_compute[259627]: 2025-10-14 09:32:35.285 2 INFO nova.compute.manager [-] [instance: c9315047-de1c-423a-adfa-118d77df3c94] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:32:35 np0005486808 nova_compute[259627]: 2025-10-14 09:32:35.303 2 DEBUG nova.compute.manager [None req-38661a3f-1dbd-48cd-adbb-3605b7f2af22 - - - - - -] [instance: c9315047-de1c-423a-adfa-118d77df3c94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:32:35 np0005486808 nova_compute[259627]: 2025-10-14 09:32:35.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:36 np0005486808 nova_compute[259627]: 2025-10-14 09:32:36.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2384: 305 pgs: 305 active+clean; 167 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 136 op/s
Oct 14 05:32:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:32:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2385: 305 pgs: 305 active+clean; 167 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 112 op/s
Oct 14 05:32:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2386: 305 pgs: 305 active+clean; 167 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 14 05:32:40 np0005486808 nova_compute[259627]: 2025-10-14 09:32:40.757 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434345.7567332, b30a994a-5fb7-4344-9944-98d3d75d3b04 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:32:40 np0005486808 nova_compute[259627]: 2025-10-14 09:32:40.758 2 INFO nova.compute.manager [-] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:32:40 np0005486808 nova_compute[259627]: 2025-10-14 09:32:40.789 2 DEBUG nova.compute.manager [None req-10d29829-d4b6-42d3-901a-8949f01790f5 - - - - - -] [instance: b30a994a-5fb7-4344-9944-98d3d75d3b04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:32:40 np0005486808 nova_compute[259627]: 2025-10-14 09:32:40.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:41 np0005486808 nova_compute[259627]: 2025-10-14 09:32:41.039 2 DEBUG nova.compute.manager [req-192c5fc0-a0f6-47e9-8a93-a30c54c0729f req-ef02d29c-e75e-48d4-9c3a-8775c755e224 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Received event network-changed-e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:32:41 np0005486808 nova_compute[259627]: 2025-10-14 09:32:41.039 2 DEBUG nova.compute.manager [req-192c5fc0-a0f6-47e9-8a93-a30c54c0729f req-ef02d29c-e75e-48d4-9c3a-8775c755e224 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Refreshing instance network info cache due to event network-changed-e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:32:41 np0005486808 nova_compute[259627]: 2025-10-14 09:32:41.040 2 DEBUG oslo_concurrency.lockutils [req-192c5fc0-a0f6-47e9-8a93-a30c54c0729f req-ef02d29c-e75e-48d4-9c3a-8775c755e224 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-d1e24a24-daf7-4696-9dd4-ab29df7c8131" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:32:41 np0005486808 nova_compute[259627]: 2025-10-14 09:32:41.040 2 DEBUG oslo_concurrency.lockutils [req-192c5fc0-a0f6-47e9-8a93-a30c54c0729f req-ef02d29c-e75e-48d4-9c3a-8775c755e224 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-d1e24a24-daf7-4696-9dd4-ab29df7c8131" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:32:41 np0005486808 nova_compute[259627]: 2025-10-14 09:32:41.040 2 DEBUG nova.network.neutron [req-192c5fc0-a0f6-47e9-8a93-a30c54c0729f req-ef02d29c-e75e-48d4-9c3a-8775c755e224 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Refreshing network info cache for port e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:32:41 np0005486808 nova_compute[259627]: 2025-10-14 09:32:41.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2387: 305 pgs: 305 active+clean; 167 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 106 op/s
Oct 14 05:32:42 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:42.600 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:32:42 np0005486808 nova_compute[259627]: 2025-10-14 09:32:42.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:32:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:32:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:32:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:32:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:32:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:32:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011079409023572312 of space, bias 1.0, pg target 0.33238227070716936 quantized to 32 (current 32)
Oct 14 05:32:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:32:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:32:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:32:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:32:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:32:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 05:32:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:32:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:32:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:32:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:32:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:32:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:32:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:32:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:32:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:32:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:32:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:32:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:32:43 np0005486808 nova_compute[259627]: 2025-10-14 09:32:43.775 2 DEBUG nova.network.neutron [req-192c5fc0-a0f6-47e9-8a93-a30c54c0729f req-ef02d29c-e75e-48d4-9c3a-8775c755e224 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Updated VIF entry in instance network info cache for port e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:32:43 np0005486808 nova_compute[259627]: 2025-10-14 09:32:43.776 2 DEBUG nova.network.neutron [req-192c5fc0-a0f6-47e9-8a93-a30c54c0729f req-ef02d29c-e75e-48d4-9c3a-8775c755e224 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Updating instance_info_cache with network_info: [{"id": "e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f", "address": "fa:16:3e:24:dd:42", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5c607d0-9f", "ovs_interfaceid": "e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:32:43 np0005486808 nova_compute[259627]: 2025-10-14 09:32:43.800 2 DEBUG oslo_concurrency.lockutils [req-192c5fc0-a0f6-47e9-8a93-a30c54c0729f req-ef02d29c-e75e-48d4-9c3a-8775c755e224 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-d1e24a24-daf7-4696-9dd4-ab29df7c8131" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:32:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2388: 305 pgs: 305 active+clean; 167 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Oct 14 05:32:45 np0005486808 ovn_controller[152662]: 2025-10-14T09:32:45Z|00180|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:24:dd:42 10.100.0.7
Oct 14 05:32:45 np0005486808 ovn_controller[152662]: 2025-10-14T09:32:45Z|00181|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:24:dd:42 10.100.0.7
Oct 14 05:32:45 np0005486808 nova_compute[259627]: 2025-10-14 09:32:45.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:46 np0005486808 nova_compute[259627]: 2025-10-14 09:32:46.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2389: 305 pgs: 305 active+clean; 195 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Oct 14 05:32:46 np0005486808 podman[404277]: 2025-10-14 09:32:46.686307579 +0000 UTC m=+0.090807439 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS)
Oct 14 05:32:46 np0005486808 podman[404278]: 2025-10-14 09:32:46.700100807 +0000 UTC m=+0.093838363 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:32:46 np0005486808 nova_compute[259627]: 2025-10-14 09:32:46.974 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:32:46 np0005486808 nova_compute[259627]: 2025-10-14 09:32:46.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:32:46 np0005486808 nova_compute[259627]: 2025-10-14 09:32:46.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:32:47 np0005486808 nova_compute[259627]: 2025-10-14 09:32:47.007 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:32:47 np0005486808 nova_compute[259627]: 2025-10-14 09:32:47.007 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:32:47 np0005486808 nova_compute[259627]: 2025-10-14 09:32:47.008 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:32:47 np0005486808 nova_compute[259627]: 2025-10-14 09:32:47.008 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:32:47 np0005486808 nova_compute[259627]: 2025-10-14 09:32:47.008 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:32:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:32:47 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2698932554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:32:47 np0005486808 nova_compute[259627]: 2025-10-14 09:32:47.453 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:32:47 np0005486808 nova_compute[259627]: 2025-10-14 09:32:47.568 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:32:47 np0005486808 nova_compute[259627]: 2025-10-14 09:32:47.568 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:32:47 np0005486808 nova_compute[259627]: 2025-10-14 09:32:47.576 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:32:47 np0005486808 nova_compute[259627]: 2025-10-14 09:32:47.577 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:32:47 np0005486808 nova_compute[259627]: 2025-10-14 09:32:47.879 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:32:47 np0005486808 nova_compute[259627]: 2025-10-14 09:32:47.881 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3194MB free_disk=59.89793395996094GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:32:47 np0005486808 nova_compute[259627]: 2025-10-14 09:32:47.883 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:32:47 np0005486808 nova_compute[259627]: 2025-10-14 09:32:47.884 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:32:47 np0005486808 nova_compute[259627]: 2025-10-14 09:32:47.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:48 np0005486808 nova_compute[259627]: 2025-10-14 09:32:48.022 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:32:48 np0005486808 nova_compute[259627]: 2025-10-14 09:32:48.023 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance d1e24a24-daf7-4696-9dd4-ab29df7c8131 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:32:48 np0005486808 nova_compute[259627]: 2025-10-14 09:32:48.023 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:32:48 np0005486808 nova_compute[259627]: 2025-10-14 09:32:48.023 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:32:48 np0005486808 nova_compute[259627]: 2025-10-14 09:32:48.079 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:32:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:32:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2390: 305 pgs: 305 active+clean; 195 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 814 KiB/s rd, 2.1 MiB/s wr, 72 op/s
Oct 14 05:32:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:32:48 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/493114329' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:32:48 np0005486808 nova_compute[259627]: 2025-10-14 09:32:48.544 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:32:48 np0005486808 nova_compute[259627]: 2025-10-14 09:32:48.553 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:32:48 np0005486808 nova_compute[259627]: 2025-10-14 09:32:48.574 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:32:48 np0005486808 nova_compute[259627]: 2025-10-14 09:32:48.601 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:32:48 np0005486808 nova_compute[259627]: 2025-10-14 09:32:48.601 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:32:49 np0005486808 nova_compute[259627]: 2025-10-14 09:32:49.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:49 np0005486808 nova_compute[259627]: 2025-10-14 09:32:49.601 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:32:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2391: 305 pgs: 305 active+clean; 200 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 834 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Oct 14 05:32:50 np0005486808 nova_compute[259627]: 2025-10-14 09:32:50.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:50 np0005486808 nova_compute[259627]: 2025-10-14 09:32:50.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:32:50 np0005486808 nova_compute[259627]: 2025-10-14 09:32:50.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:32:51 np0005486808 nova_compute[259627]: 2025-10-14 09:32:51.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:51 np0005486808 nova_compute[259627]: 2025-10-14 09:32:51.726 2 DEBUG oslo_concurrency.lockutils [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "d1e24a24-daf7-4696-9dd4-ab29df7c8131" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:32:51 np0005486808 nova_compute[259627]: 2025-10-14 09:32:51.726 2 DEBUG oslo_concurrency.lockutils [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "d1e24a24-daf7-4696-9dd4-ab29df7c8131" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:32:51 np0005486808 nova_compute[259627]: 2025-10-14 09:32:51.727 2 DEBUG oslo_concurrency.lockutils [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "d1e24a24-daf7-4696-9dd4-ab29df7c8131-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:32:51 np0005486808 nova_compute[259627]: 2025-10-14 09:32:51.727 2 DEBUG oslo_concurrency.lockutils [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "d1e24a24-daf7-4696-9dd4-ab29df7c8131-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:32:51 np0005486808 nova_compute[259627]: 2025-10-14 09:32:51.727 2 DEBUG oslo_concurrency.lockutils [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "d1e24a24-daf7-4696-9dd4-ab29df7c8131-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:32:51 np0005486808 nova_compute[259627]: 2025-10-14 09:32:51.729 2 INFO nova.compute.manager [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Terminating instance#033[00m
Oct 14 05:32:51 np0005486808 nova_compute[259627]: 2025-10-14 09:32:51.730 2 DEBUG nova.compute.manager [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:32:51 np0005486808 kernel: tape5c607d0-9f (unregistering): left promiscuous mode
Oct 14 05:32:51 np0005486808 NetworkManager[44885]: <info>  [1760434371.7856] device (tape5c607d0-9f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:32:51 np0005486808 ovn_controller[152662]: 2025-10-14T09:32:51Z|01519|binding|INFO|Releasing lport e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f from this chassis (sb_readonly=0)
Oct 14 05:32:51 np0005486808 ovn_controller[152662]: 2025-10-14T09:32:51Z|01520|binding|INFO|Setting lport e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f down in Southbound
Oct 14 05:32:51 np0005486808 ovn_controller[152662]: 2025-10-14T09:32:51Z|01521|binding|INFO|Removing iface tape5c607d0-9f ovn-installed in OVS
Oct 14 05:32:51 np0005486808 nova_compute[259627]: 2025-10-14 09:32:51.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:51.855 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:dd:42 10.100.0.7'], port_security=['fa:16:3e:24:dd:42 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd1e24a24-daf7-4696-9dd4-ab29df7c8131', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f754bf649a2404fa8dee732f5aab36e', 'neutron:revision_number': '5', 'neutron:security_group_ids': '2f6b00d2-a7f2-4b95-9366-b1ecefd0105d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02951490-3f50-4bba-9cc5-21d9c0a6e4ba, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:32:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:51.856 162547 INFO neutron.agent.ovn.metadata.agent [-] Port e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f in datapath 026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540 unbound from our chassis#033[00m
Oct 14 05:32:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:51.858 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540#033[00m
Oct 14 05:32:51 np0005486808 nova_compute[259627]: 2025-10-14 09:32:51.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:51.875 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6ce84b87-7f41-4b59-bdc2-84c5ab6e4b6e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:51.912 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[bbb3d753-8b0d-4eb4-be9e-9d677f7b2f80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:51.915 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[07a19afb-6660-4fec-b385-0118a77bc9dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:51 np0005486808 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d0000008d.scope: Deactivated successfully.
Oct 14 05:32:51 np0005486808 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d0000008d.scope: Consumed 13.202s CPU time.
Oct 14 05:32:51 np0005486808 systemd-machined[214636]: Machine qemu-174-instance-0000008d terminated.
Oct 14 05:32:51 np0005486808 nova_compute[259627]: 2025-10-14 09:32:51.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:51.954 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a2ef46aa-3f8f-45be-a592-d6dcd448dded]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:51 np0005486808 nova_compute[259627]: 2025-10-14 09:32:51.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:51 np0005486808 nova_compute[259627]: 2025-10-14 09:32:51.969 2 INFO nova.virt.libvirt.driver [-] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Instance destroyed successfully.#033[00m
Oct 14 05:32:51 np0005486808 nova_compute[259627]: 2025-10-14 09:32:51.969 2 DEBUG nova.objects.instance [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'resources' on Instance uuid d1e24a24-daf7-4696-9dd4-ab29df7c8131 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:32:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:51.974 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b17075c3-68d0-4cf2-92de-65258bf5f671]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap026a2ce2-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:95:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 424], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 809945, 'reachable_time': 23632, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 404380, 'error': None, 'target': 'ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:51 np0005486808 nova_compute[259627]: 2025-10-14 09:32:51.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:32:51 np0005486808 nova_compute[259627]: 2025-10-14 09:32:51.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:32:51 np0005486808 nova_compute[259627]: 2025-10-14 09:32:51.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 05:32:51 np0005486808 nova_compute[259627]: 2025-10-14 09:32:51.985 2 DEBUG nova.virt.libvirt.vif [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:32:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-gen-1-1315509893',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-gen-1-1315509893',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ge',id=141,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFZgUpzEuWQ2yRKeeGP5cwEpEG8ix10coU0eNExva0wfqy7K02GmuQxBgal8BWbrr6/lfxNLSNkfNKLH91c4JJNbWrJCy5K0gCADFvn2wBXfeHIu8FO18gqCQFIgsuofuA==',key_name='tempest-TestSecurityGroupsBasicOps-135749728',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:32:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-dw48ywqx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:32:34Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=d1e24a24-daf7-4696-9dd4-ab29df7c8131,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f", "address": "fa:16:3e:24:dd:42", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5c607d0-9f", "ovs_interfaceid": "e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:32:51 np0005486808 nova_compute[259627]: 2025-10-14 09:32:51.986 2 DEBUG nova.network.os_vif_util [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f", "address": "fa:16:3e:24:dd:42", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5c607d0-9f", "ovs_interfaceid": "e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:32:51 np0005486808 nova_compute[259627]: 2025-10-14 09:32:51.986 2 DEBUG nova.network.os_vif_util [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:24:dd:42,bridge_name='br-int',has_traffic_filtering=True,id=e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f,network=Network(026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5c607d0-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:32:51 np0005486808 nova_compute[259627]: 2025-10-14 09:32:51.987 2 DEBUG os_vif [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:dd:42,bridge_name='br-int',has_traffic_filtering=True,id=e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f,network=Network(026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5c607d0-9f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:32:51 np0005486808 nova_compute[259627]: 2025-10-14 09:32:51.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:51 np0005486808 nova_compute[259627]: 2025-10-14 09:32:51.989 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape5c607d0-9f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:32:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:51.988 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3cbdfdcd-df68-481d-a7e6-bfb9904af13a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap026a2ce2-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 809957, 'tstamp': 809957}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 404386, 'error': None, 'target': 'ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap026a2ce2-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 809959, 'tstamp': 809959}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 404386, 'error': None, 'target': 'ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:51.989 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap026a2ce2-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:32:51 np0005486808 nova_compute[259627]: 2025-10-14 09:32:51.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:32:51 np0005486808 nova_compute[259627]: 2025-10-14 09:32:51.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:51 np0005486808 nova_compute[259627]: 2025-10-14 09:32:51.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:51.993 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap026a2ce2-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:32:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:51.994 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:32:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:51.994 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap026a2ce2-40, col_values=(('external_ids', {'iface-id': '7f009d61-2857-4109-a89b-a83c53d44768'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:32:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:51.994 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:32:51 np0005486808 nova_compute[259627]: 2025-10-14 09:32:51.996 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Oct 14 05:32:52 np0005486808 nova_compute[259627]: 2025-10-14 09:32:52.002 2 INFO os_vif [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:dd:42,bridge_name='br-int',has_traffic_filtering=True,id=e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f,network=Network(026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5c607d0-9f')
Oct 14 05:32:52 np0005486808 nova_compute[259627]: 2025-10-14 09:32:52.246 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 05:32:52 np0005486808 nova_compute[259627]: 2025-10-14 09:32:52.247 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 05:32:52 np0005486808 nova_compute[259627]: 2025-10-14 09:32:52.247 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 14 05:32:52 np0005486808 nova_compute[259627]: 2025-10-14 09:32:52.247 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:32:52 np0005486808 nova_compute[259627]: 2025-10-14 09:32:52.381 2 INFO nova.virt.libvirt.driver [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Deleting instance files /var/lib/nova/instances/d1e24a24-daf7-4696-9dd4-ab29df7c8131_del
Oct 14 05:32:52 np0005486808 nova_compute[259627]: 2025-10-14 09:32:52.382 2 INFO nova.virt.libvirt.driver [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Deletion of /var/lib/nova/instances/d1e24a24-daf7-4696-9dd4-ab29df7c8131_del complete
Oct 14 05:32:52 np0005486808 nova_compute[259627]: 2025-10-14 09:32:52.436 2 INFO nova.compute.manager [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Took 0.71 seconds to destroy the instance on the hypervisor.
Oct 14 05:32:52 np0005486808 nova_compute[259627]: 2025-10-14 09:32:52.437 2 DEBUG oslo.service.loopingcall [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 05:32:52 np0005486808 nova_compute[259627]: 2025-10-14 09:32:52.437 2 DEBUG nova.compute.manager [-] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 05:32:52 np0005486808 nova_compute[259627]: 2025-10-14 09:32:52.438 2 DEBUG nova.network.neutron [-] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 05:32:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2392: 305 pgs: 305 active+clean; 200 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 05:32:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:32:53 np0005486808 nova_compute[259627]: 2025-10-14 09:32:53.426 2 DEBUG nova.network.neutron [-] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 05:32:53 np0005486808 nova_compute[259627]: 2025-10-14 09:32:53.455 2 INFO nova.compute.manager [-] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Took 1.02 seconds to deallocate network for instance.
Oct 14 05:32:53 np0005486808 nova_compute[259627]: 2025-10-14 09:32:53.507 2 DEBUG oslo_concurrency.lockutils [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:32:53 np0005486808 nova_compute[259627]: 2025-10-14 09:32:53.508 2 DEBUG oslo_concurrency.lockutils [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:32:53 np0005486808 nova_compute[259627]: 2025-10-14 09:32:53.546 2 DEBUG nova.compute.manager [req-ed3af7d9-409f-4fbe-8ad3-b0e87dd34c61 req-bbb3134f-96e9-49d0-a127-6b026cfe78a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Received event network-vif-deleted-e5c607d0-9f66-4c2a-8f6c-4b8e27296f2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:32:53 np0005486808 nova_compute[259627]: 2025-10-14 09:32:53.576 2 DEBUG oslo_concurrency.processutils [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:32:54 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:32:54 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/110739931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:32:54 np0005486808 nova_compute[259627]: 2025-10-14 09:32:54.061 2 DEBUG oslo_concurrency.processutils [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:32:54 np0005486808 nova_compute[259627]: 2025-10-14 09:32:54.069 2 DEBUG nova.compute.provider_tree [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 05:32:54 np0005486808 nova_compute[259627]: 2025-10-14 09:32:54.093 2 DEBUG nova.scheduler.client.report [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 05:32:54 np0005486808 nova_compute[259627]: 2025-10-14 09:32:54.125 2 DEBUG oslo_concurrency.lockutils [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:32:54 np0005486808 nova_compute[259627]: 2025-10-14 09:32:54.162 2 INFO nova.scheduler.client.report [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Deleted allocations for instance d1e24a24-daf7-4696-9dd4-ab29df7c8131
Oct 14 05:32:54 np0005486808 nova_compute[259627]: 2025-10-14 09:32:54.243 2 DEBUG oslo_concurrency.lockutils [None req-c916f2d5-afa8-4904-8bcd-03d6127121d7 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "d1e24a24-daf7-4696-9dd4-ab29df7c8131" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.517s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:32:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2393: 305 pgs: 305 active+clean; 200 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 05:32:55 np0005486808 nova_compute[259627]: 2025-10-14 09:32:55.103 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Updating instance_info_cache with network_info: [{"id": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "address": "fa:16:3e:66:f3:32", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02b93e6c-e8", "ovs_interfaceid": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 05:32:55 np0005486808 nova_compute[259627]: 2025-10-14 09:32:55.134 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 05:32:55 np0005486808 nova_compute[259627]: 2025-10-14 09:32:55.134 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 14 05:32:55 np0005486808 nova_compute[259627]: 2025-10-14 09:32:55.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:32:55 np0005486808 nova_compute[259627]: 2025-10-14 09:32:55.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:32:55 np0005486808 nova_compute[259627]: 2025-10-14 09:32:55.937 2 DEBUG nova.compute.manager [req-1829da97-5122-4ee0-b552-deebf71ba450 req-032b8c54-d0c7-4280-8f97-7a94ff46fd88 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Received event network-changed-02b93e6c-e8fd-4ab1-bd57-84775ae34da2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:32:55 np0005486808 nova_compute[259627]: 2025-10-14 09:32:55.938 2 DEBUG nova.compute.manager [req-1829da97-5122-4ee0-b552-deebf71ba450 req-032b8c54-d0c7-4280-8f97-7a94ff46fd88 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Refreshing instance network info cache due to event network-changed-02b93e6c-e8fd-4ab1-bd57-84775ae34da2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 05:32:55 np0005486808 nova_compute[259627]: 2025-10-14 09:32:55.938 2 DEBUG oslo_concurrency.lockutils [req-1829da97-5122-4ee0-b552-deebf71ba450 req-032b8c54-d0c7-4280-8f97-7a94ff46fd88 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 05:32:55 np0005486808 nova_compute[259627]: 2025-10-14 09:32:55.939 2 DEBUG oslo_concurrency.lockutils [req-1829da97-5122-4ee0-b552-deebf71ba450 req-032b8c54-d0c7-4280-8f97-7a94ff46fd88 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 05:32:55 np0005486808 nova_compute[259627]: 2025-10-14 09:32:55.939 2 DEBUG nova.network.neutron [req-1829da97-5122-4ee0-b552-deebf71ba450 req-032b8c54-d0c7-4280-8f97-7a94ff46fd88 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Refreshing network info cache for port 02b93e6c-e8fd-4ab1-bd57-84775ae34da2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 05:32:55 np0005486808 nova_compute[259627]: 2025-10-14 09:32:55.942 2 DEBUG oslo_concurrency.lockutils [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:32:55 np0005486808 nova_compute[259627]: 2025-10-14 09:32:55.942 2 DEBUG oslo_concurrency.lockutils [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:32:55 np0005486808 nova_compute[259627]: 2025-10-14 09:32:55.943 2 DEBUG oslo_concurrency.lockutils [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:32:55 np0005486808 nova_compute[259627]: 2025-10-14 09:32:55.943 2 DEBUG oslo_concurrency.lockutils [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:32:55 np0005486808 nova_compute[259627]: 2025-10-14 09:32:55.944 2 DEBUG oslo_concurrency.lockutils [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:32:55 np0005486808 nova_compute[259627]: 2025-10-14 09:32:55.945 2 INFO nova.compute.manager [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Terminating instance
Oct 14 05:32:55 np0005486808 nova_compute[259627]: 2025-10-14 09:32:55.947 2 DEBUG nova.compute.manager [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 14 05:32:56 np0005486808 kernel: tap02b93e6c-e8 (unregistering): left promiscuous mode
Oct 14 05:32:56 np0005486808 NetworkManager[44885]: <info>  [1760434376.0212] device (tap02b93e6c-e8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:32:56 np0005486808 ovn_controller[152662]: 2025-10-14T09:32:56Z|01522|binding|INFO|Releasing lport 02b93e6c-e8fd-4ab1-bd57-84775ae34da2 from this chassis (sb_readonly=0)
Oct 14 05:32:56 np0005486808 ovn_controller[152662]: 2025-10-14T09:32:56Z|01523|binding|INFO|Setting lport 02b93e6c-e8fd-4ab1-bd57-84775ae34da2 down in Southbound
Oct 14 05:32:56 np0005486808 nova_compute[259627]: 2025-10-14 09:32:56.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:32:56 np0005486808 ovn_controller[152662]: 2025-10-14T09:32:56Z|01524|binding|INFO|Removing iface tap02b93e6c-e8 ovn-installed in OVS
Oct 14 05:32:56 np0005486808 nova_compute[259627]: 2025-10-14 09:32:56.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:32:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:56.050 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:f3:32 10.100.0.5'], port_security=['fa:16:3e:66:f3:32 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f754bf649a2404fa8dee732f5aab36e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '62ae45e1-2406-4d0e-83db-65cdec8c0b6f 6ff9262f-239c-4772-9987-411eb120736a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02951490-3f50-4bba-9cc5-21d9c0a6e4ba, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=02b93e6c-e8fd-4ab1-bd57-84775ae34da2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 05:32:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:56.051 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 02b93e6c-e8fd-4ab1-bd57-84775ae34da2 in datapath 026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540 unbound from our chassis
Oct 14 05:32:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:56.054 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 05:32:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:56.055 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9aac48c0-7d05-4faf-833d-42a5f5c302ef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:32:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:56.056 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540 namespace which is not needed anymore
Oct 14 05:32:56 np0005486808 nova_compute[259627]: 2025-10-14 09:32:56.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:32:56 np0005486808 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d0000008c.scope: Deactivated successfully.
Oct 14 05:32:56 np0005486808 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d0000008c.scope: Consumed 14.488s CPU time.
Oct 14 05:32:56 np0005486808 systemd-machined[214636]: Machine qemu-173-instance-0000008c terminated.
Oct 14 05:32:56 np0005486808 nova_compute[259627]: 2025-10-14 09:32:56.188 2 INFO nova.virt.libvirt.driver [-] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Instance destroyed successfully.
Oct 14 05:32:56 np0005486808 nova_compute[259627]: 2025-10-14 09:32:56.189 2 DEBUG nova.objects.instance [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lazy-loading 'resources' on Instance uuid 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:32:56 np0005486808 nova_compute[259627]: 2025-10-14 09:32:56.204 2 DEBUG nova.virt.libvirt.vif [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:31:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-1433018777',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1327646173-access_point-1433018777',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1327646173-ac',id=140,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFZgUpzEuWQ2yRKeeGP5cwEpEG8ix10coU0eNExva0wfqy7K02GmuQxBgal8BWbrr6/lfxNLSNkfNKLH91c4JJNbWrJCy5K0gCADFvn2wBXfeHIu8FO18gqCQFIgsuofuA==',key_name='tempest-TestSecurityGroupsBasicOps-135749728',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:32:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f754bf649a2404fa8dee732f5aab36e',ramdisk_id='',reservation_id='r-cth66w0f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1327646173',owner_user_name='tempest-TestSecurityGroupsBasicOps-1327646173-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:32:01Z,user_data=None,user_id='20f3546ab30e42b5b641f67780316750',uuid=8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "address": "fa:16:3e:66:f3:32", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02b93e6c-e8", "ovs_interfaceid": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 05:32:56 np0005486808 nova_compute[259627]: 2025-10-14 09:32:56.205 2 DEBUG nova.network.os_vif_util [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converting VIF {"id": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "address": "fa:16:3e:66:f3:32", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02b93e6c-e8", "ovs_interfaceid": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 05:32:56 np0005486808 nova_compute[259627]: 2025-10-14 09:32:56.205 2 DEBUG nova.network.os_vif_util [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:66:f3:32,bridge_name='br-int',has_traffic_filtering=True,id=02b93e6c-e8fd-4ab1-bd57-84775ae34da2,network=Network(026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02b93e6c-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 05:32:56 np0005486808 nova_compute[259627]: 2025-10-14 09:32:56.206 2 DEBUG os_vif [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:66:f3:32,bridge_name='br-int',has_traffic_filtering=True,id=02b93e6c-e8fd-4ab1-bd57-84775ae34da2,network=Network(026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02b93e6c-e8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 05:32:56 np0005486808 nova_compute[259627]: 2025-10-14 09:32:56.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:32:56 np0005486808 nova_compute[259627]: 2025-10-14 09:32:56.208 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02b93e6c-e8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:32:56 np0005486808 nova_compute[259627]: 2025-10-14 09:32:56.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:32:56 np0005486808 nova_compute[259627]: 2025-10-14 09:32:56.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:32:56 np0005486808 nova_compute[259627]: 2025-10-14 09:32:56.212 2 INFO os_vif [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:66:f3:32,bridge_name='br-int',has_traffic_filtering=True,id=02b93e6c-e8fd-4ab1-bd57-84775ae34da2,network=Network(026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02b93e6c-e8')
Oct 14 05:32:56 np0005486808 neutron-haproxy-ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540[402631]: [NOTICE]   (402635) : haproxy version is 2.8.14-c23fe91
Oct 14 05:32:56 np0005486808 neutron-haproxy-ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540[402631]: [NOTICE]   (402635) : path to executable is /usr/sbin/haproxy
Oct 14 05:32:56 np0005486808 neutron-haproxy-ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540[402631]: [WARNING]  (402635) : Exiting Master process...
Oct 14 05:32:56 np0005486808 neutron-haproxy-ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540[402631]: [ALERT]    (402635) : Current worker (402637) exited with code 143 (Terminated)
Oct 14 05:32:56 np0005486808 neutron-haproxy-ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540[402631]: [WARNING]  (402635) : All workers exited. Exiting... (0)
Oct 14 05:32:56 np0005486808 systemd[1]: libpod-1bff649b02a538e677b02569c6d3e618fdb5dbfadde55e1db76e9948bb728f7c.scope: Deactivated successfully.
Oct 14 05:32:56 np0005486808 podman[404456]: 2025-10-14 09:32:56.239578374 +0000 UTC m=+0.064008971 container died 1bff649b02a538e677b02569c6d3e618fdb5dbfadde55e1db76e9948bb728f7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 05:32:56 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1bff649b02a538e677b02569c6d3e618fdb5dbfadde55e1db76e9948bb728f7c-userdata-shm.mount: Deactivated successfully.
Oct 14 05:32:56 np0005486808 systemd[1]: var-lib-containers-storage-overlay-cfade0dc0f4b5a18f9357cc84a838a56c45c2e6cf4ac2c1e6656f3fc9a57ba4b-merged.mount: Deactivated successfully.
Oct 14 05:32:56 np0005486808 podman[404456]: 2025-10-14 09:32:56.288400822 +0000 UTC m=+0.112831419 container cleanup 1bff649b02a538e677b02569c6d3e618fdb5dbfadde55e1db76e9948bb728f7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, io.buildah.version=1.41.3)
Oct 14 05:32:56 np0005486808 systemd[1]: libpod-conmon-1bff649b02a538e677b02569c6d3e618fdb5dbfadde55e1db76e9948bb728f7c.scope: Deactivated successfully.
Oct 14 05:32:56 np0005486808 podman[404513]: 2025-10-14 09:32:56.365464814 +0000 UTC m=+0.046860241 container remove 1bff649b02a538e677b02569c6d3e618fdb5dbfadde55e1db76e9948bb728f7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 05:32:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:56.371 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a8f705e2-c0ab-4df1-8be0-7f3c93d675b2]: (4, ('Tue Oct 14 09:32:56 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540 (1bff649b02a538e677b02569c6d3e618fdb5dbfadde55e1db76e9948bb728f7c)\n1bff649b02a538e677b02569c6d3e618fdb5dbfadde55e1db76e9948bb728f7c\nTue Oct 14 09:32:56 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540 (1bff649b02a538e677b02569c6d3e618fdb5dbfadde55e1db76e9948bb728f7c)\n1bff649b02a538e677b02569c6d3e618fdb5dbfadde55e1db76e9948bb728f7c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:56.373 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1ab97edc-556d-4051-8b19-aaae2dc551f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:56.374 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap026a2ce2-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:32:56 np0005486808 nova_compute[259627]: 2025-10-14 09:32:56.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:56 np0005486808 kernel: tap026a2ce2-40: left promiscuous mode
Oct 14 05:32:56 np0005486808 nova_compute[259627]: 2025-10-14 09:32:56.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:32:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:56.396 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d21a3d76-895e-421c-9436-d26a85c1cc72]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:56.425 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4f902801-9556-4424-8db1-919bfee81da6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:56.427 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c2af1663-3998-4639-ad4d-cece94e45763]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:56.445 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[53639631-9245-45da-b917-ef5117987161]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 809937, 'reachable_time': 32312, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 404528, 'error': None, 'target': 'ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:56 np0005486808 systemd[1]: run-netns-ovnmeta\x2d026a2ce2\x2d4bce\x2d4cf4\x2da66d\x2dfd1bd5dbd540.mount: Deactivated successfully.
Oct 14 05:32:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:56.450 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:32:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:56.450 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[d89cd47b-aad6-43cf-81ac-77d06feee932]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2394: 305 pgs: 305 active+clean; 121 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 14 05:32:56 np0005486808 nova_compute[259627]: 2025-10-14 09:32:56.620 2 INFO nova.virt.libvirt.driver [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Deleting instance files /var/lib/nova/instances/8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0_del#033[00m
Oct 14 05:32:56 np0005486808 nova_compute[259627]: 2025-10-14 09:32:56.621 2 INFO nova.virt.libvirt.driver [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Deletion of /var/lib/nova/instances/8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0_del complete#033[00m
Oct 14 05:32:56 np0005486808 nova_compute[259627]: 2025-10-14 09:32:56.682 2 INFO nova.compute.manager [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:32:56 np0005486808 nova_compute[259627]: 2025-10-14 09:32:56.682 2 DEBUG oslo.service.loopingcall [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:32:56 np0005486808 nova_compute[259627]: 2025-10-14 09:32:56.683 2 DEBUG nova.compute.manager [-] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:32:56 np0005486808 nova_compute[259627]: 2025-10-14 09:32:56.683 2 DEBUG nova.network.neutron [-] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:32:56 np0005486808 nova_compute[259627]: 2025-10-14 09:32:56.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:32:56 np0005486808 nova_compute[259627]: 2025-10-14 09:32:56.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:32:57 np0005486808 nova_compute[259627]: 2025-10-14 09:32:57.412 2 DEBUG nova.network.neutron [-] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:32:57 np0005486808 nova_compute[259627]: 2025-10-14 09:32:57.434 2 INFO nova.compute.manager [-] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Took 0.75 seconds to deallocate network for instance.#033[00m
Oct 14 05:32:57 np0005486808 nova_compute[259627]: 2025-10-14 09:32:57.488 2 DEBUG nova.compute.manager [req-1d019f0c-8cdd-47bf-9343-564ccec2cb37 req-dddd2e06-aba0-4e1a-876e-e2d452318112 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Received event network-vif-deleted-02b93e6c-e8fd-4ab1-bd57-84775ae34da2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:32:57 np0005486808 nova_compute[259627]: 2025-10-14 09:32:57.506 2 DEBUG oslo_concurrency.lockutils [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:32:57 np0005486808 nova_compute[259627]: 2025-10-14 09:32:57.506 2 DEBUG oslo_concurrency.lockutils [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:32:57 np0005486808 nova_compute[259627]: 2025-10-14 09:32:57.547 2 DEBUG oslo_concurrency.processutils [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:32:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:57.866 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:84:37 2001:db8:0:1:f816:3eff:fea4:8437 2001:db8::f816:3eff:fea4:8437'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fea4:8437/64 2001:db8::f816:3eff:fea4:8437/64', 'neutron:device_id': 'ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3ca3a81-ba03-43af-8eb7-2462170c9d43', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6de0556a-f319-4356-b742-06d48d854bbd, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a176eb2a-6fbd-4b8e-90b2-85a86523eb62) old=Port_Binding(mac=['fa:16:3e:a4:84:37 2001:db8::f816:3eff:fea4:8437'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fea4:8437/64', 'neutron:device_id': 'ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3ca3a81-ba03-43af-8eb7-2462170c9d43', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 
'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:32:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:57.868 162547 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a176eb2a-6fbd-4b8e-90b2-85a86523eb62 in datapath a3ca3a81-ba03-43af-8eb7-2462170c9d43 updated#033[00m
Oct 14 05:32:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:57.869 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a3ca3a81-ba03-43af-8eb7-2462170c9d43, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:32:57 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:32:57.870 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bfd0f392-bf39-4977-af67-9507f90baaef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:32:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:32:57 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2496263740' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:32:57 np0005486808 nova_compute[259627]: 2025-10-14 09:32:57.987 2 DEBUG oslo_concurrency.processutils [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:32:57 np0005486808 nova_compute[259627]: 2025-10-14 09:32:57.994 2 DEBUG nova.compute.provider_tree [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:32:58 np0005486808 nova_compute[259627]: 2025-10-14 09:32:57.999 2 DEBUG nova.network.neutron [req-1829da97-5122-4ee0-b552-deebf71ba450 req-032b8c54-d0c7-4280-8f97-7a94ff46fd88 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Updated VIF entry in instance network info cache for port 02b93e6c-e8fd-4ab1-bd57-84775ae34da2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:32:58 np0005486808 nova_compute[259627]: 2025-10-14 09:32:58.000 2 DEBUG nova.network.neutron [req-1829da97-5122-4ee0-b552-deebf71ba450 req-032b8c54-d0c7-4280-8f97-7a94ff46fd88 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Updating instance_info_cache with network_info: [{"id": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "address": "fa:16:3e:66:f3:32", "network": {"id": "026a2ce2-4bce-4cf4-a66d-fd1bd5dbd540", "bridge": "br-int", "label": "tempest-network-smoke--1967998038", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f754bf649a2404fa8dee732f5aab36e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02b93e6c-e8", "ovs_interfaceid": "02b93e6c-e8fd-4ab1-bd57-84775ae34da2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:32:58 np0005486808 nova_compute[259627]: 2025-10-14 09:32:58.014 2 DEBUG nova.scheduler.client.report [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:32:58 np0005486808 nova_compute[259627]: 2025-10-14 09:32:58.019 2 DEBUG oslo_concurrency.lockutils [req-1829da97-5122-4ee0-b552-deebf71ba450 req-032b8c54-d0c7-4280-8f97-7a94ff46fd88 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:32:58 np0005486808 nova_compute[259627]: 2025-10-14 09:32:58.033 2 DEBUG oslo_concurrency.lockutils [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.527s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:32:58 np0005486808 nova_compute[259627]: 2025-10-14 09:32:58.056 2 INFO nova.scheduler.client.report [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Deleted allocations for instance 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0#033[00m
Oct 14 05:32:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:32:58 np0005486808 nova_compute[259627]: 2025-10-14 09:32:58.123 2 DEBUG oslo_concurrency.lockutils [None req-1fb79e7c-d41c-4bdc-8220-6c3b0a8ed083 20f3546ab30e42b5b641f67780316750 5f754bf649a2404fa8dee732f5aab36e - - default default] Lock "8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:32:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2395: 305 pgs: 305 active+clean; 121 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 73 KiB/s wr, 37 op/s
Oct 14 05:32:59 np0005486808 podman[404553]: 2025-10-14 09:32:59.684944543 +0000 UTC m=+0.081850300 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 14 05:32:59 np0005486808 podman[404552]: 2025-10-14 09:32:59.699294625 +0000 UTC m=+0.110828481 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:33:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2396: 305 pgs: 305 active+clean; 79 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 74 KiB/s wr, 53 op/s
Oct 14 05:33:00 np0005486808 nova_compute[259627]: 2025-10-14 09:33:00.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:01 np0005486808 nova_compute[259627]: 2025-10-14 09:33:01.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:01 np0005486808 nova_compute[259627]: 2025-10-14 09:33:01.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:01 np0005486808 nova_compute[259627]: 2025-10-14 09:33:01.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2397: 305 pgs: 305 active+clean; 41 MiB data, 921 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 15 KiB/s wr, 56 op/s
Oct 14 05:33:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:33:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:33:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:33:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:33:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:33:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:33:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:33:03 np0005486808 nova_compute[259627]: 2025-10-14 09:33:03.674 2 DEBUG oslo_concurrency.lockutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "aa3e17be-c995-4cab-b209-1eadaaff1634" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:33:03 np0005486808 nova_compute[259627]: 2025-10-14 09:33:03.675 2 DEBUG oslo_concurrency.lockutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:33:03 np0005486808 nova_compute[259627]: 2025-10-14 09:33:03.692 2 DEBUG nova.compute.manager [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:33:03 np0005486808 nova_compute[259627]: 2025-10-14 09:33:03.776 2 DEBUG oslo_concurrency.lockutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:33:03 np0005486808 nova_compute[259627]: 2025-10-14 09:33:03.777 2 DEBUG oslo_concurrency.lockutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:33:03 np0005486808 nova_compute[259627]: 2025-10-14 09:33:03.786 2 DEBUG nova.virt.hardware [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:33:03 np0005486808 nova_compute[259627]: 2025-10-14 09:33:03.787 2 INFO nova.compute.claims [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:33:03 np0005486808 nova_compute[259627]: 2025-10-14 09:33:03.932 2 DEBUG oslo_concurrency.processutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:33:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:33:04 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3100251005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:33:04 np0005486808 nova_compute[259627]: 2025-10-14 09:33:04.430 2 DEBUG oslo_concurrency.processutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:33:04 np0005486808 nova_compute[259627]: 2025-10-14 09:33:04.440 2 DEBUG nova.compute.provider_tree [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:33:04 np0005486808 nova_compute[259627]: 2025-10-14 09:33:04.457 2 DEBUG nova.scheduler.client.report [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:33:04 np0005486808 nova_compute[259627]: 2025-10-14 09:33:04.487 2 DEBUG oslo_concurrency.lockutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:33:04 np0005486808 nova_compute[259627]: 2025-10-14 09:33:04.489 2 DEBUG nova.compute.manager [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:33:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2398: 305 pgs: 305 active+clean; 41 MiB data, 921 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 15 KiB/s wr, 56 op/s
Oct 14 05:33:04 np0005486808 nova_compute[259627]: 2025-10-14 09:33:04.551 2 DEBUG nova.compute.manager [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:33:04 np0005486808 nova_compute[259627]: 2025-10-14 09:33:04.552 2 DEBUG nova.network.neutron [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:33:04 np0005486808 nova_compute[259627]: 2025-10-14 09:33:04.575 2 INFO nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:33:04 np0005486808 nova_compute[259627]: 2025-10-14 09:33:04.599 2 DEBUG nova.compute.manager [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:33:04 np0005486808 nova_compute[259627]: 2025-10-14 09:33:04.812 2 DEBUG nova.compute.manager [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:33:04 np0005486808 nova_compute[259627]: 2025-10-14 09:33:04.814 2 DEBUG nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:33:04 np0005486808 nova_compute[259627]: 2025-10-14 09:33:04.815 2 INFO nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Creating image(s)#033[00m
Oct 14 05:33:04 np0005486808 nova_compute[259627]: 2025-10-14 09:33:04.849 2 DEBUG nova.storage.rbd_utils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image aa3e17be-c995-4cab-b209-1eadaaff1634_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:33:04 np0005486808 nova_compute[259627]: 2025-10-14 09:33:04.884 2 DEBUG nova.storage.rbd_utils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image aa3e17be-c995-4cab-b209-1eadaaff1634_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:33:04 np0005486808 nova_compute[259627]: 2025-10-14 09:33:04.920 2 DEBUG nova.storage.rbd_utils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image aa3e17be-c995-4cab-b209-1eadaaff1634_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:33:04 np0005486808 nova_compute[259627]: 2025-10-14 09:33:04.925 2 DEBUG oslo_concurrency.processutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:33:04 np0005486808 nova_compute[259627]: 2025-10-14 09:33:04.984 2 DEBUG nova.policy [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30cd4a7832e74a8ba07238937405c4ea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:33:05 np0005486808 nova_compute[259627]: 2025-10-14 09:33:05.036 2 DEBUG oslo_concurrency.processutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.111s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:33:05 np0005486808 nova_compute[259627]: 2025-10-14 09:33:05.037 2 DEBUG oslo_concurrency.lockutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:33:05 np0005486808 nova_compute[259627]: 2025-10-14 09:33:05.038 2 DEBUG oslo_concurrency.lockutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:33:05 np0005486808 nova_compute[259627]: 2025-10-14 09:33:05.039 2 DEBUG oslo_concurrency.lockutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:33:05 np0005486808 nova_compute[259627]: 2025-10-14 09:33:05.074 2 DEBUG nova.storage.rbd_utils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image aa3e17be-c995-4cab-b209-1eadaaff1634_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:33:05 np0005486808 nova_compute[259627]: 2025-10-14 09:33:05.078 2 DEBUG oslo_concurrency.processutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 aa3e17be-c995-4cab-b209-1eadaaff1634_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:33:05 np0005486808 nova_compute[259627]: 2025-10-14 09:33:05.398 2 DEBUG oslo_concurrency.processutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 aa3e17be-c995-4cab-b209-1eadaaff1634_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.320s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:33:05 np0005486808 nova_compute[259627]: 2025-10-14 09:33:05.485 2 DEBUG nova.storage.rbd_utils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] resizing rbd image aa3e17be-c995-4cab-b209-1eadaaff1634_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:33:05 np0005486808 nova_compute[259627]: 2025-10-14 09:33:05.580 2 DEBUG nova.objects.instance [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'migration_context' on Instance uuid aa3e17be-c995-4cab-b209-1eadaaff1634 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:33:05 np0005486808 nova_compute[259627]: 2025-10-14 09:33:05.633 2 DEBUG nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:33:05 np0005486808 nova_compute[259627]: 2025-10-14 09:33:05.633 2 DEBUG nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Ensure instance console log exists: /var/lib/nova/instances/aa3e17be-c995-4cab-b209-1eadaaff1634/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:33:05 np0005486808 nova_compute[259627]: 2025-10-14 09:33:05.633 2 DEBUG oslo_concurrency.lockutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:33:05 np0005486808 nova_compute[259627]: 2025-10-14 09:33:05.634 2 DEBUG oslo_concurrency.lockutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:33:05 np0005486808 nova_compute[259627]: 2025-10-14 09:33:05.634 2 DEBUG oslo_concurrency.lockutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:33:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:33:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4057076322' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:33:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:33:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4057076322' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:33:05 np0005486808 nova_compute[259627]: 2025-10-14 09:33:05.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:05 np0005486808 nova_compute[259627]: 2025-10-14 09:33:05.934 2 DEBUG nova.network.neutron [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Successfully created port: 51563204-46d2-4b26-bfa3-a2dc0f43701a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:33:06 np0005486808 nova_compute[259627]: 2025-10-14 09:33:06.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2399: 305 pgs: 305 active+clean; 67 MiB data, 921 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 839 KiB/s wr, 76 op/s
Oct 14 05:33:06 np0005486808 nova_compute[259627]: 2025-10-14 09:33:06.967 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434371.9656842, d1e24a24-daf7-4696-9dd4-ab29df7c8131 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:33:06 np0005486808 nova_compute[259627]: 2025-10-14 09:33:06.968 2 INFO nova.compute.manager [-] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:33:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:07.049 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:33:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:07.049 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:33:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:07.050 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:33:07 np0005486808 nova_compute[259627]: 2025-10-14 09:33:07.059 2 DEBUG nova.compute.manager [None req-774d386b-dfcd-460d-acf5-9b31abbaff53 - - - - - -] [instance: d1e24a24-daf7-4696-9dd4-ab29df7c8131] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:33:07 np0005486808 nova_compute[259627]: 2025-10-14 09:33:07.141 2 DEBUG nova.network.neutron [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Successfully created port: e4183a77-e102-4885-9a7d-ef0431abf27c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:33:07 np0005486808 nova_compute[259627]: 2025-10-14 09:33:07.937 2 DEBUG nova.network.neutron [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Successfully updated port: 51563204-46d2-4b26-bfa3-a2dc0f43701a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:33:08 np0005486808 nova_compute[259627]: 2025-10-14 09:33:08.038 2 DEBUG nova.compute.manager [req-52665427-e8ff-4bcc-8bda-4a821ae8835e req-53080a83-6b41-4554-9178-4bd7c1c18d44 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Received event network-changed-51563204-46d2-4b26-bfa3-a2dc0f43701a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:33:08 np0005486808 nova_compute[259627]: 2025-10-14 09:33:08.039 2 DEBUG nova.compute.manager [req-52665427-e8ff-4bcc-8bda-4a821ae8835e req-53080a83-6b41-4554-9178-4bd7c1c18d44 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Refreshing instance network info cache due to event network-changed-51563204-46d2-4b26-bfa3-a2dc0f43701a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:33:08 np0005486808 nova_compute[259627]: 2025-10-14 09:33:08.040 2 DEBUG oslo_concurrency.lockutils [req-52665427-e8ff-4bcc-8bda-4a821ae8835e req-53080a83-6b41-4554-9178-4bd7c1c18d44 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-aa3e17be-c995-4cab-b209-1eadaaff1634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:33:08 np0005486808 nova_compute[259627]: 2025-10-14 09:33:08.040 2 DEBUG oslo_concurrency.lockutils [req-52665427-e8ff-4bcc-8bda-4a821ae8835e req-53080a83-6b41-4554-9178-4bd7c1c18d44 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-aa3e17be-c995-4cab-b209-1eadaaff1634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:33:08 np0005486808 nova_compute[259627]: 2025-10-14 09:33:08.041 2 DEBUG nova.network.neutron [req-52665427-e8ff-4bcc-8bda-4a821ae8835e req-53080a83-6b41-4554-9178-4bd7c1c18d44 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Refreshing network info cache for port 51563204-46d2-4b26-bfa3-a2dc0f43701a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:33:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:33:08 np0005486808 nova_compute[259627]: 2025-10-14 09:33:08.280 2 DEBUG nova.network.neutron [req-52665427-e8ff-4bcc-8bda-4a821ae8835e req-53080a83-6b41-4554-9178-4bd7c1c18d44 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:33:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2400: 305 pgs: 305 active+clean; 67 MiB data, 921 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 825 KiB/s wr, 48 op/s
Oct 14 05:33:08 np0005486808 nova_compute[259627]: 2025-10-14 09:33:08.979 2 DEBUG nova.network.neutron [req-52665427-e8ff-4bcc-8bda-4a821ae8835e req-53080a83-6b41-4554-9178-4bd7c1c18d44 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:33:08 np0005486808 nova_compute[259627]: 2025-10-14 09:33:08.995 2 DEBUG oslo_concurrency.lockutils [req-52665427-e8ff-4bcc-8bda-4a821ae8835e req-53080a83-6b41-4554-9178-4bd7c1c18d44 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-aa3e17be-c995-4cab-b209-1eadaaff1634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:33:09 np0005486808 nova_compute[259627]: 2025-10-14 09:33:09.144 2 DEBUG nova.network.neutron [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Successfully updated port: e4183a77-e102-4885-9a7d-ef0431abf27c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:33:09 np0005486808 nova_compute[259627]: 2025-10-14 09:33:09.161 2 DEBUG oslo_concurrency.lockutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "refresh_cache-aa3e17be-c995-4cab-b209-1eadaaff1634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:33:09 np0005486808 nova_compute[259627]: 2025-10-14 09:33:09.162 2 DEBUG oslo_concurrency.lockutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquired lock "refresh_cache-aa3e17be-c995-4cab-b209-1eadaaff1634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:33:09 np0005486808 nova_compute[259627]: 2025-10-14 09:33:09.162 2 DEBUG nova.network.neutron [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:33:09 np0005486808 nova_compute[259627]: 2025-10-14 09:33:09.373 2 DEBUG nova.network.neutron [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:33:10 np0005486808 nova_compute[259627]: 2025-10-14 09:33:10.117 2 DEBUG nova.compute.manager [req-e81a59d0-c2e9-42b0-981e-faab3f1562d9 req-422c018d-c306-47d3-b3f6-20ffcf48c77b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Received event network-changed-e4183a77-e102-4885-9a7d-ef0431abf27c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:33:10 np0005486808 nova_compute[259627]: 2025-10-14 09:33:10.118 2 DEBUG nova.compute.manager [req-e81a59d0-c2e9-42b0-981e-faab3f1562d9 req-422c018d-c306-47d3-b3f6-20ffcf48c77b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Refreshing instance network info cache due to event network-changed-e4183a77-e102-4885-9a7d-ef0431abf27c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:33:10 np0005486808 nova_compute[259627]: 2025-10-14 09:33:10.118 2 DEBUG oslo_concurrency.lockutils [req-e81a59d0-c2e9-42b0-981e-faab3f1562d9 req-422c018d-c306-47d3-b3f6-20ffcf48c77b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-aa3e17be-c995-4cab-b209-1eadaaff1634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:33:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2401: 305 pgs: 305 active+clean; 88 MiB data, 931 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Oct 14 05:33:10 np0005486808 nova_compute[259627]: 2025-10-14 09:33:10.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:11 np0005486808 nova_compute[259627]: 2025-10-14 09:33:11.187 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434376.1859162, 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:33:11 np0005486808 nova_compute[259627]: 2025-10-14 09:33:11.187 2 INFO nova.compute.manager [-] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:33:11 np0005486808 nova_compute[259627]: 2025-10-14 09:33:11.207 2 DEBUG nova.compute.manager [None req-23daf500-5b3b-4d81-be5b-78b2d84c1ed8 - - - - - -] [instance: 8e9ceee0-6d0e-44b9-b3aa-b785f45f96f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:33:11 np0005486808 nova_compute[259627]: 2025-10-14 09:33:11.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:11 np0005486808 nova_compute[259627]: 2025-10-14 09:33:11.511 2 DEBUG nova.network.neutron [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Updating instance_info_cache with network_info: [{"id": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "address": "fa:16:3e:8e:aa:2c", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51563204-46", "ovs_interfaceid": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e4183a77-e102-4885-9a7d-ef0431abf27c", "address": "fa:16:3e:8f:fd:da", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4183a77-e1", "ovs_interfaceid": "e4183a77-e102-4885-9a7d-ef0431abf27c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:33:11 np0005486808 nova_compute[259627]: 2025-10-14 09:33:11.530 2 DEBUG oslo_concurrency.lockutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Releasing lock "refresh_cache-aa3e17be-c995-4cab-b209-1eadaaff1634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:33:11 np0005486808 nova_compute[259627]: 2025-10-14 09:33:11.530 2 DEBUG nova.compute.manager [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Instance network_info: |[{"id": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "address": "fa:16:3e:8e:aa:2c", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51563204-46", "ovs_interfaceid": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e4183a77-e102-4885-9a7d-ef0431abf27c", "address": "fa:16:3e:8f:fd:da", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", 
"version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4183a77-e1", "ovs_interfaceid": "e4183a77-e102-4885-9a7d-ef0431abf27c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:33:11 np0005486808 nova_compute[259627]: 2025-10-14 09:33:11.531 2 DEBUG oslo_concurrency.lockutils [req-e81a59d0-c2e9-42b0-981e-faab3f1562d9 req-422c018d-c306-47d3-b3f6-20ffcf48c77b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-aa3e17be-c995-4cab-b209-1eadaaff1634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:33:11 np0005486808 nova_compute[259627]: 2025-10-14 09:33:11.531 2 DEBUG nova.network.neutron [req-e81a59d0-c2e9-42b0-981e-faab3f1562d9 req-422c018d-c306-47d3-b3f6-20ffcf48c77b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Refreshing network info cache for port e4183a77-e102-4885-9a7d-ef0431abf27c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:33:11 np0005486808 nova_compute[259627]: 2025-10-14 09:33:11.535 2 DEBUG nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Start _get_guest_xml network_info=[{"id": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "address": "fa:16:3e:8e:aa:2c", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51563204-46", "ovs_interfaceid": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e4183a77-e102-4885-9a7d-ef0431abf27c", "address": "fa:16:3e:8f:fd:da", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4183a77-e1", "ovs_interfaceid": "e4183a77-e102-4885-9a7d-ef0431abf27c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:33:11 np0005486808 nova_compute[259627]: 2025-10-14 09:33:11.541 2 WARNING nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:33:11 np0005486808 nova_compute[259627]: 2025-10-14 09:33:11.546 2 DEBUG nova.virt.libvirt.host [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:33:11 np0005486808 nova_compute[259627]: 2025-10-14 09:33:11.547 2 DEBUG nova.virt.libvirt.host [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:33:11 np0005486808 nova_compute[259627]: 2025-10-14 09:33:11.556 2 DEBUG nova.virt.libvirt.host [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:33:11 np0005486808 nova_compute[259627]: 2025-10-14 09:33:11.556 2 DEBUG nova.virt.libvirt.host [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:33:11 np0005486808 nova_compute[259627]: 2025-10-14 09:33:11.557 2 DEBUG nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:33:11 np0005486808 nova_compute[259627]: 2025-10-14 09:33:11.557 2 DEBUG nova.virt.hardware [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:33:11 np0005486808 nova_compute[259627]: 2025-10-14 09:33:11.558 2 DEBUG nova.virt.hardware [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:33:11 np0005486808 nova_compute[259627]: 2025-10-14 09:33:11.558 2 DEBUG nova.virt.hardware [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:33:11 np0005486808 nova_compute[259627]: 2025-10-14 09:33:11.558 2 DEBUG nova.virt.hardware [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:33:11 np0005486808 nova_compute[259627]: 2025-10-14 09:33:11.558 2 DEBUG nova.virt.hardware [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:33:11 np0005486808 nova_compute[259627]: 2025-10-14 09:33:11.559 2 DEBUG nova.virt.hardware [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:33:11 np0005486808 nova_compute[259627]: 2025-10-14 09:33:11.559 2 DEBUG nova.virt.hardware [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:33:11 np0005486808 nova_compute[259627]: 2025-10-14 09:33:11.559 2 DEBUG nova.virt.hardware [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:33:11 np0005486808 nova_compute[259627]: 2025-10-14 09:33:11.560 2 DEBUG nova.virt.hardware [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:33:11 np0005486808 nova_compute[259627]: 2025-10-14 09:33:11.560 2 DEBUG nova.virt.hardware [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:33:11 np0005486808 nova_compute[259627]: 2025-10-14 09:33:11.560 2 DEBUG nova.virt.hardware [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:33:11 np0005486808 nova_compute[259627]: 2025-10-14 09:33:11.563 2 DEBUG oslo_concurrency.processutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:33:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:33:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2205886182' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.047 2 DEBUG oslo_concurrency.processutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.069 2 DEBUG nova.storage.rbd_utils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image aa3e17be-c995-4cab-b209-1eadaaff1634_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.073 2 DEBUG oslo_concurrency.processutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:33:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:33:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2974658472' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:33:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2402: 305 pgs: 305 active+clean; 88 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.497 2 DEBUG oslo_concurrency.processutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.500 2 DEBUG nova.virt.libvirt.vif [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:33:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1334569923',display_name='tempest-TestGettingAddress-server-1334569923',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1334569923',id=142,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF8dHWwoASzwaEtDMH2Wi+f2synjGjjZ9XZG0MWQfZpxTlFQLXxLEnN5PLSLnAZWxtveM7NKytid8Sra6Q2ThMibNL4mHEXIr9+f4S2pxjfer3narHj+r/DZfKSVQL2czA==',key_name='tempest-TestGettingAddress-1935488696',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-z7alpslz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:33:04Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=aa3e17be-c995-4cab-b209-1eadaaff1634,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "address": "fa:16:3e:8e:aa:2c", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51563204-46", "ovs_interfaceid": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.500 2 DEBUG nova.network.os_vif_util [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "address": "fa:16:3e:8e:aa:2c", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51563204-46", "ovs_interfaceid": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.502 2 DEBUG nova.network.os_vif_util [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:aa:2c,bridge_name='br-int',has_traffic_filtering=True,id=51563204-46d2-4b26-bfa3-a2dc0f43701a,network=Network(0ec9546c-0acc-437f-9f6e-7db1743faf53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51563204-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.503 2 DEBUG nova.virt.libvirt.vif [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:33:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1334569923',display_name='tempest-TestGettingAddress-server-1334569923',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1334569923',id=142,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF8dHWwoASzwaEtDMH2Wi+f2synjGjjZ9XZG0MWQfZpxTlFQLXxLEnN5PLSLnAZWxtveM7NKytid8Sra6Q2ThMibNL4mHEXIr9+f4S2pxjfer3narHj+r/DZfKSVQL2czA==',key_name='tempest-TestGettingAddress-1935488696',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-z7alpslz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:33:04Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=aa3e17be-c995-4cab-b209-1eadaaff1634,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e4183a77-e102-4885-9a7d-ef0431abf27c", "address": "fa:16:3e:8f:fd:da", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4183a77-e1", "ovs_interfaceid": "e4183a77-e102-4885-9a7d-ef0431abf27c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.504 2 DEBUG nova.network.os_vif_util [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "e4183a77-e102-4885-9a7d-ef0431abf27c", "address": "fa:16:3e:8f:fd:da", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4183a77-e1", "ovs_interfaceid": "e4183a77-e102-4885-9a7d-ef0431abf27c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.505 2 DEBUG nova.network.os_vif_util [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:fd:da,bridge_name='br-int',has_traffic_filtering=True,id=e4183a77-e102-4885-9a7d-ef0431abf27c,network=Network(a3ca3a81-ba03-43af-8eb7-2462170c9d43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4183a77-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.507 2 DEBUG nova.objects.instance [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'pci_devices' on Instance uuid aa3e17be-c995-4cab-b209-1eadaaff1634 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.539 2 DEBUG nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:33:12 np0005486808 nova_compute[259627]:  <uuid>aa3e17be-c995-4cab-b209-1eadaaff1634</uuid>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:  <name>instance-0000008e</name>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:33:12 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:      <nova:name>tempest-TestGettingAddress-server-1334569923</nova:name>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:33:11</nova:creationTime>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:33:12 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:        <nova:user uuid="30cd4a7832e74a8ba07238937405c4ea">tempest-TestGettingAddress-262346303-project-member</nova:user>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:        <nova:project uuid="878ad3f42dd94bef8cc66ab3d67f03bf">tempest-TestGettingAddress-262346303</nova:project>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:        <nova:port uuid="51563204-46d2-4b26-bfa3-a2dc0f43701a">
Oct 14 05:33:12 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:        <nova:port uuid="e4183a77-e102-4885-9a7d-ef0431abf27c">
Oct 14 05:33:12 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe8f:fdda" ipVersion="6"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe8f:fdda" ipVersion="6"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:      <entry name="serial">aa3e17be-c995-4cab-b209-1eadaaff1634</entry>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:      <entry name="uuid">aa3e17be-c995-4cab-b209-1eadaaff1634</entry>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:33:12 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/aa3e17be-c995-4cab-b209-1eadaaff1634_disk">
Oct 14 05:33:12 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:33:12 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:33:12 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/aa3e17be-c995-4cab-b209-1eadaaff1634_disk.config">
Oct 14 05:33:12 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:33:12 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:33:12 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:8e:aa:2c"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:      <target dev="tap51563204-46"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:33:12 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:8f:fd:da"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:      <target dev="tape4183a77-e1"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:33:12 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/aa3e17be-c995-4cab-b209-1eadaaff1634/console.log" append="off"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:33:12 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:33:12 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:33:12 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:33:12 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:33:12 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.541 2 DEBUG nova.compute.manager [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Preparing to wait for external event network-vif-plugged-51563204-46d2-4b26-bfa3-a2dc0f43701a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.542 2 DEBUG oslo_concurrency.lockutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.543 2 DEBUG oslo_concurrency.lockutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.543 2 DEBUG oslo_concurrency.lockutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.544 2 DEBUG nova.compute.manager [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Preparing to wait for external event network-vif-plugged-e4183a77-e102-4885-9a7d-ef0431abf27c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.544 2 DEBUG oslo_concurrency.lockutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.545 2 DEBUG oslo_concurrency.lockutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.545 2 DEBUG oslo_concurrency.lockutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.547 2 DEBUG nova.virt.libvirt.vif [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:33:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1334569923',display_name='tempest-TestGettingAddress-server-1334569923',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1334569923',id=142,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF8dHWwoASzwaEtDMH2Wi+f2synjGjjZ9XZG0MWQfZpxTlFQLXxLEnN5PLSLnAZWxtveM7NKytid8Sra6Q2ThMibNL4mHEXIr9+f4S2pxjfer3narHj+r/DZfKSVQL2czA==',key_name='tempest-TestGettingAddress-1935488696',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-z7alpslz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:33:04Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=aa3e17be-c995-4cab-b209-1eadaaff1634,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "address": "fa:16:3e:8e:aa:2c", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51563204-46", "ovs_interfaceid": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.547 2 DEBUG nova.network.os_vif_util [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "address": "fa:16:3e:8e:aa:2c", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51563204-46", "ovs_interfaceid": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.548 2 DEBUG nova.network.os_vif_util [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:aa:2c,bridge_name='br-int',has_traffic_filtering=True,id=51563204-46d2-4b26-bfa3-a2dc0f43701a,network=Network(0ec9546c-0acc-437f-9f6e-7db1743faf53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51563204-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.549 2 DEBUG os_vif [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:aa:2c,bridge_name='br-int',has_traffic_filtering=True,id=51563204-46d2-4b26-bfa3-a2dc0f43701a,network=Network(0ec9546c-0acc-437f-9f6e-7db1743faf53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51563204-46') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.551 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.552 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.558 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap51563204-46, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.559 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap51563204-46, col_values=(('external_ids', {'iface-id': '51563204-46d2-4b26-bfa3-a2dc0f43701a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8e:aa:2c', 'vm-uuid': 'aa3e17be-c995-4cab-b209-1eadaaff1634'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:12 np0005486808 NetworkManager[44885]: <info>  [1760434392.5670] manager: (tap51563204-46): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/617)
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.573 2 INFO os_vif [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:aa:2c,bridge_name='br-int',has_traffic_filtering=True,id=51563204-46d2-4b26-bfa3-a2dc0f43701a,network=Network(0ec9546c-0acc-437f-9f6e-7db1743faf53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51563204-46')#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.574 2 DEBUG nova.virt.libvirt.vif [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:33:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1334569923',display_name='tempest-TestGettingAddress-server-1334569923',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1334569923',id=142,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF8dHWwoASzwaEtDMH2Wi+f2synjGjjZ9XZG0MWQfZpxTlFQLXxLEnN5PLSLnAZWxtveM7NKytid8Sra6Q2ThMibNL4mHEXIr9+f4S2pxjfer3narHj+r/DZfKSVQL2czA==',key_name='tempest-TestGettingAddress-1935488696',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-z7alpslz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:33:04Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=aa3e17be-c995-4cab-b209-1eadaaff1634,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e4183a77-e102-4885-9a7d-ef0431abf27c", "address": "fa:16:3e:8f:fd:da", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4183a77-e1", "ovs_interfaceid": "e4183a77-e102-4885-9a7d-ef0431abf27c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.575 2 DEBUG nova.network.os_vif_util [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "e4183a77-e102-4885-9a7d-ef0431abf27c", "address": "fa:16:3e:8f:fd:da", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4183a77-e1", "ovs_interfaceid": "e4183a77-e102-4885-9a7d-ef0431abf27c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.576 2 DEBUG nova.network.os_vif_util [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:fd:da,bridge_name='br-int',has_traffic_filtering=True,id=e4183a77-e102-4885-9a7d-ef0431abf27c,network=Network(a3ca3a81-ba03-43af-8eb7-2462170c9d43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4183a77-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.576 2 DEBUG os_vif [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:fd:da,bridge_name='br-int',has_traffic_filtering=True,id=e4183a77-e102-4885-9a7d-ef0431abf27c,network=Network(a3ca3a81-ba03-43af-8eb7-2462170c9d43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4183a77-e1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.577 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.577 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.579 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape4183a77-e1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.580 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape4183a77-e1, col_values=(('external_ids', {'iface-id': 'e4183a77-e102-4885-9a7d-ef0431abf27c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8f:fd:da', 'vm-uuid': 'aa3e17be-c995-4cab-b209-1eadaaff1634'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:12 np0005486808 NetworkManager[44885]: <info>  [1760434392.5827] manager: (tape4183a77-e1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/618)
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.590 2 INFO os_vif [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:fd:da,bridge_name='br-int',has_traffic_filtering=True,id=e4183a77-e102-4885-9a7d-ef0431abf27c,network=Network(a3ca3a81-ba03-43af-8eb7-2462170c9d43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4183a77-e1')#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.678 2 DEBUG nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.679 2 DEBUG nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.680 2 DEBUG nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:8e:aa:2c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.680 2 DEBUG nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:8f:fd:da, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.681 2 INFO nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Using config drive#033[00m
Oct 14 05:33:12 np0005486808 nova_compute[259627]: 2025-10-14 09:33:12.717 2 DEBUG nova.storage.rbd_utils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image aa3e17be-c995-4cab-b209-1eadaaff1634_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:33:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:33:13 np0005486808 nova_compute[259627]: 2025-10-14 09:33:13.758 2 INFO nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Creating config drive at /var/lib/nova/instances/aa3e17be-c995-4cab-b209-1eadaaff1634/disk.config#033[00m
Oct 14 05:33:13 np0005486808 nova_compute[259627]: 2025-10-14 09:33:13.767 2 DEBUG oslo_concurrency.processutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aa3e17be-c995-4cab-b209-1eadaaff1634/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzhggxbpc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:33:13 np0005486808 nova_compute[259627]: 2025-10-14 09:33:13.923 2 DEBUG oslo_concurrency.processutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aa3e17be-c995-4cab-b209-1eadaaff1634/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzhggxbpc" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:33:13 np0005486808 nova_compute[259627]: 2025-10-14 09:33:13.960 2 DEBUG nova.storage.rbd_utils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image aa3e17be-c995-4cab-b209-1eadaaff1634_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:33:13 np0005486808 nova_compute[259627]: 2025-10-14 09:33:13.965 2 DEBUG oslo_concurrency.processutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aa3e17be-c995-4cab-b209-1eadaaff1634/disk.config aa3e17be-c995-4cab-b209-1eadaaff1634_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:33:14 np0005486808 nova_compute[259627]: 2025-10-14 09:33:14.018 2 DEBUG nova.network.neutron [req-e81a59d0-c2e9-42b0-981e-faab3f1562d9 req-422c018d-c306-47d3-b3f6-20ffcf48c77b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Updated VIF entry in instance network info cache for port e4183a77-e102-4885-9a7d-ef0431abf27c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:33:14 np0005486808 nova_compute[259627]: 2025-10-14 09:33:14.019 2 DEBUG nova.network.neutron [req-e81a59d0-c2e9-42b0-981e-faab3f1562d9 req-422c018d-c306-47d3-b3f6-20ffcf48c77b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Updating instance_info_cache with network_info: [{"id": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "address": "fa:16:3e:8e:aa:2c", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51563204-46", "ovs_interfaceid": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e4183a77-e102-4885-9a7d-ef0431abf27c", "address": "fa:16:3e:8f:fd:da", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], 
"gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4183a77-e1", "ovs_interfaceid": "e4183a77-e102-4885-9a7d-ef0431abf27c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:33:14 np0005486808 nova_compute[259627]: 2025-10-14 09:33:14.052 2 DEBUG oslo_concurrency.lockutils [req-e81a59d0-c2e9-42b0-981e-faab3f1562d9 req-422c018d-c306-47d3-b3f6-20ffcf48c77b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-aa3e17be-c995-4cab-b209-1eadaaff1634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:33:14 np0005486808 nova_compute[259627]: 2025-10-14 09:33:14.164 2 DEBUG oslo_concurrency.processutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aa3e17be-c995-4cab-b209-1eadaaff1634/disk.config aa3e17be-c995-4cab-b209-1eadaaff1634_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.199s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:33:14 np0005486808 nova_compute[259627]: 2025-10-14 09:33:14.165 2 INFO nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Deleting local config drive /var/lib/nova/instances/aa3e17be-c995-4cab-b209-1eadaaff1634/disk.config because it was imported into RBD.#033[00m
Oct 14 05:33:14 np0005486808 kernel: tap51563204-46: entered promiscuous mode
Oct 14 05:33:14 np0005486808 NetworkManager[44885]: <info>  [1760434394.2435] manager: (tap51563204-46): new Tun device (/org/freedesktop/NetworkManager/Devices/619)
Oct 14 05:33:14 np0005486808 ovn_controller[152662]: 2025-10-14T09:33:14Z|01525|binding|INFO|Claiming lport 51563204-46d2-4b26-bfa3-a2dc0f43701a for this chassis.
Oct 14 05:33:14 np0005486808 ovn_controller[152662]: 2025-10-14T09:33:14Z|01526|binding|INFO|51563204-46d2-4b26-bfa3-a2dc0f43701a: Claiming fa:16:3e:8e:aa:2c 10.100.0.13
Oct 14 05:33:14 np0005486808 nova_compute[259627]: 2025-10-14 09:33:14.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:14 np0005486808 nova_compute[259627]: 2025-10-14 09:33:14.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.268 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:aa:2c 10.100.0.13'], port_security=['fa:16:3e:8e:aa:2c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'aa3e17be-c995-4cab-b209-1eadaaff1634', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0ec9546c-0acc-437f-9f6e-7db1743faf53', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b0a1ad0e-ac41-4d92-be07-56e1d94d37c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02fc600f-219c-48ab-94ed-7d3694dfd14e, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=51563204-46d2-4b26-bfa3-a2dc0f43701a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.270 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 51563204-46d2-4b26-bfa3-a2dc0f43701a in datapath 0ec9546c-0acc-437f-9f6e-7db1743faf53 bound to our chassis#033[00m
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.271 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0ec9546c-0acc-437f-9f6e-7db1743faf53#033[00m
Oct 14 05:33:14 np0005486808 NetworkManager[44885]: <info>  [1760434394.2759] manager: (tape4183a77-e1): new Tun device (/org/freedesktop/NetworkManager/Devices/620)
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.296 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c3281ced-97e2-4224-a0c9-0bed1d6180bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.298 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0ec9546c-01 in ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.301 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0ec9546c-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.301 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d3ad4567-aa75-4c41-9582-3adf48c64c08]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.303 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1ee6e1a0-493c-48a2-be49-6bf32b60f05f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:33:14 np0005486808 systemd-udevd[404929]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:33:14 np0005486808 systemd-udevd[404931]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.322 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[13cfe5b0-efe0-4327-acff-a88feaec7ebb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:33:14 np0005486808 systemd-machined[214636]: New machine qemu-175-instance-0000008e.
Oct 14 05:33:14 np0005486808 NetworkManager[44885]: <info>  [1760434394.3369] device (tap51563204-46): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:33:14 np0005486808 NetworkManager[44885]: <info>  [1760434394.3382] device (tap51563204-46): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:33:14 np0005486808 systemd[1]: Started Virtual Machine qemu-175-instance-0000008e.
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.352 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ae8aefb6-c042-42e8-a8a4-aef8cb3a29e4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:33:14 np0005486808 kernel: tape4183a77-e1: entered promiscuous mode
Oct 14 05:33:14 np0005486808 nova_compute[259627]: 2025-10-14 09:33:14.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:14 np0005486808 ovn_controller[152662]: 2025-10-14T09:33:14Z|01527|binding|INFO|Claiming lport e4183a77-e102-4885-9a7d-ef0431abf27c for this chassis.
Oct 14 05:33:14 np0005486808 ovn_controller[152662]: 2025-10-14T09:33:14Z|01528|binding|INFO|e4183a77-e102-4885-9a7d-ef0431abf27c: Claiming fa:16:3e:8f:fd:da 2001:db8:0:1:f816:3eff:fe8f:fdda 2001:db8::f816:3eff:fe8f:fdda
Oct 14 05:33:14 np0005486808 NetworkManager[44885]: <info>  [1760434394.3888] device (tape4183a77-e1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:33:14 np0005486808 NetworkManager[44885]: <info>  [1760434394.3922] device (tape4183a77-e1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.392 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:fd:da 2001:db8:0:1:f816:3eff:fe8f:fdda 2001:db8::f816:3eff:fe8f:fdda'], port_security=['fa:16:3e:8f:fd:da 2001:db8:0:1:f816:3eff:fe8f:fdda 2001:db8::f816:3eff:fe8f:fdda'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe8f:fdda/64 2001:db8::f816:3eff:fe8f:fdda/64', 'neutron:device_id': 'aa3e17be-c995-4cab-b209-1eadaaff1634', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3ca3a81-ba03-43af-8eb7-2462170c9d43', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b0a1ad0e-ac41-4d92-be07-56e1d94d37c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6de0556a-f319-4356-b742-06d48d854bbd, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=e4183a77-e102-4885-9a7d-ef0431abf27c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:33:14 np0005486808 ovn_controller[152662]: 2025-10-14T09:33:14Z|01529|binding|INFO|Setting lport 51563204-46d2-4b26-bfa3-a2dc0f43701a ovn-installed in OVS
Oct 14 05:33:14 np0005486808 ovn_controller[152662]: 2025-10-14T09:33:14Z|01530|binding|INFO|Setting lport 51563204-46d2-4b26-bfa3-a2dc0f43701a up in Southbound
Oct 14 05:33:14 np0005486808 nova_compute[259627]: 2025-10-14 09:33:14.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.400 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4d846236-18d0-4155-9059-64f74124be35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:33:14 np0005486808 ovn_controller[152662]: 2025-10-14T09:33:14Z|01531|binding|INFO|Setting lport e4183a77-e102-4885-9a7d-ef0431abf27c ovn-installed in OVS
Oct 14 05:33:14 np0005486808 ovn_controller[152662]: 2025-10-14T09:33:14Z|01532|binding|INFO|Setting lport e4183a77-e102-4885-9a7d-ef0431abf27c up in Southbound
Oct 14 05:33:14 np0005486808 nova_compute[259627]: 2025-10-14 09:33:14.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.414 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ba056462-6974-425d-9430-8a9ffcb3550d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:33:14 np0005486808 NetworkManager[44885]: <info>  [1760434394.4165] manager: (tap0ec9546c-00): new Veth device (/org/freedesktop/NetworkManager/Devices/621)
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.457 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5ca5750b-1d35-4abd-b6f1-ada015c32058]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.461 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d7452f3b-3444-466d-9b11-2ba291a417cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:33:14 np0005486808 NetworkManager[44885]: <info>  [1760434394.4897] device (tap0ec9546c-00): carrier: link connected
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.496 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f9295e48-c23f-4356-83f7-852ec8b56168]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:33:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2403: 305 pgs: 305 active+clean; 88 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.518 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ee604729-2be2-4791-b3e8-59a3455ed275]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0ec9546c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:a2:86'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 434], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 817325, 'reachable_time': 33619, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 404966, 'error': None, 'target': 'ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.534 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fb92a894-2cfe-46a6-be73-b32b207306e8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0b:a286'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 817325, 'tstamp': 817325}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 404967, 'error': None, 'target': 'ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.562 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[aaa649f1-f3d4-47d6-8ca1-a0b3131b4a33]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0ec9546c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:a2:86'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 434], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 817325, 'reachable_time': 33619, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 404968, 'error': None, 'target': 'ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.594 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[70e0e996-1c89-43a1-b670-addd74883e1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.662 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d31eed7e-2fd0-4d37-b1ec-7955047c9a4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.663 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ec9546c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.663 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.664 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0ec9546c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:33:14 np0005486808 kernel: tap0ec9546c-00: entered promiscuous mode
Oct 14 05:33:14 np0005486808 NetworkManager[44885]: <info>  [1760434394.6676] manager: (tap0ec9546c-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/622)
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.680 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0ec9546c-00, col_values=(('external_ids', {'iface-id': 'b37ec0e2-6cb5-44b1-98a2-2c39cdd204b3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:33:14 np0005486808 nova_compute[259627]: 2025-10-14 09:33:14.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:14 np0005486808 ovn_controller[152662]: 2025-10-14T09:33:14Z|01533|binding|INFO|Releasing lport b37ec0e2-6cb5-44b1-98a2-2c39cdd204b3 from this chassis (sb_readonly=0)
Oct 14 05:33:14 np0005486808 nova_compute[259627]: 2025-10-14 09:33:14.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.700 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0ec9546c-0acc-437f-9f6e-7db1743faf53.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0ec9546c-0acc-437f-9f6e-7db1743faf53.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.701 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d316ac7a-bf47-479c-a77f-ef4a253f578e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.702 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-0ec9546c-0acc-437f-9f6e-7db1743faf53
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/0ec9546c-0acc-437f-9f6e-7db1743faf53.pid.haproxy
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 0ec9546c-0acc-437f-9f6e-7db1743faf53
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:33:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:14.703 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53', 'env', 'PROCESS_TAG=haproxy-0ec9546c-0acc-437f-9f6e-7db1743faf53', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0ec9546c-0acc-437f-9f6e-7db1743faf53.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:33:15 np0005486808 podman[405043]: 2025-10-14 09:33:15.099123116 +0000 UTC m=+0.049346122 container create e0bee49b8e1f8f8bcbcfebd7f24148d81c24d2e3d5c9ca6dd59307eb33e5da16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 05:33:15 np0005486808 systemd[1]: Started libpod-conmon-e0bee49b8e1f8f8bcbcfebd7f24148d81c24d2e3d5c9ca6dd59307eb33e5da16.scope.
Oct 14 05:33:15 np0005486808 podman[405043]: 2025-10-14 09:33:15.072471732 +0000 UTC m=+0.022694758 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:33:15 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:33:15 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0619add7e0ccdeb5eee5bc37d898eed13da4d773245bbb9df7aa5e8493239769/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:33:15 np0005486808 podman[405043]: 2025-10-14 09:33:15.200303619 +0000 UTC m=+0.150526685 container init e0bee49b8e1f8f8bcbcfebd7f24148d81c24d2e3d5c9ca6dd59307eb33e5da16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 14 05:33:15 np0005486808 nova_compute[259627]: 2025-10-14 09:33:15.209 2 DEBUG nova.compute.manager [req-80388964-c18e-409d-98f6-8ae50e813e64 req-d896f212-bf26-4c4d-badf-6d332819f937 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Received event network-vif-plugged-51563204-46d2-4b26-bfa3-a2dc0f43701a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:33:15 np0005486808 nova_compute[259627]: 2025-10-14 09:33:15.210 2 DEBUG oslo_concurrency.lockutils [req-80388964-c18e-409d-98f6-8ae50e813e64 req-d896f212-bf26-4c4d-badf-6d332819f937 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:33:15 np0005486808 nova_compute[259627]: 2025-10-14 09:33:15.211 2 DEBUG oslo_concurrency.lockutils [req-80388964-c18e-409d-98f6-8ae50e813e64 req-d896f212-bf26-4c4d-badf-6d332819f937 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:33:15 np0005486808 nova_compute[259627]: 2025-10-14 09:33:15.211 2 DEBUG oslo_concurrency.lockutils [req-80388964-c18e-409d-98f6-8ae50e813e64 req-d896f212-bf26-4c4d-badf-6d332819f937 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:33:15 np0005486808 nova_compute[259627]: 2025-10-14 09:33:15.212 2 DEBUG nova.compute.manager [req-80388964-c18e-409d-98f6-8ae50e813e64 req-d896f212-bf26-4c4d-badf-6d332819f937 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Processing event network-vif-plugged-51563204-46d2-4b26-bfa3-a2dc0f43701a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:33:15 np0005486808 podman[405043]: 2025-10-14 09:33:15.214098827 +0000 UTC m=+0.164321863 container start e0bee49b8e1f8f8bcbcfebd7f24148d81c24d2e3d5c9ca6dd59307eb33e5da16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 14 05:33:15 np0005486808 neutron-haproxy-ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53[405056]: [NOTICE]   (405060) : New worker (405062) forked
Oct 14 05:33:15 np0005486808 neutron-haproxy-ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53[405056]: [NOTICE]   (405060) : Loading success.
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.296 162547 INFO neutron.agent.ovn.metadata.agent [-] Port e4183a77-e102-4885-9a7d-ef0431abf27c in datapath a3ca3a81-ba03-43af-8eb7-2462170c9d43 unbound from our chassis#033[00m
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.298 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a3ca3a81-ba03-43af-8eb7-2462170c9d43#033[00m
Oct 14 05:33:15 np0005486808 nova_compute[259627]: 2025-10-14 09:33:15.310 2 DEBUG nova.compute.manager [req-4e6b63fb-c662-42c3-a2ba-6553f4f967c0 req-7f63245b-608e-48e9-b0ef-500ab8b3f601 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Received event network-vif-plugged-e4183a77-e102-4885-9a7d-ef0431abf27c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:33:15 np0005486808 nova_compute[259627]: 2025-10-14 09:33:15.310 2 DEBUG oslo_concurrency.lockutils [req-4e6b63fb-c662-42c3-a2ba-6553f4f967c0 req-7f63245b-608e-48e9-b0ef-500ab8b3f601 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:33:15 np0005486808 nova_compute[259627]: 2025-10-14 09:33:15.311 2 DEBUG oslo_concurrency.lockutils [req-4e6b63fb-c662-42c3-a2ba-6553f4f967c0 req-7f63245b-608e-48e9-b0ef-500ab8b3f601 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:33:15 np0005486808 nova_compute[259627]: 2025-10-14 09:33:15.311 2 DEBUG oslo_concurrency.lockutils [req-4e6b63fb-c662-42c3-a2ba-6553f4f967c0 req-7f63245b-608e-48e9-b0ef-500ab8b3f601 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:33:15 np0005486808 nova_compute[259627]: 2025-10-14 09:33:15.311 2 DEBUG nova.compute.manager [req-4e6b63fb-c662-42c3-a2ba-6553f4f967c0 req-7f63245b-608e-48e9-b0ef-500ab8b3f601 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Processing event network-vif-plugged-e4183a77-e102-4885-9a7d-ef0431abf27c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.316 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[87dfb4fc-e471-47d9-8345-79386b57224e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.318 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa3ca3a81-b1 in ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.320 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa3ca3a81-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.320 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[38fac36a-6784-4b62-8a0f-e339f53fac41]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.321 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[16609918-6f76-4ad3-8fe5-e14932600da8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.336 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[19a900bb-84f2-4055-8103-aca1f2be1117]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.359 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c4801916-18de-4714-b3b9-c275968d89ca]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:33:15 np0005486808 nova_compute[259627]: 2025-10-14 09:33:15.361 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434395.3611186, aa3e17be-c995-4cab-b209-1eadaaff1634 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:33:15 np0005486808 nova_compute[259627]: 2025-10-14 09:33:15.362 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] VM Started (Lifecycle Event)#033[00m
Oct 14 05:33:15 np0005486808 nova_compute[259627]: 2025-10-14 09:33:15.364 2 DEBUG nova.compute.manager [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:33:15 np0005486808 nova_compute[259627]: 2025-10-14 09:33:15.367 2 DEBUG nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:33:15 np0005486808 nova_compute[259627]: 2025-10-14 09:33:15.370 2 INFO nova.virt.libvirt.driver [-] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Instance spawned successfully.#033[00m
Oct 14 05:33:15 np0005486808 nova_compute[259627]: 2025-10-14 09:33:15.370 2 DEBUG nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:33:15 np0005486808 nova_compute[259627]: 2025-10-14 09:33:15.391 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:33:15 np0005486808 nova_compute[259627]: 2025-10-14 09:33:15.396 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.398 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[31ad3511-04da-4b93-b126-7f2ab074ca3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:33:15 np0005486808 nova_compute[259627]: 2025-10-14 09:33:15.399 2 DEBUG nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:33:15 np0005486808 nova_compute[259627]: 2025-10-14 09:33:15.400 2 DEBUG nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:33:15 np0005486808 nova_compute[259627]: 2025-10-14 09:33:15.400 2 DEBUG nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:33:15 np0005486808 nova_compute[259627]: 2025-10-14 09:33:15.400 2 DEBUG nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:33:15 np0005486808 nova_compute[259627]: 2025-10-14 09:33:15.401 2 DEBUG nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:33:15 np0005486808 nova_compute[259627]: 2025-10-14 09:33:15.401 2 DEBUG nova.virt.libvirt.driver [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.404 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[74b254fd-4028-44ad-943e-ce101c99a8ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:33:15 np0005486808 NetworkManager[44885]: <info>  [1760434395.4063] manager: (tapa3ca3a81-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/623)
Oct 14 05:33:15 np0005486808 systemd-udevd[404959]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:33:15 np0005486808 nova_compute[259627]: 2025-10-14 09:33:15.428 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:33:15 np0005486808 nova_compute[259627]: 2025-10-14 09:33:15.428 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434395.3613482, aa3e17be-c995-4cab-b209-1eadaaff1634 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:33:15 np0005486808 nova_compute[259627]: 2025-10-14 09:33:15.428 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.446 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[0dad88f4-f326-42fd-982e-634ce19cd509]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.450 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1a88f73a-bcd8-4485-a439-482f02ed38c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:33:15 np0005486808 nova_compute[259627]: 2025-10-14 09:33:15.464 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:33:15 np0005486808 nova_compute[259627]: 2025-10-14 09:33:15.467 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434395.3668435, aa3e17be-c995-4cab-b209-1eadaaff1634 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:33:15 np0005486808 nova_compute[259627]: 2025-10-14 09:33:15.467 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:33:15 np0005486808 nova_compute[259627]: 2025-10-14 09:33:15.470 2 INFO nova.compute.manager [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Took 10.66 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:33:15 np0005486808 nova_compute[259627]: 2025-10-14 09:33:15.471 2 DEBUG nova.compute.manager [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:33:15 np0005486808 NetworkManager[44885]: <info>  [1760434395.4848] device (tapa3ca3a81-b0): carrier: link connected
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.491 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e378dc39-c8bc-4cf2-9465-b24634cbc79a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:33:15 np0005486808 nova_compute[259627]: 2025-10-14 09:33:15.495 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:33:15 np0005486808 nova_compute[259627]: 2025-10-14 09:33:15.499 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:33:15 np0005486808 nova_compute[259627]: 2025-10-14 09:33:15.517 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.519 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[65c31cba-cd6d-4cd5-91bd-7361a3cf1203]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa3ca3a81-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:84:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 435], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 817424, 'reachable_time': 30099, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 405081, 'error': None, 'target': 'ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:33:15 np0005486808 nova_compute[259627]: 2025-10-14 09:33:15.528 2 INFO nova.compute.manager [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Took 11.79 seconds to build instance.#033[00m
Oct 14 05:33:15 np0005486808 nova_compute[259627]: 2025-10-14 09:33:15.540 2 DEBUG oslo_concurrency.lockutils [None req-992c88e9-4c63-4aba-8f64-4a6414354282 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.865s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.545 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ad022b15-c39b-466e-9a37-32f434a8b679]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea4:8437'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 817424, 'tstamp': 817424}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 405082, 'error': None, 'target': 'ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.573 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[84c14942-4ee3-4c66-89d4-05de55e61299]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa3ca3a81-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:84:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 435], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 817424, 'reachable_time': 30099, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 405083, 'error': None, 'target': 'ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.616 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f7c86167-f31b-440e-939e-77504b498832]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.660 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1ae2f6d8-e1e9-4017-b78d-de125c179f91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.662 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3ca3a81-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.662 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.663 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3ca3a81-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:33:15 np0005486808 NetworkManager[44885]: <info>  [1760434395.6655] manager: (tapa3ca3a81-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/624)
Oct 14 05:33:15 np0005486808 kernel: tapa3ca3a81-b0: entered promiscuous mode
Oct 14 05:33:15 np0005486808 nova_compute[259627]: 2025-10-14 09:33:15.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.668 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa3ca3a81-b0, col_values=(('external_ids', {'iface-id': 'a176eb2a-6fbd-4b8e-90b2-85a86523eb62'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:33:15 np0005486808 nova_compute[259627]: 2025-10-14 09:33:15.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:15 np0005486808 ovn_controller[152662]: 2025-10-14T09:33:15Z|01534|binding|INFO|Releasing lport a176eb2a-6fbd-4b8e-90b2-85a86523eb62 from this chassis (sb_readonly=0)
Oct 14 05:33:15 np0005486808 nova_compute[259627]: 2025-10-14 09:33:15.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.683 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a3ca3a81-ba03-43af-8eb7-2462170c9d43.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a3ca3a81-ba03-43af-8eb7-2462170c9d43.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.684 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5180d169-cecb-40b3-8a1f-84a464ebdc1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.685 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-a3ca3a81-ba03-43af-8eb7-2462170c9d43
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/a3ca3a81-ba03-43af-8eb7-2462170c9d43.pid.haproxy
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID a3ca3a81-ba03-43af-8eb7-2462170c9d43
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:33:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:15.686 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43', 'env', 'PROCESS_TAG=haproxy-a3ca3a81-ba03-43af-8eb7-2462170c9d43', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a3ca3a81-ba03-43af-8eb7-2462170c9d43.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:33:15 np0005486808 nova_compute[259627]: 2025-10-14 09:33:15.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:16 np0005486808 podman[405113]: 2025-10-14 09:33:16.101903114 +0000 UTC m=+0.096724635 container create 552b988adfeb56997fa30aa5977645dc5867546abc2f816808fd3e2568c1ed63 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 14 05:33:16 np0005486808 systemd[1]: Started libpod-conmon-552b988adfeb56997fa30aa5977645dc5867546abc2f816808fd3e2568c1ed63.scope.
Oct 14 05:33:16 np0005486808 podman[405113]: 2025-10-14 09:33:16.061361849 +0000 UTC m=+0.056183430 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:33:16 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:33:16 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0a4b15685401cfe097a8856dcb11738118fab0652949510d103f701d6144a22/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:33:16 np0005486808 podman[405113]: 2025-10-14 09:33:16.208194362 +0000 UTC m=+0.203015943 container init 552b988adfeb56997fa30aa5977645dc5867546abc2f816808fd3e2568c1ed63 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 14 05:33:16 np0005486808 podman[405113]: 2025-10-14 09:33:16.217108581 +0000 UTC m=+0.211930122 container start 552b988adfeb56997fa30aa5977645dc5867546abc2f816808fd3e2568c1ed63 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 14 05:33:16 np0005486808 neutron-haproxy-ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43[405128]: [NOTICE]   (405132) : New worker (405134) forked
Oct 14 05:33:16 np0005486808 neutron-haproxy-ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43[405128]: [NOTICE]   (405132) : Loading success.
Oct 14 05:33:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2404: 305 pgs: 305 active+clean; 88 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 14 05:33:17 np0005486808 nova_compute[259627]: 2025-10-14 09:33:17.357 2 DEBUG nova.compute.manager [req-617a3001-ecac-4991-8852-a55a31c896c8 req-8fe8bee7-ebf5-4a8f-a16d-10583ed2e02b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Received event network-vif-plugged-51563204-46d2-4b26-bfa3-a2dc0f43701a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:33:17 np0005486808 nova_compute[259627]: 2025-10-14 09:33:17.358 2 DEBUG oslo_concurrency.lockutils [req-617a3001-ecac-4991-8852-a55a31c896c8 req-8fe8bee7-ebf5-4a8f-a16d-10583ed2e02b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:33:17 np0005486808 nova_compute[259627]: 2025-10-14 09:33:17.359 2 DEBUG oslo_concurrency.lockutils [req-617a3001-ecac-4991-8852-a55a31c896c8 req-8fe8bee7-ebf5-4a8f-a16d-10583ed2e02b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:33:17 np0005486808 nova_compute[259627]: 2025-10-14 09:33:17.359 2 DEBUG oslo_concurrency.lockutils [req-617a3001-ecac-4991-8852-a55a31c896c8 req-8fe8bee7-ebf5-4a8f-a16d-10583ed2e02b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:33:17 np0005486808 nova_compute[259627]: 2025-10-14 09:33:17.359 2 DEBUG nova.compute.manager [req-617a3001-ecac-4991-8852-a55a31c896c8 req-8fe8bee7-ebf5-4a8f-a16d-10583ed2e02b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] No waiting events found dispatching network-vif-plugged-51563204-46d2-4b26-bfa3-a2dc0f43701a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:33:17 np0005486808 nova_compute[259627]: 2025-10-14 09:33:17.359 2 WARNING nova.compute.manager [req-617a3001-ecac-4991-8852-a55a31c896c8 req-8fe8bee7-ebf5-4a8f-a16d-10583ed2e02b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Received unexpected event network-vif-plugged-51563204-46d2-4b26-bfa3-a2dc0f43701a for instance with vm_state active and task_state None.#033[00m
Oct 14 05:33:17 np0005486808 nova_compute[259627]: 2025-10-14 09:33:17.441 2 DEBUG nova.compute.manager [req-82efd6e3-a7c3-43b1-9c44-9cc47fd8a895 req-c1a890c4-0030-4778-891a-b045aff34579 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Received event network-vif-plugged-e4183a77-e102-4885-9a7d-ef0431abf27c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:33:17 np0005486808 nova_compute[259627]: 2025-10-14 09:33:17.442 2 DEBUG oslo_concurrency.lockutils [req-82efd6e3-a7c3-43b1-9c44-9cc47fd8a895 req-c1a890c4-0030-4778-891a-b045aff34579 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:33:17 np0005486808 nova_compute[259627]: 2025-10-14 09:33:17.442 2 DEBUG oslo_concurrency.lockutils [req-82efd6e3-a7c3-43b1-9c44-9cc47fd8a895 req-c1a890c4-0030-4778-891a-b045aff34579 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:33:17 np0005486808 nova_compute[259627]: 2025-10-14 09:33:17.443 2 DEBUG oslo_concurrency.lockutils [req-82efd6e3-a7c3-43b1-9c44-9cc47fd8a895 req-c1a890c4-0030-4778-891a-b045aff34579 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:33:17 np0005486808 nova_compute[259627]: 2025-10-14 09:33:17.443 2 DEBUG nova.compute.manager [req-82efd6e3-a7c3-43b1-9c44-9cc47fd8a895 req-c1a890c4-0030-4778-891a-b045aff34579 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] No waiting events found dispatching network-vif-plugged-e4183a77-e102-4885-9a7d-ef0431abf27c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:33:17 np0005486808 nova_compute[259627]: 2025-10-14 09:33:17.444 2 WARNING nova.compute.manager [req-82efd6e3-a7c3-43b1-9c44-9cc47fd8a895 req-c1a890c4-0030-4778-891a-b045aff34579 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Received unexpected event network-vif-plugged-e4183a77-e102-4885-9a7d-ef0431abf27c for instance with vm_state active and task_state None.#033[00m
Oct 14 05:33:17 np0005486808 nova_compute[259627]: 2025-10-14 09:33:17.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:17 np0005486808 podman[405143]: 2025-10-14 09:33:17.661934006 +0000 UTC m=+0.062613127 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:33:17 np0005486808 podman[405145]: 2025-10-14 09:33:17.724129613 +0000 UTC m=+0.120785965 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:33:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:33:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2405: 305 pgs: 305 active+clean; 88 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1003 KiB/s wr, 16 op/s
Oct 14 05:33:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:33:18 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:33:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:33:18 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:33:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:33:18 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:33:18 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 4caeb134-db52-4422-b60c-1b46481f680f does not exist
Oct 14 05:33:18 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 72b40ef1-5355-485e-917a-2fd6c94589ad does not exist
Oct 14 05:33:18 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev a24966ce-8908-4fb7-bea0-edafcd9cbb58 does not exist
Oct 14 05:33:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:33:18 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:33:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:33:18 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:33:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:33:18 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:33:18 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:33:18 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:33:18 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:33:19 np0005486808 ovn_controller[152662]: 2025-10-14T09:33:19Z|01535|binding|INFO|Releasing lport a176eb2a-6fbd-4b8e-90b2-85a86523eb62 from this chassis (sb_readonly=0)
Oct 14 05:33:19 np0005486808 NetworkManager[44885]: <info>  [1760434399.1945] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/625)
Oct 14 05:33:19 np0005486808 ovn_controller[152662]: 2025-10-14T09:33:19Z|01536|binding|INFO|Releasing lport b37ec0e2-6cb5-44b1-98a2-2c39cdd204b3 from this chassis (sb_readonly=0)
Oct 14 05:33:19 np0005486808 nova_compute[259627]: 2025-10-14 09:33:19.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:19 np0005486808 NetworkManager[44885]: <info>  [1760434399.1963] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/626)
Oct 14 05:33:19 np0005486808 ovn_controller[152662]: 2025-10-14T09:33:19Z|01537|binding|INFO|Releasing lport a176eb2a-6fbd-4b8e-90b2-85a86523eb62 from this chassis (sb_readonly=0)
Oct 14 05:33:19 np0005486808 ovn_controller[152662]: 2025-10-14T09:33:19Z|01538|binding|INFO|Releasing lport b37ec0e2-6cb5-44b1-98a2-2c39cdd204b3 from this chassis (sb_readonly=0)
Oct 14 05:33:19 np0005486808 nova_compute[259627]: 2025-10-14 09:33:19.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:19 np0005486808 nova_compute[259627]: 2025-10-14 09:33:19.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:19 np0005486808 podman[405449]: 2025-10-14 09:33:19.386895615 +0000 UTC m=+0.114930881 container create 5d79f226dbce0e3a1ed2dd840424d9e5ddee6458faddddfa8691f48e6b802c84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kirch, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 14 05:33:19 np0005486808 podman[405449]: 2025-10-14 09:33:19.318925127 +0000 UTC m=+0.046960403 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:33:19 np0005486808 systemd[1]: Started libpod-conmon-5d79f226dbce0e3a1ed2dd840424d9e5ddee6458faddddfa8691f48e6b802c84.scope.
Oct 14 05:33:19 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:33:19 np0005486808 nova_compute[259627]: 2025-10-14 09:33:19.494 2 DEBUG nova.compute.manager [req-e636d05b-4cfe-4f36-b7cf-b814a9407bb5 req-73f43c63-53de-480a-953d-ca9155a101cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Received event network-changed-51563204-46d2-4b26-bfa3-a2dc0f43701a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:33:19 np0005486808 nova_compute[259627]: 2025-10-14 09:33:19.496 2 DEBUG nova.compute.manager [req-e636d05b-4cfe-4f36-b7cf-b814a9407bb5 req-73f43c63-53de-480a-953d-ca9155a101cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Refreshing instance network info cache due to event network-changed-51563204-46d2-4b26-bfa3-a2dc0f43701a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:33:19 np0005486808 nova_compute[259627]: 2025-10-14 09:33:19.497 2 DEBUG oslo_concurrency.lockutils [req-e636d05b-4cfe-4f36-b7cf-b814a9407bb5 req-73f43c63-53de-480a-953d-ca9155a101cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-aa3e17be-c995-4cab-b209-1eadaaff1634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:33:19 np0005486808 nova_compute[259627]: 2025-10-14 09:33:19.497 2 DEBUG oslo_concurrency.lockutils [req-e636d05b-4cfe-4f36-b7cf-b814a9407bb5 req-73f43c63-53de-480a-953d-ca9155a101cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-aa3e17be-c995-4cab-b209-1eadaaff1634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:33:19 np0005486808 nova_compute[259627]: 2025-10-14 09:33:19.497 2 DEBUG nova.network.neutron [req-e636d05b-4cfe-4f36-b7cf-b814a9407bb5 req-73f43c63-53de-480a-953d-ca9155a101cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Refreshing network info cache for port 51563204-46d2-4b26-bfa3-a2dc0f43701a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:33:19 np0005486808 podman[405449]: 2025-10-14 09:33:19.507890764 +0000 UTC m=+0.235926090 container init 5d79f226dbce0e3a1ed2dd840424d9e5ddee6458faddddfa8691f48e6b802c84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kirch, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:33:19 np0005486808 podman[405449]: 2025-10-14 09:33:19.519311555 +0000 UTC m=+0.247346811 container start 5d79f226dbce0e3a1ed2dd840424d9e5ddee6458faddddfa8691f48e6b802c84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kirch, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:33:19 np0005486808 relaxed_kirch[405466]: 167 167
Oct 14 05:33:19 np0005486808 systemd[1]: libpod-5d79f226dbce0e3a1ed2dd840424d9e5ddee6458faddddfa8691f48e6b802c84.scope: Deactivated successfully.
Oct 14 05:33:19 np0005486808 podman[405449]: 2025-10-14 09:33:19.54803541 +0000 UTC m=+0.276070686 container attach 5d79f226dbce0e3a1ed2dd840424d9e5ddee6458faddddfa8691f48e6b802c84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kirch, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Oct 14 05:33:19 np0005486808 podman[405449]: 2025-10-14 09:33:19.548992183 +0000 UTC m=+0.277027499 container died 5d79f226dbce0e3a1ed2dd840424d9e5ddee6458faddddfa8691f48e6b802c84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:33:19 np0005486808 systemd[1]: var-lib-containers-storage-overlay-41649d140a39bfd7f067665e66b63eebad855bf6172c110a52f5a11b9b8818cf-merged.mount: Deactivated successfully.
Oct 14 05:33:19 np0005486808 podman[405449]: 2025-10-14 09:33:19.694474333 +0000 UTC m=+0.422509599 container remove 5d79f226dbce0e3a1ed2dd840424d9e5ddee6458faddddfa8691f48e6b802c84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kirch, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:33:19 np0005486808 systemd[1]: libpod-conmon-5d79f226dbce0e3a1ed2dd840424d9e5ddee6458faddddfa8691f48e6b802c84.scope: Deactivated successfully.
Oct 14 05:33:19 np0005486808 podman[405493]: 2025-10-14 09:33:19.972946447 +0000 UTC m=+0.080069536 container create 92dfcf30771b1f3b43a61e79fae2bf96fc5cad74652e5fac423359b87748b357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:33:20 np0005486808 podman[405493]: 2025-10-14 09:33:19.941605738 +0000 UTC m=+0.048728887 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:33:20 np0005486808 systemd[1]: Started libpod-conmon-92dfcf30771b1f3b43a61e79fae2bf96fc5cad74652e5fac423359b87748b357.scope.
Oct 14 05:33:20 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:33:20 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e954e74529623b608ac188e70e113996c1f67f069fc87f6d3689cd5b6c9a22b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:33:20 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e954e74529623b608ac188e70e113996c1f67f069fc87f6d3689cd5b6c9a22b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:33:20 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e954e74529623b608ac188e70e113996c1f67f069fc87f6d3689cd5b6c9a22b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:33:20 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e954e74529623b608ac188e70e113996c1f67f069fc87f6d3689cd5b6c9a22b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:33:20 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e954e74529623b608ac188e70e113996c1f67f069fc87f6d3689cd5b6c9a22b3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:33:20 np0005486808 podman[405493]: 2025-10-14 09:33:20.169533561 +0000 UTC m=+0.276656660 container init 92dfcf30771b1f3b43a61e79fae2bf96fc5cad74652e5fac423359b87748b357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Oct 14 05:33:20 np0005486808 podman[405493]: 2025-10-14 09:33:20.184994201 +0000 UTC m=+0.292117290 container start 92dfcf30771b1f3b43a61e79fae2bf96fc5cad74652e5fac423359b87748b357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:33:20 np0005486808 podman[405493]: 2025-10-14 09:33:20.193692154 +0000 UTC m=+0.300815253 container attach 92dfcf30771b1f3b43a61e79fae2bf96fc5cad74652e5fac423359b87748b357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_leavitt, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 14 05:33:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2406: 305 pgs: 305 active+clean; 88 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1003 KiB/s wr, 61 op/s
Oct 14 05:33:20 np0005486808 nova_compute[259627]: 2025-10-14 09:33:20.620 2 DEBUG nova.network.neutron [req-e636d05b-4cfe-4f36-b7cf-b814a9407bb5 req-73f43c63-53de-480a-953d-ca9155a101cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Updated VIF entry in instance network info cache for port 51563204-46d2-4b26-bfa3-a2dc0f43701a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:33:20 np0005486808 nova_compute[259627]: 2025-10-14 09:33:20.622 2 DEBUG nova.network.neutron [req-e636d05b-4cfe-4f36-b7cf-b814a9407bb5 req-73f43c63-53de-480a-953d-ca9155a101cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Updating instance_info_cache with network_info: [{"id": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "address": "fa:16:3e:8e:aa:2c", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51563204-46", "ovs_interfaceid": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e4183a77-e102-4885-9a7d-ef0431abf27c", "address": "fa:16:3e:8f:fd:da", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4183a77-e1", "ovs_interfaceid": "e4183a77-e102-4885-9a7d-ef0431abf27c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:33:20 np0005486808 nova_compute[259627]: 2025-10-14 09:33:20.646 2 DEBUG oslo_concurrency.lockutils [req-e636d05b-4cfe-4f36-b7cf-b814a9407bb5 req-73f43c63-53de-480a-953d-ca9155a101cf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-aa3e17be-c995-4cab-b209-1eadaaff1634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:33:20 np0005486808 nova_compute[259627]: 2025-10-14 09:33:20.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:21 np0005486808 eager_leavitt[405510]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:33:21 np0005486808 eager_leavitt[405510]: --> relative data size: 1.0
Oct 14 05:33:21 np0005486808 eager_leavitt[405510]: --> All data devices are unavailable
Oct 14 05:33:21 np0005486808 systemd[1]: libpod-92dfcf30771b1f3b43a61e79fae2bf96fc5cad74652e5fac423359b87748b357.scope: Deactivated successfully.
Oct 14 05:33:21 np0005486808 systemd[1]: libpod-92dfcf30771b1f3b43a61e79fae2bf96fc5cad74652e5fac423359b87748b357.scope: Consumed 1.187s CPU time.
Oct 14 05:33:21 np0005486808 podman[405493]: 2025-10-14 09:33:21.422383626 +0000 UTC m=+1.529506715 container died 92dfcf30771b1f3b43a61e79fae2bf96fc5cad74652e5fac423359b87748b357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_leavitt, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 05:33:21 np0005486808 systemd[1]: var-lib-containers-storage-overlay-e954e74529623b608ac188e70e113996c1f67f069fc87f6d3689cd5b6c9a22b3-merged.mount: Deactivated successfully.
Oct 14 05:33:21 np0005486808 podman[405493]: 2025-10-14 09:33:21.499725223 +0000 UTC m=+1.606848292 container remove 92dfcf30771b1f3b43a61e79fae2bf96fc5cad74652e5fac423359b87748b357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_leavitt, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 14 05:33:21 np0005486808 systemd[1]: libpod-conmon-92dfcf30771b1f3b43a61e79fae2bf96fc5cad74652e5fac423359b87748b357.scope: Deactivated successfully.
Oct 14 05:33:22 np0005486808 podman[405690]: 2025-10-14 09:33:22.36920547 +0000 UTC m=+0.062000982 container create 8e63e07ed5a675b961d1e847931bfede0551265759135a89054ae5f5037fd7a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feistel, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 14 05:33:22 np0005486808 systemd[1]: Started libpod-conmon-8e63e07ed5a675b961d1e847931bfede0551265759135a89054ae5f5037fd7a7.scope.
Oct 14 05:33:22 np0005486808 podman[405690]: 2025-10-14 09:33:22.337916303 +0000 UTC m=+0.030711875 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:33:22 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:33:22 np0005486808 podman[405690]: 2025-10-14 09:33:22.457430425 +0000 UTC m=+0.150225937 container init 8e63e07ed5a675b961d1e847931bfede0551265759135a89054ae5f5037fd7a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 14 05:33:22 np0005486808 podman[405690]: 2025-10-14 09:33:22.468563059 +0000 UTC m=+0.161358541 container start 8e63e07ed5a675b961d1e847931bfede0551265759135a89054ae5f5037fd7a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feistel, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:33:22 np0005486808 podman[405690]: 2025-10-14 09:33:22.473318675 +0000 UTC m=+0.166114257 container attach 8e63e07ed5a675b961d1e847931bfede0551265759135a89054ae5f5037fd7a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 14 05:33:22 np0005486808 elastic_feistel[405706]: 167 167
Oct 14 05:33:22 np0005486808 systemd[1]: libpod-8e63e07ed5a675b961d1e847931bfede0551265759135a89054ae5f5037fd7a7.scope: Deactivated successfully.
Oct 14 05:33:22 np0005486808 podman[405690]: 2025-10-14 09:33:22.476502354 +0000 UTC m=+0.169297866 container died 8e63e07ed5a675b961d1e847931bfede0551265759135a89054ae5f5037fd7a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feistel, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 05:33:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2407: 305 pgs: 305 active+clean; 88 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Oct 14 05:33:22 np0005486808 systemd[1]: var-lib-containers-storage-overlay-6840e6789d229a28ba1d5288ddd8d57e9aed1c781deab2c586db8ef52c9ce0a3-merged.mount: Deactivated successfully.
Oct 14 05:33:22 np0005486808 podman[405690]: 2025-10-14 09:33:22.529646848 +0000 UTC m=+0.222442340 container remove 8e63e07ed5a675b961d1e847931bfede0551265759135a89054ae5f5037fd7a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 14 05:33:22 np0005486808 systemd[1]: libpod-conmon-8e63e07ed5a675b961d1e847931bfede0551265759135a89054ae5f5037fd7a7.scope: Deactivated successfully.
Oct 14 05:33:22 np0005486808 nova_compute[259627]: 2025-10-14 09:33:22.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:22 np0005486808 podman[405730]: 2025-10-14 09:33:22.734442173 +0000 UTC m=+0.071994537 container create 38dfe904146d9eabf4c44da5d9f14aac0eb71d38524ce7874bdb846303a1ecef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feistel, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Oct 14 05:33:22 np0005486808 systemd[1]: Started libpod-conmon-38dfe904146d9eabf4c44da5d9f14aac0eb71d38524ce7874bdb846303a1ecef.scope.
Oct 14 05:33:22 np0005486808 podman[405730]: 2025-10-14 09:33:22.704370896 +0000 UTC m=+0.041923270 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:33:22 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:33:22 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c756110a0b755972a05257486585687b8ba4edccf1221b145cbbeee6a10021/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:33:22 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c756110a0b755972a05257486585687b8ba4edccf1221b145cbbeee6a10021/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:33:22 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c756110a0b755972a05257486585687b8ba4edccf1221b145cbbeee6a10021/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:33:22 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c756110a0b755972a05257486585687b8ba4edccf1221b145cbbeee6a10021/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:33:22 np0005486808 podman[405730]: 2025-10-14 09:33:22.856852597 +0000 UTC m=+0.194404991 container init 38dfe904146d9eabf4c44da5d9f14aac0eb71d38524ce7874bdb846303a1ecef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feistel, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:33:22 np0005486808 podman[405730]: 2025-10-14 09:33:22.869123379 +0000 UTC m=+0.206675743 container start 38dfe904146d9eabf4c44da5d9f14aac0eb71d38524ce7874bdb846303a1ecef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feistel, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:33:22 np0005486808 podman[405730]: 2025-10-14 09:33:22.872206634 +0000 UTC m=+0.209759008 container attach 38dfe904146d9eabf4c44da5d9f14aac0eb71d38524ce7874bdb846303a1ecef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feistel, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:33:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]: {
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:    "0": [
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:        {
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:            "devices": [
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:                "/dev/loop3"
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:            ],
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:            "lv_name": "ceph_lv0",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:            "lv_size": "21470642176",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:            "name": "ceph_lv0",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:            "tags": {
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:                "ceph.cluster_name": "ceph",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:                "ceph.crush_device_class": "",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:                "ceph.encrypted": "0",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:                "ceph.osd_id": "0",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:                "ceph.type": "block",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:                "ceph.vdo": "0"
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:            },
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:            "type": "block",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:            "vg_name": "ceph_vg0"
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:        }
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:    ],
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:    "1": [
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:        {
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:            "devices": [
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:                "/dev/loop4"
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:            ],
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:            "lv_name": "ceph_lv1",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:            "lv_size": "21470642176",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:            "name": "ceph_lv1",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:            "tags": {
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:                "ceph.cluster_name": "ceph",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:                "ceph.crush_device_class": "",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:                "ceph.encrypted": "0",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:                "ceph.osd_id": "1",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:                "ceph.type": "block",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:                "ceph.vdo": "0"
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:            },
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:            "type": "block",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:            "vg_name": "ceph_vg1"
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:        }
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:    ],
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:    "2": [
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:        {
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:            "devices": [
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:                "/dev/loop5"
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:            ],
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:            "lv_name": "ceph_lv2",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:            "lv_size": "21470642176",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:            "name": "ceph_lv2",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:            "tags": {
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:                "ceph.cluster_name": "ceph",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:                "ceph.crush_device_class": "",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:                "ceph.encrypted": "0",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:                "ceph.osd_id": "2",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:                "ceph.type": "block",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:                "ceph.vdo": "0"
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:            },
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:            "type": "block",
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:            "vg_name": "ceph_vg2"
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:        }
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]:    ]
Oct 14 05:33:23 np0005486808 suspicious_feistel[405746]: }
Oct 14 05:33:23 np0005486808 systemd[1]: libpod-38dfe904146d9eabf4c44da5d9f14aac0eb71d38524ce7874bdb846303a1ecef.scope: Deactivated successfully.
Oct 14 05:33:23 np0005486808 podman[405755]: 2025-10-14 09:33:23.732854075 +0000 UTC m=+0.047230950 container died 38dfe904146d9eabf4c44da5d9f14aac0eb71d38524ce7874bdb846303a1ecef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:33:23 np0005486808 systemd[1]: var-lib-containers-storage-overlay-b5c756110a0b755972a05257486585687b8ba4edccf1221b145cbbeee6a10021-merged.mount: Deactivated successfully.
Oct 14 05:33:23 np0005486808 podman[405755]: 2025-10-14 09:33:23.792816236 +0000 UTC m=+0.107193071 container remove 38dfe904146d9eabf4c44da5d9f14aac0eb71d38524ce7874bdb846303a1ecef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:33:23 np0005486808 systemd[1]: libpod-conmon-38dfe904146d9eabf4c44da5d9f14aac0eb71d38524ce7874bdb846303a1ecef.scope: Deactivated successfully.
Oct 14 05:33:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2408: 305 pgs: 305 active+clean; 88 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 05:33:24 np0005486808 podman[405911]: 2025-10-14 09:33:24.577451251 +0000 UTC m=+0.049368992 container create ea22c19f676369784f84f37e9def840be80f63f404339561db956367605cc98d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_hermann, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 14 05:33:24 np0005486808 systemd[1]: Started libpod-conmon-ea22c19f676369784f84f37e9def840be80f63f404339561db956367605cc98d.scope.
Oct 14 05:33:24 np0005486808 podman[405911]: 2025-10-14 09:33:24.549154687 +0000 UTC m=+0.021072458 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:33:24 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:33:24 np0005486808 podman[405911]: 2025-10-14 09:33:24.678594633 +0000 UTC m=+0.150512454 container init ea22c19f676369784f84f37e9def840be80f63f404339561db956367605cc98d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_hermann, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 14 05:33:24 np0005486808 podman[405911]: 2025-10-14 09:33:24.689756477 +0000 UTC m=+0.161674238 container start ea22c19f676369784f84f37e9def840be80f63f404339561db956367605cc98d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_hermann, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 14 05:33:24 np0005486808 podman[405911]: 2025-10-14 09:33:24.694118014 +0000 UTC m=+0.166035765 container attach ea22c19f676369784f84f37e9def840be80f63f404339561db956367605cc98d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 14 05:33:24 np0005486808 inspiring_hermann[405927]: 167 167
Oct 14 05:33:24 np0005486808 systemd[1]: libpod-ea22c19f676369784f84f37e9def840be80f63f404339561db956367605cc98d.scope: Deactivated successfully.
Oct 14 05:33:24 np0005486808 podman[405911]: 2025-10-14 09:33:24.696217666 +0000 UTC m=+0.168135397 container died ea22c19f676369784f84f37e9def840be80f63f404339561db956367605cc98d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_hermann, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 14 05:33:24 np0005486808 systemd[1]: var-lib-containers-storage-overlay-356f62557e60a213d744394ade3ea97c99228890944dac8e54d36464abe526b6-merged.mount: Deactivated successfully.
Oct 14 05:33:24 np0005486808 podman[405911]: 2025-10-14 09:33:24.74405995 +0000 UTC m=+0.215977711 container remove ea22c19f676369784f84f37e9def840be80f63f404339561db956367605cc98d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_hermann, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 14 05:33:24 np0005486808 systemd[1]: libpod-conmon-ea22c19f676369784f84f37e9def840be80f63f404339561db956367605cc98d.scope: Deactivated successfully.
Oct 14 05:33:24 np0005486808 podman[405949]: 2025-10-14 09:33:24.931857638 +0000 UTC m=+0.045942838 container create 20ff4d4bde94d3f32afa74191a5d312fd0c6c88262f47b9216a6656b32704afe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_fermi, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 05:33:24 np0005486808 systemd[1]: Started libpod-conmon-20ff4d4bde94d3f32afa74191a5d312fd0c6c88262f47b9216a6656b32704afe.scope.
Oct 14 05:33:25 np0005486808 podman[405949]: 2025-10-14 09:33:24.909266844 +0000 UTC m=+0.023352064 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:33:25 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:33:25 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f119410268c6148e20f0edc33baff31f3b7dd3df04f0814907d89fd8441a86b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:33:25 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f119410268c6148e20f0edc33baff31f3b7dd3df04f0814907d89fd8441a86b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:33:25 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f119410268c6148e20f0edc33baff31f3b7dd3df04f0814907d89fd8441a86b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:33:25 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f119410268c6148e20f0edc33baff31f3b7dd3df04f0814907d89fd8441a86b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:33:25 np0005486808 podman[405949]: 2025-10-14 09:33:25.039678083 +0000 UTC m=+0.153763373 container init 20ff4d4bde94d3f32afa74191a5d312fd0c6c88262f47b9216a6656b32704afe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_fermi, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:33:25 np0005486808 podman[405949]: 2025-10-14 09:33:25.048186842 +0000 UTC m=+0.162272022 container start 20ff4d4bde94d3f32afa74191a5d312fd0c6c88262f47b9216a6656b32704afe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 14 05:33:25 np0005486808 podman[405949]: 2025-10-14 09:33:25.052349924 +0000 UTC m=+0.166435134 container attach 20ff4d4bde94d3f32afa74191a5d312fd0c6c88262f47b9216a6656b32704afe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_fermi, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:33:25 np0005486808 nova_compute[259627]: 2025-10-14 09:33:25.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:26 np0005486808 brave_fermi[405968]: {
Oct 14 05:33:26 np0005486808 brave_fermi[405968]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:33:26 np0005486808 brave_fermi[405968]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:33:26 np0005486808 brave_fermi[405968]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:33:26 np0005486808 brave_fermi[405968]:        "osd_id": 2,
Oct 14 05:33:26 np0005486808 brave_fermi[405968]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:33:26 np0005486808 brave_fermi[405968]:        "type": "bluestore"
Oct 14 05:33:26 np0005486808 brave_fermi[405968]:    },
Oct 14 05:33:26 np0005486808 brave_fermi[405968]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:33:26 np0005486808 brave_fermi[405968]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:33:26 np0005486808 brave_fermi[405968]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:33:26 np0005486808 brave_fermi[405968]:        "osd_id": 1,
Oct 14 05:33:26 np0005486808 brave_fermi[405968]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:33:26 np0005486808 brave_fermi[405968]:        "type": "bluestore"
Oct 14 05:33:26 np0005486808 brave_fermi[405968]:    },
Oct 14 05:33:26 np0005486808 brave_fermi[405968]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:33:26 np0005486808 brave_fermi[405968]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:33:26 np0005486808 brave_fermi[405968]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:33:26 np0005486808 brave_fermi[405968]:        "osd_id": 0,
Oct 14 05:33:26 np0005486808 brave_fermi[405968]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:33:26 np0005486808 brave_fermi[405968]:        "type": "bluestore"
Oct 14 05:33:26 np0005486808 brave_fermi[405968]:    }
Oct 14 05:33:26 np0005486808 brave_fermi[405968]: }
Oct 14 05:33:26 np0005486808 systemd[1]: libpod-20ff4d4bde94d3f32afa74191a5d312fd0c6c88262f47b9216a6656b32704afe.scope: Deactivated successfully.
Oct 14 05:33:26 np0005486808 systemd[1]: libpod-20ff4d4bde94d3f32afa74191a5d312fd0c6c88262f47b9216a6656b32704afe.scope: Consumed 1.039s CPU time.
Oct 14 05:33:26 np0005486808 podman[405949]: 2025-10-14 09:33:26.099864051 +0000 UTC m=+1.213949271 container died 20ff4d4bde94d3f32afa74191a5d312fd0c6c88262f47b9216a6656b32704afe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 14 05:33:26 np0005486808 systemd[1]: var-lib-containers-storage-overlay-0f119410268c6148e20f0edc33baff31f3b7dd3df04f0814907d89fd8441a86b-merged.mount: Deactivated successfully.
Oct 14 05:33:26 np0005486808 podman[405949]: 2025-10-14 09:33:26.176234155 +0000 UTC m=+1.290319335 container remove 20ff4d4bde94d3f32afa74191a5d312fd0c6c88262f47b9216a6656b32704afe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_fermi, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:33:26 np0005486808 systemd[1]: libpod-conmon-20ff4d4bde94d3f32afa74191a5d312fd0c6c88262f47b9216a6656b32704afe.scope: Deactivated successfully.
Oct 14 05:33:26 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:33:26 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:33:26 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:33:26 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:33:26 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 8b09d45e-7094-4638-b656-928b2ee5e756 does not exist
Oct 14 05:33:26 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev ea55e277-664b-4431-8d95-55260296d7bb does not exist
Oct 14 05:33:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2409: 305 pgs: 305 active+clean; 88 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 14 05:33:27 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Oct 14 05:33:27 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:33:27 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:33:27 np0005486808 nova_compute[259627]: 2025-10-14 09:33:27.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:27 np0005486808 ovn_controller[152662]: 2025-10-14T09:33:27Z|00182|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8e:aa:2c 10.100.0.13
Oct 14 05:33:27 np0005486808 ovn_controller[152662]: 2025-10-14T09:33:27Z|00183|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8e:aa:2c 10.100.0.13
Oct 14 05:33:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:33:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2410: 305 pgs: 305 active+clean; 88 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Oct 14 05:33:29 np0005486808 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 05:33:29 np0005486808 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 42K writes, 170K keys, 42K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.04 MB/s#012Cumulative WAL: 42K writes, 15K syncs, 2.74 writes per sync, written: 0.17 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5941 writes, 25K keys, 5941 commit groups, 1.0 writes per commit group, ingest: 27.85 MB, 0.05 MB/s#012Interval WAL: 5941 writes, 2282 syncs, 2.60 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 05:33:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2411: 305 pgs: 305 active+clean; 120 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 117 op/s
Oct 14 05:33:30 np0005486808 podman[406065]: 2025-10-14 09:33:30.661225426 +0000 UTC m=+0.071725832 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 14 05:33:30 np0005486808 podman[406064]: 2025-10-14 09:33:30.700088089 +0000 UTC m=+0.113922066 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 05:33:30 np0005486808 nova_compute[259627]: 2025-10-14 09:33:30.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2412: 305 pgs: 305 active+clean; 121 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 932 KiB/s rd, 2.1 MiB/s wr, 82 op/s
Oct 14 05:33:32 np0005486808 nova_compute[259627]: 2025-10-14 09:33:32.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:33:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:33:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:33:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:33:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:33:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:33:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:33:32
Oct 14 05:33:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:33:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:33:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'images', 'default.rgw.log', '.mgr', '.rgw.root', 'vms', 'volumes', 'default.rgw.control']
Oct 14 05:33:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:33:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:33:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:33:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:33:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:33:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:33:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:33:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:33:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:33:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:33:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:33:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:33:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2413: 305 pgs: 305 active+clean; 121 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 05:33:34 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 05:33:34 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 42K writes, 164K keys, 42K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.04 MB/s#012Cumulative WAL: 42K writes, 15K syncs, 2.73 writes per sync, written: 0.16 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4857 writes, 19K keys, 4857 commit groups, 1.0 writes per commit group, ingest: 20.94 MB, 0.03 MB/s#012Interval WAL: 4857 writes, 1947 syncs, 2.49 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 05:33:35 np0005486808 nova_compute[259627]: 2025-10-14 09:33:35.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2414: 305 pgs: 305 active+clean; 121 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 05:33:37 np0005486808 nova_compute[259627]: 2025-10-14 09:33:37.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:37 np0005486808 nova_compute[259627]: 2025-10-14 09:33:37.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:33:37 np0005486808 nova_compute[259627]: 2025-10-14 09:33:37.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 14 05:33:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:33:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2415: 305 pgs: 305 active+clean; 121 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 05:33:39 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 05:33:39 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 35K writes, 139K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.03 MB/s#012Cumulative WAL: 35K writes, 12K syncs, 2.74 writes per sync, written: 0.13 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4462 writes, 17K keys, 4462 commit groups, 1.0 writes per commit group, ingest: 20.06 MB, 0.03 MB/s#012Interval WAL: 4462 writes, 1787 syncs, 2.50 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 05:33:40 np0005486808 nova_compute[259627]: 2025-10-14 09:33:40.473 2 DEBUG oslo_concurrency.lockutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:33:40 np0005486808 nova_compute[259627]: 2025-10-14 09:33:40.473 2 DEBUG oslo_concurrency.lockutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:33:40 np0005486808 nova_compute[259627]: 2025-10-14 09:33:40.491 2 DEBUG nova.compute.manager [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:33:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2416: 305 pgs: 305 active+clean; 121 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 05:33:40 np0005486808 nova_compute[259627]: 2025-10-14 09:33:40.560 2 DEBUG oslo_concurrency.lockutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:33:40 np0005486808 nova_compute[259627]: 2025-10-14 09:33:40.560 2 DEBUG oslo_concurrency.lockutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:33:40 np0005486808 nova_compute[259627]: 2025-10-14 09:33:40.568 2 DEBUG nova.virt.hardware [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:33:40 np0005486808 nova_compute[259627]: 2025-10-14 09:33:40.568 2 INFO nova.compute.claims [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:33:40 np0005486808 ceph-mgr[74543]: [devicehealth INFO root] Check health
Oct 14 05:33:40 np0005486808 nova_compute[259627]: 2025-10-14 09:33:40.680 2 DEBUG oslo_concurrency.processutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:33:40 np0005486808 nova_compute[259627]: 2025-10-14 09:33:40.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:33:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/316624555' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:33:41 np0005486808 nova_compute[259627]: 2025-10-14 09:33:41.131 2 DEBUG oslo_concurrency.processutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:33:41 np0005486808 nova_compute[259627]: 2025-10-14 09:33:41.139 2 DEBUG nova.compute.provider_tree [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:33:41 np0005486808 nova_compute[259627]: 2025-10-14 09:33:41.155 2 DEBUG nova.scheduler.client.report [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:33:41 np0005486808 nova_compute[259627]: 2025-10-14 09:33:41.184 2 DEBUG oslo_concurrency.lockutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:33:41 np0005486808 nova_compute[259627]: 2025-10-14 09:33:41.185 2 DEBUG nova.compute.manager [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:33:41 np0005486808 nova_compute[259627]: 2025-10-14 09:33:41.243 2 DEBUG nova.compute.manager [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:33:41 np0005486808 nova_compute[259627]: 2025-10-14 09:33:41.243 2 DEBUG nova.network.neutron [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:33:41 np0005486808 nova_compute[259627]: 2025-10-14 09:33:41.267 2 INFO nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:33:41 np0005486808 nova_compute[259627]: 2025-10-14 09:33:41.299 2 DEBUG nova.compute.manager [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:33:41 np0005486808 nova_compute[259627]: 2025-10-14 09:33:41.441 2 DEBUG nova.compute.manager [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:33:41 np0005486808 nova_compute[259627]: 2025-10-14 09:33:41.443 2 DEBUG nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:33:41 np0005486808 nova_compute[259627]: 2025-10-14 09:33:41.443 2 INFO nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Creating image(s)#033[00m
Oct 14 05:33:41 np0005486808 nova_compute[259627]: 2025-10-14 09:33:41.478 2 DEBUG nova.storage.rbd_utils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 1a22f837-6095-4ccc-8e71-79e69b15bc5b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:33:41 np0005486808 nova_compute[259627]: 2025-10-14 09:33:41.514 2 DEBUG nova.storage.rbd_utils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 1a22f837-6095-4ccc-8e71-79e69b15bc5b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:33:41 np0005486808 nova_compute[259627]: 2025-10-14 09:33:41.550 2 DEBUG nova.storage.rbd_utils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 1a22f837-6095-4ccc-8e71-79e69b15bc5b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:33:41 np0005486808 nova_compute[259627]: 2025-10-14 09:33:41.556 2 DEBUG oslo_concurrency.processutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:33:41 np0005486808 nova_compute[259627]: 2025-10-14 09:33:41.660 2 DEBUG oslo_concurrency.processutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:33:41 np0005486808 nova_compute[259627]: 2025-10-14 09:33:41.661 2 DEBUG oslo_concurrency.lockutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:33:41 np0005486808 nova_compute[259627]: 2025-10-14 09:33:41.663 2 DEBUG oslo_concurrency.lockutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:33:41 np0005486808 nova_compute[259627]: 2025-10-14 09:33:41.663 2 DEBUG oslo_concurrency.lockutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:33:41 np0005486808 nova_compute[259627]: 2025-10-14 09:33:41.697 2 DEBUG nova.storage.rbd_utils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 1a22f837-6095-4ccc-8e71-79e69b15bc5b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:33:41 np0005486808 nova_compute[259627]: 2025-10-14 09:33:41.702 2 DEBUG oslo_concurrency.processutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 1a22f837-6095-4ccc-8e71-79e69b15bc5b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:33:42 np0005486808 nova_compute[259627]: 2025-10-14 09:33:42.017 2 DEBUG oslo_concurrency.processutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 1a22f837-6095-4ccc-8e71-79e69b15bc5b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.315s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:33:42 np0005486808 nova_compute[259627]: 2025-10-14 09:33:42.053 2 DEBUG nova.policy [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30cd4a7832e74a8ba07238937405c4ea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:33:42 np0005486808 nova_compute[259627]: 2025-10-14 09:33:42.095 2 DEBUG nova.storage.rbd_utils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] resizing rbd image 1a22f837-6095-4ccc-8e71-79e69b15bc5b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:33:42 np0005486808 nova_compute[259627]: 2025-10-14 09:33:42.209 2 DEBUG nova.objects.instance [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'migration_context' on Instance uuid 1a22f837-6095-4ccc-8e71-79e69b15bc5b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:33:42 np0005486808 nova_compute[259627]: 2025-10-14 09:33:42.235 2 DEBUG nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:33:42 np0005486808 nova_compute[259627]: 2025-10-14 09:33:42.236 2 DEBUG nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Ensure instance console log exists: /var/lib/nova/instances/1a22f837-6095-4ccc-8e71-79e69b15bc5b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:33:42 np0005486808 nova_compute[259627]: 2025-10-14 09:33:42.237 2 DEBUG oslo_concurrency.lockutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:33:42 np0005486808 nova_compute[259627]: 2025-10-14 09:33:42.237 2 DEBUG oslo_concurrency.lockutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:33:42 np0005486808 nova_compute[259627]: 2025-10-14 09:33:42.238 2 DEBUG oslo_concurrency.lockutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:33:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2417: 305 pgs: 305 active+clean; 121 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 6.2 KiB/s rd, 77 KiB/s wr, 10 op/s
Oct 14 05:33:42 np0005486808 nova_compute[259627]: 2025-10-14 09:33:42.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:33:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:33:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:33:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:33:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:33:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007595910049163248 of space, bias 1.0, pg target 0.22787730147489746 quantized to 32 (current 32)
Oct 14 05:33:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:33:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:33:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:33:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:33:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:33:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 05:33:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:33:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:33:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:33:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:33:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:33:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:33:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:33:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:33:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:33:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:33:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:33:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:33:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:43.528 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:33:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:43.529 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 14 05:33:43 np0005486808 nova_compute[259627]: 2025-10-14 09:33:43.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:43 np0005486808 nova_compute[259627]: 2025-10-14 09:33:43.630 2 DEBUG nova.network.neutron [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Successfully created port: b8b13ccf-81a6-410e-a209-ce58758d66f4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:33:43 np0005486808 nova_compute[259627]: 2025-10-14 09:33:43.993 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:33:44 np0005486808 nova_compute[259627]: 2025-10-14 09:33:44.201 2 DEBUG nova.network.neutron [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Successfully created port: bf1fcf69-c1da-4a76-8005-54c5457a915a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:33:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2418: 305 pgs: 305 active+clean; 121 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 16 KiB/s wr, 0 op/s
Oct 14 05:33:44 np0005486808 nova_compute[259627]: 2025-10-14 09:33:44.757 2 DEBUG nova.network.neutron [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Successfully updated port: b8b13ccf-81a6-410e-a209-ce58758d66f4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:33:44 np0005486808 nova_compute[259627]: 2025-10-14 09:33:44.860 2 DEBUG nova.compute.manager [req-12363ac1-7176-4687-a7f7-ab4a507b96a7 req-1e9b3b97-b5d1-42ed-9196-6ac3590a2af1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Received event network-changed-b8b13ccf-81a6-410e-a209-ce58758d66f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:33:44 np0005486808 nova_compute[259627]: 2025-10-14 09:33:44.861 2 DEBUG nova.compute.manager [req-12363ac1-7176-4687-a7f7-ab4a507b96a7 req-1e9b3b97-b5d1-42ed-9196-6ac3590a2af1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Refreshing instance network info cache due to event network-changed-b8b13ccf-81a6-410e-a209-ce58758d66f4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:33:44 np0005486808 nova_compute[259627]: 2025-10-14 09:33:44.862 2 DEBUG oslo_concurrency.lockutils [req-12363ac1-7176-4687-a7f7-ab4a507b96a7 req-1e9b3b97-b5d1-42ed-9196-6ac3590a2af1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-1a22f837-6095-4ccc-8e71-79e69b15bc5b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:33:44 np0005486808 nova_compute[259627]: 2025-10-14 09:33:44.862 2 DEBUG oslo_concurrency.lockutils [req-12363ac1-7176-4687-a7f7-ab4a507b96a7 req-1e9b3b97-b5d1-42ed-9196-6ac3590a2af1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-1a22f837-6095-4ccc-8e71-79e69b15bc5b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:33:44 np0005486808 nova_compute[259627]: 2025-10-14 09:33:44.862 2 DEBUG nova.network.neutron [req-12363ac1-7176-4687-a7f7-ab4a507b96a7 req-1e9b3b97-b5d1-42ed-9196-6ac3590a2af1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Refreshing network info cache for port b8b13ccf-81a6-410e-a209-ce58758d66f4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:33:45 np0005486808 nova_compute[259627]: 2025-10-14 09:33:45.038 2 DEBUG nova.network.neutron [req-12363ac1-7176-4687-a7f7-ab4a507b96a7 req-1e9b3b97-b5d1-42ed-9196-6ac3590a2af1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:33:45 np0005486808 nova_compute[259627]: 2025-10-14 09:33:45.298 2 DEBUG nova.network.neutron [req-12363ac1-7176-4687-a7f7-ab4a507b96a7 req-1e9b3b97-b5d1-42ed-9196-6ac3590a2af1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:33:45 np0005486808 nova_compute[259627]: 2025-10-14 09:33:45.336 2 DEBUG nova.network.neutron [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Successfully updated port: bf1fcf69-c1da-4a76-8005-54c5457a915a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:33:45 np0005486808 nova_compute[259627]: 2025-10-14 09:33:45.337 2 DEBUG oslo_concurrency.lockutils [req-12363ac1-7176-4687-a7f7-ab4a507b96a7 req-1e9b3b97-b5d1-42ed-9196-6ac3590a2af1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-1a22f837-6095-4ccc-8e71-79e69b15bc5b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:33:45 np0005486808 nova_compute[259627]: 2025-10-14 09:33:45.356 2 DEBUG oslo_concurrency.lockutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "refresh_cache-1a22f837-6095-4ccc-8e71-79e69b15bc5b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:33:45 np0005486808 nova_compute[259627]: 2025-10-14 09:33:45.357 2 DEBUG oslo_concurrency.lockutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquired lock "refresh_cache-1a22f837-6095-4ccc-8e71-79e69b15bc5b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:33:45 np0005486808 nova_compute[259627]: 2025-10-14 09:33:45.357 2 DEBUG nova.network.neutron [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:33:45 np0005486808 nova_compute[259627]: 2025-10-14 09:33:45.480 2 DEBUG nova.network.neutron [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:33:46 np0005486808 nova_compute[259627]: 2025-10-14 09:33:46.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2419: 305 pgs: 305 active+clean; 167 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 14 05:33:46 np0005486808 nova_compute[259627]: 2025-10-14 09:33:46.964 2 DEBUG nova.compute.manager [req-fa044821-a20f-4444-9b96-f68f43b01e45 req-c735924c-98c0-42bf-984f-604ff0203a9b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Received event network-changed-bf1fcf69-c1da-4a76-8005-54c5457a915a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:33:46 np0005486808 nova_compute[259627]: 2025-10-14 09:33:46.965 2 DEBUG nova.compute.manager [req-fa044821-a20f-4444-9b96-f68f43b01e45 req-c735924c-98c0-42bf-984f-604ff0203a9b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Refreshing instance network info cache due to event network-changed-bf1fcf69-c1da-4a76-8005-54c5457a915a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:33:46 np0005486808 nova_compute[259627]: 2025-10-14 09:33:46.965 2 DEBUG oslo_concurrency.lockutils [req-fa044821-a20f-4444-9b96-f68f43b01e45 req-c735924c-98c0-42bf-984f-604ff0203a9b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-1a22f837-6095-4ccc-8e71-79e69b15bc5b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:33:47 np0005486808 nova_compute[259627]: 2025-10-14 09:33:47.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:47 np0005486808 nova_compute[259627]: 2025-10-14 09:33:47.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:33:48 np0005486808 nova_compute[259627]: 2025-10-14 09:33:48.005 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:33:48 np0005486808 nova_compute[259627]: 2025-10-14 09:33:48.005 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:33:48 np0005486808 nova_compute[259627]: 2025-10-14 09:33:48.006 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:33:48 np0005486808 nova_compute[259627]: 2025-10-14 09:33:48.006 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:33:48 np0005486808 nova_compute[259627]: 2025-10-14 09:33:48.007 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:33:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:33:48 np0005486808 nova_compute[259627]: 2025-10-14 09:33:48.204 2 DEBUG nova.network.neutron [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Updating instance_info_cache with network_info: [{"id": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "address": "fa:16:3e:c0:91:ad", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b13ccf-81", "ovs_interfaceid": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "address": "fa:16:3e:61:ef:18", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", 
"type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1fcf69-c1", "ovs_interfaceid": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:33:48 np0005486808 nova_compute[259627]: 2025-10-14 09:33:48.232 2 DEBUG oslo_concurrency.lockutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Releasing lock "refresh_cache-1a22f837-6095-4ccc-8e71-79e69b15bc5b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:33:48 np0005486808 nova_compute[259627]: 2025-10-14 09:33:48.233 2 DEBUG nova.compute.manager [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Instance network_info: |[{"id": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "address": "fa:16:3e:c0:91:ad", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b13ccf-81", "ovs_interfaceid": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "address": "fa:16:3e:61:ef:18", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", 
"version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1fcf69-c1", "ovs_interfaceid": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:33:48 np0005486808 nova_compute[259627]: 2025-10-14 09:33:48.234 2 DEBUG oslo_concurrency.lockutils [req-fa044821-a20f-4444-9b96-f68f43b01e45 req-c735924c-98c0-42bf-984f-604ff0203a9b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-1a22f837-6095-4ccc-8e71-79e69b15bc5b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:33:48 np0005486808 nova_compute[259627]: 2025-10-14 09:33:48.235 2 DEBUG nova.network.neutron [req-fa044821-a20f-4444-9b96-f68f43b01e45 req-c735924c-98c0-42bf-984f-604ff0203a9b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Refreshing network info cache for port bf1fcf69-c1da-4a76-8005-54c5457a915a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:33:48 np0005486808 nova_compute[259627]: 2025-10-14 09:33:48.242 2 DEBUG nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Start _get_guest_xml network_info=[{"id": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "address": "fa:16:3e:c0:91:ad", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b13ccf-81", "ovs_interfaceid": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "address": "fa:16:3e:61:ef:18", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1fcf69-c1", "ovs_interfaceid": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:33:48 np0005486808 nova_compute[259627]: 2025-10-14 09:33:48.249 2 WARNING nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:33:48 np0005486808 nova_compute[259627]: 2025-10-14 09:33:48.258 2 DEBUG nova.virt.libvirt.host [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:33:48 np0005486808 nova_compute[259627]: 2025-10-14 09:33:48.259 2 DEBUG nova.virt.libvirt.host [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:33:48 np0005486808 nova_compute[259627]: 2025-10-14 09:33:48.271 2 DEBUG nova.virt.libvirt.host [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:33:48 np0005486808 nova_compute[259627]: 2025-10-14 09:33:48.272 2 DEBUG nova.virt.libvirt.host [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:33:48 np0005486808 nova_compute[259627]: 2025-10-14 09:33:48.273 2 DEBUG nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:33:48 np0005486808 nova_compute[259627]: 2025-10-14 09:33:48.273 2 DEBUG nova.virt.hardware [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:33:48 np0005486808 nova_compute[259627]: 2025-10-14 09:33:48.274 2 DEBUG nova.virt.hardware [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:33:48 np0005486808 nova_compute[259627]: 2025-10-14 09:33:48.274 2 DEBUG nova.virt.hardware [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:33:48 np0005486808 nova_compute[259627]: 2025-10-14 09:33:48.275 2 DEBUG nova.virt.hardware [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:33:48 np0005486808 nova_compute[259627]: 2025-10-14 09:33:48.275 2 DEBUG nova.virt.hardware [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:33:48 np0005486808 nova_compute[259627]: 2025-10-14 09:33:48.276 2 DEBUG nova.virt.hardware [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:33:48 np0005486808 nova_compute[259627]: 2025-10-14 09:33:48.276 2 DEBUG nova.virt.hardware [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:33:48 np0005486808 nova_compute[259627]: 2025-10-14 09:33:48.277 2 DEBUG nova.virt.hardware [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:33:48 np0005486808 nova_compute[259627]: 2025-10-14 09:33:48.277 2 DEBUG nova.virt.hardware [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:33:48 np0005486808 nova_compute[259627]: 2025-10-14 09:33:48.278 2 DEBUG nova.virt.hardware [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:33:48 np0005486808 nova_compute[259627]: 2025-10-14 09:33:48.278 2 DEBUG nova.virt.hardware [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:33:48 np0005486808 nova_compute[259627]: 2025-10-14 09:33:48.285 2 DEBUG oslo_concurrency.processutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:33:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:33:48 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3152093417' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:33:48 np0005486808 nova_compute[259627]: 2025-10-14 09:33:48.493 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:33:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2420: 305 pgs: 305 active+clean; 167 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:33:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:48.531 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:33:48 np0005486808 nova_compute[259627]: 2025-10-14 09:33:48.585 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000008e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:33:48 np0005486808 nova_compute[259627]: 2025-10-14 09:33:48.586 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-0000008e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:33:48 np0005486808 podman[406338]: 2025-10-14 09:33:48.644753328 +0000 UTC m=+0.078903187 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid)
Oct 14 05:33:48 np0005486808 podman[406337]: 2025-10-14 09:33:48.665119448 +0000 UTC m=+0.103330997 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 14 05:33:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:33:48 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3086076806' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:33:48 np0005486808 nova_compute[259627]: 2025-10-14 09:33:48.804 2 DEBUG oslo_concurrency.processutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:33:48 np0005486808 nova_compute[259627]: 2025-10-14 09:33:48.833 2 DEBUG nova.storage.rbd_utils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 1a22f837-6095-4ccc-8e71-79e69b15bc5b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:33:48 np0005486808 nova_compute[259627]: 2025-10-14 09:33:48.837 2 DEBUG oslo_concurrency.processutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:33:48 np0005486808 nova_compute[259627]: 2025-10-14 09:33:48.903 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:33:48 np0005486808 nova_compute[259627]: 2025-10-14 09:33:48.904 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3400MB free_disk=59.92195129394531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:33:48 np0005486808 nova_compute[259627]: 2025-10-14 09:33:48.905 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:33:48 np0005486808 nova_compute[259627]: 2025-10-14 09:33:48.905 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.099 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance aa3e17be-c995-4cab-b209-1eadaaff1634 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.099 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 1a22f837-6095-4ccc-8e71-79e69b15bc5b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.100 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.100 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:33:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:33:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/244745326' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.275 2 DEBUG oslo_concurrency.processutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.277 2 DEBUG nova.virt.libvirt.vif [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:33:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-521253734',display_name='tempest-TestGettingAddress-server-521253734',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-521253734',id=143,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF8dHWwoASzwaEtDMH2Wi+f2synjGjjZ9XZG0MWQfZpxTlFQLXxLEnN5PLSLnAZWxtveM7NKytid8Sra6Q2ThMibNL4mHEXIr9+f4S2pxjfer3narHj+r/DZfKSVQL2czA==',key_name='tempest-TestGettingAddress-1935488696',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-93iw5j7v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:33:41Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=1a22f837-6095-4ccc-8e71-79e69b15bc5b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "address": "fa:16:3e:c0:91:ad", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b13ccf-81", "ovs_interfaceid": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.278 2 DEBUG nova.network.os_vif_util [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "address": "fa:16:3e:c0:91:ad", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b13ccf-81", "ovs_interfaceid": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.279 2 DEBUG nova.network.os_vif_util [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:91:ad,bridge_name='br-int',has_traffic_filtering=True,id=b8b13ccf-81a6-410e-a209-ce58758d66f4,network=Network(0ec9546c-0acc-437f-9f6e-7db1743faf53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8b13ccf-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.281 2 DEBUG nova.virt.libvirt.vif [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:33:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-521253734',display_name='tempest-TestGettingAddress-server-521253734',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-521253734',id=143,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF8dHWwoASzwaEtDMH2Wi+f2synjGjjZ9XZG0MWQfZpxTlFQLXxLEnN5PLSLnAZWxtveM7NKytid8Sra6Q2ThMibNL4mHEXIr9+f4S2pxjfer3narHj+r/DZfKSVQL2czA==',key_name='tempest-TestGettingAddress-1935488696',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-93iw5j7v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:33:41Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=1a22f837-6095-4ccc-8e71-79e69b15bc5b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "address": "fa:16:3e:61:ef:18", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1fcf69-c1", "ovs_interfaceid": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.281 2 DEBUG nova.network.os_vif_util [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "address": "fa:16:3e:61:ef:18", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1fcf69-c1", "ovs_interfaceid": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.282 2 DEBUG nova.network.os_vif_util [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:ef:18,bridge_name='br-int',has_traffic_filtering=True,id=bf1fcf69-c1da-4a76-8005-54c5457a915a,network=Network(a3ca3a81-ba03-43af-8eb7-2462170c9d43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf1fcf69-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.284 2 DEBUG nova.objects.instance [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'pci_devices' on Instance uuid 1a22f837-6095-4ccc-8e71-79e69b15bc5b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.303 2 DEBUG nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:33:49 np0005486808 nova_compute[259627]:  <uuid>1a22f837-6095-4ccc-8e71-79e69b15bc5b</uuid>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:  <name>instance-0000008f</name>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:33:49 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:      <nova:name>tempest-TestGettingAddress-server-521253734</nova:name>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:33:48</nova:creationTime>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:33:49 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:        <nova:user uuid="30cd4a7832e74a8ba07238937405c4ea">tempest-TestGettingAddress-262346303-project-member</nova:user>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:        <nova:project uuid="878ad3f42dd94bef8cc66ab3d67f03bf">tempest-TestGettingAddress-262346303</nova:project>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:        <nova:port uuid="b8b13ccf-81a6-410e-a209-ce58758d66f4">
Oct 14 05:33:49 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:        <nova:port uuid="bf1fcf69-c1da-4a76-8005-54c5457a915a">
Oct 14 05:33:49 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe61:ef18" ipVersion="6"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe61:ef18" ipVersion="6"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:      <entry name="serial">1a22f837-6095-4ccc-8e71-79e69b15bc5b</entry>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:      <entry name="uuid">1a22f837-6095-4ccc-8e71-79e69b15bc5b</entry>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:33:49 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/1a22f837-6095-4ccc-8e71-79e69b15bc5b_disk">
Oct 14 05:33:49 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:33:49 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:33:49 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/1a22f837-6095-4ccc-8e71-79e69b15bc5b_disk.config">
Oct 14 05:33:49 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:33:49 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:33:49 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:c0:91:ad"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:      <target dev="tapb8b13ccf-81"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:33:49 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:61:ef:18"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:      <target dev="tapbf1fcf69-c1"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:33:49 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/1a22f837-6095-4ccc-8e71-79e69b15bc5b/console.log" append="off"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:33:49 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:33:49 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:33:49 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:33:49 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:33:49 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.304 2 DEBUG nova.compute.manager [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Preparing to wait for external event network-vif-plugged-b8b13ccf-81a6-410e-a209-ce58758d66f4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.305 2 DEBUG oslo_concurrency.lockutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.305 2 DEBUG oslo_concurrency.lockutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.305 2 DEBUG oslo_concurrency.lockutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.306 2 DEBUG nova.compute.manager [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Preparing to wait for external event network-vif-plugged-bf1fcf69-c1da-4a76-8005-54c5457a915a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.306 2 DEBUG oslo_concurrency.lockutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.307 2 DEBUG oslo_concurrency.lockutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.307 2 DEBUG oslo_concurrency.lockutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.308 2 DEBUG nova.virt.libvirt.vif [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:33:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-521253734',display_name='tempest-TestGettingAddress-server-521253734',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-521253734',id=143,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF8dHWwoASzwaEtDMH2Wi+f2synjGjjZ9XZG0MWQfZpxTlFQLXxLEnN5PLSLnAZWxtveM7NKytid8Sra6Q2ThMibNL4mHEXIr9+f4S2pxjfer3narHj+r/DZfKSVQL2czA==',key_name='tempest-TestGettingAddress-1935488696',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-93iw5j7v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:33:41Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=1a22f837-6095-4ccc-8e71-79e69b15bc5b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "address": "fa:16:3e:c0:91:ad", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b13ccf-81", "ovs_interfaceid": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.308 2 DEBUG nova.network.os_vif_util [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "address": "fa:16:3e:c0:91:ad", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b13ccf-81", "ovs_interfaceid": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.309 2 DEBUG nova.network.os_vif_util [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:91:ad,bridge_name='br-int',has_traffic_filtering=True,id=b8b13ccf-81a6-410e-a209-ce58758d66f4,network=Network(0ec9546c-0acc-437f-9f6e-7db1743faf53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8b13ccf-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.310 2 DEBUG os_vif [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:91:ad,bridge_name='br-int',has_traffic_filtering=True,id=b8b13ccf-81a6-410e-a209-ce58758d66f4,network=Network(0ec9546c-0acc-437f-9f6e-7db1743faf53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8b13ccf-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.311 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.312 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.314 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.357 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb8b13ccf-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.359 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb8b13ccf-81, col_values=(('external_ids', {'iface-id': 'b8b13ccf-81a6-410e-a209-ce58758d66f4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c0:91:ad', 'vm-uuid': '1a22f837-6095-4ccc-8e71-79e69b15bc5b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:49 np0005486808 NetworkManager[44885]: <info>  [1760434429.3638] manager: (tapb8b13ccf-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/627)
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.372 2 INFO os_vif [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:91:ad,bridge_name='br-int',has_traffic_filtering=True,id=b8b13ccf-81a6-410e-a209-ce58758d66f4,network=Network(0ec9546c-0acc-437f-9f6e-7db1743faf53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8b13ccf-81')#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.374 2 DEBUG nova.virt.libvirt.vif [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:33:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-521253734',display_name='tempest-TestGettingAddress-server-521253734',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-521253734',id=143,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF8dHWwoASzwaEtDMH2Wi+f2synjGjjZ9XZG0MWQfZpxTlFQLXxLEnN5PLSLnAZWxtveM7NKytid8Sra6Q2ThMibNL4mHEXIr9+f4S2pxjfer3narHj+r/DZfKSVQL2czA==',key_name='tempest-TestGettingAddress-1935488696',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-93iw5j7v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:33:41Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=1a22f837-6095-4ccc-8e71-79e69b15bc5b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "address": "fa:16:3e:61:ef:18", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1fcf69-c1", "ovs_interfaceid": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.375 2 DEBUG nova.network.os_vif_util [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "address": "fa:16:3e:61:ef:18", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1fcf69-c1", "ovs_interfaceid": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.377 2 DEBUG nova.network.os_vif_util [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:ef:18,bridge_name='br-int',has_traffic_filtering=True,id=bf1fcf69-c1da-4a76-8005-54c5457a915a,network=Network(a3ca3a81-ba03-43af-8eb7-2462170c9d43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf1fcf69-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.378 2 DEBUG os_vif [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:ef:18,bridge_name='br-int',has_traffic_filtering=True,id=bf1fcf69-c1da-4a76-8005-54c5457a915a,network=Network(a3ca3a81-ba03-43af-8eb7-2462170c9d43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf1fcf69-c1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.379 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.380 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.383 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf1fcf69-c1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.384 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbf1fcf69-c1, col_values=(('external_ids', {'iface-id': 'bf1fcf69-c1da-4a76-8005-54c5457a915a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:61:ef:18', 'vm-uuid': '1a22f837-6095-4ccc-8e71-79e69b15bc5b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:49 np0005486808 NetworkManager[44885]: <info>  [1760434429.3868] manager: (tapbf1fcf69-c1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/628)
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.397 2 INFO os_vif [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:ef:18,bridge_name='br-int',has_traffic_filtering=True,id=bf1fcf69-c1da-4a76-8005-54c5457a915a,network=Network(a3ca3a81-ba03-43af-8eb7-2462170c9d43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf1fcf69-c1')#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.473 2 DEBUG nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.475 2 DEBUG nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.476 2 DEBUG nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:c0:91:ad, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.476 2 DEBUG nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:61:ef:18, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.478 2 INFO nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Using config drive#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.514 2 DEBUG nova.storage.rbd_utils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 1a22f837-6095-4ccc-8e71-79e69b15bc5b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:33:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:33:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/436293421' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.754 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.761 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.781 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.821 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.822 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.917s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:33:49 np0005486808 nova_compute[259627]: 2025-10-14 09:33:49.992 2 INFO nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Creating config drive at /var/lib/nova/instances/1a22f837-6095-4ccc-8e71-79e69b15bc5b/disk.config#033[00m
Oct 14 05:33:50 np0005486808 nova_compute[259627]: 2025-10-14 09:33:50.003 2 DEBUG oslo_concurrency.processutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1a22f837-6095-4ccc-8e71-79e69b15bc5b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvo5c6258 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:33:50 np0005486808 nova_compute[259627]: 2025-10-14 09:33:50.175 2 DEBUG oslo_concurrency.processutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1a22f837-6095-4ccc-8e71-79e69b15bc5b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvo5c6258" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:33:50 np0005486808 nova_compute[259627]: 2025-10-14 09:33:50.219 2 DEBUG nova.storage.rbd_utils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 1a22f837-6095-4ccc-8e71-79e69b15bc5b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:33:50 np0005486808 nova_compute[259627]: 2025-10-14 09:33:50.225 2 DEBUG oslo_concurrency.processutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1a22f837-6095-4ccc-8e71-79e69b15bc5b/disk.config 1a22f837-6095-4ccc-8e71-79e69b15bc5b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:33:50 np0005486808 nova_compute[259627]: 2025-10-14 09:33:50.340 2 DEBUG nova.network.neutron [req-fa044821-a20f-4444-9b96-f68f43b01e45 req-c735924c-98c0-42bf-984f-604ff0203a9b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Updated VIF entry in instance network info cache for port bf1fcf69-c1da-4a76-8005-54c5457a915a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:33:50 np0005486808 nova_compute[259627]: 2025-10-14 09:33:50.341 2 DEBUG nova.network.neutron [req-fa044821-a20f-4444-9b96-f68f43b01e45 req-c735924c-98c0-42bf-984f-604ff0203a9b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Updating instance_info_cache with network_info: [{"id": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "address": "fa:16:3e:c0:91:ad", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b13ccf-81", "ovs_interfaceid": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "address": "fa:16:3e:61:ef:18", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], 
"gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1fcf69-c1", "ovs_interfaceid": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:33:50 np0005486808 nova_compute[259627]: 2025-10-14 09:33:50.370 2 DEBUG oslo_concurrency.lockutils [req-fa044821-a20f-4444-9b96-f68f43b01e45 req-c735924c-98c0-42bf-984f-604ff0203a9b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-1a22f837-6095-4ccc-8e71-79e69b15bc5b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:33:50 np0005486808 nova_compute[259627]: 2025-10-14 09:33:50.445 2 DEBUG oslo_concurrency.processutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1a22f837-6095-4ccc-8e71-79e69b15bc5b/disk.config 1a22f837-6095-4ccc-8e71-79e69b15bc5b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.220s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:33:50 np0005486808 nova_compute[259627]: 2025-10-14 09:33:50.446 2 INFO nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Deleting local config drive /var/lib/nova/instances/1a22f837-6095-4ccc-8e71-79e69b15bc5b/disk.config because it was imported into RBD.#033[00m
Oct 14 05:33:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2421: 305 pgs: 305 active+clean; 167 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 14 05:33:50 np0005486808 NetworkManager[44885]: <info>  [1760434430.5305] manager: (tapb8b13ccf-81): new Tun device (/org/freedesktop/NetworkManager/Devices/629)
Oct 14 05:33:50 np0005486808 kernel: tapb8b13ccf-81: entered promiscuous mode
Oct 14 05:33:50 np0005486808 ovn_controller[152662]: 2025-10-14T09:33:50Z|01539|binding|INFO|Claiming lport b8b13ccf-81a6-410e-a209-ce58758d66f4 for this chassis.
Oct 14 05:33:50 np0005486808 ovn_controller[152662]: 2025-10-14T09:33:50Z|01540|binding|INFO|b8b13ccf-81a6-410e-a209-ce58758d66f4: Claiming fa:16:3e:c0:91:ad 10.100.0.3
Oct 14 05:33:50 np0005486808 nova_compute[259627]: 2025-10-14 09:33:50.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.550 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c0:91:ad 10.100.0.3'], port_security=['fa:16:3e:c0:91:ad 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '1a22f837-6095-4ccc-8e71-79e69b15bc5b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0ec9546c-0acc-437f-9f6e-7db1743faf53', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b0a1ad0e-ac41-4d92-be07-56e1d94d37c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02fc600f-219c-48ab-94ed-7d3694dfd14e, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=b8b13ccf-81a6-410e-a209-ce58758d66f4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:33:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.552 162547 INFO neutron.agent.ovn.metadata.agent [-] Port b8b13ccf-81a6-410e-a209-ce58758d66f4 in datapath 0ec9546c-0acc-437f-9f6e-7db1743faf53 bound to our chassis#033[00m
Oct 14 05:33:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.554 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0ec9546c-0acc-437f-9f6e-7db1743faf53#033[00m
Oct 14 05:33:50 np0005486808 NetworkManager[44885]: <info>  [1760434430.5591] manager: (tapbf1fcf69-c1): new Tun device (/org/freedesktop/NetworkManager/Devices/630)
Oct 14 05:33:50 np0005486808 kernel: tapbf1fcf69-c1: entered promiscuous mode
Oct 14 05:33:50 np0005486808 ovn_controller[152662]: 2025-10-14T09:33:50Z|01541|binding|INFO|Setting lport b8b13ccf-81a6-410e-a209-ce58758d66f4 ovn-installed in OVS
Oct 14 05:33:50 np0005486808 ovn_controller[152662]: 2025-10-14T09:33:50Z|01542|binding|INFO|Setting lport b8b13ccf-81a6-410e-a209-ce58758d66f4 up in Southbound
Oct 14 05:33:50 np0005486808 ovn_controller[152662]: 2025-10-14T09:33:50Z|01543|if_status|INFO|Dropped 1 log messages in last 123 seconds (most recently, 123 seconds ago) due to excessive rate
Oct 14 05:33:50 np0005486808 ovn_controller[152662]: 2025-10-14T09:33:50Z|01544|if_status|INFO|Not updating pb chassis for bf1fcf69-c1da-4a76-8005-54c5457a915a now as sb is readonly
Oct 14 05:33:50 np0005486808 nova_compute[259627]: 2025-10-14 09:33:50.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:50 np0005486808 ovn_controller[152662]: 2025-10-14T09:33:50Z|01545|binding|INFO|Claiming lport bf1fcf69-c1da-4a76-8005-54c5457a915a for this chassis.
Oct 14 05:33:50 np0005486808 ovn_controller[152662]: 2025-10-14T09:33:50Z|01546|binding|INFO|bf1fcf69-c1da-4a76-8005-54c5457a915a: Claiming fa:16:3e:61:ef:18 2001:db8:0:1:f816:3eff:fe61:ef18 2001:db8::f816:3eff:fe61:ef18
Oct 14 05:33:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.581 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[32451339-8aaf-407d-a1b3-dbeeef67d4e2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:33:50 np0005486808 systemd-udevd[406519]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:33:50 np0005486808 systemd-udevd[406520]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:33:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.599 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:ef:18 2001:db8:0:1:f816:3eff:fe61:ef18 2001:db8::f816:3eff:fe61:ef18'], port_security=['fa:16:3e:61:ef:18 2001:db8:0:1:f816:3eff:fe61:ef18 2001:db8::f816:3eff:fe61:ef18'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe61:ef18/64 2001:db8::f816:3eff:fe61:ef18/64', 'neutron:device_id': '1a22f837-6095-4ccc-8e71-79e69b15bc5b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3ca3a81-ba03-43af-8eb7-2462170c9d43', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b0a1ad0e-ac41-4d92-be07-56e1d94d37c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6de0556a-f319-4356-b742-06d48d854bbd, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=bf1fcf69-c1da-4a76-8005-54c5457a915a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:33:50 np0005486808 nova_compute[259627]: 2025-10-14 09:33:50.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:50 np0005486808 ovn_controller[152662]: 2025-10-14T09:33:50Z|01547|binding|INFO|Setting lport bf1fcf69-c1da-4a76-8005-54c5457a915a ovn-installed in OVS
Oct 14 05:33:50 np0005486808 ovn_controller[152662]: 2025-10-14T09:33:50Z|01548|binding|INFO|Setting lport bf1fcf69-c1da-4a76-8005-54c5457a915a up in Southbound
Oct 14 05:33:50 np0005486808 nova_compute[259627]: 2025-10-14 09:33:50.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:50 np0005486808 systemd-machined[214636]: New machine qemu-176-instance-0000008f.
Oct 14 05:33:50 np0005486808 NetworkManager[44885]: <info>  [1760434430.6234] device (tapbf1fcf69-c1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:33:50 np0005486808 NetworkManager[44885]: <info>  [1760434430.6255] device (tapbf1fcf69-c1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:33:50 np0005486808 NetworkManager[44885]: <info>  [1760434430.6273] device (tapb8b13ccf-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:33:50 np0005486808 NetworkManager[44885]: <info>  [1760434430.6292] device (tapb8b13ccf-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:33:50 np0005486808 systemd[1]: Started Virtual Machine qemu-176-instance-0000008f.
Oct 14 05:33:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.634 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3ea0d0a9-e1de-4f32-b518-4eb86bef8057]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:33:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.639 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[668725e0-f9ef-41ec-9776-36650821aaf2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:33:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.675 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[bc7cb4d4-fe04-4259-92cb-f0f361162b5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:33:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.701 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9311c835-2384-4946-a225-bb931f3cf5df]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0ec9546c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:a2:86'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 434], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 817325, 'reachable_time': 33619, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 406530, 'error': None, 'target': 'ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:33:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.722 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a4e7b590-cf67-4fe7-9528-40f0c8ab87f8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0ec9546c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 817338, 'tstamp': 817338}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 406534, 'error': None, 'target': 'ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0ec9546c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 817341, 'tstamp': 817341}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 406534, 'error': None, 'target': 'ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:33:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.725 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ec9546c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:33:50 np0005486808 nova_compute[259627]: 2025-10-14 09:33:50.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.728 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0ec9546c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:33:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.728 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:33:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.729 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0ec9546c-00, col_values=(('external_ids', {'iface-id': 'b37ec0e2-6cb5-44b1-98a2-2c39cdd204b3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:33:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.729 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:33:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.730 162547 INFO neutron.agent.ovn.metadata.agent [-] Port bf1fcf69-c1da-4a76-8005-54c5457a915a in datapath a3ca3a81-ba03-43af-8eb7-2462170c9d43 unbound from our chassis#033[00m
Oct 14 05:33:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.732 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a3ca3a81-ba03-43af-8eb7-2462170c9d43#033[00m
Oct 14 05:33:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.752 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c16ca85d-1810-4568-a169-a38640ac0121]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:33:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.789 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3501c799-c916-4ab3-839f-cbe72d8a3c25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:33:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.793 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8cb2ddaa-85f8-4529-bde0-973232336b0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:33:50 np0005486808 nova_compute[259627]: 2025-10-14 09:33:50.808 2 DEBUG nova.compute.manager [req-f7ad3765-c5b9-4dd3-9646-44e5dd82310e req-7f8747e0-0584-4556-93f2-66f1b31b54ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Received event network-vif-plugged-b8b13ccf-81a6-410e-a209-ce58758d66f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:33:50 np0005486808 nova_compute[259627]: 2025-10-14 09:33:50.809 2 DEBUG oslo_concurrency.lockutils [req-f7ad3765-c5b9-4dd3-9646-44e5dd82310e req-7f8747e0-0584-4556-93f2-66f1b31b54ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:33:50 np0005486808 nova_compute[259627]: 2025-10-14 09:33:50.809 2 DEBUG oslo_concurrency.lockutils [req-f7ad3765-c5b9-4dd3-9646-44e5dd82310e req-7f8747e0-0584-4556-93f2-66f1b31b54ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:33:50 np0005486808 nova_compute[259627]: 2025-10-14 09:33:50.810 2 DEBUG oslo_concurrency.lockutils [req-f7ad3765-c5b9-4dd3-9646-44e5dd82310e req-7f8747e0-0584-4556-93f2-66f1b31b54ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:33:50 np0005486808 nova_compute[259627]: 2025-10-14 09:33:50.810 2 DEBUG nova.compute.manager [req-f7ad3765-c5b9-4dd3-9646-44e5dd82310e req-7f8747e0-0584-4556-93f2-66f1b31b54ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Processing event network-vif-plugged-b8b13ccf-81a6-410e-a209-ce58758d66f4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:33:50 np0005486808 nova_compute[259627]: 2025-10-14 09:33:50.818 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:33:50 np0005486808 nova_compute[259627]: 2025-10-14 09:33:50.819 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:33:50 np0005486808 nova_compute[259627]: 2025-10-14 09:33:50.820 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:33:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.846 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[627672be-f17d-4daa-8daa-0ea5b3baf0e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:33:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.870 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[343792ca-0af3-4006-ac62-3add82adb011]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa3ca3a81-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:84:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 23, 'tx_packets': 4, 'rx_bytes': 2146, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 23, 'tx_packets': 4, 'rx_bytes': 2146, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 435], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 817424, 'reachable_time': 30099, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 23, 'inoctets': 1824, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 23, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1824, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 23, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 406541, 'error': None, 'target': 'ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:33:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.900 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7c553cfd-7595-4a47-8be7-86068b85e567]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa3ca3a81-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 817441, 'tstamp': 817441}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 406542, 'error': None, 'target': 'ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:33:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.902 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3ca3a81-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:33:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.948 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3ca3a81-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:33:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.948 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:33:50 np0005486808 nova_compute[259627]: 2025-10-14 09:33:50.948 2 DEBUG nova.compute.manager [req-84493c0e-fd78-4213-a233-35419d48a487 req-ea8dd674-72a8-4245-bdf2-39564c11f9a4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Received event network-vif-plugged-bf1fcf69-c1da-4a76-8005-54c5457a915a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:33:50 np0005486808 nova_compute[259627]: 2025-10-14 09:33:50.949 2 DEBUG oslo_concurrency.lockutils [req-84493c0e-fd78-4213-a233-35419d48a487 req-ea8dd674-72a8-4245-bdf2-39564c11f9a4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:33:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.950 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa3ca3a81-b0, col_values=(('external_ids', {'iface-id': 'a176eb2a-6fbd-4b8e-90b2-85a86523eb62'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:33:50 np0005486808 nova_compute[259627]: 2025-10-14 09:33:50.950 2 DEBUG oslo_concurrency.lockutils [req-84493c0e-fd78-4213-a233-35419d48a487 req-ea8dd674-72a8-4245-bdf2-39564c11f9a4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:33:50 np0005486808 nova_compute[259627]: 2025-10-14 09:33:50.950 2 DEBUG oslo_concurrency.lockutils [req-84493c0e-fd78-4213-a233-35419d48a487 req-ea8dd674-72a8-4245-bdf2-39564c11f9a4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:33:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:33:50.951 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:33:50 np0005486808 nova_compute[259627]: 2025-10-14 09:33:50.951 2 DEBUG nova.compute.manager [req-84493c0e-fd78-4213-a233-35419d48a487 req-ea8dd674-72a8-4245-bdf2-39564c11f9a4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Processing event network-vif-plugged-bf1fcf69-c1da-4a76-8005-54c5457a915a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:33:50 np0005486808 nova_compute[259627]: 2025-10-14 09:33:50.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:50 np0005486808 nova_compute[259627]: 2025-10-14 09:33:50.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:33:50 np0005486808 nova_compute[259627]: 2025-10-14 09:33:50.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:33:51 np0005486808 nova_compute[259627]: 2025-10-14 09:33:51.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:51 np0005486808 nova_compute[259627]: 2025-10-14 09:33:51.726 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434431.7260845, 1a22f837-6095-4ccc-8e71-79e69b15bc5b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:33:51 np0005486808 nova_compute[259627]: 2025-10-14 09:33:51.728 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] VM Started (Lifecycle Event)#033[00m
Oct 14 05:33:51 np0005486808 nova_compute[259627]: 2025-10-14 09:33:51.731 2 DEBUG nova.compute.manager [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:33:51 np0005486808 nova_compute[259627]: 2025-10-14 09:33:51.734 2 DEBUG nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:33:51 np0005486808 nova_compute[259627]: 2025-10-14 09:33:51.738 2 INFO nova.virt.libvirt.driver [-] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Instance spawned successfully.#033[00m
Oct 14 05:33:51 np0005486808 nova_compute[259627]: 2025-10-14 09:33:51.738 2 DEBUG nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:33:51 np0005486808 nova_compute[259627]: 2025-10-14 09:33:51.767 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:33:51 np0005486808 nova_compute[259627]: 2025-10-14 09:33:51.777 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:33:51 np0005486808 nova_compute[259627]: 2025-10-14 09:33:51.785 2 DEBUG nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:33:51 np0005486808 nova_compute[259627]: 2025-10-14 09:33:51.785 2 DEBUG nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:33:51 np0005486808 nova_compute[259627]: 2025-10-14 09:33:51.786 2 DEBUG nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:33:51 np0005486808 nova_compute[259627]: 2025-10-14 09:33:51.787 2 DEBUG nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:33:51 np0005486808 nova_compute[259627]: 2025-10-14 09:33:51.788 2 DEBUG nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:33:51 np0005486808 nova_compute[259627]: 2025-10-14 09:33:51.789 2 DEBUG nova.virt.libvirt.driver [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:33:51 np0005486808 nova_compute[259627]: 2025-10-14 09:33:51.826 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:33:51 np0005486808 nova_compute[259627]: 2025-10-14 09:33:51.826 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434431.727609, 1a22f837-6095-4ccc-8e71-79e69b15bc5b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:33:51 np0005486808 nova_compute[259627]: 2025-10-14 09:33:51.826 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:33:51 np0005486808 nova_compute[259627]: 2025-10-14 09:33:51.862 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:33:51 np0005486808 nova_compute[259627]: 2025-10-14 09:33:51.869 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434431.7332778, 1a22f837-6095-4ccc-8e71-79e69b15bc5b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:33:51 np0005486808 nova_compute[259627]: 2025-10-14 09:33:51.870 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:33:51 np0005486808 nova_compute[259627]: 2025-10-14 09:33:51.875 2 INFO nova.compute.manager [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Took 10.43 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:33:51 np0005486808 nova_compute[259627]: 2025-10-14 09:33:51.876 2 DEBUG nova.compute.manager [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:33:51 np0005486808 nova_compute[259627]: 2025-10-14 09:33:51.891 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:33:51 np0005486808 nova_compute[259627]: 2025-10-14 09:33:51.896 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:33:51 np0005486808 nova_compute[259627]: 2025-10-14 09:33:51.920 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:33:51 np0005486808 nova_compute[259627]: 2025-10-14 09:33:51.941 2 INFO nova.compute.manager [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Took 11.41 seconds to build instance.#033[00m
Oct 14 05:33:51 np0005486808 nova_compute[259627]: 2025-10-14 09:33:51.959 2 DEBUG oslo_concurrency.lockutils [None req-6aa29dbc-7c9f-41cf-92be-7b77f4f3a374 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.486s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:33:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2422: 305 pgs: 305 active+clean; 167 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 14 05:33:52 np0005486808 nova_compute[259627]: 2025-10-14 09:33:52.924 2 DEBUG nova.compute.manager [req-ddb12f3f-f07b-422c-9c62-8d5cff6b6920 req-e0cfe959-2344-404e-bfda-026d55ddeeb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Received event network-vif-plugged-b8b13ccf-81a6-410e-a209-ce58758d66f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:33:52 np0005486808 nova_compute[259627]: 2025-10-14 09:33:52.925 2 DEBUG oslo_concurrency.lockutils [req-ddb12f3f-f07b-422c-9c62-8d5cff6b6920 req-e0cfe959-2344-404e-bfda-026d55ddeeb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:33:52 np0005486808 nova_compute[259627]: 2025-10-14 09:33:52.925 2 DEBUG oslo_concurrency.lockutils [req-ddb12f3f-f07b-422c-9c62-8d5cff6b6920 req-e0cfe959-2344-404e-bfda-026d55ddeeb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:33:52 np0005486808 nova_compute[259627]: 2025-10-14 09:33:52.925 2 DEBUG oslo_concurrency.lockutils [req-ddb12f3f-f07b-422c-9c62-8d5cff6b6920 req-e0cfe959-2344-404e-bfda-026d55ddeeb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:33:52 np0005486808 nova_compute[259627]: 2025-10-14 09:33:52.925 2 DEBUG nova.compute.manager [req-ddb12f3f-f07b-422c-9c62-8d5cff6b6920 req-e0cfe959-2344-404e-bfda-026d55ddeeb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] No waiting events found dispatching network-vif-plugged-b8b13ccf-81a6-410e-a209-ce58758d66f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:33:52 np0005486808 nova_compute[259627]: 2025-10-14 09:33:52.926 2 WARNING nova.compute.manager [req-ddb12f3f-f07b-422c-9c62-8d5cff6b6920 req-e0cfe959-2344-404e-bfda-026d55ddeeb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Received unexpected event network-vif-plugged-b8b13ccf-81a6-410e-a209-ce58758d66f4 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:33:53 np0005486808 nova_compute[259627]: 2025-10-14 09:33:53.027 2 DEBUG nova.compute.manager [req-9d33d1ff-d97a-49aa-9d4d-6d997ee63a94 req-ea498fb9-8ead-4b34-9e6f-20fb8c81b8d0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Received event network-vif-plugged-bf1fcf69-c1da-4a76-8005-54c5457a915a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:33:53 np0005486808 nova_compute[259627]: 2025-10-14 09:33:53.028 2 DEBUG oslo_concurrency.lockutils [req-9d33d1ff-d97a-49aa-9d4d-6d997ee63a94 req-ea498fb9-8ead-4b34-9e6f-20fb8c81b8d0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:33:53 np0005486808 nova_compute[259627]: 2025-10-14 09:33:53.028 2 DEBUG oslo_concurrency.lockutils [req-9d33d1ff-d97a-49aa-9d4d-6d997ee63a94 req-ea498fb9-8ead-4b34-9e6f-20fb8c81b8d0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:33:53 np0005486808 nova_compute[259627]: 2025-10-14 09:33:53.029 2 DEBUG oslo_concurrency.lockutils [req-9d33d1ff-d97a-49aa-9d4d-6d997ee63a94 req-ea498fb9-8ead-4b34-9e6f-20fb8c81b8d0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:33:53 np0005486808 nova_compute[259627]: 2025-10-14 09:33:53.029 2 DEBUG nova.compute.manager [req-9d33d1ff-d97a-49aa-9d4d-6d997ee63a94 req-ea498fb9-8ead-4b34-9e6f-20fb8c81b8d0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] No waiting events found dispatching network-vif-plugged-bf1fcf69-c1da-4a76-8005-54c5457a915a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:33:53 np0005486808 nova_compute[259627]: 2025-10-14 09:33:53.030 2 WARNING nova.compute.manager [req-9d33d1ff-d97a-49aa-9d4d-6d997ee63a94 req-ea498fb9-8ead-4b34-9e6f-20fb8c81b8d0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Received unexpected event network-vif-plugged-bf1fcf69-c1da-4a76-8005-54c5457a915a for instance with vm_state active and task_state None.#033[00m
Oct 14 05:33:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:33:53 np0005486808 nova_compute[259627]: 2025-10-14 09:33:53.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:33:53 np0005486808 nova_compute[259627]: 2025-10-14 09:33:53.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:33:53 np0005486808 nova_compute[259627]: 2025-10-14 09:33:53.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 05:33:54 np0005486808 nova_compute[259627]: 2025-10-14 09:33:54.185 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-aa3e17be-c995-4cab-b209-1eadaaff1634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:33:54 np0005486808 nova_compute[259627]: 2025-10-14 09:33:54.185 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-aa3e17be-c995-4cab-b209-1eadaaff1634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:33:54 np0005486808 nova_compute[259627]: 2025-10-14 09:33:54.185 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 14 05:33:54 np0005486808 nova_compute[259627]: 2025-10-14 09:33:54.186 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid aa3e17be-c995-4cab-b209-1eadaaff1634 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:33:54 np0005486808 nova_compute[259627]: 2025-10-14 09:33:54.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2423: 305 pgs: 305 active+clean; 167 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 14 05:33:55 np0005486808 nova_compute[259627]: 2025-10-14 09:33:55.107 2 DEBUG nova.compute.manager [req-d5c0fe1d-6162-4c99-a940-f04a1a40c7eb req-f73db494-1594-4ea7-a537-11a12dc3bcd6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Received event network-changed-b8b13ccf-81a6-410e-a209-ce58758d66f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:33:55 np0005486808 nova_compute[259627]: 2025-10-14 09:33:55.107 2 DEBUG nova.compute.manager [req-d5c0fe1d-6162-4c99-a940-f04a1a40c7eb req-f73db494-1594-4ea7-a537-11a12dc3bcd6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Refreshing instance network info cache due to event network-changed-b8b13ccf-81a6-410e-a209-ce58758d66f4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:33:55 np0005486808 nova_compute[259627]: 2025-10-14 09:33:55.107 2 DEBUG oslo_concurrency.lockutils [req-d5c0fe1d-6162-4c99-a940-f04a1a40c7eb req-f73db494-1594-4ea7-a537-11a12dc3bcd6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-1a22f837-6095-4ccc-8e71-79e69b15bc5b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:33:55 np0005486808 nova_compute[259627]: 2025-10-14 09:33:55.108 2 DEBUG oslo_concurrency.lockutils [req-d5c0fe1d-6162-4c99-a940-f04a1a40c7eb req-f73db494-1594-4ea7-a537-11a12dc3bcd6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-1a22f837-6095-4ccc-8e71-79e69b15bc5b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:33:55 np0005486808 nova_compute[259627]: 2025-10-14 09:33:55.108 2 DEBUG nova.network.neutron [req-d5c0fe1d-6162-4c99-a940-f04a1a40c7eb req-f73db494-1594-4ea7-a537-11a12dc3bcd6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Refreshing network info cache for port b8b13ccf-81a6-410e-a209-ce58758d66f4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:33:56 np0005486808 nova_compute[259627]: 2025-10-14 09:33:56.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:33:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2424: 305 pgs: 305 active+clean; 167 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 14 05:33:57 np0005486808 nova_compute[259627]: 2025-10-14 09:33:57.288 2 DEBUG nova.network.neutron [req-d5c0fe1d-6162-4c99-a940-f04a1a40c7eb req-f73db494-1594-4ea7-a537-11a12dc3bcd6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Updated VIF entry in instance network info cache for port b8b13ccf-81a6-410e-a209-ce58758d66f4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:33:57 np0005486808 nova_compute[259627]: 2025-10-14 09:33:57.289 2 DEBUG nova.network.neutron [req-d5c0fe1d-6162-4c99-a940-f04a1a40c7eb req-f73db494-1594-4ea7-a537-11a12dc3bcd6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Updating instance_info_cache with network_info: [{"id": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "address": "fa:16:3e:c0:91:ad", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b13ccf-81", "ovs_interfaceid": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "address": "fa:16:3e:61:ef:18", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1fcf69-c1", "ovs_interfaceid": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:33:57 np0005486808 nova_compute[259627]: 2025-10-14 09:33:57.312 2 DEBUG oslo_concurrency.lockutils [req-d5c0fe1d-6162-4c99-a940-f04a1a40c7eb req-f73db494-1594-4ea7-a537-11a12dc3bcd6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-1a22f837-6095-4ccc-8e71-79e69b15bc5b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:33:57 np0005486808 nova_compute[259627]: 2025-10-14 09:33:57.430 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Updating instance_info_cache with network_info: [{"id": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "address": "fa:16:3e:8e:aa:2c", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51563204-46", "ovs_interfaceid": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e4183a77-e102-4885-9a7d-ef0431abf27c", "address": "fa:16:3e:8f:fd:da", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", 
"type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4183a77-e1", "ovs_interfaceid": "e4183a77-e102-4885-9a7d-ef0431abf27c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:33:57 np0005486808 nova_compute[259627]: 2025-10-14 09:33:57.452 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-aa3e17be-c995-4cab-b209-1eadaaff1634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:33:57 np0005486808 nova_compute[259627]: 2025-10-14 09:33:57.453 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 14 05:33:57 np0005486808 nova_compute[259627]: 2025-10-14 09:33:57.454 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:33:57 np0005486808 nova_compute[259627]: 2025-10-14 09:33:57.454 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:33:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:33:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2425: 305 pgs: 305 active+clean; 167 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 05:33:59 np0005486808 nova_compute[259627]: 2025-10-14 09:33:59.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2426: 305 pgs: 305 active+clean; 167 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Oct 14 05:34:00 np0005486808 nova_compute[259627]: 2025-10-14 09:34:00.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:34:01 np0005486808 nova_compute[259627]: 2025-10-14 09:34:01.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:01 np0005486808 podman[406588]: 2025-10-14 09:34:01.686724736 +0000 UTC m=+0.084539626 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, container_name=ovn_metadata_agent)
Oct 14 05:34:01 np0005486808 podman[406587]: 2025-10-14 09:34:01.771390794 +0000 UTC m=+0.178463921 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:34:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2427: 305 pgs: 305 active+clean; 167 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 70 op/s
Oct 14 05:34:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:34:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:34:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:34:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:34:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:34:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:34:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:34:04 np0005486808 nova_compute[259627]: 2025-10-14 09:34:04.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2428: 305 pgs: 305 active+clean; 167 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.4 KiB/s wr, 69 op/s
Oct 14 05:34:04 np0005486808 ovn_controller[152662]: 2025-10-14T09:34:04Z|00184|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c0:91:ad 10.100.0.3
Oct 14 05:34:04 np0005486808 ovn_controller[152662]: 2025-10-14T09:34:04Z|00185|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c0:91:ad 10.100.0.3
Oct 14 05:34:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:34:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/640755761' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:34:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:34:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/640755761' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:34:06 np0005486808 nova_compute[259627]: 2025-10-14 09:34:06.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2429: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 133 op/s
Oct 14 05:34:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:07.050 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:34:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:07.051 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:34:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:07.053 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:34:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:34:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2430: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 05:34:09 np0005486808 nova_compute[259627]: 2025-10-14 09:34:09.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2431: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 05:34:11 np0005486808 nova_compute[259627]: 2025-10-14 09:34:11.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2432: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 05:34:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:34:14 np0005486808 nova_compute[259627]: 2025-10-14 09:34:14.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2433: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.452 2 DEBUG nova.compute.manager [req-afd73267-4148-4d24-a854-e64bcfd37a09 req-42837d26-5b2b-4cb3-8f00-d89c722a44e7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Received event network-changed-b8b13ccf-81a6-410e-a209-ce58758d66f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.452 2 DEBUG nova.compute.manager [req-afd73267-4148-4d24-a854-e64bcfd37a09 req-42837d26-5b2b-4cb3-8f00-d89c722a44e7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Refreshing instance network info cache due to event network-changed-b8b13ccf-81a6-410e-a209-ce58758d66f4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.453 2 DEBUG oslo_concurrency.lockutils [req-afd73267-4148-4d24-a854-e64bcfd37a09 req-42837d26-5b2b-4cb3-8f00-d89c722a44e7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-1a22f837-6095-4ccc-8e71-79e69b15bc5b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.453 2 DEBUG oslo_concurrency.lockutils [req-afd73267-4148-4d24-a854-e64bcfd37a09 req-42837d26-5b2b-4cb3-8f00-d89c722a44e7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-1a22f837-6095-4ccc-8e71-79e69b15bc5b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.454 2 DEBUG nova.network.neutron [req-afd73267-4148-4d24-a854-e64bcfd37a09 req-42837d26-5b2b-4cb3-8f00-d89c722a44e7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Refreshing network info cache for port b8b13ccf-81a6-410e-a209-ce58758d66f4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.515 2 DEBUG oslo_concurrency.lockutils [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.516 2 DEBUG oslo_concurrency.lockutils [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.517 2 DEBUG oslo_concurrency.lockutils [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.517 2 DEBUG oslo_concurrency.lockutils [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.518 2 DEBUG oslo_concurrency.lockutils [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.519 2 INFO nova.compute.manager [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Terminating instance#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.521 2 DEBUG nova.compute.manager [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:34:15 np0005486808 kernel: tapb8b13ccf-81 (unregistering): left promiscuous mode
Oct 14 05:34:15 np0005486808 NetworkManager[44885]: <info>  [1760434455.5827] device (tapb8b13ccf-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:34:15 np0005486808 ovn_controller[152662]: 2025-10-14T09:34:15Z|01549|binding|INFO|Releasing lport b8b13ccf-81a6-410e-a209-ce58758d66f4 from this chassis (sb_readonly=0)
Oct 14 05:34:15 np0005486808 ovn_controller[152662]: 2025-10-14T09:34:15Z|01550|binding|INFO|Setting lport b8b13ccf-81a6-410e-a209-ce58758d66f4 down in Southbound
Oct 14 05:34:15 np0005486808 ovn_controller[152662]: 2025-10-14T09:34:15Z|01551|binding|INFO|Removing iface tapb8b13ccf-81 ovn-installed in OVS
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.609 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c0:91:ad 10.100.0.3'], port_security=['fa:16:3e:c0:91:ad 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '1a22f837-6095-4ccc-8e71-79e69b15bc5b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0ec9546c-0acc-437f-9f6e-7db1743faf53', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b0a1ad0e-ac41-4d92-be07-56e1d94d37c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02fc600f-219c-48ab-94ed-7d3694dfd14e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=b8b13ccf-81a6-410e-a209-ce58758d66f4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:34:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.612 162547 INFO neutron.agent.ovn.metadata.agent [-] Port b8b13ccf-81a6-410e-a209-ce58758d66f4 in datapath 0ec9546c-0acc-437f-9f6e-7db1743faf53 unbound from our chassis#033[00m
Oct 14 05:34:15 np0005486808 kernel: tapbf1fcf69-c1 (unregistering): left promiscuous mode
Oct 14 05:34:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.615 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0ec9546c-0acc-437f-9f6e-7db1743faf53#033[00m
Oct 14 05:34:15 np0005486808 NetworkManager[44885]: <info>  [1760434455.6199] device (tapbf1fcf69-c1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:15 np0005486808 ovn_controller[152662]: 2025-10-14T09:34:15Z|01552|binding|INFO|Releasing lport bf1fcf69-c1da-4a76-8005-54c5457a915a from this chassis (sb_readonly=0)
Oct 14 05:34:15 np0005486808 ovn_controller[152662]: 2025-10-14T09:34:15Z|01553|binding|INFO|Setting lport bf1fcf69-c1da-4a76-8005-54c5457a915a down in Southbound
Oct 14 05:34:15 np0005486808 ovn_controller[152662]: 2025-10-14T09:34:15Z|01554|binding|INFO|Removing iface tapbf1fcf69-c1 ovn-installed in OVS
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.644 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[245a2c10-1750-42d0-9e5b-2120d7b14fad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:34:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.647 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:ef:18 2001:db8:0:1:f816:3eff:fe61:ef18 2001:db8::f816:3eff:fe61:ef18'], port_security=['fa:16:3e:61:ef:18 2001:db8:0:1:f816:3eff:fe61:ef18 2001:db8::f816:3eff:fe61:ef18'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe61:ef18/64 2001:db8::f816:3eff:fe61:ef18/64', 'neutron:device_id': '1a22f837-6095-4ccc-8e71-79e69b15bc5b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3ca3a81-ba03-43af-8eb7-2462170c9d43', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b0a1ad0e-ac41-4d92-be07-56e1d94d37c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6de0556a-f319-4356-b742-06d48d854bbd, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=bf1fcf69-c1da-4a76-8005-54c5457a915a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:15 np0005486808 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d0000008f.scope: Deactivated successfully.
Oct 14 05:34:15 np0005486808 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d0000008f.scope: Consumed 14.112s CPU time.
Oct 14 05:34:15 np0005486808 systemd-machined[214636]: Machine qemu-176-instance-0000008f terminated.
Oct 14 05:34:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.685 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[b543d727-d6c1-49e7-bdb8-4ed3357b6890]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:34:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.689 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[20c2add3-b248-46ec-a91d-4ed36f1d3bf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:34:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.724 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[4c5c6111-8cbd-4993-96cb-4e91c5d1659f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:34:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.754 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ee78346e-3766-49bb-8e5e-62b81a76f2dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0ec9546c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:a2:86'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 434], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 817325, 'reachable_time': 33619, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 406648, 'error': None, 'target': 'ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.774 2 INFO nova.virt.libvirt.driver [-] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Instance destroyed successfully.#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.775 2 DEBUG nova.objects.instance [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'resources' on Instance uuid 1a22f837-6095-4ccc-8e71-79e69b15bc5b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:34:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.779 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[eeaca4df-86ad-4676-b015-02f25dd1655f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0ec9546c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 817338, 'tstamp': 817338}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 406668, 'error': None, 'target': 'ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0ec9546c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 817341, 'tstamp': 817341}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 406668, 'error': None, 'target': 'ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:34:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.781 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ec9546c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.788 2 DEBUG nova.virt.libvirt.vif [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:33:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-521253734',display_name='tempest-TestGettingAddress-server-521253734',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-521253734',id=143,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF8dHWwoASzwaEtDMH2Wi+f2synjGjjZ9XZG0MWQfZpxTlFQLXxLEnN5PLSLnAZWxtveM7NKytid8Sra6Q2ThMibNL4mHEXIr9+f4S2pxjfer3narHj+r/DZfKSVQL2czA==',key_name='tempest-TestGettingAddress-1935488696',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:33:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-93iw5j7v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:33:51Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=1a22f837-6095-4ccc-8e71-79e69b15bc5b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "address": "fa:16:3e:c0:91:ad", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b13ccf-81", "ovs_interfaceid": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.788 2 DEBUG nova.network.os_vif_util [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "address": "fa:16:3e:c0:91:ad", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b13ccf-81", "ovs_interfaceid": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.789 2 DEBUG nova.network.os_vif_util [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c0:91:ad,bridge_name='br-int',has_traffic_filtering=True,id=b8b13ccf-81a6-410e-a209-ce58758d66f4,network=Network(0ec9546c-0acc-437f-9f6e-7db1743faf53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8b13ccf-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.789 2 DEBUG os_vif [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c0:91:ad,bridge_name='br-int',has_traffic_filtering=True,id=b8b13ccf-81a6-410e-a209-ce58758d66f4,network=Network(0ec9546c-0acc-437f-9f6e-7db1743faf53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8b13ccf-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.791 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0ec9546c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.791 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8b13ccf-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:34:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.791 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:34:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.792 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0ec9546c-00, col_values=(('external_ids', {'iface-id': 'b37ec0e2-6cb5-44b1-98a2-2c39cdd204b3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.792 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:34:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.793 162547 INFO neutron.agent.ovn.metadata.agent [-] Port bf1fcf69-c1da-4a76-8005-54c5457a915a in datapath a3ca3a81-ba03-43af-8eb7-2462170c9d43 unbound from our chassis#033[00m
Oct 14 05:34:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.795 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a3ca3a81-ba03-43af-8eb7-2462170c9d43#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.798 2 INFO os_vif [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c0:91:ad,bridge_name='br-int',has_traffic_filtering=True,id=b8b13ccf-81a6-410e-a209-ce58758d66f4,network=Network(0ec9546c-0acc-437f-9f6e-7db1743faf53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8b13ccf-81')#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.799 2 DEBUG nova.virt.libvirt.vif [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:33:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-521253734',display_name='tempest-TestGettingAddress-server-521253734',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-521253734',id=143,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF8dHWwoASzwaEtDMH2Wi+f2synjGjjZ9XZG0MWQfZpxTlFQLXxLEnN5PLSLnAZWxtveM7NKytid8Sra6Q2ThMibNL4mHEXIr9+f4S2pxjfer3narHj+r/DZfKSVQL2czA==',key_name='tempest-TestGettingAddress-1935488696',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:33:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-93iw5j7v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:33:51Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=1a22f837-6095-4ccc-8e71-79e69b15bc5b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "address": "fa:16:3e:61:ef:18", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:ef18", 
"type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1fcf69-c1", "ovs_interfaceid": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.800 2 DEBUG nova.network.os_vif_util [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "address": "fa:16:3e:61:ef:18", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1fcf69-c1", "ovs_interfaceid": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.800 2 DEBUG nova.network.os_vif_util [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:ef:18,bridge_name='br-int',has_traffic_filtering=True,id=bf1fcf69-c1da-4a76-8005-54c5457a915a,network=Network(a3ca3a81-ba03-43af-8eb7-2462170c9d43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf1fcf69-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.801 2 DEBUG os_vif [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:ef:18,bridge_name='br-int',has_traffic_filtering=True,id=bf1fcf69-c1da-4a76-8005-54c5457a915a,network=Network(a3ca3a81-ba03-43af-8eb7-2462170c9d43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf1fcf69-c1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.803 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf1fcf69-c1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.807 2 INFO os_vif [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:ef:18,bridge_name='br-int',has_traffic_filtering=True,id=bf1fcf69-c1da-4a76-8005-54c5457a915a,network=Network(a3ca3a81-ba03-43af-8eb7-2462170c9d43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf1fcf69-c1')#033[00m
Oct 14 05:34:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.814 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8942ddd3-95ce-4b34-8bdc-f80a00a066b9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:34:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.851 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[17602cce-83c2-4d63-8bed-c3354b888bcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.853 2 DEBUG nova.compute.manager [req-fb39a4ee-8a3b-44a7-ae9f-e3cb178f4eed req-15587318-ea77-4b8d-a691-7b5ae7305051 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Received event network-vif-unplugged-b8b13ccf-81a6-410e-a209-ce58758d66f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.854 2 DEBUG oslo_concurrency.lockutils [req-fb39a4ee-8a3b-44a7-ae9f-e3cb178f4eed req-15587318-ea77-4b8d-a691-7b5ae7305051 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.855 2 DEBUG oslo_concurrency.lockutils [req-fb39a4ee-8a3b-44a7-ae9f-e3cb178f4eed req-15587318-ea77-4b8d-a691-7b5ae7305051 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.855 2 DEBUG oslo_concurrency.lockutils [req-fb39a4ee-8a3b-44a7-ae9f-e3cb178f4eed req-15587318-ea77-4b8d-a691-7b5ae7305051 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:34:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.855 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[9f0cd00e-9e6b-4e76-bd2b-8461a5ccddd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.855 2 DEBUG nova.compute.manager [req-fb39a4ee-8a3b-44a7-ae9f-e3cb178f4eed req-15587318-ea77-4b8d-a691-7b5ae7305051 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] No waiting events found dispatching network-vif-unplugged-b8b13ccf-81a6-410e-a209-ce58758d66f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.856 2 DEBUG nova.compute.manager [req-fb39a4ee-8a3b-44a7-ae9f-e3cb178f4eed req-15587318-ea77-4b8d-a691-7b5ae7305051 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Received event network-vif-unplugged-b8b13ccf-81a6-410e-a209-ce58758d66f4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:34:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.895 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[9861c404-ba06-4c59-9104-3fe0d6dba1ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:34:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.916 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[be133d74-67f1-44f2-8ae1-85f4ba1597df]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa3ca3a81-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:84:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3460, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3460, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 435], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 817424, 'reachable_time': 30099, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2928, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2928, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 406698, 'error': None, 'target': 'ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:34:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.937 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[23fa1465-8230-4cf4-9c8c-6960bca9c0b6]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa3ca3a81-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 817441, 'tstamp': 817441}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 406699, 'error': None, 'target': 'ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:34:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.940 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3ca3a81-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.945 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3ca3a81-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:34:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.945 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:34:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.946 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa3ca3a81-b0, col_values=(('external_ids', {'iface-id': 'a176eb2a-6fbd-4b8e-90b2-85a86523eb62'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:34:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:15.947 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.997 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:34:15 np0005486808 nova_compute[259627]: 2025-10-14 09:34:15.997 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 14 05:34:16 np0005486808 nova_compute[259627]: 2025-10-14 09:34:16.016 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 14 05:34:16 np0005486808 nova_compute[259627]: 2025-10-14 09:34:16.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:16 np0005486808 nova_compute[259627]: 2025-10-14 09:34:16.203 2 INFO nova.virt.libvirt.driver [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Deleting instance files /var/lib/nova/instances/1a22f837-6095-4ccc-8e71-79e69b15bc5b_del#033[00m
Oct 14 05:34:16 np0005486808 nova_compute[259627]: 2025-10-14 09:34:16.205 2 INFO nova.virt.libvirt.driver [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Deletion of /var/lib/nova/instances/1a22f837-6095-4ccc-8e71-79e69b15bc5b_del complete#033[00m
Oct 14 05:34:16 np0005486808 nova_compute[259627]: 2025-10-14 09:34:16.254 2 INFO nova.compute.manager [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:34:16 np0005486808 nova_compute[259627]: 2025-10-14 09:34:16.255 2 DEBUG oslo.service.loopingcall [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:34:16 np0005486808 nova_compute[259627]: 2025-10-14 09:34:16.255 2 DEBUG nova.compute.manager [-] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:34:16 np0005486808 nova_compute[259627]: 2025-10-14 09:34:16.255 2 DEBUG nova.network.neutron [-] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:34:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2434: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 14 05:34:17 np0005486808 nova_compute[259627]: 2025-10-14 09:34:17.555 2 DEBUG nova.compute.manager [req-559b47d0-27e4-4292-a56a-cfc47c281730 req-7e9ef822-7aa5-4226-9738-18f66546a003 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Received event network-vif-unplugged-bf1fcf69-c1da-4a76-8005-54c5457a915a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:34:17 np0005486808 nova_compute[259627]: 2025-10-14 09:34:17.556 2 DEBUG oslo_concurrency.lockutils [req-559b47d0-27e4-4292-a56a-cfc47c281730 req-7e9ef822-7aa5-4226-9738-18f66546a003 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:34:17 np0005486808 nova_compute[259627]: 2025-10-14 09:34:17.556 2 DEBUG oslo_concurrency.lockutils [req-559b47d0-27e4-4292-a56a-cfc47c281730 req-7e9ef822-7aa5-4226-9738-18f66546a003 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:34:17 np0005486808 nova_compute[259627]: 2025-10-14 09:34:17.557 2 DEBUG oslo_concurrency.lockutils [req-559b47d0-27e4-4292-a56a-cfc47c281730 req-7e9ef822-7aa5-4226-9738-18f66546a003 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:34:17 np0005486808 nova_compute[259627]: 2025-10-14 09:34:17.557 2 DEBUG nova.compute.manager [req-559b47d0-27e4-4292-a56a-cfc47c281730 req-7e9ef822-7aa5-4226-9738-18f66546a003 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] No waiting events found dispatching network-vif-unplugged-bf1fcf69-c1da-4a76-8005-54c5457a915a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:34:17 np0005486808 nova_compute[259627]: 2025-10-14 09:34:17.558 2 DEBUG nova.compute.manager [req-559b47d0-27e4-4292-a56a-cfc47c281730 req-7e9ef822-7aa5-4226-9738-18f66546a003 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Received event network-vif-unplugged-bf1fcf69-c1da-4a76-8005-54c5457a915a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:34:17 np0005486808 nova_compute[259627]: 2025-10-14 09:34:17.558 2 DEBUG nova.compute.manager [req-559b47d0-27e4-4292-a56a-cfc47c281730 req-7e9ef822-7aa5-4226-9738-18f66546a003 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Received event network-vif-plugged-bf1fcf69-c1da-4a76-8005-54c5457a915a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:34:17 np0005486808 nova_compute[259627]: 2025-10-14 09:34:17.559 2 DEBUG oslo_concurrency.lockutils [req-559b47d0-27e4-4292-a56a-cfc47c281730 req-7e9ef822-7aa5-4226-9738-18f66546a003 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:34:17 np0005486808 nova_compute[259627]: 2025-10-14 09:34:17.559 2 DEBUG oslo_concurrency.lockutils [req-559b47d0-27e4-4292-a56a-cfc47c281730 req-7e9ef822-7aa5-4226-9738-18f66546a003 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:34:17 np0005486808 nova_compute[259627]: 2025-10-14 09:34:17.560 2 DEBUG oslo_concurrency.lockutils [req-559b47d0-27e4-4292-a56a-cfc47c281730 req-7e9ef822-7aa5-4226-9738-18f66546a003 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:34:17 np0005486808 nova_compute[259627]: 2025-10-14 09:34:17.560 2 DEBUG nova.compute.manager [req-559b47d0-27e4-4292-a56a-cfc47c281730 req-7e9ef822-7aa5-4226-9738-18f66546a003 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] No waiting events found dispatching network-vif-plugged-bf1fcf69-c1da-4a76-8005-54c5457a915a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:34:17 np0005486808 nova_compute[259627]: 2025-10-14 09:34:17.560 2 WARNING nova.compute.manager [req-559b47d0-27e4-4292-a56a-cfc47c281730 req-7e9ef822-7aa5-4226-9738-18f66546a003 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Received unexpected event network-vif-plugged-bf1fcf69-c1da-4a76-8005-54c5457a915a for instance with vm_state active and task_state deleting.#033[00m
Oct 14 05:34:17 np0005486808 nova_compute[259627]: 2025-10-14 09:34:17.603 2 DEBUG nova.network.neutron [req-afd73267-4148-4d24-a854-e64bcfd37a09 req-42837d26-5b2b-4cb3-8f00-d89c722a44e7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Updated VIF entry in instance network info cache for port b8b13ccf-81a6-410e-a209-ce58758d66f4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:34:17 np0005486808 nova_compute[259627]: 2025-10-14 09:34:17.604 2 DEBUG nova.network.neutron [req-afd73267-4148-4d24-a854-e64bcfd37a09 req-42837d26-5b2b-4cb3-8f00-d89c722a44e7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Updating instance_info_cache with network_info: [{"id": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "address": "fa:16:3e:c0:91:ad", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b13ccf-81", "ovs_interfaceid": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "address": "fa:16:3e:61:ef:18", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], 
"gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:ef18", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1fcf69-c1", "ovs_interfaceid": "bf1fcf69-c1da-4a76-8005-54c5457a915a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:34:17 np0005486808 nova_compute[259627]: 2025-10-14 09:34:17.628 2 DEBUG oslo_concurrency.lockutils [req-afd73267-4148-4d24-a854-e64bcfd37a09 req-42837d26-5b2b-4cb3-8f00-d89c722a44e7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-1a22f837-6095-4ccc-8e71-79e69b15bc5b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:34:17 np0005486808 nova_compute[259627]: 2025-10-14 09:34:17.953 2 DEBUG nova.compute.manager [req-d7761951-547b-4aff-a7c8-84bb57e11117 req-3b535b10-f394-4e51-a29e-86100f56cea8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Received event network-vif-plugged-b8b13ccf-81a6-410e-a209-ce58758d66f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:34:17 np0005486808 nova_compute[259627]: 2025-10-14 09:34:17.954 2 DEBUG oslo_concurrency.lockutils [req-d7761951-547b-4aff-a7c8-84bb57e11117 req-3b535b10-f394-4e51-a29e-86100f56cea8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:34:17 np0005486808 nova_compute[259627]: 2025-10-14 09:34:17.954 2 DEBUG oslo_concurrency.lockutils [req-d7761951-547b-4aff-a7c8-84bb57e11117 req-3b535b10-f394-4e51-a29e-86100f56cea8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:34:17 np0005486808 nova_compute[259627]: 2025-10-14 09:34:17.955 2 DEBUG oslo_concurrency.lockutils [req-d7761951-547b-4aff-a7c8-84bb57e11117 req-3b535b10-f394-4e51-a29e-86100f56cea8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:34:17 np0005486808 nova_compute[259627]: 2025-10-14 09:34:17.956 2 DEBUG nova.compute.manager [req-d7761951-547b-4aff-a7c8-84bb57e11117 req-3b535b10-f394-4e51-a29e-86100f56cea8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] No waiting events found dispatching network-vif-plugged-b8b13ccf-81a6-410e-a209-ce58758d66f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:34:17 np0005486808 nova_compute[259627]: 2025-10-14 09:34:17.956 2 WARNING nova.compute.manager [req-d7761951-547b-4aff-a7c8-84bb57e11117 req-3b535b10-f394-4e51-a29e-86100f56cea8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Received unexpected event network-vif-plugged-b8b13ccf-81a6-410e-a209-ce58758d66f4 for instance with vm_state active and task_state deleting.#033[00m
Oct 14 05:34:17 np0005486808 nova_compute[259627]: 2025-10-14 09:34:17.956 2 DEBUG nova.compute.manager [req-d7761951-547b-4aff-a7c8-84bb57e11117 req-3b535b10-f394-4e51-a29e-86100f56cea8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Received event network-vif-deleted-bf1fcf69-c1da-4a76-8005-54c5457a915a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:34:17 np0005486808 nova_compute[259627]: 2025-10-14 09:34:17.957 2 INFO nova.compute.manager [req-d7761951-547b-4aff-a7c8-84bb57e11117 req-3b535b10-f394-4e51-a29e-86100f56cea8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Neutron deleted interface bf1fcf69-c1da-4a76-8005-54c5457a915a; detaching it from the instance and deleting it from the info cache#033[00m
Oct 14 05:34:17 np0005486808 nova_compute[259627]: 2025-10-14 09:34:17.957 2 DEBUG nova.network.neutron [req-d7761951-547b-4aff-a7c8-84bb57e11117 req-3b535b10-f394-4e51-a29e-86100f56cea8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Updating instance_info_cache with network_info: [{"id": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "address": "fa:16:3e:c0:91:ad", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b13ccf-81", "ovs_interfaceid": "b8b13ccf-81a6-410e-a209-ce58758d66f4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:34:17 np0005486808 nova_compute[259627]: 2025-10-14 09:34:17.992 2 DEBUG nova.network.neutron [-] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:34:18 np0005486808 nova_compute[259627]: 2025-10-14 09:34:18.002 2 DEBUG nova.compute.manager [req-d7761951-547b-4aff-a7c8-84bb57e11117 req-3b535b10-f394-4e51-a29e-86100f56cea8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Detach interface failed, port_id=bf1fcf69-c1da-4a76-8005-54c5457a915a, reason: Instance 1a22f837-6095-4ccc-8e71-79e69b15bc5b could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct 14 05:34:18 np0005486808 nova_compute[259627]: 2025-10-14 09:34:18.013 2 INFO nova.compute.manager [-] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Took 1.76 seconds to deallocate network for instance.#033[00m
Oct 14 05:34:18 np0005486808 nova_compute[259627]: 2025-10-14 09:34:18.080 2 DEBUG oslo_concurrency.lockutils [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:34:18 np0005486808 nova_compute[259627]: 2025-10-14 09:34:18.082 2 DEBUG oslo_concurrency.lockutils [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:34:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:34:18 np0005486808 nova_compute[259627]: 2025-10-14 09:34:18.214 2 DEBUG oslo_concurrency.processutils [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:34:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2435: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 18 KiB/s wr, 2 op/s
Oct 14 05:34:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:34:18 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3063906089' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:34:18 np0005486808 nova_compute[259627]: 2025-10-14 09:34:18.710 2 DEBUG oslo_concurrency.processutils [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:34:18 np0005486808 nova_compute[259627]: 2025-10-14 09:34:18.723 2 DEBUG nova.compute.provider_tree [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:34:18 np0005486808 nova_compute[259627]: 2025-10-14 09:34:18.751 2 DEBUG nova.scheduler.client.report [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:34:18 np0005486808 nova_compute[259627]: 2025-10-14 09:34:18.785 2 DEBUG oslo_concurrency.lockutils [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:34:18 np0005486808 nova_compute[259627]: 2025-10-14 09:34:18.823 2 INFO nova.scheduler.client.report [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Deleted allocations for instance 1a22f837-6095-4ccc-8e71-79e69b15bc5b#033[00m
Oct 14 05:34:18 np0005486808 nova_compute[259627]: 2025-10-14 09:34:18.921 2 DEBUG oslo_concurrency.lockutils [None req-3b81ca9c-789c-445e-950d-9c07c475a777 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "1a22f837-6095-4ccc-8e71-79e69b15bc5b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.405s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:34:19 np0005486808 podman[406723]: 2025-10-14 09:34:19.693712412 +0000 UTC m=+0.102948988 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.schema-version=1.0)
Oct 14 05:34:19 np0005486808 podman[406724]: 2025-10-14 09:34:19.714470791 +0000 UTC m=+0.119824222 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251009)
Oct 14 05:34:19 np0005486808 nova_compute[259627]: 2025-10-14 09:34:19.992 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:34:20 np0005486808 nova_compute[259627]: 2025-10-14 09:34:20.091 2 DEBUG nova.compute.manager [req-9f9264d3-080a-4e50-8206-671ede7bdc16 req-b444b626-7b0b-4ee9-bac2-71c348743970 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Received event network-vif-deleted-b8b13ccf-81a6-410e-a209-ce58758d66f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:34:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2436: 305 pgs: 305 active+clean; 142 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 24 KiB/s wr, 26 op/s
Oct 14 05:34:20 np0005486808 nova_compute[259627]: 2025-10-14 09:34:20.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:21 np0005486808 nova_compute[259627]: 2025-10-14 09:34:21.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2437: 305 pgs: 305 active+clean; 121 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 23 KiB/s wr, 30 op/s
Oct 14 05:34:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.157 2 DEBUG nova.compute.manager [req-c0acee91-09b3-4adc-8f78-b67a42fea93c req-3ccd0aed-08f3-4cbe-962f-ef4955b09e58 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Received event network-changed-51563204-46d2-4b26-bfa3-a2dc0f43701a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.158 2 DEBUG nova.compute.manager [req-c0acee91-09b3-4adc-8f78-b67a42fea93c req-3ccd0aed-08f3-4cbe-962f-ef4955b09e58 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Refreshing instance network info cache due to event network-changed-51563204-46d2-4b26-bfa3-a2dc0f43701a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.159 2 DEBUG oslo_concurrency.lockutils [req-c0acee91-09b3-4adc-8f78-b67a42fea93c req-3ccd0aed-08f3-4cbe-962f-ef4955b09e58 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-aa3e17be-c995-4cab-b209-1eadaaff1634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.159 2 DEBUG oslo_concurrency.lockutils [req-c0acee91-09b3-4adc-8f78-b67a42fea93c req-3ccd0aed-08f3-4cbe-962f-ef4955b09e58 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-aa3e17be-c995-4cab-b209-1eadaaff1634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.160 2 DEBUG nova.network.neutron [req-c0acee91-09b3-4adc-8f78-b67a42fea93c req-3ccd0aed-08f3-4cbe-962f-ef4955b09e58 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Refreshing network info cache for port 51563204-46d2-4b26-bfa3-a2dc0f43701a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.228 2 DEBUG oslo_concurrency.lockutils [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "aa3e17be-c995-4cab-b209-1eadaaff1634" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.229 2 DEBUG oslo_concurrency.lockutils [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.230 2 DEBUG oslo_concurrency.lockutils [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.230 2 DEBUG oslo_concurrency.lockutils [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.231 2 DEBUG oslo_concurrency.lockutils [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.232 2 INFO nova.compute.manager [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Terminating instance#033[00m
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.234 2 DEBUG nova.compute.manager [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:34:23 np0005486808 kernel: tap51563204-46 (unregistering): left promiscuous mode
Oct 14 05:34:23 np0005486808 NetworkManager[44885]: <info>  [1760434463.3027] device (tap51563204-46): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:23 np0005486808 ovn_controller[152662]: 2025-10-14T09:34:23Z|01555|binding|INFO|Releasing lport 51563204-46d2-4b26-bfa3-a2dc0f43701a from this chassis (sb_readonly=0)
Oct 14 05:34:23 np0005486808 ovn_controller[152662]: 2025-10-14T09:34:23Z|01556|binding|INFO|Setting lport 51563204-46d2-4b26-bfa3-a2dc0f43701a down in Southbound
Oct 14 05:34:23 np0005486808 ovn_controller[152662]: 2025-10-14T09:34:23Z|01557|binding|INFO|Removing iface tap51563204-46 ovn-installed in OVS
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:23.378 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:aa:2c 10.100.0.13'], port_security=['fa:16:3e:8e:aa:2c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'aa3e17be-c995-4cab-b209-1eadaaff1634', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0ec9546c-0acc-437f-9f6e-7db1743faf53', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b0a1ad0e-ac41-4d92-be07-56e1d94d37c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02fc600f-219c-48ab-94ed-7d3694dfd14e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=51563204-46d2-4b26-bfa3-a2dc0f43701a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:34:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:23.380 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 51563204-46d2-4b26-bfa3-a2dc0f43701a in datapath 0ec9546c-0acc-437f-9f6e-7db1743faf53 unbound from our chassis#033[00m
Oct 14 05:34:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:23.382 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0ec9546c-0acc-437f-9f6e-7db1743faf53, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:34:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:23.383 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[591fe5f7-f5e5-4bb5-b5c7-36d2e3931c82]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:34:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:23.384 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53 namespace which is not needed anymore#033[00m
Oct 14 05:34:23 np0005486808 kernel: tape4183a77-e1 (unregistering): left promiscuous mode
Oct 14 05:34:23 np0005486808 NetworkManager[44885]: <info>  [1760434463.4008] device (tape4183a77-e1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:23 np0005486808 ovn_controller[152662]: 2025-10-14T09:34:23Z|01558|binding|INFO|Releasing lport e4183a77-e102-4885-9a7d-ef0431abf27c from this chassis (sb_readonly=0)
Oct 14 05:34:23 np0005486808 ovn_controller[152662]: 2025-10-14T09:34:23Z|01559|binding|INFO|Setting lport e4183a77-e102-4885-9a7d-ef0431abf27c down in Southbound
Oct 14 05:34:23 np0005486808 ovn_controller[152662]: 2025-10-14T09:34:23Z|01560|binding|INFO|Removing iface tape4183a77-e1 ovn-installed in OVS
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:23.421 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:fd:da 2001:db8:0:1:f816:3eff:fe8f:fdda 2001:db8::f816:3eff:fe8f:fdda'], port_security=['fa:16:3e:8f:fd:da 2001:db8:0:1:f816:3eff:fe8f:fdda 2001:db8::f816:3eff:fe8f:fdda'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe8f:fdda/64 2001:db8::f816:3eff:fe8f:fdda/64', 'neutron:device_id': 'aa3e17be-c995-4cab-b209-1eadaaff1634', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3ca3a81-ba03-43af-8eb7-2462170c9d43', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b0a1ad0e-ac41-4d92-be07-56e1d94d37c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6de0556a-f319-4356-b742-06d48d854bbd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=e4183a77-e102-4885-9a7d-ef0431abf27c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:23 np0005486808 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d0000008e.scope: Deactivated successfully.
Oct 14 05:34:23 np0005486808 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d0000008e.scope: Consumed 15.942s CPU time.
Oct 14 05:34:23 np0005486808 systemd-machined[214636]: Machine qemu-175-instance-0000008e terminated.
Oct 14 05:34:23 np0005486808 neutron-haproxy-ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53[405056]: [NOTICE]   (405060) : haproxy version is 2.8.14-c23fe91
Oct 14 05:34:23 np0005486808 neutron-haproxy-ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53[405056]: [NOTICE]   (405060) : path to executable is /usr/sbin/haproxy
Oct 14 05:34:23 np0005486808 neutron-haproxy-ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53[405056]: [WARNING]  (405060) : Exiting Master process...
Oct 14 05:34:23 np0005486808 neutron-haproxy-ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53[405056]: [ALERT]    (405060) : Current worker (405062) exited with code 143 (Terminated)
Oct 14 05:34:23 np0005486808 neutron-haproxy-ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53[405056]: [WARNING]  (405060) : All workers exited. Exiting... (0)
Oct 14 05:34:23 np0005486808 systemd[1]: libpod-e0bee49b8e1f8f8bcbcfebd7f24148d81c24d2e3d5c9ca6dd59307eb33e5da16.scope: Deactivated successfully.
Oct 14 05:34:23 np0005486808 podman[406791]: 2025-10-14 09:34:23.572694612 +0000 UTC m=+0.059138453 container died e0bee49b8e1f8f8bcbcfebd7f24148d81c24d2e3d5c9ca6dd59307eb33e5da16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 14 05:34:23 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e0bee49b8e1f8f8bcbcfebd7f24148d81c24d2e3d5c9ca6dd59307eb33e5da16-userdata-shm.mount: Deactivated successfully.
Oct 14 05:34:23 np0005486808 systemd[1]: var-lib-containers-storage-overlay-0619add7e0ccdeb5eee5bc37d898eed13da4d773245bbb9df7aa5e8493239769-merged.mount: Deactivated successfully.
Oct 14 05:34:23 np0005486808 podman[406791]: 2025-10-14 09:34:23.618459645 +0000 UTC m=+0.104903446 container cleanup e0bee49b8e1f8f8bcbcfebd7f24148d81c24d2e3d5c9ca6dd59307eb33e5da16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.624 2 DEBUG nova.compute.manager [req-e272e949-46aa-4802-a233-639c2b63e6a1 req-0c5c3e18-5ffe-46e6-85af-be7ff90f7d47 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Received event network-vif-unplugged-51563204-46d2-4b26-bfa3-a2dc0f43701a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.624 2 DEBUG oslo_concurrency.lockutils [req-e272e949-46aa-4802-a233-639c2b63e6a1 req-0c5c3e18-5ffe-46e6-85af-be7ff90f7d47 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.624 2 DEBUG oslo_concurrency.lockutils [req-e272e949-46aa-4802-a233-639c2b63e6a1 req-0c5c3e18-5ffe-46e6-85af-be7ff90f7d47 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.624 2 DEBUG oslo_concurrency.lockutils [req-e272e949-46aa-4802-a233-639c2b63e6a1 req-0c5c3e18-5ffe-46e6-85af-be7ff90f7d47 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.625 2 DEBUG nova.compute.manager [req-e272e949-46aa-4802-a233-639c2b63e6a1 req-0c5c3e18-5ffe-46e6-85af-be7ff90f7d47 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] No waiting events found dispatching network-vif-unplugged-51563204-46d2-4b26-bfa3-a2dc0f43701a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.625 2 DEBUG nova.compute.manager [req-e272e949-46aa-4802-a233-639c2b63e6a1 req-0c5c3e18-5ffe-46e6-85af-be7ff90f7d47 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Received event network-vif-unplugged-51563204-46d2-4b26-bfa3-a2dc0f43701a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:34:23 np0005486808 systemd[1]: libpod-conmon-e0bee49b8e1f8f8bcbcfebd7f24148d81c24d2e3d5c9ca6dd59307eb33e5da16.scope: Deactivated successfully.
Oct 14 05:34:23 np0005486808 NetworkManager[44885]: <info>  [1760434463.6621] manager: (tap51563204-46): new Tun device (/org/freedesktop/NetworkManager/Devices/631)
Oct 14 05:34:23 np0005486808 NetworkManager[44885]: <info>  [1760434463.6717] manager: (tape4183a77-e1): new Tun device (/org/freedesktop/NetworkManager/Devices/632)
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.692 2 INFO nova.virt.libvirt.driver [-] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Instance destroyed successfully.#033[00m
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.693 2 DEBUG nova.objects.instance [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'resources' on Instance uuid aa3e17be-c995-4cab-b209-1eadaaff1634 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.707 2 DEBUG nova.virt.libvirt.vif [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:33:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1334569923',display_name='tempest-TestGettingAddress-server-1334569923',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1334569923',id=142,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF8dHWwoASzwaEtDMH2Wi+f2synjGjjZ9XZG0MWQfZpxTlFQLXxLEnN5PLSLnAZWxtveM7NKytid8Sra6Q2ThMibNL4mHEXIr9+f4S2pxjfer3narHj+r/DZfKSVQL2czA==',key_name='tempest-TestGettingAddress-1935488696',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:33:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-z7alpslz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:33:15Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=aa3e17be-c995-4cab-b209-1eadaaff1634,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "address": "fa:16:3e:8e:aa:2c", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51563204-46", "ovs_interfaceid": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.707 2 DEBUG nova.network.os_vif_util [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "address": "fa:16:3e:8e:aa:2c", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51563204-46", "ovs_interfaceid": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.708 2 DEBUG nova.network.os_vif_util [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8e:aa:2c,bridge_name='br-int',has_traffic_filtering=True,id=51563204-46d2-4b26-bfa3-a2dc0f43701a,network=Network(0ec9546c-0acc-437f-9f6e-7db1743faf53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51563204-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.708 2 DEBUG os_vif [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8e:aa:2c,bridge_name='br-int',has_traffic_filtering=True,id=51563204-46d2-4b26-bfa3-a2dc0f43701a,network=Network(0ec9546c-0acc-437f-9f6e-7db1743faf53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51563204-46') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.711 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap51563204-46, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.720 2 INFO os_vif [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8e:aa:2c,bridge_name='br-int',has_traffic_filtering=True,id=51563204-46d2-4b26-bfa3-a2dc0f43701a,network=Network(0ec9546c-0acc-437f-9f6e-7db1743faf53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51563204-46')#033[00m
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.720 2 DEBUG nova.virt.libvirt.vif [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:33:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1334569923',display_name='tempest-TestGettingAddress-server-1334569923',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1334569923',id=142,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF8dHWwoASzwaEtDMH2Wi+f2synjGjjZ9XZG0MWQfZpxTlFQLXxLEnN5PLSLnAZWxtveM7NKytid8Sra6Q2ThMibNL4mHEXIr9+f4S2pxjfer3narHj+r/DZfKSVQL2czA==',key_name='tempest-TestGettingAddress-1935488696',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:33:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-z7alpslz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:33:15Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=aa3e17be-c995-4cab-b209-1eadaaff1634,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e4183a77-e102-4885-9a7d-ef0431abf27c", "address": "fa:16:3e:8f:fd:da", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:fdda", 
"type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4183a77-e1", "ovs_interfaceid": "e4183a77-e102-4885-9a7d-ef0431abf27c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.721 2 DEBUG nova.network.os_vif_util [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "e4183a77-e102-4885-9a7d-ef0431abf27c", "address": "fa:16:3e:8f:fd:da", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4183a77-e1", "ovs_interfaceid": "e4183a77-e102-4885-9a7d-ef0431abf27c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.721 2 DEBUG nova.network.os_vif_util [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8f:fd:da,bridge_name='br-int',has_traffic_filtering=True,id=e4183a77-e102-4885-9a7d-ef0431abf27c,network=Network(a3ca3a81-ba03-43af-8eb7-2462170c9d43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4183a77-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.721 2 DEBUG os_vif [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8f:fd:da,bridge_name='br-int',has_traffic_filtering=True,id=e4183a77-e102-4885-9a7d-ef0431abf27c,network=Network(a3ca3a81-ba03-43af-8eb7-2462170c9d43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4183a77-e1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.722 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4183a77-e1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:23 np0005486808 podman[406819]: 2025-10-14 09:34:23.729286144 +0000 UTC m=+0.064095164 container remove e0bee49b8e1f8f8bcbcfebd7f24148d81c24d2e3d5c9ca6dd59307eb33e5da16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.726 2 INFO os_vif [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8f:fd:da,bridge_name='br-int',has_traffic_filtering=True,id=e4183a77-e102-4885-9a7d-ef0431abf27c,network=Network(a3ca3a81-ba03-43af-8eb7-2462170c9d43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4183a77-e1')#033[00m
Oct 14 05:34:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:23.736 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0e381778-9ef1-4a7b-909c-5932cf191980]: (4, ('Tue Oct 14 09:34:23 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53 (e0bee49b8e1f8f8bcbcfebd7f24148d81c24d2e3d5c9ca6dd59307eb33e5da16)\ne0bee49b8e1f8f8bcbcfebd7f24148d81c24d2e3d5c9ca6dd59307eb33e5da16\nTue Oct 14 09:34:23 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53 (e0bee49b8e1f8f8bcbcfebd7f24148d81c24d2e3d5c9ca6dd59307eb33e5da16)\ne0bee49b8e1f8f8bcbcfebd7f24148d81c24d2e3d5c9ca6dd59307eb33e5da16\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:34:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:23.738 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[278c35a6-e0b9-4099-b381-ddd9b0465d7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:34:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:23.739 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ec9546c-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:34:23 np0005486808 kernel: tap0ec9546c-00: left promiscuous mode
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:23 np0005486808 nova_compute[259627]: 2025-10-14 09:34:23.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:23.763 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0edb0996-93f7-4a53-b6fa-3b55bab90e76]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:34:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:23.791 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7d56abaf-7187-4dbd-a445-2188505a2de5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:34:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:23.793 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[faf45dc0-8f75-40a8-883b-d03be183854a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:34:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:23.811 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[86c3154b-2aac-488f-bbc6-b615a530f6c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 817315, 'reachable_time': 37820, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 406872, 'error': None, 'target': 'ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:34:23 np0005486808 systemd[1]: run-netns-ovnmeta\x2d0ec9546c\x2d0acc\x2d437f\x2d9f6e\x2d7db1743faf53.mount: Deactivated successfully.
Oct 14 05:34:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:23.819 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0ec9546c-0acc-437f-9f6e-7db1743faf53 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:34:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:23.819 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[74848880-86cf-4292-aa43-d97fb64d34f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:34:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:23.820 162547 INFO neutron.agent.ovn.metadata.agent [-] Port e4183a77-e102-4885-9a7d-ef0431abf27c in datapath a3ca3a81-ba03-43af-8eb7-2462170c9d43 unbound from our chassis#033[00m
Oct 14 05:34:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:23.821 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a3ca3a81-ba03-43af-8eb7-2462170c9d43, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:34:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:23.822 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[906067f5-d1ee-4c44-8307-111743499e18]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:34:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:23.823 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43 namespace which is not needed anymore#033[00m
Oct 14 05:34:23 np0005486808 neutron-haproxy-ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43[405128]: [NOTICE]   (405132) : haproxy version is 2.8.14-c23fe91
Oct 14 05:34:23 np0005486808 neutron-haproxy-ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43[405128]: [NOTICE]   (405132) : path to executable is /usr/sbin/haproxy
Oct 14 05:34:23 np0005486808 neutron-haproxy-ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43[405128]: [WARNING]  (405132) : Exiting Master process...
Oct 14 05:34:23 np0005486808 neutron-haproxy-ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43[405128]: [WARNING]  (405132) : Exiting Master process...
Oct 14 05:34:23 np0005486808 neutron-haproxy-ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43[405128]: [ALERT]    (405132) : Current worker (405134) exited with code 143 (Terminated)
Oct 14 05:34:23 np0005486808 neutron-haproxy-ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43[405128]: [WARNING]  (405132) : All workers exited. Exiting... (0)
Oct 14 05:34:23 np0005486808 systemd[1]: libpod-552b988adfeb56997fa30aa5977645dc5867546abc2f816808fd3e2568c1ed63.scope: Deactivated successfully.
Oct 14 05:34:23 np0005486808 podman[406887]: 2025-10-14 09:34:23.986434855 +0000 UTC m=+0.046065162 container died 552b988adfeb56997fa30aa5977645dc5867546abc2f816808fd3e2568c1ed63 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 14 05:34:24 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-552b988adfeb56997fa30aa5977645dc5867546abc2f816808fd3e2568c1ed63-userdata-shm.mount: Deactivated successfully.
Oct 14 05:34:24 np0005486808 systemd[1]: var-lib-containers-storage-overlay-c0a4b15685401cfe097a8856dcb11738118fab0652949510d103f701d6144a22-merged.mount: Deactivated successfully.
Oct 14 05:34:24 np0005486808 podman[406887]: 2025-10-14 09:34:24.032594798 +0000 UTC m=+0.092225105 container cleanup 552b988adfeb56997fa30aa5977645dc5867546abc2f816808fd3e2568c1ed63 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 14 05:34:24 np0005486808 systemd[1]: libpod-conmon-552b988adfeb56997fa30aa5977645dc5867546abc2f816808fd3e2568c1ed63.scope: Deactivated successfully.
Oct 14 05:34:24 np0005486808 podman[406916]: 2025-10-14 09:34:24.104556984 +0000 UTC m=+0.050135832 container remove 552b988adfeb56997fa30aa5977645dc5867546abc2f816808fd3e2568c1ed63 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 14 05:34:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:24.110 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6e7cfc1e-471e-4d0f-beb3-1cac78028da5]: (4, ('Tue Oct 14 09:34:23 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43 (552b988adfeb56997fa30aa5977645dc5867546abc2f816808fd3e2568c1ed63)\n552b988adfeb56997fa30aa5977645dc5867546abc2f816808fd3e2568c1ed63\nTue Oct 14 09:34:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43 (552b988adfeb56997fa30aa5977645dc5867546abc2f816808fd3e2568c1ed63)\n552b988adfeb56997fa30aa5977645dc5867546abc2f816808fd3e2568c1ed63\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:34:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:24.112 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a5ec2115-ed9d-4387-8cce-99147d29f529]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:34:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:24.114 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3ca3a81-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:34:24 np0005486808 kernel: tapa3ca3a81-b0: left promiscuous mode
Oct 14 05:34:24 np0005486808 nova_compute[259627]: 2025-10-14 09:34:24.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:24 np0005486808 nova_compute[259627]: 2025-10-14 09:34:24.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:24.150 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[45f85b5c-2ce6-4279-8b3b-713bb774663f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:34:24 np0005486808 nova_compute[259627]: 2025-10-14 09:34:24.156 2 INFO nova.virt.libvirt.driver [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Deleting instance files /var/lib/nova/instances/aa3e17be-c995-4cab-b209-1eadaaff1634_del#033[00m
Oct 14 05:34:24 np0005486808 nova_compute[259627]: 2025-10-14 09:34:24.157 2 INFO nova.virt.libvirt.driver [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Deletion of /var/lib/nova/instances/aa3e17be-c995-4cab-b209-1eadaaff1634_del complete#033[00m
Oct 14 05:34:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:24.180 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c4c82e8f-2700-4bd0-8872-189f47689f51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:34:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:24.182 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e3d513f7-c6a2-437f-82d8-43b8b8115188]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:34:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:24.202 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[baebe3bf-afb3-4e8a-b297-e74382388c1d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 817415, 'reachable_time': 44086, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 406933, 'error': None, 'target': 'ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:34:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:24.205 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a3ca3a81-ba03-43af-8eb7-2462170c9d43 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:34:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:24.205 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[48413248-99c0-4ca3-a7f9-69e38d662435]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:34:24 np0005486808 nova_compute[259627]: 2025-10-14 09:34:24.229 2 INFO nova.compute.manager [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Took 0.99 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:34:24 np0005486808 nova_compute[259627]: 2025-10-14 09:34:24.230 2 DEBUG oslo.service.loopingcall [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:34:24 np0005486808 nova_compute[259627]: 2025-10-14 09:34:24.231 2 DEBUG nova.compute.manager [-] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:34:24 np0005486808 nova_compute[259627]: 2025-10-14 09:34:24.231 2 DEBUG nova.network.neutron [-] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:34:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2438: 305 pgs: 305 active+clean; 121 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 13 KiB/s wr, 30 op/s
Oct 14 05:34:24 np0005486808 systemd[1]: run-netns-ovnmeta\x2da3ca3a81\x2dba03\x2d43af\x2d8eb7\x2d2462170c9d43.mount: Deactivated successfully.
Oct 14 05:34:25 np0005486808 nova_compute[259627]: 2025-10-14 09:34:25.251 2 DEBUG nova.compute.manager [req-6bfc246e-6d82-4978-942e-e1b52c436d8b req-aa36d6e3-b766-49cb-acc9-d22c8dc70fc8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Received event network-vif-unplugged-e4183a77-e102-4885-9a7d-ef0431abf27c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:34:25 np0005486808 nova_compute[259627]: 2025-10-14 09:34:25.252 2 DEBUG oslo_concurrency.lockutils [req-6bfc246e-6d82-4978-942e-e1b52c436d8b req-aa36d6e3-b766-49cb-acc9-d22c8dc70fc8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:34:25 np0005486808 nova_compute[259627]: 2025-10-14 09:34:25.253 2 DEBUG oslo_concurrency.lockutils [req-6bfc246e-6d82-4978-942e-e1b52c436d8b req-aa36d6e3-b766-49cb-acc9-d22c8dc70fc8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:34:25 np0005486808 nova_compute[259627]: 2025-10-14 09:34:25.253 2 DEBUG oslo_concurrency.lockutils [req-6bfc246e-6d82-4978-942e-e1b52c436d8b req-aa36d6e3-b766-49cb-acc9-d22c8dc70fc8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:34:25 np0005486808 nova_compute[259627]: 2025-10-14 09:34:25.254 2 DEBUG nova.compute.manager [req-6bfc246e-6d82-4978-942e-e1b52c436d8b req-aa36d6e3-b766-49cb-acc9-d22c8dc70fc8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] No waiting events found dispatching network-vif-unplugged-e4183a77-e102-4885-9a7d-ef0431abf27c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:34:25 np0005486808 nova_compute[259627]: 2025-10-14 09:34:25.254 2 DEBUG nova.compute.manager [req-6bfc246e-6d82-4978-942e-e1b52c436d8b req-aa36d6e3-b766-49cb-acc9-d22c8dc70fc8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Received event network-vif-unplugged-e4183a77-e102-4885-9a7d-ef0431abf27c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:34:25 np0005486808 nova_compute[259627]: 2025-10-14 09:34:25.255 2 DEBUG nova.compute.manager [req-6bfc246e-6d82-4978-942e-e1b52c436d8b req-aa36d6e3-b766-49cb-acc9-d22c8dc70fc8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Received event network-vif-plugged-e4183a77-e102-4885-9a7d-ef0431abf27c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:34:25 np0005486808 nova_compute[259627]: 2025-10-14 09:34:25.255 2 DEBUG oslo_concurrency.lockutils [req-6bfc246e-6d82-4978-942e-e1b52c436d8b req-aa36d6e3-b766-49cb-acc9-d22c8dc70fc8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:34:25 np0005486808 nova_compute[259627]: 2025-10-14 09:34:25.256 2 DEBUG oslo_concurrency.lockutils [req-6bfc246e-6d82-4978-942e-e1b52c436d8b req-aa36d6e3-b766-49cb-acc9-d22c8dc70fc8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:34:25 np0005486808 nova_compute[259627]: 2025-10-14 09:34:25.256 2 DEBUG oslo_concurrency.lockutils [req-6bfc246e-6d82-4978-942e-e1b52c436d8b req-aa36d6e3-b766-49cb-acc9-d22c8dc70fc8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:34:25 np0005486808 nova_compute[259627]: 2025-10-14 09:34:25.256 2 DEBUG nova.compute.manager [req-6bfc246e-6d82-4978-942e-e1b52c436d8b req-aa36d6e3-b766-49cb-acc9-d22c8dc70fc8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] No waiting events found dispatching network-vif-plugged-e4183a77-e102-4885-9a7d-ef0431abf27c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:34:25 np0005486808 nova_compute[259627]: 2025-10-14 09:34:25.257 2 WARNING nova.compute.manager [req-6bfc246e-6d82-4978-942e-e1b52c436d8b req-aa36d6e3-b766-49cb-acc9-d22c8dc70fc8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Received unexpected event network-vif-plugged-e4183a77-e102-4885-9a7d-ef0431abf27c for instance with vm_state active and task_state deleting.#033[00m
Oct 14 05:34:25 np0005486808 nova_compute[259627]: 2025-10-14 09:34:25.502 2 DEBUG nova.network.neutron [req-c0acee91-09b3-4adc-8f78-b67a42fea93c req-3ccd0aed-08f3-4cbe-962f-ef4955b09e58 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Updated VIF entry in instance network info cache for port 51563204-46d2-4b26-bfa3-a2dc0f43701a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:34:25 np0005486808 nova_compute[259627]: 2025-10-14 09:34:25.502 2 DEBUG nova.network.neutron [req-c0acee91-09b3-4adc-8f78-b67a42fea93c req-3ccd0aed-08f3-4cbe-962f-ef4955b09e58 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Updating instance_info_cache with network_info: [{"id": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "address": "fa:16:3e:8e:aa:2c", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51563204-46", "ovs_interfaceid": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e4183a77-e102-4885-9a7d-ef0431abf27c", "address": "fa:16:3e:8f:fd:da", "network": {"id": "a3ca3a81-ba03-43af-8eb7-2462170c9d43", "bridge": "br-int", "label": "tempest-network-smoke--776641956", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], 
"gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:fdda", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4183a77-e1", "ovs_interfaceid": "e4183a77-e102-4885-9a7d-ef0431abf27c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:34:25 np0005486808 nova_compute[259627]: 2025-10-14 09:34:25.518 2 DEBUG oslo_concurrency.lockutils [req-c0acee91-09b3-4adc-8f78-b67a42fea93c req-3ccd0aed-08f3-4cbe-962f-ef4955b09e58 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-aa3e17be-c995-4cab-b209-1eadaaff1634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:34:25 np0005486808 nova_compute[259627]: 2025-10-14 09:34:25.752 2 DEBUG nova.compute.manager [req-e52e889d-1a1d-4d61-8d80-7eed7156a165 req-48142e50-d3c8-45c3-8211-dd6464651ed3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Received event network-vif-plugged-51563204-46d2-4b26-bfa3-a2dc0f43701a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:34:25 np0005486808 nova_compute[259627]: 2025-10-14 09:34:25.752 2 DEBUG oslo_concurrency.lockutils [req-e52e889d-1a1d-4d61-8d80-7eed7156a165 req-48142e50-d3c8-45c3-8211-dd6464651ed3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:34:25 np0005486808 nova_compute[259627]: 2025-10-14 09:34:25.753 2 DEBUG oslo_concurrency.lockutils [req-e52e889d-1a1d-4d61-8d80-7eed7156a165 req-48142e50-d3c8-45c3-8211-dd6464651ed3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:34:25 np0005486808 nova_compute[259627]: 2025-10-14 09:34:25.753 2 DEBUG oslo_concurrency.lockutils [req-e52e889d-1a1d-4d61-8d80-7eed7156a165 req-48142e50-d3c8-45c3-8211-dd6464651ed3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:34:25 np0005486808 nova_compute[259627]: 2025-10-14 09:34:25.754 2 DEBUG nova.compute.manager [req-e52e889d-1a1d-4d61-8d80-7eed7156a165 req-48142e50-d3c8-45c3-8211-dd6464651ed3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] No waiting events found dispatching network-vif-plugged-51563204-46d2-4b26-bfa3-a2dc0f43701a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:34:25 np0005486808 nova_compute[259627]: 2025-10-14 09:34:25.754 2 WARNING nova.compute.manager [req-e52e889d-1a1d-4d61-8d80-7eed7156a165 req-48142e50-d3c8-45c3-8211-dd6464651ed3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Received unexpected event network-vif-plugged-51563204-46d2-4b26-bfa3-a2dc0f43701a for instance with vm_state active and task_state deleting.#033[00m
Oct 14 05:34:25 np0005486808 nova_compute[259627]: 2025-10-14 09:34:25.754 2 DEBUG nova.compute.manager [req-e52e889d-1a1d-4d61-8d80-7eed7156a165 req-48142e50-d3c8-45c3-8211-dd6464651ed3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Received event network-vif-deleted-e4183a77-e102-4885-9a7d-ef0431abf27c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:34:25 np0005486808 nova_compute[259627]: 2025-10-14 09:34:25.755 2 INFO nova.compute.manager [req-e52e889d-1a1d-4d61-8d80-7eed7156a165 req-48142e50-d3c8-45c3-8211-dd6464651ed3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Neutron deleted interface e4183a77-e102-4885-9a7d-ef0431abf27c; detaching it from the instance and deleting it from the info cache#033[00m
Oct 14 05:34:25 np0005486808 nova_compute[259627]: 2025-10-14 09:34:25.755 2 DEBUG nova.network.neutron [req-e52e889d-1a1d-4d61-8d80-7eed7156a165 req-48142e50-d3c8-45c3-8211-dd6464651ed3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Updating instance_info_cache with network_info: [{"id": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "address": "fa:16:3e:8e:aa:2c", "network": {"id": "0ec9546c-0acc-437f-9f6e-7db1743faf53", "bridge": "br-int", "label": "tempest-network-smoke--1828977783", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51563204-46", "ovs_interfaceid": "51563204-46d2-4b26-bfa3-a2dc0f43701a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:34:25 np0005486808 nova_compute[259627]: 2025-10-14 09:34:25.798 2 DEBUG nova.network.neutron [-] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:34:25 np0005486808 nova_compute[259627]: 2025-10-14 09:34:25.802 2 DEBUG nova.compute.manager [req-e52e889d-1a1d-4d61-8d80-7eed7156a165 req-48142e50-d3c8-45c3-8211-dd6464651ed3 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Detach interface failed, port_id=e4183a77-e102-4885-9a7d-ef0431abf27c, reason: Instance aa3e17be-c995-4cab-b209-1eadaaff1634 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct 14 05:34:25 np0005486808 nova_compute[259627]: 2025-10-14 09:34:25.826 2 INFO nova.compute.manager [-] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Took 1.59 seconds to deallocate network for instance.#033[00m
Oct 14 05:34:25 np0005486808 nova_compute[259627]: 2025-10-14 09:34:25.879 2 DEBUG oslo_concurrency.lockutils [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:34:25 np0005486808 nova_compute[259627]: 2025-10-14 09:34:25.880 2 DEBUG oslo_concurrency.lockutils [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:34:25 np0005486808 nova_compute[259627]: 2025-10-14 09:34:25.928 2 DEBUG oslo_concurrency.processutils [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:34:26 np0005486808 nova_compute[259627]: 2025-10-14 09:34:26.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:26 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:34:26 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1031465038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:34:26 np0005486808 nova_compute[259627]: 2025-10-14 09:34:26.450 2 DEBUG oslo_concurrency.processutils [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:34:26 np0005486808 nova_compute[259627]: 2025-10-14 09:34:26.457 2 DEBUG nova.compute.provider_tree [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:34:26 np0005486808 nova_compute[259627]: 2025-10-14 09:34:26.481 2 DEBUG nova.scheduler.client.report [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:34:26 np0005486808 nova_compute[259627]: 2025-10-14 09:34:26.512 2 DEBUG oslo_concurrency.lockutils [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:34:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2439: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 14 KiB/s wr, 57 op/s
Oct 14 05:34:26 np0005486808 nova_compute[259627]: 2025-10-14 09:34:26.550 2 INFO nova.scheduler.client.report [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Deleted allocations for instance aa3e17be-c995-4cab-b209-1eadaaff1634#033[00m
Oct 14 05:34:26 np0005486808 nova_compute[259627]: 2025-10-14 09:34:26.630 2 DEBUG oslo_concurrency.lockutils [None req-b8a38a0b-bcd1-431b-b93e-da20e2adce00 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "aa3e17be-c995-4cab-b209-1eadaaff1634" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.401s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:34:27 np0005486808 nova_compute[259627]: 2025-10-14 09:34:27.134 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:34:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:34:27 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:34:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:34:27 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:34:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:34:27 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:34:27 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 221da1bf-9db7-40cf-83dc-2877b2af0e5d does not exist
Oct 14 05:34:27 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev f9b2dfa8-131b-4146-a7c7-074dc1ba0490 does not exist
Oct 14 05:34:27 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 145e96c0-de5a-4540-a538-c7780faf845d does not exist
Oct 14 05:34:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:34:27 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:34:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:34:27 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:34:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:34:27 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:34:27 np0005486808 nova_compute[259627]: 2025-10-14 09:34:27.873 2 DEBUG nova.compute.manager [req-a67e81d0-a272-4964-80d5-f972d6b9ad20 req-7b7dfdce-1d54-492f-a7e5-fccfe7faea5d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Received event network-vif-deleted-51563204-46d2-4b26-bfa3-a2dc0f43701a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:34:27 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:34:27 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:34:27 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:34:28 np0005486808 podman[407229]: 2025-10-14 09:34:28.060321507 +0000 UTC m=+0.060755222 container create 1782e04e82bfb019fd129030a8ff774cd46206414416fa0edff32fda11b7f182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 14 05:34:28 np0005486808 systemd[1]: Started libpod-conmon-1782e04e82bfb019fd129030a8ff774cd46206414416fa0edff32fda11b7f182.scope.
Oct 14 05:34:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:34:28 np0005486808 podman[407229]: 2025-10-14 09:34:28.026482986 +0000 UTC m=+0.026916701 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:34:28 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:34:28 np0005486808 podman[407229]: 2025-10-14 09:34:28.141612772 +0000 UTC m=+0.142046447 container init 1782e04e82bfb019fd129030a8ff774cd46206414416fa0edff32fda11b7f182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 14 05:34:28 np0005486808 podman[407229]: 2025-10-14 09:34:28.149664679 +0000 UTC m=+0.150098354 container start 1782e04e82bfb019fd129030a8ff774cd46206414416fa0edff32fda11b7f182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goldberg, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:34:28 np0005486808 podman[407229]: 2025-10-14 09:34:28.153200386 +0000 UTC m=+0.153634081 container attach 1782e04e82bfb019fd129030a8ff774cd46206414416fa0edff32fda11b7f182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 14 05:34:28 np0005486808 hungry_goldberg[407245]: 167 167
Oct 14 05:34:28 np0005486808 systemd[1]: libpod-1782e04e82bfb019fd129030a8ff774cd46206414416fa0edff32fda11b7f182.scope: Deactivated successfully.
Oct 14 05:34:28 np0005486808 podman[407229]: 2025-10-14 09:34:28.156970569 +0000 UTC m=+0.157404244 container died 1782e04e82bfb019fd129030a8ff774cd46206414416fa0edff32fda11b7f182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goldberg, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:34:28 np0005486808 systemd[1]: var-lib-containers-storage-overlay-a9a774f21be8fda3194a00316d7ff69c4d7e071b94d0b64bdbd09f3da7ff9953-merged.mount: Deactivated successfully.
Oct 14 05:34:28 np0005486808 podman[407229]: 2025-10-14 09:34:28.191692941 +0000 UTC m=+0.192126626 container remove 1782e04e82bfb019fd129030a8ff774cd46206414416fa0edff32fda11b7f182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goldberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:34:28 np0005486808 systemd[1]: libpod-conmon-1782e04e82bfb019fd129030a8ff774cd46206414416fa0edff32fda11b7f182.scope: Deactivated successfully.
Oct 14 05:34:28 np0005486808 podman[407269]: 2025-10-14 09:34:28.395343188 +0000 UTC m=+0.050363557 container create c84d789ff8d91e463e690dd47d813c7ee0ff7ef1c95bd3f510a2431906a41f65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:34:28 np0005486808 systemd[1]: Started libpod-conmon-c84d789ff8d91e463e690dd47d813c7ee0ff7ef1c95bd3f510a2431906a41f65.scope.
Oct 14 05:34:28 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:34:28 np0005486808 podman[407269]: 2025-10-14 09:34:28.374285052 +0000 UTC m=+0.029305441 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:34:28 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab696a0d33ab9e3cd41e4190253d12bfd0acdc497627c06e7902343eb992c445/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:34:28 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab696a0d33ab9e3cd41e4190253d12bfd0acdc497627c06e7902343eb992c445/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:34:28 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab696a0d33ab9e3cd41e4190253d12bfd0acdc497627c06e7902343eb992c445/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:34:28 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab696a0d33ab9e3cd41e4190253d12bfd0acdc497627c06e7902343eb992c445/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:34:28 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab696a0d33ab9e3cd41e4190253d12bfd0acdc497627c06e7902343eb992c445/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:34:28 np0005486808 podman[407269]: 2025-10-14 09:34:28.481291488 +0000 UTC m=+0.136311847 container init c84d789ff8d91e463e690dd47d813c7ee0ff7ef1c95bd3f510a2431906a41f65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_volhard, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:34:28 np0005486808 podman[407269]: 2025-10-14 09:34:28.496236964 +0000 UTC m=+0.151257323 container start c84d789ff8d91e463e690dd47d813c7ee0ff7ef1c95bd3f510a2431906a41f65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_volhard, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 14 05:34:28 np0005486808 podman[407269]: 2025-10-14 09:34:28.500434167 +0000 UTC m=+0.155454556 container attach c84d789ff8d91e463e690dd47d813c7ee0ff7ef1c95bd3f510a2431906a41f65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_volhard, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 14 05:34:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2440: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 7.1 KiB/s wr, 55 op/s
Oct 14 05:34:28 np0005486808 nova_compute[259627]: 2025-10-14 09:34:28.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:34:29 np0005486808 intelligent_volhard[407285]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:34:29 np0005486808 intelligent_volhard[407285]: --> relative data size: 1.0
Oct 14 05:34:29 np0005486808 intelligent_volhard[407285]: --> All data devices are unavailable
Oct 14 05:34:29 np0005486808 systemd[1]: libpod-c84d789ff8d91e463e690dd47d813c7ee0ff7ef1c95bd3f510a2431906a41f65.scope: Deactivated successfully.
Oct 14 05:34:29 np0005486808 systemd[1]: libpod-c84d789ff8d91e463e690dd47d813c7ee0ff7ef1c95bd3f510a2431906a41f65.scope: Consumed 1.176s CPU time.
Oct 14 05:34:29 np0005486808 conmon[407285]: conmon c84d789ff8d91e463e69 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c84d789ff8d91e463e690dd47d813c7ee0ff7ef1c95bd3f510a2431906a41f65.scope/container/memory.events
Oct 14 05:34:29 np0005486808 podman[407269]: 2025-10-14 09:34:29.726227087 +0000 UTC m=+1.381247466 container died c84d789ff8d91e463e690dd47d813c7ee0ff7ef1c95bd3f510a2431906a41f65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_volhard, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 14 05:34:29 np0005486808 systemd[1]: var-lib-containers-storage-overlay-ab696a0d33ab9e3cd41e4190253d12bfd0acdc497627c06e7902343eb992c445-merged.mount: Deactivated successfully.
Oct 14 05:34:29 np0005486808 podman[407269]: 2025-10-14 09:34:29.802879949 +0000 UTC m=+1.457900328 container remove c84d789ff8d91e463e690dd47d813c7ee0ff7ef1c95bd3f510a2431906a41f65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 14 05:34:29 np0005486808 systemd[1]: libpod-conmon-c84d789ff8d91e463e690dd47d813c7ee0ff7ef1c95bd3f510a2431906a41f65.scope: Deactivated successfully.
Oct 14 05:34:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2441: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 7.1 KiB/s wr, 55 op/s
Oct 14 05:34:30 np0005486808 podman[407469]: 2025-10-14 09:34:30.581224759 +0000 UTC m=+0.057878871 container create e5481875930041df5755cd75c78062aa9b8227876c73ad2d413b9ea51d59e2ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:34:30 np0005486808 systemd[1]: Started libpod-conmon-e5481875930041df5755cd75c78062aa9b8227876c73ad2d413b9ea51d59e2ca.scope.
Oct 14 05:34:30 np0005486808 podman[407469]: 2025-10-14 09:34:30.555225681 +0000 UTC m=+0.031879843 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:34:30 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:34:30 np0005486808 podman[407469]: 2025-10-14 09:34:30.684401891 +0000 UTC m=+0.161056033 container init e5481875930041df5755cd75c78062aa9b8227876c73ad2d413b9ea51d59e2ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:34:30 np0005486808 podman[407469]: 2025-10-14 09:34:30.695327579 +0000 UTC m=+0.171981691 container start e5481875930041df5755cd75c78062aa9b8227876c73ad2d413b9ea51d59e2ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_murdock, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:34:30 np0005486808 podman[407469]: 2025-10-14 09:34:30.698804725 +0000 UTC m=+0.175458817 container attach e5481875930041df5755cd75c78062aa9b8227876c73ad2d413b9ea51d59e2ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_murdock, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:34:30 np0005486808 zen_murdock[407485]: 167 167
Oct 14 05:34:30 np0005486808 systemd[1]: libpod-e5481875930041df5755cd75c78062aa9b8227876c73ad2d413b9ea51d59e2ca.scope: Deactivated successfully.
Oct 14 05:34:30 np0005486808 podman[407469]: 2025-10-14 09:34:30.703218843 +0000 UTC m=+0.179872985 container died e5481875930041df5755cd75c78062aa9b8227876c73ad2d413b9ea51d59e2ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_murdock, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 14 05:34:30 np0005486808 systemd[1]: var-lib-containers-storage-overlay-df0005df252c108518a7fa3b51a136eae93e44c122abeca4af18c03d50c98d97-merged.mount: Deactivated successfully.
Oct 14 05:34:30 np0005486808 podman[407469]: 2025-10-14 09:34:30.752943183 +0000 UTC m=+0.229597295 container remove e5481875930041df5755cd75c78062aa9b8227876c73ad2d413b9ea51d59e2ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_murdock, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:34:30 np0005486808 systemd[1]: libpod-conmon-e5481875930041df5755cd75c78062aa9b8227876c73ad2d413b9ea51d59e2ca.scope: Deactivated successfully.
Oct 14 05:34:30 np0005486808 nova_compute[259627]: 2025-10-14 09:34:30.773 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434455.7726548, 1a22f837-6095-4ccc-8e71-79e69b15bc5b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:34:30 np0005486808 nova_compute[259627]: 2025-10-14 09:34:30.777 2 INFO nova.compute.manager [-] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] VM Stopped (Lifecycle Event)
Oct 14 05:34:30 np0005486808 nova_compute[259627]: 2025-10-14 09:34:30.793 2 DEBUG nova.compute.manager [None req-8b3d1b52-59b4-46bd-93f9-a34fd22a1f3a - - - - - -] [instance: 1a22f837-6095-4ccc-8e71-79e69b15bc5b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:34:30 np0005486808 podman[407507]: 2025-10-14 09:34:30.935893593 +0000 UTC m=+0.050043059 container create 90e7bc2bbe85813b03aaf1f0b01f0ea40d29d5ec420917942639df463bf3a88a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 14 05:34:30 np0005486808 systemd[1]: Started libpod-conmon-90e7bc2bbe85813b03aaf1f0b01f0ea40d29d5ec420917942639df463bf3a88a.scope.
Oct 14 05:34:31 np0005486808 podman[407507]: 2025-10-14 09:34:30.911546185 +0000 UTC m=+0.025695691 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:34:31 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:34:31 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3c9e47383d6db986c0cc001a35518b16323a053de0ed54cbc28656122e24791/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:34:31 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3c9e47383d6db986c0cc001a35518b16323a053de0ed54cbc28656122e24791/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:34:31 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3c9e47383d6db986c0cc001a35518b16323a053de0ed54cbc28656122e24791/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:34:31 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3c9e47383d6db986c0cc001a35518b16323a053de0ed54cbc28656122e24791/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:34:31 np0005486808 podman[407507]: 2025-10-14 09:34:31.043197186 +0000 UTC m=+0.157346712 container init 90e7bc2bbe85813b03aaf1f0b01f0ea40d29d5ec420917942639df463bf3a88a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jackson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 05:34:31 np0005486808 nova_compute[259627]: 2025-10-14 09:34:31.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:34:31 np0005486808 podman[407507]: 2025-10-14 09:34:31.062159411 +0000 UTC m=+0.176308907 container start 90e7bc2bbe85813b03aaf1f0b01f0ea40d29d5ec420917942639df463bf3a88a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:34:31 np0005486808 podman[407507]: 2025-10-14 09:34:31.066243462 +0000 UTC m=+0.180392998 container attach 90e7bc2bbe85813b03aaf1f0b01f0ea40d29d5ec420917942639df463bf3a88a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]: {
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:    "0": [
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:        {
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:            "devices": [
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:                "/dev/loop3"
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:            ],
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:            "lv_name": "ceph_lv0",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:            "lv_size": "21470642176",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:            "name": "ceph_lv0",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:            "tags": {
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:                "ceph.cluster_name": "ceph",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:                "ceph.crush_device_class": "",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:                "ceph.encrypted": "0",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:                "ceph.osd_id": "0",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:                "ceph.type": "block",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:                "ceph.vdo": "0"
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:            },
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:            "type": "block",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:            "vg_name": "ceph_vg0"
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:        }
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:    ],
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:    "1": [
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:        {
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:            "devices": [
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:                "/dev/loop4"
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:            ],
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:            "lv_name": "ceph_lv1",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:            "lv_size": "21470642176",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:            "name": "ceph_lv1",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:            "tags": {
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:                "ceph.cluster_name": "ceph",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:                "ceph.crush_device_class": "",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:                "ceph.encrypted": "0",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:                "ceph.osd_id": "1",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:                "ceph.type": "block",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:                "ceph.vdo": "0"
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:            },
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:            "type": "block",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:            "vg_name": "ceph_vg1"
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:        }
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:    ],
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:    "2": [
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:        {
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:            "devices": [
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:                "/dev/loop5"
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:            ],
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:            "lv_name": "ceph_lv2",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:            "lv_size": "21470642176",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:            "name": "ceph_lv2",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:            "tags": {
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:                "ceph.cluster_name": "ceph",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:                "ceph.crush_device_class": "",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:                "ceph.encrypted": "0",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:                "ceph.osd_id": "2",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:                "ceph.type": "block",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:                "ceph.vdo": "0"
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:            },
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:            "type": "block",
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:            "vg_name": "ceph_vg2"
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:        }
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]:    ]
Oct 14 05:34:31 np0005486808 romantic_jackson[407524]: }
Oct 14 05:34:31 np0005486808 systemd[1]: libpod-90e7bc2bbe85813b03aaf1f0b01f0ea40d29d5ec420917942639df463bf3a88a.scope: Deactivated successfully.
Oct 14 05:34:31 np0005486808 podman[407507]: 2025-10-14 09:34:31.858895273 +0000 UTC m=+0.973044779 container died 90e7bc2bbe85813b03aaf1f0b01f0ea40d29d5ec420917942639df463bf3a88a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jackson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 14 05:34:31 np0005486808 systemd[1]: var-lib-containers-storage-overlay-f3c9e47383d6db986c0cc001a35518b16323a053de0ed54cbc28656122e24791-merged.mount: Deactivated successfully.
Oct 14 05:34:31 np0005486808 podman[407507]: 2025-10-14 09:34:31.945686653 +0000 UTC m=+1.059836119 container remove 90e7bc2bbe85813b03aaf1f0b01f0ea40d29d5ec420917942639df463bf3a88a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:34:31 np0005486808 systemd[1]: libpod-conmon-90e7bc2bbe85813b03aaf1f0b01f0ea40d29d5ec420917942639df463bf3a88a.scope: Deactivated successfully.
Oct 14 05:34:32 np0005486808 podman[407541]: 2025-10-14 09:34:32.012479192 +0000 UTC m=+0.101030780 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:34:32 np0005486808 podman[407534]: 2025-10-14 09:34:32.111191315 +0000 UTC m=+0.204526120 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 05:34:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2442: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.5 KiB/s wr, 32 op/s
Oct 14 05:34:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:34:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:34:32 np0005486808 podman[407725]: 2025-10-14 09:34:32.782913879 +0000 UTC m=+0.062487304 container create d7bf53991094ce066ccc2a4009d12539b0c715062c0a4da9a5a0cc74edb6f079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_wescoff, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:34:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:34:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:34:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:34:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:34:32 np0005486808 systemd[1]: Started libpod-conmon-d7bf53991094ce066ccc2a4009d12539b0c715062c0a4da9a5a0cc74edb6f079.scope.
Oct 14 05:34:32 np0005486808 podman[407725]: 2025-10-14 09:34:32.759535895 +0000 UTC m=+0.039109400 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:34:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:34:32
Oct 14 05:34:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:34:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:34:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['default.rgw.control', 'volumes', '.rgw.root', 'images', 'vms', 'default.rgw.meta', '.mgr', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Oct 14 05:34:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:34:32 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:34:32 np0005486808 podman[407725]: 2025-10-14 09:34:32.88157743 +0000 UTC m=+0.161150865 container init d7bf53991094ce066ccc2a4009d12539b0c715062c0a4da9a5a0cc74edb6f079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_wescoff, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 14 05:34:32 np0005486808 podman[407725]: 2025-10-14 09:34:32.893323579 +0000 UTC m=+0.172897004 container start d7bf53991094ce066ccc2a4009d12539b0c715062c0a4da9a5a0cc74edb6f079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_wescoff, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 14 05:34:32 np0005486808 podman[407725]: 2025-10-14 09:34:32.896595109 +0000 UTC m=+0.176168554 container attach d7bf53991094ce066ccc2a4009d12539b0c715062c0a4da9a5a0cc74edb6f079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_wescoff, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:34:32 np0005486808 hardcore_wescoff[407741]: 167 167
Oct 14 05:34:32 np0005486808 systemd[1]: libpod-d7bf53991094ce066ccc2a4009d12539b0c715062c0a4da9a5a0cc74edb6f079.scope: Deactivated successfully.
Oct 14 05:34:32 np0005486808 podman[407725]: 2025-10-14 09:34:32.900476084 +0000 UTC m=+0.180049539 container died d7bf53991094ce066ccc2a4009d12539b0c715062c0a4da9a5a0cc74edb6f079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 14 05:34:32 np0005486808 systemd[1]: var-lib-containers-storage-overlay-ea5385b7ae71cb3cb1a0c36f444ce84ab73620722125ecd6096f4326dc4848fc-merged.mount: Deactivated successfully.
Oct 14 05:34:32 np0005486808 podman[407725]: 2025-10-14 09:34:32.938920887 +0000 UTC m=+0.218494312 container remove d7bf53991094ce066ccc2a4009d12539b0c715062c0a4da9a5a0cc74edb6f079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:34:32 np0005486808 systemd[1]: libpod-conmon-d7bf53991094ce066ccc2a4009d12539b0c715062c0a4da9a5a0cc74edb6f079.scope: Deactivated successfully.
Oct 14 05:34:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:34:33 np0005486808 podman[407765]: 2025-10-14 09:34:33.107452932 +0000 UTC m=+0.047908176 container create 655e6b7f05d02220371623cf6ae0f2ad925278f1a889a18a9f2607bccc9748c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 14 05:34:33 np0005486808 systemd[1]: Started libpod-conmon-655e6b7f05d02220371623cf6ae0f2ad925278f1a889a18a9f2607bccc9748c4.scope.
Oct 14 05:34:33 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:34:33 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e35abfabc94db8e5eccae1a7a71c9b6fce752cec4b76449f14159e866ca5325/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:34:33 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e35abfabc94db8e5eccae1a7a71c9b6fce752cec4b76449f14159e866ca5325/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:34:33 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e35abfabc94db8e5eccae1a7a71c9b6fce752cec4b76449f14159e866ca5325/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:34:33 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e35abfabc94db8e5eccae1a7a71c9b6fce752cec4b76449f14159e866ca5325/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:34:33 np0005486808 podman[407765]: 2025-10-14 09:34:33.091540292 +0000 UTC m=+0.031995556 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:34:33 np0005486808 podman[407765]: 2025-10-14 09:34:33.190762887 +0000 UTC m=+0.131218201 container init 655e6b7f05d02220371623cf6ae0f2ad925278f1a889a18a9f2607bccc9748c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 14 05:34:33 np0005486808 podman[407765]: 2025-10-14 09:34:33.202843353 +0000 UTC m=+0.143298617 container start 655e6b7f05d02220371623cf6ae0f2ad925278f1a889a18a9f2607bccc9748c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 14 05:34:33 np0005486808 podman[407765]: 2025-10-14 09:34:33.207231041 +0000 UTC m=+0.147686365 container attach 655e6b7f05d02220371623cf6ae0f2ad925278f1a889a18a9f2607bccc9748c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 14 05:34:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:34:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:34:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:34:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:34:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:34:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:34:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:34:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:34:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:34:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:34:33 np0005486808 nova_compute[259627]: 2025-10-14 09:34:33.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:34 np0005486808 trusting_bardeen[407782]: {
Oct 14 05:34:34 np0005486808 trusting_bardeen[407782]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:34:34 np0005486808 trusting_bardeen[407782]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:34:34 np0005486808 trusting_bardeen[407782]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:34:34 np0005486808 trusting_bardeen[407782]:        "osd_id": 2,
Oct 14 05:34:34 np0005486808 trusting_bardeen[407782]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:34:34 np0005486808 trusting_bardeen[407782]:        "type": "bluestore"
Oct 14 05:34:34 np0005486808 trusting_bardeen[407782]:    },
Oct 14 05:34:34 np0005486808 trusting_bardeen[407782]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:34:34 np0005486808 trusting_bardeen[407782]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:34:34 np0005486808 trusting_bardeen[407782]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:34:34 np0005486808 trusting_bardeen[407782]:        "osd_id": 1,
Oct 14 05:34:34 np0005486808 trusting_bardeen[407782]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:34:34 np0005486808 trusting_bardeen[407782]:        "type": "bluestore"
Oct 14 05:34:34 np0005486808 trusting_bardeen[407782]:    },
Oct 14 05:34:34 np0005486808 trusting_bardeen[407782]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:34:34 np0005486808 trusting_bardeen[407782]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:34:34 np0005486808 trusting_bardeen[407782]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:34:34 np0005486808 trusting_bardeen[407782]:        "osd_id": 0,
Oct 14 05:34:34 np0005486808 trusting_bardeen[407782]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:34:34 np0005486808 trusting_bardeen[407782]:        "type": "bluestore"
Oct 14 05:34:34 np0005486808 trusting_bardeen[407782]:    }
Oct 14 05:34:34 np0005486808 trusting_bardeen[407782]: }
Oct 14 05:34:34 np0005486808 systemd[1]: libpod-655e6b7f05d02220371623cf6ae0f2ad925278f1a889a18a9f2607bccc9748c4.scope: Deactivated successfully.
Oct 14 05:34:34 np0005486808 systemd[1]: libpod-655e6b7f05d02220371623cf6ae0f2ad925278f1a889a18a9f2607bccc9748c4.scope: Consumed 1.035s CPU time.
Oct 14 05:34:34 np0005486808 conmon[407782]: conmon 655e6b7f05d022203716 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-655e6b7f05d02220371623cf6ae0f2ad925278f1a889a18a9f2607bccc9748c4.scope/container/memory.events
Oct 14 05:34:34 np0005486808 podman[407765]: 2025-10-14 09:34:34.229448406 +0000 UTC m=+1.169903650 container died 655e6b7f05d02220371623cf6ae0f2ad925278f1a889a18a9f2607bccc9748c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bardeen, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 14 05:34:34 np0005486808 systemd[1]: var-lib-containers-storage-overlay-1e35abfabc94db8e5eccae1a7a71c9b6fce752cec4b76449f14159e866ca5325-merged.mount: Deactivated successfully.
Oct 14 05:34:34 np0005486808 podman[407765]: 2025-10-14 09:34:34.296819189 +0000 UTC m=+1.237274433 container remove 655e6b7f05d02220371623cf6ae0f2ad925278f1a889a18a9f2607bccc9748c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 14 05:34:34 np0005486808 systemd[1]: libpod-conmon-655e6b7f05d02220371623cf6ae0f2ad925278f1a889a18a9f2607bccc9748c4.scope: Deactivated successfully.
Oct 14 05:34:34 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:34:34 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:34:34 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:34:34 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:34:34 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev ff7b34cb-e596-4737-b577-14bb9ee63091 does not exist
Oct 14 05:34:34 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev de0c9931-b3e3-46eb-b8c1-1fc01af283d4 does not exist
Oct 14 05:34:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2443: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 05:34:34 np0005486808 nova_compute[259627]: 2025-10-14 09:34:34.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:34 np0005486808 nova_compute[259627]: 2025-10-14 09:34:34.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:35 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:34:35 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:34:36 np0005486808 nova_compute[259627]: 2025-10-14 09:34:36.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2444: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 05:34:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:34:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2445: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:34:38 np0005486808 nova_compute[259627]: 2025-10-14 09:34:38.691 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434463.6909003, aa3e17be-c995-4cab-b209-1eadaaff1634 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:34:38 np0005486808 nova_compute[259627]: 2025-10-14 09:34:38.692 2 INFO nova.compute.manager [-] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:34:38 np0005486808 nova_compute[259627]: 2025-10-14 09:34:38.722 2 DEBUG nova.compute.manager [None req-b9167f18-7cf2-464c-8002-9786f27e265a - - - - - -] [instance: aa3e17be-c995-4cab-b209-1eadaaff1634] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:34:38 np0005486808 nova_compute[259627]: 2025-10-14 09:34:38.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2446: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 20 op/s
Oct 14 05:34:41 np0005486808 nova_compute[259627]: 2025-10-14 09:34:41.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2447: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 32 op/s
Oct 14 05:34:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:34:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:34:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:34:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:34:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:34:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 05:34:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:34:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:34:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:34:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:34:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:34:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 05:34:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:34:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:34:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:34:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:34:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:34:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:34:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:34:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:34:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:34:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:34:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:34:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:34:43 np0005486808 nova_compute[259627]: 2025-10-14 09:34:43.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:43 np0005486808 nova_compute[259627]: 2025-10-14 09:34:43.998 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:34:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2448: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 32 op/s
Oct 14 05:34:46 np0005486808 nova_compute[259627]: 2025-10-14 09:34:46.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2449: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 05:34:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:34:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2450: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 05:34:48 np0005486808 nova_compute[259627]: 2025-10-14 09:34:48.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:48 np0005486808 nova_compute[259627]: 2025-10-14 09:34:48.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:34:48 np0005486808 nova_compute[259627]: 2025-10-14 09:34:48.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:34:48 np0005486808 nova_compute[259627]: 2025-10-14 09:34:48.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:34:49 np0005486808 nova_compute[259627]: 2025-10-14 09:34:49.014 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:34:49 np0005486808 nova_compute[259627]: 2025-10-14 09:34:49.014 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:34:49 np0005486808 nova_compute[259627]: 2025-10-14 09:34:49.014 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:34:49 np0005486808 nova_compute[259627]: 2025-10-14 09:34:49.015 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:34:49 np0005486808 nova_compute[259627]: 2025-10-14 09:34:49.015 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:34:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:49.355 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:34:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:49.357 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 14 05:34:49 np0005486808 nova_compute[259627]: 2025-10-14 09:34:49.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:34:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2510582438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:34:49 np0005486808 nova_compute[259627]: 2025-10-14 09:34:49.588 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:34:49 np0005486808 nova_compute[259627]: 2025-10-14 09:34:49.851 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:34:49 np0005486808 nova_compute[259627]: 2025-10-14 09:34:49.854 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3620MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:34:49 np0005486808 nova_compute[259627]: 2025-10-14 09:34:49.854 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:34:49 np0005486808 nova_compute[259627]: 2025-10-14 09:34:49.855 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:34:49 np0005486808 nova_compute[259627]: 2025-10-14 09:34:49.954 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:34:49 np0005486808 nova_compute[259627]: 2025-10-14 09:34:49.954 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:34:49 np0005486808 nova_compute[259627]: 2025-10-14 09:34:49.975 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing inventories for resource provider 92105e1d-1743-46e3-a494-858b4331398a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 14 05:34:50 np0005486808 nova_compute[259627]: 2025-10-14 09:34:50.009 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Updating ProviderTree inventory for provider 92105e1d-1743-46e3-a494-858b4331398a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 14 05:34:50 np0005486808 nova_compute[259627]: 2025-10-14 09:34:50.010 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Updating inventory in ProviderTree for provider 92105e1d-1743-46e3-a494-858b4331398a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 14 05:34:50 np0005486808 nova_compute[259627]: 2025-10-14 09:34:50.030 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing aggregate associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 14 05:34:50 np0005486808 nova_compute[259627]: 2025-10-14 09:34:50.059 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing trait associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, traits: COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_ACCELERATORS,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 14 05:34:50 np0005486808 nova_compute[259627]: 2025-10-14 09:34:50.084 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:34:50 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:34:50 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/73745318' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:34:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2451: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 05:34:50 np0005486808 nova_compute[259627]: 2025-10-14 09:34:50.549 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:34:50 np0005486808 nova_compute[259627]: 2025-10-14 09:34:50.560 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:34:50 np0005486808 nova_compute[259627]: 2025-10-14 09:34:50.576 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:34:50 np0005486808 nova_compute[259627]: 2025-10-14 09:34:50.603 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:34:50 np0005486808 nova_compute[259627]: 2025-10-14 09:34:50.603 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:34:50 np0005486808 podman[407925]: 2025-10-14 09:34:50.669100971 +0000 UTC m=+0.077523983 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3)
Oct 14 05:34:50 np0005486808 podman[407924]: 2025-10-14 09:34:50.688114748 +0000 UTC m=+0.090445061 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:34:51 np0005486808 nova_compute[259627]: 2025-10-14 09:34:51.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2452: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 39 op/s
Oct 14 05:34:52 np0005486808 nova_compute[259627]: 2025-10-14 09:34:52.600 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:34:52 np0005486808 nova_compute[259627]: 2025-10-14 09:34:52.600 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:34:52 np0005486808 nova_compute[259627]: 2025-10-14 09:34:52.601 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:34:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:34:53 np0005486808 nova_compute[259627]: 2025-10-14 09:34:53.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:53 np0005486808 nova_compute[259627]: 2025-10-14 09:34:53.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:34:53 np0005486808 nova_compute[259627]: 2025-10-14 09:34:53.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:34:54 np0005486808 nova_compute[259627]: 2025-10-14 09:34:54.013 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 14 05:34:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2453: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Oct 14 05:34:56 np0005486808 nova_compute[259627]: 2025-10-14 09:34:56.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:56.460 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:2e:6c 2001:db8:0:1:f816:3eff:fe38:2e6c 2001:db8::f816:3eff:fe38:2e6c'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe38:2e6c/64 2001:db8::f816:3eff:fe38:2e6c/64', 'neutron:device_id': 'ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0d3b36cd-f345-4ba4-8ea7-29b299ab0543', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=612e2e91-84a5-412c-996c-976c512895aa, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=1caef19e-e76a-434d-84c0-dc762554a564) old=Port_Binding(mac=['fa:16:3e:38:2e:6c 2001:db8::f816:3eff:fe38:2e6c'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe38:2e6c/64', 'neutron:device_id': 'ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0d3b36cd-f345-4ba4-8ea7-29b299ab0543', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:34:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:56.462 162547 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 1caef19e-e76a-434d-84c0-dc762554a564 in datapath 0d3b36cd-f345-4ba4-8ea7-29b299ab0543 updated#033[00m
Oct 14 05:34:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:56.464 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0d3b36cd-f345-4ba4-8ea7-29b299ab0543, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:34:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:56.465 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d8d280a9-63fa-47db-9f25-c3b59625e780]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:34:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2454: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Oct 14 05:34:56 np0005486808 nova_compute[259627]: 2025-10-14 09:34:56.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:34:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:34:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2455: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:34:58 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #117. Immutable memtables: 0.
Oct 14 05:34:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:34:58.711831) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 05:34:58 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 117
Oct 14 05:34:58 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434498711876, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 2056, "num_deletes": 251, "total_data_size": 3375488, "memory_usage": 3433976, "flush_reason": "Manual Compaction"}
Oct 14 05:34:58 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #118: started
Oct 14 05:34:58 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434498734730, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 118, "file_size": 3319864, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49806, "largest_seqno": 51861, "table_properties": {"data_size": 3310506, "index_size": 5916, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18957, "raw_average_key_size": 20, "raw_value_size": 3291963, "raw_average_value_size": 3502, "num_data_blocks": 262, "num_entries": 940, "num_filter_entries": 940, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760434273, "oldest_key_time": 1760434273, "file_creation_time": 1760434498, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 118, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:34:58 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 22968 microseconds, and 15065 cpu microseconds.
Oct 14 05:34:58 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:34:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:34:58.734796) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #118: 3319864 bytes OK
Oct 14 05:34:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:34:58.734828) [db/memtable_list.cc:519] [default] Level-0 commit table #118 started
Oct 14 05:34:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:34:58.736902) [db/memtable_list.cc:722] [default] Level-0 commit table #118: memtable #1 done
Oct 14 05:34:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:34:58.736926) EVENT_LOG_v1 {"time_micros": 1760434498736918, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 05:34:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:34:58.736952) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 05:34:58 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 3366869, prev total WAL file size 3366869, number of live WAL files 2.
Oct 14 05:34:58 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000114.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:34:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:34:58.738590) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Oct 14 05:34:58 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 05:34:58 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [118(3242KB)], [116(8158KB)]
Oct 14 05:34:58 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434498738656, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [118], "files_L6": [116], "score": -1, "input_data_size": 11674677, "oldest_snapshot_seqno": -1}
Oct 14 05:34:58 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #119: 7399 keys, 9956735 bytes, temperature: kUnknown
Oct 14 05:34:58 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434498799693, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 119, "file_size": 9956735, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9907561, "index_size": 29542, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18565, "raw_key_size": 191545, "raw_average_key_size": 25, "raw_value_size": 9775688, "raw_average_value_size": 1321, "num_data_blocks": 1156, "num_entries": 7399, "num_filter_entries": 7399, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760434498, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:34:58 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:34:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:34:58.800127) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 9956735 bytes
Oct 14 05:34:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:34:58.801751) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 191.0 rd, 162.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.0 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(6.5) write-amplify(3.0) OK, records in: 7913, records dropped: 514 output_compression: NoCompression
Oct 14 05:34:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:34:58.801782) EVENT_LOG_v1 {"time_micros": 1760434498801767, "job": 70, "event": "compaction_finished", "compaction_time_micros": 61134, "compaction_time_cpu_micros": 43736, "output_level": 6, "num_output_files": 1, "total_output_size": 9956735, "num_input_records": 7913, "num_output_records": 7399, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 05:34:58 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000118.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:34:58 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434498803238, "job": 70, "event": "table_file_deletion", "file_number": 118}
Oct 14 05:34:58 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:34:58 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434498806342, "job": 70, "event": "table_file_deletion", "file_number": 116}
Oct 14 05:34:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:34:58.738421) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:34:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:34:58.806406) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:34:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:34:58.806411) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:34:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:34:58.806413) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:34:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:34:58.806415) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:34:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:34:58.806417) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:34:58 np0005486808 nova_compute[259627]: 2025-10-14 09:34:58.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:34:58 np0005486808 nova_compute[259627]: 2025-10-14 09:34:58.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:34:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:34:59.359 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:35:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2456: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:35:01 np0005486808 nova_compute[259627]: 2025-10-14 09:35:01.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2457: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:35:02 np0005486808 podman[407967]: 2025-10-14 09:35:02.679299947 +0000 UTC m=+0.088569794 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 05:35:02 np0005486808 podman[407966]: 2025-10-14 09:35:02.709249512 +0000 UTC m=+0.119761970 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller)
Oct 14 05:35:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:35:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:35:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:35:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:35:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:35:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:35:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:35:03 np0005486808 nova_compute[259627]: 2025-10-14 09:35:03.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:03 np0005486808 nova_compute[259627]: 2025-10-14 09:35:03.846 2 DEBUG oslo_concurrency.lockutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:35:03 np0005486808 nova_compute[259627]: 2025-10-14 09:35:03.847 2 DEBUG oslo_concurrency.lockutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:35:03 np0005486808 nova_compute[259627]: 2025-10-14 09:35:03.861 2 DEBUG nova.compute.manager [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:35:03 np0005486808 nova_compute[259627]: 2025-10-14 09:35:03.956 2 DEBUG oslo_concurrency.lockutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:35:03 np0005486808 nova_compute[259627]: 2025-10-14 09:35:03.957 2 DEBUG oslo_concurrency.lockutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:35:03 np0005486808 nova_compute[259627]: 2025-10-14 09:35:03.967 2 DEBUG nova.virt.hardware [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:35:03 np0005486808 nova_compute[259627]: 2025-10-14 09:35:03.968 2 INFO nova.compute.claims [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:35:04 np0005486808 nova_compute[259627]: 2025-10-14 09:35:04.085 2 DEBUG oslo_concurrency.processutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:35:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2458: 305 pgs: 305 active+clean; 41 MiB data, 935 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:35:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:35:04 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1292228845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:35:04 np0005486808 nova_compute[259627]: 2025-10-14 09:35:04.577 2 DEBUG oslo_concurrency.processutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:35:04 np0005486808 nova_compute[259627]: 2025-10-14 09:35:04.585 2 DEBUG nova.compute.provider_tree [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:35:04 np0005486808 nova_compute[259627]: 2025-10-14 09:35:04.601 2 DEBUG nova.scheduler.client.report [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:35:04 np0005486808 nova_compute[259627]: 2025-10-14 09:35:04.624 2 DEBUG oslo_concurrency.lockutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:35:04 np0005486808 nova_compute[259627]: 2025-10-14 09:35:04.624 2 DEBUG nova.compute.manager [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:35:04 np0005486808 nova_compute[259627]: 2025-10-14 09:35:04.682 2 DEBUG nova.compute.manager [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:35:04 np0005486808 nova_compute[259627]: 2025-10-14 09:35:04.682 2 DEBUG nova.network.neutron [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:35:04 np0005486808 nova_compute[259627]: 2025-10-14 09:35:04.707 2 INFO nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:35:04 np0005486808 nova_compute[259627]: 2025-10-14 09:35:04.726 2 DEBUG nova.compute.manager [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:35:04 np0005486808 nova_compute[259627]: 2025-10-14 09:35:04.836 2 DEBUG nova.compute.manager [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:35:04 np0005486808 nova_compute[259627]: 2025-10-14 09:35:04.838 2 DEBUG nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:35:04 np0005486808 nova_compute[259627]: 2025-10-14 09:35:04.839 2 INFO nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Creating image(s)#033[00m
Oct 14 05:35:04 np0005486808 nova_compute[259627]: 2025-10-14 09:35:04.872 2 DEBUG nova.storage.rbd_utils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image c977bdc6-8dd7-4cb4-b50d-28e7313a16e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:35:04 np0005486808 nova_compute[259627]: 2025-10-14 09:35:04.908 2 DEBUG nova.storage.rbd_utils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image c977bdc6-8dd7-4cb4-b50d-28e7313a16e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:35:04 np0005486808 nova_compute[259627]: 2025-10-14 09:35:04.943 2 DEBUG nova.storage.rbd_utils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image c977bdc6-8dd7-4cb4-b50d-28e7313a16e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:35:04 np0005486808 nova_compute[259627]: 2025-10-14 09:35:04.948 2 DEBUG oslo_concurrency.processutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:35:05 np0005486808 nova_compute[259627]: 2025-10-14 09:35:05.058 2 DEBUG oslo_concurrency.processutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:35:05 np0005486808 nova_compute[259627]: 2025-10-14 09:35:05.060 2 DEBUG oslo_concurrency.lockutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:35:05 np0005486808 nova_compute[259627]: 2025-10-14 09:35:05.061 2 DEBUG oslo_concurrency.lockutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:35:05 np0005486808 nova_compute[259627]: 2025-10-14 09:35:05.061 2 DEBUG oslo_concurrency.lockutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:35:05 np0005486808 nova_compute[259627]: 2025-10-14 09:35:05.094 2 DEBUG nova.storage.rbd_utils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image c977bdc6-8dd7-4cb4-b50d-28e7313a16e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:35:05 np0005486808 nova_compute[259627]: 2025-10-14 09:35:05.101 2 DEBUG oslo_concurrency.processutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 c977bdc6-8dd7-4cb4-b50d-28e7313a16e8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:35:05 np0005486808 nova_compute[259627]: 2025-10-14 09:35:05.158 2 DEBUG nova.policy [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30cd4a7832e74a8ba07238937405c4ea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:35:05 np0005486808 nova_compute[259627]: 2025-10-14 09:35:05.414 2 DEBUG oslo_concurrency.processutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 c977bdc6-8dd7-4cb4-b50d-28e7313a16e8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.313s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:35:05 np0005486808 nova_compute[259627]: 2025-10-14 09:35:05.498 2 DEBUG nova.storage.rbd_utils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] resizing rbd image c977bdc6-8dd7-4cb4-b50d-28e7313a16e8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:35:05 np0005486808 nova_compute[259627]: 2025-10-14 09:35:05.595 2 DEBUG nova.objects.instance [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'migration_context' on Instance uuid c977bdc6-8dd7-4cb4-b50d-28e7313a16e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:35:05 np0005486808 nova_compute[259627]: 2025-10-14 09:35:05.641 2 DEBUG nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:35:05 np0005486808 nova_compute[259627]: 2025-10-14 09:35:05.642 2 DEBUG nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Ensure instance console log exists: /var/lib/nova/instances/c977bdc6-8dd7-4cb4-b50d-28e7313a16e8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:35:05 np0005486808 nova_compute[259627]: 2025-10-14 09:35:05.643 2 DEBUG oslo_concurrency.lockutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:35:05 np0005486808 nova_compute[259627]: 2025-10-14 09:35:05.644 2 DEBUG oslo_concurrency.lockutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:35:05 np0005486808 nova_compute[259627]: 2025-10-14 09:35:05.644 2 DEBUG oslo_concurrency.lockutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:35:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:35:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1790611074' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:35:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:35:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1790611074' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:35:06 np0005486808 nova_compute[259627]: 2025-10-14 09:35:06.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2459: 305 pgs: 305 active+clean; 65 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.1 MiB/s wr, 21 op/s
Oct 14 05:35:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:07.051 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:35:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:07.052 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:35:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:07.052 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:35:07 np0005486808 nova_compute[259627]: 2025-10-14 09:35:07.257 2 DEBUG nova.network.neutron [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Successfully created port: 13d4f68b-234a-4c46-9e1d-79f28a907bf2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:35:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:35:08 np0005486808 nova_compute[259627]: 2025-10-14 09:35:08.118 2 DEBUG nova.network.neutron [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Successfully created port: b2457aed-ba7c-4d69-b93d-9f4c98e456b2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:35:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2460: 305 pgs: 305 active+clean; 65 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.1 MiB/s wr, 21 op/s
Oct 14 05:35:08 np0005486808 nova_compute[259627]: 2025-10-14 09:35:08.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:09 np0005486808 nova_compute[259627]: 2025-10-14 09:35:09.299 2 DEBUG nova.network.neutron [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Successfully updated port: 13d4f68b-234a-4c46-9e1d-79f28a907bf2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:35:09 np0005486808 nova_compute[259627]: 2025-10-14 09:35:09.440 2 DEBUG nova.compute.manager [req-c879878c-4841-47de-8381-417bc7345cba req-874074e5-35c9-40d9-a417-242191317c03 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Received event network-changed-13d4f68b-234a-4c46-9e1d-79f28a907bf2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:35:09 np0005486808 nova_compute[259627]: 2025-10-14 09:35:09.441 2 DEBUG nova.compute.manager [req-c879878c-4841-47de-8381-417bc7345cba req-874074e5-35c9-40d9-a417-242191317c03 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Refreshing instance network info cache due to event network-changed-13d4f68b-234a-4c46-9e1d-79f28a907bf2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:35:09 np0005486808 nova_compute[259627]: 2025-10-14 09:35:09.441 2 DEBUG oslo_concurrency.lockutils [req-c879878c-4841-47de-8381-417bc7345cba req-874074e5-35c9-40d9-a417-242191317c03 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:35:09 np0005486808 nova_compute[259627]: 2025-10-14 09:35:09.442 2 DEBUG oslo_concurrency.lockutils [req-c879878c-4841-47de-8381-417bc7345cba req-874074e5-35c9-40d9-a417-242191317c03 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:35:09 np0005486808 nova_compute[259627]: 2025-10-14 09:35:09.442 2 DEBUG nova.network.neutron [req-c879878c-4841-47de-8381-417bc7345cba req-874074e5-35c9-40d9-a417-242191317c03 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Refreshing network info cache for port 13d4f68b-234a-4c46-9e1d-79f28a907bf2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:35:09 np0005486808 nova_compute[259627]: 2025-10-14 09:35:09.719 2 DEBUG nova.network.neutron [req-c879878c-4841-47de-8381-417bc7345cba req-874074e5-35c9-40d9-a417-242191317c03 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:35:10 np0005486808 nova_compute[259627]: 2025-10-14 09:35:10.100 2 DEBUG nova.network.neutron [req-c879878c-4841-47de-8381-417bc7345cba req-874074e5-35c9-40d9-a417-242191317c03 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:35:10 np0005486808 nova_compute[259627]: 2025-10-14 09:35:10.115 2 DEBUG oslo_concurrency.lockutils [req-c879878c-4841-47de-8381-417bc7345cba req-874074e5-35c9-40d9-a417-242191317c03 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:35:10 np0005486808 nova_compute[259627]: 2025-10-14 09:35:10.243 2 DEBUG nova.network.neutron [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Successfully updated port: b2457aed-ba7c-4d69-b93d-9f4c98e456b2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:35:10 np0005486808 nova_compute[259627]: 2025-10-14 09:35:10.259 2 DEBUG oslo_concurrency.lockutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "refresh_cache-c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:35:10 np0005486808 nova_compute[259627]: 2025-10-14 09:35:10.260 2 DEBUG oslo_concurrency.lockutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquired lock "refresh_cache-c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:35:10 np0005486808 nova_compute[259627]: 2025-10-14 09:35:10.260 2 DEBUG nova.network.neutron [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:35:10 np0005486808 nova_compute[259627]: 2025-10-14 09:35:10.392 2 DEBUG nova.network.neutron [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:35:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2461: 305 pgs: 305 active+clean; 88 MiB data, 956 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:35:11 np0005486808 nova_compute[259627]: 2025-10-14 09:35:11.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:11 np0005486808 nova_compute[259627]: 2025-10-14 09:35:11.530 2 DEBUG nova.compute.manager [req-b05eec00-2867-4265-8344-4cac23eef77e req-dd2480ea-d713-4356-a20c-956ad4b80b0f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Received event network-changed-b2457aed-ba7c-4d69-b93d-9f4c98e456b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:35:11 np0005486808 nova_compute[259627]: 2025-10-14 09:35:11.530 2 DEBUG nova.compute.manager [req-b05eec00-2867-4265-8344-4cac23eef77e req-dd2480ea-d713-4356-a20c-956ad4b80b0f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Refreshing instance network info cache due to event network-changed-b2457aed-ba7c-4d69-b93d-9f4c98e456b2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:35:11 np0005486808 nova_compute[259627]: 2025-10-14 09:35:11.530 2 DEBUG oslo_concurrency.lockutils [req-b05eec00-2867-4265-8344-4cac23eef77e req-dd2480ea-d713-4356-a20c-956ad4b80b0f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:35:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2462: 305 pgs: 305 active+clean; 88 MiB data, 956 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:35:13 np0005486808 nova_compute[259627]: 2025-10-14 09:35:13.066 2 DEBUG nova.network.neutron [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Updating instance_info_cache with network_info: [{"id": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "address": "fa:16:3e:2a:ad:d7", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13d4f68b-23", "ovs_interfaceid": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "address": "fa:16:3e:4e:2d:2e", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2457aed-ba", "ovs_interfaceid": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:35:13 np0005486808 nova_compute[259627]: 2025-10-14 09:35:13.096 2 DEBUG oslo_concurrency.lockutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Releasing lock "refresh_cache-c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:35:13 np0005486808 nova_compute[259627]: 2025-10-14 09:35:13.096 2 DEBUG nova.compute.manager [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Instance network_info: |[{"id": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "address": "fa:16:3e:2a:ad:d7", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13d4f68b-23", "ovs_interfaceid": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "address": "fa:16:3e:4e:2d:2e", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2457aed-ba", "ovs_interfaceid": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:35:13 np0005486808 nova_compute[259627]: 2025-10-14 09:35:13.097 2 DEBUG oslo_concurrency.lockutils [req-b05eec00-2867-4265-8344-4cac23eef77e req-dd2480ea-d713-4356-a20c-956ad4b80b0f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:35:13 np0005486808 nova_compute[259627]: 2025-10-14 09:35:13.098 2 DEBUG nova.network.neutron [req-b05eec00-2867-4265-8344-4cac23eef77e req-dd2480ea-d713-4356-a20c-956ad4b80b0f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Refreshing network info cache for port b2457aed-ba7c-4d69-b93d-9f4c98e456b2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:35:13 np0005486808 nova_compute[259627]: 2025-10-14 09:35:13.104 2 DEBUG nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Start _get_guest_xml network_info=[{"id": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "address": "fa:16:3e:2a:ad:d7", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13d4f68b-23", "ovs_interfaceid": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "address": "fa:16:3e:4e:2d:2e", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2457aed-ba", "ovs_interfaceid": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:35:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:35:13 np0005486808 nova_compute[259627]: 2025-10-14 09:35:13.111 2 WARNING nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:35:13 np0005486808 nova_compute[259627]: 2025-10-14 09:35:13.116 2 DEBUG nova.virt.libvirt.host [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:35:13 np0005486808 nova_compute[259627]: 2025-10-14 09:35:13.117 2 DEBUG nova.virt.libvirt.host [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:35:13 np0005486808 nova_compute[259627]: 2025-10-14 09:35:13.127 2 DEBUG nova.virt.libvirt.host [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:35:13 np0005486808 nova_compute[259627]: 2025-10-14 09:35:13.128 2 DEBUG nova.virt.libvirt.host [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:35:13 np0005486808 nova_compute[259627]: 2025-10-14 09:35:13.128 2 DEBUG nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:35:13 np0005486808 nova_compute[259627]: 2025-10-14 09:35:13.129 2 DEBUG nova.virt.hardware [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:35:13 np0005486808 nova_compute[259627]: 2025-10-14 09:35:13.129 2 DEBUG nova.virt.hardware [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:35:13 np0005486808 nova_compute[259627]: 2025-10-14 09:35:13.130 2 DEBUG nova.virt.hardware [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:35:13 np0005486808 nova_compute[259627]: 2025-10-14 09:35:13.130 2 DEBUG nova.virt.hardware [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:35:13 np0005486808 nova_compute[259627]: 2025-10-14 09:35:13.131 2 DEBUG nova.virt.hardware [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:35:13 np0005486808 nova_compute[259627]: 2025-10-14 09:35:13.131 2 DEBUG nova.virt.hardware [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:35:13 np0005486808 nova_compute[259627]: 2025-10-14 09:35:13.132 2 DEBUG nova.virt.hardware [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:35:13 np0005486808 nova_compute[259627]: 2025-10-14 09:35:13.132 2 DEBUG nova.virt.hardware [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:35:13 np0005486808 nova_compute[259627]: 2025-10-14 09:35:13.132 2 DEBUG nova.virt.hardware [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:35:13 np0005486808 nova_compute[259627]: 2025-10-14 09:35:13.133 2 DEBUG nova.virt.hardware [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:35:13 np0005486808 nova_compute[259627]: 2025-10-14 09:35:13.133 2 DEBUG nova.virt.hardware [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:35:13 np0005486808 nova_compute[259627]: 2025-10-14 09:35:13.139 2 DEBUG oslo_concurrency.processutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:35:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:35:13 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3535914132' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:35:13 np0005486808 nova_compute[259627]: 2025-10-14 09:35:13.625 2 DEBUG oslo_concurrency.processutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:35:13 np0005486808 nova_compute[259627]: 2025-10-14 09:35:13.657 2 DEBUG nova.storage.rbd_utils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image c977bdc6-8dd7-4cb4-b50d-28e7313a16e8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:35:13 np0005486808 nova_compute[259627]: 2025-10-14 09:35:13.661 2 DEBUG oslo_concurrency.processutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:35:13 np0005486808 nova_compute[259627]: 2025-10-14 09:35:13.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:14 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:35:14 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1264708059' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.152 2 DEBUG oslo_concurrency.processutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.154 2 DEBUG nova.virt.libvirt.vif [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:35:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1125974325',display_name='tempest-TestGettingAddress-server-1125974325',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1125974325',id=144,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCOKJ7HNGwxvkNIllbfk0BJx0lRxRvnircYlLzsMnTrFAS1BvWK624+5Xtsv/flZHhMPVZEwSoakCSfohmdiqqoxJwqvVYzYcxThKbDWthm7NoIlGg8aDS1B3nhZUeum7w==',key_name='tempest-TestGettingAddress-1328370159',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-55lasja0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:35:04Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=c977bdc6-8dd7-4cb4-b50d-28e7313a16e8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "address": "fa:16:3e:2a:ad:d7", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13d4f68b-23", "ovs_interfaceid": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.155 2 DEBUG nova.network.os_vif_util [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "address": "fa:16:3e:2a:ad:d7", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13d4f68b-23", "ovs_interfaceid": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.157 2 DEBUG nova.network.os_vif_util [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:ad:d7,bridge_name='br-int',has_traffic_filtering=True,id=13d4f68b-234a-4c46-9e1d-79f28a907bf2,network=Network(9cb6ee52-3808-410f-9854-68ac8ffadab8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13d4f68b-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.158 2 DEBUG nova.virt.libvirt.vif [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:35:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1125974325',display_name='tempest-TestGettingAddress-server-1125974325',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1125974325',id=144,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCOKJ7HNGwxvkNIllbfk0BJx0lRxRvnircYlLzsMnTrFAS1BvWK624+5Xtsv/flZHhMPVZEwSoakCSfohmdiqqoxJwqvVYzYcxThKbDWthm7NoIlGg8aDS1B3nhZUeum7w==',key_name='tempest-TestGettingAddress-1328370159',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-55lasja0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:35:04Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=c977bdc6-8dd7-4cb4-b50d-28e7313a16e8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "address": "fa:16:3e:4e:2d:2e", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2457aed-ba", "ovs_interfaceid": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.159 2 DEBUG nova.network.os_vif_util [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "address": "fa:16:3e:4e:2d:2e", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2457aed-ba", "ovs_interfaceid": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.161 2 DEBUG nova.network.os_vif_util [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4e:2d:2e,bridge_name='br-int',has_traffic_filtering=True,id=b2457aed-ba7c-4d69-b93d-9f4c98e456b2,network=Network(0d3b36cd-f345-4ba4-8ea7-29b299ab0543),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2457aed-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.163 2 DEBUG nova.objects.instance [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'pci_devices' on Instance uuid c977bdc6-8dd7-4cb4-b50d-28e7313a16e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.186 2 DEBUG nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:35:14 np0005486808 nova_compute[259627]:  <uuid>c977bdc6-8dd7-4cb4-b50d-28e7313a16e8</uuid>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:  <name>instance-00000090</name>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:35:14 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:      <nova:name>tempest-TestGettingAddress-server-1125974325</nova:name>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:35:13</nova:creationTime>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:35:14 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:        <nova:user uuid="30cd4a7832e74a8ba07238937405c4ea">tempest-TestGettingAddress-262346303-project-member</nova:user>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:        <nova:project uuid="878ad3f42dd94bef8cc66ab3d67f03bf">tempest-TestGettingAddress-262346303</nova:project>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:        <nova:port uuid="13d4f68b-234a-4c46-9e1d-79f28a907bf2">
Oct 14 05:35:14 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:        <nova:port uuid="b2457aed-ba7c-4d69-b93d-9f4c98e456b2">
Oct 14 05:35:14 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe4e:2d2e" ipVersion="6"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe4e:2d2e" ipVersion="6"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:      <entry name="serial">c977bdc6-8dd7-4cb4-b50d-28e7313a16e8</entry>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:      <entry name="uuid">c977bdc6-8dd7-4cb4-b50d-28e7313a16e8</entry>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:35:14 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/c977bdc6-8dd7-4cb4-b50d-28e7313a16e8_disk">
Oct 14 05:35:14 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:35:14 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:35:14 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/c977bdc6-8dd7-4cb4-b50d-28e7313a16e8_disk.config">
Oct 14 05:35:14 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:35:14 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:35:14 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:2a:ad:d7"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:      <target dev="tap13d4f68b-23"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:35:14 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:4e:2d:2e"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:      <target dev="tapb2457aed-ba"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:35:14 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/c977bdc6-8dd7-4cb4-b50d-28e7313a16e8/console.log" append="off"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:35:14 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:35:14 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:35:14 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:35:14 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:35:14 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.188 2 DEBUG nova.compute.manager [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Preparing to wait for external event network-vif-plugged-13d4f68b-234a-4c46-9e1d-79f28a907bf2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.189 2 DEBUG oslo_concurrency.lockutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.190 2 DEBUG oslo_concurrency.lockutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.190 2 DEBUG oslo_concurrency.lockutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.190 2 DEBUG nova.compute.manager [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Preparing to wait for external event network-vif-plugged-b2457aed-ba7c-4d69-b93d-9f4c98e456b2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.191 2 DEBUG oslo_concurrency.lockutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.191 2 DEBUG oslo_concurrency.lockutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.192 2 DEBUG oslo_concurrency.lockutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.193 2 DEBUG nova.virt.libvirt.vif [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:35:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1125974325',display_name='tempest-TestGettingAddress-server-1125974325',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1125974325',id=144,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCOKJ7HNGwxvkNIllbfk0BJx0lRxRvnircYlLzsMnTrFAS1BvWK624+5Xtsv/flZHhMPVZEwSoakCSfohmdiqqoxJwqvVYzYcxThKbDWthm7NoIlGg8aDS1B3nhZUeum7w==',key_name='tempest-TestGettingAddress-1328370159',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-55lasja0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:35:04Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=c977bdc6-8dd7-4cb4-b50d-28e7313a16e8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "address": "fa:16:3e:2a:ad:d7", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13d4f68b-23", "ovs_interfaceid": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.194 2 DEBUG nova.network.os_vif_util [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "address": "fa:16:3e:2a:ad:d7", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13d4f68b-23", "ovs_interfaceid": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.195 2 DEBUG nova.network.os_vif_util [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:ad:d7,bridge_name='br-int',has_traffic_filtering=True,id=13d4f68b-234a-4c46-9e1d-79f28a907bf2,network=Network(9cb6ee52-3808-410f-9854-68ac8ffadab8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13d4f68b-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.196 2 DEBUG os_vif [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:ad:d7,bridge_name='br-int',has_traffic_filtering=True,id=13d4f68b-234a-4c46-9e1d-79f28a907bf2,network=Network(9cb6ee52-3808-410f-9854-68ac8ffadab8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13d4f68b-23') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.199 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.200 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.206 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap13d4f68b-23, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.207 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap13d4f68b-23, col_values=(('external_ids', {'iface-id': '13d4f68b-234a-4c46-9e1d-79f28a907bf2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2a:ad:d7', 'vm-uuid': 'c977bdc6-8dd7-4cb4-b50d-28e7313a16e8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:14 np0005486808 NetworkManager[44885]: <info>  [1760434514.2107] manager: (tap13d4f68b-23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/633)
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.219 2 INFO os_vif [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:ad:d7,bridge_name='br-int',has_traffic_filtering=True,id=13d4f68b-234a-4c46-9e1d-79f28a907bf2,network=Network(9cb6ee52-3808-410f-9854-68ac8ffadab8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13d4f68b-23')#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.221 2 DEBUG nova.virt.libvirt.vif [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:35:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1125974325',display_name='tempest-TestGettingAddress-server-1125974325',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1125974325',id=144,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCOKJ7HNGwxvkNIllbfk0BJx0lRxRvnircYlLzsMnTrFAS1BvWK624+5Xtsv/flZHhMPVZEwSoakCSfohmdiqqoxJwqvVYzYcxThKbDWthm7NoIlGg8aDS1B3nhZUeum7w==',key_name='tempest-TestGettingAddress-1328370159',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-55lasja0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:35:04Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=c977bdc6-8dd7-4cb4-b50d-28e7313a16e8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "address": "fa:16:3e:4e:2d:2e", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2457aed-ba", "ovs_interfaceid": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.221 2 DEBUG nova.network.os_vif_util [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "address": "fa:16:3e:4e:2d:2e", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2457aed-ba", "ovs_interfaceid": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.223 2 DEBUG nova.network.os_vif_util [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4e:2d:2e,bridge_name='br-int',has_traffic_filtering=True,id=b2457aed-ba7c-4d69-b93d-9f4c98e456b2,network=Network(0d3b36cd-f345-4ba4-8ea7-29b299ab0543),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2457aed-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.227 2 DEBUG os_vif [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4e:2d:2e,bridge_name='br-int',has_traffic_filtering=True,id=b2457aed-ba7c-4d69-b93d-9f4c98e456b2,network=Network(0d3b36cd-f345-4ba4-8ea7-29b299ab0543),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2457aed-ba') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.228 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.229 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.233 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb2457aed-ba, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.233 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb2457aed-ba, col_values=(('external_ids', {'iface-id': 'b2457aed-ba7c-4d69-b93d-9f4c98e456b2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4e:2d:2e', 'vm-uuid': 'c977bdc6-8dd7-4cb4-b50d-28e7313a16e8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:14 np0005486808 NetworkManager[44885]: <info>  [1760434514.2372] manager: (tapb2457aed-ba): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/634)
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.245 2 INFO os_vif [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4e:2d:2e,bridge_name='br-int',has_traffic_filtering=True,id=b2457aed-ba7c-4d69-b93d-9f4c98e456b2,network=Network(0d3b36cd-f345-4ba4-8ea7-29b299ab0543),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2457aed-ba')#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.305 2 DEBUG nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.305 2 DEBUG nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.305 2 DEBUG nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:2a:ad:d7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.306 2 DEBUG nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:4e:2d:2e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.306 2 INFO nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Using config drive#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.331 2 DEBUG nova.storage.rbd_utils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image c977bdc6-8dd7-4cb4-b50d-28e7313a16e8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:35:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2463: 305 pgs: 305 active+clean; 88 MiB data, 956 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.683 2 INFO nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Creating config drive at /var/lib/nova/instances/c977bdc6-8dd7-4cb4-b50d-28e7313a16e8/disk.config#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.687 2 DEBUG oslo_concurrency.processutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c977bdc6-8dd7-4cb4-b50d-28e7313a16e8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpygrw41k6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.841 2 DEBUG oslo_concurrency.processutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c977bdc6-8dd7-4cb4-b50d-28e7313a16e8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpygrw41k6" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.871 2 DEBUG nova.storage.rbd_utils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image c977bdc6-8dd7-4cb4-b50d-28e7313a16e8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.876 2 DEBUG oslo_concurrency.processutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c977bdc6-8dd7-4cb4-b50d-28e7313a16e8/disk.config c977bdc6-8dd7-4cb4-b50d-28e7313a16e8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.997 2 DEBUG nova.network.neutron [req-b05eec00-2867-4265-8344-4cac23eef77e req-dd2480ea-d713-4356-a20c-956ad4b80b0f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Updated VIF entry in instance network info cache for port b2457aed-ba7c-4d69-b93d-9f4c98e456b2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:35:14 np0005486808 nova_compute[259627]: 2025-10-14 09:35:14.998 2 DEBUG nova.network.neutron [req-b05eec00-2867-4265-8344-4cac23eef77e req-dd2480ea-d713-4356-a20c-956ad4b80b0f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Updating instance_info_cache with network_info: [{"id": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "address": "fa:16:3e:2a:ad:d7", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13d4f68b-23", "ovs_interfaceid": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "address": "fa:16:3e:4e:2d:2e", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], 
"gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2457aed-ba", "ovs_interfaceid": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:35:15 np0005486808 nova_compute[259627]: 2025-10-14 09:35:15.021 2 DEBUG oslo_concurrency.lockutils [req-b05eec00-2867-4265-8344-4cac23eef77e req-dd2480ea-d713-4356-a20c-956ad4b80b0f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:35:15 np0005486808 nova_compute[259627]: 2025-10-14 09:35:15.082 2 DEBUG oslo_concurrency.processutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c977bdc6-8dd7-4cb4-b50d-28e7313a16e8/disk.config c977bdc6-8dd7-4cb4-b50d-28e7313a16e8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.206s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:35:15 np0005486808 nova_compute[259627]: 2025-10-14 09:35:15.083 2 INFO nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Deleting local config drive /var/lib/nova/instances/c977bdc6-8dd7-4cb4-b50d-28e7313a16e8/disk.config because it was imported into RBD.#033[00m
Oct 14 05:35:15 np0005486808 kernel: tap13d4f68b-23: entered promiscuous mode
Oct 14 05:35:15 np0005486808 NetworkManager[44885]: <info>  [1760434515.1606] manager: (tap13d4f68b-23): new Tun device (/org/freedesktop/NetworkManager/Devices/635)
Oct 14 05:35:15 np0005486808 nova_compute[259627]: 2025-10-14 09:35:15.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:15 np0005486808 ovn_controller[152662]: 2025-10-14T09:35:15Z|01561|binding|INFO|Claiming lport 13d4f68b-234a-4c46-9e1d-79f28a907bf2 for this chassis.
Oct 14 05:35:15 np0005486808 ovn_controller[152662]: 2025-10-14T09:35:15Z|01562|binding|INFO|13d4f68b-234a-4c46-9e1d-79f28a907bf2: Claiming fa:16:3e:2a:ad:d7 10.100.0.5
Oct 14 05:35:15 np0005486808 kernel: tapb2457aed-ba: entered promiscuous mode
Oct 14 05:35:15 np0005486808 NetworkManager[44885]: <info>  [1760434515.1897] manager: (tapb2457aed-ba): new Tun device (/org/freedesktop/NetworkManager/Devices/636)
Oct 14 05:35:15 np0005486808 systemd-udevd[408335]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:35:15 np0005486808 systemd-udevd[408334]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.197 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2a:ad:d7 10.100.0.5'], port_security=['fa:16:3e:2a:ad:d7 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'c977bdc6-8dd7-4cb4-b50d-28e7313a16e8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9cb6ee52-3808-410f-9854-68ac8ffadab8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4ac22ee6-a60e-44c3-8db4-e0f5bf22a77e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=09c5c036-dfc9-4826-bc59-c008b28bd97a, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=13d4f68b-234a-4c46-9e1d-79f28a907bf2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.200 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 13d4f68b-234a-4c46-9e1d-79f28a907bf2 in datapath 9cb6ee52-3808-410f-9854-68ac8ffadab8 bound to our chassis#033[00m
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.202 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9cb6ee52-3808-410f-9854-68ac8ffadab8#033[00m
Oct 14 05:35:15 np0005486808 NetworkManager[44885]: <info>  [1760434515.2047] device (tap13d4f68b-23): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:35:15 np0005486808 NetworkManager[44885]: <info>  [1760434515.2059] device (tap13d4f68b-23): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:35:15 np0005486808 NetworkManager[44885]: <info>  [1760434515.2086] device (tapb2457aed-ba): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:35:15 np0005486808 NetworkManager[44885]: <info>  [1760434515.2101] device (tapb2457aed-ba): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.222 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[56f1a2c1-5ec0-49b4-be9b-6966c116dcb4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.223 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9cb6ee52-31 in ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.225 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9cb6ee52-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.226 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a70c541c-e9bd-4312-81ce-5298f48a9701]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.227 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f21fea8f-866c-40cd-8eac-8d15c8453f85]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.250 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[37b5520c-6274-4606-a98b-6a7ba23c008c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:35:15 np0005486808 systemd-machined[214636]: New machine qemu-177-instance-00000090.
Oct 14 05:35:15 np0005486808 nova_compute[259627]: 2025-10-14 09:35:15.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:15 np0005486808 ovn_controller[152662]: 2025-10-14T09:35:15Z|01563|binding|INFO|Claiming lport b2457aed-ba7c-4d69-b93d-9f4c98e456b2 for this chassis.
Oct 14 05:35:15 np0005486808 ovn_controller[152662]: 2025-10-14T09:35:15Z|01564|binding|INFO|b2457aed-ba7c-4d69-b93d-9f4c98e456b2: Claiming fa:16:3e:4e:2d:2e 2001:db8:0:1:f816:3eff:fe4e:2d2e 2001:db8::f816:3eff:fe4e:2d2e
Oct 14 05:35:15 np0005486808 systemd[1]: Started Virtual Machine qemu-177-instance-00000090.
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.285 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[57feb7ef-cc27-4eb0-93a6-0b555c43ae99]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:35:15 np0005486808 nova_compute[259627]: 2025-10-14 09:35:15.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.294 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4e:2d:2e 2001:db8:0:1:f816:3eff:fe4e:2d2e 2001:db8::f816:3eff:fe4e:2d2e'], port_security=['fa:16:3e:4e:2d:2e 2001:db8:0:1:f816:3eff:fe4e:2d2e 2001:db8::f816:3eff:fe4e:2d2e'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe4e:2d2e/64 2001:db8::f816:3eff:fe4e:2d2e/64', 'neutron:device_id': 'c977bdc6-8dd7-4cb4-b50d-28e7313a16e8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0d3b36cd-f345-4ba4-8ea7-29b299ab0543', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4ac22ee6-a60e-44c3-8db4-e0f5bf22a77e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=612e2e91-84a5-412c-996c-976c512895aa, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=b2457aed-ba7c-4d69-b93d-9f4c98e456b2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:35:15 np0005486808 ovn_controller[152662]: 2025-10-14T09:35:15Z|01565|binding|INFO|Setting lport 13d4f68b-234a-4c46-9e1d-79f28a907bf2 ovn-installed in OVS
Oct 14 05:35:15 np0005486808 ovn_controller[152662]: 2025-10-14T09:35:15Z|01566|binding|INFO|Setting lport 13d4f68b-234a-4c46-9e1d-79f28a907bf2 up in Southbound
Oct 14 05:35:15 np0005486808 nova_compute[259627]: 2025-10-14 09:35:15.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:15 np0005486808 ovn_controller[152662]: 2025-10-14T09:35:15Z|01567|binding|INFO|Setting lport b2457aed-ba7c-4d69-b93d-9f4c98e456b2 ovn-installed in OVS
Oct 14 05:35:15 np0005486808 ovn_controller[152662]: 2025-10-14T09:35:15Z|01568|binding|INFO|Setting lport b2457aed-ba7c-4d69-b93d-9f4c98e456b2 up in Southbound
Oct 14 05:35:15 np0005486808 nova_compute[259627]: 2025-10-14 09:35:15.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.335 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[77a4e1f2-5a67-4255-af71-3e0176a0d059]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:35:15 np0005486808 NetworkManager[44885]: <info>  [1760434515.3435] manager: (tap9cb6ee52-30): new Veth device (/org/freedesktop/NetworkManager/Devices/637)
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.342 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5452685f-70ee-404b-8e02-ab2262527556]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.376 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a6d3a1df-17d9-441b-885f-67ce2380b8f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.381 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d94db442-64a8-4fc5-b0dc-97ab2d05725a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:35:15 np0005486808 NetworkManager[44885]: <info>  [1760434515.4021] device (tap9cb6ee52-30): carrier: link connected
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.408 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[9b5e7a1b-f4aa-48c3-9dc9-5a622236ac47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.432 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d0f99321-698f-432b-8f7b-5744d3f3b1ef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9cb6ee52-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:65:35'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 444], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 829416, 'reachable_time': 37698, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 408372, 'error': None, 'target': 'ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.455 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d6fd1541-1676-45f6-97cf-04b8afeb340e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe22:6535'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 829416, 'tstamp': 829416}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 408373, 'error': None, 'target': 'ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.479 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2631da82-02bb-4b6b-ab84-e63fc0a79ee7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9cb6ee52-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:65:35'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 444], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 829416, 'reachable_time': 37698, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 408374, 'error': None, 'target': 'ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.517 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5c0b7802-e2ba-4e7a-8135-86524bf0545f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.582 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ce3a48da-3a65-4d2a-8625-45a4283ee4b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.584 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9cb6ee52-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.584 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.585 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9cb6ee52-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:35:15 np0005486808 kernel: tap9cb6ee52-30: entered promiscuous mode
Oct 14 05:35:15 np0005486808 nova_compute[259627]: 2025-10-14 09:35:15.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:15 np0005486808 NetworkManager[44885]: <info>  [1760434515.5875] manager: (tap9cb6ee52-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/638)
Oct 14 05:35:15 np0005486808 nova_compute[259627]: 2025-10-14 09:35:15.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.590 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9cb6ee52-30, col_values=(('external_ids', {'iface-id': 'e170cfc7-b9d3-441b-8041-f001d366d5cd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:35:15 np0005486808 ovn_controller[152662]: 2025-10-14T09:35:15Z|01569|binding|INFO|Releasing lport e170cfc7-b9d3-441b-8041-f001d366d5cd from this chassis (sb_readonly=0)
Oct 14 05:35:15 np0005486808 nova_compute[259627]: 2025-10-14 09:35:15.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:15 np0005486808 nova_compute[259627]: 2025-10-14 09:35:15.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:15 np0005486808 nova_compute[259627]: 2025-10-14 09:35:15.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.614 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9cb6ee52-3808-410f-9854-68ac8ffadab8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9cb6ee52-3808-410f-9854-68ac8ffadab8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.615 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c903f56a-eb59-4b6b-a51e-2df312f702df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.616 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-9cb6ee52-3808-410f-9854-68ac8ffadab8
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/9cb6ee52-3808-410f-9854-68ac8ffadab8.pid.haproxy
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 9cb6ee52-3808-410f-9854-68ac8ffadab8
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:35:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:15.616 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8', 'env', 'PROCESS_TAG=haproxy-9cb6ee52-3808-410f-9854-68ac8ffadab8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9cb6ee52-3808-410f-9854-68ac8ffadab8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:35:16 np0005486808 podman[408450]: 2025-10-14 09:35:16.036758207 +0000 UTC m=+0.080183418 container create 704da8c4514d130efa0ad985892e55a6f08b5db9e87a4dff1f62fbc56fa12323 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 14 05:35:16 np0005486808 podman[408450]: 2025-10-14 09:35:15.995675329 +0000 UTC m=+0.039100610 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:35:16 np0005486808 systemd[1]: Started libpod-conmon-704da8c4514d130efa0ad985892e55a6f08b5db9e87a4dff1f62fbc56fa12323.scope.
Oct 14 05:35:16 np0005486808 nova_compute[259627]: 2025-10-14 09:35:16.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:16 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:35:16 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfd38037ad8c5df564412ddc4d5b130a78e747af4052f1d4f025faaa569ef1ba/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:35:16 np0005486808 podman[408450]: 2025-10-14 09:35:16.23370859 +0000 UTC m=+0.277133851 container init 704da8c4514d130efa0ad985892e55a6f08b5db9e87a4dff1f62fbc56fa12323 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 14 05:35:16 np0005486808 nova_compute[259627]: 2025-10-14 09:35:16.235 2 DEBUG nova.compute.manager [req-6d443c75-8c67-48d6-a0b8-ca31c7a6bf2b req-e00e2eee-ff78-43ea-aeb3-bf60c3f600d4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Received event network-vif-plugged-13d4f68b-234a-4c46-9e1d-79f28a907bf2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:35:16 np0005486808 nova_compute[259627]: 2025-10-14 09:35:16.236 2 DEBUG oslo_concurrency.lockutils [req-6d443c75-8c67-48d6-a0b8-ca31c7a6bf2b req-e00e2eee-ff78-43ea-aeb3-bf60c3f600d4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:35:16 np0005486808 nova_compute[259627]: 2025-10-14 09:35:16.236 2 DEBUG oslo_concurrency.lockutils [req-6d443c75-8c67-48d6-a0b8-ca31c7a6bf2b req-e00e2eee-ff78-43ea-aeb3-bf60c3f600d4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:35:16 np0005486808 nova_compute[259627]: 2025-10-14 09:35:16.236 2 DEBUG oslo_concurrency.lockutils [req-6d443c75-8c67-48d6-a0b8-ca31c7a6bf2b req-e00e2eee-ff78-43ea-aeb3-bf60c3f600d4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:35:16 np0005486808 nova_compute[259627]: 2025-10-14 09:35:16.236 2 DEBUG nova.compute.manager [req-6d443c75-8c67-48d6-a0b8-ca31c7a6bf2b req-e00e2eee-ff78-43ea-aeb3-bf60c3f600d4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Processing event network-vif-plugged-13d4f68b-234a-4c46-9e1d-79f28a907bf2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:35:16 np0005486808 podman[408450]: 2025-10-14 09:35:16.240515757 +0000 UTC m=+0.283940968 container start 704da8c4514d130efa0ad985892e55a6f08b5db9e87a4dff1f62fbc56fa12323 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:35:16 np0005486808 neutron-haproxy-ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8[408466]: [NOTICE]   (408470) : New worker (408472) forked
Oct 14 05:35:16 np0005486808 neutron-haproxy-ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8[408466]: [NOTICE]   (408470) : Loading success.
Oct 14 05:35:16 np0005486808 nova_compute[259627]: 2025-10-14 09:35:16.330 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434516.3294785, c977bdc6-8dd7-4cb4-b50d-28e7313a16e8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:35:16 np0005486808 nova_compute[259627]: 2025-10-14 09:35:16.330 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] VM Started (Lifecycle Event)#033[00m
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.335 162547 INFO neutron.agent.ovn.metadata.agent [-] Port b2457aed-ba7c-4d69-b93d-9f4c98e456b2 in datapath 0d3b36cd-f345-4ba4-8ea7-29b299ab0543 unbound from our chassis#033[00m
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.336 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0d3b36cd-f345-4ba4-8ea7-29b299ab0543#033[00m
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.348 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[df1db499-49b6-4b51-afd3-9677838430db]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.348 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0d3b36cd-f1 in ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.349 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0d3b36cd-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.350 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[deb90654-ee4e-4eb9-b73d-892ce204f650]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.350 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cf29209e-2b8a-4cc8-9be1-dfe2bf40e765]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:35:16 np0005486808 nova_compute[259627]: 2025-10-14 09:35:16.351 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:35:16 np0005486808 nova_compute[259627]: 2025-10-14 09:35:16.355 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434516.329695, c977bdc6-8dd7-4cb4-b50d-28e7313a16e8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:35:16 np0005486808 nova_compute[259627]: 2025-10-14 09:35:16.355 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:35:16 np0005486808 nova_compute[259627]: 2025-10-14 09:35:16.364 2 DEBUG nova.compute.manager [req-928382a8-2eea-45e0-a3a0-c413fa61129d req-bd12f7c3-34c9-4300-acd4-b58dd70fdad8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Received event network-vif-plugged-b2457aed-ba7c-4d69-b93d-9f4c98e456b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:35:16 np0005486808 nova_compute[259627]: 2025-10-14 09:35:16.364 2 DEBUG oslo_concurrency.lockutils [req-928382a8-2eea-45e0-a3a0-c413fa61129d req-bd12f7c3-34c9-4300-acd4-b58dd70fdad8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:35:16 np0005486808 nova_compute[259627]: 2025-10-14 09:35:16.364 2 DEBUG oslo_concurrency.lockutils [req-928382a8-2eea-45e0-a3a0-c413fa61129d req-bd12f7c3-34c9-4300-acd4-b58dd70fdad8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:35:16 np0005486808 nova_compute[259627]: 2025-10-14 09:35:16.364 2 DEBUG oslo_concurrency.lockutils [req-928382a8-2eea-45e0-a3a0-c413fa61129d req-bd12f7c3-34c9-4300-acd4-b58dd70fdad8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:35:16 np0005486808 nova_compute[259627]: 2025-10-14 09:35:16.364 2 DEBUG nova.compute.manager [req-928382a8-2eea-45e0-a3a0-c413fa61129d req-bd12f7c3-34c9-4300-acd4-b58dd70fdad8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Processing event network-vif-plugged-b2457aed-ba7c-4d69-b93d-9f4c98e456b2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.364 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[56caa5aa-b0dd-4d72-a415-3a4cd05bfe5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:35:16 np0005486808 nova_compute[259627]: 2025-10-14 09:35:16.365 2 DEBUG nova.compute.manager [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:35:16 np0005486808 nova_compute[259627]: 2025-10-14 09:35:16.369 2 DEBUG nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:35:16 np0005486808 nova_compute[259627]: 2025-10-14 09:35:16.371 2 INFO nova.virt.libvirt.driver [-] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Instance spawned successfully.#033[00m
Oct 14 05:35:16 np0005486808 nova_compute[259627]: 2025-10-14 09:35:16.372 2 DEBUG nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:35:16 np0005486808 nova_compute[259627]: 2025-10-14 09:35:16.374 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:35:16 np0005486808 nova_compute[259627]: 2025-10-14 09:35:16.376 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434516.3678784, c977bdc6-8dd7-4cb4-b50d-28e7313a16e8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:35:16 np0005486808 nova_compute[259627]: 2025-10-14 09:35:16.376 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:35:16 np0005486808 nova_compute[259627]: 2025-10-14 09:35:16.388 2 DEBUG nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:35:16 np0005486808 nova_compute[259627]: 2025-10-14 09:35:16.389 2 DEBUG nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:35:16 np0005486808 nova_compute[259627]: 2025-10-14 09:35:16.389 2 DEBUG nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:35:16 np0005486808 nova_compute[259627]: 2025-10-14 09:35:16.390 2 DEBUG nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:35:16 np0005486808 nova_compute[259627]: 2025-10-14 09:35:16.390 2 DEBUG nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:35:16 np0005486808 nova_compute[259627]: 2025-10-14 09:35:16.390 2 DEBUG nova.virt.libvirt.driver [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.390 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[981891dc-7d49-42fb-9b31-1a5b90e2a727]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:35:16 np0005486808 nova_compute[259627]: 2025-10-14 09:35:16.393 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:35:16 np0005486808 nova_compute[259627]: 2025-10-14 09:35:16.395 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:35:16 np0005486808 nova_compute[259627]: 2025-10-14 09:35:16.422 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.424 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f5ba907a-0c80-4dcc-86b8-ed5e4a374a85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:35:16 np0005486808 systemd-udevd[408363]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.432 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[24953f2f-a65e-4a77-9dcd-5b7f4a46769e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:35:16 np0005486808 NetworkManager[44885]: <info>  [1760434516.4349] manager: (tap0d3b36cd-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/639)
Oct 14 05:35:16 np0005486808 nova_compute[259627]: 2025-10-14 09:35:16.443 2 INFO nova.compute.manager [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Took 11.61 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:35:16 np0005486808 nova_compute[259627]: 2025-10-14 09:35:16.443 2 DEBUG nova.compute.manager [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.467 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[1bfe7569-cbb9-425d-a670-d7acbe7770d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.471 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f3fb7ed1-a81c-4d4f-b3ec-ac0ec35aa58b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:35:16 np0005486808 nova_compute[259627]: 2025-10-14 09:35:16.498 2 INFO nova.compute.manager [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Took 12.59 seconds to build instance.#033[00m
Oct 14 05:35:16 np0005486808 NetworkManager[44885]: <info>  [1760434516.5005] device (tap0d3b36cd-f0): carrier: link connected
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.506 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5bd3de1f-72b2-4410-9fba-a0b31162ae63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:35:16 np0005486808 nova_compute[259627]: 2025-10-14 09:35:16.513 2 DEBUG oslo_concurrency.lockutils [None req-e8cd691e-5241-4e6d-baa8-9d7fecf1f048 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.524 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7de696f7-41e1-4a1e-8b1a-4e8335f30326]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0d3b36cd-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:2e:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 445], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 829526, 'reachable_time': 38835, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 408491, 'error': None, 'target': 'ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.541 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1c8cf5b3-27f4-47be-959f-4f60b8687235]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe38:2e6c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 829526, 'tstamp': 829526}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 408492, 'error': None, 'target': 'ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:35:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2464: 305 pgs: 305 active+clean; 88 MiB data, 956 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.560 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f7539fe4-f6be-421c-b04c-f88f9d0a940d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0d3b36cd-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:2e:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 445], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 829526, 'reachable_time': 38835, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 408493, 'error': None, 'target': 'ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.595 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f972d90b-b4ff-41a7-852a-bb3700c427c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.641 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6d210679-ddcd-4bb7-95de-0ad6973433a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.645 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0d3b36cd-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.645 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.646 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0d3b36cd-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:35:16 np0005486808 kernel: tap0d3b36cd-f0: entered promiscuous mode
Oct 14 05:35:16 np0005486808 nova_compute[259627]: 2025-10-14 09:35:16.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:16 np0005486808 NetworkManager[44885]: <info>  [1760434516.6506] manager: (tap0d3b36cd-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/640)
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.652 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0d3b36cd-f0, col_values=(('external_ids', {'iface-id': '1caef19e-e76a-434d-84c0-dc762554a564'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:35:16 np0005486808 ovn_controller[152662]: 2025-10-14T09:35:16Z|01570|binding|INFO|Releasing lport 1caef19e-e76a-434d-84c0-dc762554a564 from this chassis (sb_readonly=0)
Oct 14 05:35:16 np0005486808 nova_compute[259627]: 2025-10-14 09:35:16.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:16 np0005486808 nova_compute[259627]: 2025-10-14 09:35:16.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.667 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0d3b36cd-f345-4ba4-8ea7-29b299ab0543.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0d3b36cd-f345-4ba4-8ea7-29b299ab0543.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.668 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[43dc1ee3-5dfa-470d-815e-e188090c34a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.669 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-0d3b36cd-f345-4ba4-8ea7-29b299ab0543
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/0d3b36cd-f345-4ba4-8ea7-29b299ab0543.pid.haproxy
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 0d3b36cd-f345-4ba4-8ea7-29b299ab0543
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:35:16 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:16.671 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543', 'env', 'PROCESS_TAG=haproxy-0d3b36cd-f345-4ba4-8ea7-29b299ab0543', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0d3b36cd-f345-4ba4-8ea7-29b299ab0543.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:35:17 np0005486808 podman[408525]: 2025-10-14 09:35:17.098210505 +0000 UTC m=+0.064307189 container create 29e50ba6a898c594ac8fb8f1b369d4287b1cde8bc04d5147c996487d6e5f67aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:35:17 np0005486808 systemd[1]: Started libpod-conmon-29e50ba6a898c594ac8fb8f1b369d4287b1cde8bc04d5147c996487d6e5f67aa.scope.
Oct 14 05:35:17 np0005486808 podman[408525]: 2025-10-14 09:35:17.074432692 +0000 UTC m=+0.040529396 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:35:17 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:35:17 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71f24b8bd883a1a9d5a46c8f33163dab05be4377ce726a9e20ef9f68b79d27d6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:35:17 np0005486808 podman[408525]: 2025-10-14 09:35:17.188636314 +0000 UTC m=+0.154733068 container init 29e50ba6a898c594ac8fb8f1b369d4287b1cde8bc04d5147c996487d6e5f67aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:35:17 np0005486808 podman[408525]: 2025-10-14 09:35:17.199313426 +0000 UTC m=+0.165410150 container start 29e50ba6a898c594ac8fb8f1b369d4287b1cde8bc04d5147c996487d6e5f67aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:35:17 np0005486808 neutron-haproxy-ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543[408540]: [NOTICE]   (408544) : New worker (408546) forked
Oct 14 05:35:17 np0005486808 neutron-haproxy-ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543[408540]: [NOTICE]   (408544) : Loading success.
Oct 14 05:35:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:35:18 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #120. Immutable memtables: 0.
Oct 14 05:35:18 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:35:18.116468) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 05:35:18 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 120
Oct 14 05:35:18 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434518116497, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 414, "num_deletes": 257, "total_data_size": 273713, "memory_usage": 281792, "flush_reason": "Manual Compaction"}
Oct 14 05:35:18 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #121: started
Oct 14 05:35:18 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434518120817, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 121, "file_size": 271272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 51862, "largest_seqno": 52275, "table_properties": {"data_size": 268841, "index_size": 531, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5827, "raw_average_key_size": 17, "raw_value_size": 263975, "raw_average_value_size": 814, "num_data_blocks": 24, "num_entries": 324, "num_filter_entries": 324, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760434499, "oldest_key_time": 1760434499, "file_creation_time": 1760434518, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 121, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:35:18 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 4420 microseconds, and 1545 cpu microseconds.
Oct 14 05:35:18 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:35:18 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:35:18.120883) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #121: 271272 bytes OK
Oct 14 05:35:18 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:35:18.120908) [db/memtable_list.cc:519] [default] Level-0 commit table #121 started
Oct 14 05:35:18 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:35:18.122909) [db/memtable_list.cc:722] [default] Level-0 commit table #121: memtable #1 done
Oct 14 05:35:18 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:35:18.122937) EVENT_LOG_v1 {"time_micros": 1760434518122927, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 05:35:18 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:35:18.122960) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 05:35:18 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 271086, prev total WAL file size 271086, number of live WAL files 2.
Oct 14 05:35:18 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000117.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:35:18 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:35:18.123822) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303039' seq:72057594037927935, type:22 .. '6C6F676D0032323632' seq:0, type:0; will stop at (end)
Oct 14 05:35:18 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 05:35:18 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [121(264KB)], [119(9723KB)]
Oct 14 05:35:18 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434518123864, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [121], "files_L6": [119], "score": -1, "input_data_size": 10228007, "oldest_snapshot_seqno": -1}
Oct 14 05:35:18 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #122: 7201 keys, 10117828 bytes, temperature: kUnknown
Oct 14 05:35:18 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434518177225, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 122, "file_size": 10117828, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10069174, "index_size": 29534, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18053, "raw_key_size": 188378, "raw_average_key_size": 26, "raw_value_size": 9939907, "raw_average_value_size": 1380, "num_data_blocks": 1152, "num_entries": 7201, "num_filter_entries": 7201, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760434518, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:35:18 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:35:18 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:35:18.177462) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 10117828 bytes
Oct 14 05:35:18 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:35:18.178989) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 191.4 rd, 189.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 9.5 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(75.0) write-amplify(37.3) OK, records in: 7723, records dropped: 522 output_compression: NoCompression
Oct 14 05:35:18 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:35:18.179030) EVENT_LOG_v1 {"time_micros": 1760434518178999, "job": 72, "event": "compaction_finished", "compaction_time_micros": 53443, "compaction_time_cpu_micros": 27006, "output_level": 6, "num_output_files": 1, "total_output_size": 10117828, "num_input_records": 7723, "num_output_records": 7201, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 05:35:18 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000121.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:35:18 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434518179182, "job": 72, "event": "table_file_deletion", "file_number": 121}
Oct 14 05:35:18 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:35:18 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434518180984, "job": 72, "event": "table_file_deletion", "file_number": 119}
Oct 14 05:35:18 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:35:18.123739) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:35:18 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:35:18.181115) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:35:18 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:35:18.181124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:35:18 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:35:18.181127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:35:18 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:35:18.181131) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:35:18 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:35:18.181133) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:35:18 np0005486808 nova_compute[259627]: 2025-10-14 09:35:18.443 2 DEBUG nova.compute.manager [req-2e223da0-40bf-467c-8c76-b09fe50a958f req-7b4823a6-c0bd-4e8c-a58a-00328bd2cf77 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Received event network-vif-plugged-b2457aed-ba7c-4d69-b93d-9f4c98e456b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:35:18 np0005486808 nova_compute[259627]: 2025-10-14 09:35:18.443 2 DEBUG oslo_concurrency.lockutils [req-2e223da0-40bf-467c-8c76-b09fe50a958f req-7b4823a6-c0bd-4e8c-a58a-00328bd2cf77 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:35:18 np0005486808 nova_compute[259627]: 2025-10-14 09:35:18.444 2 DEBUG oslo_concurrency.lockutils [req-2e223da0-40bf-467c-8c76-b09fe50a958f req-7b4823a6-c0bd-4e8c-a58a-00328bd2cf77 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:35:18 np0005486808 nova_compute[259627]: 2025-10-14 09:35:18.444 2 DEBUG oslo_concurrency.lockutils [req-2e223da0-40bf-467c-8c76-b09fe50a958f req-7b4823a6-c0bd-4e8c-a58a-00328bd2cf77 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:35:18 np0005486808 nova_compute[259627]: 2025-10-14 09:35:18.444 2 DEBUG nova.compute.manager [req-2e223da0-40bf-467c-8c76-b09fe50a958f req-7b4823a6-c0bd-4e8c-a58a-00328bd2cf77 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] No waiting events found dispatching network-vif-plugged-b2457aed-ba7c-4d69-b93d-9f4c98e456b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:35:18 np0005486808 nova_compute[259627]: 2025-10-14 09:35:18.444 2 WARNING nova.compute.manager [req-2e223da0-40bf-467c-8c76-b09fe50a958f req-7b4823a6-c0bd-4e8c-a58a-00328bd2cf77 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Received unexpected event network-vif-plugged-b2457aed-ba7c-4d69-b93d-9f4c98e456b2 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:35:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2465: 305 pgs: 305 active+clean; 88 MiB data, 956 MiB used, 59 GiB / 60 GiB avail; 6.1 KiB/s rd, 699 KiB/s wr, 10 op/s
Oct 14 05:35:18 np0005486808 nova_compute[259627]: 2025-10-14 09:35:18.564 2 DEBUG nova.compute.manager [req-04c95c46-7b60-4bc8-b941-fe9ea68d816a req-3de01230-eccb-4cf7-a945-40dc7dc6b484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Received event network-vif-plugged-13d4f68b-234a-4c46-9e1d-79f28a907bf2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:35:18 np0005486808 nova_compute[259627]: 2025-10-14 09:35:18.564 2 DEBUG oslo_concurrency.lockutils [req-04c95c46-7b60-4bc8-b941-fe9ea68d816a req-3de01230-eccb-4cf7-a945-40dc7dc6b484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:35:18 np0005486808 nova_compute[259627]: 2025-10-14 09:35:18.564 2 DEBUG oslo_concurrency.lockutils [req-04c95c46-7b60-4bc8-b941-fe9ea68d816a req-3de01230-eccb-4cf7-a945-40dc7dc6b484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:35:18 np0005486808 nova_compute[259627]: 2025-10-14 09:35:18.565 2 DEBUG oslo_concurrency.lockutils [req-04c95c46-7b60-4bc8-b941-fe9ea68d816a req-3de01230-eccb-4cf7-a945-40dc7dc6b484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:35:18 np0005486808 nova_compute[259627]: 2025-10-14 09:35:18.565 2 DEBUG nova.compute.manager [req-04c95c46-7b60-4bc8-b941-fe9ea68d816a req-3de01230-eccb-4cf7-a945-40dc7dc6b484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] No waiting events found dispatching network-vif-plugged-13d4f68b-234a-4c46-9e1d-79f28a907bf2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:35:18 np0005486808 nova_compute[259627]: 2025-10-14 09:35:18.565 2 WARNING nova.compute.manager [req-04c95c46-7b60-4bc8-b941-fe9ea68d816a req-3de01230-eccb-4cf7-a945-40dc7dc6b484 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Received unexpected event network-vif-plugged-13d4f68b-234a-4c46-9e1d-79f28a907bf2 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:35:19 np0005486808 nova_compute[259627]: 2025-10-14 09:35:19.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:19 np0005486808 nova_compute[259627]: 2025-10-14 09:35:19.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:19 np0005486808 NetworkManager[44885]: <info>  [1760434519.6695] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/641)
Oct 14 05:35:19 np0005486808 NetworkManager[44885]: <info>  [1760434519.6706] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/642)
Oct 14 05:35:19 np0005486808 ovn_controller[152662]: 2025-10-14T09:35:19Z|01571|binding|INFO|Releasing lport e170cfc7-b9d3-441b-8041-f001d366d5cd from this chassis (sb_readonly=0)
Oct 14 05:35:19 np0005486808 ovn_controller[152662]: 2025-10-14T09:35:19Z|01572|binding|INFO|Releasing lport 1caef19e-e76a-434d-84c0-dc762554a564 from this chassis (sb_readonly=0)
Oct 14 05:35:19 np0005486808 ovn_controller[152662]: 2025-10-14T09:35:19Z|01573|binding|INFO|Releasing lport e170cfc7-b9d3-441b-8041-f001d366d5cd from this chassis (sb_readonly=0)
Oct 14 05:35:19 np0005486808 ovn_controller[152662]: 2025-10-14T09:35:19Z|01574|binding|INFO|Releasing lport 1caef19e-e76a-434d-84c0-dc762554a564 from this chassis (sb_readonly=0)
Oct 14 05:35:19 np0005486808 nova_compute[259627]: 2025-10-14 09:35:19.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:19 np0005486808 nova_compute[259627]: 2025-10-14 09:35:19.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2466: 305 pgs: 305 active+clean; 88 MiB data, 956 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 699 KiB/s wr, 79 op/s
Oct 14 05:35:20 np0005486808 nova_compute[259627]: 2025-10-14 09:35:20.564 2 DEBUG nova.compute.manager [req-2f9d732e-8696-484b-9f8b-bd88a5551128 req-eb6bba73-fdf1-4bc9-930c-76aa124f2509 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Received event network-changed-13d4f68b-234a-4c46-9e1d-79f28a907bf2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:35:20 np0005486808 nova_compute[259627]: 2025-10-14 09:35:20.565 2 DEBUG nova.compute.manager [req-2f9d732e-8696-484b-9f8b-bd88a5551128 req-eb6bba73-fdf1-4bc9-930c-76aa124f2509 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Refreshing instance network info cache due to event network-changed-13d4f68b-234a-4c46-9e1d-79f28a907bf2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:35:20 np0005486808 nova_compute[259627]: 2025-10-14 09:35:20.565 2 DEBUG oslo_concurrency.lockutils [req-2f9d732e-8696-484b-9f8b-bd88a5551128 req-eb6bba73-fdf1-4bc9-930c-76aa124f2509 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:35:20 np0005486808 nova_compute[259627]: 2025-10-14 09:35:20.566 2 DEBUG oslo_concurrency.lockutils [req-2f9d732e-8696-484b-9f8b-bd88a5551128 req-eb6bba73-fdf1-4bc9-930c-76aa124f2509 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:35:20 np0005486808 nova_compute[259627]: 2025-10-14 09:35:20.566 2 DEBUG nova.network.neutron [req-2f9d732e-8696-484b-9f8b-bd88a5551128 req-eb6bba73-fdf1-4bc9-930c-76aa124f2509 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Refreshing network info cache for port 13d4f68b-234a-4c46-9e1d-79f28a907bf2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:35:21 np0005486808 nova_compute[259627]: 2025-10-14 09:35:21.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:21 np0005486808 podman[408556]: 2025-10-14 09:35:21.662451301 +0000 UTC m=+0.076916548 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:35:21 np0005486808 podman[408557]: 2025-10-14 09:35:21.675862991 +0000 UTC m=+0.083886000 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid)
Oct 14 05:35:22 np0005486808 nova_compute[259627]: 2025-10-14 09:35:22.058 2 DEBUG nova.network.neutron [req-2f9d732e-8696-484b-9f8b-bd88a5551128 req-eb6bba73-fdf1-4bc9-930c-76aa124f2509 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Updated VIF entry in instance network info cache for port 13d4f68b-234a-4c46-9e1d-79f28a907bf2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:35:22 np0005486808 nova_compute[259627]: 2025-10-14 09:35:22.059 2 DEBUG nova.network.neutron [req-2f9d732e-8696-484b-9f8b-bd88a5551128 req-eb6bba73-fdf1-4bc9-930c-76aa124f2509 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Updating instance_info_cache with network_info: [{"id": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "address": "fa:16:3e:2a:ad:d7", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13d4f68b-23", "ovs_interfaceid": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "address": "fa:16:3e:4e:2d:2e", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2457aed-ba", "ovs_interfaceid": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:35:22 np0005486808 nova_compute[259627]: 2025-10-14 09:35:22.083 2 DEBUG oslo_concurrency.lockutils [req-2f9d732e-8696-484b-9f8b-bd88a5551128 req-eb6bba73-fdf1-4bc9-930c-76aa124f2509 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:35:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2467: 305 pgs: 305 active+clean; 88 MiB data, 956 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 05:35:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:35:24 np0005486808 nova_compute[259627]: 2025-10-14 09:35:24.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2468: 305 pgs: 305 active+clean; 88 MiB data, 956 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 05:35:26 np0005486808 nova_compute[259627]: 2025-10-14 09:35:26.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2469: 305 pgs: 305 active+clean; 88 MiB data, 956 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 05:35:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:35:28 np0005486808 ovn_controller[152662]: 2025-10-14T09:35:28Z|00186|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2a:ad:d7 10.100.0.5
Oct 14 05:35:28 np0005486808 ovn_controller[152662]: 2025-10-14T09:35:28Z|00187|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2a:ad:d7 10.100.0.5
Oct 14 05:35:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2470: 305 pgs: 305 active+clean; 88 MiB data, 956 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Oct 14 05:35:29 np0005486808 nova_compute[259627]: 2025-10-14 09:35:29.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2471: 305 pgs: 305 active+clean; 121 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Oct 14 05:35:31 np0005486808 nova_compute[259627]: 2025-10-14 09:35:31.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2472: 305 pgs: 305 active+clean; 121 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 05:35:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:35:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:35:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:35:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:35:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:35:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:35:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:35:32
Oct 14 05:35:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:35:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:35:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['default.rgw.meta', 'vms', '.rgw.root', 'volumes', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr']
Oct 14 05:35:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:35:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:35:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:35:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:35:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:35:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:35:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:35:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:35:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:35:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:35:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:35:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:35:33 np0005486808 podman[408598]: 2025-10-14 09:35:33.7077246 +0000 UTC m=+0.107280294 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:35:33 np0005486808 podman[408597]: 2025-10-14 09:35:33.727880514 +0000 UTC m=+0.142271362 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 14 05:35:34 np0005486808 nova_compute[259627]: 2025-10-14 09:35:34.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2473: 305 pgs: 305 active+clean; 121 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 05:35:35 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:35:35 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:35:35 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:35:35 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:35:35 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:35:35 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:35:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 14 05:35:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 14 05:35:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:35:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:35:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:35:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:35:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:35:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:35:36 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev e85b6f19-2277-49ba-b555-467ee082f42d does not exist
Oct 14 05:35:36 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 77c85dfb-5b25-4e55-b4cb-0f207de16fcb does not exist
Oct 14 05:35:36 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 9c40edd6-333d-40e6-a00d-46ea17d469d6 does not exist
Oct 14 05:35:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:35:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:35:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:35:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:35:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:35:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:35:36 np0005486808 nova_compute[259627]: 2025-10-14 09:35:36.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2474: 305 pgs: 305 active+clean; 121 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 05:35:36 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 14 05:35:36 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:35:36 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:35:36 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:35:36 np0005486808 podman[409032]: 2025-10-14 09:35:36.792351786 +0000 UTC m=+0.060438374 container create 4ea76e14d4c923f81beb8b285b63f0cb85784d8c0eaeb50381ea02cca4240a78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 14 05:35:36 np0005486808 systemd[1]: Started libpod-conmon-4ea76e14d4c923f81beb8b285b63f0cb85784d8c0eaeb50381ea02cca4240a78.scope.
Oct 14 05:35:36 np0005486808 podman[409032]: 2025-10-14 09:35:36.755627245 +0000 UTC m=+0.023713893 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:35:36 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:35:36 np0005486808 podman[409032]: 2025-10-14 09:35:36.903388001 +0000 UTC m=+0.171474599 container init 4ea76e14d4c923f81beb8b285b63f0cb85784d8c0eaeb50381ea02cca4240a78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:35:36 np0005486808 podman[409032]: 2025-10-14 09:35:36.915375095 +0000 UTC m=+0.183461643 container start 4ea76e14d4c923f81beb8b285b63f0cb85784d8c0eaeb50381ea02cca4240a78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_villani, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:35:36 np0005486808 podman[409032]: 2025-10-14 09:35:36.919283381 +0000 UTC m=+0.187369949 container attach 4ea76e14d4c923f81beb8b285b63f0cb85784d8c0eaeb50381ea02cca4240a78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 14 05:35:36 np0005486808 stoic_villani[409049]: 167 167
Oct 14 05:35:36 np0005486808 systemd[1]: libpod-4ea76e14d4c923f81beb8b285b63f0cb85784d8c0eaeb50381ea02cca4240a78.scope: Deactivated successfully.
Oct 14 05:35:36 np0005486808 conmon[409049]: conmon 4ea76e14d4c923f81beb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4ea76e14d4c923f81beb8b285b63f0cb85784d8c0eaeb50381ea02cca4240a78.scope/container/memory.events
Oct 14 05:35:36 np0005486808 podman[409032]: 2025-10-14 09:35:36.923358371 +0000 UTC m=+0.191444939 container died 4ea76e14d4c923f81beb8b285b63f0cb85784d8c0eaeb50381ea02cca4240a78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_villani, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 14 05:35:36 np0005486808 systemd[1]: var-lib-containers-storage-overlay-bf61b532bdc0ebab71f6ec0e5e1252d81ddc46c945cb9e32d4aa0ca9b7c82247-merged.mount: Deactivated successfully.
Oct 14 05:35:36 np0005486808 podman[409032]: 2025-10-14 09:35:36.966799167 +0000 UTC m=+0.234885705 container remove 4ea76e14d4c923f81beb8b285b63f0cb85784d8c0eaeb50381ea02cca4240a78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_villani, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 14 05:35:36 np0005486808 systemd[1]: libpod-conmon-4ea76e14d4c923f81beb8b285b63f0cb85784d8c0eaeb50381ea02cca4240a78.scope: Deactivated successfully.
Oct 14 05:35:37 np0005486808 podman[409073]: 2025-10-14 09:35:37.178074262 +0000 UTC m=+0.049948337 container create 03aea7bbbd1724a3bc8f782ad004f4c4279bc7613285dfcc747583a01c5e75a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_yonath, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 14 05:35:37 np0005486808 systemd[1]: Started libpod-conmon-03aea7bbbd1724a3bc8f782ad004f4c4279bc7613285dfcc747583a01c5e75a0.scope.
Oct 14 05:35:37 np0005486808 podman[409073]: 2025-10-14 09:35:37.161356572 +0000 UTC m=+0.033230617 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:35:37 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:35:37 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca865753bacfa3f72a8fd67ec174b1bb6802edba580cd08f5859ea33c7e3331e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:35:37 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca865753bacfa3f72a8fd67ec174b1bb6802edba580cd08f5859ea33c7e3331e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:35:37 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca865753bacfa3f72a8fd67ec174b1bb6802edba580cd08f5859ea33c7e3331e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:35:37 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca865753bacfa3f72a8fd67ec174b1bb6802edba580cd08f5859ea33c7e3331e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:35:37 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca865753bacfa3f72a8fd67ec174b1bb6802edba580cd08f5859ea33c7e3331e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:35:37 np0005486808 podman[409073]: 2025-10-14 09:35:37.297764579 +0000 UTC m=+0.169638654 container init 03aea7bbbd1724a3bc8f782ad004f4c4279bc7613285dfcc747583a01c5e75a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 14 05:35:37 np0005486808 podman[409073]: 2025-10-14 09:35:37.3104128 +0000 UTC m=+0.182286885 container start 03aea7bbbd1724a3bc8f782ad004f4c4279bc7613285dfcc747583a01c5e75a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_yonath, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 14 05:35:37 np0005486808 podman[409073]: 2025-10-14 09:35:37.314831498 +0000 UTC m=+0.186705563 container attach 03aea7bbbd1724a3bc8f782ad004f4c4279bc7613285dfcc747583a01c5e75a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_yonath, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:35:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:35:38 np0005486808 modest_yonath[409090]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:35:38 np0005486808 modest_yonath[409090]: --> relative data size: 1.0
Oct 14 05:35:38 np0005486808 modest_yonath[409090]: --> All data devices are unavailable
Oct 14 05:35:38 np0005486808 systemd[1]: libpod-03aea7bbbd1724a3bc8f782ad004f4c4279bc7613285dfcc747583a01c5e75a0.scope: Deactivated successfully.
Oct 14 05:35:38 np0005486808 systemd[1]: libpod-03aea7bbbd1724a3bc8f782ad004f4c4279bc7613285dfcc747583a01c5e75a0.scope: Consumed 1.156s CPU time.
Oct 14 05:35:38 np0005486808 podman[409119]: 2025-10-14 09:35:38.554489199 +0000 UTC m=+0.029744861 container died 03aea7bbbd1724a3bc8f782ad004f4c4279bc7613285dfcc747583a01c5e75a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:35:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2475: 305 pgs: 305 active+clean; 121 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 05:35:38 np0005486808 systemd[1]: var-lib-containers-storage-overlay-ca865753bacfa3f72a8fd67ec174b1bb6802edba580cd08f5859ea33c7e3331e-merged.mount: Deactivated successfully.
Oct 14 05:35:38 np0005486808 podman[409119]: 2025-10-14 09:35:38.618551681 +0000 UTC m=+0.093807323 container remove 03aea7bbbd1724a3bc8f782ad004f4c4279bc7613285dfcc747583a01c5e75a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_yonath, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:35:38 np0005486808 systemd[1]: libpod-conmon-03aea7bbbd1724a3bc8f782ad004f4c4279bc7613285dfcc747583a01c5e75a0.scope: Deactivated successfully.
Oct 14 05:35:39 np0005486808 nova_compute[259627]: 2025-10-14 09:35:39.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:39 np0005486808 podman[409274]: 2025-10-14 09:35:39.378698385 +0000 UTC m=+0.047166089 container create 8bdd64368951e6ff1cfc180b9733535dbd6ba07d0c2728e577977976230a94c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_blackburn, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:35:39 np0005486808 systemd[1]: Started libpod-conmon-8bdd64368951e6ff1cfc180b9733535dbd6ba07d0c2728e577977976230a94c5.scope.
Oct 14 05:35:39 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:35:39 np0005486808 podman[409274]: 2025-10-14 09:35:39.360798275 +0000 UTC m=+0.029265979 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:35:39 np0005486808 podman[409274]: 2025-10-14 09:35:39.461731522 +0000 UTC m=+0.130199236 container init 8bdd64368951e6ff1cfc180b9733535dbd6ba07d0c2728e577977976230a94c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_blackburn, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 14 05:35:39 np0005486808 podman[409274]: 2025-10-14 09:35:39.46857375 +0000 UTC m=+0.137041474 container start 8bdd64368951e6ff1cfc180b9733535dbd6ba07d0c2728e577977976230a94c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 14 05:35:39 np0005486808 funny_blackburn[409292]: 167 167
Oct 14 05:35:39 np0005486808 systemd[1]: libpod-8bdd64368951e6ff1cfc180b9733535dbd6ba07d0c2728e577977976230a94c5.scope: Deactivated successfully.
Oct 14 05:35:39 np0005486808 podman[409274]: 2025-10-14 09:35:39.472204969 +0000 UTC m=+0.140672703 container attach 8bdd64368951e6ff1cfc180b9733535dbd6ba07d0c2728e577977976230a94c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_blackburn, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:35:39 np0005486808 podman[409274]: 2025-10-14 09:35:39.475705985 +0000 UTC m=+0.144173709 container died 8bdd64368951e6ff1cfc180b9733535dbd6ba07d0c2728e577977976230a94c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:35:39 np0005486808 systemd[1]: var-lib-containers-storage-overlay-e06e773ce4dd6a51fccc7acb3e3554068c1ad07749730ba984036f0171ffec68-merged.mount: Deactivated successfully.
Oct 14 05:35:39 np0005486808 podman[409274]: 2025-10-14 09:35:39.520495444 +0000 UTC m=+0.188963168 container remove 8bdd64368951e6ff1cfc180b9733535dbd6ba07d0c2728e577977976230a94c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_blackburn, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:35:39 np0005486808 systemd[1]: libpod-conmon-8bdd64368951e6ff1cfc180b9733535dbd6ba07d0c2728e577977976230a94c5.scope: Deactivated successfully.
Oct 14 05:35:39 np0005486808 podman[409318]: 2025-10-14 09:35:39.741410976 +0000 UTC m=+0.051978207 container create fbf76d36b7a9ad770cb4e807853a47720fa1bcb5a83578a9f83ef695929a41d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hermann, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:35:39 np0005486808 systemd[1]: Started libpod-conmon-fbf76d36b7a9ad770cb4e807853a47720fa1bcb5a83578a9f83ef695929a41d9.scope.
Oct 14 05:35:39 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:35:39 np0005486808 podman[409318]: 2025-10-14 09:35:39.722594444 +0000 UTC m=+0.033161635 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:35:39 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98ffb2ffce46884eece6258fa009ee925e6de28a3a0d76fed36b7f6992ba34c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:35:39 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98ffb2ffce46884eece6258fa009ee925e6de28a3a0d76fed36b7f6992ba34c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:35:39 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98ffb2ffce46884eece6258fa009ee925e6de28a3a0d76fed36b7f6992ba34c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:35:39 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98ffb2ffce46884eece6258fa009ee925e6de28a3a0d76fed36b7f6992ba34c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:35:39 np0005486808 podman[409318]: 2025-10-14 09:35:39.83695146 +0000 UTC m=+0.147518701 container init fbf76d36b7a9ad770cb4e807853a47720fa1bcb5a83578a9f83ef695929a41d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hermann, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 14 05:35:39 np0005486808 podman[409318]: 2025-10-14 09:35:39.849768225 +0000 UTC m=+0.160335416 container start fbf76d36b7a9ad770cb4e807853a47720fa1bcb5a83578a9f83ef695929a41d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hermann, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:35:39 np0005486808 podman[409318]: 2025-10-14 09:35:39.852875301 +0000 UTC m=+0.163442492 container attach fbf76d36b7a9ad770cb4e807853a47720fa1bcb5a83578a9f83ef695929a41d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 14 05:35:40 np0005486808 nova_compute[259627]: 2025-10-14 09:35:40.301 2 DEBUG oslo_concurrency.lockutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "2d97e325-a4e8-4595-9697-04219277474d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:35:40 np0005486808 nova_compute[259627]: 2025-10-14 09:35:40.303 2 DEBUG oslo_concurrency.lockutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:35:40 np0005486808 nova_compute[259627]: 2025-10-14 09:35:40.325 2 DEBUG nova.compute.manager [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:35:40 np0005486808 nova_compute[259627]: 2025-10-14 09:35:40.416 2 DEBUG oslo_concurrency.lockutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:35:40 np0005486808 nova_compute[259627]: 2025-10-14 09:35:40.416 2 DEBUG oslo_concurrency.lockutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:35:40 np0005486808 nova_compute[259627]: 2025-10-14 09:35:40.425 2 DEBUG nova.virt.hardware [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:35:40 np0005486808 nova_compute[259627]: 2025-10-14 09:35:40.425 2 INFO nova.compute.claims [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:35:40 np0005486808 nova_compute[259627]: 2025-10-14 09:35:40.562 2 DEBUG oslo_concurrency.processutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:35:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2476: 305 pgs: 305 active+clean; 121 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 05:35:40 np0005486808 practical_hermann[409334]: {
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:    "0": [
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:        {
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:            "devices": [
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:                "/dev/loop3"
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:            ],
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:            "lv_name": "ceph_lv0",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:            "lv_size": "21470642176",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:            "name": "ceph_lv0",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:            "tags": {
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:                "ceph.cluster_name": "ceph",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:                "ceph.crush_device_class": "",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:                "ceph.encrypted": "0",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:                "ceph.osd_id": "0",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:                "ceph.type": "block",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:                "ceph.vdo": "0"
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:            },
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:            "type": "block",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:            "vg_name": "ceph_vg0"
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:        }
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:    ],
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:    "1": [
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:        {
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:            "devices": [
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:                "/dev/loop4"
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:            ],
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:            "lv_name": "ceph_lv1",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:            "lv_size": "21470642176",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:            "name": "ceph_lv1",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:            "tags": {
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:                "ceph.cluster_name": "ceph",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:                "ceph.crush_device_class": "",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:                "ceph.encrypted": "0",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:                "ceph.osd_id": "1",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:                "ceph.type": "block",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:                "ceph.vdo": "0"
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:            },
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:            "type": "block",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:            "vg_name": "ceph_vg1"
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:        }
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:    ],
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:    "2": [
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:        {
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:            "devices": [
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:                "/dev/loop5"
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:            ],
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:            "lv_name": "ceph_lv2",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:            "lv_size": "21470642176",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:            "name": "ceph_lv2",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:            "tags": {
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:                "ceph.cluster_name": "ceph",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:                "ceph.crush_device_class": "",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:                "ceph.encrypted": "0",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:                "ceph.osd_id": "2",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:                "ceph.type": "block",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:                "ceph.vdo": "0"
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:            },
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:            "type": "block",
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:            "vg_name": "ceph_vg2"
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:        }
Oct 14 05:35:40 np0005486808 practical_hermann[409334]:    ]
Oct 14 05:35:40 np0005486808 practical_hermann[409334]: }
Oct 14 05:35:40 np0005486808 systemd[1]: libpod-fbf76d36b7a9ad770cb4e807853a47720fa1bcb5a83578a9f83ef695929a41d9.scope: Deactivated successfully.
Oct 14 05:35:40 np0005486808 podman[409318]: 2025-10-14 09:35:40.61524433 +0000 UTC m=+0.925811581 container died fbf76d36b7a9ad770cb4e807853a47720fa1bcb5a83578a9f83ef695929a41d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hermann, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:35:40 np0005486808 systemd[1]: var-lib-containers-storage-overlay-98ffb2ffce46884eece6258fa009ee925e6de28a3a0d76fed36b7f6992ba34c7-merged.mount: Deactivated successfully.
Oct 14 05:35:40 np0005486808 podman[409318]: 2025-10-14 09:35:40.758496685 +0000 UTC m=+1.069063876 container remove fbf76d36b7a9ad770cb4e807853a47720fa1bcb5a83578a9f83ef695929a41d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 14 05:35:40 np0005486808 systemd[1]: libpod-conmon-fbf76d36b7a9ad770cb4e807853a47720fa1bcb5a83578a9f83ef695929a41d9.scope: Deactivated successfully.
Oct 14 05:35:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:35:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2682684492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:35:41 np0005486808 nova_compute[259627]: 2025-10-14 09:35:41.020 2 DEBUG oslo_concurrency.processutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:35:41 np0005486808 nova_compute[259627]: 2025-10-14 09:35:41.029 2 DEBUG nova.compute.provider_tree [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:35:41 np0005486808 nova_compute[259627]: 2025-10-14 09:35:41.055 2 DEBUG nova.scheduler.client.report [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:35:41 np0005486808 nova_compute[259627]: 2025-10-14 09:35:41.084 2 DEBUG oslo_concurrency.lockutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:35:41 np0005486808 nova_compute[259627]: 2025-10-14 09:35:41.084 2 DEBUG nova.compute.manager [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:35:41 np0005486808 nova_compute[259627]: 2025-10-14 09:35:41.145 2 DEBUG nova.compute.manager [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:35:41 np0005486808 nova_compute[259627]: 2025-10-14 09:35:41.145 2 DEBUG nova.network.neutron [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:35:41 np0005486808 nova_compute[259627]: 2025-10-14 09:35:41.171 2 INFO nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:35:41 np0005486808 nova_compute[259627]: 2025-10-14 09:35:41.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:41 np0005486808 nova_compute[259627]: 2025-10-14 09:35:41.189 2 DEBUG nova.compute.manager [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:35:41 np0005486808 nova_compute[259627]: 2025-10-14 09:35:41.292 2 DEBUG nova.compute.manager [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:35:41 np0005486808 nova_compute[259627]: 2025-10-14 09:35:41.294 2 DEBUG nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:35:41 np0005486808 nova_compute[259627]: 2025-10-14 09:35:41.294 2 INFO nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Creating image(s)#033[00m
Oct 14 05:35:41 np0005486808 nova_compute[259627]: 2025-10-14 09:35:41.323 2 DEBUG nova.storage.rbd_utils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 2d97e325-a4e8-4595-9697-04219277474d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:35:41 np0005486808 nova_compute[259627]: 2025-10-14 09:35:41.356 2 DEBUG nova.storage.rbd_utils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 2d97e325-a4e8-4595-9697-04219277474d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:35:41 np0005486808 nova_compute[259627]: 2025-10-14 09:35:41.385 2 DEBUG nova.storage.rbd_utils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 2d97e325-a4e8-4595-9697-04219277474d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:35:41 np0005486808 nova_compute[259627]: 2025-10-14 09:35:41.390 2 DEBUG oslo_concurrency.processutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:35:41 np0005486808 nova_compute[259627]: 2025-10-14 09:35:41.473 2 DEBUG oslo_concurrency.processutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:35:41 np0005486808 nova_compute[259627]: 2025-10-14 09:35:41.474 2 DEBUG oslo_concurrency.lockutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:35:41 np0005486808 nova_compute[259627]: 2025-10-14 09:35:41.475 2 DEBUG oslo_concurrency.lockutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:35:41 np0005486808 nova_compute[259627]: 2025-10-14 09:35:41.475 2 DEBUG oslo_concurrency.lockutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:35:41 np0005486808 nova_compute[259627]: 2025-10-14 09:35:41.495 2 DEBUG nova.storage.rbd_utils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 2d97e325-a4e8-4595-9697-04219277474d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:35:41 np0005486808 nova_compute[259627]: 2025-10-14 09:35:41.498 2 DEBUG oslo_concurrency.processutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 2d97e325-a4e8-4595-9697-04219277474d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:35:41 np0005486808 podman[409568]: 2025-10-14 09:35:41.51096469 +0000 UTC m=+0.105113841 container create c95f9b6f78e6d815dcba629a5c2f56914164fb53ea12ee950c62a1ca5d6a1a9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ride, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 14 05:35:41 np0005486808 podman[409568]: 2025-10-14 09:35:41.426918957 +0000 UTC m=+0.021068118 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:35:41 np0005486808 systemd[1]: Started libpod-conmon-c95f9b6f78e6d815dcba629a5c2f56914164fb53ea12ee950c62a1ca5d6a1a9b.scope.
Oct 14 05:35:41 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:35:41 np0005486808 podman[409568]: 2025-10-14 09:35:41.672105894 +0000 UTC m=+0.266255095 container init c95f9b6f78e6d815dcba629a5c2f56914164fb53ea12ee950c62a1ca5d6a1a9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ride, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 14 05:35:41 np0005486808 podman[409568]: 2025-10-14 09:35:41.685091513 +0000 UTC m=+0.279240694 container start c95f9b6f78e6d815dcba629a5c2f56914164fb53ea12ee950c62a1ca5d6a1a9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ride, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 14 05:35:41 np0005486808 exciting_ride[409613]: 167 167
Oct 14 05:35:41 np0005486808 systemd[1]: libpod-c95f9b6f78e6d815dcba629a5c2f56914164fb53ea12ee950c62a1ca5d6a1a9b.scope: Deactivated successfully.
Oct 14 05:35:41 np0005486808 conmon[409613]: conmon c95f9b6f78e6d815dcba <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c95f9b6f78e6d815dcba629a5c2f56914164fb53ea12ee950c62a1ca5d6a1a9b.scope/container/memory.events
Oct 14 05:35:41 np0005486808 podman[409568]: 2025-10-14 09:35:41.824298599 +0000 UTC m=+0.418447830 container attach c95f9b6f78e6d815dcba629a5c2f56914164fb53ea12ee950c62a1ca5d6a1a9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ride, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:35:41 np0005486808 podman[409568]: 2025-10-14 09:35:41.82553748 +0000 UTC m=+0.419686671 container died c95f9b6f78e6d815dcba629a5c2f56914164fb53ea12ee950c62a1ca5d6a1a9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:35:42 np0005486808 nova_compute[259627]: 2025-10-14 09:35:42.074 2 DEBUG nova.policy [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30cd4a7832e74a8ba07238937405c4ea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:35:42 np0005486808 systemd[1]: var-lib-containers-storage-overlay-1654488baf9a4d61e7b130cbc7bb744a54fdd018180d98857aefa458ff6229e3-merged.mount: Deactivated successfully.
Oct 14 05:35:42 np0005486808 podman[409568]: 2025-10-14 09:35:42.451750987 +0000 UTC m=+1.045900138 container remove c95f9b6f78e6d815dcba629a5c2f56914164fb53ea12ee950c62a1ca5d6a1a9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 14 05:35:42 np0005486808 systemd[1]: libpod-conmon-c95f9b6f78e6d815dcba629a5c2f56914164fb53ea12ee950c62a1ca5d6a1a9b.scope: Deactivated successfully.
Oct 14 05:35:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2477: 305 pgs: 305 active+clean; 121 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Oct 14 05:35:42 np0005486808 podman[409647]: 2025-10-14 09:35:42.670160957 +0000 UTC m=+0.069178319 container create 2cc567e6a8ce87a28a1d3517e6bdf01459fe44f77e30c886457c0eeb7a5721c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_golick, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:35:42 np0005486808 systemd[1]: Started libpod-conmon-2cc567e6a8ce87a28a1d3517e6bdf01459fe44f77e30c886457c0eeb7a5721c1.scope.
Oct 14 05:35:42 np0005486808 podman[409647]: 2025-10-14 09:35:42.632948304 +0000 UTC m=+0.031965706 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:35:42 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:35:42 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e38200cff88c0551f664013e5b7164e9332bc60024677b8dff2fb17e62e42efa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:35:42 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e38200cff88c0551f664013e5b7164e9332bc60024677b8dff2fb17e62e42efa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:35:42 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e38200cff88c0551f664013e5b7164e9332bc60024677b8dff2fb17e62e42efa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:35:42 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e38200cff88c0551f664013e5b7164e9332bc60024677b8dff2fb17e62e42efa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:35:42 np0005486808 podman[409647]: 2025-10-14 09:35:42.806914243 +0000 UTC m=+0.205931645 container init 2cc567e6a8ce87a28a1d3517e6bdf01459fe44f77e30c886457c0eeb7a5721c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Oct 14 05:35:42 np0005486808 podman[409647]: 2025-10-14 09:35:42.819365018 +0000 UTC m=+0.218382390 container start 2cc567e6a8ce87a28a1d3517e6bdf01459fe44f77e30c886457c0eeb7a5721c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 14 05:35:42 np0005486808 podman[409647]: 2025-10-14 09:35:42.823375677 +0000 UTC m=+0.222393089 container attach 2cc567e6a8ce87a28a1d3517e6bdf01459fe44f77e30c886457c0eeb7a5721c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_golick, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:35:42 np0005486808 nova_compute[259627]: 2025-10-14 09:35:42.831 2 DEBUG oslo_concurrency.processutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 2d97e325-a4e8-4595-9697-04219277474d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:35:42 np0005486808 nova_compute[259627]: 2025-10-14 09:35:42.897 2 DEBUG nova.storage.rbd_utils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] resizing rbd image 2d97e325-a4e8-4595-9697-04219277474d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:35:42 np0005486808 nova_compute[259627]: 2025-10-14 09:35:42.970 2 DEBUG nova.objects.instance [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'migration_context' on Instance uuid 2d97e325-a4e8-4595-9697-04219277474d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:35:42 np0005486808 nova_compute[259627]: 2025-10-14 09:35:42.997 2 DEBUG nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:35:42 np0005486808 nova_compute[259627]: 2025-10-14 09:35:42.997 2 DEBUG nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Ensure instance console log exists: /var/lib/nova/instances/2d97e325-a4e8-4595-9697-04219277474d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:35:42 np0005486808 nova_compute[259627]: 2025-10-14 09:35:42.997 2 DEBUG oslo_concurrency.lockutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:35:42 np0005486808 nova_compute[259627]: 2025-10-14 09:35:42.998 2 DEBUG oslo_concurrency.lockutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:35:42 np0005486808 nova_compute[259627]: 2025-10-14 09:35:42.998 2 DEBUG oslo_concurrency.lockutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:35:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:35:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:35:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:35:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:35:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:35:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007589550978381194 of space, bias 1.0, pg target 0.22768652935143582 quantized to 32 (current 32)
Oct 14 05:35:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:35:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:35:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:35:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:35:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:35:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 05:35:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:35:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:35:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:35:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:35:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:35:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:35:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:35:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:35:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:35:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:35:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:35:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:35:43 np0005486808 nova_compute[259627]: 2025-10-14 09:35:43.597 2 DEBUG nova.network.neutron [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Successfully created port: 2e84cd2e-9f09-482a-9a06-adcfff2088d5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:35:43 np0005486808 busy_golick[409664]: {
Oct 14 05:35:43 np0005486808 busy_golick[409664]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:35:43 np0005486808 busy_golick[409664]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:35:43 np0005486808 busy_golick[409664]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:35:43 np0005486808 busy_golick[409664]:        "osd_id": 2,
Oct 14 05:35:43 np0005486808 busy_golick[409664]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:35:43 np0005486808 busy_golick[409664]:        "type": "bluestore"
Oct 14 05:35:43 np0005486808 busy_golick[409664]:    },
Oct 14 05:35:43 np0005486808 busy_golick[409664]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:35:43 np0005486808 busy_golick[409664]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:35:43 np0005486808 busy_golick[409664]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:35:43 np0005486808 busy_golick[409664]:        "osd_id": 1,
Oct 14 05:35:43 np0005486808 busy_golick[409664]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:35:43 np0005486808 busy_golick[409664]:        "type": "bluestore"
Oct 14 05:35:43 np0005486808 busy_golick[409664]:    },
Oct 14 05:35:43 np0005486808 busy_golick[409664]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:35:43 np0005486808 busy_golick[409664]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:35:43 np0005486808 busy_golick[409664]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:35:43 np0005486808 busy_golick[409664]:        "osd_id": 0,
Oct 14 05:35:43 np0005486808 busy_golick[409664]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:35:43 np0005486808 busy_golick[409664]:        "type": "bluestore"
Oct 14 05:35:43 np0005486808 busy_golick[409664]:    }
Oct 14 05:35:43 np0005486808 busy_golick[409664]: }
Oct 14 05:35:43 np0005486808 systemd[1]: libpod-2cc567e6a8ce87a28a1d3517e6bdf01459fe44f77e30c886457c0eeb7a5721c1.scope: Deactivated successfully.
Oct 14 05:35:43 np0005486808 podman[409647]: 2025-10-14 09:35:43.741620871 +0000 UTC m=+1.140638233 container died 2cc567e6a8ce87a28a1d3517e6bdf01459fe44f77e30c886457c0eeb7a5721c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 05:35:43 np0005486808 systemd[1]: var-lib-containers-storage-overlay-e38200cff88c0551f664013e5b7164e9332bc60024677b8dff2fb17e62e42efa-merged.mount: Deactivated successfully.
Oct 14 05:35:43 np0005486808 podman[409647]: 2025-10-14 09:35:43.808560463 +0000 UTC m=+1.207577835 container remove 2cc567e6a8ce87a28a1d3517e6bdf01459fe44f77e30c886457c0eeb7a5721c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_golick, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:35:43 np0005486808 systemd[1]: libpod-conmon-2cc567e6a8ce87a28a1d3517e6bdf01459fe44f77e30c886457c0eeb7a5721c1.scope: Deactivated successfully.
Oct 14 05:35:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:35:43 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:35:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:35:43 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:35:43 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 7828fad9-e245-49fa-908a-c5bfafdc672b does not exist
Oct 14 05:35:43 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 957d3098-7fc4-43be-a63f-4c60070743b8 does not exist
Oct 14 05:35:43 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:35:43 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:35:44 np0005486808 nova_compute[259627]: 2025-10-14 09:35:44.042 2 DEBUG nova.network.neutron [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Successfully created port: da98587d-dc59-4e4f-bc3f-d3e70dafd21e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:35:44 np0005486808 nova_compute[259627]: 2025-10-14 09:35:44.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2478: 305 pgs: 305 active+clean; 121 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Oct 14 05:35:44 np0005486808 nova_compute[259627]: 2025-10-14 09:35:44.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:35:45 np0005486808 nova_compute[259627]: 2025-10-14 09:35:45.396 2 DEBUG nova.network.neutron [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Successfully updated port: 2e84cd2e-9f09-482a-9a06-adcfff2088d5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:35:45 np0005486808 nova_compute[259627]: 2025-10-14 09:35:45.486 2 DEBUG nova.compute.manager [req-9d08a026-0b4f-4b6a-9f1e-7d5cc7de7bc9 req-8a32e1f0-5d1f-4ad0-816d-277714396e09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Received event network-changed-2e84cd2e-9f09-482a-9a06-adcfff2088d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:35:45 np0005486808 nova_compute[259627]: 2025-10-14 09:35:45.487 2 DEBUG nova.compute.manager [req-9d08a026-0b4f-4b6a-9f1e-7d5cc7de7bc9 req-8a32e1f0-5d1f-4ad0-816d-277714396e09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Refreshing instance network info cache due to event network-changed-2e84cd2e-9f09-482a-9a06-adcfff2088d5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:35:45 np0005486808 nova_compute[259627]: 2025-10-14 09:35:45.488 2 DEBUG oslo_concurrency.lockutils [req-9d08a026-0b4f-4b6a-9f1e-7d5cc7de7bc9 req-8a32e1f0-5d1f-4ad0-816d-277714396e09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-2d97e325-a4e8-4595-9697-04219277474d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:35:45 np0005486808 nova_compute[259627]: 2025-10-14 09:35:45.488 2 DEBUG oslo_concurrency.lockutils [req-9d08a026-0b4f-4b6a-9f1e-7d5cc7de7bc9 req-8a32e1f0-5d1f-4ad0-816d-277714396e09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-2d97e325-a4e8-4595-9697-04219277474d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:35:45 np0005486808 nova_compute[259627]: 2025-10-14 09:35:45.488 2 DEBUG nova.network.neutron [req-9d08a026-0b4f-4b6a-9f1e-7d5cc7de7bc9 req-8a32e1f0-5d1f-4ad0-816d-277714396e09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Refreshing network info cache for port 2e84cd2e-9f09-482a-9a06-adcfff2088d5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:35:46 np0005486808 nova_compute[259627]: 2025-10-14 09:35:46.037 2 DEBUG nova.network.neutron [req-9d08a026-0b4f-4b6a-9f1e-7d5cc7de7bc9 req-8a32e1f0-5d1f-4ad0-816d-277714396e09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:35:46 np0005486808 nova_compute[259627]: 2025-10-14 09:35:46.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:46 np0005486808 nova_compute[259627]: 2025-10-14 09:35:46.286 2 DEBUG nova.network.neutron [req-9d08a026-0b4f-4b6a-9f1e-7d5cc7de7bc9 req-8a32e1f0-5d1f-4ad0-816d-277714396e09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:35:46 np0005486808 nova_compute[259627]: 2025-10-14 09:35:46.301 2 DEBUG oslo_concurrency.lockutils [req-9d08a026-0b4f-4b6a-9f1e-7d5cc7de7bc9 req-8a32e1f0-5d1f-4ad0-816d-277714396e09 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-2d97e325-a4e8-4595-9697-04219277474d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:35:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2479: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:35:47 np0005486808 nova_compute[259627]: 2025-10-14 09:35:47.061 2 DEBUG nova.network.neutron [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Successfully updated port: da98587d-dc59-4e4f-bc3f-d3e70dafd21e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:35:47 np0005486808 nova_compute[259627]: 2025-10-14 09:35:47.075 2 DEBUG oslo_concurrency.lockutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "refresh_cache-2d97e325-a4e8-4595-9697-04219277474d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:35:47 np0005486808 nova_compute[259627]: 2025-10-14 09:35:47.075 2 DEBUG oslo_concurrency.lockutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquired lock "refresh_cache-2d97e325-a4e8-4595-9697-04219277474d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:35:47 np0005486808 nova_compute[259627]: 2025-10-14 09:35:47.076 2 DEBUG nova.network.neutron [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:35:47 np0005486808 nova_compute[259627]: 2025-10-14 09:35:47.203 2 DEBUG nova.network.neutron [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:35:47 np0005486808 nova_compute[259627]: 2025-10-14 09:35:47.602 2 DEBUG nova.compute.manager [req-2d5c6996-6df7-4547-ae23-1050ac9e5e6b req-001e0e5d-f141-4840-9ae2-556b2224b469 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Received event network-changed-da98587d-dc59-4e4f-bc3f-d3e70dafd21e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:35:47 np0005486808 nova_compute[259627]: 2025-10-14 09:35:47.602 2 DEBUG nova.compute.manager [req-2d5c6996-6df7-4547-ae23-1050ac9e5e6b req-001e0e5d-f141-4840-9ae2-556b2224b469 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Refreshing instance network info cache due to event network-changed-da98587d-dc59-4e4f-bc3f-d3e70dafd21e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:35:47 np0005486808 nova_compute[259627]: 2025-10-14 09:35:47.603 2 DEBUG oslo_concurrency.lockutils [req-2d5c6996-6df7-4547-ae23-1050ac9e5e6b req-001e0e5d-f141-4840-9ae2-556b2224b469 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-2d97e325-a4e8-4595-9697-04219277474d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:35:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:35:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2480: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:35:49 np0005486808 nova_compute[259627]: 2025-10-14 09:35:49.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:49 np0005486808 nova_compute[259627]: 2025-10-14 09:35:49.522 2 DEBUG nova.network.neutron [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Updating instance_info_cache with network_info: [{"id": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "address": "fa:16:3e:20:70:35", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e84cd2e-9f", "ovs_interfaceid": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "address": "fa:16:3e:61:8d:23", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", 
"type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda98587d-dc", "ovs_interfaceid": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:35:49 np0005486808 nova_compute[259627]: 2025-10-14 09:35:49.543 2 DEBUG oslo_concurrency.lockutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Releasing lock "refresh_cache-2d97e325-a4e8-4595-9697-04219277474d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:35:49 np0005486808 nova_compute[259627]: 2025-10-14 09:35:49.544 2 DEBUG nova.compute.manager [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Instance network_info: |[{"id": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "address": "fa:16:3e:20:70:35", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e84cd2e-9f", "ovs_interfaceid": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "address": "fa:16:3e:61:8d:23", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", 
"version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda98587d-dc", "ovs_interfaceid": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:35:49 np0005486808 nova_compute[259627]: 2025-10-14 09:35:49.544 2 DEBUG oslo_concurrency.lockutils [req-2d5c6996-6df7-4547-ae23-1050ac9e5e6b req-001e0e5d-f141-4840-9ae2-556b2224b469 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-2d97e325-a4e8-4595-9697-04219277474d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:35:49 np0005486808 nova_compute[259627]: 2025-10-14 09:35:49.545 2 DEBUG nova.network.neutron [req-2d5c6996-6df7-4547-ae23-1050ac9e5e6b req-001e0e5d-f141-4840-9ae2-556b2224b469 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Refreshing network info cache for port da98587d-dc59-4e4f-bc3f-d3e70dafd21e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:35:49 np0005486808 nova_compute[259627]: 2025-10-14 09:35:49.551 2 DEBUG nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Start _get_guest_xml network_info=[{"id": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "address": "fa:16:3e:20:70:35", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e84cd2e-9f", "ovs_interfaceid": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "address": "fa:16:3e:61:8d:23", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda98587d-dc", "ovs_interfaceid": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 05:35:49 np0005486808 nova_compute[259627]: 2025-10-14 09:35:49.558 2 WARNING nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 05:35:49 np0005486808 nova_compute[259627]: 2025-10-14 09:35:49.565 2 DEBUG nova.virt.libvirt.host [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 05:35:49 np0005486808 nova_compute[259627]: 2025-10-14 09:35:49.566 2 DEBUG nova.virt.libvirt.host [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 05:35:49 np0005486808 nova_compute[259627]: 2025-10-14 09:35:49.576 2 DEBUG nova.virt.libvirt.host [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 05:35:49 np0005486808 nova_compute[259627]: 2025-10-14 09:35:49.577 2 DEBUG nova.virt.libvirt.host [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 05:35:49 np0005486808 nova_compute[259627]: 2025-10-14 09:35:49.578 2 DEBUG nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 05:35:49 np0005486808 nova_compute[259627]: 2025-10-14 09:35:49.578 2 DEBUG nova.virt.hardware [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:35:49 np0005486808 nova_compute[259627]: 2025-10-14 09:35:49.579 2 DEBUG nova.virt.hardware [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 05:35:49 np0005486808 nova_compute[259627]: 2025-10-14 09:35:49.579 2 DEBUG nova.virt.hardware [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 05:35:49 np0005486808 nova_compute[259627]: 2025-10-14 09:35:49.580 2 DEBUG nova.virt.hardware [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 05:35:49 np0005486808 nova_compute[259627]: 2025-10-14 09:35:49.580 2 DEBUG nova.virt.hardware [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 05:35:49 np0005486808 nova_compute[259627]: 2025-10-14 09:35:49.580 2 DEBUG nova.virt.hardware [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 05:35:49 np0005486808 nova_compute[259627]: 2025-10-14 09:35:49.581 2 DEBUG nova.virt.hardware [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 05:35:49 np0005486808 nova_compute[259627]: 2025-10-14 09:35:49.581 2 DEBUG nova.virt.hardware [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 05:35:49 np0005486808 nova_compute[259627]: 2025-10-14 09:35:49.582 2 DEBUG nova.virt.hardware [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 05:35:49 np0005486808 nova_compute[259627]: 2025-10-14 09:35:49.582 2 DEBUG nova.virt.hardware [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 05:35:49 np0005486808 nova_compute[259627]: 2025-10-14 09:35:49.582 2 DEBUG nova.virt.hardware [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 05:35:49 np0005486808 nova_compute[259627]: 2025-10-14 09:35:49.587 2 DEBUG oslo_concurrency.processutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:35:49 np0005486808 nova_compute[259627]: 2025-10-14 09:35:49.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:35:49 np0005486808 nova_compute[259627]: 2025-10-14 09:35:49.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:35:50 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:35:50 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3750887776' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.039 2 DEBUG oslo_concurrency.processutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.071 2 DEBUG nova.storage.rbd_utils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 2d97e325-a4e8-4595-9697-04219277474d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.077 2 DEBUG oslo_concurrency.processutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:35:50 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:35:50 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2885423904' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.538 2 DEBUG oslo_concurrency.processutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.541 2 DEBUG nova.virt.libvirt.vif [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:35:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-529814172',display_name='tempest-TestGettingAddress-server-529814172',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-529814172',id=145,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCOKJ7HNGwxvkNIllbfk0BJx0lRxRvnircYlLzsMnTrFAS1BvWK624+5Xtsv/flZHhMPVZEwSoakCSfohmdiqqoxJwqvVYzYcxThKbDWthm7NoIlGg8aDS1B3nhZUeum7w==',key_name='tempest-TestGettingAddress-1328370159',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-ynpbprbq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:35:41Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=2d97e325-a4e8-4595-9697-04219277474d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "address": "fa:16:3e:20:70:35", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e84cd2e-9f", "ovs_interfaceid": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.542 2 DEBUG nova.network.os_vif_util [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "address": "fa:16:3e:20:70:35", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e84cd2e-9f", "ovs_interfaceid": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.544 2 DEBUG nova.network.os_vif_util [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:70:35,bridge_name='br-int',has_traffic_filtering=True,id=2e84cd2e-9f09-482a-9a06-adcfff2088d5,network=Network(9cb6ee52-3808-410f-9854-68ac8ffadab8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e84cd2e-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.545 2 DEBUG nova.virt.libvirt.vif [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:35:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-529814172',display_name='tempest-TestGettingAddress-server-529814172',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-529814172',id=145,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCOKJ7HNGwxvkNIllbfk0BJx0lRxRvnircYlLzsMnTrFAS1BvWK624+5Xtsv/flZHhMPVZEwSoakCSfohmdiqqoxJwqvVYzYcxThKbDWthm7NoIlGg8aDS1B3nhZUeum7w==',key_name='tempest-TestGettingAddress-1328370159',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-ynpbprbq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:35:41Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=2d97e325-a4e8-4595-9697-04219277474d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "address": "fa:16:3e:61:8d:23", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda98587d-dc", "ovs_interfaceid": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.546 2 DEBUG nova.network.os_vif_util [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "address": "fa:16:3e:61:8d:23", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda98587d-dc", "ovs_interfaceid": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.547 2 DEBUG nova.network.os_vif_util [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:8d:23,bridge_name='br-int',has_traffic_filtering=True,id=da98587d-dc59-4e4f-bc3f-d3e70dafd21e,network=Network(0d3b36cd-f345-4ba4-8ea7-29b299ab0543),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda98587d-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.550 2 DEBUG nova.objects.instance [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'pci_devices' on Instance uuid 2d97e325-a4e8-4595-9697-04219277474d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:35:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2481: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.573 2 DEBUG nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:35:50 np0005486808 nova_compute[259627]:  <uuid>2d97e325-a4e8-4595-9697-04219277474d</uuid>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:  <name>instance-00000091</name>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:35:50 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:      <nova:name>tempest-TestGettingAddress-server-529814172</nova:name>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:35:49</nova:creationTime>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:35:50 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:        <nova:user uuid="30cd4a7832e74a8ba07238937405c4ea">tempest-TestGettingAddress-262346303-project-member</nova:user>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:        <nova:project uuid="878ad3f42dd94bef8cc66ab3d67f03bf">tempest-TestGettingAddress-262346303</nova:project>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:        <nova:port uuid="2e84cd2e-9f09-482a-9a06-adcfff2088d5">
Oct 14 05:35:50 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:        <nova:port uuid="da98587d-dc59-4e4f-bc3f-d3e70dafd21e">
Oct 14 05:35:50 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe61:8d23" ipVersion="6"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe61:8d23" ipVersion="6"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:      <entry name="serial">2d97e325-a4e8-4595-9697-04219277474d</entry>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:      <entry name="uuid">2d97e325-a4e8-4595-9697-04219277474d</entry>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:35:50 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/2d97e325-a4e8-4595-9697-04219277474d_disk">
Oct 14 05:35:50 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:35:50 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:35:50 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/2d97e325-a4e8-4595-9697-04219277474d_disk.config">
Oct 14 05:35:50 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:35:50 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:35:50 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:20:70:35"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:      <target dev="tap2e84cd2e-9f"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:35:50 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:61:8d:23"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:      <target dev="tapda98587d-dc"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:35:50 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/2d97e325-a4e8-4595-9697-04219277474d/console.log" append="off"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:35:50 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:35:50 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:35:50 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:35:50 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:35:50 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.575 2 DEBUG nova.compute.manager [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Preparing to wait for external event network-vif-plugged-2e84cd2e-9f09-482a-9a06-adcfff2088d5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.576 2 DEBUG oslo_concurrency.lockutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "2d97e325-a4e8-4595-9697-04219277474d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.577 2 DEBUG oslo_concurrency.lockutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.578 2 DEBUG oslo_concurrency.lockutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.578 2 DEBUG nova.compute.manager [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Preparing to wait for external event network-vif-plugged-da98587d-dc59-4e4f-bc3f-d3e70dafd21e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.579 2 DEBUG oslo_concurrency.lockutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "2d97e325-a4e8-4595-9697-04219277474d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.579 2 DEBUG oslo_concurrency.lockutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.580 2 DEBUG oslo_concurrency.lockutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.581 2 DEBUG nova.virt.libvirt.vif [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:35:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-529814172',display_name='tempest-TestGettingAddress-server-529814172',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-529814172',id=145,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCOKJ7HNGwxvkNIllbfk0BJx0lRxRvnircYlLzsMnTrFAS1BvWK624+5Xtsv/flZHhMPVZEwSoakCSfohmdiqqoxJwqvVYzYcxThKbDWthm7NoIlGg8aDS1B3nhZUeum7w==',key_name='tempest-TestGettingAddress-1328370159',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-ynpbprbq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:35:41Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=2d97e325-a4e8-4595-9697-04219277474d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "address": "fa:16:3e:20:70:35", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e84cd2e-9f", "ovs_interfaceid": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.582 2 DEBUG nova.network.os_vif_util [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "address": "fa:16:3e:20:70:35", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e84cd2e-9f", "ovs_interfaceid": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.583 2 DEBUG nova.network.os_vif_util [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:70:35,bridge_name='br-int',has_traffic_filtering=True,id=2e84cd2e-9f09-482a-9a06-adcfff2088d5,network=Network(9cb6ee52-3808-410f-9854-68ac8ffadab8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e84cd2e-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.584 2 DEBUG os_vif [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:70:35,bridge_name='br-int',has_traffic_filtering=True,id=2e84cd2e-9f09-482a-9a06-adcfff2088d5,network=Network(9cb6ee52-3808-410f-9854-68ac8ffadab8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e84cd2e-9f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.586 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.587 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.594 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2e84cd2e-9f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.595 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2e84cd2e-9f, col_values=(('external_ids', {'iface-id': '2e84cd2e-9f09-482a-9a06-adcfff2088d5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:20:70:35', 'vm-uuid': '2d97e325-a4e8-4595-9697-04219277474d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:35:50 np0005486808 NetworkManager[44885]: <info>  [1760434550.5995] manager: (tap2e84cd2e-9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/643)
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.611 2 INFO os_vif [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:70:35,bridge_name='br-int',has_traffic_filtering=True,id=2e84cd2e-9f09-482a-9a06-adcfff2088d5,network=Network(9cb6ee52-3808-410f-9854-68ac8ffadab8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e84cd2e-9f')#033[00m
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.613 2 DEBUG nova.virt.libvirt.vif [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:35:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-529814172',display_name='tempest-TestGettingAddress-server-529814172',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-529814172',id=145,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCOKJ7HNGwxvkNIllbfk0BJx0lRxRvnircYlLzsMnTrFAS1BvWK624+5Xtsv/flZHhMPVZEwSoakCSfohmdiqqoxJwqvVYzYcxThKbDWthm7NoIlGg8aDS1B3nhZUeum7w==',key_name='tempest-TestGettingAddress-1328370159',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-ynpbprbq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:35:41Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=2d97e325-a4e8-4595-9697-04219277474d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "address": "fa:16:3e:61:8d:23", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda98587d-dc", "ovs_interfaceid": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.614 2 DEBUG nova.network.os_vif_util [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "address": "fa:16:3e:61:8d:23", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda98587d-dc", "ovs_interfaceid": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.615 2 DEBUG nova.network.os_vif_util [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:8d:23,bridge_name='br-int',has_traffic_filtering=True,id=da98587d-dc59-4e4f-bc3f-d3e70dafd21e,network=Network(0d3b36cd-f345-4ba4-8ea7-29b299ab0543),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda98587d-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.616 2 DEBUG os_vif [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:8d:23,bridge_name='br-int',has_traffic_filtering=True,id=da98587d-dc59-4e4f-bc3f-d3e70dafd21e,network=Network(0d3b36cd-f345-4ba4-8ea7-29b299ab0543),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda98587d-dc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.618 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.618 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.622 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapda98587d-dc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.623 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapda98587d-dc, col_values=(('external_ids', {'iface-id': 'da98587d-dc59-4e4f-bc3f-d3e70dafd21e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:61:8d:23', 'vm-uuid': '2d97e325-a4e8-4595-9697-04219277474d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:35:50 np0005486808 NetworkManager[44885]: <info>  [1760434550.6270] manager: (tapda98587d-dc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/644)
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.637 2 INFO os_vif [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:8d:23,bridge_name='br-int',has_traffic_filtering=True,id=da98587d-dc59-4e4f-bc3f-d3e70dafd21e,network=Network(0d3b36cd-f345-4ba4-8ea7-29b299ab0543),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda98587d-dc')#033[00m
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.720 2 DEBUG nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.720 2 DEBUG nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.721 2 DEBUG nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:20:70:35, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.721 2 DEBUG nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:61:8d:23, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.721 2 INFO nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Using config drive#033[00m
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.748 2 DEBUG nova.storage.rbd_utils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 2d97e325-a4e8-4595-9697-04219277474d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:35:50 np0005486808 nova_compute[259627]: 2025-10-14 09:35:50.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:35:51 np0005486808 nova_compute[259627]: 2025-10-14 09:35:51.017 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:35:51 np0005486808 nova_compute[259627]: 2025-10-14 09:35:51.017 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:35:51 np0005486808 nova_compute[259627]: 2025-10-14 09:35:51.018 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:35:51 np0005486808 nova_compute[259627]: 2025-10-14 09:35:51.018 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:35:51 np0005486808 nova_compute[259627]: 2025-10-14 09:35:51.019 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:35:51 np0005486808 nova_compute[259627]: 2025-10-14 09:35:51.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:51 np0005486808 nova_compute[259627]: 2025-10-14 09:35:51.268 2 INFO nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Creating config drive at /var/lib/nova/instances/2d97e325-a4e8-4595-9697-04219277474d/disk.config#033[00m
Oct 14 05:35:51 np0005486808 nova_compute[259627]: 2025-10-14 09:35:51.277 2 DEBUG oslo_concurrency.processutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2d97e325-a4e8-4595-9697-04219277474d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnv_87gl_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:35:51 np0005486808 nova_compute[259627]: 2025-10-14 09:35:51.434 2 DEBUG oslo_concurrency.processutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2d97e325-a4e8-4595-9697-04219277474d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnv_87gl_" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:35:51 np0005486808 nova_compute[259627]: 2025-10-14 09:35:51.471 2 DEBUG nova.storage.rbd_utils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 2d97e325-a4e8-4595-9697-04219277474d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:35:51 np0005486808 nova_compute[259627]: 2025-10-14 09:35:51.476 2 DEBUG oslo_concurrency.processutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2d97e325-a4e8-4595-9697-04219277474d/disk.config 2d97e325-a4e8-4595-9697-04219277474d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:35:51 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:35:51 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/638846683' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:35:51 np0005486808 nova_compute[259627]: 2025-10-14 09:35:51.522 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:35:51 np0005486808 nova_compute[259627]: 2025-10-14 09:35:51.528 2 DEBUG nova.network.neutron [req-2d5c6996-6df7-4547-ae23-1050ac9e5e6b req-001e0e5d-f141-4840-9ae2-556b2224b469 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Updated VIF entry in instance network info cache for port da98587d-dc59-4e4f-bc3f-d3e70dafd21e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:35:51 np0005486808 nova_compute[259627]: 2025-10-14 09:35:51.529 2 DEBUG nova.network.neutron [req-2d5c6996-6df7-4547-ae23-1050ac9e5e6b req-001e0e5d-f141-4840-9ae2-556b2224b469 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Updating instance_info_cache with network_info: [{"id": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "address": "fa:16:3e:20:70:35", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e84cd2e-9f", "ovs_interfaceid": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "address": "fa:16:3e:61:8d:23", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], 
"gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda98587d-dc", "ovs_interfaceid": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:35:51 np0005486808 nova_compute[259627]: 2025-10-14 09:35:51.545 2 DEBUG oslo_concurrency.lockutils [req-2d5c6996-6df7-4547-ae23-1050ac9e5e6b req-001e0e5d-f141-4840-9ae2-556b2224b469 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-2d97e325-a4e8-4595-9697-04219277474d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:35:51 np0005486808 nova_compute[259627]: 2025-10-14 09:35:51.582 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:35:51 np0005486808 nova_compute[259627]: 2025-10-14 09:35:51.582 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:35:51 np0005486808 nova_compute[259627]: 2025-10-14 09:35:51.587 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:35:51 np0005486808 nova_compute[259627]: 2025-10-14 09:35:51.587 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:35:51 np0005486808 nova_compute[259627]: 2025-10-14 09:35:51.711 2 DEBUG oslo_concurrency.processutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2d97e325-a4e8-4595-9697-04219277474d/disk.config 2d97e325-a4e8-4595-9697-04219277474d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.235s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:35:51 np0005486808 nova_compute[259627]: 2025-10-14 09:35:51.711 2 INFO nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Deleting local config drive /var/lib/nova/instances/2d97e325-a4e8-4595-9697-04219277474d/disk.config because it was imported into RBD.#033[00m
Oct 14 05:35:51 np0005486808 NetworkManager[44885]: <info>  [1760434551.7844] manager: (tap2e84cd2e-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/645)
Oct 14 05:35:51 np0005486808 kernel: tap2e84cd2e-9f: entered promiscuous mode
Oct 14 05:35:51 np0005486808 ovn_controller[152662]: 2025-10-14T09:35:51Z|01575|binding|INFO|Claiming lport 2e84cd2e-9f09-482a-9a06-adcfff2088d5 for this chassis.
Oct 14 05:35:51 np0005486808 ovn_controller[152662]: 2025-10-14T09:35:51Z|01576|binding|INFO|2e84cd2e-9f09-482a-9a06-adcfff2088d5: Claiming fa:16:3e:20:70:35 10.100.0.8
Oct 14 05:35:51 np0005486808 nova_compute[259627]: 2025-10-14 09:35:51.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:51.801 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:70:35 10.100.0.8'], port_security=['fa:16:3e:20:70:35 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '2d97e325-a4e8-4595-9697-04219277474d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9cb6ee52-3808-410f-9854-68ac8ffadab8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4ac22ee6-a60e-44c3-8db4-e0f5bf22a77e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=09c5c036-dfc9-4826-bc59-c008b28bd97a, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=2e84cd2e-9f09-482a-9a06-adcfff2088d5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:35:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:51.802 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 2e84cd2e-9f09-482a-9a06-adcfff2088d5 in datapath 9cb6ee52-3808-410f-9854-68ac8ffadab8 bound to our chassis#033[00m
Oct 14 05:35:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:51.804 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9cb6ee52-3808-410f-9854-68ac8ffadab8#033[00m
Oct 14 05:35:51 np0005486808 ovn_controller[152662]: 2025-10-14T09:35:51Z|01577|binding|INFO|Setting lport 2e84cd2e-9f09-482a-9a06-adcfff2088d5 ovn-installed in OVS
Oct 14 05:35:51 np0005486808 ovn_controller[152662]: 2025-10-14T09:35:51Z|01578|binding|INFO|Setting lport 2e84cd2e-9f09-482a-9a06-adcfff2088d5 up in Southbound
Oct 14 05:35:51 np0005486808 nova_compute[259627]: 2025-10-14 09:35:51.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:51 np0005486808 NetworkManager[44885]: <info>  [1760434551.8246] manager: (tapda98587d-dc): new Tun device (/org/freedesktop/NetworkManager/Devices/646)
Oct 14 05:35:51 np0005486808 kernel: tapda98587d-dc: entered promiscuous mode
Oct 14 05:35:51 np0005486808 nova_compute[259627]: 2025-10-14 09:35:51.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:51 np0005486808 ovn_controller[152662]: 2025-10-14T09:35:51Z|01579|binding|INFO|Claiming lport da98587d-dc59-4e4f-bc3f-d3e70dafd21e for this chassis.
Oct 14 05:35:51 np0005486808 ovn_controller[152662]: 2025-10-14T09:35:51Z|01580|binding|INFO|da98587d-dc59-4e4f-bc3f-d3e70dafd21e: Claiming fa:16:3e:61:8d:23 2001:db8:0:1:f816:3eff:fe61:8d23 2001:db8::f816:3eff:fe61:8d23
Oct 14 05:35:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:51.838 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[57f606ef-c5b3-4d12-b114-5858ad64a87b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:35:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:51.844 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:8d:23 2001:db8:0:1:f816:3eff:fe61:8d23 2001:db8::f816:3eff:fe61:8d23'], port_security=['fa:16:3e:61:8d:23 2001:db8:0:1:f816:3eff:fe61:8d23 2001:db8::f816:3eff:fe61:8d23'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe61:8d23/64 2001:db8::f816:3eff:fe61:8d23/64', 'neutron:device_id': '2d97e325-a4e8-4595-9697-04219277474d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0d3b36cd-f345-4ba4-8ea7-29b299ab0543', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4ac22ee6-a60e-44c3-8db4-e0f5bf22a77e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=612e2e91-84a5-412c-996c-976c512895aa, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=da98587d-dc59-4e4f-bc3f-d3e70dafd21e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:35:51 np0005486808 nova_compute[259627]: 2025-10-14 09:35:51.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:51 np0005486808 ovn_controller[152662]: 2025-10-14T09:35:51Z|01581|binding|INFO|Setting lport da98587d-dc59-4e4f-bc3f-d3e70dafd21e ovn-installed in OVS
Oct 14 05:35:51 np0005486808 ovn_controller[152662]: 2025-10-14T09:35:51Z|01582|binding|INFO|Setting lport da98587d-dc59-4e4f-bc3f-d3e70dafd21e up in Southbound
Oct 14 05:35:51 np0005486808 nova_compute[259627]: 2025-10-14 09:35:51.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:51 np0005486808 systemd-udevd[410022]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:35:51 np0005486808 systemd-udevd[410020]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:35:51 np0005486808 systemd-machined[214636]: New machine qemu-178-instance-00000091.
Oct 14 05:35:51 np0005486808 systemd[1]: Started Virtual Machine qemu-178-instance-00000091.
Oct 14 05:35:51 np0005486808 NetworkManager[44885]: <info>  [1760434551.8944] device (tapda98587d-dc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:35:51 np0005486808 NetworkManager[44885]: <info>  [1760434551.8955] device (tapda98587d-dc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:35:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:51.891 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c9c8bf99-5528-4876-9942-fa8299a1e7c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:35:51 np0005486808 NetworkManager[44885]: <info>  [1760434551.8985] device (tap2e84cd2e-9f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:35:51 np0005486808 NetworkManager[44885]: <info>  [1760434551.8992] device (tap2e84cd2e-9f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:35:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:51.900 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[84cef066-c062-40c5-b0dc-221e3ad38661]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:35:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:51.933 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c053c90d-ea06-456e-8c51-fbd7693e8f18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:35:51 np0005486808 podman[409990]: 2025-10-14 09:35:51.936926423 +0000 UTC m=+0.115271990 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:35:51 np0005486808 podman[409989]: 2025-10-14 09:35:51.940663894 +0000 UTC m=+0.121239156 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251009)
Oct 14 05:35:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:51.956 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7a0880be-af6c-4b31-8430-2551d92848d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9cb6ee52-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:65:35'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 444], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 829416, 'reachable_time': 37698, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 410044, 'error': None, 'target': 'ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:35:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:51.975 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[77a3e216-62cc-4b36-a0fc-2a0086e26821]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9cb6ee52-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 829431, 'tstamp': 829431}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 410048, 'error': None, 'target': 'ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9cb6ee52-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 829433, 'tstamp': 829433}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 410048, 'error': None, 'target': 'ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:35:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:51.977 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9cb6ee52-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:35:51 np0005486808 nova_compute[259627]: 2025-10-14 09:35:51.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:51 np0005486808 nova_compute[259627]: 2025-10-14 09:35:51.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:51.980 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9cb6ee52-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:35:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:51.980 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:35:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:51.980 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9cb6ee52-30, col_values=(('external_ids', {'iface-id': 'e170cfc7-b9d3-441b-8041-f001d366d5cd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:35:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:51.981 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:35:51 np0005486808 nova_compute[259627]: 2025-10-14 09:35:51.983 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:35:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:51.983 162547 INFO neutron.agent.ovn.metadata.agent [-] Port da98587d-dc59-4e4f-bc3f-d3e70dafd21e in datapath 0d3b36cd-f345-4ba4-8ea7-29b299ab0543 unbound from our chassis#033[00m
Oct 14 05:35:51 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:51.984 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0d3b36cd-f345-4ba4-8ea7-29b299ab0543#033[00m
Oct 14 05:35:51 np0005486808 nova_compute[259627]: 2025-10-14 09:35:51.984 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3310MB free_disk=59.92197799682617GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:35:51 np0005486808 nova_compute[259627]: 2025-10-14 09:35:51.985 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:35:51 np0005486808 nova_compute[259627]: 2025-10-14 09:35:51.985 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:35:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:52.006 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[be8131ab-f5ec-4082-9ff4-7347a42a36e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:35:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:52.048 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[79a8a004-2f45-42fa-8c17-4a30c0abe5c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:35:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:52.051 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8cd4cac4-a400-4cb9-8d03-c52279f2a2d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:35:52 np0005486808 nova_compute[259627]: 2025-10-14 09:35:52.081 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance c977bdc6-8dd7-4cb4-b50d-28e7313a16e8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:35:52 np0005486808 nova_compute[259627]: 2025-10-14 09:35:52.081 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 2d97e325-a4e8-4595-9697-04219277474d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:35:52 np0005486808 nova_compute[259627]: 2025-10-14 09:35:52.081 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:35:52 np0005486808 nova_compute[259627]: 2025-10-14 09:35:52.081 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:35:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:52.097 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c8864a82-aa15-4ded-b864-07d134af108e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:35:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:52.115 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c0b4fd32-9105-4947-aff8-2fcabf9534e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0d3b36cd-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:2e:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 24, 'tx_packets': 4, 'rx_bytes': 2216, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 24, 'tx_packets': 4, 'rx_bytes': 2216, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 445], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 829526, 'reachable_time': 38835, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 24, 'inoctets': 1880, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 24, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1880, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 24, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 410055, 'error': None, 'target': 'ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:35:52 np0005486808 nova_compute[259627]: 2025-10-14 09:35:52.136 2 DEBUG nova.compute.manager [req-6626413c-d412-4850-b94a-0a2c04df7c6c req-8622cd40-9964-48bd-833e-a1fba29477d9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Received event network-vif-plugged-2e84cd2e-9f09-482a-9a06-adcfff2088d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:35:52 np0005486808 nova_compute[259627]: 2025-10-14 09:35:52.136 2 DEBUG oslo_concurrency.lockutils [req-6626413c-d412-4850-b94a-0a2c04df7c6c req-8622cd40-9964-48bd-833e-a1fba29477d9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2d97e325-a4e8-4595-9697-04219277474d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:35:52 np0005486808 nova_compute[259627]: 2025-10-14 09:35:52.136 2 DEBUG oslo_concurrency.lockutils [req-6626413c-d412-4850-b94a-0a2c04df7c6c req-8622cd40-9964-48bd-833e-a1fba29477d9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:35:52 np0005486808 nova_compute[259627]: 2025-10-14 09:35:52.137 2 DEBUG oslo_concurrency.lockutils [req-6626413c-d412-4850-b94a-0a2c04df7c6c req-8622cd40-9964-48bd-833e-a1fba29477d9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:35:52 np0005486808 nova_compute[259627]: 2025-10-14 09:35:52.137 2 DEBUG nova.compute.manager [req-6626413c-d412-4850-b94a-0a2c04df7c6c req-8622cd40-9964-48bd-833e-a1fba29477d9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Processing event network-vif-plugged-2e84cd2e-9f09-482a-9a06-adcfff2088d5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:35:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:52.139 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5ceac88e-b746-4276-a26d-8dc197eeb0ed]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0d3b36cd-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 829539, 'tstamp': 829539}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 410056, 'error': None, 'target': 'ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:35:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:52.141 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0d3b36cd-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:35:52 np0005486808 nova_compute[259627]: 2025-10-14 09:35:52.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:52 np0005486808 nova_compute[259627]: 2025-10-14 09:35:52.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:52.145 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0d3b36cd-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:35:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:52.146 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:35:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:52.146 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0d3b36cd-f0, col_values=(('external_ids', {'iface-id': '1caef19e-e76a-434d-84c0-dc762554a564'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:35:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:35:52.147 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:35:52 np0005486808 nova_compute[259627]: 2025-10-14 09:35:52.156 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:35:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2482: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:35:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:35:52 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1258586033' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:35:52 np0005486808 nova_compute[259627]: 2025-10-14 09:35:52.664 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:35:52 np0005486808 nova_compute[259627]: 2025-10-14 09:35:52.670 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:35:52 np0005486808 nova_compute[259627]: 2025-10-14 09:35:52.692 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:35:52 np0005486808 nova_compute[259627]: 2025-10-14 09:35:52.727 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:35:52 np0005486808 nova_compute[259627]: 2025-10-14 09:35:52.727 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:35:52 np0005486808 nova_compute[259627]: 2025-10-14 09:35:52.782 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434552.781984, 2d97e325-a4e8-4595-9697-04219277474d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:35:52 np0005486808 nova_compute[259627]: 2025-10-14 09:35:52.783 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2d97e325-a4e8-4595-9697-04219277474d] VM Started (Lifecycle Event)#033[00m
Oct 14 05:35:52 np0005486808 nova_compute[259627]: 2025-10-14 09:35:52.807 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:35:52 np0005486808 nova_compute[259627]: 2025-10-14 09:35:52.811 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434552.7821534, 2d97e325-a4e8-4595-9697-04219277474d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:35:52 np0005486808 nova_compute[259627]: 2025-10-14 09:35:52.812 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2d97e325-a4e8-4595-9697-04219277474d] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:35:52 np0005486808 nova_compute[259627]: 2025-10-14 09:35:52.838 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:35:52 np0005486808 nova_compute[259627]: 2025-10-14 09:35:52.841 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:35:52 np0005486808 nova_compute[259627]: 2025-10-14 09:35:52.863 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2d97e325-a4e8-4595-9697-04219277474d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:35:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:35:53 np0005486808 nova_compute[259627]: 2025-10-14 09:35:53.727 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:35:53 np0005486808 nova_compute[259627]: 2025-10-14 09:35:53.729 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:35:53 np0005486808 nova_compute[259627]: 2025-10-14 09:35:53.729 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:35:53 np0005486808 nova_compute[259627]: 2025-10-14 09:35:53.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:35:53 np0005486808 nova_compute[259627]: 2025-10-14 09:35:53.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:35:53 np0005486808 nova_compute[259627]: 2025-10-14 09:35:53.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 05:35:54 np0005486808 nova_compute[259627]: 2025-10-14 09:35:54.006 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct 14 05:35:54 np0005486808 nova_compute[259627]: 2025-10-14 09:35:54.239 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:35:54 np0005486808 nova_compute[259627]: 2025-10-14 09:35:54.239 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:35:54 np0005486808 nova_compute[259627]: 2025-10-14 09:35:54.240 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 14 05:35:54 np0005486808 nova_compute[259627]: 2025-10-14 09:35:54.240 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c977bdc6-8dd7-4cb4-b50d-28e7313a16e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:35:54 np0005486808 nova_compute[259627]: 2025-10-14 09:35:54.287 2 DEBUG nova.compute.manager [req-5bf9aafb-aabb-4b32-85b8-a6df75839001 req-6c93ba2f-e816-4a78-8781-3afa5f13e5e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Received event network-vif-plugged-2e84cd2e-9f09-482a-9a06-adcfff2088d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:35:54 np0005486808 nova_compute[259627]: 2025-10-14 09:35:54.287 2 DEBUG oslo_concurrency.lockutils [req-5bf9aafb-aabb-4b32-85b8-a6df75839001 req-6c93ba2f-e816-4a78-8781-3afa5f13e5e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2d97e325-a4e8-4595-9697-04219277474d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:35:54 np0005486808 nova_compute[259627]: 2025-10-14 09:35:54.287 2 DEBUG oslo_concurrency.lockutils [req-5bf9aafb-aabb-4b32-85b8-a6df75839001 req-6c93ba2f-e816-4a78-8781-3afa5f13e5e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:35:54 np0005486808 nova_compute[259627]: 2025-10-14 09:35:54.288 2 DEBUG oslo_concurrency.lockutils [req-5bf9aafb-aabb-4b32-85b8-a6df75839001 req-6c93ba2f-e816-4a78-8781-3afa5f13e5e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:35:54 np0005486808 nova_compute[259627]: 2025-10-14 09:35:54.288 2 DEBUG nova.compute.manager [req-5bf9aafb-aabb-4b32-85b8-a6df75839001 req-6c93ba2f-e816-4a78-8781-3afa5f13e5e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] No event matching network-vif-plugged-2e84cd2e-9f09-482a-9a06-adcfff2088d5 in dict_keys([('network-vif-plugged', 'da98587d-dc59-4e4f-bc3f-d3e70dafd21e')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Oct 14 05:35:54 np0005486808 nova_compute[259627]: 2025-10-14 09:35:54.288 2 WARNING nova.compute.manager [req-5bf9aafb-aabb-4b32-85b8-a6df75839001 req-6c93ba2f-e816-4a78-8781-3afa5f13e5e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Received unexpected event network-vif-plugged-2e84cd2e-9f09-482a-9a06-adcfff2088d5 for instance with vm_state building and task_state spawning.#033[00m
Oct 14 05:35:54 np0005486808 nova_compute[259627]: 2025-10-14 09:35:54.289 2 DEBUG nova.compute.manager [req-5bf9aafb-aabb-4b32-85b8-a6df75839001 req-6c93ba2f-e816-4a78-8781-3afa5f13e5e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Received event network-vif-plugged-da98587d-dc59-4e4f-bc3f-d3e70dafd21e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:35:54 np0005486808 nova_compute[259627]: 2025-10-14 09:35:54.289 2 DEBUG oslo_concurrency.lockutils [req-5bf9aafb-aabb-4b32-85b8-a6df75839001 req-6c93ba2f-e816-4a78-8781-3afa5f13e5e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2d97e325-a4e8-4595-9697-04219277474d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:35:54 np0005486808 nova_compute[259627]: 2025-10-14 09:35:54.289 2 DEBUG oslo_concurrency.lockutils [req-5bf9aafb-aabb-4b32-85b8-a6df75839001 req-6c93ba2f-e816-4a78-8781-3afa5f13e5e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:35:54 np0005486808 nova_compute[259627]: 2025-10-14 09:35:54.290 2 DEBUG oslo_concurrency.lockutils [req-5bf9aafb-aabb-4b32-85b8-a6df75839001 req-6c93ba2f-e816-4a78-8781-3afa5f13e5e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:35:54 np0005486808 nova_compute[259627]: 2025-10-14 09:35:54.290 2 DEBUG nova.compute.manager [req-5bf9aafb-aabb-4b32-85b8-a6df75839001 req-6c93ba2f-e816-4a78-8781-3afa5f13e5e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Processing event network-vif-plugged-da98587d-dc59-4e4f-bc3f-d3e70dafd21e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:35:54 np0005486808 nova_compute[259627]: 2025-10-14 09:35:54.291 2 DEBUG nova.compute.manager [req-5bf9aafb-aabb-4b32-85b8-a6df75839001 req-6c93ba2f-e816-4a78-8781-3afa5f13e5e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Received event network-vif-plugged-da98587d-dc59-4e4f-bc3f-d3e70dafd21e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:35:54 np0005486808 nova_compute[259627]: 2025-10-14 09:35:54.291 2 DEBUG oslo_concurrency.lockutils [req-5bf9aafb-aabb-4b32-85b8-a6df75839001 req-6c93ba2f-e816-4a78-8781-3afa5f13e5e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2d97e325-a4e8-4595-9697-04219277474d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:35:54 np0005486808 nova_compute[259627]: 2025-10-14 09:35:54.291 2 DEBUG oslo_concurrency.lockutils [req-5bf9aafb-aabb-4b32-85b8-a6df75839001 req-6c93ba2f-e816-4a78-8781-3afa5f13e5e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:35:54 np0005486808 nova_compute[259627]: 2025-10-14 09:35:54.291 2 DEBUG oslo_concurrency.lockutils [req-5bf9aafb-aabb-4b32-85b8-a6df75839001 req-6c93ba2f-e816-4a78-8781-3afa5f13e5e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:35:54 np0005486808 nova_compute[259627]: 2025-10-14 09:35:54.292 2 DEBUG nova.compute.manager [req-5bf9aafb-aabb-4b32-85b8-a6df75839001 req-6c93ba2f-e816-4a78-8781-3afa5f13e5e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] No waiting events found dispatching network-vif-plugged-da98587d-dc59-4e4f-bc3f-d3e70dafd21e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:35:54 np0005486808 nova_compute[259627]: 2025-10-14 09:35:54.292 2 WARNING nova.compute.manager [req-5bf9aafb-aabb-4b32-85b8-a6df75839001 req-6c93ba2f-e816-4a78-8781-3afa5f13e5e8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Received unexpected event network-vif-plugged-da98587d-dc59-4e4f-bc3f-d3e70dafd21e for instance with vm_state building and task_state spawning.#033[00m
Oct 14 05:35:54 np0005486808 nova_compute[259627]: 2025-10-14 09:35:54.293 2 DEBUG nova.compute.manager [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:35:54 np0005486808 nova_compute[259627]: 2025-10-14 09:35:54.302 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434554.2972925, 2d97e325-a4e8-4595-9697-04219277474d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:35:54 np0005486808 nova_compute[259627]: 2025-10-14 09:35:54.302 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2d97e325-a4e8-4595-9697-04219277474d] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:35:54 np0005486808 nova_compute[259627]: 2025-10-14 09:35:54.306 2 DEBUG nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:35:54 np0005486808 nova_compute[259627]: 2025-10-14 09:35:54.312 2 INFO nova.virt.libvirt.driver [-] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Instance spawned successfully.#033[00m
Oct 14 05:35:54 np0005486808 nova_compute[259627]: 2025-10-14 09:35:54.312 2 DEBUG nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:35:54 np0005486808 nova_compute[259627]: 2025-10-14 09:35:54.346 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:35:54 np0005486808 nova_compute[259627]: 2025-10-14 09:35:54.356 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:35:54 np0005486808 nova_compute[259627]: 2025-10-14 09:35:54.366 2 DEBUG nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:35:54 np0005486808 nova_compute[259627]: 2025-10-14 09:35:54.366 2 DEBUG nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:35:54 np0005486808 nova_compute[259627]: 2025-10-14 09:35:54.367 2 DEBUG nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:35:54 np0005486808 nova_compute[259627]: 2025-10-14 09:35:54.368 2 DEBUG nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:35:54 np0005486808 nova_compute[259627]: 2025-10-14 09:35:54.369 2 DEBUG nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:35:54 np0005486808 nova_compute[259627]: 2025-10-14 09:35:54.370 2 DEBUG nova.virt.libvirt.driver [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:35:54 np0005486808 nova_compute[259627]: 2025-10-14 09:35:54.378 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 2d97e325-a4e8-4595-9697-04219277474d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:35:54 np0005486808 nova_compute[259627]: 2025-10-14 09:35:54.441 2 INFO nova.compute.manager [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Took 13.15 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:35:54 np0005486808 nova_compute[259627]: 2025-10-14 09:35:54.441 2 DEBUG nova.compute.manager [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:35:54 np0005486808 nova_compute[259627]: 2025-10-14 09:35:54.509 2 INFO nova.compute.manager [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Took 14.13 seconds to build instance.#033[00m
Oct 14 05:35:54 np0005486808 nova_compute[259627]: 2025-10-14 09:35:54.534 2 DEBUG oslo_concurrency.lockutils [None req-6cc08aa3-2ca9-4b2b-ad5f-37c69904ad3b 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.231s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:35:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2483: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:35:55 np0005486808 nova_compute[259627]: 2025-10-14 09:35:55.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:56 np0005486808 nova_compute[259627]: 2025-10-14 09:35:56.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:35:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2484: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 790 KiB/s rd, 1.8 MiB/s wr, 63 op/s
Oct 14 05:35:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:35:58 np0005486808 nova_compute[259627]: 2025-10-14 09:35:58.226 2 DEBUG nova.compute.manager [req-7bb3ef8e-8d9d-4299-832b-30573f85efeb req-bfd0d962-363a-446a-8d84-1f76a03eb061 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Received event network-changed-2e84cd2e-9f09-482a-9a06-adcfff2088d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:35:58 np0005486808 nova_compute[259627]: 2025-10-14 09:35:58.226 2 DEBUG nova.compute.manager [req-7bb3ef8e-8d9d-4299-832b-30573f85efeb req-bfd0d962-363a-446a-8d84-1f76a03eb061 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Refreshing instance network info cache due to event network-changed-2e84cd2e-9f09-482a-9a06-adcfff2088d5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:35:58 np0005486808 nova_compute[259627]: 2025-10-14 09:35:58.228 2 DEBUG oslo_concurrency.lockutils [req-7bb3ef8e-8d9d-4299-832b-30573f85efeb req-bfd0d962-363a-446a-8d84-1f76a03eb061 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-2d97e325-a4e8-4595-9697-04219277474d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:35:58 np0005486808 nova_compute[259627]: 2025-10-14 09:35:58.228 2 DEBUG oslo_concurrency.lockutils [req-7bb3ef8e-8d9d-4299-832b-30573f85efeb req-bfd0d962-363a-446a-8d84-1f76a03eb061 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-2d97e325-a4e8-4595-9697-04219277474d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:35:58 np0005486808 nova_compute[259627]: 2025-10-14 09:35:58.229 2 DEBUG nova.network.neutron [req-7bb3ef8e-8d9d-4299-832b-30573f85efeb req-bfd0d962-363a-446a-8d84-1f76a03eb061 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Refreshing network info cache for port 2e84cd2e-9f09-482a-9a06-adcfff2088d5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:35:58 np0005486808 nova_compute[259627]: 2025-10-14 09:35:58.289 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Updating instance_info_cache with network_info: [{"id": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "address": "fa:16:3e:2a:ad:d7", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13d4f68b-23", "ovs_interfaceid": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "address": "fa:16:3e:4e:2d:2e", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2457aed-ba", "ovs_interfaceid": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:35:58 np0005486808 nova_compute[259627]: 2025-10-14 09:35:58.315 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:35:58 np0005486808 nova_compute[259627]: 2025-10-14 09:35:58.315 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 14 05:35:58 np0005486808 nova_compute[259627]: 2025-10-14 09:35:58.316 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:35:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2485: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 773 KiB/s rd, 12 KiB/s wr, 36 op/s
Oct 14 05:35:58 np0005486808 nova_compute[259627]: 2025-10-14 09:35:58.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:35:59 np0005486808 nova_compute[259627]: 2025-10-14 09:35:59.737 2 DEBUG nova.network.neutron [req-7bb3ef8e-8d9d-4299-832b-30573f85efeb req-bfd0d962-363a-446a-8d84-1f76a03eb061 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Updated VIF entry in instance network info cache for port 2e84cd2e-9f09-482a-9a06-adcfff2088d5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:35:59 np0005486808 nova_compute[259627]: 2025-10-14 09:35:59.737 2 DEBUG nova.network.neutron [req-7bb3ef8e-8d9d-4299-832b-30573f85efeb req-bfd0d962-363a-446a-8d84-1f76a03eb061 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Updating instance_info_cache with network_info: [{"id": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "address": "fa:16:3e:20:70:35", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e84cd2e-9f", "ovs_interfaceid": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "address": "fa:16:3e:61:8d:23", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda98587d-dc", "ovs_interfaceid": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:35:59 np0005486808 nova_compute[259627]: 2025-10-14 09:35:59.768 2 DEBUG oslo_concurrency.lockutils [req-7bb3ef8e-8d9d-4299-832b-30573f85efeb req-bfd0d962-363a-446a-8d84-1f76a03eb061 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-2d97e325-a4e8-4595-9697-04219277474d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:36:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2486: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 14 05:36:00 np0005486808 nova_compute[259627]: 2025-10-14 09:36:00.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:01 np0005486808 nova_compute[259627]: 2025-10-14 09:36:01.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2487: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 14 05:36:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:36:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:36:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:36:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:36:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:36:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:36:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:36:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2488: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 14 05:36:04 np0005486808 podman[410124]: 2025-10-14 09:36:04.695178087 +0000 UTC m=+0.088794730 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:36:04 np0005486808 podman[410123]: 2025-10-14 09:36:04.732546914 +0000 UTC m=+0.129921559 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 14 05:36:04 np0005486808 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 8.
Oct 14 05:36:05 np0005486808 nova_compute[259627]: 2025-10-14 09:36:05.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:36:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3099769795' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:36:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:36:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3099769795' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:36:06 np0005486808 nova_compute[259627]: 2025-10-14 09:36:06.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2489: 305 pgs: 305 active+clean; 188 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 106 op/s
Oct 14 05:36:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:07.052 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:36:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:07.053 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:36:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:07.053 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:36:07 np0005486808 ovn_controller[152662]: 2025-10-14T09:36:07Z|00188|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:20:70:35 10.100.0.8
Oct 14 05:36:07 np0005486808 ovn_controller[152662]: 2025-10-14T09:36:07Z|00189|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:20:70:35 10.100.0.8
Oct 14 05:36:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:36:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2490: 305 pgs: 305 active+clean; 188 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.0 MiB/s wr, 69 op/s
Oct 14 05:36:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2491: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 101 op/s
Oct 14 05:36:10 np0005486808 nova_compute[259627]: 2025-10-14 09:36:10.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:11 np0005486808 nova_compute[259627]: 2025-10-14 09:36:11.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2492: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 05:36:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:36:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2493: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 05:36:15 np0005486808 nova_compute[259627]: 2025-10-14 09:36:15.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:16 np0005486808 nova_compute[259627]: 2025-10-14 09:36:16.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2494: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 14 05:36:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.243 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:36:17 np0005486808 nova_compute[259627]: 2025-10-14 09:36:17.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.246 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 14 05:36:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.247 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:36:17 np0005486808 nova_compute[259627]: 2025-10-14 09:36:17.481 2 DEBUG nova.compute.manager [req-44233c3b-9c61-4eaa-a06d-fbc414203847 req-c63b6b61-0bb7-43de-a8c5-e25f164f94a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Received event network-changed-2e84cd2e-9f09-482a-9a06-adcfff2088d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:36:17 np0005486808 nova_compute[259627]: 2025-10-14 09:36:17.481 2 DEBUG nova.compute.manager [req-44233c3b-9c61-4eaa-a06d-fbc414203847 req-c63b6b61-0bb7-43de-a8c5-e25f164f94a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Refreshing instance network info cache due to event network-changed-2e84cd2e-9f09-482a-9a06-adcfff2088d5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:36:17 np0005486808 nova_compute[259627]: 2025-10-14 09:36:17.482 2 DEBUG oslo_concurrency.lockutils [req-44233c3b-9c61-4eaa-a06d-fbc414203847 req-c63b6b61-0bb7-43de-a8c5-e25f164f94a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-2d97e325-a4e8-4595-9697-04219277474d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:36:17 np0005486808 nova_compute[259627]: 2025-10-14 09:36:17.482 2 DEBUG oslo_concurrency.lockutils [req-44233c3b-9c61-4eaa-a06d-fbc414203847 req-c63b6b61-0bb7-43de-a8c5-e25f164f94a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-2d97e325-a4e8-4595-9697-04219277474d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:36:17 np0005486808 nova_compute[259627]: 2025-10-14 09:36:17.482 2 DEBUG nova.network.neutron [req-44233c3b-9c61-4eaa-a06d-fbc414203847 req-c63b6b61-0bb7-43de-a8c5-e25f164f94a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Refreshing network info cache for port 2e84cd2e-9f09-482a-9a06-adcfff2088d5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:36:17 np0005486808 nova_compute[259627]: 2025-10-14 09:36:17.536 2 DEBUG oslo_concurrency.lockutils [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "2d97e325-a4e8-4595-9697-04219277474d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:36:17 np0005486808 nova_compute[259627]: 2025-10-14 09:36:17.536 2 DEBUG oslo_concurrency.lockutils [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:36:17 np0005486808 nova_compute[259627]: 2025-10-14 09:36:17.537 2 DEBUG oslo_concurrency.lockutils [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "2d97e325-a4e8-4595-9697-04219277474d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:36:17 np0005486808 nova_compute[259627]: 2025-10-14 09:36:17.538 2 DEBUG oslo_concurrency.lockutils [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:36:17 np0005486808 nova_compute[259627]: 2025-10-14 09:36:17.538 2 DEBUG oslo_concurrency.lockutils [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:36:17 np0005486808 nova_compute[259627]: 2025-10-14 09:36:17.540 2 INFO nova.compute.manager [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Terminating instance#033[00m
Oct 14 05:36:17 np0005486808 nova_compute[259627]: 2025-10-14 09:36:17.542 2 DEBUG nova.compute.manager [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:36:17 np0005486808 kernel: tap2e84cd2e-9f (unregistering): left promiscuous mode
Oct 14 05:36:17 np0005486808 NetworkManager[44885]: <info>  [1760434577.5993] device (tap2e84cd2e-9f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:36:17 np0005486808 nova_compute[259627]: 2025-10-14 09:36:17.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:17 np0005486808 ovn_controller[152662]: 2025-10-14T09:36:17Z|01583|binding|INFO|Releasing lport 2e84cd2e-9f09-482a-9a06-adcfff2088d5 from this chassis (sb_readonly=0)
Oct 14 05:36:17 np0005486808 ovn_controller[152662]: 2025-10-14T09:36:17Z|01584|binding|INFO|Setting lport 2e84cd2e-9f09-482a-9a06-adcfff2088d5 down in Southbound
Oct 14 05:36:17 np0005486808 ovn_controller[152662]: 2025-10-14T09:36:17Z|01585|binding|INFO|Removing iface tap2e84cd2e-9f ovn-installed in OVS
Oct 14 05:36:17 np0005486808 nova_compute[259627]: 2025-10-14 09:36:17.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.622 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:70:35 10.100.0.8'], port_security=['fa:16:3e:20:70:35 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '2d97e325-a4e8-4595-9697-04219277474d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9cb6ee52-3808-410f-9854-68ac8ffadab8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4ac22ee6-a60e-44c3-8db4-e0f5bf22a77e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=09c5c036-dfc9-4826-bc59-c008b28bd97a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=2e84cd2e-9f09-482a-9a06-adcfff2088d5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:36:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.623 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 2e84cd2e-9f09-482a-9a06-adcfff2088d5 in datapath 9cb6ee52-3808-410f-9854-68ac8ffadab8 unbound from our chassis#033[00m
Oct 14 05:36:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.624 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9cb6ee52-3808-410f-9854-68ac8ffadab8#033[00m
Oct 14 05:36:17 np0005486808 kernel: tapda98587d-dc (unregistering): left promiscuous mode
Oct 14 05:36:17 np0005486808 NetworkManager[44885]: <info>  [1760434577.6415] device (tapda98587d-dc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:36:17 np0005486808 nova_compute[259627]: 2025-10-14 09:36:17.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.651 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[63626a20-eebd-4e4d-988c-c12c6f1716a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:36:17 np0005486808 ovn_controller[152662]: 2025-10-14T09:36:17Z|01586|binding|INFO|Releasing lport da98587d-dc59-4e4f-bc3f-d3e70dafd21e from this chassis (sb_readonly=0)
Oct 14 05:36:17 np0005486808 ovn_controller[152662]: 2025-10-14T09:36:17Z|01587|binding|INFO|Setting lport da98587d-dc59-4e4f-bc3f-d3e70dafd21e down in Southbound
Oct 14 05:36:17 np0005486808 ovn_controller[152662]: 2025-10-14T09:36:17Z|01588|binding|INFO|Removing iface tapda98587d-dc ovn-installed in OVS
Oct 14 05:36:17 np0005486808 nova_compute[259627]: 2025-10-14 09:36:17.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:17 np0005486808 nova_compute[259627]: 2025-10-14 09:36:17.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.664 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:8d:23 2001:db8:0:1:f816:3eff:fe61:8d23 2001:db8::f816:3eff:fe61:8d23'], port_security=['fa:16:3e:61:8d:23 2001:db8:0:1:f816:3eff:fe61:8d23 2001:db8::f816:3eff:fe61:8d23'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe61:8d23/64 2001:db8::f816:3eff:fe61:8d23/64', 'neutron:device_id': '2d97e325-a4e8-4595-9697-04219277474d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0d3b36cd-f345-4ba4-8ea7-29b299ab0543', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4ac22ee6-a60e-44c3-8db4-e0f5bf22a77e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=612e2e91-84a5-412c-996c-976c512895aa, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=da98587d-dc59-4e4f-bc3f-d3e70dafd21e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:36:17 np0005486808 nova_compute[259627]: 2025-10-14 09:36:17.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.689 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d255ffaa-5c28-422f-ad7c-01ab600a02b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:36:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.692 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[b7a1e540-5834-430f-b017-a126ef0324fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:36:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.722 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[b126aa75-48a3-4bfa-b12a-97a4df4a4875]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:36:17 np0005486808 systemd[1]: machine-qemu\x2d178\x2dinstance\x2d00000091.scope: Deactivated successfully.
Oct 14 05:36:17 np0005486808 systemd[1]: machine-qemu\x2d178\x2dinstance\x2d00000091.scope: Consumed 13.637s CPU time.
Oct 14 05:36:17 np0005486808 systemd-machined[214636]: Machine qemu-178-instance-00000091 terminated.
Oct 14 05:36:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.740 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0942a086-653a-46f3-919b-6f85d25b4392]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9cb6ee52-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:65:35'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 8, 'rx_bytes': 1000, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 8, 'rx_bytes': 1000, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 444], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 829416, 'reachable_time': 37698, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 410185, 'error': None, 'target': 'ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:36:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.759 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3b8e9e6c-7b76-4255-8caf-3e28570bd319]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9cb6ee52-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 829431, 'tstamp': 829431}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 410186, 'error': None, 'target': 'ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9cb6ee52-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 829433, 'tstamp': 829433}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 410186, 'error': None, 'target': 'ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:36:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.762 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9cb6ee52-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:36:17 np0005486808 nova_compute[259627]: 2025-10-14 09:36:17.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:17 np0005486808 nova_compute[259627]: 2025-10-14 09:36:17.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.771 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9cb6ee52-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:36:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.771 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:36:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.771 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9cb6ee52-30, col_values=(('external_ids', {'iface-id': 'e170cfc7-b9d3-441b-8041-f001d366d5cd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:36:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.772 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:36:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.773 162547 INFO neutron.agent.ovn.metadata.agent [-] Port da98587d-dc59-4e4f-bc3f-d3e70dafd21e in datapath 0d3b36cd-f345-4ba4-8ea7-29b299ab0543 unbound from our chassis#033[00m
Oct 14 05:36:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.774 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0d3b36cd-f345-4ba4-8ea7-29b299ab0543#033[00m
Oct 14 05:36:17 np0005486808 NetworkManager[44885]: <info>  [1760434577.7779] manager: (tapda98587d-dc): new Tun device (/org/freedesktop/NetworkManager/Devices/647)
Oct 14 05:36:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.789 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a3038f5a-94d1-4975-8718-5ffa468ce33e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:36:17 np0005486808 nova_compute[259627]: 2025-10-14 09:36:17.797 2 INFO nova.virt.libvirt.driver [-] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Instance destroyed successfully.#033[00m
Oct 14 05:36:17 np0005486808 nova_compute[259627]: 2025-10-14 09:36:17.797 2 DEBUG nova.objects.instance [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'resources' on Instance uuid 2d97e325-a4e8-4595-9697-04219277474d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:36:17 np0005486808 nova_compute[259627]: 2025-10-14 09:36:17.812 2 DEBUG nova.virt.libvirt.vif [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:35:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-529814172',display_name='tempest-TestGettingAddress-server-529814172',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-529814172',id=145,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCOKJ7HNGwxvkNIllbfk0BJx0lRxRvnircYlLzsMnTrFAS1BvWK624+5Xtsv/flZHhMPVZEwSoakCSfohmdiqqoxJwqvVYzYcxThKbDWthm7NoIlGg8aDS1B3nhZUeum7w==',key_name='tempest-TestGettingAddress-1328370159',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:35:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-ynpbprbq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:35:54Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=2d97e325-a4e8-4595-9697-04219277474d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "address": "fa:16:3e:20:70:35", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e84cd2e-9f", "ovs_interfaceid": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:36:17 np0005486808 nova_compute[259627]: 2025-10-14 09:36:17.813 2 DEBUG nova.network.os_vif_util [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "address": "fa:16:3e:20:70:35", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e84cd2e-9f", "ovs_interfaceid": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:36:17 np0005486808 nova_compute[259627]: 2025-10-14 09:36:17.814 2 DEBUG nova.network.os_vif_util [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:20:70:35,bridge_name='br-int',has_traffic_filtering=True,id=2e84cd2e-9f09-482a-9a06-adcfff2088d5,network=Network(9cb6ee52-3808-410f-9854-68ac8ffadab8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e84cd2e-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:36:17 np0005486808 nova_compute[259627]: 2025-10-14 09:36:17.816 2 DEBUG os_vif [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:20:70:35,bridge_name='br-int',has_traffic_filtering=True,id=2e84cd2e-9f09-482a-9a06-adcfff2088d5,network=Network(9cb6ee52-3808-410f-9854-68ac8ffadab8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e84cd2e-9f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:36:17 np0005486808 nova_compute[259627]: 2025-10-14 09:36:17.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:17 np0005486808 nova_compute[259627]: 2025-10-14 09:36:17.817 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2e84cd2e-9f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:36:17 np0005486808 nova_compute[259627]: 2025-10-14 09:36:17.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:17 np0005486808 nova_compute[259627]: 2025-10-14 09:36:17.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:36:17 np0005486808 nova_compute[259627]: 2025-10-14 09:36:17.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:17 np0005486808 nova_compute[259627]: 2025-10-14 09:36:17.825 2 INFO os_vif [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:20:70:35,bridge_name='br-int',has_traffic_filtering=True,id=2e84cd2e-9f09-482a-9a06-adcfff2088d5,network=Network(9cb6ee52-3808-410f-9854-68ac8ffadab8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e84cd2e-9f')#033[00m
Oct 14 05:36:17 np0005486808 nova_compute[259627]: 2025-10-14 09:36:17.826 2 DEBUG nova.virt.libvirt.vif [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:35:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-529814172',display_name='tempest-TestGettingAddress-server-529814172',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-529814172',id=145,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCOKJ7HNGwxvkNIllbfk0BJx0lRxRvnircYlLzsMnTrFAS1BvWK624+5Xtsv/flZHhMPVZEwSoakCSfohmdiqqoxJwqvVYzYcxThKbDWthm7NoIlGg8aDS1B3nhZUeum7w==',key_name='tempest-TestGettingAddress-1328370159',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:35:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-ynpbprbq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:35:54Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=2d97e325-a4e8-4595-9697-04219277474d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "address": "fa:16:3e:61:8d:23", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:8d23", 
"type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda98587d-dc", "ovs_interfaceid": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:36:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.825 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[77066bd6-ed79-4b08-8a9f-58a35152f82a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:36:17 np0005486808 nova_compute[259627]: 2025-10-14 09:36:17.826 2 DEBUG nova.network.os_vif_util [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "address": "fa:16:3e:61:8d:23", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda98587d-dc", "ovs_interfaceid": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:36:17 np0005486808 nova_compute[259627]: 2025-10-14 09:36:17.827 2 DEBUG nova.network.os_vif_util [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:8d:23,bridge_name='br-int',has_traffic_filtering=True,id=da98587d-dc59-4e4f-bc3f-d3e70dafd21e,network=Network(0d3b36cd-f345-4ba4-8ea7-29b299ab0543),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda98587d-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:36:17 np0005486808 nova_compute[259627]: 2025-10-14 09:36:17.827 2 DEBUG os_vif [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:8d:23,bridge_name='br-int',has_traffic_filtering=True,id=da98587d-dc59-4e4f-bc3f-d3e70dafd21e,network=Network(0d3b36cd-f345-4ba4-8ea7-29b299ab0543),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda98587d-dc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:36:17 np0005486808 nova_compute[259627]: 2025-10-14 09:36:17.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:17 np0005486808 nova_compute[259627]: 2025-10-14 09:36:17.828 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda98587d-dc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:36:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.829 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[e2c8993c-585e-4362-806d-25559b762f6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:36:17 np0005486808 nova_compute[259627]: 2025-10-14 09:36:17.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:17 np0005486808 nova_compute[259627]: 2025-10-14 09:36:17.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:17 np0005486808 nova_compute[259627]: 2025-10-14 09:36:17.832 2 INFO os_vif [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:8d:23,bridge_name='br-int',has_traffic_filtering=True,id=da98587d-dc59-4e4f-bc3f-d3e70dafd21e,network=Network(0d3b36cd-f345-4ba4-8ea7-29b299ab0543),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda98587d-dc')#033[00m
Oct 14 05:36:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.854 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f704df57-4402-4d5b-a7ba-97d58707e2c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:36:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.870 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0100be30-b603-452b-90f0-1b686bce45b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0d3b36cd-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:2e:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 40, 'tx_packets': 5, 'rx_bytes': 3600, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 40, 'tx_packets': 5, 'rx_bytes': 3600, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 445], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 829526, 'reachable_time': 38835, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 40, 'inoctets': 3040, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 40, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 3040, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 40, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 410230, 'error': None, 'target': 'ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:36:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.890 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b00df92c-e283-4f35-8ec1-4a296ce2d4f8]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0d3b36cd-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 829539, 'tstamp': 829539}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 410234, 'error': None, 'target': 'ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:36:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.892 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0d3b36cd-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:36:17 np0005486808 nova_compute[259627]: 2025-10-14 09:36:17.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:17 np0005486808 nova_compute[259627]: 2025-10-14 09:36:17.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.895 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0d3b36cd-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:36:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.895 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:36:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.896 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0d3b36cd-f0, col_values=(('external_ids', {'iface-id': '1caef19e-e76a-434d-84c0-dc762554a564'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:36:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:17.896 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:36:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:36:18 np0005486808 nova_compute[259627]: 2025-10-14 09:36:18.210 2 INFO nova.virt.libvirt.driver [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Deleting instance files /var/lib/nova/instances/2d97e325-a4e8-4595-9697-04219277474d_del#033[00m
Oct 14 05:36:18 np0005486808 nova_compute[259627]: 2025-10-14 09:36:18.211 2 INFO nova.virt.libvirt.driver [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Deletion of /var/lib/nova/instances/2d97e325-a4e8-4595-9697-04219277474d_del complete#033[00m
Oct 14 05:36:18 np0005486808 nova_compute[259627]: 2025-10-14 09:36:18.371 2 INFO nova.compute.manager [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Took 0.83 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:36:18 np0005486808 nova_compute[259627]: 2025-10-14 09:36:18.371 2 DEBUG oslo.service.loopingcall [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:36:18 np0005486808 nova_compute[259627]: 2025-10-14 09:36:18.372 2 DEBUG nova.compute.manager [-] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:36:18 np0005486808 nova_compute[259627]: 2025-10-14 09:36:18.372 2 DEBUG nova.network.neutron [-] [instance: 2d97e325-a4e8-4595-9697-04219277474d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:36:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2495: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 224 KiB/s rd, 167 KiB/s wr, 33 op/s
Oct 14 05:36:18 np0005486808 nova_compute[259627]: 2025-10-14 09:36:18.704 2 DEBUG nova.network.neutron [req-44233c3b-9c61-4eaa-a06d-fbc414203847 req-c63b6b61-0bb7-43de-a8c5-e25f164f94a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Updated VIF entry in instance network info cache for port 2e84cd2e-9f09-482a-9a06-adcfff2088d5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:36:18 np0005486808 nova_compute[259627]: 2025-10-14 09:36:18.705 2 DEBUG nova.network.neutron [req-44233c3b-9c61-4eaa-a06d-fbc414203847 req-c63b6b61-0bb7-43de-a8c5-e25f164f94a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Updating instance_info_cache with network_info: [{"id": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "address": "fa:16:3e:20:70:35", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e84cd2e-9f", "ovs_interfaceid": "2e84cd2e-9f09-482a-9a06-adcfff2088d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "address": "fa:16:3e:61:8d:23", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], 
"gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda98587d-dc", "ovs_interfaceid": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:36:18 np0005486808 nova_compute[259627]: 2025-10-14 09:36:18.723 2 DEBUG oslo_concurrency.lockutils [req-44233c3b-9c61-4eaa-a06d-fbc414203847 req-c63b6b61-0bb7-43de-a8c5-e25f164f94a5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-2d97e325-a4e8-4595-9697-04219277474d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:36:19 np0005486808 nova_compute[259627]: 2025-10-14 09:36:19.169 2 DEBUG nova.compute.manager [req-1d901890-2861-4d96-aa74-e0bc476eff61 req-06e24f1d-61c3-4731-ac29-76e16d2d9ba6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Received event network-vif-deleted-2e84cd2e-9f09-482a-9a06-adcfff2088d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:36:19 np0005486808 nova_compute[259627]: 2025-10-14 09:36:19.169 2 INFO nova.compute.manager [req-1d901890-2861-4d96-aa74-e0bc476eff61 req-06e24f1d-61c3-4731-ac29-76e16d2d9ba6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Neutron deleted interface 2e84cd2e-9f09-482a-9a06-adcfff2088d5; detaching it from the instance and deleting it from the info cache#033[00m
Oct 14 05:36:19 np0005486808 nova_compute[259627]: 2025-10-14 09:36:19.170 2 DEBUG nova.network.neutron [req-1d901890-2861-4d96-aa74-e0bc476eff61 req-06e24f1d-61c3-4731-ac29-76e16d2d9ba6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Updating instance_info_cache with network_info: [{"id": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "address": "fa:16:3e:61:8d:23", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe61:8d23", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda98587d-dc", "ovs_interfaceid": "da98587d-dc59-4e4f-bc3f-d3e70dafd21e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:36:19 np0005486808 nova_compute[259627]: 2025-10-14 09:36:19.189 2 DEBUG nova.compute.manager [req-1d901890-2861-4d96-aa74-e0bc476eff61 req-06e24f1d-61c3-4731-ac29-76e16d2d9ba6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Detach interface failed, port_id=2e84cd2e-9f09-482a-9a06-adcfff2088d5, reason: Instance 2d97e325-a4e8-4595-9697-04219277474d could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct 14 05:36:19 np0005486808 nova_compute[259627]: 2025-10-14 09:36:19.435 2 DEBUG nova.network.neutron [-] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:36:19 np0005486808 nova_compute[259627]: 2025-10-14 09:36:19.451 2 INFO nova.compute.manager [-] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Took 1.08 seconds to deallocate network for instance.#033[00m
Oct 14 05:36:19 np0005486808 nova_compute[259627]: 2025-10-14 09:36:19.499 2 DEBUG oslo_concurrency.lockutils [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:36:19 np0005486808 nova_compute[259627]: 2025-10-14 09:36:19.500 2 DEBUG oslo_concurrency.lockutils [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:36:19 np0005486808 nova_compute[259627]: 2025-10-14 09:36:19.580 2 DEBUG oslo_concurrency.processutils [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:36:19 np0005486808 nova_compute[259627]: 2025-10-14 09:36:19.624 2 DEBUG nova.compute.manager [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Received event network-vif-unplugged-2e84cd2e-9f09-482a-9a06-adcfff2088d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:36:19 np0005486808 nova_compute[259627]: 2025-10-14 09:36:19.624 2 DEBUG oslo_concurrency.lockutils [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2d97e325-a4e8-4595-9697-04219277474d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:36:19 np0005486808 nova_compute[259627]: 2025-10-14 09:36:19.625 2 DEBUG oslo_concurrency.lockutils [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:36:19 np0005486808 nova_compute[259627]: 2025-10-14 09:36:19.625 2 DEBUG oslo_concurrency.lockutils [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:36:19 np0005486808 nova_compute[259627]: 2025-10-14 09:36:19.625 2 DEBUG nova.compute.manager [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] No waiting events found dispatching network-vif-unplugged-2e84cd2e-9f09-482a-9a06-adcfff2088d5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:36:19 np0005486808 nova_compute[259627]: 2025-10-14 09:36:19.625 2 WARNING nova.compute.manager [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Received unexpected event network-vif-unplugged-2e84cd2e-9f09-482a-9a06-adcfff2088d5 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:36:19 np0005486808 nova_compute[259627]: 2025-10-14 09:36:19.625 2 DEBUG nova.compute.manager [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Received event network-vif-plugged-2e84cd2e-9f09-482a-9a06-adcfff2088d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:36:19 np0005486808 nova_compute[259627]: 2025-10-14 09:36:19.625 2 DEBUG oslo_concurrency.lockutils [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2d97e325-a4e8-4595-9697-04219277474d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:36:19 np0005486808 nova_compute[259627]: 2025-10-14 09:36:19.626 2 DEBUG oslo_concurrency.lockutils [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:36:19 np0005486808 nova_compute[259627]: 2025-10-14 09:36:19.626 2 DEBUG oslo_concurrency.lockutils [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:36:19 np0005486808 nova_compute[259627]: 2025-10-14 09:36:19.626 2 DEBUG nova.compute.manager [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] No waiting events found dispatching network-vif-plugged-2e84cd2e-9f09-482a-9a06-adcfff2088d5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:36:19 np0005486808 nova_compute[259627]: 2025-10-14 09:36:19.626 2 WARNING nova.compute.manager [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Received unexpected event network-vif-plugged-2e84cd2e-9f09-482a-9a06-adcfff2088d5 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:36:19 np0005486808 nova_compute[259627]: 2025-10-14 09:36:19.626 2 DEBUG nova.compute.manager [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Received event network-vif-unplugged-da98587d-dc59-4e4f-bc3f-d3e70dafd21e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:36:19 np0005486808 nova_compute[259627]: 2025-10-14 09:36:19.627 2 DEBUG oslo_concurrency.lockutils [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2d97e325-a4e8-4595-9697-04219277474d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:36:19 np0005486808 nova_compute[259627]: 2025-10-14 09:36:19.627 2 DEBUG oslo_concurrency.lockutils [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:36:19 np0005486808 nova_compute[259627]: 2025-10-14 09:36:19.627 2 DEBUG oslo_concurrency.lockutils [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:36:19 np0005486808 nova_compute[259627]: 2025-10-14 09:36:19.627 2 DEBUG nova.compute.manager [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] No waiting events found dispatching network-vif-unplugged-da98587d-dc59-4e4f-bc3f-d3e70dafd21e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:36:19 np0005486808 nova_compute[259627]: 2025-10-14 09:36:19.627 2 WARNING nova.compute.manager [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Received unexpected event network-vif-unplugged-da98587d-dc59-4e4f-bc3f-d3e70dafd21e for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:36:19 np0005486808 nova_compute[259627]: 2025-10-14 09:36:19.628 2 DEBUG nova.compute.manager [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Received event network-vif-plugged-da98587d-dc59-4e4f-bc3f-d3e70dafd21e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:36:19 np0005486808 nova_compute[259627]: 2025-10-14 09:36:19.628 2 DEBUG oslo_concurrency.lockutils [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "2d97e325-a4e8-4595-9697-04219277474d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:36:19 np0005486808 nova_compute[259627]: 2025-10-14 09:36:19.628 2 DEBUG oslo_concurrency.lockutils [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:36:19 np0005486808 nova_compute[259627]: 2025-10-14 09:36:19.628 2 DEBUG oslo_concurrency.lockutils [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:36:19 np0005486808 nova_compute[259627]: 2025-10-14 09:36:19.628 2 DEBUG nova.compute.manager [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] No waiting events found dispatching network-vif-plugged-da98587d-dc59-4e4f-bc3f-d3e70dafd21e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:36:19 np0005486808 nova_compute[259627]: 2025-10-14 09:36:19.628 2 WARNING nova.compute.manager [req-4cb76bd6-eda8-435a-8b42-3d31eaac8115 req-17b5e2e0-c3f5-4448-be09-d78ab197d6d6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Received unexpected event network-vif-plugged-da98587d-dc59-4e4f-bc3f-d3e70dafd21e for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:36:20 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:36:20 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3402120357' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:36:20 np0005486808 nova_compute[259627]: 2025-10-14 09:36:20.081 2 DEBUG oslo_concurrency.processutils [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:36:20 np0005486808 nova_compute[259627]: 2025-10-14 09:36:20.086 2 DEBUG nova.compute.provider_tree [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:36:20 np0005486808 nova_compute[259627]: 2025-10-14 09:36:20.104 2 DEBUG nova.scheduler.client.report [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:36:20 np0005486808 nova_compute[259627]: 2025-10-14 09:36:20.132 2 DEBUG oslo_concurrency.lockutils [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:36:20 np0005486808 nova_compute[259627]: 2025-10-14 09:36:20.174 2 INFO nova.scheduler.client.report [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Deleted allocations for instance 2d97e325-a4e8-4595-9697-04219277474d#033[00m
Oct 14 05:36:20 np0005486808 nova_compute[259627]: 2025-10-14 09:36:20.239 2 DEBUG oslo_concurrency.lockutils [None req-945396eb-54af-44db-938a-94fa9c465c1f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "2d97e325-a4e8-4595-9697-04219277474d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:36:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2496: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 253 KiB/s rd, 173 KiB/s wr, 62 op/s
Oct 14 05:36:21 np0005486808 nova_compute[259627]: 2025-10-14 09:36:21.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:21 np0005486808 nova_compute[259627]: 2025-10-14 09:36:21.236 2 DEBUG nova.compute.manager [req-79d85022-897e-408a-bbad-94824d993d94 req-0e8d6cf2-463c-4f84-af22-8af669bcb41a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Received event network-vif-deleted-da98587d-dc59-4e4f-bc3f-d3e70dafd21e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:36:21 np0005486808 nova_compute[259627]: 2025-10-14 09:36:21.564 2 DEBUG oslo_concurrency.lockutils [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:36:21 np0005486808 nova_compute[259627]: 2025-10-14 09:36:21.565 2 DEBUG oslo_concurrency.lockutils [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:36:21 np0005486808 nova_compute[259627]: 2025-10-14 09:36:21.566 2 DEBUG oslo_concurrency.lockutils [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:36:21 np0005486808 nova_compute[259627]: 2025-10-14 09:36:21.566 2 DEBUG oslo_concurrency.lockutils [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:36:21 np0005486808 nova_compute[259627]: 2025-10-14 09:36:21.566 2 DEBUG oslo_concurrency.lockutils [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:36:21 np0005486808 nova_compute[259627]: 2025-10-14 09:36:21.567 2 INFO nova.compute.manager [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Terminating instance#033[00m
Oct 14 05:36:21 np0005486808 nova_compute[259627]: 2025-10-14 09:36:21.568 2 DEBUG nova.compute.manager [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:36:21 np0005486808 kernel: tap13d4f68b-23 (unregistering): left promiscuous mode
Oct 14 05:36:21 np0005486808 NetworkManager[44885]: <info>  [1760434581.6211] device (tap13d4f68b-23): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:36:21 np0005486808 ovn_controller[152662]: 2025-10-14T09:36:21Z|01589|binding|INFO|Releasing lport 13d4f68b-234a-4c46-9e1d-79f28a907bf2 from this chassis (sb_readonly=0)
Oct 14 05:36:21 np0005486808 ovn_controller[152662]: 2025-10-14T09:36:21Z|01590|binding|INFO|Setting lport 13d4f68b-234a-4c46-9e1d-79f28a907bf2 down in Southbound
Oct 14 05:36:21 np0005486808 nova_compute[259627]: 2025-10-14 09:36:21.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:21 np0005486808 ovn_controller[152662]: 2025-10-14T09:36:21Z|01591|binding|INFO|Removing iface tap13d4f68b-23 ovn-installed in OVS
Oct 14 05:36:21 np0005486808 nova_compute[259627]: 2025-10-14 09:36:21.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:21.679 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2a:ad:d7 10.100.0.5'], port_security=['fa:16:3e:2a:ad:d7 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'c977bdc6-8dd7-4cb4-b50d-28e7313a16e8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9cb6ee52-3808-410f-9854-68ac8ffadab8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4ac22ee6-a60e-44c3-8db4-e0f5bf22a77e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=09c5c036-dfc9-4826-bc59-c008b28bd97a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=13d4f68b-234a-4c46-9e1d-79f28a907bf2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:36:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:21.680 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 13d4f68b-234a-4c46-9e1d-79f28a907bf2 in datapath 9cb6ee52-3808-410f-9854-68ac8ffadab8 unbound from our chassis#033[00m
Oct 14 05:36:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:21.681 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9cb6ee52-3808-410f-9854-68ac8ffadab8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:36:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:21.681 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[468b4dc9-4cb5-42e7-9ecd-03e5ef337998]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:36:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:21.682 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8 namespace which is not needed anymore#033[00m
Oct 14 05:36:21 np0005486808 kernel: tapb2457aed-ba (unregistering): left promiscuous mode
Oct 14 05:36:21 np0005486808 nova_compute[259627]: 2025-10-14 09:36:21.695 2 DEBUG nova.compute.manager [req-bf43376e-02ac-469f-8ba4-e96f43464ca0 req-75e58601-03b7-4539-92ea-1f0159454ab6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Received event network-changed-13d4f68b-234a-4c46-9e1d-79f28a907bf2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:36:21 np0005486808 nova_compute[259627]: 2025-10-14 09:36:21.695 2 DEBUG nova.compute.manager [req-bf43376e-02ac-469f-8ba4-e96f43464ca0 req-75e58601-03b7-4539-92ea-1f0159454ab6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Refreshing instance network info cache due to event network-changed-13d4f68b-234a-4c46-9e1d-79f28a907bf2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:36:21 np0005486808 nova_compute[259627]: 2025-10-14 09:36:21.695 2 DEBUG oslo_concurrency.lockutils [req-bf43376e-02ac-469f-8ba4-e96f43464ca0 req-75e58601-03b7-4539-92ea-1f0159454ab6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:36:21 np0005486808 nova_compute[259627]: 2025-10-14 09:36:21.696 2 DEBUG oslo_concurrency.lockutils [req-bf43376e-02ac-469f-8ba4-e96f43464ca0 req-75e58601-03b7-4539-92ea-1f0159454ab6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:36:21 np0005486808 nova_compute[259627]: 2025-10-14 09:36:21.696 2 DEBUG nova.network.neutron [req-bf43376e-02ac-469f-8ba4-e96f43464ca0 req-75e58601-03b7-4539-92ea-1f0159454ab6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Refreshing network info cache for port 13d4f68b-234a-4c46-9e1d-79f28a907bf2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:36:21 np0005486808 NetworkManager[44885]: <info>  [1760434581.7013] device (tapb2457aed-ba): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:36:21 np0005486808 nova_compute[259627]: 2025-10-14 09:36:21.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:21 np0005486808 ovn_controller[152662]: 2025-10-14T09:36:21Z|01592|binding|INFO|Releasing lport b2457aed-ba7c-4d69-b93d-9f4c98e456b2 from this chassis (sb_readonly=0)
Oct 14 05:36:21 np0005486808 ovn_controller[152662]: 2025-10-14T09:36:21Z|01593|binding|INFO|Setting lport b2457aed-ba7c-4d69-b93d-9f4c98e456b2 down in Southbound
Oct 14 05:36:21 np0005486808 ovn_controller[152662]: 2025-10-14T09:36:21Z|01594|binding|INFO|Removing iface tapb2457aed-ba ovn-installed in OVS
Oct 14 05:36:21 np0005486808 nova_compute[259627]: 2025-10-14 09:36:21.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:21.718 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4e:2d:2e 2001:db8:0:1:f816:3eff:fe4e:2d2e 2001:db8::f816:3eff:fe4e:2d2e'], port_security=['fa:16:3e:4e:2d:2e 2001:db8:0:1:f816:3eff:fe4e:2d2e 2001:db8::f816:3eff:fe4e:2d2e'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe4e:2d2e/64 2001:db8::f816:3eff:fe4e:2d2e/64', 'neutron:device_id': 'c977bdc6-8dd7-4cb4-b50d-28e7313a16e8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0d3b36cd-f345-4ba4-8ea7-29b299ab0543', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4ac22ee6-a60e-44c3-8db4-e0f5bf22a77e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=612e2e91-84a5-412c-996c-976c512895aa, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=b2457aed-ba7c-4d69-b93d-9f4c98e456b2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:36:21 np0005486808 nova_compute[259627]: 2025-10-14 09:36:21.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:21 np0005486808 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d00000090.scope: Deactivated successfully.
Oct 14 05:36:21 np0005486808 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d00000090.scope: Consumed 15.565s CPU time.
Oct 14 05:36:21 np0005486808 systemd-machined[214636]: Machine qemu-177-instance-00000090 terminated.
Oct 14 05:36:21 np0005486808 NetworkManager[44885]: <info>  [1760434581.7837] manager: (tap13d4f68b-23): new Tun device (/org/freedesktop/NetworkManager/Devices/648)
Oct 14 05:36:21 np0005486808 NetworkManager[44885]: <info>  [1760434581.7943] manager: (tapb2457aed-ba): new Tun device (/org/freedesktop/NetworkManager/Devices/649)
Oct 14 05:36:21 np0005486808 nova_compute[259627]: 2025-10-14 09:36:21.808 2 INFO nova.virt.libvirt.driver [-] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Instance destroyed successfully.#033[00m
Oct 14 05:36:21 np0005486808 nova_compute[259627]: 2025-10-14 09:36:21.808 2 DEBUG nova.objects.instance [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'resources' on Instance uuid c977bdc6-8dd7-4cb4-b50d-28e7313a16e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:36:21 np0005486808 neutron-haproxy-ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8[408466]: [NOTICE]   (408470) : haproxy version is 2.8.14-c23fe91
Oct 14 05:36:21 np0005486808 neutron-haproxy-ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8[408466]: [NOTICE]   (408470) : path to executable is /usr/sbin/haproxy
Oct 14 05:36:21 np0005486808 neutron-haproxy-ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8[408466]: [WARNING]  (408470) : Exiting Master process...
Oct 14 05:36:21 np0005486808 neutron-haproxy-ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8[408466]: [WARNING]  (408470) : Exiting Master process...
Oct 14 05:36:21 np0005486808 neutron-haproxy-ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8[408466]: [ALERT]    (408470) : Current worker (408472) exited with code 143 (Terminated)
Oct 14 05:36:21 np0005486808 neutron-haproxy-ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8[408466]: [WARNING]  (408470) : All workers exited. Exiting... (0)
Oct 14 05:36:21 np0005486808 systemd[1]: libpod-704da8c4514d130efa0ad985892e55a6f08b5db9e87a4dff1f62fbc56fa12323.scope: Deactivated successfully.
Oct 14 05:36:21 np0005486808 nova_compute[259627]: 2025-10-14 09:36:21.825 2 DEBUG nova.virt.libvirt.vif [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:35:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1125974325',display_name='tempest-TestGettingAddress-server-1125974325',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1125974325',id=144,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCOKJ7HNGwxvkNIllbfk0BJx0lRxRvnircYlLzsMnTrFAS1BvWK624+5Xtsv/flZHhMPVZEwSoakCSfohmdiqqoxJwqvVYzYcxThKbDWthm7NoIlGg8aDS1B3nhZUeum7w==',key_name='tempest-TestGettingAddress-1328370159',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:35:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-55lasja0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:35:16Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=c977bdc6-8dd7-4cb4-b50d-28e7313a16e8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "address": "fa:16:3e:2a:ad:d7", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13d4f68b-23", "ovs_interfaceid": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:36:21 np0005486808 nova_compute[259627]: 2025-10-14 09:36:21.826 2 DEBUG nova.network.os_vif_util [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "address": "fa:16:3e:2a:ad:d7", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13d4f68b-23", "ovs_interfaceid": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:36:21 np0005486808 podman[410288]: 2025-10-14 09:36:21.826335307 +0000 UTC m=+0.050106961 container died 704da8c4514d130efa0ad985892e55a6f08b5db9e87a4dff1f62fbc56fa12323 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 14 05:36:21 np0005486808 nova_compute[259627]: 2025-10-14 09:36:21.826 2 DEBUG nova.network.os_vif_util [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2a:ad:d7,bridge_name='br-int',has_traffic_filtering=True,id=13d4f68b-234a-4c46-9e1d-79f28a907bf2,network=Network(9cb6ee52-3808-410f-9854-68ac8ffadab8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13d4f68b-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:36:21 np0005486808 nova_compute[259627]: 2025-10-14 09:36:21.827 2 DEBUG os_vif [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2a:ad:d7,bridge_name='br-int',has_traffic_filtering=True,id=13d4f68b-234a-4c46-9e1d-79f28a907bf2,network=Network(9cb6ee52-3808-410f-9854-68ac8ffadab8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13d4f68b-23') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:36:21 np0005486808 nova_compute[259627]: 2025-10-14 09:36:21.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:21 np0005486808 nova_compute[259627]: 2025-10-14 09:36:21.828 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13d4f68b-23, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:36:21 np0005486808 nova_compute[259627]: 2025-10-14 09:36:21.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:21 np0005486808 nova_compute[259627]: 2025-10-14 09:36:21.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:36:21 np0005486808 nova_compute[259627]: 2025-10-14 09:36:21.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:21 np0005486808 nova_compute[259627]: 2025-10-14 09:36:21.835 2 INFO os_vif [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2a:ad:d7,bridge_name='br-int',has_traffic_filtering=True,id=13d4f68b-234a-4c46-9e1d-79f28a907bf2,network=Network(9cb6ee52-3808-410f-9854-68ac8ffadab8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13d4f68b-23')#033[00m
Oct 14 05:36:21 np0005486808 nova_compute[259627]: 2025-10-14 09:36:21.836 2 DEBUG nova.virt.libvirt.vif [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:35:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1125974325',display_name='tempest-TestGettingAddress-server-1125974325',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1125974325',id=144,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCOKJ7HNGwxvkNIllbfk0BJx0lRxRvnircYlLzsMnTrFAS1BvWK624+5Xtsv/flZHhMPVZEwSoakCSfohmdiqqoxJwqvVYzYcxThKbDWthm7NoIlGg8aDS1B3nhZUeum7w==',key_name='tempest-TestGettingAddress-1328370159',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:35:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-55lasja0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:35:16Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=c977bdc6-8dd7-4cb4-b50d-28e7313a16e8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "address": "fa:16:3e:4e:2d:2e", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4e:2d2e", 
"type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2457aed-ba", "ovs_interfaceid": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:36:21 np0005486808 nova_compute[259627]: 2025-10-14 09:36:21.836 2 DEBUG nova.network.os_vif_util [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "address": "fa:16:3e:4e:2d:2e", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2457aed-ba", "ovs_interfaceid": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:36:21 np0005486808 nova_compute[259627]: 2025-10-14 09:36:21.837 2 DEBUG nova.network.os_vif_util [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4e:2d:2e,bridge_name='br-int',has_traffic_filtering=True,id=b2457aed-ba7c-4d69-b93d-9f4c98e456b2,network=Network(0d3b36cd-f345-4ba4-8ea7-29b299ab0543),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2457aed-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:36:21 np0005486808 nova_compute[259627]: 2025-10-14 09:36:21.837 2 DEBUG os_vif [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4e:2d:2e,bridge_name='br-int',has_traffic_filtering=True,id=b2457aed-ba7c-4d69-b93d-9f4c98e456b2,network=Network(0d3b36cd-f345-4ba4-8ea7-29b299ab0543),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2457aed-ba') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:36:21 np0005486808 nova_compute[259627]: 2025-10-14 09:36:21.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:21 np0005486808 nova_compute[259627]: 2025-10-14 09:36:21.838 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb2457aed-ba, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:36:21 np0005486808 nova_compute[259627]: 2025-10-14 09:36:21.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:21 np0005486808 nova_compute[259627]: 2025-10-14 09:36:21.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:21 np0005486808 nova_compute[259627]: 2025-10-14 09:36:21.841 2 INFO os_vif [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4e:2d:2e,bridge_name='br-int',has_traffic_filtering=True,id=b2457aed-ba7c-4d69-b93d-9f4c98e456b2,network=Network(0d3b36cd-f345-4ba4-8ea7-29b299ab0543),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2457aed-ba')#033[00m
Oct 14 05:36:21 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-704da8c4514d130efa0ad985892e55a6f08b5db9e87a4dff1f62fbc56fa12323-userdata-shm.mount: Deactivated successfully.
Oct 14 05:36:21 np0005486808 systemd[1]: var-lib-containers-storage-overlay-bfd38037ad8c5df564412ddc4d5b130a78e747af4052f1d4f025faaa569ef1ba-merged.mount: Deactivated successfully.
Oct 14 05:36:21 np0005486808 podman[410288]: 2025-10-14 09:36:21.875839962 +0000 UTC m=+0.099611626 container cleanup 704da8c4514d130efa0ad985892e55a6f08b5db9e87a4dff1f62fbc56fa12323 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 05:36:21 np0005486808 systemd[1]: libpod-conmon-704da8c4514d130efa0ad985892e55a6f08b5db9e87a4dff1f62fbc56fa12323.scope: Deactivated successfully.
Oct 14 05:36:21 np0005486808 podman[410360]: 2025-10-14 09:36:21.953064797 +0000 UTC m=+0.051648888 container remove 704da8c4514d130efa0ad985892e55a6f08b5db9e87a4dff1f62fbc56fa12323 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 14 05:36:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:21.959 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0f4f047e-ea4c-4db8-b053-36ba4acc9d0c]: (4, ('Tue Oct 14 09:36:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8 (704da8c4514d130efa0ad985892e55a6f08b5db9e87a4dff1f62fbc56fa12323)\n704da8c4514d130efa0ad985892e55a6f08b5db9e87a4dff1f62fbc56fa12323\nTue Oct 14 09:36:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8 (704da8c4514d130efa0ad985892e55a6f08b5db9e87a4dff1f62fbc56fa12323)\n704da8c4514d130efa0ad985892e55a6f08b5db9e87a4dff1f62fbc56fa12323\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:36:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:21.961 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2aca3595-ba31-44a6-9e97-9969d3d2df65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:36:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:21.962 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9cb6ee52-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:36:21 np0005486808 nova_compute[259627]: 2025-10-14 09:36:21.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:21 np0005486808 kernel: tap9cb6ee52-30: left promiscuous mode
Oct 14 05:36:21 np0005486808 nova_compute[259627]: 2025-10-14 09:36:21.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:21.968 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[82f6f6b4-5f42-408e-9ddb-2d8203eb02a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:36:21 np0005486808 nova_compute[259627]: 2025-10-14 09:36:21.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:21.992 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e10d8e7f-d2cd-405a-b58a-6bd42e78ff03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:36:21 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:21.993 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fffb9626-af05-4d07-bfba-3aedf7baeec8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:36:22 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:22.012 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0bc44ece-62c4-44c4-aeed-1dffdaef035a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 829409, 'reachable_time': 43953, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 410377, 'error': None, 'target': 'ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:36:22 np0005486808 systemd[1]: run-netns-ovnmeta\x2d9cb6ee52\x2d3808\x2d410f\x2d9854\x2d68ac8ffadab8.mount: Deactivated successfully.
Oct 14 05:36:22 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:22.018 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9cb6ee52-3808-410f-9854-68ac8ffadab8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:36:22 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:22.018 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[5cc4669b-d808-40e8-92ba-202109b75f16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:36:22 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:22.019 162547 INFO neutron.agent.ovn.metadata.agent [-] Port b2457aed-ba7c-4d69-b93d-9f4c98e456b2 in datapath 0d3b36cd-f345-4ba4-8ea7-29b299ab0543 unbound from our chassis#033[00m
Oct 14 05:36:22 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:22.020 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0d3b36cd-f345-4ba4-8ea7-29b299ab0543, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:36:22 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:22.020 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[87116647-ee63-4485-86a6-650ebdd7bbb0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:36:22 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:22.021 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543 namespace which is not needed anymore#033[00m
Oct 14 05:36:22 np0005486808 podman[410376]: 2025-10-14 09:36:22.075345668 +0000 UTC m=+0.074879939 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, managed_by=edpm_ansible)
Oct 14 05:36:22 np0005486808 podman[410373]: 2025-10-14 09:36:22.08399028 +0000 UTC m=+0.085098799 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:36:22 np0005486808 neutron-haproxy-ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543[408540]: [NOTICE]   (408544) : haproxy version is 2.8.14-c23fe91
Oct 14 05:36:22 np0005486808 neutron-haproxy-ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543[408540]: [NOTICE]   (408544) : path to executable is /usr/sbin/haproxy
Oct 14 05:36:22 np0005486808 neutron-haproxy-ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543[408540]: [WARNING]  (408544) : Exiting Master process...
Oct 14 05:36:22 np0005486808 neutron-haproxy-ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543[408540]: [WARNING]  (408544) : Exiting Master process...
Oct 14 05:36:22 np0005486808 neutron-haproxy-ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543[408540]: [ALERT]    (408544) : Current worker (408546) exited with code 143 (Terminated)
Oct 14 05:36:22 np0005486808 neutron-haproxy-ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543[408540]: [WARNING]  (408544) : All workers exited. Exiting... (0)
Oct 14 05:36:22 np0005486808 systemd[1]: libpod-29e50ba6a898c594ac8fb8f1b369d4287b1cde8bc04d5147c996487d6e5f67aa.scope: Deactivated successfully.
Oct 14 05:36:22 np0005486808 podman[410429]: 2025-10-14 09:36:22.159155085 +0000 UTC m=+0.046125213 container died 29e50ba6a898c594ac8fb8f1b369d4287b1cde8bc04d5147c996487d6e5f67aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:36:22 np0005486808 systemd[1]: var-lib-containers-storage-overlay-71f24b8bd883a1a9d5a46c8f33163dab05be4377ce726a9e20ef9f68b79d27d6-merged.mount: Deactivated successfully.
Oct 14 05:36:22 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-29e50ba6a898c594ac8fb8f1b369d4287b1cde8bc04d5147c996487d6e5f67aa-userdata-shm.mount: Deactivated successfully.
Oct 14 05:36:22 np0005486808 podman[410429]: 2025-10-14 09:36:22.201424532 +0000 UTC m=+0.088394650 container cleanup 29e50ba6a898c594ac8fb8f1b369d4287b1cde8bc04d5147c996487d6e5f67aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 05:36:22 np0005486808 systemd[1]: libpod-conmon-29e50ba6a898c594ac8fb8f1b369d4287b1cde8bc04d5147c996487d6e5f67aa.scope: Deactivated successfully.
Oct 14 05:36:22 np0005486808 nova_compute[259627]: 2025-10-14 09:36:22.268 2 INFO nova.virt.libvirt.driver [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Deleting instance files /var/lib/nova/instances/c977bdc6-8dd7-4cb4-b50d-28e7313a16e8_del#033[00m
Oct 14 05:36:22 np0005486808 nova_compute[259627]: 2025-10-14 09:36:22.269 2 INFO nova.virt.libvirt.driver [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Deletion of /var/lib/nova/instances/c977bdc6-8dd7-4cb4-b50d-28e7313a16e8_del complete#033[00m
Oct 14 05:36:22 np0005486808 podman[410461]: 2025-10-14 09:36:22.280087242 +0000 UTC m=+0.043975030 container remove 29e50ba6a898c594ac8fb8f1b369d4287b1cde8bc04d5147c996487d6e5f67aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:36:22 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:22.286 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[df6eac2e-a6e5-4b99-9187-ebbdf632ba72]: (4, ('Tue Oct 14 09:36:22 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543 (29e50ba6a898c594ac8fb8f1b369d4287b1cde8bc04d5147c996487d6e5f67aa)\n29e50ba6a898c594ac8fb8f1b369d4287b1cde8bc04d5147c996487d6e5f67aa\nTue Oct 14 09:36:22 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543 (29e50ba6a898c594ac8fb8f1b369d4287b1cde8bc04d5147c996487d6e5f67aa)\n29e50ba6a898c594ac8fb8f1b369d4287b1cde8bc04d5147c996487d6e5f67aa\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:36:22 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:22.288 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c3a47952-392f-4752-a288-20358db961f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:36:22 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:22.289 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0d3b36cd-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:36:22 np0005486808 nova_compute[259627]: 2025-10-14 09:36:22.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:22 np0005486808 kernel: tap0d3b36cd-f0: left promiscuous mode
Oct 14 05:36:22 np0005486808 nova_compute[259627]: 2025-10-14 09:36:22.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:22 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:22.308 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4c9131ad-fa65-445b-8248-01e6583a505e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:36:22 np0005486808 nova_compute[259627]: 2025-10-14 09:36:22.331 2 INFO nova.compute.manager [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Took 0.76 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:36:22 np0005486808 nova_compute[259627]: 2025-10-14 09:36:22.331 2 DEBUG oslo.service.loopingcall [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:36:22 np0005486808 nova_compute[259627]: 2025-10-14 09:36:22.331 2 DEBUG nova.compute.manager [-] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:36:22 np0005486808 nova_compute[259627]: 2025-10-14 09:36:22.332 2 DEBUG nova.network.neutron [-] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:36:22 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:22.341 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[665752fa-e110-4598-bc35-fcf5e3a664c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:36:22 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:22.343 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6eeec789-4ad4-4c03-8817-a62ff91d5336]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:36:22 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:22.362 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2c0ebc2d-7a31-4894-bf00-fe6feca143e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 829518, 'reachable_time': 36653, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 410477, 'error': None, 'target': 'ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:36:22 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:22.365 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0d3b36cd-f345-4ba4-8ea7-29b299ab0543 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:36:22 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:36:22.365 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[2c931654-22f8-4b89-9051-9790eef935fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:36:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2497: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 16 KiB/s wr, 30 op/s
Oct 14 05:36:22 np0005486808 systemd[1]: run-netns-ovnmeta\x2d0d3b36cd\x2df345\x2d4ba4\x2d8ea7\x2d29b299ab0543.mount: Deactivated successfully.
Oct 14 05:36:22 np0005486808 nova_compute[259627]: 2025-10-14 09:36:22.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:36:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:36:23 np0005486808 nova_compute[259627]: 2025-10-14 09:36:23.387 2 DEBUG nova.compute.manager [req-d2c8aca3-6970-4027-807e-32de9a5e08a6 req-a0afa7f8-c326-4e84-b655-412e9bda1c7b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Received event network-vif-unplugged-13d4f68b-234a-4c46-9e1d-79f28a907bf2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:36:23 np0005486808 nova_compute[259627]: 2025-10-14 09:36:23.388 2 DEBUG oslo_concurrency.lockutils [req-d2c8aca3-6970-4027-807e-32de9a5e08a6 req-a0afa7f8-c326-4e84-b655-412e9bda1c7b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:36:23 np0005486808 nova_compute[259627]: 2025-10-14 09:36:23.388 2 DEBUG oslo_concurrency.lockutils [req-d2c8aca3-6970-4027-807e-32de9a5e08a6 req-a0afa7f8-c326-4e84-b655-412e9bda1c7b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:36:23 np0005486808 nova_compute[259627]: 2025-10-14 09:36:23.389 2 DEBUG oslo_concurrency.lockutils [req-d2c8aca3-6970-4027-807e-32de9a5e08a6 req-a0afa7f8-c326-4e84-b655-412e9bda1c7b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:36:23 np0005486808 nova_compute[259627]: 2025-10-14 09:36:23.389 2 DEBUG nova.compute.manager [req-d2c8aca3-6970-4027-807e-32de9a5e08a6 req-a0afa7f8-c326-4e84-b655-412e9bda1c7b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] No waiting events found dispatching network-vif-unplugged-13d4f68b-234a-4c46-9e1d-79f28a907bf2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:36:23 np0005486808 nova_compute[259627]: 2025-10-14 09:36:23.390 2 DEBUG nova.compute.manager [req-d2c8aca3-6970-4027-807e-32de9a5e08a6 req-a0afa7f8-c326-4e84-b655-412e9bda1c7b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Received event network-vif-unplugged-13d4f68b-234a-4c46-9e1d-79f28a907bf2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:36:23 np0005486808 nova_compute[259627]: 2025-10-14 09:36:23.390 2 DEBUG nova.compute.manager [req-d2c8aca3-6970-4027-807e-32de9a5e08a6 req-a0afa7f8-c326-4e84-b655-412e9bda1c7b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Received event network-vif-plugged-13d4f68b-234a-4c46-9e1d-79f28a907bf2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:36:23 np0005486808 nova_compute[259627]: 2025-10-14 09:36:23.390 2 DEBUG oslo_concurrency.lockutils [req-d2c8aca3-6970-4027-807e-32de9a5e08a6 req-a0afa7f8-c326-4e84-b655-412e9bda1c7b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:36:23 np0005486808 nova_compute[259627]: 2025-10-14 09:36:23.391 2 DEBUG oslo_concurrency.lockutils [req-d2c8aca3-6970-4027-807e-32de9a5e08a6 req-a0afa7f8-c326-4e84-b655-412e9bda1c7b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:36:23 np0005486808 nova_compute[259627]: 2025-10-14 09:36:23.391 2 DEBUG oslo_concurrency.lockutils [req-d2c8aca3-6970-4027-807e-32de9a5e08a6 req-a0afa7f8-c326-4e84-b655-412e9bda1c7b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:36:23 np0005486808 nova_compute[259627]: 2025-10-14 09:36:23.392 2 DEBUG nova.compute.manager [req-d2c8aca3-6970-4027-807e-32de9a5e08a6 req-a0afa7f8-c326-4e84-b655-412e9bda1c7b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] No waiting events found dispatching network-vif-plugged-13d4f68b-234a-4c46-9e1d-79f28a907bf2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:36:23 np0005486808 nova_compute[259627]: 2025-10-14 09:36:23.392 2 WARNING nova.compute.manager [req-d2c8aca3-6970-4027-807e-32de9a5e08a6 req-a0afa7f8-c326-4e84-b655-412e9bda1c7b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Received unexpected event network-vif-plugged-13d4f68b-234a-4c46-9e1d-79f28a907bf2 for instance with vm_state active and task_state deleting.#033[00m
Oct 14 05:36:23 np0005486808 nova_compute[259627]: 2025-10-14 09:36:23.818 2 DEBUG nova.compute.manager [req-ba8e98ab-f265-41ed-8be5-2daf16d59506 req-3f188194-1d2a-4acb-a4ce-5afa117f8f06 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Received event network-vif-unplugged-b2457aed-ba7c-4d69-b93d-9f4c98e456b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:36:23 np0005486808 nova_compute[259627]: 2025-10-14 09:36:23.819 2 DEBUG oslo_concurrency.lockutils [req-ba8e98ab-f265-41ed-8be5-2daf16d59506 req-3f188194-1d2a-4acb-a4ce-5afa117f8f06 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:36:23 np0005486808 nova_compute[259627]: 2025-10-14 09:36:23.819 2 DEBUG oslo_concurrency.lockutils [req-ba8e98ab-f265-41ed-8be5-2daf16d59506 req-3f188194-1d2a-4acb-a4ce-5afa117f8f06 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:36:23 np0005486808 nova_compute[259627]: 2025-10-14 09:36:23.820 2 DEBUG oslo_concurrency.lockutils [req-ba8e98ab-f265-41ed-8be5-2daf16d59506 req-3f188194-1d2a-4acb-a4ce-5afa117f8f06 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:36:23 np0005486808 nova_compute[259627]: 2025-10-14 09:36:23.820 2 DEBUG nova.compute.manager [req-ba8e98ab-f265-41ed-8be5-2daf16d59506 req-3f188194-1d2a-4acb-a4ce-5afa117f8f06 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] No waiting events found dispatching network-vif-unplugged-b2457aed-ba7c-4d69-b93d-9f4c98e456b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:36:23 np0005486808 nova_compute[259627]: 2025-10-14 09:36:23.821 2 DEBUG nova.compute.manager [req-ba8e98ab-f265-41ed-8be5-2daf16d59506 req-3f188194-1d2a-4acb-a4ce-5afa117f8f06 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Received event network-vif-unplugged-b2457aed-ba7c-4d69-b93d-9f4c98e456b2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:36:23 np0005486808 nova_compute[259627]: 2025-10-14 09:36:23.821 2 DEBUG nova.compute.manager [req-ba8e98ab-f265-41ed-8be5-2daf16d59506 req-3f188194-1d2a-4acb-a4ce-5afa117f8f06 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Received event network-vif-plugged-b2457aed-ba7c-4d69-b93d-9f4c98e456b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:36:23 np0005486808 nova_compute[259627]: 2025-10-14 09:36:23.821 2 DEBUG oslo_concurrency.lockutils [req-ba8e98ab-f265-41ed-8be5-2daf16d59506 req-3f188194-1d2a-4acb-a4ce-5afa117f8f06 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:36:23 np0005486808 nova_compute[259627]: 2025-10-14 09:36:23.822 2 DEBUG oslo_concurrency.lockutils [req-ba8e98ab-f265-41ed-8be5-2daf16d59506 req-3f188194-1d2a-4acb-a4ce-5afa117f8f06 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:36:23 np0005486808 nova_compute[259627]: 2025-10-14 09:36:23.822 2 DEBUG oslo_concurrency.lockutils [req-ba8e98ab-f265-41ed-8be5-2daf16d59506 req-3f188194-1d2a-4acb-a4ce-5afa117f8f06 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:36:23 np0005486808 nova_compute[259627]: 2025-10-14 09:36:23.823 2 DEBUG nova.compute.manager [req-ba8e98ab-f265-41ed-8be5-2daf16d59506 req-3f188194-1d2a-4acb-a4ce-5afa117f8f06 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] No waiting events found dispatching network-vif-plugged-b2457aed-ba7c-4d69-b93d-9f4c98e456b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:36:23 np0005486808 nova_compute[259627]: 2025-10-14 09:36:23.823 2 WARNING nova.compute.manager [req-ba8e98ab-f265-41ed-8be5-2daf16d59506 req-3f188194-1d2a-4acb-a4ce-5afa117f8f06 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Received unexpected event network-vif-plugged-b2457aed-ba7c-4d69-b93d-9f4c98e456b2 for instance with vm_state active and task_state deleting.#033[00m
Oct 14 05:36:24 np0005486808 nova_compute[259627]: 2025-10-14 09:36:24.107 2 DEBUG nova.network.neutron [req-bf43376e-02ac-469f-8ba4-e96f43464ca0 req-75e58601-03b7-4539-92ea-1f0159454ab6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Updated VIF entry in instance network info cache for port 13d4f68b-234a-4c46-9e1d-79f28a907bf2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:36:24 np0005486808 nova_compute[259627]: 2025-10-14 09:36:24.107 2 DEBUG nova.network.neutron [req-bf43376e-02ac-469f-8ba4-e96f43464ca0 req-75e58601-03b7-4539-92ea-1f0159454ab6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Updating instance_info_cache with network_info: [{"id": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "address": "fa:16:3e:2a:ad:d7", "network": {"id": "9cb6ee52-3808-410f-9854-68ac8ffadab8", "bridge": "br-int", "label": "tempest-network-smoke--1060169211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13d4f68b-23", "ovs_interfaceid": "13d4f68b-234a-4c46-9e1d-79f28a907bf2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "address": "fa:16:3e:4e:2d:2e", "network": {"id": "0d3b36cd-f345-4ba4-8ea7-29b299ab0543", "bridge": "br-int", "label": "tempest-network-smoke--617172189", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], 
"gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:2d2e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2457aed-ba", "ovs_interfaceid": "b2457aed-ba7c-4d69-b93d-9f4c98e456b2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:36:24 np0005486808 nova_compute[259627]: 2025-10-14 09:36:24.142 2 DEBUG oslo_concurrency.lockutils [req-bf43376e-02ac-469f-8ba4-e96f43464ca0 req-75e58601-03b7-4539-92ea-1f0159454ab6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:36:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2498: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 16 KiB/s wr, 30 op/s
Oct 14 05:36:24 np0005486808 nova_compute[259627]: 2025-10-14 09:36:24.676 2 DEBUG nova.network.neutron [-] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:36:24 np0005486808 nova_compute[259627]: 2025-10-14 09:36:24.698 2 INFO nova.compute.manager [-] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Took 2.37 seconds to deallocate network for instance.#033[00m
Oct 14 05:36:24 np0005486808 nova_compute[259627]: 2025-10-14 09:36:24.741 2 DEBUG oslo_concurrency.lockutils [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:36:24 np0005486808 nova_compute[259627]: 2025-10-14 09:36:24.743 2 DEBUG oslo_concurrency.lockutils [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:36:24 np0005486808 nova_compute[259627]: 2025-10-14 09:36:24.808 2 DEBUG oslo_concurrency.processutils [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:36:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:36:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1535339950' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:36:25 np0005486808 nova_compute[259627]: 2025-10-14 09:36:25.277 2 DEBUG oslo_concurrency.processutils [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:36:25 np0005486808 nova_compute[259627]: 2025-10-14 09:36:25.286 2 DEBUG nova.compute.provider_tree [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 05:36:25 np0005486808 nova_compute[259627]: 2025-10-14 09:36:25.303 2 DEBUG nova.scheduler.client.report [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 05:36:25 np0005486808 nova_compute[259627]: 2025-10-14 09:36:25.332 2 DEBUG oslo_concurrency.lockutils [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:36:25 np0005486808 nova_compute[259627]: 2025-10-14 09:36:25.371 2 INFO nova.scheduler.client.report [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Deleted allocations for instance c977bdc6-8dd7-4cb4-b50d-28e7313a16e8
Oct 14 05:36:25 np0005486808 nova_compute[259627]: 2025-10-14 09:36:25.451 2 DEBUG oslo_concurrency.lockutils [None req-34db282a-b281-4ee5-8bce-d7fb60bbffbc 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "c977bdc6-8dd7-4cb4-b50d-28e7313a16e8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.885s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:36:25 np0005486808 nova_compute[259627]: 2025-10-14 09:36:25.496 2 DEBUG nova.compute.manager [req-8ea42346-aa8d-4420-a237-ef1d7f1b792f req-d9f0beb4-0df9-4309-bf67-9da7569cdd5a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Received event network-vif-deleted-b2457aed-ba7c-4d69-b93d-9f4c98e456b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:36:25 np0005486808 nova_compute[259627]: 2025-10-14 09:36:25.497 2 DEBUG nova.compute.manager [req-8ea42346-aa8d-4420-a237-ef1d7f1b792f req-d9f0beb4-0df9-4309-bf67-9da7569cdd5a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Received event network-vif-deleted-13d4f68b-234a-4c46-9e1d-79f28a907bf2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:36:26 np0005486808 nova_compute[259627]: 2025-10-14 09:36:26.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:36:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2499: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 17 KiB/s wr, 58 op/s
Oct 14 05:36:26 np0005486808 nova_compute[259627]: 2025-10-14 09:36:26.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:36:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:36:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2500: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 7.1 KiB/s wr, 56 op/s
Oct 14 05:36:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2501: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 7.1 KiB/s wr, 56 op/s
Oct 14 05:36:31 np0005486808 nova_compute[259627]: 2025-10-14 09:36:31.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:36:31 np0005486808 nova_compute[259627]: 2025-10-14 09:36:31.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:36:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2502: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 05:36:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:36:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:36:32 np0005486808 nova_compute[259627]: 2025-10-14 09:36:32.792 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434577.790484, 2d97e325-a4e8-4595-9697-04219277474d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:36:32 np0005486808 nova_compute[259627]: 2025-10-14 09:36:32.793 2 INFO nova.compute.manager [-] [instance: 2d97e325-a4e8-4595-9697-04219277474d] VM Stopped (Lifecycle Event)
Oct 14 05:36:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:36:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:36:32 np0005486808 nova_compute[259627]: 2025-10-14 09:36:32.820 2 DEBUG nova.compute.manager [None req-aaefe0e8-d3c6-4410-8d31-2b3e7d5d25bc - - - - - -] [instance: 2d97e325-a4e8-4595-9697-04219277474d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:36:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:36:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:36:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:36:32
Oct 14 05:36:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:36:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:36:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'volumes', '.mgr', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'default.rgw.log', '.rgw.root']
Oct 14 05:36:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:36:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:36:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:36:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:36:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:36:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:36:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:36:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:36:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:36:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:36:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:36:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:36:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2503: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 05:36:35 np0005486808 nova_compute[259627]: 2025-10-14 09:36:35.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:36:35 np0005486808 nova_compute[259627]: 2025-10-14 09:36:35.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:36:35 np0005486808 podman[410502]: 2025-10-14 09:36:35.676163409 +0000 UTC m=+0.091211299 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Oct 14 05:36:35 np0005486808 podman[410501]: 2025-10-14 09:36:35.707212651 +0000 UTC m=+0.118990631 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251009)
Oct 14 05:36:36 np0005486808 nova_compute[259627]: 2025-10-14 09:36:36.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:36:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2504: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 05:36:36 np0005486808 nova_compute[259627]: 2025-10-14 09:36:36.806 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434581.8053603, c977bdc6-8dd7-4cb4-b50d-28e7313a16e8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:36:36 np0005486808 nova_compute[259627]: 2025-10-14 09:36:36.806 2 INFO nova.compute.manager [-] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] VM Stopped (Lifecycle Event)
Oct 14 05:36:36 np0005486808 nova_compute[259627]: 2025-10-14 09:36:36.836 2 DEBUG nova.compute.manager [None req-4684d29d-6b5c-4cf8-b203-4ea802e5823c - - - - - -] [instance: c977bdc6-8dd7-4cb4-b50d-28e7313a16e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:36:36 np0005486808 nova_compute[259627]: 2025-10-14 09:36:36.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:36:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:36:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2505: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:36:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2506: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:36:41 np0005486808 nova_compute[259627]: 2025-10-14 09:36:41.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:36:41 np0005486808 nova_compute[259627]: 2025-10-14 09:36:41.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:36:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2507: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:36:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:36:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:36:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:36:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:36:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:36:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 05:36:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:36:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:36:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:36:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:36:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:36:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 05:36:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:36:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:36:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:36:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:36:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:36:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:36:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:36:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:36:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:36:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:36:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:36:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:36:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2508: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:36:44 np0005486808 nova_compute[259627]: 2025-10-14 09:36:44.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:36:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:36:45 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:36:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:36:45 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:36:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:36:45 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:36:45 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev e61a499a-ffd0-4719-92a7-b57e9b4b3b6b does not exist
Oct 14 05:36:45 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 62e1145c-9c59-43d8-ba29-821ddac2839c does not exist
Oct 14 05:36:45 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 0ccef622-a27f-49f6-bfb3-c700302fa4d7 does not exist
Oct 14 05:36:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:36:45 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:36:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:36:45 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:36:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:36:45 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:36:45 np0005486808 podman[410820]: 2025-10-14 09:36:45.786204809 +0000 UTC m=+0.045756404 container create 68b0c4f0969782e1d0ef1324ca1289e9faefc6e7c4b3b3c4148fe7a64d854f90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:36:45 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:36:45 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:36:45 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:36:45 np0005486808 systemd[1]: Started libpod-conmon-68b0c4f0969782e1d0ef1324ca1289e9faefc6e7c4b3b3c4148fe7a64d854f90.scope.
Oct 14 05:36:45 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:36:45 np0005486808 podman[410820]: 2025-10-14 09:36:45.764986998 +0000 UTC m=+0.024538583 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:36:45 np0005486808 podman[410820]: 2025-10-14 09:36:45.87386672 +0000 UTC m=+0.133418305 container init 68b0c4f0969782e1d0ef1324ca1289e9faefc6e7c4b3b3c4148fe7a64d854f90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_darwin, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:36:45 np0005486808 podman[410820]: 2025-10-14 09:36:45.881725983 +0000 UTC m=+0.141277578 container start 68b0c4f0969782e1d0ef1324ca1289e9faefc6e7c4b3b3c4148fe7a64d854f90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 14 05:36:45 np0005486808 podman[410820]: 2025-10-14 09:36:45.886218603 +0000 UTC m=+0.145770168 container attach 68b0c4f0969782e1d0ef1324ca1289e9faefc6e7c4b3b3c4148fe7a64d854f90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_darwin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 14 05:36:45 np0005486808 upbeat_darwin[410836]: 167 167
Oct 14 05:36:45 np0005486808 podman[410820]: 2025-10-14 09:36:45.887758471 +0000 UTC m=+0.147310026 container died 68b0c4f0969782e1d0ef1324ca1289e9faefc6e7c4b3b3c4148fe7a64d854f90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:36:45 np0005486808 systemd[1]: libpod-68b0c4f0969782e1d0ef1324ca1289e9faefc6e7c4b3b3c4148fe7a64d854f90.scope: Deactivated successfully.
Oct 14 05:36:45 np0005486808 systemd[1]: var-lib-containers-storage-overlay-1f4664fcffb5ae50bc2d88c0d452134cb4c33968cd29ef93ce7d1e188f847990-merged.mount: Deactivated successfully.
Oct 14 05:36:45 np0005486808 podman[410820]: 2025-10-14 09:36:45.930774366 +0000 UTC m=+0.190325911 container remove 68b0c4f0969782e1d0ef1324ca1289e9faefc6e7c4b3b3c4148fe7a64d854f90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_darwin, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:36:45 np0005486808 systemd[1]: libpod-conmon-68b0c4f0969782e1d0ef1324ca1289e9faefc6e7c4b3b3c4148fe7a64d854f90.scope: Deactivated successfully.
Oct 14 05:36:46 np0005486808 podman[410862]: 2025-10-14 09:36:46.120274167 +0000 UTC m=+0.055476803 container create 5466f9057286b9c62a7f083034276787e5a19bfbc4d0050e3be3f1e00c4f42b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 14 05:36:46 np0005486808 systemd[1]: Started libpod-conmon-5466f9057286b9c62a7f083034276787e5a19bfbc4d0050e3be3f1e00c4f42b1.scope.
Oct 14 05:36:46 np0005486808 podman[410862]: 2025-10-14 09:36:46.089232535 +0000 UTC m=+0.024435171 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:36:46 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:36:46 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa6cc8efd20ae8aea72dee0af9e0efe60233ae0283c0f20693ec4a2654d6caa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:36:46 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa6cc8efd20ae8aea72dee0af9e0efe60233ae0283c0f20693ec4a2654d6caa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:36:46 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa6cc8efd20ae8aea72dee0af9e0efe60233ae0283c0f20693ec4a2654d6caa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:36:46 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa6cc8efd20ae8aea72dee0af9e0efe60233ae0283c0f20693ec4a2654d6caa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:36:46 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa6cc8efd20ae8aea72dee0af9e0efe60233ae0283c0f20693ec4a2654d6caa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:36:46 np0005486808 podman[410862]: 2025-10-14 09:36:46.219468911 +0000 UTC m=+0.154671547 container init 5466f9057286b9c62a7f083034276787e5a19bfbc4d0050e3be3f1e00c4f42b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haibt, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:36:46 np0005486808 nova_compute[259627]: 2025-10-14 09:36:46.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:36:46 np0005486808 podman[410862]: 2025-10-14 09:36:46.233764312 +0000 UTC m=+0.168966938 container start 5466f9057286b9c62a7f083034276787e5a19bfbc4d0050e3be3f1e00c4f42b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haibt, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 05:36:46 np0005486808 podman[410862]: 2025-10-14 09:36:46.238409646 +0000 UTC m=+0.173612332 container attach 5466f9057286b9c62a7f083034276787e5a19bfbc4d0050e3be3f1e00c4f42b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:36:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2509: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:36:46 np0005486808 nova_compute[259627]: 2025-10-14 09:36:46.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:36:47 np0005486808 cranky_haibt[410878]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:36:47 np0005486808 cranky_haibt[410878]: --> relative data size: 1.0
Oct 14 05:36:47 np0005486808 cranky_haibt[410878]: --> All data devices are unavailable
Oct 14 05:36:47 np0005486808 systemd[1]: libpod-5466f9057286b9c62a7f083034276787e5a19bfbc4d0050e3be3f1e00c4f42b1.scope: Deactivated successfully.
Oct 14 05:36:47 np0005486808 systemd[1]: libpod-5466f9057286b9c62a7f083034276787e5a19bfbc4d0050e3be3f1e00c4f42b1.scope: Consumed 1.119s CPU time.
Oct 14 05:36:47 np0005486808 podman[410862]: 2025-10-14 09:36:47.407481035 +0000 UTC m=+1.342683661 container died 5466f9057286b9c62a7f083034276787e5a19bfbc4d0050e3be3f1e00c4f42b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:36:47 np0005486808 systemd[1]: var-lib-containers-storage-overlay-baa6cc8efd20ae8aea72dee0af9e0efe60233ae0283c0f20693ec4a2654d6caa-merged.mount: Deactivated successfully.
Oct 14 05:36:47 np0005486808 podman[410862]: 2025-10-14 09:36:47.486986746 +0000 UTC m=+1.422189372 container remove 5466f9057286b9c62a7f083034276787e5a19bfbc4d0050e3be3f1e00c4f42b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:36:47 np0005486808 systemd[1]: libpod-conmon-5466f9057286b9c62a7f083034276787e5a19bfbc4d0050e3be3f1e00c4f42b1.scope: Deactivated successfully.
Oct 14 05:36:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:36:48 np0005486808 podman[411059]: 2025-10-14 09:36:48.344048988 +0000 UTC m=+0.072649633 container create d50cc513df4632cb4cba3f020e09879d327d3be3de5c027804f1d88b75ffdd68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:36:48 np0005486808 systemd[1]: Started libpod-conmon-d50cc513df4632cb4cba3f020e09879d327d3be3de5c027804f1d88b75ffdd68.scope.
Oct 14 05:36:48 np0005486808 podman[411059]: 2025-10-14 09:36:48.312277579 +0000 UTC m=+0.040878284 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:36:48 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:36:48 np0005486808 podman[411059]: 2025-10-14 09:36:48.445673902 +0000 UTC m=+0.174274587 container init d50cc513df4632cb4cba3f020e09879d327d3be3de5c027804f1d88b75ffdd68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goldwasser, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 14 05:36:48 np0005486808 podman[411059]: 2025-10-14 09:36:48.456957869 +0000 UTC m=+0.185558514 container start d50cc513df4632cb4cba3f020e09879d327d3be3de5c027804f1d88b75ffdd68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goldwasser, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:36:48 np0005486808 podman[411059]: 2025-10-14 09:36:48.461193233 +0000 UTC m=+0.189793878 container attach d50cc513df4632cb4cba3f020e09879d327d3be3de5c027804f1d88b75ffdd68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:36:48 np0005486808 hungry_goldwasser[411076]: 167 167
Oct 14 05:36:48 np0005486808 systemd[1]: libpod-d50cc513df4632cb4cba3f020e09879d327d3be3de5c027804f1d88b75ffdd68.scope: Deactivated successfully.
Oct 14 05:36:48 np0005486808 podman[411059]: 2025-10-14 09:36:48.465592471 +0000 UTC m=+0.194193116 container died d50cc513df4632cb4cba3f020e09879d327d3be3de5c027804f1d88b75ffdd68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goldwasser, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 05:36:48 np0005486808 systemd[1]: var-lib-containers-storage-overlay-bc5f90a5a36722ed3e3714c38517d1b3b433f398cd32fcadffe91e2e4c048934-merged.mount: Deactivated successfully.
Oct 14 05:36:48 np0005486808 podman[411059]: 2025-10-14 09:36:48.515312541 +0000 UTC m=+0.243913166 container remove d50cc513df4632cb4cba3f020e09879d327d3be3de5c027804f1d88b75ffdd68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 14 05:36:48 np0005486808 systemd[1]: libpod-conmon-d50cc513df4632cb4cba3f020e09879d327d3be3de5c027804f1d88b75ffdd68.scope: Deactivated successfully.
Oct 14 05:36:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2510: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:36:48 np0005486808 podman[411099]: 2025-10-14 09:36:48.782545389 +0000 UTC m=+0.078681802 container create 07314b37f791ab3616eeb338e1466b529de18a86b05b79f83186e09e01718560 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_agnesi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:36:48 np0005486808 podman[411099]: 2025-10-14 09:36:48.754259475 +0000 UTC m=+0.050395938 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:36:48 np0005486808 systemd[1]: Started libpod-conmon-07314b37f791ab3616eeb338e1466b529de18a86b05b79f83186e09e01718560.scope.
Oct 14 05:36:48 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:36:48 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4085f60892936ab2ec03c17ee06131b1161a899ab1e1fce0bd14f265e78a8572/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:36:48 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4085f60892936ab2ec03c17ee06131b1161a899ab1e1fce0bd14f265e78a8572/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:36:48 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4085f60892936ab2ec03c17ee06131b1161a899ab1e1fce0bd14f265e78a8572/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:36:48 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4085f60892936ab2ec03c17ee06131b1161a899ab1e1fce0bd14f265e78a8572/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:36:48 np0005486808 podman[411099]: 2025-10-14 09:36:48.908991042 +0000 UTC m=+0.205127435 container init 07314b37f791ab3616eeb338e1466b529de18a86b05b79f83186e09e01718560 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 14 05:36:48 np0005486808 podman[411099]: 2025-10-14 09:36:48.924466042 +0000 UTC m=+0.220602445 container start 07314b37f791ab3616eeb338e1466b529de18a86b05b79f83186e09e01718560 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_agnesi, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:36:48 np0005486808 podman[411099]: 2025-10-14 09:36:48.928691406 +0000 UTC m=+0.224827859 container attach 07314b37f791ab3616eeb338e1466b529de18a86b05b79f83186e09e01718560 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]: {
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:    "0": [
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:        {
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:            "devices": [
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:                "/dev/loop3"
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:            ],
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:            "lv_name": "ceph_lv0",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:            "lv_size": "21470642176",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:            "name": "ceph_lv0",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:            "tags": {
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:                "ceph.cluster_name": "ceph",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:                "ceph.crush_device_class": "",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:                "ceph.encrypted": "0",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:                "ceph.osd_id": "0",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:                "ceph.type": "block",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:                "ceph.vdo": "0"
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:            },
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:            "type": "block",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:            "vg_name": "ceph_vg0"
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:        }
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:    ],
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:    "1": [
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:        {
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:            "devices": [
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:                "/dev/loop4"
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:            ],
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:            "lv_name": "ceph_lv1",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:            "lv_size": "21470642176",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:            "name": "ceph_lv1",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:            "tags": {
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:                "ceph.cluster_name": "ceph",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:                "ceph.crush_device_class": "",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:                "ceph.encrypted": "0",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:                "ceph.osd_id": "1",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:                "ceph.type": "block",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:                "ceph.vdo": "0"
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:            },
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:            "type": "block",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:            "vg_name": "ceph_vg1"
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:        }
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:    ],
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:    "2": [
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:        {
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:            "devices": [
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:                "/dev/loop5"
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:            ],
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:            "lv_name": "ceph_lv2",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:            "lv_size": "21470642176",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:            "name": "ceph_lv2",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:            "tags": {
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:                "ceph.cluster_name": "ceph",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:                "ceph.crush_device_class": "",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:                "ceph.encrypted": "0",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:                "ceph.osd_id": "2",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:                "ceph.type": "block",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:                "ceph.vdo": "0"
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:            },
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:            "type": "block",
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:            "vg_name": "ceph_vg2"
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:        }
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]:    ]
Oct 14 05:36:49 np0005486808 competent_agnesi[411116]: }
Oct 14 05:36:49 np0005486808 systemd[1]: libpod-07314b37f791ab3616eeb338e1466b529de18a86b05b79f83186e09e01718560.scope: Deactivated successfully.
Oct 14 05:36:49 np0005486808 podman[411099]: 2025-10-14 09:36:49.715200606 +0000 UTC m=+1.011337009 container died 07314b37f791ab3616eeb338e1466b529de18a86b05b79f83186e09e01718560 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_agnesi, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:36:49 np0005486808 systemd[1]: var-lib-containers-storage-overlay-4085f60892936ab2ec03c17ee06131b1161a899ab1e1fce0bd14f265e78a8572-merged.mount: Deactivated successfully.
Oct 14 05:36:49 np0005486808 podman[411099]: 2025-10-14 09:36:49.773688321 +0000 UTC m=+1.069824704 container remove 07314b37f791ab3616eeb338e1466b529de18a86b05b79f83186e09e01718560 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_agnesi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:36:49 np0005486808 systemd[1]: libpod-conmon-07314b37f791ab3616eeb338e1466b529de18a86b05b79f83186e09e01718560.scope: Deactivated successfully.
Oct 14 05:36:49 np0005486808 nova_compute[259627]: 2025-10-14 09:36:49.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:36:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2511: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:36:50 np0005486808 podman[411276]: 2025-10-14 09:36:50.608223651 +0000 UTC m=+0.070389809 container create b640593d98e621a5d66e6b072e89d9d97ae9c0382322b7b393bee01c928f9c33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 14 05:36:50 np0005486808 systemd[1]: Started libpod-conmon-b640593d98e621a5d66e6b072e89d9d97ae9c0382322b7b393bee01c928f9c33.scope.
Oct 14 05:36:50 np0005486808 podman[411276]: 2025-10-14 09:36:50.580401368 +0000 UTC m=+0.042567606 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:36:50 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:36:50 np0005486808 podman[411276]: 2025-10-14 09:36:50.700920155 +0000 UTC m=+0.163086413 container init b640593d98e621a5d66e6b072e89d9d97ae9c0382322b7b393bee01c928f9c33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lamport, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 14 05:36:50 np0005486808 podman[411276]: 2025-10-14 09:36:50.713638468 +0000 UTC m=+0.175804666 container start b640593d98e621a5d66e6b072e89d9d97ae9c0382322b7b393bee01c928f9c33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lamport, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:36:50 np0005486808 podman[411276]: 2025-10-14 09:36:50.717456451 +0000 UTC m=+0.179622699 container attach b640593d98e621a5d66e6b072e89d9d97ae9c0382322b7b393bee01c928f9c33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lamport, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:36:50 np0005486808 mystifying_lamport[411293]: 167 167
Oct 14 05:36:50 np0005486808 systemd[1]: libpod-b640593d98e621a5d66e6b072e89d9d97ae9c0382322b7b393bee01c928f9c33.scope: Deactivated successfully.
Oct 14 05:36:50 np0005486808 podman[411276]: 2025-10-14 09:36:50.723135611 +0000 UTC m=+0.185301809 container died b640593d98e621a5d66e6b072e89d9d97ae9c0382322b7b393bee01c928f9c33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lamport, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 14 05:36:50 np0005486808 systemd[1]: var-lib-containers-storage-overlay-45365a2467741cdc58cb1b34dd4813f8fced64d0e7305930b3135a420c5347ae-merged.mount: Deactivated successfully.
Oct 14 05:36:50 np0005486808 podman[411276]: 2025-10-14 09:36:50.778192972 +0000 UTC m=+0.240359170 container remove b640593d98e621a5d66e6b072e89d9d97ae9c0382322b7b393bee01c928f9c33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:36:50 np0005486808 systemd[1]: libpod-conmon-b640593d98e621a5d66e6b072e89d9d97ae9c0382322b7b393bee01c928f9c33.scope: Deactivated successfully.
Oct 14 05:36:50 np0005486808 nova_compute[259627]: 2025-10-14 09:36:50.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:36:51 np0005486808 podman[411316]: 2025-10-14 09:36:51.022791074 +0000 UTC m=+0.064632647 container create 4a13bf39dfd7e842d2aa508e88a044d973f27b0fa2e6fa33eafea82c1fc6d193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_jones, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct 14 05:36:51 np0005486808 systemd[1]: Started libpod-conmon-4a13bf39dfd7e842d2aa508e88a044d973f27b0fa2e6fa33eafea82c1fc6d193.scope.
Oct 14 05:36:51 np0005486808 podman[411316]: 2025-10-14 09:36:51.00345959 +0000 UTC m=+0.045301183 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:36:51 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:36:51 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28fe1c52838b48ea110b506edffdd45052e53bcd63b1da1c87f80048fc4ffd56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:36:51 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28fe1c52838b48ea110b506edffdd45052e53bcd63b1da1c87f80048fc4ffd56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:36:51 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28fe1c52838b48ea110b506edffdd45052e53bcd63b1da1c87f80048fc4ffd56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:36:51 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28fe1c52838b48ea110b506edffdd45052e53bcd63b1da1c87f80048fc4ffd56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:36:51 np0005486808 podman[411316]: 2025-10-14 09:36:51.121835345 +0000 UTC m=+0.163676938 container init 4a13bf39dfd7e842d2aa508e88a044d973f27b0fa2e6fa33eafea82c1fc6d193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:36:51 np0005486808 podman[411316]: 2025-10-14 09:36:51.136129116 +0000 UTC m=+0.177970689 container start 4a13bf39dfd7e842d2aa508e88a044d973f27b0fa2e6fa33eafea82c1fc6d193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_jones, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 14 05:36:51 np0005486808 podman[411316]: 2025-10-14 09:36:51.140294188 +0000 UTC m=+0.182135751 container attach 4a13bf39dfd7e842d2aa508e88a044d973f27b0fa2e6fa33eafea82c1fc6d193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_jones, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 14 05:36:51 np0005486808 nova_compute[259627]: 2025-10-14 09:36:51.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:51 np0005486808 nova_compute[259627]: 2025-10-14 09:36:51.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:51 np0005486808 nova_compute[259627]: 2025-10-14 09:36:51.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:36:52 np0005486808 nova_compute[259627]: 2025-10-14 09:36:52.015 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:36:52 np0005486808 nova_compute[259627]: 2025-10-14 09:36:52.016 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:36:52 np0005486808 nova_compute[259627]: 2025-10-14 09:36:52.017 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:36:52 np0005486808 nova_compute[259627]: 2025-10-14 09:36:52.017 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:36:52 np0005486808 nova_compute[259627]: 2025-10-14 09:36:52.018 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:36:52 np0005486808 festive_jones[411333]: {
Oct 14 05:36:52 np0005486808 festive_jones[411333]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:36:52 np0005486808 festive_jones[411333]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:36:52 np0005486808 festive_jones[411333]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:36:52 np0005486808 festive_jones[411333]:        "osd_id": 2,
Oct 14 05:36:52 np0005486808 festive_jones[411333]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:36:52 np0005486808 festive_jones[411333]:        "type": "bluestore"
Oct 14 05:36:52 np0005486808 festive_jones[411333]:    },
Oct 14 05:36:52 np0005486808 festive_jones[411333]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:36:52 np0005486808 festive_jones[411333]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:36:52 np0005486808 festive_jones[411333]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:36:52 np0005486808 festive_jones[411333]:        "osd_id": 1,
Oct 14 05:36:52 np0005486808 festive_jones[411333]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:36:52 np0005486808 festive_jones[411333]:        "type": "bluestore"
Oct 14 05:36:52 np0005486808 festive_jones[411333]:    },
Oct 14 05:36:52 np0005486808 festive_jones[411333]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:36:52 np0005486808 festive_jones[411333]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:36:52 np0005486808 festive_jones[411333]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:36:52 np0005486808 festive_jones[411333]:        "osd_id": 0,
Oct 14 05:36:52 np0005486808 festive_jones[411333]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:36:52 np0005486808 festive_jones[411333]:        "type": "bluestore"
Oct 14 05:36:52 np0005486808 festive_jones[411333]:    }
Oct 14 05:36:52 np0005486808 festive_jones[411333]: }
Oct 14 05:36:52 np0005486808 systemd[1]: libpod-4a13bf39dfd7e842d2aa508e88a044d973f27b0fa2e6fa33eafea82c1fc6d193.scope: Deactivated successfully.
Oct 14 05:36:52 np0005486808 podman[411316]: 2025-10-14 09:36:52.220877065 +0000 UTC m=+1.262718618 container died 4a13bf39dfd7e842d2aa508e88a044d973f27b0fa2e6fa33eafea82c1fc6d193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_jones, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:36:52 np0005486808 systemd[1]: libpod-4a13bf39dfd7e842d2aa508e88a044d973f27b0fa2e6fa33eafea82c1fc6d193.scope: Consumed 1.095s CPU time.
Oct 14 05:36:52 np0005486808 systemd[1]: var-lib-containers-storage-overlay-28fe1c52838b48ea110b506edffdd45052e53bcd63b1da1c87f80048fc4ffd56-merged.mount: Deactivated successfully.
Oct 14 05:36:52 np0005486808 podman[411316]: 2025-10-14 09:36:52.303597225 +0000 UTC m=+1.345438798 container remove 4a13bf39dfd7e842d2aa508e88a044d973f27b0fa2e6fa33eafea82c1fc6d193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:36:52 np0005486808 systemd[1]: libpod-conmon-4a13bf39dfd7e842d2aa508e88a044d973f27b0fa2e6fa33eafea82c1fc6d193.scope: Deactivated successfully.
Oct 14 05:36:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:36:52 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:36:52 np0005486808 podman[411395]: 2025-10-14 09:36:52.354431503 +0000 UTC m=+0.085651043 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 14 05:36:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:36:52 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:36:52 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 83ba4603-d729-4833-9f64-14da00264a91 does not exist
Oct 14 05:36:52 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 113e22fe-eccc-45fb-b994-c5c57638c27e does not exist
Oct 14 05:36:52 np0005486808 podman[411388]: 2025-10-14 09:36:52.385126836 +0000 UTC m=+0.115394863 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=multipathd, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:36:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:36:52 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2685230023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:36:52 np0005486808 nova_compute[259627]: 2025-10-14 09:36:52.545 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:36:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2512: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:36:52 np0005486808 nova_compute[259627]: 2025-10-14 09:36:52.708 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:36:52 np0005486808 nova_compute[259627]: 2025-10-14 09:36:52.709 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3551MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:36:52 np0005486808 nova_compute[259627]: 2025-10-14 09:36:52.709 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:36:52 np0005486808 nova_compute[259627]: 2025-10-14 09:36:52.709 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:36:52 np0005486808 nova_compute[259627]: 2025-10-14 09:36:52.797 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:36:52 np0005486808 nova_compute[259627]: 2025-10-14 09:36:52.798 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:36:52 np0005486808 nova_compute[259627]: 2025-10-14 09:36:52.824 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:36:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:36:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:36:53 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1954020972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:36:53 np0005486808 nova_compute[259627]: 2025-10-14 09:36:53.242 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:36:53 np0005486808 nova_compute[259627]: 2025-10-14 09:36:53.248 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:36:53 np0005486808 nova_compute[259627]: 2025-10-14 09:36:53.269 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:36:53 np0005486808 nova_compute[259627]: 2025-10-14 09:36:53.298 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:36:53 np0005486808 nova_compute[259627]: 2025-10-14 09:36:53.298 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:36:53 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:36:53 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:36:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2513: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:36:55 np0005486808 nova_compute[259627]: 2025-10-14 09:36:55.300 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:36:55 np0005486808 nova_compute[259627]: 2025-10-14 09:36:55.301 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:36:55 np0005486808 nova_compute[259627]: 2025-10-14 09:36:55.301 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:36:55 np0005486808 nova_compute[259627]: 2025-10-14 09:36:55.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:36:55 np0005486808 nova_compute[259627]: 2025-10-14 09:36:55.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:36:55 np0005486808 nova_compute[259627]: 2025-10-14 09:36:55.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 05:36:56 np0005486808 nova_compute[259627]: 2025-10-14 09:36:56.001 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 14 05:36:56 np0005486808 nova_compute[259627]: 2025-10-14 09:36:56.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2514: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:36:56 np0005486808 nova_compute[259627]: 2025-10-14 09:36:56.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:36:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:36:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2515: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:36:59 np0005486808 nova_compute[259627]: 2025-10-14 09:36:59.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:37:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2516: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:37:00 np0005486808 nova_compute[259627]: 2025-10-14 09:37:00.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:37:01 np0005486808 nova_compute[259627]: 2025-10-14 09:37:01.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:01 np0005486808 nova_compute[259627]: 2025-10-14 09:37:01.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2517: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:37:02 np0005486808 nova_compute[259627]: 2025-10-14 09:37:02.701 2 DEBUG oslo_concurrency.lockutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "bdfd070c-d036-4656-b797-efba7d4a4565" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:37:02 np0005486808 nova_compute[259627]: 2025-10-14 09:37:02.702 2 DEBUG oslo_concurrency.lockutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:37:02 np0005486808 nova_compute[259627]: 2025-10-14 09:37:02.725 2 DEBUG nova.compute.manager [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:37:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:37:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:37:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:37:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:37:02 np0005486808 nova_compute[259627]: 2025-10-14 09:37:02.809 2 DEBUG oslo_concurrency.lockutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:37:02 np0005486808 nova_compute[259627]: 2025-10-14 09:37:02.809 2 DEBUG oslo_concurrency.lockutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:37:02 np0005486808 nova_compute[259627]: 2025-10-14 09:37:02.821 2 DEBUG nova.virt.hardware [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:37:02 np0005486808 nova_compute[259627]: 2025-10-14 09:37:02.822 2 INFO nova.compute.claims [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:37:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:37:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:37:03 np0005486808 nova_compute[259627]: 2025-10-14 09:37:03.086 2 DEBUG oslo_concurrency.processutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:37:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:37:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:37:03 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/472351782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:37:03 np0005486808 nova_compute[259627]: 2025-10-14 09:37:03.556 2 DEBUG oslo_concurrency.processutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:37:03 np0005486808 nova_compute[259627]: 2025-10-14 09:37:03.562 2 DEBUG nova.compute.provider_tree [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:37:03 np0005486808 nova_compute[259627]: 2025-10-14 09:37:03.586 2 DEBUG nova.scheduler.client.report [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:37:03 np0005486808 nova_compute[259627]: 2025-10-14 09:37:03.614 2 DEBUG oslo_concurrency.lockutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:37:03 np0005486808 nova_compute[259627]: 2025-10-14 09:37:03.615 2 DEBUG nova.compute.manager [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:37:03 np0005486808 nova_compute[259627]: 2025-10-14 09:37:03.673 2 DEBUG nova.compute.manager [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:37:03 np0005486808 nova_compute[259627]: 2025-10-14 09:37:03.674 2 DEBUG nova.network.neutron [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:37:03 np0005486808 nova_compute[259627]: 2025-10-14 09:37:03.694 2 INFO nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:37:03 np0005486808 nova_compute[259627]: 2025-10-14 09:37:03.719 2 DEBUG nova.compute.manager [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:37:03 np0005486808 nova_compute[259627]: 2025-10-14 09:37:03.881 2 DEBUG nova.compute.manager [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:37:03 np0005486808 nova_compute[259627]: 2025-10-14 09:37:03.883 2 DEBUG nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:37:03 np0005486808 nova_compute[259627]: 2025-10-14 09:37:03.884 2 INFO nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Creating image(s)#033[00m
Oct 14 05:37:03 np0005486808 nova_compute[259627]: 2025-10-14 09:37:03.924 2 DEBUG nova.storage.rbd_utils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image bdfd070c-d036-4656-b797-efba7d4a4565_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:37:03 np0005486808 nova_compute[259627]: 2025-10-14 09:37:03.967 2 DEBUG nova.storage.rbd_utils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image bdfd070c-d036-4656-b797-efba7d4a4565_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:37:04 np0005486808 nova_compute[259627]: 2025-10-14 09:37:04.006 2 DEBUG nova.storage.rbd_utils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image bdfd070c-d036-4656-b797-efba7d4a4565_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:37:04 np0005486808 nova_compute[259627]: 2025-10-14 09:37:04.012 2 DEBUG oslo_concurrency.processutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:37:04 np0005486808 nova_compute[259627]: 2025-10-14 09:37:04.084 2 DEBUG nova.policy [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30cd4a7832e74a8ba07238937405c4ea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:37:04 np0005486808 nova_compute[259627]: 2025-10-14 09:37:04.116 2 DEBUG oslo_concurrency.processutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:37:04 np0005486808 nova_compute[259627]: 2025-10-14 09:37:04.117 2 DEBUG oslo_concurrency.lockutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:37:04 np0005486808 nova_compute[259627]: 2025-10-14 09:37:04.118 2 DEBUG oslo_concurrency.lockutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:37:04 np0005486808 nova_compute[259627]: 2025-10-14 09:37:04.119 2 DEBUG oslo_concurrency.lockutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:37:04 np0005486808 nova_compute[259627]: 2025-10-14 09:37:04.157 2 DEBUG nova.storage.rbd_utils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image bdfd070c-d036-4656-b797-efba7d4a4565_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:37:04 np0005486808 nova_compute[259627]: 2025-10-14 09:37:04.163 2 DEBUG oslo_concurrency.processutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 bdfd070c-d036-4656-b797-efba7d4a4565_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:37:04 np0005486808 nova_compute[259627]: 2025-10-14 09:37:04.443 2 DEBUG oslo_concurrency.processutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 bdfd070c-d036-4656-b797-efba7d4a4565_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.279s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:37:04 np0005486808 nova_compute[259627]: 2025-10-14 09:37:04.543 2 DEBUG nova.storage.rbd_utils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] resizing rbd image bdfd070c-d036-4656-b797-efba7d4a4565_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:37:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2518: 305 pgs: 305 active+clean; 41 MiB data, 975 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:37:04 np0005486808 nova_compute[259627]: 2025-10-14 09:37:04.654 2 DEBUG nova.objects.instance [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'migration_context' on Instance uuid bdfd070c-d036-4656-b797-efba7d4a4565 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:37:04 np0005486808 nova_compute[259627]: 2025-10-14 09:37:04.671 2 DEBUG nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:37:04 np0005486808 nova_compute[259627]: 2025-10-14 09:37:04.672 2 DEBUG nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Ensure instance console log exists: /var/lib/nova/instances/bdfd070c-d036-4656-b797-efba7d4a4565/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:37:04 np0005486808 nova_compute[259627]: 2025-10-14 09:37:04.672 2 DEBUG oslo_concurrency.lockutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:37:04 np0005486808 nova_compute[259627]: 2025-10-14 09:37:04.673 2 DEBUG oslo_concurrency.lockutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:37:04 np0005486808 nova_compute[259627]: 2025-10-14 09:37:04.673 2 DEBUG oslo_concurrency.lockutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:37:05 np0005486808 nova_compute[259627]: 2025-10-14 09:37:05.039 2 DEBUG nova.network.neutron [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Successfully created port: d3b7ded4-91fa-46dc-b6b9-e6e630c275ab _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:37:05 np0005486808 nova_compute[259627]: 2025-10-14 09:37:05.589 2 DEBUG nova.network.neutron [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Successfully created port: d4332d2f-fff2-4de9-811c-7d5ce2580b21 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:37:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:37:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1888271872' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:37:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:37:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1888271872' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:37:06 np0005486808 nova_compute[259627]: 2025-10-14 09:37:06.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:06 np0005486808 nova_compute[259627]: 2025-10-14 09:37:06.570 2 DEBUG nova.network.neutron [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Successfully updated port: d3b7ded4-91fa-46dc-b6b9-e6e630c275ab _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:37:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2519: 305 pgs: 305 active+clean; 88 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:37:06 np0005486808 nova_compute[259627]: 2025-10-14 09:37:06.668 2 DEBUG nova.compute.manager [req-b522347a-d967-4e26-a1bd-8a2a27019406 req-c5bde07e-9816-4197-a651-a2282e086e67 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Received event network-changed-d3b7ded4-91fa-46dc-b6b9-e6e630c275ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:37:06 np0005486808 nova_compute[259627]: 2025-10-14 09:37:06.669 2 DEBUG nova.compute.manager [req-b522347a-d967-4e26-a1bd-8a2a27019406 req-c5bde07e-9816-4197-a651-a2282e086e67 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Refreshing instance network info cache due to event network-changed-d3b7ded4-91fa-46dc-b6b9-e6e630c275ab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:37:06 np0005486808 nova_compute[259627]: 2025-10-14 09:37:06.669 2 DEBUG oslo_concurrency.lockutils [req-b522347a-d967-4e26-a1bd-8a2a27019406 req-c5bde07e-9816-4197-a651-a2282e086e67 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-bdfd070c-d036-4656-b797-efba7d4a4565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:37:06 np0005486808 nova_compute[259627]: 2025-10-14 09:37:06.669 2 DEBUG oslo_concurrency.lockutils [req-b522347a-d967-4e26-a1bd-8a2a27019406 req-c5bde07e-9816-4197-a651-a2282e086e67 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-bdfd070c-d036-4656-b797-efba7d4a4565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:37:06 np0005486808 nova_compute[259627]: 2025-10-14 09:37:06.669 2 DEBUG nova.network.neutron [req-b522347a-d967-4e26-a1bd-8a2a27019406 req-c5bde07e-9816-4197-a651-a2282e086e67 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Refreshing network info cache for port d3b7ded4-91fa-46dc-b6b9-e6e630c275ab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:37:06 np0005486808 podman[411698]: 2025-10-14 09:37:06.6963252 +0000 UTC m=+0.091349743 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009)
Oct 14 05:37:06 np0005486808 podman[411697]: 2025-10-14 09:37:06.753224016 +0000 UTC m=+0.146565158 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Oct 14 05:37:06 np0005486808 nova_compute[259627]: 2025-10-14 09:37:06.846 2 DEBUG nova.network.neutron [req-b522347a-d967-4e26-a1bd-8a2a27019406 req-c5bde07e-9816-4197-a651-a2282e086e67 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:37:06 np0005486808 nova_compute[259627]: 2025-10-14 09:37:06.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:07.053 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:37:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:07.055 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:37:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:07.056 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:37:07 np0005486808 nova_compute[259627]: 2025-10-14 09:37:07.390 2 DEBUG nova.network.neutron [req-b522347a-d967-4e26-a1bd-8a2a27019406 req-c5bde07e-9816-4197-a651-a2282e086e67 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:37:07 np0005486808 nova_compute[259627]: 2025-10-14 09:37:07.427 2 DEBUG oslo_concurrency.lockutils [req-b522347a-d967-4e26-a1bd-8a2a27019406 req-c5bde07e-9816-4197-a651-a2282e086e67 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-bdfd070c-d036-4656-b797-efba7d4a4565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:37:07 np0005486808 nova_compute[259627]: 2025-10-14 09:37:07.580 2 DEBUG nova.network.neutron [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Successfully updated port: d4332d2f-fff2-4de9-811c-7d5ce2580b21 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:37:07 np0005486808 nova_compute[259627]: 2025-10-14 09:37:07.599 2 DEBUG oslo_concurrency.lockutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "refresh_cache-bdfd070c-d036-4656-b797-efba7d4a4565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:37:07 np0005486808 nova_compute[259627]: 2025-10-14 09:37:07.600 2 DEBUG oslo_concurrency.lockutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquired lock "refresh_cache-bdfd070c-d036-4656-b797-efba7d4a4565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:37:07 np0005486808 nova_compute[259627]: 2025-10-14 09:37:07.600 2 DEBUG nova.network.neutron [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:37:07 np0005486808 nova_compute[259627]: 2025-10-14 09:37:07.850 2 DEBUG nova.network.neutron [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:37:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:37:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2520: 305 pgs: 305 active+clean; 88 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:37:08 np0005486808 nova_compute[259627]: 2025-10-14 09:37:08.754 2 DEBUG nova.compute.manager [req-867068c4-c893-4658-b561-e811d1ea0486 req-3f3e3c2c-0382-46a3-9873-6c9a68312bb1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Received event network-changed-d4332d2f-fff2-4de9-811c-7d5ce2580b21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:37:08 np0005486808 nova_compute[259627]: 2025-10-14 09:37:08.754 2 DEBUG nova.compute.manager [req-867068c4-c893-4658-b561-e811d1ea0486 req-3f3e3c2c-0382-46a3-9873-6c9a68312bb1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Refreshing instance network info cache due to event network-changed-d4332d2f-fff2-4de9-811c-7d5ce2580b21. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:37:08 np0005486808 nova_compute[259627]: 2025-10-14 09:37:08.755 2 DEBUG oslo_concurrency.lockutils [req-867068c4-c893-4658-b561-e811d1ea0486 req-3f3e3c2c-0382-46a3-9873-6c9a68312bb1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-bdfd070c-d036-4656-b797-efba7d4a4565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:37:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2521: 305 pgs: 305 active+clean; 88 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:37:11 np0005486808 nova_compute[259627]: 2025-10-14 09:37:11.297 2 DEBUG nova.network.neutron [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Updating instance_info_cache with network_info: [{"id": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "address": "fa:16:3e:fc:b2:53", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b7ded4-91", "ovs_interfaceid": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "address": "fa:16:3e:98:69:75", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe98:6975", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4332d2f-ff", "ovs_interfaceid": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:37:11 np0005486808 nova_compute[259627]: 2025-10-14 09:37:11.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:11 np0005486808 nova_compute[259627]: 2025-10-14 09:37:11.321 2 DEBUG oslo_concurrency.lockutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Releasing lock "refresh_cache-bdfd070c-d036-4656-b797-efba7d4a4565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:37:11 np0005486808 nova_compute[259627]: 2025-10-14 09:37:11.322 2 DEBUG nova.compute.manager [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Instance network_info: |[{"id": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "address": "fa:16:3e:fc:b2:53", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b7ded4-91", "ovs_interfaceid": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "address": "fa:16:3e:98:69:75", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe98:6975", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4332d2f-ff", "ovs_interfaceid": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:37:11 np0005486808 nova_compute[259627]: 2025-10-14 09:37:11.323 2 DEBUG oslo_concurrency.lockutils [req-867068c4-c893-4658-b561-e811d1ea0486 req-3f3e3c2c-0382-46a3-9873-6c9a68312bb1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-bdfd070c-d036-4656-b797-efba7d4a4565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:37:11 np0005486808 nova_compute[259627]: 2025-10-14 09:37:11.323 2 DEBUG nova.network.neutron [req-867068c4-c893-4658-b561-e811d1ea0486 req-3f3e3c2c-0382-46a3-9873-6c9a68312bb1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Refreshing network info cache for port d4332d2f-fff2-4de9-811c-7d5ce2580b21 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:37:11 np0005486808 nova_compute[259627]: 2025-10-14 09:37:11.329 2 DEBUG nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Start _get_guest_xml network_info=[{"id": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "address": "fa:16:3e:fc:b2:53", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b7ded4-91", "ovs_interfaceid": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "address": "fa:16:3e:98:69:75", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe98:6975", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4332d2f-ff", "ovs_interfaceid": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:37:11 np0005486808 nova_compute[259627]: 2025-10-14 09:37:11.336 2 WARNING nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:37:11 np0005486808 nova_compute[259627]: 2025-10-14 09:37:11.341 2 DEBUG nova.virt.libvirt.host [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:37:11 np0005486808 nova_compute[259627]: 2025-10-14 09:37:11.342 2 DEBUG nova.virt.libvirt.host [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:37:11 np0005486808 nova_compute[259627]: 2025-10-14 09:37:11.355 2 DEBUG nova.virt.libvirt.host [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:37:11 np0005486808 nova_compute[259627]: 2025-10-14 09:37:11.355 2 DEBUG nova.virt.libvirt.host [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:37:11 np0005486808 nova_compute[259627]: 2025-10-14 09:37:11.356 2 DEBUG nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:37:11 np0005486808 nova_compute[259627]: 2025-10-14 09:37:11.357 2 DEBUG nova.virt.hardware [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:37:11 np0005486808 nova_compute[259627]: 2025-10-14 09:37:11.358 2 DEBUG nova.virt.hardware [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:37:11 np0005486808 nova_compute[259627]: 2025-10-14 09:37:11.358 2 DEBUG nova.virt.hardware [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:37:11 np0005486808 nova_compute[259627]: 2025-10-14 09:37:11.359 2 DEBUG nova.virt.hardware [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:37:11 np0005486808 nova_compute[259627]: 2025-10-14 09:37:11.359 2 DEBUG nova.virt.hardware [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:37:11 np0005486808 nova_compute[259627]: 2025-10-14 09:37:11.359 2 DEBUG nova.virt.hardware [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:37:11 np0005486808 nova_compute[259627]: 2025-10-14 09:37:11.360 2 DEBUG nova.virt.hardware [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:37:11 np0005486808 nova_compute[259627]: 2025-10-14 09:37:11.360 2 DEBUG nova.virt.hardware [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:37:11 np0005486808 nova_compute[259627]: 2025-10-14 09:37:11.360 2 DEBUG nova.virt.hardware [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:37:11 np0005486808 nova_compute[259627]: 2025-10-14 09:37:11.361 2 DEBUG nova.virt.hardware [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:37:11 np0005486808 nova_compute[259627]: 2025-10-14 09:37:11.361 2 DEBUG nova.virt.hardware [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:37:11 np0005486808 nova_compute[259627]: 2025-10-14 09:37:11.366 2 DEBUG oslo_concurrency.processutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:37:11 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:37:11 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1264397896' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:37:11 np0005486808 nova_compute[259627]: 2025-10-14 09:37:11.871 2 DEBUG oslo_concurrency.processutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:37:11 np0005486808 nova_compute[259627]: 2025-10-14 09:37:11.909 2 DEBUG nova.storage.rbd_utils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image bdfd070c-d036-4656-b797-efba7d4a4565_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:37:11 np0005486808 nova_compute[259627]: 2025-10-14 09:37:11.944 2 DEBUG oslo_concurrency.processutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:11.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:37:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/57655119' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.436 2 DEBUG oslo_concurrency.processutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.439 2 DEBUG nova.virt.libvirt.vif [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:37:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1555367739',display_name='tempest-TestGettingAddress-server-1555367739',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1555367739',id=146,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMmIsbrf38DSeYfU0D9g6Pa5xZfdqfiS9Azfj2PL9jRlp2LEaDRc8Y0PnazdZrJgmswDWR1X7Y7Ef3rL30Sf7OIa7xQNBBPmRTRkIXeX3JkPwVJc6ryIngStVh0nN8o8KQ==',key_name='tempest-TestGettingAddress-932007528',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-3rvo4yr0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:37:03Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=bdfd070c-d036-4656-b797-efba7d4a4565,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "address": "fa:16:3e:fc:b2:53", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b7ded4-91", "ovs_interfaceid": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.439 2 DEBUG nova.network.os_vif_util [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "address": "fa:16:3e:fc:b2:53", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b7ded4-91", "ovs_interfaceid": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.441 2 DEBUG nova.network.os_vif_util [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:b2:53,bridge_name='br-int',has_traffic_filtering=True,id=d3b7ded4-91fa-46dc-b6b9-e6e630c275ab,network=Network(cb1842f7-933b-4c76-aa59-c55590c98ec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3b7ded4-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.443 2 DEBUG nova.virt.libvirt.vif [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:37:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1555367739',display_name='tempest-TestGettingAddress-server-1555367739',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1555367739',id=146,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMmIsbrf38DSeYfU0D9g6Pa5xZfdqfiS9Azfj2PL9jRlp2LEaDRc8Y0PnazdZrJgmswDWR1X7Y7Ef3rL30Sf7OIa7xQNBBPmRTRkIXeX3JkPwVJc6ryIngStVh0nN8o8KQ==',key_name='tempest-TestGettingAddress-932007528',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-3rvo4yr0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:37:03Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=bdfd070c-d036-4656-b797-efba7d4a4565,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "address": "fa:16:3e:98:69:75", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe98:6975", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4332d2f-ff", "ovs_interfaceid": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.443 2 DEBUG nova.network.os_vif_util [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "address": "fa:16:3e:98:69:75", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe98:6975", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4332d2f-ff", "ovs_interfaceid": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.445 2 DEBUG nova.network.os_vif_util [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:69:75,bridge_name='br-int',has_traffic_filtering=True,id=d4332d2f-fff2-4de9-811c-7d5ce2580b21,network=Network(4914ad10-cd65-4b9a-8ebb-43ebafc5f222),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4332d2f-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.448 2 DEBUG nova.objects.instance [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'pci_devices' on Instance uuid bdfd070c-d036-4656-b797-efba7d4a4565 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.487 2 DEBUG nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:37:12 np0005486808 nova_compute[259627]:  <uuid>bdfd070c-d036-4656-b797-efba7d4a4565</uuid>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:  <name>instance-00000092</name>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:37:12 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:      <nova:name>tempest-TestGettingAddress-server-1555367739</nova:name>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:37:11</nova:creationTime>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:37:12 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:        <nova:user uuid="30cd4a7832e74a8ba07238937405c4ea">tempest-TestGettingAddress-262346303-project-member</nova:user>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:        <nova:project uuid="878ad3f42dd94bef8cc66ab3d67f03bf">tempest-TestGettingAddress-262346303</nova:project>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:        <nova:port uuid="d3b7ded4-91fa-46dc-b6b9-e6e630c275ab">
Oct 14 05:37:12 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:        <nova:port uuid="d4332d2f-fff2-4de9-811c-7d5ce2580b21">
Oct 14 05:37:12 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe98:6975" ipVersion="6"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:      <entry name="serial">bdfd070c-d036-4656-b797-efba7d4a4565</entry>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:      <entry name="uuid">bdfd070c-d036-4656-b797-efba7d4a4565</entry>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:37:12 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/bdfd070c-d036-4656-b797-efba7d4a4565_disk">
Oct 14 05:37:12 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:37:12 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:37:12 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/bdfd070c-d036-4656-b797-efba7d4a4565_disk.config">
Oct 14 05:37:12 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:37:12 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:37:12 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:fc:b2:53"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:      <target dev="tapd3b7ded4-91"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:37:12 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:98:69:75"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:      <target dev="tapd4332d2f-ff"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:37:12 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/bdfd070c-d036-4656-b797-efba7d4a4565/console.log" append="off"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:37:12 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:37:12 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:37:12 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:37:12 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:37:12 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.489 2 DEBUG nova.compute.manager [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Preparing to wait for external event network-vif-plugged-d3b7ded4-91fa-46dc-b6b9-e6e630c275ab prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.490 2 DEBUG oslo_concurrency.lockutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.490 2 DEBUG oslo_concurrency.lockutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.491 2 DEBUG oslo_concurrency.lockutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.491 2 DEBUG nova.compute.manager [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Preparing to wait for external event network-vif-plugged-d4332d2f-fff2-4de9-811c-7d5ce2580b21 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.491 2 DEBUG oslo_concurrency.lockutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.492 2 DEBUG oslo_concurrency.lockutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.492 2 DEBUG oslo_concurrency.lockutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.493 2 DEBUG nova.virt.libvirt.vif [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:37:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1555367739',display_name='tempest-TestGettingAddress-server-1555367739',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1555367739',id=146,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMmIsbrf38DSeYfU0D9g6Pa5xZfdqfiS9Azfj2PL9jRlp2LEaDRc8Y0PnazdZrJgmswDWR1X7Y7Ef3rL30Sf7OIa7xQNBBPmRTRkIXeX3JkPwVJc6ryIngStVh0nN8o8KQ==',key_name='tempest-TestGettingAddress-932007528',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-3rvo4yr0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:37:03Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=bdfd070c-d036-4656-b797-efba7d4a4565,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "address": "fa:16:3e:fc:b2:53", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b7ded4-91", "ovs_interfaceid": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.494 2 DEBUG nova.network.os_vif_util [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "address": "fa:16:3e:fc:b2:53", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b7ded4-91", "ovs_interfaceid": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.494 2 DEBUG nova.network.os_vif_util [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:b2:53,bridge_name='br-int',has_traffic_filtering=True,id=d3b7ded4-91fa-46dc-b6b9-e6e630c275ab,network=Network(cb1842f7-933b-4c76-aa59-c55590c98ec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3b7ded4-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.495 2 DEBUG os_vif [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:b2:53,bridge_name='br-int',has_traffic_filtering=True,id=d3b7ded4-91fa-46dc-b6b9-e6e630c275ab,network=Network(cb1842f7-933b-4c76-aa59-c55590c98ec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3b7ded4-91') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.496 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.496 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.500 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd3b7ded4-91, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.501 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd3b7ded4-91, col_values=(('external_ids', {'iface-id': 'd3b7ded4-91fa-46dc-b6b9-e6e630c275ab', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fc:b2:53', 'vm-uuid': 'bdfd070c-d036-4656-b797-efba7d4a4565'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:37:12 np0005486808 NetworkManager[44885]: <info>  [1760434632.5055] manager: (tapd3b7ded4-91): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/650)
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.514 2 INFO os_vif [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:b2:53,bridge_name='br-int',has_traffic_filtering=True,id=d3b7ded4-91fa-46dc-b6b9-e6e630c275ab,network=Network(cb1842f7-933b-4c76-aa59-c55590c98ec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3b7ded4-91')#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.515 2 DEBUG nova.virt.libvirt.vif [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:37:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1555367739',display_name='tempest-TestGettingAddress-server-1555367739',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1555367739',id=146,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMmIsbrf38DSeYfU0D9g6Pa5xZfdqfiS9Azfj2PL9jRlp2LEaDRc8Y0PnazdZrJgmswDWR1X7Y7Ef3rL30Sf7OIa7xQNBBPmRTRkIXeX3JkPwVJc6ryIngStVh0nN8o8KQ==',key_name='tempest-TestGettingAddress-932007528',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-3rvo4yr0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:37:03Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=bdfd070c-d036-4656-b797-efba7d4a4565,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "address": "fa:16:3e:98:69:75", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe98:6975", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4332d2f-ff", "ovs_interfaceid": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.516 2 DEBUG nova.network.os_vif_util [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "address": "fa:16:3e:98:69:75", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe98:6975", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4332d2f-ff", "ovs_interfaceid": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.516 2 DEBUG nova.network.os_vif_util [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:69:75,bridge_name='br-int',has_traffic_filtering=True,id=d4332d2f-fff2-4de9-811c-7d5ce2580b21,network=Network(4914ad10-cd65-4b9a-8ebb-43ebafc5f222),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4332d2f-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.517 2 DEBUG os_vif [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:69:75,bridge_name='br-int',has_traffic_filtering=True,id=d4332d2f-fff2-4de9-811c-7d5ce2580b21,network=Network(4914ad10-cd65-4b9a-8ebb-43ebafc5f222),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4332d2f-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.517 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.518 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.520 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd4332d2f-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.520 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd4332d2f-ff, col_values=(('external_ids', {'iface-id': 'd4332d2f-fff2-4de9-811c-7d5ce2580b21', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:98:69:75', 'vm-uuid': 'bdfd070c-d036-4656-b797-efba7d4a4565'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:12 np0005486808 NetworkManager[44885]: <info>  [1760434632.5232] manager: (tapd4332d2f-ff): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/651)
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.536 2 INFO os_vif [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:69:75,bridge_name='br-int',has_traffic_filtering=True,id=d4332d2f-fff2-4de9-811c-7d5ce2580b21,network=Network(4914ad10-cd65-4b9a-8ebb-43ebafc5f222),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4332d2f-ff')#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.583 2 DEBUG nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.583 2 DEBUG nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.583 2 DEBUG nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:fc:b2:53, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.584 2 DEBUG nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:98:69:75, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.584 2 INFO nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Using config drive#033[00m
Oct 14 05:37:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2522: 305 pgs: 305 active+clean; 88 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:37:12 np0005486808 nova_compute[259627]: 2025-10-14 09:37:12.610 2 DEBUG nova.storage.rbd_utils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image bdfd070c-d036-4656-b797-efba7d4a4565_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:37:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:37:13 np0005486808 nova_compute[259627]: 2025-10-14 09:37:13.535 2 INFO nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Creating config drive at /var/lib/nova/instances/bdfd070c-d036-4656-b797-efba7d4a4565/disk.config#033[00m
Oct 14 05:37:13 np0005486808 nova_compute[259627]: 2025-10-14 09:37:13.545 2 DEBUG oslo_concurrency.processutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bdfd070c-d036-4656-b797-efba7d4a4565/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmperl9e3em execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:37:13 np0005486808 nova_compute[259627]: 2025-10-14 09:37:13.721 2 DEBUG oslo_concurrency.processutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bdfd070c-d036-4656-b797-efba7d4a4565/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmperl9e3em" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:37:13 np0005486808 nova_compute[259627]: 2025-10-14 09:37:13.762 2 DEBUG nova.storage.rbd_utils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image bdfd070c-d036-4656-b797-efba7d4a4565_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:37:13 np0005486808 nova_compute[259627]: 2025-10-14 09:37:13.767 2 DEBUG oslo_concurrency.processutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bdfd070c-d036-4656-b797-efba7d4a4565/disk.config bdfd070c-d036-4656-b797-efba7d4a4565_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:37:13 np0005486808 nova_compute[259627]: 2025-10-14 09:37:13.962 2 DEBUG oslo_concurrency.processutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bdfd070c-d036-4656-b797-efba7d4a4565/disk.config bdfd070c-d036-4656-b797-efba7d4a4565_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:37:13 np0005486808 nova_compute[259627]: 2025-10-14 09:37:13.963 2 INFO nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Deleting local config drive /var/lib/nova/instances/bdfd070c-d036-4656-b797-efba7d4a4565/disk.config because it was imported into RBD.#033[00m
Oct 14 05:37:14 np0005486808 kernel: tapd3b7ded4-91: entered promiscuous mode
Oct 14 05:37:14 np0005486808 NetworkManager[44885]: <info>  [1760434634.0076] manager: (tapd3b7ded4-91): new Tun device (/org/freedesktop/NetworkManager/Devices/652)
Oct 14 05:37:14 np0005486808 ovn_controller[152662]: 2025-10-14T09:37:14Z|01595|binding|INFO|Claiming lport d3b7ded4-91fa-46dc-b6b9-e6e630c275ab for this chassis.
Oct 14 05:37:14 np0005486808 ovn_controller[152662]: 2025-10-14T09:37:14Z|01596|binding|INFO|d3b7ded4-91fa-46dc-b6b9-e6e630c275ab: Claiming fa:16:3e:fc:b2:53 10.100.0.10
Oct 14 05:37:14 np0005486808 nova_compute[259627]: 2025-10-14 09:37:14.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:14 np0005486808 nova_compute[259627]: 2025-10-14 09:37:14.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:14 np0005486808 nova_compute[259627]: 2025-10-14 09:37:14.027 2 DEBUG nova.network.neutron [req-867068c4-c893-4658-b561-e811d1ea0486 req-3f3e3c2c-0382-46a3-9873-6c9a68312bb1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Updated VIF entry in instance network info cache for port d4332d2f-fff2-4de9-811c-7d5ce2580b21. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:37:14 np0005486808 nova_compute[259627]: 2025-10-14 09:37:14.028 2 DEBUG nova.network.neutron [req-867068c4-c893-4658-b561-e811d1ea0486 req-3f3e3c2c-0382-46a3-9873-6c9a68312bb1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Updating instance_info_cache with network_info: [{"id": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "address": "fa:16:3e:fc:b2:53", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b7ded4-91", "ovs_interfaceid": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "address": "fa:16:3e:98:69:75", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe98:6975", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4332d2f-ff", "ovs_interfaceid": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:37:14 np0005486808 NetworkManager[44885]: <info>  [1760434634.0306] manager: (tapd4332d2f-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/653)
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.031 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:b2:53 10.100.0.10'], port_security=['fa:16:3e:fc:b2:53 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'bdfd070c-d036-4656-b797-efba7d4a4565', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cb1842f7-933b-4c76-aa59-c55590c98ec5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b6145c26-2b2a-4cab-aefe-d574ccfcd594', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7b96c99b-2e43-4904-a8ba-7ffb70fd145f, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=d3b7ded4-91fa-46dc-b6b9-e6e630c275ab) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.033 162547 INFO neutron.agent.ovn.metadata.agent [-] Port d3b7ded4-91fa-46dc-b6b9-e6e630c275ab in datapath cb1842f7-933b-4c76-aa59-c55590c98ec5 bound to our chassis#033[00m
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.034 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cb1842f7-933b-4c76-aa59-c55590c98ec5#033[00m
Oct 14 05:37:14 np0005486808 systemd-udevd[411882]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:37:14 np0005486808 systemd-udevd[411883]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.048 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b9d701eb-e0ee-488b-be3d-b5d2f5bb7832]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.050 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcb1842f7-91 in ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.052 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcb1842f7-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.052 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f041d22f-1487-44c0-a43f-987d3b06d6c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.053 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4ad5925f-3795-4018-9b67-5a267872dfad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:37:14 np0005486808 nova_compute[259627]: 2025-10-14 09:37:14.053 2 DEBUG oslo_concurrency.lockutils [req-867068c4-c893-4658-b561-e811d1ea0486 req-3f3e3c2c-0382-46a3-9873-6c9a68312bb1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-bdfd070c-d036-4656-b797-efba7d4a4565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:37:14 np0005486808 NetworkManager[44885]: <info>  [1760434634.0658] device (tapd3b7ded4-91): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.064 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[09e0483b-bda1-48e7-96f1-c885332bc7eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:37:14 np0005486808 NetworkManager[44885]: <info>  [1760434634.0668] device (tapd3b7ded4-91): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:37:14 np0005486808 systemd-machined[214636]: New machine qemu-179-instance-00000092.
Oct 14 05:37:14 np0005486808 systemd[1]: Started Virtual Machine qemu-179-instance-00000092.
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.088 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[72e71172-604d-4c93-8599-bbbf5a1ccfc6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:37:14 np0005486808 NetworkManager[44885]: <info>  [1760434634.1011] device (tapd4332d2f-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:37:14 np0005486808 kernel: tapd4332d2f-ff: entered promiscuous mode
Oct 14 05:37:14 np0005486808 NetworkManager[44885]: <info>  [1760434634.1024] device (tapd4332d2f-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:37:14 np0005486808 ovn_controller[152662]: 2025-10-14T09:37:14Z|01597|binding|INFO|Claiming lport d4332d2f-fff2-4de9-811c-7d5ce2580b21 for this chassis.
Oct 14 05:37:14 np0005486808 ovn_controller[152662]: 2025-10-14T09:37:14Z|01598|binding|INFO|d4332d2f-fff2-4de9-811c-7d5ce2580b21: Claiming fa:16:3e:98:69:75 2001:db8::f816:3eff:fe98:6975
Oct 14 05:37:14 np0005486808 nova_compute[259627]: 2025-10-14 09:37:14.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.110 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:69:75 2001:db8::f816:3eff:fe98:6975'], port_security=['fa:16:3e:98:69:75 2001:db8::f816:3eff:fe98:6975'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe98:6975/64', 'neutron:device_id': 'bdfd070c-d036-4656-b797-efba7d4a4565', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4914ad10-cd65-4b9a-8ebb-43ebafc5f222', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b6145c26-2b2a-4cab-aefe-d574ccfcd594', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=85b53903-cbc2-43bd-a94b-2366acd741ae, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=d4332d2f-fff2-4de9-811c-7d5ce2580b21) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:37:14 np0005486808 ovn_controller[152662]: 2025-10-14T09:37:14Z|01599|binding|INFO|Setting lport d3b7ded4-91fa-46dc-b6b9-e6e630c275ab ovn-installed in OVS
Oct 14 05:37:14 np0005486808 ovn_controller[152662]: 2025-10-14T09:37:14Z|01600|binding|INFO|Setting lport d3b7ded4-91fa-46dc-b6b9-e6e630c275ab up in Southbound
Oct 14 05:37:14 np0005486808 nova_compute[259627]: 2025-10-14 09:37:14.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.125 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[6c78c16f-6199-4f3c-ae4e-4b4061850f80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:37:14 np0005486808 ovn_controller[152662]: 2025-10-14T09:37:14Z|01601|binding|INFO|Setting lport d4332d2f-fff2-4de9-811c-7d5ce2580b21 ovn-installed in OVS
Oct 14 05:37:14 np0005486808 ovn_controller[152662]: 2025-10-14T09:37:14Z|01602|binding|INFO|Setting lport d4332d2f-fff2-4de9-811c-7d5ce2580b21 up in Southbound
Oct 14 05:37:14 np0005486808 nova_compute[259627]: 2025-10-14 09:37:14.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:14 np0005486808 NetworkManager[44885]: <info>  [1760434634.1339] manager: (tapcb1842f7-90): new Veth device (/org/freedesktop/NetworkManager/Devices/654)
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.133 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fd975035-7928-4e2c-a136-91b05f06aaa7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.166 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a6a6a552-623d-4704-ac7f-788774a01656]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.174 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[607a3784-cff4-48c5-92bc-b34d47161f73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:37:14 np0005486808 NetworkManager[44885]: <info>  [1760434634.2007] device (tapcb1842f7-90): carrier: link connected
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.205 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5d0aaea6-6f79-4d5a-9171-e2844d7b6d1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.226 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c1a7db48-c1eb-4858-8c5d-6d670a3270ed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcb1842f7-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:90:b7:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 454], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 841296, 'reachable_time': 21039, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 411918, 'error': None, 'target': 'ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.242 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b9a72328-9460-4391-9ac5-9cc7a437c58b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe90:b737'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 841296, 'tstamp': 841296}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 411919, 'error': None, 'target': 'ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.258 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[57f1d425-1095-493e-be3a-c08d7270a695]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcb1842f7-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:90:b7:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 454], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 841296, 'reachable_time': 21039, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 411920, 'error': None, 'target': 'ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.293 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[84a0b4a2-e572-49ce-9a77-42b21272025f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:37:14 np0005486808 nova_compute[259627]: 2025-10-14 09:37:14.328 2 DEBUG nova.compute.manager [req-7d74f27b-b726-4ba5-8114-f8ecfbdf01b4 req-d8e7aa0b-1965-43df-8afd-bd46219d2216 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Received event network-vif-plugged-d3b7ded4-91fa-46dc-b6b9-e6e630c275ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:37:14 np0005486808 nova_compute[259627]: 2025-10-14 09:37:14.328 2 DEBUG oslo_concurrency.lockutils [req-7d74f27b-b726-4ba5-8114-f8ecfbdf01b4 req-d8e7aa0b-1965-43df-8afd-bd46219d2216 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:37:14 np0005486808 nova_compute[259627]: 2025-10-14 09:37:14.328 2 DEBUG oslo_concurrency.lockutils [req-7d74f27b-b726-4ba5-8114-f8ecfbdf01b4 req-d8e7aa0b-1965-43df-8afd-bd46219d2216 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:37:14 np0005486808 nova_compute[259627]: 2025-10-14 09:37:14.329 2 DEBUG oslo_concurrency.lockutils [req-7d74f27b-b726-4ba5-8114-f8ecfbdf01b4 req-d8e7aa0b-1965-43df-8afd-bd46219d2216 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:37:14 np0005486808 nova_compute[259627]: 2025-10-14 09:37:14.329 2 DEBUG nova.compute.manager [req-7d74f27b-b726-4ba5-8114-f8ecfbdf01b4 req-d8e7aa0b-1965-43df-8afd-bd46219d2216 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Processing event network-vif-plugged-d3b7ded4-91fa-46dc-b6b9-e6e630c275ab _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.351 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[166f7ba2-f224-4fcf-a73a-61e24f4aa230]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.352 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcb1842f7-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.353 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.353 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcb1842f7-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:37:14 np0005486808 nova_compute[259627]: 2025-10-14 09:37:14.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:14 np0005486808 NetworkManager[44885]: <info>  [1760434634.3854] manager: (tapcb1842f7-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/655)
Oct 14 05:37:14 np0005486808 kernel: tapcb1842f7-90: entered promiscuous mode
Oct 14 05:37:14 np0005486808 nova_compute[259627]: 2025-10-14 09:37:14.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.387 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcb1842f7-90, col_values=(('external_ids', {'iface-id': '49fe400f-9e76-42ad-a72c-3f9a5bf50e43'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:37:14 np0005486808 nova_compute[259627]: 2025-10-14 09:37:14.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:14 np0005486808 ovn_controller[152662]: 2025-10-14T09:37:14Z|01603|binding|INFO|Releasing lport 49fe400f-9e76-42ad-a72c-3f9a5bf50e43 from this chassis (sb_readonly=0)
Oct 14 05:37:14 np0005486808 nova_compute[259627]: 2025-10-14 09:37:14.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.405 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cb1842f7-933b-4c76-aa59-c55590c98ec5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cb1842f7-933b-4c76-aa59-c55590c98ec5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.407 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e8080006-1613-4b02-917e-95cc0b7372bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.407 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-cb1842f7-933b-4c76-aa59-c55590c98ec5
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/cb1842f7-933b-4c76-aa59-c55590c98ec5.pid.haproxy
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID cb1842f7-933b-4c76-aa59-c55590c98ec5
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:37:14 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:14.408 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5', 'env', 'PROCESS_TAG=haproxy-cb1842f7-933b-4c76-aa59-c55590c98ec5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cb1842f7-933b-4c76-aa59-c55590c98ec5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:37:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2523: 305 pgs: 305 active+clean; 88 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:37:14 np0005486808 podman[411985]: 2025-10-14 09:37:14.842870614 +0000 UTC m=+0.074952040 container create a5fb4a873dbebb3c5d634e3c4960ccd5662b44ea742a782aba63a014099ebd54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 14 05:37:14 np0005486808 podman[411985]: 2025-10-14 09:37:14.805552508 +0000 UTC m=+0.037633974 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:37:14 np0005486808 systemd[1]: Started libpod-conmon-a5fb4a873dbebb3c5d634e3c4960ccd5662b44ea742a782aba63a014099ebd54.scope.
Oct 14 05:37:14 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:37:14 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25a575707cf6d3dded0de43b86eb81e521613e1d92e58a48fd3de84aba25f6fe/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:37:14 np0005486808 podman[411985]: 2025-10-14 09:37:14.955041247 +0000 UTC m=+0.187122703 container init a5fb4a873dbebb3c5d634e3c4960ccd5662b44ea742a782aba63a014099ebd54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 05:37:14 np0005486808 podman[411985]: 2025-10-14 09:37:14.961825373 +0000 UTC m=+0.193906799 container start a5fb4a873dbebb3c5d634e3c4960ccd5662b44ea742a782aba63a014099ebd54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:37:14 np0005486808 neutron-haproxy-ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5[412010]: [NOTICE]   (412014) : New worker (412016) forked
Oct 14 05:37:14 np0005486808 neutron-haproxy-ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5[412010]: [NOTICE]   (412014) : Loading success.
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.028 162547 INFO neutron.agent.ovn.metadata.agent [-] Port d4332d2f-fff2-4de9-811c-7d5ce2580b21 in datapath 4914ad10-cd65-4b9a-8ebb-43ebafc5f222 unbound from our chassis#033[00m
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.030 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4914ad10-cd65-4b9a-8ebb-43ebafc5f222#033[00m
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.049 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[881b05e4-e0e6-486f-b196-6167f2244a98]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.050 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4914ad10-c1 in ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.052 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4914ad10-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.052 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[361823a3-309c-4429-b7db-2820c1ba0a63]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.053 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2768a337-988c-495c-ba5f-9ab1163c4803]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.067 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[134bc167-84f2-43fc-9273-5e9c96344626]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.089 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3abaa51e-a511-4f76-9aff-3d4fb3ac65a5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.127 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ff0b5d72-dd35-4f8a-84d7-9680fbb5fd51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.135 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[14c8719a-4fbe-45cb-a921-db9a148892ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:37:15 np0005486808 NetworkManager[44885]: <info>  [1760434635.1371] manager: (tap4914ad10-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/656)
Oct 14 05:37:15 np0005486808 systemd-udevd[411911]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.174 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ccc4e6af-cc85-464f-a01d-24d0f18fedb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.178 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[80d576ef-7a0d-4ea8-883e-d417dc5c4131]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:37:15 np0005486808 NetworkManager[44885]: <info>  [1760434635.2094] device (tap4914ad10-c0): carrier: link connected
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.215 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[9d10a6bf-2d58-4d36-bbe5-15666983e665]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.236 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[dcf354c3-30e3-4e5a-b580-27105396efb6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4914ad10-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:b4:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 841397, 'reachable_time': 27486, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 412035, 'error': None, 'target': 'ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.258 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[df20cebb-82cc-42bf-b272-72eccd3bf583]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4d:b40c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 841397, 'tstamp': 841397}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 412036, 'error': None, 'target': 'ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.282 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c0eb2687-e50f-43df-b19c-d9d0cbeda6df]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4914ad10-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:b4:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 841397, 'reachable_time': 27486, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 412037, 'error': None, 'target': 'ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:37:15 np0005486808 nova_compute[259627]: 2025-10-14 09:37:15.286 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434635.2860374, bdfd070c-d036-4656-b797-efba7d4a4565 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:37:15 np0005486808 nova_compute[259627]: 2025-10-14 09:37:15.286 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] VM Started (Lifecycle Event)#033[00m
Oct 14 05:37:15 np0005486808 nova_compute[259627]: 2025-10-14 09:37:15.309 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:37:15 np0005486808 nova_compute[259627]: 2025-10-14 09:37:15.313 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434635.2883456, bdfd070c-d036-4656-b797-efba7d4a4565 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:37:15 np0005486808 nova_compute[259627]: 2025-10-14 09:37:15.313 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.327 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[14ab6682-ee98-41b0-acfa-f41c0663cc8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:37:15 np0005486808 nova_compute[259627]: 2025-10-14 09:37:15.331 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:37:15 np0005486808 nova_compute[259627]: 2025-10-14 09:37:15.335 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.356 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[192caad6-29eb-446d-94db-b81d6e180238]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.357 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4914ad10-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.357 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.358 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4914ad10-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:37:15 np0005486808 NetworkManager[44885]: <info>  [1760434635.3607] manager: (tap4914ad10-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/657)
Oct 14 05:37:15 np0005486808 nova_compute[259627]: 2025-10-14 09:37:15.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:15 np0005486808 kernel: tap4914ad10-c0: entered promiscuous mode
Oct 14 05:37:15 np0005486808 nova_compute[259627]: 2025-10-14 09:37:15.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.364 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4914ad10-c0, col_values=(('external_ids', {'iface-id': '7ea31872-5902-4f26-8d38-70f94c9c61fa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:37:15 np0005486808 ovn_controller[152662]: 2025-10-14T09:37:15Z|01604|binding|INFO|Releasing lport 7ea31872-5902-4f26-8d38-70f94c9c61fa from this chassis (sb_readonly=0)
Oct 14 05:37:15 np0005486808 nova_compute[259627]: 2025-10-14 09:37:15.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:15 np0005486808 nova_compute[259627]: 2025-10-14 09:37:15.372 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:37:15 np0005486808 nova_compute[259627]: 2025-10-14 09:37:15.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.395 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4914ad10-cd65-4b9a-8ebb-43ebafc5f222.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4914ad10-cd65-4b9a-8ebb-43ebafc5f222.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.396 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[cfa365d3-919c-4264-99c1-fc775e325157]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.396 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-4914ad10-cd65-4b9a-8ebb-43ebafc5f222
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/4914ad10-cd65-4b9a-8ebb-43ebafc5f222.pid.haproxy
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 4914ad10-cd65-4b9a-8ebb-43ebafc5f222
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:37:15 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:15.397 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222', 'env', 'PROCESS_TAG=haproxy-4914ad10-cd65-4b9a-8ebb-43ebafc5f222', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4914ad10-cd65-4b9a-8ebb-43ebafc5f222.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:37:15 np0005486808 podman[412067]: 2025-10-14 09:37:15.806411879 +0000 UTC m=+0.068586704 container create 7ab8b2afc12766df09251966b091f22abcd9771c59d975709d780ded3ea72dfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222, tcib_managed=true, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:37:15 np0005486808 systemd[1]: Started libpod-conmon-7ab8b2afc12766df09251966b091f22abcd9771c59d975709d780ded3ea72dfb.scope.
Oct 14 05:37:15 np0005486808 podman[412067]: 2025-10-14 09:37:15.769805481 +0000 UTC m=+0.031980386 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:37:15 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:37:15 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1331dc74cf2f6433ae28b4a6f4672079f6c234826e20c160e76e1878b09b3328/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:37:15 np0005486808 podman[412067]: 2025-10-14 09:37:15.917301291 +0000 UTC m=+0.179476196 container init 7ab8b2afc12766df09251966b091f22abcd9771c59d975709d780ded3ea72dfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 14 05:37:15 np0005486808 podman[412067]: 2025-10-14 09:37:15.929378167 +0000 UTC m=+0.191553022 container start 7ab8b2afc12766df09251966b091f22abcd9771c59d975709d780ded3ea72dfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Oct 14 05:37:15 np0005486808 neutron-haproxy-ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222[412083]: [NOTICE]   (412087) : New worker (412089) forked
Oct 14 05:37:15 np0005486808 neutron-haproxy-ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222[412083]: [NOTICE]   (412087) : Loading success.
Oct 14 05:37:16 np0005486808 nova_compute[259627]: 2025-10-14 09:37:16.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2524: 305 pgs: 305 active+clean; 88 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 14 05:37:16 np0005486808 nova_compute[259627]: 2025-10-14 09:37:16.656 2 DEBUG nova.compute.manager [req-951395ae-1b28-4cd2-bd3c-fe78b87e305c req-d6c27c3c-2d9a-4120-9dfe-a043ce494e85 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Received event network-vif-plugged-d3b7ded4-91fa-46dc-b6b9-e6e630c275ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:37:16 np0005486808 nova_compute[259627]: 2025-10-14 09:37:16.656 2 DEBUG oslo_concurrency.lockutils [req-951395ae-1b28-4cd2-bd3c-fe78b87e305c req-d6c27c3c-2d9a-4120-9dfe-a043ce494e85 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:37:16 np0005486808 nova_compute[259627]: 2025-10-14 09:37:16.657 2 DEBUG oslo_concurrency.lockutils [req-951395ae-1b28-4cd2-bd3c-fe78b87e305c req-d6c27c3c-2d9a-4120-9dfe-a043ce494e85 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:37:16 np0005486808 nova_compute[259627]: 2025-10-14 09:37:16.657 2 DEBUG oslo_concurrency.lockutils [req-951395ae-1b28-4cd2-bd3c-fe78b87e305c req-d6c27c3c-2d9a-4120-9dfe-a043ce494e85 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:37:16 np0005486808 nova_compute[259627]: 2025-10-14 09:37:16.658 2 DEBUG nova.compute.manager [req-951395ae-1b28-4cd2-bd3c-fe78b87e305c req-d6c27c3c-2d9a-4120-9dfe-a043ce494e85 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] No event matching network-vif-plugged-d3b7ded4-91fa-46dc-b6b9-e6e630c275ab in dict_keys([('network-vif-plugged', 'd4332d2f-fff2-4de9-811c-7d5ce2580b21')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Oct 14 05:37:16 np0005486808 nova_compute[259627]: 2025-10-14 09:37:16.658 2 WARNING nova.compute.manager [req-951395ae-1b28-4cd2-bd3c-fe78b87e305c req-d6c27c3c-2d9a-4120-9dfe-a043ce494e85 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Received unexpected event network-vif-plugged-d3b7ded4-91fa-46dc-b6b9-e6e630c275ab for instance with vm_state building and task_state spawning.#033[00m
Oct 14 05:37:16 np0005486808 nova_compute[259627]: 2025-10-14 09:37:16.659 2 DEBUG nova.compute.manager [req-951395ae-1b28-4cd2-bd3c-fe78b87e305c req-d6c27c3c-2d9a-4120-9dfe-a043ce494e85 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Received event network-vif-plugged-d4332d2f-fff2-4de9-811c-7d5ce2580b21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:37:16 np0005486808 nova_compute[259627]: 2025-10-14 09:37:16.659 2 DEBUG oslo_concurrency.lockutils [req-951395ae-1b28-4cd2-bd3c-fe78b87e305c req-d6c27c3c-2d9a-4120-9dfe-a043ce494e85 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:37:16 np0005486808 nova_compute[259627]: 2025-10-14 09:37:16.660 2 DEBUG oslo_concurrency.lockutils [req-951395ae-1b28-4cd2-bd3c-fe78b87e305c req-d6c27c3c-2d9a-4120-9dfe-a043ce494e85 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:37:16 np0005486808 nova_compute[259627]: 2025-10-14 09:37:16.660 2 DEBUG oslo_concurrency.lockutils [req-951395ae-1b28-4cd2-bd3c-fe78b87e305c req-d6c27c3c-2d9a-4120-9dfe-a043ce494e85 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:37:16 np0005486808 nova_compute[259627]: 2025-10-14 09:37:16.661 2 DEBUG nova.compute.manager [req-951395ae-1b28-4cd2-bd3c-fe78b87e305c req-d6c27c3c-2d9a-4120-9dfe-a043ce494e85 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Processing event network-vif-plugged-d4332d2f-fff2-4de9-811c-7d5ce2580b21 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:37:16 np0005486808 nova_compute[259627]: 2025-10-14 09:37:16.661 2 DEBUG nova.compute.manager [req-951395ae-1b28-4cd2-bd3c-fe78b87e305c req-d6c27c3c-2d9a-4120-9dfe-a043ce494e85 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Received event network-vif-plugged-d4332d2f-fff2-4de9-811c-7d5ce2580b21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:37:16 np0005486808 nova_compute[259627]: 2025-10-14 09:37:16.661 2 DEBUG oslo_concurrency.lockutils [req-951395ae-1b28-4cd2-bd3c-fe78b87e305c req-d6c27c3c-2d9a-4120-9dfe-a043ce494e85 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:37:16 np0005486808 nova_compute[259627]: 2025-10-14 09:37:16.662 2 DEBUG oslo_concurrency.lockutils [req-951395ae-1b28-4cd2-bd3c-fe78b87e305c req-d6c27c3c-2d9a-4120-9dfe-a043ce494e85 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:37:16 np0005486808 nova_compute[259627]: 2025-10-14 09:37:16.662 2 DEBUG oslo_concurrency.lockutils [req-951395ae-1b28-4cd2-bd3c-fe78b87e305c req-d6c27c3c-2d9a-4120-9dfe-a043ce494e85 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:37:16 np0005486808 nova_compute[259627]: 2025-10-14 09:37:16.663 2 DEBUG nova.compute.manager [req-951395ae-1b28-4cd2-bd3c-fe78b87e305c req-d6c27c3c-2d9a-4120-9dfe-a043ce494e85 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] No waiting events found dispatching network-vif-plugged-d4332d2f-fff2-4de9-811c-7d5ce2580b21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:37:16 np0005486808 nova_compute[259627]: 2025-10-14 09:37:16.663 2 WARNING nova.compute.manager [req-951395ae-1b28-4cd2-bd3c-fe78b87e305c req-d6c27c3c-2d9a-4120-9dfe-a043ce494e85 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Received unexpected event network-vif-plugged-d4332d2f-fff2-4de9-811c-7d5ce2580b21 for instance with vm_state building and task_state spawning.#033[00m
Oct 14 05:37:16 np0005486808 nova_compute[259627]: 2025-10-14 09:37:16.664 2 DEBUG nova.compute.manager [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:37:16 np0005486808 nova_compute[259627]: 2025-10-14 09:37:16.670 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434636.669818, bdfd070c-d036-4656-b797-efba7d4a4565 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:37:16 np0005486808 nova_compute[259627]: 2025-10-14 09:37:16.670 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:37:16 np0005486808 nova_compute[259627]: 2025-10-14 09:37:16.674 2 DEBUG nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:37:16 np0005486808 nova_compute[259627]: 2025-10-14 09:37:16.681 2 INFO nova.virt.libvirt.driver [-] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Instance spawned successfully.#033[00m
Oct 14 05:37:16 np0005486808 nova_compute[259627]: 2025-10-14 09:37:16.681 2 DEBUG nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:37:16 np0005486808 nova_compute[259627]: 2025-10-14 09:37:16.699 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:37:16 np0005486808 nova_compute[259627]: 2025-10-14 09:37:16.704 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:37:16 np0005486808 nova_compute[259627]: 2025-10-14 09:37:16.719 2 DEBUG nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:37:16 np0005486808 nova_compute[259627]: 2025-10-14 09:37:16.719 2 DEBUG nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:37:16 np0005486808 nova_compute[259627]: 2025-10-14 09:37:16.720 2 DEBUG nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:37:16 np0005486808 nova_compute[259627]: 2025-10-14 09:37:16.721 2 DEBUG nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:37:16 np0005486808 nova_compute[259627]: 2025-10-14 09:37:16.721 2 DEBUG nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:37:16 np0005486808 nova_compute[259627]: 2025-10-14 09:37:16.722 2 DEBUG nova.virt.libvirt.driver [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:37:16 np0005486808 nova_compute[259627]: 2025-10-14 09:37:16.741 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:37:16 np0005486808 nova_compute[259627]: 2025-10-14 09:37:16.804 2 INFO nova.compute.manager [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Took 12.92 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:37:16 np0005486808 nova_compute[259627]: 2025-10-14 09:37:16.804 2 DEBUG nova.compute.manager [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:37:16 np0005486808 nova_compute[259627]: 2025-10-14 09:37:16.880 2 INFO nova.compute.manager [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Took 14.10 seconds to build instance.#033[00m
Oct 14 05:37:16 np0005486808 nova_compute[259627]: 2025-10-14 09:37:16.896 2 DEBUG oslo_concurrency.lockutils [None req-2b9f26f9-2c25-4753-85a5-31aac56e0f26 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:37:17 np0005486808 nova_compute[259627]: 2025-10-14 09:37:17.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:37:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2525: 305 pgs: 305 active+clean; 88 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Oct 14 05:37:20 np0005486808 nova_compute[259627]: 2025-10-14 09:37:20.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:20.338 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:37:20 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:20.339 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 14 05:37:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2526: 305 pgs: 305 active+clean; 88 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 05:37:21 np0005486808 ovn_controller[152662]: 2025-10-14T09:37:21Z|01605|binding|INFO|Releasing lport 49fe400f-9e76-42ad-a72c-3f9a5bf50e43 from this chassis (sb_readonly=0)
Oct 14 05:37:21 np0005486808 ovn_controller[152662]: 2025-10-14T09:37:21Z|01606|binding|INFO|Releasing lport 7ea31872-5902-4f26-8d38-70f94c9c61fa from this chassis (sb_readonly=0)
Oct 14 05:37:21 np0005486808 nova_compute[259627]: 2025-10-14 09:37:21.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:21 np0005486808 NetworkManager[44885]: <info>  [1760434641.2268] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/658)
Oct 14 05:37:21 np0005486808 NetworkManager[44885]: <info>  [1760434641.2291] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/659)
Oct 14 05:37:21 np0005486808 ovn_controller[152662]: 2025-10-14T09:37:21Z|01607|binding|INFO|Releasing lport 49fe400f-9e76-42ad-a72c-3f9a5bf50e43 from this chassis (sb_readonly=0)
Oct 14 05:37:21 np0005486808 ovn_controller[152662]: 2025-10-14T09:37:21Z|01608|binding|INFO|Releasing lport 7ea31872-5902-4f26-8d38-70f94c9c61fa from this chassis (sb_readonly=0)
Oct 14 05:37:21 np0005486808 nova_compute[259627]: 2025-10-14 09:37:21.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:21 np0005486808 nova_compute[259627]: 2025-10-14 09:37:21.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:21 np0005486808 nova_compute[259627]: 2025-10-14 09:37:21.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:21 np0005486808 nova_compute[259627]: 2025-10-14 09:37:21.567 2 DEBUG nova.compute.manager [req-03d7a12b-67c3-41b2-bc43-aff5342d18d8 req-f35699dc-7eb6-49c4-9052-45fe4c28814a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Received event network-changed-d3b7ded4-91fa-46dc-b6b9-e6e630c275ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:37:21 np0005486808 nova_compute[259627]: 2025-10-14 09:37:21.568 2 DEBUG nova.compute.manager [req-03d7a12b-67c3-41b2-bc43-aff5342d18d8 req-f35699dc-7eb6-49c4-9052-45fe4c28814a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Refreshing instance network info cache due to event network-changed-d3b7ded4-91fa-46dc-b6b9-e6e630c275ab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:37:21 np0005486808 nova_compute[259627]: 2025-10-14 09:37:21.568 2 DEBUG oslo_concurrency.lockutils [req-03d7a12b-67c3-41b2-bc43-aff5342d18d8 req-f35699dc-7eb6-49c4-9052-45fe4c28814a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-bdfd070c-d036-4656-b797-efba7d4a4565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:37:21 np0005486808 nova_compute[259627]: 2025-10-14 09:37:21.569 2 DEBUG oslo_concurrency.lockutils [req-03d7a12b-67c3-41b2-bc43-aff5342d18d8 req-f35699dc-7eb6-49c4-9052-45fe4c28814a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-bdfd070c-d036-4656-b797-efba7d4a4565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:37:21 np0005486808 nova_compute[259627]: 2025-10-14 09:37:21.569 2 DEBUG nova.network.neutron [req-03d7a12b-67c3-41b2-bc43-aff5342d18d8 req-f35699dc-7eb6-49c4-9052-45fe4c28814a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Refreshing network info cache for port d3b7ded4-91fa-46dc-b6b9-e6e630c275ab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:37:22 np0005486808 nova_compute[259627]: 2025-10-14 09:37:22.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2527: 305 pgs: 305 active+clean; 88 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 05:37:22 np0005486808 podman[412100]: 2025-10-14 09:37:22.697534786 +0000 UTC m=+0.084855073 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 14 05:37:22 np0005486808 podman[412099]: 2025-10-14 09:37:22.702500338 +0000 UTC m=+0.092146653 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 14 05:37:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:37:24 np0005486808 nova_compute[259627]: 2025-10-14 09:37:24.006 2 DEBUG nova.network.neutron [req-03d7a12b-67c3-41b2-bc43-aff5342d18d8 req-f35699dc-7eb6-49c4-9052-45fe4c28814a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Updated VIF entry in instance network info cache for port d3b7ded4-91fa-46dc-b6b9-e6e630c275ab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:37:24 np0005486808 nova_compute[259627]: 2025-10-14 09:37:24.007 2 DEBUG nova.network.neutron [req-03d7a12b-67c3-41b2-bc43-aff5342d18d8 req-f35699dc-7eb6-49c4-9052-45fe4c28814a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Updating instance_info_cache with network_info: [{"id": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "address": "fa:16:3e:fc:b2:53", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b7ded4-91", "ovs_interfaceid": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "address": "fa:16:3e:98:69:75", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe98:6975", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4332d2f-ff", "ovs_interfaceid": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:37:24 np0005486808 nova_compute[259627]: 2025-10-14 09:37:24.028 2 DEBUG oslo_concurrency.lockutils [req-03d7a12b-67c3-41b2-bc43-aff5342d18d8 req-f35699dc-7eb6-49c4-9052-45fe4c28814a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-bdfd070c-d036-4656-b797-efba7d4a4565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:37:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2528: 305 pgs: 305 active+clean; 88 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 05:37:26 np0005486808 nova_compute[259627]: 2025-10-14 09:37:26.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2529: 305 pgs: 305 active+clean; 88 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 05:37:27 np0005486808 nova_compute[259627]: 2025-10-14 09:37:27.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:27 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Oct 14 05:37:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:37:28 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:28.342 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:37:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2530: 305 pgs: 305 active+clean; 88 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Oct 14 05:37:28 np0005486808 ovn_controller[152662]: 2025-10-14T09:37:28Z|00190|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fc:b2:53 10.100.0.10
Oct 14 05:37:28 np0005486808 ovn_controller[152662]: 2025-10-14T09:37:28Z|00191|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fc:b2:53 10.100.0.10
Oct 14 05:37:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2531: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Oct 14 05:37:31 np0005486808 nova_compute[259627]: 2025-10-14 09:37:31.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:32 np0005486808 nova_compute[259627]: 2025-10-14 09:37:32.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2532: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 05:37:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:37:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:37:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:37:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:37:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:37:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:37:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:37:32
Oct 14 05:37:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:37:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:37:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'vms', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', 'images', '.mgr', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log']
Oct 14 05:37:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:37:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:37:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:37:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:37:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:37:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:37:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:37:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:37:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:37:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:37:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:37:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:37:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2533: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 05:37:36 np0005486808 nova_compute[259627]: 2025-10-14 09:37:36.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2534: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 05:37:37 np0005486808 nova_compute[259627]: 2025-10-14 09:37:37.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:37 np0005486808 podman[412142]: 2025-10-14 09:37:37.666827749 +0000 UTC m=+0.075006861 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 05:37:37 np0005486808 podman[412141]: 2025-10-14 09:37:37.702897974 +0000 UTC m=+0.114932171 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 05:37:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:37:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2535: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 05:37:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2536: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 05:37:41 np0005486808 nova_compute[259627]: 2025-10-14 09:37:41.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:42 np0005486808 nova_compute[259627]: 2025-10-14 09:37:42.504 2 DEBUG oslo_concurrency.lockutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "4e23c3df-9710-4287-9890-cdae2d551fc0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:37:42 np0005486808 nova_compute[259627]: 2025-10-14 09:37:42.504 2 DEBUG oslo_concurrency.lockutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:37:42 np0005486808 nova_compute[259627]: 2025-10-14 09:37:42.527 2 DEBUG nova.compute.manager [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:37:42 np0005486808 nova_compute[259627]: 2025-10-14 09:37:42.603 2 DEBUG oslo_concurrency.lockutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:37:42 np0005486808 nova_compute[259627]: 2025-10-14 09:37:42.603 2 DEBUG oslo_concurrency.lockutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:37:42 np0005486808 nova_compute[259627]: 2025-10-14 09:37:42.613 2 DEBUG nova.virt.hardware [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:37:42 np0005486808 nova_compute[259627]: 2025-10-14 09:37:42.614 2 INFO nova.compute.claims [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:37:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2537: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Oct 14 05:37:42 np0005486808 nova_compute[259627]: 2025-10-14 09:37:42.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:42 np0005486808 nova_compute[259627]: 2025-10-14 09:37:42.748 2 DEBUG oslo_concurrency.processutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:37:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:37:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:37:43 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/418417672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:37:43 np0005486808 nova_compute[259627]: 2025-10-14 09:37:43.247 2 DEBUG oslo_concurrency.processutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:37:43 np0005486808 nova_compute[259627]: 2025-10-14 09:37:43.256 2 DEBUG nova.compute.provider_tree [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:37:43 np0005486808 nova_compute[259627]: 2025-10-14 09:37:43.285 2 DEBUG nova.scheduler.client.report [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:37:43 np0005486808 nova_compute[259627]: 2025-10-14 09:37:43.321 2 DEBUG oslo_concurrency.lockutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:37:43 np0005486808 nova_compute[259627]: 2025-10-14 09:37:43.323 2 DEBUG nova.compute.manager [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:37:43 np0005486808 nova_compute[259627]: 2025-10-14 09:37:43.380 2 DEBUG nova.compute.manager [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:37:43 np0005486808 nova_compute[259627]: 2025-10-14 09:37:43.380 2 DEBUG nova.network.neutron [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:37:43 np0005486808 nova_compute[259627]: 2025-10-14 09:37:43.404 2 INFO nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:37:43 np0005486808 nova_compute[259627]: 2025-10-14 09:37:43.420 2 DEBUG nova.compute.manager [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:37:43 np0005486808 nova_compute[259627]: 2025-10-14 09:37:43.523 2 DEBUG nova.compute.manager [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:37:43 np0005486808 nova_compute[259627]: 2025-10-14 09:37:43.525 2 DEBUG nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:37:43 np0005486808 nova_compute[259627]: 2025-10-14 09:37:43.525 2 INFO nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Creating image(s)#033[00m
Oct 14 05:37:43 np0005486808 nova_compute[259627]: 2025-10-14 09:37:43.559 2 DEBUG nova.storage.rbd_utils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 4e23c3df-9710-4287-9890-cdae2d551fc0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:37:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:37:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:37:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:37:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:37:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007589550978381194 of space, bias 1.0, pg target 0.22768652935143582 quantized to 32 (current 32)
Oct 14 05:37:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:37:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:37:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:37:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:37:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:37:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 05:37:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:37:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:37:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:37:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:37:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:37:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:37:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:37:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:37:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:37:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:37:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:37:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:37:43 np0005486808 nova_compute[259627]: 2025-10-14 09:37:43.598 2 DEBUG nova.storage.rbd_utils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 4e23c3df-9710-4287-9890-cdae2d551fc0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:37:43 np0005486808 nova_compute[259627]: 2025-10-14 09:37:43.637 2 DEBUG nova.storage.rbd_utils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 4e23c3df-9710-4287-9890-cdae2d551fc0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:37:43 np0005486808 nova_compute[259627]: 2025-10-14 09:37:43.642 2 DEBUG oslo_concurrency.processutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:37:43 np0005486808 nova_compute[259627]: 2025-10-14 09:37:43.695 2 DEBUG nova.policy [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30cd4a7832e74a8ba07238937405c4ea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:37:43 np0005486808 nova_compute[259627]: 2025-10-14 09:37:43.731 2 DEBUG oslo_concurrency.processutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:37:43 np0005486808 nova_compute[259627]: 2025-10-14 09:37:43.732 2 DEBUG oslo_concurrency.lockutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:37:43 np0005486808 nova_compute[259627]: 2025-10-14 09:37:43.733 2 DEBUG oslo_concurrency.lockutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:37:43 np0005486808 nova_compute[259627]: 2025-10-14 09:37:43.733 2 DEBUG oslo_concurrency.lockutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:37:43 np0005486808 nova_compute[259627]: 2025-10-14 09:37:43.759 2 DEBUG nova.storage.rbd_utils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 4e23c3df-9710-4287-9890-cdae2d551fc0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:37:43 np0005486808 nova_compute[259627]: 2025-10-14 09:37:43.765 2 DEBUG oslo_concurrency.processutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 4e23c3df-9710-4287-9890-cdae2d551fc0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:37:44 np0005486808 nova_compute[259627]: 2025-10-14 09:37:44.211 2 DEBUG oslo_concurrency.processutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 4e23c3df-9710-4287-9890-cdae2d551fc0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:37:44 np0005486808 nova_compute[259627]: 2025-10-14 09:37:44.304 2 DEBUG nova.storage.rbd_utils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] resizing rbd image 4e23c3df-9710-4287-9890-cdae2d551fc0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:37:44 np0005486808 nova_compute[259627]: 2025-10-14 09:37:44.345 2 DEBUG nova.network.neutron [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Successfully created port: 31c29d30-7edf-486a-a168-0356f62ab3b9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:37:44 np0005486808 nova_compute[259627]: 2025-10-14 09:37:44.420 2 DEBUG nova.objects.instance [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'migration_context' on Instance uuid 4e23c3df-9710-4287-9890-cdae2d551fc0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:37:44 np0005486808 nova_compute[259627]: 2025-10-14 09:37:44.447 2 DEBUG nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:37:44 np0005486808 nova_compute[259627]: 2025-10-14 09:37:44.447 2 DEBUG nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Ensure instance console log exists: /var/lib/nova/instances/4e23c3df-9710-4287-9890-cdae2d551fc0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:37:44 np0005486808 nova_compute[259627]: 2025-10-14 09:37:44.448 2 DEBUG oslo_concurrency.lockutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:37:44 np0005486808 nova_compute[259627]: 2025-10-14 09:37:44.449 2 DEBUG oslo_concurrency.lockutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:37:44 np0005486808 nova_compute[259627]: 2025-10-14 09:37:44.449 2 DEBUG oslo_concurrency.lockutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:37:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2538: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Oct 14 05:37:45 np0005486808 nova_compute[259627]: 2025-10-14 09:37:45.427 2 DEBUG nova.network.neutron [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Successfully created port: b67fadaf-4e6a-49b7-b340-4a8659d6216b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:37:46 np0005486808 nova_compute[259627]: 2025-10-14 09:37:46.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:46 np0005486808 nova_compute[259627]: 2025-10-14 09:37:46.562 2 DEBUG nova.network.neutron [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Successfully updated port: 31c29d30-7edf-486a-a168-0356f62ab3b9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:37:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2539: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:37:46 np0005486808 nova_compute[259627]: 2025-10-14 09:37:46.664 2 DEBUG nova.compute.manager [req-ab53597d-f11f-4dd7-9351-2566ef206d48 req-e6bbdc4d-3c2c-46d9-9b4d-ed7b63b59407 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Received event network-changed-31c29d30-7edf-486a-a168-0356f62ab3b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:37:46 np0005486808 nova_compute[259627]: 2025-10-14 09:37:46.664 2 DEBUG nova.compute.manager [req-ab53597d-f11f-4dd7-9351-2566ef206d48 req-e6bbdc4d-3c2c-46d9-9b4d-ed7b63b59407 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Refreshing instance network info cache due to event network-changed-31c29d30-7edf-486a-a168-0356f62ab3b9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:37:46 np0005486808 nova_compute[259627]: 2025-10-14 09:37:46.664 2 DEBUG oslo_concurrency.lockutils [req-ab53597d-f11f-4dd7-9351-2566ef206d48 req-e6bbdc4d-3c2c-46d9-9b4d-ed7b63b59407 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-4e23c3df-9710-4287-9890-cdae2d551fc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:37:46 np0005486808 nova_compute[259627]: 2025-10-14 09:37:46.665 2 DEBUG oslo_concurrency.lockutils [req-ab53597d-f11f-4dd7-9351-2566ef206d48 req-e6bbdc4d-3c2c-46d9-9b4d-ed7b63b59407 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-4e23c3df-9710-4287-9890-cdae2d551fc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:37:46 np0005486808 nova_compute[259627]: 2025-10-14 09:37:46.665 2 DEBUG nova.network.neutron [req-ab53597d-f11f-4dd7-9351-2566ef206d48 req-e6bbdc4d-3c2c-46d9-9b4d-ed7b63b59407 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Refreshing network info cache for port 31c29d30-7edf-486a-a168-0356f62ab3b9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:37:46 np0005486808 nova_compute[259627]: 2025-10-14 09:37:46.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:37:47 np0005486808 nova_compute[259627]: 2025-10-14 09:37:47.119 2 DEBUG nova.network.neutron [req-ab53597d-f11f-4dd7-9351-2566ef206d48 req-e6bbdc4d-3c2c-46d9-9b4d-ed7b63b59407 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:37:47 np0005486808 nova_compute[259627]: 2025-10-14 09:37:47.501 2 DEBUG nova.network.neutron [req-ab53597d-f11f-4dd7-9351-2566ef206d48 req-e6bbdc4d-3c2c-46d9-9b4d-ed7b63b59407 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:37:47 np0005486808 nova_compute[259627]: 2025-10-14 09:37:47.523 2 DEBUG oslo_concurrency.lockutils [req-ab53597d-f11f-4dd7-9351-2566ef206d48 req-e6bbdc4d-3c2c-46d9-9b4d-ed7b63b59407 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-4e23c3df-9710-4287-9890-cdae2d551fc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:37:47 np0005486808 nova_compute[259627]: 2025-10-14 09:37:47.534 2 DEBUG nova.network.neutron [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Successfully updated port: b67fadaf-4e6a-49b7-b340-4a8659d6216b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:37:47 np0005486808 nova_compute[259627]: 2025-10-14 09:37:47.559 2 DEBUG oslo_concurrency.lockutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "refresh_cache-4e23c3df-9710-4287-9890-cdae2d551fc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:37:47 np0005486808 nova_compute[259627]: 2025-10-14 09:37:47.559 2 DEBUG oslo_concurrency.lockutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquired lock "refresh_cache-4e23c3df-9710-4287-9890-cdae2d551fc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:37:47 np0005486808 nova_compute[259627]: 2025-10-14 09:37:47.560 2 DEBUG nova.network.neutron [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:37:47 np0005486808 nova_compute[259627]: 2025-10-14 09:37:47.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:48 np0005486808 nova_compute[259627]: 2025-10-14 09:37:48.091 2 DEBUG nova.network.neutron [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:37:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:37:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2540: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:37:48 np0005486808 nova_compute[259627]: 2025-10-14 09:37:48.825 2 DEBUG nova.compute.manager [req-3a32bcae-a490-48ab-a1dd-76c19272ee9f req-d25b100b-49aa-451c-9e9e-c2fc5c0bb12c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Received event network-changed-b67fadaf-4e6a-49b7-b340-4a8659d6216b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:37:48 np0005486808 nova_compute[259627]: 2025-10-14 09:37:48.826 2 DEBUG nova.compute.manager [req-3a32bcae-a490-48ab-a1dd-76c19272ee9f req-d25b100b-49aa-451c-9e9e-c2fc5c0bb12c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Refreshing instance network info cache due to event network-changed-b67fadaf-4e6a-49b7-b340-4a8659d6216b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:37:48 np0005486808 nova_compute[259627]: 2025-10-14 09:37:48.826 2 DEBUG oslo_concurrency.lockutils [req-3a32bcae-a490-48ab-a1dd-76c19272ee9f req-d25b100b-49aa-451c-9e9e-c2fc5c0bb12c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-4e23c3df-9710-4287-9890-cdae2d551fc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:37:49 np0005486808 nova_compute[259627]: 2025-10-14 09:37:49.814 2 DEBUG nova.network.neutron [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Updating instance_info_cache with network_info: [{"id": "31c29d30-7edf-486a-a168-0356f62ab3b9", "address": "fa:16:3e:7f:2b:85", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31c29d30-7e", "ovs_interfaceid": "31c29d30-7edf-486a-a168-0356f62ab3b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "address": "fa:16:3e:9e:0a:24", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9e:a24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb67fadaf-4e", "ovs_interfaceid": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:37:49 np0005486808 nova_compute[259627]: 2025-10-14 09:37:49.836 2 DEBUG oslo_concurrency.lockutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Releasing lock "refresh_cache-4e23c3df-9710-4287-9890-cdae2d551fc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:37:49 np0005486808 nova_compute[259627]: 2025-10-14 09:37:49.836 2 DEBUG nova.compute.manager [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Instance network_info: |[{"id": "31c29d30-7edf-486a-a168-0356f62ab3b9", "address": "fa:16:3e:7f:2b:85", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31c29d30-7e", "ovs_interfaceid": "31c29d30-7edf-486a-a168-0356f62ab3b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "address": "fa:16:3e:9e:0a:24", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9e:a24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb67fadaf-4e", "ovs_interfaceid": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:37:49 np0005486808 nova_compute[259627]: 2025-10-14 09:37:49.837 2 DEBUG oslo_concurrency.lockutils [req-3a32bcae-a490-48ab-a1dd-76c19272ee9f req-d25b100b-49aa-451c-9e9e-c2fc5c0bb12c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-4e23c3df-9710-4287-9890-cdae2d551fc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:37:49 np0005486808 nova_compute[259627]: 2025-10-14 09:37:49.838 2 DEBUG nova.network.neutron [req-3a32bcae-a490-48ab-a1dd-76c19272ee9f req-d25b100b-49aa-451c-9e9e-c2fc5c0bb12c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Refreshing network info cache for port b67fadaf-4e6a-49b7-b340-4a8659d6216b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:37:49 np0005486808 nova_compute[259627]: 2025-10-14 09:37:49.844 2 DEBUG nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Start _get_guest_xml network_info=[{"id": "31c29d30-7edf-486a-a168-0356f62ab3b9", "address": "fa:16:3e:7f:2b:85", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31c29d30-7e", "ovs_interfaceid": "31c29d30-7edf-486a-a168-0356f62ab3b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "address": "fa:16:3e:9e:0a:24", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9e:a24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb67fadaf-4e", "ovs_interfaceid": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:37:49 np0005486808 nova_compute[259627]: 2025-10-14 09:37:49.851 2 WARNING nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:37:49 np0005486808 nova_compute[259627]: 2025-10-14 09:37:49.861 2 DEBUG nova.virt.libvirt.host [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:37:49 np0005486808 nova_compute[259627]: 2025-10-14 09:37:49.862 2 DEBUG nova.virt.libvirt.host [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:37:49 np0005486808 nova_compute[259627]: 2025-10-14 09:37:49.868 2 DEBUG nova.virt.libvirt.host [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:37:49 np0005486808 nova_compute[259627]: 2025-10-14 09:37:49.869 2 DEBUG nova.virt.libvirt.host [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:37:49 np0005486808 nova_compute[259627]: 2025-10-14 09:37:49.870 2 DEBUG nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:37:49 np0005486808 nova_compute[259627]: 2025-10-14 09:37:49.871 2 DEBUG nova.virt.hardware [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:37:49 np0005486808 nova_compute[259627]: 2025-10-14 09:37:49.872 2 DEBUG nova.virt.hardware [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:37:49 np0005486808 nova_compute[259627]: 2025-10-14 09:37:49.872 2 DEBUG nova.virt.hardware [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:37:49 np0005486808 nova_compute[259627]: 2025-10-14 09:37:49.873 2 DEBUG nova.virt.hardware [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:37:49 np0005486808 nova_compute[259627]: 2025-10-14 09:37:49.873 2 DEBUG nova.virt.hardware [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:37:49 np0005486808 nova_compute[259627]: 2025-10-14 09:37:49.874 2 DEBUG nova.virt.hardware [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:37:49 np0005486808 nova_compute[259627]: 2025-10-14 09:37:49.875 2 DEBUG nova.virt.hardware [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:37:49 np0005486808 nova_compute[259627]: 2025-10-14 09:37:49.875 2 DEBUG nova.virt.hardware [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:37:49 np0005486808 nova_compute[259627]: 2025-10-14 09:37:49.876 2 DEBUG nova.virt.hardware [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:37:49 np0005486808 nova_compute[259627]: 2025-10-14 09:37:49.876 2 DEBUG nova.virt.hardware [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:37:49 np0005486808 nova_compute[259627]: 2025-10-14 09:37:49.877 2 DEBUG nova.virt.hardware [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:37:49 np0005486808 nova_compute[259627]: 2025-10-14 09:37:49.882 2 DEBUG oslo_concurrency.processutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:37:50 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:37:50 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2417527090' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:37:50 np0005486808 nova_compute[259627]: 2025-10-14 09:37:50.368 2 DEBUG oslo_concurrency.processutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:37:50 np0005486808 nova_compute[259627]: 2025-10-14 09:37:50.398 2 DEBUG nova.storage.rbd_utils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 4e23c3df-9710-4287-9890-cdae2d551fc0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:37:50 np0005486808 nova_compute[259627]: 2025-10-14 09:37:50.403 2 DEBUG oslo_concurrency.processutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:37:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2541: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:37:50 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:37:50 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1785500214' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:37:50 np0005486808 nova_compute[259627]: 2025-10-14 09:37:50.926 2 DEBUG oslo_concurrency.processutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:37:50 np0005486808 nova_compute[259627]: 2025-10-14 09:37:50.927 2 DEBUG nova.virt.libvirt.vif [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:37:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2012583976',display_name='tempest-TestGettingAddress-server-2012583976',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2012583976',id=147,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMmIsbrf38DSeYfU0D9g6Pa5xZfdqfiS9Azfj2PL9jRlp2LEaDRc8Y0PnazdZrJgmswDWR1X7Y7Ef3rL30Sf7OIa7xQNBBPmRTRkIXeX3JkPwVJc6ryIngStVh0nN8o8KQ==',key_name='tempest-TestGettingAddress-932007528',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-mne0wzqh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:37:43Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=4e23c3df-9710-4287-9890-cdae2d551fc0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "31c29d30-7edf-486a-a168-0356f62ab3b9", "address": "fa:16:3e:7f:2b:85", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31c29d30-7e", "ovs_interfaceid": "31c29d30-7edf-486a-a168-0356f62ab3b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:37:50 np0005486808 nova_compute[259627]: 2025-10-14 09:37:50.928 2 DEBUG nova.network.os_vif_util [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "31c29d30-7edf-486a-a168-0356f62ab3b9", "address": "fa:16:3e:7f:2b:85", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31c29d30-7e", "ovs_interfaceid": "31c29d30-7edf-486a-a168-0356f62ab3b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:37:50 np0005486808 nova_compute[259627]: 2025-10-14 09:37:50.928 2 DEBUG nova.network.os_vif_util [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:2b:85,bridge_name='br-int',has_traffic_filtering=True,id=31c29d30-7edf-486a-a168-0356f62ab3b9,network=Network(cb1842f7-933b-4c76-aa59-c55590c98ec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31c29d30-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:37:50 np0005486808 nova_compute[259627]: 2025-10-14 09:37:50.929 2 DEBUG nova.virt.libvirt.vif [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:37:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2012583976',display_name='tempest-TestGettingAddress-server-2012583976',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2012583976',id=147,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMmIsbrf38DSeYfU0D9g6Pa5xZfdqfiS9Azfj2PL9jRlp2LEaDRc8Y0PnazdZrJgmswDWR1X7Y7Ef3rL30Sf7OIa7xQNBBPmRTRkIXeX3JkPwVJc6ryIngStVh0nN8o8KQ==',key_name='tempest-TestGettingAddress-932007528',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-mne0wzqh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:37:43Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=4e23c3df-9710-4287-9890-cdae2d551fc0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "address": "fa:16:3e:9e:0a:24", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9e:a24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb67fadaf-4e", "ovs_interfaceid": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:37:50 np0005486808 nova_compute[259627]: 2025-10-14 09:37:50.929 2 DEBUG nova.network.os_vif_util [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "address": "fa:16:3e:9e:0a:24", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9e:a24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb67fadaf-4e", "ovs_interfaceid": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:37:50 np0005486808 nova_compute[259627]: 2025-10-14 09:37:50.930 2 DEBUG nova.network.os_vif_util [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:0a:24,bridge_name='br-int',has_traffic_filtering=True,id=b67fadaf-4e6a-49b7-b340-4a8659d6216b,network=Network(4914ad10-cd65-4b9a-8ebb-43ebafc5f222),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb67fadaf-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:37:50 np0005486808 nova_compute[259627]: 2025-10-14 09:37:50.931 2 DEBUG nova.objects.instance [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'pci_devices' on Instance uuid 4e23c3df-9710-4287-9890-cdae2d551fc0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:37:50 np0005486808 nova_compute[259627]: 2025-10-14 09:37:50.947 2 DEBUG nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:37:50 np0005486808 nova_compute[259627]:  <uuid>4e23c3df-9710-4287-9890-cdae2d551fc0</uuid>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:  <name>instance-00000093</name>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:37:50 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:      <nova:name>tempest-TestGettingAddress-server-2012583976</nova:name>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:37:49</nova:creationTime>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:37:50 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:        <nova:user uuid="30cd4a7832e74a8ba07238937405c4ea">tempest-TestGettingAddress-262346303-project-member</nova:user>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:        <nova:project uuid="878ad3f42dd94bef8cc66ab3d67f03bf">tempest-TestGettingAddress-262346303</nova:project>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:        <nova:port uuid="31c29d30-7edf-486a-a168-0356f62ab3b9">
Oct 14 05:37:50 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:        <nova:port uuid="b67fadaf-4e6a-49b7-b340-4a8659d6216b">
Oct 14 05:37:50 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe9e:a24" ipVersion="6"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:      <entry name="serial">4e23c3df-9710-4287-9890-cdae2d551fc0</entry>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:      <entry name="uuid">4e23c3df-9710-4287-9890-cdae2d551fc0</entry>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:37:50 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/4e23c3df-9710-4287-9890-cdae2d551fc0_disk">
Oct 14 05:37:50 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:37:50 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:37:50 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/4e23c3df-9710-4287-9890-cdae2d551fc0_disk.config">
Oct 14 05:37:50 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:37:50 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:37:50 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:7f:2b:85"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:      <target dev="tap31c29d30-7e"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:37:50 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:9e:0a:24"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:      <target dev="tapb67fadaf-4e"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:37:50 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/4e23c3df-9710-4287-9890-cdae2d551fc0/console.log" append="off"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:37:50 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:37:50 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:37:50 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:37:50 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:37:50 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:37:50 np0005486808 nova_compute[259627]: 2025-10-14 09:37:50.949 2 DEBUG nova.compute.manager [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Preparing to wait for external event network-vif-plugged-31c29d30-7edf-486a-a168-0356f62ab3b9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:37:50 np0005486808 nova_compute[259627]: 2025-10-14 09:37:50.949 2 DEBUG oslo_concurrency.lockutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:37:50 np0005486808 nova_compute[259627]: 2025-10-14 09:37:50.949 2 DEBUG oslo_concurrency.lockutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:37:50 np0005486808 nova_compute[259627]: 2025-10-14 09:37:50.949 2 DEBUG oslo_concurrency.lockutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:37:50 np0005486808 nova_compute[259627]: 2025-10-14 09:37:50.949 2 DEBUG nova.compute.manager [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Preparing to wait for external event network-vif-plugged-b67fadaf-4e6a-49b7-b340-4a8659d6216b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:37:50 np0005486808 nova_compute[259627]: 2025-10-14 09:37:50.950 2 DEBUG oslo_concurrency.lockutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:37:50 np0005486808 nova_compute[259627]: 2025-10-14 09:37:50.950 2 DEBUG oslo_concurrency.lockutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:37:50 np0005486808 nova_compute[259627]: 2025-10-14 09:37:50.950 2 DEBUG oslo_concurrency.lockutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:37:50 np0005486808 nova_compute[259627]: 2025-10-14 09:37:50.951 2 DEBUG nova.virt.libvirt.vif [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:37:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2012583976',display_name='tempest-TestGettingAddress-server-2012583976',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2012583976',id=147,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMmIsbrf38DSeYfU0D9g6Pa5xZfdqfiS9Azfj2PL9jRlp2LEaDRc8Y0PnazdZrJgmswDWR1X7Y7Ef3rL30Sf7OIa7xQNBBPmRTRkIXeX3JkPwVJc6ryIngStVh0nN8o8KQ==',key_name='tempest-TestGettingAddress-932007528',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-mne0wzqh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:37:43Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=4e23c3df-9710-4287-9890-cdae2d551fc0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "31c29d30-7edf-486a-a168-0356f62ab3b9", "address": "fa:16:3e:7f:2b:85", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31c29d30-7e", "ovs_interfaceid": "31c29d30-7edf-486a-a168-0356f62ab3b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:37:50 np0005486808 nova_compute[259627]: 2025-10-14 09:37:50.951 2 DEBUG nova.network.os_vif_util [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "31c29d30-7edf-486a-a168-0356f62ab3b9", "address": "fa:16:3e:7f:2b:85", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31c29d30-7e", "ovs_interfaceid": "31c29d30-7edf-486a-a168-0356f62ab3b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:37:50 np0005486808 nova_compute[259627]: 2025-10-14 09:37:50.951 2 DEBUG nova.network.os_vif_util [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:2b:85,bridge_name='br-int',has_traffic_filtering=True,id=31c29d30-7edf-486a-a168-0356f62ab3b9,network=Network(cb1842f7-933b-4c76-aa59-c55590c98ec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31c29d30-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:37:50 np0005486808 nova_compute[259627]: 2025-10-14 09:37:50.952 2 DEBUG os_vif [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:2b:85,bridge_name='br-int',has_traffic_filtering=True,id=31c29d30-7edf-486a-a168-0356f62ab3b9,network=Network(cb1842f7-933b-4c76-aa59-c55590c98ec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31c29d30-7e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:37:50 np0005486808 nova_compute[259627]: 2025-10-14 09:37:50.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:50 np0005486808 nova_compute[259627]: 2025-10-14 09:37:50.953 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:37:50 np0005486808 nova_compute[259627]: 2025-10-14 09:37:50.953 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:37:50 np0005486808 nova_compute[259627]: 2025-10-14 09:37:50.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:50 np0005486808 nova_compute[259627]: 2025-10-14 09:37:50.956 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31c29d30-7e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:37:50 np0005486808 nova_compute[259627]: 2025-10-14 09:37:50.957 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap31c29d30-7e, col_values=(('external_ids', {'iface-id': '31c29d30-7edf-486a-a168-0356f62ab3b9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7f:2b:85', 'vm-uuid': '4e23c3df-9710-4287-9890-cdae2d551fc0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:37:50 np0005486808 nova_compute[259627]: 2025-10-14 09:37:50.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:37:50 np0005486808 nova_compute[259627]: 2025-10-14 09:37:50.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:50 np0005486808 NetworkManager[44885]: <info>  [1760434670.9962] manager: (tap31c29d30-7e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/660)
Oct 14 05:37:50 np0005486808 nova_compute[259627]: 2025-10-14 09:37:50.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:37:51 np0005486808 nova_compute[259627]: 2025-10-14 09:37:51.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:51 np0005486808 nova_compute[259627]: 2025-10-14 09:37:51.006 2 INFO os_vif [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:2b:85,bridge_name='br-int',has_traffic_filtering=True,id=31c29d30-7edf-486a-a168-0356f62ab3b9,network=Network(cb1842f7-933b-4c76-aa59-c55590c98ec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31c29d30-7e')#033[00m
Oct 14 05:37:51 np0005486808 nova_compute[259627]: 2025-10-14 09:37:51.006 2 DEBUG nova.virt.libvirt.vif [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:37:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2012583976',display_name='tempest-TestGettingAddress-server-2012583976',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2012583976',id=147,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMmIsbrf38DSeYfU0D9g6Pa5xZfdqfiS9Azfj2PL9jRlp2LEaDRc8Y0PnazdZrJgmswDWR1X7Y7Ef3rL30Sf7OIa7xQNBBPmRTRkIXeX3JkPwVJc6ryIngStVh0nN8o8KQ==',key_name='tempest-TestGettingAddress-932007528',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-mne0wzqh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:37:43Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=4e23c3df-9710-4287-9890-cdae2d551fc0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "address": "fa:16:3e:9e:0a:24", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9e:a24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb67fadaf-4e", "ovs_interfaceid": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:37:51 np0005486808 nova_compute[259627]: 2025-10-14 09:37:51.007 2 DEBUG nova.network.os_vif_util [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "address": "fa:16:3e:9e:0a:24", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9e:a24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb67fadaf-4e", "ovs_interfaceid": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:37:51 np0005486808 nova_compute[259627]: 2025-10-14 09:37:51.007 2 DEBUG nova.network.os_vif_util [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:0a:24,bridge_name='br-int',has_traffic_filtering=True,id=b67fadaf-4e6a-49b7-b340-4a8659d6216b,network=Network(4914ad10-cd65-4b9a-8ebb-43ebafc5f222),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb67fadaf-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:37:51 np0005486808 nova_compute[259627]: 2025-10-14 09:37:51.007 2 DEBUG os_vif [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:0a:24,bridge_name='br-int',has_traffic_filtering=True,id=b67fadaf-4e6a-49b7-b340-4a8659d6216b,network=Network(4914ad10-cd65-4b9a-8ebb-43ebafc5f222),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb67fadaf-4e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:37:51 np0005486808 nova_compute[259627]: 2025-10-14 09:37:51.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:51 np0005486808 nova_compute[259627]: 2025-10-14 09:37:51.008 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:37:51 np0005486808 nova_compute[259627]: 2025-10-14 09:37:51.008 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:37:51 np0005486808 nova_compute[259627]: 2025-10-14 09:37:51.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:51 np0005486808 nova_compute[259627]: 2025-10-14 09:37:51.010 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb67fadaf-4e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:37:51 np0005486808 nova_compute[259627]: 2025-10-14 09:37:51.010 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb67fadaf-4e, col_values=(('external_ids', {'iface-id': 'b67fadaf-4e6a-49b7-b340-4a8659d6216b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9e:0a:24', 'vm-uuid': '4e23c3df-9710-4287-9890-cdae2d551fc0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:37:51 np0005486808 nova_compute[259627]: 2025-10-14 09:37:51.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:51 np0005486808 NetworkManager[44885]: <info>  [1760434671.0121] manager: (tapb67fadaf-4e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/661)
Oct 14 05:37:51 np0005486808 nova_compute[259627]: 2025-10-14 09:37:51.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:37:51 np0005486808 nova_compute[259627]: 2025-10-14 09:37:51.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:51 np0005486808 nova_compute[259627]: 2025-10-14 09:37:51.021 2 INFO os_vif [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:0a:24,bridge_name='br-int',has_traffic_filtering=True,id=b67fadaf-4e6a-49b7-b340-4a8659d6216b,network=Network(4914ad10-cd65-4b9a-8ebb-43ebafc5f222),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb67fadaf-4e')#033[00m
Oct 14 05:37:51 np0005486808 nova_compute[259627]: 2025-10-14 09:37:51.080 2 DEBUG nova.network.neutron [req-3a32bcae-a490-48ab-a1dd-76c19272ee9f req-d25b100b-49aa-451c-9e9e-c2fc5c0bb12c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Updated VIF entry in instance network info cache for port b67fadaf-4e6a-49b7-b340-4a8659d6216b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:37:51 np0005486808 nova_compute[259627]: 2025-10-14 09:37:51.081 2 DEBUG nova.network.neutron [req-3a32bcae-a490-48ab-a1dd-76c19272ee9f req-d25b100b-49aa-451c-9e9e-c2fc5c0bb12c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Updating instance_info_cache with network_info: [{"id": "31c29d30-7edf-486a-a168-0356f62ab3b9", "address": "fa:16:3e:7f:2b:85", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31c29d30-7e", "ovs_interfaceid": "31c29d30-7edf-486a-a168-0356f62ab3b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "address": "fa:16:3e:9e:0a:24", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9e:a24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb67fadaf-4e", "ovs_interfaceid": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:37:51 np0005486808 nova_compute[259627]: 2025-10-14 09:37:51.094 2 DEBUG nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:37:51 np0005486808 nova_compute[259627]: 2025-10-14 09:37:51.095 2 DEBUG nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:37:51 np0005486808 nova_compute[259627]: 2025-10-14 09:37:51.095 2 DEBUG nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:7f:2b:85, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:37:51 np0005486808 nova_compute[259627]: 2025-10-14 09:37:51.095 2 DEBUG nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:9e:0a:24, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:37:51 np0005486808 nova_compute[259627]: 2025-10-14 09:37:51.096 2 INFO nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Using config drive#033[00m
Oct 14 05:37:51 np0005486808 nova_compute[259627]: 2025-10-14 09:37:51.129 2 DEBUG nova.storage.rbd_utils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 4e23c3df-9710-4287-9890-cdae2d551fc0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:37:51 np0005486808 nova_compute[259627]: 2025-10-14 09:37:51.138 2 DEBUG oslo_concurrency.lockutils [req-3a32bcae-a490-48ab-a1dd-76c19272ee9f req-d25b100b-49aa-451c-9e9e-c2fc5c0bb12c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-4e23c3df-9710-4287-9890-cdae2d551fc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:37:51 np0005486808 nova_compute[259627]: 2025-10-14 09:37:51.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:51 np0005486808 nova_compute[259627]: 2025-10-14 09:37:51.499 2 INFO nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Creating config drive at /var/lib/nova/instances/4e23c3df-9710-4287-9890-cdae2d551fc0/disk.config#033[00m
Oct 14 05:37:51 np0005486808 nova_compute[259627]: 2025-10-14 09:37:51.510 2 DEBUG oslo_concurrency.processutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4e23c3df-9710-4287-9890-cdae2d551fc0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpod0jc_5c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:37:51 np0005486808 nova_compute[259627]: 2025-10-14 09:37:51.679 2 DEBUG oslo_concurrency.processutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4e23c3df-9710-4287-9890-cdae2d551fc0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpod0jc_5c" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:37:51 np0005486808 nova_compute[259627]: 2025-10-14 09:37:51.710 2 DEBUG nova.storage.rbd_utils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 4e23c3df-9710-4287-9890-cdae2d551fc0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:37:51 np0005486808 nova_compute[259627]: 2025-10-14 09:37:51.715 2 DEBUG oslo_concurrency.processutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4e23c3df-9710-4287-9890-cdae2d551fc0/disk.config 4e23c3df-9710-4287-9890-cdae2d551fc0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:37:51 np0005486808 nova_compute[259627]: 2025-10-14 09:37:51.916 2 DEBUG oslo_concurrency.processutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4e23c3df-9710-4287-9890-cdae2d551fc0/disk.config 4e23c3df-9710-4287-9890-cdae2d551fc0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.202s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:37:51 np0005486808 nova_compute[259627]: 2025-10-14 09:37:51.918 2 INFO nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Deleting local config drive /var/lib/nova/instances/4e23c3df-9710-4287-9890-cdae2d551fc0/disk.config because it was imported into RBD.#033[00m
Oct 14 05:37:51 np0005486808 nova_compute[259627]: 2025-10-14 09:37:51.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:37:51 np0005486808 kernel: tap31c29d30-7e: entered promiscuous mode
Oct 14 05:37:51 np0005486808 NetworkManager[44885]: <info>  [1760434671.9973] manager: (tap31c29d30-7e): new Tun device (/org/freedesktop/NetworkManager/Devices/662)
Oct 14 05:37:52 np0005486808 nova_compute[259627]: 2025-10-14 09:37:52.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:52 np0005486808 ovn_controller[152662]: 2025-10-14T09:37:52Z|01609|binding|INFO|Claiming lport 31c29d30-7edf-486a-a168-0356f62ab3b9 for this chassis.
Oct 14 05:37:52 np0005486808 ovn_controller[152662]: 2025-10-14T09:37:52Z|01610|binding|INFO|31c29d30-7edf-486a-a168-0356f62ab3b9: Claiming fa:16:3e:7f:2b:85 10.100.0.8
Oct 14 05:37:52 np0005486808 NetworkManager[44885]: <info>  [1760434672.0191] manager: (tapb67fadaf-4e): new Tun device (/org/freedesktop/NetworkManager/Devices/663)
Oct 14 05:37:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.017 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:2b:85 10.100.0.8'], port_security=['fa:16:3e:7f:2b:85 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '4e23c3df-9710-4287-9890-cdae2d551fc0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cb1842f7-933b-4c76-aa59-c55590c98ec5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b6145c26-2b2a-4cab-aefe-d574ccfcd594', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7b96c99b-2e43-4904-a8ba-7ffb70fd145f, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=31c29d30-7edf-486a-a168-0356f62ab3b9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:37:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.019 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 31c29d30-7edf-486a-a168-0356f62ab3b9 in datapath cb1842f7-933b-4c76-aa59-c55590c98ec5 bound to our chassis#033[00m
Oct 14 05:37:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.021 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cb1842f7-933b-4c76-aa59-c55590c98ec5#033[00m
Oct 14 05:37:52 np0005486808 kernel: tapb67fadaf-4e: entered promiscuous mode
Oct 14 05:37:52 np0005486808 ovn_controller[152662]: 2025-10-14T09:37:52Z|01611|binding|INFO|Setting lport 31c29d30-7edf-486a-a168-0356f62ab3b9 ovn-installed in OVS
Oct 14 05:37:52 np0005486808 ovn_controller[152662]: 2025-10-14T09:37:52Z|01612|binding|INFO|Setting lport 31c29d30-7edf-486a-a168-0356f62ab3b9 up in Southbound
Oct 14 05:37:52 np0005486808 nova_compute[259627]: 2025-10-14 09:37:52.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:52 np0005486808 ovn_controller[152662]: 2025-10-14T09:37:52Z|01613|if_status|INFO|Dropped 1 log messages in last 241 seconds (most recently, 241 seconds ago) due to excessive rate
Oct 14 05:37:52 np0005486808 ovn_controller[152662]: 2025-10-14T09:37:52Z|01614|if_status|INFO|Not updating pb chassis for b67fadaf-4e6a-49b7-b340-4a8659d6216b now as sb is readonly
Oct 14 05:37:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.046 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[99b3d99b-d5cf-4130-89e9-e28a27903cfd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:37:52 np0005486808 ovn_controller[152662]: 2025-10-14T09:37:52Z|01615|binding|INFO|Claiming lport b67fadaf-4e6a-49b7-b340-4a8659d6216b for this chassis.
Oct 14 05:37:52 np0005486808 ovn_controller[152662]: 2025-10-14T09:37:52Z|01616|binding|INFO|b67fadaf-4e6a-49b7-b340-4a8659d6216b: Claiming fa:16:3e:9e:0a:24 2001:db8::f816:3eff:fe9e:a24
Oct 14 05:37:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.064 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:0a:24 2001:db8::f816:3eff:fe9e:a24'], port_security=['fa:16:3e:9e:0a:24 2001:db8::f816:3eff:fe9e:a24'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe9e:a24/64', 'neutron:device_id': '4e23c3df-9710-4287-9890-cdae2d551fc0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4914ad10-cd65-4b9a-8ebb-43ebafc5f222', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b6145c26-2b2a-4cab-aefe-d574ccfcd594', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=85b53903-cbc2-43bd-a94b-2366acd741ae, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=b67fadaf-4e6a-49b7-b340-4a8659d6216b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:37:52 np0005486808 systemd-udevd[412516]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:37:52 np0005486808 systemd-udevd[412515]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:37:52 np0005486808 nova_compute[259627]: 2025-10-14 09:37:52.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:52 np0005486808 nova_compute[259627]: 2025-10-14 09:37:52.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:52 np0005486808 ovn_controller[152662]: 2025-10-14T09:37:52Z|01617|binding|INFO|Setting lport b67fadaf-4e6a-49b7-b340-4a8659d6216b ovn-installed in OVS
Oct 14 05:37:52 np0005486808 ovn_controller[152662]: 2025-10-14T09:37:52Z|01618|binding|INFO|Setting lport b67fadaf-4e6a-49b7-b340-4a8659d6216b up in Southbound
Oct 14 05:37:52 np0005486808 NetworkManager[44885]: <info>  [1760434672.0850] device (tap31c29d30-7e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:37:52 np0005486808 NetworkManager[44885]: <info>  [1760434672.0857] device (tap31c29d30-7e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:37:52 np0005486808 NetworkManager[44885]: <info>  [1760434672.0870] device (tapb67fadaf-4e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:37:52 np0005486808 NetworkManager[44885]: <info>  [1760434672.0877] device (tapb67fadaf-4e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:37:52 np0005486808 systemd-machined[214636]: New machine qemu-180-instance-00000093.
Oct 14 05:37:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.099 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[9bc78928-e418-416f-b37a-79be8579cae7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:37:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.105 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5e85b8c6-4b8b-44e6-bdce-38d46eef5935]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:37:52 np0005486808 systemd[1]: Started Virtual Machine qemu-180-instance-00000093.
Oct 14 05:37:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.149 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[04cef20a-847e-4644-9144-8f6a4beafee9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:37:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.180 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[85de27e4-cb0e-4027-995e-47589ac3aa71]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcb1842f7-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:90:b7:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 454], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 841296, 'reachable_time': 21039, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 412525, 'error': None, 'target': 'ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:37:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.206 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[93e20ace-bf87-4b8c-8142-411513c63538]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcb1842f7-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 841308, 'tstamp': 841308}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 412530, 'error': None, 'target': 'ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcb1842f7-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 841310, 'tstamp': 841310}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 412530, 'error': None, 'target': 'ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:37:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.208 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcb1842f7-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:37:52 np0005486808 nova_compute[259627]: 2025-10-14 09:37:52.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:52 np0005486808 nova_compute[259627]: 2025-10-14 09:37:52.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.215 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcb1842f7-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:37:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.215 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:37:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.216 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcb1842f7-90, col_values=(('external_ids', {'iface-id': '49fe400f-9e76-42ad-a72c-3f9a5bf50e43'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:37:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.216 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:37:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.218 162547 INFO neutron.agent.ovn.metadata.agent [-] Port b67fadaf-4e6a-49b7-b340-4a8659d6216b in datapath 4914ad10-cd65-4b9a-8ebb-43ebafc5f222 unbound from our chassis#033[00m
Oct 14 05:37:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.220 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4914ad10-cd65-4b9a-8ebb-43ebafc5f222#033[00m
Oct 14 05:37:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.245 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4004ce95-bc77-48db-9777-2b785ce200d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:37:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.289 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[aaed2df3-9055-4a79-9a39-7395c8d71496]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:37:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.293 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[0966040f-a538-4380-8e91-322d0b26d703]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:37:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.345 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[de411e01-2593-4183-bba8-6e6625a57329]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:37:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.376 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[94583473-fb5f-4284-ae75-71023babf7d2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4914ad10-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:b4:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 20, 'tx_packets': 5, 'rx_bytes': 1872, 'tx_bytes': 402, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 20, 'tx_packets': 5, 'rx_bytes': 1872, 'tx_bytes': 402, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 841397, 'reachable_time': 27486, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 20, 'inoctets': 1592, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 20, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1592, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 20, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 412537, 'error': None, 'target': 'ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:37:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.404 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b2e2c081-8a46-4d04-ae01-844efc3732c1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4914ad10-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 841411, 'tstamp': 841411}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 412538, 'error': None, 'target': 'ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:37:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.407 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4914ad10-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:37:52 np0005486808 nova_compute[259627]: 2025-10-14 09:37:52.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:52 np0005486808 nova_compute[259627]: 2025-10-14 09:37:52.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:37:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.411 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4914ad10-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:37:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.411 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:37:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.412 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4914ad10-c0, col_values=(('external_ids', {'iface-id': '7ea31872-5902-4f26-8d38-70f94c9c61fa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:37:52 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:37:52.413 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:37:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2542: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:37:52 np0005486808 podman[412657]: 2025-10-14 09:37:52.851978449 +0000 UTC m=+0.059305726 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid)
Oct 14 05:37:52 np0005486808 podman[412656]: 2025-10-14 09:37:52.889182292 +0000 UTC m=+0.089307352 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 14 05:37:52 np0005486808 nova_compute[259627]: 2025-10-14 09:37:52.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.013 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.013 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.014 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.014 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.014 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.054 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434673.035582, 4e23c3df-9710-4287-9890-cdae2d551fc0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.055 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] VM Started (Lifecycle Event)#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.094 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.099 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434673.0360239, 4e23c3df-9710-4287-9890-cdae2d551fc0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.099 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.119 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.122 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:37:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.143 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:37:53 np0005486808 podman[412813]: 2025-10-14 09:37:53.402224082 +0000 UTC m=+0.104630498 container exec c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:37:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:37:53 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2287153890' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.450 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:37:53 np0005486808 podman[412813]: 2025-10-14 09:37:53.49017727 +0000 UTC m=+0.192583706 container exec_died c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.553 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.554 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.561 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.562 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.679 2 DEBUG nova.compute.manager [req-a0e64c5a-69dc-47ae-a0da-bbf6d9f950d3 req-90dd1967-dabd-40a4-a9d1-3b903b21e600 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Received event network-vif-plugged-b67fadaf-4e6a-49b7-b340-4a8659d6216b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.680 2 DEBUG oslo_concurrency.lockutils [req-a0e64c5a-69dc-47ae-a0da-bbf6d9f950d3 req-90dd1967-dabd-40a4-a9d1-3b903b21e600 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.681 2 DEBUG oslo_concurrency.lockutils [req-a0e64c5a-69dc-47ae-a0da-bbf6d9f950d3 req-90dd1967-dabd-40a4-a9d1-3b903b21e600 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.682 2 DEBUG oslo_concurrency.lockutils [req-a0e64c5a-69dc-47ae-a0da-bbf6d9f950d3 req-90dd1967-dabd-40a4-a9d1-3b903b21e600 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.682 2 DEBUG nova.compute.manager [req-a0e64c5a-69dc-47ae-a0da-bbf6d9f950d3 req-90dd1967-dabd-40a4-a9d1-3b903b21e600 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Processing event network-vif-plugged-b67fadaf-4e6a-49b7-b340-4a8659d6216b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.816 2 DEBUG nova.compute.manager [req-08149896-2488-4d46-b882-9eaef6753961 req-a2919e96-85a3-4cb2-8b82-b2d5f341f8ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Received event network-vif-plugged-31c29d30-7edf-486a-a168-0356f62ab3b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.816 2 DEBUG oslo_concurrency.lockutils [req-08149896-2488-4d46-b882-9eaef6753961 req-a2919e96-85a3-4cb2-8b82-b2d5f341f8ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.817 2 DEBUG oslo_concurrency.lockutils [req-08149896-2488-4d46-b882-9eaef6753961 req-a2919e96-85a3-4cb2-8b82-b2d5f341f8ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.817 2 DEBUG oslo_concurrency.lockutils [req-08149896-2488-4d46-b882-9eaef6753961 req-a2919e96-85a3-4cb2-8b82-b2d5f341f8ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.817 2 DEBUG nova.compute.manager [req-08149896-2488-4d46-b882-9eaef6753961 req-a2919e96-85a3-4cb2-8b82-b2d5f341f8ff 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Processing event network-vif-plugged-31c29d30-7edf-486a-a168-0356f62ab3b9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.819 2 DEBUG nova.compute.manager [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.823 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434673.8230517, 4e23c3df-9710-4287-9890-cdae2d551fc0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.823 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.827 2 DEBUG nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.833 2 INFO nova.virt.libvirt.driver [-] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Instance spawned successfully.#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.834 2 DEBUG nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.843 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.846 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.856 2 DEBUG nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.857 2 DEBUG nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.857 2 DEBUG nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.858 2 DEBUG nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.858 2 DEBUG nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.858 2 DEBUG nova.virt.libvirt.driver [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.865 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.877 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.878 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3273MB free_disk=59.921966552734375GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.878 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.878 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.928 2 INFO nova.compute.manager [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Took 10.40 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.928 2 DEBUG nova.compute.manager [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.968 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance bdfd070c-d036-4656-b797-efba7d4a4565 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.969 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 4e23c3df-9710-4287-9890-cdae2d551fc0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.969 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.969 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 05:37:53 np0005486808 nova_compute[259627]: 2025-10-14 09:37:53.988 2 INFO nova.compute.manager [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Took 11.41 seconds to build instance.
Oct 14 05:37:54 np0005486808 nova_compute[259627]: 2025-10-14 09:37:54.005 2 DEBUG oslo_concurrency.lockutils [None req-dc95f47b-78c4-4595-a307-c93cc47b5c4f 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.500s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:37:54 np0005486808 nova_compute[259627]: 2025-10-14 09:37:54.022 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:37:54 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:37:54 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:37:54 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:37:54 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:37:54 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:37:54 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3833838182' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:37:54 np0005486808 nova_compute[259627]: 2025-10-14 09:37:54.556 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:37:54 np0005486808 nova_compute[259627]: 2025-10-14 09:37:54.562 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 05:37:54 np0005486808 nova_compute[259627]: 2025-10-14 09:37:54.576 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 05:37:54 np0005486808 nova_compute[259627]: 2025-10-14 09:37:54.607 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 05:37:54 np0005486808 nova_compute[259627]: 2025-10-14 09:37:54.607 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:37:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2543: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:37:55 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:37:55 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:37:55 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:37:55 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:37:55 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:37:55 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:37:55 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 00ddeafd-27bc-4458-ba99-6b1855cd1850 does not exist
Oct 14 05:37:55 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 99f1a0e5-ae33-4645-9ec9-6ec412725c61 does not exist
Oct 14 05:37:55 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 5128d048-48d9-452b-ab9f-76f1ce52372f does not exist
Oct 14 05:37:55 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:37:55 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:37:55 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:37:55 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:37:55 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:37:55 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:37:55 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:37:55 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:37:55 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:37:55 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:37:55 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:37:55 np0005486808 nova_compute[259627]: 2025-10-14 09:37:55.812 2 DEBUG nova.compute.manager [req-1da13cd6-ed23-4444-9f8e-3c580b979bcf req-88903970-f7f8-48eb-9c79-94d8527620a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Received event network-vif-plugged-b67fadaf-4e6a-49b7-b340-4a8659d6216b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:37:55 np0005486808 nova_compute[259627]: 2025-10-14 09:37:55.813 2 DEBUG oslo_concurrency.lockutils [req-1da13cd6-ed23-4444-9f8e-3c580b979bcf req-88903970-f7f8-48eb-9c79-94d8527620a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:37:55 np0005486808 nova_compute[259627]: 2025-10-14 09:37:55.814 2 DEBUG oslo_concurrency.lockutils [req-1da13cd6-ed23-4444-9f8e-3c580b979bcf req-88903970-f7f8-48eb-9c79-94d8527620a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:37:55 np0005486808 nova_compute[259627]: 2025-10-14 09:37:55.814 2 DEBUG oslo_concurrency.lockutils [req-1da13cd6-ed23-4444-9f8e-3c580b979bcf req-88903970-f7f8-48eb-9c79-94d8527620a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:37:55 np0005486808 nova_compute[259627]: 2025-10-14 09:37:55.814 2 DEBUG nova.compute.manager [req-1da13cd6-ed23-4444-9f8e-3c580b979bcf req-88903970-f7f8-48eb-9c79-94d8527620a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] No waiting events found dispatching network-vif-plugged-b67fadaf-4e6a-49b7-b340-4a8659d6216b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 05:37:55 np0005486808 nova_compute[259627]: 2025-10-14 09:37:55.815 2 WARNING nova.compute.manager [req-1da13cd6-ed23-4444-9f8e-3c580b979bcf req-88903970-f7f8-48eb-9c79-94d8527620a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Received unexpected event network-vif-plugged-b67fadaf-4e6a-49b7-b340-4a8659d6216b for instance with vm_state active and task_state None.
Oct 14 05:37:55 np0005486808 podman[413262]: 2025-10-14 09:37:55.867081789 +0000 UTC m=+0.073179537 container create 2a2bae641cef505667785c341254d53a8e2b4abf52013b80cd0d24b54d1f4dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Oct 14 05:37:55 np0005486808 systemd[1]: Started libpod-conmon-2a2bae641cef505667785c341254d53a8e2b4abf52013b80cd0d24b54d1f4dbd.scope.
Oct 14 05:37:55 np0005486808 podman[413262]: 2025-10-14 09:37:55.833499065 +0000 UTC m=+0.039596903 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:37:55 np0005486808 nova_compute[259627]: 2025-10-14 09:37:55.939 2 DEBUG nova.compute.manager [req-0e5f28e9-ccaf-4b1c-a550-4a741d3e76a8 req-f122bb11-aaf4-40fe-91d7-69913cdcb5ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Received event network-vif-plugged-31c29d30-7edf-486a-a168-0356f62ab3b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:37:55 np0005486808 nova_compute[259627]: 2025-10-14 09:37:55.940 2 DEBUG oslo_concurrency.lockutils [req-0e5f28e9-ccaf-4b1c-a550-4a741d3e76a8 req-f122bb11-aaf4-40fe-91d7-69913cdcb5ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:37:55 np0005486808 nova_compute[259627]: 2025-10-14 09:37:55.940 2 DEBUG oslo_concurrency.lockutils [req-0e5f28e9-ccaf-4b1c-a550-4a741d3e76a8 req-f122bb11-aaf4-40fe-91d7-69913cdcb5ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:37:55 np0005486808 nova_compute[259627]: 2025-10-14 09:37:55.940 2 DEBUG oslo_concurrency.lockutils [req-0e5f28e9-ccaf-4b1c-a550-4a741d3e76a8 req-f122bb11-aaf4-40fe-91d7-69913cdcb5ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:37:55 np0005486808 nova_compute[259627]: 2025-10-14 09:37:55.941 2 DEBUG nova.compute.manager [req-0e5f28e9-ccaf-4b1c-a550-4a741d3e76a8 req-f122bb11-aaf4-40fe-91d7-69913cdcb5ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] No waiting events found dispatching network-vif-plugged-31c29d30-7edf-486a-a168-0356f62ab3b9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 05:37:55 np0005486808 nova_compute[259627]: 2025-10-14 09:37:55.941 2 WARNING nova.compute.manager [req-0e5f28e9-ccaf-4b1c-a550-4a741d3e76a8 req-f122bb11-aaf4-40fe-91d7-69913cdcb5ad 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Received unexpected event network-vif-plugged-31c29d30-7edf-486a-a168-0356f62ab3b9 for instance with vm_state active and task_state None.
Oct 14 05:37:55 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:37:55 np0005486808 podman[413262]: 2025-10-14 09:37:55.97389554 +0000 UTC m=+0.179993368 container init 2a2bae641cef505667785c341254d53a8e2b4abf52013b80cd0d24b54d1f4dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bohr, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 14 05:37:55 np0005486808 podman[413262]: 2025-10-14 09:37:55.987670209 +0000 UTC m=+0.193767937 container start 2a2bae641cef505667785c341254d53a8e2b4abf52013b80cd0d24b54d1f4dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:37:55 np0005486808 podman[413262]: 2025-10-14 09:37:55.990921188 +0000 UTC m=+0.197018926 container attach 2a2bae641cef505667785c341254d53a8e2b4abf52013b80cd0d24b54d1f4dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bohr, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:37:55 np0005486808 eloquent_bohr[413278]: 167 167
Oct 14 05:37:55 np0005486808 podman[413262]: 2025-10-14 09:37:55.996128636 +0000 UTC m=+0.202226384 container died 2a2bae641cef505667785c341254d53a8e2b4abf52013b80cd0d24b54d1f4dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bohr, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 14 05:37:55 np0005486808 systemd[1]: libpod-2a2bae641cef505667785c341254d53a8e2b4abf52013b80cd0d24b54d1f4dbd.scope: Deactivated successfully.
Oct 14 05:37:56 np0005486808 nova_compute[259627]: 2025-10-14 09:37:56.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:37:56 np0005486808 systemd[1]: var-lib-containers-storage-overlay-2cac6e178e87e875ca9bfb5771fb39dfbd7852cb63990042078a608c98b45160-merged.mount: Deactivated successfully.
Oct 14 05:37:56 np0005486808 podman[413262]: 2025-10-14 09:37:56.051590337 +0000 UTC m=+0.257688105 container remove 2a2bae641cef505667785c341254d53a8e2b4abf52013b80cd0d24b54d1f4dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bohr, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Oct 14 05:37:56 np0005486808 systemd[1]: libpod-conmon-2a2bae641cef505667785c341254d53a8e2b4abf52013b80cd0d24b54d1f4dbd.scope: Deactivated successfully.
Oct 14 05:37:56 np0005486808 podman[413304]: 2025-10-14 09:37:56.313099185 +0000 UTC m=+0.066457232 container create 602e1ce8d26cb4e23d1e42af9c76d362dcaf2a2ddf44a45731b3e285947b7357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_varahamihira, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:37:56 np0005486808 systemd[1]: Started libpod-conmon-602e1ce8d26cb4e23d1e42af9c76d362dcaf2a2ddf44a45731b3e285947b7357.scope.
Oct 14 05:37:56 np0005486808 podman[413304]: 2025-10-14 09:37:56.287348793 +0000 UTC m=+0.040706820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:37:56 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:37:56 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6769bfea4dec1456aea8974ee829169f4141efa4ae14cebc00738a53ef39bc94/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:37:56 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6769bfea4dec1456aea8974ee829169f4141efa4ae14cebc00738a53ef39bc94/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:37:56 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6769bfea4dec1456aea8974ee829169f4141efa4ae14cebc00738a53ef39bc94/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:37:56 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6769bfea4dec1456aea8974ee829169f4141efa4ae14cebc00738a53ef39bc94/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:37:56 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6769bfea4dec1456aea8974ee829169f4141efa4ae14cebc00738a53ef39bc94/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:37:56 np0005486808 podman[413304]: 2025-10-14 09:37:56.429771988 +0000 UTC m=+0.183130035 container init 602e1ce8d26cb4e23d1e42af9c76d362dcaf2a2ddf44a45731b3e285947b7357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_varahamihira, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 14 05:37:56 np0005486808 podman[413304]: 2025-10-14 09:37:56.438320528 +0000 UTC m=+0.191678545 container start 602e1ce8d26cb4e23d1e42af9c76d362dcaf2a2ddf44a45731b3e285947b7357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:37:56 np0005486808 podman[413304]: 2025-10-14 09:37:56.442002998 +0000 UTC m=+0.195361045 container attach 602e1ce8d26cb4e23d1e42af9c76d362dcaf2a2ddf44a45731b3e285947b7357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:37:56 np0005486808 nova_compute[259627]: 2025-10-14 09:37:56.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:37:56 np0005486808 nova_compute[259627]: 2025-10-14 09:37:56.608 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:37:56 np0005486808 nova_compute[259627]: 2025-10-14 09:37:56.609 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 05:37:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2544: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 14 05:37:56 np0005486808 nova_compute[259627]: 2025-10-14 09:37:56.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:37:57 np0005486808 dazzling_varahamihira[413321]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:37:57 np0005486808 dazzling_varahamihira[413321]: --> relative data size: 1.0
Oct 14 05:37:57 np0005486808 dazzling_varahamihira[413321]: --> All data devices are unavailable
Oct 14 05:37:57 np0005486808 systemd[1]: libpod-602e1ce8d26cb4e23d1e42af9c76d362dcaf2a2ddf44a45731b3e285947b7357.scope: Deactivated successfully.
Oct 14 05:37:57 np0005486808 systemd[1]: libpod-602e1ce8d26cb4e23d1e42af9c76d362dcaf2a2ddf44a45731b3e285947b7357.scope: Consumed 1.032s CPU time.
Oct 14 05:37:57 np0005486808 podman[413350]: 2025-10-14 09:37:57.575583045 +0000 UTC m=+0.026169763 container died 602e1ce8d26cb4e23d1e42af9c76d362dcaf2a2ddf44a45731b3e285947b7357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:37:57 np0005486808 systemd[1]: var-lib-containers-storage-overlay-6769bfea4dec1456aea8974ee829169f4141efa4ae14cebc00738a53ef39bc94-merged.mount: Deactivated successfully.
Oct 14 05:37:57 np0005486808 podman[413350]: 2025-10-14 09:37:57.639679608 +0000 UTC m=+0.090266306 container remove 602e1ce8d26cb4e23d1e42af9c76d362dcaf2a2ddf44a45731b3e285947b7357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_varahamihira, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:37:57 np0005486808 systemd[1]: libpod-conmon-602e1ce8d26cb4e23d1e42af9c76d362dcaf2a2ddf44a45731b3e285947b7357.scope: Deactivated successfully.
Oct 14 05:37:57 np0005486808 nova_compute[259627]: 2025-10-14 09:37:57.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:37:57 np0005486808 nova_compute[259627]: 2025-10-14 09:37:57.977 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 05:37:57 np0005486808 nova_compute[259627]: 2025-10-14 09:37:57.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 05:37:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:37:58 np0005486808 nova_compute[259627]: 2025-10-14 09:37:58.199 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-bdfd070c-d036-4656-b797-efba7d4a4565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 05:37:58 np0005486808 nova_compute[259627]: 2025-10-14 09:37:58.200 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-bdfd070c-d036-4656-b797-efba7d4a4565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 05:37:58 np0005486808 nova_compute[259627]: 2025-10-14 09:37:58.200 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 14 05:37:58 np0005486808 nova_compute[259627]: 2025-10-14 09:37:58.201 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid bdfd070c-d036-4656-b797-efba7d4a4565 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:37:58 np0005486808 podman[413505]: 2025-10-14 09:37:58.386059834 +0000 UTC m=+0.042834052 container create 03b186e847dd2e155059737a73ad52250b0f00f9cc9eed542a2dc394a6332c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Oct 14 05:37:58 np0005486808 systemd[1]: Started libpod-conmon-03b186e847dd2e155059737a73ad52250b0f00f9cc9eed542a2dc394a6332c29.scope.
Oct 14 05:37:58 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:37:58 np0005486808 podman[413505]: 2025-10-14 09:37:58.371573789 +0000 UTC m=+0.028348027 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:37:58 np0005486808 podman[413505]: 2025-10-14 09:37:58.467312948 +0000 UTC m=+0.124087166 container init 03b186e847dd2e155059737a73ad52250b0f00f9cc9eed542a2dc394a6332c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_black, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:37:58 np0005486808 podman[413505]: 2025-10-14 09:37:58.473243894 +0000 UTC m=+0.130018112 container start 03b186e847dd2e155059737a73ad52250b0f00f9cc9eed542a2dc394a6332c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_black, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:37:58 np0005486808 podman[413505]: 2025-10-14 09:37:58.4759233 +0000 UTC m=+0.132697548 container attach 03b186e847dd2e155059737a73ad52250b0f00f9cc9eed542a2dc394a6332c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:37:58 np0005486808 dreamy_black[413522]: 167 167
Oct 14 05:37:58 np0005486808 systemd[1]: libpod-03b186e847dd2e155059737a73ad52250b0f00f9cc9eed542a2dc394a6332c29.scope: Deactivated successfully.
Oct 14 05:37:58 np0005486808 podman[413505]: 2025-10-14 09:37:58.480265836 +0000 UTC m=+0.137040094 container died 03b186e847dd2e155059737a73ad52250b0f00f9cc9eed542a2dc394a6332c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Oct 14 05:37:58 np0005486808 systemd[1]: var-lib-containers-storage-overlay-418e2e4a56dafdf4e077c4ffaac965d20141bf60dcf01e7fad39ac3248f276e3-merged.mount: Deactivated successfully.
Oct 14 05:37:58 np0005486808 podman[413505]: 2025-10-14 09:37:58.525693701 +0000 UTC m=+0.182467949 container remove 03b186e847dd2e155059737a73ad52250b0f00f9cc9eed542a2dc394a6332c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_black, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 14 05:37:58 np0005486808 systemd[1]: libpod-conmon-03b186e847dd2e155059737a73ad52250b0f00f9cc9eed542a2dc394a6332c29.scope: Deactivated successfully.
Oct 14 05:37:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2545: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Oct 14 05:37:58 np0005486808 podman[413546]: 2025-10-14 09:37:58.7591568 +0000 UTC m=+0.058933837 container create 963422f73fd8818af17868417e9a1b0fe886233be3550914959b041b69ded64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_gagarin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:37:58 np0005486808 systemd[1]: Started libpod-conmon-963422f73fd8818af17868417e9a1b0fe886233be3550914959b041b69ded64b.scope.
Oct 14 05:37:58 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:37:58 np0005486808 podman[413546]: 2025-10-14 09:37:58.73715056 +0000 UTC m=+0.036927637 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:37:58 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5e57bc320274fbec08e264b11847b8c4aba407a0ab3b45fe9c3f2ccddedb75a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:37:58 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5e57bc320274fbec08e264b11847b8c4aba407a0ab3b45fe9c3f2ccddedb75a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:37:58 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5e57bc320274fbec08e264b11847b8c4aba407a0ab3b45fe9c3f2ccddedb75a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:37:58 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5e57bc320274fbec08e264b11847b8c4aba407a0ab3b45fe9c3f2ccddedb75a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:37:58 np0005486808 podman[413546]: 2025-10-14 09:37:58.853124446 +0000 UTC m=+0.152901503 container init 963422f73fd8818af17868417e9a1b0fe886233be3550914959b041b69ded64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:37:58 np0005486808 podman[413546]: 2025-10-14 09:37:58.860937708 +0000 UTC m=+0.160714745 container start 963422f73fd8818af17868417e9a1b0fe886233be3550914959b041b69ded64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:37:58 np0005486808 podman[413546]: 2025-10-14 09:37:58.865065509 +0000 UTC m=+0.164842626 container attach 963422f73fd8818af17868417e9a1b0fe886233be3550914959b041b69ded64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_gagarin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 14 05:37:59 np0005486808 nova_compute[259627]: 2025-10-14 09:37:59.244 2 DEBUG nova.compute.manager [req-ddd875e4-6ec3-417f-ad39-920b185a64ba req-4a064cc3-3544-4fcf-a920-4788fea6ac7c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Received event network-changed-31c29d30-7edf-486a-a168-0356f62ab3b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:37:59 np0005486808 nova_compute[259627]: 2025-10-14 09:37:59.247 2 DEBUG nova.compute.manager [req-ddd875e4-6ec3-417f-ad39-920b185a64ba req-4a064cc3-3544-4fcf-a920-4788fea6ac7c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Refreshing instance network info cache due to event network-changed-31c29d30-7edf-486a-a168-0356f62ab3b9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:37:59 np0005486808 nova_compute[259627]: 2025-10-14 09:37:59.248 2 DEBUG oslo_concurrency.lockutils [req-ddd875e4-6ec3-417f-ad39-920b185a64ba req-4a064cc3-3544-4fcf-a920-4788fea6ac7c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-4e23c3df-9710-4287-9890-cdae2d551fc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:37:59 np0005486808 nova_compute[259627]: 2025-10-14 09:37:59.249 2 DEBUG oslo_concurrency.lockutils [req-ddd875e4-6ec3-417f-ad39-920b185a64ba req-4a064cc3-3544-4fcf-a920-4788fea6ac7c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-4e23c3df-9710-4287-9890-cdae2d551fc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:37:59 np0005486808 nova_compute[259627]: 2025-10-14 09:37:59.249 2 DEBUG nova.network.neutron [req-ddd875e4-6ec3-417f-ad39-920b185a64ba req-4a064cc3-3544-4fcf-a920-4788fea6ac7c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Refreshing network info cache for port 31c29d30-7edf-486a-a168-0356f62ab3b9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]: {
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:    "0": [
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:        {
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:            "devices": [
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:                "/dev/loop3"
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:            ],
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:            "lv_name": "ceph_lv0",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:            "lv_size": "21470642176",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:            "name": "ceph_lv0",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:            "tags": {
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:                "ceph.cluster_name": "ceph",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:                "ceph.crush_device_class": "",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:                "ceph.encrypted": "0",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:                "ceph.osd_id": "0",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:                "ceph.type": "block",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:                "ceph.vdo": "0"
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:            },
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:            "type": "block",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:            "vg_name": "ceph_vg0"
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:        }
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:    ],
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:    "1": [
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:        {
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:            "devices": [
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:                "/dev/loop4"
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:            ],
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:            "lv_name": "ceph_lv1",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:            "lv_size": "21470642176",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:            "name": "ceph_lv1",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:            "tags": {
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:                "ceph.cluster_name": "ceph",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:                "ceph.crush_device_class": "",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:                "ceph.encrypted": "0",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:                "ceph.osd_id": "1",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:                "ceph.type": "block",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:                "ceph.vdo": "0"
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:            },
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:            "type": "block",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:            "vg_name": "ceph_vg1"
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:        }
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:    ],
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:    "2": [
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:        {
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:            "devices": [
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:                "/dev/loop5"
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:            ],
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:            "lv_name": "ceph_lv2",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:            "lv_size": "21470642176",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:            "name": "ceph_lv2",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:            "tags": {
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:                "ceph.cluster_name": "ceph",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:                "ceph.crush_device_class": "",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:                "ceph.encrypted": "0",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:                "ceph.osd_id": "2",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:                "ceph.type": "block",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:                "ceph.vdo": "0"
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:            },
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:            "type": "block",
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:            "vg_name": "ceph_vg2"
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:        }
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]:    ]
Oct 14 05:37:59 np0005486808 ecstatic_gagarin[413563]: }
Oct 14 05:37:59 np0005486808 systemd[1]: libpod-963422f73fd8818af17868417e9a1b0fe886233be3550914959b041b69ded64b.scope: Deactivated successfully.
Oct 14 05:37:59 np0005486808 conmon[413563]: conmon 963422f73fd8818af178 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-963422f73fd8818af17868417e9a1b0fe886233be3550914959b041b69ded64b.scope/container/memory.events
Oct 14 05:37:59 np0005486808 podman[413546]: 2025-10-14 09:37:59.700649595 +0000 UTC m=+1.000426662 container died 963422f73fd8818af17868417e9a1b0fe886233be3550914959b041b69ded64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_gagarin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Oct 14 05:37:59 np0005486808 systemd[1]: var-lib-containers-storage-overlay-d5e57bc320274fbec08e264b11847b8c4aba407a0ab3b45fe9c3f2ccddedb75a-merged.mount: Deactivated successfully.
Oct 14 05:37:59 np0005486808 podman[413546]: 2025-10-14 09:37:59.761740034 +0000 UTC m=+1.061517061 container remove 963422f73fd8818af17868417e9a1b0fe886233be3550914959b041b69ded64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_gagarin, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 14 05:37:59 np0005486808 systemd[1]: libpod-conmon-963422f73fd8818af17868417e9a1b0fe886233be3550914959b041b69ded64b.scope: Deactivated successfully.
Oct 14 05:38:00 np0005486808 podman[413725]: 2025-10-14 09:38:00.423899423 +0000 UTC m=+0.057062551 container create 9430b29c791a92b2e1a0a84fa3c0463824368f86dcb8a9983a320f5e69c6a612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_noether, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Oct 14 05:38:00 np0005486808 systemd[1]: Started libpod-conmon-9430b29c791a92b2e1a0a84fa3c0463824368f86dcb8a9983a320f5e69c6a612.scope.
Oct 14 05:38:00 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:38:00 np0005486808 podman[413725]: 2025-10-14 09:38:00.404694252 +0000 UTC m=+0.037857350 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:38:00 np0005486808 podman[413725]: 2025-10-14 09:38:00.50853515 +0000 UTC m=+0.141698188 container init 9430b29c791a92b2e1a0a84fa3c0463824368f86dcb8a9983a320f5e69c6a612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:38:00 np0005486808 podman[413725]: 2025-10-14 09:38:00.516807803 +0000 UTC m=+0.149970811 container start 9430b29c791a92b2e1a0a84fa3c0463824368f86dcb8a9983a320f5e69c6a612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_noether, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:38:00 np0005486808 podman[413725]: 2025-10-14 09:38:00.520954015 +0000 UTC m=+0.154117063 container attach 9430b29c791a92b2e1a0a84fa3c0463824368f86dcb8a9983a320f5e69c6a612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:38:00 np0005486808 awesome_noether[413741]: 167 167
Oct 14 05:38:00 np0005486808 systemd[1]: libpod-9430b29c791a92b2e1a0a84fa3c0463824368f86dcb8a9983a320f5e69c6a612.scope: Deactivated successfully.
Oct 14 05:38:00 np0005486808 podman[413725]: 2025-10-14 09:38:00.524522253 +0000 UTC m=+0.157685261 container died 9430b29c791a92b2e1a0a84fa3c0463824368f86dcb8a9983a320f5e69c6a612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_noether, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 05:38:00 np0005486808 systemd[1]: var-lib-containers-storage-overlay-9f92591f9de9cfae70a303f93ae4b50cc93e762a6a809cc1a659b87e98e6c82f-merged.mount: Deactivated successfully.
Oct 14 05:38:00 np0005486808 podman[413725]: 2025-10-14 09:38:00.574734205 +0000 UTC m=+0.207897243 container remove 9430b29c791a92b2e1a0a84fa3c0463824368f86dcb8a9983a320f5e69c6a612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 05:38:00 np0005486808 systemd[1]: libpod-conmon-9430b29c791a92b2e1a0a84fa3c0463824368f86dcb8a9983a320f5e69c6a612.scope: Deactivated successfully.
Oct 14 05:38:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2546: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Oct 14 05:38:00 np0005486808 podman[413765]: 2025-10-14 09:38:00.76936939 +0000 UTC m=+0.047247460 container create d2732cd5a3d1009053d4f265a9408c638f09d11210f6883e31238ff2971a9da1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_yalow, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 14 05:38:00 np0005486808 systemd[1]: Started libpod-conmon-d2732cd5a3d1009053d4f265a9408c638f09d11210f6883e31238ff2971a9da1.scope.
Oct 14 05:38:00 np0005486808 podman[413765]: 2025-10-14 09:38:00.744751476 +0000 UTC m=+0.022629526 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:38:00 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:38:00 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ade5c986a14e38320bc46b2edffee5be0942a0768f5c49f179fcb079006d0789/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:38:00 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ade5c986a14e38320bc46b2edffee5be0942a0768f5c49f179fcb079006d0789/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:38:00 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ade5c986a14e38320bc46b2edffee5be0942a0768f5c49f179fcb079006d0789/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:38:00 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ade5c986a14e38320bc46b2edffee5be0942a0768f5c49f179fcb079006d0789/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:38:00 np0005486808 podman[413765]: 2025-10-14 09:38:00.870480392 +0000 UTC m=+0.148358472 container init d2732cd5a3d1009053d4f265a9408c638f09d11210f6883e31238ff2971a9da1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 14 05:38:00 np0005486808 podman[413765]: 2025-10-14 09:38:00.877818072 +0000 UTC m=+0.155696112 container start d2732cd5a3d1009053d4f265a9408c638f09d11210f6883e31238ff2971a9da1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:38:00 np0005486808 podman[413765]: 2025-10-14 09:38:00.881858901 +0000 UTC m=+0.159737021 container attach d2732cd5a3d1009053d4f265a9408c638f09d11210f6883e31238ff2971a9da1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_yalow, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 14 05:38:00 np0005486808 nova_compute[259627]: 2025-10-14 09:38:00.965 2 DEBUG nova.network.neutron [req-ddd875e4-6ec3-417f-ad39-920b185a64ba req-4a064cc3-3544-4fcf-a920-4788fea6ac7c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Updated VIF entry in instance network info cache for port 31c29d30-7edf-486a-a168-0356f62ab3b9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:38:00 np0005486808 nova_compute[259627]: 2025-10-14 09:38:00.970 2 DEBUG nova.network.neutron [req-ddd875e4-6ec3-417f-ad39-920b185a64ba req-4a064cc3-3544-4fcf-a920-4788fea6ac7c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Updating instance_info_cache with network_info: [{"id": "31c29d30-7edf-486a-a168-0356f62ab3b9", "address": "fa:16:3e:7f:2b:85", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31c29d30-7e", "ovs_interfaceid": "31c29d30-7edf-486a-a168-0356f62ab3b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "address": "fa:16:3e:9e:0a:24", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9e:a24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb67fadaf-4e", "ovs_interfaceid": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:38:00 np0005486808 nova_compute[259627]: 2025-10-14 09:38:00.997 2 DEBUG oslo_concurrency.lockutils [req-ddd875e4-6ec3-417f-ad39-920b185a64ba req-4a064cc3-3544-4fcf-a920-4788fea6ac7c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-4e23c3df-9710-4287-9890-cdae2d551fc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:38:01 np0005486808 nova_compute[259627]: 2025-10-14 09:38:01.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:01 np0005486808 nova_compute[259627]: 2025-10-14 09:38:01.273 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Updating instance_info_cache with network_info: [{"id": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "address": "fa:16:3e:fc:b2:53", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b7ded4-91", "ovs_interfaceid": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "address": "fa:16:3e:98:69:75", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe98:6975", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4332d2f-ff", "ovs_interfaceid": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:38:01 np0005486808 nova_compute[259627]: 2025-10-14 09:38:01.301 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-bdfd070c-d036-4656-b797-efba7d4a4565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:38:01 np0005486808 nova_compute[259627]: 2025-10-14 09:38:01.301 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 14 05:38:01 np0005486808 nova_compute[259627]: 2025-10-14 09:38:01.302 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:38:01 np0005486808 nova_compute[259627]: 2025-10-14 09:38:01.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:01 np0005486808 nice_yalow[413783]: {
Oct 14 05:38:01 np0005486808 nice_yalow[413783]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:38:01 np0005486808 nice_yalow[413783]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:38:01 np0005486808 nice_yalow[413783]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:38:01 np0005486808 nice_yalow[413783]:        "osd_id": 2,
Oct 14 05:38:01 np0005486808 nice_yalow[413783]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:38:01 np0005486808 nice_yalow[413783]:        "type": "bluestore"
Oct 14 05:38:01 np0005486808 nice_yalow[413783]:    },
Oct 14 05:38:01 np0005486808 nice_yalow[413783]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:38:01 np0005486808 nice_yalow[413783]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:38:01 np0005486808 nice_yalow[413783]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:38:01 np0005486808 nice_yalow[413783]:        "osd_id": 1,
Oct 14 05:38:01 np0005486808 nice_yalow[413783]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:38:01 np0005486808 nice_yalow[413783]:        "type": "bluestore"
Oct 14 05:38:01 np0005486808 nice_yalow[413783]:    },
Oct 14 05:38:01 np0005486808 nice_yalow[413783]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:38:01 np0005486808 nice_yalow[413783]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:38:01 np0005486808 nice_yalow[413783]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:38:01 np0005486808 nice_yalow[413783]:        "osd_id": 0,
Oct 14 05:38:01 np0005486808 nice_yalow[413783]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:38:01 np0005486808 nice_yalow[413783]:        "type": "bluestore"
Oct 14 05:38:01 np0005486808 nice_yalow[413783]:    }
Oct 14 05:38:01 np0005486808 nice_yalow[413783]: }
Oct 14 05:38:01 np0005486808 systemd[1]: libpod-d2732cd5a3d1009053d4f265a9408c638f09d11210f6883e31238ff2971a9da1.scope: Deactivated successfully.
Oct 14 05:38:01 np0005486808 podman[413765]: 2025-10-14 09:38:01.883759718 +0000 UTC m=+1.161637748 container died d2732cd5a3d1009053d4f265a9408c638f09d11210f6883e31238ff2971a9da1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_yalow, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 14 05:38:01 np0005486808 systemd[1]: libpod-d2732cd5a3d1009053d4f265a9408c638f09d11210f6883e31238ff2971a9da1.scope: Consumed 1.013s CPU time.
Oct 14 05:38:01 np0005486808 systemd[1]: var-lib-containers-storage-overlay-ade5c986a14e38320bc46b2edffee5be0942a0768f5c49f179fcb079006d0789-merged.mount: Deactivated successfully.
Oct 14 05:38:01 np0005486808 podman[413765]: 2025-10-14 09:38:01.93764538 +0000 UTC m=+1.215523410 container remove d2732cd5a3d1009053d4f265a9408c638f09d11210f6883e31238ff2971a9da1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_yalow, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:38:01 np0005486808 systemd[1]: libpod-conmon-d2732cd5a3d1009053d4f265a9408c638f09d11210f6883e31238ff2971a9da1.scope: Deactivated successfully.
Oct 14 05:38:01 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:38:01 np0005486808 nova_compute[259627]: 2025-10-14 09:38:01.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:38:01 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:38:01 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:38:01 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:38:01 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 784e2ec4-3083-4f9b-a589-5ecf936b60d5 does not exist
Oct 14 05:38:01 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev bf338d45-c4da-47af-b287-2cc45e6732e1 does not exist
Oct 14 05:38:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2547: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 14 05:38:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:38:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:38:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:38:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:38:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:38:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:38:02 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:38:02 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:38:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:38:03 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #123. Immutable memtables: 0.
Oct 14 05:38:03 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:03.145680) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 05:38:03 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 123
Oct 14 05:38:03 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434683145736, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 1601, "num_deletes": 250, "total_data_size": 2609815, "memory_usage": 2646648, "flush_reason": "Manual Compaction"}
Oct 14 05:38:03 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #124: started
Oct 14 05:38:03 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434683159147, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 124, "file_size": 1529055, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52276, "largest_seqno": 53876, "table_properties": {"data_size": 1523607, "index_size": 2652, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14341, "raw_average_key_size": 20, "raw_value_size": 1511590, "raw_average_value_size": 2187, "num_data_blocks": 122, "num_entries": 691, "num_filter_entries": 691, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760434519, "oldest_key_time": 1760434519, "file_creation_time": 1760434683, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 124, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:38:03 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 13525 microseconds, and 8402 cpu microseconds.
Oct 14 05:38:03 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:38:03 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:03.159203) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #124: 1529055 bytes OK
Oct 14 05:38:03 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:03.159229) [db/memtable_list.cc:519] [default] Level-0 commit table #124 started
Oct 14 05:38:03 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:03.161087) [db/memtable_list.cc:722] [default] Level-0 commit table #124: memtable #1 done
Oct 14 05:38:03 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:03.161111) EVENT_LOG_v1 {"time_micros": 1760434683161104, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 05:38:03 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:03.161133) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 05:38:03 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 2602846, prev total WAL file size 2602846, number of live WAL files 2.
Oct 14 05:38:03 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000120.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:38:03 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:03.162833) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303030' seq:72057594037927935, type:22 .. '6D6772737461740032323531' seq:0, type:0; will stop at (end)
Oct 14 05:38:03 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 05:38:03 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [124(1493KB)], [122(9880KB)]
Oct 14 05:38:03 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434683162927, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [124], "files_L6": [122], "score": -1, "input_data_size": 11646883, "oldest_snapshot_seqno": -1}
Oct 14 05:38:03 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #125: 7457 keys, 9358591 bytes, temperature: kUnknown
Oct 14 05:38:03 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434683226494, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 125, "file_size": 9358591, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9310751, "index_size": 28080, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18693, "raw_key_size": 193866, "raw_average_key_size": 25, "raw_value_size": 9179503, "raw_average_value_size": 1230, "num_data_blocks": 1099, "num_entries": 7457, "num_filter_entries": 7457, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760434683, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:38:03 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:38:03 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:03.226818) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 9358591 bytes
Oct 14 05:38:03 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:03.228300) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 183.0 rd, 147.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 9.6 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(13.7) write-amplify(6.1) OK, records in: 7892, records dropped: 435 output_compression: NoCompression
Oct 14 05:38:03 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:03.228331) EVENT_LOG_v1 {"time_micros": 1760434683228317, "job": 74, "event": "compaction_finished", "compaction_time_micros": 63648, "compaction_time_cpu_micros": 42930, "output_level": 6, "num_output_files": 1, "total_output_size": 9358591, "num_input_records": 7892, "num_output_records": 7457, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 05:38:03 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000124.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:38:03 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434683228913, "job": 74, "event": "table_file_deletion", "file_number": 124}
Oct 14 05:38:03 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:38:03 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434683232492, "job": 74, "event": "table_file_deletion", "file_number": 122}
Oct 14 05:38:03 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:03.162595) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:38:03 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:03.232575) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:38:03 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:03.232584) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:38:03 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:03.232589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:38:03 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:03.232593) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:38:03 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:03.232598) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:38:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2548: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 14 05:38:05 np0005486808 ovn_controller[152662]: 2025-10-14T09:38:05Z|00192|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7f:2b:85 10.100.0.8
Oct 14 05:38:05 np0005486808 ovn_controller[152662]: 2025-10-14T09:38:05Z|00193|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7f:2b:85 10.100.0.8
Oct 14 05:38:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:38:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1113738685' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:38:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:38:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1113738685' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:38:06 np0005486808 nova_compute[259627]: 2025-10-14 09:38:06.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:06 np0005486808 nova_compute[259627]: 2025-10-14 09:38:06.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2549: 305 pgs: 305 active+clean; 188 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 121 op/s
Oct 14 05:38:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:07.054 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:38:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:07.055 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:38:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:07.056 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:38:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:38:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2550: 305 pgs: 305 active+clean; 188 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 266 KiB/s rd, 2.0 MiB/s wr, 47 op/s
Oct 14 05:38:08 np0005486808 podman[413880]: 2025-10-14 09:38:08.69892227 +0000 UTC m=+0.094719125 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 14 05:38:08 np0005486808 podman[413879]: 2025-10-14 09:38:08.745729269 +0000 UTC m=+0.144008065 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009)
Oct 14 05:38:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2551: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 05:38:11 np0005486808 nova_compute[259627]: 2025-10-14 09:38:11.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:11 np0005486808 nova_compute[259627]: 2025-10-14 09:38:11.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2552: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 05:38:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:38:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2553: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 05:38:16 np0005486808 nova_compute[259627]: 2025-10-14 09:38:16.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:16 np0005486808 nova_compute[259627]: 2025-10-14 09:38:16.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2554: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 05:38:18 np0005486808 nova_compute[259627]: 2025-10-14 09:38:18.085 2 DEBUG nova.compute.manager [req-44c43d0a-864d-481e-bb31-ec786b925c3b req-f7fc48ff-8c39-44fd-8cb4-1f11835d8d00 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Received event network-changed-31c29d30-7edf-486a-a168-0356f62ab3b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:38:18 np0005486808 nova_compute[259627]: 2025-10-14 09:38:18.085 2 DEBUG nova.compute.manager [req-44c43d0a-864d-481e-bb31-ec786b925c3b req-f7fc48ff-8c39-44fd-8cb4-1f11835d8d00 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Refreshing instance network info cache due to event network-changed-31c29d30-7edf-486a-a168-0356f62ab3b9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:38:18 np0005486808 nova_compute[259627]: 2025-10-14 09:38:18.086 2 DEBUG oslo_concurrency.lockutils [req-44c43d0a-864d-481e-bb31-ec786b925c3b req-f7fc48ff-8c39-44fd-8cb4-1f11835d8d00 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-4e23c3df-9710-4287-9890-cdae2d551fc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:38:18 np0005486808 nova_compute[259627]: 2025-10-14 09:38:18.086 2 DEBUG oslo_concurrency.lockutils [req-44c43d0a-864d-481e-bb31-ec786b925c3b req-f7fc48ff-8c39-44fd-8cb4-1f11835d8d00 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-4e23c3df-9710-4287-9890-cdae2d551fc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:38:18 np0005486808 nova_compute[259627]: 2025-10-14 09:38:18.086 2 DEBUG nova.network.neutron [req-44c43d0a-864d-481e-bb31-ec786b925c3b req-f7fc48ff-8c39-44fd-8cb4-1f11835d8d00 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Refreshing network info cache for port 31c29d30-7edf-486a-a168-0356f62ab3b9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:38:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:38:18 np0005486808 nova_compute[259627]: 2025-10-14 09:38:18.173 2 DEBUG oslo_concurrency.lockutils [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "4e23c3df-9710-4287-9890-cdae2d551fc0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:38:18 np0005486808 nova_compute[259627]: 2025-10-14 09:38:18.173 2 DEBUG oslo_concurrency.lockutils [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:38:18 np0005486808 nova_compute[259627]: 2025-10-14 09:38:18.174 2 DEBUG oslo_concurrency.lockutils [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:38:18 np0005486808 nova_compute[259627]: 2025-10-14 09:38:18.174 2 DEBUG oslo_concurrency.lockutils [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:38:18 np0005486808 nova_compute[259627]: 2025-10-14 09:38:18.174 2 DEBUG oslo_concurrency.lockutils [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:38:18 np0005486808 nova_compute[259627]: 2025-10-14 09:38:18.175 2 INFO nova.compute.manager [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Terminating instance#033[00m
Oct 14 05:38:18 np0005486808 nova_compute[259627]: 2025-10-14 09:38:18.176 2 DEBUG nova.compute.manager [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:38:18 np0005486808 kernel: tap31c29d30-7e (unregistering): left promiscuous mode
Oct 14 05:38:18 np0005486808 NetworkManager[44885]: <info>  [1760434698.2332] device (tap31c29d30-7e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:38:18 np0005486808 ovn_controller[152662]: 2025-10-14T09:38:18Z|01619|binding|INFO|Releasing lport 31c29d30-7edf-486a-a168-0356f62ab3b9 from this chassis (sb_readonly=0)
Oct 14 05:38:18 np0005486808 ovn_controller[152662]: 2025-10-14T09:38:18Z|01620|binding|INFO|Setting lport 31c29d30-7edf-486a-a168-0356f62ab3b9 down in Southbound
Oct 14 05:38:18 np0005486808 ovn_controller[152662]: 2025-10-14T09:38:18Z|01621|binding|INFO|Removing iface tap31c29d30-7e ovn-installed in OVS
Oct 14 05:38:18 np0005486808 nova_compute[259627]: 2025-10-14 09:38:18.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:18 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.258 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:2b:85 10.100.0.8'], port_security=['fa:16:3e:7f:2b:85 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '4e23c3df-9710-4287-9890-cdae2d551fc0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cb1842f7-933b-4c76-aa59-c55590c98ec5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b6145c26-2b2a-4cab-aefe-d574ccfcd594', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7b96c99b-2e43-4904-a8ba-7ffb70fd145f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=31c29d30-7edf-486a-a168-0356f62ab3b9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:38:18 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.260 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 31c29d30-7edf-486a-a168-0356f62ab3b9 in datapath cb1842f7-933b-4c76-aa59-c55590c98ec5 unbound from our chassis#033[00m
Oct 14 05:38:18 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.261 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cb1842f7-933b-4c76-aa59-c55590c98ec5#033[00m
Oct 14 05:38:18 np0005486808 nova_compute[259627]: 2025-10-14 09:38:18.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:18 np0005486808 kernel: tapb67fadaf-4e (unregistering): left promiscuous mode
Oct 14 05:38:18 np0005486808 NetworkManager[44885]: <info>  [1760434698.2795] device (tapb67fadaf-4e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:38:18 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.290 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b532af59-773c-470d-a666-950366763d14]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:38:18 np0005486808 ovn_controller[152662]: 2025-10-14T09:38:18Z|01622|binding|INFO|Releasing lport b67fadaf-4e6a-49b7-b340-4a8659d6216b from this chassis (sb_readonly=0)
Oct 14 05:38:18 np0005486808 ovn_controller[152662]: 2025-10-14T09:38:18Z|01623|binding|INFO|Setting lport b67fadaf-4e6a-49b7-b340-4a8659d6216b down in Southbound
Oct 14 05:38:18 np0005486808 nova_compute[259627]: 2025-10-14 09:38:18.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:18 np0005486808 ovn_controller[152662]: 2025-10-14T09:38:18Z|01624|binding|INFO|Removing iface tapb67fadaf-4e ovn-installed in OVS
Oct 14 05:38:18 np0005486808 nova_compute[259627]: 2025-10-14 09:38:18.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:18 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.310 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:0a:24 2001:db8::f816:3eff:fe9e:a24'], port_security=['fa:16:3e:9e:0a:24 2001:db8::f816:3eff:fe9e:a24'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe9e:a24/64', 'neutron:device_id': '4e23c3df-9710-4287-9890-cdae2d551fc0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4914ad10-cd65-4b9a-8ebb-43ebafc5f222', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b6145c26-2b2a-4cab-aefe-d574ccfcd594', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=85b53903-cbc2-43bd-a94b-2366acd741ae, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=b67fadaf-4e6a-49b7-b340-4a8659d6216b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:38:18 np0005486808 nova_compute[259627]: 2025-10-14 09:38:18.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:18 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.344 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[88f9f314-107d-463f-9e72-fbfc5b6a612f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:38:18 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.347 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c8165a8c-3544-46b9-9451-bc0d35077a30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:38:18 np0005486808 systemd[1]: machine-qemu\x2d180\x2dinstance\x2d00000093.scope: Deactivated successfully.
Oct 14 05:38:18 np0005486808 systemd[1]: machine-qemu\x2d180\x2dinstance\x2d00000093.scope: Consumed 13.664s CPU time.
Oct 14 05:38:18 np0005486808 systemd-machined[214636]: Machine qemu-180-instance-00000093 terminated.
Oct 14 05:38:18 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.392 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[0fc1f172-38ab-45f6-bd88-10b5005a4efa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:38:18 np0005486808 NetworkManager[44885]: <info>  [1760434698.4162] manager: (tapb67fadaf-4e): new Tun device (/org/freedesktop/NetworkManager/Devices/664)
Oct 14 05:38:18 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.417 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[36ff8a6f-5378-4b11-82bb-63ec15be113e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcb1842f7-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:90:b7:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 8, 'rx_bytes': 1000, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 8, 'rx_bytes': 1000, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 454], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 841296, 'reachable_time': 21039, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 413939, 'error': None, 'target': 'ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:38:18 np0005486808 nova_compute[259627]: 2025-10-14 09:38:18.433 2 INFO nova.virt.libvirt.driver [-] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Instance destroyed successfully.#033[00m
Oct 14 05:38:18 np0005486808 nova_compute[259627]: 2025-10-14 09:38:18.433 2 DEBUG nova.objects.instance [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'resources' on Instance uuid 4e23c3df-9710-4287-9890-cdae2d551fc0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:38:18 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.444 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9ddc28a7-8fcb-43ed-8294-852591b85d0d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcb1842f7-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 841308, 'tstamp': 841308}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 413956, 'error': None, 'target': 'ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcb1842f7-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 841310, 'tstamp': 841310}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 413956, 'error': None, 'target': 'ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:38:18 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.446 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcb1842f7-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:38:18 np0005486808 nova_compute[259627]: 2025-10-14 09:38:18.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:18 np0005486808 nova_compute[259627]: 2025-10-14 09:38:18.452 2 DEBUG nova.virt.libvirt.vif [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:37:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2012583976',display_name='tempest-TestGettingAddress-server-2012583976',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2012583976',id=147,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMmIsbrf38DSeYfU0D9g6Pa5xZfdqfiS9Azfj2PL9jRlp2LEaDRc8Y0PnazdZrJgmswDWR1X7Y7Ef3rL30Sf7OIa7xQNBBPmRTRkIXeX3JkPwVJc6ryIngStVh0nN8o8KQ==',key_name='tempest-TestGettingAddress-932007528',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:37:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-mne0wzqh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:37:53Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=4e23c3df-9710-4287-9890-cdae2d551fc0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31c29d30-7edf-486a-a168-0356f62ab3b9", "address": "fa:16:3e:7f:2b:85", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31c29d30-7e", "ovs_interfaceid": "31c29d30-7edf-486a-a168-0356f62ab3b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:38:18 np0005486808 nova_compute[259627]: 2025-10-14 09:38:18.452 2 DEBUG nova.network.os_vif_util [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "31c29d30-7edf-486a-a168-0356f62ab3b9", "address": "fa:16:3e:7f:2b:85", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31c29d30-7e", "ovs_interfaceid": "31c29d30-7edf-486a-a168-0356f62ab3b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:38:18 np0005486808 nova_compute[259627]: 2025-10-14 09:38:18.453 2 DEBUG nova.network.os_vif_util [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7f:2b:85,bridge_name='br-int',has_traffic_filtering=True,id=31c29d30-7edf-486a-a168-0356f62ab3b9,network=Network(cb1842f7-933b-4c76-aa59-c55590c98ec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31c29d30-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:38:18 np0005486808 nova_compute[259627]: 2025-10-14 09:38:18.454 2 DEBUG os_vif [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7f:2b:85,bridge_name='br-int',has_traffic_filtering=True,id=31c29d30-7edf-486a-a168-0356f62ab3b9,network=Network(cb1842f7-933b-4c76-aa59-c55590c98ec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31c29d30-7e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:38:18 np0005486808 nova_compute[259627]: 2025-10-14 09:38:18.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:18 np0005486808 nova_compute[259627]: 2025-10-14 09:38:18.457 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31c29d30-7e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:38:18 np0005486808 nova_compute[259627]: 2025-10-14 09:38:18.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:18 np0005486808 nova_compute[259627]: 2025-10-14 09:38:18.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:38:18 np0005486808 nova_compute[259627]: 2025-10-14 09:38:18.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:18 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.461 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcb1842f7-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:38:18 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.462 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:38:18 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.462 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcb1842f7-90, col_values=(('external_ids', {'iface-id': '49fe400f-9e76-42ad-a72c-3f9a5bf50e43'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:38:18 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.462 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:38:18 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.464 162547 INFO neutron.agent.ovn.metadata.agent [-] Port b67fadaf-4e6a-49b7-b340-4a8659d6216b in datapath 4914ad10-cd65-4b9a-8ebb-43ebafc5f222 unbound from our chassis#033[00m
Oct 14 05:38:18 np0005486808 nova_compute[259627]: 2025-10-14 09:38:18.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:18 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.465 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4914ad10-cd65-4b9a-8ebb-43ebafc5f222#033[00m
Oct 14 05:38:18 np0005486808 nova_compute[259627]: 2025-10-14 09:38:18.468 2 INFO os_vif [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7f:2b:85,bridge_name='br-int',has_traffic_filtering=True,id=31c29d30-7edf-486a-a168-0356f62ab3b9,network=Network(cb1842f7-933b-4c76-aa59-c55590c98ec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31c29d30-7e')#033[00m
Oct 14 05:38:18 np0005486808 nova_compute[259627]: 2025-10-14 09:38:18.468 2 DEBUG nova.virt.libvirt.vif [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:37:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2012583976',display_name='tempest-TestGettingAddress-server-2012583976',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2012583976',id=147,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMmIsbrf38DSeYfU0D9g6Pa5xZfdqfiS9Azfj2PL9jRlp2LEaDRc8Y0PnazdZrJgmswDWR1X7Y7Ef3rL30Sf7OIa7xQNBBPmRTRkIXeX3JkPwVJc6ryIngStVh0nN8o8KQ==',key_name='tempest-TestGettingAddress-932007528',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:37:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-mne0wzqh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:37:53Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=4e23c3df-9710-4287-9890-cdae2d551fc0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "address": "fa:16:3e:9e:0a:24", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9e:a24", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb67fadaf-4e", "ovs_interfaceid": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:38:18 np0005486808 nova_compute[259627]: 2025-10-14 09:38:18.469 2 DEBUG nova.network.os_vif_util [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "address": "fa:16:3e:9e:0a:24", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9e:a24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb67fadaf-4e", "ovs_interfaceid": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:38:18 np0005486808 nova_compute[259627]: 2025-10-14 09:38:18.470 2 DEBUG nova.network.os_vif_util [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:0a:24,bridge_name='br-int',has_traffic_filtering=True,id=b67fadaf-4e6a-49b7-b340-4a8659d6216b,network=Network(4914ad10-cd65-4b9a-8ebb-43ebafc5f222),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb67fadaf-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:38:18 np0005486808 nova_compute[259627]: 2025-10-14 09:38:18.471 2 DEBUG os_vif [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:0a:24,bridge_name='br-int',has_traffic_filtering=True,id=b67fadaf-4e6a-49b7-b340-4a8659d6216b,network=Network(4914ad10-cd65-4b9a-8ebb-43ebafc5f222),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb67fadaf-4e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:38:18 np0005486808 nova_compute[259627]: 2025-10-14 09:38:18.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:18 np0005486808 nova_compute[259627]: 2025-10-14 09:38:18.472 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb67fadaf-4e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:38:18 np0005486808 nova_compute[259627]: 2025-10-14 09:38:18.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:18 np0005486808 nova_compute[259627]: 2025-10-14 09:38:18.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:38:18 np0005486808 nova_compute[259627]: 2025-10-14 09:38:18.478 2 INFO os_vif [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:0a:24,bridge_name='br-int',has_traffic_filtering=True,id=b67fadaf-4e6a-49b7-b340-4a8659d6216b,network=Network(4914ad10-cd65-4b9a-8ebb-43ebafc5f222),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb67fadaf-4e')#033[00m
Oct 14 05:38:18 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.492 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8dfc1845-ef09-4a6c-8665-c1ba60f47dbd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:38:18 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.537 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ce36baa0-0b17-486e-bca0-764293b12a81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:38:18 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.542 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[3b6372c2-44b9-4d94-bed9-dba1999997b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:38:18 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.584 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[def7b38e-d0ca-428f-b421-0b97493b3159]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:38:18 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.612 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e390a9a4-97e0-401e-a5e1-0dc83f4d22ef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4914ad10-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:b4:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 32, 'tx_packets': 6, 'rx_bytes': 2912, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 32, 'tx_packets': 6, 'rx_bytes': 2912, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 841397, 'reachable_time': 27486, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 32, 'inoctets': 2464, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 32, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2464, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 32, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 413985, 'error': None, 'target': 'ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:38:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2555: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 60 KiB/s rd, 109 KiB/s wr, 17 op/s
Oct 14 05:38:18 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.637 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8ec93e21-2833-4b6e-adb3-006924725417]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4914ad10-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 841411, 'tstamp': 841411}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 413986, 'error': None, 'target': 'ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:38:18 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.639 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4914ad10-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:38:18 np0005486808 nova_compute[259627]: 2025-10-14 09:38:18.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:18 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.644 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4914ad10-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:38:18 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.645 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:38:18 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.645 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4914ad10-c0, col_values=(('external_ids', {'iface-id': '7ea31872-5902-4f26-8d38-70f94c9c61fa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:38:18 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:18.646 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:38:18 np0005486808 nova_compute[259627]: 2025-10-14 09:38:18.943 2 INFO nova.virt.libvirt.driver [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Deleting instance files /var/lib/nova/instances/4e23c3df-9710-4287-9890-cdae2d551fc0_del#033[00m
Oct 14 05:38:18 np0005486808 nova_compute[259627]: 2025-10-14 09:38:18.944 2 INFO nova.virt.libvirt.driver [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Deletion of /var/lib/nova/instances/4e23c3df-9710-4287-9890-cdae2d551fc0_del complete#033[00m
Oct 14 05:38:19 np0005486808 nova_compute[259627]: 2025-10-14 09:38:19.009 2 INFO nova.compute.manager [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Took 0.83 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:38:19 np0005486808 nova_compute[259627]: 2025-10-14 09:38:19.010 2 DEBUG oslo.service.loopingcall [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:38:19 np0005486808 nova_compute[259627]: 2025-10-14 09:38:19.011 2 DEBUG nova.compute.manager [-] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:38:19 np0005486808 nova_compute[259627]: 2025-10-14 09:38:19.011 2 DEBUG nova.network.neutron [-] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:38:19 np0005486808 nova_compute[259627]: 2025-10-14 09:38:19.301 2 DEBUG nova.compute.manager [req-f3f5bc46-99d9-449c-b500-282a87c59e4d req-d1d0cbd9-b67a-424e-9755-3733ded5878e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Received event network-vif-unplugged-31c29d30-7edf-486a-a168-0356f62ab3b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:38:19 np0005486808 nova_compute[259627]: 2025-10-14 09:38:19.302 2 DEBUG oslo_concurrency.lockutils [req-f3f5bc46-99d9-449c-b500-282a87c59e4d req-d1d0cbd9-b67a-424e-9755-3733ded5878e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:38:19 np0005486808 nova_compute[259627]: 2025-10-14 09:38:19.302 2 DEBUG oslo_concurrency.lockutils [req-f3f5bc46-99d9-449c-b500-282a87c59e4d req-d1d0cbd9-b67a-424e-9755-3733ded5878e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:38:19 np0005486808 nova_compute[259627]: 2025-10-14 09:38:19.303 2 DEBUG oslo_concurrency.lockutils [req-f3f5bc46-99d9-449c-b500-282a87c59e4d req-d1d0cbd9-b67a-424e-9755-3733ded5878e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:38:19 np0005486808 nova_compute[259627]: 2025-10-14 09:38:19.303 2 DEBUG nova.compute.manager [req-f3f5bc46-99d9-449c-b500-282a87c59e4d req-d1d0cbd9-b67a-424e-9755-3733ded5878e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] No waiting events found dispatching network-vif-unplugged-31c29d30-7edf-486a-a168-0356f62ab3b9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:38:19 np0005486808 nova_compute[259627]: 2025-10-14 09:38:19.304 2 DEBUG nova.compute.manager [req-f3f5bc46-99d9-449c-b500-282a87c59e4d req-d1d0cbd9-b67a-424e-9755-3733ded5878e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Received event network-vif-unplugged-31c29d30-7edf-486a-a168-0356f62ab3b9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:38:20 np0005486808 nova_compute[259627]: 2025-10-14 09:38:20.172 2 DEBUG nova.compute.manager [req-36d171a7-9831-49cd-ad95-624065adf103 req-c66b46d6-b518-4c5b-9da0-9a708d2d5ba1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Received event network-vif-unplugged-b67fadaf-4e6a-49b7-b340-4a8659d6216b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:38:20 np0005486808 nova_compute[259627]: 2025-10-14 09:38:20.173 2 DEBUG oslo_concurrency.lockutils [req-36d171a7-9831-49cd-ad95-624065adf103 req-c66b46d6-b518-4c5b-9da0-9a708d2d5ba1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:38:20 np0005486808 nova_compute[259627]: 2025-10-14 09:38:20.173 2 DEBUG oslo_concurrency.lockutils [req-36d171a7-9831-49cd-ad95-624065adf103 req-c66b46d6-b518-4c5b-9da0-9a708d2d5ba1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:38:20 np0005486808 nova_compute[259627]: 2025-10-14 09:38:20.174 2 DEBUG oslo_concurrency.lockutils [req-36d171a7-9831-49cd-ad95-624065adf103 req-c66b46d6-b518-4c5b-9da0-9a708d2d5ba1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:38:20 np0005486808 nova_compute[259627]: 2025-10-14 09:38:20.174 2 DEBUG nova.compute.manager [req-36d171a7-9831-49cd-ad95-624065adf103 req-c66b46d6-b518-4c5b-9da0-9a708d2d5ba1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] No waiting events found dispatching network-vif-unplugged-b67fadaf-4e6a-49b7-b340-4a8659d6216b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:38:20 np0005486808 nova_compute[259627]: 2025-10-14 09:38:20.174 2 DEBUG nova.compute.manager [req-36d171a7-9831-49cd-ad95-624065adf103 req-c66b46d6-b518-4c5b-9da0-9a708d2d5ba1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Received event network-vif-unplugged-b67fadaf-4e6a-49b7-b340-4a8659d6216b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:38:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2556: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 93 KiB/s rd, 116 KiB/s wr, 48 op/s
Oct 14 05:38:20 np0005486808 nova_compute[259627]: 2025-10-14 09:38:20.867 2 DEBUG nova.network.neutron [-] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:38:20 np0005486808 nova_compute[259627]: 2025-10-14 09:38:20.889 2 INFO nova.compute.manager [-] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Took 1.88 seconds to deallocate network for instance.#033[00m
Oct 14 05:38:20 np0005486808 nova_compute[259627]: 2025-10-14 09:38:20.947 2 DEBUG oslo_concurrency.lockutils [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:38:20 np0005486808 nova_compute[259627]: 2025-10-14 09:38:20.948 2 DEBUG oslo_concurrency.lockutils [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:38:20 np0005486808 nova_compute[259627]: 2025-10-14 09:38:20.970 2 DEBUG nova.network.neutron [req-44c43d0a-864d-481e-bb31-ec786b925c3b req-f7fc48ff-8c39-44fd-8cb4-1f11835d8d00 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Updated VIF entry in instance network info cache for port 31c29d30-7edf-486a-a168-0356f62ab3b9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:38:20 np0005486808 nova_compute[259627]: 2025-10-14 09:38:20.971 2 DEBUG nova.network.neutron [req-44c43d0a-864d-481e-bb31-ec786b925c3b req-f7fc48ff-8c39-44fd-8cb4-1f11835d8d00 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Updating instance_info_cache with network_info: [{"id": "31c29d30-7edf-486a-a168-0356f62ab3b9", "address": "fa:16:3e:7f:2b:85", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31c29d30-7e", "ovs_interfaceid": "31c29d30-7edf-486a-a168-0356f62ab3b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "address": "fa:16:3e:9e:0a:24", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9e:a24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb67fadaf-4e", "ovs_interfaceid": "b67fadaf-4e6a-49b7-b340-4a8659d6216b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:38:20 np0005486808 nova_compute[259627]: 2025-10-14 09:38:20.997 2 DEBUG oslo_concurrency.lockutils [req-44c43d0a-864d-481e-bb31-ec786b925c3b req-f7fc48ff-8c39-44fd-8cb4-1f11835d8d00 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-4e23c3df-9710-4287-9890-cdae2d551fc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:38:21 np0005486808 nova_compute[259627]: 2025-10-14 09:38:21.045 2 DEBUG oslo_concurrency.processutils [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:38:21 np0005486808 nova_compute[259627]: 2025-10-14 09:38:21.436 2 DEBUG nova.compute.manager [req-0a7e3218-683f-47d5-a71a-4a8ca1884766 req-58b6801e-a8f1-4617-9b33-14a8059d2715 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Received event network-vif-plugged-31c29d30-7edf-486a-a168-0356f62ab3b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:38:21 np0005486808 nova_compute[259627]: 2025-10-14 09:38:21.437 2 DEBUG oslo_concurrency.lockutils [req-0a7e3218-683f-47d5-a71a-4a8ca1884766 req-58b6801e-a8f1-4617-9b33-14a8059d2715 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:38:21 np0005486808 nova_compute[259627]: 2025-10-14 09:38:21.438 2 DEBUG oslo_concurrency.lockutils [req-0a7e3218-683f-47d5-a71a-4a8ca1884766 req-58b6801e-a8f1-4617-9b33-14a8059d2715 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:38:21 np0005486808 nova_compute[259627]: 2025-10-14 09:38:21.438 2 DEBUG oslo_concurrency.lockutils [req-0a7e3218-683f-47d5-a71a-4a8ca1884766 req-58b6801e-a8f1-4617-9b33-14a8059d2715 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:38:21 np0005486808 nova_compute[259627]: 2025-10-14 09:38:21.439 2 DEBUG nova.compute.manager [req-0a7e3218-683f-47d5-a71a-4a8ca1884766 req-58b6801e-a8f1-4617-9b33-14a8059d2715 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] No waiting events found dispatching network-vif-plugged-31c29d30-7edf-486a-a168-0356f62ab3b9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:38:21 np0005486808 nova_compute[259627]: 2025-10-14 09:38:21.439 2 WARNING nova.compute.manager [req-0a7e3218-683f-47d5-a71a-4a8ca1884766 req-58b6801e-a8f1-4617-9b33-14a8059d2715 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Received unexpected event network-vif-plugged-31c29d30-7edf-486a-a168-0356f62ab3b9 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:38:21 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:38:21 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1051288224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:38:21 np0005486808 nova_compute[259627]: 2025-10-14 09:38:21.486 2 DEBUG oslo_concurrency.processutils [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:38:21 np0005486808 nova_compute[259627]: 2025-10-14 09:38:21.494 2 DEBUG nova.compute.provider_tree [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:38:21 np0005486808 nova_compute[259627]: 2025-10-14 09:38:21.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:21 np0005486808 nova_compute[259627]: 2025-10-14 09:38:21.514 2 DEBUG nova.scheduler.client.report [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:38:21 np0005486808 nova_compute[259627]: 2025-10-14 09:38:21.544 2 DEBUG oslo_concurrency.lockutils [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:38:21 np0005486808 nova_compute[259627]: 2025-10-14 09:38:21.568 2 INFO nova.scheduler.client.report [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Deleted allocations for instance 4e23c3df-9710-4287-9890-cdae2d551fc0#033[00m
Oct 14 05:38:21 np0005486808 nova_compute[259627]: 2025-10-14 09:38:21.631 2 DEBUG oslo_concurrency.lockutils [None req-27d457d6-6383-4dc8-89d7-cc5cac343089 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.458s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:38:22 np0005486808 nova_compute[259627]: 2025-10-14 09:38:22.259 2 DEBUG nova.compute.manager [req-fdf795df-bb14-4e85-913b-81217b077670 req-6b2628e4-87f4-4f57-873e-2c87421d03ab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Received event network-vif-plugged-b67fadaf-4e6a-49b7-b340-4a8659d6216b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:38:22 np0005486808 nova_compute[259627]: 2025-10-14 09:38:22.259 2 DEBUG oslo_concurrency.lockutils [req-fdf795df-bb14-4e85-913b-81217b077670 req-6b2628e4-87f4-4f57-873e-2c87421d03ab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:38:22 np0005486808 nova_compute[259627]: 2025-10-14 09:38:22.260 2 DEBUG oslo_concurrency.lockutils [req-fdf795df-bb14-4e85-913b-81217b077670 req-6b2628e4-87f4-4f57-873e-2c87421d03ab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:38:22 np0005486808 nova_compute[259627]: 2025-10-14 09:38:22.260 2 DEBUG oslo_concurrency.lockutils [req-fdf795df-bb14-4e85-913b-81217b077670 req-6b2628e4-87f4-4f57-873e-2c87421d03ab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4e23c3df-9710-4287-9890-cdae2d551fc0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:38:22 np0005486808 nova_compute[259627]: 2025-10-14 09:38:22.261 2 DEBUG nova.compute.manager [req-fdf795df-bb14-4e85-913b-81217b077670 req-6b2628e4-87f4-4f57-873e-2c87421d03ab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] No waiting events found dispatching network-vif-plugged-b67fadaf-4e6a-49b7-b340-4a8659d6216b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:38:22 np0005486808 nova_compute[259627]: 2025-10-14 09:38:22.261 2 WARNING nova.compute.manager [req-fdf795df-bb14-4e85-913b-81217b077670 req-6b2628e4-87f4-4f57-873e-2c87421d03ab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Received unexpected event network-vif-plugged-b67fadaf-4e6a-49b7-b340-4a8659d6216b for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:38:22 np0005486808 nova_compute[259627]: 2025-10-14 09:38:22.262 2 DEBUG nova.compute.manager [req-fdf795df-bb14-4e85-913b-81217b077670 req-6b2628e4-87f4-4f57-873e-2c87421d03ab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Received event network-vif-deleted-31c29d30-7edf-486a-a168-0356f62ab3b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:38:22 np0005486808 nova_compute[259627]: 2025-10-14 09:38:22.262 2 INFO nova.compute.manager [req-fdf795df-bb14-4e85-913b-81217b077670 req-6b2628e4-87f4-4f57-873e-2c87421d03ab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Neutron deleted interface 31c29d30-7edf-486a-a168-0356f62ab3b9; detaching it from the instance and deleting it from the info cache#033[00m
Oct 14 05:38:22 np0005486808 nova_compute[259627]: 2025-10-14 09:38:22.263 2 DEBUG nova.network.neutron [req-fdf795df-bb14-4e85-913b-81217b077670 req-6b2628e4-87f4-4f57-873e-2c87421d03ab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Oct 14 05:38:22 np0005486808 nova_compute[259627]: 2025-10-14 09:38:22.267 2 DEBUG nova.compute.manager [req-fdf795df-bb14-4e85-913b-81217b077670 req-6b2628e4-87f4-4f57-873e-2c87421d03ab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Detach interface failed, port_id=31c29d30-7edf-486a-a168-0356f62ab3b9, reason: Instance 4e23c3df-9710-4287-9890-cdae2d551fc0 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct 14 05:38:22 np0005486808 nova_compute[259627]: 2025-10-14 09:38:22.267 2 DEBUG nova.compute.manager [req-fdf795df-bb14-4e85-913b-81217b077670 req-6b2628e4-87f4-4f57-873e-2c87421d03ab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Received event network-vif-deleted-b67fadaf-4e6a-49b7-b340-4a8659d6216b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:38:22 np0005486808 nova_compute[259627]: 2025-10-14 09:38:22.268 2 INFO nova.compute.manager [req-fdf795df-bb14-4e85-913b-81217b077670 req-6b2628e4-87f4-4f57-873e-2c87421d03ab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Neutron deleted interface b67fadaf-4e6a-49b7-b340-4a8659d6216b; detaching it from the instance and deleting it from the info cache#033[00m
Oct 14 05:38:22 np0005486808 nova_compute[259627]: 2025-10-14 09:38:22.268 2 DEBUG nova.network.neutron [req-fdf795df-bb14-4e85-913b-81217b077670 req-6b2628e4-87f4-4f57-873e-2c87421d03ab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Oct 14 05:38:22 np0005486808 nova_compute[259627]: 2025-10-14 09:38:22.272 2 DEBUG nova.compute.manager [req-fdf795df-bb14-4e85-913b-81217b077670 req-6b2628e4-87f4-4f57-873e-2c87421d03ab 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Detach interface failed, port_id=b67fadaf-4e6a-49b7-b340-4a8659d6216b, reason: Instance 4e23c3df-9710-4287-9890-cdae2d551fc0 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct 14 05:38:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2557: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 21 KiB/s wr, 31 op/s
Oct 14 05:38:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:38:23 np0005486808 nova_compute[259627]: 2025-10-14 09:38:23.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:23.351 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:38:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:23.352 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 14 05:38:23 np0005486808 nova_compute[259627]: 2025-10-14 09:38:23.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:23 np0005486808 podman[414011]: 2025-10-14 09:38:23.707801505 +0000 UTC m=+0.100061587 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:38:23 np0005486808 podman[414010]: 2025-10-14 09:38:23.728284317 +0000 UTC m=+0.121827140 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 05:38:23 np0005486808 nova_compute[259627]: 2025-10-14 09:38:23.847 2 DEBUG oslo_concurrency.lockutils [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "bdfd070c-d036-4656-b797-efba7d4a4565" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:38:23 np0005486808 nova_compute[259627]: 2025-10-14 09:38:23.848 2 DEBUG oslo_concurrency.lockutils [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:38:23 np0005486808 nova_compute[259627]: 2025-10-14 09:38:23.849 2 DEBUG oslo_concurrency.lockutils [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:38:23 np0005486808 nova_compute[259627]: 2025-10-14 09:38:23.849 2 DEBUG oslo_concurrency.lockutils [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:38:23 np0005486808 nova_compute[259627]: 2025-10-14 09:38:23.850 2 DEBUG oslo_concurrency.lockutils [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:38:23 np0005486808 nova_compute[259627]: 2025-10-14 09:38:23.852 2 INFO nova.compute.manager [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Terminating instance#033[00m
Oct 14 05:38:23 np0005486808 nova_compute[259627]: 2025-10-14 09:38:23.854 2 DEBUG nova.compute.manager [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:38:23 np0005486808 kernel: tapd3b7ded4-91 (unregistering): left promiscuous mode
Oct 14 05:38:23 np0005486808 NetworkManager[44885]: <info>  [1760434703.9237] device (tapd3b7ded4-91): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:38:23 np0005486808 ovn_controller[152662]: 2025-10-14T09:38:23Z|01625|binding|INFO|Releasing lport d3b7ded4-91fa-46dc-b6b9-e6e630c275ab from this chassis (sb_readonly=0)
Oct 14 05:38:23 np0005486808 ovn_controller[152662]: 2025-10-14T09:38:23Z|01626|binding|INFO|Setting lport d3b7ded4-91fa-46dc-b6b9-e6e630c275ab down in Southbound
Oct 14 05:38:23 np0005486808 ovn_controller[152662]: 2025-10-14T09:38:23Z|01627|binding|INFO|Removing iface tapd3b7ded4-91 ovn-installed in OVS
Oct 14 05:38:23 np0005486808 nova_compute[259627]: 2025-10-14 09:38:23.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:23.945 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:b2:53 10.100.0.10'], port_security=['fa:16:3e:fc:b2:53 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'bdfd070c-d036-4656-b797-efba7d4a4565', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cb1842f7-933b-4c76-aa59-c55590c98ec5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b6145c26-2b2a-4cab-aefe-d574ccfcd594', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7b96c99b-2e43-4904-a8ba-7ffb70fd145f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=d3b7ded4-91fa-46dc-b6b9-e6e630c275ab) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:38:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:23.947 162547 INFO neutron.agent.ovn.metadata.agent [-] Port d3b7ded4-91fa-46dc-b6b9-e6e630c275ab in datapath cb1842f7-933b-4c76-aa59-c55590c98ec5 unbound from our chassis#033[00m
Oct 14 05:38:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:23.948 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cb1842f7-933b-4c76-aa59-c55590c98ec5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:38:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:23.949 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a198c74b-92d9-4190-bc4e-c4c229e055f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:38:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:23.950 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5 namespace which is not needed anymore#033[00m
Oct 14 05:38:23 np0005486808 nova_compute[259627]: 2025-10-14 09:38:23.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:23 np0005486808 kernel: tapd4332d2f-ff (unregistering): left promiscuous mode
Oct 14 05:38:23 np0005486808 NetworkManager[44885]: <info>  [1760434703.9764] device (tapd4332d2f-ff): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:38:23 np0005486808 nova_compute[259627]: 2025-10-14 09:38:23.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:23 np0005486808 nova_compute[259627]: 2025-10-14 09:38:23.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:23 np0005486808 ovn_controller[152662]: 2025-10-14T09:38:23Z|01628|binding|INFO|Releasing lport d4332d2f-fff2-4de9-811c-7d5ce2580b21 from this chassis (sb_readonly=0)
Oct 14 05:38:23 np0005486808 ovn_controller[152662]: 2025-10-14T09:38:23Z|01629|binding|INFO|Setting lport d4332d2f-fff2-4de9-811c-7d5ce2580b21 down in Southbound
Oct 14 05:38:23 np0005486808 ovn_controller[152662]: 2025-10-14T09:38:23Z|01630|binding|INFO|Removing iface tapd4332d2f-ff ovn-installed in OVS
Oct 14 05:38:23 np0005486808 nova_compute[259627]: 2025-10-14 09:38:23.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.000 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:69:75 2001:db8::f816:3eff:fe98:6975'], port_security=['fa:16:3e:98:69:75 2001:db8::f816:3eff:fe98:6975'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe98:6975/64', 'neutron:device_id': 'bdfd070c-d036-4656-b797-efba7d4a4565', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4914ad10-cd65-4b9a-8ebb-43ebafc5f222', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b6145c26-2b2a-4cab-aefe-d574ccfcd594', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=85b53903-cbc2-43bd-a94b-2366acd741ae, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=d4332d2f-fff2-4de9-811c-7d5ce2580b21) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:38:24 np0005486808 nova_compute[259627]: 2025-10-14 09:38:24.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:24 np0005486808 systemd[1]: machine-qemu\x2d179\x2dinstance\x2d00000092.scope: Deactivated successfully.
Oct 14 05:38:24 np0005486808 systemd[1]: machine-qemu\x2d179\x2dinstance\x2d00000092.scope: Consumed 16.802s CPU time.
Oct 14 05:38:24 np0005486808 systemd-machined[214636]: Machine qemu-179-instance-00000092 terminated.
Oct 14 05:38:24 np0005486808 NetworkManager[44885]: <info>  [1760434704.0910] manager: (tapd4332d2f-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/665)
Oct 14 05:38:24 np0005486808 nova_compute[259627]: 2025-10-14 09:38:24.120 2 INFO nova.virt.libvirt.driver [-] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Instance destroyed successfully.#033[00m
Oct 14 05:38:24 np0005486808 neutron-haproxy-ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5[412010]: [NOTICE]   (412014) : haproxy version is 2.8.14-c23fe91
Oct 14 05:38:24 np0005486808 neutron-haproxy-ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5[412010]: [NOTICE]   (412014) : path to executable is /usr/sbin/haproxy
Oct 14 05:38:24 np0005486808 neutron-haproxy-ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5[412010]: [WARNING]  (412014) : Exiting Master process...
Oct 14 05:38:24 np0005486808 neutron-haproxy-ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5[412010]: [WARNING]  (412014) : Exiting Master process...
Oct 14 05:38:24 np0005486808 nova_compute[259627]: 2025-10-14 09:38:24.122 2 DEBUG nova.objects.instance [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'resources' on Instance uuid bdfd070c-d036-4656-b797-efba7d4a4565 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:38:24 np0005486808 neutron-haproxy-ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5[412010]: [ALERT]    (412014) : Current worker (412016) exited with code 143 (Terminated)
Oct 14 05:38:24 np0005486808 neutron-haproxy-ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5[412010]: [WARNING]  (412014) : All workers exited. Exiting... (0)
Oct 14 05:38:24 np0005486808 systemd[1]: libpod-a5fb4a873dbebb3c5d634e3c4960ccd5662b44ea742a782aba63a014099ebd54.scope: Deactivated successfully.
Oct 14 05:38:24 np0005486808 nova_compute[259627]: 2025-10-14 09:38:24.135 2 DEBUG nova.virt.libvirt.vif [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:37:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1555367739',display_name='tempest-TestGettingAddress-server-1555367739',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1555367739',id=146,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMmIsbrf38DSeYfU0D9g6Pa5xZfdqfiS9Azfj2PL9jRlp2LEaDRc8Y0PnazdZrJgmswDWR1X7Y7Ef3rL30Sf7OIa7xQNBBPmRTRkIXeX3JkPwVJc6ryIngStVh0nN8o8KQ==',key_name='tempest-TestGettingAddress-932007528',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:37:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-3rvo4yr0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:37:16Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=bdfd070c-d036-4656-b797-efba7d4a4565,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "address": "fa:16:3e:fc:b2:53", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b7ded4-91", "ovs_interfaceid": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:38:24 np0005486808 nova_compute[259627]: 2025-10-14 09:38:24.140 2 DEBUG nova.network.os_vif_util [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "address": "fa:16:3e:fc:b2:53", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b7ded4-91", "ovs_interfaceid": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:38:24 np0005486808 podman[414076]: 2025-10-14 09:38:24.135824058 +0000 UTC m=+0.066760949 container died a5fb4a873dbebb3c5d634e3c4960ccd5662b44ea742a782aba63a014099ebd54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009)
Oct 14 05:38:24 np0005486808 nova_compute[259627]: 2025-10-14 09:38:24.143 2 DEBUG nova.network.os_vif_util [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fc:b2:53,bridge_name='br-int',has_traffic_filtering=True,id=d3b7ded4-91fa-46dc-b6b9-e6e630c275ab,network=Network(cb1842f7-933b-4c76-aa59-c55590c98ec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3b7ded4-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:38:24 np0005486808 nova_compute[259627]: 2025-10-14 09:38:24.144 2 DEBUG os_vif [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fc:b2:53,bridge_name='br-int',has_traffic_filtering=True,id=d3b7ded4-91fa-46dc-b6b9-e6e630c275ab,network=Network(cb1842f7-933b-4c76-aa59-c55590c98ec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3b7ded4-91') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:38:24 np0005486808 nova_compute[259627]: 2025-10-14 09:38:24.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:24 np0005486808 nova_compute[259627]: 2025-10-14 09:38:24.148 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3b7ded4-91, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:38:24 np0005486808 nova_compute[259627]: 2025-10-14 09:38:24.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:24 np0005486808 nova_compute[259627]: 2025-10-14 09:38:24.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:38:24 np0005486808 nova_compute[259627]: 2025-10-14 09:38:24.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:24 np0005486808 nova_compute[259627]: 2025-10-14 09:38:24.158 2 INFO os_vif [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fc:b2:53,bridge_name='br-int',has_traffic_filtering=True,id=d3b7ded4-91fa-46dc-b6b9-e6e630c275ab,network=Network(cb1842f7-933b-4c76-aa59-c55590c98ec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3b7ded4-91')#033[00m
Oct 14 05:38:24 np0005486808 nova_compute[259627]: 2025-10-14 09:38:24.160 2 DEBUG nova.virt.libvirt.vif [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:37:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1555367739',display_name='tempest-TestGettingAddress-server-1555367739',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1555367739',id=146,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMmIsbrf38DSeYfU0D9g6Pa5xZfdqfiS9Azfj2PL9jRlp2LEaDRc8Y0PnazdZrJgmswDWR1X7Y7Ef3rL30Sf7OIa7xQNBBPmRTRkIXeX3JkPwVJc6ryIngStVh0nN8o8KQ==',key_name='tempest-TestGettingAddress-932007528',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:37:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-3rvo4yr0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:37:16Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=bdfd070c-d036-4656-b797-efba7d4a4565,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "address": "fa:16:3e:98:69:75", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe98:6975", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4332d2f-ff", "ovs_interfaceid": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:38:24 np0005486808 nova_compute[259627]: 2025-10-14 09:38:24.160 2 DEBUG nova.network.os_vif_util [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "address": "fa:16:3e:98:69:75", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe98:6975", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4332d2f-ff", "ovs_interfaceid": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:38:24 np0005486808 nova_compute[259627]: 2025-10-14 09:38:24.161 2 DEBUG nova.network.os_vif_util [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:98:69:75,bridge_name='br-int',has_traffic_filtering=True,id=d4332d2f-fff2-4de9-811c-7d5ce2580b21,network=Network(4914ad10-cd65-4b9a-8ebb-43ebafc5f222),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4332d2f-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:38:24 np0005486808 nova_compute[259627]: 2025-10-14 09:38:24.162 2 DEBUG os_vif [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:98:69:75,bridge_name='br-int',has_traffic_filtering=True,id=d4332d2f-fff2-4de9-811c-7d5ce2580b21,network=Network(4914ad10-cd65-4b9a-8ebb-43ebafc5f222),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4332d2f-ff') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:38:24 np0005486808 nova_compute[259627]: 2025-10-14 09:38:24.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:24 np0005486808 nova_compute[259627]: 2025-10-14 09:38:24.164 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd4332d2f-ff, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:38:24 np0005486808 nova_compute[259627]: 2025-10-14 09:38:24.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:24 np0005486808 nova_compute[259627]: 2025-10-14 09:38:24.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:38:24 np0005486808 nova_compute[259627]: 2025-10-14 09:38:24.169 2 INFO os_vif [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:98:69:75,bridge_name='br-int',has_traffic_filtering=True,id=d4332d2f-fff2-4de9-811c-7d5ce2580b21,network=Network(4914ad10-cd65-4b9a-8ebb-43ebafc5f222),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4332d2f-ff')#033[00m
Oct 14 05:38:24 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a5fb4a873dbebb3c5d634e3c4960ccd5662b44ea742a782aba63a014099ebd54-userdata-shm.mount: Deactivated successfully.
Oct 14 05:38:24 np0005486808 systemd[1]: var-lib-containers-storage-overlay-25a575707cf6d3dded0de43b86eb81e521613e1d92e58a48fd3de84aba25f6fe-merged.mount: Deactivated successfully.
Oct 14 05:38:24 np0005486808 podman[414076]: 2025-10-14 09:38:24.196315683 +0000 UTC m=+0.127252574 container cleanup a5fb4a873dbebb3c5d634e3c4960ccd5662b44ea742a782aba63a014099ebd54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 14 05:38:24 np0005486808 systemd[1]: libpod-conmon-a5fb4a873dbebb3c5d634e3c4960ccd5662b44ea742a782aba63a014099ebd54.scope: Deactivated successfully.
Oct 14 05:38:24 np0005486808 podman[414142]: 2025-10-14 09:38:24.271412686 +0000 UTC m=+0.045956899 container remove a5fb4a873dbebb3c5d634e3c4960ccd5662b44ea742a782aba63a014099ebd54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 05:38:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.280 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8c19e55e-68f4-40af-997e-5b53a1391770]: (4, ('Tue Oct 14 09:38:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5 (a5fb4a873dbebb3c5d634e3c4960ccd5662b44ea742a782aba63a014099ebd54)\na5fb4a873dbebb3c5d634e3c4960ccd5662b44ea742a782aba63a014099ebd54\nTue Oct 14 09:38:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5 (a5fb4a873dbebb3c5d634e3c4960ccd5662b44ea742a782aba63a014099ebd54)\na5fb4a873dbebb3c5d634e3c4960ccd5662b44ea742a782aba63a014099ebd54\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:38:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.282 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bf210d54-f083-4bc6-8bc2-7d13afea5bf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:38:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.284 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcb1842f7-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:38:24 np0005486808 nova_compute[259627]: 2025-10-14 09:38:24.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:24 np0005486808 kernel: tapcb1842f7-90: left promiscuous mode
Oct 14 05:38:24 np0005486808 nova_compute[259627]: 2025-10-14 09:38:24.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.317 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3bb3141c-a8e9-46ed-a5df-6c5fd30b2b89]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:38:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.347 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[35265d52-c823-44f7-8cc3-b62b6a7b8b15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:38:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.349 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2843e757-5734-4ff8-b270-108f49e73ad6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:38:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.368 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7e6d22ad-d9a2-4534-a638-d7445e94e26d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 841288, 'reachable_time': 21456, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 414160, 'error': None, 'target': 'ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:38:24 np0005486808 systemd[1]: run-netns-ovnmeta\x2dcb1842f7\x2d933b\x2d4c76\x2daa59\x2dc55590c98ec5.mount: Deactivated successfully.
Oct 14 05:38:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.374 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cb1842f7-933b-4c76-aa59-c55590c98ec5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:38:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.374 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[6ba165f8-422d-422b-84dc-fb86cc9d73df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:38:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.376 162547 INFO neutron.agent.ovn.metadata.agent [-] Port d4332d2f-fff2-4de9-811c-7d5ce2580b21 in datapath 4914ad10-cd65-4b9a-8ebb-43ebafc5f222 unbound from our chassis#033[00m
Oct 14 05:38:24 np0005486808 nova_compute[259627]: 2025-10-14 09:38:24.378 2 DEBUG nova.compute.manager [req-b4efa9b3-00ec-4c17-bcdb-d80b83f430ac req-d1dd75dd-4f3d-442f-bc53-1ae3a3e62881 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Received event network-changed-d3b7ded4-91fa-46dc-b6b9-e6e630c275ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:38:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.379 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4914ad10-cd65-4b9a-8ebb-43ebafc5f222, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:38:24 np0005486808 nova_compute[259627]: 2025-10-14 09:38:24.379 2 DEBUG nova.compute.manager [req-b4efa9b3-00ec-4c17-bcdb-d80b83f430ac req-d1dd75dd-4f3d-442f-bc53-1ae3a3e62881 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Refreshing instance network info cache due to event network-changed-d3b7ded4-91fa-46dc-b6b9-e6e630c275ab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:38:24 np0005486808 nova_compute[259627]: 2025-10-14 09:38:24.380 2 DEBUG oslo_concurrency.lockutils [req-b4efa9b3-00ec-4c17-bcdb-d80b83f430ac req-d1dd75dd-4f3d-442f-bc53-1ae3a3e62881 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-bdfd070c-d036-4656-b797-efba7d4a4565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:38:24 np0005486808 nova_compute[259627]: 2025-10-14 09:38:24.380 2 DEBUG oslo_concurrency.lockutils [req-b4efa9b3-00ec-4c17-bcdb-d80b83f430ac req-d1dd75dd-4f3d-442f-bc53-1ae3a3e62881 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-bdfd070c-d036-4656-b797-efba7d4a4565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:38:24 np0005486808 nova_compute[259627]: 2025-10-14 09:38:24.381 2 DEBUG nova.network.neutron [req-b4efa9b3-00ec-4c17-bcdb-d80b83f430ac req-d1dd75dd-4f3d-442f-bc53-1ae3a3e62881 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Refreshing network info cache for port d3b7ded4-91fa-46dc-b6b9-e6e630c275ab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:38:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.380 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[0592a474-8b27-4e15-a0e1-83e730a06bf3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:38:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.381 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222 namespace which is not needed anymore#033[00m
Oct 14 05:38:24 np0005486808 neutron-haproxy-ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222[412083]: [NOTICE]   (412087) : haproxy version is 2.8.14-c23fe91
Oct 14 05:38:24 np0005486808 neutron-haproxy-ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222[412083]: [NOTICE]   (412087) : path to executable is /usr/sbin/haproxy
Oct 14 05:38:24 np0005486808 neutron-haproxy-ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222[412083]: [WARNING]  (412087) : Exiting Master process...
Oct 14 05:38:24 np0005486808 neutron-haproxy-ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222[412083]: [ALERT]    (412087) : Current worker (412089) exited with code 143 (Terminated)
Oct 14 05:38:24 np0005486808 neutron-haproxy-ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222[412083]: [WARNING]  (412087) : All workers exited. Exiting... (0)
Oct 14 05:38:24 np0005486808 systemd[1]: libpod-7ab8b2afc12766df09251966b091f22abcd9771c59d975709d780ded3ea72dfb.scope: Deactivated successfully.
Oct 14 05:38:24 np0005486808 nova_compute[259627]: 2025-10-14 09:38:24.605 2 INFO nova.virt.libvirt.driver [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Deleting instance files /var/lib/nova/instances/bdfd070c-d036-4656-b797-efba7d4a4565_del#033[00m
Oct 14 05:38:24 np0005486808 nova_compute[259627]: 2025-10-14 09:38:24.606 2 INFO nova.virt.libvirt.driver [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Deletion of /var/lib/nova/instances/bdfd070c-d036-4656-b797-efba7d4a4565_del complete#033[00m
Oct 14 05:38:24 np0005486808 podman[414179]: 2025-10-14 09:38:24.608269452 +0000 UTC m=+0.081277175 container died 7ab8b2afc12766df09251966b091f22abcd9771c59d975709d780ded3ea72dfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 14 05:38:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2558: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 21 KiB/s wr, 31 op/s
Oct 14 05:38:24 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7ab8b2afc12766df09251966b091f22abcd9771c59d975709d780ded3ea72dfb-userdata-shm.mount: Deactivated successfully.
Oct 14 05:38:24 np0005486808 systemd[1]: var-lib-containers-storage-overlay-1331dc74cf2f6433ae28b4a6f4672079f6c234826e20c160e76e1878b09b3328-merged.mount: Deactivated successfully.
Oct 14 05:38:24 np0005486808 podman[414179]: 2025-10-14 09:38:24.65217191 +0000 UTC m=+0.125179543 container cleanup 7ab8b2afc12766df09251966b091f22abcd9771c59d975709d780ded3ea72dfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:38:24 np0005486808 nova_compute[259627]: 2025-10-14 09:38:24.661 2 INFO nova.compute.manager [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Took 0.81 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:38:24 np0005486808 nova_compute[259627]: 2025-10-14 09:38:24.662 2 DEBUG oslo.service.loopingcall [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:38:24 np0005486808 nova_compute[259627]: 2025-10-14 09:38:24.663 2 DEBUG nova.compute.manager [-] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:38:24 np0005486808 nova_compute[259627]: 2025-10-14 09:38:24.663 2 DEBUG nova.network.neutron [-] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:38:24 np0005486808 systemd[1]: libpod-conmon-7ab8b2afc12766df09251966b091f22abcd9771c59d975709d780ded3ea72dfb.scope: Deactivated successfully.
Oct 14 05:38:24 np0005486808 podman[414207]: 2025-10-14 09:38:24.710958592 +0000 UTC m=+0.038899045 container remove 7ab8b2afc12766df09251966b091f22abcd9771c59d975709d780ded3ea72dfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 14 05:38:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.723 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6b525581-c508-46ff-9905-cb3d7f508389]: (4, ('Tue Oct 14 09:38:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222 (7ab8b2afc12766df09251966b091f22abcd9771c59d975709d780ded3ea72dfb)\n7ab8b2afc12766df09251966b091f22abcd9771c59d975709d780ded3ea72dfb\nTue Oct 14 09:38:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222 (7ab8b2afc12766df09251966b091f22abcd9771c59d975709d780ded3ea72dfb)\n7ab8b2afc12766df09251966b091f22abcd9771c59d975709d780ded3ea72dfb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:38:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.725 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[55906227-1d74-4a4e-82d7-f6c4d9d7d4a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:38:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.726 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4914ad10-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:38:24 np0005486808 nova_compute[259627]: 2025-10-14 09:38:24.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:24 np0005486808 kernel: tap4914ad10-c0: left promiscuous mode
Oct 14 05:38:24 np0005486808 nova_compute[259627]: 2025-10-14 09:38:24.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.755 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5d0b6310-2a96-4fa2-a8e3-9c09b1ecb0a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:38:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.788 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bc338261-d309-43f5-a783-ebf5e2bb6540]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:38:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.789 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[28164651-b9a1-4acc-a593-abcb04750d5b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:38:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.813 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ab3e80ca-26d5-4fc7-a222-2ee5ce6eb156]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 841388, 'reachable_time': 27324, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 414222, 'error': None, 'target': 'ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:38:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.815 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4914ad10-cd65-4b9a-8ebb-43ebafc5f222 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:38:24 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:24.816 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[6eb3db80-7193-4596-b717-f1cd5f05241b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:38:25 np0005486808 systemd[1]: run-netns-ovnmeta\x2d4914ad10\x2dcd65\x2d4b9a\x2d8ebb\x2d43ebafc5f222.mount: Deactivated successfully.
Oct 14 05:38:25 np0005486808 nova_compute[259627]: 2025-10-14 09:38:25.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:38:26 np0005486808 nova_compute[259627]: 2025-10-14 09:38:26.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2559: 305 pgs: 305 active+clean; 41 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 26 KiB/s wr, 59 op/s
Oct 14 05:38:26 np0005486808 nova_compute[259627]: 2025-10-14 09:38:26.696 2 DEBUG nova.network.neutron [req-b4efa9b3-00ec-4c17-bcdb-d80b83f430ac req-d1dd75dd-4f3d-442f-bc53-1ae3a3e62881 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Updated VIF entry in instance network info cache for port d3b7ded4-91fa-46dc-b6b9-e6e630c275ab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:38:26 np0005486808 nova_compute[259627]: 2025-10-14 09:38:26.697 2 DEBUG nova.network.neutron [req-b4efa9b3-00ec-4c17-bcdb-d80b83f430ac req-d1dd75dd-4f3d-442f-bc53-1ae3a3e62881 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Updating instance_info_cache with network_info: [{"id": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "address": "fa:16:3e:fc:b2:53", "network": {"id": "cb1842f7-933b-4c76-aa59-c55590c98ec5", "bridge": "br-int", "label": "tempest-network-smoke--589513035", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b7ded4-91", "ovs_interfaceid": "d3b7ded4-91fa-46dc-b6b9-e6e630c275ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "address": "fa:16:3e:98:69:75", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe98:6975", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4332d2f-ff", "ovs_interfaceid": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:38:26 np0005486808 nova_compute[259627]: 2025-10-14 09:38:26.727 2 DEBUG oslo_concurrency.lockutils [req-b4efa9b3-00ec-4c17-bcdb-d80b83f430ac req-d1dd75dd-4f3d-442f-bc53-1ae3a3e62881 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-bdfd070c-d036-4656-b797-efba7d4a4565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:38:26 np0005486808 nova_compute[259627]: 2025-10-14 09:38:26.728 2 DEBUG nova.compute.manager [req-b4efa9b3-00ec-4c17-bcdb-d80b83f430ac req-d1dd75dd-4f3d-442f-bc53-1ae3a3e62881 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Received event network-vif-unplugged-d3b7ded4-91fa-46dc-b6b9-e6e630c275ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:38:26 np0005486808 nova_compute[259627]: 2025-10-14 09:38:26.729 2 DEBUG oslo_concurrency.lockutils [req-b4efa9b3-00ec-4c17-bcdb-d80b83f430ac req-d1dd75dd-4f3d-442f-bc53-1ae3a3e62881 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:38:26 np0005486808 nova_compute[259627]: 2025-10-14 09:38:26.729 2 DEBUG oslo_concurrency.lockutils [req-b4efa9b3-00ec-4c17-bcdb-d80b83f430ac req-d1dd75dd-4f3d-442f-bc53-1ae3a3e62881 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:38:26 np0005486808 nova_compute[259627]: 2025-10-14 09:38:26.730 2 DEBUG oslo_concurrency.lockutils [req-b4efa9b3-00ec-4c17-bcdb-d80b83f430ac req-d1dd75dd-4f3d-442f-bc53-1ae3a3e62881 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:38:26 np0005486808 nova_compute[259627]: 2025-10-14 09:38:26.730 2 DEBUG nova.compute.manager [req-b4efa9b3-00ec-4c17-bcdb-d80b83f430ac req-d1dd75dd-4f3d-442f-bc53-1ae3a3e62881 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] No waiting events found dispatching network-vif-unplugged-d3b7ded4-91fa-46dc-b6b9-e6e630c275ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:38:26 np0005486808 nova_compute[259627]: 2025-10-14 09:38:26.731 2 DEBUG nova.compute.manager [req-b4efa9b3-00ec-4c17-bcdb-d80b83f430ac req-d1dd75dd-4f3d-442f-bc53-1ae3a3e62881 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Received event network-vif-unplugged-d3b7ded4-91fa-46dc-b6b9-e6e630c275ab for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:38:26 np0005486808 nova_compute[259627]: 2025-10-14 09:38:26.782 2 DEBUG nova.compute.manager [req-9fcf35ca-48a7-43c1-bec8-73a23de81fc8 req-9ee88809-de4f-476c-8373-c5772b919cbd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Received event network-vif-plugged-d3b7ded4-91fa-46dc-b6b9-e6e630c275ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:38:26 np0005486808 nova_compute[259627]: 2025-10-14 09:38:26.783 2 DEBUG oslo_concurrency.lockutils [req-9fcf35ca-48a7-43c1-bec8-73a23de81fc8 req-9ee88809-de4f-476c-8373-c5772b919cbd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:38:26 np0005486808 nova_compute[259627]: 2025-10-14 09:38:26.784 2 DEBUG oslo_concurrency.lockutils [req-9fcf35ca-48a7-43c1-bec8-73a23de81fc8 req-9ee88809-de4f-476c-8373-c5772b919cbd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:38:26 np0005486808 nova_compute[259627]: 2025-10-14 09:38:26.784 2 DEBUG oslo_concurrency.lockutils [req-9fcf35ca-48a7-43c1-bec8-73a23de81fc8 req-9ee88809-de4f-476c-8373-c5772b919cbd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:38:26 np0005486808 nova_compute[259627]: 2025-10-14 09:38:26.785 2 DEBUG nova.compute.manager [req-9fcf35ca-48a7-43c1-bec8-73a23de81fc8 req-9ee88809-de4f-476c-8373-c5772b919cbd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] No waiting events found dispatching network-vif-plugged-d3b7ded4-91fa-46dc-b6b9-e6e630c275ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:38:26 np0005486808 nova_compute[259627]: 2025-10-14 09:38:26.785 2 WARNING nova.compute.manager [req-9fcf35ca-48a7-43c1-bec8-73a23de81fc8 req-9ee88809-de4f-476c-8373-c5772b919cbd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Received unexpected event network-vif-plugged-d3b7ded4-91fa-46dc-b6b9-e6e630c275ab for instance with vm_state active and task_state deleting.#033[00m
Oct 14 05:38:26 np0005486808 nova_compute[259627]: 2025-10-14 09:38:26.786 2 DEBUG nova.compute.manager [req-9fcf35ca-48a7-43c1-bec8-73a23de81fc8 req-9ee88809-de4f-476c-8373-c5772b919cbd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Received event network-vif-unplugged-d4332d2f-fff2-4de9-811c-7d5ce2580b21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:38:26 np0005486808 nova_compute[259627]: 2025-10-14 09:38:26.786 2 DEBUG oslo_concurrency.lockutils [req-9fcf35ca-48a7-43c1-bec8-73a23de81fc8 req-9ee88809-de4f-476c-8373-c5772b919cbd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:38:26 np0005486808 nova_compute[259627]: 2025-10-14 09:38:26.786 2 DEBUG oslo_concurrency.lockutils [req-9fcf35ca-48a7-43c1-bec8-73a23de81fc8 req-9ee88809-de4f-476c-8373-c5772b919cbd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:38:26 np0005486808 nova_compute[259627]: 2025-10-14 09:38:26.787 2 DEBUG oslo_concurrency.lockutils [req-9fcf35ca-48a7-43c1-bec8-73a23de81fc8 req-9ee88809-de4f-476c-8373-c5772b919cbd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:38:26 np0005486808 nova_compute[259627]: 2025-10-14 09:38:26.787 2 DEBUG nova.compute.manager [req-9fcf35ca-48a7-43c1-bec8-73a23de81fc8 req-9ee88809-de4f-476c-8373-c5772b919cbd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] No waiting events found dispatching network-vif-unplugged-d4332d2f-fff2-4de9-811c-7d5ce2580b21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:38:26 np0005486808 nova_compute[259627]: 2025-10-14 09:38:26.787 2 DEBUG nova.compute.manager [req-9fcf35ca-48a7-43c1-bec8-73a23de81fc8 req-9ee88809-de4f-476c-8373-c5772b919cbd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Received event network-vif-unplugged-d4332d2f-fff2-4de9-811c-7d5ce2580b21 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:38:26 np0005486808 nova_compute[259627]: 2025-10-14 09:38:26.788 2 DEBUG nova.compute.manager [req-9fcf35ca-48a7-43c1-bec8-73a23de81fc8 req-9ee88809-de4f-476c-8373-c5772b919cbd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Received event network-vif-plugged-d4332d2f-fff2-4de9-811c-7d5ce2580b21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:38:26 np0005486808 nova_compute[259627]: 2025-10-14 09:38:26.788 2 DEBUG oslo_concurrency.lockutils [req-9fcf35ca-48a7-43c1-bec8-73a23de81fc8 req-9ee88809-de4f-476c-8373-c5772b919cbd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:38:26 np0005486808 nova_compute[259627]: 2025-10-14 09:38:26.788 2 DEBUG oslo_concurrency.lockutils [req-9fcf35ca-48a7-43c1-bec8-73a23de81fc8 req-9ee88809-de4f-476c-8373-c5772b919cbd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:38:26 np0005486808 nova_compute[259627]: 2025-10-14 09:38:26.789 2 DEBUG oslo_concurrency.lockutils [req-9fcf35ca-48a7-43c1-bec8-73a23de81fc8 req-9ee88809-de4f-476c-8373-c5772b919cbd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:38:26 np0005486808 nova_compute[259627]: 2025-10-14 09:38:26.789 2 DEBUG nova.compute.manager [req-9fcf35ca-48a7-43c1-bec8-73a23de81fc8 req-9ee88809-de4f-476c-8373-c5772b919cbd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] No waiting events found dispatching network-vif-plugged-d4332d2f-fff2-4de9-811c-7d5ce2580b21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:38:26 np0005486808 nova_compute[259627]: 2025-10-14 09:38:26.789 2 WARNING nova.compute.manager [req-9fcf35ca-48a7-43c1-bec8-73a23de81fc8 req-9ee88809-de4f-476c-8373-c5772b919cbd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Received unexpected event network-vif-plugged-d4332d2f-fff2-4de9-811c-7d5ce2580b21 for instance with vm_state active and task_state deleting.#033[00m
Oct 14 05:38:26 np0005486808 nova_compute[259627]: 2025-10-14 09:38:26.790 2 DEBUG nova.compute.manager [req-9fcf35ca-48a7-43c1-bec8-73a23de81fc8 req-9ee88809-de4f-476c-8373-c5772b919cbd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Received event network-vif-deleted-d3b7ded4-91fa-46dc-b6b9-e6e630c275ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:38:26 np0005486808 nova_compute[259627]: 2025-10-14 09:38:26.790 2 INFO nova.compute.manager [req-9fcf35ca-48a7-43c1-bec8-73a23de81fc8 req-9ee88809-de4f-476c-8373-c5772b919cbd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Neutron deleted interface d3b7ded4-91fa-46dc-b6b9-e6e630c275ab; detaching it from the instance and deleting it from the info cache#033[00m
Oct 14 05:38:26 np0005486808 nova_compute[259627]: 2025-10-14 09:38:26.790 2 DEBUG nova.network.neutron [req-9fcf35ca-48a7-43c1-bec8-73a23de81fc8 req-9ee88809-de4f-476c-8373-c5772b919cbd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Updating instance_info_cache with network_info: [{"id": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "address": "fa:16:3e:98:69:75", "network": {"id": "4914ad10-cd65-4b9a-8ebb-43ebafc5f222", "bridge": "br-int", "label": "tempest-network-smoke--1837952559", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe98:6975", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4332d2f-ff", "ovs_interfaceid": "d4332d2f-fff2-4de9-811c-7d5ce2580b21", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:38:26 np0005486808 nova_compute[259627]: 2025-10-14 09:38:26.823 2 DEBUG nova.compute.manager [req-9fcf35ca-48a7-43c1-bec8-73a23de81fc8 req-9ee88809-de4f-476c-8373-c5772b919cbd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Detach interface failed, port_id=d3b7ded4-91fa-46dc-b6b9-e6e630c275ab, reason: Instance bdfd070c-d036-4656-b797-efba7d4a4565 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct 14 05:38:27 np0005486808 nova_compute[259627]: 2025-10-14 09:38:27.355 2 DEBUG nova.network.neutron [-] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:38:27 np0005486808 nova_compute[259627]: 2025-10-14 09:38:27.379 2 INFO nova.compute.manager [-] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Took 2.72 seconds to deallocate network for instance.#033[00m
Oct 14 05:38:27 np0005486808 nova_compute[259627]: 2025-10-14 09:38:27.422 2 DEBUG oslo_concurrency.lockutils [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:38:27 np0005486808 nova_compute[259627]: 2025-10-14 09:38:27.423 2 DEBUG oslo_concurrency.lockutils [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:38:27 np0005486808 nova_compute[259627]: 2025-10-14 09:38:27.508 2 DEBUG oslo_concurrency.processutils [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:38:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:38:27 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1442197784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:38:27 np0005486808 nova_compute[259627]: 2025-10-14 09:38:27.935 2 DEBUG oslo_concurrency.processutils [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:38:27 np0005486808 nova_compute[259627]: 2025-10-14 09:38:27.941 2 DEBUG nova.compute.provider_tree [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:38:27 np0005486808 nova_compute[259627]: 2025-10-14 09:38:27.961 2 DEBUG nova.scheduler.client.report [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:38:27 np0005486808 nova_compute[259627]: 2025-10-14 09:38:27.987 2 DEBUG oslo_concurrency.lockutils [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:38:28 np0005486808 nova_compute[259627]: 2025-10-14 09:38:28.028 2 INFO nova.scheduler.client.report [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Deleted allocations for instance bdfd070c-d036-4656-b797-efba7d4a4565#033[00m
Oct 14 05:38:28 np0005486808 nova_compute[259627]: 2025-10-14 09:38:28.128 2 DEBUG oslo_concurrency.lockutils [None req-165911fd-40d3-491a-bfab-055ebc90c0c3 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "bdfd070c-d036-4656-b797-efba7d4a4565" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.280s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:38:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:38:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2560: 305 pgs: 305 active+clean; 41 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 13 KiB/s wr, 58 op/s
Oct 14 05:38:28 np0005486808 nova_compute[259627]: 2025-10-14 09:38:28.907 2 DEBUG nova.compute.manager [req-63ef8cc8-0c8a-4042-a5d3-b77102f72d2e req-41093187-bf5e-46b1-a26f-0c0c245126bf 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Received event network-vif-deleted-d4332d2f-fff2-4de9-811c-7d5ce2580b21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:38:29 np0005486808 nova_compute[259627]: 2025-10-14 09:38:29.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2561: 305 pgs: 305 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 13 KiB/s wr, 58 op/s
Oct 14 05:38:31 np0005486808 nova_compute[259627]: 2025-10-14 09:38:31.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2562: 305 pgs: 305 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.5 KiB/s wr, 27 op/s
Oct 14 05:38:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:38:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:38:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:38:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:38:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:38:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:38:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:38:32
Oct 14 05:38:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:38:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:38:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['images', 'vms', 'default.rgw.control', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'volumes', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root']
Oct 14 05:38:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:38:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:38:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:38:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:38:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:38:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:38:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:38:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:38:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:38:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:38:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:38:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:38:33 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:33.355 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:38:33 np0005486808 nova_compute[259627]: 2025-10-14 09:38:33.432 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434698.4304912, 4e23c3df-9710-4287-9890-cdae2d551fc0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:38:33 np0005486808 nova_compute[259627]: 2025-10-14 09:38:33.433 2 INFO nova.compute.manager [-] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:38:33 np0005486808 nova_compute[259627]: 2025-10-14 09:38:33.470 2 DEBUG nova.compute.manager [None req-5c3f268b-6fad-4997-ab3a-e7fc5b784b27 - - - - - -] [instance: 4e23c3df-9710-4287-9890-cdae2d551fc0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:38:34 np0005486808 nova_compute[259627]: 2025-10-14 09:38:34.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2563: 305 pgs: 305 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.5 KiB/s wr, 27 op/s
Oct 14 05:38:36 np0005486808 nova_compute[259627]: 2025-10-14 09:38:36.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:36 np0005486808 nova_compute[259627]: 2025-10-14 09:38:36.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:36 np0005486808 nova_compute[259627]: 2025-10-14 09:38:36.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2564: 305 pgs: 305 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.5 KiB/s wr, 27 op/s
Oct 14 05:38:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:38:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2565: 305 pgs: 305 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:38:39 np0005486808 nova_compute[259627]: 2025-10-14 09:38:39.110 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434704.108715, bdfd070c-d036-4656-b797-efba7d4a4565 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:38:39 np0005486808 nova_compute[259627]: 2025-10-14 09:38:39.111 2 INFO nova.compute.manager [-] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:38:39 np0005486808 nova_compute[259627]: 2025-10-14 09:38:39.146 2 DEBUG nova.compute.manager [None req-e11be0fd-c752-473b-a714-d0dff6e67c4c - - - - - -] [instance: bdfd070c-d036-4656-b797-efba7d4a4565] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:38:39 np0005486808 nova_compute[259627]: 2025-10-14 09:38:39.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:39 np0005486808 podman[414247]: 2025-10-14 09:38:39.697560112 +0000 UTC m=+0.096521340 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 14 05:38:39 np0005486808 podman[414246]: 2025-10-14 09:38:39.713663677 +0000 UTC m=+0.119903033 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:38:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2566: 305 pgs: 305 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:38:41 np0005486808 nova_compute[259627]: 2025-10-14 09:38:41.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2567: 305 pgs: 305 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:38:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:38:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:38:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:38:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:38:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:38:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 05:38:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:38:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:38:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:38:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:38:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:38:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 05:38:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:38:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:38:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:38:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:38:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:38:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:38:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:38:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:38:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:38:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:38:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:38:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:38:43 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #126. Immutable memtables: 0.
Oct 14 05:38:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:43.745826) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 05:38:43 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 126
Oct 14 05:38:43 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434723745915, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 549, "num_deletes": 251, "total_data_size": 578483, "memory_usage": 588696, "flush_reason": "Manual Compaction"}
Oct 14 05:38:43 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #127: started
Oct 14 05:38:43 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434723752972, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 127, "file_size": 573213, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 53877, "largest_seqno": 54425, "table_properties": {"data_size": 570161, "index_size": 1024, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6977, "raw_average_key_size": 18, "raw_value_size": 564210, "raw_average_value_size": 1533, "num_data_blocks": 46, "num_entries": 368, "num_filter_entries": 368, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760434683, "oldest_key_time": 1760434683, "file_creation_time": 1760434723, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 127, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:38:43 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 7237 microseconds, and 4267 cpu microseconds.
Oct 14 05:38:43 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:38:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:43.753076) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #127: 573213 bytes OK
Oct 14 05:38:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:43.753099) [db/memtable_list.cc:519] [default] Level-0 commit table #127 started
Oct 14 05:38:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:43.754627) [db/memtable_list.cc:722] [default] Level-0 commit table #127: memtable #1 done
Oct 14 05:38:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:43.754649) EVENT_LOG_v1 {"time_micros": 1760434723754642, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 05:38:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:43.754672) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 05:38:43 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 575397, prev total WAL file size 575397, number of live WAL files 2.
Oct 14 05:38:43 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000123.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:38:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:43.755341) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Oct 14 05:38:43 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 05:38:43 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [127(559KB)], [125(9139KB)]
Oct 14 05:38:43 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434723755557, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [127], "files_L6": [125], "score": -1, "input_data_size": 9931804, "oldest_snapshot_seqno": -1}
Oct 14 05:38:43 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #128: 7315 keys, 8282339 bytes, temperature: kUnknown
Oct 14 05:38:43 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434723811443, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 128, "file_size": 8282339, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8236370, "index_size": 26592, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18309, "raw_key_size": 191600, "raw_average_key_size": 26, "raw_value_size": 8108558, "raw_average_value_size": 1108, "num_data_blocks": 1029, "num_entries": 7315, "num_filter_entries": 7315, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760434723, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:38:43 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:38:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:43.811712) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 8282339 bytes
Oct 14 05:38:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:43.813176) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 177.7 rd, 148.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 8.9 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(31.8) write-amplify(14.4) OK, records in: 7825, records dropped: 510 output_compression: NoCompression
Oct 14 05:38:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:43.813205) EVENT_LOG_v1 {"time_micros": 1760434723813193, "job": 76, "event": "compaction_finished", "compaction_time_micros": 55897, "compaction_time_cpu_micros": 37865, "output_level": 6, "num_output_files": 1, "total_output_size": 8282339, "num_input_records": 7825, "num_output_records": 7315, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 05:38:43 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000127.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:38:43 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434723813568, "job": 76, "event": "table_file_deletion", "file_number": 127}
Oct 14 05:38:43 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:38:43 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434723817239, "job": 76, "event": "table_file_deletion", "file_number": 125}
Oct 14 05:38:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:43.755253) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:38:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:43.817302) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:38:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:43.817308) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:38:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:43.817311) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:38:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:43.817314) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:38:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:38:43.817317) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:38:44 np0005486808 nova_compute[259627]: 2025-10-14 09:38:44.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2568: 305 pgs: 305 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:38:46 np0005486808 nova_compute[259627]: 2025-10-14 09:38:46.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2569: 305 pgs: 305 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:38:46 np0005486808 nova_compute[259627]: 2025-10-14 09:38:46.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:38:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:38:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2570: 305 pgs: 305 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:38:49 np0005486808 nova_compute[259627]: 2025-10-14 09:38:49.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:50.223 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:47:b8 10.100.0.2 2001:db8::f816:3eff:fe76:47b8'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe76:47b8/64', 'neutron:device_id': 'ovnmeta-14196b9b-0205-497b-9e98-32690613a533', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14196b9b-0205-497b-9e98-32690613a533', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=87d75e0f-ba6b-4079-b571-32cab0870048, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=500514c8-ea10-48c5-93d8-b1c6948e60b0) old=Port_Binding(mac=['fa:16:3e:76:47:b8 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-14196b9b-0205-497b-9e98-32690613a533', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14196b9b-0205-497b-9e98-32690613a533', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:38:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:50.224 162547 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 500514c8-ea10-48c5-93d8-b1c6948e60b0 in datapath 14196b9b-0205-497b-9e98-32690613a533 updated#033[00m
Oct 14 05:38:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:50.225 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 14196b9b-0205-497b-9e98-32690613a533, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:38:50 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:50.227 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[33ecf7e3-ac28-496d-b04c-32b028b426ba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:38:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2571: 305 pgs: 305 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:38:50 np0005486808 nova_compute[259627]: 2025-10-14 09:38:50.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:38:50 np0005486808 nova_compute[259627]: 2025-10-14 09:38:50.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:38:50 np0005486808 nova_compute[259627]: 2025-10-14 09:38:50.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 14 05:38:51 np0005486808 nova_compute[259627]: 2025-10-14 09:38:51.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2572: 305 pgs: 305 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:38:52 np0005486808 nova_compute[259627]: 2025-10-14 09:38:52.998 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:38:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:38:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:53.724 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:47:b8 10.100.0.2 2001:db8:0:1:f816:3eff:fe76:47b8 2001:db8::f816:3eff:fe76:47b8'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fe76:47b8/64 2001:db8::f816:3eff:fe76:47b8/64', 'neutron:device_id': 'ovnmeta-14196b9b-0205-497b-9e98-32690613a533', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14196b9b-0205-497b-9e98-32690613a533', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=87d75e0f-ba6b-4079-b571-32cab0870048, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=500514c8-ea10-48c5-93d8-b1c6948e60b0) old=Port_Binding(mac=['fa:16:3e:76:47:b8 10.100.0.2 2001:db8::f816:3eff:fe76:47b8'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe76:47b8/64', 'neutron:device_id': 'ovnmeta-14196b9b-0205-497b-9e98-32690613a533', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14196b9b-0205-497b-9e98-32690613a533', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:38:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:53.726 162547 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 500514c8-ea10-48c5-93d8-b1c6948e60b0 in datapath 14196b9b-0205-497b-9e98-32690613a533 updated#033[00m
Oct 14 05:38:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:53.728 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 14196b9b-0205-497b-9e98-32690613a533, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:38:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:38:53.729 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7718b5d9-5c9c-4d8e-a31b-ccddc812d110]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:38:54 np0005486808 nova_compute[259627]: 2025-10-14 09:38:54.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2573: 305 pgs: 305 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:38:54 np0005486808 podman[414292]: 2025-10-14 09:38:54.649775458 +0000 UTC m=+0.063805627 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 05:38:54 np0005486808 podman[414293]: 2025-10-14 09:38:54.661842644 +0000 UTC m=+0.067303342 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 05:38:54 np0005486808 nova_compute[259627]: 2025-10-14 09:38:54.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:38:55 np0005486808 nova_compute[259627]: 2025-10-14 09:38:55.008 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:38:55 np0005486808 nova_compute[259627]: 2025-10-14 09:38:55.009 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:38:55 np0005486808 nova_compute[259627]: 2025-10-14 09:38:55.009 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:38:55 np0005486808 nova_compute[259627]: 2025-10-14 09:38:55.009 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:38:55 np0005486808 nova_compute[259627]: 2025-10-14 09:38:55.009 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:38:55 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:38:55 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3806017976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:38:55 np0005486808 nova_compute[259627]: 2025-10-14 09:38:55.474 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:38:55 np0005486808 nova_compute[259627]: 2025-10-14 09:38:55.748 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:38:55 np0005486808 nova_compute[259627]: 2025-10-14 09:38:55.750 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3594MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:38:55 np0005486808 nova_compute[259627]: 2025-10-14 09:38:55.750 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:38:55 np0005486808 nova_compute[259627]: 2025-10-14 09:38:55.751 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:38:55 np0005486808 nova_compute[259627]: 2025-10-14 09:38:55.900 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:38:55 np0005486808 nova_compute[259627]: 2025-10-14 09:38:55.901 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:38:55 np0005486808 nova_compute[259627]: 2025-10-14 09:38:55.966 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:38:56 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:38:56 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/848032482' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:38:56 np0005486808 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 05:38:56 np0005486808 nova_compute[259627]: 2025-10-14 09:38:56.437 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:38:56 np0005486808 nova_compute[259627]: 2025-10-14 09:38:56.445 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:38:56 np0005486808 nova_compute[259627]: 2025-10-14 09:38:56.471 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:38:56 np0005486808 nova_compute[259627]: 2025-10-14 09:38:56.504 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:38:56 np0005486808 nova_compute[259627]: 2025-10-14 09:38:56.505 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:38:56 np0005486808 nova_compute[259627]: 2025-10-14 09:38:56.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2574: 305 pgs: 305 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:38:57 np0005486808 nova_compute[259627]: 2025-10-14 09:38:57.506 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:38:57 np0005486808 nova_compute[259627]: 2025-10-14 09:38:57.507 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:38:57 np0005486808 nova_compute[259627]: 2025-10-14 09:38:57.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:38:57 np0005486808 nova_compute[259627]: 2025-10-14 09:38:57.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:38:58 np0005486808 nova_compute[259627]: 2025-10-14 09:38:58.007 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 14 05:38:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:38:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2575: 305 pgs: 305 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:38:58 np0005486808 nova_compute[259627]: 2025-10-14 09:38:58.679 2 DEBUG oslo_concurrency.lockutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:38:58 np0005486808 nova_compute[259627]: 2025-10-14 09:38:58.679 2 DEBUG oslo_concurrency.lockutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:38:58 np0005486808 nova_compute[259627]: 2025-10-14 09:38:58.708 2 DEBUG nova.compute.manager [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:38:58 np0005486808 nova_compute[259627]: 2025-10-14 09:38:58.890 2 DEBUG oslo_concurrency.lockutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:38:58 np0005486808 nova_compute[259627]: 2025-10-14 09:38:58.891 2 DEBUG oslo_concurrency.lockutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:38:58 np0005486808 nova_compute[259627]: 2025-10-14 09:38:58.901 2 DEBUG nova.virt.hardware [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:38:58 np0005486808 nova_compute[259627]: 2025-10-14 09:38:58.902 2 INFO nova.compute.claims [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:38:59 np0005486808 nova_compute[259627]: 2025-10-14 09:38:59.000 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:38:59 np0005486808 nova_compute[259627]: 2025-10-14 09:38:59.050 2 DEBUG oslo_concurrency.processutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:38:59 np0005486808 nova_compute[259627]: 2025-10-14 09:38:59.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:38:59 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:38:59 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3347606092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:38:59 np0005486808 nova_compute[259627]: 2025-10-14 09:38:59.548 2 DEBUG oslo_concurrency.processutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:38:59 np0005486808 nova_compute[259627]: 2025-10-14 09:38:59.554 2 DEBUG nova.compute.provider_tree [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:38:59 np0005486808 nova_compute[259627]: 2025-10-14 09:38:59.581 2 DEBUG nova.scheduler.client.report [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:38:59 np0005486808 nova_compute[259627]: 2025-10-14 09:38:59.629 2 DEBUG oslo_concurrency.lockutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:38:59 np0005486808 nova_compute[259627]: 2025-10-14 09:38:59.630 2 DEBUG nova.compute.manager [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:38:59 np0005486808 nova_compute[259627]: 2025-10-14 09:38:59.707 2 DEBUG nova.compute.manager [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:38:59 np0005486808 nova_compute[259627]: 2025-10-14 09:38:59.708 2 DEBUG nova.network.neutron [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:38:59 np0005486808 nova_compute[259627]: 2025-10-14 09:38:59.735 2 INFO nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:38:59 np0005486808 nova_compute[259627]: 2025-10-14 09:38:59.756 2 DEBUG nova.compute.manager [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:38:59 np0005486808 nova_compute[259627]: 2025-10-14 09:38:59.869 2 DEBUG nova.compute.manager [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:38:59 np0005486808 nova_compute[259627]: 2025-10-14 09:38:59.871 2 DEBUG nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:38:59 np0005486808 nova_compute[259627]: 2025-10-14 09:38:59.872 2 INFO nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Creating image(s)#033[00m
Oct 14 05:38:59 np0005486808 nova_compute[259627]: 2025-10-14 09:38:59.911 2 DEBUG nova.storage.rbd_utils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image e5b13156-71d2-4a9c-be63-1beebe1ca3fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:38:59 np0005486808 nova_compute[259627]: 2025-10-14 09:38:59.948 2 DEBUG nova.storage.rbd_utils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image e5b13156-71d2-4a9c-be63-1beebe1ca3fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:38:59 np0005486808 nova_compute[259627]: 2025-10-14 09:38:59.975 2 DEBUG nova.storage.rbd_utils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image e5b13156-71d2-4a9c-be63-1beebe1ca3fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:38:59 np0005486808 nova_compute[259627]: 2025-10-14 09:38:59.979 2 DEBUG oslo_concurrency.processutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:39:00 np0005486808 nova_compute[259627]: 2025-10-14 09:39:00.028 2 DEBUG nova.policy [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30cd4a7832e74a8ba07238937405c4ea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:39:00 np0005486808 nova_compute[259627]: 2025-10-14 09:39:00.033 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:39:00 np0005486808 nova_compute[259627]: 2025-10-14 09:39:00.071 2 DEBUG oslo_concurrency.processutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:39:00 np0005486808 nova_compute[259627]: 2025-10-14 09:39:00.072 2 DEBUG oslo_concurrency.lockutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:39:00 np0005486808 nova_compute[259627]: 2025-10-14 09:39:00.072 2 DEBUG oslo_concurrency.lockutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:39:00 np0005486808 nova_compute[259627]: 2025-10-14 09:39:00.073 2 DEBUG oslo_concurrency.lockutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:39:00 np0005486808 nova_compute[259627]: 2025-10-14 09:39:00.097 2 DEBUG nova.storage.rbd_utils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image e5b13156-71d2-4a9c-be63-1beebe1ca3fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:39:00 np0005486808 nova_compute[259627]: 2025-10-14 09:39:00.100 2 DEBUG oslo_concurrency.processutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 e5b13156-71d2-4a9c-be63-1beebe1ca3fb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:39:00 np0005486808 nova_compute[259627]: 2025-10-14 09:39:00.362 2 DEBUG oslo_concurrency.processutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 e5b13156-71d2-4a9c-be63-1beebe1ca3fb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.261s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:39:00 np0005486808 nova_compute[259627]: 2025-10-14 09:39:00.443 2 DEBUG nova.storage.rbd_utils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] resizing rbd image e5b13156-71d2-4a9c-be63-1beebe1ca3fb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:39:00 np0005486808 nova_compute[259627]: 2025-10-14 09:39:00.550 2 DEBUG nova.objects.instance [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'migration_context' on Instance uuid e5b13156-71d2-4a9c-be63-1beebe1ca3fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:39:00 np0005486808 nova_compute[259627]: 2025-10-14 09:39:00.574 2 DEBUG nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:39:00 np0005486808 nova_compute[259627]: 2025-10-14 09:39:00.574 2 DEBUG nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Ensure instance console log exists: /var/lib/nova/instances/e5b13156-71d2-4a9c-be63-1beebe1ca3fb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:39:00 np0005486808 nova_compute[259627]: 2025-10-14 09:39:00.574 2 DEBUG oslo_concurrency.lockutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:39:00 np0005486808 nova_compute[259627]: 2025-10-14 09:39:00.575 2 DEBUG oslo_concurrency.lockutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:39:00 np0005486808 nova_compute[259627]: 2025-10-14 09:39:00.575 2 DEBUG oslo_concurrency.lockutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:39:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2576: 305 pgs: 305 active+clean; 51 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 167 KiB/s wr, 2 op/s
Oct 14 05:39:01 np0005486808 nova_compute[259627]: 2025-10-14 09:39:01.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:02 np0005486808 nova_compute[259627]: 2025-10-14 09:39:02.517 2 DEBUG nova.network.neutron [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Successfully created port: 30c28c87-45b1-43e9-930b-c8ba5142286f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:39:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2577: 305 pgs: 305 active+clean; 51 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 167 KiB/s wr, 2 op/s
Oct 14 05:39:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:39:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:39:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:39:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:39:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:39:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:39:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:39:02 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:39:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:39:02 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:39:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:39:02 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:39:02 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 956726a7-5c12-4a68-bf17-4325b85dd9e3 does not exist
Oct 14 05:39:02 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev bc7bc011-8776-4264-8df5-c13a6edb5ddb does not exist
Oct 14 05:39:02 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev bb08aeff-726b-4be7-a5e5-43ba3297f9f9 does not exist
Oct 14 05:39:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:39:02 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:39:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:39:02 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:39:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:39:02 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:39:02 np0005486808 nova_compute[259627]: 2025-10-14 09:39:02.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:39:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:39:03 np0005486808 nova_compute[259627]: 2025-10-14 09:39:03.193 2 DEBUG nova.network.neutron [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Successfully updated port: 30c28c87-45b1-43e9-930b-c8ba5142286f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:39:03 np0005486808 nova_compute[259627]: 2025-10-14 09:39:03.258 2 DEBUG oslo_concurrency.lockutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "refresh_cache-e5b13156-71d2-4a9c-be63-1beebe1ca3fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:39:03 np0005486808 nova_compute[259627]: 2025-10-14 09:39:03.258 2 DEBUG oslo_concurrency.lockutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquired lock "refresh_cache-e5b13156-71d2-4a9c-be63-1beebe1ca3fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:39:03 np0005486808 nova_compute[259627]: 2025-10-14 09:39:03.258 2 DEBUG nova.network.neutron [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:39:03 np0005486808 nova_compute[259627]: 2025-10-14 09:39:03.332 2 DEBUG nova.compute.manager [req-821d96d6-c345-4c93-b0eb-c2494b300a41 req-505da766-619e-47ce-80b8-e7111a02aacb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Received event network-changed-30c28c87-45b1-43e9-930b-c8ba5142286f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:39:03 np0005486808 nova_compute[259627]: 2025-10-14 09:39:03.333 2 DEBUG nova.compute.manager [req-821d96d6-c345-4c93-b0eb-c2494b300a41 req-505da766-619e-47ce-80b8-e7111a02aacb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Refreshing instance network info cache due to event network-changed-30c28c87-45b1-43e9-930b-c8ba5142286f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:39:03 np0005486808 nova_compute[259627]: 2025-10-14 09:39:03.333 2 DEBUG oslo_concurrency.lockutils [req-821d96d6-c345-4c93-b0eb-c2494b300a41 req-505da766-619e-47ce-80b8-e7111a02aacb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e5b13156-71d2-4a9c-be63-1beebe1ca3fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:39:03 np0005486808 nova_compute[259627]: 2025-10-14 09:39:03.543 2 DEBUG nova.network.neutron [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:39:03 np0005486808 podman[414843]: 2025-10-14 09:39:03.749817991 +0000 UTC m=+0.068306827 container create 14a43929da8039d095e260a119986528fbf0cde353e52c424ec112d45af294ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kilby, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 14 05:39:03 np0005486808 systemd[1]: Started libpod-conmon-14a43929da8039d095e260a119986528fbf0cde353e52c424ec112d45af294ec.scope.
Oct 14 05:39:03 np0005486808 podman[414843]: 2025-10-14 09:39:03.720722117 +0000 UTC m=+0.039211003 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:39:03 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:39:03 np0005486808 podman[414843]: 2025-10-14 09:39:03.853477435 +0000 UTC m=+0.171966261 container init 14a43929da8039d095e260a119986528fbf0cde353e52c424ec112d45af294ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kilby, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 05:39:03 np0005486808 podman[414843]: 2025-10-14 09:39:03.861745648 +0000 UTC m=+0.180234484 container start 14a43929da8039d095e260a119986528fbf0cde353e52c424ec112d45af294ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:39:03 np0005486808 podman[414843]: 2025-10-14 09:39:03.867899269 +0000 UTC m=+0.186388105 container attach 14a43929da8039d095e260a119986528fbf0cde353e52c424ec112d45af294ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kilby, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:39:03 np0005486808 xenodochial_kilby[414860]: 167 167
Oct 14 05:39:03 np0005486808 systemd[1]: libpod-14a43929da8039d095e260a119986528fbf0cde353e52c424ec112d45af294ec.scope: Deactivated successfully.
Oct 14 05:39:03 np0005486808 podman[414843]: 2025-10-14 09:39:03.871170359 +0000 UTC m=+0.189659185 container died 14a43929da8039d095e260a119986528fbf0cde353e52c424ec112d45af294ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kilby, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:39:03 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:39:03 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:39:03 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:39:03 np0005486808 systemd[1]: var-lib-containers-storage-overlay-073458afd39f15130dda37f96a62361c662b7be0bf1605df52aa0875376b851b-merged.mount: Deactivated successfully.
Oct 14 05:39:03 np0005486808 podman[414843]: 2025-10-14 09:39:03.926334503 +0000 UTC m=+0.244823329 container remove 14a43929da8039d095e260a119986528fbf0cde353e52c424ec112d45af294ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 14 05:39:03 np0005486808 systemd[1]: libpod-conmon-14a43929da8039d095e260a119986528fbf0cde353e52c424ec112d45af294ec.scope: Deactivated successfully.
Oct 14 05:39:04 np0005486808 podman[414883]: 2025-10-14 09:39:04.184495128 +0000 UTC m=+0.071850194 container create 4876e162328a08db45867521fde39e97a8f2c1208b8a3b1e1c13616f87b04782 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shockley, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 14 05:39:04 np0005486808 systemd[1]: Started libpod-conmon-4876e162328a08db45867521fde39e97a8f2c1208b8a3b1e1c13616f87b04782.scope.
Oct 14 05:39:04 np0005486808 podman[414883]: 2025-10-14 09:39:04.154245466 +0000 UTC m=+0.041600572 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:39:04 np0005486808 nova_compute[259627]: 2025-10-14 09:39:04.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:04 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:39:04 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a73628b6bf357cd3e7364643554689a51e781e377de77d981c6c7ebb52c3e7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:39:04 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a73628b6bf357cd3e7364643554689a51e781e377de77d981c6c7ebb52c3e7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:39:04 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a73628b6bf357cd3e7364643554689a51e781e377de77d981c6c7ebb52c3e7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:39:04 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a73628b6bf357cd3e7364643554689a51e781e377de77d981c6c7ebb52c3e7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:39:04 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a73628b6bf357cd3e7364643554689a51e781e377de77d981c6c7ebb52c3e7f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:39:04 np0005486808 podman[414883]: 2025-10-14 09:39:04.292611141 +0000 UTC m=+0.179966227 container init 4876e162328a08db45867521fde39e97a8f2c1208b8a3b1e1c13616f87b04782 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 05:39:04 np0005486808 podman[414883]: 2025-10-14 09:39:04.312228733 +0000 UTC m=+0.199583809 container start 4876e162328a08db45867521fde39e97a8f2c1208b8a3b1e1c13616f87b04782 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 14 05:39:04 np0005486808 podman[414883]: 2025-10-14 09:39:04.316701762 +0000 UTC m=+0.204056908 container attach 4876e162328a08db45867521fde39e97a8f2c1208b8a3b1e1c13616f87b04782 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shockley, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:39:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2578: 305 pgs: 305 active+clean; 51 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 167 KiB/s wr, 2 op/s
Oct 14 05:39:05 np0005486808 affectionate_shockley[414900]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:39:05 np0005486808 affectionate_shockley[414900]: --> relative data size: 1.0
Oct 14 05:39:05 np0005486808 affectionate_shockley[414900]: --> All data devices are unavailable
Oct 14 05:39:05 np0005486808 systemd[1]: libpod-4876e162328a08db45867521fde39e97a8f2c1208b8a3b1e1c13616f87b04782.scope: Deactivated successfully.
Oct 14 05:39:05 np0005486808 systemd[1]: libpod-4876e162328a08db45867521fde39e97a8f2c1208b8a3b1e1c13616f87b04782.scope: Consumed 1.149s CPU time.
Oct 14 05:39:05 np0005486808 podman[414929]: 2025-10-14 09:39:05.555918322 +0000 UTC m=+0.031790081 container died 4876e162328a08db45867521fde39e97a8f2c1208b8a3b1e1c13616f87b04782 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shockley, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:39:05 np0005486808 systemd[1]: var-lib-containers-storage-overlay-8a73628b6bf357cd3e7364643554689a51e781e377de77d981c6c7ebb52c3e7f-merged.mount: Deactivated successfully.
Oct 14 05:39:05 np0005486808 podman[414929]: 2025-10-14 09:39:05.618004206 +0000 UTC m=+0.093875875 container remove 4876e162328a08db45867521fde39e97a8f2c1208b8a3b1e1c13616f87b04782 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shockley, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 14 05:39:05 np0005486808 systemd[1]: libpod-conmon-4876e162328a08db45867521fde39e97a8f2c1208b8a3b1e1c13616f87b04782.scope: Deactivated successfully.
Oct 14 05:39:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:39:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1896758948' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:39:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:39:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1896758948' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:39:06 np0005486808 nova_compute[259627]: 2025-10-14 09:39:06.137 2 DEBUG nova.network.neutron [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Updating instance_info_cache with network_info: [{"id": "30c28c87-45b1-43e9-930b-c8ba5142286f", "address": "fa:16:3e:b7:ec:03", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30c28c87-45", "ovs_interfaceid": "30c28c87-45b1-43e9-930b-c8ba5142286f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:39:06 np0005486808 nova_compute[259627]: 2025-10-14 09:39:06.169 2 DEBUG oslo_concurrency.lockutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Releasing lock "refresh_cache-e5b13156-71d2-4a9c-be63-1beebe1ca3fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:39:06 np0005486808 nova_compute[259627]: 2025-10-14 09:39:06.169 2 DEBUG nova.compute.manager [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Instance network_info: |[{"id": "30c28c87-45b1-43e9-930b-c8ba5142286f", "address": "fa:16:3e:b7:ec:03", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30c28c87-45", "ovs_interfaceid": "30c28c87-45b1-43e9-930b-c8ba5142286f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:39:06 np0005486808 nova_compute[259627]: 2025-10-14 09:39:06.170 2 DEBUG oslo_concurrency.lockutils [req-821d96d6-c345-4c93-b0eb-c2494b300a41 req-505da766-619e-47ce-80b8-e7111a02aacb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e5b13156-71d2-4a9c-be63-1beebe1ca3fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:39:06 np0005486808 nova_compute[259627]: 2025-10-14 09:39:06.170 2 DEBUG nova.network.neutron [req-821d96d6-c345-4c93-b0eb-c2494b300a41 req-505da766-619e-47ce-80b8-e7111a02aacb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Refreshing network info cache for port 30c28c87-45b1-43e9-930b-c8ba5142286f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:39:06 np0005486808 nova_compute[259627]: 2025-10-14 09:39:06.180 2 DEBUG nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Start _get_guest_xml network_info=[{"id": "30c28c87-45b1-43e9-930b-c8ba5142286f", "address": "fa:16:3e:b7:ec:03", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30c28c87-45", "ovs_interfaceid": "30c28c87-45b1-43e9-930b-c8ba5142286f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:39:06 np0005486808 nova_compute[259627]: 2025-10-14 09:39:06.187 2 WARNING nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:39:06 np0005486808 nova_compute[259627]: 2025-10-14 09:39:06.193 2 DEBUG nova.virt.libvirt.host [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:39:06 np0005486808 nova_compute[259627]: 2025-10-14 09:39:06.195 2 DEBUG nova.virt.libvirt.host [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:39:06 np0005486808 nova_compute[259627]: 2025-10-14 09:39:06.207 2 DEBUG nova.virt.libvirt.host [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:39:06 np0005486808 nova_compute[259627]: 2025-10-14 09:39:06.208 2 DEBUG nova.virt.libvirt.host [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:39:06 np0005486808 nova_compute[259627]: 2025-10-14 09:39:06.209 2 DEBUG nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:39:06 np0005486808 nova_compute[259627]: 2025-10-14 09:39:06.209 2 DEBUG nova.virt.hardware [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:39:06 np0005486808 nova_compute[259627]: 2025-10-14 09:39:06.210 2 DEBUG nova.virt.hardware [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:39:06 np0005486808 nova_compute[259627]: 2025-10-14 09:39:06.211 2 DEBUG nova.virt.hardware [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:39:06 np0005486808 nova_compute[259627]: 2025-10-14 09:39:06.212 2 DEBUG nova.virt.hardware [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:39:06 np0005486808 nova_compute[259627]: 2025-10-14 09:39:06.212 2 DEBUG nova.virt.hardware [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:39:06 np0005486808 nova_compute[259627]: 2025-10-14 09:39:06.212 2 DEBUG nova.virt.hardware [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:39:06 np0005486808 nova_compute[259627]: 2025-10-14 09:39:06.213 2 DEBUG nova.virt.hardware [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:39:06 np0005486808 nova_compute[259627]: 2025-10-14 09:39:06.214 2 DEBUG nova.virt.hardware [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:39:06 np0005486808 nova_compute[259627]: 2025-10-14 09:39:06.214 2 DEBUG nova.virt.hardware [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:39:06 np0005486808 nova_compute[259627]: 2025-10-14 09:39:06.215 2 DEBUG nova.virt.hardware [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:39:06 np0005486808 nova_compute[259627]: 2025-10-14 09:39:06.215 2 DEBUG nova.virt.hardware [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:39:06 np0005486808 nova_compute[259627]: 2025-10-14 09:39:06.222 2 DEBUG oslo_concurrency.processutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:39:06 np0005486808 podman[415082]: 2025-10-14 09:39:06.313477193 +0000 UTC m=+0.048437490 container create 25b2d58f9611342ead28c520c37849a1b32e81d413b2d735b0bf2a9f751e0509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kare, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:39:06 np0005486808 systemd[1]: Started libpod-conmon-25b2d58f9611342ead28c520c37849a1b32e81d413b2d735b0bf2a9f751e0509.scope.
Oct 14 05:39:06 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:39:06 np0005486808 podman[415082]: 2025-10-14 09:39:06.373619279 +0000 UTC m=+0.108579606 container init 25b2d58f9611342ead28c520c37849a1b32e81d413b2d735b0bf2a9f751e0509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kare, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:39:06 np0005486808 podman[415082]: 2025-10-14 09:39:06.381210205 +0000 UTC m=+0.116170492 container start 25b2d58f9611342ead28c520c37849a1b32e81d413b2d735b0bf2a9f751e0509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kare, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 14 05:39:06 np0005486808 podman[415082]: 2025-10-14 09:39:06.385055049 +0000 UTC m=+0.120015366 container attach 25b2d58f9611342ead28c520c37849a1b32e81d413b2d735b0bf2a9f751e0509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kare, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 14 05:39:06 np0005486808 vibrant_kare[415099]: 167 167
Oct 14 05:39:06 np0005486808 systemd[1]: libpod-25b2d58f9611342ead28c520c37849a1b32e81d413b2d735b0bf2a9f751e0509.scope: Deactivated successfully.
Oct 14 05:39:06 np0005486808 podman[415082]: 2025-10-14 09:39:06.294499437 +0000 UTC m=+0.029459774 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:39:06 np0005486808 podman[415082]: 2025-10-14 09:39:06.387463078 +0000 UTC m=+0.122423365 container died 25b2d58f9611342ead28c520c37849a1b32e81d413b2d735b0bf2a9f751e0509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 14 05:39:06 np0005486808 systemd[1]: var-lib-containers-storage-overlay-9a0ac7207e34bf0d329b4cf196b94ff10aebca80426c407a33fba20cad4d61a6-merged.mount: Deactivated successfully.
Oct 14 05:39:06 np0005486808 podman[415082]: 2025-10-14 09:39:06.427274615 +0000 UTC m=+0.162234902 container remove 25b2d58f9611342ead28c520c37849a1b32e81d413b2d735b0bf2a9f751e0509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 14 05:39:06 np0005486808 systemd[1]: libpod-conmon-25b2d58f9611342ead28c520c37849a1b32e81d413b2d735b0bf2a9f751e0509.scope: Deactivated successfully.
Oct 14 05:39:06 np0005486808 podman[415140]: 2025-10-14 09:39:06.597441401 +0000 UTC m=+0.048328407 container create 26e6b8ad26365786ad268e1e038c0e863dfff3af54c2056ea809caa91d2ea2de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_euler, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 14 05:39:06 np0005486808 nova_compute[259627]: 2025-10-14 09:39:06.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2579: 305 pgs: 305 active+clean; 88 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:39:06 np0005486808 systemd[1]: Started libpod-conmon-26e6b8ad26365786ad268e1e038c0e863dfff3af54c2056ea809caa91d2ea2de.scope.
Oct 14 05:39:06 np0005486808 podman[415140]: 2025-10-14 09:39:06.577375629 +0000 UTC m=+0.028262685 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:39:06 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:39:06 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edef90d78b08b5e5c93723a472847facfaf70a021dccab1549b268970c5a68bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:39:06 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edef90d78b08b5e5c93723a472847facfaf70a021dccab1549b268970c5a68bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:39:06 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edef90d78b08b5e5c93723a472847facfaf70a021dccab1549b268970c5a68bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:39:06 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edef90d78b08b5e5c93723a472847facfaf70a021dccab1549b268970c5a68bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:39:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:39:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/923987819' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:39:06 np0005486808 podman[415140]: 2025-10-14 09:39:06.718354708 +0000 UTC m=+0.169241744 container init 26e6b8ad26365786ad268e1e038c0e863dfff3af54c2056ea809caa91d2ea2de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 14 05:39:06 np0005486808 podman[415140]: 2025-10-14 09:39:06.725609106 +0000 UTC m=+0.176496122 container start 26e6b8ad26365786ad268e1e038c0e863dfff3af54c2056ea809caa91d2ea2de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_euler, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 14 05:39:06 np0005486808 podman[415140]: 2025-10-14 09:39:06.729175184 +0000 UTC m=+0.180062240 container attach 26e6b8ad26365786ad268e1e038c0e863dfff3af54c2056ea809caa91d2ea2de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_euler, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 14 05:39:06 np0005486808 nova_compute[259627]: 2025-10-14 09:39:06.734 2 DEBUG oslo_concurrency.processutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:39:06 np0005486808 nova_compute[259627]: 2025-10-14 09:39:06.757 2 DEBUG nova.storage.rbd_utils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image e5b13156-71d2-4a9c-be63-1beebe1ca3fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:39:06 np0005486808 nova_compute[259627]: 2025-10-14 09:39:06.761 2 DEBUG oslo_concurrency.processutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:39:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:07.055 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:39:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:07.056 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:39:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:07.056 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:39:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:39:07 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2103820873' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:39:07 np0005486808 nova_compute[259627]: 2025-10-14 09:39:07.200 2 DEBUG oslo_concurrency.processutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:39:07 np0005486808 nova_compute[259627]: 2025-10-14 09:39:07.202 2 DEBUG nova.virt.libvirt.vif [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:38:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-911560012',display_name='tempest-TestGettingAddress-server-911560012',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-911560012',id=148,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAMDf2XavD9/IfCp+ndZfV3AdZCUIhTR2npOL1XNIlUaFyzTUoWv4qmqvpPpwdd1PNzx9cK/19FXxs4psOVoqPZEBFGbKyJdIety1giFsXf8LSdMC27Wpk9aHw8ObIpVuA==',key_name='tempest-TestGettingAddress-1135497915',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-friw18rr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:38:59Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=e5b13156-71d2-4a9c-be63-1beebe1ca3fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "30c28c87-45b1-43e9-930b-c8ba5142286f", "address": "fa:16:3e:b7:ec:03", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30c28c87-45", "ovs_interfaceid": "30c28c87-45b1-43e9-930b-c8ba5142286f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:39:07 np0005486808 nova_compute[259627]: 2025-10-14 09:39:07.203 2 DEBUG nova.network.os_vif_util [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "30c28c87-45b1-43e9-930b-c8ba5142286f", "address": "fa:16:3e:b7:ec:03", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30c28c87-45", "ovs_interfaceid": "30c28c87-45b1-43e9-930b-c8ba5142286f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:39:07 np0005486808 nova_compute[259627]: 2025-10-14 09:39:07.205 2 DEBUG nova.network.os_vif_util [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:ec:03,bridge_name='br-int',has_traffic_filtering=True,id=30c28c87-45b1-43e9-930b-c8ba5142286f,network=Network(14196b9b-0205-497b-9e98-32690613a533),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30c28c87-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:39:07 np0005486808 nova_compute[259627]: 2025-10-14 09:39:07.208 2 DEBUG nova.objects.instance [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'pci_devices' on Instance uuid e5b13156-71d2-4a9c-be63-1beebe1ca3fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:39:07 np0005486808 nova_compute[259627]: 2025-10-14 09:39:07.226 2 DEBUG nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:39:07 np0005486808 nova_compute[259627]:  <uuid>e5b13156-71d2-4a9c-be63-1beebe1ca3fb</uuid>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:  <name>instance-00000094</name>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:39:07 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:      <nova:name>tempest-TestGettingAddress-server-911560012</nova:name>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:39:06</nova:creationTime>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:39:07 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:        <nova:user uuid="30cd4a7832e74a8ba07238937405c4ea">tempest-TestGettingAddress-262346303-project-member</nova:user>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:        <nova:project uuid="878ad3f42dd94bef8cc66ab3d67f03bf">tempest-TestGettingAddress-262346303</nova:project>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:        <nova:port uuid="30c28c87-45b1-43e9-930b-c8ba5142286f">
Oct 14 05:39:07 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:feb7:ec03" ipVersion="6"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:feb7:ec03" ipVersion="6"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:      <entry name="serial">e5b13156-71d2-4a9c-be63-1beebe1ca3fb</entry>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:      <entry name="uuid">e5b13156-71d2-4a9c-be63-1beebe1ca3fb</entry>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:39:07 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/e5b13156-71d2-4a9c-be63-1beebe1ca3fb_disk">
Oct 14 05:39:07 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:39:07 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:39:07 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/e5b13156-71d2-4a9c-be63-1beebe1ca3fb_disk.config">
Oct 14 05:39:07 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:39:07 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:39:07 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:b7:ec:03"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:      <target dev="tap30c28c87-45"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:39:07 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/e5b13156-71d2-4a9c-be63-1beebe1ca3fb/console.log" append="off"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:39:07 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:39:07 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:39:07 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:39:07 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:39:07 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:39:07 np0005486808 nova_compute[259627]: 2025-10-14 09:39:07.229 2 DEBUG nova.compute.manager [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Preparing to wait for external event network-vif-plugged-30c28c87-45b1-43e9-930b-c8ba5142286f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:39:07 np0005486808 nova_compute[259627]: 2025-10-14 09:39:07.229 2 DEBUG oslo_concurrency.lockutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:39:07 np0005486808 nova_compute[259627]: 2025-10-14 09:39:07.230 2 DEBUG oslo_concurrency.lockutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:39:07 np0005486808 nova_compute[259627]: 2025-10-14 09:39:07.230 2 DEBUG oslo_concurrency.lockutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:39:07 np0005486808 nova_compute[259627]: 2025-10-14 09:39:07.232 2 DEBUG nova.virt.libvirt.vif [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:38:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-911560012',display_name='tempest-TestGettingAddress-server-911560012',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-911560012',id=148,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAMDf2XavD9/IfCp+ndZfV3AdZCUIhTR2npOL1XNIlUaFyzTUoWv4qmqvpPpwdd1PNzx9cK/19FXxs4psOVoqPZEBFGbKyJdIety1giFsXf8LSdMC27Wpk9aHw8ObIpVuA==',key_name='tempest-TestGettingAddress-1135497915',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-friw18rr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:38:59Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=e5b13156-71d2-4a9c-be63-1beebe1ca3fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "30c28c87-45b1-43e9-930b-c8ba5142286f", "address": "fa:16:3e:b7:ec:03", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30c28c87-45", "ovs_interfaceid": "30c28c87-45b1-43e9-930b-c8ba5142286f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:39:07 np0005486808 nova_compute[259627]: 2025-10-14 09:39:07.233 2 DEBUG nova.network.os_vif_util [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "30c28c87-45b1-43e9-930b-c8ba5142286f", "address": "fa:16:3e:b7:ec:03", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30c28c87-45", "ovs_interfaceid": "30c28c87-45b1-43e9-930b-c8ba5142286f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:39:07 np0005486808 nova_compute[259627]: 2025-10-14 09:39:07.234 2 DEBUG nova.network.os_vif_util [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:ec:03,bridge_name='br-int',has_traffic_filtering=True,id=30c28c87-45b1-43e9-930b-c8ba5142286f,network=Network(14196b9b-0205-497b-9e98-32690613a533),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30c28c87-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:39:07 np0005486808 nova_compute[259627]: 2025-10-14 09:39:07.235 2 DEBUG os_vif [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:ec:03,bridge_name='br-int',has_traffic_filtering=True,id=30c28c87-45b1-43e9-930b-c8ba5142286f,network=Network(14196b9b-0205-497b-9e98-32690613a533),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30c28c87-45') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:39:07 np0005486808 nova_compute[259627]: 2025-10-14 09:39:07.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:07 np0005486808 nova_compute[259627]: 2025-10-14 09:39:07.237 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:39:07 np0005486808 nova_compute[259627]: 2025-10-14 09:39:07.238 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:39:07 np0005486808 nova_compute[259627]: 2025-10-14 09:39:07.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:07 np0005486808 nova_compute[259627]: 2025-10-14 09:39:07.251 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap30c28c87-45, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:39:07 np0005486808 nova_compute[259627]: 2025-10-14 09:39:07.252 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap30c28c87-45, col_values=(('external_ids', {'iface-id': '30c28c87-45b1-43e9-930b-c8ba5142286f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b7:ec:03', 'vm-uuid': 'e5b13156-71d2-4a9c-be63-1beebe1ca3fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:39:07 np0005486808 NetworkManager[44885]: <info>  [1760434747.2554] manager: (tap30c28c87-45): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/666)
Oct 14 05:39:07 np0005486808 nova_compute[259627]: 2025-10-14 09:39:07.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:07 np0005486808 nova_compute[259627]: 2025-10-14 09:39:07.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:39:07 np0005486808 nova_compute[259627]: 2025-10-14 09:39:07.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:07 np0005486808 nova_compute[259627]: 2025-10-14 09:39:07.263 2 INFO os_vif [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:ec:03,bridge_name='br-int',has_traffic_filtering=True,id=30c28c87-45b1-43e9-930b-c8ba5142286f,network=Network(14196b9b-0205-497b-9e98-32690613a533),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30c28c87-45')#033[00m
Oct 14 05:39:07 np0005486808 nova_compute[259627]: 2025-10-14 09:39:07.332 2 DEBUG nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:39:07 np0005486808 nova_compute[259627]: 2025-10-14 09:39:07.333 2 DEBUG nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:39:07 np0005486808 nova_compute[259627]: 2025-10-14 09:39:07.333 2 DEBUG nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:b7:ec:03, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:39:07 np0005486808 nova_compute[259627]: 2025-10-14 09:39:07.334 2 INFO nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Using config drive#033[00m
Oct 14 05:39:07 np0005486808 nova_compute[259627]: 2025-10-14 09:39:07.366 2 DEBUG nova.storage.rbd_utils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image e5b13156-71d2-4a9c-be63-1beebe1ca3fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:39:07 np0005486808 pensive_euler[415156]: {
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:    "0": [
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:        {
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:            "devices": [
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:                "/dev/loop3"
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:            ],
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:            "lv_name": "ceph_lv0",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:            "lv_size": "21470642176",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:            "name": "ceph_lv0",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:            "tags": {
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:                "ceph.cluster_name": "ceph",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:                "ceph.crush_device_class": "",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:                "ceph.encrypted": "0",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:                "ceph.osd_id": "0",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:                "ceph.type": "block",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:                "ceph.vdo": "0"
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:            },
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:            "type": "block",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:            "vg_name": "ceph_vg0"
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:        }
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:    ],
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:    "1": [
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:        {
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:            "devices": [
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:                "/dev/loop4"
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:            ],
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:            "lv_name": "ceph_lv1",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:            "lv_size": "21470642176",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:            "name": "ceph_lv1",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:            "tags": {
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:                "ceph.cluster_name": "ceph",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:                "ceph.crush_device_class": "",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:                "ceph.encrypted": "0",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:                "ceph.osd_id": "1",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:                "ceph.type": "block",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:                "ceph.vdo": "0"
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:            },
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:            "type": "block",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:            "vg_name": "ceph_vg1"
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:        }
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:    ],
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:    "2": [
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:        {
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:            "devices": [
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:                "/dev/loop5"
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:            ],
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:            "lv_name": "ceph_lv2",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:            "lv_size": "21470642176",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:            "name": "ceph_lv2",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:            "tags": {
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:                "ceph.cluster_name": "ceph",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:                "ceph.crush_device_class": "",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:                "ceph.encrypted": "0",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:                "ceph.osd_id": "2",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:                "ceph.type": "block",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:                "ceph.vdo": "0"
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:            },
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:            "type": "block",
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:            "vg_name": "ceph_vg2"
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:        }
Oct 14 05:39:07 np0005486808 pensive_euler[415156]:    ]
Oct 14 05:39:07 np0005486808 pensive_euler[415156]: }
Oct 14 05:39:07 np0005486808 systemd[1]: libpod-26e6b8ad26365786ad268e1e038c0e863dfff3af54c2056ea809caa91d2ea2de.scope: Deactivated successfully.
Oct 14 05:39:07 np0005486808 podman[415140]: 2025-10-14 09:39:07.568790598 +0000 UTC m=+1.019677624 container died 26e6b8ad26365786ad268e1e038c0e863dfff3af54c2056ea809caa91d2ea2de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 14 05:39:07 np0005486808 systemd[1]: var-lib-containers-storage-overlay-edef90d78b08b5e5c93723a472847facfaf70a021dccab1549b268970c5a68bf-merged.mount: Deactivated successfully.
Oct 14 05:39:07 np0005486808 podman[415140]: 2025-10-14 09:39:07.633808354 +0000 UTC m=+1.084695350 container remove 26e6b8ad26365786ad268e1e038c0e863dfff3af54c2056ea809caa91d2ea2de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_euler, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:39:07 np0005486808 systemd[1]: libpod-conmon-26e6b8ad26365786ad268e1e038c0e863dfff3af54c2056ea809caa91d2ea2de.scope: Deactivated successfully.
Oct 14 05:39:07 np0005486808 nova_compute[259627]: 2025-10-14 09:39:07.834 2 INFO nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Creating config drive at /var/lib/nova/instances/e5b13156-71d2-4a9c-be63-1beebe1ca3fb/disk.config#033[00m
Oct 14 05:39:07 np0005486808 nova_compute[259627]: 2025-10-14 09:39:07.847 2 DEBUG oslo_concurrency.processutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e5b13156-71d2-4a9c-be63-1beebe1ca3fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp20xiwnlm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:39:08 np0005486808 nova_compute[259627]: 2025-10-14 09:39:08.003 2 DEBUG oslo_concurrency.processutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e5b13156-71d2-4a9c-be63-1beebe1ca3fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp20xiwnlm" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:39:08 np0005486808 nova_compute[259627]: 2025-10-14 09:39:08.044 2 DEBUG nova.storage.rbd_utils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image e5b13156-71d2-4a9c-be63-1beebe1ca3fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:39:08 np0005486808 nova_compute[259627]: 2025-10-14 09:39:08.049 2 DEBUG oslo_concurrency.processutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e5b13156-71d2-4a9c-be63-1beebe1ca3fb/disk.config e5b13156-71d2-4a9c-be63-1beebe1ca3fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:39:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:39:08 np0005486808 nova_compute[259627]: 2025-10-14 09:39:08.246 2 DEBUG oslo_concurrency.processutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e5b13156-71d2-4a9c-be63-1beebe1ca3fb/disk.config e5b13156-71d2-4a9c-be63-1beebe1ca3fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:39:08 np0005486808 nova_compute[259627]: 2025-10-14 09:39:08.248 2 INFO nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Deleting local config drive /var/lib/nova/instances/e5b13156-71d2-4a9c-be63-1beebe1ca3fb/disk.config because it was imported into RBD.#033[00m
Oct 14 05:39:08 np0005486808 kernel: tap30c28c87-45: entered promiscuous mode
Oct 14 05:39:08 np0005486808 NetworkManager[44885]: <info>  [1760434748.3198] manager: (tap30c28c87-45): new Tun device (/org/freedesktop/NetworkManager/Devices/667)
Oct 14 05:39:08 np0005486808 ovn_controller[152662]: 2025-10-14T09:39:08Z|01631|binding|INFO|Claiming lport 30c28c87-45b1-43e9-930b-c8ba5142286f for this chassis.
Oct 14 05:39:08 np0005486808 nova_compute[259627]: 2025-10-14 09:39:08.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:08 np0005486808 ovn_controller[152662]: 2025-10-14T09:39:08Z|01632|binding|INFO|30c28c87-45b1-43e9-930b-c8ba5142286f: Claiming fa:16:3e:b7:ec:03 10.100.0.7 2001:db8:0:1:f816:3eff:feb7:ec03 2001:db8::f816:3eff:feb7:ec03
Oct 14 05:39:08 np0005486808 nova_compute[259627]: 2025-10-14 09:39:08.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.337 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:ec:03 10.100.0.7 2001:db8:0:1:f816:3eff:feb7:ec03 2001:db8::f816:3eff:feb7:ec03'], port_security=['fa:16:3e:b7:ec:03 10.100.0.7 2001:db8:0:1:f816:3eff:feb7:ec03 2001:db8::f816:3eff:feb7:ec03'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28 2001:db8:0:1:f816:3eff:feb7:ec03/64 2001:db8::f816:3eff:feb7:ec03/64', 'neutron:device_id': 'e5b13156-71d2-4a9c-be63-1beebe1ca3fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14196b9b-0205-497b-9e98-32690613a533', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '75fd0641-e399-4e8e-872e-ceab82cd0201', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=87d75e0f-ba6b-4079-b571-32cab0870048, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=30c28c87-45b1-43e9-930b-c8ba5142286f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.339 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 30c28c87-45b1-43e9-930b-c8ba5142286f in datapath 14196b9b-0205-497b-9e98-32690613a533 bound to our chassis#033[00m
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.340 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14196b9b-0205-497b-9e98-32690613a533#033[00m
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.352 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[90567eda-48c7-41cf-b710-42e4b905c6a2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.353 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap14196b9b-01 in ovnmeta-14196b9b-0205-497b-9e98-32690613a533 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:39:08 np0005486808 systemd-udevd[415443]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.357 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap14196b9b-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.357 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c04787de-0c7c-4e46-b332-19e880237d43]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.358 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e85c4ec6-2a90-4d12-85a1-92ee8b9c46c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:39:08 np0005486808 podman[415426]: 2025-10-14 09:39:08.361480711 +0000 UTC m=+0.053483024 container create 9a9a5b73a9fc2afd3762c512f9fb90dc3c3cb20a180907cfde9c52a1827e3b80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hawking, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 14 05:39:08 np0005486808 NetworkManager[44885]: <info>  [1760434748.3695] device (tap30c28c87-45): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.369 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[a90748c4-f2c5-4abb-a199-a18b4723e6c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:39:08 np0005486808 NetworkManager[44885]: <info>  [1760434748.3703] device (tap30c28c87-45): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:39:08 np0005486808 nova_compute[259627]: 2025-10-14 09:39:08.370 2 DEBUG nova.network.neutron [req-821d96d6-c345-4c93-b0eb-c2494b300a41 req-505da766-619e-47ce-80b8-e7111a02aacb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Updated VIF entry in instance network info cache for port 30c28c87-45b1-43e9-930b-c8ba5142286f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:39:08 np0005486808 nova_compute[259627]: 2025-10-14 09:39:08.371 2 DEBUG nova.network.neutron [req-821d96d6-c345-4c93-b0eb-c2494b300a41 req-505da766-619e-47ce-80b8-e7111a02aacb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Updating instance_info_cache with network_info: [{"id": "30c28c87-45b1-43e9-930b-c8ba5142286f", "address": "fa:16:3e:b7:ec:03", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30c28c87-45", "ovs_interfaceid": "30c28c87-45b1-43e9-930b-c8ba5142286f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:39:08 np0005486808 systemd-machined[214636]: New machine qemu-181-instance-00000094.
Oct 14 05:39:08 np0005486808 nova_compute[259627]: 2025-10-14 09:39:08.387 2 DEBUG oslo_concurrency.lockutils [req-821d96d6-c345-4c93-b0eb-c2494b300a41 req-505da766-619e-47ce-80b8-e7111a02aacb 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-e5b13156-71d2-4a9c-be63-1beebe1ca3fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:39:08 np0005486808 nova_compute[259627]: 2025-10-14 09:39:08.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.396 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4634e0ca-07d9-48be-aa4b-b3f01539364e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:39:08 np0005486808 ovn_controller[152662]: 2025-10-14T09:39:08Z|01633|binding|INFO|Setting lport 30c28c87-45b1-43e9-930b-c8ba5142286f ovn-installed in OVS
Oct 14 05:39:08 np0005486808 ovn_controller[152662]: 2025-10-14T09:39:08Z|01634|binding|INFO|Setting lport 30c28c87-45b1-43e9-930b-c8ba5142286f up in Southbound
Oct 14 05:39:08 np0005486808 nova_compute[259627]: 2025-10-14 09:39:08.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:08 np0005486808 systemd[1]: Started libpod-conmon-9a9a5b73a9fc2afd3762c512f9fb90dc3c3cb20a180907cfde9c52a1827e3b80.scope.
Oct 14 05:39:08 np0005486808 systemd[1]: Started Virtual Machine qemu-181-instance-00000094.
Oct 14 05:39:08 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.425 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[468130c5-bea1-4f85-a1a0-32f71b106f02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:39:08 np0005486808 podman[415426]: 2025-10-14 09:39:08.337875052 +0000 UTC m=+0.029877385 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:39:08 np0005486808 systemd-udevd[415450]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:39:08 np0005486808 NetworkManager[44885]: <info>  [1760434748.4337] manager: (tap14196b9b-00): new Veth device (/org/freedesktop/NetworkManager/Devices/668)
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.433 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[70c51889-4934-4ad7-aae9-22cc9ebb4348]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:39:08 np0005486808 podman[415426]: 2025-10-14 09:39:08.464899919 +0000 UTC m=+0.156902272 container init 9a9a5b73a9fc2afd3762c512f9fb90dc3c3cb20a180907cfde9c52a1827e3b80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hawking, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 14 05:39:08 np0005486808 podman[415426]: 2025-10-14 09:39:08.472884385 +0000 UTC m=+0.164886688 container start 9a9a5b73a9fc2afd3762c512f9fb90dc3c3cb20a180907cfde9c52a1827e3b80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:39:08 np0005486808 unruffled_hawking[415456]: 167 167
Oct 14 05:39:08 np0005486808 systemd[1]: libpod-9a9a5b73a9fc2afd3762c512f9fb90dc3c3cb20a180907cfde9c52a1827e3b80.scope: Deactivated successfully.
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.479 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[57ec4661-b782-4453-b20a-9f431e555ef9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.483 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[7b7a238d-49ee-4fa4-97a4-47d452f88f65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:39:08 np0005486808 NetworkManager[44885]: <info>  [1760434748.5143] device (tap14196b9b-00): carrier: link connected
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.521 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[9f48eedd-b5de-4638-865d-ec6716102964]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.545 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2a812250-6c2b-496d-a0e1-cb9a86852fc0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14196b9b-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:47:b8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 463], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 852727, 'reachable_time': 21450, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 415498, 'error': None, 'target': 'ovnmeta-14196b9b-0205-497b-9e98-32690613a533', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:39:08 np0005486808 podman[415426]: 2025-10-14 09:39:08.548817228 +0000 UTC m=+0.240819541 container attach 9a9a5b73a9fc2afd3762c512f9fb90dc3c3cb20a180907cfde9c52a1827e3b80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hawking, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 14 05:39:08 np0005486808 podman[415426]: 2025-10-14 09:39:08.549112265 +0000 UTC m=+0.241114568 container died 9a9a5b73a9fc2afd3762c512f9fb90dc3c3cb20a180907cfde9c52a1827e3b80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:39:08 np0005486808 nova_compute[259627]: 2025-10-14 09:39:08.564 2 DEBUG nova.compute.manager [req-51be09af-0feb-429a-830a-d8c565d6307d req-f90319e0-2bb7-402f-b505-341748806f1a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Received event network-vif-plugged-30c28c87-45b1-43e9-930b-c8ba5142286f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:39:08 np0005486808 nova_compute[259627]: 2025-10-14 09:39:08.564 2 DEBUG oslo_concurrency.lockutils [req-51be09af-0feb-429a-830a-d8c565d6307d req-f90319e0-2bb7-402f-b505-341748806f1a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:39:08 np0005486808 nova_compute[259627]: 2025-10-14 09:39:08.564 2 DEBUG oslo_concurrency.lockutils [req-51be09af-0feb-429a-830a-d8c565d6307d req-f90319e0-2bb7-402f-b505-341748806f1a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:39:08 np0005486808 nova_compute[259627]: 2025-10-14 09:39:08.565 2 DEBUG oslo_concurrency.lockutils [req-51be09af-0feb-429a-830a-d8c565d6307d req-f90319e0-2bb7-402f-b505-341748806f1a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:39:08 np0005486808 nova_compute[259627]: 2025-10-14 09:39:08.565 2 DEBUG nova.compute.manager [req-51be09af-0feb-429a-830a-d8c565d6307d req-f90319e0-2bb7-402f-b505-341748806f1a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Processing event network-vif-plugged-30c28c87-45b1-43e9-930b-c8ba5142286f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.567 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b14fca0c-551a-41c8-a5ba-0621999488c8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe76:47b8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 852727, 'tstamp': 852727}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 415499, 'error': None, 'target': 'ovnmeta-14196b9b-0205-497b-9e98-32690613a533', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.583 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ef6721f9-5bbe-4572-8fea-7bd661022724]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14196b9b-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:47:b8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 463], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 852727, 'reachable_time': 21450, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 415500, 'error': None, 'target': 'ovnmeta-14196b9b-0205-497b-9e98-32690613a533', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:39:08 np0005486808 systemd[1]: var-lib-containers-storage-overlay-81b73908ea85983b88e26f247867240613e4ef8d37bcf191cc6d3cb56781cb7a-merged.mount: Deactivated successfully.
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.618 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b382a7ef-f775-460e-aa0b-29539ea84776]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:39:08 np0005486808 podman[415426]: 2025-10-14 09:39:08.639558495 +0000 UTC m=+0.331560798 container remove 9a9a5b73a9fc2afd3762c512f9fb90dc3c3cb20a180907cfde9c52a1827e3b80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Oct 14 05:39:08 np0005486808 systemd[1]: libpod-conmon-9a9a5b73a9fc2afd3762c512f9fb90dc3c3cb20a180907cfde9c52a1827e3b80.scope: Deactivated successfully.
Oct 14 05:39:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2580: 305 pgs: 305 active+clean; 88 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.706 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7278b2f2-096a-4357-b9df-0b8bd06e7393]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.708 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14196b9b-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.709 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.709 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14196b9b-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:39:08 np0005486808 nova_compute[259627]: 2025-10-14 09:39:08.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:08 np0005486808 NetworkManager[44885]: <info>  [1760434748.7121] manager: (tap14196b9b-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/669)
Oct 14 05:39:08 np0005486808 kernel: tap14196b9b-00: entered promiscuous mode
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.719 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14196b9b-00, col_values=(('external_ids', {'iface-id': '500514c8-ea10-48c5-93d8-b1c6948e60b0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:39:08 np0005486808 nova_compute[259627]: 2025-10-14 09:39:08.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:08 np0005486808 nova_compute[259627]: 2025-10-14 09:39:08.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.724 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/14196b9b-0205-497b-9e98-32690613a533.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/14196b9b-0205-497b-9e98-32690613a533.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.724 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1a38b403-f2ff-4797-bc07-8ca2e22b3295]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.726 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-14196b9b-0205-497b-9e98-32690613a533
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/14196b9b-0205-497b-9e98-32690613a533.pid.haproxy
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID 14196b9b-0205-497b-9e98-32690613a533
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:39:08 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:08.727 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-14196b9b-0205-497b-9e98-32690613a533', 'env', 'PROCESS_TAG=haproxy-14196b9b-0205-497b-9e98-32690613a533', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/14196b9b-0205-497b-9e98-32690613a533.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:39:08 np0005486808 ovn_controller[152662]: 2025-10-14T09:39:08Z|01635|binding|INFO|Releasing lport 500514c8-ea10-48c5-93d8-b1c6948e60b0 from this chassis (sb_readonly=0)
Oct 14 05:39:08 np0005486808 nova_compute[259627]: 2025-10-14 09:39:08.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:08 np0005486808 podman[415518]: 2025-10-14 09:39:08.87167629 +0000 UTC m=+0.083655344 container create 2bab9577701aa716c77f5e7a4f246c5bc9bdd69551d5faeda29face09eeb867b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_diffie, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 14 05:39:08 np0005486808 podman[415518]: 2025-10-14 09:39:08.809796522 +0000 UTC m=+0.021775576 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:39:08 np0005486808 systemd[1]: Started libpod-conmon-2bab9577701aa716c77f5e7a4f246c5bc9bdd69551d5faeda29face09eeb867b.scope.
Oct 14 05:39:08 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:39:08 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffee12450558ccc3c7fc13d1890dc8b9fefb0b59236acf0616a4e781bfb8b9e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:39:08 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffee12450558ccc3c7fc13d1890dc8b9fefb0b59236acf0616a4e781bfb8b9e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:39:08 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffee12450558ccc3c7fc13d1890dc8b9fefb0b59236acf0616a4e781bfb8b9e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:39:08 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffee12450558ccc3c7fc13d1890dc8b9fefb0b59236acf0616a4e781bfb8b9e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:39:08 np0005486808 podman[415518]: 2025-10-14 09:39:08.95847033 +0000 UTC m=+0.170449394 container init 2bab9577701aa716c77f5e7a4f246c5bc9bdd69551d5faeda29face09eeb867b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_diffie, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:39:08 np0005486808 podman[415518]: 2025-10-14 09:39:08.967500442 +0000 UTC m=+0.179479486 container start 2bab9577701aa716c77f5e7a4f246c5bc9bdd69551d5faeda29face09eeb867b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_diffie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 14 05:39:08 np0005486808 podman[415518]: 2025-10-14 09:39:08.970486735 +0000 UTC m=+0.182465799 container attach 2bab9577701aa716c77f5e7a4f246c5bc9bdd69551d5faeda29face09eeb867b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_diffie, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:39:09 np0005486808 podman[415604]: 2025-10-14 09:39:09.076976968 +0000 UTC m=+0.042852762 container create da359fbc7512193206641124b817c3569de5b909769d2dabd7e25947b9d07e1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-14196b9b-0205-497b-9e98-32690613a533, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 14 05:39:09 np0005486808 systemd[1]: Started libpod-conmon-da359fbc7512193206641124b817c3569de5b909769d2dabd7e25947b9d07e1d.scope.
Oct 14 05:39:09 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:39:09 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/579e65901c3d9167dcba11f2ead71e8144552c5f2bb2a88cb78fc270c6ac4ad0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:39:09 np0005486808 podman[415604]: 2025-10-14 09:39:09.054567028 +0000 UTC m=+0.020442842 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:39:09 np0005486808 podman[415604]: 2025-10-14 09:39:09.151519288 +0000 UTC m=+0.117395092 container init da359fbc7512193206641124b817c3569de5b909769d2dabd7e25947b9d07e1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-14196b9b-0205-497b-9e98-32690613a533, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:39:09 np0005486808 podman[415604]: 2025-10-14 09:39:09.15651079 +0000 UTC m=+0.122386584 container start da359fbc7512193206641124b817c3569de5b909769d2dabd7e25947b9d07e1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-14196b9b-0205-497b-9e98-32690613a533, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 05:39:09 np0005486808 neutron-haproxy-ovnmeta-14196b9b-0205-497b-9e98-32690613a533[415619]: [NOTICE]   (415623) : New worker (415625) forked
Oct 14 05:39:09 np0005486808 neutron-haproxy-ovnmeta-14196b9b-0205-497b-9e98-32690613a533[415619]: [NOTICE]   (415623) : Loading success.
Oct 14 05:39:09 np0005486808 nova_compute[259627]: 2025-10-14 09:39:09.447 2 DEBUG nova.compute.manager [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:39:09 np0005486808 nova_compute[259627]: 2025-10-14 09:39:09.448 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434749.4472356, e5b13156-71d2-4a9c-be63-1beebe1ca3fb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:39:09 np0005486808 nova_compute[259627]: 2025-10-14 09:39:09.448 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] VM Started (Lifecycle Event)#033[00m
Oct 14 05:39:09 np0005486808 nova_compute[259627]: 2025-10-14 09:39:09.453 2 DEBUG nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:39:09 np0005486808 nova_compute[259627]: 2025-10-14 09:39:09.456 2 INFO nova.virt.libvirt.driver [-] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Instance spawned successfully.#033[00m
Oct 14 05:39:09 np0005486808 nova_compute[259627]: 2025-10-14 09:39:09.457 2 DEBUG nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:39:09 np0005486808 nova_compute[259627]: 2025-10-14 09:39:09.475 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:39:09 np0005486808 nova_compute[259627]: 2025-10-14 09:39:09.481 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:39:09 np0005486808 nova_compute[259627]: 2025-10-14 09:39:09.485 2 DEBUG nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:39:09 np0005486808 nova_compute[259627]: 2025-10-14 09:39:09.486 2 DEBUG nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:39:09 np0005486808 nova_compute[259627]: 2025-10-14 09:39:09.486 2 DEBUG nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:39:09 np0005486808 nova_compute[259627]: 2025-10-14 09:39:09.486 2 DEBUG nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:39:09 np0005486808 nova_compute[259627]: 2025-10-14 09:39:09.487 2 DEBUG nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:39:09 np0005486808 nova_compute[259627]: 2025-10-14 09:39:09.487 2 DEBUG nova.virt.libvirt.driver [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:39:09 np0005486808 nova_compute[259627]: 2025-10-14 09:39:09.523 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:39:09 np0005486808 nova_compute[259627]: 2025-10-14 09:39:09.523 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434749.4481535, e5b13156-71d2-4a9c-be63-1beebe1ca3fb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:39:09 np0005486808 nova_compute[259627]: 2025-10-14 09:39:09.524 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:39:09 np0005486808 nova_compute[259627]: 2025-10-14 09:39:09.550 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:39:09 np0005486808 nova_compute[259627]: 2025-10-14 09:39:09.552 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434749.4525728, e5b13156-71d2-4a9c-be63-1beebe1ca3fb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:39:09 np0005486808 nova_compute[259627]: 2025-10-14 09:39:09.552 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:39:09 np0005486808 nova_compute[259627]: 2025-10-14 09:39:09.562 2 INFO nova.compute.manager [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Took 9.69 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:39:09 np0005486808 nova_compute[259627]: 2025-10-14 09:39:09.563 2 DEBUG nova.compute.manager [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:39:09 np0005486808 nova_compute[259627]: 2025-10-14 09:39:09.590 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:39:09 np0005486808 nova_compute[259627]: 2025-10-14 09:39:09.594 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:39:09 np0005486808 nova_compute[259627]: 2025-10-14 09:39:09.625 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:39:09 np0005486808 nova_compute[259627]: 2025-10-14 09:39:09.640 2 INFO nova.compute.manager [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Took 10.85 seconds to build instance.#033[00m
Oct 14 05:39:09 np0005486808 nova_compute[259627]: 2025-10-14 09:39:09.659 2 DEBUG oslo_concurrency.lockutils [None req-81d26dfc-4de1-4d11-9b62-72a9cb3955a5 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.980s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:39:09 np0005486808 hopeful_diffie[415572]: {
Oct 14 05:39:09 np0005486808 hopeful_diffie[415572]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:39:09 np0005486808 hopeful_diffie[415572]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:39:09 np0005486808 hopeful_diffie[415572]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:39:09 np0005486808 hopeful_diffie[415572]:        "osd_id": 2,
Oct 14 05:39:09 np0005486808 hopeful_diffie[415572]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:39:09 np0005486808 hopeful_diffie[415572]:        "type": "bluestore"
Oct 14 05:39:09 np0005486808 hopeful_diffie[415572]:    },
Oct 14 05:39:09 np0005486808 hopeful_diffie[415572]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:39:09 np0005486808 hopeful_diffie[415572]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:39:09 np0005486808 hopeful_diffie[415572]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:39:09 np0005486808 hopeful_diffie[415572]:        "osd_id": 1,
Oct 14 05:39:09 np0005486808 hopeful_diffie[415572]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:39:09 np0005486808 hopeful_diffie[415572]:        "type": "bluestore"
Oct 14 05:39:09 np0005486808 hopeful_diffie[415572]:    },
Oct 14 05:39:09 np0005486808 hopeful_diffie[415572]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:39:09 np0005486808 hopeful_diffie[415572]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:39:09 np0005486808 hopeful_diffie[415572]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:39:09 np0005486808 hopeful_diffie[415572]:        "osd_id": 0,
Oct 14 05:39:09 np0005486808 hopeful_diffie[415572]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:39:09 np0005486808 hopeful_diffie[415572]:        "type": "bluestore"
Oct 14 05:39:09 np0005486808 hopeful_diffie[415572]:    }
Oct 14 05:39:09 np0005486808 hopeful_diffie[415572]: }
Oct 14 05:39:09 np0005486808 systemd[1]: libpod-2bab9577701aa716c77f5e7a4f246c5bc9bdd69551d5faeda29face09eeb867b.scope: Deactivated successfully.
Oct 14 05:39:09 np0005486808 systemd[1]: libpod-2bab9577701aa716c77f5e7a4f246c5bc9bdd69551d5faeda29face09eeb867b.scope: Consumed 1.009s CPU time.
Oct 14 05:39:09 np0005486808 podman[415518]: 2025-10-14 09:39:09.990194819 +0000 UTC m=+1.202173863 container died 2bab9577701aa716c77f5e7a4f246c5bc9bdd69551d5faeda29face09eeb867b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:39:10 np0005486808 systemd[1]: var-lib-containers-storage-overlay-ffee12450558ccc3c7fc13d1890dc8b9fefb0b59236acf0616a4e781bfb8b9e2-merged.mount: Deactivated successfully.
Oct 14 05:39:10 np0005486808 podman[415518]: 2025-10-14 09:39:10.049974826 +0000 UTC m=+1.261953870 container remove 2bab9577701aa716c77f5e7a4f246c5bc9bdd69551d5faeda29face09eeb867b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_diffie, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:39:10 np0005486808 systemd[1]: libpod-conmon-2bab9577701aa716c77f5e7a4f246c5bc9bdd69551d5faeda29face09eeb867b.scope: Deactivated successfully.
Oct 14 05:39:10 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:39:10 np0005486808 podman[415671]: 2025-10-14 09:39:10.089779583 +0000 UTC m=+0.070381099 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 14 05:39:10 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:39:10 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:39:10 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:39:10 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev ec160650-3a95-4185-ad10-a9f8cda3e817 does not exist
Oct 14 05:39:10 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 855ddc89-e09d-406e-8ad3-d2a7d937523f does not exist
Oct 14 05:39:10 np0005486808 podman[415663]: 2025-10-14 09:39:10.114877239 +0000 UTC m=+0.093887425 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:39:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2581: 305 pgs: 305 active+clean; 88 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 252 KiB/s rd, 1.8 MiB/s wr, 46 op/s
Oct 14 05:39:10 np0005486808 nova_compute[259627]: 2025-10-14 09:39:10.681 2 DEBUG nova.compute.manager [req-fc9fb591-a942-497e-a633-656b0cb828fe req-29dbd63f-c1cd-45d6-80cc-a63d601d67c6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Received event network-vif-plugged-30c28c87-45b1-43e9-930b-c8ba5142286f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:39:10 np0005486808 nova_compute[259627]: 2025-10-14 09:39:10.681 2 DEBUG oslo_concurrency.lockutils [req-fc9fb591-a942-497e-a633-656b0cb828fe req-29dbd63f-c1cd-45d6-80cc-a63d601d67c6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:39:10 np0005486808 nova_compute[259627]: 2025-10-14 09:39:10.681 2 DEBUG oslo_concurrency.lockutils [req-fc9fb591-a942-497e-a633-656b0cb828fe req-29dbd63f-c1cd-45d6-80cc-a63d601d67c6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:39:10 np0005486808 nova_compute[259627]: 2025-10-14 09:39:10.682 2 DEBUG oslo_concurrency.lockutils [req-fc9fb591-a942-497e-a633-656b0cb828fe req-29dbd63f-c1cd-45d6-80cc-a63d601d67c6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:39:10 np0005486808 nova_compute[259627]: 2025-10-14 09:39:10.682 2 DEBUG nova.compute.manager [req-fc9fb591-a942-497e-a633-656b0cb828fe req-29dbd63f-c1cd-45d6-80cc-a63d601d67c6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] No waiting events found dispatching network-vif-plugged-30c28c87-45b1-43e9-930b-c8ba5142286f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:39:10 np0005486808 nova_compute[259627]: 2025-10-14 09:39:10.682 2 WARNING nova.compute.manager [req-fc9fb591-a942-497e-a633-656b0cb828fe req-29dbd63f-c1cd-45d6-80cc-a63d601d67c6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Received unexpected event network-vif-plugged-30c28c87-45b1-43e9-930b-c8ba5142286f for instance with vm_state active and task_state None.#033[00m
Oct 14 05:39:11 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:39:11 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:39:11 np0005486808 nova_compute[259627]: 2025-10-14 09:39:11.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:11 np0005486808 nova_compute[259627]: 2025-10-14 09:39:11.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:39:12 np0005486808 nova_compute[259627]: 2025-10-14 09:39:12.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2582: 305 pgs: 305 active+clean; 88 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 251 KiB/s rd, 1.6 MiB/s wr, 43 op/s
Oct 14 05:39:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:39:14 np0005486808 ovn_controller[152662]: 2025-10-14T09:39:14Z|01636|binding|INFO|Releasing lport 500514c8-ea10-48c5-93d8-b1c6948e60b0 from this chassis (sb_readonly=0)
Oct 14 05:39:14 np0005486808 NetworkManager[44885]: <info>  [1760434754.3804] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/670)
Oct 14 05:39:14 np0005486808 NetworkManager[44885]: <info>  [1760434754.3832] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/671)
Oct 14 05:39:14 np0005486808 nova_compute[259627]: 2025-10-14 09:39:14.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:14 np0005486808 ovn_controller[152662]: 2025-10-14T09:39:14Z|01637|binding|INFO|Releasing lport 500514c8-ea10-48c5-93d8-b1c6948e60b0 from this chassis (sb_readonly=0)
Oct 14 05:39:14 np0005486808 nova_compute[259627]: 2025-10-14 09:39:14.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:14 np0005486808 nova_compute[259627]: 2025-10-14 09:39:14.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2583: 305 pgs: 305 active+clean; 88 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 251 KiB/s rd, 1.6 MiB/s wr, 43 op/s
Oct 14 05:39:14 np0005486808 nova_compute[259627]: 2025-10-14 09:39:14.734 2 DEBUG nova.compute.manager [req-f073202e-225b-430a-ab22-c6f5cc0a3c40 req-6ce77fbc-7ff3-4155-8b7d-b389569bfba8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Received event network-changed-30c28c87-45b1-43e9-930b-c8ba5142286f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:39:14 np0005486808 nova_compute[259627]: 2025-10-14 09:39:14.735 2 DEBUG nova.compute.manager [req-f073202e-225b-430a-ab22-c6f5cc0a3c40 req-6ce77fbc-7ff3-4155-8b7d-b389569bfba8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Refreshing instance network info cache due to event network-changed-30c28c87-45b1-43e9-930b-c8ba5142286f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:39:14 np0005486808 nova_compute[259627]: 2025-10-14 09:39:14.735 2 DEBUG oslo_concurrency.lockutils [req-f073202e-225b-430a-ab22-c6f5cc0a3c40 req-6ce77fbc-7ff3-4155-8b7d-b389569bfba8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e5b13156-71d2-4a9c-be63-1beebe1ca3fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:39:14 np0005486808 nova_compute[259627]: 2025-10-14 09:39:14.736 2 DEBUG oslo_concurrency.lockutils [req-f073202e-225b-430a-ab22-c6f5cc0a3c40 req-6ce77fbc-7ff3-4155-8b7d-b389569bfba8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e5b13156-71d2-4a9c-be63-1beebe1ca3fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:39:14 np0005486808 nova_compute[259627]: 2025-10-14 09:39:14.736 2 DEBUG nova.network.neutron [req-f073202e-225b-430a-ab22-c6f5cc0a3c40 req-6ce77fbc-7ff3-4155-8b7d-b389569bfba8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Refreshing network info cache for port 30c28c87-45b1-43e9-930b-c8ba5142286f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:39:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2584: 305 pgs: 305 active+clean; 88 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 98 op/s
Oct 14 05:39:16 np0005486808 nova_compute[259627]: 2025-10-14 09:39:16.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:17 np0005486808 nova_compute[259627]: 2025-10-14 09:39:17.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:17 np0005486808 nova_compute[259627]: 2025-10-14 09:39:17.355 2 DEBUG nova.network.neutron [req-f073202e-225b-430a-ab22-c6f5cc0a3c40 req-6ce77fbc-7ff3-4155-8b7d-b389569bfba8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Updated VIF entry in instance network info cache for port 30c28c87-45b1-43e9-930b-c8ba5142286f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:39:17 np0005486808 nova_compute[259627]: 2025-10-14 09:39:17.356 2 DEBUG nova.network.neutron [req-f073202e-225b-430a-ab22-c6f5cc0a3c40 req-6ce77fbc-7ff3-4155-8b7d-b389569bfba8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Updating instance_info_cache with network_info: [{"id": "30c28c87-45b1-43e9-930b-c8ba5142286f", "address": "fa:16:3e:b7:ec:03", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30c28c87-45", "ovs_interfaceid": "30c28c87-45b1-43e9-930b-c8ba5142286f", 
"qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:39:17 np0005486808 nova_compute[259627]: 2025-10-14 09:39:17.380 2 DEBUG oslo_concurrency.lockutils [req-f073202e-225b-430a-ab22-c6f5cc0a3c40 req-6ce77fbc-7ff3-4155-8b7d-b389569bfba8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-e5b13156-71d2-4a9c-be63-1beebe1ca3fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:39:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:39:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2585: 305 pgs: 305 active+clean; 88 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 05:39:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2586: 305 pgs: 305 active+clean; 109 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 117 op/s
Oct 14 05:39:21 np0005486808 ovn_controller[152662]: 2025-10-14T09:39:21Z|00194|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b7:ec:03 10.100.0.7
Oct 14 05:39:21 np0005486808 ovn_controller[152662]: 2025-10-14T09:39:21Z|00195|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b7:ec:03 10.100.0.7
Oct 14 05:39:21 np0005486808 nova_compute[259627]: 2025-10-14 09:39:21.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:22 np0005486808 nova_compute[259627]: 2025-10-14 09:39:22.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2587: 305 pgs: 305 active+clean; 109 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 97 op/s
Oct 14 05:39:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:39:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2588: 305 pgs: 305 active+clean; 109 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 97 op/s
Oct 14 05:39:24 np0005486808 nova_compute[259627]: 2025-10-14 09:39:24.993 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:39:24 np0005486808 nova_compute[259627]: 2025-10-14 09:39:24.993 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 14 05:39:25 np0005486808 nova_compute[259627]: 2025-10-14 09:39:25.014 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 14 05:39:25 np0005486808 podman[415766]: 2025-10-14 09:39:25.723118235 +0000 UTC m=+0.090156194 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 14 05:39:25 np0005486808 podman[415765]: 2025-10-14 09:39:25.735507359 +0000 UTC m=+0.100670812 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:39:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2589: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 118 op/s
Oct 14 05:39:26 np0005486808 nova_compute[259627]: 2025-10-14 09:39:26.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:27 np0005486808 nova_compute[259627]: 2025-10-14 09:39:27.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:39:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2590: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 05:39:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2591: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 05:39:31 np0005486808 nova_compute[259627]: 2025-10-14 09:39:31.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:32 np0005486808 nova_compute[259627]: 2025-10-14 09:39:32.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2592: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 92 KiB/s rd, 107 KiB/s wr, 21 op/s
Oct 14 05:39:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:39:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:39:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:39:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:39:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:39:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:39:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:39:32
Oct 14 05:39:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:39:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:39:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['default.rgw.control', 'backups', '.mgr', 'volumes', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'vms', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data']
Oct 14 05:39:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:39:33 np0005486808 nova_compute[259627]: 2025-10-14 09:39:33.127 2 DEBUG oslo_concurrency.lockutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:39:33 np0005486808 nova_compute[259627]: 2025-10-14 09:39:33.128 2 DEBUG oslo_concurrency.lockutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:39:33 np0005486808 nova_compute[259627]: 2025-10-14 09:39:33.152 2 DEBUG nova.compute.manager [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:39:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:39:33 np0005486808 nova_compute[259627]: 2025-10-14 09:39:33.277 2 DEBUG oslo_concurrency.lockutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:39:33 np0005486808 nova_compute[259627]: 2025-10-14 09:39:33.278 2 DEBUG oslo_concurrency.lockutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:39:33 np0005486808 nova_compute[259627]: 2025-10-14 09:39:33.291 2 DEBUG nova.virt.hardware [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:39:33 np0005486808 nova_compute[259627]: 2025-10-14 09:39:33.292 2 INFO nova.compute.claims [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:39:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:39:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:39:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:39:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:39:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:39:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:39:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:39:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:39:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:39:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:39:33 np0005486808 nova_compute[259627]: 2025-10-14 09:39:33.538 2 DEBUG oslo_concurrency.processutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:39:34 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:39:34 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2472284329' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:39:34 np0005486808 nova_compute[259627]: 2025-10-14 09:39:34.045 2 DEBUG oslo_concurrency.processutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:39:34 np0005486808 nova_compute[259627]: 2025-10-14 09:39:34.052 2 DEBUG nova.compute.provider_tree [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:39:34 np0005486808 nova_compute[259627]: 2025-10-14 09:39:34.076 2 DEBUG nova.scheduler.client.report [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:39:34 np0005486808 nova_compute[259627]: 2025-10-14 09:39:34.101 2 DEBUG oslo_concurrency.lockutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:39:34 np0005486808 nova_compute[259627]: 2025-10-14 09:39:34.102 2 DEBUG nova.compute.manager [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:39:34 np0005486808 nova_compute[259627]: 2025-10-14 09:39:34.163 2 DEBUG nova.compute.manager [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:39:34 np0005486808 nova_compute[259627]: 2025-10-14 09:39:34.163 2 DEBUG nova.network.neutron [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:39:34 np0005486808 nova_compute[259627]: 2025-10-14 09:39:34.186 2 INFO nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:39:34 np0005486808 nova_compute[259627]: 2025-10-14 09:39:34.204 2 DEBUG nova.compute.manager [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:39:34 np0005486808 nova_compute[259627]: 2025-10-14 09:39:34.336 2 DEBUG nova.compute.manager [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:39:34 np0005486808 nova_compute[259627]: 2025-10-14 09:39:34.337 2 DEBUG nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:39:34 np0005486808 nova_compute[259627]: 2025-10-14 09:39:34.338 2 INFO nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Creating image(s)#033[00m
Oct 14 05:39:34 np0005486808 nova_compute[259627]: 2025-10-14 09:39:34.360 2 DEBUG nova.storage.rbd_utils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:39:34 np0005486808 nova_compute[259627]: 2025-10-14 09:39:34.382 2 DEBUG nova.storage.rbd_utils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:39:34 np0005486808 nova_compute[259627]: 2025-10-14 09:39:34.402 2 DEBUG nova.storage.rbd_utils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:39:34 np0005486808 nova_compute[259627]: 2025-10-14 09:39:34.405 2 DEBUG oslo_concurrency.processutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:39:34 np0005486808 nova_compute[259627]: 2025-10-14 09:39:34.455 2 DEBUG nova.policy [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30cd4a7832e74a8ba07238937405c4ea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:39:34 np0005486808 nova_compute[259627]: 2025-10-14 09:39:34.507 2 DEBUG oslo_concurrency.processutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:39:34 np0005486808 nova_compute[259627]: 2025-10-14 09:39:34.507 2 DEBUG oslo_concurrency.lockutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:39:34 np0005486808 nova_compute[259627]: 2025-10-14 09:39:34.508 2 DEBUG oslo_concurrency.lockutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:39:34 np0005486808 nova_compute[259627]: 2025-10-14 09:39:34.508 2 DEBUG oslo_concurrency.lockutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:39:34 np0005486808 nova_compute[259627]: 2025-10-14 09:39:34.529 2 DEBUG nova.storage.rbd_utils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:39:34 np0005486808 nova_compute[259627]: 2025-10-14 09:39:34.532 2 DEBUG oslo_concurrency.processutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:39:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2593: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 92 KiB/s rd, 107 KiB/s wr, 21 op/s
Oct 14 05:39:34 np0005486808 nova_compute[259627]: 2025-10-14 09:39:34.795 2 DEBUG oslo_concurrency.processutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.263s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:39:34 np0005486808 nova_compute[259627]: 2025-10-14 09:39:34.876 2 DEBUG nova.storage.rbd_utils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] resizing rbd image 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:39:34 np0005486808 nova_compute[259627]: 2025-10-14 09:39:34.975 2 DEBUG nova.objects.instance [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'migration_context' on Instance uuid 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:39:34 np0005486808 nova_compute[259627]: 2025-10-14 09:39:34.994 2 DEBUG nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:39:34 np0005486808 nova_compute[259627]: 2025-10-14 09:39:34.995 2 DEBUG nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Ensure instance console log exists: /var/lib/nova/instances/3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:39:34 np0005486808 nova_compute[259627]: 2025-10-14 09:39:34.996 2 DEBUG oslo_concurrency.lockutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:39:34 np0005486808 nova_compute[259627]: 2025-10-14 09:39:34.997 2 DEBUG oslo_concurrency.lockutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:39:34 np0005486808 nova_compute[259627]: 2025-10-14 09:39:34.998 2 DEBUG oslo_concurrency.lockutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:39:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:35.396 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:39:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:35.398 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 14 05:39:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:35.399 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:39:35 np0005486808 nova_compute[259627]: 2025-10-14 09:39:35.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:35 np0005486808 nova_compute[259627]: 2025-10-14 09:39:35.525 2 DEBUG nova.network.neutron [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Successfully created port: e8b3ac7d-3adf-47a0-8a80-7a5692a145de _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:39:36 np0005486808 nova_compute[259627]: 2025-10-14 09:39:36.192 2 DEBUG nova.network.neutron [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Successfully updated port: e8b3ac7d-3adf-47a0-8a80-7a5692a145de _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:39:36 np0005486808 nova_compute[259627]: 2025-10-14 09:39:36.211 2 DEBUG oslo_concurrency.lockutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "refresh_cache-3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:39:36 np0005486808 nova_compute[259627]: 2025-10-14 09:39:36.211 2 DEBUG oslo_concurrency.lockutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquired lock "refresh_cache-3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:39:36 np0005486808 nova_compute[259627]: 2025-10-14 09:39:36.211 2 DEBUG nova.network.neutron [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:39:36 np0005486808 nova_compute[259627]: 2025-10-14 09:39:36.275 2 DEBUG nova.compute.manager [req-b20d6d03-3d05-4b5a-b005-abfc5aa63de9 req-4a337080-fc32-4064-9cfb-85f8aa74d88d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Received event network-changed-e8b3ac7d-3adf-47a0-8a80-7a5692a145de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:39:36 np0005486808 nova_compute[259627]: 2025-10-14 09:39:36.276 2 DEBUG nova.compute.manager [req-b20d6d03-3d05-4b5a-b005-abfc5aa63de9 req-4a337080-fc32-4064-9cfb-85f8aa74d88d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Refreshing instance network info cache due to event network-changed-e8b3ac7d-3adf-47a0-8a80-7a5692a145de. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:39:36 np0005486808 nova_compute[259627]: 2025-10-14 09:39:36.276 2 DEBUG oslo_concurrency.lockutils [req-b20d6d03-3d05-4b5a-b005-abfc5aa63de9 req-4a337080-fc32-4064-9cfb-85f8aa74d88d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:39:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2594: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 109 KiB/s rd, 1.9 MiB/s wr, 48 op/s
Oct 14 05:39:36 np0005486808 nova_compute[259627]: 2025-10-14 09:39:36.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:37 np0005486808 nova_compute[259627]: 2025-10-14 09:39:37.129 2 DEBUG nova.network.neutron [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:39:37 np0005486808 nova_compute[259627]: 2025-10-14 09:39:37.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:39:38 np0005486808 nova_compute[259627]: 2025-10-14 09:39:38.410 2 DEBUG nova.network.neutron [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Updating instance_info_cache with network_info: [{"id": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "address": "fa:16:3e:e7:82:6b", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8b3ac7d-3a", "ovs_interfaceid": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:39:38 np0005486808 nova_compute[259627]: 2025-10-14 09:39:38.445 2 DEBUG oslo_concurrency.lockutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Releasing lock "refresh_cache-3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:39:38 np0005486808 nova_compute[259627]: 2025-10-14 09:39:38.446 2 DEBUG nova.compute.manager [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Instance network_info: |[{"id": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "address": "fa:16:3e:e7:82:6b", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8b3ac7d-3a", "ovs_interfaceid": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:39:38 np0005486808 nova_compute[259627]: 2025-10-14 09:39:38.447 2 DEBUG oslo_concurrency.lockutils [req-b20d6d03-3d05-4b5a-b005-abfc5aa63de9 req-4a337080-fc32-4064-9cfb-85f8aa74d88d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:39:38 np0005486808 nova_compute[259627]: 2025-10-14 09:39:38.447 2 DEBUG nova.network.neutron [req-b20d6d03-3d05-4b5a-b005-abfc5aa63de9 req-4a337080-fc32-4064-9cfb-85f8aa74d88d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Refreshing network info cache for port e8b3ac7d-3adf-47a0-8a80-7a5692a145de _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:39:38 np0005486808 nova_compute[259627]: 2025-10-14 09:39:38.453 2 DEBUG nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Start _get_guest_xml network_info=[{"id": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "address": "fa:16:3e:e7:82:6b", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8b3ac7d-3a", "ovs_interfaceid": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:39:38 np0005486808 nova_compute[259627]: 2025-10-14 09:39:38.461 2 WARNING nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:39:38 np0005486808 nova_compute[259627]: 2025-10-14 09:39:38.467 2 DEBUG nova.virt.libvirt.host [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:39:38 np0005486808 nova_compute[259627]: 2025-10-14 09:39:38.468 2 DEBUG nova.virt.libvirt.host [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:39:38 np0005486808 nova_compute[259627]: 2025-10-14 09:39:38.477 2 DEBUG nova.virt.libvirt.host [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:39:38 np0005486808 nova_compute[259627]: 2025-10-14 09:39:38.478 2 DEBUG nova.virt.libvirt.host [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:39:38 np0005486808 nova_compute[259627]: 2025-10-14 09:39:38.479 2 DEBUG nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:39:38 np0005486808 nova_compute[259627]: 2025-10-14 09:39:38.480 2 DEBUG nova.virt.hardware [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:39:38 np0005486808 nova_compute[259627]: 2025-10-14 09:39:38.480 2 DEBUG nova.virt.hardware [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:39:38 np0005486808 nova_compute[259627]: 2025-10-14 09:39:38.481 2 DEBUG nova.virt.hardware [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:39:38 np0005486808 nova_compute[259627]: 2025-10-14 09:39:38.481 2 DEBUG nova.virt.hardware [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:39:38 np0005486808 nova_compute[259627]: 2025-10-14 09:39:38.482 2 DEBUG nova.virt.hardware [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:39:38 np0005486808 nova_compute[259627]: 2025-10-14 09:39:38.482 2 DEBUG nova.virt.hardware [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:39:38 np0005486808 nova_compute[259627]: 2025-10-14 09:39:38.483 2 DEBUG nova.virt.hardware [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:39:38 np0005486808 nova_compute[259627]: 2025-10-14 09:39:38.483 2 DEBUG nova.virt.hardware [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:39:38 np0005486808 nova_compute[259627]: 2025-10-14 09:39:38.484 2 DEBUG nova.virt.hardware [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:39:38 np0005486808 nova_compute[259627]: 2025-10-14 09:39:38.484 2 DEBUG nova.virt.hardware [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:39:38 np0005486808 nova_compute[259627]: 2025-10-14 09:39:38.485 2 DEBUG nova.virt.hardware [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:39:38 np0005486808 nova_compute[259627]: 2025-10-14 09:39:38.489 2 DEBUG oslo_concurrency.processutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:39:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2595: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:39:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:39:38 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3351782841' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:39:38 np0005486808 nova_compute[259627]: 2025-10-14 09:39:38.993 2 DEBUG oslo_concurrency.processutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:39:39 np0005486808 nova_compute[259627]: 2025-10-14 09:39:39.015 2 DEBUG nova.storage.rbd_utils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:39:39 np0005486808 nova_compute[259627]: 2025-10-14 09:39:39.019 2 DEBUG oslo_concurrency.processutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:39:39 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:39:39 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2661788815' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:39:39 np0005486808 nova_compute[259627]: 2025-10-14 09:39:39.489 2 DEBUG oslo_concurrency.processutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:39:39 np0005486808 nova_compute[259627]: 2025-10-14 09:39:39.491 2 DEBUG nova.virt.libvirt.vif [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:39:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-311340838',display_name='tempest-TestGettingAddress-server-311340838',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-311340838',id=149,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAMDf2XavD9/IfCp+ndZfV3AdZCUIhTR2npOL1XNIlUaFyzTUoWv4qmqvpPpwdd1PNzx9cK/19FXxs4psOVoqPZEBFGbKyJdIety1giFsXf8LSdMC27Wpk9aHw8ObIpVuA==',key_name='tempest-TestGettingAddress-1135497915',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-0evlaun0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:39:34Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "address": "fa:16:3e:e7:82:6b", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8b3ac7d-3a", "ovs_interfaceid": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:39:39 np0005486808 nova_compute[259627]: 2025-10-14 09:39:39.491 2 DEBUG nova.network.os_vif_util [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "address": "fa:16:3e:e7:82:6b", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8b3ac7d-3a", "ovs_interfaceid": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:39:39 np0005486808 nova_compute[259627]: 2025-10-14 09:39:39.492 2 DEBUG nova.network.os_vif_util [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:82:6b,bridge_name='br-int',has_traffic_filtering=True,id=e8b3ac7d-3adf-47a0-8a80-7a5692a145de,network=Network(14196b9b-0205-497b-9e98-32690613a533),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8b3ac7d-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:39:39 np0005486808 nova_compute[259627]: 2025-10-14 09:39:39.494 2 DEBUG nova.objects.instance [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'pci_devices' on Instance uuid 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:39:39 np0005486808 nova_compute[259627]: 2025-10-14 09:39:39.513 2 DEBUG nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:39:39 np0005486808 nova_compute[259627]:  <uuid>3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed</uuid>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:  <name>instance-00000095</name>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:39:39 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:      <nova:name>tempest-TestGettingAddress-server-311340838</nova:name>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:39:38</nova:creationTime>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:39:39 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:        <nova:user uuid="30cd4a7832e74a8ba07238937405c4ea">tempest-TestGettingAddress-262346303-project-member</nova:user>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:        <nova:project uuid="878ad3f42dd94bef8cc66ab3d67f03bf">tempest-TestGettingAddress-262346303</nova:project>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:        <nova:port uuid="e8b3ac7d-3adf-47a0-8a80-7a5692a145de">
Oct 14 05:39:39 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fee7:826b" ipVersion="6"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fee7:826b" ipVersion="6"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:      <entry name="serial">3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed</entry>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:      <entry name="uuid">3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed</entry>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:39:39 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed_disk">
Oct 14 05:39:39 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:39:39 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:39:39 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed_disk.config">
Oct 14 05:39:39 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:39:39 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:39:39 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:e7:82:6b"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:      <target dev="tape8b3ac7d-3a"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:39:39 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed/console.log" append="off"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:39:39 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:39:39 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:39:39 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:39:39 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:39:39 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:39:39 np0005486808 nova_compute[259627]: 2025-10-14 09:39:39.513 2 DEBUG nova.compute.manager [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Preparing to wait for external event network-vif-plugged-e8b3ac7d-3adf-47a0-8a80-7a5692a145de prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:39:39 np0005486808 nova_compute[259627]: 2025-10-14 09:39:39.514 2 DEBUG oslo_concurrency.lockutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:39:39 np0005486808 nova_compute[259627]: 2025-10-14 09:39:39.514 2 DEBUG oslo_concurrency.lockutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:39:39 np0005486808 nova_compute[259627]: 2025-10-14 09:39:39.514 2 DEBUG oslo_concurrency.lockutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:39:39 np0005486808 nova_compute[259627]: 2025-10-14 09:39:39.515 2 DEBUG nova.virt.libvirt.vif [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:39:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-311340838',display_name='tempest-TestGettingAddress-server-311340838',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-311340838',id=149,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAMDf2XavD9/IfCp+ndZfV3AdZCUIhTR2npOL1XNIlUaFyzTUoWv4qmqvpPpwdd1PNzx9cK/19FXxs4psOVoqPZEBFGbKyJdIety1giFsXf8LSdMC27Wpk9aHw8ObIpVuA==',key_name='tempest-TestGettingAddress-1135497915',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-0evlaun0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:39:34Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "address": "fa:16:3e:e7:82:6b", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8b3ac7d-3a", "ovs_interfaceid": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:39:39 np0005486808 nova_compute[259627]: 2025-10-14 09:39:39.515 2 DEBUG nova.network.os_vif_util [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "address": "fa:16:3e:e7:82:6b", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8b3ac7d-3a", "ovs_interfaceid": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:39:39 np0005486808 nova_compute[259627]: 2025-10-14 09:39:39.516 2 DEBUG nova.network.os_vif_util [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:82:6b,bridge_name='br-int',has_traffic_filtering=True,id=e8b3ac7d-3adf-47a0-8a80-7a5692a145de,network=Network(14196b9b-0205-497b-9e98-32690613a533),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8b3ac7d-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:39:39 np0005486808 nova_compute[259627]: 2025-10-14 09:39:39.516 2 DEBUG os_vif [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:82:6b,bridge_name='br-int',has_traffic_filtering=True,id=e8b3ac7d-3adf-47a0-8a80-7a5692a145de,network=Network(14196b9b-0205-497b-9e98-32690613a533),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8b3ac7d-3a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:39:39 np0005486808 nova_compute[259627]: 2025-10-14 09:39:39.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:39 np0005486808 nova_compute[259627]: 2025-10-14 09:39:39.517 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:39:39 np0005486808 nova_compute[259627]: 2025-10-14 09:39:39.518 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:39:39 np0005486808 nova_compute[259627]: 2025-10-14 09:39:39.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:39 np0005486808 nova_compute[259627]: 2025-10-14 09:39:39.521 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape8b3ac7d-3a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:39:39 np0005486808 nova_compute[259627]: 2025-10-14 09:39:39.522 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape8b3ac7d-3a, col_values=(('external_ids', {'iface-id': 'e8b3ac7d-3adf-47a0-8a80-7a5692a145de', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e7:82:6b', 'vm-uuid': '3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:39:39 np0005486808 nova_compute[259627]: 2025-10-14 09:39:39.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:39 np0005486808 NetworkManager[44885]: <info>  [1760434779.5399] manager: (tape8b3ac7d-3a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/672)
Oct 14 05:39:39 np0005486808 nova_compute[259627]: 2025-10-14 09:39:39.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:39:39 np0005486808 nova_compute[259627]: 2025-10-14 09:39:39.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:39 np0005486808 nova_compute[259627]: 2025-10-14 09:39:39.548 2 INFO os_vif [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:82:6b,bridge_name='br-int',has_traffic_filtering=True,id=e8b3ac7d-3adf-47a0-8a80-7a5692a145de,network=Network(14196b9b-0205-497b-9e98-32690613a533),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8b3ac7d-3a')#033[00m
Oct 14 05:39:39 np0005486808 nova_compute[259627]: 2025-10-14 09:39:39.642 2 DEBUG nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:39:39 np0005486808 nova_compute[259627]: 2025-10-14 09:39:39.642 2 DEBUG nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:39:39 np0005486808 nova_compute[259627]: 2025-10-14 09:39:39.643 2 DEBUG nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:e7:82:6b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:39:39 np0005486808 nova_compute[259627]: 2025-10-14 09:39:39.643 2 INFO nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Using config drive#033[00m
Oct 14 05:39:39 np0005486808 nova_compute[259627]: 2025-10-14 09:39:39.667 2 DEBUG nova.storage.rbd_utils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:39:40 np0005486808 nova_compute[259627]: 2025-10-14 09:39:40.065 2 INFO nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Creating config drive at /var/lib/nova/instances/3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed/disk.config#033[00m
Oct 14 05:39:40 np0005486808 nova_compute[259627]: 2025-10-14 09:39:40.070 2 DEBUG oslo_concurrency.processutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfum5ym0q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:39:40 np0005486808 nova_compute[259627]: 2025-10-14 09:39:40.216 2 DEBUG oslo_concurrency.processutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfum5ym0q" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:39:40 np0005486808 nova_compute[259627]: 2025-10-14 09:39:40.256 2 DEBUG nova.storage.rbd_utils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:39:40 np0005486808 nova_compute[259627]: 2025-10-14 09:39:40.261 2 DEBUG oslo_concurrency.processutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed/disk.config 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:39:40 np0005486808 nova_compute[259627]: 2025-10-14 09:39:40.483 2 DEBUG oslo_concurrency.processutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed/disk.config 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.222s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:39:40 np0005486808 nova_compute[259627]: 2025-10-14 09:39:40.485 2 INFO nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Deleting local config drive /var/lib/nova/instances/3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed/disk.config because it was imported into RBD.#033[00m
Oct 14 05:39:40 np0005486808 kernel: tape8b3ac7d-3a: entered promiscuous mode
Oct 14 05:39:40 np0005486808 NetworkManager[44885]: <info>  [1760434780.5446] manager: (tape8b3ac7d-3a): new Tun device (/org/freedesktop/NetworkManager/Devices/673)
Oct 14 05:39:40 np0005486808 nova_compute[259627]: 2025-10-14 09:39:40.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:40 np0005486808 ovn_controller[152662]: 2025-10-14T09:39:40Z|01638|binding|INFO|Claiming lport e8b3ac7d-3adf-47a0-8a80-7a5692a145de for this chassis.
Oct 14 05:39:40 np0005486808 ovn_controller[152662]: 2025-10-14T09:39:40Z|01639|binding|INFO|e8b3ac7d-3adf-47a0-8a80-7a5692a145de: Claiming fa:16:3e:e7:82:6b 10.100.0.10 2001:db8:0:1:f816:3eff:fee7:826b 2001:db8::f816:3eff:fee7:826b
Oct 14 05:39:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:40.555 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:82:6b 10.100.0.10 2001:db8:0:1:f816:3eff:fee7:826b 2001:db8::f816:3eff:fee7:826b'], port_security=['fa:16:3e:e7:82:6b 10.100.0.10 2001:db8:0:1:f816:3eff:fee7:826b 2001:db8::f816:3eff:fee7:826b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28 2001:db8:0:1:f816:3eff:fee7:826b/64 2001:db8::f816:3eff:fee7:826b/64', 'neutron:device_id': '3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14196b9b-0205-497b-9e98-32690613a533', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '75fd0641-e399-4e8e-872e-ceab82cd0201', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=87d75e0f-ba6b-4079-b571-32cab0870048, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=e8b3ac7d-3adf-47a0-8a80-7a5692a145de) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:39:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:40.556 162547 INFO neutron.agent.ovn.metadata.agent [-] Port e8b3ac7d-3adf-47a0-8a80-7a5692a145de in datapath 14196b9b-0205-497b-9e98-32690613a533 bound to our chassis#033[00m
Oct 14 05:39:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:40.557 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14196b9b-0205-497b-9e98-32690613a533#033[00m
Oct 14 05:39:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:40.572 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a4c47e6c-12af-4ac2-8e7d-d34b3789f4a1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:39:40 np0005486808 ovn_controller[152662]: 2025-10-14T09:39:40Z|01640|binding|INFO|Setting lport e8b3ac7d-3adf-47a0-8a80-7a5692a145de ovn-installed in OVS
Oct 14 05:39:40 np0005486808 ovn_controller[152662]: 2025-10-14T09:39:40Z|01641|binding|INFO|Setting lport e8b3ac7d-3adf-47a0-8a80-7a5692a145de up in Southbound
Oct 14 05:39:40 np0005486808 nova_compute[259627]: 2025-10-14 09:39:40.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:40.633 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[436bb6bb-6f8a-4713-b621-e0ca8d4c6b48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:39:40 np0005486808 systemd-machined[214636]: New machine qemu-182-instance-00000095.
Oct 14 05:39:40 np0005486808 systemd-udevd[416137]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:39:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:40.637 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[5fe31f5c-1c46-42a2-ac27-ed7ac8d2677c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:39:40 np0005486808 systemd[1]: Started Virtual Machine qemu-182-instance-00000095.
Oct 14 05:39:40 np0005486808 NetworkManager[44885]: <info>  [1760434780.6502] device (tape8b3ac7d-3a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:39:40 np0005486808 NetworkManager[44885]: <info>  [1760434780.6522] device (tape8b3ac7d-3a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:39:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2596: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 14 05:39:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:40.675 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[35e3fda8-f035-47be-9030-53d12f8e1c94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:39:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:40.700 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[adbb8f83-1f6f-44b9-ada8-d6f6546f81c6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14196b9b-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:47:b8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 25, 'tx_packets': 5, 'rx_bytes': 2230, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 25, 'tx_packets': 5, 'rx_bytes': 2230, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 463], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 852727, 'reachable_time': 21450, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 23, 'inoctets': 1824, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 23, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1824, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 23, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 416162, 'error': None, 'target': 'ovnmeta-14196b9b-0205-497b-9e98-32690613a533', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:39:40 np0005486808 podman[416126]: 2025-10-14 09:39:40.705858107 +0000 UTC m=+0.075015132 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 14 05:39:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:40.717 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1427f6ee-9d17-4d5f-b7e6-f45e44c70e3e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap14196b9b-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 852742, 'tstamp': 852742}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 416175, 'error': None, 'target': 'ovnmeta-14196b9b-0205-497b-9e98-32690613a533', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap14196b9b-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 852745, 'tstamp': 852745}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 416175, 'error': None, 'target': 'ovnmeta-14196b9b-0205-497b-9e98-32690613a533', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:39:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:40.719 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14196b9b-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:39:40 np0005486808 nova_compute[259627]: 2025-10-14 09:39:40.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:40.721 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14196b9b-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:39:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:40.721 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:39:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:40.722 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14196b9b-00, col_values=(('external_ids', {'iface-id': '500514c8-ea10-48c5-93d8-b1c6948e60b0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:39:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:39:40.722 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:39:40 np0005486808 podman[416123]: 2025-10-14 09:39:40.764855775 +0000 UTC m=+0.136634244 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:39:41 np0005486808 nova_compute[259627]: 2025-10-14 09:39:41.692 2 DEBUG nova.compute.manager [req-2b7fbd52-21db-439b-8d5f-7a733399178d req-631fd37b-773f-4807-9bda-399de5e86a2d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Received event network-vif-plugged-e8b3ac7d-3adf-47a0-8a80-7a5692a145de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:39:41 np0005486808 nova_compute[259627]: 2025-10-14 09:39:41.693 2 DEBUG oslo_concurrency.lockutils [req-2b7fbd52-21db-439b-8d5f-7a733399178d req-631fd37b-773f-4807-9bda-399de5e86a2d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:39:41 np0005486808 nova_compute[259627]: 2025-10-14 09:39:41.693 2 DEBUG oslo_concurrency.lockutils [req-2b7fbd52-21db-439b-8d5f-7a733399178d req-631fd37b-773f-4807-9bda-399de5e86a2d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:39:41 np0005486808 nova_compute[259627]: 2025-10-14 09:39:41.694 2 DEBUG oslo_concurrency.lockutils [req-2b7fbd52-21db-439b-8d5f-7a733399178d req-631fd37b-773f-4807-9bda-399de5e86a2d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:39:41 np0005486808 nova_compute[259627]: 2025-10-14 09:39:41.694 2 DEBUG nova.compute.manager [req-2b7fbd52-21db-439b-8d5f-7a733399178d req-631fd37b-773f-4807-9bda-399de5e86a2d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Processing event network-vif-plugged-e8b3ac7d-3adf-47a0-8a80-7a5692a145de _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:39:41 np0005486808 nova_compute[259627]: 2025-10-14 09:39:41.696 2 DEBUG nova.network.neutron [req-b20d6d03-3d05-4b5a-b005-abfc5aa63de9 req-4a337080-fc32-4064-9cfb-85f8aa74d88d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Updated VIF entry in instance network info cache for port e8b3ac7d-3adf-47a0-8a80-7a5692a145de. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:39:41 np0005486808 nova_compute[259627]: 2025-10-14 09:39:41.697 2 DEBUG nova.network.neutron [req-b20d6d03-3d05-4b5a-b005-abfc5aa63de9 req-4a337080-fc32-4064-9cfb-85f8aa74d88d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Updating instance_info_cache with network_info: [{"id": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "address": "fa:16:3e:e7:82:6b", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8b3ac7d-3a", "ovs_interfaceid": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:39:41 np0005486808 nova_compute[259627]: 2025-10-14 09:39:41.711 2 DEBUG oslo_concurrency.lockutils [req-b20d6d03-3d05-4b5a-b005-abfc5aa63de9 req-4a337080-fc32-4064-9cfb-85f8aa74d88d 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:39:41 np0005486808 nova_compute[259627]: 2025-10-14 09:39:41.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:41 np0005486808 nova_compute[259627]: 2025-10-14 09:39:41.776 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434781.7759151, 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:39:41 np0005486808 nova_compute[259627]: 2025-10-14 09:39:41.777 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] VM Started (Lifecycle Event)#033[00m
Oct 14 05:39:41 np0005486808 nova_compute[259627]: 2025-10-14 09:39:41.780 2 DEBUG nova.compute.manager [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:39:41 np0005486808 nova_compute[259627]: 2025-10-14 09:39:41.786 2 DEBUG nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:39:41 np0005486808 nova_compute[259627]: 2025-10-14 09:39:41.794 2 INFO nova.virt.libvirt.driver [-] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Instance spawned successfully.#033[00m
Oct 14 05:39:41 np0005486808 nova_compute[259627]: 2025-10-14 09:39:41.794 2 DEBUG nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:39:41 np0005486808 nova_compute[259627]: 2025-10-14 09:39:41.799 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:39:41 np0005486808 nova_compute[259627]: 2025-10-14 09:39:41.806 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:39:41 np0005486808 nova_compute[259627]: 2025-10-14 09:39:41.822 2 DEBUG nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:39:41 np0005486808 nova_compute[259627]: 2025-10-14 09:39:41.823 2 DEBUG nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:39:41 np0005486808 nova_compute[259627]: 2025-10-14 09:39:41.824 2 DEBUG nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:39:41 np0005486808 nova_compute[259627]: 2025-10-14 09:39:41.825 2 DEBUG nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:39:41 np0005486808 nova_compute[259627]: 2025-10-14 09:39:41.825 2 DEBUG nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:39:41 np0005486808 nova_compute[259627]: 2025-10-14 09:39:41.826 2 DEBUG nova.virt.libvirt.driver [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:39:41 np0005486808 nova_compute[259627]: 2025-10-14 09:39:41.835 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:39:41 np0005486808 nova_compute[259627]: 2025-10-14 09:39:41.836 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434781.776156, 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:39:41 np0005486808 nova_compute[259627]: 2025-10-14 09:39:41.836 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:39:41 np0005486808 nova_compute[259627]: 2025-10-14 09:39:41.887 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:39:41 np0005486808 nova_compute[259627]: 2025-10-14 09:39:41.893 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434781.7861793, 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:39:41 np0005486808 nova_compute[259627]: 2025-10-14 09:39:41.893 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:39:41 np0005486808 nova_compute[259627]: 2025-10-14 09:39:41.939 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:39:41 np0005486808 nova_compute[259627]: 2025-10-14 09:39:41.942 2 INFO nova.compute.manager [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Took 7.60 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:39:41 np0005486808 nova_compute[259627]: 2025-10-14 09:39:41.942 2 DEBUG nova.compute.manager [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:39:41 np0005486808 nova_compute[259627]: 2025-10-14 09:39:41.947 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:39:41 np0005486808 nova_compute[259627]: 2025-10-14 09:39:41.985 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:39:42 np0005486808 nova_compute[259627]: 2025-10-14 09:39:42.027 2 INFO nova.compute.manager [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Took 8.80 seconds to build instance.#033[00m
Oct 14 05:39:42 np0005486808 nova_compute[259627]: 2025-10-14 09:39:42.047 2 DEBUG oslo_concurrency.lockutils [None req-53741d57-398e-42a1-9124-43d793b22322 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.919s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:39:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2597: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 14 05:39:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:39:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:39:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:39:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:39:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:39:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011052700926287686 of space, bias 1.0, pg target 0.3315810277886306 quantized to 32 (current 32)
Oct 14 05:39:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:39:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:39:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:39:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:39:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:39:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 05:39:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:39:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:39:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:39:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:39:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:39:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:39:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:39:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:39:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:39:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:39:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:39:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:39:43 np0005486808 nova_compute[259627]: 2025-10-14 09:39:43.812 2 DEBUG nova.compute.manager [req-76e49958-f81d-4069-9ea7-9188b0803781 req-312536b7-6a0b-4fe1-b58e-891ad8207f54 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Received event network-vif-plugged-e8b3ac7d-3adf-47a0-8a80-7a5692a145de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:39:43 np0005486808 nova_compute[259627]: 2025-10-14 09:39:43.814 2 DEBUG oslo_concurrency.lockutils [req-76e49958-f81d-4069-9ea7-9188b0803781 req-312536b7-6a0b-4fe1-b58e-891ad8207f54 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:39:43 np0005486808 nova_compute[259627]: 2025-10-14 09:39:43.814 2 DEBUG oslo_concurrency.lockutils [req-76e49958-f81d-4069-9ea7-9188b0803781 req-312536b7-6a0b-4fe1-b58e-891ad8207f54 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:39:43 np0005486808 nova_compute[259627]: 2025-10-14 09:39:43.815 2 DEBUG oslo_concurrency.lockutils [req-76e49958-f81d-4069-9ea7-9188b0803781 req-312536b7-6a0b-4fe1-b58e-891ad8207f54 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:39:43 np0005486808 nova_compute[259627]: 2025-10-14 09:39:43.815 2 DEBUG nova.compute.manager [req-76e49958-f81d-4069-9ea7-9188b0803781 req-312536b7-6a0b-4fe1-b58e-891ad8207f54 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] No waiting events found dispatching network-vif-plugged-e8b3ac7d-3adf-47a0-8a80-7a5692a145de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:39:43 np0005486808 nova_compute[259627]: 2025-10-14 09:39:43.815 2 WARNING nova.compute.manager [req-76e49958-f81d-4069-9ea7-9188b0803781 req-312536b7-6a0b-4fe1-b58e-891ad8207f54 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Received unexpected event network-vif-plugged-e8b3ac7d-3adf-47a0-8a80-7a5692a145de for instance with vm_state active and task_state None.#033[00m
Oct 14 05:39:44 np0005486808 nova_compute[259627]: 2025-10-14 09:39:44.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2598: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 14 05:39:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2599: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 14 05:39:46 np0005486808 nova_compute[259627]: 2025-10-14 09:39:46.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:47 np0005486808 nova_compute[259627]: 2025-10-14 09:39:46.999 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:39:47 np0005486808 nova_compute[259627]: 2025-10-14 09:39:47.350 2 DEBUG nova.compute.manager [req-f4a5b2e9-3d4b-4de2-980c-2ce0f62c3de3 req-21611eba-e116-4057-8a2b-b74c89442743 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Received event network-changed-e8b3ac7d-3adf-47a0-8a80-7a5692a145de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:39:47 np0005486808 nova_compute[259627]: 2025-10-14 09:39:47.351 2 DEBUG nova.compute.manager [req-f4a5b2e9-3d4b-4de2-980c-2ce0f62c3de3 req-21611eba-e116-4057-8a2b-b74c89442743 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Refreshing instance network info cache due to event network-changed-e8b3ac7d-3adf-47a0-8a80-7a5692a145de. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:39:47 np0005486808 nova_compute[259627]: 2025-10-14 09:39:47.352 2 DEBUG oslo_concurrency.lockutils [req-f4a5b2e9-3d4b-4de2-980c-2ce0f62c3de3 req-21611eba-e116-4057-8a2b-b74c89442743 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:39:47 np0005486808 nova_compute[259627]: 2025-10-14 09:39:47.352 2 DEBUG oslo_concurrency.lockutils [req-f4a5b2e9-3d4b-4de2-980c-2ce0f62c3de3 req-21611eba-e116-4057-8a2b-b74c89442743 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:39:47 np0005486808 nova_compute[259627]: 2025-10-14 09:39:47.353 2 DEBUG nova.network.neutron [req-f4a5b2e9-3d4b-4de2-980c-2ce0f62c3de3 req-21611eba-e116-4057-8a2b-b74c89442743 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Refreshing network info cache for port e8b3ac7d-3adf-47a0-8a80-7a5692a145de _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:39:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:39:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2600: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 05:39:49 np0005486808 nova_compute[259627]: 2025-10-14 09:39:49.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:50 np0005486808 nova_compute[259627]: 2025-10-14 09:39:50.166 2 DEBUG nova.network.neutron [req-f4a5b2e9-3d4b-4de2-980c-2ce0f62c3de3 req-21611eba-e116-4057-8a2b-b74c89442743 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Updated VIF entry in instance network info cache for port e8b3ac7d-3adf-47a0-8a80-7a5692a145de. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:39:50 np0005486808 nova_compute[259627]: 2025-10-14 09:39:50.167 2 DEBUG nova.network.neutron [req-f4a5b2e9-3d4b-4de2-980c-2ce0f62c3de3 req-21611eba-e116-4057-8a2b-b74c89442743 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Updating instance_info_cache with network_info: [{"id": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "address": "fa:16:3e:e7:82:6b", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8b3ac7d-3a", "ovs_interfaceid": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", 
"qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:39:50 np0005486808 nova_compute[259627]: 2025-10-14 09:39:50.202 2 DEBUG oslo_concurrency.lockutils [req-f4a5b2e9-3d4b-4de2-980c-2ce0f62c3de3 req-21611eba-e116-4057-8a2b-b74c89442743 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:39:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2601: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 05:39:51 np0005486808 nova_compute[259627]: 2025-10-14 09:39:51.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:52 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #51. Immutable memtables: 0.
Oct 14 05:39:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2602: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 70 op/s
Oct 14 05:39:52 np0005486808 nova_compute[259627]: 2025-10-14 09:39:52.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:39:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:39:54 np0005486808 nova_compute[259627]: 2025-10-14 09:39:54.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2603: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 70 op/s
Oct 14 05:39:54 np0005486808 ovn_controller[152662]: 2025-10-14T09:39:54Z|00196|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e7:82:6b 10.100.0.10
Oct 14 05:39:54 np0005486808 ovn_controller[152662]: 2025-10-14T09:39:54Z|00197|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e7:82:6b 10.100.0.10
Oct 14 05:39:54 np0005486808 nova_compute[259627]: 2025-10-14 09:39:54.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:39:54 np0005486808 nova_compute[259627]: 2025-10-14 09:39:54.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:39:55 np0005486808 nova_compute[259627]: 2025-10-14 09:39:55.007 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:39:55 np0005486808 nova_compute[259627]: 2025-10-14 09:39:55.008 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:39:55 np0005486808 nova_compute[259627]: 2025-10-14 09:39:55.008 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:39:55 np0005486808 nova_compute[259627]: 2025-10-14 09:39:55.009 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:39:55 np0005486808 nova_compute[259627]: 2025-10-14 09:39:55.009 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:39:55 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:39:55 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1520363710' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:39:55 np0005486808 nova_compute[259627]: 2025-10-14 09:39:55.506 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:39:55 np0005486808 nova_compute[259627]: 2025-10-14 09:39:55.612 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000094 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:39:55 np0005486808 nova_compute[259627]: 2025-10-14 09:39:55.613 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000094 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:39:55 np0005486808 nova_compute[259627]: 2025-10-14 09:39:55.619 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:39:55 np0005486808 nova_compute[259627]: 2025-10-14 09:39:55.619 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:39:55 np0005486808 nova_compute[259627]: 2025-10-14 09:39:55.861 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:39:55 np0005486808 nova_compute[259627]: 2025-10-14 09:39:55.861 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3179MB free_disk=59.92183303833008GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:39:55 np0005486808 nova_compute[259627]: 2025-10-14 09:39:55.862 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:39:55 np0005486808 nova_compute[259627]: 2025-10-14 09:39:55.862 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:39:55 np0005486808 nova_compute[259627]: 2025-10-14 09:39:55.981 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance e5b13156-71d2-4a9c-be63-1beebe1ca3fb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:39:55 np0005486808 nova_compute[259627]: 2025-10-14 09:39:55.982 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:39:55 np0005486808 nova_compute[259627]: 2025-10-14 09:39:55.982 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:39:55 np0005486808 nova_compute[259627]: 2025-10-14 09:39:55.983 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:39:56 np0005486808 nova_compute[259627]: 2025-10-14 09:39:56.010 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing inventories for resource provider 92105e1d-1743-46e3-a494-858b4331398a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 14 05:39:56 np0005486808 nova_compute[259627]: 2025-10-14 09:39:56.036 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Updating ProviderTree inventory for provider 92105e1d-1743-46e3-a494-858b4331398a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 14 05:39:56 np0005486808 nova_compute[259627]: 2025-10-14 09:39:56.036 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Updating inventory in ProviderTree for provider 92105e1d-1743-46e3-a494-858b4331398a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 14 05:39:56 np0005486808 nova_compute[259627]: 2025-10-14 09:39:56.053 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing aggregate associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 14 05:39:56 np0005486808 nova_compute[259627]: 2025-10-14 09:39:56.096 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing trait associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, traits: COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_ACCELERATORS,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 14 05:39:56 np0005486808 nova_compute[259627]: 2025-10-14 09:39:56.181 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:39:56 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:39:56 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3786777675' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:39:56 np0005486808 podman[416270]: 2025-10-14 09:39:56.655001097 +0000 UTC m=+0.068594944 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:39:56 np0005486808 podman[416271]: 2025-10-14 09:39:56.656510864 +0000 UTC m=+0.065370905 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=iscsid, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:39:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2604: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Oct 14 05:39:56 np0005486808 nova_compute[259627]: 2025-10-14 09:39:56.674 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:39:56 np0005486808 nova_compute[259627]: 2025-10-14 09:39:56.680 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:39:56 np0005486808 nova_compute[259627]: 2025-10-14 09:39:56.710 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:39:56 np0005486808 nova_compute[259627]: 2025-10-14 09:39:56.733 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:39:56 np0005486808 nova_compute[259627]: 2025-10-14 09:39:56.734 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.872s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:39:56 np0005486808 nova_compute[259627]: 2025-10-14 09:39:56.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:39:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2605: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 05:39:59 np0005486808 nova_compute[259627]: 2025-10-14 09:39:59.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:39:59 np0005486808 nova_compute[259627]: 2025-10-14 09:39:59.733 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:39:59 np0005486808 nova_compute[259627]: 2025-10-14 09:39:59.734 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:39:59 np0005486808 nova_compute[259627]: 2025-10-14 09:39:59.734 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:39:59 np0005486808 nova_compute[259627]: 2025-10-14 09:39:59.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:39:59 np0005486808 nova_compute[259627]: 2025-10-14 09:39:59.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:39:59 np0005486808 nova_compute[259627]: 2025-10-14 09:39:59.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 05:40:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2606: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 05:40:00 np0005486808 nova_compute[259627]: 2025-10-14 09:40:00.974 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-e5b13156-71d2-4a9c-be63-1beebe1ca3fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:40:00 np0005486808 nova_compute[259627]: 2025-10-14 09:40:00.975 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-e5b13156-71d2-4a9c-be63-1beebe1ca3fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:40:00 np0005486808 nova_compute[259627]: 2025-10-14 09:40:00.975 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 14 05:40:00 np0005486808 nova_compute[259627]: 2025-10-14 09:40:00.976 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e5b13156-71d2-4a9c-be63-1beebe1ca3fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:40:01 np0005486808 nova_compute[259627]: 2025-10-14 09:40:01.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:40:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2607: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 05:40:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:40:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:40:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:40:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:40:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:40:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:40:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:40:04 np0005486808 nova_compute[259627]: 2025-10-14 09:40:04.310 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Updating instance_info_cache with network_info: [{"id": "30c28c87-45b1-43e9-930b-c8ba5142286f", "address": "fa:16:3e:b7:ec:03", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30c28c87-45", "ovs_interfaceid": "30c28c87-45b1-43e9-930b-c8ba5142286f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:40:04 np0005486808 nova_compute[259627]: 2025-10-14 09:40:04.330 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-e5b13156-71d2-4a9c-be63-1beebe1ca3fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:40:04 np0005486808 nova_compute[259627]: 2025-10-14 09:40:04.331 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 14 05:40:04 np0005486808 nova_compute[259627]: 2025-10-14 09:40:04.332 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:40:04 np0005486808 nova_compute[259627]: 2025-10-14 09:40:04.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:40:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2608: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 05:40:04 np0005486808 nova_compute[259627]: 2025-10-14 09:40:04.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:40:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:40:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2394527340' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:40:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:40:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2394527340' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:40:06 np0005486808 nova_compute[259627]: 2025-10-14 09:40:06.322 2 DEBUG nova.compute.manager [req-6f93fd00-58b5-40a5-87a0-0569a483e912 req-ce344d58-18e1-4636-b834-ba4e6009f76b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Received event network-changed-e8b3ac7d-3adf-47a0-8a80-7a5692a145de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:40:06 np0005486808 nova_compute[259627]: 2025-10-14 09:40:06.323 2 DEBUG nova.compute.manager [req-6f93fd00-58b5-40a5-87a0-0569a483e912 req-ce344d58-18e1-4636-b834-ba4e6009f76b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Refreshing instance network info cache due to event network-changed-e8b3ac7d-3adf-47a0-8a80-7a5692a145de. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:40:06 np0005486808 nova_compute[259627]: 2025-10-14 09:40:06.323 2 DEBUG oslo_concurrency.lockutils [req-6f93fd00-58b5-40a5-87a0-0569a483e912 req-ce344d58-18e1-4636-b834-ba4e6009f76b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:40:06 np0005486808 nova_compute[259627]: 2025-10-14 09:40:06.324 2 DEBUG oslo_concurrency.lockutils [req-6f93fd00-58b5-40a5-87a0-0569a483e912 req-ce344d58-18e1-4636-b834-ba4e6009f76b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:40:06 np0005486808 nova_compute[259627]: 2025-10-14 09:40:06.324 2 DEBUG nova.network.neutron [req-6f93fd00-58b5-40a5-87a0-0569a483e912 req-ce344d58-18e1-4636-b834-ba4e6009f76b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Refreshing network info cache for port e8b3ac7d-3adf-47a0-8a80-7a5692a145de _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:40:06 np0005486808 nova_compute[259627]: 2025-10-14 09:40:06.385 2 DEBUG oslo_concurrency.lockutils [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:40:06 np0005486808 nova_compute[259627]: 2025-10-14 09:40:06.386 2 DEBUG oslo_concurrency.lockutils [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:40:06 np0005486808 nova_compute[259627]: 2025-10-14 09:40:06.386 2 DEBUG oslo_concurrency.lockutils [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:40:06 np0005486808 nova_compute[259627]: 2025-10-14 09:40:06.387 2 DEBUG oslo_concurrency.lockutils [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:40:06 np0005486808 nova_compute[259627]: 2025-10-14 09:40:06.387 2 DEBUG oslo_concurrency.lockutils [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:40:06 np0005486808 nova_compute[259627]: 2025-10-14 09:40:06.389 2 INFO nova.compute.manager [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Terminating instance#033[00m
Oct 14 05:40:06 np0005486808 nova_compute[259627]: 2025-10-14 09:40:06.391 2 DEBUG nova.compute.manager [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:40:06 np0005486808 kernel: tape8b3ac7d-3a (unregistering): left promiscuous mode
Oct 14 05:40:06 np0005486808 NetworkManager[44885]: <info>  [1760434806.4653] device (tape8b3ac7d-3a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:40:06 np0005486808 ovn_controller[152662]: 2025-10-14T09:40:06Z|01642|binding|INFO|Releasing lport e8b3ac7d-3adf-47a0-8a80-7a5692a145de from this chassis (sb_readonly=0)
Oct 14 05:40:06 np0005486808 ovn_controller[152662]: 2025-10-14T09:40:06Z|01643|binding|INFO|Setting lport e8b3ac7d-3adf-47a0-8a80-7a5692a145de down in Southbound
Oct 14 05:40:06 np0005486808 ovn_controller[152662]: 2025-10-14T09:40:06Z|01644|binding|INFO|Removing iface tape8b3ac7d-3a ovn-installed in OVS
Oct 14 05:40:06 np0005486808 nova_compute[259627]: 2025-10-14 09:40:06.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:40:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:06.496 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:82:6b 10.100.0.10 2001:db8:0:1:f816:3eff:fee7:826b 2001:db8::f816:3eff:fee7:826b'], port_security=['fa:16:3e:e7:82:6b 10.100.0.10 2001:db8:0:1:f816:3eff:fee7:826b 2001:db8::f816:3eff:fee7:826b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28 2001:db8:0:1:f816:3eff:fee7:826b/64 2001:db8::f816:3eff:fee7:826b/64', 'neutron:device_id': '3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14196b9b-0205-497b-9e98-32690613a533', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': '75fd0641-e399-4e8e-872e-ceab82cd0201', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=87d75e0f-ba6b-4079-b571-32cab0870048, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=e8b3ac7d-3adf-47a0-8a80-7a5692a145de) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 05:40:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:06.498 162547 INFO neutron.agent.ovn.metadata.agent [-] Port e8b3ac7d-3adf-47a0-8a80-7a5692a145de in datapath 14196b9b-0205-497b-9e98-32690613a533 unbound from our chassis
Oct 14 05:40:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:06.500 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14196b9b-0205-497b-9e98-32690613a533
Oct 14 05:40:06 np0005486808 nova_compute[259627]: 2025-10-14 09:40:06.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:40:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:06.528 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[745e32d1-f2a5-4410-a4a9-461bf92207b3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:40:06 np0005486808 systemd[1]: machine-qemu\x2d182\x2dinstance\x2d00000095.scope: Deactivated successfully.
Oct 14 05:40:06 np0005486808 systemd[1]: machine-qemu\x2d182\x2dinstance\x2d00000095.scope: Consumed 14.087s CPU time.
Oct 14 05:40:06 np0005486808 systemd-machined[214636]: Machine qemu-182-instance-00000095 terminated.
Oct 14 05:40:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:06.575 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f30aabde-55b0-4e65-9442-75188352d026]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:40:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:06.581 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[19700c34-ef8d-4570-9938-6ea2cb46f4c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:40:06 np0005486808 nova_compute[259627]: 2025-10-14 09:40:06.635 2 INFO nova.virt.libvirt.driver [-] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Instance destroyed successfully.
Oct 14 05:40:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:06.636 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[09089c08-f00d-4e6b-8bcb-86e17949eb48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:40:06 np0005486808 nova_compute[259627]: 2025-10-14 09:40:06.637 2 DEBUG nova.objects.instance [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'resources' on Instance uuid 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:40:06 np0005486808 nova_compute[259627]: 2025-10-14 09:40:06.660 2 DEBUG nova.virt.libvirt.vif [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:39:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-311340838',display_name='tempest-TestGettingAddress-server-311340838',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-311340838',id=149,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAMDf2XavD9/IfCp+ndZfV3AdZCUIhTR2npOL1XNIlUaFyzTUoWv4qmqvpPpwdd1PNzx9cK/19FXxs4psOVoqPZEBFGbKyJdIety1giFsXf8LSdMC27Wpk9aHw8ObIpVuA==',key_name='tempest-TestGettingAddress-1135497915',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:39:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-0evlaun0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:39:41Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "address": "fa:16:3e:e7:82:6b", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee7:826b", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8b3ac7d-3a", "ovs_interfaceid": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 05:40:06 np0005486808 nova_compute[259627]: 2025-10-14 09:40:06.661 2 DEBUG nova.network.os_vif_util [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "address": "fa:16:3e:e7:82:6b", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8b3ac7d-3a", "ovs_interfaceid": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 05:40:06 np0005486808 nova_compute[259627]: 2025-10-14 09:40:06.664 2 DEBUG nova.network.os_vif_util [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e7:82:6b,bridge_name='br-int',has_traffic_filtering=True,id=e8b3ac7d-3adf-47a0-8a80-7a5692a145de,network=Network(14196b9b-0205-497b-9e98-32690613a533),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8b3ac7d-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 05:40:06 np0005486808 nova_compute[259627]: 2025-10-14 09:40:06.665 2 DEBUG os_vif [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e7:82:6b,bridge_name='br-int',has_traffic_filtering=True,id=e8b3ac7d-3adf-47a0-8a80-7a5692a145de,network=Network(14196b9b-0205-497b-9e98-32690613a533),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8b3ac7d-3a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 05:40:06 np0005486808 nova_compute[259627]: 2025-10-14 09:40:06.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:40:06 np0005486808 nova_compute[259627]: 2025-10-14 09:40:06.668 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape8b3ac7d-3a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:40:06 np0005486808 nova_compute[259627]: 2025-10-14 09:40:06.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:40:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2609: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 339 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 14 05:40:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:06.675 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[faf663bc-0fd0-4c8b-8932-d15c369e7ae3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14196b9b-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:47:b8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 42, 'tx_packets': 7, 'rx_bytes': 3628, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 42, 'tx_packets': 7, 'rx_bytes': 3628, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 463], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 852727, 'reachable_time': 21450, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2928, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2928, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 416333, 'error': None, 'target': 'ovnmeta-14196b9b-0205-497b-9e98-32690613a533', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:40:06 np0005486808 nova_compute[259627]: 2025-10-14 09:40:06.676 2 DEBUG nova.compute.manager [req-d90c0c59-2feb-43c5-93b9-7ea22f9d684f req-59eb3584-6b1f-4cc9-af43-7219575fc6cd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Received event network-vif-unplugged-e8b3ac7d-3adf-47a0-8a80-7a5692a145de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:40:06 np0005486808 nova_compute[259627]: 2025-10-14 09:40:06.676 2 DEBUG oslo_concurrency.lockutils [req-d90c0c59-2feb-43c5-93b9-7ea22f9d684f req-59eb3584-6b1f-4cc9-af43-7219575fc6cd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:40:06 np0005486808 nova_compute[259627]: 2025-10-14 09:40:06.677 2 DEBUG oslo_concurrency.lockutils [req-d90c0c59-2feb-43c5-93b9-7ea22f9d684f req-59eb3584-6b1f-4cc9-af43-7219575fc6cd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:40:06 np0005486808 nova_compute[259627]: 2025-10-14 09:40:06.677 2 DEBUG oslo_concurrency.lockutils [req-d90c0c59-2feb-43c5-93b9-7ea22f9d684f req-59eb3584-6b1f-4cc9-af43-7219575fc6cd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:40:06 np0005486808 nova_compute[259627]: 2025-10-14 09:40:06.678 2 DEBUG nova.compute.manager [req-d90c0c59-2feb-43c5-93b9-7ea22f9d684f req-59eb3584-6b1f-4cc9-af43-7219575fc6cd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] No waiting events found dispatching network-vif-unplugged-e8b3ac7d-3adf-47a0-8a80-7a5692a145de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 14 05:40:06 np0005486808 nova_compute[259627]: 2025-10-14 09:40:06.678 2 DEBUG nova.compute.manager [req-d90c0c59-2feb-43c5-93b9-7ea22f9d684f req-59eb3584-6b1f-4cc9-af43-7219575fc6cd 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Received event network-vif-unplugged-e8b3ac7d-3adf-47a0-8a80-7a5692a145de for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 14 05:40:06 np0005486808 nova_compute[259627]: 2025-10-14 09:40:06.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:40:06 np0005486808 nova_compute[259627]: 2025-10-14 09:40:06.687 2 INFO os_vif [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e7:82:6b,bridge_name='br-int',has_traffic_filtering=True,id=e8b3ac7d-3adf-47a0-8a80-7a5692a145de,network=Network(14196b9b-0205-497b-9e98-32690613a533),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8b3ac7d-3a')
Oct 14 05:40:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:06.697 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f2ab3d9d-d59f-4daf-9554-d0391beedef0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap14196b9b-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 852742, 'tstamp': 852742}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 416335, 'error': None, 'target': 'ovnmeta-14196b9b-0205-497b-9e98-32690613a533', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap14196b9b-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 852745, 'tstamp': 852745}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 416335, 'error': None, 'target': 'ovnmeta-14196b9b-0205-497b-9e98-32690613a533', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:40:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:06.698 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14196b9b-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:40:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:06.702 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14196b9b-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:40:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:06.702 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 05:40:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:06.703 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14196b9b-00, col_values=(('external_ids', {'iface-id': '500514c8-ea10-48c5-93d8-b1c6948e60b0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:40:06 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:06.703 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 05:40:06 np0005486808 nova_compute[259627]: 2025-10-14 09:40:06.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:40:06 np0005486808 nova_compute[259627]: 2025-10-14 09:40:06.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:40:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:07.056 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:40:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:07.057 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:40:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:07.057 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:40:07 np0005486808 nova_compute[259627]: 2025-10-14 09:40:07.156 2 INFO nova.virt.libvirt.driver [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Deleting instance files /var/lib/nova/instances/3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed_del
Oct 14 05:40:07 np0005486808 nova_compute[259627]: 2025-10-14 09:40:07.158 2 INFO nova.virt.libvirt.driver [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Deletion of /var/lib/nova/instances/3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed_del complete
Oct 14 05:40:07 np0005486808 nova_compute[259627]: 2025-10-14 09:40:07.241 2 INFO nova.compute.manager [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Took 0.85 seconds to destroy the instance on the hypervisor.
Oct 14 05:40:07 np0005486808 nova_compute[259627]: 2025-10-14 09:40:07.242 2 DEBUG oslo.service.loopingcall [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 14 05:40:07 np0005486808 nova_compute[259627]: 2025-10-14 09:40:07.242 2 DEBUG nova.compute.manager [-] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 14 05:40:07 np0005486808 nova_compute[259627]: 2025-10-14 09:40:07.243 2 DEBUG nova.network.neutron [-] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 14 05:40:08 np0005486808 nova_compute[259627]: 2025-10-14 09:40:08.103 2 DEBUG nova.network.neutron [-] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 05:40:08 np0005486808 nova_compute[259627]: 2025-10-14 09:40:08.129 2 INFO nova.compute.manager [-] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Took 0.89 seconds to deallocate network for instance.
Oct 14 05:40:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:40:08 np0005486808 nova_compute[259627]: 2025-10-14 09:40:08.183 2 DEBUG oslo_concurrency.lockutils [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:40:08 np0005486808 nova_compute[259627]: 2025-10-14 09:40:08.184 2 DEBUG oslo_concurrency.lockutils [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:40:08 np0005486808 nova_compute[259627]: 2025-10-14 09:40:08.267 2 DEBUG oslo_concurrency.processutils [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:40:08 np0005486808 nova_compute[259627]: 2025-10-14 09:40:08.400 2 DEBUG nova.compute.manager [req-796d3c0c-964a-4e4b-98d7-45085a842c5a req-3f978797-e996-46e4-a721-9f60f52d4e7b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Received event network-vif-deleted-e8b3ac7d-3adf-47a0-8a80-7a5692a145de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:40:08 np0005486808 nova_compute[259627]: 2025-10-14 09:40:08.581 2 DEBUG nova.network.neutron [req-6f93fd00-58b5-40a5-87a0-0569a483e912 req-ce344d58-18e1-4636-b834-ba4e6009f76b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Updated VIF entry in instance network info cache for port e8b3ac7d-3adf-47a0-8a80-7a5692a145de. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 05:40:08 np0005486808 nova_compute[259627]: 2025-10-14 09:40:08.582 2 DEBUG nova.network.neutron [req-6f93fd00-58b5-40a5-87a0-0569a483e912 req-ce344d58-18e1-4636-b834-ba4e6009f76b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Updating instance_info_cache with network_info: [{"id": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "address": "fa:16:3e:e7:82:6b", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee7:826b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8b3ac7d-3a", "ovs_interfaceid": "e8b3ac7d-3adf-47a0-8a80-7a5692a145de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 05:40:08 np0005486808 nova_compute[259627]: 2025-10-14 09:40:08.601 2 DEBUG oslo_concurrency.lockutils [req-6f93fd00-58b5-40a5-87a0-0569a483e912 req-ce344d58-18e1-4636-b834-ba4e6009f76b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 05:40:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2610: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 12 KiB/s wr, 1 op/s
Oct 14 05:40:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:40:08 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3959017969' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:40:08 np0005486808 nova_compute[259627]: 2025-10-14 09:40:08.756 2 DEBUG oslo_concurrency.processutils [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:40:08 np0005486808 nova_compute[259627]: 2025-10-14 09:40:08.765 2 DEBUG nova.compute.provider_tree [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:40:08 np0005486808 nova_compute[259627]: 2025-10-14 09:40:08.774 2 DEBUG nova.compute.manager [req-3f75f09a-04b3-4033-af17-56291123f71c req-10d7848a-b8f1-4523-b08a-406b0abed1d9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Received event network-vif-plugged-e8b3ac7d-3adf-47a0-8a80-7a5692a145de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:40:08 np0005486808 nova_compute[259627]: 2025-10-14 09:40:08.775 2 DEBUG oslo_concurrency.lockutils [req-3f75f09a-04b3-4033-af17-56291123f71c req-10d7848a-b8f1-4523-b08a-406b0abed1d9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:40:08 np0005486808 nova_compute[259627]: 2025-10-14 09:40:08.775 2 DEBUG oslo_concurrency.lockutils [req-3f75f09a-04b3-4033-af17-56291123f71c req-10d7848a-b8f1-4523-b08a-406b0abed1d9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:40:08 np0005486808 nova_compute[259627]: 2025-10-14 09:40:08.776 2 DEBUG oslo_concurrency.lockutils [req-3f75f09a-04b3-4033-af17-56291123f71c req-10d7848a-b8f1-4523-b08a-406b0abed1d9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:40:08 np0005486808 nova_compute[259627]: 2025-10-14 09:40:08.776 2 DEBUG nova.compute.manager [req-3f75f09a-04b3-4033-af17-56291123f71c req-10d7848a-b8f1-4523-b08a-406b0abed1d9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] No waiting events found dispatching network-vif-plugged-e8b3ac7d-3adf-47a0-8a80-7a5692a145de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:40:08 np0005486808 nova_compute[259627]: 2025-10-14 09:40:08.777 2 WARNING nova.compute.manager [req-3f75f09a-04b3-4033-af17-56291123f71c req-10d7848a-b8f1-4523-b08a-406b0abed1d9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Received unexpected event network-vif-plugged-e8b3ac7d-3adf-47a0-8a80-7a5692a145de for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:40:08 np0005486808 nova_compute[259627]: 2025-10-14 09:40:08.785 2 DEBUG nova.scheduler.client.report [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:40:08 np0005486808 nova_compute[259627]: 2025-10-14 09:40:08.810 2 DEBUG oslo_concurrency.lockutils [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:40:08 np0005486808 nova_compute[259627]: 2025-10-14 09:40:08.836 2 INFO nova.scheduler.client.report [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Deleted allocations for instance 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed#033[00m
Oct 14 05:40:08 np0005486808 nova_compute[259627]: 2025-10-14 09:40:08.895 2 DEBUG oslo_concurrency.lockutils [None req-af0c13ca-259b-499d-89fe-eef471dc5038 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.509s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:40:10 np0005486808 nova_compute[259627]: 2025-10-14 09:40:10.330 2 DEBUG oslo_concurrency.lockutils [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:40:10 np0005486808 nova_compute[259627]: 2025-10-14 09:40:10.331 2 DEBUG oslo_concurrency.lockutils [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:40:10 np0005486808 nova_compute[259627]: 2025-10-14 09:40:10.332 2 DEBUG oslo_concurrency.lockutils [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:40:10 np0005486808 nova_compute[259627]: 2025-10-14 09:40:10.333 2 DEBUG oslo_concurrency.lockutils [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:40:10 np0005486808 nova_compute[259627]: 2025-10-14 09:40:10.333 2 DEBUG oslo_concurrency.lockutils [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:40:10 np0005486808 nova_compute[259627]: 2025-10-14 09:40:10.335 2 INFO nova.compute.manager [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Terminating instance#033[00m
Oct 14 05:40:10 np0005486808 nova_compute[259627]: 2025-10-14 09:40:10.338 2 DEBUG nova.compute.manager [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:40:10 np0005486808 kernel: tap30c28c87-45 (unregistering): left promiscuous mode
Oct 14 05:40:10 np0005486808 NetworkManager[44885]: <info>  [1760434810.3957] device (tap30c28c87-45): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:40:10 np0005486808 ovn_controller[152662]: 2025-10-14T09:40:10Z|01645|binding|INFO|Releasing lport 30c28c87-45b1-43e9-930b-c8ba5142286f from this chassis (sb_readonly=0)
Oct 14 05:40:10 np0005486808 ovn_controller[152662]: 2025-10-14T09:40:10Z|01646|binding|INFO|Setting lport 30c28c87-45b1-43e9-930b-c8ba5142286f down in Southbound
Oct 14 05:40:10 np0005486808 nova_compute[259627]: 2025-10-14 09:40:10.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:40:10 np0005486808 ovn_controller[152662]: 2025-10-14T09:40:10Z|01647|binding|INFO|Removing iface tap30c28c87-45 ovn-installed in OVS
Oct 14 05:40:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:10.409 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:ec:03 10.100.0.7 2001:db8:0:1:f816:3eff:feb7:ec03 2001:db8::f816:3eff:feb7:ec03'], port_security=['fa:16:3e:b7:ec:03 10.100.0.7 2001:db8:0:1:f816:3eff:feb7:ec03 2001:db8::f816:3eff:feb7:ec03'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28 2001:db8:0:1:f816:3eff:feb7:ec03/64 2001:db8::f816:3eff:feb7:ec03/64', 'neutron:device_id': 'e5b13156-71d2-4a9c-be63-1beebe1ca3fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14196b9b-0205-497b-9e98-32690613a533', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': '75fd0641-e399-4e8e-872e-ceab82cd0201', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=87d75e0f-ba6b-4079-b571-32cab0870048, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=30c28c87-45b1-43e9-930b-c8ba5142286f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:40:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:10.410 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 30c28c87-45b1-43e9-930b-c8ba5142286f in datapath 14196b9b-0205-497b-9e98-32690613a533 unbound from our chassis#033[00m
Oct 14 05:40:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:10.411 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 14196b9b-0205-497b-9e98-32690613a533, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:40:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:10.414 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[be4b49c0-e55f-4d22-9dae-430592bde651]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:40:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:10.415 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-14196b9b-0205-497b-9e98-32690613a533 namespace which is not needed anymore#033[00m
Oct 14 05:40:10 np0005486808 nova_compute[259627]: 2025-10-14 09:40:10.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:40:10 np0005486808 systemd[1]: machine-qemu\x2d181\x2dinstance\x2d00000094.scope: Deactivated successfully.
Oct 14 05:40:10 np0005486808 systemd[1]: machine-qemu\x2d181\x2dinstance\x2d00000094.scope: Consumed 14.863s CPU time.
Oct 14 05:40:10 np0005486808 systemd-machined[214636]: Machine qemu-181-instance-00000094 terminated.
Oct 14 05:40:10 np0005486808 neutron-haproxy-ovnmeta-14196b9b-0205-497b-9e98-32690613a533[415619]: [NOTICE]   (415623) : haproxy version is 2.8.14-c23fe91
Oct 14 05:40:10 np0005486808 neutron-haproxy-ovnmeta-14196b9b-0205-497b-9e98-32690613a533[415619]: [NOTICE]   (415623) : path to executable is /usr/sbin/haproxy
Oct 14 05:40:10 np0005486808 neutron-haproxy-ovnmeta-14196b9b-0205-497b-9e98-32690613a533[415619]: [WARNING]  (415623) : Exiting Master process...
Oct 14 05:40:10 np0005486808 neutron-haproxy-ovnmeta-14196b9b-0205-497b-9e98-32690613a533[415619]: [WARNING]  (415623) : Exiting Master process...
Oct 14 05:40:10 np0005486808 neutron-haproxy-ovnmeta-14196b9b-0205-497b-9e98-32690613a533[415619]: [ALERT]    (415623) : Current worker (415625) exited with code 143 (Terminated)
Oct 14 05:40:10 np0005486808 neutron-haproxy-ovnmeta-14196b9b-0205-497b-9e98-32690613a533[415619]: [WARNING]  (415623) : All workers exited. Exiting... (0)
Oct 14 05:40:10 np0005486808 systemd[1]: libpod-da359fbc7512193206641124b817c3569de5b909769d2dabd7e25947b9d07e1d.scope: Deactivated successfully.
Oct 14 05:40:10 np0005486808 podman[416499]: 2025-10-14 09:40:10.554840044 +0000 UTC m=+0.043470308 container died da359fbc7512193206641124b817c3569de5b909769d2dabd7e25947b9d07e1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-14196b9b-0205-497b-9e98-32690613a533, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 05:40:10 np0005486808 nova_compute[259627]: 2025-10-14 09:40:10.571 2 INFO nova.virt.libvirt.driver [-] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Instance destroyed successfully.#033[00m
Oct 14 05:40:10 np0005486808 nova_compute[259627]: 2025-10-14 09:40:10.572 2 DEBUG nova.objects.instance [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'resources' on Instance uuid e5b13156-71d2-4a9c-be63-1beebe1ca3fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:40:10 np0005486808 nova_compute[259627]: 2025-10-14 09:40:10.591 2 DEBUG nova.virt.libvirt.vif [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:38:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-911560012',display_name='tempest-TestGettingAddress-server-911560012',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-911560012',id=148,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAMDf2XavD9/IfCp+ndZfV3AdZCUIhTR2npOL1XNIlUaFyzTUoWv4qmqvpPpwdd1PNzx9cK/19FXxs4psOVoqPZEBFGbKyJdIety1giFsXf8LSdMC27Wpk9aHw8ObIpVuA==',key_name='tempest-TestGettingAddress-1135497915',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:39:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-friw18rr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:39:09Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=e5b13156-71d2-4a9c-be63-1beebe1ca3fb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "30c28c87-45b1-43e9-930b-c8ba5142286f", "address": "fa:16:3e:b7:ec:03", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:ec03", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30c28c87-45", "ovs_interfaceid": "30c28c87-45b1-43e9-930b-c8ba5142286f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:40:10 np0005486808 nova_compute[259627]: 2025-10-14 09:40:10.592 2 DEBUG nova.network.os_vif_util [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "30c28c87-45b1-43e9-930b-c8ba5142286f", "address": "fa:16:3e:b7:ec:03", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30c28c87-45", "ovs_interfaceid": "30c28c87-45b1-43e9-930b-c8ba5142286f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:40:10 np0005486808 nova_compute[259627]: 2025-10-14 09:40:10.593 2 DEBUG nova.network.os_vif_util [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b7:ec:03,bridge_name='br-int',has_traffic_filtering=True,id=30c28c87-45b1-43e9-930b-c8ba5142286f,network=Network(14196b9b-0205-497b-9e98-32690613a533),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30c28c87-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:40:10 np0005486808 nova_compute[259627]: 2025-10-14 09:40:10.593 2 DEBUG os_vif [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b7:ec:03,bridge_name='br-int',has_traffic_filtering=True,id=30c28c87-45b1-43e9-930b-c8ba5142286f,network=Network(14196b9b-0205-497b-9e98-32690613a533),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30c28c87-45') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:40:10 np0005486808 nova_compute[259627]: 2025-10-14 09:40:10.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:40:10 np0005486808 nova_compute[259627]: 2025-10-14 09:40:10.595 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap30c28c87-45, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:40:10 np0005486808 nova_compute[259627]: 2025-10-14 09:40:10.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:40:10 np0005486808 nova_compute[259627]: 2025-10-14 09:40:10.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:40:10 np0005486808 nova_compute[259627]: 2025-10-14 09:40:10.602 2 INFO os_vif [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b7:ec:03,bridge_name='br-int',has_traffic_filtering=True,id=30c28c87-45b1-43e9-930b-c8ba5142286f,network=Network(14196b9b-0205-497b-9e98-32690613a533),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30c28c87-45')#033[00m
Oct 14 05:40:10 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-da359fbc7512193206641124b817c3569de5b909769d2dabd7e25947b9d07e1d-userdata-shm.mount: Deactivated successfully.
Oct 14 05:40:10 np0005486808 systemd[1]: var-lib-containers-storage-overlay-579e65901c3d9167dcba11f2ead71e8144552c5f2bb2a88cb78fc270c6ac4ad0-merged.mount: Deactivated successfully.
Oct 14 05:40:10 np0005486808 podman[416499]: 2025-10-14 09:40:10.624704809 +0000 UTC m=+0.113335113 container cleanup da359fbc7512193206641124b817c3569de5b909769d2dabd7e25947b9d07e1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-14196b9b-0205-497b-9e98-32690613a533, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 14 05:40:10 np0005486808 systemd[1]: libpod-conmon-da359fbc7512193206641124b817c3569de5b909769d2dabd7e25947b9d07e1d.scope: Deactivated successfully.
Oct 14 05:40:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2611: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 24 KiB/s wr, 30 op/s
Oct 14 05:40:10 np0005486808 podman[416557]: 2025-10-14 09:40:10.705969333 +0000 UTC m=+0.050563432 container remove da359fbc7512193206641124b817c3569de5b909769d2dabd7e25947b9d07e1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-14196b9b-0205-497b-9e98-32690613a533, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 05:40:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:10.711 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d30987e7-47c9-48c6-96a1-7041ab4cc8fc]: (4, ('Tue Oct 14 09:40:10 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-14196b9b-0205-497b-9e98-32690613a533 (da359fbc7512193206641124b817c3569de5b909769d2dabd7e25947b9d07e1d)\nda359fbc7512193206641124b817c3569de5b909769d2dabd7e25947b9d07e1d\nTue Oct 14 09:40:10 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-14196b9b-0205-497b-9e98-32690613a533 (da359fbc7512193206641124b817c3569de5b909769d2dabd7e25947b9d07e1d)\nda359fbc7512193206641124b817c3569de5b909769d2dabd7e25947b9d07e1d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:40:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:10.713 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7a751600-79cb-47d7-93ca-bc295628500a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:40:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:10.714 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14196b9b-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:40:10 np0005486808 kernel: tap14196b9b-00: left promiscuous mode
Oct 14 05:40:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:10.721 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[601c47f9-9c53-415d-a68d-5c7f2da34d79]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:40:10 np0005486808 nova_compute[259627]: 2025-10-14 09:40:10.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:40:10 np0005486808 nova_compute[259627]: 2025-10-14 09:40:10.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:40:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:10.760 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[2e443b51-0c6c-4c04-a3ea-deff37747b98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:40:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:10.762 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b55dfa83-946b-4aad-a69d-82edd228bac7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:40:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:10.777 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5fad5b3b-c8ad-4304-ad6f-f1512bd8de74]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 852718, 'reachable_time': 43402, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 416596, 'error': None, 'target': 'ovnmeta-14196b9b-0205-497b-9e98-32690613a533', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:40:10 np0005486808 systemd[1]: run-netns-ovnmeta\x2d14196b9b\x2d0205\x2d497b\x2d9e98\x2d32690613a533.mount: Deactivated successfully.
Oct 14 05:40:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:10.781 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-14196b9b-0205-497b-9e98-32690613a533 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:40:10 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:10.781 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[b0308696-416e-46be-9346-001da9be3cf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:40:10 np0005486808 podman[416586]: 2025-10-14 09:40:10.827906745 +0000 UTC m=+0.075921804 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 14 05:40:10 np0005486808 nova_compute[259627]: 2025-10-14 09:40:10.882 2 DEBUG nova.compute.manager [req-2b5ed22a-9756-4cf3-944f-b818159ef1c3 req-4fc9da7d-9e43-4377-858d-91e1bae71279 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Received event network-changed-30c28c87-45b1-43e9-930b-c8ba5142286f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:40:10 np0005486808 nova_compute[259627]: 2025-10-14 09:40:10.882 2 DEBUG nova.compute.manager [req-2b5ed22a-9756-4cf3-944f-b818159ef1c3 req-4fc9da7d-9e43-4377-858d-91e1bae71279 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Refreshing instance network info cache due to event network-changed-30c28c87-45b1-43e9-930b-c8ba5142286f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:40:10 np0005486808 nova_compute[259627]: 2025-10-14 09:40:10.883 2 DEBUG oslo_concurrency.lockutils [req-2b5ed22a-9756-4cf3-944f-b818159ef1c3 req-4fc9da7d-9e43-4377-858d-91e1bae71279 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-e5b13156-71d2-4a9c-be63-1beebe1ca3fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:40:10 np0005486808 nova_compute[259627]: 2025-10-14 09:40:10.883 2 DEBUG oslo_concurrency.lockutils [req-2b5ed22a-9756-4cf3-944f-b818159ef1c3 req-4fc9da7d-9e43-4377-858d-91e1bae71279 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-e5b13156-71d2-4a9c-be63-1beebe1ca3fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:40:10 np0005486808 nova_compute[259627]: 2025-10-14 09:40:10.883 2 DEBUG nova.network.neutron [req-2b5ed22a-9756-4cf3-944f-b818159ef1c3 req-4fc9da7d-9e43-4377-858d-91e1bae71279 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Refreshing network info cache for port 30c28c87-45b1-43e9-930b-c8ba5142286f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:40:10 np0005486808 podman[416604]: 2025-10-14 09:40:10.895792851 +0000 UTC m=+0.091015764 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2)
Oct 14 05:40:11 np0005486808 nova_compute[259627]: 2025-10-14 09:40:11.024 2 INFO nova.virt.libvirt.driver [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Deleting instance files /var/lib/nova/instances/e5b13156-71d2-4a9c-be63-1beebe1ca3fb_del#033[00m
Oct 14 05:40:11 np0005486808 nova_compute[259627]: 2025-10-14 09:40:11.025 2 INFO nova.virt.libvirt.driver [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Deletion of /var/lib/nova/instances/e5b13156-71d2-4a9c-be63-1beebe1ca3fb_del complete#033[00m
Oct 14 05:40:11 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:40:11 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:40:11 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:40:11 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:40:11 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:40:11 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:40:11 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev a04c671d-245d-4a5b-9222-7a97969a45e8 does not exist
Oct 14 05:40:11 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 6cd7c7e3-9580-49b9-b248-f7d2cb3191e5 does not exist
Oct 14 05:40:11 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 2f1c73e6-a12b-4e1f-9342-9bab84e2b7f4 does not exist
Oct 14 05:40:11 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:40:11 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:40:11 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:40:11 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:40:11 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:40:11 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:40:11 np0005486808 nova_compute[259627]: 2025-10-14 09:40:11.119 2 INFO nova.compute.manager [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:40:11 np0005486808 nova_compute[259627]: 2025-10-14 09:40:11.120 2 DEBUG oslo.service.loopingcall [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:40:11 np0005486808 nova_compute[259627]: 2025-10-14 09:40:11.120 2 DEBUG nova.compute.manager [-] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:40:11 np0005486808 nova_compute[259627]: 2025-10-14 09:40:11.120 2 DEBUG nova.network.neutron [-] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:40:11 np0005486808 nova_compute[259627]: 2025-10-14 09:40:11.179 2 DEBUG nova.compute.manager [req-54d4ef05-2301-4fec-8598-9b17dd9b3400 req-d096c4ff-d6ba-4205-b9bf-80210dbc4589 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Received event network-vif-unplugged-30c28c87-45b1-43e9-930b-c8ba5142286f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:40:11 np0005486808 nova_compute[259627]: 2025-10-14 09:40:11.180 2 DEBUG oslo_concurrency.lockutils [req-54d4ef05-2301-4fec-8598-9b17dd9b3400 req-d096c4ff-d6ba-4205-b9bf-80210dbc4589 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:40:11 np0005486808 nova_compute[259627]: 2025-10-14 09:40:11.180 2 DEBUG oslo_concurrency.lockutils [req-54d4ef05-2301-4fec-8598-9b17dd9b3400 req-d096c4ff-d6ba-4205-b9bf-80210dbc4589 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:40:11 np0005486808 nova_compute[259627]: 2025-10-14 09:40:11.180 2 DEBUG oslo_concurrency.lockutils [req-54d4ef05-2301-4fec-8598-9b17dd9b3400 req-d096c4ff-d6ba-4205-b9bf-80210dbc4589 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:40:11 np0005486808 nova_compute[259627]: 2025-10-14 09:40:11.181 2 DEBUG nova.compute.manager [req-54d4ef05-2301-4fec-8598-9b17dd9b3400 req-d096c4ff-d6ba-4205-b9bf-80210dbc4589 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] No waiting events found dispatching network-vif-unplugged-30c28c87-45b1-43e9-930b-c8ba5142286f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:40:11 np0005486808 nova_compute[259627]: 2025-10-14 09:40:11.181 2 DEBUG nova.compute.manager [req-54d4ef05-2301-4fec-8598-9b17dd9b3400 req-d096c4ff-d6ba-4205-b9bf-80210dbc4589 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Received event network-vif-unplugged-30c28c87-45b1-43e9-930b-c8ba5142286f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:40:11 np0005486808 podman[416791]: 2025-10-14 09:40:11.750728372 +0000 UTC m=+0.061745007 container create 4bd4b71d2af16cc55216b7ccb8ef156421ebbe95146350b6bef4d793701bbda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_northcutt, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:40:11 np0005486808 nova_compute[259627]: 2025-10-14 09:40:11.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:40:11 np0005486808 systemd[1]: Started libpod-conmon-4bd4b71d2af16cc55216b7ccb8ef156421ebbe95146350b6bef4d793701bbda4.scope.
Oct 14 05:40:11 np0005486808 podman[416791]: 2025-10-14 09:40:11.726494397 +0000 UTC m=+0.037511122 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:40:11 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:40:11 np0005486808 podman[416791]: 2025-10-14 09:40:11.855924763 +0000 UTC m=+0.166941498 container init 4bd4b71d2af16cc55216b7ccb8ef156421ebbe95146350b6bef4d793701bbda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_northcutt, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:40:11 np0005486808 podman[416791]: 2025-10-14 09:40:11.867426105 +0000 UTC m=+0.178442780 container start 4bd4b71d2af16cc55216b7ccb8ef156421ebbe95146350b6bef4d793701bbda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_northcutt, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 14 05:40:11 np0005486808 podman[416791]: 2025-10-14 09:40:11.87089562 +0000 UTC m=+0.181912275 container attach 4bd4b71d2af16cc55216b7ccb8ef156421ebbe95146350b6bef4d793701bbda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_northcutt, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 14 05:40:11 np0005486808 pedantic_northcutt[416807]: 167 167
Oct 14 05:40:11 np0005486808 systemd[1]: libpod-4bd4b71d2af16cc55216b7ccb8ef156421ebbe95146350b6bef4d793701bbda4.scope: Deactivated successfully.
Oct 14 05:40:11 np0005486808 podman[416791]: 2025-10-14 09:40:11.875176425 +0000 UTC m=+0.186193100 container died 4bd4b71d2af16cc55216b7ccb8ef156421ebbe95146350b6bef4d793701bbda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:40:11 np0005486808 systemd[1]: var-lib-containers-storage-overlay-e4d67928482b107d201b3311283b0c54bfa3e94dbc95743727efbfed27753f10-merged.mount: Deactivated successfully.
Oct 14 05:40:11 np0005486808 podman[416791]: 2025-10-14 09:40:11.913579928 +0000 UTC m=+0.224596573 container remove 4bd4b71d2af16cc55216b7ccb8ef156421ebbe95146350b6bef4d793701bbda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 14 05:40:11 np0005486808 systemd[1]: libpod-conmon-4bd4b71d2af16cc55216b7ccb8ef156421ebbe95146350b6bef4d793701bbda4.scope: Deactivated successfully.
Oct 14 05:40:11 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:40:11 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:40:11 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:40:12 np0005486808 podman[416831]: 2025-10-14 09:40:12.123441218 +0000 UTC m=+0.041740055 container create e076a058319560e3f36f8358635d1fb0953560d95e454a2bb6f9b467bf779270 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bohr, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:40:12 np0005486808 systemd[1]: Started libpod-conmon-e076a058319560e3f36f8358635d1fb0953560d95e454a2bb6f9b467bf779270.scope.
Oct 14 05:40:12 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:40:12 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62dd67c004fb6e9a7e3508c01fd2c142f1fe6aa3764b373387c84462d54f702e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:40:12 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62dd67c004fb6e9a7e3508c01fd2c142f1fe6aa3764b373387c84462d54f702e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:40:12 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62dd67c004fb6e9a7e3508c01fd2c142f1fe6aa3764b373387c84462d54f702e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:40:12 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62dd67c004fb6e9a7e3508c01fd2c142f1fe6aa3764b373387c84462d54f702e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:40:12 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62dd67c004fb6e9a7e3508c01fd2c142f1fe6aa3764b373387c84462d54f702e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:40:12 np0005486808 podman[416831]: 2025-10-14 09:40:12.104947304 +0000 UTC m=+0.023246141 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:40:12 np0005486808 podman[416831]: 2025-10-14 09:40:12.205759028 +0000 UTC m=+0.124057885 container init e076a058319560e3f36f8358635d1fb0953560d95e454a2bb6f9b467bf779270 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 14 05:40:12 np0005486808 podman[416831]: 2025-10-14 09:40:12.218028879 +0000 UTC m=+0.136327706 container start e076a058319560e3f36f8358635d1fb0953560d95e454a2bb6f9b467bf779270 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bohr, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:40:12 np0005486808 podman[416831]: 2025-10-14 09:40:12.221144426 +0000 UTC m=+0.139443253 container attach e076a058319560e3f36f8358635d1fb0953560d95e454a2bb6f9b467bf779270 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bohr, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:40:12 np0005486808 nova_compute[259627]: 2025-10-14 09:40:12.496 2 DEBUG nova.network.neutron [-] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:40:12 np0005486808 nova_compute[259627]: 2025-10-14 09:40:12.526 2 INFO nova.compute.manager [-] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Took 1.41 seconds to deallocate network for instance.#033[00m
Oct 14 05:40:12 np0005486808 nova_compute[259627]: 2025-10-14 09:40:12.569 2 DEBUG oslo_concurrency.lockutils [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:40:12 np0005486808 nova_compute[259627]: 2025-10-14 09:40:12.570 2 DEBUG oslo_concurrency.lockutils [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:40:12 np0005486808 nova_compute[259627]: 2025-10-14 09:40:12.606 2 DEBUG oslo_concurrency.processutils [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:40:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2612: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 11 KiB/s wr, 30 op/s
Oct 14 05:40:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:40:13 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2576567254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:40:13 np0005486808 nova_compute[259627]: 2025-10-14 09:40:13.115 2 DEBUG oslo_concurrency.processutils [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:40:13 np0005486808 nova_compute[259627]: 2025-10-14 09:40:13.125 2 DEBUG nova.compute.provider_tree [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:40:13 np0005486808 nova_compute[259627]: 2025-10-14 09:40:13.150 2 DEBUG nova.scheduler.client.report [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:40:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:40:13 np0005486808 nova_compute[259627]: 2025-10-14 09:40:13.185 2 DEBUG oslo_concurrency.lockutils [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:40:13 np0005486808 nova_compute[259627]: 2025-10-14 09:40:13.227 2 INFO nova.scheduler.client.report [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Deleted allocations for instance e5b13156-71d2-4a9c-be63-1beebe1ca3fb#033[00m
Oct 14 05:40:13 np0005486808 gifted_bohr[416847]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:40:13 np0005486808 gifted_bohr[416847]: --> relative data size: 1.0
Oct 14 05:40:13 np0005486808 gifted_bohr[416847]: --> All data devices are unavailable
Oct 14 05:40:13 np0005486808 nova_compute[259627]: 2025-10-14 09:40:13.289 2 DEBUG nova.compute.manager [req-c97f272b-dcf1-4fe7-a074-e4914bc68e39 req-59717119-53b1-47c2-a7b2-a611e0f5fb7c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Received event network-vif-plugged-30c28c87-45b1-43e9-930b-c8ba5142286f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:40:13 np0005486808 nova_compute[259627]: 2025-10-14 09:40:13.289 2 DEBUG oslo_concurrency.lockutils [req-c97f272b-dcf1-4fe7-a074-e4914bc68e39 req-59717119-53b1-47c2-a7b2-a611e0f5fb7c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:40:13 np0005486808 nova_compute[259627]: 2025-10-14 09:40:13.289 2 DEBUG oslo_concurrency.lockutils [req-c97f272b-dcf1-4fe7-a074-e4914bc68e39 req-59717119-53b1-47c2-a7b2-a611e0f5fb7c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:40:13 np0005486808 nova_compute[259627]: 2025-10-14 09:40:13.290 2 DEBUG oslo_concurrency.lockutils [req-c97f272b-dcf1-4fe7-a074-e4914bc68e39 req-59717119-53b1-47c2-a7b2-a611e0f5fb7c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:40:13 np0005486808 nova_compute[259627]: 2025-10-14 09:40:13.290 2 DEBUG nova.compute.manager [req-c97f272b-dcf1-4fe7-a074-e4914bc68e39 req-59717119-53b1-47c2-a7b2-a611e0f5fb7c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] No waiting events found dispatching network-vif-plugged-30c28c87-45b1-43e9-930b-c8ba5142286f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:40:13 np0005486808 nova_compute[259627]: 2025-10-14 09:40:13.290 2 WARNING nova.compute.manager [req-c97f272b-dcf1-4fe7-a074-e4914bc68e39 req-59717119-53b1-47c2-a7b2-a611e0f5fb7c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Received unexpected event network-vif-plugged-30c28c87-45b1-43e9-930b-c8ba5142286f for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:40:13 np0005486808 nova_compute[259627]: 2025-10-14 09:40:13.291 2 DEBUG nova.compute.manager [req-c97f272b-dcf1-4fe7-a074-e4914bc68e39 req-59717119-53b1-47c2-a7b2-a611e0f5fb7c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Received event network-vif-deleted-30c28c87-45b1-43e9-930b-c8ba5142286f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:40:13 np0005486808 systemd[1]: libpod-e076a058319560e3f36f8358635d1fb0953560d95e454a2bb6f9b467bf779270.scope: Deactivated successfully.
Oct 14 05:40:13 np0005486808 systemd[1]: libpod-e076a058319560e3f36f8358635d1fb0953560d95e454a2bb6f9b467bf779270.scope: Consumed 1.017s CPU time.
Oct 14 05:40:13 np0005486808 podman[416831]: 2025-10-14 09:40:13.300374629 +0000 UTC m=+1.218673496 container died e076a058319560e3f36f8358635d1fb0953560d95e454a2bb6f9b467bf779270 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 14 05:40:13 np0005486808 nova_compute[259627]: 2025-10-14 09:40:13.328 2 DEBUG oslo_concurrency.lockutils [None req-d7709769-d1cb-48eb-a5b0-39ff66bb8d61 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "e5b13156-71d2-4a9c-be63-1beebe1ca3fb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.997s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:40:13 np0005486808 systemd[1]: var-lib-containers-storage-overlay-62dd67c004fb6e9a7e3508c01fd2c142f1fe6aa3764b373387c84462d54f702e-merged.mount: Deactivated successfully.
Oct 14 05:40:13 np0005486808 podman[416831]: 2025-10-14 09:40:13.359794047 +0000 UTC m=+1.278092894 container remove e076a058319560e3f36f8358635d1fb0953560d95e454a2bb6f9b467bf779270 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bohr, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 14 05:40:13 np0005486808 systemd[1]: libpod-conmon-e076a058319560e3f36f8358635d1fb0953560d95e454a2bb6f9b467bf779270.scope: Deactivated successfully.
Oct 14 05:40:13 np0005486808 nova_compute[259627]: 2025-10-14 09:40:13.777 2 DEBUG nova.network.neutron [req-2b5ed22a-9756-4cf3-944f-b818159ef1c3 req-4fc9da7d-9e43-4377-858d-91e1bae71279 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Updated VIF entry in instance network info cache for port 30c28c87-45b1-43e9-930b-c8ba5142286f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:40:13 np0005486808 nova_compute[259627]: 2025-10-14 09:40:13.779 2 DEBUG nova.network.neutron [req-2b5ed22a-9756-4cf3-944f-b818159ef1c3 req-4fc9da7d-9e43-4377-858d-91e1bae71279 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Updating instance_info_cache with network_info: [{"id": "30c28c87-45b1-43e9-930b-c8ba5142286f", "address": "fa:16:3e:b7:ec:03", "network": {"id": "14196b9b-0205-497b-9e98-32690613a533", "bridge": "br-int", "label": "tempest-network-smoke--214298167", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb7:ec03", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30c28c87-45", "ovs_interfaceid": "30c28c87-45b1-43e9-930b-c8ba5142286f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:40:13 np0005486808 nova_compute[259627]: 2025-10-14 09:40:13.804 2 DEBUG oslo_concurrency.lockutils [req-2b5ed22a-9756-4cf3-944f-b818159ef1c3 req-4fc9da7d-9e43-4377-858d-91e1bae71279 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-e5b13156-71d2-4a9c-be63-1beebe1ca3fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:40:14 np0005486808 podman[417050]: 2025-10-14 09:40:14.174177292 +0000 UTC m=+0.067329803 container create 18b48e7f9c8ab29ed81b9f1b4cee4679dee35a135691fcca6c7cdecd3c0dbe1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_austin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:40:14 np0005486808 systemd[1]: Started libpod-conmon-18b48e7f9c8ab29ed81b9f1b4cee4679dee35a135691fcca6c7cdecd3c0dbe1f.scope.
Oct 14 05:40:14 np0005486808 podman[417050]: 2025-10-14 09:40:14.151187018 +0000 UTC m=+0.044339619 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:40:14 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:40:14 np0005486808 podman[417050]: 2025-10-14 09:40:14.258892201 +0000 UTC m=+0.152044772 container init 18b48e7f9c8ab29ed81b9f1b4cee4679dee35a135691fcca6c7cdecd3c0dbe1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_austin, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Oct 14 05:40:14 np0005486808 podman[417050]: 2025-10-14 09:40:14.269991224 +0000 UTC m=+0.163143735 container start 18b48e7f9c8ab29ed81b9f1b4cee4679dee35a135691fcca6c7cdecd3c0dbe1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_austin, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 14 05:40:14 np0005486808 podman[417050]: 2025-10-14 09:40:14.27269704 +0000 UTC m=+0.165849551 container attach 18b48e7f9c8ab29ed81b9f1b4cee4679dee35a135691fcca6c7cdecd3c0dbe1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_austin, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:40:14 np0005486808 hopeful_austin[417066]: 167 167
Oct 14 05:40:14 np0005486808 systemd[1]: libpod-18b48e7f9c8ab29ed81b9f1b4cee4679dee35a135691fcca6c7cdecd3c0dbe1f.scope: Deactivated successfully.
Oct 14 05:40:14 np0005486808 podman[417050]: 2025-10-14 09:40:14.277914908 +0000 UTC m=+0.171067429 container died 18b48e7f9c8ab29ed81b9f1b4cee4679dee35a135691fcca6c7cdecd3c0dbe1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_austin, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:40:14 np0005486808 systemd[1]: var-lib-containers-storage-overlay-16011f8eabe82c41632ae6374c24469019da00e6786da91fc2c6e72ccb7d8acc-merged.mount: Deactivated successfully.
Oct 14 05:40:14 np0005486808 podman[417050]: 2025-10-14 09:40:14.321063117 +0000 UTC m=+0.214215638 container remove 18b48e7f9c8ab29ed81b9f1b4cee4679dee35a135691fcca6c7cdecd3c0dbe1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_austin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 14 05:40:14 np0005486808 systemd[1]: libpod-conmon-18b48e7f9c8ab29ed81b9f1b4cee4679dee35a135691fcca6c7cdecd3c0dbe1f.scope: Deactivated successfully.
Oct 14 05:40:14 np0005486808 podman[417091]: 2025-10-14 09:40:14.531621934 +0000 UTC m=+0.061201763 container create d57cd8fa4142e603498092a561405465ddd635736f210ec60c456a7668f1ba3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 14 05:40:14 np0005486808 systemd[1]: Started libpod-conmon-d57cd8fa4142e603498092a561405465ddd635736f210ec60c456a7668f1ba3b.scope.
Oct 14 05:40:14 np0005486808 podman[417091]: 2025-10-14 09:40:14.511398028 +0000 UTC m=+0.040977867 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:40:14 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:40:14 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b69f33a28ee889913e815240025a046ee2ddff697a3015c79de43c95fbd01b10/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:40:14 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b69f33a28ee889913e815240025a046ee2ddff697a3015c79de43c95fbd01b10/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:40:14 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b69f33a28ee889913e815240025a046ee2ddff697a3015c79de43c95fbd01b10/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:40:14 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b69f33a28ee889913e815240025a046ee2ddff697a3015c79de43c95fbd01b10/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:40:14 np0005486808 podman[417091]: 2025-10-14 09:40:14.635575175 +0000 UTC m=+0.165155084 container init d57cd8fa4142e603498092a561405465ddd635736f210ec60c456a7668f1ba3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_nash, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 14 05:40:14 np0005486808 podman[417091]: 2025-10-14 09:40:14.646539824 +0000 UTC m=+0.176119643 container start d57cd8fa4142e603498092a561405465ddd635736f210ec60c456a7668f1ba3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_nash, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:40:14 np0005486808 podman[417091]: 2025-10-14 09:40:14.650097831 +0000 UTC m=+0.179677730 container attach d57cd8fa4142e603498092a561405465ddd635736f210ec60c456a7668f1ba3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Oct 14 05:40:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2613: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 11 KiB/s wr, 30 op/s
Oct 14 05:40:15 np0005486808 epic_nash[417108]: {
Oct 14 05:40:15 np0005486808 epic_nash[417108]:    "0": [
Oct 14 05:40:15 np0005486808 epic_nash[417108]:        {
Oct 14 05:40:15 np0005486808 epic_nash[417108]:            "devices": [
Oct 14 05:40:15 np0005486808 epic_nash[417108]:                "/dev/loop3"
Oct 14 05:40:15 np0005486808 epic_nash[417108]:            ],
Oct 14 05:40:15 np0005486808 epic_nash[417108]:            "lv_name": "ceph_lv0",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:            "lv_size": "21470642176",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:            "name": "ceph_lv0",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:            "tags": {
Oct 14 05:40:15 np0005486808 epic_nash[417108]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:                "ceph.cluster_name": "ceph",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:                "ceph.crush_device_class": "",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:                "ceph.encrypted": "0",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:                "ceph.osd_id": "0",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:                "ceph.type": "block",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:                "ceph.vdo": "0"
Oct 14 05:40:15 np0005486808 epic_nash[417108]:            },
Oct 14 05:40:15 np0005486808 epic_nash[417108]:            "type": "block",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:            "vg_name": "ceph_vg0"
Oct 14 05:40:15 np0005486808 epic_nash[417108]:        }
Oct 14 05:40:15 np0005486808 epic_nash[417108]:    ],
Oct 14 05:40:15 np0005486808 epic_nash[417108]:    "1": [
Oct 14 05:40:15 np0005486808 epic_nash[417108]:        {
Oct 14 05:40:15 np0005486808 epic_nash[417108]:            "devices": [
Oct 14 05:40:15 np0005486808 epic_nash[417108]:                "/dev/loop4"
Oct 14 05:40:15 np0005486808 epic_nash[417108]:            ],
Oct 14 05:40:15 np0005486808 epic_nash[417108]:            "lv_name": "ceph_lv1",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:            "lv_size": "21470642176",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:            "name": "ceph_lv1",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:            "tags": {
Oct 14 05:40:15 np0005486808 epic_nash[417108]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:                "ceph.cluster_name": "ceph",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:                "ceph.crush_device_class": "",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:                "ceph.encrypted": "0",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:                "ceph.osd_id": "1",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:                "ceph.type": "block",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:                "ceph.vdo": "0"
Oct 14 05:40:15 np0005486808 epic_nash[417108]:            },
Oct 14 05:40:15 np0005486808 epic_nash[417108]:            "type": "block",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:            "vg_name": "ceph_vg1"
Oct 14 05:40:15 np0005486808 epic_nash[417108]:        }
Oct 14 05:40:15 np0005486808 epic_nash[417108]:    ],
Oct 14 05:40:15 np0005486808 epic_nash[417108]:    "2": [
Oct 14 05:40:15 np0005486808 epic_nash[417108]:        {
Oct 14 05:40:15 np0005486808 epic_nash[417108]:            "devices": [
Oct 14 05:40:15 np0005486808 epic_nash[417108]:                "/dev/loop5"
Oct 14 05:40:15 np0005486808 epic_nash[417108]:            ],
Oct 14 05:40:15 np0005486808 epic_nash[417108]:            "lv_name": "ceph_lv2",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:            "lv_size": "21470642176",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:            "name": "ceph_lv2",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:            "tags": {
Oct 14 05:40:15 np0005486808 epic_nash[417108]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:                "ceph.cluster_name": "ceph",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:                "ceph.crush_device_class": "",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:                "ceph.encrypted": "0",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:                "ceph.osd_id": "2",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:                "ceph.type": "block",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:                "ceph.vdo": "0"
Oct 14 05:40:15 np0005486808 epic_nash[417108]:            },
Oct 14 05:40:15 np0005486808 epic_nash[417108]:            "type": "block",
Oct 14 05:40:15 np0005486808 epic_nash[417108]:            "vg_name": "ceph_vg2"
Oct 14 05:40:15 np0005486808 epic_nash[417108]:        }
Oct 14 05:40:15 np0005486808 epic_nash[417108]:    ]
Oct 14 05:40:15 np0005486808 epic_nash[417108]: }
Oct 14 05:40:15 np0005486808 systemd[1]: libpod-d57cd8fa4142e603498092a561405465ddd635736f210ec60c456a7668f1ba3b.scope: Deactivated successfully.
Oct 14 05:40:15 np0005486808 podman[417091]: 2025-10-14 09:40:15.430993415 +0000 UTC m=+0.960573294 container died d57cd8fa4142e603498092a561405465ddd635736f210ec60c456a7668f1ba3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 14 05:40:15 np0005486808 systemd[1]: var-lib-containers-storage-overlay-b69f33a28ee889913e815240025a046ee2ddff697a3015c79de43c95fbd01b10-merged.mount: Deactivated successfully.
Oct 14 05:40:15 np0005486808 podman[417091]: 2025-10-14 09:40:15.512242909 +0000 UTC m=+1.041822758 container remove d57cd8fa4142e603498092a561405465ddd635736f210ec60c456a7668f1ba3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 14 05:40:15 np0005486808 systemd[1]: libpod-conmon-d57cd8fa4142e603498092a561405465ddd635736f210ec60c456a7668f1ba3b.scope: Deactivated successfully.
Oct 14 05:40:15 np0005486808 nova_compute[259627]: 2025-10-14 09:40:15.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:40:16 np0005486808 podman[417272]: 2025-10-14 09:40:16.313875641 +0000 UTC m=+0.055623366 container create d771296d53b636d7f9159d06d4981f612b4517261b1eca27ed521951316a5352 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 14 05:40:16 np0005486808 systemd[1]: Started libpod-conmon-d771296d53b636d7f9159d06d4981f612b4517261b1eca27ed521951316a5352.scope.
Oct 14 05:40:16 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:40:16 np0005486808 podman[417272]: 2025-10-14 09:40:16.292836754 +0000 UTC m=+0.034584459 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:40:16 np0005486808 podman[417272]: 2025-10-14 09:40:16.408161815 +0000 UTC m=+0.149909590 container init d771296d53b636d7f9159d06d4981f612b4517261b1eca27ed521951316a5352 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_golick, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 14 05:40:16 np0005486808 podman[417272]: 2025-10-14 09:40:16.418948999 +0000 UTC m=+0.160696684 container start d771296d53b636d7f9159d06d4981f612b4517261b1eca27ed521951316a5352 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_golick, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:40:16 np0005486808 laughing_golick[417288]: 167 167
Oct 14 05:40:16 np0005486808 systemd[1]: libpod-d771296d53b636d7f9159d06d4981f612b4517261b1eca27ed521951316a5352.scope: Deactivated successfully.
Oct 14 05:40:16 np0005486808 podman[417272]: 2025-10-14 09:40:16.42795403 +0000 UTC m=+0.169701715 container attach d771296d53b636d7f9159d06d4981f612b4517261b1eca27ed521951316a5352 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_golick, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 14 05:40:16 np0005486808 podman[417272]: 2025-10-14 09:40:16.428372521 +0000 UTC m=+0.170120206 container died d771296d53b636d7f9159d06d4981f612b4517261b1eca27ed521951316a5352 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 14 05:40:16 np0005486808 systemd[1]: var-lib-containers-storage-overlay-fa159873990fcc3ef8963c34bfba0ab1e7b1a3fc97cd2bc2c5979d5764f87dec-merged.mount: Deactivated successfully.
Oct 14 05:40:16 np0005486808 podman[417272]: 2025-10-14 09:40:16.492948635 +0000 UTC m=+0.234696400 container remove d771296d53b636d7f9159d06d4981f612b4517261b1eca27ed521951316a5352 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 14 05:40:16 np0005486808 systemd[1]: libpod-conmon-d771296d53b636d7f9159d06d4981f612b4517261b1eca27ed521951316a5352.scope: Deactivated successfully.
Oct 14 05:40:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2614: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 12 KiB/s wr, 58 op/s
Oct 14 05:40:16 np0005486808 podman[417314]: 2025-10-14 09:40:16.695421713 +0000 UTC m=+0.040678659 container create f45cf0dd2aa046178e337cb938df458ddbedc31621474a9b90b740916839f1a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_euler, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:40:16 np0005486808 systemd[1]: Started libpod-conmon-f45cf0dd2aa046178e337cb938df458ddbedc31621474a9b90b740916839f1a0.scope.
Oct 14 05:40:16 np0005486808 podman[417314]: 2025-10-14 09:40:16.67654942 +0000 UTC m=+0.021806396 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:40:16 np0005486808 nova_compute[259627]: 2025-10-14 09:40:16.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:40:16 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:40:16 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1f2f8ff61051b299f658afda6e43510f6162d4f6ac2c225212d0b274a043330/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:40:16 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1f2f8ff61051b299f658afda6e43510f6162d4f6ac2c225212d0b274a043330/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:40:16 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1f2f8ff61051b299f658afda6e43510f6162d4f6ac2c225212d0b274a043330/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:40:16 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1f2f8ff61051b299f658afda6e43510f6162d4f6ac2c225212d0b274a043330/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:40:16 np0005486808 podman[417314]: 2025-10-14 09:40:16.814058324 +0000 UTC m=+0.159315300 container init f45cf0dd2aa046178e337cb938df458ddbedc31621474a9b90b740916839f1a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_euler, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:40:16 np0005486808 podman[417314]: 2025-10-14 09:40:16.830269732 +0000 UTC m=+0.175526678 container start f45cf0dd2aa046178e337cb938df458ddbedc31621474a9b90b740916839f1a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 14 05:40:16 np0005486808 podman[417314]: 2025-10-14 09:40:16.833529612 +0000 UTC m=+0.178786598 container attach f45cf0dd2aa046178e337cb938df458ddbedc31621474a9b90b740916839f1a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_euler, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 14 05:40:17 np0005486808 practical_euler[417331]: {
Oct 14 05:40:17 np0005486808 practical_euler[417331]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:40:17 np0005486808 practical_euler[417331]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:40:17 np0005486808 practical_euler[417331]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:40:17 np0005486808 practical_euler[417331]:        "osd_id": 2,
Oct 14 05:40:17 np0005486808 practical_euler[417331]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:40:17 np0005486808 practical_euler[417331]:        "type": "bluestore"
Oct 14 05:40:17 np0005486808 practical_euler[417331]:    },
Oct 14 05:40:17 np0005486808 practical_euler[417331]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:40:17 np0005486808 practical_euler[417331]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:40:17 np0005486808 practical_euler[417331]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:40:17 np0005486808 practical_euler[417331]:        "osd_id": 1,
Oct 14 05:40:17 np0005486808 practical_euler[417331]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:40:17 np0005486808 practical_euler[417331]:        "type": "bluestore"
Oct 14 05:40:17 np0005486808 practical_euler[417331]:    },
Oct 14 05:40:17 np0005486808 practical_euler[417331]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:40:17 np0005486808 practical_euler[417331]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:40:17 np0005486808 practical_euler[417331]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:40:17 np0005486808 practical_euler[417331]:        "osd_id": 0,
Oct 14 05:40:17 np0005486808 practical_euler[417331]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:40:17 np0005486808 practical_euler[417331]:        "type": "bluestore"
Oct 14 05:40:17 np0005486808 practical_euler[417331]:    }
Oct 14 05:40:17 np0005486808 practical_euler[417331]: }
Oct 14 05:40:17 np0005486808 systemd[1]: libpod-f45cf0dd2aa046178e337cb938df458ddbedc31621474a9b90b740916839f1a0.scope: Deactivated successfully.
Oct 14 05:40:17 np0005486808 systemd[1]: libpod-f45cf0dd2aa046178e337cb938df458ddbedc31621474a9b90b740916839f1a0.scope: Consumed 1.067s CPU time.
Oct 14 05:40:17 np0005486808 podman[417314]: 2025-10-14 09:40:17.891318551 +0000 UTC m=+1.236575537 container died f45cf0dd2aa046178e337cb938df458ddbedc31621474a9b90b740916839f1a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_euler, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 14 05:40:17 np0005486808 systemd[1]: var-lib-containers-storage-overlay-b1f2f8ff61051b299f658afda6e43510f6162d4f6ac2c225212d0b274a043330-merged.mount: Deactivated successfully.
Oct 14 05:40:17 np0005486808 podman[417314]: 2025-10-14 09:40:17.972205755 +0000 UTC m=+1.317462741 container remove f45cf0dd2aa046178e337cb938df458ddbedc31621474a9b90b740916839f1a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_euler, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 14 05:40:17 np0005486808 systemd[1]: libpod-conmon-f45cf0dd2aa046178e337cb938df458ddbedc31621474a9b90b740916839f1a0.scope: Deactivated successfully.
Oct 14 05:40:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:40:18 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:40:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:40:18 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:40:18 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 8a11636b-eb9e-4633-9e4b-c797b8f2ac36 does not exist
Oct 14 05:40:18 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 6c0c4242-e338-43cb-8466-c455c0cd84c2 does not exist
Oct 14 05:40:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:40:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2615: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 12 KiB/s wr, 57 op/s
Oct 14 05:40:19 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:40:19 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:40:19 np0005486808 nova_compute[259627]: 2025-10-14 09:40:19.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:40:19 np0005486808 nova_compute[259627]: 2025-10-14 09:40:19.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:40:20 np0005486808 nova_compute[259627]: 2025-10-14 09:40:20.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:40:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2616: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 12 KiB/s wr, 57 op/s
Oct 14 05:40:21 np0005486808 nova_compute[259627]: 2025-10-14 09:40:21.633 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434806.6306548, 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:40:21 np0005486808 nova_compute[259627]: 2025-10-14 09:40:21.634 2 INFO nova.compute.manager [-] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:40:21 np0005486808 nova_compute[259627]: 2025-10-14 09:40:21.664 2 DEBUG nova.compute.manager [None req-9699c253-86b8-45e5-9a9b-1895ee345424 - - - - - -] [instance: 3eaf3a7f-e0d6-4277-9c9c-1a8f3bc9f6ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:40:21 np0005486808 nova_compute[259627]: 2025-10-14 09:40:21.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:40:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2617: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 05:40:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:40:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2618: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 05:40:25 np0005486808 nova_compute[259627]: 2025-10-14 09:40:25.569 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434810.568186, e5b13156-71d2-4a9c-be63-1beebe1ca3fb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:40:25 np0005486808 nova_compute[259627]: 2025-10-14 09:40:25.570 2 INFO nova.compute.manager [-] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:40:25 np0005486808 nova_compute[259627]: 2025-10-14 09:40:25.598 2 DEBUG nova.compute.manager [None req-abda6417-3e73-4745-b51c-f2d63956071a - - - - - -] [instance: e5b13156-71d2-4a9c-be63-1beebe1ca3fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:40:25 np0005486808 nova_compute[259627]: 2025-10-14 09:40:25.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:40:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2619: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 14 05:40:26 np0005486808 nova_compute[259627]: 2025-10-14 09:40:26.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:40:27 np0005486808 podman[417430]: 2025-10-14 09:40:27.696381744 +0000 UTC m=+0.097922524 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.license=GPLv2)
Oct 14 05:40:27 np0005486808 podman[417429]: 2025-10-14 09:40:27.702144585 +0000 UTC m=+0.105432128 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 14 05:40:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:40:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2620: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:40:28 np0005486808 nova_compute[259627]: 2025-10-14 09:40:28.974 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:40:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2621: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:40:30 np0005486808 nova_compute[259627]: 2025-10-14 09:40:30.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:40:31 np0005486808 nova_compute[259627]: 2025-10-14 09:40:31.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:40:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2622: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:40:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:40:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:40:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:40:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:40:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:40:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:40:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:40:32
Oct 14 05:40:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:40:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:40:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'backups', 'images', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes']
Oct 14 05:40:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:40:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:40:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:40:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:40:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:40:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:40:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:40:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:40:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:40:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:40:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:40:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:40:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2623: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:40:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:35.718 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:40:35 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:35.719 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 14 05:40:35 np0005486808 nova_compute[259627]: 2025-10-14 09:40:35.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:40:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:36.147 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:1e:41 10.100.0.2 2001:db8::f816:3eff:fe88:1e41'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe88:1e41/64', 'neutron:device_id': 'ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42692f54-9e26-49af-aa3a-b5cb78b0a2ae, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=1bf2b41c-8c9f-45cd-aeb9-2459d1373791) old=Port_Binding(mac=['fa:16:3e:88:1e:41 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:40:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:36.149 162547 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 1bf2b41c-8c9f-45cd-aeb9-2459d1373791 in datapath d2d242a4-fdb7-41f9-9ee7-4e1b17687d68 updated#033[00m
Oct 14 05:40:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:36.151 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d2d242a4-fdb7-41f9-9ee7-4e1b17687d68, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:40:36 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:36.153 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f5fd419d-f017-4d39-8c29-d61012445fb5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:40:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2624: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:40:36 np0005486808 nova_compute[259627]: 2025-10-14 09:40:36.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:40:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:40:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2625: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:40:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:40.163 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:1e:41 10.100.0.2 2001:db8:0:1:f816:3eff:fe88:1e41 2001:db8::f816:3eff:fe88:1e41'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fe88:1e41/64 2001:db8::f816:3eff:fe88:1e41/64', 'neutron:device_id': 'ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42692f54-9e26-49af-aa3a-b5cb78b0a2ae, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=1bf2b41c-8c9f-45cd-aeb9-2459d1373791) old=Port_Binding(mac=['fa:16:3e:88:1e:41 10.100.0.2 2001:db8::f816:3eff:fe88:1e41'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe88:1e41/64', 'neutron:device_id': 'ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:40:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:40.165 162547 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 1bf2b41c-8c9f-45cd-aeb9-2459d1373791 in datapath d2d242a4-fdb7-41f9-9ee7-4e1b17687d68 updated#033[00m
Oct 14 05:40:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:40.167 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d2d242a4-fdb7-41f9-9ee7-4e1b17687d68, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:40:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:40.168 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[75df4e3c-26cf-4c47-aa65-3153c94adeb0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:40:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2626: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:40:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:40.721 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:40:40 np0005486808 nova_compute[259627]: 2025-10-14 09:40:40.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:40:41 np0005486808 podman[417468]: 2025-10-14 09:40:41.662676625 +0000 UTC m=+0.070386599 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 14 05:40:41 np0005486808 podman[417467]: 2025-10-14 09:40:41.707939845 +0000 UTC m=+0.126950747 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 14 05:40:41 np0005486808 nova_compute[259627]: 2025-10-14 09:40:41.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:40:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2627: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:40:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:40:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:40:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:40:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:40:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:40:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 05:40:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:40:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:40:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:40:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:40:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:40:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 05:40:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:40:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:40:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:40:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:40:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:40:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:40:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:40:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:40:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:40:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:40:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:40:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:40:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2628: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:40:45 np0005486808 nova_compute[259627]: 2025-10-14 09:40:45.114 2 DEBUG oslo_concurrency.lockutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "0239a56b-babd-4c44-b52b-ade80229be78" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:40:45 np0005486808 nova_compute[259627]: 2025-10-14 09:40:45.114 2 DEBUG oslo_concurrency.lockutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "0239a56b-babd-4c44-b52b-ade80229be78" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:40:45 np0005486808 nova_compute[259627]: 2025-10-14 09:40:45.142 2 DEBUG nova.compute.manager [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:40:45 np0005486808 nova_compute[259627]: 2025-10-14 09:40:45.218 2 DEBUG oslo_concurrency.lockutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:40:45 np0005486808 nova_compute[259627]: 2025-10-14 09:40:45.219 2 DEBUG oslo_concurrency.lockutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:40:45 np0005486808 nova_compute[259627]: 2025-10-14 09:40:45.226 2 DEBUG nova.virt.hardware [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:40:45 np0005486808 nova_compute[259627]: 2025-10-14 09:40:45.227 2 INFO nova.compute.claims [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:40:45 np0005486808 nova_compute[259627]: 2025-10-14 09:40:45.324 2 DEBUG oslo_concurrency.processutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:40:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:40:45 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1856832795' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:40:45 np0005486808 nova_compute[259627]: 2025-10-14 09:40:45.783 2 DEBUG oslo_concurrency.processutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:40:45 np0005486808 nova_compute[259627]: 2025-10-14 09:40:45.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:40:45 np0005486808 nova_compute[259627]: 2025-10-14 09:40:45.823 2 DEBUG nova.compute.provider_tree [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:40:45 np0005486808 nova_compute[259627]: 2025-10-14 09:40:45.842 2 DEBUG nova.scheduler.client.report [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:40:45 np0005486808 nova_compute[259627]: 2025-10-14 09:40:45.870 2 DEBUG oslo_concurrency.lockutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:40:45 np0005486808 nova_compute[259627]: 2025-10-14 09:40:45.871 2 DEBUG nova.compute.manager [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:40:45 np0005486808 nova_compute[259627]: 2025-10-14 09:40:45.925 2 DEBUG nova.compute.manager [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:40:45 np0005486808 nova_compute[259627]: 2025-10-14 09:40:45.926 2 DEBUG nova.network.neutron [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:40:45 np0005486808 nova_compute[259627]: 2025-10-14 09:40:45.960 2 INFO nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:40:45 np0005486808 nova_compute[259627]: 2025-10-14 09:40:45.988 2 DEBUG nova.compute.manager [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:40:46 np0005486808 nova_compute[259627]: 2025-10-14 09:40:46.079 2 DEBUG nova.compute.manager [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:40:46 np0005486808 nova_compute[259627]: 2025-10-14 09:40:46.081 2 DEBUG nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:40:46 np0005486808 nova_compute[259627]: 2025-10-14 09:40:46.082 2 INFO nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Creating image(s)#033[00m
Oct 14 05:40:46 np0005486808 nova_compute[259627]: 2025-10-14 09:40:46.120 2 DEBUG nova.storage.rbd_utils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 0239a56b-babd-4c44-b52b-ade80229be78_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:40:46 np0005486808 nova_compute[259627]: 2025-10-14 09:40:46.146 2 DEBUG nova.storage.rbd_utils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 0239a56b-babd-4c44-b52b-ade80229be78_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:40:46 np0005486808 nova_compute[259627]: 2025-10-14 09:40:46.169 2 DEBUG nova.storage.rbd_utils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 0239a56b-babd-4c44-b52b-ade80229be78_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:40:46 np0005486808 nova_compute[259627]: 2025-10-14 09:40:46.174 2 DEBUG oslo_concurrency.processutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:40:46 np0005486808 nova_compute[259627]: 2025-10-14 09:40:46.215 2 DEBUG nova.policy [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30cd4a7832e74a8ba07238937405c4ea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:40:46 np0005486808 nova_compute[259627]: 2025-10-14 09:40:46.259 2 DEBUG oslo_concurrency.processutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:40:46 np0005486808 nova_compute[259627]: 2025-10-14 09:40:46.260 2 DEBUG oslo_concurrency.lockutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:40:46 np0005486808 nova_compute[259627]: 2025-10-14 09:40:46.261 2 DEBUG oslo_concurrency.lockutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:40:46 np0005486808 nova_compute[259627]: 2025-10-14 09:40:46.261 2 DEBUG oslo_concurrency.lockutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:40:46 np0005486808 nova_compute[259627]: 2025-10-14 09:40:46.291 2 DEBUG nova.storage.rbd_utils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 0239a56b-babd-4c44-b52b-ade80229be78_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:40:46 np0005486808 nova_compute[259627]: 2025-10-14 09:40:46.295 2 DEBUG oslo_concurrency.processutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 0239a56b-babd-4c44-b52b-ade80229be78_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:40:46 np0005486808 nova_compute[259627]: 2025-10-14 09:40:46.609 2 DEBUG oslo_concurrency.processutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 0239a56b-babd-4c44-b52b-ade80229be78_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.314s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:40:46 np0005486808 nova_compute[259627]: 2025-10-14 09:40:46.669 2 DEBUG nova.storage.rbd_utils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] resizing rbd image 0239a56b-babd-4c44-b52b-ade80229be78_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:40:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2629: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:40:46 np0005486808 nova_compute[259627]: 2025-10-14 09:40:46.778 2 DEBUG nova.objects.instance [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'migration_context' on Instance uuid 0239a56b-babd-4c44-b52b-ade80229be78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:40:46 np0005486808 nova_compute[259627]: 2025-10-14 09:40:46.839 2 DEBUG nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:40:46 np0005486808 nova_compute[259627]: 2025-10-14 09:40:46.840 2 DEBUG nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Ensure instance console log exists: /var/lib/nova/instances/0239a56b-babd-4c44-b52b-ade80229be78/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:40:46 np0005486808 nova_compute[259627]: 2025-10-14 09:40:46.841 2 DEBUG oslo_concurrency.lockutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:40:46 np0005486808 nova_compute[259627]: 2025-10-14 09:40:46.841 2 DEBUG oslo_concurrency.lockutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:40:46 np0005486808 nova_compute[259627]: 2025-10-14 09:40:46.842 2 DEBUG oslo_concurrency.lockutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:40:46 np0005486808 nova_compute[259627]: 2025-10-14 09:40:46.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:40:47 np0005486808 nova_compute[259627]: 2025-10-14 09:40:47.832 2 DEBUG nova.network.neutron [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Successfully created port: 0b5d3762-db25-4cc9-90f3-79d3eb662378 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:40:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:40:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2630: 305 pgs: 305 active+clean; 41 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:40:48 np0005486808 nova_compute[259627]: 2025-10-14 09:40:48.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:40:49 np0005486808 nova_compute[259627]: 2025-10-14 09:40:49.315 2 DEBUG nova.network.neutron [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Successfully updated port: 0b5d3762-db25-4cc9-90f3-79d3eb662378 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:40:49 np0005486808 nova_compute[259627]: 2025-10-14 09:40:49.334 2 DEBUG oslo_concurrency.lockutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "refresh_cache-0239a56b-babd-4c44-b52b-ade80229be78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:40:49 np0005486808 nova_compute[259627]: 2025-10-14 09:40:49.334 2 DEBUG oslo_concurrency.lockutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquired lock "refresh_cache-0239a56b-babd-4c44-b52b-ade80229be78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:40:49 np0005486808 nova_compute[259627]: 2025-10-14 09:40:49.335 2 DEBUG nova.network.neutron [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:40:49 np0005486808 nova_compute[259627]: 2025-10-14 09:40:49.522 2 DEBUG nova.compute.manager [req-aa9d3cef-fc04-4e03-87a7-45fc9e11c9fa req-2a116f53-f356-4278-b6ea-66a1ff2cf753 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Received event network-changed-0b5d3762-db25-4cc9-90f3-79d3eb662378 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:40:49 np0005486808 nova_compute[259627]: 2025-10-14 09:40:49.523 2 DEBUG nova.compute.manager [req-aa9d3cef-fc04-4e03-87a7-45fc9e11c9fa req-2a116f53-f356-4278-b6ea-66a1ff2cf753 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Refreshing instance network info cache due to event network-changed-0b5d3762-db25-4cc9-90f3-79d3eb662378. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:40:49 np0005486808 nova_compute[259627]: 2025-10-14 09:40:49.523 2 DEBUG oslo_concurrency.lockutils [req-aa9d3cef-fc04-4e03-87a7-45fc9e11c9fa req-2a116f53-f356-4278-b6ea-66a1ff2cf753 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-0239a56b-babd-4c44-b52b-ade80229be78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:40:49 np0005486808 nova_compute[259627]: 2025-10-14 09:40:49.761 2 DEBUG nova.network.neutron [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:40:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2631: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:40:50 np0005486808 nova_compute[259627]: 2025-10-14 09:40:50.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:40:51 np0005486808 nova_compute[259627]: 2025-10-14 09:40:51.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:40:52 np0005486808 nova_compute[259627]: 2025-10-14 09:40:52.368 2 DEBUG nova.network.neutron [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Updating instance_info_cache with network_info: [{"id": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "address": "fa:16:3e:37:ee:6c", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b5d3762-db", "ovs_interfaceid": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:40:52 np0005486808 nova_compute[259627]: 2025-10-14 09:40:52.398 2 DEBUG oslo_concurrency.lockutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Releasing lock "refresh_cache-0239a56b-babd-4c44-b52b-ade80229be78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:40:52 np0005486808 nova_compute[259627]: 2025-10-14 09:40:52.398 2 DEBUG nova.compute.manager [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Instance network_info: |[{"id": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "address": "fa:16:3e:37:ee:6c", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b5d3762-db", "ovs_interfaceid": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:40:52 np0005486808 nova_compute[259627]: 2025-10-14 09:40:52.398 2 DEBUG oslo_concurrency.lockutils [req-aa9d3cef-fc04-4e03-87a7-45fc9e11c9fa req-2a116f53-f356-4278-b6ea-66a1ff2cf753 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-0239a56b-babd-4c44-b52b-ade80229be78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:40:52 np0005486808 nova_compute[259627]: 2025-10-14 09:40:52.399 2 DEBUG nova.network.neutron [req-aa9d3cef-fc04-4e03-87a7-45fc9e11c9fa req-2a116f53-f356-4278-b6ea-66a1ff2cf753 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Refreshing network info cache for port 0b5d3762-db25-4cc9-90f3-79d3eb662378 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:40:52 np0005486808 nova_compute[259627]: 2025-10-14 09:40:52.403 2 DEBUG nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Start _get_guest_xml network_info=[{"id": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "address": "fa:16:3e:37:ee:6c", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b5d3762-db", "ovs_interfaceid": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:40:52 np0005486808 nova_compute[259627]: 2025-10-14 09:40:52.409 2 WARNING nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:40:52 np0005486808 nova_compute[259627]: 2025-10-14 09:40:52.414 2 DEBUG nova.virt.libvirt.host [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:40:52 np0005486808 nova_compute[259627]: 2025-10-14 09:40:52.415 2 DEBUG nova.virt.libvirt.host [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:40:52 np0005486808 nova_compute[259627]: 2025-10-14 09:40:52.419 2 DEBUG nova.virt.libvirt.host [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:40:52 np0005486808 nova_compute[259627]: 2025-10-14 09:40:52.419 2 DEBUG nova.virt.libvirt.host [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:40:52 np0005486808 nova_compute[259627]: 2025-10-14 09:40:52.420 2 DEBUG nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:40:52 np0005486808 nova_compute[259627]: 2025-10-14 09:40:52.420 2 DEBUG nova.virt.hardware [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:40:52 np0005486808 nova_compute[259627]: 2025-10-14 09:40:52.421 2 DEBUG nova.virt.hardware [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:40:52 np0005486808 nova_compute[259627]: 2025-10-14 09:40:52.421 2 DEBUG nova.virt.hardware [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:40:52 np0005486808 nova_compute[259627]: 2025-10-14 09:40:52.421 2 DEBUG nova.virt.hardware [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:40:52 np0005486808 nova_compute[259627]: 2025-10-14 09:40:52.422 2 DEBUG nova.virt.hardware [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:40:52 np0005486808 nova_compute[259627]: 2025-10-14 09:40:52.422 2 DEBUG nova.virt.hardware [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:40:52 np0005486808 nova_compute[259627]: 2025-10-14 09:40:52.422 2 DEBUG nova.virt.hardware [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:40:52 np0005486808 nova_compute[259627]: 2025-10-14 09:40:52.422 2 DEBUG nova.virt.hardware [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:40:52 np0005486808 nova_compute[259627]: 2025-10-14 09:40:52.423 2 DEBUG nova.virt.hardware [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:40:52 np0005486808 nova_compute[259627]: 2025-10-14 09:40:52.423 2 DEBUG nova.virt.hardware [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:40:52 np0005486808 nova_compute[259627]: 2025-10-14 09:40:52.423 2 DEBUG nova.virt.hardware [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:40:52 np0005486808 nova_compute[259627]: 2025-10-14 09:40:52.426 2 DEBUG oslo_concurrency.processutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:40:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2632: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:40:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:40:52 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/397886194' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:40:52 np0005486808 nova_compute[259627]: 2025-10-14 09:40:52.911 2 DEBUG oslo_concurrency.processutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:40:52 np0005486808 nova_compute[259627]: 2025-10-14 09:40:52.934 2 DEBUG nova.storage.rbd_utils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 0239a56b-babd-4c44-b52b-ade80229be78_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:40:52 np0005486808 nova_compute[259627]: 2025-10-14 09:40:52.938 2 DEBUG oslo_concurrency.processutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:40:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:40:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:40:53 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/221020274' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:40:53 np0005486808 nova_compute[259627]: 2025-10-14 09:40:53.380 2 DEBUG oslo_concurrency.processutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:40:53 np0005486808 nova_compute[259627]: 2025-10-14 09:40:53.383 2 DEBUG nova.virt.libvirt.vif [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:40:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-35816390',display_name='tempest-TestGettingAddress-server-35816390',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-35816390',id=150,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMlxPr0cTZ8keQZTmg22ARWYe0xInByLkMD0cYdsQADwvB4qpdeYqfVEDfHnweHS+Sevb4jVHNf0dCnB4WMRrDqSKUzLaj+iYBC6weCzrTPlPHZbuNpXR0FfjzztQHSYpg==',key_name='tempest-TestGettingAddress-1794090516',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-r3oxjpue',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:40:46Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=0239a56b-babd-4c44-b52b-ade80229be78,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "address": "fa:16:3e:37:ee:6c", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b5d3762-db", "ovs_interfaceid": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:40:53 np0005486808 nova_compute[259627]: 2025-10-14 09:40:53.384 2 DEBUG nova.network.os_vif_util [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "address": "fa:16:3e:37:ee:6c", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b5d3762-db", "ovs_interfaceid": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:40:53 np0005486808 nova_compute[259627]: 2025-10-14 09:40:53.386 2 DEBUG nova.network.os_vif_util [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:ee:6c,bridge_name='br-int',has_traffic_filtering=True,id=0b5d3762-db25-4cc9-90f3-79d3eb662378,network=Network(d2d242a4-fdb7-41f9-9ee7-4e1b17687d68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b5d3762-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:40:53 np0005486808 nova_compute[259627]: 2025-10-14 09:40:53.388 2 DEBUG nova.objects.instance [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'pci_devices' on Instance uuid 0239a56b-babd-4c44-b52b-ade80229be78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:40:53 np0005486808 nova_compute[259627]: 2025-10-14 09:40:53.419 2 DEBUG nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:40:53 np0005486808 nova_compute[259627]:  <uuid>0239a56b-babd-4c44-b52b-ade80229be78</uuid>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:  <name>instance-00000096</name>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:40:53 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:      <nova:name>tempest-TestGettingAddress-server-35816390</nova:name>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:40:52</nova:creationTime>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:40:53 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:        <nova:user uuid="30cd4a7832e74a8ba07238937405c4ea">tempest-TestGettingAddress-262346303-project-member</nova:user>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:        <nova:project uuid="878ad3f42dd94bef8cc66ab3d67f03bf">tempest-TestGettingAddress-262346303</nova:project>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:        <nova:port uuid="0b5d3762-db25-4cc9-90f3-79d3eb662378">
Oct 14 05:40:53 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe37:ee6c" ipVersion="6"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe37:ee6c" ipVersion="6"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:      <entry name="serial">0239a56b-babd-4c44-b52b-ade80229be78</entry>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:      <entry name="uuid">0239a56b-babd-4c44-b52b-ade80229be78</entry>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:40:53 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/0239a56b-babd-4c44-b52b-ade80229be78_disk">
Oct 14 05:40:53 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:40:53 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:40:53 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/0239a56b-babd-4c44-b52b-ade80229be78_disk.config">
Oct 14 05:40:53 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:40:53 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:40:53 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:37:ee:6c"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:      <target dev="tap0b5d3762-db"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:40:53 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/0239a56b-babd-4c44-b52b-ade80229be78/console.log" append="off"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:40:53 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:40:53 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:40:53 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:40:53 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:40:53 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:40:53 np0005486808 nova_compute[259627]: 2025-10-14 09:40:53.421 2 DEBUG nova.compute.manager [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Preparing to wait for external event network-vif-plugged-0b5d3762-db25-4cc9-90f3-79d3eb662378 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:40:53 np0005486808 nova_compute[259627]: 2025-10-14 09:40:53.422 2 DEBUG oslo_concurrency.lockutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "0239a56b-babd-4c44-b52b-ade80229be78-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:40:53 np0005486808 nova_compute[259627]: 2025-10-14 09:40:53.423 2 DEBUG oslo_concurrency.lockutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "0239a56b-babd-4c44-b52b-ade80229be78-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:40:53 np0005486808 nova_compute[259627]: 2025-10-14 09:40:53.423 2 DEBUG oslo_concurrency.lockutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "0239a56b-babd-4c44-b52b-ade80229be78-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:40:53 np0005486808 nova_compute[259627]: 2025-10-14 09:40:53.424 2 DEBUG nova.virt.libvirt.vif [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:40:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-35816390',display_name='tempest-TestGettingAddress-server-35816390',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-35816390',id=150,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMlxPr0cTZ8keQZTmg22ARWYe0xInByLkMD0cYdsQADwvB4qpdeYqfVEDfHnweHS+Sevb4jVHNf0dCnB4WMRrDqSKUzLaj+iYBC6weCzrTPlPHZbuNpXR0FfjzztQHSYpg==',key_name='tempest-TestGettingAddress-1794090516',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-r3oxjpue',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:40:46Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=0239a56b-babd-4c44-b52b-ade80229be78,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "address": "fa:16:3e:37:ee:6c", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b5d3762-db", "ovs_interfaceid": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:40:53 np0005486808 nova_compute[259627]: 2025-10-14 09:40:53.425 2 DEBUG nova.network.os_vif_util [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "address": "fa:16:3e:37:ee:6c", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b5d3762-db", "ovs_interfaceid": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:40:53 np0005486808 nova_compute[259627]: 2025-10-14 09:40:53.426 2 DEBUG nova.network.os_vif_util [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:ee:6c,bridge_name='br-int',has_traffic_filtering=True,id=0b5d3762-db25-4cc9-90f3-79d3eb662378,network=Network(d2d242a4-fdb7-41f9-9ee7-4e1b17687d68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b5d3762-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:40:53 np0005486808 nova_compute[259627]: 2025-10-14 09:40:53.427 2 DEBUG os_vif [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:ee:6c,bridge_name='br-int',has_traffic_filtering=True,id=0b5d3762-db25-4cc9-90f3-79d3eb662378,network=Network(d2d242a4-fdb7-41f9-9ee7-4e1b17687d68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b5d3762-db') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:40:53 np0005486808 nova_compute[259627]: 2025-10-14 09:40:53.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:40:53 np0005486808 nova_compute[259627]: 2025-10-14 09:40:53.429 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:40:53 np0005486808 nova_compute[259627]: 2025-10-14 09:40:53.430 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:40:53 np0005486808 nova_compute[259627]: 2025-10-14 09:40:53.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:40:53 np0005486808 nova_compute[259627]: 2025-10-14 09:40:53.435 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0b5d3762-db, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:40:53 np0005486808 nova_compute[259627]: 2025-10-14 09:40:53.436 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0b5d3762-db, col_values=(('external_ids', {'iface-id': '0b5d3762-db25-4cc9-90f3-79d3eb662378', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:37:ee:6c', 'vm-uuid': '0239a56b-babd-4c44-b52b-ade80229be78'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:40:53 np0005486808 nova_compute[259627]: 2025-10-14 09:40:53.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:40:53 np0005486808 NetworkManager[44885]: <info>  [1760434853.4396] manager: (tap0b5d3762-db): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/674)
Oct 14 05:40:53 np0005486808 nova_compute[259627]: 2025-10-14 09:40:53.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:40:53 np0005486808 nova_compute[259627]: 2025-10-14 09:40:53.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:40:53 np0005486808 nova_compute[259627]: 2025-10-14 09:40:53.453 2 INFO os_vif [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:ee:6c,bridge_name='br-int',has_traffic_filtering=True,id=0b5d3762-db25-4cc9-90f3-79d3eb662378,network=Network(d2d242a4-fdb7-41f9-9ee7-4e1b17687d68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b5d3762-db')#033[00m
Oct 14 05:40:53 np0005486808 nova_compute[259627]: 2025-10-14 09:40:53.535 2 DEBUG nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:40:53 np0005486808 nova_compute[259627]: 2025-10-14 09:40:53.536 2 DEBUG nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:40:53 np0005486808 nova_compute[259627]: 2025-10-14 09:40:53.536 2 DEBUG nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:37:ee:6c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:40:53 np0005486808 nova_compute[259627]: 2025-10-14 09:40:53.537 2 INFO nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Using config drive#033[00m
Oct 14 05:40:53 np0005486808 nova_compute[259627]: 2025-10-14 09:40:53.574 2 DEBUG nova.storage.rbd_utils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 0239a56b-babd-4c44-b52b-ade80229be78_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:40:53 np0005486808 nova_compute[259627]: 2025-10-14 09:40:53.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:40:54 np0005486808 nova_compute[259627]: 2025-10-14 09:40:54.349 2 INFO nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Creating config drive at /var/lib/nova/instances/0239a56b-babd-4c44-b52b-ade80229be78/disk.config#033[00m
Oct 14 05:40:54 np0005486808 nova_compute[259627]: 2025-10-14 09:40:54.359 2 DEBUG oslo_concurrency.processutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0239a56b-babd-4c44-b52b-ade80229be78/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmk9fmaip execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:40:54 np0005486808 nova_compute[259627]: 2025-10-14 09:40:54.515 2 DEBUG oslo_concurrency.processutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0239a56b-babd-4c44-b52b-ade80229be78/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmk9fmaip" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:40:54 np0005486808 nova_compute[259627]: 2025-10-14 09:40:54.560 2 DEBUG nova.storage.rbd_utils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 0239a56b-babd-4c44-b52b-ade80229be78_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:40:54 np0005486808 nova_compute[259627]: 2025-10-14 09:40:54.567 2 DEBUG oslo_concurrency.processutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0239a56b-babd-4c44-b52b-ade80229be78/disk.config 0239a56b-babd-4c44-b52b-ade80229be78_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:40:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2633: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:40:54 np0005486808 nova_compute[259627]: 2025-10-14 09:40:54.789 2 DEBUG oslo_concurrency.processutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0239a56b-babd-4c44-b52b-ade80229be78/disk.config 0239a56b-babd-4c44-b52b-ade80229be78_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.222s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:40:54 np0005486808 nova_compute[259627]: 2025-10-14 09:40:54.791 2 INFO nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Deleting local config drive /var/lib/nova/instances/0239a56b-babd-4c44-b52b-ade80229be78/disk.config because it was imported into RBD.#033[00m
Oct 14 05:40:54 np0005486808 kernel: tap0b5d3762-db: entered promiscuous mode
Oct 14 05:40:54 np0005486808 NetworkManager[44885]: <info>  [1760434854.8743] manager: (tap0b5d3762-db): new Tun device (/org/freedesktop/NetworkManager/Devices/675)
Oct 14 05:40:54 np0005486808 nova_compute[259627]: 2025-10-14 09:40:54.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:40:54 np0005486808 ovn_controller[152662]: 2025-10-14T09:40:54Z|01648|binding|INFO|Claiming lport 0b5d3762-db25-4cc9-90f3-79d3eb662378 for this chassis.
Oct 14 05:40:54 np0005486808 ovn_controller[152662]: 2025-10-14T09:40:54Z|01649|binding|INFO|0b5d3762-db25-4cc9-90f3-79d3eb662378: Claiming fa:16:3e:37:ee:6c 10.100.0.12 2001:db8:0:1:f816:3eff:fe37:ee6c 2001:db8::f816:3eff:fe37:ee6c
Oct 14 05:40:54 np0005486808 nova_compute[259627]: 2025-10-14 09:40:54.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:40:54 np0005486808 nova_compute[259627]: 2025-10-14 09:40:54.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:40:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:54.903 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:ee:6c 10.100.0.12 2001:db8:0:1:f816:3eff:fe37:ee6c 2001:db8::f816:3eff:fe37:ee6c'], port_security=['fa:16:3e:37:ee:6c 10.100.0.12 2001:db8:0:1:f816:3eff:fe37:ee6c 2001:db8::f816:3eff:fe37:ee6c'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28 2001:db8:0:1:f816:3eff:fe37:ee6c/64 2001:db8::f816:3eff:fe37:ee6c/64', 'neutron:device_id': '0239a56b-babd-4c44-b52b-ade80229be78', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0efff049-0165-4ea1-b912-974883009802', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42692f54-9e26-49af-aa3a-b5cb78b0a2ae, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=0b5d3762-db25-4cc9-90f3-79d3eb662378) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:40:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:54.904 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 0b5d3762-db25-4cc9-90f3-79d3eb662378 in datapath d2d242a4-fdb7-41f9-9ee7-4e1b17687d68 bound to our chassis#033[00m
Oct 14 05:40:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:54.905 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d2d242a4-fdb7-41f9-9ee7-4e1b17687d68#033[00m
Oct 14 05:40:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:54.926 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[483525cd-d108-4a9a-b7af-20bb14e6d786]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:40:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:54.927 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd2d242a4-f1 in ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:40:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:54.931 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd2d242a4-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:40:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:54.931 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4e352cf0-d727-48f0-a634-e51e4dac3f00]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:40:54 np0005486808 systemd-machined[214636]: New machine qemu-183-instance-00000096.
Oct 14 05:40:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:54.932 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[3d12b999-d69c-43dc-a03d-8c3b779a572b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:40:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:54.946 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[4a242d89-d3f1-4257-8422-f5a4cf148204]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:40:54 np0005486808 systemd[1]: Started Virtual Machine qemu-183-instance-00000096.
Oct 14 05:40:54 np0005486808 nova_compute[259627]: 2025-10-14 09:40:54.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:40:54 np0005486808 nova_compute[259627]: 2025-10-14 09:40:54.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:40:54 np0005486808 ovn_controller[152662]: 2025-10-14T09:40:54Z|01650|binding|INFO|Setting lport 0b5d3762-db25-4cc9-90f3-79d3eb662378 ovn-installed in OVS
Oct 14 05:40:54 np0005486808 ovn_controller[152662]: 2025-10-14T09:40:54Z|01651|binding|INFO|Setting lport 0b5d3762-db25-4cc9-90f3-79d3eb662378 up in Southbound
Oct 14 05:40:54 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:54.982 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[49e637e5-a95a-4678-adb2-6e180ef1164c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:40:54 np0005486808 nova_compute[259627]: 2025-10-14 09:40:54.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:40:54 np0005486808 systemd-udevd[417840]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:40:55 np0005486808 NetworkManager[44885]: <info>  [1760434855.0031] device (tap0b5d3762-db): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:40:55 np0005486808 NetworkManager[44885]: <info>  [1760434855.0041] device (tap0b5d3762-db): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:55.025 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[41b92a0e-0c03-4349-9c8b-b413c963d18f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:40:55 np0005486808 systemd-udevd[417844]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:40:55 np0005486808 NetworkManager[44885]: <info>  [1760434855.0346] manager: (tapd2d242a4-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/676)
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:55.035 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a49ad672-dc3b-480d-86c7-cb091e6aacdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:55.078 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[7881f8f7-54c9-4378-b362-3508590b3708]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:55.083 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[cbde498f-fff6-43ba-a5e8-98e949f44e69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:40:55 np0005486808 NetworkManager[44885]: <info>  [1760434855.1168] device (tapd2d242a4-f0): carrier: link connected
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:55.123 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[681c9eb5-ef28-4b70-ab18-7211112345cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:55.147 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ba55fe5d-d36c-4c9e-967b-83bc53ac0b1e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd2d242a4-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:88:1e:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 468], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 863387, 'reachable_time': 34263, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 417870, 'error': None, 'target': 'ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:55.170 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[949c792d-4c76-4a9b-8bc0-7d08ee542f15]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe88:1e41'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 863387, 'tstamp': 863387}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 417871, 'error': None, 'target': 'ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:55.195 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[759c8ef6-333b-40fd-91a8-720aa41e4e7d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd2d242a4-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:88:1e:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 468], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 863387, 'reachable_time': 34263, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 417872, 'error': None, 'target': 'ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:55.243 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[bb104c3a-0b82-4055-9a37-7c101f3277ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:55.316 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[a83a13a2-ad9d-48e3-a40b-dce3c66d7ea8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:55.317 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2d242a4-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:55.317 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:55.317 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd2d242a4-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:40:55 np0005486808 kernel: tapd2d242a4-f0: entered promiscuous mode
Oct 14 05:40:55 np0005486808 nova_compute[259627]: 2025-10-14 09:40:55.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:40:55 np0005486808 NetworkManager[44885]: <info>  [1760434855.3202] manager: (tapd2d242a4-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/677)
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:55.321 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd2d242a4-f0, col_values=(('external_ids', {'iface-id': '1bf2b41c-8c9f-45cd-aeb9-2459d1373791'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:40:55 np0005486808 ovn_controller[152662]: 2025-10-14T09:40:55Z|01652|binding|INFO|Releasing lport 1bf2b41c-8c9f-45cd-aeb9-2459d1373791 from this chassis (sb_readonly=0)
Oct 14 05:40:55 np0005486808 nova_compute[259627]: 2025-10-14 09:40:55.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:40:55 np0005486808 nova_compute[259627]: 2025-10-14 09:40:55.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:55.341 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d2d242a4-fdb7-41f9-9ee7-4e1b17687d68.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d2d242a4-fdb7-41f9-9ee7-4e1b17687d68.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:55.342 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7f8c7f68-24f9-4f12-8879-25f1ef5b8dfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:55.343 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/d2d242a4-fdb7-41f9-9ee7-4e1b17687d68.pid.haproxy
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID d2d242a4-fdb7-41f9-9ee7-4e1b17687d68
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:40:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:40:55.344 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'env', 'PROCESS_TAG=haproxy-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d2d242a4-fdb7-41f9-9ee7-4e1b17687d68.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:40:55 np0005486808 nova_compute[259627]: 2025-10-14 09:40:55.372 2 DEBUG nova.compute.manager [req-c02771fc-c883-4d0c-b30e-cdcdbcb6e305 req-fa40f779-f510-4489-b407-b4ab2845245b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Received event network-vif-plugged-0b5d3762-db25-4cc9-90f3-79d3eb662378 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:40:55 np0005486808 nova_compute[259627]: 2025-10-14 09:40:55.375 2 DEBUG oslo_concurrency.lockutils [req-c02771fc-c883-4d0c-b30e-cdcdbcb6e305 req-fa40f779-f510-4489-b407-b4ab2845245b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "0239a56b-babd-4c44-b52b-ade80229be78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:40:55 np0005486808 nova_compute[259627]: 2025-10-14 09:40:55.376 2 DEBUG oslo_concurrency.lockutils [req-c02771fc-c883-4d0c-b30e-cdcdbcb6e305 req-fa40f779-f510-4489-b407-b4ab2845245b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "0239a56b-babd-4c44-b52b-ade80229be78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:40:55 np0005486808 nova_compute[259627]: 2025-10-14 09:40:55.376 2 DEBUG oslo_concurrency.lockutils [req-c02771fc-c883-4d0c-b30e-cdcdbcb6e305 req-fa40f779-f510-4489-b407-b4ab2845245b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "0239a56b-babd-4c44-b52b-ade80229be78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:40:55 np0005486808 nova_compute[259627]: 2025-10-14 09:40:55.377 2 DEBUG nova.compute.manager [req-c02771fc-c883-4d0c-b30e-cdcdbcb6e305 req-fa40f779-f510-4489-b407-b4ab2845245b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Processing event network-vif-plugged-0b5d3762-db25-4cc9-90f3-79d3eb662378 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:40:55 np0005486808 nova_compute[259627]: 2025-10-14 09:40:55.622 2 DEBUG nova.network.neutron [req-aa9d3cef-fc04-4e03-87a7-45fc9e11c9fa req-2a116f53-f356-4278-b6ea-66a1ff2cf753 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Updated VIF entry in instance network info cache for port 0b5d3762-db25-4cc9-90f3-79d3eb662378. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:40:55 np0005486808 nova_compute[259627]: 2025-10-14 09:40:55.624 2 DEBUG nova.network.neutron [req-aa9d3cef-fc04-4e03-87a7-45fc9e11c9fa req-2a116f53-f356-4278-b6ea-66a1ff2cf753 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Updating instance_info_cache with network_info: [{"id": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "address": "fa:16:3e:37:ee:6c", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b5d3762-db", "ovs_interfaceid": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:40:55 np0005486808 nova_compute[259627]: 2025-10-14 09:40:55.647 2 DEBUG oslo_concurrency.lockutils [req-aa9d3cef-fc04-4e03-87a7-45fc9e11c9fa req-2a116f53-f356-4278-b6ea-66a1ff2cf753 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-0239a56b-babd-4c44-b52b-ade80229be78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:40:55 np0005486808 podman[417945]: 2025-10-14 09:40:55.746645644 +0000 UTC m=+0.049823994 container create 7a9ac68f9ca6a9afb5d0b762993e61a8233cc81c50dc97c4bff92800724f2db4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 05:40:55 np0005486808 systemd[1]: Started libpod-conmon-7a9ac68f9ca6a9afb5d0b762993e61a8233cc81c50dc97c4bff92800724f2db4.scope.
Oct 14 05:40:55 np0005486808 podman[417945]: 2025-10-14 09:40:55.71715472 +0000 UTC m=+0.020333090 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:40:55 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:40:55 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b2f022e3216a77d69566ed9e97088f68d82c271e111cd888b10c372258ed61/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:40:55 np0005486808 podman[417945]: 2025-10-14 09:40:55.845567692 +0000 UTC m=+0.148746032 container init 7a9ac68f9ca6a9afb5d0b762993e61a8233cc81c50dc97c4bff92800724f2db4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 05:40:55 np0005486808 podman[417945]: 2025-10-14 09:40:55.851153829 +0000 UTC m=+0.154332169 container start 7a9ac68f9ca6a9afb5d0b762993e61a8233cc81c50dc97c4bff92800724f2db4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:40:55 np0005486808 neutron-haproxy-ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68[417960]: [NOTICE]   (417964) : New worker (417966) forked
Oct 14 05:40:55 np0005486808 neutron-haproxy-ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68[417960]: [NOTICE]   (417964) : Loading success.
Oct 14 05:40:56 np0005486808 nova_compute[259627]: 2025-10-14 09:40:56.045 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434856.0452204, 0239a56b-babd-4c44-b52b-ade80229be78 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:40:56 np0005486808 nova_compute[259627]: 2025-10-14 09:40:56.046 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] VM Started (Lifecycle Event)#033[00m
Oct 14 05:40:56 np0005486808 nova_compute[259627]: 2025-10-14 09:40:56.049 2 DEBUG nova.compute.manager [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:40:56 np0005486808 nova_compute[259627]: 2025-10-14 09:40:56.053 2 DEBUG nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:40:56 np0005486808 nova_compute[259627]: 2025-10-14 09:40:56.058 2 INFO nova.virt.libvirt.driver [-] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Instance spawned successfully.#033[00m
Oct 14 05:40:56 np0005486808 nova_compute[259627]: 2025-10-14 09:40:56.059 2 DEBUG nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:40:56 np0005486808 nova_compute[259627]: 2025-10-14 09:40:56.077 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:40:56 np0005486808 nova_compute[259627]: 2025-10-14 09:40:56.084 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:40:56 np0005486808 nova_compute[259627]: 2025-10-14 09:40:56.094 2 DEBUG nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:40:56 np0005486808 nova_compute[259627]: 2025-10-14 09:40:56.095 2 DEBUG nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:40:56 np0005486808 nova_compute[259627]: 2025-10-14 09:40:56.096 2 DEBUG nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:40:56 np0005486808 nova_compute[259627]: 2025-10-14 09:40:56.096 2 DEBUG nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:40:56 np0005486808 nova_compute[259627]: 2025-10-14 09:40:56.097 2 DEBUG nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:40:56 np0005486808 nova_compute[259627]: 2025-10-14 09:40:56.098 2 DEBUG nova.virt.libvirt.driver [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:40:56 np0005486808 nova_compute[259627]: 2025-10-14 09:40:56.113 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:40:56 np0005486808 nova_compute[259627]: 2025-10-14 09:40:56.114 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434856.0453238, 0239a56b-babd-4c44-b52b-ade80229be78 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:40:56 np0005486808 nova_compute[259627]: 2025-10-14 09:40:56.114 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:40:56 np0005486808 nova_compute[259627]: 2025-10-14 09:40:56.151 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:40:56 np0005486808 nova_compute[259627]: 2025-10-14 09:40:56.158 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434856.052432, 0239a56b-babd-4c44-b52b-ade80229be78 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:40:56 np0005486808 nova_compute[259627]: 2025-10-14 09:40:56.158 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:40:56 np0005486808 nova_compute[259627]: 2025-10-14 09:40:56.199 2 INFO nova.compute.manager [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Took 10.12 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:40:56 np0005486808 nova_compute[259627]: 2025-10-14 09:40:56.200 2 DEBUG nova.compute.manager [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:40:56 np0005486808 nova_compute[259627]: 2025-10-14 09:40:56.201 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:40:56 np0005486808 nova_compute[259627]: 2025-10-14 09:40:56.210 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:40:56 np0005486808 nova_compute[259627]: 2025-10-14 09:40:56.253 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:40:56 np0005486808 nova_compute[259627]: 2025-10-14 09:40:56.274 2 INFO nova.compute.manager [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Took 11.08 seconds to build instance.#033[00m
Oct 14 05:40:56 np0005486808 nova_compute[259627]: 2025-10-14 09:40:56.291 2 DEBUG oslo_concurrency.lockutils [None req-5a974eef-0b49-45a9-904b-ce09a8f9b749 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "0239a56b-babd-4c44-b52b-ade80229be78" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.176s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:40:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2634: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 14 05:40:56 np0005486808 nova_compute[259627]: 2025-10-14 09:40:56.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:40:57 np0005486808 nova_compute[259627]: 2025-10-14 09:40:57.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:40:57 np0005486808 nova_compute[259627]: 2025-10-14 09:40:57.049 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:40:57 np0005486808 nova_compute[259627]: 2025-10-14 09:40:57.050 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:40:57 np0005486808 nova_compute[259627]: 2025-10-14 09:40:57.051 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:40:57 np0005486808 nova_compute[259627]: 2025-10-14 09:40:57.051 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:40:57 np0005486808 nova_compute[259627]: 2025-10-14 09:40:57.051 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:40:57 np0005486808 nova_compute[259627]: 2025-10-14 09:40:57.535 2 DEBUG nova.compute.manager [req-9b9b46b7-403f-4af3-94bb-57ca752ddaa4 req-2282553e-4481-4f79-8f54-a459ccef778e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Received event network-vif-plugged-0b5d3762-db25-4cc9-90f3-79d3eb662378 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:40:57 np0005486808 nova_compute[259627]: 2025-10-14 09:40:57.536 2 DEBUG oslo_concurrency.lockutils [req-9b9b46b7-403f-4af3-94bb-57ca752ddaa4 req-2282553e-4481-4f79-8f54-a459ccef778e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "0239a56b-babd-4c44-b52b-ade80229be78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:40:57 np0005486808 nova_compute[259627]: 2025-10-14 09:40:57.537 2 DEBUG oslo_concurrency.lockutils [req-9b9b46b7-403f-4af3-94bb-57ca752ddaa4 req-2282553e-4481-4f79-8f54-a459ccef778e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "0239a56b-babd-4c44-b52b-ade80229be78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:40:57 np0005486808 nova_compute[259627]: 2025-10-14 09:40:57.538 2 DEBUG oslo_concurrency.lockutils [req-9b9b46b7-403f-4af3-94bb-57ca752ddaa4 req-2282553e-4481-4f79-8f54-a459ccef778e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "0239a56b-babd-4c44-b52b-ade80229be78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:40:57 np0005486808 nova_compute[259627]: 2025-10-14 09:40:57.538 2 DEBUG nova.compute.manager [req-9b9b46b7-403f-4af3-94bb-57ca752ddaa4 req-2282553e-4481-4f79-8f54-a459ccef778e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] No waiting events found dispatching network-vif-plugged-0b5d3762-db25-4cc9-90f3-79d3eb662378 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:40:57 np0005486808 nova_compute[259627]: 2025-10-14 09:40:57.539 2 WARNING nova.compute.manager [req-9b9b46b7-403f-4af3-94bb-57ca752ddaa4 req-2282553e-4481-4f79-8f54-a459ccef778e 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Received unexpected event network-vif-plugged-0b5d3762-db25-4cc9-90f3-79d3eb662378 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:40:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:40:57 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4104448304' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:40:57 np0005486808 nova_compute[259627]: 2025-10-14 09:40:57.575 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:40:57 np0005486808 nova_compute[259627]: 2025-10-14 09:40:57.653 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:40:57 np0005486808 nova_compute[259627]: 2025-10-14 09:40:57.654 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:40:57 np0005486808 nova_compute[259627]: 2025-10-14 09:40:57.875 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:40:57 np0005486808 nova_compute[259627]: 2025-10-14 09:40:57.877 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3457MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:40:57 np0005486808 nova_compute[259627]: 2025-10-14 09:40:57.877 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:40:57 np0005486808 nova_compute[259627]: 2025-10-14 09:40:57.878 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:40:57 np0005486808 nova_compute[259627]: 2025-10-14 09:40:57.950 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 0239a56b-babd-4c44-b52b-ade80229be78 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:40:57 np0005486808 nova_compute[259627]: 2025-10-14 09:40:57.951 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:40:57 np0005486808 nova_compute[259627]: 2025-10-14 09:40:57.952 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:40:58 np0005486808 nova_compute[259627]: 2025-10-14 09:40:57.995 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:40:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:40:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:40:58 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3637702024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:40:58 np0005486808 nova_compute[259627]: 2025-10-14 09:40:58.437 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:40:58 np0005486808 nova_compute[259627]: 2025-10-14 09:40:58.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:40:58 np0005486808 nova_compute[259627]: 2025-10-14 09:40:58.447 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:40:58 np0005486808 nova_compute[259627]: 2025-10-14 09:40:58.467 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:40:58 np0005486808 nova_compute[259627]: 2025-10-14 09:40:58.498 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:40:58 np0005486808 nova_compute[259627]: 2025-10-14 09:40:58.498 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:40:58 np0005486808 podman[418020]: 2025-10-14 09:40:58.658250194 +0000 UTC m=+0.067289332 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:40:58 np0005486808 podman[418021]: 2025-10-14 09:40:58.681578047 +0000 UTC m=+0.079436741 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 14 05:40:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2635: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 14 05:40:59 np0005486808 NetworkManager[44885]: <info>  [1760434859.7375] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/678)
Oct 14 05:40:59 np0005486808 NetworkManager[44885]: <info>  [1760434859.7396] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/679)
Oct 14 05:40:59 np0005486808 ovn_controller[152662]: 2025-10-14T09:40:59Z|01653|binding|INFO|Releasing lport 1bf2b41c-8c9f-45cd-aeb9-2459d1373791 from this chassis (sb_readonly=0)
Oct 14 05:40:59 np0005486808 nova_compute[259627]: 2025-10-14 09:40:59.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:40:59 np0005486808 ovn_controller[152662]: 2025-10-14T09:40:59Z|01654|binding|INFO|Releasing lport 1bf2b41c-8c9f-45cd-aeb9-2459d1373791 from this chassis (sb_readonly=0)
Oct 14 05:40:59 np0005486808 nova_compute[259627]: 2025-10-14 09:40:59.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:40:59 np0005486808 nova_compute[259627]: 2025-10-14 09:40:59.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:41:00 np0005486808 nova_compute[259627]: 2025-10-14 09:41:00.374 2 DEBUG nova.compute.manager [req-5da5573e-9bbf-44a6-b46d-b5a1fa1c7306 req-c2ecaf2a-a367-41a5-a156-4bf50ae2d30f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Received event network-changed-0b5d3762-db25-4cc9-90f3-79d3eb662378 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:41:00 np0005486808 nova_compute[259627]: 2025-10-14 09:41:00.375 2 DEBUG nova.compute.manager [req-5da5573e-9bbf-44a6-b46d-b5a1fa1c7306 req-c2ecaf2a-a367-41a5-a156-4bf50ae2d30f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Refreshing instance network info cache due to event network-changed-0b5d3762-db25-4cc9-90f3-79d3eb662378. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:41:00 np0005486808 nova_compute[259627]: 2025-10-14 09:41:00.376 2 DEBUG oslo_concurrency.lockutils [req-5da5573e-9bbf-44a6-b46d-b5a1fa1c7306 req-c2ecaf2a-a367-41a5-a156-4bf50ae2d30f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-0239a56b-babd-4c44-b52b-ade80229be78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:41:00 np0005486808 nova_compute[259627]: 2025-10-14 09:41:00.376 2 DEBUG oslo_concurrency.lockutils [req-5da5573e-9bbf-44a6-b46d-b5a1fa1c7306 req-c2ecaf2a-a367-41a5-a156-4bf50ae2d30f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-0239a56b-babd-4c44-b52b-ade80229be78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:41:00 np0005486808 nova_compute[259627]: 2025-10-14 09:41:00.377 2 DEBUG nova.network.neutron [req-5da5573e-9bbf-44a6-b46d-b5a1fa1c7306 req-c2ecaf2a-a367-41a5-a156-4bf50ae2d30f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Refreshing network info cache for port 0b5d3762-db25-4cc9-90f3-79d3eb662378 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:41:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2636: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 14 05:41:01 np0005486808 nova_compute[259627]: 2025-10-14 09:41:01.500 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:41:01 np0005486808 nova_compute[259627]: 2025-10-14 09:41:01.500 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:41:01 np0005486808 nova_compute[259627]: 2025-10-14 09:41:01.501 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:41:01 np0005486808 nova_compute[259627]: 2025-10-14 09:41:01.528 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 14 05:41:01 np0005486808 nova_compute[259627]: 2025-10-14 09:41:01.528 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:41:01 np0005486808 nova_compute[259627]: 2025-10-14 09:41:01.529 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:41:01 np0005486808 nova_compute[259627]: 2025-10-14 09:41:01.530 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:41:02 np0005486808 nova_compute[259627]: 2025-10-14 09:41:02.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:41:02 np0005486808 nova_compute[259627]: 2025-10-14 09:41:02.029 2 DEBUG nova.network.neutron [req-5da5573e-9bbf-44a6-b46d-b5a1fa1c7306 req-c2ecaf2a-a367-41a5-a156-4bf50ae2d30f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Updated VIF entry in instance network info cache for port 0b5d3762-db25-4cc9-90f3-79d3eb662378. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:41:02 np0005486808 nova_compute[259627]: 2025-10-14 09:41:02.030 2 DEBUG nova.network.neutron [req-5da5573e-9bbf-44a6-b46d-b5a1fa1c7306 req-c2ecaf2a-a367-41a5-a156-4bf50ae2d30f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Updating instance_info_cache with network_info: [{"id": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "address": "fa:16:3e:37:ee:6c", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b5d3762-db", "ovs_interfaceid": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:41:02 np0005486808 nova_compute[259627]: 2025-10-14 09:41:02.054 2 DEBUG oslo_concurrency.lockutils [req-5da5573e-9bbf-44a6-b46d-b5a1fa1c7306 req-c2ecaf2a-a367-41a5-a156-4bf50ae2d30f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-0239a56b-babd-4c44-b52b-ade80229be78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:41:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2637: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 05:41:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:41:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:41:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:41:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:41:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:41:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:41:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:41:03 np0005486808 nova_compute[259627]: 2025-10-14 09:41:03.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:41:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2638: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 05:41:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:41:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3282492851' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:41:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:41:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3282492851' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:41:05 np0005486808 nova_compute[259627]: 2025-10-14 09:41:05.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:41:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2639: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 05:41:07 np0005486808 nova_compute[259627]: 2025-10-14 09:41:07.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:41:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:41:07.056 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:41:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:41:07.057 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:41:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:41:07.057 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:41:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:41:08 np0005486808 nova_compute[259627]: 2025-10-14 09:41:08.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:41:08 np0005486808 ovn_controller[152662]: 2025-10-14T09:41:08Z|00198|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:37:ee:6c 10.100.0.12
Oct 14 05:41:08 np0005486808 ovn_controller[152662]: 2025-10-14T09:41:08Z|00199|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:37:ee:6c 10.100.0.12
Oct 14 05:41:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2640: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Oct 14 05:41:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2641: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 131 op/s
Oct 14 05:41:12 np0005486808 nova_compute[259627]: 2025-10-14 09:41:12.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:41:12 np0005486808 podman[418064]: 2025-10-14 09:41:12.692399521 +0000 UTC m=+0.097166306 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 14 05:41:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2642: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 14 05:41:12 np0005486808 podman[418063]: 2025-10-14 09:41:12.743927305 +0000 UTC m=+0.150034543 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 14 05:41:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:41:13 np0005486808 nova_compute[259627]: 2025-10-14 09:41:13.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:41:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2643: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 14 05:41:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2644: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 05:41:17 np0005486808 nova_compute[259627]: 2025-10-14 09:41:17.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:41:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:41:18 np0005486808 nova_compute[259627]: 2025-10-14 09:41:18.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:41:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2645: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 05:41:19 np0005486808 nova_compute[259627]: 2025-10-14 09:41:19.027 2 DEBUG oslo_concurrency.lockutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "3e37b67b-524c-4098-9609-97b0b31e72c4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:41:19 np0005486808 nova_compute[259627]: 2025-10-14 09:41:19.028 2 DEBUG oslo_concurrency.lockutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "3e37b67b-524c-4098-9609-97b0b31e72c4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:41:19 np0005486808 nova_compute[259627]: 2025-10-14 09:41:19.045 2 DEBUG nova.compute.manager [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:41:19 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:41:19 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:41:19 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:41:19 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:41:19 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:41:19 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:41:19 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 91d5641c-5ff0-4b64-a883-88dbe2e9d0b6 does not exist
Oct 14 05:41:19 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev c80beece-36fc-45dc-963d-742d9d98dbef does not exist
Oct 14 05:41:19 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 872df25f-eb6b-43b0-bfaf-a24cbeae476e does not exist
Oct 14 05:41:19 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:41:19 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:41:19 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:41:19 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:41:19 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:41:19 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:41:19 np0005486808 podman[418378]: 2025-10-14 09:41:19.950445461 +0000 UTC m=+0.060827003 container create 1cd9cd5b996f3cce9feb805f1e8351728f760073541d8afcbbb269694148a0fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:41:19 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:41:19 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:41:19 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:41:19 np0005486808 systemd[1]: Started libpod-conmon-1cd9cd5b996f3cce9feb805f1e8351728f760073541d8afcbbb269694148a0fc.scope.
Oct 14 05:41:20 np0005486808 podman[418378]: 2025-10-14 09:41:19.918245111 +0000 UTC m=+0.028626703 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:41:20 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:41:20 np0005486808 podman[418378]: 2025-10-14 09:41:20.057173871 +0000 UTC m=+0.167555443 container init 1cd9cd5b996f3cce9feb805f1e8351728f760073541d8afcbbb269694148a0fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_pascal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 14 05:41:20 np0005486808 podman[418378]: 2025-10-14 09:41:20.066203632 +0000 UTC m=+0.176585164 container start 1cd9cd5b996f3cce9feb805f1e8351728f760073541d8afcbbb269694148a0fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_pascal, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:41:20 np0005486808 podman[418378]: 2025-10-14 09:41:20.070669552 +0000 UTC m=+0.181051154 container attach 1cd9cd5b996f3cce9feb805f1e8351728f760073541d8afcbbb269694148a0fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Oct 14 05:41:20 np0005486808 wonderful_pascal[418394]: 167 167
Oct 14 05:41:20 np0005486808 systemd[1]: libpod-1cd9cd5b996f3cce9feb805f1e8351728f760073541d8afcbbb269694148a0fc.scope: Deactivated successfully.
Oct 14 05:41:20 np0005486808 conmon[418394]: conmon 1cd9cd5b996f3cce9feb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1cd9cd5b996f3cce9feb805f1e8351728f760073541d8afcbbb269694148a0fc.scope/container/memory.events
Oct 14 05:41:20 np0005486808 podman[418378]: 2025-10-14 09:41:20.073932552 +0000 UTC m=+0.184314064 container died 1cd9cd5b996f3cce9feb805f1e8351728f760073541d8afcbbb269694148a0fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_pascal, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 14 05:41:20 np0005486808 nova_compute[259627]: 2025-10-14 09:41:20.101 2 DEBUG oslo_concurrency.lockutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:41:20 np0005486808 nova_compute[259627]: 2025-10-14 09:41:20.104 2 DEBUG oslo_concurrency.lockutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:41:20 np0005486808 systemd[1]: var-lib-containers-storage-overlay-466537f7b95a03f4b98c6f6fa002ee93d6540e9f2d17f6280890b78c64ef4236-merged.mount: Deactivated successfully.
Oct 14 05:41:20 np0005486808 nova_compute[259627]: 2025-10-14 09:41:20.118 2 DEBUG nova.virt.hardware [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:41:20 np0005486808 nova_compute[259627]: 2025-10-14 09:41:20.118 2 INFO nova.compute.claims [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:41:20 np0005486808 podman[418378]: 2025-10-14 09:41:20.132838537 +0000 UTC m=+0.243220049 container remove 1cd9cd5b996f3cce9feb805f1e8351728f760073541d8afcbbb269694148a0fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 14 05:41:20 np0005486808 systemd[1]: libpod-conmon-1cd9cd5b996f3cce9feb805f1e8351728f760073541d8afcbbb269694148a0fc.scope: Deactivated successfully.
Oct 14 05:41:20 np0005486808 podman[418418]: 2025-10-14 09:41:20.373727119 +0000 UTC m=+0.075622847 container create 47d138c73befed8d60e94277ad2c1c5fa88153022f5fa011e2ab7a9a9675204a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_sinoussi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:41:20 np0005486808 systemd[1]: Started libpod-conmon-47d138c73befed8d60e94277ad2c1c5fa88153022f5fa011e2ab7a9a9675204a.scope.
Oct 14 05:41:20 np0005486808 podman[418418]: 2025-10-14 09:41:20.342275977 +0000 UTC m=+0.044171745 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:41:20 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:41:20 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd67728abedc195b4842ede9b7443b310961ab2531f3b5a8292be0c948954d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:41:20 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd67728abedc195b4842ede9b7443b310961ab2531f3b5a8292be0c948954d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:41:20 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd67728abedc195b4842ede9b7443b310961ab2531f3b5a8292be0c948954d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:41:20 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd67728abedc195b4842ede9b7443b310961ab2531f3b5a8292be0c948954d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:41:20 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd67728abedc195b4842ede9b7443b310961ab2531f3b5a8292be0c948954d7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:41:20 np0005486808 podman[418418]: 2025-10-14 09:41:20.508718262 +0000 UTC m=+0.210614030 container init 47d138c73befed8d60e94277ad2c1c5fa88153022f5fa011e2ab7a9a9675204a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 14 05:41:20 np0005486808 podman[418418]: 2025-10-14 09:41:20.525719739 +0000 UTC m=+0.227615457 container start 47d138c73befed8d60e94277ad2c1c5fa88153022f5fa011e2ab7a9a9675204a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_sinoussi, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 14 05:41:20 np0005486808 podman[418418]: 2025-10-14 09:41:20.530163388 +0000 UTC m=+0.232059096 container attach 47d138c73befed8d60e94277ad2c1c5fa88153022f5fa011e2ab7a9a9675204a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_sinoussi, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 14 05:41:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2646: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 05:41:21 np0005486808 nova_compute[259627]: 2025-10-14 09:41:21.086 2 DEBUG oslo_concurrency.processutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:41:21 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:41:21 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2887099128' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:41:21 np0005486808 nova_compute[259627]: 2025-10-14 09:41:21.589 2 DEBUG oslo_concurrency.processutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:41:21 np0005486808 nova_compute[259627]: 2025-10-14 09:41:21.597 2 DEBUG nova.compute.provider_tree [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:41:21 np0005486808 laughing_sinoussi[418434]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:41:21 np0005486808 laughing_sinoussi[418434]: --> relative data size: 1.0
Oct 14 05:41:21 np0005486808 laughing_sinoussi[418434]: --> All data devices are unavailable
Oct 14 05:41:21 np0005486808 nova_compute[259627]: 2025-10-14 09:41:21.626 2 DEBUG nova.scheduler.client.report [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:41:21 np0005486808 nova_compute[259627]: 2025-10-14 09:41:21.647 2 DEBUG oslo_concurrency.lockutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.544s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:41:21 np0005486808 nova_compute[259627]: 2025-10-14 09:41:21.648 2 DEBUG nova.compute.manager [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:41:21 np0005486808 systemd[1]: libpod-47d138c73befed8d60e94277ad2c1c5fa88153022f5fa011e2ab7a9a9675204a.scope: Deactivated successfully.
Oct 14 05:41:21 np0005486808 podman[418418]: 2025-10-14 09:41:21.66680121 +0000 UTC m=+1.368696898 container died 47d138c73befed8d60e94277ad2c1c5fa88153022f5fa011e2ab7a9a9675204a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 05:41:21 np0005486808 systemd[1]: libpod-47d138c73befed8d60e94277ad2c1c5fa88153022f5fa011e2ab7a9a9675204a.scope: Consumed 1.096s CPU time.
Oct 14 05:41:21 np0005486808 nova_compute[259627]: 2025-10-14 09:41:21.689 2 DEBUG nova.compute.manager [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:41:21 np0005486808 nova_compute[259627]: 2025-10-14 09:41:21.690 2 DEBUG nova.network.neutron [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:41:21 np0005486808 systemd[1]: var-lib-containers-storage-overlay-3fd67728abedc195b4842ede9b7443b310961ab2531f3b5a8292be0c948954d7-merged.mount: Deactivated successfully.
Oct 14 05:41:21 np0005486808 nova_compute[259627]: 2025-10-14 09:41:21.717 2 INFO nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:41:21 np0005486808 podman[418418]: 2025-10-14 09:41:21.732871821 +0000 UTC m=+1.434767509 container remove 47d138c73befed8d60e94277ad2c1c5fa88153022f5fa011e2ab7a9a9675204a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_sinoussi, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:41:21 np0005486808 systemd[1]: libpod-conmon-47d138c73befed8d60e94277ad2c1c5fa88153022f5fa011e2ab7a9a9675204a.scope: Deactivated successfully.
Oct 14 05:41:21 np0005486808 nova_compute[259627]: 2025-10-14 09:41:21.749 2 DEBUG nova.compute.manager [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:41:21 np0005486808 nova_compute[259627]: 2025-10-14 09:41:21.836 2 DEBUG nova.compute.manager [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:41:21 np0005486808 nova_compute[259627]: 2025-10-14 09:41:21.838 2 DEBUG nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:41:21 np0005486808 nova_compute[259627]: 2025-10-14 09:41:21.839 2 INFO nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Creating image(s)#033[00m
Oct 14 05:41:21 np0005486808 nova_compute[259627]: 2025-10-14 09:41:21.866 2 DEBUG nova.storage.rbd_utils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 3e37b67b-524c-4098-9609-97b0b31e72c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:41:21 np0005486808 nova_compute[259627]: 2025-10-14 09:41:21.911 2 DEBUG nova.storage.rbd_utils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 3e37b67b-524c-4098-9609-97b0b31e72c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:41:21 np0005486808 nova_compute[259627]: 2025-10-14 09:41:21.949 2 DEBUG nova.storage.rbd_utils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 3e37b67b-524c-4098-9609-97b0b31e72c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:41:21 np0005486808 nova_compute[259627]: 2025-10-14 09:41:21.953 2 DEBUG oslo_concurrency.processutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:41:22 np0005486808 nova_compute[259627]: 2025-10-14 09:41:22.030 2 DEBUG oslo_concurrency.processutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:41:22 np0005486808 nova_compute[259627]: 2025-10-14 09:41:22.031 2 DEBUG oslo_concurrency.lockutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:41:22 np0005486808 nova_compute[259627]: 2025-10-14 09:41:22.032 2 DEBUG oslo_concurrency.lockutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:41:22 np0005486808 nova_compute[259627]: 2025-10-14 09:41:22.032 2 DEBUG oslo_concurrency.lockutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:41:22 np0005486808 nova_compute[259627]: 2025-10-14 09:41:22.097 2 DEBUG nova.storage.rbd_utils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 3e37b67b-524c-4098-9609-97b0b31e72c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:41:22 np0005486808 nova_compute[259627]: 2025-10-14 09:41:22.102 2 DEBUG oslo_concurrency.processutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 3e37b67b-524c-4098-9609-97b0b31e72c4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:41:22 np0005486808 nova_compute[259627]: 2025-10-14 09:41:22.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:41:22 np0005486808 nova_compute[259627]: 2025-10-14 09:41:22.200 2 DEBUG nova.policy [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30cd4a7832e74a8ba07238937405c4ea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:41:22 np0005486808 nova_compute[259627]: 2025-10-14 09:41:22.361 2 DEBUG oslo_concurrency.processutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 3e37b67b-524c-4098-9609-97b0b31e72c4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.259s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:41:22 np0005486808 nova_compute[259627]: 2025-10-14 09:41:22.485 2 DEBUG nova.storage.rbd_utils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] resizing rbd image 3e37b67b-524c-4098-9609-97b0b31e72c4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:41:22 np0005486808 nova_compute[259627]: 2025-10-14 09:41:22.605 2 DEBUG nova.objects.instance [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'migration_context' on Instance uuid 3e37b67b-524c-4098-9609-97b0b31e72c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:41:22 np0005486808 podman[418791]: 2025-10-14 09:41:22.631484804 +0000 UTC m=+0.044742229 container create 3cbb11f8bbdf53e901ee6008800f08753f87cbd95577addc39eec0f01c5eb694 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_leavitt, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 14 05:41:22 np0005486808 systemd[1]: Started libpod-conmon-3cbb11f8bbdf53e901ee6008800f08753f87cbd95577addc39eec0f01c5eb694.scope.
Oct 14 05:41:22 np0005486808 podman[418791]: 2025-10-14 09:41:22.612568219 +0000 UTC m=+0.025825724 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:41:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2647: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 16 KiB/s wr, 0 op/s
Oct 14 05:41:22 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:41:22 np0005486808 nova_compute[259627]: 2025-10-14 09:41:22.727 2 DEBUG nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:41:22 np0005486808 nova_compute[259627]: 2025-10-14 09:41:22.729 2 DEBUG nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Ensure instance console log exists: /var/lib/nova/instances/3e37b67b-524c-4098-9609-97b0b31e72c4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:41:22 np0005486808 nova_compute[259627]: 2025-10-14 09:41:22.729 2 DEBUG oslo_concurrency.lockutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:41:22 np0005486808 nova_compute[259627]: 2025-10-14 09:41:22.730 2 DEBUG oslo_concurrency.lockutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:41:22 np0005486808 nova_compute[259627]: 2025-10-14 09:41:22.731 2 DEBUG oslo_concurrency.lockutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:41:22 np0005486808 podman[418791]: 2025-10-14 09:41:22.732292567 +0000 UTC m=+0.145550012 container init 3cbb11f8bbdf53e901ee6008800f08753f87cbd95577addc39eec0f01c5eb694 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 14 05:41:22 np0005486808 podman[418791]: 2025-10-14 09:41:22.743542453 +0000 UTC m=+0.156799878 container start 3cbb11f8bbdf53e901ee6008800f08753f87cbd95577addc39eec0f01c5eb694 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_leavitt, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:41:22 np0005486808 podman[418791]: 2025-10-14 09:41:22.74707629 +0000 UTC m=+0.160333715 container attach 3cbb11f8bbdf53e901ee6008800f08753f87cbd95577addc39eec0f01c5eb694 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:41:22 np0005486808 cool_leavitt[418819]: 167 167
Oct 14 05:41:22 np0005486808 systemd[1]: libpod-3cbb11f8bbdf53e901ee6008800f08753f87cbd95577addc39eec0f01c5eb694.scope: Deactivated successfully.
Oct 14 05:41:22 np0005486808 podman[418824]: 2025-10-14 09:41:22.815911449 +0000 UTC m=+0.041563981 container died 3cbb11f8bbdf53e901ee6008800f08753f87cbd95577addc39eec0f01c5eb694 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_leavitt, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 14 05:41:22 np0005486808 systemd[1]: var-lib-containers-storage-overlay-5a9d115116c42831b425052bd275f8540bbd0a8ae4ceec80b1ed4ef3474f8d84-merged.mount: Deactivated successfully.
Oct 14 05:41:22 np0005486808 podman[418824]: 2025-10-14 09:41:22.861262572 +0000 UTC m=+0.086915094 container remove 3cbb11f8bbdf53e901ee6008800f08753f87cbd95577addc39eec0f01c5eb694 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_leavitt, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 14 05:41:22 np0005486808 systemd[1]: libpod-conmon-3cbb11f8bbdf53e901ee6008800f08753f87cbd95577addc39eec0f01c5eb694.scope: Deactivated successfully.
Oct 14 05:41:23 np0005486808 podman[418846]: 2025-10-14 09:41:23.099777056 +0000 UTC m=+0.051177497 container create 7b09bf58b34868c1fb35e1d292858a97c0daf65e8011b3651d3df0e50077185d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_brahmagupta, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 14 05:41:23 np0005486808 systemd[1]: Started libpod-conmon-7b09bf58b34868c1fb35e1d292858a97c0daf65e8011b3651d3df0e50077185d.scope.
Oct 14 05:41:23 np0005486808 podman[418846]: 2025-10-14 09:41:23.074513976 +0000 UTC m=+0.025914507 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:41:23 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:41:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:41:23 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/038bf6fb65a655c7a1315d2bc6f5ebf61555b87dc5581045caef66b9b401815d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:41:23 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/038bf6fb65a655c7a1315d2bc6f5ebf61555b87dc5581045caef66b9b401815d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:41:23 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/038bf6fb65a655c7a1315d2bc6f5ebf61555b87dc5581045caef66b9b401815d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:41:23 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/038bf6fb65a655c7a1315d2bc6f5ebf61555b87dc5581045caef66b9b401815d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:41:23 np0005486808 podman[418846]: 2025-10-14 09:41:23.20631441 +0000 UTC m=+0.157714911 container init 7b09bf58b34868c1fb35e1d292858a97c0daf65e8011b3651d3df0e50077185d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:41:23 np0005486808 podman[418846]: 2025-10-14 09:41:23.222469736 +0000 UTC m=+0.173870217 container start 7b09bf58b34868c1fb35e1d292858a97c0daf65e8011b3651d3df0e50077185d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:41:23 np0005486808 podman[418846]: 2025-10-14 09:41:23.226999828 +0000 UTC m=+0.178400319 container attach 7b09bf58b34868c1fb35e1d292858a97c0daf65e8011b3651d3df0e50077185d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:41:23 np0005486808 nova_compute[259627]: 2025-10-14 09:41:23.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:41:23 np0005486808 nova_compute[259627]: 2025-10-14 09:41:23.702 2 DEBUG nova.network.neutron [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Successfully created port: 3300e6b2-d3bc-432e-925e-1d837fab4a11 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]: {
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:    "0": [
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:        {
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:            "devices": [
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:                "/dev/loop3"
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:            ],
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:            "lv_name": "ceph_lv0",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:            "lv_size": "21470642176",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:            "name": "ceph_lv0",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:            "tags": {
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:                "ceph.cluster_name": "ceph",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:                "ceph.crush_device_class": "",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:                "ceph.encrypted": "0",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:                "ceph.osd_id": "0",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:                "ceph.type": "block",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:                "ceph.vdo": "0"
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:            },
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:            "type": "block",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:            "vg_name": "ceph_vg0"
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:        }
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:    ],
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:    "1": [
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:        {
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:            "devices": [
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:                "/dev/loop4"
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:            ],
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:            "lv_name": "ceph_lv1",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:            "lv_size": "21470642176",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:            "name": "ceph_lv1",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:            "tags": {
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:                "ceph.cluster_name": "ceph",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:                "ceph.crush_device_class": "",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:                "ceph.encrypted": "0",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:                "ceph.osd_id": "1",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:                "ceph.type": "block",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:                "ceph.vdo": "0"
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:            },
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:            "type": "block",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:            "vg_name": "ceph_vg1"
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:        }
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:    ],
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:    "2": [
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:        {
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:            "devices": [
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:                "/dev/loop5"
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:            ],
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:            "lv_name": "ceph_lv2",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:            "lv_size": "21470642176",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:            "name": "ceph_lv2",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:            "tags": {
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:                "ceph.cluster_name": "ceph",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:                "ceph.crush_device_class": "",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:                "ceph.encrypted": "0",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:                "ceph.osd_id": "2",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:                "ceph.type": "block",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:                "ceph.vdo": "0"
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:            },
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:            "type": "block",
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:            "vg_name": "ceph_vg2"
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:        }
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]:    ]
Oct 14 05:41:24 np0005486808 kind_brahmagupta[418863]: }
Oct 14 05:41:24 np0005486808 systemd[1]: libpod-7b09bf58b34868c1fb35e1d292858a97c0daf65e8011b3651d3df0e50077185d.scope: Deactivated successfully.
Oct 14 05:41:24 np0005486808 podman[418846]: 2025-10-14 09:41:24.068091978 +0000 UTC m=+1.019492429 container died 7b09bf58b34868c1fb35e1d292858a97c0daf65e8011b3651d3df0e50077185d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_brahmagupta, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 14 05:41:24 np0005486808 systemd[1]: var-lib-containers-storage-overlay-038bf6fb65a655c7a1315d2bc6f5ebf61555b87dc5581045caef66b9b401815d-merged.mount: Deactivated successfully.
Oct 14 05:41:24 np0005486808 podman[418846]: 2025-10-14 09:41:24.172647494 +0000 UTC m=+1.124047975 container remove 7b09bf58b34868c1fb35e1d292858a97c0daf65e8011b3651d3df0e50077185d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 05:41:24 np0005486808 systemd[1]: libpod-conmon-7b09bf58b34868c1fb35e1d292858a97c0daf65e8011b3651d3df0e50077185d.scope: Deactivated successfully.
Oct 14 05:41:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2648: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 16 KiB/s wr, 0 op/s
Oct 14 05:41:24 np0005486808 nova_compute[259627]: 2025-10-14 09:41:24.740 2 DEBUG nova.network.neutron [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Successfully updated port: 3300e6b2-d3bc-432e-925e-1d837fab4a11 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:41:24 np0005486808 nova_compute[259627]: 2025-10-14 09:41:24.768 2 DEBUG oslo_concurrency.lockutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "refresh_cache-3e37b67b-524c-4098-9609-97b0b31e72c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:41:24 np0005486808 nova_compute[259627]: 2025-10-14 09:41:24.768 2 DEBUG oslo_concurrency.lockutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquired lock "refresh_cache-3e37b67b-524c-4098-9609-97b0b31e72c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:41:24 np0005486808 nova_compute[259627]: 2025-10-14 09:41:24.768 2 DEBUG nova.network.neutron [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:41:24 np0005486808 nova_compute[259627]: 2025-10-14 09:41:24.887 2 DEBUG nova.compute.manager [req-b954894c-0bfa-4308-8ac1-3ec80b468e47 req-b51ff910-40a1-404b-9f6c-1c7263762de4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Received event network-changed-3300e6b2-d3bc-432e-925e-1d837fab4a11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:41:24 np0005486808 nova_compute[259627]: 2025-10-14 09:41:24.888 2 DEBUG nova.compute.manager [req-b954894c-0bfa-4308-8ac1-3ec80b468e47 req-b51ff910-40a1-404b-9f6c-1c7263762de4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Refreshing instance network info cache due to event network-changed-3300e6b2-d3bc-432e-925e-1d837fab4a11. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:41:24 np0005486808 nova_compute[259627]: 2025-10-14 09:41:24.888 2 DEBUG oslo_concurrency.lockutils [req-b954894c-0bfa-4308-8ac1-3ec80b468e47 req-b51ff910-40a1-404b-9f6c-1c7263762de4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-3e37b67b-524c-4098-9609-97b0b31e72c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:41:25 np0005486808 podman[419025]: 2025-10-14 09:41:25.043670708 +0000 UTC m=+0.052754596 container create f2b77e0fad1af8fdca4a560a489fb106dcad27910979dddb18537954d92881a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_clarke, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:41:25 np0005486808 systemd[1]: Started libpod-conmon-f2b77e0fad1af8fdca4a560a489fb106dcad27910979dddb18537954d92881a5.scope.
Oct 14 05:41:25 np0005486808 podman[419025]: 2025-10-14 09:41:25.021474013 +0000 UTC m=+0.030557871 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:41:25 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:41:25 np0005486808 podman[419025]: 2025-10-14 09:41:25.139225993 +0000 UTC m=+0.148309921 container init f2b77e0fad1af8fdca4a560a489fb106dcad27910979dddb18537954d92881a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_clarke, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:41:25 np0005486808 podman[419025]: 2025-10-14 09:41:25.150146591 +0000 UTC m=+0.159230449 container start f2b77e0fad1af8fdca4a560a489fb106dcad27910979dddb18537954d92881a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 14 05:41:25 np0005486808 podman[419025]: 2025-10-14 09:41:25.15378396 +0000 UTC m=+0.162867888 container attach f2b77e0fad1af8fdca4a560a489fb106dcad27910979dddb18537954d92881a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_clarke, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:41:25 np0005486808 exciting_clarke[419042]: 167 167
Oct 14 05:41:25 np0005486808 podman[419025]: 2025-10-14 09:41:25.159600063 +0000 UTC m=+0.168683961 container died f2b77e0fad1af8fdca4a560a489fb106dcad27910979dddb18537954d92881a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 05:41:25 np0005486808 systemd[1]: libpod-f2b77e0fad1af8fdca4a560a489fb106dcad27910979dddb18537954d92881a5.scope: Deactivated successfully.
Oct 14 05:41:25 np0005486808 nova_compute[259627]: 2025-10-14 09:41:25.173 2 DEBUG nova.network.neutron [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:41:25 np0005486808 systemd[1]: var-lib-containers-storage-overlay-f9b8cd1a76e6b09894df7650950783850e811b6138b82c14ee2b6df9aad9d0dc-merged.mount: Deactivated successfully.
Oct 14 05:41:25 np0005486808 podman[419025]: 2025-10-14 09:41:25.213797523 +0000 UTC m=+0.222881371 container remove f2b77e0fad1af8fdca4a560a489fb106dcad27910979dddb18537954d92881a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_clarke, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 14 05:41:25 np0005486808 systemd[1]: libpod-conmon-f2b77e0fad1af8fdca4a560a489fb106dcad27910979dddb18537954d92881a5.scope: Deactivated successfully.
Oct 14 05:41:25 np0005486808 podman[419066]: 2025-10-14 09:41:25.462973968 +0000 UTC m=+0.070348268 container create 3f5ce003222fdb08bebc7cb29e3ad079f49be2277a5af55ac4f225ec6e630beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_brown, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:41:25 np0005486808 podman[419066]: 2025-10-14 09:41:25.435080453 +0000 UTC m=+0.042454803 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:41:25 np0005486808 systemd[1]: Started libpod-conmon-3f5ce003222fdb08bebc7cb29e3ad079f49be2277a5af55ac4f225ec6e630beb.scope.
Oct 14 05:41:25 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:41:25 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b78e313f8fd38501d7eebabb56edeb2a47c447541a90d2dd523202971342f945/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:41:25 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b78e313f8fd38501d7eebabb56edeb2a47c447541a90d2dd523202971342f945/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:41:25 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b78e313f8fd38501d7eebabb56edeb2a47c447541a90d2dd523202971342f945/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:41:25 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b78e313f8fd38501d7eebabb56edeb2a47c447541a90d2dd523202971342f945/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:41:25 np0005486808 podman[419066]: 2025-10-14 09:41:25.615999333 +0000 UTC m=+0.223373683 container init 3f5ce003222fdb08bebc7cb29e3ad079f49be2277a5af55ac4f225ec6e630beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_brown, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 14 05:41:25 np0005486808 podman[419066]: 2025-10-14 09:41:25.627475965 +0000 UTC m=+0.234850265 container start 3f5ce003222fdb08bebc7cb29e3ad079f49be2277a5af55ac4f225ec6e630beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_brown, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:41:25 np0005486808 podman[419066]: 2025-10-14 09:41:25.633050542 +0000 UTC m=+0.240424842 container attach 3f5ce003222fdb08bebc7cb29e3ad079f49be2277a5af55ac4f225ec6e630beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_brown, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:41:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2649: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:41:26 np0005486808 vibrant_brown[419082]: {
Oct 14 05:41:26 np0005486808 vibrant_brown[419082]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:41:26 np0005486808 vibrant_brown[419082]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:41:26 np0005486808 vibrant_brown[419082]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:41:26 np0005486808 vibrant_brown[419082]:        "osd_id": 2,
Oct 14 05:41:26 np0005486808 vibrant_brown[419082]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:41:26 np0005486808 vibrant_brown[419082]:        "type": "bluestore"
Oct 14 05:41:26 np0005486808 vibrant_brown[419082]:    },
Oct 14 05:41:26 np0005486808 vibrant_brown[419082]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:41:26 np0005486808 vibrant_brown[419082]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:41:26 np0005486808 vibrant_brown[419082]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:41:26 np0005486808 vibrant_brown[419082]:        "osd_id": 1,
Oct 14 05:41:26 np0005486808 vibrant_brown[419082]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:41:26 np0005486808 vibrant_brown[419082]:        "type": "bluestore"
Oct 14 05:41:26 np0005486808 vibrant_brown[419082]:    },
Oct 14 05:41:26 np0005486808 vibrant_brown[419082]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:41:26 np0005486808 vibrant_brown[419082]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:41:26 np0005486808 vibrant_brown[419082]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:41:26 np0005486808 vibrant_brown[419082]:        "osd_id": 0,
Oct 14 05:41:26 np0005486808 vibrant_brown[419082]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:41:26 np0005486808 vibrant_brown[419082]:        "type": "bluestore"
Oct 14 05:41:26 np0005486808 vibrant_brown[419082]:    }
Oct 14 05:41:26 np0005486808 vibrant_brown[419082]: }
Oct 14 05:41:26 np0005486808 systemd[1]: libpod-3f5ce003222fdb08bebc7cb29e3ad079f49be2277a5af55ac4f225ec6e630beb.scope: Deactivated successfully.
Oct 14 05:41:26 np0005486808 systemd[1]: libpod-3f5ce003222fdb08bebc7cb29e3ad079f49be2277a5af55ac4f225ec6e630beb.scope: Consumed 1.136s CPU time.
Oct 14 05:41:26 np0005486808 podman[419115]: 2025-10-14 09:41:26.796631866 +0000 UTC m=+0.029857274 container died 3f5ce003222fdb08bebc7cb29e3ad079f49be2277a5af55ac4f225ec6e630beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_brown, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:41:26 np0005486808 systemd[1]: var-lib-containers-storage-overlay-b78e313f8fd38501d7eebabb56edeb2a47c447541a90d2dd523202971342f945-merged.mount: Deactivated successfully.
Oct 14 05:41:26 np0005486808 podman[419115]: 2025-10-14 09:41:26.853285206 +0000 UTC m=+0.086510614 container remove 3f5ce003222fdb08bebc7cb29e3ad079f49be2277a5af55ac4f225ec6e630beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_brown, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:41:26 np0005486808 systemd[1]: libpod-conmon-3f5ce003222fdb08bebc7cb29e3ad079f49be2277a5af55ac4f225ec6e630beb.scope: Deactivated successfully.
Oct 14 05:41:26 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:41:26 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:41:26 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:41:26 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:41:26 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 5ff90b68-5547-4ccf-983c-48a5ce044fb3 does not exist
Oct 14 05:41:26 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 0fe86e24-926f-4329-933f-af043d04de34 does not exist
Oct 14 05:41:27 np0005486808 nova_compute[259627]: 2025-10-14 09:41:27.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:41:27 np0005486808 nova_compute[259627]: 2025-10-14 09:41:27.385 2 DEBUG nova.network.neutron [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Updating instance_info_cache with network_info: [{"id": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "address": "fa:16:3e:15:79:71", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3300e6b2-d3", "ovs_interfaceid": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": 
false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:41:27 np0005486808 nova_compute[259627]: 2025-10-14 09:41:27.428 2 DEBUG oslo_concurrency.lockutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Releasing lock "refresh_cache-3e37b67b-524c-4098-9609-97b0b31e72c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:41:27 np0005486808 nova_compute[259627]: 2025-10-14 09:41:27.429 2 DEBUG nova.compute.manager [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Instance network_info: |[{"id": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "address": "fa:16:3e:15:79:71", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3300e6b2-d3", "ovs_interfaceid": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": 
true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:41:27 np0005486808 nova_compute[259627]: 2025-10-14 09:41:27.429 2 DEBUG oslo_concurrency.lockutils [req-b954894c-0bfa-4308-8ac1-3ec80b468e47 req-b51ff910-40a1-404b-9f6c-1c7263762de4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-3e37b67b-524c-4098-9609-97b0b31e72c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:41:27 np0005486808 nova_compute[259627]: 2025-10-14 09:41:27.429 2 DEBUG nova.network.neutron [req-b954894c-0bfa-4308-8ac1-3ec80b468e47 req-b51ff910-40a1-404b-9f6c-1c7263762de4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Refreshing network info cache for port 3300e6b2-d3bc-432e-925e-1d837fab4a11 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:41:27 np0005486808 nova_compute[259627]: 2025-10-14 09:41:27.432 2 DEBUG nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Start _get_guest_xml network_info=[{"id": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "address": "fa:16:3e:15:79:71", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3300e6b2-d3", "ovs_interfaceid": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:41:27 np0005486808 nova_compute[259627]: 2025-10-14 09:41:27.437 2 WARNING nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:41:27 np0005486808 nova_compute[259627]: 2025-10-14 09:41:27.448 2 DEBUG nova.virt.libvirt.host [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:41:27 np0005486808 nova_compute[259627]: 2025-10-14 09:41:27.449 2 DEBUG nova.virt.libvirt.host [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:41:27 np0005486808 nova_compute[259627]: 2025-10-14 09:41:27.452 2 DEBUG nova.virt.libvirt.host [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:41:27 np0005486808 nova_compute[259627]: 2025-10-14 09:41:27.452 2 DEBUG nova.virt.libvirt.host [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:41:27 np0005486808 nova_compute[259627]: 2025-10-14 09:41:27.453 2 DEBUG nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:41:27 np0005486808 nova_compute[259627]: 2025-10-14 09:41:27.453 2 DEBUG nova.virt.hardware [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:41:27 np0005486808 nova_compute[259627]: 2025-10-14 09:41:27.453 2 DEBUG nova.virt.hardware [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:41:27 np0005486808 nova_compute[259627]: 2025-10-14 09:41:27.454 2 DEBUG nova.virt.hardware [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:41:27 np0005486808 nova_compute[259627]: 2025-10-14 09:41:27.454 2 DEBUG nova.virt.hardware [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:41:27 np0005486808 nova_compute[259627]: 2025-10-14 09:41:27.454 2 DEBUG nova.virt.hardware [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:41:27 np0005486808 nova_compute[259627]: 2025-10-14 09:41:27.454 2 DEBUG nova.virt.hardware [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:41:27 np0005486808 nova_compute[259627]: 2025-10-14 09:41:27.454 2 DEBUG nova.virt.hardware [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:41:27 np0005486808 nova_compute[259627]: 2025-10-14 09:41:27.455 2 DEBUG nova.virt.hardware [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:41:27 np0005486808 nova_compute[259627]: 2025-10-14 09:41:27.455 2 DEBUG nova.virt.hardware [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:41:27 np0005486808 nova_compute[259627]: 2025-10-14 09:41:27.455 2 DEBUG nova.virt.hardware [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:41:27 np0005486808 nova_compute[259627]: 2025-10-14 09:41:27.455 2 DEBUG nova.virt.hardware [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:41:27 np0005486808 nova_compute[259627]: 2025-10-14 09:41:27.458 2 DEBUG oslo_concurrency.processutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:41:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:41:27 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/333800934' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:41:27 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:41:27 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:41:27 np0005486808 nova_compute[259627]: 2025-10-14 09:41:27.916 2 DEBUG oslo_concurrency.processutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:41:27 np0005486808 nova_compute[259627]: 2025-10-14 09:41:27.943 2 DEBUG nova.storage.rbd_utils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 3e37b67b-524c-4098-9609-97b0b31e72c4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:41:27 np0005486808 nova_compute[259627]: 2025-10-14 09:41:27.948 2 DEBUG oslo_concurrency.processutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:41:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:41:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:41:28 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2472713562' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:41:28 np0005486808 nova_compute[259627]: 2025-10-14 09:41:28.388 2 DEBUG oslo_concurrency.processutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:41:28 np0005486808 nova_compute[259627]: 2025-10-14 09:41:28.392 2 DEBUG nova.virt.libvirt.vif [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:41:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2135470383',display_name='tempest-TestGettingAddress-server-2135470383',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2135470383',id=151,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMlxPr0cTZ8keQZTmg22ARWYe0xInByLkMD0cYdsQADwvB4qpdeYqfVEDfHnweHS+Sevb4jVHNf0dCnB4WMRrDqSKUzLaj+iYBC6weCzrTPlPHZbuNpXR0FfjzztQHSYpg==',key_name='tempest-TestGettingAddress-1794090516',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-ny0720r7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:41:21Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=3e37b67b-524c-4098-9609-97b0b31e72c4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "address": "fa:16:3e:15:79:71", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3300e6b2-d3", "ovs_interfaceid": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:41:28 np0005486808 nova_compute[259627]: 2025-10-14 09:41:28.393 2 DEBUG nova.network.os_vif_util [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "address": "fa:16:3e:15:79:71", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3300e6b2-d3", "ovs_interfaceid": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:41:28 np0005486808 nova_compute[259627]: 2025-10-14 09:41:28.395 2 DEBUG nova.network.os_vif_util [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:79:71,bridge_name='br-int',has_traffic_filtering=True,id=3300e6b2-d3bc-432e-925e-1d837fab4a11,network=Network(d2d242a4-fdb7-41f9-9ee7-4e1b17687d68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3300e6b2-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:41:28 np0005486808 nova_compute[259627]: 2025-10-14 09:41:28.397 2 DEBUG nova.objects.instance [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'pci_devices' on Instance uuid 3e37b67b-524c-4098-9609-97b0b31e72c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:41:28 np0005486808 nova_compute[259627]: 2025-10-14 09:41:28.420 2 DEBUG nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:41:28 np0005486808 nova_compute[259627]:  <uuid>3e37b67b-524c-4098-9609-97b0b31e72c4</uuid>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:  <name>instance-00000097</name>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:41:28 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:      <nova:name>tempest-TestGettingAddress-server-2135470383</nova:name>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:41:27</nova:creationTime>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:41:28 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:        <nova:user uuid="30cd4a7832e74a8ba07238937405c4ea">tempest-TestGettingAddress-262346303-project-member</nova:user>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:        <nova:project uuid="878ad3f42dd94bef8cc66ab3d67f03bf">tempest-TestGettingAddress-262346303</nova:project>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:        <nova:port uuid="3300e6b2-d3bc-432e-925e-1d837fab4a11">
Oct 14 05:41:28 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe15:7971" ipVersion="6"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe15:7971" ipVersion="6"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:      <entry name="serial">3e37b67b-524c-4098-9609-97b0b31e72c4</entry>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:      <entry name="uuid">3e37b67b-524c-4098-9609-97b0b31e72c4</entry>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:41:28 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/3e37b67b-524c-4098-9609-97b0b31e72c4_disk">
Oct 14 05:41:28 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:41:28 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:41:28 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/3e37b67b-524c-4098-9609-97b0b31e72c4_disk.config">
Oct 14 05:41:28 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:41:28 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:41:28 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:15:79:71"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:      <target dev="tap3300e6b2-d3"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:41:28 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/3e37b67b-524c-4098-9609-97b0b31e72c4/console.log" append="off"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:41:28 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:41:28 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:41:28 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:41:28 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:41:28 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:41:28 np0005486808 nova_compute[259627]: 2025-10-14 09:41:28.422 2 DEBUG nova.compute.manager [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Preparing to wait for external event network-vif-plugged-3300e6b2-d3bc-432e-925e-1d837fab4a11 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:41:28 np0005486808 nova_compute[259627]: 2025-10-14 09:41:28.423 2 DEBUG oslo_concurrency.lockutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "3e37b67b-524c-4098-9609-97b0b31e72c4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:41:28 np0005486808 nova_compute[259627]: 2025-10-14 09:41:28.423 2 DEBUG oslo_concurrency.lockutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "3e37b67b-524c-4098-9609-97b0b31e72c4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:41:28 np0005486808 nova_compute[259627]: 2025-10-14 09:41:28.423 2 DEBUG oslo_concurrency.lockutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "3e37b67b-524c-4098-9609-97b0b31e72c4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:41:28 np0005486808 nova_compute[259627]: 2025-10-14 09:41:28.424 2 DEBUG nova.virt.libvirt.vif [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:41:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2135470383',display_name='tempest-TestGettingAddress-server-2135470383',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2135470383',id=151,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMlxPr0cTZ8keQZTmg22ARWYe0xInByLkMD0cYdsQADwvB4qpdeYqfVEDfHnweHS+Sevb4jVHNf0dCnB4WMRrDqSKUzLaj+iYBC6weCzrTPlPHZbuNpXR0FfjzztQHSYpg==',key_name='tempest-TestGettingAddress-1794090516',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-ny0720r7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:41:21Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=3e37b67b-524c-4098-9609-97b0b31e72c4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "address": "fa:16:3e:15:79:71", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3300e6b2-d3", "ovs_interfaceid": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:41:28 np0005486808 nova_compute[259627]: 2025-10-14 09:41:28.425 2 DEBUG nova.network.os_vif_util [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "address": "fa:16:3e:15:79:71", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3300e6b2-d3", "ovs_interfaceid": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:41:28 np0005486808 nova_compute[259627]: 2025-10-14 09:41:28.426 2 DEBUG nova.network.os_vif_util [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:79:71,bridge_name='br-int',has_traffic_filtering=True,id=3300e6b2-d3bc-432e-925e-1d837fab4a11,network=Network(d2d242a4-fdb7-41f9-9ee7-4e1b17687d68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3300e6b2-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:41:28 np0005486808 nova_compute[259627]: 2025-10-14 09:41:28.426 2 DEBUG os_vif [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:79:71,bridge_name='br-int',has_traffic_filtering=True,id=3300e6b2-d3bc-432e-925e-1d837fab4a11,network=Network(d2d242a4-fdb7-41f9-9ee7-4e1b17687d68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3300e6b2-d3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:41:28 np0005486808 nova_compute[259627]: 2025-10-14 09:41:28.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:41:28 np0005486808 nova_compute[259627]: 2025-10-14 09:41:28.431 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:41:28 np0005486808 nova_compute[259627]: 2025-10-14 09:41:28.432 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:41:28 np0005486808 nova_compute[259627]: 2025-10-14 09:41:28.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:41:28 np0005486808 nova_compute[259627]: 2025-10-14 09:41:28.436 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3300e6b2-d3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:41:28 np0005486808 nova_compute[259627]: 2025-10-14 09:41:28.437 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3300e6b2-d3, col_values=(('external_ids', {'iface-id': '3300e6b2-d3bc-432e-925e-1d837fab4a11', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:15:79:71', 'vm-uuid': '3e37b67b-524c-4098-9609-97b0b31e72c4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:41:28 np0005486808 nova_compute[259627]: 2025-10-14 09:41:28.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:41:28 np0005486808 NetworkManager[44885]: <info>  [1760434888.4406] manager: (tap3300e6b2-d3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/680)
Oct 14 05:41:28 np0005486808 nova_compute[259627]: 2025-10-14 09:41:28.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:41:28 np0005486808 nova_compute[259627]: 2025-10-14 09:41:28.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:41:28 np0005486808 nova_compute[259627]: 2025-10-14 09:41:28.451 2 INFO os_vif [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:79:71,bridge_name='br-int',has_traffic_filtering=True,id=3300e6b2-d3bc-432e-925e-1d837fab4a11,network=Network(d2d242a4-fdb7-41f9-9ee7-4e1b17687d68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3300e6b2-d3')#033[00m
Oct 14 05:41:28 np0005486808 nova_compute[259627]: 2025-10-14 09:41:28.534 2 DEBUG nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:41:28 np0005486808 nova_compute[259627]: 2025-10-14 09:41:28.535 2 DEBUG nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:41:28 np0005486808 nova_compute[259627]: 2025-10-14 09:41:28.536 2 DEBUG nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:15:79:71, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:41:28 np0005486808 nova_compute[259627]: 2025-10-14 09:41:28.537 2 INFO nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Using config drive#033[00m
Oct 14 05:41:28 np0005486808 nova_compute[259627]: 2025-10-14 09:41:28.576 2 DEBUG nova.storage.rbd_utils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 3e37b67b-524c-4098-9609-97b0b31e72c4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:41:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2650: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:41:28 np0005486808 nova_compute[259627]: 2025-10-14 09:41:28.958 2 INFO nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Creating config drive at /var/lib/nova/instances/3e37b67b-524c-4098-9609-97b0b31e72c4/disk.config
Oct 14 05:41:28 np0005486808 nova_compute[259627]: 2025-10-14 09:41:28.966 2 DEBUG oslo_concurrency.processutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3e37b67b-524c-4098-9609-97b0b31e72c4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv65pca3j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:41:29 np0005486808 nova_compute[259627]: 2025-10-14 09:41:29.142 2 DEBUG oslo_concurrency.processutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3e37b67b-524c-4098-9609-97b0b31e72c4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv65pca3j" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:41:29 np0005486808 nova_compute[259627]: 2025-10-14 09:41:29.175 2 DEBUG nova.storage.rbd_utils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 3e37b67b-524c-4098-9609-97b0b31e72c4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:41:29 np0005486808 nova_compute[259627]: 2025-10-14 09:41:29.179 2 DEBUG oslo_concurrency.processutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3e37b67b-524c-4098-9609-97b0b31e72c4/disk.config 3e37b67b-524c-4098-9609-97b0b31e72c4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:41:29 np0005486808 nova_compute[259627]: 2025-10-14 09:41:29.419 2 DEBUG oslo_concurrency.processutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3e37b67b-524c-4098-9609-97b0b31e72c4/disk.config 3e37b67b-524c-4098-9609-97b0b31e72c4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.240s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:41:29 np0005486808 nova_compute[259627]: 2025-10-14 09:41:29.421 2 INFO nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Deleting local config drive /var/lib/nova/instances/3e37b67b-524c-4098-9609-97b0b31e72c4/disk.config because it was imported into RBD.
Oct 14 05:41:29 np0005486808 kernel: tap3300e6b2-d3: entered promiscuous mode
Oct 14 05:41:29 np0005486808 NetworkManager[44885]: <info>  [1760434889.4972] manager: (tap3300e6b2-d3): new Tun device (/org/freedesktop/NetworkManager/Devices/681)
Oct 14 05:41:29 np0005486808 ovn_controller[152662]: 2025-10-14T09:41:29Z|01655|binding|INFO|Claiming lport 3300e6b2-d3bc-432e-925e-1d837fab4a11 for this chassis.
Oct 14 05:41:29 np0005486808 ovn_controller[152662]: 2025-10-14T09:41:29Z|01656|binding|INFO|3300e6b2-d3bc-432e-925e-1d837fab4a11: Claiming fa:16:3e:15:79:71 10.100.0.10 2001:db8:0:1:f816:3eff:fe15:7971 2001:db8::f816:3eff:fe15:7971
Oct 14 05:41:29 np0005486808 nova_compute[259627]: 2025-10-14 09:41:29.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:41:29 np0005486808 nova_compute[259627]: 2025-10-14 09:41:29.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:41:29 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:41:29.541 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:79:71 10.100.0.10 2001:db8:0:1:f816:3eff:fe15:7971 2001:db8::f816:3eff:fe15:7971'], port_security=['fa:16:3e:15:79:71 10.100.0.10 2001:db8:0:1:f816:3eff:fe15:7971 2001:db8::f816:3eff:fe15:7971'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28 2001:db8:0:1:f816:3eff:fe15:7971/64 2001:db8::f816:3eff:fe15:7971/64', 'neutron:device_id': '3e37b67b-524c-4098-9609-97b0b31e72c4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0efff049-0165-4ea1-b912-974883009802', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42692f54-9e26-49af-aa3a-b5cb78b0a2ae, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=3300e6b2-d3bc-432e-925e-1d837fab4a11) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 05:41:29 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:41:29.542 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 3300e6b2-d3bc-432e-925e-1d837fab4a11 in datapath d2d242a4-fdb7-41f9-9ee7-4e1b17687d68 bound to our chassis
Oct 14 05:41:29 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:41:29.543 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d2d242a4-fdb7-41f9-9ee7-4e1b17687d68
Oct 14 05:41:29 np0005486808 ovn_controller[152662]: 2025-10-14T09:41:29Z|01657|binding|INFO|Setting lport 3300e6b2-d3bc-432e-925e-1d837fab4a11 ovn-installed in OVS
Oct 14 05:41:29 np0005486808 ovn_controller[152662]: 2025-10-14T09:41:29Z|01658|binding|INFO|Setting lport 3300e6b2-d3bc-432e-925e-1d837fab4a11 up in Southbound
Oct 14 05:41:29 np0005486808 nova_compute[259627]: 2025-10-14 09:41:29.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:41:29 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:41:29.566 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8800aa87-5674-42c4-969b-2c7afec82870]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:41:29 np0005486808 systemd-machined[214636]: New machine qemu-184-instance-00000097.
Oct 14 05:41:29 np0005486808 systemd[1]: Started Virtual Machine qemu-184-instance-00000097.
Oct 14 05:41:29 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:41:29.599 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[bf6b5b3d-2385-452b-87db-e90dd523f1a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:41:29 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:41:29.603 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[ccd70f13-5be8-4ae9-ad01-7379c5a79cfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:41:29 np0005486808 systemd-udevd[419351]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:41:29 np0005486808 NetworkManager[44885]: <info>  [1760434889.6359] device (tap3300e6b2-d3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:41:29 np0005486808 NetworkManager[44885]: <info>  [1760434889.6367] device (tap3300e6b2-d3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:41:29 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:41:29.635 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a67f4375-7cda-4c0d-8fcd-2a732316a56e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:41:29 np0005486808 podman[419315]: 2025-10-14 09:41:29.651191076 +0000 UTC m=+0.076209411 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible)
Oct 14 05:41:29 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:41:29.661 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[8e24309c-169b-4d28-8f24-d9d33c51bb2c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd2d242a4-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:88:1e:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 25, 'tx_packets': 5, 'rx_bytes': 2230, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 25, 'tx_packets': 5, 'rx_bytes': 2230, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 468], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 863387, 'reachable_time': 34263, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 23, 'inoctets': 1824, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 23, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1824, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 23, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 419363, 'error': None, 'target': 'ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:41:29 np0005486808 podman[419313]: 2025-10-14 09:41:29.677787289 +0000 UTC m=+0.108611057 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 05:41:29 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:41:29.676 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[33d2c6dd-c76a-4ade-b008-aa4866a2783d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd2d242a4-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 863403, 'tstamp': 863403}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 419368, 'error': None, 'target': 'ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd2d242a4-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 863407, 'tstamp': 863407}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 419368, 'error': None, 'target': 'ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:41:29 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:41:29.678 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2d242a4-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:41:29 np0005486808 nova_compute[259627]: 2025-10-14 09:41:29.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:41:29 np0005486808 nova_compute[259627]: 2025-10-14 09:41:29.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:41:29 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:41:29.681 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd2d242a4-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:41:29 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:41:29.681 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 05:41:29 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:41:29.681 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd2d242a4-f0, col_values=(('external_ids', {'iface-id': '1bf2b41c-8c9f-45cd-aeb9-2459d1373791'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:41:29 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:41:29.681 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 14 05:41:29 np0005486808 nova_compute[259627]: 2025-10-14 09:41:29.840 2 DEBUG nova.network.neutron [req-b954894c-0bfa-4308-8ac1-3ec80b468e47 req-b51ff910-40a1-404b-9f6c-1c7263762de4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Updated VIF entry in instance network info cache for port 3300e6b2-d3bc-432e-925e-1d837fab4a11. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 05:41:29 np0005486808 nova_compute[259627]: 2025-10-14 09:41:29.840 2 DEBUG nova.network.neutron [req-b954894c-0bfa-4308-8ac1-3ec80b468e47 req-b51ff910-40a1-404b-9f6c-1c7263762de4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Updating instance_info_cache with network_info: [{"id": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "address": "fa:16:3e:15:79:71", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3300e6b2-d3", "ovs_interfaceid": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 05:41:29 np0005486808 nova_compute[259627]: 2025-10-14 09:41:29.857 2 DEBUG oslo_concurrency.lockutils [req-b954894c-0bfa-4308-8ac1-3ec80b468e47 req-b51ff910-40a1-404b-9f6c-1c7263762de4 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-3e37b67b-524c-4098-9609-97b0b31e72c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 05:41:29 np0005486808 nova_compute[259627]: 2025-10-14 09:41:29.962 2 DEBUG nova.compute.manager [req-5afec603-1076-4618-8cc7-5c91804e6ca0 req-9a7ac775-3f97-4b37-9f08-0f222e70e2b5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Received event network-vif-plugged-3300e6b2-d3bc-432e-925e-1d837fab4a11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:41:29 np0005486808 nova_compute[259627]: 2025-10-14 09:41:29.963 2 DEBUG oslo_concurrency.lockutils [req-5afec603-1076-4618-8cc7-5c91804e6ca0 req-9a7ac775-3f97-4b37-9f08-0f222e70e2b5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3e37b67b-524c-4098-9609-97b0b31e72c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:41:29 np0005486808 nova_compute[259627]: 2025-10-14 09:41:29.964 2 DEBUG oslo_concurrency.lockutils [req-5afec603-1076-4618-8cc7-5c91804e6ca0 req-9a7ac775-3f97-4b37-9f08-0f222e70e2b5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3e37b67b-524c-4098-9609-97b0b31e72c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:41:29 np0005486808 nova_compute[259627]: 2025-10-14 09:41:29.965 2 DEBUG oslo_concurrency.lockutils [req-5afec603-1076-4618-8cc7-5c91804e6ca0 req-9a7ac775-3f97-4b37-9f08-0f222e70e2b5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3e37b67b-524c-4098-9609-97b0b31e72c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:41:29 np0005486808 nova_compute[259627]: 2025-10-14 09:41:29.966 2 DEBUG nova.compute.manager [req-5afec603-1076-4618-8cc7-5c91804e6ca0 req-9a7ac775-3f97-4b37-9f08-0f222e70e2b5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Processing event network-vif-plugged-3300e6b2-d3bc-432e-925e-1d837fab4a11 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 14 05:41:30 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:41:30.066 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 05:41:30 np0005486808 nova_compute[259627]: 2025-10-14 09:41:30.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:41:30 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:41:30.068 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 05:41:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2651: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 14 05:41:30 np0005486808 nova_compute[259627]: 2025-10-14 09:41:30.857 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434890.8565214, 3e37b67b-524c-4098-9609-97b0b31e72c4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:41:30 np0005486808 nova_compute[259627]: 2025-10-14 09:41:30.859 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] VM Started (Lifecycle Event)
Oct 14 05:41:30 np0005486808 nova_compute[259627]: 2025-10-14 09:41:30.861 2 DEBUG nova.compute.manager [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 14 05:41:30 np0005486808 nova_compute[259627]: 2025-10-14 09:41:30.863 2 DEBUG nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 14 05:41:30 np0005486808 nova_compute[259627]: 2025-10-14 09:41:30.866 2 INFO nova.virt.libvirt.driver [-] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Instance spawned successfully.
Oct 14 05:41:30 np0005486808 nova_compute[259627]: 2025-10-14 09:41:30.866 2 DEBUG nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 14 05:41:30 np0005486808 nova_compute[259627]: 2025-10-14 09:41:30.897 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:41:30 np0005486808 nova_compute[259627]: 2025-10-14 09:41:30.901 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 14 05:41:30 np0005486808 nova_compute[259627]: 2025-10-14 09:41:30.908 2 DEBUG nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:41:30 np0005486808 nova_compute[259627]: 2025-10-14 09:41:30.909 2 DEBUG nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:41:30 np0005486808 nova_compute[259627]: 2025-10-14 09:41:30.909 2 DEBUG nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:41:30 np0005486808 nova_compute[259627]: 2025-10-14 09:41:30.909 2 DEBUG nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:41:30 np0005486808 nova_compute[259627]: 2025-10-14 09:41:30.910 2 DEBUG nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:41:30 np0005486808 nova_compute[259627]: 2025-10-14 09:41:30.910 2 DEBUG nova.virt.libvirt.driver [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 14 05:41:30 np0005486808 nova_compute[259627]: 2025-10-14 09:41:30.956 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 14 05:41:30 np0005486808 nova_compute[259627]: 2025-10-14 09:41:30.957 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434890.8569922, 3e37b67b-524c-4098-9609-97b0b31e72c4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 14 05:41:30 np0005486808 nova_compute[259627]: 2025-10-14 09:41:30.957 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] VM Paused (Lifecycle Event)
Oct 14 05:41:31 np0005486808 nova_compute[259627]: 2025-10-14 09:41:31.033 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 14 05:41:31 np0005486808 nova_compute[259627]: 2025-10-14 09:41:31.036 2 INFO nova.compute.manager [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Took 9.20 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:41:31 np0005486808 nova_compute[259627]: 2025-10-14 09:41:31.037 2 DEBUG nova.compute.manager [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:41:31 np0005486808 nova_compute[259627]: 2025-10-14 09:41:31.045 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434890.8630536, 3e37b67b-524c-4098-9609-97b0b31e72c4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:41:31 np0005486808 nova_compute[259627]: 2025-10-14 09:41:31.046 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:41:31 np0005486808 nova_compute[259627]: 2025-10-14 09:41:31.082 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:41:31 np0005486808 nova_compute[259627]: 2025-10-14 09:41:31.085 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:41:31 np0005486808 nova_compute[259627]: 2025-10-14 09:41:31.127 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:41:31 np0005486808 nova_compute[259627]: 2025-10-14 09:41:31.140 2 INFO nova.compute.manager [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Took 12.04 seconds to build instance.#033[00m
Oct 14 05:41:31 np0005486808 nova_compute[259627]: 2025-10-14 09:41:31.158 2 DEBUG oslo_concurrency.lockutils [None req-2313e351-9387-46b4-9bca-df9de44c6670 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "3e37b67b-524c-4098-9609-97b0b31e72c4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:41:32 np0005486808 nova_compute[259627]: 2025-10-14 09:41:32.154 2 DEBUG nova.compute.manager [req-c092d41d-4845-4836-ad21-fb5f114743c9 req-1b439b69-a0b4-42ad-b8b1-e31d4e0c935b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Received event network-vif-plugged-3300e6b2-d3bc-432e-925e-1d837fab4a11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:41:32 np0005486808 nova_compute[259627]: 2025-10-14 09:41:32.155 2 DEBUG oslo_concurrency.lockutils [req-c092d41d-4845-4836-ad21-fb5f114743c9 req-1b439b69-a0b4-42ad-b8b1-e31d4e0c935b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3e37b67b-524c-4098-9609-97b0b31e72c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:41:32 np0005486808 nova_compute[259627]: 2025-10-14 09:41:32.155 2 DEBUG oslo_concurrency.lockutils [req-c092d41d-4845-4836-ad21-fb5f114743c9 req-1b439b69-a0b4-42ad-b8b1-e31d4e0c935b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3e37b67b-524c-4098-9609-97b0b31e72c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:41:32 np0005486808 nova_compute[259627]: 2025-10-14 09:41:32.155 2 DEBUG oslo_concurrency.lockutils [req-c092d41d-4845-4836-ad21-fb5f114743c9 req-1b439b69-a0b4-42ad-b8b1-e31d4e0c935b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3e37b67b-524c-4098-9609-97b0b31e72c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:41:32 np0005486808 nova_compute[259627]: 2025-10-14 09:41:32.155 2 DEBUG nova.compute.manager [req-c092d41d-4845-4836-ad21-fb5f114743c9 req-1b439b69-a0b4-42ad-b8b1-e31d4e0c935b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] No waiting events found dispatching network-vif-plugged-3300e6b2-d3bc-432e-925e-1d837fab4a11 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:41:32 np0005486808 nova_compute[259627]: 2025-10-14 09:41:32.156 2 WARNING nova.compute.manager [req-c092d41d-4845-4836-ad21-fb5f114743c9 req-1b439b69-a0b4-42ad-b8b1-e31d4e0c935b 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Received unexpected event network-vif-plugged-3300e6b2-d3bc-432e-925e-1d837fab4a11 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:41:32 np0005486808 nova_compute[259627]: 2025-10-14 09:41:32.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:41:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2652: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 14 05:41:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:41:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:41:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:41:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:41:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:41:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:41:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:41:32
Oct 14 05:41:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:41:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:41:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'vms', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta', 'backups', 'images', 'cephfs.cephfs.data']
Oct 14 05:41:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:41:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:41:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:41:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:41:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:41:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:41:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:41:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:41:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:41:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:41:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:41:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:41:33 np0005486808 nova_compute[259627]: 2025-10-14 09:41:33.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:41:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2653: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 14 05:41:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2654: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 14 05:41:37 np0005486808 nova_compute[259627]: 2025-10-14 09:41:37.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:41:38 np0005486808 nova_compute[259627]: 2025-10-14 09:41:38.057 2 DEBUG nova.compute.manager [req-2044d707-febf-4a95-b590-dab8ade25766 req-9d56c999-77d6-4ba5-bc4f-6e1a74d3f5fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Received event network-changed-3300e6b2-d3bc-432e-925e-1d837fab4a11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:41:38 np0005486808 nova_compute[259627]: 2025-10-14 09:41:38.059 2 DEBUG nova.compute.manager [req-2044d707-febf-4a95-b590-dab8ade25766 req-9d56c999-77d6-4ba5-bc4f-6e1a74d3f5fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Refreshing instance network info cache due to event network-changed-3300e6b2-d3bc-432e-925e-1d837fab4a11. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 05:41:38 np0005486808 nova_compute[259627]: 2025-10-14 09:41:38.059 2 DEBUG oslo_concurrency.lockutils [req-2044d707-febf-4a95-b590-dab8ade25766 req-9d56c999-77d6-4ba5-bc4f-6e1a74d3f5fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-3e37b67b-524c-4098-9609-97b0b31e72c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 05:41:38 np0005486808 nova_compute[259627]: 2025-10-14 09:41:38.060 2 DEBUG oslo_concurrency.lockutils [req-2044d707-febf-4a95-b590-dab8ade25766 req-9d56c999-77d6-4ba5-bc4f-6e1a74d3f5fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-3e37b67b-524c-4098-9609-97b0b31e72c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 05:41:38 np0005486808 nova_compute[259627]: 2025-10-14 09:41:38.060 2 DEBUG nova.network.neutron [req-2044d707-febf-4a95-b590-dab8ade25766 req-9d56c999-77d6-4ba5-bc4f-6e1a74d3f5fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Refreshing network info cache for port 3300e6b2-d3bc-432e-925e-1d837fab4a11 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 05:41:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:41:38 np0005486808 nova_compute[259627]: 2025-10-14 09:41:38.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:41:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2655: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 05:41:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:41:40.070 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:41:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2656: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 14 05:41:41 np0005486808 nova_compute[259627]: 2025-10-14 09:41:41.393 2 DEBUG nova.network.neutron [req-2044d707-febf-4a95-b590-dab8ade25766 req-9d56c999-77d6-4ba5-bc4f-6e1a74d3f5fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Updated VIF entry in instance network info cache for port 3300e6b2-d3bc-432e-925e-1d837fab4a11. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 05:41:41 np0005486808 nova_compute[259627]: 2025-10-14 09:41:41.394 2 DEBUG nova.network.neutron [req-2044d707-febf-4a95-b590-dab8ade25766 req-9d56c999-77d6-4ba5-bc4f-6e1a74d3f5fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Updating instance_info_cache with network_info: [{"id": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "address": "fa:16:3e:15:79:71", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3300e6b2-d3", "ovs_interfaceid": "3300e6b2-d3bc-432e-925e-1d837fab4a11", 
"qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:41:41 np0005486808 nova_compute[259627]: 2025-10-14 09:41:41.427 2 DEBUG oslo_concurrency.lockutils [req-2044d707-febf-4a95-b590-dab8ade25766 req-9d56c999-77d6-4ba5-bc4f-6e1a74d3f5fc 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-3e37b67b-524c-4098-9609-97b0b31e72c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:41:41 np0005486808 ovn_controller[152662]: 2025-10-14T09:41:41Z|00200|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:15:79:71 10.100.0.10
Oct 14 05:41:41 np0005486808 ovn_controller[152662]: 2025-10-14T09:41:41Z|00201|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:15:79:71 10.100.0.10
Oct 14 05:41:42 np0005486808 nova_compute[259627]: 2025-10-14 09:41:42.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:41:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2657: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 KiB/s wr, 66 op/s
Oct 14 05:41:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:41:43 np0005486808 nova_compute[259627]: 2025-10-14 09:41:43.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:41:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:41:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:41:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:41:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:41:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011080044930650518 of space, bias 1.0, pg target 0.33240134791951553 quantized to 32 (current 32)
Oct 14 05:41:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:41:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:41:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:41:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:41:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:41:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 05:41:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:41:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:41:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:41:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:41:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:41:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:41:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:41:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:41:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:41:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:41:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:41:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:41:43 np0005486808 podman[419414]: 2025-10-14 09:41:43.701680452 +0000 UTC m=+0.099289607 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 05:41:43 np0005486808 podman[419413]: 2025-10-14 09:41:43.70361105 +0000 UTC m=+0.110071412 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 05:41:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2658: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 KiB/s wr, 66 op/s
Oct 14 05:41:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2659: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 130 op/s
Oct 14 05:41:46 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 05:41:46 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4800.0 total, 600.0 interval
Cumulative writes: 12K writes, 55K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
Cumulative WAL: 12K writes, 12K syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1334 writes, 5814 keys, 1334 commit groups, 1.0 writes per commit group, ingest: 8.65 MB, 0.01 MB/s
Interval WAL: 1334 writes, 1334 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0    101.1      0.64              0.25        38    0.017       0      0       0.0       0.0
  L6      1/0    7.90 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.6    184.3    155.0      1.95              1.04        37    0.053    224K    20K       0.0       0.0
 Sum      1/0    7.90 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.6    138.6    141.7      2.59              1.30        75    0.035    224K    20K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   7.6    146.9    146.7      0.28              0.18         8    0.035     31K   1981       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0    184.3    155.0      1.95              1.04        37    0.053    224K    20K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0    102.2      0.64              0.25        37    0.017       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      7.3      0.01              0.00         1    0.007       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 4800.0 total, 600.0 interval
Flush(GB): cumulative 0.064, interval 0.005
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.36 GB write, 0.08 MB/s write, 0.35 GB read, 0.07 MB/s read, 2.6 seconds
Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.3 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5646f3b2b1f0#2 capacity: 304.00 MB usage: 40.86 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000354 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(2661,39.20 MB,12.8937%) FilterBlock(76,633.86 KB,0.20362%) IndexBlock(76,1.04 MB,0.342299%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Oct 14 05:41:47 np0005486808 nova_compute[259627]: 2025-10-14 09:41:47.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:41:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:41:48 np0005486808 nova_compute[259627]: 2025-10-14 09:41:48.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:41:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2660: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 05:41:49 np0005486808 nova_compute[259627]: 2025-10-14 09:41:49.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:41:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2661: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 14 05:41:52 np0005486808 nova_compute[259627]: 2025-10-14 09:41:52.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:41:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2662: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 05:41:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:41:53 np0005486808 nova_compute[259627]: 2025-10-14 09:41:53.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:41:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2663: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 05:41:54 np0005486808 nova_compute[259627]: 2025-10-14 09:41:54.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:41:54 np0005486808 nova_compute[259627]: 2025-10-14 09:41:54.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:41:55 np0005486808 nova_compute[259627]: 2025-10-14 09:41:55.937 2 DEBUG nova.compute.manager [req-d3929f3c-2fa7-46ce-b7ea-e31d94478ab5 req-192c70a6-4fe8-4de1-a0b5-5ed802f99fa5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Received event network-changed-3300e6b2-d3bc-432e-925e-1d837fab4a11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:41:55 np0005486808 nova_compute[259627]: 2025-10-14 09:41:55.938 2 DEBUG nova.compute.manager [req-d3929f3c-2fa7-46ce-b7ea-e31d94478ab5 req-192c70a6-4fe8-4de1-a0b5-5ed802f99fa5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Refreshing instance network info cache due to event network-changed-3300e6b2-d3bc-432e-925e-1d837fab4a11. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:41:55 np0005486808 nova_compute[259627]: 2025-10-14 09:41:55.939 2 DEBUG oslo_concurrency.lockutils [req-d3929f3c-2fa7-46ce-b7ea-e31d94478ab5 req-192c70a6-4fe8-4de1-a0b5-5ed802f99fa5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-3e37b67b-524c-4098-9609-97b0b31e72c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:41:55 np0005486808 nova_compute[259627]: 2025-10-14 09:41:55.939 2 DEBUG oslo_concurrency.lockutils [req-d3929f3c-2fa7-46ce-b7ea-e31d94478ab5 req-192c70a6-4fe8-4de1-a0b5-5ed802f99fa5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-3e37b67b-524c-4098-9609-97b0b31e72c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:41:55 np0005486808 nova_compute[259627]: 2025-10-14 09:41:55.940 2 DEBUG nova.network.neutron [req-d3929f3c-2fa7-46ce-b7ea-e31d94478ab5 req-192c70a6-4fe8-4de1-a0b5-5ed802f99fa5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Refreshing network info cache for port 3300e6b2-d3bc-432e-925e-1d837fab4a11 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:41:55 np0005486808 nova_compute[259627]: 2025-10-14 09:41:55.976 2 DEBUG oslo_concurrency.lockutils [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "3e37b67b-524c-4098-9609-97b0b31e72c4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:41:55 np0005486808 nova_compute[259627]: 2025-10-14 09:41:55.977 2 DEBUG oslo_concurrency.lockutils [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "3e37b67b-524c-4098-9609-97b0b31e72c4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:41:55 np0005486808 nova_compute[259627]: 2025-10-14 09:41:55.977 2 DEBUG oslo_concurrency.lockutils [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "3e37b67b-524c-4098-9609-97b0b31e72c4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:41:55 np0005486808 nova_compute[259627]: 2025-10-14 09:41:55.978 2 DEBUG oslo_concurrency.lockutils [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "3e37b67b-524c-4098-9609-97b0b31e72c4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:41:55 np0005486808 nova_compute[259627]: 2025-10-14 09:41:55.979 2 DEBUG oslo_concurrency.lockutils [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "3e37b67b-524c-4098-9609-97b0b31e72c4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:41:55 np0005486808 nova_compute[259627]: 2025-10-14 09:41:55.980 2 INFO nova.compute.manager [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Terminating instance#033[00m
Oct 14 05:41:55 np0005486808 nova_compute[259627]: 2025-10-14 09:41:55.982 2 DEBUG nova.compute.manager [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:41:56 np0005486808 kernel: tap3300e6b2-d3 (unregistering): left promiscuous mode
Oct 14 05:41:56 np0005486808 NetworkManager[44885]: <info>  [1760434916.0405] device (tap3300e6b2-d3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:41:56 np0005486808 ovn_controller[152662]: 2025-10-14T09:41:56Z|01659|binding|INFO|Releasing lport 3300e6b2-d3bc-432e-925e-1d837fab4a11 from this chassis (sb_readonly=0)
Oct 14 05:41:56 np0005486808 ovn_controller[152662]: 2025-10-14T09:41:56Z|01660|binding|INFO|Setting lport 3300e6b2-d3bc-432e-925e-1d837fab4a11 down in Southbound
Oct 14 05:41:56 np0005486808 ovn_controller[152662]: 2025-10-14T09:41:56Z|01661|binding|INFO|Removing iface tap3300e6b2-d3 ovn-installed in OVS
Oct 14 05:41:56 np0005486808 nova_compute[259627]: 2025-10-14 09:41:56.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:41:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:41:56.071 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:79:71 10.100.0.10 2001:db8:0:1:f816:3eff:fe15:7971 2001:db8::f816:3eff:fe15:7971'], port_security=['fa:16:3e:15:79:71 10.100.0.10 2001:db8:0:1:f816:3eff:fe15:7971 2001:db8::f816:3eff:fe15:7971'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28 2001:db8:0:1:f816:3eff:fe15:7971/64 2001:db8::f816:3eff:fe15:7971/64', 'neutron:device_id': '3e37b67b-524c-4098-9609-97b0b31e72c4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0efff049-0165-4ea1-b912-974883009802', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42692f54-9e26-49af-aa3a-b5cb78b0a2ae, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=3300e6b2-d3bc-432e-925e-1d837fab4a11) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:41:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:41:56.073 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 3300e6b2-d3bc-432e-925e-1d837fab4a11 in datapath d2d242a4-fdb7-41f9-9ee7-4e1b17687d68 unbound from our chassis#033[00m
Oct 14 05:41:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:41:56.076 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d2d242a4-fdb7-41f9-9ee7-4e1b17687d68#033[00m
Oct 14 05:41:56 np0005486808 nova_compute[259627]: 2025-10-14 09:41:56.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:41:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:41:56.107 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f821d238-5727-4680-a844-1dfb66bb7991]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:41:56 np0005486808 systemd[1]: machine-qemu\x2d184\x2dinstance\x2d00000097.scope: Deactivated successfully.
Oct 14 05:41:56 np0005486808 systemd[1]: machine-qemu\x2d184\x2dinstance\x2d00000097.scope: Consumed 13.456s CPU time.
Oct 14 05:41:56 np0005486808 systemd-machined[214636]: Machine qemu-184-instance-00000097 terminated.
Oct 14 05:41:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:41:56.149 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[bb3c1ac1-a41a-4ddb-8fb2-0a9055d7670a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:41:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:41:56.152 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[d577cde1-d435-4268-ab03-0d4df158deaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:41:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:41:56.195 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[c04bc658-1f4b-4bf0-9e29-9288900f8495]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:41:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:41:56.215 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[982ab23f-1391-48d6-8b09-ce85ebfde597]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd2d242a4-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:88:1e:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 42, 'tx_packets': 7, 'rx_bytes': 3628, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 42, 'tx_packets': 7, 'rx_bytes': 3628, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 468], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 863387, 'reachable_time': 34263, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2928, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2928, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 419473, 'error': None, 'target': 'ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:41:56 np0005486808 nova_compute[259627]: 2025-10-14 09:41:56.232 2 INFO nova.virt.libvirt.driver [-] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Instance destroyed successfully.#033[00m
Oct 14 05:41:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:41:56.237 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[ff3f35f8-f4ce-47fb-a598-fb6e661c9f20]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd2d242a4-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 863403, 'tstamp': 863403}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 419480, 'error': None, 'target': 'ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd2d242a4-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 863407, 'tstamp': 863407}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 419480, 'error': None, 'target': 'ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:41:56 np0005486808 nova_compute[259627]: 2025-10-14 09:41:56.238 2 DEBUG nova.objects.instance [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'resources' on Instance uuid 3e37b67b-524c-4098-9609-97b0b31e72c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:41:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:41:56.239 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2d242a4-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:41:56 np0005486808 nova_compute[259627]: 2025-10-14 09:41:56.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:41:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:41:56.246 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd2d242a4-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:41:56 np0005486808 nova_compute[259627]: 2025-10-14 09:41:56.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:41:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:41:56.247 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:41:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:41:56.247 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd2d242a4-f0, col_values=(('external_ids', {'iface-id': '1bf2b41c-8c9f-45cd-aeb9-2459d1373791'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:41:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:41:56.248 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:41:56 np0005486808 nova_compute[259627]: 2025-10-14 09:41:56.253 2 DEBUG nova.virt.libvirt.vif [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:41:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2135470383',display_name='tempest-TestGettingAddress-server-2135470383',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2135470383',id=151,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMlxPr0cTZ8keQZTmg22ARWYe0xInByLkMD0cYdsQADwvB4qpdeYqfVEDfHnweHS+Sevb4jVHNf0dCnB4WMRrDqSKUzLaj+iYBC6weCzrTPlPHZbuNpXR0FfjzztQHSYpg==',key_name='tempest-TestGettingAddress-1794090516',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:41:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-ny0720r7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:41:31Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=3e37b67b-524c-4098-9609-97b0b31e72c4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "address": "fa:16:3e:15:79:71", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3300e6b2-d3", "ovs_interfaceid": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:41:56 np0005486808 nova_compute[259627]: 2025-10-14 09:41:56.254 2 DEBUG nova.network.os_vif_util [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "address": "fa:16:3e:15:79:71", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3300e6b2-d3", "ovs_interfaceid": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:41:56 np0005486808 nova_compute[259627]: 2025-10-14 09:41:56.255 2 DEBUG nova.network.os_vif_util [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:15:79:71,bridge_name='br-int',has_traffic_filtering=True,id=3300e6b2-d3bc-432e-925e-1d837fab4a11,network=Network(d2d242a4-fdb7-41f9-9ee7-4e1b17687d68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3300e6b2-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:41:56 np0005486808 nova_compute[259627]: 2025-10-14 09:41:56.256 2 DEBUG os_vif [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:15:79:71,bridge_name='br-int',has_traffic_filtering=True,id=3300e6b2-d3bc-432e-925e-1d837fab4a11,network=Network(d2d242a4-fdb7-41f9-9ee7-4e1b17687d68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3300e6b2-d3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:41:56 np0005486808 nova_compute[259627]: 2025-10-14 09:41:56.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:41:56 np0005486808 nova_compute[259627]: 2025-10-14 09:41:56.260 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3300e6b2-d3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:41:56 np0005486808 nova_compute[259627]: 2025-10-14 09:41:56.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:41:56 np0005486808 nova_compute[259627]: 2025-10-14 09:41:56.267 2 INFO os_vif [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:15:79:71,bridge_name='br-int',has_traffic_filtering=True,id=3300e6b2-d3bc-432e-925e-1d837fab4a11,network=Network(d2d242a4-fdb7-41f9-9ee7-4e1b17687d68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3300e6b2-d3')#033[00m
Oct 14 05:41:56 np0005486808 nova_compute[259627]: 2025-10-14 09:41:56.708 2 INFO nova.virt.libvirt.driver [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Deleting instance files /var/lib/nova/instances/3e37b67b-524c-4098-9609-97b0b31e72c4_del#033[00m
Oct 14 05:41:56 np0005486808 nova_compute[259627]: 2025-10-14 09:41:56.709 2 INFO nova.virt.libvirt.driver [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Deletion of /var/lib/nova/instances/3e37b67b-524c-4098-9609-97b0b31e72c4_del complete#033[00m
Oct 14 05:41:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2664: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 14 05:41:56 np0005486808 nova_compute[259627]: 2025-10-14 09:41:56.776 2 INFO nova.compute.manager [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:41:56 np0005486808 nova_compute[259627]: 2025-10-14 09:41:56.777 2 DEBUG oslo.service.loopingcall [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:41:56 np0005486808 nova_compute[259627]: 2025-10-14 09:41:56.778 2 DEBUG nova.compute.manager [-] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:41:56 np0005486808 nova_compute[259627]: 2025-10-14 09:41:56.779 2 DEBUG nova.network.neutron [-] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:41:57 np0005486808 nova_compute[259627]: 2025-10-14 09:41:57.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:41:57 np0005486808 nova_compute[259627]: 2025-10-14 09:41:57.531 2 DEBUG nova.network.neutron [-] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:41:57 np0005486808 nova_compute[259627]: 2025-10-14 09:41:57.554 2 INFO nova.compute.manager [-] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Took 0.78 seconds to deallocate network for instance.#033[00m
Oct 14 05:41:57 np0005486808 nova_compute[259627]: 2025-10-14 09:41:57.615 2 DEBUG oslo_concurrency.lockutils [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:41:57 np0005486808 nova_compute[259627]: 2025-10-14 09:41:57.615 2 DEBUG oslo_concurrency.lockutils [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:41:57 np0005486808 nova_compute[259627]: 2025-10-14 09:41:57.660 2 DEBUG nova.compute.manager [req-1075a146-5d77-4878-96d7-ce0ebb145982 req-5b0ce4f4-642a-49e8-a621-099a3437230c 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Received event network-vif-deleted-3300e6b2-d3bc-432e-925e-1d837fab4a11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:41:57 np0005486808 nova_compute[259627]: 2025-10-14 09:41:57.715 2 DEBUG oslo_concurrency.processutils [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:41:57 np0005486808 nova_compute[259627]: 2025-10-14 09:41:57.968 2 DEBUG nova.network.neutron [req-d3929f3c-2fa7-46ce-b7ea-e31d94478ab5 req-192c70a6-4fe8-4de1-a0b5-5ed802f99fa5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Updated VIF entry in instance network info cache for port 3300e6b2-d3bc-432e-925e-1d837fab4a11. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:41:57 np0005486808 nova_compute[259627]: 2025-10-14 09:41:57.969 2 DEBUG nova.network.neutron [req-d3929f3c-2fa7-46ce-b7ea-e31d94478ab5 req-192c70a6-4fe8-4de1-a0b5-5ed802f99fa5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Updating instance_info_cache with network_info: [{"id": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "address": "fa:16:3e:15:79:71", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:7971", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3300e6b2-d3", "ovs_interfaceid": "3300e6b2-d3bc-432e-925e-1d837fab4a11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:41:57 np0005486808 nova_compute[259627]: 2025-10-14 09:41:57.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:41:58 np0005486808 nova_compute[259627]: 2025-10-14 09:41:57.999 2 DEBUG oslo_concurrency.lockutils [req-d3929f3c-2fa7-46ce-b7ea-e31d94478ab5 req-192c70a6-4fe8-4de1-a0b5-5ed802f99fa5 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-3e37b67b-524c-4098-9609-97b0b31e72c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:41:58 np0005486808 nova_compute[259627]: 2025-10-14 09:41:58.003 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:41:58 np0005486808 nova_compute[259627]: 2025-10-14 09:41:58.016 2 DEBUG nova.compute.manager [req-ce74c4e7-1730-41bd-89ca-24e905153529 req-fd7e0f9f-d268-4983-958c-dde5c9020893 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Received event network-vif-unplugged-3300e6b2-d3bc-432e-925e-1d837fab4a11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:41:58 np0005486808 nova_compute[259627]: 2025-10-14 09:41:58.017 2 DEBUG oslo_concurrency.lockutils [req-ce74c4e7-1730-41bd-89ca-24e905153529 req-fd7e0f9f-d268-4983-958c-dde5c9020893 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3e37b67b-524c-4098-9609-97b0b31e72c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:41:58 np0005486808 nova_compute[259627]: 2025-10-14 09:41:58.017 2 DEBUG oslo_concurrency.lockutils [req-ce74c4e7-1730-41bd-89ca-24e905153529 req-fd7e0f9f-d268-4983-958c-dde5c9020893 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3e37b67b-524c-4098-9609-97b0b31e72c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:41:58 np0005486808 nova_compute[259627]: 2025-10-14 09:41:58.017 2 DEBUG oslo_concurrency.lockutils [req-ce74c4e7-1730-41bd-89ca-24e905153529 req-fd7e0f9f-d268-4983-958c-dde5c9020893 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3e37b67b-524c-4098-9609-97b0b31e72c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:41:58 np0005486808 nova_compute[259627]: 2025-10-14 09:41:58.018 2 DEBUG nova.compute.manager [req-ce74c4e7-1730-41bd-89ca-24e905153529 req-fd7e0f9f-d268-4983-958c-dde5c9020893 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] No waiting events found dispatching network-vif-unplugged-3300e6b2-d3bc-432e-925e-1d837fab4a11 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:41:58 np0005486808 nova_compute[259627]: 2025-10-14 09:41:58.018 2 WARNING nova.compute.manager [req-ce74c4e7-1730-41bd-89ca-24e905153529 req-fd7e0f9f-d268-4983-958c-dde5c9020893 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Received unexpected event network-vif-unplugged-3300e6b2-d3bc-432e-925e-1d837fab4a11 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:41:58 np0005486808 nova_compute[259627]: 2025-10-14 09:41:58.018 2 DEBUG nova.compute.manager [req-ce74c4e7-1730-41bd-89ca-24e905153529 req-fd7e0f9f-d268-4983-958c-dde5c9020893 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Received event network-vif-plugged-3300e6b2-d3bc-432e-925e-1d837fab4a11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:41:58 np0005486808 nova_compute[259627]: 2025-10-14 09:41:58.018 2 DEBUG oslo_concurrency.lockutils [req-ce74c4e7-1730-41bd-89ca-24e905153529 req-fd7e0f9f-d268-4983-958c-dde5c9020893 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "3e37b67b-524c-4098-9609-97b0b31e72c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:41:58 np0005486808 nova_compute[259627]: 2025-10-14 09:41:58.018 2 DEBUG oslo_concurrency.lockutils [req-ce74c4e7-1730-41bd-89ca-24e905153529 req-fd7e0f9f-d268-4983-958c-dde5c9020893 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3e37b67b-524c-4098-9609-97b0b31e72c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:41:58 np0005486808 nova_compute[259627]: 2025-10-14 09:41:58.019 2 DEBUG oslo_concurrency.lockutils [req-ce74c4e7-1730-41bd-89ca-24e905153529 req-fd7e0f9f-d268-4983-958c-dde5c9020893 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "3e37b67b-524c-4098-9609-97b0b31e72c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:41:58 np0005486808 nova_compute[259627]: 2025-10-14 09:41:58.019 2 DEBUG nova.compute.manager [req-ce74c4e7-1730-41bd-89ca-24e905153529 req-fd7e0f9f-d268-4983-958c-dde5c9020893 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] No waiting events found dispatching network-vif-plugged-3300e6b2-d3bc-432e-925e-1d837fab4a11 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:41:58 np0005486808 nova_compute[259627]: 2025-10-14 09:41:58.019 2 WARNING nova.compute.manager [req-ce74c4e7-1730-41bd-89ca-24e905153529 req-fd7e0f9f-d268-4983-958c-dde5c9020893 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Received unexpected event network-vif-plugged-3300e6b2-d3bc-432e-925e-1d837fab4a11 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:41:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:41:58 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4231750227' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:41:58 np0005486808 nova_compute[259627]: 2025-10-14 09:41:58.178 2 DEBUG oslo_concurrency.processutils [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:41:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:41:58 np0005486808 nova_compute[259627]: 2025-10-14 09:41:58.183 2 DEBUG nova.compute.provider_tree [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:41:58 np0005486808 nova_compute[259627]: 2025-10-14 09:41:58.199 2 DEBUG nova.scheduler.client.report [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:41:58 np0005486808 nova_compute[259627]: 2025-10-14 09:41:58.216 2 DEBUG oslo_concurrency.lockutils [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:41:58 np0005486808 nova_compute[259627]: 2025-10-14 09:41:58.218 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.215s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:41:58 np0005486808 nova_compute[259627]: 2025-10-14 09:41:58.218 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:41:58 np0005486808 nova_compute[259627]: 2025-10-14 09:41:58.218 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:41:58 np0005486808 nova_compute[259627]: 2025-10-14 09:41:58.219 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:41:58 np0005486808 nova_compute[259627]: 2025-10-14 09:41:58.302 2 INFO nova.scheduler.client.report [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Deleted allocations for instance 3e37b67b-524c-4098-9609-97b0b31e72c4#033[00m
Oct 14 05:41:58 np0005486808 nova_compute[259627]: 2025-10-14 09:41:58.368 2 DEBUG oslo_concurrency.lockutils [None req-438ec4a6-3d17-4d29-a940-069dceaddea9 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "3e37b67b-524c-4098-9609-97b0b31e72c4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.392s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:41:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:41:58 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1344874566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:41:58 np0005486808 nova_compute[259627]: 2025-10-14 09:41:58.694 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:41:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2665: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 21 KiB/s wr, 2 op/s
Oct 14 05:41:58 np0005486808 nova_compute[259627]: 2025-10-14 09:41:58.774 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:41:58 np0005486808 nova_compute[259627]: 2025-10-14 09:41:58.775 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:41:59 np0005486808 nova_compute[259627]: 2025-10-14 09:41:59.034 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:41:59 np0005486808 nova_compute[259627]: 2025-10-14 09:41:59.036 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3384MB free_disk=59.897186279296875GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:41:59 np0005486808 nova_compute[259627]: 2025-10-14 09:41:59.037 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:41:59 np0005486808 nova_compute[259627]: 2025-10-14 09:41:59.037 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:41:59 np0005486808 nova_compute[259627]: 2025-10-14 09:41:59.136 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance 0239a56b-babd-4c44-b52b-ade80229be78 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:41:59 np0005486808 nova_compute[259627]: 2025-10-14 09:41:59.137 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:41:59 np0005486808 nova_compute[259627]: 2025-10-14 09:41:59.137 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:41:59 np0005486808 nova_compute[259627]: 2025-10-14 09:41:59.190 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:41:59 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:41:59 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4192242180' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:41:59 np0005486808 nova_compute[259627]: 2025-10-14 09:41:59.657 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:41:59 np0005486808 nova_compute[259627]: 2025-10-14 09:41:59.665 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:41:59 np0005486808 nova_compute[259627]: 2025-10-14 09:41:59.669 2 DEBUG oslo_concurrency.lockutils [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "0239a56b-babd-4c44-b52b-ade80229be78" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:41:59 np0005486808 nova_compute[259627]: 2025-10-14 09:41:59.670 2 DEBUG oslo_concurrency.lockutils [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "0239a56b-babd-4c44-b52b-ade80229be78" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:41:59 np0005486808 nova_compute[259627]: 2025-10-14 09:41:59.671 2 DEBUG oslo_concurrency.lockutils [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "0239a56b-babd-4c44-b52b-ade80229be78-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:41:59 np0005486808 nova_compute[259627]: 2025-10-14 09:41:59.671 2 DEBUG oslo_concurrency.lockutils [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "0239a56b-babd-4c44-b52b-ade80229be78-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:41:59 np0005486808 nova_compute[259627]: 2025-10-14 09:41:59.671 2 DEBUG oslo_concurrency.lockutils [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "0239a56b-babd-4c44-b52b-ade80229be78-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:41:59 np0005486808 nova_compute[259627]: 2025-10-14 09:41:59.674 2 INFO nova.compute.manager [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Terminating instance#033[00m
Oct 14 05:41:59 np0005486808 nova_compute[259627]: 2025-10-14 09:41:59.676 2 DEBUG nova.compute.manager [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:41:59 np0005486808 nova_compute[259627]: 2025-10-14 09:41:59.683 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:41:59 np0005486808 nova_compute[259627]: 2025-10-14 09:41:59.716 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:41:59 np0005486808 nova_compute[259627]: 2025-10-14 09:41:59.716 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:41:59 np0005486808 kernel: tap0b5d3762-db (unregistering): left promiscuous mode
Oct 14 05:41:59 np0005486808 NetworkManager[44885]: <info>  [1760434919.7277] device (tap0b5d3762-db): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:41:59 np0005486808 nova_compute[259627]: 2025-10-14 09:41:59.770 2 DEBUG nova.compute.manager [req-5ea70705-b641-4b2b-b75d-7f48475546e1 req-2469ffea-f612-46aa-8028-18ad40808937 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Received event network-changed-0b5d3762-db25-4cc9-90f3-79d3eb662378 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 14 05:41:59 np0005486808 nova_compute[259627]: 2025-10-14 09:41:59.771 2 DEBUG nova.compute.manager [req-5ea70705-b641-4b2b-b75d-7f48475546e1 req-2469ffea-f612-46aa-8028-18ad40808937 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Refreshing instance network info cache due to event network-changed-0b5d3762-db25-4cc9-90f3-79d3eb662378. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 14 05:41:59 np0005486808 nova_compute[259627]: 2025-10-14 09:41:59.772 2 DEBUG oslo_concurrency.lockutils [req-5ea70705-b641-4b2b-b75d-7f48475546e1 req-2469ffea-f612-46aa-8028-18ad40808937 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-0239a56b-babd-4c44-b52b-ade80229be78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 05:41:59 np0005486808 nova_compute[259627]: 2025-10-14 09:41:59.772 2 DEBUG oslo_concurrency.lockutils [req-5ea70705-b641-4b2b-b75d-7f48475546e1 req-2469ffea-f612-46aa-8028-18ad40808937 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-0239a56b-babd-4c44-b52b-ade80229be78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 05:41:59 np0005486808 nova_compute[259627]: 2025-10-14 09:41:59.772 2 DEBUG nova.network.neutron [req-5ea70705-b641-4b2b-b75d-7f48475546e1 req-2469ffea-f612-46aa-8028-18ad40808937 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Refreshing network info cache for port 0b5d3762-db25-4cc9-90f3-79d3eb662378 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 14 05:41:59 np0005486808 nova_compute[259627]: 2025-10-14 09:41:59.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:41:59 np0005486808 ovn_controller[152662]: 2025-10-14T09:41:59Z|01662|binding|INFO|Releasing lport 0b5d3762-db25-4cc9-90f3-79d3eb662378 from this chassis (sb_readonly=0)
Oct 14 05:41:59 np0005486808 ovn_controller[152662]: 2025-10-14T09:41:59Z|01663|binding|INFO|Setting lport 0b5d3762-db25-4cc9-90f3-79d3eb662378 down in Southbound
Oct 14 05:41:59 np0005486808 ovn_controller[152662]: 2025-10-14T09:41:59Z|01664|binding|INFO|Removing iface tap0b5d3762-db ovn-installed in OVS
Oct 14 05:41:59 np0005486808 nova_compute[259627]: 2025-10-14 09:41:59.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:41:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:41:59.813 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:ee:6c 10.100.0.12 2001:db8:0:1:f816:3eff:fe37:ee6c 2001:db8::f816:3eff:fe37:ee6c'], port_security=['fa:16:3e:37:ee:6c 10.100.0.12 2001:db8:0:1:f816:3eff:fe37:ee6c 2001:db8::f816:3eff:fe37:ee6c'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28 2001:db8:0:1:f816:3eff:fe37:ee6c/64 2001:db8::f816:3eff:fe37:ee6c/64', 'neutron:device_id': '0239a56b-babd-4c44-b52b-ade80229be78', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0efff049-0165-4ea1-b912-974883009802', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42692f54-9e26-49af-aa3a-b5cb78b0a2ae, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=0b5d3762-db25-4cc9-90f3-79d3eb662378) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 05:41:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:41:59.814 162547 INFO neutron.agent.ovn.metadata.agent [-] Port 0b5d3762-db25-4cc9-90f3-79d3eb662378 in datapath d2d242a4-fdb7-41f9-9ee7-4e1b17687d68 unbound from our chassis
Oct 14 05:41:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:41:59.815 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d2d242a4-fdb7-41f9-9ee7-4e1b17687d68, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 14 05:41:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:41:59.815 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4b85dad7-d1cf-4bc4-9eb7-906f3686835f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:41:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:41:59.816 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68 namespace which is not needed anymore
Oct 14 05:41:59 np0005486808 nova_compute[259627]: 2025-10-14 09:41:59.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:41:59 np0005486808 systemd[1]: machine-qemu\x2d183\x2dinstance\x2d00000096.scope: Deactivated successfully.
Oct 14 05:41:59 np0005486808 systemd[1]: machine-qemu\x2d183\x2dinstance\x2d00000096.scope: Consumed 15.381s CPU time.
Oct 14 05:41:59 np0005486808 systemd-machined[214636]: Machine qemu-183-instance-00000096 terminated.
Oct 14 05:41:59 np0005486808 nova_compute[259627]: 2025-10-14 09:41:59.912 2 INFO nova.virt.libvirt.driver [-] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Instance destroyed successfully.
Oct 14 05:41:59 np0005486808 nova_compute[259627]: 2025-10-14 09:41:59.913 2 DEBUG nova.objects.instance [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'resources' on Instance uuid 0239a56b-babd-4c44-b52b-ade80229be78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:41:59 np0005486808 nova_compute[259627]: 2025-10-14 09:41:59.928 2 DEBUG nova.virt.libvirt.vif [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:40:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-35816390',display_name='tempest-TestGettingAddress-server-35816390',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-35816390',id=150,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMlxPr0cTZ8keQZTmg22ARWYe0xInByLkMD0cYdsQADwvB4qpdeYqfVEDfHnweHS+Sevb4jVHNf0dCnB4WMRrDqSKUzLaj+iYBC6weCzrTPlPHZbuNpXR0FfjzztQHSYpg==',key_name='tempest-TestGettingAddress-1794090516',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:40:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-r3oxjpue',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:40:56Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=0239a56b-babd-4c44-b52b-ade80229be78,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "address": "fa:16:3e:37:ee:6c", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b5d3762-db", "ovs_interfaceid": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 14 05:41:59 np0005486808 nova_compute[259627]: 2025-10-14 09:41:59.928 2 DEBUG nova.network.os_vif_util [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "address": "fa:16:3e:37:ee:6c", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b5d3762-db", "ovs_interfaceid": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 14 05:41:59 np0005486808 nova_compute[259627]: 2025-10-14 09:41:59.929 2 DEBUG nova.network.os_vif_util [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:37:ee:6c,bridge_name='br-int',has_traffic_filtering=True,id=0b5d3762-db25-4cc9-90f3-79d3eb662378,network=Network(d2d242a4-fdb7-41f9-9ee7-4e1b17687d68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b5d3762-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 14 05:41:59 np0005486808 nova_compute[259627]: 2025-10-14 09:41:59.929 2 DEBUG os_vif [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:37:ee:6c,bridge_name='br-int',has_traffic_filtering=True,id=0b5d3762-db25-4cc9-90f3-79d3eb662378,network=Network(d2d242a4-fdb7-41f9-9ee7-4e1b17687d68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b5d3762-db') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 14 05:41:59 np0005486808 nova_compute[259627]: 2025-10-14 09:41:59.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:41:59 np0005486808 nova_compute[259627]: 2025-10-14 09:41:59.931 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b5d3762-db, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:41:59 np0005486808 podman[419575]: 2025-10-14 09:41:59.933472058 +0000 UTC m=+0.102515787 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd)
Oct 14 05:41:59 np0005486808 podman[419577]: 2025-10-14 09:41:59.93357129 +0000 UTC m=+0.110082082 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 14 05:41:59 np0005486808 nova_compute[259627]: 2025-10-14 09:41:59.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 14 05:41:59 np0005486808 nova_compute[259627]: 2025-10-14 09:41:59.937 2 INFO os_vif [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:37:ee:6c,bridge_name='br-int',has_traffic_filtering=True,id=0b5d3762-db25-4cc9-90f3-79d3eb662378,network=Network(d2d242a4-fdb7-41f9-9ee7-4e1b17687d68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b5d3762-db')
Oct 14 05:41:59 np0005486808 neutron-haproxy-ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68[417960]: [NOTICE]   (417964) : haproxy version is 2.8.14-c23fe91
Oct 14 05:41:59 np0005486808 neutron-haproxy-ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68[417960]: [NOTICE]   (417964) : path to executable is /usr/sbin/haproxy
Oct 14 05:41:59 np0005486808 neutron-haproxy-ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68[417960]: [WARNING]  (417964) : Exiting Master process...
Oct 14 05:41:59 np0005486808 neutron-haproxy-ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68[417960]: [WARNING]  (417964) : Exiting Master process...
Oct 14 05:41:59 np0005486808 neutron-haproxy-ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68[417960]: [ALERT]    (417964) : Current worker (417966) exited with code 143 (Terminated)
Oct 14 05:41:59 np0005486808 neutron-haproxy-ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68[417960]: [WARNING]  (417964) : All workers exited. Exiting... (0)
Oct 14 05:41:59 np0005486808 systemd[1]: libpod-7a9ac68f9ca6a9afb5d0b762993e61a8233cc81c50dc97c4bff92800724f2db4.scope: Deactivated successfully.
Oct 14 05:41:59 np0005486808 podman[419638]: 2025-10-14 09:41:59.980092842 +0000 UTC m=+0.059159143 container died 7a9ac68f9ca6a9afb5d0b762993e61a8233cc81c50dc97c4bff92800724f2db4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 14 05:42:00 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7a9ac68f9ca6a9afb5d0b762993e61a8233cc81c50dc97c4bff92800724f2db4-userdata-shm.mount: Deactivated successfully.
Oct 14 05:42:00 np0005486808 systemd[1]: var-lib-containers-storage-overlay-10b2f022e3216a77d69566ed9e97088f68d82c271e111cd888b10c372258ed61-merged.mount: Deactivated successfully.
Oct 14 05:42:00 np0005486808 podman[419638]: 2025-10-14 09:42:00.026507741 +0000 UTC m=+0.105574052 container cleanup 7a9ac68f9ca6a9afb5d0b762993e61a8233cc81c50dc97c4bff92800724f2db4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct 14 05:42:00 np0005486808 systemd[1]: libpod-conmon-7a9ac68f9ca6a9afb5d0b762993e61a8233cc81c50dc97c4bff92800724f2db4.scope: Deactivated successfully.
Oct 14 05:42:00 np0005486808 podman[419693]: 2025-10-14 09:42:00.098680362 +0000 UTC m=+0.050941541 container remove 7a9ac68f9ca6a9afb5d0b762993e61a8233cc81c50dc97c4bff92800724f2db4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:42:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:00.106 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[866a3d7b-10e5-441a-b99c-5c4f9116c943]: (4, ('Tue Oct 14 09:41:59 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68 (7a9ac68f9ca6a9afb5d0b762993e61a8233cc81c50dc97c4bff92800724f2db4)\n7a9ac68f9ca6a9afb5d0b762993e61a8233cc81c50dc97c4bff92800724f2db4\nTue Oct 14 09:42:00 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68 (7a9ac68f9ca6a9afb5d0b762993e61a8233cc81c50dc97c4bff92800724f2db4)\n7a9ac68f9ca6a9afb5d0b762993e61a8233cc81c50dc97c4bff92800724f2db4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:42:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:00.108 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7e23c71e-5a92-4a75-adf3-99d3cab54768]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:42:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:00.109 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2d242a4-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 14 05:42:00 np0005486808 nova_compute[259627]: 2025-10-14 09:42:00.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:42:00 np0005486808 kernel: tapd2d242a4-f0: left promiscuous mode
Oct 14 05:42:00 np0005486808 nova_compute[259627]: 2025-10-14 09:42:00.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:42:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:00.126 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[1c890a57-17a5-4e8b-ae1c-113336c2bbb1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:42:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:00.158 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[34d806fd-ef56-4cfe-bec1-6018988df5cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:42:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:00.159 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fc733c53-cb5d-492f-a09a-1214f351787e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:42:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:00.173 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[aee7bcfb-c8c7-4aa3-b8fd-4588443dc417]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 863378, 'reachable_time': 37524, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 419710, 'error': None, 'target': 'ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:42:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:00.176 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d2d242a4-fdb7-41f9-9ee7-4e1b17687d68 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 14 05:42:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:00.176 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[c0b90453-38d5-4737-a736-0bed004860ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 14 05:42:00 np0005486808 systemd[1]: run-netns-ovnmeta\x2dd2d242a4\x2dfdb7\x2d41f9\x2d9ee7\x2d4e1b17687d68.mount: Deactivated successfully.
Oct 14 05:42:00 np0005486808 nova_compute[259627]: 2025-10-14 09:42:00.320 2 INFO nova.virt.libvirt.driver [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Deleting instance files /var/lib/nova/instances/0239a56b-babd-4c44-b52b-ade80229be78_del#033[00m
Oct 14 05:42:00 np0005486808 nova_compute[259627]: 2025-10-14 09:42:00.321 2 INFO nova.virt.libvirt.driver [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Deletion of /var/lib/nova/instances/0239a56b-babd-4c44-b52b-ade80229be78_del complete#033[00m
Oct 14 05:42:00 np0005486808 nova_compute[259627]: 2025-10-14 09:42:00.399 2 DEBUG nova.compute.manager [req-49b3b5f1-9bbd-4d0a-8eaa-701e10764480 req-fafb9e65-0607-4761-957a-2be2e82b18b7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Received event network-vif-unplugged-0b5d3762-db25-4cc9-90f3-79d3eb662378 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:42:00 np0005486808 nova_compute[259627]: 2025-10-14 09:42:00.400 2 DEBUG oslo_concurrency.lockutils [req-49b3b5f1-9bbd-4d0a-8eaa-701e10764480 req-fafb9e65-0607-4761-957a-2be2e82b18b7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "0239a56b-babd-4c44-b52b-ade80229be78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:42:00 np0005486808 nova_compute[259627]: 2025-10-14 09:42:00.400 2 DEBUG oslo_concurrency.lockutils [req-49b3b5f1-9bbd-4d0a-8eaa-701e10764480 req-fafb9e65-0607-4761-957a-2be2e82b18b7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "0239a56b-babd-4c44-b52b-ade80229be78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:42:00 np0005486808 nova_compute[259627]: 2025-10-14 09:42:00.401 2 DEBUG oslo_concurrency.lockutils [req-49b3b5f1-9bbd-4d0a-8eaa-701e10764480 req-fafb9e65-0607-4761-957a-2be2e82b18b7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "0239a56b-babd-4c44-b52b-ade80229be78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:42:00 np0005486808 nova_compute[259627]: 2025-10-14 09:42:00.401 2 DEBUG nova.compute.manager [req-49b3b5f1-9bbd-4d0a-8eaa-701e10764480 req-fafb9e65-0607-4761-957a-2be2e82b18b7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] No waiting events found dispatching network-vif-unplugged-0b5d3762-db25-4cc9-90f3-79d3eb662378 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:42:00 np0005486808 nova_compute[259627]: 2025-10-14 09:42:00.402 2 DEBUG nova.compute.manager [req-49b3b5f1-9bbd-4d0a-8eaa-701e10764480 req-fafb9e65-0607-4761-957a-2be2e82b18b7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Received event network-vif-unplugged-0b5d3762-db25-4cc9-90f3-79d3eb662378 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:42:00 np0005486808 nova_compute[259627]: 2025-10-14 09:42:00.411 2 INFO nova.compute.manager [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:42:00 np0005486808 nova_compute[259627]: 2025-10-14 09:42:00.412 2 DEBUG oslo.service.loopingcall [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:42:00 np0005486808 nova_compute[259627]: 2025-10-14 09:42:00.413 2 DEBUG nova.compute.manager [-] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:42:00 np0005486808 nova_compute[259627]: 2025-10-14 09:42:00.413 2 DEBUG nova.network.neutron [-] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:42:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2666: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 25 KiB/s wr, 48 op/s
Oct 14 05:42:01 np0005486808 nova_compute[259627]: 2025-10-14 09:42:01.191 2 DEBUG nova.network.neutron [-] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:42:01 np0005486808 nova_compute[259627]: 2025-10-14 09:42:01.207 2 INFO nova.compute.manager [-] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Took 0.79 seconds to deallocate network for instance.#033[00m
Oct 14 05:42:01 np0005486808 nova_compute[259627]: 2025-10-14 09:42:01.246 2 DEBUG oslo_concurrency.lockutils [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:42:01 np0005486808 nova_compute[259627]: 2025-10-14 09:42:01.246 2 DEBUG oslo_concurrency.lockutils [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:42:01 np0005486808 nova_compute[259627]: 2025-10-14 09:42:01.290 2 DEBUG oslo_concurrency.processutils [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:42:01 np0005486808 nova_compute[259627]: 2025-10-14 09:42:01.583 2 DEBUG nova.network.neutron [req-5ea70705-b641-4b2b-b75d-7f48475546e1 req-2469ffea-f612-46aa-8028-18ad40808937 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Updated VIF entry in instance network info cache for port 0b5d3762-db25-4cc9-90f3-79d3eb662378. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:42:01 np0005486808 nova_compute[259627]: 2025-10-14 09:42:01.585 2 DEBUG nova.network.neutron [req-5ea70705-b641-4b2b-b75d-7f48475546e1 req-2469ffea-f612-46aa-8028-18ad40808937 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Updating instance_info_cache with network_info: [{"id": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "address": "fa:16:3e:37:ee:6c", "network": {"id": "d2d242a4-fdb7-41f9-9ee7-4e1b17687d68", "bridge": "br-int", "label": "tempest-network-smoke--1523514929", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:ee6c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b5d3762-db", "ovs_interfaceid": "0b5d3762-db25-4cc9-90f3-79d3eb662378", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:42:01 np0005486808 nova_compute[259627]: 2025-10-14 09:42:01.626 2 DEBUG oslo_concurrency.lockutils [req-5ea70705-b641-4b2b-b75d-7f48475546e1 req-2469ffea-f612-46aa-8028-18ad40808937 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-0239a56b-babd-4c44-b52b-ade80229be78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:42:01 np0005486808 nova_compute[259627]: 2025-10-14 09:42:01.717 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:42:01 np0005486808 nova_compute[259627]: 2025-10-14 09:42:01.718 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:42:01 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:42:01 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3419065968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:42:01 np0005486808 nova_compute[259627]: 2025-10-14 09:42:01.770 2 DEBUG oslo_concurrency.processutils [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:42:01 np0005486808 nova_compute[259627]: 2025-10-14 09:42:01.776 2 DEBUG nova.compute.provider_tree [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:42:01 np0005486808 nova_compute[259627]: 2025-10-14 09:42:01.792 2 DEBUG nova.scheduler.client.report [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:42:01 np0005486808 nova_compute[259627]: 2025-10-14 09:42:01.816 2 DEBUG oslo_concurrency.lockutils [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.570s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:42:01 np0005486808 nova_compute[259627]: 2025-10-14 09:42:01.845 2 INFO nova.scheduler.client.report [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Deleted allocations for instance 0239a56b-babd-4c44-b52b-ade80229be78#033[00m
Oct 14 05:42:01 np0005486808 nova_compute[259627]: 2025-10-14 09:42:01.888 2 DEBUG nova.compute.manager [req-48bfe682-e7ab-4104-b6cb-50d646b01f0d req-3f77bf9e-262d-49e9-9acf-e20511b0d44a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Received event network-vif-deleted-0b5d3762-db25-4cc9-90f3-79d3eb662378 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:42:01 np0005486808 nova_compute[259627]: 2025-10-14 09:42:01.888 2 INFO nova.compute.manager [req-48bfe682-e7ab-4104-b6cb-50d646b01f0d req-3f77bf9e-262d-49e9-9acf-e20511b0d44a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Neutron deleted interface 0b5d3762-db25-4cc9-90f3-79d3eb662378; detaching it from the instance and deleting it from the info cache#033[00m
Oct 14 05:42:01 np0005486808 nova_compute[259627]: 2025-10-14 09:42:01.889 2 DEBUG nova.network.neutron [req-48bfe682-e7ab-4104-b6cb-50d646b01f0d req-3f77bf9e-262d-49e9-9acf-e20511b0d44a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:42:01 np0005486808 nova_compute[259627]: 2025-10-14 09:42:01.949 2 DEBUG nova.compute.manager [req-48bfe682-e7ab-4104-b6cb-50d646b01f0d req-3f77bf9e-262d-49e9-9acf-e20511b0d44a 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Detach interface failed, port_id=0b5d3762-db25-4cc9-90f3-79d3eb662378, reason: Instance 0239a56b-babd-4c44-b52b-ade80229be78 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct 14 05:42:01 np0005486808 nova_compute[259627]: 2025-10-14 09:42:01.952 2 DEBUG oslo_concurrency.lockutils [None req-9fee0abf-7d5a-4dda-a643-82bda38def37 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "0239a56b-babd-4c44-b52b-ade80229be78" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.282s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:42:01 np0005486808 nova_compute[259627]: 2025-10-14 09:42:01.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:42:01 np0005486808 nova_compute[259627]: 2025-10-14 09:42:01.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:42:01 np0005486808 nova_compute[259627]: 2025-10-14 09:42:01.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 05:42:01 np0005486808 nova_compute[259627]: 2025-10-14 09:42:01.993 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 14 05:42:01 np0005486808 nova_compute[259627]: 2025-10-14 09:42:01.993 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:42:02 np0005486808 nova_compute[259627]: 2025-10-14 09:42:02.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:42:02 np0005486808 nova_compute[259627]: 2025-10-14 09:42:02.542 2 DEBUG nova.compute.manager [req-b5931aaa-3f84-4a73-8e1e-4c0b5def0b6e req-5ce9ba45-0471-4e79-8849-05dd31666ef9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Received event network-vif-plugged-0b5d3762-db25-4cc9-90f3-79d3eb662378 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:42:02 np0005486808 nova_compute[259627]: 2025-10-14 09:42:02.542 2 DEBUG oslo_concurrency.lockutils [req-b5931aaa-3f84-4a73-8e1e-4c0b5def0b6e req-5ce9ba45-0471-4e79-8849-05dd31666ef9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "0239a56b-babd-4c44-b52b-ade80229be78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:42:02 np0005486808 nova_compute[259627]: 2025-10-14 09:42:02.543 2 DEBUG oslo_concurrency.lockutils [req-b5931aaa-3f84-4a73-8e1e-4c0b5def0b6e req-5ce9ba45-0471-4e79-8849-05dd31666ef9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "0239a56b-babd-4c44-b52b-ade80229be78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:42:02 np0005486808 nova_compute[259627]: 2025-10-14 09:42:02.543 2 DEBUG oslo_concurrency.lockutils [req-b5931aaa-3f84-4a73-8e1e-4c0b5def0b6e req-5ce9ba45-0471-4e79-8849-05dd31666ef9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "0239a56b-babd-4c44-b52b-ade80229be78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:42:02 np0005486808 nova_compute[259627]: 2025-10-14 09:42:02.543 2 DEBUG nova.compute.manager [req-b5931aaa-3f84-4a73-8e1e-4c0b5def0b6e req-5ce9ba45-0471-4e79-8849-05dd31666ef9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] No waiting events found dispatching network-vif-plugged-0b5d3762-db25-4cc9-90f3-79d3eb662378 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:42:02 np0005486808 nova_compute[259627]: 2025-10-14 09:42:02.543 2 WARNING nova.compute.manager [req-b5931aaa-3f84-4a73-8e1e-4c0b5def0b6e req-5ce9ba45-0471-4e79-8849-05dd31666ef9 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Received unexpected event network-vif-plugged-0b5d3762-db25-4cc9-90f3-79d3eb662378 for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:42:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2667: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 13 KiB/s wr, 48 op/s
Oct 14 05:42:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:42:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:42:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:42:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:42:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:42:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:42:02 np0005486808 nova_compute[259627]: 2025-10-14 09:42:02.989 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:42:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:42:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2668: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 13 KiB/s wr, 48 op/s
Oct 14 05:42:04 np0005486808 nova_compute[259627]: 2025-10-14 09:42:04.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:42:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:42:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2607413162' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:42:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:42:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2607413162' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:42:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2669: 305 pgs: 305 active+clean; 41 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 13 KiB/s wr, 57 op/s
Oct 14 05:42:06 np0005486808 nova_compute[259627]: 2025-10-14 09:42:06.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:42:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:07.057 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:42:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:07.058 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:42:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:07.058 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:42:07 np0005486808 nova_compute[259627]: 2025-10-14 09:42:07.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:42:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:42:08 np0005486808 nova_compute[259627]: 2025-10-14 09:42:08.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:42:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2670: 305 pgs: 305 active+clean; 41 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 5.0 KiB/s wr, 55 op/s
Oct 14 05:42:08 np0005486808 nova_compute[259627]: 2025-10-14 09:42:08.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:42:09 np0005486808 nova_compute[259627]: 2025-10-14 09:42:09.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:42:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2671: 305 pgs: 305 active+clean; 41 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 5.0 KiB/s wr, 55 op/s
Oct 14 05:42:11 np0005486808 nova_compute[259627]: 2025-10-14 09:42:11.230 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434916.2287223, 3e37b67b-524c-4098-9609-97b0b31e72c4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:42:11 np0005486808 nova_compute[259627]: 2025-10-14 09:42:11.230 2 INFO nova.compute.manager [-] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:42:11 np0005486808 nova_compute[259627]: 2025-10-14 09:42:11.253 2 DEBUG nova.compute.manager [None req-978b4e9d-affa-4b8a-a42a-b6f625f2d1c9 - - - - - -] [instance: 3e37b67b-524c-4098-9609-97b0b31e72c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:42:12 np0005486808 nova_compute[259627]: 2025-10-14 09:42:12.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:42:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2672: 305 pgs: 305 active+clean; 41 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 6.7 KiB/s rd, 597 B/s wr, 9 op/s
Oct 14 05:42:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:42:14 np0005486808 podman[419738]: 2025-10-14 09:42:14.653508166 +0000 UTC m=+0.066552764 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 14 05:42:14 np0005486808 podman[419737]: 2025-10-14 09:42:14.691760625 +0000 UTC m=+0.109076858 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 14 05:42:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2673: 305 pgs: 305 active+clean; 41 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 6.7 KiB/s rd, 597 B/s wr, 9 op/s
Oct 14 05:42:14 np0005486808 nova_compute[259627]: 2025-10-14 09:42:14.912 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760434919.9109924, 0239a56b-babd-4c44-b52b-ade80229be78 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:42:14 np0005486808 nova_compute[259627]: 2025-10-14 09:42:14.912 2 INFO nova.compute.manager [-] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:42:14 np0005486808 nova_compute[259627]: 2025-10-14 09:42:14.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:42:14 np0005486808 nova_compute[259627]: 2025-10-14 09:42:14.943 2 DEBUG nova.compute.manager [None req-90424742-1551-4a56-96d2-872118ee2c13 - - - - - -] [instance: 0239a56b-babd-4c44-b52b-ade80229be78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:42:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2674: 305 pgs: 305 active+clean; 41 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 6.7 KiB/s rd, 597 B/s wr, 9 op/s
Oct 14 05:42:17 np0005486808 nova_compute[259627]: 2025-10-14 09:42:17.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:42:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:42:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2675: 305 pgs: 305 active+clean; 41 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:42:19 np0005486808 nova_compute[259627]: 2025-10-14 09:42:19.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:42:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2676: 305 pgs: 305 active+clean; 41 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:42:22 np0005486808 nova_compute[259627]: 2025-10-14 09:42:22.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:42:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2677: 305 pgs: 305 active+clean; 41 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:42:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:42:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:23.707 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:26:39 10.100.0.2 2001:db8::f816:3eff:fe08:2639'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe08:2639/64', 'neutron:device_id': 'ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa331e50-389c-4e3c-80a7-7c8364a3fce5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2bf1cb75-5af4-410a-83b2-dc168fe7fc9d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=176e1cce-63d0-4e90-9078-245732aff057) old=Port_Binding(mac=['fa:16:3e:08:26:39 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa331e50-389c-4e3c-80a7-7c8364a3fce5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:42:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:23.709 162547 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 176e1cce-63d0-4e90-9078-245732aff057 in datapath aa331e50-389c-4e3c-80a7-7c8364a3fce5 updated#033[00m
Oct 14 05:42:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:23.710 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aa331e50-389c-4e3c-80a7-7c8364a3fce5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:42:23 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:23.712 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[f283fb2c-1e86-446e-bc96-5540a63023d6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:42:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2678: 305 pgs: 305 active+clean; 41 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:42:24 np0005486808 nova_compute[259627]: 2025-10-14 09:42:24.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:42:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2679: 305 pgs: 305 active+clean; 41 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:42:27 np0005486808 nova_compute[259627]: 2025-10-14 09:42:27.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:42:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:42:28 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:42:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:42:28 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:42:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:42:28 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:42:28 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 54e37c1f-ca94-4e94-b895-10aa7cc2777d does not exist
Oct 14 05:42:28 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 64ea7c65-2165-43fe-9af3-9ece2bef1666 does not exist
Oct 14 05:42:28 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 76b84b66-4081-4783-8c1c-f2024df4dae6 does not exist
Oct 14 05:42:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:42:28 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:42:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:42:28 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:42:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:42:28 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:42:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:42:28 np0005486808 podman[420054]: 2025-10-14 09:42:28.700952568 +0000 UTC m=+0.066757179 container create b11bc8a501cad7b4fea052e852db080fdd5cc64ed3109416e7f76e8536cb866c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_shtern, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 14 05:42:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2680: 305 pgs: 305 active+clean; 41 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:42:28 np0005486808 podman[420054]: 2025-10-14 09:42:28.671312761 +0000 UTC m=+0.037117442 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:42:28 np0005486808 systemd[1]: Started libpod-conmon-b11bc8a501cad7b4fea052e852db080fdd5cc64ed3109416e7f76e8536cb866c.scope.
Oct 14 05:42:28 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:42:28 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:42:28 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:42:28 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:42:28 np0005486808 podman[420054]: 2025-10-14 09:42:28.831621035 +0000 UTC m=+0.197425656 container init b11bc8a501cad7b4fea052e852db080fdd5cc64ed3109416e7f76e8536cb866c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 14 05:42:28 np0005486808 podman[420054]: 2025-10-14 09:42:28.841447636 +0000 UTC m=+0.207252257 container start b11bc8a501cad7b4fea052e852db080fdd5cc64ed3109416e7f76e8536cb866c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_shtern, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:42:28 np0005486808 podman[420054]: 2025-10-14 09:42:28.846415318 +0000 UTC m=+0.212219939 container attach b11bc8a501cad7b4fea052e852db080fdd5cc64ed3109416e7f76e8536cb866c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 14 05:42:28 np0005486808 youthful_shtern[420070]: 167 167
Oct 14 05:42:28 np0005486808 systemd[1]: libpod-b11bc8a501cad7b4fea052e852db080fdd5cc64ed3109416e7f76e8536cb866c.scope: Deactivated successfully.
Oct 14 05:42:28 np0005486808 conmon[420070]: conmon b11bc8a501cad7b4fea0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b11bc8a501cad7b4fea052e852db080fdd5cc64ed3109416e7f76e8536cb866c.scope/container/memory.events
Oct 14 05:42:28 np0005486808 podman[420054]: 2025-10-14 09:42:28.849954875 +0000 UTC m=+0.215759466 container died b11bc8a501cad7b4fea052e852db080fdd5cc64ed3109416e7f76e8536cb866c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:42:28 np0005486808 systemd[1]: var-lib-containers-storage-overlay-e90eaac3c445155ae93ea04257f0ac96fa4d0d4b3bfa6700f23f383ccd58afc8-merged.mount: Deactivated successfully.
Oct 14 05:42:28 np0005486808 podman[420054]: 2025-10-14 09:42:28.898712021 +0000 UTC m=+0.264516612 container remove b11bc8a501cad7b4fea052e852db080fdd5cc64ed3109416e7f76e8536cb866c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 14 05:42:28 np0005486808 systemd[1]: libpod-conmon-b11bc8a501cad7b4fea052e852db080fdd5cc64ed3109416e7f76e8536cb866c.scope: Deactivated successfully.
Oct 14 05:42:29 np0005486808 podman[420094]: 2025-10-14 09:42:29.171442583 +0000 UTC m=+0.070577203 container create f35fd26980cf85ba7810af21e4d085ab9322cd7f2a0672aef52d23a719535e75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_nightingale, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 14 05:42:29 np0005486808 systemd[1]: Started libpod-conmon-f35fd26980cf85ba7810af21e4d085ab9322cd7f2a0672aef52d23a719535e75.scope.
Oct 14 05:42:29 np0005486808 podman[420094]: 2025-10-14 09:42:29.14281076 +0000 UTC m=+0.041945450 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:42:29 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:42:29 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8964133078fa97de6f55185c93b91569618c208e1c091ac3f13b1d80aca16896/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:42:29 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8964133078fa97de6f55185c93b91569618c208e1c091ac3f13b1d80aca16896/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:42:29 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8964133078fa97de6f55185c93b91569618c208e1c091ac3f13b1d80aca16896/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:42:29 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8964133078fa97de6f55185c93b91569618c208e1c091ac3f13b1d80aca16896/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:42:29 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8964133078fa97de6f55185c93b91569618c208e1c091ac3f13b1d80aca16896/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:42:29 np0005486808 podman[420094]: 2025-10-14 09:42:29.283172465 +0000 UTC m=+0.182307115 container init f35fd26980cf85ba7810af21e4d085ab9322cd7f2a0672aef52d23a719535e75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_nightingale, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Oct 14 05:42:29 np0005486808 podman[420094]: 2025-10-14 09:42:29.298943982 +0000 UTC m=+0.198078602 container start f35fd26980cf85ba7810af21e4d085ab9322cd7f2a0672aef52d23a719535e75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:42:29 np0005486808 podman[420094]: 2025-10-14 09:42:29.302842528 +0000 UTC m=+0.201977238 container attach f35fd26980cf85ba7810af21e4d085ab9322cd7f2a0672aef52d23a719535e75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 14 05:42:29 np0005486808 nova_compute[259627]: 2025-10-14 09:42:29.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:42:30 np0005486808 infallible_nightingale[420111]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:42:30 np0005486808 infallible_nightingale[420111]: --> relative data size: 1.0
Oct 14 05:42:30 np0005486808 infallible_nightingale[420111]: --> All data devices are unavailable
Oct 14 05:42:30 np0005486808 systemd[1]: libpod-f35fd26980cf85ba7810af21e4d085ab9322cd7f2a0672aef52d23a719535e75.scope: Deactivated successfully.
Oct 14 05:42:30 np0005486808 podman[420094]: 2025-10-14 09:42:30.502651031 +0000 UTC m=+1.401785661 container died f35fd26980cf85ba7810af21e4d085ab9322cd7f2a0672aef52d23a719535e75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:42:30 np0005486808 systemd[1]: libpod-f35fd26980cf85ba7810af21e4d085ab9322cd7f2a0672aef52d23a719535e75.scope: Consumed 1.157s CPU time.
Oct 14 05:42:30 np0005486808 systemd[1]: var-lib-containers-storage-overlay-8964133078fa97de6f55185c93b91569618c208e1c091ac3f13b1d80aca16896-merged.mount: Deactivated successfully.
Oct 14 05:42:30 np0005486808 podman[420094]: 2025-10-14 09:42:30.569484301 +0000 UTC m=+1.468618951 container remove f35fd26980cf85ba7810af21e4d085ab9322cd7f2a0672aef52d23a719535e75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 14 05:42:30 np0005486808 systemd[1]: libpod-conmon-f35fd26980cf85ba7810af21e4d085ab9322cd7f2a0672aef52d23a719535e75.scope: Deactivated successfully.
Oct 14 05:42:30 np0005486808 podman[420141]: 2025-10-14 09:42:30.642959004 +0000 UTC m=+0.105000907 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 14 05:42:30 np0005486808 podman[420148]: 2025-10-14 09:42:30.643927738 +0000 UTC m=+0.104070815 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:42:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2681: 305 pgs: 305 active+clean; 41 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:42:31 np0005486808 podman[420326]: 2025-10-14 09:42:31.272614996 +0000 UTC m=+0.064747120 container create 2993705ef9454ebd9271089d6456a764e62356797cdf3fa6e929f08eb2a40aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kowalevski, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 14 05:42:31 np0005486808 systemd[1]: Started libpod-conmon-2993705ef9454ebd9271089d6456a764e62356797cdf3fa6e929f08eb2a40aa9.scope.
Oct 14 05:42:31 np0005486808 podman[420326]: 2025-10-14 09:42:31.237585167 +0000 UTC m=+0.029717361 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:42:31 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:42:31 np0005486808 podman[420326]: 2025-10-14 09:42:31.373623405 +0000 UTC m=+0.165755509 container init 2993705ef9454ebd9271089d6456a764e62356797cdf3fa6e929f08eb2a40aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 14 05:42:31 np0005486808 podman[420326]: 2025-10-14 09:42:31.381512979 +0000 UTC m=+0.173645073 container start 2993705ef9454ebd9271089d6456a764e62356797cdf3fa6e929f08eb2a40aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kowalevski, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 14 05:42:31 np0005486808 podman[420326]: 2025-10-14 09:42:31.384888391 +0000 UTC m=+0.177020535 container attach 2993705ef9454ebd9271089d6456a764e62356797cdf3fa6e929f08eb2a40aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:42:31 np0005486808 stupefied_kowalevski[420342]: 167 167
Oct 14 05:42:31 np0005486808 systemd[1]: libpod-2993705ef9454ebd9271089d6456a764e62356797cdf3fa6e929f08eb2a40aa9.scope: Deactivated successfully.
Oct 14 05:42:31 np0005486808 podman[420326]: 2025-10-14 09:42:31.387316761 +0000 UTC m=+0.179448875 container died 2993705ef9454ebd9271089d6456a764e62356797cdf3fa6e929f08eb2a40aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kowalevski, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 05:42:31 np0005486808 systemd[1]: var-lib-containers-storage-overlay-eb35fd1d902b07273a195b091081147db918c6c1012a8391dec3482532ba510a-merged.mount: Deactivated successfully.
Oct 14 05:42:31 np0005486808 podman[420326]: 2025-10-14 09:42:31.426558284 +0000 UTC m=+0.218690378 container remove 2993705ef9454ebd9271089d6456a764e62356797cdf3fa6e929f08eb2a40aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kowalevski, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:42:31 np0005486808 systemd[1]: libpod-conmon-2993705ef9454ebd9271089d6456a764e62356797cdf3fa6e929f08eb2a40aa9.scope: Deactivated successfully.
Oct 14 05:42:31 np0005486808 nova_compute[259627]: 2025-10-14 09:42:31.601 2 DEBUG oslo_concurrency.lockutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:42:31 np0005486808 nova_compute[259627]: 2025-10-14 09:42:31.602 2 DEBUG oslo_concurrency.lockutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:42:31 np0005486808 podman[420365]: 2025-10-14 09:42:31.613961903 +0000 UTC m=+0.045564599 container create b7e7bf7c57f60083021b5c5a7680bb46da2b62ed29e7eb30403351ac5fab0729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_boyd, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:42:31 np0005486808 nova_compute[259627]: 2025-10-14 09:42:31.616 2 DEBUG nova.compute.manager [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:42:31 np0005486808 systemd[1]: Started libpod-conmon-b7e7bf7c57f60083021b5c5a7680bb46da2b62ed29e7eb30403351ac5fab0729.scope.
Oct 14 05:42:31 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:42:31 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69a9cefbf54de7e71f753c819cfefa545b416a568166a265bee32bd551418e77/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:42:31 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69a9cefbf54de7e71f753c819cfefa545b416a568166a265bee32bd551418e77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:42:31 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69a9cefbf54de7e71f753c819cfefa545b416a568166a265bee32bd551418e77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:42:31 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69a9cefbf54de7e71f753c819cfefa545b416a568166a265bee32bd551418e77/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:42:31 np0005486808 podman[420365]: 2025-10-14 09:42:31.595730976 +0000 UTC m=+0.027333692 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:42:31 np0005486808 podman[420365]: 2025-10-14 09:42:31.693367521 +0000 UTC m=+0.124970237 container init b7e7bf7c57f60083021b5c5a7680bb46da2b62ed29e7eb30403351ac5fab0729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_boyd, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 14 05:42:31 np0005486808 podman[420365]: 2025-10-14 09:42:31.705182451 +0000 UTC m=+0.136785147 container start b7e7bf7c57f60083021b5c5a7680bb46da2b62ed29e7eb30403351ac5fab0729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 14 05:42:31 np0005486808 nova_compute[259627]: 2025-10-14 09:42:31.706 2 DEBUG oslo_concurrency.lockutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:42:31 np0005486808 nova_compute[259627]: 2025-10-14 09:42:31.707 2 DEBUG oslo_concurrency.lockutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:42:31 np0005486808 podman[420365]: 2025-10-14 09:42:31.70878534 +0000 UTC m=+0.140388056 container attach b7e7bf7c57f60083021b5c5a7680bb46da2b62ed29e7eb30403351ac5fab0729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_boyd, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:42:31 np0005486808 nova_compute[259627]: 2025-10-14 09:42:31.715 2 DEBUG nova.virt.hardware [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:42:31 np0005486808 nova_compute[259627]: 2025-10-14 09:42:31.715 2 INFO nova.compute.claims [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:42:31 np0005486808 nova_compute[259627]: 2025-10-14 09:42:31.821 2 DEBUG oslo_concurrency.processutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:42:32 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #129. Immutable memtables: 0.
Oct 14 05:42:32 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:32.052625) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 05:42:32 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 129
Oct 14 05:42:32 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434952052651, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 2060, "num_deletes": 251, "total_data_size": 3407007, "memory_usage": 3467624, "flush_reason": "Manual Compaction"}
Oct 14 05:42:32 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #130: started
Oct 14 05:42:32 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434952067749, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 130, "file_size": 3351426, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54426, "largest_seqno": 56485, "table_properties": {"data_size": 3342000, "index_size": 5983, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19013, "raw_average_key_size": 20, "raw_value_size": 3323329, "raw_average_value_size": 3527, "num_data_blocks": 265, "num_entries": 942, "num_filter_entries": 942, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760434724, "oldest_key_time": 1760434724, "file_creation_time": 1760434952, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 130, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:42:32 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 15164 microseconds, and 7882 cpu microseconds.
Oct 14 05:42:32 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:42:32 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:32.067787) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #130: 3351426 bytes OK
Oct 14 05:42:32 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:32.067804) [db/memtable_list.cc:519] [default] Level-0 commit table #130 started
Oct 14 05:42:32 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:32.069768) [db/memtable_list.cc:722] [default] Level-0 commit table #130: memtable #1 done
Oct 14 05:42:32 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:32.069782) EVENT_LOG_v1 {"time_micros": 1760434952069777, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 05:42:32 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:32.069797) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 05:42:32 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 3398360, prev total WAL file size 3398360, number of live WAL files 2.
Oct 14 05:42:32 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000126.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:42:32 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:32.070775) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Oct 14 05:42:32 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 05:42:32 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [130(3272KB)], [128(8088KB)]
Oct 14 05:42:32 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434952070835, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [130], "files_L6": [128], "score": -1, "input_data_size": 11633765, "oldest_snapshot_seqno": -1}
Oct 14 05:42:32 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #131: 7743 keys, 9926871 bytes, temperature: kUnknown
Oct 14 05:42:32 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434952112859, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 131, "file_size": 9926871, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9876417, "index_size": 29989, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19397, "raw_key_size": 201186, "raw_average_key_size": 25, "raw_value_size": 9739440, "raw_average_value_size": 1257, "num_data_blocks": 1170, "num_entries": 7743, "num_filter_entries": 7743, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760434952, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:42:32 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:42:32 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:32.113068) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 9926871 bytes
Oct 14 05:42:32 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:32.114348) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 276.5 rd, 235.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.9 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(6.4) write-amplify(3.0) OK, records in: 8257, records dropped: 514 output_compression: NoCompression
Oct 14 05:42:32 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:32.114361) EVENT_LOG_v1 {"time_micros": 1760434952114355, "job": 78, "event": "compaction_finished", "compaction_time_micros": 42081, "compaction_time_cpu_micros": 20615, "output_level": 6, "num_output_files": 1, "total_output_size": 9926871, "num_input_records": 8257, "num_output_records": 7743, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 05:42:32 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000130.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:42:32 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434952114917, "job": 78, "event": "table_file_deletion", "file_number": 130}
Oct 14 05:42:32 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:42:32 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434952116160, "job": 78, "event": "table_file_deletion", "file_number": 128}
Oct 14 05:42:32 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:32.070686) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:42:32 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:32.116239) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:42:32 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:32.116249) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:42:32 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:32.116253) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:42:32 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:32.116258) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:42:32 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:32.116262) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:42:32 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:42:32 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1559109412' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:42:32 np0005486808 nova_compute[259627]: 2025-10-14 09:42:32.268 2 DEBUG oslo_concurrency.processutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:42:32 np0005486808 nova_compute[259627]: 2025-10-14 09:42:32.275 2 DEBUG nova.compute.provider_tree [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:42:32 np0005486808 nova_compute[259627]: 2025-10-14 09:42:32.299 2 DEBUG nova.scheduler.client.report [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:42:32 np0005486808 nova_compute[259627]: 2025-10-14 09:42:32.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:42:32 np0005486808 nova_compute[259627]: 2025-10-14 09:42:32.328 2 DEBUG oslo_concurrency.lockutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:42:32 np0005486808 nova_compute[259627]: 2025-10-14 09:42:32.329 2 DEBUG nova.compute.manager [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:42:32 np0005486808 nova_compute[259627]: 2025-10-14 09:42:32.382 2 DEBUG nova.compute.manager [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:42:32 np0005486808 nova_compute[259627]: 2025-10-14 09:42:32.383 2 DEBUG nova.network.neutron [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:42:32 np0005486808 nova_compute[259627]: 2025-10-14 09:42:32.403 2 INFO nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:42:32 np0005486808 nova_compute[259627]: 2025-10-14 09:42:32.424 2 DEBUG nova.compute.manager [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]: {
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:    "0": [
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:        {
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:            "devices": [
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:                "/dev/loop3"
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:            ],
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:            "lv_name": "ceph_lv0",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:            "lv_size": "21470642176",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:            "name": "ceph_lv0",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:            "tags": {
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:                "ceph.cluster_name": "ceph",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:                "ceph.crush_device_class": "",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:                "ceph.encrypted": "0",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:                "ceph.osd_id": "0",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:                "ceph.type": "block",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:                "ceph.vdo": "0"
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:            },
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:            "type": "block",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:            "vg_name": "ceph_vg0"
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:        }
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:    ],
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:    "1": [
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:        {
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:            "devices": [
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:                "/dev/loop4"
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:            ],
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:            "lv_name": "ceph_lv1",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:            "lv_size": "21470642176",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:            "name": "ceph_lv1",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:            "tags": {
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:                "ceph.cluster_name": "ceph",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:                "ceph.crush_device_class": "",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:                "ceph.encrypted": "0",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:                "ceph.osd_id": "1",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:                "ceph.type": "block",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:                "ceph.vdo": "0"
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:            },
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:            "type": "block",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:            "vg_name": "ceph_vg1"
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:        }
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:    ],
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:    "2": [
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:        {
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:            "devices": [
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:                "/dev/loop5"
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:            ],
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:            "lv_name": "ceph_lv2",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:            "lv_size": "21470642176",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:            "name": "ceph_lv2",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:            "tags": {
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:                "ceph.cluster_name": "ceph",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:                "ceph.crush_device_class": "",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:                "ceph.encrypted": "0",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:                "ceph.osd_id": "2",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:                "ceph.type": "block",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:                "ceph.vdo": "0"
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:            },
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:            "type": "block",
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:            "vg_name": "ceph_vg2"
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:        }
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]:    ]
Oct 14 05:42:32 np0005486808 hopeful_boyd[420382]: }
Oct 14 05:42:32 np0005486808 systemd[1]: libpod-b7e7bf7c57f60083021b5c5a7680bb46da2b62ed29e7eb30403351ac5fab0729.scope: Deactivated successfully.
Oct 14 05:42:32 np0005486808 nova_compute[259627]: 2025-10-14 09:42:32.549 2 DEBUG nova.compute.manager [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:42:32 np0005486808 nova_compute[259627]: 2025-10-14 09:42:32.552 2 DEBUG nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:42:32 np0005486808 nova_compute[259627]: 2025-10-14 09:42:32.552 2 INFO nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Creating image(s)#033[00m
Oct 14 05:42:32 np0005486808 podman[420413]: 2025-10-14 09:42:32.576269938 +0000 UTC m=+0.032266523 container died b7e7bf7c57f60083021b5c5a7680bb46da2b62ed29e7eb30403351ac5fab0729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_boyd, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:42:32 np0005486808 nova_compute[259627]: 2025-10-14 09:42:32.577 2 DEBUG nova.storage.rbd_utils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image f6f1bb88-2b88-4876-ab72-3e4e4dc9a578_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:42:32 np0005486808 systemd[1]: var-lib-containers-storage-overlay-69a9cefbf54de7e71f753c819cfefa545b416a568166a265bee32bd551418e77-merged.mount: Deactivated successfully.
Oct 14 05:42:32 np0005486808 nova_compute[259627]: 2025-10-14 09:42:32.607 2 DEBUG nova.storage.rbd_utils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image f6f1bb88-2b88-4876-ab72-3e4e4dc9a578_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:42:32 np0005486808 podman[420413]: 2025-10-14 09:42:32.645540768 +0000 UTC m=+0.101537323 container remove b7e7bf7c57f60083021b5c5a7680bb46da2b62ed29e7eb30403351ac5fab0729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 14 05:42:32 np0005486808 nova_compute[259627]: 2025-10-14 09:42:32.650 2 DEBUG nova.storage.rbd_utils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image f6f1bb88-2b88-4876-ab72-3e4e4dc9a578_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:42:32 np0005486808 systemd[1]: libpod-conmon-b7e7bf7c57f60083021b5c5a7680bb46da2b62ed29e7eb30403351ac5fab0729.scope: Deactivated successfully.
Oct 14 05:42:32 np0005486808 nova_compute[259627]: 2025-10-14 09:42:32.659 2 DEBUG oslo_concurrency.processutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:42:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2682: 305 pgs: 305 active+clean; 41 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:42:32 np0005486808 nova_compute[259627]: 2025-10-14 09:42:32.753 2 DEBUG oslo_concurrency.processutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:42:32 np0005486808 nova_compute[259627]: 2025-10-14 09:42:32.755 2 DEBUG oslo_concurrency.lockutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:42:32 np0005486808 nova_compute[259627]: 2025-10-14 09:42:32.756 2 DEBUG oslo_concurrency.lockutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:42:32 np0005486808 nova_compute[259627]: 2025-10-14 09:42:32.757 2 DEBUG oslo_concurrency.lockutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:42:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:42:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:42:32 np0005486808 nova_compute[259627]: 2025-10-14 09:42:32.790 2 DEBUG nova.storage.rbd_utils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image f6f1bb88-2b88-4876-ab72-3e4e4dc9a578_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:42:32 np0005486808 nova_compute[259627]: 2025-10-14 09:42:32.797 2 DEBUG oslo_concurrency.processutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 f6f1bb88-2b88-4876-ab72-3e4e4dc9a578_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:42:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:42:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:42:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:42:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:42:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:42:32
Oct 14 05:42:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:42:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:42:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta', 'vms', '.mgr', 'volumes', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'images']
Oct 14 05:42:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:42:32 np0005486808 nova_compute[259627]: 2025-10-14 09:42:32.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:42:33 np0005486808 nova_compute[259627]: 2025-10-14 09:42:33.099 2 DEBUG oslo_concurrency.processutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 f6f1bb88-2b88-4876-ab72-3e4e4dc9a578_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.302s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:42:33 np0005486808 nova_compute[259627]: 2025-10-14 09:42:33.166 2 DEBUG nova.storage.rbd_utils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] resizing rbd image f6f1bb88-2b88-4876-ab72-3e4e4dc9a578_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:42:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:42:33 np0005486808 nova_compute[259627]: 2025-10-14 09:42:33.262 2 DEBUG nova.objects.instance [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'migration_context' on Instance uuid f6f1bb88-2b88-4876-ab72-3e4e4dc9a578 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:42:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:42:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:42:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:42:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:42:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:42:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:42:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:42:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:42:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:42:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:42:33 np0005486808 nova_compute[259627]: 2025-10-14 09:42:33.383 2 DEBUG nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:42:33 np0005486808 nova_compute[259627]: 2025-10-14 09:42:33.383 2 DEBUG nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Ensure instance console log exists: /var/lib/nova/instances/f6f1bb88-2b88-4876-ab72-3e4e4dc9a578/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:42:33 np0005486808 nova_compute[259627]: 2025-10-14 09:42:33.384 2 DEBUG oslo_concurrency.lockutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:42:33 np0005486808 nova_compute[259627]: 2025-10-14 09:42:33.384 2 DEBUG oslo_concurrency.lockutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:42:33 np0005486808 nova_compute[259627]: 2025-10-14 09:42:33.384 2 DEBUG oslo_concurrency.lockutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:42:33 np0005486808 nova_compute[259627]: 2025-10-14 09:42:33.428 2 DEBUG nova.policy [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30cd4a7832e74a8ba07238937405c4ea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:42:33 np0005486808 podman[420734]: 2025-10-14 09:42:33.486781071 +0000 UTC m=+0.043417076 container create 04cbfcc542c0965d75648103d0d7564e2e98f844b5be7f1488252eadc4b47a0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:42:33 np0005486808 systemd[1]: Started libpod-conmon-04cbfcc542c0965d75648103d0d7564e2e98f844b5be7f1488252eadc4b47a0f.scope.
Oct 14 05:42:33 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:42:33 np0005486808 podman[420734]: 2025-10-14 09:42:33.469484607 +0000 UTC m=+0.026120662 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:42:33 np0005486808 podman[420734]: 2025-10-14 09:42:33.568182349 +0000 UTC m=+0.124818354 container init 04cbfcc542c0965d75648103d0d7564e2e98f844b5be7f1488252eadc4b47a0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:42:33 np0005486808 podman[420734]: 2025-10-14 09:42:33.578962123 +0000 UTC m=+0.135598128 container start 04cbfcc542c0965d75648103d0d7564e2e98f844b5be7f1488252eadc4b47a0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cannon, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:42:33 np0005486808 podman[420734]: 2025-10-14 09:42:33.58207196 +0000 UTC m=+0.138707995 container attach 04cbfcc542c0965d75648103d0d7564e2e98f844b5be7f1488252eadc4b47a0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:42:33 np0005486808 practical_cannon[420750]: 167 167
Oct 14 05:42:33 np0005486808 systemd[1]: libpod-04cbfcc542c0965d75648103d0d7564e2e98f844b5be7f1488252eadc4b47a0f.scope: Deactivated successfully.
Oct 14 05:42:33 np0005486808 podman[420734]: 2025-10-14 09:42:33.583640478 +0000 UTC m=+0.140276493 container died 04cbfcc542c0965d75648103d0d7564e2e98f844b5be7f1488252eadc4b47a0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cannon, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:42:33 np0005486808 systemd[1]: var-lib-containers-storage-overlay-b70932201dcddd1f9d64d6493654e4e6b61e4851359877357f677b60040e3660-merged.mount: Deactivated successfully.
Oct 14 05:42:33 np0005486808 podman[420734]: 2025-10-14 09:42:33.632980019 +0000 UTC m=+0.189616044 container remove 04cbfcc542c0965d75648103d0d7564e2e98f844b5be7f1488252eadc4b47a0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cannon, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 14 05:42:33 np0005486808 systemd[1]: libpod-conmon-04cbfcc542c0965d75648103d0d7564e2e98f844b5be7f1488252eadc4b47a0f.scope: Deactivated successfully.
Oct 14 05:42:33 np0005486808 podman[420772]: 2025-10-14 09:42:33.889607537 +0000 UTC m=+0.074304755 container create 4b0d6cde0883b40fc52af6a6ecb3bca48583b434b20b52dc8d55eb340d5bfaaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:42:33 np0005486808 systemd[1]: Started libpod-conmon-4b0d6cde0883b40fc52af6a6ecb3bca48583b434b20b52dc8d55eb340d5bfaaf.scope.
Oct 14 05:42:33 np0005486808 podman[420772]: 2025-10-14 09:42:33.860546413 +0000 UTC m=+0.045243671 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:42:33 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:42:33 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa042371da3fe5e7632683869b9fe364b4a57e3a795b797d2d25b1e06672e662/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:42:33 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa042371da3fe5e7632683869b9fe364b4a57e3a795b797d2d25b1e06672e662/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:42:33 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa042371da3fe5e7632683869b9fe364b4a57e3a795b797d2d25b1e06672e662/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:42:33 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa042371da3fe5e7632683869b9fe364b4a57e3a795b797d2d25b1e06672e662/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:42:34 np0005486808 podman[420772]: 2025-10-14 09:42:34.00790516 +0000 UTC m=+0.192602398 container init 4b0d6cde0883b40fc52af6a6ecb3bca48583b434b20b52dc8d55eb340d5bfaaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 14 05:42:34 np0005486808 podman[420772]: 2025-10-14 09:42:34.020123679 +0000 UTC m=+0.204820887 container start 4b0d6cde0883b40fc52af6a6ecb3bca48583b434b20b52dc8d55eb340d5bfaaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 14 05:42:34 np0005486808 podman[420772]: 2025-10-14 09:42:34.024304872 +0000 UTC m=+0.209002140 container attach 4b0d6cde0883b40fc52af6a6ecb3bca48583b434b20b52dc8d55eb340d5bfaaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_stonebraker, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 14 05:42:34 np0005486808 nova_compute[259627]: 2025-10-14 09:42:34.379 2 DEBUG nova.network.neutron [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Successfully created port: b3e71767-ffef-473e-947a-bb35562569c9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:42:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2683: 305 pgs: 305 active+clean; 41 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:42:34 np0005486808 nova_compute[259627]: 2025-10-14 09:42:34.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:42:35 np0005486808 stupefied_stonebraker[420789]: {
Oct 14 05:42:35 np0005486808 stupefied_stonebraker[420789]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:42:35 np0005486808 stupefied_stonebraker[420789]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:42:35 np0005486808 stupefied_stonebraker[420789]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:42:35 np0005486808 stupefied_stonebraker[420789]:        "osd_id": 2,
Oct 14 05:42:35 np0005486808 stupefied_stonebraker[420789]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:42:35 np0005486808 stupefied_stonebraker[420789]:        "type": "bluestore"
Oct 14 05:42:35 np0005486808 stupefied_stonebraker[420789]:    },
Oct 14 05:42:35 np0005486808 stupefied_stonebraker[420789]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:42:35 np0005486808 stupefied_stonebraker[420789]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:42:35 np0005486808 stupefied_stonebraker[420789]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:42:35 np0005486808 stupefied_stonebraker[420789]:        "osd_id": 1,
Oct 14 05:42:35 np0005486808 stupefied_stonebraker[420789]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:42:35 np0005486808 stupefied_stonebraker[420789]:        "type": "bluestore"
Oct 14 05:42:35 np0005486808 stupefied_stonebraker[420789]:    },
Oct 14 05:42:35 np0005486808 stupefied_stonebraker[420789]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:42:35 np0005486808 stupefied_stonebraker[420789]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:42:35 np0005486808 stupefied_stonebraker[420789]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:42:35 np0005486808 stupefied_stonebraker[420789]:        "osd_id": 0,
Oct 14 05:42:35 np0005486808 stupefied_stonebraker[420789]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:42:35 np0005486808 stupefied_stonebraker[420789]:        "type": "bluestore"
Oct 14 05:42:35 np0005486808 stupefied_stonebraker[420789]:    }
Oct 14 05:42:35 np0005486808 stupefied_stonebraker[420789]: }
Oct 14 05:42:35 np0005486808 systemd[1]: libpod-4b0d6cde0883b40fc52af6a6ecb3bca48583b434b20b52dc8d55eb340d5bfaaf.scope: Deactivated successfully.
Oct 14 05:42:35 np0005486808 podman[420772]: 2025-10-14 09:42:35.052651688 +0000 UTC m=+1.237348956 container died 4b0d6cde0883b40fc52af6a6ecb3bca48583b434b20b52dc8d55eb340d5bfaaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_stonebraker, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:42:35 np0005486808 systemd[1]: libpod-4b0d6cde0883b40fc52af6a6ecb3bca48583b434b20b52dc8d55eb340d5bfaaf.scope: Consumed 1.043s CPU time.
Oct 14 05:42:35 np0005486808 systemd[1]: var-lib-containers-storage-overlay-fa042371da3fe5e7632683869b9fe364b4a57e3a795b797d2d25b1e06672e662-merged.mount: Deactivated successfully.
Oct 14 05:42:35 np0005486808 podman[420772]: 2025-10-14 09:42:35.12529128 +0000 UTC m=+1.309988488 container remove 4b0d6cde0883b40fc52af6a6ecb3bca48583b434b20b52dc8d55eb340d5bfaaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Oct 14 05:42:35 np0005486808 systemd[1]: libpod-conmon-4b0d6cde0883b40fc52af6a6ecb3bca48583b434b20b52dc8d55eb340d5bfaaf.scope: Deactivated successfully.
Oct 14 05:42:35 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:42:35 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:42:35 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:42:35 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:42:35 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev bfe1fe07-a333-4ac2-a6ae-0b1ca3781651 does not exist
Oct 14 05:42:35 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 3a35b79a-5b3b-4c6e-ba63-6519d55be060 does not exist
Oct 14 05:42:36 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:42:36 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:42:36 np0005486808 nova_compute[259627]: 2025-10-14 09:42:36.310 2 DEBUG nova.network.neutron [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Successfully updated port: b3e71767-ffef-473e-947a-bb35562569c9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:42:36 np0005486808 nova_compute[259627]: 2025-10-14 09:42:36.334 2 DEBUG oslo_concurrency.lockutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "refresh_cache-f6f1bb88-2b88-4876-ab72-3e4e4dc9a578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:42:36 np0005486808 nova_compute[259627]: 2025-10-14 09:42:36.334 2 DEBUG oslo_concurrency.lockutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquired lock "refresh_cache-f6f1bb88-2b88-4876-ab72-3e4e4dc9a578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:42:36 np0005486808 nova_compute[259627]: 2025-10-14 09:42:36.334 2 DEBUG nova.network.neutron [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:42:36 np0005486808 nova_compute[259627]: 2025-10-14 09:42:36.415 2 DEBUG nova.compute.manager [req-102d6eba-8970-45f8-a661-35ab5f5ea2a6 req-5c184796-ba86-48ee-9515-f140aed2e056 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Received event network-changed-b3e71767-ffef-473e-947a-bb35562569c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:42:36 np0005486808 nova_compute[259627]: 2025-10-14 09:42:36.416 2 DEBUG nova.compute.manager [req-102d6eba-8970-45f8-a661-35ab5f5ea2a6 req-5c184796-ba86-48ee-9515-f140aed2e056 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Refreshing instance network info cache due to event network-changed-b3e71767-ffef-473e-947a-bb35562569c9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:42:36 np0005486808 nova_compute[259627]: 2025-10-14 09:42:36.416 2 DEBUG oslo_concurrency.lockutils [req-102d6eba-8970-45f8-a661-35ab5f5ea2a6 req-5c184796-ba86-48ee-9515-f140aed2e056 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-f6f1bb88-2b88-4876-ab72-3e4e4dc9a578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:42:36 np0005486808 nova_compute[259627]: 2025-10-14 09:42:36.515 2 DEBUG nova.network.neutron [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:42:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2684: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:42:37 np0005486808 nova_compute[259627]: 2025-10-14 09:42:37.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:42:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:42:38 np0005486808 nova_compute[259627]: 2025-10-14 09:42:38.461 2 DEBUG nova.network.neutron [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Updating instance_info_cache with network_info: [{"id": "b3e71767-ffef-473e-947a-bb35562569c9", "address": "fa:16:3e:57:ac:57", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:ac57", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e71767-ff", "ovs_interfaceid": "b3e71767-ffef-473e-947a-bb35562569c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:42:38 np0005486808 nova_compute[259627]: 2025-10-14 09:42:38.480 2 DEBUG oslo_concurrency.lockutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Releasing lock "refresh_cache-f6f1bb88-2b88-4876-ab72-3e4e4dc9a578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:42:38 np0005486808 nova_compute[259627]: 2025-10-14 09:42:38.481 2 DEBUG nova.compute.manager [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Instance network_info: |[{"id": "b3e71767-ffef-473e-947a-bb35562569c9", "address": "fa:16:3e:57:ac:57", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:ac57", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e71767-ff", "ovs_interfaceid": "b3e71767-ffef-473e-947a-bb35562569c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:42:38 np0005486808 nova_compute[259627]: 2025-10-14 09:42:38.482 2 DEBUG oslo_concurrency.lockutils [req-102d6eba-8970-45f8-a661-35ab5f5ea2a6 req-5c184796-ba86-48ee-9515-f140aed2e056 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-f6f1bb88-2b88-4876-ab72-3e4e4dc9a578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:42:38 np0005486808 nova_compute[259627]: 2025-10-14 09:42:38.482 2 DEBUG nova.network.neutron [req-102d6eba-8970-45f8-a661-35ab5f5ea2a6 req-5c184796-ba86-48ee-9515-f140aed2e056 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Refreshing network info cache for port b3e71767-ffef-473e-947a-bb35562569c9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:42:38 np0005486808 nova_compute[259627]: 2025-10-14 09:42:38.487 2 DEBUG nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Start _get_guest_xml network_info=[{"id": "b3e71767-ffef-473e-947a-bb35562569c9", "address": "fa:16:3e:57:ac:57", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:ac57", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e71767-ff", "ovs_interfaceid": "b3e71767-ffef-473e-947a-bb35562569c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:42:38 np0005486808 nova_compute[259627]: 2025-10-14 09:42:38.495 2 WARNING nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:42:38 np0005486808 nova_compute[259627]: 2025-10-14 09:42:38.509 2 DEBUG nova.virt.libvirt.host [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:42:38 np0005486808 nova_compute[259627]: 2025-10-14 09:42:38.510 2 DEBUG nova.virt.libvirt.host [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:42:38 np0005486808 nova_compute[259627]: 2025-10-14 09:42:38.515 2 DEBUG nova.virt.libvirt.host [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:42:38 np0005486808 nova_compute[259627]: 2025-10-14 09:42:38.516 2 DEBUG nova.virt.libvirt.host [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:42:38 np0005486808 nova_compute[259627]: 2025-10-14 09:42:38.516 2 DEBUG nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:42:38 np0005486808 nova_compute[259627]: 2025-10-14 09:42:38.517 2 DEBUG nova.virt.hardware [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:42:38 np0005486808 nova_compute[259627]: 2025-10-14 09:42:38.518 2 DEBUG nova.virt.hardware [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:42:38 np0005486808 nova_compute[259627]: 2025-10-14 09:42:38.518 2 DEBUG nova.virt.hardware [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:42:38 np0005486808 nova_compute[259627]: 2025-10-14 09:42:38.519 2 DEBUG nova.virt.hardware [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:42:38 np0005486808 nova_compute[259627]: 2025-10-14 09:42:38.519 2 DEBUG nova.virt.hardware [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:42:38 np0005486808 nova_compute[259627]: 2025-10-14 09:42:38.520 2 DEBUG nova.virt.hardware [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:42:38 np0005486808 nova_compute[259627]: 2025-10-14 09:42:38.520 2 DEBUG nova.virt.hardware [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:42:38 np0005486808 nova_compute[259627]: 2025-10-14 09:42:38.521 2 DEBUG nova.virt.hardware [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:42:38 np0005486808 nova_compute[259627]: 2025-10-14 09:42:38.521 2 DEBUG nova.virt.hardware [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:42:38 np0005486808 nova_compute[259627]: 2025-10-14 09:42:38.522 2 DEBUG nova.virt.hardware [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:42:38 np0005486808 nova_compute[259627]: 2025-10-14 09:42:38.522 2 DEBUG nova.virt.hardware [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:42:38 np0005486808 nova_compute[259627]: 2025-10-14 09:42:38.527 2 DEBUG oslo_concurrency.processutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:42:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2685: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:42:39 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:42:39 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3978747132' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:42:39 np0005486808 nova_compute[259627]: 2025-10-14 09:42:39.034 2 DEBUG oslo_concurrency.processutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:42:39 np0005486808 nova_compute[259627]: 2025-10-14 09:42:39.072 2 DEBUG nova.storage.rbd_utils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image f6f1bb88-2b88-4876-ab72-3e4e4dc9a578_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:42:39 np0005486808 nova_compute[259627]: 2025-10-14 09:42:39.079 2 DEBUG oslo_concurrency.processutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:42:39 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:42:39 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2533610027' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:42:39 np0005486808 nova_compute[259627]: 2025-10-14 09:42:39.553 2 DEBUG oslo_concurrency.processutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:42:39 np0005486808 nova_compute[259627]: 2025-10-14 09:42:39.556 2 DEBUG nova.virt.libvirt.vif [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:42:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-754347042',display_name='tempest-TestGettingAddress-server-754347042',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-754347042',id=152,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA6mjdW3JQlIBQQd52sNXAHhyt49LvnDwoLKe8QrZpUHOv1Ky+CeaemM120Tsqn8n4PYQVzJvOC7aPmnxBpwslIeaFMTr8U05DGb5bjn09df7W51DFKPiOou1XQCmZ1ugQ==',key_name='tempest-TestGettingAddress-1974181320',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-072443ui',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:42:32Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=f6f1bb88-2b88-4876-ab72-3e4e4dc9a578,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b3e71767-ffef-473e-947a-bb35562569c9", "address": "fa:16:3e:57:ac:57", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:ac57", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e71767-ff", "ovs_interfaceid": "b3e71767-ffef-473e-947a-bb35562569c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:42:39 np0005486808 nova_compute[259627]: 2025-10-14 09:42:39.556 2 DEBUG nova.network.os_vif_util [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "b3e71767-ffef-473e-947a-bb35562569c9", "address": "fa:16:3e:57:ac:57", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:ac57", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e71767-ff", "ovs_interfaceid": "b3e71767-ffef-473e-947a-bb35562569c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:42:39 np0005486808 nova_compute[259627]: 2025-10-14 09:42:39.558 2 DEBUG nova.network.os_vif_util [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:ac:57,bridge_name='br-int',has_traffic_filtering=True,id=b3e71767-ffef-473e-947a-bb35562569c9,network=Network(aa331e50-389c-4e3c-80a7-7c8364a3fce5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e71767-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:42:39 np0005486808 nova_compute[259627]: 2025-10-14 09:42:39.561 2 DEBUG nova.objects.instance [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'pci_devices' on Instance uuid f6f1bb88-2b88-4876-ab72-3e4e4dc9a578 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:42:39 np0005486808 nova_compute[259627]: 2025-10-14 09:42:39.599 2 DEBUG nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:42:39 np0005486808 nova_compute[259627]:  <uuid>f6f1bb88-2b88-4876-ab72-3e4e4dc9a578</uuid>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:  <name>instance-00000098</name>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:42:39 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:      <nova:name>tempest-TestGettingAddress-server-754347042</nova:name>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:42:38</nova:creationTime>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:42:39 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:        <nova:user uuid="30cd4a7832e74a8ba07238937405c4ea">tempest-TestGettingAddress-262346303-project-member</nova:user>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:        <nova:project uuid="878ad3f42dd94bef8cc66ab3d67f03bf">tempest-TestGettingAddress-262346303</nova:project>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:        <nova:port uuid="b3e71767-ffef-473e-947a-bb35562569c9">
Oct 14 05:42:39 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe57:ac57" ipVersion="6"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:      <entry name="serial">f6f1bb88-2b88-4876-ab72-3e4e4dc9a578</entry>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:      <entry name="uuid">f6f1bb88-2b88-4876-ab72-3e4e4dc9a578</entry>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:42:39 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/f6f1bb88-2b88-4876-ab72-3e4e4dc9a578_disk">
Oct 14 05:42:39 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:42:39 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:42:39 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/f6f1bb88-2b88-4876-ab72-3e4e4dc9a578_disk.config">
Oct 14 05:42:39 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:42:39 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:42:39 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:57:ac:57"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:      <target dev="tapb3e71767-ff"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:42:39 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/f6f1bb88-2b88-4876-ab72-3e4e4dc9a578/console.log" append="off"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:42:39 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:42:39 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:42:39 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:42:39 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:42:39 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:42:39 np0005486808 nova_compute[259627]: 2025-10-14 09:42:39.600 2 DEBUG nova.compute.manager [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Preparing to wait for external event network-vif-plugged-b3e71767-ffef-473e-947a-bb35562569c9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:42:39 np0005486808 nova_compute[259627]: 2025-10-14 09:42:39.600 2 DEBUG oslo_concurrency.lockutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:42:39 np0005486808 nova_compute[259627]: 2025-10-14 09:42:39.601 2 DEBUG oslo_concurrency.lockutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:42:39 np0005486808 nova_compute[259627]: 2025-10-14 09:42:39.601 2 DEBUG oslo_concurrency.lockutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:42:39 np0005486808 nova_compute[259627]: 2025-10-14 09:42:39.602 2 DEBUG nova.virt.libvirt.vif [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:42:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-754347042',display_name='tempest-TestGettingAddress-server-754347042',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-754347042',id=152,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA6mjdW3JQlIBQQd52sNXAHhyt49LvnDwoLKe8QrZpUHOv1Ky+CeaemM120Tsqn8n4PYQVzJvOC7aPmnxBpwslIeaFMTr8U05DGb5bjn09df7W51DFKPiOou1XQCmZ1ugQ==',key_name='tempest-TestGettingAddress-1974181320',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-072443ui',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:42:32Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=f6f1bb88-2b88-4876-ab72-3e4e4dc9a578,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b3e71767-ffef-473e-947a-bb35562569c9", "address": "fa:16:3e:57:ac:57", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:ac57", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e71767-ff", "ovs_interfaceid": "b3e71767-ffef-473e-947a-bb35562569c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:42:39 np0005486808 nova_compute[259627]: 2025-10-14 09:42:39.603 2 DEBUG nova.network.os_vif_util [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "b3e71767-ffef-473e-947a-bb35562569c9", "address": "fa:16:3e:57:ac:57", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:ac57", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e71767-ff", "ovs_interfaceid": "b3e71767-ffef-473e-947a-bb35562569c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:42:39 np0005486808 nova_compute[259627]: 2025-10-14 09:42:39.604 2 DEBUG nova.network.os_vif_util [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:ac:57,bridge_name='br-int',has_traffic_filtering=True,id=b3e71767-ffef-473e-947a-bb35562569c9,network=Network(aa331e50-389c-4e3c-80a7-7c8364a3fce5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e71767-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:42:39 np0005486808 nova_compute[259627]: 2025-10-14 09:42:39.605 2 DEBUG os_vif [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:ac:57,bridge_name='br-int',has_traffic_filtering=True,id=b3e71767-ffef-473e-947a-bb35562569c9,network=Network(aa331e50-389c-4e3c-80a7-7c8364a3fce5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e71767-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:42:39 np0005486808 nova_compute[259627]: 2025-10-14 09:42:39.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:42:39 np0005486808 nova_compute[259627]: 2025-10-14 09:42:39.606 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:42:39 np0005486808 nova_compute[259627]: 2025-10-14 09:42:39.607 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:42:39 np0005486808 nova_compute[259627]: 2025-10-14 09:42:39.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:42:39 np0005486808 nova_compute[259627]: 2025-10-14 09:42:39.612 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb3e71767-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:42:39 np0005486808 nova_compute[259627]: 2025-10-14 09:42:39.612 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb3e71767-ff, col_values=(('external_ids', {'iface-id': 'b3e71767-ffef-473e-947a-bb35562569c9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:57:ac:57', 'vm-uuid': 'f6f1bb88-2b88-4876-ab72-3e4e4dc9a578'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:42:39 np0005486808 NetworkManager[44885]: <info>  [1760434959.6161] manager: (tapb3e71767-ff): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/682)
Oct 14 05:42:39 np0005486808 nova_compute[259627]: 2025-10-14 09:42:39.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:42:39 np0005486808 nova_compute[259627]: 2025-10-14 09:42:39.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:42:39 np0005486808 nova_compute[259627]: 2025-10-14 09:42:39.624 2 INFO os_vif [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:ac:57,bridge_name='br-int',has_traffic_filtering=True,id=b3e71767-ffef-473e-947a-bb35562569c9,network=Network(aa331e50-389c-4e3c-80a7-7c8364a3fce5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e71767-ff')#033[00m
Oct 14 05:42:39 np0005486808 nova_compute[259627]: 2025-10-14 09:42:39.696 2 DEBUG nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:42:39 np0005486808 nova_compute[259627]: 2025-10-14 09:42:39.696 2 DEBUG nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:42:39 np0005486808 nova_compute[259627]: 2025-10-14 09:42:39.697 2 DEBUG nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:57:ac:57, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:42:39 np0005486808 nova_compute[259627]: 2025-10-14 09:42:39.698 2 INFO nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Using config drive#033[00m
Oct 14 05:42:39 np0005486808 nova_compute[259627]: 2025-10-14 09:42:39.731 2 DEBUG nova.storage.rbd_utils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image f6f1bb88-2b88-4876-ab72-3e4e4dc9a578_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:42:40 np0005486808 nova_compute[259627]: 2025-10-14 09:42:40.389 2 INFO nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Creating config drive at /var/lib/nova/instances/f6f1bb88-2b88-4876-ab72-3e4e4dc9a578/disk.config#033[00m
Oct 14 05:42:40 np0005486808 nova_compute[259627]: 2025-10-14 09:42:40.397 2 DEBUG oslo_concurrency.processutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f6f1bb88-2b88-4876-ab72-3e4e4dc9a578/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5yp76zfn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:42:40 np0005486808 nova_compute[259627]: 2025-10-14 09:42:40.569 2 DEBUG oslo_concurrency.processutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f6f1bb88-2b88-4876-ab72-3e4e4dc9a578/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5yp76zfn" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:42:40 np0005486808 nova_compute[259627]: 2025-10-14 09:42:40.600 2 DEBUG nova.storage.rbd_utils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image f6f1bb88-2b88-4876-ab72-3e4e4dc9a578_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:42:40 np0005486808 nova_compute[259627]: 2025-10-14 09:42:40.603 2 DEBUG oslo_concurrency.processutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f6f1bb88-2b88-4876-ab72-3e4e4dc9a578/disk.config f6f1bb88-2b88-4876-ab72-3e4e4dc9a578_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:42:40 np0005486808 nova_compute[259627]: 2025-10-14 09:42:40.744 2 DEBUG nova.network.neutron [req-102d6eba-8970-45f8-a661-35ab5f5ea2a6 req-5c184796-ba86-48ee-9515-f140aed2e056 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Updated VIF entry in instance network info cache for port b3e71767-ffef-473e-947a-bb35562569c9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:42:40 np0005486808 nova_compute[259627]: 2025-10-14 09:42:40.745 2 DEBUG nova.network.neutron [req-102d6eba-8970-45f8-a661-35ab5f5ea2a6 req-5c184796-ba86-48ee-9515-f140aed2e056 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Updating instance_info_cache with network_info: [{"id": "b3e71767-ffef-473e-947a-bb35562569c9", "address": "fa:16:3e:57:ac:57", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:ac57", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e71767-ff", "ovs_interfaceid": "b3e71767-ffef-473e-947a-bb35562569c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:42:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2686: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:42:40 np0005486808 nova_compute[259627]: 2025-10-14 09:42:40.778 2 DEBUG oslo_concurrency.lockutils [req-102d6eba-8970-45f8-a661-35ab5f5ea2a6 req-5c184796-ba86-48ee-9515-f140aed2e056 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-f6f1bb88-2b88-4876-ab72-3e4e4dc9a578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:42:40 np0005486808 nova_compute[259627]: 2025-10-14 09:42:40.789 2 DEBUG oslo_concurrency.processutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f6f1bb88-2b88-4876-ab72-3e4e4dc9a578/disk.config f6f1bb88-2b88-4876-ab72-3e4e4dc9a578_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:42:40 np0005486808 nova_compute[259627]: 2025-10-14 09:42:40.790 2 INFO nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Deleting local config drive /var/lib/nova/instances/f6f1bb88-2b88-4876-ab72-3e4e4dc9a578/disk.config because it was imported into RBD.#033[00m
Oct 14 05:42:40 np0005486808 kernel: tapb3e71767-ff: entered promiscuous mode
Oct 14 05:42:40 np0005486808 NetworkManager[44885]: <info>  [1760434960.8668] manager: (tapb3e71767-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/683)
Oct 14 05:42:40 np0005486808 nova_compute[259627]: 2025-10-14 09:42:40.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:42:40 np0005486808 ovn_controller[152662]: 2025-10-14T09:42:40Z|01665|binding|INFO|Claiming lport b3e71767-ffef-473e-947a-bb35562569c9 for this chassis.
Oct 14 05:42:40 np0005486808 ovn_controller[152662]: 2025-10-14T09:42:40Z|01666|binding|INFO|b3e71767-ffef-473e-947a-bb35562569c9: Claiming fa:16:3e:57:ac:57 10.100.0.6 2001:db8::f816:3eff:fe57:ac57
Oct 14 05:42:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:40.884 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:ac:57 10.100.0.6 2001:db8::f816:3eff:fe57:ac57'], port_security=['fa:16:3e:57:ac:57 10.100.0.6 2001:db8::f816:3eff:fe57:ac57'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28 2001:db8::f816:3eff:fe57:ac57/64', 'neutron:device_id': 'f6f1bb88-2b88-4876-ab72-3e4e4dc9a578', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa331e50-389c-4e3c-80a7-7c8364a3fce5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f0bf2f6f-4af7-4c93-8c0d-bb1990d0ba82', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2bf1cb75-5af4-410a-83b2-dc168fe7fc9d, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=b3e71767-ffef-473e-947a-bb35562569c9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:42:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:40.885 162547 INFO neutron.agent.ovn.metadata.agent [-] Port b3e71767-ffef-473e-947a-bb35562569c9 in datapath aa331e50-389c-4e3c-80a7-7c8364a3fce5 bound to our chassis#033[00m
Oct 14 05:42:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:40.887 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa331e50-389c-4e3c-80a7-7c8364a3fce5#033[00m
Oct 14 05:42:40 np0005486808 systemd-udevd[421017]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:42:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:40.905 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5ca68f9b-0df6-420a-9a49-a481a83e1624]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:42:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:40.906 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaa331e50-31 in ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 14 05:42:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:40.908 276588 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaa331e50-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 14 05:42:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:40.908 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[78030806-8c6c-4ce3-b214-f927e3d5bb10]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:42:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:40.909 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[4b94d23c-01f7-409d-96dd-0472c10a945e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:42:40 np0005486808 NetworkManager[44885]: <info>  [1760434960.9223] device (tapb3e71767-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:42:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:40.922 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[10db6d3d-39c7-45b8-ba17-8d0bd38dee59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:42:40 np0005486808 NetworkManager[44885]: <info>  [1760434960.9252] device (tapb3e71767-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:42:40 np0005486808 systemd-machined[214636]: New machine qemu-185-instance-00000098.
Oct 14 05:42:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:40.954 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[6c859cb9-05ff-4aab-8198-9b112165813b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:42:40 np0005486808 systemd[1]: Started Virtual Machine qemu-185-instance-00000098.
Oct 14 05:42:40 np0005486808 nova_compute[259627]: 2025-10-14 09:42:40.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:42:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:40.986 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a15e95a6-3e2f-470f-9b01-4931e945991c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:42:40 np0005486808 NetworkManager[44885]: <info>  [1760434960.9929] manager: (tapaa331e50-30): new Veth device (/org/freedesktop/NetworkManager/Devices/684)
Oct 14 05:42:40 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:40.992 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d7b710c0-d498-413a-8f49-ed76b24c769e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:42:40 np0005486808 systemd-udevd[421023]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:42:41 np0005486808 ovn_controller[152662]: 2025-10-14T09:42:41Z|01667|binding|INFO|Setting lport b3e71767-ffef-473e-947a-bb35562569c9 ovn-installed in OVS
Oct 14 05:42:41 np0005486808 ovn_controller[152662]: 2025-10-14T09:42:41Z|01668|binding|INFO|Setting lport b3e71767-ffef-473e-947a-bb35562569c9 up in Southbound
Oct 14 05:42:41 np0005486808 nova_compute[259627]: 2025-10-14 09:42:41.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:41.020 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[74ea99d1-d242-412b-a912-db12ebbbedaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:41.032 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[8e34cd34-1334-47fe-a0b9-7241909593fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:42:41 np0005486808 NetworkManager[44885]: <info>  [1760434961.0607] device (tapaa331e50-30): carrier: link connected
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:41.068 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[dde2768e-ef2d-480d-8ea0-d613abb3078e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:41.084 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[9fcbad9b-ad69-4540-9805-f6c0fba995a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa331e50-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:26:39'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 473], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 873982, 'reachable_time': 36414, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 421053, 'error': None, 'target': 'ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:41.104 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[caa0ef22-4042-499e-b888-99ed6f9f7066]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe08:2639'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 873982, 'tstamp': 873982}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 421054, 'error': None, 'target': 'ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:41.125 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e359392a-186d-4be8-b0a3-4bd8f61745b1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa331e50-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:26:39'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 473], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 873982, 'reachable_time': 36414, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 421055, 'error': None, 'target': 'ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:41.173 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e0e343fd-ca9f-4613-830a-0ad3f5eaa591]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:41.261 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d19de77c-50f7-4fc7-992c-135cc0224810]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:41.262 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa331e50-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:41.263 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:41.264 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa331e50-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:42:41 np0005486808 kernel: tapaa331e50-30: entered promiscuous mode
Oct 14 05:42:41 np0005486808 NetworkManager[44885]: <info>  [1760434961.2674] manager: (tapaa331e50-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/685)
Oct 14 05:42:41 np0005486808 nova_compute[259627]: 2025-10-14 09:42:41.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:41.270 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa331e50-30, col_values=(('external_ids', {'iface-id': '176e1cce-63d0-4e90-9078-245732aff057'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:42:41 np0005486808 ovn_controller[152662]: 2025-10-14T09:42:41Z|01669|binding|INFO|Releasing lport 176e1cce-63d0-4e90-9078-245732aff057 from this chassis (sb_readonly=0)
Oct 14 05:42:41 np0005486808 nova_compute[259627]: 2025-10-14 09:42:41.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:42:41 np0005486808 nova_compute[259627]: 2025-10-14 09:42:41.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:41.302 162547 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aa331e50-389c-4e3c-80a7-7c8364a3fce5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aa331e50-389c-4e3c-80a7-7c8364a3fce5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:41.304 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[288e2e03-6283-4955-9bab-ef06a34cee44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:41.305 162547 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]: global
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]:    log         /dev/log local0 debug
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]:    log-tag     haproxy-metadata-proxy-aa331e50-389c-4e3c-80a7-7c8364a3fce5
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]:    user        root
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]:    group       root
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]:    maxconn     1024
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]:    pidfile     /var/lib/neutron/external/pids/aa331e50-389c-4e3c-80a7-7c8364a3fce5.pid.haproxy
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]:    daemon
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]: defaults
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]:    log global
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]:    mode http
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]:    option httplog
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]:    option dontlognull
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]:    option http-server-close
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]:    option forwardfor
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]:    retries                 3
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]:    timeout http-request    30s
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]:    timeout connect         30s
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]:    timeout client          32s
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]:    timeout server          32s
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]:    timeout http-keep-alive 30s
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]: 
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]: listen listener
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]:    bind 169.254.169.254:80
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]:    server metadata /var/lib/neutron/metadata_proxy
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]:    http-request add-header X-OVN-Network-ID aa331e50-389c-4e3c-80a7-7c8364a3fce5
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:41.309 162547 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5', 'env', 'PROCESS_TAG=haproxy-aa331e50-389c-4e3c-80a7-7c8364a3fce5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aa331e50-389c-4e3c-80a7-7c8364a3fce5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 14 05:42:41 np0005486808 nova_compute[259627]: 2025-10-14 09:42:41.437 2 DEBUG nova.compute.manager [req-364f26f3-b3c2-42f5-831f-46ecfb094829 req-38458ec8-ce01-4469-96bd-154a91a89f64 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Received event network-vif-plugged-b3e71767-ffef-473e-947a-bb35562569c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:42:41 np0005486808 nova_compute[259627]: 2025-10-14 09:42:41.437 2 DEBUG oslo_concurrency.lockutils [req-364f26f3-b3c2-42f5-831f-46ecfb094829 req-38458ec8-ce01-4469-96bd-154a91a89f64 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:42:41 np0005486808 nova_compute[259627]: 2025-10-14 09:42:41.438 2 DEBUG oslo_concurrency.lockutils [req-364f26f3-b3c2-42f5-831f-46ecfb094829 req-38458ec8-ce01-4469-96bd-154a91a89f64 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:42:41 np0005486808 nova_compute[259627]: 2025-10-14 09:42:41.438 2 DEBUG oslo_concurrency.lockutils [req-364f26f3-b3c2-42f5-831f-46ecfb094829 req-38458ec8-ce01-4469-96bd-154a91a89f64 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:42:41 np0005486808 nova_compute[259627]: 2025-10-14 09:42:41.439 2 DEBUG nova.compute.manager [req-364f26f3-b3c2-42f5-831f-46ecfb094829 req-38458ec8-ce01-4469-96bd-154a91a89f64 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Processing event network-vif-plugged-b3e71767-ffef-473e-947a-bb35562569c9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:41.477 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:42:41 np0005486808 nova_compute[259627]: 2025-10-14 09:42:41.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:42:41 np0005486808 podman[421129]: 2025-10-14 09:42:41.756258803 +0000 UTC m=+0.072731566 container create 6170dfef274431fdbbcac478a53f85c41a2ba4b76c0aa160503242eeb9e68839 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 05:42:41 np0005486808 systemd[1]: Started libpod-conmon-6170dfef274431fdbbcac478a53f85c41a2ba4b76c0aa160503242eeb9e68839.scope.
Oct 14 05:42:41 np0005486808 podman[421129]: 2025-10-14 09:42:41.727432456 +0000 UTC m=+0.043905239 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576
Oct 14 05:42:41 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:42:41 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7492a1d1260024e32fdfbc4b0a12cad644de8f71781dd5c42aeb3535a093dda/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 14 05:42:41 np0005486808 podman[421129]: 2025-10-14 09:42:41.857140359 +0000 UTC m=+0.173613132 container init 6170dfef274431fdbbcac478a53f85c41a2ba4b76c0aa160503242eeb9e68839 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 05:42:41 np0005486808 podman[421129]: 2025-10-14 09:42:41.869190494 +0000 UTC m=+0.185663247 container start 6170dfef274431fdbbcac478a53f85c41a2ba4b76c0aa160503242eeb9e68839 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 14 05:42:41 np0005486808 neutron-haproxy-ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5[421145]: [NOTICE]   (421149) : New worker (421151) forked
Oct 14 05:42:41 np0005486808 neutron-haproxy-ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5[421145]: [NOTICE]   (421149) : Loading success.
Oct 14 05:42:41 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:41.935 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 14 05:42:42 np0005486808 nova_compute[259627]: 2025-10-14 09:42:42.073 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434962.072886, f6f1bb88-2b88-4876-ab72-3e4e4dc9a578 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:42:42 np0005486808 nova_compute[259627]: 2025-10-14 09:42:42.074 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] VM Started (Lifecycle Event)#033[00m
Oct 14 05:42:42 np0005486808 nova_compute[259627]: 2025-10-14 09:42:42.076 2 DEBUG nova.compute.manager [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:42:42 np0005486808 nova_compute[259627]: 2025-10-14 09:42:42.084 2 DEBUG nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:42:42 np0005486808 nova_compute[259627]: 2025-10-14 09:42:42.089 2 INFO nova.virt.libvirt.driver [-] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Instance spawned successfully.#033[00m
Oct 14 05:42:42 np0005486808 nova_compute[259627]: 2025-10-14 09:42:42.089 2 DEBUG nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:42:42 np0005486808 nova_compute[259627]: 2025-10-14 09:42:42.092 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:42:42 np0005486808 nova_compute[259627]: 2025-10-14 09:42:42.096 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:42:42 np0005486808 nova_compute[259627]: 2025-10-14 09:42:42.106 2 DEBUG nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:42:42 np0005486808 nova_compute[259627]: 2025-10-14 09:42:42.107 2 DEBUG nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:42:42 np0005486808 nova_compute[259627]: 2025-10-14 09:42:42.107 2 DEBUG nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:42:42 np0005486808 nova_compute[259627]: 2025-10-14 09:42:42.108 2 DEBUG nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:42:42 np0005486808 nova_compute[259627]: 2025-10-14 09:42:42.108 2 DEBUG nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:42:42 np0005486808 nova_compute[259627]: 2025-10-14 09:42:42.109 2 DEBUG nova.virt.libvirt.driver [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:42:42 np0005486808 nova_compute[259627]: 2025-10-14 09:42:42.116 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:42:42 np0005486808 nova_compute[259627]: 2025-10-14 09:42:42.117 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434962.073146, f6f1bb88-2b88-4876-ab72-3e4e4dc9a578 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:42:42 np0005486808 nova_compute[259627]: 2025-10-14 09:42:42.117 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:42:42 np0005486808 nova_compute[259627]: 2025-10-14 09:42:42.177 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:42:42 np0005486808 nova_compute[259627]: 2025-10-14 09:42:42.182 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434962.0804393, f6f1bb88-2b88-4876-ab72-3e4e4dc9a578 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:42:42 np0005486808 nova_compute[259627]: 2025-10-14 09:42:42.182 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:42:42 np0005486808 nova_compute[259627]: 2025-10-14 09:42:42.191 2 INFO nova.compute.manager [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Took 9.64 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:42:42 np0005486808 nova_compute[259627]: 2025-10-14 09:42:42.192 2 DEBUG nova.compute.manager [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:42:42 np0005486808 nova_compute[259627]: 2025-10-14 09:42:42.203 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:42:42 np0005486808 nova_compute[259627]: 2025-10-14 09:42:42.208 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:42:42 np0005486808 nova_compute[259627]: 2025-10-14 09:42:42.235 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:42:42 np0005486808 nova_compute[259627]: 2025-10-14 09:42:42.255 2 INFO nova.compute.manager [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Took 10.59 seconds to build instance.#033[00m
Oct 14 05:42:42 np0005486808 nova_compute[259627]: 2025-10-14 09:42:42.282 2 DEBUG oslo_concurrency.lockutils [None req-f9667c64-0da7-4969-a7be-35a76d230e6e 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:42:42 np0005486808 nova_compute[259627]: 2025-10-14 09:42:42.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:42:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2687: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:42:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:42:43 np0005486808 nova_compute[259627]: 2025-10-14 09:42:43.552 2 DEBUG nova.compute.manager [req-4f976299-c27d-4e6f-8a8e-08a211204d93 req-6ba79b73-6807-4086-a2eb-731dbf674e4f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Received event network-vif-plugged-b3e71767-ffef-473e-947a-bb35562569c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:42:43 np0005486808 nova_compute[259627]: 2025-10-14 09:42:43.553 2 DEBUG oslo_concurrency.lockutils [req-4f976299-c27d-4e6f-8a8e-08a211204d93 req-6ba79b73-6807-4086-a2eb-731dbf674e4f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:42:43 np0005486808 nova_compute[259627]: 2025-10-14 09:42:43.553 2 DEBUG oslo_concurrency.lockutils [req-4f976299-c27d-4e6f-8a8e-08a211204d93 req-6ba79b73-6807-4086-a2eb-731dbf674e4f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:42:43 np0005486808 nova_compute[259627]: 2025-10-14 09:42:43.554 2 DEBUG oslo_concurrency.lockutils [req-4f976299-c27d-4e6f-8a8e-08a211204d93 req-6ba79b73-6807-4086-a2eb-731dbf674e4f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:42:43 np0005486808 nova_compute[259627]: 2025-10-14 09:42:43.554 2 DEBUG nova.compute.manager [req-4f976299-c27d-4e6f-8a8e-08a211204d93 req-6ba79b73-6807-4086-a2eb-731dbf674e4f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] No waiting events found dispatching network-vif-plugged-b3e71767-ffef-473e-947a-bb35562569c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:42:43 np0005486808 nova_compute[259627]: 2025-10-14 09:42:43.555 2 WARNING nova.compute.manager [req-4f976299-c27d-4e6f-8a8e-08a211204d93 req-6ba79b73-6807-4086-a2eb-731dbf674e4f 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Received unexpected event network-vif-plugged-b3e71767-ffef-473e-947a-bb35562569c9 for instance with vm_state active and task_state None.#033[00m
Oct 14 05:42:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:42:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:42:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:42:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:42:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Oct 14 05:42:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:42:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:42:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:42:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:42:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:42:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 05:42:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:42:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:42:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:42:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:42:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:42:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:42:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:42:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:42:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:42:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:42:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:42:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:42:44 np0005486808 nova_compute[259627]: 2025-10-14 09:42:44.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:42:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2688: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:42:45 np0005486808 podman[421161]: 2025-10-14 09:42:45.687513956 +0000 UTC m=+0.090746748 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 05:42:45 np0005486808 podman[421160]: 2025-10-14 09:42:45.736416156 +0000 UTC m=+0.141845392 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true)
Oct 14 05:42:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2689: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 14 05:42:47 np0005486808 nova_compute[259627]: 2025-10-14 09:42:47.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:42:47 np0005486808 NetworkManager[44885]: <info>  [1760434967.3422] manager: (patch-br-int-to-provnet-37ec2fc4-c886-4f39-a998-8b851619b851): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/686)
Oct 14 05:42:47 np0005486808 ovn_controller[152662]: 2025-10-14T09:42:47Z|01670|binding|INFO|Releasing lport 176e1cce-63d0-4e90-9078-245732aff057 from this chassis (sb_readonly=0)
Oct 14 05:42:47 np0005486808 NetworkManager[44885]: <info>  [1760434967.3440] manager: (patch-provnet-37ec2fc4-c886-4f39-a998-8b851619b851-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/687)
Oct 14 05:42:47 np0005486808 nova_compute[259627]: 2025-10-14 09:42:47.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:42:47 np0005486808 ovn_controller[152662]: 2025-10-14T09:42:47Z|01671|binding|INFO|Releasing lport 176e1cce-63d0-4e90-9078-245732aff057 from this chassis (sb_readonly=0)
Oct 14 05:42:47 np0005486808 nova_compute[259627]: 2025-10-14 09:42:47.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:42:47 np0005486808 nova_compute[259627]: 2025-10-14 09:42:47.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:42:47 np0005486808 nova_compute[259627]: 2025-10-14 09:42:47.811 2 DEBUG nova.compute.manager [req-c857ce3c-dd55-4ddd-b90b-49a1c33f8a88 req-75957de1-26ce-40d6-8d61-b00e296b4507 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Received event network-changed-b3e71767-ffef-473e-947a-bb35562569c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:42:47 np0005486808 nova_compute[259627]: 2025-10-14 09:42:47.812 2 DEBUG nova.compute.manager [req-c857ce3c-dd55-4ddd-b90b-49a1c33f8a88 req-75957de1-26ce-40d6-8d61-b00e296b4507 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Refreshing instance network info cache due to event network-changed-b3e71767-ffef-473e-947a-bb35562569c9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:42:47 np0005486808 nova_compute[259627]: 2025-10-14 09:42:47.813 2 DEBUG oslo_concurrency.lockutils [req-c857ce3c-dd55-4ddd-b90b-49a1c33f8a88 req-75957de1-26ce-40d6-8d61-b00e296b4507 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-f6f1bb88-2b88-4876-ab72-3e4e4dc9a578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:42:47 np0005486808 nova_compute[259627]: 2025-10-14 09:42:47.814 2 DEBUG oslo_concurrency.lockutils [req-c857ce3c-dd55-4ddd-b90b-49a1c33f8a88 req-75957de1-26ce-40d6-8d61-b00e296b4507 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-f6f1bb88-2b88-4876-ab72-3e4e4dc9a578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:42:47 np0005486808 nova_compute[259627]: 2025-10-14 09:42:47.815 2 DEBUG nova.network.neutron [req-c857ce3c-dd55-4ddd-b90b-49a1c33f8a88 req-75957de1-26ce-40d6-8d61-b00e296b4507 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Refreshing network info cache for port b3e71767-ffef-473e-947a-bb35562569c9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:42:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:42:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2690: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 05:42:49 np0005486808 nova_compute[259627]: 2025-10-14 09:42:49.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:42:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:42:49.938 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:42:49 np0005486808 nova_compute[259627]: 2025-10-14 09:42:49.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:42:49 np0005486808 nova_compute[259627]: 2025-10-14 09:42:49.997 2 DEBUG nova.network.neutron [req-c857ce3c-dd55-4ddd-b90b-49a1c33f8a88 req-75957de1-26ce-40d6-8d61-b00e296b4507 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Updated VIF entry in instance network info cache for port b3e71767-ffef-473e-947a-bb35562569c9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 14 05:42:49 np0005486808 nova_compute[259627]: 2025-10-14 09:42:49.998 2 DEBUG nova.network.neutron [req-c857ce3c-dd55-4ddd-b90b-49a1c33f8a88 req-75957de1-26ce-40d6-8d61-b00e296b4507 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Updating instance_info_cache with network_info: [{"id": "b3e71767-ffef-473e-947a-bb35562569c9", "address": "fa:16:3e:57:ac:57", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:ac57", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e71767-ff", "ovs_interfaceid": "b3e71767-ffef-473e-947a-bb35562569c9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 14 05:42:50 np0005486808 nova_compute[259627]: 2025-10-14 09:42:50.017 2 DEBUG oslo_concurrency.lockutils [req-c857ce3c-dd55-4ddd-b90b-49a1c33f8a88 req-75957de1-26ce-40d6-8d61-b00e296b4507 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-f6f1bb88-2b88-4876-ab72-3e4e4dc9a578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 14 05:42:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2691: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 05:42:52 np0005486808 nova_compute[259627]: 2025-10-14 09:42:52.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:42:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2692: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 05:42:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:42:53 np0005486808 ovn_controller[152662]: 2025-10-14T09:42:53Z|00202|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:57:ac:57 10.100.0.6
Oct 14 05:42:53 np0005486808 ovn_controller[152662]: 2025-10-14T09:42:53Z|00203|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:57:ac:57 10.100.0.6
Oct 14 05:42:54 np0005486808 nova_compute[259627]: 2025-10-14 09:42:54.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:42:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2693: 305 pgs: 305 active+clean; 88 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 05:42:55 np0005486808 nova_compute[259627]: 2025-10-14 09:42:55.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:42:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2694: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 131 op/s
Oct 14 05:42:56 np0005486808 nova_compute[259627]: 2025-10-14 09:42:56.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:42:57 np0005486808 nova_compute[259627]: 2025-10-14 09:42:57.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:42:57 np0005486808 nova_compute[259627]: 2025-10-14 09:42:57.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:42:58 np0005486808 nova_compute[259627]: 2025-10-14 09:42:58.016 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:42:58 np0005486808 nova_compute[259627]: 2025-10-14 09:42:58.017 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:42:58 np0005486808 nova_compute[259627]: 2025-10-14 09:42:58.017 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:42:58 np0005486808 nova_compute[259627]: 2025-10-14 09:42:58.018 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 05:42:58 np0005486808 nova_compute[259627]: 2025-10-14 09:42:58.018 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:42:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:42:58 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #132. Immutable memtables: 0.
Oct 14 05:42:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:58.214669) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 05:42:58 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 132
Oct 14 05:42:58 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434978214791, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 477, "num_deletes": 257, "total_data_size": 428852, "memory_usage": 439496, "flush_reason": "Manual Compaction"}
Oct 14 05:42:58 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #133: started
Oct 14 05:42:58 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434978222497, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 133, "file_size": 425331, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56486, "largest_seqno": 56962, "table_properties": {"data_size": 422595, "index_size": 771, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6368, "raw_average_key_size": 18, "raw_value_size": 417114, "raw_average_value_size": 1198, "num_data_blocks": 34, "num_entries": 348, "num_filter_entries": 348, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760434953, "oldest_key_time": 1760434953, "file_creation_time": 1760434978, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 133, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:42:58 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 7867 microseconds, and 4263 cpu microseconds.
Oct 14 05:42:58 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:42:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:58.222552) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #133: 425331 bytes OK
Oct 14 05:42:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:58.222576) [db/memtable_list.cc:519] [default] Level-0 commit table #133 started
Oct 14 05:42:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:58.224915) [db/memtable_list.cc:722] [default] Level-0 commit table #133: memtable #1 done
Oct 14 05:42:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:58.224935) EVENT_LOG_v1 {"time_micros": 1760434978224928, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 05:42:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:58.224956) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 05:42:58 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 426000, prev total WAL file size 426000, number of live WAL files 2.
Oct 14 05:42:58 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000129.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:42:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:58.225765) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323631' seq:72057594037927935, type:22 .. '6C6F676D0032353134' seq:0, type:0; will stop at (end)
Oct 14 05:42:58 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 05:42:58 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [133(415KB)], [131(9694KB)]
Oct 14 05:42:58 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434978225941, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [133], "files_L6": [131], "score": -1, "input_data_size": 10352202, "oldest_snapshot_seqno": -1}
Oct 14 05:42:58 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #134: 7566 keys, 10232868 bytes, temperature: kUnknown
Oct 14 05:42:58 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434978294400, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 134, "file_size": 10232868, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10182657, "index_size": 30194, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18949, "raw_key_size": 198427, "raw_average_key_size": 26, "raw_value_size": 10047819, "raw_average_value_size": 1328, "num_data_blocks": 1177, "num_entries": 7566, "num_filter_entries": 7566, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760434978, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:42:58 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:42:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:58.294728) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 10232868 bytes
Oct 14 05:42:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:58.296287) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 151.0 rd, 149.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.5 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(48.4) write-amplify(24.1) OK, records in: 8091, records dropped: 525 output_compression: NoCompression
Oct 14 05:42:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:58.296318) EVENT_LOG_v1 {"time_micros": 1760434978296304, "job": 80, "event": "compaction_finished", "compaction_time_micros": 68541, "compaction_time_cpu_micros": 48017, "output_level": 6, "num_output_files": 1, "total_output_size": 10232868, "num_input_records": 8091, "num_output_records": 7566, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 05:42:58 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000133.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:42:58 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434978296660, "job": 80, "event": "table_file_deletion", "file_number": 133}
Oct 14 05:42:58 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:42:58 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760434978300963, "job": 80, "event": "table_file_deletion", "file_number": 131}
Oct 14 05:42:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:58.225524) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:42:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:58.301108) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:42:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:58.301116) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:42:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:58.301119) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:42:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:58.301123) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:42:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:42:58.301126) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:42:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:42:58 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2145387823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:42:58 np0005486808 nova_compute[259627]: 2025-10-14 09:42:58.498 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:42:58 np0005486808 nova_compute[259627]: 2025-10-14 09:42:58.590 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000098 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 05:42:58 np0005486808 nova_compute[259627]: 2025-10-14 09:42:58.590 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000098 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 14 05:42:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2695: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 218 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Oct 14 05:42:58 np0005486808 nova_compute[259627]: 2025-10-14 09:42:58.867 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 05:42:58 np0005486808 nova_compute[259627]: 2025-10-14 09:42:58.869 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3408MB free_disk=59.94289016723633GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 14 05:42:58 np0005486808 nova_compute[259627]: 2025-10-14 09:42:58.870 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:42:58 np0005486808 nova_compute[259627]: 2025-10-14 09:42:58.870 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:42:58 np0005486808 nova_compute[259627]: 2025-10-14 09:42:58.967 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance f6f1bb88-2b88-4876-ab72-3e4e4dc9a578 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 14 05:42:58 np0005486808 nova_compute[259627]: 2025-10-14 09:42:58.968 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 14 05:42:58 np0005486808 nova_compute[259627]: 2025-10-14 09:42:58.968 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 14 05:42:59 np0005486808 nova_compute[259627]: 2025-10-14 09:42:59.028 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:42:59 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:42:59 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/734493827' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:42:59 np0005486808 nova_compute[259627]: 2025-10-14 09:42:59.501 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:42:59 np0005486808 nova_compute[259627]: 2025-10-14 09:42:59.508 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 05:42:59 np0005486808 nova_compute[259627]: 2025-10-14 09:42:59.527 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 05:42:59 np0005486808 nova_compute[259627]: 2025-10-14 09:42:59.549 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 05:42:59 np0005486808 nova_compute[259627]: 2025-10-14 09:42:59.549 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:42:59 np0005486808 nova_compute[259627]: 2025-10-14 09:42:59.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:43:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2696: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 219 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Oct 14 05:43:01 np0005486808 podman[421252]: 2025-10-14 09:43:01.692427893 +0000 UTC m=+0.092041500 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd)
Oct 14 05:43:01 np0005486808 podman[421253]: 2025-10-14 09:43:01.719466646 +0000 UTC m=+0.115366192 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, config_id=iscsid)
Oct 14 05:43:02 np0005486808 nova_compute[259627]: 2025-10-14 09:43:02.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:43:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2697: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 219 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Oct 14 05:43:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:43:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:43:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:43:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:43:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:43:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:43:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:43:03 np0005486808 nova_compute[259627]: 2025-10-14 09:43:03.551 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:43:03 np0005486808 nova_compute[259627]: 2025-10-14 09:43:03.552 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:43:03 np0005486808 nova_compute[259627]: 2025-10-14 09:43:03.552 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 05:43:03 np0005486808 nova_compute[259627]: 2025-10-14 09:43:03.553 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 05:43:03 np0005486808 nova_compute[259627]: 2025-10-14 09:43:03.765 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "refresh_cache-f6f1bb88-2b88-4876-ab72-3e4e4dc9a578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 14 05:43:03 np0005486808 nova_compute[259627]: 2025-10-14 09:43:03.766 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquired lock "refresh_cache-f6f1bb88-2b88-4876-ab72-3e4e4dc9a578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 14 05:43:03 np0005486808 nova_compute[259627]: 2025-10-14 09:43:03.766 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 14 05:43:03 np0005486808 nova_compute[259627]: 2025-10-14 09:43:03.767 2 DEBUG nova.objects.instance [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f6f1bb88-2b88-4876-ab72-3e4e4dc9a578 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:43:04 np0005486808 nova_compute[259627]: 2025-10-14 09:43:04.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:43:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2698: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 219 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Oct 14 05:43:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:43:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/151493486' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:43:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:43:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/151493486' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:43:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2699: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 219 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Oct 14 05:43:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:43:07.058 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:43:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:43:07.059 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:43:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:43:07.060 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:43:07 np0005486808 nova_compute[259627]: 2025-10-14 09:43:07.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:43:07 np0005486808 nova_compute[259627]: 2025-10-14 09:43:07.759 2 DEBUG nova.network.neutron [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Updating instance_info_cache with network_info: [{"id": "b3e71767-ffef-473e-947a-bb35562569c9", "address": "fa:16:3e:57:ac:57", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:ac57", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e71767-ff", "ovs_interfaceid": "b3e71767-ffef-473e-947a-bb35562569c9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:43:07 np0005486808 nova_compute[259627]: 2025-10-14 09:43:07.776 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Releasing lock "refresh_cache-f6f1bb88-2b88-4876-ab72-3e4e4dc9a578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:43:07 np0005486808 nova_compute[259627]: 2025-10-14 09:43:07.776 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 14 05:43:07 np0005486808 nova_compute[259627]: 2025-10-14 09:43:07.776 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:43:07 np0005486808 nova_compute[259627]: 2025-10-14 09:43:07.776 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:43:07 np0005486808 nova_compute[259627]: 2025-10-14 09:43:07.777 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:43:07 np0005486808 nova_compute[259627]: 2025-10-14 09:43:07.777 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:43:07 np0005486808 nova_compute[259627]: 2025-10-14 09:43:07.777 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:43:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:43:08 np0005486808 nova_compute[259627]: 2025-10-14 09:43:08.351 2 DEBUG oslo_concurrency.lockutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "4c73d661-07da-475c-be98-686abd5354f9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:43:08 np0005486808 nova_compute[259627]: 2025-10-14 09:43:08.353 2 DEBUG oslo_concurrency.lockutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "4c73d661-07da-475c-be98-686abd5354f9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:43:08 np0005486808 nova_compute[259627]: 2025-10-14 09:43:08.370 2 DEBUG nova.compute.manager [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:43:08 np0005486808 nova_compute[259627]: 2025-10-14 09:43:08.457 2 DEBUG oslo_concurrency.lockutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:43:08 np0005486808 nova_compute[259627]: 2025-10-14 09:43:08.457 2 DEBUG oslo_concurrency.lockutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:43:08 np0005486808 nova_compute[259627]: 2025-10-14 09:43:08.465 2 DEBUG nova.virt.hardware [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:43:08 np0005486808 nova_compute[259627]: 2025-10-14 09:43:08.466 2 INFO nova.compute.claims [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:43:08 np0005486808 nova_compute[259627]: 2025-10-14 09:43:08.611 2 DEBUG oslo_concurrency.processutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:43:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2700: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 341 B/s rd, 12 KiB/s wr, 0 op/s
Oct 14 05:43:09 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:43:09 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2790316001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:43:09 np0005486808 nova_compute[259627]: 2025-10-14 09:43:09.111 2 DEBUG oslo_concurrency.processutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:43:09 np0005486808 nova_compute[259627]: 2025-10-14 09:43:09.121 2 DEBUG nova.compute.provider_tree [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:43:09 np0005486808 nova_compute[259627]: 2025-10-14 09:43:09.142 2 DEBUG nova.scheduler.client.report [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:43:09 np0005486808 nova_compute[259627]: 2025-10-14 09:43:09.172 2 DEBUG oslo_concurrency.lockutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:43:09 np0005486808 nova_compute[259627]: 2025-10-14 09:43:09.173 2 DEBUG nova.compute.manager [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:43:09 np0005486808 nova_compute[259627]: 2025-10-14 09:43:09.251 2 DEBUG nova.compute.manager [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:43:09 np0005486808 nova_compute[259627]: 2025-10-14 09:43:09.252 2 DEBUG nova.network.neutron [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:43:09 np0005486808 nova_compute[259627]: 2025-10-14 09:43:09.278 2 INFO nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:43:09 np0005486808 nova_compute[259627]: 2025-10-14 09:43:09.314 2 DEBUG nova.compute.manager [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:43:09 np0005486808 nova_compute[259627]: 2025-10-14 09:43:09.440 2 DEBUG nova.compute.manager [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:43:09 np0005486808 nova_compute[259627]: 2025-10-14 09:43:09.443 2 DEBUG nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:43:09 np0005486808 nova_compute[259627]: 2025-10-14 09:43:09.444 2 INFO nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Creating image(s)#033[00m
Oct 14 05:43:09 np0005486808 nova_compute[259627]: 2025-10-14 09:43:09.485 2 DEBUG nova.storage.rbd_utils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 4c73d661-07da-475c-be98-686abd5354f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:43:09 np0005486808 nova_compute[259627]: 2025-10-14 09:43:09.523 2 DEBUG nova.storage.rbd_utils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 4c73d661-07da-475c-be98-686abd5354f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:43:09 np0005486808 nova_compute[259627]: 2025-10-14 09:43:09.561 2 DEBUG nova.storage.rbd_utils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 4c73d661-07da-475c-be98-686abd5354f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:43:09 np0005486808 nova_compute[259627]: 2025-10-14 09:43:09.567 2 DEBUG oslo_concurrency.processutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:43:09 np0005486808 nova_compute[259627]: 2025-10-14 09:43:09.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:43:09 np0005486808 nova_compute[259627]: 2025-10-14 09:43:09.679 2 DEBUG oslo_concurrency.processutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.112s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:43:09 np0005486808 nova_compute[259627]: 2025-10-14 09:43:09.680 2 DEBUG oslo_concurrency.lockutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:43:09 np0005486808 nova_compute[259627]: 2025-10-14 09:43:09.682 2 DEBUG oslo_concurrency.lockutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:43:09 np0005486808 nova_compute[259627]: 2025-10-14 09:43:09.682 2 DEBUG oslo_concurrency.lockutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:43:09 np0005486808 nova_compute[259627]: 2025-10-14 09:43:09.721 2 DEBUG nova.storage.rbd_utils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 4c73d661-07da-475c-be98-686abd5354f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:43:09 np0005486808 nova_compute[259627]: 2025-10-14 09:43:09.729 2 DEBUG oslo_concurrency.processutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 4c73d661-07da-475c-be98-686abd5354f9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:43:09 np0005486808 nova_compute[259627]: 2025-10-14 09:43:09.836 2 DEBUG nova.policy [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30cd4a7832e74a8ba07238937405c4ea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 14 05:43:10 np0005486808 nova_compute[259627]: 2025-10-14 09:43:10.046 2 DEBUG oslo_concurrency.processutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 4c73d661-07da-475c-be98-686abd5354f9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.317s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:43:10 np0005486808 nova_compute[259627]: 2025-10-14 09:43:10.108 2 DEBUG nova.storage.rbd_utils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] resizing rbd image 4c73d661-07da-475c-be98-686abd5354f9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 14 05:43:10 np0005486808 nova_compute[259627]: 2025-10-14 09:43:10.207 2 DEBUG nova.objects.instance [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'migration_context' on Instance uuid 4c73d661-07da-475c-be98-686abd5354f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:43:10 np0005486808 nova_compute[259627]: 2025-10-14 09:43:10.229 2 DEBUG nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 14 05:43:10 np0005486808 nova_compute[259627]: 2025-10-14 09:43:10.229 2 DEBUG nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Ensure instance console log exists: /var/lib/nova/instances/4c73d661-07da-475c-be98-686abd5354f9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 14 05:43:10 np0005486808 nova_compute[259627]: 2025-10-14 09:43:10.230 2 DEBUG oslo_concurrency.lockutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:43:10 np0005486808 nova_compute[259627]: 2025-10-14 09:43:10.231 2 DEBUG oslo_concurrency.lockutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:43:10 np0005486808 nova_compute[259627]: 2025-10-14 09:43:10.231 2 DEBUG oslo_concurrency.lockutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:43:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2701: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:43:11 np0005486808 nova_compute[259627]: 2025-10-14 09:43:11.077 2 DEBUG nova.network.neutron [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Successfully created port: bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 14 05:43:12 np0005486808 nova_compute[259627]: 2025-10-14 09:43:12.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:43:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2702: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Oct 14 05:43:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:43:13 np0005486808 nova_compute[259627]: 2025-10-14 09:43:13.282 2 DEBUG nova.network.neutron [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Successfully updated port: bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 14 05:43:13 np0005486808 nova_compute[259627]: 2025-10-14 09:43:13.309 2 DEBUG oslo_concurrency.lockutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "refresh_cache-4c73d661-07da-475c-be98-686abd5354f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:43:13 np0005486808 nova_compute[259627]: 2025-10-14 09:43:13.310 2 DEBUG oslo_concurrency.lockutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquired lock "refresh_cache-4c73d661-07da-475c-be98-686abd5354f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:43:13 np0005486808 nova_compute[259627]: 2025-10-14 09:43:13.310 2 DEBUG nova.network.neutron [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:43:13 np0005486808 nova_compute[259627]: 2025-10-14 09:43:13.449 2 DEBUG nova.compute.manager [req-7a2af32f-efc1-4872-a615-5c713e0a5042 req-e32d1658-497f-48d4-8fa1-960d782684d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Received event network-changed-bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:43:13 np0005486808 nova_compute[259627]: 2025-10-14 09:43:13.449 2 DEBUG nova.compute.manager [req-7a2af32f-efc1-4872-a615-5c713e0a5042 req-e32d1658-497f-48d4-8fa1-960d782684d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Refreshing instance network info cache due to event network-changed-bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:43:13 np0005486808 nova_compute[259627]: 2025-10-14 09:43:13.450 2 DEBUG oslo_concurrency.lockutils [req-7a2af32f-efc1-4872-a615-5c713e0a5042 req-e32d1658-497f-48d4-8fa1-960d782684d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-4c73d661-07da-475c-be98-686abd5354f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:43:13 np0005486808 nova_compute[259627]: 2025-10-14 09:43:13.715 2 DEBUG nova.network.neutron [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:43:14 np0005486808 nova_compute[259627]: 2025-10-14 09:43:14.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:43:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2703: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Oct 14 05:43:15 np0005486808 nova_compute[259627]: 2025-10-14 09:43:15.369 2 DEBUG nova.network.neutron [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Updating instance_info_cache with network_info: [{"id": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "address": "fa:16:3e:f0:34:84", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:3484", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc5220f4-1b", "ovs_interfaceid": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:43:15 np0005486808 nova_compute[259627]: 2025-10-14 09:43:15.392 2 DEBUG oslo_concurrency.lockutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Releasing lock "refresh_cache-4c73d661-07da-475c-be98-686abd5354f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:43:15 np0005486808 nova_compute[259627]: 2025-10-14 09:43:15.392 2 DEBUG nova.compute.manager [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Instance network_info: |[{"id": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "address": "fa:16:3e:f0:34:84", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:3484", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc5220f4-1b", "ovs_interfaceid": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 14 05:43:15 np0005486808 nova_compute[259627]: 2025-10-14 09:43:15.393 2 DEBUG oslo_concurrency.lockutils [req-7a2af32f-efc1-4872-a615-5c713e0a5042 req-e32d1658-497f-48d4-8fa1-960d782684d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-4c73d661-07da-475c-be98-686abd5354f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:43:15 np0005486808 nova_compute[259627]: 2025-10-14 09:43:15.393 2 DEBUG nova.network.neutron [req-7a2af32f-efc1-4872-a615-5c713e0a5042 req-e32d1658-497f-48d4-8fa1-960d782684d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Refreshing network info cache for port bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:43:15 np0005486808 nova_compute[259627]: 2025-10-14 09:43:15.399 2 DEBUG nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Start _get_guest_xml network_info=[{"id": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "address": "fa:16:3e:f0:34:84", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:3484", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc5220f4-1b", "ovs_interfaceid": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 14 05:43:15 np0005486808 nova_compute[259627]: 2025-10-14 09:43:15.405 2 WARNING nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:43:15 np0005486808 nova_compute[259627]: 2025-10-14 09:43:15.416 2 DEBUG nova.virt.libvirt.host [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 14 05:43:15 np0005486808 nova_compute[259627]: 2025-10-14 09:43:15.417 2 DEBUG nova.virt.libvirt.host [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 14 05:43:15 np0005486808 nova_compute[259627]: 2025-10-14 09:43:15.421 2 DEBUG nova.virt.libvirt.host [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 14 05:43:15 np0005486808 nova_compute[259627]: 2025-10-14 09:43:15.422 2 DEBUG nova.virt.libvirt.host [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 14 05:43:15 np0005486808 nova_compute[259627]: 2025-10-14 09:43:15.423 2 DEBUG nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 14 05:43:15 np0005486808 nova_compute[259627]: 2025-10-14 09:43:15.423 2 DEBUG nova.virt.hardware [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 14 05:43:15 np0005486808 nova_compute[259627]: 2025-10-14 09:43:15.424 2 DEBUG nova.virt.hardware [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 14 05:43:15 np0005486808 nova_compute[259627]: 2025-10-14 09:43:15.424 2 DEBUG nova.virt.hardware [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 14 05:43:15 np0005486808 nova_compute[259627]: 2025-10-14 09:43:15.425 2 DEBUG nova.virt.hardware [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 14 05:43:15 np0005486808 nova_compute[259627]: 2025-10-14 09:43:15.425 2 DEBUG nova.virt.hardware [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 14 05:43:15 np0005486808 nova_compute[259627]: 2025-10-14 09:43:15.426 2 DEBUG nova.virt.hardware [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 14 05:43:15 np0005486808 nova_compute[259627]: 2025-10-14 09:43:15.427 2 DEBUG nova.virt.hardware [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 14 05:43:15 np0005486808 nova_compute[259627]: 2025-10-14 09:43:15.427 2 DEBUG nova.virt.hardware [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 14 05:43:15 np0005486808 nova_compute[259627]: 2025-10-14 09:43:15.428 2 DEBUG nova.virt.hardware [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 14 05:43:15 np0005486808 nova_compute[259627]: 2025-10-14 09:43:15.428 2 DEBUG nova.virt.hardware [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 14 05:43:15 np0005486808 nova_compute[259627]: 2025-10-14 09:43:15.429 2 DEBUG nova.virt.hardware [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 14 05:43:15 np0005486808 nova_compute[259627]: 2025-10-14 09:43:15.434 2 DEBUG oslo_concurrency.processutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:43:15 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:43:15 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/332365176' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:43:15 np0005486808 nova_compute[259627]: 2025-10-14 09:43:15.983 2 DEBUG oslo_concurrency.processutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:43:16 np0005486808 nova_compute[259627]: 2025-10-14 09:43:16.025 2 DEBUG nova.storage.rbd_utils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 4c73d661-07da-475c-be98-686abd5354f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:43:16 np0005486808 nova_compute[259627]: 2025-10-14 09:43:16.031 2 DEBUG oslo_concurrency.processutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:43:16 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:43:16 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1263957661' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:43:16 np0005486808 nova_compute[259627]: 2025-10-14 09:43:16.516 2 DEBUG oslo_concurrency.processutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:43:16 np0005486808 nova_compute[259627]: 2025-10-14 09:43:16.518 2 DEBUG nova.virt.libvirt.vif [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:43:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-199870946',display_name='tempest-TestGettingAddress-server-199870946',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-199870946',id=153,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA6mjdW3JQlIBQQd52sNXAHhyt49LvnDwoLKe8QrZpUHOv1Ky+CeaemM120Tsqn8n4PYQVzJvOC7aPmnxBpwslIeaFMTr8U05DGb5bjn09df7W51DFKPiOou1XQCmZ1ugQ==',key_name='tempest-TestGettingAddress-1974181320',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-hpaifwur',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:43:09Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=4c73d661-07da-475c-be98-686abd5354f9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "address": "fa:16:3e:f0:34:84", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:3484", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc5220f4-1b", "ovs_interfaceid": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 14 05:43:16 np0005486808 nova_compute[259627]: 2025-10-14 09:43:16.518 2 DEBUG nova.network.os_vif_util [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "address": "fa:16:3e:f0:34:84", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:3484", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc5220f4-1b", "ovs_interfaceid": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:43:16 np0005486808 nova_compute[259627]: 2025-10-14 09:43:16.519 2 DEBUG nova.network.os_vif_util [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:34:84,bridge_name='br-int',has_traffic_filtering=True,id=bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe,network=Network(aa331e50-389c-4e3c-80a7-7c8364a3fce5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc5220f4-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:43:16 np0005486808 nova_compute[259627]: 2025-10-14 09:43:16.521 2 DEBUG nova.objects.instance [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'pci_devices' on Instance uuid 4c73d661-07da-475c-be98-686abd5354f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:43:16 np0005486808 nova_compute[259627]: 2025-10-14 09:43:16.544 2 DEBUG nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:43:16 np0005486808 nova_compute[259627]:  <uuid>4c73d661-07da-475c-be98-686abd5354f9</uuid>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:  <name>instance-00000099</name>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:43:16 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:      <nova:name>tempest-TestGettingAddress-server-199870946</nova:name>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:43:15</nova:creationTime>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:43:16 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:        <nova:user uuid="30cd4a7832e74a8ba07238937405c4ea">tempest-TestGettingAddress-262346303-project-member</nova:user>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:        <nova:project uuid="878ad3f42dd94bef8cc66ab3d67f03bf">tempest-TestGettingAddress-262346303</nova:project>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:      <nova:ports>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:        <nova:port uuid="bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe">
Oct 14 05:43:16 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fef0:3484" ipVersion="6"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:        </nova:port>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:      </nova:ports>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:      <entry name="serial">4c73d661-07da-475c-be98-686abd5354f9</entry>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:      <entry name="uuid">4c73d661-07da-475c-be98-686abd5354f9</entry>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:43:16 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/4c73d661-07da-475c-be98-686abd5354f9_disk">
Oct 14 05:43:16 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:43:16 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:43:16 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/4c73d661-07da-475c-be98-686abd5354f9_disk.config">
Oct 14 05:43:16 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:43:16 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <interface type="ethernet">
Oct 14 05:43:16 np0005486808 nova_compute[259627]:      <mac address="fa:16:3e:f0:34:84"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:      <driver name="vhost" rx_queue_size="512"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:      <mtu size="1442"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:      <target dev="tapbc5220f4-1b"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    </interface>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:43:16 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/4c73d661-07da-475c-be98-686abd5354f9/console.log" append="off"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:43:16 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:43:16 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:43:16 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:43:16 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:43:16 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 14 05:43:16 np0005486808 nova_compute[259627]: 2025-10-14 09:43:16.546 2 DEBUG nova.compute.manager [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Preparing to wait for external event network-vif-plugged-bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 14 05:43:16 np0005486808 nova_compute[259627]: 2025-10-14 09:43:16.546 2 DEBUG oslo_concurrency.lockutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "4c73d661-07da-475c-be98-686abd5354f9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:43:16 np0005486808 nova_compute[259627]: 2025-10-14 09:43:16.547 2 DEBUG oslo_concurrency.lockutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "4c73d661-07da-475c-be98-686abd5354f9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:43:16 np0005486808 nova_compute[259627]: 2025-10-14 09:43:16.547 2 DEBUG oslo_concurrency.lockutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "4c73d661-07da-475c-be98-686abd5354f9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:43:16 np0005486808 nova_compute[259627]: 2025-10-14 09:43:16.548 2 DEBUG nova.virt.libvirt.vif [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-14T09:43:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-199870946',display_name='tempest-TestGettingAddress-server-199870946',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-199870946',id=153,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA6mjdW3JQlIBQQd52sNXAHhyt49LvnDwoLKe8QrZpUHOv1Ky+CeaemM120Tsqn8n4PYQVzJvOC7aPmnxBpwslIeaFMTr8U05DGb5bjn09df7W51DFKPiOou1XQCmZ1ugQ==',key_name='tempest-TestGettingAddress-1974181320',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-hpaifwur',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-14T09:43:09Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=4c73d661-07da-475c-be98-686abd5354f9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "address": "fa:16:3e:f0:34:84", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:3484", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc5220f4-1b", "ovs_interfaceid": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 14 05:43:16 np0005486808 nova_compute[259627]: 2025-10-14 09:43:16.548 2 DEBUG nova.network.os_vif_util [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "address": "fa:16:3e:f0:34:84", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:3484", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc5220f4-1b", "ovs_interfaceid": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:43:16 np0005486808 nova_compute[259627]: 2025-10-14 09:43:16.549 2 DEBUG nova.network.os_vif_util [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:34:84,bridge_name='br-int',has_traffic_filtering=True,id=bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe,network=Network(aa331e50-389c-4e3c-80a7-7c8364a3fce5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc5220f4-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:43:16 np0005486808 nova_compute[259627]: 2025-10-14 09:43:16.549 2 DEBUG os_vif [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:34:84,bridge_name='br-int',has_traffic_filtering=True,id=bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe,network=Network(aa331e50-389c-4e3c-80a7-7c8364a3fce5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc5220f4-1b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 14 05:43:16 np0005486808 nova_compute[259627]: 2025-10-14 09:43:16.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:43:16 np0005486808 nova_compute[259627]: 2025-10-14 09:43:16.550 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:43:16 np0005486808 nova_compute[259627]: 2025-10-14 09:43:16.550 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:43:16 np0005486808 nova_compute[259627]: 2025-10-14 09:43:16.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:43:16 np0005486808 nova_compute[259627]: 2025-10-14 09:43:16.556 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbc5220f4-1b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:43:16 np0005486808 nova_compute[259627]: 2025-10-14 09:43:16.556 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbc5220f4-1b, col_values=(('external_ids', {'iface-id': 'bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f0:34:84', 'vm-uuid': '4c73d661-07da-475c-be98-686abd5354f9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:43:16 np0005486808 nova_compute[259627]: 2025-10-14 09:43:16.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:43:16 np0005486808 NetworkManager[44885]: <info>  [1760434996.5593] manager: (tapbc5220f4-1b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/688)
Oct 14 05:43:16 np0005486808 nova_compute[259627]: 2025-10-14 09:43:16.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:43:16 np0005486808 nova_compute[259627]: 2025-10-14 09:43:16.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:43:16 np0005486808 nova_compute[259627]: 2025-10-14 09:43:16.567 2 INFO os_vif [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:34:84,bridge_name='br-int',has_traffic_filtering=True,id=bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe,network=Network(aa331e50-389c-4e3c-80a7-7c8364a3fce5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc5220f4-1b')#033[00m
Oct 14 05:43:16 np0005486808 nova_compute[259627]: 2025-10-14 09:43:16.625 2 DEBUG nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:43:16 np0005486808 nova_compute[259627]: 2025-10-14 09:43:16.625 2 DEBUG nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 14 05:43:16 np0005486808 nova_compute[259627]: 2025-10-14 09:43:16.625 2 DEBUG nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] No VIF found with MAC fa:16:3e:f0:34:84, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 14 05:43:16 np0005486808 nova_compute[259627]: 2025-10-14 09:43:16.626 2 INFO nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Using config drive#033[00m
Oct 14 05:43:16 np0005486808 podman[421546]: 2025-10-14 09:43:16.629878875 +0000 UTC m=+0.046475941 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:43:16 np0005486808 podman[421543]: 2025-10-14 09:43:16.652588492 +0000 UTC m=+0.073797262 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:43:16 np0005486808 nova_compute[259627]: 2025-10-14 09:43:16.653 2 DEBUG nova.storage.rbd_utils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 4c73d661-07da-475c-be98-686abd5354f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:43:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2704: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:43:17 np0005486808 nova_compute[259627]: 2025-10-14 09:43:17.018 2 INFO nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Creating config drive at /var/lib/nova/instances/4c73d661-07da-475c-be98-686abd5354f9/disk.config#033[00m
Oct 14 05:43:17 np0005486808 nova_compute[259627]: 2025-10-14 09:43:17.023 2 DEBUG oslo_concurrency.processutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4c73d661-07da-475c-be98-686abd5354f9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeptijra8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:43:17 np0005486808 nova_compute[259627]: 2025-10-14 09:43:17.164 2 DEBUG oslo_concurrency.processutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4c73d661-07da-475c-be98-686abd5354f9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeptijra8" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:43:17 np0005486808 nova_compute[259627]: 2025-10-14 09:43:17.188 2 DEBUG nova.storage.rbd_utils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] rbd image 4c73d661-07da-475c-be98-686abd5354f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:43:17 np0005486808 nova_compute[259627]: 2025-10-14 09:43:17.191 2 DEBUG oslo_concurrency.processutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4c73d661-07da-475c-be98-686abd5354f9/disk.config 4c73d661-07da-475c-be98-686abd5354f9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:43:17 np0005486808 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #52. Immutable memtables: 0.
Oct 14 05:43:17 np0005486808 nova_compute[259627]: 2025-10-14 09:43:17.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:43:17 np0005486808 nova_compute[259627]: 2025-10-14 09:43:17.343 2 DEBUG oslo_concurrency.processutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4c73d661-07da-475c-be98-686abd5354f9/disk.config 4c73d661-07da-475c-be98-686abd5354f9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:43:17 np0005486808 nova_compute[259627]: 2025-10-14 09:43:17.344 2 INFO nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Deleting local config drive /var/lib/nova/instances/4c73d661-07da-475c-be98-686abd5354f9/disk.config because it was imported into RBD.#033[00m
Oct 14 05:43:17 np0005486808 kernel: tapbc5220f4-1b: entered promiscuous mode
Oct 14 05:43:17 np0005486808 NetworkManager[44885]: <info>  [1760434997.3941] manager: (tapbc5220f4-1b): new Tun device (/org/freedesktop/NetworkManager/Devices/689)
Oct 14 05:43:17 np0005486808 nova_compute[259627]: 2025-10-14 09:43:17.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:43:17 np0005486808 ovn_controller[152662]: 2025-10-14T09:43:17Z|01672|binding|INFO|Claiming lport bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe for this chassis.
Oct 14 05:43:17 np0005486808 ovn_controller[152662]: 2025-10-14T09:43:17Z|01673|binding|INFO|bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe: Claiming fa:16:3e:f0:34:84 10.100.0.11 2001:db8::f816:3eff:fef0:3484
Oct 14 05:43:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:43:17.404 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:34:84 10.100.0.11 2001:db8::f816:3eff:fef0:3484'], port_security=['fa:16:3e:f0:34:84 10.100.0.11 2001:db8::f816:3eff:fef0:3484'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28 2001:db8::f816:3eff:fef0:3484/64', 'neutron:device_id': '4c73d661-07da-475c-be98-686abd5354f9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa331e50-389c-4e3c-80a7-7c8364a3fce5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f0bf2f6f-4af7-4c93-8c0d-bb1990d0ba82', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2bf1cb75-5af4-410a-83b2-dc168fe7fc9d, chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:43:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:43:17.406 162547 INFO neutron.agent.ovn.metadata.agent [-] Port bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe in datapath aa331e50-389c-4e3c-80a7-7c8364a3fce5 bound to our chassis#033[00m
Oct 14 05:43:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:43:17.408 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa331e50-389c-4e3c-80a7-7c8364a3fce5#033[00m
Oct 14 05:43:17 np0005486808 nova_compute[259627]: 2025-10-14 09:43:17.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:43:17 np0005486808 ovn_controller[152662]: 2025-10-14T09:43:17Z|01674|binding|INFO|Setting lport bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe up in Southbound
Oct 14 05:43:17 np0005486808 ovn_controller[152662]: 2025-10-14T09:43:17Z|01675|binding|INFO|Setting lport bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe ovn-installed in OVS
Oct 14 05:43:17 np0005486808 nova_compute[259627]: 2025-10-14 09:43:17.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:43:17 np0005486808 systemd-machined[214636]: New machine qemu-186-instance-00000099.
Oct 14 05:43:17 np0005486808 systemd-udevd[421662]: Network interface NamePolicy= disabled on kernel command line.
Oct 14 05:43:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:43:17.432 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[e10d193e-a8ad-49d8-b8a5-5411ede24a6d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:43:17 np0005486808 systemd[1]: Started Virtual Machine qemu-186-instance-00000099.
Oct 14 05:43:17 np0005486808 NetworkManager[44885]: <info>  [1760434997.4468] device (tapbc5220f4-1b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 14 05:43:17 np0005486808 NetworkManager[44885]: <info>  [1760434997.4475] device (tapbc5220f4-1b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 14 05:43:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:43:17.471 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[f2b42e32-085e-488c-b772-2c16e7d25260]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:43:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:43:17.474 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[b303643c-759f-4ec8-b5da-50f29908d4be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:43:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:43:17.508 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a8db9706-c940-43cc-a39f-385154c0ce67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:43:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:43:17.526 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c59646fb-d538-408e-91be-a96be6d773d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa331e50-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:26:39'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 22, 'tx_packets': 5, 'rx_bytes': 1956, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 22, 'tx_packets': 5, 'rx_bytes': 1956, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 473], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 873982, 'reachable_time': 36414, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 20, 'inoctets': 1592, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 20, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1592, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 20, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 421674, 'error': None, 'target': 'ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:43:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:43:17.545 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[b071442c-0d3b-4e69-8751-c4fdde064cf2]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapaa331e50-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 873997, 'tstamp': 873997}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 421676, 'error': None, 'target': 'ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapaa331e50-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 874001, 'tstamp': 874001}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 421676, 'error': None, 'target': 'ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:43:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:43:17.547 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa331e50-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:43:17 np0005486808 nova_compute[259627]: 2025-10-14 09:43:17.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:43:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:43:17.552 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa331e50-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:43:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:43:17.553 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:43:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:43:17.553 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa331e50-30, col_values=(('external_ids', {'iface-id': '176e1cce-63d0-4e90-9078-245732aff057'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:43:17 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:43:17.554 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:43:17 np0005486808 nova_compute[259627]: 2025-10-14 09:43:17.603 2 DEBUG nova.network.neutron [req-7a2af32f-efc1-4872-a615-5c713e0a5042 req-e32d1658-497f-48d4-8fa1-960d782684d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Updated VIF entry in instance network info cache for port bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:43:17 np0005486808 nova_compute[259627]: 2025-10-14 09:43:17.604 2 DEBUG nova.network.neutron [req-7a2af32f-efc1-4872-a615-5c713e0a5042 req-e32d1658-497f-48d4-8fa1-960d782684d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Updating instance_info_cache with network_info: [{"id": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "address": "fa:16:3e:f0:34:84", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:3484", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc5220f4-1b", "ovs_interfaceid": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:43:17 np0005486808 nova_compute[259627]: 2025-10-14 09:43:17.623 2 DEBUG oslo_concurrency.lockutils [req-7a2af32f-efc1-4872-a615-5c713e0a5042 req-e32d1658-497f-48d4-8fa1-960d782684d7 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-4c73d661-07da-475c-be98-686abd5354f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:43:17 np0005486808 nova_compute[259627]: 2025-10-14 09:43:17.649 2 DEBUG nova.compute.manager [req-2957ed86-9c3b-44a2-b60f-948f932df59d req-8bbcc64e-b7c5-4a1a-bd2f-0b535a0a0c35 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Received event network-vif-plugged-bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:43:17 np0005486808 nova_compute[259627]: 2025-10-14 09:43:17.650 2 DEBUG oslo_concurrency.lockutils [req-2957ed86-9c3b-44a2-b60f-948f932df59d req-8bbcc64e-b7c5-4a1a-bd2f-0b535a0a0c35 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4c73d661-07da-475c-be98-686abd5354f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:43:17 np0005486808 nova_compute[259627]: 2025-10-14 09:43:17.650 2 DEBUG oslo_concurrency.lockutils [req-2957ed86-9c3b-44a2-b60f-948f932df59d req-8bbcc64e-b7c5-4a1a-bd2f-0b535a0a0c35 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4c73d661-07da-475c-be98-686abd5354f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:43:17 np0005486808 nova_compute[259627]: 2025-10-14 09:43:17.651 2 DEBUG oslo_concurrency.lockutils [req-2957ed86-9c3b-44a2-b60f-948f932df59d req-8bbcc64e-b7c5-4a1a-bd2f-0b535a0a0c35 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4c73d661-07da-475c-be98-686abd5354f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:43:17 np0005486808 nova_compute[259627]: 2025-10-14 09:43:17.651 2 DEBUG nova.compute.manager [req-2957ed86-9c3b-44a2-b60f-948f932df59d req-8bbcc64e-b7c5-4a1a-bd2f-0b535a0a0c35 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Processing event network-vif-plugged-bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 14 05:43:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:43:18 np0005486808 nova_compute[259627]: 2025-10-14 09:43:18.234 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434998.2333598, 4c73d661-07da-475c-be98-686abd5354f9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:43:18 np0005486808 nova_compute[259627]: 2025-10-14 09:43:18.234 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4c73d661-07da-475c-be98-686abd5354f9] VM Started (Lifecycle Event)#033[00m
Oct 14 05:43:18 np0005486808 nova_compute[259627]: 2025-10-14 09:43:18.237 2 DEBUG nova.compute.manager [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:43:18 np0005486808 nova_compute[259627]: 2025-10-14 09:43:18.242 2 DEBUG nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:43:18 np0005486808 nova_compute[259627]: 2025-10-14 09:43:18.246 2 INFO nova.virt.libvirt.driver [-] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Instance spawned successfully.#033[00m
Oct 14 05:43:18 np0005486808 nova_compute[259627]: 2025-10-14 09:43:18.247 2 DEBUG nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:43:18 np0005486808 nova_compute[259627]: 2025-10-14 09:43:18.281 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:43:18 np0005486808 nova_compute[259627]: 2025-10-14 09:43:18.293 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:43:18 np0005486808 nova_compute[259627]: 2025-10-14 09:43:18.297 2 DEBUG nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:43:18 np0005486808 nova_compute[259627]: 2025-10-14 09:43:18.297 2 DEBUG nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:43:18 np0005486808 nova_compute[259627]: 2025-10-14 09:43:18.298 2 DEBUG nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:43:18 np0005486808 nova_compute[259627]: 2025-10-14 09:43:18.298 2 DEBUG nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:43:18 np0005486808 nova_compute[259627]: 2025-10-14 09:43:18.299 2 DEBUG nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:43:18 np0005486808 nova_compute[259627]: 2025-10-14 09:43:18.299 2 DEBUG nova.virt.libvirt.driver [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:43:18 np0005486808 nova_compute[259627]: 2025-10-14 09:43:18.331 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4c73d661-07da-475c-be98-686abd5354f9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:43:18 np0005486808 nova_compute[259627]: 2025-10-14 09:43:18.332 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434998.233727, 4c73d661-07da-475c-be98-686abd5354f9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:43:18 np0005486808 nova_compute[259627]: 2025-10-14 09:43:18.332 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4c73d661-07da-475c-be98-686abd5354f9] VM Paused (Lifecycle Event)#033[00m
Oct 14 05:43:18 np0005486808 nova_compute[259627]: 2025-10-14 09:43:18.363 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:43:18 np0005486808 nova_compute[259627]: 2025-10-14 09:43:18.367 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760434998.2416985, 4c73d661-07da-475c-be98-686abd5354f9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:43:18 np0005486808 nova_compute[259627]: 2025-10-14 09:43:18.367 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4c73d661-07da-475c-be98-686abd5354f9] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:43:18 np0005486808 nova_compute[259627]: 2025-10-14 09:43:18.378 2 INFO nova.compute.manager [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Took 8.94 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:43:18 np0005486808 nova_compute[259627]: 2025-10-14 09:43:18.378 2 DEBUG nova.compute.manager [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:43:18 np0005486808 nova_compute[259627]: 2025-10-14 09:43:18.389 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:43:18 np0005486808 nova_compute[259627]: 2025-10-14 09:43:18.393 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:43:18 np0005486808 nova_compute[259627]: 2025-10-14 09:43:18.424 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 4c73d661-07da-475c-be98-686abd5354f9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:43:18 np0005486808 nova_compute[259627]: 2025-10-14 09:43:18.454 2 INFO nova.compute.manager [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Took 10.03 seconds to build instance.#033[00m
Oct 14 05:43:18 np0005486808 nova_compute[259627]: 2025-10-14 09:43:18.485 2 DEBUG oslo_concurrency.lockutils [None req-63180b6f-69f6-4155-a6f1-befa3def9d7a 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "4c73d661-07da-475c-be98-686abd5354f9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.133s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:43:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2705: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 14 05:43:19 np0005486808 nova_compute[259627]: 2025-10-14 09:43:19.771 2 DEBUG nova.compute.manager [req-c72a2410-5a0d-45bb-b25f-08e254cf5794 req-112a8262-0e14-45e5-8ae3-f219ecb6fbb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Received event network-vif-plugged-bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:43:19 np0005486808 nova_compute[259627]: 2025-10-14 09:43:19.772 2 DEBUG oslo_concurrency.lockutils [req-c72a2410-5a0d-45bb-b25f-08e254cf5794 req-112a8262-0e14-45e5-8ae3-f219ecb6fbb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4c73d661-07da-475c-be98-686abd5354f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:43:19 np0005486808 nova_compute[259627]: 2025-10-14 09:43:19.772 2 DEBUG oslo_concurrency.lockutils [req-c72a2410-5a0d-45bb-b25f-08e254cf5794 req-112a8262-0e14-45e5-8ae3-f219ecb6fbb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4c73d661-07da-475c-be98-686abd5354f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:43:19 np0005486808 nova_compute[259627]: 2025-10-14 09:43:19.773 2 DEBUG oslo_concurrency.lockutils [req-c72a2410-5a0d-45bb-b25f-08e254cf5794 req-112a8262-0e14-45e5-8ae3-f219ecb6fbb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4c73d661-07da-475c-be98-686abd5354f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:43:19 np0005486808 nova_compute[259627]: 2025-10-14 09:43:19.773 2 DEBUG nova.compute.manager [req-c72a2410-5a0d-45bb-b25f-08e254cf5794 req-112a8262-0e14-45e5-8ae3-f219ecb6fbb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] No waiting events found dispatching network-vif-plugged-bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:43:19 np0005486808 nova_compute[259627]: 2025-10-14 09:43:19.773 2 WARNING nova.compute.manager [req-c72a2410-5a0d-45bb-b25f-08e254cf5794 req-112a8262-0e14-45e5-8ae3-f219ecb6fbb8 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Received unexpected event network-vif-plugged-bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe for instance with vm_state active and task_state None.#033[00m
Oct 14 05:43:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2706: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 14 05:43:21 np0005486808 nova_compute[259627]: 2025-10-14 09:43:21.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:43:21 np0005486808 nova_compute[259627]: 2025-10-14 09:43:21.783 2 DEBUG nova.compute.manager [req-db4b9fc4-1c12-4ddc-a3f9-1b20123ccb6e req-065e5d2e-f8f7-4e23-bcf4-40de8aae9a41 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Received event network-changed-bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:43:21 np0005486808 nova_compute[259627]: 2025-10-14 09:43:21.784 2 DEBUG nova.compute.manager [req-db4b9fc4-1c12-4ddc-a3f9-1b20123ccb6e req-065e5d2e-f8f7-4e23-bcf4-40de8aae9a41 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Refreshing instance network info cache due to event network-changed-bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:43:21 np0005486808 nova_compute[259627]: 2025-10-14 09:43:21.784 2 DEBUG oslo_concurrency.lockutils [req-db4b9fc4-1c12-4ddc-a3f9-1b20123ccb6e req-065e5d2e-f8f7-4e23-bcf4-40de8aae9a41 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-4c73d661-07da-475c-be98-686abd5354f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:43:21 np0005486808 nova_compute[259627]: 2025-10-14 09:43:21.785 2 DEBUG oslo_concurrency.lockutils [req-db4b9fc4-1c12-4ddc-a3f9-1b20123ccb6e req-065e5d2e-f8f7-4e23-bcf4-40de8aae9a41 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-4c73d661-07da-475c-be98-686abd5354f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:43:21 np0005486808 nova_compute[259627]: 2025-10-14 09:43:21.785 2 DEBUG nova.network.neutron [req-db4b9fc4-1c12-4ddc-a3f9-1b20123ccb6e req-065e5d2e-f8f7-4e23-bcf4-40de8aae9a41 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Refreshing network info cache for port bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:43:22 np0005486808 nova_compute[259627]: 2025-10-14 09:43:22.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:43:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2707: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 05:43:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:43:23 np0005486808 nova_compute[259627]: 2025-10-14 09:43:23.912 2 DEBUG nova.network.neutron [req-db4b9fc4-1c12-4ddc-a3f9-1b20123ccb6e req-065e5d2e-f8f7-4e23-bcf4-40de8aae9a41 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Updated VIF entry in instance network info cache for port bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:43:23 np0005486808 nova_compute[259627]: 2025-10-14 09:43:23.913 2 DEBUG nova.network.neutron [req-db4b9fc4-1c12-4ddc-a3f9-1b20123ccb6e req-065e5d2e-f8f7-4e23-bcf4-40de8aae9a41 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Updating instance_info_cache with network_info: [{"id": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "address": "fa:16:3e:f0:34:84", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:3484", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc5220f4-1b", "ovs_interfaceid": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:43:23 np0005486808 nova_compute[259627]: 2025-10-14 09:43:23.937 2 DEBUG oslo_concurrency.lockutils [req-db4b9fc4-1c12-4ddc-a3f9-1b20123ccb6e req-065e5d2e-f8f7-4e23-bcf4-40de8aae9a41 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-4c73d661-07da-475c-be98-686abd5354f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:43:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2708: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 14 05:43:26 np0005486808 nova_compute[259627]: 2025-10-14 09:43:26.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:43:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2709: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Oct 14 05:43:27 np0005486808 nova_compute[259627]: 2025-10-14 09:43:27.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:43:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:43:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2710: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Oct 14 05:43:29 np0005486808 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 05:43:29 np0005486808 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 45K writes, 182K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.04 MB/s#012Cumulative WAL: 45K writes, 16K syncs, 2.74 writes per sync, written: 0.18 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2800 writes, 11K keys, 2800 commit groups, 1.0 writes per commit group, ingest: 14.21 MB, 0.02 MB/s#012Interval WAL: 2799 writes, 1060 syncs, 2.64 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 05:43:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2711: 305 pgs: 305 active+clean; 188 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 97 op/s
Oct 14 05:43:31 np0005486808 nova_compute[259627]: 2025-10-14 09:43:31.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:43:32 np0005486808 nova_compute[259627]: 2025-10-14 09:43:32.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:43:32 np0005486808 podman[421720]: 2025-10-14 09:43:32.695974635 +0000 UTC m=+0.080924827 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:43:32 np0005486808 podman[421719]: 2025-10-14 09:43:32.725759586 +0000 UTC m=+0.111000215 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 05:43:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:43:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:43:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2712: 305 pgs: 305 active+clean; 188 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 2.0 MiB/s wr, 23 op/s
Oct 14 05:43:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:43:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:43:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:43:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:43:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:43:32
Oct 14 05:43:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:43:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:43:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', '.mgr', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms']
Oct 14 05:43:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:43:33 np0005486808 ovn_controller[152662]: 2025-10-14T09:43:33Z|00204|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f0:34:84 10.100.0.11
Oct 14 05:43:33 np0005486808 ovn_controller[152662]: 2025-10-14T09:43:33Z|00205|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f0:34:84 10.100.0.11
Oct 14 05:43:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:43:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:43:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:43:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:43:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:43:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:43:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:43:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:43:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:43:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:43:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:43:34 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 05:43:34 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 45K writes, 176K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.04 MB/s#012Cumulative WAL: 45K writes, 16K syncs, 2.71 writes per sync, written: 0.17 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2838 writes, 11K keys, 2838 commit groups, 1.0 writes per commit group, ingest: 12.66 MB, 0.02 MB/s#012Interval WAL: 2838 writes, 1140 syncs, 2.49 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 05:43:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2713: 305 pgs: 305 active+clean; 188 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 2.0 MiB/s wr, 23 op/s
Oct 14 05:43:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:43:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:43:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:43:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:43:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:43:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:43:36 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev eabe0181-4bce-4ad0-9efb-c5fa37cb0e54 does not exist
Oct 14 05:43:36 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 643d0043-ae13-4809-868c-36f0d555bd48 does not exist
Oct 14 05:43:36 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 06fc6d1a-30f5-4cda-8613-a79455cfcd7c does not exist
Oct 14 05:43:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:43:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:43:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:43:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:43:36 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:43:36 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:43:36 np0005486808 nova_compute[259627]: 2025-10-14 09:43:36.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:43:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2714: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 05:43:37 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:43:37 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:43:37 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:43:37 np0005486808 podman[422029]: 2025-10-14 09:43:37.119728172 +0000 UTC m=+0.074051717 container create f6b5d0d48030f2eec225a1d50a5d08338dea8582c3833727f73916adf2d8441f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_herschel, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:43:37 np0005486808 systemd[1]: Started libpod-conmon-f6b5d0d48030f2eec225a1d50a5d08338dea8582c3833727f73916adf2d8441f.scope.
Oct 14 05:43:37 np0005486808 podman[422029]: 2025-10-14 09:43:37.088418004 +0000 UTC m=+0.042741559 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:43:37 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:43:37 np0005486808 podman[422029]: 2025-10-14 09:43:37.225339624 +0000 UTC m=+0.179663219 container init f6b5d0d48030f2eec225a1d50a5d08338dea8582c3833727f73916adf2d8441f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_herschel, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:43:37 np0005486808 podman[422029]: 2025-10-14 09:43:37.237351399 +0000 UTC m=+0.191674934 container start f6b5d0d48030f2eec225a1d50a5d08338dea8582c3833727f73916adf2d8441f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_herschel, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:43:37 np0005486808 suspicious_herschel[422046]: 167 167
Oct 14 05:43:37 np0005486808 podman[422029]: 2025-10-14 09:43:37.241573302 +0000 UTC m=+0.195896837 container attach f6b5d0d48030f2eec225a1d50a5d08338dea8582c3833727f73916adf2d8441f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:43:37 np0005486808 systemd[1]: libpod-f6b5d0d48030f2eec225a1d50a5d08338dea8582c3833727f73916adf2d8441f.scope: Deactivated successfully.
Oct 14 05:43:37 np0005486808 podman[422029]: 2025-10-14 09:43:37.243220533 +0000 UTC m=+0.197544078 container died f6b5d0d48030f2eec225a1d50a5d08338dea8582c3833727f73916adf2d8441f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_herschel, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:43:37 np0005486808 systemd[1]: var-lib-containers-storage-overlay-f5240cd7e26c287c815195e639e9644b2e11fdfbf14d585695adcc382bed9e0a-merged.mount: Deactivated successfully.
Oct 14 05:43:37 np0005486808 podman[422029]: 2025-10-14 09:43:37.292148634 +0000 UTC m=+0.246472159 container remove f6b5d0d48030f2eec225a1d50a5d08338dea8582c3833727f73916adf2d8441f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 05:43:37 np0005486808 systemd[1]: libpod-conmon-f6b5d0d48030f2eec225a1d50a5d08338dea8582c3833727f73916adf2d8441f.scope: Deactivated successfully.
Oct 14 05:43:37 np0005486808 nova_compute[259627]: 2025-10-14 09:43:37.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:43:37 np0005486808 podman[422069]: 2025-10-14 09:43:37.508489173 +0000 UTC m=+0.043064288 container create cb5b71c19c7995a5e36aa8088cb9d9fdbad812b1571dcb01179d718186a295e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_meitner, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Oct 14 05:43:37 np0005486808 systemd[1]: Started libpod-conmon-cb5b71c19c7995a5e36aa8088cb9d9fdbad812b1571dcb01179d718186a295e2.scope.
Oct 14 05:43:37 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:43:37 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6b7f2d5b99eda788be8fbca5bef0bd14681214a1274d72c3c2bd66493208f64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:43:37 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6b7f2d5b99eda788be8fbca5bef0bd14681214a1274d72c3c2bd66493208f64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:43:37 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6b7f2d5b99eda788be8fbca5bef0bd14681214a1274d72c3c2bd66493208f64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:43:37 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6b7f2d5b99eda788be8fbca5bef0bd14681214a1274d72c3c2bd66493208f64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:43:37 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6b7f2d5b99eda788be8fbca5bef0bd14681214a1274d72c3c2bd66493208f64/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:43:37 np0005486808 podman[422069]: 2025-10-14 09:43:37.489611139 +0000 UTC m=+0.024186244 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:43:37 np0005486808 podman[422069]: 2025-10-14 09:43:37.601412973 +0000 UTC m=+0.135988058 container init cb5b71c19c7995a5e36aa8088cb9d9fdbad812b1571dcb01179d718186a295e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_meitner, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:43:37 np0005486808 podman[422069]: 2025-10-14 09:43:37.613644273 +0000 UTC m=+0.148219368 container start cb5b71c19c7995a5e36aa8088cb9d9fdbad812b1571dcb01179d718186a295e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_meitner, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:43:37 np0005486808 podman[422069]: 2025-10-14 09:43:37.617150909 +0000 UTC m=+0.151726024 container attach cb5b71c19c7995a5e36aa8088cb9d9fdbad812b1571dcb01179d718186a295e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:43:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:43:38 np0005486808 unruffled_meitner[422086]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:43:38 np0005486808 unruffled_meitner[422086]: --> relative data size: 1.0
Oct 14 05:43:38 np0005486808 unruffled_meitner[422086]: --> All data devices are unavailable
Oct 14 05:43:38 np0005486808 systemd[1]: libpod-cb5b71c19c7995a5e36aa8088cb9d9fdbad812b1571dcb01179d718186a295e2.scope: Deactivated successfully.
Oct 14 05:43:38 np0005486808 podman[422069]: 2025-10-14 09:43:38.747458147 +0000 UTC m=+1.282033252 container died cb5b71c19c7995a5e36aa8088cb9d9fdbad812b1571dcb01179d718186a295e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:43:38 np0005486808 systemd[1]: libpod-cb5b71c19c7995a5e36aa8088cb9d9fdbad812b1571dcb01179d718186a295e2.scope: Consumed 1.079s CPU time.
Oct 14 05:43:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2715: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 14 05:43:38 np0005486808 systemd[1]: var-lib-containers-storage-overlay-c6b7f2d5b99eda788be8fbca5bef0bd14681214a1274d72c3c2bd66493208f64-merged.mount: Deactivated successfully.
Oct 14 05:43:38 np0005486808 podman[422069]: 2025-10-14 09:43:38.82950447 +0000 UTC m=+1.364079585 container remove cb5b71c19c7995a5e36aa8088cb9d9fdbad812b1571dcb01179d718186a295e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_meitner, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Oct 14 05:43:38 np0005486808 systemd[1]: libpod-conmon-cb5b71c19c7995a5e36aa8088cb9d9fdbad812b1571dcb01179d718186a295e2.scope: Deactivated successfully.
Oct 14 05:43:39 np0005486808 podman[422270]: 2025-10-14 09:43:39.546749942 +0000 UTC m=+0.047443666 container create 35a27864440c95cbe4705c89df1b539acc34836bf407b9bef6bec2f0358acf46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_williams, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 14 05:43:39 np0005486808 systemd[1]: Started libpod-conmon-35a27864440c95cbe4705c89df1b539acc34836bf407b9bef6bec2f0358acf46.scope.
Oct 14 05:43:39 np0005486808 podman[422270]: 2025-10-14 09:43:39.52222608 +0000 UTC m=+0.022919864 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:43:39 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:43:39 np0005486808 podman[422270]: 2025-10-14 09:43:39.641985649 +0000 UTC m=+0.142679443 container init 35a27864440c95cbe4705c89df1b539acc34836bf407b9bef6bec2f0358acf46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:43:39 np0005486808 podman[422270]: 2025-10-14 09:43:39.649569445 +0000 UTC m=+0.150263219 container start 35a27864440c95cbe4705c89df1b539acc34836bf407b9bef6bec2f0358acf46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_williams, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Oct 14 05:43:39 np0005486808 podman[422270]: 2025-10-14 09:43:39.653634255 +0000 UTC m=+0.154328039 container attach 35a27864440c95cbe4705c89df1b539acc34836bf407b9bef6bec2f0358acf46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_williams, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:43:39 np0005486808 angry_williams[422286]: 167 167
Oct 14 05:43:39 np0005486808 systemd[1]: libpod-35a27864440c95cbe4705c89df1b539acc34836bf407b9bef6bec2f0358acf46.scope: Deactivated successfully.
Oct 14 05:43:39 np0005486808 podman[422270]: 2025-10-14 09:43:39.654584998 +0000 UTC m=+0.155278752 container died 35a27864440c95cbe4705c89df1b539acc34836bf407b9bef6bec2f0358acf46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:43:39 np0005486808 systemd[1]: var-lib-containers-storage-overlay-7a47f03df60258a46095102995fbc96ea668b45a253f780d4f0ce4fb7431544d-merged.mount: Deactivated successfully.
Oct 14 05:43:39 np0005486808 podman[422270]: 2025-10-14 09:43:39.695222755 +0000 UTC m=+0.195916529 container remove 35a27864440c95cbe4705c89df1b539acc34836bf407b9bef6bec2f0358acf46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_williams, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 14 05:43:39 np0005486808 systemd[1]: libpod-conmon-35a27864440c95cbe4705c89df1b539acc34836bf407b9bef6bec2f0358acf46.scope: Deactivated successfully.
Oct 14 05:43:39 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 05:43:39 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1111]
                                              ** DB Stats **
                                              Uptime(secs): 4800.1 total, 600.0 interval
                                              Cumulative writes: 37K writes, 148K keys, 37K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
                                              Cumulative WAL: 37K writes, 13K syncs, 2.73 writes per sync, written: 0.14 GB, 0.03 MB/s
                                              Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                              Interval writes: 2257 writes, 9325 keys, 2257 commit groups, 1.0 writes per commit group, ingest: 11.79 MB, 0.02 MB/s
                                              Interval WAL: 2257 writes, 876 syncs, 2.58 writes per sync, written: 0.01 GB, 0.02 MB/s
                                              Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 05:43:39 np0005486808 podman[422310]: 2025-10-14 09:43:39.880557654 +0000 UTC m=+0.048355968 container create b2f9e02a3c0a8385a3574736d29d48f6a31684fe2107c31e6e03d88b538016ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jackson, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:43:39 np0005486808 systemd[1]: Started libpod-conmon-b2f9e02a3c0a8385a3574736d29d48f6a31684fe2107c31e6e03d88b538016ae.scope.
Oct 14 05:43:39 np0005486808 podman[422310]: 2025-10-14 09:43:39.858692457 +0000 UTC m=+0.026490851 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:43:39 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:43:39 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c22459c1c04a9ba72f5edb1135636a28940f69425bd14c9ce2726312bfdab27/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:43:39 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c22459c1c04a9ba72f5edb1135636a28940f69425bd14c9ce2726312bfdab27/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:43:39 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c22459c1c04a9ba72f5edb1135636a28940f69425bd14c9ce2726312bfdab27/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:43:39 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c22459c1c04a9ba72f5edb1135636a28940f69425bd14c9ce2726312bfdab27/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:43:39 np0005486808 podman[422310]: 2025-10-14 09:43:39.981006559 +0000 UTC m=+0.148804873 container init b2f9e02a3c0a8385a3574736d29d48f6a31684fe2107c31e6e03d88b538016ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 14 05:43:39 np0005486808 podman[422310]: 2025-10-14 09:43:39.989308122 +0000 UTC m=+0.157106446 container start b2f9e02a3c0a8385a3574736d29d48f6a31684fe2107c31e6e03d88b538016ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 14 05:43:39 np0005486808 podman[422310]: 2025-10-14 09:43:39.992709396 +0000 UTC m=+0.160507730 container attach b2f9e02a3c0a8385a3574736d29d48f6a31684fe2107c31e6e03d88b538016ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jackson, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:43:40 np0005486808 ceph-mgr[74543]: [devicehealth INFO root] Check health
Oct 14 05:43:40 np0005486808 silly_jackson[422327]: {
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:    "0": [
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:        {
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:            "devices": [
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:                "/dev/loop3"
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:            ],
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:            "lv_name": "ceph_lv0",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:            "lv_size": "21470642176",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:            "name": "ceph_lv0",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:            "tags": {
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:                "ceph.cluster_name": "ceph",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:                "ceph.crush_device_class": "",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:                "ceph.encrypted": "0",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:                "ceph.osd_id": "0",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:                "ceph.type": "block",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:                "ceph.vdo": "0"
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:            },
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:            "type": "block",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:            "vg_name": "ceph_vg0"
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:        }
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:    ],
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:    "1": [
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:        {
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:            "devices": [
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:                "/dev/loop4"
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:            ],
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:            "lv_name": "ceph_lv1",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:            "lv_size": "21470642176",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:            "name": "ceph_lv1",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:            "tags": {
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:                "ceph.cluster_name": "ceph",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:                "ceph.crush_device_class": "",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:                "ceph.encrypted": "0",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:                "ceph.osd_id": "1",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:                "ceph.type": "block",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:                "ceph.vdo": "0"
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:            },
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:            "type": "block",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:            "vg_name": "ceph_vg1"
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:        }
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:    ],
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:    "2": [
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:        {
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:            "devices": [
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:                "/dev/loop5"
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:            ],
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:            "lv_name": "ceph_lv2",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:            "lv_size": "21470642176",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:            "name": "ceph_lv2",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:            "tags": {
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:                "ceph.cluster_name": "ceph",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:                "ceph.crush_device_class": "",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:                "ceph.encrypted": "0",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:                "ceph.osd_id": "2",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:                "ceph.type": "block",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:                "ceph.vdo": "0"
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:            },
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:            "type": "block",
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:            "vg_name": "ceph_vg2"
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:        }
Oct 14 05:43:40 np0005486808 silly_jackson[422327]:    ]
Oct 14 05:43:40 np0005486808 silly_jackson[422327]: }
Oct 14 05:43:40 np0005486808 systemd[1]: libpod-b2f9e02a3c0a8385a3574736d29d48f6a31684fe2107c31e6e03d88b538016ae.scope: Deactivated successfully.
Oct 14 05:43:40 np0005486808 podman[422310]: 2025-10-14 09:43:40.751628689 +0000 UTC m=+0.919427033 container died b2f9e02a3c0a8385a3574736d29d48f6a31684fe2107c31e6e03d88b538016ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jackson, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Oct 14 05:43:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2716: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 14 05:43:40 np0005486808 systemd[1]: var-lib-containers-storage-overlay-3c22459c1c04a9ba72f5edb1135636a28940f69425bd14c9ce2726312bfdab27-merged.mount: Deactivated successfully.
Oct 14 05:43:40 np0005486808 podman[422310]: 2025-10-14 09:43:40.830129215 +0000 UTC m=+0.997927529 container remove b2f9e02a3c0a8385a3574736d29d48f6a31684fe2107c31e6e03d88b538016ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jackson, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:43:40 np0005486808 systemd[1]: libpod-conmon-b2f9e02a3c0a8385a3574736d29d48f6a31684fe2107c31e6e03d88b538016ae.scope: Deactivated successfully.
Oct 14 05:43:41 np0005486808 podman[422488]: 2025-10-14 09:43:41.550283578 +0000 UTC m=+0.063003517 container create 7056c04d3efab048d2aee3e7526f7047946112459313fa8607a39623e94f9bad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Oct 14 05:43:41 np0005486808 nova_compute[259627]: 2025-10-14 09:43:41.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:43:41 np0005486808 systemd[1]: Started libpod-conmon-7056c04d3efab048d2aee3e7526f7047946112459313fa8607a39623e94f9bad.scope.
Oct 14 05:43:41 np0005486808 podman[422488]: 2025-10-14 09:43:41.528260187 +0000 UTC m=+0.040980146 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:43:41 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:43:41 np0005486808 podman[422488]: 2025-10-14 09:43:41.646976151 +0000 UTC m=+0.159696120 container init 7056c04d3efab048d2aee3e7526f7047946112459313fa8607a39623e94f9bad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_nobel, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:43:41 np0005486808 podman[422488]: 2025-10-14 09:43:41.653333367 +0000 UTC m=+0.166053296 container start 7056c04d3efab048d2aee3e7526f7047946112459313fa8607a39623e94f9bad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_nobel, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 14 05:43:41 np0005486808 podman[422488]: 2025-10-14 09:43:41.656878784 +0000 UTC m=+0.169598743 container attach 7056c04d3efab048d2aee3e7526f7047946112459313fa8607a39623e94f9bad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 14 05:43:41 np0005486808 funny_nobel[422504]: 167 167
Oct 14 05:43:41 np0005486808 systemd[1]: libpod-7056c04d3efab048d2aee3e7526f7047946112459313fa8607a39623e94f9bad.scope: Deactivated successfully.
Oct 14 05:43:41 np0005486808 podman[422488]: 2025-10-14 09:43:41.659935479 +0000 UTC m=+0.172655438 container died 7056c04d3efab048d2aee3e7526f7047946112459313fa8607a39623e94f9bad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_nobel, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 14 05:43:41 np0005486808 systemd[1]: var-lib-containers-storage-overlay-90e5b2ae09ad41eaf9c4a6f939d263da24ff2bc7328d036a7509f34a3af76ec4-merged.mount: Deactivated successfully.
Oct 14 05:43:41 np0005486808 podman[422488]: 2025-10-14 09:43:41.701597491 +0000 UTC m=+0.214317430 container remove 7056c04d3efab048d2aee3e7526f7047946112459313fa8607a39623e94f9bad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 14 05:43:41 np0005486808 systemd[1]: libpod-conmon-7056c04d3efab048d2aee3e7526f7047946112459313fa8607a39623e94f9bad.scope: Deactivated successfully.
Oct 14 05:43:41 np0005486808 podman[422527]: 2025-10-14 09:43:41.924107002 +0000 UTC m=+0.054469938 container create cdcef57ff3475e113b06716f592a7966ec8b036978eed400f7e7657a14165c14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_villani, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 14 05:43:41 np0005486808 systemd[1]: Started libpod-conmon-cdcef57ff3475e113b06716f592a7966ec8b036978eed400f7e7657a14165c14.scope.
Oct 14 05:43:41 np0005486808 podman[422527]: 2025-10-14 09:43:41.901350733 +0000 UTC m=+0.031713709 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:43:41 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:43:42 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbd6c0e1bd29eb4c735f471ca6e4bb3888bfe5fd710f6796c4807e5295aa9d48/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:43:42 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbd6c0e1bd29eb4c735f471ca6e4bb3888bfe5fd710f6796c4807e5295aa9d48/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:43:42 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbd6c0e1bd29eb4c735f471ca6e4bb3888bfe5fd710f6796c4807e5295aa9d48/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:43:42 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbd6c0e1bd29eb4c735f471ca6e4bb3888bfe5fd710f6796c4807e5295aa9d48/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:43:42 np0005486808 podman[422527]: 2025-10-14 09:43:42.017862912 +0000 UTC m=+0.148225888 container init cdcef57ff3475e113b06716f592a7966ec8b036978eed400f7e7657a14165c14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_villani, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:43:42 np0005486808 podman[422527]: 2025-10-14 09:43:42.03082139 +0000 UTC m=+0.161184326 container start cdcef57ff3475e113b06716f592a7966ec8b036978eed400f7e7657a14165c14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 14 05:43:42 np0005486808 podman[422527]: 2025-10-14 09:43:42.035158777 +0000 UTC m=+0.165521713 container attach cdcef57ff3475e113b06716f592a7966ec8b036978eed400f7e7657a14165c14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_villani, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:43:42 np0005486808 nova_compute[259627]: 2025-10-14 09:43:42.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:43:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2717: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 292 KiB/s rd, 145 KiB/s wr, 39 op/s
Oct 14 05:43:43 np0005486808 serene_villani[422543]: {
Oct 14 05:43:43 np0005486808 serene_villani[422543]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:43:43 np0005486808 serene_villani[422543]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:43:43 np0005486808 serene_villani[422543]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:43:43 np0005486808 serene_villani[422543]:        "osd_id": 2,
Oct 14 05:43:43 np0005486808 serene_villani[422543]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:43:43 np0005486808 serene_villani[422543]:        "type": "bluestore"
Oct 14 05:43:43 np0005486808 serene_villani[422543]:    },
Oct 14 05:43:43 np0005486808 serene_villani[422543]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:43:43 np0005486808 serene_villani[422543]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:43:43 np0005486808 serene_villani[422543]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:43:43 np0005486808 serene_villani[422543]:        "osd_id": 1,
Oct 14 05:43:43 np0005486808 serene_villani[422543]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:43:43 np0005486808 serene_villani[422543]:        "type": "bluestore"
Oct 14 05:43:43 np0005486808 serene_villani[422543]:    },
Oct 14 05:43:43 np0005486808 serene_villani[422543]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:43:43 np0005486808 serene_villani[422543]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:43:43 np0005486808 serene_villani[422543]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:43:43 np0005486808 serene_villani[422543]:        "osd_id": 0,
Oct 14 05:43:43 np0005486808 serene_villani[422543]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:43:43 np0005486808 serene_villani[422543]:        "type": "bluestore"
Oct 14 05:43:43 np0005486808 serene_villani[422543]:    }
Oct 14 05:43:43 np0005486808 serene_villani[422543]: }
Oct 14 05:43:43 np0005486808 systemd[1]: libpod-cdcef57ff3475e113b06716f592a7966ec8b036978eed400f7e7657a14165c14.scope: Deactivated successfully.
Oct 14 05:43:43 np0005486808 systemd[1]: libpod-cdcef57ff3475e113b06716f592a7966ec8b036978eed400f7e7657a14165c14.scope: Consumed 1.093s CPU time.
Oct 14 05:43:43 np0005486808 podman[422527]: 2025-10-14 09:43:43.122232444 +0000 UTC m=+1.252595380 container died cdcef57ff3475e113b06716f592a7966ec8b036978eed400f7e7657a14165c14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_villani, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Oct 14 05:43:43 np0005486808 systemd[1]: var-lib-containers-storage-overlay-bbd6c0e1bd29eb4c735f471ca6e4bb3888bfe5fd710f6796c4807e5295aa9d48-merged.mount: Deactivated successfully.
Oct 14 05:43:43 np0005486808 podman[422527]: 2025-10-14 09:43:43.194116248 +0000 UTC m=+1.324479184 container remove cdcef57ff3475e113b06716f592a7966ec8b036978eed400f7e7657a14165c14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:43:43 np0005486808 systemd[1]: libpod-conmon-cdcef57ff3475e113b06716f592a7966ec8b036978eed400f7e7657a14165c14.scope: Deactivated successfully.
Oct 14 05:43:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:43:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:43:43 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:43:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:43:43 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:43:43 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 49fd0fa4-4a71-45ca-9d2c-715dd1b7d841 does not exist
Oct 14 05:43:43 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev d370b1ea-3fbe-4138-9e5e-7b7233c0c400 does not exist
Oct 14 05:43:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:43:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:43:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:43:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:43:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001518291739923162 of space, bias 1.0, pg target 0.4554875219769486 quantized to 32 (current 32)
Oct 14 05:43:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:43:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:43:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:43:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:43:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:43:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 05:43:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:43:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:43:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:43:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:43:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:43:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:43:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:43:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:43:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:43:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:43:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:43:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:43:44 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:43:44 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:43:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2718: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 292 KiB/s rd, 145 KiB/s wr, 39 op/s
Oct 14 05:43:46 np0005486808 nova_compute[259627]: 2025-10-14 09:43:46.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:43:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2719: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 292 KiB/s rd, 145 KiB/s wr, 39 op/s
Oct 14 05:43:47 np0005486808 nova_compute[259627]: 2025-10-14 09:43:47.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:43:47 np0005486808 podman[422642]: 2025-10-14 09:43:47.711579976 +0000 UTC m=+0.104318121 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 14 05:43:47 np0005486808 podman[422641]: 2025-10-14 09:43:47.766961745 +0000 UTC m=+0.166856906 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:43:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:43:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2720: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Oct 14 05:43:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2721: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Oct 14 05:43:50 np0005486808 nova_compute[259627]: 2025-10-14 09:43:50.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:43:51 np0005486808 ovn_controller[152662]: 2025-10-14T09:43:51Z|01676|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Oct 14 05:43:51 np0005486808 nova_compute[259627]: 2025-10-14 09:43:51.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:43:52 np0005486808 nova_compute[259627]: 2025-10-14 09:43:52.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:43:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2722: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:43:52 np0005486808 nova_compute[259627]: 2025-10-14 09:43:52.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:43:52 np0005486808 nova_compute[259627]: 2025-10-14 09:43:52.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 14 05:43:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:43:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2723: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:43:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:43:55.475 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:43:55 np0005486808 nova_compute[259627]: 2025-10-14 09:43:55.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:43:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:43:55.478 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 14 05:43:55 np0005486808 nova_compute[259627]: 2025-10-14 09:43:55.713 2 DEBUG nova.compute.manager [req-8ce810e8-3fd6-489b-95cf-1c2cb010ddf3 req-9d7cb2a6-90e9-46ad-9736-dec60b868fef 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Received event network-changed-bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:43:55 np0005486808 nova_compute[259627]: 2025-10-14 09:43:55.714 2 DEBUG nova.compute.manager [req-8ce810e8-3fd6-489b-95cf-1c2cb010ddf3 req-9d7cb2a6-90e9-46ad-9736-dec60b868fef 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Refreshing instance network info cache due to event network-changed-bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:43:55 np0005486808 nova_compute[259627]: 2025-10-14 09:43:55.714 2 DEBUG oslo_concurrency.lockutils [req-8ce810e8-3fd6-489b-95cf-1c2cb010ddf3 req-9d7cb2a6-90e9-46ad-9736-dec60b868fef 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-4c73d661-07da-475c-be98-686abd5354f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:43:55 np0005486808 nova_compute[259627]: 2025-10-14 09:43:55.715 2 DEBUG oslo_concurrency.lockutils [req-8ce810e8-3fd6-489b-95cf-1c2cb010ddf3 req-9d7cb2a6-90e9-46ad-9736-dec60b868fef 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-4c73d661-07da-475c-be98-686abd5354f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:43:55 np0005486808 nova_compute[259627]: 2025-10-14 09:43:55.715 2 DEBUG nova.network.neutron [req-8ce810e8-3fd6-489b-95cf-1c2cb010ddf3 req-9d7cb2a6-90e9-46ad-9736-dec60b868fef 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Refreshing network info cache for port bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:43:55 np0005486808 nova_compute[259627]: 2025-10-14 09:43:55.790 2 DEBUG oslo_concurrency.lockutils [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "4c73d661-07da-475c-be98-686abd5354f9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:43:55 np0005486808 nova_compute[259627]: 2025-10-14 09:43:55.791 2 DEBUG oslo_concurrency.lockutils [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "4c73d661-07da-475c-be98-686abd5354f9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:43:55 np0005486808 nova_compute[259627]: 2025-10-14 09:43:55.792 2 DEBUG oslo_concurrency.lockutils [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "4c73d661-07da-475c-be98-686abd5354f9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:43:55 np0005486808 nova_compute[259627]: 2025-10-14 09:43:55.793 2 DEBUG oslo_concurrency.lockutils [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "4c73d661-07da-475c-be98-686abd5354f9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:43:55 np0005486808 nova_compute[259627]: 2025-10-14 09:43:55.794 2 DEBUG oslo_concurrency.lockutils [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "4c73d661-07da-475c-be98-686abd5354f9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:43:55 np0005486808 nova_compute[259627]: 2025-10-14 09:43:55.796 2 INFO nova.compute.manager [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Terminating instance#033[00m
Oct 14 05:43:55 np0005486808 nova_compute[259627]: 2025-10-14 09:43:55.798 2 DEBUG nova.compute.manager [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:43:55 np0005486808 kernel: tapbc5220f4-1b (unregistering): left promiscuous mode
Oct 14 05:43:55 np0005486808 NetworkManager[44885]: <info>  [1760435035.8636] device (tapbc5220f4-1b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:43:55 np0005486808 nova_compute[259627]: 2025-10-14 09:43:55.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:43:55 np0005486808 ovn_controller[152662]: 2025-10-14T09:43:55Z|01677|binding|INFO|Releasing lport bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe from this chassis (sb_readonly=0)
Oct 14 05:43:55 np0005486808 ovn_controller[152662]: 2025-10-14T09:43:55Z|01678|binding|INFO|Setting lport bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe down in Southbound
Oct 14 05:43:55 np0005486808 ovn_controller[152662]: 2025-10-14T09:43:55Z|01679|binding|INFO|Removing iface tapbc5220f4-1b ovn-installed in OVS
Oct 14 05:43:55 np0005486808 nova_compute[259627]: 2025-10-14 09:43:55.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:43:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:43:55.894 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:34:84 10.100.0.11 2001:db8::f816:3eff:fef0:3484'], port_security=['fa:16:3e:f0:34:84 10.100.0.11 2001:db8::f816:3eff:fef0:3484'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28 2001:db8::f816:3eff:fef0:3484/64', 'neutron:device_id': '4c73d661-07da-475c-be98-686abd5354f9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa331e50-389c-4e3c-80a7-7c8364a3fce5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f0bf2f6f-4af7-4c93-8c0d-bb1990d0ba82', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2bf1cb75-5af4-410a-83b2-dc168fe7fc9d, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:43:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:43:55.897 162547 INFO neutron.agent.ovn.metadata.agent [-] Port bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe in datapath aa331e50-389c-4e3c-80a7-7c8364a3fce5 unbound from our chassis#033[00m
Oct 14 05:43:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:43:55.898 162547 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa331e50-389c-4e3c-80a7-7c8364a3fce5#033[00m
Oct 14 05:43:55 np0005486808 nova_compute[259627]: 2025-10-14 09:43:55.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:43:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:43:55.926 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[463db305-372f-4d84-a995-5464915656e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:43:55 np0005486808 systemd[1]: machine-qemu\x2d186\x2dinstance\x2d00000099.scope: Deactivated successfully.
Oct 14 05:43:55 np0005486808 systemd[1]: machine-qemu\x2d186\x2dinstance\x2d00000099.scope: Consumed 13.766s CPU time.
Oct 14 05:43:55 np0005486808 systemd-machined[214636]: Machine qemu-186-instance-00000099 terminated.
Oct 14 05:43:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:43:55.975 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[028d7a15-865e-4295-8d5d-5a3cd553100c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:43:55 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:43:55.982 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[fa61b758-4ede-4908-8ded-8bf3eea6bd1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:43:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:43:56.025 276686 DEBUG oslo.privsep.daemon [-] privsep: reply[a09dacca-661e-406c-b8b7-f2a7788052f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:43:56 np0005486808 nova_compute[259627]: 2025-10-14 09:43:56.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:43:56 np0005486808 nova_compute[259627]: 2025-10-14 09:43:56.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:43:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:43:56.052 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[d41d7781-7ed7-4e89-a3e4-6cff02e19976]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa331e50-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:26:39'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 36, 'tx_packets': 7, 'rx_bytes': 3080, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 36, 'tx_packets': 7, 'rx_bytes': 3080, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 473], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 873982, 'reachable_time': 36414, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 32, 'inoctets': 2464, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 32, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2464, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 32, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 422702, 'error': None, 'target': 'ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:43:56 np0005486808 nova_compute[259627]: 2025-10-14 09:43:56.055 2 INFO nova.virt.libvirt.driver [-] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Instance destroyed successfully.#033[00m
Oct 14 05:43:56 np0005486808 nova_compute[259627]: 2025-10-14 09:43:56.056 2 DEBUG nova.objects.instance [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'resources' on Instance uuid 4c73d661-07da-475c-be98-686abd5354f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:43:56 np0005486808 nova_compute[259627]: 2025-10-14 09:43:56.069 2 DEBUG nova.virt.libvirt.vif [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:43:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-199870946',display_name='tempest-TestGettingAddress-server-199870946',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-199870946',id=153,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA6mjdW3JQlIBQQd52sNXAHhyt49LvnDwoLKe8QrZpUHOv1Ky+CeaemM120Tsqn8n4PYQVzJvOC7aPmnxBpwslIeaFMTr8U05DGb5bjn09df7W51DFKPiOou1XQCmZ1ugQ==',key_name='tempest-TestGettingAddress-1974181320',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:43:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-hpaifwur',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:43:18Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=4c73d661-07da-475c-be98-686abd5354f9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "address": "fa:16:3e:f0:34:84", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:3484", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc5220f4-1b", "ovs_interfaceid": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:43:56 np0005486808 nova_compute[259627]: 2025-10-14 09:43:56.070 2 DEBUG nova.network.os_vif_util [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "address": "fa:16:3e:f0:34:84", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:3484", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc5220f4-1b", "ovs_interfaceid": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:43:56 np0005486808 nova_compute[259627]: 2025-10-14 09:43:56.071 2 DEBUG nova.network.os_vif_util [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f0:34:84,bridge_name='br-int',has_traffic_filtering=True,id=bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe,network=Network(aa331e50-389c-4e3c-80a7-7c8364a3fce5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc5220f4-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:43:56 np0005486808 nova_compute[259627]: 2025-10-14 09:43:56.071 2 DEBUG os_vif [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f0:34:84,bridge_name='br-int',has_traffic_filtering=True,id=bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe,network=Network(aa331e50-389c-4e3c-80a7-7c8364a3fce5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc5220f4-1b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:43:56 np0005486808 nova_compute[259627]: 2025-10-14 09:43:56.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:43:56 np0005486808 nova_compute[259627]: 2025-10-14 09:43:56.073 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbc5220f4-1b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:43:56 np0005486808 nova_compute[259627]: 2025-10-14 09:43:56.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:43:56 np0005486808 nova_compute[259627]: 2025-10-14 09:43:56.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 14 05:43:56 np0005486808 nova_compute[259627]: 2025-10-14 09:43:56.083 2 INFO os_vif [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f0:34:84,bridge_name='br-int',has_traffic_filtering=True,id=bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe,network=Network(aa331e50-389c-4e3c-80a7-7c8364a3fce5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc5220f4-1b')#033[00m
Oct 14 05:43:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:43:56.083 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[de399ffb-e640-4b99-9e9c-2bcf7906bb07]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapaa331e50-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 873997, 'tstamp': 873997}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 422710, 'error': None, 'target': 'ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapaa331e50-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 874001, 'tstamp': 874001}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 422710, 'error': None, 'target': 'ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:43:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:43:56.085 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa331e50-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:43:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:43:56.090 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa331e50-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:43:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:43:56.090 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:43:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:43:56.091 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa331e50-30, col_values=(('external_ids', {'iface-id': '176e1cce-63d0-4e90-9078-245732aff057'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:43:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:43:56.092 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 14 05:43:56 np0005486808 nova_compute[259627]: 2025-10-14 09:43:56.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:43:56 np0005486808 nova_compute[259627]: 2025-10-14 09:43:56.404 2 DEBUG nova.compute.manager [req-ec1e9c44-15d8-4c07-a25b-de01129132e7 req-c44db356-0145-4e85-afbd-bd5564f4dcac 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Received event network-vif-unplugged-bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:43:56 np0005486808 nova_compute[259627]: 2025-10-14 09:43:56.405 2 DEBUG oslo_concurrency.lockutils [req-ec1e9c44-15d8-4c07-a25b-de01129132e7 req-c44db356-0145-4e85-afbd-bd5564f4dcac 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4c73d661-07da-475c-be98-686abd5354f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:43:56 np0005486808 nova_compute[259627]: 2025-10-14 09:43:56.405 2 DEBUG oslo_concurrency.lockutils [req-ec1e9c44-15d8-4c07-a25b-de01129132e7 req-c44db356-0145-4e85-afbd-bd5564f4dcac 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4c73d661-07da-475c-be98-686abd5354f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:43:56 np0005486808 nova_compute[259627]: 2025-10-14 09:43:56.405 2 DEBUG oslo_concurrency.lockutils [req-ec1e9c44-15d8-4c07-a25b-de01129132e7 req-c44db356-0145-4e85-afbd-bd5564f4dcac 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4c73d661-07da-475c-be98-686abd5354f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:43:56 np0005486808 nova_compute[259627]: 2025-10-14 09:43:56.406 2 DEBUG nova.compute.manager [req-ec1e9c44-15d8-4c07-a25b-de01129132e7 req-c44db356-0145-4e85-afbd-bd5564f4dcac 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] No waiting events found dispatching network-vif-unplugged-bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:43:56 np0005486808 nova_compute[259627]: 2025-10-14 09:43:56.406 2 DEBUG nova.compute.manager [req-ec1e9c44-15d8-4c07-a25b-de01129132e7 req-c44db356-0145-4e85-afbd-bd5564f4dcac 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Received event network-vif-unplugged-bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:43:56 np0005486808 nova_compute[259627]: 2025-10-14 09:43:56.556 2 INFO nova.virt.libvirt.driver [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Deleting instance files /var/lib/nova/instances/4c73d661-07da-475c-be98-686abd5354f9_del#033[00m
Oct 14 05:43:56 np0005486808 nova_compute[259627]: 2025-10-14 09:43:56.558 2 INFO nova.virt.libvirt.driver [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Deletion of /var/lib/nova/instances/4c73d661-07da-475c-be98-686abd5354f9_del complete#033[00m
Oct 14 05:43:56 np0005486808 nova_compute[259627]: 2025-10-14 09:43:56.636 2 INFO nova.compute.manager [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Took 0.84 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:43:56 np0005486808 nova_compute[259627]: 2025-10-14 09:43:56.637 2 DEBUG oslo.service.loopingcall [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:43:56 np0005486808 nova_compute[259627]: 2025-10-14 09:43:56.637 2 DEBUG nova.compute.manager [-] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:43:56 np0005486808 nova_compute[259627]: 2025-10-14 09:43:56.638 2 DEBUG nova.network.neutron [-] [instance: 4c73d661-07da-475c-be98-686abd5354f9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:43:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2724: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 85 B/s wr, 0 op/s
Oct 14 05:43:57 np0005486808 nova_compute[259627]: 2025-10-14 09:43:57.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:43:57 np0005486808 nova_compute[259627]: 2025-10-14 09:43:57.543 2 DEBUG nova.network.neutron [-] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:43:57 np0005486808 nova_compute[259627]: 2025-10-14 09:43:57.562 2 INFO nova.compute.manager [-] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Took 0.92 seconds to deallocate network for instance.#033[00m
Oct 14 05:43:57 np0005486808 nova_compute[259627]: 2025-10-14 09:43:57.600 2 DEBUG nova.network.neutron [req-8ce810e8-3fd6-489b-95cf-1c2cb010ddf3 req-9d7cb2a6-90e9-46ad-9736-dec60b868fef 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Updated VIF entry in instance network info cache for port bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:43:57 np0005486808 nova_compute[259627]: 2025-10-14 09:43:57.601 2 DEBUG nova.network.neutron [req-8ce810e8-3fd6-489b-95cf-1c2cb010ddf3 req-9d7cb2a6-90e9-46ad-9736-dec60b868fef 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Updating instance_info_cache with network_info: [{"id": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "address": "fa:16:3e:f0:34:84", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:3484", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc5220f4-1b", "ovs_interfaceid": "bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:43:57 np0005486808 nova_compute[259627]: 2025-10-14 09:43:57.607 2 DEBUG oslo_concurrency.lockutils [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:43:57 np0005486808 nova_compute[259627]: 2025-10-14 09:43:57.607 2 DEBUG oslo_concurrency.lockutils [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:43:57 np0005486808 nova_compute[259627]: 2025-10-14 09:43:57.627 2 DEBUG oslo_concurrency.lockutils [req-8ce810e8-3fd6-489b-95cf-1c2cb010ddf3 req-9d7cb2a6-90e9-46ad-9736-dec60b868fef 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-4c73d661-07da-475c-be98-686abd5354f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:43:57 np0005486808 nova_compute[259627]: 2025-10-14 09:43:57.699 2 DEBUG oslo_concurrency.processutils [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:43:57 np0005486808 nova_compute[259627]: 2025-10-14 09:43:57.814 2 DEBUG nova.compute.manager [req-1eca8c53-ccd7-463d-b20a-49fcf9faf19e req-e257d97d-9002-4366-a741-bcb8d3462ba1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Received event network-vif-deleted-bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:43:57 np0005486808 nova_compute[259627]: 2025-10-14 09:43:57.815 2 INFO nova.compute.manager [req-1eca8c53-ccd7-463d-b20a-49fcf9faf19e req-e257d97d-9002-4366-a741-bcb8d3462ba1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Neutron deleted interface bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe; detaching it from the instance and deleting it from the info cache#033[00m
Oct 14 05:43:57 np0005486808 nova_compute[259627]: 2025-10-14 09:43:57.816 2 DEBUG nova.network.neutron [req-1eca8c53-ccd7-463d-b20a-49fcf9faf19e req-e257d97d-9002-4366-a741-bcb8d3462ba1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:43:57 np0005486808 nova_compute[259627]: 2025-10-14 09:43:57.841 2 DEBUG nova.compute.manager [req-1eca8c53-ccd7-463d-b20a-49fcf9faf19e req-e257d97d-9002-4366-a741-bcb8d3462ba1 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Detach interface failed, port_id=bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe, reason: Instance 4c73d661-07da-475c-be98-686abd5354f9 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct 14 05:43:57 np0005486808 nova_compute[259627]: 2025-10-14 09:43:57.998 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:43:57 np0005486808 nova_compute[259627]: 2025-10-14 09:43:57.999 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:43:57 np0005486808 nova_compute[259627]: 2025-10-14 09:43:57.999 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:43:58 np0005486808 nova_compute[259627]: 2025-10-14 09:43:58.024 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:43:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:43:58 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3201868617' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:43:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:43:58 np0005486808 nova_compute[259627]: 2025-10-14 09:43:58.223 2 DEBUG oslo_concurrency.processutils [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:43:58 np0005486808 nova_compute[259627]: 2025-10-14 09:43:58.230 2 DEBUG nova.compute.provider_tree [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:43:58 np0005486808 nova_compute[259627]: 2025-10-14 09:43:58.250 2 DEBUG nova.scheduler.client.report [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:43:58 np0005486808 nova_compute[259627]: 2025-10-14 09:43:58.276 2 DEBUG oslo_concurrency.lockutils [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:43:58 np0005486808 nova_compute[259627]: 2025-10-14 09:43:58.279 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.255s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:43:58 np0005486808 nova_compute[259627]: 2025-10-14 09:43:58.279 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:43:58 np0005486808 nova_compute[259627]: 2025-10-14 09:43:58.280 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:43:58 np0005486808 nova_compute[259627]: 2025-10-14 09:43:58.280 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:43:58 np0005486808 nova_compute[259627]: 2025-10-14 09:43:58.353 2 INFO nova.scheduler.client.report [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Deleted allocations for instance 4c73d661-07da-475c-be98-686abd5354f9#033[00m
Oct 14 05:43:58 np0005486808 nova_compute[259627]: 2025-10-14 09:43:58.425 2 DEBUG oslo_concurrency.lockutils [None req-934b0b86-c084-4a44-9b10-551bacf07f1c 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "4c73d661-07da-475c-be98-686abd5354f9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:43:58 np0005486808 nova_compute[259627]: 2025-10-14 09:43:58.514 2 DEBUG nova.compute.manager [req-64aab57c-41f8-4b96-8466-c9a56c485c52 req-0d6170ce-9dd2-44fe-a903-be5b2240f3a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Received event network-vif-plugged-bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:43:58 np0005486808 nova_compute[259627]: 2025-10-14 09:43:58.515 2 DEBUG oslo_concurrency.lockutils [req-64aab57c-41f8-4b96-8466-c9a56c485c52 req-0d6170ce-9dd2-44fe-a903-be5b2240f3a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "4c73d661-07da-475c-be98-686abd5354f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:43:58 np0005486808 nova_compute[259627]: 2025-10-14 09:43:58.515 2 DEBUG oslo_concurrency.lockutils [req-64aab57c-41f8-4b96-8466-c9a56c485c52 req-0d6170ce-9dd2-44fe-a903-be5b2240f3a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4c73d661-07da-475c-be98-686abd5354f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:43:58 np0005486808 nova_compute[259627]: 2025-10-14 09:43:58.515 2 DEBUG oslo_concurrency.lockutils [req-64aab57c-41f8-4b96-8466-c9a56c485c52 req-0d6170ce-9dd2-44fe-a903-be5b2240f3a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "4c73d661-07da-475c-be98-686abd5354f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:43:58 np0005486808 nova_compute[259627]: 2025-10-14 09:43:58.516 2 DEBUG nova.compute.manager [req-64aab57c-41f8-4b96-8466-c9a56c485c52 req-0d6170ce-9dd2-44fe-a903-be5b2240f3a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] No waiting events found dispatching network-vif-plugged-bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:43:58 np0005486808 nova_compute[259627]: 2025-10-14 09:43:58.516 2 WARNING nova.compute.manager [req-64aab57c-41f8-4b96-8466-c9a56c485c52 req-0d6170ce-9dd2-44fe-a903-be5b2240f3a0 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Received unexpected event network-vif-plugged-bc5220f4-1b43-4d09-a5a7-4fdfcbf4bfbe for instance with vm_state deleted and task_state None.#033[00m
Oct 14 05:43:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:43:58 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/6284015' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:43:58 np0005486808 nova_compute[259627]: 2025-10-14 09:43:58.729 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:43:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2725: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 85 B/s wr, 0 op/s
Oct 14 05:43:58 np0005486808 nova_compute[259627]: 2025-10-14 09:43:58.825 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000098 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:43:58 np0005486808 nova_compute[259627]: 2025-10-14 09:43:58.826 2 DEBUG nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] skipping disk for instance-00000098 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 14 05:43:59 np0005486808 nova_compute[259627]: 2025-10-14 09:43:59.006 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:43:59 np0005486808 nova_compute[259627]: 2025-10-14 09:43:59.007 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3367MB free_disk=59.89720153808594GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:43:59 np0005486808 nova_compute[259627]: 2025-10-14 09:43:59.007 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:43:59 np0005486808 nova_compute[259627]: 2025-10-14 09:43:59.007 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:43:59 np0005486808 nova_compute[259627]: 2025-10-14 09:43:59.097 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Instance f6f1bb88-2b88-4876-ab72-3e4e4dc9a578 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 14 05:43:59 np0005486808 nova_compute[259627]: 2025-10-14 09:43:59.098 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:43:59 np0005486808 nova_compute[259627]: 2025-10-14 09:43:59.098 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:43:59 np0005486808 nova_compute[259627]: 2025-10-14 09:43:59.151 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:43:59 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:43:59 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/653863839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:43:59 np0005486808 nova_compute[259627]: 2025-10-14 09:43:59.621 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:43:59 np0005486808 nova_compute[259627]: 2025-10-14 09:43:59.629 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:43:59 np0005486808 nova_compute[259627]: 2025-10-14 09:43:59.696 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:43:59 np0005486808 nova_compute[259627]: 2025-10-14 09:43:59.779 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:43:59 np0005486808 nova_compute[259627]: 2025-10-14 09:43:59.779 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:43:59 np0005486808 nova_compute[259627]: 2025-10-14 09:43:59.874 2 DEBUG oslo_concurrency.lockutils [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:43:59 np0005486808 nova_compute[259627]: 2025-10-14 09:43:59.875 2 DEBUG oslo_concurrency.lockutils [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:43:59 np0005486808 nova_compute[259627]: 2025-10-14 09:43:59.875 2 DEBUG oslo_concurrency.lockutils [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:43:59 np0005486808 nova_compute[259627]: 2025-10-14 09:43:59.875 2 DEBUG oslo_concurrency.lockutils [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:43:59 np0005486808 nova_compute[259627]: 2025-10-14 09:43:59.876 2 DEBUG oslo_concurrency.lockutils [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:43:59 np0005486808 nova_compute[259627]: 2025-10-14 09:43:59.878 2 INFO nova.compute.manager [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Terminating instance#033[00m
Oct 14 05:43:59 np0005486808 nova_compute[259627]: 2025-10-14 09:43:59.880 2 DEBUG nova.compute.manager [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:43:59 np0005486808 nova_compute[259627]: 2025-10-14 09:43:59.911 2 DEBUG nova.compute.manager [req-c6c0d28f-1b18-4121-b5fa-95084da6ddf8 req-4b088bdc-3a85-4837-98df-106e372eb8e6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Received event network-changed-b3e71767-ffef-473e-947a-bb35562569c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:43:59 np0005486808 nova_compute[259627]: 2025-10-14 09:43:59.911 2 DEBUG nova.compute.manager [req-c6c0d28f-1b18-4121-b5fa-95084da6ddf8 req-4b088bdc-3a85-4837-98df-106e372eb8e6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Refreshing instance network info cache due to event network-changed-b3e71767-ffef-473e-947a-bb35562569c9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 14 05:43:59 np0005486808 nova_compute[259627]: 2025-10-14 09:43:59.912 2 DEBUG oslo_concurrency.lockutils [req-c6c0d28f-1b18-4121-b5fa-95084da6ddf8 req-4b088bdc-3a85-4837-98df-106e372eb8e6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "refresh_cache-f6f1bb88-2b88-4876-ab72-3e4e4dc9a578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:43:59 np0005486808 nova_compute[259627]: 2025-10-14 09:43:59.912 2 DEBUG oslo_concurrency.lockutils [req-c6c0d28f-1b18-4121-b5fa-95084da6ddf8 req-4b088bdc-3a85-4837-98df-106e372eb8e6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquired lock "refresh_cache-f6f1bb88-2b88-4876-ab72-3e4e4dc9a578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:43:59 np0005486808 nova_compute[259627]: 2025-10-14 09:43:59.912 2 DEBUG nova.network.neutron [req-c6c0d28f-1b18-4121-b5fa-95084da6ddf8 req-4b088bdc-3a85-4837-98df-106e372eb8e6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Refreshing network info cache for port b3e71767-ffef-473e-947a-bb35562569c9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 14 05:43:59 np0005486808 kernel: tapb3e71767-ff (unregistering): left promiscuous mode
Oct 14 05:43:59 np0005486808 NetworkManager[44885]: <info>  [1760435039.9441] device (tapb3e71767-ff): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 14 05:43:59 np0005486808 ovn_controller[152662]: 2025-10-14T09:43:59Z|01680|binding|INFO|Releasing lport b3e71767-ffef-473e-947a-bb35562569c9 from this chassis (sb_readonly=0)
Oct 14 05:43:59 np0005486808 nova_compute[259627]: 2025-10-14 09:43:59.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:43:59 np0005486808 ovn_controller[152662]: 2025-10-14T09:43:59Z|01681|binding|INFO|Setting lport b3e71767-ffef-473e-947a-bb35562569c9 down in Southbound
Oct 14 05:43:59 np0005486808 nova_compute[259627]: 2025-10-14 09:43:59.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:43:59 np0005486808 ovn_controller[152662]: 2025-10-14T09:43:59Z|01682|binding|INFO|Removing iface tapb3e71767-ff ovn-installed in OVS
Oct 14 05:43:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:43:59.969 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:ac:57 10.100.0.6 2001:db8::f816:3eff:fe57:ac57'], port_security=['fa:16:3e:57:ac:57 10.100.0.6 2001:db8::f816:3eff:fe57:ac57'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28 2001:db8::f816:3eff:fe57:ac57/64', 'neutron:device_id': 'f6f1bb88-2b88-4876-ab72-3e4e4dc9a578', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa331e50-389c-4e3c-80a7-7c8364a3fce5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '878ad3f42dd94bef8cc66ab3d67f03bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f0bf2f6f-4af7-4c93-8c0d-bb1990d0ba82', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2bf1cb75-5af4-410a-83b2-dc168fe7fc9d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>], logical_port=b3e71767-ffef-473e-947a-bb35562569c9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcf3ff81a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:43:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:43:59.971 162547 INFO neutron.agent.ovn.metadata.agent [-] Port b3e71767-ffef-473e-947a-bb35562569c9 in datapath aa331e50-389c-4e3c-80a7-7c8364a3fce5 unbound from our chassis#033[00m
Oct 14 05:43:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:43:59.973 162547 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aa331e50-389c-4e3c-80a7-7c8364a3fce5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 14 05:43:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:43:59.974 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[fe75a7d9-c926-4191-b3ff-8189e47abf74]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:43:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:43:59.975 162547 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5 namespace which is not needed anymore#033[00m
Oct 14 05:43:59 np0005486808 nova_compute[259627]: 2025-10-14 09:43:59.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:44:00 np0005486808 systemd[1]: machine-qemu\x2d185\x2dinstance\x2d00000098.scope: Deactivated successfully.
Oct 14 05:44:00 np0005486808 systemd[1]: machine-qemu\x2d185\x2dinstance\x2d00000098.scope: Consumed 17.165s CPU time.
Oct 14 05:44:00 np0005486808 systemd-machined[214636]: Machine qemu-185-instance-00000098 terminated.
Oct 14 05:44:00 np0005486808 nova_compute[259627]: 2025-10-14 09:44:00.117 2 INFO nova.virt.libvirt.driver [-] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Instance destroyed successfully.#033[00m
Oct 14 05:44:00 np0005486808 nova_compute[259627]: 2025-10-14 09:44:00.117 2 DEBUG nova.objects.instance [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lazy-loading 'resources' on Instance uuid f6f1bb88-2b88-4876-ab72-3e4e4dc9a578 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:44:00 np0005486808 neutron-haproxy-ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5[421145]: [NOTICE]   (421149) : haproxy version is 2.8.14-c23fe91
Oct 14 05:44:00 np0005486808 neutron-haproxy-ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5[421145]: [NOTICE]   (421149) : path to executable is /usr/sbin/haproxy
Oct 14 05:44:00 np0005486808 neutron-haproxy-ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5[421145]: [WARNING]  (421149) : Exiting Master process...
Oct 14 05:44:00 np0005486808 neutron-haproxy-ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5[421145]: [WARNING]  (421149) : Exiting Master process...
Oct 14 05:44:00 np0005486808 neutron-haproxy-ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5[421145]: [ALERT]    (421149) : Current worker (421151) exited with code 143 (Terminated)
Oct 14 05:44:00 np0005486808 neutron-haproxy-ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5[421145]: [WARNING]  (421149) : All workers exited. Exiting... (0)
Oct 14 05:44:00 np0005486808 systemd[1]: libpod-6170dfef274431fdbbcac478a53f85c41a2ba4b76c0aa160503242eeb9e68839.scope: Deactivated successfully.
Oct 14 05:44:00 np0005486808 podman[422821]: 2025-10-14 09:44:00.131800015 +0000 UTC m=+0.052097030 container died 6170dfef274431fdbbcac478a53f85c41a2ba4b76c0aa160503242eeb9e68839 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 14 05:44:00 np0005486808 nova_compute[259627]: 2025-10-14 09:44:00.141 2 DEBUG nova.virt.libvirt.vif [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-14T09:42:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-754347042',display_name='tempest-TestGettingAddress-server-754347042',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-754347042',id=152,image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA6mjdW3JQlIBQQd52sNXAHhyt49LvnDwoLKe8QrZpUHOv1Ky+CeaemM120Tsqn8n4PYQVzJvOC7aPmnxBpwslIeaFMTr8U05DGb5bjn09df7W51DFKPiOou1XQCmZ1ugQ==',key_name='tempest-TestGettingAddress-1974181320',keypairs=<?>,launch_index=0,launched_at=2025-10-14T09:42:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='878ad3f42dd94bef8cc66ab3d67f03bf',ramdisk_id='',reservation_id='r-072443ui',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4789543-f429-47d7-9f79-80a9d90a59f9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-262346303',owner_user_name='tempest-TestGettingAddress-262346303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-14T09:42:42Z,user_data=None,user_id='30cd4a7832e74a8ba07238937405c4ea',uuid=f6f1bb88-2b88-4876-ab72-3e4e4dc9a578,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b3e71767-ffef-473e-947a-bb35562569c9", "address": "fa:16:3e:57:ac:57", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:ac57", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e71767-ff", "ovs_interfaceid": "b3e71767-ffef-473e-947a-bb35562569c9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 14 05:44:00 np0005486808 nova_compute[259627]: 2025-10-14 09:44:00.142 2 DEBUG nova.network.os_vif_util [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converting VIF {"id": "b3e71767-ffef-473e-947a-bb35562569c9", "address": "fa:16:3e:57:ac:57", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:ac57", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e71767-ff", "ovs_interfaceid": "b3e71767-ffef-473e-947a-bb35562569c9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 14 05:44:00 np0005486808 nova_compute[259627]: 2025-10-14 09:44:00.143 2 DEBUG nova.network.os_vif_util [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:57:ac:57,bridge_name='br-int',has_traffic_filtering=True,id=b3e71767-ffef-473e-947a-bb35562569c9,network=Network(aa331e50-389c-4e3c-80a7-7c8364a3fce5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e71767-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 14 05:44:00 np0005486808 nova_compute[259627]: 2025-10-14 09:44:00.143 2 DEBUG os_vif [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:ac:57,bridge_name='br-int',has_traffic_filtering=True,id=b3e71767-ffef-473e-947a-bb35562569c9,network=Network(aa331e50-389c-4e3c-80a7-7c8364a3fce5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e71767-ff') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 14 05:44:00 np0005486808 nova_compute[259627]: 2025-10-14 09:44:00.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:44:00 np0005486808 nova_compute[259627]: 2025-10-14 09:44:00.146 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb3e71767-ff, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:44:00 np0005486808 nova_compute[259627]: 2025-10-14 09:44:00.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:44:00 np0005486808 nova_compute[259627]: 2025-10-14 09:44:00.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:44:00 np0005486808 nova_compute[259627]: 2025-10-14 09:44:00.152 2 INFO os_vif [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:ac:57,bridge_name='br-int',has_traffic_filtering=True,id=b3e71767-ffef-473e-947a-bb35562569c9,network=Network(aa331e50-389c-4e3c-80a7-7c8364a3fce5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e71767-ff')#033[00m
Oct 14 05:44:00 np0005486808 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6170dfef274431fdbbcac478a53f85c41a2ba4b76c0aa160503242eeb9e68839-userdata-shm.mount: Deactivated successfully.
Oct 14 05:44:00 np0005486808 systemd[1]: var-lib-containers-storage-overlay-f7492a1d1260024e32fdfbc4b0a12cad644de8f71781dd5c42aeb3535a093dda-merged.mount: Deactivated successfully.
Oct 14 05:44:00 np0005486808 podman[422821]: 2025-10-14 09:44:00.188477276 +0000 UTC m=+0.108774291 container cleanup 6170dfef274431fdbbcac478a53f85c41a2ba4b76c0aa160503242eeb9e68839 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:44:00 np0005486808 systemd[1]: libpod-conmon-6170dfef274431fdbbcac478a53f85c41a2ba4b76c0aa160503242eeb9e68839.scope: Deactivated successfully.
Oct 14 05:44:00 np0005486808 podman[422879]: 2025-10-14 09:44:00.253932782 +0000 UTC m=+0.045415735 container remove 6170dfef274431fdbbcac478a53f85c41a2ba4b76c0aa160503242eeb9e68839 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=neutron-haproxy-ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 14 05:44:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:44:00.260 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c2e0485c-744f-4463-9f72-4f3dc82191f1]: (4, ('Tue Oct 14 09:44:00 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5 (6170dfef274431fdbbcac478a53f85c41a2ba4b76c0aa160503242eeb9e68839)\n6170dfef274431fdbbcac478a53f85c41a2ba4b76c0aa160503242eeb9e68839\nTue Oct 14 09:44:00 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5 (6170dfef274431fdbbcac478a53f85c41a2ba4b76c0aa160503242eeb9e68839)\n6170dfef274431fdbbcac478a53f85c41a2ba4b76c0aa160503242eeb9e68839\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:44:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:44:00.261 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[7020c281-46ff-4bc4-a1ff-5a1be6345e97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:44:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:44:00.262 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa331e50-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:44:00 np0005486808 nova_compute[259627]: 2025-10-14 09:44:00.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:44:00 np0005486808 kernel: tapaa331e50-30: left promiscuous mode
Oct 14 05:44:00 np0005486808 nova_compute[259627]: 2025-10-14 09:44:00.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:44:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:44:00.281 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[5e02379e-3acd-47a4-bc41-b3e6f5922bcd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:44:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:44:00.316 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[c9fdf9d4-37f9-48b8-9bbc-89b14063075f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:44:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:44:00.317 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[87c416f1-cf10-4e83-8774-90a652859e46]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:44:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:44:00.333 276588 DEBUG oslo.privsep.daemon [-] privsep: reply[20b526d1-027f-457b-97a5-71a30288e9fb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 873974, 'reachable_time': 41843, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 422896, 'error': None, 'target': 'ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:44:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:44:00.335 162749 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aa331e50-389c-4e3c-80a7-7c8364a3fce5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 14 05:44:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:44:00.335 162749 DEBUG oslo.privsep.daemon [-] privsep: reply[7296be15-fc24-4e22-ad72-ecdf83e769fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 14 05:44:00 np0005486808 systemd[1]: run-netns-ovnmeta\x2daa331e50\x2d389c\x2d4e3c\x2d80a7\x2d7c8364a3fce5.mount: Deactivated successfully.
Oct 14 05:44:00 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:44:00.479 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:44:00 np0005486808 nova_compute[259627]: 2025-10-14 09:44:00.601 2 INFO nova.virt.libvirt.driver [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Deleting instance files /var/lib/nova/instances/f6f1bb88-2b88-4876-ab72-3e4e4dc9a578_del#033[00m
Oct 14 05:44:00 np0005486808 nova_compute[259627]: 2025-10-14 09:44:00.602 2 INFO nova.virt.libvirt.driver [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Deletion of /var/lib/nova/instances/f6f1bb88-2b88-4876-ab72-3e4e4dc9a578_del complete#033[00m
Oct 14 05:44:00 np0005486808 nova_compute[259627]: 2025-10-14 09:44:00.617 2 DEBUG nova.compute.manager [req-142fc428-649c-4b0a-821b-3dff808d5cd0 req-21fa06f6-8703-4b41-97c2-5c8c8b9f8b66 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Received event network-vif-unplugged-b3e71767-ffef-473e-947a-bb35562569c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:44:00 np0005486808 nova_compute[259627]: 2025-10-14 09:44:00.618 2 DEBUG oslo_concurrency.lockutils [req-142fc428-649c-4b0a-821b-3dff808d5cd0 req-21fa06f6-8703-4b41-97c2-5c8c8b9f8b66 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:44:00 np0005486808 nova_compute[259627]: 2025-10-14 09:44:00.618 2 DEBUG oslo_concurrency.lockutils [req-142fc428-649c-4b0a-821b-3dff808d5cd0 req-21fa06f6-8703-4b41-97c2-5c8c8b9f8b66 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:44:00 np0005486808 nova_compute[259627]: 2025-10-14 09:44:00.618 2 DEBUG oslo_concurrency.lockutils [req-142fc428-649c-4b0a-821b-3dff808d5cd0 req-21fa06f6-8703-4b41-97c2-5c8c8b9f8b66 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:44:00 np0005486808 nova_compute[259627]: 2025-10-14 09:44:00.619 2 DEBUG nova.compute.manager [req-142fc428-649c-4b0a-821b-3dff808d5cd0 req-21fa06f6-8703-4b41-97c2-5c8c8b9f8b66 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] No waiting events found dispatching network-vif-unplugged-b3e71767-ffef-473e-947a-bb35562569c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:44:00 np0005486808 nova_compute[259627]: 2025-10-14 09:44:00.619 2 DEBUG nova.compute.manager [req-142fc428-649c-4b0a-821b-3dff808d5cd0 req-21fa06f6-8703-4b41-97c2-5c8c8b9f8b66 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Received event network-vif-unplugged-b3e71767-ffef-473e-947a-bb35562569c9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 14 05:44:00 np0005486808 nova_compute[259627]: 2025-10-14 09:44:00.620 2 DEBUG nova.compute.manager [req-142fc428-649c-4b0a-821b-3dff808d5cd0 req-21fa06f6-8703-4b41-97c2-5c8c8b9f8b66 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Received event network-vif-plugged-b3e71767-ffef-473e-947a-bb35562569c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:44:00 np0005486808 nova_compute[259627]: 2025-10-14 09:44:00.620 2 DEBUG oslo_concurrency.lockutils [req-142fc428-649c-4b0a-821b-3dff808d5cd0 req-21fa06f6-8703-4b41-97c2-5c8c8b9f8b66 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Acquiring lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:44:00 np0005486808 nova_compute[259627]: 2025-10-14 09:44:00.621 2 DEBUG oslo_concurrency.lockutils [req-142fc428-649c-4b0a-821b-3dff808d5cd0 req-21fa06f6-8703-4b41-97c2-5c8c8b9f8b66 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:44:00 np0005486808 nova_compute[259627]: 2025-10-14 09:44:00.621 2 DEBUG oslo_concurrency.lockutils [req-142fc428-649c-4b0a-821b-3dff808d5cd0 req-21fa06f6-8703-4b41-97c2-5c8c8b9f8b66 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:44:00 np0005486808 nova_compute[259627]: 2025-10-14 09:44:00.622 2 DEBUG nova.compute.manager [req-142fc428-649c-4b0a-821b-3dff808d5cd0 req-21fa06f6-8703-4b41-97c2-5c8c8b9f8b66 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] No waiting events found dispatching network-vif-plugged-b3e71767-ffef-473e-947a-bb35562569c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 14 05:44:00 np0005486808 nova_compute[259627]: 2025-10-14 09:44:00.622 2 WARNING nova.compute.manager [req-142fc428-649c-4b0a-821b-3dff808d5cd0 req-21fa06f6-8703-4b41-97c2-5c8c8b9f8b66 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Received unexpected event network-vif-plugged-b3e71767-ffef-473e-947a-bb35562569c9 for instance with vm_state active and task_state deleting.#033[00m
Oct 14 05:44:00 np0005486808 nova_compute[259627]: 2025-10-14 09:44:00.686 2 INFO nova.compute.manager [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Took 0.81 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:44:00 np0005486808 nova_compute[259627]: 2025-10-14 09:44:00.687 2 DEBUG oslo.service.loopingcall [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:44:00 np0005486808 nova_compute[259627]: 2025-10-14 09:44:00.688 2 DEBUG nova.compute.manager [-] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:44:00 np0005486808 nova_compute[259627]: 2025-10-14 09:44:00.688 2 DEBUG nova.network.neutron [-] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:44:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2726: 305 pgs: 305 active+clean; 94 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 11 KiB/s wr, 35 op/s
Oct 14 05:44:02 np0005486808 nova_compute[259627]: 2025-10-14 09:44:02.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:44:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:44:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:44:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2727: 305 pgs: 305 active+clean; 94 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 11 KiB/s wr, 35 op/s
Oct 14 05:44:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:44:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:44:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:44:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:44:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:44:03 np0005486808 podman[422898]: 2025-10-14 09:44:03.659534025 +0000 UTC m=+0.064772270 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251009, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 14 05:44:03 np0005486808 podman[422899]: 2025-10-14 09:44:03.665384599 +0000 UTC m=+0.066271818 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:44:03 np0005486808 nova_compute[259627]: 2025-10-14 09:44:03.943 2 DEBUG nova.network.neutron [-] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:44:03 np0005486808 nova_compute[259627]: 2025-10-14 09:44:03.962 2 INFO nova.compute.manager [-] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Took 3.27 seconds to deallocate network for instance.#033[00m
Oct 14 05:44:04 np0005486808 nova_compute[259627]: 2025-10-14 09:44:04.010 2 DEBUG nova.network.neutron [req-c6c0d28f-1b18-4121-b5fa-95084da6ddf8 req-4b088bdc-3a85-4837-98df-106e372eb8e6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Updated VIF entry in instance network info cache for port b3e71767-ffef-473e-947a-bb35562569c9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 14 05:44:04 np0005486808 nova_compute[259627]: 2025-10-14 09:44:04.010 2 DEBUG nova.network.neutron [req-c6c0d28f-1b18-4121-b5fa-95084da6ddf8 req-4b088bdc-3a85-4837-98df-106e372eb8e6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Updating instance_info_cache with network_info: [{"id": "b3e71767-ffef-473e-947a-bb35562569c9", "address": "fa:16:3e:57:ac:57", "network": {"id": "aa331e50-389c-4e3c-80a7-7c8364a3fce5", "bridge": "br-int", "label": "tempest-network-smoke--1451530961", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe57:ac57", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "878ad3f42dd94bef8cc66ab3d67f03bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e71767-ff", "ovs_interfaceid": "b3e71767-ffef-473e-947a-bb35562569c9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:44:04 np0005486808 nova_compute[259627]: 2025-10-14 09:44:04.024 2 DEBUG oslo_concurrency.lockutils [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:44:04 np0005486808 nova_compute[259627]: 2025-10-14 09:44:04.024 2 DEBUG oslo_concurrency.lockutils [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:44:04 np0005486808 nova_compute[259627]: 2025-10-14 09:44:04.048 2 DEBUG oslo_concurrency.lockutils [req-c6c0d28f-1b18-4121-b5fa-95084da6ddf8 req-4b088bdc-3a85-4837-98df-106e372eb8e6 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] Releasing lock "refresh_cache-f6f1bb88-2b88-4876-ab72-3e4e4dc9a578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:44:04 np0005486808 nova_compute[259627]: 2025-10-14 09:44:04.056 2 DEBUG nova.compute.manager [req-80504e89-1f91-4246-929e-036ce1ad672f req-3a05c3ec-4c2d-4540-8ff1-300ca80de079 70c023d1312d402c80ad8f4da88a9196 a246563128804db9bc5cafa044f64b1c - - default default] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Received event network-vif-deleted-b3e71767-ffef-473e-947a-bb35562569c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 14 05:44:04 np0005486808 nova_compute[259627]: 2025-10-14 09:44:04.071 2 DEBUG oslo_concurrency.processutils [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:44:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:44:04 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3438459309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:44:04 np0005486808 nova_compute[259627]: 2025-10-14 09:44:04.497 2 DEBUG oslo_concurrency.processutils [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:44:04 np0005486808 nova_compute[259627]: 2025-10-14 09:44:04.506 2 DEBUG nova.compute.provider_tree [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:44:04 np0005486808 nova_compute[259627]: 2025-10-14 09:44:04.530 2 DEBUG nova.scheduler.client.report [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:44:04 np0005486808 nova_compute[259627]: 2025-10-14 09:44:04.566 2 DEBUG oslo_concurrency.lockutils [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.541s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:44:04 np0005486808 nova_compute[259627]: 2025-10-14 09:44:04.603 2 INFO nova.scheduler.client.report [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Deleted allocations for instance f6f1bb88-2b88-4876-ab72-3e4e4dc9a578#033[00m
Oct 14 05:44:04 np0005486808 nova_compute[259627]: 2025-10-14 09:44:04.703 2 DEBUG oslo_concurrency.lockutils [None req-d5bbb92c-35d8-4451-a477-069ce65928eb 30cd4a7832e74a8ba07238937405c4ea 878ad3f42dd94bef8cc66ab3d67f03bf - - default default] Lock "f6f1bb88-2b88-4876-ab72-3e4e4dc9a578" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:44:04 np0005486808 nova_compute[259627]: 2025-10-14 09:44:04.759 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:44:04 np0005486808 nova_compute[259627]: 2025-10-14 09:44:04.760 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:44:04 np0005486808 nova_compute[259627]: 2025-10-14 09:44:04.760 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:44:04 np0005486808 nova_compute[259627]: 2025-10-14 09:44:04.760 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 05:44:04 np0005486808 nova_compute[259627]: 2025-10-14 09:44:04.777 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 14 05:44:04 np0005486808 nova_compute[259627]: 2025-10-14 09:44:04.778 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:44:04 np0005486808 nova_compute[259627]: 2025-10-14 09:44:04.778 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:44:04 np0005486808 nova_compute[259627]: 2025-10-14 09:44:04.779 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:44:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2728: 305 pgs: 305 active+clean; 94 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 11 KiB/s wr, 35 op/s
Oct 14 05:44:05 np0005486808 nova_compute[259627]: 2025-10-14 09:44:05.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:44:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:44:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/795924222' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:44:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:44:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/795924222' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:44:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2729: 305 pgs: 305 active+clean; 41 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 12 KiB/s wr, 57 op/s
Oct 14 05:44:06 np0005486808 nova_compute[259627]: 2025-10-14 09:44:06.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:44:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:44:07.060 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:44:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:44:07.060 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:44:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:44:07.061 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:44:07 np0005486808 nova_compute[259627]: 2025-10-14 09:44:07.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:44:07 np0005486808 nova_compute[259627]: 2025-10-14 09:44:07.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:44:08 np0005486808 nova_compute[259627]: 2025-10-14 09:44:08.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:44:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:44:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2730: 305 pgs: 305 active+clean; 41 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 12 KiB/s wr, 56 op/s
Oct 14 05:44:10 np0005486808 nova_compute[259627]: 2025-10-14 09:44:10.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:44:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2731: 305 pgs: 305 active+clean; 41 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 12 KiB/s wr, 56 op/s
Oct 14 05:44:11 np0005486808 nova_compute[259627]: 2025-10-14 09:44:11.053 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760435036.0522313, 4c73d661-07da-475c-be98-686abd5354f9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:44:11 np0005486808 nova_compute[259627]: 2025-10-14 09:44:11.054 2 INFO nova.compute.manager [-] [instance: 4c73d661-07da-475c-be98-686abd5354f9] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:44:11 np0005486808 nova_compute[259627]: 2025-10-14 09:44:11.083 2 DEBUG nova.compute.manager [None req-0e6ab70f-f4fb-46f9-829b-2ce898060edb - - - - - -] [instance: 4c73d661-07da-475c-be98-686abd5354f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:44:12 np0005486808 nova_compute[259627]: 2025-10-14 09:44:12.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:44:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2732: 305 pgs: 305 active+clean; 41 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 852 B/s wr, 22 op/s
Oct 14 05:44:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:44:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2733: 305 pgs: 305 active+clean; 41 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 852 B/s wr, 22 op/s
Oct 14 05:44:15 np0005486808 nova_compute[259627]: 2025-10-14 09:44:15.115 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760435040.1139867, f6f1bb88-2b88-4876-ab72-3e4e4dc9a578 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:44:15 np0005486808 nova_compute[259627]: 2025-10-14 09:44:15.115 2 INFO nova.compute.manager [-] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:44:15 np0005486808 nova_compute[259627]: 2025-10-14 09:44:15.150 2 DEBUG nova.compute.manager [None req-7e1163d9-b40b-42c8-8904-c53514253f19 - - - - - -] [instance: f6f1bb88-2b88-4876-ab72-3e4e4dc9a578] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:44:15 np0005486808 nova_compute[259627]: 2025-10-14 09:44:15.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:44:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2734: 305 pgs: 305 active+clean; 41 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 852 B/s wr, 22 op/s
Oct 14 05:44:16 np0005486808 nova_compute[259627]: 2025-10-14 09:44:16.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:44:17 np0005486808 nova_compute[259627]: 2025-10-14 09:44:17.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:44:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:44:18 np0005486808 podman[422960]: 2025-10-14 09:44:18.716259387 +0000 UTC m=+0.115902075 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 05:44:18 np0005486808 podman[422959]: 2025-10-14 09:44:18.7428884 +0000 UTC m=+0.148300840 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251009)
Oct 14 05:44:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2735: 305 pgs: 305 active+clean; 41 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:44:18 np0005486808 nova_compute[259627]: 2025-10-14 09:44:18.949 2 DEBUG oslo_concurrency.lockutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Acquiring lock "9a630cb9-9f1a-4ea2-9047-158683504522" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:44:18 np0005486808 nova_compute[259627]: 2025-10-14 09:44:18.949 2 DEBUG oslo_concurrency.lockutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Lock "9a630cb9-9f1a-4ea2-9047-158683504522" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:44:18 np0005486808 nova_compute[259627]: 2025-10-14 09:44:18.968 2 DEBUG nova.compute.manager [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 14 05:44:19 np0005486808 nova_compute[259627]: 2025-10-14 09:44:19.060 2 DEBUG oslo_concurrency.lockutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:44:19 np0005486808 nova_compute[259627]: 2025-10-14 09:44:19.061 2 DEBUG oslo_concurrency.lockutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:44:19 np0005486808 nova_compute[259627]: 2025-10-14 09:44:19.073 2 DEBUG nova.virt.hardware [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 14 05:44:19 np0005486808 nova_compute[259627]: 2025-10-14 09:44:19.073 2 INFO nova.compute.claims [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 14 05:44:19 np0005486808 nova_compute[259627]: 2025-10-14 09:44:19.190 2 DEBUG oslo_concurrency.processutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:44:19 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:44:19 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3751397359' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:44:19 np0005486808 nova_compute[259627]: 2025-10-14 09:44:19.661 2 DEBUG oslo_concurrency.processutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:44:19 np0005486808 nova_compute[259627]: 2025-10-14 09:44:19.670 2 DEBUG nova.compute.provider_tree [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:44:19 np0005486808 nova_compute[259627]: 2025-10-14 09:44:19.693 2 DEBUG nova.scheduler.client.report [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:44:19 np0005486808 nova_compute[259627]: 2025-10-14 09:44:19.748 2 DEBUG oslo_concurrency.lockutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:44:19 np0005486808 nova_compute[259627]: 2025-10-14 09:44:19.749 2 DEBUG nova.compute.manager [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 14 05:44:19 np0005486808 nova_compute[259627]: 2025-10-14 09:44:19.819 2 DEBUG nova.compute.manager [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 14 05:44:19 np0005486808 nova_compute[259627]: 2025-10-14 09:44:19.820 2 DEBUG nova.network.neutron [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 14 05:44:19 np0005486808 nova_compute[259627]: 2025-10-14 09:44:19.852 2 INFO nova.virt.libvirt.driver [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 14 05:44:19 np0005486808 nova_compute[259627]: 2025-10-14 09:44:19.881 2 DEBUG nova.compute.manager [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 14 05:44:19 np0005486808 nova_compute[259627]: 2025-10-14 09:44:19.989 2 DEBUG nova.compute.manager [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 14 05:44:19 np0005486808 nova_compute[259627]: 2025-10-14 09:44:19.991 2 DEBUG nova.virt.libvirt.driver [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 14 05:44:19 np0005486808 nova_compute[259627]: 2025-10-14 09:44:19.992 2 INFO nova.virt.libvirt.driver [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Creating image(s)#033[00m
Oct 14 05:44:20 np0005486808 nova_compute[259627]: 2025-10-14 09:44:20.026 2 DEBUG nova.storage.rbd_utils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] rbd image 9a630cb9-9f1a-4ea2-9047-158683504522_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:44:20 np0005486808 nova_compute[259627]: 2025-10-14 09:44:20.061 2 DEBUG nova.storage.rbd_utils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] rbd image 9a630cb9-9f1a-4ea2-9047-158683504522_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:44:20 np0005486808 nova_compute[259627]: 2025-10-14 09:44:20.098 2 DEBUG nova.storage.rbd_utils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] rbd image 9a630cb9-9f1a-4ea2-9047-158683504522_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 14 05:44:20 np0005486808 nova_compute[259627]: 2025-10-14 09:44:20.103 2 DEBUG oslo_concurrency.processutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:44:20 np0005486808 nova_compute[259627]: 2025-10-14 09:44:20.213 2 DEBUG oslo_concurrency.processutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:44:20 np0005486808 nova_compute[259627]: 2025-10-14 09:44:20.214 2 DEBUG oslo_concurrency.lockutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Acquiring lock "342c3cf69558783c61e2fc446ea836becb687963" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:44:20 np0005486808 nova_compute[259627]: 2025-10-14 09:44:20.214 2 DEBUG oslo_concurrency.lockutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:44:20 np0005486808 nova_compute[259627]: 2025-10-14 09:44:20.215 2 DEBUG oslo_concurrency.lockutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Lock "342c3cf69558783c61e2fc446ea836becb687963" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:44:20 np0005486808 nova_compute[259627]: 2025-10-14 09:44:20.244 2 DEBUG nova.storage.rbd_utils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] rbd image 9a630cb9-9f1a-4ea2-9047-158683504522_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:44:20 np0005486808 nova_compute[259627]: 2025-10-14 09:44:20.248 2 DEBUG oslo_concurrency.processutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 9a630cb9-9f1a-4ea2-9047-158683504522_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:44:20 np0005486808 nova_compute[259627]: 2025-10-14 09:44:20.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:44:20 np0005486808 nova_compute[259627]: 2025-10-14 09:44:20.331 2 DEBUG nova.network.neutron [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 14 05:44:20 np0005486808 nova_compute[259627]: 2025-10-14 09:44:20.332 2 DEBUG nova.compute.manager [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 14 05:44:20 np0005486808 nova_compute[259627]: 2025-10-14 09:44:20.556 2 DEBUG oslo_concurrency.processutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 9a630cb9-9f1a-4ea2-9047-158683504522_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.308s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:44:20 np0005486808 nova_compute[259627]: 2025-10-14 09:44:20.629 2 DEBUG nova.storage.rbd_utils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] resizing rbd image 9a630cb9-9f1a-4ea2-9047-158683504522_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 14 05:44:20 np0005486808 nova_compute[259627]: 2025-10-14 09:44:20.740 2 DEBUG nova.objects.instance [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Lazy-loading 'migration_context' on Instance uuid 9a630cb9-9f1a-4ea2-9047-158683504522 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:44:20 np0005486808 nova_compute[259627]: 2025-10-14 09:44:20.763 2 DEBUG nova.virt.libvirt.driver [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 14 05:44:20 np0005486808 nova_compute[259627]: 2025-10-14 09:44:20.764 2 DEBUG nova.virt.libvirt.driver [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Ensure instance console log exists: /var/lib/nova/instances/9a630cb9-9f1a-4ea2-9047-158683504522/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 14 05:44:20 np0005486808 nova_compute[259627]: 2025-10-14 09:44:20.764 2 DEBUG oslo_concurrency.lockutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:44:20 np0005486808 nova_compute[259627]: 2025-10-14 09:44:20.765 2 DEBUG oslo_concurrency.lockutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:44:20 np0005486808 nova_compute[259627]: 2025-10-14 09:44:20.765 2 DEBUG oslo_concurrency.lockutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:44:20 np0005486808 nova_compute[259627]: 2025-10-14 09:44:20.767 2 DEBUG nova.virt.libvirt.driver [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'size': 0, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a4789543-f429-47d7-9f79-80a9d90a59f9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 14 05:44:20 np0005486808 nova_compute[259627]: 2025-10-14 09:44:20.771 2 WARNING nova.virt.libvirt.driver [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 14 05:44:20 np0005486808 nova_compute[259627]: 2025-10-14 09:44:20.777 2 DEBUG nova.virt.libvirt.host [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 14 05:44:20 np0005486808 nova_compute[259627]: 2025-10-14 09:44:20.778 2 DEBUG nova.virt.libvirt.host [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 14 05:44:20 np0005486808 nova_compute[259627]: 2025-10-14 09:44:20.785 2 DEBUG nova.virt.libvirt.host [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 14 05:44:20 np0005486808 nova_compute[259627]: 2025-10-14 09:44:20.785 2 DEBUG nova.virt.libvirt.host [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 14 05:44:20 np0005486808 nova_compute[259627]: 2025-10-14 09:44:20.786 2 DEBUG nova.virt.libvirt.driver [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 14 05:44:20 np0005486808 nova_compute[259627]: 2025-10-14 09:44:20.786 2 DEBUG nova.virt.hardware [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-14T08:53:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b1ba3503-183d-4116-a4a7-1e752f2247c6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-14T08:53:07Z,direct_url=<?>,disk_format='qcow2',id=a4789543-f429-47d7-9f79-80a9d90a59f9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ecc47810d1fb409dbea633329126389e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-14T08:53:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 14 05:44:20 np0005486808 nova_compute[259627]: 2025-10-14 09:44:20.787 2 DEBUG nova.virt.hardware [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 14 05:44:20 np0005486808 nova_compute[259627]: 2025-10-14 09:44:20.787 2 DEBUG nova.virt.hardware [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 14 05:44:20 np0005486808 nova_compute[259627]: 2025-10-14 09:44:20.788 2 DEBUG nova.virt.hardware [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 14 05:44:20 np0005486808 nova_compute[259627]: 2025-10-14 09:44:20.788 2 DEBUG nova.virt.hardware [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 14 05:44:20 np0005486808 nova_compute[259627]: 2025-10-14 09:44:20.789 2 DEBUG nova.virt.hardware [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 14 05:44:20 np0005486808 nova_compute[259627]: 2025-10-14 09:44:20.789 2 DEBUG nova.virt.hardware [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 14 05:44:20 np0005486808 nova_compute[259627]: 2025-10-14 09:44:20.789 2 DEBUG nova.virt.hardware [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 14 05:44:20 np0005486808 nova_compute[259627]: 2025-10-14 09:44:20.790 2 DEBUG nova.virt.hardware [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 14 05:44:20 np0005486808 nova_compute[259627]: 2025-10-14 09:44:20.790 2 DEBUG nova.virt.hardware [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 14 05:44:20 np0005486808 nova_compute[259627]: 2025-10-14 09:44:20.790 2 DEBUG nova.virt.hardware [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 14 05:44:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2736: 305 pgs: 305 active+clean; 55 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 418 KiB/s wr, 11 op/s
Oct 14 05:44:20 np0005486808 nova_compute[259627]: 2025-10-14 09:44:20.794 2 DEBUG oslo_concurrency.processutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:44:21 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:44:21 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1662102445' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:44:21 np0005486808 nova_compute[259627]: 2025-10-14 09:44:21.283 2 DEBUG oslo_concurrency.processutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:44:21 np0005486808 nova_compute[259627]: 2025-10-14 09:44:21.309 2 DEBUG nova.storage.rbd_utils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] rbd image 9a630cb9-9f1a-4ea2-9047-158683504522_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:44:21 np0005486808 nova_compute[259627]: 2025-10-14 09:44:21.314 2 DEBUG oslo_concurrency.processutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:44:21 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 14 05:44:21 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/414932043' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 14 05:44:21 np0005486808 nova_compute[259627]: 2025-10-14 09:44:21.769 2 DEBUG oslo_concurrency.processutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:44:21 np0005486808 nova_compute[259627]: 2025-10-14 09:44:21.772 2 DEBUG nova.objects.instance [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9a630cb9-9f1a-4ea2-9047-158683504522 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 14 05:44:21 np0005486808 nova_compute[259627]: 2025-10-14 09:44:21.792 2 DEBUG nova.virt.libvirt.driver [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] End _get_guest_xml xml=<domain type="kvm">
Oct 14 05:44:21 np0005486808 nova_compute[259627]:  <uuid>9a630cb9-9f1a-4ea2-9047-158683504522</uuid>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:  <name>instance-0000009a</name>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:  <memory>131072</memory>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:  <vcpu>1</vcpu>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:  <metadata>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 14 05:44:21 np0005486808 nova_compute[259627]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:      <nova:name>tempest-AggregatesAdminTestJSON-server-1922733757</nova:name>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:      <nova:creationTime>2025-10-14 09:44:20</nova:creationTime>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:      <nova:flavor name="m1.nano">
Oct 14 05:44:21 np0005486808 nova_compute[259627]:        <nova:memory>128</nova:memory>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:        <nova:disk>1</nova:disk>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:        <nova:swap>0</nova:swap>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:        <nova:ephemeral>0</nova:ephemeral>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:        <nova:vcpus>1</nova:vcpus>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:      </nova:flavor>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:      <nova:owner>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:        <nova:user uuid="3088994959d141278978eee7990c1ab0">tempest-AggregatesAdminTestJSON-414260326-project-member</nova:user>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:        <nova:project uuid="71af0b96f64c4075b5773834ff62aa97">tempest-AggregatesAdminTestJSON-414260326</nova:project>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:      </nova:owner>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:      <nova:root type="image" uuid="a4789543-f429-47d7-9f79-80a9d90a59f9"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:      <nova:ports/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    </nova:instance>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:  </metadata>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:  <sysinfo type="smbios">
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <system>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:      <entry name="manufacturer">RDO</entry>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:      <entry name="product">OpenStack Compute</entry>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:      <entry name="serial">9a630cb9-9f1a-4ea2-9047-158683504522</entry>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:      <entry name="uuid">9a630cb9-9f1a-4ea2-9047-158683504522</entry>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:      <entry name="family">Virtual Machine</entry>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    </system>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:  </sysinfo>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:  <os>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <boot dev="hd"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <smbios mode="sysinfo"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:  </os>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:  <features>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <acpi/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <apic/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <vmcoreinfo/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:  </features>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:  <clock offset="utc">
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <timer name="pit" tickpolicy="delay"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <timer name="hpet" present="no"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:  </clock>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:  <cpu mode="host-model" match="exact">
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <topology sockets="1" cores="1" threads="1"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:  </cpu>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:  <devices>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <disk type="network" device="disk">
Oct 14 05:44:21 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/9a630cb9-9f1a-4ea2-9047-158683504522_disk">
Oct 14 05:44:21 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:44:21 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:      <target dev="vda" bus="virtio"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <disk type="network" device="cdrom">
Oct 14 05:44:21 np0005486808 nova_compute[259627]:      <driver type="raw" cache="none"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:      <source protocol="rbd" name="vms/9a630cb9-9f1a-4ea2-9047-158683504522_disk.config">
Oct 14 05:44:21 np0005486808 nova_compute[259627]:        <host name="192.168.122.100" port="6789"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:      </source>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:      <auth username="openstack">
Oct 14 05:44:21 np0005486808 nova_compute[259627]:        <secret type="ceph" uuid="c49aadb6-9b04-5cb1-8f5f-4c91676c568e"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:      </auth>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:      <target dev="sda" bus="sata"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    </disk>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <serial type="pty">
Oct 14 05:44:21 np0005486808 nova_compute[259627]:      <log file="/var/lib/nova/instances/9a630cb9-9f1a-4ea2-9047-158683504522/console.log" append="off"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    </serial>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <video>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:      <model type="virtio"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    </video>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <input type="tablet" bus="usb"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <rng model="virtio">
Oct 14 05:44:21 np0005486808 nova_compute[259627]:      <backend model="random">/dev/urandom</backend>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    </rng>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <controller type="pci" model="pcie-root-port"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <controller type="usb" index="0"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    <memballoon model="virtio">
Oct 14 05:44:21 np0005486808 nova_compute[259627]:      <stats period="10"/>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:    </memballoon>
Oct 14 05:44:21 np0005486808 nova_compute[259627]:  </devices>
Oct 14 05:44:21 np0005486808 nova_compute[259627]: </domain>
Oct 14 05:44:21 np0005486808 nova_compute[259627]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 14 05:44:21 np0005486808 nova_compute[259627]: 2025-10-14 09:44:21.852 2 DEBUG nova.virt.libvirt.driver [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 05:44:21 np0005486808 nova_compute[259627]: 2025-10-14 09:44:21.853 2 DEBUG nova.virt.libvirt.driver [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 14 05:44:21 np0005486808 nova_compute[259627]: 2025-10-14 09:44:21.853 2 INFO nova.virt.libvirt.driver [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Using config drive
Oct 14 05:44:21 np0005486808 nova_compute[259627]: 2025-10-14 09:44:21.881 2 DEBUG nova.storage.rbd_utils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] rbd image 9a630cb9-9f1a-4ea2-9047-158683504522_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:44:22 np0005486808 nova_compute[259627]: 2025-10-14 09:44:22.199 2 INFO nova.virt.libvirt.driver [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Creating config drive at /var/lib/nova/instances/9a630cb9-9f1a-4ea2-9047-158683504522/disk.config
Oct 14 05:44:22 np0005486808 nova_compute[259627]: 2025-10-14 09:44:22.203 2 DEBUG oslo_concurrency.processutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9a630cb9-9f1a-4ea2-9047-158683504522/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6rsfp4xm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:44:22 np0005486808 nova_compute[259627]: 2025-10-14 09:44:22.353 2 DEBUG oslo_concurrency.processutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9a630cb9-9f1a-4ea2-9047-158683504522/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6rsfp4xm" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:44:22 np0005486808 nova_compute[259627]: 2025-10-14 09:44:22.394 2 DEBUG nova.storage.rbd_utils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] rbd image 9a630cb9-9f1a-4ea2-9047-158683504522_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 14 05:44:22 np0005486808 nova_compute[259627]: 2025-10-14 09:44:22.399 2 DEBUG oslo_concurrency.processutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9a630cb9-9f1a-4ea2-9047-158683504522/disk.config 9a630cb9-9f1a-4ea2-9047-158683504522_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 05:44:22 np0005486808 nova_compute[259627]: 2025-10-14 09:44:22.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:44:22 np0005486808 nova_compute[259627]: 2025-10-14 09:44:22.600 2 DEBUG oslo_concurrency.processutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9a630cb9-9f1a-4ea2-9047-158683504522/disk.config 9a630cb9-9f1a-4ea2-9047-158683504522_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:44:22 np0005486808 nova_compute[259627]: 2025-10-14 09:44:22.601 2 INFO nova.virt.libvirt.driver [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Deleting local config drive /var/lib/nova/instances/9a630cb9-9f1a-4ea2-9047-158683504522/disk.config because it was imported into RBD.
Oct 14 05:44:22 np0005486808 systemd-machined[214636]: New machine qemu-187-instance-0000009a.
Oct 14 05:44:22 np0005486808 systemd[1]: Started Virtual Machine qemu-187-instance-0000009a.
Oct 14 05:44:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2737: 305 pgs: 305 active+clean; 55 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 418 KiB/s wr, 11 op/s
Oct 14 05:44:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:44:23 np0005486808 nova_compute[259627]: 2025-10-14 09:44:23.780 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760435063.7795699, 9a630cb9-9f1a-4ea2-9047-158683504522 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:44:23 np0005486808 nova_compute[259627]: 2025-10-14 09:44:23.781 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] VM Resumed (Lifecycle Event)#033[00m
Oct 14 05:44:23 np0005486808 nova_compute[259627]: 2025-10-14 09:44:23.783 2 DEBUG nova.compute.manager [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 14 05:44:23 np0005486808 nova_compute[259627]: 2025-10-14 09:44:23.783 2 DEBUG nova.virt.libvirt.driver [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 14 05:44:23 np0005486808 nova_compute[259627]: 2025-10-14 09:44:23.787 2 INFO nova.virt.libvirt.driver [-] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Instance spawned successfully.#033[00m
Oct 14 05:44:23 np0005486808 nova_compute[259627]: 2025-10-14 09:44:23.787 2 DEBUG nova.virt.libvirt.driver [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 14 05:44:23 np0005486808 nova_compute[259627]: 2025-10-14 09:44:23.814 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:44:23 np0005486808 nova_compute[259627]: 2025-10-14 09:44:23.818 2 DEBUG nova.virt.libvirt.driver [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:44:23 np0005486808 nova_compute[259627]: 2025-10-14 09:44:23.818 2 DEBUG nova.virt.libvirt.driver [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:44:23 np0005486808 nova_compute[259627]: 2025-10-14 09:44:23.819 2 DEBUG nova.virt.libvirt.driver [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:44:23 np0005486808 nova_compute[259627]: 2025-10-14 09:44:23.819 2 DEBUG nova.virt.libvirt.driver [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:44:23 np0005486808 nova_compute[259627]: 2025-10-14 09:44:23.819 2 DEBUG nova.virt.libvirt.driver [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:44:23 np0005486808 nova_compute[259627]: 2025-10-14 09:44:23.820 2 DEBUG nova.virt.libvirt.driver [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 14 05:44:23 np0005486808 nova_compute[259627]: 2025-10-14 09:44:23.824 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:44:23 np0005486808 nova_compute[259627]: 2025-10-14 09:44:23.873 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:44:23 np0005486808 nova_compute[259627]: 2025-10-14 09:44:23.874 2 DEBUG nova.virt.driver [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] Emitting event <LifecycleEvent: 1760435063.783396, 9a630cb9-9f1a-4ea2-9047-158683504522 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:44:23 np0005486808 nova_compute[259627]: 2025-10-14 09:44:23.874 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] VM Started (Lifecycle Event)#033[00m
Oct 14 05:44:23 np0005486808 nova_compute[259627]: 2025-10-14 09:44:23.898 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:44:23 np0005486808 nova_compute[259627]: 2025-10-14 09:44:23.901 2 DEBUG nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 14 05:44:23 np0005486808 nova_compute[259627]: 2025-10-14 09:44:23.904 2 INFO nova.compute.manager [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Took 3.91 seconds to spawn the instance on the hypervisor.#033[00m
Oct 14 05:44:23 np0005486808 nova_compute[259627]: 2025-10-14 09:44:23.905 2 DEBUG nova.compute.manager [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:44:23 np0005486808 nova_compute[259627]: 2025-10-14 09:44:23.937 2 INFO nova.compute.manager [None req-b4f808ba-01ec-4bcb-9718-d75c92663fce - - - - - -] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 14 05:44:23 np0005486808 nova_compute[259627]: 2025-10-14 09:44:23.963 2 INFO nova.compute.manager [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Took 4.94 seconds to build instance.#033[00m
Oct 14 05:44:23 np0005486808 nova_compute[259627]: 2025-10-14 09:44:23.977 2 DEBUG oslo_concurrency.lockutils [None req-7ee8dd4a-d1de-4982-bb98-c8336c84bec4 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Lock "9a630cb9-9f1a-4ea2-9047-158683504522" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.028s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:44:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2738: 305 pgs: 305 active+clean; 55 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 418 KiB/s wr, 11 op/s
Oct 14 05:44:25 np0005486808 nova_compute[259627]: 2025-10-14 09:44:25.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:44:25 np0005486808 nova_compute[259627]: 2025-10-14 09:44:25.713 2 DEBUG oslo_concurrency.lockutils [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Acquiring lock "9a630cb9-9f1a-4ea2-9047-158683504522" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:44:25 np0005486808 nova_compute[259627]: 2025-10-14 09:44:25.714 2 DEBUG oslo_concurrency.lockutils [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Lock "9a630cb9-9f1a-4ea2-9047-158683504522" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:44:25 np0005486808 nova_compute[259627]: 2025-10-14 09:44:25.714 2 DEBUG oslo_concurrency.lockutils [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Acquiring lock "9a630cb9-9f1a-4ea2-9047-158683504522-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:44:25 np0005486808 nova_compute[259627]: 2025-10-14 09:44:25.714 2 DEBUG oslo_concurrency.lockutils [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Lock "9a630cb9-9f1a-4ea2-9047-158683504522-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:44:25 np0005486808 nova_compute[259627]: 2025-10-14 09:44:25.714 2 DEBUG oslo_concurrency.lockutils [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Lock "9a630cb9-9f1a-4ea2-9047-158683504522-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:44:25 np0005486808 nova_compute[259627]: 2025-10-14 09:44:25.715 2 INFO nova.compute.manager [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Terminating instance#033[00m
Oct 14 05:44:25 np0005486808 nova_compute[259627]: 2025-10-14 09:44:25.716 2 DEBUG oslo_concurrency.lockutils [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Acquiring lock "refresh_cache-9a630cb9-9f1a-4ea2-9047-158683504522" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 14 05:44:25 np0005486808 nova_compute[259627]: 2025-10-14 09:44:25.716 2 DEBUG oslo_concurrency.lockutils [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Acquired lock "refresh_cache-9a630cb9-9f1a-4ea2-9047-158683504522" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 14 05:44:25 np0005486808 nova_compute[259627]: 2025-10-14 09:44:25.716 2 DEBUG nova.network.neutron [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 14 05:44:25 np0005486808 nova_compute[259627]: 2025-10-14 09:44:25.900 2 DEBUG nova.network.neutron [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:44:26 np0005486808 nova_compute[259627]: 2025-10-14 09:44:26.285 2 DEBUG nova.network.neutron [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:44:26 np0005486808 nova_compute[259627]: 2025-10-14 09:44:26.305 2 DEBUG oslo_concurrency.lockutils [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Releasing lock "refresh_cache-9a630cb9-9f1a-4ea2-9047-158683504522" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 14 05:44:26 np0005486808 nova_compute[259627]: 2025-10-14 09:44:26.306 2 DEBUG nova.compute.manager [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 14 05:44:26 np0005486808 systemd[1]: machine-qemu\x2d187\x2dinstance\x2d0000009a.scope: Deactivated successfully.
Oct 14 05:44:26 np0005486808 systemd[1]: machine-qemu\x2d187\x2dinstance\x2d0000009a.scope: Consumed 3.672s CPU time.
Oct 14 05:44:26 np0005486808 systemd-machined[214636]: Machine qemu-187-instance-0000009a terminated.
Oct 14 05:44:26 np0005486808 nova_compute[259627]: 2025-10-14 09:44:26.535 2 INFO nova.virt.libvirt.driver [-] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Instance destroyed successfully.#033[00m
Oct 14 05:44:26 np0005486808 nova_compute[259627]: 2025-10-14 09:44:26.536 2 DEBUG nova.objects.instance [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Lazy-loading 'resources' on Instance uuid 9a630cb9-9f1a-4ea2-9047-158683504522 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 14 05:44:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2739: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 14 05:44:27 np0005486808 nova_compute[259627]: 2025-10-14 09:44:27.007 2 INFO nova.virt.libvirt.driver [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Deleting instance files /var/lib/nova/instances/9a630cb9-9f1a-4ea2-9047-158683504522_del#033[00m
Oct 14 05:44:27 np0005486808 nova_compute[259627]: 2025-10-14 09:44:27.008 2 INFO nova.virt.libvirt.driver [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Deletion of /var/lib/nova/instances/9a630cb9-9f1a-4ea2-9047-158683504522_del complete#033[00m
Oct 14 05:44:27 np0005486808 nova_compute[259627]: 2025-10-14 09:44:27.083 2 INFO nova.compute.manager [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Oct 14 05:44:27 np0005486808 nova_compute[259627]: 2025-10-14 09:44:27.083 2 DEBUG oslo.service.loopingcall [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 14 05:44:27 np0005486808 nova_compute[259627]: 2025-10-14 09:44:27.084 2 DEBUG nova.compute.manager [-] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 14 05:44:27 np0005486808 nova_compute[259627]: 2025-10-14 09:44:27.084 2 DEBUG nova.network.neutron [-] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 14 05:44:27 np0005486808 nova_compute[259627]: 2025-10-14 09:44:27.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:44:27 np0005486808 nova_compute[259627]: 2025-10-14 09:44:27.513 2 DEBUG nova.network.neutron [-] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 14 05:44:27 np0005486808 nova_compute[259627]: 2025-10-14 09:44:27.528 2 DEBUG nova.network.neutron [-] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 14 05:44:27 np0005486808 nova_compute[259627]: 2025-10-14 09:44:27.542 2 INFO nova.compute.manager [-] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Took 0.46 seconds to deallocate network for instance.#033[00m
Oct 14 05:44:27 np0005486808 nova_compute[259627]: 2025-10-14 09:44:27.585 2 DEBUG oslo_concurrency.lockutils [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:44:27 np0005486808 nova_compute[259627]: 2025-10-14 09:44:27.586 2 DEBUG oslo_concurrency.lockutils [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:44:27 np0005486808 nova_compute[259627]: 2025-10-14 09:44:27.659 2 DEBUG oslo_concurrency.processutils [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:44:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:44:28 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3425827472' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:44:28 np0005486808 nova_compute[259627]: 2025-10-14 09:44:28.208 2 DEBUG oslo_concurrency.processutils [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:44:28 np0005486808 nova_compute[259627]: 2025-10-14 09:44:28.215 2 DEBUG nova.compute.provider_tree [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:44:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:44:28 np0005486808 nova_compute[259627]: 2025-10-14 09:44:28.232 2 DEBUG nova.scheduler.client.report [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:44:28 np0005486808 nova_compute[259627]: 2025-10-14 09:44:28.251 2 DEBUG oslo_concurrency.lockutils [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:44:28 np0005486808 nova_compute[259627]: 2025-10-14 09:44:28.307 2 INFO nova.scheduler.client.report [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Deleted allocations for instance 9a630cb9-9f1a-4ea2-9047-158683504522#033[00m
Oct 14 05:44:28 np0005486808 nova_compute[259627]: 2025-10-14 09:44:28.365 2 DEBUG oslo_concurrency.lockutils [None req-eaaa3c44-5f7b-485a-a3ef-b9354d7080d6 3088994959d141278978eee7990c1ab0 71af0b96f64c4075b5773834ff62aa97 - - default default] Lock "9a630cb9-9f1a-4ea2-9047-158683504522" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:44:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2740: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 14 05:44:29 np0005486808 nova_compute[259627]: 2025-10-14 09:44:29.991 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:44:29 np0005486808 nova_compute[259627]: 2025-10-14 09:44:29.991 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 14 05:44:30 np0005486808 nova_compute[259627]: 2025-10-14 09:44:30.017 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 14 05:44:30 np0005486808 nova_compute[259627]: 2025-10-14 09:44:30.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:44:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2741: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct 14 05:44:31 np0005486808 nova_compute[259627]: 2025-10-14 09:44:31.161 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:44:32 np0005486808 nova_compute[259627]: 2025-10-14 09:44:32.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:44:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:44:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:44:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2742: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 115 op/s
Oct 14 05:44:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:44:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:44:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:44:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:44:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:44:32
Oct 14 05:44:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:44:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:44:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'backups', 'volumes', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', '.mgr']
Oct 14 05:44:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:44:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:44:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:44:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:44:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:44:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:44:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:44:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:44:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:44:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:44:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:44:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:44:33 np0005486808 nova_compute[259627]: 2025-10-14 09:44:33.993 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:44:34 np0005486808 podman[423414]: 2025-10-14 09:44:34.692396057 +0000 UTC m=+0.086753590 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:44:34 np0005486808 podman[423413]: 2025-10-14 09:44:34.69904098 +0000 UTC m=+0.095641348 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 14 05:44:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2743: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 115 op/s
Oct 14 05:44:35 np0005486808 nova_compute[259627]: 2025-10-14 09:44:35.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:44:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2744: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 115 op/s
Oct 14 05:44:37 np0005486808 nova_compute[259627]: 2025-10-14 09:44:37.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:44:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:44:38 np0005486808 ovn_controller[152662]: 2025-10-14T09:44:38Z|01683|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Oct 14 05:44:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2745: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct 14 05:44:40 np0005486808 nova_compute[259627]: 2025-10-14 09:44:40.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:44:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2746: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 1.2 KiB/s wr, 52 op/s
Oct 14 05:44:41 np0005486808 nova_compute[259627]: 2025-10-14 09:44:41.532 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760435066.531276, 9a630cb9-9f1a-4ea2-9047-158683504522 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 14 05:44:41 np0005486808 nova_compute[259627]: 2025-10-14 09:44:41.533 2 INFO nova.compute.manager [-] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] VM Stopped (Lifecycle Event)#033[00m
Oct 14 05:44:41 np0005486808 nova_compute[259627]: 2025-10-14 09:44:41.558 2 DEBUG nova.compute.manager [None req-7771e10f-f69c-4104-8800-0f0bb123037c - - - - - -] [instance: 9a630cb9-9f1a-4ea2-9047-158683504522] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 14 05:44:42 np0005486808 nova_compute[259627]: 2025-10-14 09:44:42.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:44:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2747: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Oct 14 05:44:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:44:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:44:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:44:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:44:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:44:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 05:44:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:44:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:44:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:44:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:44:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:44:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 05:44:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:44:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:44:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:44:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:44:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:44:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:44:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:44:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:44:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:44:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:44:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:44:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:44:44 np0005486808 nova_compute[259627]: 2025-10-14 09:44:44.163 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:44:44 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:44:44 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:44:44 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:44:44 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:44:44 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:44:44 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:44:44 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 3d6f2b1f-69f2-462c-a652-25168b7139ea does not exist
Oct 14 05:44:44 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 0613f838-1b00-47cf-ba37-b0fed131d850 does not exist
Oct 14 05:44:44 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev e1417b61-9154-4582-9729-7d2bc1707c82 does not exist
Oct 14 05:44:44 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:44:44 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:44:44 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:44:44 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:44:44 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:44:44 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:44:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2748: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Oct 14 05:44:45 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:44:45 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:44:45 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:44:45 np0005486808 podman[423724]: 2025-10-14 09:44:45.072254657 +0000 UTC m=+0.048332097 container create 00c98af474c7f93884b46a9d7b795d00c31c476d0befa3d608915d3cb1a00be0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_lederberg, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 14 05:44:45 np0005486808 systemd[1]: Started libpod-conmon-00c98af474c7f93884b46a9d7b795d00c31c476d0befa3d608915d3cb1a00be0.scope.
Oct 14 05:44:45 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:44:45 np0005486808 podman[423724]: 2025-10-14 09:44:45.052610115 +0000 UTC m=+0.028687575 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:44:45 np0005486808 podman[423724]: 2025-10-14 09:44:45.166705284 +0000 UTC m=+0.142782744 container init 00c98af474c7f93884b46a9d7b795d00c31c476d0befa3d608915d3cb1a00be0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 14 05:44:45 np0005486808 podman[423724]: 2025-10-14 09:44:45.174621328 +0000 UTC m=+0.150698798 container start 00c98af474c7f93884b46a9d7b795d00c31c476d0befa3d608915d3cb1a00be0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_lederberg, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 14 05:44:45 np0005486808 podman[423724]: 2025-10-14 09:44:45.178675637 +0000 UTC m=+0.154753087 container attach 00c98af474c7f93884b46a9d7b795d00c31c476d0befa3d608915d3cb1a00be0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_lederberg, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:44:45 np0005486808 amazing_lederberg[423740]: 167 167
Oct 14 05:44:45 np0005486808 systemd[1]: libpod-00c98af474c7f93884b46a9d7b795d00c31c476d0befa3d608915d3cb1a00be0.scope: Deactivated successfully.
Oct 14 05:44:45 np0005486808 podman[423724]: 2025-10-14 09:44:45.181807284 +0000 UTC m=+0.157884724 container died 00c98af474c7f93884b46a9d7b795d00c31c476d0befa3d608915d3cb1a00be0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 14 05:44:45 np0005486808 systemd[1]: var-lib-containers-storage-overlay-b08a7328531acb1b75c94ff7e2a9979088c8ae12579aa1c79aafc57113d5b79f-merged.mount: Deactivated successfully.
Oct 14 05:44:45 np0005486808 podman[423724]: 2025-10-14 09:44:45.233837951 +0000 UTC m=+0.209915411 container remove 00c98af474c7f93884b46a9d7b795d00c31c476d0befa3d608915d3cb1a00be0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_lederberg, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:44:45 np0005486808 systemd[1]: libpod-conmon-00c98af474c7f93884b46a9d7b795d00c31c476d0befa3d608915d3cb1a00be0.scope: Deactivated successfully.
Oct 14 05:44:45 np0005486808 nova_compute[259627]: 2025-10-14 09:44:45.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:44:45 np0005486808 podman[423762]: 2025-10-14 09:44:45.474926867 +0000 UTC m=+0.073144926 container create 8e024d98e6fc345423dca4bf097193f71f79c3b35a268ee4420c8ac2fdea12ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_torvalds, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:44:45 np0005486808 systemd[1]: Started libpod-conmon-8e024d98e6fc345423dca4bf097193f71f79c3b35a268ee4420c8ac2fdea12ee.scope.
Oct 14 05:44:45 np0005486808 podman[423762]: 2025-10-14 09:44:45.446672334 +0000 UTC m=+0.044890413 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:44:45 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:44:45 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10fe7ba637f770135f001be1bfc838701ebd0dd3818b5bfb9c515e573454e125/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:44:45 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10fe7ba637f770135f001be1bfc838701ebd0dd3818b5bfb9c515e573454e125/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:44:45 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10fe7ba637f770135f001be1bfc838701ebd0dd3818b5bfb9c515e573454e125/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:44:45 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10fe7ba637f770135f001be1bfc838701ebd0dd3818b5bfb9c515e573454e125/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:44:45 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10fe7ba637f770135f001be1bfc838701ebd0dd3818b5bfb9c515e573454e125/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:44:45 np0005486808 podman[423762]: 2025-10-14 09:44:45.579543645 +0000 UTC m=+0.177761704 container init 8e024d98e6fc345423dca4bf097193f71f79c3b35a268ee4420c8ac2fdea12ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_torvalds, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:44:45 np0005486808 podman[423762]: 2025-10-14 09:44:45.595679931 +0000 UTC m=+0.193897980 container start 8e024d98e6fc345423dca4bf097193f71f79c3b35a268ee4420c8ac2fdea12ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_torvalds, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:44:45 np0005486808 podman[423762]: 2025-10-14 09:44:45.599700609 +0000 UTC m=+0.197918658 container attach 8e024d98e6fc345423dca4bf097193f71f79c3b35a268ee4420c8ac2fdea12ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 14 05:44:46 np0005486808 boring_torvalds[423779]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:44:46 np0005486808 boring_torvalds[423779]: --> relative data size: 1.0
Oct 14 05:44:46 np0005486808 boring_torvalds[423779]: --> All data devices are unavailable
Oct 14 05:44:46 np0005486808 systemd[1]: libpod-8e024d98e6fc345423dca4bf097193f71f79c3b35a268ee4420c8ac2fdea12ee.scope: Deactivated successfully.
Oct 14 05:44:46 np0005486808 systemd[1]: libpod-8e024d98e6fc345423dca4bf097193f71f79c3b35a268ee4420c8ac2fdea12ee.scope: Consumed 1.089s CPU time.
Oct 14 05:44:46 np0005486808 podman[423762]: 2025-10-14 09:44:46.728381167 +0000 UTC m=+1.326599226 container died 8e024d98e6fc345423dca4bf097193f71f79c3b35a268ee4420c8ac2fdea12ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_torvalds, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:44:46 np0005486808 systemd[1]: var-lib-containers-storage-overlay-10fe7ba637f770135f001be1bfc838701ebd0dd3818b5bfb9c515e573454e125-merged.mount: Deactivated successfully.
Oct 14 05:44:46 np0005486808 podman[423762]: 2025-10-14 09:44:46.797641907 +0000 UTC m=+1.395859946 container remove 8e024d98e6fc345423dca4bf097193f71f79c3b35a268ee4420c8ac2fdea12ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 05:44:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2749: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 05:44:46 np0005486808 systemd[1]: libpod-conmon-8e024d98e6fc345423dca4bf097193f71f79c3b35a268ee4420c8ac2fdea12ee.scope: Deactivated successfully.
Oct 14 05:44:47 np0005486808 podman[423960]: 2025-10-14 09:44:47.417267513 +0000 UTC m=+0.055852602 container create 05edce7672b885c04df3742fa5d95afe6e93f80cc1c513a9cc3d6cd63127024f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hertz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 14 05:44:47 np0005486808 systemd[1]: Started libpod-conmon-05edce7672b885c04df3742fa5d95afe6e93f80cc1c513a9cc3d6cd63127024f.scope.
Oct 14 05:44:47 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:44:47 np0005486808 podman[423960]: 2025-10-14 09:44:47.386922928 +0000 UTC m=+0.025508067 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:44:47 np0005486808 podman[423960]: 2025-10-14 09:44:47.493724459 +0000 UTC m=+0.132309528 container init 05edce7672b885c04df3742fa5d95afe6e93f80cc1c513a9cc3d6cd63127024f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hertz, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:44:47 np0005486808 podman[423960]: 2025-10-14 09:44:47.499241754 +0000 UTC m=+0.137826833 container start 05edce7672b885c04df3742fa5d95afe6e93f80cc1c513a9cc3d6cd63127024f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hertz, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 14 05:44:47 np0005486808 quirky_hertz[423977]: 167 167
Oct 14 05:44:47 np0005486808 podman[423960]: 2025-10-14 09:44:47.503249073 +0000 UTC m=+0.141834142 container attach 05edce7672b885c04df3742fa5d95afe6e93f80cc1c513a9cc3d6cd63127024f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hertz, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 05:44:47 np0005486808 systemd[1]: libpod-05edce7672b885c04df3742fa5d95afe6e93f80cc1c513a9cc3d6cd63127024f.scope: Deactivated successfully.
Oct 14 05:44:47 np0005486808 podman[423960]: 2025-10-14 09:44:47.504155375 +0000 UTC m=+0.142740474 container died 05edce7672b885c04df3742fa5d95afe6e93f80cc1c513a9cc3d6cd63127024f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hertz, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:44:47 np0005486808 systemd[1]: var-lib-containers-storage-overlay-fe6f044e97aea2da9808f587d685754d0acf878869106ee614298344c6ad7cec-merged.mount: Deactivated successfully.
Oct 14 05:44:47 np0005486808 podman[423960]: 2025-10-14 09:44:47.547635842 +0000 UTC m=+0.186220911 container remove 05edce7672b885c04df3742fa5d95afe6e93f80cc1c513a9cc3d6cd63127024f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:44:47 np0005486808 systemd[1]: libpod-conmon-05edce7672b885c04df3742fa5d95afe6e93f80cc1c513a9cc3d6cd63127024f.scope: Deactivated successfully.
Oct 14 05:44:47 np0005486808 nova_compute[259627]: 2025-10-14 09:44:47.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:44:47 np0005486808 podman[424001]: 2025-10-14 09:44:47.725706562 +0000 UTC m=+0.055269807 container create e69382b2e0022927bbf2ed6ec23a968db804c9bef16b80bc81000e61f492aaf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_haibt, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 14 05:44:47 np0005486808 systemd[1]: Started libpod-conmon-e69382b2e0022927bbf2ed6ec23a968db804c9bef16b80bc81000e61f492aaf2.scope.
Oct 14 05:44:47 np0005486808 podman[424001]: 2025-10-14 09:44:47.697992562 +0000 UTC m=+0.027555867 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:44:47 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:44:47 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc457e595c7df51480b09de12f0409974fdc06c1f592954ee47449b447fdcd77/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:44:47 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc457e595c7df51480b09de12f0409974fdc06c1f592954ee47449b447fdcd77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:44:47 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc457e595c7df51480b09de12f0409974fdc06c1f592954ee47449b447fdcd77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:44:47 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc457e595c7df51480b09de12f0409974fdc06c1f592954ee47449b447fdcd77/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:44:47 np0005486808 podman[424001]: 2025-10-14 09:44:47.82587634 +0000 UTC m=+0.155439635 container init e69382b2e0022927bbf2ed6ec23a968db804c9bef16b80bc81000e61f492aaf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 14 05:44:47 np0005486808 podman[424001]: 2025-10-14 09:44:47.833904307 +0000 UTC m=+0.163467512 container start e69382b2e0022927bbf2ed6ec23a968db804c9bef16b80bc81000e61f492aaf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_haibt, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 14 05:44:47 np0005486808 podman[424001]: 2025-10-14 09:44:47.836759697 +0000 UTC m=+0.166322912 container attach e69382b2e0022927bbf2ed6ec23a968db804c9bef16b80bc81000e61f492aaf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_haibt, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:44:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]: {
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:    "0": [
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:        {
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:            "devices": [
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:                "/dev/loop3"
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:            ],
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:            "lv_name": "ceph_lv0",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:            "lv_size": "21470642176",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:            "name": "ceph_lv0",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:            "tags": {
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:                "ceph.cluster_name": "ceph",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:                "ceph.crush_device_class": "",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:                "ceph.encrypted": "0",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:                "ceph.osd_id": "0",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:                "ceph.type": "block",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:                "ceph.vdo": "0"
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:            },
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:            "type": "block",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:            "vg_name": "ceph_vg0"
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:        }
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:    ],
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:    "1": [
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:        {
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:            "devices": [
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:                "/dev/loop4"
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:            ],
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:            "lv_name": "ceph_lv1",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:            "lv_size": "21470642176",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:            "name": "ceph_lv1",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:            "tags": {
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:                "ceph.cluster_name": "ceph",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:                "ceph.crush_device_class": "",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:                "ceph.encrypted": "0",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:                "ceph.osd_id": "1",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:                "ceph.type": "block",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:                "ceph.vdo": "0"
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:            },
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:            "type": "block",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:            "vg_name": "ceph_vg1"
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:        }
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:    ],
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:    "2": [
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:        {
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:            "devices": [
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:                "/dev/loop5"
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:            ],
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:            "lv_name": "ceph_lv2",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:            "lv_size": "21470642176",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:            "name": "ceph_lv2",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:            "tags": {
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:                "ceph.cluster_name": "ceph",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:                "ceph.crush_device_class": "",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:                "ceph.encrypted": "0",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:                "ceph.osd_id": "2",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:                "ceph.type": "block",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:                "ceph.vdo": "0"
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:            },
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:            "type": "block",
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:            "vg_name": "ceph_vg2"
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:        }
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]:    ]
Oct 14 05:44:48 np0005486808 peaceful_haibt[424017]: }
Oct 14 05:44:48 np0005486808 systemd[1]: libpod-e69382b2e0022927bbf2ed6ec23a968db804c9bef16b80bc81000e61f492aaf2.scope: Deactivated successfully.
Oct 14 05:44:48 np0005486808 podman[424001]: 2025-10-14 09:44:48.692070586 +0000 UTC m=+1.021633791 container died e69382b2e0022927bbf2ed6ec23a968db804c9bef16b80bc81000e61f492aaf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Oct 14 05:44:48 np0005486808 systemd[1]: var-lib-containers-storage-overlay-cc457e595c7df51480b09de12f0409974fdc06c1f592954ee47449b447fdcd77-merged.mount: Deactivated successfully.
Oct 14 05:44:48 np0005486808 podman[424001]: 2025-10-14 09:44:48.761601592 +0000 UTC m=+1.091164807 container remove e69382b2e0022927bbf2ed6ec23a968db804c9bef16b80bc81000e61f492aaf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 14 05:44:48 np0005486808 systemd[1]: libpod-conmon-e69382b2e0022927bbf2ed6ec23a968db804c9bef16b80bc81000e61f492aaf2.scope: Deactivated successfully.
Oct 14 05:44:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2750: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 05:44:48 np0005486808 podman[424041]: 2025-10-14 09:44:48.896429861 +0000 UTC m=+0.094818438 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 05:44:48 np0005486808 podman[424040]: 2025-10-14 09:44:48.897877996 +0000 UTC m=+0.092744397 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 14 05:44:49 np0005486808 podman[424225]: 2025-10-14 09:44:49.472400645 +0000 UTC m=+0.048519592 container create eb0c59a05bb32c880ebe57a9f8745d8e5a47cc10c04e30fffe56c798341e5b8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_clarke, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:44:49 np0005486808 systemd[1]: Started libpod-conmon-eb0c59a05bb32c880ebe57a9f8745d8e5a47cc10c04e30fffe56c798341e5b8b.scope.
Oct 14 05:44:49 np0005486808 podman[424225]: 2025-10-14 09:44:49.454413204 +0000 UTC m=+0.030532191 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:44:49 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:44:49 np0005486808 podman[424225]: 2025-10-14 09:44:49.586945356 +0000 UTC m=+0.163064323 container init eb0c59a05bb32c880ebe57a9f8745d8e5a47cc10c04e30fffe56c798341e5b8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:44:49 np0005486808 podman[424225]: 2025-10-14 09:44:49.599520125 +0000 UTC m=+0.175639062 container start eb0c59a05bb32c880ebe57a9f8745d8e5a47cc10c04e30fffe56c798341e5b8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_clarke, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 14 05:44:49 np0005486808 podman[424225]: 2025-10-14 09:44:49.604833955 +0000 UTC m=+0.180952942 container attach eb0c59a05bb32c880ebe57a9f8745d8e5a47cc10c04e30fffe56c798341e5b8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_clarke, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:44:49 np0005486808 pensive_clarke[424241]: 167 167
Oct 14 05:44:49 np0005486808 systemd[1]: libpod-eb0c59a05bb32c880ebe57a9f8745d8e5a47cc10c04e30fffe56c798341e5b8b.scope: Deactivated successfully.
Oct 14 05:44:49 np0005486808 conmon[424241]: conmon eb0c59a05bb32c880ebe <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eb0c59a05bb32c880ebe57a9f8745d8e5a47cc10c04e30fffe56c798341e5b8b.scope/container/memory.events
Oct 14 05:44:49 np0005486808 podman[424225]: 2025-10-14 09:44:49.609631533 +0000 UTC m=+0.185750500 container died eb0c59a05bb32c880ebe57a9f8745d8e5a47cc10c04e30fffe56c798341e5b8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 14 05:44:49 np0005486808 systemd[1]: var-lib-containers-storage-overlay-2a07eef93812081a35ff66e2053bf846a069d0da409e052dab8d72dad56eb743-merged.mount: Deactivated successfully.
Oct 14 05:44:49 np0005486808 podman[424225]: 2025-10-14 09:44:49.655419276 +0000 UTC m=+0.231538233 container remove eb0c59a05bb32c880ebe57a9f8745d8e5a47cc10c04e30fffe56c798341e5b8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_clarke, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 14 05:44:49 np0005486808 systemd[1]: libpod-conmon-eb0c59a05bb32c880ebe57a9f8745d8e5a47cc10c04e30fffe56c798341e5b8b.scope: Deactivated successfully.
Oct 14 05:44:49 np0005486808 podman[424264]: 2025-10-14 09:44:49.852866812 +0000 UTC m=+0.047825545 container create a5f2ee984fa33dc54c39ab27852b80791a7721c48f576ef672a5f83ef840b774 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 14 05:44:49 np0005486808 systemd[1]: Started libpod-conmon-a5f2ee984fa33dc54c39ab27852b80791a7721c48f576ef672a5f83ef840b774.scope.
Oct 14 05:44:49 np0005486808 podman[424264]: 2025-10-14 09:44:49.832141663 +0000 UTC m=+0.027100436 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:44:49 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:44:49 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3570e7c4b08e243534dc6de518df0013ef382110e7d821c9c15da918c60f64a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:44:49 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3570e7c4b08e243534dc6de518df0013ef382110e7d821c9c15da918c60f64a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:44:49 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3570e7c4b08e243534dc6de518df0013ef382110e7d821c9c15da918c60f64a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:44:49 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3570e7c4b08e243534dc6de518df0013ef382110e7d821c9c15da918c60f64a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:44:49 np0005486808 podman[424264]: 2025-10-14 09:44:49.945624558 +0000 UTC m=+0.140583311 container init a5f2ee984fa33dc54c39ab27852b80791a7721c48f576ef672a5f83ef840b774 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gagarin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Oct 14 05:44:49 np0005486808 podman[424264]: 2025-10-14 09:44:49.958116485 +0000 UTC m=+0.153075218 container start a5f2ee984fa33dc54c39ab27852b80791a7721c48f576ef672a5f83ef840b774 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gagarin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 14 05:44:49 np0005486808 podman[424264]: 2025-10-14 09:44:49.961807825 +0000 UTC m=+0.156766588 container attach a5f2ee984fa33dc54c39ab27852b80791a7721c48f576ef672a5f83ef840b774 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 14 05:44:50 np0005486808 nova_compute[259627]: 2025-10-14 09:44:50.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:44:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2751: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 05:44:50 np0005486808 nova_compute[259627]: 2025-10-14 09:44:50.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:44:51 np0005486808 practical_gagarin[424281]: {
Oct 14 05:44:51 np0005486808 practical_gagarin[424281]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:44:51 np0005486808 practical_gagarin[424281]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:44:51 np0005486808 practical_gagarin[424281]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:44:51 np0005486808 practical_gagarin[424281]:        "osd_id": 2,
Oct 14 05:44:51 np0005486808 practical_gagarin[424281]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:44:51 np0005486808 practical_gagarin[424281]:        "type": "bluestore"
Oct 14 05:44:51 np0005486808 practical_gagarin[424281]:    },
Oct 14 05:44:51 np0005486808 practical_gagarin[424281]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:44:51 np0005486808 practical_gagarin[424281]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:44:51 np0005486808 practical_gagarin[424281]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:44:51 np0005486808 practical_gagarin[424281]:        "osd_id": 1,
Oct 14 05:44:51 np0005486808 practical_gagarin[424281]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:44:51 np0005486808 practical_gagarin[424281]:        "type": "bluestore"
Oct 14 05:44:51 np0005486808 practical_gagarin[424281]:    },
Oct 14 05:44:51 np0005486808 practical_gagarin[424281]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:44:51 np0005486808 practical_gagarin[424281]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:44:51 np0005486808 practical_gagarin[424281]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:44:51 np0005486808 practical_gagarin[424281]:        "osd_id": 0,
Oct 14 05:44:51 np0005486808 practical_gagarin[424281]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:44:51 np0005486808 practical_gagarin[424281]:        "type": "bluestore"
Oct 14 05:44:51 np0005486808 practical_gagarin[424281]:    }
Oct 14 05:44:51 np0005486808 practical_gagarin[424281]: }
Oct 14 05:44:51 np0005486808 systemd[1]: libpod-a5f2ee984fa33dc54c39ab27852b80791a7721c48f576ef672a5f83ef840b774.scope: Deactivated successfully.
Oct 14 05:44:51 np0005486808 systemd[1]: libpod-a5f2ee984fa33dc54c39ab27852b80791a7721c48f576ef672a5f83ef840b774.scope: Consumed 1.114s CPU time.
Oct 14 05:44:51 np0005486808 podman[424264]: 2025-10-14 09:44:51.068597106 +0000 UTC m=+1.263555899 container died a5f2ee984fa33dc54c39ab27852b80791a7721c48f576ef672a5f83ef840b774 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gagarin, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:44:51 np0005486808 systemd[1]: var-lib-containers-storage-overlay-3570e7c4b08e243534dc6de518df0013ef382110e7d821c9c15da918c60f64a7-merged.mount: Deactivated successfully.
Oct 14 05:44:51 np0005486808 podman[424264]: 2025-10-14 09:44:51.13271701 +0000 UTC m=+1.327675743 container remove a5f2ee984fa33dc54c39ab27852b80791a7721c48f576ef672a5f83ef840b774 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 14 05:44:51 np0005486808 systemd[1]: libpod-conmon-a5f2ee984fa33dc54c39ab27852b80791a7721c48f576ef672a5f83ef840b774.scope: Deactivated successfully.
Oct 14 05:44:51 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:44:51 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:44:51 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:44:51 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:44:51 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev c14abd6f-1c3c-40a9-99e8-14ffa4fd9e49 does not exist
Oct 14 05:44:51 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 213a146c-afff-46bb-86ed-f53884eb468a does not exist
Oct 14 05:44:52 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:44:52 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:44:52 np0005486808 nova_compute[259627]: 2025-10-14 09:44:52.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:44:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2752: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 33 op/s
Oct 14 05:44:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:44:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2753: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 33 op/s
Oct 14 05:44:55 np0005486808 nova_compute[259627]: 2025-10-14 09:44:55.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:44:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:44:56.808 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:44:56 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:44:56.810 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 14 05:44:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2754: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 33 op/s
Oct 14 05:44:56 np0005486808 nova_compute[259627]: 2025-10-14 09:44:56.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:44:57 np0005486808 nova_compute[259627]: 2025-10-14 09:44:57.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:44:57 np0005486808 nova_compute[259627]: 2025-10-14 09:44:57.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:44:57 np0005486808 nova_compute[259627]: 2025-10-14 09:44:57.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:44:58 np0005486808 nova_compute[259627]: 2025-10-14 09:44:58.006 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:44:58 np0005486808 nova_compute[259627]: 2025-10-14 09:44:58.007 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:44:58 np0005486808 nova_compute[259627]: 2025-10-14 09:44:58.007 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:44:58 np0005486808 nova_compute[259627]: 2025-10-14 09:44:58.007 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:44:58 np0005486808 nova_compute[259627]: 2025-10-14 09:44:58.008 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:44:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:44:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:44:58 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3963880948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:44:58 np0005486808 nova_compute[259627]: 2025-10-14 09:44:58.527 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:44:58 np0005486808 nova_compute[259627]: 2025-10-14 09:44:58.802 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:44:58 np0005486808 nova_compute[259627]: 2025-10-14 09:44:58.803 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3582MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:44:58 np0005486808 nova_compute[259627]: 2025-10-14 09:44:58.804 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:44:58 np0005486808 nova_compute[259627]: 2025-10-14 09:44:58.804 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:44:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2755: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:44:58 np0005486808 nova_compute[259627]: 2025-10-14 09:44:58.891 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:44:58 np0005486808 nova_compute[259627]: 2025-10-14 09:44:58.891 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:44:58 np0005486808 nova_compute[259627]: 2025-10-14 09:44:58.907 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing inventories for resource provider 92105e1d-1743-46e3-a494-858b4331398a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 14 05:44:58 np0005486808 nova_compute[259627]: 2025-10-14 09:44:58.935 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Updating ProviderTree inventory for provider 92105e1d-1743-46e3-a494-858b4331398a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 14 05:44:58 np0005486808 nova_compute[259627]: 2025-10-14 09:44:58.935 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Updating inventory in ProviderTree for provider 92105e1d-1743-46e3-a494-858b4331398a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 14 05:44:58 np0005486808 nova_compute[259627]: 2025-10-14 09:44:58.956 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing aggregate associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 14 05:44:58 np0005486808 nova_compute[259627]: 2025-10-14 09:44:58.990 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing trait associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, traits: COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_ACCELERATORS,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 14 05:44:59 np0005486808 nova_compute[259627]: 2025-10-14 09:44:59.015 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:44:59 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:44:59 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2340884269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:44:59 np0005486808 nova_compute[259627]: 2025-10-14 09:44:59.479 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:44:59 np0005486808 nova_compute[259627]: 2025-10-14 09:44:59.487 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:44:59 np0005486808 nova_compute[259627]: 2025-10-14 09:44:59.508 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:44:59 np0005486808 nova_compute[259627]: 2025-10-14 09:44:59.536 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:44:59 np0005486808 nova_compute[259627]: 2025-10-14 09:44:59.536 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:45:00 np0005486808 nova_compute[259627]: 2025-10-14 09:45:00.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:45:00 np0005486808 nova_compute[259627]: 2025-10-14 09:45:00.537 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:45:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2756: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:45:01 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:45:01.813 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:45:02 np0005486808 nova_compute[259627]: 2025-10-14 09:45:02.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:45:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:45:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:45:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:45:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:45:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2757: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:45:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:45:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:45:02 np0005486808 nova_compute[259627]: 2025-10-14 09:45:02.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:45:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:45:03 np0005486808 nova_compute[259627]: 2025-10-14 09:45:03.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:45:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2758: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:45:04 np0005486808 nova_compute[259627]: 2025-10-14 09:45:04.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:45:04 np0005486808 nova_compute[259627]: 2025-10-14 09:45:04.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:45:05 np0005486808 nova_compute[259627]: 2025-10-14 09:45:05.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:45:05 np0005486808 podman[424424]: 2025-10-14 09:45:05.689255804 +0000 UTC m=+0.093119806 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:45:05 np0005486808 podman[424425]: 2025-10-14 09:45:05.694709817 +0000 UTC m=+0.098151029 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_id=iscsid, container_name=iscsid)
Oct 14 05:45:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:45:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2485716450' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:45:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:45:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2485716450' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:45:05 np0005486808 nova_compute[259627]: 2025-10-14 09:45:05.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:45:05 np0005486808 nova_compute[259627]: 2025-10-14 09:45:05.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:45:05 np0005486808 nova_compute[259627]: 2025-10-14 09:45:05.980 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 05:45:06 np0005486808 nova_compute[259627]: 2025-10-14 09:45:06.009 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 14 05:45:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2759: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:45:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:45:07.061 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:45:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:45:07.061 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:45:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:45:07.061 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:45:07 np0005486808 nova_compute[259627]: 2025-10-14 09:45:07.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:45:07 np0005486808 nova_compute[259627]: 2025-10-14 09:45:07.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:45:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:45:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2760: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:45:10 np0005486808 nova_compute[259627]: 2025-10-14 09:45:10.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:45:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2761: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:45:12 np0005486808 nova_compute[259627]: 2025-10-14 09:45:12.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:45:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2762: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:45:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:45:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2763: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:45:15 np0005486808 nova_compute[259627]: 2025-10-14 09:45:15.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:45:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2764: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:45:17 np0005486808 nova_compute[259627]: 2025-10-14 09:45:17.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:45:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:45:18 np0005486808 ovn_controller[152662]: 2025-10-14T09:45:18Z|01684|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 14 05:45:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2765: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:45:19 np0005486808 podman[424466]: 2025-10-14 09:45:19.685835427 +0000 UTC m=+0.084740721 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 14 05:45:19 np0005486808 podman[424465]: 2025-10-14 09:45:19.737474264 +0000 UTC m=+0.143299468 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 05:45:20 np0005486808 nova_compute[259627]: 2025-10-14 09:45:20.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:45:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2766: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:45:22 np0005486808 nova_compute[259627]: 2025-10-14 09:45:22.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:45:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2767: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:45:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:45:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2768: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:45:25 np0005486808 nova_compute[259627]: 2025-10-14 09:45:25.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:45:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2769: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:45:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e292 do_prune osdmap full prune enabled
Oct 14 05:45:27 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e293 e293: 3 total, 3 up, 3 in
Oct 14 05:45:27 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e293: 3 total, 3 up, 3 in
Oct 14 05:45:27 np0005486808 nova_compute[259627]: 2025-10-14 09:45:27.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:45:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:45:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2771: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:45:30 np0005486808 nova_compute[259627]: 2025-10-14 09:45:30.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:45:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2772: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 14 05:45:32 np0005486808 nova_compute[259627]: 2025-10-14 09:45:32.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:45:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:45:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:45:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:45:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:45:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2773: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 14 05:45:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:45:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:45:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:45:32
Oct 14 05:45:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:45:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:45:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['backups', 'volumes', 'default.rgw.control', 'images', '.mgr', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'vms']
Oct 14 05:45:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:45:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:45:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e293 do_prune osdmap full prune enabled
Oct 14 05:45:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 e294: 3 total, 3 up, 3 in
Oct 14 05:45:33 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e294: 3 total, 3 up, 3 in
Oct 14 05:45:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:45:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:45:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:45:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:45:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:45:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:45:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:45:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:45:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:45:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:45:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2775: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct 14 05:45:35 np0005486808 nova_compute[259627]: 2025-10-14 09:45:35.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:45:36 np0005486808 podman[424522]: 2025-10-14 09:45:36.71221481 +0000 UTC m=+0.098520919 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 14 05:45:36 np0005486808 podman[424523]: 2025-10-14 09:45:36.721625611 +0000 UTC m=+0.103091271 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, container_name=iscsid, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 14 05:45:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2776: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct 14 05:45:37 np0005486808 nova_compute[259627]: 2025-10-14 09:45:37.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:45:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:45:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2777: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 14 05:45:40 np0005486808 nova_compute[259627]: 2025-10-14 09:45:40.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:45:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2778: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:45:42 np0005486808 nova_compute[259627]: 2025-10-14 09:45:42.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:45:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2779: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:45:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:45:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:45:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:45:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:45:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:45:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 05:45:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:45:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:45:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:45:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:45:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:45:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 14 05:45:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:45:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:45:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:45:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:45:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:45:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:45:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:45:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:45:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:45:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:45:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:45:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:45:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2780: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:45:45 np0005486808 nova_compute[259627]: 2025-10-14 09:45:45.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:45:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2781: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:45:47 np0005486808 nova_compute[259627]: 2025-10-14 09:45:47.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:45:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:45:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2782: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:45:50 np0005486808 podman[424570]: 2025-10-14 09:45:50.673692393 +0000 UTC m=+0.077912904 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:45:50 np0005486808 podman[424569]: 2025-10-14 09:45:50.684835516 +0000 UTC m=+0.102028305 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 14 05:45:50 np0005486808 nova_compute[259627]: 2025-10-14 09:45:50.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:45:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2783: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:45:50 np0005486808 nova_compute[259627]: 2025-10-14 09:45:50.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:45:51 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:45:51 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:45:51 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:45:51 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:45:52 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:45:52 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:45:52 np0005486808 nova_compute[259627]: 2025-10-14 09:45:52.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:45:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 14 05:45:52 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 14 05:45:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:45:52 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:45:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:45:52 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:45:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:45:52 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:45:52 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 95f56ebb-2817-432d-b8b7-c6c47a0dbe67 does not exist
Oct 14 05:45:52 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev b2a9aa0c-eab5-4050-8f9f-c40500e77fea does not exist
Oct 14 05:45:52 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 6a86aced-22a1-4749-bbcf-5ec54ebad9df does not exist
Oct 14 05:45:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:45:52 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:45:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:45:52 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:45:52 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:45:52 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:45:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2784: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:45:53 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 14 05:45:53 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:45:53 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:45:53 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:45:53 np0005486808 podman[425007]: 2025-10-14 09:45:53.236332319 +0000 UTC m=+0.038719151 container create e29586cb545854b6fc3f3a62935d6b8962c3ac2c532e340fc1714d78f0ac859a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_golick, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 14 05:45:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:45:53 np0005486808 systemd[1]: Started libpod-conmon-e29586cb545854b6fc3f3a62935d6b8962c3ac2c532e340fc1714d78f0ac859a.scope.
Oct 14 05:45:53 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:45:53 np0005486808 podman[425007]: 2025-10-14 09:45:53.218690976 +0000 UTC m=+0.021077839 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:45:53 np0005486808 podman[425007]: 2025-10-14 09:45:53.314728363 +0000 UTC m=+0.117115265 container init e29586cb545854b6fc3f3a62935d6b8962c3ac2c532e340fc1714d78f0ac859a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_golick, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:45:53 np0005486808 podman[425007]: 2025-10-14 09:45:53.324928134 +0000 UTC m=+0.127314976 container start e29586cb545854b6fc3f3a62935d6b8962c3ac2c532e340fc1714d78f0ac859a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_golick, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Oct 14 05:45:53 np0005486808 podman[425007]: 2025-10-14 09:45:53.328348258 +0000 UTC m=+0.130735120 container attach e29586cb545854b6fc3f3a62935d6b8962c3ac2c532e340fc1714d78f0ac859a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 14 05:45:53 np0005486808 reverent_golick[425023]: 167 167
Oct 14 05:45:53 np0005486808 systemd[1]: libpod-e29586cb545854b6fc3f3a62935d6b8962c3ac2c532e340fc1714d78f0ac859a.scope: Deactivated successfully.
Oct 14 05:45:53 np0005486808 conmon[425023]: conmon e29586cb545854b6fc3f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e29586cb545854b6fc3f3a62935d6b8962c3ac2c532e340fc1714d78f0ac859a.scope/container/memory.events
Oct 14 05:45:53 np0005486808 podman[425007]: 2025-10-14 09:45:53.332432248 +0000 UTC m=+0.134819090 container died e29586cb545854b6fc3f3a62935d6b8962c3ac2c532e340fc1714d78f0ac859a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:45:53 np0005486808 systemd[1]: var-lib-containers-storage-overlay-a0c82df2e5e07cfada0ea762068357834ee4642712fb8ad551ed9c0dade825f9-merged.mount: Deactivated successfully.
Oct 14 05:45:53 np0005486808 podman[425007]: 2025-10-14 09:45:53.37328619 +0000 UTC m=+0.175673032 container remove e29586cb545854b6fc3f3a62935d6b8962c3ac2c532e340fc1714d78f0ac859a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_golick, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 14 05:45:53 np0005486808 systemd[1]: libpod-conmon-e29586cb545854b6fc3f3a62935d6b8962c3ac2c532e340fc1714d78f0ac859a.scope: Deactivated successfully.
Oct 14 05:45:53 np0005486808 podman[425047]: 2025-10-14 09:45:53.532229841 +0000 UTC m=+0.044276178 container create 78e288c97d68610c5fb791c4e195026fd58ef57bc0615832606d28ffdab3a950 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:45:53 np0005486808 systemd[1]: Started libpod-conmon-78e288c97d68610c5fb791c4e195026fd58ef57bc0615832606d28ffdab3a950.scope.
Oct 14 05:45:53 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:45:53 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbb6474b735520ff1067f3a2b1f608f933c4939a229fe2351092456268783ea5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:45:53 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbb6474b735520ff1067f3a2b1f608f933c4939a229fe2351092456268783ea5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:45:53 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbb6474b735520ff1067f3a2b1f608f933c4939a229fe2351092456268783ea5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:45:53 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbb6474b735520ff1067f3a2b1f608f933c4939a229fe2351092456268783ea5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:45:53 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbb6474b735520ff1067f3a2b1f608f933c4939a229fe2351092456268783ea5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:45:53 np0005486808 podman[425047]: 2025-10-14 09:45:53.606974425 +0000 UTC m=+0.119020762 container init 78e288c97d68610c5fb791c4e195026fd58ef57bc0615832606d28ffdab3a950 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_boyd, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Oct 14 05:45:53 np0005486808 podman[425047]: 2025-10-14 09:45:53.513678866 +0000 UTC m=+0.025725223 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:45:53 np0005486808 podman[425047]: 2025-10-14 09:45:53.612802118 +0000 UTC m=+0.124848445 container start 78e288c97d68610c5fb791c4e195026fd58ef57bc0615832606d28ffdab3a950 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_boyd, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:45:53 np0005486808 podman[425047]: 2025-10-14 09:45:53.616378686 +0000 UTC m=+0.128425033 container attach 78e288c97d68610c5fb791c4e195026fd58ef57bc0615832606d28ffdab3a950 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_boyd, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 14 05:45:54 np0005486808 angry_boyd[425064]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:45:54 np0005486808 angry_boyd[425064]: --> relative data size: 1.0
Oct 14 05:45:54 np0005486808 angry_boyd[425064]: --> All data devices are unavailable
Oct 14 05:45:54 np0005486808 systemd[1]: libpod-78e288c97d68610c5fb791c4e195026fd58ef57bc0615832606d28ffdab3a950.scope: Deactivated successfully.
Oct 14 05:45:54 np0005486808 podman[425047]: 2025-10-14 09:45:54.621997814 +0000 UTC m=+1.134044151 container died 78e288c97d68610c5fb791c4e195026fd58ef57bc0615832606d28ffdab3a950 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_boyd, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:45:54 np0005486808 systemd[1]: var-lib-containers-storage-overlay-fbb6474b735520ff1067f3a2b1f608f933c4939a229fe2351092456268783ea5-merged.mount: Deactivated successfully.
Oct 14 05:45:54 np0005486808 podman[425047]: 2025-10-14 09:45:54.673389735 +0000 UTC m=+1.185436062 container remove 78e288c97d68610c5fb791c4e195026fd58ef57bc0615832606d28ffdab3a950 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_boyd, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:45:54 np0005486808 systemd[1]: libpod-conmon-78e288c97d68610c5fb791c4e195026fd58ef57bc0615832606d28ffdab3a950.scope: Deactivated successfully.
Oct 14 05:45:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2785: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:45:55 np0005486808 podman[425247]: 2025-10-14 09:45:55.339871571 +0000 UTC m=+0.052139871 container create d65013d27dd10add63105e5f20a86046ef583f1fc0b9ca2d3a377941d6812817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:45:55 np0005486808 systemd[1]: Started libpod-conmon-d65013d27dd10add63105e5f20a86046ef583f1fc0b9ca2d3a377941d6812817.scope.
Oct 14 05:45:55 np0005486808 podman[425247]: 2025-10-14 09:45:55.312981921 +0000 UTC m=+0.025250271 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:45:55 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:45:55 np0005486808 podman[425247]: 2025-10-14 09:45:55.444086458 +0000 UTC m=+0.156354798 container init d65013d27dd10add63105e5f20a86046ef583f1fc0b9ca2d3a377941d6812817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_easley, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:45:55 np0005486808 podman[425247]: 2025-10-14 09:45:55.456756939 +0000 UTC m=+0.169025199 container start d65013d27dd10add63105e5f20a86046ef583f1fc0b9ca2d3a377941d6812817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:45:55 np0005486808 podman[425247]: 2025-10-14 09:45:55.461489735 +0000 UTC m=+0.173758025 container attach d65013d27dd10add63105e5f20a86046ef583f1fc0b9ca2d3a377941d6812817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 14 05:45:55 np0005486808 pedantic_easley[425263]: 167 167
Oct 14 05:45:55 np0005486808 systemd[1]: libpod-d65013d27dd10add63105e5f20a86046ef583f1fc0b9ca2d3a377941d6812817.scope: Deactivated successfully.
Oct 14 05:45:55 np0005486808 podman[425247]: 2025-10-14 09:45:55.46414481 +0000 UTC m=+0.176413080 container died d65013d27dd10add63105e5f20a86046ef583f1fc0b9ca2d3a377941d6812817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_easley, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:45:55 np0005486808 systemd[1]: var-lib-containers-storage-overlay-5bbd490e2fa846853bd10e60288e493776e10850b1856c7ffd135bec563f6499-merged.mount: Deactivated successfully.
Oct 14 05:45:55 np0005486808 podman[425247]: 2025-10-14 09:45:55.500687427 +0000 UTC m=+0.212955697 container remove d65013d27dd10add63105e5f20a86046ef583f1fc0b9ca2d3a377941d6812817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_easley, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:45:55 np0005486808 systemd[1]: libpod-conmon-d65013d27dd10add63105e5f20a86046ef583f1fc0b9ca2d3a377941d6812817.scope: Deactivated successfully.
Oct 14 05:45:55 np0005486808 podman[425287]: 2025-10-14 09:45:55.690976817 +0000 UTC m=+0.053125545 container create 36acbcbb9a8de19127bae316bb6684f5a38c2691539813e900b832a333d08c4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 14 05:45:55 np0005486808 systemd[1]: Started libpod-conmon-36acbcbb9a8de19127bae316bb6684f5a38c2691539813e900b832a333d08c4c.scope.
Oct 14 05:45:55 np0005486808 podman[425287]: 2025-10-14 09:45:55.668647569 +0000 UTC m=+0.030796377 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:45:55 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:45:55 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a2dac09e1bfb9d45456c99daceea9637f9f42a0ceb8b072bebdeb6af4356ff9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:45:55 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a2dac09e1bfb9d45456c99daceea9637f9f42a0ceb8b072bebdeb6af4356ff9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:45:55 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a2dac09e1bfb9d45456c99daceea9637f9f42a0ceb8b072bebdeb6af4356ff9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:45:55 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a2dac09e1bfb9d45456c99daceea9637f9f42a0ceb8b072bebdeb6af4356ff9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:45:55 np0005486808 nova_compute[259627]: 2025-10-14 09:45:55.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:45:55 np0005486808 podman[425287]: 2025-10-14 09:45:55.788647353 +0000 UTC m=+0.150796171 container init 36acbcbb9a8de19127bae316bb6684f5a38c2691539813e900b832a333d08c4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:45:55 np0005486808 podman[425287]: 2025-10-14 09:45:55.795710447 +0000 UTC m=+0.157859205 container start 36acbcbb9a8de19127bae316bb6684f5a38c2691539813e900b832a333d08c4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 14 05:45:55 np0005486808 podman[425287]: 2025-10-14 09:45:55.799425618 +0000 UTC m=+0.161574386 container attach 36acbcbb9a8de19127bae316bb6684f5a38c2691539813e900b832a333d08c4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]: {
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:    "0": [
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:        {
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:            "devices": [
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:                "/dev/loop3"
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:            ],
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:            "lv_name": "ceph_lv0",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:            "lv_size": "21470642176",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:            "name": "ceph_lv0",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:            "tags": {
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:                "ceph.cluster_name": "ceph",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:                "ceph.crush_device_class": "",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:                "ceph.encrypted": "0",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:                "ceph.osd_id": "0",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:                "ceph.type": "block",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:                "ceph.vdo": "0"
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:            },
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:            "type": "block",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:            "vg_name": "ceph_vg0"
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:        }
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:    ],
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:    "1": [
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:        {
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:            "devices": [
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:                "/dev/loop4"
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:            ],
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:            "lv_name": "ceph_lv1",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:            "lv_size": "21470642176",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:            "name": "ceph_lv1",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:            "tags": {
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:                "ceph.cluster_name": "ceph",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:                "ceph.crush_device_class": "",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:                "ceph.encrypted": "0",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:                "ceph.osd_id": "1",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:                "ceph.type": "block",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:                "ceph.vdo": "0"
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:            },
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:            "type": "block",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:            "vg_name": "ceph_vg1"
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:        }
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:    ],
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:    "2": [
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:        {
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:            "devices": [
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:                "/dev/loop5"
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:            ],
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:            "lv_name": "ceph_lv2",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:            "lv_size": "21470642176",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:            "name": "ceph_lv2",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:            "tags": {
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:                "ceph.cluster_name": "ceph",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:                "ceph.crush_device_class": "",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:                "ceph.encrypted": "0",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:                "ceph.osd_id": "2",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:                "ceph.type": "block",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:                "ceph.vdo": "0"
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:            },
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:            "type": "block",
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:            "vg_name": "ceph_vg2"
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:        }
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]:    ]
Oct 14 05:45:56 np0005486808 stupefied_brahmagupta[425304]: }
Oct 14 05:45:56 np0005486808 systemd[1]: libpod-36acbcbb9a8de19127bae316bb6684f5a38c2691539813e900b832a333d08c4c.scope: Deactivated successfully.
Oct 14 05:45:56 np0005486808 podman[425287]: 2025-10-14 09:45:56.599056021 +0000 UTC m=+0.961204749 container died 36acbcbb9a8de19127bae316bb6684f5a38c2691539813e900b832a333d08c4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 14 05:45:56 np0005486808 systemd[1]: var-lib-containers-storage-overlay-6a2dac09e1bfb9d45456c99daceea9637f9f42a0ceb8b072bebdeb6af4356ff9-merged.mount: Deactivated successfully.
Oct 14 05:45:56 np0005486808 podman[425287]: 2025-10-14 09:45:56.786208743 +0000 UTC m=+1.148357471 container remove 36acbcbb9a8de19127bae316bb6684f5a38c2691539813e900b832a333d08c4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:45:56 np0005486808 systemd[1]: libpod-conmon-36acbcbb9a8de19127bae316bb6684f5a38c2691539813e900b832a333d08c4c.scope: Deactivated successfully.
Oct 14 05:45:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2786: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:45:57 np0005486808 podman[425465]: 2025-10-14 09:45:57.601680725 +0000 UTC m=+0.061873040 container create 057342122d49c6ba440c91eb99e5924fda98d667d64b95753f940cd870762540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:45:57 np0005486808 systemd[1]: Started libpod-conmon-057342122d49c6ba440c91eb99e5924fda98d667d64b95753f940cd870762540.scope.
Oct 14 05:45:57 np0005486808 podman[425465]: 2025-10-14 09:45:57.568258814 +0000 UTC m=+0.028451129 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:45:57 np0005486808 nova_compute[259627]: 2025-10-14 09:45:57.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:45:57 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:45:57 np0005486808 podman[425465]: 2025-10-14 09:45:57.718663845 +0000 UTC m=+0.178856130 container init 057342122d49c6ba440c91eb99e5924fda98d667d64b95753f940cd870762540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Oct 14 05:45:57 np0005486808 podman[425465]: 2025-10-14 09:45:57.727254206 +0000 UTC m=+0.187446511 container start 057342122d49c6ba440c91eb99e5924fda98d667d64b95753f940cd870762540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_beaver, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:45:57 np0005486808 silly_beaver[425481]: 167 167
Oct 14 05:45:57 np0005486808 systemd[1]: libpod-057342122d49c6ba440c91eb99e5924fda98d667d64b95753f940cd870762540.scope: Deactivated successfully.
Oct 14 05:45:57 np0005486808 podman[425465]: 2025-10-14 09:45:57.740723687 +0000 UTC m=+0.200916082 container attach 057342122d49c6ba440c91eb99e5924fda98d667d64b95753f940cd870762540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_beaver, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 14 05:45:57 np0005486808 podman[425465]: 2025-10-14 09:45:57.741815054 +0000 UTC m=+0.202007339 container died 057342122d49c6ba440c91eb99e5924fda98d667d64b95753f940cd870762540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Oct 14 05:45:57 np0005486808 systemd[1]: var-lib-containers-storage-overlay-25f0e43edca948d7d00c3242760025fcc8b36f1f5701dc77d5b77472836b1a11-merged.mount: Deactivated successfully.
Oct 14 05:45:57 np0005486808 podman[425465]: 2025-10-14 09:45:57.828096861 +0000 UTC m=+0.288289166 container remove 057342122d49c6ba440c91eb99e5924fda98d667d64b95753f940cd870762540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 14 05:45:57 np0005486808 systemd[1]: libpod-conmon-057342122d49c6ba440c91eb99e5924fda98d667d64b95753f940cd870762540.scope: Deactivated successfully.
Oct 14 05:45:58 np0005486808 podman[425507]: 2025-10-14 09:45:58.063223911 +0000 UTC m=+0.079346308 container create cbccd99e8488e18ce803df2b0f1fb1b74152cb5b8e6d3a16a6387fac35da030c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Oct 14 05:45:58 np0005486808 systemd[1]: Started libpod-conmon-cbccd99e8488e18ce803df2b0f1fb1b74152cb5b8e6d3a16a6387fac35da030c.scope.
Oct 14 05:45:58 np0005486808 podman[425507]: 2025-10-14 09:45:58.028300514 +0000 UTC m=+0.044422951 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:45:58 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:45:58 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86d32a37aa6b362b6cde2ce074a43a6957253cb52900d245e079be2e263214cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:45:58 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86d32a37aa6b362b6cde2ce074a43a6957253cb52900d245e079be2e263214cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:45:58 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86d32a37aa6b362b6cde2ce074a43a6957253cb52900d245e079be2e263214cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:45:58 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86d32a37aa6b362b6cde2ce074a43a6957253cb52900d245e079be2e263214cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:45:58 np0005486808 podman[425507]: 2025-10-14 09:45:58.177349592 +0000 UTC m=+0.193471979 container init cbccd99e8488e18ce803df2b0f1fb1b74152cb5b8e6d3a16a6387fac35da030c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_carver, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:45:58 np0005486808 podman[425507]: 2025-10-14 09:45:58.188213368 +0000 UTC m=+0.204335765 container start cbccd99e8488e18ce803df2b0f1fb1b74152cb5b8e6d3a16a6387fac35da030c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 14 05:45:58 np0005486808 podman[425507]: 2025-10-14 09:45:58.20459479 +0000 UTC m=+0.220717167 container attach cbccd99e8488e18ce803df2b0f1fb1b74152cb5b8e6d3a16a6387fac35da030c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_carver, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:45:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:45:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2787: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:45:59 np0005486808 xenodochial_carver[425524]: {
Oct 14 05:45:59 np0005486808 xenodochial_carver[425524]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:45:59 np0005486808 xenodochial_carver[425524]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:45:59 np0005486808 xenodochial_carver[425524]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:45:59 np0005486808 xenodochial_carver[425524]:        "osd_id": 2,
Oct 14 05:45:59 np0005486808 xenodochial_carver[425524]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:45:59 np0005486808 xenodochial_carver[425524]:        "type": "bluestore"
Oct 14 05:45:59 np0005486808 xenodochial_carver[425524]:    },
Oct 14 05:45:59 np0005486808 xenodochial_carver[425524]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:45:59 np0005486808 xenodochial_carver[425524]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:45:59 np0005486808 xenodochial_carver[425524]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:45:59 np0005486808 xenodochial_carver[425524]:        "osd_id": 1,
Oct 14 05:45:59 np0005486808 xenodochial_carver[425524]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:45:59 np0005486808 xenodochial_carver[425524]:        "type": "bluestore"
Oct 14 05:45:59 np0005486808 xenodochial_carver[425524]:    },
Oct 14 05:45:59 np0005486808 xenodochial_carver[425524]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:45:59 np0005486808 xenodochial_carver[425524]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:45:59 np0005486808 xenodochial_carver[425524]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:45:59 np0005486808 xenodochial_carver[425524]:        "osd_id": 0,
Oct 14 05:45:59 np0005486808 xenodochial_carver[425524]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:45:59 np0005486808 xenodochial_carver[425524]:        "type": "bluestore"
Oct 14 05:45:59 np0005486808 xenodochial_carver[425524]:    }
Oct 14 05:45:59 np0005486808 xenodochial_carver[425524]: }
Oct 14 05:45:59 np0005486808 systemd[1]: libpod-cbccd99e8488e18ce803df2b0f1fb1b74152cb5b8e6d3a16a6387fac35da030c.scope: Deactivated successfully.
Oct 14 05:45:59 np0005486808 systemd[1]: libpod-cbccd99e8488e18ce803df2b0f1fb1b74152cb5b8e6d3a16a6387fac35da030c.scope: Consumed 1.169s CPU time.
Oct 14 05:45:59 np0005486808 podman[425557]: 2025-10-14 09:45:59.412604115 +0000 UTC m=+0.040844113 container died cbccd99e8488e18ce803df2b0f1fb1b74152cb5b8e6d3a16a6387fac35da030c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 14 05:45:59 np0005486808 systemd[1]: var-lib-containers-storage-overlay-86d32a37aa6b362b6cde2ce074a43a6957253cb52900d245e079be2e263214cb-merged.mount: Deactivated successfully.
Oct 14 05:45:59 np0005486808 podman[425557]: 2025-10-14 09:45:59.489257666 +0000 UTC m=+0.117497624 container remove cbccd99e8488e18ce803df2b0f1fb1b74152cb5b8e6d3a16a6387fac35da030c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_carver, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 14 05:45:59 np0005486808 systemd[1]: libpod-conmon-cbccd99e8488e18ce803df2b0f1fb1b74152cb5b8e6d3a16a6387fac35da030c.scope: Deactivated successfully.
Oct 14 05:45:59 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:45:59 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:45:59 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:45:59 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:45:59 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev fa29527b-fe4d-4a87-9cf7-493f7ab36c29 does not exist
Oct 14 05:45:59 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 5ad0b29e-3854-4aeb-a0a2-5734ac61d947 does not exist
Oct 14 05:45:59 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:45:59 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:45:59 np0005486808 nova_compute[259627]: 2025-10-14 09:45:59.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:45:59 np0005486808 nova_compute[259627]: 2025-10-14 09:45:59.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:46:00 np0005486808 nova_compute[259627]: 2025-10-14 09:46:00.018 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:46:00 np0005486808 nova_compute[259627]: 2025-10-14 09:46:00.019 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:46:00 np0005486808 nova_compute[259627]: 2025-10-14 09:46:00.019 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:46:00 np0005486808 nova_compute[259627]: 2025-10-14 09:46:00.019 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:46:00 np0005486808 nova_compute[259627]: 2025-10-14 09:46:00.020 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:46:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:46:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3600493190' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:46:00 np0005486808 nova_compute[259627]: 2025-10-14 09:46:00.526 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:46:00 np0005486808 nova_compute[259627]: 2025-10-14 09:46:00.771 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:46:00 np0005486808 nova_compute[259627]: 2025-10-14 09:46:00.773 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3545MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:46:00 np0005486808 nova_compute[259627]: 2025-10-14 09:46:00.774 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:46:00 np0005486808 nova_compute[259627]: 2025-10-14 09:46:00.774 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:46:00 np0005486808 nova_compute[259627]: 2025-10-14 09:46:00.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:46:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2788: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:46:00 np0005486808 nova_compute[259627]: 2025-10-14 09:46:00.846 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:46:00 np0005486808 nova_compute[259627]: 2025-10-14 09:46:00.846 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:46:00 np0005486808 nova_compute[259627]: 2025-10-14 09:46:00.863 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:46:01 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:46:01 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2981660896' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:46:01 np0005486808 nova_compute[259627]: 2025-10-14 09:46:01.317 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:46:01 np0005486808 nova_compute[259627]: 2025-10-14 09:46:01.322 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:46:01 np0005486808 nova_compute[259627]: 2025-10-14 09:46:01.341 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:46:01 np0005486808 nova_compute[259627]: 2025-10-14 09:46:01.342 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:46:01 np0005486808 nova_compute[259627]: 2025-10-14 09:46:01.342 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.568s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:46:02 np0005486808 nova_compute[259627]: 2025-10-14 09:46:02.342 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:46:02 np0005486808 nova_compute[259627]: 2025-10-14 09:46:02.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:46:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:46:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:46:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:46:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:46:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:46:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:46:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2789: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:46:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:46:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2790: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:46:04 np0005486808 nova_compute[259627]: 2025-10-14 09:46:04.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:46:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:46:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/930918275' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:46:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:46:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/930918275' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:46:05 np0005486808 nova_compute[259627]: 2025-10-14 09:46:05.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:46:05 np0005486808 nova_compute[259627]: 2025-10-14 09:46:05.972 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:46:05 np0005486808 nova_compute[259627]: 2025-10-14 09:46:05.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:46:05 np0005486808 nova_compute[259627]: 2025-10-14 09:46:05.977 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:46:05 np0005486808 nova_compute[259627]: 2025-10-14 09:46:05.977 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 05:46:05 np0005486808 nova_compute[259627]: 2025-10-14 09:46:05.991 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 14 05:46:05 np0005486808 nova_compute[259627]: 2025-10-14 09:46:05.992 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:46:05 np0005486808 nova_compute[259627]: 2025-10-14 09:46:05.992 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:46:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2791: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:46:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:46:07.062 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:46:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:46:07.063 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:46:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:46:07.063 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:46:07 np0005486808 podman[425667]: 2025-10-14 09:46:07.701362139 +0000 UTC m=+0.097100604 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_managed=true, container_name=iscsid)
Oct 14 05:46:07 np0005486808 podman[425666]: 2025-10-14 09:46:07.70182184 +0000 UTC m=+0.098269272 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 14 05:46:07 np0005486808 nova_compute[259627]: 2025-10-14 09:46:07.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:46:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:46:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2792: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:46:09 np0005486808 nova_compute[259627]: 2025-10-14 09:46:09.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:46:10 np0005486808 nova_compute[259627]: 2025-10-14 09:46:10.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:46:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2793: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:46:12 np0005486808 nova_compute[259627]: 2025-10-14 09:46:12.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:46:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2794: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:46:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:46:13 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #135. Immutable memtables: 0.
Oct 14 05:46:13 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:13.995420) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 05:46:13 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 135
Oct 14 05:46:13 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435173995463, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 1848, "num_deletes": 253, "total_data_size": 3017036, "memory_usage": 3071376, "flush_reason": "Manual Compaction"}
Oct 14 05:46:13 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #136: started
Oct 14 05:46:14 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435174013312, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 136, "file_size": 2954292, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56963, "largest_seqno": 58810, "table_properties": {"data_size": 2945786, "index_size": 5255, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17583, "raw_average_key_size": 20, "raw_value_size": 2928718, "raw_average_value_size": 3377, "num_data_blocks": 233, "num_entries": 867, "num_filter_entries": 867, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760434979, "oldest_key_time": 1760434979, "file_creation_time": 1760435173, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 136, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:46:14 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 17951 microseconds, and 11924 cpu microseconds.
Oct 14 05:46:14 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:46:14 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:14.013367) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #136: 2954292 bytes OK
Oct 14 05:46:14 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:14.013394) [db/memtable_list.cc:519] [default] Level-0 commit table #136 started
Oct 14 05:46:14 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:14.014803) [db/memtable_list.cc:722] [default] Level-0 commit table #136: memtable #1 done
Oct 14 05:46:14 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:14.014823) EVENT_LOG_v1 {"time_micros": 1760435174014816, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 05:46:14 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:14.014844) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 05:46:14 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 3009151, prev total WAL file size 3009151, number of live WAL files 2.
Oct 14 05:46:14 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000132.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:46:14 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:14.016521) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Oct 14 05:46:14 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 05:46:14 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [136(2885KB)], [134(9993KB)]
Oct 14 05:46:14 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435174016591, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [136], "files_L6": [134], "score": -1, "input_data_size": 13187160, "oldest_snapshot_seqno": -1}
Oct 14 05:46:14 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #137: 7911 keys, 11433302 bytes, temperature: kUnknown
Oct 14 05:46:14 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435174089938, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 137, "file_size": 11433302, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11379589, "index_size": 32800, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19845, "raw_key_size": 206287, "raw_average_key_size": 26, "raw_value_size": 11237525, "raw_average_value_size": 1420, "num_data_blocks": 1284, "num_entries": 7911, "num_filter_entries": 7911, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760435174, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:46:14 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:46:14 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:14.090260) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 11433302 bytes
Oct 14 05:46:14 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:14.091303) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 179.5 rd, 155.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 9.8 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(8.3) write-amplify(3.9) OK, records in: 8433, records dropped: 522 output_compression: NoCompression
Oct 14 05:46:14 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:14.091319) EVENT_LOG_v1 {"time_micros": 1760435174091311, "job": 82, "event": "compaction_finished", "compaction_time_micros": 73483, "compaction_time_cpu_micros": 53636, "output_level": 6, "num_output_files": 1, "total_output_size": 11433302, "num_input_records": 8433, "num_output_records": 7911, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 05:46:14 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000136.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:46:14 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435174091955, "job": 82, "event": "table_file_deletion", "file_number": 136}
Oct 14 05:46:14 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:46:14 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435174093685, "job": 82, "event": "table_file_deletion", "file_number": 134}
Oct 14 05:46:14 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:14.016294) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:46:14 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:14.093732) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:46:14 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:14.093735) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:46:14 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:14.093737) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:46:14 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:14.093738) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:46:14 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:14.093740) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:46:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2795: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:46:15 np0005486808 nova_compute[259627]: 2025-10-14 09:46:15.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:46:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2796: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:46:17 np0005486808 nova_compute[259627]: 2025-10-14 09:46:17.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:46:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:46:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2797: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:46:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2798: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:46:20 np0005486808 nova_compute[259627]: 2025-10-14 09:46:20.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:46:21 np0005486808 podman[425709]: 2025-10-14 09:46:21.67364901 +0000 UTC m=+0.078473807 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 14 05:46:21 np0005486808 podman[425708]: 2025-10-14 09:46:21.769418919 +0000 UTC m=+0.175921587 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:46:22 np0005486808 nova_compute[259627]: 2025-10-14 09:46:22.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:46:22 np0005486808 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 14 05:46:22 np0005486808 systemd[1]: virtsecretd.service: Consumed 1.234s CPU time.
Oct 14 05:46:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2799: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:46:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:46:23 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #138. Immutable memtables: 0.
Oct 14 05:46:23 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:23.260811) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 05:46:23 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 138
Oct 14 05:46:23 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435183260892, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 318, "num_deletes": 250, "total_data_size": 151625, "memory_usage": 158480, "flush_reason": "Manual Compaction"}
Oct 14 05:46:23 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #139: started
Oct 14 05:46:23 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435183264272, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 139, "file_size": 150156, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 58811, "largest_seqno": 59128, "table_properties": {"data_size": 148094, "index_size": 289, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5649, "raw_average_key_size": 20, "raw_value_size": 144041, "raw_average_value_size": 514, "num_data_blocks": 13, "num_entries": 280, "num_filter_entries": 280, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760435174, "oldest_key_time": 1760435174, "file_creation_time": 1760435183, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 139, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:46:23 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 3502 microseconds, and 1669 cpu microseconds.
Oct 14 05:46:23 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:46:23 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:23.264324) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #139: 150156 bytes OK
Oct 14 05:46:23 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:23.264346) [db/memtable_list.cc:519] [default] Level-0 commit table #139 started
Oct 14 05:46:23 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:23.265870) [db/memtable_list.cc:722] [default] Level-0 commit table #139: memtable #1 done
Oct 14 05:46:23 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:23.265891) EVENT_LOG_v1 {"time_micros": 1760435183265884, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 05:46:23 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:23.265914) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 05:46:23 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 149389, prev total WAL file size 149389, number of live WAL files 2.
Oct 14 05:46:23 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000135.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:46:23 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:23.266424) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323530' seq:72057594037927935, type:22 .. '6D6772737461740032353031' seq:0, type:0; will stop at (end)
Oct 14 05:46:23 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 05:46:23 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [139(146KB)], [137(10MB)]
Oct 14 05:46:23 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435183266483, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [139], "files_L6": [137], "score": -1, "input_data_size": 11583458, "oldest_snapshot_seqno": -1}
Oct 14 05:46:23 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #140: 7684 keys, 8292053 bytes, temperature: kUnknown
Oct 14 05:46:23 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435183325796, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 140, "file_size": 8292053, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8244601, "index_size": 27115, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19269, "raw_key_size": 201751, "raw_average_key_size": 26, "raw_value_size": 8111246, "raw_average_value_size": 1055, "num_data_blocks": 1046, "num_entries": 7684, "num_filter_entries": 7684, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760435183, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:46:23 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:46:23 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:23.326248) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 8292053 bytes
Oct 14 05:46:23 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:23.327890) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 194.9 rd, 139.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 10.9 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(132.4) write-amplify(55.2) OK, records in: 8191, records dropped: 507 output_compression: NoCompression
Oct 14 05:46:23 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:23.327929) EVENT_LOG_v1 {"time_micros": 1760435183327911, "job": 84, "event": "compaction_finished", "compaction_time_micros": 59426, "compaction_time_cpu_micros": 42843, "output_level": 6, "num_output_files": 1, "total_output_size": 8292053, "num_input_records": 8191, "num_output_records": 7684, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 05:46:23 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000139.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:46:23 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435183328199, "job": 84, "event": "table_file_deletion", "file_number": 139}
Oct 14 05:46:23 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:46:23 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435183332569, "job": 84, "event": "table_file_deletion", "file_number": 137}
Oct 14 05:46:23 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:23.266359) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:46:23 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:23.332721) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:46:23 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:23.332731) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:46:23 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:23.332735) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:46:23 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:23.332739) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:46:23 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:46:23.332743) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:46:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2800: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:46:25 np0005486808 nova_compute[259627]: 2025-10-14 09:46:25.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:46:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2801: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:46:27 np0005486808 nova_compute[259627]: 2025-10-14 09:46:27.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:46:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:46:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2802: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:46:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2803: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:46:30 np0005486808 nova_compute[259627]: 2025-10-14 09:46:30.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:46:32 np0005486808 nova_compute[259627]: 2025-10-14 09:46:32.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:46:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:46:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:46:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:46:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:46:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:46:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:46:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2804: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:46:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:46:32
Oct 14 05:46:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:46:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:46:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', 'images', 'volumes', 'vms', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'default.rgw.control', '.mgr']
Oct 14 05:46:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:46:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:46:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:46:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:46:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:46:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:46:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:46:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:46:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:46:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:46:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:46:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:46:33 np0005486808 nova_compute[259627]: 2025-10-14 09:46:33.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:46:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2805: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:46:36 np0005486808 nova_compute[259627]: 2025-10-14 09:46:36.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:46:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2806: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:46:37 np0005486808 nova_compute[259627]: 2025-10-14 09:46:37.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:46:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:46:38 np0005486808 podman[425753]: 2025-10-14 09:46:38.682478166 +0000 UTC m=+0.082704590 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 05:46:38 np0005486808 podman[425752]: 2025-10-14 09:46:38.690174175 +0000 UTC m=+0.094315145 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 14 05:46:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2807: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:46:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2808: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:46:41 np0005486808 nova_compute[259627]: 2025-10-14 09:46:41.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:46:42 np0005486808 nova_compute[259627]: 2025-10-14 09:46:42.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:46:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2809: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:46:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:46:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:46:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:46:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:46:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:46:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 05:46:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:46:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:46:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:46:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:46:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:46:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 14 05:46:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:46:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:46:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:46:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:46:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:46:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:46:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:46:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:46:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:46:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:46:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:46:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:46:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2810: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:46:46 np0005486808 nova_compute[259627]: 2025-10-14 09:46:46.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:46:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2811: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:46:47 np0005486808 nova_compute[259627]: 2025-10-14 09:46:47.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:46:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:46:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2812: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:46:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2813: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:46:50 np0005486808 nova_compute[259627]: 2025-10-14 09:46:50.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:46:51 np0005486808 nova_compute[259627]: 2025-10-14 09:46:51.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:46:52 np0005486808 podman[425791]: 2025-10-14 09:46:52.640913007 +0000 UTC m=+0.054944880 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:46:52 np0005486808 podman[425790]: 2025-10-14 09:46:52.680389545 +0000 UTC m=+0.094871899 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:46:52 np0005486808 nova_compute[259627]: 2025-10-14 09:46:52.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:46:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2814: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:46:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:46:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2815: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:46:56 np0005486808 nova_compute[259627]: 2025-10-14 09:46:56.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:46:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2816: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:46:57 np0005486808 nova_compute[259627]: 2025-10-14 09:46:57.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:46:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:46:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2817: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:46:59 np0005486808 nova_compute[259627]: 2025-10-14 09:46:59.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:46:59 np0005486808 nova_compute[259627]: 2025-10-14 09:46:59.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:47:00 np0005486808 nova_compute[259627]: 2025-10-14 09:47:00.018 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:47:00 np0005486808 nova_compute[259627]: 2025-10-14 09:47:00.018 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:47:00 np0005486808 nova_compute[259627]: 2025-10-14 09:47:00.018 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:47:00 np0005486808 nova_compute[259627]: 2025-10-14 09:47:00.018 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:47:00 np0005486808 nova_compute[259627]: 2025-10-14 09:47:00.019 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:47:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:47:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1169817489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:47:00 np0005486808 nova_compute[259627]: 2025-10-14 09:47:00.495 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:47:00 np0005486808 nova_compute[259627]: 2025-10-14 09:47:00.687 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:47:00 np0005486808 nova_compute[259627]: 2025-10-14 09:47:00.688 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3587MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:47:00 np0005486808 nova_compute[259627]: 2025-10-14 09:47:00.688 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:47:00 np0005486808 nova_compute[259627]: 2025-10-14 09:47:00.688 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:47:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:47:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:47:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:47:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:47:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:47:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:47:00 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev f8d9f02f-f0de-4abe-9a58-8961c14491f0 does not exist
Oct 14 05:47:00 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev ef88425e-0c7c-4964-82f1-41d4be2a440a does not exist
Oct 14 05:47:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:47:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:47:00 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 40c89b73-e6f3-465d-9d9b-c8801b143c1b does not exist
Oct 14 05:47:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:47:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:47:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:47:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:47:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2818: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:47:00 np0005486808 nova_compute[259627]: 2025-10-14 09:47:00.914 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:47:00 np0005486808 nova_compute[259627]: 2025-10-14 09:47:00.915 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:47:01 np0005486808 nova_compute[259627]: 2025-10-14 09:47:01.046 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:47:01 np0005486808 nova_compute[259627]: 2025-10-14 09:47:01.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:47:01 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:47:01 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:47:01 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:47:01 np0005486808 podman[426146]: 2025-10-14 09:47:01.421053089 +0000 UTC m=+0.057016020 container create 753f6e320fa80bed18ee1e9676eec1df0c4ec4d5912baa053771b265ca7d3477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 14 05:47:01 np0005486808 systemd[1]: Started libpod-conmon-753f6e320fa80bed18ee1e9676eec1df0c4ec4d5912baa053771b265ca7d3477.scope.
Oct 14 05:47:01 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:47:01 np0005486808 podman[426146]: 2025-10-14 09:47:01.393200466 +0000 UTC m=+0.029163437 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:47:01 np0005486808 podman[426146]: 2025-10-14 09:47:01.491560079 +0000 UTC m=+0.127523030 container init 753f6e320fa80bed18ee1e9676eec1df0c4ec4d5912baa053771b265ca7d3477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mccarthy, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:47:01 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:47:01 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3150856695' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:47:01 np0005486808 podman[426146]: 2025-10-14 09:47:01.50340943 +0000 UTC m=+0.139372391 container start 753f6e320fa80bed18ee1e9676eec1df0c4ec4d5912baa053771b265ca7d3477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:47:01 np0005486808 podman[426146]: 2025-10-14 09:47:01.507600813 +0000 UTC m=+0.143563764 container attach 753f6e320fa80bed18ee1e9676eec1df0c4ec4d5912baa053771b265ca7d3477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:47:01 np0005486808 angry_mccarthy[426162]: 167 167
Oct 14 05:47:01 np0005486808 systemd[1]: libpod-753f6e320fa80bed18ee1e9676eec1df0c4ec4d5912baa053771b265ca7d3477.scope: Deactivated successfully.
Oct 14 05:47:01 np0005486808 conmon[426162]: conmon 753f6e320fa80bed18ee <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-753f6e320fa80bed18ee1e9676eec1df0c4ec4d5912baa053771b265ca7d3477.scope/container/memory.events
Oct 14 05:47:01 np0005486808 podman[426146]: 2025-10-14 09:47:01.513252692 +0000 UTC m=+0.149215663 container died 753f6e320fa80bed18ee1e9676eec1df0c4ec4d5912baa053771b265ca7d3477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:47:01 np0005486808 nova_compute[259627]: 2025-10-14 09:47:01.515 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:47:01 np0005486808 nova_compute[259627]: 2025-10-14 09:47:01.522 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:47:01 np0005486808 systemd[1]: var-lib-containers-storage-overlay-4cc717758962283ed6c4abb2725ef80c9fdfa5ee06288cb7f450655184acb642-merged.mount: Deactivated successfully.
Oct 14 05:47:01 np0005486808 nova_compute[259627]: 2025-10-14 09:47:01.547 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:47:01 np0005486808 nova_compute[259627]: 2025-10-14 09:47:01.549 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:47:01 np0005486808 nova_compute[259627]: 2025-10-14 09:47:01.549 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:47:01 np0005486808 podman[426146]: 2025-10-14 09:47:01.566005046 +0000 UTC m=+0.201968017 container remove 753f6e320fa80bed18ee1e9676eec1df0c4ec4d5912baa053771b265ca7d3477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 14 05:47:01 np0005486808 systemd[1]: libpod-conmon-753f6e320fa80bed18ee1e9676eec1df0c4ec4d5912baa053771b265ca7d3477.scope: Deactivated successfully.
Oct 14 05:47:01 np0005486808 podman[426188]: 2025-10-14 09:47:01.808454406 +0000 UTC m=+0.073964196 container create 89837a885b7636c31e8af858fb9bcb9272f9e490a27d5f29b3320505715cb5f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:47:01 np0005486808 systemd[1]: Started libpod-conmon-89837a885b7636c31e8af858fb9bcb9272f9e490a27d5f29b3320505715cb5f7.scope.
Oct 14 05:47:01 np0005486808 podman[426188]: 2025-10-14 09:47:01.767291086 +0000 UTC m=+0.032800916 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:47:01 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:47:01 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca3a22a039a7da98aff25e82a195bb68c3e6ec442d77b054fc0c99b51e7b3d9e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:47:01 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca3a22a039a7da98aff25e82a195bb68c3e6ec442d77b054fc0c99b51e7b3d9e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:47:01 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca3a22a039a7da98aff25e82a195bb68c3e6ec442d77b054fc0c99b51e7b3d9e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:47:01 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca3a22a039a7da98aff25e82a195bb68c3e6ec442d77b054fc0c99b51e7b3d9e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:47:01 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca3a22a039a7da98aff25e82a195bb68c3e6ec442d77b054fc0c99b51e7b3d9e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:47:01 np0005486808 podman[426188]: 2025-10-14 09:47:01.909843444 +0000 UTC m=+0.175353214 container init 89837a885b7636c31e8af858fb9bcb9272f9e490a27d5f29b3320505715cb5f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_northcutt, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:47:01 np0005486808 podman[426188]: 2025-10-14 09:47:01.924962545 +0000 UTC m=+0.190472345 container start 89837a885b7636c31e8af858fb9bcb9272f9e490a27d5f29b3320505715cb5f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_northcutt, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Oct 14 05:47:01 np0005486808 podman[426188]: 2025-10-14 09:47:01.929258831 +0000 UTC m=+0.194768621 container attach 89837a885b7636c31e8af858fb9bcb9272f9e490a27d5f29b3320505715cb5f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:47:02 np0005486808 nova_compute[259627]: 2025-10-14 09:47:02.549 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:47:02 np0005486808 nova_compute[259627]: 2025-10-14 09:47:02.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:47:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:47:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:47:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:47:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:47:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:47:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:47:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2819: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:47:03 np0005486808 angry_northcutt[426205]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:47:03 np0005486808 angry_northcutt[426205]: --> relative data size: 1.0
Oct 14 05:47:03 np0005486808 angry_northcutt[426205]: --> All data devices are unavailable
Oct 14 05:47:03 np0005486808 systemd[1]: libpod-89837a885b7636c31e8af858fb9bcb9272f9e490a27d5f29b3320505715cb5f7.scope: Deactivated successfully.
Oct 14 05:47:03 np0005486808 systemd[1]: libpod-89837a885b7636c31e8af858fb9bcb9272f9e490a27d5f29b3320505715cb5f7.scope: Consumed 1.085s CPU time.
Oct 14 05:47:03 np0005486808 podman[426188]: 2025-10-14 09:47:03.063310181 +0000 UTC m=+1.328819971 container died 89837a885b7636c31e8af858fb9bcb9272f9e490a27d5f29b3320505715cb5f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:47:03 np0005486808 systemd[1]: var-lib-containers-storage-overlay-ca3a22a039a7da98aff25e82a195bb68c3e6ec442d77b054fc0c99b51e7b3d9e-merged.mount: Deactivated successfully.
Oct 14 05:47:03 np0005486808 podman[426188]: 2025-10-14 09:47:03.129772932 +0000 UTC m=+1.395282682 container remove 89837a885b7636c31e8af858fb9bcb9272f9e490a27d5f29b3320505715cb5f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 14 05:47:03 np0005486808 systemd[1]: libpod-conmon-89837a885b7636c31e8af858fb9bcb9272f9e490a27d5f29b3320505715cb5f7.scope: Deactivated successfully.
Oct 14 05:47:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:47:03 np0005486808 podman[426388]: 2025-10-14 09:47:03.905633991 +0000 UTC m=+0.066952834 container create 0920dbeec4268b96293159c585008ab00792d881f17a7b50746b551e52f9ab26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:47:03 np0005486808 systemd[1]: Started libpod-conmon-0920dbeec4268b96293159c585008ab00792d881f17a7b50746b551e52f9ab26.scope.
Oct 14 05:47:03 np0005486808 podman[426388]: 2025-10-14 09:47:03.880081234 +0000 UTC m=+0.041400117 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:47:03 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:47:04 np0005486808 podman[426388]: 2025-10-14 09:47:04.001475043 +0000 UTC m=+0.162793886 container init 0920dbeec4268b96293159c585008ab00792d881f17a7b50746b551e52f9ab26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_leavitt, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 14 05:47:04 np0005486808 podman[426388]: 2025-10-14 09:47:04.013221382 +0000 UTC m=+0.174540215 container start 0920dbeec4268b96293159c585008ab00792d881f17a7b50746b551e52f9ab26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 14 05:47:04 np0005486808 podman[426388]: 2025-10-14 09:47:04.017206309 +0000 UTC m=+0.178525212 container attach 0920dbeec4268b96293159c585008ab00792d881f17a7b50746b551e52f9ab26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_leavitt, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:47:04 np0005486808 affectionate_leavitt[426404]: 167 167
Oct 14 05:47:04 np0005486808 systemd[1]: libpod-0920dbeec4268b96293159c585008ab00792d881f17a7b50746b551e52f9ab26.scope: Deactivated successfully.
Oct 14 05:47:04 np0005486808 podman[426388]: 2025-10-14 09:47:04.019614548 +0000 UTC m=+0.180933391 container died 0920dbeec4268b96293159c585008ab00792d881f17a7b50746b551e52f9ab26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_leavitt, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:47:04 np0005486808 systemd[1]: var-lib-containers-storage-overlay-c3f89df327f558ffd345c6bdf4ece695df76c6bc7ceced2756389543cba510d2-merged.mount: Deactivated successfully.
Oct 14 05:47:04 np0005486808 podman[426388]: 2025-10-14 09:47:04.070778744 +0000 UTC m=+0.232097587 container remove 0920dbeec4268b96293159c585008ab00792d881f17a7b50746b551e52f9ab26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_leavitt, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 14 05:47:04 np0005486808 systemd[1]: libpod-conmon-0920dbeec4268b96293159c585008ab00792d881f17a7b50746b551e52f9ab26.scope: Deactivated successfully.
Oct 14 05:47:04 np0005486808 podman[426428]: 2025-10-14 09:47:04.306667413 +0000 UTC m=+0.072415548 container create 00f2d28e582c5aca90ed63d138367e13aba0c76422ef93915cc9b453e686c449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_chatterjee, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:47:04 np0005486808 systemd[1]: Started libpod-conmon-00f2d28e582c5aca90ed63d138367e13aba0c76422ef93915cc9b453e686c449.scope.
Oct 14 05:47:04 np0005486808 podman[426428]: 2025-10-14 09:47:04.276735008 +0000 UTC m=+0.042483193 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:47:04 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:47:04 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31fae8b01e3b06bc3527ac63c6cccaac573be2b7a5d6c4f23b35cc92371dc279/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:47:04 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31fae8b01e3b06bc3527ac63c6cccaac573be2b7a5d6c4f23b35cc92371dc279/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:47:04 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31fae8b01e3b06bc3527ac63c6cccaac573be2b7a5d6c4f23b35cc92371dc279/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:47:04 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31fae8b01e3b06bc3527ac63c6cccaac573be2b7a5d6c4f23b35cc92371dc279/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:47:04 np0005486808 podman[426428]: 2025-10-14 09:47:04.415189186 +0000 UTC m=+0.180937321 container init 00f2d28e582c5aca90ed63d138367e13aba0c76422ef93915cc9b453e686c449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_chatterjee, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 14 05:47:04 np0005486808 podman[426428]: 2025-10-14 09:47:04.460407555 +0000 UTC m=+0.226155690 container start 00f2d28e582c5aca90ed63d138367e13aba0c76422ef93915cc9b453e686c449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 14 05:47:04 np0005486808 podman[426428]: 2025-10-14 09:47:04.466319921 +0000 UTC m=+0.232068056 container attach 00f2d28e582c5aca90ed63d138367e13aba0c76422ef93915cc9b453e686c449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 14 05:47:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2820: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]: {
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:    "0": [
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:        {
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:            "devices": [
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:                "/dev/loop3"
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:            ],
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:            "lv_name": "ceph_lv0",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:            "lv_size": "21470642176",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:            "name": "ceph_lv0",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:            "tags": {
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:                "ceph.cluster_name": "ceph",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:                "ceph.crush_device_class": "",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:                "ceph.encrypted": "0",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:                "ceph.osd_id": "0",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:                "ceph.type": "block",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:                "ceph.vdo": "0"
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:            },
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:            "type": "block",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:            "vg_name": "ceph_vg0"
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:        }
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:    ],
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:    "1": [
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:        {
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:            "devices": [
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:                "/dev/loop4"
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:            ],
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:            "lv_name": "ceph_lv1",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:            "lv_size": "21470642176",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:            "name": "ceph_lv1",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:            "tags": {
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:                "ceph.cluster_name": "ceph",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:                "ceph.crush_device_class": "",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:                "ceph.encrypted": "0",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:                "ceph.osd_id": "1",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:                "ceph.type": "block",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:                "ceph.vdo": "0"
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:            },
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:            "type": "block",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:            "vg_name": "ceph_vg1"
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:        }
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:    ],
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:    "2": [
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:        {
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:            "devices": [
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:                "/dev/loop5"
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:            ],
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:            "lv_name": "ceph_lv2",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:            "lv_size": "21470642176",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:            "name": "ceph_lv2",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:            "tags": {
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:                "ceph.cluster_name": "ceph",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:                "ceph.crush_device_class": "",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:                "ceph.encrypted": "0",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:                "ceph.osd_id": "2",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:                "ceph.type": "block",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:                "ceph.vdo": "0"
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:            },
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:            "type": "block",
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:            "vg_name": "ceph_vg2"
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:        }
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]:    ]
Oct 14 05:47:05 np0005486808 sleepy_chatterjee[426444]: }
Oct 14 05:47:05 np0005486808 systemd[1]: libpod-00f2d28e582c5aca90ed63d138367e13aba0c76422ef93915cc9b453e686c449.scope: Deactivated successfully.
Oct 14 05:47:05 np0005486808 podman[426428]: 2025-10-14 09:47:05.271859838 +0000 UTC m=+1.037607943 container died 00f2d28e582c5aca90ed63d138367e13aba0c76422ef93915cc9b453e686c449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_chatterjee, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 14 05:47:05 np0005486808 systemd[1]: var-lib-containers-storage-overlay-31fae8b01e3b06bc3527ac63c6cccaac573be2b7a5d6c4f23b35cc92371dc279-merged.mount: Deactivated successfully.
Oct 14 05:47:05 np0005486808 podman[426428]: 2025-10-14 09:47:05.343296301 +0000 UTC m=+1.109044406 container remove 00f2d28e582c5aca90ed63d138367e13aba0c76422ef93915cc9b453e686c449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 14 05:47:05 np0005486808 systemd[1]: libpod-conmon-00f2d28e582c5aca90ed63d138367e13aba0c76422ef93915cc9b453e686c449.scope: Deactivated successfully.
Oct 14 05:47:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:47:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2095429104' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:47:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:47:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2095429104' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:47:05 np0005486808 nova_compute[259627]: 2025-10-14 09:47:05.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:47:05 np0005486808 nova_compute[259627]: 2025-10-14 09:47:05.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:47:05 np0005486808 nova_compute[259627]: 2025-10-14 09:47:05.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:47:05 np0005486808 nova_compute[259627]: 2025-10-14 09:47:05.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 05:47:06 np0005486808 nova_compute[259627]: 2025-10-14 09:47:06.001 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 14 05:47:06 np0005486808 podman[426608]: 2025-10-14 09:47:06.108683813 +0000 UTC m=+0.059730756 container create af2f705ee32dce10ebe7849cc1a1c740343b82ea8b6f7df1a64b1a7eeac981fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 14 05:47:06 np0005486808 systemd[1]: Started libpod-conmon-af2f705ee32dce10ebe7849cc1a1c740343b82ea8b6f7df1a64b1a7eeac981fb.scope.
Oct 14 05:47:06 np0005486808 podman[426608]: 2025-10-14 09:47:06.088066777 +0000 UTC m=+0.039113720 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:47:06 np0005486808 nova_compute[259627]: 2025-10-14 09:47:06.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:47:06 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:47:06 np0005486808 podman[426608]: 2025-10-14 09:47:06.221570764 +0000 UTC m=+0.172617677 container init af2f705ee32dce10ebe7849cc1a1c740343b82ea8b6f7df1a64b1a7eeac981fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 14 05:47:06 np0005486808 podman[426608]: 2025-10-14 09:47:06.229662302 +0000 UTC m=+0.180709255 container start af2f705ee32dce10ebe7849cc1a1c740343b82ea8b6f7df1a64b1a7eeac981fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_nobel, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 14 05:47:06 np0005486808 podman[426608]: 2025-10-14 09:47:06.233753823 +0000 UTC m=+0.184800776 container attach af2f705ee32dce10ebe7849cc1a1c740343b82ea8b6f7df1a64b1a7eeac981fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:47:06 np0005486808 optimistic_nobel[426624]: 167 167
Oct 14 05:47:06 np0005486808 systemd[1]: libpod-af2f705ee32dce10ebe7849cc1a1c740343b82ea8b6f7df1a64b1a7eeac981fb.scope: Deactivated successfully.
Oct 14 05:47:06 np0005486808 podman[426608]: 2025-10-14 09:47:06.239263408 +0000 UTC m=+0.190310351 container died af2f705ee32dce10ebe7849cc1a1c740343b82ea8b6f7df1a64b1a7eeac981fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 14 05:47:06 np0005486808 systemd[1]: var-lib-containers-storage-overlay-0569f6ca98b66bd754466065c03f3096a15837ae09ddb7af5f5b17cd6b9120b0-merged.mount: Deactivated successfully.
Oct 14 05:47:06 np0005486808 podman[426608]: 2025-10-14 09:47:06.290004133 +0000 UTC m=+0.241051056 container remove af2f705ee32dce10ebe7849cc1a1c740343b82ea8b6f7df1a64b1a7eeac981fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_nobel, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 14 05:47:06 np0005486808 systemd[1]: libpod-conmon-af2f705ee32dce10ebe7849cc1a1c740343b82ea8b6f7df1a64b1a7eeac981fb.scope: Deactivated successfully.
Oct 14 05:47:06 np0005486808 podman[426649]: 2025-10-14 09:47:06.504473296 +0000 UTC m=+0.076976100 container create f6517dd52c581bd5821219b0b734bbc2b34089ef6fe1c8e8016ac4e32b4091df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:47:06 np0005486808 systemd[1]: Started libpod-conmon-f6517dd52c581bd5821219b0b734bbc2b34089ef6fe1c8e8016ac4e32b4091df.scope.
Oct 14 05:47:06 np0005486808 podman[426649]: 2025-10-14 09:47:06.475985257 +0000 UTC m=+0.048488151 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:47:06 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:47:06 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f11154e87cfefc54ef2e0b756c703d587308737753ce5fce0ae7008ddbe40e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:47:06 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f11154e87cfefc54ef2e0b756c703d587308737753ce5fce0ae7008ddbe40e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:47:06 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f11154e87cfefc54ef2e0b756c703d587308737753ce5fce0ae7008ddbe40e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:47:06 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f11154e87cfefc54ef2e0b756c703d587308737753ce5fce0ae7008ddbe40e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:47:06 np0005486808 podman[426649]: 2025-10-14 09:47:06.605377752 +0000 UTC m=+0.177880626 container init f6517dd52c581bd5821219b0b734bbc2b34089ef6fe1c8e8016ac4e32b4091df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 05:47:06 np0005486808 podman[426649]: 2025-10-14 09:47:06.619673343 +0000 UTC m=+0.192176137 container start f6517dd52c581bd5821219b0b734bbc2b34089ef6fe1c8e8016ac4e32b4091df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 14 05:47:06 np0005486808 podman[426649]: 2025-10-14 09:47:06.623684372 +0000 UTC m=+0.196187196 container attach f6517dd52c581bd5821219b0b734bbc2b34089ef6fe1c8e8016ac4e32b4091df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 14 05:47:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2821: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:47:06 np0005486808 nova_compute[259627]: 2025-10-14 09:47:06.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:47:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:47:07.064 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:47:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:47:07.064 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:47:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:47:07.065 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:47:07 np0005486808 gifted_cori[426665]: {
Oct 14 05:47:07 np0005486808 gifted_cori[426665]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:47:07 np0005486808 gifted_cori[426665]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:47:07 np0005486808 gifted_cori[426665]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:47:07 np0005486808 gifted_cori[426665]:        "osd_id": 2,
Oct 14 05:47:07 np0005486808 gifted_cori[426665]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:47:07 np0005486808 gifted_cori[426665]:        "type": "bluestore"
Oct 14 05:47:07 np0005486808 gifted_cori[426665]:    },
Oct 14 05:47:07 np0005486808 gifted_cori[426665]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:47:07 np0005486808 gifted_cori[426665]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:47:07 np0005486808 gifted_cori[426665]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:47:07 np0005486808 gifted_cori[426665]:        "osd_id": 1,
Oct 14 05:47:07 np0005486808 gifted_cori[426665]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:47:07 np0005486808 gifted_cori[426665]:        "type": "bluestore"
Oct 14 05:47:07 np0005486808 gifted_cori[426665]:    },
Oct 14 05:47:07 np0005486808 gifted_cori[426665]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:47:07 np0005486808 gifted_cori[426665]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:47:07 np0005486808 gifted_cori[426665]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:47:07 np0005486808 gifted_cori[426665]:        "osd_id": 0,
Oct 14 05:47:07 np0005486808 gifted_cori[426665]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:47:07 np0005486808 gifted_cori[426665]:        "type": "bluestore"
Oct 14 05:47:07 np0005486808 gifted_cori[426665]:    }
Oct 14 05:47:07 np0005486808 gifted_cori[426665]: }
Oct 14 05:47:07 np0005486808 systemd[1]: libpod-f6517dd52c581bd5821219b0b734bbc2b34089ef6fe1c8e8016ac4e32b4091df.scope: Deactivated successfully.
Oct 14 05:47:07 np0005486808 podman[426649]: 2025-10-14 09:47:07.731166459 +0000 UTC m=+1.303669293 container died f6517dd52c581bd5821219b0b734bbc2b34089ef6fe1c8e8016ac4e32b4091df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cori, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 14 05:47:07 np0005486808 systemd[1]: libpod-f6517dd52c581bd5821219b0b734bbc2b34089ef6fe1c8e8016ac4e32b4091df.scope: Consumed 1.121s CPU time.
Oct 14 05:47:07 np0005486808 nova_compute[259627]: 2025-10-14 09:47:07.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:47:07 np0005486808 systemd[1]: var-lib-containers-storage-overlay-6f11154e87cfefc54ef2e0b756c703d587308737753ce5fce0ae7008ddbe40e9-merged.mount: Deactivated successfully.
Oct 14 05:47:07 np0005486808 podman[426649]: 2025-10-14 09:47:07.807730268 +0000 UTC m=+1.380233072 container remove f6517dd52c581bd5821219b0b734bbc2b34089ef6fe1c8e8016ac4e32b4091df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cori, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:47:07 np0005486808 systemd[1]: libpod-conmon-f6517dd52c581bd5821219b0b734bbc2b34089ef6fe1c8e8016ac4e32b4091df.scope: Deactivated successfully.
Oct 14 05:47:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:47:07 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:47:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:47:07 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:47:07 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev e514fc5c-17b7-4073-81a7-19661a5bc7b7 does not exist
Oct 14 05:47:07 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 644e508d-eb9e-4192-8fa0-d2073bd38ec8 does not exist
Oct 14 05:47:07 np0005486808 nova_compute[259627]: 2025-10-14 09:47:07.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:47:07 np0005486808 nova_compute[259627]: 2025-10-14 09:47:07.977 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:47:08 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:47:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:47:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2822: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:47:09 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:47:09 np0005486808 podman[426760]: 2025-10-14 09:47:09.701964352 +0000 UTC m=+0.093235389 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid)
Oct 14 05:47:09 np0005486808 podman[426759]: 2025-10-14 09:47:09.708192475 +0000 UTC m=+0.101765949 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 14 05:47:09 np0005486808 nova_compute[259627]: 2025-10-14 09:47:09.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:47:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2823: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:47:11 np0005486808 nova_compute[259627]: 2025-10-14 09:47:11.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:47:12 np0005486808 nova_compute[259627]: 2025-10-14 09:47:12.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:47:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2824: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:47:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:47:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2825: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:47:16 np0005486808 nova_compute[259627]: 2025-10-14 09:47:16.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:47:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2826: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:47:17 np0005486808 nova_compute[259627]: 2025-10-14 09:47:17.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:47:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:47:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2827: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:47:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2828: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:47:21 np0005486808 nova_compute[259627]: 2025-10-14 09:47:21.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:47:22 np0005486808 nova_compute[259627]: 2025-10-14 09:47:22.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:47:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2829: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:47:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:47:23 np0005486808 podman[426801]: 2025-10-14 09:47:23.690408207 +0000 UTC m=+0.093456434 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true)
Oct 14 05:47:23 np0005486808 podman[426800]: 2025-10-14 09:47:23.769483638 +0000 UTC m=+0.177381264 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 05:47:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2830: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:47:26 np0005486808 nova_compute[259627]: 2025-10-14 09:47:26.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:47:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2831: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 14 05:47:27 np0005486808 nova_compute[259627]: 2025-10-14 09:47:27.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:47:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:47:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2832: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 14 05:47:30 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e294 do_prune osdmap full prune enabled
Oct 14 05:47:30 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e295 e295: 3 total, 3 up, 3 in
Oct 14 05:47:30 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e295: 3 total, 3 up, 3 in
Oct 14 05:47:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2834: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 29 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.2 KiB/s wr, 31 op/s
Oct 14 05:47:31 np0005486808 nova_compute[259627]: 2025-10-14 09:47:31.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:47:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:47:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:47:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:47:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:47:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:47:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:47:32 np0005486808 nova_compute[259627]: 2025-10-14 09:47:32.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:47:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2835: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 29 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.2 KiB/s wr, 31 op/s
Oct 14 05:47:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:47:32
Oct 14 05:47:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:47:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:47:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'images', '.mgr', 'volumes', '.rgw.root', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', 'default.rgw.log']
Oct 14 05:47:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:47:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e295 do_prune osdmap full prune enabled
Oct 14 05:47:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e296 e296: 3 total, 3 up, 3 in
Oct 14 05:47:33 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e296: 3 total, 3 up, 3 in
Oct 14 05:47:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:47:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:47:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:47:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:47:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:47:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:47:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:47:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:47:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:47:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:47:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:47:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2837: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 29 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 29 op/s
Oct 14 05:47:36 np0005486808 nova_compute[259627]: 2025-10-14 09:47:36.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:47:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2838: 305 pgs: 305 active+clean; 457 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Oct 14 05:47:37 np0005486808 nova_compute[259627]: 2025-10-14 09:47:37.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:47:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:47:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e296 do_prune osdmap full prune enabled
Oct 14 05:47:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e297 e297: 3 total, 3 up, 3 in
Oct 14 05:47:38 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e297: 3 total, 3 up, 3 in
Oct 14 05:47:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2840: 305 pgs: 305 active+clean; 457 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.1 KiB/s wr, 33 op/s
Oct 14 05:47:40 np0005486808 podman[426844]: 2025-10-14 09:47:40.678433643 +0000 UTC m=+0.079836470 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 14 05:47:40 np0005486808 podman[426845]: 2025-10-14 09:47:40.6807672 +0000 UTC m=+0.078568199 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 14 05:47:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2841: 305 pgs: 305 active+clean; 457 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.1 KiB/s wr, 33 op/s
Oct 14 05:47:41 np0005486808 nova_compute[259627]: 2025-10-14 09:47:41.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:47:42 np0005486808 nova_compute[259627]: 2025-10-14 09:47:42.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:47:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2842: 305 pgs: 305 active+clean; 457 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 KiB/s wr, 27 op/s
Oct 14 05:47:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:47:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:47:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:47:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:47:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:47:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 05:47:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:47:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:47:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:47:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:47:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:47:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 14 05:47:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:47:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:47:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:47:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:47:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:47:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:47:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:47:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:47:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:47:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:47:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:47:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:47:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2843: 305 pgs: 305 active+clean; 457 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.7 KiB/s wr, 26 op/s
Oct 14 05:47:44 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e297 do_prune osdmap full prune enabled
Oct 14 05:47:44 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 e298: 3 total, 3 up, 3 in
Oct 14 05:47:44 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e298: 3 total, 3 up, 3 in
Oct 14 05:47:46 np0005486808 nova_compute[259627]: 2025-10-14 09:47:46.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:47:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2845: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.9 KiB/s wr, 17 op/s
Oct 14 05:47:47 np0005486808 nova_compute[259627]: 2025-10-14 09:47:47.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:47:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:47:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2846: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 14 05:47:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2847: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 14 05:47:51 np0005486808 nova_compute[259627]: 2025-10-14 09:47:51.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:47:52 np0005486808 nova_compute[259627]: 2025-10-14 09:47:52.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:47:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2848: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 14 05:47:52 np0005486808 nova_compute[259627]: 2025-10-14 09:47:52.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:47:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:47:54 np0005486808 podman[426884]: 2025-10-14 09:47:54.65837634 +0000 UTC m=+0.065912359 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:47:54 np0005486808 podman[426883]: 2025-10-14 09:47:54.683630449 +0000 UTC m=+0.090111922 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Oct 14 05:47:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2849: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 14 05:47:56 np0005486808 nova_compute[259627]: 2025-10-14 09:47:56.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:47:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2850: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 8.7 KiB/s rd, 1.3 KiB/s wr, 12 op/s
Oct 14 05:47:57 np0005486808 nova_compute[259627]: 2025-10-14 09:47:57.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:47:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:47:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2851: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:47:59 np0005486808 nova_compute[259627]: 2025-10-14 09:47:59.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:48:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2852: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:48:01 np0005486808 nova_compute[259627]: 2025-10-14 09:48:01.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:48:01 np0005486808 nova_compute[259627]: 2025-10-14 09:48:01.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:48:02 np0005486808 nova_compute[259627]: 2025-10-14 09:48:02.004 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:48:02 np0005486808 nova_compute[259627]: 2025-10-14 09:48:02.004 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:48:02 np0005486808 nova_compute[259627]: 2025-10-14 09:48:02.004 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:48:02 np0005486808 nova_compute[259627]: 2025-10-14 09:48:02.005 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:48:02 np0005486808 nova_compute[259627]: 2025-10-14 09:48:02.005 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:48:02 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:48:02 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/525910947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:48:02 np0005486808 nova_compute[259627]: 2025-10-14 09:48:02.462 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:48:02 np0005486808 nova_compute[259627]: 2025-10-14 09:48:02.650 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:48:02 np0005486808 nova_compute[259627]: 2025-10-14 09:48:02.651 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3592MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:48:02 np0005486808 nova_compute[259627]: 2025-10-14 09:48:02.651 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:48:02 np0005486808 nova_compute[259627]: 2025-10-14 09:48:02.652 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:48:02 np0005486808 nova_compute[259627]: 2025-10-14 09:48:02.738 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:48:02 np0005486808 nova_compute[259627]: 2025-10-14 09:48:02.739 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:48:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:48:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:48:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:48:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:48:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:48:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:48:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2853: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:48:02 np0005486808 nova_compute[259627]: 2025-10-14 09:48:02.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:48:02 np0005486808 nova_compute[259627]: 2025-10-14 09:48:02.913 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:48:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:48:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:48:03 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1186373855' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:48:03 np0005486808 nova_compute[259627]: 2025-10-14 09:48:03.334 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:48:03 np0005486808 nova_compute[259627]: 2025-10-14 09:48:03.342 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 05:48:03 np0005486808 nova_compute[259627]: 2025-10-14 09:48:03.359 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 05:48:03 np0005486808 nova_compute[259627]: 2025-10-14 09:48:03.361 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 05:48:03 np0005486808 nova_compute[259627]: 2025-10-14 09:48:03.362 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:48:04 np0005486808 nova_compute[259627]: 2025-10-14 09:48:04.362 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:48:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2854: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:48:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:48:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/185654086' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:48:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:48:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/185654086' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:48:05 np0005486808 nova_compute[259627]: 2025-10-14 09:48:05.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:48:05 np0005486808 nova_compute[259627]: 2025-10-14 09:48:05.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 05:48:05 np0005486808 nova_compute[259627]: 2025-10-14 09:48:05.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 05:48:06 np0005486808 nova_compute[259627]: 2025-10-14 09:48:06.020 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 05:48:06 np0005486808 nova_compute[259627]: 2025-10-14 09:48:06.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:48:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2855: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:48:07 np0005486808 nova_compute[259627]: 2025-10-14 09:48:07.015 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:48:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:48:07.065 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 05:48:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:48:07.066 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 05:48:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:48:07.066 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:48:07 np0005486808 nova_compute[259627]: 2025-10-14 09:48:07.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:48:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:48:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2856: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:48:08 np0005486808 nova_compute[259627]: 2025-10-14 09:48:08.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:48:09 np0005486808 podman[427139]: 2025-10-14 09:48:09.192401312 +0000 UTC m=+0.100022874 container exec c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 05:48:09 np0005486808 podman[427139]: 2025-10-14 09:48:09.321567582 +0000 UTC m=+0.229189144 container exec_died c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:48:09 np0005486808 nova_compute[259627]: 2025-10-14 09:48:09.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:48:09 np0005486808 nova_compute[259627]: 2025-10-14 09:48:09.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 05:48:10 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:48:10 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:48:10 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:48:10 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:48:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2857: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:48:10 np0005486808 nova_compute[259627]: 2025-10-14 09:48:10.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:48:11 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:48:11 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:48:11 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:48:11 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:48:11 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:48:11 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:48:11 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 07fd6601-b22c-44c4-bc05-680132fecab2 does not exist
Oct 14 05:48:11 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 1be47d71-0f27-4eb7-ad7b-00913c128e69 does not exist
Oct 14 05:48:11 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 06a248db-b2ba-45b0-bd58-71e3c0209744 does not exist
Oct 14 05:48:11 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:48:11 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:48:11 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:48:11 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:48:11 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:48:11 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:48:11 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:48:11 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:48:11 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:48:11 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:48:11 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:48:11 np0005486808 podman[427456]: 2025-10-14 09:48:11.269193827 +0000 UTC m=+0.105782467 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 14 05:48:11 np0005486808 podman[427457]: 2025-10-14 09:48:11.26890056 +0000 UTC m=+0.105527611 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, container_name=iscsid)
Oct 14 05:48:11 np0005486808 nova_compute[259627]: 2025-10-14 09:48:11.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:48:11 np0005486808 podman[427614]: 2025-10-14 09:48:11.880639702 +0000 UTC m=+0.069506276 container create 788054d7ff7bf65e72bd0ff6faadf492eb4289ecd3bf0ee76d33578700474657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_herschel, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 14 05:48:11 np0005486808 systemd[1]: Started libpod-conmon-788054d7ff7bf65e72bd0ff6faadf492eb4289ecd3bf0ee76d33578700474657.scope.
Oct 14 05:48:11 np0005486808 podman[427614]: 2025-10-14 09:48:11.855550897 +0000 UTC m=+0.044417451 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:48:11 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:48:11 np0005486808 podman[427614]: 2025-10-14 09:48:11.983750343 +0000 UTC m=+0.172616917 container init 788054d7ff7bf65e72bd0ff6faadf492eb4289ecd3bf0ee76d33578700474657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_herschel, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:48:11 np0005486808 podman[427614]: 2025-10-14 09:48:11.991270777 +0000 UTC m=+0.180137361 container start 788054d7ff7bf65e72bd0ff6faadf492eb4289ecd3bf0ee76d33578700474657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_herschel, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 14 05:48:11 np0005486808 podman[427614]: 2025-10-14 09:48:11.995191703 +0000 UTC m=+0.184058307 container attach 788054d7ff7bf65e72bd0ff6faadf492eb4289ecd3bf0ee76d33578700474657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_herschel, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 14 05:48:11 np0005486808 happy_herschel[427631]: 167 167
Oct 14 05:48:12 np0005486808 systemd[1]: libpod-788054d7ff7bf65e72bd0ff6faadf492eb4289ecd3bf0ee76d33578700474657.scope: Deactivated successfully.
Oct 14 05:48:12 np0005486808 podman[427614]: 2025-10-14 09:48:12.001199401 +0000 UTC m=+0.190065985 container died 788054d7ff7bf65e72bd0ff6faadf492eb4289ecd3bf0ee76d33578700474657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_herschel, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 14 05:48:12 np0005486808 systemd[1]: var-lib-containers-storage-overlay-92d9bd470852f3821d00abbe72f11bb98c8a0a72c5d8e540e18a3d8530f619d1-merged.mount: Deactivated successfully.
Oct 14 05:48:12 np0005486808 podman[427614]: 2025-10-14 09:48:12.04434112 +0000 UTC m=+0.233207674 container remove 788054d7ff7bf65e72bd0ff6faadf492eb4289ecd3bf0ee76d33578700474657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 05:48:12 np0005486808 systemd[1]: libpod-conmon-788054d7ff7bf65e72bd0ff6faadf492eb4289ecd3bf0ee76d33578700474657.scope: Deactivated successfully.
Oct 14 05:48:12 np0005486808 podman[427654]: 2025-10-14 09:48:12.308599624 +0000 UTC m=+0.082321491 container create 68fbde338bbb041268f77eca574e89c858672b0f6144a6388169227b989ac9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 14 05:48:12 np0005486808 podman[427654]: 2025-10-14 09:48:12.278110976 +0000 UTC m=+0.051832903 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:48:12 np0005486808 systemd[1]: Started libpod-conmon-68fbde338bbb041268f77eca574e89c858672b0f6144a6388169227b989ac9c3.scope.
Oct 14 05:48:12 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:48:12 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd4c697e719d273fb1e31e6f7db1db66badd3eaad5c1aa6db9eb56f6752ea4dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:48:12 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd4c697e719d273fb1e31e6f7db1db66badd3eaad5c1aa6db9eb56f6752ea4dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:48:12 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd4c697e719d273fb1e31e6f7db1db66badd3eaad5c1aa6db9eb56f6752ea4dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:48:12 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd4c697e719d273fb1e31e6f7db1db66badd3eaad5c1aa6db9eb56f6752ea4dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:48:12 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd4c697e719d273fb1e31e6f7db1db66badd3eaad5c1aa6db9eb56f6752ea4dc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:48:12 np0005486808 podman[427654]: 2025-10-14 09:48:12.449624195 +0000 UTC m=+0.223346112 container init 68fbde338bbb041268f77eca574e89c858672b0f6144a6388169227b989ac9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kowalevski, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:48:12 np0005486808 podman[427654]: 2025-10-14 09:48:12.464985962 +0000 UTC m=+0.238707829 container start 68fbde338bbb041268f77eca574e89c858672b0f6144a6388169227b989ac9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 14 05:48:12 np0005486808 podman[427654]: 2025-10-14 09:48:12.469641736 +0000 UTC m=+0.243363593 container attach 68fbde338bbb041268f77eca574e89c858672b0f6144a6388169227b989ac9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 14 05:48:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2858: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:48:12 np0005486808 nova_compute[259627]: 2025-10-14 09:48:12.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:48:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:48:13 np0005486808 festive_kowalevski[427671]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:48:13 np0005486808 festive_kowalevski[427671]: --> relative data size: 1.0
Oct 14 05:48:13 np0005486808 festive_kowalevski[427671]: --> All data devices are unavailable
Oct 14 05:48:13 np0005486808 systemd[1]: libpod-68fbde338bbb041268f77eca574e89c858672b0f6144a6388169227b989ac9c3.scope: Deactivated successfully.
Oct 14 05:48:13 np0005486808 podman[427654]: 2025-10-14 09:48:13.696956624 +0000 UTC m=+1.470678491 container died 68fbde338bbb041268f77eca574e89c858672b0f6144a6388169227b989ac9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kowalevski, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:48:13 np0005486808 systemd[1]: libpod-68fbde338bbb041268f77eca574e89c858672b0f6144a6388169227b989ac9c3.scope: Consumed 1.197s CPU time.
Oct 14 05:48:13 np0005486808 systemd[1]: var-lib-containers-storage-overlay-fd4c697e719d273fb1e31e6f7db1db66badd3eaad5c1aa6db9eb56f6752ea4dc-merged.mount: Deactivated successfully.
Oct 14 05:48:13 np0005486808 podman[427654]: 2025-10-14 09:48:13.79093178 +0000 UTC m=+1.564653657 container remove 68fbde338bbb041268f77eca574e89c858672b0f6144a6388169227b989ac9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kowalevski, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Oct 14 05:48:13 np0005486808 systemd[1]: libpod-conmon-68fbde338bbb041268f77eca574e89c858672b0f6144a6388169227b989ac9c3.scope: Deactivated successfully.
Oct 14 05:48:14 np0005486808 podman[427855]: 2025-10-14 09:48:14.661650428 +0000 UTC m=+0.073026663 container create 01f35436e1205e4026e8bc9861e64c3633b4b63cd7f226d8480fb7f9e4b9bd42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_ishizaka, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:48:14 np0005486808 systemd[1]: Started libpod-conmon-01f35436e1205e4026e8bc9861e64c3633b4b63cd7f226d8480fb7f9e4b9bd42.scope.
Oct 14 05:48:14 np0005486808 podman[427855]: 2025-10-14 09:48:14.630885943 +0000 UTC m=+0.042262238 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:48:14 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:48:14 np0005486808 podman[427855]: 2025-10-14 09:48:14.752789904 +0000 UTC m=+0.164166199 container init 01f35436e1205e4026e8bc9861e64c3633b4b63cd7f226d8480fb7f9e4b9bd42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 14 05:48:14 np0005486808 podman[427855]: 2025-10-14 09:48:14.764888571 +0000 UTC m=+0.176264796 container start 01f35436e1205e4026e8bc9861e64c3633b4b63cd7f226d8480fb7f9e4b9bd42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_ishizaka, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 14 05:48:14 np0005486808 podman[427855]: 2025-10-14 09:48:14.769589896 +0000 UTC m=+0.180966151 container attach 01f35436e1205e4026e8bc9861e64c3633b4b63cd7f226d8480fb7f9e4b9bd42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_ishizaka, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:48:14 np0005486808 silly_ishizaka[427872]: 167 167
Oct 14 05:48:14 np0005486808 systemd[1]: libpod-01f35436e1205e4026e8bc9861e64c3633b4b63cd7f226d8480fb7f9e4b9bd42.scope: Deactivated successfully.
Oct 14 05:48:14 np0005486808 podman[427855]: 2025-10-14 09:48:14.771670887 +0000 UTC m=+0.183047112 container died 01f35436e1205e4026e8bc9861e64c3633b4b63cd7f226d8480fb7f9e4b9bd42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 14 05:48:14 np0005486808 systemd[1]: var-lib-containers-storage-overlay-2dc3234ce6d4f37bfd8b066f18cc4920fe82a990b20d209d1b3e98375e16ccca-merged.mount: Deactivated successfully.
Oct 14 05:48:14 np0005486808 podman[427855]: 2025-10-14 09:48:14.828050021 +0000 UTC m=+0.239426246 container remove 01f35436e1205e4026e8bc9861e64c3633b4b63cd7f226d8480fb7f9e4b9bd42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_ishizaka, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 14 05:48:14 np0005486808 systemd[1]: libpod-conmon-01f35436e1205e4026e8bc9861e64c3633b4b63cd7f226d8480fb7f9e4b9bd42.scope: Deactivated successfully.
Oct 14 05:48:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2859: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:48:15 np0005486808 podman[427894]: 2025-10-14 09:48:15.041446097 +0000 UTC m=+0.048626904 container create 1f7f59a2a6bc7e382cea76dad8b9e616e2d8849c8b5e559c24274ea3dbb171b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_tu, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:48:15 np0005486808 systemd[1]: Started libpod-conmon-1f7f59a2a6bc7e382cea76dad8b9e616e2d8849c8b5e559c24274ea3dbb171b6.scope.
Oct 14 05:48:15 np0005486808 podman[427894]: 2025-10-14 09:48:15.021544479 +0000 UTC m=+0.028725296 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:48:15 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:48:15 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/021f0d25db3be19682bd78ae39058efd7040f44704cddb0f59a98a4c82d0c922/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:48:15 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/021f0d25db3be19682bd78ae39058efd7040f44704cddb0f59a98a4c82d0c922/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:48:15 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/021f0d25db3be19682bd78ae39058efd7040f44704cddb0f59a98a4c82d0c922/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:48:15 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/021f0d25db3be19682bd78ae39058efd7040f44704cddb0f59a98a4c82d0c922/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:48:15 np0005486808 podman[427894]: 2025-10-14 09:48:15.16995541 +0000 UTC m=+0.177136237 container init 1f7f59a2a6bc7e382cea76dad8b9e616e2d8849c8b5e559c24274ea3dbb171b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_tu, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 14 05:48:15 np0005486808 podman[427894]: 2025-10-14 09:48:15.187006179 +0000 UTC m=+0.194186976 container start 1f7f59a2a6bc7e382cea76dad8b9e616e2d8849c8b5e559c24274ea3dbb171b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:48:15 np0005486808 podman[427894]: 2025-10-14 09:48:15.191852268 +0000 UTC m=+0.199033075 container attach 1f7f59a2a6bc7e382cea76dad8b9e616e2d8849c8b5e559c24274ea3dbb171b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]: {
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:    "0": [
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:        {
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:            "devices": [
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:                "/dev/loop3"
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:            ],
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:            "lv_name": "ceph_lv0",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:            "lv_size": "21470642176",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:            "name": "ceph_lv0",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:            "tags": {
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:                "ceph.cluster_name": "ceph",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:                "ceph.crush_device_class": "",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:                "ceph.encrypted": "0",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:                "ceph.osd_id": "0",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:                "ceph.type": "block",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:                "ceph.vdo": "0"
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:            },
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:            "type": "block",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:            "vg_name": "ceph_vg0"
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:        }
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:    ],
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:    "1": [
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:        {
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:            "devices": [
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:                "/dev/loop4"
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:            ],
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:            "lv_name": "ceph_lv1",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:            "lv_size": "21470642176",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:            "name": "ceph_lv1",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:            "tags": {
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:                "ceph.cluster_name": "ceph",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:                "ceph.crush_device_class": "",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:                "ceph.encrypted": "0",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:                "ceph.osd_id": "1",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:                "ceph.type": "block",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:                "ceph.vdo": "0"
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:            },
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:            "type": "block",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:            "vg_name": "ceph_vg1"
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:        }
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:    ],
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:    "2": [
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:        {
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:            "devices": [
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:                "/dev/loop5"
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:            ],
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:            "lv_name": "ceph_lv2",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:            "lv_size": "21470642176",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:            "name": "ceph_lv2",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:            "tags": {
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:                "ceph.cluster_name": "ceph",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:                "ceph.crush_device_class": "",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:                "ceph.encrypted": "0",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:                "ceph.osd_id": "2",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:                "ceph.type": "block",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:                "ceph.vdo": "0"
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:            },
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:            "type": "block",
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:            "vg_name": "ceph_vg2"
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:        }
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]:    ]
Oct 14 05:48:15 np0005486808 inspiring_tu[427911]: }
Oct 14 05:48:15 np0005486808 systemd[1]: libpod-1f7f59a2a6bc7e382cea76dad8b9e616e2d8849c8b5e559c24274ea3dbb171b6.scope: Deactivated successfully.
Oct 14 05:48:15 np0005486808 podman[427894]: 2025-10-14 09:48:15.983060694 +0000 UTC m=+0.990241531 container died 1f7f59a2a6bc7e382cea76dad8b9e616e2d8849c8b5e559c24274ea3dbb171b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 05:48:16 np0005486808 systemd[1]: var-lib-containers-storage-overlay-021f0d25db3be19682bd78ae39058efd7040f44704cddb0f59a98a4c82d0c922-merged.mount: Deactivated successfully.
Oct 14 05:48:16 np0005486808 podman[427894]: 2025-10-14 09:48:16.051691809 +0000 UTC m=+1.058872576 container remove 1f7f59a2a6bc7e382cea76dad8b9e616e2d8849c8b5e559c24274ea3dbb171b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_tu, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:48:16 np0005486808 systemd[1]: libpod-conmon-1f7f59a2a6bc7e382cea76dad8b9e616e2d8849c8b5e559c24274ea3dbb171b6.scope: Deactivated successfully.
Oct 14 05:48:16 np0005486808 nova_compute[259627]: 2025-10-14 09:48:16.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:48:16 np0005486808 podman[428072]: 2025-10-14 09:48:16.838708301 +0000 UTC m=+0.055806890 container create 3516f7c881ee529e06cbf8781a1308054dc00f0ade7ee2d10759fda6baba258f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 14 05:48:16 np0005486808 systemd[1]: Started libpod-conmon-3516f7c881ee529e06cbf8781a1308054dc00f0ade7ee2d10759fda6baba258f.scope.
Oct 14 05:48:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2860: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:48:16 np0005486808 podman[428072]: 2025-10-14 09:48:16.819794877 +0000 UTC m=+0.036893496 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:48:16 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:48:16 np0005486808 podman[428072]: 2025-10-14 09:48:16.943196715 +0000 UTC m=+0.160295334 container init 3516f7c881ee529e06cbf8781a1308054dc00f0ade7ee2d10759fda6baba258f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_turing, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:48:16 np0005486808 podman[428072]: 2025-10-14 09:48:16.951764885 +0000 UTC m=+0.168863504 container start 3516f7c881ee529e06cbf8781a1308054dc00f0ade7ee2d10759fda6baba258f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_turing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:48:16 np0005486808 podman[428072]: 2025-10-14 09:48:16.955800354 +0000 UTC m=+0.172898973 container attach 3516f7c881ee529e06cbf8781a1308054dc00f0ade7ee2d10759fda6baba258f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_turing, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 14 05:48:16 np0005486808 dazzling_turing[428089]: 167 167
Oct 14 05:48:16 np0005486808 podman[428072]: 2025-10-14 09:48:16.963658507 +0000 UTC m=+0.180757126 container died 3516f7c881ee529e06cbf8781a1308054dc00f0ade7ee2d10759fda6baba258f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_turing, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:48:16 np0005486808 systemd[1]: libpod-3516f7c881ee529e06cbf8781a1308054dc00f0ade7ee2d10759fda6baba258f.scope: Deactivated successfully.
Oct 14 05:48:16 np0005486808 systemd[1]: var-lib-containers-storage-overlay-80113aaae461cfeb15253ec63192479511335354395d349eb35c20322f09ddb6-merged.mount: Deactivated successfully.
Oct 14 05:48:17 np0005486808 podman[428072]: 2025-10-14 09:48:17.009943153 +0000 UTC m=+0.227041752 container remove 3516f7c881ee529e06cbf8781a1308054dc00f0ade7ee2d10759fda6baba258f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:48:17 np0005486808 systemd[1]: libpod-conmon-3516f7c881ee529e06cbf8781a1308054dc00f0ade7ee2d10759fda6baba258f.scope: Deactivated successfully.
Oct 14 05:48:17 np0005486808 podman[428113]: 2025-10-14 09:48:17.181973215 +0000 UTC m=+0.043571921 container create e3e63a8b1a9ce076cc7dfd1430d4aac1bc0d52e592f248487748e33d87c15e06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:48:17 np0005486808 systemd[1]: Started libpod-conmon-e3e63a8b1a9ce076cc7dfd1430d4aac1bc0d52e592f248487748e33d87c15e06.scope.
Oct 14 05:48:17 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:48:17 np0005486808 podman[428113]: 2025-10-14 09:48:17.164071725 +0000 UTC m=+0.025670421 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:48:17 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63c460a998d8014cab56a82951a9b3439c42790c2de4f2260493172b8e650ebd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:48:17 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63c460a998d8014cab56a82951a9b3439c42790c2de4f2260493172b8e650ebd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:48:17 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63c460a998d8014cab56a82951a9b3439c42790c2de4f2260493172b8e650ebd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:48:17 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63c460a998d8014cab56a82951a9b3439c42790c2de4f2260493172b8e650ebd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:48:17 np0005486808 podman[428113]: 2025-10-14 09:48:17.282483691 +0000 UTC m=+0.144082457 container init e3e63a8b1a9ce076cc7dfd1430d4aac1bc0d52e592f248487748e33d87c15e06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cerf, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:48:17 np0005486808 podman[428113]: 2025-10-14 09:48:17.294613819 +0000 UTC m=+0.156212525 container start e3e63a8b1a9ce076cc7dfd1430d4aac1bc0d52e592f248487748e33d87c15e06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:48:17 np0005486808 podman[428113]: 2025-10-14 09:48:17.298551835 +0000 UTC m=+0.160150511 container attach e3e63a8b1a9ce076cc7dfd1430d4aac1bc0d52e592f248487748e33d87c15e06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:48:17 np0005486808 nova_compute[259627]: 2025-10-14 09:48:17.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:48:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:48:18 np0005486808 agitated_cerf[428129]: {
Oct 14 05:48:18 np0005486808 agitated_cerf[428129]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:48:18 np0005486808 agitated_cerf[428129]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:48:18 np0005486808 agitated_cerf[428129]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:48:18 np0005486808 agitated_cerf[428129]:        "osd_id": 2,
Oct 14 05:48:18 np0005486808 agitated_cerf[428129]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:48:18 np0005486808 agitated_cerf[428129]:        "type": "bluestore"
Oct 14 05:48:18 np0005486808 agitated_cerf[428129]:    },
Oct 14 05:48:18 np0005486808 agitated_cerf[428129]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:48:18 np0005486808 agitated_cerf[428129]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:48:18 np0005486808 agitated_cerf[428129]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:48:18 np0005486808 agitated_cerf[428129]:        "osd_id": 1,
Oct 14 05:48:18 np0005486808 agitated_cerf[428129]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:48:18 np0005486808 agitated_cerf[428129]:        "type": "bluestore"
Oct 14 05:48:18 np0005486808 agitated_cerf[428129]:    },
Oct 14 05:48:18 np0005486808 agitated_cerf[428129]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:48:18 np0005486808 agitated_cerf[428129]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:48:18 np0005486808 agitated_cerf[428129]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:48:18 np0005486808 agitated_cerf[428129]:        "osd_id": 0,
Oct 14 05:48:18 np0005486808 agitated_cerf[428129]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:48:18 np0005486808 agitated_cerf[428129]:        "type": "bluestore"
Oct 14 05:48:18 np0005486808 agitated_cerf[428129]:    }
Oct 14 05:48:18 np0005486808 agitated_cerf[428129]: }
Oct 14 05:48:18 np0005486808 systemd[1]: libpod-e3e63a8b1a9ce076cc7dfd1430d4aac1bc0d52e592f248487748e33d87c15e06.scope: Deactivated successfully.
Oct 14 05:48:18 np0005486808 systemd[1]: libpod-e3e63a8b1a9ce076cc7dfd1430d4aac1bc0d52e592f248487748e33d87c15e06.scope: Consumed 1.059s CPU time.
Oct 14 05:48:18 np0005486808 podman[428113]: 2025-10-14 09:48:18.346083132 +0000 UTC m=+1.207681878 container died e3e63a8b1a9ce076cc7dfd1430d4aac1bc0d52e592f248487748e33d87c15e06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 05:48:18 np0005486808 systemd[1]: var-lib-containers-storage-overlay-63c460a998d8014cab56a82951a9b3439c42790c2de4f2260493172b8e650ebd-merged.mount: Deactivated successfully.
Oct 14 05:48:18 np0005486808 podman[428113]: 2025-10-14 09:48:18.42464777 +0000 UTC m=+1.286246476 container remove e3e63a8b1a9ce076cc7dfd1430d4aac1bc0d52e592f248487748e33d87c15e06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cerf, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:48:18 np0005486808 systemd[1]: libpod-conmon-e3e63a8b1a9ce076cc7dfd1430d4aac1bc0d52e592f248487748e33d87c15e06.scope: Deactivated successfully.
Oct 14 05:48:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:48:18 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:48:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:48:18 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:48:18 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 426223b8-f791-4f97-8486-f84d11baf428 does not exist
Oct 14 05:48:18 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 1e207904-f74f-4720-85ac-5e7400eaf33d does not exist
Oct 14 05:48:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2861: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:48:19 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:48:19 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:48:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2862: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:48:21 np0005486808 nova_compute[259627]: 2025-10-14 09:48:21.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:48:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2863: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:48:22 np0005486808 nova_compute[259627]: 2025-10-14 09:48:22.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:48:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:48:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2864: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:48:25 np0005486808 podman[428229]: 2025-10-14 09:48:25.684439187 +0000 UTC m=+0.090978154 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent)
Oct 14 05:48:25 np0005486808 podman[428228]: 2025-10-14 09:48:25.71433528 +0000 UTC m=+0.120873757 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Oct 14 05:48:26 np0005486808 nova_compute[259627]: 2025-10-14 09:48:26.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:48:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2865: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:48:27 np0005486808 nova_compute[259627]: 2025-10-14 09:48:27.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:48:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:48:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2866: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:48:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2867: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:48:31 np0005486808 nova_compute[259627]: 2025-10-14 09:48:31.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:48:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:48:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:48:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:48:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:48:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:48:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:48:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:48:32
Oct 14 05:48:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:48:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:48:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'volumes', 'vms', 'images', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', '.mgr']
Oct 14 05:48:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:48:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2868: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:48:32 np0005486808 nova_compute[259627]: 2025-10-14 09:48:32.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:48:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:48:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:48:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:48:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:48:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:48:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:48:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:48:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:48:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:48:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:48:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:48:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2869: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:48:35 np0005486808 nova_compute[259627]: 2025-10-14 09:48:35.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:48:36 np0005486808 nova_compute[259627]: 2025-10-14 09:48:36.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:48:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2870: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:48:38 np0005486808 nova_compute[259627]: 2025-10-14 09:48:38.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:48:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:48:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2871: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:48:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2872: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:48:41 np0005486808 nova_compute[259627]: 2025-10-14 09:48:41.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:48:41 np0005486808 podman[428271]: 2025-10-14 09:48:41.657710432 +0000 UTC m=+0.073057794 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, org.label-schema.license=GPLv2)
Oct 14 05:48:41 np0005486808 podman[428270]: 2025-10-14 09:48:41.679182778 +0000 UTC m=+0.088592744 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 14 05:48:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2873: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:48:43 np0005486808 nova_compute[259627]: 2025-10-14 09:48:43.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:48:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:48:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:48:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:48:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:48:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:48:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 05:48:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:48:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:48:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:48:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:48:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:48:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.4513495474376506e-07 of space, bias 1.0, pg target 0.00013354048642312953 quantized to 32 (current 32)
Oct 14 05:48:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:48:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:48:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:48:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:48:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:48:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:48:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:48:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:48:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:48:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:48:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:48:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:48:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2874: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:48:46 np0005486808 nova_compute[259627]: 2025-10-14 09:48:46.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:48:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2875: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:48:48 np0005486808 nova_compute[259627]: 2025-10-14 09:48:48.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:48:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:48:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2876: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:48:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2877: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:48:51 np0005486808 nova_compute[259627]: 2025-10-14 09:48:51.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:48:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2878: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:48:53 np0005486808 nova_compute[259627]: 2025-10-14 09:48:53.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:48:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:48:53 np0005486808 nova_compute[259627]: 2025-10-14 09:48:53.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:48:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2879: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:48:55 np0005486808 nova_compute[259627]: 2025-10-14 09:48:55.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:48:55 np0005486808 nova_compute[259627]: 2025-10-14 09:48:55.977 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 14 05:48:56 np0005486808 nova_compute[259627]: 2025-10-14 09:48:56.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:48:56 np0005486808 podman[428311]: 2025-10-14 09:48:56.713219561 +0000 UTC m=+0.110100023 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 14 05:48:56 np0005486808 podman[428310]: 2025-10-14 09:48:56.734543774 +0000 UTC m=+0.136310345 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 14 05:48:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2880: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:48:58 np0005486808 nova_compute[259627]: 2025-10-14 09:48:58.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:48:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:48:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2881: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:48:59 np0005486808 nova_compute[259627]: 2025-10-14 09:48:59.993 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:49:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2882: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:49:01 np0005486808 nova_compute[259627]: 2025-10-14 09:49:01.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:49:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:49:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:49:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:49:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:49:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:49:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:49:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2883: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:49:02 np0005486808 nova_compute[259627]: 2025-10-14 09:49:02.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:49:03 np0005486808 nova_compute[259627]: 2025-10-14 09:49:03.020 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:49:03 np0005486808 nova_compute[259627]: 2025-10-14 09:49:03.021 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:49:03 np0005486808 nova_compute[259627]: 2025-10-14 09:49:03.021 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:49:03 np0005486808 nova_compute[259627]: 2025-10-14 09:49:03.021 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:49:03 np0005486808 nova_compute[259627]: 2025-10-14 09:49:03.022 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:49:03 np0005486808 nova_compute[259627]: 2025-10-14 09:49:03.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:49:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:49:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:49:03 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2925532800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:49:03 np0005486808 nova_compute[259627]: 2025-10-14 09:49:03.511 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:49:03 np0005486808 nova_compute[259627]: 2025-10-14 09:49:03.690 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:49:03 np0005486808 nova_compute[259627]: 2025-10-14 09:49:03.691 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3584MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:49:03 np0005486808 nova_compute[259627]: 2025-10-14 09:49:03.691 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:49:03 np0005486808 nova_compute[259627]: 2025-10-14 09:49:03.692 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:49:03 np0005486808 nova_compute[259627]: 2025-10-14 09:49:03.778 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:49:03 np0005486808 nova_compute[259627]: 2025-10-14 09:49:03.779 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:49:03 np0005486808 nova_compute[259627]: 2025-10-14 09:49:03.793 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:49:04 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:49:04 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3921402581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:49:04 np0005486808 nova_compute[259627]: 2025-10-14 09:49:04.269 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:49:04 np0005486808 nova_compute[259627]: 2025-10-14 09:49:04.276 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:49:04 np0005486808 nova_compute[259627]: 2025-10-14 09:49:04.295 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:49:04 np0005486808 nova_compute[259627]: 2025-10-14 09:49:04.296 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:49:04 np0005486808 nova_compute[259627]: 2025-10-14 09:49:04.297 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:49:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2884: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:49:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:49:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1321879941' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:49:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:49:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1321879941' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:49:06 np0005486808 nova_compute[259627]: 2025-10-14 09:49:06.297 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:49:06 np0005486808 nova_compute[259627]: 2025-10-14 09:49:06.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:49:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2885: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:49:06 np0005486808 nova_compute[259627]: 2025-10-14 09:49:06.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:49:06 np0005486808 nova_compute[259627]: 2025-10-14 09:49:06.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:49:06 np0005486808 nova_compute[259627]: 2025-10-14 09:49:06.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 05:49:07 np0005486808 nova_compute[259627]: 2025-10-14 09:49:07.000 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 14 05:49:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:49:07.066 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:49:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:49:07.067 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:49:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:49:07.067 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:49:08 np0005486808 nova_compute[259627]: 2025-10-14 09:49:08.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:49:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:49:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2886: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:49:08 np0005486808 nova_compute[259627]: 2025-10-14 09:49:08.994 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:49:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2887: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:49:10 np0005486808 nova_compute[259627]: 2025-10-14 09:49:10.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:49:11 np0005486808 nova_compute[259627]: 2025-10-14 09:49:11.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:49:11 np0005486808 nova_compute[259627]: 2025-10-14 09:49:11.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:49:11 np0005486808 nova_compute[259627]: 2025-10-14 09:49:11.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:49:12 np0005486808 podman[428397]: 2025-10-14 09:49:12.664595969 +0000 UTC m=+0.070650335 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Oct 14 05:49:12 np0005486808 podman[428396]: 2025-10-14 09:49:12.687941582 +0000 UTC m=+0.089318453 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible)
Oct 14 05:49:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2888: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:49:12 np0005486808 nova_compute[259627]: 2025-10-14 09:49:12.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:49:13 np0005486808 nova_compute[259627]: 2025-10-14 09:49:13.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:49:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:49:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2889: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:49:16 np0005486808 nova_compute[259627]: 2025-10-14 09:49:16.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:49:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2890: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:49:16 np0005486808 nova_compute[259627]: 2025-10-14 09:49:16.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:49:18 np0005486808 nova_compute[259627]: 2025-10-14 09:49:18.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:49:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:49:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2891: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:49:19 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:49:19 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:49:19 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:49:19 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:49:19 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:49:19 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:49:19 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev dc31a80b-0eba-468c-b2f8-0ea21ba60128 does not exist
Oct 14 05:49:19 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 5c32462b-ec66-44e7-ae3e-cce66fcec746 does not exist
Oct 14 05:49:19 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 001b3d70-dea1-4784-8520-dc6dca6cb889 does not exist
Oct 14 05:49:19 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:49:19 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:49:19 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:49:19 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:49:19 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:49:19 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:49:20 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:49:20 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:49:20 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:49:20 np0005486808 podman[428709]: 2025-10-14 09:49:20.468715698 +0000 UTC m=+0.077888392 container create ca23e933a49177a6466f1ae9f492af2db095203f57982c15d735a12a9b6b70d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hertz, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:49:20 np0005486808 systemd[1]: Started libpod-conmon-ca23e933a49177a6466f1ae9f492af2db095203f57982c15d735a12a9b6b70d3.scope.
Oct 14 05:49:20 np0005486808 podman[428709]: 2025-10-14 09:49:20.434356115 +0000 UTC m=+0.043528859 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:49:20 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:49:20 np0005486808 podman[428709]: 2025-10-14 09:49:20.566407556 +0000 UTC m=+0.175580300 container init ca23e933a49177a6466f1ae9f492af2db095203f57982c15d735a12a9b6b70d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 14 05:49:20 np0005486808 podman[428709]: 2025-10-14 09:49:20.580632865 +0000 UTC m=+0.189805519 container start ca23e933a49177a6466f1ae9f492af2db095203f57982c15d735a12a9b6b70d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hertz, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 14 05:49:20 np0005486808 podman[428709]: 2025-10-14 09:49:20.585058013 +0000 UTC m=+0.194230757 container attach ca23e933a49177a6466f1ae9f492af2db095203f57982c15d735a12a9b6b70d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hertz, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:49:20 np0005486808 crazy_hertz[428725]: 167 167
Oct 14 05:49:20 np0005486808 systemd[1]: libpod-ca23e933a49177a6466f1ae9f492af2db095203f57982c15d735a12a9b6b70d3.scope: Deactivated successfully.
Oct 14 05:49:20 np0005486808 podman[428709]: 2025-10-14 09:49:20.587719969 +0000 UTC m=+0.196892633 container died ca23e933a49177a6466f1ae9f492af2db095203f57982c15d735a12a9b6b70d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 14 05:49:20 np0005486808 systemd[1]: var-lib-containers-storage-overlay-d64c3d00755456c07a4ab33419e3187951c0eda5c41d86c7adf39c20f1dcdf2b-merged.mount: Deactivated successfully.
Oct 14 05:49:20 np0005486808 podman[428709]: 2025-10-14 09:49:20.636964587 +0000 UTC m=+0.246137241 container remove ca23e933a49177a6466f1ae9f492af2db095203f57982c15d735a12a9b6b70d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 05:49:20 np0005486808 systemd[1]: libpod-conmon-ca23e933a49177a6466f1ae9f492af2db095203f57982c15d735a12a9b6b70d3.scope: Deactivated successfully.
Oct 14 05:49:20 np0005486808 podman[428749]: 2025-10-14 09:49:20.890334083 +0000 UTC m=+0.075671698 container create 92986a9a829692487334cbdf9d727fbb5165a1d5ed7873c8cc7213adebae910e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_murdock, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 14 05:49:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2892: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:49:20 np0005486808 systemd[1]: Started libpod-conmon-92986a9a829692487334cbdf9d727fbb5165a1d5ed7873c8cc7213adebae910e.scope.
Oct 14 05:49:20 np0005486808 podman[428749]: 2025-10-14 09:49:20.857638821 +0000 UTC m=+0.042976466 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:49:20 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:49:20 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/660f9e0bfcbbb5730655060b2bba22afc8df7eeac98d18e99c17b8ea7580f7c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:49:20 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/660f9e0bfcbbb5730655060b2bba22afc8df7eeac98d18e99c17b8ea7580f7c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:49:20 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/660f9e0bfcbbb5730655060b2bba22afc8df7eeac98d18e99c17b8ea7580f7c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:49:20 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/660f9e0bfcbbb5730655060b2bba22afc8df7eeac98d18e99c17b8ea7580f7c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:49:20 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/660f9e0bfcbbb5730655060b2bba22afc8df7eeac98d18e99c17b8ea7580f7c9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:49:21 np0005486808 podman[428749]: 2025-10-14 09:49:21.006436672 +0000 UTC m=+0.191774257 container init 92986a9a829692487334cbdf9d727fbb5165a1d5ed7873c8cc7213adebae910e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_murdock, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:49:21 np0005486808 podman[428749]: 2025-10-14 09:49:21.024222658 +0000 UTC m=+0.209560263 container start 92986a9a829692487334cbdf9d727fbb5165a1d5ed7873c8cc7213adebae910e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 14 05:49:21 np0005486808 podman[428749]: 2025-10-14 09:49:21.028670427 +0000 UTC m=+0.214007992 container attach 92986a9a829692487334cbdf9d727fbb5165a1d5ed7873c8cc7213adebae910e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_murdock, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 05:49:21 np0005486808 nova_compute[259627]: 2025-10-14 09:49:21.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:49:22 np0005486808 vibrant_murdock[428766]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:49:22 np0005486808 vibrant_murdock[428766]: --> relative data size: 1.0
Oct 14 05:49:22 np0005486808 vibrant_murdock[428766]: --> All data devices are unavailable
Oct 14 05:49:22 np0005486808 systemd[1]: libpod-92986a9a829692487334cbdf9d727fbb5165a1d5ed7873c8cc7213adebae910e.scope: Deactivated successfully.
Oct 14 05:49:22 np0005486808 podman[428749]: 2025-10-14 09:49:22.199845735 +0000 UTC m=+1.385183380 container died 92986a9a829692487334cbdf9d727fbb5165a1d5ed7873c8cc7213adebae910e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_murdock, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 14 05:49:22 np0005486808 systemd[1]: libpod-92986a9a829692487334cbdf9d727fbb5165a1d5ed7873c8cc7213adebae910e.scope: Consumed 1.135s CPU time.
Oct 14 05:49:22 np0005486808 systemd[1]: var-lib-containers-storage-overlay-660f9e0bfcbbb5730655060b2bba22afc8df7eeac98d18e99c17b8ea7580f7c9-merged.mount: Deactivated successfully.
Oct 14 05:49:22 np0005486808 podman[428749]: 2025-10-14 09:49:22.287357812 +0000 UTC m=+1.472695417 container remove 92986a9a829692487334cbdf9d727fbb5165a1d5ed7873c8cc7213adebae910e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_murdock, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 14 05:49:22 np0005486808 systemd[1]: libpod-conmon-92986a9a829692487334cbdf9d727fbb5165a1d5ed7873c8cc7213adebae910e.scope: Deactivated successfully.
Oct 14 05:49:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2893: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:49:22 np0005486808 podman[428948]: 2025-10-14 09:49:22.999460505 +0000 UTC m=+0.038804383 container create 78a3dd4a175b82380591c653a784baf46bdfa7f5cdf0bcfec4b5da2bf1f6d683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:49:23 np0005486808 systemd[1]: Started libpod-conmon-78a3dd4a175b82380591c653a784baf46bdfa7f5cdf0bcfec4b5da2bf1f6d683.scope.
Oct 14 05:49:23 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:49:23 np0005486808 podman[428948]: 2025-10-14 09:49:22.982616182 +0000 UTC m=+0.021960060 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:49:23 np0005486808 podman[428948]: 2025-10-14 09:49:23.08280207 +0000 UTC m=+0.122146018 container init 78a3dd4a175b82380591c653a784baf46bdfa7f5cdf0bcfec4b5da2bf1f6d683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_roentgen, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:49:23 np0005486808 podman[428948]: 2025-10-14 09:49:23.09217526 +0000 UTC m=+0.131519128 container start 78a3dd4a175b82380591c653a784baf46bdfa7f5cdf0bcfec4b5da2bf1f6d683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_roentgen, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:49:23 np0005486808 podman[428948]: 2025-10-14 09:49:23.096709951 +0000 UTC m=+0.136053849 container attach 78a3dd4a175b82380591c653a784baf46bdfa7f5cdf0bcfec4b5da2bf1f6d683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 14 05:49:23 np0005486808 xenodochial_roentgen[428964]: 167 167
Oct 14 05:49:23 np0005486808 systemd[1]: libpod-78a3dd4a175b82380591c653a784baf46bdfa7f5cdf0bcfec4b5da2bf1f6d683.scope: Deactivated successfully.
Oct 14 05:49:23 np0005486808 podman[428948]: 2025-10-14 09:49:23.099146381 +0000 UTC m=+0.138490249 container died 78a3dd4a175b82380591c653a784baf46bdfa7f5cdf0bcfec4b5da2bf1f6d683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_roentgen, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:49:23 np0005486808 systemd[1]: var-lib-containers-storage-overlay-970cbae057c94ab4d02ca170ba516a54eb5b753ce4a61ba2e02ec8b7c0739d67-merged.mount: Deactivated successfully.
Oct 14 05:49:23 np0005486808 podman[428948]: 2025-10-14 09:49:23.13699496 +0000 UTC m=+0.176338828 container remove 78a3dd4a175b82380591c653a784baf46bdfa7f5cdf0bcfec4b5da2bf1f6d683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_roentgen, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:49:23 np0005486808 systemd[1]: libpod-conmon-78a3dd4a175b82380591c653a784baf46bdfa7f5cdf0bcfec4b5da2bf1f6d683.scope: Deactivated successfully.
Oct 14 05:49:23 np0005486808 nova_compute[259627]: 2025-10-14 09:49:23.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:49:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:49:23 np0005486808 podman[428988]: 2025-10-14 09:49:23.328178561 +0000 UTC m=+0.056711033 container create 43e6e534a6b28b30b24091e3e11e286142f536c3654db9e4bf2001a812b9a84b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mcnulty, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 14 05:49:23 np0005486808 systemd[1]: Started libpod-conmon-43e6e534a6b28b30b24091e3e11e286142f536c3654db9e4bf2001a812b9a84b.scope.
Oct 14 05:49:23 np0005486808 podman[428988]: 2025-10-14 09:49:23.305955726 +0000 UTC m=+0.034488168 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:49:23 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:49:23 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/339d1da38570b8f39fc746b702e9254390eccb4c4b034eb7931a552503d708ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:49:23 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/339d1da38570b8f39fc746b702e9254390eccb4c4b034eb7931a552503d708ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:49:23 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/339d1da38570b8f39fc746b702e9254390eccb4c4b034eb7931a552503d708ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:49:23 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/339d1da38570b8f39fc746b702e9254390eccb4c4b034eb7931a552503d708ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:49:23 np0005486808 podman[428988]: 2025-10-14 09:49:23.437111494 +0000 UTC m=+0.165644036 container init 43e6e534a6b28b30b24091e3e11e286142f536c3654db9e4bf2001a812b9a84b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:49:23 np0005486808 podman[428988]: 2025-10-14 09:49:23.452973193 +0000 UTC m=+0.181505695 container start 43e6e534a6b28b30b24091e3e11e286142f536c3654db9e4bf2001a812b9a84b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mcnulty, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 14 05:49:23 np0005486808 podman[428988]: 2025-10-14 09:49:23.45731988 +0000 UTC m=+0.185852412 container attach 43e6e534a6b28b30b24091e3e11e286142f536c3654db9e4bf2001a812b9a84b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mcnulty, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]: {
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:    "0": [
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:        {
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:            "devices": [
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:                "/dev/loop3"
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:            ],
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:            "lv_name": "ceph_lv0",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:            "lv_size": "21470642176",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:            "name": "ceph_lv0",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:            "tags": {
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:                "ceph.cluster_name": "ceph",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:                "ceph.crush_device_class": "",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:                "ceph.encrypted": "0",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:                "ceph.osd_id": "0",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:                "ceph.type": "block",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:                "ceph.vdo": "0"
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:            },
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:            "type": "block",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:            "vg_name": "ceph_vg0"
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:        }
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:    ],
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:    "1": [
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:        {
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:            "devices": [
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:                "/dev/loop4"
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:            ],
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:            "lv_name": "ceph_lv1",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:            "lv_size": "21470642176",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:            "name": "ceph_lv1",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:            "tags": {
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:                "ceph.cluster_name": "ceph",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:                "ceph.crush_device_class": "",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:                "ceph.encrypted": "0",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:                "ceph.osd_id": "1",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:                "ceph.type": "block",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:                "ceph.vdo": "0"
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:            },
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:            "type": "block",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:            "vg_name": "ceph_vg1"
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:        }
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:    ],
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:    "2": [
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:        {
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:            "devices": [
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:                "/dev/loop5"
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:            ],
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:            "lv_name": "ceph_lv2",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:            "lv_size": "21470642176",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:            "name": "ceph_lv2",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:            "tags": {
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:                "ceph.cluster_name": "ceph",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:                "ceph.crush_device_class": "",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:                "ceph.encrypted": "0",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:                "ceph.osd_id": "2",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:                "ceph.type": "block",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:                "ceph.vdo": "0"
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:            },
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:            "type": "block",
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:            "vg_name": "ceph_vg2"
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:        }
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]:    ]
Oct 14 05:49:24 np0005486808 confident_mcnulty[429004]: }
Oct 14 05:49:24 np0005486808 systemd[1]: libpod-43e6e534a6b28b30b24091e3e11e286142f536c3654db9e4bf2001a812b9a84b.scope: Deactivated successfully.
Oct 14 05:49:24 np0005486808 podman[428988]: 2025-10-14 09:49:24.245259174 +0000 UTC m=+0.973791696 container died 43e6e534a6b28b30b24091e3e11e286142f536c3654db9e4bf2001a812b9a84b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mcnulty, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:49:24 np0005486808 systemd[1]: var-lib-containers-storage-overlay-339d1da38570b8f39fc746b702e9254390eccb4c4b034eb7931a552503d708ad-merged.mount: Deactivated successfully.
Oct 14 05:49:24 np0005486808 podman[428988]: 2025-10-14 09:49:24.323781479 +0000 UTC m=+1.052313951 container remove 43e6e534a6b28b30b24091e3e11e286142f536c3654db9e4bf2001a812b9a84b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mcnulty, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 14 05:49:24 np0005486808 systemd[1]: libpod-conmon-43e6e534a6b28b30b24091e3e11e286142f536c3654db9e4bf2001a812b9a84b.scope: Deactivated successfully.
Oct 14 05:49:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2894: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:49:25 np0005486808 podman[429168]: 2025-10-14 09:49:25.206587521 +0000 UTC m=+0.067654361 container create f99dce75b6e9af29600bd2acbe1724ca2bb977733eafb3aec6e7f01b65c51187 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_edison, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:49:25 np0005486808 systemd[1]: Started libpod-conmon-f99dce75b6e9af29600bd2acbe1724ca2bb977733eafb3aec6e7f01b65c51187.scope.
Oct 14 05:49:25 np0005486808 podman[429168]: 2025-10-14 09:49:25.178952093 +0000 UTC m=+0.040018983 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:49:25 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:49:25 np0005486808 podman[429168]: 2025-10-14 09:49:25.312353696 +0000 UTC m=+0.173420576 container init f99dce75b6e9af29600bd2acbe1724ca2bb977733eafb3aec6e7f01b65c51187 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_edison, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 14 05:49:25 np0005486808 podman[429168]: 2025-10-14 09:49:25.324285919 +0000 UTC m=+0.185352749 container start f99dce75b6e9af29600bd2acbe1724ca2bb977733eafb3aec6e7f01b65c51187 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_edison, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:49:25 np0005486808 podman[429168]: 2025-10-14 09:49:25.32797324 +0000 UTC m=+0.189040070 container attach f99dce75b6e9af29600bd2acbe1724ca2bb977733eafb3aec6e7f01b65c51187 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_edison, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 14 05:49:25 np0005486808 laughing_edison[429184]: 167 167
Oct 14 05:49:25 np0005486808 systemd[1]: libpod-f99dce75b6e9af29600bd2acbe1724ca2bb977733eafb3aec6e7f01b65c51187.scope: Deactivated successfully.
Oct 14 05:49:25 np0005486808 podman[429168]: 2025-10-14 09:49:25.334417508 +0000 UTC m=+0.195484338 container died f99dce75b6e9af29600bd2acbe1724ca2bb977733eafb3aec6e7f01b65c51187 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 14 05:49:25 np0005486808 systemd[1]: var-lib-containers-storage-overlay-54ff7967ae7b59f4a2654c5f1ba7bca4a52a167b5c036d6b7c4dae6ca451d54c-merged.mount: Deactivated successfully.
Oct 14 05:49:25 np0005486808 podman[429168]: 2025-10-14 09:49:25.382906197 +0000 UTC m=+0.243973007 container remove f99dce75b6e9af29600bd2acbe1724ca2bb977733eafb3aec6e7f01b65c51187 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_edison, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 05:49:25 np0005486808 systemd[1]: libpod-conmon-f99dce75b6e9af29600bd2acbe1724ca2bb977733eafb3aec6e7f01b65c51187.scope: Deactivated successfully.
Oct 14 05:49:25 np0005486808 podman[429209]: 2025-10-14 09:49:25.594718385 +0000 UTC m=+0.072944801 container create b979640a6a4360021636bbdadb5f1998b3ade43d9846ad647ec03b5e5114bdc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 14 05:49:25 np0005486808 systemd[1]: Started libpod-conmon-b979640a6a4360021636bbdadb5f1998b3ade43d9846ad647ec03b5e5114bdc9.scope.
Oct 14 05:49:25 np0005486808 podman[429209]: 2025-10-14 09:49:25.566598215 +0000 UTC m=+0.044824671 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:49:25 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:49:25 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c2c6fe91aeaad4c14cd6bbd814243a8f0af1c81ef8ef5ac136990497dbaeac9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:49:25 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c2c6fe91aeaad4c14cd6bbd814243a8f0af1c81ef8ef5ac136990497dbaeac9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:49:25 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c2c6fe91aeaad4c14cd6bbd814243a8f0af1c81ef8ef5ac136990497dbaeac9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:49:25 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c2c6fe91aeaad4c14cd6bbd814243a8f0af1c81ef8ef5ac136990497dbaeac9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:49:25 np0005486808 podman[429209]: 2025-10-14 09:49:25.697583139 +0000 UTC m=+0.175809615 container init b979640a6a4360021636bbdadb5f1998b3ade43d9846ad647ec03b5e5114bdc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_chandrasekhar, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:49:25 np0005486808 podman[429209]: 2025-10-14 09:49:25.711565262 +0000 UTC m=+0.189791668 container start b979640a6a4360021636bbdadb5f1998b3ade43d9846ad647ec03b5e5114bdc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_chandrasekhar, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 14 05:49:25 np0005486808 podman[429209]: 2025-10-14 09:49:25.71595815 +0000 UTC m=+0.194184566 container attach b979640a6a4360021636bbdadb5f1998b3ade43d9846ad647ec03b5e5114bdc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_chandrasekhar, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:49:26 np0005486808 nova_compute[259627]: 2025-10-14 09:49:26.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:49:26 np0005486808 sleepy_chandrasekhar[429225]: {
Oct 14 05:49:26 np0005486808 sleepy_chandrasekhar[429225]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:49:26 np0005486808 sleepy_chandrasekhar[429225]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:49:26 np0005486808 sleepy_chandrasekhar[429225]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:49:26 np0005486808 sleepy_chandrasekhar[429225]:        "osd_id": 2,
Oct 14 05:49:26 np0005486808 sleepy_chandrasekhar[429225]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:49:26 np0005486808 sleepy_chandrasekhar[429225]:        "type": "bluestore"
Oct 14 05:49:26 np0005486808 sleepy_chandrasekhar[429225]:    },
Oct 14 05:49:26 np0005486808 sleepy_chandrasekhar[429225]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:49:26 np0005486808 sleepy_chandrasekhar[429225]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:49:26 np0005486808 sleepy_chandrasekhar[429225]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:49:26 np0005486808 sleepy_chandrasekhar[429225]:        "osd_id": 1,
Oct 14 05:49:26 np0005486808 sleepy_chandrasekhar[429225]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:49:26 np0005486808 sleepy_chandrasekhar[429225]:        "type": "bluestore"
Oct 14 05:49:26 np0005486808 sleepy_chandrasekhar[429225]:    },
Oct 14 05:49:26 np0005486808 sleepy_chandrasekhar[429225]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:49:26 np0005486808 sleepy_chandrasekhar[429225]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:49:26 np0005486808 sleepy_chandrasekhar[429225]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:49:26 np0005486808 sleepy_chandrasekhar[429225]:        "osd_id": 0,
Oct 14 05:49:26 np0005486808 sleepy_chandrasekhar[429225]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:49:26 np0005486808 sleepy_chandrasekhar[429225]:        "type": "bluestore"
Oct 14 05:49:26 np0005486808 sleepy_chandrasekhar[429225]:    }
Oct 14 05:49:26 np0005486808 sleepy_chandrasekhar[429225]: }
Oct 14 05:49:26 np0005486808 systemd[1]: libpod-b979640a6a4360021636bbdadb5f1998b3ade43d9846ad647ec03b5e5114bdc9.scope: Deactivated successfully.
Oct 14 05:49:26 np0005486808 systemd[1]: libpod-b979640a6a4360021636bbdadb5f1998b3ade43d9846ad647ec03b5e5114bdc9.scope: Consumed 1.128s CPU time.
Oct 14 05:49:26 np0005486808 podman[429209]: 2025-10-14 09:49:26.834439154 +0000 UTC m=+1.312665590 container died b979640a6a4360021636bbdadb5f1998b3ade43d9846ad647ec03b5e5114bdc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_chandrasekhar, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:49:26 np0005486808 systemd[1]: var-lib-containers-storage-overlay-4c2c6fe91aeaad4c14cd6bbd814243a8f0af1c81ef8ef5ac136990497dbaeac9-merged.mount: Deactivated successfully.
Oct 14 05:49:26 np0005486808 podman[429209]: 2025-10-14 09:49:26.896915577 +0000 UTC m=+1.375141963 container remove b979640a6a4360021636bbdadb5f1998b3ade43d9846ad647ec03b5e5114bdc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_chandrasekhar, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 14 05:49:26 np0005486808 systemd[1]: libpod-conmon-b979640a6a4360021636bbdadb5f1998b3ade43d9846ad647ec03b5e5114bdc9.scope: Deactivated successfully.
Oct 14 05:49:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2895: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:49:26 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:49:26 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:49:26 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:49:26 np0005486808 podman[429261]: 2025-10-14 09:49:26.951668351 +0000 UTC m=+0.074285634 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:49:26 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:49:26 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev b53546ac-bacb-4a68-92f8-5a5f08b58239 does not exist
Oct 14 05:49:26 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 59ea3e8c-c5b2-46d2-a68b-8b3c87ee1119 does not exist
Oct 14 05:49:27 np0005486808 podman[429259]: 2025-10-14 09:49:27.031869799 +0000 UTC m=+0.156681046 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 05:49:27 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:49:27 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:49:28 np0005486808 nova_compute[259627]: 2025-10-14 09:49:28.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:49:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:49:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2896: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:49:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2897: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s rd, 255 B/s wr, 2 op/s
Oct 14 05:49:30 np0005486808 nova_compute[259627]: 2025-10-14 09:49:30.995 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:49:30 np0005486808 nova_compute[259627]: 2025-10-14 09:49:30.995 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 14 05:49:31 np0005486808 nova_compute[259627]: 2025-10-14 09:49:31.013 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 14 05:49:31 np0005486808 nova_compute[259627]: 2025-10-14 09:49:31.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:49:31 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e298 do_prune osdmap full prune enabled
Oct 14 05:49:31 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e299 e299: 3 total, 3 up, 3 in
Oct 14 05:49:31 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e299: 3 total, 3 up, 3 in
Oct 14 05:49:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:49:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:49:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:49:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:49:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:49:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:49:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:49:32
Oct 14 05:49:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:49:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:49:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['images', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'volumes', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'vms']
Oct 14 05:49:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:49:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2899: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 2.4 KiB/s rd, 307 B/s wr, 3 op/s
Oct 14 05:49:33 np0005486808 nova_compute[259627]: 2025-10-14 09:49:33.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:49:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:49:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:49:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:49:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:49:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:49:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:49:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:49:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:49:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:49:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:49:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:49:34 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e299 do_prune osdmap full prune enabled
Oct 14 05:49:34 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e300 e300: 3 total, 3 up, 3 in
Oct 14 05:49:34 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e300: 3 total, 3 up, 3 in
Oct 14 05:49:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2901: 305 pgs: 305 active+clean; 458 KiB data, 987 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 383 B/s wr, 3 op/s
Oct 14 05:49:36 np0005486808 nova_compute[259627]: 2025-10-14 09:49:36.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:49:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2902: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Oct 14 05:49:38 np0005486808 nova_compute[259627]: 2025-10-14 09:49:38.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:49:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:49:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2903: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 5.1 MiB/s wr, 43 op/s
Oct 14 05:49:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2904: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 4.6 MiB/s wr, 39 op/s
Oct 14 05:49:41 np0005486808 nova_compute[259627]: 2025-10-14 09:49:41.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:49:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2905: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 4.1 MiB/s wr, 35 op/s
Oct 14 05:49:43 np0005486808 nova_compute[259627]: 2025-10-14 09:49:43.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:49:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:49:43 np0005486808 podman[429360]: 2025-10-14 09:49:43.709648744 +0000 UTC m=+0.108098864 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:49:43 np0005486808 podman[429361]: 2025-10-14 09:49:43.711600532 +0000 UTC m=+0.108649697 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible)
Oct 14 05:49:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:49:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:49:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:49:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:49:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 05:49:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:49:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:49:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:49:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:49:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:49:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 14 05:49:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:49:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:49:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:49:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:49:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:49:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:49:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:49:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:49:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:49:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:49:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:49:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:49:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2906: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 3.7 MiB/s wr, 32 op/s
Oct 14 05:49:46 np0005486808 nova_compute[259627]: 2025-10-14 09:49:46.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:49:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2907: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 3.4 MiB/s wr, 29 op/s
Oct 14 05:49:48 np0005486808 nova_compute[259627]: 2025-10-14 09:49:48.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:49:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:49:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2908: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:49:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2909: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:49:51 np0005486808 nova_compute[259627]: 2025-10-14 09:49:51.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:49:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2910: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:49:53 np0005486808 nova_compute[259627]: 2025-10-14 09:49:53.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:49:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:49:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:49:53.948 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:49:53 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:49:53.949 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 14 05:49:54 np0005486808 nova_compute[259627]: 2025-10-14 09:49:54.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:49:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2911: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:49:55 np0005486808 nova_compute[259627]: 2025-10-14 09:49:55.996 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:49:56 np0005486808 nova_compute[259627]: 2025-10-14 09:49:56.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:49:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2912: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:49:57 np0005486808 podman[429401]: 2025-10-14 09:49:57.677218495 +0000 UTC m=+0.081928161 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 14 05:49:57 np0005486808 podman[429400]: 2025-10-14 09:49:57.726519355 +0000 UTC m=+0.136744766 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 14 05:49:58 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #141. Immutable memtables: 0.
Oct 14 05:49:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:49:58.158420) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 05:49:58 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 141
Oct 14 05:49:58 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435398158474, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 2001, "num_deletes": 254, "total_data_size": 3309233, "memory_usage": 3361280, "flush_reason": "Manual Compaction"}
Oct 14 05:49:58 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #142: started
Oct 14 05:49:58 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435398179300, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 142, "file_size": 3243289, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 59129, "largest_seqno": 61129, "table_properties": {"data_size": 3234029, "index_size": 5881, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 18516, "raw_average_key_size": 20, "raw_value_size": 3215594, "raw_average_value_size": 3518, "num_data_blocks": 261, "num_entries": 914, "num_filter_entries": 914, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760435184, "oldest_key_time": 1760435184, "file_creation_time": 1760435398, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 142, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:49:58 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 20940 microseconds, and 14496 cpu microseconds.
Oct 14 05:49:58 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:49:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:49:58.179358) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #142: 3243289 bytes OK
Oct 14 05:49:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:49:58.179383) [db/memtable_list.cc:519] [default] Level-0 commit table #142 started
Oct 14 05:49:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:49:58.181405) [db/memtable_list.cc:722] [default] Level-0 commit table #142: memtable #1 done
Oct 14 05:49:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:49:58.181428) EVENT_LOG_v1 {"time_micros": 1760435398181420, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 05:49:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:49:58.181450) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 05:49:58 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 3300828, prev total WAL file size 3300828, number of live WAL files 2.
Oct 14 05:49:58 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000138.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:49:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:49:58.182980) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Oct 14 05:49:58 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 05:49:58 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [142(3167KB)], [140(8097KB)]
Oct 14 05:49:58 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435398183072, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [142], "files_L6": [140], "score": -1, "input_data_size": 11535342, "oldest_snapshot_seqno": -1}
Oct 14 05:49:58 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #143: 8075 keys, 9812094 bytes, temperature: kUnknown
Oct 14 05:49:58 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435398250160, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 143, "file_size": 9812094, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9760361, "index_size": 30436, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20229, "raw_key_size": 210497, "raw_average_key_size": 26, "raw_value_size": 9618436, "raw_average_value_size": 1191, "num_data_blocks": 1183, "num_entries": 8075, "num_filter_entries": 8075, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760435398, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:49:58 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:49:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:49:58.250427) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 9812094 bytes
Oct 14 05:49:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:49:58.251861) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.7 rd, 146.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 7.9 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(6.6) write-amplify(3.0) OK, records in: 8598, records dropped: 523 output_compression: NoCompression
Oct 14 05:49:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:49:58.251879) EVENT_LOG_v1 {"time_micros": 1760435398251870, "job": 86, "event": "compaction_finished", "compaction_time_micros": 67179, "compaction_time_cpu_micros": 45652, "output_level": 6, "num_output_files": 1, "total_output_size": 9812094, "num_input_records": 8598, "num_output_records": 8075, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 05:49:58 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000142.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:49:58 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435398252806, "job": 86, "event": "table_file_deletion", "file_number": 142}
Oct 14 05:49:58 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:49:58 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435398254755, "job": 86, "event": "table_file_deletion", "file_number": 140}
Oct 14 05:49:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:49:58.182821) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:49:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:49:58.254810) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:49:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:49:58.254814) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:49:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:49:58.254816) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:49:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:49:58.254818) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:49:58 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:49:58.254820) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:49:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:49:58 np0005486808 nova_compute[259627]: 2025-10-14 09:49:58.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:49:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2913: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:49:59 np0005486808 nova_compute[259627]: 2025-10-14 09:49:59.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:50:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2914: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.1 KiB/s rd, 511 B/s wr, 6 op/s
Oct 14 05:50:01 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e300 do_prune osdmap full prune enabled
Oct 14 05:50:01 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e301 e301: 3 total, 3 up, 3 in
Oct 14 05:50:01 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e301: 3 total, 3 up, 3 in
Oct 14 05:50:01 np0005486808 nova_compute[259627]: 2025-10-14 09:50:01.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:50:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:50:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:50:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:50:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:50:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:50:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:50:02 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:50:02.952 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:50:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2916: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.1 KiB/s rd, 614 B/s wr, 8 op/s
Oct 14 05:50:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:50:03 np0005486808 nova_compute[259627]: 2025-10-14 09:50:03.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:50:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2917: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.1 KiB/s rd, 614 B/s wr, 8 op/s
Oct 14 05:50:04 np0005486808 nova_compute[259627]: 2025-10-14 09:50:04.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:50:05 np0005486808 nova_compute[259627]: 2025-10-14 09:50:05.012 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:50:05 np0005486808 nova_compute[259627]: 2025-10-14 09:50:05.013 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:50:05 np0005486808 nova_compute[259627]: 2025-10-14 09:50:05.013 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:50:05 np0005486808 nova_compute[259627]: 2025-10-14 09:50:05.013 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:50:05 np0005486808 nova_compute[259627]: 2025-10-14 09:50:05.014 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:50:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:50:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/614101391' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:50:05 np0005486808 nova_compute[259627]: 2025-10-14 09:50:05.490 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:50:05 np0005486808 nova_compute[259627]: 2025-10-14 09:50:05.617 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:50:05 np0005486808 nova_compute[259627]: 2025-10-14 09:50:05.618 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3630MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:50:05 np0005486808 nova_compute[259627]: 2025-10-14 09:50:05.619 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:50:05 np0005486808 nova_compute[259627]: 2025-10-14 09:50:05.619 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:50:05 np0005486808 nova_compute[259627]: 2025-10-14 09:50:05.709 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:50:05 np0005486808 nova_compute[259627]: 2025-10-14 09:50:05.709 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:50:05 np0005486808 nova_compute[259627]: 2025-10-14 09:50:05.728 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing inventories for resource provider 92105e1d-1743-46e3-a494-858b4331398a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 14 05:50:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:50:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1514776378' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:50:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:50:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1514776378' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:50:05 np0005486808 nova_compute[259627]: 2025-10-14 09:50:05.745 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Updating ProviderTree inventory for provider 92105e1d-1743-46e3-a494-858b4331398a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 14 05:50:05 np0005486808 nova_compute[259627]: 2025-10-14 09:50:05.746 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Updating inventory in ProviderTree for provider 92105e1d-1743-46e3-a494-858b4331398a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 14 05:50:05 np0005486808 nova_compute[259627]: 2025-10-14 09:50:05.765 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing aggregate associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 14 05:50:05 np0005486808 nova_compute[259627]: 2025-10-14 09:50:05.788 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing trait associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, traits: COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_ACCELERATORS,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 14 05:50:05 np0005486808 nova_compute[259627]: 2025-10-14 09:50:05.808 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:50:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:50:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1839488715' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:50:06 np0005486808 nova_compute[259627]: 2025-10-14 09:50:06.250 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:50:06 np0005486808 nova_compute[259627]: 2025-10-14 09:50:06.258 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:50:06 np0005486808 nova_compute[259627]: 2025-10-14 09:50:06.282 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:50:06 np0005486808 nova_compute[259627]: 2025-10-14 09:50:06.285 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:50:06 np0005486808 nova_compute[259627]: 2025-10-14 09:50:06.285 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:50:06 np0005486808 nova_compute[259627]: 2025-10-14 09:50:06.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:50:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2918: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 14 05:50:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:50:07.067 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:50:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:50:07.068 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:50:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:50:07.068 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:50:08 np0005486808 nova_compute[259627]: 2025-10-14 09:50:08.286 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:50:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:50:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e301 do_prune osdmap full prune enabled
Oct 14 05:50:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 e302: 3 total, 3 up, 3 in
Oct 14 05:50:08 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e302: 3 total, 3 up, 3 in
Oct 14 05:50:08 np0005486808 nova_compute[259627]: 2025-10-14 09:50:08.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:50:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2920: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1023 B/s wr, 20 op/s
Oct 14 05:50:08 np0005486808 nova_compute[259627]: 2025-10-14 09:50:08.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:50:08 np0005486808 nova_compute[259627]: 2025-10-14 09:50:08.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:50:08 np0005486808 nova_compute[259627]: 2025-10-14 09:50:08.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 05:50:09 np0005486808 nova_compute[259627]: 2025-10-14 09:50:09.006 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 14 05:50:10 np0005486808 nova_compute[259627]: 2025-10-14 09:50:10.001 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:50:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2921: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 838 B/s wr, 17 op/s
Oct 14 05:50:11 np0005486808 nova_compute[259627]: 2025-10-14 09:50:11.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:50:11 np0005486808 nova_compute[259627]: 2025-10-14 09:50:11.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:50:11 np0005486808 nova_compute[259627]: 2025-10-14 09:50:11.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:50:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2922: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Oct 14 05:50:12 np0005486808 nova_compute[259627]: 2025-10-14 09:50:12.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:50:12 np0005486808 nova_compute[259627]: 2025-10-14 09:50:12.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:50:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:50:13 np0005486808 nova_compute[259627]: 2025-10-14 09:50:13.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:50:14 np0005486808 podman[429489]: 2025-10-14 09:50:14.667702192 +0000 UTC m=+0.071511316 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 14 05:50:14 np0005486808 podman[429488]: 2025-10-14 09:50:14.704462894 +0000 UTC m=+0.113474676 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd)
Oct 14 05:50:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2923: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Oct 14 05:50:16 np0005486808 nova_compute[259627]: 2025-10-14 09:50:16.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:50:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2924: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:50:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:50:18 np0005486808 nova_compute[259627]: 2025-10-14 09:50:18.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:50:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2925: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:50:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2926: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:50:21 np0005486808 nova_compute[259627]: 2025-10-14 09:50:21.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:50:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2927: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:50:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:50:23 np0005486808 nova_compute[259627]: 2025-10-14 09:50:23.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:50:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2928: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:50:26 np0005486808 nova_compute[259627]: 2025-10-14 09:50:26.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:50:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2929: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:50:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:50:28 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:50:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:50:28 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:50:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:50:28 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:50:28 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev adcb5ea5-b51e-40cf-91e2-46f429deda08 does not exist
Oct 14 05:50:28 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 6850cd8a-4082-41c8-a009-94c7e25c50e9 does not exist
Oct 14 05:50:28 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev ad342af6-0a29-4a35-96f1-eb776736fdfe does not exist
Oct 14 05:50:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:50:28 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:50:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:50:28 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:50:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:50:28 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:50:28 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:50:28 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:50:28 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:50:28 np0005486808 podman[429682]: 2025-10-14 09:50:28.219281968 +0000 UTC m=+0.071406313 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:50:28 np0005486808 podman[429681]: 2025-10-14 09:50:28.287952503 +0000 UTC m=+0.137524605 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009)
Oct 14 05:50:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:50:28 np0005486808 nova_compute[259627]: 2025-10-14 09:50:28.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:50:28 np0005486808 podman[429844]: 2025-10-14 09:50:28.735399311 +0000 UTC m=+0.067641699 container create d278bd26f90dced9fbe4e5e05d2d3b2917181a2e7fccd4f68d4ced5644db1fb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_leavitt, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:50:28 np0005486808 systemd[1]: Started libpod-conmon-d278bd26f90dced9fbe4e5e05d2d3b2917181a2e7fccd4f68d4ced5644db1fb6.scope.
Oct 14 05:50:28 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:50:28 np0005486808 podman[429844]: 2025-10-14 09:50:28.709221009 +0000 UTC m=+0.041463417 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:50:28 np0005486808 podman[429844]: 2025-10-14 09:50:28.813213781 +0000 UTC m=+0.145456219 container init d278bd26f90dced9fbe4e5e05d2d3b2917181a2e7fccd4f68d4ced5644db1fb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_leavitt, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:50:28 np0005486808 podman[429844]: 2025-10-14 09:50:28.82172903 +0000 UTC m=+0.153971428 container start d278bd26f90dced9fbe4e5e05d2d3b2917181a2e7fccd4f68d4ced5644db1fb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 14 05:50:28 np0005486808 podman[429844]: 2025-10-14 09:50:28.825917473 +0000 UTC m=+0.158159921 container attach d278bd26f90dced9fbe4e5e05d2d3b2917181a2e7fccd4f68d4ced5644db1fb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_leavitt, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:50:28 np0005486808 frosty_leavitt[429860]: 167 167
Oct 14 05:50:28 np0005486808 systemd[1]: libpod-d278bd26f90dced9fbe4e5e05d2d3b2917181a2e7fccd4f68d4ced5644db1fb6.scope: Deactivated successfully.
Oct 14 05:50:28 np0005486808 conmon[429860]: conmon d278bd26f90dced9fbe4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d278bd26f90dced9fbe4e5e05d2d3b2917181a2e7fccd4f68d4ced5644db1fb6.scope/container/memory.events
Oct 14 05:50:28 np0005486808 podman[429844]: 2025-10-14 09:50:28.829113941 +0000 UTC m=+0.161356379 container died d278bd26f90dced9fbe4e5e05d2d3b2917181a2e7fccd4f68d4ced5644db1fb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:50:28 np0005486808 systemd[1]: var-lib-containers-storage-overlay-c3c0ed4d24f93668d3baf0d258b5f2cce0b5d5d6dc181ffa7d8927bc69236faf-merged.mount: Deactivated successfully.
Oct 14 05:50:28 np0005486808 podman[429844]: 2025-10-14 09:50:28.879984029 +0000 UTC m=+0.212226397 container remove d278bd26f90dced9fbe4e5e05d2d3b2917181a2e7fccd4f68d4ced5644db1fb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_leavitt, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 14 05:50:28 np0005486808 systemd[1]: libpod-conmon-d278bd26f90dced9fbe4e5e05d2d3b2917181a2e7fccd4f68d4ced5644db1fb6.scope: Deactivated successfully.
Oct 14 05:50:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2930: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:50:29 np0005486808 podman[429885]: 2025-10-14 09:50:29.058574451 +0000 UTC m=+0.044837781 container create 02f4da9fad75150294303485843a1d04caf7fec73bc3614a071bdf376835f6d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_allen, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:50:29 np0005486808 systemd[1]: Started libpod-conmon-02f4da9fad75150294303485843a1d04caf7fec73bc3614a071bdf376835f6d5.scope.
Oct 14 05:50:29 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:50:29 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13e491368cc3b31e52cea7bdbae275714108f9a0265248ca96847659dedda8a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:50:29 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13e491368cc3b31e52cea7bdbae275714108f9a0265248ca96847659dedda8a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:50:29 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13e491368cc3b31e52cea7bdbae275714108f9a0265248ca96847659dedda8a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:50:29 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13e491368cc3b31e52cea7bdbae275714108f9a0265248ca96847659dedda8a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:50:29 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13e491368cc3b31e52cea7bdbae275714108f9a0265248ca96847659dedda8a6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:50:29 np0005486808 podman[429885]: 2025-10-14 09:50:29.035495775 +0000 UTC m=+0.021759115 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:50:29 np0005486808 podman[429885]: 2025-10-14 09:50:29.134502094 +0000 UTC m=+0.120765404 container init 02f4da9fad75150294303485843a1d04caf7fec73bc3614a071bdf376835f6d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:50:29 np0005486808 podman[429885]: 2025-10-14 09:50:29.14083114 +0000 UTC m=+0.127094450 container start 02f4da9fad75150294303485843a1d04caf7fec73bc3614a071bdf376835f6d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_allen, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 14 05:50:29 np0005486808 podman[429885]: 2025-10-14 09:50:29.144225623 +0000 UTC m=+0.130488933 container attach 02f4da9fad75150294303485843a1d04caf7fec73bc3614a071bdf376835f6d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_allen, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:50:30 np0005486808 ecstatic_allen[429901]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:50:30 np0005486808 ecstatic_allen[429901]: --> relative data size: 1.0
Oct 14 05:50:30 np0005486808 ecstatic_allen[429901]: --> All data devices are unavailable
Oct 14 05:50:30 np0005486808 systemd[1]: libpod-02f4da9fad75150294303485843a1d04caf7fec73bc3614a071bdf376835f6d5.scope: Deactivated successfully.
Oct 14 05:50:30 np0005486808 systemd[1]: libpod-02f4da9fad75150294303485843a1d04caf7fec73bc3614a071bdf376835f6d5.scope: Consumed 1.073s CPU time.
Oct 14 05:50:30 np0005486808 podman[429885]: 2025-10-14 09:50:30.26078083 +0000 UTC m=+1.247044160 container died 02f4da9fad75150294303485843a1d04caf7fec73bc3614a071bdf376835f6d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_allen, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:50:30 np0005486808 systemd[1]: var-lib-containers-storage-overlay-13e491368cc3b31e52cea7bdbae275714108f9a0265248ca96847659dedda8a6-merged.mount: Deactivated successfully.
Oct 14 05:50:30 np0005486808 podman[429885]: 2025-10-14 09:50:30.332822008 +0000 UTC m=+1.319085328 container remove 02f4da9fad75150294303485843a1d04caf7fec73bc3614a071bdf376835f6d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_allen, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:50:30 np0005486808 systemd[1]: libpod-conmon-02f4da9fad75150294303485843a1d04caf7fec73bc3614a071bdf376835f6d5.scope: Deactivated successfully.
Oct 14 05:50:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2931: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:50:30 np0005486808 podman[430083]: 2025-10-14 09:50:30.995291173 +0000 UTC m=+0.043800705 container create 96f3bcd629f9bb62aefa5a5ae66346b69be69e27ab594338cfce63c0ea363b68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 14 05:50:31 np0005486808 systemd[1]: Started libpod-conmon-96f3bcd629f9bb62aefa5a5ae66346b69be69e27ab594338cfce63c0ea363b68.scope.
Oct 14 05:50:31 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:50:31 np0005486808 podman[430083]: 2025-10-14 09:50:31.06363174 +0000 UTC m=+0.112141292 container init 96f3bcd629f9bb62aefa5a5ae66346b69be69e27ab594338cfce63c0ea363b68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:50:31 np0005486808 podman[430083]: 2025-10-14 09:50:31.071457302 +0000 UTC m=+0.119966824 container start 96f3bcd629f9bb62aefa5a5ae66346b69be69e27ab594338cfce63c0ea363b68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shaw, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 14 05:50:31 np0005486808 podman[430083]: 2025-10-14 09:50:30.978269406 +0000 UTC m=+0.026778988 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:50:31 np0005486808 podman[430083]: 2025-10-14 09:50:31.07462076 +0000 UTC m=+0.123130302 container attach 96f3bcd629f9bb62aefa5a5ae66346b69be69e27ab594338cfce63c0ea363b68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shaw, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:50:31 np0005486808 quirky_shaw[430100]: 167 167
Oct 14 05:50:31 np0005486808 systemd[1]: libpod-96f3bcd629f9bb62aefa5a5ae66346b69be69e27ab594338cfce63c0ea363b68.scope: Deactivated successfully.
Oct 14 05:50:31 np0005486808 podman[430083]: 2025-10-14 09:50:31.076341622 +0000 UTC m=+0.124851194 container died 96f3bcd629f9bb62aefa5a5ae66346b69be69e27ab594338cfce63c0ea363b68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:50:31 np0005486808 systemd[1]: var-lib-containers-storage-overlay-84bdeb626b9881c4629faed433cdbf7fcffd6e3d7171860d87e8e79b488ce9b0-merged.mount: Deactivated successfully.
Oct 14 05:50:31 np0005486808 podman[430083]: 2025-10-14 09:50:31.119382238 +0000 UTC m=+0.167891800 container remove 96f3bcd629f9bb62aefa5a5ae66346b69be69e27ab594338cfce63c0ea363b68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shaw, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Oct 14 05:50:31 np0005486808 systemd[1]: libpod-conmon-96f3bcd629f9bb62aefa5a5ae66346b69be69e27ab594338cfce63c0ea363b68.scope: Deactivated successfully.
Oct 14 05:50:31 np0005486808 podman[430124]: 2025-10-14 09:50:31.31180109 +0000 UTC m=+0.044576595 container create b0eae94752ba320dd4ef78621943bd675913ff6d54f91cc593d144ec516650f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:50:31 np0005486808 systemd[1]: Started libpod-conmon-b0eae94752ba320dd4ef78621943bd675913ff6d54f91cc593d144ec516650f6.scope.
Oct 14 05:50:31 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:50:31 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/244e858acf0e0040f446d835692700159c534a9db1ec5060f43832858388c779/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:50:31 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/244e858acf0e0040f446d835692700159c534a9db1ec5060f43832858388c779/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:50:31 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/244e858acf0e0040f446d835692700159c534a9db1ec5060f43832858388c779/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:50:31 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/244e858acf0e0040f446d835692700159c534a9db1ec5060f43832858388c779/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:50:31 np0005486808 podman[430124]: 2025-10-14 09:50:31.289891752 +0000 UTC m=+0.022667267 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:50:31 np0005486808 podman[430124]: 2025-10-14 09:50:31.394278993 +0000 UTC m=+0.127054548 container init b0eae94752ba320dd4ef78621943bd675913ff6d54f91cc593d144ec516650f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_fermat, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:50:31 np0005486808 podman[430124]: 2025-10-14 09:50:31.406115724 +0000 UTC m=+0.138891199 container start b0eae94752ba320dd4ef78621943bd675913ff6d54f91cc593d144ec516650f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_fermat, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:50:31 np0005486808 podman[430124]: 2025-10-14 09:50:31.413755911 +0000 UTC m=+0.146531386 container attach b0eae94752ba320dd4ef78621943bd675913ff6d54f91cc593d144ec516650f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 14 05:50:31 np0005486808 nova_compute[259627]: 2025-10-14 09:50:31.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]: {
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:    "0": [
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:        {
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:            "devices": [
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:                "/dev/loop3"
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:            ],
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:            "lv_name": "ceph_lv0",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:            "lv_size": "21470642176",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:            "name": "ceph_lv0",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:            "tags": {
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:                "ceph.cluster_name": "ceph",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:                "ceph.crush_device_class": "",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:                "ceph.encrypted": "0",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:                "ceph.osd_id": "0",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:                "ceph.type": "block",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:                "ceph.vdo": "0"
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:            },
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:            "type": "block",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:            "vg_name": "ceph_vg0"
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:        }
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:    ],
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:    "1": [
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:        {
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:            "devices": [
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:                "/dev/loop4"
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:            ],
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:            "lv_name": "ceph_lv1",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:            "lv_size": "21470642176",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:            "name": "ceph_lv1",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:            "tags": {
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:                "ceph.cluster_name": "ceph",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:                "ceph.crush_device_class": "",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:                "ceph.encrypted": "0",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:                "ceph.osd_id": "1",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:                "ceph.type": "block",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:                "ceph.vdo": "0"
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:            },
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:            "type": "block",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:            "vg_name": "ceph_vg1"
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:        }
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:    ],
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:    "2": [
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:        {
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:            "devices": [
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:                "/dev/loop5"
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:            ],
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:            "lv_name": "ceph_lv2",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:            "lv_size": "21470642176",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:            "name": "ceph_lv2",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:            "tags": {
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:                "ceph.cluster_name": "ceph",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:                "ceph.crush_device_class": "",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:                "ceph.encrypted": "0",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:                "ceph.osd_id": "2",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:                "ceph.type": "block",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:                "ceph.vdo": "0"
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:            },
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:            "type": "block",
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:            "vg_name": "ceph_vg2"
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:        }
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]:    ]
Oct 14 05:50:32 np0005486808 compassionate_fermat[430141]: }
Oct 14 05:50:32 np0005486808 systemd[1]: libpod-b0eae94752ba320dd4ef78621943bd675913ff6d54f91cc593d144ec516650f6.scope: Deactivated successfully.
Oct 14 05:50:32 np0005486808 podman[430124]: 2025-10-14 09:50:32.167520627 +0000 UTC m=+0.900296122 container died b0eae94752ba320dd4ef78621943bd675913ff6d54f91cc593d144ec516650f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_fermat, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 14 05:50:32 np0005486808 systemd[1]: var-lib-containers-storage-overlay-244e858acf0e0040f446d835692700159c534a9db1ec5060f43832858388c779-merged.mount: Deactivated successfully.
Oct 14 05:50:32 np0005486808 podman[430124]: 2025-10-14 09:50:32.228593555 +0000 UTC m=+0.961369030 container remove b0eae94752ba320dd4ef78621943bd675913ff6d54f91cc593d144ec516650f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_fermat, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:50:32 np0005486808 systemd[1]: libpod-conmon-b0eae94752ba320dd4ef78621943bd675913ff6d54f91cc593d144ec516650f6.scope: Deactivated successfully.
Oct 14 05:50:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:50:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:50:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:50:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:50:32 np0005486808 podman[430305]: 2025-10-14 09:50:32.818326035 +0000 UTC m=+0.054593111 container create c6f66c0cac9e1989f5fc1893b9eb5022d7a6650409318578b2978bc816de095a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 14 05:50:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:50:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:50:32 np0005486808 systemd[1]: Started libpod-conmon-c6f66c0cac9e1989f5fc1893b9eb5022d7a6650409318578b2978bc816de095a.scope.
Oct 14 05:50:32 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:50:32 np0005486808 podman[430305]: 2025-10-14 09:50:32.791214219 +0000 UTC m=+0.027481355 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:50:32 np0005486808 podman[430305]: 2025-10-14 09:50:32.895800426 +0000 UTC m=+0.132067502 container init c6f66c0cac9e1989f5fc1893b9eb5022d7a6650409318578b2978bc816de095a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_shirley, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 14 05:50:32 np0005486808 podman[430305]: 2025-10-14 09:50:32.901806963 +0000 UTC m=+0.138073999 container start c6f66c0cac9e1989f5fc1893b9eb5022d7a6650409318578b2978bc816de095a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 14 05:50:32 np0005486808 mystifying_shirley[430321]: 167 167
Oct 14 05:50:32 np0005486808 systemd[1]: libpod-c6f66c0cac9e1989f5fc1893b9eb5022d7a6650409318578b2978bc816de095a.scope: Deactivated successfully.
Oct 14 05:50:32 np0005486808 podman[430305]: 2025-10-14 09:50:32.907500463 +0000 UTC m=+0.143767509 container attach c6f66c0cac9e1989f5fc1893b9eb5022d7a6650409318578b2978bc816de095a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_shirley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 14 05:50:32 np0005486808 podman[430305]: 2025-10-14 09:50:32.907962814 +0000 UTC m=+0.144229890 container died c6f66c0cac9e1989f5fc1893b9eb5022d7a6650409318578b2978bc816de095a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_shirley, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Oct 14 05:50:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:50:32
Oct 14 05:50:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:50:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:50:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['vms', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'volumes', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'backups', '.rgw.root']
Oct 14 05:50:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:50:32 np0005486808 systemd[1]: var-lib-containers-storage-overlay-c87a1b13522cc304fb0bc0dc53e8323cae2a7b4e2b440353a6b85407efd33028-merged.mount: Deactivated successfully.
Oct 14 05:50:32 np0005486808 podman[430305]: 2025-10-14 09:50:32.960000101 +0000 UTC m=+0.196267157 container remove c6f66c0cac9e1989f5fc1893b9eb5022d7a6650409318578b2978bc816de095a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_shirley, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 05:50:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2932: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:50:32 np0005486808 systemd[1]: libpod-conmon-c6f66c0cac9e1989f5fc1893b9eb5022d7a6650409318578b2978bc816de095a.scope: Deactivated successfully.
Oct 14 05:50:33 np0005486808 podman[430344]: 2025-10-14 09:50:33.127352557 +0000 UTC m=+0.044634056 container create a0c9e5a9564a8744ba654586aea4bf63791d50186e060a2d36bb22efb5e10d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_goodall, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:50:33 np0005486808 systemd[1]: Started libpod-conmon-a0c9e5a9564a8744ba654586aea4bf63791d50186e060a2d36bb22efb5e10d5d.scope.
Oct 14 05:50:33 np0005486808 podman[430344]: 2025-10-14 09:50:33.105564743 +0000 UTC m=+0.022846272 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:50:33 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:50:33 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cd4b157aba3ad117f724a0bad6b2b5b318e7a93c9b58af25824d23cd77b8b64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:50:33 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cd4b157aba3ad117f724a0bad6b2b5b318e7a93c9b58af25824d23cd77b8b64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:50:33 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cd4b157aba3ad117f724a0bad6b2b5b318e7a93c9b58af25824d23cd77b8b64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:50:33 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cd4b157aba3ad117f724a0bad6b2b5b318e7a93c9b58af25824d23cd77b8b64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:50:33 np0005486808 podman[430344]: 2025-10-14 09:50:33.235794588 +0000 UTC m=+0.153076077 container init a0c9e5a9564a8744ba654586aea4bf63791d50186e060a2d36bb22efb5e10d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_goodall, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:50:33 np0005486808 podman[430344]: 2025-10-14 09:50:33.244483641 +0000 UTC m=+0.161765130 container start a0c9e5a9564a8744ba654586aea4bf63791d50186e060a2d36bb22efb5e10d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:50:33 np0005486808 podman[430344]: 2025-10-14 09:50:33.249976036 +0000 UTC m=+0.167257545 container attach a0c9e5a9564a8744ba654586aea4bf63791d50186e060a2d36bb22efb5e10d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_goodall, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:50:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:50:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:50:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:50:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:50:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:50:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:50:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:50:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:50:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:50:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:50:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:50:33 np0005486808 nova_compute[259627]: 2025-10-14 09:50:33.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:50:34 np0005486808 nice_goodall[430362]: {
Oct 14 05:50:34 np0005486808 nice_goodall[430362]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:50:34 np0005486808 nice_goodall[430362]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:50:34 np0005486808 nice_goodall[430362]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:50:34 np0005486808 nice_goodall[430362]:        "osd_id": 2,
Oct 14 05:50:34 np0005486808 nice_goodall[430362]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:50:34 np0005486808 nice_goodall[430362]:        "type": "bluestore"
Oct 14 05:50:34 np0005486808 nice_goodall[430362]:    },
Oct 14 05:50:34 np0005486808 nice_goodall[430362]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:50:34 np0005486808 nice_goodall[430362]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:50:34 np0005486808 nice_goodall[430362]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:50:34 np0005486808 nice_goodall[430362]:        "osd_id": 1,
Oct 14 05:50:34 np0005486808 nice_goodall[430362]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:50:34 np0005486808 nice_goodall[430362]:        "type": "bluestore"
Oct 14 05:50:34 np0005486808 nice_goodall[430362]:    },
Oct 14 05:50:34 np0005486808 nice_goodall[430362]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:50:34 np0005486808 nice_goodall[430362]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:50:34 np0005486808 nice_goodall[430362]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:50:34 np0005486808 nice_goodall[430362]:        "osd_id": 0,
Oct 14 05:50:34 np0005486808 nice_goodall[430362]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:50:34 np0005486808 nice_goodall[430362]:        "type": "bluestore"
Oct 14 05:50:34 np0005486808 nice_goodall[430362]:    }
Oct 14 05:50:34 np0005486808 nice_goodall[430362]: }
Oct 14 05:50:34 np0005486808 systemd[1]: libpod-a0c9e5a9564a8744ba654586aea4bf63791d50186e060a2d36bb22efb5e10d5d.scope: Deactivated successfully.
Oct 14 05:50:34 np0005486808 podman[430344]: 2025-10-14 09:50:34.344974505 +0000 UTC m=+1.262256004 container died a0c9e5a9564a8744ba654586aea4bf63791d50186e060a2d36bb22efb5e10d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_goodall, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Oct 14 05:50:34 np0005486808 systemd[1]: libpod-a0c9e5a9564a8744ba654586aea4bf63791d50186e060a2d36bb22efb5e10d5d.scope: Consumed 1.103s CPU time.
Oct 14 05:50:34 np0005486808 systemd[1]: var-lib-containers-storage-overlay-7cd4b157aba3ad117f724a0bad6b2b5b318e7a93c9b58af25824d23cd77b8b64-merged.mount: Deactivated successfully.
Oct 14 05:50:34 np0005486808 podman[430344]: 2025-10-14 09:50:34.395546645 +0000 UTC m=+1.312828144 container remove a0c9e5a9564a8744ba654586aea4bf63791d50186e060a2d36bb22efb5e10d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:50:34 np0005486808 systemd[1]: libpod-conmon-a0c9e5a9564a8744ba654586aea4bf63791d50186e060a2d36bb22efb5e10d5d.scope: Deactivated successfully.
Oct 14 05:50:34 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:50:34 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:50:34 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:50:34 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:50:34 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev b7395472-f44c-4cf5-963e-69ca8911f515 does not exist
Oct 14 05:50:34 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 56d0ef9a-6c6f-4067-9265-500c1a8bc48b does not exist
Oct 14 05:50:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2933: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:50:35 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:50:35 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:50:36 np0005486808 nova_compute[259627]: 2025-10-14 09:50:36.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:50:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2934: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:50:37 np0005486808 nova_compute[259627]: 2025-10-14 09:50:37.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:50:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:50:38 np0005486808 nova_compute[259627]: 2025-10-14 09:50:38.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:50:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2935: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:50:40 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2936: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:50:41 np0005486808 nova_compute[259627]: 2025-10-14 09:50:41.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:50:42 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2937: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:50:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:50:43 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #144. Immutable memtables: 0.
Oct 14 05:50:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:50:43.315002) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 05:50:43 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 144
Oct 14 05:50:43 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435443315142, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 643, "num_deletes": 257, "total_data_size": 743714, "memory_usage": 756872, "flush_reason": "Manual Compaction"}
Oct 14 05:50:43 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #145: started
Oct 14 05:50:43 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435443323705, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 145, "file_size": 737304, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61130, "largest_seqno": 61772, "table_properties": {"data_size": 733771, "index_size": 1376, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8007, "raw_average_key_size": 19, "raw_value_size": 726608, "raw_average_value_size": 1738, "num_data_blocks": 61, "num_entries": 418, "num_filter_entries": 418, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760435399, "oldest_key_time": 1760435399, "file_creation_time": 1760435443, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 145, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:50:43 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 8747 microseconds, and 5591 cpu microseconds.
Oct 14 05:50:43 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:50:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:50:43.323805) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #145: 737304 bytes OK
Oct 14 05:50:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:50:43.323832) [db/memtable_list.cc:519] [default] Level-0 commit table #145 started
Oct 14 05:50:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:50:43.325484) [db/memtable_list.cc:722] [default] Level-0 commit table #145: memtable #1 done
Oct 14 05:50:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:50:43.325504) EVENT_LOG_v1 {"time_micros": 1760435443325497, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 05:50:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:50:43.325524) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 05:50:43 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 740228, prev total WAL file size 740228, number of live WAL files 2.
Oct 14 05:50:43 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000141.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:50:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:50:43.326204) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353133' seq:72057594037927935, type:22 .. '6C6F676D0032373635' seq:0, type:0; will stop at (end)
Oct 14 05:50:43 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 05:50:43 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [145(720KB)], [143(9582KB)]
Oct 14 05:50:43 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435443326255, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [145], "files_L6": [143], "score": -1, "input_data_size": 10549398, "oldest_snapshot_seqno": -1}
Oct 14 05:50:43 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #146: 7963 keys, 10433399 bytes, temperature: kUnknown
Oct 14 05:50:43 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435443392006, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 146, "file_size": 10433399, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10381039, "index_size": 31319, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19973, "raw_key_size": 209158, "raw_average_key_size": 26, "raw_value_size": 10239676, "raw_average_value_size": 1285, "num_data_blocks": 1219, "num_entries": 7963, "num_filter_entries": 7963, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760435443, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:50:43 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:50:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:50:43.392290) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 10433399 bytes
Oct 14 05:50:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:50:43.393776) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 160.2 rd, 158.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 9.4 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(28.5) write-amplify(14.2) OK, records in: 8493, records dropped: 530 output_compression: NoCompression
Oct 14 05:50:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:50:43.393795) EVENT_LOG_v1 {"time_micros": 1760435443393786, "job": 88, "event": "compaction_finished", "compaction_time_micros": 65855, "compaction_time_cpu_micros": 47612, "output_level": 6, "num_output_files": 1, "total_output_size": 10433399, "num_input_records": 8493, "num_output_records": 7963, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 05:50:43 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000145.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:50:43 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435443394126, "job": 88, "event": "table_file_deletion", "file_number": 145}
Oct 14 05:50:43 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:50:43 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435443396272, "job": 88, "event": "table_file_deletion", "file_number": 143}
Oct 14 05:50:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:50:43.326130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:50:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:50:43.396405) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:50:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:50:43.396414) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:50:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:50:43.396417) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:50:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:50:43.396420) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:50:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:50:43.396423) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:50:43 np0005486808 nova_compute[259627]: 2025-10-14 09:50:43.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:50:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:50:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:50:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:50:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:50:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 05:50:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:50:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:50:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:50:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:50:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:50:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 14 05:50:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:50:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:50:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:50:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:50:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:50:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:50:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:50:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:50:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:50:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:50:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:50:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:50:44 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2938: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:50:45 np0005486808 podman[430457]: 2025-10-14 09:50:45.69419107 +0000 UTC m=+0.101075321 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:50:45 np0005486808 podman[430458]: 2025-10-14 09:50:45.705408365 +0000 UTC m=+0.102482215 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 14 05:50:46 np0005486808 nova_compute[259627]: 2025-10-14 09:50:46.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:50:46 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2939: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:50:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:50:48 np0005486808 nova_compute[259627]: 2025-10-14 09:50:48.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:50:48 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2940: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:50:50 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2941: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:50:51 np0005486808 nova_compute[259627]: 2025-10-14 09:50:51.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:50:52 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2942: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:50:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:50:53 np0005486808 nova_compute[259627]: 2025-10-14 09:50:53.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:50:54 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2943: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:50:56 np0005486808 nova_compute[259627]: 2025-10-14 09:50:56.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:50:56 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2944: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:50:57 np0005486808 nova_compute[259627]: 2025-10-14 09:50:57.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:50:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:50:58 np0005486808 nova_compute[259627]: 2025-10-14 09:50:58.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:50:58 np0005486808 podman[430499]: 2025-10-14 09:50:58.679919995 +0000 UTC m=+0.087374625 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 14 05:50:58 np0005486808 podman[430498]: 2025-10-14 09:50:58.718496641 +0000 UTC m=+0.130422161 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 05:50:58 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2945: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:51:00 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2946: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:51:01 np0005486808 nova_compute[259627]: 2025-10-14 09:51:01.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:51:01 np0005486808 nova_compute[259627]: 2025-10-14 09:51:01.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:51:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:51:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:51:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:51:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:51:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:51:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:51:02 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2947: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:51:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:51:03 np0005486808 nova_compute[259627]: 2025-10-14 09:51:03.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:51:04 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2948: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:51:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:51:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1816818552' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:51:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:51:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1816818552' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:51:05 np0005486808 nova_compute[259627]: 2025-10-14 09:51:05.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:51:06 np0005486808 nova_compute[259627]: 2025-10-14 09:51:06.009 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:51:06 np0005486808 nova_compute[259627]: 2025-10-14 09:51:06.010 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:51:06 np0005486808 nova_compute[259627]: 2025-10-14 09:51:06.010 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:51:06 np0005486808 nova_compute[259627]: 2025-10-14 09:51:06.011 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:51:06 np0005486808 nova_compute[259627]: 2025-10-14 09:51:06.011 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:51:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:51:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3794768121' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:51:06 np0005486808 nova_compute[259627]: 2025-10-14 09:51:06.504 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:51:06 np0005486808 nova_compute[259627]: 2025-10-14 09:51:06.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:51:06 np0005486808 nova_compute[259627]: 2025-10-14 09:51:06.682 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:51:06 np0005486808 nova_compute[259627]: 2025-10-14 09:51:06.684 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3609MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:51:06 np0005486808 nova_compute[259627]: 2025-10-14 09:51:06.684 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:51:06 np0005486808 nova_compute[259627]: 2025-10-14 09:51:06.684 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:51:06 np0005486808 nova_compute[259627]: 2025-10-14 09:51:06.747 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:51:06 np0005486808 nova_compute[259627]: 2025-10-14 09:51:06.748 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:51:06 np0005486808 nova_compute[259627]: 2025-10-14 09:51:06.763 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:51:06 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2949: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:51:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:51:07.069 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:51:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:51:07.069 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:51:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:51:07.070 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:51:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:51:07 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3991354429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:51:07 np0005486808 nova_compute[259627]: 2025-10-14 09:51:07.210 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:51:07 np0005486808 nova_compute[259627]: 2025-10-14 09:51:07.218 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:51:07 np0005486808 nova_compute[259627]: 2025-10-14 09:51:07.240 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:51:07 np0005486808 nova_compute[259627]: 2025-10-14 09:51:07.243 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:51:07 np0005486808 nova_compute[259627]: 2025-10-14 09:51:07.244 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.559s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:51:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:51:08 np0005486808 nova_compute[259627]: 2025-10-14 09:51:08.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:51:08 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2950: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:51:10 np0005486808 nova_compute[259627]: 2025-10-14 09:51:10.245 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:51:10 np0005486808 nova_compute[259627]: 2025-10-14 09:51:10.245 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:51:10 np0005486808 nova_compute[259627]: 2025-10-14 09:51:10.246 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:51:10 np0005486808 nova_compute[259627]: 2025-10-14 09:51:10.246 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 05:51:10 np0005486808 nova_compute[259627]: 2025-10-14 09:51:10.512 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 14 05:51:10 np0005486808 nova_compute[259627]: 2025-10-14 09:51:10.513 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:51:10 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2951: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:51:11 np0005486808 nova_compute[259627]: 2025-10-14 09:51:11.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:51:12 np0005486808 nova_compute[259627]: 2025-10-14 09:51:12.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:51:12 np0005486808 nova_compute[259627]: 2025-10-14 09:51:12.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:51:12 np0005486808 nova_compute[259627]: 2025-10-14 09:51:12.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:51:12 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2952: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:51:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:51:13 np0005486808 nova_compute[259627]: 2025-10-14 09:51:13.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:51:14 np0005486808 nova_compute[259627]: 2025-10-14 09:51:14.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:51:14 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2953: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:51:16 np0005486808 nova_compute[259627]: 2025-10-14 09:51:16.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:51:16 np0005486808 podman[430588]: 2025-10-14 09:51:16.657141093 +0000 UTC m=+0.072972151 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 05:51:16 np0005486808 podman[430589]: 2025-10-14 09:51:16.690996814 +0000 UTC m=+0.091763732 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 05:51:16 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2954: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:51:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:51:18 np0005486808 nova_compute[259627]: 2025-10-14 09:51:18.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:51:18 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2955: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:51:20 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2956: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:51:21 np0005486808 nova_compute[259627]: 2025-10-14 09:51:21.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:51:22 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2957: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:51:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:51:23 np0005486808 nova_compute[259627]: 2025-10-14 09:51:23.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:51:24 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2958: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:51:26 np0005486808 nova_compute[259627]: 2025-10-14 09:51:26.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:51:26 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2959: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:51:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:51:28 np0005486808 nova_compute[259627]: 2025-10-14 09:51:28.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:51:28 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2960: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:51:29 np0005486808 podman[430629]: 2025-10-14 09:51:29.682827123 +0000 UTC m=+0.078683471 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 14 05:51:29 np0005486808 podman[430628]: 2025-10-14 09:51:29.730181835 +0000 UTC m=+0.135345622 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:51:30 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2961: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:51:31 np0005486808 nova_compute[259627]: 2025-10-14 09:51:31.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:51:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:51:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:51:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:51:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:51:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:51:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:51:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:51:32
Oct 14 05:51:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:51:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:51:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'backups', 'default.rgw.log', 'images', '.mgr', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'vms']
Oct 14 05:51:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:51:32 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2962: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:51:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:51:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:51:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:51:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:51:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:51:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:51:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:51:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:51:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:51:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:51:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:51:33 np0005486808 nova_compute[259627]: 2025-10-14 09:51:33.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:51:34 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2963: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:51:35 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:51:35 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:51:35 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:51:35 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:51:35 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:51:35 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:51:35 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev bf68feab-2c23-4c18-bdd1-c09cf325c287 does not exist
Oct 14 05:51:35 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev a7c64a0c-e529-4489-a6c3-bdaa22ce19f7 does not exist
Oct 14 05:51:35 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 7ffc019e-1bd3-48de-bcf6-54020dfbccfb does not exist
Oct 14 05:51:35 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:51:35 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:51:35 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:51:35 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:51:35 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:51:35 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:51:36 np0005486808 podman[430945]: 2025-10-14 09:51:36.015368566 +0000 UTC m=+0.057415899 container create 860c524e62831ba055d3739fb7f0c2e87ec7a00c6871891d16166bfa01141bff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_elbakyan, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 14 05:51:36 np0005486808 systemd[1]: Started libpod-conmon-860c524e62831ba055d3739fb7f0c2e87ec7a00c6871891d16166bfa01141bff.scope.
Oct 14 05:51:36 np0005486808 podman[430945]: 2025-10-14 09:51:35.992240599 +0000 UTC m=+0.034287942 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:51:36 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:51:36 np0005486808 podman[430945]: 2025-10-14 09:51:36.131148107 +0000 UTC m=+0.173195430 container init 860c524e62831ba055d3739fb7f0c2e87ec7a00c6871891d16166bfa01141bff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_elbakyan, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 14 05:51:36 np0005486808 podman[430945]: 2025-10-14 09:51:36.144060894 +0000 UTC m=+0.186108207 container start 860c524e62831ba055d3739fb7f0c2e87ec7a00c6871891d16166bfa01141bff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 14 05:51:36 np0005486808 podman[430945]: 2025-10-14 09:51:36.14754276 +0000 UTC m=+0.189590073 container attach 860c524e62831ba055d3739fb7f0c2e87ec7a00c6871891d16166bfa01141bff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:51:36 np0005486808 musing_elbakyan[430962]: 167 167
Oct 14 05:51:36 np0005486808 systemd[1]: libpod-860c524e62831ba055d3739fb7f0c2e87ec7a00c6871891d16166bfa01141bff.scope: Deactivated successfully.
Oct 14 05:51:36 np0005486808 podman[430945]: 2025-10-14 09:51:36.153354632 +0000 UTC m=+0.195401945 container died 860c524e62831ba055d3739fb7f0c2e87ec7a00c6871891d16166bfa01141bff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:51:36 np0005486808 systemd[1]: var-lib-containers-storage-overlay-b20eed22d6fff96061c119972c7f474fce7060950594ff5372b106e81f2359fe-merged.mount: Deactivated successfully.
Oct 14 05:51:36 np0005486808 podman[430945]: 2025-10-14 09:51:36.204151599 +0000 UTC m=+0.246198912 container remove 860c524e62831ba055d3739fb7f0c2e87ec7a00c6871891d16166bfa01141bff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 14 05:51:36 np0005486808 systemd[1]: libpod-conmon-860c524e62831ba055d3739fb7f0c2e87ec7a00c6871891d16166bfa01141bff.scope: Deactivated successfully.
Oct 14 05:51:36 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:51:36 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:51:36 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:51:36 np0005486808 podman[430987]: 2025-10-14 09:51:36.444621319 +0000 UTC m=+0.073406402 container create 97a18be93bfc8da935453d7acd4042c20602ba240763682e77b0ff6e12e3fbea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:51:36 np0005486808 systemd[1]: Started libpod-conmon-97a18be93bfc8da935453d7acd4042c20602ba240763682e77b0ff6e12e3fbea.scope.
Oct 14 05:51:36 np0005486808 podman[430987]: 2025-10-14 09:51:36.414843288 +0000 UTC m=+0.043628411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:51:36 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:51:36 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e14ee7e64e385b89ed9f6e8a5f86e297ad2281056b2209424098a83f2252ef4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:51:36 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e14ee7e64e385b89ed9f6e8a5f86e297ad2281056b2209424098a83f2252ef4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:51:36 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e14ee7e64e385b89ed9f6e8a5f86e297ad2281056b2209424098a83f2252ef4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:51:36 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e14ee7e64e385b89ed9f6e8a5f86e297ad2281056b2209424098a83f2252ef4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:51:36 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e14ee7e64e385b89ed9f6e8a5f86e297ad2281056b2209424098a83f2252ef4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:51:36 np0005486808 podman[430987]: 2025-10-14 09:51:36.562590544 +0000 UTC m=+0.191375677 container init 97a18be93bfc8da935453d7acd4042c20602ba240763682e77b0ff6e12e3fbea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:51:36 np0005486808 nova_compute[259627]: 2025-10-14 09:51:36.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:51:36 np0005486808 podman[430987]: 2025-10-14 09:51:36.579837687 +0000 UTC m=+0.208622770 container start 97a18be93bfc8da935453d7acd4042c20602ba240763682e77b0ff6e12e3fbea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_babbage, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 14 05:51:36 np0005486808 podman[430987]: 2025-10-14 09:51:36.584272306 +0000 UTC m=+0.213057359 container attach 97a18be93bfc8da935453d7acd4042c20602ba240763682e77b0ff6e12e3fbea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_babbage, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 14 05:51:36 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2964: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:51:37 np0005486808 jovial_babbage[431004]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:51:37 np0005486808 jovial_babbage[431004]: --> relative data size: 1.0
Oct 14 05:51:37 np0005486808 jovial_babbage[431004]: --> All data devices are unavailable
Oct 14 05:51:37 np0005486808 systemd[1]: libpod-97a18be93bfc8da935453d7acd4042c20602ba240763682e77b0ff6e12e3fbea.scope: Deactivated successfully.
Oct 14 05:51:37 np0005486808 systemd[1]: libpod-97a18be93bfc8da935453d7acd4042c20602ba240763682e77b0ff6e12e3fbea.scope: Consumed 1.066s CPU time.
Oct 14 05:51:37 np0005486808 podman[430987]: 2025-10-14 09:51:37.673383679 +0000 UTC m=+1.302168772 container died 97a18be93bfc8da935453d7acd4042c20602ba240763682e77b0ff6e12e3fbea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:51:37 np0005486808 systemd[1]: var-lib-containers-storage-overlay-2e14ee7e64e385b89ed9f6e8a5f86e297ad2281056b2209424098a83f2252ef4-merged.mount: Deactivated successfully.
Oct 14 05:51:37 np0005486808 podman[430987]: 2025-10-14 09:51:37.732286714 +0000 UTC m=+1.361071757 container remove 97a18be93bfc8da935453d7acd4042c20602ba240763682e77b0ff6e12e3fbea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_babbage, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 14 05:51:37 np0005486808 systemd[1]: libpod-conmon-97a18be93bfc8da935453d7acd4042c20602ba240763682e77b0ff6e12e3fbea.scope: Deactivated successfully.
Oct 14 05:51:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:51:38 np0005486808 podman[431186]: 2025-10-14 09:51:38.459453676 +0000 UTC m=+0.037139813 container create 1e5a20428a0d9b6f41bfbb380bc4037c211708d90ea9aaf761d04092453d178d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_vaughan, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Oct 14 05:51:38 np0005486808 systemd[1]: Started libpod-conmon-1e5a20428a0d9b6f41bfbb380bc4037c211708d90ea9aaf761d04092453d178d.scope.
Oct 14 05:51:38 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:51:38 np0005486808 podman[431186]: 2025-10-14 09:51:38.445533344 +0000 UTC m=+0.023219501 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:51:38 np0005486808 podman[431186]: 2025-10-14 09:51:38.548048769 +0000 UTC m=+0.125734986 container init 1e5a20428a0d9b6f41bfbb380bc4037c211708d90ea9aaf761d04092453d178d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_vaughan, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:51:38 np0005486808 podman[431186]: 2025-10-14 09:51:38.558717801 +0000 UTC m=+0.136403938 container start 1e5a20428a0d9b6f41bfbb380bc4037c211708d90ea9aaf761d04092453d178d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 14 05:51:38 np0005486808 thirsty_vaughan[431203]: 167 167
Oct 14 05:51:38 np0005486808 podman[431186]: 2025-10-14 09:51:38.562225827 +0000 UTC m=+0.139912054 container attach 1e5a20428a0d9b6f41bfbb380bc4037c211708d90ea9aaf761d04092453d178d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:51:38 np0005486808 systemd[1]: libpod-1e5a20428a0d9b6f41bfbb380bc4037c211708d90ea9aaf761d04092453d178d.scope: Deactivated successfully.
Oct 14 05:51:38 np0005486808 podman[431186]: 2025-10-14 09:51:38.562932324 +0000 UTC m=+0.140618461 container died 1e5a20428a0d9b6f41bfbb380bc4037c211708d90ea9aaf761d04092453d178d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:51:38 np0005486808 systemd[1]: var-lib-containers-storage-overlay-e1758a20d685c22bd7a8c1edc605c6f676b4160a6a795f5b534afa36d91b5671-merged.mount: Deactivated successfully.
Oct 14 05:51:38 np0005486808 podman[431186]: 2025-10-14 09:51:38.594995611 +0000 UTC m=+0.172681768 container remove 1e5a20428a0d9b6f41bfbb380bc4037c211708d90ea9aaf761d04092453d178d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_vaughan, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 14 05:51:38 np0005486808 systemd[1]: libpod-conmon-1e5a20428a0d9b6f41bfbb380bc4037c211708d90ea9aaf761d04092453d178d.scope: Deactivated successfully.
Oct 14 05:51:38 np0005486808 nova_compute[259627]: 2025-10-14 09:51:38.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:51:38 np0005486808 podman[431227]: 2025-10-14 09:51:38.80974342 +0000 UTC m=+0.059024479 container create e4f00ab08a68c4f261e9a65d8575921b9afc1c8881dfaf2f298517a25b3444aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pasteur, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:51:38 np0005486808 systemd[1]: Started libpod-conmon-e4f00ab08a68c4f261e9a65d8575921b9afc1c8881dfaf2f298517a25b3444aa.scope.
Oct 14 05:51:38 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:51:38 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f588a04b8ce20d26e15db7e721256363b705f6d31d46295adca8587c91964ca7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:51:38 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f588a04b8ce20d26e15db7e721256363b705f6d31d46295adca8587c91964ca7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:51:38 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f588a04b8ce20d26e15db7e721256363b705f6d31d46295adca8587c91964ca7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:51:38 np0005486808 podman[431227]: 2025-10-14 09:51:38.785215798 +0000 UTC m=+0.034496957 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:51:38 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f588a04b8ce20d26e15db7e721256363b705f6d31d46295adca8587c91964ca7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:51:38 np0005486808 podman[431227]: 2025-10-14 09:51:38.89489222 +0000 UTC m=+0.144173389 container init e4f00ab08a68c4f261e9a65d8575921b9afc1c8881dfaf2f298517a25b3444aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pasteur, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:51:38 np0005486808 podman[431227]: 2025-10-14 09:51:38.900452166 +0000 UTC m=+0.149733225 container start e4f00ab08a68c4f261e9a65d8575921b9afc1c8881dfaf2f298517a25b3444aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 14 05:51:38 np0005486808 podman[431227]: 2025-10-14 09:51:38.904055154 +0000 UTC m=+0.153336324 container attach e4f00ab08a68c4f261e9a65d8575921b9afc1c8881dfaf2f298517a25b3444aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pasteur, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Oct 14 05:51:38 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2965: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]: {
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:    "0": [
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:        {
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:            "devices": [
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:                "/dev/loop3"
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:            ],
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:            "lv_name": "ceph_lv0",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:            "lv_size": "21470642176",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:            "name": "ceph_lv0",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:            "tags": {
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:                "ceph.cluster_name": "ceph",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:                "ceph.crush_device_class": "",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:                "ceph.encrypted": "0",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:                "ceph.osd_id": "0",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:                "ceph.type": "block",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:                "ceph.vdo": "0"
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:            },
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:            "type": "block",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:            "vg_name": "ceph_vg0"
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:        }
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:    ],
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:    "1": [
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:        {
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:            "devices": [
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:                "/dev/loop4"
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:            ],
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:            "lv_name": "ceph_lv1",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:            "lv_size": "21470642176",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:            "name": "ceph_lv1",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:            "tags": {
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:                "ceph.cluster_name": "ceph",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:                "ceph.crush_device_class": "",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:                "ceph.encrypted": "0",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:                "ceph.osd_id": "1",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:                "ceph.type": "block",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:                "ceph.vdo": "0"
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:            },
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:            "type": "block",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:            "vg_name": "ceph_vg1"
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:        }
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:    ],
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:    "2": [
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:        {
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:            "devices": [
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:                "/dev/loop5"
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:            ],
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:            "lv_name": "ceph_lv2",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:            "lv_size": "21470642176",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:            "name": "ceph_lv2",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:            "tags": {
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:                "ceph.cluster_name": "ceph",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:                "ceph.crush_device_class": "",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:                "ceph.encrypted": "0",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:                "ceph.osd_id": "2",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:                "ceph.type": "block",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:                "ceph.vdo": "0"
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:            },
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:            "type": "block",
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:            "vg_name": "ceph_vg2"
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:        }
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]:    ]
Oct 14 05:51:39 np0005486808 determined_pasteur[431244]: }
Oct 14 05:51:39 np0005486808 systemd[1]: libpod-e4f00ab08a68c4f261e9a65d8575921b9afc1c8881dfaf2f298517a25b3444aa.scope: Deactivated successfully.
Oct 14 05:51:39 np0005486808 podman[431227]: 2025-10-14 09:51:39.596056574 +0000 UTC m=+0.845337643 container died e4f00ab08a68c4f261e9a65d8575921b9afc1c8881dfaf2f298517a25b3444aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pasteur, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:51:39 np0005486808 systemd[1]: var-lib-containers-storage-overlay-f588a04b8ce20d26e15db7e721256363b705f6d31d46295adca8587c91964ca7-merged.mount: Deactivated successfully.
Oct 14 05:51:39 np0005486808 podman[431227]: 2025-10-14 09:51:39.659593853 +0000 UTC m=+0.908874922 container remove e4f00ab08a68c4f261e9a65d8575921b9afc1c8881dfaf2f298517a25b3444aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:51:39 np0005486808 systemd[1]: libpod-conmon-e4f00ab08a68c4f261e9a65d8575921b9afc1c8881dfaf2f298517a25b3444aa.scope: Deactivated successfully.
Oct 14 05:51:40 np0005486808 podman[431407]: 2025-10-14 09:51:40.357373744 +0000 UTC m=+0.079542432 container create 89e2741fa37351e52efde0ebf48137b4a6430a03ea376ccdddf70afcb573d521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gauss, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:51:40 np0005486808 systemd[1]: Started libpod-conmon-89e2741fa37351e52efde0ebf48137b4a6430a03ea376ccdddf70afcb573d521.scope.
Oct 14 05:51:40 np0005486808 podman[431407]: 2025-10-14 09:51:40.318192672 +0000 UTC m=+0.040361450 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:51:40 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:51:40 np0005486808 podman[431407]: 2025-10-14 09:51:40.432984139 +0000 UTC m=+0.155152837 container init 89e2741fa37351e52efde0ebf48137b4a6430a03ea376ccdddf70afcb573d521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gauss, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:51:40 np0005486808 podman[431407]: 2025-10-14 09:51:40.439374576 +0000 UTC m=+0.161543264 container start 89e2741fa37351e52efde0ebf48137b4a6430a03ea376ccdddf70afcb573d521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:51:40 np0005486808 podman[431407]: 2025-10-14 09:51:40.443135568 +0000 UTC m=+0.165304316 container attach 89e2741fa37351e52efde0ebf48137b4a6430a03ea376ccdddf70afcb573d521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gauss, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 14 05:51:40 np0005486808 frosty_gauss[431423]: 167 167
Oct 14 05:51:40 np0005486808 systemd[1]: libpod-89e2741fa37351e52efde0ebf48137b4a6430a03ea376ccdddf70afcb573d521.scope: Deactivated successfully.
Oct 14 05:51:40 np0005486808 podman[431407]: 2025-10-14 09:51:40.445709951 +0000 UTC m=+0.167878659 container died 89e2741fa37351e52efde0ebf48137b4a6430a03ea376ccdddf70afcb573d521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:51:40 np0005486808 systemd[1]: var-lib-containers-storage-overlay-51b539af613f0c954f4ee9f23b6f197e7392270c3ab448eb80ebfaf8824f6342-merged.mount: Deactivated successfully.
Oct 14 05:51:40 np0005486808 podman[431407]: 2025-10-14 09:51:40.491345721 +0000 UTC m=+0.213514399 container remove 89e2741fa37351e52efde0ebf48137b4a6430a03ea376ccdddf70afcb573d521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gauss, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:51:40 np0005486808 systemd[1]: libpod-conmon-89e2741fa37351e52efde0ebf48137b4a6430a03ea376ccdddf70afcb573d521.scope: Deactivated successfully.
Oct 14 05:51:40 np0005486808 podman[431447]: 2025-10-14 09:51:40.670839866 +0000 UTC m=+0.049787673 container create 8de2f7146aff7501494697876ffa6d79233c36d5736ec8834f2477842b1aa6bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mclaren, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:51:40 np0005486808 systemd[1]: Started libpod-conmon-8de2f7146aff7501494697876ffa6d79233c36d5736ec8834f2477842b1aa6bb.scope.
Oct 14 05:51:40 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:51:40 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cf8d9cd5c601193e89665e3bd368a7be11969636ccc739fc9c169cd4b01302f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:51:40 np0005486808 podman[431447]: 2025-10-14 09:51:40.653752086 +0000 UTC m=+0.032699913 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:51:40 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cf8d9cd5c601193e89665e3bd368a7be11969636ccc739fc9c169cd4b01302f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:51:40 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cf8d9cd5c601193e89665e3bd368a7be11969636ccc739fc9c169cd4b01302f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:51:40 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cf8d9cd5c601193e89665e3bd368a7be11969636ccc739fc9c169cd4b01302f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:51:40 np0005486808 podman[431447]: 2025-10-14 09:51:40.761802437 +0000 UTC m=+0.140750254 container init 8de2f7146aff7501494697876ffa6d79233c36d5736ec8834f2477842b1aa6bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 14 05:51:40 np0005486808 podman[431447]: 2025-10-14 09:51:40.767546458 +0000 UTC m=+0.146494285 container start 8de2f7146aff7501494697876ffa6d79233c36d5736ec8834f2477842b1aa6bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mclaren, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:51:40 np0005486808 podman[431447]: 2025-10-14 09:51:40.771076145 +0000 UTC m=+0.150023962 container attach 8de2f7146aff7501494697876ffa6d79233c36d5736ec8834f2477842b1aa6bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mclaren, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:51:41 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2966: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:51:41 np0005486808 nova_compute[259627]: 2025-10-14 09:51:41.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:51:41 np0005486808 affectionate_mclaren[431463]: {
Oct 14 05:51:41 np0005486808 affectionate_mclaren[431463]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:51:41 np0005486808 affectionate_mclaren[431463]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:51:41 np0005486808 affectionate_mclaren[431463]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:51:41 np0005486808 affectionate_mclaren[431463]:        "osd_id": 2,
Oct 14 05:51:41 np0005486808 affectionate_mclaren[431463]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:51:41 np0005486808 affectionate_mclaren[431463]:        "type": "bluestore"
Oct 14 05:51:41 np0005486808 affectionate_mclaren[431463]:    },
Oct 14 05:51:41 np0005486808 affectionate_mclaren[431463]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:51:41 np0005486808 affectionate_mclaren[431463]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:51:41 np0005486808 affectionate_mclaren[431463]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:51:41 np0005486808 affectionate_mclaren[431463]:        "osd_id": 1,
Oct 14 05:51:41 np0005486808 affectionate_mclaren[431463]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:51:41 np0005486808 affectionate_mclaren[431463]:        "type": "bluestore"
Oct 14 05:51:41 np0005486808 affectionate_mclaren[431463]:    },
Oct 14 05:51:41 np0005486808 affectionate_mclaren[431463]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:51:41 np0005486808 affectionate_mclaren[431463]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:51:41 np0005486808 affectionate_mclaren[431463]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:51:41 np0005486808 affectionate_mclaren[431463]:        "osd_id": 0,
Oct 14 05:51:41 np0005486808 affectionate_mclaren[431463]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:51:41 np0005486808 affectionate_mclaren[431463]:        "type": "bluestore"
Oct 14 05:51:41 np0005486808 affectionate_mclaren[431463]:    }
Oct 14 05:51:41 np0005486808 affectionate_mclaren[431463]: }
Oct 14 05:51:41 np0005486808 systemd[1]: libpod-8de2f7146aff7501494697876ffa6d79233c36d5736ec8834f2477842b1aa6bb.scope: Deactivated successfully.
Oct 14 05:51:41 np0005486808 podman[431447]: 2025-10-14 09:51:41.822149486 +0000 UTC m=+1.201097323 container died 8de2f7146aff7501494697876ffa6d79233c36d5736ec8834f2477842b1aa6bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:51:41 np0005486808 systemd[1]: libpod-8de2f7146aff7501494697876ffa6d79233c36d5736ec8834f2477842b1aa6bb.scope: Consumed 1.063s CPU time.
Oct 14 05:51:41 np0005486808 systemd[1]: var-lib-containers-storage-overlay-0cf8d9cd5c601193e89665e3bd368a7be11969636ccc739fc9c169cd4b01302f-merged.mount: Deactivated successfully.
Oct 14 05:51:41 np0005486808 podman[431447]: 2025-10-14 09:51:41.907560531 +0000 UTC m=+1.286508368 container remove 8de2f7146aff7501494697876ffa6d79233c36d5736ec8834f2477842b1aa6bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mclaren, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:51:41 np0005486808 systemd[1]: libpod-conmon-8de2f7146aff7501494697876ffa6d79233c36d5736ec8834f2477842b1aa6bb.scope: Deactivated successfully.
Oct 14 05:51:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:51:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:51:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:51:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:51:41 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 132937b2-df76-4d74-8c39-0a46088189cd does not exist
Oct 14 05:51:41 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 97971904-8cbb-48c0-b9c5-f0d39d43d1c4 does not exist
Oct 14 05:51:42 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:51:42 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:51:43 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2967: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:51:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:51:43 np0005486808 nova_compute[259627]: 2025-10-14 09:51:43.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:51:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:51:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:51:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:51:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:51:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 05:51:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:51:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:51:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:51:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:51:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:51:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 14 05:51:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:51:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:51:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:51:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:51:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:51:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:51:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:51:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:51:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:51:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:51:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:51:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:51:45 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2968: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:51:46 np0005486808 nova_compute[259627]: 2025-10-14 09:51:46.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:51:46 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 05:51:46 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.0 total, 600.0 interval#012Cumulative writes: 13K writes, 62K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.02 MB/s#012Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.08 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1354 writes, 6382 keys, 1354 commit groups, 1.0 writes per commit group, ingest: 8.78 MB, 0.01 MB/s#012Interval WAL: 1354 writes, 1354 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0    105.1      0.72              0.30        44    0.016       0      0       0.0       0.0#012  L6      1/0    9.95 MB   0.0      0.4     0.1      0.3       0.4      0.0       0.0   4.8    182.7    154.6      2.33              1.30        43    0.054    274K    23K       0.0       0.0#012 Sum      1/0    9.95 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.8    139.6    142.9      3.04              1.60        87    0.035    274K    23K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.5    145.7    150.2      0.45              0.30        12    0.038     50K   3121       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.4      0.0       0.0   0.0    182.7    154.6      2.33              1.30        43    0.054    274K    23K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0    106.1      0.71              0.30        43    0.017       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      7.3      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 5400.0 total, 600.0 interval#012Flush(GB): cumulative 0.074, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.42 GB write, 0.08 MB/s write, 0.41 GB read, 0.08 MB/s read, 3.0 seconds#012Interval compaction: 0.07 GB write, 0.11 MB/s write, 0.06 GB read, 0.11 MB/s read, 0.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5646f3b2b1f0#2 capacity: 304.00 MB usage: 48.09 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000407 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3116,46.08 MB,15.1586%) FilterBlock(88,764.36 KB,0.245541%) IndexBlock(88,1.26 MB,0.413453%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct 14 05:51:47 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2969: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:51:47 np0005486808 podman[431562]: 2025-10-14 09:51:47.679365243 +0000 UTC m=+0.074381126 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 05:51:47 np0005486808 podman[431561]: 2025-10-14 09:51:47.688968579 +0000 UTC m=+0.084095684 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 14 05:51:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:51:48 np0005486808 nova_compute[259627]: 2025-10-14 09:51:48.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:51:49 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2970: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:51:51 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2971: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:51:51 np0005486808 nova_compute[259627]: 2025-10-14 09:51:51.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:51:53 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2972: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:51:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:51:53 np0005486808 nova_compute[259627]: 2025-10-14 09:51:53.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:51:55 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2973: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:51:56 np0005486808 nova_compute[259627]: 2025-10-14 09:51:56.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:51:57 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2974: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:51:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:51:58 np0005486808 nova_compute[259627]: 2025-10-14 09:51:58.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:51:59 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2975: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:51:59 np0005486808 nova_compute[259627]: 2025-10-14 09:51:59.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:52:00 np0005486808 podman[431599]: 2025-10-14 09:52:00.721211581 +0000 UTC m=+0.120918708 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 14 05:52:00 np0005486808 podman[431598]: 2025-10-14 09:52:00.735654556 +0000 UTC m=+0.142216661 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251009, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:52:01 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2976: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:52:01 np0005486808 nova_compute[259627]: 2025-10-14 09:52:01.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:52:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:52:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:52:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:52:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:52:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:52:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:52:03 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2977: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:52:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:52:03 np0005486808 nova_compute[259627]: 2025-10-14 09:52:03.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:52:03 np0005486808 nova_compute[259627]: 2025-10-14 09:52:03.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:52:05 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2978: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:52:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:52:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/584146414' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:52:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:52:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/584146414' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:52:06 np0005486808 nova_compute[259627]: 2025-10-14 09:52:06.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:52:06 np0005486808 nova_compute[259627]: 2025-10-14 09:52:06.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:52:07 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2979: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:52:07 np0005486808 nova_compute[259627]: 2025-10-14 09:52:07.066 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:52:07 np0005486808 nova_compute[259627]: 2025-10-14 09:52:07.067 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:52:07 np0005486808 nova_compute[259627]: 2025-10-14 09:52:07.067 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:52:07 np0005486808 nova_compute[259627]: 2025-10-14 09:52:07.068 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:52:07 np0005486808 nova_compute[259627]: 2025-10-14 09:52:07.068 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:52:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:52:07.069 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:52:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:52:07.070 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:52:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:52:07.070 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:52:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:52:07 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/381485472' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:52:07 np0005486808 nova_compute[259627]: 2025-10-14 09:52:07.509 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:52:07 np0005486808 nova_compute[259627]: 2025-10-14 09:52:07.673 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:52:07 np0005486808 nova_compute[259627]: 2025-10-14 09:52:07.674 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3611MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:52:07 np0005486808 nova_compute[259627]: 2025-10-14 09:52:07.674 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:52:07 np0005486808 nova_compute[259627]: 2025-10-14 09:52:07.674 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:52:08 np0005486808 nova_compute[259627]: 2025-10-14 09:52:08.040 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:52:08 np0005486808 nova_compute[259627]: 2025-10-14 09:52:08.040 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:52:08 np0005486808 nova_compute[259627]: 2025-10-14 09:52:08.133 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:52:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:52:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:52:08 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1923546474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:52:08 np0005486808 nova_compute[259627]: 2025-10-14 09:52:08.569 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:52:08 np0005486808 nova_compute[259627]: 2025-10-14 09:52:08.577 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:52:08 np0005486808 nova_compute[259627]: 2025-10-14 09:52:08.605 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:52:08 np0005486808 nova_compute[259627]: 2025-10-14 09:52:08.608 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:52:08 np0005486808 nova_compute[259627]: 2025-10-14 09:52:08.609 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.935s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:52:08 np0005486808 nova_compute[259627]: 2025-10-14 09:52:08.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:52:09 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2980: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:52:11 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2981: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:52:11 np0005486808 nova_compute[259627]: 2025-10-14 09:52:11.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:52:11 np0005486808 nova_compute[259627]: 2025-10-14 09:52:11.605 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:52:11 np0005486808 nova_compute[259627]: 2025-10-14 09:52:11.606 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:52:11 np0005486808 nova_compute[259627]: 2025-10-14 09:52:11.606 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:52:11 np0005486808 nova_compute[259627]: 2025-10-14 09:52:11.607 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 05:52:11 np0005486808 nova_compute[259627]: 2025-10-14 09:52:11.679 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 14 05:52:11 np0005486808 nova_compute[259627]: 2025-10-14 09:52:11.679 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:52:12 np0005486808 nova_compute[259627]: 2025-10-14 09:52:12.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:52:12 np0005486808 nova_compute[259627]: 2025-10-14 09:52:12.977 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:52:13 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2982: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:52:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:52:13 np0005486808 nova_compute[259627]: 2025-10-14 09:52:13.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:52:14 np0005486808 nova_compute[259627]: 2025-10-14 09:52:14.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:52:15 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2983: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:52:15 np0005486808 nova_compute[259627]: 2025-10-14 09:52:15.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:52:16 np0005486808 nova_compute[259627]: 2025-10-14 09:52:16.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:52:17 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2984: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:52:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:52:18 np0005486808 podman[431690]: 2025-10-14 09:52:18.67278692 +0000 UTC m=+0.075971245 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 05:52:18 np0005486808 podman[431689]: 2025-10-14 09:52:18.672818611 +0000 UTC m=+0.078393565 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 14 05:52:18 np0005486808 nova_compute[259627]: 2025-10-14 09:52:18.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:52:19 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2985: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:52:21 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2986: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:52:21 np0005486808 nova_compute[259627]: 2025-10-14 09:52:21.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:52:23 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2987: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:52:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:52:23 np0005486808 nova_compute[259627]: 2025-10-14 09:52:23.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:52:25 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2988: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:52:26 np0005486808 nova_compute[259627]: 2025-10-14 09:52:26.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:52:27 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2989: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:52:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:52:28 np0005486808 nova_compute[259627]: 2025-10-14 09:52:28.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:52:29 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2990: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:52:31 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2991: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:52:31 np0005486808 nova_compute[259627]: 2025-10-14 09:52:31.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:52:31 np0005486808 podman[431727]: 2025-10-14 09:52:31.670127775 +0000 UTC m=+0.077212126 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 14 05:52:31 np0005486808 podman[431726]: 2025-10-14 09:52:31.723283779 +0000 UTC m=+0.127976291 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 14 05:52:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:52:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:52:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:52:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:52:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:52:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:52:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:52:32
Oct 14 05:52:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:52:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:52:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['.mgr', 'images', 'volumes', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root']
Oct 14 05:52:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:52:33 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2992: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:52:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:52:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:52:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:52:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:52:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:52:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:52:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:52:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:52:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:52:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:52:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:52:33 np0005486808 ceph-mgr[74543]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3625056923
Oct 14 05:52:33 np0005486808 nova_compute[259627]: 2025-10-14 09:52:33.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:52:34 np0005486808 systemd-logind[799]: New session 54 of user zuul.
Oct 14 05:52:35 np0005486808 systemd[1]: Started Session 54 of User zuul.
Oct 14 05:52:35 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2993: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:52:36 np0005486808 nova_compute[259627]: 2025-10-14 09:52:36.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:52:37 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2994: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:52:37 np0005486808 nova_compute[259627]: 2025-10-14 09:52:37.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:52:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:52:37.412 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 05:52:37 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:52:37.413 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 14 05:52:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:52:38 np0005486808 nova_compute[259627]: 2025-10-14 09:52:38.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:52:39 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2995: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:52:41 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2996: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:52:41 np0005486808 systemd[1]: session-54.scope: Deactivated successfully.
Oct 14 05:52:41 np0005486808 systemd-logind[799]: Session 54 logged out. Waiting for processes to exit.
Oct 14 05:52:41 np0005486808 systemd-logind[799]: Removed session 54.
Oct 14 05:52:41 np0005486808 nova_compute[259627]: 2025-10-14 09:52:41.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:52:41 np0005486808 nova_compute[259627]: 2025-10-14 09:52:41.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:52:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:52:43 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:52:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:52:43 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:52:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:52:43 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:52:43 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2997: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:52:43 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev f5eb4a16-5226-404c-b485-710b60932f25 does not exist
Oct 14 05:52:43 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 611ce85b-464b-4b4f-afd1-e207e333c6dc does not exist
Oct 14 05:52:43 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 6ac81da6-998a-4b8b-9a5d-dd63ef61f31c does not exist
Oct 14 05:52:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:52:43 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:52:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:52:43 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:52:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:52:43 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:52:43 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:52:43 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:52:43 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:52:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:52:43 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:52:43.415 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:52:43 np0005486808 podman[432297]: 2025-10-14 09:52:43.767704932 +0000 UTC m=+0.067221051 container create c1d098479afb3f76001199b5402e282d73de3fd3123290a03b4957f462d9567e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 14 05:52:43 np0005486808 systemd[1]: Started libpod-conmon-c1d098479afb3f76001199b5402e282d73de3fd3123290a03b4957f462d9567e.scope.
Oct 14 05:52:43 np0005486808 podman[432297]: 2025-10-14 09:52:43.744371959 +0000 UTC m=+0.043888098 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:52:43 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:52:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:52:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:52:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:52:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:52:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 05:52:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:52:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:52:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:52:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:52:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:52:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 14 05:52:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:52:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:52:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:52:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:52:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:52:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:52:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:52:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:52:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:52:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:52:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:52:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:52:43 np0005486808 podman[432297]: 2025-10-14 09:52:43.864522257 +0000 UTC m=+0.164038386 container init c1d098479afb3f76001199b5402e282d73de3fd3123290a03b4957f462d9567e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wilson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:52:43 np0005486808 podman[432297]: 2025-10-14 09:52:43.873645601 +0000 UTC m=+0.173161760 container start c1d098479afb3f76001199b5402e282d73de3fd3123290a03b4957f462d9567e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wilson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 14 05:52:43 np0005486808 podman[432297]: 2025-10-14 09:52:43.877957367 +0000 UTC m=+0.177473526 container attach c1d098479afb3f76001199b5402e282d73de3fd3123290a03b4957f462d9567e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wilson, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:52:43 np0005486808 sad_wilson[432313]: 167 167
Oct 14 05:52:43 np0005486808 systemd[1]: libpod-c1d098479afb3f76001199b5402e282d73de3fd3123290a03b4957f462d9567e.scope: Deactivated successfully.
Oct 14 05:52:43 np0005486808 podman[432297]: 2025-10-14 09:52:43.88176295 +0000 UTC m=+0.181279099 container died c1d098479afb3f76001199b5402e282d73de3fd3123290a03b4957f462d9567e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wilson, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:52:43 np0005486808 systemd[1]: var-lib-containers-storage-overlay-b2225f479086b4b056008d60a18e10e6ffdb1574e910df239925c3b413d92a0e-merged.mount: Deactivated successfully.
Oct 14 05:52:43 np0005486808 podman[432297]: 2025-10-14 09:52:43.936394241 +0000 UTC m=+0.235910400 container remove c1d098479afb3f76001199b5402e282d73de3fd3123290a03b4957f462d9567e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wilson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:52:43 np0005486808 systemd[1]: libpod-conmon-c1d098479afb3f76001199b5402e282d73de3fd3123290a03b4957f462d9567e.scope: Deactivated successfully.
Oct 14 05:52:43 np0005486808 nova_compute[259627]: 2025-10-14 09:52:43.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:52:44 np0005486808 podman[432338]: 2025-10-14 09:52:44.181928536 +0000 UTC m=+0.070972193 container create 9d1dbaafc65cd3b1d14dd06b5eb2296aea4feb2d9ca3bcf83f2d28122189ac8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 14 05:52:44 np0005486808 systemd[1]: Started libpod-conmon-9d1dbaafc65cd3b1d14dd06b5eb2296aea4feb2d9ca3bcf83f2d28122189ac8f.scope.
Oct 14 05:52:44 np0005486808 podman[432338]: 2025-10-14 09:52:44.154053402 +0000 UTC m=+0.043097069 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:52:44 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:52:44 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0a9c758391a63e8c37d6209efd3276784b80b75803e529b88d705b481a2c1d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:52:44 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0a9c758391a63e8c37d6209efd3276784b80b75803e529b88d705b481a2c1d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:52:44 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0a9c758391a63e8c37d6209efd3276784b80b75803e529b88d705b481a2c1d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:52:44 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0a9c758391a63e8c37d6209efd3276784b80b75803e529b88d705b481a2c1d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:52:44 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0a9c758391a63e8c37d6209efd3276784b80b75803e529b88d705b481a2c1d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:52:44 np0005486808 podman[432338]: 2025-10-14 09:52:44.305633981 +0000 UTC m=+0.194677658 container init 9d1dbaafc65cd3b1d14dd06b5eb2296aea4feb2d9ca3bcf83f2d28122189ac8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wozniak, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:52:44 np0005486808 podman[432338]: 2025-10-14 09:52:44.32270839 +0000 UTC m=+0.211752037 container start 9d1dbaafc65cd3b1d14dd06b5eb2296aea4feb2d9ca3bcf83f2d28122189ac8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 14 05:52:44 np0005486808 podman[432338]: 2025-10-14 09:52:44.326856902 +0000 UTC m=+0.215900559 container attach 9d1dbaafc65cd3b1d14dd06b5eb2296aea4feb2d9ca3bcf83f2d28122189ac8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wozniak, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:52:45 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2998: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:52:45 np0005486808 gifted_wozniak[432354]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:52:45 np0005486808 gifted_wozniak[432354]: --> relative data size: 1.0
Oct 14 05:52:45 np0005486808 gifted_wozniak[432354]: --> All data devices are unavailable
Oct 14 05:52:45 np0005486808 systemd[1]: libpod-9d1dbaafc65cd3b1d14dd06b5eb2296aea4feb2d9ca3bcf83f2d28122189ac8f.scope: Deactivated successfully.
Oct 14 05:52:45 np0005486808 podman[432338]: 2025-10-14 09:52:45.351139824 +0000 UTC m=+1.240183481 container died 9d1dbaafc65cd3b1d14dd06b5eb2296aea4feb2d9ca3bcf83f2d28122189ac8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:52:45 np0005486808 systemd[1]: var-lib-containers-storage-overlay-0c0a9c758391a63e8c37d6209efd3276784b80b75803e529b88d705b481a2c1d-merged.mount: Deactivated successfully.
Oct 14 05:52:45 np0005486808 podman[432338]: 2025-10-14 09:52:45.403829597 +0000 UTC m=+1.292873214 container remove 9d1dbaafc65cd3b1d14dd06b5eb2296aea4feb2d9ca3bcf83f2d28122189ac8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:52:45 np0005486808 systemd[1]: libpod-conmon-9d1dbaafc65cd3b1d14dd06b5eb2296aea4feb2d9ca3bcf83f2d28122189ac8f.scope: Deactivated successfully.
Oct 14 05:52:46 np0005486808 podman[432537]: 2025-10-14 09:52:46.102671155 +0000 UTC m=+0.069935997 container create 1b3377594a4def4191f819999b40dae52a101112a280d6e533155a822108d410 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_satoshi, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 14 05:52:46 np0005486808 systemd[1]: Started libpod-conmon-1b3377594a4def4191f819999b40dae52a101112a280d6e533155a822108d410.scope.
Oct 14 05:52:46 np0005486808 podman[432537]: 2025-10-14 09:52:46.071809857 +0000 UTC m=+0.039074749 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:52:46 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:52:46 np0005486808 podman[432537]: 2025-10-14 09:52:46.195834091 +0000 UTC m=+0.163098943 container init 1b3377594a4def4191f819999b40dae52a101112a280d6e533155a822108d410 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 14 05:52:46 np0005486808 podman[432537]: 2025-10-14 09:52:46.204905033 +0000 UTC m=+0.172169855 container start 1b3377594a4def4191f819999b40dae52a101112a280d6e533155a822108d410 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_satoshi, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 14 05:52:46 np0005486808 agitated_satoshi[432554]: 167 167
Oct 14 05:52:46 np0005486808 podman[432537]: 2025-10-14 09:52:46.208737807 +0000 UTC m=+0.176002639 container attach 1b3377594a4def4191f819999b40dae52a101112a280d6e533155a822108d410 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_satoshi, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:52:46 np0005486808 podman[432537]: 2025-10-14 09:52:46.210613483 +0000 UTC m=+0.177878305 container died 1b3377594a4def4191f819999b40dae52a101112a280d6e533155a822108d410 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_satoshi, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:52:46 np0005486808 systemd[1]: libpod-1b3377594a4def4191f819999b40dae52a101112a280d6e533155a822108d410.scope: Deactivated successfully.
Oct 14 05:52:46 np0005486808 systemd[1]: var-lib-containers-storage-overlay-bc40e1e071cc8e85e3aa1b091bb46b5e1dd4a4f1734149f923084a4c9ad919d0-merged.mount: Deactivated successfully.
Oct 14 05:52:46 np0005486808 podman[432537]: 2025-10-14 09:52:46.245287994 +0000 UTC m=+0.212552816 container remove 1b3377594a4def4191f819999b40dae52a101112a280d6e533155a822108d410 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_satoshi, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Oct 14 05:52:46 np0005486808 systemd[1]: libpod-conmon-1b3377594a4def4191f819999b40dae52a101112a280d6e533155a822108d410.scope: Deactivated successfully.
Oct 14 05:52:46 np0005486808 podman[432577]: 2025-10-14 09:52:46.488367879 +0000 UTC m=+0.074558771 container create 2c6be3d038e6886a0979c286628c255441a054f250616621f7e6e7b7d128c9de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 14 05:52:46 np0005486808 systemd[1]: Started libpod-conmon-2c6be3d038e6886a0979c286628c255441a054f250616621f7e6e7b7d128c9de.scope.
Oct 14 05:52:46 np0005486808 podman[432577]: 2025-10-14 09:52:46.459284865 +0000 UTC m=+0.045475827 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:52:46 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:52:46 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84899a1a7e486ee5e1dd68b494c127c9e92af5fea8f95aa736113a412675e3cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:52:46 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84899a1a7e486ee5e1dd68b494c127c9e92af5fea8f95aa736113a412675e3cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:52:46 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84899a1a7e486ee5e1dd68b494c127c9e92af5fea8f95aa736113a412675e3cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:52:46 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84899a1a7e486ee5e1dd68b494c127c9e92af5fea8f95aa736113a412675e3cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:52:46 np0005486808 podman[432577]: 2025-10-14 09:52:46.580343805 +0000 UTC m=+0.166534697 container init 2c6be3d038e6886a0979c286628c255441a054f250616621f7e6e7b7d128c9de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 14 05:52:46 np0005486808 podman[432577]: 2025-10-14 09:52:46.589989812 +0000 UTC m=+0.176180684 container start 2c6be3d038e6886a0979c286628c255441a054f250616621f7e6e7b7d128c9de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_gates, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 14 05:52:46 np0005486808 podman[432577]: 2025-10-14 09:52:46.593612371 +0000 UTC m=+0.179803273 container attach 2c6be3d038e6886a0979c286628c255441a054f250616621f7e6e7b7d128c9de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:52:46 np0005486808 nova_compute[259627]: 2025-10-14 09:52:46.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:52:47 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v2999: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:52:47 np0005486808 competent_gates[432594]: {
Oct 14 05:52:47 np0005486808 competent_gates[432594]:    "0": [
Oct 14 05:52:47 np0005486808 competent_gates[432594]:        {
Oct 14 05:52:47 np0005486808 competent_gates[432594]:            "devices": [
Oct 14 05:52:47 np0005486808 competent_gates[432594]:                "/dev/loop3"
Oct 14 05:52:47 np0005486808 competent_gates[432594]:            ],
Oct 14 05:52:47 np0005486808 competent_gates[432594]:            "lv_name": "ceph_lv0",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:            "lv_size": "21470642176",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:            "name": "ceph_lv0",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:            "tags": {
Oct 14 05:52:47 np0005486808 competent_gates[432594]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:                "ceph.cluster_name": "ceph",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:                "ceph.crush_device_class": "",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:                "ceph.encrypted": "0",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:                "ceph.osd_id": "0",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:                "ceph.type": "block",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:                "ceph.vdo": "0"
Oct 14 05:52:47 np0005486808 competent_gates[432594]:            },
Oct 14 05:52:47 np0005486808 competent_gates[432594]:            "type": "block",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:            "vg_name": "ceph_vg0"
Oct 14 05:52:47 np0005486808 competent_gates[432594]:        }
Oct 14 05:52:47 np0005486808 competent_gates[432594]:    ],
Oct 14 05:52:47 np0005486808 competent_gates[432594]:    "1": [
Oct 14 05:52:47 np0005486808 competent_gates[432594]:        {
Oct 14 05:52:47 np0005486808 competent_gates[432594]:            "devices": [
Oct 14 05:52:47 np0005486808 competent_gates[432594]:                "/dev/loop4"
Oct 14 05:52:47 np0005486808 competent_gates[432594]:            ],
Oct 14 05:52:47 np0005486808 competent_gates[432594]:            "lv_name": "ceph_lv1",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:            "lv_size": "21470642176",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:            "name": "ceph_lv1",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:            "tags": {
Oct 14 05:52:47 np0005486808 competent_gates[432594]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:                "ceph.cluster_name": "ceph",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:                "ceph.crush_device_class": "",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:                "ceph.encrypted": "0",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:                "ceph.osd_id": "1",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:                "ceph.type": "block",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:                "ceph.vdo": "0"
Oct 14 05:52:47 np0005486808 competent_gates[432594]:            },
Oct 14 05:52:47 np0005486808 competent_gates[432594]:            "type": "block",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:            "vg_name": "ceph_vg1"
Oct 14 05:52:47 np0005486808 competent_gates[432594]:        }
Oct 14 05:52:47 np0005486808 competent_gates[432594]:    ],
Oct 14 05:52:47 np0005486808 competent_gates[432594]:    "2": [
Oct 14 05:52:47 np0005486808 competent_gates[432594]:        {
Oct 14 05:52:47 np0005486808 competent_gates[432594]:            "devices": [
Oct 14 05:52:47 np0005486808 competent_gates[432594]:                "/dev/loop5"
Oct 14 05:52:47 np0005486808 competent_gates[432594]:            ],
Oct 14 05:52:47 np0005486808 competent_gates[432594]:            "lv_name": "ceph_lv2",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:            "lv_size": "21470642176",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:            "name": "ceph_lv2",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:            "tags": {
Oct 14 05:52:47 np0005486808 competent_gates[432594]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:                "ceph.cluster_name": "ceph",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:                "ceph.crush_device_class": "",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:                "ceph.encrypted": "0",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:                "ceph.osd_id": "2",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:                "ceph.type": "block",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:                "ceph.vdo": "0"
Oct 14 05:52:47 np0005486808 competent_gates[432594]:            },
Oct 14 05:52:47 np0005486808 competent_gates[432594]:            "type": "block",
Oct 14 05:52:47 np0005486808 competent_gates[432594]:            "vg_name": "ceph_vg2"
Oct 14 05:52:47 np0005486808 competent_gates[432594]:        }
Oct 14 05:52:47 np0005486808 competent_gates[432594]:    ]
Oct 14 05:52:47 np0005486808 competent_gates[432594]: }
Oct 14 05:52:47 np0005486808 systemd[1]: libpod-2c6be3d038e6886a0979c286628c255441a054f250616621f7e6e7b7d128c9de.scope: Deactivated successfully.
Oct 14 05:52:47 np0005486808 podman[432577]: 2025-10-14 09:52:47.37157488 +0000 UTC m=+0.957765812 container died 2c6be3d038e6886a0979c286628c255441a054f250616621f7e6e7b7d128c9de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_gates, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:52:47 np0005486808 systemd[1]: var-lib-containers-storage-overlay-84899a1a7e486ee5e1dd68b494c127c9e92af5fea8f95aa736113a412675e3cb-merged.mount: Deactivated successfully.
Oct 14 05:52:47 np0005486808 podman[432577]: 2025-10-14 09:52:47.449557104 +0000 UTC m=+1.035747966 container remove 2c6be3d038e6886a0979c286628c255441a054f250616621f7e6e7b7d128c9de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_gates, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Oct 14 05:52:47 np0005486808 systemd[1]: libpod-conmon-2c6be3d038e6886a0979c286628c255441a054f250616621f7e6e7b7d128c9de.scope: Deactivated successfully.
Oct 14 05:52:48 np0005486808 podman[432753]: 2025-10-14 09:52:48.17321221 +0000 UTC m=+0.070958202 container create 0603fe28aac1d9111a31c7db605779a1e42e1b7c0c005e0a97e3654980207bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_curie, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:52:48 np0005486808 systemd[1]: Started libpod-conmon-0603fe28aac1d9111a31c7db605779a1e42e1b7c0c005e0a97e3654980207bb0.scope.
Oct 14 05:52:48 np0005486808 podman[432753]: 2025-10-14 09:52:48.145189013 +0000 UTC m=+0.042935075 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:52:48 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:52:48 np0005486808 podman[432753]: 2025-10-14 09:52:48.278884873 +0000 UTC m=+0.176630945 container init 0603fe28aac1d9111a31c7db605779a1e42e1b7c0c005e0a97e3654980207bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 14 05:52:48 np0005486808 podman[432753]: 2025-10-14 09:52:48.285808412 +0000 UTC m=+0.183554374 container start 0603fe28aac1d9111a31c7db605779a1e42e1b7c0c005e0a97e3654980207bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_curie, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 14 05:52:48 np0005486808 objective_curie[432770]: 167 167
Oct 14 05:52:48 np0005486808 podman[432753]: 2025-10-14 09:52:48.292570988 +0000 UTC m=+0.190317030 container attach 0603fe28aac1d9111a31c7db605779a1e42e1b7c0c005e0a97e3654980207bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_curie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 14 05:52:48 np0005486808 systemd[1]: libpod-0603fe28aac1d9111a31c7db605779a1e42e1b7c0c005e0a97e3654980207bb0.scope: Deactivated successfully.
Oct 14 05:52:48 np0005486808 podman[432753]: 2025-10-14 09:52:48.294230149 +0000 UTC m=+0.191976111 container died 0603fe28aac1d9111a31c7db605779a1e42e1b7c0c005e0a97e3654980207bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:52:48 np0005486808 systemd[1]: var-lib-containers-storage-overlay-0d7bd97cc1c05027765deedbe5239c9ec16b97349867167553784f8c5d4133cd-merged.mount: Deactivated successfully.
Oct 14 05:52:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:52:48 np0005486808 podman[432753]: 2025-10-14 09:52:48.34277764 +0000 UTC m=+0.240523602 container remove 0603fe28aac1d9111a31c7db605779a1e42e1b7c0c005e0a97e3654980207bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:52:48 np0005486808 systemd[1]: libpod-conmon-0603fe28aac1d9111a31c7db605779a1e42e1b7c0c005e0a97e3654980207bb0.scope: Deactivated successfully.
Oct 14 05:52:48 np0005486808 podman[432794]: 2025-10-14 09:52:48.515650142 +0000 UTC m=+0.043892728 container create f8740e28de4f8d4710c7ab0e5a2e10de6cc63312869956ee0a0135940261dd01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 14 05:52:48 np0005486808 systemd[1]: Started libpod-conmon-f8740e28de4f8d4710c7ab0e5a2e10de6cc63312869956ee0a0135940261dd01.scope.
Oct 14 05:52:48 np0005486808 podman[432794]: 2025-10-14 09:52:48.498820969 +0000 UTC m=+0.027063565 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:52:48 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:52:48 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/766277947d977e711fd44530d3391e0ae10c2473477cfed455c27d8aa99175b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:52:48 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/766277947d977e711fd44530d3391e0ae10c2473477cfed455c27d8aa99175b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:52:48 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/766277947d977e711fd44530d3391e0ae10c2473477cfed455c27d8aa99175b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:52:48 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/766277947d977e711fd44530d3391e0ae10c2473477cfed455c27d8aa99175b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:52:48 np0005486808 podman[432794]: 2025-10-14 09:52:48.628439749 +0000 UTC m=+0.156682345 container init f8740e28de4f8d4710c7ab0e5a2e10de6cc63312869956ee0a0135940261dd01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 14 05:52:48 np0005486808 podman[432794]: 2025-10-14 09:52:48.642669638 +0000 UTC m=+0.170912224 container start f8740e28de4f8d4710c7ab0e5a2e10de6cc63312869956ee0a0135940261dd01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Oct 14 05:52:48 np0005486808 podman[432794]: 2025-10-14 09:52:48.646642716 +0000 UTC m=+0.174885312 container attach f8740e28de4f8d4710c7ab0e5a2e10de6cc63312869956ee0a0135940261dd01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_feynman, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 14 05:52:48 np0005486808 nova_compute[259627]: 2025-10-14 09:52:48.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:52:49 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3000: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:52:49 np0005486808 quizzical_feynman[432812]: {
Oct 14 05:52:49 np0005486808 quizzical_feynman[432812]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:52:49 np0005486808 quizzical_feynman[432812]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:52:49 np0005486808 quizzical_feynman[432812]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:52:49 np0005486808 quizzical_feynman[432812]:        "osd_id": 2,
Oct 14 05:52:49 np0005486808 quizzical_feynman[432812]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:52:49 np0005486808 quizzical_feynman[432812]:        "type": "bluestore"
Oct 14 05:52:49 np0005486808 quizzical_feynman[432812]:    },
Oct 14 05:52:49 np0005486808 quizzical_feynman[432812]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:52:49 np0005486808 quizzical_feynman[432812]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:52:49 np0005486808 quizzical_feynman[432812]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:52:49 np0005486808 quizzical_feynman[432812]:        "osd_id": 1,
Oct 14 05:52:49 np0005486808 quizzical_feynman[432812]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:52:49 np0005486808 quizzical_feynman[432812]:        "type": "bluestore"
Oct 14 05:52:49 np0005486808 quizzical_feynman[432812]:    },
Oct 14 05:52:49 np0005486808 quizzical_feynman[432812]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:52:49 np0005486808 quizzical_feynman[432812]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:52:49 np0005486808 quizzical_feynman[432812]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:52:49 np0005486808 quizzical_feynman[432812]:        "osd_id": 0,
Oct 14 05:52:49 np0005486808 quizzical_feynman[432812]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:52:49 np0005486808 quizzical_feynman[432812]:        "type": "bluestore"
Oct 14 05:52:49 np0005486808 quizzical_feynman[432812]:    }
Oct 14 05:52:49 np0005486808 quizzical_feynman[432812]: }
Oct 14 05:52:49 np0005486808 podman[432841]: 2025-10-14 09:52:49.659379246 +0000 UTC m=+0.062062744 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:52:49 np0005486808 systemd[1]: libpod-f8740e28de4f8d4710c7ab0e5a2e10de6cc63312869956ee0a0135940261dd01.scope: Deactivated successfully.
Oct 14 05:52:49 np0005486808 systemd[1]: libpod-f8740e28de4f8d4710c7ab0e5a2e10de6cc63312869956ee0a0135940261dd01.scope: Consumed 1.038s CPU time.
Oct 14 05:52:49 np0005486808 podman[432842]: 2025-10-14 09:52:49.683895277 +0000 UTC m=+0.085442357 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid)
Oct 14 05:52:49 np0005486808 podman[432886]: 2025-10-14 09:52:49.719586673 +0000 UTC m=+0.029695679 container died f8740e28de4f8d4710c7ab0e5a2e10de6cc63312869956ee0a0135940261dd01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_feynman, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Oct 14 05:52:49 np0005486808 systemd[1]: var-lib-containers-storage-overlay-766277947d977e711fd44530d3391e0ae10c2473477cfed455c27d8aa99175b5-merged.mount: Deactivated successfully.
Oct 14 05:52:49 np0005486808 podman[432886]: 2025-10-14 09:52:49.77123455 +0000 UTC m=+0.081343556 container remove f8740e28de4f8d4710c7ab0e5a2e10de6cc63312869956ee0a0135940261dd01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Oct 14 05:52:49 np0005486808 systemd[1]: libpod-conmon-f8740e28de4f8d4710c7ab0e5a2e10de6cc63312869956ee0a0135940261dd01.scope: Deactivated successfully.
Oct 14 05:52:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:52:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:52:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:52:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:52:49 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev e12c5edf-a088-49f6-a15a-b8fa14e8af3b does not exist
Oct 14 05:52:49 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 33d4238e-a4ba-4319-864d-74579f2e80f8 does not exist
Oct 14 05:52:50 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:52:50 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:52:51 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3001: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:52:51 np0005486808 nova_compute[259627]: 2025-10-14 09:52:51.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:52:53 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3002: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:52:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:52:53 np0005486808 nova_compute[259627]: 2025-10-14 09:52:53.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:52:55 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3003: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:52:56 np0005486808 nova_compute[259627]: 2025-10-14 09:52:56.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:52:57 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3004: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:52:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:52:58 np0005486808 nova_compute[259627]: 2025-10-14 09:52:58.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:52:59 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3005: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:53:01 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3006: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:53:01 np0005486808 nova_compute[259627]: 2025-10-14 09:53:01.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:53:01 np0005486808 nova_compute[259627]: 2025-10-14 09:53:01.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:53:02 np0005486808 podman[432952]: 2025-10-14 09:53:02.671166699 +0000 UTC m=+0.068922322 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct 14 05:53:02 np0005486808 podman[432951]: 2025-10-14 09:53:02.714503943 +0000 UTC m=+0.116130060 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct 14 05:53:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:53:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:53:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:53:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:53:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:53:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:53:03 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3007: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:53:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:53:04 np0005486808 nova_compute[259627]: 2025-10-14 09:53:04.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:53:05 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3008: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:53:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:53:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4090996532' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:53:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:53:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4090996532' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:53:05 np0005486808 nova_compute[259627]: 2025-10-14 09:53:05.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:53:06 np0005486808 nova_compute[259627]: 2025-10-14 09:53:06.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:53:07 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3009: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:53:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:53:07.070 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:53:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:53:07.070 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:53:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:53:07.071 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:53:07 np0005486808 nova_compute[259627]: 2025-10-14 09:53:07.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:53:08 np0005486808 nova_compute[259627]: 2025-10-14 09:53:08.053 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:53:08 np0005486808 nova_compute[259627]: 2025-10-14 09:53:08.054 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:53:08 np0005486808 nova_compute[259627]: 2025-10-14 09:53:08.054 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:53:08 np0005486808 nova_compute[259627]: 2025-10-14 09:53:08.055 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:53:08 np0005486808 nova_compute[259627]: 2025-10-14 09:53:08.056 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:53:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:53:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:53:08 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2149712963' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:53:08 np0005486808 nova_compute[259627]: 2025-10-14 09:53:08.513 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:53:08 np0005486808 nova_compute[259627]: 2025-10-14 09:53:08.734 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:53:08 np0005486808 nova_compute[259627]: 2025-10-14 09:53:08.735 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3588MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:53:08 np0005486808 nova_compute[259627]: 2025-10-14 09:53:08.735 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:53:08 np0005486808 nova_compute[259627]: 2025-10-14 09:53:08.735 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:53:08 np0005486808 nova_compute[259627]: 2025-10-14 09:53:08.816 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:53:08 np0005486808 nova_compute[259627]: 2025-10-14 09:53:08.817 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:53:08 np0005486808 nova_compute[259627]: 2025-10-14 09:53:08.922 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:53:09 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3010: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:53:09 np0005486808 nova_compute[259627]: 2025-10-14 09:53:09.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:53:09 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:53:09 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1481508238' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:53:09 np0005486808 nova_compute[259627]: 2025-10-14 09:53:09.387 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:53:09 np0005486808 nova_compute[259627]: 2025-10-14 09:53:09.393 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:53:09 np0005486808 nova_compute[259627]: 2025-10-14 09:53:09.414 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:53:09 np0005486808 nova_compute[259627]: 2025-10-14 09:53:09.416 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:53:09 np0005486808 nova_compute[259627]: 2025-10-14 09:53:09.417 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:53:11 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3011: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:53:11 np0005486808 nova_compute[259627]: 2025-10-14 09:53:11.413 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:53:11 np0005486808 nova_compute[259627]: 2025-10-14 09:53:11.414 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:53:11 np0005486808 nova_compute[259627]: 2025-10-14 09:53:11.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:53:12 np0005486808 nova_compute[259627]: 2025-10-14 09:53:12.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:53:12 np0005486808 nova_compute[259627]: 2025-10-14 09:53:12.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:53:12 np0005486808 nova_compute[259627]: 2025-10-14 09:53:12.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 05:53:13 np0005486808 nova_compute[259627]: 2025-10-14 09:53:13.005 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 14 05:53:13 np0005486808 nova_compute[259627]: 2025-10-14 09:53:13.006 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:53:13 np0005486808 nova_compute[259627]: 2025-10-14 09:53:13.006 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:53:13 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3012: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:53:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:53:14 np0005486808 nova_compute[259627]: 2025-10-14 09:53:14.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:53:15 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3013: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:53:15 np0005486808 nova_compute[259627]: 2025-10-14 09:53:15.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:53:16 np0005486808 nova_compute[259627]: 2025-10-14 09:53:16.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:53:17 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3014: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:53:17 np0005486808 nova_compute[259627]: 2025-10-14 09:53:17.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:53:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:53:19 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3015: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:53:19 np0005486808 nova_compute[259627]: 2025-10-14 09:53:19.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:53:20 np0005486808 podman[433038]: 2025-10-14 09:53:20.681916498 +0000 UTC m=+0.085142430 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:53:20 np0005486808 podman[433039]: 2025-10-14 09:53:20.695958753 +0000 UTC m=+0.092675315 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:53:21 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3016: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:53:21 np0005486808 nova_compute[259627]: 2025-10-14 09:53:21.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:53:23 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3017: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:53:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:53:24 np0005486808 nova_compute[259627]: 2025-10-14 09:53:24.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:53:25 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3018: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:53:26 np0005486808 nova_compute[259627]: 2025-10-14 09:53:26.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:53:27 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3019: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:53:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:53:29 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3020: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:53:29 np0005486808 nova_compute[259627]: 2025-10-14 09:53:29.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:53:29 np0005486808 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 05:53:29 np0005486808 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 5400.1 total, 600.0 interval
Cumulative writes: 46K writes, 184K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.04 MB/s
Cumulative WAL: 46K writes, 17K syncs, 2.73 writes per sync, written: 0.19 GB, 0.04 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 709 writes, 2213 keys, 709 commit groups, 1.0 writes per commit group, ingest: 1.43 MB, 0.00 MB/s
Interval WAL: 710 writes, 312 syncs, 2.28 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 05:53:31 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3021: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:53:31 np0005486808 nova_compute[259627]: 2025-10-14 09:53:31.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:53:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:53:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:53:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:53:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:53:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:53:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:53:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:53:32
Oct 14 05:53:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:53:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:53:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'vms', '.mgr', 'default.rgw.meta', 'volumes', 'images']
Oct 14 05:53:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:53:33 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3022: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:53:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:53:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:53:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:53:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:53:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:53:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:53:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:53:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:53:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:53:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:53:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:53:33 np0005486808 podman[433081]: 2025-10-14 09:53:33.654845046 +0000 UTC m=+0.069782094 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 05:53:33 np0005486808 podman[433080]: 2025-10-14 09:53:33.731892476 +0000 UTC m=+0.142939868 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller)
Oct 14 05:53:34 np0005486808 nova_compute[259627]: 2025-10-14 09:53:34.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:53:34 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 05:53:34 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 5400.1 total, 600.0 interval
Cumulative writes: 46K writes, 178K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
Cumulative WAL: 46K writes, 17K syncs, 2.70 writes per sync, written: 0.17 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 762 writes, 2197 keys, 762 commit groups, 1.0 writes per commit group, ingest: 0.90 MB, 0.00 MB/s
Interval WAL: 762 writes, 346 syncs, 2.20 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 05:53:35 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3023: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:53:36 np0005486808 nova_compute[259627]: 2025-10-14 09:53:36.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:53:37 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3024: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:53:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:53:39 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3025: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:53:39 np0005486808 nova_compute[259627]: 2025-10-14 09:53:39.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:53:39 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 05:53:39 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 5400.1 total, 600.0 interval
Cumulative writes: 38K writes, 150K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s
Cumulative WAL: 38K writes, 14K syncs, 2.71 writes per sync, written: 0.15 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 818 writes, 2112 keys, 818 commit groups, 1.0 writes per commit group, ingest: 1.15 MB, 0.00 MB/s
Interval WAL: 818 writes, 377 syncs, 2.17 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 05:53:40 np0005486808 ceph-mgr[74543]: [devicehealth INFO root] Check health
Oct 14 05:53:41 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3026: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:53:41 np0005486808 nova_compute[259627]: 2025-10-14 09:53:41.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:53:43 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3027: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:53:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:53:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:53:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:53:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:53:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:53:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 05:53:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:53:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:53:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:53:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:53:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:53:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 14 05:53:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:53:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:53:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:53:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:53:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:53:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:53:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:53:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:53:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:53:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:53:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:53:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:53:44 np0005486808 nova_compute[259627]: 2025-10-14 09:53:44.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:53:45 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3028: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:53:46 np0005486808 nova_compute[259627]: 2025-10-14 09:53:46.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:53:47 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3029: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:53:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:53:49 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3030: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:53:49 np0005486808 nova_compute[259627]: 2025-10-14 09:53:49.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:53:49 np0005486808 systemd-logind[799]: New session 55 of user zuul.
Oct 14 05:53:49 np0005486808 systemd[1]: Started Session 55 of User zuul.
Oct 14 05:53:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:53:49.712 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 14 05:53:49 np0005486808 nova_compute[259627]: 2025-10-14 09:53:49.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:53:49 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:53:49.714 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 14 05:53:50 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #147. Immutable memtables: 0.
Oct 14 05:53:50 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:53:50.307139) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 05:53:50 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 147
Oct 14 05:53:50 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435630307232, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 1667, "num_deletes": 251, "total_data_size": 2718819, "memory_usage": 2765888, "flush_reason": "Manual Compaction"}
Oct 14 05:53:50 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #148: started
Oct 14 05:53:50 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435630429500, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 148, "file_size": 2681686, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61773, "largest_seqno": 63439, "table_properties": {"data_size": 2673909, "index_size": 4719, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15622, "raw_average_key_size": 19, "raw_value_size": 2658480, "raw_average_value_size": 3395, "num_data_blocks": 210, "num_entries": 783, "num_filter_entries": 783, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760435444, "oldest_key_time": 1760435444, "file_creation_time": 1760435630, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 148, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:53:50 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 123379 microseconds, and 8649 cpu microseconds.
Oct 14 05:53:50 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:53:50 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:53:50.430522) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #148: 2681686 bytes OK
Oct 14 05:53:50 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:53:50.430883) [db/memtable_list.cc:519] [default] Level-0 commit table #148 started
Oct 14 05:53:50 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:53:50.469995) [db/memtable_list.cc:722] [default] Level-0 commit table #148: memtable #1 done
Oct 14 05:53:50 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:53:50.470098) EVENT_LOG_v1 {"time_micros": 1760435630470083, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 05:53:50 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:53:50.470135) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 05:53:50 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 2711675, prev total WAL file size 2711675, number of live WAL files 2.
Oct 14 05:53:50 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000144.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:53:50 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:53:50.472639) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Oct 14 05:53:50 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 05:53:50 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [148(2618KB)], [146(10188KB)]
Oct 14 05:53:50 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435630472682, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [148], "files_L6": [146], "score": -1, "input_data_size": 13115085, "oldest_snapshot_seqno": -1}
Oct 14 05:53:50 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #149: 8232 keys, 11371204 bytes, temperature: kUnknown
Oct 14 05:53:50 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435630655296, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 149, "file_size": 11371204, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11316072, "index_size": 33451, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20613, "raw_key_size": 215353, "raw_average_key_size": 26, "raw_value_size": 11168907, "raw_average_value_size": 1356, "num_data_blocks": 1304, "num_entries": 8232, "num_filter_entries": 8232, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760435630, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:53:50 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:53:50 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:53:50.655686) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 11371204 bytes
Oct 14 05:53:50 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:53:50.684041) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 71.8 rd, 62.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 10.0 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(9.1) write-amplify(4.2) OK, records in: 8746, records dropped: 514 output_compression: NoCompression
Oct 14 05:53:50 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:53:50.684109) EVENT_LOG_v1 {"time_micros": 1760435630684084, "job": 90, "event": "compaction_finished", "compaction_time_micros": 182727, "compaction_time_cpu_micros": 41766, "output_level": 6, "num_output_files": 1, "total_output_size": 11371204, "num_input_records": 8746, "num_output_records": 8232, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 05:53:50 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000148.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:53:50 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435630684810, "job": 90, "event": "table_file_deletion", "file_number": 148}
Oct 14 05:53:50 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:53:50 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435630687334, "job": 90, "event": "table_file_deletion", "file_number": 146}
Oct 14 05:53:50 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:53:50.472527) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:53:50 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:53:50.687380) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:53:50 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:53:50.687386) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:53:50 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:53:50.687388) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:53:50 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:53:50.687389) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:53:50 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:53:50.687391) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:53:50 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:53:50 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:53:50 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:53:50 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:53:50 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:53:51 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:53:51 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev d0d6f2d8-0d72-4d0b-90c4-6324938c56ad does not exist
Oct 14 05:53:51 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 6d8a1119-cef5-49fd-b0fb-ac0a2f8030ad does not exist
Oct 14 05:53:51 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 059ee0e8-9259-4e3e-83de-67f241b8a80e does not exist
Oct 14 05:53:51 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:53:51 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:53:51 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:53:51 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:53:51 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:53:51 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:53:51 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3031: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:53:51 np0005486808 podman[433384]: 2025-10-14 09:53:51.201716216 +0000 UTC m=+0.093134977 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251009, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:53:51 np0005486808 podman[433395]: 2025-10-14 09:53:51.203352576 +0000 UTC m=+0.095923915 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 14 05:53:51 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:53:51 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:53:51 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:53:51 np0005486808 systemd[1]: Reloading.
Oct 14 05:53:51 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 05:53:51 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 05:53:51 np0005486808 podman[433583]: 2025-10-14 09:53:51.747241301 +0000 UTC m=+0.064188636 container create cbbbfc1900035f1e1f0db1d1d6036731ba73aa775a1eafb52f41d580caa6fcc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_jennings, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 14 05:53:51 np0005486808 podman[433583]: 2025-10-14 09:53:51.713562335 +0000 UTC m=+0.030509680 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:53:51 np0005486808 nova_compute[259627]: 2025-10-14 09:53:51.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:53:52 np0005486808 systemd[1]: Started libpod-conmon-cbbbfc1900035f1e1f0db1d1d6036731ba73aa775a1eafb52f41d580caa6fcc1.scope.
Oct 14 05:53:52 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:53:52 np0005486808 podman[433583]: 2025-10-14 09:53:52.423644339 +0000 UTC m=+0.740591724 container init cbbbfc1900035f1e1f0db1d1d6036731ba73aa775a1eafb52f41d580caa6fcc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_jennings, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:53:52 np0005486808 podman[433583]: 2025-10-14 09:53:52.433436819 +0000 UTC m=+0.750384154 container start cbbbfc1900035f1e1f0db1d1d6036731ba73aa775a1eafb52f41d580caa6fcc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 14 05:53:52 np0005486808 podman[433583]: 2025-10-14 09:53:52.43797815 +0000 UTC m=+0.754925565 container attach cbbbfc1900035f1e1f0db1d1d6036731ba73aa775a1eafb52f41d580caa6fcc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_jennings, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:53:52 np0005486808 nice_jennings[433626]: 167 167
Oct 14 05:53:52 np0005486808 systemd[1]: libpod-cbbbfc1900035f1e1f0db1d1d6036731ba73aa775a1eafb52f41d580caa6fcc1.scope: Deactivated successfully.
Oct 14 05:53:52 np0005486808 podman[433583]: 2025-10-14 09:53:52.443237969 +0000 UTC m=+0.760185354 container died cbbbfc1900035f1e1f0db1d1d6036731ba73aa775a1eafb52f41d580caa6fcc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Oct 14 05:53:52 np0005486808 systemd[1]: var-lib-containers-storage-overlay-b3191cab1474d733d1bffe39d23696916ddd2a8057099cb9990cfac5afd18e90-merged.mount: Deactivated successfully.
Oct 14 05:53:52 np0005486808 podman[433583]: 2025-10-14 09:53:52.498465244 +0000 UTC m=+0.815412609 container remove cbbbfc1900035f1e1f0db1d1d6036731ba73aa775a1eafb52f41d580caa6fcc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:53:52 np0005486808 systemd[1]: libpod-conmon-cbbbfc1900035f1e1f0db1d1d6036731ba73aa775a1eafb52f41d580caa6fcc1.scope: Deactivated successfully.
Oct 14 05:53:52 np0005486808 systemd[1]: Reloading.
Oct 14 05:53:52 np0005486808 podman[433652]: 2025-10-14 09:53:52.676529164 +0000 UTC m=+0.044189346 container create 2fc2ab51c7bb72ad529911136f5543eae4646eebf5fff9555084cb919d279494 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wilson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:53:52 np0005486808 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 14 05:53:52 np0005486808 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 14 05:53:52 np0005486808 podman[433652]: 2025-10-14 09:53:52.656945393 +0000 UTC m=+0.024605575 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:53:52 np0005486808 systemd[1]: Started libpod-conmon-2fc2ab51c7bb72ad529911136f5543eae4646eebf5fff9555084cb919d279494.scope.
Oct 14 05:53:52 np0005486808 systemd[1]: Starting Podman API Socket...
Oct 14 05:53:52 np0005486808 systemd[1]: Listening on Podman API Socket.
Oct 14 05:53:53 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:53:53 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40efb8da77d563d238777b09d665e5e9ee879f5716f281aa05b8b3f68d29b3f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:53:53 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40efb8da77d563d238777b09d665e5e9ee879f5716f281aa05b8b3f68d29b3f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:53:53 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40efb8da77d563d238777b09d665e5e9ee879f5716f281aa05b8b3f68d29b3f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:53:53 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40efb8da77d563d238777b09d665e5e9ee879f5716f281aa05b8b3f68d29b3f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:53:53 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40efb8da77d563d238777b09d665e5e9ee879f5716f281aa05b8b3f68d29b3f2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:53:53 np0005486808 podman[433652]: 2025-10-14 09:53:53.064762539 +0000 UTC m=+0.432422741 container init 2fc2ab51c7bb72ad529911136f5543eae4646eebf5fff9555084cb919d279494 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 14 05:53:53 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3032: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:53:53 np0005486808 podman[433652]: 2025-10-14 09:53:53.085091618 +0000 UTC m=+0.452751780 container start 2fc2ab51c7bb72ad529911136f5543eae4646eebf5fff9555084cb919d279494 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:53:53 np0005486808 podman[433652]: 2025-10-14 09:53:53.088571923 +0000 UTC m=+0.456232135 container attach 2fc2ab51c7bb72ad529911136f5543eae4646eebf5fff9555084cb919d279494 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:53:53 np0005486808 dbus-broker-launch[782]: avc:  op=setenforce lsm=selinux enforcing=0 res=1
Oct 14 05:53:53 np0005486808 systemd[1]: podman.socket: Deactivated successfully.
Oct 14 05:53:53 np0005486808 systemd[1]: Closed Podman API Socket.
Oct 14 05:53:53 np0005486808 systemd[1]: Stopping Podman API Socket...
Oct 14 05:53:53 np0005486808 systemd[1]: Starting Podman API Socket...
Oct 14 05:53:53 np0005486808 systemd[1]: Listening on Podman API Socket.
Oct 14 05:53:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:53:53 np0005486808 systemd-logind[799]: New session 56 of user zuul.
Oct 14 05:53:53 np0005486808 systemd[1]: Started Session 56 of User zuul.
Oct 14 05:53:53 np0005486808 systemd[1]: Starting Podman API Service...
Oct 14 05:53:53 np0005486808 systemd[1]: Started Podman API Service.
Oct 14 05:53:53 np0005486808 podman[433735]: time="2025-10-14T09:53:53Z" level=info msg="/usr/bin/podman filtering at log level info"
Oct 14 05:53:53 np0005486808 podman[433735]: time="2025-10-14T09:53:53Z" level=info msg="Setting parallel job count to 25"
Oct 14 05:53:53 np0005486808 podman[433735]: time="2025-10-14T09:53:53Z" level=info msg="Using sqlite as database backend"
Oct 14 05:53:53 np0005486808 podman[433735]: time="2025-10-14T09:53:53Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Oct 14 05:53:53 np0005486808 podman[433735]: time="2025-10-14T09:53:53Z" level=info msg="Using systemd socket activation to determine API endpoint"
Oct 14 05:53:53 np0005486808 podman[433735]: time="2025-10-14T09:53:53Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Oct 14 05:53:53 np0005486808 podman[433735]: @ - - [14/Oct/2025:09:53:53 +0000] "HEAD /v4.7.0/libpod/_ping HTTP/1.1" 200 0 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Oct 14 05:53:53 np0005486808 podman[433735]: @ - - [14/Oct/2025:09:53:53 +0000] "GET /v4.7.0/libpod/containers/json HTTP/1.1" 200 29171 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Oct 14 05:53:54 np0005486808 peaceful_wilson[433705]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:53:54 np0005486808 peaceful_wilson[433705]: --> relative data size: 1.0
Oct 14 05:53:54 np0005486808 peaceful_wilson[433705]: --> All data devices are unavailable
Oct 14 05:53:54 np0005486808 systemd[1]: libpod-2fc2ab51c7bb72ad529911136f5543eae4646eebf5fff9555084cb919d279494.scope: Deactivated successfully.
Oct 14 05:53:54 np0005486808 systemd[1]: libpod-2fc2ab51c7bb72ad529911136f5543eae4646eebf5fff9555084cb919d279494.scope: Consumed 1.020s CPU time.
Oct 14 05:53:54 np0005486808 podman[433652]: 2025-10-14 09:53:54.153087913 +0000 UTC m=+1.520748095 container died 2fc2ab51c7bb72ad529911136f5543eae4646eebf5fff9555084cb919d279494 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:53:54 np0005486808 systemd[1]: var-lib-containers-storage-overlay-40efb8da77d563d238777b09d665e5e9ee879f5716f281aa05b8b3f68d29b3f2-merged.mount: Deactivated successfully.
Oct 14 05:53:54 np0005486808 nova_compute[259627]: 2025-10-14 09:53:54.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:53:54 np0005486808 podman[433652]: 2025-10-14 09:53:54.337934609 +0000 UTC m=+1.705594811 container remove 2fc2ab51c7bb72ad529911136f5543eae4646eebf5fff9555084cb919d279494 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 05:53:54 np0005486808 systemd[1]: libpod-conmon-2fc2ab51c7bb72ad529911136f5543eae4646eebf5fff9555084cb919d279494.scope: Deactivated successfully.
Oct 14 05:53:55 np0005486808 podman[433926]: 2025-10-14 09:53:55.038344355 +0000 UTC m=+0.047708861 container create 516495ae47fed8716097ec72e699d799a2c9d275c77b160e47fdfd2d445ecb13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bohr, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:53:55 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3033: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:53:55 np0005486808 systemd[1]: Started libpod-conmon-516495ae47fed8716097ec72e699d799a2c9d275c77b160e47fdfd2d445ecb13.scope.
Oct 14 05:53:55 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:53:55 np0005486808 podman[433926]: 2025-10-14 09:53:55.107388459 +0000 UTC m=+0.116752975 container init 516495ae47fed8716097ec72e699d799a2c9d275c77b160e47fdfd2d445ecb13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:53:55 np0005486808 podman[433926]: 2025-10-14 09:53:55.018388946 +0000 UTC m=+0.027753492 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:53:55 np0005486808 podman[433926]: 2025-10-14 09:53:55.115922539 +0000 UTC m=+0.125287045 container start 516495ae47fed8716097ec72e699d799a2c9d275c77b160e47fdfd2d445ecb13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:53:55 np0005486808 youthful_bohr[433942]: 167 167
Oct 14 05:53:55 np0005486808 systemd[1]: libpod-516495ae47fed8716097ec72e699d799a2c9d275c77b160e47fdfd2d445ecb13.scope: Deactivated successfully.
Oct 14 05:53:55 np0005486808 podman[433926]: 2025-10-14 09:53:55.123780082 +0000 UTC m=+0.133144598 container attach 516495ae47fed8716097ec72e699d799a2c9d275c77b160e47fdfd2d445ecb13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bohr, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 14 05:53:55 np0005486808 podman[433926]: 2025-10-14 09:53:55.126273403 +0000 UTC m=+0.135637899 container died 516495ae47fed8716097ec72e699d799a2c9d275c77b160e47fdfd2d445ecb13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bohr, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:53:55 np0005486808 systemd[1]: var-lib-containers-storage-overlay-5f63532eccb636bb9b97985424c6bf9c95cd90be2d71946fcb8216ebf91f0bc6-merged.mount: Deactivated successfully.
Oct 14 05:53:55 np0005486808 podman[433926]: 2025-10-14 09:53:55.1669083 +0000 UTC m=+0.176272806 container remove 516495ae47fed8716097ec72e699d799a2c9d275c77b160e47fdfd2d445ecb13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bohr, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 14 05:53:55 np0005486808 systemd[1]: libpod-conmon-516495ae47fed8716097ec72e699d799a2c9d275c77b160e47fdfd2d445ecb13.scope: Deactivated successfully.
Oct 14 05:53:55 np0005486808 podman[433966]: 2025-10-14 09:53:55.341008622 +0000 UTC m=+0.046231575 container create 09050cd81f609fe28e67b495bec9f8f3a4274002dec880fb4742e5820e5e5c28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_zhukovsky, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:53:55 np0005486808 systemd[1]: Started libpod-conmon-09050cd81f609fe28e67b495bec9f8f3a4274002dec880fb4742e5820e5e5c28.scope.
Oct 14 05:53:55 np0005486808 podman[433966]: 2025-10-14 09:53:55.318371706 +0000 UTC m=+0.023594709 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:53:55 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:53:55 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0578fc8aafbb3a6b9664a6b5293aa7fd9404e31413c75a012468cd4be36e620c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:53:55 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0578fc8aafbb3a6b9664a6b5293aa7fd9404e31413c75a012468cd4be36e620c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:53:55 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0578fc8aafbb3a6b9664a6b5293aa7fd9404e31413c75a012468cd4be36e620c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:53:55 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0578fc8aafbb3a6b9664a6b5293aa7fd9404e31413c75a012468cd4be36e620c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:53:55 np0005486808 podman[433966]: 2025-10-14 09:53:55.461814256 +0000 UTC m=+0.167037169 container init 09050cd81f609fe28e67b495bec9f8f3a4274002dec880fb4742e5820e5e5c28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_zhukovsky, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 14 05:53:55 np0005486808 podman[433966]: 2025-10-14 09:53:55.473926483 +0000 UTC m=+0.179149436 container start 09050cd81f609fe28e67b495bec9f8f3a4274002dec880fb4742e5820e5e5c28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:53:55 np0005486808 podman[433966]: 2025-10-14 09:53:55.477956482 +0000 UTC m=+0.183179435 container attach 09050cd81f609fe28e67b495bec9f8f3a4274002dec880fb4742e5820e5e5c28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Oct 14 05:53:55 np0005486808 nova_compute[259627]: 2025-10-14 09:53:55.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:53:55 np0005486808 nova_compute[259627]: 2025-10-14 09:53:55.980 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]: {
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:    "0": [
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:        {
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:            "devices": [
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:                "/dev/loop3"
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:            ],
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:            "lv_name": "ceph_lv0",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:            "lv_size": "21470642176",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:            "name": "ceph_lv0",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:            "tags": {
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:                "ceph.cluster_name": "ceph",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:                "ceph.crush_device_class": "",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:                "ceph.encrypted": "0",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:                "ceph.osd_id": "0",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:                "ceph.type": "block",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:                "ceph.vdo": "0"
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:            },
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:            "type": "block",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:            "vg_name": "ceph_vg0"
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:        }
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:    ],
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:    "1": [
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:        {
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:            "devices": [
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:                "/dev/loop4"
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:            ],
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:            "lv_name": "ceph_lv1",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:            "lv_size": "21470642176",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:            "name": "ceph_lv1",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:            "tags": {
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:                "ceph.cluster_name": "ceph",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:                "ceph.crush_device_class": "",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:                "ceph.encrypted": "0",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:                "ceph.osd_id": "1",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:                "ceph.type": "block",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:                "ceph.vdo": "0"
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:            },
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:            "type": "block",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:            "vg_name": "ceph_vg1"
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:        }
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:    ],
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:    "2": [
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:        {
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:            "devices": [
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:                "/dev/loop5"
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:            ],
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:            "lv_name": "ceph_lv2",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:            "lv_size": "21470642176",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:            "name": "ceph_lv2",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:            "tags": {
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:                "ceph.cluster_name": "ceph",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:                "ceph.crush_device_class": "",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:                "ceph.encrypted": "0",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:                "ceph.osd_id": "2",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:                "ceph.type": "block",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:                "ceph.vdo": "0"
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:            },
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:            "type": "block",
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:            "vg_name": "ceph_vg2"
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:        }
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]:    ]
Oct 14 05:53:56 np0005486808 adoring_zhukovsky[433982]: }
Oct 14 05:53:56 np0005486808 systemd[1]: libpod-09050cd81f609fe28e67b495bec9f8f3a4274002dec880fb4742e5820e5e5c28.scope: Deactivated successfully.
Oct 14 05:53:56 np0005486808 podman[433966]: 2025-10-14 09:53:56.324108414 +0000 UTC m=+1.029331357 container died 09050cd81f609fe28e67b495bec9f8f3a4274002dec880fb4742e5820e5e5c28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_zhukovsky, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Oct 14 05:53:56 np0005486808 systemd[1]: var-lib-containers-storage-overlay-0578fc8aafbb3a6b9664a6b5293aa7fd9404e31413c75a012468cd4be36e620c-merged.mount: Deactivated successfully.
Oct 14 05:53:56 np0005486808 podman[433966]: 2025-10-14 09:53:56.377953455 +0000 UTC m=+1.083176368 container remove 09050cd81f609fe28e67b495bec9f8f3a4274002dec880fb4742e5820e5e5c28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_zhukovsky, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 14 05:53:56 np0005486808 systemd[1]: libpod-conmon-09050cd81f609fe28e67b495bec9f8f3a4274002dec880fb4742e5820e5e5c28.scope: Deactivated successfully.
Oct 14 05:53:56 np0005486808 nova_compute[259627]: 2025-10-14 09:53:56.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:53:57 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3034: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:53:57 np0005486808 podman[434166]: 2025-10-14 09:53:57.120681749 +0000 UTC m=+0.052727315 container create fda21a459b1df9e8f95fbb71923e7a11dee4f738eb98404a563913925cfecc03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_almeida, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:53:57 np0005486808 systemd[1]: Started libpod-conmon-fda21a459b1df9e8f95fbb71923e7a11dee4f738eb98404a563913925cfecc03.scope.
Oct 14 05:53:57 np0005486808 podman[434166]: 2025-10-14 09:53:57.092330914 +0000 UTC m=+0.024376520 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:53:57 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:53:57 np0005486808 podman[434166]: 2025-10-14 09:53:57.206226508 +0000 UTC m=+0.138272124 container init fda21a459b1df9e8f95fbb71923e7a11dee4f738eb98404a563913925cfecc03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_almeida, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 14 05:53:57 np0005486808 podman[434166]: 2025-10-14 09:53:57.216133991 +0000 UTC m=+0.148179517 container start fda21a459b1df9e8f95fbb71923e7a11dee4f738eb98404a563913925cfecc03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_almeida, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:53:57 np0005486808 podman[434166]: 2025-10-14 09:53:57.219887273 +0000 UTC m=+0.151932839 container attach fda21a459b1df9e8f95fbb71923e7a11dee4f738eb98404a563913925cfecc03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_almeida, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:53:57 np0005486808 dazzling_almeida[434182]: 167 167
Oct 14 05:53:57 np0005486808 systemd[1]: libpod-fda21a459b1df9e8f95fbb71923e7a11dee4f738eb98404a563913925cfecc03.scope: Deactivated successfully.
Oct 14 05:53:57 np0005486808 podman[434166]: 2025-10-14 09:53:57.225903421 +0000 UTC m=+0.157949007 container died fda21a459b1df9e8f95fbb71923e7a11dee4f738eb98404a563913925cfecc03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_almeida, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 14 05:53:57 np0005486808 systemd[1]: var-lib-containers-storage-overlay-928b8fa07419d032aff04863604fe3a1da8d79bfc6b542ca5825fc7713e6df4f-merged.mount: Deactivated successfully.
Oct 14 05:53:57 np0005486808 podman[434166]: 2025-10-14 09:53:57.272619867 +0000 UTC m=+0.204665423 container remove fda21a459b1df9e8f95fbb71923e7a11dee4f738eb98404a563913925cfecc03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:53:57 np0005486808 systemd[1]: libpod-conmon-fda21a459b1df9e8f95fbb71923e7a11dee4f738eb98404a563913925cfecc03.scope: Deactivated successfully.
Oct 14 05:53:57 np0005486808 podman[434206]: 2025-10-14 09:53:57.503150224 +0000 UTC m=+0.070151752 container create 2d91b6751589f1ebc20977ee5895cf9ce12266aa376af397d404beec2f39cd73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_rhodes, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Oct 14 05:53:57 np0005486808 systemd[1]: Started libpod-conmon-2d91b6751589f1ebc20977ee5895cf9ce12266aa376af397d404beec2f39cd73.scope.
Oct 14 05:53:57 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:53:57 np0005486808 podman[434206]: 2025-10-14 09:53:57.476806718 +0000 UTC m=+0.043808266 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:53:57 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/010eaed2ed53d52eb4d0ed0fee0a1150e1b54c9255f6f000707b8e3d18ecca4e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:53:57 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/010eaed2ed53d52eb4d0ed0fee0a1150e1b54c9255f6f000707b8e3d18ecca4e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:53:57 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/010eaed2ed53d52eb4d0ed0fee0a1150e1b54c9255f6f000707b8e3d18ecca4e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:53:57 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/010eaed2ed53d52eb4d0ed0fee0a1150e1b54c9255f6f000707b8e3d18ecca4e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:53:57 np0005486808 podman[434206]: 2025-10-14 09:53:57.587827022 +0000 UTC m=+0.154828590 container init 2d91b6751589f1ebc20977ee5895cf9ce12266aa376af397d404beec2f39cd73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_rhodes, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Oct 14 05:53:57 np0005486808 podman[434206]: 2025-10-14 09:53:57.596743231 +0000 UTC m=+0.163744749 container start 2d91b6751589f1ebc20977ee5895cf9ce12266aa376af397d404beec2f39cd73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_rhodes, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:53:57 np0005486808 podman[434206]: 2025-10-14 09:53:57.60037322 +0000 UTC m=+0.167374748 container attach 2d91b6751589f1ebc20977ee5895cf9ce12266aa376af397d404beec2f39cd73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_rhodes, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 14 05:53:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:53:58 np0005486808 distracted_rhodes[434223]: {
Oct 14 05:53:58 np0005486808 distracted_rhodes[434223]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:53:58 np0005486808 distracted_rhodes[434223]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:53:58 np0005486808 distracted_rhodes[434223]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:53:58 np0005486808 distracted_rhodes[434223]:        "osd_id": 2,
Oct 14 05:53:58 np0005486808 distracted_rhodes[434223]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:53:58 np0005486808 distracted_rhodes[434223]:        "type": "bluestore"
Oct 14 05:53:58 np0005486808 distracted_rhodes[434223]:    },
Oct 14 05:53:58 np0005486808 distracted_rhodes[434223]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:53:58 np0005486808 distracted_rhodes[434223]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:53:58 np0005486808 distracted_rhodes[434223]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:53:58 np0005486808 distracted_rhodes[434223]:        "osd_id": 1,
Oct 14 05:53:58 np0005486808 distracted_rhodes[434223]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:53:58 np0005486808 distracted_rhodes[434223]:        "type": "bluestore"
Oct 14 05:53:58 np0005486808 distracted_rhodes[434223]:    },
Oct 14 05:53:58 np0005486808 distracted_rhodes[434223]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:53:58 np0005486808 distracted_rhodes[434223]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:53:58 np0005486808 distracted_rhodes[434223]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:53:58 np0005486808 distracted_rhodes[434223]:        "osd_id": 0,
Oct 14 05:53:58 np0005486808 distracted_rhodes[434223]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:53:58 np0005486808 distracted_rhodes[434223]:        "type": "bluestore"
Oct 14 05:53:58 np0005486808 distracted_rhodes[434223]:    }
Oct 14 05:53:58 np0005486808 distracted_rhodes[434223]: }
Oct 14 05:53:58 np0005486808 systemd[1]: libpod-2d91b6751589f1ebc20977ee5895cf9ce12266aa376af397d404beec2f39cd73.scope: Deactivated successfully.
Oct 14 05:53:58 np0005486808 podman[434257]: 2025-10-14 09:53:58.576531802 +0000 UTC m=+0.034326603 container died 2d91b6751589f1ebc20977ee5895cf9ce12266aa376af397d404beec2f39cd73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_rhodes, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 14 05:53:58 np0005486808 systemd[1]: var-lib-containers-storage-overlay-010eaed2ed53d52eb4d0ed0fee0a1150e1b54c9255f6f000707b8e3d18ecca4e-merged.mount: Deactivated successfully.
Oct 14 05:53:58 np0005486808 podman[434257]: 2025-10-14 09:53:58.625872583 +0000 UTC m=+0.083667384 container remove 2d91b6751589f1ebc20977ee5895cf9ce12266aa376af397d404beec2f39cd73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:53:58 np0005486808 systemd[1]: libpod-conmon-2d91b6751589f1ebc20977ee5895cf9ce12266aa376af397d404beec2f39cd73.scope: Deactivated successfully.
Oct 14 05:53:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:53:58 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:53:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:53:58 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:53:58 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 68a6e1c7-fa11-4c3f-848f-d8d5019e06c8 does not exist
Oct 14 05:53:58 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev b116c2b8-d403-46b1-ab6e-b77af63b42d1 does not exist
Oct 14 05:53:59 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3035: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:53:59 np0005486808 nova_compute[259627]: 2025-10-14 09:53:59.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:53:59 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:53:59 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:53:59 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:53:59.716 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 05:54:01 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3036: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:54:02 np0005486808 nova_compute[259627]: 2025-10-14 09:54:02.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:54:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:54:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:54:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:54:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:54:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:54:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:54:02 np0005486808 nova_compute[259627]: 2025-10-14 09:54:02.994 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:54:03 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3037: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:54:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:54:04 np0005486808 nova_compute[259627]: 2025-10-14 09:54:04.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:54:04 np0005486808 podman[434323]: 2025-10-14 09:54:04.696465524 +0000 UTC m=+0.095411772 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 14 05:54:04 np0005486808 podman[434322]: 2025-10-14 09:54:04.742233747 +0000 UTC m=+0.142754413 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 14 05:54:05 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3038: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:54:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:54:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3690690533' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:54:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:54:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3690690533' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:54:05 np0005486808 nova_compute[259627]: 2025-10-14 09:54:05.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:54:07 np0005486808 nova_compute[259627]: 2025-10-14 09:54:07.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:54:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:54:07.070 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:54:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:54:07.071 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:54:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:54:07.071 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:54:07 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3039: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:54:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:54:08 np0005486808 podman[433735]: time="2025-10-14T09:54:08Z" level=info msg="Received shutdown.Stop(), terminating!" PID=433735
Oct 14 05:54:08 np0005486808 systemd[1]: podman.service: Deactivated successfully.
Oct 14 05:54:09 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3040: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:54:09 np0005486808 nova_compute[259627]: 2025-10-14 09:54:09.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:54:09 np0005486808 nova_compute[259627]: 2025-10-14 09:54:09.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:54:10 np0005486808 nova_compute[259627]: 2025-10-14 09:54:10.013 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:54:10 np0005486808 nova_compute[259627]: 2025-10-14 09:54:10.013 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:54:10 np0005486808 nova_compute[259627]: 2025-10-14 09:54:10.014 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:54:10 np0005486808 nova_compute[259627]: 2025-10-14 09:54:10.014 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:54:10 np0005486808 nova_compute[259627]: 2025-10-14 09:54:10.015 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:54:10 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:54:10 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1248548850' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:54:10 np0005486808 nova_compute[259627]: 2025-10-14 09:54:10.542 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:54:10 np0005486808 nova_compute[259627]: 2025-10-14 09:54:10.721 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:54:10 np0005486808 nova_compute[259627]: 2025-10-14 09:54:10.723 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3577MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:54:10 np0005486808 nova_compute[259627]: 2025-10-14 09:54:10.723 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:54:10 np0005486808 nova_compute[259627]: 2025-10-14 09:54:10.723 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:54:10 np0005486808 nova_compute[259627]: 2025-10-14 09:54:10.799 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:54:10 np0005486808 nova_compute[259627]: 2025-10-14 09:54:10.800 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:54:10 np0005486808 nova_compute[259627]: 2025-10-14 09:54:10.836 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:54:11 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3041: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:54:11 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:54:11 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/435957043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:54:11 np0005486808 nova_compute[259627]: 2025-10-14 09:54:11.328 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:54:11 np0005486808 nova_compute[259627]: 2025-10-14 09:54:11.338 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:54:11 np0005486808 nova_compute[259627]: 2025-10-14 09:54:11.357 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:54:11 np0005486808 nova_compute[259627]: 2025-10-14 09:54:11.360 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:54:11 np0005486808 nova_compute[259627]: 2025-10-14 09:54:11.360 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:54:12 np0005486808 nova_compute[259627]: 2025-10-14 09:54:12.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:54:13 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3042: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:54:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:54:13 np0005486808 nova_compute[259627]: 2025-10-14 09:54:13.356 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:54:13 np0005486808 nova_compute[259627]: 2025-10-14 09:54:13.357 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:54:13 np0005486808 nova_compute[259627]: 2025-10-14 09:54:13.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:54:13 np0005486808 nova_compute[259627]: 2025-10-14 09:54:13.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:54:13 np0005486808 nova_compute[259627]: 2025-10-14 09:54:13.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 05:54:13 np0005486808 nova_compute[259627]: 2025-10-14 09:54:13.994 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 14 05:54:14 np0005486808 nova_compute[259627]: 2025-10-14 09:54:14.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:54:14 np0005486808 nova_compute[259627]: 2025-10-14 09:54:14.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:54:14 np0005486808 nova_compute[259627]: 2025-10-14 09:54:14.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:54:15 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3043: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:54:15 np0005486808 nova_compute[259627]: 2025-10-14 09:54:15.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:54:17 np0005486808 systemd[1]: session-55.scope: Deactivated successfully.
Oct 14 05:54:17 np0005486808 systemd[1]: session-55.scope: Consumed 1.592s CPU time.
Oct 14 05:54:17 np0005486808 systemd-logind[799]: Session 55 logged out. Waiting for processes to exit.
Oct 14 05:54:17 np0005486808 systemd-logind[799]: Removed session 55.
Oct 14 05:54:17 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3044: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:54:17 np0005486808 nova_compute[259627]: 2025-10-14 09:54:17.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:54:17 np0005486808 systemd[1]: session-56.scope: Deactivated successfully.
Oct 14 05:54:17 np0005486808 systemd-logind[799]: Session 56 logged out. Waiting for processes to exit.
Oct 14 05:54:17 np0005486808 systemd-logind[799]: Removed session 56.
Oct 14 05:54:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:54:18 np0005486808 nova_compute[259627]: 2025-10-14 09:54:18.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:54:19 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3045: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:54:19 np0005486808 nova_compute[259627]: 2025-10-14 09:54:19.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:54:21 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3046: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:54:21 np0005486808 podman[434465]: 2025-10-14 09:54:21.656143228 +0000 UTC m=+0.059909241 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 14 05:54:21 np0005486808 podman[434464]: 2025-10-14 09:54:21.688996344 +0000 UTC m=+0.094087840 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=multipathd, org.label-schema.build-date=20251009)
Oct 14 05:54:22 np0005486808 nova_compute[259627]: 2025-10-14 09:54:22.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:54:23 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3047: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:54:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:54:23 np0005486808 nova_compute[259627]: 2025-10-14 09:54:23.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:54:24 np0005486808 nova_compute[259627]: 2025-10-14 09:54:24.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:54:25 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3048: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:54:27 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3049: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:54:27 np0005486808 nova_compute[259627]: 2025-10-14 09:54:27.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:54:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:54:29 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3050: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:54:29 np0005486808 nova_compute[259627]: 2025-10-14 09:54:29.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:54:31 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3051: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:54:32 np0005486808 nova_compute[259627]: 2025-10-14 09:54:32.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:54:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:54:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:54:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:54:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:54:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:54:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:54:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:54:32
Oct 14 05:54:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:54:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:54:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'images', '.rgw.root', '.mgr', 'default.rgw.control', 'vms', 'default.rgw.log', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data']
Oct 14 05:54:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:54:33 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3052: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:54:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:54:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:54:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:54:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:54:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:54:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:54:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:54:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:54:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:54:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:54:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:54:34 np0005486808 nova_compute[259627]: 2025-10-14 09:54:34.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:54:35 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3053: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:54:35 np0005486808 podman[434505]: 2025-10-14 09:54:35.66495262 +0000 UTC m=+0.067331702 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:54:35 np0005486808 podman[434504]: 2025-10-14 09:54:35.735228845 +0000 UTC m=+0.142625900 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:54:37 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3054: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:54:37 np0005486808 nova_compute[259627]: 2025-10-14 09:54:37.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:54:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:54:39 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3055: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:54:39 np0005486808 nova_compute[259627]: 2025-10-14 09:54:39.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:54:39 np0005486808 nova_compute[259627]: 2025-10-14 09:54:39.993 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:54:39 np0005486808 nova_compute[259627]: 2025-10-14 09:54:39.994 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 14 05:54:40 np0005486808 nova_compute[259627]: 2025-10-14 09:54:40.009 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 14 05:54:41 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3056: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.7 KiB/s rd, 0 B/s wr, 14 op/s
Oct 14 05:54:42 np0005486808 nova_compute[259627]: 2025-10-14 09:54:42.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:54:43 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3057: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.7 KiB/s rd, 0 B/s wr, 14 op/s
Oct 14 05:54:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:54:43 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #150. Immutable memtables: 0.
Oct 14 05:54:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:54:43.362153) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 05:54:43 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 150
Oct 14 05:54:43 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435683362216, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 673, "num_deletes": 250, "total_data_size": 831127, "memory_usage": 843360, "flush_reason": "Manual Compaction"}
Oct 14 05:54:43 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #151: started
Oct 14 05:54:43 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435683370900, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 151, "file_size": 545268, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63440, "largest_seqno": 64112, "table_properties": {"data_size": 542209, "index_size": 966, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 8193, "raw_average_key_size": 20, "raw_value_size": 535804, "raw_average_value_size": 1349, "num_data_blocks": 44, "num_entries": 397, "num_filter_entries": 397, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760435631, "oldest_key_time": 1760435631, "file_creation_time": 1760435683, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 151, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:54:43 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 8815 microseconds, and 5433 cpu microseconds.
Oct 14 05:54:43 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:54:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:54:43.370966) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #151: 545268 bytes OK
Oct 14 05:54:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:54:43.370995) [db/memtable_list.cc:519] [default] Level-0 commit table #151 started
Oct 14 05:54:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:54:43.372388) [db/memtable_list.cc:722] [default] Level-0 commit table #151: memtable #1 done
Oct 14 05:54:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:54:43.372412) EVENT_LOG_v1 {"time_micros": 1760435683372403, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 05:54:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:54:43.372438) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 05:54:43 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 827582, prev total WAL file size 827582, number of live WAL files 2.
Oct 14 05:54:43 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000147.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:54:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:54:43.373466) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353030' seq:72057594037927935, type:22 .. '6D6772737461740032373531' seq:0, type:0; will stop at (end)
Oct 14 05:54:43 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 05:54:43 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [151(532KB)], [149(10MB)]
Oct 14 05:54:43 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435683373537, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [151], "files_L6": [149], "score": -1, "input_data_size": 11916472, "oldest_snapshot_seqno": -1}
Oct 14 05:54:43 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #152: 8138 keys, 8892252 bytes, temperature: kUnknown
Oct 14 05:54:43 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435683437563, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 152, "file_size": 8892252, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8841911, "index_size": 28916, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20357, "raw_key_size": 213573, "raw_average_key_size": 26, "raw_value_size": 8700468, "raw_average_value_size": 1069, "num_data_blocks": 1117, "num_entries": 8138, "num_filter_entries": 8138, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760435683, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:54:43 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:54:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:54:43.437812) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 8892252 bytes
Oct 14 05:54:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:54:43.439276) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 185.9 rd, 138.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.8 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(38.2) write-amplify(16.3) OK, records in: 8629, records dropped: 491 output_compression: NoCompression
Oct 14 05:54:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:54:43.439297) EVENT_LOG_v1 {"time_micros": 1760435683439288, "job": 92, "event": "compaction_finished", "compaction_time_micros": 64100, "compaction_time_cpu_micros": 47976, "output_level": 6, "num_output_files": 1, "total_output_size": 8892252, "num_input_records": 8629, "num_output_records": 8138, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 05:54:43 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000151.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:54:43 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435683439545, "job": 92, "event": "table_file_deletion", "file_number": 151}
Oct 14 05:54:43 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:54:43 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435683442216, "job": 92, "event": "table_file_deletion", "file_number": 149}
Oct 14 05:54:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:54:43.373401) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:54:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:54:43.442288) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:54:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:54:43.442293) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:54:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:54:43.442295) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:54:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:54:43.442297) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:54:43 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:54:43.442299) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:54:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:54:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:54:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:54:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:54:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 05:54:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:54:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:54:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:54:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:54:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:54:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 14 05:54:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:54:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:54:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:54:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:54:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:54:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:54:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:54:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:54:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:54:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:54:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:54:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:54:44 np0005486808 nova_compute[259627]: 2025-10-14 09:54:44.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:54:45 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3058: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 33 op/s
Oct 14 05:54:46 np0005486808 nova_compute[259627]: 2025-10-14 09:54:46.989 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:54:47 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3059: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 05:54:47 np0005486808 nova_compute[259627]: 2025-10-14 09:54:47.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:54:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:54:49 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3060: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 05:54:49 np0005486808 nova_compute[259627]: 2025-10-14 09:54:49.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:54:51 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3061: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 05:54:52 np0005486808 nova_compute[259627]: 2025-10-14 09:54:52.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:54:52 np0005486808 podman[434548]: 2025-10-14 09:54:52.658920122 +0000 UTC m=+0.075380361 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 05:54:52 np0005486808 podman[434549]: 2025-10-14 09:54:52.692149727 +0000 UTC m=+0.100547938 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3)
Oct 14 05:54:53 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3062: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 45 op/s
Oct 14 05:54:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:54:54 np0005486808 nova_compute[259627]: 2025-10-14 09:54:54.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:54:55 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3063: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 45 op/s
Oct 14 05:54:55 np0005486808 nova_compute[259627]: 2025-10-14 09:54:55.134 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:54:57 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3064: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 25 op/s
Oct 14 05:54:57 np0005486808 nova_compute[259627]: 2025-10-14 09:54:57.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:54:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:54:59 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3065: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:54:59 np0005486808 nova_compute[259627]: 2025-10-14 09:54:59.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:55:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:55:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:55:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:55:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:55:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:55:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:55:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:55:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:55:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:55:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:55:00 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 36d9e896-86ed-46db-a630-c8a7f90fcc41 does not exist
Oct 14 05:55:00 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 2babdfce-f984-4ddf-be43-ba7a1137ed5a does not exist
Oct 14 05:55:00 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 3426db94-0338-4956-a58b-9ec067f83ca6 does not exist
Oct 14 05:55:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:55:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:55:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:55:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:55:00 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:55:00 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:55:00 np0005486808 podman[434975]: 2025-10-14 09:55:00.962441204 +0000 UTC m=+0.054969299 container create bd8f2ea7a94676644c79a04835f2e40c290dbbe8ab93e1e0384701d25d697096 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_fermi, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:55:01 np0005486808 systemd[1]: Started libpod-conmon-bd8f2ea7a94676644c79a04835f2e40c290dbbe8ab93e1e0384701d25d697096.scope.
Oct 14 05:55:01 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:55:01 np0005486808 podman[434975]: 2025-10-14 09:55:00.934829287 +0000 UTC m=+0.027357402 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:55:01 np0005486808 podman[434975]: 2025-10-14 09:55:01.04213128 +0000 UTC m=+0.134659385 container init bd8f2ea7a94676644c79a04835f2e40c290dbbe8ab93e1e0384701d25d697096 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 05:55:01 np0005486808 podman[434975]: 2025-10-14 09:55:01.04945771 +0000 UTC m=+0.141985785 container start bd8f2ea7a94676644c79a04835f2e40c290dbbe8ab93e1e0384701d25d697096 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_fermi, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 14 05:55:01 np0005486808 systemd[1]: libpod-bd8f2ea7a94676644c79a04835f2e40c290dbbe8ab93e1e0384701d25d697096.scope: Deactivated successfully.
Oct 14 05:55:01 np0005486808 hopeful_fermi[434992]: 167 167
Oct 14 05:55:01 np0005486808 podman[434975]: 2025-10-14 09:55:01.05844314 +0000 UTC m=+0.150971265 container attach bd8f2ea7a94676644c79a04835f2e40c290dbbe8ab93e1e0384701d25d697096 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:55:01 np0005486808 conmon[434992]: conmon bd8f2ea7a94676644c79 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bd8f2ea7a94676644c79a04835f2e40c290dbbe8ab93e1e0384701d25d697096.scope/container/memory.events
Oct 14 05:55:01 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3066: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:55:01 np0005486808 podman[434997]: 2025-10-14 09:55:01.103089676 +0000 UTC m=+0.026500892 container died bd8f2ea7a94676644c79a04835f2e40c290dbbe8ab93e1e0384701d25d697096 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_fermi, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 14 05:55:01 np0005486808 systemd[1]: var-lib-containers-storage-overlay-897e05dff8e363364d29d3bb0805ef8f2946fd2fb910f3e4e374ba528351c46e-merged.mount: Deactivated successfully.
Oct 14 05:55:01 np0005486808 podman[434997]: 2025-10-14 09:55:01.194217582 +0000 UTC m=+0.117628788 container remove bd8f2ea7a94676644c79a04835f2e40c290dbbe8ab93e1e0384701d25d697096 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_fermi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:55:01 np0005486808 systemd[1]: libpod-conmon-bd8f2ea7a94676644c79a04835f2e40c290dbbe8ab93e1e0384701d25d697096.scope: Deactivated successfully.
Oct 14 05:55:01 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:55:01 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:55:01 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:55:01 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:55:01 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:55:01 np0005486808 podman[435019]: 2025-10-14 09:55:01.397753006 +0000 UTC m=+0.060492565 container create a79dc68b19c28fe6868ed33469b331c7e36ac11aafd28afae93e960b22034b60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_varahamihira, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:55:01 np0005486808 systemd[1]: Started libpod-conmon-a79dc68b19c28fe6868ed33469b331c7e36ac11aafd28afae93e960b22034b60.scope.
Oct 14 05:55:01 np0005486808 podman[435019]: 2025-10-14 09:55:01.374893465 +0000 UTC m=+0.037633054 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:55:01 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:55:01 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e26ce8b6e92ae489f067ee61a0b590a9277c555446dad7efad6d0d80768a5dc8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:55:01 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e26ce8b6e92ae489f067ee61a0b590a9277c555446dad7efad6d0d80768a5dc8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:55:01 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e26ce8b6e92ae489f067ee61a0b590a9277c555446dad7efad6d0d80768a5dc8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:55:01 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e26ce8b6e92ae489f067ee61a0b590a9277c555446dad7efad6d0d80768a5dc8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:55:01 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e26ce8b6e92ae489f067ee61a0b590a9277c555446dad7efad6d0d80768a5dc8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:55:01 np0005486808 podman[435019]: 2025-10-14 09:55:01.513945067 +0000 UTC m=+0.176684696 container init a79dc68b19c28fe6868ed33469b331c7e36ac11aafd28afae93e960b22034b60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 05:55:01 np0005486808 podman[435019]: 2025-10-14 09:55:01.53364032 +0000 UTC m=+0.196379909 container start a79dc68b19c28fe6868ed33469b331c7e36ac11aafd28afae93e960b22034b60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 14 05:55:01 np0005486808 podman[435019]: 2025-10-14 09:55:01.541042312 +0000 UTC m=+0.203781951 container attach a79dc68b19c28fe6868ed33469b331c7e36ac11aafd28afae93e960b22034b60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_varahamihira, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:55:02 np0005486808 nova_compute[259627]: 2025-10-14 09:55:02.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:55:02 np0005486808 recursing_varahamihira[435035]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:55:02 np0005486808 recursing_varahamihira[435035]: --> relative data size: 1.0
Oct 14 05:55:02 np0005486808 recursing_varahamihira[435035]: --> All data devices are unavailable
Oct 14 05:55:02 np0005486808 systemd[1]: libpod-a79dc68b19c28fe6868ed33469b331c7e36ac11aafd28afae93e960b22034b60.scope: Deactivated successfully.
Oct 14 05:55:02 np0005486808 podman[435019]: 2025-10-14 09:55:02.662792086 +0000 UTC m=+1.325531675 container died a79dc68b19c28fe6868ed33469b331c7e36ac11aafd28afae93e960b22034b60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 14 05:55:02 np0005486808 systemd[1]: libpod-a79dc68b19c28fe6868ed33469b331c7e36ac11aafd28afae93e960b22034b60.scope: Consumed 1.082s CPU time.
Oct 14 05:55:02 np0005486808 systemd[1]: var-lib-containers-storage-overlay-e26ce8b6e92ae489f067ee61a0b590a9277c555446dad7efad6d0d80768a5dc8-merged.mount: Deactivated successfully.
Oct 14 05:55:02 np0005486808 podman[435019]: 2025-10-14 09:55:02.735960652 +0000 UTC m=+1.398700211 container remove a79dc68b19c28fe6868ed33469b331c7e36ac11aafd28afae93e960b22034b60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 14 05:55:02 np0005486808 systemd[1]: libpod-conmon-a79dc68b19c28fe6868ed33469b331c7e36ac11aafd28afae93e960b22034b60.scope: Deactivated successfully.
Oct 14 05:55:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:55:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:55:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:55:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:55:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:55:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:55:03 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3067: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:55:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:55:03 np0005486808 podman[435218]: 2025-10-14 09:55:03.498047681 +0000 UTC m=+0.047872276 container create ed7398186cfae52cee01ed4b8033a5b7e7ca6c2caa28ec95c7b8abbe038a02c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shockley, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:55:03 np0005486808 systemd[1]: Started libpod-conmon-ed7398186cfae52cee01ed4b8033a5b7e7ca6c2caa28ec95c7b8abbe038a02c6.scope.
Oct 14 05:55:03 np0005486808 podman[435218]: 2025-10-14 09:55:03.477600369 +0000 UTC m=+0.027424954 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:55:03 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:55:03 np0005486808 podman[435218]: 2025-10-14 09:55:03.593810881 +0000 UTC m=+0.143635456 container init ed7398186cfae52cee01ed4b8033a5b7e7ca6c2caa28ec95c7b8abbe038a02c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shockley, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:55:03 np0005486808 podman[435218]: 2025-10-14 09:55:03.605572789 +0000 UTC m=+0.155397364 container start ed7398186cfae52cee01ed4b8033a5b7e7ca6c2caa28ec95c7b8abbe038a02c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shockley, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 14 05:55:03 np0005486808 podman[435218]: 2025-10-14 09:55:03.609755542 +0000 UTC m=+0.159580177 container attach ed7398186cfae52cee01ed4b8033a5b7e7ca6c2caa28ec95c7b8abbe038a02c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:55:03 np0005486808 elegant_shockley[435235]: 167 167
Oct 14 05:55:03 np0005486808 systemd[1]: libpod-ed7398186cfae52cee01ed4b8033a5b7e7ca6c2caa28ec95c7b8abbe038a02c6.scope: Deactivated successfully.
Oct 14 05:55:03 np0005486808 podman[435218]: 2025-10-14 09:55:03.612802567 +0000 UTC m=+0.162627122 container died ed7398186cfae52cee01ed4b8033a5b7e7ca6c2caa28ec95c7b8abbe038a02c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 14 05:55:03 np0005486808 systemd[1]: var-lib-containers-storage-overlay-fe206bf02bf46bda20460e6d968fb08ffd8032a0f91cb6e4591ef1eb9f2a8e34-merged.mount: Deactivated successfully.
Oct 14 05:55:03 np0005486808 podman[435218]: 2025-10-14 09:55:03.661202494 +0000 UTC m=+0.211027059 container remove ed7398186cfae52cee01ed4b8033a5b7e7ca6c2caa28ec95c7b8abbe038a02c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Oct 14 05:55:03 np0005486808 systemd[1]: libpod-conmon-ed7398186cfae52cee01ed4b8033a5b7e7ca6c2caa28ec95c7b8abbe038a02c6.scope: Deactivated successfully.
Oct 14 05:55:03 np0005486808 podman[435259]: 2025-10-14 09:55:03.856581018 +0000 UTC m=+0.060235639 container create cc903a35889d7c7b1ca5c4379bbbfc04ca5de8aa92697a934f01696a13b6af0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:55:03 np0005486808 systemd[1]: Started libpod-conmon-cc903a35889d7c7b1ca5c4379bbbfc04ca5de8aa92697a934f01696a13b6af0b.scope.
Oct 14 05:55:03 np0005486808 podman[435259]: 2025-10-14 09:55:03.83381046 +0000 UTC m=+0.037465101 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:55:03 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:55:03 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe1029c526330e5684f79de6ab44af819011dc92fa858e7146bc6889858f53ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:55:03 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe1029c526330e5684f79de6ab44af819011dc92fa858e7146bc6889858f53ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:55:03 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe1029c526330e5684f79de6ab44af819011dc92fa858e7146bc6889858f53ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:55:03 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe1029c526330e5684f79de6ab44af819011dc92fa858e7146bc6889858f53ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:55:03 np0005486808 podman[435259]: 2025-10-14 09:55:03.981193916 +0000 UTC m=+0.184848637 container init cc903a35889d7c7b1ca5c4379bbbfc04ca5de8aa92697a934f01696a13b6af0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:55:03 np0005486808 podman[435259]: 2025-10-14 09:55:03.993233801 +0000 UTC m=+0.196888422 container start cc903a35889d7c7b1ca5c4379bbbfc04ca5de8aa92697a934f01696a13b6af0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_meninsky, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:55:04 np0005486808 podman[435259]: 2025-10-14 09:55:04.008223489 +0000 UTC m=+0.211878110 container attach cc903a35889d7c7b1ca5c4379bbbfc04ca5de8aa92697a934f01696a13b6af0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_meninsky, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:55:04 np0005486808 nova_compute[259627]: 2025-10-14 09:55:04.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]: {
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:    "0": [
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:        {
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:            "devices": [
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:                "/dev/loop3"
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:            ],
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:            "lv_name": "ceph_lv0",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:            "lv_size": "21470642176",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:            "name": "ceph_lv0",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:            "tags": {
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:                "ceph.cluster_name": "ceph",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:                "ceph.crush_device_class": "",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:                "ceph.encrypted": "0",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:                "ceph.osd_id": "0",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:                "ceph.type": "block",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:                "ceph.vdo": "0"
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:            },
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:            "type": "block",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:            "vg_name": "ceph_vg0"
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:        }
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:    ],
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:    "1": [
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:        {
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:            "devices": [
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:                "/dev/loop4"
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:            ],
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:            "lv_name": "ceph_lv1",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:            "lv_size": "21470642176",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:            "name": "ceph_lv1",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:            "tags": {
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:                "ceph.cluster_name": "ceph",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:                "ceph.crush_device_class": "",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:                "ceph.encrypted": "0",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:                "ceph.osd_id": "1",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:                "ceph.type": "block",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:                "ceph.vdo": "0"
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:            },
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:            "type": "block",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:            "vg_name": "ceph_vg1"
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:        }
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:    ],
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:    "2": [
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:        {
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:            "devices": [
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:                "/dev/loop5"
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:            ],
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:            "lv_name": "ceph_lv2",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:            "lv_size": "21470642176",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:            "name": "ceph_lv2",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:            "tags": {
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:                "ceph.cluster_name": "ceph",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:                "ceph.crush_device_class": "",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:                "ceph.encrypted": "0",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:                "ceph.osd_id": "2",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:                "ceph.type": "block",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:                "ceph.vdo": "0"
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:            },
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:            "type": "block",
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:            "vg_name": "ceph_vg2"
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:        }
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]:    ]
Oct 14 05:55:04 np0005486808 romantic_meninsky[435276]: }
Oct 14 05:55:04 np0005486808 systemd[1]: libpod-cc903a35889d7c7b1ca5c4379bbbfc04ca5de8aa92697a934f01696a13b6af0b.scope: Deactivated successfully.
Oct 14 05:55:04 np0005486808 podman[435259]: 2025-10-14 09:55:04.796944441 +0000 UTC m=+1.000599092 container died cc903a35889d7c7b1ca5c4379bbbfc04ca5de8aa92697a934f01696a13b6af0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:55:04 np0005486808 systemd[1]: var-lib-containers-storage-overlay-fe1029c526330e5684f79de6ab44af819011dc92fa858e7146bc6889858f53ca-merged.mount: Deactivated successfully.
Oct 14 05:55:04 np0005486808 podman[435259]: 2025-10-14 09:55:04.866966929 +0000 UTC m=+1.070621580 container remove cc903a35889d7c7b1ca5c4379bbbfc04ca5de8aa92697a934f01696a13b6af0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:55:04 np0005486808 systemd[1]: libpod-conmon-cc903a35889d7c7b1ca5c4379bbbfc04ca5de8aa92697a934f01696a13b6af0b.scope: Deactivated successfully.
Oct 14 05:55:05 np0005486808 nova_compute[259627]: 2025-10-14 09:55:05.001 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:55:05 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3068: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:55:05 np0005486808 podman[435439]: 2025-10-14 09:55:05.649291496 +0000 UTC m=+0.057057042 container create 72ed72bad8fbc482ef089e17f28c0a54887b5db4a354e52d06768d84ade6ff3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_blackburn, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:55:05 np0005486808 systemd[1]: Started libpod-conmon-72ed72bad8fbc482ef089e17f28c0a54887b5db4a354e52d06768d84ade6ff3e.scope.
Oct 14 05:55:05 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:55:05 np0005486808 podman[435439]: 2025-10-14 09:55:05.627724546 +0000 UTC m=+0.035490182 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:55:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:55:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/612376368' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:55:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:55:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/612376368' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:55:05 np0005486808 podman[435439]: 2025-10-14 09:55:05.736750531 +0000 UTC m=+0.144516167 container init 72ed72bad8fbc482ef089e17f28c0a54887b5db4a354e52d06768d84ade6ff3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_blackburn, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:55:05 np0005486808 podman[435439]: 2025-10-14 09:55:05.747932606 +0000 UTC m=+0.155698162 container start 72ed72bad8fbc482ef089e17f28c0a54887b5db4a354e52d06768d84ade6ff3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_blackburn, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 14 05:55:05 np0005486808 podman[435439]: 2025-10-14 09:55:05.751606046 +0000 UTC m=+0.159371692 container attach 72ed72bad8fbc482ef089e17f28c0a54887b5db4a354e52d06768d84ade6ff3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_blackburn, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 14 05:55:05 np0005486808 vigilant_blackburn[435455]: 167 167
Oct 14 05:55:05 np0005486808 systemd[1]: libpod-72ed72bad8fbc482ef089e17f28c0a54887b5db4a354e52d06768d84ade6ff3e.scope: Deactivated successfully.
Oct 14 05:55:05 np0005486808 podman[435439]: 2025-10-14 09:55:05.755336408 +0000 UTC m=+0.163101964 container died 72ed72bad8fbc482ef089e17f28c0a54887b5db4a354e52d06768d84ade6ff3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_blackburn, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 05:55:05 np0005486808 systemd[1]: var-lib-containers-storage-overlay-d25f80195547d13b1fedc54899751c3ed28466c52765cccc15cd9dcd35ebfb9a-merged.mount: Deactivated successfully.
Oct 14 05:55:05 np0005486808 podman[435439]: 2025-10-14 09:55:05.801177962 +0000 UTC m=+0.208943508 container remove 72ed72bad8fbc482ef089e17f28c0a54887b5db4a354e52d06768d84ade6ff3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_blackburn, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 14 05:55:05 np0005486808 systemd[1]: libpod-conmon-72ed72bad8fbc482ef089e17f28c0a54887b5db4a354e52d06768d84ade6ff3e.scope: Deactivated successfully.
Oct 14 05:55:05 np0005486808 podman[435468]: 2025-10-14 09:55:05.841048401 +0000 UTC m=+0.093167867 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 14 05:55:05 np0005486808 podman[435468]: 2025-10-14 09:55:05.929376068 +0000 UTC m=+0.144897136 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, container_name=ovn_controller)
Oct 14 05:55:05 np0005486808 podman[435516]: 2025-10-14 09:55:05.978620356 +0000 UTC m=+0.055199255 container create 3206d3bfcddd740ecd7c1172f89e365a2c023cf7455eb475e4fa0d2bebf892ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_brattain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:55:06 np0005486808 systemd[1]: Started libpod-conmon-3206d3bfcddd740ecd7c1172f89e365a2c023cf7455eb475e4fa0d2bebf892ee.scope.
Oct 14 05:55:06 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:55:06 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01250ba9ee184f9ef6963586f6c688264c90cf83f61aa959bbf2515e6219613b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:55:06 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01250ba9ee184f9ef6963586f6c688264c90cf83f61aa959bbf2515e6219613b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:55:06 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01250ba9ee184f9ef6963586f6c688264c90cf83f61aa959bbf2515e6219613b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:55:06 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01250ba9ee184f9ef6963586f6c688264c90cf83f61aa959bbf2515e6219613b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:55:06 np0005486808 podman[435516]: 2025-10-14 09:55:05.952784272 +0000 UTC m=+0.029363221 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:55:06 np0005486808 podman[435516]: 2025-10-14 09:55:06.057526992 +0000 UTC m=+0.134105891 container init 3206d3bfcddd740ecd7c1172f89e365a2c023cf7455eb475e4fa0d2bebf892ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 14 05:55:06 np0005486808 podman[435516]: 2025-10-14 09:55:06.067403455 +0000 UTC m=+0.143982354 container start 3206d3bfcddd740ecd7c1172f89e365a2c023cf7455eb475e4fa0d2bebf892ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_brattain, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:55:06 np0005486808 podman[435516]: 2025-10-14 09:55:06.076108238 +0000 UTC m=+0.152687137 container attach 3206d3bfcddd740ecd7c1172f89e365a2c023cf7455eb475e4fa0d2bebf892ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 14 05:55:06 np0005486808 nova_compute[259627]: 2025-10-14 09:55:06.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:55:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:55:07.071 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:55:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:55:07.071 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:55:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:55:07.072 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:55:07 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3069: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:55:07 np0005486808 nova_compute[259627]: 2025-10-14 09:55:07.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:55:07 np0005486808 naughty_brattain[435536]: {
Oct 14 05:55:07 np0005486808 naughty_brattain[435536]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:55:07 np0005486808 naughty_brattain[435536]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:55:07 np0005486808 naughty_brattain[435536]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:55:07 np0005486808 naughty_brattain[435536]:        "osd_id": 2,
Oct 14 05:55:07 np0005486808 naughty_brattain[435536]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:55:07 np0005486808 naughty_brattain[435536]:        "type": "bluestore"
Oct 14 05:55:07 np0005486808 naughty_brattain[435536]:    },
Oct 14 05:55:07 np0005486808 naughty_brattain[435536]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:55:07 np0005486808 naughty_brattain[435536]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:55:07 np0005486808 naughty_brattain[435536]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:55:07 np0005486808 naughty_brattain[435536]:        "osd_id": 1,
Oct 14 05:55:07 np0005486808 naughty_brattain[435536]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:55:07 np0005486808 naughty_brattain[435536]:        "type": "bluestore"
Oct 14 05:55:07 np0005486808 naughty_brattain[435536]:    },
Oct 14 05:55:07 np0005486808 naughty_brattain[435536]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:55:07 np0005486808 naughty_brattain[435536]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:55:07 np0005486808 naughty_brattain[435536]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:55:07 np0005486808 naughty_brattain[435536]:        "osd_id": 0,
Oct 14 05:55:07 np0005486808 naughty_brattain[435536]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:55:07 np0005486808 naughty_brattain[435536]:        "type": "bluestore"
Oct 14 05:55:07 np0005486808 naughty_brattain[435536]:    }
Oct 14 05:55:07 np0005486808 naughty_brattain[435536]: }
Oct 14 05:55:07 np0005486808 systemd[1]: libpod-3206d3bfcddd740ecd7c1172f89e365a2c023cf7455eb475e4fa0d2bebf892ee.scope: Deactivated successfully.
Oct 14 05:55:07 np0005486808 systemd[1]: libpod-3206d3bfcddd740ecd7c1172f89e365a2c023cf7455eb475e4fa0d2bebf892ee.scope: Consumed 1.150s CPU time.
Oct 14 05:55:07 np0005486808 podman[435569]: 2025-10-14 09:55:07.264923539 +0000 UTC m=+0.035073292 container died 3206d3bfcddd740ecd7c1172f89e365a2c023cf7455eb475e4fa0d2bebf892ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_brattain, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Oct 14 05:55:07 np0005486808 systemd[1]: var-lib-containers-storage-overlay-01250ba9ee184f9ef6963586f6c688264c90cf83f61aa959bbf2515e6219613b-merged.mount: Deactivated successfully.
Oct 14 05:55:07 np0005486808 podman[435569]: 2025-10-14 09:55:07.347261999 +0000 UTC m=+0.117411682 container remove 3206d3bfcddd740ecd7c1172f89e365a2c023cf7455eb475e4fa0d2bebf892ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 14 05:55:07 np0005486808 systemd[1]: libpod-conmon-3206d3bfcddd740ecd7c1172f89e365a2c023cf7455eb475e4fa0d2bebf892ee.scope: Deactivated successfully.
Oct 14 05:55:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:55:07 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:55:07 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:55:07 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:55:07 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev a5971cf0-5dbd-4459-ad6c-d90e68f84833 does not exist
Oct 14 05:55:07 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 14d6076f-6b0b-4e4d-bbcb-41440809bcf4 does not exist
Oct 14 05:55:08 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:55:08 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:55:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:55:09 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3070: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:55:09 np0005486808 nova_compute[259627]: 2025-10-14 09:55:09.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:55:10 np0005486808 nova_compute[259627]: 2025-10-14 09:55:10.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:55:11 np0005486808 nova_compute[259627]: 2025-10-14 09:55:11.026 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:55:11 np0005486808 nova_compute[259627]: 2025-10-14 09:55:11.027 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:55:11 np0005486808 nova_compute[259627]: 2025-10-14 09:55:11.027 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:55:11 np0005486808 nova_compute[259627]: 2025-10-14 09:55:11.028 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:55:11 np0005486808 nova_compute[259627]: 2025-10-14 09:55:11.028 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:55:11 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3071: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:55:11 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:55:11 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3465612663' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:55:11 np0005486808 nova_compute[259627]: 2025-10-14 09:55:11.515 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:55:11 np0005486808 nova_compute[259627]: 2025-10-14 09:55:11.691 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:55:11 np0005486808 nova_compute[259627]: 2025-10-14 09:55:11.692 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3569MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:55:11 np0005486808 nova_compute[259627]: 2025-10-14 09:55:11.692 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:55:11 np0005486808 nova_compute[259627]: 2025-10-14 09:55:11.692 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:55:11 np0005486808 nova_compute[259627]: 2025-10-14 09:55:11.785 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:55:11 np0005486808 nova_compute[259627]: 2025-10-14 09:55:11.785 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:55:11 np0005486808 nova_compute[259627]: 2025-10-14 09:55:11.803 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing inventories for resource provider 92105e1d-1743-46e3-a494-858b4331398a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 14 05:55:11 np0005486808 nova_compute[259627]: 2025-10-14 09:55:11.831 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Updating ProviderTree inventory for provider 92105e1d-1743-46e3-a494-858b4331398a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 14 05:55:11 np0005486808 nova_compute[259627]: 2025-10-14 09:55:11.831 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Updating inventory in ProviderTree for provider 92105e1d-1743-46e3-a494-858b4331398a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 14 05:55:11 np0005486808 nova_compute[259627]: 2025-10-14 09:55:11.849 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing aggregate associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 14 05:55:11 np0005486808 nova_compute[259627]: 2025-10-14 09:55:11.884 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing trait associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, traits: COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_ACCELERATORS,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 14 05:55:11 np0005486808 nova_compute[259627]: 2025-10-14 09:55:11.917 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:55:12 np0005486808 nova_compute[259627]: 2025-10-14 09:55:12.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:55:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:55:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3602668938' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:55:12 np0005486808 nova_compute[259627]: 2025-10-14 09:55:12.396 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:55:12 np0005486808 nova_compute[259627]: 2025-10-14 09:55:12.404 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:55:12 np0005486808 nova_compute[259627]: 2025-10-14 09:55:12.436 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:55:12 np0005486808 nova_compute[259627]: 2025-10-14 09:55:12.439 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:55:12 np0005486808 nova_compute[259627]: 2025-10-14 09:55:12.439 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:55:13 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3072: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:55:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:55:13 np0005486808 nova_compute[259627]: 2025-10-14 09:55:13.435 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:55:13 np0005486808 nova_compute[259627]: 2025-10-14 09:55:13.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:55:14 np0005486808 nova_compute[259627]: 2025-10-14 09:55:14.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:55:14 np0005486808 nova_compute[259627]: 2025-10-14 09:55:14.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:55:14 np0005486808 nova_compute[259627]: 2025-10-14 09:55:14.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:55:14 np0005486808 nova_compute[259627]: 2025-10-14 09:55:14.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 05:55:15 np0005486808 nova_compute[259627]: 2025-10-14 09:55:15.006 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 14 05:55:15 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3073: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:55:16 np0005486808 nova_compute[259627]: 2025-10-14 09:55:16.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:55:16 np0005486808 nova_compute[259627]: 2025-10-14 09:55:16.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:55:16 np0005486808 nova_compute[259627]: 2025-10-14 09:55:16.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:55:17 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3074: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:55:17 np0005486808 nova_compute[259627]: 2025-10-14 09:55:17.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:55:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:55:18 np0005486808 nova_compute[259627]: 2025-10-14 09:55:18.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:55:19 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3075: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:55:19 np0005486808 nova_compute[259627]: 2025-10-14 09:55:19.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:55:21 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3076: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:55:22 np0005486808 nova_compute[259627]: 2025-10-14 09:55:22.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:55:23 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3077: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:55:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:55:23 np0005486808 podman[435678]: 2025-10-14 09:55:23.687077248 +0000 UTC m=+0.090336378 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2)
Oct 14 05:55:23 np0005486808 podman[435679]: 2025-10-14 09:55:23.695421143 +0000 UTC m=+0.102010135 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, tcib_managed=true)
Oct 14 05:55:24 np0005486808 nova_compute[259627]: 2025-10-14 09:55:24.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:55:25 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3078: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:55:27 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3079: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:55:27 np0005486808 nova_compute[259627]: 2025-10-14 09:55:27.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:55:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:55:29 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3080: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:55:29 np0005486808 nova_compute[259627]: 2025-10-14 09:55:29.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:55:31 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3081: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:55:32 np0005486808 nova_compute[259627]: 2025-10-14 09:55:32.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:55:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:55:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:55:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:55:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:55:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:55:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:55:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:55:32
Oct 14 05:55:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:55:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:55:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'volumes', 'default.rgw.control', 'vms', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', 'images']
Oct 14 05:55:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:55:33 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3082: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:55:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:55:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:55:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:55:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:55:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:55:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:55:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:55:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:55:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:55:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:55:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:55:34 np0005486808 nova_compute[259627]: 2025-10-14 09:55:34.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:55:35 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3083: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:55:36 np0005486808 podman[435716]: 2025-10-14 09:55:36.708058823 +0000 UTC m=+0.102752782 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 05:55:36 np0005486808 podman[435715]: 2025-10-14 09:55:36.720422306 +0000 UTC m=+0.119181275 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, managed_by=edpm_ansible)
Oct 14 05:55:37 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3084: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:55:37 np0005486808 nova_compute[259627]: 2025-10-14 09:55:37.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:55:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:55:39 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3085: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:55:39 np0005486808 nova_compute[259627]: 2025-10-14 09:55:39.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:55:41 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3086: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:55:42 np0005486808 nova_compute[259627]: 2025-10-14 09:55:42.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:55:43 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3087: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:55:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:55:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:55:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:55:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:55:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:55:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 05:55:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:55:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:55:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:55:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:55:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:55:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 14 05:55:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:55:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:55:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:55:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:55:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:55:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:55:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:55:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:55:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:55:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:55:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:55:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:55:44 np0005486808 nova_compute[259627]: 2025-10-14 09:55:44.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:55:45 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3088: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:55:47 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3089: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:55:47 np0005486808 nova_compute[259627]: 2025-10-14 09:55:47.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:55:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:55:49 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3090: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:55:49 np0005486808 nova_compute[259627]: 2025-10-14 09:55:49.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:55:51 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3091: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:55:52 np0005486808 nova_compute[259627]: 2025-10-14 09:55:52.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:55:53 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3092: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:55:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:55:54 np0005486808 podman[435762]: 2025-10-14 09:55:54.686399958 +0000 UTC m=+0.087935368 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 14 05:55:54 np0005486808 podman[435763]: 2025-10-14 09:55:54.686529972 +0000 UTC m=+0.085384667 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 05:55:54 np0005486808 nova_compute[259627]: 2025-10-14 09:55:54.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:55:55 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3093: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:55:57 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3094: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:55:57 np0005486808 nova_compute[259627]: 2025-10-14 09:55:57.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:55:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:55:59 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3095: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:55:59 np0005486808 nova_compute[259627]: 2025-10-14 09:55:59.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:56:01 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3096: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:56:02 np0005486808 nova_compute[259627]: 2025-10-14 09:56:02.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:56:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:56:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:56:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:56:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:56:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:56:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:56:03 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3097: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:56:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:56:04 np0005486808 nova_compute[259627]: 2025-10-14 09:56:04.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:56:05 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3098: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:56:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:56:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1488486584' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:56:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:56:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1488486584' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:56:05 np0005486808 nova_compute[259627]: 2025-10-14 09:56:05.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:56:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:56:07.072 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:56:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:56:07.072 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:56:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:56:07.072 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:56:07 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3099: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:56:07 np0005486808 nova_compute[259627]: 2025-10-14 09:56:07.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:56:07 np0005486808 podman[435800]: 2025-10-14 09:56:07.207831107 +0000 UTC m=+0.084361261 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 14 05:56:07 np0005486808 podman[435799]: 2025-10-14 09:56:07.254277867 +0000 UTC m=+0.136271475 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:56:07 np0005486808 nova_compute[259627]: 2025-10-14 09:56:07.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:56:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:56:08 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:56:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:56:08 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:56:08 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:56:08 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:56:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:56:09 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3100: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:56:09 np0005486808 nova_compute[259627]: 2025-10-14 09:56:09.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:56:09 np0005486808 podman[436237]: 2025-10-14 09:56:09.961675528 +0000 UTC m=+0.057233235 container create 08f5e663375151f44a1ef7adc3181d4df432048e40255c0d8f284b738de18dd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 14 05:56:10 np0005486808 systemd[1]: Started libpod-conmon-08f5e663375151f44a1ef7adc3181d4df432048e40255c0d8f284b738de18dd4.scope.
Oct 14 05:56:10 np0005486808 podman[436237]: 2025-10-14 09:56:09.941578535 +0000 UTC m=+0.037136302 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:56:10 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:56:10 np0005486808 podman[436237]: 2025-10-14 09:56:10.062734838 +0000 UTC m=+0.158292545 container init 08f5e663375151f44a1ef7adc3181d4df432048e40255c0d8f284b738de18dd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 14 05:56:10 np0005486808 podman[436237]: 2025-10-14 09:56:10.077221293 +0000 UTC m=+0.172779010 container start 08f5e663375151f44a1ef7adc3181d4df432048e40255c0d8f284b738de18dd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_blackburn, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:56:10 np0005486808 podman[436237]: 2025-10-14 09:56:10.081284483 +0000 UTC m=+0.176842200 container attach 08f5e663375151f44a1ef7adc3181d4df432048e40255c0d8f284b738de18dd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_blackburn, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:56:10 np0005486808 adoring_blackburn[436253]: 167 167
Oct 14 05:56:10 np0005486808 systemd[1]: libpod-08f5e663375151f44a1ef7adc3181d4df432048e40255c0d8f284b738de18dd4.scope: Deactivated successfully.
Oct 14 05:56:10 np0005486808 podman[436237]: 2025-10-14 09:56:10.086887971 +0000 UTC m=+0.182445678 container died 08f5e663375151f44a1ef7adc3181d4df432048e40255c0d8f284b738de18dd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_blackburn, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 14 05:56:10 np0005486808 systemd[1]: var-lib-containers-storage-overlay-8995959ca183b817d05bcbf236b055b40b8fc69e7c732d2cdd4b70985f08dda9-merged.mount: Deactivated successfully.
Oct 14 05:56:10 np0005486808 podman[436237]: 2025-10-14 09:56:10.137100573 +0000 UTC m=+0.232658240 container remove 08f5e663375151f44a1ef7adc3181d4df432048e40255c0d8f284b738de18dd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_blackburn, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:56:10 np0005486808 systemd[1]: libpod-conmon-08f5e663375151f44a1ef7adc3181d4df432048e40255c0d8f284b738de18dd4.scope: Deactivated successfully.
Oct 14 05:56:10 np0005486808 podman[436278]: 2025-10-14 09:56:10.368572382 +0000 UTC m=+0.055730708 container create feffa607ad690d1c8d9c66c35d1989f3337628b0f5622e8e32c6acc571dc6fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_cori, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:56:10 np0005486808 systemd[1]: Started libpod-conmon-feffa607ad690d1c8d9c66c35d1989f3337628b0f5622e8e32c6acc571dc6fb7.scope.
Oct 14 05:56:10 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:56:10 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c462afa7987f5ba720ea5a6c316f45e0fa63588a826be33e148db8d71e84df1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:56:10 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c462afa7987f5ba720ea5a6c316f45e0fa63588a826be33e148db8d71e84df1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:56:10 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c462afa7987f5ba720ea5a6c316f45e0fa63588a826be33e148db8d71e84df1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:56:10 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c462afa7987f5ba720ea5a6c316f45e0fa63588a826be33e148db8d71e84df1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:56:10 np0005486808 podman[436278]: 2025-10-14 09:56:10.342530063 +0000 UTC m=+0.029688469 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:56:10 np0005486808 podman[436278]: 2025-10-14 09:56:10.447993551 +0000 UTC m=+0.135151877 container init feffa607ad690d1c8d9c66c35d1989f3337628b0f5622e8e32c6acc571dc6fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_cori, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 14 05:56:10 np0005486808 podman[436278]: 2025-10-14 09:56:10.454234344 +0000 UTC m=+0.141392650 container start feffa607ad690d1c8d9c66c35d1989f3337628b0f5622e8e32c6acc571dc6fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_cori, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:56:10 np0005486808 podman[436278]: 2025-10-14 09:56:10.457612217 +0000 UTC m=+0.144770543 container attach feffa607ad690d1c8d9c66c35d1989f3337628b0f5622e8e32c6acc571dc6fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Oct 14 05:56:11 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3101: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:56:11 np0005486808 nova_compute[259627]: 2025-10-14 09:56:11.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:56:12 np0005486808 great_cori[436294]: [
Oct 14 05:56:12 np0005486808 great_cori[436294]:    {
Oct 14 05:56:12 np0005486808 great_cori[436294]:        "available": false,
Oct 14 05:56:12 np0005486808 great_cori[436294]:        "ceph_device": false,
Oct 14 05:56:12 np0005486808 great_cori[436294]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct 14 05:56:12 np0005486808 great_cori[436294]:        "lsm_data": {},
Oct 14 05:56:12 np0005486808 great_cori[436294]:        "lvs": [],
Oct 14 05:56:12 np0005486808 great_cori[436294]:        "path": "/dev/sr0",
Oct 14 05:56:12 np0005486808 great_cori[436294]:        "rejected_reasons": [
Oct 14 05:56:12 np0005486808 great_cori[436294]:            "Insufficient space (<5GB)",
Oct 14 05:56:12 np0005486808 great_cori[436294]:            "Has a FileSystem"
Oct 14 05:56:12 np0005486808 great_cori[436294]:        ],
Oct 14 05:56:12 np0005486808 great_cori[436294]:        "sys_api": {
Oct 14 05:56:12 np0005486808 great_cori[436294]:            "actuators": null,
Oct 14 05:56:12 np0005486808 great_cori[436294]:            "device_nodes": "sr0",
Oct 14 05:56:12 np0005486808 great_cori[436294]:            "devname": "sr0",
Oct 14 05:56:12 np0005486808 great_cori[436294]:            "human_readable_size": "482.00 KB",
Oct 14 05:56:12 np0005486808 great_cori[436294]:            "id_bus": "ata",
Oct 14 05:56:12 np0005486808 great_cori[436294]:            "model": "QEMU DVD-ROM",
Oct 14 05:56:12 np0005486808 great_cori[436294]:            "nr_requests": "2",
Oct 14 05:56:12 np0005486808 great_cori[436294]:            "parent": "/dev/sr0",
Oct 14 05:56:12 np0005486808 great_cori[436294]:            "partitions": {},
Oct 14 05:56:12 np0005486808 great_cori[436294]:            "path": "/dev/sr0",
Oct 14 05:56:12 np0005486808 great_cori[436294]:            "removable": "1",
Oct 14 05:56:12 np0005486808 great_cori[436294]:            "rev": "2.5+",
Oct 14 05:56:12 np0005486808 great_cori[436294]:            "ro": "0",
Oct 14 05:56:12 np0005486808 great_cori[436294]:            "rotational": "0",
Oct 14 05:56:12 np0005486808 great_cori[436294]:            "sas_address": "",
Oct 14 05:56:12 np0005486808 great_cori[436294]:            "sas_device_handle": "",
Oct 14 05:56:12 np0005486808 great_cori[436294]:            "scheduler_mode": "mq-deadline",
Oct 14 05:56:12 np0005486808 great_cori[436294]:            "sectors": 0,
Oct 14 05:56:12 np0005486808 great_cori[436294]:            "sectorsize": "2048",
Oct 14 05:56:12 np0005486808 great_cori[436294]:            "size": 493568.0,
Oct 14 05:56:12 np0005486808 great_cori[436294]:            "support_discard": "2048",
Oct 14 05:56:12 np0005486808 great_cori[436294]:            "type": "disk",
Oct 14 05:56:12 np0005486808 great_cori[436294]:            "vendor": "QEMU"
Oct 14 05:56:12 np0005486808 great_cori[436294]:        }
Oct 14 05:56:12 np0005486808 great_cori[436294]:    }
Oct 14 05:56:12 np0005486808 great_cori[436294]: ]
Oct 14 05:56:12 np0005486808 nova_compute[259627]: 2025-10-14 09:56:12.045 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:56:12 np0005486808 nova_compute[259627]: 2025-10-14 09:56:12.046 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:56:12 np0005486808 nova_compute[259627]: 2025-10-14 09:56:12.046 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:56:12 np0005486808 nova_compute[259627]: 2025-10-14 09:56:12.047 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:56:12 np0005486808 nova_compute[259627]: 2025-10-14 09:56:12.047 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:56:12 np0005486808 systemd[1]: libpod-feffa607ad690d1c8d9c66c35d1989f3337628b0f5622e8e32c6acc571dc6fb7.scope: Deactivated successfully.
Oct 14 05:56:12 np0005486808 systemd[1]: libpod-feffa607ad690d1c8d9c66c35d1989f3337628b0f5622e8e32c6acc571dc6fb7.scope: Consumed 1.672s CPU time.
Oct 14 05:56:12 np0005486808 podman[438503]: 2025-10-14 09:56:12.134489983 +0000 UTC m=+0.045099017 container died feffa607ad690d1c8d9c66c35d1989f3337628b0f5622e8e32c6acc571dc6fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_cori, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:56:12 np0005486808 systemd[1]: var-lib-containers-storage-overlay-9c462afa7987f5ba720ea5a6c316f45e0fa63588a826be33e148db8d71e84df1-merged.mount: Deactivated successfully.
Oct 14 05:56:12 np0005486808 nova_compute[259627]: 2025-10-14 09:56:12.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:56:12 np0005486808 podman[438503]: 2025-10-14 09:56:12.208997691 +0000 UTC m=+0.119606625 container remove feffa607ad690d1c8d9c66c35d1989f3337628b0f5622e8e32c6acc571dc6fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_cori, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:56:12 np0005486808 systemd[1]: libpod-conmon-feffa607ad690d1c8d9c66c35d1989f3337628b0f5622e8e32c6acc571dc6fb7.scope: Deactivated successfully.
Oct 14 05:56:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:56:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:56:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:56:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:56:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 14 05:56:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 14 05:56:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:56:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:56:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:56:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:56:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:56:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:56:12 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 45670223-0286-4f9a-9df7-5432b65b3659 does not exist
Oct 14 05:56:12 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 2d232ea3-188e-4c9c-b268-8959765e6764 does not exist
Oct 14 05:56:12 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev eee15c4e-31e4-4959-ab0f-15c2629977dd does not exist
Oct 14 05:56:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:56:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:56:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:56:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:56:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:56:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:56:12 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:56:12 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2794091761' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:56:12 np0005486808 nova_compute[259627]: 2025-10-14 09:56:12.530 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:56:12 np0005486808 nova_compute[259627]: 2025-10-14 09:56:12.745 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:56:12 np0005486808 nova_compute[259627]: 2025-10-14 09:56:12.746 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3559MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:56:12 np0005486808 nova_compute[259627]: 2025-10-14 09:56:12.746 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:56:12 np0005486808 nova_compute[259627]: 2025-10-14 09:56:12.747 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:56:12 np0005486808 nova_compute[259627]: 2025-10-14 09:56:12.833 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:56:12 np0005486808 nova_compute[259627]: 2025-10-14 09:56:12.834 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:56:12 np0005486808 nova_compute[259627]: 2025-10-14 09:56:12.854 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:56:13 np0005486808 podman[438680]: 2025-10-14 09:56:13.085324793 +0000 UTC m=+0.064319409 container create cf5f9f8bca3779541cf908d9966188ececde25b6bbed26cf4876ebfe50b97130 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:56:13 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3102: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:56:13 np0005486808 systemd[1]: Started libpod-conmon-cf5f9f8bca3779541cf908d9966188ececde25b6bbed26cf4876ebfe50b97130.scope.
Oct 14 05:56:13 np0005486808 podman[438680]: 2025-10-14 09:56:13.058114426 +0000 UTC m=+0.037109082 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:56:13 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:56:13 np0005486808 podman[438680]: 2025-10-14 09:56:13.192188165 +0000 UTC m=+0.171182811 container init cf5f9f8bca3779541cf908d9966188ececde25b6bbed26cf4876ebfe50b97130 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_dijkstra, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 14 05:56:13 np0005486808 podman[438680]: 2025-10-14 09:56:13.202044757 +0000 UTC m=+0.181039363 container start cf5f9f8bca3779541cf908d9966188ececde25b6bbed26cf4876ebfe50b97130 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 14 05:56:13 np0005486808 podman[438680]: 2025-10-14 09:56:13.206137268 +0000 UTC m=+0.185131884 container attach cf5f9f8bca3779541cf908d9966188ececde25b6bbed26cf4876ebfe50b97130 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_dijkstra, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 14 05:56:13 np0005486808 wonderful_dijkstra[438714]: 167 167
Oct 14 05:56:13 np0005486808 systemd[1]: libpod-cf5f9f8bca3779541cf908d9966188ececde25b6bbed26cf4876ebfe50b97130.scope: Deactivated successfully.
Oct 14 05:56:13 np0005486808 podman[438680]: 2025-10-14 09:56:13.21111361 +0000 UTC m=+0.190108226 container died cf5f9f8bca3779541cf908d9966188ececde25b6bbed26cf4876ebfe50b97130 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_dijkstra, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 14 05:56:13 np0005486808 systemd[1]: var-lib-containers-storage-overlay-57a04142650eff79b844d7ad90e7f379daac20c5135719a6d1f1ccfb5a4b7c05-merged.mount: Deactivated successfully.
Oct 14 05:56:13 np0005486808 podman[438680]: 2025-10-14 09:56:13.262928161 +0000 UTC m=+0.241922767 container remove cf5f9f8bca3779541cf908d9966188ececde25b6bbed26cf4876ebfe50b97130 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_dijkstra, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:56:13 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:56:13 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:56:13 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 14 05:56:13 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:56:13 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:56:13 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:56:13 np0005486808 systemd[1]: libpod-conmon-cf5f9f8bca3779541cf908d9966188ececde25b6bbed26cf4876ebfe50b97130.scope: Deactivated successfully.
Oct 14 05:56:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:56:13 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1143012848' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:56:13 np0005486808 nova_compute[259627]: 2025-10-14 09:56:13.314 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:56:13 np0005486808 nova_compute[259627]: 2025-10-14 09:56:13.324 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:56:13 np0005486808 nova_compute[259627]: 2025-10-14 09:56:13.347 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:56:13 np0005486808 nova_compute[259627]: 2025-10-14 09:56:13.352 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:56:13 np0005486808 nova_compute[259627]: 2025-10-14 09:56:13.352 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:56:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:56:13 np0005486808 podman[438740]: 2025-10-14 09:56:13.481114025 +0000 UTC m=+0.068892062 container create c44d9d61685bc25175561fc2cb288b1b53d62632b76c7b3d858b6274fb58ea53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 14 05:56:13 np0005486808 systemd[1]: Started libpod-conmon-c44d9d61685bc25175561fc2cb288b1b53d62632b76c7b3d858b6274fb58ea53.scope.
Oct 14 05:56:13 np0005486808 podman[438740]: 2025-10-14 09:56:13.452578265 +0000 UTC m=+0.040356352 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:56:13 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:56:13 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/247f258b24f7a3878dd8eaa5a5fe835d8a00ad6f09bb2c21febe53d2294a783d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:56:13 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/247f258b24f7a3878dd8eaa5a5fe835d8a00ad6f09bb2c21febe53d2294a783d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:56:13 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/247f258b24f7a3878dd8eaa5a5fe835d8a00ad6f09bb2c21febe53d2294a783d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:56:13 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/247f258b24f7a3878dd8eaa5a5fe835d8a00ad6f09bb2c21febe53d2294a783d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:56:13 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/247f258b24f7a3878dd8eaa5a5fe835d8a00ad6f09bb2c21febe53d2294a783d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:56:13 np0005486808 podman[438740]: 2025-10-14 09:56:13.586578833 +0000 UTC m=+0.174356870 container init c44d9d61685bc25175561fc2cb288b1b53d62632b76c7b3d858b6274fb58ea53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_jang, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 05:56:13 np0005486808 podman[438740]: 2025-10-14 09:56:13.600243258 +0000 UTC m=+0.188021305 container start c44d9d61685bc25175561fc2cb288b1b53d62632b76c7b3d858b6274fb58ea53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_jang, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 14 05:56:13 np0005486808 podman[438740]: 2025-10-14 09:56:13.60561028 +0000 UTC m=+0.193388327 container attach c44d9d61685bc25175561fc2cb288b1b53d62632b76c7b3d858b6274fb58ea53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_jang, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Oct 14 05:56:14 np0005486808 awesome_jang[438757]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:56:14 np0005486808 awesome_jang[438757]: --> relative data size: 1.0
Oct 14 05:56:14 np0005486808 awesome_jang[438757]: --> All data devices are unavailable
Oct 14 05:56:14 np0005486808 systemd[1]: libpod-c44d9d61685bc25175561fc2cb288b1b53d62632b76c7b3d858b6274fb58ea53.scope: Deactivated successfully.
Oct 14 05:56:14 np0005486808 podman[438740]: 2025-10-14 09:56:14.72146428 +0000 UTC m=+1.309242327 container died c44d9d61685bc25175561fc2cb288b1b53d62632b76c7b3d858b6274fb58ea53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_jang, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:56:14 np0005486808 systemd[1]: libpod-c44d9d61685bc25175561fc2cb288b1b53d62632b76c7b3d858b6274fb58ea53.scope: Consumed 1.084s CPU time.
Oct 14 05:56:14 np0005486808 systemd[1]: var-lib-containers-storage-overlay-247f258b24f7a3878dd8eaa5a5fe835d8a00ad6f09bb2c21febe53d2294a783d-merged.mount: Deactivated successfully.
Oct 14 05:56:14 np0005486808 podman[438740]: 2025-10-14 09:56:14.792862872 +0000 UTC m=+1.380640919 container remove c44d9d61685bc25175561fc2cb288b1b53d62632b76c7b3d858b6274fb58ea53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Oct 14 05:56:14 np0005486808 systemd[1]: libpod-conmon-c44d9d61685bc25175561fc2cb288b1b53d62632b76c7b3d858b6274fb58ea53.scope: Deactivated successfully.
Oct 14 05:56:14 np0005486808 nova_compute[259627]: 2025-10-14 09:56:14.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:56:15 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3103: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:56:15 np0005486808 nova_compute[259627]: 2025-10-14 09:56:15.349 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:56:15 np0005486808 podman[438941]: 2025-10-14 09:56:15.707473624 +0000 UTC m=+0.065002426 container create 18f7c01fb03570927b978606d5d3b365111e3e31f6f340e2db99258b4f0e26fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 14 05:56:15 np0005486808 systemd[1]: Started libpod-conmon-18f7c01fb03570927b978606d5d3b365111e3e31f6f340e2db99258b4f0e26fd.scope.
Oct 14 05:56:15 np0005486808 podman[438941]: 2025-10-14 09:56:15.6861222 +0000 UTC m=+0.043651012 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:56:15 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:56:15 np0005486808 podman[438941]: 2025-10-14 09:56:15.809581839 +0000 UTC m=+0.167110691 container init 18f7c01fb03570927b978606d5d3b365111e3e31f6f340e2db99258b4f0e26fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wescoff, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:56:15 np0005486808 podman[438941]: 2025-10-14 09:56:15.822845205 +0000 UTC m=+0.180374007 container start 18f7c01fb03570927b978606d5d3b365111e3e31f6f340e2db99258b4f0e26fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wescoff, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 14 05:56:15 np0005486808 podman[438941]: 2025-10-14 09:56:15.826988916 +0000 UTC m=+0.184517778 container attach 18f7c01fb03570927b978606d5d3b365111e3e31f6f340e2db99258b4f0e26fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:56:15 np0005486808 dreamy_wescoff[438957]: 167 167
Oct 14 05:56:15 np0005486808 systemd[1]: libpod-18f7c01fb03570927b978606d5d3b365111e3e31f6f340e2db99258b4f0e26fd.scope: Deactivated successfully.
Oct 14 05:56:15 np0005486808 podman[438941]: 2025-10-14 09:56:15.832680226 +0000 UTC m=+0.190209028 container died 18f7c01fb03570927b978606d5d3b365111e3e31f6f340e2db99258b4f0e26fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:56:15 np0005486808 systemd[1]: var-lib-containers-storage-overlay-fcf2a79b7a80a839367cc846d14e9852f6a3a499fb14fdc5c9532ab7803cb89e-merged.mount: Deactivated successfully.
Oct 14 05:56:15 np0005486808 podman[438941]: 2025-10-14 09:56:15.888142326 +0000 UTC m=+0.245671128 container remove 18f7c01fb03570927b978606d5d3b365111e3e31f6f340e2db99258b4f0e26fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 14 05:56:15 np0005486808 systemd[1]: libpod-conmon-18f7c01fb03570927b978606d5d3b365111e3e31f6f340e2db99258b4f0e26fd.scope: Deactivated successfully.
Oct 14 05:56:15 np0005486808 nova_compute[259627]: 2025-10-14 09:56:15.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:56:16 np0005486808 podman[438982]: 2025-10-14 09:56:16.11816658 +0000 UTC m=+0.058341753 container create e198473ca395fea5a849f07b247b39e0369103e673c4dcc55f4aa2f5e9d3b524 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:56:16 np0005486808 systemd[1]: Started libpod-conmon-e198473ca395fea5a849f07b247b39e0369103e673c4dcc55f4aa2f5e9d3b524.scope.
Oct 14 05:56:16 np0005486808 podman[438982]: 2025-10-14 09:56:16.092734926 +0000 UTC m=+0.032910179 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:56:16 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:56:16 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/356b40048ef67cd0cd8813c35e5e625549fc56ca106517e0c170cb2e381778f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:56:16 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/356b40048ef67cd0cd8813c35e5e625549fc56ca106517e0c170cb2e381778f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:56:16 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/356b40048ef67cd0cd8813c35e5e625549fc56ca106517e0c170cb2e381778f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:56:16 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/356b40048ef67cd0cd8813c35e5e625549fc56ca106517e0c170cb2e381778f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:56:16 np0005486808 podman[438982]: 2025-10-14 09:56:16.213422307 +0000 UTC m=+0.153597560 container init e198473ca395fea5a849f07b247b39e0369103e673c4dcc55f4aa2f5e9d3b524 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 14 05:56:16 np0005486808 podman[438982]: 2025-10-14 09:56:16.229559833 +0000 UTC m=+0.169734986 container start e198473ca395fea5a849f07b247b39e0369103e673c4dcc55f4aa2f5e9d3b524 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_poincare, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:56:16 np0005486808 podman[438982]: 2025-10-14 09:56:16.233771977 +0000 UTC m=+0.173947230 container attach e198473ca395fea5a849f07b247b39e0369103e673c4dcc55f4aa2f5e9d3b524 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 14 05:56:16 np0005486808 tender_poincare[438999]: {
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:    "0": [
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:        {
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:            "devices": [
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:                "/dev/loop3"
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:            ],
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:            "lv_name": "ceph_lv0",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:            "lv_size": "21470642176",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:            "name": "ceph_lv0",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:            "tags": {
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:                "ceph.cluster_name": "ceph",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:                "ceph.crush_device_class": "",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:                "ceph.encrypted": "0",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:                "ceph.osd_id": "0",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:                "ceph.type": "block",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:                "ceph.vdo": "0"
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:            },
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:            "type": "block",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:            "vg_name": "ceph_vg0"
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:        }
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:    ],
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:    "1": [
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:        {
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:            "devices": [
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:                "/dev/loop4"
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:            ],
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:            "lv_name": "ceph_lv1",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:            "lv_size": "21470642176",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:            "name": "ceph_lv1",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:            "tags": {
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:                "ceph.cluster_name": "ceph",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:                "ceph.crush_device_class": "",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:                "ceph.encrypted": "0",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:                "ceph.osd_id": "1",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:                "ceph.type": "block",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:                "ceph.vdo": "0"
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:            },
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:            "type": "block",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:            "vg_name": "ceph_vg1"
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:        }
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:    ],
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:    "2": [
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:        {
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:            "devices": [
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:                "/dev/loop5"
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:            ],
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:            "lv_name": "ceph_lv2",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:            "lv_size": "21470642176",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:            "name": "ceph_lv2",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:            "tags": {
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:                "ceph.cluster_name": "ceph",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:                "ceph.crush_device_class": "",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:                "ceph.encrypted": "0",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:                "ceph.osd_id": "2",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:                "ceph.type": "block",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:                "ceph.vdo": "0"
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:            },
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:            "type": "block",
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:            "vg_name": "ceph_vg2"
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:        }
Oct 14 05:56:16 np0005486808 tender_poincare[438999]:    ]
Oct 14 05:56:16 np0005486808 tender_poincare[438999]: }
Oct 14 05:56:16 np0005486808 nova_compute[259627]: 2025-10-14 09:56:16.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:56:16 np0005486808 nova_compute[259627]: 2025-10-14 09:56:16.981 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 05:56:16 np0005486808 nova_compute[259627]: 2025-10-14 09:56:16.981 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 05:56:16 np0005486808 systemd[1]: libpod-e198473ca395fea5a849f07b247b39e0369103e673c4dcc55f4aa2f5e9d3b524.scope: Deactivated successfully.
Oct 14 05:56:16 np0005486808 podman[438982]: 2025-10-14 09:56:16.983587965 +0000 UTC m=+0.923763158 container died e198473ca395fea5a849f07b247b39e0369103e673c4dcc55f4aa2f5e9d3b524 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:56:16 np0005486808 nova_compute[259627]: 2025-10-14 09:56:16.998 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 05:56:17 np0005486808 systemd[1]: var-lib-containers-storage-overlay-356b40048ef67cd0cd8813c35e5e625549fc56ca106517e0c170cb2e381778f7-merged.mount: Deactivated successfully.
Oct 14 05:56:17 np0005486808 podman[438982]: 2025-10-14 09:56:17.053040559 +0000 UTC m=+0.993215722 container remove e198473ca395fea5a849f07b247b39e0369103e673c4dcc55f4aa2f5e9d3b524 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_poincare, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:56:17 np0005486808 systemd[1]: libpod-conmon-e198473ca395fea5a849f07b247b39e0369103e673c4dcc55f4aa2f5e9d3b524.scope: Deactivated successfully.
Oct 14 05:56:17 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3104: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:56:17 np0005486808 nova_compute[259627]: 2025-10-14 09:56:17.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:56:18 np0005486808 podman[439163]: 2025-10-14 09:56:18.010456612 +0000 UTC m=+0.066844111 container create 3f6509e18ed236da8d1c69bcc1b6b974d460f015cce7f6de8fa23dcba81bdb27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_banach, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:56:18 np0005486808 systemd[1]: Started libpod-conmon-3f6509e18ed236da8d1c69bcc1b6b974d460f015cce7f6de8fa23dcba81bdb27.scope.
Oct 14 05:56:18 np0005486808 podman[439163]: 2025-10-14 09:56:17.984496055 +0000 UTC m=+0.040883604 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:56:18 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:56:18 np0005486808 podman[439163]: 2025-10-14 09:56:18.112593728 +0000 UTC m=+0.168981257 container init 3f6509e18ed236da8d1c69bcc1b6b974d460f015cce7f6de8fa23dcba81bdb27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 14 05:56:18 np0005486808 podman[439163]: 2025-10-14 09:56:18.12490133 +0000 UTC m=+0.181288819 container start 3f6509e18ed236da8d1c69bcc1b6b974d460f015cce7f6de8fa23dcba81bdb27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 05:56:18 np0005486808 podman[439163]: 2025-10-14 09:56:18.129710018 +0000 UTC m=+0.186097577 container attach 3f6509e18ed236da8d1c69bcc1b6b974d460f015cce7f6de8fa23dcba81bdb27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_banach, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 14 05:56:18 np0005486808 unruffled_banach[439179]: 167 167
Oct 14 05:56:18 np0005486808 systemd[1]: libpod-3f6509e18ed236da8d1c69bcc1b6b974d460f015cce7f6de8fa23dcba81bdb27.scope: Deactivated successfully.
Oct 14 05:56:18 np0005486808 podman[439163]: 2025-10-14 09:56:18.132388984 +0000 UTC m=+0.188776473 container died 3f6509e18ed236da8d1c69bcc1b6b974d460f015cce7f6de8fa23dcba81bdb27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_banach, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 14 05:56:18 np0005486808 systemd[1]: var-lib-containers-storage-overlay-2cfc7418709413762eee1ec9f9c6970e50fe28c53e7b2af041e45764e8fb9af8-merged.mount: Deactivated successfully.
Oct 14 05:56:18 np0005486808 podman[439163]: 2025-10-14 09:56:18.187645489 +0000 UTC m=+0.244032978 container remove 3f6509e18ed236da8d1c69bcc1b6b974d460f015cce7f6de8fa23dcba81bdb27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_banach, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:56:18 np0005486808 systemd[1]: libpod-conmon-3f6509e18ed236da8d1c69bcc1b6b974d460f015cce7f6de8fa23dcba81bdb27.scope: Deactivated successfully.
Oct 14 05:56:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:56:18 np0005486808 podman[439203]: 2025-10-14 09:56:18.434612499 +0000 UTC m=+0.061306045 container create 2991077e458b6e57f6aa7b873ab3146091e46946133dcf540874fc911cf91bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bhabha, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:56:18 np0005486808 systemd[1]: Started libpod-conmon-2991077e458b6e57f6aa7b873ab3146091e46946133dcf540874fc911cf91bc2.scope.
Oct 14 05:56:18 np0005486808 podman[439203]: 2025-10-14 09:56:18.412589279 +0000 UTC m=+0.039282845 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:56:18 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:56:18 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cd8ac26044ad0f1f2cb07355611b140b7e731c43d09d45e4fb6f92ece216d80/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:56:18 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cd8ac26044ad0f1f2cb07355611b140b7e731c43d09d45e4fb6f92ece216d80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:56:18 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cd8ac26044ad0f1f2cb07355611b140b7e731c43d09d45e4fb6f92ece216d80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:56:18 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cd8ac26044ad0f1f2cb07355611b140b7e731c43d09d45e4fb6f92ece216d80/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:56:18 np0005486808 podman[439203]: 2025-10-14 09:56:18.556823458 +0000 UTC m=+0.183517004 container init 2991077e458b6e57f6aa7b873ab3146091e46946133dcf540874fc911cf91bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bhabha, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:56:18 np0005486808 podman[439203]: 2025-10-14 09:56:18.569050628 +0000 UTC m=+0.195744154 container start 2991077e458b6e57f6aa7b873ab3146091e46946133dcf540874fc911cf91bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:56:18 np0005486808 podman[439203]: 2025-10-14 09:56:18.573000845 +0000 UTC m=+0.199694451 container attach 2991077e458b6e57f6aa7b873ab3146091e46946133dcf540874fc911cf91bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 14 05:56:18 np0005486808 nova_compute[259627]: 2025-10-14 09:56:18.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:56:18 np0005486808 nova_compute[259627]: 2025-10-14 09:56:18.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:56:18 np0005486808 nova_compute[259627]: 2025-10-14 09:56:18.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:56:18 np0005486808 nova_compute[259627]: 2025-10-14 09:56:18.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:56:19 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3105: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:56:19 np0005486808 vigorous_bhabha[439220]: {
Oct 14 05:56:19 np0005486808 vigorous_bhabha[439220]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:56:19 np0005486808 vigorous_bhabha[439220]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:56:19 np0005486808 vigorous_bhabha[439220]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:56:19 np0005486808 vigorous_bhabha[439220]:        "osd_id": 2,
Oct 14 05:56:19 np0005486808 vigorous_bhabha[439220]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:56:19 np0005486808 vigorous_bhabha[439220]:        "type": "bluestore"
Oct 14 05:56:19 np0005486808 vigorous_bhabha[439220]:    },
Oct 14 05:56:19 np0005486808 vigorous_bhabha[439220]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:56:19 np0005486808 vigorous_bhabha[439220]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:56:19 np0005486808 vigorous_bhabha[439220]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:56:19 np0005486808 vigorous_bhabha[439220]:        "osd_id": 1,
Oct 14 05:56:19 np0005486808 vigorous_bhabha[439220]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:56:19 np0005486808 vigorous_bhabha[439220]:        "type": "bluestore"
Oct 14 05:56:19 np0005486808 vigorous_bhabha[439220]:    },
Oct 14 05:56:19 np0005486808 vigorous_bhabha[439220]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:56:19 np0005486808 vigorous_bhabha[439220]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:56:19 np0005486808 vigorous_bhabha[439220]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:56:19 np0005486808 vigorous_bhabha[439220]:        "osd_id": 0,
Oct 14 05:56:19 np0005486808 vigorous_bhabha[439220]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:56:19 np0005486808 vigorous_bhabha[439220]:        "type": "bluestore"
Oct 14 05:56:19 np0005486808 vigorous_bhabha[439220]:    }
Oct 14 05:56:19 np0005486808 vigorous_bhabha[439220]: }
Oct 14 05:56:19 np0005486808 systemd[1]: libpod-2991077e458b6e57f6aa7b873ab3146091e46946133dcf540874fc911cf91bc2.scope: Deactivated successfully.
Oct 14 05:56:19 np0005486808 podman[439203]: 2025-10-14 09:56:19.677966167 +0000 UTC m=+1.304659723 container died 2991077e458b6e57f6aa7b873ab3146091e46946133dcf540874fc911cf91bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:56:19 np0005486808 systemd[1]: libpod-2991077e458b6e57f6aa7b873ab3146091e46946133dcf540874fc911cf91bc2.scope: Consumed 1.116s CPU time.
Oct 14 05:56:19 np0005486808 systemd[1]: var-lib-containers-storage-overlay-5cd8ac26044ad0f1f2cb07355611b140b7e731c43d09d45e4fb6f92ece216d80-merged.mount: Deactivated successfully.
Oct 14 05:56:19 np0005486808 podman[439203]: 2025-10-14 09:56:19.763788473 +0000 UTC m=+1.390481999 container remove 2991077e458b6e57f6aa7b873ab3146091e46946133dcf540874fc911cf91bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bhabha, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:56:19 np0005486808 systemd[1]: libpod-conmon-2991077e458b6e57f6aa7b873ab3146091e46946133dcf540874fc911cf91bc2.scope: Deactivated successfully.
Oct 14 05:56:19 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:56:19 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:56:19 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:56:19 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:56:19 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev b675b71b-1932-4ad1-ad0b-6d505662cff8 does not exist
Oct 14 05:56:19 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev aac4e4df-fe79-4d2b-8c03-ab9221ac66a7 does not exist
Oct 14 05:56:19 np0005486808 nova_compute[259627]: 2025-10-14 09:56:19.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:56:20 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:56:20 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:56:21 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3106: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:56:22 np0005486808 nova_compute[259627]: 2025-10-14 09:56:22.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:56:23 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3107: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:56:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:56:24 np0005486808 nova_compute[259627]: 2025-10-14 09:56:24.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:56:25 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3108: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:56:25 np0005486808 podman[439315]: 2025-10-14 09:56:25.698945473 +0000 UTC m=+0.099956343 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:56:25 np0005486808 podman[439314]: 2025-10-14 09:56:25.708286142 +0000 UTC m=+0.109359674 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 05:56:27 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3109: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:56:27 np0005486808 nova_compute[259627]: 2025-10-14 09:56:27.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:56:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:56:29 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3110: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:56:29 np0005486808 nova_compute[259627]: 2025-10-14 09:56:29.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:56:31 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3111: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:56:32 np0005486808 nova_compute[259627]: 2025-10-14 09:56:32.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:56:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:56:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:56:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:56:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:56:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:56:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:56:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:56:32
Oct 14 05:56:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:56:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:56:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'volumes', 'images', 'cephfs.cephfs.meta', 'vms', 'backups']
Oct 14 05:56:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:56:33 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3112: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:56:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:56:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:56:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:56:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:56:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:56:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:56:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:56:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:56:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:56:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:56:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:56:34 np0005486808 nova_compute[259627]: 2025-10-14 09:56:34.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:56:35 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3113: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:56:37 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3114: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:56:37 np0005486808 nova_compute[259627]: 2025-10-14 09:56:37.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:56:37 np0005486808 podman[439352]: 2025-10-14 09:56:37.695996994 +0000 UTC m=+0.093897805 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:56:37 np0005486808 podman[439351]: 2025-10-14 09:56:37.734616372 +0000 UTC m=+0.147262565 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 05:56:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:56:39 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3115: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:56:39 np0005486808 nova_compute[259627]: 2025-10-14 09:56:39.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:56:39 np0005486808 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 05:56:41 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3116: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:56:42 np0005486808 nova_compute[259627]: 2025-10-14 09:56:42.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:56:43 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3117: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:56:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:56:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:56:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:56:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:56:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:56:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 05:56:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:56:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:56:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:56:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:56:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:56:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 14 05:56:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:56:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:56:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:56:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:56:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:56:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:56:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:56:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:56:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:56:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:56:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:56:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:56:44 np0005486808 nova_compute[259627]: 2025-10-14 09:56:44.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:56:45 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3118: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:56:47 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3119: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:56:47 np0005486808 nova_compute[259627]: 2025-10-14 09:56:47.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:56:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:56:49 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3120: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:56:49 np0005486808 nova_compute[259627]: 2025-10-14 09:56:49.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:56:49 np0005486808 nova_compute[259627]: 2025-10-14 09:56:49.974 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:56:51 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3121: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:56:52 np0005486808 nova_compute[259627]: 2025-10-14 09:56:52.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:56:53 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3122: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:56:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:56:54 np0005486808 nova_compute[259627]: 2025-10-14 09:56:54.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:56:55 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3123: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:56:56 np0005486808 podman[439400]: 2025-10-14 09:56:56.664494115 +0000 UTC m=+0.074768495 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Oct 14 05:56:56 np0005486808 podman[439401]: 2025-10-14 09:56:56.688785591 +0000 UTC m=+0.085061148 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2)
Oct 14 05:56:57 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3124: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:56:57 np0005486808 nova_compute[259627]: 2025-10-14 09:56:57.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:56:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:56:59 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3125: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:56:59 np0005486808 nova_compute[259627]: 2025-10-14 09:56:59.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:57:01 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3126: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:57:02 np0005486808 nova_compute[259627]: 2025-10-14 09:57:02.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:57:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:57:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:57:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:57:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:57:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:57:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:57:03 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3127: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:57:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:57:04 np0005486808 nova_compute[259627]: 2025-10-14 09:57:04.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:57:05 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3128: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:57:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:57:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/131583838' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:57:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:57:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/131583838' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:57:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e302 do_prune osdmap full prune enabled
Oct 14 05:57:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e303 e303: 3 total, 3 up, 3 in
Oct 14 05:57:06 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e303: 3 total, 3 up, 3 in
Oct 14 05:57:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:57:07.073 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:57:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:57:07.073 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:57:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:57:07.074 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:57:07 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3130: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:57:07 np0005486808 nova_compute[259627]: 2025-10-14 09:57:07.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:57:07 np0005486808 nova_compute[259627]: 2025-10-14 09:57:07.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:57:07 np0005486808 nova_compute[259627]: 2025-10-14 09:57:07.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:57:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:57:08 np0005486808 podman[439442]: 2025-10-14 09:57:08.708005302 +0000 UTC m=+0.111541348 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, config_id=ovn_metadata_agent, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team)
Oct 14 05:57:08 np0005486808 podman[439441]: 2025-10-14 09:57:08.731669153 +0000 UTC m=+0.140225612 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller)
Oct 14 05:57:09 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3131: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 14 05:57:09 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e303 do_prune osdmap full prune enabled
Oct 14 05:57:09 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e304 e304: 3 total, 3 up, 3 in
Oct 14 05:57:09 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e304: 3 total, 3 up, 3 in
Oct 14 05:57:09 np0005486808 nova_compute[259627]: 2025-10-14 09:57:09.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:57:11 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3133: 305 pgs: 305 active+clean; 457 KiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Oct 14 05:57:12 np0005486808 nova_compute[259627]: 2025-10-14 09:57:12.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:57:13 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3134: 305 pgs: 305 active+clean; 457 KiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Oct 14 05:57:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:57:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e304 do_prune osdmap full prune enabled
Oct 14 05:57:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e305 e305: 3 total, 3 up, 3 in
Oct 14 05:57:13 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e305: 3 total, 3 up, 3 in
Oct 14 05:57:13 np0005486808 nova_compute[259627]: 2025-10-14 09:57:13.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:57:14 np0005486808 nova_compute[259627]: 2025-10-14 09:57:14.016 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:57:14 np0005486808 nova_compute[259627]: 2025-10-14 09:57:14.017 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:57:14 np0005486808 nova_compute[259627]: 2025-10-14 09:57:14.017 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:57:14 np0005486808 nova_compute[259627]: 2025-10-14 09:57:14.018 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:57:14 np0005486808 nova_compute[259627]: 2025-10-14 09:57:14.018 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:57:14 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e305 do_prune osdmap full prune enabled
Oct 14 05:57:14 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e306 e306: 3 total, 3 up, 3 in
Oct 14 05:57:14 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e306: 3 total, 3 up, 3 in
Oct 14 05:57:14 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:57:14 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/453903008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:57:14 np0005486808 nova_compute[259627]: 2025-10-14 09:57:14.517 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:57:14 np0005486808 nova_compute[259627]: 2025-10-14 09:57:14.705 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:57:14 np0005486808 nova_compute[259627]: 2025-10-14 09:57:14.706 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3602MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:57:14 np0005486808 nova_compute[259627]: 2025-10-14 09:57:14.706 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:57:14 np0005486808 nova_compute[259627]: 2025-10-14 09:57:14.706 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:57:14 np0005486808 nova_compute[259627]: 2025-10-14 09:57:14.891 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:57:14 np0005486808 nova_compute[259627]: 2025-10-14 09:57:14.892 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:57:14 np0005486808 nova_compute[259627]: 2025-10-14 09:57:14.928 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:57:14 np0005486808 nova_compute[259627]: 2025-10-14 09:57:14.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:57:15 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3137: 305 pgs: 305 active+clean; 457 KiB data, 992 MiB used, 59 GiB / 60 GiB avail; 72 KiB/s rd, 6.7 KiB/s wr, 101 op/s
Oct 14 05:57:15 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:57:15 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1719453830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:57:15 np0005486808 nova_compute[259627]: 2025-10-14 09:57:15.422 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 14 05:57:15 np0005486808 nova_compute[259627]: 2025-10-14 09:57:15.430 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 14 05:57:15 np0005486808 nova_compute[259627]: 2025-10-14 09:57:15.458 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 14 05:57:15 np0005486808 nova_compute[259627]: 2025-10-14 09:57:15.461 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 14 05:57:15 np0005486808 nova_compute[259627]: 2025-10-14 09:57:15.462 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 05:57:17 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3138: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 2.6 MiB/s wr, 78 op/s
Oct 14 05:57:17 np0005486808 nova_compute[259627]: 2025-10-14 09:57:17.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:57:17 np0005486808 nova_compute[259627]: 2025-10-14 09:57:17.458 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:57:17 np0005486808 nova_compute[259627]: 2025-10-14 09:57:17.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:57:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:57:18 np0005486808 nova_compute[259627]: 2025-10-14 09:57:18.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:57:18 np0005486808 nova_compute[259627]: 2025-10-14 09:57:18.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 05:57:18 np0005486808 nova_compute[259627]: 2025-10-14 09:57:18.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 05:57:19 np0005486808 nova_compute[259627]: 2025-10-14 09:57:18.999 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 05:57:19 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3139: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 2.6 MiB/s wr, 14 op/s
Oct 14 05:57:19 np0005486808 nova_compute[259627]: 2025-10-14 09:57:19.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:57:19 np0005486808 nova_compute[259627]: 2025-10-14 09:57:19.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:57:20 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:57:20 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:57:20 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:57:20 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:57:20 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:57:20 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:57:20 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev c7cffe8f-6af6-48b0-bdf8-c398278c0c20 does not exist
Oct 14 05:57:20 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 29994571-11d2-4cb4-81fd-d73946811fc0 does not exist
Oct 14 05:57:20 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 73deb0b1-1767-415a-8457-4dbc14f236ee does not exist
Oct 14 05:57:20 np0005486808 nova_compute[259627]: 2025-10-14 09:57:20.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:57:20 np0005486808 nova_compute[259627]: 2025-10-14 09:57:20.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:57:20 np0005486808 nova_compute[259627]: 2025-10-14 09:57:20.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 14 05:57:20 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:57:20 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:57:20 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:57:20 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:57:20 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:57:20 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:57:21 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3140: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 2.6 MiB/s wr, 14 op/s
Oct 14 05:57:21 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:57:21 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:57:21 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:57:21 np0005486808 podman[439801]: 2025-10-14 09:57:21.765942773 +0000 UTC m=+0.064616586 container create 2491bb4e0f09390cae5257715c5cbc0a375499ad29401721aef7955b9d853dfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 14 05:57:21 np0005486808 systemd[1]: Started libpod-conmon-2491bb4e0f09390cae5257715c5cbc0a375499ad29401721aef7955b9d853dfc.scope.
Oct 14 05:57:21 np0005486808 podman[439801]: 2025-10-14 09:57:21.73766571 +0000 UTC m=+0.036339583 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:57:21 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:57:21 np0005486808 podman[439801]: 2025-10-14 09:57:21.875352668 +0000 UTC m=+0.174026491 container init 2491bb4e0f09390cae5257715c5cbc0a375499ad29401721aef7955b9d853dfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_cartwright, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 14 05:57:21 np0005486808 podman[439801]: 2025-10-14 09:57:21.890893979 +0000 UTC m=+0.189567802 container start 2491bb4e0f09390cae5257715c5cbc0a375499ad29401721aef7955b9d853dfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_cartwright, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 14 05:57:21 np0005486808 podman[439801]: 2025-10-14 09:57:21.895840311 +0000 UTC m=+0.194514254 container attach 2491bb4e0f09390cae5257715c5cbc0a375499ad29401721aef7955b9d853dfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:57:21 np0005486808 condescending_cartwright[439818]: 167 167
Oct 14 05:57:21 np0005486808 systemd[1]: libpod-2491bb4e0f09390cae5257715c5cbc0a375499ad29401721aef7955b9d853dfc.scope: Deactivated successfully.
Oct 14 05:57:21 np0005486808 conmon[439818]: conmon 2491bb4e0f09390cae52 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2491bb4e0f09390cae5257715c5cbc0a375499ad29401721aef7955b9d853dfc.scope/container/memory.events
Oct 14 05:57:21 np0005486808 podman[439801]: 2025-10-14 09:57:21.900116156 +0000 UTC m=+0.198789979 container died 2491bb4e0f09390cae5257715c5cbc0a375499ad29401721aef7955b9d853dfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_cartwright, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 14 05:57:21 np0005486808 systemd[1]: var-lib-containers-storage-overlay-2c6a756aed0428c67fd63748e4483b2ea49dff39382edf33381dc37e2638435d-merged.mount: Deactivated successfully.
Oct 14 05:57:21 np0005486808 podman[439801]: 2025-10-14 09:57:21.95041932 +0000 UTC m=+0.249093103 container remove 2491bb4e0f09390cae5257715c5cbc0a375499ad29401721aef7955b9d853dfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 14 05:57:21 np0005486808 systemd[1]: libpod-conmon-2491bb4e0f09390cae5257715c5cbc0a375499ad29401721aef7955b9d853dfc.scope: Deactivated successfully.
Oct 14 05:57:22 np0005486808 podman[439841]: 2025-10-14 09:57:22.176735023 +0000 UTC m=+0.071152077 container create 2ccef8d353ed257d718c3bdb702f780f5f2439a26a432cce15ce1199e95f3c7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_spence, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:57:22 np0005486808 systemd[1]: Started libpod-conmon-2ccef8d353ed257d718c3bdb702f780f5f2439a26a432cce15ce1199e95f3c7a.scope.
Oct 14 05:57:22 np0005486808 podman[439841]: 2025-10-14 09:57:22.149075754 +0000 UTC m=+0.043492858 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:57:22 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:57:22 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a346f52c759656d97154a2b780f832eed177ae1e907110ae24cb56f904ade71f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:57:22 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a346f52c759656d97154a2b780f832eed177ae1e907110ae24cb56f904ade71f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:57:22 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a346f52c759656d97154a2b780f832eed177ae1e907110ae24cb56f904ade71f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:57:22 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a346f52c759656d97154a2b780f832eed177ae1e907110ae24cb56f904ade71f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:57:22 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a346f52c759656d97154a2b780f832eed177ae1e907110ae24cb56f904ade71f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:57:22 np0005486808 podman[439841]: 2025-10-14 09:57:22.283875972 +0000 UTC m=+0.178293076 container init 2ccef8d353ed257d718c3bdb702f780f5f2439a26a432cce15ce1199e95f3c7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_spence, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 14 05:57:22 np0005486808 podman[439841]: 2025-10-14 09:57:22.298353157 +0000 UTC m=+0.192770221 container start 2ccef8d353ed257d718c3bdb702f780f5f2439a26a432cce15ce1199e95f3c7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:57:22 np0005486808 podman[439841]: 2025-10-14 09:57:22.302564851 +0000 UTC m=+0.196981975 container attach 2ccef8d353ed257d718c3bdb702f780f5f2439a26a432cce15ce1199e95f3c7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_spence, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 14 05:57:22 np0005486808 nova_compute[259627]: 2025-10-14 09:57:22.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:57:23 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3141: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 7.9 KiB/s rd, 2.1 MiB/s wr, 12 op/s
Oct 14 05:57:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:57:23 np0005486808 hopeful_spence[439858]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:57:23 np0005486808 hopeful_spence[439858]: --> relative data size: 1.0
Oct 14 05:57:23 np0005486808 hopeful_spence[439858]: --> All data devices are unavailable
Oct 14 05:57:23 np0005486808 systemd[1]: libpod-2ccef8d353ed257d718c3bdb702f780f5f2439a26a432cce15ce1199e95f3c7a.scope: Deactivated successfully.
Oct 14 05:57:23 np0005486808 systemd[1]: libpod-2ccef8d353ed257d718c3bdb702f780f5f2439a26a432cce15ce1199e95f3c7a.scope: Consumed 1.110s CPU time.
Oct 14 05:57:23 np0005486808 podman[439841]: 2025-10-14 09:57:23.480383071 +0000 UTC m=+1.374800115 container died 2ccef8d353ed257d718c3bdb702f780f5f2439a26a432cce15ce1199e95f3c7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_spence, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 14 05:57:23 np0005486808 systemd[1]: var-lib-containers-storage-overlay-a346f52c759656d97154a2b780f832eed177ae1e907110ae24cb56f904ade71f-merged.mount: Deactivated successfully.
Oct 14 05:57:23 np0005486808 podman[439841]: 2025-10-14 09:57:23.542820543 +0000 UTC m=+1.437237577 container remove 2ccef8d353ed257d718c3bdb702f780f5f2439a26a432cce15ce1199e95f3c7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_spence, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 14 05:57:23 np0005486808 systemd[1]: libpod-conmon-2ccef8d353ed257d718c3bdb702f780f5f2439a26a432cce15ce1199e95f3c7a.scope: Deactivated successfully.
Oct 14 05:57:24 np0005486808 podman[440039]: 2025-10-14 09:57:24.283924997 +0000 UTC m=+0.046279167 container create f08992c6b70c5ba815299cc609b0c69fcf5c42b73831abc449bb74559496b087 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 14 05:57:24 np0005486808 systemd[1]: Started libpod-conmon-f08992c6b70c5ba815299cc609b0c69fcf5c42b73831abc449bb74559496b087.scope.
Oct 14 05:57:24 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:57:24 np0005486808 podman[440039]: 2025-10-14 09:57:24.344043222 +0000 UTC m=+0.106397392 container init f08992c6b70c5ba815299cc609b0c69fcf5c42b73831abc449bb74559496b087 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kapitsa, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 14 05:57:24 np0005486808 podman[440039]: 2025-10-14 09:57:24.349700151 +0000 UTC m=+0.112054321 container start f08992c6b70c5ba815299cc609b0c69fcf5c42b73831abc449bb74559496b087 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 14 05:57:24 np0005486808 quirky_kapitsa[440055]: 167 167
Oct 14 05:57:24 np0005486808 systemd[1]: libpod-f08992c6b70c5ba815299cc609b0c69fcf5c42b73831abc449bb74559496b087.scope: Deactivated successfully.
Oct 14 05:57:24 np0005486808 podman[440039]: 2025-10-14 09:57:24.355731439 +0000 UTC m=+0.118085629 container attach f08992c6b70c5ba815299cc609b0c69fcf5c42b73831abc449bb74559496b087 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kapitsa, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 14 05:57:24 np0005486808 podman[440039]: 2025-10-14 09:57:24.35617967 +0000 UTC m=+0.118533830 container died f08992c6b70c5ba815299cc609b0c69fcf5c42b73831abc449bb74559496b087 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kapitsa, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:57:24 np0005486808 podman[440039]: 2025-10-14 09:57:24.264657454 +0000 UTC m=+0.027011674 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:57:24 np0005486808 systemd[1]: var-lib-containers-storage-overlay-0b0d06c83ac7510af06f96c321939a266581e4fe9ce6b11f311aa99461a449db-merged.mount: Deactivated successfully.
Oct 14 05:57:24 np0005486808 podman[440039]: 2025-10-14 09:57:24.388070202 +0000 UTC m=+0.150424372 container remove f08992c6b70c5ba815299cc609b0c69fcf5c42b73831abc449bb74559496b087 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kapitsa, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:57:24 np0005486808 systemd[1]: libpod-conmon-f08992c6b70c5ba815299cc609b0c69fcf5c42b73831abc449bb74559496b087.scope: Deactivated successfully.
Oct 14 05:57:24 np0005486808 podman[440080]: 2025-10-14 09:57:24.61048593 +0000 UTC m=+0.069584248 container create f7648d6d3d153ecfa0406814477efe672e4d18fdd4501c3044a8a316a815fdaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 14 05:57:24 np0005486808 systemd[1]: Started libpod-conmon-f7648d6d3d153ecfa0406814477efe672e4d18fdd4501c3044a8a316a815fdaf.scope.
Oct 14 05:57:24 np0005486808 podman[440080]: 2025-10-14 09:57:24.580965476 +0000 UTC m=+0.040063854 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:57:24 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:57:24 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b92eb56fa3c8cf21152858e1085caceb106e1e2fdaa778ac5340e4e4ba1ecc0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:57:24 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b92eb56fa3c8cf21152858e1085caceb106e1e2fdaa778ac5340e4e4ba1ecc0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:57:24 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b92eb56fa3c8cf21152858e1085caceb106e1e2fdaa778ac5340e4e4ba1ecc0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:57:24 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b92eb56fa3c8cf21152858e1085caceb106e1e2fdaa778ac5340e4e4ba1ecc0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:57:24 np0005486808 podman[440080]: 2025-10-14 09:57:24.709995252 +0000 UTC m=+0.169093590 container init f7648d6d3d153ecfa0406814477efe672e4d18fdd4501c3044a8a316a815fdaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_curran, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:57:24 np0005486808 podman[440080]: 2025-10-14 09:57:24.723193235 +0000 UTC m=+0.182291563 container start f7648d6d3d153ecfa0406814477efe672e4d18fdd4501c3044a8a316a815fdaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 14 05:57:24 np0005486808 podman[440080]: 2025-10-14 09:57:24.726974438 +0000 UTC m=+0.186072816 container attach f7648d6d3d153ecfa0406814477efe672e4d18fdd4501c3044a8a316a815fdaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_curran, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:57:25 np0005486808 nova_compute[259627]: 2025-10-14 09:57:25.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:57:25 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3142: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 1.9 MiB/s wr, 11 op/s
Oct 14 05:57:25 np0005486808 jovial_curran[440097]: {
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:    "0": [
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:        {
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:            "devices": [
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:                "/dev/loop3"
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:            ],
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:            "lv_name": "ceph_lv0",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:            "lv_size": "21470642176",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:            "name": "ceph_lv0",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:            "tags": {
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:                "ceph.cluster_name": "ceph",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:                "ceph.crush_device_class": "",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:                "ceph.encrypted": "0",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:                "ceph.osd_id": "0",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:                "ceph.type": "block",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:                "ceph.vdo": "0"
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:            },
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:            "type": "block",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:            "vg_name": "ceph_vg0"
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:        }
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:    ],
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:    "1": [
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:        {
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:            "devices": [
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:                "/dev/loop4"
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:            ],
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:            "lv_name": "ceph_lv1",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:            "lv_size": "21470642176",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:            "name": "ceph_lv1",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:            "tags": {
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:                "ceph.cluster_name": "ceph",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:                "ceph.crush_device_class": "",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:                "ceph.encrypted": "0",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:                "ceph.osd_id": "1",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:                "ceph.type": "block",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:                "ceph.vdo": "0"
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:            },
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:            "type": "block",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:            "vg_name": "ceph_vg1"
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:        }
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:    ],
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:    "2": [
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:        {
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:            "devices": [
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:                "/dev/loop5"
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:            ],
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:            "lv_name": "ceph_lv2",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:            "lv_size": "21470642176",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:            "name": "ceph_lv2",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:            "tags": {
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:                "ceph.cluster_name": "ceph",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:                "ceph.crush_device_class": "",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:                "ceph.encrypted": "0",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:                "ceph.osd_id": "2",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:                "ceph.type": "block",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:                "ceph.vdo": "0"
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:            },
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:            "type": "block",
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:            "vg_name": "ceph_vg2"
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:        }
Oct 14 05:57:25 np0005486808 jovial_curran[440097]:    ]
Oct 14 05:57:25 np0005486808 jovial_curran[440097]: }
Oct 14 05:57:25 np0005486808 systemd[1]: libpod-f7648d6d3d153ecfa0406814477efe672e4d18fdd4501c3044a8a316a815fdaf.scope: Deactivated successfully.
Oct 14 05:57:25 np0005486808 podman[440080]: 2025-10-14 09:57:25.506396063 +0000 UTC m=+0.965494381 container died f7648d6d3d153ecfa0406814477efe672e4d18fdd4501c3044a8a316a815fdaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_curran, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:57:25 np0005486808 systemd[1]: var-lib-containers-storage-overlay-7b92eb56fa3c8cf21152858e1085caceb106e1e2fdaa778ac5340e4e4ba1ecc0-merged.mount: Deactivated successfully.
Oct 14 05:57:25 np0005486808 podman[440080]: 2025-10-14 09:57:25.568705742 +0000 UTC m=+1.027804030 container remove f7648d6d3d153ecfa0406814477efe672e4d18fdd4501c3044a8a316a815fdaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_curran, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:57:25 np0005486808 systemd[1]: libpod-conmon-f7648d6d3d153ecfa0406814477efe672e4d18fdd4501c3044a8a316a815fdaf.scope: Deactivated successfully.
Oct 14 05:57:26 np0005486808 podman[440262]: 2025-10-14 09:57:26.382995703 +0000 UTC m=+0.059082541 container create 5608f8d03679b97342c1bb9e8ce9e1f2104ebb288cc887341d412bd3cb0b907e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:57:26 np0005486808 systemd[1]: Started libpod-conmon-5608f8d03679b97342c1bb9e8ce9e1f2104ebb288cc887341d412bd3cb0b907e.scope.
Oct 14 05:57:26 np0005486808 podman[440262]: 2025-10-14 09:57:26.353706074 +0000 UTC m=+0.029792962 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:57:26 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:57:26 np0005486808 podman[440262]: 2025-10-14 09:57:26.487058826 +0000 UTC m=+0.163145704 container init 5608f8d03679b97342c1bb9e8ce9e1f2104ebb288cc887341d412bd3cb0b907e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:57:26 np0005486808 podman[440262]: 2025-10-14 09:57:26.498693842 +0000 UTC m=+0.174780650 container start 5608f8d03679b97342c1bb9e8ce9e1f2104ebb288cc887341d412bd3cb0b907e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 14 05:57:26 np0005486808 podman[440262]: 2025-10-14 09:57:26.502615758 +0000 UTC m=+0.178702636 container attach 5608f8d03679b97342c1bb9e8ce9e1f2104ebb288cc887341d412bd3cb0b907e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mestorf, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 05:57:26 np0005486808 brave_mestorf[440279]: 167 167
Oct 14 05:57:26 np0005486808 systemd[1]: libpod-5608f8d03679b97342c1bb9e8ce9e1f2104ebb288cc887341d412bd3cb0b907e.scope: Deactivated successfully.
Oct 14 05:57:26 np0005486808 podman[440262]: 2025-10-14 09:57:26.50475347 +0000 UTC m=+0.180840328 container died 5608f8d03679b97342c1bb9e8ce9e1f2104ebb288cc887341d412bd3cb0b907e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 05:57:26 np0005486808 systemd[1]: var-lib-containers-storage-overlay-45ba3f2c0fd4b4883e4cbd62c17d5927abed22d17bc5a3ba03f2be6c4671a8b8-merged.mount: Deactivated successfully.
Oct 14 05:57:26 np0005486808 podman[440262]: 2025-10-14 09:57:26.56219733 +0000 UTC m=+0.238284158 container remove 5608f8d03679b97342c1bb9e8ce9e1f2104ebb288cc887341d412bd3cb0b907e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mestorf, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 14 05:57:26 np0005486808 systemd[1]: libpod-conmon-5608f8d03679b97342c1bb9e8ce9e1f2104ebb288cc887341d412bd3cb0b907e.scope: Deactivated successfully.
Oct 14 05:57:26 np0005486808 podman[440306]: 2025-10-14 09:57:26.816722395 +0000 UTC m=+0.078202530 container create 0b20949c878e193099f0f2668227bdb21f4188337351608d27905fb2d35e9af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 14 05:57:26 np0005486808 systemd[1]: Started libpod-conmon-0b20949c878e193099f0f2668227bdb21f4188337351608d27905fb2d35e9af1.scope.
Oct 14 05:57:26 np0005486808 podman[440306]: 2025-10-14 09:57:26.786296778 +0000 UTC m=+0.047776973 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:57:26 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:57:26 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29fb9b0ce0cadeb77c909dd2446ced2088593fcf36be74b85951c6c76607413b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:57:26 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29fb9b0ce0cadeb77c909dd2446ced2088593fcf36be74b85951c6c76607413b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:57:26 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29fb9b0ce0cadeb77c909dd2446ced2088593fcf36be74b85951c6c76607413b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:57:26 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29fb9b0ce0cadeb77c909dd2446ced2088593fcf36be74b85951c6c76607413b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:57:26 np0005486808 podman[440306]: 2025-10-14 09:57:26.941888054 +0000 UTC m=+0.203368189 container init 0b20949c878e193099f0f2668227bdb21f4188337351608d27905fb2d35e9af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kalam, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 14 05:57:26 np0005486808 podman[440306]: 2025-10-14 09:57:26.949940611 +0000 UTC m=+0.211420716 container start 0b20949c878e193099f0f2668227bdb21f4188337351608d27905fb2d35e9af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 05:57:26 np0005486808 podman[440306]: 2025-10-14 09:57:26.95353661 +0000 UTC m=+0.215016795 container attach 0b20949c878e193099f0f2668227bdb21f4188337351608d27905fb2d35e9af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 14 05:57:26 np0005486808 podman[440323]: 2025-10-14 09:57:26.981208799 +0000 UTC m=+0.114590441 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:57:27 np0005486808 podman[440320]: 2025-10-14 09:57:27.0203348 +0000 UTC m=+0.148132555 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 14 05:57:27 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3143: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 597 B/s rd, 1.7 MiB/s wr, 1 op/s
Oct 14 05:57:27 np0005486808 nova_compute[259627]: 2025-10-14 09:57:27.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:57:28 np0005486808 optimistic_kalam[440324]: {
Oct 14 05:57:28 np0005486808 optimistic_kalam[440324]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:57:28 np0005486808 optimistic_kalam[440324]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:57:28 np0005486808 optimistic_kalam[440324]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:57:28 np0005486808 optimistic_kalam[440324]:        "osd_id": 2,
Oct 14 05:57:28 np0005486808 optimistic_kalam[440324]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:57:28 np0005486808 optimistic_kalam[440324]:        "type": "bluestore"
Oct 14 05:57:28 np0005486808 optimistic_kalam[440324]:    },
Oct 14 05:57:28 np0005486808 optimistic_kalam[440324]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:57:28 np0005486808 optimistic_kalam[440324]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:57:28 np0005486808 optimistic_kalam[440324]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:57:28 np0005486808 optimistic_kalam[440324]:        "osd_id": 1,
Oct 14 05:57:28 np0005486808 optimistic_kalam[440324]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:57:28 np0005486808 optimistic_kalam[440324]:        "type": "bluestore"
Oct 14 05:57:28 np0005486808 optimistic_kalam[440324]:    },
Oct 14 05:57:28 np0005486808 optimistic_kalam[440324]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:57:28 np0005486808 optimistic_kalam[440324]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:57:28 np0005486808 optimistic_kalam[440324]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:57:28 np0005486808 optimistic_kalam[440324]:        "osd_id": 0,
Oct 14 05:57:28 np0005486808 optimistic_kalam[440324]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:57:28 np0005486808 optimistic_kalam[440324]:        "type": "bluestore"
Oct 14 05:57:28 np0005486808 optimistic_kalam[440324]:    }
Oct 14 05:57:28 np0005486808 optimistic_kalam[440324]: }
Oct 14 05:57:28 np0005486808 systemd[1]: libpod-0b20949c878e193099f0f2668227bdb21f4188337351608d27905fb2d35e9af1.scope: Deactivated successfully.
Oct 14 05:57:28 np0005486808 podman[440306]: 2025-10-14 09:57:28.080713609 +0000 UTC m=+1.342193724 container died 0b20949c878e193099f0f2668227bdb21f4188337351608d27905fb2d35e9af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kalam, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 05:57:28 np0005486808 systemd[1]: libpod-0b20949c878e193099f0f2668227bdb21f4188337351608d27905fb2d35e9af1.scope: Consumed 1.131s CPU time.
Oct 14 05:57:28 np0005486808 systemd[1]: var-lib-containers-storage-overlay-29fb9b0ce0cadeb77c909dd2446ced2088593fcf36be74b85951c6c76607413b-merged.mount: Deactivated successfully.
Oct 14 05:57:28 np0005486808 podman[440306]: 2025-10-14 09:57:28.158820505 +0000 UTC m=+1.420300650 container remove 0b20949c878e193099f0f2668227bdb21f4188337351608d27905fb2d35e9af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 14 05:57:28 np0005486808 systemd[1]: libpod-conmon-0b20949c878e193099f0f2668227bdb21f4188337351608d27905fb2d35e9af1.scope: Deactivated successfully.
Oct 14 05:57:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:57:28 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:57:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:57:28 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:57:28 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 5bb3f34a-adc8-4e03-9fce-6e85a96b6fde does not exist
Oct 14 05:57:28 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 13ece194-3953-4f17-85d7-5b3caf0ae5b8 does not exist
Oct 14 05:57:28 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:57:28 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:57:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:57:29 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3144: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:57:30 np0005486808 nova_compute[259627]: 2025-10-14 09:57:30.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:57:30 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #153. Immutable memtables: 0.
Oct 14 05:57:30 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:57:30.271626) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 05:57:30 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 153
Oct 14 05:57:30 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435850271718, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 1658, "num_deletes": 254, "total_data_size": 2707493, "memory_usage": 2747392, "flush_reason": "Manual Compaction"}
Oct 14 05:57:30 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #154: started
Oct 14 05:57:30 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435850290290, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 154, "file_size": 2626859, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 64113, "largest_seqno": 65770, "table_properties": {"data_size": 2619133, "index_size": 4668, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16003, "raw_average_key_size": 20, "raw_value_size": 2603579, "raw_average_value_size": 3287, "num_data_blocks": 209, "num_entries": 792, "num_filter_entries": 792, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760435684, "oldest_key_time": 1760435684, "file_creation_time": 1760435850, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 154, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:57:30 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 18716 microseconds, and 12177 cpu microseconds.
Oct 14 05:57:30 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:57:30 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:57:30.290353) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #154: 2626859 bytes OK
Oct 14 05:57:30 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:57:30.290380) [db/memtable_list.cc:519] [default] Level-0 commit table #154 started
Oct 14 05:57:30 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:57:30.292158) [db/memtable_list.cc:722] [default] Level-0 commit table #154: memtable #1 done
Oct 14 05:57:30 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:57:30.292181) EVENT_LOG_v1 {"time_micros": 1760435850292174, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 05:57:30 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:57:30.292206) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 05:57:30 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 2700289, prev total WAL file size 2700289, number of live WAL files 2.
Oct 14 05:57:30 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000150.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:57:30 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:57:30.293637) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Oct 14 05:57:30 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 05:57:30 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [154(2565KB)], [152(8683KB)]
Oct 14 05:57:30 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435850293686, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [154], "files_L6": [152], "score": -1, "input_data_size": 11519111, "oldest_snapshot_seqno": -1}
Oct 14 05:57:30 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #155: 8407 keys, 9767274 bytes, temperature: kUnknown
Oct 14 05:57:30 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435850343961, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 155, "file_size": 9767274, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9714279, "index_size": 30866, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21061, "raw_key_size": 219806, "raw_average_key_size": 26, "raw_value_size": 9567259, "raw_average_value_size": 1138, "num_data_blocks": 1195, "num_entries": 8407, "num_filter_entries": 8407, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760435850, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:57:30 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:57:30 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:57:30.344469) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 9767274 bytes
Oct 14 05:57:30 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:57:30.347330) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 227.9 rd, 193.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 8.5 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(8.1) write-amplify(3.7) OK, records in: 8930, records dropped: 523 output_compression: NoCompression
Oct 14 05:57:30 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:57:30.347370) EVENT_LOG_v1 {"time_micros": 1760435850347354, "job": 94, "event": "compaction_finished", "compaction_time_micros": 50536, "compaction_time_cpu_micros": 32624, "output_level": 6, "num_output_files": 1, "total_output_size": 9767274, "num_input_records": 8930, "num_output_records": 8407, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 05:57:30 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000154.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:57:30 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435850348354, "job": 94, "event": "table_file_deletion", "file_number": 154}
Oct 14 05:57:30 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:57:30 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435850350362, "job": 94, "event": "table_file_deletion", "file_number": 152}
Oct 14 05:57:30 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:57:30.293509) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:57:30 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:57:30.350427) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:57:30 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:57:30.350433) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:57:30 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:57:30.350435) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:57:30 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:57:30.350437) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:57:30 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:57:30.350439) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:57:31 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3145: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:57:32 np0005486808 nova_compute[259627]: 2025-10-14 09:57:32.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:57:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:57:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:57:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:57:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:57:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:57:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:57:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:57:32
Oct 14 05:57:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:57:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:57:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'images', 'default.rgw.meta', 'backups']
Oct 14 05:57:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:57:33 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3146: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:57:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:57:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:57:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:57:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:57:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:57:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:57:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:57:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:57:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:57:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:57:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:57:35 np0005486808 nova_compute[259627]: 2025-10-14 09:57:35.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:57:35 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3147: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:57:37 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3148: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:57:37 np0005486808 nova_compute[259627]: 2025-10-14 09:57:37.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:57:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:57:39 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3149: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:57:39 np0005486808 podman[440454]: 2025-10-14 09:57:39.665343972 +0000 UTC m=+0.059083891 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 05:57:39 np0005486808 podman[440453]: 2025-10-14 09:57:39.687861144 +0000 UTC m=+0.090756967 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible)
Oct 14 05:57:40 np0005486808 nova_compute[259627]: 2025-10-14 09:57:40.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:57:41 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3150: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:57:42 np0005486808 nova_compute[259627]: 2025-10-14 09:57:42.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:57:43 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3151: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:57:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:57:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:57:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:57:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:57:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:57:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 05:57:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:57:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:57:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:57:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:57:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:57:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00033308812756397733 of space, bias 1.0, pg target 0.0999264382691932 quantized to 32 (current 32)
Oct 14 05:57:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:57:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:57:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:57:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:57:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:57:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:57:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:57:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:57:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:57:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:57:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:57:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:57:45 np0005486808 nova_compute[259627]: 2025-10-14 09:57:45.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:57:45 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3152: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:57:47 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3153: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:57:47 np0005486808 nova_compute[259627]: 2025-10-14 09:57:47.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:57:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:57:49 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3154: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:57:50 np0005486808 nova_compute[259627]: 2025-10-14 09:57:50.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:57:51 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3155: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:57:52 np0005486808 nova_compute[259627]: 2025-10-14 09:57:52.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:57:53 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3156: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:57:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:57:55 np0005486808 nova_compute[259627]: 2025-10-14 09:57:55.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:57:55 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3157: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:57:57 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3158: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:57:57 np0005486808 nova_compute[259627]: 2025-10-14 09:57:57.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:57:57 np0005486808 podman[440498]: 2025-10-14 09:57:57.680618694 +0000 UTC m=+0.090094771 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 14 05:57:57 np0005486808 podman[440497]: 2025-10-14 09:57:57.683041214 +0000 UTC m=+0.090368919 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 05:57:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:57:59 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3159: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:58:00 np0005486808 nova_compute[259627]: 2025-10-14 09:58:00.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:58:01 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3160: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:58:02 np0005486808 nova_compute[259627]: 2025-10-14 09:58:02.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:58:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:58:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:58:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:58:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:58:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:58:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:58:03 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3161: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:58:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:58:05 np0005486808 nova_compute[259627]: 2025-10-14 09:58:05.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:58:05 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3162: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:58:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:58:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3522386918' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:58:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:58:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3522386918' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:58:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:58:07.074 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:58:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:58:07.074 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:58:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:58:07.075 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:58:07 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3163: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:58:07 np0005486808 nova_compute[259627]: 2025-10-14 09:58:07.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:58:07 np0005486808 nova_compute[259627]: 2025-10-14 09:58:07.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:58:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:58:08 np0005486808 nova_compute[259627]: 2025-10-14 09:58:08.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:58:09 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3164: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:58:10 np0005486808 nova_compute[259627]: 2025-10-14 09:58:10.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:58:10 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e306 do_prune osdmap full prune enabled
Oct 14 05:58:10 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e307 e307: 3 total, 3 up, 3 in
Oct 14 05:58:10 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e307: 3 total, 3 up, 3 in
Oct 14 05:58:10 np0005486808 podman[440539]: 2025-10-14 09:58:10.677625541 +0000 UTC m=+0.076215651 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Oct 14 05:58:10 np0005486808 podman[440538]: 2025-10-14 09:58:10.68978574 +0000 UTC m=+0.104675760 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=ovn_controller)
Oct 14 05:58:11 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3166: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 6.9 KiB/s rd, 614 B/s wr, 9 op/s
Oct 14 05:58:12 np0005486808 nova_compute[259627]: 2025-10-14 09:58:12.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:58:13 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3167: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 6.9 KiB/s rd, 614 B/s wr, 9 op/s
Oct 14 05:58:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:58:13 np0005486808 nova_compute[259627]: 2025-10-14 09:58:13.976 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:58:14 np0005486808 nova_compute[259627]: 2025-10-14 09:58:14.004 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:58:14 np0005486808 nova_compute[259627]: 2025-10-14 09:58:14.004 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:58:14 np0005486808 nova_compute[259627]: 2025-10-14 09:58:14.004 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:58:14 np0005486808 nova_compute[259627]: 2025-10-14 09:58:14.005 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:58:14 np0005486808 nova_compute[259627]: 2025-10-14 09:58:14.005 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:58:14 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:58:14 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1141861825' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:58:14 np0005486808 nova_compute[259627]: 2025-10-14 09:58:14.463 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:58:14 np0005486808 nova_compute[259627]: 2025-10-14 09:58:14.674 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:58:14 np0005486808 nova_compute[259627]: 2025-10-14 09:58:14.675 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3614MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:58:14 np0005486808 nova_compute[259627]: 2025-10-14 09:58:14.675 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:58:14 np0005486808 nova_compute[259627]: 2025-10-14 09:58:14.675 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:58:14 np0005486808 nova_compute[259627]: 2025-10-14 09:58:14.766 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:58:14 np0005486808 nova_compute[259627]: 2025-10-14 09:58:14.767 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:58:15 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3168: 305 pgs: 305 active+clean; 21 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 14 05:58:15 np0005486808 nova_compute[259627]: 2025-10-14 09:58:15.203 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:58:15 np0005486808 nova_compute[259627]: 2025-10-14 09:58:15.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:58:15 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:58:15 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1805237291' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:58:15 np0005486808 nova_compute[259627]: 2025-10-14 09:58:15.664 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:58:15 np0005486808 nova_compute[259627]: 2025-10-14 09:58:15.670 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:58:15 np0005486808 nova_compute[259627]: 2025-10-14 09:58:15.695 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:58:15 np0005486808 nova_compute[259627]: 2025-10-14 09:58:15.696 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:58:15 np0005486808 nova_compute[259627]: 2025-10-14 09:58:15.697 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.021s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:58:17 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3169: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct 14 05:58:17 np0005486808 nova_compute[259627]: 2025-10-14 09:58:17.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:58:17 np0005486808 nova_compute[259627]: 2025-10-14 09:58:17.693 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:58:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:58:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e307 do_prune osdmap full prune enabled
Oct 14 05:58:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 e308: 3 total, 3 up, 3 in
Oct 14 05:58:18 np0005486808 ceph-mon[74249]: log_channel(cluster) log [DBG] : osdmap e308: 3 total, 3 up, 3 in
Oct 14 05:58:19 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3171: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 918 B/s wr, 18 op/s
Oct 14 05:58:19 np0005486808 nova_compute[259627]: 2025-10-14 09:58:19.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:58:20 np0005486808 nova_compute[259627]: 2025-10-14 09:58:20.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:58:20 np0005486808 nova_compute[259627]: 2025-10-14 09:58:20.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:58:20 np0005486808 nova_compute[259627]: 2025-10-14 09:58:20.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 05:58:20 np0005486808 nova_compute[259627]: 2025-10-14 09:58:20.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 05:58:21 np0005486808 nova_compute[259627]: 2025-10-14 09:58:21.006 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 14 05:58:21 np0005486808 nova_compute[259627]: 2025-10-14 09:58:21.006 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:58:21 np0005486808 nova_compute[259627]: 2025-10-14 09:58:21.007 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:58:21 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3172: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 818 B/s wr, 15 op/s
Oct 14 05:58:21 np0005486808 nova_compute[259627]: 2025-10-14 09:58:21.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:58:21 np0005486808 nova_compute[259627]: 2025-10-14 09:58:21.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:58:22 np0005486808 nova_compute[259627]: 2025-10-14 09:58:22.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:58:23 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3173: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 818 B/s wr, 15 op/s
Oct 14 05:58:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:58:25 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3174: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 409 B/s rd, 0 B/s wr, 0 op/s
Oct 14 05:58:25 np0005486808 nova_compute[259627]: 2025-10-14 09:58:25.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:58:27 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3175: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:58:27 np0005486808 nova_compute[259627]: 2025-10-14 09:58:27.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:58:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:58:28 np0005486808 podman[440648]: 2025-10-14 09:58:28.634796915 +0000 UTC m=+0.069799204 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 14 05:58:28 np0005486808 podman[440649]: 2025-10-14 09:58:28.658257551 +0000 UTC m=+0.092735757 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 14 05:58:29 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3176: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:58:29 np0005486808 podman[440838]: 2025-10-14 09:58:29.224174277 +0000 UTC m=+0.066518313 container exec c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:58:29 np0005486808 podman[440838]: 2025-10-14 09:58:29.328365273 +0000 UTC m=+0.170709259 container exec_died c954a7df2f1a6d24a4e1b5cb811f1f8d1d148c63159cc1681da56eb4bf63550f (image=quay.io/ceph/ceph:v18, name=ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 14 05:58:30 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:58:30 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:58:30 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:58:30 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:58:30 np0005486808 nova_compute[259627]: 2025-10-14 09:58:30.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:58:30 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:58:30 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:58:31 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:58:31 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:58:31 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 05:58:31 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:58:31 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 05:58:31 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:58:31 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev c206ad64-4dfa-4b4a-a411-ac7855641e45 does not exist
Oct 14 05:58:31 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev d6203163-cc42-41c0-bba5-fbc9133f438e does not exist
Oct 14 05:58:31 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev cf1755bd-d123-49f0-b7ab-020332432cbf does not exist
Oct 14 05:58:31 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 05:58:31 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 05:58:31 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 05:58:31 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:58:31 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:58:31 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:58:31 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3177: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:58:31 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 05:58:31 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:58:31 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 05:58:31 np0005486808 podman[441274]: 2025-10-14 09:58:31.899487761 +0000 UTC m=+0.052859627 container create 4ebc1753df9f3a3e0868e742d71fa157d2e25c6ff01e5a13d0e42bff14886186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_pike, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Oct 14 05:58:31 np0005486808 systemd[1]: Started libpod-conmon-4ebc1753df9f3a3e0868e742d71fa157d2e25c6ff01e5a13d0e42bff14886186.scope.
Oct 14 05:58:31 np0005486808 podman[441274]: 2025-10-14 09:58:31.873796852 +0000 UTC m=+0.027168748 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:58:31 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:58:32 np0005486808 podman[441274]: 2025-10-14 09:58:32.000849518 +0000 UTC m=+0.154221424 container init 4ebc1753df9f3a3e0868e742d71fa157d2e25c6ff01e5a13d0e42bff14886186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_pike, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:58:32 np0005486808 podman[441274]: 2025-10-14 09:58:32.009525171 +0000 UTC m=+0.162897037 container start 4ebc1753df9f3a3e0868e742d71fa157d2e25c6ff01e5a13d0e42bff14886186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_pike, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 14 05:58:32 np0005486808 podman[441274]: 2025-10-14 09:58:32.013059378 +0000 UTC m=+0.166431324 container attach 4ebc1753df9f3a3e0868e742d71fa157d2e25c6ff01e5a13d0e42bff14886186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 14 05:58:32 np0005486808 dazzling_pike[441290]: 167 167
Oct 14 05:58:32 np0005486808 systemd[1]: libpod-4ebc1753df9f3a3e0868e742d71fa157d2e25c6ff01e5a13d0e42bff14886186.scope: Deactivated successfully.
Oct 14 05:58:32 np0005486808 podman[441274]: 2025-10-14 09:58:32.015442576 +0000 UTC m=+0.168814452 container died 4ebc1753df9f3a3e0868e742d71fa157d2e25c6ff01e5a13d0e42bff14886186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_pike, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Oct 14 05:58:32 np0005486808 systemd[1]: var-lib-containers-storage-overlay-2446c15a418f30e6087c8e572ae5e5c8fd00dbb4f1e35774b25e0121349fdfdd-merged.mount: Deactivated successfully.
Oct 14 05:58:32 np0005486808 podman[441274]: 2025-10-14 09:58:32.065584716 +0000 UTC m=+0.218956612 container remove 4ebc1753df9f3a3e0868e742d71fa157d2e25c6ff01e5a13d0e42bff14886186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 14 05:58:32 np0005486808 systemd[1]: libpod-conmon-4ebc1753df9f3a3e0868e742d71fa157d2e25c6ff01e5a13d0e42bff14886186.scope: Deactivated successfully.
Oct 14 05:58:32 np0005486808 podman[441316]: 2025-10-14 09:58:32.246461785 +0000 UTC m=+0.045664872 container create 45a33e027950c8263c10addadb74623bdd64c3c7bf027ac731aa03945a61d18a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 14 05:58:32 np0005486808 systemd[1]: Started libpod-conmon-45a33e027950c8263c10addadb74623bdd64c3c7bf027ac731aa03945a61d18a.scope.
Oct 14 05:58:32 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:58:32 np0005486808 podman[441316]: 2025-10-14 09:58:32.231122348 +0000 UTC m=+0.030325465 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:58:32 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/211cbd54009f2022b89233fc8e481232e770c4e61b79fd8866f73f5dac2c1ce2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:58:32 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/211cbd54009f2022b89233fc8e481232e770c4e61b79fd8866f73f5dac2c1ce2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:58:32 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/211cbd54009f2022b89233fc8e481232e770c4e61b79fd8866f73f5dac2c1ce2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:58:32 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/211cbd54009f2022b89233fc8e481232e770c4e61b79fd8866f73f5dac2c1ce2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:58:32 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/211cbd54009f2022b89233fc8e481232e770c4e61b79fd8866f73f5dac2c1ce2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 05:58:32 np0005486808 podman[441316]: 2025-10-14 09:58:32.348641992 +0000 UTC m=+0.147845109 container init 45a33e027950c8263c10addadb74623bdd64c3c7bf027ac731aa03945a61d18a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:58:32 np0005486808 podman[441316]: 2025-10-14 09:58:32.364739927 +0000 UTC m=+0.163943014 container start 45a33e027950c8263c10addadb74623bdd64c3c7bf027ac731aa03945a61d18a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chaplygin, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 14 05:58:32 np0005486808 podman[441316]: 2025-10-14 09:58:32.369975785 +0000 UTC m=+0.169178892 container attach 45a33e027950c8263c10addadb74623bdd64c3c7bf027ac731aa03945a61d18a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chaplygin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:58:32 np0005486808 nova_compute[259627]: 2025-10-14 09:58:32.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:58:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:58:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:58:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:58:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:58:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:58:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:58:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_09:58:32
Oct 14 05:58:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 05:58:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 05:58:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['vms', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', 'images', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'backups']
Oct 14 05:58:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 05:58:33 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3178: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:58:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:58:33 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #156. Immutable memtables: 0.
Oct 14 05:58:33 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:58:33.432450) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 05:58:33 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 156
Oct 14 05:58:33 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435913432510, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 778, "num_deletes": 257, "total_data_size": 963838, "memory_usage": 979704, "flush_reason": "Manual Compaction"}
Oct 14 05:58:33 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #157: started
Oct 14 05:58:33 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435913442985, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 157, "file_size": 954724, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65771, "largest_seqno": 66548, "table_properties": {"data_size": 950682, "index_size": 1757, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 8899, "raw_average_key_size": 19, "raw_value_size": 942495, "raw_average_value_size": 2031, "num_data_blocks": 78, "num_entries": 464, "num_filter_entries": 464, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760435851, "oldest_key_time": 1760435851, "file_creation_time": 1760435913, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 157, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:58:33 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 10616 microseconds, and 6426 cpu microseconds.
Oct 14 05:58:33 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:58:33 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:58:33.443068) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #157: 954724 bytes OK
Oct 14 05:58:33 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:58:33.443094) [db/memtable_list.cc:519] [default] Level-0 commit table #157 started
Oct 14 05:58:33 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:58:33.445783) [db/memtable_list.cc:722] [default] Level-0 commit table #157: memtable #1 done
Oct 14 05:58:33 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:58:33.445798) EVENT_LOG_v1 {"time_micros": 1760435913445793, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 05:58:33 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:58:33.445818) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 05:58:33 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 959887, prev total WAL file size 986375, number of live WAL files 2.
Oct 14 05:58:33 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000153.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:58:33 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:58:33.446474) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373634' seq:72057594037927935, type:22 .. '6C6F676D0033303136' seq:0, type:0; will stop at (end)
Oct 14 05:58:33 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 05:58:33 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [157(932KB)], [155(9538KB)]
Oct 14 05:58:33 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435913446527, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [157], "files_L6": [155], "score": -1, "input_data_size": 10721998, "oldest_snapshot_seqno": -1}
Oct 14 05:58:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 05:58:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 05:58:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:58:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 05:58:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:58:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 05:58:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:58:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 05:58:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:58:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 05:58:33 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #158: 8342 keys, 10606840 bytes, temperature: kUnknown
Oct 14 05:58:33 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435913513374, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 158, "file_size": 10606840, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10552825, "index_size": 32078, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20869, "raw_key_size": 219383, "raw_average_key_size": 26, "raw_value_size": 10405387, "raw_average_value_size": 1247, "num_data_blocks": 1245, "num_entries": 8342, "num_filter_entries": 8342, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760435913, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Oct 14 05:58:33 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 05:58:33 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:58:33.513671) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 10606840 bytes
Oct 14 05:58:33 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:58:33.515114) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 160.2 rd, 158.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 9.3 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(22.3) write-amplify(11.1) OK, records in: 8871, records dropped: 529 output_compression: NoCompression
Oct 14 05:58:33 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:58:33.515143) EVENT_LOG_v1 {"time_micros": 1760435913515131, "job": 96, "event": "compaction_finished", "compaction_time_micros": 66920, "compaction_time_cpu_micros": 41599, "output_level": 6, "num_output_files": 1, "total_output_size": 10606840, "num_input_records": 8871, "num_output_records": 8342, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 05:58:33 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000157.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:58:33 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435913515587, "job": 96, "event": "table_file_deletion", "file_number": 157}
Oct 14 05:58:33 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 05:58:33 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760435913518831, "job": 96, "event": "table_file_deletion", "file_number": 155}
Oct 14 05:58:33 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:58:33.446378) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:58:33 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:58:33.518876) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:58:33 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:58:33.518884) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:58:33 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:58:33.518887) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:58:33 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:58:33.518889) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:58:33 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-09:58:33.518892) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 05:58:33 np0005486808 crazy_chaplygin[441333]: --> passed data devices: 0 physical, 3 LVM
Oct 14 05:58:33 np0005486808 crazy_chaplygin[441333]: --> relative data size: 1.0
Oct 14 05:58:33 np0005486808 crazy_chaplygin[441333]: --> All data devices are unavailable
Oct 14 05:58:33 np0005486808 systemd[1]: libpod-45a33e027950c8263c10addadb74623bdd64c3c7bf027ac731aa03945a61d18a.scope: Deactivated successfully.
Oct 14 05:58:33 np0005486808 systemd[1]: libpod-45a33e027950c8263c10addadb74623bdd64c3c7bf027ac731aa03945a61d18a.scope: Consumed 1.152s CPU time.
Oct 14 05:58:33 np0005486808 podman[441316]: 2025-10-14 09:58:33.618342417 +0000 UTC m=+1.417545524 container died 45a33e027950c8263c10addadb74623bdd64c3c7bf027ac731aa03945a61d18a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:58:33 np0005486808 systemd[1]: var-lib-containers-storage-overlay-211cbd54009f2022b89233fc8e481232e770c4e61b79fd8866f73f5dac2c1ce2-merged.mount: Deactivated successfully.
Oct 14 05:58:33 np0005486808 podman[441316]: 2025-10-14 09:58:33.678242067 +0000 UTC m=+1.477445154 container remove 45a33e027950c8263c10addadb74623bdd64c3c7bf027ac731aa03945a61d18a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 14 05:58:33 np0005486808 systemd[1]: libpod-conmon-45a33e027950c8263c10addadb74623bdd64c3c7bf027ac731aa03945a61d18a.scope: Deactivated successfully.
Oct 14 05:58:34 np0005486808 podman[441515]: 2025-10-14 09:58:34.379607986 +0000 UTC m=+0.038716811 container create 88f86d6c091f84d8ea7c4edce6598bb49ced49ceb582c730c48b424fb3ef8d26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 14 05:58:34 np0005486808 systemd[1]: Started libpod-conmon-88f86d6c091f84d8ea7c4edce6598bb49ced49ceb582c730c48b424fb3ef8d26.scope.
Oct 14 05:58:34 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:58:34 np0005486808 podman[441515]: 2025-10-14 09:58:34.360297772 +0000 UTC m=+0.019406617 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:58:34 np0005486808 podman[441515]: 2025-10-14 09:58:34.458623305 +0000 UTC m=+0.117732170 container init 88f86d6c091f84d8ea7c4edce6598bb49ced49ceb582c730c48b424fb3ef8d26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_spence, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:58:34 np0005486808 podman[441515]: 2025-10-14 09:58:34.470574028 +0000 UTC m=+0.129682893 container start 88f86d6c091f84d8ea7c4edce6598bb49ced49ceb582c730c48b424fb3ef8d26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_spence, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 14 05:58:34 np0005486808 podman[441515]: 2025-10-14 09:58:34.474163516 +0000 UTC m=+0.133272381 container attach 88f86d6c091f84d8ea7c4edce6598bb49ced49ceb582c730c48b424fb3ef8d26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_spence, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:58:34 np0005486808 systemd[1]: libpod-88f86d6c091f84d8ea7c4edce6598bb49ced49ceb582c730c48b424fb3ef8d26.scope: Deactivated successfully.
Oct 14 05:58:34 np0005486808 hardcore_spence[441531]: 167 167
Oct 14 05:58:34 np0005486808 podman[441515]: 2025-10-14 09:58:34.479807085 +0000 UTC m=+0.138915930 container died 88f86d6c091f84d8ea7c4edce6598bb49ced49ceb582c730c48b424fb3ef8d26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_spence, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 05:58:34 np0005486808 conmon[441531]: conmon 88f86d6c091f84d8ea7c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-88f86d6c091f84d8ea7c4edce6598bb49ced49ceb582c730c48b424fb3ef8d26.scope/container/memory.events
Oct 14 05:58:34 np0005486808 systemd[1]: var-lib-containers-storage-overlay-efbfea3173a86a501d326ee3058809982db7246e1d5ce28b73826086250a40fd-merged.mount: Deactivated successfully.
Oct 14 05:58:34 np0005486808 podman[441515]: 2025-10-14 09:58:34.524550223 +0000 UTC m=+0.183659068 container remove 88f86d6c091f84d8ea7c4edce6598bb49ced49ceb582c730c48b424fb3ef8d26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 14 05:58:34 np0005486808 systemd[1]: libpod-conmon-88f86d6c091f84d8ea7c4edce6598bb49ced49ceb582c730c48b424fb3ef8d26.scope: Deactivated successfully.
Oct 14 05:58:34 np0005486808 podman[441555]: 2025-10-14 09:58:34.730169128 +0000 UTC m=+0.060719111 container create 3c973b260dbe0698347f42cc29fc7439f1998103d52f6016fcdcf2a698a3ad4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Oct 14 05:58:34 np0005486808 systemd[1]: Started libpod-conmon-3c973b260dbe0698347f42cc29fc7439f1998103d52f6016fcdcf2a698a3ad4b.scope.
Oct 14 05:58:34 np0005486808 podman[441555]: 2025-10-14 09:58:34.699294261 +0000 UTC m=+0.029844294 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:58:34 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:58:34 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a272adf2e1336043f8e0c366038d59ab230c3f2e8711fdc018563957f9417955/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:58:34 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a272adf2e1336043f8e0c366038d59ab230c3f2e8711fdc018563957f9417955/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:58:34 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a272adf2e1336043f8e0c366038d59ab230c3f2e8711fdc018563957f9417955/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:58:34 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a272adf2e1336043f8e0c366038d59ab230c3f2e8711fdc018563957f9417955/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:58:34 np0005486808 podman[441555]: 2025-10-14 09:58:34.816457805 +0000 UTC m=+0.147007768 container init 3c973b260dbe0698347f42cc29fc7439f1998103d52f6016fcdcf2a698a3ad4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 14 05:58:34 np0005486808 podman[441555]: 2025-10-14 09:58:34.831945265 +0000 UTC m=+0.162495248 container start 3c973b260dbe0698347f42cc29fc7439f1998103d52f6016fcdcf2a698a3ad4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mirzakhani, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:58:34 np0005486808 podman[441555]: 2025-10-14 09:58:34.835940933 +0000 UTC m=+0.166490976 container attach 3c973b260dbe0698347f42cc29fc7439f1998103d52f6016fcdcf2a698a3ad4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mirzakhani, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 05:58:35 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3179: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:58:35 np0005486808 nova_compute[259627]: 2025-10-14 09:58:35.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]: {
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:    "0": [
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:        {
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:            "devices": [
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:                "/dev/loop3"
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:            ],
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:            "lv_name": "ceph_lv0",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:            "lv_size": "21470642176",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:            "name": "ceph_lv0",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:            "tags": {
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:                "ceph.cluster_name": "ceph",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:                "ceph.crush_device_class": "",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:                "ceph.encrypted": "0",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:                "ceph.osd_id": "0",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:                "ceph.type": "block",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:                "ceph.vdo": "0"
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:            },
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:            "type": "block",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:            "vg_name": "ceph_vg0"
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:        }
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:    ],
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:    "1": [
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:        {
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:            "devices": [
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:                "/dev/loop4"
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:            ],
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:            "lv_name": "ceph_lv1",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:            "lv_size": "21470642176",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:            "name": "ceph_lv1",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:            "tags": {
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:                "ceph.cluster_name": "ceph",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:                "ceph.crush_device_class": "",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:                "ceph.encrypted": "0",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:                "ceph.osd_id": "1",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:                "ceph.type": "block",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:                "ceph.vdo": "0"
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:            },
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:            "type": "block",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:            "vg_name": "ceph_vg1"
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:        }
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:    ],
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:    "2": [
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:        {
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:            "devices": [
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:                "/dev/loop5"
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:            ],
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:            "lv_name": "ceph_lv2",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:            "lv_size": "21470642176",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:            "name": "ceph_lv2",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:            "tags": {
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:                "ceph.cephx_lockbox_secret": "",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:                "ceph.cluster_name": "ceph",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:                "ceph.crush_device_class": "",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:                "ceph.encrypted": "0",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:                "ceph.osd_id": "2",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:                "ceph.type": "block",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:                "ceph.vdo": "0"
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:            },
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:            "type": "block",
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:            "vg_name": "ceph_vg2"
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:        }
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]:    ]
Oct 14 05:58:35 np0005486808 dazzling_mirzakhani[441572]: }
Oct 14 05:58:35 np0005486808 systemd[1]: libpod-3c973b260dbe0698347f42cc29fc7439f1998103d52f6016fcdcf2a698a3ad4b.scope: Deactivated successfully.
Oct 14 05:58:35 np0005486808 podman[441555]: 2025-10-14 09:58:35.617676224 +0000 UTC m=+0.948226167 container died 3c973b260dbe0698347f42cc29fc7439f1998103d52f6016fcdcf2a698a3ad4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mirzakhani, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 14 05:58:35 np0005486808 systemd[1]: var-lib-containers-storage-overlay-a272adf2e1336043f8e0c366038d59ab230c3f2e8711fdc018563957f9417955-merged.mount: Deactivated successfully.
Oct 14 05:58:35 np0005486808 podman[441555]: 2025-10-14 09:58:35.67089956 +0000 UTC m=+1.001449503 container remove 3c973b260dbe0698347f42cc29fc7439f1998103d52f6016fcdcf2a698a3ad4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mirzakhani, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 05:58:35 np0005486808 systemd[1]: libpod-conmon-3c973b260dbe0698347f42cc29fc7439f1998103d52f6016fcdcf2a698a3ad4b.scope: Deactivated successfully.
Oct 14 05:58:36 np0005486808 podman[441733]: 2025-10-14 09:58:36.342203892 +0000 UTC m=+0.038769972 container create eb3d3daa0c905f0ab46b8b3ab694896e50109de8af58ef1d23ac60cc3acebef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Oct 14 05:58:36 np0005486808 systemd[1]: Started libpod-conmon-eb3d3daa0c905f0ab46b8b3ab694896e50109de8af58ef1d23ac60cc3acebef7.scope.
Oct 14 05:58:36 np0005486808 podman[441733]: 2025-10-14 09:58:36.326056236 +0000 UTC m=+0.022622296 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:58:36 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:58:36 np0005486808 podman[441733]: 2025-10-14 09:58:36.438336771 +0000 UTC m=+0.134902851 container init eb3d3daa0c905f0ab46b8b3ab694896e50109de8af58ef1d23ac60cc3acebef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bose, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:58:36 np0005486808 podman[441733]: 2025-10-14 09:58:36.449520205 +0000 UTC m=+0.146086255 container start eb3d3daa0c905f0ab46b8b3ab694896e50109de8af58ef1d23ac60cc3acebef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bose, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 14 05:58:36 np0005486808 podman[441733]: 2025-10-14 09:58:36.453094293 +0000 UTC m=+0.149660343 container attach eb3d3daa0c905f0ab46b8b3ab694896e50109de8af58ef1d23ac60cc3acebef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bose, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 05:58:36 np0005486808 intelligent_bose[441749]: 167 167
Oct 14 05:58:36 np0005486808 podman[441733]: 2025-10-14 09:58:36.457735837 +0000 UTC m=+0.154301887 container died eb3d3daa0c905f0ab46b8b3ab694896e50109de8af58ef1d23ac60cc3acebef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bose, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 05:58:36 np0005486808 systemd[1]: libpod-eb3d3daa0c905f0ab46b8b3ab694896e50109de8af58ef1d23ac60cc3acebef7.scope: Deactivated successfully.
Oct 14 05:58:36 np0005486808 systemd[1]: var-lib-containers-storage-overlay-75708747baacae0dfade7c36300025119f10b1062749d164639b909a524d4571-merged.mount: Deactivated successfully.
Oct 14 05:58:36 np0005486808 podman[441733]: 2025-10-14 09:58:36.524104936 +0000 UTC m=+0.220670986 container remove eb3d3daa0c905f0ab46b8b3ab694896e50109de8af58ef1d23ac60cc3acebef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 05:58:36 np0005486808 systemd[1]: libpod-conmon-eb3d3daa0c905f0ab46b8b3ab694896e50109de8af58ef1d23ac60cc3acebef7.scope: Deactivated successfully.
Oct 14 05:58:36 np0005486808 podman[441774]: 2025-10-14 09:58:36.701927479 +0000 UTC m=+0.049354582 container create 89ce4021831a298b8a114b4ceed311966b906d5f03c87c3f9ecce2d6fdac258c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ramanujan, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Oct 14 05:58:36 np0005486808 systemd[1]: Started libpod-conmon-89ce4021831a298b8a114b4ceed311966b906d5f03c87c3f9ecce2d6fdac258c.scope.
Oct 14 05:58:36 np0005486808 podman[441774]: 2025-10-14 09:58:36.67875292 +0000 UTC m=+0.026180103 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 05:58:36 np0005486808 systemd[1]: Started libcrun container.
Oct 14 05:58:36 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6ab45100fcaedd5b2b3b50c10278db80a10f338dc1878fbd45e1a37d0d476d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 05:58:36 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6ab45100fcaedd5b2b3b50c10278db80a10f338dc1878fbd45e1a37d0d476d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 05:58:36 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6ab45100fcaedd5b2b3b50c10278db80a10f338dc1878fbd45e1a37d0d476d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 05:58:36 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6ab45100fcaedd5b2b3b50c10278db80a10f338dc1878fbd45e1a37d0d476d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 05:58:36 np0005486808 podman[441774]: 2025-10-14 09:58:36.794829298 +0000 UTC m=+0.142256421 container init 89ce4021831a298b8a114b4ceed311966b906d5f03c87c3f9ecce2d6fdac258c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 14 05:58:36 np0005486808 podman[441774]: 2025-10-14 09:58:36.810619886 +0000 UTC m=+0.158046999 container start 89ce4021831a298b8a114b4ceed311966b906d5f03c87c3f9ecce2d6fdac258c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 05:58:36 np0005486808 podman[441774]: 2025-10-14 09:58:36.8140524 +0000 UTC m=+0.161479523 container attach 89ce4021831a298b8a114b4ceed311966b906d5f03c87c3f9ecce2d6fdac258c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ramanujan, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 05:58:37 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3180: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:58:37 np0005486808 nova_compute[259627]: 2025-10-14 09:58:37.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:58:37 np0005486808 trusting_ramanujan[441791]: {
Oct 14 05:58:37 np0005486808 trusting_ramanujan[441791]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 05:58:37 np0005486808 trusting_ramanujan[441791]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:58:37 np0005486808 trusting_ramanujan[441791]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 05:58:37 np0005486808 trusting_ramanujan[441791]:        "osd_id": 2,
Oct 14 05:58:37 np0005486808 trusting_ramanujan[441791]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 05:58:37 np0005486808 trusting_ramanujan[441791]:        "type": "bluestore"
Oct 14 05:58:37 np0005486808 trusting_ramanujan[441791]:    },
Oct 14 05:58:37 np0005486808 trusting_ramanujan[441791]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 05:58:37 np0005486808 trusting_ramanujan[441791]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:58:37 np0005486808 trusting_ramanujan[441791]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 05:58:37 np0005486808 trusting_ramanujan[441791]:        "osd_id": 1,
Oct 14 05:58:37 np0005486808 trusting_ramanujan[441791]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 05:58:37 np0005486808 trusting_ramanujan[441791]:        "type": "bluestore"
Oct 14 05:58:37 np0005486808 trusting_ramanujan[441791]:    },
Oct 14 05:58:37 np0005486808 trusting_ramanujan[441791]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 05:58:37 np0005486808 trusting_ramanujan[441791]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 05:58:37 np0005486808 trusting_ramanujan[441791]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 05:58:37 np0005486808 trusting_ramanujan[441791]:        "osd_id": 0,
Oct 14 05:58:37 np0005486808 trusting_ramanujan[441791]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 05:58:37 np0005486808 trusting_ramanujan[441791]:        "type": "bluestore"
Oct 14 05:58:37 np0005486808 trusting_ramanujan[441791]:    }
Oct 14 05:58:37 np0005486808 trusting_ramanujan[441791]: }
Oct 14 05:58:37 np0005486808 systemd[1]: libpod-89ce4021831a298b8a114b4ceed311966b906d5f03c87c3f9ecce2d6fdac258c.scope: Deactivated successfully.
Oct 14 05:58:37 np0005486808 systemd[1]: libpod-89ce4021831a298b8a114b4ceed311966b906d5f03c87c3f9ecce2d6fdac258c.scope: Consumed 1.038s CPU time.
Oct 14 05:58:37 np0005486808 podman[441774]: 2025-10-14 09:58:37.842218909 +0000 UTC m=+1.189646152 container died 89ce4021831a298b8a114b4ceed311966b906d5f03c87c3f9ecce2d6fdac258c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ramanujan, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 05:58:37 np0005486808 systemd[1]: var-lib-containers-storage-overlay-be6ab45100fcaedd5b2b3b50c10278db80a10f338dc1878fbd45e1a37d0d476d-merged.mount: Deactivated successfully.
Oct 14 05:58:37 np0005486808 podman[441774]: 2025-10-14 09:58:37.906999888 +0000 UTC m=+1.254427001 container remove 89ce4021831a298b8a114b4ceed311966b906d5f03c87c3f9ecce2d6fdac258c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 14 05:58:37 np0005486808 systemd[1]: libpod-conmon-89ce4021831a298b8a114b4ceed311966b906d5f03c87c3f9ecce2d6fdac258c.scope: Deactivated successfully.
Oct 14 05:58:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 05:58:37 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:58:37 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 05:58:37 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:58:37 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 2aeed1d3-ee9b-475c-8232-2b24ba52d1ae does not exist
Oct 14 05:58:37 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 19542914-d189-4b30-9dfe-892eddfaf99c does not exist
Oct 14 05:58:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:58:38 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:58:38 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 05:58:39 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3181: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:58:40 np0005486808 nova_compute[259627]: 2025-10-14 09:58:40.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:58:41 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3182: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:58:41 np0005486808 podman[441888]: 2025-10-14 09:58:41.664137957 +0000 UTC m=+0.082503516 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 14 05:58:41 np0005486808 podman[441889]: 2025-10-14 09:58:41.683751298 +0000 UTC m=+0.090155203 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct 14 05:58:42 np0005486808 nova_compute[259627]: 2025-10-14 09:58:42.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:58:43 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3183: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:58:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:58:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 05:58:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:58:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 05:58:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:58:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 05:58:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:58:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:58:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:58:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:58:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:58:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 14 05:58:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:58:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 05:58:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:58:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:58:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:58:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 05:58:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:58:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 05:58:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:58:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 05:58:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 05:58:43 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 05:58:45 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3184: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:58:45 np0005486808 nova_compute[259627]: 2025-10-14 09:58:45.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:58:47 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3185: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:58:47 np0005486808 nova_compute[259627]: 2025-10-14 09:58:47.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:58:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:58:49 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3186: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:58:50 np0005486808 nova_compute[259627]: 2025-10-14 09:58:50.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:58:51 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3187: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:58:52 np0005486808 nova_compute[259627]: 2025-10-14 09:58:52.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:58:52 np0005486808 nova_compute[259627]: 2025-10-14 09:58:52.973 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:58:53 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3188: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:58:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:58:55 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3189: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:58:55 np0005486808 nova_compute[259627]: 2025-10-14 09:58:55.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:58:57 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3190: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:58:57 np0005486808 nova_compute[259627]: 2025-10-14 09:58:57.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:58:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:58:59 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3191: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:58:59 np0005486808 podman[441936]: 2025-10-14 09:58:59.659857844 +0000 UTC m=+0.070711756 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 05:58:59 np0005486808 podman[441935]: 2025-10-14 09:58:59.669933782 +0000 UTC m=+0.085976751 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 05:59:00 np0005486808 nova_compute[259627]: 2025-10-14 09:59:00.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:59:01 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3192: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:59:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:59:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:59:02 np0005486808 nova_compute[259627]: 2025-10-14 09:59:02.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:59:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:59:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:59:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 05:59:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 05:59:03 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3193: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:59:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:59:03 np0005486808 systemd-logind[799]: New session 57 of user zuul.
Oct 14 05:59:03 np0005486808 systemd[1]: Started Session 57 of User zuul.
Oct 14 05:59:05 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3194: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:59:05 np0005486808 nova_compute[259627]: 2025-10-14 09:59:05.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:59:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 05:59:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/878180481' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 05:59:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 05:59:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/878180481' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 05:59:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:59:07.075 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:59:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:59:07.075 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:59:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 09:59:07.076 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:59:07 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3195: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:59:07 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.22995 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 05:59:07 np0005486808 nova_compute[259627]: 2025-10-14 09:59:07.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:59:07 np0005486808 nova_compute[259627]: 2025-10-14 09:59:07.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:59:08 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.22997 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 05:59:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:59:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 14 05:59:08 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3464857549' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 14 05:59:08 np0005486808 nova_compute[259627]: 2025-10-14 09:59:08.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:59:08 np0005486808 nova_compute[259627]: 2025-10-14 09:59:08.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 14 05:59:09 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3196: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:59:09 np0005486808 nova_compute[259627]: 2025-10-14 09:59:09.998 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:59:10 np0005486808 nova_compute[259627]: 2025-10-14 09:59:10.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:59:11 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3197: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:59:12 np0005486808 ovs-vsctl[442261]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 14 05:59:12 np0005486808 podman[442284]: 2025-10-14 09:59:12.76927503 +0000 UTC m=+0.127128260 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 14 05:59:12 np0005486808 podman[442283]: 2025-10-14 09:59:12.803749426 +0000 UTC m=+0.161313849 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 05:59:12 np0005486808 nova_compute[259627]: 2025-10-14 09:59:12.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:59:13 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3198: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:59:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:59:13 np0005486808 virtqemud[259351]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 14 05:59:13 np0005486808 virtqemud[259351]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 14 05:59:13 np0005486808 virtqemud[259351]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 14 05:59:13 np0005486808 nova_compute[259627]: 2025-10-14 09:59:13.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:59:14 np0005486808 nova_compute[259627]: 2025-10-14 09:59:14.030 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:59:14 np0005486808 nova_compute[259627]: 2025-10-14 09:59:14.030 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:59:14 np0005486808 nova_compute[259627]: 2025-10-14 09:59:14.030 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:59:14 np0005486808 nova_compute[259627]: 2025-10-14 09:59:14.030 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 05:59:14 np0005486808 nova_compute[259627]: 2025-10-14 09:59:14.030 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:59:14 np0005486808 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: cache status {prefix=cache status} (starting...)
Oct 14 05:59:14 np0005486808 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: client ls {prefix=client ls} (starting...)
Oct 14 05:59:14 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:59:14 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3330810155' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:59:14 np0005486808 lvm[442661]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 14 05:59:14 np0005486808 nova_compute[259627]: 2025-10-14 09:59:14.526 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:59:14 np0005486808 lvm[442661]: VG ceph_vg1 finished
Oct 14 05:59:14 np0005486808 lvm[442689]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 14 05:59:14 np0005486808 lvm[442689]: VG ceph_vg0 finished
Oct 14 05:59:14 np0005486808 nova_compute[259627]: 2025-10-14 09:59:14.726 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 05:59:14 np0005486808 nova_compute[259627]: 2025-10-14 09:59:14.728 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3458MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 05:59:14 np0005486808 nova_compute[259627]: 2025-10-14 09:59:14.728 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 05:59:14 np0005486808 nova_compute[259627]: 2025-10-14 09:59:14.728 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 05:59:14 np0005486808 lvm[442704]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 14 05:59:14 np0005486808 lvm[442704]: VG ceph_vg2 finished
Oct 14 05:59:14 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23003 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 05:59:14 np0005486808 nova_compute[259627]: 2025-10-14 09:59:14.972 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 05:59:14 np0005486808 nova_compute[259627]: 2025-10-14 09:59:14.973 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 05:59:15 np0005486808 nova_compute[259627]: 2025-10-14 09:59:14.992 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 05:59:15 np0005486808 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: damage ls {prefix=damage ls} (starting...)
Oct 14 05:59:15 np0005486808 kernel: block loop4: the capability attribute has been deprecated.
Oct 14 05:59:15 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3199: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:59:15 np0005486808 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: dump loads {prefix=dump loads} (starting...)
Oct 14 05:59:15 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23005 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 05:59:15 np0005486808 nova_compute[259627]: 2025-10-14 09:59:15.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:59:15 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 05:59:15 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1590939150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 05:59:15 np0005486808 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct 14 05:59:15 np0005486808 nova_compute[259627]: 2025-10-14 09:59:15.468 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 05:59:15 np0005486808 nova_compute[259627]: 2025-10-14 09:59:15.474 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 05:59:15 np0005486808 nova_compute[259627]: 2025-10-14 09:59:15.509 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 05:59:15 np0005486808 nova_compute[259627]: 2025-10-14 09:59:15.511 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 05:59:15 np0005486808 nova_compute[259627]: 2025-10-14 09:59:15.511 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 05:59:15 np0005486808 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct 14 05:59:15 np0005486808 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct 14 05:59:15 np0005486808 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct 14 05:59:15 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Oct 14 05:59:15 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/383223823' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 14 05:59:16 np0005486808 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct 14 05:59:16 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23013 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 05:59:16 np0005486808 ceph-mgr[74543]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 14 05:59:16 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T09:59:16.190+0000 7f6b82b53640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 14 05:59:16 np0005486808 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct 14 05:59:16 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 05:59:16 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1455265449' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 05:59:16 np0005486808 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: ops {prefix=ops} (starting...)
Oct 14 05:59:16 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Oct 14 05:59:16 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3794983232' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 14 05:59:16 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Oct 14 05:59:16 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1679887878' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 14 05:59:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 14 05:59:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3134349536' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 14 05:59:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Oct 14 05:59:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3339085505' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 14 05:59:17 np0005486808 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: session ls {prefix=session ls} (starting...)
Oct 14 05:59:17 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3200: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:59:17 np0005486808 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: status {prefix=status} (starting...)
Oct 14 05:59:17 np0005486808 nova_compute[259627]: 2025-10-14 09:59:17.508 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:59:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 14 05:59:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/550033903' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 14 05:59:17 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23027 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 05:59:17 np0005486808 nova_compute[259627]: 2025-10-14 09:59:17.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:59:17 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23031 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 05:59:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 14 05:59:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/692184495' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 14 05:59:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 14 05:59:18 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1174512388' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 14 05:59:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Oct 14 05:59:18 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2506106079' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 14 05:59:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:59:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct 14 05:59:18 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1887664699' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 14 05:59:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Oct 14 05:59:18 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1658988925' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 14 05:59:19 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3201: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:59:19 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 14 05:59:19 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3621050288' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 14 05:59:19 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23043 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 05:59:19 np0005486808 ceph-mgr[74543]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 14 05:59:19 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T09:59:19.280+0000 7f6b82b53640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 14 05:59:19 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23045 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 05:59:19 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Oct 14 05:59:19 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2313547730' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 14 05:59:20 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23049 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 05:59:20 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Oct 14 05:59:20 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2620577048' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 14 05:59:20 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23053 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 05:59:20 np0005486808 nova_compute[259627]: 2025-10-14 09:59:20.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 05:59:20 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 14 05:59:20 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1838274859' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 14 05:59:20 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23057 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 14 05:59:20 np0005486808 nova_compute[259627]: 2025-10-14 09:59:20.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:59:20 np0005486808 nova_compute[259627]: 2025-10-14 09:59:20.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 05:59:20 np0005486808 nova_compute[259627]: 2025-10-14 09:59:20.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 257712128 unmapped: 58433536 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2820254 data_alloc: 218103808 data_used: 2686976
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfa53c00 session 0x55b3cdcb32c0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce709000 session 0x55b3cdc6cd20
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfa51400 session 0x55b3ce0e9a40
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f5c00 session 0x55b3ce127c20
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf610800 session 0x55b3cfba3c20
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258031616 unmapped: 58114048 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258031616 unmapped: 58114048 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258031616 unmapped: 58114048 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee4d6000/0x0/0x4ffc00000, data 0x178847b/0x1918000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258031616 unmapped: 58114048 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258031616 unmapped: 58114048 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2896002 data_alloc: 218103808 data_used: 2686976
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258031616 unmapped: 58114048 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258031616 unmapped: 58114048 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf02b400 session 0x55b3cb999860
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258031616 unmapped: 58114048 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc48c000 session 0x55b3cbe8c5a0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee4d6000/0x0/0x4ffc00000, data 0x178847b/0x1918000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258031616 unmapped: 58114048 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc48c000 session 0x55b3cbf223c0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258039808 unmapped: 58105856 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.666875839s of 16.880754471s, submitted: 48
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f5c00 session 0x55b3cbdc7c20
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2899535 data_alloc: 218103808 data_used: 2686976
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee4b2000/0x0/0x4ffc00000, data 0x17ac47b/0x193c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258252800 unmapped: 57892864 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee4b2000/0x0/0x4ffc00000, data 0x17ac47b/0x193c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258252800 unmapped: 57892864 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258252800 unmapped: 57892864 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258252800 unmapped: 57892864 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258252800 unmapped: 57892864 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2899667 data_alloc: 218103808 data_used: 2686976
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee4b2000/0x0/0x4ffc00000, data 0x17ac47b/0x193c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258252800 unmapped: 57892864 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258252800 unmapped: 57892864 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee4b2000/0x0/0x4ffc00000, data 0x17ac47b/0x193c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee4b2000/0x0/0x4ffc00000, data 0x17ac47b/0x193c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258260992 unmapped: 57884672 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee4b2000/0x0/0x4ffc00000, data 0x17ac47b/0x193c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258260992 unmapped: 57884672 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee4b2000/0x0/0x4ffc00000, data 0x17ac47b/0x193c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258260992 unmapped: 57884672 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2899667 data_alloc: 218103808 data_used: 2686976
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258260992 unmapped: 57884672 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258269184 unmapped: 57876480 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258629632 unmapped: 57516032 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee4b2000/0x0/0x4ffc00000, data 0x17ac47b/0x193c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258629632 unmapped: 57516032 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258629632 unmapped: 57516032 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2966707 data_alloc: 234881024 data_used: 12066816
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258629632 unmapped: 57516032 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f4c00 session 0x55b3d0b64f00
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfead800 session 0x55b3cb999680
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce707400 session 0x55b3ce0e9860
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce707400 session 0x55b3ce58a000
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.489801407s of 16.518394470s, submitted: 7
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc48c000 session 0x55b3cbf3b4a0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f4c00 session 0x55b3cb998b40
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f5c00 session 0x55b3cbe8c960
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfead800 session 0x55b3cbdc7860
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfead800 session 0x55b3ce5aef00
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258875392 unmapped: 57270272 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258875392 unmapped: 57270272 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258875392 unmapped: 57270272 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4edd68000/0x0/0x4ffc00000, data 0x1ef44ed/0x2086000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258875392 unmapped: 57270272 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3026439 data_alloc: 234881024 data_used: 12066816
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 258875392 unmapped: 57270272 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263471104 unmapped: 52674560 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed35d000/0x0/0x4ffc00000, data 0x28ff4ed/0x2a91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263774208 unmapped: 52371456 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2d5000/0x0/0x4ffc00000, data 0x29874ed/0x2b19000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf3b5800 session 0x55b3d0b64780
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263782400 unmapped: 52363264 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2d5000/0x0/0x4ffc00000, data 0x29874ed/0x2b19000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263790592 unmapped: 52355072 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3117295 data_alloc: 234881024 data_used: 12820480
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2d5000/0x0/0x4ffc00000, data 0x29874ed/0x2b19000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 266592256 unmapped: 49553408 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 266592256 unmapped: 49553408 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 266592256 unmapped: 49553408 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.901058197s of 12.385982513s, submitted: 139
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 266608640 unmapped: 49537024 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2b6000/0x0/0x4ffc00000, data 0x29a64ed/0x2b38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 266608640 unmapped: 49537024 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3165899 data_alloc: 234881024 data_used: 20168704
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 266608640 unmapped: 49537024 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 266608640 unmapped: 49537024 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 266608640 unmapped: 49537024 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 266608640 unmapped: 49537024 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2b6000/0x0/0x4ffc00000, data 0x29a64ed/0x2b38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [0,0,0,1])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 267591680 unmapped: 48553984 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3207337 data_alloc: 234881024 data_used: 20238336
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ecd99000/0x0/0x4ffc00000, data 0x2ec34ed/0x3055000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 267624448 unmapped: 48521216 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ecd1d000/0x0/0x4ffc00000, data 0x2f3f4ed/0x30d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 267853824 unmapped: 48291840 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 267853824 unmapped: 48291840 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd51c00 session 0x55b3cb998960
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf616800 session 0x55b3cde57e00
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfa51400 session 0x55b3cc7ec1e0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd51c00 session 0x55b3cfba23c0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.723741531s of 10.006159782s, submitted: 67
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf3b5800 session 0x55b3ce0e8f00
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf616800 session 0x55b3ce126960
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfead800 session 0x55b3cba943c0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 269221888 unmapped: 46923776 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf610c00 session 0x55b3cde565a0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd51c00 session 0x55b3ce5a25a0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 269254656 unmapped: 46891008 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3244870 data_alloc: 234881024 data_used: 20275200
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eca56000/0x0/0x4ffc00000, data 0x320455f/0x3398000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 269254656 unmapped: 46891008 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 269254656 unmapped: 46891008 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 269393920 unmapped: 46751744 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 269393920 unmapped: 46751744 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfa52400 session 0x55b3ccada000
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 269393920 unmapped: 46751744 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3251004 data_alloc: 234881024 data_used: 20275200
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ccacd000 session 0x55b3cbe3e3c0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf02a800 session 0x55b3ce10c5a0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 269393920 unmapped: 46751744 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eca0c000/0x0/0x4ffc00000, data 0x324d582/0x33e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf60e800 session 0x55b3cbf23e00
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 269418496 unmapped: 46727168 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee343000/0x0/0x4ffc00000, data 0x2557510/0x26ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 269549568 unmapped: 46596096 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 269549568 unmapped: 46596096 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 269549568 unmapped: 46596096 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3122116 data_alloc: 234881024 data_used: 15335424
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 269549568 unmapped: 46596096 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee343000/0x0/0x4ffc00000, data 0x2557510/0x26ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 269549568 unmapped: 46596096 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 269549568 unmapped: 46596096 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.288921356s of 14.564796448s, submitted: 78
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 269557760 unmapped: 46587904 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 269557760 unmapped: 46587904 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3122644 data_alloc: 234881024 data_used: 15335424
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 270606336 unmapped: 45539328 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee47c000/0x0/0x4ffc00000, data 0x281f510/0x29b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 273940480 unmapped: 42205184 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 273293312 unmapped: 42852352 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 273293312 unmapped: 42852352 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 273293312 unmapped: 42852352 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3192304 data_alloc: 234881024 data_used: 17195008
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 273293312 unmapped: 42852352 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee102000/0x0/0x4ffc00000, data 0x2b99510/0x2d2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee102000/0x0/0x4ffc00000, data 0x2b99510/0x2d2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 273293312 unmapped: 42852352 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 273293312 unmapped: 42852352 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.664649963s of 10.022263527s, submitted: 108
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 273293312 unmapped: 42852352 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee0e0000/0x0/0x4ffc00000, data 0x2bbb510/0x2d4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdc64400 session 0x55b3cbe3fc20
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f5c00 session 0x55b3ce1263c0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 273293312 unmapped: 42852352 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3191948 data_alloc: 234881024 data_used: 17195008
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ca5d6000 session 0x55b3ce5a30e0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271327232 unmapped: 44818432 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271327232 unmapped: 44818432 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eea21000/0x0/0x4ffc00000, data 0x227747b/0x2407000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271335424 unmapped: 44810240 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eea21000/0x0/0x4ffc00000, data 0x227747b/0x2407000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271335424 unmapped: 44810240 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271335424 unmapped: 44810240 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3077816 data_alloc: 234881024 data_used: 12795904
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d512fc00 session 0x55b3ce5aed20
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfeac000 session 0x55b3ccaddc20
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ca5d6000 session 0x55b3cbd82b40
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 262897664 unmapped: 53248000 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 262897664 unmapped: 53248000 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 262897664 unmapped: 53248000 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efe50000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 262897664 unmapped: 53248000 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 262897664 unmapped: 53248000 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2853828 data_alloc: 218103808 data_used: 2686976
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 262897664 unmapped: 53248000 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efe50000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 262897664 unmapped: 53248000 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 262897664 unmapped: 53248000 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d512c400 session 0x55b3cc7ec5a0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf3b5c00 session 0x55b3ccadc1e0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f4800 session 0x55b3cdc6c3c0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ca5d7c00 session 0x55b3cc5990e0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.143781662s of 15.404740334s, submitted: 75
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ca5d6000 session 0x55b3cb9992c0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f4800 session 0x55b3cfba3860
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf3b5c00 session 0x55b3ce10c780
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d512c400 session 0x55b3d0b641e0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d174b000 session 0x55b3ce58ad20
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263184384 unmapped: 52961280 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efcdd000/0x0/0x4ffc00000, data 0xfc147b/0x1151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263192576 unmapped: 52953088 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2871810 data_alloc: 218103808 data_used: 2686976
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263192576 unmapped: 52953088 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efcdd000/0x0/0x4ffc00000, data 0xfc147b/0x1151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263192576 unmapped: 52953088 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263192576 unmapped: 52953088 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf4e6c00 session 0x55b3ccaa05a0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263192576 unmapped: 52953088 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263192576 unmapped: 52953088 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2872450 data_alloc: 218103808 data_used: 2752512
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263200768 unmapped: 52944896 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efcdd000/0x0/0x4ffc00000, data 0xfc147b/0x1151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263200768 unmapped: 52944896 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263208960 unmapped: 52936704 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263208960 unmapped: 52936704 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efcdd000/0x0/0x4ffc00000, data 0xfc147b/0x1151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263208960 unmapped: 52936704 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2882050 data_alloc: 218103808 data_used: 4030464
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efcdd000/0x0/0x4ffc00000, data 0xfc147b/0x1151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263208960 unmapped: 52936704 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf611000 session 0x55b3cbe8dc20
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd60c00 session 0x55b3cde57860
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ccbbfc00 session 0x55b3cba95e00
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfbc0800 session 0x55b3cd3b23c0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.898473740s of 12.925726891s, submitted: 7
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfbc0800 session 0x55b3ccadc000
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ccbbfc00 session 0x55b3cdc6da40
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd60c00 session 0x55b3cbe8c000
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf4e6c00 session 0x55b3cb9414a0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf611000 session 0x55b3cc7ff860
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263249920 unmapped: 52895744 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263249920 unmapped: 52895744 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263249920 unmapped: 52895744 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 262979584 unmapped: 53166080 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057502 data_alloc: 218103808 data_used: 4022272
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee6a4000/0x0/0x4ffc00000, data 0x25f04ed/0x2782000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce139400 session 0x55b3ce0e8000
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263086080 unmapped: 53059584 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee6a4000/0x0/0x4ffc00000, data 0x25f04ed/0x2782000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd52000 session 0x55b3ce10cd20
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 264192000 unmapped: 51953664 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf612800 session 0x55b3cbe3e5a0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ccbbec00 session 0x55b3ce127a40
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263389184 unmapped: 52756480 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 263389184 unmapped: 52756480 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 265953280 unmapped: 50192384 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3159095 data_alloc: 234881024 data_used: 18636800
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee6a1000/0x0/0x4ffc00000, data 0x25fa4fd/0x278d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 265953280 unmapped: 50192384 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 265953280 unmapped: 50192384 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.488649368s of 10.946557045s, submitted: 125
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee6a1000/0x0/0x4ffc00000, data 0x25fa4fd/0x278d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 265953280 unmapped: 50192384 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 265953280 unmapped: 50192384 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 265953280 unmapped: 50192384 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3159271 data_alloc: 234881024 data_used: 18636800
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 265953280 unmapped: 50192384 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 265953280 unmapped: 50192384 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 265953280 unmapped: 50192384 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee6a1000/0x0/0x4ffc00000, data 0x25fa4fd/0x278d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 266289152 unmapped: 49856512 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 270188544 unmapped: 45957120 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3231281 data_alloc: 234881024 data_used: 19087360
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5c800 session 0x55b3ce10c3c0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd52000 session 0x55b3ce5a32c0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ccbbec00 session 0x55b3ce0e8000
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 270671872 unmapped: 45473792 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce139400 session 0x55b3cb941c20
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf612800 session 0x55b3d0b65680
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 270671872 unmapped: 45473792 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed414000/0x0/0x4ffc00000, data 0x388655f/0x3a1a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 270671872 unmapped: 45473792 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 270671872 unmapped: 45473792 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf614c00 session 0x55b3ce58ab40
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 270671872 unmapped: 45473792 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3315168 data_alloc: 234881024 data_used: 19079168
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ccbbe400 session 0x55b3cdcb3680
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.795685768s of 13.264044762s, submitted: 137
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf4e7000 session 0x55b3cdcb2000
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 270688256 unmapped: 45457408 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d174b000 session 0x55b3ce0e9e00
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed3f3000/0x0/0x4ffc00000, data 0x38a755f/0x3a3b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 270704640 unmapped: 45441024 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed3f3000/0x0/0x4ffc00000, data 0x38a755f/0x3a3b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 269688832 unmapped: 46456832 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272424960 unmapped: 43720704 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272424960 unmapped: 43720704 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3370913 data_alloc: 234881024 data_used: 25350144
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272424960 unmapped: 43720704 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272424960 unmapped: 43720704 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272424960 unmapped: 43720704 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed3f3000/0x0/0x4ffc00000, data 0x38a755f/0x3a3b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf02c000 session 0x55b3cb999c20
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cd850800 session 0x55b3cdc6d680
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272424960 unmapped: 43720704 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ccbbe400 session 0x55b3cb9981e0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271286272 unmapped: 44859392 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3100258 data_alloc: 234881024 data_used: 13279232
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271286272 unmapped: 44859392 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.091932297s of 11.313192368s, submitted: 63
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271286272 unmapped: 44859392 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271286272 unmapped: 44859392 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eeac8000/0x0/0x4ffc00000, data 0x21d54dd/0x2366000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272482304 unmapped: 43663360 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee987000/0x0/0x4ffc00000, data 0x23074dd/0x2498000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee987000/0x0/0x4ffc00000, data 0x23074dd/0x2498000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 273539072 unmapped: 42606592 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3124442 data_alloc: 234881024 data_used: 13328384
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee96f000/0x0/0x4ffc00000, data 0x23174dd/0x24a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 273539072 unmapped: 42606592 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee96f000/0x0/0x4ffc00000, data 0x23174dd/0x24a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee96f000/0x0/0x4ffc00000, data 0x23174dd/0x24a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 273539072 unmapped: 42606592 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 273539072 unmapped: 42606592 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 273539072 unmapped: 42606592 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 273539072 unmapped: 42606592 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3124442 data_alloc: 234881024 data_used: 13328384
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee96f000/0x0/0x4ffc00000, data 0x23174dd/0x24a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 273539072 unmapped: 42606592 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 273539072 unmapped: 42606592 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 273539072 unmapped: 42606592 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 273539072 unmapped: 42606592 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 273539072 unmapped: 42606592 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3124618 data_alloc: 234881024 data_used: 13332480
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee96f000/0x0/0x4ffc00000, data 0x23174dd/0x24a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.719812393s of 13.929898262s, submitted: 55
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfa52800 session 0x55b3d0b64960
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd6f400 session 0x55b3ce5ae5a0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272867328 unmapped: 43278336 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d1d14c00 session 0x55b3cbf221e0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 270630912 unmapped: 45514752 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef3ca000/0x0/0x4ffc00000, data 0x18d347b/0x1a63000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 270630912 unmapped: 45514752 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 270630912 unmapped: 45514752 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cb6afc00 session 0x55b3cbe8de00
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70cc00 session 0x55b3cbf23c20
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 270630912 unmapped: 45514752 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2978292 data_alloc: 218103808 data_used: 5054464
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cb6afc00 session 0x55b3cde57e00
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 270630912 unmapped: 45514752 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ccbbe400 session 0x55b3ce58a1e0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd6f400 session 0x55b3cbe8c960
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfa52800 session 0x55b3d0b64780
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfa52800 session 0x55b3cb23e5a0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cb6afc00 session 0x55b3ce127e00
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efe50000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 270942208 unmapped: 45203456 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 270942208 unmapped: 45203456 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 270942208 unmapped: 45203456 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 270942208 unmapped: 45203456 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2925672 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef9f3000/0x0/0x4ffc00000, data 0x12ab47b/0x143b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbcb5c00 session 0x55b3cdc6da40
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 270942208 unmapped: 45203456 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.365673065s of 10.588297844s, submitted: 55
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef9cf000/0x0/0x4ffc00000, data 0x12cf47b/0x145f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 270942208 unmapped: 45203456 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271196160 unmapped: 44949504 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271196160 unmapped: 44949504 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef9cf000/0x0/0x4ffc00000, data 0x12cf47b/0x145f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271196160 unmapped: 44949504 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2962481 data_alloc: 218103808 data_used: 7176192
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271196160 unmapped: 44949504 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271196160 unmapped: 44949504 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271196160 unmapped: 44949504 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271196160 unmapped: 44949504 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271196160 unmapped: 44949504 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2962481 data_alloc: 218103808 data_used: 7176192
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef9cf000/0x0/0x4ffc00000, data 0x12cf47b/0x145f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271196160 unmapped: 44949504 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271196160 unmapped: 44949504 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.791116714s of 10.793682098s, submitted: 1
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 273645568 unmapped: 42500096 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274276352 unmapped: 41869312 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eeec4000/0x0/0x4ffc00000, data 0x1dcc47b/0x1f5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274415616 unmapped: 41730048 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3067957 data_alloc: 218103808 data_used: 8536064
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274415616 unmapped: 41730048 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274415616 unmapped: 41730048 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274415616 unmapped: 41730048 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274415616 unmapped: 41730048 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eeec4000/0x0/0x4ffc00000, data 0x1dcc47b/0x1f5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274096128 unmapped: 42049536 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3056245 data_alloc: 218103808 data_used: 8536064
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274096128 unmapped: 42049536 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274096128 unmapped: 42049536 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274096128 unmapped: 42049536 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274096128 unmapped: 42049536 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eeecf000/0x0/0x4ffc00000, data 0x1dcf47b/0x1f5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274096128 unmapped: 42049536 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3056245 data_alloc: 218103808 data_used: 8536064
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274104320 unmapped: 42041344 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274104320 unmapped: 42041344 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274104320 unmapped: 42041344 heap: 316145664 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.782558441s of 16.128824234s, submitted: 114
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce0e4c00 session 0x55b3d0b64d20
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf60fc00 session 0x55b3cbf223c0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cb6afc00 session 0x55b3cbdc7860
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eeecf000/0x0/0x4ffc00000, data 0x1dcf47b/0x1f5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbcb5c00 session 0x55b3ce10c780
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce0e4c00 session 0x55b3ce126960
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274112512 unmapped: 46235648 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfa52800 session 0x55b3cba95e00
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ccbbec00 session 0x55b3ce126f00
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ccbbec00 session 0x55b3ce569860
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cb6afc00 session 0x55b3cd3b32c0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbcb5c00 session 0x55b3ce0e8b40
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee83c000/0x0/0x4ffc00000, data 0x24614dd/0x25f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274112512 unmapped: 46235648 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3125818 data_alloc: 218103808 data_used: 8536064
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274112512 unmapped: 46235648 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274120704 unmapped: 46227456 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274120704 unmapped: 46227456 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274120704 unmapped: 46227456 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee62f000/0x0/0x4ffc00000, data 0x266e4dd/0x27ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274120704 unmapped: 46227456 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3125818 data_alloc: 218103808 data_used: 8536064
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f4000 session 0x55b3cbdc74a0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274120704 unmapped: 46227456 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274128896 unmapped: 46219264 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee62f000/0x0/0x4ffc00000, data 0x266e4dd/0x27ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274128896 unmapped: 46219264 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.248894691s of 10.399340630s, submitted: 34
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ca5d6800 session 0x55b3ccadb0e0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee62f000/0x0/0x4ffc00000, data 0x266e4dd/0x27ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274153472 unmapped: 46194688 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274153472 unmapped: 46194688 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3143644 data_alloc: 218103808 data_used: 10674176
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274448384 unmapped: 45899776 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274448384 unmapped: 45899776 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee62e000/0x0/0x4ffc00000, data 0x266e500/0x2800000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274448384 unmapped: 45899776 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274456576 unmapped: 45891584 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274456576 unmapped: 45891584 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3193564 data_alloc: 234881024 data_used: 17571840
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee62e000/0x0/0x4ffc00000, data 0x266e500/0x2800000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274456576 unmapped: 45891584 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274456576 unmapped: 45891584 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 274456576 unmapped: 45891584 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.857639313s of 10.003479004s, submitted: 19
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278192128 unmapped: 42156032 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278331392 unmapped: 42016768 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3285960 data_alloc: 234881024 data_used: 18231296
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4edc33000/0x0/0x4ffc00000, data 0x3069500/0x31fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [0,0,0,3])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279109632 unmapped: 41238528 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279109632 unmapped: 41238528 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed842000/0x0/0x4ffc00000, data 0x345a500/0x35ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 41205760 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 41205760 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279207936 unmapped: 41140224 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3326758 data_alloc: 234881024 data_used: 18632704
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf02dc00 session 0x55b3cba94000
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfeadc00 session 0x55b3cdc6cd20
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279216128 unmapped: 41132032 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed836000/0x0/0x4ffc00000, data 0x3466500/0x35f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdc63c00 session 0x55b3cb986d20
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279232512 unmapped: 41115648 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279232512 unmapped: 41115648 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279232512 unmapped: 41115648 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d174c000 session 0x55b3cdc6cf00
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.617760658s of 11.051534653s, submitted: 118
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdce4c00 session 0x55b3cbe8d860
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276316160 unmapped: 44032000 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3053456 data_alloc: 218103808 data_used: 10018816
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf614400 session 0x55b3cbd421e0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44023808 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44023808 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef3e5000/0x0/0x4ffc00000, data 0x18b7500/0x1a49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44023808 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44023808 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44023808 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046420 data_alloc: 218103808 data_used: 9908224
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44023808 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef3e5000/0x0/0x4ffc00000, data 0x18b7500/0x1a49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [0,0,3,2,1])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfeac400 session 0x55b3ccadc1e0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfbc0000 session 0x55b3cbe3e960
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d1d16800 session 0x55b3cb99b860
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf615000 session 0x55b3cb9414a0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf615000 session 0x55b3cfba3680
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276652032 unmapped: 43696128 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eeb56000/0x0/0x4ffc00000, data 0x2145562/0x22d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276652032 unmapped: 43696128 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276652032 unmapped: 43696128 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276652032 unmapped: 43696128 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3122233 data_alloc: 218103808 data_used: 9908224
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276652032 unmapped: 43696128 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276652032 unmapped: 43696128 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276660224 unmapped: 43687936 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eeb56000/0x0/0x4ffc00000, data 0x2145562/0x22d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276660224 unmapped: 43687936 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276660224 unmapped: 43687936 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3122233 data_alloc: 218103808 data_used: 9908224
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eeb56000/0x0/0x4ffc00000, data 0x2145562/0x22d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276660224 unmapped: 43687936 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf4e6c00 session 0x55b3ce58ad20
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276660224 unmapped: 43687936 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276668416 unmapped: 43679744 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eeb56000/0x0/0x4ffc00000, data 0x2145562/0x22d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276668416 unmapped: 43679744 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3122233 data_alloc: 218103808 data_used: 9908224
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276668416 unmapped: 43679744 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 43671552 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 43671552 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 23.100730896s of 23.290626526s, submitted: 66
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 43671552 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 43671552 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eeb54000/0x0/0x4ffc00000, data 0x2146562/0x22d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [0,0,0,0,0,2])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbcb5800 session 0x55b3ce5af680
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf615c00 session 0x55b3cdc6c5a0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdce5000 session 0x55b3cfba34a0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbcb5800 session 0x55b3ce5a34a0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdce5000 session 0x55b3d0b65c20
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3141437 data_alloc: 218103808 data_used: 9908224
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276635648 unmapped: 43712512 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee855000/0x0/0x4ffc00000, data 0x2446562/0x25d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276635648 unmapped: 43712512 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278315008 unmapped: 42033152 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278315008 unmapped: 42033152 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278315008 unmapped: 42033152 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf615000 session 0x55b3cc7ec1e0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3195677 data_alloc: 234881024 data_used: 17453056
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278315008 unmapped: 42033152 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278315008 unmapped: 42033152 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee855000/0x0/0x4ffc00000, data 0x2446562/0x25d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279511040 unmapped: 40837120 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279511040 unmapped: 40837120 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee855000/0x0/0x4ffc00000, data 0x2446562/0x25d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279511040 unmapped: 40837120 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3217917 data_alloc: 234881024 data_used: 20598784
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279511040 unmapped: 40837120 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279511040 unmapped: 40837120 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.495318413s of 13.531723976s, submitted: 4
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 283041792 unmapped: 37306368 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 283328512 unmapped: 37019648 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4edf13000/0x0/0x4ffc00000, data 0x2d87562/0x2f1a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 283328512 unmapped: 37019648 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3298665 data_alloc: 234881024 data_used: 21823488
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 283328512 unmapped: 37019648 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285392896 unmapped: 34955264 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285392896 unmapped: 34955264 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285392896 unmapped: 34955264 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4edb38000/0x0/0x4ffc00000, data 0x3163562/0x32f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285392896 unmapped: 34955264 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3323501 data_alloc: 234881024 data_used: 21856256
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285401088 unmapped: 34947072 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285401088 unmapped: 34947072 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285401088 unmapped: 34947072 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4edb17000/0x0/0x4ffc00000, data 0x3184562/0x3317000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285401088 unmapped: 34947072 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.379812241s of 12.855323792s, submitted: 129
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfbc0800 session 0x55b3cdcb30e0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d512d000 session 0x55b3cb23e5a0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285401088 unmapped: 34947072 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbcb5800 session 0x55b3d0b64780
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3129701 data_alloc: 234881024 data_used: 13082624
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 283082752 unmapped: 37265408 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 283082752 unmapped: 37265408 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eed07000/0x0/0x4ffc00000, data 0x1f94500/0x2126000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 283082752 unmapped: 37265408 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 283090944 unmapped: 37257216 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eed07000/0x0/0x4ffc00000, data 0x1f94500/0x2126000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 283090944 unmapped: 37257216 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cb6afc00 session 0x55b3ce126780
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbcb5c00 session 0x55b3cb9990e0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2998871 data_alloc: 218103808 data_used: 5861376
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 283099136 unmapped: 37249024 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfa51000 session 0x55b3ccadd680
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 283115520 unmapped: 37232640 heap: 320348160 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce0e4000 session 0x55b3cba941e0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cb6afc00 session 0x55b3ccadd680
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbcb5800 session 0x55b3d0b64780
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbcb5c00 session 0x55b3cdcb30e0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfa51000 session 0x55b3cdc6c5a0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 283164672 unmapped: 41385984 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 283164672 unmapped: 41385984 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eea70000/0x0/0x4ffc00000, data 0x222d47b/0x23bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 283164672 unmapped: 41385984 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cd850800 session 0x55b3ce58ad20
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3097520 data_alloc: 218103808 data_used: 5857280
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cd850800 session 0x55b3cfba3680
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 283164672 unmapped: 41385984 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cb6afc00 session 0x55b3cb9414a0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.632447243s of 11.898548126s, submitted: 70
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbcb5800 session 0x55b3cb99b860
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 283164672 unmapped: 41385984 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 283164672 unmapped: 41385984 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eea6f000/0x0/0x4ffc00000, data 0x222d4ae/0x23bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 283164672 unmapped: 41385984 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284278784 unmapped: 40271872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3196983 data_alloc: 234881024 data_used: 17985536
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284278784 unmapped: 40271872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284278784 unmapped: 40271872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284278784 unmapped: 40271872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284278784 unmapped: 40271872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eea6f000/0x0/0x4ffc00000, data 0x222d4ae/0x23bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284278784 unmapped: 40271872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3196983 data_alloc: 234881024 data_used: 17985536
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284278784 unmapped: 40271872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284278784 unmapped: 40271872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eea6f000/0x0/0x4ffc00000, data 0x222d4ae/0x23bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284278784 unmapped: 40271872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eea6f000/0x0/0x4ffc00000, data 0x222d4ae/0x23bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.750034332s of 11.771839142s, submitted: 6
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 37773312 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286105600 unmapped: 38445056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3260713 data_alloc: 234881024 data_used: 18665472
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286105600 unmapped: 38445056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286105600 unmapped: 38445056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286105600 unmapped: 38445056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286105600 unmapped: 38445056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ede95000/0x0/0x4ffc00000, data 0x29f74ae/0x2b89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286105600 unmapped: 38445056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ede95000/0x0/0x4ffc00000, data 0x29f74ae/0x2b89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3260729 data_alloc: 234881024 data_used: 18665472
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286105600 unmapped: 38445056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbcb5c00 session 0x55b3cbd421e0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfa51000 session 0x55b3ce5ae5a0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282034176 unmapped: 42516480 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cb6afc00 session 0x55b3cbdc7c20
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282034176 unmapped: 42516480 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282034176 unmapped: 42516480 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282034176 unmapped: 42516480 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eef14000/0x0/0x4ffc00000, data 0x152a47b/0x16ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdce5c00 session 0x55b3cbf23860
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce707800 session 0x55b3cf5acf00
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3009038 data_alloc: 218103808 data_used: 5857280
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282034176 unmapped: 42516480 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.279493332s of 12.683292389s, submitted: 106
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf02dc00 session 0x55b3ccadba40
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279126016 unmapped: 45424640 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279126016 unmapped: 45424640 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efa40000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279126016 unmapped: 45424640 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279126016 unmapped: 45424640 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2938558 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279126016 unmapped: 45424640 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279126016 unmapped: 45424640 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279134208 unmapped: 45416448 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efa40000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279134208 unmapped: 45416448 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfbc0000 session 0x55b3cc598000
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfbc0000 session 0x55b3ce10c960
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cb6afc00 session 0x55b3cc7fe780
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdce5c00 session 0x55b3ce5694a0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce707800 session 0x55b3cf5ad860
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45408256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf02dc00 session 0x55b3cbe8c000
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf02dc00 session 0x55b3cba95e00
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cb6afc00 session 0x55b3cd3b32c0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf4e7800 session 0x55b3d0b65c20
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2990635 data_alloc: 218103808 data_used: 2686976
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45408256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef374000/0x0/0x4ffc00000, data 0x15184ed/0x16aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45408256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45408256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef374000/0x0/0x4ffc00000, data 0x15184ed/0x16aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45408256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef374000/0x0/0x4ffc00000, data 0x15184ed/0x16aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45408256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2990635 data_alloc: 218103808 data_used: 2686976
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279150592 unmapped: 45400064 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef374000/0x0/0x4ffc00000, data 0x15184ed/0x16aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef374000/0x0/0x4ffc00000, data 0x15184ed/0x16aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d174b800 session 0x55b3cb941e00
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279150592 unmapped: 45400064 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfbc0800 session 0x55b3ce10cb40
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279150592 unmapped: 45400064 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfbc0800 session 0x55b3d0b645a0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.588605881s of 17.711309433s, submitted: 27
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cb6afc00 session 0x55b3cde570e0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279150592 unmapped: 45400064 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279150592 unmapped: 45400064 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3002713 data_alloc: 218103808 data_used: 4149248
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279158784 unmapped: 45391872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef373000/0x0/0x4ffc00000, data 0x15184fd/0x16ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279175168 unmapped: 45375488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279175168 unmapped: 45375488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279175168 unmapped: 45375488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279175168 unmapped: 45375488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3042873 data_alloc: 218103808 data_used: 9805824
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279175168 unmapped: 45375488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef373000/0x0/0x4ffc00000, data 0x15184fd/0x16ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279175168 unmapped: 45375488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef373000/0x0/0x4ffc00000, data 0x15184fd/0x16ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279175168 unmapped: 45375488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279175168 unmapped: 45375488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279175168 unmapped: 45375488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.889083862s of 11.892253876s, submitted: 1
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3048489 data_alloc: 218103808 data_used: 9805824
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279175168 unmapped: 45375488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279109632 unmapped: 45441024 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef2c2000/0x0/0x4ffc00000, data 0x15c94fd/0x175c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279109632 unmapped: 45441024 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279109632 unmapped: 45441024 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279109632 unmapped: 45441024 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef2c2000/0x0/0x4ffc00000, data 0x15c94fd/0x175c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef2c2000/0x0/0x4ffc00000, data 0x15c94fd/0x175c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3050563 data_alloc: 218103808 data_used: 9805824
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279109632 unmapped: 45441024 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279109632 unmapped: 45441024 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279109632 unmapped: 45441024 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef2c2000/0x0/0x4ffc00000, data 0x15c94fd/0x175c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279109632 unmapped: 45441024 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef2c2000/0x0/0x4ffc00000, data 0x15c94fd/0x175c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279109632 unmapped: 45441024 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef2c2000/0x0/0x4ffc00000, data 0x15c94fd/0x175c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3050563 data_alloc: 218103808 data_used: 9805824
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279117824 unmapped: 45432832 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279117824 unmapped: 45432832 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279117824 unmapped: 45432832 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef2c2000/0x0/0x4ffc00000, data 0x15c94fd/0x175c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279117824 unmapped: 45432832 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 35K writes, 139K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.03 MB/s#012Cumulative WAL: 35K writes, 12K syncs, 2.74 writes per sync, written: 0.13 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4462 writes, 17K keys, 4462 commit groups, 1.0 writes per commit group, ingest: 20.06 MB, 0.03 MB/s#012Interval WAL: 4462 writes, 1787 syncs, 2.50 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279117824 unmapped: 45432832 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3050563 data_alloc: 218103808 data_used: 9805824
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279117824 unmapped: 45432832 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.724911690s of 15.823020935s, submitted: 21
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd6e800 session 0x55b3cbf234a0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70b000 session 0x55b3ccad7c20
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d1d14400 session 0x55b3ccadbe00
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef2c2000/0x0/0x4ffc00000, data 0x15c94fd/0x175c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [0,1])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d1d14400 session 0x55b3ce1261e0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cb6afc00 session 0x55b3ce58bc20
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279134208 unmapped: 45416448 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef1f3000/0x0/0x4ffc00000, data 0x169755f/0x182b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279134208 unmapped: 45416448 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45408256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45408256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3065094 data_alloc: 218103808 data_used: 9809920
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45408256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45408256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef1f3000/0x0/0x4ffc00000, data 0x169755f/0x182b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45408256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbcb4c00 session 0x55b3ce5af680
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf02c000 session 0x55b3cde57680
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45408256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd6f000 session 0x55b3cba94000
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd6f000 session 0x55b3ccadb0e0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279453696 unmapped: 45096960 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3068328 data_alloc: 218103808 data_used: 9809920
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279453696 unmapped: 45096960 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef1ce000/0x0/0x4ffc00000, data 0x16bb56f/0x1850000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279470080 unmapped: 45080576 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279470080 unmapped: 45080576 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279470080 unmapped: 45080576 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279470080 unmapped: 45080576 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef1ce000/0x0/0x4ffc00000, data 0x16bb56f/0x1850000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3073608 data_alloc: 218103808 data_used: 10457088
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279470080 unmapped: 45080576 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279470080 unmapped: 45080576 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.005136490s of 16.128065109s, submitted: 29
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279470080 unmapped: 45080576 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef1cc000/0x0/0x4ffc00000, data 0x16bc56f/0x1851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279470080 unmapped: 45080576 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279470080 unmapped: 45080576 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3073916 data_alloc: 218103808 data_used: 10457088
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279470080 unmapped: 45080576 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eebb8000/0x0/0x4ffc00000, data 0x1cd156f/0x1e66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279379968 unmapped: 45170688 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279257088 unmapped: 45293568 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279691264 unmapped: 44859392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279691264 unmapped: 44859392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3146050 data_alloc: 218103808 data_used: 10694656
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279691264 unmapped: 44859392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279691264 unmapped: 44859392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eea54000/0x0/0x4ffc00000, data 0x1e2d56f/0x1fc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279691264 unmapped: 44859392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279691264 unmapped: 44859392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.522329330s of 11.824616432s, submitted: 88
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279814144 unmapped: 44736512 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eea3d000/0x0/0x4ffc00000, data 0x1e4c56f/0x1fe1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3140686 data_alloc: 218103808 data_used: 10698752
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279814144 unmapped: 44736512 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279814144 unmapped: 44736512 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279814144 unmapped: 44736512 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279814144 unmapped: 44736512 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf613800 session 0x55b3cdc6cd20
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf611400 session 0x55b3cb9981e0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279822336 unmapped: 44728320 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cd84ec00 session 0x55b3cbe8dc20
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3061845 data_alloc: 218103808 data_used: 9809920
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279822336 unmapped: 44728320 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef0d8000/0x0/0x4ffc00000, data 0x15ca4fd/0x175d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279822336 unmapped: 44728320 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279822336 unmapped: 44728320 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279822336 unmapped: 44728320 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279822336 unmapped: 44728320 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3061845 data_alloc: 218103808 data_used: 9809920
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279822336 unmapped: 44728320 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef0d8000/0x0/0x4ffc00000, data 0x15ca4fd/0x175d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279822336 unmapped: 44728320 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.392840385s of 13.488458633s, submitted: 24
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf02dc00 session 0x55b3ce5af4a0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf4e7800 session 0x55b3cd3b2f00
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279822336 unmapped: 44728320 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf02dc00 session 0x55b3cd3b30e0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271908864 unmapped: 52641792 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271908864 unmapped: 52641792 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956124 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271908864 unmapped: 52641792 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271908864 unmapped: 52641792 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efa3f000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271908864 unmapped: 52641792 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efa3f000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271908864 unmapped: 52641792 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271908864 unmapped: 52641792 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956124 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271908864 unmapped: 52641792 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efa3f000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271917056 unmapped: 52633600 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271917056 unmapped: 52633600 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271917056 unmapped: 52633600 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271917056 unmapped: 52633600 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efa3f000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956124 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271917056 unmapped: 52633600 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271917056 unmapped: 52633600 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271917056 unmapped: 52633600 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.573257446s of 15.623365402s, submitted: 15
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271925248 unmapped: 52625408 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271958016 unmapped: 52592640 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2955948 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271958016 unmapped: 52592640 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efa40000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efa40000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271958016 unmapped: 52592640 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271958016 unmapped: 52592640 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271958016 unmapped: 52592640 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271958016 unmapped: 52592640 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2955948 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271958016 unmapped: 52592640 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efa40000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271966208 unmapped: 52584448 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271966208 unmapped: 52584448 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271966208 unmapped: 52584448 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271966208 unmapped: 52584448 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2955948 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271966208 unmapped: 52584448 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271966208 unmapped: 52584448 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efa40000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271966208 unmapped: 52584448 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271966208 unmapped: 52584448 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271974400 unmapped: 52576256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2955948 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271974400 unmapped: 52576256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efa40000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271974400 unmapped: 52576256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271982592 unmapped: 52568064 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271982592 unmapped: 52568064 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271982592 unmapped: 52568064 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2955948 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271982592 unmapped: 52568064 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271990784 unmapped: 52559872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efa40000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271990784 unmapped: 52559872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271990784 unmapped: 52559872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 26.134544373s of 26.442840576s, submitted: 90
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdc62c00 session 0x55b3ccadba40
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd6fc00 session 0x55b3cbf23860
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfa50000 session 0x55b3cbdc7c20
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfa50000 session 0x55b3cbd421e0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d512e400 session 0x55b3cb99b860
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271990784 unmapped: 52559872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984026 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271990784 unmapped: 52559872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef6be000/0x0/0x4ffc00000, data 0x11d047b/0x1360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271990784 unmapped: 52559872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271990784 unmapped: 52559872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef6be000/0x0/0x4ffc00000, data 0x11d047b/0x1360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271990784 unmapped: 52559872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271990784 unmapped: 52559872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984026 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271990784 unmapped: 52559872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271998976 unmapped: 52551680 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271998976 unmapped: 52551680 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271998976 unmapped: 52551680 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70d800 session 0x55b3cb9414a0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef6be000/0x0/0x4ffc00000, data 0x11d047b/0x1360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271998976 unmapped: 52551680 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984026 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271998976 unmapped: 52551680 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef6be000/0x0/0x4ffc00000, data 0x11d047b/0x1360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272007168 unmapped: 52543488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272007168 unmapped: 52543488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272007168 unmapped: 52543488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272007168 unmapped: 52543488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3010266 data_alloc: 218103808 data_used: 6356992
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272007168 unmapped: 52543488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef6be000/0x0/0x4ffc00000, data 0x11d047b/0x1360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272007168 unmapped: 52543488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272007168 unmapped: 52543488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272007168 unmapped: 52543488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef6be000/0x0/0x4ffc00000, data 0x11d047b/0x1360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272015360 unmapped: 52535296 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.337787628s of 21.399259567s, submitted: 5
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3042632 data_alloc: 218103808 data_used: 6356992
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275701760 unmapped: 48848896 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275709952 unmapped: 48840704 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275587072 unmapped: 48963584 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275587072 unmapped: 48963584 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eee85000/0x0/0x4ffc00000, data 0x1a0947b/0x1b99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275587072 unmapped: 48963584 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3081410 data_alloc: 218103808 data_used: 6582272
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275587072 unmapped: 48963584 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eee85000/0x0/0x4ffc00000, data 0x1a0947b/0x1b99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275587072 unmapped: 48963584 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eee85000/0x0/0x4ffc00000, data 0x1a0947b/0x1b99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eee85000/0x0/0x4ffc00000, data 0x1a0947b/0x1b99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275587072 unmapped: 48963584 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eee85000/0x0/0x4ffc00000, data 0x1a0947b/0x1b99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275587072 unmapped: 48963584 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275595264 unmapped: 48955392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3081410 data_alloc: 218103808 data_used: 6582272
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275595264 unmapped: 48955392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275595264 unmapped: 48955392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eee85000/0x0/0x4ffc00000, data 0x1a0947b/0x1b99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275595264 unmapped: 48955392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275595264 unmapped: 48955392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eee85000/0x0/0x4ffc00000, data 0x1a0947b/0x1b99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275595264 unmapped: 48955392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd6f000 session 0x55b3ce5a25a0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf02ac00 session 0x55b3d0b64b40
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf02ac00 session 0x55b3cb986960
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd6f000 session 0x55b3ce569a40
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.855088234s of 15.102807999s, submitted: 57
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3082882 data_alloc: 218103808 data_used: 6582272
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275603456 unmapped: 48947200 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70d800 session 0x55b3cfba2d20
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfa50000 session 0x55b3ce39d0e0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d512e400 session 0x55b3ce1274a0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d512e400 session 0x55b3cdc6d0e0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd6f000 session 0x55b3ce58ab40
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275857408 unmapped: 48693248 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275865600 unmapped: 48685056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee928000/0x0/0x4ffc00000, data 0x1f654dd/0x20f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275865600 unmapped: 48685056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275865600 unmapped: 48685056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee928000/0x0/0x4ffc00000, data 0x1f654dd/0x20f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3126608 data_alloc: 218103808 data_used: 6582272
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275865600 unmapped: 48685056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275865600 unmapped: 48685056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275865600 unmapped: 48685056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee928000/0x0/0x4ffc00000, data 0x1f654dd/0x20f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275865600 unmapped: 48685056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275865600 unmapped: 48685056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd51c00 session 0x55b3cbe8c780
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3126757 data_alloc: 218103808 data_used: 6582272
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275808256 unmapped: 48742400 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.413484573s of 10.376444817s, submitted: 44
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275808256 unmapped: 48742400 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275808256 unmapped: 48742400 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275808256 unmapped: 48742400 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee927000/0x0/0x4ffc00000, data 0x1f65500/0x20f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275816448 unmapped: 48734208 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3157769 data_alloc: 218103808 data_used: 10874880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee927000/0x0/0x4ffc00000, data 0x1f65500/0x20f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275816448 unmapped: 48734208 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275816448 unmapped: 48734208 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275816448 unmapped: 48734208 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275816448 unmapped: 48734208 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee925000/0x0/0x4ffc00000, data 0x1f66500/0x20f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275816448 unmapped: 48734208 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3158077 data_alloc: 218103808 data_used: 10874880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275816448 unmapped: 48734208 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 14 05:59:21 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/755925548' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275816448 unmapped: 48734208 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee925000/0x0/0x4ffc00000, data 0x1f66500/0x20f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275816448 unmapped: 48734208 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.610369682s of 12.620978355s, submitted: 2
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276971520 unmapped: 47579136 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276971520 unmapped: 47579136 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3201721 data_alloc: 218103808 data_used: 10940416
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277209088 unmapped: 47341568 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277209088 unmapped: 47341568 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed285000/0x0/0x4ffc00000, data 0x2467500/0x25f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277209088 unmapped: 47341568 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277209088 unmapped: 47341568 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277209088 unmapped: 47341568 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3201721 data_alloc: 218103808 data_used: 10940416
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277209088 unmapped: 47341568 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277331968 unmapped: 47218688 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277331968 unmapped: 47218688 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed268000/0x0/0x4ffc00000, data 0x2484500/0x2616000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277331968 unmapped: 47218688 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed268000/0x0/0x4ffc00000, data 0x2484500/0x2616000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed268000/0x0/0x4ffc00000, data 0x2484500/0x2616000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277340160 unmapped: 47210496 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3200861 data_alloc: 218103808 data_used: 10952704
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277340160 unmapped: 47210496 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.501975060s of 12.738298416s, submitted: 59
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f4c00 session 0x55b3cb987860
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d1d15800 session 0x55b3ce126f00
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277340160 unmapped: 47210496 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd51c00 session 0x55b3ccaa0780
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277340160 unmapped: 47210496 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277340160 unmapped: 47210496 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4edce3000/0x0/0x4ffc00000, data 0x1a0a47b/0x1b9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277340160 unmapped: 47210496 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70c800 session 0x55b3cfba3680
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d135c000 session 0x55b3cbdc7680
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2974230 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276422656 unmapped: 48128000 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70d400 session 0x55b3d0b641e0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276422656 unmapped: 48128000 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276422656 unmapped: 48128000 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276430848 unmapped: 48119808 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276430848 unmapped: 48119808 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2974230 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276430848 unmapped: 48119808 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276430848 unmapped: 48119808 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276430848 unmapped: 48119808 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276430848 unmapped: 48119808 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276439040 unmapped: 48111616 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2974230 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276439040 unmapped: 48111616 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276439040 unmapped: 48111616 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276439040 unmapped: 48111616 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276439040 unmapped: 48111616 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276439040 unmapped: 48111616 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2974230 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276439040 unmapped: 48111616 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276439040 unmapped: 48111616 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276447232 unmapped: 48103424 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276447232 unmapped: 48103424 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276447232 unmapped: 48103424 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2974230 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276447232 unmapped: 48103424 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276447232 unmapped: 48103424 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276447232 unmapped: 48103424 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276447232 unmapped: 48103424 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276447232 unmapped: 48103424 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2974230 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276463616 unmapped: 48087040 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276463616 unmapped: 48087040 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276463616 unmapped: 48087040 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276471808 unmapped: 48078848 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276471808 unmapped: 48078848 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2974230 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276471808 unmapped: 48078848 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276471808 unmapped: 48078848 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276471808 unmapped: 48078848 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276480000 unmapped: 48070656 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276480000 unmapped: 48070656 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2974230 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276480000 unmapped: 48070656 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276480000 unmapped: 48070656 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276480000 unmapped: 48070656 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276488192 unmapped: 48062464 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276488192 unmapped: 48062464 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2974230 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276488192 unmapped: 48062464 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276496384 unmapped: 48054272 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 46.562591553s of 46.725173950s, submitted: 51
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276496384 unmapped: 48054272 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf4e6400 session 0x55b3cbe8c1e0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf4e6400 session 0x55b3cc8025a0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd51c00 session 0x55b3ce10d2c0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70c800 session 0x55b3ce10d0e0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70d400 session 0x55b3ce5ae000
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277823488 unmapped: 50405376 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4edeb5000/0x0/0x4ffc00000, data 0x18384dd/0x19c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277823488 unmapped: 50405376 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3058685 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277823488 unmapped: 50405376 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277823488 unmapped: 50405376 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277823488 unmapped: 50405376 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277823488 unmapped: 50405376 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277831680 unmapped: 50397184 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4edeb5000/0x0/0x4ffc00000, data 0x18384dd/0x19c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3058685 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277831680 unmapped: 50397184 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277831680 unmapped: 50397184 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d512e800 session 0x55b3ce0e9a40
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277929984 unmapped: 50298880 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ede91000/0x0/0x4ffc00000, data 0x185c4dd/0x19ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.344988823s of 10.475560188s, submitted: 43
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277929984 unmapped: 50298880 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277929984 unmapped: 50298880 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063134 data_alloc: 218103808 data_used: 2838528
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277938176 unmapped: 50290688 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278339584 unmapped: 49889280 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278339584 unmapped: 49889280 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ede91000/0x0/0x4ffc00000, data 0x185c4dd/0x19ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ede91000/0x0/0x4ffc00000, data 0x185c4dd/0x19ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278339584 unmapped: 49889280 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ede91000/0x0/0x4ffc00000, data 0x185c4dd/0x19ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278347776 unmapped: 49881088 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3134014 data_alloc: 234881024 data_used: 12783616
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278347776 unmapped: 49881088 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278347776 unmapped: 49881088 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ede91000/0x0/0x4ffc00000, data 0x185c4dd/0x19ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278347776 unmapped: 49881088 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278347776 unmapped: 49881088 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278347776 unmapped: 49881088 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ede91000/0x0/0x4ffc00000, data 0x185c4dd/0x19ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3134494 data_alloc: 234881024 data_used: 12795904
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278347776 unmapped: 49881088 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.812140465s of 12.816347122s, submitted: 1
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284475392 unmapped: 43753472 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ebf2d000/0x0/0x4ffc00000, data 0x26204dd/0x27b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ebf2d000/0x0/0x4ffc00000, data 0x26204dd/0x27b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3255354 data_alloc: 234881024 data_used: 13996032
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ebf2d000/0x0/0x4ffc00000, data 0x26204dd/0x27b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250878 data_alloc: 234881024 data_used: 14000128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ebf09000/0x0/0x4ffc00000, data 0x26444dd/0x27d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.050657272s of 14.411172867s, submitted: 131
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ebf09000/0x0/0x4ffc00000, data 0x26444dd/0x27d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250758 data_alloc: 234881024 data_used: 14000128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf02d800 session 0x55b3cba952c0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5b800 session 0x55b3ccaa0f00
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cd84ec00 session 0x55b3ce0e83c0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5fc00 session 0x55b3cdcb23c0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d174bc00 session 0x55b3ce58b4a0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 47579136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 47579136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb526000/0x0/0x4ffc00000, data 0x30274dd/0x31b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 47579136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3326714 data_alloc: 234881024 data_used: 14000128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 47579136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb523000/0x0/0x4ffc00000, data 0x302a4dd/0x31bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 47579136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 47579136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdc63800 session 0x55b3cb941e00
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 47579136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce707400 session 0x55b3d0b65a40
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 47579136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf613c00 session 0x55b3cbd43e00
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfa53c00 session 0x55b3ccaa0000
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb523000/0x0/0x4ffc00000, data 0x302a4dd/0x31bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3326262 data_alloc: 234881024 data_used: 14000128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 47579136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 47579136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284942336 unmapped: 47489024 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 287031296 unmapped: 45400064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 287031296 unmapped: 45400064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb523000/0x0/0x4ffc00000, data 0x302a4dd/0x31bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3399382 data_alloc: 234881024 data_used: 21020672
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 287031296 unmapped: 45400064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 287031296 unmapped: 45400064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 287031296 unmapped: 45400064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb523000/0x0/0x4ffc00000, data 0x302a4dd/0x31bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.038463593s of 18.175695419s, submitted: 14
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 287039488 unmapped: 45391872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 287039488 unmapped: 45391872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb521000/0x0/0x4ffc00000, data 0x302b4dd/0x31bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3400218 data_alloc: 234881024 data_used: 21020672
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 287039488 unmapped: 45391872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 287039488 unmapped: 45391872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289325056 unmapped: 43106304 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289488896 unmapped: 42942464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289554432 unmapped: 42876928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450960 data_alloc: 234881024 data_used: 21569536
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289554432 unmapped: 42876928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb08e000/0x0/0x4ffc00000, data 0x34b74dd/0x3648000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289554432 unmapped: 42876928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289554432 unmapped: 42876928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289554432 unmapped: 42876928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289554432 unmapped: 42876928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450960 data_alloc: 234881024 data_used: 21569536
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289554432 unmapped: 42876928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb08e000/0x0/0x4ffc00000, data 0x34b74dd/0x3648000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289554432 unmapped: 42876928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289554432 unmapped: 42876928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb08e000/0x0/0x4ffc00000, data 0x34b74dd/0x3648000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.170224190s of 15.374196053s, submitted: 52
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289562624 unmapped: 42868736 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb08e000/0x0/0x4ffc00000, data 0x34b74dd/0x3648000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289562624 unmapped: 42868736 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb095000/0x0/0x4ffc00000, data 0x34b84dd/0x3649000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3449236 data_alloc: 234881024 data_used: 21671936
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289562624 unmapped: 42868736 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdce5400 session 0x55b3ccadc000
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd60400 session 0x55b3ccadb4a0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289562624 unmapped: 42868736 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdce5400 session 0x55b3cfba2b40
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ebefe000/0x0/0x4ffc00000, data 0x264f4dd/0x27e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286728192 unmapped: 45703168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286728192 unmapped: 45703168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286728192 unmapped: 45703168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3259330 data_alloc: 218103808 data_used: 10498048
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286728192 unmapped: 45703168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286728192 unmapped: 45703168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd51c00 session 0x55b3ce5ae1e0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70c800 session 0x55b3cde570e0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5d400 session 0x55b3cc7ec1e0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ebef2000/0x0/0x4ffc00000, data 0x265b4dd/0x27ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996996 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996996 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996996 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996996 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996996 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996996 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996996 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d174bc00 session 0x55b3ce5a2960
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf613c00 session 0x55b3cbe8c960
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f5c00 session 0x55b3ce0f2000
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbcb5800 session 0x55b3cde57680
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 44.481346130s of 44.651317596s, submitted: 51
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cd84e800 session 0x55b3cba94000
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cd84e800 session 0x55b3ccada000
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbcb5800 session 0x55b3ccad6d20
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f5c00 session 0x55b3cba94b40
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf613c00 session 0x55b3cf5acf00
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282976256 unmapped: 49455104 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282976256 unmapped: 49455104 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3021392 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282976256 unmapped: 49455104 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282976256 unmapped: 49455104 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed509000/0x0/0x4ffc00000, data 0x104448b/0x11d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282984448 unmapped: 49446912 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282984448 unmapped: 49446912 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282984448 unmapped: 49446912 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed509000/0x0/0x4ffc00000, data 0x104448b/0x11d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3021392 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282984448 unmapped: 49446912 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d135c400 session 0x55b3cbf22960
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed509000/0x0/0x4ffc00000, data 0x104448b/0x11d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282935296 unmapped: 49496064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282935296 unmapped: 49496064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282943488 unmapped: 49487872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282943488 unmapped: 49487872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038361 data_alloc: 218103808 data_used: 4456448
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4e4000/0x0/0x4ffc00000, data 0x10684ae/0x11fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282943488 unmapped: 49487872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282943488 unmapped: 49487872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282943488 unmapped: 49487872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282943488 unmapped: 49487872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282943488 unmapped: 49487872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038361 data_alloc: 218103808 data_used: 4456448
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282943488 unmapped: 49487872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4e4000/0x0/0x4ffc00000, data 0x10684ae/0x11fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4e4000/0x0/0x4ffc00000, data 0x10684ae/0x11fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282951680 unmapped: 49479680 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4e4000/0x0/0x4ffc00000, data 0x10684ae/0x11fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.536443710s of 19.611005783s, submitted: 14
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 283099136 unmapped: 49332224 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285564928 unmapped: 46866432 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285589504 unmapped: 46841856 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095057 data_alloc: 218103808 data_used: 5337088
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285589504 unmapped: 46841856 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285589504 unmapped: 46841856 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed0f1000/0x0/0x4ffc00000, data 0x14454ae/0x15d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285589504 unmapped: 46841856 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285655040 unmapped: 46776320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284983296 unmapped: 47448064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3086421 data_alloc: 218103808 data_used: 5337088
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284983296 unmapped: 47448064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284983296 unmapped: 47448064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284983296 unmapped: 47448064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed0e6000/0x0/0x4ffc00000, data 0x14664ae/0x15f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284983296 unmapped: 47448064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284983296 unmapped: 47448064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3086741 data_alloc: 218103808 data_used: 5345280
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284983296 unmapped: 47448064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed0e6000/0x0/0x4ffc00000, data 0x14664ae/0x15f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284991488 unmapped: 47439872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.636660576s of 14.872914314s, submitted: 73
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfa53c00 session 0x55b3ccaddc20
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d512d800 session 0x55b3ce5a3860
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70d800 session 0x55b3d0b64d20
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285499392 unmapped: 46931968 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfbc0000 session 0x55b3cc7ffc20
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f4c00 session 0x55b3ccaddc20
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285499392 unmapped: 46931968 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285499392 unmapped: 46931968 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec857000/0x0/0x4ffc00000, data 0x1cf54ae/0x1e87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3157783 data_alloc: 218103808 data_used: 5345280
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285499392 unmapped: 46931968 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285499392 unmapped: 46931968 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285499392 unmapped: 46931968 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec857000/0x0/0x4ffc00000, data 0x1cf54ae/0x1e87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5f000 session 0x55b3cbf22960
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285499392 unmapped: 46931968 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285499392 unmapped: 46931968 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3198420 data_alloc: 218103808 data_used: 10727424
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286261248 unmapped: 46170112 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 45809664 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 45809664 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec857000/0x0/0x4ffc00000, data 0x1cf54ae/0x1e87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 45809664 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 45809664 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3214100 data_alloc: 234881024 data_used: 12992512
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 45809664 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 45809664 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 45809664 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec857000/0x0/0x4ffc00000, data 0x1cf54ae/0x1e87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 45809664 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.728010178s of 16.851394653s, submitted: 25
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 45809664 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3336672 data_alloc: 234881024 data_used: 14049280
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 36315136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 36978688 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb921000/0x0/0x4ffc00000, data 0x2c2a4ae/0x2dbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb921000/0x0/0x4ffc00000, data 0x2c2a4ae/0x2dbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3340164 data_alloc: 234881024 data_used: 14286848
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb91f000/0x0/0x4ffc00000, data 0x2c2d4ae/0x2dbf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3337508 data_alloc: 234881024 data_used: 14286848
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70c800 session 0x55b3ccada000
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.508075714s of 14.903322220s, submitted: 136
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d1d15c00 session 0x55b3cb940f00
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290496512 unmapped: 41934848 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70d800 session 0x55b3ccadc1e0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed0e0000/0x0/0x4ffc00000, data 0x146c4ae/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3102246 data_alloc: 218103808 data_used: 5394432
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290496512 unmapped: 41934848 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290504704 unmapped: 41926656 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed0d9000/0x0/0x4ffc00000, data 0x14734ae/0x1605000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290504704 unmapped: 41926656 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbcb5800 session 0x55b3cc7ff2c0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f5c00 session 0x55b3cba95c20
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d2ff1400 session 0x55b3cde57e00
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019668 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019668 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019668 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290521088 unmapped: 41910272 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290521088 unmapped: 41910272 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290521088 unmapped: 41910272 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290521088 unmapped: 41910272 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019668 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290521088 unmapped: 41910272 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290521088 unmapped: 41910272 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290521088 unmapped: 41910272 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290529280 unmapped: 41902080 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290529280 unmapped: 41902080 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019668 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290529280 unmapped: 41902080 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290529280 unmapped: 41902080 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290537472 unmapped: 41893888 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290537472 unmapped: 41893888 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290537472 unmapped: 41893888 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019668 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290537472 unmapped: 41893888 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290537472 unmapped: 41893888 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290537472 unmapped: 41893888 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290537472 unmapped: 41893888 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290537472 unmapped: 41893888 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019668 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290545664 unmapped: 41885696 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290545664 unmapped: 41885696 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290545664 unmapped: 41885696 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290545664 unmapped: 41885696 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 39.965858459s of 40.134490967s, submitted: 59
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf610800 session 0x55b3ce5a32c0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ccbbec00 session 0x55b3ce58a5a0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfbc1c00 session 0x55b3ce39cd20
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfbc1c00 session 0x55b3ce126780
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f5c00 session 0x55b3cc5990e0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290545664 unmapped: 41885696 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3030916 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290545664 unmapped: 41885696 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed576000/0x0/0x4ffc00000, data 0xfd847b/0x1168000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 41877504 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 41877504 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 41877504 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 41877504 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3030916 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 41877504 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed576000/0x0/0x4ffc00000, data 0xfd847b/0x1168000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 41877504 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d512c000 session 0x55b3cd3b2000
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 41877504 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290562048 unmapped: 41869312 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290562048 unmapped: 41869312 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed552000/0x0/0x4ffc00000, data 0xffc47b/0x118c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045352 data_alloc: 218103808 data_used: 4300800
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290562048 unmapped: 41869312 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290562048 unmapped: 41869312 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed552000/0x0/0x4ffc00000, data 0xffc47b/0x118c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290562048 unmapped: 41869312 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290562048 unmapped: 41869312 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290562048 unmapped: 41869312 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed552000/0x0/0x4ffc00000, data 0xffc47b/0x118c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045352 data_alloc: 218103808 data_used: 4300800
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290562048 unmapped: 41869312 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed552000/0x0/0x4ffc00000, data 0xffc47b/0x118c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290562048 unmapped: 41869312 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed552000/0x0/0x4ffc00000, data 0xffc47b/0x118c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290570240 unmapped: 41861120 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290570240 unmapped: 41861120 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.726144791s of 19.748613358s, submitted: 2
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291405824 unmapped: 41025536 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3134708 data_alloc: 218103808 data_used: 5177344
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ecc45000/0x0/0x4ffc00000, data 0x190947b/0x1a99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23061 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3134868 data_alloc: 218103808 data_used: 5181440
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ecc45000/0x0/0x4ffc00000, data 0x190947b/0x1a99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ecc45000/0x0/0x4ffc00000, data 0x190947b/0x1a99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3135188 data_alloc: 218103808 data_used: 5189632
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ecc45000/0x0/0x4ffc00000, data 0x190947b/0x1a99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d135dc00 session 0x55b3cba943c0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5c000 session 0x55b3ce5aeb40
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5c000 session 0x55b3ce568960
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f5c00 session 0x55b3ce58b680
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.785821915s of 15.976529121s, submitted: 45
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfbc1c00 session 0x55b3ce58ab40
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d135dc00 session 0x55b3ce569a40
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d512c000 session 0x55b3cbe8da40
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d512c000 session 0x55b3cfba3a40
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f5c00 session 0x55b3ce58a960
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153364 data_alloc: 218103808 data_used: 5189632
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec945000/0x0/0x4ffc00000, data 0x1c0947b/0x1d99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292462592 unmapped: 39968768 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153364 data_alloc: 218103808 data_used: 5189632
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292462592 unmapped: 39968768 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d174c000 session 0x55b3cba95a40
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292618240 unmapped: 39813120 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec921000/0x0/0x4ffc00000, data 0x1c2d47b/0x1dbd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 39804928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 39804928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 39804928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3180149 data_alloc: 218103808 data_used: 8339456
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 39804928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec921000/0x0/0x4ffc00000, data 0x1c2d47b/0x1dbd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 39804928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 39804928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 39804928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 39804928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3180149 data_alloc: 218103808 data_used: 8339456
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.082937241s of 16.159908295s, submitted: 8
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 39804928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 39804928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec91f000/0x0/0x4ffc00000, data 0x1c2e47b/0x1dbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 39804928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293462016 unmapped: 38969344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293478400 unmapped: 38952960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec26e000/0x0/0x4ffc00000, data 0x22e047b/0x2470000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3241169 data_alloc: 218103808 data_used: 8445952
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293478400 unmapped: 38952960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293478400 unmapped: 38952960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293478400 unmapped: 38952960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293478400 unmapped: 38952960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293478400 unmapped: 38952960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3241169 data_alloc: 218103808 data_used: 8445952
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec26e000/0x0/0x4ffc00000, data 0x22e047b/0x2470000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293478400 unmapped: 38952960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293478400 unmapped: 38952960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293486592 unmapped: 38944768 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec26e000/0x0/0x4ffc00000, data 0x22e047b/0x2470000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293486592 unmapped: 38944768 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293486592 unmapped: 38944768 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3241169 data_alloc: 218103808 data_used: 8445952
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293486592 unmapped: 38944768 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec26e000/0x0/0x4ffc00000, data 0x22e047b/0x2470000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293486592 unmapped: 38944768 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293486592 unmapped: 38944768 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5e000 session 0x55b3cba945a0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.404949188s of 17.572023392s, submitted: 33
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d174b800 session 0x55b3cc599c20
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293502976 unmapped: 38928384 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f5c00 session 0x55b3ccadd860
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293519360 unmapped: 38912000 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3141441 data_alloc: 218103808 data_used: 5251072
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293519360 unmapped: 38912000 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293527552 unmapped: 38903808 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70dc00 session 0x55b3cc7ec5a0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cd84f400 session 0x55b3cbd43a40
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ecc44000/0x0/0x4ffc00000, data 0x190a47b/0x1a9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5e000 session 0x55b3cba952c0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed700000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3034629 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed700000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3034629 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed700000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed700000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3034629 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed700000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed700000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3034629 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed700000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3034629 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289759232 unmapped: 42672128 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289759232 unmapped: 42672128 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289759232 unmapped: 42672128 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed700000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289759232 unmapped: 42672128 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289759232 unmapped: 42672128 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3034629 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289759232 unmapped: 42672128 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289759232 unmapped: 42672128 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289759232 unmapped: 42672128 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed700000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed700000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289767424 unmapped: 42663936 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289767424 unmapped: 42663936 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 36.787899017s of 36.916973114s, submitted: 30
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf615400 session 0x55b3ce5a21e0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd50c00 session 0x55b3cb23e5a0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf615400 session 0x55b3cbe3ed20
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f5c00 session 0x55b3cba94d20
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cd84f400 session 0x55b3ce58ba40
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075374 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289767424 unmapped: 42663936 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289767424 unmapped: 42663936 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289767424 unmapped: 42663936 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f5000/0x0/0x4ffc00000, data 0x12584dd/0x13e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289767424 unmapped: 42663936 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289767424 unmapped: 42663936 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075374 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289775616 unmapped: 42655744 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdc62c00 session 0x55b3cbdc7860
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdc62c00 session 0x55b3ce1274a0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289775616 unmapped: 42655744 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd50c00 session 0x55b3ce5a30e0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf6f4c00 session 0x55b3cfba25a0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289775616 unmapped: 42655744 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289775616 unmapped: 42655744 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f4000/0x0/0x4ffc00000, data 0x12584ed/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289775616 unmapped: 42655744 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f4000/0x0/0x4ffc00000, data 0x12584ed/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3096732 data_alloc: 218103808 data_used: 5402624
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289792000 unmapped: 42639360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289792000 unmapped: 42639360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289792000 unmapped: 42639360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f4000/0x0/0x4ffc00000, data 0x12584ed/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289792000 unmapped: 42639360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289792000 unmapped: 42639360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3096732 data_alloc: 218103808 data_used: 5402624
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289792000 unmapped: 42639360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289792000 unmapped: 42639360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f4000/0x0/0x4ffc00000, data 0x12584ed/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289792000 unmapped: 42639360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289792000 unmapped: 42639360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.343187332s of 19.452217102s, submitted: 33
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292077568 unmapped: 40353792 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3187434 data_alloc: 218103808 data_used: 5513216
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292077568 unmapped: 40353792 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec6fa000/0x0/0x4ffc00000, data 0x1e4a4ed/0x1fdc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291241984 unmapped: 41189376 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec6fd000/0x0/0x4ffc00000, data 0x1e4e4ed/0x1fe0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291241984 unmapped: 41189376 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291241984 unmapped: 41189376 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291241984 unmapped: 41189376 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3189592 data_alloc: 218103808 data_used: 5500928
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291241984 unmapped: 41189376 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291241984 unmapped: 41189376 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec661000/0x0/0x4ffc00000, data 0x1eeb4ed/0x207d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291250176 unmapped: 41181184 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291250176 unmapped: 41181184 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291250176 unmapped: 41181184 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188172 data_alloc: 218103808 data_used: 5505024
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291250176 unmapped: 41181184 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec661000/0x0/0x4ffc00000, data 0x1eeb4ed/0x207d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291250176 unmapped: 41181184 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291258368 unmapped: 41172992 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec661000/0x0/0x4ffc00000, data 0x1eeb4ed/0x207d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291258368 unmapped: 41172992 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291258368 unmapped: 41172992 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188172 data_alloc: 218103808 data_used: 5505024
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec661000/0x0/0x4ffc00000, data 0x1eeb4ed/0x207d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291258368 unmapped: 41172992 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.344091415s of 16.666501999s, submitted: 103
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5f000 session 0x55b3cb99b860
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70d000 session 0x55b3cb986960
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70d000 session 0x55b3cbe8c780
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd50c00 session 0x55b3cb987860
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291258368 unmapped: 41172992 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdc62c00 session 0x55b3cdcb25a0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5f000 session 0x55b3cb9863c0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf6f4c00 session 0x55b3cfba2f00
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf6f4c00 session 0x55b3cbe3e960
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd50c00 session 0x55b3cbdc6d20
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291266560 unmapped: 41164800 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec5d2000/0x0/0x4ffc00000, data 0x1f7a4ed/0x210c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291266560 unmapped: 41164800 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291266560 unmapped: 41164800 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3199049 data_alloc: 218103808 data_used: 5505024
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291266560 unmapped: 41164800 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291266560 unmapped: 41164800 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291266560 unmapped: 41164800 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291266560 unmapped: 41164800 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec5d2000/0x0/0x4ffc00000, data 0x1f7a4ed/0x210c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf614c00 session 0x55b3cfba3e00
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291282944 unmapped: 41148416 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3199181 data_alloc: 218103808 data_used: 5505024
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291323904 unmapped: 41107456 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec5d2000/0x0/0x4ffc00000, data 0x1f7a4ed/0x210c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291323904 unmapped: 41107456 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291323904 unmapped: 41107456 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291323904 unmapped: 41107456 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec5d2000/0x0/0x4ffc00000, data 0x1f7a4ed/0x210c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291323904 unmapped: 41107456 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3202541 data_alloc: 218103808 data_used: 6029312
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291323904 unmapped: 41107456 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291323904 unmapped: 41107456 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.294864655s of 16.384281158s, submitted: 18
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291323904 unmapped: 41107456 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec5d2000/0x0/0x4ffc00000, data 0x1f7a4ed/0x210c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291323904 unmapped: 41107456 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291332096 unmapped: 41099264 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec5d2000/0x0/0x4ffc00000, data 0x1f7a4ed/0x210c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3202957 data_alloc: 218103808 data_used: 6066176
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292683776 unmapped: 39747584 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292012032 unmapped: 40419328 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292478976 unmapped: 39952384 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292495360 unmapped: 39936000 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec1ae000/0x0/0x4ffc00000, data 0x23984ed/0x252a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 39919616 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3244059 data_alloc: 218103808 data_used: 6135808
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 39919616 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 39919616 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 39919616 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 39919616 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec194000/0x0/0x4ffc00000, data 0x23aa4ed/0x253c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 39919616 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3244059 data_alloc: 218103808 data_used: 6135808
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 39919616 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4800.1 total, 600.0 interval
Cumulative writes: 37K writes, 148K keys, 37K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
Cumulative WAL: 37K writes, 13K syncs, 2.73 writes per sync, written: 0.14 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 2257 writes, 9325 keys, 2257 commit groups, 1.0 writes per commit group, ingest: 11.79 MB, 0.02 MB/s
Interval WAL: 2257 writes, 876 syncs, 2.58 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 39919616 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 39919616 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 39919616 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 39919616 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec194000/0x0/0x4ffc00000, data 0x23aa4ed/0x253c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3244059 data_alloc: 218103808 data_used: 6135808
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 39911424 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec194000/0x0/0x4ffc00000, data 0x23aa4ed/0x253c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 39911424 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 39911424 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 39911424 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292528128 unmapped: 39903232 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3244059 data_alloc: 218103808 data_used: 6135808
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292528128 unmapped: 39903232 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec194000/0x0/0x4ffc00000, data 0x23aa4ed/0x253c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292528128 unmapped: 39903232 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292528128 unmapped: 39903232 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec194000/0x0/0x4ffc00000, data 0x23aa4ed/0x253c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292536320 unmapped: 39895040 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292536320 unmapped: 39895040 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3244059 data_alloc: 218103808 data_used: 6135808
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292536320 unmapped: 39895040 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec194000/0x0/0x4ffc00000, data 0x23aa4ed/0x253c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292536320 unmapped: 39895040 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 29.300401688s of 30.381093979s, submitted: 46
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ccb61400 session 0x55b3cbf234a0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdc63000 session 0x55b3cde56b40
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292536320 unmapped: 39895040 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd50c00 session 0x55b3cb99b2c0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292544512 unmapped: 39886848 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292544512 unmapped: 39886848 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3194413 data_alloc: 218103808 data_used: 5505024
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292544512 unmapped: 39886848 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf02ac00 session 0x55b3ce0f3860
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d174b000 session 0x55b3cba95a40
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292544512 unmapped: 39886848 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ccb61400 session 0x55b3ccadd680
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed6ff000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291127296 unmapped: 41304064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291127296 unmapped: 41304064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291127296 unmapped: 41304064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3051140 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291127296 unmapped: 41304064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291127296 unmapped: 41304064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed6ff000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291127296 unmapped: 41304064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291135488 unmapped: 41295872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291135488 unmapped: 41295872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3051140 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291135488 unmapped: 41295872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291135488 unmapped: 41295872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed6ff000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291135488 unmapped: 41295872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291135488 unmapped: 41295872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291135488 unmapped: 41295872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3051140 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291135488 unmapped: 41295872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed6ff000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291143680 unmapped: 41287680 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed6ff000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291143680 unmapped: 41287680 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed6ff000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291143680 unmapped: 41287680 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291143680 unmapped: 41287680 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3051140 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291151872 unmapped: 41279488 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed6ff000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291151872 unmapped: 41279488 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 24.373273849s of 24.570438385s, submitted: 43
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdce5c00 session 0x55b3d0b64960
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdce5c00 session 0x55b3cb998960
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd50c00 session 0x55b3cba943c0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f5000 session 0x55b3cb999680
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdc63c00 session 0x55b3ce39d860
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292216832 unmapped: 40214528 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292216832 unmapped: 40214528 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5c000 session 0x55b3ccadad20
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292159488 unmapped: 40271872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3099194 data_alloc: 218103808 data_used: 2686976
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed1fc000/0x0/0x4ffc00000, data 0x1350500/0x14e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292167680 unmapped: 40263680 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292184064 unmapped: 40247296 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5c000 session 0x55b3ce39dc20
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292184064 unmapped: 40247296 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd50c00 session 0x55b3ce126780
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf613800 session 0x55b3cf5ade00
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ece6e000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057926 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ece6e000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057926 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ece6e000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057926 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.473459244s of 18.660179138s, submitted: 60
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291807232 unmapped: 40624128 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057574 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057574 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057574 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291848192 unmapped: 40583168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291848192 unmapped: 40583168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291848192 unmapped: 40583168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291848192 unmapped: 40583168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057574 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291848192 unmapped: 40583168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291848192 unmapped: 40583168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291848192 unmapped: 40583168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291856384 unmapped: 40574976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291856384 unmapped: 40574976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057574 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291856384 unmapped: 40574976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291864576 unmapped: 40566784 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291864576 unmapped: 40566784 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291864576 unmapped: 40566784 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291864576 unmapped: 40566784 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057574 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291864576 unmapped: 40566784 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291864576 unmapped: 40566784 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291864576 unmapped: 40566784 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291864576 unmapped: 40566784 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291864576 unmapped: 40566784 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057574 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291872768 unmapped: 40558592 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291880960 unmapped: 40550400 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291880960 unmapped: 40550400 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291880960 unmapped: 40550400 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291880960 unmapped: 40550400 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057574 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291880960 unmapped: 40550400 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291880960 unmapped: 40550400 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291880960 unmapped: 40550400 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291889152 unmapped: 40542208 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291889152 unmapped: 40542208 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057574 data_alloc: 218103808 data_used: 2682880
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291889152 unmapped: 40542208 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291889152 unmapped: 40542208 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291889152 unmapped: 40542208 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 47.734951019s of 48.040054321s, submitted: 90
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 293 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 293 ms_handle_reset con 0x55b3cbd50000 session 0x55b3cba943c0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291897344 unmapped: 40534016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291905536 unmapped: 40525824 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3061748 data_alloc: 218103808 data_used: 2691072
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291905536 unmapped: 40525824 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291905536 unmapped: 40525824 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 293 heartbeat osd_stat(store_statfs(0x4ed2ec000/0x0/0x4ffc00000, data 0xe5004c/0xfe1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291921920 unmapped: 40509440 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291921920 unmapped: 40509440 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291938304 unmapped: 40493056 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291946496 unmapped: 40484864 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291954688 unmapped: 40476672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291954688 unmapped: 40476672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291954688 unmapped: 40476672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291954688 unmapped: 40476672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291954688 unmapped: 40476672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291954688 unmapped: 40476672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291954688 unmapped: 40476672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291971072 unmapped: 40460288 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291971072 unmapped: 40460288 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291971072 unmapped: 40460288 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291971072 unmapped: 40460288 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291971072 unmapped: 40460288 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291971072 unmapped: 40460288 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291971072 unmapped: 40460288 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291979264 unmapped: 40452096 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291979264 unmapped: 40452096 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291979264 unmapped: 40452096 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291987456 unmapped: 40443904 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291987456 unmapped: 40443904 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291987456 unmapped: 40443904 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291987456 unmapped: 40443904 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291987456 unmapped: 40443904 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291987456 unmapped: 40443904 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291995648 unmapped: 40435712 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291995648 unmapped: 40435712 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292012032 unmapped: 40419328 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292012032 unmapped: 40419328 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292012032 unmapped: 40419328 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292012032 unmapped: 40419328 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292012032 unmapped: 40419328 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292012032 unmapped: 40419328 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292020224 unmapped: 40411136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292020224 unmapped: 40411136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292028416 unmapped: 40402944 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292028416 unmapped: 40402944 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292028416 unmapped: 40402944 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292028416 unmapped: 40402944 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292028416 unmapped: 40402944 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292028416 unmapped: 40402944 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292028416 unmapped: 40402944 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292028416 unmapped: 40402944 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292036608 unmapped: 40394752 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292036608 unmapped: 40394752 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292044800 unmapped: 40386560 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292044800 unmapped: 40386560 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292044800 unmapped: 40386560 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292044800 unmapped: 40386560 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292044800 unmapped: 40386560 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292061184 unmapped: 40370176 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292061184 unmapped: 40370176 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292069376 unmapped: 40361984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292069376 unmapped: 40361984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292069376 unmapped: 40361984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292069376 unmapped: 40361984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292069376 unmapped: 40361984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292069376 unmapped: 40361984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292085760 unmapped: 40345600 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292085760 unmapped: 40345600 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292085760 unmapped: 40345600 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292085760 unmapped: 40345600 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292085760 unmapped: 40345600 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292085760 unmapped: 40345600 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292085760 unmapped: 40345600 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292085760 unmapped: 40345600 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292093952 unmapped: 40337408 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292093952 unmapped: 40337408 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292093952 unmapped: 40337408 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292093952 unmapped: 40337408 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292093952 unmapped: 40337408 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292093952 unmapped: 40337408 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292093952 unmapped: 40337408 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292093952 unmapped: 40337408 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292118528 unmapped: 40312832 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292126720 unmapped: 40304640 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292126720 unmapped: 40304640 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292126720 unmapped: 40304640 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292126720 unmapped: 40304640 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292126720 unmapped: 40304640 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292126720 unmapped: 40304640 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292126720 unmapped: 40304640 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292134912 unmapped: 40296448 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292134912 unmapped: 40296448 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292134912 unmapped: 40296448 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292134912 unmapped: 40296448 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292134912 unmapped: 40296448 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292134912 unmapped: 40296448 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292134912 unmapped: 40296448 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292143104 unmapped: 40288256 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 40280064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 40280064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 40280064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 40280064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292159488 unmapped: 40271872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292159488 unmapped: 40271872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292167680 unmapped: 40263680 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292167680 unmapped: 40263680 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292175872 unmapped: 40255488 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292175872 unmapped: 40255488 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292184064 unmapped: 40247296 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292184064 unmapped: 40247296 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292184064 unmapped: 40247296 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292184064 unmapped: 40247296 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292184064 unmapped: 40247296 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292200448 unmapped: 40230912 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292208640 unmapped: 40222720 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 113.866653442s of 113.942031860s, submitted: 31
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 294 ms_handle_reset con 0x55b3cdd5f800 session 0x55b3ccadd680
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292208640 unmapped: 40222720 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 294 ms_handle_reset con 0x55b3cbd50000 session 0x55b3cb99b2c0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 299098112 unmapped: 33333248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 299098112 unmapped: 33333248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2ea000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 299098112 unmapped: 33333248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3060002 data_alloc: 234881024 data_used: 11603968
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 299098112 unmapped: 33333248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 299098112 unmapped: 33333248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2ea000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 299098112 unmapped: 33333248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 299098112 unmapped: 33333248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 299098112 unmapped: 33333248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3060002 data_alloc: 234881024 data_used: 11603968
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 299130880 unmapped: 33300480 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 295 ms_handle_reset con 0x55b3cf4e7400 session 0x55b3cfba25a0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293896192 unmapped: 38535168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 295 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e365d/0x376000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293896192 unmapped: 38535168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.775878906s of 11.974079132s, submitted: 56
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 295 handle_osd_map epochs [295,296], i have 295, src has [1,296]
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293896192 unmapped: 38535168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 296 ms_handle_reset con 0x55b3cdd5cc00 session 0x55b3cc5990e0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 38526976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2954206 data_alloc: 218103808 data_used: 151552
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 38526976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 38526976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf54000/0x0/0x4ffc00000, data 0x1e520b/0x378000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 38526976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 296 handle_osd_map epochs [296,297], i have 296, src has [1,297]
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 38526976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 38526976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956988 data_alloc: 218103808 data_used: 151552
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 38526976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 297 heartbeat osd_stat(store_statfs(0x4edf52000/0x0/0x4ffc00000, data 0x1e6c6e/0x37b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 38526976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 38526976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 38526976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 38526976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.541853905s of 11.648766518s, submitted: 41
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2959762 data_alloc: 218103808 data_used: 151552
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 298 ms_handle_reset con 0x55b3cd84f000 session 0x55b3ce5aed20
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293937152 unmapped: 38494208 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293953536 unmapped: 38477824 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293953536 unmapped: 38477824 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293969920 unmapped: 38461440 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293969920 unmapped: 38461440 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293969920 unmapped: 38461440 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293978112 unmapped: 38453248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293978112 unmapped: 38453248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293978112 unmapped: 38453248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293978112 unmapped: 38453248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293978112 unmapped: 38453248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293986304 unmapped: 38445056 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 38428672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 38428672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 38428672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 38428672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 38428672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 38428672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 38428672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 38428672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 38420480 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 38420480 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294019072 unmapped: 38412288 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294019072 unmapped: 38412288 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294027264 unmapped: 38404096 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294027264 unmapped: 38404096 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294027264 unmapped: 38404096 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294027264 unmapped: 38404096 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 38387712 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 38387712 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 38387712 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 38387712 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 38387712 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 38387712 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 38387712 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 38387712 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294051840 unmapped: 38379520 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294051840 unmapped: 38379520 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294051840 unmapped: 38379520 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294051840 unmapped: 38379520 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294051840 unmapped: 38379520 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294051840 unmapped: 38379520 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294051840 unmapped: 38379520 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294060032 unmapped: 38371328 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294068224 unmapped: 38363136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294076416 unmapped: 38354944 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294084608 unmapped: 38346752 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 38338560 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 38338560 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 38338560 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 38338560 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 38338560 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 38338560 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294100992 unmapped: 38330368 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294100992 unmapped: 38330368 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294100992 unmapped: 38330368 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294100992 unmapped: 38330368 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294100992 unmapped: 38330368 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294100992 unmapped: 38330368 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294100992 unmapped: 38330368 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294109184 unmapped: 38322176 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294117376 unmapped: 38313984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294117376 unmapped: 38313984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 38305792 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 38305792 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 38305792 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 38305792 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 38305792 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294133760 unmapped: 38297600 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294133760 unmapped: 38297600 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294133760 unmapped: 38297600 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 106.011367798s of 106.076034546s, submitted: 19
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295182336 unmapped: 37249024 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295215104 unmapped: 54001664 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 299 ms_handle_reset con 0x55b3d135dc00 session 0x55b3cb940b40
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed748000/0x0/0x4ffc00000, data 0x9ea3f4/0xb85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295247872 unmapped: 53968896 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295247872 unmapped: 53968896 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 300 ms_handle_reset con 0x55b3d135dc00 session 0x55b3cbe8c000
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295264256 unmapped: 53952512 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3035247 data_alloc: 218103808 data_used: 167936
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295264256 unmapped: 53952512 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295264256 unmapped: 53952512 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295264256 unmapped: 53952512 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed743000/0x0/0x4ffc00000, data 0x9ebf94/0xb89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295264256 unmapped: 53952512 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295272448 unmapped: 53944320 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3035247 data_alloc: 218103808 data_used: 167936
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295272448 unmapped: 53944320 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295280640 unmapped: 53936128 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295280640 unmapped: 53936128 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed743000/0x0/0x4ffc00000, data 0x9ebf94/0xb89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295288832 unmapped: 53927936 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 53919744 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3035247 data_alloc: 218103808 data_used: 167936
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 53919744 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed743000/0x0/0x4ffc00000, data 0x9ebf94/0xb89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 53919744 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 53919744 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed743000/0x0/0x4ffc00000, data 0x9ebf94/0xb89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 53919744 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 53919744 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3035247 data_alloc: 218103808 data_used: 167936
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 53919744 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295305216 unmapped: 53911552 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3202: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295305216 unmapped: 53911552 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295305216 unmapped: 53911552 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed743000/0x0/0x4ffc00000, data 0x9ebf94/0xb89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295305216 unmapped: 53911552 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3035247 data_alloc: 218103808 data_used: 167936
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295305216 unmapped: 53911552 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295305216 unmapped: 53911552 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed743000/0x0/0x4ffc00000, data 0x9ebf94/0xb89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295305216 unmapped: 53911552 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295305216 unmapped: 53911552 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295321600 unmapped: 53895168 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3035247 data_alloc: 218103808 data_used: 167936
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 29.694004059s of 29.852605820s, submitted: 37
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 53886976 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 301 ms_handle_reset con 0x55b3cf60e000 session 0x55b3cbd825a0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295354368 unmapped: 53862400 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295354368 unmapped: 53862400 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 301 heartbeat osd_stat(store_statfs(0x4ed742000/0x0/0x4ffc00000, data 0x9edb1f/0xb8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 53854208 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 53854208 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3035515 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 53854208 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 301 heartbeat osd_stat(store_statfs(0x4ed742000/0x0/0x4ffc00000, data 0x9edb1f/0xb8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 53854208 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295370752 unmapped: 53846016 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 301 heartbeat osd_stat(store_statfs(0x4ed742000/0x0/0x4ffc00000, data 0x9edb1f/0xb8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295395328 unmapped: 53821440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295395328 unmapped: 53821440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295395328 unmapped: 53821440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295395328 unmapped: 53821440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295395328 unmapped: 53821440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295395328 unmapped: 53821440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295395328 unmapped: 53821440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 53813248 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 53813248 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 53813248 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295411712 unmapped: 53805056 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295411712 unmapped: 53805056 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295411712 unmapped: 53805056 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295411712 unmapped: 53805056 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295411712 unmapped: 53805056 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295428096 unmapped: 53788672 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295428096 unmapped: 53788672 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 53780480 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 53780480 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 53780480 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 53780480 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 53780480 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 53780480 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 53764096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 53764096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 53764096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 53764096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 53764096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 53764096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 53764096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 53755904 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 53755904 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 53755904 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 53755904 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 53755904 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295469056 unmapped: 53747712 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295469056 unmapped: 53747712 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295469056 unmapped: 53747712 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295477248 unmapped: 53739520 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 53731328 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 53731328 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 53731328 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 53731328 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 53731328 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 53731328 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 53731328 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295493632 unmapped: 53723136 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295493632 unmapped: 53723136 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295493632 unmapped: 53723136 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295501824 unmapped: 53714944 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295501824 unmapped: 53714944 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295501824 unmapped: 53714944 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295501824 unmapped: 53714944 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 53706752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 53706752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295518208 unmapped: 53698560 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 53690368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 53690368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 53690368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 53690368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 53690368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 53690368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 53682176 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 53682176 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 53682176 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 53682176 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 53682176 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295550976 unmapped: 53665792 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295550976 unmapped: 53665792 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295550976 unmapped: 53665792 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295559168 unmapped: 53657600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295559168 unmapped: 53657600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295567360 unmapped: 53649408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295567360 unmapped: 53649408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295567360 unmapped: 53649408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295567360 unmapped: 53649408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295567360 unmapped: 53649408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295567360 unmapped: 53649408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 53633024 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 53633024 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 53633024 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 53633024 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 53633024 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 53633024 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295591936 unmapped: 53624832 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295600128 unmapped: 53616640 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295600128 unmapped: 53616640 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295600128 unmapped: 53616640 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295608320 unmapped: 53608448 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295608320 unmapped: 53608448 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295616512 unmapped: 53600256 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295616512 unmapped: 53600256 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295616512 unmapped: 53600256 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295616512 unmapped: 53600256 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 53592064 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 53583872 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 53583872 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 53583872 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 53583872 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 53583872 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 53583872 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 53583872 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295641088 unmapped: 53575680 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295641088 unmapped: 53575680 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295641088 unmapped: 53575680 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295649280 unmapped: 53567488 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295649280 unmapped: 53567488 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295649280 unmapped: 53567488 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295649280 unmapped: 53567488 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295649280 unmapped: 53567488 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 53559296 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 53559296 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 53551104 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 53551104 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 53551104 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 53551104 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 53551104 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295673856 unmapped: 53542912 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 53526528 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 53526528 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 53526528 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 53526528 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 53518336 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 53518336 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 53518336 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 53518336 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295714816 unmapped: 53501952 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295714816 unmapped: 53501952 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 53493760 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 53493760 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 53493760 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 53493760 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 53469184 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 53469184 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 53469184 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 53469184 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 53469184 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 53469184 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 53469184 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 53460992 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 53460992 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 53444608 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 53444608 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 53444608 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 53444608 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 53444608 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 53444608 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 53444608 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 53444608 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295780352 unmapped: 53436416 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295780352 unmapped: 53436416 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 53428224 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 53428224 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 53428224 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 53428224 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 53420032 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295804928 unmapped: 53411840 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 53403648 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 53403648 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 53403648 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 53403648 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 53395456 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 53395456 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 53395456 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 53395456 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 53387264 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 53387264 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 53387264 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 53387264 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 53379072 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 53379072 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 53379072 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 53362688 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 53362688 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 53362688 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 53362688 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295870464 unmapped: 53346304 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 53338112 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 53338112 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 53338112 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 53338112 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 53338112 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 53338112 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 53329920 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 53321728 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 53321728 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 53321728 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 53321728 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1111]
                                              ** DB Stats **
                                              Uptime(secs): 5400.1 total, 600.0 interval
                                              Cumulative writes: 38K writes, 150K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s
                                              Cumulative WAL: 38K writes, 14K syncs, 2.71 writes per sync, written: 0.15 GB, 0.03 MB/s
                                              Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                              Interval writes: 818 writes, 2112 keys, 818 commit groups, 1.0 writes per commit group, ingest: 1.15 MB, 0.00 MB/s
                                              Interval WAL: 818 writes, 377 syncs, 2.17 writes per sync, written: 0.00 GB, 0.00 MB/s
                                              Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 53321728 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 53321728 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 53305344 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 53305344 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: mgrc ms_handle_reset ms_handle_reset con 0x55b3d1d16400
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3625056923
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3625056923,v1:192.168.122.100:6801/3625056923]
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: mgrc handle_mgr_configure stats_period=5
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 53493760 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 53493760 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 53493760 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 53493760 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 53493760 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 53493760 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 53493760 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295739392 unmapped: 53477376 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295739392 unmapped: 53477376 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295739392 unmapped: 53477376 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295739392 unmapped: 53477376 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295739392 unmapped: 53477376 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295739392 unmapped: 53477376 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 53460992 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 53460992 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 53460992 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 53460992 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 53460992 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 53460992 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 53452800 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 53452800 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 53452800 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 53444608 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295780352 unmapped: 53436416 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 53428224 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 53428224 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 53428224 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 53428224 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 53428224 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 53420032 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295804928 unmapped: 53411840 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 53403648 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 53403648 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 53403648 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 53403648 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 53403648 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 53395456 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 53387264 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 53387264 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 53387264 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 53387264 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 53387264 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 53387264 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 53387264 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 53379072 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 278.443786621s of 278.633789062s, submitted: 64
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295870464 unmapped: 53346304 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 53305344 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 53305344 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 53305344 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 53305344 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 53305344 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 53305344 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 53305344 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 53297152 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 53297152 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 53297152 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 53297152 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 53297152 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 53297152 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 53297152 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 53280768 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 53280768 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 53280768 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 53272576 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 53272576 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 53272576 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 53264384 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 53264384 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 53264384 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 53264384 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 53264384 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 53264384 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 53256192 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 53256192 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 53256192 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 53256192 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 53256192 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 53256192 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 53256192 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 53256192 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 53248000 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 53248000 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 53248000 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 53248000 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 53248000 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 53248000 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295976960 unmapped: 53239808 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295976960 unmapped: 53239808 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 53231616 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 53223424 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 53223424 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 53223424 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 53215232 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 53215232 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 53215232 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 53215232 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 53215232 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 53215232 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 53207040 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 53207040 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 53207040 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 53207040 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 53207040 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 53207040 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296017920 unmapped: 53198848 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 53190656 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 53190656 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 53190656 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 53190656 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 53190656 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 53190656 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 53190656 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 53166080 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 53166080 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 53166080 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 53166080 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 53166080 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 53166080 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 53166080 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296058880 unmapped: 53157888 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 53149696 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 53141504 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 53141504 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 53141504 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 53141504 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 53141504 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 53141504 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 53141504 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296091648 unmapped: 53125120 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296091648 unmapped: 53125120 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 53116928 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 53116928 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 53116928 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 53116928 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296108032 unmapped: 53108736 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296108032 unmapped: 53108736 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 53100544 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 53100544 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 53100544 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 53100544 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 53100544 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296124416 unmapped: 53092352 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296132608 unmapped: 53084160 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296140800 unmapped: 53075968 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296140800 unmapped: 53075968 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296148992 unmapped: 53067776 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 53059584 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 53059584 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 53059584 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 53059584 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 53059584 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 53059584 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 53059584 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 53043200 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 53043200 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 53043200 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 53043200 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 53043200 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 53043200 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 53043200 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296181760 unmapped: 53035008 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 53026816 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 53026816 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 53026816 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 53026816 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 53026816 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296198144 unmapped: 53018624 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296198144 unmapped: 53018624 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296206336 unmapped: 53010432 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296214528 unmapped: 53002240 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 146.330047607s of 146.644271851s, submitted: 90
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296222720 unmapped: 52994048 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 handle_osd_map epochs [302,303], i have 302, src has [1,303]
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 302 handle_osd_map epochs [303,303], i have 303, src has [1,303]
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 303 ms_handle_reset con 0x55b3cf610400 session 0x55b3d0b64b40
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 303 heartbeat osd_stat(store_statfs(0x4ed73e000/0x0/0x4ffc00000, data 0x9f1130/0xb8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296288256 unmapped: 52928512 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2986189 data_alloc: 218103808 data_used: 184320
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296288256 unmapped: 52928512 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 52920320 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 304 ms_handle_reset con 0x55b3cf611400 session 0x55b3cdcb23c0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 52920320 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edf3d000/0x0/0x4ffc00000, data 0x1f2cbb/0x390000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296304640 unmapped: 52912128 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296304640 unmapped: 52912128 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2987987 data_alloc: 218103808 data_used: 192512
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296304640 unmapped: 52912128 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 304 handle_osd_map epochs [304,305], i have 304, src has [1,305]
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0x1f471e/0x393000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296321024 unmapped: 52895744 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 305 handle_osd_map epochs [305,306], i have 305, src has [1,306]
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 306 ms_handle_reset con 0x55b3ccad0000 session 0x55b3ce5afa40
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 52879360 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 52879360 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 52879360 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996423 data_alloc: 218103808 data_used: 192512
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 52879360 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 52879360 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 52871168 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 52871168 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 52871168 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996423 data_alloc: 218103808 data_used: 192512
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 52871168 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 52871168 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 52871168 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 52871168 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296353792 unmapped: 52862976 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996423 data_alloc: 218103808 data_used: 192512
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 52854784 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 52854784 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 52854784 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 52854784 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 52846592 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996423 data_alloc: 218103808 data_used: 192512
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 52846592 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 52846592 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 52846592 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 52838400 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 52838400 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996743 data_alloc: 218103808 data_used: 200704
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 52838400 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 52838400 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 52838400 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296386560 unmapped: 52830208 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296386560 unmapped: 52830208 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996743 data_alloc: 218103808 data_used: 200704
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296386560 unmapped: 52830208 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296394752 unmapped: 52822016 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296394752 unmapped: 52822016 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296394752 unmapped: 52822016 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296394752 unmapped: 52822016 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996743 data_alloc: 218103808 data_used: 200704
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296394752 unmapped: 52822016 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296394752 unmapped: 52822016 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296411136 unmapped: 52805632 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296411136 unmapped: 52805632 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296411136 unmapped: 52805632 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996743 data_alloc: 218103808 data_used: 200704
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 52797440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 52797440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 52797440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 52797440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 52797440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996743 data_alloc: 218103808 data_used: 200704
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 52797440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 52797440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296427520 unmapped: 52789248 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296435712 unmapped: 52781056 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296435712 unmapped: 52781056 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996743 data_alloc: 218103808 data_used: 200704
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 52772864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 52772864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 52772864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 52772864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 52772864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996743 data_alloc: 218103808 data_used: 200704
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296452096 unmapped: 52764672 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296452096 unmapped: 52764672 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 63.713172913s of 64.038635254s, submitted: 108
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296452096 unmapped: 52764672 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 306 handle_osd_map epochs [306,307], i have 306, src has [1,307]
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 307 ms_handle_reset con 0x55b3cbcb4c00 session 0x55b3cc8003c0
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 52740096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 52740096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2997229 data_alloc: 218103808 data_used: 200704
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 52740096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 307 heartbeat osd_stat(store_statfs(0x4edf34000/0x0/0x4ffc00000, data 0x1f7e6c/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 52740096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 52740096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 307 heartbeat osd_stat(store_statfs(0x4edf34000/0x0/0x4ffc00000, data 0x1f7e6c/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296484864 unmapped: 52731904 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296484864 unmapped: 52731904 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2997229 data_alloc: 218103808 data_used: 200704
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296484864 unmapped: 52731904 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 307 heartbeat osd_stat(store_statfs(0x4edf34000/0x0/0x4ffc00000, data 0x1f7e6c/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296525824 unmapped: 52690944 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf34000/0x0/0x4ffc00000, data 0x1f7e6c/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296525824 unmapped: 52690944 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296525824 unmapped: 52690944 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296525824 unmapped: 52690944 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296525824 unmapped: 52690944 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 52682752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 52682752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 52682752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 52682752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 52682752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 52682752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 52682752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 52682752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296542208 unmapped: 52674560 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296542208 unmapped: 52674560 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296542208 unmapped: 52674560 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296550400 unmapped: 52666368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296550400 unmapped: 52666368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296550400 unmapped: 52666368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296558592 unmapped: 52658176 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296566784 unmapped: 52649984 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296574976 unmapped: 52641792 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 52633600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 52633600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 52633600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 52633600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 52633600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 52633600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 52633600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296591360 unmapped: 52625408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296591360 unmapped: 52625408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296591360 unmapped: 52625408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296591360 unmapped: 52625408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296591360 unmapped: 52625408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296591360 unmapped: 52625408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296599552 unmapped: 52617216 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296599552 unmapped: 52617216 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296607744 unmapped: 52609024 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 52600832 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 52600832 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 52600832 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 52600832 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 52600832 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 52600832 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296632320 unmapped: 52584448 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296632320 unmapped: 52584448 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296632320 unmapped: 52584448 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296632320 unmapped: 52584448 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296632320 unmapped: 52584448 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296632320 unmapped: 52584448 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296640512 unmapped: 52576256 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296640512 unmapped: 52576256 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296648704 unmapped: 52568064 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296648704 unmapped: 52568064 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296648704 unmapped: 52568064 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296648704 unmapped: 52568064 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296648704 unmapped: 52568064 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296648704 unmapped: 52568064 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296656896 unmapped: 52559872 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: do_command 'config diff' '{prefix=config diff}'
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: do_command 'config show' '{prefix=config show}'
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296656896 unmapped: 52559872 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: do_command 'counter dump' '{prefix=counter dump}'
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: do_command 'counter schema' '{prefix=counter schema}'
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296271872 unmapped: 52944896 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295649280 unmapped: 53567488 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:21 np0005486808 ceph-osd[89514]: do_command 'log dump' '{prefix=log dump}'
Oct 14 05:59:21 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23065 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 14 05:59:21 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 14 05:59:21 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2751310467' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 14 05:59:21 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23067 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 05:59:21 np0005486808 nova_compute[259627]: 2025-10-14 09:59:21.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:59:21 np0005486808 nova_compute[259627]: 2025-10-14 09:59:21.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 14 05:59:21 np0005486808 nova_compute[259627]: 2025-10-14 09:59:21.980 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 14 05:59:22 np0005486808 nova_compute[259627]: 2025-10-14 09:59:22.038 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 14 05:59:22 np0005486808 nova_compute[259627]: 2025-10-14 09:59:22.040 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:59:22 np0005486808 nova_compute[259627]: 2025-10-14 09:59:22.040 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 05:59:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 14 05:59:22 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2513774428' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 14 05:59:22 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23071 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 05:59:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 14 05:59:22 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/580102992' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 14 05:59:22 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23075 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 05:59:22 np0005486808 nova_compute[259627]: 2025-10-14 09:59:22.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:59:22 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Oct 14 05:59:22 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/118997230' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 14 05:59:23 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3203: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:59:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 05:59:23 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23083 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 05:59:23 np0005486808 ceph-mgr[74543]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 14 05:59:23 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T09:59:23.536+0000 7f6b82b53640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 14 05:59:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Oct 14 05:59:23 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3279325912' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 14 05:59:24 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Oct 14 05:59:24 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1410129110' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 14 05:59:24 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Oct 14 05:59:24 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2869743412' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 14 05:59:24 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Oct 14 05:59:24 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1626043090' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 14 05:59:24 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Oct 14 05:59:24 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4256694217' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 14 05:59:24 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Oct 14 05:59:24 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1678340268' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 14 05:59:25 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3204: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 05:59:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Oct 14 05:59:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2674093922' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 14 05:59:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Oct 14 05:59:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3558314836' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 14 05:59:25 np0005486808 nova_compute[259627]: 2025-10-14 09:59:25.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 05:59:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Oct 14 05:59:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1573561031' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 14 05:59:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Oct 14 05:59:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2914762314' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296148992 unmapped: 68681728 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 68755456 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ec185000/0x0/0x4ffc00000, data 0x1fb2b8c/0x2149000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ec185000/0x0/0x4ffc00000, data 0x1fb2b8c/0x2149000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 68837376 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 68837376 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 68837376 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3291969 data_alloc: 218103808 data_used: 8671232
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 68837376 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ec185000/0x0/0x4ffc00000, data 0x1fb2b8c/0x2149000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.040349960s of 26.117074966s, submitted: 12
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 301776896 unmapped: 63053824 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9538400 session 0x5597c8b55a40
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb87f000/0x0/0x4ffc00000, data 0x28b8b8c/0x2a4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296468480 unmapped: 68362240 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296468480 unmapped: 68362240 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296468480 unmapped: 68362240 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3365155 data_alloc: 218103808 data_used: 8671232
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296468480 unmapped: 68362240 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296919040 unmapped: 67911680 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9538400 session 0x5597c7e0ef00
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c8b54000
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 297312256 unmapped: 67518464 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb611000/0x0/0x4ffc00000, data 0x2b20b8c/0x2cb7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44800 session 0x5597c976a780
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c91c8400 session 0x5597c9fbaf00
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 67493888 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb5f3000/0x0/0x4ffc00000, data 0x2b34baf/0x2ccc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 297345024 unmapped: 67485696 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444397 data_alloc: 234881024 data_used: 15724544
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296960000 unmapped: 67870720 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296960000 unmapped: 67870720 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296960000 unmapped: 67870720 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb5f3000/0x0/0x4ffc00000, data 0x2b34baf/0x2ccc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296960000 unmapped: 67870720 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296960000 unmapped: 67870720 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455757 data_alloc: 234881024 data_used: 17338368
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296960000 unmapped: 67870720 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296960000 unmapped: 67870720 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296960000 unmapped: 67870720 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296960000 unmapped: 67870720 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb5f3000/0x0/0x4ffc00000, data 0x2b34baf/0x2ccc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296960000 unmapped: 67870720 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.891954422s of 18.136959076s, submitted: 60
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3512547 data_alloc: 234881024 data_used: 18878464
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb381000/0x0/0x4ffc00000, data 0x2db5baf/0x2f4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 300949504 unmapped: 63881216 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 301072384 unmapped: 63758336 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 301072384 unmapped: 63758336 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 301072384 unmapped: 63758336 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eafb7000/0x0/0x4ffc00000, data 0x317ebaf/0x3316000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597caeeb000 session 0x5597c8f4f2c0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 63709184 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571707 data_alloc: 234881024 data_used: 19238912
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 63709184 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 63709184 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea824000/0x0/0x4ffc00000, data 0x3912baf/0x3aaa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c976a960
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 63709184 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44800 session 0x5597c9fa5860
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 63709184 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c91c8400 session 0x5597c9fbba40
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9538400 session 0x5597c8f4ed20
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 63709184 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea823000/0x0/0x4ffc00000, data 0x3912bbf/0x3aab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573865 data_alloc: 234881024 data_used: 19247104
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 63709184 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c92e1000 session 0x5597c8baa3c0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.135522842s of 11.417894363s, submitted: 79
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c7841e00
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296747008 unmapped: 68083712 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c92e1000 session 0x5597c6f923c0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296763392 unmapped: 68067328 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb773000/0x0/0x4ffc00000, data 0x29c2b9c/0x2b5a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296763392 unmapped: 68067328 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296763392 unmapped: 68067328 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432703 data_alloc: 234881024 data_used: 15921152
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296763392 unmapped: 68067328 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296763392 unmapped: 68067328 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296763392 unmapped: 68067328 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296763392 unmapped: 68067328 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb773000/0x0/0x4ffc00000, data 0x29c2b9c/0x2b5a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296763392 unmapped: 68067328 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432703 data_alloc: 234881024 data_used: 15921152
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 296763392 unmapped: 68067328 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.419453621s of 10.544699669s, submitted: 41
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 297615360 unmapped: 67215360 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 297754624 unmapped: 67076096 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 298819584 unmapped: 66011136 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 298819584 unmapped: 66011136 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb19f000/0x0/0x4ffc00000, data 0x2f96b9c/0x312e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479817 data_alloc: 234881024 data_used: 15953920
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb19f000/0x0/0x4ffc00000, data 0x2f96b9c/0x312e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 298819584 unmapped: 66011136 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 298819584 unmapped: 66011136 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 298819584 unmapped: 66011136 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 298819584 unmapped: 66011136 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 298819584 unmapped: 66011136 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c953a400 session 0x5597c8bac3c0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c92eb000 session 0x5597c8ec2f00
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3478569 data_alloc: 234881024 data_used: 15953920
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 297836544 unmapped: 66994176 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c976a000
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb1a0000/0x0/0x4ffc00000, data 0x2f96b9c/0x312e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 297836544 unmapped: 66994176 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 297836544 unmapped: 66994176 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 297836544 unmapped: 66994176 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ebf09000/0x0/0x4ffc00000, data 0x222eb8c/0x23c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 297836544 unmapped: 66994176 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ebf09000/0x0/0x4ffc00000, data 0x222eb8c/0x23c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597caeec800 session 0x5597c8b541e0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8db5000 session 0x5597c705b4a0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3332568 data_alloc: 218103808 data_used: 9101312
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 297836544 unmapped: 66994176 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.989180565s of 14.203341484s, submitted: 50
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c9fbb860
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ec6ea000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 297852928 unmapped: 66977792 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ec6ea000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 297852928 unmapped: 66977792 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 297852928 unmapped: 66977792 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 297852928 unmapped: 66977792 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236455 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 297852928 unmapped: 66977792 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 297852928 unmapped: 66977792 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 297852928 unmapped: 66977792 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ec6ea000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c7840780
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c92e1000 session 0x5597c7ec6b40
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c8ec6000
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c8bad4a0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 297852928 unmapped: 66977792 heap: 364830720 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8db5000 session 0x5597c6f114a0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597caeec800 session 0x5597c9058780
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ec6ea000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c92eb000 session 0x5597c93232c0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c7185e00
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c8b55c20
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 298483712 unmapped: 77897728 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3390711 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 298483712 unmapped: 77897728 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb3a0000/0x0/0x4ffc00000, data 0x2d96bee/0x2f2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 298483712 unmapped: 77897728 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8db5000 session 0x5597c9df10e0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597caeec800 session 0x5597c8f4e1e0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 298483712 unmapped: 77897728 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c953a400 session 0x5597c7e0f4a0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.416601181s of 12.653140068s, submitted: 71
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c7184f00
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 298655744 unmapped: 77725696 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 298663936 unmapped: 77717504 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428332 data_alloc: 218103808 data_used: 9043968
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 299548672 unmapped: 76832768 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb37c000/0x0/0x4ffc00000, data 0x2dbabee/0x2f52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 73261056 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 73261056 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 73261056 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 73261056 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3527852 data_alloc: 234881024 data_used: 23109632
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb37c000/0x0/0x4ffc00000, data 0x2dbabee/0x2f52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 73261056 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597caeec800 session 0x5597c9383860
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44800 session 0x5597c7840960
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c91c8400 session 0x5597c9fba000
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9538400 session 0x5597c8bda000
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 73261056 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb37c000/0x0/0x4ffc00000, data 0x2dbabee/0x2f52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9538400 session 0x5597c9fa50e0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 73261056 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 73261056 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb03a000/0x0/0x4ffc00000, data 0x30fcbee/0x3294000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 73261056 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.515716553s of 11.578465462s, submitted: 18
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3652682 data_alloc: 234881024 data_used: 24137728
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 308699136 unmapped: 67682304 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c8f4e960
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 309280768 unmapped: 67100672 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44800 session 0x5597c6c22b40
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c91c8400 session 0x5597c8b54f00
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597caeec800 session 0x5597c9f8af00
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 309288960 unmapped: 67092480 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 309288960 unmapped: 67092480 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 309329920 unmapped: 67051520 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea4d2000/0x0/0x4ffc00000, data 0x3c5bbee/0x3df3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3689852 data_alloc: 234881024 data_used: 26611712
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 67018752 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 67018752 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 67018752 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 67018752 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 67018752 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3684304 data_alloc: 234881024 data_used: 26615808
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 67018752 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea4ba000/0x0/0x4ffc00000, data 0x3c7cbee/0x3e14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 309370880 unmapped: 67010560 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 309370880 unmapped: 67010560 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea4ba000/0x0/0x4ffc00000, data 0x3c7cbee/0x3e14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 309370880 unmapped: 67010560 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.954314232s of 14.283423424s, submitted: 137
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 313729024 unmapped: 62652416 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9e28000/0x0/0x4ffc00000, data 0x4306bee/0x449e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9e28000/0x0/0x4ffc00000, data 0x4306bee/0x449e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [0,0,3])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c91c8400 session 0x5597c7841e00
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3766476 data_alloc: 234881024 data_used: 27471872
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9538400 session 0x5597c8baa3c0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f69c00 session 0x5597c8f4ed20
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca805000 session 0x5597c9fbba40
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 314114048 unmapped: 62267392 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6e000 session 0x5597c976a960
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c91c8400 session 0x5597c71841e0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9538400 session 0x5597c7e78960
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f69c00 session 0x5597c8ec65a0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca805000 session 0x5597c8b54960
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 313851904 unmapped: 62529536 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9692000/0x0/0x4ffc00000, data 0x4aa3bfe/0x4c3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 313851904 unmapped: 62529536 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 313851904 unmapped: 62529536 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e968c000/0x0/0x4ffc00000, data 0x4aa9bfe/0x4c42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 313860096 unmapped: 62521344 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca800000 session 0x5597c9058b40
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c91c8400 session 0x5597c9f8b4a0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3824399 data_alloc: 234881024 data_used: 27480064
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 313860096 unmapped: 62521344 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9538400 session 0x5597c7840d20
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f69c00 session 0x5597c7e0e1e0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 314023936 unmapped: 62357504 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 314040320 unmapped: 62341120 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318554112 unmapped: 57827328 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9663000/0x0/0x4ffc00000, data 0x4ad0c31/0x4c6b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318578688 unmapped: 57802752 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3882922 data_alloc: 251658240 data_used: 34734080
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318578688 unmapped: 57802752 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9663000/0x0/0x4ffc00000, data 0x4ad0c31/0x4c6b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318578688 unmapped: 57802752 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318578688 unmapped: 57802752 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9663000/0x0/0x4ffc00000, data 0x4ad0c31/0x4c6b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318578688 unmapped: 57802752 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c8d714a0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44800 session 0x5597c7e0e5a0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.690410614s of 15.173391342s, submitted: 127
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c9f8ad20
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318586880 unmapped: 57794560 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea09d000/0x0/0x4ffc00000, data 0x4096c31/0x4231000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3768586 data_alloc: 251658240 data_used: 31678464
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318586880 unmapped: 57794560 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318586880 unmapped: 57794560 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318595072 unmapped: 57786368 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320577536 unmapped: 55803904 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9776000/0x0/0x4ffc00000, data 0x49bdc31/0x4b58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [0,0,4])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320937984 unmapped: 55443456 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3860752 data_alloc: 251658240 data_used: 33177600
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320937984 unmapped: 55443456 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e96cc000/0x0/0x4ffc00000, data 0x4a67c31/0x4c02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320937984 unmapped: 55443456 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320937984 unmapped: 55443456 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320937984 unmapped: 55443456 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320937984 unmapped: 55443456 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.843324661s of 11.244002342s, submitted: 140
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3858116 data_alloc: 251658240 data_used: 33177600
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320946176 unmapped: 55435264 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e96cc000/0x0/0x4ffc00000, data 0x4a67c31/0x4c02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320946176 unmapped: 55435264 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e96a8000/0x0/0x4ffc00000, data 0x4a8bc31/0x4c26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320946176 unmapped: 55435264 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320954368 unmapped: 55427072 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e96a8000/0x0/0x4ffc00000, data 0x4a8bc31/0x4c26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320954368 unmapped: 55427072 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3858756 data_alloc: 251658240 data_used: 33239040
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320954368 unmapped: 55427072 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca805000 session 0x5597c7e0eb40
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f71000 session 0x5597c6c23a40
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44800 session 0x5597c8eb61e0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322043904 unmapped: 54337536 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322052096 unmapped: 54329344 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea409000/0x0/0x4ffc00000, data 0x391cbee/0x3ab4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322052096 unmapped: 54329344 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322052096 unmapped: 54329344 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c7841c20
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8db5000 session 0x5597c91d70e0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.793248177s of 10.007752419s, submitted: 73
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3659885 data_alloc: 234881024 data_used: 24469504
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 305250304 unmapped: 71131136 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c9fa45a0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44800 session 0x5597c8edf2c0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c8eded20
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f71000 session 0x5597c8bda5a0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c93974a0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44800 session 0x5597c7185e00
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c93232c0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8db5000 session 0x5597c7e0f4a0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 305373184 unmapped: 71008256 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca805000 session 0x5597c976a960
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c7e0fa40
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 305373184 unmapped: 71008256 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 305373184 unmapped: 71008256 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eba6d000/0x0/0x4ffc00000, data 0x22b9bee/0x2451000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 305373184 unmapped: 71008256 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3349757 data_alloc: 218103808 data_used: 4214784
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 305373184 unmapped: 71008256 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 305373184 unmapped: 71008256 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 305381376 unmapped: 71000064 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 305381376 unmapped: 71000064 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 305381376 unmapped: 71000064 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eba6d000/0x0/0x4ffc00000, data 0x22b9bee/0x2451000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3411649 data_alloc: 218103808 data_used: 12939264
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 305381376 unmapped: 71000064 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 305381376 unmapped: 71000064 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 305381376 unmapped: 71000064 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 305381376 unmapped: 71000064 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 305381376 unmapped: 71000064 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3411649 data_alloc: 218103808 data_used: 12939264
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 305381376 unmapped: 71000064 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eba6d000/0x0/0x4ffc00000, data 0x22b9bee/0x2451000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 305381376 unmapped: 71000064 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.468908310s of 16.836942673s, submitted: 103
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 306585600 unmapped: 69795840 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 306585600 unmapped: 69795840 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb3f4000/0x0/0x4ffc00000, data 0x292abee/0x2ac2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 306585600 unmapped: 69795840 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467737 data_alloc: 218103808 data_used: 13017088
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 306585600 unmapped: 69795840 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb3f4000/0x0/0x4ffc00000, data 0x292abee/0x2ac2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 306585600 unmapped: 69795840 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 306585600 unmapped: 69795840 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 306585600 unmapped: 69795840 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 306585600 unmapped: 69795840 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467753 data_alloc: 218103808 data_used: 13017088
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb3f4000/0x0/0x4ffc00000, data 0x292abee/0x2ac2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 306585600 unmapped: 69795840 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 306585600 unmapped: 69795840 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 306585600 unmapped: 69795840 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 306585600 unmapped: 69795840 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb3f4000/0x0/0x4ffc00000, data 0x292abee/0x2ac2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb3f4000/0x0/0x4ffc00000, data 0x292abee/0x2ac2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 306585600 unmapped: 69795840 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467753 data_alloc: 218103808 data_used: 13017088
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 306585600 unmapped: 69795840 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 306585600 unmapped: 69795840 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 306585600 unmapped: 69795840 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb3f4000/0x0/0x4ffc00000, data 0x292abee/0x2ac2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.922410965s of 16.136686325s, submitted: 54
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c8badc20
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8db5000 session 0x5597c9fba5a0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c91c8400 session 0x5597c6f11860
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9538400 session 0x5597c9f8a1e0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c7ec6000
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 306323456 unmapped: 70057984 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c9f8b0e0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8db5000 session 0x5597c79745a0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c91c8400 session 0x5597c6c22960
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f69c00 session 0x5597c8edfc20
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c8eb74a0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c90583c0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8db5000 session 0x5597c9fbaf00
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c91c8400 session 0x5597c8eb74a0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597cabd2c00 session 0x5597c6c22960
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 68419584 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea6c3000/0x0/0x4ffc00000, data 0x3663bee/0x37fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3660219 data_alloc: 218103808 data_used: 13021184
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 68419584 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 68419584 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 68419584 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c79745a0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 68419584 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c6e000/0x0/0x4ffc00000, data 0x40b6c60/0x4250000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c9f8b0e0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 68419584 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8db5000 session 0x5597c7ec6000
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3660219 data_alloc: 218103808 data_used: 13021184
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c91c8400 session 0x5597c9f8a1e0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 307970048 unmapped: 68411392 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 307970048 unmapped: 68411392 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca7fe000 session 0x5597c6f93860
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 307970048 unmapped: 68411392 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c8ec2d20
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c6d000/0x0/0x4ffc00000, data 0x40b6c70/0x4251000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c8f4e000
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8db5000 session 0x5597c91763c0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 308813824 unmapped: 67567616 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.600818634s of 10.829821587s, submitted: 68
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 308813824 unmapped: 67567616 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739229 data_alloc: 234881024 data_used: 24248320
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 309411840 unmapped: 66969600 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 311721984 unmapped: 64659456 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c6d000/0x0/0x4ffc00000, data 0x40b6c70/0x4251000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 311721984 unmapped: 64659456 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 311721984 unmapped: 64659456 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 311721984 unmapped: 64659456 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3823389 data_alloc: 251658240 data_used: 33378304
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 311721984 unmapped: 64659456 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 311721984 unmapped: 64659456 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 311721984 unmapped: 64659456 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c6d000/0x0/0x4ffc00000, data 0x40b6c70/0x4251000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 313401344 unmapped: 62980096 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 314548224 unmapped: 61833216 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.474565506s of 10.712116241s, submitted: 69
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3904073 data_alloc: 251658240 data_used: 34623488
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319307776 unmapped: 57073664 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319528960 unmapped: 56852480 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8e0c000/0x0/0x4ffc00000, data 0x4f11c70/0x50ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321314816 unmapped: 55066624 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321314816 unmapped: 55066624 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321314816 unmapped: 55066624 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3969485 data_alloc: 251658240 data_used: 35381248
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8d76000/0x0/0x4ffc00000, data 0x4f9ec70/0x5139000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321380352 unmapped: 55001088 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca7ffc00 session 0x5597c9fba5a0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca805800 session 0x5597c9fac1e0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca805800 session 0x5597c9f8a5a0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8d76000/0x0/0x4ffc00000, data 0x4f9ec70/0x5139000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321241088 unmapped: 55140352 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321249280 unmapped: 55132160 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f81000/0x0/0x4ffc00000, data 0x3da4bee/0x3f3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321249280 unmapped: 55132160 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44800 session 0x5597c8edde00
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321249280 unmapped: 55132160 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.689114571s of 10.230315208s, submitted: 165
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c6f92f00
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567598 data_alloc: 234881024 data_used: 17534976
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318742528 unmapped: 57638912 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318742528 unmapped: 57638912 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318742528 unmapped: 57638912 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eac10000/0x0/0x4ffc00000, data 0x2ec8b7c/0x305e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318742528 unmapped: 57638912 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318742528 unmapped: 57638912 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567918 data_alloc: 234881024 data_used: 17543168
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318742528 unmapped: 57638912 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c8edc3c0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8db5000 session 0x5597c911cf00
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c91d7680
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44800 session 0x5597c71841e0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c7840960
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca805800 session 0x5597c9e43c20
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca7ffc00 session 0x5597c7ec7e00
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c93974a0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44800 session 0x5597c8edc000
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318865408 unmapped: 57516032 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea82f000/0x0/0x4ffc00000, data 0x34f8b8c/0x368f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318865408 unmapped: 57516032 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318865408 unmapped: 57516032 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318865408 unmapped: 57516032 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3619336 data_alloc: 234881024 data_used: 17543168
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea82f000/0x0/0x4ffc00000, data 0x34f8b8c/0x368f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318873600 unmapped: 57507840 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea82f000/0x0/0x4ffc00000, data 0x34f8b8c/0x368f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318873600 unmapped: 57507840 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318873600 unmapped: 57507840 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318873600 unmapped: 57507840 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea82f000/0x0/0x4ffc00000, data 0x34f8b8c/0x368f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318873600 unmapped: 57507840 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c7840b40
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3619336 data_alloc: 234881024 data_used: 17543168
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318889984 unmapped: 57491456 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca805800 session 0x5597c9fad680
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8db6400 session 0x5597c6f10000
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318889984 unmapped: 57491456 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.707530975s of 16.930496216s, submitted: 49
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c9fa43c0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318898176 unmapped: 57483264 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea82d000/0x0/0x4ffc00000, data 0x34f8bbf/0x3691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318898176 unmapped: 57483264 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318898176 unmapped: 57483264 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622459 data_alloc: 234881024 data_used: 17547264
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318898176 unmapped: 57483264 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318898176 unmapped: 57483264 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318898176 unmapped: 57483264 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318898176 unmapped: 57483264 heap: 376381440 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca805800 session 0x5597c9facf00
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea82d000/0x0/0x4ffc00000, data 0x34f8bbf/0x3691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597cab16000 session 0x5597c8eddc20
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9feb400 session 0x5597c9fad860
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f76c00 session 0x5597c91d65a0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c7e0ef00
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9feb400 session 0x5597c8f4e1e0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca805800 session 0x5597c8eb61e0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597cab16000 session 0x5597c7975e00
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c92eb800 session 0x5597c705bc20
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321101824 unmapped: 58949632 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3698042 data_alloc: 234881024 data_used: 17551360
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321101824 unmapped: 58949632 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321110016 unmapped: 58941440 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321110016 unmapped: 58941440 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c8bda960
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321110016 unmapped: 58941440 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9feb400 session 0x5597c8bdb680
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca805800 session 0x5597c9322960
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597cab16000 session 0x5597c9fa43c0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321110016 unmapped: 58941440 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9eb1000/0x0/0x4ffc00000, data 0x3e74bbf/0x400d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.179798126s of 13.313076019s, submitted: 28
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3743774 data_alloc: 234881024 data_used: 23924736
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320069632 unmapped: 59981824 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320864256 unmapped: 59187200 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322822144 unmapped: 57229312 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322822144 unmapped: 57229312 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9eb1000/0x0/0x4ffc00000, data 0x3e74bbf/0x400d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322830336 unmapped: 57221120 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3805694 data_alloc: 251658240 data_used: 32612352
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322830336 unmapped: 57221120 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 327852032 unmapped: 52199424 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9eb1000/0x0/0x4ffc00000, data 0x3e74bbf/0x400d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328089600 unmapped: 51961856 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 327499776 unmapped: 52551680 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 327499776 unmapped: 52551680 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9836000/0x0/0x4ffc00000, data 0x44efbbf/0x4688000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3872376 data_alloc: 251658240 data_used: 33492992
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9836000/0x0/0x4ffc00000, data 0x44efbbf/0x4688000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 327499776 unmapped: 52551680 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.605174065s of 10.863492966s, submitted: 86
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330178560 unmapped: 49872896 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330203136 unmapped: 49848320 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330203136 unmapped: 49848320 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330203136 unmapped: 49848320 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3948346 data_alloc: 251658240 data_used: 33550336
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330203136 unmapped: 49848320 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8f53000/0x0/0x4ffc00000, data 0x4dc9bbf/0x4f62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330235904 unmapped: 49815552 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329572352 unmapped: 50479104 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329572352 unmapped: 50479104 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44800 session 0x5597c705ad20
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c8ec2b40
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8f5c000/0x0/0x4ffc00000, data 0x4dc9bbf/0x4f62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [0,0,1])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 327835648 unmapped: 52215808 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c705a780
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3788147 data_alloc: 234881024 data_used: 26333184
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 327835648 unmapped: 52215808 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 327835648 unmapped: 52215808 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 327835648 unmapped: 52215808 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9bfb000/0x0/0x4ffc00000, data 0x412db7c/0x42c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 327843840 unmapped: 52207616 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9bfb000/0x0/0x4ffc00000, data 0x412db7c/0x42c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.413324356s of 13.750521660s, submitted: 89
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c91c8400 session 0x5597c9fbbe00
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 327843840 unmapped: 52207616 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9feb400 session 0x5597c9fa5a40
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3535180 data_alloc: 218103808 data_used: 12926976
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320954368 unmapped: 59097088 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca805800 session 0x5597c8eb7860
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c9fba780
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c8ec25a0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c91c8400 session 0x5597c8ec32c0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9feb400 session 0x5597c9fbb0e0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597cab16000 session 0x5597c8baa3c0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c6f92b40
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c9059e00
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c91c8400 session 0x5597c90592c0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320962560 unmapped: 59088896 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320962560 unmapped: 59088896 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320962560 unmapped: 59088896 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320962560 unmapped: 59088896 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eaf7d000/0x0/0x4ffc00000, data 0x2daab8c/0x2f41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eaf7d000/0x0/0x4ffc00000, data 0x2daab8c/0x2f41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3551998 data_alloc: 218103808 data_used: 12926976
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320962560 unmapped: 59088896 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9feb400 session 0x5597c9fad680
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321110016 unmapped: 58941440 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321110016 unmapped: 58941440 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321118208 unmapped: 58933248 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321126400 unmapped: 58925056 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3561874 data_alloc: 234881024 data_used: 14008320
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eaf59000/0x0/0x4ffc00000, data 0x2dceb8c/0x2f65000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321126400 unmapped: 58925056 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321126400 unmapped: 58925056 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eaf59000/0x0/0x4ffc00000, data 0x2dceb8c/0x2f65000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321126400 unmapped: 58925056 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321126400 unmapped: 58925056 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321126400 unmapped: 58925056 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3561874 data_alloc: 234881024 data_used: 14008320
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321126400 unmapped: 58925056 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321126400 unmapped: 58925056 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eaf59000/0x0/0x4ffc00000, data 0x2dceb8c/0x2f65000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.019678116s of 18.189313889s, submitted: 44
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321773568 unmapped: 58277888 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323788800 unmapped: 56262656 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323829760 unmapped: 56221696 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650710 data_alloc: 234881024 data_used: 14180352
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323829760 unmapped: 56221696 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323829760 unmapped: 56221696 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323829760 unmapped: 56221696 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e939d000/0x0/0x4ffc00000, data 0x37e2b8c/0x3979000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323829760 unmapped: 56221696 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323829760 unmapped: 56221696 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3650710 data_alloc: 234881024 data_used: 14180352
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597cab16c00 session 0x5597c9fa45a0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c7e0fe00
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323829760 unmapped: 56221696 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e939d000/0x0/0x4ffc00000, data 0x37e2b8c/0x3979000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c8bdb2c0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323837952 unmapped: 56213504 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323837952 unmapped: 56213504 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323837952 unmapped: 56213504 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323837952 unmapped: 56213504 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.915218353s of 12.252222061s, submitted: 89
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2c00 session 0x5597c8ec6780
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c7ec63c0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3349194 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318873600 unmapped: 61177856 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb13a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318873600 unmapped: 61177856 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318873600 unmapped: 61177856 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318873600 unmapped: 61177856 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb13a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318873600 unmapped: 61177856 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3349194 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb13a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318873600 unmapped: 61177856 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318873600 unmapped: 61177856 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb13a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318873600 unmapped: 61177856 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c91c8400 session 0x5597c8b54780
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c8eb6b40
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c8b550e0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318873600 unmapped: 61177856 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c8eb7c20
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eb13a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [0,2] op hist [0,0,0,2,9])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2c00 session 0x5597c8b545a0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9feb400 session 0x5597c9176960
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c9323a40
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c9177e00
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c9f8a5a0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 317210624 unmapped: 62840832 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388097 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 317210624 unmapped: 62840832 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 317210624 unmapped: 62840832 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 317210624 unmapped: 62840832 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eae62000/0x0/0x4ffc00000, data 0x1d26b7c/0x1ebc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 317210624 unmapped: 62840832 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 317210624 unmapped: 62840832 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388097 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 317210624 unmapped: 62840832 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2c00 session 0x5597c6c22000
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597cab16c00 session 0x5597c976be00
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 317210624 unmapped: 62840832 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c9058960
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 317218816 unmapped: 62832640 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.941007614s of 18.134647369s, submitted: 67
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c8bdaf00
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 317112320 unmapped: 62939136 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eae62000/0x0/0x4ffc00000, data 0x1d26b7c/0x1ebc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 316801024 unmapped: 63250432 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3399298 data_alloc: 218103808 data_used: 5677056
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 316612608 unmapped: 63438848 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 316612608 unmapped: 63438848 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 316612608 unmapped: 63438848 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eae62000/0x0/0x4ffc00000, data 0x1d26b7c/0x1ebc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 316612608 unmapped: 63438848 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 316612608 unmapped: 63438848 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3399298 data_alloc: 218103808 data_used: 5677056
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 316612608 unmapped: 63438848 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 316612608 unmapped: 63438848 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eae62000/0x0/0x4ffc00000, data 0x1d26b7c/0x1ebc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 316612608 unmapped: 63438848 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eae62000/0x0/0x4ffc00000, data 0x1d26b7c/0x1ebc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 316612608 unmapped: 63438848 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.857822418s of 11.883068085s, submitted: 7
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 316809216 unmapped: 63242240 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3509754 data_alloc: 218103808 data_used: 6983680
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea105000/0x0/0x4ffc00000, data 0x2a75b7c/0x2c0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319086592 unmapped: 60964864 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320446464 unmapped: 59604992 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320446464 unmapped: 59604992 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320446464 unmapped: 59604992 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320446464 unmapped: 59604992 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3516472 data_alloc: 218103808 data_used: 6881280
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320446464 unmapped: 59604992 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8ed7000/0x0/0x4ffc00000, data 0x2b09b7c/0x2c9f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 59695104 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 59695104 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8ec0000/0x0/0x4ffc00000, data 0x2b28b7c/0x2cbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 42K writes, 164K keys, 42K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.04 MB/s#012Cumulative WAL: 42K writes, 15K syncs, 2.73 writes per sync, written: 0.16 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4857 writes, 19K keys, 4857 commit groups, 1.0 writes per commit group, ingest: 20.94 MB, 0.03 MB/s#012Interval WAL: 4857 writes, 1947 syncs, 2.49 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 59695104 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 59695104 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8ec0000/0x0/0x4ffc00000, data 0x2b28b7c/0x2cbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510628 data_alloc: 218103808 data_used: 6881280
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 59695104 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.153414726s of 11.554138184s, submitted: 132
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 59695104 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 59695104 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 59695104 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320364544 unmapped: 59686912 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c9395c20
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca805c00 session 0x5597c8b54000
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d48400 session 0x5597c8f4f2c0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d48400 session 0x5597c9322f00
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513901 data_alloc: 218103808 data_used: 6881280
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c8bdab40
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c9fac960
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c7ec7e00
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320315392 unmapped: 67174400 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca805c00 session 0x5597c6f11c20
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca805c00 session 0x5597c9fa5a40
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7edd000/0x0/0x4ffc00000, data 0x3b0ab8c/0x3ca1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320323584 unmapped: 67166208 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7edd000/0x0/0x4ffc00000, data 0x3b0ab8c/0x3ca1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320323584 unmapped: 67166208 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320331776 unmapped: 67158016 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320331776 unmapped: 67158016 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3631519 data_alloc: 218103808 data_used: 6885376
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320331776 unmapped: 67158016 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320331776 unmapped: 67158016 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c8ec32c0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320331776 unmapped: 67158016 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c8baad20
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7eda000/0x0/0x4ffc00000, data 0x3b0db8c/0x3ca4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320331776 unmapped: 67158016 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d48400 session 0x5597c9059e00
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.693615913s of 12.837650299s, submitted: 26
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c8bdb2c0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320348160 unmapped: 67141632 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3634757 data_alloc: 218103808 data_used: 6885376
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7ed9000/0x0/0x4ffc00000, data 0x3b0dbaf/0x3ca5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320348160 unmapped: 67141632 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321585152 unmapped: 65904640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321585152 unmapped: 65904640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321585152 unmapped: 65904640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7ed9000/0x0/0x4ffc00000, data 0x3b0dbaf/0x3ca5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321585152 unmapped: 65904640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752677 data_alloc: 234881024 data_used: 23490560
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321585152 unmapped: 65904640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321585152 unmapped: 65904640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321585152 unmapped: 65904640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321585152 unmapped: 65904640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7ed9000/0x0/0x4ffc00000, data 0x3b0dbaf/0x3ca5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321585152 unmapped: 65904640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753333 data_alloc: 234881024 data_used: 23494656
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.706723213s of 11.763413429s, submitted: 15
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321593344 unmapped: 65896448 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326164480 unmapped: 61325312 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326238208 unmapped: 61251584 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326238208 unmapped: 61251584 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326238208 unmapped: 61251584 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e71b6000/0x0/0x4ffc00000, data 0x4830baf/0x49c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3868965 data_alloc: 234881024 data_used: 24391680
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326238208 unmapped: 61251584 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326246400 unmapped: 61243392 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326246400 unmapped: 61243392 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326246400 unmapped: 61243392 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326254592 unmapped: 61235200 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3868737 data_alloc: 234881024 data_used: 24412160
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e71b4000/0x0/0x4ffc00000, data 0x4832baf/0x49ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326254592 unmapped: 61235200 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e71b4000/0x0/0x4ffc00000, data 0x4832baf/0x49ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326254592 unmapped: 61235200 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326254592 unmapped: 61235200 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326254592 unmapped: 61235200 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.880608559s of 13.121785164s, submitted: 92
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c8bab4a0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c9df14a0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c9fac5a0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318914560 unmapped: 68575232 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3529945 data_alloc: 218103808 data_used: 6897664
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318914560 unmapped: 68575232 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318914560 unmapped: 68575232 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e86ea000/0x0/0x4ffc00000, data 0x2b43b7c/0x2cd9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318914560 unmapped: 68575232 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318914560 unmapped: 68575232 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e86ea000/0x0/0x4ffc00000, data 0x2b43b7c/0x2cd9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318914560 unmapped: 68575232 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Oct 14 05:59:26 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/12511987' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3529945 data_alloc: 218103808 data_used: 6897664
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318914560 unmapped: 68575232 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318914560 unmapped: 68575232 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c7e0e5a0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2c00 session 0x5597c8da74a0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c9df0960
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379525 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379525 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379525 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.259336472s of 23.626934052s, submitted: 120
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318980096 unmapped: 68509696 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319037440 unmapped: 68452352 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379525 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319037440 unmapped: 68452352 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319037440 unmapped: 68452352 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319037440 unmapped: 68452352 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319037440 unmapped: 68452352 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319037440 unmapped: 68452352 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379525 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319045632 unmapped: 68444160 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319045632 unmapped: 68444160 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319045632 unmapped: 68444160 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319045632 unmapped: 68444160 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319053824 unmapped: 68435968 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379525 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319053824 unmapped: 68435968 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319053824 unmapped: 68435968 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319053824 unmapped: 68435968 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319053824 unmapped: 68435968 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319062016 unmapped: 68427776 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379525 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319062016 unmapped: 68427776 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319062016 unmapped: 68427776 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319062016 unmapped: 68427776 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319062016 unmapped: 68427776 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319070208 unmapped: 68419584 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379525 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319070208 unmapped: 68419584 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319070208 unmapped: 68419584 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319070208 unmapped: 68419584 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c91d70e0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c8b552c0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c93234a0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d48400 session 0x5597c9fbb0e0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.051465988s of 26.344564438s, submitted: 90
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319397888 unmapped: 68091904 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c8bade00
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c8eb6960
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c7e0eb40
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c705ad20
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca805c00 session 0x5597c9058b40
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319397888 unmapped: 68091904 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e95e8000/0x0/0x4ffc00000, data 0x23febee/0x2596000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460245 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319397888 unmapped: 68091904 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319397888 unmapped: 68091904 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319397888 unmapped: 68091904 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319397888 unmapped: 68091904 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319397888 unmapped: 68091904 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460245 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e95e8000/0x0/0x4ffc00000, data 0x23febee/0x2596000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319406080 unmapped: 68083712 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319406080 unmapped: 68083712 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319406080 unmapped: 68083712 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c9f8bc20
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e95e8000/0x0/0x4ffc00000, data 0x23febee/0x2596000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319406080 unmapped: 68083712 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e95e7000/0x0/0x4ffc00000, data 0x23fec11/0x2597000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.328366280s of 10.499547958s, submitted: 49
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319406080 unmapped: 68083712 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3465278 data_alloc: 218103808 data_used: 4861952
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319340544 unmapped: 68149248 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319995904 unmapped: 67493888 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319995904 unmapped: 67493888 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319995904 unmapped: 67493888 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319995904 unmapped: 67493888 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e95e7000/0x0/0x4ffc00000, data 0x23fec11/0x2597000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3523198 data_alloc: 218103808 data_used: 13045760
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319995904 unmapped: 67493888 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e95e7000/0x0/0x4ffc00000, data 0x23fec11/0x2597000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319995904 unmapped: 67493888 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319995904 unmapped: 67493888 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319995904 unmapped: 67493888 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319995904 unmapped: 67493888 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.949416161s of 10.952005386s, submitted: 1
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554408 data_alloc: 218103808 data_used: 13467648
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322412544 unmapped: 65077248 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322486272 unmapped: 65003520 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8e9e000/0x0/0x4ffc00000, data 0x2736c11/0x28cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322486272 unmapped: 65003520 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322486272 unmapped: 65003520 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322486272 unmapped: 65003520 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567410 data_alloc: 234881024 data_used: 13811712
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322486272 unmapped: 65003520 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322486272 unmapped: 65003520 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8e20000/0x0/0x4ffc00000, data 0x27b4c11/0x294d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323543040 unmapped: 63946752 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323543040 unmapped: 63946752 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323543040 unmapped: 63946752 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3566358 data_alloc: 234881024 data_used: 13824000
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323543040 unmapped: 63946752 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8e00000/0x0/0x4ffc00000, data 0x27d5c11/0x296e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323543040 unmapped: 63946752 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323543040 unmapped: 63946752 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323551232 unmapped: 63938560 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.258024216s of 14.533938408s, submitted: 65
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c8da6000
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c8da63c0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9feb800 session 0x5597c78414a0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c6f934a0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323551232 unmapped: 63938560 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8df9000/0x0/0x4ffc00000, data 0x27dbc20/0x2975000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3568414 data_alloc: 234881024 data_used: 13824000
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330170368 unmapped: 57319424 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c8eb6780
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c9394d20
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c8f4e780
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c8f4f860
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9feb800 session 0x5597c7e0f4a0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8782000/0x0/0x4ffc00000, data 0x2e52c20/0x2fec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323559424 unmapped: 63930368 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323559424 unmapped: 63930368 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8782000/0x0/0x4ffc00000, data 0x2e52c20/0x2fec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323559424 unmapped: 63930368 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323559424 unmapped: 63930368 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3623576 data_alloc: 234881024 data_used: 13828096
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323559424 unmapped: 63930368 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8782000/0x0/0x4ffc00000, data 0x2e52c20/0x2fec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323559424 unmapped: 63930368 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323567616 unmapped: 63922176 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323567616 unmapped: 63922176 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323567616 unmapped: 63922176 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.050421715s of 10.386352539s, submitted: 13
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c71854a0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3624501 data_alloc: 234881024 data_used: 13828096
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8782000/0x0/0x4ffc00000, data 0x2e52c20/0x2fec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323584000 unmapped: 63905792 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8782000/0x0/0x4ffc00000, data 0x2e52c20/0x2fec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323584000 unmapped: 63905792 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323584000 unmapped: 63905792 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324468736 unmapped: 63021056 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 325877760 unmapped: 61612032 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3672021 data_alloc: 234881024 data_used: 20504576
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 325877760 unmapped: 61612032 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8782000/0x0/0x4ffc00000, data 0x2e52c20/0x2fec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 325877760 unmapped: 61612032 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8782000/0x0/0x4ffc00000, data 0x2e52c20/0x2fec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 325877760 unmapped: 61612032 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 325877760 unmapped: 61612032 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 325877760 unmapped: 61612032 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8782000/0x0/0x4ffc00000, data 0x2e52c20/0x2fec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3672373 data_alloc: 234881024 data_used: 20504576
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 325877760 unmapped: 61612032 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 325877760 unmapped: 61612032 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 325877760 unmapped: 61612032 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.206857681s of 13.240119934s, submitted: 8
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331653120 unmapped: 55836672 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331759616 unmapped: 55730176 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7b26000/0x0/0x4ffc00000, data 0x3aa6c20/0x3c40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3788835 data_alloc: 234881024 data_used: 21499904
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331759616 unmapped: 55730176 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331759616 unmapped: 55730176 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331759616 unmapped: 55730176 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7b26000/0x0/0x4ffc00000, data 0x3aa6c20/0x3c40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331833344 unmapped: 55656448 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331833344 unmapped: 55656448 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3788835 data_alloc: 234881024 data_used: 21499904
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331161600 unmapped: 56328192 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7b26000/0x0/0x4ffc00000, data 0x3aa6c20/0x3c40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331161600 unmapped: 56328192 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331169792 unmapped: 56320000 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331169792 unmapped: 56320000 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7b2d000/0x0/0x4ffc00000, data 0x3aa7c20/0x3c41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331169792 unmapped: 56320000 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3784939 data_alloc: 234881024 data_used: 21671936
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c8d714a0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.424279213s of 12.732007027s, submitted: 100
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c9f8a1e0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331169792 unmapped: 56320000 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c8d71e00
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329654272 unmapped: 57835520 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8dfa000/0x0/0x4ffc00000, data 0x27dbc11/0x2974000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329654272 unmapped: 57835520 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329654272 unmapped: 57835520 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c6489c20
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329654272 unmapped: 57835520 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8dee000/0x0/0x4ffc00000, data 0x27e7c11/0x2980000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [0,0,1])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c9323a40
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402339 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 62840832 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 62840832 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 62840832 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 62840832 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 62840832 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402339 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 62840832 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 62840832 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 62840832 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 62840832 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324657152 unmapped: 62832640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402339 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324657152 unmapped: 62832640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324657152 unmapped: 62832640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324657152 unmapped: 62832640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324657152 unmapped: 62832640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324665344 unmapped: 62824448 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402339 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324665344 unmapped: 62824448 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324665344 unmapped: 62824448 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324665344 unmapped: 62824448 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 62816256 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 62816256 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402339 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 62816256 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 62816256 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 62816256 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 62816256 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 62816256 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402339 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 62816256 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 62816256 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 62816256 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324681728 unmapped: 62808064 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324681728 unmapped: 62808064 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402339 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324681728 unmapped: 62808064 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324681728 unmapped: 62808064 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324681728 unmapped: 62808064 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324689920 unmapped: 62799872 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324689920 unmapped: 62799872 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402339 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324689920 unmapped: 62799872 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324689920 unmapped: 62799872 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324689920 unmapped: 62799872 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324698112 unmapped: 62791680 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324698112 unmapped: 62791680 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402339 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324698112 unmapped: 62791680 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324698112 unmapped: 62791680 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c8f4ed20
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c8ec32c0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c91761e0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9fec000 session 0x5597c705bc20
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 46.449398041s of 46.750865936s, submitted: 92
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c6f114a0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69c000/0x0/0x4ffc00000, data 0x1f7cb7c/0x2112000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324829184 unmapped: 62660608 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324837376 unmapped: 62652416 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324837376 unmapped: 62652416 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441271 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69c000/0x0/0x4ffc00000, data 0x1f7cb7c/0x2112000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324837376 unmapped: 62652416 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69c000/0x0/0x4ffc00000, data 0x1f7cb7c/0x2112000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324837376 unmapped: 62652416 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69c000/0x0/0x4ffc00000, data 0x1f7cb7c/0x2112000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69c000/0x0/0x4ffc00000, data 0x1f7cb7c/0x2112000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324837376 unmapped: 62652416 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324853760 unmapped: 62636032 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69c000/0x0/0x4ffc00000, data 0x1f7cb7c/0x2112000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324861952 unmapped: 62627840 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c8baa780
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69c000/0x0/0x4ffc00000, data 0x1f7cb7c/0x2112000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441271 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c9fa43c0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324861952 unmapped: 62627840 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c8babc20
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324861952 unmapped: 62627840 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260800 session 0x5597c911cf00
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324861952 unmapped: 62627840 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69b000/0x0/0x4ffc00000, data 0x1f7cb8c/0x2113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324861952 unmapped: 62627840 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324861952 unmapped: 62627840 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480549 data_alloc: 218103808 data_used: 9494528
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69b000/0x0/0x4ffc00000, data 0x1f7cb8c/0x2113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69b000/0x0/0x4ffc00000, data 0x1f7cb8c/0x2113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480869 data_alloc: 218103808 data_used: 9551872
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69b000/0x0/0x4ffc00000, data 0x1f7cb8c/0x2113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.233131409s of 23.287984848s, submitted: 4
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481685 data_alloc: 218103808 data_used: 9555968
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323919872 unmapped: 63569920 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69b000/0x0/0x4ffc00000, data 0x1f7cb8c/0x2113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323928064 unmapped: 63561728 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323190784 unmapped: 64299008 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323190784 unmapped: 64299008 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea38e000/0x0/0x4ffc00000, data 0x2289b8c/0x2420000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323190784 unmapped: 64299008 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3507191 data_alloc: 218103808 data_used: 9674752
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323190784 unmapped: 64299008 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323190784 unmapped: 64299008 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323190784 unmapped: 64299008 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323198976 unmapped: 64290816 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea38e000/0x0/0x4ffc00000, data 0x2289b8c/0x2420000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323198976 unmapped: 64290816 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3507207 data_alloc: 218103808 data_used: 9674752
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323198976 unmapped: 64290816 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea38e000/0x0/0x4ffc00000, data 0x2289b8c/0x2420000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323198976 unmapped: 64290816 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323198976 unmapped: 64290816 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323207168 unmapped: 64282624 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323207168 unmapped: 64282624 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3507207 data_alloc: 218103808 data_used: 9674752
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323207168 unmapped: 64282624 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.272737503s of 16.378507614s, submitted: 28
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323747840 unmapped: 63741952 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea38e000/0x0/0x4ffc00000, data 0x2289b8c/0x2420000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [0,0,0,0,0,1])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c8f4f0e0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323215360 unmapped: 64274432 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9fa6000/0x0/0x4ffc00000, data 0x2671b8c/0x2808000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323223552 unmapped: 64266240 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323231744 unmapped: 64258048 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3542575 data_alloc: 218103808 data_used: 9674752
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323231744 unmapped: 64258048 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323231744 unmapped: 64258048 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9fa6000/0x0/0x4ffc00000, data 0x2671b8c/0x2808000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323231744 unmapped: 64258048 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c8ec6f00
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323239936 unmapped: 64249856 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c91cb000 session 0x5597c8edd4a0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597cc6e8000 session 0x5597c8d71a40
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c9fad860
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323239936 unmapped: 64249856 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544413 data_alloc: 218103808 data_used: 9674752
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323239936 unmapped: 64249856 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9fa5000/0x0/0x4ffc00000, data 0x2671b9c/0x2809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9fa5000/0x0/0x4ffc00000, data 0x2671b9c/0x2809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323264512 unmapped: 64225280 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9fa5000/0x0/0x4ffc00000, data 0x2671b9c/0x2809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323264512 unmapped: 64225280 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9fa5000/0x0/0x4ffc00000, data 0x2671b9c/0x2809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3570813 data_alloc: 218103808 data_used: 13381632
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323264512 unmapped: 64225280 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323264512 unmapped: 64225280 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323264512 unmapped: 64225280 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9fa5000/0x0/0x4ffc00000, data 0x2671b9c/0x2809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323264512 unmapped: 64225280 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323264512 unmapped: 64225280 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3570813 data_alloc: 218103808 data_used: 13381632
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323264512 unmapped: 64225280 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9fa5000/0x0/0x4ffc00000, data 0x2671b9c/0x2809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323264512 unmapped: 64225280 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.227336884s of 20.305019379s, submitted: 9
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324141056 unmapped: 63348736 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e997f000/0x0/0x4ffc00000, data 0x2c8fb9c/0x2e27000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324927488 unmapped: 62562304 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e98e9000/0x0/0x4ffc00000, data 0x2d2db9c/0x2ec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324927488 unmapped: 62562304 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3627635 data_alloc: 218103808 data_used: 13598720
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324927488 unmapped: 62562304 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324927488 unmapped: 62562304 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324927488 unmapped: 62562304 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324935680 unmapped: 62554112 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324943872 unmapped: 62545920 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3628135 data_alloc: 218103808 data_used: 13598720
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e98c5000/0x0/0x4ffc00000, data 0x2d51b9c/0x2ee9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324943872 unmapped: 62545920 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324943872 unmapped: 62545920 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324943872 unmapped: 62545920 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324952064 unmapped: 62537728 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e98c5000/0x0/0x4ffc00000, data 0x2d51b9c/0x2ee9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324952064 unmapped: 62537728 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3628455 data_alloc: 218103808 data_used: 13606912
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324952064 unmapped: 62537728 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c91cb000 session 0x5597c705bc20
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c9176b40
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.985599518s of 14.288364410s, submitted: 86
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c705a5a0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322830336 unmapped: 64659456 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322830336 unmapped: 64659456 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322830336 unmapped: 64659456 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322830336 unmapped: 64659456 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515600 data_alloc: 218103808 data_used: 9674752
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea38e000/0x0/0x4ffc00000, data 0x2289b8c/0x2420000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322830336 unmapped: 64659456 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c9fba780
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c9394f00
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322830336 unmapped: 64659456 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c9fba5a0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3415797 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3415797 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3415797 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca804800 session 0x5597c9322d20
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: mgrc ms_handle_reset ms_handle_reset con 0x5597c8fbb000
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3625056923
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3625056923,v1:192.168.122.100:6801/3625056923]
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: mgrc handle_mgr_configure stats_period=5
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318988288 unmapped: 68501504 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8db4400 session 0x5597c911c780
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c7e2c800 session 0x5597c8bacb40
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318988288 unmapped: 68501504 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318988288 unmapped: 68501504 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3415797 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318988288 unmapped: 68501504 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318988288 unmapped: 68501504 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318988288 unmapped: 68501504 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318988288 unmapped: 68501504 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318996480 unmapped: 68493312 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3415797 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318996480 unmapped: 68493312 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318996480 unmapped: 68493312 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318996480 unmapped: 68493312 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319004672 unmapped: 68485120 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319004672 unmapped: 68485120 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3415797 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319004672 unmapped: 68485120 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319004672 unmapped: 68485120 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319004672 unmapped: 68485120 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319012864 unmapped: 68476928 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319012864 unmapped: 68476928 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3415797 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319012864 unmapped: 68476928 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319012864 unmapped: 68476928 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c8da7c20
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c9df1680
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597cc6e8000 session 0x5597c7e792c0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f74c00 session 0x5597c8ec6b40
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 41.484188080s of 41.605556488s, submitted: 34
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 325509120 unmapped: 66715648 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c8da70e0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c9f8b860
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c7e78960
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597cc6e8000 session 0x5597c976a000
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca800000 session 0x5597c7840b40
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320528384 unmapped: 71696384 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9d07000/0x0/0x4ffc00000, data 0x2911b7c/0x2aa7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320528384 unmapped: 71696384 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3527710 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320528384 unmapped: 71696384 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320528384 unmapped: 71696384 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320528384 unmapped: 71696384 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9d07000/0x0/0x4ffc00000, data 0x2911b7c/0x2aa7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320536576 unmapped: 71688192 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c90592c0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320536576 unmapped: 71688192 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c976be00
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c8baad20
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3527710 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597cc6e8000 session 0x5597c9323860
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319496192 unmapped: 72728576 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319496192 unmapped: 72728576 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9d06000/0x0/0x4ffc00000, data 0x2911b8c/0x2aa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319447040 unmapped: 72777728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322658304 unmapped: 69566464 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322658304 unmapped: 69566464 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9d06000/0x0/0x4ffc00000, data 0x2911b8c/0x2aa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3633229 data_alloc: 234881024 data_used: 18477056
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322658304 unmapped: 69566464 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322658304 unmapped: 69566464 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9d06000/0x0/0x4ffc00000, data 0x2911b8c/0x2aa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322658304 unmapped: 69566464 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322658304 unmapped: 69566464 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322658304 unmapped: 69566464 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3633229 data_alloc: 234881024 data_used: 18477056
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9d06000/0x0/0x4ffc00000, data 0x2911b8c/0x2aa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322658304 unmapped: 69566464 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322658304 unmapped: 69566464 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.397739410s of 19.592414856s, submitted: 36
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9d06000/0x0/0x4ffc00000, data 0x2911b8c/0x2aa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [0,0,1,0,3,1])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e96e5000/0x0/0x4ffc00000, data 0x2f32b8c/0x30c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326262784 unmapped: 65961984 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3707751 data_alloc: 234881024 data_used: 19681280
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9633000/0x0/0x4ffc00000, data 0x2fe3b8c/0x317a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9633000/0x0/0x4ffc00000, data 0x2fe3b8c/0x317a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700087 data_alloc: 234881024 data_used: 19681280
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9631000/0x0/0x4ffc00000, data 0x2fe6b8c/0x317d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700087 data_alloc: 234881024 data_used: 19681280
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9631000/0x0/0x4ffc00000, data 0x2fe6b8c/0x317d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9631000/0x0/0x4ffc00000, data 0x2fe6b8c/0x317d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c79745a0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c9176780
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c6f11e00
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c93230e0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.581520081s of 14.833774567s, submitted: 84
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c705ab40
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597cc6e8000 session 0x5597c8baad20
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597cc6e8000 session 0x5597c976be00
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c90592c0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c976a000
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326508544 unmapped: 65716224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326508544 unmapped: 65716224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326508544 unmapped: 65716224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3771905 data_alloc: 234881024 data_used: 19681280
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326508544 unmapped: 65716224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326508544 unmapped: 65716224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8d5d000/0x0/0x4ffc00000, data 0x38b9b9c/0x3a51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326508544 unmapped: 65716224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326508544 unmapped: 65716224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326721536 unmapped: 65503232 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3836865 data_alloc: 234881024 data_used: 27766784
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329981952 unmapped: 62242816 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8d5d000/0x0/0x4ffc00000, data 0x38b9b9c/0x3a51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329981952 unmapped: 62242816 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329981952 unmapped: 62242816 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329981952 unmapped: 62242816 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8d5d000/0x0/0x4ffc00000, data 0x38b9b9c/0x3a51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8d5d000/0x0/0x4ffc00000, data 0x38b9b9c/0x3a51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329981952 unmapped: 62242816 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3836865 data_alloc: 234881024 data_used: 27766784
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329981952 unmapped: 62242816 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329981952 unmapped: 62242816 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329981952 unmapped: 62242816 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.824155807s of 16.902429581s, submitted: 11
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329990144 unmapped: 62234624 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8d5d000/0x0/0x4ffc00000, data 0x38b9b9c/0x3a51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #51. Immutable memtables: 0.
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333660160 unmapped: 58564608 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8d5b000/0x0/0x4ffc00000, data 0x38bab9c/0x3a52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3878863 data_alloc: 234881024 data_used: 28008448
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333660160 unmapped: 58564608 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7659000/0x0/0x4ffc00000, data 0x3e17b9c/0x3faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3886749 data_alloc: 234881024 data_used: 27828224
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e763c000/0x0/0x4ffc00000, data 0x3e2cb9c/0x3fc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e763c000/0x0/0x4ffc00000, data 0x3e2cb9c/0x3fc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3886749 data_alloc: 234881024 data_used: 27828224
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.783678055s of 12.008990288s, submitted: 48
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e763c000/0x0/0x4ffc00000, data 0x3e2cb9c/0x3fc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c7e78960
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c9df03c0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330792960 unmapped: 61431808 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3710153 data_alloc: 234881024 data_used: 18591744
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330792960 unmapped: 61431808 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330792960 unmapped: 61431808 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e848f000/0x0/0x4ffc00000, data 0x2fe8b8c/0x317f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80e400 session 0x5597c8ec65a0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2000 session 0x5597c9df0b40
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330792960 unmapped: 61431808 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c9df0f00
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441058 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a2a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a2a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441058 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a2a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441058 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a2a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441058 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a2a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441058 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a2a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a2a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a2a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441058 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441058 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a2a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a2a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c9059e00
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c8bdab40
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c9fa4b40
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c8b550e0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 42.645046234s of 42.893882751s, submitted: 79
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80e400 session 0x5597c8d70b40
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2000 session 0x5597c7184000
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c9fb23c0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c7840000
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c8eb7e00
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321708032 unmapped: 70516736 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321708032 unmapped: 70516736 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3541334 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321716224 unmapped: 70508544 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321716224 unmapped: 70508544 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321716224 unmapped: 70508544 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8f05000/0x0/0x4ffc00000, data 0x2572b8c/0x2709000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321716224 unmapped: 70508544 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321724416 unmapped: 70500352 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80e400 session 0x5597c9fa50e0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3541334 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2000 session 0x5597c8ecab40
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321724416 unmapped: 70500352 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8f05000/0x0/0x4ffc00000, data 0x2572b8c/0x2709000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597cc6e8000 session 0x5597c7e0fa40
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c8ec7c20
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321617920 unmapped: 70606848 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8f05000/0x0/0x4ffc00000, data 0x2572b8c/0x2709000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321626112 unmapped: 70598656 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321634304 unmapped: 70590464 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8f05000/0x0/0x4ffc00000, data 0x2572b8c/0x2709000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323674112 unmapped: 68550656 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8f05000/0x0/0x4ffc00000, data 0x2572b8c/0x2709000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614275 data_alloc: 234881024 data_used: 14381056
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323674112 unmapped: 68550656 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323674112 unmapped: 68550656 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323674112 unmapped: 68550656 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323674112 unmapped: 68550656 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8f05000/0x0/0x4ffc00000, data 0x2572b8c/0x2709000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323674112 unmapped: 68550656 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614275 data_alloc: 234881024 data_used: 14381056
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323674112 unmapped: 68550656 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323674112 unmapped: 68550656 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8f05000/0x0/0x4ffc00000, data 0x2572b8c/0x2709000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323674112 unmapped: 68550656 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.752676010s of 19.861513138s, submitted: 33
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329728000 unmapped: 62496768 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3681253 data_alloc: 234881024 data_used: 15691776
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8853000/0x0/0x4ffc00000, data 0x2c1bb8c/0x2db2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8853000/0x0/0x4ffc00000, data 0x2c1bb8c/0x2db2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8853000/0x0/0x4ffc00000, data 0x2c1bb8c/0x2db2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674689 data_alloc: 234881024 data_used: 15691776
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e883d000/0x0/0x4ffc00000, data 0x2c3ab8c/0x2dd1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e883d000/0x0/0x4ffc00000, data 0x2c3ab8c/0x2dd1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674689 data_alloc: 234881024 data_used: 15691776
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.384753227s of 12.701920509s, submitted: 117
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2000 session 0x5597c9323a40
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6e400 session 0x5597c8b552c0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597caeeb400 session 0x5597c9fac1e0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8fd0000 session 0x5597c8eddc20
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c9fba000
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6e400 session 0x5597c7ec63c0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2000 session 0x5597c7ec6960
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597caeeb400 session 0x5597c9df0000
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6f800 session 0x5597c8d70000
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330383360 unmapped: 61841408 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717851 data_alloc: 234881024 data_used: 15691776
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330383360 unmapped: 61841408 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e840c000/0x0/0x4ffc00000, data 0x3069bfe/0x3202000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e840c000/0x0/0x4ffc00000, data 0x3069bfe/0x3202000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330383360 unmapped: 61841408 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330383360 unmapped: 61841408 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330383360 unmapped: 61841408 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6f800 session 0x5597c7974f00
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330383360 unmapped: 61841408 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e840c000/0x0/0x4ffc00000, data 0x3069bfe/0x3202000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c7ec6960
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717851 data_alloc: 234881024 data_used: 15691776
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330383360 unmapped: 61841408 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6e400 session 0x5597c7ec63c0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.189133644s of 10.343238831s, submitted: 42
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2000 session 0x5597c9fba000
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e840b000/0x0/0x4ffc00000, data 0x3069c21/0x3203000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330383360 unmapped: 61841408 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330391552 unmapped: 61833216 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331309056 unmapped: 60915712 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331309056 unmapped: 60915712 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e840b000/0x0/0x4ffc00000, data 0x3069c21/0x3203000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3748708 data_alloc: 234881024 data_used: 19931136
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331309056 unmapped: 60915712 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331309056 unmapped: 60915712 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331309056 unmapped: 60915712 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331309056 unmapped: 60915712 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331309056 unmapped: 60915712 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749364 data_alloc: 234881024 data_used: 19935232
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e840b000/0x0/0x4ffc00000, data 0x3069c21/0x3203000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331309056 unmapped: 60915712 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331309056 unmapped: 60915712 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.708535194s of 11.744414330s, submitted: 9
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331309056 unmapped: 60915712 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e840b000/0x0/0x4ffc00000, data 0x3069c21/0x3203000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [0,1,1])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334585856 unmapped: 57638912 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 56434688 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3808274 data_alloc: 234881024 data_used: 20135936
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 56434688 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 56434688 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7e00000/0x0/0x4ffc00000, data 0x366cc21/0x3806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7e00000/0x0/0x4ffc00000, data 0x366cc21/0x3806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 56434688 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 56434688 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335126528 unmapped: 57098240 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7e00000/0x0/0x4ffc00000, data 0x366cc21/0x3806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3803170 data_alloc: 234881024 data_used: 20140032
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335126528 unmapped: 57098240 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335126528 unmapped: 57098240 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335126528 unmapped: 57098240 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335134720 unmapped: 57090048 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335134720 unmapped: 57090048 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.447299004s of 12.729538918s, submitted: 84
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3803346 data_alloc: 234881024 data_used: 20140032
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335134720 unmapped: 57090048 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7e05000/0x0/0x4ffc00000, data 0x366fc21/0x3809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335134720 unmapped: 57090048 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597caeeb400 session 0x5597c9fac1e0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335142912 unmapped: 57081856 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597caeeb400 session 0x5597c8da70e0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335192064 unmapped: 57032704 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335192064 unmapped: 57032704 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686054 data_alloc: 234881024 data_used: 15679488
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335192064 unmapped: 57032704 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c9f8a1e0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80e400 session 0x5597c9f8b860
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329187328 unmapped: 63037440 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c8f4e780
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e882a000/0x0/0x4ffc00000, data 0x2c4db8c/0x2de4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467082 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467082 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467082 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467082 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467082 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 63012864 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 63012864 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 63012864 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 63012864 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 63012864 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467082 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329220096 unmapped: 63004672 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329220096 unmapped: 63004672 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329220096 unmapped: 63004672 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329220096 unmapped: 63004672 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 38.834590912s of 39.179557800s, submitted: 109
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6e400 session 0x5597c705a960
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324231168 unmapped: 71671808 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3540834 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324231168 unmapped: 71671808 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324231168 unmapped: 71671808 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x2376b7c/0x250c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324231168 unmapped: 71671808 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324231168 unmapped: 71671808 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324231168 unmapped: 71671808 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3540834 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c9f8a3c0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324239360 unmapped: 71663616 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c93223c0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324239360 unmapped: 71663616 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80e400 session 0x5597c9058b40
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597caeeb400 session 0x5597c8b554a0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324386816 unmapped: 71516160 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e90dd000/0x0/0x4ffc00000, data 0x239ab9f/0x2531000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324395008 unmapped: 71507968 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324722688 unmapped: 71180288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614328 data_alloc: 234881024 data_used: 13717504
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324722688 unmapped: 71180288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e90dd000/0x0/0x4ffc00000, data 0x239ab9f/0x2531000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324722688 unmapped: 71180288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e90dd000/0x0/0x4ffc00000, data 0x239ab9f/0x2531000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324722688 unmapped: 71180288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324722688 unmapped: 71180288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324722688 unmapped: 71180288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614328 data_alloc: 234881024 data_used: 13717504
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324722688 unmapped: 71180288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324722688 unmapped: 71180288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e90dd000/0x0/0x4ffc00000, data 0x239ab9f/0x2531000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324722688 unmapped: 71180288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e90dd000/0x0/0x4ffc00000, data 0x239ab9f/0x2531000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.299053192s of 19.407587051s, submitted: 25
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324722688 unmapped: 71180288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328425472 unmapped: 67477504 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3662080 data_alloc: 234881024 data_used: 14364672
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328425472 unmapped: 67477504 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328802304 unmapped: 67100672 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328802304 unmapped: 67100672 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8b89000/0x0/0x4ffc00000, data 0x28e6b9f/0x2a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328802304 unmapped: 67100672 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328802304 unmapped: 67100672 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8b89000/0x0/0x4ffc00000, data 0x28e6b9f/0x2a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3673472 data_alloc: 234881024 data_used: 14249984
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328802304 unmapped: 67100672 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 67084288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 67084288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 67084288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 67084288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3666320 data_alloc: 234881024 data_used: 14249984
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8b8e000/0x0/0x4ffc00000, data 0x28e9b9f/0x2a80000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 67084288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 67084288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 67084288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 67084288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8b8e000/0x0/0x4ffc00000, data 0x28e9b9f/0x2a80000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 67084288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3666320 data_alloc: 234881024 data_used: 14249984
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 67084288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.285617828s of 17.535190582s, submitted: 85
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9538400 session 0x5597c9fa41e0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 78757888 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 78757888 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 78757888 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7d97000/0x0/0x4ffc00000, data 0x36e0b9f/0x3877000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 78757888 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3771938 data_alloc: 234881024 data_used: 14249984
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 78757888 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 78757888 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9538400 session 0x5597c9fad2c0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 78757888 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c8b54960
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c911da40
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7d97000/0x0/0x4ffc00000, data 0x36e0b9f/0x3877000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80e400 session 0x5597c8f4fe00
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 78757888 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 78757888 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3829579 data_alloc: 234881024 data_used: 19238912
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331464704 unmapped: 76505088 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331464704 unmapped: 76505088 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331464704 unmapped: 76505088 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331464704 unmapped: 76505088 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7d72000/0x0/0x4ffc00000, data 0x3704bc2/0x389c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331464704 unmapped: 76505088 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3839499 data_alloc: 234881024 data_used: 19476480
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331464704 unmapped: 76505088 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.400768280s of 15.509059906s, submitted: 20
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331464704 unmapped: 76505088 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331464704 unmapped: 76505088 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331464704 unmapped: 76505088 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7d72000/0x0/0x4ffc00000, data 0x3704bc2/0x389c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331464704 unmapped: 76505088 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3870475 data_alloc: 234881024 data_used: 19501056
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333758464 unmapped: 74211328 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334495744 unmapped: 73474048 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e760e000/0x0/0x4ffc00000, data 0x3e68bc2/0x4000000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e760e000/0x0/0x4ffc00000, data 0x3e68bc2/0x4000000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334512128 unmapped: 73457664 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 73072640 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 73072640 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3917109 data_alloc: 234881024 data_used: 20856832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4800.1 total, 600.0 interval
Cumulative writes: 45K writes, 176K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.04 MB/s
Cumulative WAL: 45K writes, 16K syncs, 2.71 writes per sync, written: 0.17 GB, 0.04 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 2838 writes, 11K keys, 2838 commit groups, 1.0 writes per commit group, ingest: 12.66 MB, 0.02 MB/s
Interval WAL: 2838 writes, 1140 syncs, 2.49 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 73072640 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7581000/0x0/0x4ffc00000, data 0x3ef5bc2/0x408d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 73072640 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7581000/0x0/0x4ffc00000, data 0x3ef5bc2/0x408d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 73072640 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7581000/0x0/0x4ffc00000, data 0x3ef5bc2/0x408d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 73072640 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.800569534s of 12.333517075s, submitted: 84
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 73072640 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3911673 data_alloc: 234881024 data_used: 20856832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7560000/0x0/0x4ffc00000, data 0x3f16bc2/0x40ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3911673 data_alloc: 234881024 data_used: 20856832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7560000/0x0/0x4ffc00000, data 0x3f16bc2/0x40ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3911673 data_alloc: 234881024 data_used: 20856832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7560000/0x0/0x4ffc00000, data 0x3f16bc2/0x40ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334913536 unmapped: 73056256 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334913536 unmapped: 73056256 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334913536 unmapped: 73056256 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3911673 data_alloc: 234881024 data_used: 20856832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7560000/0x0/0x4ffc00000, data 0x3f16bc2/0x40ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334913536 unmapped: 73056256 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334913536 unmapped: 73056256 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80f800 session 0x5597c9394d20
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597caeeb400 session 0x5597c9df1e00
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.377149582s of 18.387834549s, submitted: 2
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c8da7680
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334921728 unmapped: 73048064 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334921728 unmapped: 73048064 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334921728 unmapped: 73048064 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3643825 data_alloc: 218103808 data_used: 9093120
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334929920 unmapped: 73039872 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e885d000/0x0/0x4ffc00000, data 0x28eab9f/0x2a81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2000 session 0x5597c705bc20
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6f800 session 0x5597c9f8b2c0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9538400 session 0x5597c90583c0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491469 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a29000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491469 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a29000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491469 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a29000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332390400 unmapped: 75579392 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332390400 unmapped: 75579392 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332390400 unmapped: 75579392 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491469 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332390400 unmapped: 75579392 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c93941e0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9538400 session 0x5597c8bab680
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6f800 session 0x5597c8edc000
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2000 session 0x5597c9fad860
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.921491623s of 24.220367432s, submitted: 89
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597caeeb400 session 0x5597c8d70780
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c91d7860
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9538400 session 0x5597c8edc780
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6f800 session 0x5597c976af00
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2000 session 0x5597c9fad860
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332398592 unmapped: 75571200 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e94fb000/0x0/0x4ffc00000, data 0x1f7cb8c/0x2113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332398592 unmapped: 75571200 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c93941e0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332414976 unmapped: 75554816 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332414976 unmapped: 75554816 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544403 data_alloc: 218103808 data_used: 4796416
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332414976 unmapped: 75554816 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334086144 unmapped: 73883648 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c8da7680
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9538400 session 0x5597c8baa3c0
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e94fb000/0x0/0x4ffc00000, data 0x1f7cb8c/0x2113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331317248 unmapped: 76652544 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6f800 session 0x5597c8ec6b40
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.342603683s of 18.682430267s, submitted: 72
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331374592 unmapped: 76595200 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331399168 unmapped: 76570624 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331399168 unmapped: 76570624 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [1])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331399168 unmapped: 76570624 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331399168 unmapped: 76570624 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331399168 unmapped: 76570624 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331399168 unmapped: 76570624 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331399168 unmapped: 76570624 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331399168 unmapped: 76570624 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331407360 unmapped: 76562432 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331407360 unmapped: 76562432 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331407360 unmapped: 76562432 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331407360 unmapped: 76562432 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331407360 unmapped: 76562432 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331407360 unmapped: 76562432 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331407360 unmapped: 76562432 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331407360 unmapped: 76562432 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331415552 unmapped: 76554240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331415552 unmapped: 76554240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331415552 unmapped: 76554240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331415552 unmapped: 76554240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331415552 unmapped: 76554240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331415552 unmapped: 76554240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 76546048 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 76546048 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 76546048 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 76546048 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 76546048 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 76546048 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 76546048 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 76546048 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331431936 unmapped: 76537856 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331431936 unmapped: 76537856 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331431936 unmapped: 76537856 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331431936 unmapped: 76537856 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331431936 unmapped: 76537856 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331440128 unmapped: 76529664 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331440128 unmapped: 76529664 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331440128 unmapped: 76529664 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331440128 unmapped: 76529664 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331440128 unmapped: 76529664 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331448320 unmapped: 76521472 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331448320 unmapped: 76521472 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331448320 unmapped: 76521472 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331448320 unmapped: 76521472 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331448320 unmapped: 76521472 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 47.002799988s of 47.298183441s, submitted: 90
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 292 handle_osd_map epochs [292,293], i have 292, src has [1,293]
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331456512 unmapped: 76513280 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 293 ms_handle_reset con 0x5597ca9b2000 session 0x5597c9fbb680
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331456512 unmapped: 76513280 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 293 heartbeat osd_stat(store_statfs(0x4e9618000/0x0/0x4ffc00000, data 0x1a50619/0x1be5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331472896 unmapped: 76496896 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3500843 data_alloc: 218103808 data_used: 4218880
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331472896 unmapped: 76496896 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331472896 unmapped: 76496896 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331472896 unmapped: 76496896 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 293 heartbeat osd_stat(store_statfs(0x4e9618000/0x0/0x4ffc00000, data 0x1a50619/0x1be5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331472896 unmapped: 76496896 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331481088 unmapped: 76488704 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331481088 unmapped: 76488704 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331489280 unmapped: 76480512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331489280 unmapped: 76480512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331489280 unmapped: 76480512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331489280 unmapped: 76480512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331489280 unmapped: 76480512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331489280 unmapped: 76480512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331489280 unmapped: 76480512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331489280 unmapped: 76480512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331497472 unmapped: 76472320 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331497472 unmapped: 76472320 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331497472 unmapped: 76472320 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331497472 unmapped: 76472320 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331497472 unmapped: 76472320 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331497472 unmapped: 76472320 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331497472 unmapped: 76472320 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331505664 unmapped: 76464128 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331505664 unmapped: 76464128 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331505664 unmapped: 76464128 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331505664 unmapped: 76464128 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331505664 unmapped: 76464128 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331513856 unmapped: 76455936 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331513856 unmapped: 76455936 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331513856 unmapped: 76455936 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331513856 unmapped: 76455936 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331522048 unmapped: 76447744 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331522048 unmapped: 76447744 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331522048 unmapped: 76447744 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331522048 unmapped: 76447744 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331522048 unmapped: 76447744 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331530240 unmapped: 76439552 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331530240 unmapped: 76439552 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331530240 unmapped: 76439552 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331530240 unmapped: 76439552 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331530240 unmapped: 76439552 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331538432 unmapped: 76431360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331538432 unmapped: 76431360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331538432 unmapped: 76431360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331538432 unmapped: 76431360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331538432 unmapped: 76431360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331538432 unmapped: 76431360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331538432 unmapped: 76431360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331538432 unmapped: 76431360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331538432 unmapped: 76431360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331546624 unmapped: 76423168 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331546624 unmapped: 76423168 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331546624 unmapped: 76423168 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331546624 unmapped: 76423168 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331546624 unmapped: 76423168 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331546624 unmapped: 76423168 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331554816 unmapped: 76414976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331554816 unmapped: 76414976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 05:59:26 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331554816 unmapped: 76414976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:01:50 np0005486808 rsyslogd[1002]: imjournal: 15302 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct 14 06:01:51 np0005486808 nova_compute[259627]: 2025-10-14 10:01:51.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:01:51 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3277: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:01:53 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3278: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:01:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:01:53 np0005486808 nova_compute[259627]: 2025-10-14 10:01:53.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:01:55 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3279: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:01:56 np0005486808 nova_compute[259627]: 2025-10-14 10:01:56.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:01:57 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3280: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:01:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 06:01:57 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 06:01:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 06:01:57 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 06:01:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 06:01:57 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 06:01:57 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 78b288b5-6d3d-4dba-a291-2b06a7f18466 does not exist
Oct 14 06:01:57 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev c48f0050-eaa2-4ac5-9059-ad8213e0dd57 does not exist
Oct 14 06:01:57 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 013a0ff2-75fc-40d8-ba89-2fe0d2dd36b9 does not exist
Oct 14 06:01:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 06:01:57 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 06:01:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 06:01:57 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 06:01:57 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 06:01:57 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 06:01:58 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 06:01:58 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 06:01:58 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 06:01:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:01:58 np0005486808 podman[451225]: 2025-10-14 10:01:58.525004445 +0000 UTC m=+0.039252834 container create cfbecb2a3c337e314e6693d03e3f2c439a61379258f8e5c81e3c2cd43f64072e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 14 06:01:58 np0005486808 systemd[1]: Started libpod-conmon-cfbecb2a3c337e314e6693d03e3f2c439a61379258f8e5c81e3c2cd43f64072e.scope.
Oct 14 06:01:58 np0005486808 systemd[1]: Started libcrun container.
Oct 14 06:01:58 np0005486808 podman[451225]: 2025-10-14 10:01:58.507199888 +0000 UTC m=+0.021448307 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 06:01:58 np0005486808 podman[451225]: 2025-10-14 10:01:58.616270315 +0000 UTC m=+0.130518724 container init cfbecb2a3c337e314e6693d03e3f2c439a61379258f8e5c81e3c2cd43f64072e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 06:01:58 np0005486808 podman[451225]: 2025-10-14 10:01:58.623786689 +0000 UTC m=+0.138035108 container start cfbecb2a3c337e314e6693d03e3f2c439a61379258f8e5c81e3c2cd43f64072e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_davinci, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 06:01:58 np0005486808 podman[451225]: 2025-10-14 10:01:58.627995812 +0000 UTC m=+0.142244231 container attach cfbecb2a3c337e314e6693d03e3f2c439a61379258f8e5c81e3c2cd43f64072e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_davinci, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 14 06:01:58 np0005486808 admiring_davinci[451241]: 167 167
Oct 14 06:01:58 np0005486808 systemd[1]: libpod-cfbecb2a3c337e314e6693d03e3f2c439a61379258f8e5c81e3c2cd43f64072e.scope: Deactivated successfully.
Oct 14 06:01:58 np0005486808 conmon[451241]: conmon cfbecb2a3c337e314e66 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cfbecb2a3c337e314e6693d03e3f2c439a61379258f8e5c81e3c2cd43f64072e.scope/container/memory.events
Oct 14 06:01:58 np0005486808 podman[451225]: 2025-10-14 10:01:58.634738978 +0000 UTC m=+0.148987357 container died cfbecb2a3c337e314e6693d03e3f2c439a61379258f8e5c81e3c2cd43f64072e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_davinci, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 06:01:58 np0005486808 systemd[1]: var-lib-containers-storage-overlay-444ede3b47554d8af513dd1a3efe1aa333c4f631b8d49957834e38ce01fc3a39-merged.mount: Deactivated successfully.
Oct 14 06:01:58 np0005486808 nova_compute[259627]: 2025-10-14 10:01:58.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:01:58 np0005486808 podman[451225]: 2025-10-14 10:01:58.684762405 +0000 UTC m=+0.199010784 container remove cfbecb2a3c337e314e6693d03e3f2c439a61379258f8e5c81e3c2cd43f64072e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 06:01:58 np0005486808 systemd[1]: libpod-conmon-cfbecb2a3c337e314e6693d03e3f2c439a61379258f8e5c81e3c2cd43f64072e.scope: Deactivated successfully.
Oct 14 06:01:58 np0005486808 podman[451266]: 2025-10-14 10:01:58.880415796 +0000 UTC m=+0.047505827 container create 4e8f5c3883e7b32a0b159e3038347b725fc32b42113d0f02b20b61ed65bc7b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hawking, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 06:01:58 np0005486808 systemd[1]: Started libpod-conmon-4e8f5c3883e7b32a0b159e3038347b725fc32b42113d0f02b20b61ed65bc7b86.scope.
Oct 14 06:01:58 np0005486808 podman[451266]: 2025-10-14 10:01:58.858250722 +0000 UTC m=+0.025340743 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 06:01:58 np0005486808 systemd[1]: Started libcrun container.
Oct 14 06:01:58 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/176fe31ed43f0f4f138357f67334a00e36f6e602fc04d0995e0b17b1ed336018/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 06:01:58 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/176fe31ed43f0f4f138357f67334a00e36f6e602fc04d0995e0b17b1ed336018/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 06:01:58 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/176fe31ed43f0f4f138357f67334a00e36f6e602fc04d0995e0b17b1ed336018/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 06:01:58 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/176fe31ed43f0f4f138357f67334a00e36f6e602fc04d0995e0b17b1ed336018/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 06:01:58 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/176fe31ed43f0f4f138357f67334a00e36f6e602fc04d0995e0b17b1ed336018/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 06:01:59 np0005486808 podman[451266]: 2025-10-14 10:01:59.003399404 +0000 UTC m=+0.170489445 container init 4e8f5c3883e7b32a0b159e3038347b725fc32b42113d0f02b20b61ed65bc7b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 14 06:01:59 np0005486808 podman[451266]: 2025-10-14 10:01:59.016640059 +0000 UTC m=+0.183730100 container start 4e8f5c3883e7b32a0b159e3038347b725fc32b42113d0f02b20b61ed65bc7b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 06:01:59 np0005486808 podman[451266]: 2025-10-14 10:01:59.020538434 +0000 UTC m=+0.187628525 container attach 4e8f5c3883e7b32a0b159e3038347b725fc32b42113d0f02b20b61ed65bc7b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 14 06:01:59 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3281: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:02:00 np0005486808 cool_hawking[451282]: --> passed data devices: 0 physical, 3 LVM
Oct 14 06:02:00 np0005486808 cool_hawking[451282]: --> relative data size: 1.0
Oct 14 06:02:00 np0005486808 cool_hawking[451282]: --> All data devices are unavailable
Oct 14 06:02:00 np0005486808 systemd[1]: libpod-4e8f5c3883e7b32a0b159e3038347b725fc32b42113d0f02b20b61ed65bc7b86.scope: Deactivated successfully.
Oct 14 06:02:00 np0005486808 conmon[451282]: conmon 4e8f5c3883e7b32a0b15 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4e8f5c3883e7b32a0b159e3038347b725fc32b42113d0f02b20b61ed65bc7b86.scope/container/memory.events
Oct 14 06:02:00 np0005486808 systemd[1]: libpod-4e8f5c3883e7b32a0b159e3038347b725fc32b42113d0f02b20b61ed65bc7b86.scope: Consumed 1.128s CPU time.
Oct 14 06:02:00 np0005486808 podman[451311]: 2025-10-14 10:02:00.237386041 +0000 UTC m=+0.030541220 container died 4e8f5c3883e7b32a0b159e3038347b725fc32b42113d0f02b20b61ed65bc7b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 14 06:02:00 np0005486808 systemd[1]: var-lib-containers-storage-overlay-176fe31ed43f0f4f138357f67334a00e36f6e602fc04d0995e0b17b1ed336018-merged.mount: Deactivated successfully.
Oct 14 06:02:00 np0005486808 podman[451311]: 2025-10-14 10:02:00.297323422 +0000 UTC m=+0.090478611 container remove 4e8f5c3883e7b32a0b159e3038347b725fc32b42113d0f02b20b61ed65bc7b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 14 06:02:00 np0005486808 systemd[1]: libpod-conmon-4e8f5c3883e7b32a0b159e3038347b725fc32b42113d0f02b20b61ed65bc7b86.scope: Deactivated successfully.
Oct 14 06:02:01 np0005486808 nova_compute[259627]: 2025-10-14 10:02:01.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:02:01 np0005486808 podman[451467]: 2025-10-14 10:02:01.242533415 +0000 UTC m=+0.061112061 container create e7861d551e5d02817be5275c20a1a50b5ca0ec554ede55b3ce90db6ca8ea9dbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 06:02:01 np0005486808 systemd[1]: Started libpod-conmon-e7861d551e5d02817be5275c20a1a50b5ca0ec554ede55b3ce90db6ca8ea9dbf.scope.
Oct 14 06:02:01 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3282: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:02:01 np0005486808 podman[451467]: 2025-10-14 10:02:01.222978715 +0000 UTC m=+0.041557401 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 06:02:01 np0005486808 systemd[1]: Started libcrun container.
Oct 14 06:02:01 np0005486808 podman[451467]: 2025-10-14 10:02:01.341312179 +0000 UTC m=+0.159890865 container init e7861d551e5d02817be5275c20a1a50b5ca0ec554ede55b3ce90db6ca8ea9dbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 14 06:02:01 np0005486808 podman[451467]: 2025-10-14 10:02:01.354300798 +0000 UTC m=+0.172879454 container start e7861d551e5d02817be5275c20a1a50b5ca0ec554ede55b3ce90db6ca8ea9dbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_spence, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Oct 14 06:02:01 np0005486808 podman[451467]: 2025-10-14 10:02:01.357915726 +0000 UTC m=+0.176494412 container attach e7861d551e5d02817be5275c20a1a50b5ca0ec554ede55b3ce90db6ca8ea9dbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_spence, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 14 06:02:01 np0005486808 tender_spence[451483]: 167 167
Oct 14 06:02:01 np0005486808 systemd[1]: libpod-e7861d551e5d02817be5275c20a1a50b5ca0ec554ede55b3ce90db6ca8ea9dbf.scope: Deactivated successfully.
Oct 14 06:02:01 np0005486808 podman[451467]: 2025-10-14 10:02:01.364004466 +0000 UTC m=+0.182583112 container died e7861d551e5d02817be5275c20a1a50b5ca0ec554ede55b3ce90db6ca8ea9dbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_spence, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 06:02:01 np0005486808 systemd[1]: var-lib-containers-storage-overlay-59c5d8ed88f951f2832a5be149523f643eccec82db4c98eb060da5607b3beb3c-merged.mount: Deactivated successfully.
Oct 14 06:02:01 np0005486808 podman[451467]: 2025-10-14 10:02:01.40533718 +0000 UTC m=+0.223915826 container remove e7861d551e5d02817be5275c20a1a50b5ca0ec554ede55b3ce90db6ca8ea9dbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_spence, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 14 06:02:01 np0005486808 systemd[1]: libpod-conmon-e7861d551e5d02817be5275c20a1a50b5ca0ec554ede55b3ce90db6ca8ea9dbf.scope: Deactivated successfully.
Oct 14 06:02:01 np0005486808 podman[451507]: 2025-10-14 10:02:01.600850077 +0000 UTC m=+0.061107260 container create 07bcfc695d688a9a0feaf4636db4bc38904fb3ef2570a99734edd69cdff2538d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_torvalds, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 06:02:01 np0005486808 systemd[1]: Started libpod-conmon-07bcfc695d688a9a0feaf4636db4bc38904fb3ef2570a99734edd69cdff2538d.scope.
Oct 14 06:02:01 np0005486808 podman[451507]: 2025-10-14 10:02:01.571590369 +0000 UTC m=+0.031847592 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 06:02:01 np0005486808 systemd[1]: Started libcrun container.
Oct 14 06:02:01 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa045b10148767a8f93b2cfd226fff1f166b764d570ca3d5e7641bb93cf16581/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 06:02:01 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa045b10148767a8f93b2cfd226fff1f166b764d570ca3d5e7641bb93cf16581/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 06:02:01 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa045b10148767a8f93b2cfd226fff1f166b764d570ca3d5e7641bb93cf16581/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 06:02:01 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa045b10148767a8f93b2cfd226fff1f166b764d570ca3d5e7641bb93cf16581/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 06:02:01 np0005486808 podman[451507]: 2025-10-14 10:02:01.718420722 +0000 UTC m=+0.178677895 container init 07bcfc695d688a9a0feaf4636db4bc38904fb3ef2570a99734edd69cdff2538d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_torvalds, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 06:02:01 np0005486808 podman[451507]: 2025-10-14 10:02:01.732292602 +0000 UTC m=+0.192549785 container start 07bcfc695d688a9a0feaf4636db4bc38904fb3ef2570a99734edd69cdff2538d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_torvalds, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 14 06:02:01 np0005486808 podman[451507]: 2025-10-14 10:02:01.736721301 +0000 UTC m=+0.196978474 container attach 07bcfc695d688a9a0feaf4636db4bc38904fb3ef2570a99734edd69cdff2538d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_torvalds, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]: {
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:    "0": [
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:        {
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:            "devices": [
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:                "/dev/loop3"
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:            ],
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:            "lv_name": "ceph_lv0",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:            "lv_size": "21470642176",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:            "name": "ceph_lv0",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:            "tags": {
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:                "ceph.cephx_lockbox_secret": "",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:                "ceph.cluster_name": "ceph",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:                "ceph.crush_device_class": "",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:                "ceph.encrypted": "0",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:                "ceph.osd_id": "0",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:                "ceph.type": "block",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:                "ceph.vdo": "0"
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:            },
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:            "type": "block",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:            "vg_name": "ceph_vg0"
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:        }
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:    ],
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:    "1": [
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:        {
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:            "devices": [
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:                "/dev/loop4"
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:            ],
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:            "lv_name": "ceph_lv1",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:            "lv_size": "21470642176",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:            "name": "ceph_lv1",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:            "tags": {
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:                "ceph.cephx_lockbox_secret": "",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:                "ceph.cluster_name": "ceph",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:                "ceph.crush_device_class": "",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:                "ceph.encrypted": "0",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:                "ceph.osd_id": "1",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:                "ceph.type": "block",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:                "ceph.vdo": "0"
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:            },
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:            "type": "block",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:            "vg_name": "ceph_vg1"
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:        }
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:    ],
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:    "2": [
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:        {
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:            "devices": [
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:                "/dev/loop5"
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:            ],
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:            "lv_name": "ceph_lv2",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:            "lv_size": "21470642176",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:            "name": "ceph_lv2",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:            "tags": {
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:                "ceph.cephx_lockbox_secret": "",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:                "ceph.cluster_name": "ceph",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:                "ceph.crush_device_class": "",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:                "ceph.encrypted": "0",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:                "ceph.osd_id": "2",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:                "ceph.type": "block",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:                "ceph.vdo": "0"
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:            },
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:            "type": "block",
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:            "vg_name": "ceph_vg2"
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:        }
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]:    ]
Oct 14 06:02:02 np0005486808 quizzical_torvalds[451524]: }
Oct 14 06:02:02 np0005486808 systemd[1]: libpod-07bcfc695d688a9a0feaf4636db4bc38904fb3ef2570a99734edd69cdff2538d.scope: Deactivated successfully.
Oct 14 06:02:02 np0005486808 podman[451507]: 2025-10-14 10:02:02.640515768 +0000 UTC m=+1.100772951 container died 07bcfc695d688a9a0feaf4636db4bc38904fb3ef2570a99734edd69cdff2538d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_torvalds, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 06:02:02 np0005486808 systemd[1]: var-lib-containers-storage-overlay-aa045b10148767a8f93b2cfd226fff1f166b764d570ca3d5e7641bb93cf16581-merged.mount: Deactivated successfully.
Oct 14 06:02:02 np0005486808 podman[451507]: 2025-10-14 10:02:02.732498625 +0000 UTC m=+1.192755768 container remove 07bcfc695d688a9a0feaf4636db4bc38904fb3ef2570a99734edd69cdff2538d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_torvalds, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 14 06:02:02 np0005486808 systemd[1]: libpod-conmon-07bcfc695d688a9a0feaf4636db4bc38904fb3ef2570a99734edd69cdff2538d.scope: Deactivated successfully.
Oct 14 06:02:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:02:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:02:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:02:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:02:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:02:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:02:03 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3283: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:02:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:02:03 np0005486808 podman[451687]: 2025-10-14 10:02:03.638549416 +0000 UTC m=+0.073190387 container create fc376c4bfe430ea77ef1e452ad833ca8b8cfbf1b564df09d577bebbb160383e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 06:02:03 np0005486808 podman[451687]: 2025-10-14 10:02:03.609626246 +0000 UTC m=+0.044267307 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 06:02:03 np0005486808 nova_compute[259627]: 2025-10-14 10:02:03.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:02:03 np0005486808 systemd[1]: Started libpod-conmon-fc376c4bfe430ea77ef1e452ad833ca8b8cfbf1b564df09d577bebbb160383e6.scope.
Oct 14 06:02:03 np0005486808 systemd[1]: Started libcrun container.
Oct 14 06:02:03 np0005486808 podman[451687]: 2025-10-14 10:02:03.80211755 +0000 UTC m=+0.236758541 container init fc376c4bfe430ea77ef1e452ad833ca8b8cfbf1b564df09d577bebbb160383e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_almeida, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 06:02:03 np0005486808 podman[451687]: 2025-10-14 10:02:03.811176112 +0000 UTC m=+0.245817083 container start fc376c4bfe430ea77ef1e452ad833ca8b8cfbf1b564df09d577bebbb160383e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_almeida, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 06:02:03 np0005486808 podman[451687]: 2025-10-14 10:02:03.815200011 +0000 UTC m=+0.249841002 container attach fc376c4bfe430ea77ef1e452ad833ca8b8cfbf1b564df09d577bebbb160383e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 14 06:02:03 np0005486808 admiring_almeida[451703]: 167 167
Oct 14 06:02:03 np0005486808 systemd[1]: libpod-fc376c4bfe430ea77ef1e452ad833ca8b8cfbf1b564df09d577bebbb160383e6.scope: Deactivated successfully.
Oct 14 06:02:03 np0005486808 conmon[451703]: conmon fc376c4bfe430ea77ef1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fc376c4bfe430ea77ef1e452ad833ca8b8cfbf1b564df09d577bebbb160383e6.scope/container/memory.events
Oct 14 06:02:03 np0005486808 podman[451687]: 2025-10-14 10:02:03.820666625 +0000 UTC m=+0.255307606 container died fc376c4bfe430ea77ef1e452ad833ca8b8cfbf1b564df09d577bebbb160383e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 06:02:03 np0005486808 systemd[1]: var-lib-containers-storage-overlay-a03d8a018a6f431e6bb8a77f8830b23d6cff34532932ba57f1ab907b4c3e14c4-merged.mount: Deactivated successfully.
Oct 14 06:02:03 np0005486808 podman[451687]: 2025-10-14 10:02:03.85754607 +0000 UTC m=+0.292187051 container remove fc376c4bfe430ea77ef1e452ad833ca8b8cfbf1b564df09d577bebbb160383e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 06:02:03 np0005486808 systemd[1]: libpod-conmon-fc376c4bfe430ea77ef1e452ad833ca8b8cfbf1b564df09d577bebbb160383e6.scope: Deactivated successfully.
Oct 14 06:02:04 np0005486808 podman[451726]: 2025-10-14 10:02:04.064344674 +0000 UTC m=+0.058727832 container create 66455aabbbde997efff90fc3661e20de13f2631c274087f3aa671244c9f9dd23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 06:02:04 np0005486808 systemd[1]: Started libpod-conmon-66455aabbbde997efff90fc3661e20de13f2631c274087f3aa671244c9f9dd23.scope.
Oct 14 06:02:04 np0005486808 podman[451726]: 2025-10-14 10:02:04.038061159 +0000 UTC m=+0.032444367 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 06:02:04 np0005486808 systemd[1]: Started libcrun container.
Oct 14 06:02:04 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56aa3bc4db660e09dd9bb52c7f0b923731c2c8af5c9225eca5115b6d8ed4d166/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 06:02:04 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56aa3bc4db660e09dd9bb52c7f0b923731c2c8af5c9225eca5115b6d8ed4d166/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 06:02:04 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56aa3bc4db660e09dd9bb52c7f0b923731c2c8af5c9225eca5115b6d8ed4d166/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 06:02:04 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56aa3bc4db660e09dd9bb52c7f0b923731c2c8af5c9225eca5115b6d8ed4d166/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 06:02:04 np0005486808 podman[451726]: 2025-10-14 10:02:04.166528451 +0000 UTC m=+0.160911629 container init 66455aabbbde997efff90fc3661e20de13f2631c274087f3aa671244c9f9dd23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 06:02:04 np0005486808 podman[451726]: 2025-10-14 10:02:04.180957225 +0000 UTC m=+0.175340383 container start 66455aabbbde997efff90fc3661e20de13f2631c274087f3aa671244c9f9dd23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 14 06:02:04 np0005486808 podman[451726]: 2025-10-14 10:02:04.184618865 +0000 UTC m=+0.179002043 container attach 66455aabbbde997efff90fc3661e20de13f2631c274087f3aa671244c9f9dd23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_nightingale, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 06:02:05 np0005486808 great_nightingale[451743]: {
Oct 14 06:02:05 np0005486808 great_nightingale[451743]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 06:02:05 np0005486808 great_nightingale[451743]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 06:02:05 np0005486808 great_nightingale[451743]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 06:02:05 np0005486808 great_nightingale[451743]:        "osd_id": 2,
Oct 14 06:02:05 np0005486808 great_nightingale[451743]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 06:02:05 np0005486808 great_nightingale[451743]:        "type": "bluestore"
Oct 14 06:02:05 np0005486808 great_nightingale[451743]:    },
Oct 14 06:02:05 np0005486808 great_nightingale[451743]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 06:02:05 np0005486808 great_nightingale[451743]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 06:02:05 np0005486808 great_nightingale[451743]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 06:02:05 np0005486808 great_nightingale[451743]:        "osd_id": 1,
Oct 14 06:02:05 np0005486808 great_nightingale[451743]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 06:02:05 np0005486808 great_nightingale[451743]:        "type": "bluestore"
Oct 14 06:02:05 np0005486808 great_nightingale[451743]:    },
Oct 14 06:02:05 np0005486808 great_nightingale[451743]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 06:02:05 np0005486808 great_nightingale[451743]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 06:02:05 np0005486808 great_nightingale[451743]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 06:02:05 np0005486808 great_nightingale[451743]:        "osd_id": 0,
Oct 14 06:02:05 np0005486808 great_nightingale[451743]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 06:02:05 np0005486808 great_nightingale[451743]:        "type": "bluestore"
Oct 14 06:02:05 np0005486808 great_nightingale[451743]:    }
Oct 14 06:02:05 np0005486808 great_nightingale[451743]: }
Oct 14 06:02:05 np0005486808 systemd[1]: libpod-66455aabbbde997efff90fc3661e20de13f2631c274087f3aa671244c9f9dd23.scope: Deactivated successfully.
Oct 14 06:02:05 np0005486808 podman[451726]: 2025-10-14 10:02:05.249784852 +0000 UTC m=+1.244168020 container died 66455aabbbde997efff90fc3661e20de13f2631c274087f3aa671244c9f9dd23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 06:02:05 np0005486808 systemd[1]: libpod-66455aabbbde997efff90fc3661e20de13f2631c274087f3aa671244c9f9dd23.scope: Consumed 1.058s CPU time.
Oct 14 06:02:05 np0005486808 systemd[1]: var-lib-containers-storage-overlay-56aa3bc4db660e09dd9bb52c7f0b923731c2c8af5c9225eca5115b6d8ed4d166-merged.mount: Deactivated successfully.
Oct 14 06:02:05 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3284: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:02:05 np0005486808 podman[451726]: 2025-10-14 10:02:05.336057418 +0000 UTC m=+1.330440596 container remove 66455aabbbde997efff90fc3661e20de13f2631c274087f3aa671244c9f9dd23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_nightingale, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 06:02:05 np0005486808 systemd[1]: libpod-conmon-66455aabbbde997efff90fc3661e20de13f2631c274087f3aa671244c9f9dd23.scope: Deactivated successfully.
Oct 14 06:02:05 np0005486808 podman[451788]: 2025-10-14 10:02:05.389655374 +0000 UTC m=+0.087654442 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 14 06:02:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 06:02:05 np0005486808 podman[451777]: 2025-10-14 10:02:05.394269357 +0000 UTC m=+0.090780619 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 06:02:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 06:02:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 06:02:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 06:02:05 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 4f8f1723-f33f-41c6-bc06-2ec6e87b1c3d does not exist
Oct 14 06:02:05 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev a39f3f38-1e74-404f-9efb-f5594aa40129 does not exist
Oct 14 06:02:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 06:02:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/276389487' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 06:02:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 06:02:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/276389487' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 06:02:06 np0005486808 nova_compute[259627]: 2025-10-14 10:02:06.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:02:06 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 06:02:06 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 06:02:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 10:02:07.080 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 06:02:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 10:02:07.081 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 06:02:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 10:02:07.081 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 06:02:07 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3285: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:02:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:02:08 np0005486808 nova_compute[259627]: 2025-10-14 10:02:08.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:02:09 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3286: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:02:09 np0005486808 nova_compute[259627]: 2025-10-14 10:02:09.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:02:11 np0005486808 nova_compute[259627]: 2025-10-14 10:02:11.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:02:11 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3287: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:02:12 np0005486808 nova_compute[259627]: 2025-10-14 10:02:12.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:02:13 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3288: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:02:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:02:13 np0005486808 nova_compute[259627]: 2025-10-14 10:02:13.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:02:15 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3289: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:02:15 np0005486808 nova_compute[259627]: 2025-10-14 10:02:15.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:02:16 np0005486808 nova_compute[259627]: 2025-10-14 10:02:16.006 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 06:02:16 np0005486808 nova_compute[259627]: 2025-10-14 10:02:16.007 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 06:02:16 np0005486808 nova_compute[259627]: 2025-10-14 10:02:16.007 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 06:02:16 np0005486808 nova_compute[259627]: 2025-10-14 10:02:16.007 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 06:02:16 np0005486808 nova_compute[259627]: 2025-10-14 10:02:16.008 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 06:02:16 np0005486808 nova_compute[259627]: 2025-10-14 10:02:16.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:02:16 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 06:02:16 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2381954951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 06:02:16 np0005486808 nova_compute[259627]: 2025-10-14 10:02:16.458 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 06:02:16 np0005486808 nova_compute[259627]: 2025-10-14 10:02:16.655 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 06:02:16 np0005486808 nova_compute[259627]: 2025-10-14 10:02:16.656 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3524MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 06:02:16 np0005486808 nova_compute[259627]: 2025-10-14 10:02:16.656 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 06:02:16 np0005486808 nova_compute[259627]: 2025-10-14 10:02:16.657 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 06:02:16 np0005486808 nova_compute[259627]: 2025-10-14 10:02:16.761 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 06:02:16 np0005486808 nova_compute[259627]: 2025-10-14 10:02:16.762 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 06:02:16 np0005486808 nova_compute[259627]: 2025-10-14 10:02:16.973 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 06:02:17 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3290: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:02:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 06:02:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3699986227' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 06:02:17 np0005486808 nova_compute[259627]: 2025-10-14 10:02:17.435 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 06:02:17 np0005486808 nova_compute[259627]: 2025-10-14 10:02:17.442 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 06:02:17 np0005486808 podman[451920]: 2025-10-14 10:02:17.667787912 +0000 UTC m=+0.067436376 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 14 06:02:17 np0005486808 podman[451919]: 2025-10-14 10:02:17.733808612 +0000 UTC m=+0.129909489 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, tcib_managed=true)
Oct 14 06:02:17 np0005486808 nova_compute[259627]: 2025-10-14 10:02:17.753 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 06:02:17 np0005486808 nova_compute[259627]: 2025-10-14 10:02:17.756 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 06:02:17 np0005486808 nova_compute[259627]: 2025-10-14 10:02:17.756 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 06:02:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:02:18 np0005486808 nova_compute[259627]: 2025-10-14 10:02:18.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:02:19 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3291: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:02:21 np0005486808 nova_compute[259627]: 2025-10-14 10:02:21.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:02:21 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3292: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:02:22 np0005486808 nova_compute[259627]: 2025-10-14 10:02:22.752 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:02:22 np0005486808 nova_compute[259627]: 2025-10-14 10:02:22.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:02:23 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3293: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:02:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:02:23 np0005486808 nova_compute[259627]: 2025-10-14 10:02:23.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:02:24 np0005486808 nova_compute[259627]: 2025-10-14 10:02:24.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:02:24 np0005486808 nova_compute[259627]: 2025-10-14 10:02:24.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:02:24 np0005486808 nova_compute[259627]: 2025-10-14 10:02:24.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:02:24 np0005486808 nova_compute[259627]: 2025-10-14 10:02:24.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 06:02:25 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3294: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:02:25 np0005486808 nova_compute[259627]: 2025-10-14 10:02:25.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:02:25 np0005486808 nova_compute[259627]: 2025-10-14 10:02:25.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 06:02:25 np0005486808 nova_compute[259627]: 2025-10-14 10:02:25.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 06:02:26 np0005486808 nova_compute[259627]: 2025-10-14 10:02:26.020 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 14 06:02:26 np0005486808 nova_compute[259627]: 2025-10-14 10:02:26.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:02:27 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3295: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:02:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:02:28 np0005486808 nova_compute[259627]: 2025-10-14 10:02:28.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:02:29 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3296: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:02:31 np0005486808 nova_compute[259627]: 2025-10-14 10:02:31.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:02:31 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3297: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:02:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:02:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:02:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:02:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:02:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:02:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:02:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_10:02:32
Oct 14 06:02:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 06:02:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 06:02:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'images', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'default.rgw.log', 'vms', '.mgr']
Oct 14 06:02:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 06:02:33 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3298: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:02:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:02:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 06:02:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 06:02:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 06:02:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 06:02:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 06:02:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 06:02:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 06:02:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 06:02:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 06:02:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 06:02:33 np0005486808 nova_compute[259627]: 2025-10-14 10:02:33.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:02:35 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3299: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:02:35 np0005486808 podman[451963]: 2025-10-14 10:02:35.676005799 +0000 UTC m=+0.085375506 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 14 06:02:35 np0005486808 podman[451964]: 2025-10-14 10:02:35.685245986 +0000 UTC m=+0.087708953 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid)
Oct 14 06:02:36 np0005486808 nova_compute[259627]: 2025-10-14 10:02:36.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:02:36 np0005486808 nova_compute[259627]: 2025-10-14 10:02:36.740 2 DEBUG oslo_concurrency.processutils [None req-e515abeb-a726-484c-83b4-384859372b90 e3794087b3fd4f06a6c8b885f25679b1 ecc47810d1fb409dbea633329126389e - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 06:02:36 np0005486808 nova_compute[259627]: 2025-10-14 10:02:36.814 2 DEBUG oslo_concurrency.processutils [None req-e515abeb-a726-484c-83b4-384859372b90 e3794087b3fd4f06a6c8b885f25679b1 ecc47810d1fb409dbea633329126389e - - default default] CMD "env LANG=C uptime" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 06:02:37 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3300: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:02:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:02:38 np0005486808 nova_compute[259627]: 2025-10-14 10:02:38.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:02:39 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3301: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:02:41 np0005486808 nova_compute[259627]: 2025-10-14 10:02:41.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:02:41 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3302: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:02:43 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3303: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:02:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:02:43 np0005486808 nova_compute[259627]: 2025-10-14 10:02:43.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:02:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 06:02:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 06:02:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 06:02:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 06:02:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 06:02:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 06:02:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 06:02:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 06:02:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 06:02:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 06:02:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 14 06:02:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 06:02:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 06:02:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 06:02:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 06:02:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 06:02:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 06:02:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 06:02:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 06:02:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 06:02:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 06:02:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 06:02:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 06:02:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 10:02:44.066 162547 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:4d:cc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:46:39:5f:e7:5d'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 14 06:02:44 np0005486808 nova_compute[259627]: 2025-10-14 10:02:44.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:02:44 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 10:02:44.068 162547 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 14 06:02:45 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3304: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:02:46 np0005486808 nova_compute[259627]: 2025-10-14 10:02:46.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:02:47 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3305: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:02:48 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 10:02:48.071 162547 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bb42e45d-8149-4fcf-a722-37b1def68e20, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 14 06:02:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:02:48 np0005486808 podman[452006]: 2025-10-14 10:02:48.654983594 +0000 UTC m=+0.060732441 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 14 06:02:48 np0005486808 podman[452005]: 2025-10-14 10:02:48.695140029 +0000 UTC m=+0.104225768 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 06:02:48 np0005486808 nova_compute[259627]: 2025-10-14 10:02:48.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:02:49 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3306: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:02:51 np0005486808 nova_compute[259627]: 2025-10-14 10:02:51.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:02:51 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3307: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:02:53 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3308: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:02:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:02:53 np0005486808 nova_compute[259627]: 2025-10-14 10:02:53.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:02:55 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3309: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:02:56 np0005486808 nova_compute[259627]: 2025-10-14 10:02:56.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:02:57 np0005486808 nova_compute[259627]: 2025-10-14 10:02:57.014 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:02:57 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3310: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:02:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:02:58 np0005486808 nova_compute[259627]: 2025-10-14 10:02:58.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:02:59 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3311: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:03:01 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3312: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:03:01 np0005486808 nova_compute[259627]: 2025-10-14 10:03:01.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:03:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:03:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:03:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:03:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:03:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:03:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:03:03 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3313: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:03:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:03:03 np0005486808 nova_compute[259627]: 2025-10-14 10:03:03.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:03:05 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3314: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:03:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 06:03:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1071089384' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 06:03:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 06:03:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1071089384' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 06:03:05 np0005486808 podman[452099]: 2025-10-14 10:03:05.865253754 +0000 UTC m=+0.089731353 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 14 06:03:05 np0005486808 podman[452098]: 2025-10-14 10:03:05.904639491 +0000 UTC m=+0.127287235 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 06:03:06 np0005486808 nova_compute[259627]: 2025-10-14 10:03:06.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:03:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 06:03:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 06:03:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 06:03:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 06:03:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 06:03:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 06:03:06 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 5bda8706-2b8f-4cb2-814e-88274263bb91 does not exist
Oct 14 06:03:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 06:03:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 06:03:06 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev d9ade782-1d6b-40b3-8cc3-6a327ea7b15e does not exist
Oct 14 06:03:06 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 3841ed4d-640d-432c-9626-91bafeba6352 does not exist
Oct 14 06:03:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 06:03:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 06:03:06 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 06:03:06 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 06:03:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 10:03:07.081 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 06:03:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 10:03:07.082 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 06:03:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 10:03:07.082 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 06:03:07 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3315: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:03:07 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 06:03:07 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 06:03:07 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 06:03:07 np0005486808 podman[452364]: 2025-10-14 10:03:07.643339992 +0000 UTC m=+0.061266774 container create 31cbd544ebf9bc5a57704f2f479d58a5f4c4c18ea91fe335e27fbf9d55cb5317 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 06:03:07 np0005486808 systemd[1]: Started libpod-conmon-31cbd544ebf9bc5a57704f2f479d58a5f4c4c18ea91fe335e27fbf9d55cb5317.scope.
Oct 14 06:03:07 np0005486808 podman[452364]: 2025-10-14 10:03:07.621812444 +0000 UTC m=+0.039739256 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 06:03:07 np0005486808 systemd[1]: Started libcrun container.
Oct 14 06:03:07 np0005486808 podman[452364]: 2025-10-14 10:03:07.747196851 +0000 UTC m=+0.165123673 container init 31cbd544ebf9bc5a57704f2f479d58a5f4c4c18ea91fe335e27fbf9d55cb5317 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bhaskara, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 14 06:03:07 np0005486808 podman[452364]: 2025-10-14 10:03:07.753323021 +0000 UTC m=+0.171249793 container start 31cbd544ebf9bc5a57704f2f479d58a5f4c4c18ea91fe335e27fbf9d55cb5317 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bhaskara, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 06:03:07 np0005486808 podman[452364]: 2025-10-14 10:03:07.756235473 +0000 UTC m=+0.174162275 container attach 31cbd544ebf9bc5a57704f2f479d58a5f4c4c18ea91fe335e27fbf9d55cb5317 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bhaskara, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Oct 14 06:03:07 np0005486808 quirky_bhaskara[452381]: 167 167
Oct 14 06:03:07 np0005486808 systemd[1]: libpod-31cbd544ebf9bc5a57704f2f479d58a5f4c4c18ea91fe335e27fbf9d55cb5317.scope: Deactivated successfully.
Oct 14 06:03:07 np0005486808 conmon[452381]: conmon 31cbd544ebf9bc5a5770 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-31cbd544ebf9bc5a57704f2f479d58a5f4c4c18ea91fe335e27fbf9d55cb5317.scope/container/memory.events
Oct 14 06:03:07 np0005486808 podman[452364]: 2025-10-14 10:03:07.766178067 +0000 UTC m=+0.184104929 container died 31cbd544ebf9bc5a57704f2f479d58a5f4c4c18ea91fe335e27fbf9d55cb5317 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bhaskara, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 06:03:07 np0005486808 systemd[1]: var-lib-containers-storage-overlay-65cff447a940b3fa7385b07799ee655bcf608c9624f8db0ecb51515bd7c91dca-merged.mount: Deactivated successfully.
Oct 14 06:03:07 np0005486808 podman[452364]: 2025-10-14 10:03:07.817609409 +0000 UTC m=+0.235536181 container remove 31cbd544ebf9bc5a57704f2f479d58a5f4c4c18ea91fe335e27fbf9d55cb5317 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 14 06:03:07 np0005486808 systemd[1]: libpod-conmon-31cbd544ebf9bc5a57704f2f479d58a5f4c4c18ea91fe335e27fbf9d55cb5317.scope: Deactivated successfully.
Oct 14 06:03:08 np0005486808 podman[452404]: 2025-10-14 10:03:08.049384186 +0000 UTC m=+0.080559198 container create 1c4c5aba426fffe5c2d8b08aea77b6fc5f5f55afb201d037f862cefdd010b263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_morse, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 14 06:03:08 np0005486808 systemd[1]: Started libpod-conmon-1c4c5aba426fffe5c2d8b08aea77b6fc5f5f55afb201d037f862cefdd010b263.scope.
Oct 14 06:03:08 np0005486808 podman[452404]: 2025-10-14 10:03:08.016255393 +0000 UTC m=+0.047430455 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 06:03:08 np0005486808 systemd[1]: Started libcrun container.
Oct 14 06:03:08 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2ef8e548b6167b0d4ba59a98109212483010dabbb1f3b67da98cf3f1cc630f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 06:03:08 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2ef8e548b6167b0d4ba59a98109212483010dabbb1f3b67da98cf3f1cc630f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 06:03:08 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2ef8e548b6167b0d4ba59a98109212483010dabbb1f3b67da98cf3f1cc630f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 06:03:08 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2ef8e548b6167b0d4ba59a98109212483010dabbb1f3b67da98cf3f1cc630f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 06:03:08 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2ef8e548b6167b0d4ba59a98109212483010dabbb1f3b67da98cf3f1cc630f5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 06:03:08 np0005486808 podman[452404]: 2025-10-14 10:03:08.165855964 +0000 UTC m=+0.197031036 container init 1c4c5aba426fffe5c2d8b08aea77b6fc5f5f55afb201d037f862cefdd010b263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 14 06:03:08 np0005486808 podman[452404]: 2025-10-14 10:03:08.179130439 +0000 UTC m=+0.210305461 container start 1c4c5aba426fffe5c2d8b08aea77b6fc5f5f55afb201d037f862cefdd010b263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 06:03:08 np0005486808 podman[452404]: 2025-10-14 10:03:08.183724262 +0000 UTC m=+0.214899304 container attach 1c4c5aba426fffe5c2d8b08aea77b6fc5f5f55afb201d037f862cefdd010b263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_morse, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 14 06:03:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:03:08 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #162. Immutable memtables: 0.
Oct 14 06:03:08 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:03:08.505451) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 06:03:08 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 162
Oct 14 06:03:08 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436188505535, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 1206, "num_deletes": 252, "total_data_size": 1837439, "memory_usage": 1865784, "flush_reason": "Manual Compaction"}
Oct 14 06:03:08 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #163: started
Oct 14 06:03:08 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436188516272, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 163, "file_size": 1071301, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 68178, "largest_seqno": 69383, "table_properties": {"data_size": 1066920, "index_size": 1904, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11598, "raw_average_key_size": 20, "raw_value_size": 1057341, "raw_average_value_size": 1888, "num_data_blocks": 87, "num_entries": 560, "num_filter_entries": 560, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436063, "oldest_key_time": 1760436063, "file_creation_time": 1760436188, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 163, "seqno_to_time_mapping": "N/A"}}
Oct 14 06:03:08 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 10898 microseconds, and 7269 cpu microseconds.
Oct 14 06:03:08 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 06:03:08 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:03:08.516350) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #163: 1071301 bytes OK
Oct 14 06:03:08 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:03:08.516383) [db/memtable_list.cc:519] [default] Level-0 commit table #163 started
Oct 14 06:03:08 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:03:08.518370) [db/memtable_list.cc:722] [default] Level-0 commit table #163: memtable #1 done
Oct 14 06:03:08 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:03:08.518393) EVENT_LOG_v1 {"time_micros": 1760436188518385, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 06:03:08 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:03:08.518417) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 06:03:08 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 1831937, prev total WAL file size 1831937, number of live WAL files 2.
Oct 14 06:03:08 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000159.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 06:03:08 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:03:08.519645) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373530' seq:72057594037927935, type:22 .. '6D6772737461740033303033' seq:0, type:0; will stop at (end)
Oct 14 06:03:08 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 06:03:08 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [163(1046KB)], [161(10MB)]
Oct 14 06:03:08 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436188519743, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [163], "files_L6": [161], "score": -1, "input_data_size": 12269929, "oldest_snapshot_seqno": -1}
Oct 14 06:03:08 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #164: 8769 keys, 9688450 bytes, temperature: kUnknown
Oct 14 06:03:08 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436188590257, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 164, "file_size": 9688450, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9634304, "index_size": 31121, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21957, "raw_key_size": 229659, "raw_average_key_size": 26, "raw_value_size": 9482077, "raw_average_value_size": 1081, "num_data_blocks": 1204, "num_entries": 8769, "num_filter_entries": 8769, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760436188, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Oct 14 06:03:08 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 06:03:08 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:03:08.590699) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 9688450 bytes
Oct 14 06:03:08 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:03:08.592381) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 173.5 rd, 137.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 10.7 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(20.5) write-amplify(9.0) OK, records in: 9228, records dropped: 459 output_compression: NoCompression
Oct 14 06:03:08 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:03:08.592410) EVENT_LOG_v1 {"time_micros": 1760436188592396, "job": 100, "event": "compaction_finished", "compaction_time_micros": 70729, "compaction_time_cpu_micros": 52769, "output_level": 6, "num_output_files": 1, "total_output_size": 9688450, "num_input_records": 9228, "num_output_records": 8769, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 06:03:08 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000163.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 06:03:08 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436188592872, "job": 100, "event": "table_file_deletion", "file_number": 163}
Oct 14 06:03:08 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 06:03:08 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436188596392, "job": 100, "event": "table_file_deletion", "file_number": 161}
Oct 14 06:03:08 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:03:08.519454) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 06:03:08 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:03:08.596471) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 06:03:08 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:03:08.596481) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 06:03:08 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:03:08.596485) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 06:03:08 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:03:08.596488) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 06:03:08 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:03:08.596491) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 06:03:09 np0005486808 nova_compute[259627]: 2025-10-14 10:03:09.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:03:09 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3316: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:03:09 np0005486808 bold_morse[452421]: --> passed data devices: 0 physical, 3 LVM
Oct 14 06:03:09 np0005486808 bold_morse[452421]: --> relative data size: 1.0
Oct 14 06:03:09 np0005486808 bold_morse[452421]: --> All data devices are unavailable
Oct 14 06:03:09 np0005486808 systemd[1]: libpod-1c4c5aba426fffe5c2d8b08aea77b6fc5f5f55afb201d037f862cefdd010b263.scope: Deactivated successfully.
Oct 14 06:03:09 np0005486808 systemd[1]: libpod-1c4c5aba426fffe5c2d8b08aea77b6fc5f5f55afb201d037f862cefdd010b263.scope: Consumed 1.222s CPU time.
Oct 14 06:03:09 np0005486808 podman[452404]: 2025-10-14 10:03:09.457650481 +0000 UTC m=+1.488825503 container died 1c4c5aba426fffe5c2d8b08aea77b6fc5f5f55afb201d037f862cefdd010b263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_morse, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 06:03:09 np0005486808 systemd[1]: var-lib-containers-storage-overlay-c2ef8e548b6167b0d4ba59a98109212483010dabbb1f3b67da98cf3f1cc630f5-merged.mount: Deactivated successfully.
Oct 14 06:03:09 np0005486808 podman[452404]: 2025-10-14 10:03:09.548583542 +0000 UTC m=+1.579758534 container remove 1c4c5aba426fffe5c2d8b08aea77b6fc5f5f55afb201d037f862cefdd010b263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 06:03:09 np0005486808 systemd[1]: libpod-conmon-1c4c5aba426fffe5c2d8b08aea77b6fc5f5f55afb201d037f862cefdd010b263.scope: Deactivated successfully.
Oct 14 06:03:10 np0005486808 podman[452604]: 2025-10-14 10:03:10.417549824 +0000 UTC m=+0.067321493 container create 0ee00363c717f828b0c7949d7c0b3338ded4727b2f2942b8b8d85d3c8569e80c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_maxwell, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 06:03:10 np0005486808 systemd[1]: Started libpod-conmon-0ee00363c717f828b0c7949d7c0b3338ded4727b2f2942b8b8d85d3c8569e80c.scope.
Oct 14 06:03:10 np0005486808 podman[452604]: 2025-10-14 10:03:10.392491729 +0000 UTC m=+0.042263438 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 06:03:10 np0005486808 systemd[1]: Started libcrun container.
Oct 14 06:03:10 np0005486808 podman[452604]: 2025-10-14 10:03:10.51354925 +0000 UTC m=+0.163320999 container init 0ee00363c717f828b0c7949d7c0b3338ded4727b2f2942b8b8d85d3c8569e80c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_maxwell, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 14 06:03:10 np0005486808 podman[452604]: 2025-10-14 10:03:10.524831357 +0000 UTC m=+0.174603036 container start 0ee00363c717f828b0c7949d7c0b3338ded4727b2f2942b8b8d85d3c8569e80c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_maxwell, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 14 06:03:10 np0005486808 podman[452604]: 2025-10-14 10:03:10.530437844 +0000 UTC m=+0.180209593 container attach 0ee00363c717f828b0c7949d7c0b3338ded4727b2f2942b8b8d85d3c8569e80c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_maxwell, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 06:03:10 np0005486808 agitated_maxwell[452621]: 167 167
Oct 14 06:03:10 np0005486808 systemd[1]: libpod-0ee00363c717f828b0c7949d7c0b3338ded4727b2f2942b8b8d85d3c8569e80c.scope: Deactivated successfully.
Oct 14 06:03:10 np0005486808 podman[452604]: 2025-10-14 10:03:10.535753575 +0000 UTC m=+0.185525254 container died 0ee00363c717f828b0c7949d7c0b3338ded4727b2f2942b8b8d85d3c8569e80c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_maxwell, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 14 06:03:10 np0005486808 systemd[1]: var-lib-containers-storage-overlay-65996502d6f46db087260891a1446ae62c0fbcab4fea22ca0344f0c1eda3d314-merged.mount: Deactivated successfully.
Oct 14 06:03:10 np0005486808 podman[452604]: 2025-10-14 10:03:10.597630823 +0000 UTC m=+0.247402522 container remove 0ee00363c717f828b0c7949d7c0b3338ded4727b2f2942b8b8d85d3c8569e80c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 14 06:03:10 np0005486808 systemd[1]: libpod-conmon-0ee00363c717f828b0c7949d7c0b3338ded4727b2f2942b8b8d85d3c8569e80c.scope: Deactivated successfully.
Oct 14 06:03:10 np0005486808 podman[452644]: 2025-10-14 10:03:10.812922685 +0000 UTC m=+0.055741398 container create 44980fc3bcccbf66fa9b5512e0fbb14674c93f53154a0c047868d91ccd508722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_antonelli, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 06:03:10 np0005486808 systemd[1]: Started libpod-conmon-44980fc3bcccbf66fa9b5512e0fbb14674c93f53154a0c047868d91ccd508722.scope.
Oct 14 06:03:10 np0005486808 podman[452644]: 2025-10-14 10:03:10.787562273 +0000 UTC m=+0.030380996 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 06:03:10 np0005486808 systemd[1]: Started libcrun container.
Oct 14 06:03:10 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94b6a3492da3e6b4acf0f96493c6cccdf35cdecdf092c38f0c80470434eabc98/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 06:03:10 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94b6a3492da3e6b4acf0f96493c6cccdf35cdecdf092c38f0c80470434eabc98/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 06:03:10 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94b6a3492da3e6b4acf0f96493c6cccdf35cdecdf092c38f0c80470434eabc98/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 06:03:10 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94b6a3492da3e6b4acf0f96493c6cccdf35cdecdf092c38f0c80470434eabc98/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 06:03:10 np0005486808 podman[452644]: 2025-10-14 10:03:10.932416067 +0000 UTC m=+0.175234790 container init 44980fc3bcccbf66fa9b5512e0fbb14674c93f53154a0c047868d91ccd508722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 06:03:10 np0005486808 podman[452644]: 2025-10-14 10:03:10.942918255 +0000 UTC m=+0.185736958 container start 44980fc3bcccbf66fa9b5512e0fbb14674c93f53154a0c047868d91ccd508722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_antonelli, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 06:03:10 np0005486808 podman[452644]: 2025-10-14 10:03:10.947094297 +0000 UTC m=+0.189912990 container attach 44980fc3bcccbf66fa9b5512e0fbb14674c93f53154a0c047868d91ccd508722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 14 06:03:10 np0005486808 nova_compute[259627]: 2025-10-14 10:03:10.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:03:11 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3317: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:03:11 np0005486808 nova_compute[259627]: 2025-10-14 10:03:11.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]: {
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:    "0": [
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:        {
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:            "devices": [
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:                "/dev/loop3"
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:            ],
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:            "lv_name": "ceph_lv0",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:            "lv_size": "21470642176",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:            "name": "ceph_lv0",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:            "tags": {
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:                "ceph.cephx_lockbox_secret": "",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:                "ceph.cluster_name": "ceph",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:                "ceph.crush_device_class": "",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:                "ceph.encrypted": "0",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:                "ceph.osd_id": "0",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:                "ceph.type": "block",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:                "ceph.vdo": "0"
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:            },
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:            "type": "block",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:            "vg_name": "ceph_vg0"
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:        }
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:    ],
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:    "1": [
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:        {
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:            "devices": [
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:                "/dev/loop4"
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:            ],
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:            "lv_name": "ceph_lv1",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:            "lv_size": "21470642176",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:            "name": "ceph_lv1",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:            "tags": {
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:                "ceph.cephx_lockbox_secret": "",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:                "ceph.cluster_name": "ceph",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:                "ceph.crush_device_class": "",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:                "ceph.encrypted": "0",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:                "ceph.osd_id": "1",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:                "ceph.type": "block",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:                "ceph.vdo": "0"
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:            },
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:            "type": "block",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:            "vg_name": "ceph_vg1"
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:        }
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:    ],
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:    "2": [
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:        {
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:            "devices": [
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:                "/dev/loop5"
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:            ],
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:            "lv_name": "ceph_lv2",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:            "lv_size": "21470642176",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:            "name": "ceph_lv2",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:            "tags": {
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:                "ceph.cephx_lockbox_secret": "",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:                "ceph.cluster_name": "ceph",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:                "ceph.crush_device_class": "",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:                "ceph.encrypted": "0",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:                "ceph.osd_id": "2",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:                "ceph.type": "block",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:                "ceph.vdo": "0"
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:            },
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:            "type": "block",
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:            "vg_name": "ceph_vg2"
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:        }
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]:    ]
Oct 14 06:03:11 np0005486808 vigilant_antonelli[452660]: }
Oct 14 06:03:11 np0005486808 systemd[1]: libpod-44980fc3bcccbf66fa9b5512e0fbb14674c93f53154a0c047868d91ccd508722.scope: Deactivated successfully.
Oct 14 06:03:11 np0005486808 podman[452644]: 2025-10-14 10:03:11.773433462 +0000 UTC m=+1.016252155 container died 44980fc3bcccbf66fa9b5512e0fbb14674c93f53154a0c047868d91ccd508722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 06:03:11 np0005486808 systemd[1]: var-lib-containers-storage-overlay-94b6a3492da3e6b4acf0f96493c6cccdf35cdecdf092c38f0c80470434eabc98-merged.mount: Deactivated successfully.
Oct 14 06:03:11 np0005486808 podman[452644]: 2025-10-14 10:03:11.861083513 +0000 UTC m=+1.103902226 container remove 44980fc3bcccbf66fa9b5512e0fbb14674c93f53154a0c047868d91ccd508722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_antonelli, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 14 06:03:11 np0005486808 systemd[1]: libpod-conmon-44980fc3bcccbf66fa9b5512e0fbb14674c93f53154a0c047868d91ccd508722.scope: Deactivated successfully.
Oct 14 06:03:12 np0005486808 podman[452823]: 2025-10-14 10:03:12.759478617 +0000 UTC m=+0.053341310 container create f707afa867d08e486ccaa9f17e13cf45b643516e48c328566d99e8c2efbe06f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lederberg, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 06:03:12 np0005486808 systemd[1]: Started libpod-conmon-f707afa867d08e486ccaa9f17e13cf45b643516e48c328566d99e8c2efbe06f4.scope.
Oct 14 06:03:12 np0005486808 podman[452823]: 2025-10-14 10:03:12.734428192 +0000 UTC m=+0.028290895 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 06:03:12 np0005486808 systemd[1]: Started libcrun container.
Oct 14 06:03:12 np0005486808 podman[452823]: 2025-10-14 10:03:12.85781044 +0000 UTC m=+0.151673143 container init f707afa867d08e486ccaa9f17e13cf45b643516e48c328566d99e8c2efbe06f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 06:03:12 np0005486808 podman[452823]: 2025-10-14 10:03:12.870513341 +0000 UTC m=+0.164376014 container start f707afa867d08e486ccaa9f17e13cf45b643516e48c328566d99e8c2efbe06f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 06:03:12 np0005486808 podman[452823]: 2025-10-14 10:03:12.874325005 +0000 UTC m=+0.168187718 container attach f707afa867d08e486ccaa9f17e13cf45b643516e48c328566d99e8c2efbe06f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 06:03:12 np0005486808 fervent_lederberg[452840]: 167 167
Oct 14 06:03:12 np0005486808 systemd[1]: libpod-f707afa867d08e486ccaa9f17e13cf45b643516e48c328566d99e8c2efbe06f4.scope: Deactivated successfully.
Oct 14 06:03:12 np0005486808 conmon[452840]: conmon f707afa867d08e486cca <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f707afa867d08e486ccaa9f17e13cf45b643516e48c328566d99e8c2efbe06f4.scope/container/memory.events
Oct 14 06:03:12 np0005486808 podman[452823]: 2025-10-14 10:03:12.882001403 +0000 UTC m=+0.175864076 container died f707afa867d08e486ccaa9f17e13cf45b643516e48c328566d99e8c2efbe06f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lederberg, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 06:03:12 np0005486808 systemd[1]: var-lib-containers-storage-overlay-f807311e660bc10eab05573163bb4710f7a7bb301178dd44c3dedada4c69bcf2-merged.mount: Deactivated successfully.
Oct 14 06:03:12 np0005486808 podman[452823]: 2025-10-14 10:03:12.939607417 +0000 UTC m=+0.233470090 container remove f707afa867d08e486ccaa9f17e13cf45b643516e48c328566d99e8c2efbe06f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lederberg, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 06:03:12 np0005486808 systemd[1]: libpod-conmon-f707afa867d08e486ccaa9f17e13cf45b643516e48c328566d99e8c2efbe06f4.scope: Deactivated successfully.
Oct 14 06:03:13 np0005486808 podman[452865]: 2025-10-14 10:03:13.151404224 +0000 UTC m=+0.053642187 container create 1b73a7d42a998cf278b14dd54f7d675daef11d72976ec5637648a29b33147077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_nobel, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 14 06:03:13 np0005486808 systemd[1]: Started libpod-conmon-1b73a7d42a998cf278b14dd54f7d675daef11d72976ec5637648a29b33147077.scope.
Oct 14 06:03:13 np0005486808 podman[452865]: 2025-10-14 10:03:13.121308995 +0000 UTC m=+0.023547008 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 06:03:13 np0005486808 systemd[1]: Started libcrun container.
Oct 14 06:03:13 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/322b800a13fe791ec9a3fdc9aeb4a2719a79421d4830559ef40fa9b74c7cc902/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 06:03:13 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/322b800a13fe791ec9a3fdc9aeb4a2719a79421d4830559ef40fa9b74c7cc902/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 06:03:13 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/322b800a13fe791ec9a3fdc9aeb4a2719a79421d4830559ef40fa9b74c7cc902/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 06:03:13 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/322b800a13fe791ec9a3fdc9aeb4a2719a79421d4830559ef40fa9b74c7cc902/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 06:03:13 np0005486808 podman[452865]: 2025-10-14 10:03:13.245578674 +0000 UTC m=+0.147816657 container init 1b73a7d42a998cf278b14dd54f7d675daef11d72976ec5637648a29b33147077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_nobel, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 06:03:13 np0005486808 podman[452865]: 2025-10-14 10:03:13.253160141 +0000 UTC m=+0.155398084 container start 1b73a7d42a998cf278b14dd54f7d675daef11d72976ec5637648a29b33147077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 14 06:03:13 np0005486808 podman[452865]: 2025-10-14 10:03:13.257333243 +0000 UTC m=+0.159571226 container attach 1b73a7d42a998cf278b14dd54f7d675daef11d72976ec5637648a29b33147077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 06:03:13 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3318: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:03:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:03:13 np0005486808 nova_compute[259627]: 2025-10-14 10:03:13.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:03:14 np0005486808 nova_compute[259627]: 2025-10-14 10:03:14.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:03:14 np0005486808 intelligent_nobel[452883]: {
Oct 14 06:03:14 np0005486808 intelligent_nobel[452883]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 06:03:14 np0005486808 intelligent_nobel[452883]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 06:03:14 np0005486808 intelligent_nobel[452883]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 06:03:14 np0005486808 intelligent_nobel[452883]:        "osd_id": 2,
Oct 14 06:03:14 np0005486808 intelligent_nobel[452883]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 06:03:14 np0005486808 intelligent_nobel[452883]:        "type": "bluestore"
Oct 14 06:03:14 np0005486808 intelligent_nobel[452883]:    },
Oct 14 06:03:14 np0005486808 intelligent_nobel[452883]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 06:03:14 np0005486808 intelligent_nobel[452883]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 06:03:14 np0005486808 intelligent_nobel[452883]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 06:03:14 np0005486808 intelligent_nobel[452883]:        "osd_id": 1,
Oct 14 06:03:14 np0005486808 intelligent_nobel[452883]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 06:03:14 np0005486808 intelligent_nobel[452883]:        "type": "bluestore"
Oct 14 06:03:14 np0005486808 intelligent_nobel[452883]:    },
Oct 14 06:03:14 np0005486808 intelligent_nobel[452883]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 06:03:14 np0005486808 intelligent_nobel[452883]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 06:03:14 np0005486808 intelligent_nobel[452883]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 06:03:14 np0005486808 intelligent_nobel[452883]:        "osd_id": 0,
Oct 14 06:03:14 np0005486808 intelligent_nobel[452883]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 06:03:14 np0005486808 intelligent_nobel[452883]:        "type": "bluestore"
Oct 14 06:03:14 np0005486808 intelligent_nobel[452883]:    }
Oct 14 06:03:14 np0005486808 intelligent_nobel[452883]: }
Oct 14 06:03:14 np0005486808 systemd[1]: libpod-1b73a7d42a998cf278b14dd54f7d675daef11d72976ec5637648a29b33147077.scope: Deactivated successfully.
Oct 14 06:03:14 np0005486808 systemd[1]: libpod-1b73a7d42a998cf278b14dd54f7d675daef11d72976ec5637648a29b33147077.scope: Consumed 1.066s CPU time.
Oct 14 06:03:14 np0005486808 conmon[452883]: conmon 1b73a7d42a998cf278b1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1b73a7d42a998cf278b14dd54f7d675daef11d72976ec5637648a29b33147077.scope/container/memory.events
Oct 14 06:03:14 np0005486808 podman[452865]: 2025-10-14 10:03:14.31885592 +0000 UTC m=+1.221093903 container died 1b73a7d42a998cf278b14dd54f7d675daef11d72976ec5637648a29b33147077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 14 06:03:14 np0005486808 systemd[1]: var-lib-containers-storage-overlay-322b800a13fe791ec9a3fdc9aeb4a2719a79421d4830559ef40fa9b74c7cc902-merged.mount: Deactivated successfully.
Oct 14 06:03:14 np0005486808 podman[452865]: 2025-10-14 10:03:14.40361467 +0000 UTC m=+1.305852613 container remove 1b73a7d42a998cf278b14dd54f7d675daef11d72976ec5637648a29b33147077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 14 06:03:14 np0005486808 systemd[1]: libpod-conmon-1b73a7d42a998cf278b14dd54f7d675daef11d72976ec5637648a29b33147077.scope: Deactivated successfully.
Oct 14 06:03:14 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 06:03:14 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 06:03:14 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 06:03:14 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 06:03:14 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 6e613fdc-ee09-4708-8d69-0cc86e6d2928 does not exist
Oct 14 06:03:14 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 0f7f61a2-4491-43d6-8205-feb5a5f3d197 does not exist
Oct 14 06:03:15 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3319: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:03:15 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 06:03:15 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 06:03:15 np0005486808 nova_compute[259627]: 2025-10-14 10:03:15.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:03:16 np0005486808 nova_compute[259627]: 2025-10-14 10:03:16.010 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 06:03:16 np0005486808 nova_compute[259627]: 2025-10-14 10:03:16.011 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 06:03:16 np0005486808 nova_compute[259627]: 2025-10-14 10:03:16.011 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 06:03:16 np0005486808 nova_compute[259627]: 2025-10-14 10:03:16.011 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 06:03:16 np0005486808 nova_compute[259627]: 2025-10-14 10:03:16.012 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 06:03:16 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 06:03:16 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/403887356' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 06:03:16 np0005486808 nova_compute[259627]: 2025-10-14 10:03:16.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:03:16 np0005486808 nova_compute[259627]: 2025-10-14 10:03:16.508 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 06:03:16 np0005486808 nova_compute[259627]: 2025-10-14 10:03:16.715 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 06:03:16 np0005486808 nova_compute[259627]: 2025-10-14 10:03:16.716 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3467MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 06:03:16 np0005486808 nova_compute[259627]: 2025-10-14 10:03:16.716 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 06:03:16 np0005486808 nova_compute[259627]: 2025-10-14 10:03:16.717 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 06:03:16 np0005486808 nova_compute[259627]: 2025-10-14 10:03:16.862 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 06:03:16 np0005486808 nova_compute[259627]: 2025-10-14 10:03:16.863 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 06:03:16 np0005486808 nova_compute[259627]: 2025-10-14 10:03:16.900 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 06:03:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 06:03:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3237319424' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 06:03:17 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3320: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:03:17 np0005486808 nova_compute[259627]: 2025-10-14 10:03:17.347 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 06:03:17 np0005486808 nova_compute[259627]: 2025-10-14 10:03:17.352 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 06:03:17 np0005486808 nova_compute[259627]: 2025-10-14 10:03:17.372 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 06:03:17 np0005486808 nova_compute[259627]: 2025-10-14 10:03:17.373 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 06:03:17 np0005486808 nova_compute[259627]: 2025-10-14 10:03:17.374 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 06:03:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:03:19 np0005486808 nova_compute[259627]: 2025-10-14 10:03:19.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:03:19 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3321: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:03:19 np0005486808 podman[453025]: 2025-10-14 10:03:19.639741708 +0000 UTC m=+0.056407165 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 14 06:03:19 np0005486808 podman[453024]: 2025-10-14 10:03:19.665941081 +0000 UTC m=+0.082608408 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 14 06:03:21 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3322: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:03:21 np0005486808 nova_compute[259627]: 2025-10-14 10:03:21.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:03:23 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3323: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:03:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:03:24 np0005486808 nova_compute[259627]: 2025-10-14 10:03:24.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:03:24 np0005486808 nova_compute[259627]: 2025-10-14 10:03:24.369 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:03:24 np0005486808 nova_compute[259627]: 2025-10-14 10:03:24.370 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:03:24 np0005486808 nova_compute[259627]: 2025-10-14 10:03:24.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:03:25 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3324: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:03:25 np0005486808 nova_compute[259627]: 2025-10-14 10:03:25.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:03:25 np0005486808 nova_compute[259627]: 2025-10-14 10:03:25.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 06:03:25 np0005486808 nova_compute[259627]: 2025-10-14 10:03:25.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 06:03:26 np0005486808 nova_compute[259627]: 2025-10-14 10:03:25.999 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 14 06:03:26 np0005486808 nova_compute[259627]: 2025-10-14 10:03:25.999 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:03:26 np0005486808 nova_compute[259627]: 2025-10-14 10:03:26.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:03:26 np0005486808 nova_compute[259627]: 2025-10-14 10:03:26.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:03:26 np0005486808 nova_compute[259627]: 2025-10-14 10:03:26.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 06:03:27 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3325: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:03:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:03:29 np0005486808 nova_compute[259627]: 2025-10-14 10:03:29.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:03:29 np0005486808 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 06:03:29 np0005486808 ceph-osd[87348]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6000.1 total, 600.0 interval
Cumulative writes: 46K writes, 185K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s
Cumulative WAL: 46K writes, 17K syncs, 2.72 writes per sync, written: 0.19 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 420 writes, 907 keys, 420 commit groups, 1.0 writes per commit group, ingest: 0.42 MB, 0.00 MB/s
Interval WAL: 420 writes, 198 syncs, 2.12 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55e30bc271f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55e30bc271f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtabl
Oct 14 06:03:29 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3326: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:03:31 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3327: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:03:31 np0005486808 nova_compute[259627]: 2025-10-14 10:03:31.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:03:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:03:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:03:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:03:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:03:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:03:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:03:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_10:03:32
Oct 14 06:03:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 06:03:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 06:03:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['volumes', '.rgw.root', 'default.rgw.log', 'backups', 'vms', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta']
Oct 14 06:03:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 06:03:33 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3328: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:03:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:03:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 06:03:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 06:03:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 06:03:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 06:03:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 06:03:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 06:03:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 06:03:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 06:03:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 06:03:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 06:03:34 np0005486808 nova_compute[259627]: 2025-10-14 10:03:34.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:03:34 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 06:03:34 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6000.1 total, 600.0 interval
Cumulative writes: 46K writes, 179K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
Cumulative WAL: 46K writes, 17K syncs, 2.69 writes per sync, written: 0.17 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 339 writes, 970 keys, 339 commit groups, 1.0 writes per commit group, ingest: 0.38 MB, 0.00 MB/s
Interval WAL: 339 writes, 156 syncs, 2.17 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5597c56af1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5597c56af1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtabl
Oct 14 06:03:35 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3329: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:03:36 np0005486808 nova_compute[259627]: 2025-10-14 10:03:36.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:03:36 np0005486808 podman[453070]: 2025-10-14 10:03:36.658542008 +0000 UTC m=+0.069248611 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid)
Oct 14 06:03:36 np0005486808 podman[453069]: 2025-10-14 10:03:36.690368209 +0000 UTC m=+0.100983989 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 14 06:03:37 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3330: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:03:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:03:39 np0005486808 nova_compute[259627]: 2025-10-14 10:03:39.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:03:39 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3331: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:03:39 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 06:03:39 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6000.1 total, 600.0 interval
Cumulative writes: 39K writes, 151K keys, 39K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s
Cumulative WAL: 39K writes, 14K syncs, 2.70 writes per sync, written: 0.15 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 421 writes, 898 keys, 421 commit groups, 1.0 writes per commit group, ingest: 0.39 MB, 0.00 MB/s
Interval WAL: 421 writes, 192 syncs, 2.19 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55b3ca4651f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55b3ca4651f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtabl
Oct 14 06:03:40 np0005486808 ceph-mgr[74543]: [devicehealth INFO root] Check health
Oct 14 06:03:41 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3332: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:03:41 np0005486808 nova_compute[259627]: 2025-10-14 10:03:41.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:03:43 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3333: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:03:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:03:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 06:03:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 06:03:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 06:03:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 06:03:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 06:03:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 06:03:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 06:03:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 06:03:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 06:03:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 06:03:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 14 06:03:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 06:03:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 06:03:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 06:03:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 06:03:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 06:03:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 06:03:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 06:03:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 06:03:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 06:03:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 06:03:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 06:03:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 06:03:44 np0005486808 nova_compute[259627]: 2025-10-14 10:03:44.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:03:45 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3334: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:03:46 np0005486808 nova_compute[259627]: 2025-10-14 10:03:46.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:03:47 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3335: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:03:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:03:49 np0005486808 nova_compute[259627]: 2025-10-14 10:03:49.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:03:49 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3336: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:03:50 np0005486808 podman[453110]: 2025-10-14 10:03:50.698687923 +0000 UTC m=+0.090272127 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Oct 14 06:03:50 np0005486808 podman[453109]: 2025-10-14 10:03:50.785240576 +0000 UTC m=+0.184522028 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Oct 14 06:03:51 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3337: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:03:51 np0005486808 nova_compute[259627]: 2025-10-14 10:03:51.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:03:53 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3338: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:03:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:03:54 np0005486808 nova_compute[259627]: 2025-10-14 10:03:54.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:03:55 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3339: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:03:56 np0005486808 nova_compute[259627]: 2025-10-14 10:03:56.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:03:57 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3340: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:03:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:03:59 np0005486808 nova_compute[259627]: 2025-10-14 10:03:59.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:03:59 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3341: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:04:01 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3342: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:04:01 np0005486808 nova_compute[259627]: 2025-10-14 10:04:01.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:04:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:04:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:04:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:04:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:04:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:04:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:04:03 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3343: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:04:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:04:04 np0005486808 nova_compute[259627]: 2025-10-14 10:04:04.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:04:05 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3344: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:04:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 06:04:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1892538747' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 06:04:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 06:04:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1892538747' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 06:04:06 np0005486808 nova_compute[259627]: 2025-10-14 10:04:06.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:04:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 10:04:07.083 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 06:04:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 10:04:07.084 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 06:04:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 10:04:07.084 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 06:04:07 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3345: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:04:07 np0005486808 podman[453154]: 2025-10-14 10:04:07.704664618 +0000 UTC m=+0.106743820 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 14 06:04:07 np0005486808 podman[453155]: 2025-10-14 10:04:07.715895604 +0000 UTC m=+0.117588566 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 14 06:04:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:04:09 np0005486808 nova_compute[259627]: 2025-10-14 10:04:09.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:04:09 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3346: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:04:11 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3347: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:04:11 np0005486808 nova_compute[259627]: 2025-10-14 10:04:11.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:04:11 np0005486808 nova_compute[259627]: 2025-10-14 10:04:11.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 06:04:13 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3348: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:04:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:04:13 np0005486808 nova_compute[259627]: 2025-10-14 10:04:13.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 06:04:14 np0005486808 nova_compute[259627]: 2025-10-14 10:04:14.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:04:14 np0005486808 nova_compute[259627]: 2025-10-14 10:04:14.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 06:04:14 np0005486808 nova_compute[259627]: 2025-10-14 10:04:14.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 14 06:04:15 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3349: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:04:15 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 06:04:15 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 06:04:15 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 06:04:15 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 06:04:15 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 06:04:15 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 06:04:15 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev aea33ff0-cb7b-4953-8761-7128a895bfdb does not exist
Oct 14 06:04:15 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 135b1aad-a505-4619-a6dc-042935bfd271 does not exist
Oct 14 06:04:15 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 40a3bd80-d172-473a-ac02-18c2a0d3d31e does not exist
Oct 14 06:04:15 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 06:04:15 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 06:04:15 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 06:04:15 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 06:04:15 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 06:04:15 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 06:04:16 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 06:04:16 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 06:04:16 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 06:04:16 np0005486808 nova_compute[259627]: 2025-10-14 10:04:16.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:04:16 np0005486808 podman[453467]: 2025-10-14 10:04:16.699491203 +0000 UTC m=+0.051826172 container create 7f65f1ac529fae2862a5470c9144a38e2c2b1b473f40fbca2730c179fecc9671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jones, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 14 06:04:16 np0005486808 systemd[1]: Started libpod-conmon-7f65f1ac529fae2862a5470c9144a38e2c2b1b473f40fbca2730c179fecc9671.scope.
Oct 14 06:04:16 np0005486808 podman[453467]: 2025-10-14 10:04:16.677698109 +0000 UTC m=+0.030033088 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 06:04:16 np0005486808 systemd[1]: Started libcrun container.
Oct 14 06:04:16 np0005486808 podman[453467]: 2025-10-14 10:04:16.821188219 +0000 UTC m=+0.173523238 container init 7f65f1ac529fae2862a5470c9144a38e2c2b1b473f40fbca2730c179fecc9671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 14 06:04:16 np0005486808 podman[453467]: 2025-10-14 10:04:16.834326392 +0000 UTC m=+0.186661361 container start 7f65f1ac529fae2862a5470c9144a38e2c2b1b473f40fbca2730c179fecc9671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 06:04:16 np0005486808 podman[453467]: 2025-10-14 10:04:16.840188076 +0000 UTC m=+0.192523095 container attach 7f65f1ac529fae2862a5470c9144a38e2c2b1b473f40fbca2730c179fecc9671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jones, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Oct 14 06:04:16 np0005486808 wizardly_jones[453483]: 167 167
Oct 14 06:04:16 np0005486808 systemd[1]: libpod-7f65f1ac529fae2862a5470c9144a38e2c2b1b473f40fbca2730c179fecc9671.scope: Deactivated successfully.
Oct 14 06:04:16 np0005486808 podman[453467]: 2025-10-14 10:04:16.844587104 +0000 UTC m=+0.196922073 container died 7f65f1ac529fae2862a5470c9144a38e2c2b1b473f40fbca2730c179fecc9671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jones, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 14 06:04:16 np0005486808 systemd[1]: var-lib-containers-storage-overlay-4b3af7d6a66a7ee95d14aff5124d8dc639a3937747dc9988e1a7f905da03d45d-merged.mount: Deactivated successfully.
Oct 14 06:04:16 np0005486808 podman[453467]: 2025-10-14 10:04:16.912530121 +0000 UTC m=+0.264865080 container remove 7f65f1ac529fae2862a5470c9144a38e2c2b1b473f40fbca2730c179fecc9671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 14 06:04:16 np0005486808 systemd[1]: libpod-conmon-7f65f1ac529fae2862a5470c9144a38e2c2b1b473f40fbca2730c179fecc9671.scope: Deactivated successfully.
Oct 14 06:04:16 np0005486808 nova_compute[259627]: 2025-10-14 10:04:16.994 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 14 06:04:17 np0005486808 nova_compute[259627]: 2025-10-14 10:04:17.041 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 14 06:04:17 np0005486808 nova_compute[259627]: 2025-10-14 10:04:17.042 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 14 06:04:17 np0005486808 nova_compute[259627]: 2025-10-14 10:04:17.043 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 14 06:04:17 np0005486808 nova_compute[259627]: 2025-10-14 10:04:17.043 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 14 06:04:17 np0005486808 nova_compute[259627]: 2025-10-14 10:04:17.044 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 14 06:04:17 np0005486808 podman[453509]: 2025-10-14 10:04:17.160919556 +0000 UTC m=+0.068713107 container create 71f7303f91fac7f775f778f6ad68547e3b33dd5223a4317d34a78076a64afd76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_antonelli, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 06:04:17 np0005486808 podman[453509]: 2025-10-14 10:04:17.138995518 +0000 UTC m=+0.046789119 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 06:04:17 np0005486808 systemd[1]: Started libpod-conmon-71f7303f91fac7f775f778f6ad68547e3b33dd5223a4317d34a78076a64afd76.scope.
Oct 14 06:04:17 np0005486808 systemd[1]: Started libcrun container.
Oct 14 06:04:17 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51b500ac3b89b32482e83afdb7fb167d0580e0099fb0e5aae3cb56ec44110312/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 06:04:17 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51b500ac3b89b32482e83afdb7fb167d0580e0099fb0e5aae3cb56ec44110312/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 06:04:17 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51b500ac3b89b32482e83afdb7fb167d0580e0099fb0e5aae3cb56ec44110312/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 06:04:17 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51b500ac3b89b32482e83afdb7fb167d0580e0099fb0e5aae3cb56ec44110312/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 06:04:17 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51b500ac3b89b32482e83afdb7fb167d0580e0099fb0e5aae3cb56ec44110312/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 06:04:17 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3350: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:04:17 np0005486808 podman[453509]: 2025-10-14 10:04:17.388958861 +0000 UTC m=+0.296752392 container init 71f7303f91fac7f775f778f6ad68547e3b33dd5223a4317d34a78076a64afd76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 14 06:04:17 np0005486808 podman[453509]: 2025-10-14 10:04:17.399344946 +0000 UTC m=+0.307138457 container start 71f7303f91fac7f775f778f6ad68547e3b33dd5223a4317d34a78076a64afd76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_antonelli, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 06:04:17 np0005486808 podman[453509]: 2025-10-14 10:04:17.402338769 +0000 UTC m=+0.310132280 container attach 71f7303f91fac7f775f778f6ad68547e3b33dd5223a4317d34a78076a64afd76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_antonelli, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 14 06:04:17 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 06:04:17 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/473578652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 06:04:17 np0005486808 nova_compute[259627]: 2025-10-14 10:04:17.528 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 06:04:17 np0005486808 nova_compute[259627]: 2025-10-14 10:04:17.717 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 06:04:17 np0005486808 nova_compute[259627]: 2025-10-14 10:04:17.718 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3482MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 06:04:17 np0005486808 nova_compute[259627]: 2025-10-14 10:04:17.719 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 06:04:17 np0005486808 nova_compute[259627]: 2025-10-14 10:04:17.719 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 06:04:17 np0005486808 nova_compute[259627]: 2025-10-14 10:04:17.820 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 06:04:17 np0005486808 nova_compute[259627]: 2025-10-14 10:04:17.820 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 06:04:17 np0005486808 nova_compute[259627]: 2025-10-14 10:04:17.901 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 06:04:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 06:04:18 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3328658697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 06:04:18 np0005486808 nova_compute[259627]: 2025-10-14 10:04:18.419 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 06:04:18 np0005486808 nova_compute[259627]: 2025-10-14 10:04:18.428 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 06:04:18 np0005486808 nova_compute[259627]: 2025-10-14 10:04:18.456 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 06:04:18 np0005486808 nova_compute[259627]: 2025-10-14 10:04:18.458 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 06:04:18 np0005486808 nova_compute[259627]: 2025-10-14 10:04:18.459 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 06:04:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:04:18 np0005486808 ecstatic_antonelli[453545]: --> passed data devices: 0 physical, 3 LVM
Oct 14 06:04:18 np0005486808 ecstatic_antonelli[453545]: --> relative data size: 1.0
Oct 14 06:04:18 np0005486808 ecstatic_antonelli[453545]: --> All data devices are unavailable
Oct 14 06:04:18 np0005486808 systemd[1]: libpod-71f7303f91fac7f775f778f6ad68547e3b33dd5223a4317d34a78076a64afd76.scope: Deactivated successfully.
Oct 14 06:04:18 np0005486808 systemd[1]: libpod-71f7303f91fac7f775f778f6ad68547e3b33dd5223a4317d34a78076a64afd76.scope: Consumed 1.151s CPU time.
Oct 14 06:04:18 np0005486808 podman[453598]: 2025-10-14 10:04:18.656718508 +0000 UTC m=+0.034100797 container died 71f7303f91fac7f775f778f6ad68547e3b33dd5223a4317d34a78076a64afd76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 06:04:18 np0005486808 systemd[1]: var-lib-containers-storage-overlay-51b500ac3b89b32482e83afdb7fb167d0580e0099fb0e5aae3cb56ec44110312-merged.mount: Deactivated successfully.
Oct 14 06:04:18 np0005486808 podman[453598]: 2025-10-14 10:04:18.72035986 +0000 UTC m=+0.097742149 container remove 71f7303f91fac7f775f778f6ad68547e3b33dd5223a4317d34a78076a64afd76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_antonelli, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 14 06:04:18 np0005486808 systemd[1]: libpod-conmon-71f7303f91fac7f775f778f6ad68547e3b33dd5223a4317d34a78076a64afd76.scope: Deactivated successfully.
Oct 14 06:04:19 np0005486808 nova_compute[259627]: 2025-10-14 10:04:19.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:04:19 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3351: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:04:19 np0005486808 podman[453757]: 2025-10-14 10:04:19.702623241 +0000 UTC m=+0.073932565 container create 4ce44223058d5a1d48b430bedd5659f90d9628942587144739bf34777cf6897b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_shockley, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 06:04:19 np0005486808 systemd[1]: Started libpod-conmon-4ce44223058d5a1d48b430bedd5659f90d9628942587144739bf34777cf6897b.scope.
Oct 14 06:04:19 np0005486808 podman[453757]: 2025-10-14 10:04:19.673277181 +0000 UTC m=+0.044586565 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 06:04:19 np0005486808 systemd[1]: Started libcrun container.
Oct 14 06:04:19 np0005486808 podman[453757]: 2025-10-14 10:04:19.808801206 +0000 UTC m=+0.180110610 container init 4ce44223058d5a1d48b430bedd5659f90d9628942587144739bf34777cf6897b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_shockley, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 06:04:19 np0005486808 podman[453757]: 2025-10-14 10:04:19.822208655 +0000 UTC m=+0.193517999 container start 4ce44223058d5a1d48b430bedd5659f90d9628942587144739bf34777cf6897b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 14 06:04:19 np0005486808 podman[453757]: 2025-10-14 10:04:19.827613428 +0000 UTC m=+0.198922822 container attach 4ce44223058d5a1d48b430bedd5659f90d9628942587144739bf34777cf6897b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_shockley, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 06:04:19 np0005486808 gifted_shockley[453773]: 167 167
Oct 14 06:04:19 np0005486808 systemd[1]: libpod-4ce44223058d5a1d48b430bedd5659f90d9628942587144739bf34777cf6897b.scope: Deactivated successfully.
Oct 14 06:04:19 np0005486808 podman[453757]: 2025-10-14 10:04:19.832952159 +0000 UTC m=+0.204261573 container died 4ce44223058d5a1d48b430bedd5659f90d9628942587144739bf34777cf6897b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_shockley, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 14 06:04:19 np0005486808 systemd[1]: var-lib-containers-storage-overlay-ac3d15f72b30f668d6ece58f2c982a163fe7ce15519410ba04894aaf76c7bdda-merged.mount: Deactivated successfully.
Oct 14 06:04:19 np0005486808 podman[453757]: 2025-10-14 10:04:19.890932972 +0000 UTC m=+0.262242316 container remove 4ce44223058d5a1d48b430bedd5659f90d9628942587144739bf34777cf6897b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_shockley, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 06:04:19 np0005486808 systemd[1]: libpod-conmon-4ce44223058d5a1d48b430bedd5659f90d9628942587144739bf34777cf6897b.scope: Deactivated successfully.
Oct 14 06:04:20 np0005486808 podman[453797]: 2025-10-14 10:04:20.15414906 +0000 UTC m=+0.080050045 container create 72b4cca9c47bfd338b6d29be17868db349f0c2d48182aef5a799354d75dc087d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mestorf, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 06:04:20 np0005486808 podman[453797]: 2025-10-14 10:04:20.119283715 +0000 UTC m=+0.045184760 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 06:04:20 np0005486808 systemd[1]: Started libpod-conmon-72b4cca9c47bfd338b6d29be17868db349f0c2d48182aef5a799354d75dc087d.scope.
Oct 14 06:04:20 np0005486808 systemd[1]: Started libcrun container.
Oct 14 06:04:20 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d729dff7b9464595975151b32e3a0be459594ab0c4d7f356b3b6d3bc11d17773/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 06:04:20 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d729dff7b9464595975151b32e3a0be459594ab0c4d7f356b3b6d3bc11d17773/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 06:04:20 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d729dff7b9464595975151b32e3a0be459594ab0c4d7f356b3b6d3bc11d17773/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 06:04:20 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d729dff7b9464595975151b32e3a0be459594ab0c4d7f356b3b6d3bc11d17773/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 06:04:20 np0005486808 podman[453797]: 2025-10-14 10:04:20.289786219 +0000 UTC m=+0.215687254 container init 72b4cca9c47bfd338b6d29be17868db349f0c2d48182aef5a799354d75dc087d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 14 06:04:20 np0005486808 podman[453797]: 2025-10-14 10:04:20.29797712 +0000 UTC m=+0.223878115 container start 72b4cca9c47bfd338b6d29be17868db349f0c2d48182aef5a799354d75dc087d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mestorf, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 14 06:04:20 np0005486808 podman[453797]: 2025-10-14 10:04:20.30289394 +0000 UTC m=+0.228794965 container attach 72b4cca9c47bfd338b6d29be17868db349f0c2d48182aef5a799354d75dc087d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]: {
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:    "0": [
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:        {
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:            "devices": [
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:                "/dev/loop3"
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:            ],
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:            "lv_name": "ceph_lv0",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:            "lv_size": "21470642176",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:            "name": "ceph_lv0",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:            "tags": {
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:                "ceph.cephx_lockbox_secret": "",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:                "ceph.cluster_name": "ceph",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:                "ceph.crush_device_class": "",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:                "ceph.encrypted": "0",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:                "ceph.osd_id": "0",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:                "ceph.type": "block",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:                "ceph.vdo": "0"
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:            },
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:            "type": "block",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:            "vg_name": "ceph_vg0"
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:        }
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:    ],
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:    "1": [
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:        {
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:            "devices": [
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:                "/dev/loop4"
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:            ],
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:            "lv_name": "ceph_lv1",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:            "lv_size": "21470642176",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:            "name": "ceph_lv1",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:            "tags": {
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:                "ceph.cephx_lockbox_secret": "",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:                "ceph.cluster_name": "ceph",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:                "ceph.crush_device_class": "",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:                "ceph.encrypted": "0",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:                "ceph.osd_id": "1",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:                "ceph.type": "block",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:                "ceph.vdo": "0"
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:            },
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:            "type": "block",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:            "vg_name": "ceph_vg1"
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:        }
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:    ],
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:    "2": [
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:        {
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:            "devices": [
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:                "/dev/loop5"
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:            ],
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:            "lv_name": "ceph_lv2",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:            "lv_size": "21470642176",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:            "name": "ceph_lv2",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:            "tags": {
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:                "ceph.cephx_lockbox_secret": "",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:                "ceph.cluster_name": "ceph",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:                "ceph.crush_device_class": "",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:                "ceph.encrypted": "0",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:                "ceph.osd_id": "2",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:                "ceph.type": "block",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:                "ceph.vdo": "0"
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:            },
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:            "type": "block",
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:            "vg_name": "ceph_vg2"
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:        }
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]:    ]
Oct 14 06:04:21 np0005486808 sweet_mestorf[453814]: }
Oct 14 06:04:21 np0005486808 systemd[1]: libpod-72b4cca9c47bfd338b6d29be17868db349f0c2d48182aef5a799354d75dc087d.scope: Deactivated successfully.
Oct 14 06:04:21 np0005486808 podman[453797]: 2025-10-14 10:04:21.094531735 +0000 UTC m=+1.020432760 container died 72b4cca9c47bfd338b6d29be17868db349f0c2d48182aef5a799354d75dc087d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mestorf, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Oct 14 06:04:21 np0005486808 systemd[1]: var-lib-containers-storage-overlay-d729dff7b9464595975151b32e3a0be459594ab0c4d7f356b3b6d3bc11d17773-merged.mount: Deactivated successfully.
Oct 14 06:04:21 np0005486808 podman[453797]: 2025-10-14 10:04:21.182097513 +0000 UTC m=+1.107998518 container remove 72b4cca9c47bfd338b6d29be17868db349f0c2d48182aef5a799354d75dc087d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 06:04:21 np0005486808 systemd[1]: libpod-conmon-72b4cca9c47bfd338b6d29be17868db349f0c2d48182aef5a799354d75dc087d.scope: Deactivated successfully.
Oct 14 06:04:21 np0005486808 podman[453832]: 2025-10-14 10:04:21.243423138 +0000 UTC m=+0.098141309 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 14 06:04:21 np0005486808 podman[453824]: 2025-10-14 10:04:21.274197323 +0000 UTC m=+0.129019866 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 14 06:04:21 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3352: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:04:21 np0005486808 nova_compute[259627]: 2025-10-14 10:04:21.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:04:22 np0005486808 podman[454021]: 2025-10-14 10:04:22.07284596 +0000 UTC m=+0.051346371 container create 421f1bdcec635be98eefd884acf9de5f9c62e77a5d4f8308c9d1a615d73197b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 14 06:04:22 np0005486808 systemd[1]: Started libpod-conmon-421f1bdcec635be98eefd884acf9de5f9c62e77a5d4f8308c9d1a615d73197b0.scope.
Oct 14 06:04:22 np0005486808 podman[454021]: 2025-10-14 10:04:22.050482851 +0000 UTC m=+0.028983272 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 06:04:22 np0005486808 systemd[1]: Started libcrun container.
Oct 14 06:04:22 np0005486808 podman[454021]: 2025-10-14 10:04:22.180958703 +0000 UTC m=+0.159459124 container init 421f1bdcec635be98eefd884acf9de5f9c62e77a5d4f8308c9d1a615d73197b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 14 06:04:22 np0005486808 podman[454021]: 2025-10-14 10:04:22.193420349 +0000 UTC m=+0.171920740 container start 421f1bdcec635be98eefd884acf9de5f9c62e77a5d4f8308c9d1a615d73197b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 06:04:22 np0005486808 podman[454021]: 2025-10-14 10:04:22.196932135 +0000 UTC m=+0.175432616 container attach 421f1bdcec635be98eefd884acf9de5f9c62e77a5d4f8308c9d1a615d73197b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chaplygin, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 14 06:04:22 np0005486808 beautiful_chaplygin[454037]: 167 167
Oct 14 06:04:22 np0005486808 podman[454021]: 2025-10-14 10:04:22.202975553 +0000 UTC m=+0.181475974 container died 421f1bdcec635be98eefd884acf9de5f9c62e77a5d4f8308c9d1a615d73197b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 14 06:04:22 np0005486808 systemd[1]: libpod-421f1bdcec635be98eefd884acf9de5f9c62e77a5d4f8308c9d1a615d73197b0.scope: Deactivated successfully.
Oct 14 06:04:22 np0005486808 systemd[1]: var-lib-containers-storage-overlay-832c41bdbe28087245f471933ba113b255b82dd6299565ea2aaf44b42a9de570-merged.mount: Deactivated successfully.
Oct 14 06:04:22 np0005486808 podman[454021]: 2025-10-14 10:04:22.260435483 +0000 UTC m=+0.238935904 container remove 421f1bdcec635be98eefd884acf9de5f9c62e77a5d4f8308c9d1a615d73197b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chaplygin, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 14 06:04:22 np0005486808 systemd[1]: libpod-conmon-421f1bdcec635be98eefd884acf9de5f9c62e77a5d4f8308c9d1a615d73197b0.scope: Deactivated successfully.
Oct 14 06:04:22 np0005486808 podman[454061]: 2025-10-14 10:04:22.521237642 +0000 UTC m=+0.075865342 container create 9def52d3f29e889ddf9b2e1a85f58ac9aff58e496b2a7db7d679c1dd8b1258f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_montalcini, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 06:04:22 np0005486808 systemd[1]: Started libpod-conmon-9def52d3f29e889ddf9b2e1a85f58ac9aff58e496b2a7db7d679c1dd8b1258f3.scope.
Oct 14 06:04:22 np0005486808 podman[454061]: 2025-10-14 10:04:22.49629618 +0000 UTC m=+0.050923900 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 06:04:22 np0005486808 systemd[1]: Started libcrun container.
Oct 14 06:04:22 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f764af867b9521c5548facea7bcc9a04473b5d4cd320c63e599cddf59ff25b0c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 06:04:22 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f764af867b9521c5548facea7bcc9a04473b5d4cd320c63e599cddf59ff25b0c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 06:04:22 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f764af867b9521c5548facea7bcc9a04473b5d4cd320c63e599cddf59ff25b0c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 06:04:22 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f764af867b9521c5548facea7bcc9a04473b5d4cd320c63e599cddf59ff25b0c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 06:04:22 np0005486808 podman[454061]: 2025-10-14 10:04:22.634960303 +0000 UTC m=+0.189587993 container init 9def52d3f29e889ddf9b2e1a85f58ac9aff58e496b2a7db7d679c1dd8b1258f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_montalcini, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 14 06:04:22 np0005486808 podman[454061]: 2025-10-14 10:04:22.644657381 +0000 UTC m=+0.199285081 container start 9def52d3f29e889ddf9b2e1a85f58ac9aff58e496b2a7db7d679c1dd8b1258f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_montalcini, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 06:04:22 np0005486808 podman[454061]: 2025-10-14 10:04:22.64830768 +0000 UTC m=+0.202935430 container attach 9def52d3f29e889ddf9b2e1a85f58ac9aff58e496b2a7db7d679c1dd8b1258f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 14 06:04:23 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3353: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:04:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:04:23 np0005486808 angry_montalcini[454077]: {
Oct 14 06:04:23 np0005486808 angry_montalcini[454077]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 06:04:23 np0005486808 angry_montalcini[454077]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 06:04:23 np0005486808 angry_montalcini[454077]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 06:04:23 np0005486808 angry_montalcini[454077]:        "osd_id": 2,
Oct 14 06:04:23 np0005486808 angry_montalcini[454077]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 06:04:23 np0005486808 angry_montalcini[454077]:        "type": "bluestore"
Oct 14 06:04:23 np0005486808 angry_montalcini[454077]:    },
Oct 14 06:04:23 np0005486808 angry_montalcini[454077]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 06:04:23 np0005486808 angry_montalcini[454077]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 06:04:23 np0005486808 angry_montalcini[454077]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 06:04:23 np0005486808 angry_montalcini[454077]:        "osd_id": 1,
Oct 14 06:04:23 np0005486808 angry_montalcini[454077]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 06:04:23 np0005486808 angry_montalcini[454077]:        "type": "bluestore"
Oct 14 06:04:23 np0005486808 angry_montalcini[454077]:    },
Oct 14 06:04:23 np0005486808 angry_montalcini[454077]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 06:04:23 np0005486808 angry_montalcini[454077]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 06:04:23 np0005486808 angry_montalcini[454077]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 06:04:23 np0005486808 angry_montalcini[454077]:        "osd_id": 0,
Oct 14 06:04:23 np0005486808 angry_montalcini[454077]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 06:04:23 np0005486808 angry_montalcini[454077]:        "type": "bluestore"
Oct 14 06:04:23 np0005486808 angry_montalcini[454077]:    }
Oct 14 06:04:23 np0005486808 angry_montalcini[454077]: }
Oct 14 06:04:23 np0005486808 systemd[1]: libpod-9def52d3f29e889ddf9b2e1a85f58ac9aff58e496b2a7db7d679c1dd8b1258f3.scope: Deactivated successfully.
Oct 14 06:04:23 np0005486808 podman[454061]: 2025-10-14 10:04:23.802107361 +0000 UTC m=+1.356735091 container died 9def52d3f29e889ddf9b2e1a85f58ac9aff58e496b2a7db7d679c1dd8b1258f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_montalcini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 14 06:04:23 np0005486808 systemd[1]: libpod-9def52d3f29e889ddf9b2e1a85f58ac9aff58e496b2a7db7d679c1dd8b1258f3.scope: Consumed 1.160s CPU time.
Oct 14 06:04:23 np0005486808 systemd[1]: var-lib-containers-storage-overlay-f764af867b9521c5548facea7bcc9a04473b5d4cd320c63e599cddf59ff25b0c-merged.mount: Deactivated successfully.
Oct 14 06:04:23 np0005486808 podman[454061]: 2025-10-14 10:04:23.878935186 +0000 UTC m=+1.433562876 container remove 9def52d3f29e889ddf9b2e1a85f58ac9aff58e496b2a7db7d679c1dd8b1258f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_montalcini, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 06:04:23 np0005486808 systemd[1]: libpod-conmon-9def52d3f29e889ddf9b2e1a85f58ac9aff58e496b2a7db7d679c1dd8b1258f3.scope: Deactivated successfully.
Oct 14 06:04:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 06:04:23 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 06:04:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 06:04:23 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 06:04:23 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 294f3f21-aeed-4e7f-b661-d32f0f5b6460 does not exist
Oct 14 06:04:23 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev b590e88d-8401-480c-8fcd-11fd61bb6c35 does not exist
Oct 14 06:04:24 np0005486808 nova_compute[259627]: 2025-10-14 10:04:24.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:04:24 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 06:04:24 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 06:04:25 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3354: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:04:25 np0005486808 nova_compute[259627]: 2025-10-14 10:04:25.438 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:04:25 np0005486808 nova_compute[259627]: 2025-10-14 10:04:25.439 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:04:25 np0005486808 nova_compute[259627]: 2025-10-14 10:04:25.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:04:25 np0005486808 nova_compute[259627]: 2025-10-14 10:04:25.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 06:04:25 np0005486808 nova_compute[259627]: 2025-10-14 10:04:25.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 06:04:26 np0005486808 nova_compute[259627]: 2025-10-14 10:04:26.000 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 14 06:04:26 np0005486808 nova_compute[259627]: 2025-10-14 10:04:26.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:04:26 np0005486808 nova_compute[259627]: 2025-10-14 10:04:26.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:04:26 np0005486808 nova_compute[259627]: 2025-10-14 10:04:26.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:04:27 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3355: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:04:27 np0005486808 nova_compute[259627]: 2025-10-14 10:04:27.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:04:27 np0005486808 nova_compute[259627]: 2025-10-14 10:04:27.979 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 06:04:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:04:29 np0005486808 nova_compute[259627]: 2025-10-14 10:04:29.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:04:29 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3356: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:04:30 np0005486808 nova_compute[259627]: 2025-10-14 10:04:30.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:04:30 np0005486808 nova_compute[259627]: 2025-10-14 10:04:30.979 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 06:04:30 np0005486808 nova_compute[259627]: 2025-10-14 10:04:30.980 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 06:04:30 np0005486808 nova_compute[259627]: 2025-10-14 10:04:30.981 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 06:04:30 np0005486808 nova_compute[259627]: 2025-10-14 10:04:30.981 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 06:04:30 np0005486808 nova_compute[259627]: 2025-10-14 10:04:30.982 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 06:04:30 np0005486808 nova_compute[259627]: 2025-10-14 10:04:30.983 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 06:04:31 np0005486808 nova_compute[259627]: 2025-10-14 10:04:31.033 2 DEBUG nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100#033[00m
Oct 14 06:04:31 np0005486808 nova_compute[259627]: 2025-10-14 10:04:31.045 2 DEBUG nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Oct 14 06:04:31 np0005486808 nova_compute[259627]: 2025-10-14 10:04:31.045 2 WARNING nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963#033[00m
Oct 14 06:04:31 np0005486808 nova_compute[259627]: 2025-10-14 10:04:31.046 2 WARNING nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf#033[00m
Oct 14 06:04:31 np0005486808 nova_compute[259627]: 2025-10-14 10:04:31.046 2 INFO nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Removable base files: /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963 /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf#033[00m
Oct 14 06:04:31 np0005486808 nova_compute[259627]: 2025-10-14 10:04:31.046 2 INFO nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/342c3cf69558783c61e2fc446ea836becb687963#033[00m
Oct 14 06:04:31 np0005486808 nova_compute[259627]: 2025-10-14 10:04:31.046 2 INFO nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/31b09e724e39ad6100a7d39b565399944ae3b6cf#033[00m
Oct 14 06:04:31 np0005486808 nova_compute[259627]: 2025-10-14 10:04:31.047 2 DEBUG nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Oct 14 06:04:31 np0005486808 nova_compute[259627]: 2025-10-14 10:04:31.047 2 DEBUG nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Oct 14 06:04:31 np0005486808 nova_compute[259627]: 2025-10-14 10:04:31.047 2 DEBUG nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Oct 14 06:04:31 np0005486808 nova_compute[259627]: 2025-10-14 10:04:31.047 2 INFO nova.virt.libvirt.imagecache [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66#033[00m
Oct 14 06:04:31 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3357: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:04:31 np0005486808 nova_compute[259627]: 2025-10-14 10:04:31.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:04:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:04:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:04:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:04:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:04:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:04:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:04:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_10:04:32
Oct 14 06:04:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 06:04:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 06:04:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['.mgr', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'images', '.rgw.root', 'default.rgw.log', 'vms', 'default.rgw.control']
Oct 14 06:04:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 06:04:32 np0005486808 nova_compute[259627]: 2025-10-14 10:04:32.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:04:33 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3358: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:04:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:04:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 06:04:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 06:04:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 06:04:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 06:04:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 06:04:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 06:04:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 06:04:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 06:04:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 06:04:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 06:04:34 np0005486808 nova_compute[259627]: 2025-10-14 10:04:34.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:04:35 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3359: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:04:36 np0005486808 nova_compute[259627]: 2025-10-14 10:04:36.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:04:37 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3360: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:04:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:04:38 np0005486808 podman[454175]: 2025-10-14 10:04:38.69830861 +0000 UTC m=+0.098539279 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 14 06:04:38 np0005486808 podman[454174]: 2025-10-14 10:04:38.727299642 +0000 UTC m=+0.128122865 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 14 06:04:39 np0005486808 nova_compute[259627]: 2025-10-14 10:04:39.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:04:39 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3361: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:04:41 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3362: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 9.2 KiB/s rd, 0 B/s wr, 15 op/s
Oct 14 06:04:41 np0005486808 nova_compute[259627]: 2025-10-14 10:04:41.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:04:43 np0005486808 nova_compute[259627]: 2025-10-14 10:04:42.999 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:04:43 np0005486808 nova_compute[259627]: 2025-10-14 10:04:43.000 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 14 06:04:43 np0005486808 nova_compute[259627]: 2025-10-14 10:04:43.022 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 14 06:04:43 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3363: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 9.2 KiB/s rd, 0 B/s wr, 15 op/s
Oct 14 06:04:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:04:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 06:04:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 06:04:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 06:04:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 06:04:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 06:04:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 06:04:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 06:04:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 06:04:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 06:04:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 06:04:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 14 06:04:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 06:04:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 06:04:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 06:04:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 06:04:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 06:04:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 06:04:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 06:04:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 06:04:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 06:04:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 06:04:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 06:04:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 06:04:44 np0005486808 nova_compute[259627]: 2025-10-14 10:04:44.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:04:45 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3364: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 48 op/s
Oct 14 06:04:46 np0005486808 nova_compute[259627]: 2025-10-14 10:04:46.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:04:47 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3365: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 06:04:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:04:49 np0005486808 nova_compute[259627]: 2025-10-14 10:04:49.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:04:49 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3366: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 06:04:51 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3367: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 14 06:04:51 np0005486808 podman[454216]: 2025-10-14 10:04:51.66245133 +0000 UTC m=+0.067500777 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 14 06:04:51 np0005486808 nova_compute[259627]: 2025-10-14 10:04:51.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:04:51 np0005486808 podman[454215]: 2025-10-14 10:04:51.755111514 +0000 UTC m=+0.156716826 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 14 06:04:53 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3368: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 44 op/s
Oct 14 06:04:53 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:04:54 np0005486808 nova_compute[259627]: 2025-10-14 10:04:54.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:04:55 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3369: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 44 op/s
Oct 14 06:04:56 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #165. Immutable memtables: 0.
Oct 14 06:04:56 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:04:56.476401) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 06:04:56 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 165
Oct 14 06:04:56 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436296476450, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 1090, "num_deletes": 251, "total_data_size": 1621822, "memory_usage": 1649472, "flush_reason": "Manual Compaction"}
Oct 14 06:04:56 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #166: started
Oct 14 06:04:56 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436296486488, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 166, "file_size": 1595659, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69384, "largest_seqno": 70473, "table_properties": {"data_size": 1590326, "index_size": 2792, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11218, "raw_average_key_size": 19, "raw_value_size": 1579719, "raw_average_value_size": 2771, "num_data_blocks": 125, "num_entries": 570, "num_filter_entries": 570, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436188, "oldest_key_time": 1760436188, "file_creation_time": 1760436296, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 166, "seqno_to_time_mapping": "N/A"}}
Oct 14 06:04:56 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 10149 microseconds, and 4778 cpu microseconds.
Oct 14 06:04:56 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 06:04:56 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:04:56.486545) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #166: 1595659 bytes OK
Oct 14 06:04:56 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:04:56.486572) [db/memtable_list.cc:519] [default] Level-0 commit table #166 started
Oct 14 06:04:56 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:04:56.488467) [db/memtable_list.cc:722] [default] Level-0 commit table #166: memtable #1 done
Oct 14 06:04:56 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:04:56.488492) EVENT_LOG_v1 {"time_micros": 1760436296488484, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 06:04:56 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:04:56.488516) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 06:04:56 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 1616773, prev total WAL file size 1616773, number of live WAL files 2.
Oct 14 06:04:56 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000162.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 06:04:56 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:04:56.489534) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Oct 14 06:04:56 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 06:04:56 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [166(1558KB)], [164(9461KB)]
Oct 14 06:04:56 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436296489630, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [166], "files_L6": [164], "score": -1, "input_data_size": 11284109, "oldest_snapshot_seqno": -1}
Oct 14 06:04:56 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #167: 8825 keys, 9493849 bytes, temperature: kUnknown
Oct 14 06:04:56 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436296554898, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 167, "file_size": 9493849, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9439517, "index_size": 31179, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22085, "raw_key_size": 231450, "raw_average_key_size": 26, "raw_value_size": 9286389, "raw_average_value_size": 1052, "num_data_blocks": 1200, "num_entries": 8825, "num_filter_entries": 8825, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760436296, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Oct 14 06:04:56 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 06:04:56 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:04:56.555354) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 9493849 bytes
Oct 14 06:04:56 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:04:56.556990) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 172.5 rd, 145.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 9.2 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(13.0) write-amplify(5.9) OK, records in: 9339, records dropped: 514 output_compression: NoCompression
Oct 14 06:04:56 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:04:56.557028) EVENT_LOG_v1 {"time_micros": 1760436296557006, "job": 102, "event": "compaction_finished", "compaction_time_micros": 65413, "compaction_time_cpu_micros": 46387, "output_level": 6, "num_output_files": 1, "total_output_size": 9493849, "num_input_records": 9339, "num_output_records": 8825, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 06:04:56 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000166.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 06:04:56 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436296557814, "job": 102, "event": "table_file_deletion", "file_number": 166}
Oct 14 06:04:56 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 06:04:56 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436296561346, "job": 102, "event": "table_file_deletion", "file_number": 164}
Oct 14 06:04:56 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:04:56.489386) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 06:04:56 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:04:56.561397) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 06:04:56 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:04:56.561406) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 06:04:56 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:04:56.561410) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 06:04:56 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:04:56.561414) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 06:04:56 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:04:56.561418) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 06:04:56 np0005486808 nova_compute[259627]: 2025-10-14 10:04:56.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:04:56 np0005486808 nova_compute[259627]: 2025-10-14 10:04:56.996 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:04:57 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3370: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 6.5 KiB/s rd, 0 B/s wr, 10 op/s
Oct 14 06:04:58 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:04:59 np0005486808 nova_compute[259627]: 2025-10-14 10:04:59.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:04:59 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3371: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:05:01 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3372: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:05:01 np0005486808 nova_compute[259627]: 2025-10-14 10:05:01.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:05:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:05:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:05:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:05:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:05:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:05:02 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:05:03 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3373: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:05:03 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:05:04 np0005486808 nova_compute[259627]: 2025-10-14 10:05:04.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:05:05 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3374: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:05:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 14 06:05:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/873002386' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 14 06:05:05 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 14 06:05:05 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/873002386' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 14 06:05:06 np0005486808 nova_compute[259627]: 2025-10-14 10:05:06.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:05:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 10:05:07.084 162547 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 06:05:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 10:05:07.085 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 06:05:07 np0005486808 ovn_metadata_agent[162542]: 2025-10-14 10:05:07.085 162547 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 06:05:07 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3375: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:05:08 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:05:09 np0005486808 nova_compute[259627]: 2025-10-14 10:05:09.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:05:09 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3376: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:05:09 np0005486808 podman[454261]: 2025-10-14 10:05:09.684399303 +0000 UTC m=+0.089113997 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 14 06:05:09 np0005486808 podman[454262]: 2025-10-14 10:05:09.68466442 +0000 UTC m=+0.080345153 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 14 06:05:11 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3377: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:05:11 np0005486808 nova_compute[259627]: 2025-10-14 10:05:11.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:05:12 np0005486808 nova_compute[259627]: 2025-10-14 10:05:12.135 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:05:13 np0005486808 nova_compute[259627]: 2025-10-14 10:05:13.001 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:05:13 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3378: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:05:13 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:05:14 np0005486808 nova_compute[259627]: 2025-10-14 10:05:14.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:05:15 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3379: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:05:15 np0005486808 nova_compute[259627]: 2025-10-14 10:05:15.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:05:16 np0005486808 nova_compute[259627]: 2025-10-14 10:05:16.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:05:17 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3380: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:05:18 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:05:18 np0005486808 nova_compute[259627]: 2025-10-14 10:05:18.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:05:19 np0005486808 nova_compute[259627]: 2025-10-14 10:05:19.011 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 06:05:19 np0005486808 nova_compute[259627]: 2025-10-14 10:05:19.011 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 06:05:19 np0005486808 nova_compute[259627]: 2025-10-14 10:05:19.011 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 06:05:19 np0005486808 nova_compute[259627]: 2025-10-14 10:05:19.011 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 14 06:05:19 np0005486808 nova_compute[259627]: 2025-10-14 10:05:19.012 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 06:05:19 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3381: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:05:19 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 06:05:19 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3398986526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 06:05:19 np0005486808 nova_compute[259627]: 2025-10-14 10:05:19.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:05:19 np0005486808 nova_compute[259627]: 2025-10-14 10:05:19.490 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 06:05:19 np0005486808 nova_compute[259627]: 2025-10-14 10:05:19.724 2 WARNING nova.virt.libvirt.driver [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 14 06:05:19 np0005486808 nova_compute[259627]: 2025-10-14 10:05:19.725 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3551MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 14 06:05:19 np0005486808 nova_compute[259627]: 2025-10-14 10:05:19.726 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 14 06:05:19 np0005486808 nova_compute[259627]: 2025-10-14 10:05:19.726 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 14 06:05:19 np0005486808 nova_compute[259627]: 2025-10-14 10:05:19.817 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 14 06:05:19 np0005486808 nova_compute[259627]: 2025-10-14 10:05:19.818 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 14 06:05:19 np0005486808 nova_compute[259627]: 2025-10-14 10:05:19.836 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing inventories for resource provider 92105e1d-1743-46e3-a494-858b4331398a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 14 06:05:19 np0005486808 nova_compute[259627]: 2025-10-14 10:05:19.877 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Updating ProviderTree inventory for provider 92105e1d-1743-46e3-a494-858b4331398a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 14 06:05:19 np0005486808 nova_compute[259627]: 2025-10-14 10:05:19.878 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Updating inventory in ProviderTree for provider 92105e1d-1743-46e3-a494-858b4331398a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 14 06:05:19 np0005486808 nova_compute[259627]: 2025-10-14 10:05:19.898 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing aggregate associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 14 06:05:19 np0005486808 nova_compute[259627]: 2025-10-14 10:05:19.945 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Refreshing trait associations for resource provider 92105e1d-1743-46e3-a494-858b4331398a, traits: COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_ACCELERATORS,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 14 06:05:19 np0005486808 nova_compute[259627]: 2025-10-14 10:05:19.962 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 14 06:05:20 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 14 06:05:20 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3444473545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 14 06:05:20 np0005486808 nova_compute[259627]: 2025-10-14 10:05:20.417 2 DEBUG oslo_concurrency.processutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 14 06:05:20 np0005486808 nova_compute[259627]: 2025-10-14 10:05:20.425 2 DEBUG nova.compute.provider_tree [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed in ProviderTree for provider: 92105e1d-1743-46e3-a494-858b4331398a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 14 06:05:20 np0005486808 nova_compute[259627]: 2025-10-14 10:05:20.447 2 DEBUG nova.scheduler.client.report [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Inventory has not changed for provider 92105e1d-1743-46e3-a494-858b4331398a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 14 06:05:20 np0005486808 nova_compute[259627]: 2025-10-14 10:05:20.449 2 DEBUG nova.compute.resource_tracker [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 14 06:05:20 np0005486808 nova_compute[259627]: 2025-10-14 10:05:20.450 2 DEBUG oslo_concurrency.lockutils [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 14 06:05:21 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3382: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:05:21 np0005486808 nova_compute[259627]: 2025-10-14 10:05:21.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:05:22 np0005486808 podman[454344]: 2025-10-14 10:05:22.687243016 +0000 UTC m=+0.091566218 container health_status 850759ea3b88ee701494bcf1122a9a76c691c8bac45f2554207c058a816d2d41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:081710f3f67a74adb03d6d8f527f6ef01828243c2be24ca57436de2be8618576', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 14 06:05:22 np0005486808 podman[454343]: 2025-10-14 10:05:22.691292596 +0000 UTC m=+0.104476685 container health_status 71d1a553a8c8216d3cf34395ae03ef0e40e4613e131f44dc2b59fc8a74bf9f47 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:1f610ed4ebf657334da87dfd95b3dc5299fb3540ec1433ae3db34f0f247d8abf', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Oct 14 06:05:23 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3383: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:05:23 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:05:24 np0005486808 nova_compute[259627]: 2025-10-14 10:05:24.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:05:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 06:05:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 06:05:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 14 06:05:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 06:05:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 14 06:05:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 06:05:25 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 86f787f9-97c4-4758-b166-862fcac6909c does not exist
Oct 14 06:05:25 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev c7422c25-4969-489d-99a7-b748fb7d800b does not exist
Oct 14 06:05:25 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 1c1140b8-94c9-406e-9fbc-2e8049ff2f45 does not exist
Oct 14 06:05:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 14 06:05:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 14 06:05:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 14 06:05:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 06:05:25 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 06:05:25 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 06:05:25 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3384: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:05:25 np0005486808 nova_compute[259627]: 2025-10-14 10:05:25.446 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:05:25 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 14 06:05:25 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 06:05:25 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 14 06:05:25 np0005486808 podman[454658]: 2025-10-14 10:05:25.869223082 +0000 UTC m=+0.071898965 container create d8883d62b5b613c1821665d74a7c6f1c4f10970af08b02fe03ec75bc8959c485 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_euler, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 06:05:25 np0005486808 systemd[1]: Started libpod-conmon-d8883d62b5b613c1821665d74a7c6f1c4f10970af08b02fe03ec75bc8959c485.scope.
Oct 14 06:05:25 np0005486808 podman[454658]: 2025-10-14 10:05:25.840243071 +0000 UTC m=+0.042919084 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 06:05:25 np0005486808 systemd[1]: Started libcrun container.
Oct 14 06:05:25 np0005486808 podman[454658]: 2025-10-14 10:05:25.96449269 +0000 UTC m=+0.167168613 container init d8883d62b5b613c1821665d74a7c6f1c4f10970af08b02fe03ec75bc8959c485 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_euler, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 14 06:05:25 np0005486808 podman[454658]: 2025-10-14 10:05:25.977288834 +0000 UTC m=+0.179964727 container start d8883d62b5b613c1821665d74a7c6f1c4f10970af08b02fe03ec75bc8959c485 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_euler, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 06:05:25 np0005486808 nova_compute[259627]: 2025-10-14 10:05:25.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:05:25 np0005486808 nova_compute[259627]: 2025-10-14 10:05:25.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 14 06:05:25 np0005486808 nova_compute[259627]: 2025-10-14 10:05:25.978 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 14 06:05:25 np0005486808 podman[454658]: 2025-10-14 10:05:25.982085161 +0000 UTC m=+0.184761114 container attach d8883d62b5b613c1821665d74a7c6f1c4f10970af08b02fe03ec75bc8959c485 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_euler, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 06:05:25 np0005486808 angry_euler[454674]: 167 167
Oct 14 06:05:25 np0005486808 systemd[1]: libpod-d8883d62b5b613c1821665d74a7c6f1c4f10970af08b02fe03ec75bc8959c485.scope: Deactivated successfully.
Oct 14 06:05:25 np0005486808 conmon[454674]: conmon d8883d62b5b613c18216 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d8883d62b5b613c1821665d74a7c6f1c4f10970af08b02fe03ec75bc8959c485.scope/container/memory.events
Oct 14 06:05:25 np0005486808 podman[454658]: 2025-10-14 10:05:25.984997493 +0000 UTC m=+0.187673396 container died d8883d62b5b613c1821665d74a7c6f1c4f10970af08b02fe03ec75bc8959c485 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_euler, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 14 06:05:26 np0005486808 nova_compute[259627]: 2025-10-14 10:05:26.008 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 14 06:05:26 np0005486808 systemd[1]: var-lib-containers-storage-overlay-596d4f495d072a7009a61d5cfb7f80f763b723d3212b5754f90c5ebc6cf853e7-merged.mount: Deactivated successfully.
Oct 14 06:05:26 np0005486808 podman[454658]: 2025-10-14 10:05:26.037194614 +0000 UTC m=+0.239870517 container remove d8883d62b5b613c1821665d74a7c6f1c4f10970af08b02fe03ec75bc8959c485 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Oct 14 06:05:26 np0005486808 systemd[1]: libpod-conmon-d8883d62b5b613c1821665d74a7c6f1c4f10970af08b02fe03ec75bc8959c485.scope: Deactivated successfully.
Oct 14 06:05:26 np0005486808 podman[454698]: 2025-10-14 10:05:26.296866155 +0000 UTC m=+0.065625001 container create c9778f51cb17487a854d54010715d46a9f9423e169773555578e25c85a093fec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jones, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Oct 14 06:05:26 np0005486808 systemd[1]: Started libpod-conmon-c9778f51cb17487a854d54010715d46a9f9423e169773555578e25c85a093fec.scope.
Oct 14 06:05:26 np0005486808 podman[454698]: 2025-10-14 10:05:26.279894259 +0000 UTC m=+0.048653125 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 06:05:26 np0005486808 systemd[1]: Started libcrun container.
Oct 14 06:05:26 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd9fd3ce863412f30b5228a7cc414e26b284072cae82a844ebd9a9c08cc57cda/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 06:05:26 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd9fd3ce863412f30b5228a7cc414e26b284072cae82a844ebd9a9c08cc57cda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 06:05:26 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd9fd3ce863412f30b5228a7cc414e26b284072cae82a844ebd9a9c08cc57cda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 06:05:26 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd9fd3ce863412f30b5228a7cc414e26b284072cae82a844ebd9a9c08cc57cda/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 06:05:26 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd9fd3ce863412f30b5228a7cc414e26b284072cae82a844ebd9a9c08cc57cda/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 14 06:05:26 np0005486808 podman[454698]: 2025-10-14 10:05:26.411680362 +0000 UTC m=+0.180439238 container init c9778f51cb17487a854d54010715d46a9f9423e169773555578e25c85a093fec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jones, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 14 06:05:26 np0005486808 podman[454698]: 2025-10-14 10:05:26.423156684 +0000 UTC m=+0.191915530 container start c9778f51cb17487a854d54010715d46a9f9423e169773555578e25c85a093fec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 06:05:26 np0005486808 podman[454698]: 2025-10-14 10:05:26.427523891 +0000 UTC m=+0.196282797 container attach c9778f51cb17487a854d54010715d46a9f9423e169773555578e25c85a093fec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 14 06:05:26 np0005486808 nova_compute[259627]: 2025-10-14 10:05:26.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:05:26 np0005486808 nova_compute[259627]: 2025-10-14 10:05:26.978 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:05:26 np0005486808 nova_compute[259627]: 2025-10-14 10:05:26.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:05:26 np0005486808 nova_compute[259627]: 2025-10-14 10:05:26.979 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:05:27 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3385: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:05:27 np0005486808 hardcore_jones[454714]: --> passed data devices: 0 physical, 3 LVM
Oct 14 06:05:27 np0005486808 hardcore_jones[454714]: --> relative data size: 1.0
Oct 14 06:05:27 np0005486808 hardcore_jones[454714]: --> All data devices are unavailable
Oct 14 06:05:27 np0005486808 systemd[1]: libpod-c9778f51cb17487a854d54010715d46a9f9423e169773555578e25c85a093fec.scope: Deactivated successfully.
Oct 14 06:05:27 np0005486808 systemd[1]: libpod-c9778f51cb17487a854d54010715d46a9f9423e169773555578e25c85a093fec.scope: Consumed 1.136s CPU time.
Oct 14 06:05:27 np0005486808 podman[454698]: 2025-10-14 10:05:27.638369041 +0000 UTC m=+1.407127957 container died c9778f51cb17487a854d54010715d46a9f9423e169773555578e25c85a093fec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 06:05:27 np0005486808 systemd[1]: var-lib-containers-storage-overlay-fd9fd3ce863412f30b5228a7cc414e26b284072cae82a844ebd9a9c08cc57cda-merged.mount: Deactivated successfully.
Oct 14 06:05:27 np0005486808 podman[454698]: 2025-10-14 10:05:27.725901669 +0000 UTC m=+1.494660555 container remove c9778f51cb17487a854d54010715d46a9f9423e169773555578e25c85a093fec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jones, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 14 06:05:27 np0005486808 systemd[1]: libpod-conmon-c9778f51cb17487a854d54010715d46a9f9423e169773555578e25c85a093fec.scope: Deactivated successfully.
Oct 14 06:05:27 np0005486808 nova_compute[259627]: 2025-10-14 10:05:27.977 2 DEBUG oslo_service.periodic_task [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 14 06:05:27 np0005486808 nova_compute[259627]: 2025-10-14 10:05:27.977 2 DEBUG nova.compute.manager [None req-db8e37f6-6697-4086-8c08-44ff017a94e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 14 06:05:28 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:05:28 np0005486808 podman[454899]: 2025-10-14 10:05:28.604419035 +0000 UTC m=+0.068741217 container create 91b55561a3be60450a6dc3fca59efaf3c6235c5bf5f5a2b6a050b5c00dad76e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ritchie, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 06:05:28 np0005486808 systemd[1]: Started libpod-conmon-91b55561a3be60450a6dc3fca59efaf3c6235c5bf5f5a2b6a050b5c00dad76e5.scope.
Oct 14 06:05:28 np0005486808 podman[454899]: 2025-10-14 10:05:28.573311692 +0000 UTC m=+0.037633884 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 06:05:28 np0005486808 systemd[1]: Started libcrun container.
Oct 14 06:05:28 np0005486808 podman[454899]: 2025-10-14 10:05:28.709554005 +0000 UTC m=+0.173876287 container init 91b55561a3be60450a6dc3fca59efaf3c6235c5bf5f5a2b6a050b5c00dad76e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 06:05:28 np0005486808 podman[454899]: 2025-10-14 10:05:28.716660899 +0000 UTC m=+0.180983102 container start 91b55561a3be60450a6dc3fca59efaf3c6235c5bf5f5a2b6a050b5c00dad76e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ritchie, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 14 06:05:28 np0005486808 podman[454899]: 2025-10-14 10:05:28.720092594 +0000 UTC m=+0.184414786 container attach 91b55561a3be60450a6dc3fca59efaf3c6235c5bf5f5a2b6a050b5c00dad76e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 14 06:05:28 np0005486808 practical_ritchie[454915]: 167 167
Oct 14 06:05:28 np0005486808 systemd[1]: libpod-91b55561a3be60450a6dc3fca59efaf3c6235c5bf5f5a2b6a050b5c00dad76e5.scope: Deactivated successfully.
Oct 14 06:05:28 np0005486808 conmon[454915]: conmon 91b55561a3be60450a6d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-91b55561a3be60450a6dc3fca59efaf3c6235c5bf5f5a2b6a050b5c00dad76e5.scope/container/memory.events
Oct 14 06:05:28 np0005486808 podman[454899]: 2025-10-14 10:05:28.724376619 +0000 UTC m=+0.188698841 container died 91b55561a3be60450a6dc3fca59efaf3c6235c5bf5f5a2b6a050b5c00dad76e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 14 06:05:28 np0005486808 systemd[1]: var-lib-containers-storage-overlay-041bfb483b8a846d1f7bfe7030eeabf19ec58cd95f96048134b5240930404ef9-merged.mount: Deactivated successfully.
Oct 14 06:05:28 np0005486808 podman[454899]: 2025-10-14 10:05:28.781160612 +0000 UTC m=+0.245482844 container remove 91b55561a3be60450a6dc3fca59efaf3c6235c5bf5f5a2b6a050b5c00dad76e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 14 06:05:28 np0005486808 systemd[1]: libpod-conmon-91b55561a3be60450a6dc3fca59efaf3c6235c5bf5f5a2b6a050b5c00dad76e5.scope: Deactivated successfully.
Oct 14 06:05:29 np0005486808 podman[454938]: 2025-10-14 10:05:29.032676024 +0000 UTC m=+0.077189465 container create ec5da45eee99be9d55e0130c922e952aef49b57e84b1e17f7d1ddd7d8a2fd8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_johnson, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 14 06:05:29 np0005486808 systemd[1]: Started libpod-conmon-ec5da45eee99be9d55e0130c922e952aef49b57e84b1e17f7d1ddd7d8a2fd8aa.scope.
Oct 14 06:05:29 np0005486808 podman[454938]: 2025-10-14 10:05:29.000684799 +0000 UTC m=+0.045198260 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 06:05:29 np0005486808 systemd[1]: Started libcrun container.
Oct 14 06:05:29 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc09856a43c76325644c9255dada90899cd5afad8fdd4dc3e4e4dbc93723872c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 06:05:29 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc09856a43c76325644c9255dada90899cd5afad8fdd4dc3e4e4dbc93723872c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 06:05:29 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc09856a43c76325644c9255dada90899cd5afad8fdd4dc3e4e4dbc93723872c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 06:05:29 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc09856a43c76325644c9255dada90899cd5afad8fdd4dc3e4e4dbc93723872c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 06:05:29 np0005486808 podman[454938]: 2025-10-14 10:05:29.138763777 +0000 UTC m=+0.183277268 container init ec5da45eee99be9d55e0130c922e952aef49b57e84b1e17f7d1ddd7d8a2fd8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_johnson, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 06:05:29 np0005486808 podman[454938]: 2025-10-14 10:05:29.15153858 +0000 UTC m=+0.196052011 container start ec5da45eee99be9d55e0130c922e952aef49b57e84b1e17f7d1ddd7d8a2fd8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_johnson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 14 06:05:29 np0005486808 podman[454938]: 2025-10-14 10:05:29.157741882 +0000 UTC m=+0.202255373 container attach ec5da45eee99be9d55e0130c922e952aef49b57e84b1e17f7d1ddd7d8a2fd8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Oct 14 06:05:29 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3386: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:05:29 np0005486808 nova_compute[259627]: 2025-10-14 10:05:29.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]: {
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:    "0": [
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:        {
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:            "devices": [
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:                "/dev/loop3"
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:            ],
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:            "lv_name": "ceph_lv0",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:            "lv_size": "21470642176",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f3a61673-339d-4723-8921-51461af37696,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:            "lv_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:            "name": "ceph_lv0",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:            "tags": {
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:                "ceph.block_uuid": "JzdwSc-FZYf-OMxt-f0sq-QeTH-1G4Q-qGXGGZ",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:                "ceph.cephx_lockbox_secret": "",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:                "ceph.cluster_name": "ceph",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:                "ceph.crush_device_class": "",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:                "ceph.encrypted": "0",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:                "ceph.osd_fsid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:                "ceph.osd_id": "0",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:                "ceph.type": "block",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:                "ceph.vdo": "0"
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:            },
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:            "type": "block",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:            "vg_name": "ceph_vg0"
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:        }
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:    ],
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:    "1": [
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:        {
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:            "devices": [
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:                "/dev/loop4"
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:            ],
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:            "lv_name": "ceph_lv1",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:            "lv_size": "21470642176",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e4dad3b-8894-46e9-8f78-c9bac8e09fb7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:            "lv_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:            "name": "ceph_lv1",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:            "tags": {
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:                "ceph.block_uuid": "iYMhab-Qxv4-5035-cN1V-SXJC-rPaG-AbGVJf",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:                "ceph.cephx_lockbox_secret": "",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:                "ceph.cluster_name": "ceph",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:                "ceph.crush_device_class": "",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:                "ceph.encrypted": "0",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:                "ceph.osd_fsid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:                "ceph.osd_id": "1",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:                "ceph.type": "block",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:                "ceph.vdo": "0"
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:            },
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:            "type": "block",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:            "vg_name": "ceph_vg1"
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:        }
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:    ],
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:    "2": [
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:        {
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:            "devices": [
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:                "/dev/loop5"
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:            ],
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:            "lv_name": "ceph_lv2",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:            "lv_size": "21470642176",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c49aadb6-9b04-5cb1-8f5f-4c91676c568e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=89de92de-6791-493e-a676-9fee8315c8cf,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:            "lv_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:            "name": "ceph_lv2",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:            "tags": {
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:                "ceph.block_uuid": "gqcqZJ-7KDW-RGdV-5j4z-MCmy-mx62-6itNcW",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:                "ceph.cephx_lockbox_secret": "",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:                "ceph.cluster_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:                "ceph.cluster_name": "ceph",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:                "ceph.crush_device_class": "",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:                "ceph.encrypted": "0",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:                "ceph.osd_fsid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:                "ceph.osd_id": "2",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:                "ceph.type": "block",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:                "ceph.vdo": "0"
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:            },
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:            "type": "block",
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:            "vg_name": "ceph_vg2"
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:        }
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]:    ]
Oct 14 06:05:29 np0005486808 ecstatic_johnson[454954]: }
Oct 14 06:05:29 np0005486808 systemd[1]: libpod-ec5da45eee99be9d55e0130c922e952aef49b57e84b1e17f7d1ddd7d8a2fd8aa.scope: Deactivated successfully.
Oct 14 06:05:29 np0005486808 podman[454938]: 2025-10-14 10:05:29.945362879 +0000 UTC m=+0.989876320 container died ec5da45eee99be9d55e0130c922e952aef49b57e84b1e17f7d1ddd7d8a2fd8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_johnson, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Oct 14 06:05:29 np0005486808 systemd[1]: var-lib-containers-storage-overlay-fc09856a43c76325644c9255dada90899cd5afad8fdd4dc3e4e4dbc93723872c-merged.mount: Deactivated successfully.
Oct 14 06:05:30 np0005486808 podman[454938]: 2025-10-14 10:05:30.014725311 +0000 UTC m=+1.059238732 container remove ec5da45eee99be9d55e0130c922e952aef49b57e84b1e17f7d1ddd7d8a2fd8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_johnson, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 06:05:30 np0005486808 systemd[1]: libpod-conmon-ec5da45eee99be9d55e0130c922e952aef49b57e84b1e17f7d1ddd7d8a2fd8aa.scope: Deactivated successfully.
Oct 14 06:05:30 np0005486808 systemd-logind[799]: New session 60 of user zuul.
Oct 14 06:05:30 np0005486808 systemd[1]: Started Session 60 of User zuul.
Oct 14 06:05:30 np0005486808 podman[455156]: 2025-10-14 10:05:30.943076429 +0000 UTC m=+0.056918368 container create 7ab74dcf56e0869238314ec5452f72c120095c6ebaf3328ae0edbf21054f0f8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_perlman, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 14 06:05:31 np0005486808 podman[455156]: 2025-10-14 10:05:30.91379381 +0000 UTC m=+0.027635759 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 06:05:31 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3387: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:05:31 np0005486808 systemd[1]: Started libpod-conmon-7ab74dcf56e0869238314ec5452f72c120095c6ebaf3328ae0edbf21054f0f8a.scope.
Oct 14 06:05:31 np0005486808 systemd[1]: Started libcrun container.
Oct 14 06:05:31 np0005486808 podman[455156]: 2025-10-14 10:05:31.656976576 +0000 UTC m=+0.770818475 container init 7ab74dcf56e0869238314ec5452f72c120095c6ebaf3328ae0edbf21054f0f8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_perlman, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 06:05:31 np0005486808 podman[455156]: 2025-10-14 10:05:31.669428392 +0000 UTC m=+0.783270291 container start 7ab74dcf56e0869238314ec5452f72c120095c6ebaf3328ae0edbf21054f0f8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_perlman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 14 06:05:31 np0005486808 podman[455156]: 2025-10-14 10:05:31.674284911 +0000 UTC m=+0.788126810 container attach 7ab74dcf56e0869238314ec5452f72c120095c6ebaf3328ae0edbf21054f0f8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_perlman, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 06:05:31 np0005486808 crazy_perlman[455172]: 167 167
Oct 14 06:05:31 np0005486808 systemd[1]: libpod-7ab74dcf56e0869238314ec5452f72c120095c6ebaf3328ae0edbf21054f0f8a.scope: Deactivated successfully.
Oct 14 06:05:31 np0005486808 conmon[455172]: conmon 7ab74dcf56e086923831 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7ab74dcf56e0869238314ec5452f72c120095c6ebaf3328ae0edbf21054f0f8a.scope/container/memory.events
Oct 14 06:05:31 np0005486808 podman[455156]: 2025-10-14 10:05:31.678990286 +0000 UTC m=+0.792832195 container died 7ab74dcf56e0869238314ec5452f72c120095c6ebaf3328ae0edbf21054f0f8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 14 06:05:31 np0005486808 systemd[1]: var-lib-containers-storage-overlay-8f15d3ec28b42f1a8f1e3168cbee9e51b656ae97f471622d855c005b47aa6ab4-merged.mount: Deactivated successfully.
Oct 14 06:05:31 np0005486808 podman[455156]: 2025-10-14 10:05:31.73499899 +0000 UTC m=+0.848840909 container remove 7ab74dcf56e0869238314ec5452f72c120095c6ebaf3328ae0edbf21054f0f8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_perlman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 14 06:05:31 np0005486808 systemd[1]: libpod-conmon-7ab74dcf56e0869238314ec5452f72c120095c6ebaf3328ae0edbf21054f0f8a.scope: Deactivated successfully.
Oct 14 06:05:31 np0005486808 podman[455237]: 2025-10-14 10:05:31.933644485 +0000 UTC m=+0.059790778 container create e95648dcb3a87786133e925feace7b37800c02d579392fcf3f4f250e1624f371 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 14 06:05:31 np0005486808 systemd[1]: Started libpod-conmon-e95648dcb3a87786133e925feace7b37800c02d579392fcf3f4f250e1624f371.scope.
Oct 14 06:05:31 np0005486808 nova_compute[259627]: 2025-10-14 10:05:31.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:05:31 np0005486808 systemd[1]: Started libcrun container.
Oct 14 06:05:31 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3bf0c281ce26df1d3b5b39960188f5057b284ba19414490c99bd79cc8e85fb3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 14 06:05:32 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3bf0c281ce26df1d3b5b39960188f5057b284ba19414490c99bd79cc8e85fb3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 14 06:05:32 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3bf0c281ce26df1d3b5b39960188f5057b284ba19414490c99bd79cc8e85fb3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 14 06:05:32 np0005486808 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3bf0c281ce26df1d3b5b39960188f5057b284ba19414490c99bd79cc8e85fb3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 14 06:05:32 np0005486808 podman[455237]: 2025-10-14 10:05:31.914255399 +0000 UTC m=+0.040401702 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 14 06:05:32 np0005486808 podman[455237]: 2025-10-14 10:05:32.018322502 +0000 UTC m=+0.144468795 container init e95648dcb3a87786133e925feace7b37800c02d579392fcf3f4f250e1624f371 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 14 06:05:32 np0005486808 podman[455237]: 2025-10-14 10:05:32.025357145 +0000 UTC m=+0.151503448 container start e95648dcb3a87786133e925feace7b37800c02d579392fcf3f4f250e1624f371 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mahavira, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 06:05:32 np0005486808 podman[455237]: 2025-10-14 10:05:32.028852591 +0000 UTC m=+0.154998884 container attach e95648dcb3a87786133e925feace7b37800c02d579392fcf3f4f250e1624f371 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mahavira, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 14 06:05:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:05:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:05:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:05:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:05:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] scanning for idle connections..
Oct 14 06:05:32 np0005486808 ceph-mgr[74543]: [volumes INFO mgr_util] cleaning up connections: []
Oct 14 06:05:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Optimize plan auto_2025-10-14_10:05:32
Oct 14 06:05:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 14 06:05:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] do_upmap
Oct 14 06:05:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'images', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', '.mgr']
Oct 14 06:05:32 np0005486808 ceph-mgr[74543]: [balancer INFO root] prepared 0/10 changes
Oct 14 06:05:33 np0005486808 competent_mahavira[455257]: {
Oct 14 06:05:33 np0005486808 competent_mahavira[455257]:    "89de92de-6791-493e-a676-9fee8315c8cf": {
Oct 14 06:05:33 np0005486808 competent_mahavira[455257]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 06:05:33 np0005486808 competent_mahavira[455257]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 14 06:05:33 np0005486808 competent_mahavira[455257]:        "osd_id": 2,
Oct 14 06:05:33 np0005486808 competent_mahavira[455257]:        "osd_uuid": "89de92de-6791-493e-a676-9fee8315c8cf",
Oct 14 06:05:33 np0005486808 competent_mahavira[455257]:        "type": "bluestore"
Oct 14 06:05:33 np0005486808 competent_mahavira[455257]:    },
Oct 14 06:05:33 np0005486808 competent_mahavira[455257]:    "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7": {
Oct 14 06:05:33 np0005486808 competent_mahavira[455257]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 06:05:33 np0005486808 competent_mahavira[455257]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 14 06:05:33 np0005486808 competent_mahavira[455257]:        "osd_id": 1,
Oct 14 06:05:33 np0005486808 competent_mahavira[455257]:        "osd_uuid": "8e4dad3b-8894-46e9-8f78-c9bac8e09fb7",
Oct 14 06:05:33 np0005486808 competent_mahavira[455257]:        "type": "bluestore"
Oct 14 06:05:33 np0005486808 competent_mahavira[455257]:    },
Oct 14 06:05:33 np0005486808 competent_mahavira[455257]:    "f3a61673-339d-4723-8921-51461af37696": {
Oct 14 06:05:33 np0005486808 competent_mahavira[455257]:        "ceph_fsid": "c49aadb6-9b04-5cb1-8f5f-4c91676c568e",
Oct 14 06:05:33 np0005486808 competent_mahavira[455257]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 14 06:05:33 np0005486808 competent_mahavira[455257]:        "osd_id": 0,
Oct 14 06:05:33 np0005486808 competent_mahavira[455257]:        "osd_uuid": "f3a61673-339d-4723-8921-51461af37696",
Oct 14 06:05:33 np0005486808 competent_mahavira[455257]:        "type": "bluestore"
Oct 14 06:05:33 np0005486808 competent_mahavira[455257]:    }
Oct 14 06:05:33 np0005486808 competent_mahavira[455257]: }
Oct 14 06:05:33 np0005486808 systemd[1]: libpod-e95648dcb3a87786133e925feace7b37800c02d579392fcf3f4f250e1624f371.scope: Deactivated successfully.
Oct 14 06:05:33 np0005486808 systemd[1]: libpod-e95648dcb3a87786133e925feace7b37800c02d579392fcf3f4f250e1624f371.scope: Consumed 1.085s CPU time.
Oct 14 06:05:33 np0005486808 podman[455385]: 2025-10-14 10:05:33.176987433 +0000 UTC m=+0.044125964 container died e95648dcb3a87786133e925feace7b37800c02d579392fcf3f4f250e1624f371 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 14 06:05:33 np0005486808 systemd[1]: var-lib-containers-storage-overlay-e3bf0c281ce26df1d3b5b39960188f5057b284ba19414490c99bd79cc8e85fb3-merged.mount: Deactivated successfully.
Oct 14 06:05:33 np0005486808 podman[455385]: 2025-10-14 10:05:33.234809382 +0000 UTC m=+0.101947893 container remove e95648dcb3a87786133e925feace7b37800c02d579392fcf3f4f250e1624f371 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mahavira, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 14 06:05:33 np0005486808 systemd[1]: libpod-conmon-e95648dcb3a87786133e925feace7b37800c02d579392fcf3f4f250e1624f371.scope: Deactivated successfully.
Oct 14 06:05:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 14 06:05:33 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 06:05:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 14 06:05:33 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23289 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 06:05:33 np0005486808 ceph-mon[74249]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 06:05:33 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 07a8a1a4-e53c-41c2-a6cb-42d94ca2ff3f does not exist
Oct 14 06:05:33 np0005486808 ceph-mgr[74543]: [progress WARNING root] complete: ev 6bf32b6e-2595-4401-9ccb-ca0eaf83efcd does not exist
Oct 14 06:05:33 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3388: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:05:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 14 06:05:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 14 06:05:33 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:05:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 06:05:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 14 06:05:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 06:05:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 06:05:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 14 06:05:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 06:05:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 14 06:05:33 np0005486808 ceph-mgr[74543]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 14 06:05:33 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23291 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 06:05:34 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 06:05:34 np0005486808 ceph-mon[74249]: from='mgr.14132 192.168.122.100:0/3181217353' entity='mgr.compute-0.euuwqu' 
Oct 14 06:05:34 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 14 06:05:34 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2203869758' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 14 06:05:34 np0005486808 nova_compute[259627]: 2025-10-14 10:05:34.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:05:35 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3389: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:05:37 np0005486808 nova_compute[259627]: 2025-10-14 10:05:37.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:05:37 np0005486808 ovs-vsctl[455561]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 14 06:05:37 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3390: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:05:38 np0005486808 virtqemud[259351]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 14 06:05:38 np0005486808 virtqemud[259351]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 14 06:05:38 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:05:38 np0005486808 virtqemud[259351]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 14 06:05:39 np0005486808 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: cache status {prefix=cache status} (starting...)
Oct 14 06:05:39 np0005486808 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: client ls {prefix=client ls} (starting...)
Oct 14 06:05:39 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3391: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:05:39 np0005486808 lvm[455904]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 14 06:05:39 np0005486808 lvm[455904]: VG ceph_vg1 finished
Oct 14 06:05:39 np0005486808 nova_compute[259627]: 2025-10-14 10:05:39.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:05:39 np0005486808 lvm[455937]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 14 06:05:39 np0005486808 lvm[455937]: VG ceph_vg0 finished
Oct 14 06:05:39 np0005486808 lvm[455942]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 14 06:05:39 np0005486808 lvm[455942]: VG ceph_vg2 finished
Oct 14 06:05:39 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23295 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 06:05:39 np0005486808 podman[455948]: 2025-10-14 10:05:39.828289628 +0000 UTC m=+0.084283519 container health_status 98a7177e8a60aeea23858ea4af3e9d06e5e1b7951f930df83d7089acc1640dc9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:124d7cea22be48d4f1a8cfedec66864ccd3bea72d0fbc0d6c8e6bf4a6820e8fe', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 14 06:05:39 np0005486808 podman[455946]: 2025-10-14 10:05:39.84516083 +0000 UTC m=+0.105207730 container health_status 1a25947b2e2d8e60653a229107fc74e8c5cee390c9decf0cfb20cb399c6b2a19 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:0f26bfcc3cc838a38a36e11055a96f7d28fb841d04aaf952494f27b1f8919d97', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 14 06:05:40 np0005486808 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: damage ls {prefix=damage ls} (starting...)
Oct 14 06:05:40 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23297 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 06:05:40 np0005486808 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: dump loads {prefix=dump loads} (starting...)
Oct 14 06:05:40 np0005486808 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct 14 06:05:40 np0005486808 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct 14 06:05:40 np0005486808 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct 14 06:05:40 np0005486808 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct 14 06:05:40 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Oct 14 06:05:40 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3283883360' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 14 06:05:40 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23303 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 06:05:40 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T10:05:40.988+0000 7f6b82b53640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 14 06:05:40 np0005486808 ceph-mgr[74543]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 14 06:05:41 np0005486808 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct 14 06:05:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 14 06:05:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3347199880' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 14 06:05:41 np0005486808 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct 14 06:05:41 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3392: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:05:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Oct 14 06:05:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2543063755' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 14 06:05:41 np0005486808 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: ops {prefix=ops} (starting...)
Oct 14 06:05:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Oct 14 06:05:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2308033935' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 14 06:05:41 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 14 06:05:41 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/569880885' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 14 06:05:42 np0005486808 nova_compute[259627]: 2025-10-14 10:05:42.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:05:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Oct 14 06:05:42 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3827175330' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 14 06:05:42 np0005486808 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: session ls {prefix=session ls} (starting...)
Oct 14 06:05:42 np0005486808 ceph-mds[100530]: mds.cephfs.compute-0.qkuhkt asok_command: status {prefix=status} (starting...)
Oct 14 06:05:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 14 06:05:42 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1854098007' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 14 06:05:42 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23317 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 06:05:42 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 14 06:05:42 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2027536602' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 14 06:05:42 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23321 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 06:05:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 14 06:05:43 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3533733668' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 14 06:05:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Oct 14 06:05:43 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3657510336' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 14 06:05:43 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3393: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:05:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:05:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct 14 06:05:43 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3239420005' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 14 06:05:43 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Oct 14 06:05:43 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1430673640' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 14 06:05:44 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 14 06:05:44 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3507163686' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 14 06:05:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] _maybe_adjust
Oct 14 06:05:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 06:05:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 14 06:05:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 06:05:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 14 06:05:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 06:05:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 06:05:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 06:05:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 06:05:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 06:05:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 14 06:05:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 06:05:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 14 06:05:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 06:05:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 06:05:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 06:05:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 14 06:05:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 06:05:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 14 06:05:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 06:05:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 14 06:05:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 14 06:05:44 np0005486808 ceph-mgr[74543]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 14 06:05:44 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23333 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 06:05:44 np0005486808 ceph-mgr[74543]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 14 06:05:44 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T10:05:44.191+0000 7f6b82b53640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 14 06:05:44 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23335 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 06:05:44 np0005486808 nova_compute[259627]: 2025-10-14 10:05:44.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:05:44 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Oct 14 06:05:44 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1618488327' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 14 06:05:44 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23339 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 06:05:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Oct 14 06:05:45 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2227336034' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 14 06:05:45 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23343 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 06:05:45 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3394: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:05:45 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23347 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 14 06:05:45 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 14 06:05:45 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1251670558' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 14 06:05:45 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23349 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 06:05:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 14 06:05:46 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3227497799' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279150592 unmapped: 45400064 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3002713 data_alloc: 218103808 data_used: 4149248
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279158784 unmapped: 45391872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef373000/0x0/0x4ffc00000, data 0x15184fd/0x16ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279175168 unmapped: 45375488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279175168 unmapped: 45375488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279175168 unmapped: 45375488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279175168 unmapped: 45375488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3042873 data_alloc: 218103808 data_used: 9805824
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279175168 unmapped: 45375488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef373000/0x0/0x4ffc00000, data 0x15184fd/0x16ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279175168 unmapped: 45375488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef373000/0x0/0x4ffc00000, data 0x15184fd/0x16ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279175168 unmapped: 45375488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279175168 unmapped: 45375488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279175168 unmapped: 45375488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.889083862s of 11.892253876s, submitted: 1
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3048489 data_alloc: 218103808 data_used: 9805824
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279175168 unmapped: 45375488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279109632 unmapped: 45441024 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef2c2000/0x0/0x4ffc00000, data 0x15c94fd/0x175c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279109632 unmapped: 45441024 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279109632 unmapped: 45441024 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279109632 unmapped: 45441024 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef2c2000/0x0/0x4ffc00000, data 0x15c94fd/0x175c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef2c2000/0x0/0x4ffc00000, data 0x15c94fd/0x175c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3050563 data_alloc: 218103808 data_used: 9805824
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279109632 unmapped: 45441024 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279109632 unmapped: 45441024 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279109632 unmapped: 45441024 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef2c2000/0x0/0x4ffc00000, data 0x15c94fd/0x175c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279109632 unmapped: 45441024 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef2c2000/0x0/0x4ffc00000, data 0x15c94fd/0x175c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279109632 unmapped: 45441024 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef2c2000/0x0/0x4ffc00000, data 0x15c94fd/0x175c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3050563 data_alloc: 218103808 data_used: 9805824
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279117824 unmapped: 45432832 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279117824 unmapped: 45432832 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279117824 unmapped: 45432832 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef2c2000/0x0/0x4ffc00000, data 0x15c94fd/0x175c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279117824 unmapped: 45432832 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 35K writes, 139K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.03 MB/s#012Cumulative WAL: 35K writes, 12K syncs, 2.74 writes per sync, written: 0.13 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4462 writes, 17K keys, 4462 commit groups, 1.0 writes per commit group, ingest: 20.06 MB, 0.03 MB/s#012Interval WAL: 4462 writes, 1787 syncs, 2.50 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279117824 unmapped: 45432832 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3050563 data_alloc: 218103808 data_used: 9805824
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279117824 unmapped: 45432832 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.724911690s of 15.823020935s, submitted: 21
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd6e800 session 0x55b3cbf234a0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70b000 session 0x55b3ccad7c20
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d1d14400 session 0x55b3ccadbe00
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef2c2000/0x0/0x4ffc00000, data 0x15c94fd/0x175c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [0,1])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d1d14400 session 0x55b3ce1261e0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cb6afc00 session 0x55b3ce58bc20
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279134208 unmapped: 45416448 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef1f3000/0x0/0x4ffc00000, data 0x169755f/0x182b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279134208 unmapped: 45416448 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45408256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45408256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3065094 data_alloc: 218103808 data_used: 9809920
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45408256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45408256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef1f3000/0x0/0x4ffc00000, data 0x169755f/0x182b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45408256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbcb4c00 session 0x55b3ce5af680
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf02c000 session 0x55b3cde57680
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279142400 unmapped: 45408256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd6f000 session 0x55b3cba94000
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd6f000 session 0x55b3ccadb0e0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279453696 unmapped: 45096960 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3068328 data_alloc: 218103808 data_used: 9809920
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279453696 unmapped: 45096960 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef1ce000/0x0/0x4ffc00000, data 0x16bb56f/0x1850000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279470080 unmapped: 45080576 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279470080 unmapped: 45080576 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279470080 unmapped: 45080576 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279470080 unmapped: 45080576 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef1ce000/0x0/0x4ffc00000, data 0x16bb56f/0x1850000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3073608 data_alloc: 218103808 data_used: 10457088
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279470080 unmapped: 45080576 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279470080 unmapped: 45080576 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.005136490s of 16.128065109s, submitted: 29
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279470080 unmapped: 45080576 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef1cc000/0x0/0x4ffc00000, data 0x16bc56f/0x1851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279470080 unmapped: 45080576 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279470080 unmapped: 45080576 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3073916 data_alloc: 218103808 data_used: 10457088
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279470080 unmapped: 45080576 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eebb8000/0x0/0x4ffc00000, data 0x1cd156f/0x1e66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279379968 unmapped: 45170688 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279257088 unmapped: 45293568 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279691264 unmapped: 44859392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279691264 unmapped: 44859392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3146050 data_alloc: 218103808 data_used: 10694656
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279691264 unmapped: 44859392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279691264 unmapped: 44859392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eea54000/0x0/0x4ffc00000, data 0x1e2d56f/0x1fc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279691264 unmapped: 44859392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279691264 unmapped: 44859392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.522329330s of 11.824616432s, submitted: 88
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279814144 unmapped: 44736512 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eea3d000/0x0/0x4ffc00000, data 0x1e4c56f/0x1fe1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3140686 data_alloc: 218103808 data_used: 10698752
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279814144 unmapped: 44736512 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279814144 unmapped: 44736512 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279814144 unmapped: 44736512 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279814144 unmapped: 44736512 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf613800 session 0x55b3cdc6cd20
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf611400 session 0x55b3cb9981e0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279822336 unmapped: 44728320 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cd84ec00 session 0x55b3cbe8dc20
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3061845 data_alloc: 218103808 data_used: 9809920
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279822336 unmapped: 44728320 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef0d8000/0x0/0x4ffc00000, data 0x15ca4fd/0x175d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279822336 unmapped: 44728320 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279822336 unmapped: 44728320 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279822336 unmapped: 44728320 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279822336 unmapped: 44728320 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3061845 data_alloc: 218103808 data_used: 9809920
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279822336 unmapped: 44728320 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef0d8000/0x0/0x4ffc00000, data 0x15ca4fd/0x175d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279822336 unmapped: 44728320 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.392840385s of 13.488458633s, submitted: 24
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf02dc00 session 0x55b3ce5af4a0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf4e7800 session 0x55b3cd3b2f00
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 279822336 unmapped: 44728320 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf02dc00 session 0x55b3cd3b30e0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271908864 unmapped: 52641792 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271908864 unmapped: 52641792 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956124 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271908864 unmapped: 52641792 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271908864 unmapped: 52641792 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efa3f000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271908864 unmapped: 52641792 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efa3f000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271908864 unmapped: 52641792 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271908864 unmapped: 52641792 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956124 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271908864 unmapped: 52641792 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efa3f000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271917056 unmapped: 52633600 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271917056 unmapped: 52633600 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271917056 unmapped: 52633600 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271917056 unmapped: 52633600 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efa3f000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956124 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271917056 unmapped: 52633600 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271917056 unmapped: 52633600 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271917056 unmapped: 52633600 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.573257446s of 15.623365402s, submitted: 15
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271925248 unmapped: 52625408 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271958016 unmapped: 52592640 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2955948 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271958016 unmapped: 52592640 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efa40000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efa40000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271958016 unmapped: 52592640 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271958016 unmapped: 52592640 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271958016 unmapped: 52592640 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271958016 unmapped: 52592640 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2955948 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271958016 unmapped: 52592640 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efa40000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271966208 unmapped: 52584448 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271966208 unmapped: 52584448 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271966208 unmapped: 52584448 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271966208 unmapped: 52584448 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2955948 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271966208 unmapped: 52584448 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271966208 unmapped: 52584448 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efa40000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271966208 unmapped: 52584448 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271966208 unmapped: 52584448 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271974400 unmapped: 52576256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2955948 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271974400 unmapped: 52576256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efa40000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271974400 unmapped: 52576256 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271982592 unmapped: 52568064 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271982592 unmapped: 52568064 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271982592 unmapped: 52568064 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2955948 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271982592 unmapped: 52568064 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271990784 unmapped: 52559872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4efa40000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271990784 unmapped: 52559872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271990784 unmapped: 52559872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 26.134544373s of 26.442840576s, submitted: 90
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdc62c00 session 0x55b3ccadba40
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd6fc00 session 0x55b3cbf23860
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfa50000 session 0x55b3cbdc7c20
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfa50000 session 0x55b3cbd421e0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d512e400 session 0x55b3cb99b860
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271990784 unmapped: 52559872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984026 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271990784 unmapped: 52559872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef6be000/0x0/0x4ffc00000, data 0x11d047b/0x1360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271990784 unmapped: 52559872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271990784 unmapped: 52559872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef6be000/0x0/0x4ffc00000, data 0x11d047b/0x1360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271990784 unmapped: 52559872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271990784 unmapped: 52559872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984026 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271990784 unmapped: 52559872 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271998976 unmapped: 52551680 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271998976 unmapped: 52551680 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271998976 unmapped: 52551680 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70d800 session 0x55b3cb9414a0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef6be000/0x0/0x4ffc00000, data 0x11d047b/0x1360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271998976 unmapped: 52551680 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984026 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 271998976 unmapped: 52551680 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef6be000/0x0/0x4ffc00000, data 0x11d047b/0x1360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272007168 unmapped: 52543488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272007168 unmapped: 52543488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272007168 unmapped: 52543488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272007168 unmapped: 52543488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3010266 data_alloc: 218103808 data_used: 6356992
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272007168 unmapped: 52543488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef6be000/0x0/0x4ffc00000, data 0x11d047b/0x1360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272007168 unmapped: 52543488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272007168 unmapped: 52543488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272007168 unmapped: 52543488 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ef6be000/0x0/0x4ffc00000, data 0x11d047b/0x1360000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 272015360 unmapped: 52535296 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.337787628s of 21.399259567s, submitted: 5
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3042632 data_alloc: 218103808 data_used: 6356992
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275701760 unmapped: 48848896 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275709952 unmapped: 48840704 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275587072 unmapped: 48963584 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275587072 unmapped: 48963584 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eee85000/0x0/0x4ffc00000, data 0x1a0947b/0x1b99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275587072 unmapped: 48963584 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3081410 data_alloc: 218103808 data_used: 6582272
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275587072 unmapped: 48963584 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eee85000/0x0/0x4ffc00000, data 0x1a0947b/0x1b99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275587072 unmapped: 48963584 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eee85000/0x0/0x4ffc00000, data 0x1a0947b/0x1b99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eee85000/0x0/0x4ffc00000, data 0x1a0947b/0x1b99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275587072 unmapped: 48963584 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eee85000/0x0/0x4ffc00000, data 0x1a0947b/0x1b99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275587072 unmapped: 48963584 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275595264 unmapped: 48955392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3081410 data_alloc: 218103808 data_used: 6582272
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275595264 unmapped: 48955392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275595264 unmapped: 48955392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eee85000/0x0/0x4ffc00000, data 0x1a0947b/0x1b99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275595264 unmapped: 48955392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275595264 unmapped: 48955392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eee85000/0x0/0x4ffc00000, data 0x1a0947b/0x1b99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275595264 unmapped: 48955392 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd6f000 session 0x55b3ce5a25a0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf02ac00 session 0x55b3d0b64b40
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf02ac00 session 0x55b3cb986960
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd6f000 session 0x55b3ce569a40
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.855088234s of 15.102807999s, submitted: 57
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3082882 data_alloc: 218103808 data_used: 6582272
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275603456 unmapped: 48947200 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70d800 session 0x55b3cfba2d20
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfa50000 session 0x55b3ce39d0e0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d512e400 session 0x55b3ce1274a0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d512e400 session 0x55b3cdc6d0e0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd6f000 session 0x55b3ce58ab40
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275857408 unmapped: 48693248 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275865600 unmapped: 48685056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee928000/0x0/0x4ffc00000, data 0x1f654dd/0x20f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275865600 unmapped: 48685056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275865600 unmapped: 48685056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee928000/0x0/0x4ffc00000, data 0x1f654dd/0x20f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3126608 data_alloc: 218103808 data_used: 6582272
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275865600 unmapped: 48685056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275865600 unmapped: 48685056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275865600 unmapped: 48685056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee928000/0x0/0x4ffc00000, data 0x1f654dd/0x20f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275865600 unmapped: 48685056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275865600 unmapped: 48685056 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd51c00 session 0x55b3cbe8c780
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3126757 data_alloc: 218103808 data_used: 6582272
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275808256 unmapped: 48742400 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.413484573s of 10.376444817s, submitted: 44
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275808256 unmapped: 48742400 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275808256 unmapped: 48742400 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275808256 unmapped: 48742400 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee927000/0x0/0x4ffc00000, data 0x1f65500/0x20f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275816448 unmapped: 48734208 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3157769 data_alloc: 218103808 data_used: 10874880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee927000/0x0/0x4ffc00000, data 0x1f65500/0x20f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275816448 unmapped: 48734208 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275816448 unmapped: 48734208 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275816448 unmapped: 48734208 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275816448 unmapped: 48734208 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee925000/0x0/0x4ffc00000, data 0x1f66500/0x20f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275816448 unmapped: 48734208 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3158077 data_alloc: 218103808 data_used: 10874880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275816448 unmapped: 48734208 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275816448 unmapped: 48734208 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee925000/0x0/0x4ffc00000, data 0x1f66500/0x20f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 275816448 unmapped: 48734208 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.610369682s of 12.620978355s, submitted: 2
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276971520 unmapped: 47579136 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276971520 unmapped: 47579136 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3201721 data_alloc: 218103808 data_used: 10940416
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277209088 unmapped: 47341568 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277209088 unmapped: 47341568 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed285000/0x0/0x4ffc00000, data 0x2467500/0x25f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277209088 unmapped: 47341568 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277209088 unmapped: 47341568 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277209088 unmapped: 47341568 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3201721 data_alloc: 218103808 data_used: 10940416
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277209088 unmapped: 47341568 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277331968 unmapped: 47218688 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277331968 unmapped: 47218688 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed268000/0x0/0x4ffc00000, data 0x2484500/0x2616000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277331968 unmapped: 47218688 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed268000/0x0/0x4ffc00000, data 0x2484500/0x2616000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed268000/0x0/0x4ffc00000, data 0x2484500/0x2616000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277340160 unmapped: 47210496 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3200861 data_alloc: 218103808 data_used: 10952704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277340160 unmapped: 47210496 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.501975060s of 12.738298416s, submitted: 59
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f4c00 session 0x55b3cb987860
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d1d15800 session 0x55b3ce126f00
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277340160 unmapped: 47210496 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd51c00 session 0x55b3ccaa0780
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277340160 unmapped: 47210496 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277340160 unmapped: 47210496 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4edce3000/0x0/0x4ffc00000, data 0x1a0a47b/0x1b9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277340160 unmapped: 47210496 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70c800 session 0x55b3cfba3680
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d135c000 session 0x55b3cbdc7680
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2974230 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276422656 unmapped: 48128000 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70d400 session 0x55b3d0b641e0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276422656 unmapped: 48128000 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276422656 unmapped: 48128000 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276430848 unmapped: 48119808 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276430848 unmapped: 48119808 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2974230 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276430848 unmapped: 48119808 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276430848 unmapped: 48119808 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276430848 unmapped: 48119808 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276430848 unmapped: 48119808 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276439040 unmapped: 48111616 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2974230 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276439040 unmapped: 48111616 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276439040 unmapped: 48111616 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276439040 unmapped: 48111616 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276439040 unmapped: 48111616 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276439040 unmapped: 48111616 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2974230 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276439040 unmapped: 48111616 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276439040 unmapped: 48111616 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276447232 unmapped: 48103424 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276447232 unmapped: 48103424 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276447232 unmapped: 48103424 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2974230 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276447232 unmapped: 48103424 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276447232 unmapped: 48103424 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276447232 unmapped: 48103424 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276447232 unmapped: 48103424 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276447232 unmapped: 48103424 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2974230 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276463616 unmapped: 48087040 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276463616 unmapped: 48087040 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276463616 unmapped: 48087040 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276471808 unmapped: 48078848 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276471808 unmapped: 48078848 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2974230 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276471808 unmapped: 48078848 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276471808 unmapped: 48078848 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276471808 unmapped: 48078848 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276480000 unmapped: 48070656 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276480000 unmapped: 48070656 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2974230 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276480000 unmapped: 48070656 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276480000 unmapped: 48070656 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276480000 unmapped: 48070656 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ee8a0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276488192 unmapped: 48062464 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276488192 unmapped: 48062464 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2974230 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276488192 unmapped: 48062464 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276496384 unmapped: 48054272 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 46.562591553s of 46.725173950s, submitted: 51
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 276496384 unmapped: 48054272 heap: 324550656 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf4e6400 session 0x55b3cbe8c1e0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf4e6400 session 0x55b3cc8025a0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd51c00 session 0x55b3ce10d2c0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70c800 session 0x55b3ce10d0e0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70d400 session 0x55b3ce5ae000
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277823488 unmapped: 50405376 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4edeb5000/0x0/0x4ffc00000, data 0x18384dd/0x19c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277823488 unmapped: 50405376 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3058685 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277823488 unmapped: 50405376 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277823488 unmapped: 50405376 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277823488 unmapped: 50405376 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277823488 unmapped: 50405376 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277831680 unmapped: 50397184 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4edeb5000/0x0/0x4ffc00000, data 0x18384dd/0x19c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3058685 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277831680 unmapped: 50397184 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277831680 unmapped: 50397184 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d512e800 session 0x55b3ce0e9a40
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277929984 unmapped: 50298880 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ede91000/0x0/0x4ffc00000, data 0x185c4dd/0x19ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.344988823s of 10.475560188s, submitted: 43
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277929984 unmapped: 50298880 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277929984 unmapped: 50298880 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063134 data_alloc: 218103808 data_used: 2838528
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 277938176 unmapped: 50290688 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278339584 unmapped: 49889280 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278339584 unmapped: 49889280 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ede91000/0x0/0x4ffc00000, data 0x185c4dd/0x19ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ede91000/0x0/0x4ffc00000, data 0x185c4dd/0x19ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278339584 unmapped: 49889280 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ede91000/0x0/0x4ffc00000, data 0x185c4dd/0x19ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278347776 unmapped: 49881088 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3134014 data_alloc: 234881024 data_used: 12783616
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278347776 unmapped: 49881088 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278347776 unmapped: 49881088 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ede91000/0x0/0x4ffc00000, data 0x185c4dd/0x19ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278347776 unmapped: 49881088 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278347776 unmapped: 49881088 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278347776 unmapped: 49881088 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ede91000/0x0/0x4ffc00000, data 0x185c4dd/0x19ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3134494 data_alloc: 234881024 data_used: 12795904
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 278347776 unmapped: 49881088 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.812140465s of 12.816347122s, submitted: 1
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284475392 unmapped: 43753472 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ebf2d000/0x0/0x4ffc00000, data 0x26204dd/0x27b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ebf2d000/0x0/0x4ffc00000, data 0x26204dd/0x27b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3255354 data_alloc: 234881024 data_used: 13996032
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ebf2d000/0x0/0x4ffc00000, data 0x26204dd/0x27b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250878 data_alloc: 234881024 data_used: 14000128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ebf09000/0x0/0x4ffc00000, data 0x26444dd/0x27d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.050657272s of 14.411172867s, submitted: 131
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ebf09000/0x0/0x4ffc00000, data 0x26444dd/0x27d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250758 data_alloc: 234881024 data_used: 14000128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284819456 unmapped: 43409408 heap: 328228864 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf02d800 session 0x55b3cba952c0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5b800 session 0x55b3ccaa0f00
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cd84ec00 session 0x55b3ce0e83c0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5fc00 session 0x55b3cdcb23c0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d174bc00 session 0x55b3ce58b4a0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 47579136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 47579136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb526000/0x0/0x4ffc00000, data 0x30274dd/0x31b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 47579136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3326714 data_alloc: 234881024 data_used: 14000128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 47579136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb523000/0x0/0x4ffc00000, data 0x302a4dd/0x31bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 47579136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 47579136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdc63800 session 0x55b3cb941e00
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 47579136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce707400 session 0x55b3d0b65a40
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 47579136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf613c00 session 0x55b3cbd43e00
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfa53c00 session 0x55b3ccaa0000
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb523000/0x0/0x4ffc00000, data 0x302a4dd/0x31bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3326262 data_alloc: 234881024 data_used: 14000128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 47579136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 47579136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284942336 unmapped: 47489024 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 287031296 unmapped: 45400064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 287031296 unmapped: 45400064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb523000/0x0/0x4ffc00000, data 0x302a4dd/0x31bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3399382 data_alloc: 234881024 data_used: 21020672
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 287031296 unmapped: 45400064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 287031296 unmapped: 45400064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 287031296 unmapped: 45400064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb523000/0x0/0x4ffc00000, data 0x302a4dd/0x31bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.038463593s of 18.175695419s, submitted: 14
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 287039488 unmapped: 45391872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 287039488 unmapped: 45391872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb521000/0x0/0x4ffc00000, data 0x302b4dd/0x31bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3400218 data_alloc: 234881024 data_used: 21020672
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 287039488 unmapped: 45391872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 287039488 unmapped: 45391872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289325056 unmapped: 43106304 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289488896 unmapped: 42942464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289554432 unmapped: 42876928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450960 data_alloc: 234881024 data_used: 21569536
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289554432 unmapped: 42876928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb08e000/0x0/0x4ffc00000, data 0x34b74dd/0x3648000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289554432 unmapped: 42876928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289554432 unmapped: 42876928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289554432 unmapped: 42876928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289554432 unmapped: 42876928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450960 data_alloc: 234881024 data_used: 21569536
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289554432 unmapped: 42876928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb08e000/0x0/0x4ffc00000, data 0x34b74dd/0x3648000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289554432 unmapped: 42876928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289554432 unmapped: 42876928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb08e000/0x0/0x4ffc00000, data 0x34b74dd/0x3648000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.170224190s of 15.374196053s, submitted: 52
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289562624 unmapped: 42868736 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb08e000/0x0/0x4ffc00000, data 0x34b74dd/0x3648000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289562624 unmapped: 42868736 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb095000/0x0/0x4ffc00000, data 0x34b84dd/0x3649000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3449236 data_alloc: 234881024 data_used: 21671936
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289562624 unmapped: 42868736 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdce5400 session 0x55b3ccadc000
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd60400 session 0x55b3ccadb4a0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289562624 unmapped: 42868736 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdce5400 session 0x55b3cfba2b40
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ebefe000/0x0/0x4ffc00000, data 0x264f4dd/0x27e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286728192 unmapped: 45703168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286728192 unmapped: 45703168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286728192 unmapped: 45703168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3259330 data_alloc: 218103808 data_used: 10498048
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286728192 unmapped: 45703168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286728192 unmapped: 45703168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd51c00 session 0x55b3ce5ae1e0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70c800 session 0x55b3cde570e0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5d400 session 0x55b3cc7ec1e0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ebef2000/0x0/0x4ffc00000, data 0x265b4dd/0x27ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996996 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996996 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996996 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996996 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996996 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996996 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996996 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 49577984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4ab000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d174bc00 session 0x55b3ce5a2960
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf613c00 session 0x55b3cbe8c960
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f5c00 session 0x55b3ce0f2000
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbcb5800 session 0x55b3cde57680
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 44.481346130s of 44.651317596s, submitted: 51
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cd84e800 session 0x55b3cba94000
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cd84e800 session 0x55b3ccada000
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbcb5800 session 0x55b3ccad6d20
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f5c00 session 0x55b3cba94b40
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf613c00 session 0x55b3cf5acf00
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282976256 unmapped: 49455104 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282976256 unmapped: 49455104 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3021392 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282976256 unmapped: 49455104 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282976256 unmapped: 49455104 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed509000/0x0/0x4ffc00000, data 0x104448b/0x11d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282984448 unmapped: 49446912 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282984448 unmapped: 49446912 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282984448 unmapped: 49446912 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed509000/0x0/0x4ffc00000, data 0x104448b/0x11d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3021392 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282984448 unmapped: 49446912 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d135c400 session 0x55b3cbf22960
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed509000/0x0/0x4ffc00000, data 0x104448b/0x11d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282935296 unmapped: 49496064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282935296 unmapped: 49496064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282943488 unmapped: 49487872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282943488 unmapped: 49487872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038361 data_alloc: 218103808 data_used: 4456448
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4e4000/0x0/0x4ffc00000, data 0x10684ae/0x11fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282943488 unmapped: 49487872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282943488 unmapped: 49487872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282943488 unmapped: 49487872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282943488 unmapped: 49487872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282943488 unmapped: 49487872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038361 data_alloc: 218103808 data_used: 4456448
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282943488 unmapped: 49487872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4e4000/0x0/0x4ffc00000, data 0x10684ae/0x11fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4e4000/0x0/0x4ffc00000, data 0x10684ae/0x11fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 282951680 unmapped: 49479680 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed4e4000/0x0/0x4ffc00000, data 0x10684ae/0x11fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.536443710s of 19.611005783s, submitted: 14
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 283099136 unmapped: 49332224 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285564928 unmapped: 46866432 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285589504 unmapped: 46841856 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095057 data_alloc: 218103808 data_used: 5337088
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285589504 unmapped: 46841856 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285589504 unmapped: 46841856 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed0f1000/0x0/0x4ffc00000, data 0x14454ae/0x15d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285589504 unmapped: 46841856 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285655040 unmapped: 46776320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284983296 unmapped: 47448064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3086421 data_alloc: 218103808 data_used: 5337088
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284983296 unmapped: 47448064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284983296 unmapped: 47448064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284983296 unmapped: 47448064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed0e6000/0x0/0x4ffc00000, data 0x14664ae/0x15f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284983296 unmapped: 47448064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284983296 unmapped: 47448064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3086741 data_alloc: 218103808 data_used: 5345280
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284983296 unmapped: 47448064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed0e6000/0x0/0x4ffc00000, data 0x14664ae/0x15f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 284991488 unmapped: 47439872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.636660576s of 14.872914314s, submitted: 73
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfa53c00 session 0x55b3ccaddc20
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d512d800 session 0x55b3ce5a3860
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70d800 session 0x55b3d0b64d20
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285499392 unmapped: 46931968 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfbc0000 session 0x55b3cc7ffc20
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f4c00 session 0x55b3ccaddc20
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285499392 unmapped: 46931968 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285499392 unmapped: 46931968 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec857000/0x0/0x4ffc00000, data 0x1cf54ae/0x1e87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3157783 data_alloc: 218103808 data_used: 5345280
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285499392 unmapped: 46931968 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285499392 unmapped: 46931968 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285499392 unmapped: 46931968 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec857000/0x0/0x4ffc00000, data 0x1cf54ae/0x1e87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5f000 session 0x55b3cbf22960
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285499392 unmapped: 46931968 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 285499392 unmapped: 46931968 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3198420 data_alloc: 218103808 data_used: 10727424
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286261248 unmapped: 46170112 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 45809664 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 45809664 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec857000/0x0/0x4ffc00000, data 0x1cf54ae/0x1e87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 45809664 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 45809664 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3214100 data_alloc: 234881024 data_used: 12992512
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 45809664 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 45809664 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 45809664 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec857000/0x0/0x4ffc00000, data 0x1cf54ae/0x1e87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 45809664 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.728010178s of 16.851394653s, submitted: 25
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 45809664 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3336672 data_alloc: 234881024 data_used: 14049280
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 36315136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 36978688 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb921000/0x0/0x4ffc00000, data 0x2c2a4ae/0x2dbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb921000/0x0/0x4ffc00000, data 0x2c2a4ae/0x2dbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3340164 data_alloc: 234881024 data_used: 14286848
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb91f000/0x0/0x4ffc00000, data 0x2c2d4ae/0x2dbf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3337508 data_alloc: 234881024 data_used: 14286848
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 36921344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70c800 session 0x55b3ccada000
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.508075714s of 14.903322220s, submitted: 136
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d1d15c00 session 0x55b3cb940f00
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290496512 unmapped: 41934848 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70d800 session 0x55b3ccadc1e0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed0e0000/0x0/0x4ffc00000, data 0x146c4ae/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3102246 data_alloc: 218103808 data_used: 5394432
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290496512 unmapped: 41934848 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290504704 unmapped: 41926656 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed0d9000/0x0/0x4ffc00000, data 0x14734ae/0x1605000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290504704 unmapped: 41926656 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbcb5800 session 0x55b3cc7ff2c0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f5c00 session 0x55b3cba95c20
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d2ff1400 session 0x55b3cde57e00
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019668 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019668 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019668 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290512896 unmapped: 41918464 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290521088 unmapped: 41910272 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290521088 unmapped: 41910272 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290521088 unmapped: 41910272 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290521088 unmapped: 41910272 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019668 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290521088 unmapped: 41910272 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290521088 unmapped: 41910272 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290521088 unmapped: 41910272 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290529280 unmapped: 41902080 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290529280 unmapped: 41902080 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019668 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290529280 unmapped: 41902080 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290529280 unmapped: 41902080 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290537472 unmapped: 41893888 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290537472 unmapped: 41893888 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290537472 unmapped: 41893888 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019668 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290537472 unmapped: 41893888 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290537472 unmapped: 41893888 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290537472 unmapped: 41893888 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290537472 unmapped: 41893888 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290537472 unmapped: 41893888 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019668 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290545664 unmapped: 41885696 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed200000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290545664 unmapped: 41885696 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290545664 unmapped: 41885696 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290545664 unmapped: 41885696 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 39.965858459s of 40.134490967s, submitted: 59
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf610800 session 0x55b3ce5a32c0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ccbbec00 session 0x55b3ce58a5a0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfbc1c00 session 0x55b3ce39cd20
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfbc1c00 session 0x55b3ce126780
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f5c00 session 0x55b3cc5990e0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290545664 unmapped: 41885696 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3030916 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290545664 unmapped: 41885696 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed576000/0x0/0x4ffc00000, data 0xfd847b/0x1168000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 41877504 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 41877504 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 41877504 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 41877504 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3030916 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 41877504 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed576000/0x0/0x4ffc00000, data 0xfd847b/0x1168000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 41877504 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d512c000 session 0x55b3cd3b2000
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290553856 unmapped: 41877504 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290562048 unmapped: 41869312 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290562048 unmapped: 41869312 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed552000/0x0/0x4ffc00000, data 0xffc47b/0x118c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045352 data_alloc: 218103808 data_used: 4300800
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290562048 unmapped: 41869312 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290562048 unmapped: 41869312 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed552000/0x0/0x4ffc00000, data 0xffc47b/0x118c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290562048 unmapped: 41869312 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290562048 unmapped: 41869312 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290562048 unmapped: 41869312 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed552000/0x0/0x4ffc00000, data 0xffc47b/0x118c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045352 data_alloc: 218103808 data_used: 4300800
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290562048 unmapped: 41869312 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed552000/0x0/0x4ffc00000, data 0xffc47b/0x118c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290562048 unmapped: 41869312 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed552000/0x0/0x4ffc00000, data 0xffc47b/0x118c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290570240 unmapped: 41861120 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 290570240 unmapped: 41861120 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.726144791s of 19.748613358s, submitted: 2
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291405824 unmapped: 41025536 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3134708 data_alloc: 218103808 data_used: 5177344
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ecc45000/0x0/0x4ffc00000, data 0x190947b/0x1a99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3134868 data_alloc: 218103808 data_used: 5181440
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ecc45000/0x0/0x4ffc00000, data 0x190947b/0x1a99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ecc45000/0x0/0x4ffc00000, data 0x190947b/0x1a99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3135188 data_alloc: 218103808 data_used: 5189632
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ecc45000/0x0/0x4ffc00000, data 0x190947b/0x1a99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d135dc00 session 0x55b3cba943c0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5c000 session 0x55b3ce5aeb40
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5c000 session 0x55b3ce568960
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f5c00 session 0x55b3ce58b680
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.785821915s of 15.976529121s, submitted: 45
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cfbc1c00 session 0x55b3ce58ab40
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d135dc00 session 0x55b3ce569a40
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d512c000 session 0x55b3cbe8da40
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d512c000 session 0x55b3cfba3a40
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f5c00 session 0x55b3ce58a960
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153364 data_alloc: 218103808 data_used: 5189632
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec945000/0x0/0x4ffc00000, data 0x1c0947b/0x1d99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292454400 unmapped: 39976960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292462592 unmapped: 39968768 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153364 data_alloc: 218103808 data_used: 5189632
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292462592 unmapped: 39968768 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d174c000 session 0x55b3cba95a40
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292618240 unmapped: 39813120 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec921000/0x0/0x4ffc00000, data 0x1c2d47b/0x1dbd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 39804928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 39804928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 39804928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3180149 data_alloc: 218103808 data_used: 8339456
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 39804928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec921000/0x0/0x4ffc00000, data 0x1c2d47b/0x1dbd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 39804928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 39804928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 39804928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 39804928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3180149 data_alloc: 218103808 data_used: 8339456
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.082937241s of 16.159908295s, submitted: 8
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 39804928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 39804928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec91f000/0x0/0x4ffc00000, data 0x1c2e47b/0x1dbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 39804928 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293462016 unmapped: 38969344 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293478400 unmapped: 38952960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec26e000/0x0/0x4ffc00000, data 0x22e047b/0x2470000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3241169 data_alloc: 218103808 data_used: 8445952
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293478400 unmapped: 38952960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293478400 unmapped: 38952960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293478400 unmapped: 38952960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293478400 unmapped: 38952960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293478400 unmapped: 38952960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3241169 data_alloc: 218103808 data_used: 8445952
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec26e000/0x0/0x4ffc00000, data 0x22e047b/0x2470000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293478400 unmapped: 38952960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293478400 unmapped: 38952960 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293486592 unmapped: 38944768 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec26e000/0x0/0x4ffc00000, data 0x22e047b/0x2470000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293486592 unmapped: 38944768 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293486592 unmapped: 38944768 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3241169 data_alloc: 218103808 data_used: 8445952
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293486592 unmapped: 38944768 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec26e000/0x0/0x4ffc00000, data 0x22e047b/0x2470000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293486592 unmapped: 38944768 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293486592 unmapped: 38944768 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5e000 session 0x55b3cba945a0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.404949188s of 17.572023392s, submitted: 33
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d174b800 session 0x55b3cc599c20
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293502976 unmapped: 38928384 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f5c00 session 0x55b3ccadd860
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293519360 unmapped: 38912000 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3141441 data_alloc: 218103808 data_used: 5251072
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293519360 unmapped: 38912000 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293527552 unmapped: 38903808 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70dc00 session 0x55b3cc7ec5a0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cd84f400 session 0x55b3cbd43a40
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ecc44000/0x0/0x4ffc00000, data 0x190a47b/0x1a9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5e000 session 0x55b3cba952c0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed700000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3034629 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed700000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3034629 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed700000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed700000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3034629 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed700000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed700000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3034629 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289751040 unmapped: 42680320 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed700000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3034629 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289759232 unmapped: 42672128 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289759232 unmapped: 42672128 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289759232 unmapped: 42672128 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed700000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289759232 unmapped: 42672128 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289759232 unmapped: 42672128 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3034629 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289759232 unmapped: 42672128 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289759232 unmapped: 42672128 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289759232 unmapped: 42672128 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed700000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed700000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289767424 unmapped: 42663936 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289767424 unmapped: 42663936 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 36.787899017s of 36.916973114s, submitted: 30
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf615400 session 0x55b3ce5a21e0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd50c00 session 0x55b3cb23e5a0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf615400 session 0x55b3cbe3ed20
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f5c00 session 0x55b3cba94d20
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cd84f400 session 0x55b3ce58ba40
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075374 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289767424 unmapped: 42663936 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289767424 unmapped: 42663936 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289767424 unmapped: 42663936 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f5000/0x0/0x4ffc00000, data 0x12584dd/0x13e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289767424 unmapped: 42663936 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289767424 unmapped: 42663936 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075374 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289775616 unmapped: 42655744 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdc62c00 session 0x55b3cbdc7860
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdc62c00 session 0x55b3ce1274a0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289775616 unmapped: 42655744 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd50c00 session 0x55b3ce5a30e0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf6f4c00 session 0x55b3cfba25a0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289775616 unmapped: 42655744 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289775616 unmapped: 42655744 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f4000/0x0/0x4ffc00000, data 0x12584ed/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289775616 unmapped: 42655744 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f4000/0x0/0x4ffc00000, data 0x12584ed/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3096732 data_alloc: 218103808 data_used: 5402624
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289792000 unmapped: 42639360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289792000 unmapped: 42639360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289792000 unmapped: 42639360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f4000/0x0/0x4ffc00000, data 0x12584ed/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289792000 unmapped: 42639360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289792000 unmapped: 42639360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3096732 data_alloc: 218103808 data_used: 5402624
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289792000 unmapped: 42639360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289792000 unmapped: 42639360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f4000/0x0/0x4ffc00000, data 0x12584ed/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289792000 unmapped: 42639360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 289792000 unmapped: 42639360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.343187332s of 19.452217102s, submitted: 33
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292077568 unmapped: 40353792 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3187434 data_alloc: 218103808 data_used: 5513216
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292077568 unmapped: 40353792 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec6fa000/0x0/0x4ffc00000, data 0x1e4a4ed/0x1fdc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291241984 unmapped: 41189376 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec6fd000/0x0/0x4ffc00000, data 0x1e4e4ed/0x1fe0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291241984 unmapped: 41189376 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291241984 unmapped: 41189376 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291241984 unmapped: 41189376 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3189592 data_alloc: 218103808 data_used: 5500928
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291241984 unmapped: 41189376 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291241984 unmapped: 41189376 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec661000/0x0/0x4ffc00000, data 0x1eeb4ed/0x207d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291250176 unmapped: 41181184 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291250176 unmapped: 41181184 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291250176 unmapped: 41181184 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188172 data_alloc: 218103808 data_used: 5505024
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291250176 unmapped: 41181184 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec661000/0x0/0x4ffc00000, data 0x1eeb4ed/0x207d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291250176 unmapped: 41181184 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291258368 unmapped: 41172992 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec661000/0x0/0x4ffc00000, data 0x1eeb4ed/0x207d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291258368 unmapped: 41172992 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291258368 unmapped: 41172992 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188172 data_alloc: 218103808 data_used: 5505024
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec661000/0x0/0x4ffc00000, data 0x1eeb4ed/0x207d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291258368 unmapped: 41172992 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.344091415s of 16.666501999s, submitted: 103
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5f000 session 0x55b3cb99b860
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70d000 session 0x55b3cb986960
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ce70d000 session 0x55b3cbe8c780
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd50c00 session 0x55b3cb987860
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291258368 unmapped: 41172992 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdc62c00 session 0x55b3cdcb25a0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5f000 session 0x55b3cb9863c0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf6f4c00 session 0x55b3cfba2f00
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf6f4c00 session 0x55b3cbe3e960
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd50c00 session 0x55b3cbdc6d20
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291266560 unmapped: 41164800 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec5d2000/0x0/0x4ffc00000, data 0x1f7a4ed/0x210c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291266560 unmapped: 41164800 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291266560 unmapped: 41164800 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3199049 data_alloc: 218103808 data_used: 5505024
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291266560 unmapped: 41164800 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291266560 unmapped: 41164800 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291266560 unmapped: 41164800 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291266560 unmapped: 41164800 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec5d2000/0x0/0x4ffc00000, data 0x1f7a4ed/0x210c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf614c00 session 0x55b3cfba3e00
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291282944 unmapped: 41148416 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3199181 data_alloc: 218103808 data_used: 5505024
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291323904 unmapped: 41107456 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec5d2000/0x0/0x4ffc00000, data 0x1f7a4ed/0x210c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291323904 unmapped: 41107456 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291323904 unmapped: 41107456 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291323904 unmapped: 41107456 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec5d2000/0x0/0x4ffc00000, data 0x1f7a4ed/0x210c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291323904 unmapped: 41107456 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3202541 data_alloc: 218103808 data_used: 6029312
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291323904 unmapped: 41107456 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291323904 unmapped: 41107456 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.294864655s of 16.384281158s, submitted: 18
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291323904 unmapped: 41107456 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec5d2000/0x0/0x4ffc00000, data 0x1f7a4ed/0x210c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291323904 unmapped: 41107456 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291332096 unmapped: 41099264 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec5d2000/0x0/0x4ffc00000, data 0x1f7a4ed/0x210c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3202957 data_alloc: 218103808 data_used: 6066176
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292683776 unmapped: 39747584 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292012032 unmapped: 40419328 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292478976 unmapped: 39952384 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292495360 unmapped: 39936000 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec1ae000/0x0/0x4ffc00000, data 0x23984ed/0x252a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 39919616 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3244059 data_alloc: 218103808 data_used: 6135808
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 39919616 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 39919616 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 39919616 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 39919616 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec194000/0x0/0x4ffc00000, data 0x23aa4ed/0x253c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 39919616 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3244059 data_alloc: 218103808 data_used: 6135808
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 39919616 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 37K writes, 148K keys, 37K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s#012Cumulative WAL: 37K writes, 13K syncs, 2.73 writes per sync, written: 0.14 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2257 writes, 9325 keys, 2257 commit groups, 1.0 writes per commit group, ingest: 11.79 MB, 0.02 MB/s#012Interval WAL: 2257 writes, 876 syncs, 2.58 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 39919616 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 39919616 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 39919616 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292511744 unmapped: 39919616 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec194000/0x0/0x4ffc00000, data 0x23aa4ed/0x253c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3244059 data_alloc: 218103808 data_used: 6135808
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 39911424 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec194000/0x0/0x4ffc00000, data 0x23aa4ed/0x253c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 39911424 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 39911424 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 39911424 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292528128 unmapped: 39903232 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3244059 data_alloc: 218103808 data_used: 6135808
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292528128 unmapped: 39903232 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec194000/0x0/0x4ffc00000, data 0x23aa4ed/0x253c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292528128 unmapped: 39903232 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292528128 unmapped: 39903232 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec194000/0x0/0x4ffc00000, data 0x23aa4ed/0x253c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292536320 unmapped: 39895040 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292536320 unmapped: 39895040 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3244059 data_alloc: 218103808 data_used: 6135808
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292536320 unmapped: 39895040 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec194000/0x0/0x4ffc00000, data 0x23aa4ed/0x253c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292536320 unmapped: 39895040 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 29.300401688s of 30.381093979s, submitted: 46
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ccb61400 session 0x55b3cbf234a0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdc63000 session 0x55b3cde56b40
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292536320 unmapped: 39895040 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd50c00 session 0x55b3cb99b2c0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292544512 unmapped: 39886848 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292544512 unmapped: 39886848 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3194413 data_alloc: 218103808 data_used: 5505024
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292544512 unmapped: 39886848 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf02ac00 session 0x55b3ce0f3860
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3d174b000 session 0x55b3cba95a40
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292544512 unmapped: 39886848 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3ccb61400 session 0x55b3ccadd680
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed6ff000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291127296 unmapped: 41304064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291127296 unmapped: 41304064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291127296 unmapped: 41304064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3051140 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291127296 unmapped: 41304064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291127296 unmapped: 41304064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed6ff000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291127296 unmapped: 41304064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291135488 unmapped: 41295872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291135488 unmapped: 41295872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3051140 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291135488 unmapped: 41295872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291135488 unmapped: 41295872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed6ff000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291135488 unmapped: 41295872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291135488 unmapped: 41295872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291135488 unmapped: 41295872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3051140 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291135488 unmapped: 41295872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed6ff000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291143680 unmapped: 41287680 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed6ff000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291143680 unmapped: 41287680 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed6ff000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291143680 unmapped: 41287680 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291143680 unmapped: 41287680 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3051140 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291151872 unmapped: 41279488 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed6ff000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291151872 unmapped: 41279488 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 24.373273849s of 24.570438385s, submitted: 43
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdce5c00 session 0x55b3d0b64960
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdce5c00 session 0x55b3cb998960
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd50c00 session 0x55b3cba943c0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cc7f5000 session 0x55b3cb999680
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdc63c00 session 0x55b3ce39d860
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292216832 unmapped: 40214528 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292216832 unmapped: 40214528 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5c000 session 0x55b3ccadad20
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292159488 unmapped: 40271872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3099194 data_alloc: 218103808 data_used: 2686976
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed1fc000/0x0/0x4ffc00000, data 0x1350500/0x14e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292167680 unmapped: 40263680 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292184064 unmapped: 40247296 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cdd5c000 session 0x55b3ce39dc20
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292184064 unmapped: 40247296 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cbd50c00 session 0x55b3ce126780
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 ms_handle_reset con 0x55b3cf613800 session 0x55b3cf5ade00
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ece6e000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057926 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ece6e000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057926 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ece6e000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057926 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.473459244s of 18.660179138s, submitted: 60
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291790848 unmapped: 40640512 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291807232 unmapped: 40624128 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057574 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057574 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057574 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291840000 unmapped: 40591360 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291848192 unmapped: 40583168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291848192 unmapped: 40583168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291848192 unmapped: 40583168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291848192 unmapped: 40583168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057574 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291848192 unmapped: 40583168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291848192 unmapped: 40583168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291848192 unmapped: 40583168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291856384 unmapped: 40574976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291856384 unmapped: 40574976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057574 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291856384 unmapped: 40574976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291864576 unmapped: 40566784 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291864576 unmapped: 40566784 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291864576 unmapped: 40566784 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291864576 unmapped: 40566784 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057574 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291864576 unmapped: 40566784 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291864576 unmapped: 40566784 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291864576 unmapped: 40566784 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291864576 unmapped: 40566784 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291864576 unmapped: 40566784 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057574 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291872768 unmapped: 40558592 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291880960 unmapped: 40550400 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291880960 unmapped: 40550400 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291880960 unmapped: 40550400 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291880960 unmapped: 40550400 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057574 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291880960 unmapped: 40550400 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291880960 unmapped: 40550400 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291880960 unmapped: 40550400 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291889152 unmapped: 40542208 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291889152 unmapped: 40542208 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057574 data_alloc: 218103808 data_used: 2682880
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291889152 unmapped: 40542208 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291889152 unmapped: 40542208 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291889152 unmapped: 40542208 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 47.734951019s of 48.040054321s, submitted: 90
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 293 heartbeat osd_stat(store_statfs(0x4ed2f0000/0x0/0x4ffc00000, data 0xe4e47b/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 293 ms_handle_reset con 0x55b3cbd50000 session 0x55b3cba943c0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291897344 unmapped: 40534016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291905536 unmapped: 40525824 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3061748 data_alloc: 218103808 data_used: 2691072
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291905536 unmapped: 40525824 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291905536 unmapped: 40525824 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 293 heartbeat osd_stat(store_statfs(0x4ed2ec000/0x0/0x4ffc00000, data 0xe5004c/0xfe1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291921920 unmapped: 40509440 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291921920 unmapped: 40509440 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291938304 unmapped: 40493056 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291946496 unmapped: 40484864 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291954688 unmapped: 40476672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291954688 unmapped: 40476672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291954688 unmapped: 40476672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291954688 unmapped: 40476672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291954688 unmapped: 40476672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291954688 unmapped: 40476672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291954688 unmapped: 40476672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291971072 unmapped: 40460288 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291971072 unmapped: 40460288 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291971072 unmapped: 40460288 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291971072 unmapped: 40460288 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291971072 unmapped: 40460288 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291971072 unmapped: 40460288 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291971072 unmapped: 40460288 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291979264 unmapped: 40452096 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291979264 unmapped: 40452096 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291979264 unmapped: 40452096 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291987456 unmapped: 40443904 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291987456 unmapped: 40443904 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291987456 unmapped: 40443904 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291987456 unmapped: 40443904 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291987456 unmapped: 40443904 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291987456 unmapped: 40443904 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291995648 unmapped: 40435712 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 291995648 unmapped: 40435712 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292012032 unmapped: 40419328 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292012032 unmapped: 40419328 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292012032 unmapped: 40419328 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292012032 unmapped: 40419328 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292012032 unmapped: 40419328 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292012032 unmapped: 40419328 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292020224 unmapped: 40411136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292020224 unmapped: 40411136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292028416 unmapped: 40402944 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292028416 unmapped: 40402944 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292028416 unmapped: 40402944 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292028416 unmapped: 40402944 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292028416 unmapped: 40402944 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292028416 unmapped: 40402944 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292028416 unmapped: 40402944 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292028416 unmapped: 40402944 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292036608 unmapped: 40394752 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292036608 unmapped: 40394752 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292044800 unmapped: 40386560 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292044800 unmapped: 40386560 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292044800 unmapped: 40386560 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292044800 unmapped: 40386560 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292044800 unmapped: 40386560 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292061184 unmapped: 40370176 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292061184 unmapped: 40370176 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292069376 unmapped: 40361984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292069376 unmapped: 40361984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292069376 unmapped: 40361984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292069376 unmapped: 40361984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292069376 unmapped: 40361984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292069376 unmapped: 40361984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292085760 unmapped: 40345600 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292085760 unmapped: 40345600 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292085760 unmapped: 40345600 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292085760 unmapped: 40345600 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292085760 unmapped: 40345600 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292085760 unmapped: 40345600 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292085760 unmapped: 40345600 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292085760 unmapped: 40345600 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292093952 unmapped: 40337408 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292093952 unmapped: 40337408 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292093952 unmapped: 40337408 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292093952 unmapped: 40337408 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292093952 unmapped: 40337408 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292093952 unmapped: 40337408 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292093952 unmapped: 40337408 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292093952 unmapped: 40337408 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292118528 unmapped: 40312832 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292126720 unmapped: 40304640 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292126720 unmapped: 40304640 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292126720 unmapped: 40304640 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292126720 unmapped: 40304640 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292126720 unmapped: 40304640 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292126720 unmapped: 40304640 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292126720 unmapped: 40304640 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292134912 unmapped: 40296448 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292134912 unmapped: 40296448 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292134912 unmapped: 40296448 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292134912 unmapped: 40296448 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292134912 unmapped: 40296448 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292134912 unmapped: 40296448 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292134912 unmapped: 40296448 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292143104 unmapped: 40288256 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 40280064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 40280064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 40280064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 40280064 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292159488 unmapped: 40271872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292159488 unmapped: 40271872 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292167680 unmapped: 40263680 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292167680 unmapped: 40263680 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292175872 unmapped: 40255488 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292175872 unmapped: 40255488 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292184064 unmapped: 40247296 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292184064 unmapped: 40247296 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292184064 unmapped: 40247296 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292184064 unmapped: 40247296 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292184064 unmapped: 40247296 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2e9000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292200448 unmapped: 40230912 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064722 data_alloc: 218103808 data_used: 2691072
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292208640 unmapped: 40222720 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 113.866653442s of 113.942031860s, submitted: 31
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 294 ms_handle_reset con 0x55b3cdd5f800 session 0x55b3ccadd680
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 292208640 unmapped: 40222720 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 294 ms_handle_reset con 0x55b3cbd50000 session 0x55b3cb99b2c0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 299098112 unmapped: 33333248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 299098112 unmapped: 33333248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2ea000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 299098112 unmapped: 33333248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3060002 data_alloc: 234881024 data_used: 11603968
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 299098112 unmapped: 33333248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 299098112 unmapped: 33333248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed2ea000/0x0/0x4ffc00000, data 0xe51aaf/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 299098112 unmapped: 33333248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 299098112 unmapped: 33333248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 299098112 unmapped: 33333248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3060002 data_alloc: 234881024 data_used: 11603968
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 299130880 unmapped: 33300480 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 295 ms_handle_reset con 0x55b3cf4e7400 session 0x55b3cfba25a0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293896192 unmapped: 38535168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 295 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e365d/0x376000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293896192 unmapped: 38535168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.775878906s of 11.974079132s, submitted: 56
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 295 handle_osd_map epochs [295,296], i have 295, src has [1,296]
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293896192 unmapped: 38535168 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 296 ms_handle_reset con 0x55b3cdd5cc00 session 0x55b3cc5990e0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 38526976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2954206 data_alloc: 218103808 data_used: 151552
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 38526976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 38526976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf54000/0x0/0x4ffc00000, data 0x1e520b/0x378000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 38526976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 296 handle_osd_map epochs [296,297], i have 296, src has [1,297]
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 38526976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 38526976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956988 data_alloc: 218103808 data_used: 151552
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 38526976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 297 heartbeat osd_stat(store_statfs(0x4edf52000/0x0/0x4ffc00000, data 0x1e6c6e/0x37b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 38526976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 38526976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 38526976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 38526976 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.541853905s of 11.648766518s, submitted: 41
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2959762 data_alloc: 218103808 data_used: 151552
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 298 ms_handle_reset con 0x55b3cd84f000 session 0x55b3ce5aed20
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293937152 unmapped: 38494208 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 38486016 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293953536 unmapped: 38477824 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293953536 unmapped: 38477824 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 38469632 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293969920 unmapped: 38461440 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293969920 unmapped: 38461440 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293969920 unmapped: 38461440 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293978112 unmapped: 38453248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293978112 unmapped: 38453248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293978112 unmapped: 38453248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293978112 unmapped: 38453248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293978112 unmapped: 38453248 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 293986304 unmapped: 38445056 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 38428672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 38428672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 38428672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 38428672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 38428672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 38428672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 38428672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 38428672 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 38420480 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 38420480 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294019072 unmapped: 38412288 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294019072 unmapped: 38412288 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294027264 unmapped: 38404096 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294027264 unmapped: 38404096 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294027264 unmapped: 38404096 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294027264 unmapped: 38404096 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 38387712 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 38387712 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 38387712 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 38387712 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 38387712 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 38387712 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 38387712 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 38387712 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294051840 unmapped: 38379520 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294051840 unmapped: 38379520 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294051840 unmapped: 38379520 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294051840 unmapped: 38379520 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294051840 unmapped: 38379520 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294051840 unmapped: 38379520 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294051840 unmapped: 38379520 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294060032 unmapped: 38371328 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294068224 unmapped: 38363136 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294076416 unmapped: 38354944 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294084608 unmapped: 38346752 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 38338560 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 38338560 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 38338560 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 38338560 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 38338560 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 38338560 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294100992 unmapped: 38330368 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294100992 unmapped: 38330368 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294100992 unmapped: 38330368 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294100992 unmapped: 38330368 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294100992 unmapped: 38330368 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294100992 unmapped: 38330368 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294100992 unmapped: 38330368 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294109184 unmapped: 38322176 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294117376 unmapped: 38313984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294117376 unmapped: 38313984 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 38305792 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 38305792 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 38305792 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 38305792 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 38305792 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294133760 unmapped: 38297600 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edf4d000/0x0/0x4ffc00000, data 0x1e8831/0x380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294133760 unmapped: 38297600 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 294133760 unmapped: 38297600 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965472 data_alloc: 218103808 data_used: 159744
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 106.011367798s of 106.076034546s, submitted: 19
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295182336 unmapped: 37249024 heap: 332431360 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295215104 unmapped: 54001664 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 299 ms_handle_reset con 0x55b3d135dc00 session 0x55b3cb940b40
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed748000/0x0/0x4ffc00000, data 0x9ea3f4/0xb85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295247872 unmapped: 53968896 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295247872 unmapped: 53968896 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 300 ms_handle_reset con 0x55b3d135dc00 session 0x55b3cbe8c000
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295264256 unmapped: 53952512 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3035247 data_alloc: 218103808 data_used: 167936
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295264256 unmapped: 53952512 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295264256 unmapped: 53952512 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295264256 unmapped: 53952512 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed743000/0x0/0x4ffc00000, data 0x9ebf94/0xb89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295264256 unmapped: 53952512 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295272448 unmapped: 53944320 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3035247 data_alloc: 218103808 data_used: 167936
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295272448 unmapped: 53944320 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295280640 unmapped: 53936128 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295280640 unmapped: 53936128 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed743000/0x0/0x4ffc00000, data 0x9ebf94/0xb89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295288832 unmapped: 53927936 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 53919744 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3035247 data_alloc: 218103808 data_used: 167936
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 53919744 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed743000/0x0/0x4ffc00000, data 0x9ebf94/0xb89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 53919744 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 53919744 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed743000/0x0/0x4ffc00000, data 0x9ebf94/0xb89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 53919744 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 53919744 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3035247 data_alloc: 218103808 data_used: 167936
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 53919744 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295305216 unmapped: 53911552 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295305216 unmapped: 53911552 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295305216 unmapped: 53911552 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed743000/0x0/0x4ffc00000, data 0x9ebf94/0xb89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295305216 unmapped: 53911552 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3035247 data_alloc: 218103808 data_used: 167936
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295305216 unmapped: 53911552 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295305216 unmapped: 53911552 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed743000/0x0/0x4ffc00000, data 0x9ebf94/0xb89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295305216 unmapped: 53911552 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295305216 unmapped: 53911552 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295321600 unmapped: 53895168 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3035247 data_alloc: 218103808 data_used: 167936
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 29.694004059s of 29.852605820s, submitted: 37
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 53886976 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 301 ms_handle_reset con 0x55b3cf60e000 session 0x55b3cbd825a0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295354368 unmapped: 53862400 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295354368 unmapped: 53862400 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 301 heartbeat osd_stat(store_statfs(0x4ed742000/0x0/0x4ffc00000, data 0x9edb1f/0xb8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 53854208 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 53854208 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3035515 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 53854208 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 301 heartbeat osd_stat(store_statfs(0x4ed742000/0x0/0x4ffc00000, data 0x9edb1f/0xb8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 53854208 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295370752 unmapped: 53846016 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 301 heartbeat osd_stat(store_statfs(0x4ed742000/0x0/0x4ffc00000, data 0x9edb1f/0xb8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295395328 unmapped: 53821440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295395328 unmapped: 53821440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295395328 unmapped: 53821440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295395328 unmapped: 53821440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295395328 unmapped: 53821440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295395328 unmapped: 53821440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295395328 unmapped: 53821440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 53813248 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 53813248 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 53813248 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295411712 unmapped: 53805056 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295411712 unmapped: 53805056 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295411712 unmapped: 53805056 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295411712 unmapped: 53805056 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295411712 unmapped: 53805056 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295428096 unmapped: 53788672 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295428096 unmapped: 53788672 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 53780480 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 53780480 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 53780480 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 53780480 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 53780480 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 53780480 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 53764096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 53764096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 53764096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 53764096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 53764096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 53764096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 53764096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 53755904 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 53755904 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 53755904 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 53755904 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 53755904 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295469056 unmapped: 53747712 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295469056 unmapped: 53747712 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295469056 unmapped: 53747712 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295477248 unmapped: 53739520 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 53731328 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 53731328 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 53731328 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 53731328 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 53731328 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 53731328 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 53731328 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295493632 unmapped: 53723136 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295493632 unmapped: 53723136 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295493632 unmapped: 53723136 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295501824 unmapped: 53714944 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295501824 unmapped: 53714944 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295501824 unmapped: 53714944 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295501824 unmapped: 53714944 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 53706752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 53706752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295518208 unmapped: 53698560 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 53690368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 53690368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 53690368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 53690368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 53690368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 53690368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 53682176 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 53682176 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 53682176 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 53682176 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 53682176 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295550976 unmapped: 53665792 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295550976 unmapped: 53665792 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295550976 unmapped: 53665792 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295559168 unmapped: 53657600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295559168 unmapped: 53657600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295567360 unmapped: 53649408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295567360 unmapped: 53649408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295567360 unmapped: 53649408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295567360 unmapped: 53649408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295567360 unmapped: 53649408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295567360 unmapped: 53649408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 53633024 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 53633024 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 53633024 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 53633024 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 53633024 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 53633024 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295591936 unmapped: 53624832 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295600128 unmapped: 53616640 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295600128 unmapped: 53616640 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295600128 unmapped: 53616640 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295608320 unmapped: 53608448 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295608320 unmapped: 53608448 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295616512 unmapped: 53600256 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295616512 unmapped: 53600256 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295616512 unmapped: 53600256 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295616512 unmapped: 53600256 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 53592064 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 53583872 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 53583872 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 53583872 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 53583872 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 53583872 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 53583872 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 53583872 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295641088 unmapped: 53575680 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295641088 unmapped: 53575680 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295641088 unmapped: 53575680 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295649280 unmapped: 53567488 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295649280 unmapped: 53567488 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295649280 unmapped: 53567488 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295649280 unmapped: 53567488 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295649280 unmapped: 53567488 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 53559296 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 53559296 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 53551104 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 53551104 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 53551104 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 53551104 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 53551104 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295673856 unmapped: 53542912 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 53526528 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 53526528 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 53526528 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 53526528 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 53518336 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 53518336 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 53518336 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 53518336 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295714816 unmapped: 53501952 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295714816 unmapped: 53501952 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 53493760 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 53493760 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 53493760 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 53493760 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 53469184 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 53469184 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 53469184 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 53469184 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 53469184 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 53469184 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 53469184 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 53460992 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 53460992 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 53444608 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 53444608 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 53444608 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 53444608 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 53444608 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 53444608 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 53444608 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 53444608 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295780352 unmapped: 53436416 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295780352 unmapped: 53436416 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 53428224 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 53428224 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 53428224 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 53428224 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 53420032 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295804928 unmapped: 53411840 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 53403648 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 53403648 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 53403648 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 53403648 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 53395456 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 53395456 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 53395456 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 53395456 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 53387264 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 53387264 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 53387264 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 53387264 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 53379072 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 53379072 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 53379072 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 53362688 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 53362688 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 53362688 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 53362688 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295870464 unmapped: 53346304 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 53338112 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 53338112 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 53338112 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 53338112 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 53338112 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 53338112 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 53329920 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 53321728 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 53321728 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 53321728 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 53321728 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 5400.1 total, 600.0 interval
Cumulative writes: 38K writes, 150K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s
Cumulative WAL: 38K writes, 14K syncs, 2.71 writes per sync, written: 0.15 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 818 writes, 2112 keys, 818 commit groups, 1.0 writes per commit group, ingest: 1.15 MB, 0.00 MB/s
Interval WAL: 818 writes, 377 syncs, 2.17 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 53321728 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 53321728 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 53305344 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 53305344 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: mgrc ms_handle_reset ms_handle_reset con 0x55b3d1d16400
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3625056923
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3625056923,v1:192.168.122.100:6801/3625056923]
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: mgrc handle_mgr_configure stats_period=5
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 53493760 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 53493760 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 53493760 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 53493760 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 53493760 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 53493760 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 53493760 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295739392 unmapped: 53477376 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295739392 unmapped: 53477376 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295739392 unmapped: 53477376 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295739392 unmapped: 53477376 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295739392 unmapped: 53477376 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295739392 unmapped: 53477376 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 53460992 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 53460992 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 53460992 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 53460992 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 53460992 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 53460992 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 53452800 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 53452800 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 53452800 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 53444608 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295780352 unmapped: 53436416 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 53428224 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 53428224 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 53428224 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 53428224 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 53428224 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 53420032 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295804928 unmapped: 53411840 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 53403648 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 53403648 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 53403648 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 53403648 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 53403648 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 53395456 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 53387264 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 53387264 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 53387264 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 53387264 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 53387264 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038297 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 53387264 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 53387264 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed740000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 53379072 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 278.443786621s of 278.633789062s, submitted: 64
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295870464 unmapped: 53346304 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 53305344 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 53305344 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 53305344 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 53305344 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 53305344 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 53305344 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 53305344 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 53297152 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 53297152 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 53297152 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 53297152 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 53297152 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 53297152 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 53297152 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 53280768 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 53280768 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 53280768 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 53272576 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 53272576 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 53272576 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 53264384 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 53264384 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 53264384 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 53264384 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 53264384 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 53264384 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 53256192 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 53256192 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 53256192 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 53256192 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 53256192 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 53256192 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 53256192 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 53256192 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 53248000 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 53248000 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 53248000 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 53248000 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 53248000 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 53248000 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295976960 unmapped: 53239808 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295976960 unmapped: 53239808 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 53231616 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 53223424 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 53223424 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 53223424 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 53215232 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 53215232 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 53215232 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 53215232 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 53215232 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 53215232 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 53207040 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 53207040 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 53207040 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 53207040 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 53207040 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 53207040 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296017920 unmapped: 53198848 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 53190656 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 53190656 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 53190656 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 53190656 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 53190656 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 53190656 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 53190656 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 53166080 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 53166080 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 53166080 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 53166080 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 53166080 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 53166080 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 53166080 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296058880 unmapped: 53157888 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 53149696 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 53141504 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 53141504 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 53141504 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 53141504 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 53141504 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 53141504 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 53141504 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296091648 unmapped: 53125120 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296091648 unmapped: 53125120 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 53116928 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 53116928 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 53116928 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 53116928 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296108032 unmapped: 53108736 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296108032 unmapped: 53108736 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 53100544 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 53100544 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 53100544 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 53100544 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 53100544 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296124416 unmapped: 53092352 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296132608 unmapped: 53084160 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296140800 unmapped: 53075968 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296140800 unmapped: 53075968 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296148992 unmapped: 53067776 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 53059584 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 53059584 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 53059584 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 53059584 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 53059584 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 53059584 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 53059584 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 53043200 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 53043200 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 53043200 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 53043200 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 53043200 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 53043200 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 53043200 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296181760 unmapped: 53035008 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 53026816 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 53026816 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 53026816 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 53026816 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 53026816 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ed741000/0x0/0x4ffc00000, data 0x9ef582/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296198144 unmapped: 53018624 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3037417 data_alloc: 218103808 data_used: 176128
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296198144 unmapped: 53018624 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296206336 unmapped: 53010432 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296214528 unmapped: 53002240 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 146.330047607s of 146.644271851s, submitted: 90
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296222720 unmapped: 52994048 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 handle_osd_map epochs [302,303], i have 302, src has [1,303]
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 302 handle_osd_map epochs [303,303], i have 303, src has [1,303]
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 303 ms_handle_reset con 0x55b3cf610400 session 0x55b3d0b64b40
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 303 heartbeat osd_stat(store_statfs(0x4ed73e000/0x0/0x4ffc00000, data 0x9f1130/0xb8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296288256 unmapped: 52928512 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2986189 data_alloc: 218103808 data_used: 184320
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296288256 unmapped: 52928512 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 52920320 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 304 ms_handle_reset con 0x55b3cf611400 session 0x55b3cdcb23c0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 52920320 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edf3d000/0x0/0x4ffc00000, data 0x1f2cbb/0x390000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296304640 unmapped: 52912128 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296304640 unmapped: 52912128 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2987987 data_alloc: 218103808 data_used: 192512
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296304640 unmapped: 52912128 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 304 handle_osd_map epochs [304,305], i have 304, src has [1,305]
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0x1f471e/0x393000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296321024 unmapped: 52895744 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 305 handle_osd_map epochs [305,306], i have 305, src has [1,306]
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 306 ms_handle_reset con 0x55b3ccad0000 session 0x55b3ce5afa40
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 52879360 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 52879360 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 52879360 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996423 data_alloc: 218103808 data_used: 192512
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 52879360 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 52879360 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 52871168 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 52871168 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 52871168 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996423 data_alloc: 218103808 data_used: 192512
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 52871168 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 52871168 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 52871168 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 52871168 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296353792 unmapped: 52862976 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996423 data_alloc: 218103808 data_used: 192512
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 52854784 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 52854784 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 52854784 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 52854784 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 52846592 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996423 data_alloc: 218103808 data_used: 192512
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 52846592 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 52846592 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 52846592 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 52838400 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 52838400 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996743 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 52838400 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 52838400 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 52838400 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296386560 unmapped: 52830208 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296386560 unmapped: 52830208 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996743 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296386560 unmapped: 52830208 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296394752 unmapped: 52822016 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296394752 unmapped: 52822016 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296394752 unmapped: 52822016 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296394752 unmapped: 52822016 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996743 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296394752 unmapped: 52822016 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296394752 unmapped: 52822016 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296411136 unmapped: 52805632 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296411136 unmapped: 52805632 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296411136 unmapped: 52805632 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996743 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 52797440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 52797440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 52797440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 52797440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 52797440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996743 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 52797440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 52797440 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296427520 unmapped: 52789248 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296435712 unmapped: 52781056 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296435712 unmapped: 52781056 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996743 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 52772864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 52772864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 52772864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 52772864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 52772864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996743 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0x1f62be/0x397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296452096 unmapped: 52764672 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296452096 unmapped: 52764672 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 63.713172913s of 64.038635254s, submitted: 108
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296452096 unmapped: 52764672 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 306 handle_osd_map epochs [306,307], i have 306, src has [1,307]
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 307 ms_handle_reset con 0x55b3cbcb4c00 session 0x55b3cc8003c0
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 52740096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 52740096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2997229 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 52740096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 307 heartbeat osd_stat(store_statfs(0x4edf34000/0x0/0x4ffc00000, data 0x1f7e6c/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 52740096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 52740096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 307 heartbeat osd_stat(store_statfs(0x4edf34000/0x0/0x4ffc00000, data 0x1f7e6c/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296484864 unmapped: 52731904 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296484864 unmapped: 52731904 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2997229 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296484864 unmapped: 52731904 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 307 heartbeat osd_stat(store_statfs(0x4edf34000/0x0/0x4ffc00000, data 0x1f7e6c/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296525824 unmapped: 52690944 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf34000/0x0/0x4ffc00000, data 0x1f7e6c/0x399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296525824 unmapped: 52690944 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296525824 unmapped: 52690944 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296525824 unmapped: 52690944 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296525824 unmapped: 52690944 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 52682752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 52682752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 52682752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 52682752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 52682752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 52682752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 52682752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 52682752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296542208 unmapped: 52674560 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296542208 unmapped: 52674560 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296542208 unmapped: 52674560 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296550400 unmapped: 52666368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296550400 unmapped: 52666368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296550400 unmapped: 52666368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296558592 unmapped: 52658176 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296566784 unmapped: 52649984 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296574976 unmapped: 52641792 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 52633600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 52633600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 52633600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 52633600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 52633600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 52633600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 52633600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296591360 unmapped: 52625408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296591360 unmapped: 52625408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296591360 unmapped: 52625408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296591360 unmapped: 52625408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296591360 unmapped: 52625408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296591360 unmapped: 52625408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296599552 unmapped: 52617216 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296599552 unmapped: 52617216 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296607744 unmapped: 52609024 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 52600832 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 52600832 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 52600832 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 52600832 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 52600832 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 52600832 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296632320 unmapped: 52584448 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296632320 unmapped: 52584448 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296632320 unmapped: 52584448 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296632320 unmapped: 52584448 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296632320 unmapped: 52584448 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296632320 unmapped: 52584448 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296640512 unmapped: 52576256 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296640512 unmapped: 52576256 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296648704 unmapped: 52568064 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296648704 unmapped: 52568064 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296648704 unmapped: 52568064 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296648704 unmapped: 52568064 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296648704 unmapped: 52568064 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296648704 unmapped: 52568064 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296656896 unmapped: 52559872 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: do_command 'config diff' '{prefix=config diff}'
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: do_command 'config show' '{prefix=config show}'
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296656896 unmapped: 52559872 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: do_command 'counter dump' '{prefix=counter dump}'
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: do_command 'counter schema' '{prefix=counter schema}'
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296271872 unmapped: 52944896 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295649280 unmapped: 53567488 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: do_command 'log dump' '{prefix=log dump}'
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 53559296 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: do_command 'perf dump' '{prefix=perf dump}'
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: do_command 'perf schema' '{prefix=perf schema}'
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 53919744 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 53919744 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 53919744 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 53919744 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 53919744 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295313408 unmapped: 53903360 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295313408 unmapped: 53903360 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295321600 unmapped: 53895168 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295321600 unmapped: 53895168 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 53886976 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 53886976 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 53886976 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 53886976 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295337984 unmapped: 53878784 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295337984 unmapped: 53878784 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295337984 unmapped: 53878784 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295337984 unmapped: 53878784 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295337984 unmapped: 53878784 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295337984 unmapped: 53878784 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295337984 unmapped: 53878784 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295337984 unmapped: 53878784 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295354368 unmapped: 53862400 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295354368 unmapped: 53862400 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295354368 unmapped: 53862400 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 53854208 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 53854208 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 53854208 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295362560 unmapped: 53854208 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295370752 unmapped: 53846016 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295370752 unmapped: 53846016 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 53837824 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295387136 unmapped: 53829632 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295387136 unmapped: 53829632 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295387136 unmapped: 53829632 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295387136 unmapped: 53829632 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295387136 unmapped: 53829632 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295387136 unmapped: 53829632 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295387136 unmapped: 53829632 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 53813248 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 53813248 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 53813248 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 53813248 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 53813248 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295411712 unmapped: 53805056 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295411712 unmapped: 53805056 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 53796864 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295428096 unmapped: 53788672 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295428096 unmapped: 53788672 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 53780480 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 53780480 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 53780480 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 53780480 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295444480 unmapped: 53772288 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295444480 unmapped: 53772288 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295444480 unmapped: 53772288 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 53764096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 53764096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 53764096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 53764096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 53764096 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 53755904 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 53755904 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 53755904 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 53755904 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295469056 unmapped: 53747712 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295469056 unmapped: 53747712 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295469056 unmapped: 53747712 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295469056 unmapped: 53747712 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295469056 unmapped: 53747712 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295469056 unmapped: 53747712 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295469056 unmapped: 53747712 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 53731328 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 53731328 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 53731328 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 53731328 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 53731328 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 53731328 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295493632 unmapped: 53723136 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295493632 unmapped: 53723136 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295501824 unmapped: 53714944 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295501824 unmapped: 53714944 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 53706752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 53706752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 53706752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 53706752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 53706752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 53706752 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295518208 unmapped: 53698560 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 53690368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 53690368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 53690368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 53690368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 53690368 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 53682176 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 53682176 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 53682176 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 53682176 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 53682176 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 53682176 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295542784 unmapped: 53673984 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295542784 unmapped: 53673984 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295542784 unmapped: 53673984 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295542784 unmapped: 53673984 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295550976 unmapped: 53665792 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295559168 unmapped: 53657600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295559168 unmapped: 53657600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295559168 unmapped: 53657600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295559168 unmapped: 53657600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295559168 unmapped: 53657600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295559168 unmapped: 53657600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295559168 unmapped: 53657600 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295567360 unmapped: 53649408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295567360 unmapped: 53649408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295567360 unmapped: 53649408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295567360 unmapped: 53649408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23353 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295567360 unmapped: 53649408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295567360 unmapped: 53649408 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 53641216 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 53633024 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 53633024 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 53633024 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 53633024 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 53633024 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 53633024 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 53633024 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295583744 unmapped: 53633024 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295600128 unmapped: 53616640 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295600128 unmapped: 53616640 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295600128 unmapped: 53616640 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295608320 unmapped: 53608448 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295608320 unmapped: 53608448 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295608320 unmapped: 53608448 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295608320 unmapped: 53608448 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295608320 unmapped: 53608448 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295616512 unmapped: 53600256 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 53592064 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 53592064 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 53592064 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 53592064 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 53592064 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 53592064 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 53592064 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 53583872 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295641088 unmapped: 53575680 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295641088 unmapped: 53575680 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295641088 unmapped: 53575680 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295641088 unmapped: 53575680 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295649280 unmapped: 53567488 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295649280 unmapped: 53567488 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295649280 unmapped: 53567488 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 53559296 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 53559296 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 53559296 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 53559296 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 53559296 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 53559296 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 53559296 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 53559296 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295673856 unmapped: 53542912 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295673856 unmapped: 53542912 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 53526528 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 53526528 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 53526528 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 53526528 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 53526528 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 53526528 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 53526528 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 53518336 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 53518336 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 53518336 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 53518336 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 53518336 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 53518336 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 53518336 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 53518336 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 53510144 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295714816 unmapped: 53501952 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295714816 unmapped: 53501952 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295714816 unmapped: 53501952 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 53485568 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 53469184 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 53469184 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 53460992 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 53460992 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 53460992 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 53460992 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 53460992 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 53460992 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 53452800 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 53444608 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 53444608 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 53444608 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 53444608 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295780352 unmapped: 53436416 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295780352 unmapped: 53436416 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295780352 unmapped: 53436416 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 53428224 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 53420032 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 53420032 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 53420032 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 53420032 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 53420032 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 53420032 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 53420032 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295804928 unmapped: 53411840 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295804928 unmapped: 53411840 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295804928 unmapped: 53411840 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295804928 unmapped: 53411840 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295804928 unmapped: 53411840 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295804928 unmapped: 53411840 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295804928 unmapped: 53411840 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 53403648 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 53403648 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 53403648 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 53395456 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 53395456 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 53395456 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 53395456 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 53379072 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 53379072 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 53379072 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6000.1 total, 600.0 interval
Cumulative writes: 39K writes, 151K keys, 39K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s
Cumulative WAL: 39K writes, 14K syncs, 2.70 writes per sync, written: 0.15 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 421 writes, 898 keys, 421 commit groups, 1.0 writes per commit group, ingest: 0.39 MB, 0.00 MB/s
Interval WAL: 421 writes, 192 syncs, 2.19 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55b3ca4651f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55b3ca4651f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtabl
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 53370880 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 53362688 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295862272 unmapped: 53354496 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295870464 unmapped: 53346304 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295870464 unmapped: 53346304 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295870464 unmapped: 53346304 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295870464 unmapped: 53346304 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 53338112 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 53338112 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 53338112 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 53338112 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 53338112 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 53338112 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 53329920 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 53329920 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 53329920 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 53329920 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 53313536 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 53305344 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 53305344 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 53305344 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 53305344 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 53297152 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 53297152 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 53297152 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 53297152 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 53297152 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 53297152 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 53297152 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 53297152 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 53288960 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 53280768 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 53280768 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 53280768 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 53280768 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 53280768 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 53272576 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 53272576 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 53272576 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 53272576 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 53272576 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 53272576 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 53272576 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 53272576 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 53264384 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 53264384 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 53256192 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 53256192 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 53248000 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 53248000 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 53231616 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf31000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 53223424 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000203 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 389.405700684s of 389.475738525s, submitted: 31
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 53223424 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 53182464 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 53182464 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 53182464 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 53182464 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2999323 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 53182464 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 53182464 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 53182464 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 53182464 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 53182464 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2999323 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 53182464 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 53182464 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 53182464 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 53182464 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296042496 unmapped: 53174272 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2999323 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296042496 unmapped: 53174272 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296042496 unmapped: 53174272 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296042496 unmapped: 53174272 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296042496 unmapped: 53174272 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296042496 unmapped: 53174272 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2999323 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296042496 unmapped: 53174272 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 53166080 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 53166080 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 53166080 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 53166080 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2999323 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 53166080 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 53166080 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 53166080 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 53166080 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296058880 unmapped: 53157888 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2999323 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296058880 unmapped: 53157888 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296058880 unmapped: 53157888 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296058880 unmapped: 53157888 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 53149696 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 53149696 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2999323 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 53149696 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 53149696 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 53149696 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 53149696 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 53149696 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2999323 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 53141504 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 53141504 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 53141504 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 53141504 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 53141504 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2999323 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 53141504 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 53141504 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 53141504 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296083456 unmapped: 53133312 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296083456 unmapped: 53133312 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2999323 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296083456 unmapped: 53133312 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296083456 unmapped: 53133312 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296083456 unmapped: 53133312 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296083456 unmapped: 53133312 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296083456 unmapped: 53133312 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2999323 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296091648 unmapped: 53125120 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296091648 unmapped: 53125120 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296091648 unmapped: 53125120 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296091648 unmapped: 53125120 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296091648 unmapped: 53125120 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2999323 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296091648 unmapped: 53125120 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296091648 unmapped: 53125120 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 296091648 unmapped: 53125120 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: do_command 'config diff' '{prefix=config diff}'
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: do_command 'config show' '{prefix=config show}'
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: do_command 'counter dump' '{prefix=counter dump}'
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: do_command 'counter schema' '{prefix=counter schema}'
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 53329920 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: bluestore.MempoolThread(0x55b3ca543b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2999323 data_alloc: 218103808 data_used: 200704
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 53248000 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: osd.2 308 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0x1f98cf/0x39c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 53264384 heap: 349216768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:46 np0005486808 ceph-osd[89514]: do_command 'log dump' '{prefix=log dump}'
Oct 14 06:05:46 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 14 06:05:46 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/745312317' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 14 06:05:46 np0005486808 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 14 06:05:46 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23357 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 14 06:05:47 np0005486808 nova_compute[259627]: 2025-10-14 10:05:47.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 14 06:05:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 14 06:05:47 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1606158406' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 14 06:05:47 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23361 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 06:05:47 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3395: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:05:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 14 06:05:47 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2392519678' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 14 06:05:47 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23365 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 06:05:47 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Oct 14 06:05:47 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/200062440' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 14 06:05:48 np0005486808 ceph-mgr[74543]: log_channel(audit) log [DBG] : from='client.23371 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 14 06:05:48 np0005486808 ceph-c49aadb6-9b04-5cb1-8f5f-4c91676c568e-mgr-compute-0-euuwqu[74539]: 2025-10-14T10:05:48.370+0000 7f6b82b53640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 14 06:05:48 np0005486808 ceph-mgr[74543]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 14 06:05:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 14 06:05:48 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #168. Immutable memtables: 0.
Oct 14 06:05:48 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:05:48.560444) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 14 06:05:48 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 168
Oct 14 06:05:48 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436348560519, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 727, "num_deletes": 255, "total_data_size": 828869, "memory_usage": 843832, "flush_reason": "Manual Compaction"}
Oct 14 06:05:48 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #169: started
Oct 14 06:05:48 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436348569256, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 169, "file_size": 821059, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70474, "largest_seqno": 71200, "table_properties": {"data_size": 817182, "index_size": 1592, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 8942, "raw_average_key_size": 19, "raw_value_size": 809336, "raw_average_value_size": 1755, "num_data_blocks": 70, "num_entries": 461, "num_filter_entries": 461, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760436297, "oldest_key_time": 1760436297, "file_creation_time": 1760436348, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 169, "seqno_to_time_mapping": "N/A"}}
Oct 14 06:05:48 np0005486808 ceph-mon[74249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 8854 microseconds, and 5371 cpu microseconds.
Oct 14 06:05:48 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 06:05:48 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:05:48.569311) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #169: 821059 bytes OK
Oct 14 06:05:48 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:05:48.569335) [db/memtable_list.cc:519] [default] Level-0 commit table #169 started
Oct 14 06:05:48 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:05:48.571346) [db/memtable_list.cc:722] [default] Level-0 commit table #169: memtable #1 done
Oct 14 06:05:48 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:05:48.571368) EVENT_LOG_v1 {"time_micros": 1760436348571361, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 14 06:05:48 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:05:48.571393) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 14 06:05:48 np0005486808 ceph-mon[74249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 825013, prev total WAL file size 830318, number of live WAL files 2.
Oct 14 06:05:48 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000165.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 06:05:48 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:05:48.572226) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303135' seq:72057594037927935, type:22 .. '6C6F676D0033323636' seq:0, type:0; will stop at (end)
Oct 14 06:05:48 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 14 06:05:48 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [169(801KB)], [167(9271KB)]
Oct 14 06:05:48 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436348572263, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [169], "files_L6": [167], "score": -1, "input_data_size": 10314908, "oldest_snapshot_seqno": -1}
Oct 14 06:05:48 np0005486808 ceph-mon[74249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #170: 8765 keys, 10206829 bytes, temperature: kUnknown
Oct 14 06:05:48 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436348621481, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 170, "file_size": 10206829, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10151558, "index_size": 32246, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21957, "raw_key_size": 231306, "raw_average_key_size": 26, "raw_value_size": 9998126, "raw_average_value_size": 1140, "num_data_blocks": 1244, "num_entries": 8765, "num_filter_entries": 8765, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760430103, "oldest_key_time": 0, "file_creation_time": 1760436348, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f7a3431-24cf-46a0-945d-86985446add9", "db_session_id": "BMHF7YJUWAF2M714LKG8", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Oct 14 06:05:48 np0005486808 ceph-mon[74249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 14 06:05:48 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:05:48.621695) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 10206829 bytes
Oct 14 06:05:48 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:05:48.623392) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 209.3 rd, 207.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 9.1 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(25.0) write-amplify(12.4) OK, records in: 9286, records dropped: 521 output_compression: NoCompression
Oct 14 06:05:48 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:05:48.623413) EVENT_LOG_v1 {"time_micros": 1760436348623402, "job": 104, "event": "compaction_finished", "compaction_time_micros": 49283, "compaction_time_cpu_micros": 21991, "output_level": 6, "num_output_files": 1, "total_output_size": 10206829, "num_input_records": 9286, "num_output_records": 8765, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 14 06:05:48 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000169.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 06:05:48 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436348623662, "job": 104, "event": "table_file_deletion", "file_number": 169}
Oct 14 06:05:48 np0005486808 ceph-mon[74249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 14 06:05:48 np0005486808 ceph-mon[74249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760436348624999, "job": 104, "event": "table_file_deletion", "file_number": 167}
Oct 14 06:05:48 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:05:48.572167) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 06:05:48 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:05:48.625040) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 06:05:48 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:05:48.625045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 06:05:48 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:05:48.625046) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 06:05:48 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:05:48.625047) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 06:05:48 np0005486808 ceph-mon[74249]: rocksdb: (Original Log Time 2025/10/14-10:05:48.625049) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 14 06:05:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Oct 14 06:05:48 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/959110844' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 14 06:05:48 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Oct 14 06:05:48 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/896764778' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 14 06:05:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Oct 14 06:05:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1553801755' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 14 06:05:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Oct 14 06:05:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3535606231' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 14 06:05:49 np0005486808 ceph-mgr[74543]: log_channel(cluster) log [DBG] : pgmap v3396: 305 pgs: 305 active+clean; 457 KiB data, 988 MiB used, 59 GiB / 60 GiB avail
Oct 14 06:05:49 np0005486808 nova_compute[259627]: 2025-10-14 10:05:49.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 14 06:05:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Oct 14 06:05:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2049773450' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 14 06:05:49 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Oct 14 06:05:49 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1109722576' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 14 06:05:50 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Oct 14 06:05:50 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1771726257' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 14 06:05:50 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Oct 14 06:05:50 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/377864442' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 14 06:05:50 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Oct 14 06:05:50 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1737011142' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 14 06:05:50 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Oct 14 06:05:50 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2392728208' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3516472 data_alloc: 218103808 data_used: 6881280
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320446464 unmapped: 59604992 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8ed7000/0x0/0x4ffc00000, data 0x2b09b7c/0x2c9f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 59695104 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 59695104 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8ec0000/0x0/0x4ffc00000, data 0x2b28b7c/0x2cbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 42K writes, 164K keys, 42K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.04 MB/s#012Cumulative WAL: 42K writes, 15K syncs, 2.73 writes per sync, written: 0.16 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4857 writes, 19K keys, 4857 commit groups, 1.0 writes per commit group, ingest: 20.94 MB, 0.03 MB/s#012Interval WAL: 4857 writes, 1947 syncs, 2.49 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 59695104 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 59695104 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8ec0000/0x0/0x4ffc00000, data 0x2b28b7c/0x2cbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510628 data_alloc: 218103808 data_used: 6881280
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 59695104 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.153414726s of 11.554138184s, submitted: 132
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 59695104 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 59695104 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320356352 unmapped: 59695104 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320364544 unmapped: 59686912 heap: 380051456 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c9395c20
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca805c00 session 0x5597c8b54000
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d48400 session 0x5597c8f4f2c0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d48400 session 0x5597c9322f00
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513901 data_alloc: 218103808 data_used: 6881280
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c8bdab40
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c9fac960
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c7ec7e00
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320315392 unmapped: 67174400 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca805c00 session 0x5597c6f11c20
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca805c00 session 0x5597c9fa5a40
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7edd000/0x0/0x4ffc00000, data 0x3b0ab8c/0x3ca1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320323584 unmapped: 67166208 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7edd000/0x0/0x4ffc00000, data 0x3b0ab8c/0x3ca1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320323584 unmapped: 67166208 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320331776 unmapped: 67158016 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320331776 unmapped: 67158016 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3631519 data_alloc: 218103808 data_used: 6885376
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320331776 unmapped: 67158016 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320331776 unmapped: 67158016 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c8ec32c0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320331776 unmapped: 67158016 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c8baad20
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7eda000/0x0/0x4ffc00000, data 0x3b0db8c/0x3ca4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320331776 unmapped: 67158016 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d48400 session 0x5597c9059e00
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.693615913s of 12.837650299s, submitted: 26
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c8bdb2c0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320348160 unmapped: 67141632 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3634757 data_alloc: 218103808 data_used: 6885376
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7ed9000/0x0/0x4ffc00000, data 0x3b0dbaf/0x3ca5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320348160 unmapped: 67141632 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321585152 unmapped: 65904640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321585152 unmapped: 65904640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321585152 unmapped: 65904640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7ed9000/0x0/0x4ffc00000, data 0x3b0dbaf/0x3ca5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321585152 unmapped: 65904640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752677 data_alloc: 234881024 data_used: 23490560
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321585152 unmapped: 65904640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321585152 unmapped: 65904640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321585152 unmapped: 65904640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321585152 unmapped: 65904640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7ed9000/0x0/0x4ffc00000, data 0x3b0dbaf/0x3ca5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321585152 unmapped: 65904640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753333 data_alloc: 234881024 data_used: 23494656
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.706723213s of 11.763413429s, submitted: 15
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321593344 unmapped: 65896448 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326164480 unmapped: 61325312 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326238208 unmapped: 61251584 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326238208 unmapped: 61251584 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326238208 unmapped: 61251584 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e71b6000/0x0/0x4ffc00000, data 0x4830baf/0x49c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3868965 data_alloc: 234881024 data_used: 24391680
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326238208 unmapped: 61251584 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326246400 unmapped: 61243392 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326246400 unmapped: 61243392 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326246400 unmapped: 61243392 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326254592 unmapped: 61235200 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3868737 data_alloc: 234881024 data_used: 24412160
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e71b4000/0x0/0x4ffc00000, data 0x4832baf/0x49ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326254592 unmapped: 61235200 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e71b4000/0x0/0x4ffc00000, data 0x4832baf/0x49ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326254592 unmapped: 61235200 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326254592 unmapped: 61235200 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326254592 unmapped: 61235200 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.880608559s of 13.121785164s, submitted: 92
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c8bab4a0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c9df14a0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c9fac5a0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318914560 unmapped: 68575232 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3529945 data_alloc: 218103808 data_used: 6897664
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318914560 unmapped: 68575232 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318914560 unmapped: 68575232 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e86ea000/0x0/0x4ffc00000, data 0x2b43b7c/0x2cd9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318914560 unmapped: 68575232 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318914560 unmapped: 68575232 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e86ea000/0x0/0x4ffc00000, data 0x2b43b7c/0x2cd9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318914560 unmapped: 68575232 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3529945 data_alloc: 218103808 data_used: 6897664
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318914560 unmapped: 68575232 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318914560 unmapped: 68575232 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c7e0e5a0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2c00 session 0x5597c8da74a0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c9df0960
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379525 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379525 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379525 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.259336472s of 23.626934052s, submitted: 120
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318963712 unmapped: 68526080 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318980096 unmapped: 68509696 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319037440 unmapped: 68452352 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379525 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319037440 unmapped: 68452352 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319037440 unmapped: 68452352 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319037440 unmapped: 68452352 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319037440 unmapped: 68452352 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319037440 unmapped: 68452352 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379525 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319045632 unmapped: 68444160 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319045632 unmapped: 68444160 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319045632 unmapped: 68444160 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319045632 unmapped: 68444160 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319053824 unmapped: 68435968 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379525 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319053824 unmapped: 68435968 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319053824 unmapped: 68435968 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319053824 unmapped: 68435968 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319053824 unmapped: 68435968 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319062016 unmapped: 68427776 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379525 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319062016 unmapped: 68427776 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319062016 unmapped: 68427776 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319062016 unmapped: 68427776 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319062016 unmapped: 68427776 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319070208 unmapped: 68419584 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379525 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319070208 unmapped: 68419584 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319070208 unmapped: 68419584 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319070208 unmapped: 68419584 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c91d70e0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c8b552c0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c93234a0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d48400 session 0x5597c9fbb0e0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.051465988s of 26.344564438s, submitted: 90
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9f9a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319397888 unmapped: 68091904 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c8bade00
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c8eb6960
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c7e0eb40
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c705ad20
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca805c00 session 0x5597c9058b40
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319397888 unmapped: 68091904 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e95e8000/0x0/0x4ffc00000, data 0x23febee/0x2596000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460245 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319397888 unmapped: 68091904 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319397888 unmapped: 68091904 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319397888 unmapped: 68091904 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319397888 unmapped: 68091904 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319397888 unmapped: 68091904 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460245 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e95e8000/0x0/0x4ffc00000, data 0x23febee/0x2596000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319406080 unmapped: 68083712 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319406080 unmapped: 68083712 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319406080 unmapped: 68083712 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c9f8bc20
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e95e8000/0x0/0x4ffc00000, data 0x23febee/0x2596000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319406080 unmapped: 68083712 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e95e7000/0x0/0x4ffc00000, data 0x23fec11/0x2597000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.328366280s of 10.499547958s, submitted: 49
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319406080 unmapped: 68083712 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3465278 data_alloc: 218103808 data_used: 4861952
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319340544 unmapped: 68149248 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319995904 unmapped: 67493888 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319995904 unmapped: 67493888 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319995904 unmapped: 67493888 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319995904 unmapped: 67493888 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e95e7000/0x0/0x4ffc00000, data 0x23fec11/0x2597000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3523198 data_alloc: 218103808 data_used: 13045760
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319995904 unmapped: 67493888 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e95e7000/0x0/0x4ffc00000, data 0x23fec11/0x2597000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319995904 unmapped: 67493888 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319995904 unmapped: 67493888 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319995904 unmapped: 67493888 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319995904 unmapped: 67493888 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.949416161s of 10.952005386s, submitted: 1
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554408 data_alloc: 218103808 data_used: 13467648
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322412544 unmapped: 65077248 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322486272 unmapped: 65003520 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8e9e000/0x0/0x4ffc00000, data 0x2736c11/0x28cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322486272 unmapped: 65003520 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322486272 unmapped: 65003520 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322486272 unmapped: 65003520 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567410 data_alloc: 234881024 data_used: 13811712
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322486272 unmapped: 65003520 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322486272 unmapped: 65003520 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8e20000/0x0/0x4ffc00000, data 0x27b4c11/0x294d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323543040 unmapped: 63946752 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323543040 unmapped: 63946752 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323543040 unmapped: 63946752 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3566358 data_alloc: 234881024 data_used: 13824000
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323543040 unmapped: 63946752 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8e00000/0x0/0x4ffc00000, data 0x27d5c11/0x296e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323543040 unmapped: 63946752 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323543040 unmapped: 63946752 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323551232 unmapped: 63938560 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.258024216s of 14.533938408s, submitted: 65
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c8da6000
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c8da63c0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9feb800 session 0x5597c78414a0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c6f934a0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323551232 unmapped: 63938560 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8df9000/0x0/0x4ffc00000, data 0x27dbc20/0x2975000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3568414 data_alloc: 234881024 data_used: 13824000
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330170368 unmapped: 57319424 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c8eb6780
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c9394d20
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c8f4e780
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c8f4f860
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9feb800 session 0x5597c7e0f4a0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8782000/0x0/0x4ffc00000, data 0x2e52c20/0x2fec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323559424 unmapped: 63930368 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323559424 unmapped: 63930368 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8782000/0x0/0x4ffc00000, data 0x2e52c20/0x2fec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323559424 unmapped: 63930368 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323559424 unmapped: 63930368 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3623576 data_alloc: 234881024 data_used: 13828096
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323559424 unmapped: 63930368 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8782000/0x0/0x4ffc00000, data 0x2e52c20/0x2fec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323559424 unmapped: 63930368 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323567616 unmapped: 63922176 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323567616 unmapped: 63922176 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323567616 unmapped: 63922176 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.050421715s of 10.386352539s, submitted: 13
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c71854a0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3624501 data_alloc: 234881024 data_used: 13828096
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8782000/0x0/0x4ffc00000, data 0x2e52c20/0x2fec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323584000 unmapped: 63905792 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8782000/0x0/0x4ffc00000, data 0x2e52c20/0x2fec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323584000 unmapped: 63905792 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323584000 unmapped: 63905792 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324468736 unmapped: 63021056 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 325877760 unmapped: 61612032 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3672021 data_alloc: 234881024 data_used: 20504576
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 325877760 unmapped: 61612032 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8782000/0x0/0x4ffc00000, data 0x2e52c20/0x2fec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 325877760 unmapped: 61612032 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8782000/0x0/0x4ffc00000, data 0x2e52c20/0x2fec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 325877760 unmapped: 61612032 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 325877760 unmapped: 61612032 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 325877760 unmapped: 61612032 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8782000/0x0/0x4ffc00000, data 0x2e52c20/0x2fec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3672373 data_alloc: 234881024 data_used: 20504576
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 325877760 unmapped: 61612032 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 325877760 unmapped: 61612032 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 325877760 unmapped: 61612032 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.206857681s of 13.240119934s, submitted: 8
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331653120 unmapped: 55836672 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331759616 unmapped: 55730176 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7b26000/0x0/0x4ffc00000, data 0x3aa6c20/0x3c40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3788835 data_alloc: 234881024 data_used: 21499904
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331759616 unmapped: 55730176 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331759616 unmapped: 55730176 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331759616 unmapped: 55730176 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7b26000/0x0/0x4ffc00000, data 0x3aa6c20/0x3c40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331833344 unmapped: 55656448 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331833344 unmapped: 55656448 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3788835 data_alloc: 234881024 data_used: 21499904
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331161600 unmapped: 56328192 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7b26000/0x0/0x4ffc00000, data 0x3aa6c20/0x3c40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331161600 unmapped: 56328192 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331169792 unmapped: 56320000 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331169792 unmapped: 56320000 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7b2d000/0x0/0x4ffc00000, data 0x3aa7c20/0x3c41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331169792 unmapped: 56320000 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3784939 data_alloc: 234881024 data_used: 21671936
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c8d714a0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.424279213s of 12.732007027s, submitted: 100
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c9f8a1e0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331169792 unmapped: 56320000 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c8d71e00
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329654272 unmapped: 57835520 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8dfa000/0x0/0x4ffc00000, data 0x27dbc11/0x2974000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329654272 unmapped: 57835520 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329654272 unmapped: 57835520 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260c00 session 0x5597c6489c20
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329654272 unmapped: 57835520 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8dee000/0x0/0x4ffc00000, data 0x27e7c11/0x2980000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [0,0,1])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c9323a40
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402339 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 62840832 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 62840832 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 62840832 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 62840832 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 62840832 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402339 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 62840832 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 62840832 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 62840832 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 62840832 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324657152 unmapped: 62832640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402339 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324657152 unmapped: 62832640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324657152 unmapped: 62832640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324657152 unmapped: 62832640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324657152 unmapped: 62832640 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324665344 unmapped: 62824448 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402339 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324665344 unmapped: 62824448 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324665344 unmapped: 62824448 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324665344 unmapped: 62824448 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 62816256 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 62816256 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402339 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 62816256 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 62816256 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 62816256 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 62816256 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 62816256 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402339 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 62816256 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 62816256 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 62816256 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324681728 unmapped: 62808064 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324681728 unmapped: 62808064 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402339 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324681728 unmapped: 62808064 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324681728 unmapped: 62808064 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324681728 unmapped: 62808064 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324689920 unmapped: 62799872 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324689920 unmapped: 62799872 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402339 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324689920 unmapped: 62799872 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324689920 unmapped: 62799872 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabc8000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324689920 unmapped: 62799872 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324698112 unmapped: 62791680 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324698112 unmapped: 62791680 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402339 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324698112 unmapped: 62791680 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324698112 unmapped: 62791680 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c8f4ed20
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c8ec32c0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c91761e0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9fec000 session 0x5597c705bc20
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 46.449398041s of 46.750865936s, submitted: 92
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c6f114a0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69c000/0x0/0x4ffc00000, data 0x1f7cb7c/0x2112000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324829184 unmapped: 62660608 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324837376 unmapped: 62652416 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324837376 unmapped: 62652416 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441271 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69c000/0x0/0x4ffc00000, data 0x1f7cb7c/0x2112000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324837376 unmapped: 62652416 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69c000/0x0/0x4ffc00000, data 0x1f7cb7c/0x2112000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324837376 unmapped: 62652416 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69c000/0x0/0x4ffc00000, data 0x1f7cb7c/0x2112000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69c000/0x0/0x4ffc00000, data 0x1f7cb7c/0x2112000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324837376 unmapped: 62652416 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324853760 unmapped: 62636032 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69c000/0x0/0x4ffc00000, data 0x1f7cb7c/0x2112000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324861952 unmapped: 62627840 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c8baa780
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69c000/0x0/0x4ffc00000, data 0x1f7cb7c/0x2112000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441271 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c9fa43c0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324861952 unmapped: 62627840 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c8babc20
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324861952 unmapped: 62627840 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8260800 session 0x5597c911cf00
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324861952 unmapped: 62627840 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69b000/0x0/0x4ffc00000, data 0x1f7cb8c/0x2113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324861952 unmapped: 62627840 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324861952 unmapped: 62627840 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480549 data_alloc: 218103808 data_used: 9494528
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69b000/0x0/0x4ffc00000, data 0x1f7cb8c/0x2113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69b000/0x0/0x4ffc00000, data 0x1f7cb8c/0x2113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480869 data_alloc: 218103808 data_used: 9551872
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69b000/0x0/0x4ffc00000, data 0x1f7cb8c/0x2113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.233131409s of 23.287984848s, submitted: 4
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481685 data_alloc: 218103808 data_used: 9555968
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323919872 unmapped: 63569920 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea69b000/0x0/0x4ffc00000, data 0x1f7cb8c/0x2113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323928064 unmapped: 63561728 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323190784 unmapped: 64299008 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323190784 unmapped: 64299008 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea38e000/0x0/0x4ffc00000, data 0x2289b8c/0x2420000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323190784 unmapped: 64299008 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3507191 data_alloc: 218103808 data_used: 9674752
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323190784 unmapped: 64299008 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323190784 unmapped: 64299008 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323190784 unmapped: 64299008 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323198976 unmapped: 64290816 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea38e000/0x0/0x4ffc00000, data 0x2289b8c/0x2420000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323198976 unmapped: 64290816 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3507207 data_alloc: 218103808 data_used: 9674752
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323198976 unmapped: 64290816 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea38e000/0x0/0x4ffc00000, data 0x2289b8c/0x2420000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323198976 unmapped: 64290816 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323198976 unmapped: 64290816 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323207168 unmapped: 64282624 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323207168 unmapped: 64282624 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3507207 data_alloc: 218103808 data_used: 9674752
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323207168 unmapped: 64282624 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.272737503s of 16.378507614s, submitted: 28
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323747840 unmapped: 63741952 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea38e000/0x0/0x4ffc00000, data 0x2289b8c/0x2420000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [0,0,0,0,0,1])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c8f4f0e0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323215360 unmapped: 64274432 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9fa6000/0x0/0x4ffc00000, data 0x2671b8c/0x2808000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323223552 unmapped: 64266240 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323231744 unmapped: 64258048 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3542575 data_alloc: 218103808 data_used: 9674752
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323231744 unmapped: 64258048 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323231744 unmapped: 64258048 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9fa6000/0x0/0x4ffc00000, data 0x2671b8c/0x2808000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323231744 unmapped: 64258048 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c8ec6f00
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323239936 unmapped: 64249856 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c91cb000 session 0x5597c8edd4a0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597cc6e8000 session 0x5597c8d71a40
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c9fad860
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323239936 unmapped: 64249856 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544413 data_alloc: 218103808 data_used: 9674752
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323239936 unmapped: 64249856 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323248128 unmapped: 64241664 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9fa5000/0x0/0x4ffc00000, data 0x2671b9c/0x2809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9fa5000/0x0/0x4ffc00000, data 0x2671b9c/0x2809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323264512 unmapped: 64225280 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9fa5000/0x0/0x4ffc00000, data 0x2671b9c/0x2809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323264512 unmapped: 64225280 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9fa5000/0x0/0x4ffc00000, data 0x2671b9c/0x2809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3570813 data_alloc: 218103808 data_used: 13381632
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323264512 unmapped: 64225280 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323264512 unmapped: 64225280 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323264512 unmapped: 64225280 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9fa5000/0x0/0x4ffc00000, data 0x2671b9c/0x2809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323264512 unmapped: 64225280 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323264512 unmapped: 64225280 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3570813 data_alloc: 218103808 data_used: 13381632
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323264512 unmapped: 64225280 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9fa5000/0x0/0x4ffc00000, data 0x2671b9c/0x2809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323264512 unmapped: 64225280 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.227336884s of 20.305019379s, submitted: 9
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324141056 unmapped: 63348736 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e997f000/0x0/0x4ffc00000, data 0x2c8fb9c/0x2e27000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324927488 unmapped: 62562304 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e98e9000/0x0/0x4ffc00000, data 0x2d2db9c/0x2ec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324927488 unmapped: 62562304 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3627635 data_alloc: 218103808 data_used: 13598720
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324927488 unmapped: 62562304 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324927488 unmapped: 62562304 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324927488 unmapped: 62562304 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324935680 unmapped: 62554112 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324943872 unmapped: 62545920 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3628135 data_alloc: 218103808 data_used: 13598720
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e98c5000/0x0/0x4ffc00000, data 0x2d51b9c/0x2ee9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324943872 unmapped: 62545920 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324943872 unmapped: 62545920 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324943872 unmapped: 62545920 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324952064 unmapped: 62537728 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e98c5000/0x0/0x4ffc00000, data 0x2d51b9c/0x2ee9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324952064 unmapped: 62537728 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3628455 data_alloc: 218103808 data_used: 13606912
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324952064 unmapped: 62537728 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c91cb000 session 0x5597c705bc20
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c9176b40
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.985599518s of 14.288364410s, submitted: 86
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f64c00 session 0x5597c705a5a0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322830336 unmapped: 64659456 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322830336 unmapped: 64659456 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322830336 unmapped: 64659456 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322830336 unmapped: 64659456 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515600 data_alloc: 218103808 data_used: 9674752
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4ea38e000/0x0/0x4ffc00000, data 0x2289b8c/0x2420000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322830336 unmapped: 64659456 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c9fba780
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c6f44000 session 0x5597c9394f00
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322830336 unmapped: 64659456 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c9fba5a0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3415797 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3415797 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3415797 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 68558848 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca804800 session 0x5597c9322d20
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: mgrc ms_handle_reset ms_handle_reset con 0x5597c8fbb000
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3625056923
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3625056923,v1:192.168.122.100:6801/3625056923]
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: mgrc handle_mgr_configure stats_period=5
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318988288 unmapped: 68501504 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8db4400 session 0x5597c911c780
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c7e2c800 session 0x5597c8bacb40
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318988288 unmapped: 68501504 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318988288 unmapped: 68501504 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3415797 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318988288 unmapped: 68501504 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318988288 unmapped: 68501504 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318988288 unmapped: 68501504 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318988288 unmapped: 68501504 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318996480 unmapped: 68493312 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3415797 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318996480 unmapped: 68493312 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318996480 unmapped: 68493312 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 318996480 unmapped: 68493312 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319004672 unmapped: 68485120 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319004672 unmapped: 68485120 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3415797 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319004672 unmapped: 68485120 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319004672 unmapped: 68485120 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319004672 unmapped: 68485120 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319012864 unmapped: 68476928 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319012864 unmapped: 68476928 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4eabca000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3415797 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319012864 unmapped: 68476928 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319012864 unmapped: 68476928 heap: 387489792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c8da7c20
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c9df1680
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597cc6e8000 session 0x5597c7e792c0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f74c00 session 0x5597c8ec6b40
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 41.484188080s of 41.605556488s, submitted: 34
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 325509120 unmapped: 66715648 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c8da70e0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c9f8b860
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c7e78960
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597cc6e8000 session 0x5597c976a000
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca800000 session 0x5597c7840b40
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320528384 unmapped: 71696384 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9d07000/0x0/0x4ffc00000, data 0x2911b7c/0x2aa7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320528384 unmapped: 71696384 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3527710 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320528384 unmapped: 71696384 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320528384 unmapped: 71696384 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320528384 unmapped: 71696384 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9d07000/0x0/0x4ffc00000, data 0x2911b7c/0x2aa7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320536576 unmapped: 71688192 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c90592c0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 320536576 unmapped: 71688192 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c976be00
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c8baad20
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3527710 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597cc6e8000 session 0x5597c9323860
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319496192 unmapped: 72728576 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319496192 unmapped: 72728576 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9d06000/0x0/0x4ffc00000, data 0x2911b8c/0x2aa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 319447040 unmapped: 72777728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322658304 unmapped: 69566464 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322658304 unmapped: 69566464 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9d06000/0x0/0x4ffc00000, data 0x2911b8c/0x2aa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3633229 data_alloc: 234881024 data_used: 18477056
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322658304 unmapped: 69566464 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322658304 unmapped: 69566464 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9d06000/0x0/0x4ffc00000, data 0x2911b8c/0x2aa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322658304 unmapped: 69566464 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322658304 unmapped: 69566464 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322658304 unmapped: 69566464 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3633229 data_alloc: 234881024 data_used: 18477056
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9d06000/0x0/0x4ffc00000, data 0x2911b8c/0x2aa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322658304 unmapped: 69566464 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 322658304 unmapped: 69566464 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.397739410s of 19.592414856s, submitted: 36
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9d06000/0x0/0x4ffc00000, data 0x2911b8c/0x2aa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [0,0,1,0,3,1])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e96e5000/0x0/0x4ffc00000, data 0x2f32b8c/0x30c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326262784 unmapped: 65961984 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3707751 data_alloc: 234881024 data_used: 19681280
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9633000/0x0/0x4ffc00000, data 0x2fe3b8c/0x317a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9633000/0x0/0x4ffc00000, data 0x2fe3b8c/0x317a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700087 data_alloc: 234881024 data_used: 19681280
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9631000/0x0/0x4ffc00000, data 0x2fe6b8c/0x317d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700087 data_alloc: 234881024 data_used: 19681280
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9631000/0x0/0x4ffc00000, data 0x2fe6b8c/0x317d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9631000/0x0/0x4ffc00000, data 0x2fe6b8c/0x317d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c79745a0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c9176780
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c6f11e00
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326410240 unmapped: 65814528 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c93230e0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.581520081s of 14.833774567s, submitted: 84
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c705ab40
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597cc6e8000 session 0x5597c8baad20
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597cc6e8000 session 0x5597c976be00
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c90592c0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c976a000
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326508544 unmapped: 65716224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326508544 unmapped: 65716224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326508544 unmapped: 65716224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3771905 data_alloc: 234881024 data_used: 19681280
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326508544 unmapped: 65716224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326508544 unmapped: 65716224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8d5d000/0x0/0x4ffc00000, data 0x38b9b9c/0x3a51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326508544 unmapped: 65716224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326508544 unmapped: 65716224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 326721536 unmapped: 65503232 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3836865 data_alloc: 234881024 data_used: 27766784
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329981952 unmapped: 62242816 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8d5d000/0x0/0x4ffc00000, data 0x38b9b9c/0x3a51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329981952 unmapped: 62242816 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329981952 unmapped: 62242816 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329981952 unmapped: 62242816 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8d5d000/0x0/0x4ffc00000, data 0x38b9b9c/0x3a51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8d5d000/0x0/0x4ffc00000, data 0x38b9b9c/0x3a51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329981952 unmapped: 62242816 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3836865 data_alloc: 234881024 data_used: 27766784
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329981952 unmapped: 62242816 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329981952 unmapped: 62242816 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329981952 unmapped: 62242816 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.824155807s of 16.902429581s, submitted: 11
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329990144 unmapped: 62234624 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8d5d000/0x0/0x4ffc00000, data 0x38b9b9c/0x3a51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #51. Immutable memtables: 0.
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333660160 unmapped: 58564608 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8d5b000/0x0/0x4ffc00000, data 0x38bab9c/0x3a52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3878863 data_alloc: 234881024 data_used: 28008448
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333660160 unmapped: 58564608 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7659000/0x0/0x4ffc00000, data 0x3e17b9c/0x3faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3886749 data_alloc: 234881024 data_used: 27828224
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e763c000/0x0/0x4ffc00000, data 0x3e2cb9c/0x3fc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e763c000/0x0/0x4ffc00000, data 0x3e2cb9c/0x3fc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3886749 data_alloc: 234881024 data_used: 27828224
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.783678055s of 12.008990288s, submitted: 48
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e763c000/0x0/0x4ffc00000, data 0x3e2cb9c/0x3fc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c7e78960
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 58441728 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c9df03c0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330792960 unmapped: 61431808 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3710153 data_alloc: 234881024 data_used: 18591744
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330792960 unmapped: 61431808 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330792960 unmapped: 61431808 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e848f000/0x0/0x4ffc00000, data 0x2fe8b8c/0x317f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80e400 session 0x5597c8ec65a0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2000 session 0x5597c9df0b40
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330792960 unmapped: 61431808 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c9df0f00
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441058 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a2a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a2a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441058 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a2a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441058 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a2a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441058 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a2a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441058 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a2a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a2a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a2a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441058 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441058 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a2a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a2a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321388544 unmapped: 70836224 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c9059e00
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8d49c00 session 0x5597c8bdab40
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c9fa4b40
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c8b550e0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 42.645046234s of 42.893882751s, submitted: 79
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80e400 session 0x5597c8d70b40
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2000 session 0x5597c7184000
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6c800 session 0x5597c9fb23c0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c7840000
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c8eb7e00
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321708032 unmapped: 70516736 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321708032 unmapped: 70516736 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3541334 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321716224 unmapped: 70508544 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321716224 unmapped: 70508544 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321716224 unmapped: 70508544 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8f05000/0x0/0x4ffc00000, data 0x2572b8c/0x2709000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321716224 unmapped: 70508544 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321724416 unmapped: 70500352 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80e400 session 0x5597c9fa50e0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3541334 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2000 session 0x5597c8ecab40
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321724416 unmapped: 70500352 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8f05000/0x0/0x4ffc00000, data 0x2572b8c/0x2709000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597cc6e8000 session 0x5597c7e0fa40
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c8ec7c20
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321617920 unmapped: 70606848 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8f05000/0x0/0x4ffc00000, data 0x2572b8c/0x2709000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321626112 unmapped: 70598656 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 321634304 unmapped: 70590464 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8f05000/0x0/0x4ffc00000, data 0x2572b8c/0x2709000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323674112 unmapped: 68550656 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8f05000/0x0/0x4ffc00000, data 0x2572b8c/0x2709000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614275 data_alloc: 234881024 data_used: 14381056
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323674112 unmapped: 68550656 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323674112 unmapped: 68550656 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323674112 unmapped: 68550656 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323674112 unmapped: 68550656 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8f05000/0x0/0x4ffc00000, data 0x2572b8c/0x2709000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323674112 unmapped: 68550656 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614275 data_alloc: 234881024 data_used: 14381056
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323674112 unmapped: 68550656 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323674112 unmapped: 68550656 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8f05000/0x0/0x4ffc00000, data 0x2572b8c/0x2709000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 323674112 unmapped: 68550656 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.752676010s of 19.861513138s, submitted: 33
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329728000 unmapped: 62496768 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3681253 data_alloc: 234881024 data_used: 15691776
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8853000/0x0/0x4ffc00000, data 0x2c1bb8c/0x2db2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8853000/0x0/0x4ffc00000, data 0x2c1bb8c/0x2db2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8853000/0x0/0x4ffc00000, data 0x2c1bb8c/0x2db2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674689 data_alloc: 234881024 data_used: 15691776
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e883d000/0x0/0x4ffc00000, data 0x2c3ab8c/0x2dd1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e883d000/0x0/0x4ffc00000, data 0x2c3ab8c/0x2dd1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674689 data_alloc: 234881024 data_used: 15691776
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.384753227s of 12.701920509s, submitted: 117
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2000 session 0x5597c9323a40
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6e400 session 0x5597c8b552c0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597caeeb400 session 0x5597c9fac1e0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330375168 unmapped: 61849600 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c8fd0000 session 0x5597c8eddc20
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c9fba000
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6e400 session 0x5597c7ec63c0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2000 session 0x5597c7ec6960
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597caeeb400 session 0x5597c9df0000
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6f800 session 0x5597c8d70000
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330383360 unmapped: 61841408 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717851 data_alloc: 234881024 data_used: 15691776
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330383360 unmapped: 61841408 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e840c000/0x0/0x4ffc00000, data 0x3069bfe/0x3202000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e840c000/0x0/0x4ffc00000, data 0x3069bfe/0x3202000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330383360 unmapped: 61841408 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330383360 unmapped: 61841408 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330383360 unmapped: 61841408 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6f800 session 0x5597c7974f00
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330383360 unmapped: 61841408 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e840c000/0x0/0x4ffc00000, data 0x3069bfe/0x3202000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c7ec6960
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717851 data_alloc: 234881024 data_used: 15691776
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330383360 unmapped: 61841408 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6e400 session 0x5597c7ec63c0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.189133644s of 10.343238831s, submitted: 42
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2000 session 0x5597c9fba000
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e840b000/0x0/0x4ffc00000, data 0x3069c21/0x3203000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330383360 unmapped: 61841408 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 330391552 unmapped: 61833216 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331309056 unmapped: 60915712 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331309056 unmapped: 60915712 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e840b000/0x0/0x4ffc00000, data 0x3069c21/0x3203000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3748708 data_alloc: 234881024 data_used: 19931136
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331309056 unmapped: 60915712 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331309056 unmapped: 60915712 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331309056 unmapped: 60915712 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331309056 unmapped: 60915712 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331309056 unmapped: 60915712 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749364 data_alloc: 234881024 data_used: 19935232
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e840b000/0x0/0x4ffc00000, data 0x3069c21/0x3203000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331309056 unmapped: 60915712 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331309056 unmapped: 60915712 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.708535194s of 11.744414330s, submitted: 9
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331309056 unmapped: 60915712 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e840b000/0x0/0x4ffc00000, data 0x3069c21/0x3203000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [0,1,1])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334585856 unmapped: 57638912 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 56434688 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3808274 data_alloc: 234881024 data_used: 20135936
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 56434688 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 56434688 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7e00000/0x0/0x4ffc00000, data 0x366cc21/0x3806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7e00000/0x0/0x4ffc00000, data 0x366cc21/0x3806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 56434688 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 56434688 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335126528 unmapped: 57098240 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7e00000/0x0/0x4ffc00000, data 0x366cc21/0x3806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3803170 data_alloc: 234881024 data_used: 20140032
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335126528 unmapped: 57098240 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335126528 unmapped: 57098240 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335126528 unmapped: 57098240 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335134720 unmapped: 57090048 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335134720 unmapped: 57090048 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.447299004s of 12.729538918s, submitted: 84
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3803346 data_alloc: 234881024 data_used: 20140032
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335134720 unmapped: 57090048 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7e05000/0x0/0x4ffc00000, data 0x366fc21/0x3809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335134720 unmapped: 57090048 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597caeeb400 session 0x5597c9fac1e0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335142912 unmapped: 57081856 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597caeeb400 session 0x5597c8da70e0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335192064 unmapped: 57032704 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335192064 unmapped: 57032704 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686054 data_alloc: 234881024 data_used: 15679488
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335192064 unmapped: 57032704 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c9f8a1e0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80e400 session 0x5597c9f8b860
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329187328 unmapped: 63037440 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c8f4e780
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e882a000/0x0/0x4ffc00000, data 0x2c4db8c/0x2de4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467082 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467082 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467082 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467082 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329203712 unmapped: 63021056 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467082 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 63012864 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 63012864 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 63012864 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 63012864 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 63012864 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467082 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329220096 unmapped: 63004672 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329220096 unmapped: 63004672 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329220096 unmapped: 63004672 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329220096 unmapped: 63004672 heap: 392224768 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 38.834590912s of 39.179557800s, submitted: 109
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6e400 session 0x5597c705a960
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324231168 unmapped: 71671808 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9866000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3540834 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324231168 unmapped: 71671808 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324231168 unmapped: 71671808 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9102000/0x0/0x4ffc00000, data 0x2376b7c/0x250c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324231168 unmapped: 71671808 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324231168 unmapped: 71671808 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324231168 unmapped: 71671808 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3540834 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c9f8a3c0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324239360 unmapped: 71663616 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c93223c0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324239360 unmapped: 71663616 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80e400 session 0x5597c9058b40
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597caeeb400 session 0x5597c8b554a0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324386816 unmapped: 71516160 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e90dd000/0x0/0x4ffc00000, data 0x239ab9f/0x2531000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324395008 unmapped: 71507968 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324722688 unmapped: 71180288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614328 data_alloc: 234881024 data_used: 13717504
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324722688 unmapped: 71180288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e90dd000/0x0/0x4ffc00000, data 0x239ab9f/0x2531000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324722688 unmapped: 71180288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e90dd000/0x0/0x4ffc00000, data 0x239ab9f/0x2531000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324722688 unmapped: 71180288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324722688 unmapped: 71180288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324722688 unmapped: 71180288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614328 data_alloc: 234881024 data_used: 13717504
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324722688 unmapped: 71180288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324722688 unmapped: 71180288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e90dd000/0x0/0x4ffc00000, data 0x239ab9f/0x2531000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324722688 unmapped: 71180288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e90dd000/0x0/0x4ffc00000, data 0x239ab9f/0x2531000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.299053192s of 19.407587051s, submitted: 25
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 324722688 unmapped: 71180288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328425472 unmapped: 67477504 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3662080 data_alloc: 234881024 data_used: 14364672
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328425472 unmapped: 67477504 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328802304 unmapped: 67100672 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328802304 unmapped: 67100672 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8b89000/0x0/0x4ffc00000, data 0x28e6b9f/0x2a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328802304 unmapped: 67100672 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328802304 unmapped: 67100672 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8b89000/0x0/0x4ffc00000, data 0x28e6b9f/0x2a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3673472 data_alloc: 234881024 data_used: 14249984
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328802304 unmapped: 67100672 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 67084288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 67084288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 67084288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 67084288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3666320 data_alloc: 234881024 data_used: 14249984
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8b8e000/0x0/0x4ffc00000, data 0x28e9b9f/0x2a80000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 67084288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 67084288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 67084288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 67084288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e8b8e000/0x0/0x4ffc00000, data 0x28e9b9f/0x2a80000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 67084288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3666320 data_alloc: 234881024 data_used: 14249984
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 67084288 heap: 395902976 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.285617828s of 17.535190582s, submitted: 85
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9538400 session 0x5597c9fa41e0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 78757888 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 78757888 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 78757888 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7d97000/0x0/0x4ffc00000, data 0x36e0b9f/0x3877000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 78757888 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3771938 data_alloc: 234881024 data_used: 14249984
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 78757888 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 78757888 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9538400 session 0x5597c9fad2c0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 78757888 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c8b54960
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c911da40
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7d97000/0x0/0x4ffc00000, data 0x36e0b9f/0x3877000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80e400 session 0x5597c8f4fe00
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 78757888 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 329211904 unmapped: 78757888 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3829579 data_alloc: 234881024 data_used: 19238912
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331464704 unmapped: 76505088 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331464704 unmapped: 76505088 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331464704 unmapped: 76505088 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331464704 unmapped: 76505088 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7d72000/0x0/0x4ffc00000, data 0x3704bc2/0x389c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331464704 unmapped: 76505088 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3839499 data_alloc: 234881024 data_used: 19476480
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331464704 unmapped: 76505088 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.400768280s of 15.509059906s, submitted: 20
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331464704 unmapped: 76505088 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331464704 unmapped: 76505088 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331464704 unmapped: 76505088 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7d72000/0x0/0x4ffc00000, data 0x3704bc2/0x389c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331464704 unmapped: 76505088 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3870475 data_alloc: 234881024 data_used: 19501056
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 333758464 unmapped: 74211328 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334495744 unmapped: 73474048 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e760e000/0x0/0x4ffc00000, data 0x3e68bc2/0x4000000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e760e000/0x0/0x4ffc00000, data 0x3e68bc2/0x4000000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334512128 unmapped: 73457664 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 73072640 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 73072640 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3917109 data_alloc: 234881024 data_used: 20856832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 45K writes, 176K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.04 MB/s#012Cumulative WAL: 45K writes, 16K syncs, 2.71 writes per sync, written: 0.17 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2838 writes, 11K keys, 2838 commit groups, 1.0 writes per commit group, ingest: 12.66 MB, 0.02 MB/s#012Interval WAL: 2838 writes, 1140 syncs, 2.49 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 73072640 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7581000/0x0/0x4ffc00000, data 0x3ef5bc2/0x408d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 73072640 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7581000/0x0/0x4ffc00000, data 0x3ef5bc2/0x408d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 73072640 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7581000/0x0/0x4ffc00000, data 0x3ef5bc2/0x408d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 73072640 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.800569534s of 12.333517075s, submitted: 84
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 73072640 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3911673 data_alloc: 234881024 data_used: 20856832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7560000/0x0/0x4ffc00000, data 0x3f16bc2/0x40ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3911673 data_alloc: 234881024 data_used: 20856832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7560000/0x0/0x4ffc00000, data 0x3f16bc2/0x40ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3911673 data_alloc: 234881024 data_used: 20856832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 73064448 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7560000/0x0/0x4ffc00000, data 0x3f16bc2/0x40ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334913536 unmapped: 73056256 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334913536 unmapped: 73056256 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334913536 unmapped: 73056256 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3911673 data_alloc: 234881024 data_used: 20856832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e7560000/0x0/0x4ffc00000, data 0x3f16bc2/0x40ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334913536 unmapped: 73056256 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334913536 unmapped: 73056256 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80f800 session 0x5597c9394d20
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597caeeb400 session 0x5597c9df1e00
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.377149582s of 18.387834549s, submitted: 2
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c8da7680
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334921728 unmapped: 73048064 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334921728 unmapped: 73048064 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334921728 unmapped: 73048064 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3643825 data_alloc: 218103808 data_used: 9093120
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334929920 unmapped: 73039872 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e885d000/0x0/0x4ffc00000, data 0x28eab9f/0x2a81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2000 session 0x5597c705bc20
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6f800 session 0x5597c9f8b2c0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9538400 session 0x5597c90583c0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491469 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a29000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491469 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a29000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491469 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9a29000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332382208 unmapped: 75587584 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332390400 unmapped: 75579392 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332390400 unmapped: 75579392 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332390400 unmapped: 75579392 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491469 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332390400 unmapped: 75579392 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c93941e0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9538400 session 0x5597c8bab680
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6f800 session 0x5597c8edc000
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2000 session 0x5597c9fad860
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.921491623s of 24.220367432s, submitted: 89
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597caeeb400 session 0x5597c8d70780
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c91d7860
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9538400 session 0x5597c8edc780
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6f800 session 0x5597c976af00
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca9b2000 session 0x5597c9fad860
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332398592 unmapped: 75571200 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e94fb000/0x0/0x4ffc00000, data 0x1f7cb8c/0x2113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332398592 unmapped: 75571200 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597ca80b000 session 0x5597c93941e0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332414976 unmapped: 75554816 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332414976 unmapped: 75554816 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544403 data_alloc: 218103808 data_used: 4796416
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 332414976 unmapped: 75554816 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334086144 unmapped: 73883648 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c70cac00 session 0x5597c8da7680
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9538400 session 0x5597c8baa3c0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e94fb000/0x0/0x4ffc00000, data 0x1f7cb8c/0x2113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331317248 unmapped: 76652544 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 ms_handle_reset con 0x5597c9f6f800 session 0x5597c8ec6b40
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.342603683s of 18.682430267s, submitted: 72
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331341824 unmapped: 76627968 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331374592 unmapped: 76595200 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331399168 unmapped: 76570624 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331399168 unmapped: 76570624 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [1])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331399168 unmapped: 76570624 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331399168 unmapped: 76570624 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331399168 unmapped: 76570624 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331399168 unmapped: 76570624 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331399168 unmapped: 76570624 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331399168 unmapped: 76570624 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331407360 unmapped: 76562432 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331407360 unmapped: 76562432 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331407360 unmapped: 76562432 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331407360 unmapped: 76562432 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331407360 unmapped: 76562432 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331407360 unmapped: 76562432 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331407360 unmapped: 76562432 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331407360 unmapped: 76562432 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331415552 unmapped: 76554240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331415552 unmapped: 76554240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331415552 unmapped: 76554240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331415552 unmapped: 76554240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331415552 unmapped: 76554240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331415552 unmapped: 76554240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 76546048 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 76546048 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 76546048 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 76546048 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 76546048 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 76546048 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 76546048 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 76546048 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331431936 unmapped: 76537856 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331431936 unmapped: 76537856 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331431936 unmapped: 76537856 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331431936 unmapped: 76537856 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331431936 unmapped: 76537856 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331440128 unmapped: 76529664 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331440128 unmapped: 76529664 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331440128 unmapped: 76529664 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331440128 unmapped: 76529664 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331440128 unmapped: 76529664 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331448320 unmapped: 76521472 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e961a000/0x0/0x4ffc00000, data 0x1a4eb7c/0x1be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331448320 unmapped: 76521472 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331448320 unmapped: 76521472 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498793 data_alloc: 218103808 data_used: 4210688
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331448320 unmapped: 76521472 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331448320 unmapped: 76521472 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 47.002799988s of 47.298183441s, submitted: 90
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 292 handle_osd_map epochs [292,293], i have 292, src has [1,293]
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331456512 unmapped: 76513280 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 293 ms_handle_reset con 0x5597ca9b2000 session 0x5597c9fbb680
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331456512 unmapped: 76513280 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 293 heartbeat osd_stat(store_statfs(0x4e9618000/0x0/0x4ffc00000, data 0x1a50619/0x1be5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331472896 unmapped: 76496896 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3500843 data_alloc: 218103808 data_used: 4218880
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331472896 unmapped: 76496896 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331472896 unmapped: 76496896 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331472896 unmapped: 76496896 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 293 heartbeat osd_stat(store_statfs(0x4e9618000/0x0/0x4ffc00000, data 0x1a50619/0x1be5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331472896 unmapped: 76496896 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331481088 unmapped: 76488704 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331481088 unmapped: 76488704 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331489280 unmapped: 76480512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331489280 unmapped: 76480512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331489280 unmapped: 76480512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331489280 unmapped: 76480512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331489280 unmapped: 76480512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331489280 unmapped: 76480512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331489280 unmapped: 76480512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331489280 unmapped: 76480512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331497472 unmapped: 76472320 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331497472 unmapped: 76472320 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331497472 unmapped: 76472320 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331497472 unmapped: 76472320 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331497472 unmapped: 76472320 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331497472 unmapped: 76472320 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331497472 unmapped: 76472320 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331505664 unmapped: 76464128 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331505664 unmapped: 76464128 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331505664 unmapped: 76464128 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331505664 unmapped: 76464128 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331505664 unmapped: 76464128 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331513856 unmapped: 76455936 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331513856 unmapped: 76455936 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331513856 unmapped: 76455936 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331513856 unmapped: 76455936 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331522048 unmapped: 76447744 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331522048 unmapped: 76447744 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331522048 unmapped: 76447744 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331522048 unmapped: 76447744 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331522048 unmapped: 76447744 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331530240 unmapped: 76439552 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331530240 unmapped: 76439552 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331530240 unmapped: 76439552 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331530240 unmapped: 76439552 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331530240 unmapped: 76439552 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331538432 unmapped: 76431360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331538432 unmapped: 76431360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331538432 unmapped: 76431360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331538432 unmapped: 76431360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331538432 unmapped: 76431360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331538432 unmapped: 76431360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331538432 unmapped: 76431360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331538432 unmapped: 76431360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331538432 unmapped: 76431360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331546624 unmapped: 76423168 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331546624 unmapped: 76423168 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331546624 unmapped: 76423168 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331546624 unmapped: 76423168 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331546624 unmapped: 76423168 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331546624 unmapped: 76423168 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331554816 unmapped: 76414976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331554816 unmapped: 76414976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331554816 unmapped: 76414976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331554816 unmapped: 76414976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331563008 unmapped: 76406784 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331563008 unmapped: 76406784 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331563008 unmapped: 76406784 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331563008 unmapped: 76406784 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331563008 unmapped: 76406784 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331563008 unmapped: 76406784 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331571200 unmapped: 76398592 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331571200 unmapped: 76398592 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331571200 unmapped: 76398592 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331571200 unmapped: 76398592 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331571200 unmapped: 76398592 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331571200 unmapped: 76398592 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331571200 unmapped: 76398592 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331579392 unmapped: 76390400 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331579392 unmapped: 76390400 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331579392 unmapped: 76390400 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331587584 unmapped: 76382208 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331587584 unmapped: 76382208 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331587584 unmapped: 76382208 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331587584 unmapped: 76382208 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331587584 unmapped: 76382208 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331595776 unmapped: 76374016 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331595776 unmapped: 76374016 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331595776 unmapped: 76374016 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331595776 unmapped: 76374016 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331595776 unmapped: 76374016 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331595776 unmapped: 76374016 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331595776 unmapped: 76374016 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331595776 unmapped: 76374016 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331603968 unmapped: 76365824 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331603968 unmapped: 76365824 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331603968 unmapped: 76365824 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331603968 unmapped: 76365824 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331603968 unmapped: 76365824 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331603968 unmapped: 76365824 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331612160 unmapped: 76357632 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331612160 unmapped: 76357632 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331612160 unmapped: 76357632 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331612160 unmapped: 76357632 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331620352 unmapped: 76349440 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331620352 unmapped: 76349440 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331620352 unmapped: 76349440 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331620352 unmapped: 76349440 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331620352 unmapped: 76349440 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331620352 unmapped: 76349440 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331620352 unmapped: 76349440 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331628544 unmapped: 76341248 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331628544 unmapped: 76341248 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331628544 unmapped: 76341248 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331628544 unmapped: 76341248 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331636736 unmapped: 76333056 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504841 data_alloc: 218103808 data_used: 4227072
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 331636736 unmapped: 76333056 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 294 ms_handle_reset con 0x5597ca80e400 session 0x5597c9322d20
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337690624 unmapped: 70279168 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 294 ms_handle_reset con 0x5597c70cac00 session 0x5597c8d71a40
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337690624 unmapped: 70279168 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337690624 unmapped: 70279168 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337698816 unmapped: 70270976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517001 data_alloc: 218103808 data_used: 11038720
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337698816 unmapped: 70270976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337698816 unmapped: 70270976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337698816 unmapped: 70270976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337698816 unmapped: 70270976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9615000/0x0/0x4ffc00000, data 0x1a5207c/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337698816 unmapped: 70270976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 294 handle_osd_map epochs [294,295], i have 294, src has [1,295]
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 123.750770569s of 123.860015869s, submitted: 42
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3519975 data_alloc: 218103808 data_used: 11038720
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 295 ms_handle_reset con 0x5597c9538400 session 0x5597c9fa5c20
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334479360 unmapped: 73490432 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334479360 unmapped: 73490432 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334479360 unmapped: 73490432 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 296 ms_handle_reset con 0x5597c9f6f800 session 0x5597c8b54f00
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 73482240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9e10000/0x0/0x4ffc00000, data 0x12557fb/0x13ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 73482240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3312682 data_alloc: 218103808 data_used: 1077248
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 73482240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 73482240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 73482240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 73482240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 73482240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3312682 data_alloc: 218103808 data_used: 1077248
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 296 heartbeat osd_stat(store_statfs(0x4eae11000/0x0/0x4ffc00000, data 0x2557eb/0x3ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 296 ms_handle_reset con 0x5597ca80e400 session 0x5597c9177860
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.878767967s of 10.074717522s, submitted: 53
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 73482240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 73482240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 297 heartbeat osd_stat(store_statfs(0x4eae0e000/0x0/0x4ffc00000, data 0x25724e/0x3ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 73482240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 73482240 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 297 heartbeat osd_stat(store_statfs(0x4eae0e000/0x0/0x4ffc00000, data 0x25724e/0x3ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334495744 unmapped: 73474048 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 297 handle_osd_map epochs [297,298], i have 297, src has [1,298]
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 298 ms_handle_reset con 0x5597ca9b2000 session 0x5597c7975e00
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334503936 unmapped: 73465856 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334503936 unmapped: 73465856 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334503936 unmapped: 73465856 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334503936 unmapped: 73465856 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334503936 unmapped: 73465856 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334503936 unmapped: 73465856 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334503936 unmapped: 73465856 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334512128 unmapped: 73457664 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334512128 unmapped: 73457664 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334520320 unmapped: 73449472 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334520320 unmapped: 73449472 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334520320 unmapped: 73449472 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334520320 unmapped: 73449472 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334520320 unmapped: 73449472 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334520320 unmapped: 73449472 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334520320 unmapped: 73449472 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334528512 unmapped: 73441280 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334528512 unmapped: 73441280 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334528512 unmapped: 73441280 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334528512 unmapped: 73441280 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334528512 unmapped: 73441280 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334528512 unmapped: 73441280 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334536704 unmapped: 73433088 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334544896 unmapped: 73424896 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334553088 unmapped: 73416704 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334553088 unmapped: 73416704 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334553088 unmapped: 73416704 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334553088 unmapped: 73416704 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334553088 unmapped: 73416704 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334553088 unmapped: 73416704 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334561280 unmapped: 73408512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334561280 unmapped: 73408512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334561280 unmapped: 73408512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334561280 unmapped: 73408512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334561280 unmapped: 73408512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334561280 unmapped: 73408512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334561280 unmapped: 73408512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334561280 unmapped: 73408512 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334569472 unmapped: 73400320 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334569472 unmapped: 73400320 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334569472 unmapped: 73400320 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334569472 unmapped: 73400320 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334569472 unmapped: 73400320 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334577664 unmapped: 73392128 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334577664 unmapped: 73392128 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334577664 unmapped: 73392128 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334577664 unmapped: 73392128 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334585856 unmapped: 73383936 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334585856 unmapped: 73383936 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334594048 unmapped: 73375744 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334594048 unmapped: 73375744 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334594048 unmapped: 73375744 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334594048 unmapped: 73375744 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334594048 unmapped: 73375744 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 73367552 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 73367552 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 73367552 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334610432 unmapped: 73359360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334610432 unmapped: 73359360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334610432 unmapped: 73359360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334610432 unmapped: 73359360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334610432 unmapped: 73359360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334610432 unmapped: 73359360 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334618624 unmapped: 73351168 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334618624 unmapped: 73351168 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334618624 unmapped: 73351168 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334618624 unmapped: 73351168 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334626816 unmapped: 73342976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334626816 unmapped: 73342976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334626816 unmapped: 73342976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334626816 unmapped: 73342976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334626816 unmapped: 73342976 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334635008 unmapped: 73334784 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334635008 unmapped: 73334784 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334635008 unmapped: 73334784 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334635008 unmapped: 73334784 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334635008 unmapped: 73334784 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334635008 unmapped: 73334784 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334643200 unmapped: 73326592 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334643200 unmapped: 73326592 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334643200 unmapped: 73326592 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334643200 unmapped: 73326592 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334643200 unmapped: 73326592 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334651392 unmapped: 73318400 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334651392 unmapped: 73318400 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334651392 unmapped: 73318400 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334659584 unmapped: 73310208 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334659584 unmapped: 73310208 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334667776 unmapped: 73302016 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334667776 unmapped: 73302016 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334667776 unmapped: 73302016 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334667776 unmapped: 73302016 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334667776 unmapped: 73302016 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334667776 unmapped: 73302016 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334675968 unmapped: 73293824 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334675968 unmapped: 73293824 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334675968 unmapped: 73293824 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334675968 unmapped: 73293824 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334675968 unmapped: 73293824 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334684160 unmapped: 73285632 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334684160 unmapped: 73285632 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334692352 unmapped: 73277440 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334692352 unmapped: 73277440 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 298 heartbeat osd_stat(store_statfs(0x4eae0a000/0x0/0x4ffc00000, data 0x258ddb/0x3f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334700544 unmapped: 73269248 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334700544 unmapped: 73269248 heap: 407969792 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320444 data_alloc: 218103808 data_used: 1077248
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 110.781181335s of 110.807998657s, submitted: 17
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334716928 unmapped: 81649664 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 299 ms_handle_reset con 0x5597c8fbac00 session 0x5597c8ec7e00
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 334733312 unmapped: 81633280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335806464 unmapped: 80560128 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 300 ms_handle_reset con 0x5597c70cac00 session 0x5597c7184f00
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335822848 unmapped: 80543744 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9d21000/0x0/0x4ffc00000, data 0x133c507/0x14db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335831040 unmapped: 80535552 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3447775 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335831040 unmapped: 80535552 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335831040 unmapped: 80535552 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9d21000/0x0/0x4ffc00000, data 0x133c507/0x14db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335831040 unmapped: 80535552 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335831040 unmapped: 80535552 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335831040 unmapped: 80535552 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3447775 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335847424 unmapped: 80519168 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9d21000/0x0/0x4ffc00000, data 0x133c507/0x14db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9d21000/0x0/0x4ffc00000, data 0x133c507/0x14db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335847424 unmapped: 80519168 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335847424 unmapped: 80519168 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335855616 unmapped: 80510976 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335855616 unmapped: 80510976 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3447775 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9d21000/0x0/0x4ffc00000, data 0x133c507/0x14db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335855616 unmapped: 80510976 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335855616 unmapped: 80510976 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335855616 unmapped: 80510976 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9d21000/0x0/0x4ffc00000, data 0x133c507/0x14db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335855616 unmapped: 80510976 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335855616 unmapped: 80510976 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3447775 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335863808 unmapped: 80502784 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335863808 unmapped: 80502784 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335863808 unmapped: 80502784 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9d21000/0x0/0x4ffc00000, data 0x133c507/0x14db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335872000 unmapped: 80494592 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335872000 unmapped: 80494592 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3447775 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335872000 unmapped: 80494592 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335880192 unmapped: 80486400 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335880192 unmapped: 80486400 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9d21000/0x0/0x4ffc00000, data 0x133c507/0x14db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335880192 unmapped: 80486400 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335880192 unmapped: 80486400 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3447935 data_alloc: 218103808 data_used: 1089536
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 300 handle_osd_map epochs [300,301], i have 300, src has [1,301]
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.965335846s of 30.084932327s, submitted: 23
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335896576 unmapped: 80470016 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 301 ms_handle_reset con 0x5597c8fbac00 session 0x5597c9fad2c0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335896576 unmapped: 80470016 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e9d20000/0x0/0x4ffc00000, data 0x133e0c8/0x14dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335896576 unmapped: 80470016 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335896576 unmapped: 80470016 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335896576 unmapped: 80470016 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3448852 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335896576 unmapped: 80470016 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335912960 unmapped: 80453632 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e9d20000/0x0/0x4ffc00000, data 0x133e0c8/0x14dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335912960 unmapped: 80453632 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335929344 unmapped: 80437248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335929344 unmapped: 80437248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335929344 unmapped: 80437248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335929344 unmapped: 80437248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335929344 unmapped: 80437248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335929344 unmapped: 80437248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335937536 unmapped: 80429056 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335937536 unmapped: 80429056 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335945728 unmapped: 80420864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335953920 unmapped: 80412672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335953920 unmapped: 80412672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335953920 unmapped: 80412672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335953920 unmapped: 80412672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335953920 unmapped: 80412672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335953920 unmapped: 80412672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335962112 unmapped: 80404480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335962112 unmapped: 80404480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335962112 unmapped: 80404480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335962112 unmapped: 80404480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335962112 unmapped: 80404480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335962112 unmapped: 80404480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335962112 unmapped: 80404480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335978496 unmapped: 80388096 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335978496 unmapped: 80388096 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335978496 unmapped: 80388096 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335978496 unmapped: 80388096 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335978496 unmapped: 80388096 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335978496 unmapped: 80388096 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335986688 unmapped: 80379904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335986688 unmapped: 80379904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335986688 unmapped: 80379904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335986688 unmapped: 80379904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335986688 unmapped: 80379904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335986688 unmapped: 80379904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335986688 unmapped: 80379904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 335994880 unmapped: 80371712 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336011264 unmapped: 80355328 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336011264 unmapped: 80355328 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336011264 unmapped: 80355328 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336011264 unmapped: 80355328 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336011264 unmapped: 80355328 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336019456 unmapped: 80347136 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336019456 unmapped: 80347136 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336019456 unmapped: 80347136 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336019456 unmapped: 80347136 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336019456 unmapped: 80347136 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336019456 unmapped: 80347136 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336019456 unmapped: 80347136 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336019456 unmapped: 80347136 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336035840 unmapped: 80330752 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336035840 unmapped: 80330752 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336035840 unmapped: 80330752 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336035840 unmapped: 80330752 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336035840 unmapped: 80330752 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336035840 unmapped: 80330752 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336035840 unmapped: 80330752 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336035840 unmapped: 80330752 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336044032 unmapped: 80322560 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336044032 unmapped: 80322560 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336044032 unmapped: 80322560 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336044032 unmapped: 80322560 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336044032 unmapped: 80322560 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 80314368 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 80314368 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 80314368 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336060416 unmapped: 80306176 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336060416 unmapped: 80306176 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336060416 unmapped: 80306176 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336060416 unmapped: 80306176 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336060416 unmapped: 80306176 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336068608 unmapped: 80297984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336076800 unmapped: 80289792 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336076800 unmapped: 80289792 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336076800 unmapped: 80289792 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336076800 unmapped: 80289792 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336076800 unmapped: 80289792 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336084992 unmapped: 80281600 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336084992 unmapped: 80281600 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336084992 unmapped: 80281600 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336084992 unmapped: 80281600 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336084992 unmapped: 80281600 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336093184 unmapped: 80273408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336093184 unmapped: 80273408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336093184 unmapped: 80273408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336093184 unmapped: 80273408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336093184 unmapped: 80273408 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336101376 unmapped: 80265216 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336101376 unmapped: 80265216 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336101376 unmapped: 80265216 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336109568 unmapped: 80257024 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336109568 unmapped: 80257024 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336109568 unmapped: 80257024 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336109568 unmapped: 80257024 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336109568 unmapped: 80257024 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336117760 unmapped: 80248832 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336117760 unmapped: 80248832 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336117760 unmapped: 80248832 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336125952 unmapped: 80240640 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336125952 unmapped: 80240640 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336125952 unmapped: 80240640 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336125952 unmapped: 80240640 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336125952 unmapped: 80240640 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336125952 unmapped: 80240640 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336134144 unmapped: 80232448 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336150528 unmapped: 80216064 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336150528 unmapped: 80216064 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336150528 unmapped: 80216064 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336150528 unmapped: 80216064 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336150528 unmapped: 80216064 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336150528 unmapped: 80216064 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336158720 unmapped: 80207872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336158720 unmapped: 80207872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336158720 unmapped: 80207872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 80199680 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 80199680 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 80199680 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 80199680 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 80199680 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 80199680 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 80199680 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 80199680 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336175104 unmapped: 80191488 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336175104 unmapped: 80191488 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336175104 unmapped: 80191488 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336175104 unmapped: 80191488 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336175104 unmapped: 80191488 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336191488 unmapped: 80175104 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-mon[74249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Oct 14 06:05:50 np0005486808 ceph-mon[74249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/578665092' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336191488 unmapped: 80175104 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336191488 unmapped: 80175104 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336191488 unmapped: 80175104 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336191488 unmapped: 80175104 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336199680 unmapped: 80166912 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336199680 unmapped: 80166912 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336199680 unmapped: 80166912 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336199680 unmapped: 80166912 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336199680 unmapped: 80166912 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336207872 unmapped: 80158720 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336207872 unmapped: 80158720 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336207872 unmapped: 80158720 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336216064 unmapped: 80150528 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336216064 unmapped: 80150528 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336216064 unmapped: 80150528 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336224256 unmapped: 80142336 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336224256 unmapped: 80142336 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336224256 unmapped: 80142336 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336232448 unmapped: 80134144 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336232448 unmapped: 80134144 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336232448 unmapped: 80134144 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336232448 unmapped: 80134144 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336232448 unmapped: 80134144 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336240640 unmapped: 80125952 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336248832 unmapped: 80117760 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336248832 unmapped: 80117760 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336257024 unmapped: 80109568 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336257024 unmapped: 80109568 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336257024 unmapped: 80109568 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336257024 unmapped: 80109568 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336257024 unmapped: 80109568 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336257024 unmapped: 80109568 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336257024 unmapped: 80109568 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336265216 unmapped: 80101376 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336265216 unmapped: 80101376 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336265216 unmapped: 80101376 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336265216 unmapped: 80101376 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336265216 unmapped: 80101376 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336265216 unmapped: 80101376 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336273408 unmapped: 80093184 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336281600 unmapped: 80084992 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336281600 unmapped: 80084992 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336281600 unmapped: 80084992 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336281600 unmapped: 80084992 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336281600 unmapped: 80084992 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336281600 unmapped: 80084992 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336289792 unmapped: 80076800 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336289792 unmapped: 80076800 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336289792 unmapped: 80076800 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336289792 unmapped: 80076800 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336289792 unmapped: 80076800 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336289792 unmapped: 80076800 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336297984 unmapped: 80068608 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336297984 unmapped: 80068608 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336306176 unmapped: 80060416 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336314368 unmapped: 80052224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336314368 unmapped: 80052224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336314368 unmapped: 80052224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336314368 unmapped: 80052224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336314368 unmapped: 80052224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336314368 unmapped: 80052224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336314368 unmapped: 80052224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336322560 unmapped: 80044032 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336322560 unmapped: 80044032 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336322560 unmapped: 80044032 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336322560 unmapped: 80044032 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336322560 unmapped: 80044032 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336330752 unmapped: 80035840 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336330752 unmapped: 80035840 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336330752 unmapped: 80035840 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336338944 unmapped: 80027648 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336338944 unmapped: 80027648 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336338944 unmapped: 80027648 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336347136 unmapped: 80019456 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336347136 unmapped: 80019456 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336347136 unmapped: 80019456 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336347136 unmapped: 80019456 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336347136 unmapped: 80019456 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 46K writes, 178K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s#012Cumulative WAL: 46K writes, 17K syncs, 2.70 writes per sync, written: 0.17 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 762 writes, 2197 keys, 762 commit groups, 1.0 writes per commit group, ingest: 0.90 MB, 0.00 MB/s#012Interval WAL: 762 writes, 346 syncs, 2.20 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336355328 unmapped: 80011264 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336363520 unmapped: 80003072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336363520 unmapped: 80003072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336363520 unmapped: 80003072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 ms_handle_reset con 0x5597c91cb000 session 0x5597c91d6780
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 79994880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: mgrc ms_handle_reset ms_handle_reset con 0x5597c9fec000
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3625056923
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3625056923,v1:192.168.122.100:6801/3625056923]
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: mgrc handle_mgr_configure stats_period=5
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 79994880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 ms_handle_reset con 0x5597cc6e8400 session 0x5597c6c22b40
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 ms_handle_reset con 0x5597c8db4400 session 0x5597c9df0780
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 79994880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 79994880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 79994880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 79994880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 79994880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 79994880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 79994880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 79994880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 79994880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 79994880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 79994880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336379904 unmapped: 79986688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336379904 unmapped: 79986688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336379904 unmapped: 79986688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336388096 unmapped: 79978496 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336396288 unmapped: 79970304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336396288 unmapped: 79970304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336396288 unmapped: 79970304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336404480 unmapped: 79962112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336404480 unmapped: 79962112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336404480 unmapped: 79962112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336404480 unmapped: 79962112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336404480 unmapped: 79962112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336412672 unmapped: 79953920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336412672 unmapped: 79953920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336412672 unmapped: 79953920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336412672 unmapped: 79953920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336412672 unmapped: 79953920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336412672 unmapped: 79953920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336412672 unmapped: 79953920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336420864 unmapped: 79945728 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336420864 unmapped: 79945728 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336420864 unmapped: 79945728 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336420864 unmapped: 79945728 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336420864 unmapped: 79945728 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336429056 unmapped: 79937536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336429056 unmapped: 79937536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336437248 unmapped: 79929344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336437248 unmapped: 79929344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336437248 unmapped: 79929344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336445440 unmapped: 79921152 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336445440 unmapped: 79921152 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336445440 unmapped: 79921152 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336445440 unmapped: 79921152 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336445440 unmapped: 79921152 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336453632 unmapped: 79912960 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336453632 unmapped: 79912960 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336453632 unmapped: 79912960 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336453632 unmapped: 79912960 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336453632 unmapped: 79912960 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336470016 unmapped: 79896576 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336470016 unmapped: 79896576 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336470016 unmapped: 79896576 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336478208 unmapped: 79888384 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336486400 unmapped: 79880192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336486400 unmapped: 79880192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451826 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336486400 unmapped: 79880192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1d000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336494592 unmapped: 79872000 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 277.959259033s of 278.040557861s, submitted: 28
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336494592 unmapped: 79872000 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336543744 unmapped: 79822848 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336568320 unmapped: 79798272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336568320 unmapped: 79798272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336568320 unmapped: 79798272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336568320 unmapped: 79798272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336568320 unmapped: 79798272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336568320 unmapped: 79798272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336568320 unmapped: 79798272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336568320 unmapped: 79798272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336568320 unmapped: 79798272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336568320 unmapped: 79798272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336568320 unmapped: 79798272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336576512 unmapped: 79790080 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336576512 unmapped: 79790080 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336576512 unmapped: 79790080 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336576512 unmapped: 79790080 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336584704 unmapped: 79781888 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336584704 unmapped: 79781888 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336584704 unmapped: 79781888 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336584704 unmapped: 79781888 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336584704 unmapped: 79781888 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336584704 unmapped: 79781888 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336584704 unmapped: 79781888 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336592896 unmapped: 79773696 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336592896 unmapped: 79773696 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336592896 unmapped: 79773696 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336592896 unmapped: 79773696 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336592896 unmapped: 79773696 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336592896 unmapped: 79773696 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336592896 unmapped: 79773696 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336592896 unmapped: 79773696 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336601088 unmapped: 79765504 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336601088 unmapped: 79765504 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336609280 unmapped: 79757312 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336609280 unmapped: 79757312 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336609280 unmapped: 79757312 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336609280 unmapped: 79757312 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336609280 unmapped: 79757312 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336609280 unmapped: 79757312 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336617472 unmapped: 79749120 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336617472 unmapped: 79749120 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336617472 unmapped: 79749120 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336625664 unmapped: 79740928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336625664 unmapped: 79740928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336625664 unmapped: 79740928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336625664 unmapped: 79740928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336625664 unmapped: 79740928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336633856 unmapped: 79732736 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336633856 unmapped: 79732736 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336633856 unmapped: 79732736 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336633856 unmapped: 79732736 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336633856 unmapped: 79732736 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336633856 unmapped: 79732736 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336642048 unmapped: 79724544 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336642048 unmapped: 79724544 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336642048 unmapped: 79724544 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336642048 unmapped: 79724544 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336650240 unmapped: 79716352 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336650240 unmapped: 79716352 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336658432 unmapped: 79708160 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336658432 unmapped: 79708160 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336658432 unmapped: 79708160 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336658432 unmapped: 79708160 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336666624 unmapped: 79699968 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336666624 unmapped: 79699968 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336666624 unmapped: 79699968 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336666624 unmapped: 79699968 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336666624 unmapped: 79699968 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336674816 unmapped: 79691776 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336674816 unmapped: 79691776 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336674816 unmapped: 79691776 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336674816 unmapped: 79691776 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336683008 unmapped: 79683584 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336683008 unmapped: 79683584 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336683008 unmapped: 79683584 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336683008 unmapped: 79683584 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336683008 unmapped: 79683584 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336683008 unmapped: 79683584 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336683008 unmapped: 79683584 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336699392 unmapped: 79667200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336699392 unmapped: 79667200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336699392 unmapped: 79667200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336699392 unmapped: 79667200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336699392 unmapped: 79667200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336699392 unmapped: 79667200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336699392 unmapped: 79667200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336699392 unmapped: 79667200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336707584 unmapped: 79659008 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336707584 unmapped: 79659008 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336707584 unmapped: 79659008 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336707584 unmapped: 79659008 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336715776 unmapped: 79650816 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336715776 unmapped: 79650816 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336715776 unmapped: 79650816 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336715776 unmapped: 79650816 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336723968 unmapped: 79642624 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336732160 unmapped: 79634432 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336732160 unmapped: 79634432 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336732160 unmapped: 79634432 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336732160 unmapped: 79634432 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336732160 unmapped: 79634432 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336732160 unmapped: 79634432 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336732160 unmapped: 79634432 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336740352 unmapped: 79626240 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336740352 unmapped: 79626240 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336740352 unmapped: 79626240 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336740352 unmapped: 79626240 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336740352 unmapped: 79626240 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336740352 unmapped: 79626240 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336740352 unmapped: 79626240 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336740352 unmapped: 79626240 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336756736 unmapped: 79609856 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336756736 unmapped: 79609856 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336756736 unmapped: 79609856 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336764928 unmapped: 79601664 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336773120 unmapped: 79593472 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336773120 unmapped: 79593472 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336773120 unmapped: 79593472 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336773120 unmapped: 79593472 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336781312 unmapped: 79585280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336781312 unmapped: 79585280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336781312 unmapped: 79585280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336781312 unmapped: 79585280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336781312 unmapped: 79585280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336781312 unmapped: 79585280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336781312 unmapped: 79585280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336781312 unmapped: 79585280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336789504 unmapped: 79577088 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336789504 unmapped: 79577088 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336789504 unmapped: 79577088 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336789504 unmapped: 79577088 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336789504 unmapped: 79577088 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336805888 unmapped: 79560704 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336805888 unmapped: 79560704 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336805888 unmapped: 79560704 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336805888 unmapped: 79560704 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336805888 unmapped: 79560704 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336814080 unmapped: 79552512 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336814080 unmapped: 79552512 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336814080 unmapped: 79552512 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336814080 unmapped: 79552512 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336814080 unmapped: 79552512 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450946 data_alloc: 218103808 data_used: 1085440
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336830464 unmapped: 79536128 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336830464 unmapped: 79536128 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336838656 unmapped: 79527936 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9d1e000/0x0/0x4ffc00000, data 0x133fb2b/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 146.745452881s of 147.051895142s, submitted: 90
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336855040 unmapped: 79511552 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 303 ms_handle_reset con 0x5597ca9b2000 session 0x5597c705ad20
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336855040 unmapped: 79511552 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3370800 data_alloc: 218103808 data_used: 1093632
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 303 heartbeat osd_stat(store_statfs(0x4ea98b000/0x0/0x4ffc00000, data 0x6d16fc/0x873000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336855040 unmapped: 79511552 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 303 handle_osd_map epochs [303,304], i have 303, src has [1,304]
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336863232 unmapped: 79503360 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 304 ms_handle_reset con 0x5597c7e2d800 session 0x5597c8da63c0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 304 heartbeat osd_stat(store_statfs(0x4ea987000/0x0/0x4ffc00000, data 0x6d32cd/0x876000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336863232 unmapped: 79503360 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336871424 unmapped: 79495168 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336871424 unmapped: 79495168 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3342983 data_alloc: 218103808 data_used: 1101824
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 304 heartbeat osd_stat(store_statfs(0x4eadf9000/0x0/0x4ffc00000, data 0x26329b/0x404000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336871424 unmapped: 79495168 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 304 heartbeat osd_stat(store_statfs(0x4eadf9000/0x0/0x4ffc00000, data 0x26329b/0x404000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336904192 unmapped: 79462400 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 306 ms_handle_reset con 0x5597c9f6d800 session 0x5597c8d70f00
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9985000/0x0/0x4ffc00000, data 0x16d4d0e/0x1878000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9985000/0x0/0x4ffc00000, data 0x16d4d0e/0x1878000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336928768 unmapped: 79437824 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336928768 unmapped: 79437824 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336928768 unmapped: 79437824 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493113 data_alloc: 218103808 data_used: 1110016
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336928768 unmapped: 79437824 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336928768 unmapped: 79437824 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336928768 unmapped: 79437824 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336928768 unmapped: 79437824 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336928768 unmapped: 79437824 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493113 data_alloc: 218103808 data_used: 1110016
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336936960 unmapped: 79429632 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336936960 unmapped: 79429632 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336936960 unmapped: 79429632 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336936960 unmapped: 79429632 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336936960 unmapped: 79429632 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493113 data_alloc: 218103808 data_used: 1110016
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336945152 unmapped: 79421440 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336945152 unmapped: 79421440 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336945152 unmapped: 79421440 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336945152 unmapped: 79421440 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336945152 unmapped: 79421440 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493113 data_alloc: 218103808 data_used: 1110016
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336953344 unmapped: 79413248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336953344 unmapped: 79413248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336953344 unmapped: 79413248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336953344 unmapped: 79413248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336953344 unmapped: 79413248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493113 data_alloc: 218103808 data_used: 1110016
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336961536 unmapped: 79405056 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336961536 unmapped: 79405056 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336961536 unmapped: 79405056 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336969728 unmapped: 79396864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336969728 unmapped: 79396864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493113 data_alloc: 218103808 data_used: 1110016
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336969728 unmapped: 79396864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336969728 unmapped: 79396864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336969728 unmapped: 79396864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336969728 unmapped: 79396864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336977920 unmapped: 79388672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493113 data_alloc: 218103808 data_used: 1110016
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336977920 unmapped: 79388672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336977920 unmapped: 79388672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336977920 unmapped: 79388672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336986112 unmapped: 79380480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336986112 unmapped: 79380480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493113 data_alloc: 218103808 data_used: 1110016
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336994304 unmapped: 79372288 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 336994304 unmapped: 79372288 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337002496 unmapped: 79364096 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337002496 unmapped: 79364096 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337002496 unmapped: 79364096 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493113 data_alloc: 218103808 data_used: 1110016
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337002496 unmapped: 79364096 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337002496 unmapped: 79364096 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337018880 unmapped: 79347712 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337018880 unmapped: 79347712 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337027072 unmapped: 79339520 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493113 data_alloc: 218103808 data_used: 1110016
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337027072 unmapped: 79339520 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337027072 unmapped: 79339520 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337027072 unmapped: 79339520 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337035264 unmapped: 79331328 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337035264 unmapped: 79331328 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore.MempoolThread(0x5597c578db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3493113 data_alloc: 218103808 data_used: 1110016
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337035264 unmapped: 79331328 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337035264 unmapped: 79331328 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9981000/0x0/0x4ffc00000, data 0x16d688b/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 63.765888214s of 64.009460449s, submitted: 63
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 306 ms_handle_reset con 0x5597c7e2d800 session 0x5597c8ec65a0
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 306 handle_osd_map epochs [307,307], i have 307, src has [1,307]
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: osd.1 307 ms_handle_reset con 0x5597c70cac00 session 0x5597c8d71c20
Oct 14 06:05:50 np0005486808 ceph-osd[88375]: prioritycache tune_memory target: 4294967296 mapped: 337068032 unmapped: 79298560 heap: 416366592 old mem: 2845415832 new mem: 2845415832
